mm, THP: don't hold mmap_sem in khugepaged when allocating THP
When allocating a huge page for collapsing, khugepaged currently holds
mmap_sem for reading on the mm where collapsing occurs. Afterwards the
read lock is dropped before the write lock is taken on the same mmap_sem.

Holding mmap_sem during the whole huge page allocation is therefore useless;
the vma needs to be rechecked after taking the write lock anyway.
Furthermore, huge page allocation might involve a rather long sync
compaction, and thus block any mmap_sem writers, affecting workloads
that perform frequent m(un)map or mprotect operations.
This patch simply releases the read lock before allocating a huge page.
It also deletes an outdated comment that assumed the vma must be stable
because the allocation used alloc_hugepage_vma(). This has not been true
since commit 9f1b868a13 ("mm: thp: khugepaged: add policy for finding
target node").
Signed-off-by: Vlastimil Babka <vbabka@suse.cz>
Cc: Minchan Kim <minchan@kernel.org>
Acked-by: Mel Gorman <mgorman@suse.de>
Cc: Joonsoo Kim <iamjoonsoo.kim@lge.com>
Cc: Michal Nazarewicz <mina86@mina86.com>
Cc: Naoya Horiguchi <n-horiguchi@ah.jp.nec.com>
Cc: Christoph Lameter <cl@linux.com>
Cc: Rik van Riel <riel@redhat.com>
Acked-by: David Rientjes <rientjes@google.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
commit 8b1645685a
parent 447f05bb48
--- a/mm/huge_memory.c
+++ b/mm/huge_memory.c
@@ -2322,23 +2322,17 @@ static struct page
 		       int node)
 {
 	VM_BUG_ON_PAGE(*hpage, *hpage);
-	/*
-	 * Allocate the page while the vma is still valid and under
-	 * the mmap_sem read mode so there is no memory allocation
-	 * later when we take the mmap_sem in write mode. This is more
-	 * friendly behavior (OTOH it may actually hide bugs) to
-	 * filesystems in userland with daemons allocating memory in
-	 * the userland I/O paths.  Allocating memory with the
-	 * mmap_sem in read mode is good idea also to allow greater
-	 * scalability.
-	 */
-	*hpage = alloc_pages_exact_node(node, alloc_hugepage_gfpmask(
-		khugepaged_defrag(), __GFP_OTHER_NODE), HPAGE_PMD_ORDER);
+
 	/*
-	 * After allocating the hugepage, release the mmap_sem read lock in
-	 * preparation for taking it in write mode.
+	 * Before allocating the hugepage, release the mmap_sem read lock.
+	 * The allocation can take potentially a long time if it involves
+	 * sync compaction, and we do not need to hold the mmap_sem during
+	 * that. We will recheck the vma after taking it again in write mode.
 	 */
 	up_read(&mm->mmap_sem);
+
+	*hpage = alloc_pages_exact_node(node, alloc_hugepage_gfpmask(
+		khugepaged_defrag(), __GFP_OTHER_NODE), HPAGE_PMD_ORDER);
 	if (unlikely(!*hpage)) {
 		count_vm_event(THP_COLLAPSE_ALLOC_FAILED);
 		*hpage = ERR_PTR(-ENOMEM);
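
A note on the design choice: once the read lock is dropped, the vma can be
unmapped or replaced while the allocation runs, so correctness relies entirely
on the recheck that the collapse path already performs after taking mmap_sem
in write mode; the patch adds no new validation, it only stops holding the
lock across the allocation. The cost is that the allocation effort may be
wasted if the vma is gone on recheck, which is a fair trade against stalling
every mmap_sem writer behind sync compaction.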