mm: hugetlb: bail out unmapping after serving reference page

When unmapping a given VM range, we can bail out of the loop once a supplied
reference page has been unmapped, which is a minor optimization.

Signed-off-by: Hillf Danton <dhillf@gmail.com>
Cc: Michal Hocko <mhocko@suse.cz>
Cc: KAMEZAWA Hiroyuki <kamezawa.hiroyu@jp.fujitsu.com>
Cc: Hugh Dickins <hughd@google.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
This commit is contained in:
Hillf Danton 2012-03-21 16:34:03 -07:00 committed by Linus Torvalds
parent fcf4d8212a
commit 9e81130b7c

@@ -2280,6 +2280,10 @@ void __unmap_hugepage_range(struct vm_area_struct *vma, unsigned long start,
 		if (pte_dirty(pte))
 			set_page_dirty(page);
 		list_add(&page->lru, &page_list);
+
+		/* Bail out after unmapping reference page if supplied */
+		if (ref_page)
+			break;
 	}
 	flush_tlb_range(vma, start, end);
 	spin_unlock(&mm->page_table_lock);