migration: only migrate_prep() once per move_pages()
migrate_prep() is fairly expensive (72us on a 16-core barcelona at 1.9GHz). Commit 3140a22730 improved move_pages() throughput by breaking it into chunks, but it also made migrate_prep() be called once per chunk (every 128 pages or so) instead of once per move_pages(). This patch reverts to calling migrate_prep() only once per move_pages(), as we did before 2.6.29. It is also a followup to commit 0aedadf91a ("mm: move migrate_prep out from under mmap_sem").

This improves migration throughput on the above machine from 600MB/s to 750MB/s.

Signed-off-by: Brice Goglin <Brice.Goglin@inria.fr>
Acked-by: Christoph Lameter <cl@linux-foundation.org>
Cc: KOSAKI Motohiro <kosaki.motohiro@jp.fujitsu.com>
Cc: Heiko Carstens <heiko.carstens@de.ibm.com>
Cc: Nick Piggin <nickpiggin@yahoo.com.au>
Cc: Hugh Dickins <hugh.dickins@tiscali.co.uk>
Cc: Rik van Riel <riel@redhat.com>
Cc: Lee Schermerhorn <lee.schermerhorn@hp.com>
Reviewed-by: KAMEZAWA Hiroyuki <kamezawa.hiroyu@jp.fujitsu.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
parent 7f33d49a2e
commit 35282a2de4
@@ -820,7 +820,6 @@ static int do_move_page_to_node_array(struct mm_struct *mm,
 	struct page_to_node *pp;
 	LIST_HEAD(pagelist);
 
-	migrate_prep();
 	down_read(&mm->mmap_sem);
 
 	/*
@@ -907,6 +906,9 @@ static int do_pages_move(struct mm_struct *mm, struct task_struct *task,
 	pm = (struct page_to_node *)__get_free_page(GFP_KERNEL);
 	if (!pm)
 		goto out;
+
+	migrate_prep();
+
 	/*
 	 * Store a chunk of page_to_node array in a page,
 	 * but keep the last one as a marker