mm: fix madvise WILLNEED performance problem

The calculation of the end page index was incorrect, leading to a
regression of 70% when running stress-ng.

With this fix, we instead see a performance improvement of 3%.
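
To illustrate the arithmetic (a minimal userspace sketch with made-up
example values, not code from this patch): the mapping's XArray is
indexed by page cache offset, which linear_page_index() derives from
the VMA, whereas "end / PAGE_SIZE" is the virtual page frame number of
the end address. For a typical mmap address the latter is vastly larger
than any index present in the file, so the readahead walk could run far
past the madvised range.

    /*
     * Userspace sketch only (assumed example values; not kernel code),
     * contrasting the buggy and fixed end_index calculations.
     */
    #include <stdio.h>

    #define PAGE_SHIFT 12
    #define PAGE_SIZE  (1UL << PAGE_SHIFT)

    struct vm_area_struct {
        unsigned long vm_start; /* first virtual address of the mapping */
        unsigned long vm_pgoff; /* offset into the file, in pages */
    };

    /* Same arithmetic as the kernel's linear_page_index() helper,
     * minus the hugetlb special case. */
    static unsigned long linear_page_index(struct vm_area_struct *vma,
                                           unsigned long address)
    {
        return ((address - vma->vm_start) >> PAGE_SHIFT) + vma->vm_pgoff;
    }

    int main(void)
    {
        /* A 16-page mapping at a typical mmap address. */
        struct vm_area_struct vma = {
            .vm_start = 0x7f2a00000000UL,
            .vm_pgoff = 0,
        };
        unsigned long end = vma.vm_start + 16 * PAGE_SIZE;

        /* Buggy: a virtual page frame number, far beyond any
         * index actually present in the file's XArray. */
        printf("end / PAGE_SIZE: %lu\n", end / PAGE_SIZE);

        /* Fixed: the page cache index covering the end of the
         * madvised range (prints 16 here). */
        printf("linear_page_index(vma, end + PAGE_SIZE - 1): %lu\n",
               linear_page_index(&vma, end + PAGE_SIZE - 1));
        return 0;
    }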

Fixes: e6e88712e4 ("mm: optimise madvise WILLNEED")
Reported-by: kernel test robot <rong.a.chen@intel.com>
Signed-off-by: Matthew Wilcox (Oracle) <willy@infradead.org>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Tested-by: Xing Zhengjun <zhengjun.xing@linux.intel.com>
Acked-by: Johannes Weiner <hannes@cmpxchg.org>
Cc: William Kucharski <william.kucharski@oracle.com>
Cc: Feng Tang <feng.tang@intel.com>
Cc: "Chen, Rong A" <rong.a.chen@intel.com>
Link: https://lkml.kernel.org/r/20201109134851.29692-1-willy@infradead.org
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
commit 66383800df (parent 488dac0c92)
Author:    Matthew Wilcox (Oracle) <willy@infradead.org>
Date:      2020-11-21 22:17:22 -08:00
Committer: Linus Torvalds <torvalds@linux-foundation.org>


@@ -226,7 +226,7 @@ static void force_shm_swapin_readahead(struct vm_area_struct *vma,
 		struct address_space *mapping)
 {
 	XA_STATE(xas, &mapping->i_pages, linear_page_index(vma, start));
-	pgoff_t end_index = end / PAGE_SIZE;
+	pgoff_t end_index = linear_page_index(vma, end + PAGE_SIZE - 1);
 	struct page *page;
 
 	rcu_read_lock();
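
Note that linear_page_index() folds in vma->vm_pgoff and vma->vm_start,
so end_index now uses the same page cache indexing as the XA_STATE()
initializer above; the "+ PAGE_SIZE - 1" term appears to round up so
that the walk covers the page containing the last byte of the range.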