mm: compaction: validate pfn range passed to isolate_freepages_block

Commit 0bf380bc70 ("mm: compaction: check pfn_valid when entering a
new MAX_ORDER_NR_PAGES block during isolation for migration") added a
check for pfn_valid() when isolating pages for migration as the scanner
does not necessarily start pageblock-aligned.

Since commit c89511ab2f ("mm: compaction: Restart compaction from near
where it left off"), the free scanner has the same problem.  This patch
makes sure that the pfn range passed to isolate_freepages_block() is
within the same block so that pfn_valid() checks are unnecessary.
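
As a rough illustration of the new bound, a minimal userspace sketch
(assuming the usual x86 values of 4K pages and pageblock_nr_pages == 512;
ALIGN() and min() are reimplemented locally rather than taken from the
kernel headers):

  /* end_pfn_demo.c - standalone sketch, not kernel code */
  #include <stdio.h>

  #define pageblock_nr_pages      512UL   /* 2M pageblocks with 4K pages (assumed) */
  #define MAX_ORDER_NR_PAGES      1024UL  /* 4M MAX_ORDER blocks (assumed) */
  #define ALIGN(x, a)             (((x) + (a) - 1) & ~((a) - 1))
  #define min(a, b)               ((a) < (b) ? (a) : (b))

  int main(void)
  {
          unsigned long zone_end_pfn = 1UL << 20; /* zone ends well past the hole */
          unsigned long pfn = 1000;               /* scanner restarted mid-pageblock */
          unsigned long end_pfn;

          /* Old bound: may cross the MAX_ORDER_NR_PAGES boundary at pfn 1024 */
          end_pfn = min(pfn + pageblock_nr_pages, zone_end_pfn);
          printf("old end_pfn = %lu\n", end_pfn);         /* 1512 */

          /* New bound: never leaves the pageblock containing pfn */
          end_pfn = ALIGN(pfn + 1, pageblock_nr_pages);
          end_pfn = min(end_pfn, zone_end_pfn);
          printf("new end_pfn = %lu\n", end_pfn);         /* 1024 */

          return 0;
  }

With pfn == 1000 the old bound is 1512, which crosses the
MAX_ORDER_NR_PAGES boundary at 1024; the new bound stops at 1024, the
end of the pageblock containing pfn.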

In answer to Henrik's wondering why others have not reported this:
reproducing this requires a large enough hole with the right alignment to
have compaction walk into a PFN range with no memmap.  Size and
alignment depend on the memory model - 4M for FLATMEM and 128M for
SPARSEMEM on x86.  It needs a "lucky" machine.
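
To make those figures concrete, a small arithmetic sketch (assumed x86
values: PAGE_SHIFT == 12, MAX_ORDER == 11, SECTION_SIZE_BITS == 27, plus
the assumption that the 4M figure corresponds to MAX_ORDER_NR_PAGES and
the 128M figure to a SPARSEMEM section; crosses_boundary() is a
hypothetical helper for illustration only):

  /* granule_demo.c - standalone arithmetic sketch, not kernel code */
  #include <stdbool.h>
  #include <stdio.h>

  #define PAGE_SHIFT              12      /* 4K pages (assumed) */
  #define MAX_ORDER               11      /* assumed x86 default */
  #define SECTION_SIZE_BITS       27      /* 128M sections on x86 SPARSEMEM (assumed) */

  #define MAX_ORDER_NR_PAGES      (1UL << (MAX_ORDER - 1))                  /*  1024 pfns =   4M */
  #define PAGES_PER_SECTION       (1UL << (SECTION_SIZE_BITS - PAGE_SHIFT)) /* 32768 pfns = 128M */
  #define pageblock_nr_pages      512UL

  /* Hypothetical helper: does [pfn, pfn + len) cross an aligned granule boundary? */
  static bool crosses_boundary(unsigned long pfn, unsigned long len,
                               unsigned long granule)
  {
          return pfn / granule != (pfn + len - 1) / granule;
  }

  int main(void)
  {
          unsigned long pfn = 1000;       /* scanner position, not pageblock-aligned */

          printf("4M (FLATMEM) figure:    %5lu pfns (%3lu MB)\n",
                 MAX_ORDER_NR_PAGES, MAX_ORDER_NR_PAGES >> (20 - PAGE_SHIFT));
          printf("128M (SPARSEMEM) figure: %5lu pfns (%3lu MB)\n",
                 PAGES_PER_SECTION, PAGES_PER_SECTION >> (20 - PAGE_SHIFT));
          printf("[%lu, %lu) crosses a MAX_ORDER boundary: %s\n",
                 pfn, pfn + pageblock_nr_pages,
                 crosses_boundary(pfn, pageblock_nr_pages,
                                  MAX_ORDER_NR_PAGES) ? "yes" : "no");
          return 0;
  }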

Reported-by: Henrik Rydberg <rydberg@euromail.se>
Signed-off-by: Mel Gorman <mgorman@suse.de>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
commit 60177d31d2
parent 04c5decdc0
Author:    Mel Gorman <mgorman@suse.de>
Date:      2012-12-06 19:01:14 +00:00
Committer: Linus Torvalds <torvalds@linux-foundation.org>


--- a/mm/compaction.c
+++ b/mm/compaction.c
@@ -713,7 +713,15 @@ static void isolate_freepages(struct zone *zone,
 
 		/* Found a block suitable for isolating free pages from */
 		isolated = 0;
-		end_pfn = min(pfn + pageblock_nr_pages, zone_end_pfn);
+
+		/*
+		 * As pfn may not start aligned, pfn+pageblock_nr_page
+		 * may cross a MAX_ORDER_NR_PAGES boundary and miss
+		 * a pfn_valid check. Ensure isolate_freepages_block()
+		 * only scans within a pageblock
+		 */
+		end_pfn = ALIGN(pfn + 1, pageblock_nr_pages);
+		end_pfn = min(end_pfn, zone_end_pfn);
 		isolated = isolate_freepages_block(cc, pfn, end_pfn,
 						   freelist, false);
 		nr_freepages += isolated;