linux/mm
Vlastimil Babka 81c5857b27 mm, kswapd: remove bogus check of balance_classzone_idx
During work on kcompactd integration I spotted a confusing check of
balance_classzone_idx, which I believe is bogus.

The balanced_classzone_idx is filled by balance_pgdat() as the highest
zone it attempted to balance.  This was introduced by commit dc83edd941
("mm: kswapd: use the classzone idx that kswapd was using for
sleeping_prematurely()").

The intention is that (as expressed in today's function names) the
value used for kswapd_shrink_zone() calls in balance_pgdat() is the same
as for the decisions in kswapd_try_to_sleep().

An unwanted side-effect of that commit was breaking the check in
kswapd() for whether there had been another kswapd wakeup with a tighter
(= lower) classzone_idx.  Commits 215ddd6664 ("mm: vmscan: only read
new_classzone_idx from pgdat when reclaiming successfully") and
d2ebd0f6b8 ("kswapd: avoid unnecessary rebalance after an unsuccessful
balancing") tried to fix this, but apparently introduced a bogus check
that this patch removes.

Consider zone indexes X < Y < Z, where:
- Z is the value used for the first kswapd wakeup.
- Y is returned as balanced_classzone_idx, which means zones with index higher
  than Y (including Z) were found to be unreclaimable.
- X is the value used for the second kswapd wakeup.

The new wakeup with value X means that kswapd is now supposed to balance
all zones with index <= X harder.  But instead, due to Y < Z, it will go
to sleep and won't read the new value X.  This is subtly wrong.

The effect of this patch is that kswapd will react better in some
situations, where e.g.  the first wakeup is for ZONE_DMA32, the second is
for ZONE_DMA, and ZONE_NORMAL is unreclaimable.  Before this patch,
kswapd would go to sleep instead of reclaiming ZONE_DMA harder.  I expect
these situations to be very rare; the main value is in better
maintainability, thanks to the removal of a confusing and bogus check.

Signed-off-by: Vlastimil Babka <vbabka@suse.cz>
Cc: Andrea Arcangeli <aarcange@redhat.com>
Cc: "Kirill A. Shutemov" <kirill.shutemov@linux.intel.com>
Cc: Rik van Riel <riel@redhat.com>
Cc: Joonsoo Kim <iamjoonsoo.kim@lge.com>
Cc: Mel Gorman <mgorman@techsingularity.net>
Cc: David Rientjes <rientjes@google.com>
Cc: Michal Hocko <mhocko@suse.com>
Cc: Johannes Weiner <hannes@cmpxchg.org>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
2016-03-17 15:09:34 -07:00
kasan kasan: add functions to clear stack poison 2016-03-09 15:43:42 -08:00
backing-dev.c mm/backing-dev.c: fix error path in wb_init() 2016-02-11 18:35:48 -08:00
balloon_compaction.c virtio_balloon: fix race between migration and ballooning 2016-01-12 20:47:06 +02:00
bootmem.c x86/mm: Introduce max_possible_pfn 2015-12-06 12:46:31 +01:00
cleancache.c cleancache: constify cleancache_ops structure 2016-01-27 09:09:57 -05:00
cma_debug.c
cma.c mm/cma.c: suppress warning 2015-11-05 19:34:48 -08:00
cma.h
compaction.c mm/compaction: speed up pageblock_pfn_to_page() when zone is contiguous 2016-03-15 16:55:16 -07:00
debug.c mm, debug: move bad flags printing to bad_page() 2016-03-15 16:55:16 -07:00
dmapool.c mm, page_alloc: distinguish between being unable to sleep, unwilling to sleep and avoiding waking kswapd 2015-11-06 17:50:42 -08:00
early_ioremap.c mm/early_ioremap: use offset_in_page macro 2015-11-05 19:34:48 -08:00
fadvise.c
failslab.c mm: fault-inject take over bootstrap kmem_cache check 2016-03-15 16:55:16 -07:00
filemap.c mm: remove unnecessary uses of lock_page_memcg() 2016-03-15 16:55:16 -07:00
frame_vector.c mm: fix docbook comment for get_vaddr_frames() 2015-11-05 19:34:48 -08:00
frontswap.c
gup.c mm: retire GUP WARN_ON_ONCE that outlived its usefulness 2016-02-03 08:57:14 -08:00
highmem.c
huge_memory.c thp: cleanup split_huge_page() 2016-03-15 16:55:16 -07:00
hugetlb_cgroup.c mm: make compound_head() robust 2015-11-06 17:50:42 -08:00
hugetlb.c mm/hugetlb: use EOPNOTSUPP in hugetlb sysctl handlers 2016-03-09 15:43:42 -08:00
hwpoison-inject.c hwpoison: use page_cgroup_ino for filtering by memcg 2015-09-10 13:29:01 -07:00
init-mm.c
internal.h /proc/kpageflags: return KPF_BUDDY for "tail" buddy pages 2016-03-17 15:09:34 -07:00
interval_tree.c
Kconfig mm/Kconfig: correct description of DEFERRED_STRUCT_PAGE_INIT 2016-02-05 18:10:40 -08:00
Kconfig.debug mm/page_poisoning.c: allow for zero poisoning 2016-03-15 16:55:16 -07:00
kmemcheck.c mm: kmemcheck skip object if slab allocation failed 2016-03-15 16:55:16 -07:00
kmemleak-test.c
kmemleak.c Revert "gfp: add __GFP_NOACCOUNT" 2016-01-14 16:00:49 -08:00
ksm.c mm/ksm.c: mark stable page dirty 2016-01-15 17:56:32 -08:00
list_lru.c mm: memcontrol: move kmem accounting code to CONFIG_MEMCG 2016-01-20 17:09:18 -08:00
maccess.c mm/maccess.c: actually return -EFAULT from strncpy_from_unsafe 2015-11-05 19:34:48 -08:00
madvise.c mm/madvise: update comment on sys_madvise() 2016-03-15 16:55:16 -07:00
Makefile mm/page_poison.c: enable PAGE_POISONING as a separate option 2016-03-15 16:55:16 -07:00
memblock.c mm/memblock.c: remove unnecessary memblock_type variable 2016-03-15 16:55:16 -07:00
memcontrol.c mm: memcontrol: report kernel stack usage in cgroup2 memory.stat 2016-03-17 15:09:34 -07:00
memory_hotplug.c mm/compaction: speed up pageblock_pfn_to_page() when zone is contiguous 2016-03-15 16:55:16 -07:00
memory-failure.c mm/memory-failure.c: remove useless "undef"s 2016-03-15 16:55:16 -07:00
memory.c Merge branch 'akpm' (patches from Andrew) 2016-03-16 11:51:08 -07:00
mempolicy.c mm/mempolicy.c: skip VM_HUGETLB and VM_MIXEDMAP VMA for lazy mbind 2016-03-15 16:55:16 -07:00
mempool.c mm/mempool: avoid KASAN marking mempool poison checks as use-after-free 2016-03-11 16:17:47 -08:00
memtest.c memtest: remove unused header files 2015-09-08 15:35:28 -07:00
migrate.c mm: migrate: consolidate mem_cgroup_migrate() calls 2016-03-15 16:55:16 -07:00
mincore.c thp: change pmd_trans_huge_lock() interface to return ptl 2016-01-21 17:20:51 -08:00
mlock.c mm: fix mlock accouting 2016-01-21 17:20:51 -08:00
mm_init.c
mmap.c Linux 4.5-rc7 2016-03-07 09:27:30 +01:00
mmu_context.c
mmu_notifier.c mmu-notifier: add clear_young callback 2015-09-10 13:29:01 -07:00
mmzone.c mm/mmzone.c: memmap_valid_within() can be boolean 2016-01-14 16:00:49 -08:00
mprotect.c mm, dax: check for pmd_none() after split_huge_pmd() 2016-02-11 18:35:48 -08:00
mremap.c mm, dax: check for pmd_none() after split_huge_pmd() 2016-02-11 18:35:48 -08:00
msync.c mm/msync: use offset_in_page macro 2015-11-05 19:34:48 -08:00
nobootmem.c x86/mm: Introduce max_possible_pfn 2015-12-06 12:46:31 +01:00
nommu.c kmemcg: account certain kmem allocations to memcg 2016-01-14 16:00:49 -08:00
oom_kill.c mm: oom_kill: don't ignore oom score on exiting tasks 2016-03-17 15:09:34 -07:00
page_alloc.c sound: query dynamic DEBUG_PAGEALLOC setting 2016-03-17 15:09:34 -07:00
page_counter.c mm: page_counter: let page_counter_try_charge() return bool 2015-11-05 19:34:48 -08:00
page_ext.c mm/page_poisoning.c: allow for zero poisoning 2016-03-15 16:55:16 -07:00
page_idle.c mm: add page_check_address_transhuge() helper 2016-01-15 17:56:32 -08:00
page_io.c
page_isolation.c mm/page_isolation: do some cleanup in "undo_isolate_page_range" 2016-01-15 17:56:32 -08:00
page_owner.c mm, page_owner: dump page owner info from dump_page() 2016-03-15 16:55:16 -07:00
page_poison.c mm/page_poisoning.c: allow for zero poisoning 2016-03-15 16:55:16 -07:00
page-writeback.c mm: remove unnecessary uses of lock_page_memcg() 2016-03-15 16:55:16 -07:00
pagewalk.c thp: rename split_huge_page_pmd() to split_huge_pmd() 2016-01-15 17:56:32 -08:00
percpu-km.c
percpu-vm.c
percpu.c tree wide: use kvfree() than conditional kfree()/vfree() 2016-01-22 17:02:18 -08:00
pgtable-generic.c mm,thp: fix spellos in describing __HAVE_ARCH_FLUSH_PMD_TLB_RANGE 2016-02-11 18:35:48 -08:00
process_vm_access.c ptrace: use fsuid, fsgid, effective creds for fs access checks 2016-01-20 17:09:18 -08:00
quicklist.c
readahead.c mm: move lru_to_page to mm_inline.h 2016-01-14 16:00:49 -08:00
rmap.c mm: simplify lock_page_memcg() 2016-03-15 16:55:16 -07:00
shmem.c mm: migrate: do not touch page->mem_cgroup of live pages 2016-03-15 16:55:16 -07:00
slab_common.c mm: new API kfree_bulk() for SLAB+SLUB allocators 2016-03-15 16:55:16 -07:00
slab.c mm: memcontrol: report slab usage in cgroup2 memory.stat 2016-03-17 15:09:34 -07:00
slab.h mm: memcontrol: report slab usage in cgroup2 memory.stat 2016-03-17 15:09:34 -07:00
slob.c mm: slab: free kmem_cache_node after destroy sysfs file 2016-02-18 16:23:24 -08:00
slub.c mm/slub: query dynamic DEBUG_PAGEALLOC setting 2016-03-17 15:09:34 -07:00
sparse-vmemmap.c x86, mm: introduce vmem_altmap to augment vmemmap_populate() 2016-01-15 17:56:32 -08:00
sparse.c x86, mm: introduce vmem_altmap to augment vmemmap_populate() 2016-01-15 17:56:32 -08:00
swap_cgroup.c
swap_state.c mm: memcontrol: charge swap to cgroup2 2016-01-20 17:09:18 -08:00
swap.c mm, x86: get_user_pages() for dax mappings 2016-01-15 17:56:32 -08:00
swapfile.c wrappers for ->i_mutex access 2016-01-22 18:04:28 -05:00
truncate.c mm: remove unnecessary uses of lock_page_memcg() 2016-03-15 16:55:16 -07:00
userfaultfd.c memcg: adjust to support new THP refcounting 2016-01-15 17:56:32 -08:00
util.c proc: revert /proc/<pid>/maps [stack:TID] annotation 2016-02-03 08:28:43 -08:00
vmacache.c mm/vmacache: inline vmacache_valid_mm() 2015-11-05 19:34:48 -08:00
vmalloc.c mm/vmalloc: query dynamic DEBUG_PAGEALLOC setting 2016-03-17 15:09:34 -07:00
vmpressure.c mm/vmpressure.c: fix subtree pressure detection 2016-02-03 08:28:43 -08:00
vmscan.c mm, kswapd: remove bogus check of balance_classzone_idx 2016-03-17 15:09:34 -07:00
vmstat.c mm, page_owner: convert page_owner_inited to static key 2016-03-15 16:55:16 -07:00
workingset.c mm: simplify lock_page_memcg() 2016-03-15 16:55:16 -07:00
zbud.c mm/zbud.c: use list_last_entry() instead of list_tail_entry() 2016-01-15 11:40:52 -08:00
zpool.c mm: zsmalloc: constify struct zs_pool name 2015-11-06 17:50:42 -08:00
zsmalloc.c zsmalloc: fix migrate_zspage-zs_free race condition 2016-01-20 17:09:18 -08:00
zswap.c mm/zswap: change incorrect strncmp use to strcmp 2015-12-18 14:25:40 -08:00