From a43a83c79b4fd36e297f97b7468be28fdf771d78 Mon Sep 17 00:00:00 2001
From: Miaohe Lin
Date: Tue, 16 Aug 2022 21:05:48 +0800
Subject: [PATCH] mm/hugetlb: fix incorrect update of max_huge_pages

Patch series "A few fixup patches for hugetlb".

This series contains a few fixup patches to fix the incorrect update of
max_huge_pages, fix WARN_ON(!kobj) in sysfs_create_group(), and so on.
More details can be found in the respective changelogs.

This patch (of 6):

When a huge page is demoted, target_hstate->max_huge_pages should be
incremented by pages_per_huge_page(h) / pages_per_huge_page(target_hstate),
not by pages_per_huge_page(h).  Update max_huge_pages accordingly for
consistency.

Link: https://lkml.kernel.org/r/20220816130553.31406-1-linmiaohe@huawei.com
Link: https://lkml.kernel.org/r/20220816130553.31406-2-linmiaohe@huawei.com
Signed-off-by: Miaohe Lin
Reviewed-by: Muchun Song
Cc: Mike Kravetz
Signed-off-by: Andrew Morton
---
 mm/hugetlb.c | 3 ++-
 1 file changed, 2 insertions(+), 1 deletion(-)

diff --git a/mm/hugetlb.c b/mm/hugetlb.c
index ea1c7bfa1cc3..e72052964fb5 100644
--- a/mm/hugetlb.c
+++ b/mm/hugetlb.c
@@ -3472,7 +3472,8 @@ static int demote_free_huge_page(struct hstate *h, struct page *page)
 	 * based on pool changes for the demoted page.
 	 */
 	h->max_huge_pages--;
-	target_hstate->max_huge_pages += pages_per_huge_page(h);
+	target_hstate->max_huge_pages +=
+		pages_per_huge_page(h) / pages_per_huge_page(target_hstate);
 
 	return rc;
 }
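
For context, the program below is a minimal standalone sketch of the corrected
accounting, not part of the patch and not kernel code. It assumes 4 KiB base
pages with a 1 GiB huge page being demoted into 2 MiB huge pages (x86-64
defaults); the sizes and variable names are only illustrative.

/*
 * Standalone illustration of the demote accounting fix (not kernel code).
 * Assumes 4 KiB base pages, a 1 GiB source hstate and a 2 MiB target
 * hstate; these sizes are assumptions for the example only.
 */
#include <stdio.h>

int main(void)
{
	unsigned long base_page = 4096UL;	/* 4 KiB base page     */
	unsigned long src_hpage = 1UL << 30;	/* 1 GiB source hstate */
	unsigned long dst_hpage = 2UL << 20;	/* 2 MiB target hstate */

	/* Analogue of pages_per_huge_page() for each hstate. */
	unsigned long pages_per_src = src_hpage / base_page;	/* 262144 */
	unsigned long pages_per_dst = dst_hpage / base_page;	/* 512    */

	/* Old accounting: the target pool's max_huge_pages grew by 262144. */
	printf("old delta: %lu\n", pages_per_src);

	/* Fixed accounting: one 1 GiB page demotes into 512 2 MiB pages. */
	printf("new delta: %lu\n", pages_per_src / pages_per_dst);

	return 0;
}

With these example sizes, the old code inflated the 2 MiB pool's
max_huge_pages by 262144 per demoted 1 GiB page, while such a page actually
yields 262144 / 512 = 512 pages of 2 MiB, which is what the fixed expression
accounts for.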