Date: Mon, 17 Oct 2022 20:25:05 -0400
From: Rik van Riel <>
Subject: [PATCH] mm,hugetlb: take hugetlb_lock before decrementing h->resv_huge_pages
The h->*_huge_pages counters are protected by the hugetlb_lock, but alloc_huge_page has a corner case where it can decrement the counter outside of the lock.
This could lead to a corrupted value of h->resv_huge_pages, which we have observed on our systems.
Take the hugetlb_lock before decrementing h->resv_huge_pages to avoid a potential race.
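To illustrate the class of bug (this is a minimal userspace sketch, not
kernel code; the counter name, thread bodies, and iteration count are
made up for the demonstration): one thread updates a shared counter
under a mutex while another decrements it with a plain, unsynchronized
read-modify-write, the same pattern as the unlocked h->resv_huge_pages--
decrement. The decrements can race with the locked updates and lose
counts, leaving the counter corrupted.

	/* Build with: gcc -O2 -pthread race.c */
	#include <pthread.h>
	#include <stdio.h>

	static long resv_pages;
	static pthread_mutex_t lock = PTHREAD_MUTEX_INITIALIZER;

	static void *locked_inc(void *arg)
	{
		for (int i = 0; i < 1000000; i++) {
			pthread_mutex_lock(&lock);
			resv_pages++;	/* protected update */
			pthread_mutex_unlock(&lock);
		}
		return NULL;
	}

	static void *unlocked_dec(void *arg)
	{
		for (int i = 0; i < 1000000; i++)
			resv_pages--;	/* racy: load, decrement, store */
		return NULL;
	}

	int main(void)
	{
		pthread_t a, b;

		pthread_create(&a, NULL, locked_inc, NULL);
		pthread_create(&b, NULL, unlocked_dec, NULL);
		pthread_join(a, NULL);
		pthread_join(b, NULL);

		/* Expected 0; lost updates typically leave nonzero. */
		printf("resv_pages = %ld\n", resv_pages);
		return 0;
	}

Moving the decrement inside the lock, as the patch below does for
h->resv_huge_pages, makes every read-modify-write of the counter
serialize against the other lock holders.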
Fixes: a88c76954804 ("mm: hugetlb: fix hugepage memory leak caused by wrong reserve count")
Cc: stable@kernel.org
Cc: Naoya Horiguchi <n-horiguchi@ah.jp.nec.com>
Cc: Glen McCready <gkmccready@meta.com>
Cc: Mike Kravetz <mike.kravetz@oracle.com>
Cc: Muchun Song <songmuchun@bytedance.com>
Cc: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Rik van Riel <riel@surriel.com>
---
 mm/hugetlb.c | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)
diff --git a/mm/hugetlb.c b/mm/hugetlb.c
index b586cdd75930..dede0337c07c 100644
--- a/mm/hugetlb.c
+++ b/mm/hugetlb.c
@@ -2924,11 +2924,11 @@ struct page *alloc_huge_page(struct vm_area_struct *vma,
 		page = alloc_buddy_huge_page_with_mpol(h, vma, addr);
 		if (!page)
 			goto out_uncharge_cgroup;
+		spin_lock_irq(&hugetlb_lock);
 		if (!avoid_reserve && vma_has_reserves(vma, gbl_chg)) {
 			SetHPageRestoreReserve(page);
 			h->resv_huge_pages--;
 		}
-		spin_lock_irq(&hugetlb_lock);
 		list_add(&page->lru, &h->hugepage_activelist);
 		set_page_refcounted(page);
 		/* Fall through */
-- 
2.37.2