    Subject: [PATCH 3.13 149/162] mm: free compound page with correct order
    3.13.11.11 -stable review patch.  If anyone has any objections, please let me know.

    ------------------

    From: Yu Zhao <yuzhao@google.com>

    commit 5ddacbe92b806cd5b4f8f154e8e46ac267fff55c upstream.

    A compound page should be freed by put_page() or free_pages() with the
    correct order; freeing it at order 0 leaks the tail pages.
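
    As an illustration only (not part of the patch), a minimal kernel-style
    sketch of the failure mode; grab_huge_page() and drop_huge_page() are
    hypothetical helpers, and CONFIG_TRANSPARENT_HUGEPAGE is assumed:

    #include <linux/gfp.h>
    #include <linux/huge_mm.h>
    #include <linux/mm.h>

    static struct page *grab_huge_page(void)
    {
    	/* __GFP_COMP makes this a compound page of order HPAGE_PMD_ORDER. */
    	return alloc_pages(GFP_KERNEL | __GFP_COMP, HPAGE_PMD_ORDER);
    }

    static void drop_huge_page(struct page *page)
    {
    	/*
    	 * __free_page(page) would return only the 4K head page to the
    	 * buddy allocator, leaking the 511 tail pages of a 2M THP.
    	 * Freeing at compound_order(page) returns the whole block.
    	 */
    	__free_pages(page, compound_order(page));
    }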

    The compound order can be obtained with compound_order(), or
    HPAGE_PMD_ORDER could be used in this case. Some would argue the latter
    is faster, but I prefer the former because it is more general.
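
    In this path the two spellings are interchangeable, since the huge zero
    page is always allocated at HPAGE_PMD_ORDER; for illustration:

    	__free_pages(zero_page, compound_order(zero_page));	/* general */
    	__free_pages(zero_page, HPAGE_PMD_ORDER);		/* THP-specific */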

    This bug was observed not just on our servers (the worst case we saw was
    11G leaked on a 48G machine) but also on our workstations running an
    Ubuntu-based distro.

    $ cat /proc/vmstat | grep thp_zero_page_alloc
    thp_zero_page_alloc 55
    thp_zero_page_alloc_failed 0

    This means there is (thp_zero_page_alloc - 1) * (2M - 4K) of memory leaked.
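
    For the counters above, that works out to (55 - 1) * (2M - 4K) =
    54 * 2044K, roughly 108M leaked on this workstation: every increment
    past the first marks an earlier huge zero page that was freed at
    order 0, returning only its 4K head page.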

    Fixes: 97ae17497e99 ("thp: implement refcounting for huge zero page")
    Signed-off-by: Yu Zhao <yuzhao@google.com>
    Acked-by: Kirill A. Shutemov <kirill.shutemov@linux.intel.com>
    Cc: Andrea Arcangeli <aarcange@redhat.com>
    Cc: Mel Gorman <mel@csn.ul.ie>
    Cc: David Rientjes <rientjes@google.com>
    Cc: Bob Liu <lliubbo@gmail.com>
    Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
    Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
    Signed-off-by: Kamal Mostafa <kamal@canonical.com>
    ---
    mm/huge_memory.c | 4 ++--
    1 file changed, 2 insertions(+), 2 deletions(-)

    diff --git a/mm/huge_memory.c b/mm/huge_memory.c
    index 64a7f9c..310f27e 100644
    --- a/mm/huge_memory.c
    +++ b/mm/huge_memory.c
    @@ -193,7 +193,7 @@ retry:
     	preempt_disable();
     	if (cmpxchg(&huge_zero_page, NULL, zero_page)) {
     		preempt_enable();
    -		__free_page(zero_page);
    +		__free_pages(zero_page, compound_order(zero_page));
     		goto retry;
     	}

    @@ -225,7 +225,7 @@ static unsigned long shrink_huge_zero_page_scan(struct shrinker *shrink,
     	if (atomic_cmpxchg(&huge_zero_refcount, 1, 0) == 1) {
     		struct page *zero_page = xchg(&huge_zero_page, NULL);
     		BUG_ON(zero_page == NULL);
    -		__free_page(zero_page);
    +		__free_pages(zero_page, compound_order(zero_page));
     		return HPAGE_PMD_NR;
     	}

    --
    1.9.1

