    Subject: [PATCH 4.19 86/98] mm/zsmalloc.c: migration can leave pages in ZS_EMPTY indefinitely
    From: Henry Burns <henryburns@google.com>

    commit 1a87aa03597efa9641e92875b883c94c7f872ccb upstream.

    In zs_page_migrate() we call putback_zspage() after we have finished
    migrating all pages in this zspage. However, the return value is
    ignored. If a zs_free() races in between zs_page_isolate() and
    zs_page_migrate(), freeing the last object in the zspage,
    putback_zspage() will leave the zspage in ZS_EMPTY for a potentially
    unbounded amount of time.
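
    A rough sketch of the race (function names are the ones in
    mm/zsmalloc.c; the exact timing is only illustrative):

        CPU A (migration)              CPU B
        -----------------              -----
        zs_page_isolate()
                                       zs_free()   frees the last object;
                                                   the zspage becomes empty
        zs_page_migrate()
          putback_zspage()             moves the zspage to ZS_EMPTY and
                                       returns ZS_EMPTY, but the return
                                       value is dropped, so free_work is
                                       never scheduled and the zspage
                                       stays on the ZS_EMPTY list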

    To fix this, we need to do the same thing zs_page_putback() does:
    schedule free_work to run.

    To avoid duplicated code, move the sequence to a new
    putback_zspage_deferred() function which both zs_page_migrate() and
    zs_page_putback() call.

    Link: http://lkml.kernel.org/r/20190809181751.219326-1-henryburns@google.com
    Fixes: 48b4800a1c6a ("zsmalloc: page migration support")
    Signed-off-by: Henry Burns <henryburns@google.com>
    Reviewed-by: Sergey Senozhatsky <sergey.senozhatsky@gmail.com>
    Cc: Henry Burns <henrywolfeburns@gmail.com>
    Cc: Minchan Kim <minchan@kernel.org>
    Cc: Shakeel Butt <shakeelb@google.com>
    Cc: Jonathan Adams <jwadams@google.com>
    Cc: <stable@vger.kernel.org>
    Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
    Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
    Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>

---
 mm/zsmalloc.c | 19 +++++++++++++++----
 1 file changed, 15 insertions(+), 4 deletions(-)

--- a/mm/zsmalloc.c
+++ b/mm/zsmalloc.c
@@ -1882,6 +1882,18 @@ static void dec_zspage_isolation(struct
 	zspage->isolated--;
 }
 
+static void putback_zspage_deferred(struct zs_pool *pool,
+				    struct size_class *class,
+				    struct zspage *zspage)
+{
+	enum fullness_group fg;
+
+	fg = putback_zspage(class, zspage);
+	if (fg == ZS_EMPTY)
+		schedule_work(&pool->free_work);
+
+}
+
 static void replace_sub_page(struct size_class *class, struct zspage *zspage,
 			struct page *newpage, struct page *oldpage)
 {
@@ -2051,7 +2063,7 @@ static int zs_page_migrate(struct addres
 	 * the list if @page is final isolated subpage in the zspage.
 	 */
 	if (!is_zspage_isolated(zspage))
-		putback_zspage(class, zspage);
+		putback_zspage_deferred(pool, class, zspage);
 
 	reset_page(page);
 	put_page(page);
@@ -2097,14 +2109,13 @@ static void zs_page_putback(struct page
 	spin_lock(&class->lock);
 	dec_zspage_isolation(zspage);
 	if (!is_zspage_isolated(zspage)) {
-		fg = putback_zspage(class, zspage);
 		/*
 		 * Due to page_lock, we cannot free zspage immediately
 		 * so let's defer.
 		 */
-		if (fg == ZS_EMPTY)
-			schedule_work(&pool->free_work);
+		putback_zspage_deferred(pool, class, zspage);
 	}
+
 	spin_unlock(&class->lock);
 }
