Subject: [PATCH 5.15 36/66] zsmalloc: fix races between asynchronous zspage free and page migration
From: Sultan Alsawaf <sultan@kerneltoast.com>

commit 2505a981114dcb715f8977b8433f7540854851d8 upstream.

The asynchronous zspage free worker tries to lock a zspage's entire page
list without defending against page migration. Since pages which haven't
yet been locked can concurrently migrate off the zspage page list while
lock_zspage() churns away, lock_zspage() can suffer from a few different
lethal races.

It can lock a page which no longer belongs to the zspage and unsafely
dereference page_private(), it can unsafely dereference a torn pointer to
the next page (since there's a data race), and it can observe a spurious
NULL pointer to the next page and thus not lock all of the zspage's pages
(since a single page migration will reconstruct the entire page list, and
create_page_chain() unconditionally zeroes out each list pointer in the
process).
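
To make the above concrete, here is the pre-patch loop (the code removed
by the diff below) annotated with where each of the races described above
can occur; this sketch is for illustration only and is not part of the
patch:

	/* Pre-patch lock_zspage(): nothing defends against page migration. */
	static void lock_zspage(struct zspage *zspage)
	{
		struct page *page = get_first_page(zspage);

		do {
			/*
			 * A page not yet locked can migrate off the zspage
			 * here, so lock_page() may lock a page that no longer
			 * belongs to this zspage, and page_private() on it is
			 * then unsafe to dereference.
			 */
			lock_page(page);
			/*
			 * get_next_page() reads the next-page pointer without
			 * synchronization; a racing migration can tear it, or
			 * leave it NULL after create_page_chain() rebuilds the
			 * list, ending the loop before every page is locked.
			 */
		} while ((page = get_next_page(page)) != NULL);
	}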

Fix the races by using migrate_read_lock() in lock_zspage() to synchronize
with page migration.

Link: https://lkml.kernel.org/r/20220509024703.243847-1-sultan@kerneltoast.com
Fixes: 77ff465799c602 ("zsmalloc: zs_page_migrate: skip unnecessary loops but not return -EBUSY if zspage is not inuse")
Signed-off-by: Sultan Alsawaf <sultan@kerneltoast.com>
Acked-by: Minchan Kim <minchan@kernel.org>
Cc: Nitin Gupta <ngupta@vflare.org>
Cc: Sergey Senozhatsky <senozhatsky@chromium.org>
Cc: <stable@vger.kernel.org>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
---
 mm/zsmalloc.c | 37 +++++++++++++++++++++++++++++++++----
 1 file changed, 33 insertions(+), 4 deletions(-)

--- a/mm/zsmalloc.c
+++ b/mm/zsmalloc.c
@@ -1743,11 +1743,40 @@ static enum fullness_group putback_zspag
  */
 static void lock_zspage(struct zspage *zspage)
 {
-	struct page *page = get_first_page(zspage);
+	struct page *curr_page, *page;
 
-	do {
-		lock_page(page);
-	} while ((page = get_next_page(page)) != NULL);
+	/*
+	 * Pages we haven't locked yet can be migrated off the list while we're
+	 * trying to lock them, so we need to be careful and only attempt to
+	 * lock each page under migrate_read_lock(). Otherwise, the page we lock
+	 * may no longer belong to the zspage. This means that we may wait for
+	 * the wrong page to unlock, so we must take a reference to the page
+	 * prior to waiting for it to unlock outside migrate_read_lock().
+	 */
+	while (1) {
+		migrate_read_lock(zspage);
+		page = get_first_page(zspage);
+		if (trylock_page(page))
+			break;
+		get_page(page);
+		migrate_read_unlock(zspage);
+		wait_on_page_locked(page);
+		put_page(page);
+	}
+
+	curr_page = page;
+	while ((page = get_next_page(curr_page))) {
+		if (trylock_page(page)) {
+			curr_page = page;
+		} else {
+			get_page(page);
+			migrate_read_unlock(zspage);
+			wait_on_page_locked(page);
+			put_page(page);
+			migrate_read_lock(zspage);
+		}
+	}
+	migrate_read_unlock(zspage);
 }
 
 static int zs_init_fs_context(struct fs_context *fc)
