Subject: Re: [PATCH v3] mm/swap: fix race when skipping swapcache
From: David Hildenbrand <david@redhat.com>
Date: 19 Feb 2024
    On 18.02.24 08:59, Huang, Ying wrote:
    > David Hildenbrand <david@redhat.com> writes:
    >
    >> On 16.02.24 10:51, Kairui Song wrote:
    >>> From: Kairui Song <kasong@tencent.com>
    >>> When skipping swapcache for SWP_SYNCHRONOUS_IO, if two or more
    >>> threads swapin the same entry at the same time, they get different
    >>> pages (A, B). Before one thread (T0) finishes the swapin and installs
    >>> page (A) to the PTE, another thread (T1) could finish swapin of
    >>> page (B), swap_free the entry, then swap out the possibly modified
    >>> page reusing the same entry. It breaks the pte_same check in (T0)
    >>> because the PTE value is unchanged, causing an ABA problem. Thread
    >>> (T0) will install a stale page (A) into the PTE and cause data
    >>> corruption.
    >>> One possible callstack is like this:
    >>> CPU0                                 CPU1
    >>> ----                                 ----
    >>> do_swap_page()                       do_swap_page() with same entry
    >>> <direct swapin path>                 <direct swapin path>
    >>> <alloc page A>                       <alloc page B>
    >>> swap_read_folio() <- read to page A  swap_read_folio() <- read to page B
    >>> <slow on later locks or interrupt>   <finished swapin first>
    >>> ...                                  set_pte_at()
    >>>                                      swap_free() <- entry is free
    >>>                                      <write to page B, now page A stale>
    >>>                                      <swap out page B to same swap entry>
    >>> pte_same() <- Check pass, PTE seems
    >>>               unchanged, but page A
    >>>               is stale!
    >>> swap_free() <- page B content lost!
    >>> set_pte_at() <- stale page A installed!
    >>> Besides, for ZRAM, swap_free() allows the swap device to discard
    >>> the entry content, so even if page (B) is not modified, data loss
    >>> can also occur if swap_read_folio() on CPU0 happens later than
    >>> swap_free() on CPU1.
    >>> To fix this, reuse swapcache_prepare, which will pin the swap entry
    >>> using the cache flag, and allow only one thread to pin it. Release
    >>> the pin after the page table is unlocked. Racers will simply wait,
    >>> since it's a rare and very short event. A schedule() call is added
    >>> to avoid wasting too much CPU or adding too much noise to perf
    >>> statistics.
    >>> Other methods, like increasing the swap count, don't seem to be a
    >>> good idea after some tests, as they cause racers to fall back to
    >>> using the swap cache again. Parallel swapin using different methods
    >>> leads to a much more complex scenario.
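
    [A minimal sketch of the pin/release pattern described above, for
    illustration; this is not the patch itself. The swapcache_clear()
    release helper and its signature are assumptions based on the
    description and the diffstat below:]

        if (swapcache_prepare(entry)) {
                /* Someone else holds the pin: back off, let the fault retry */
                schedule();
                goto out;
        }
        need_clear_cache = true;

        /* ... allocate folio, swap_read_folio(), take PTL, pte_same()
         * check, set_pte_at(), swap_free(), drop PTL ... */
    out:
        /* Release the pin only after the page table lock is dropped */
        if (need_clear_cache)
                swapcache_clear(si, entry);
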
    >>> Reproducer:
    >>> This race issue can be triggered easily using a well-constructed
    >>> reproducer and a patched brd (with a delay in the read path) [1].
    >>> With the latest 6.8 mainline, race-caused data loss can be observed
    >>> easily:
    >>> $ gcc -g -lpthread test-thread-swap-race.c && ./a.out
    >>> Polulating 32MB of memory region...
    >>> Keep swapping out...
    >>> Starting round 0...
    >>> Spawning 65536 workers...
    >>> 32746 workers spawned, wait for done...
    >>> Round 0: Error on 0x5aa00, expected 32746, got 32743, 3 data loss!
    >>> Round 0: Error on 0x395200, expected 32746, got 32743, 3 data loss!
    >>> Round 0: Error on 0x3fd000, expected 32746, got 32737, 9 data loss!
    >>> Round 0 Failed, 15 data loss!
    >>> This reproducer spawns multiple threads sharing the same memory
    >>> region using a small swap device. Every two threads update mapped
    >>> pages one by one in opposite directions, trying to create a race,
    >>> with one dedicated thread that keeps swapping the data out using
    >>> madvise.
    >>> The reproducer achieved a reproduction rate of about once every 5
    >>> minutes, so the race is entirely possible in production.
    >>> After this patch, I ran the reproducer for a few hundred rounds and
    >>> no data loss was observed.
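
    [For illustration, a minimal sketch of the kind of reproducer described
    above; all names here are hypothetical, and MADV_PAGEOUT is assumed for
    the swap-out thread. The real reproducer is at [1]:]

        #define _GNU_SOURCE
        #include <pthread.h>
        #include <string.h>
        #include <sys/mman.h>

        #define REGION (32UL << 20)     /* 32MB region, matching the test output */
        #define PGSZ   4096UL

        static char *region;

        /* Two updaters walk the same pages from opposite ends to widen the
         * race window; they touch different bytes so writes don't collide. */
        static void *update_fwd(void *arg)
        {
                for (unsigned long off = 0; off < REGION; off += PGSZ)
                        region[off]++;
                return NULL;
        }

        static void *update_rev(void *arg)
        {
                for (long off = (long)(REGION - PGSZ); off >= 0; off -= PGSZ)
                        region[off + 1]++;
                return NULL;
        }

        /* Dedicated thread keeps pushing the region out to swap. */
        static void *swapper(void *arg)
        {
                for (;;)
                        madvise(region, REGION, MADV_PAGEOUT);
                return NULL;
        }

        int main(void)
        {
                pthread_t t[3];

                region = mmap(NULL, REGION, PROT_READ | PROT_WRITE,
                              MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);
                memset(region, 0, REGION);      /* populate the region */

                pthread_create(&t[0], NULL, swapper, NULL);
                pthread_create(&t[1], NULL, update_fwd, NULL);
                pthread_create(&t[2], NULL, update_rev, NULL);
                pthread_join(t[1], NULL);
                pthread_join(t[2], NULL);
                /* The real reproducer then verifies every page saw every update. */
                return 0;
        }
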
    >>> Performance overhead is minimal, microbenchmark swapin 10G from 32G
    >>> zram:
    >>> Before:     10934698 us
    >>> After:      11157121 us
    >>> Non-direct: 13155355 us (dropping the SWP_SYNCHRONOUS_IO flag)
    >>> Fixes: 0bcac06f27d7 ("mm, swap: skip swapcache for swapin of synchronous device")
    >>> Link: https://github.com/ryncsn/emm-test-project/tree/master/swap-stress-race [1]
    >>> Reported-by: "Huang, Ying" <ying.huang@intel.com>
    >>> Closes: https://lore.kernel.org/lkml/87bk92gqpx.fsf_-_@yhuang6-desk2.ccr.corp.intel.com/
    >>> Signed-off-by: Kairui Song <kasong@tencent.com>
    >>> Cc: stable@vger.kernel.org
    >>> ---
    >>> Update from V2:
    >>> - Add a schedule() if raced to prevent repeated page faults wasting
    >>>   CPU and adding noise to perf statistics.
    >>> - Use a bool to state the special case instead of reusing existing
    >>>   variables, fixing error handling [Minchan Kim].
    >>> V2: https://lore.kernel.org/all/20240206182559.32264-1-ryncsn@gmail.com/
    >>> Update from V1:
    >>> - Add some words on the ZRAM case; it discards swap content on
    >>>   swap_free, so the race window is a bit different, but the cure is
    >>>   the same. [Barry Song]
    >>> - Update comments to make them cleaner [Huang, Ying]
    >>> - Add a function placeholder to fix the CONFIG_SWAP=n build
    >>>   [SeongJae Park]
    >>> - Update the commit message and summary, referring to
    >>>   SWP_SYNCHRONOUS_IO instead of "direct swapin path" [Yu Zhao]
    >>> - Update commit message.
    >>> - Collect Review and Acks.
    >>> V1: https://lore.kernel.org/all/20240205110959.4021-1-ryncsn@gmail.com/
    >>> include/linux/swap.h |  5 +++++
    >>> mm/memory.c          | 20 ++++++++++++++++++++
    >>> mm/swap.h            |  5 +++++
    >>> mm/swapfile.c        | 13 +++++++++++++
    >>> 4 files changed, 43 insertions(+)
    >>> diff --git a/include/linux/swap.h b/include/linux/swap.h
    >>> index 4db00ddad261..8d28f6091a32 100644
    >>> --- a/include/linux/swap.h
    >>> +++ b/include/linux/swap.h
    >>> @@ -549,6 +549,11 @@ static inline int swap_duplicate(swp_entry_t swp)
    >>>  	return 0;
    >>>  }
    >>>
    >>> +static inline int swapcache_prepare(swp_entry_t swp)
    >>> +{
    >>> +	return 0;
    >>> +}
    >>> +
    >>>  static inline void swap_free(swp_entry_t swp)
    >>>  {
    >>>  }
    >>> diff --git a/mm/memory.c b/mm/memory.c
    >>> index 7e1f4849463a..7059230d0a54 100644
    >>> --- a/mm/memory.c
    >>> +++ b/mm/memory.c
    >>> @@ -3799,6 +3799,7 @@ vm_fault_t do_swap_page(struct vm_fault *vmf)
    >>>  	struct page *page;
    >>>  	struct swap_info_struct *si = NULL;
    >>>  	rmap_t rmap_flags = RMAP_NONE;
    >>> +	bool need_clear_cache = false;
    >>>  	bool exclusive = false;
    >>>  	swp_entry_t entry;
    >>>  	pte_t pte;
    >>> @@ -3867,6 +3868,20 @@ vm_fault_t do_swap_page(struct vm_fault *vmf)
    >>>  	if (!folio) {
    >>>  		if (data_race(si->flags & SWP_SYNCHRONOUS_IO) &&
    >>>  		    __swap_count(entry) == 1) {
    >>> +			/*
    >>> +			 * Prevent parallel swapin from proceeding with
    >>> +			 * the cache flag. Otherwise, another thread may
    >>> +			 * finish swapin first, free the entry, and swapout
    >>> +			 * reusing the same entry. It's undetectable as
    >>> +			 * pte_same() returns true due to entry reuse.
    >>> +			 */
    >>> +			if (swapcache_prepare(entry)) {
    >>> +				/* Relax a bit to prevent rapid repeated page faults */
    >>> +				schedule();
    >>> +				goto out;
    >>> +			}
    >>> +			need_clear_cache = true;
    >>> +
    >>
    >> I took a closer look at __read_swap_cache_async() and it essentially
    >> does something similar.
    >>
    >> Instead of returning, it keeps retrying until it finds that
    >> swapcache_prepare() fails for a reason other than -EEXIST (e.g.,
    >> freed concurrently) or it finds the entry in the swapcache.
    >>
    >> So if you would succeed here on a freed+reused swap entry,
    >> __read_swap_cache_async() would simply retry.
    >>
    >> It spells that out:
    >>
    >> /*
    >>  * We might race against __delete_from_swap_cache(), and
    >>  * stumble across a swap_map entry whose SWAP_HAS_CACHE
    >>  * has not yet been cleared. Or race against another
    >>  * __read_swap_cache_async(), which has set SWAP_HAS_CACHE
    >>  * in swap_map, but not yet added its folio to swap cache.
    >>  */
    >>
    >> Whereby it could now race against this code here as well, where we
    >> speculatively set SWAP_HAS_CACHE and might never add anything to the
    >> swap cache.
    >>
    >>
    >> I'd probably avoid the early returns and do something even closer to
    >> __read_swap_cache_async():
    >>
    >> while (true) {
    >> 	/*
    >> 	 * Fake that we are trying to insert a page into the swapcache, to
    >> 	 * serialize against concurrent threads wanting to do the same.
    >> 	 * [more from your description]
    >> 	 */
    >> 	ret = swapcache_prepare(entry);
    >> 	if (likely(!ret))
    >> 		/*
    >> 		 * Move forward with swapin, we'll recheck if the PTE hasn't
    >> 		 * changed later.
    >> 		 */
    >> 		break;
    >> 	else if (ret != -EEXIST)
    >> 		goto out;
    >
    > The swap entry may be kept in the swap cache for a long time. For
    > example, it may be read into the swap cache via MADV_WILLNEED.

    Right, we'd have to check for the swapcache.
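
    [A rough sketch of such a check, assuming the 6.8-era lookup helpers
    (filemap_get_folio() on the swap address space); an illustration, not
    code from the thread:]

        /*
         * If the entry already sits in the swap cache (e.g. read in via
         * MADV_WILLNEED), swapcache_prepare() keeps failing with -EEXIST,
         * so look the folio up and take the swapcache path instead of
         * retrying forever.
         */
        folio = filemap_get_folio(swap_address_space(entry),
                                  swp_offset(entry));
        if (!IS_ERR(folio)) {
                /* proceed with this folio via the swapcache path */
        }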

    I briefly thought about just factoring out what we have in
    __read_swap_cache_async() and reusing it here. It's a similar problem
    to solve, with quite a lot of duplicate code.

    But not worth the churn in a simple fix. We could explore that option
    as a cleanup on top.

    --
    Cheers,

    David / dhildenb

