Subject: Re: [PATCH RFC v2 1/2] mm: Don't skip swap entry even if zap_details specified
Date: 2 December 2021

    On Tuesday, 16 November 2021 12:49:50 AM AEDT Peter Xu wrote:
    > This check has existed since the first git commit of the Linux repository, but at
    > that time there was no page migration yet, so I think it was okay.
    >
    > With page migration enabled, it should logically be possible that we zap some
    > shmem pages during migration. When that happens, IIUC the old code could get the
    > MM_SHMEMPAGES RSS counter wrong, because we will zap the ptes without decreasing
    > the counter for the migration entries. I have no unit test to prove it, though, as
    > I don't know an easy way to trigger this condition.
    >
    > Besides, IMHO the optimization itself is already confusing in a few ways:

    I've spent a bit of time looking at this and think it would be good to get it
    cleaned up, as I've found it hard to follow in the past. What I haven't been able
    to confirm is whether anything relies on skipping swap entries or not. From your
    description it sounds like skipping swap entries was done as an optimisation
    rather than for some functional reason - is that correct?

    > - The wording "skip swap entries" is confusing, because we're not skipping all
    > swap entries - we handle device private/exclusive pages before that.
    >
    > - The skip behavior is enabled as long as a zap_details pointer is passed in.
    > That's very hard to figure out for a new zap caller, because it's unclear why
    > we should skip swap entries when zap_details is specified.
    >
    > - On modern systems, especially in performance-critical use cases, swap
    > entries should be rare, so I doubt the usefulness of this optimization,
    > since it is on a slow path anyway.
    >
    > - It is not aligned with what we do with huge pmd swap entries, where in
    > zap_huge_pmd() we'll do the accounting unconditionally.
    >
    > This patch drops that trick, so we handle swap ptes coherently. Meanwhile we
    > apply the same mapping check to migration entries too.

    I agree, and I'm not convinced the current handling is very good - if we
    skip zapping a migration entry then the page mapping might get restored when
    the migration entry is removed.

    In practice I don't think that is a problem, as the migration entry's target page
    will be locked, and if I'm understanding things correctly callers of
    unmap_mapping_*() need to have the page(s) locked anyway if they want to be sure
    the page is unmapped. But removing the migration entries seems to better match
    the intent, and I can't think of a reason why they should be skipped.
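
    For anyone reading along without the series applied, my understanding is that the
    zap_skip_check_mapping() helper used by the new hunk below looks roughly like the
    sketch here. This is paraphrased from my reading of the baseline tree, so the
    exact zap_details field name (zap_mapping here) may differ:

    static inline bool
    zap_skip_check_mapping(struct zap_details *details, struct page *page)
    {
            /* No details or no page: nothing to compare against, don't skip. */
            if (!details || !page)
                    return false;

            /*
             * Skip this page only when a specific mapping is being zapped and
             * the page doesn't belong to it (e.g. keep private COWed copies).
             */
            return details->zap_mapping &&
                   (details->zap_mapping != page_rmapping(page));
    }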

    > Signed-off-by: Peter Xu <peterx@redhat.com>
    > ---
    > mm/memory.c | 6 ++----
    > 1 file changed, 2 insertions(+), 4 deletions(-)
    >
    > diff --git a/mm/memory.c b/mm/memory.c
    > index 8f1de811a1dc..e454f3c6aeb9 100644
    > --- a/mm/memory.c
    > +++ b/mm/memory.c
    > @@ -1382,16 +1382,14 @@ static unsigned long zap_pte_range(struct mmu_gather *tlb,
    > continue;
    > }
    >
    > - /* If details->check_mapping, we leave swap entries. */
    > - if (unlikely(details))
    > - continue;
    > -
    > if (!non_swap_entry(entry))
    > rss[MM_SWAPENTS]--;
    > else if (is_migration_entry(entry)) {
    > struct page *page;
    >
    > page = pfn_swap_entry_to_page(entry);
    > + if (unlikely(zap_skip_check_mapping(details, page)))
    > + continue;
    > rss[mm_counter(page)]--;
    > }
    > if (unlikely(!free_swap_and_cache(entry)))
    >
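
    As a side note on the fourth point above (alignment with huge pmd swap entries):
    the migration-entry case in zap_huge_pmd() already does the counter update
    unconditionally, with no zap_details check. Roughly, paraphrased from a 5.15-era
    mm/huge_memory.c, so minor details may differ:

    /* Non-present huge pmd, i.e. a pmd migration entry */
    if (thp_migration_supported()) {
            swp_entry_t entry;

            VM_BUG_ON(!is_pmd_migration_entry(orig_pmd));
            entry = pmd_to_swp_entry(orig_pmd);
            /* Resolve the migration entry to its target page */
            page = pfn_swap_entry_to_page(entry);
            flush_needed = 0;
    } else
            WARN_ONCE(1, "Non present huge pmd without pmd migration enabled!");

    /* Accounting happens unconditionally, based on the resolved page */
    if (PageAnon(page)) {
            zap_deposited_table(tlb->mm, pmd);
            add_mm_counter(tlb->mm, MM_ANONPAGES, -HPAGE_PMD_NR);
    } else {
            if (arch_needs_pgtable_deposit())
                    zap_deposited_table(tlb->mm, pmd);
            add_mm_counter(tlb->mm, mm_counter_file(page), -HPAGE_PMD_NR);
    }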



