Subject: Re: [PATCH v4 2/3] mm/rmap: integrate PMD-mapped folio splitting into pagewalk loop

Hey Zi and Jason,

Thanks a lot for reaching out!

On Thu, May 9, 2024 at 12:35 AM Jason Gunthorpe <jgg@nvidia.com> wrote:
>
> On Wed, May 08, 2024 at 12:22:08PM -0400, Zi Yan wrote:
> > On 8 May 2024, at 11:52, Jason Gunthorpe wrote:
> >
> > > On Wed, May 08, 2024 at 10:56:34AM -0400, Zi Yan wrote:
> > >
> > >> Lance is improving try_to_unmap_one() to support unmapping PMD THP as a whole,
> > >> so he moves split_huge_pmd_address() inside while (page_vma_mapped_walk(&pvmw))
> > >> and after mmu_notifier_invalidate_range_start() as split_huge_pmd_locked()
> > >> and does not include the mmu notifier ops inside split_huge_pmd_address().

IMO, it might be reasonable to exclude the mmu notifier ops from
split_huge_pmd_locked(). IIUC, before acquiring the PTL, callers need to tear
down the secondary mappings via mmu_notifier_invalidate_range_start(), with
the range aligned to HPAGE_PMD_SIZE.
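
To make this concrete, here is a rough, untested sketch of the caller
pattern I have in mind (assuming the split_huge_pmd_locked() signature
from this series); the notifier range covers the whole PMD before the
PTL is taken:

	struct mmu_notifier_range range;
	unsigned long haddr = address & HPAGE_PMD_MASK;

	/*
	 * Notify secondaries about the entire PMD range up front; if
	 * vma_address() already returned a PMD-aligned address, the
	 * masking above is a no-op.
	 */
	mmu_notifier_range_init(&range, MMU_NOTIFY_CLEAR, 0, vma->vm_mm,
				haddr, haddr + HPAGE_PMD_SIZE);
	mmu_notifier_invalidate_range_start(&range);

	while (page_vma_mapped_walk(&pvmw)) {
		/* The PTL is held here; split without re-notifying. */
		if (pvmw.pmd && !pvmw.pte)
			split_huge_pmd_locked(vma, pvmw.address, pvmw.pmd,
					      false, folio);
		/* ... unmap the now PTE-mapped entries as usual ... */
	}

	mmu_notifier_invalidate_range_end(&range);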

> > >> I wonder if that could cause issues, since the mmu_notifier_invalidate_range_start()
> > >> before the while loop only has range of the original address and
> > >> split huge pmd can affect the entire PMD address range and these two ranges
> > >> might not be the same.

As Baolin mentioned [1] before:
"For a PMD mapped THP, I think the address is already THP size alignment
returned from vma_address(&folio->page, vma)."

Given this, perhaps we don't need to re-align the input address once the
pagewalk has started? IMO, if any corner case does arise, we should catch
it with a VM_WARN_ON_ONCE() in split_huge_pmd_locked().
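
For example, a one-line sanity check at the top of split_huge_pmd_locked()
(just a sketch) would flag any caller that passes a misaligned address:

	/* vma_address() should hand us a PMD-aligned address for PMD THPs. */
	VM_WARN_ON_ONCE(!IS_ALIGNED(address, HPAGE_PMD_SIZE));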

Zi, what do you think?

[1] https://lore.kernel.org/linux-mm/cc9fd23f-7d87-48a7-a737-acbea8e95fb7@linux.alibaba.com/

> > >
> > > That does not sound entirely good..
> > >
> > > I suppose it depends on what split does, if the MM page table has the
> > > same translation before and after split then perhaps no invalidation
> > > is even necessary.
> >
> > Before split, it is a PMD mapping to a PMD THP (order-9). After split,
> > they are 512 PTEs mapping to the same THP. Unless the secondary TLB
> > does not support PMD mapping and uses 512 PTEs instead, it seems to
> > be an issue from my understanding.
>
> I may not recall fully, but I don't think any secondaries are
> so sensitive to the PMD/PTE distinction.. At least the ones using
> hmm_range_fault() are not.
>
> When the PTE eventually comes up for invalidation then the secondary
> should wipe out any granule they may have captured.
>
> Though, perhaps KVM should be checked carefully.
>
> > In terms of two mmu_notifier ranges, first is in the split_huge_pmd_address()[1]
> > and second is in try_to_unmap_one()[2]. When try_to_unmap_one() is unmapping
> > a subpage in the middle of a PMD THP, the former notifies about the PMD range
> > change due to one PMD split into 512 PTEs and the latter only needs to notify
> > about the invalidation of the unmapped PTE. I do not think the latter can
> > replace the former, although a potential optimization can be that the latter
> > can be removed as it is included in the range of the former.
>
> I think we probably don't need both, either size might be fine, but
> the larger size is definitely fine..
>
> > Regarding Lance's current code change, is it OK to change mmu_notifier range
> > after mmu_notifier_invalidate_range_start()?
>
> No, it cannot be changed during a start/stop transaction.

Understood; I'll keep that in mind - thanks!
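
So, if I read that correctly, something like the following would be
broken (a sketch of the anti-pattern, not real code):

	mmu_notifier_invalidate_range_start(&range);
	/*
	 * NOT allowed: the secondaries were already notified with the
	 * original range; widening it mid-transaction leaves them out
	 * of sync with what _end() will report.
	 */
	range.start = address & HPAGE_PMD_MASK;
	range.end = range.start + HPAGE_PMD_SIZE;
	mmu_notifier_invalidate_range_end(&range);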

Thanks again for clarifying!
Lance

>
> Jason
>
>
