Subject: Re: [QUESTION] about the maple tree and current status of mmap_lock scalability
On Wed, Dec 28, 2022 at 09:10:20AM -0800, Suren Baghdasaryan wrote:
> Hi Hyeonggon,
>
> On Wed, Dec 28, 2022 at 4:49 AM Hyeonggon Yoo <42.hyeyoo@gmail.com> wrote:
> >
> > Hello mm folks,
> >
> > I have a few questions about the current status of mmap_lock scalability.
> >
> > =============================================================
> > What is currently causing the kernel to use mmap_lock to protect the maple tree?
> > =============================================================
> >
> > I understand that the long-term goal is to remove the need for mmap_lock in readers
> > while traversing the maple tree, using techniques such as RCU or SPF.
> > What is the biggest obstacle preventing this from being achieved at this time?
>
> Maple tree has an RCU mode which does not need mmap_lock for
> traversal. Liam and I were testing it recently and Liam fixed a number
> of issues to enable it. It seems stable now and the fixes are
> incorporated into the "per-vma locks" patchset which I prepared in
> this branch: https://github.com/surenbaghdasaryan/linux/tree/per_vma_lock.

Thank you for the link. I didn't realize how far the discussion had progressed.

Let me check if I understand correctly:

To allow page faults on non-overlapping VMAs to proceed while writers are
performing VMA operations, per-VMA locking moves the reader side of the page
fault path from the mmap_lock to a per-VMA lock.

While the maple tree traversal itself is done without the mmap_lock, readers
must take the VMA lock in read mode within an RCU read-side section (or fall
back to retrying under the mmap_lock if that fails) to process the page fault.

This ensures that readers are not racing with writers for access to the same
VMA.
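
In other words, roughly like the sketch below (the helper names here are just
made up by me for illustration, not taken from the patchset):

/* Sketch of my understanding of the reader side of the fault path. */
struct vm_area_struct *lock_vma_for_fault(struct mm_struct *mm,
					  unsigned long addr)
{
	struct vm_area_struct *vma;

	rcu_read_lock();

	/* Maple tree lookup, no mmap_lock taken. */
	vma = find_vma_rcu(mm, addr);		/* hypothetical helper */

	/*
	 * Take the per-VMA lock in read mode; if a writer currently holds
	 * the VMA, give up so the caller retries under mmap_lock.
	 */
	if (vma && !vma_read_trylock(vma))	/* hypothetical helper */
		vma = NULL;

	rcu_read_unlock();
	return vma;	/* NULL => fall back to the mmap_lock path */
}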

Am I getting it right?

> I haven't posted this patchset upstream yet but it's pretty much ready
> to go. I'm planning to post it in early January.

Looking forward to that,
thank you for working on this.

--
Thanks,
Hyeonggon
