lkml.org 
[lkml]   [2021]   [Apr]   [29]   [last100]   RSS Feed
Views: [wrap][no wrap]   [headers]  [forward] 
 
Messages in this thread
/
Date
From
SubjectRe: [RFC PATCH 13/37] mm: implement speculative handling in __handle_mm_fault().
On Thu, Apr 29, 2021 at 05:12:34PM +0100, Matthew Wilcox wrote:
> On Wed, Apr 28, 2021 at 05:05:17PM -0700, Andy Lutomirski wrote:
> > On Wed, Apr 28, 2021 at 5:02 PM Michel Lespinasse <michel@lespinasse.org> wrote:
> > > Thanks Paul for confirming / clarifying this. BTW, it would be good to
> > > add this to the rcu header files, just so people have something to
> > > reference to when they depend on such behavior (like fast GUP
> > > currently does).
> >
> > Or, even better, fast GUP could add an explicit RCU read lock.
> >
> > >
> > > Going back to my patch. I don't need to protect against THP splitting
> > > here, as I'm only handling the small page case. So when
> > > MMU_GATHER_RCU_TABLE_FREE is enabled, I *think* I could get away with
> > > using only an rcu read lock, instead of disabling interrupts which
> > > implicitly creates the rcu read lock. I'm not sure which way to go -
> > > fast GUP always disables interrupts regardless of the
> > > MMU_GATHER_RCU_TABLE_FREE setting, and I think there is a case to be
> > > made for following the fast GUP stes rather than trying to be smarter.
> >
> > How about adding some little helpers:
> >
> > lockless_page_walk_begin();
> >
> > lockless_page_walk_end();
> >
> > these turn into RCU read locks if MMU_GATHER_RCU_TABLE_FREE and into
> > irqsave otherwise. And they're somewhat self-documenting.
>
> One of the worst things we can do while holding a spinlock is take a
> cache miss because we then delay for several thousand cycles to wait for
> the cache line. That gives every other CPU a really long opportunity
> to slam into the spinlock and things go downhill fast at that point.
> We've even seen patches to do things like read A, take lock L, then read
> A to avoid the cache miss while holding the lock.

I understand the effect your are describing, but I do not see how it
applies here - what cacheline are we likely to miss on when using
local_irq_disable() that we wouldn't touch if using rcu_read_lock() ?

> What sort of performance effect would it have to free page tables
> under RCU for all architectures? It's painful on s390 & powerpc because
> different tables share the same struct page, but I have to believe that's
> a solvable problem.

I agree using RCU to free page tables would be a good thing to try.
I am afraid of adding that to this patchset though, as it seems
somewhate unrelated and adds risk. IMO we are most likely to find
justification for pushing this if/when we try accessing remote mm's without
taking the mmap lock, since disabling IPIs clearly wouldn't work there.

--
Michel "walken" Lespinasse

\
 
 \ /
  Last update: 2021-04-29 21:19    [W:0.084 / U:1.032 seconds]
©2003-2020 Jasper Spaans|hosted at Digital Ocean and TransIP|Read the blog|Advertise on this site