Date: Mon, 25 Nov 2013
From: Thomas Gleixner <tglx@linutronix.de>
Subject: Re: [PATCH 4/5] futex: Avoid taking hb lock if nothing to wakeup
On Mon, 25 Nov 2013, Peter Zijlstra wrote:
> On Mon, Nov 25, 2013 at 05:23:51PM +0100, Thomas Gleixner wrote:
> > On Sat, 23 Nov 2013, Linus Torvalds wrote:
> >
> > > On Sat, Nov 23, 2013 at 5:16 AM, Thomas Gleixner <tglx@linutronix.de> wrote:
> > > >
> > > > Now the question is why we queue the waiter _AFTER_ reading the user
> > > > space value. The comment in the code is pretty nonsensical:
> > > >
> > > > * On the other hand, we insert q and release the hash-bucket only
> > > > * after testing *uaddr. This guarantees that futex_wait() will NOT
> > > > * absorb a wakeup if *uaddr does not match the desired values
> > > > * while the syscall executes.
> > > >
> > > > There is no reason why we cannot queue _BEFORE_ reading the user space
> > > > value. We just have to dequeue in all the error handling cases, but
> > > > for the fast path it does not matter at all.
> > > >
> > > >     CPU 0                               CPU 1
> > > >
> > > >     val = *futex;
> > > >     futex_wait(futex, val);
> > > >
> > > >     spin_lock(&hb->lock);
> > > >
> > > >     plist_add(hb, self);
> > > >     smp_wmb();
> > > >
> > > >     uval = *futex;
> > > >                                         *futex = newval;
> > > >                                         futex_wake();
> > > >
> > > >                                         smp_rmb();
> > > >                                         if (plist_empty(hb))
> > > >                                                 return;
> > > >     ...
> > >
> > > This would seem to be a nicer approach indeed, without needing the
> > > extra atomics.
> >
> > I went through the issue with Peter and he noticed that we need
> > smp_mb() in both places. That's what we have right now with the
> > spin_lock() and it is required as we need to guarantee that:
> >
> > The waiter observes the change to the uaddr value after it added
> > itself to the plist.
> >
> > The waker observes a non-empty plist if the change to uaddr was
> > made after the waiter checked the value.
> >
> >
> >     write(plist)          |     write(futex_uaddr)
> >     mb()                  |     mb()
> >     read(futex_uaddr)     |     read(plist)
> >
> > The spin_lock() mb() on the waiter side does not help here because
> > it happens before the write(plist) and not after it.
>
> Ah, note that spin_lock() is only a smp_mb() on x86; in general it's
> an ACQUIRE barrier, which is weaker than a full mb() and will not
> suffice in this case even if it were in the right place.
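
To make the pairing concrete, here is a minimal sketch of the waiter
side (illustrative only: futex_wait_sketch is a made-up name, the
plain *uaddr read stands in for the proper user space access helper,
and error handling is omitted):

	static int futex_wait_sketch(u32 *uaddr, u32 val,
				     struct futex_hash_bucket *hb,
				     struct futex_q *q)
	{
		spin_lock(&hb->lock);
		plist_add(&q->list, &hb->chain);  /* queue first ... */
		smp_mb();	/* ... then a full barrier: orders the
				   plist store before the uaddr load
				   and pairs with the waker's smp_mb() */
		if (*uaddr != val) {
			plist_del(&q->list, &hb->chain);
			spin_unlock(&hb->lock);
			return -EWOULDBLOCK;	/* value already changed */
		}
		spin_unlock(&hb->lock);
		/* ... sleep until futex_wake() removes us ... */
		return 0;
	}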

So now the question is whether this lockless empty check optimization,
which seems to be quite nice on x86 for a particular workload, will
have any negative side effects on other architectures.

If the smp_mb() is heavyweight, then it will hurt massively in the
case where the hash bucket is not empty, because we pay the price of
the smp_mb() for no gain.
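
For completeness, the matching waker side sketch (again illustrative;
futex_wake_sketch is a made-up name, and user space is assumed to have
stored the new value to *uaddr before the call):

	static int futex_wake_sketch(struct futex_hash_bucket *hb)
	{
		smp_mb();	/* pairs with the waiter's barrier:
				   orders the earlier *uaddr store
				   before the plist load below */
		if (plist_head_empty(&hb->chain))
			return 0;	/* lockless exit, hb->lock not taken */

		/*
		 * !empty case: the smp_mb() above was pure overhead
		 * on top of the spin_lock() we take anyway.
		 */
		spin_lock(&hb->lock);
		/* ... walk hb->chain and wake matching waiters ... */
		spin_unlock(&hb->lock);
		return 1;
	}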

In that context it would also be helpful to measure the overhead on
x86 for the !empty case.

Thanks,

tglx

