Subject: Re: [PATCH 4/5] futex: Avoid taking hb lock if nothing to wakeup
On Sat, 23 Nov 2013, Thomas Gleixner wrote:
> On Fri, 22 Nov 2013, Davidlohr Bueso wrote:
> So with the atomic ops you are changing that to:
>
> CPU 0                                 CPU 1
>
> val = *futex;
> futex_wait(futex, val);
>
> spin_lock(&hb->lock);
>
> atomic_inc(&hb->waiters);
> uval = *futex;
>                                       *futex = newval;
>
> if (uval != val) {                    futex_wake();
>    spin_unlock(&hb->lock);            if (!atomic_read(&hb->waiters))
>    return;                               return;
> }
>                                       spin_lock(&hb->lock);
> plist_add(hb, self);
> spin_unlock(&hb->lock);
> schedule();                           wakeup_waiters(hb);
>                                       ...
>
> which restores the ordering guarantee, which the hash bucket lock
> provided so far.

Actually, that's not true by design; it just happens to work.

atomic_inc() on x86 is a "lock incl".

The LOCK prefix just guarantees that the cache line which is affected
by the INCL is locked. And it guarantees that locked operations
serialize all outstanding load and store operations.
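
For reference, the x86 implementation at the time was roughly the
following (a sketch of arch/x86/include/asm/atomic.h; LOCK_PREFIX
expands to "lock" on SMP builds):

	static inline void atomic_inc(atomic_t *v)
	{
		/* "lock incl": atomic RMW, and a full barrier on x86 */
		asm volatile(LOCK_PREFIX "incl %0"
			     : "+m" (v->counter));
	}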

But Documentation/atomic_ops.txt says about atomic_inc():

"One very important aspect of these two routines is that they DO NOT
require any explicit memory barriers. They need only perform the
atomic_t counter update in an SMP safe manner."

So while this happens to provide a full barrier on x86, the API does
not guarantee it.
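
Which means on architectures where atomic_inc() does not imply a
barrier, code that depends on the ordering has to ask for one
explicitly. A minimal sketch of the waiter side (not the actual
patch):

	atomic_inc(&hb->waiters);
	/*
	 * Order the waiters increment before the subsequent read of
	 * the futex value: a compiler barrier on x86, a full memory
	 * barrier on weakly ordered architectures.
	 */
	smp_mb__after_atomic_inc();
	uval = *futex;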

atomic_read() is a simple read.

This does not guarantee anything. And if you read
Documentation/atomic_ops.txt it's well documented:

"*** WARNING: atomic_read() and atomic_set() DO NOT IMPLY BARRIERS! ***"


So now your code melts down to:

    write(hb->waiters)    |   write(uaddr)
    mb                    |   read(hb->waiters)
    read(uaddr)           |
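
The hole is on the waker side: without a barrier between the uaddr
store and the hb->waiters read, the read can pass the store (even on
x86, which permits store-load reordering), so the waker can see a
zero waiter count while the waiter still sees the old futex value.
Making the pairing explicit would look something like this (a sketch
of the idea only, not necessarily the actual upstream fix; the uaddr
store happens in userspace before the futex_wake() syscall):

	/* waker side, on entry to futex_wake(): */
	smp_mb();	/* pairs with the barrier on the waiter side */
	if (!atomic_read(&hb->waiters))
		return;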

I fear you simply managed to make the window small enough that your
testing was no longer able to expose it.

Thanks,

tglx





