Subject: Re: [PATCH v3 3/5] MCS Lock: Barrier corrections
On Thu, Nov 07, 2013 at 04:50:23AM -0800, Michel Lespinasse wrote:
> On Thu, Nov 7, 2013 at 4:06 AM, Linus Torvalds
> <torvalds@linux-foundation.org> wrote:
> >
> > On Nov 7, 2013 6:55 PM, "Michel Lespinasse" <walken@google.com> wrote:
> >>
> >> Rather than writing arch-specific locking code, would you agree to
> >> introduce acquire and release memory operations ?
> >
> > Yes, that's probably the right thing to do. What ops do we need? Store with
> > release, cmpxchg and load with acquire? Anything else?
>
> Depends on what lock types we want to implement on top; for MCS we would need:
> - xchg acquire (common case) and load acquire (for spinning on our
> locker's wait word)
> - cmpxchg release (when there is no next locker) and store release
> (when writing to the next locker's wait word)
>
> One downside of the proposal is that using a load acquire for spinning
> puts the memory barrier within the spin loop. So this model is very
> intuitive and does not add unnecessary barriers on x86, but it may
> place the barriers in a suboptimal place for architectures that need
> them.

OK, I will bite... Why is a barrier in the spinloop suboptimal?

Can't say that I have tried measuring it, but the barrier should not
normally result in interconnect traffic. Given that the barrier is
required anyway, it should not affect lock-acquisition latency.

So what am I missing here?
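
For reference, here is roughly how the four operations Michel lists would
sit in an MCS lock/unlock pair, with the load-acquire spin in question
marked. This is a minimal sketch using C11 atomics rather than the kernel's
own primitives; the names and the orderings on the queue-link accesses are
illustrative only, not the actual patch.

/*
 * Minimal MCS lock sketch using C11 atomics -- illustrative only,
 * not the kernel patch.  Each CPU spins on its own node's wait word.
 */
#include <stdatomic.h>
#include <stdbool.h>
#include <stddef.h>

struct mcs_node {
	struct mcs_node *_Atomic next;
	atomic_bool locked;		/* true while we must wait */
};

struct mcs_lock {
	struct mcs_node *_Atomic tail;
};

static void mcs_lock_acquire(struct mcs_lock *lock, struct mcs_node *node)
{
	struct mcs_node *prev;

	atomic_store_explicit(&node->next, NULL, memory_order_relaxed);
	atomic_store_explicit(&node->locked, true, memory_order_relaxed);

	/* xchg acquire: the common-case acquisition point. */
	prev = atomic_exchange_explicit(&lock->tail, node,
					memory_order_acquire);
	if (!prev)
		return;			/* lock was free, we own it */

	/* Link in behind the previous tail. */
	atomic_store_explicit(&prev->next, node, memory_order_release);

	/*
	 * load acquire for spinning on our own wait word -- the
	 * barrier-in-the-spin-loop being discussed above.
	 */
	while (atomic_load_explicit(&node->locked, memory_order_acquire))
		;			/* cpu_relax() in real code */
}

static void mcs_lock_release(struct mcs_lock *lock, struct mcs_node *node)
{
	struct mcs_node *next =
		atomic_load_explicit(&node->next, memory_order_acquire);

	if (!next) {
		struct mcs_node *expected = node;

		/* cmpxchg release: no next locker is queued. */
		if (atomic_compare_exchange_strong_explicit(
				&lock->tail, &expected, NULL,
				memory_order_release, memory_order_relaxed))
			return;

		/* A new locker raced with us; wait for it to link in. */
		while (!(next = atomic_load_explicit(&node->next,
						     memory_order_relaxed)))
			;		/* cpu_relax() in real code */
	}

	/* store release to the next locker's wait word. */
	atomic_store_explicit(&next->locked, false, memory_order_release);
}

The queue-link store in the lock path is shown as a release so the C11
version is self-contained; the kernel can lean on the address dependency
in the unlock path instead.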

Thanx, Paul
