From: Paul E. McKenney
Date: Tue, 10 Dec 2013
Subject: Re: [PATCH v5 tip/core/locking 6/7] locking: Add an smp_mb__after_unlock_lock() for UNLOCK+LOCK barrier
On Tue, Dec 10, 2013 at 09:45:08AM -0800, Josh Triplett wrote:
> On Tue, Dec 10, 2013 at 01:37:26PM +0100, Peter Zijlstra wrote:
> > On Mon, Dec 09, 2013 at 05:28:02PM -0800, Paul E. McKenney wrote:
> > > diff --git a/arch/powerpc/include/asm/barrier.h b/arch/powerpc/include/asm/barrier.h
> > > index f89da808ce31..abf645799991 100644
> > > --- a/arch/powerpc/include/asm/barrier.h
> > > +++ b/arch/powerpc/include/asm/barrier.h
> > > @@ -84,4 +84,6 @@ do { \
> > > ___p1; \
> > > })
> > >
> > > +#define smp_mb__after_unlock_lock() do { } while (0)
> > > +
> > > #endif /* _ASM_POWERPC_BARRIER_H */
> >
> > Didn't Ben say that ppc actually violates the current unlock+lock
> > assumption, and that this barrier therefore wouldn't actually be a
> > no-op on ppc?
>
> Or, ppc could fix its lock primitives to preserve the unlock+lock
> assumption, and avoid subtle breakage across half the kernel.
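
To make the assumption concrete, here is a minimal kernel-style sketch
(the lock, variables, and functions are invented for illustration; this
is not code from the series): CPU 0 releases a lock that CPU 1 later
acquires, and the new primitive is what is intended to promote that
UNLOCK+LOCK pair to a full memory barrier.

	#include <linux/spinlock.h>
	#include <linux/compiler.h>
	#include <linux/bug.h>

	/* Illustrative lock and variables, invented for this sketch. */
	static DEFINE_SPINLOCK(demo_lock);
	static int x, y;

	/* CPU 0: publish x, then release the lock. */
	static void cpu0(void)
	{
		spin_lock(&demo_lock);
		ACCESS_ONCE(x) = 1;
		spin_unlock(&demo_lock);	/* RELEASE */
	}

	/*
	 * CPU 1, some time later: the RELEASE above combined with the
	 * ACQUIRE below, followed by smp_mb__after_unlock_lock(), is
	 * intended to act as a full memory barrier, so the store to x
	 * and the store to y become globally ordered.
	 */
	static void cpu1(void)
	{
		spin_lock(&demo_lock);		/* ACQUIRE */
		smp_mb__after_unlock_lock();
		ACCESS_ONCE(y) = 1;
		spin_unlock(&demo_lock);
	}

	/* CPU 2, not holding the lock: seeing y == 1 implies seeing x == 1. */
	static void cpu2(void)
	{
		int r_y, r_x;

		r_y = ACCESS_ONCE(y);
		smp_rmb();
		r_x = ACCESS_ONCE(x);
		BUG_ON(r_y == 1 && r_x == 0);	/* forbidden outcome */
	}

On architectures whose lock and unlock operations already provide that
strength, the macro can be defined as a no-op, as in the powerpc hunk
quoted above; the question in this subthread is whether powerpc's
existing primitives actually qualify.
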

Indeed. However, another motivation for this change was the difficulty
in proving that x86 really provided the equivalent of a full barrier
for the MCS lock handoff case:

http://www.spinics.net/lists/linux-mm/msg65653.html

Thanx, Paul
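
The MCS-lock handoff mentioned above is the tricky case: when a queued
lock is passed directly from the old owner to the next waiter, the
release side is a single store to the waiter's queue node and the
acquire side is a spin on that same node, so the RELEASE+ACQUIRE pair
does not obviously add up to a full barrier. Below is a purely
illustrative, user-space C11 sketch of an MCS-style lock (not the
kernel's implementation; all names are invented) showing where that
handoff happens.

	#include <stdatomic.h>
	#include <stdbool.h>
	#include <stddef.h>

	struct mcs_node {
		_Atomic(struct mcs_node *) next;
		atomic_bool locked;	/* true while this waiter must spin */
	};

	struct mcs_lock {
		_Atomic(struct mcs_node *) tail;	/* NULL when the lock is free */
	};

	static void mcs_acquire(struct mcs_lock *lock, struct mcs_node *node)
	{
		struct mcs_node *prev;

		atomic_store_explicit(&node->next, NULL, memory_order_relaxed);
		atomic_store_explicit(&node->locked, true, memory_order_relaxed);

		/* Join the tail of the queue. */
		prev = atomic_exchange_explicit(&lock->tail, node,
						memory_order_acq_rel);
		if (!prev)
			return;		/* Queue was empty: we own the lock. */

		/* Tell our predecessor where to find us ... */
		atomic_store_explicit(&prev->next, node, memory_order_release);

		/* ... and wait for the handoff: just an acquire-load spin. */
		while (atomic_load_explicit(&node->locked, memory_order_acquire))
			;
	}

	static void mcs_release(struct mcs_lock *lock, struct mcs_node *node)
	{
		struct mcs_node *next =
			atomic_load_explicit(&node->next, memory_order_acquire);

		if (!next) {
			/* No visible successor: try to mark the queue empty. */
			struct mcs_node *expected = node;

			if (atomic_compare_exchange_strong_explicit(
					&lock->tail, &expected, NULL,
					memory_order_release,
					memory_order_relaxed))
				return;
			/* A successor is mid-enqueue; wait for it to appear. */
			while (!(next = atomic_load_explicit(&node->next,
							     memory_order_acquire)))
				;
		}

		/* The handoff: a single release store to the successor's node. */
		atomic_store_explicit(&next->locked, false, memory_order_release);
	}

The interesting pair is the release store at the end of mcs_release()
and the acquire spin in mcs_acquire(): whether that combination amounts
to a full barrier on a given architecture is exactly what the linked
thread wrestles with for x86.
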


