Subject: Re: [PATCH v5 tip/core/locking 6/7] locking: Add an smp_mb__after_unlock_lock() for UNLOCK+LOCK barrier
On Tue, Dec 10, 2013 at 01:37:26PM +0100, Peter Zijlstra wrote:
> On Mon, Dec 09, 2013 at 05:28:02PM -0800, Paul E. McKenney wrote:
> > diff --git a/arch/powerpc/include/asm/barrier.h b/arch/powerpc/include/asm/barrier.h
> > index f89da808ce31..abf645799991 100644
> > --- a/arch/powerpc/include/asm/barrier.h
> > +++ b/arch/powerpc/include/asm/barrier.h
> > @@ -84,4 +84,6 @@ do { \
> > ___p1; \
> > })
> >
> > +#define smp_mb__after_unlock_lock() do { } while (0)
> > +
> > #endif /* _ASM_POWERPC_BARRIER_H */
>
> Didn't Ben say that ppc actually violates the current unlock+lock
> assumption, and that this barrier therefore wouldn't actually be a
> no-op on ppc?

Or ppc could fix its lock primitives to preserve the unlock+lock
assumption, and avoid subtle breakage across half the kernel.
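
For context, the primitive under discussion lets a caller promote an
UNLOCK followed by a LOCK into a full memory barrier. A minimal sketch
of the intended usage pattern, assuming the usual kernel spinlock API;
the lock and variable names are illustrative and not taken from the
patch itself:

	static DEFINE_SPINLOCK(lock_a);
	static DEFINE_SPINLOCK(lock_b);
	static int x, y;

	void unlock_lock_example(void)
	{
		int r1;

		spin_lock(&lock_a);
		ACCESS_ONCE(x) = 1;
		spin_unlock(&lock_a);		/* RELEASE: orders prior accesses only */
		spin_lock(&lock_b);		/* ACQUIRE: orders later accesses only */
		smp_mb__after_unlock_lock();	/* promote the UNLOCK+LOCK pair to a full barrier */
		r1 = ACCESS_ONCE(y);
		spin_unlock(&lock_b);
	}

On an architecture whose UNLOCK+LOCK sequence already acts as a full
barrier, the no-op definition in the patch above suffices; if ppc's
lwsync-based lock primitives do not provide that guarantee, its
definition would presumably have to be smp_mb() instead.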

- Josh Triplett

