Subject: Re: allow preemption in check_task_state
On Mon, Feb 10, 2014 at 07:12:03PM +0100, Nicholas Mc Guire wrote:
> On Mon, 10 Feb 2014, Peter Zijlstra wrote:
>
> > On Mon, Feb 10, 2014 at 06:17:12PM +0100, Nicholas Mc Guire wrote:
> > > maybe I'm missing/misunderstanding something here, but
> > > pi_unlock -> arch_spin_unlock is a full mb()
> >
> > Nope, arch_spin_unlock() on x86 is a single add[wb] without LOCK prefix.
> >
> > The lock and unlock primitives are in general specified to have ACQUIRE
> > resp. RELEASE semantics.
> >
> > See Documentation/memory-barriers.txt for far too many head-hurting
> > details.
>
> I did check that - but from reading the code it seems to me to be using a
> lock prefix in the fast __add() path and an explicit add_smp() in the slow
> path (arch/x86/include/asm/spinlock.h:arch_spin_unlock). The __add from
> arch/x86/include/asm/cmpxchg.h does use a lock prefix, or am I
> misinterpreting this? The other archs, I believe, were all doing an
> explicit mb()/smp_mb() in arch_spin_unlock - will go check this again.
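
For reference, the unlock path in question (arch/x86/include/asm/spinlock.h
in kernels of that vintage) looks roughly like the sketch below. This is
reconstructed from memory rather than pasted from the tree, so treat the
ticket/paravirt helper names (add_smp(), __ticket_unlock_slowpath(), the
static key) as approximate:

static __always_inline void arch_spin_unlock(arch_spinlock_t *lock)
{
	if (TICKET_SLOWPATH_FLAG &&
	    static_key_false(&paravirt_ticketlocks_enabled)) {
		arch_spinlock_t prev;

		prev = *lock;
		/* slow (paravirt) path: add_smp() is a LOCKed add, a full barrier */
		add_smp(&lock->tickets.head, TICKET_LOCK_INC);

		if (unlikely(lock->tickets.tail & TICKET_SLOWPATH_FLAG))
			__ticket_unlock_slowpath(lock, prev);
	} else {
		/* fast path: plain add; the prefix argument is normally empty */
		__add(&lock->tickets.head, TICKET_LOCK_INC, UNLOCK_LOCK_PREFIX);
	}
}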

It uses UNLOCK_LOCK_PREFIX which, if you look carefully, is normally
always "". Only some 'broken' i386 chips require a LOCK there.

