Subject: Re: [PATCH v7 1/4] spinlock: A new lockref structure for lockless update of refcount
On Fri, 2013-08-30 at 12:06 +1000, Michael Neuling wrote:

> powerpc patch below. I'm using arch_spin_is_locked() to implement
> arch_spin_value_unlocked().

>
> +static __always_inline int arch_spin_value_unlocked(arch_spinlock_t lock)
> +{
> + return !arch_spin_is_locked(&lock);
> +}
> +

Arguably, it should be done the other way around :-)

The arch_spin_value_unlocked() semantics are to operate on an already-read
copy of the lock value, while arch_spin_is_locked() has ACCESS_ONCE
semantics on *top* of that.
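
Something like the untested sketch below is what I have in mind; the
slock field name and doing ACCESS_ONCE on the whole arch_spinlock_t are
just assumptions here, the point is only where the read happens:

	static __always_inline int arch_spin_value_unlocked(arch_spinlock_t lock)
	{
		/* Caller has already read the lock value; just test the copy */
		return lock.slock == 0;
	}

	static inline int arch_spin_is_locked(arch_spinlock_t *lock)
	{
		/* The ACCESS_ONCE read happens here, on top of the value test */
		return !arch_spin_value_unlocked(ACCESS_ONCE(*lock));
	}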

Or we can keep both completely separate like Linus does on x86.

Cheers,
Ben.



