Date: 2013-03-12
Subject: Re: [PATCH] atomic: improve atomic_inc_unless_negative/atomic_dec_unless_positive
From: Ming Lei
On Tue, Mar 12, 2013 at 11:39 AM, Paul E. McKenney
<paulmck@linux.vnet.ibm.com> wrote:
>
> Atomic operations that return a value are required to act as full memory
> barriers. This means that code relying on ordering provided by these
> atomic operations must also do ordering, either by using an explicit
> memory barrier or by relying on guarantees from atomic operations.
>
> For example:
>
>     CPU 0                                     CPU 1
>
>     X = 1;                                    r1 = Z;
>     if (atomic_inc_unless_negative(&Y))       smp_mb();
>             do_something();
>     Z = 1;                                    r2 = X;
>
> Assuming X and Z are initially zero, if r1==1, we are guaranteed
> that r2==1. However, CPU 1 needs its smp_mb() in order to pair with
> the barrier implicit in atomic_inc_unless_negative().
>
> Make sense?

Yes, it does, and thanks for the explanation.
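
Just to restate your example in plain C for my own understanding (a rough
sketch only: cpu0(), cpu1() and the do_something() stub are illustrative
names I made up, not code from the patch):

        #include <linux/atomic.h>

        static int X, Z;
        static atomic_t Y = ATOMIC_INIT(0);

        static void do_something(void) { }      /* stands in for the real work */

        /* CPU 0 */
        static void cpu0(void)
        {
                X = 1;
                /*
                 * atomic_inc_unless_negative() returns a value, so it is
                 * required to act as a full memory barrier.
                 */
                if (atomic_inc_unless_negative(&Y))
                        do_something();
                Z = 1;
        }

        /* CPU 1 */
        static int cpu1(void)
        {
                int r1, r2;

                r1 = Z;
                smp_mb();       /* pairs with the barrier implied by the atomic op */
                r2 = X;

                /* with X == Z == 0 initially, r1 == 1 implies r2 == 1 */
                return r1 && !r2;       /* should never be true */
        }

So the guarantee holds because both sides order their accesses: the full
barrier implied by the value-returning atomic on CPU 0 and the explicit
smp_mb() on CPU 1.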

But it looks like the above example is not what Frederic described:

"the above atomic_read() might return -1 because there is no
guarantee it's seeing the recent update on the remote CPU."

And I am not sure that adding an smp_mb() around the atomic_read()
can guarantee that either.
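
The way I understand it, even something like the following on the reading
CPU (my own sketch of the situation, not code from the patch or from
Frederic's mail):

        atomic_t v = ATOMIC_INIT(-1);   /* another CPU may have just made it non-negative */
        int val;

        smp_mb();
        val = atomic_read(&v);          /* may still return the old value -1 */
        smp_mb();

does not force the read to see a store that the remote CPU has just made:
the barriers only order this CPU's own accesses against each other, they
do not make the remote update visible any sooner.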

Andrew, please ignore the patch, thanks.

Thanks,
--
Ming Lei

