Subject: Re: Adding plain accesses and detecting data races in the LKMM
From: Akira Yokosawa <>
Date: Sat, 20 Apr 2019 00:06:58 +0900
Hi Paul,
Please find inline comments below.
On Fri, 19 Apr 2019 05:47:20 -0700, Paul E. McKenney wrote:
> On Fri, Apr 19, 2019 at 02:53:02AM +0200, Andrea Parri wrote:
>>> Are you saying that on x86, atomic_inc() acts as a full memory barrier
>>> but not as a compiler barrier, and vice versa for
>>> smp_mb__after_atomic()? Or that neither atomic_inc() nor
>>> smp_mb__after_atomic() implements a full memory barrier?
>>
>> I'd say the former; AFAICT, these boil down to:
>>
>> https://elixir.bootlin.com/linux/v5.1-rc5/source/arch/x86/include/asm/atomic.h#L95
>> https://elixir.bootlin.com/linux/v5.1-rc5/source/arch/x86/include/asm/barrier.h#L84
>
> OK, how about the following?
>
> 							Thanx, Paul
>
> ------------------------------------------------------------------------
>
> commit 19d166dadc4e1bba4b248fb46d32ca4f2d10896b
> Author: Paul E. McKenney <paulmck@linux.ibm.com>
> Date:   Fri Apr 19 05:20:30 2019 -0700
>
>     tools/memory-model: Make smp_mb__{before,after}_atomic() match x86
>
>     Read-modify-write atomic operations that do not return values need not
>     provide any ordering guarantees, and this means that both the compiler
>     and the CPU are free to reorder accesses across things like atomic_inc()
>     and atomic_dec(). The stronger systems such as x86 allow the compiler
>     to do the reordering, but prevent the CPU from so doing, and these
>     systems implement smp_mb__{before,after}_atomic() as compiler barriers.
>     The weaker systems such as Power allow both the compiler and the CPU
>     to reorder accesses across things like atomic_inc() and atomic_dec(),
>     and implement smp_mb__{before,after}_atomic() as full memory barriers.
>
>     This means that smp_mb__before_atomic() only orders the atomic operation
>     itself with accesses preceding the smp_mb__before_atomic(), and does
>     not necessarily provide any ordering whatsoever against accesses
>     folowing the atomic operation. Similarly, smp_mb__after_atomic()
s/folowing/following/
>     only orders the atomic operation itself with accesses following the
>     smp_mb__after_atomic(), and does not necessarily provide any ordering
>     whatsoever against accesses preceding the atomic operation. Full ordering
>     therefore requires both an smp_mb__before_atomic() before the atomic
>     operation and an smp_mb__after_atomic() after the atomic operation.
>
>     Therefore, linux-kernel.cat's current model of Before-atomic
>     and After-atomic is too strong, as it guarantees ordering of
>     accesses on the other side of the atomic operation from the
>     smp_mb__{before,after}_atomic(). This commit therefore weakens
>     the guarantee to match the semantics called out above.
>
>     Reported-by: Andrea Parri <andrea.parri@amarulasolutions.com>
>     Suggested-by: Alan Stern <stern@rowland.harvard.edu>
>     Signed-off-by: Paul E. McKenney <paulmck@linux.ibm.com>
>
> diff --git a/Documentation/memory-barriers.txt b/Documentation/memory-barriers.txt
> index 169d938c0b53..e5b97c3e8e39 100644
> --- a/Documentation/memory-barriers.txt
> +++ b/Documentation/memory-barriers.txt
> @@ -1888,7 +1888,37 @@ There are some more advanced barrier functions:
>  	atomic_dec(&obj->ref_count);
> 
>       This makes sure that the death mark on the object is perceived to be set
> -     *before* the reference counter is decremented.
> +     *before* the reference counter is decremented. However, please note
> +     that smp_mb__before_atomic()'s ordering guarantee does not necessarily
> +     extend beyond the atomic operation. For example:
> +
> +	obj->dead = 1;
> +	smp_mb__before_atomic();
> +	atomic_dec(&obj->ref_count);
> +	r1 = a;
> +
> +     Here the store to obj->dead is not guaranteed to be ordered with
> +     with the load from a. This reordering can happen on x86 as follows:
s/with//
And I beg you to avoid using the single-letter variable "a". It's confusing.
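For example, even something like the following (just a sketch of mine; "other_var" is an arbitrary placeholder name, not something from the patch) would be easier to follow:

	obj->dead = 1;
	smp_mb__before_atomic();
	atomic_dec(&obj->ref_count);
	r1 = other_var;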
> +     (1) The compiler can reorder the load from a to precede the
> +     atomic_dec(), (2) Because x86 smp_mb__before_atomic() is only a
> +     compiler barrier, the CPU can reorder the preceding store to
> +     obj->dead with the later load from a.
> +
> +     This could be avoided by using READ_ONCE(), which would prevent the
> +     compiler from reordering due to both atomic_dec() and READ_ONCE()
> +     being volatile accesses, and is usually preferable for loads from
> +     shared variables. However, weakly ordered CPUs would still be
> +     free to reorder the atomic_dec() with the load from a, so a more
> +     readable option is to also use smp_mb__after_atomic() as follows:
The point here is not just "readability", but also the portability of the code, isn't it?
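To illustrate the portability aspect, here is an SB-shaped litmus test of my own; it is only a sketch and not part of the patch, and the names ("dead", "x", "ref", the test name) are arbitrary, with "x" playing the role of "a" and "ref" of obj->ref_count. Without an smp_mb__after_atomic() on P0, weakly ordered CPUs, and the model as proposed, permit the cyclic outcome:

	C sb-mb-before-atomic

	(*
	 * Sketch: is P0's store to *dead ordered against its later load
	 * from *x when only smp_mb__before_atomic() is used?
	 *)

	{}

	P0(int *dead, int *x, atomic_t *ref)
	{
		int r1;

		WRITE_ONCE(*dead, 1);
		smp_mb__before_atomic();
		atomic_dec(ref);
		r1 = READ_ONCE(*x);
	}

	P1(int *dead, int *x)
	{
		int r2;

		WRITE_ONCE(*x, 1);
		smp_mb();
		r2 = READ_ONCE(*dead);
	}

	exists (0:r1=0 /\ 1:r2=0)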
Thanks, Akira
> +
> +	WRITE_ONCE(obj->dead, 1);
> +	smp_mb__before_atomic();
> +	atomic_dec(&obj->ref_count);
> +	smp_mb__after_atomic();
> +	r1 = READ_ONCE(a);
> +
> +     This orders all three accesses against each other, and also makes
> +     the intent quite clear.
> 
>      See Documentation/atomic_{t,bitops}.txt for more information.
> 
> diff --git a/tools/memory-model/linux-kernel.cat b/tools/memory-model/linux-kernel.cat
> index 8dcb37835b61..b6866f93abb8 100644
> --- a/tools/memory-model/linux-kernel.cat
> +++ b/tools/memory-model/linux-kernel.cat
> @@ -28,8 +28,8 @@ include "lock.cat"
>  let rmb = [R \ Noreturn] ; fencerel(Rmb) ; [R \ Noreturn]
>  let wmb = [W] ; fencerel(Wmb) ; [W]
>  let mb = ([M] ; fencerel(Mb) ; [M]) |
> -	([M] ; fencerel(Before-atomic) ; [RMW] ; po? ; [M]) |
> -	([M] ; po? ; [RMW] ; fencerel(After-atomic) ; [M]) |
> +	([M] ; fencerel(Before-atomic) ; [RMW]) |
> +	([RMW] ; fencerel(After-atomic) ; [M]) |
> 	([M] ; po? ; [LKW] ; fencerel(After-spinlock) ; [M]) |
> 	([M] ; po ; [UL] ; (co | po) ; [LKW] ;
> 		fencerel(After-unlock-lock) ; [M])
> 
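As a side note, the effect of the linux-kernel.cat hunk can be checked by running a test like the one I sketched above through herd7 from within tools/memory-model (the file name here is just a placeholder for wherever the test is saved):

	herd7 -conf linux-kernel.cfg sb-mb-before-atomic.litmus

With the current (stronger) Before-atomic definition the "exists" clause is never satisfied; with the weakened definition it can be satisfied when the smp_mb__after_atomic() is omitted on P0, which is exactly the situation the new memory-barriers.txt example warns about.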