From:	WANG Rui <>
Date:	Tue, 1 Aug 2023 10:29:31 +0800
Subject:	Re: [PATCH] LoongArch: Fixup cmpxchg sematic for memory barrier
Hello,
On Tue, Aug 1, 2023 at 9:16 AM <guoren@kernel.org> wrote:
> diff --git a/arch/loongarch/include/asm/cmpxchg.h b/arch/loongarch/include/asm/cmpxchg.h
> index 979fde61bba8..6a05b92814b6 100644
> --- a/arch/loongarch/include/asm/cmpxchg.h
> +++ b/arch/loongarch/include/asm/cmpxchg.h
> @@ -102,8 +102,8 @@ __arch_xchg(volatile void *ptr, unsigned long x, int size)
>  	"	move	$t0, %z4	\n"	\
>  	"	" st "	$t0, %1		\n"	\
>  	"	beqz	$t0, 1b		\n"	\
> -	"2:				\n"	\
>  	__WEAK_LLSC_MB				\
> +	"2:				\n"	\
Thanks for the patch.
This would look pretty good if it weren't for the special memory-barrier semantics of LoongArch's LL and SC instructions.
The LL/SC memory barrier behavior of LoongArch:
* LL: <memory-barrier> + <load-exclusive>
* SC: <store-conditional> + <memory-barrier>
In addition, LoongArch's weak memory model allows load/load reordering for the same address.
So __WEAK_LLSC_MB [1] is there to prevent that load/load reordering on the compare-failure path, and no explicit barrier instruction is required after the SC; see the sketch below.
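To make the two paths explicit, here is a simplified, annotated sketch of the cmpxchg sequence with the barrier kept after the label, as in the current code. The ll.w/sc.w mnemonics, the LL/compare lines and the comments are mine (the real macro is parameterized over the load/store instructions), so take it as an illustration rather than a quote of the file:

	1:	ll.w	%0, %2		# <memory-barrier> + <load-exclusive>
		bne	%0, %z3, 2f	# compare failed: branch past the SC
		move	$t0, %z4
		sc.w	$t0, %1		# <store-conditional> + <memory-barrier>
		beqz	$t0, 1b		# reservation lost: retry from the LL
	2:
		__WEAK_LLSC_MB		# expands to a dbar; the fall-through
					# (success) path is already fully
					# ordered by the SC barrier, but the
					# bne path has executed only the LL,
					# so this is what keeps a later load
					# of the same address from being
					# reordered before it

If I read the hunk right, moving the label below the barrier makes the bne path jump past the dbar entirely, and that is the one path that still needs it.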
[1] https://lore.kernel.org/loongarch/20230516124536.535343-1-chenhuacai@loongson.cn/
Regards,
--
WANG Rui