Subject: Re: [PATCH] LoongArch: Fixup cmpxchg sematic for memory barrier
Hello,

On Tue, Aug 1, 2023 at 9:16 AM <guoren@kernel.org> wrote:
> diff --git a/arch/loongarch/include/asm/cmpxchg.h b/arch/loongarch/include/asm/cmpxchg.h
> index 979fde61bba8..6a05b92814b6 100644
> --- a/arch/loongarch/include/asm/cmpxchg.h
> +++ b/arch/loongarch/include/asm/cmpxchg.h
> @@ -102,8 +102,8 @@ __arch_xchg(volatile void *ptr, unsigned long x, int size)
> " move $t0, %z4 \n" \
> " " st " $t0, %1 \n" \
> " beqz $t0, 1b \n" \
> - "2: \n" \
> __WEAK_LLSC_MB \
> + "2: \n" \

Thanks for the patch.

This would look pretty good if it weren't for the special memory
barrier semantics of LoongArch's LL and SC instructions.

The LL/SC memory barrier behavior of LoongArch:

* LL: <memory-barrier> + <load-exclusive>
* SC: <store-conditional> + <memory-barrier>

In addition, LoongArch's weak memory model allows two loads from the
same address to be reordered.
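
To illustrate, here is a rough sketch of the 32-bit __cmpxchg_asm case
with __WEAK_LLSC_MB kept after the "2:" label, as in the current tree.
The function name is mine, the asm and constraints are written from
memory and may differ slightly from arch/loongarch/include/asm/cmpxchg.h,
and __WEAK_LLSC_MB is used as defined in asm/barrier.h (its expansion is
sketched further below):

static inline unsigned int __cmpxchg32_sketch(volatile unsigned int *m,
					      unsigned int old,
					      unsigned int new)
{
	unsigned int ret;

	__asm__ __volatile__(
	"1:	ll.w	%0, %2		\n"	/* <memory-barrier> + <load-exclusive> */
	"	bne	%0, %z3, 2f	\n"	/* compare failed: no SC will run */
	"	move	$t0, %z4	\n"
	"	sc.w	$t0, %1		\n"	/* <store-conditional> + <memory-barrier> */
	"	beqz	$t0, 1b		\n"	/* SC failed: retry from the LL */
	"2:				\n"
	__WEAK_LLSC_MB			/* reached via the bne as well: on that
					   path nothing else orders the LL's load
					   against a later load of the same
					   address */
	: "=&r" (ret), "=ZB" (*m)
	: "ZB" (*m), "Jr" (old), "Jr" (new)
	: "t0", "memory");

	return ret;
}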

So __WEAK_LLSC_MB [1] is there to prevent that load/load reordering on
the path where the compare fails and the SC is never executed; after an
SC, no explicit barrier instruction is required. That is also why the
barrier has to stay after the "2:" label, where the bne-taken path still
runs it, rather than ending up before the label as in this patch.
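
For completeness, this is my understanding of what __WEAK_LLSC_MB
expands to since [1]; the exact hint value is from memory and should be
double-checked against asm/barrier.h:

/* Sketch of arch/loongarch/include/asm/barrier.h as I recall it after [1]:
 * an ordering barrier that only orders reads from the same address. */
#define __WEAK_LLSC_MB		"	dbar 0x700	\n"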

[1] https://lore.kernel.org/loongarch/20230516124536.535343-1-chenhuacai@loongson.cn/

Regards,
--
WANG Rui
