Subject: [PATCH] LoongArch: Fix up cmpxchg semantics for memory barrier
From: Guo Ren <guoren@linux.alibaba.com>

When cmpxchg fails, no memory barrier needs to be taken. Only when
cmpxchg succeeds and the new value has been written is the memory
barrier needed.

- cmpxchg_asm: The unnecessary __WEAK_LLSC_MB on the fail path would
reduce the performance of the cmpxchg retry loop.
- cmpxchg_small: A necessary __WEAK_LLSC_MB is missing when sc succeeds.
This is a bug, because there is no memory synchronization when sc
succeeds.
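
To illustrate why ordering is only needed on the success path, consider
a hypothetical caller like the trylock sketch below (illustration only,
not part of this patch, assuming the generic atomic_cmpxchg() API): the
acquire ordering only matters when the compare-and-exchange actually
installs the new value; a failed attempt publishes nothing, so a
barrier on that path only slows down the retry loop.

#include <linux/atomic.h>
#include <linux/types.h>

/*
 * Hypothetical illustration, not from this patch: a trylock built on
 * cmpxchg.  Ordering is only required when the cmpxchg succeeds and
 * the lock is actually taken; a failed attempt changes nothing and
 * therefore needs no barrier.
 */
static inline bool example_trylock(atomic_t *lock)
{
	/* atomic_cmpxchg() returns the old value; 0 means we took the lock. */
	return atomic_cmpxchg(lock, 0, 1) == 0;
}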

Signed-off-by: Guo Ren <guoren@linux.alibaba.com>
Signed-off-by: Guo Ren <guoren@kernel.org>
---
arch/loongarch/include/asm/cmpxchg.h | 6 ++----
1 file changed, 2 insertions(+), 4 deletions(-)

diff --git a/arch/loongarch/include/asm/cmpxchg.h b/arch/loongarch/include/asm/cmpxchg.h
index 979fde61bba8..6a05b92814b6 100644
--- a/arch/loongarch/include/asm/cmpxchg.h
+++ b/arch/loongarch/include/asm/cmpxchg.h
@@ -102,8 +102,8 @@ __arch_xchg(volatile void *ptr, unsigned long x, int size)
" move $t0, %z4 \n" \
" " st " $t0, %1 \n" \
" beqz $t0, 1b \n" \
- "2: \n" \
__WEAK_LLSC_MB \
+ "2: \n" \
: "=&r" (__ret), "=ZB"(*m) \
: "ZB"(*m), "Jr" (old), "Jr" (new) \
: "t0", "memory"); \
@@ -148,10 +148,8 @@ static inline unsigned int __cmpxchg_small(volatile void *ptr, unsigned int old,
" or %1, %1, %z6 \n"
" sc.w %1, %2 \n"
" beqz %1, 1b \n"
- " b 3f \n"
- "2: \n"
__WEAK_LLSC_MB
- "3: \n"
+ "2: \n"
: "=&r" (old32), "=&r" (temp), "=ZC" (*ptr32)
: "ZC" (*ptr32), "Jr" (mask), "Jr" (old), "Jr" (new)
: "memory");
--
2.36.1