Date: Wed, 24 Apr 2019 14:36:59 +0200
From: Peter Zijlstra <>
Subject: [RFC][PATCH 3/5] mips/atomic: Optimize loongson3_llsc_mb()
Now that every single LL/SC loop has loongson_llsc_mb() in front, we can NO-OP smp_mb__before_llsc() in that case.
While there, remove the superfluous __smp_mb__before_llsc().
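To make the "every LL/SC loop already has loongson_llsc_mb() in front" argument concrete, here is an illustrative sketch (not part of this patch), loosely modelled on the MIPS atomic_add() LL/SC loop in arch/mips/include/asm/atomic.h; the function name, the plain "m" constraint and the omitted .set directives are simplifications for readability:

/*
 * Sketch only: a kernel-style LL/SC loop with the Loongson-3 workaround
 * barrier in front.  With CONFIG_CPU_LOONGSON3_WORKAROUNDS=y the
 * loongson_llsc_mb() below already emits a full "sync" before the LL,
 * so a separate smp_mb__before_llsc() ahead of this loop adds nothing
 * and can be NO-OPed for that configuration.
 */
static inline void example_atomic_add(int i, atomic_t *v)
{
	int temp;

	loongson_llsc_mb();	/* "sync" on Loongson-3, no-op elsewhere */
	__asm__ __volatile__(
	"1:	ll	%0, %1		# example_atomic_add	\n"
	"	addu	%0, %2					\n"
	"	sc	%0, %1					\n"
	"	beqz	%0, 1b					\n"
	: "=&r" (temp), "+m" (v->counter)
	: "Ir" (i));
}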
Cc: Huacai Chen <chenhc@lemote.com>
Cc: Huang Pei <huangpei@loongson.cn>
Cc: Paul Burton <paul.burton@mips.com>
Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
---
 arch/mips/include/asm/barrier.h | 11 ++++++++---
 1 file changed, 8 insertions(+), 3 deletions(-)
--- a/arch/mips/include/asm/barrier.h
+++ b/arch/mips/include/asm/barrier.h
@@ -221,15 +221,12 @@
 
 #ifdef CONFIG_CPU_CAVIUM_OCTEON
 #define smp_mb__before_llsc() smp_wmb()
-#define __smp_mb__before_llsc() __smp_wmb()
 /* Cause previous writes to become visible on all CPUs as soon as possible */
 #define nudge_writes() __asm__ __volatile__(".set push\n\t"		\
 					    ".set arch=octeon\n\t"	\
 					    "syncw\n\t"			\
 					    ".set pop" : : : "memory")
 #else
-#define smp_mb__before_llsc() smp_llsc_mb()
-#define __smp_mb__before_llsc() smp_llsc_mb()
 #define nudge_writes() mb()
 #endif
 
@@ -264,11 +261,19 @@
  * This case affects all current Loongson 3 CPUs.
  */
 #ifdef CONFIG_CPU_LOONGSON3_WORKAROUNDS /* Loongson-3's LLSC workaround */
+#define smp_mb__before_llsc() do { } while (0)
 #define loongson_llsc_mb()	__asm__ __volatile__("sync" : : :"memory")
 #else
 #define loongson_llsc_mb()	do { } while (0)
 #endif
 
+#ifndef smp_mb__before_llsc
+#define smp_mb__before_llsc() smp_llsc_mb()
+#endif
+
+#define __smp_mb__before_atomic()	smp_mb__before_llsc()
+#define __smp_mb__after_atomic()	smp_llsc_mb()
+
 static inline void sync_ginv(void)
 {
 	asm volatile("sync\t%0" :: "i"(STYPE_GINV));