From: Uros Bizjak <ubizjak@gmail.com>
Subject: [PATCH v2 2/4] locking/atomic/x86: Introduce arch_atomic64_read_tearable to x86_32
Introduce arch_atomic64_read_tearable() for 32-bit targets to load the
value from an atomic64_t location in a non-atomic way. The read might
be torn, but it can safely be consumed by a subsequent 64-bit
compare-and-swap loop.

Signed-off-by: Uros Bizjak <ubizjak@gmail.com>
Cc: Thomas Gleixner <tglx@linutronix.de>
Cc: Ingo Molnar <mingo@kernel.org>
Cc: Borislav Petkov <bp@alien8.de>
Cc: Dave Hansen <dave.hansen@linux.intel.com>
Cc: "H. Peter Anvin" <hpa@zytor.com>
Cc: Peter Zijlstra <peterz@infradead.org>
---
arch/x86/include/asm/atomic64_32.h | 14 ++++++++++++++
1 file changed, 14 insertions(+)

diff --git a/arch/x86/include/asm/atomic64_32.h b/arch/x86/include/asm/atomic64_32.h
index 11e817dab44a..b4434e5ae31d 100644
--- a/arch/x86/include/asm/atomic64_32.h
+++ b/arch/x86/include/asm/atomic64_32.h
@@ -14,6 +14,20 @@ typedef struct {

#define ATOMIC64_INIT(val) { (val) }

+/*
+ * This function is intended to load the value from an atomic64_t
+ * location in a non-atomic way. The read might be torn, but can
+ * safely be consumed by the 64-bit compare-and-swap loop.
+ */
+static __always_inline s64 arch_atomic64_read_tearable(const atomic64_t *v)
+{
+ /*
+ * See the comment in arch_atomic_read() on why we use
+ * __READ_ONCE() instead of READ_ONCE_NOCHECK() here.
+ */
+ return __READ_ONCE(v->counter);
+}
+
#define __ATOMIC64_DECL(sym) void atomic64_##sym(atomic64_t *, ...)
#ifndef ATOMIC64_EXPORT
#define ATOMIC64_DECL_ONE __ATOMIC64_DECL
--
2.42.0
