From:    Waiman Long <longman@redhat.com>
Subject: [PATCH 1/2] locking: Provide a low overhead do_arch_spin_lock() API
Date:    Wed, 21 Sep 2022 09:21:51 -0400
There are some code paths in the kernel, such as tracing and RCU, that want to use a spinlock without the lock debugging overhead (lockdep, etc.). Provide a do_arch_spin_lock() API that does proper preemption disabling and enabling without any debugging or tracing overhead.
Signed-off-by: Waiman Long <longman@redhat.com>
---
 include/linux/spinlock.h | 27 +++++++++++++++++++++++++++
 1 file changed, 27 insertions(+)
diff --git a/include/linux/spinlock.h b/include/linux/spinlock.h
index 5c0c5174155d..535ef0d5bb80 100644
--- a/include/linux/spinlock.h
+++ b/include/linux/spinlock.h
@@ -119,6 +119,33 @@ do { \
 #define raw_spin_is_contended(lock)	(((void)(lock), 0))
 #endif /*arch_spin_is_contended*/
 
+/*
+ * Provide a set of do_arch_spin*() APIs to make use of the arch_spinlock_t
+ * with proper preemption disabling & enabling without any debugging and
+ * tracing overhead. Any users of arch_spinlock_t should use this set of
+ * APIs unless it is sure that either preemption or irqs has been disabled.
+ */
+static __always_inline void do_arch_spin_lock(arch_spinlock_t *lock)
+{
+	preempt_disable_notrace();
+	arch_spin_lock(lock);
+}
+
+static __always_inline int do_arch_spin_trylock(arch_spinlock_t *lock)
+{
+	preempt_disable_notrace();
+	if (arch_spin_trylock(lock))
+		return 1;
+	preempt_enable_notrace();
+	return 0;
+}
+
+static __always_inline void do_arch_spin_unlock(arch_spinlock_t *lock)
+{
+	arch_spin_unlock(lock);
+	preempt_enable_notrace();
+}
+
 /*
  * smp_mb__after_spinlock() provides the equivalent of a full memory barrier
  * between program-order earlier lock acquisitions and program-order later
-- 
2.31.1
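
For context, a minimal sketch of how a caller might use the new helpers follows. Everything in it (evt_buf, evt_record(), evt_try_record(), EVT_BUF_SIZE) is hypothetical, not part of the patch; only the do_arch_spin_*() calls and __ARCH_SPIN_LOCK_UNLOCKED come from the kernel. The point of interest is the trylock failure path: preemption is re-enabled inside do_arch_spin_trylock() itself, so a failed attempt leaves nothing for the caller to undo.

#include <linux/spinlock.h>
#include <linux/types.h>

#define EVT_BUF_SIZE	64

/* Hypothetical ring buffer guarded by a raw arch_spinlock_t. */
static u64 evt_buf[EVT_BUF_SIZE];
static unsigned int evt_head;
static arch_spinlock_t evt_lock = __ARCH_SPIN_LOCK_UNLOCKED;

static void evt_record(u64 val)
{
	/* Disables preemption (notrace) and takes the raw lock. */
	do_arch_spin_lock(&evt_lock);
	evt_buf[evt_head++ % EVT_BUF_SIZE] = val;
	/* Releases the lock, then re-enables preemption. */
	do_arch_spin_unlock(&evt_lock);
}

static bool evt_try_record(u64 val)
{
	/*
	 * Non-blocking variant: on failure, do_arch_spin_trylock() has
	 * already re-enabled preemption, so just report the failure.
	 */
	if (!do_arch_spin_trylock(&evt_lock))
		return false;
	evt_buf[evt_head++ % EVT_BUF_SIZE] = val;
	do_arch_spin_unlock(&evt_lock);
	return true;
}

Because lockdep never sees these locks, the caller takes on the responsibility of avoiding deadlock (e.g. no nesting against locks that lockdep does track), which is why the comment restricts the API to users who know what they are doing.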