Subject: [patch 62/63] locking/rtmutex: Add adaptive spinwait mechanism
From: Steven Rostedt <rostedt@goodmis.org>

Going to sleep when a spinlock or rwlock is contended can be quite
inefficient when the contention time is short and the lock owner is running
on a different CPU. The MCS mechanism is not applicable to rtmutex-based
locks, so provide a simple adaptive spinwait mechanism for the RT-specific
spin/rwlock implementations.
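
For illustration only (not part of this patch): a minimal, compilable
sketch of the idea in plain C. The helpers owner_is_running(),
try_acquire() and block_on_lock() are hypothetical stand-ins for the
kernel's owner->on_cpu check, try_to_take_rt_mutex() and
schedule_rtlock(); they exist only to show the shape of the wait loop.

/* Illustrative sketch only -- not kernel code. */
#include <stdbool.h>

struct task;				/* opaque owner handle */

struct demo_lock {
	struct task *owner;		/* NULL when unlocked */
};

/* Hypothetical helpers standing in for the kernel primitives. */
extern struct task *lock_owner(struct demo_lock *lock);
extern bool owner_is_running(struct task *owner);
extern bool try_acquire(struct demo_lock *lock);
extern void block_on_lock(struct demo_lock *lock);
extern void cpu_relax_hint(void);

/* Slow path: spin while the owner is running, otherwise go to sleep. */
void demo_lock_slowpath(struct demo_lock *lock)
{
	while (!try_acquire(lock)) {
		struct task *owner = lock_owner(lock);

		if (owner && owner_is_running(owner)) {
			/*
			 * The owner is on a CPU and likely to release the
			 * lock soon; a sleep/wakeup cycle would cost more
			 * than briefly busy-waiting.
			 */
			cpu_relax_hint();
		} else {
			/* Owner is blocked (or gone): sleeping is cheaper. */
			block_on_lock(lock);
		}
	}
}

The patch below adds two refinements over this sketch: only the
top-priority waiter spins on the owner, and the owner task is only
dereferenced under rcu_read_lock() with a re-check against the current
lock owner, because the pointer is speculative.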

[ tglx: Provide a contemporary changelog ]

Originally-by: Gregory Haskins <ghaskins@novell.com>
Signed-off-by: Steven Rostedt <rostedt@goodmis.org>
Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
---
 kernel/locking/rtmutex.c | 50 ++++++++++++++++++++++++++++++++++++++++++++++-
 1 file changed, 49 insertions(+), 1 deletion(-)
---
--- a/kernel/locking/rtmutex.c
+++ b/kernel/locking/rtmutex.c
@@ -8,6 +8,11 @@
  * Copyright (C) 2005-2006 Timesys Corp., Thomas Gleixner <tglx@timesys.com>
  * Copyright (C) 2005 Kihon Technologies Inc., Steven Rostedt
  * Copyright (C) 2006 Esben Nielsen
+ * Adaptive Spinlocks:
+ *  Copyright (C) 2008 Novell, Inc., Gregory Haskins, Sven Dietrich,
+ *			and Peter Morreale,
+ * Adaptive Spinlocks simplification:
+ *  Copyright (C) 2008 Red Hat, Inc., Steven Rostedt <srostedt@redhat.com>
  *
  * See Documentation/locking/rt-mutex-design.rst for details.
  */
@@ -1529,6 +1534,43 @@ static __always_inline int __rt_mutex_lo
  * Functions required for spin/rw_lock substitution on RT kernels
  */
 
+#ifdef CONFIG_SMP
+/*
+ * Note that owner is a speculative pointer and dereferencing relies
+ * on rcu_read_lock() and the check against the lock owner.
+ */
+static bool rtlock_adaptive_spinwait(struct rt_mutex_base *lock,
+				     struct task_struct *owner)
+{
+	bool res = true;
+
+	rcu_read_lock();
+	for (;;) {
+		/* Owner changed. Trylock again */
+		if (owner != rt_mutex_owner(lock))
+			break;
+		/*
+		 * Ensure that owner->on_cpu is dereferenced _after_
+		 * checking the above to be valid.
+		 */
+		barrier();
+		if (!owner->on_cpu) {
+			res = false;
+			break;
+		}
+		cpu_relax();
+	}
+	rcu_read_unlock();
+	return res;
+}
+#else
+static bool rtlock_adaptive_spinwait(struct rt_mutex_base *lock,
+				     struct task_struct *owner)
+{
+	return false;
+}
+#endif
+
 /**
  * rtlock_slowlock_locked - Slow path lock acquisition for RT locks
  * @lock:	The underlying rt mutex
@@ -1536,6 +1578,7 @@ static __always_inline int __rt_mutex_lo
 static void __sched rtlock_slowlock_locked(struct rt_mutex_base *lock)
 {
 	struct rt_mutex_waiter waiter;
+	struct task_struct *owner;
 
 	lockdep_assert_held(&lock->wait_lock);
 
@@ -1554,9 +1597,14 @@ static void __sched rtlock_slowlock_lock
 		if (try_to_take_rt_mutex(lock, current, &waiter))
 			break;
 
+		if (&waiter == rt_mutex_top_waiter(lock))
+			owner = rt_mutex_owner(lock);
+		else
+			owner = NULL;
 		raw_spin_unlock_irq(&lock->wait_lock);
 
-		schedule_rtlock();
+		if (!owner || !rtlock_adaptive_spinwait(lock, owner))
+			schedule_rtlock();
 
 		raw_spin_lock_irq(&lock->wait_lock);
 		set_current_state(TASK_RTLOCK_WAIT);