From: Waiman Long <longman@redhat.com>
Subject: [PATCH] locking/mutex: Reduce chance of setting HANDOFF bit on unlocked mutex
Date: 29 Jun 2021
The current mutex code may set the HANDOFF bit right after wakeup
without checking whether the mutex has already been unlocked, and the
chance of setting the bit on an unlocked mutex can be relatively high.
A HANDOFF bit set on an unlocked mutex does not block other waiters
from acquiring the lock, so the atomic operation is wasted.
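
For reference, the owner field of a mutex packs the owning task_struct
pointer together with low-order flag bits; the sketch below is
simplified from kernel/locking/mutex.c and is not the exact kernel
source. A NULL task pointer means the mutex is unlocked, so a HANDOFF
bit set in that state constrains nobody:

#define MUTEX_FLAG_WAITERS	0x01
#define MUTEX_FLAG_HANDOFF	0x02
#define MUTEX_FLAG_PICKUP	0x04
#define MUTEX_FLAGS		0x07

/* NULL when unlocked: a HANDOFF bit set on such a mutex blocks nobody */
static inline struct task_struct *__owner_task(unsigned long owner)
{
	return (struct task_struct *)(owner & ~MUTEX_FLAGS);
}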

To reduce that chance, do a trylock before setting the HANDOFF bit. In
addition, optimistic spinning on the mutex is now done only when the
HANDOFF bit is set on a locked mutex, which guarantees that no one
else can steal the lock.
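
In outline, the wakeup path in __mutex_lock_common() then becomes
(condensed from the diff below; ww_mutex and signal handling omitted):

	if (__mutex_trylock(lock))
		break;		/* mutex was unlocked, we now own it */

	set_current_state(state);

	if (first)		/* only the first waiter requests handoff */
		owner = __mutex_fetch_set_flag(lock, MUTEX_FLAG_HANDOFF);

	/* spin only when a holder is present; HANDOFF then prevents stealing */
	if ((owner & ~MUTEX_FLAGS) &&
	    mutex_optimistic_spin(lock, ww_ctx, &waiter))
		break;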

Reported-by: Xu, Yanfei <yanfei.xu@windriver.com>
Signed-off-by: Waiman Long <longman@redhat.com>
---
kernel/locking/mutex.c | 42 +++++++++++++++++++++++++++++-------------
1 file changed, 29 insertions(+), 13 deletions(-)

diff --git a/kernel/locking/mutex.c b/kernel/locking/mutex.c
index d2df5e68b503..472ab21b5b8e 100644
--- a/kernel/locking/mutex.c
+++ b/kernel/locking/mutex.c
@@ -118,9 +118,9 @@ static inline struct task_struct *__mutex_trylock_or_owner(struct mutex *lock)
 		}
 
 		/*
-		 * We set the HANDOFF bit, we must make sure it doesn't live
-		 * past the point where we acquire it. This would be possible
-		 * if we (accidentally) set the bit on an unlocked mutex.
+		 * Always clear the HANDOFF bit before acquiring the lock.
+		 * Note that if the bit is accidentally set on an unlocked
+		 * mutex, anyone can acquire it.
 		 */
 		flags &= ~MUTEX_FLAG_HANDOFF;
 
@@ -180,6 +180,11 @@ static inline void __mutex_set_flag(struct mutex *lock, unsigned long flag)
 	atomic_long_or(flag, &lock->owner);
 }
 
+static inline long __mutex_fetch_set_flag(struct mutex *lock, unsigned long flag)
+{
+	return atomic_long_fetch_or_relaxed(flag, &lock->owner);
+}
+
 static inline void __mutex_clear_flag(struct mutex *lock, unsigned long flag)
 {
 	atomic_long_andnot(flag, &lock->owner);
@@ -1007,6 +1012,8 @@ __mutex_lock_common(struct mutex *lock, unsigned int state, unsigned int subclas
 
 	set_current_state(state);
 	for (;;) {
+		long owner = 0L;
+
 		/*
 		 * Once we hold wait_lock, we're serialized against
 		 * mutex_unlock() handing the lock off to us, do a trylock
@@ -1035,24 +1042,33 @@ __mutex_lock_common(struct mutex *lock, unsigned int state, unsigned int subclas
 		spin_unlock(&lock->wait_lock);
 		schedule_preempt_disabled();
 
+		/*
+		 * Here we order against unlock; we must either see it change
+		 * state back to RUNNING and fall through the next schedule(),
+		 * or we must see its unlock and acquire.
+		 */
+		if (__mutex_trylock(lock))
+			break;
+
+		set_current_state(state);
+
 		/*
 		 * ww_mutex needs to always recheck its position since its waiter
 		 * list is not FIFO ordered.
 		 */
-		if (ww_ctx || !first) {
+		if (ww_ctx || !first)
 			first = __mutex_waiter_is_first(lock, &waiter);
-			if (first)
-				__mutex_set_flag(lock, MUTEX_FLAG_HANDOFF);
-		}
 
-		set_current_state(state);
+		if (first)
+			owner = __mutex_fetch_set_flag(lock, MUTEX_FLAG_HANDOFF);
+
 		/*
-		 * Here we order against unlock; we must either see it change
-		 * state back to RUNNING and fall through the next schedule(),
-		 * or we must see its unlock and acquire.
+		 * If a lock holder is present with HANDOFF bit set, it will
+		 * guarantee that no one else can steal the lock. We may spin
+		 * on the lock to acquire it earlier.
 		 */
-		if (__mutex_trylock(lock) ||
-		    (first && mutex_optimistic_spin(lock, ww_ctx, &waiter)))
+		if ((owner & ~MUTEX_FLAGS) &&
+		    mutex_optimistic_spin(lock, ww_ctx, &waiter))
 			break;
 
 		spin_lock(&lock->wait_lock);
--
2.18.1