Subject: [PATCH v2] locking/mutex: Fix the handoff mechanism not taking effect
Commit e274795ea7b7 ("locking/mutex: Fix mutex handoff") removed the
"handoff" check from __mutex_trylock_or_owner() as below, so any task
that acquires the lock now clears the MUTEX_FLAG_HANDOFF bit, even
when it merely steals the lock. That renders the top-waiter's setting
of MUTEX_FLAG_HANDOFF futile.

-		if (handoff)
-			flags &= ~MUTEX_FLAG_HANDOFF;
+		flags &= ~MUTEX_FLAG_HANDOFF;

Fix it by setting the MUTEX_FLAG_HANDOFF bit before the top-waiter on
the wait_list goes to sleep, so that it is guaranteed to get the lock
after being woken up. Otherwise the lock may be stolen by an
optimistic spinner, which clears the MUTEX_FLAG_HANDOFF bit on
acquisition, and if the task that stole the lock then goes to sleep
while holding it, the top-waiter falls asleep again without
MUTEX_FLAG_HANDOFF set.
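
To illustrate (not part of the patch): a minimal userspace model of
the lost handoff, using C11 atomics. The names (model_trylock,
MODEL_LOCKED, MODEL_HANDOFF) are hypothetical, and the owner
task_struct pointer that the real lock word carries is omitted.

#include <stdatomic.h>
#include <stdbool.h>
#include <stdio.h>

/* Simplified lock word: bit 0 = locked, bit 1 = handoff. */
#define MODEL_LOCKED	0x01UL
#define MODEL_HANDOFF	0x02UL

static atomic_ulong lock_word;

/* Models __mutex_trylock_or_owner() after commit e274795ea7b7:
 * whoever acquires the lock also clears the HANDOFF bit, even a
 * task that merely steals the lock. */
static bool model_trylock(void)
{
	unsigned long old = atomic_load(&lock_word);

	while (!(old & MODEL_LOCKED)) {
		if (atomic_compare_exchange_weak(&lock_word, &old,
				(old | MODEL_LOCKED) & ~MODEL_HANDOFF))
			return true;
	}
	return false;
}

int main(void)
{
	/* The top-waiter sets HANDOFF, expecting to be handed the lock. */
	atomic_fetch_or(&lock_word, MODEL_HANDOFF);

	/* An optimistic spinner steals the lock first; the acquisition
	 * wipes the bit the waiter set, so this prints 0. */
	if (model_trylock())
		printf("HANDOFF bit after steal: %lu\n",
		       atomic_load(&lock_word) & MODEL_HANDOFF);
	return 0;
}

Built with any C11 compiler, it prints 0: the steal has cleared the
bit, so a later unlock sees no handoff request from the waiter.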

Note: there is still a very small window in which the top-waiter
cannot get the lock after being woken up, because the unlock path may
not yet observe the MUTEX_FLAG_HANDOFF bit when it wakes the
top-waiter. That is harmless: the top-waiter will either
optimistically spin on the lock or fall asleep again with the
MUTEX_FLAG_HANDOFF bit set.

Also correct an obsolete comment in __mutex_trylock_or_owner().

Fixes: e274795ea7b7 ("locking/mutex: Fix mutex handoff")
Suggested-by: Waiman Long <longman@redhat.com>
Signed-off-by: Yanfei Xu <yanfei.xu@windriver.com>
---
v1->v2:
1. Move the assignment of the "first" variable ahead of
schedule_preempt_disabled() so that the top-waiter can grab the
lock the first time it wakes up.
2. Correct the comments in __mutex_trylock_or_owner() as suggested
by Waiman.
3. Rename the patch from "locking/mutex: fix the MUTEX_FLAG_HANDOFF
bit is cleared unexpected" to "locking/mutex: Fix the handoff
mechanism not taking effect".

kernel/locking/mutex.c | 16 ++++++++--------
1 file changed, 8 insertions(+), 8 deletions(-)

diff --git a/kernel/locking/mutex.c b/kernel/locking/mutex.c
index 013e1b08a1bf..ba36d93e65e8 100644
--- a/kernel/locking/mutex.c
+++ b/kernel/locking/mutex.c
@@ -118,9 +118,9 @@ static inline struct task_struct *__mutex_trylock_or_owner(struct mutex *lock)
 		}
 
 		/*
-		 * We set the HANDOFF bit, we must make sure it doesn't live
-		 * past the point where we acquire it. This would be possible
-		 * if we (accidentally) set the bit on an unlocked mutex.
+		 * Always clear the HANDOFF bit before acquiring the lock.
+		 * Note that if the bit is accidentally set on an unlocked
+		 * mutex, anyone can acquire it.
 		 */
 		flags &= ~MUTEX_FLAG_HANDOFF;
 
@@ -1033,17 +1033,17 @@ __mutex_lock_common(struct mutex *lock, long state, unsigned int subclass,
 		}
 
 		spin_unlock(&lock->wait_lock);
-		schedule_preempt_disabled();
 
 		/*
 		 * ww_mutex needs to always recheck its position since its waiter
 		 * list is not FIFO ordered.
 		 */
-		if (ww_ctx || !first) {
+		if (ww_ctx || !first)
 			first = __mutex_waiter_is_first(lock, &waiter);
-			if (first)
-				__mutex_set_flag(lock, MUTEX_FLAG_HANDOFF);
-		}
+		if (first)
+			__mutex_set_flag(lock, MUTEX_FLAG_HANDOFF);
+
+		schedule_preempt_disabled();
 
 		set_current_state(state);
 		/*
--
2.27.0