From: Valentin Schneider <>
Subject: Re: [RESEND][PATCH v9 1/7] locking/mutex: Remove wakeups from under mutex::wait_lock
Date: Tue, 09 Apr 2024 18:12:11 +0200
On 01/04/24 16:44, John Stultz wrote:
> From: Peter Zijlstra <peterz@infradead.org>
>
> In preparation to nest mutex::wait_lock under rq::lock we need to remove
> wakeups from under it.
>
> Cc: Joel Fernandes <joelaf@google.com>
> Cc: Qais Yousef <qyousef@google.com>
> Cc: Ingo Molnar <mingo@redhat.com>
> Cc: Peter Zijlstra <peterz@infradead.org>
> Cc: Juri Lelli <juri.lelli@redhat.com>
> Cc: Vincent Guittot <vincent.guittot@linaro.org>
> Cc: Dietmar Eggemann <dietmar.eggemann@arm.com>
> Cc: Valentin Schneider <vschneid@redhat.com>
> Cc: Steven Rostedt <rostedt@goodmis.org>
> Cc: Ben Segall <bsegall@google.com>
> Cc: Zimuzo Ezeozue <zezeozue@google.com>
> Cc: Youssef Esmat <youssefesmat@google.com>
> Cc: Mel Gorman <mgorman@suse.de>
> Cc: Daniel Bristot de Oliveira <bristot@redhat.com>
> Cc: Will Deacon <will@kernel.org>
> Cc: Waiman Long <longman@redhat.com>
> Cc: Boqun Feng <boqun.feng@gmail.com>
> Cc: "Paul E. McKenney" <paulmck@kernel.org>
> Cc: Metin Kaya <Metin.Kaya@arm.com>
> Cc: Xuewen Yan <xuewen.yan94@gmail.com>
> Cc: K Prateek Nayak <kprateek.nayak@amd.com>
> Cc: Thomas Gleixner <tglx@linutronix.de>
> Cc: kernel-team@android.com
> Tested-by: K Prateek Nayak <kprateek.nayak@amd.com>
> Acked-by: Davidlohr Bueso <dave@stgolabs.net>
> Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
> [Heavily changed after 55f036ca7e74 ("locking: WW mutex cleanup") and
> 08295b3b5bee ("locking: Implement an algorithm choice for Wound-Wait
> mutexes")]
> Signed-off-by: Juri Lelli <juri.lelli@redhat.com>
> [jstultz: rebased to mainline, added extra wake_up_q & init
> to avoid hangs, similar to Connor's rework of this patch]
> Signed-off-by: John Stultz <jstultz@google.com>
This looks mostly good to me; a few preemption questions below.
> @@ -934,6 +942,7 @@ static noinline void __sched __mutex_unlock_slowpath(struct mutex *lock, unsigne
>  		}
>  	}
>
> +	preempt_disable();
>  	raw_spin_lock(&lock->wait_lock);
>  	debug_mutex_unlock(lock);
>  	if (!list_empty(&lock->wait_list)) {
> @@ -952,8 +961,8 @@ static noinline void __sched __mutex_unlock_slowpath(struct mutex *lock, unsigne
>  		__mutex_handoff(lock, next);
>
(minor nit) Could the preempt_disable() be moved here instead? IMO having it closer to the unlock makes it clearer why it is there (e.g. sched/core.c::affine_move_task(), rt_mutex_setprio(), __sched_setscheduler()).
>  	raw_spin_unlock(&lock->wait_lock);
> -
>  	wake_up_q(&wake_q);
> +	preempt_enable();
>  }
>
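i.e. end up with something like the below (untested, just to illustrate the placement I have in mind; the waiter/handoff handling is elided):

	raw_spin_lock(&lock->wait_lock);
	debug_mutex_unlock(lock);
	/* ... queue waiters on wake_q, handle handoff ... */

	/* Disable preemption only around the unlock + wakeup. */
	preempt_disable();
	raw_spin_unlock(&lock->wait_lock);
	wake_up_q(&wake_q);
	preempt_enable();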
> @@ -1775,8 +1782,9 @@ static int __sched rt_mutex_slowlock(struct rt_mutex_base *lock,
>  	 * irqsave/restore variants.
>  	 */
>  	raw_spin_lock_irqsave(&lock->wait_lock, flags);
> -	ret = __rt_mutex_slowlock_locked(lock, ww_ctx, state);
> +	ret = __rt_mutex_slowlock_locked(lock, ww_ctx, state, &wake_q);
>  	raw_spin_unlock_irqrestore(&lock->wait_lock, flags);
> +	wake_up_q(&wake_q);
Shouldn't this also be wrapped in a preempt-disabled region?
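Something like the below is what I would have expected (untested sketch, keeping the preempt_disable() next to the unlock as per the nit above):

	raw_spin_lock_irqsave(&lock->wait_lock, flags);
	ret = __rt_mutex_slowlock_locked(lock, ww_ctx, state, &wake_q);
	/* Don't get preempted between dropping wait_lock and issuing the wakeups. */
	preempt_disable();
	raw_spin_unlock_irqrestore(&lock->wait_lock, flags);
	wake_up_q(&wake_q);
	preempt_enable();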
>  	rt_mutex_post_schedule();
>
>  	return ret;
> @@ -122,6 +123,7 @@ static int __sched __rwbase_read_lock(struct rwbase_rt *rwb,
>  	if (!ret)
>  		atomic_inc(&rwb->readers);
>  	raw_spin_unlock_irq(&rtm->wait_lock);
> +	wake_up_q(&wake_q);
Same question wrt preemption.
>  	if (!ret)
>  		rwbase_rtmutex_unlock(rtm);
>