From: Joel Fernandes <>
Date: Sun, 20 Nov 2022 20:49:22 -0500
Subject: Re: [RFC PATCH 07/11] sched: Add proxy execution
On Sun, Nov 20, 2022 at 7:22 PM Joel Fernandes <joel@joelfernandes.org> wrote:
>
> Hello Dietmar,
>
> On Fri, Nov 04, 2022 at 06:09:26PM +0100, Dietmar Eggemann wrote:
> > On 31/10/2022 19:00, Joel Fernandes wrote:
> > > On Mon, Oct 31, 2022 at 05:39:45PM +0100, Dietmar Eggemann wrote:
> > >> On 29/10/2022 05:31, Joel Fernandes wrote:
> > >>> Hello Dietmar,
> > >>>
> > >>>> On Oct 24, 2022, at 6:13 AM, Dietmar Eggemann <dietmar.eggemann@arm.com> wrote:
> > >>>>
> > >>>> On 03/10/2022 23:44, Connor O'Brien wrote:
> > >>>>> From: Peter Zijlstra <peterz@infradead.org>
> >
> > [...]
> >
> > >>>>> +        rq_unpin_lock(rq, rf);
> > >>>>> +        raw_spin_rq_unlock(rq);
> > >>>>
> > >>>> Don't we run into rq_pin_lock()'s:
> > >>>>
> > >>>>   SCHED_WARN_ON(rq->balance_callback &&
> > >>>>                 rq->balance_callback != &balance_push_callback)
> > >>>>
> > >>>> by releasing the rq lock between queue_balance_callback(, push_rt/dl_tasks)
> > >>>> and __balance_callbacks()?
> > >>>
> > >>> Apologies, I’m a bit lost here. The code you are responding to inline
> > >>> does not call rq_pin_lock, it calls rq_unpin_lock. So in what scenario
> > >>> does the warning trigger, according to you?
> > >>
> > >> True, but the code which sneaks in between proxy()'s
> > >> raw_spin_rq_unlock(rq) and raw_spin_rq_lock(rq) does.
> > >>
> > >
> > > Got it now, thanks a lot for clarifying. Can this be fixed by doing a
> > > __balance_callbacks() at:
> >
> > I tried the:
> >
> >     head = splice_balance_callbacks(rq);
> >     task_rq_unlock(rq, p, &rf);
> >     ...
> >     balance_callbacks(rq, head);
> >
> > separation known from __sched_setscheduler() in __schedule() (right
> > after pick_next_task()), but it doesn't work: lots of `BUG: scheduling
> > while atomic:`.
>
> How about something like the following? This should exclude concurrent
> balance callback queuing from other CPUs and let us release the rq lock
> early in proxy(). I ran locktorture with your diff that makes the writer
> threads RT, and I cannot reproduce any crash with it:
>
> ---8<-----------------------
>
> From: "Joel Fernandes (Google)" <joel@joelfernandes.org>
> Subject: [PATCH] Exclude balance callback queuing during proxy's migrate
>
> As commit 565790d28b1e ("sched: Fix balance_callback()") makes clear, the
> rq lock needs to be held when __balance_callbacks() in schedule() is
> called. However, because the rq lock is dropped in proxy(), another CPU,
> say one in __sched_setscheduler(), can queue balancing callbacks in that
> window and cause issues.
>
> To remedy this, exclude balance callback queuing on other CPUs during
> proxy().
>
> Reported-by: Dietmar Eggemann <dietmar.eggemann@arm.com>
> Signed-off-by: Joel Fernandes (Google) <joel@joelfernandes.org>
> ---
>  kernel/sched/core.c  | 15 +++++++++++++++
>  kernel/sched/sched.h |  3 +++
>  2 files changed, 18 insertions(+)
>
> diff --git a/kernel/sched/core.c b/kernel/sched/core.c
> index 88a5fa34dc06..f1dac21fcd90 100644
> --- a/kernel/sched/core.c
> +++ b/kernel/sched/core.c
> @@ -6739,6 +6739,10 @@ proxy(struct rq *rq, struct task_struct *next, struct rq_flags *rf)
>  		p->wake_cpu = wake_cpu;
>  	}
>
> +	// Prevent other CPUs from queuing balance callbacks while we migrate
> +	// tasks in the migrate_list with the rq lock released.
> +	raw_spin_lock(&rq->balance_lock);
> +
>  	rq_unpin_lock(rq, rf);
>  	raw_spin_rq_unlock(rq);
>  	raw_spin_rq_lock(that_rq);
> @@ -6758,7 +6762,18 @@ proxy(struct rq *rq, struct task_struct *next, struct rq_flags *rf)
>  	}
>
>  	raw_spin_rq_unlock(that_rq);
> +
> +	// This may make lockdep unhappy as we acquire rq->lock with balance_lock
> +	// held. But that should be a false positive, as the following pattern
> +	// happens only on the current CPU with interrupts disabled:
> +	//   rq_lock();
> +	//   balance_lock();
> +	//   rq_unlock();
> +	//   rq_lock();
>  	raw_spin_rq_lock(rq);
Hmm, I think there's still a chance of deadlock here. I need to rethink it a bit, but that's the idea I was going for.
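Roughly, the interleaving I am worried about looks like the one below. This
is just a sketch of my guess, assuming another CPU takes the rq lock that
proxy() dropped and then queues a balance callback, as __sched_setscheduler()
does; I have not actually reproduced it:

  CPU 0 (proxy)                        CPU 1 (e.g. __sched_setscheduler)
  -------------                        ---------------------------------
  raw_spin_lock(&rq->balance_lock);
  raw_spin_rq_unlock(rq);
                                       raw_spin_rq_lock(rq);
                                       queue_balance_callback(rq, ...);
                                         raw_spin_lock(&rq->balance_lock); /* spins */
  raw_spin_rq_lock(rq); /* spins */

CPU 0 holds balance_lock and waits for the rq lock, while CPU 1 holds the rq
lock and waits for balance_lock. If that ABBA ordering is reachable, the
lockdep complaint mentioned in the patch comment may not be a false positive
after all.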
thanks,
- Joel
> +
> +	raw_spin_unlock(&rq->balance_lock);
> +
>  	rq_repin_lock(rq, rf);
>
>  	return NULL; /* Retry task selection on _this_ CPU. */
> diff --git a/kernel/sched/sched.h b/kernel/sched/sched.h
> index 354e75587fed..932d32bf9571 100644
> --- a/kernel/sched/sched.h
> +++ b/kernel/sched/sched.h
> @@ -1057,6 +1057,7 @@ struct rq {
>  	unsigned long		cpu_capacity_orig;
>
>  	struct callback_head	*balance_callback;
> +	raw_spinlock_t		balance_lock;
>
>  	unsigned char		nohz_idle_balance;
>  	unsigned char		idle_balance;
> @@ -1748,6 +1749,7 @@ queue_balance_callback(struct rq *rq,
>  		       void (*func)(struct rq *rq))
>  {
>  	lockdep_assert_rq_held(rq);
> +	raw_spin_lock(&rq->balance_lock);
>
>  	/*
>  	 * Don't (re)queue an already queued item; nor queue anything when
> @@ -1760,6 +1762,7 @@ queue_balance_callback(struct rq *rq,
>  	head->func = (void (*)(struct callback_head *))func;
>  	head->next = rq->balance_callback;
>  	rq->balance_callback = head;
> +	raw_spin_unlock(&rq->balance_lock);
>  }
>
>  #define rcu_dereference_check_sched_domain(p) \
> --
> 2.38.1.584.g0f3c55d4c2-goog
>
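One more note on the patch above: the diff does not show any initialization
of the new balance_lock. I am assuming it would also need something like the
line below in sched_init()'s per-CPU rq setup, next to the other rq field
initialization (not part of the diff, so take it as a sketch):

  raw_spin_lock_init(&rq->balance_lock);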