From: Josh Don <>
Date: Thu, 29 Apr 2021 13:39:54 -0700
Subject: Re: [PATCH 04/19] sched: Prepare for Core-wide rq->lock
On Thu, Apr 29, 2021 at 1:03 AM Aubrey Li <aubrey.intel@gmail.com> wrote:
>
> On Thu, Apr 22, 2021 at 8:39 PM Peter Zijlstra <peterz@infradead.org> wrote:
> ----snip----
> > @@ -199,6 +224,25 @@ void raw_spin_rq_unlock(struct rq *rq)
> >  	raw_spin_unlock(rq_lockp(rq));
> >  }
> >
> > +#ifdef CONFIG_SMP
> > +/*
> > + * double_rq_lock - safely lock two runqueues
> > + */
> > +void double_rq_lock(struct rq *rq1, struct rq *rq2)
> > +{
> > +	lockdep_assert_irqs_disabled();
> > +
> > +	if (rq1->cpu > rq2->cpu)
>
> It's still a bit hard for me to digest this function, I guess using (rq->cpu)
> can't guarantee the sequence of locking when coresched is enabled.
>
> - cpu1 and cpu7 shares lockA
> - cpu2 and cpu8 shares lockB
>
> double_rq_lock(1,8) leads to lock(A) and lock(B)
> double_rq_lock(7,2) leads to lock(B) and lock(A)
>
> change to below to avoid ABBA?
> + if (__rq_lockp(rq1) > __rq_lockp(rq2))
> Please correct me if I was wrong.
Great catch, Aubrey. This is possibly what is causing the lockups that Don is seeing.
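To make the interleaving concrete, here's a tiny userspace sketch of the ABBA (illustration only, nothing kernel about it; core_lock() and lock_pair() are names I made up), with pthread mutexes standing in for the shared per-core rq locks and the cpu->lock mapping from Aubrey's example:

#include <pthread.h>
#include <stdio.h>

static pthread_mutex_t lockA = PTHREAD_MUTEX_INITIALIZER;
static pthread_mutex_t lockB = PTHREAD_MUTEX_INITIALIZER;

/* cpus 1 and 7 share lockA; cpus 2 and 8 share lockB */
static pthread_mutex_t *core_lock(int cpu)
{
	return (cpu == 1 || cpu == 7) ? &lockA : &lockB;
}

/* like the patch hunk above: order by cpu number only, then lock both */
static void lock_pair(int cpu1, int cpu2)
{
	pthread_mutex_t *l1, *l2;

	if (cpu1 > cpu2) {
		int tmp = cpu1;
		cpu1 = cpu2;
		cpu2 = tmp;
	}
	l1 = core_lock(cpu1);
	l2 = core_lock(cpu2);

	pthread_mutex_lock(l1);
	if (l2 != l1)
		pthread_mutex_lock(l2);
	/* critical section would run here */
	if (l2 != l1)
		pthread_mutex_unlock(l2);
	pthread_mutex_unlock(l1);
}

static void *locker18(void *arg)
{
	for (int i = 0; i < 10000000; i++)
		lock_pair(1, 8);	/* 1 < 8, no swap: lockA, then lockB */
	return NULL;
}

static void *locker72(void *arg)
{
	for (int i = 0; i < 10000000; i++)
		lock_pair(7, 2);	/* swaps to (2,7): lockB, then lockA */
	return NULL;
}

int main(void)
{
	pthread_t t1, t2;

	pthread_create(&t1, NULL, locker18, NULL);
	pthread_create(&t2, NULL, locker72, NULL);
	pthread_join(t1, NULL);
	pthread_join(t2, NULL);
	puts("got lucky, no deadlock this run");
	return 0;
}

Build with gcc -pthread; most runs wedge with one thread holding lockA waiting on lockB and the other holding lockB waiting on lockA, exactly the inversion described above.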
The proposed usage of __rq_lockp() is prone to racing with sched core being enabled/disabled: the lock pointer an rq resolves to can change between the comparison and the actual acquisition, so two CPUs can disagree on the ordering. It also won't order properly if we do double_rq_lock(smt0, smt1) vs double_rq_lock(smt1, smt0), since those calls would see equivalent __rq_lockp() values. I'd propose an alternative but similar idea: order by core, then break ties by ordering on cpu:
+#ifdef CONFIG_SCHED_CORE
+	if (rq1->core->cpu > rq2->core->cpu)
+		swap(rq1, rq2);
+	else if (rq1->core->cpu == rq2->core->cpu && rq1->cpu > rq2->cpu)
+		swap(rq1, rq2);
+#else
 	if (rq1->cpu > rq2->cpu)
 		swap(rq1, rq2);
+#endif
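For completeness, the whole function would then read roughly as below. The tail after the swap (raw_spin_rq_lock() plus the nested lock of rq2 when the two rqs don't share a lock) is my reconstruction from context, since the hunk quoted above stops at the first swap():

void double_rq_lock(struct rq *rq1, struct rq *rq2)
{
	lockdep_assert_irqs_disabled();

#ifdef CONFIG_SCHED_CORE
	/*
	 * Order by core so two lockers can't grab a pair of core locks
	 * in opposite order; break ties by cpu so double_rq_lock(smt0,
	 * smt1) and double_rq_lock(smt1, smt0) settle on the same rq1.
	 */
	if (rq1->core->cpu > rq2->core->cpu)
		swap(rq1, rq2);
	else if (rq1->core->cpu == rq2->core->cpu && rq1->cpu > rq2->cpu)
		swap(rq1, rq2);
#else
	if (rq1->cpu > rq2->cpu)
		swap(rq1, rq2);
#endif

	raw_spin_rq_lock(rq1);
	if (rq_lockp(rq1) != rq_lockp(rq2))	/* one lock suffices if shared */
		raw_spin_rq_lock_nested(rq2, SINGLE_DEPTH_NESTING);
}

Unlike ordering on __rq_lockp(), the core/cpu numbers shouldn't change when core sched is toggled (only across hotplug), so the comparison can't go stale between deciding the order and taking the locks.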