Subject: Re: [PATCH 04/19] sched: Prepare for Core-wide rq->lock
From: Aubrey Li <>
Date: Wed, 28 Apr 2021 14:02:36 +0800
On 4/22/21 8:05 PM, Peter Zijlstra wrote:
> When switching on core-sched, CPUs need to agree which lock to use for
> their RQ.
>
> The new rule will be that rq->core_enabled will be toggled while
> holding all rq->__locks that belong to a core. This means we need to
> double check the rq->core_enabled value after each lock acquire and
> retry if it changed.
>
> This also has implications for those sites that take multiple RQ
> locks, they need to be careful that the second lock doesn't end up
> being the first lock.
>
> Verify the lock pointer after acquiring the first lock, because if
> they're on the same core, holding any of the rq->__lock instances will
> pin the core state.
>
> While there, change the rq->__lock order to CPU number, instead of rq
> address, this greatly simplifies the next patch.
>
> Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
> ---
>  kernel/sched/core.c  | 48 ++++++++++++++++++++++++++++++++++++++++++++++--
>  kernel/sched/sched.h | 41 +++++++++++------------------------------
>  2 files changed, 57 insertions(+), 32 deletions(-)
>
> --- a/kernel/sched/core.c
> +++ b/kernel/sched/core.c
> @@ -186,12 +186,37 @@ int sysctl_sched_rt_runtime = 950000;
>
>  void raw_spin_rq_lock_nested(struct rq *rq, int subclass)
>  {
> -	raw_spin_lock_nested(rq_lockp(rq), subclass);
> +	raw_spinlock_t *lock;
> +
> +	if (sched_core_disabled()) {
> +		raw_spin_lock_nested(&rq->__lock, subclass);
> +		return;
> +	}
> +
> +	for (;;) {
> +		lock = rq_lockp(rq);
> +		raw_spin_lock_nested(lock, subclass);
> +		if (likely(lock == rq_lockp(rq)))
> +			return;
> +		raw_spin_unlock(lock);
> +	}
>  }
>
>  bool raw_spin_rq_trylock(struct rq *rq)
>  {
> -	return raw_spin_trylock(rq_lockp(rq));
> +	raw_spinlock_t *lock;
> +	bool ret;
> +
> +	if (sched_core_disabled())
> +		return raw_spin_trylock(&rq->__lock);
> +
> +	for (;;) {
> +		lock = rq_lockp(rq);
> +		ret = raw_spin_trylock(lock);
> +		if (!ret || (likely(lock == rq_lockp(rq))))
> +			return ret;
> +		raw_spin_unlock(lock);
> +	}
>  }
>
>  void raw_spin_rq_unlock(struct rq *rq)
> @@ -199,6 +224,25 @@ void raw_spin_rq_unlock(struct rq *rq)
>  	raw_spin_unlock(rq_lockp(rq));
>  }
>
> +#ifdef CONFIG_SMP
> +/*
> + * double_rq_lock - safely lock two runqueues
> + */
> +void double_rq_lock(struct rq *rq1, struct rq *rq2)
> +{
> +	lockdep_assert_irqs_disabled();
> +
> +	if (rq1->cpu > rq2->cpu)
> +		swap(rq1, rq2);
I'm not sure why the rq pointers are swapped here instead of the rq locks.
This swaps the dst rq and the src rq, which makes the subsequent logic
wrong, at least in try_steal_cookie().
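As an aside, here is a minimal userspace sketch of the cpu-number lock
ordering that double_rq_lock() is doing. All of the names below are
stand-ins invented so it compiles outside the kernel, and it skips the
rq_lockp() revalidation from the patch; it only models ordering two
per-rq locks by cpu. Note that swap() exchanges only the callee's local
copies of the pointers, since C passes them by value:

#include <pthread.h>
#include <stdio.h>

/* Userspace stand-ins for the kernel types; illustration only. */
struct rq {
	int cpu;
	pthread_mutex_t lock;
};

/* Analog of the kernel's swap() macro: exchange two values. */
#define swap(a, b) \
	do { __typeof__(a) __tmp = (a); (a) = (b); (b) = __tmp; } while (0)

/*
 * Lock two runqueues in a fixed order (lowest cpu first) so that two
 * racing callers cannot deadlock ABBA-style. swap() reorders only the
 * local pointer copies; the caller's pointers are untouched.
 */
static void double_rq_lock_sketch(struct rq *rq1, struct rq *rq2)
{
	if (rq1->cpu > rq2->cpu)
		swap(rq1, rq2);

	pthread_mutex_lock(&rq1->lock);
	if (rq1 != rq2)
		pthread_mutex_lock(&rq2->lock);
}

static void double_rq_unlock_sketch(struct rq *rq1, struct rq *rq2)
{
	pthread_mutex_unlock(&rq1->lock);
	if (rq1 != rq2)
		pthread_mutex_unlock(&rq2->lock);
}

int main(void)
{
	struct rq a = { .cpu = 3 }, b = { .cpu = 1 };
	struct rq *dst = &a, *src = &b;

	pthread_mutex_init(&a.lock, NULL);
	pthread_mutex_init(&b.lock, NULL);

	double_rq_lock_sketch(dst, src);
	/* dst and src keep their roles; only the locking order changed. */
	printf("dst=cpu%d src=cpu%d\n", dst->cpu, src->cpu);
	double_rq_unlock_sketch(dst, src);
	return 0;
}

Built with gcc -pthread, this always takes cpu1's lock before cpu3's
regardless of the argument order, while the caller's dst/src pointers
still refer to the same runqueues after the call.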
Thanks,
-Aubrey