Date: Wed, 28 Apr 2021 09:13:44 +0200
From: Peter Zijlstra <>
Subject: Re: [PATCH 04/19] sched: Prepare for Core-wide rq->lock
On Mon, Apr 26, 2021 at 03:21:36PM -0700, Josh Don wrote:
> > diff --git a/kernel/sched/core.c b/kernel/sched/core.c
> > index f732642e3e09..1a81e9cc9e5d 100644
> > --- a/kernel/sched/core.c
> > +++ b/kernel/sched/core.c
> > @@ -290,6 +290,10 @@ static void sched_core_assert_empty(void)
> >  static void __sched_core_enable(void)
> >  {
> >  	static_branch_enable(&__sched_core_enabled);
> > +	/*
> > +	 * Ensure raw_spin_rq_*lock*() have completed before flipping.
> > +	 */
> > +	synchronize_sched();
>
> synchronize_rcu()
Moo, I actually like synchronize_sched() because it indicates it matches a preempt disabled region.
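The pairing in question, as a minimal standalone sketch (not the patch itself; example_key, reader() and updater() are made-up names). Since the RCU flavours were consolidated, synchronize_rcu() also waits out preempt-disabled regions:

#include <linux/jump_label.h>
#include <linux/preempt.h>
#include <linux/rcupdate.h>

static DEFINE_STATIC_KEY_FALSE(example_key);	/* made-up name */

static void reader(void)			/* made-up name */
{
	preempt_disable();
	if (static_branch_unlikely(&example_key)) {
		/* behaviour when the key is observed enabled */
	} else {
		/* behaviour when the key is observed disabled */
	}
	preempt_enable();
}

static void updater(void)			/* made-up name */
{
	static_branch_enable(&example_key);
	/*
	 * A preempt-disabled region is an RCU read-side critical
	 * section, so after this returns every reader() that sampled
	 * the old key value has left its preempt-disabled region.
	 */
	synchronize_rcu();
}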
> >  	__sched_core_flip(true);
> >  	sched_core_assert_empty();
> >  }
> > @@ -449,16 +453,22 @@ void raw_spin_rq_lock_nested(struct rq *rq, int subclass)
> >  {
> >  	raw_spinlock_t *lock;
> >
> > +	preempt_disable();
> >  	if (sched_core_disabled()) {
> >  		raw_spin_lock_nested(&rq->__lock, subclass);
> > +		/* preempt *MUST* still be disabled here */
> > +		preempt_enable_no_resched();
> >  		return;
> >  	}
>
> This approach looks good to me. I'm guessing you went this route
> instead of doing the re-check after locking in order to optimize the
> disabled case?
Exactly.
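For contrast, the re-check-after-locking alternative Josh alludes to would look roughly like this (a sketch only, not code from the patch; it assumes the series' rq_lockp() helper returning the rq's currently-active lock):

void raw_spin_rq_lock_nested(struct rq *rq, int subclass)
{
	raw_spinlock_t *lock;

	for (;;) {
		lock = rq_lockp(rq);
		raw_spin_lock_nested(lock, subclass);
		/* re-check: a core-sched flip may have switched locks */
		if (likely(lock == rq_lockp(rq)))
			return;
		raw_spin_unlock(lock);
	}
}

A retry loop like that is needed on the enabled path anyway, but making it the only path would tax every acquisition with the second rq_lockp() load even while core scheduling is off; the sched_core_disabled() fast path above takes &rq->__lock directly instead.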
> Recommend a comment that the preempt_disable() here pairs with the
> synchronize_rcu() in __sched_core_enable().
Fair enough.
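Presumably something along these lines (untested guess at wording and placement):

@@ void raw_spin_rq_lock_nested(struct rq *rq, int subclass)
 {
 	raw_spinlock_t *lock;

+	/* Matches synchronize_rcu() in __sched_core_enable() */
 	preempt_disable();
 	if (sched_core_disabled()) {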