From: Josh Don <>
Date: Fri, 18 Nov 2022 11:25:09 -0800
Subject: Re: [PATCH v3] sched: async unthrottling for cfs bandwidth
On Fri, Nov 18, 2022 at 4:47 AM Peter Zijlstra <peterz@infradead.org> wrote:
>
> preempt_disable() -- through rq->lock -- also holds off rcu. Strictly
> speaking this here is superfluous. But if you want it as an annotation,
> that's fine I suppose.
Yep, I purely added this as extra annotation for future readers.
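For readers following along: rq->lock is a raw spinlock, so holding it
disables preemption, and (since the RCU flavor consolidation) a
preemption-disabled region already acts as an RCU read-side critical
section, which is Peter's point about the rcu_read_lock() being strictly
superfluous. A minimal sketch of the pattern in question, with
illustrative comments (not the verbatim patch text):

	rcu_read_lock();	/* annotation: rq->lock below already holds off RCU */
	list_for_each_entry_rcu(cfs_rq, &cfs_b->throttled_cfs_rq,
				throttled_list) {
		struct rq *rq = rq_of(cfs_rq);
		struct rq_flags rf;

		rq_lock_irqsave(rq, &rf);	/* preemption disabled from here on */
		/* ... distribute runtime and unthrottle this cfs_rq ... */
		rq_unlock_irqrestore(rq, &rf);
	}
	rcu_read_unlock();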
> Ideally we'd first queue all the remotes and then process local, but
> given how all this is organized that doesn't seem trivial to arrange.
>
> Maybe have this function return false when local and save that cfs_rq in
> a local var to process again later, dunno, that might turn messy.
Maybe something like this? Apologies for inline diff formatting.
diff --git a/kernel/sched/fair.c b/kernel/sched/fair.c
index 012ec9d03811..100dae6023da 100644
--- a/kernel/sched/fair.c
+++ b/kernel/sched/fair.c
@@ -5520,12 +5520,15 @@ static bool distribute_cfs_runtime(struct cfs_bandwidth *cfs_b)
 	struct cfs_rq *cfs_rq;
 	u64 runtime, remaining = 1;
 	bool throttled = false;
+	int this_cpu = smp_processor_id();
+	struct cfs_rq *local_unthrottle = NULL;
+	struct rq *rq;
+	struct rq_flags rf;
 
 	rcu_read_lock();
 	list_for_each_entry_rcu(cfs_rq, &cfs_b->throttled_cfs_rq,
 				throttled_list) {
-		struct rq *rq = rq_of(cfs_rq);
-		struct rq_flags rf;
+		rq = rq_of(cfs_rq);
 
 		if (!remaining) {
 			throttled = true;
@@ -5556,14 +5559,36 @@ static bool distribute_cfs_runtime(struct cfs_bandwidth *cfs_b)
 		cfs_rq->runtime_remaining += runtime;
 
 		/* we check whether we're throttled above */
-		if (cfs_rq->runtime_remaining > 0)
-			unthrottle_cfs_rq_async(cfs_rq);
+		if (cfs_rq->runtime_remaining > 0) {
+			if (cpu_of(rq) != this_cpu ||
+			    SCHED_WARN_ON(local_unthrottle)) {
+				unthrottle_cfs_rq_async(cfs_rq);
+			} else {
+				local_unthrottle = cfs_rq;
+			}
+		} else {
+			throttled = true;
+		}
 
 next:
 		rq_unlock_irqrestore(rq, &rf);
 	}
 	rcu_read_unlock();
 
+	/*
+	 * We prefer to stage the async unthrottles of all the remote cpus
+	 * before we do the inline unthrottle locally. Note that
+	 * unthrottle_cfs_rq_async() on the local cpu is actually synchronous,
+	 * but it includes extra WARNs to make sure the cfs_rq really is
+	 * still throttled.
+	 */
+	if (local_unthrottle) {
+		rq = cpu_rq(this_cpu);
+		rq_lock_irqsave(rq, &rf);
+		unthrottle_cfs_rq_async(local_unthrottle);
+		rq_unlock_irqrestore(rq, &rf);
+	}
+
 	return throttled;
 }
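To make the comment in the diff concrete: the "async" helper is only
asynchronous for remote CPUs; for the local rq it unthrottles inline,
since the caller already holds the rq lock. A rough reconstruction from
the snippets quoted in this thread (the local-CPU short-circuit is my
paraphrase of the comment above, not the verbatim patch):

	static void unthrottle_cfs_rq_async(struct cfs_rq *cfs_rq)
	{
		struct rq *rq = rq_of(cfs_rq);

		lockdep_assert_rq_held(rq);

		/* Local rq: we already hold its lock, unthrottle synchronously. */
		if (rq == this_rq()) {
			unthrottle_cfs_rq(cfs_rq);
			return;
		}

		/* Already enqueued for a pending IPI? */
		if (SCHED_WARN_ON(!list_empty(&cfs_rq->throttled_csd_list)))
			return;

		list_add_tail(&cfs_rq->throttled_csd_list, &rq->cfsb_csd_list);
		smp_call_function_single_async(cpu_of(rq), &rq->cfsb_csd);
	}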
Note that one change we definitely want is the extra setting of
throttled = true in the case that cfs_rq->runtime_remaining <= 0, to
catch the case where we run out of runtime to distribute on the last
entity in the list.

> > +
> > +	/* Already enqueued */
> > +	if (SCHED_WARN_ON(!list_empty(&cfs_rq->throttled_csd_list)))
> > +		return;
> > +
> > +	list_add_tail(&cfs_rq->throttled_csd_list, &rq->cfsb_csd_list);
> > +
> > +	smp_call_function_single_async(cpu_of(rq), &rq->cfsb_csd);
>
> Hurmph.. so I was expecting something like:
>
>	first = list_empty(&rq->cfsb_csd_list);
>	list_add_tail(&cfs_rq->throttled_csd_list, &rq->cfsb_csd_list);
>	if (first)
>		smp_call_function_single_async(cpu_of(rq), &rq->cfsb_csd);
>
> But I suppose I'm remembering the 'old' version. I don't think it is
> broken as written. There's a very narrow window where you'll end up
> sending a second IPI for naught, but meh.
The CSD doesn't get unlocked until right before we call the func(). But you're right that that's a (very) narrow window for an extra IPI. Please feel free to modify the patch with that diff if you like.
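For completeness, the 'first' variant Peter sketches above turns the IPI
into a per-batch kick: only the enqueue that transitions
rq->cfsb_csd_list from empty to non-empty sends the IPI, and the CSD
handler then drains the whole list under the rq lock. A hedged sketch of
the drain side (handler body is illustrative; only the cfsb_csd and
cfsb_csd_list names come from the patch):

	static void __cfsb_csd_unthrottle(void *arg)
	{
		struct rq *rq = arg;
		struct rq_flags rf;
		struct cfs_rq *cursor, *tmp;

		/* Runs in IPI context, so interrupts are already disabled. */
		rq_lock(rq, &rf);
		list_for_each_entry_safe(cursor, tmp, &rq->cfsb_csd_list,
					 throttled_csd_list) {
			list_del_init(&cursor->throttled_csd_list);
			if (cfs_rq_throttled(cursor))
				unthrottle_cfs_rq(cursor);
		}
		rq_unlock(rq, &rf);
	}

Since the handler empties the list in one pass, a single IPI covers every
cfs_rq queued before the handler runs, which is why the enqueue side only
needs to kick when the list was previously empty.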
> > > +}
>
> Let me go queue this thing, we can always improve upon matters later.
Thanks! Please add at least the extra assignment of 'throttled = true' from the diff above, but feel free to squash both the diffs if it makes sense to you.