    Subject: Re: [PATCH v3] sched: async unthrottling for cfs bandwidth
    On Fri, Nov 18, 2022 at 11:25:09AM -0800, Josh Don wrote:
    > On Fri, Nov 18, 2022 at 4:47 AM Peter Zijlstra <peterz@infradead.org> wrote:
    > >
    > > preempt_disable() -- through rq->lock -- also holds off rcu. Strictly
    > > speaking this here is superfluous. But if you want it as an annotation,
    > > that's fine I suppose.
    >
    > Yep, I purely added this as extra annotation for future readers.
    >
    > > Ideally we'd first queue all the remotes and then process local, but
    > > given how all this is organized that doesn't seem trivial to arrange.
    > >
    > > Maybe have this function return false when local and save that cfs_rq in
    > > a local var to process again later, dunno, that might turn messy.
    >
    > Maybe something like this? Apologies for inline diff formatting.
    >
    > diff --git a/kernel/sched/fair.c b/kernel/sched/fair.c
    > index 012ec9d03811..100dae6023da 100644
    > --- a/kernel/sched/fair.c
    > +++ b/kernel/sched/fair.c
    > @@ -5520,12 +5520,15 @@ static bool distribute_cfs_runtime(struct cfs_bandwidth *cfs_b)
    >  	struct cfs_rq *cfs_rq;
    >  	u64 runtime, remaining = 1;
    >  	bool throttled = false;
    > +	int this_cpu = smp_processor_id();
    > +	struct cfs_rq *local_unthrottle = NULL;
    > +	struct rq *rq;
    > +	struct rq_flags rf;
    >
    >  	rcu_read_lock();
    >  	list_for_each_entry_rcu(cfs_rq, &cfs_b->throttled_cfs_rq,
    >  				throttled_list) {
    > -		struct rq *rq = rq_of(cfs_rq);
    > -		struct rq_flags rf;
    > +		rq = rq_of(cfs_rq);
    >
    >  		if (!remaining) {
    >  			throttled = true;
    > @@ -5556,14 +5559,36 @@ static bool distribute_cfs_runtime(struct cfs_bandwidth *cfs_b)
    >  		cfs_rq->runtime_remaining += runtime;
    >
    >  		/* we check whether we're throttled above */
    > -		if (cfs_rq->runtime_remaining > 0)
    > -			unthrottle_cfs_rq_async(cfs_rq);
    > +		if (cfs_rq->runtime_remaining > 0) {
    > +			if (cpu_of(rq) != this_cpu ||
    > +			    SCHED_WARN_ON(local_unthrottle)) {
    > +				unthrottle_cfs_rq_async(cfs_rq);
    > +			} else {
    > +				local_unthrottle = cfs_rq;
    > +			}
    > +		} else {
    > +			throttled = true;
    > +		}
    >
    >  next:
    >  		rq_unlock_irqrestore(rq, &rf);
    >  	}
    >  	rcu_read_unlock();
    >
    > +	/*
    > +	 * We prefer to stage the async unthrottles of all the remote cpus
    > +	 * before we do the inline unthrottle locally. Note that
    > +	 * unthrottle_cfs_rq_async() on the local cpu is actually synchronous,
    > +	 * but it includes extra WARNs to make sure the cfs_rq really is
    > +	 * still throttled.

    With this said ->

    > +	 */
    > +	if (local_unthrottle) {
    > +		rq = cpu_rq(this_cpu);
    > +		rq_lock_irqsave(rq, &rf);

    Should we add:

    	if (cfs_rq_throttled(local_unthrottle))

    before calling into unthrottle_cfs_rq_async(local_unthrottle) to avoid a
    potential WARN?
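
    I.e., the tail of the hunk above would become something like this
    (untested sketch, only reusing names already present in your diff):

    	if (local_unthrottle) {
    		rq = cpu_rq(this_cpu);
    		rq_lock_irqsave(rq, &rf);
    		/*
    		 * Re-check: the cfs_rq may have been unthrottled while
    		 * the rq lock was dropped in the loop above.
    		 */
    		if (cfs_rq_throttled(local_unthrottle))
    			unthrottle_cfs_rq_async(local_unthrottle);
    		rq_unlock_irqrestore(rq, &rf);
    	}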

    As for whether the local cfs_rq can already be unthrottled by the time
    the rq lock is re-acquired here: I suppose it can be, e.g. if another
    user sets a new quota for this task group in the window between the rq
    lock being dropped in the loop above and re-acquired here, IIUC.
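
    For example, taking the unthrottle done under rq lock in
    tg_set_cfs_bandwidth() as the other path (going from current mainline;
    the exact racing call chain here is my assumption), roughly:

    	distribute_cfs_runtime()              tg_set_cfs_bandwidth()
    	------------------------------        ------------------------------
    	rq_lock_irqsave(rq, &rf);
    	local_unthrottle = cfs_rq;
    	rq_unlock_irqrestore(rq, &rf);
    	                                      rq_lock_irq(rq, &rf);
    	                                      if (cfs_rq_throttled(cfs_rq))
    	                                              unthrottle_cfs_rq(cfs_rq);
    	                                      rq_unlock_irq(rq, &rf);
    	rq_lock_irqsave(rq, &rf);
    	unthrottle_cfs_rq_async(local_unthrottle);
    	  -> cfs_rq is no longer throttled, hits the WARN
    	rq_unlock_irqrestore(rq, &rf);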

    > +		unthrottle_cfs_rq_async(local_unthrottle);
    > +		rq_unlock_irqrestore(rq, &rf);
    > +	}
    > +
    >  	return throttled;
    >  }
    >
