Subject: Re: [RFC 15/16] sched/fair: Account kthread runtime debt for CFS bandwidth
    On Wed, Jan 05, 2022 at 07:46:55PM -0500, Daniel Jordan wrote:
    > As before, helpers in multithreaded jobs don't honor the main thread's
    > CFS bandwidth limits, which could lead to the group exceeding its quota.
    >
    > Fix it by having helpers remote charge their CPU time to the main
    > thread's task group. A helper calls a pair of new interfaces
    > cpu_cgroup_remote_begin() and cpu_cgroup_remote_charge() (see function
    > header comments) to achieve this.
    >
    > This is just supposed to start a discussion, so it's pretty simple.
    > Once a kthread has finished a remote charging period with
    > cpu_cgroup_remote_charge(), its runtime is subtracted from the target
    > task group's runtime (cfs_bandwidth::runtime) and any remainder is saved
    > as debt (cfs_bandwidth::debt) to pay off in later periods.
    >
    > Remote charging tasks aren't throttled when the group reaches its quota,
    > and a task group doesn't run at all until its debt is completely paid,
    > but these shortcomings can be addressed if the approach ends up being
    > taken.

    > diff --git a/kernel/sched/fair.c b/kernel/sched/fair.c
    > index 44c452072a1b..3c2d7f245c68 100644
    > --- a/kernel/sched/fair.c
    > +++ b/kernel/sched/fair.c
    > @@ -4655,10 +4655,19 @@ static inline u64 sched_cfs_bandwidth_slice(void)
    >   */
    >  void __refill_cfs_bandwidth_runtime(struct cfs_bandwidth *cfs_b)
    >  {
    > -        if (unlikely(cfs_b->quota == RUNTIME_INF))
    > +        u64 quota = cfs_b->quota;
    > +        u64 payment;
    > +
    > +        if (unlikely(quota == RUNTIME_INF))
    >                  return;
    >
    > -        cfs_b->runtime += cfs_b->quota;
    > +        if (cfs_b->debt) {
    > +                payment = min(quota, cfs_b->debt);
    > +                cfs_b->debt -= payment;
    > +                quota -= payment;
    > +        }
    > +
    > +        cfs_b->runtime += quota;
    >          cfs_b->runtime = min(cfs_b->runtime, cfs_b->quota + cfs_b->burst);
    >  }

    It might be easier to make cfs_bandwidth::runtime an s64 and make it go
    negative.
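
    Roughly like so (completely untested sketch; assumes cfs_bandwidth::runtime
    becomes s64 and the separate ::debt field goes away, debt simply being
    negative runtime):

        void __refill_cfs_bandwidth_runtime(struct cfs_bandwidth *cfs_b)
        {
                if (unlikely(cfs_b->quota == RUNTIME_INF))
                        return;

                /* negative runtime is outstanding debt; the refill pays it off */
                cfs_b->runtime += cfs_b->quota;
                cfs_b->runtime = min_t(s64, cfs_b->runtime,
                                       cfs_b->quota + cfs_b->burst);
        }

    Then incur_cfs_debt() reduces to cfs_b->runtime -= debt; and the refill
    above does the rest.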

    > @@ -5406,6 +5415,32 @@ static void __maybe_unused unthrottle_offline_cfs_rqs(struct rq *rq)
    >          rcu_read_unlock();
    >  }
    >
    > +static void incur_cfs_debt(struct rq *rq, struct sched_entity *se,
    > +                           struct task_group *tg, u64 debt)
    > +{
    > +        if (!cfs_bandwidth_used())
    > +                return;
    > +
    > +        while (tg != &root_task_group) {
    > +                struct cfs_rq *cfs_rq = tg->cfs_rq[cpu_of(rq)];
    > +
    > +                if (cfs_rq->runtime_enabled) {
    > +                        struct cfs_bandwidth *cfs_b = &tg->cfs_bandwidth;
    > +                        u64 payment;
    > +
    > +                        raw_spin_lock(&cfs_b->lock);
    > +
    > +                        payment = min(cfs_b->runtime, debt);
    > +                        cfs_b->runtime -= payment;

    At this point it might hit 0 (or go negative if/when you do the above)
    and you'll need to throttle the group.
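
    Something like this, perhaps (untested; reuses the resched-to-throttle
    pattern from __account_cfs_rq_runtime(), the exact condition is
    hand-waved):

        /*
         * If the pool is now empty, poke the CPU so the usual bandwidth
         * machinery (check_cfs_rq_runtime() and friends) gets a chance
         * to throttle the group.
         */
        if (!cfs_b->runtime && cfs_rq->curr)
                resched_curr(rq);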

    > +                        cfs_b->debt += debt - payment;
    > +
    > +                        raw_spin_unlock(&cfs_b->lock);
    > +                }
    > +
    > +                tg = tg->parent;
    > +        }
    > +}

    So part of the problem I have with this is that these external things
    can consume all the bandwidth and basically indefinitely starve the
    group.

    This is doubly so if you're going to account things like softirq network
    processing.

    Also, why does the whole charging API have a task argument? It is either
    current or NULL in the case of things like softirq; neither really makes
    sense as an argument.

    Also, by virtue of this being a start-stop annotation interface, the
    accrued time might be arbitrarily large and arbitrarily delayed. I'm not
    sure that's sensible.

    For tasks it might be better to mark the task and have the tick DTRT
    instead of later trying to 'migrate' the time.
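
    Something along the lines of (very rough sketch; p->remote_charge_tg is a
    made-up per-task marker that the begin/end API would set and clear, and
    the hook in update_curr() is hypothetical):

        /* in update_curr(), once delta_exec is known (task entities only) */
        if (entity_is_task(curr)) {
                struct task_struct *p = task_of(curr);

                if (unlikely(p->remote_charge_tg))
                        incur_cfs_debt(rq_of(cfs_rq), curr,
                                       p->remote_charge_tg, delta_exec);
        }

    That bounds each charge to at most a tick's worth of runtime and keeps it
    on the CPU where the time was actually consumed.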
