Subject: Re: [PATCH v2] sched: async unthrottling for cfs bandwidth
Hi Michal,

Thanks for taking a look.

On Wed, Nov 2, 2022 at 9:59 AM Michal Koutný <mkoutny@suse.com> wrote:
>
> Hello.
>
> On Wed, Oct 26, 2022 at 03:44:49PM -0700, Josh Don <joshdon@google.com> wrote:
> > To fix this, we can instead unthrottle cfs_rq's asynchronously via a
> > CSD. Each cpu is responsible for unthrottling itself, thus sharding the
> > total work more fairly across the system, and avoiding hard lockups.
>
> FIFO behavior of the cfs_b->throttled_cfs_rq list is quite important
> to ensure fairness of throttling (historically, when FIFO ordering
> wasn't honored, it caused some cfs_rq starvation issues).
>
> Despite its name, distribute_cfs_runtime() doesn't distribute the
> runtime, the time is pulled inside assign_cfs_rq_runtime() (but that's
> already on target cpu).
> Currently, it's all synchronized under cfs_b->lock but with your change,
> throttled cfs_rq would be dissolved among cpus that'd run concurrently
> (assign_cfs_rq_runtime() still takes cfs_b->lock but it won't be
> necessarily in the unthrottling order).

I don't think my patch meaningfully regresses this; the prior behavior
was potentially unfair in much the same way.

Without my patch, distribute_cfs_runtime() will unthrottle the
cfs_rq's, and as you point out, it doesn't actually give them any real
quota; it lets assign_cfs_rq_runtime() take care of that. But that
happens asynchronously on those cpus. If they are idle, they wait for
an IPI from the resched_curr() in unthrottle_cfs_rq(); otherwise they
simply wait until potentially the next rescheduling point. So even
today we are far from guaranteed that the order in which the cpus pull
actual quota via assign_cfs_rq_runtime() matches the order in which
they were unthrottled from the list.
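
To make that concrete, here is a rough sketch of the existing flow
(names follow kernel/sched/fair.c, but this is heavily simplified and
the locking is elided, so treat it as illustration rather than the
real implementation):

	static void distribute_cfs_runtime(struct cfs_bandwidth *cfs_b)
	{
		struct cfs_rq *cfs_rq;

		/* FIFO walk of the throttled list, under cfs_b->lock. */
		list_for_each_entry(cfs_rq, &cfs_b->throttled_cfs_rq,
				    throttled_list) {
			/*
			 * No quota is handed out here; this re-enqueues
			 * the entities and, via resched_curr(), kicks the
			 * target cpu (an IPI if it is idle).
			 */
			unthrottle_cfs_rq(cfs_rq);
		}
		/*
		 * Each cpu then pulls actual quota in
		 * assign_cfs_rq_runtime() at its next scheduling point,
		 * in whatever order the cpus happen to get there.
		 */
	}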

> > +static inline void __unthrottle_cfs_rq_async(struct cfs_rq *cfs_rq)
> > [...]
> > +	if (rq == this_rq()) {
> > +		unthrottle_cfs_rq(cfs_rq);
> > +		return;
> > +	}
>
> It was pointed out to me that generic_exec_single() does something
> similar.
> Wouldn't the bandwidth control code flow be simpler relying on that?

We already hold the rq lock, so we can't rely on the
generic_exec_single() special case for the local cpu: it would invoke
the CSD function synchronously, and that function takes the rq lock,
which would double-lock.
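
To illustrate, a simplified sketch of the async helper (the per-rq
cfsb_csd and cfsb_csd_list fields are from the patch, but the body
here is trimmed down and not the literal patch code):

	static inline void __unthrottle_cfs_rq_async(struct cfs_rq *cfs_rq)
	{
		struct rq *rq = rq_of(cfs_rq);
		bool first;

		lockdep_assert_rq_held(rq);	/* caller already holds rq->lock */

		if (rq == this_rq()) {
			/* rq->lock is held, so unthrottle inline. */
			unthrottle_cfs_rq(cfs_rq);
			return;
		}

		/*
		 * Remote cpu: queue onto the per-rq CSD list.  The CSD
		 * callback takes rq->lock itself, which is why letting
		 * generic_exec_single() run it synchronously on the
		 * local cpu would double-lock.
		 */
		first = list_empty(&rq->cfsb_csd_list);
		list_add_tail(&cfs_rq->throttled_csd_list, &rq->cfsb_csd_list);
		if (first)
			smp_call_function_single_async(cpu_of(rq), &rq->cfsb_csd);
	}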

> Also, can a particular cfs_rq be on both cfs_b->throttled_csd_list and
> cfs_b->throttled_cfs_rq lists at any moment?
> I wonder if having a single list_head node in cfs_rq would be feasible
> (and hence enforcing this constraint in data).

That's an interesting idea; this could be rewritten so that
distribute() pulls the entity off that list and moves it onto the
throttled_csd_list (see the sketch below). We never actually need an
entity to be on both lists at the same time.
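
Roughly (a hypothetical sketch, not something implemented in this
patch; it reuses the existing cfs_rq->throttled_list node and elides
the per-rq locking):

	static void distribute_cfs_runtime(struct cfs_bandwidth *cfs_b)
	{
		struct cfs_rq *cfs_rq, *tmp;

		lockdep_assert_held(&cfs_b->lock);

		list_for_each_entry_safe(cfs_rq, tmp, &cfs_b->throttled_cfs_rq,
					 throttled_list) {
			struct rq *rq = rq_of(cfs_rq);

			/*
			 * list_move_tail() removes the node from
			 * cfs_b->throttled_cfs_rq as it adds it to the
			 * per-rq CSD list, so a cfs_rq can never be on
			 * both lists at once; the constraint is enforced
			 * in data.
			 */
			list_move_tail(&cfs_rq->throttled_list,
				       &rq->cfsb_csd_list);
			smp_call_function_single_async(cpu_of(rq),
						       &rq->cfsb_csd);
		}
	}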

I'll wait to see if Peter has any comments, but that change could be
made in a v3 of this patch.

Best,
Josh
