From: bsegall@google.com
Subject: Re: [RFC] sched/fair: hard lockup in sched_cfs_period_timer
Date: Tue, 12 Mar 2019 10:29:37 -0700

Phil Auld <pauld@redhat.com> writes:
> On Mon, Mar 11, 2019 at 10:44:25AM -0700 bsegall@google.com wrote:
>> Phil Auld <pauld@redhat.com> writes:
>>
>> > On Wed, Mar 06, 2019 at 11:25:02AM -0800 bsegall@google.com wrote:
>> >> Phil Auld <pauld@redhat.com> writes:
>> >>
>> >> > On Tue, Mar 05, 2019 at 12:45:34PM -0800 bsegall@google.com wrote:
>> >> >> Phil Auld <pauld@redhat.com> writes:
>> >> >>
>> >> >> > Interestingly, if I limit the number of child cgroups to the number of
>> >> >> > them I'm actually putting processes into (16 down from 2500) the problem
>> >> >> > does not reproduce.
>> >> >>
>> >> >> That is indeed interesting, and definitely not something we'd want to
>> >> >> matter. (Particularly if it's not root->a->b->c...->throttled_cgroup or
>> >> >> root->throttled->a->...->thread vs root->throttled_cgroup, which is what
>> >> >> I was originally thinking of)
>> >> >>
>> >> > The locking may be a red herring.
>> >> >
>> >> > The setup is root->throttled->a where a is 1-2500. There are 4 threads in
>> >> > each of the first 16 a groups. The parent, throttled, is where the
>> >> > cfs_period/quota_us are set.
>> >> >
>> >> > I wonder if the problem is the walk_tg_tree_from() call in unthrottle_cfs_rq().
>> >> >
>> >> > The distribute_cfg_runtime looks to be O(n * m) where n is number of
>> >> > throttled cfs_rqs and m is the number of child cgroups. But I'm not
>> >> > completely clear on how the hierarchical cgroups play together here.
>> >> >
>> >> > I'll pull on this thread some.
>> >> >
>> >> > Thanks for your input.
>> >> >
>> >> >
>> >> > Cheers,
>> >> > Phil
>> >>
>> >> Yeah, that isn't under the cfs_b lock, but is still part of distribute
>> >> (and under rq lock, which might also matter). I was thinking too much
>> >> about just the cfs_b regions. I'm not sure there's any good general
>> >> optimization there.
>> >>
>> >
>> > It's really an edge case, but the watchdog NMI is pretty painful.
>> >
>> >> I suppose cfs_rqs (tgs/cfs_bs?) could have "nearest
>> >> ancestor with a quota" pointer and ones with quota could have
>> >> "descendants with quota" list, parallel to the children/parent lists of
>> >> tgs. Then throttle/unthrottle would only have to visit these lists, and
>> >> child cgroups/cfs_rqs without their own quotas would just check
>> >> cfs_rq->nearest_quota_cfs_rq->throttle_count. throttled_clock_task_time
>> >> can also probably be tracked there.
>> >
>> > That seems like it would add a lot of complexity for this edge case. Maybe
>> > it would be acceptible to use the safety valve like my first example, or
>> > something like the below which will tune the period up until it doesn't
>> > overrun for ever. The down side of this one is it does change the user's
>> > settings, but that could be preferable to an NMI crash.
>>
>> Yeah, I'm not sure what solution is best here, but one of the solutions
>> should be done.
>>
>> >
>> > Cheers,
>> > Phil
>> >
>> > diff --git a/kernel/sched/fair.c b/kernel/sched/fair.c
>> > index 310d0637fe4b..78f9e28adc7b 100644
>> > --- a/kernel/sched/fair.c
>> > +++ b/kernel/sched/fair.c
>> > @@ -4859,16 +4859,42 @@ static enum hrtimer_restart sched_cfs_slack_timer(struct hrtimer *timer)
>> > 	return HRTIMER_NORESTART;
>> > }
>> >
>> > +extern const u64 max_cfs_quota_period;
>> > +s64 cfs_quota_period_autotune_thresh = 100 * NSEC_PER_MSEC;
>> > +int cfs_quota_period_autotune_shift = 4; /* 100 / 16 = 6.25% */
>>
>> Letting it spin for 100ms and then only increasing by 6% seems extremely
>> generous. If we went this route I'd probably say "after looping N
>> times, set the period to time taken / N + X%" where N is like 8 or
>> something. I think I'd probably perfer something like this to the
>> previous "just abort and let it happen again next interrupt" one.
>
> Okay. I'll try to spin something up that does this. It may be a little
> trickier to keep the quota proportional to the new period. I think that's
> important since we'll be changing the user's setting.
>
> Do you mean to have it break when it hits N and recalculates the period or
> reset the counter and keep going?
>
In theory you should be fine doing it once more I think? And yeah, keeping the quota correct is a bit more annoying given you have to use fixed point math.
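For concreteness, here is a rough userspace sketch of the proportional rescaling being discussed (everything here is invented for illustration: the variable names, the N=8 loop count, the 6.25% headroom, and the example magnitudes; this is not a proposed patch). After the period timer has looped N times, the period becomes roughly the time the loops took divided by N plus some slack, and the quota is scaled by the same ratio in 64-bit integer math so the user's effective bandwidth fraction is unchanged:

/*
 * Illustrative userspace sketch only (not the kernel patch).  Models
 * the "after looping N times, set the period to time taken / N + X%"
 * idea and keeps the quota proportional to the new period so the
 * user's effective bandwidth is unchanged.  All values are examples.
 */
#include <stdio.h>
#include <stdint.h>

#define NSEC_PER_MSEC		1000000ULL
#define MAX_CFS_QUOTA_PERIOD	(1000ULL * NSEC_PER_MSEC)	/* 1s cap on the period */

int main(void)
{
	uint64_t old_period = 100 * NSEC_PER_MSEC;	/* current period, ns */
	uint64_t old_quota  =  50 * NSEC_PER_MSEC;	/* current quota, ns */
	uint64_t elapsed    = 900 * NSEC_PER_MSEC;	/* time the N timer loops took */
	unsigned int nloops = 8;			/* "N" from the thread */

	/* new period = time taken / N, plus ~6.25% headroom (>> 4) */
	uint64_t new_period = elapsed / nloops;
	new_period += new_period >> 4;
	if (new_period > MAX_CFS_QUOTA_PERIOD)
		new_period = MAX_CFS_QUOTA_PERIOD;

	/*
	 * Scale the quota by the same ratio, multiplying before dividing
	 * so nothing is lost to integer truncation.  For these magnitudes
	 * the product stays well inside a u64.
	 */
	uint64_t new_quota = old_quota * new_period / old_period;

	printf("period %llu -> %llu ns, quota %llu -> %llu ns\n",
	       (unsigned long long)old_period, (unsigned long long)new_period,
	       (unsigned long long)old_quota, (unsigned long long)new_quota);
	return 0;
}

In the kernel itself the 64-bit division would presumably go through div64_u64() and the adjustment would happen under the cfs_b lock inside sched_cfs_period_timer(), but the proportional-scaling arithmetic is the same.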