From: Vincent Guittot <>
Date: Wed, 4 Mar 2020 10:43:51 +0100
Subject: Re: [RFC PATCH] sched: fix the nonsense shares when load of cfs_rq is too, small
On Wed, 4 Mar 2020 at 09:47, Vincent Guittot <vincent.guittot@linaro.org> wrote:
>
> On Wed, 4 Mar 2020 at 02:19, 王贇 <yun.wang@linux.alibaba.com> wrote:
> >
> >
> >
> > On 2020/3/4 上午3:52, Peter Zijlstra wrote:
> > [snip]
> > >> The reason is because we have group B with shares as 2, which make
> > >> the group A 'cfs_rq->load.weight' very small.
> > >>
> > >> And in calc_group_shares() we calculate shares as:
> > >>
> > >>   load = max(scale_load_down(cfs_rq->load.weight), cfs_rq->avg.load_avg);
> > >>   shares = (tg_shares * load) / tg_weight;
> > >>
> > >> Since the 'cfs_rq->load.weight' is too small, the load become 0
> > >> in here, although 'tg_shares' is 102400, shares of the se which
> > >> stand for group A on root cfs_rq become 2.
> > >
> > > Argh, because A->cfs_rq.load.weight is B->se.load.weight which is
> > > B->shares/nr_cpus.
> >
> > Yeah, that's exactly why it happens, even the share 2 scale up to 2048,
> > on 96 CPUs platform, each CPU get only 21 in equal case.
> >
> > >
> > >> While the se of D on root cfs_rq is far more bigger than 2, so it
> > >> wins the battle.
> > >>
> > >> This patch add a check on the zero load and make it as MIN_SHARES
> > >> to fix the nonsense shares, after applied the group C wins as
> > >> expected.
> > >>
> > >> Signed-off-by: Michael Wang <yun.wang@linux.alibaba.com>
> > >> ---
> > >>  kernel/sched/fair.c | 2 ++
> > >>  1 file changed, 2 insertions(+)
> > >>
> > >> diff --git a/kernel/sched/fair.c b/kernel/sched/fair.c
> > >> index 84594f8aeaf8..53d705f75fa4 100644
> > >> --- a/kernel/sched/fair.c
> > >> +++ b/kernel/sched/fair.c
> > >> @@ -3182,6 +3182,8 @@ static long calc_group_shares(struct cfs_rq *cfs_rq)
> > >>  	tg_shares = READ_ONCE(tg->shares);
> > >>
> > >>  	load = max(scale_load_down(cfs_rq->load.weight), cfs_rq->avg.load_avg);
> > >> +	if (!load && cfs_rq->load.weight)
> > >> +		load = MIN_SHARES;
> > >>
> > >>  	tg_weight = atomic_long_read(&tg->load_avg);
> > >
> > > Yeah, I suppose that'll do. Hurmph, wants a comment though.
> > >
> > > But that has me looking at other users of scale_load_down(), and doesn't
> > > at least update_tg_cfs_load() suffer the same problem?
> >
> > Good point :-) I'm not sure but is scale_load_down() supposed to scale small
> > value into 0? If not, maybe we should fix the helper to make sure it at
> > least return some real load? like:
> >
> > # define scale_load_down(w) ((w + (1 << SCHED_FIXEDPOINT_SHIFT)) >> SCHED_FIXEDPOINT_SHIFT)
>
> you will add +1 of nice prio for each device
Of course, it's not prio but only weight which is different
>
> should we use instead
> # define scale_load_down(w) ((w >> SCHED_FIXEDPOINT_SHIFT) ? (w >>
> SCHED_FIXEDPOINT_SHIFT) : MIN_SHARES)
>
> Regards,
> Vincent
>
> >
> > Regards,
> > Michael Wang
> >
> > >
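For readers following the thread, here is a standalone userspace sketch of the rounding behaviour being discussed. It assumes SCHED_FIXEDPOINT_SHIFT == 10 and MIN_SHARES == 2 (the 64-bit kernel values); the suffixed macro names are invented for side-by-side comparison and the program is compiled against local copies of the macros, not against the kernel headers.

```c
#include <stdio.h>

#define SCHED_FIXEDPOINT_SHIFT	10	/* assumed, as on 64-bit kernels */
#define MIN_SHARES		2	/* assumed, as in kernel/sched/sched.h */

/* Current helper: small weights (e.g. a per-CPU weight of 21) truncate to 0. */
#define scale_load_down_current(w) \
	((w) >> SCHED_FIXEDPOINT_SHIFT)

/* Michael's proposal: never returns 0, but every weight gains +1 after scaling. */
#define scale_load_down_round_up(w) \
	(((w) + (1 << SCHED_FIXEDPOINT_SHIFT)) >> SCHED_FIXEDPOINT_SHIFT)

/* Vincent's proposal: truncate as before, but clamp the result to MIN_SHARES. */
#define scale_load_down_clamped(w) \
	(((w) >> SCHED_FIXEDPOINT_SHIFT) ? \
	 ((w) >> SCHED_FIXEDPOINT_SHIFT) : MIN_SHARES)

int main(void)
{
	/* 21 is roughly the per-CPU weight of group A on 96 CPUs, as above. */
	unsigned long weights[] = { 2, 21, 2048, 1048576 };
	unsigned long i;

	for (i = 0; i < sizeof(weights) / sizeof(weights[0]); i++) {
		unsigned long w = weights[i];

		printf("w=%-8lu current=%-6lu round_up=%-6lu clamped=%lu\n",
		       w,
		       scale_load_down_current(w),
		       scale_load_down_round_up(w),
		       scale_load_down_clamped(w));
	}
	return 0;
}
```

The round_up column shows the systematic +1 of weight Vincent points out (2048 scales to 3 instead of 2, 1048576 to 1025 instead of 1024), while the clamped variant only changes the inputs that would otherwise truncate to 0.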