Date: Fri, 25 Aug 2023 11:30:05 +0530
Subject: Re: [RFC PATCH 5/7] sched/fair: Adjust the busiest group scanning depth in idle load balance
From: Shrikanth Hegde <>
On 7/27/23 8:05 PM, Chen Yu wrote:
> Scanning the whole sched domain to find the busiest group is time-consuming
> during newidle_balance(). And if a CPU becomes idle, it would be good
> if this idle CPU pulls some tasks from other CPUs as quickly as possible.
>
> Limit the scan depth of newidle_balance(): scan only a limited number
> of sched groups to find a relatively busy group, and pull from it.
> In summary, the more spare time there is in the domain, the more time
> each newidle balance can spend on scanning for a busy group. Although
> the newidle balance has a per-domain max_newidle_lb_cost to decide
> whether to launch the balance or not, ILB_UTIL provides a smaller
> granularity to decide how many groups each newidle balance can scan.
>
> The scanning depth is calculated by the previous periodic load balance
> based on its overall utilization.
>
> Tested on top of v6.5-rc2, on a Sapphire Rapids with 2 x 56C/112T = 224
> CPUs, with the cpufreq governor set to performance and C6 disabled.
>
> First, tested with an extreme synthetic test[1], which launches 224
> processes. Each process is a loop of nanosleep(1 us), which is supposed
> to trigger newidle balance as much as possible:
>
> i=1;while [ $i -le "224" ]; do ./nano_sleep 1000 & i=$(($i+1)); done;
>
> NO_ILB_UTIL + ILB_SNAPSHOT:
>     9.38%  0.45%  [kernel.kallsyms]  [k] newidle_balance
>     6.84%  5.32%  [kernel.kallsyms]  [k] update_sd_lb_stats.constprop.0
>
> ILB_UTIL + ILB_SNAPSHOT:
>     3.35%  0.38%  [kernel.kallsyms]  [k] newidle_balance
>     2.30%  1.81%  [kernel.kallsyms]  [k] update_sd_lb_stats.constprop.0
> [...]
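
To make sure I follow the idea: the depth is published by the previous
periodic load balance, so the busier the domain was then, the fewer
groups the next newidle balance gets to scan. A rough sketch of that
calculation in my own words (the names and the exact scaling here are
my assumptions, not code from the series):

        /*
         * Sketch only: scale the scan depth with the spare capacity
         * observed by the last periodic balance. nr_groups is assumed
         * to be the number of sched groups in the domain; sum_util and
         * total_capacity are the domain-wide statistics it accumulated.
         */
        nr_scan = nr_groups - nr_groups * sum_util / total_capacity;
        WRITE_ONCE(sd_share->nr_sg_scan, nr_scan);

With sum_util == 0 every group is scanned; as utilization approaches
the total capacity, the scan depth shrinks toward a single group.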
> Link: https://raw.githubusercontent.com/chen-yu-surf/tools/master/stress_nanosleep.c #1
> Suggested-by: Tim Chen <tim.c.chen@intel.com>
> Signed-off-by: Chen Yu <yu.c.chen@intel.com>
> ---
>  kernel/sched/fair.c | 20 +++++++++++++++++++-
>  1 file changed, 19 insertions(+), 1 deletion(-)
>
> diff --git a/kernel/sched/fair.c b/kernel/sched/fair.c
> index 6925813db59b..4e360ed16e14 100644
> --- a/kernel/sched/fair.c
> +++ b/kernel/sched/fair.c
> @@ -10195,7 +10195,13 @@ static inline void update_sd_lb_stats(struct lb_env *env, struct sd_lb_stats *sd
>  	struct sg_lb_stats *local = &sds->local_stat;
>  	struct sg_lb_stats tmp_sgs;
>  	unsigned long sum_util = 0;
> -	int sg_status = 0;
> +	int sg_status = 0, nr_sg_scan;
> +	/* only newidle CPU can load the snapshot */
> +	bool ilb_can_load = env->idle == CPU_NEWLY_IDLE &&
> +			    sd_share && READ_ONCE(sd_share->total_capacity);
> +
> +	if (sched_feat(ILB_UTIL) && ilb_can_load)
A suggestion for a small improvement: could the two operands be
swapped? That would save a few cycles by skipping the feature check
when the CPU is not newly idle:

	if (ilb_can_load && sched_feat(ILB_UTIL))
The same comment applies below in this patch, and to PATCH 6/7 as well.
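
Applied to both sites, the checks would then read (a sketch of the
suggested reordering, untested):

	if (ilb_can_load && sched_feat(ILB_UTIL))
		nr_sg_scan = sd_share->nr_sg_scan;
	...
	if (ilb_can_load && sched_feat(ILB_UTIL) && --nr_sg_scan <= 0)
		goto load_snapshot;

Since ilb_can_load is false for every periodic balance, && now
short-circuits before the feature lookup on that path, and the
decrement of nr_sg_scan still happens last, only when both earlier
conditions hold.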
> +		nr_sg_scan = sd_share->nr_sg_scan;
>
>  	do {
>  		struct sg_lb_stats *sgs = &tmp_sgs;
> @@ -10222,6 +10228,9 @@ static inline void update_sd_lb_stats(struct lb_env *env, struct sd_lb_stats *sd
>  			sds->busiest_stat = *sgs;
>  		}
>
> +		if (sched_feat(ILB_UTIL) && ilb_can_load && --nr_sg_scan <= 0)
> +			goto load_snapshot;
> +
Same comment as above.
>  next_group:
>  		/* Now, start updating sd_lb_stats */
>  		sds->total_load += sgs->group_load;
> @@ -10231,6 +10240,15 @@ static inline void update_sd_lb_stats(struct lb_env *env, struct sd_lb_stats *sd
>  		sg = sg->next;
>  	} while (sg != env->sd->groups);
>
> +	ilb_can_load = false;
> +
> +load_snapshot:
> +	if (ilb_can_load) {
> +		/* borrow the statistic of previous periodic load balance */
> +		sds->total_load = READ_ONCE(sd_share->total_load);
> +		sds->total_capacity = READ_ONCE(sd_share->total_capacity);
> +	}
> +
>  	/*
>  	 * Indicate that the child domain of the busiest group prefers tasks
>  	 * go to a child's sibling domains first. NB the flags of a sched group
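
For completeness: as I read the series, the READ_ONCE() here pairs with
WRITE_ONCE() updates done at the end of the periodic load balance (the
ILB_SNAPSHOT part of the series). Roughly, reconstructed from context
rather than quoted from the patches:

	/* periodic balance (not CPU_NEWLY_IDLE): publish the snapshot */
	WRITE_ONCE(sd_share->total_load, sds->total_load);
	WRITE_ONCE(sd_share->total_capacity, sds->total_capacity);

	/* newidle balance: borrow it, as in the hunk above */
	sds->total_load = READ_ONCE(sd_share->total_load);
	sds->total_capacity = READ_ONCE(sd_share->total_capacity);

The pairing only prevents load/store tearing, which should be enough
here since the snapshot is a heuristic anyway.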