Date:	Thu, 19 Dec 2019 11:04:21 +0100
From:	Peter Zijlstra <>
Subject:	Re: [PATCH] sched, fair: Allow a small degree of load imbalance between SD_NUMA domains
On Wed, Dec 18, 2019 at 03:44:02PM +0000, Mel Gorman wrote:
> @@ -8690,6 +8686,38 @@ static inline void calculate_imbalance(struct lb_env *env, struct sd_lb_stats *s
> 			env->migration_type = migrate_task;
> 			env->imbalance = max_t(long, 0, (local->idle_cpus -
> 						 busiest->idle_cpus) >> 1);
> +
> +out_spare:
> +		/*
> +		 * Whether balancing the number of running tasks or the number
> +		 * of idle CPUs, consider allowing some degree of imbalance if
> +		 * migrating between NUMA domains.
> +		 */
> +		if (env->sd->flags & SD_NUMA) {
> +			unsigned int imbalance_adj, imbalance_max;
> +
> +			/*
> +			 * imbalance_adj is the allowable degree of imbalance
> +			 * to exist between two NUMA domains. It's calculated
> +			 * relative to imbalance_pct with a minimum of two
> +			 * tasks or idle CPUs.
> +			 */
> +			imbalance_adj = (busiest->group_weight *
> +				(env->sd->imbalance_pct - 100) / 100) >> 1;
> +			imbalance_adj = max(imbalance_adj, 2U);
The '2' here comes from a 'pair of communicating tasks' right? Perhaps more clearly detail that in the comment, such that when we're looking at this code again in a few years time, we're not left wondering wtf that 2 is about :-)
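Something along these lines, perhaps; a rough sketch only, and assuming
the pair-of-tasks reading is in fact what you intended:

	/*
	 * imbalance_adj is the allowable degree of imbalance
	 * to exist between two NUMA domains. It's calculated
	 * relative to imbalance_pct, with a minimum of two
	 * tasks or idle CPUs: the common case is a single
	 * pair of communicating tasks, and pulling one of
	 * them to the remote node just to even out a trivial
	 * imbalance costs more than it saves.
	 */
	imbalance_adj = (busiest->group_weight *
		(env->sd->imbalance_pct - 100) / 100) >> 1;
	imbalance_adj = max(imbalance_adj, 2U);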
> +
> +			/*
> +			 * Ignore imbalance unless busiest sd is close to 50%
> +			 * utilisation. At that point balancing for memory
> +			 * bandwidth and potentially avoiding unnecessary use
> +			 * of HT siblings is as relevant as memory locality.
> +			 */
> +			imbalance_max = (busiest->group_weight >> 1) - imbalance_adj;
> +			if (env->imbalance <= imbalance_adj &&
> +			    busiest->sum_nr_running < imbalance_max) {
> +				env->imbalance = 0;
> +			}
> +		}
> 		return;
> 	}
> 
> --
> Mel Gorman
> SUSE Labs
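For concreteness, here's roughly what these cutoffs evaluate to on a
hypothetical 24-CPU node, assuming imbalance_pct == 125 (which I
believe is still the SD_NUMA default; both numbers are mine, not from
the patch):

	imbalance_adj = (24 * (125 - 100) / 100) >> 1;	/* 6 >> 1 == 3 */
	imbalance_adj = max(imbalance_adj, 2U);		/* stays 3 */
	imbalance_max = (24 >> 1) - imbalance_adj;	/* 12 - 3 == 9 */

So an imbalance of up to 3 tasks is ignored for as long as the busiest
node runs fewer than 9 tasks, and normal balancing takes over from
there on.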