From: Vincent Guittot <>
Date: Mon, 14 Feb 2022 10:48:35 +0100
Subject: Re: [PATCH 1/2] sched/fair: Improve consistency of allowed NUMA balance calculations
On Tue, 8 Feb 2022 at 10:43, Mel Gorman <mgorman@techsingularity.net> wrote:
>
> There are inconsistencies when determining if a NUMA imbalance is allowed
> that should be corrected.
>
> o allow_numa_imbalance changes types and is not always examining
>   the destination group so both the type should be corrected as
>   well as the naming.
> o find_idlest_group uses the sched_domain's weight instead of the
>   group weight which is different to find_busiest_group
> o find_busiest_group uses the source group instead of the destination
>   which is different to task_numa_find_cpu
> o Both find_idlest_group and find_busiest_group should account
>   for the number of running tasks if a move was allowed to be
>   consistent with task_numa_find_cpu
>
> Fixes: 7d2b5dd0bcc4 ("sched/numa: Allow a floating imbalance between NUMA nodes")
> Signed-off-by: Mel Gorman <mgorman@techsingularity.net>
Reviewed-by: Vincent Guittot <vincent.guittot@linaro.org>
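
For readers skimming the thread, this is how the helper reads after the
patch, with a summarising comment added here for illustration (the comment
wording is mine, not part of the patch; both call sites in the hunks below
pass the local/destination group's sum_nr_running + 1 and group_weight):

    /*
     * Allow a NUMA imbalance only while the destination group would stay
     * below roughly 25% busy. "running" already counts the task being
     * placed and "weight" is the number of CPUs in the destination group.
     */
    static inline bool
    allow_numa_imbalance(unsigned int running, unsigned int weight)
    {
            return (running < (weight >> 2));
    }
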
> ---
>  kernel/sched/fair.c | 18 ++++++++++--------
>  1 file changed, 10 insertions(+), 8 deletions(-)
>
> diff --git a/kernel/sched/fair.c b/kernel/sched/fair.c
> index 095b0aa378df..4592ccf82c34 100644
> --- a/kernel/sched/fair.c
> +++ b/kernel/sched/fair.c
> @@ -9003,9 +9003,10 @@ static bool update_pick_idlest(struct sched_group *idlest,
>   * This is an approximation as the number of running tasks may not be
>   * related to the number of busy CPUs due to sched_setaffinity.
>   */
> -static inline bool allow_numa_imbalance(int dst_running, int dst_weight)
> +static inline bool
> +allow_numa_imbalance(unsigned int running, unsigned int weight)
>  {
> -	return (dst_running < (dst_weight >> 2));
> +	return (running < (weight >> 2));
>  }
>
>  /*
> @@ -9139,12 +9140,13 @@ find_idlest_group(struct sched_domain *sd, struct task_struct *p, int this_cpu)
>  		return idlest;
>  #endif
>  		/*
> -		 * Otherwise, keep the task on this node to stay close
> -		 * its wakeup source and improve locality. If there is
> -		 * a real need of migration, periodic load balance will
> -		 * take care of it.
> +		 * Otherwise, keep the task close to the wakeup source
> +		 * and improve locality if the number of running tasks
> +		 * would remain below threshold where an imbalance is
> +		 * allowed. If there is a real need of migration,
> +		 * periodic load balance will take care of it.
>  		 */
> -		if (allow_numa_imbalance(local_sgs.sum_nr_running, sd->span_weight))
> +		if (allow_numa_imbalance(local_sgs.sum_nr_running + 1, local_sgs.group_weight))
>  			return NULL;
>  	}
>
> @@ -9350,7 +9352,7 @@ static inline void calculate_imbalance(struct lb_env *env, struct sd_lb_stats *s
>  		/* Consider allowing a small imbalance between NUMA groups */
>  		if (env->sd->flags & SD_NUMA) {
>  			env->imbalance = adjust_numa_imbalance(env->imbalance,
> -				busiest->sum_nr_running, busiest->group_weight);
> +				local->sum_nr_running + 1, local->group_weight);
>  		}
>
>  		return;
> --
> 2.31.1
>
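
As a quick worked example of the threshold with the new arguments: for a
destination group of 16 CPUs, weight >> 2 == 4, so the imbalance is only
tolerated while local_sgs.sum_nr_running + 1 < 4, i.e. while no more than
two tasks are already running in the local group.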