From: Vincent Guittot <>
Date: Wed, 6 Jul 2022 11:56:52 +0200
Subject: Re: [PATCH v4] sched/fair: Make per-cpu cpumasks static
On Wed, 6 Jul 2022 at 10:36, Bing Huang <huangbing775@126.com> wrote:
>
> From: Bing Huang <huangbing@kylinos.cn>
>
> load_balance_mask and select_idle_mask are only used in fair.c. Make
You have to rebase on tip/sched/core, as select_idle_mask has been renamed to select_rq_mask.
> them static and move their allocation into init_sched_fair_class().
>
> Replace kzalloc_node() with zalloc_cpumask_var_node() to get rid of the
> CONFIG_CPUMASK_OFFSTACK #ifdef and to align with per-cpu cpumask
> allocation for RT (local_cpu_mask in init_sched_rt_class()) and DL
> class (local_cpu_mask_dl in init_sched_dl_class()).
>
> Signed-off-by: Bing Huang <huangbing@kylinos.cn>
> Reviewed-by: Dietmar Eggemann <dietmar.eggemann@arm.com>
Besides the rebase and renaming,
Reviewed-by: Vincent Guittot <vincent.guittot@linaro.org>
> ---
>
> v1->v2:
>   move load_balance_mask and select_idle_mask allocation from
>   sched_init() to init_sched_fair_class()
> v2->v3:
>   fixup by Dietmar Eggemann <dietmar.eggemann@arm.com>
> v3->v4:
>   change the patch title and commit message
>
>  kernel/sched/core.c | 11 -----------
>  kernel/sched/fair.c | 13 +++++++++++--
>  2 files changed, 11 insertions(+), 13 deletions(-)
>
> diff --git a/kernel/sched/core.c b/kernel/sched/core.c
> index da0bf6fe9ecd..2feff25fd905 100644
> --- a/kernel/sched/core.c
> +++ b/kernel/sched/core.c
> @@ -9530,9 +9530,6 @@ LIST_HEAD(task_groups);
>  static struct kmem_cache *task_group_cache __read_mostly;
>  #endif
>
> -DECLARE_PER_CPU(cpumask_var_t, load_balance_mask);
> -DECLARE_PER_CPU(cpumask_var_t, select_idle_mask);
> -
>  void __init sched_init(void)
>  {
>  	unsigned long ptr = 0;
> @@ -9576,14 +9573,6 @@ void __init sched_init(void)
>
>  #endif /* CONFIG_RT_GROUP_SCHED */
>  	}
> -#ifdef CONFIG_CPUMASK_OFFSTACK
> -	for_each_possible_cpu(i) {
> -		per_cpu(load_balance_mask, i) = (cpumask_var_t)kzalloc_node(
> -			cpumask_size(), GFP_KERNEL, cpu_to_node(i));
> -		per_cpu(select_idle_mask, i) = (cpumask_var_t)kzalloc_node(
> -			cpumask_size(), GFP_KERNEL, cpu_to_node(i));
> -	}
> -#endif /* CONFIG_CPUMASK_OFFSTACK */
>
>  	init_rt_bandwidth(&def_rt_bandwidth, global_rt_period(), global_rt_runtime());
>
> diff --git a/kernel/sched/fair.c b/kernel/sched/fair.c
> index 77b2048a9326..61ae0853721e 100644
> --- a/kernel/sched/fair.c
> +++ b/kernel/sched/fair.c
> @@ -5843,8 +5843,8 @@ static void dequeue_task_fair(struct rq *rq, struct task_struct *p, int flags)
>  #ifdef CONFIG_SMP
>
>  /* Working cpumask for: load_balance, load_balance_newidle. */
> -DEFINE_PER_CPU(cpumask_var_t, load_balance_mask);
> -DEFINE_PER_CPU(cpumask_var_t, select_idle_mask);
> +static DEFINE_PER_CPU(cpumask_var_t, load_balance_mask);
> +static DEFINE_PER_CPU(cpumask_var_t, select_idle_mask);
>
>  #ifdef CONFIG_NO_HZ_COMMON
>
> @@ -11841,6 +11841,15 @@ void show_numa_stats(struct task_struct *p, struct seq_file *m)
>  __init void init_sched_fair_class(void)
>  {
>  #ifdef CONFIG_SMP
> +	int i;
> +
> +	for_each_possible_cpu(i) {
> +		zalloc_cpumask_var_node(&per_cpu(load_balance_mask, i),
> +					GFP_KERNEL, cpu_to_node(i));
> +		zalloc_cpumask_var_node(&per_cpu(select_idle_mask, i),
> +					GFP_KERNEL, cpu_to_node(i));
> +	}
> +
>  	open_softirq(SCHED_SOFTIRQ, run_rebalance_domains);
>
>  #ifdef CONFIG_NO_HZ_COMMON
> --
> 2.25.1
>
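[Editor's note: the commit message's point about dropping the CONFIG_CPUMASK_OFFSTACK #ifdef rests on how the cpumask_var_t helpers behave. The following is a simplified sketch of that behaviour, paraphrased from include/linux/cpumask.h; it is not the literal kernel source and is only here for context.]

/*
 * Simplified sketch of the two shapes of cpumask_var_t.  With
 * CONFIG_CPUMASK_OFFSTACK the mask is a pointer that must really be
 * allocated; without it the mask is embedded and "allocation" is just a
 * clear.  Callers write zalloc_cpumask_var_node() in both cases, which
 * is why the patch no longer needs the #ifdef in sched_init().
 */
#ifdef CONFIG_CPUMASK_OFFSTACK
typedef struct cpumask *cpumask_var_t;

static inline bool zalloc_cpumask_var_node(cpumask_var_t *mask,
					   gfp_t flags, int node)
{
	*mask = kzalloc_node(cpumask_size(), flags, node);
	return *mask != NULL;
}
#else
typedef struct cpumask cpumask_var_t[1];

static inline bool zalloc_cpumask_var_node(cpumask_var_t *mask,
					   gfp_t flags, int node)
{
	cpumask_clear(*mask);	/* nothing to allocate, just zero it */
	return true;
}
#endif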