Date: Wed, 2 Aug 2023 13:28:36 +0200
From: Peter Zijlstra <>
Subject: Re: [RFC PATCH 2/4] sched/fair: Make tg->load_avg per node
On Wed, Jul 19, 2023 at 09:45:00PM +0800, Aaron Lu wrote:
> On Wed, Jul 19, 2023 at 01:53:58PM +0200, Peter Zijlstra wrote:
> > On Tue, Jul 18, 2023 at 09:41:18PM +0800, Aaron Lu wrote:
> > > +#if defined(CONFIG_FAIR_GROUP_SCHED) && defined(CONFIG_SMP)
> > > +static inline long tg_load_avg(struct task_group *tg)
> > > +{
> > > +	long load_avg = 0;
> > > +	int i;
> > > +
> > > +	/*
> > > +	 * The only path that can give us a root_task_group
> > > +	 * here is from print_cfs_rq() thus unlikely.
> > > +	 */
> > > +	if (unlikely(tg == &root_task_group))
> > > +		return 0;
> > > +
> > > +	for_each_node(i)
> > > +		load_avg += atomic_long_read(&tg->node_info[i]->load_avg);
> > > +
> > > +	return load_avg;
> > > +}
> > > +#endif
> >
> > So I was working on something else NUMA-related and noticed that
> > for_each_node() (and most of the nodemask stuff) is quite moronic;
> > afaict we should do something like the below.
> >
> > I now see Mike added the nr_node_ids thing fairly recently, but given
> > distros have NODES_SHIFT=10 and actual machines typically only have <=4
> > nodes, this would save a factor of 256 in scanning.
More complete nodemask patch here:
https://lkml.kernel.org/r/20230802112458.230221601%40infradead.org
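For context, the saving Peter describes comes from bounding the nodemask
bitmap scan by nr_node_ids (the boot-time number of possible node IDs)
rather than the compile-time MAX_NUMNODES. Below is a minimal sketch of
that idea, not the actual patch (see the link above for that); the
_sketch function names are illustrative, while find_next_bit(),
nodemask_t, MAX_NUMNODES and nr_node_ids are real kernel interfaces:

#include <linux/bitmap.h>	/* find_next_bit() */
#include <linux/nodemask.h>	/* nodemask_t, MAX_NUMNODES, nr_node_ids */

/*
 * Sketch of the status quo: the node iterator searches the full
 * compile-time bitmap, so with a distro NODES_SHIFT=10 kernel
 * (MAX_NUMNODES = 1024) each step of a for_each_node() walk may scan
 * up to 1024 bits, even on a machine with only 4 possible nodes.
 */
static inline unsigned int next_node_sketch(int n, const nodemask_t *mask)
{
	return find_next_bit(mask->bits, MAX_NUMNODES, n + 1);
}

/*
 * Sketch of the proposed change: bound the scan by nr_node_ids, the
 * highest possible node ID + 1 as determined at boot. With 4 possible
 * nodes this searches 4 bits instead of 1024 -- the factor-of-256
 * saving mentioned above.
 */
static inline unsigned int next_node_bounded_sketch(int n, const nodemask_t *mask)
{
	return find_next_bit(mask->bits, nr_node_ids, n + 1);
}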