Date: Tue, 27 Jun 2023 12:14:37 +0200
From: Peter Zijlstra <>
Subject: Re: [PATCH v2] sched/task_group: Re-layout structure to reduce false sharing
On Mon, Jun 26, 2023 at 01:47:56PM +0800, Aaron Lu wrote:
> > diff --git a/kernel/sched/sched.h b/kernel/sched/sched.h
> > index ec7b3e0a2b20..4fbd4b3a4bdd 100644
> > --- a/kernel/sched/sched.h
> > +++ b/kernel/sched/sched.h
> > @@ -389,6 +389,19 @@ struct task_group {
> >  #endif
> >  #endif
> > 
> > +	struct rcu_head		rcu;
> > +	struct list_head	list;
> > +
> > +	struct list_head	siblings;
> > +	struct list_head	children;
> > +
> > +	/*
> > +	 * To reduce false sharing, current layout is optimized to make
> > +	 * sure load_avg is in a different cacheline from parent, rt_se
> > +	 * and rt_rq.
> > +	 */
That comment is misleading I think; you don't particularly care about those fields more than any other active fields that would cause false sharing.
> > +	struct task_group	*parent;
> > +
> 
> I wonder if we can simply do:
> 
> diff --git a/kernel/sched/sched.h b/kernel/sched/sched.h
> index ec7b3e0a2b20..31b73e8d9568 100644
> --- a/kernel/sched/sched.h
> +++ b/kernel/sched/sched.h
> @@ -385,7 +385,9 @@ struct task_group {
>  	 * it in its own cacheline separated from the fields above which
>  	 * will also be accessed at each tick.
>  	 */
> -	atomic_long_t		load_avg ____cacheline_aligned;
> +	struct {
> +		atomic_long_t		load_avg;
> +	} ____cacheline_aligned_in_smp;
>  #endif
>  #endif
> 
> This way it can make sure there is no false sharing with load_avg no
> matter how the layout of this structure changes in the future.
This. Also, ISTR there was a series to split this atomic across nodes;
whatever happened to that, and can we still measure an improvement over
this with that approach?
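
As a rough userspace illustration of the wrapper trick quoted above (a
sketch, assuming a 64-byte cacheline and made-up field names; the kernel
derives the real size from SMP_CACHE_BYTES): wrapping the hot counter in
an aligned anonymous struct both starts it on a fresh line and pads the
wrapper out to a full line, so no neighbouring field can share its
cacheline however the surrounding layout changes later.

#include <stdatomic.h>
#include <stddef.h>
#include <stdio.h>

#define CACHELINE 64	/* assumed line size for this sketch */

struct tg_like {
	long ticked[3];			/* stand-ins for per-tick fields */
	struct {
		atomic_long load_avg;	/* hot cross-CPU counter */
	} __attribute__((aligned(CACHELINE)));
	void *parent;			/* stand-in for the trailing fields */
};

int main(void)
{
	/* load_avg starts on its own line (offset 64)... */
	printf("load_avg at %zu\n", offsetof(struct tg_like, load_avg));
	/* ...and the wrapper is padded to a full line, pushing parent
	 * to offset 128, i.e. a different cacheline. */
	printf("parent at %zu\n", offsetof(struct tg_like, parent));
	return 0;
}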
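
For reference, a per-node split of the atomic generally looks something
like the sketch below (kernel-style, with hypothetical tg_node_load and
tg_load_avg_* names, not code from any posted series): writers touch
only their local node's cacheline, and readers sum across nodes, trading
a cheaper hot write path for a costlier read side.

struct tg_node_load {
	atomic_long_t	load_avg;	/* this node's contribution */
} ____cacheline_aligned_in_smp;

/* one entry per node, e.g. kcalloc(nr_node_ids, sizeof(*nl), GFP_KERNEL) */

static inline void tg_load_avg_add(struct tg_node_load *nl, long delta)
{
	/* the line only bounces among CPUs of the local node */
	atomic_long_add(delta, &nl[numa_node_id()].load_avg);
}

static inline long tg_load_avg_read(struct tg_node_load *nl)
{
	long sum = 0;
	int node;

	/* readers pay the cost: sum all per-node contributions */
	for_each_node(node)
		sum += atomic_long_read(&nl[node].load_avg);
	return sum;
}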