Date:	Mon, 24 Apr 2017 16:27:39 +0200
From:	Peter Zijlstra <>
Subject:	Re: [PATCH 4/4] sched/topology: the group balance cpu must be a cpu where the group is installed
On Mon, Apr 24, 2017 at 04:19:44PM +0200, Peter Zijlstra wrote:
> On Mon, Apr 24, 2017 at 03:03:26PM +0200, Peter Zijlstra wrote:
>
> > Also, would it not make sense to re-order patch 2 to come after this,
> > such that we _do_ have the group_mask available and don't have to jump
> > through hoops in order to link up the sgc? Afaict we don't actually use
> > the sgc until the above (reverse) loop computing the CPU capacities.
>
> That is, if I force 4 on without 2, then doesn't something like the
> below also do the right thing? (without duplicating part of the magic
> already contained in build_group_mask)
>
> ---
> --- a/kernel/sched/topology.c
> +++ b/kernel/sched/topology.c
> @@ -498,13 +498,16 @@ enum s_alloc {
>   *
>   * This function can only be used when all the groups are already built.
>   */
> -static void build_group_mask(struct sched_domain *sd, struct sched_group *sg)
> +static void
> +build_group_mask(struct sched_domain *sd, struct sched_group *sg, struct cpumask *mask)
>  {
>  	const struct cpumask *sg_span = sched_group_cpus(sg);
>  	struct sd_data *sdd = sd->private;
>  	struct sched_domain *sibling;
>  	int i;
>
> +	cpumask_clear(mask);
> +
>  	for_each_cpu(i, sg_span) {
>  		sibling = *per_cpu_ptr(sdd->sd, i);
>
> @@ -514,7 +517,7 @@ static void build_group_mask(struct sche
>  		if (!cpumask_equal(sg_span, sched_group_cpus(sibling->groups)))
>  			continue;
>
> -		cpumask_set_cpu(i, sched_group_mask(sg));
> +		cpumask_set_cpu(i, mask);
>  	}
>  }
>
> @@ -549,14 +552,19 @@ build_group_from_child_sched_domain(stru
>  }
>
>  static void init_overlap_sched_group(struct sched_domain *sd,
> -				     struct sched_group *sg, int cpu)
> +				     struct sched_group *sg)
>  {
> +	struct cpumask *mask = sched_domains_tmpmask;
>  	struct sd_data *sdd = sd->private;
>  	struct cpumask *sg_span;
> +	int cpu;
> +
> +	build_group_mask(sd, sg, mask);
> +	cpu = cpumask_first_and(sched_group_mask(sg), mask); /* balance cpu */
s/group_mask/group_span/
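
(Presumably, then, the pick becomes:

	cpu = cpumask_first_and(sched_group_span(sg), mask); /* balance cpu */

i.e. the first cpu that is both in the group's span and in the freshly
built mask; sched_group_span() here stands for the group's span accessor,
which this tree spells sched_group_cpus().)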
>
>  	sg->sgc = *per_cpu_ptr(sdd->sgc, cpu);
>  	if (atomic_inc_return(&sg->sgc->ref) == 1)
> -		build_group_mask(sd, sg);
> +		cpumask_copy(sched_group_mask(sg), mask);
>
>  	/*
>  	 * Initialize sgc->capacity such that even if we mess up the
>  	 * domains and no possible iteration will get us here, we won't
>  	 * die on a /0 trap.
>  	 */
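
For readers following along outside the tree, a minimal user-space model of
what the new pick buys us. This is not kernel code: cpumasks are reduced to
64-bit words, and all names and mask values below are made up for
illustration (first_and() plays the role of cpumask_first_and(); builds with
gcc/clang):

#include <stdint.h>
#include <stdio.h>

/* Lowest bit set in both masks, i.e. cpumask_first_and(); -1 if disjoint. */
static int first_and(uint64_t a, uint64_t b)
{
	uint64_t both = a & b;
	return both ? __builtin_ctzll(both) : -1;
}

int main(void)
{
	/*
	 * An overlapping (NUMA-style) group spanning CPUs 0-3 whose sgc is
	 * installed only on CPUs 2 and 3.
	 */
	uint64_t span = 0x0f;	/* CPUs 0,1,2,3 */
	uint64_t mask = 0x0c;	/* CPUs 2,3: where the group is installed */

	/*
	 * Taking the first cpu of the span alone yields CPU 0, where this
	 * group is not installed; intersecting with the mask yields CPU 2,
	 * a cpu where the sgc really lives -- which is the point of the
	 * patch.
	 */
	printf("first of span: %d\n", first_and(span, span));	/* 0 */
	printf("balance cpu:   %d\n", first_and(span, mask));	/* 2 */
	return 0;
}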