Subject: Re: [PATCH v3 04/10] sched/fair: rework load_balance
From: Valentin Schneider <>
Date: Wed, 2 Oct 2019 11:47:59 +0100
On 02/10/2019 09:30, Vincent Guittot wrote:
>> Isn't that one somewhat risky?
>>
>> Say both groups are classified group_has_spare and we do prefer_sibling.
>> We'd select busiest as the one with the maximum number of busy CPUs, but
>> it could be that busiest.sum_h_nr_running < local.sum_h_nr_running
>> (because of pinned tasks, or because wakeup failed to properly spread
>> things).
>>
>> The thing should be unsigned so at least we save ourselves from right
>> shifting a negative value, but we still end up with a ginormous imbalance
>> (which we then store into env.imbalance which *is* signed... Urgh).
>
> so it's not clear what happens with a right shift on a negative signed
> value, and this seems to be compiler dependent, so even
> max_t(long, 0, (local->idle_cpus - busiest->idle_cpus) >> 1) might be wrong
>
Yeah, right shifts on negative signed values are implementation-defined. That is what I was worried about initially, but I think the result of the subtraction is unsigned (both operands are unsigned), so it would just wrap around when busiest < local - and that is still a problem.
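To make the failure mode concrete, here is a minimal standalone sketch (userspace, not kernel code; the variable names merely mirror the sg_lb_stats fields discussed above) showing how the unsigned subtraction wraps and how the wrapped value then lands in a signed imbalance:

	/* Standalone illustration, not kernel code: the names mirror the
	 * sched/fair stats fields under discussion, nothing more. */
	#include <stdio.h>

	int main(void)
	{
		/* Both counters are unsigned, as in struct sg_lb_stats. */
		unsigned int local_sum_h_nr_running = 5;
		unsigned int busiest_sum_h_nr_running = 3;

		/* busiest < local: the subtraction wraps around instead of
		 * going negative, because both operands are unsigned. */
		unsigned int diff = busiest_sum_h_nr_running - local_sum_h_nr_running;

		/* The shift itself is well defined (unsigned operand), but
		 * we shift an already-wrapped value. */
		long imbalance = diff >> 1;

		printf("diff = %u, imbalance = %ld\n", diff, imbalance);
		/* With 32-bit unsigned int this prints:
		 * diff = 4294967294, imbalance = 2147483647 */
		return 0;
	}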
((local->idle_cpus - busiest->idle_cpus) >> 1) should be fine because we do have this check in find_busiest_group() before heading off to calculate_imbalance():
	if (busiest->group_type != group_overloaded &&
	    (env->idle == CPU_NOT_IDLE ||
	     local->idle_cpus <= (busiest->idle_cpus + 1)))
		/* ... */
		goto out_balanced;
which ensures that if we do reach the imbalance computation, local->idle_cpus >= busiest->idle_cpus + 2, so the subtraction is at least 2. We're missing something equivalent for the sum_h_nr_running case.
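For illustration only, a guard along these lines (hypothetical, not from the patch) would give the sum_h_nr_running path the same protection, bailing out before the subtraction in calculate_imbalance() can wrap:

	/* Hypothetical sketch, not the actual fix: mirror the idle_cpus
	 * check so busiest->sum_h_nr_running - local->sum_h_nr_running
	 * is guaranteed to be at least 2 when we compute the imbalance. */
	if (busiest->group_type != group_overloaded &&
	    busiest->sum_h_nr_running <= local->sum_h_nr_running + 1)
		goto out_balanced;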
> I'm going to update it
>
>>
>> [...]