Date:    Thu, 18 Aug 2016 14:28:54 +0200
From:    Peter Zijlstra <>
Subject: Re: [PATCH v3 11/13] sched/fair: Consider spare capacity in find_idlest_group()
On Thu, Aug 18, 2016 at 12:16:33PM +0100, Morten Rasmussen wrote:
> On Tue, Aug 16, 2016 at 03:57:06PM +0200, Vincent Guittot wrote:
> > > @@ -5204,6 +5218,13 @@ find_idlest_group(struct sched_domain *sd, struct task_struct *p,
> > >  		load = target_load(i, load_idx);
> > >
> > >  		avg_load += load;
> > > +
> > > +		spare_cap = capacity_spare_wake(i, p);
> > > +
> > > +		if (spare_cap > max_spare_cap &&
> > > +		    spare_cap > capacity_of(i) >> 3) {
> >
> > This condition probably needs some description. You're not only
> > looking for max spare capacity but also a significant spare capacity
> > (more than 12.5% of cpu_capacity_orig). Can't this additional test
> > lead to some strange situation where a CPU with more spare capacity
> > will not be selected because of this 12.5% condition, whereas another
> > with less spare capacity will be selected because its capacity_orig is
> > lower?
>
> Right, the reason why I added the 12.5% check is that I thought we
> wouldn't want to pack cpus too aggressively. You are right that we could
> reject a 1024 capacity cpu with a spare capacity of 100 and pick a 512
> capacity cpu with a spare capacity of 65.
You could of course track both.. but complexity. At the very least I agree with Vincent in that this very much deserves a comment.
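
For illustration, such a comment and condition might read as follows (a
sketch only: capacity_spare_wake() and capacity_of() are the helpers from
the patch, but the comment wording and the body of the if-block are
assumed from context, not taken from the posted code):

	/*
	 * Track the CPU with the most spare capacity, but only consider
	 * a CPU whose spare capacity is significant, i.e. more than 1/8
	 * (12.5%) of its capacity, so that tasks are not packed onto
	 * nearly-full CPUs too aggressively.
	 */
	spare_cap = capacity_spare_wake(i, p);

	if (spare_cap > max_spare_cap &&
	    spare_cap > capacity_of(i) >> 3)
		max_spare_cap = spare_cap;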
> From a latency perspective it might not be a bad idea to stay away from
> cpus with existing utilization, even if they have more capacity
> available, as the task is more likely to end up waiting on the rq. For
> throughput tasks you would of course want it the other way around.
(debug) tuning-knob ;-)
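
If such a knob were ever made real, one shape it could take is a
sched_feat() toggle gating the 12.5% filter (SPARE_CAP_MIN is a
hypothetical feature name, not part of the patch):

	/* kernel/sched/features.h (hypothetical) */
	SCHED_FEAT(SPARE_CAP_MIN, true)

	/* in find_idlest_group(): apply the 12.5% filter only when set */
	if (spare_cap > max_spare_cap &&
	    (!sched_feat(SPARE_CAP_MIN) ||
	     spare_cap > capacity_of(i) >> 3))
		max_spare_cap = spare_cap;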
> > > @@ -5211,12 +5232,27 @@ find_idlest_group(struct sched_domain *sd, struct task_struct *p,
> > >
> > >  		if (local_group) {
> > >  			this_load = avg_load;
> > > -		} else if (avg_load < min_load) {
> > > -			min_load = avg_load;
> > > -			idlest = group;
> > > +			this_spare = max_spare_cap;
> > > +		} else {
> > > +			if (avg_load < min_load) {
> > > +				min_load = avg_load;
> > > +				idlest = group;
> > > +			}
> > > +
> > > +			if (most_spare < max_spare_cap) {
> > > +				most_spare = max_spare_cap;
> > > +				most_spare_sg = group;
> > > +			}
> > >  		}
> > >  	} while (group = group->next, group != sd->groups);
> > >
> > > +	/* Found a significant amount of spare capacity. */
> >
> > It may be worth explaining the threshold at which it becomes better to
> > choose the most spare group instead of the least loaded group.
>
> Yes. I admit that the threshold is somewhat randomly chosen. Based on a
> few experiments I found that requiring enough spare capacity to fit the
> task completely was too conservative. We would bail out and go with the
> least loaded groups very often, especially for new tasks, despite the
> spare capacity only being slightly too small. Allowing a small degree of
> stuffing of the task seemed better. Choosing the least loaded group
> instead doesn't give any better throughput for the waking task unless it
> has high priority. For overall throughput, the most spare capacity cpus
> should be the better choice.
>
> Should I just add a comment saying that we want to allow a little bit of
> task stuffing to better accommodate new tasks and improve overall
> throughput, or should we investigate the threshold further?
A comment would certainly be nice..
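
One shape such a comment and threshold could take (illustrative only: the
half-of-task_util(p) cut-off is an assumption standing in for whatever
value further experiments settle on):

	/*
	 * Allow a little bit of task stuffing: requiring enough spare
	 * capacity to fit task_util(p) completely proved too
	 * conservative, especially for new tasks, and picking the least
	 * loaded group instead does not improve throughput for the
	 * waking task unless it has high priority.
	 */
	if (most_spare > task_util(p) / 2)
		return most_spare_sg;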