From: Valentin Schneider <>
Subject: [PATCH 1/3] sched/fair: Add asymmetric CPU capacity wakeup scan
Date: Fri, 24 Jan 2020 12:42:53 +0000
From: Morten Rasmussen <morten.rasmussen@arm.com>
On asymmetric CPU capacity topologies, we currently rely on wake_cap() to
drive select_task_rq_fair() towards either:

- its slow-path (find_idlest_cpu()) if either the previous or current
  (waking) CPU has too little capacity for the waking task
- its fast-path (select_idle_sibling()) otherwise
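[For reference, that gate boils down to comparing the task's utilization
against the smallest capacity at hand. A simplified sketch of mainline
wake_cap() around this time (not part of this patch; comments trimmed):]

static int wake_cap(struct task_struct *p, int cpu, int prev_cpu)
{
        long min_cap, max_cap;

        if (!static_branch_unlikely(&sched_asym_cpucapacity))
                return 0;

        min_cap = min(capacity_orig_of(prev_cpu), capacity_orig_of(cpu));
        max_cap = cpu_rq(cpu)->rd->max_cpu_capacity;

        /* Capacities are close to one another; keep the fast-path */
        if (max_cap - min_cap < max_cap >> 3)
                return 0;

        /* Bring task utilization in sync with prev_cpu */
        sync_entity_load_avg(&p->se);

        /* Non-zero return sends select_task_rq_fair() down the slow-path */
        return !task_fits_capacity(p, min_cap);
}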
Commit 3273163c6775 ("sched/fair: Let asymmetric CPU configurations balance at wake-up") points out that this relies on the assumption that "[...]the CPU capacities within an SD_SHARE_PKG_RESOURCES domain (sd_llc) are homogeneous".
This assumption no longer holds on newer generations of big.LITTLE systems (DynamIQ), which can accommodate CPUs of different compute capacity within a single LLC domain. To hopefully paint a better picture, a regular big.LITTLE topology would look like this:
  +---------+  +---------+
  |   L2    |  |   L2    |
  +----+----+  +----+----+
  |CPU0|CPU1|  |CPU2|CPU3|
  +----+----+  +----+----+
     ^^^          ^^^
   LITTLEs        bigs
which would result in the following scheduler topology:
  DIE [                ] <- sd_asym_cpucapacity
  MC  [       ][       ] <- sd_llc
        0   1    2   3
Conversely, a DynamIQ topology could look like:
  +-------------------+
  |         L3        |
  +----+----+----+----+
  | L2 | L2 | L2 | L2 |
  +----+----+----+----+
  |CPU0|CPU1|CPU2|CPU3|
  +----+----+----+----+
     ^^^^^     ^^^^^
    LITTLEs     bigs
which would result in the following scheduler topology:
  MC  [                ] <- sd_llc, sd_asym_cpucapacity
        0   1    2   3
What this means is that, on DynamIQ systems, we could pass the wake_cap() test (IOW presume the waking task fits the CPU capacities of some LLC domain) and thus go through select_idle_sibling(). This function operates on an LLC domain, which here spans both bigs and LITTLEs, so it could very well pick a CPU of too small a capacity for the task despite there being fitting idle CPUs - it very much depends on the CPU iteration order, on which we have absolutely no capacity-wise guarantees.
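[To illustrate, the existing LLC scan is essentially the following - a
heavily simplified sketch of select_idle_cpu(), with the avg_idle/avg_cost
scan throttling elided; the _sketch name is ours:]

static int select_idle_cpu_sketch(struct task_struct *p,
                                  struct sched_domain *sd, int target)
{
        struct cpumask *cpus = this_cpu_cpumask_var_ptr(select_idle_mask);
        int cpu;

        cpumask_and(cpus, sched_domain_span(sd), p->cpus_ptr);

        for_each_cpu_wrap(cpu, cpus, target) {
                /* First idle CPU in iteration order wins; capacity is never checked */
                if (available_idle_cpu(cpu))
                        return cpu;
        }

        return -1;
}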
Introduce yet another select_idle_sibling() helper function that takes CPU capacity into account. The policy is basically to pick the first idle CPU which is big enough for the task (task_util * margin < cpu_capacity).
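[The fit check reuses the existing task_fits_capacity() helper, which
roughly boils down to the following margin in kernel/sched/fair.c,
utilization-signal details aside:]

/* ~20% headroom: utilization must stay below ~80% of the CPU's capacity */
#define fits_capacity(cap, max)	((cap) * 1280 < (max) * 1024)

[E.g. a task with utilization ~400 does not fit a LITTLE of capacity 446
(400 * 1280 = 512000 >= 446 * 1024 = 456704) but does fit a big of
capacity 1024 (512000 < 1048576).]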
Unlike other select_idle_sibling() helpers, this one operates on the sd_asym_cpucapacity sched_domain pointer, which is guaranteed to span all known CPU capacities in the system. As such, this will work for both "legacy" big.LITTLE (LITTLEs & bigs split at MC, joined at DIE) and for newer DynamIQ systems (e.g. LITTLEs and bigs in the same MC domain).
Co-authored-by: Valentin Schneider <valentin.schneider@arm.com>
Signed-off-by: Morten Rasmussen <morten.rasmussen@arm.com>
Signed-off-by: Valentin Schneider <valentin.schneider@arm.com>
---
 kernel/sched/fair.c | 39 ++++++++++++++++++++++++++++++++++++++-
 1 file changed, 38 insertions(+), 1 deletion(-)
diff --git a/kernel/sched/fair.c b/kernel/sched/fair.c
index fe4e0d7753756..47a4f52d89b44 100644
--- a/kernel/sched/fair.c
+++ b/kernel/sched/fair.c
@@ -5772,7 +5772,7 @@ void __update_idle_core(struct rq *rq)
  */
 static int select_idle_core(struct task_struct *p, struct sched_domain *sd, int target)
 {
-	struct cpumask *cpus = this_cpu_cpumask_var_ptr(select_idle_mask);
+	struct cpumask *cpus;
 	int core, cpu;
 
 	if (!static_branch_likely(&sched_smt_present))
@@ -5781,6 +5781,7 @@ static int select_idle_core(struct task_struct *p, struct sched_domain *sd, int
 	if (!test_idle_cores(target, false))
 		return -1;
 
+	cpus = this_cpu_cpumask_var_ptr(select_idle_mask);
 	cpumask_and(cpus, sched_domain_span(sd), p->cpus_ptr);
 
 	for_each_cpu_wrap(core, cpus, target) {
@@ -5894,6 +5895,37 @@ static int select_idle_cpu(struct task_struct *p, struct sched_domain *sd, int t
 	return cpu;
 }
 
+/*
+ * Scan the asym_capacity domain for idle CPUs; pick the first idle one on which
+ * the task fits.
+ */
+static int select_idle_capacity(struct task_struct *p, int target)
+{
+	struct cpumask *cpus = this_cpu_cpumask_var_ptr(select_idle_mask);
+	struct sched_domain *sd;
+	int cpu;
+
+	if (!static_branch_unlikely(&sched_asym_cpucapacity))
+		return -1;
+
+	sd = rcu_dereference(per_cpu(sd_asym_cpucapacity, target));
+	if (!sd)
+		return -1;
+
+	cpumask_and(cpus, sched_domain_span(sd), p->cpus_ptr);
+
+	for_each_cpu_wrap(cpu, cpus, target) {
+		if (!available_idle_cpu(cpu))
+			continue;
+		if (!task_fits_capacity(p, capacity_of(cpu)))
+			continue;
+
+		return cpu;
+	}
+
+	return -1;
+}
+
 /*
  * Try and locate an idle core/thread in the LLC cache domain.
  */
@@ -5902,6 +5934,11 @@ static int select_idle_sibling(struct task_struct *p, int prev, int target)
 	struct sched_domain *sd;
 	int i, recent_used_cpu;
 
+	/* For asymmetric capacities, try to be smart about the placement */
+	i = select_idle_capacity(p, target);
+	if ((unsigned)i < nr_cpumask_bits)
+		return i;
+
 	if (available_idle_cpu(target) || sched_idle_cpu(target))
 		return target;
 
-- 
2.24.0