From:    Srikar Dronamraju <>
Subject: [PATCH 03/10] sched/fair: Update idle-core more often
Date:    Thu, 22 Apr 2021 15:53:19 +0530
Currently, when the scheduler pulls a task during load balance, or when a CPU picks up a task at wakeup without having to call select_idle_cpu(), it never checks whether the target CPU is part of the idle-core. This makes the idle-core information less accurate.
Given that the identity of the idle-core is maintained per LLC, it's easy to update the idle-core as soon as a CPU picks up a task.
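For reference, the idle-core identity lives in sd_llc_shared->idle_core (see the comment block in the diff below). The get_idle_core()/set_idle_core() helpers used in this patch come from earlier in this series; a minimal sketch of their assumed shape, mirroring the existing test_idle_cores()/set_idle_cores() accessors in mainline, would be:

static inline int get_idle_core(int cpu, int def)
{
	struct sched_domain_shared *sds;

	/* Read the LLC-wide idle-core hint; fall back to @def without one. */
	sds = rcu_dereference(per_cpu(sd_llc_shared, cpu));
	if (sds)
		return READ_ONCE(sds->idle_core);

	return def;
}

static inline void set_idle_core(int cpu, int val)
{
	struct sched_domain_shared *sds;

	/* Publish @val (an smt_id, or -1 for "no idle-core") for the LLC. */
	sds = rcu_dereference(per_cpu(sd_llc_shared, cpu));
	if (sds)
		WRITE_ONCE(sds->idle_core, val);
}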
This change updates the idle-core whenever a CPU from the idle-core picks up a task. However, if there are multiple idle-cores in the LLC and the waking CPU happens to be part of the designated idle-core, the idle-core is set to -1 (i.e. as if there were no idle-cores), even though other idle-cores exist.
To reduce this case, whenever a CPU updates the idle-core, it also searches the other cores in the LLC for an idle-core if the core it belongs to is not itself fully idle; the resulting flow is sketched below.
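Put together, __update_idle_core() with this patch applied reads roughly as follows (an annotated sketch; the authoritative version is in the diff below):

void __update_idle_core(struct rq *rq)
{
	struct sched_domain *sd;
	int core = cpu_of(rq);
	int cpu;

	rcu_read_lock();
	sd = rcu_dereference(per_cpu(sd_llc, core));
	if (!sd || get_idle_core(core, 0) != -1)
		goto unlock;			/* idle-core already known */

	for_each_cpu(cpu, cpu_smt_mask(core)) {
		if (cpu == core)
			continue;

		if (!available_idle_cpu(cpu))
			goto try_next;		/* a sibling is busy */
	}

	/* This whole core is idle: make it the LLC's idle-core. */
	set_idle_core(core, per_cpu(smt_id, core));
	goto unlock;

try_next:
	/* Our core is busy: scan the rest of the LLC for an idle-core. */
	set_next_idle_core(sd, core);
unlock:
	rcu_read_unlock();
}

The reverse direction is handled by set_core_busy(), called from put_prev_task_idle() when a CPU leaves idle: if that CPU's core was the designated idle-core, the designation is cleared.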
Cc: LKML <linux-kernel@vger.kernel.org>
Cc: Gautham R Shenoy <ego@linux.vnet.ibm.com>
Cc: Parth Shah <parth@linux.ibm.com>
Cc: Ingo Molnar <mingo@kernel.org>
Cc: Peter Zijlstra <peterz@infradead.org>
Cc: Valentin Schneider <valentin.schneider@arm.com>
Cc: Dietmar Eggemann <dietmar.eggemann@arm.com>
Cc: Mel Gorman <mgorman@techsingularity.net>
Cc: Vincent Guittot <vincent.guittot@linaro.org>
Cc: Rik van Riel <riel@surriel.com>
Signed-off-by: Srikar Dronamraju <srikar@linux.vnet.ibm.com>
---
 kernel/sched/fair.c  | 44 ++++++++++++++++++++++++++++++++++++++++++--
 kernel/sched/idle.c  |  6 ++++++
 kernel/sched/sched.h |  2 ++
 3 files changed, 50 insertions(+), 2 deletions(-)
diff --git a/kernel/sched/fair.c b/kernel/sched/fair.c
index 03083eacdaf0..09c33cca0349 100644
--- a/kernel/sched/fair.c
+++ b/kernel/sched/fair.c
@@ -6037,6 +6037,39 @@ static inline int get_idle_core(int cpu, int def)
 	return def;
 }
 
+static void set_next_idle_core(struct sched_domain *sd, int target)
+{
+	struct cpumask *cpus = this_cpu_cpumask_var_ptr(select_idle_mask);
+	int core, cpu;
+
+	cpumask_andnot(cpus, sched_domain_span(sd), cpu_smt_mask(target));
+	for_each_cpu_wrap(core, cpus, target) {
+		bool idle = true;
+
+		for_each_cpu(cpu, cpu_smt_mask(core)) {
+			if (!available_idle_cpu(cpu)) {
+				idle = false;
+				break;
+			}
+		}
+
+		if (idle) {
+			set_idle_core(core, per_cpu(smt_id, core));
+			return;
+		}
+
+		cpumask_andnot(cpus, cpus, cpu_smt_mask(core));
+	}
+}
+
+void set_core_busy(int core)
+{
+	rcu_read_lock();
+	if (get_idle_core(core, -1) == per_cpu(smt_id, core))
+		set_idle_core(core, -1);
+	rcu_read_unlock();
+}
+
 /*
  * Scans the local SMT mask to see if the entire core is idle, and records this
  * information in sd_llc_shared->idle_core.
@@ -6046,11 +6079,13 @@ static inline int get_idle_core(int cpu, int def)
  */
 void __update_idle_core(struct rq *rq)
 {
+	struct sched_domain *sd;
 	int core = cpu_of(rq);
 	int cpu;
 
 	rcu_read_lock();
-	if (get_idle_core(core, 0) != -1)
+	sd = rcu_dereference(per_cpu(sd_llc, core));
+	if (!sd || get_idle_core(core, 0) != -1)
 		goto unlock;
 
 	for_each_cpu(cpu, cpu_smt_mask(core)) {
@@ -6058,10 +6093,15 @@ void __update_idle_core(struct rq *rq)
 			continue;
 
 		if (!available_idle_cpu(cpu))
-			goto unlock;
+			goto try_next;
 	}
 
 	set_idle_core(core, per_cpu(smt_id, core));
+	goto unlock;
+
+try_next:
+	set_next_idle_core(sd, core);
+
 unlock:
 	rcu_read_unlock();
 }
diff --git a/kernel/sched/idle.c b/kernel/sched/idle.c
index 7199e6f23789..cc828f3efe71 100644
--- a/kernel/sched/idle.c
+++ b/kernel/sched/idle.c
@@ -425,6 +425,12 @@ static void check_preempt_curr_idle(struct rq *rq, struct task_struct *p, int fl
 
 static void put_prev_task_idle(struct rq *rq, struct task_struct *prev)
 {
+#ifdef CONFIG_SCHED_SMT
+	int cpu = rq->cpu;
+
+	if (static_branch_likely(&sched_smt_present))
+		set_core_busy(cpu);
+#endif
 }
 
 static void set_next_task_idle(struct rq *rq, struct task_struct *next, bool first)
diff --git a/kernel/sched/sched.h b/kernel/sched/sched.h
index 46d40a281724..5c0bd4b0e73a 100644
--- a/kernel/sched/sched.h
+++ b/kernel/sched/sched.h
@@ -1102,6 +1102,7 @@ static inline bool is_migration_disabled(struct task_struct *p)
 
 #ifdef CONFIG_SCHED_SMT
 extern void __update_idle_core(struct rq *rq);
+extern void set_core_busy(int cpu);
 
 static inline void update_idle_core(struct rq *rq)
 {
@@ -1111,6 +1112,7 @@ static inline void update_idle_core(struct rq *rq)
 
 #else
 static inline void update_idle_core(struct rq *rq) { }
+static inline void set_core_busy(int cpu) { }
 #endif
 
 DECLARE_PER_CPU_SHARED_ALIGNED(struct rq, runqueues);
--
2.18.2