Date: Thu, 29 Jul 2021 19:00:19 -0700
Subject: [PATCH 2/2] sched: adjust SCHED_IDLE interactions
From: Josh Don <joshdon@google.com>
This patch makes some behavioral changes when SCHED_IDLE entities are competing with non SCHED_IDLE entities.
1) Ignore min_granularity for determining the sched_slice of a SCHED_IDLE entity when it is competing with a non SCHED_IDLE entity. This reduces the latency of getting a non SCHED_IDLE entity back on cpu, at the expense of increased context switch frequency of SCHED_IDLE entities.
In steady state competition between SCHED_IDLE and non-SCHED_IDLE entities, preemption is driven by the tick, so the effective SCHED_IDLE min_granularity is approximately bounded on the low end by the tick period (1/HZ).
Example: on a machine with HZ=1000, we spawned two busy-loop threads, one of which was SCHED_IDLE, and affined them to one cpu. Without this patch, the SCHED_IDLE thread runs for 4ms and then waits for 1.4s. With this patch, it runs for 1ms and waits 340ms (as it round-robins with the other thread).
The benefit of this change is to reduce the round-robin latency for non SCHED_IDLE entities when competing with a SCHED_IDLE entity.
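For reference, a minimal userspace reproducer along the lines of the example above could look as follows. This is only an illustrative sketch, not part of the patch; the choice of cpu 0 and the busy-loop workload are assumptions. Build with gcc -pthread and observe the run/wait pattern with e.g. perf sched record.

#define _GNU_SOURCE
#include <pthread.h>
#include <sched.h>
#include <stdio.h>

static void *spin(void *arg)
{
	int idle = *(int *)arg;

	if (idle) {
		/* SCHED_IDLE requires sched_priority == 0. */
		struct sched_param param = { .sched_priority = 0 };

		/* pid 0 applies the policy to the calling thread. */
		if (sched_setscheduler(0, SCHED_IDLE, &param))
			perror("sched_setscheduler");
	}

	for (;;)
		;	/* busy loop so the two threads always compete */
}

int main(void)
{
	pthread_t t1, t2;
	int idle = 1, normal = 0;
	cpu_set_t set;

	/* Affine to cpu 0; threads created later inherit the mask. */
	CPU_ZERO(&set);
	CPU_SET(0, &set);
	if (sched_setaffinity(0, sizeof(set), &set))
		perror("sched_setaffinity");

	pthread_create(&t1, NULL, spin, &normal);
	pthread_create(&t2, NULL, spin, &idle);
	pthread_join(t1, NULL);
	return 0;
}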
2) Don't give sleeper credit to SCHED_IDLE entities when they wake onto a cfs_rq with non SCHED_IDLE entities. As a result, newly woken SCHED_IDLE entities will take longer to preempt non SCHED_IDLE entities.
Example: we spawned four threads affined to one cpu, one of which was set to SCHED_IDLE. Without this patch, wakeup latency for the SCHED_IDLE thread was ~1-2ms; with the patch, the wakeup latency was ~10ms.
The benefit of this change is to make it less likely that a newly woken SCHED_IDLE entity will preempt a short-running non SCHED_IDLE entity before it blocks.
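Purely as an illustrative sketch again (not part of the patch), the wakeup latency in this example can be approximated by adding a sleeper thread like the one below to the previous reproducer (three spinning threads plus this one, same cpu affinity). It requires #include <time.h> in addition to the headers above; note that nanosleep() timer slack adds some noise on top of the scheduling delay being measured.

/* SCHED_IDLE thread: block for 10ms, report how late the wakeup arrives. */
static void *idle_sleeper(void *arg)
{
	struct sched_param param = { .sched_priority = 0 };
	struct timespec req = { .tv_sec = 0, .tv_nsec = 10 * 1000 * 1000 };
	struct timespec before, after;
	long lat_us;

	sched_setscheduler(0, SCHED_IDLE, &param);

	for (;;) {
		clock_gettime(CLOCK_MONOTONIC, &before);
		nanosleep(&req, NULL);
		clock_gettime(CLOCK_MONOTONIC, &after);
		/* anything beyond the requested 10ms is wakeup latency */
		lat_us = (after.tv_sec - before.tv_sec) * 1000000 +
			 (after.tv_nsec - before.tv_nsec) / 1000 - 10000;
		printf("wakeup latency: ~%ld us\n", lat_us);
	}
}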
Signed-off-by: Josh Don <joshdon@google.com>
---
 kernel/sched/fair.c | 32 ++++++++++++++++++++++++++------
 1 file changed, 26 insertions(+), 6 deletions(-)
diff --git a/kernel/sched/fair.c b/kernel/sched/fair.c
index a7feae1cb0f0..24b2c6c057e6 100644
--- a/kernel/sched/fair.c
+++ b/kernel/sched/fair.c
@@ -674,6 +674,7 @@ static u64 __sched_period(unsigned long nr_running)
 static u64 sched_slice(struct cfs_rq *cfs_rq, struct sched_entity *se)
 {
 	unsigned int nr_running = cfs_rq->nr_running;
+	struct sched_entity *init_se = se;
 	u64 slice;
 
 	if (sched_feat(ALT_PERIOD))
@@ -684,12 +685,13 @@ static u64 sched_slice(struct cfs_rq *cfs_rq, struct sched_entity *se)
 	for_each_sched_entity(se) {
 		struct load_weight *load;
 		struct load_weight lw;
+		struct cfs_rq *qcfs_rq;
 
-		cfs_rq = cfs_rq_of(se);
-		load = &cfs_rq->load;
+		qcfs_rq = cfs_rq_of(se);
+		load = &qcfs_rq->load;
 
 		if (unlikely(!se->on_rq)) {
-			lw = cfs_rq->load;
+			lw = qcfs_rq->load;
 
 			update_load_add(&lw, se->load.weight);
 			load = &lw;
@@ -697,8 +699,18 @@ static u64 sched_slice(struct cfs_rq *cfs_rq, struct sched_entity *se)
 		slice = __calc_delta(slice, se->load.weight, load);
 	}
 
-	if (sched_feat(BASE_SLICE))
-		slice = max(slice, (u64)sysctl_sched_min_granularity);
+	if (sched_feat(BASE_SLICE)) {
+		/*
+		 * SCHED_IDLE entities are not subject to min_granularity if
+		 * they are competing with non SCHED_IDLE entities. As a result,
+		 * non SCHED_IDLE entities will have reduced latency to get back
+		 * on cpu, at the cost of increased context switch frequency of
+		 * SCHED_IDLE entities.
+		 */
+		if (!se_is_idle(init_se) ||
+		    cfs_rq->h_nr_running == cfs_rq->idle_h_nr_running)
+			slice = max(slice, (u64)sysctl_sched_min_granularity);
+	}
 
 	return slice;
 }
@@ -4216,7 +4228,15 @@ place_entity(struct cfs_rq *cfs_rq, struct sched_entity *se, int initial)
 		if (sched_feat(GENTLE_FAIR_SLEEPERS))
 			thresh >>= 1;
 
-		vruntime -= thresh;
+		/*
+		 * Don't give sleep credit to a SCHED_IDLE entity if we're
+		 * placing it onto a cfs_rq with non SCHED_IDLE entities.
+		 */
+		if (!se_is_idle(se) ||
+		    cfs_rq->h_nr_running == cfs_rq->idle_h_nr_running)
+			vruntime -= thresh;
+		else
+			vruntime += 1;
 	}
 
 	/* ensure we never gain time by being placed backwards. */
-- 
2.32.0.554.ge1b32706d8-goog