Subject: [PATCH 4/4] sched/fair: limit sched slice duration
In the presence of many small-weight tasks such as sched_idle tasks, normal
or high-weight tasks can see their ideal runtime (sched_slice) increase to
hundreds of milliseconds, whereas it normally stays below sysctl_sched_latency.

2 normal tasks running on a CPU will have a max sched_slice of 12ms
(half of the sched_period). This means that they will make progress
every sysctl_sched_latency period.

If we now add 1000 idle tasks on the CPU, the sched_period becomes
3006 ms and the ideal runtime of the normal tasks becomes 609 ms.
It even becomes 1500 ms if the idle tasks belong to an idle cgroup.
This means that the scheduler only looks for picking another waiting task
after the current one has run for 609 ms (respectively 1500 ms). The idle
tasks significantly change the way the 2 normal tasks interleave their
running time slots, whereas they should have only a small impact.

Such a long sched_slice can significantly delay the release of resources,
as a task can wait hundreds of milliseconds for its next running slot just
because of idle tasks queued on the rq.

Cap ideal_runtime to sysctl_sched_latency when comparing it with the next
waiting task, to make sure that tasks regularly make progress and are not
significantly impacted by idle/background tasks queued on the rq.

Signed-off-by: Vincent Guittot <vincent.guittot@linaro.org>
---

While studying the problem, I also considered subtracting
cfs.idle_h_nr_running before computing the sched_slice, but we can have
quite a similar problem with low-weight normal tasks/cgroups, so I have
decided to keep this solution.

Also, this solution doesn't completely remove the impact of idle tasks
on the scheduling pattern, but it caps the running slice of a task to a
max value of 2*sysctl_sched_latency.
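
As a quick check of the cap itself (again a plain userspace sketch, with
sysctl_sched_latency assumed to be 24 ms and min_u64() standing in for
min_t(u64, ...)): the normal 12 ms slice from the first example is left
unchanged, while the inflated 609 ms / 1500 ms slices are clamped to the
latency target:

/* Userspace stand-in for the min_t(u64, ...) cap added by this patch. */
#include <stdio.h>
#include <stdint.h>

static uint64_t min_u64(uint64_t a, uint64_t b)
{
	return a < b ? a : b;
}

int main(void)
{
	const uint64_t sysctl_sched_latency = 24;	/* ms, assumed */
	const uint64_t slices[] = { 12, 609, 1500 };	/* ms, from the examples */

	for (int i = 0; i < 3; i++)
		printf("slice %4llu ms -> capped to %llu ms\n",
		       (unsigned long long)slices[i],
		       (unsigned long long)min_u64(slices[i], sysctl_sched_latency));

	return 0;
}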

kernel/sched/fair.c | 2 ++
1 file changed, 2 insertions(+)

diff --git a/kernel/sched/fair.c b/kernel/sched/fair.c
index 260a55ac462f..96fedd0ab5fa 100644
--- a/kernel/sched/fair.c
+++ b/kernel/sched/fair.c
@@ -4599,6 +4599,8 @@ check_preempt_tick(struct cfs_rq *cfs_rq, struct sched_entity *curr)
 	if (delta < 0)
 		return;
 
+	ideal_runtime = min_t(u64, ideal_runtime, sysctl_sched_latency);
+
 	if (delta > ideal_runtime)
 		resched_curr(rq_of(cfs_rq));
 }
--
2.17.1