Date:	Fri, 26 Mar 2021 11:34:01 +0100
From:	Peter Zijlstra <peterz@infradead.org>
Subject:	[PATCH 9/9] sched,fair: Alternative sched_slice()
The current sched_slice() seems to have issues; there are two things that could be improved:
- the 'nr_running' used for __sched_period() is daft when cgroups are considered; using the RQ-wide h_nr_running seems like a much more consistent number.
- (especially) cgroups can slice the period real fine, which makes for easy over-scheduling; ensure min_gran is the floor its name promises (see the sketch below).
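[Editorial note, not part of the patch: a minimal user-space sketch of the BASE_SLICE arithmetic. It assumes the stock tunables of this era (0.75ms sysctl_sched_min_granularity, 6ms sysctl_sched_latency) and a task whose cgroup hierarchy leaves it 1/64th of the runqueue weight; all names and numbers here are illustrative, not kernel code.]

/* Illustrative sketch only: tunable values and the 1/64 weight
 * ratio are assumed examples, not taken from the patch itself. */
#include <stdio.h>

#define MIN_GRAN    750000ULL	/* sysctl_sched_min_granularity, ns */
#define LATENCY    6000000ULL	/* sysctl_sched_latency, ns */
#define NR_LATENCY (LATENCY / MIN_GRAN)

/* loosely mirrors __sched_period(): stretch the period once it can
 * no longer hold nr_running slices of min_granularity each */
static unsigned long long period(unsigned long nr_running)
{
	if (nr_running > NR_LATENCY)
		return nr_running * MIN_GRAN;
	return LATENCY;
}

int main(void)
{
	/* plain slice: weight scaling applies to the whole period,
	 * so a deep/light cgroup can shrink it below min_gran */
	unsigned long long plain = period(8) / 64;

	/* BASE_SLICE: scale only the part above min_gran, then add
	 * min_gran back, so the result never drops below it */
	unsigned long long base = (period(8) - MIN_GRAN) / 64 + MIN_GRAN;

	printf("plain: %llu ns  base_slice: %llu ns\n", plain, base);
	return 0;
}

With eight runnable tasks the period is 6ms, so the plain slice comes out around 94us, an order of magnitude under the 750us granularity, while the BASE_SLICE variant lands around 832us and by construction cannot go below min_gran.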
Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
---
 kernel/sched/fair.c     | 15 ++++++++++++++-
 kernel/sched/features.h |  3 +++
 2 files changed, 17 insertions(+), 1 deletion(-)
--- a/kernel/sched/fair.c
+++ b/kernel/sched/fair.c
@@ -680,7 +680,16 @@ static u64 __sched_period(unsigned long
  */
 static u64 sched_slice(struct cfs_rq *cfs_rq, struct sched_entity *se)
 {
-	u64 slice = __sched_period(cfs_rq->nr_running + !se->on_rq);
+	unsigned int nr_running = cfs_rq->nr_running;
+	u64 slice;
+
+	if (sched_feat(ALT_PERIOD))
+		nr_running = rq_of(cfs_rq)->cfs.h_nr_running;
+
+	slice = __sched_period(nr_running + !se->on_rq);
+
+	if (sched_feat(BASE_SLICE))
+		slice -= sysctl_sched_min_granularity;
 
 	for_each_sched_entity(se) {
 		struct load_weight *load;
@@ -697,6 +706,10 @@ static u64 sched_slice(struct cfs_rq *cf
 		}
 		slice = __calc_delta(slice, se->load.weight, load);
 	}
+
+	if (sched_feat(BASE_SLICE))
+		slice += sysctl_sched_min_granularity;
+
 	return slice;
 }
 
--- a/kernel/sched/features.h
+++ b/kernel/sched/features.h
@@ -90,3 +90,6 @@ SCHED_FEAT(WA_BIAS, true)
  */
SCHED_FEAT(UTIL_EST, true)
 SCHED_FEAT(UTIL_EST_FASTUP, true)
+
+SCHED_FEAT(ALT_PERIOD, true)
+SCHED_FEAT(BASE_SLICE, true)
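[Editorial note, not stated in the patch: since ALT_PERIOD and BASE_SLICE are ordinary scheduler features, the old behaviour should be recoverable at runtime on a CONFIG_SCHED_DEBUG kernel, e.g. with 'echo NO_ALT_PERIOD > /sys/kernel/debug/sched_features' (and likewise NO_BASE_SLICE), assuming debugfs is mounted in the usual place.]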