From: Chengming Zhou
Subject: [PATCH v3 00/10] sched/fair: task load tracking optimization and cleanup
Date: 2022-08-01
Hi all,

This patch series contains optimizations and cleanups for task load
tracking when a task migrates between CPUs or cgroups, or goes through
switched_from/to_fair().

There are three cases of detach/attach_entity_load_avg (apart from the
fork and exit cases) for a fair task, all sketched below:
1. the task migrates to another CPU (on_rq migrate or wakeup migrate)
2. the task migrates to another cgroup (detach and attach)
3. the task is switched_from/to_fair (detach, then later attach)
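
All three cases share the same basic pattern. A minimal sketch of that
pattern (simplified pseudocode; move_fair_task() is a made-up name for
illustration, not a kernel function):

  static void move_fair_task(struct task_struct *p, unsigned int new_cpu)
  {
          struct sched_entity *se = &p->se;

          /* remove the task's sched_avg from its old cfs_rq */
          detach_entity_load_avg(cfs_rq_of(se), se);

          /* switch to the new cfs_rq (new CPU and/or task group) */
          set_task_rq(p, new_cpu);

          /* add the task's sched_avg to the new cfs_rq */
          attach_entity_load_avg(cfs_rq_of(se), se);
  }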

Patches 01-03 clean up the cgroup change case by removing
cpu_cgrp_subsys->fork(), since we already do the same thing in
sched_cgroup_fork().
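
Part of this is letting set_task_rq() keep the task se depth up to
date itself. A sketch of that idea (close to patch 01, but possibly
not identical to its final form; other class handling omitted):

  static inline void set_task_rq(struct task_struct *p, unsigned int cpu)
  {
  #ifdef CONFIG_FAIR_GROUP_SCHED
          struct task_group *tg = task_group(p);

          p->se.cfs_rq = tg->cfs_rq[cpu];
          p->se.parent = tg->se[cpu];
          /* maintain depth here instead of in the attach paths */
          p->se.depth = tg->se[cpu] ? tg->se[cpu]->depth + 1 : 0;
  #endif
  }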

Patch 05 optimizes the CPU migration case by combining the detach into
the dequeue path.
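
Roughly, dequeue_entity() can detect the migrating-task case and fold
the detach into the update_load_avg() call it already makes (a sketch,
assuming the DO_DETACH flag this series introduces):

  /* in dequeue_entity(): */
  int action = UPDATE_TG;

  if (entity_is_task(se) && task_on_rq_migrating(task_of(se)))
          action |= DO_DETACH;    /* detach while the rq lock is held */

  update_load_avg(cfs_rq, se, action);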

Patch 06 fixes another detach on an unattached task: a corner case
where the task has been woken up by try_to_wake_up() but is waiting to
actually be woken up by sched_ttwu_pending().
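
An unattached se can be recognized by last_update_time == 0, so the
fix boils down to a conditional detach along these lines (sketch):

  /* in detach_entity_cfs_rq(): */
  if (!se->avg.last_update_time)
          return; /* sched_avg not attached, nothing to detach */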

Patch 07 removes the unnecessary limitation that changing the cgroup
of a forked task fails until the task has been woken up by
wake_up_new_task().

Patch 08 refactors detach/attach_entity_cfs_rq() to use the
update_load_avg() DO_DETACH and DO_ATTACH flags.
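
With the flags doing the conditional attach/detach inside
update_load_avg(), the open-coded sequences collapse to roughly this
(a sketch that omits the propagation parts):

  static void detach_entity_cfs_rq(struct sched_entity *se)
  {
          struct cfs_rq *cfs_rq = cfs_rq_of(se);

          /* catch up with the cfs_rq and remove our load when we leave */
          update_load_avg(cfs_rq, se, UPDATE_TG | DO_DETACH);
  }

  static void attach_entity_cfs_rq(struct sched_entity *se)
  {
          struct cfs_rq *cfs_rq = cfs_rq_of(se);

          /* synchronize with the cfs_rq and add our load when we join */
          update_load_avg(cfs_rq, se, UPDATE_TG | DO_ATTACH);
  }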

Patches 09-10 optimize post_init_entity_util_avg() for fair tasks and
skip setting util_avg and runnable_avg for !fair tasks.
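
For a !fair task those values would be rewritten on a later switch to
fair anyway, so post_init_entity_util_avg() only needs to stamp the
PELT clock for it (a sketch of the shape, not the exact final patch):

  void post_init_entity_util_avg(struct task_struct *p)
  {
          struct sched_entity *se = &p->se;
          struct cfs_rq *cfs_rq = cfs_rq_of(se);

          if (p->sched_class != &fair_sched_class) {
                  /*
                   * Only sync the PELT clock; util_avg/runnable_avg
                   * get attached if the task ever switches to fair.
                   */
                  se->avg.last_update_time = cfs_rq_clock_pelt(cfs_rq);
                  return;
          }

          /* ... existing util_avg/runnable_avg setup for fair tasks ... */
  }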

Thanks!

Changes in v3:
- One big change is that this series no longer freezes PELT sum/avg
values to be used as initial values when re-entering fair, since
these PELT values become much less relevant.
- Reorder patches and collect tags from Vincent and Dietmar. Thanks!
- Fix detach on an unattached task which has been woken up by
try_to_wake_up() but is waiting to actually be woken up by
sched_ttwu_pending().
- Delete TASK_NEW, which prevented a forked task from changing cgroup.
- Don't init util_avg and runnable_avg for !fair tasks at fork time.

Changes in v2:
- Split task se depth maintenance into a separate patch 3, as
suggested by Peter.
- Reorder patches 6-7 before patches 8-9, since we need update_load_avg()
to do conditional attach/detach to avoid corner cases like the
double-attach problem.

Chengming Zhou (10):
sched/fair: maintain task se depth in set_task_rq()
sched/fair: remove redundant cpu_cgrp_subsys->fork()
sched/fair: reset sched_avg last_update_time before set_task_rq()
sched/fair: update comments in enqueue/dequeue_entity()
sched/fair: combine detach into dequeue when migrating task
sched/fair: fix another detach on unattached task corner case
sched/fair: allow changing cgroup of new forked task
sched/fair: refactor detach/attach_entity_cfs_rq using
update_load_avg()
sched/fair: defer task sched_avg attach to enqueue_entity()
sched/fair: don't init util/runnable_avg for !fair task

include/linux/sched.h | 5 +-
kernel/sched/core.c | 57 ++--------
kernel/sched/fair.c | 242 ++++++++++++++++++------------------------
kernel/sched/sched.h | 6 +-
4 files changed, 119 insertions(+), 191 deletions(-)

--
2.36.1
