From: Chengming Zhou <>
Subject: [PATCH 5/8] sched/fair: fix load tracking for new forked !fair task
Date: Sat, 9 Jul 2022 23:13:50 +0800
A newly forked !fair task has its sched_avg last_update_time set to the pelt clock of the cfs_rq. A while later, in switched_to_fair():
switched_to_fair
    attach_task_cfs_rq
        attach_entity_cfs_rq
            update_load_avg
                __update_load_avg_se(now, cfs_rq, se)
the delta (now - sa->last_update_time) will contribute to or decay the sched_avg, depending on the task's running/runnable status at that time, even though that whole window was spent outside the fair class.
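
To make the aging step concrete, here is a minimal sketch of how that delta drives the PELT update (illustrative only, not the real kernel code; sketch_update_load_sum is a hypothetical name loosely paraphrasing the delta handling in kernel/sched/pelt.c, with the decay/accumulation math elided):

    /*
     * Hypothetical sketch of the PELT aging step: everything hinges on
     * (now - sa->last_update_time); decay_load()/accumulate_sum() elided.
     */
    static int sketch_update_load_sum(u64 now, struct sched_avg *sa,
                                      int running)
    {
            u64 delta = now - sa->last_update_time;

            /* Bail out if the pelt clock appears to have gone backwards. */
            if ((s64)delta < 0) {
                    sa->last_update_time = now;
                    return 0;
            }

            sa->last_update_time = now;

            /* Decay the old sums across 'delta' (elided). */

            if (running) {
                    /* Accrue new contribution for the window (elided). */
            }

            /*
             * For a new forked !fair task, 'delta' covers time it never
             * spent in the fair class, so both effects are bogus.
             */
            return 1;
    }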
This patch doesn't set the sched_avg last_update_time of a new forked !fair task and instead leaves it at 0, so that later, in update_load_avg(), we don't contribute/decay with the wrong delta (now - sa->last_update_time).
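
Leaving last_update_time at 0 works because update_load_avg() already treats a zero last_update_time as "not yet attached": aging is skipped, and on the first enqueue the entity is attached fresh. Roughly the relevant guards (trimmed from update_load_avg() in kernel/sched/fair.c of this era; illustrative, not a verbatim quote):

    static inline void update_load_avg(struct cfs_rq *cfs_rq,
                                       struct sched_entity *se, int flags)
    {
            u64 now = cfs_rq_clock_pelt(cfs_rq);

            /* Aging is skipped while last_update_time == 0 ... */
            if (se->avg.last_update_time && !(flags & SKIP_AGE_LOAD))
                    __update_load_avg_se(now, cfs_rq, se);

            ...

            /* ... and on enqueue the entity is attached fresh. */
            if (!se->avg.last_update_time && (flags & DO_ATTACH)) {
                    attach_entity_load_avg(cfs_rq, se);
                    update_tg_load_avg(cfs_rq);
            }
            ...
    }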
Signed-off-by: Chengming Zhou <zhouchengming@bytedance.com>
---
 kernel/sched/fair.c | 18 ++----------------
 1 file changed, 2 insertions(+), 16 deletions(-)
diff --git a/kernel/sched/fair.c b/kernel/sched/fair.c
index 171bc22bc142..153a2c6c1069 100644
--- a/kernel/sched/fair.c
+++ b/kernel/sched/fair.c
@@ -849,22 +849,8 @@ void post_init_entity_util_avg(struct task_struct *p)
 
         sa->runnable_avg = sa->util_avg;
 
-        if (p->sched_class != &fair_sched_class) {
-                /*
-                 * For !fair tasks do:
-                 *
-                update_cfs_rq_load_avg(now, cfs_rq);
-                attach_entity_load_avg(cfs_rq, se);
-                switched_from_fair(rq, p);
-                 *
-                 * such that the next switched_to_fair() has the
-                 * expected state.
-                 */
-                se->avg.last_update_time = cfs_rq_clock_pelt(cfs_rq);
-                return;
-        }
-
-        attach_entity_cfs_rq(se);
+        if (p->sched_class == &fair_sched_class)
+                attach_entity_cfs_rq(se);
 }
 
 #else /* !CONFIG_SMP */
-- 
2.36.1