Subject: Re: [PATCH v2 07/10] sched/fair: use update_load_avg() to attach/detach entity load_avg
On 13/07/2022 06:04, Chengming Zhou wrote:
> Since update_load_avg() now supports DO_ATTACH and DO_DETACH, we can
> use it to implement the attach/detach of entity load_avg.
>
> Another advantage of using update_load_avg() is that it will check
> last_update_time before attaching or detaching, instead of the
> unconditional attach/detach in the current code.
>
> This way we can avoid some problematic corner cases of load tracking,
> like the twice-attach problem and the detach-of-an-unattached-NEW-task
> problem.

This explanation is somewhat hard to follow for me. Since both issues
have already been fixed (you mention this further below), are you saying
that with your change you don't reintroduce them?
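
Just to make sure I read the mechanism right: the guard you rely on is
the existing one in update_load_avg(), and the new DO_DETACH side is
(I assume, I haven't checked the exact shape of the final code) its
mirror image, i.e. roughly:

	if (!se->avg.last_update_time && (flags & DO_ATTACH)) {
		attach_entity_load_avg(cfs_rq, se);
		update_tg_load_avg(cfs_rq);
	} else if (se->avg.last_update_time && (flags & DO_DETACH)) {
		/* assumed mirror of the DO_ATTACH case above */
		detach_entity_load_avg(cfs_rq, se);
		update_tg_load_avg(cfs_rq);
	}

So an entity which was never attached (last_update_time == 0) can't be
detached, and an already attached one can't be attached twice.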

> 1. switch to fair class (twice attach problem)
>
> p->sched_class = fair_class;  --> p.se->avg.last_update_time = 0
> if (queued)
>   enqueue_task(p);
>   ...
>     enqueue_entity()
>       update_load_avg(UPDATE_TG | DO_ATTACH)
>         if (!se->avg.last_update_time && (flags & DO_ATTACH))  --> true
>           attach_entity_load_avg()  --> attached, will set last_update_time
> check_class_changed()
>   switched_from() (!fair)
>   switched_to()   (fair)
>     switched_to_fair()
>       attach_entity_load_avg()  --> unconditional attach again!
>
> 2. change cgroup of NEW task (detach unattached task problem)
>
> sched_move_group(p)
>   if (queued)
>     dequeue_task()
>   task_move_group_fair()
>     detach_task_cfs_rq()
>       detach_entity_load_avg()  --> detach unattached NEW task
>     set_task_rq()
>     attach_task_cfs_rq()
>       attach_entity_load_avg()
>   if (queued)
>     enqueue_task()
>
> These problems have been fixed in commit 7dc603c9028e
> ("sched/fair: Fix PELT integrity for new tasks"), which also
> brings its own problems.
>
> First, it adds a new task state TASK_NEW and an unnecessary limitation
> that we fail when changing the cgroup of TASK_NEW tasks.
>
> Second, it attaches the entity load_avg in post_init_entity_util_avg(),
> in which we only set the sched_avg last_update_time for !fair tasks;
> this causes a PELT integrity problem when switched_to_fair().

I guess those PELT integrity problems are less severe since we have the
enqueue_task_fair() before the switched_to_fair() for enqueued tasks. So
we always decay the time the task spent outside fair.
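
I.e. for a queued task my reading of the sequence (same path as in your
first example above) is:

	enqueue_task(p)
	  enqueue_task_fair()
	    enqueue_entity()
	      update_load_avg()  --> ages se->avg from last_update_time,
	                             i.e. decays the time spent outside fair
	check_class_changed()
	  switched_to_fair()
	    attach_task_cfs_rq()

(correct me if I got the ordering wrong)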

Looks to me like you want to replace this with your `freeze PELT when
outside fair` model.

> This patch makes update_load_avg() the only location of attach/detach,
> and can handle corner cases like changing the cgroup of a NEW task by
> checking last_update_time before attach/detach.

[...]

> @@ -11527,9 +11522,7 @@ static void detach_entity_cfs_rq(struct sched_entity *se)
>  	struct cfs_rq *cfs_rq = cfs_rq_of(se);
>  
>  	/* Catch up with the cfs_rq and remove our load when we leave */
> -	update_load_avg(cfs_rq, se, 0);
> -	detach_entity_load_avg(cfs_rq, se);
> -	update_tg_load_avg(cfs_rq);
> +	update_load_avg(cfs_rq, se, UPDATE_TG | DO_DETACH);

IMHO, the DO_[DE|AT]TACH comments in update_load_avg() would have to be
updated in this case.
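
E.g. the comment next to the DO_ATTACH branch (quoting from memory, so
the wording might be slightly off):

	/*
	 * DO_ATTACH means we're here from enqueue_entity().
	 * !last_update_time means we've passed through
	 * migrate_task_rq_fair() indicating we migrated.
	 *
	 * IOW we're enqueueing a task on a new CPU.
	 */

no longer tells the whole story once detach_entity_cfs_rq() (and
presumably attach_entity_cfs_rq()) go through this path as well.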

[...]
