Date: 2023-11-23
Subject: Re: [PATCH] sched/pelt: avoid underestimate of task utilization
From: Lukasz Luba <lukasz.luba@arm.com>

On 11/22/23 14:01, Vincent Guittot wrote:
> It has been reported that thread's util_est can significantly decrease as
> a result of sharing the CPU with other threads. The use case can be easily
> reproduced with a periodic task TA that runs 1ms and sleeps 100us.
> When the task is alone on the CPU, its max utilization and its util_est is
> around 888. If another similar task starts to run on the same CPU, TA will
> have to share the CPU runtime and its maximum utilization will decrease
> to around half the CPU capacity (512), then TA's util_est will follow this
> new maximum trend, which is only the result of sharing the CPU with other
> tasks. Such a situation can be detected with runnable_avg, which is close to
> or equal to util_avg when TA is alone but increases above util_avg when TA
> shares the CPU with other threads and waits on the runqueue.
>
> Signed-off-by: Vincent Guittot <vincent.guittot@linaro.org>
> ---
>
> This patch implements what I mentioned in [1]. I have been able to
> reproduce such pattern with rt-app.
>
> [1] https://lore.kernel.org/lkml/CAKfTPtDd-HhF-YiNTtL9i5k0PfJbF819Yxu4YquzfXgwi7voyw@mail.gmail.com/#t

Thanks Vincent for looking at it! I didn't have time to come
back to this issue.
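
For reference, the pattern should also be easy to reproduce without
rt-app. A minimal userspace sketch of the TA task described above could
look like this (the 1ms/100us figures come from the commit message; the
busy-wait helper and everything else is only illustrative):

/*
 * Illustrative reproducer: a periodic task that busy-loops for roughly
 * 1ms and then sleeps for 100us, as described in the commit message.
 * Running one instance alone, then a second one pinned to the same CPU,
 * should show util_est dropping once the tasks start sharing the CPU.
 */
#include <stdint.h>
#include <time.h>

/* Busy-wait for approximately @us microseconds. */
static void busy_wait_us(int64_t us)
{
	struct timespec start, now;

	clock_gettime(CLOCK_MONOTONIC, &start);
	do {
		clock_gettime(CLOCK_MONOTONIC, &now);
	} while ((now.tv_sec - start.tv_sec) * 1000000LL +
		 (now.tv_nsec - start.tv_nsec) / 1000LL < us);
}

int main(void)
{
	const struct timespec sleep_ts = { .tv_sec = 0, .tv_nsec = 100000 };

	for (;;) {
		busy_wait_us(1000);		/* ~1ms of CPU work */
		nanosleep(&sleep_ts, NULL);	/* ~100us of sleep */
	}

	return 0;
}

Pinning two instances of this to the same CPU (e.g. with taskset) should
make util_est of the first one drop in the way described above.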

>
> kernel/sched/fair.c | 13 +++++++++++++
> 1 file changed, 13 insertions(+)
>
> diff --git a/kernel/sched/fair.c b/kernel/sched/fair.c
> index 07f555857698..eeb505d28905 100644
> --- a/kernel/sched/fair.c
> +++ b/kernel/sched/fair.c
> @@ -4774,6 +4774,11 @@ static inline unsigned long task_util(struct task_struct *p)
> return READ_ONCE(p->se.avg.util_avg);
> }
>
> +static inline unsigned long task_runnable(struct task_struct *p)
> +{
> + return READ_ONCE(p->se.avg.runnable_avg);
> +}
> +
> static inline unsigned long _task_util_est(struct task_struct *p)
> {
> struct util_est ue = READ_ONCE(p->se.avg.util_est);
> @@ -4892,6 +4897,14 @@ static inline void util_est_update(struct cfs_rq *cfs_rq,
> if (task_util(p) > arch_scale_cpu_capacity(cpu_of(rq_of(cfs_rq))))
> return;
>
> + /*
> + * To avoid underestimate of task utilization, skip updates of ewma if
> + * we cannot grant that thread got all CPU time it wanted.
> + */
> + if ((ue.enqueued + UTIL_EST_MARGIN) < task_runnable(p))
> + goto done;
> +
> +
> /*
> * Update Task's estimated utilization
> *

That looks reasonable to do. I cannot test it right now on my Pixel 6,
but it should do the trick for the task util that we need in bigger apps
than rt-app as well.
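
Just to illustrate how the new check behaves with the numbers from the
commit message, here is a rough userspace model of the gate (the margin
of 10 and the sample figures are assumptions for the example, not taken
from fair.c):

/*
 * Toy model of the new util_est_update() gate. The margin and the task
 * values below are illustrative only.
 */
#include <stdbool.h>
#include <stdio.h>

#define EXAMPLE_MARGIN	10	/* roughly 1% of a 1024 capacity scale */

/* Return true when the EWMA update would be skipped. */
static bool skip_ewma_update(unsigned long ue_enqueued,
			     unsigned long runnable_avg)
{
	return (ue_enqueued + EXAMPLE_MARGIN) < runnable_avg;
}

int main(void)
{
	/* TA alone on the CPU: runnable_avg tracks util (~888). */
	printf("alone:  skip=%d\n", skip_ewma_update(888, 890));

	/* TA sharing the CPU: util drops toward ~512 while runnable_avg
	 * keeps growing because the task also waits on the runqueue. */
	printf("shared: skip=%d\n", skip_ewma_update(512, 900));

	return 0;
}

With TA alone, runnable_avg stays close to the enqueued util_est value,
so the EWMA keeps being updated; once TA has to wait on the runqueue,
runnable_avg grows well above it and the update is skipped.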

Reviewed-by: Lukasz Luba <lukasz.luba@arm.com>
