Subject: Re: [PATCH v2 2/4] sched/fair: add util_est on top of PELT
On Tue, Dec 05, 2017 at 05:10:16PM +0000, Patrick Bellasi wrote:
> @@ -562,6 +577,12 @@ struct task_struct {
>
> 	const struct sched_class	*sched_class;
> 	struct sched_entity		se;
> +	/*
> +	 * Since we use se.avg.util_avg to update the util_est fields,
> +	 * the latter benefits from being close to se, which also
> +	 * defines se.avg as cache aligned.
> +	 */
> +	struct util_est			util_est;
> 	struct sched_rt_entity		rt;
> #ifdef CONFIG_CGROUP_SCHED
> 	struct task_group		*sched_task_group;
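
For reference, the struct util_est type added here is defined elsewhere in
the patch; going by the changelog it pairs the utilization sampled at the
task's last dequeue with an EWMA of those samples. A minimal sketch of that
shape (field names are illustrative, not quoted from the hunk above):

	struct util_est {
		unsigned long	last;	/* se.avg.util_avg at the last dequeue */
		unsigned long	ewma;	/* low-pass filtered history of @last */
	};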


> diff --git a/kernel/sched/sched.h b/kernel/sched/sched.h
> index b19552a212de..8371839075fa 100644
> --- a/kernel/sched/sched.h
> +++ b/kernel/sched/sched.h
> @@ -444,6 +444,7 @@ struct cfs_rq {
> 	 * CFS load tracking
> 	 */
> 	struct sched_avg avg;
> +	unsigned long util_est_runnable;
> #ifndef CONFIG_64BIT
> 	u64 load_last_update_time_copy;
> #endif
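
Presumably that per-cfs_rq counter is kept as a running sum of the estimated
utilization of the runnable tasks, updated at enqueue/dequeue time. A hedged
sketch of what that aggregation could look like (helper names are
hypothetical, not taken from the patch):

	/* Hypothetical: a task's estimate, the larger of EWMA and last sample. */
	static inline unsigned long task_util_est(struct task_struct *p)
	{
		return max(p->util_est.ewma, p->util_est.last);
	}

	static inline void util_est_enqueue(struct cfs_rq *cfs_rq,
					    struct task_struct *p)
	{
		cfs_rq->util_est_runnable += task_util_est(p);
	}

	static inline void util_est_dequeue(struct cfs_rq *cfs_rq,
					    struct task_struct *p)
	{
		cfs_rq->util_est_runnable -= task_util_est(p);
	}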


So you put the util_est in task_struct (not sched_entity) but the
util_est_runnable in cfs_rq (not rq). Seems inconsistent.
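
If the state is meant to stay at the CFS level, the symmetric layout would be
to move the per-task bits into sched_entity, right next to the sched_avg they
are computed from, e.g. (a sketch only, not a tested patch):

	struct sched_entity {
		/* ... existing fields ... */
		struct sched_avg	avg ____cacheline_aligned_in_smp;
		struct util_est		util_est;	/* shares locality with avg */
		/* ... */
	};

That keeps both util_est and util_est_runnable in CFS-specific structures
(sched_entity and cfs_rq) and preserves the cache locality argument made in
the comment quoted above.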
