Subject: Re: [PATCH v5 1/4] sched/fair: add util_est on top of PELT
On Thu, Feb 22, 2018 at 05:01:50PM +0000, Patrick Bellasi wrote:
> +static inline void util_est_enqueue(struct cfs_rq *cfs_rq,
> +				    struct task_struct *p)
> +{
> +	unsigned int enqueued;
> +
> +	if (!sched_feat(UTIL_EST))
> +		return;
> +
> +	/* Update root cfs_rq's estimated utilization */
> +	enqueued  = READ_ONCE(cfs_rq->avg.util_est.enqueued);
> +	enqueued += _task_util_est(p);
> +	WRITE_ONCE(cfs_rq->avg.util_est.enqueued, enqueued);
> +}

> +static inline void util_est_dequeue(struct cfs_rq *cfs_rq,
> +				    struct task_struct *p,
> +				    bool task_sleep)
> +{
> +	long last_ewma_diff;
> +	struct util_est ue;
> +
> +	if (!sched_feat(UTIL_EST))
> +		return;
> +
> +	/*
> +	 * Update root cfs_rq's estimated utilization
> +	 *
> +	 * If *p is the last task then the root cfs_rq's estimated utilization
> +	 * of a CPU is 0 by definition.
> +	 */
> +	ue.enqueued = 0;
> +	if (cfs_rq->nr_running) {
> +		ue.enqueued  = READ_ONCE(cfs_rq->avg.util_est.enqueued);
> +		ue.enqueued -= min_t(unsigned int, ue.enqueued,
> +				     _task_util_est(p));
> +	}
> +	WRITE_ONCE(cfs_rq->avg.util_est.enqueued, ue.enqueued);

It appears to me this isn't a stable situation and completely relies on
the !nr_running case to recalibrate. If that case doesn't happen for a
significant while, the sum can run away, right?

Should we put a max in enqueue to avoid this?
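
For illustration only, one way such a cap at enqueue could look is
sketched below. The choice of SCHED_CAPACITY_SCALE as the bound is an
assumption on my part (capacity_orig_of() would be another candidate);
this is not part of the posted patch.

	/*
	 * Illustrative sketch only -- not from the posted patch.
	 * Clamp the root cfs_rq's estimated utilization at enqueue so
	 * that any accumulated error cannot grow without bound between
	 * the !nr_running recalibration points. SCHED_CAPACITY_SCALE is
	 * used as the bound purely as an example.
	 */
	static inline void util_est_enqueue(struct cfs_rq *cfs_rq,
					    struct task_struct *p)
	{
		unsigned int enqueued;

		if (!sched_feat(UTIL_EST))
			return;

		/* Update root cfs_rq's estimated utilization */
		enqueued  = READ_ONCE(cfs_rq->avg.util_est.enqueued);
		enqueued += _task_util_est(p);
		enqueued  = min_t(unsigned int, enqueued, SCHED_CAPACITY_SCALE);
		WRITE_ONCE(cfs_rq->avg.util_est.enqueued, enqueued);
	}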
