Subject: [PATCH -v2 09/18] sched/fair: More accurate reweight_entity()
When a (group) entity changes its weight we should instantly change
its load_avg and propagate that change into the sums it is part of,
because we use these values to predict future behaviour and are not
interested in their historical value.

Without this change, the change in load would have to propagate
through the average, by which time the weight could have changed
again, forever chasing itself.

With this change, the cfs_rq load_avg sum more accurately reflects
the currently runnable load and the expected return of blocked load.
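
For illustration only (not part of the patch): a minimal userspace sketch
of the rescale the hunk below performs. The struct, the toy_reweight()
helper and the sample values are made up; only the divider expression and
the PELT maximum mirror the kernel code.

#include <stdint.h>
#include <stdio.h>

#define LOAD_AVG_MAX	47742	/* maximum value of the PELT geometric sum */

struct toy_avg {
	uint64_t load_sum;		/* weight-free geometric sum of runnable time */
	uint32_t period_contrib;	/* partial contribution of the current 1024us period */
	unsigned long load_avg;		/* weight * load_sum / divider */
};

/* Instantly rescale load_avg for a new weight; load_sum is weight-invariant. */
static void toy_reweight(struct toy_avg *sa, unsigned long new_weight)
{
	uint32_t divider = LOAD_AVG_MAX - 1024 + sa->period_contrib;

	sa->load_avg = (unsigned long)(new_weight * sa->load_sum / divider);
}

int main(void)
{
	struct toy_avg sa = { .load_sum = 30000, .period_contrib = 512 };

	toy_reweight(&sa, 1024);	/* NICE_0 weight */
	printf("load_avg @ weight 1024: %lu\n", sa.load_avg);

	toy_reweight(&sa, 2048);	/* doubling the weight doubles load_avg instantly */
	printf("load_avg @ weight 2048: %lu\n", sa.load_avg);

	return 0;
}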

[josef: compile fix !SMP || !FAIR_GROUP]
Reported-by: Paul Turner <pjt@google.com>
Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
---
kernel/sched/fair.c | 12 +++++++++++-
1 file changed, 11 insertions(+), 1 deletion(-)

--- a/kernel/sched/fair.c
+++ b/kernel/sched/fair.c
@@ -2900,12 +2900,22 @@ static void reweight_entity(struct cfs_r
 		if (cfs_rq->curr == se)
 			update_curr(cfs_rq);
 		account_entity_dequeue(cfs_rq, se);
+		dequeue_runnable_load_avg(cfs_rq, se);
 	}
+	dequeue_load_avg(cfs_rq, se);
 
 	update_load_set(&se->load, weight);
 
-	if (se->on_rq)
+#ifdef CONFIG_SMP
+	se->avg.load_avg = div_u64(se_weight(se) * se->avg.load_sum,
+				   LOAD_AVG_MAX - 1024 + se->avg.period_contrib);
+#endif
+
+	enqueue_load_avg(cfs_rq, se);
+	if (se->on_rq) {
 		account_entity_enqueue(cfs_rq, se);
+		enqueue_runnable_load_avg(cfs_rq, se);
+	}
 }
 
 static inline int throttled_hierarchy(struct cfs_rq *cfs_rq);
