Subject: [PATCH -v2 16/18] sched/fair: More accurate async detach
The problem with the LOAD_AVG_MAX overestimate is that it subtracts too big a
value from load_sum, thereby pushing it down further than it ought to go.
Since runnable_load_avg is not subject to a similar 'force', this results in
the occasional 'runnable_load > load' situation.
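
For reference, a minimal user-space sketch of the divider arithmetic follows;
the LOAD_AVG_MAX value of 47742 (from kernel/sched/sched-pelt.h) and the
period_contrib / removed load_avg numbers are illustrative assumptions, not
values taken from this patch:

#include <stdio.h>
#include <stdint.h>

#define LOAD_AVG_MAX	47742	/* maximum of the PELT geometric series */

int main(void)
{
	uint32_t period_contrib = 200;		/* hypothetical partial period */
	unsigned long removed_load_avg = 512;	/* hypothetical detached load_avg */

	/*
	 * ___update_load_avg() derives _avg from _sum with:
	 *	avg = sum / (LOAD_AVG_MAX - 1024 + period_contrib)
	 * so the exact _sum contribution of a detached _avg is its product
	 * with that same divider, not with LOAD_AVG_MAX.
	 */
	uint32_t divider = LOAD_AVG_MAX - 1024 + period_contrib;

	unsigned long exact   = removed_load_avg * divider;
	unsigned long rounded = removed_load_avg * LOAD_AVG_MAX;

	printf("exact sum to detach : %lu\n", exact);
	printf("old (overestimate)  : %lu\n", rounded);
	printf("over-subtraction    : %lu\n", rounded - exact);

	/*
	 * The difference, r * (1024 - period_contrib), is what pushed
	 * load_sum below the value implied by runnable_load_avg.
	 */
	return 0;
}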

Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
---
kernel/sched/fair.c | 9 +++------
1 file changed, 3 insertions(+), 6 deletions(-)

--- a/kernel/sched/fair.c
+++ b/kernel/sched/fair.c
@@ -3469,6 +3469,7 @@ update_cfs_rq_load_avg(u64 now, struct cfs_rq *cfs_rq)
 
 	if (cfs_rq->removed.nr) {
 		unsigned long r;
+		u32 divider = LOAD_AVG_MAX - 1024 + sa->period_contrib;
 
 		raw_spin_lock(&cfs_rq->removed.lock);
 		swap(cfs_rq->removed.util_avg, removed_util);
@@ -3477,17 +3478,13 @@ update_cfs_rq_load_avg(u64 now, struct cfs_rq *cfs_rq)
 		cfs_rq->removed.nr = 0;
 		raw_spin_unlock(&cfs_rq->removed.lock);
 
-		/*
-		 * The LOAD_AVG_MAX for _sum is a slight over-estimate,
-		 * which is safe due to sub_positive() clipping at 0.
-		 */
 		r = removed_load;
 		sub_positive(&sa->load_avg, r);
-		sub_positive(&sa->load_sum, r * LOAD_AVG_MAX);
+		sub_positive(&sa->load_sum, r * divider);
 
 		r = removed_util;
 		sub_positive(&sa->util_avg, r);
-		sub_positive(&sa->util_sum, r * LOAD_AVG_MAX);
+		sub_positive(&sa->util_sum, r * divider);
 
 		add_tg_cfs_propagate(cfs_rq, -(long)removed_runnable_sum);
 
