Date: 2014-10-21
Subject: Re: [RESEND PATCH 2/3 v5] sched: Rewrite per entity runnable load average tracking
On Fri, Oct 10, 2014 at 10:21:56AM +0800, Yuyang Du wrote:
> static inline long calc_tg_weight(struct task_group *tg, struct cfs_rq *cfs_rq)
> {
> - long tg_weight;
> -
> - /*
> - * Use this CPU's actual weight instead of the last load_contribution
> - * to gain a more accurate current total weight. See
> - * update_cfs_rq_load_contribution().
> - */
> - tg_weight = atomic_long_read(&tg->load_avg);
> - tg_weight -= cfs_rq->tg_load_contrib;
> - tg_weight += cfs_rq->load.weight;
> -
> - return tg_weight;
> + return atomic_long_read(&tg->load_avg);

Since you're now also delaying updating load_avg, why not retain this
slightly better approximation?
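
Concretely, I mean keeping something like the old body; a sketch only,
since tg_load_contrib is the pre-rewrite per-cfs_rq contribution field
and your series may rename or replace it:

static inline long calc_tg_weight(struct task_group *tg, struct cfs_rq *cfs_rq)
{
	long tg_weight;

	/*
	 * Start from the (possibly stale) group-wide sum, then swap this
	 * CPU's stale contribution for its current weight, so the local
	 * part of the estimate is always up to date.
	 */
	tg_weight = atomic_long_read(&tg->load_avg);
	tg_weight -= cfs_rq->tg_load_contrib;
	tg_weight += cfs_rq->load.weight;

	return tg_weight;
}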

