Date:	Tue, 21 Oct 2014 16:56:52 +0200
From:	Peter Zijlstra <>
Subject:	Re: [RESEND PATCH 2/3 v5] sched: Rewrite per entity runnable load average tracking
On Fri, Oct 10, 2014 at 10:21:56AM +0800, Yuyang Du wrote:
>  static inline long calc_tg_weight(struct task_group *tg, struct cfs_rq *cfs_rq)
>  {
> -	long tg_weight;
> -
> -	/*
> -	 * Use this CPU's actual weight instead of the last load_contribution
> -	 * to gain a more accurate current total weight.  See
> -	 * update_cfs_rq_load_contribution().
> -	 */
> -	tg_weight = atomic_long_read(&tg->load_avg);
> -	tg_weight -= cfs_rq->tg_load_contrib;
> -	tg_weight += cfs_rq->load.weight;
> -
> -	return tg_weight;
> +	return atomic_long_read(&tg->load_avg);
Since you're now also delaying updating load_avg, why not retain this slightly better approximation?
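For reference, a minimal sketch of what retaining that approximation could look like under the rewritten tracking: subtract this cfs_rq's last published contribution from the (possibly stale) global sum and add back the CPU's instantaneous runqueue weight. The field name tg_load_avg_contrib is an assumption here, standing in for whatever per-cfs_rq contribution field the new code keeps; the rest follows the deleted hunk above.

	static inline long calc_tg_weight(struct task_group *tg,
					  struct cfs_rq *cfs_rq)
	{
		long tg_weight;

		/*
		 * tg->load_avg is only updated periodically, so correct it
		 * with this CPU's current state: drop the stale contribution
		 * this cfs_rq last added and use its actual weight instead.
		 */
		tg_weight = atomic_long_read(&tg->load_avg);
		tg_weight -= cfs_rq->tg_load_avg_contrib;	/* assumed field name */
		tg_weight += cfs_rq->load.weight;

		return tg_weight;
	}

This keeps the read cheap (one atomic read plus two local accesses) while tracking the local runqueue exactly, which matters when the global sum is allowed to lag.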