Date:	Tue, 21 Oct 2014 16:54:35 +0200
From:	Peter Zijlstra <>
Subject: Re: [RESEND PATCH 2/3 v5] sched: Rewrite per entity runnable load average tracking
On Fri, Oct 10, 2014 at 10:21:56AM +0800, Yuyang Du wrote:
> /*
> + * Updating tg's load_avg is necessary before update_cfs_share (which is done)
> + * and effective_load (which is not done because it is too costly).
> */
> +static inline void update_tg_load_avg(struct cfs_rq *cfs_rq)
> {
> +	long delta = cfs_rq->avg.load_avg - cfs_rq->tg_load_avg_contrib;
>
> +	if (abs(delta) > cfs_rq->tg_load_avg_contrib / 64) {
> +		atomic_long_add(delta, &cfs_rq->tg->load_avg);
> +		cfs_rq->tg_load_avg_contrib = cfs_rq->avg.load_avg;
> }
> }
In the thread here: lkml.kernel.org/r/1409094682.29189.23.camel@j-VirtualBox there are concerns about the error bounds of such constructs. Every cfs_rq can withhold up to tg_load_avg_contrib / 64 before it flushes its delta into tg->load_avg, so across the machine we can basically 'leak' nr_cpus * threshold, which is potentially a very large number.
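To put a rough number on that bound (the figures below are illustrative assumptions, not from the thread), suppose every cfs_rq carries about one nice-0 task worth of load and sits just under its flush threshold:

	#include <stdio.h>

	/* Illustrative worst-case leak: each of nr_cpus cfs_rqs withholds
	 * just under its flush threshold of tg_load_avg_contrib / 64. */
	int main(void)
	{
		long nr_cpus = 4096;		/* assumed large machine */
		long contrib = 1024;		/* ~one nice-0 task per cfs_rq */
		long threshold = contrib / 64;	/* delta withheld per cfs_rq */

		printf("max leak: %ld (~%ld nice-0 task weights)\n",
		       nr_cpus * threshold, (nr_cpus * threshold) / 1024);
		return 0;
	}

Under those assumptions tg->load_avg can be off by the weight of 64 nice-0 tasks.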
Do we want to introduce a forced update to combat this?
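A minimal sketch of what that could look like, assuming a 'force' flag on the update; the flag and its call sites are an assumption here, not part of the posted patch:

	static inline void update_tg_load_avg(struct cfs_rq *cfs_rq, int force)
	{
		long delta = cfs_rq->avg.load_avg - cfs_rq->tg_load_avg_contrib;

		/* Flush unconditionally when forced; otherwise keep the
		 * threshold filter to limit atomic traffic on tg->load_avg. */
		if (force || abs(delta) > cfs_rq->tg_load_avg_contrib / 64) {
			atomic_long_add(delta, &cfs_rq->tg->load_avg);
			cfs_rq->tg_load_avg_contrib = cfs_rq->avg.load_avg;
		}
	}

Hot paths would pass force == 0 and keep the cheap filter; some slow path (periodic load balance, say) could pass force == 1 to bound how long the accumulated error can persist.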