Subject: Re: [PATCH][mmotm] Sched fix stale value in average load per task
* Balbir Singh <balbir@linux.vnet.ibm.com> [2008-11-12 16:19:00]:

> +	else
> +		rq->avg_load_per_task = 0;
>
>  	return rq->avg_load_per_task;

On second thoughts, does this look better? (Based on Mike Galbraith's
suggestion)


cpu_avg_load_per_task() returns a stale value when nr_running is 0: it hands
back the average last calculated when nr_running was non-zero. This patch
removes the cached rq->avg_load_per_task field and computes the average on
the fly, returning 0 when nr_running is 0.

Signed-off-by: Balbir Singh <balbir@linux.vnet.ibm.com>
---

kernel/sched.c | 10 ++++------
1 file changed, 4 insertions(+), 6 deletions(-)

diff -puN kernel/sched.c~sched-fix-stale-load-avg kernel/sched.c
--- linux-2.6.28-rc4/kernel/sched.c~sched-fix-stale-load-avg 2008-11-12 15:55:38.000000000 +0530
+++ linux-2.6.28-rc4-balbir/kernel/sched.c 2008-11-12 16:40:26.000000000 +0530
@@ -571,8 +571,6 @@ struct rq {
 	int cpu;
 	int online;
 
-	unsigned long avg_load_per_task;
-
 	struct task_struct *migration_thread;
 	struct list_head migration_queue;
 #endif
@@ -1428,10 +1426,10 @@ static unsigned long cpu_avg_load_per_ta
 {
 	struct rq *rq = cpu_rq(cpu);
 
-	if (rq->nr_running)
-		rq->avg_load_per_task = rq->load.weight / rq->nr_running;
-
-	return rq->avg_load_per_task;
+	if (unlikely(!rq->nr_running))
+		return 0;
+	else
+		return rq->load.weight / rq->nr_running;
 }
 
 #ifdef CONFIG_FAIR_GROUP_SCHED
_
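
(Illustration only, not part of the patch: a minimal user-space sketch of the
behavioural difference. The names toy_rq, old_avg_load_per_task and
new_avg_load_per_task are made up for the example.)

#include <stdio.h>

/* Toy stand-in for the runqueue fields involved. */
struct toy_rq {
	unsigned long nr_running;
	unsigned long load_weight;
	unsigned long avg_load_per_task;	/* cache used by the old code */
};

/*
 * Old behaviour: the cache is only refreshed while tasks are running,
 * so an idle runqueue keeps reporting the last non-zero average.
 */
static unsigned long old_avg_load_per_task(struct toy_rq *rq)
{
	if (rq->nr_running)
		rq->avg_load_per_task = rq->load_weight / rq->nr_running;

	return rq->avg_load_per_task;
}

/* New behaviour: computed on demand, an idle runqueue reports 0. */
static unsigned long new_avg_load_per_task(const struct toy_rq *rq)
{
	if (!rq->nr_running)
		return 0;

	return rq->load_weight / rq->nr_running;
}

int main(void)
{
	struct toy_rq rq = { .nr_running = 2, .load_weight = 2048 };

	old_avg_load_per_task(&rq);	/* caches 1024 */

	/* Both tasks leave the CPU. */
	rq.nr_running = 0;
	rq.load_weight = 0;

	printf("old: %lu (stale)\n", old_avg_load_per_task(&rq));	/* 1024 */
	printf("new: %lu\n", new_avg_load_per_task(&rq));		/* 0 */
	return 0;
}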


--
Balbir

