Subject: Re: [PATCH 1/2] sched: Use do_div() for 64 bit division at power utilization calculation (putil)
On 05/23/2013 04:34 PM, Lukasz Majewski wrote:
> Now explicit casting is done when the power usage variable (putil) is calculated
>
> Signed-off-by: Lukasz Majewski <l.majewski@samsung.com>
> Signed-off-by: Kyungmin Park <kyungmin.park@samsung.com>
> ---
> This patch was developed on top of the following repository from Alex:
> https://github.com/alexshi/power-scheduling/commits/power-scheduling
> ---
> kernel/sched/fair.c | 6 ++++--
> 1 file changed, 4 insertions(+), 2 deletions(-)
>


Thanks for catching this issue. It seems div_u64() is the better choice here, and there are two more instances of the same bug.
So, could I rewrite the patch as follows?
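
(For context: div_u64() from <linux/math64.h> divides a u64 by a u32, so on
32-bit machines it avoids the full 64-by-64 software division that a plain
`/` on a u64 would pull in. A minimal sketch of the pattern, using a
hypothetical helper name that is not part of the patch:)

	#include <linux/math64.h>

	/* hypothetical helper, only to illustrate the pattern used below */
	static u64 scaled_util(u32 sum, u32 period)
	{
		/* cast before shifting so the shift is done in 64 bits */
		u64 scaled = (u64)sum << SCHED_POWER_SHIFT;

		/* u64-by-u32 divide; cheaper than 64/64 on 32-bit targets */
		return div_u64(scaled, period ? period : 1);
	}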
---

From 9f72c25607351981898d99822f5a66e0ca67a3da Mon Sep 17 00:00:00 2001
From: Alex Shi <alex.shi@intel.com>
Date: Wed, 29 May 2013 11:09:39 +0800
Subject: [PATCH 1/2] sched: fix cast in power utilization calculation and use
 div_u64

Now explicit casting is done when the power usage variable (putil) is
calculated.
div_u64() is optimized for a u32 divisor.

Signed-off-by: Lukasz Majewski <l.majewski@samsung.com>
Signed-off-by: Kyungmin Park <kyungmin.park@samsung.com>
Signed-off-by: Alex Shi <alex.shi@intel.com>
---
kernel/sched/fair.c | 14 ++++++++------
1 file changed, 8 insertions(+), 6 deletions(-)

diff --git a/kernel/sched/fair.c b/kernel/sched/fair.c
index 09ae48a..3a4917c 100644
--- a/kernel/sched/fair.c
+++ b/kernel/sched/fair.c
@@ -1504,8 +1504,8 @@ static inline void update_rq_runnable_avg(struct rq *rq, int runnable)
 	__update_tg_runnable_avg(&rq->avg, &rq->cfs);
 
 	period = rq->avg.runnable_avg_period ? rq->avg.runnable_avg_period : 1;
-	rq->util = (u64)(rq->avg.runnable_avg_sum << SCHED_POWER_SHIFT)
-						/ period;
+	rq->util = div_u64(((u64)rq->avg.runnable_avg_sum << SCHED_POWER_SHIFT),
+			period);
 }

/* Add the load generated by se into cfs_rq's child load-average */
@@ -3407,8 +3407,8 @@ static int is_sd_full(struct sched_domain *sd,
 		/* p maybe a new forked task */
 		putil = FULL_UTIL;
 	else
-		putil = (u64)(p->se.avg.runnable_avg_sum << SCHED_POWER_SHIFT)
-						/ (p->se.avg.runnable_avg_period + 1);
+		putil = div_u64(((u64)p->se.avg.runnable_avg_sum << SCHED_POWER_SHIFT),
+				p->se.avg.runnable_avg_period + 1);

/* Try to collect the domain's utilization */
group = sd->groups;
@@ -3463,9 +3463,11 @@ find_leader_cpu(struct sched_group *group, struct task_struct *p, int this_cpu,
 	int vacancy, min_vacancy = INT_MAX;
 	int leader_cpu = -1;
 	int i;
+
 	/* percentage of the task's util */
-	unsigned putil = (u64)(p->se.avg.runnable_avg_sum << SCHED_POWER_SHIFT)
-						/ (p->se.avg.runnable_avg_period + 1);
+	unsigned putil;
+	putil = div_u64(((u64)p->se.avg.runnable_avg_sum << SCHED_POWER_SHIFT),
+			p->se.avg.runnable_avg_period + 1);
 
 	/* bias toward local cpu */
 	if (cpumask_test_cpu(this_cpu, tsk_cpus_allowed(p)) &&
--
1.7.12
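
(Why the cast has to happen before the shift: runnable_avg_sum is a u32, so
in the old code the shift was evaluated in 32-bit arithmetic and any high
bits were already lost by the time the cast to u64 applied. A small
userspace sketch with a deliberately large made-up value:)

	#include <stdint.h>
	#include <stdio.h>

	#define SHIFT 10	/* stands in for SCHED_POWER_SHIFT */

	int main(void)
	{
		uint32_t sum = 50000000;	/* made-up; big enough that sum << 10 exceeds 32 bits */

		uint64_t before = (uint64_t)(sum << SHIFT);	/* shift in 32 bits, then cast: truncated */
		uint64_t after  = (uint64_t)sum << SHIFT;	/* cast first, shift in 64 bits: correct */

		printf("cast after shift:  %llu\n", (unsigned long long)before);
		printf("cast before shift: %llu\n", (unsigned long long)after);
		return 0;
	}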

--
Thanks
Alex

