Subject: Re: [PATCH v3 15/22] sched: log the cpu utilization at rq
    On Sat, Jan 05, 2013 at 08:37:44AM +0000, Alex Shi wrote:
> The cpu's utilization measures how busy the cpu is:
>
> 	util = cpu_rq(cpu)->avg.runnable_avg_sum
> 	       / cpu_rq(cpu)->avg.runnable_avg_period;
>
> Since util is never more than 1, we use its percentage value in later
> calculations, and define FULL_UTIL as 99%.
>
> In the later power-aware scheduling we care about how busy the cpu is,
> not about the weight of its load; power consumption is tied to busy
> time, not to load weight.
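
[ For illustration: with runnable_avg_sum = 23040 and
  runnable_avg_period = 46080, util = 23040 * 100 / 46080 = 50,
  i.e. the cpu was busy for half of the tracked period. ]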
    >
    > Signed-off-by: Alex Shi <alex.shi@intel.com>
    > ---
    > kernel/sched/debug.c | 1 +
    > kernel/sched/fair.c | 4 ++++
    > kernel/sched/sched.h | 4 ++++
    > 3 files changed, 9 insertions(+)
    >
    > diff --git a/kernel/sched/debug.c b/kernel/sched/debug.c
    > index 2cd3c1b..e4035f7 100644
    > --- a/kernel/sched/debug.c
    > +++ b/kernel/sched/debug.c
    > @@ -318,6 +318,7 @@ do { \
    >
>  	P(ttwu_count);
>  	P(ttwu_local);
> +	P(util);
>
>  #undef P
>  #undef P64
    > diff --git a/kernel/sched/fair.c b/kernel/sched/fair.c
    > index ee015b8..7bfbd69 100644
    > --- a/kernel/sched/fair.c
    > +++ b/kernel/sched/fair.c
    > @@ -1495,8 +1495,12 @@ static void update_cfs_rq_blocked_load(struct cfs_rq *cfs_rq, int force_update)
    >
>  static inline void update_rq_runnable_avg(struct rq *rq, int runnable)
>  {
> +	u32 period;
>  	__update_entity_runnable_avg(rq->clock_task, &rq->avg, runnable);
>  	__update_tg_runnable_avg(&rq->avg, &rq->cfs);
> +
> +	period = rq->avg.runnable_avg_period ? rq->avg.runnable_avg_period : 1;
> +	rq->util = rq->avg.runnable_avg_sum * 100 / period;
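
[ The "? : 1" fallback guards the division against
  runnable_avg_period still being 0 before the first period has
  accumulated. ]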

    The existing tg->runnable_avg and cfs_rq->tg_runnable_contrib variables
    both hold rq->avg.runnable_avg_sum / rq->avg.runnable_avg_period scaled
    by NICE_0_LOAD (1024). Why not use one of the existing variables instead
    of introducing a new one?

    Morten
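
[ For reference, a minimal sketch of that alternative, assuming
  CONFIG_FAIR_GROUP_SCHED so that cfs_rq->tg_runnable_contrib is
  available; the helper name rq_util_pct() is hypothetical:

	/*
	 * cfs_rq->tg_runnable_contrib already holds
	 * runnable_avg_sum / runnable_avg_period scaled by
	 * NICE_0_LOAD (1 << NICE_0_SHIFT), so a percentage can be
	 * derived from it without adding a new rq field.
	 */
	static inline unsigned int rq_util_pct(struct rq *rq)
	{
		return (rq->cfs.tg_runnable_contrib * 100) >> NICE_0_SHIFT;
	}
  ]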

>  }
>
>  /* Add the load generated by se into cfs_rq's child load-average */
    > diff --git a/kernel/sched/sched.h b/kernel/sched/sched.h
    > index 66b08a1..3c6e803 100644
    > --- a/kernel/sched/sched.h
    > +++ b/kernel/sched/sched.h
    > @@ -350,6 +350,9 @@ extern struct root_domain def_root_domain;
    >
>  #endif /* CONFIG_SMP */
>
> +/* Take as full load, if the cpu percentage util is up to 99 */
> +#define FULL_UTIL 99
> +
>  /*
>   * This is the main, per-CPU runqueue data structure.
>   *
> @@ -481,6 +484,7 @@ struct rq {
>  #endif
>
>  	struct sched_avg avg;
> +	unsigned int util;
>  };
>
>  static inline int cpu_of(struct rq *rq)
    > --
    > 1.7.12
    >


