    Subject: Re: [RFCv6 PATCH 09/10] sched: deadline: use deadline bandwidth in scale_rt_capacity
    On 14/12/15 16:56, Vincent Guittot wrote:
    > On 14 December 2015 at 16:17, Peter Zijlstra <peterz@infradead.org> wrote:
    > > On Tue, Dec 08, 2015 at 10:19:30PM -0800, Steve Muckle wrote:
    > >> From: Vincent Guittot <vincent.guittot@linaro.org>
    > >
    > >> diff --git a/kernel/sched/deadline.c b/kernel/sched/deadline.c
    > >> index 8b0a15e..9d9eb50 100644
    > >> --- a/kernel/sched/deadline.c
    > >> +++ b/kernel/sched/deadline.c
    > >> @@ -43,6 +43,24 @@ static inline int on_dl_rq(struct sched_dl_entity *dl_se)
    > >> 	return !RB_EMPTY_NODE(&dl_se->rb_node);
    > >> }
    > >>
    > >> +static void add_average_bw(struct sched_dl_entity *dl_se, struct dl_rq *dl_rq)
    > >> +{
    > >> +	u64 se_bw = dl_se->dl_bw;
    > >> +
    > >> +	dl_rq->avg_bw += se_bw;
    > >> +}
    > >> +
    > >> +static void clear_average_bw(struct sched_dl_entity *dl_se, struct dl_rq *dl_rq)
    > >> +{
    > >> +	u64 se_bw = dl_se->dl_bw;
    > >> +
    > >> +	dl_rq->avg_bw -= se_bw;
    > >> +	if (dl_rq->avg_bw < 0) {
    > >> +		WARN_ON(1);
    > >> +		dl_rq->avg_bw = 0;
    > >> +	}
    > >> +}
    > >
    > >
    > >> diff --git a/kernel/sched/fair.c b/kernel/sched/fair.c
    > >> index 4c49f76..ce05f61 100644
    > >> --- a/kernel/sched/fair.c
    > >> +++ b/kernel/sched/fair.c
    > >> @@ -6203,6 +6203,14 @@ static unsigned long scale_rt_capacity(int cpu)
    > >>
    > >> 	used = div_u64(avg, total);
    > >>
    > >> +	/*
    > >> +	 * Deadline bandwidth is defined at system level, so we must
    > >> +	 * scale this bandwidth by the max capacity of the system.
    > >> +	 * As a reminder, avg_bw is 20 bits wide and
    > >> +	 * arch_scale_cpu_capacity() is 10 bits wide.
    > >> +	 */
    > >> +	used += div_u64(rq->dl.avg_bw, arch_scale_cpu_capacity(NULL, cpu));
    > >> +
    > >> 	if (likely(used < SCHED_CAPACITY_SCALE))
    > >> 		return SCHED_CAPACITY_SCALE - used;
    > >>
    > >> diff --git a/kernel/sched/sched.h b/kernel/sched/sched.h
    > >> index 08858d1..e44c6be 100644
    > >> --- a/kernel/sched/sched.h
    > >> +++ b/kernel/sched/sched.h
    > >> @@ -519,6 +519,8 @@ struct dl_rq {
    > >> #else
    > >> 	struct dl_bw dl_bw;
    > >> #endif
    > >> +	/* This is the "average utilization" for this runqueue */
    > >> +	s64 avg_bw;
    > >> };
    > >
    > > So I don't think this is right. AFAICT this projects the WCET as the
    > > amount of time actually used by DL. This will, under many circumstances,
    > > vastly overestimate the amount of time actually spent on it, and
    > > therefore unduly pessimize the fair capacity of this CPU.
    >
    > I agree that if the WCET is far from reality, we will underestimate
    > the available capacity for CFS. Have you got a use case in mind in
    > which the WCET is overestimated?

    I guess simply the fact that a task can be admitted to the system, but
    then in practice sleep, waiting for some event to happen.
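
    To put some numbers on this, here is a standalone sketch of the
    arithmetic (assuming BW_SHIFT = 20 for dl_bw and SCHED_CAPACITY_SCALE =
    1024, as the patch comment states; to_ratio() below mirrors the
    kernel's runtime/period scaling):

    #include <stdio.h>
    #include <stdint.h>

    #define BW_SHIFT		20	/* dl_bw fixed-point resolution */
    #define SCHED_CAPACITY_SCALE	1024	/* 10-bit capacity scale */

    /* bandwidth = runtime/period at 20-bit resolution */
    static uint64_t to_ratio(uint64_t period, uint64_t runtime)
    {
    	return (runtime << BW_SHIFT) / period;
    }

    int main(void)
    {
    	/* task admitted with a 10ms WCET budget every 100ms ... */
    	uint64_t dl_bw = to_ratio(100000000ULL, 10000000ULL);
    	/* ... but it actually runs only 1ms per period, then sleeps */
    	uint64_t actual_bw = to_ratio(100000000ULL, 1000000ULL);

    	/* as in scale_rt_capacity(): 20-bit bw / 10-bit capacity */
    	printf("capacity charged to DL: %llu/1024\n",
    	       (unsigned long long)(dl_bw / SCHED_CAPACITY_SCALE));
    	printf("capacity actually used: %llu/1024\n",
    	       (unsigned long long)(actual_bw / SCHED_CAPACITY_SCALE));
    	return 0;
    }

    Here CFS loses ~10% of the CPU on paper while DL actually consumes
    only ~1% of it.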

    > If we can't rely on this parameter to evaluate the amount of capacity
    > used by the deadline scheduler on a core, this implies that we also
    > can't use it for requesting capacity from cpufreq, and we should fall
    > back on a monitoring mechanism which reacts to a change instead of
    > anticipating it.
    >
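
    As an aside, the "anticipating" use Vincent mentions would map the
    reserved bandwidth to a frequency request roughly along these lines
    (a sketch only; dl_freq_request() and its hookup into a governor are
    hypothetical, not an existing interface):

    /*
     * Hypothetical helper: request a frequency whose capacity covers the
     * reserved deadline bandwidth. avg_bw is 20-bit scaled and capacity
     * is 10-bit scaled, hence the shift by the 10-bit difference.
     */
    static unsigned long dl_freq_request(u64 avg_bw, unsigned long max_freq)
    {
    	unsigned long cap_req = avg_bw >> (20 - 10);	/* 0..1024 */

    	return max_freq * cap_req / 1024;
    }

    Of course, if avg_bw overestimates the actual usage, the frequency
    request is overestimated by the same factor.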

    There is at least a middle way: use the utilization of active servers
    (as I think Luca already mentioned). This solution should remove some
    of the pessimism, while still being safe for our needs. I should be
    able to play with this alternative in the (hopefully) near future.
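
    Roughly, the idea is to account a task's bandwidth only while its
    server is active (from wakeup until its 0-lag time) rather than for
    the whole time it is admitted. A sketch of the bookkeeping, with
    made-up names and not the eventual implementation:

    /* sum of dl_bw over the currently *active* servers only */
    struct dl_rq_sketch {
    	u64 running_bw;
    };

    /* task wakes up: its reservation starts counting */
    static void server_activate(struct dl_rq_sketch *dl_rq, u64 dl_bw)
    {
    	dl_rq->running_bw += dl_bw;
    }

    /* task blocked and its 0-lag time has passed: stop counting it */
    static void server_deactivate(struct dl_rq_sketch *dl_rq, u64 dl_bw)
    {
    	if (WARN_ON_ONCE(dl_rq->running_bw < dl_bw))
    		dl_rq->running_bw = 0;
    	else
    		dl_rq->running_bw -= dl_bw;
    }

    scale_rt_capacity() would then subtract running_bw instead of the sum
    over all admitted tasks: still an upper bound on what DL can consume,
    but much closer to reality for tasks that mostly sleep.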

    Thanks,

    - Juri

