    Subject: Re: [PATCH -tip 08/32] sched/fair: Fix forced idle sibling starvation corner case
    On Tue, Nov 17, 2020 at 06:19:38PM -0500, Joel Fernandes (Google) wrote:
    > From: Vineeth Pillai <viremana@linux.microsoft.com>
    >
    > If there is only one long running local task and the sibling is
    > forced idle, it might not get a chance to run until a schedule
    > event happens on any cpu in the core.
    >
    > So we check for this condition during a tick to see if a sibling
    > is starved and then give it a chance to schedule.
    >
    > Tested-by: Julien Desfossez <jdesfossez@digitalocean.com>
    > Reviewed-by: Joel Fernandes (Google) <joel@joelfernandes.org>
    > Signed-off-by: Vineeth Pillai <viremana@linux.microsoft.com>
    > Signed-off-by: Julien Desfossez <jdesfossez@digitalocean.com>
    > Signed-off-by: Joel Fernandes (Google) <joel@joelfernandes.org>
    > ---
    > kernel/sched/core.c  | 15 ++++++++-------
    > kernel/sched/fair.c  | 40 ++++++++++++++++++++++++++++++++++++++++
    > kernel/sched/sched.h |  2 +-
    > 3 files changed, 49 insertions(+), 8 deletions(-)
    >
    > diff --git a/kernel/sched/core.c b/kernel/sched/core.c
    > index 1bd0b0bbb040..52d0e83072a4 100644
    > --- a/kernel/sched/core.c
    > +++ b/kernel/sched/core.c
    > @@ -5206,16 +5206,15 @@ pick_next_task(struct rq *rq, struct task_struct *prev, struct rq_flags *rf)
    >
    >  	/* reset state */
    >  	rq->core->core_cookie = 0UL;
    > +	if (rq->core->core_forceidle) {
    > +		need_sync = true;
    > +		rq->core->core_forceidle = false;
    > +	}
    >  	for_each_cpu(i, smt_mask) {
    >  		struct rq *rq_i = cpu_rq(i);
    >
    >  		rq_i->core_pick = NULL;
    >
    > -		if (rq_i->core_forceidle) {
    > -			need_sync = true;
    > -			rq_i->core_forceidle = false;
    > -		}
    > -
    >  		if (i != cpu)
    >  			update_rq_clock(rq_i);
    >  	}
    > @@ -5335,8 +5334,10 @@ next_class:;
    >  		if (!rq_i->core_pick)
    >  			continue;
    >
    > -		if (is_task_rq_idle(rq_i->core_pick) && rq_i->nr_running)
    > -			rq_i->core_forceidle = true;
    > +		if (is_task_rq_idle(rq_i->core_pick) && rq_i->nr_running &&
    > +		    !rq_i->core->core_forceidle) {
    > +			rq_i->core->core_forceidle = true;
    > +		}
    >
    >  		if (i == cpu) {
    >  			rq_i->core_pick = NULL;
    > diff --git a/kernel/sched/fair.c b/kernel/sched/fair.c
    > index f53681cd263e..42965c4fd71f 100644
    > --- a/kernel/sched/fair.c
    > +++ b/kernel/sched/fair.c
    > @@ -10692,6 +10692,44 @@ static void rq_offline_fair(struct rq *rq)
    >
    > #endif /* CONFIG_SMP */
    >
    > +#ifdef CONFIG_SCHED_CORE
    > +static inline bool
    > +__entity_slice_used(struct sched_entity *se, int min_nr_tasks)
    > +{
    > +	u64 slice = sched_slice(cfs_rq_of(se), se);

    I wonder if the definition of sched_slice() should be revisited for core
    scheduling?

    Should we use sched_slice = sched_slice / cpumask_weight(smt_mask)?
    Would that resolve the issue you're seeing? Effectively, the question
    we need to answer is whether two core scheduling siblings should be
    treated as executing one large slice.

    Balbir Singh.
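
    A rough sketch of what the suggestion above could look like, applied
    locally in the __entity_slice_used() helper quoted earlier: the slice
    returned by sched_slice() is divided by the number of SMT siblings in
    the core, so the siblings are effectively treated as sharing one large
    slice. This is an illustration under assumptions rather than code from
    the patch series: the helper name __entity_slice_used_scaled() and the
    rtime comparison are made up for the example, while cpumask_weight(),
    cpu_smt_mask(), rq_of(), cfs_rq_of(), cpu_of() and div_u64() are
    existing kernel interfaces.

    	/* Illustrative only: scale the CFS slice by the core's SMT sibling count. */
    	static inline bool
    	__entity_slice_used_scaled(struct sched_entity *se, int min_nr_tasks)
    	{
    		struct cfs_rq *cfs_rq = cfs_rq_of(se);
    		struct rq *rq = rq_of(cfs_rq);
    		unsigned int nr_siblings = cpumask_weight(cpu_smt_mask(cpu_of(rq)));
    		u64 slice = sched_slice(cfs_rq, se);
    		u64 rtime = se->sum_exec_runtime - se->prev_sum_exec_runtime;

    		/* Treat the core's siblings as sharing one large slice. */
    		if (nr_siblings > 1)
    			slice = div_u64(slice, nr_siblings);

    		return (rtime * min_nr_tasks > slice);
    	}

    Doing the division inside sched_slice() itself, as the question
    suggests, would also change tick preemption and sched_vslice(), so the
    scope of the scaling is part of what needs to be decided.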

