    Subject: Re: [PATCH v3 06/10] sched/fair: Use the prefer_sibling flag of the current sched domain
    On 2023-02-09 at 15:05:03 -0800, Tim Chen wrote:
    > On Thu, 2023-02-09 at 20:00 +0000, Chen, Tim C wrote:
    > > > >  static inline void update_sd_lb_stats(struct lb_env *env, struct sd_lb_stats *sds)
    > > > >  {
    > > > > -       struct sched_domain *child = env->sd->child;
    > > > >         struct sched_group *sg = env->sd->groups;
    > > > >         struct sg_lb_stats *local = &sds->local_stat;
    > > > >         struct sg_lb_stats tmp_sgs;
    > > > > @@ -10045,9 +10044,11 @@ static inline void update_sd_lb_stats(struct lb_env *env, struct sd_lb_stats *sd
    > > > >                 sg = sg->next;
    > > > >         } while (sg != env->sd->groups);
    > > > >
    > > > > -       /* Tag domain that child domain prefers tasks go to siblings first */
    > > > > -       sds->prefer_sibling = child && child->flags & SD_PREFER_SIBLING;
    > > > > -
    > > > > +       /*
    > > > > +        * Tag domain that @env::sd prefers to spread excess tasks among
    > > > > +        * sibling sched groups.
    > > > > +        */
    > > > > +       sds->prefer_sibling = env->sd->flags & SD_PREFER_SIBLING;
    > > > >
    > > > This does help fix the issue that a non-SMT core fails to pull tasks
    > > > from busy SMT cores.
    > > > And it also semantically changes the definition of prefer_sibling.
    > > > Do we also need to change this:
    > > >
    > > >        if ((sd->flags & SD_ASYM_CPUCAPACITY) && sd->child)
    > > >                sd->child->flags &= ~SD_PREFER_SIBLING;
    > > >
    > > > to something like:
    > > >
    > > >        if (sd->flags & SD_ASYM_CPUCAPACITY)
    > > >                sd->flags &= ~SD_PREFER_SIBLING;
    > > >
    > >
    > > Yu,
    > >
    > > I think you are talking about the code in sd_init(), where
    > > SD_PREFER_SIBLING is first set to "ON" and then updated depending on
    > > SD_ASYM_CPUCAPACITY.  The intention of that code is that if the CPUs
    > > in a scheduler domain have differing capacities, we do not want to
    > > spread tasks among the child groups of that domain.  So the flag is
    > > turned off at the child level, not at the parent level.  But with
    > > your change above, the parent's flag is turned off instead, leaving
    > > the child-level flag on.  This moves the level where spreading
    > > happens (SD_PREFER_SIBLING on) up one level, which is undesired (see
    > > table below).
    > >
    Yes, it moves the flag one level up. And if I understand correctly, with
    Ricardo's patch applied, we have changed the original meaning of
    SD_PREFER_SIBLING:
    Original: tasks in this sched domain want to be migrated to another
    sched domain.
    After the init change: tasks in the sched groups under this sched domain
    want to be migrated to a sibling group.
    > >
    > Sorry, a bad mail client messed up the table format. Updated below:
    >
    > SD Level       SD_ASYM_CPUCAPACITY   SD_PREFER_SIBLING after init
    >                                      original code    proposed
    > root           ON                    ON               OFF (note: SD_PREFER_SIBLING unused at this level)
    SD_PREFER_SIBLING is honored at the root level after the init proposal.
    > first level    ON                    OFF              OFF
    Before the init proposal, tasks in the first-level sd do not want to be
    spread to a sibling sd. After the init proposal, tasks in all sched
    groups under the root sd do not want to be spread to a sibling sched
    group (AKA a first-level sd).
    > second level   OFF                   OFF              ON
    > third level    OFF                   ON               ON
    >
    > Tim

    thanks,
    Chenyu
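
    A minimal standalone sketch of the init behaviour discussed above (plain
    user-space C, not kernel code; the four-level hierarchy and the
    SD_ASYM_CPUCAPACITY pattern are taken from the table, everything else is
    illustrative). It applies both variants of the sd_init() flag clearing,
    original (clear the child's flag) and proposed (clear the domain's own
    flag), and prints the resulting SD_PREFER_SIBLING per level:

    /* Model of SD_PREFER_SIBLING after sd_init(), per the table above. */
    #include <stdio.h>
    #include <stdbool.h>

    #define NR_LEVELS 4  /* 0 = root ... 3 = third (lowest) level */

    static const char *level_name[NR_LEVELS] = {
            "root", "first level", "second level", "third level"
    };

    /* SD_ASYM_CPUCAPACITY per level, as in the table above. */
    static const bool asym[NR_LEVELS] = { true, true, false, false };

    static void init_flags(bool prefer[NR_LEVELS], bool proposed)
    {
            int i;

            /* sd_init() starts every level with SD_PREFER_SIBLING set. */
            for (i = 0; i < NR_LEVELS; i++)
                    prefer[i] = true;

            for (i = 0; i < NR_LEVELS; i++) {
                    if (!asym[i])
                            continue;
                    if (proposed)
                            prefer[i] = false;      /* clear the domain's own flag */
                    else if (i + 1 < NR_LEVELS)
                            prefer[i + 1] = false;  /* clear the child domain's flag */
            }
    }

    int main(void)
    {
            bool orig[NR_LEVELS], prop[NR_LEVELS];
            int i;

            init_flags(orig, false);
            init_flags(prop, true);

            printf("%-14s %-20s %-14s %s\n", "SD Level",
                   "SD_ASYM_CPUCAPACITY", "original code", "proposed");
            for (i = 0; i < NR_LEVELS; i++)
                    printf("%-14s %-20s %-14s %s\n", level_name[i],
                           asym[i] ? "ON" : "OFF",
                           orig[i] ? "ON" : "OFF",
                           prop[i] ? "ON" : "OFF");
            return 0;
    }

    Built with gcc, this reproduces the four rows above: the original variant
    never touches the root's own flag (which the old update_sd_lb_stats()
    read ignores anyway), while the proposed variant clears it because the
    root level has SD_ASYM_CPUCAPACITY set.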
