From: Josh Don <>
Date: Thu, 12 Aug 2021 14:09:15 -0700
Subject: Re: [PATCH 2/2] sched: adjust SCHED_IDLE interactions
> > @@ -697,8 +699,18 @@ static u64 sched_slice(struct cfs_rq *cfs_rq, struct sched_entity *se)
> >                 slice = __calc_delta(slice, se->load.weight, load);
> >         }
> >
> > -       if (sched_feat(BASE_SLICE))
> > -               slice = max(slice, (u64)w);
> > +       if (sched_feat(BASE_SLICE)) {
> > +               /*
> > +                * SCHED_IDLE entities are not subject to min_granularity if
> > +                * they are competing with non SCHED_IDLE entities. As a result,
> > +                * non SCHED_IDLE entities will have reduced latency to get back
> > +                * on cpu, at the cost of increased context switch frequency of
> > +                * SCHED_IDLE entities.
> > +                */
>
> Ensuring that an entity gets a minimum runtime was added so that we
> leave it enough time to make forward progress.
> If you exclude sched_idle entities from this min runtime, the
> sched_slice of an idle entity will be really small.
> I don't have the details of your example above, but I imagine it's a
> 16-CPU system, which means sysctl_sched_min_granularity=3.75ms and
> explains the 4ms running time of an idle entity.
> For a 16-CPU system, the sched_slice of an idle entity in the example
> in your cover letter is: 6ms*(1+log2(16))*3/1027 ~= 87us. Of course
> this becomes even worse with more threads and cgroups, or threads
> with nice prio -19.
>
> This value is then used to set the next hrtimer event in SCHED_HRTICK,
> and 87us is too small to make any progress.
>
> The 1ms in your test comes from the tick, which could be a good
> candidate for a min value, or from
> normalized_sysctl_sched_min_granularity, which has the advantage of
> not increasing with the number of CPUs.
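Your arithmetic checks out on my end. For anyone following along, a
trivial standalone program that reproduces the ~87us figure from the
default tunables (illustrative only, not kernel code; it assumes the
default nice-0 weight of 1024, WEIGHT_IDLEPRIO of 3, and the default
logarithmic latency scaling of 1 + ilog2(ncpus)):

#include <stdio.h>

int main(void)
{
        unsigned long latency_ns = 6000000UL;   /* sysctl_sched_latency default: 6ms */
        unsigned long scale = 1 + 4;            /* 1 + ilog2(16) for a 16-CPU system */
        unsigned long period_ns = latency_ns * scale;   /* 30ms scheduling period */

        unsigned long idle_weight = 3;          /* WEIGHT_IDLEPRIO */
        unsigned long nice0_weight = 1024;      /* nice-0 task weight */

        /* idle entity's proportional share of the period */
        unsigned long slice_ns = period_ns * idle_weight
                                 / (idle_weight + nice0_weight);

        printf("idle entity slice: ~%lu us\n", slice_ns / 1000);  /* ~87us */
        return 0;
}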
Fair point, this shouldn't completely ignore min granularity. Something like:

unsigned int sysctl_sched_idle_min_granularity = NSEC_PER_MSEC;

(and still using this value in place of the default min_granularity only when the SCHED_IDLE entity is competing with normal entities)
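To sketch it out in sched_slice() (untested; se_is_idle() is
introduced by this series, and I'm reusing the h_nr_running comparison
from the place_entity() hunk below as the "competing with normal
entities" test):

        if (sched_feat(BASE_SLICE)) {
                unsigned int min_gran;

                /*
                 * Clamp to the smaller idle min granularity only when the
                 * idle entity is competing with normal entities; otherwise
                 * keep the default floor.
                 */
                if (se_is_idle(se) &&
                    cfs_rq->h_nr_running != cfs_rq->idle_h_nr_running)
                        min_gran = sysctl_sched_idle_min_granularity;
                else
                        min_gran = sysctl_sched_min_granularity;

                slice = max_t(u64, slice, min_gran);
        }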
> > @@ -4216,7 +4228,15 @@ place_entity(struct cfs_rq *cfs_rq, struct sched_entity *se, int initial)
> >                 if (sched_feat(GENTLE_FAIR_SLEEPERS))
> >                         thresh >>= 1;
> >
> > -               vruntime -= thresh;
> > +               /*
> > +                * Don't give sleep credit to a SCHED_IDLE entity if we're
> > +                * placing it onto a cfs_rq with non SCHED_IDLE entities.
> > +                */
> > +               if (!se_is_idle(se) ||
> > +                   cfs_rq->h_nr_running == cfs_rq->idle_h_nr_running)
>
> Can't this condition above create unfairness between idle entities?
> Idle thread 1 wakes up while a normal thread is running.
> The normal thread sleeps immediately afterwards.
> Idle thread 2 wakes up just after and gets some sleep credit compared
> to the 1st one.
Yes, this sacrifices some idle<->idle fairness when a normal thread comes and goes. One alternative is to simply further reduce thresh for idle entities. That interferes with idle<->idle fairness when there are no normal threads, which is why I originally opted for the current approach. On second thought though, the fairness issue you describe seems the more problematic one. Thoughts on instead applying a smaller sleep credit threshold universally to idle entities?
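Roughly, in place_entity() (untested sketch; the extra shift amount is
a placeholder, not a tuned value):

        if (!initial) {
                unsigned long thresh = sysctl_sched_latency;

                if (sched_feat(GENTLE_FAIR_SLEEPERS))
                        thresh >>= 1;

                /*
                 * Always give idle entities a reduced sleep credit,
                 * rather than zeroing it only when normal entities are
                 * present. This keeps idle<->idle placement consistent
                 * regardless of whether a normal thread happens to be
                 * running at wakeup time.
                 */
                if (se_is_idle(se))
                        thresh >>= 2;

                vruntime -= thresh;
        }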