From: Vincent Guittot <>
Date: Wed, 28 Aug 2019 19:32:05 +0200
Subject: Re: [PATCH 08/15] sched,fair: simplify timeslice length code
On Thu, 22 Aug 2019 at 04:18, Rik van Riel <riel@surriel.com> wrote:
>
> The idea behind __sched_period makes sense, but the results do not always.
>
> When a CPU has one high priority task and a large number of low priority
> tasks, __sched_period will return a value larger than sysctl_sched_latency,
> and the one high priority task may end up getting a timeslice all for itself
> that is also much larger than sysctl_sched_latency.
Note that unless you enable sched_feat(HRTICK), sched_slice is mainly used
to decide how quickly we preempt the running task at the tick, but a newly
woken task can preempt it before that.
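
For reference, this is roughly how sched_slice() feeds that tick-time
check; a simplified paraphrase of check_preempt_tick() in
kernel/sched/fair.c, with the buddy handling and the
sysctl_sched_min_granularity clamp left out:

	static void
	check_preempt_tick(struct cfs_rq *cfs_rq, struct sched_entity *curr)
	{
		unsigned long ideal_runtime, delta_exec;

		/* the slice this entity is entitled to in the period */
		ideal_runtime = sched_slice(cfs_rq, curr);
		delta_exec = curr->sum_exec_runtime -
			     curr->prev_sum_exec_runtime;

		/* ran past its slice: reschedule at the next opportunity */
		if (delta_exec > ideal_runtime)
			resched_curr(rq_of(cfs_rq));
	}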
>
> The low priority tasks will have their time slices rounded up to
> sysctl_sched_min_granularity, resulting in an even larger scheduling
> latency than targeted by __sched_period.
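
To put hypothetical numbers on that, with the stock (non-SMP-scaled)
defaults of sysctl_sched_latency = 6ms, sysctl_sched_min_granularity =
0.75ms and sched_nr_latency = 8, take one nice -20 task (weight 88761)
plus 40 nice 0 tasks (weight 1024 each, total weight 129721):

	period         = 41 * 0.75ms = 30.75ms  (vs. the 6ms latency target)
	nice -20 slice = 30.75ms * 88761/129721 ~= 21.0ms
	nice 0 slice   = 30.75ms *  1024/129721 ~= 0.24ms

and the 0.24ms slices are effectively rounded up to 0.75ms by the
min_granularity check at the tick, stretching the observed latency even
further.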
Won't these changes break the fairness between an always-running task and a
short-sleeping one?
>
> Simplify the code by simply ripping out __sched_period and always taking
> fractions of sysctl_sched_latency.
>
> If a high priority task ends up getting a "too small" time slice compared
> to low priority tasks, the vruntime scaling ensures that it will simply
> get scheduled more frequently than low priority tasks.
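
The scaling referred to is the weighted vruntime advance visible in the
calc_delta_fair() context of the diff below; simplified (ignoring the
fixed-point resolution details):

	se->vruntime += delta_exec * NICE_0_LOAD / se->load.weight;

so a nice -20 task (weight 88761) ages its vruntime roughly 87 times more
slowly than a nice 0 task (NICE_0_LOAD = 1024) and gets picked again
correspondingly more often, even if each individual slice is small.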
Won't you increase the number of context switches?
>
> Signed-off-by: Rik van Riel <riel@surriel.com>
> ---
>  kernel/sched/fair.c | 18 +-----------------
>  1 file changed, 1 insertion(+), 17 deletions(-)
>
> diff --git a/kernel/sched/fair.c b/kernel/sched/fair.c
> index 8f8c85c6da9b..74ee22c59d13 100644
> --- a/kernel/sched/fair.c
> +++ b/kernel/sched/fair.c
> @@ -691,22 +691,6 @@ static inline u64 calc_delta_fair(u64 delta, struct sched_entity *se)
>  	return delta;
>  }
>
> -/*
> - * The idea is to set a period in which each task runs once.
> - *
> - * When there are too many tasks (sched_nr_latency) we have to stretch
> - * this period because otherwise the slices get too small.
> - *
> - * p = (nr <= nl) ? l : l*nr/nl
> - */
> -static u64 __sched_period(unsigned long nr_running)
> -{
> -	if (unlikely(nr_running > sched_nr_latency))
> -		return nr_running * sysctl_sched_min_granularity;
> -	else
> -		return sysctl_sched_latency;
> -}
> -
>  /*
>   * We calculate the wall-time slice from the period by taking a part
>   * proportional to the weight.
> @@ -715,7 +699,7 @@ static u64 __sched_period(unsigned long nr_running)
>   */
>  static u64 sched_slice(struct cfs_rq *cfs_rq, struct sched_entity *se)
>  {
> -	u64 slice = __sched_period(cfs_rq->nr_running + !se->on_rq);
> +	u64 slice = sysctl_sched_latency;
>
>  	for_each_sched_entity(se) {
>  		struct load_weight *load;
> --
> 2.20.1
>
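
For the same hypothetical 41-task mix as above, the patched sched_slice()
would return

	nice -20 slice = 6ms * 88761/129721 ~= 4.1ms
	nice 0 slice   = 6ms *  1024/129721 ~= 0.05ms

with the nice 0 slices again stretched to 0.75ms by the tick-side
min_granularity check, and the high priority task now preempted roughly
five times as often as before, which is presumably the source of the extra
context switches asked about above.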