Subject: Re: [RFC PATCH 1/2] sched: Rate limit migrations to 1 per 2ms per task
On 9/6/23 09:57, Mathieu Desnoyers wrote:
> On 9/6/23 04:41, Peter Zijlstra wrote:
[...]
>>
>> Also:
>>
>>> I have noticed that in order to observe the speedup, the workload needs
>>> to keep the CPUs sufficiently busy to cause runqueue lock contention,
>>> but not so busy that they don't go idle.
>>
>> This would suggest inhibiting pulling tasks based on rq statistics,
>> instead of task stats. It doesn't matter when the task migrated last;
>> what matters is that this rq doesn't want new tasks at this point.
>>
>> They're not the same thing.
>
> I suspect we could try something like this then:
>
> When a cpu enters idle state, it could grab a sched_clock() timestamp
> and store it into this_rq()->enter_idle_time. Then, when it exits
> idle and reenters idle again, it could save rq->enter_idle_time to
> rq->prev_enter_idle_time, and sample enter_idle_time again.
>
> When considering the CPU as a target for task migration, if it is
> idle but the delta between sched_clock_cpu(cpu_of(rq)) and that
> prev_enter_idle_time is below a threshold (e.g. a few ms), this means
> the CPU got out of idle and went back to idle pretty quickly, which
> means it's not a good target for pulling tasks for a short while.
>
> I'll try something along these lines and see how it goes.
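
In code, the idea was roughly the following. This is only a sketch, not the
actual patch I tested: the enter_idle_time/prev_enter_idle_time rq fields are
the ones described above, but the helper names, the hook points and the 2 ms
threshold are just for illustration.

/*
 * Hypothetical rq fields (kernel/sched/sched.h):
 *
 *      u64 enter_idle_time;
 *      u64 prev_enter_idle_time;
 */

/* Called when the CPU enters idle (e.g. from do_idle()). */
static inline void rq_note_idle_entry(struct rq *rq)
{
        rq->prev_enter_idle_time = rq->enter_idle_time;
        rq->enter_idle_time = sched_clock_cpu(cpu_of(rq));
}

/*
 * Skip an idle CPU as a migration target if it went back to idle
 * only a short while ago: it got out of idle and re-entered idle
 * quickly, so it is a poor target for pulling tasks right now.
 */
static inline bool rq_recently_left_idle(struct rq *rq)
{
        u64 now = sched_clock_cpu(cpu_of(rq));

        /* "A few ms" threshold; 2 ms picked arbitrarily here. */
        return now - rq->prev_enter_idle_time < 2 * NSEC_PER_MSEC;
}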

I've tried this approach, but failed to observe any kind of speedup.

The effect I'm looking for is to favor keeping a task on its prev
runqueue (i.e. prevent migration), even when there are rq siblings with
a lower load (or which are outright idle), as long as the prev
runqueue's load is not too high.
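
Something like the following check is what I have in mind. Again just a
sketch: where exactly it would plug into the wakeup path (e.g. around
select_task_rq_fair()) and what "not too high a load" means are open
questions; the nr_running cutoff below is only a placeholder.

/*
 * Prefer the task's previous runqueue on wakeup, even if sibling
 * runqueues are less loaded or idle, unless prev is too busy.
 */
static inline bool keep_on_prev_rq(struct task_struct *p, int prev_cpu)
{
        struct rq *prev_rq = cpu_rq(prev_cpu);

        if (!cpumask_test_cpu(prev_cpu, p->cpus_ptr))
                return false;

        /* Placeholder for "prev runqueue load is not too high". */
        return prev_rq->nr_running < 2;
}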

I'll try this approach and let you know how it goes.

Thanks,

Mathieu

--
Mathieu Desnoyers
EfficiOS Inc.
https://www.efficios.com
