 
    From: Vincent Guittot
    Date: Tue, 7 May 2024
    Subject: Re: [PATCH v2] sched: Consolidate cpufreq updates

    On Tue, 7 May 2024 at 13:08, Qais Yousef <qyousef@layalina.io> wrote:
    >
    > On 05/07/24 10:58, Vincent Guittot wrote:
    > > On Mon, 6 May 2024 at 01:31, Qais Yousef <qyousef@layalina.io> wrote:
    > > >
    > > > Improve the interaction with cpufreq governors by making the
    > > > cpufreq_update_util() calls more intentional.
    > > >
    > > > At the moment we send them when load is updated for CFS, bandwidth for
    > > > DL and at enqueue/dequeue for RT. But this can lead to too many updates
    > > > sent in a short period of time and potentially be ignored at a critical
    > > > moment due to the rate_limit_us in schedutil.
    > > >
    > > > For example, on simultaneous task enqueues on a CPU where the 2nd task
    > > > is bigger and requires a higher freq, the call to cpufreq_update_util()
    > > > triggered by the first task will lead to the 2nd request being dropped
    > > > until the tick, or until another CPU in the same policy triggers a freq
    > > > update shortly after.
    > > >
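
    A user-space toy of the race described above (hypothetical numbers;
    schedutil's real check compares the elapsed time against
    freq_update_delay_ns):

        #include <stdbool.h>
        #include <stdio.h>

        #define RATE_LIMIT_US 2000      /* assumed rate_limit_us value */

        static long long last_update_us = -RATE_LIMIT_US;

        /* mimics schedutil dropping requests inside the rate limit window */
        static bool update_freq(long long now_us, int util)
        {
                bool ok = now_us - last_update_us >= RATE_LIMIT_US;

                if (ok)
                        last_update_us = now_us;
                printf("t=%4lldus util=%3d: %s\n", now_us, util,
                       ok ? "applied" : "dropped");
                return ok;
        }

        int main(void)
        {
                update_freq(0, 128);    /* 1st enqueue, small task       */
                update_freq(50, 512);   /* 2nd, bigger task: dropped     */
                update_freq(4000, 512); /* only served around next tick  */
                return 0;
        }
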
    > > > Updates at enqueue for RT are not strictly required, though they do help
    > > > to reduce the delay in switching the frequency and the window in which
    > > > a lower frequency can be observed. But the current logic doesn't (at
    > > > least to my understanding) intentionally try to speed up the request.
    > > >
    > > > To help reduce the amount of cpufreq updates and make them more
    > > > purposeful, consolidate them into these locations:
    > > >
    > > > 1. context_switch()
    > >
    > > I don't see any cpufreq update when switching from idle to CFS. We
    >
    > You mean SCHED_IDLE to SCHED_NORMAL, right? Yes, if we switch policies,
    > even from fair to RT, an update could be missed.

    No I mean going out of idle. On an idle cpu, nothing happens at CFS
    task wakeup and we have to wait for the next tick to apply the new
    freq. This happens both for a short task with a uclamp_min boost and
    for a long running/sleeping task (i.e. with a high util_est).
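
    The kind of hook that would close this gap, as a hypothetical sketch
    (not from the posted patch; cpufreq_update_util() and
    rq->cfs.h_nr_running are the existing kernel names):

        /* hypothetical: in the CFS enqueue path, once the woken task has
         * been accounted; the CPU was idle, so nothing else will request
         * a higher frequency before the next tick */
        if (rq->cfs.h_nr_running == 1)
                cpufreq_update_util(rq, 0);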

    >
    > I'll need to think more about it, but I think adding an update when we switch
    > policies in the syscall looks sufficient to me, if the task is on rq already.
    > Agreed?
    >
    > > have to wait for the next tick to get a freq update whatever the value
    > > of util_est and uclamp
    > >
    > > > 2. task_tick_fair()
    > >
    > > Updating only during the tick is OK with a tick at 1000Hz/1000us, when
    > > we compare it with the 1024us slice of PELT, but what about a 4ms or
    > > even 10ms tick? We can have an increase of almost 200 in 10ms.
    >
    > IMHO the current code can still fail to update frequencies in time with
    > these setups. If there's a single task on the rq, then the only freq
    > update will happen at the tick. So this is an existing problem.

    But any newly enqueued task can trigger a freq update without waiting
    1/4/10ms, whereas we need to wait for the next tick with this patch.
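
    For reference, a quick back-of-envelope check of that figure (a
    user-space toy assuming PELT's 32ms half-life and ignoring the
    discrete 1024us segments):

        #include <math.h>
        #include <stdio.h>

        int main(void)
        {
                const double half_life_ms = 32.0;       /* PELT half-life */
                const int ticks_ms[] = { 1, 4, 10 };

                /* util a task running flat out from zero reaches after
                 * t ms: 1024 * (1 - 2^(-t / 32ms)) */
                for (int i = 0; i < 3; i++) {
                        double t = ticks_ms[i];
                        double u = 1024.0 * (1.0 - pow(2.0, -t / half_life_ms));
                        printf("%2dms tick: util can grow by ~%.0f\n",
                               ticks_ms[i], u);
                }
                return 0;
        }

    That gives growth of roughly 22, 85 and 199 for 1ms, 4ms and 10ms
    ticks respectively, matching the "almost 200 in 10ms" above.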

    >
    > The way I see it is that setting such high TICK values implies low
    > responsiveness by definition. So the person who selects this setup needs
    > to accept that this is their worst-case scenario and be happy with it.
    > And this worst-case scenario does not change.
    >
    > That said, the right way to cater for this is via my other series to
    > remove the magic margins. The DVFS headroom should rely on the TICK value
    > to ensure we run at an adequate frequency until the next worst-case
    > update at the tick, which is sufficient to handle util_est changes. See
    > below for uclamp.
    >
    > Wakeup preemption should cause context switches to happen sooner than a
    > tick too, as we add more tasks to the rq. So I think the worst-case
    > scenario is not really changing that much. In my view, it's better to be
    > consistent about the behavior.
    >
    > >
    > > > 3. {attach, detach}_entity_load_avg()
    > >
    > > At enqueue/dequeue, the util_est will be updated and can make the cpu
    > > utilization quite different, especially with long sleeping tasks. The
    > > same applies to the uclamp_min/max hints of a newly enqueued task. We
    > > might end up waiting 4/10ms depending on the tick period.
    >
    > uclamp_min is a property of the task. And waiting for the task that needs the
    > boost to run is fine IMHO. And I am actually hoping to remove uclamp max()

    But you will delay all CPU work and the running time of the task.

    And what about util_est?
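
    For context on why util_est matters here: the CFS utilization used
    for frequency selection is effectively max(util_avg, util_est), so a
    long sleeper whose util_avg has decayed still asks for a high
    frequency the moment it is enqueued. A deliberately simplified model:

        #include <stdio.h>

        /* user-space toy, not kernel code: util_avg decays while the
         * task sleeps, util_est remembers the pre-sleep demand, and the
         * request at enqueue takes the max of the two */
        static unsigned int freq_request(unsigned int util_avg,
                                         unsigned int util_est)
        {
                return util_avg > util_est ? util_avg : util_est;
        }

        int main(void)
        {
                /* long sleeper: util_avg decayed to 50, util_est kept 600 */
                printf("request at enqueue: %u of 1024\n",
                       freq_request(50, 600));
                return 0;
        }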

    > aggregation in favour of applying boosts/caps when tasks are RUNNING. But more
    > things need to be improved first.
    >
    > We are missing a freq update when uclamp values change, by the way. This
    > is a known bug and I keep forgetting to post a patch to fix it. Let me do
    > this along with updating the freq when the policy changes.
    >
    > Thanks!
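
    For the record, the shape of the fix promised above, as a hypothetical
    sketch (task_on_rq_queued() and cpufreq_update_util() are the existing
    helpers; the exact placement is an assumption):

        /* hypothetical: after a policy or uclamp change in
         * __sched_setscheduler(), with the rq lock held */
        if (task_on_rq_queued(p))
                cpufreq_update_util(task_rq(p), 0);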
