Subject: Re: [PATCH v6 2/2] cpuidle: teo: Introduce util-awareness
Hi Qais,

On Tue, Jul 11, 2023 at 06:58:14PM +0100, Qais Yousef wrote:
> Hi Kajetan
>
> On 01/05/23 14:51, Kajetan Puchalski wrote:
>
> [...]
>
> > @@ -510,9 +598,11 @@ static int teo_enable_device(struct cpuidle_driver *drv,
> > struct cpuidle_device *dev)
> > {
> > struct teo_cpu *cpu_data = per_cpu_ptr(&teo_cpus, dev->cpu);
> > + unsigned long max_capacity = arch_scale_cpu_capacity(dev->cpu);
> > int i;
> >
> > memset(cpu_data, 0, sizeof(*cpu_data));
> > + cpu_data->util_threshold = max_capacity >> UTIL_THRESHOLD_SHIFT;
>
> Given that utilization is invariant, why do we set the threshold based on
> cpu capacity?

Conceptually, the threshold is meant to represent a level at which the
core is considered 'utilized'. I appreciate the definitions here can get
a little fuzzy but I think of it as "generally doing a non-insignificant
amount of work" even if there are currently no tasks scheduled on the core.
This comes in handy in real-world workloads where the core will go
through multiple cycles of busy-idle-busy-idle within each second.
The intention here is to be able to distinguish a scenario of "going
into idle for a few us because of the nature of the workload" from
"going into idle for longer because there is no workload".

I set the threshold based on capacity because I think it conceptually
makes more sense to say "every CPU is considered to be utilized if its
util is above X% of its capacity" than to effectively have a varying
percentage based on the size of the core. 60 util is not that
much work for a 1024-util big core, but it's almost half the capacity of
a little one; using a percentage/shift of capacity lets us account for
that, while using a raw value would not.
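
As a rough illustration (a userspace toy, not the driver code, and the
capacity numbers are made up), with the shift of 6 that I eventually settled
on the threshold simply works out to capacity/64, i.e. ~1.5% of each CPU's
capacity:

/* toy sketch showing what the capacity shift produces */
#include <stdio.h>

#define UTIL_THRESHOLD_SHIFT 6	/* value I ended up with, see below */

int main(void)
{
	/* hypothetical big/mid/little capacities, purely illustrative */
	unsigned long caps[] = { 1024, 512, 160 };

	for (int i = 0; i < 3; i++)
		printf("capacity %4lu -> util_threshold %lu\n",
		       caps[i], caps[i] >> UTIL_THRESHOLD_SHIFT);
	return 0;
}

So a 1024-capacity big core gets a threshold of 16 while a ~160-capacity
little gets 2, but both amount to the same fraction of their respective
capacities, which is the property I wanted.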

There are also some very practical issues, but I'll describe those below.

> I'm not sure if this is a problem, but on little cores this threshold would be
> too low. Given that util is invariant - I wondered if we need to have a single
> threshold for all type of CPUs instead. Have you tried something like that
> while developing the patch?

Yes, the problem there is that it's very difficult to define what "too low"
actually means :)
Namely, do we define 'too low' based on the effects it has on
performance in terms of latency, on the resulting power usage or on the
prediction accuracy? In terms of the prediction accuracy, how do we
weigh the two possible types of mispredictions? I'll just try to explain
my thinking and how I got to my conclusions.

Based on my tests on the types of platforms we both work with, our
state0/wfi is very power efficient to stay in, very power efficient
to enter/exit, and also very fast, so it has very little impact on
latency. On the other hand, state1 is power efficient to *stay in* but
very costly to enter/exit in terms of *both* power and latency. The
effect this has is that there are many cases where going through a cycle
of busy-state1-busy-state1-busy and so on will actually use up more
power than if you had only kept the core in wfi.

I had some tests done with the governor effectively doing "return 0"
in state selection, i.e. never using state1, and the results were still
pretty good, only slightly worse than e.g. menu. The problem there was
that not using state1 on the big cores would not leave them time to cool
down, so we'd burn through the thermal budget too quickly and then tank
the performance.

I don't have the numbers on hand but even completely disabling state1 on
the entire little cluster will work perfectly fine - your latency for
tasks that run on littles will improve and the thermal budget/power
won't take a particularly noticeable hit because of how small they are
in the first place.

This is why the governor is intentionally skewed towards shallower
states: they just work better most of the time. If you try to skew it
the other way, the results come out much worse, because even a
relatively small number of mispredicted state1 entries can completely
nullify any benefits that selecting state1 could bring.

The percentage approach does make the threshold for littles pretty small,
but as described above that's perfectly fine - you could say it's a feature,
not a bug :) If we tried setting a fixed threshold across all CPUs, we'd
need to pick one high enough for the big cores, which would end up being too
high for the littles, leading to excessive state1 entries and all the issues
I've just described. TLDR: there are just more downsides on the other side.

During development I just had a sysctl to set the threshold shift and, iirc,
I tested values from 3 to 10-12, eventually arriving at 6 as the one
with the best results across different metrics and benchmarks. If you're
backporting the patch somewhere and have a specific platform, feel free
to test it with different shift values; it's possible that different
platforms will behave differently with this. I doubt there's any
appetite to make the shift tweakable at runtime rather than a
compile-time constant, but if you'd like to push for that I'm happy to
sign off on it; it would work just as well as it does now.
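
Just for reference, if anyone did want to experiment with that, a sketch of
how it could look (hypothetical, not part of the posted patch) would be a
module parameter in place of the #define:

	/* hypothetical sketch only: expose the shift via /sys/module so it
	 * can be tuned per platform without rebuilding; a new value would
	 * take effect the next time the governor is enabled on a CPU
	 */
	static unsigned int util_threshold_shift = UTIL_THRESHOLD_SHIFT;
	module_param(util_threshold_shift, uint, 0644);

	/* ...and in teo_enable_device(): */
	cpu_data->util_threshold = max_capacity >> util_threshold_shift;

Either way, 6 would stay the default since that's what tested best for me.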


> Thanks
>
> --
> Qais Yousef
