Subject: Re: [PATCH v6 2/2] cpuidle: teo: Introduce util-awareness
On 07/18/23 13:02, Kajetan Puchalski wrote:
> Hi Qais,
>
> On Tue, Jul 11, 2023 at 06:58:14PM +0100, Qais Yousef wrote:
> > Hi Kajetan
> >
> > On 01/05/23 14:51, Kajetan Puchalski wrote:
> >
> > [...]
> >
> > > @@ -510,9 +598,11 @@ static int teo_enable_device(struct cpuidle_driver *drv,
> > >  				 struct cpuidle_device *dev)
> > >  {
> > >  	struct teo_cpu *cpu_data = per_cpu_ptr(&teo_cpus, dev->cpu);
> > > +	unsigned long max_capacity = arch_scale_cpu_capacity(dev->cpu);
> > >  	int i;
> > >
> > >  	memset(cpu_data, 0, sizeof(*cpu_data));
> > > +	cpu_data->util_threshold = max_capacity >> UTIL_THRESHOLD_SHIFT;
> >
> > Given that utilization is invariant, why do we set the threshold based on
> > cpu capacity?
>
> Conceptually, the threshold is meant to represent a level at which the
> core is considered 'utilized'. I appreciate the definitions here can get
> a little fuzzy but I think of it as "generally doing a non-insignificant
> amount of work" even if there are currently no tasks scheduled on the core.
> This comes in handy in real-world workloads where the core will go
> through multiple cycles of busy-idle-busy-idle within each second.
> The intention here is to be able to distinguish a scenario of "going
> into idle for a few us because of the nature of the workload" from
> "going into idle for longer because there is no workload".
>
> I set the threshold based on capacity because I think conceptually it
> makes more sense to say "every CPU is considered to be utilized if its
> util is above X% of its capacity" than to effectively have a varying
> percentage based on the size of the core. 60 util is not that much
> work for a 1024-util big core, but it's almost half the capacity of
> a little one; using a percentage/shift on capacity lets us account for
> that, while using a raw value would not.

Thanks for the explanation.

I did try the busy perspective, but I still view it as: 60 util means we've
been running on average for X ms, which I think matters more than how much
work this represents for the big core. The way I look at it is that we still
have a few ms worth of runtime on the CPU, so it's not worth going to a deeper
idle state yet.

I can appreciate that you think this percentage of runtime should be lower
for smaller cores. My doubt (which again is not backed by a real problem - so
I'm not questioning, just trying to understand :)) is whether, if this
threshold becomes too low, it's still better than letting the usual TEO logic
operate. The series in its current shape is great and already offers a good
improvement, no doubt :)

By the way, by default the big cores will get a threshold of 16 and the
littles will get a threshold of around 2. I think the latter translates to
a few hundred us of activity (I haven't done a proper measurement to be
honest, so this could be off, but I don't think by much).
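
To put rough numbers on that (the shift of 6 is the value the patch ended up
with; the little-core capacity of 160 below is just an illustrative figure,
not taken from any particular SoC):

#include <stdio.h>

#define UTIL_THRESHOLD_SHIFT	6	/* value the patch settled on */

int main(void)
{
	unsigned long big_cap = 1024;	/* biggest CPU, by definition */
	unsigned long little_cap = 160;	/* made-up little-core capacity */

	/* Same derivation as teo_enable_device(): capacity >> shift */
	printf("big threshold:    %lu\n", big_cap >> UTIL_THRESHOLD_SHIFT);	/* 16 */
	printf("little threshold: %lu\n", little_cap >> UTIL_THRESHOLD_SHIFT);	/* 2 */

	return 0;
}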

>
> There's also very practical issues but I'll describe those below.
>
> > I'm not sure if this is a problem, but on little cores this threshold would be
> > too low. Given that util is invariant, I wondered if we need to have a single
> > threshold for all types of CPUs instead. Have you tried something like that
> > while developing the patch?
>
> Yes, the problem there is that it's very difficult to define what "too low"
> actually means :)

Target residency, maybe?

> Namely, do we define 'too low' based on the effects it has on
> performance in terms of latency, on the resulting power usage or on the
> prediction accuracy? In terms of the prediction accuracy, how do we
> weigh the two possible types of mispredictions? I'll just try to explain
> my thinking and how I got to my conclusions.
>
> Based on my tests, on the types of platforms we both work with, our
> state0/wfi is very power efficient to stay in, very power efficient
> to enter/exit and also very fast, so it has very little impact on
> latency. On the other hand, state1 is power efficient to *stay in* but
> very costly to enter/exit in terms of *both* power and latency. The
> effect this has is that there are many cases where going through a cycle
> of busy-state1-busy-state1-busy and so on will actually use up more
> power than if you only kept the core in wfi.
>
> I had some tests done with effectively making the governor do "return 0"
> in state selection, never using state1 at all, and the results were still
> pretty good, only slightly worse than e.g. menu. The problem there was
> that not using state1 on the big cores would not leave them time to cool
> down, so we'd burn through the thermal budget too quickly and then tank
> the performance.
>
> I don't have the numbers on hand but even completely disabling state1 on
> the entire little cluster will work perfectly fine - your latency for
> tasks that run on littles will improve and the thermal budget/power
> won't take a particularly noticeable hit because of how small they are
> in the first place.
>
> This is why the governor is intentionally skewed towards shallower
> states - they just work better most of the time. If you try to skew it
> the other way, the results just come out much worse, because even a
> relatively small number of mispredicted state1 entries can completely
> nullify any benefits that selecting state1 could bring.
>
> The percentage approach does make the threshold for littles pretty small,
> but as described above that's perfectly fine - you could say a feature, not
> a bug :) If we tried setting a fixed one across all CPUs then we'd need to

I didn't think it was a bug, but it seemed too low, hence the question.
I actually thought a single value would be enough for all CPUs, since util is
invariant and the tipping point beyond which TEO's normal predictions should
work ought to be the same.

Thanks for the detailed explanation :)
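
For reference, here's a minimal sketch of the kind of check we're discussing -
my own paraphrase rather than the actual hunk from the patch, using the
driver's struct teo_cpu and sched_cpu_util():

#include <linux/cpuidle.h>
#include <linux/sched.h>	/* sched_cpu_util() */

/*
 * Sketch only: if the CPU's (invariant) utilization is above its threshold,
 * consider it "utilized" and bias the selection towards the shallowest
 * state instead of whatever the usual TEO metrics picked. The real code
 * would also have to check that state 0 is actually usable (e.g. not
 * disabled) before overriding the candidate.
 */
static int teo_apply_util_bias(struct teo_cpu *cpu_data, int cpu, int idx)
{
	if (idx > 0 && sched_cpu_util(cpu) > cpu_data->util_threshold)
		idx = 0;	/* prefer the shallow state (WFI) */

	return idx;
}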

> pick one high enough for the big cores, which would end up being too high
> for the littles and lead to excessive state1 entries and all the issues
> I've just described. TLDR: there are just more downsides on the other side.

This is an artifact of the shifting algorithm you used to derive the
threshold; it's not a real restriction. But I appreciate that the simple
approach proved to be good enough, and I have nothing against it.

>
> In development I just had a sysctl to set the threshold shift and iirc I
> tested values from 3 to 10-12, eventually arriving at 6 as the one with
> the best results across different metrics and benchmarks. If you're
> backporting the patch somewhere and have a specific platform, feel free
> to test it with different shift values; it's possible that different
> platforms will behave differently with this. I doubt there's any
> appetite to make the shift tweakable at runtime rather than a
> compile-time constant, but if you'd like to push for that I'm happy to
> sign off on it - it would work just as well as it does now.
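
For anyone backporting this and wanting to experiment with the shift locally
as suggested above, a throwaway knob could look roughly like the sketch
below; it's not part of the patch and the parameter name is made up:

#include <linux/moduleparam.h>

/* Local-experiment knob, not in the patch; the name is made up. */
static unsigned int util_threshold_shift = 6;
module_param(util_threshold_shift, uint, 0444);

/*
 * ...and in teo_enable_device(), use the variable instead of the
 * compile-time constant:
 *
 *	cpu_data->util_threshold = max_capacity >> util_threshold_shift;
 */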

These patches are in GKI, so we'll see if there are any uncaught problems,
I guess :)

No appetite for a knob, but the very low value for the littles did strike me
and I thought I'd better at least ask. Today's littles are too tiny for their
own good, and it seemed the threshold could end up being too aggressive,
especially in a low-activity state. You're effectively saying that if we have
a few hundred us of activity, the normal timer-based TEO predictions are no
good and it's better to stay shallower anyway.

Note that due to NOHZ, if we go idle for an extended period, the util value
might not decay for a while and we could miss some opportunities. In
particular, when the CPU next wakes up, it's enough for that wakeup to run
for a few hundred us to block a deeper state before going back to sleep for
another extended period.
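
As a back-of-the-envelope illustration, assuming the default 32ms PELT
half-life and that the blocked util is actually being decayed (which, as
above, NOHZ can delay further):

#include <math.h>
#include <stdio.h>

int main(void)
{
	const double halflife_ms = 32.0;	/* default PELT half-life */
	const double util = 60.0;		/* example blocked util from above */
	const int thresholds[] = { 16, 2 };	/* big / little thresholds from above */

	for (int i = 0; i < 2; i++) {
		/* time for util to halve enough times to drop to the threshold */
		double t = halflife_ms * log2(util / thresholds[i]);

		printf("util %.0f decays below %d after ~%.0f ms\n",
		       util, thresholds[i], t);
	}
	return 0;
}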

But we shall see. I got the answer I was looking for, for now.


Thanks for the help!

--
Qais Yousef
