Date:	17 Jul 2023
From:	Qais Yousef
Subject:	Re: [PATCH v6 2/2] cpuidle: teo: Introduce util-awareness
+CC Vincent and Peter

On 07/17/23 14:47, Lukasz Luba wrote:
> Hi Qais,
>
> The rule is 'one size doesn't fit all', please see below.
>
> On 7/11/23 18:58, Qais Yousef wrote:
> > Hi Kajetan
> >
> > On 01/05/23 14:51, Kajetan Puchalski wrote:
> >
> > [...]
> >
> > > @@ -510,9 +598,11 @@ static int teo_enable_device(struct cpuidle_driver *drv,
> > >  			     struct cpuidle_device *dev)
> > >  {
> > >  	struct teo_cpu *cpu_data = per_cpu_ptr(&teo_cpus, dev->cpu);
> > > +	unsigned long max_capacity = arch_scale_cpu_capacity(dev->cpu);
> > >  	int i;
> > >  	memset(cpu_data, 0, sizeof(*cpu_data));
> > > +	cpu_data->util_threshold = max_capacity >> UTIL_THRESHOLD_SHIFT;
> >
> > Given that utilization is invariant, why do we set the threshold based on
> > cpu capacity?
>
>
> To treat CPUs differently, not with the same policy.
>
>
> >
> > I'm not sure if this is a problem, but on little cores this threshold
> > would be too low. Given that util is invariant, I wondered whether we need
> > a single threshold for all types of CPU instead. Have you tried something
> > like that
>
> A single threshold for all CPUs might be biased towards some CPUs. Let's
> pick the value 15 - which was tested to work really well in benchmarks
> for the big CPUs. On the other hand, when you set that value for the little
> CPUs, with max_capacity = 124, then you have a 15/124 ~= 12% threshold.
> That means you prefer to enter a deeper idle state ~9x more readily (at max
> freq). What if the Little's freq is set to e.g. < ~20% fmax, which
> corresponds to capacity < ~25? Let's try to simulate such a scenario.
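
(I read the ~9x as the ratio of the two relative thresholds:
(15/124) / (15/1024) = 1024/124 ~= 8.3, i.e. the big-to-little capacity
ratio.)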

Hmm, what I'm struggling with is that PELT is invariant. So the time it takes
to rise and decay to a threshold of 15 should be the same for all CPUs, no?
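
FWIW, a quick back-of-envelope of that claim (my own userspace sketch, not
kernel code; it assumes the standard 32 ms PELT half-life and that util is
fully invariant):

#include <math.h>
#include <stdio.h>

#define PELT_HALFLIFE_MS	32.0

int main(void)
{
        double thr = 15.0;      /* the example threshold above */
        double start = 100.0;   /* util before the CPU went idle */

        /* decay while fully idle: util(t) = start * 2^(-t/halflife) */
        printf("decay %.0f -> %.0f: ~%.0f ms of pure idle\n",
               start, thr, PELT_HALFLIFE_MS * log2(start / thr));

        /* ramp of a 100% task: util(t) = 1024 * (1 - 2^(-t/halflife)) */
        printf("ramp 0 -> %.0f: ~%.2f ms of running\n",
               thr, -PELT_HALFLIFE_MS * log2(1.0 - thr / 1024.0));
        return 0;
}

That gives ~88 ms to decay from 100 down to 15, and ~0.68 ms for a busy task
to climb past 15 - the same numbers whichever CPU it runs on, which is
exactly my point.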

>
> In a situation where we have utilization 14 on a Little CPU, the CPU
> capacity (effectively frequency) voting based on utilization would be
> 1.2 * 14 = ~17, so let's pick the OPP corresponding to capacity 17.
> In such a condition the little CPU would run the 14-util periodic task for
> 14/17 = ~82% of wall-clock time. That's a lot, and not suited for
> entering a deeper idle state on that CPU, is it?

Yes, runtime is stretched. But we counter this at the utilization level by
making PELT invariant. I thought that any CPU in the system would now take the
same amount of time to ramp up and decay to the same util level. No?

But maybe what you're saying is that we don't need to take the invariance into
account?

My concern (not backed by a real problem yet) is that the threshold is near 0,
and since PELT is invariant, the time needed to gain a few points is constant
irrespective of CPU/capacity/freq. This means the little CPUs have to be
absolutely idle, with almost no activity at all, IIUC.
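
To put rough numbers on it (illustrative only: I'm assuming
UTIL_THRESHOLD_SHIFT is 6 as in this version of the patch, a Pixel-6-ish
little capacity of 160, and the same 32 ms half-life as above):

#include <math.h>
#include <stdio.h>

int main(void)
{
        double start = 100.0;                   /* util before idling */
        unsigned long caps[] = { 1024, 160 };   /* big, little */

        for (int i = 0; i < 2; i++) {
                /* the patch's formula: capacity >> UTIL_THRESHOLD_SHIFT */
                unsigned long thr = caps[i] >> 6;

                printf("capacity %4lu: threshold %2lu -> ~%.0f ms pure idle\n",
                       caps[i], thr, 32.0 * log2(start / thr));
        }
        return 0;
}

So the big core gets below its threshold of 16 after ~85 ms of complete
idleness, while the little needs ~181 ms to get below its threshold of 2 -
roughly twice as long with almost no activity at all.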

>
> Apart from that, the little CPUs are tiny in terms of silicon area
> and are less leaky in WFI than big cores. Therefore, they don't need
> aggressive entries into deeper idle states. At the same time, they
> are often used for serving interrupts, where latency is an important
> factor.

On Pixel 6 this will translate to a util_threshold of 2, which looks too low
to me. Can't TEO do a good job before we reach such an extremely low level of
inactivity?


Thanks

--
Qais Yousef

>
> > while developing the patch?
>
> We have tried different threshold values expressed as a %, but applied the
> same % to all CPUs, not per-cluster. The reason was to treat those CPUs
> differently, as described above.
>
> Regards,
> Lukasz
