Date: 2023-01-13
Subject: Re: [PATCH v2] sched/fair: unlink misfit task from cpu overutilized
> > I was testing this on a Pixel 6 with a 5.18 android-mainline kernel with

> Do you have more details to share on your setup?
> The Android kernel has some hacks on top of mainline. Do you use any?
> Also, perf and power can be largely impacted by the cgroup
> configuration. Do you have details on your setup?

The kernel I use has all the vendor hooks and hacks switched off to
keep it as close to mainline as possible. Unfortunately, 5.18 was the
last mainline kernel that worked on this device due to some driver
issues, so we just backport mainline scheduling patches as they come
out to keep at least the scheduler itself up to date.

> I just sent a v3 which fixes a condition. I wonder if this could have
> an impact on the results, both perf and power.

I don't think it'll fix the GB5 score side of it, as that's clearly
related to overutilization, while the condition changed in v3 is inside
the non-OU section of feec(). I'll still test v3 over the weekend
if I have some free time.
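
For context, a toy standalone sketch of the shape of feec() (names and
values are illustrative, not the mainline code) showing why a change in
the non-OU section can't matter while the system stays overutilized:

#include <stdbool.h>
#include <stdio.h>

static bool overutilized;	/* stands in for rd->overutilized */

/* Rough shape of find_energy_efficient_cpu() ("feec"): when the root
 * domain is overutilized it bails out before the energy-aware part,
 * so a benchmark that keeps the system saturated never reaches the
 * non-OU section that v3 touches. */
static int find_energy_efficient_cpu(void)
{
	if (overutilized)
		return -1;	/* EAS skipped, regular wake-up path */

	/* non-OU section: per-perf-domain energy estimates and the
	 * uclamp-aware fits checks (where v3's changed condition
	 * lives) would run here. */
	return 0;		/* pretend CPU 0 was picked */
}

int main(void)
{
	overutilized = true;
	printf("OU:  feec() -> %d\n", find_energy_efficient_cpu());
	overutilized = false;
	printf("!OU: feec() -> %d\n", find_energy_efficient_cpu());
	return 0;
}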

The power usage issue was already introduced in the 'uclamp fits
capacity' patchset that has been merged, so I doubt this change will be
enough to account for it, but I'll give it a try regardless.
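
For anyone following along, here's the gist of that merged check as I
understand it; a simplified standalone sketch, not the exact mainline
util_fits_cpu() (thermal pressure and the max-capacity corner cases
are dropped):

#include <stdbool.h>
#include <stdio.h>

/* ~25% headroom: util must stay under ~80% of capacity to "fit". */
#define fits_capacity(cap, max)	((cap) * 1280 < (max) * 1024)

/* uclamp_max caps what a task may ask for, so a capped task can be
 * forced to fit a CPU its raw util would overflow; uclamp_min is a
 * floor, so a boosted task stops fitting CPUs smaller than the floor. */
static bool util_fits_cpu(unsigned long util, unsigned long uclamp_min,
			  unsigned long uclamp_max, unsigned long capacity)
{
	bool fits = fits_capacity(util, capacity);

	fits = fits || uclamp_max <= capacity;

	if (uclamp_min > uclamp_max)
		uclamp_min = uclamp_max;
	if (util < uclamp_min)
		fits = fits && uclamp_min <= capacity;

	return fits;
}

int main(void)
{
	/* Hypothetical values on the 0..1024 capacity scale: a heavy
	 * task (util 800) capped to uclamp_max=300 now "fits" a CPU of
	 * capacity 400 that its raw util would overflow. */
	printf("capped task fits: %d\n", util_fits_cpu(800, 0, 300, 400));
	printf("raw util fits:    %d\n", (int)fits_capacity(800, 400));
	return 0;
}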

> > The most likely cause for the regression seen above is the decrease
> > in the amount of time spent overutilized with these patches.
> > Maximising overutilization for GB5 is the desired outcome, as the
> > benchmark keeps either 1 core or all the cores completely saturated
> > for almost its entire duration, so EAS cannot be effective. These
> > patches have the opposite of the desired effect in this area.
> >
> > +--------------+---------+------------+------------+
> > | kernel       |    time | total_time | percentage |
> > +--------------+---------+------------+------------+
> > | baseline     | 121.979 |    181.065 |      67.46 |
> > | baseline_ufc | 120.355 |    184.255 |      65.32 |
> > | ufc_patched  |  60.715 |    196.135 |      30.98 | <-- !!!
> > +--------------+---------+------------+------------+
>
> I'm not surprised, because some use cases which were not overutilized
> were wrongly triggered as overutilized, switching back to performance
> mode. You might have to tune the uclamp value.

But they'd be wrongly triggered with the 'baseline_ufc' variant, not
with the 'baseline' variant. The baseline here predates taking uclamp
into account in cpu_overutilized(); all cpu_overutilized() did on that
kernel was compare util against capacity.
Meaning the 'real' overutilized time would be in the ~67% ballpark,
while the patch incorrectly makes it not trigger for more than half of
that time. I'm not sure we can tweak uclamp enough to fix that.
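
(To make that concrete, a minimal standalone sketch of the baseline
check, with hypothetical values on the usual 0..1024 capacity scale:)

#include <stdbool.h>
#include <stdio.h>

/* The scheduler's fits check: util must stay under ~80% of capacity,
 * i.e. the 1280/1024 factor leaves 1.25x headroom. */
#define fits_capacity(cap, max)	((cap) * 1280 < (max) * 1024)

/* Pre-uclamp ('baseline') overutilization check: nothing but raw
 * utilization vs CPU capacity. */
static bool cpu_overutilized(unsigned long util, unsigned long capacity)
{
	return !fits_capacity(util, capacity);
}

int main(void)
{
	printf("%d\n", cpu_overutilized(900, 1024));	/* 1: 900 > ~819 */
	printf("%d\n", cpu_overutilized(700, 1024));	/* 0: fits with headroom */
	return 0;
}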

> >
> > 2. Jankbench (power usage regression)
> >
> > +--------+---------------+-------------------+-------+-----------+
> > | metric | variable      | kernel            | value | perc_diff |
> > +--------+---------------+-------------------+-------+-----------+
> > | gmean  | mean_duration | baseline_60hz     |  14.6 |      0.0% |
> > | gmean  | mean_duration | baseline_ufc_60hz |  15.2 |     3.83% |
> > | gmean  | mean_duration | ufc_patched_60hz  |  14.0 |    -4.12% |
> > +--------+---------------+-------------------+-------+-----------+
> >
> > +--------+-----------+-------------------+-------+-----------+
> > | metric | variable  | kernel            | value | perc_diff |
> > +--------+-----------+-------------------+-------+-----------+
> > | gmean  | jank_perc | baseline_60hz     |   1.9 |      0.0% |
> > | gmean  | jank_perc | baseline_ufc_60hz |   2.2 |    15.39% |
> > | gmean  | jank_perc | ufc_patched_60hz  |   2.0 |     3.61% |
> > +--------+-----------+-------------------+-------+-----------+
> >
> > +-------------+--------+-------------------+-------+-----------+
> > | chan_name   | metric | kernel            | value | perc_diff |
> > +-------------+--------+-------------------+-------+-----------+
> > | total_power | gmean  | baseline_60hz     | 135.9 |      0.0% |
> > | total_power | gmean  | baseline_ufc_60hz | 155.7 |    14.61% | <-- !!!
> > | total_power | gmean  | ufc_patched_60hz  | 157.1 |    15.63% | <-- !!!
> > +-------------+--------+-------------------+-------+-----------+
> >
> > With these patches, while running Jankbench, we use up ~15% more
> > power just to achieve roughly the same results. I'm not sure exactly
> > where this issue is coming from, but all the results above are very
> > consistent across different runs.
