From: Anna-Maria Behnsen <anna-maria@linutronix.de>
Subject: Re: [linus:master] [timers] 7ee9887703: stress-ng.uprobe.ops_per_sec -17.1% regression

Anna-Maria Behnsen <anna-maria@linutronix.de> writes:

> Hi,
>
> Lukasz Luba <lukasz.luba@arm.com> writes:
>> On 4/26/24 17:03, Rafael J. Wysocki wrote:
>>> On Thu, Apr 25, 2024 at 10:23 AM Anna-Maria Behnsen
>>> <anna-maria@linutronix.de> wrote:
>
> [...]
>
>>>> So my assumption here is that cpuidle governors assume a deeper idle
>>>> state could be chosen, and that selecting the deeper idle state adds
>>>> overhead when returning from idle. But I have to note here that I'm
>>>> still not familiar with cpuidle internals... So I would be happy about
>>>> some hints on how I can debug/trace cpuidle internals to verify or
>>>> falsify this assumption.
>>>
>>> You can look at the "usage" and "time" numbers for idle states in
>>>
>>> /sys/devices/system/cpu/cpu*/cpuidle/state*/
>>>
>>> The "usage" value is the number of times the governor has selected the
>>> given state, and the "time" is the total idle time after requesting the
>>> given state (i.e. the sum of the time intervals between the governor
>>> selecting that state and the subsequent wakeup).
>>>
>>> If "usage" decreases for deeper (higher number) idle states relative
>>> to its value for shallower (lower number) idle states after applying
>>> the test patch, that will indicate that the theory is valid.
>>
>> I agree with Rafael here, this is the first thing to check: those
>> statistics. Then, when you see a difference in those stats between the
>> baseline and the patched version, we can analyze the internal governor
>> decisions with the help of tracing.
>>
>> Please also share how many idle states there are on those testing
>> platforms.
>
> Thanks Rafael and Lukasz, for the feedback here!
>
> So I simply summed the state usage values over all 112 CPUs and
> calculated the diff before and after the stress-ng call. The values are
> from a single run.
>
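
For reference, a minimal sketch of how such per-CPU counters can be
summed per state and diffed around a test run. This is illustrative
only (not the script that produced the numbers below) and assumes the
sysfs layout Rafael describes above:

#!/usr/bin/env python3
# Sum the per-CPU cpuidle "usage" and "time" counters per state, take
# a snapshot before and after the test run and print the difference.
from collections import defaultdict
from glob import glob

def snapshot():
    totals = defaultdict(lambda: {"usage": 0, "time": 0})
    for state_dir in glob("/sys/devices/system/cpu/cpu*/cpuidle/state*"):
        state = state_dir.rsplit("/", 1)[1]        # e.g. "state2"
        for field in ("usage", "time"):
            with open(f"{state_dir}/{field}") as f:
                totals[state][field] += int(f.read())
    return totals

before = snapshot()
input("Run the stress-ng test now, then press enter...")
after = snapshot()

for state in sorted(after):
    print(state,
          "usage:", after[state]["usage"] - before[state]["usage"],
          "time:", after[state]["time"] - before[state]["time"])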

Now here are the values for both the usage and the time of the states,
because I forgot to also track the time in the first run:

USAGE          good         bad   bad+patch
--------------------------------------------
state0          115         137         234
state1       450680      354689      420904
state2      3092092     2687410     3169438


TIME (us)      good         bad   bad+patch
--------------------------------------------
state0         9347        9683       18378
state1    626029557   562678907   593350108
state2   6130557768  6201518541  6150403441


> good: 57e95a5c4117 ("timers: Introduce function to check timer base
> is_idle flag")
> bad: v6.9-rc4
> bad+patch: v6.9-rc4 + patch
>
> I chose v6.9-rc4 for "bad" to make sure all the timer pull model fixes
> are applied.
>
> If I got Rafael right, the usage values indicate that my theory is not
> right...

... but looking at the time values: the CPUs enter state2 less often,
yet they spend more total time in it.
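
(As a rough sanity check, dividing total time by usage gives the
average residency per state2 entry, assuming the sysfs "time" values
are in microseconds: ~1983 us for good, ~2308 us for bad and ~1941 us
for bad+patch. So on the bad kernel each state2 entry lasts noticeably
longer, even though state2 is entered less often.)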

Thanks,

Anna-Maria
