Subject: Re: [PATCH v4 4/4] arm64: dts: qcom: sdm845: Add CPU BWMON

On 23/06/2022 08:48, Rajendra Nayak wrote:
>>>> diff --git a/arch/arm64/boot/dts/qcom/sdm845.dtsi b/arch/arm64/boot/dts/qcom/sdm845.dtsi
>>>> index 83e8b63f0910..adffb9c70566 100644
>>>> --- a/arch/arm64/boot/dts/qcom/sdm845.dtsi
>>>> +++ b/arch/arm64/boot/dts/qcom/sdm845.dtsi
>>>> @@ -2026,6 +2026,60 @@ llcc: system-cache-controller@1100000 {
>>>>  			interrupts = <GIC_SPI 582 IRQ_TYPE_LEVEL_HIGH>;
>>>>  		};
>>>>
>>>> +		pmu@1436400 {
>>>> +			compatible = "qcom,sdm845-cpu-bwmon";
>>>> +			reg = <0 0x01436400 0 0x600>;
>>>> +
>>>> +			interrupts = <GIC_SPI 581 IRQ_TYPE_LEVEL_HIGH>;
>>>> +
>>>> +			interconnects = <&gladiator_noc MASTER_APPSS_PROC 3 &mem_noc SLAVE_EBI1 3>,
>>>> +					<&osm_l3 MASTER_OSM_L3_APPS &osm_l3 SLAVE_OSM_L3>;
>>>> +			interconnect-names = "ddr", "l3c";
>>>
>>> Is this the pmu/bwmon instance between the cpu and caches or the one between the caches and DDR?
>>
>> To my understanding this is the one between CPU and caches.
>
> Ok, but then because the OPP table lists the DDR bw first and Cache bw second, isn't the driver
> ending up comparing the bw values thrown by the pmu against the DDR bw instead of the Cache BW?

I double checked now and you're right.
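
To make the ordering issue concrete: with two interconnect paths, the
bandwidth OPP table carries one opp-peak-kBps value per path, in the
same order as the "interconnects" property, so with "ddr" listed first
the measured traffic gets compared against the DDR column. A rough
sketch (node label and values below are made up, only the ordering
matters):

	/* hypothetical bandwidth OPP table for the node above */
	cpu_bwmon_opp_table: opp-table {
		compatible = "operating-points-v2";

		opp-0 {
			/* <ddr kBps  l3c kBps> - DDR bandwidth first */
			opp-peak-kBps = <800000 4800000>;
		};

		opp-1 {
			opp-peak-kBps = <1804000 9216000>;
		};
	};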

> At least with my testing on sc7280 I found this to mess things up and I was always ending up at
> higher OPPs even while the system was completely idle. Comparing the values against the Cache bw
> fixed it. (sc7280 also has a bwmon4 instance between the CPU and caches and a bwmon5 between the cache
> and DDR)

In my case it exposes a different issue: under-performance. Somehow the
bwmon does not report bandwidth high enough to vote for the higher
bandwidth levels.

After removing the DDR interconnect path and its bandwidth OPP values,
I get the following for:
  sysbench --threads=8 --time=60 --memory-total-size=20T --test=memory \
    --memory-block-size=4M run

1. Vanilla: 29768 MB/s
2. Vanilla without CPU votes: 8728 MB/s
3. Previous bwmon (voting too high): 32007 MB/s
4. Fixed bwmon: 24911 MB/s
Bwmon does not vote for maximum L3 speed:
bwmon report 9408 MB/s (thresholds set: <9216000 15052801>)
osm l3 aggregate 14355 MBps -> 897 MHz, level 7, bw 14355 MBps

Maybe that's just a problem of a missing governor which would round the
bandwidth vote up or anticipate higher needs.
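
If it helps to put numbers on it: the two thresholds above look like
adjacent bandwidth OPP levels in kBps, and the reported 9408 MB/s sits
just above the lower one, so the resulting vote stays below the top
level. Roughly (illustrative entries, not the real sdm845 L3 table):

	opp-0 {
		opp-peak-kBps = <9216000>;	/* lower threshold from the log */
	};
	opp-1 {
		opp-peak-kBps = <15052800>;	/* upper threshold from the log, rounded */
	};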

>>> Depending on which one it is, shouldn't we just be scaling either one and not both the interconnect paths?
>>
>> The interconnects are the same as the ones used for the CPU nodes, therefore if
>> we want to scale both when scaling the CPU, then we also want to scale both
>> when seeing traffic between CPU and cache.
>
> Well, they were both associated with the CPU node because with no other input to decide on _when_
> to scale the caches and DDR, we just put in a mapping table which simply mapped a CPU freq to an L3 _and_
> DDR freq. So with just one input (CPU freq) we decided what both the L3 freq and DDR freq should be.
>
> Now with 2 PMUs we have 2 inputs, so we can individually scale the L3 based on the cache PMU
> counters and the DDR based on the DDR PMU counters, no?
>
> Since you said you have plans to add the other PMU support as well (bwmon5 between the cache and DDR),
> how else would you have the OPP table associated with that PMU instance? Would you again have both the
> L3 and DDR scale based on the inputs from that bwmon too?

Good point, thanks for sharing. I think you're right. I'll keep only the
l3c interconnect path.
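
That is, keep only the L3 path in the node, along these lines (untested
sketch, rest of the node as in the patch):

	pmu@1436400 {
		compatible = "qcom,sdm845-cpu-bwmon";
		reg = <0 0x01436400 0 0x600>;

		interrupts = <GIC_SPI 581 IRQ_TYPE_LEVEL_HIGH>;

		interconnects = <&osm_l3 MASTER_OSM_L3_APPS &osm_l3 SLAVE_OSM_L3>;
		interconnect-names = "l3c";
	};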


Best regards,
Krzysztof
