Subject: Re: [PATCH 2/3] cpufreq: CPPC: Keep the target core awake when reading its cpufreq rate
From: Zeng Heng

On 2023/10/25 18:54, Mark Rutland wrote:
> [adding Ionela]
>
> On Wed, Oct 25, 2023 at 05:38:46PM +0800, Zeng Heng wrote:
>> As the Arm AMU documentation says, all counters are subject to any changes
>> in clock frequency, including clock stopping caused by the WFI and WFE
>> instructions.
>>
>> Therefore, use smp_call_on_cpu() to make the target CPU read its own AMU
>> counters, which ensures the counters are working properly while the
>> cpuidle (c-state) feature is enabled.
> IIUC there's a pretty deliberate split, with all the actual reading of the AMU
> counters living in arch/arm64/kernel/topology.c, and the driver code being
> (relatively) generic.
>
> We already have code in arch/arm64/kernel/topology.c to read counters on a
> specific CPU; why can't we reuse that (and avoid exporting cpu_has_amu_feat())?
>
> Mark.

In this scenario, neither topology.c nor cppc_acpi.c provides an API to keep
the AMU counters active for the whole sampling period; the CPU may enter idle
(WFI) between the two reads, stopping the counters. Calling cpc_read_ffh()
only at the start and end of the sampling period is therefore not enough.

However, I can propose using the cpc_ffh_supported() function to replace
cpu_has_amu_feat() in v2, if you think this patch set is still valuable.
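
For illustration, a minimal sketch of how that v2 change might look in
cppc_cpufreq_get_rate() (hypothetical and untested; note that
cpc_ffh_supported() takes no CPU argument, unlike cpu_has_amu_feat()):

	/*
	 * Hypothetical v2 fragment: gate the cross-CPU call on the
	 * generic cpc_ffh_supported() hook, so the arm64-specific
	 * cpu_has_amu_feat() no longer needs to be exported.
	 */
	if (cpc_ffh_supported())
		ret = smp_call_on_cpu(cpu, cppc_get_perf_ctrs_pair,
				      &fb_ctrs, false);
	else
		ret = cppc_get_perf_ctrs_pair(&fb_ctrs);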


Thanks,
Zeng Heng

>> Reported-by: Sumit Gupta <sumitg@nvidia.com>
>> Link: https://lore.kernel.org/all/20230418113459.12860-7-sumitg@nvidia.com/
>> Signed-off-by: Zeng Heng <zengheng4@huawei.com>
>> ---
>> drivers/cpufreq/cppc_cpufreq.c | 39 ++++++++++++++++++++++++++--------
>> 1 file changed, 30 insertions(+), 9 deletions(-)
>>
>> diff --git a/drivers/cpufreq/cppc_cpufreq.c b/drivers/cpufreq/cppc_cpufreq.c
>> index fe08ca419b3d..321a9dc9484d 100644
>> --- a/drivers/cpufreq/cppc_cpufreq.c
>> +++ b/drivers/cpufreq/cppc_cpufreq.c
>> @@ -90,6 +90,12 @@ static int cppc_perf_from_fbctrs(struct cppc_cpudata *cpu_data,
>>  				 struct cppc_perf_fb_ctrs *fb_ctrs_t0,
>>  				 struct cppc_perf_fb_ctrs *fb_ctrs_t1);
>>  
>> +struct fb_ctr_pair {
>> +	u32 cpu;
>> +	struct cppc_perf_fb_ctrs fb_ctrs_t0;
>> +	struct cppc_perf_fb_ctrs fb_ctrs_t1;
>> +};
>> +
>>  /**
>>   * cppc_scale_freq_workfn - CPPC arch_freq_scale updater for frequency invariance
>>   * @work: The work item.
>> @@ -840,9 +846,24 @@ static int cppc_perf_from_fbctrs(struct cppc_cpudata *cpu_data,
>>  	return (reference_perf * delta_delivered) / delta_reference;
>>  }
>>  
>> +static int cppc_get_perf_ctrs_pair(void *val)
>> +{
>> +	struct fb_ctr_pair *fb_ctrs = val;
>> +	int cpu = fb_ctrs->cpu;
>> +	int ret;
>> +
>> +	ret = cppc_get_perf_ctrs(cpu, &fb_ctrs->fb_ctrs_t0);
>> +	if (ret)
>> +		return ret;
>> +
>> +	udelay(2); /* 2usec delay between sampling */
>> +
>> +	return cppc_get_perf_ctrs(cpu, &fb_ctrs->fb_ctrs_t1);
>> +}
>> +
>>  static unsigned int cppc_cpufreq_get_rate(unsigned int cpu)
>>  {
>> -	struct cppc_perf_fb_ctrs fb_ctrs_t0 = {0}, fb_ctrs_t1 = {0};
>> +	struct fb_ctr_pair fb_ctrs = { .cpu = cpu, };
>>  	struct cpufreq_policy *policy = cpufreq_cpu_get(cpu);
>>  	struct cppc_cpudata *cpu_data = policy->driver_data;
>>  	u64 delivered_perf;
>> @@ -850,18 +871,18 @@ static unsigned int cppc_cpufreq_get_rate(unsigned int cpu)
>>  
>>  	cpufreq_cpu_put(policy);
>>  
>> -	ret = cppc_get_perf_ctrs(cpu, &fb_ctrs_t0);
>> -	if (ret)
>> -		return 0;
>> -
>> -	udelay(2); /* 2usec delay between sampling */
>> +	if (cpu_has_amu_feat(cpu))
>> +		ret = smp_call_on_cpu(cpu, cppc_get_perf_ctrs_pair,
>> +				      &fb_ctrs, false);
>> +	else
>> +		ret = cppc_get_perf_ctrs_pair(&fb_ctrs);
>>  
>> -	ret = cppc_get_perf_ctrs(cpu, &fb_ctrs_t1);
>>  	if (ret)
>>  		return 0;
>>  
>> -	delivered_perf = cppc_perf_from_fbctrs(cpu_data, &fb_ctrs_t0,
>> -					       &fb_ctrs_t1);
>> +	delivered_perf = cppc_perf_from_fbctrs(cpu_data,
>> +					       &fb_ctrs.fb_ctrs_t0,
>> +					       &fb_ctrs.fb_ctrs_t1);
>>  
>>  	return cppc_cpufreq_perf_to_khz(cpu_data, delivered_perf);
>>  }
>> --
>> 2.25.1
>>
