    From: Thomas Gleixner <tglx@linutronix.de>
    Subject: RE: [patch 00/10] x86/cpu: Consolidate APERF/MPERF code
    Date: Mon, 25 Apr 2022
    On Wed, Apr 20 2022 at 15:08, Doug Smythies wrote:
    > On 2022.04.19 14:11 Thomas Gleixner wrote:
    >>> That's because after the changes in this series scaling_cur_freq
    >>> returns 0 if the given CPU is idle.
    >>
    >> Which is sensible IMO, as there is really no point in waking an idle CPU
    >> just to read those MSRs, waiting 20ms, and then waking it up again to
    >> read those MSRs a second time.
    >
    > I totally agree.
    > It is the inconsistency in what is displayed, depending on the
    > driver/governor, that is my concern.
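
    For reference, here is a minimal user-space sketch (not part of this patch)
    of the calculation arch_freq_get_on_cpu() performs on two APERF/MPERF
    samples; the counter values are made up purely for illustration:

    #include <stdio.h>
    #include <stdint.h>

    int main(void)
    {
    	/* Base (TSC) frequency in kHz, as cpu_khz would hold it: 3.0 GHz. */
    	uint64_t cpu_khz = 3000000;

    	/* Two samples of the MSRs, taken one sampling interval apart. */
    	uint64_t aperf_prev = 1000000, aperf_now = 1600000;
    	uint64_t mperf_prev = 1000000, mperf_now = 1400000;

    	/* APERF counts actual cycles, MPERF counts cycles at base frequency. */
    	uint64_t acnt = aperf_now - aperf_prev;
    	uint64_t mcnt = mperf_now - mperf_prev;

    	/* Same formula as the kernel code below: cpu_khz * acnt / mcnt. */
    	uint64_t freq_khz = cpu_khz * acnt / mcnt;

    	printf("average frequency: %llu kHz\n", (unsigned long long)freq_khz);
    	return 0;
    }

    With these made-up numbers the CPU averaged 4.5 GHz while it was running.
    Without a sufficiently recent sample there is nothing sensible to compute,
    hence the fallback handled by the patch below.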

    Rafael suggested moving the show_cpuinfo() fallback logic into the a/mperf
    code. See below.

    Thanks,

    tglx
    ---
    Subject: x86/aperfmperf: Integrate the fallback code from show_cpuinfo()
    From: Thomas Gleixner <tglx@linutronix.de>
    Date: Mon, 25 Apr 2022 15:19:29 +0200

    Due to the avoidance of IPIs to idle CPUs, arch_freq_get_on_cpu() can return
    0 when the last sample is too old.

    show_cpuinfo() falls back to cpufreq_quick_get() and, if that fails as well,
    to cpu_khz, but the readout code for the per-CPU scaling frequency in sysfs
    has no such fallback.

    Move that fallback into arch_freq_get_on_cpu() so the behaviour is the same
    when reading /proc/cpuinfo and /sys/..../scaling_cur_freq.

    Suggested-by: "Rafael J. Wysocki" <rafael@kernel.org>
    Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
    ---
    arch/x86/kernel/cpu/aperfmperf.c |   10 +++++++---
    arch/x86/kernel/cpu/proc.c       |    7 +------
    2 files changed, 8 insertions(+), 9 deletions(-)

    --- a/arch/x86/kernel/cpu/aperfmperf.c
    +++ b/arch/x86/kernel/cpu/aperfmperf.c
    @@ -405,12 +405,12 @@ void arch_scale_freq_tick(void)
     unsigned int arch_freq_get_on_cpu(int cpu)
     {
     	struct aperfmperf *s = per_cpu_ptr(&cpu_samples, cpu);
    +	unsigned int seq, freq;
     	unsigned long last;
    -	unsigned int seq;
     	u64 acnt, mcnt;

     	if (!cpu_feature_enabled(X86_FEATURE_APERFMPERF))
    -		return 0;
    +		goto fallback;

     	do {
     		seq = raw_read_seqcount_begin(&s->seq);
    @@ -424,9 +424,13 @@ unsigned int arch_freq_get_on_cpu(int cp
     	 * which covers idle and NOHZ full CPUs.
     	 */
     	if (!mcnt || (jiffies - last) > MAX_SAMPLE_AGE)
    -		return 0;
    +		goto fallback;

     	return div64_u64((cpu_khz * acnt), mcnt);
    +
    +fallback:
    +	freq = cpufreq_quick_get(cpu);
    +	return freq ? freq : cpu_khz;
     }

     static int __init bp_init_aperfmperf(void)
    --- a/arch/x86/kernel/cpu/proc.c
    +++ b/arch/x86/kernel/cpu/proc.c
    @@ -86,12 +86,7 @@ static int show_cpuinfo(struct seq_file
     	if (cpu_has(c, X86_FEATURE_TSC)) {
     		unsigned int freq = arch_freq_get_on_cpu(cpu);

    -		if (!freq)
    -			freq = cpufreq_quick_get(cpu);
    -		if (!freq)
    -			freq = cpu_khz;
    -		seq_printf(m, "cpu MHz\t\t: %u.%03u\n",
    -			   freq / 1000, (freq % 1000));
    +		seq_printf(m, "cpu MHz\t\t: %u.%03u\n", freq / 1000, (freq % 1000));
     	}

     	/* Cache size */
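
    For illustration only, a small user-space sketch (again not part of the
    patch) that reads the standard sysfs attribute and the corresponding
    /proc/cpuinfo line; after this change both values are produced by
    arch_freq_get_on_cpu() and therefore fall back the same way:

    #include <stdio.h>
    #include <string.h>

    int main(void)
    {
    	/* cpu0 chosen arbitrarily; scaling_cur_freq reports kHz. */
    	const char *path = "/sys/devices/system/cpu/cpu0/cpufreq/scaling_cur_freq";
    	unsigned int khz = 0;
    	char line[256];
    	FILE *f;

    	f = fopen(path, "r");
    	if (f) {
    		if (fscanf(f, "%u", &khz) == 1)
    			printf("scaling_cur_freq: %u.%03u MHz\n", khz / 1000, khz % 1000);
    		fclose(f);
    	}

    	f = fopen("/proc/cpuinfo", "r");
    	if (!f)
    		return 1;

    	/* Print only the first "cpu MHz" line, i.e. the one for CPU 0. */
    	while (fgets(line, sizeof(line), f)) {
    		if (!strncmp(line, "cpu MHz", 7)) {
    			printf("/proc/cpuinfo:    %s", line);
    			break;
    		}
    	}
    	fclose(f);
    	return 0;
    }

    Before this change the sysfs value could read 0 for an idle CPU while
    /proc/cpuinfo fell back to cpufreq_quick_get()/cpu_khz; with it, both
    readouts take the same fallback path.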