    Subject: [PATCH 5.13 114/800] perf/x86/intel: Fix fixed counter check warning for some Alder Lake
    From: Kan Liang <kan.liang@linux.intel.com>

    commit ee72a94ea4a6d8fa304a506859cd07ecdc0cf5c4 upstream.

    On some Alder Lake machines, the fixed counter check warning below may be
    triggered.

    [ 2.010766] hw perf events fixed 5 > max(4), clipping!
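
    The warning comes from a clamp in the generic x86 perf setup that caps the
    fixed counter count at the architectural maximum (the "max(4)" in the log
    line). Below is a small standalone re-creation of that clamp for
    illustration, not the kernel code itself; the constant name and the
    inflated count are hard-coded assumptions:

    #include <stdio.h>

    /* Architectural maximum assumed by this sketch (the "max(4)" above). */
    #define INTEL_PMC_MAX_FIXED	4

    int main(void)
    {
    	/* 4 fixed counters enumerated by CPUID plus the unconditional +1
    	 * applied for the big-core PMU gives the "5" in the warning. */
    	int num_counters_fixed = 4 + 1;

    	if (num_counters_fixed > INTEL_PMC_MAX_FIXED) {
    		printf("hw perf events fixed %d > max(%d), clipping!\n",
    		       num_counters_fixed, INTEL_PMC_MAX_FIXED);
    		num_counters_fixed = INTEL_PMC_MAX_FIXED;
    	}
    	return 0;
    }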

    Currently, perf unconditionally increases the number of GP counters and
    fixed counters for the big core PMU on an Alder Lake system, because the
    numbers enumerated in CPUID only reflect the common counters and the big
    core may have more. However, Alder Lake may ship in an alternative
    configuration in which X86_FEATURE_HYBRID_CPU is not set. On such a
    system the GP and fixed counter counts enumerated in CPUID are already
    accurate, so perf mistakenly increases them and the warning is triggered.
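
    For reference, the counter counts that intel_pmu_init() starts from are
    enumerated via CPUID leaf 0xA, which reports the number of general-purpose
    counters in EAX[15:8] and the number of fixed-function counters in EDX[4:0]
    for the current logical CPU. A minimal userspace sketch using GCC's
    <cpuid.h> helper to dump those enumerated values:

    #include <stdio.h>
    #include <cpuid.h>

    int main(void)
    {
    	unsigned int eax, ebx, ecx, edx;

    	/* CPUID leaf 0xA: architectural performance monitoring. */
    	if (!__get_cpuid(0x0a, &eax, &ebx, &ecx, &edx)) {
    		fprintf(stderr, "CPUID leaf 0xA not available\n");
    		return 1;
    	}

    	printf("arch perfmon version  : %u\n", eax & 0xff);
    	printf("GP counters (CPUID)   : %u\n", (eax >> 8) & 0xff);
    	/* EDX[4:0] is valid when the version above is greater than 1. */
    	printf("fixed counters (CPUID): %u\n", edx & 0x1f);
    	return 0;
    }

    On a hybrid Alder Lake part these are the common values shared by both core
    types, which is why the patch bumps them for the big-core PMU only when
    X86_FEATURE_HYBRID_CPU is set.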

    Directly use the enumerated values on systems with the alternative
    configuration.

    Fixes: f83d2f91d259 ("perf/x86/intel: Add Alder Lake Hybrid support")
    Reported-by: Jin Yao <yao.jin@linux.intel.com>
    Signed-off-by: Kan Liang <kan.liang@linux.intel.com>
    Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
    Cc: stable@vger.kernel.org
    Link: https://lore.kernel.org/r/1624029174-122219-2-git-send-email-kan.liang@linux.intel.com
    Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>

    ---
    arch/x86/events/intel/core.c | 9 +++++++--
    1 file changed, 7 insertions(+), 2 deletions(-)

    --- a/arch/x86/events/intel/core.c
    +++ b/arch/x86/events/intel/core.c
    @@ -6157,8 +6157,13 @@ __init int intel_pmu_init(void)
     		pmu = &x86_pmu.hybrid_pmu[X86_HYBRID_PMU_CORE_IDX];
     		pmu->name = "cpu_core";
     		pmu->cpu_type = hybrid_big;
    -		pmu->num_counters = x86_pmu.num_counters + 2;
    -		pmu->num_counters_fixed = x86_pmu.num_counters_fixed + 1;
    +		if (cpu_feature_enabled(X86_FEATURE_HYBRID_CPU)) {
    +			pmu->num_counters = x86_pmu.num_counters + 2;
    +			pmu->num_counters_fixed = x86_pmu.num_counters_fixed + 1;
    +		} else {
    +			pmu->num_counters = x86_pmu.num_counters;
    +			pmu->num_counters_fixed = x86_pmu.num_counters_fixed;
    +		}
     		pmu->max_pebs_events = min_t(unsigned, MAX_PEBS_EVENTS, pmu->num_counters);
     		pmu->unconstrained = (struct event_constraint)
     			__EVENT_CONSTRAINT(0, (1ULL << pmu->num_counters) - 1,
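
    Getting pmu->num_counters right matters beyond silencing the warning,
    because the lines just after the changed hunk derive further limits from
    it. A small standalone sketch of those two derivations, with the counter
    count and MAX_PEBS_EVENTS hard-coded here as illustrative assumptions:

    #include <stdio.h>

    /* Value assumed for the kernel's MAX_PEBS_EVENTS in this sketch. */
    #define MAX_PEBS_EVENTS 8

    int main(void)
    {
    	unsigned int num_counters = 8;	/* illustrative GP counter count */

    	/* min_t(unsigned, MAX_PEBS_EVENTS, pmu->num_counters) */
    	unsigned int max_pebs_events = num_counters < MAX_PEBS_EVENTS ?
    				       num_counters : MAX_PEBS_EVENTS;

    	/* Counter bitmask for the unconstrained event constraint:
    	 * (1ULL << pmu->num_counters) - 1.  An inflated num_counters
    	 * would claim counters that do not exist. */
    	unsigned long long cntr_mask = (1ULL << num_counters) - 1;

    	printf("max_pebs_events = %u\n", max_pebs_events);
    	printf("unconstrained counter mask = 0x%llx\n", cntr_mask);
    	return 0;
    }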
