Subject: [PATCH V5 07/25] perf/x86: Hybrid PMU support for unconstrained
From: Kan Liang <kan.liang@linux.intel.com>

The unconstrained event constraint depends on the number of GP and fixed
counters, which can differ between hybrid PMUs. Each hybrid PMU should
therefore use its own unconstrained constraint.

Suggested-by: Peter Zijlstra (Intel) <peterz@infradead.org>
Signed-off-by: Kan Liang <kan.liang@linux.intel.com>
---
arch/x86/events/intel/core.c | 5 ++++-
arch/x86/events/perf_event.h | 1 +
2 files changed, 5 insertions(+), 1 deletion(-)
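
Note (not part of the patch): the global 'unconstrained' constraint is
built from the counter counts of the boot CPU's PMU, which is why hybrid
PMUs with different counter counts cannot share it. Below is a rough
sketch of how a per-PMU unconstrained might be initialized; the helper
name is illustrative, and only the __EVENT_CONSTRAINT() macro and the
fields touched by this patch are taken from the kernel:

	/*
	 * Illustrative sketch only: derive a hybrid PMU's own
	 * unconstrained constraint from its GP counter count, mirroring
	 * how the global 'unconstrained' is derived from
	 * x86_pmu.num_counters.
	 */
	static void init_hybrid_unconstrained(struct x86_hybrid_pmu *pmu)
	{
		pmu->unconstrained = (struct event_constraint)
			__EVENT_CONSTRAINT(0, (1ULL << pmu->num_counters) - 1,
					   0, pmu->num_counters, 0, 0);
	}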

diff --git a/arch/x86/events/intel/core.c b/arch/x86/events/intel/core.c
index 33d26ed..39f57ae 100644
--- a/arch/x86/events/intel/core.c
+++ b/arch/x86/events/intel/core.c
@@ -3147,7 +3147,10 @@ x86_get_event_constraints(struct cpu_hw_events *cpuc, int idx,
 		}
 	}
 
-	return &unconstrained;
+	if (!is_hybrid() || !cpuc->pmu)
+		return &unconstrained;
+
+	return &hybrid_pmu(cpuc->pmu)->unconstrained;
 }
 
 static struct event_constraint *
diff --git a/arch/x86/events/perf_event.h b/arch/x86/events/perf_event.h
index 993f0de..cfb2da0 100644
--- a/arch/x86/events/perf_event.h
+++ b/arch/x86/events/perf_event.h
@@ -639,6 +639,7 @@ struct x86_hybrid_pmu {
 	int				max_pebs_events;
 	int				num_counters;
 	int				num_counters_fixed;
+	struct event_constraint		unconstrained;
 };
 
 static __always_inline struct x86_hybrid_pmu *hybrid_pmu(struct pmu *pmu)
--
2.7.4