From: Andi Kleen <ak@linux.intel.com>
Subject: [PATCH 15/22] perf/x86/intel: Set correct weight for topdown subevent counters

The topdown subevent counters are mapped to a fixed counter, but
should have the normal weight for the scheduler, so special-case
them.
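
As a minimal user-space sketch of this weight fix-up (not kernel
code; the bit positions and the MSK_ANY_SLOTS value below are assumed
stand-ins for the INTEL_PMC_* definitions introduced earlier in this
series, and hweight64() is modeled with GCC's __builtin_popcountll()):

/* sketch.c -- illustrative only; the masks are assumed values */
#include <stdint.h>
#include <stdio.h>

/* Assumed layout: fixed counter 3 (SLOTS) plus four metric bits. */
#define FIXED_SLOTS_BIT		35
#define METRICS_BASE_BIT	48
#define MSK_ANY_SLOTS	((1ULL << FIXED_SLOTS_BIT) | \
			 (0xfULL << METRICS_BASE_BIT))

/* Population count, as the kernel's hweight64() computes. */
static int hweight64(uint64_t x)
{
	return __builtin_popcountll(x);
}

int main(void)
{
	uint64_t idxmsk64 = MSK_ANY_SLOTS;	/* a topdown subevent constraint */

	if (idxmsk64 & MSK_ANY_SLOTS) {
		/*
		 * Keep the mask as-is (it is never extended to the
		 * generic counters), but give the scheduler the normal
		 * weight: the popcount of the index mask.
		 */
		printf("scheduler weight = %d\n", hweight64(idxmsk64)); /* 5 */
	}
	return 0;
}

The weight tells the perf event scheduler how many counters a
constraint spans, so here it is taken directly from the popcount of
the constraint's index mask.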

Signed-off-by: Andi Kleen <ak@linux.intel.com>
Signed-off-by: Kan Liang <kan.liang@linux.intel.com>
---
arch/x86/events/intel/core.c | 9 +++++++++
1 file changed, 9 insertions(+)

diff --git a/arch/x86/events/intel/core.c b/arch/x86/events/intel/core.c
index af2028ee2d1e..d69537b6c184 100644
--- a/arch/x86/events/intel/core.c
+++ b/arch/x86/events/intel/core.c
@@ -4994,6 +4994,15 @@ __init int intel_pmu_init(void)
 		 * counter, so do not extend mask to generic counters
 		 */
 		for_each_event_constraint(c, x86_pmu.event_constraints) {
+			/*
+			 * Don't extend the event mask of topdown sub event
+			 * counters to the generic counters.
+			 */
+			if (x86_pmu.num_counters_fixed >= 3 &&
+			    c->idxmsk64 & INTEL_PMC_MSK_ANY_SLOTS) {
+				c->weight = hweight64(c->idxmsk64);
+				continue;
+			}
 			if (c->cmask == FIXED_EVENT_FLAGS
 			    && c->idxmsk64 != INTEL_PMC_MSK_FIXED_REF_CYCLES) {
 				c->idxmsk64 |= (1ULL << x86_pmu.num_counters) - 1;
--
2.17.1