Date:    Mon, 6 Aug 2018 20:35:15 +0200
From:    Peter Zijlstra <>
Subject: Re: [PATCH 2/3] x86, perf: Add a separate Arch Perfmon v4 PMI handler
On Mon, Aug 06, 2018 at 10:23:42AM -0700, kan.liang@linux.intel.com wrote:
> @@ -2044,6 +2056,14 @@ static void intel_pmu_disable_event(struct perf_event *event)
>  	if (unlikely(event->attr.precise_ip))
>  		intel_pmu_pebs_disable(event);
> 
> +	/*
> +	 * We could disable freezing here, but doesn't hurt if it's on.
> +	 * perf remembers the state, and someone else will likely
> +	 * reinitialize.
> +	 *
> +	 * This avoids an extra MSR write in many situations.
> +	 */
> +
>  	if (unlikely(hwc->config_base == MSR_ARCH_PERFMON_FIXED_CTR_CTRL)) {
>  		intel_pmu_disable_fixed(hwc);
>  		return;
> 
> @@ -2119,6 +2139,11 @@ static void intel_pmu_enable_event(struct perf_event *event)
>  	if (event->attr.exclude_guest)
>  		cpuc->intel_ctrl_host_mask |= (1ull << hwc->idx);
> 
> +	if (x86_pmu.counter_freezing && !cpuc->frozen_enabled) {
> +		enable_counter_freeze();
> +		cpuc->frozen_enabled = 1;
> +	}
> +
>  	if (unlikely(event_is_checkpointed(event)))
>  		cpuc->intel_cp_status |= (1ull << hwc->idx);
Why here? That doesn't really make sense; should this not be in intel_pmu_cpu_starting() or something?
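
Something like the below, perhaps; completely untested sketch, keeping your
frozen_enabled flag, so intel_pmu_enable_event() doesn't need to touch it at
all:

	static void intel_pmu_cpu_starting(int cpu)
	{
		struct cpu_hw_events *cpuc = &per_cpu(cpu_hw_events, cpu);

		/* existing init elided */

		/* one MSR write per CPU instead of one per event */
		if (x86_pmu.counter_freezing) {
			enable_counter_freeze();
			cpuc->frozen_enabled = 1;
		}
	}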
> +static bool disable_counter_freezing;
> +module_param(disable_counter_freezing, bool, 0444);
> +MODULE_PARM_DESC(disable_counter_freezing, "Disable counter freezing feature."
> +	"The PMI handler will fall back to generic handler."
> +	"Default is false (enable counter freezing feature).");
Why?
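
And if a chicken bit really is needed, this code is always built in, so a
__setup() boot parameter might be the more usual shape; roughly (untested
sketch, handler name is illustrative):

	static bool disable_counter_freezing;

	/* name is illustrative only */
	static int __init counter_freezing_setup(char *s)
	{
		disable_counter_freezing = true;
		return 1;
	}
	__setup("disable_counter_freezing", counter_freezing_setup);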
> +	/*
> +	 * Ack the PMU late after the APIC. This avoids bogus
That doesn't make sense; there is no ordering between the PMU and the APIC.
> +	 * freezing on Skylake CPUs. The acking unfreezes the PMU
> +	 */
> +	if (status) {
> +		intel_pmu_ack_status(status);
> +	} else {
> +		/*
> +		 * CPU may issues two PMIs very close to each other.
> +		 * When the PMI handler services the first one, the
> +		 * GLOBAL_STATUS is already updated to reflect both.
> +		 * When it IRETs, the second PMI is immediately
> +		 * handled and it sees clear status. At the meantime,
> +		 * there may be a third PMI, because the freezing bit
> +		 * isn't set since the ack in first PMI handlers.
> +		 * Double check if there is more work to be done.
> +		 */
Urgh... fun fun fun.
> +		status = intel_pmu_get_status();
> +		if (status)
> +			goto again;
> +	}
> +
> +	if (bts)
> +		intel_bts_enable_local();
> +	cpuc->enabled = pmu_enabled;
> +	return handled;
> +}
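
For anyone following along, the flow being discussed is roughly this
(reconstructed sketch; the again: label isn't in the quoted context, and
handle_pmi_common() is presumably the helper factored out earlier in the
series):

	again:
		status = intel_pmu_get_status();
		handled += handle_pmi_common(regs, status);

		/* the late ack is what unfreezes the counters again */
		if (status) {
			intel_pmu_ack_status(status);
		} else {
			/* spurious back-to-back PMI; re-check before leaving */
			status = intel_pmu_get_status();
			if (status)
				goto again;
		}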
> @@ -3432,6 +3538,11 @@ static void intel_pmu_cpu_dying(int cpu)
>  	free_excl_cntrs(cpu);
> 
>  	fini_debug_store_on_cpu(cpu);
> +
> +	if (cpuc->frozen_enabled) {
> +		cpuc->frozen_enabled = 0;
> +		disable_counter_freeze();
> +	}
>  }
See, you have the dying thing, so why not the matching starting thing?
> @@ -4442,6 +4555,15 @@ __init int intel_pmu_init(void)
>  		pr_cont("full-width counters, ");
>  	}
> 
> +	/*
> +	 * For arch perfmon 4 use counter freezing to avoid
> +	 * several MSR accesses in the PMI.
> +	 */
> +	if (x86_pmu.counter_freezing) {
> +		x86_pmu.handle_irq = intel_pmu_handle_irq_v4;
> +		pr_cont("counter freezing, ");
> +	}
Let's not print the counter freezing; we already print v4, right?
> @@ -561,6 +566,7 @@ struct x86_pmu {
>  	struct x86_pmu_quirk *quirks;
>  	int		perfctr_second_write;
>  	bool		late_ack;
> +	bool		counter_freezing;
Please make both of them int or something.
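I.e. (sketch of just the type change):

	int		perfctr_second_write;
	int		late_ack;
	int		counter_freezing;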
>  	u64		(*limit_period)(struct perf_event *event, u64 l);
> 
>  	/*