From: Vince Weaver <>
Date: Mon, 18 May 2015 13:48:11 -0400 (EDT)
Subject: Re: perf: WARNING perfevents: irq loop stuck!
On Fri, 8 May 2015, Ingo Molnar wrote:
>
> * Ingo Molnar <mingo@kernel.org> wrote:
>
> >
> > * Vince Weaver <vincent.weaver@maine.edu> wrote:
> >
> > > So this is just a warning, and I've reported it before, but the
> > > perf_fuzzer triggers this fairly regularly on my Haswell system.
> > >
> > > It looks like fixed counter 0 (retired instructions) being set to
> > > 0000fffffffffffe occasionally causes an irq loop storm and gets
> > > stuck until the PMU state is cleared.
> >
> > So 0000fffffffffffe corresponds to 2 events left until overflow,
> > right? And on Haswell we don't set x86_pmu.limit_period AFAICS, so we
> > allow these super short periods.
> >
> > Maybe like on Broadwell we need a quirk on Nehalem/Haswell as well,
> > one similar to bdw_limit_period()? Something like the patch below?
> >
> > Totally untested and such. I picked 128 because of Broadwell, but
> > lower values might work as well. You could try to increase it to 3 and
> > upwards and see which one stops triggering stuck NMI loops?
> >
> > Thanks,
> >
> > 	Ingo
> >
> > Signed-off-by: Ingo Molnar <mingo@kernel.org>
> >
> > ---
> >  arch/x86/kernel/cpu/perf_event_intel.c | 12 +++++++++++-
> >  1 file changed, 11 insertions(+), 1 deletion(-)
> >
> > diff --git a/arch/x86/kernel/cpu/perf_event_intel.c b/arch/x86/kernel/cpu/perf_event_intel.c
> > index 960e85de13fb..26b13ea8299c 100644
> > --- a/arch/x86/kernel/cpu/perf_event_intel.c
> > +++ b/arch/x86/kernel/cpu/perf_event_intel.c
> > @@ -2479,6 +2479,15 @@ hsw_get_event_constraints(struct cpu_hw_events *cpuc, int idx,
> >
> >  	return c;
> >  }
> > +/*
> > + * Really short periods might create infinite PMC NMI loops on Haswell,
> > + * so limit them to 128. There's no official erratum for this AFAIK.
> > + */
> > +static unsigned int hsw_limit_period(struct perf_event *event, unsigned int left)
> > +{
> > +	return max(left, 128U);
> > +}
> > +
> >
> >  /*
> >   * Broadwell:
> > @@ -2495,7 +2504,7 @@ hsw_get_event_constraints(struct cpu_hw_events *cpuc, int idx,
> >   * Therefore the effective (average) period matches the requested period,
> >   * despite coarser hardware granularity.
> >   */
> > -static unsigned bdw_limit_period(struct perf_event *event, unsigned left)
> > +static unsigned int bdw_limit_period(struct perf_event *event, unsigned left)
> >  {
> >  	if ((event->hw.config & INTEL_ARCH_EVENT_MASK) ==
> >  			X86_CONFIG(.event=0xc0, .umask=0x01)) {
> > @@ -3265,6 +3274,7 @@ __init int intel_pmu_init(void)
> >  		x86_pmu.hw_config = hsw_hw_config;
> >  		x86_pmu.get_event_constraints = hsw_get_event_constraints;
> >  		x86_pmu.cpu_events = hsw_events_attrs;
> > +		x86_pmu.limit_period = hsw_limit_period;
> >  		x86_pmu.lbr_double_abort = true;
> >  		pr_cont("Haswell events, ");
> >  		break;
>
> Also, I'd apply the quirk not just to Haswell, but Nehalem, Westmere
> and Ivy Bridge as well, I have seen it as early as on a Nehalem
> prototype box.
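(For reference, the "2 events left until overflow" reading follows from how
the kernel programs these up-counters: x86_perf_event_set_period() writes the
negated period, masked to the counter width, which is 48 bits on these parts,
so 0000fffffffffffe is -2 truncated to 48 bits. Below is a standalone
userspace sketch of that arithmetic; the 48-bit width and the 128 floor come
from the discussion above, and this is illustrative only, not kernel code.)

/*
 * Sketch of the counter-programming arithmetic (userspace, illustrative).
 * perf writes the negated period, masked to the 48-bit counter width, so
 * the counter overflows after "left" events. A period of 2 produces the
 * 0000fffffffffffe value seen stuck in fixed counter 0; the proposed
 * limit_period quirk clamps the period to at least 128 before the write.
 */
#include <stdio.h>
#include <stdint.h>

#define CNTVAL_BITS	48
#define CNTVAL_MASK	((1ULL << CNTVAL_BITS) - 1)

static uint64_t program_value(uint64_t left, uint64_t floor)
{
	if (left < floor)	/* the limit_period clamp */
		left = floor;
	return (-left) & CNTVAL_MASK;
}

int main(void)
{
	/* No clamp: 2 events left -> 0000fffffffffffe */
	printf("%016llx\n", (unsigned long long)program_value(2, 0));
	/* With the proposed 128 floor -> 0000ffffffffff80 */
	printf("%016llx\n", (unsigned long long)program_value(2, 128));
	return 0;
}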
So, at the suggestion of Andi Kleen, I ran some tests to see whether this was related to Haswell erratum HSD143: "Fixed-Function Performance Counter May Over Count Instructions Retired by 32 When Intel Hyper-Threading Technology is Enabled".
And indeed, the problem seemed to go away when I disabled Hyper-Threading.
However, a patch implementing the Intel-suggested workaround for that erratum (programming the FIXED_CTR_CTRL_MSR only after the GLOBAL_CTRL_MSR is set) did not fix the issue once I re-enabled Hyper-Threading on the machine.
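(Roughly, that workaround amounts to swapping the order of the two MSR
writes so that global enable happens first. A minimal kernel-style sketch of
the idea follows, assuming the standard wrmsrl() helper and the
MSR_CORE_PERF_* constants from <asm/msr-index.h>; the enable bits shown are
illustrative, and this is not the actual patch that was tested.)

/*
 * Sketch of the erratum workaround described above: program the
 * fixed-counter control MSR only after the global-control MSR has
 * been written. The bit values are illustrative.
 */
#include <asm/msr.h>
#include <asm/msr-index.h>

static void enable_fixed_ctr0(void)
{
	/* 1. Set the global enable bit for fixed counter 0 first ... */
	wrmsrl(MSR_CORE_PERF_GLOBAL_CTRL, 1ULL << 32);

	/* 2. ... only then program FIXED_CTR_CTRL (count rings 0 and 3). */
	wrmsrl(MSR_CORE_PERF_FIXED_CTR_CTRL, 0x3ULL);
}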
Vince