From: "Lendacky, Thomas" <>
Subject: Re: [RFC PATCH 2/2] x86/perf/amd: Resolve NMI latency issues when multiple PMCs are active
Date: Fri, 15 Mar 2019 17:47:31 +0000
On 3/15/19 10:49 AM, Tom Lendacky wrote:
> On 3/15/19 10:11 AM, Peter Zijlstra wrote:
>> On Fri, Mar 15, 2019 at 02:44:32PM +0000, Lendacky, Thomas wrote:
>>
>>>>> @@ -689,6 +731,7 @@ static __initconst const struct x86_pmu amd_pmu = {
>>>>>  	.amd_nb_constraints	= 1,
>>>>>  	.wait_on_overflow	= amd_pmu_wait_on_overflow,
>>>>> +	.mitigate_nmi_latency	= amd_pmu_mitigate_nmi_latency,
>>>>>  };
>>>>
>>>> Again, you could just do amd_pmu_handle_irq() and avoid an extra
>>>> callback.
>>>
>>> This is where there would be a bunch of code duplication where I thought
>>> adding the callback at the end would be better. But if it's best to add
>>> an AMD handle_irq callback I can do that. I'm easy, let me know if you'd
>>> prefer that.
>>
>> Hmm, the thing that avoids you directly using x86_pmu_handle_irq() is
>> that added active count, but is that not the same as the POPCNT of
>> cpuc->active_mask?
>>
>> Is the latency of POPCNT so bad that we need avoid it?
>>
>> That is, I was thinking of something like:
>>
>> int amd_pmu_handle_irq(struct pt_regs *regs)
>> {
>> 	struct cpu_hw_events *cpuc = this_cpu_ptr(&cpu_hw_events);
>> 	int active = hweight_long(cpuc->active_mask);
>> 	int handled = x86_pmu_handle_irq(regs);
>
> Yup, I had a total brain lapse there of just calling x86_pmu_handle_irq()
> from the new routine.
>
>>
>> +	if (active <= 1) {
And I wasn't taking into account other sources of NMIs that can trigger the
handler while perf is running; I was only thinking in terms of NMIs coming
from the PMCs. So this really needs to be a !active check, and the setting
of perf_nmi_counter below needs to be the min of 2 or active.
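Folding those two changes into the sketch above would look roughly like the
following (untested, just to show the intended logic; perf_nmi_counter is the
per-CPU variable introduced by the RFC patch):

int amd_pmu_handle_irq(struct pt_regs *regs)
{
	struct cpu_hw_events *cpuc = this_cpu_ptr(&cpu_hw_events);
	int active = hweight_long(cpuc->active_mask);
	int handled = x86_pmu_handle_irq(regs);

	/*
	 * No active PMCs, so this NMI can't be a latent PMC NMI; clear
	 * any latent-NMI accounting and pass the result through.
	 */
	if (!active) {
		this_cpu_write(perf_nmi_counter, 0);
		return handled;
	}

	/*
	 * If a counter was handled, record the number of possible
	 * remaining NMIs that can occur, capped at 2 and bounded by
	 * the number of active counters.
	 */
	if (handled) {
		this_cpu_write(perf_nmi_counter,
			       min_t(unsigned int, 2, active));
		return handled;
	}

	/*
	 * Nothing was handled; if latent NMIs from previously handled
	 * overflows are still expected, swallow this one rather than
	 * letting it be reported as unknown.
	 */
	if (!this_cpu_read(perf_nmi_counter))
		return NMI_DONE;

	this_cpu_dec(perf_nmi_counter);

	return NMI_HANDLED;
}

This would also drop the extra .mitigate_nmi_latency callback entirely, per
Peter's earlier comment.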
Thanks, Tom
>> 		this_cpu_write(perf_nmi_counter, 0);
>> +		return handled;
>> 	}
>> +
>> +	/*
>> +	 * If a counter was handled, record the number of possible remaining
>> +	 * NMIs that can occur.
>> +	 */
>> +	if (handled) {
>> +		this_cpu_write(perf_nmi_counter,
>> +			       min_t(unsigned int, 2, active - 1));
>> +
>> +		return handled;
>> +	}
>> +
>> +	if (!this_cpu_read(perf_nmi_counter))
>> +		return NMI_DONE;
>> +
>> +	this_cpu_dec(perf_nmi_counter);
>> +
>> +	return NMI_HANDLED;
>> }
>>
>>>> Anyway, we already had code to deal with spurious NMIs from AMD; see
>>>> commit:
>>>>
>>>>   63e6be6d98e1 ("perf, x86: Catch spurious interrupts after
>>>>   disabling counters")
>>>>
>>>> And that looks to be doing something very much the same. Why then do you
>>>> still need this on top?
>>>
>>> This can happen while perf is handling normal counter overflow as opposed
>>> to covering the disabling of the counter case. When multiple counters
>>> overflow at roughly the same time, but the NMI doesn't arrive in time to
>>> get collapsed into a pending NMI, the back-to-back support in
>>> do_default_nmi() doesn't kick in.
>>>
>>> Hmmm... I wonder if the wait on overflow in the disable_all() function
>>> would eliminate the need for 63e6be6d98e1. That would take a more testing
>>> on some older hardware to verify. That's something I can look into
>>> separate from this series.
>>
>> Yes please, or at least better document the reason for their separate
>> existence. It's all turning into a bit of magic it seems.
>
> Ok, I'll update the commit message with a bit more info and add to the
> comment of the new AMD handle_irq function.
>
> Thanks,
> Tom
>
>>
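For reference, the spurious-NMI handling that commit 63e6be6d98e1 added sits
in the counter loop of x86_pmu_handle_irq() and only covers counters that
have been deactivated but are still marked in cpuc->running; roughly (from
the generic x86 perf code, trimmed here for context):

	for (idx = 0; idx < x86_pmu.num_counters; idx++) {
		if (!test_bit(idx, cpuc->active_mask)) {
			/*
			 * A counter that was just disabled may still
			 * have an NMI in flight; count it as handled
			 * so it isn't reported as unknown.
			 */
			if (__test_and_clear_bit(idx, cpuc->running))
				handled++;
			continue;
		}
		...
	}

That check doesn't apply to the case discussed above, where the counters are
still active and the latent NMI arrives after its overflow was already
serviced by an earlier NMI.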