From:    "Lendacky, Thomas" <>
Subject: Re: [RFC PATCH 2/2] x86/perf/amd: Resolve NMI latency issues when multiple PMCs are active
Date:    Fri, 15 Mar 2019 15:50:00 +0000
On 3/15/19 10:11 AM, Peter Zijlstra wrote:
> On Fri, Mar 15, 2019 at 02:44:32PM +0000, Lendacky, Thomas wrote:
>
>>>> @@ -689,6 +731,7 @@ static __initconst const struct x86_pmu amd_pmu = {
>>>>
>>>>  	.amd_nb_constraints	= 1,
>>>>  	.wait_on_overflow	= amd_pmu_wait_on_overflow,
>>>> +	.mitigate_nmi_latency	= amd_pmu_mitigate_nmi_latency,
>>>>  };
>>>
>>> Again, you could just do amd_pmu_handle_irq() and avoid an extra
>>> callback.
>>
>> This is where there would be a bunch of code duplication, which is why
>> I thought adding the callback at the end would be better. But if it's
>> best to add an AMD handle_irq callback I can do that. I'm easy, let me
>> know if you'd prefer that.
>
> Hmm, the thing that keeps you from directly using x86_pmu_handle_irq()
> is that added active count, but is that not the same as the POPCNT of
> cpuc->active_mask?
>
> Is the latency of POPCNT so bad that we need to avoid it?
>
> That is, I was thinking of something like:
>
> int amd_pmu_handle_irq(struct pt_regs *regs)
> {
> 	struct cpu_hw_events *cpuc = this_cpu_ptr(&cpu_hw_events);
> 	int active = hweight_long(cpuc->active_mask);
> 	int handled = x86_pmu_handle_irq(regs);
Yup, I had a total brain lapse there; I should have just called
x86_pmu_handle_irq() from the new routine.
>
> +	if (active <= 1) {
> 		this_cpu_write(perf_nmi_counter, 0);
> +		return handled;
> 	}
> +
> +	/*
> +	 * If a counter was handled, record the number of possible remaining
> +	 * NMIs that can occur.
> +	 */
> +	if (handled) {
> +		this_cpu_write(perf_nmi_counter,
> +			       min_t(unsigned int, 2, active - 1));
> +
> +		return handled;
> +	}
> +
> +	if (!this_cpu_read(perf_nmi_counter))
> +		return NMI_DONE;
> +
> +	this_cpu_dec(perf_nmi_counter);
> +
> +	return NMI_HANDLED;
> }
>
>>> Anyway, we already had code to deal with spurious NMIs from AMD; see
>>> commit:
>>>
>>>   63e6be6d98e1 ("perf, x86: Catch spurious interrupts after disabling counters")
>>>
>>> And that looks to be doing something very much the same. Why then do you
>>> still need this on top?
>>
>> This can happen while perf is handling normal counter overflow, as
>> opposed to covering the counter-disable case. When multiple counters
>> overflow at roughly the same time, but the NMI doesn't arrive in time to
>> get collapsed into a pending NMI, the back-to-back support in
>> do_default_nmi() doesn't kick in.
>>
>> Hmmm... I wonder if the wait on overflow in the disable_all() function
>> would eliminate the need for 63e6be6d98e1. That would take more testing
>> on some older hardware to verify. That's something I can look into
>> separately from this series.
>
> Yes please, or at least better document the reason for their separate
> existence. It's all turning into a bit of magic it seems.
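For reference, here's an untested sketch of how that could look as a
dedicated AMD handler, with the perf_nmi_counter accounting from the
original patch folded into the structure you suggest. One detail: since
cpuc->active_mask is a bitmap (an array of unsigned long), the sketch
uses bitmap_weight() where your snippet has hweight_long():

	/* Untested sketch; per-CPU counter assumed from the original patch */
	static DEFINE_PER_CPU(unsigned int, perf_nmi_counter);

	static int amd_pmu_handle_irq(struct pt_regs *regs)
	{
		struct cpu_hw_events *cpuc = this_cpu_ptr(&cpu_hw_events);
		int active, handled;

		/* Number of active counters; active_mask is a bitmap */
		active = bitmap_weight(cpuc->active_mask, X86_PMC_IDX_MAX);

		/* Let the common x86 handler service any overflowed counters */
		handled = x86_pmu_handle_irq(regs);

		/*
		 * With at most one active counter there can be no latent NMI
		 * from a second, already-serviced overflow, so reset and return.
		 */
		if (active <= 1) {
			this_cpu_write(perf_nmi_counter, 0);
			return handled;
		}

		/*
		 * If a counter was handled, record the number of possible
		 * remaining NMIs that can occur.
		 */
		if (handled) {
			this_cpu_write(perf_nmi_counter,
				       min_t(unsigned int, 2, active - 1));
			return handled;
		}

		/*
		 * Nothing was handled: either this NMI really isn't ours, or
		 * it's a latent NMI for an overflow that was already serviced.
		 */
		if (!this_cpu_read(perf_nmi_counter))
			return NMI_DONE;

		this_cpu_dec(perf_nmi_counter);
		return NMI_HANDLED;
	}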
Ok, I'll update the commit message with a bit more info and expand the
comment on the new AMD handle_irq function.
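Something along these lines might work for that comment (hypothetical
wording, just distilling the points from this thread):

	/*
	 * There are two distinct spurious-NMI situations:
	 *
	 * - NMIs that arrive after a counter has been disabled; those are
	 *   caught via cpuc->running by commit 63e6be6d98e1 ("perf, x86:
	 *   Catch spurious interrupts after disabling counters").
	 *
	 * - NMIs raised while multiple counters are still active: when
	 *   counters overflow at roughly the same time, one handler pass
	 *   may service all of them, and the NMI for a later overflow can
	 *   arrive too late to be collapsed into a pending NMI, so the
	 *   back-to-back NMI logic in do_default_nmi() never kicks in.
	 *   The perf_nmi_counter accounting here swallows those latent NMIs.
	 */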
Thanks,
Tom