Subject: [PATCH v2 04/17] perf: x86/ds: Handle guest PEBS overflow PMI and inject it to guest
With PEBS virtualization, the PEBS records get delivered to the guest,
but the host still sees the PEBS overflow PMI from the guest PEBS
counters. This would normally result in a spurious host PMI; instead,
we need to inject that PEBS overflow PMI into the guest, so that the
guest PMI handler can handle the PEBS records.

Check for this case in the host perf PEBS handler. If a PEBS overflow
PMI occurs and it was not generated from the host side (determined by
checking the host DS area), a fake event is triggered. The fake event
causes the KVM PMI callback to be called, thereby injecting the PEBS
overflow PMI into the guest.

No matter how many guest PEBS counters have overflowed, triggering a
single fake event is enough; the guest PEBS handler then retrieves the
correct information from its own PEBS record buffer.
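
For reference, the classification boils down to two checks: the set of PEBS
counters not owned by the host must be non-empty, and the host DS interrupt
threshold must not have been reached. The standalone model below illustrates
this with made-up sample values; the variable names mirror the fields of
struct cpu_hw_events and struct debug_store used by the patch, but it is
only an illustration, not kernel code (the real handler additionally
requires being in an NMI):

  #include <stdio.h>
  #include <stdint.h>

  int main(void)
  {
          /* Sample values only: counters 0 and 1 use PEBS, counter 0 is host-owned. */
          uint64_t pebs_enabled         = 0x3;    /* models cpuc->pebs_enabled */
          uint64_t intel_ctrl_host_mask = 0x1;    /* models cpuc->intel_ctrl_host_mask */
          uint64_t pebs_index           = 0x100;  /* models ds->pebs_index */
          uint64_t pebs_intr_threshold  = 0x200;  /* models ds->pebs_interrupt_threshold */

          /* PEBS counters that the host does not own must belong to the guest. */
          uint64_t guest_pebs_idxs = pebs_enabled & ~intel_ctrl_host_mask;

          /*
           * The host DS threshold was not crossed, so the PMI cannot be a
           * host PEBS overflow; with guest-owned PEBS counters present,
           * inject one (and only one) PEBS overflow PMI into the guest.
           */
          if (guest_pebs_idxs && pebs_index < pebs_intr_threshold)
                  printf("inject one guest PEBS overflow PMI (guest mask 0x%llx)\n",
                         (unsigned long long)guest_pebs_idxs);
          else
                  printf("treat as a host PEBS overflow PMI\n");

          return 0;
  }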

If counter_freezing is disabled on the host, a guest PEBS overflow PMI
can be missed when a PEBS counter is also enabled on the host side and,
coincidentally, a host PEBS overflow PMI based on the host DS_AREA fires
right after the vm-exit caused by the guest PEBS overflow PMI based on
the guest DS_AREA. To avoid this, KVM will disable guest PEBS before
vm-entry whenever a host PEBS counter is enabled on the same CPU.
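
The vm-entry-time mitigation itself is implemented on the KVM side of this
series, not in this patch; as a rough sketch of the decision involved, under
the assumption of a hypothetical helper allow_guest_pebs() and a
host_pebs_mask of host-owned PEBS counters (made-up names, not the actual
KVM code):

  #include <stdbool.h>
  #include <stdint.h>

  /*
   * Hypothetical helper: with counter_freezing disabled, guest PEBS must not
   * be enabled while any host PEBS counter is active on this CPU, otherwise
   * a guest PEBS overflow PMI can be lost as described above.
   */
  static bool allow_guest_pebs(uint64_t host_pebs_mask, bool counter_freezing)
  {
          /* Safe when the host uses no PEBS counters on this CPU, or when
           * counter_freezing prevents the guest PEBS PMI from being missed. */
          return !host_pebs_mask || counter_freezing;
  }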

Originally-by: Andi Kleen <ak@linux.intel.com>
Co-developed-by: Kan Liang <kan.liang@linux.intel.com>
Signed-off-by: Kan Liang <kan.liang@linux.intel.com>
Signed-off-by: Like Xu <like.xu@linux.intel.com>
---
arch/x86/events/intel/ds.c | 64 ++++++++++++++++++++++++++++++++++++++
1 file changed, 64 insertions(+)

diff --git a/arch/x86/events/intel/ds.c b/arch/x86/events/intel/ds.c
index 86848c57b55e..1e759c74bffd 100644
--- a/arch/x86/events/intel/ds.c
+++ b/arch/x86/events/intel/ds.c
@@ -1721,6 +1721,67 @@ intel_pmu_save_and_restart_reload(struct perf_event *event, int count)
return 0;
}

+/*
+ * We may be running with guest PEBS events created by KVM, and the
+ * PEBS records are logged into the guest's DS and invisible to host.
+ *
+ * In the case of guest PEBS overflow, we only trigger a fake event
+ * to emulate the PEBS overflow PMI for guest PEBS counters in KVM.
+ * The guest will then, on the next vm-entry, check the guest DS area
+ * and read the guest PEBS records.
+ *
+ * Without counter_freezing support on the host, a guest PEBS overflow
+ * PMI may be dropped when both the guest and the host use PEBS.
+ * Therefore, KVM will not enable guest PEBS while host PEBS is enabled
+ * without counter_freezing, since that may bring a confusing unknown NMI.
+ *
+ * The contents and other behavior of the guest event do not matter.
+ */
+static int intel_pmu_handle_guest_pebs(struct cpu_hw_events *cpuc,
+ struct pt_regs *iregs,
+ struct debug_store *ds)
+{
+ struct perf_sample_data data;
+ struct perf_event *event = NULL;
+ u64 guest_pebs_idxs = cpuc->pebs_enabled & ~cpuc->intel_ctrl_host_mask;
+ int bit;
+
+ /*
+ * Ideally, we should check the guest DS to determine whether this is
+ * a guest PEBS overflow PMI from the guest PEBS counters. However,
+ * retrieving the guest DS in the host brings high overhead, so we
+ * check the host DS instead for performance.
+ *
+ * If the PEBS interrupt threshold on the host is not exceeded in an NMI,
+ * there must be a PEBS overflow PMI generated from the guest PEBS counters.
+ * There is no ambiguity since the reported event in the PMI is guest only;
+ * it gets handled correctly on a case-by-case basis for each event.
+ *
+ * Note: This is based on the assumption that counter_freezing is enabled,
+ * or KVM disables the co-existence of guest PEBS and host PEBS.
+ */
+ if (!guest_pebs_idxs || !in_nmi() ||
+ ds->pebs_index >= ds->pebs_interrupt_threshold)
+ return 0;
+
+ for_each_set_bit(bit, (unsigned long *)&guest_pebs_idxs,
+ INTEL_PMC_IDX_FIXED + x86_pmu.num_counters_fixed) {
+
+ event = cpuc->events[bit];
+ if (!event->attr.precise_ip)
+ continue;
+
+ perf_sample_data_init(&data, 0, event->hw.last_period);
+ if (perf_event_overflow(event, &data, iregs))
+ x86_pmu_stop(event, 0);
+
+ /* Injecting one fake event is enough. */
+ return 1;
+ }
+
+ return 0;
+}
+
static void __intel_pmu_pebs_event(struct perf_event *event,
struct pt_regs *iregs,
void *base, void *top,
@@ -1954,6 +2015,9 @@ static void intel_pmu_drain_pebs_icl(struct pt_regs *iregs)
if (!x86_pmu.pebs_active)
return;

+ if (intel_pmu_handle_guest_pebs(cpuc, iregs, ds))
+ return;
+
base = (struct pebs_basic *)(unsigned long)ds->pebs_buffer_base;
top = (struct pebs_basic *)(unsigned long)ds->pebs_index;

--
2.21.3