Subject: Re: [PATCH v2 04/17] perf: x86/ds: Handle guest PEBS overflow PMI and inject it to guest
Hi Peter,

On 2020/11/17 22:35, Peter Zijlstra wrote:
> On Mon, Nov 09, 2020 at 10:12:41AM +0800, Like Xu wrote:
>> With PEBS virtualization, the PEBS records get delivered to the guest,
>> and the host still sees the PEBS overflow PMI from guest PEBS counters.
>> This would normally result in a spurious host PMI, so we need to inject
>> that PEBS overflow PMI into the guest, so that the guest PMI handler
>> can handle the PEBS records.
>>
>> Check for this case in the host perf PEBS handler. If a PEBS overflow
>> PMI occurs and it was not generated from the host side (determined by
>> checking the host DS area), a fake event will be triggered. The fake
>> event causes the KVM PMI callback to be called, thereby injecting the
>> PEBS overflow PMI into the guest.
>>
>> No matter how many guest PEBS counters have overflowed, triggering
>> one fake event is enough. The guest PEBS handler will retrieve the
>> correct information from its own PEBS records buffer.
>>
>> If counter_freezing is disabled on the host, a guest PEBS overflow
>> PMI could be missed when a PEBS counter is also enabled on the host side
>> and, coincidentally, a host PEBS overflow PMI based on the host DS_AREA
>> is triggered right after vm-exit caused by the guest PEBS overflow PMI
>> based on the guest DS_AREA. In that case, KVM disables guest PEBS before
>> vm-entry once there is a host PEBS counter enabled on the same CPU.
>
> How does this guest DS crud work? DS_AREA is a host virtual address;

A host counter will be scheduled (possibly cross-mapped) for each guest PEBS
counter (via a guest PEBS event), and its enable bits (PEBS_ENABLE + EN
+ GLOBAL_CTRL) will be set according to the guest's values right before
vcpu entry (via atomic_switch_perf_msrs).

The guest PEBS records will be written to the guest DS buffer referenced
by the guest DS_AREA MSR, which is switched during the VMX transition and
holds a guest virtual address.
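
As a rough sketch of that switch, here is the existing perf/VMX plumbing
(simplified from the atomic_switch_perf_msrs()/perf_guest_get_msrs() code
around v5.9, not this patch itself; exactly which MSRs perf reports for
guest PEBS, e.g. PEBS_ENABLE and DS_AREA, is an assumption of this series):

/*
 * Simplified sketch of the VM-entry/VM-exit MSR auto-switch used for perf.
 * perf_guest_get_msrs() reports, per MSR (GLOBAL_CTRL and, with guest
 * PEBS, PEBS_ENABLE and DS_AREA), the value to load at VM-entry (guest)
 * and the value to restore at VM-exit (host).
 */
struct perf_guest_switch_msr {
	unsigned int msr;
	u64 host, guest;
};

static void atomic_switch_perf_msrs(struct vcpu_vmx *vmx)
{
	struct perf_guest_switch_msr *msrs;
	int i, nr_msrs;

	msrs = perf_guest_get_msrs(&nr_msrs);
	if (!msrs)
		return;

	for (i = 0; i < nr_msrs; i++) {
		if (msrs[i].host == msrs[i].guest)
			/* same value on both sides, nothing to switch */
			clear_atomic_switch_msr(vmx, msrs[i].msr);
		else
			/* guest value loaded at VM-entry, host value at VM-exit */
			add_atomic_switch_msr(vmx, msrs[i].msr,
					      msrs[i].guest, msrs[i].host,
					      false);
	}
}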

> ISTR there was lots of fail trying to virtualize it earlier. What's
> changed? There's 0 clues here.

Ah, since Ice Lake we have EPT-friendly PEBS facilities, which make the
guest PEBS feature possible without pinning guest memory.

>
> Why are the host and guest DS area separate, why can't we map them to
> the exact same physical pages?

If we mapped both the guest and host DS_AREA to the exact same physical pages:
- the guest could access the host PEBS records, which means host IPs may
be leaked, because we cannot predict when the guest drains its records and
it would be over-engineered to clean them up before each vm-entry;
- different tasks/vcpus on the same pcpu could not share the same PEBS DS
settings from the same physical page. For example, some require large
PEBS and reset values, while others do not.

As with many guest MSRs, we use a separate guest DS_AREA for the guest's
own use, which avoids mutual interference as much as possible.

Thanks,
Like Xu
