From: Michael S. Tsirkin <mst@redhat.com>
Date: 2015-08-31
Subject: Re: [PATCH RFC 1/3] vmx: allow ioeventfd for EPT violations

On Mon, Aug 31, 2015 at 10:53:58AM +0800, Xiao Guangrong wrote:
>
>
> On 08/30/2015 05:12 PM, Michael S. Tsirkin wrote:
> >Even when we skip data decoding, MMIO is slightly slower
> >than port IO because it uses the page-tables, so the CPU
> >must do a pagewalk on each access.
> >
> >This overhead is normally masked by using the TLB cache:
> >but not so for KVM MMIO, where PTEs are marked as reserved
> >and so are never cached.
> >
> >As ioeventfd memory is never read, make it possible to use
> >RO pages on the host for ioeventfds, instead.
>
> I like this idea.
>
> >The result is that TLBs are cached, which finally makes MMIO
> >as fast as port IO.
>
> What does "TLBs are cached" mean? Even after applying the patch
> no new TLB type can be cached.

The Intel manual says:
    No guest-physical mappings or combined mappings are created with
    information derived from EPT paging-structure entries that are not
    present (bits 2:0 are all 0) or that are misconfigured (see Section
    28.2.3.1).

    No combined mappings are created with information derived from guest
    paging-structure entries that are not present or that set reserved
    bits.

Thus mappings that take an EPT violation (here, because the page is
present but read-only) are still created and cached as usual; that is
what makes EPT violations preferable to EPT misconfigurations.
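
For illustration, here is a rough sketch of the userspace side this is
meant to enable (my understanding, not part of this patch): back the
doorbell page with a read-only memslot so that guest writes take the
EPT-violation exit, then attach a zero-length ioeventfd so the kernel
completes the write without any decoding. It assumes
KVM_CAP_READONLY_MEM and KVM_CAP_IOEVENTFD_NO_LENGTH are available;
the slot number, size, and function name below are made up:

#include <stddef.h>
#include <sys/eventfd.h>
#include <sys/ioctl.h>
#include <sys/mman.h>
#include <linux/kvm.h>

/* Sketch only: error handling omitted; values are illustrative. */
static int setup_fast_mmio_doorbell(int vm_fd, __u64 gpa)
{
	/* Host backing for the guest page; guest writes never land
	 * here, they turn into EPT violations instead. */
	void *backing = mmap(NULL, 4096, PROT_READ | PROT_WRITE,
			     MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);

	struct kvm_userspace_memory_region region = {
		.slot = 1,			/* made-up slot number */
		.flags = KVM_MEM_READONLY,	/* guest writes exit to KVM */
		.guest_phys_addr = gpa,
		.memory_size = 4096,
		.userspace_addr = (__u64)(unsigned long)backing,
	};
	ioctl(vm_fd, KVM_SET_USER_MEMORY_REGION, &region);

	int efd = eventfd(0, 0);
	struct kvm_ioeventfd ioevent = {
		.addr = gpa,
		.len = 0,	/* zero length: no datamatch, fast MMIO bus */
		.fd = efd,
	};
	ioctl(vm_fd, KVM_IOEVENTFD, &ioevent);

	return efd;	/* read()/poll() this fd to see guest kicks */
}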


> >
> >Signed-off-by: Michael S. Tsirkin <mst@redhat.com>
> >---
> > arch/x86/kvm/vmx.c | 5 +++++
> > 1 file changed, 5 insertions(+)
> >
> >diff --git a/arch/x86/kvm/vmx.c b/arch/x86/kvm/vmx.c
> >index 9d1bfd3..ed44026 100644
> >--- a/arch/x86/kvm/vmx.c
> >+++ b/arch/x86/kvm/vmx.c
> >@@ -5745,6 +5745,11 @@ static int handle_ept_violation(struct kvm_vcpu *vcpu)
> > 		vmcs_set_bits(GUEST_INTERRUPTIBILITY_INFO, GUEST_INTR_STATE_NMI);
> >
> > 	gpa = vmcs_read64(GUEST_PHYSICAL_ADDRESS);
> >+	if (!kvm_io_bus_write(vcpu, KVM_FAST_MMIO_BUS, gpa, 0, NULL)) {
> >+		skip_emulated_instruction(vcpu);
> >+		return 1;
> >+	}
> >+
>
> I am afraid that the common page fault entry point is not a good place to do the
> work.

Why isn't it?

> Could we move it to kvm_handle_bad_page() instead? The difference is
> that the workload of fast_page_fault() would then be included, but that
> is light enough, and MMIO exits should not be very frequent, so I think
> it's okay.

That was supposed to be a slow path; I doubt it will work well without
major code restructuring.
IIUC, by design, everything that does not go through fast_page_fault()
is supposed to be a slow path that is taken only rarely.

But in this case the page stays read-only, so we need a new fast path
through the code.
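
To spell out the intended flow, here is the hunk above restated with
comments (simplified, not the final code):

	/* handle_ept_violation(), simplified: */
	gpa = vmcs_read64(GUEST_PHYSICAL_ADDRESS);

	/*
	 * Fast path: kvm_io_bus_write() returns 0 if a device on
	 * KVM_FAST_MMIO_BUS claims this GPA, i.e. an ioeventfd was
	 * signalled.  No page-table walk, no instruction decode;
	 * just skip the write and resume the guest.
	 */
	if (!kvm_io_bus_write(vcpu, KVM_FAST_MMIO_BUS, gpa, 0, NULL)) {
		skip_emulated_instruction(vcpu);
		return 1;
	}

	/*
	 * Slow path: regular MMU fault handling, i.e. fast_page_fault()
	 * and, further down, paths like kvm_handle_bad_page().
	 */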

--
MST

