Date: Mon, 27 Mar 2023 16:33:06 +0800
Subject: Re: [PATCH v4 6/6] KVM: VMX: Make CR0.WP a guest owned bit
From: Xiaoyao Li <>
On 3/22/2023 9:37 AM, Mathias Krause wrote:
> Guests like grsecurity that make heavy use of CR0.WP to implement kernel
> level W^X will suffer from the implied VMEXITs.
>
> With EPT there is no need to intercept a guest change of CR0.WP, so
> simply make it a guest owned bit if we can do so.
I'm interested in the performance gain. Do you have data for this patch like the numbers you provided for patch 2?
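For context, the W^X pattern the commit message refers to toggles CR0.WP
around every write to otherwise read-only kernel data, roughly like the
sketch below. All names are made up for illustration, this is not
grsecurity's actual code; the point is that with CR0.WP intercepted,
each of the two CR0 writes costs a VMEXIT:

/*
 * Illustrative sketch only: a kernel-level W^X implementation might
 * briefly clear CR0.WP around a write to read-only data.  A real
 * implementation would also disable preemption/IRQs across the window.
 */
#define X86_CR0_WP	(1UL << 16)

static inline unsigned long cr0_read(void)
{
	unsigned long cr0;

	asm volatile("mov %%cr0, %0" : "=r" (cr0));
	return cr0;
}

static inline void cr0_write(unsigned long cr0)
{
	asm volatile("mov %0, %%cr0" : : "r" (cr0) : "memory");
}

static void patch_ro_word(unsigned long *ro_word, unsigned long val)
{
	unsigned long cr0 = cr0_read();

	cr0_write(cr0 & ~X86_CR0_WP);	/* writes to RO pages now allowed */
	*ro_word = val;
	cr0_write(cr0);			/* restore write protection */
}

With CR0.WP guest owned, both CR0 writes stay inside the guest instead
of exiting to the host.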
> This implies that a read of a guest's CR0.WP bit might need a VMREAD.
> However, the only potentially affected user seems to be kvm_init_mmu()
> which is a heavy operation to begin with. But also most callers already
> cache the full value of CR0 anyway, so no additional VMREAD is needed.
> The only exception is nested_vmx_load_cr3().
>
> This change is VMX-specific, as SVM has no such fine grained control
> register intercept control.
>
> Suggested-and-co-developed-by: Sean Christopherson <seanjc@google.com>
> Signed-off-by: Mathias Krause <minipli@grsecurity.net>
> ---
>  arch/x86/kvm/kvm_cache_regs.h |  2 +-
>  arch/x86/kvm/vmx/nested.c     |  4 ++--
>  arch/x86/kvm/vmx/vmx.c        |  2 +-
>  arch/x86/kvm/vmx/vmx.h        | 18 ++++++++++++++++++
>  4 files changed, 22 insertions(+), 4 deletions(-)
>
> diff --git a/arch/x86/kvm/kvm_cache_regs.h b/arch/x86/kvm/kvm_cache_regs.h
> index 4c91f626c058..e50d353b5c1c 100644
> --- a/arch/x86/kvm/kvm_cache_regs.h
> +++ b/arch/x86/kvm/kvm_cache_regs.h
> @@ -4,7 +4,7 @@
>
>  #include <linux/kvm_host.h>
>
> -#define KVM_POSSIBLE_CR0_GUEST_BITS X86_CR0_TS
> +#define KVM_POSSIBLE_CR0_GUEST_BITS (X86_CR0_TS | X86_CR0_WP)
>  #define KVM_POSSIBLE_CR4_GUEST_BITS \
>  	(X86_CR4_PVI | X86_CR4_DE | X86_CR4_PCE | X86_CR4_OSFXSR \
>  	 | X86_CR4_OSXMMEXCPT | X86_CR4_PGE | X86_CR4_TSD | X86_CR4_FSGSBASE)
> diff --git a/arch/x86/kvm/vmx/nested.c b/arch/x86/kvm/vmx/nested.c
> index f63b28f46a71..61d940fc91ba 100644
> --- a/arch/x86/kvm/vmx/nested.c
> +++ b/arch/x86/kvm/vmx/nested.c
> @@ -4481,7 +4481,7 @@ static void load_vmcs12_host_state(struct kvm_vcpu *vcpu,
>  	 * CR0_GUEST_HOST_MASK is already set in the original vmcs01
>  	 * (KVM doesn't change it);
>  	 */
> -	vcpu->arch.cr0_guest_owned_bits = KVM_POSSIBLE_CR0_GUEST_BITS;
> +	vcpu->arch.cr0_guest_owned_bits = vmx_l1_guest_owned_cr0_bits();
>  	vmx_set_cr0(vcpu, vmcs12->host_cr0);
>
>  	/* Same as above - no reason to call set_cr4_guest_host_mask(). */
> @@ -4632,7 +4632,7 @@ static void nested_vmx_restore_host_state(struct kvm_vcpu *vcpu)
>  	 */
>  	vmx_set_efer(vcpu, nested_vmx_get_vmcs01_guest_efer(vmx));
>
> -	vcpu->arch.cr0_guest_owned_bits = KVM_POSSIBLE_CR0_GUEST_BITS;
> +	vcpu->arch.cr0_guest_owned_bits = vmx_l1_guest_owned_cr0_bits();
>  	vmx_set_cr0(vcpu, vmcs_readl(CR0_READ_SHADOW));
>
>  	vcpu->arch.cr4_guest_owned_bits = ~vmcs_readl(CR4_GUEST_HOST_MASK);
> diff --git a/arch/x86/kvm/vmx/vmx.c b/arch/x86/kvm/vmx/vmx.c
> index 8fc1a0c7856f..e501f6864a72 100644
> --- a/arch/x86/kvm/vmx/vmx.c
> +++ b/arch/x86/kvm/vmx/vmx.c
> @@ -4790,7 +4790,7 @@ static void init_vmcs(struct vcpu_vmx *vmx)
>  	/* 22.2.1, 20.8.1 */
>  	vm_entry_controls_set(vmx, vmx_vmentry_ctrl());
>
> -	vmx->vcpu.arch.cr0_guest_owned_bits = KVM_POSSIBLE_CR0_GUEST_BITS;
> +	vmx->vcpu.arch.cr0_guest_owned_bits = vmx_l1_guest_owned_cr0_bits();
>  	vmcs_writel(CR0_GUEST_HOST_MASK, ~vmx->vcpu.arch.cr0_guest_owned_bits);
>
>  	set_cr4_guest_host_mask(vmx);
> diff --git a/arch/x86/kvm/vmx/vmx.h b/arch/x86/kvm/vmx/vmx.h
> index 2acdc54bc34b..423e9d3c9c40 100644
> --- a/arch/x86/kvm/vmx/vmx.h
> +++ b/arch/x86/kvm/vmx/vmx.h
> @@ -640,6 +640,24 @@ BUILD_CONTROLS_SHADOW(tertiary_exec, TERTIARY_VM_EXEC_CONTROL, 64)
>  				(1 << VCPU_EXREG_EXIT_INFO_1) | \
>  				(1 << VCPU_EXREG_EXIT_INFO_2))
>
> +static inline unsigned long vmx_l1_guest_owned_cr0_bits(void)
> +{
> +	unsigned long bits = KVM_POSSIBLE_CR0_GUEST_BITS;
> +
> +	/*
> +	 * CR0.WP needs to be intercepted when KVM is shadowing legacy paging
> +	 * in order to construct shadow PTEs with the correct protections.
> +	 * Note! CR0.WP technically can be passed through to the guest if
> +	 * paging is disabled, but checking CR0.PG would generate a cyclical
> +	 * dependency of sorts due to forcing the caller to ensure CR0 holds
> +	 * the correct value prior to determining which CR0 bits can be owned
> +	 * by L1. Keep it simple and limit the optimization to EPT.
> +	 */
> +	if (!enable_ept)
> +		bits &= ~X86_CR0_WP;
> +	return bits;
> +}
> +
>  static __always_inline struct kvm_vmx *to_kvm_vmx(struct kvm *kvm)
>  {
>  	return container_of(kvm, struct kvm_vmx, kvm);
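The VMREAD cost mentioned in the commit message comes from the
guest-owned bit read path. Below is a simplified paraphrase of
kvm_read_cr0_bits() from arch/x86/kvm/kvm_cache_regs.h (not a verbatim
copy; details may differ between kernel versions):

/*
 * Paraphrased from arch/x86/kvm/kvm_cache_regs.h: if one of the
 * requested bits is owned by the guest and the cached CR0 value is
 * stale, the register has to be re-read from hardware, which on VMX
 * means a VMREAD of GUEST_CR0.
 */
static inline ulong kvm_read_cr0_bits(struct kvm_vcpu *vcpu, ulong mask)
{
	ulong tmask = mask & KVM_POSSIBLE_CR0_GUEST_BITS;

	if ((tmask & vcpu->arch.cr0_guest_owned_bits) &&
	    !kvm_register_is_available(vcpu, VCPU_EXREG_CR0))
		static_call(kvm_x86_cache_reg)(vcpu, VCPU_EXREG_CR0);

	return vcpu->arch.cr0 & mask;
}

That is, only readers that ask for a guest-owned bit while the CR0
cache is stale pay for a VMREAD; callers working on a cached full CR0
value are unaffected, matching the kvm_init_mmu() /
nested_vmx_load_cr3() analysis in the commit message above.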