Subject: [PATCH 5.10 256/593] KVM: nVMX: Don't clobber nested MMU's A/D status on EPTP switch
    From: Sean Christopherson <seanjc@google.com>

    [ Upstream commit 272b0a998d084e7667284bdd2d0c675c6a2d11de ]

    Drop bogus logic that incorrectly clobbers the accessed/dirty enabling
    status of the nested MMU on an EPTP switch. When nested EPT is enabled,
    walk_mmu points at L2's _legacy_ page tables, not L1's EPT for L2.
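
For readers less familiar with KVM's MMU bookkeeping, here is a toy, stand-alone model of the pointer setup described above. The struct and field names loosely mirror struct kvm_vcpu_arch in arch/x86/include/asm/kvm_host.h, but this is a heavily simplified illustration, not the kernel's actual types or init path:

#include <stdio.h>

/* Toy model only: fields echo struct kvm_vcpu_arch, bodies are fake. */
struct kvm_mmu { const char *role; };

struct vcpu_arch {
	struct kvm_mmu root_mmu;   /* MMU used when not running nested   */
	struct kvm_mmu guest_mmu;  /* shadow EPT MMU that maps L2 memory */
	struct kvm_mmu nested_mmu; /* walks L2's legacy page tables      */
	struct kvm_mmu *mmu;       /* MMU used to build mappings         */
	struct kvm_mmu *walk_mmu;  /* MMU used to walk guest page tables */
};

int main(void)
{
	struct vcpu_arch arch = {
		.guest_mmu  = { "shadow EPT built from L1's EPTP" },
		.nested_mmu = { "walker for L2's legacy page tables" },
	};

	/* Roughly what happens when nested EPT is enabled for L2. */
	arch.mmu      = &arch.guest_mmu;
	arch.walk_mmu = &arch.nested_mmu;

	/* The removed code poked walk_mmu, i.e. the L2 walker, not the EPT MMU. */
	printf("walk_mmu role: %s\n", arch.walk_mmu->role);
	return 0;
}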

    This is likely a benign bug, as mmu->ept_ad is never consumed (since the
    MMU is not a nested EPT MMU), and stuffing mmu_role.base.ad_disabled will
    never propagate into future shadow pages since the nested MMU isn't used
    to map anything, just to walk L2's page tables.

    Note, KVM also does a full MMU reload, i.e. the guest_mmu will be
    recreated using the new EPTP, and thus any change in A/D enabling will be
    properly recognized in the relevant MMU.
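
For reference, the A/D-enable control is bit 6 of the EPTP value fetched from the vmfunc EPTP list, which is exactly what the removed accessed_dirty assignment below tests. A minimal stand-alone sketch of that decode follows; the constants mirror the VMX_EPTP_* definitions in arch/x86/include/asm/vmx.h, while eptp_ad_enabled() and main() are made up purely for illustration:

#include <stdint.h>
#include <stdio.h>
#include <stdbool.h>

/* Mirrors the VMX_EPTP_* constants in arch/x86/include/asm/vmx.h. */
#define VMX_EPTP_MT_WB		0x6ull		/* memory type: write-back   */
#define VMX_EPTP_PWL_4		0x18ull		/* 4-level EPT page walk     */
#define VMX_EPTP_AD_ENABLE_BIT	(1ull << 6)	/* enable accessed/dirty bits */

/* Illustrative helper: does this EPTP enable A/D tracking? */
static bool eptp_ad_enabled(uint64_t eptp)
{
	return !!(eptp & VMX_EPTP_AD_ENABLE_BIT);
}

int main(void)
{
	/* Example EPTP: root at 0x1000, 4-level walk, WB, A/D enabled. */
	uint64_t eptp = 0x1000 | VMX_EPTP_PWL_4 | VMX_EPTP_MT_WB |
			VMX_EPTP_AD_ENABLE_BIT;

	printf("A/D enabled: %d\n", eptp_ad_enabled(eptp));
	return 0;
}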

    Fixes: 41ab93727467 ("KVM: nVMX: Emulate EPTP switching for the L1 hypervisor")
    Signed-off-by: Sean Christopherson <seanjc@google.com>
    Message-Id: <20210609234235.1244004-4-seanjc@google.com>
    Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
    Signed-off-by: Sasha Levin <sashal@kernel.org>
    ---
    arch/x86/kvm/vmx/nested.c | 7 -------
    1 file changed, 7 deletions(-)

    diff --git a/arch/x86/kvm/vmx/nested.c b/arch/x86/kvm/vmx/nested.c
    index 8f1319b7d3bd..67554bc7adb2 100644
    --- a/arch/x86/kvm/vmx/nested.c
    +++ b/arch/x86/kvm/vmx/nested.c
@@ -5484,8 +5484,6 @@ static int nested_vmx_eptp_switching(struct kvm_vcpu *vcpu,
 {
 	u32 index = kvm_rcx_read(vcpu);
 	u64 new_eptp;
-	bool accessed_dirty;
-	struct kvm_mmu *mmu = vcpu->arch.walk_mmu;
 
 	if (!nested_cpu_has_eptp_switching(vmcs12) ||
 	    !nested_cpu_has_ept(vmcs12))
@@ -5494,13 +5492,10 @@ static int nested_vmx_eptp_switching(struct kvm_vcpu *vcpu,
 	if (index >= VMFUNC_EPTP_ENTRIES)
 		return 1;
 
-
 	if (kvm_vcpu_read_guest_page(vcpu, vmcs12->eptp_list_address >> PAGE_SHIFT,
 				     &new_eptp, index * 8, 8))
 		return 1;
 
-	accessed_dirty = !!(new_eptp & VMX_EPTP_AD_ENABLE_BIT);
-
 	/*
 	 * If the (L2) guest does a vmfunc to the currently
 	 * active ept pointer, we don't have to do anything else
@@ -5509,8 +5504,6 @@ static int nested_vmx_eptp_switching(struct kvm_vcpu *vcpu,
 	if (!nested_vmx_check_eptp(vcpu, new_eptp))
 		return 1;
 
-	mmu->ept_ad = accessed_dirty;
-	mmu->mmu_role.base.ad_disabled = !accessed_dirty;
 	vmcs12->ept_pointer = new_eptp;
 
 	kvm_make_request(KVM_REQ_MMU_RELOAD, vcpu);
    --
    2.30.2

