    Subject: [RFC PATCH v5 036/104] KVM: x86/mmu: Explicitly check for MMIO spte in fast page fault
    Date: 2022-03-04
    From: Sean Christopherson <sean.j.christopherson@intel.com>

    Explicitly check for an MMIO spte in the fast page fault flow. TDX will
    use a not-present entry for MMIO sptes, which can be mistaken for an
    access-tracked spte since both have SPTE_SPECIAL_MASK set.
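
    For illustration, a rough sketch of why the two spte flavors can be
    confused; the bit layout below is a simplified placeholder, not the
    actual definitions in arch/x86/kvm/mmu/spte.h (which also differ across
    kernel versions):

    /*
     * Placeholder layout, for illustration only: an access-tracked spte and
     * a (TDX-style, not-present) MMIO spte both carry bits under
     * SPTE_SPECIAL_MASK, so the special bits alone cannot tell them apart
     * and is_mmio_spte() has to be checked explicitly.
     */
    #define SPTE_SPECIAL_MASK	(3ULL << 52)	/* placeholder bit positions */
    #define SPTE_AD_DISABLED_MASK	(1ULL << 52)	/* access-tracked spte */
    #define SPTE_MMIO_MASK		(3ULL << 52)	/* MMIO spte */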

    The fast page fault path handles cases where access bits can be changed
    without acquiring mmu_lock, e.g. clearing the write-protect bit for dirty
    page tracking. MMIO emulation is handled on a slow path, so this change
    does not affect the default VM case.
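
    For context, a simplified sketch of the lockless fast-path idea (this is
    not the kernel's fast_page_fault()/fast_pf_fix_direct_spte() themselves;
    the helper name below is hypothetical): read the spte, compute a new
    value that only flips access/permission bits, and publish it with a
    cmpxchg so a racing update is detected rather than clobbered.

    /*
     * Hypothetical sketch, not kernel code: the fast path never takes
     * mmu_lock, so the spte update must be a compare-and-exchange that
     * fails (and defers to the slow path) if the spte changed underneath us.
     */
    static bool sketch_fast_fix(u64 *sptep, u64 old_spte, u64 new_spte)
    {
    	/* e.g. new_spte = old_spte | PT_WRITABLE_MASK for dirty page tracking */
    	return cmpxchg64(sptep, old_spte, new_spte) == old_spte;
    }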

    Signed-off-by: Sean Christopherson <sean.j.christopherson@intel.com>
    Signed-off-by: Isaku Yamahata <isaku.yamahata@intel.com>
    ---
    arch/x86/kvm/mmu/mmu.c | 2 +-
    1 file changed, 1 insertion(+), 1 deletion(-)

    diff --git a/arch/x86/kvm/mmu/mmu.c b/arch/x86/kvm/mmu/mmu.c
    index b68191aa39bf..9907cb759fd1 100644
    --- a/arch/x86/kvm/mmu/mmu.c
    +++ b/arch/x86/kvm/mmu/mmu.c
    @@ -3167,7 +3167,7 @@ static int fast_page_fault(struct kvm_vcpu *vcpu, struct kvm_page_fault *fault)
     			break;
     
     		sp = sptep_to_sp(sptep);
    -		if (!is_last_spte(spte, sp->role.level))
    +		if (!is_last_spte(spte, sp->role.level) || is_mmio_spte(spte))
     			break;
     
     		/*
    --
    2.25.1