Subject: [PATCH 11/54] KVM: x86/mmu: WARN and zap SP when sync'ing if MMU role mismatches
From: Sean Christopherson <seanjc@google.com>
When synchronizing a shadow page, WARN and zap the page if its mmu role
isn't compatible with the current MMU context, where "compatible" is an
exact match sans the bits that have no meaning in the overall MMU context
or will be explicitly overwritten during the sync.  Many of the helpers
used by sync_page() are specific to the current context: updating an SMM
vs. non-SMM shadow page would use the wrong memslots, updating L1 vs. L2
PTEs might work but would be extremely bizarre, and so on and so forth.
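
To illustrate the idea, the compatibility check reduces to XOR'ing the
packed role words and masking off the fields that are allowed to differ.
Below is a minimal userspace sketch; union page_role is a simplified
stand-in with made-up field widths, not the kernel's actual
union kvm_mmu_page_role layout:

#include <stdint.h>
#include <stdio.h>

/* Illustrative stand-in for union kvm_mmu_page_role; not the real layout. */
union page_role {
	uint32_t word;
	struct {
		uint32_t level:4;		/* ignored: MMU tracks the root level */
		uint32_t access:3;		/* ignored: rewritten from the guest PTE */
		uint32_t quadrant:2;		/* ignored: no meaning in the MMU role */
		uint32_t gpte_is_8_bytes:1;	/* must match */
		uint32_t smm:1;			/* must match, selects memslots */
	};
};

static int roles_compatible(union page_role sp_role, union page_role mmu_role)
{
	/* Fields that may legitimately differ between the SP and the MMU. */
	const union page_role ign = { .level = 0xf, .access = 0x7, .quadrant = 0x3 };

	/* XOR yields the differing bits; mask off the ones that don't matter. */
	return !((sp_role.word ^ mmu_role.word) & ~ign.word);
}

int main(void)
{
	union page_role sp  = { .level = 4, .access = 7, .gpte_is_8_bytes = 1 };
	union page_role mmu = { .level = 1, .access = 0, .gpte_is_8_bytes = 1 };

	/* Only ignored fields differ => compatible (prints 1). */
	printf("compatible: %d\n", roles_compatible(sp, mmu));

	/* An SMM vs. non-SMM mismatch must not be ignored (prints 0). */
	mmu.smm = 1;
	printf("compatible: %d\n", roles_compatible(sp, mmu));
	return 0;
}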

Drop the guard with respect to 8-byte vs. 4-byte PTEs in
__kvm_sync_page(); it was rendered useless when kvm_mmu_get_page()
stopped trying to sync shadow pages irrespective of the current MMU
context.
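
To see why the explicit gpte_is_8_bytes comparison is subsumed rather
than lost: gpte_is_8_bytes lives in the packed role word and is not part
of the ignore mask, so the generic masked-XOR check flags the same
mismatch.  Continuing the simplified sketch above (same illustrative
types, not the kernel's):

#include <assert.h>

/* Reuses union page_role and roles_compatible() from the sketch above. */
static void old_guard_is_subsumed(void)
{
	union page_role sp  = { .gpte_is_8_bytes = 1 };
	union page_role mmu = { .gpte_is_8_bytes = 0 };

	/* The old explicit guard in __kvm_sync_page()... */
	int old_mismatch = sp.gpte_is_8_bytes != mmu.gpte_is_8_bytes;

	/*
	 * ...and the new generic check: gpte_is_8_bytes is not in the
	 * ignore mask, so the masked XOR catches the same mismatch.
	 */
	int new_mismatch = !roles_compatible(sp, mmu);

	assert(old_mismatch && new_mismatch);
}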

Signed-off-by: Sean Christopherson <seanjc@google.com>
---
 arch/x86/kvm/mmu/mmu.c         |  5 +----
 arch/x86/kvm/mmu/paging_tmpl.h | 27 +++++++++++++++++++++++++--
 2 files changed, 26 insertions(+), 6 deletions(-)

diff --git a/arch/x86/kvm/mmu/mmu.c b/arch/x86/kvm/mmu/mmu.c
index 9f277c5bab76..2e2d66319325 100644
--- a/arch/x86/kvm/mmu/mmu.c
+++ b/arch/x86/kvm/mmu/mmu.c
@@ -1784,10 +1784,7 @@ static void kvm_mmu_commit_zap_page(struct kvm *kvm,
 static bool __kvm_sync_page(struct kvm_vcpu *vcpu, struct kvm_mmu_page *sp,
			     struct list_head *invalid_list)
 {
-	union kvm_mmu_page_role mmu_role = vcpu->arch.mmu->mmu_role.base;
-
-	if (sp->role.gpte_is_8_bytes != mmu_role.gpte_is_8_bytes ||
-	    vcpu->arch.mmu->sync_page(vcpu, sp) == 0) {
+	if (vcpu->arch.mmu->sync_page(vcpu, sp) == 0) {
 		kvm_mmu_prepare_zap_page(vcpu->kvm, sp, invalid_list);
 		return false;
 	}
diff --git a/arch/x86/kvm/mmu/paging_tmpl.h b/arch/x86/kvm/mmu/paging_tmpl.h
index 52fffd68b522..b632606a87d6 100644
--- a/arch/x86/kvm/mmu/paging_tmpl.h
+++ b/arch/x86/kvm/mmu/paging_tmpl.h
@@ -1030,13 +1030,36 @@ static gpa_t FNAME(gva_to_gpa_nested)(struct kvm_vcpu *vcpu, gpa_t vaddr,
  */
 static int FNAME(sync_page)(struct kvm_vcpu *vcpu, struct kvm_mmu_page *sp)
 {
+	union kvm_mmu_page_role mmu_role = vcpu->arch.mmu->mmu_role.base;
 	int i, nr_present = 0;
 	bool host_writable;
 	gpa_t first_pte_gpa;
 	int set_spte_ret = 0;
 
-	/* direct kvm_mmu_page can not be unsync. */
-	BUG_ON(sp->role.direct);
+	/*
+	 * Ignore various flags when verifying that it's safe to sync a shadow
+	 * page using the current MMU context.
+	 *
+	 *  - level: not part of the overall MMU role and will never match as the MMU's
+	 *           level tracks the root level
+	 *  - access: updated based on the new guest PTE
+	 *  - quadrant: not part of the overall MMU role (similar to level)
+	 */
+	const union kvm_mmu_page_role sync_role_ign = {
+		.level = 0xf,
+		.access = 0x7,
+		.quadrant = 0x3,
+	};
+
+	/*
+	 * Direct pages can never be unsync, and KVM should never attempt to
+	 * sync a shadow page for a different MMU context, e.g. if the role
+	 * differs then the memslot lookup (SMM vs. non-SMM) will be bogus, the
+	 * reserved bits checks will be wrong, etc...
+	 */
+	if (WARN_ON_ONCE(sp->role.direct ||
+			 (sp->role.word ^ mmu_role.word) & ~sync_role_ign.word))
+		return 0;
 
 	first_pte_gpa = FNAME(get_level1_sp_gpa)(sp);
 
-- 
2.32.0.288.g62a8d224e6-goog