    Subject: [PATCH 4.14 018/191] KVM: x86/mmu: Commit zap of remaining invalid pages when recovering lpages
    From: Sean Christopherson <sean.j.christopherson@intel.com>

    commit e89505698c9f70125651060547da4ff5046124fc upstream.

    Call kvm_mmu_commit_zap_page() after exiting the "prepare zap" loop in
    kvm_recover_nx_lpages() to finish zapping pages in the unlikely event
    that the loop exited because lpage_disallowed_mmu_pages was empty.
    Because the recovery thread drops mmu_lock when rescheduling, a
    different thread can empty lpage_disallowed_mmu_pages before to_zap
    reaches zero, even though to_zap is derived from the number of
    disallowed lpages; pages prepared for zapping on that final pass would
    otherwise be left uncommitted on the local invalid_list.
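
    For context, the loop in question looks roughly like the sketch below
    (condensed from kvm_recover_nx_lpages(); this is an abbreviated
    illustration, not the verbatim 4.14 source):

    	static void kvm_recover_nx_lpages(struct kvm *kvm)
    	{
    		struct kvm_mmu_page *sp;
    		LIST_HEAD(invalid_list);
    		ulong to_zap = ...;	/* derived from the disallowed-lpage count */

    		spin_lock(&kvm->mmu_lock);
    		while (to_zap && !list_empty(&kvm->arch.lpage_disallowed_mmu_pages)) {
    			sp = list_first_entry(&kvm->arch.lpage_disallowed_mmu_pages,
    					      struct kvm_mmu_page,
    					      lpage_disallowed_link);
    			/* Unlinks sp from the list and queues it on invalid_list. */
    			kvm_mmu_prepare_zap_page(kvm, sp, &invalid_list);

    			if (!--to_zap || need_resched() || spin_needbreak(&kvm->mmu_lock)) {
    				kvm_mmu_commit_zap_page(kvm, &invalid_list);
    				if (to_zap)
    					/*
    					 * mmu_lock is dropped here; another thread
    					 * can empty the list before to_zap hits zero,
    					 * so a later pass may exit the while loop with
    					 * prepared-but-uncommitted pages queued up.
    					 */
    					cond_resched_lock(&kvm->mmu_lock);
    			}
    		}
    		/* Added by this patch: commit zaps prepared on the final pass. */
    		kvm_mmu_commit_zap_page(kvm, &invalid_list);
    		spin_unlock(&kvm->mmu_lock);
    	}
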

    Fixes: 1aa9b9572b105 ("kvm: x86: mmu: Recovery of shattered NX large pages")
    Cc: Junaid Shahid <junaids@google.com>
    Cc: stable@vger.kernel.org
    Signed-off-by: Sean Christopherson <sean.j.christopherson@intel.com>
    Message-Id: <20200923183735.584-2-sean.j.christopherson@intel.com>
    Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
    Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>

    ---
    arch/x86/kvm/mmu.c | 1 +
    1 file changed, 1 insertion(+)

    --- a/arch/x86/kvm/mmu.c
    +++ b/arch/x86/kvm/mmu.c
    @@ -5846,6 +5846,7 @@ static void kvm_recover_nx_lpages(struct
     				cond_resched_lock(&kvm->mmu_lock);
     		}
     	}
    +	kvm_mmu_commit_zap_page(kvm, &invalid_list);
     
     	spin_unlock(&kvm->mmu_lock);
     	srcu_read_unlock(&kvm->srcu, rcu_idx);
