Subject: [PATCH v2 08/10] KVM: Take mmu_lock when handling MMU notifier iff the hva hits a memslot
From: Sean Christopherson <seanjc@google.com>

    Defer acquiring mmu_lock in the MMU notifier paths until a "hit" has been
    detected in the memslots, i.e. don't take the lock for notifications that
    don't affect the guest.
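
In pseudo-form, the idea is simply to take mmu_lock lazily, on the first
memslot hit, and to unlock only if it was taken. A distilled sketch of the
pattern (illustrative only; for_each_memslot_in_range() is a made-up
stand-in for the memslot walk in the diff below):

	bool locked = false;

	for_each_memslot_in_range(slot, range->start, range->end) {
		/* First hit: take the lock and hold it for later hits. */
		if (!locked) {
			locked = true;
			KVM_MMU_LOCK(kvm);
		}
		ret |= range->handler(kvm, &gfn_range);
	}

	/* No overlapping memslot => mmu_lock was never taken. */
	if (locked)
		KVM_MMU_UNLOCK(kvm);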

    For small VMs, spurious locking is a minor annoyance. And for "volatile"
    setups where the majority of notifications _are_ relevant, this barely
    qualifies as an optimization.

    But, for large VMs (hundreds of threads) with static setups, e.g. no
    page migration, no swapping, etc., the vast majority of MMU notifier
    callbacks will be unrelated to the guest, e.g. will often be in response
    to the userspace VMM adjusting its own virtual address space. In such
    large VMs, acquiring mmu_lock can be painful as it blocks vCPUs from
    handling page faults. In some scenarios it can even be "fatal" in the
    sense that it causes unacceptable brownouts, e.g. when rebuilding huge
    pages after live migration, a significant percentage of vCPUs will be
    attempting to handle page faults.

    x86's TDP MMU implementation is especially susceptible to spurious
    locking because it takes mmu_lock for read when handling page faults.
    Because rwlock is fair, a single writer will stall future readers, while
    the writer is itself stalled waiting for in-progress readers to complete.
    This is exacerbated by the MMU notifiers often firing multiple times in
    quick succession, e.g. moving a page will (always?) invoke three separate
    notifiers: .invalidate_range_start(), .invalidate_range_end(), and
    .change_pte(). Unnecessarily taking mmu_lock each time means even a
    single spurious sequence can be problematic.
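
To make the contention pattern concrete, the two sides look roughly like
this (a sketch assuming the TDP MMU's rwlock-based mmu_lock, not verbatim
kernel code):

	/* vCPU page-fault path: read-side, many concurrent holders. */
	read_lock(&kvm->mmu_lock);
	/* ... walk and populate the TDP page tables ... */
	read_unlock(&kvm->mmu_lock);

	/*
	 * MMU notifier path: write-side, exclusive.  Because the rwlock
	 * is fair, a writer that is merely waiting here already blocks
	 * new readers, i.e. stalls all vCPU faults, even if the
	 * notification doesn't overlap a memslot.
	 */
	write_lock(&kvm->mmu_lock);
	/* ... process the hva range ... */
	write_unlock(&kvm->mmu_lock);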

    Note, this optimizes only the unpaired callbacks. Optimizing the
    .invalidate_range_{start,end}() pairs is more complex and will be done in
    a future patch.

    Suggested-by: Ben Gardon <bgardon@google.com>
    Signed-off-by: Sean Christopherson <seanjc@google.com>
    ---
    virt/kvm/kvm_main.c | 15 +++++++++++----
    1 file changed, 11 insertions(+), 4 deletions(-)

    diff --git a/virt/kvm/kvm_main.c b/virt/kvm/kvm_main.c
    index 25ecb5235e17..f6697ad741ed 100644
    --- a/virt/kvm/kvm_main.c
    +++ b/virt/kvm/kvm_main.c
    @@ -482,10 +482,10 @@ static void kvm_null_fn(void)
     static __always_inline int __kvm_handle_hva_range(struct kvm *kvm,
     						  const struct kvm_hva_range *range)
     {
    +	bool ret = false, locked = false;
     	struct kvm_gfn_range gfn_range;
     	struct kvm_memory_slot *slot;
     	struct kvm_memslots *slots;
    -	bool ret = false;
     	int i, idx;
     
     	/* A null handler is allowed if and only if on_lock() is provided. */
    @@ -493,11 +493,13 @@ static __always_inline int __kvm_handle_hva_range(struct kvm *kvm,
     		    IS_KVM_NULL_FN(range->handler)))
     		return 0;
     
    -	KVM_MMU_LOCK(kvm);
    -
     	idx = srcu_read_lock(&kvm->srcu);
     
    +	/* The on_lock() path does not yet support lock elision. */
     	if (!IS_KVM_NULL_FN(range->on_lock)) {
    +		locked = true;
    +		KVM_MMU_LOCK(kvm);
    +
     		range->on_lock(kvm, range->start, range->end);
     
     		if (IS_KVM_NULL_FN(range->handler))
    @@ -532,6 +534,10 @@ static __always_inline int __kvm_handle_hva_range(struct kvm *kvm,
     			gfn_range.end = hva_to_gfn_memslot(hva_end + PAGE_SIZE - 1, slot);
     			gfn_range.slot = slot;
     
    +			if (!locked) {
    +				locked = true;
    +				KVM_MMU_LOCK(kvm);
    +			}
     			ret |= range->handler(kvm, &gfn_range);
     		}
     	}
    @@ -540,7 +546,8 @@ static __always_inline int __kvm_handle_hva_range(struct kvm *kvm,
     	if (range->flush_on_ret && ret)
     		kvm_flush_remote_tlbs(kvm);
     
     out_unlock:
    -	KVM_MMU_UNLOCK(kvm);
    +	if (locked)
    +		KVM_MMU_UNLOCK(kvm);
     
     	srcu_read_unlock(&kvm->srcu, idx);

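For reference, the unpaired callbacks that benefit immediately are the ones
funneled through the series' kvm_handle_hva_range() wrapper, e.g. (a rough
sketch of one such caller from elsewhere in the series, not part of this
patch; details may differ):

	/* .change_pte() handler: a single-page, unpaired notification. */
	static void kvm_mmu_notifier_change_pte(struct mmu_notifier *mn,
						struct mm_struct *mm,
						unsigned long address,
						pte_t pte)
	{
		struct kvm *kvm = mmu_notifier_to_kvm(mn);

		/* If no memslot covers @address, mmu_lock is never taken. */
		kvm_handle_hva_range(mn, address, address + 1, pte,
				     kvm_set_spte_gfn);
	}
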
    --
    2.31.0.208.g409f899ff0-goog