Subject: Re: [RFC 02/19] KVM: x86/mmu: Batch TLB flushes for a single zap
On Wed, Nov 10, 2021, Ben Gardon wrote:
> When recursively handling a removed TDP page table, the TDP MMU will
> flush the TLBs and queue an RCU callback to free the PT. If the original
> change zapped a non-leaf SPTE at PG_LEVEL_1G or above, that change will
> result in many unnecessary TLB flushes when one would suffice. Queue all
> the PTs which need to be freed on a list and wait to queue RCU callbacks
> to free them until after all the recursive callbacks are done.
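
For clarity, my reading of the approach described above is roughly the following (an
illustration only; the function names and list plumbing here are hypothetical, not the
actual patch code):

static void queue_disconnected_sp(struct kvm_mmu_page *sp,
				  struct list_head *disconnected_sps)
{
	/*
	 * Instead of flushing and queueing an RCU callback for each removed
	 * page table, stash the disconnected SP on a per-zap list.
	 */
	list_add_tail(&sp->link, disconnected_sps);
}

static void flush_and_free_disconnected_sps(struct kvm *kvm,
					    struct list_head *disconnected_sps)
{
	struct kvm_mmu_page *sp, *next;

	/* One remote TLB flush covers every SP disconnected by this zap. */
	kvm_flush_remote_tlbs(kvm);

	/* Only after the flush, queue the deferred RCU callbacks. */
	list_for_each_entry_safe(sp, next, disconnected_sps, link) {
		list_del(&sp->link);
		call_rcu(&sp->rcu_head, tdp_mmu_free_sp_rcu_callback);
	}
}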

I'm pretty sure we can do this without tracking disconnected SPs. The whole point
of protecting the TDP MMU with RCU is to wait until _all_ CPUs are guaranteed to have
dropped references. Emphasis on "all" because that also includes the CPU that's
doing the zapping/replacement!

And since the current CPU is required to hold RCU, we can use its RCU lock as a
proxy for all vCPUs executing in the guest. That will require either flushing in
zap_gfn_range() or requiring callers to do the flush, or more likely a mix of both so
that flows that zap multiple roots or both TDP and legacy MMU pages can batch flushes.
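
For the caller-side option, I'm thinking of roughly the pattern the multi-root zap
path already uses today, with the flush batched across the whole walk (a sketch
riffing on the existing __kvm_tdp_mmu_zap_gfn_range(), not a literal patch):

	bool flush = false;

	/*
	 * Accumulate "flush needed" across all roots and do a single remote
	 * TLB flush at the end, rather than one flush per zapped page table.
	 */
	for_each_tdp_mmu_root_yield_safe(kvm, root, as_id, shared)
		flush = zap_gfn_range(kvm, root, start, end, true, flush, shared);

	if (flush)
		kvm_flush_remote_tlbs(kvm);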

If this doesn't sound completely bonkers, I'd like to pick this up next week. I
wandered into KVM's handling of invalidated roots and have patches that would
conflict in weird ways with this idea.

So I think this can simply be (sans zap_gfn_range() changes):

diff --git a/arch/x86/kvm/mmu/tdp_mmu.c b/arch/x86/kvm/mmu/tdp_mmu.c
index 4e226cdb40d9..d2303bca4449 100644
--- a/arch/x86/kvm/mmu/tdp_mmu.c
+++ b/arch/x86/kvm/mmu/tdp_mmu.c
@@ -431,9 +431,6 @@ static void handle_removed_tdp_mmu_page(struct kvm *kvm, tdp_ptep_t pt,
 				    shared);
 	}
 
-	kvm_flush_remote_tlbs_with_address(kvm, gfn,
-					   KVM_PAGES_PER_HPAGE(level + 1));
-
 	call_rcu(&sp->rcu_head, tdp_mmu_free_sp_rcu_callback);
 }
 
@@ -716,11 +713,11 @@ static inline bool tdp_mmu_iter_cond_resched(struct kvm *kvm,
 		return false;
 
 	if (need_resched() || rwlock_needbreak(&kvm->mmu_lock)) {
-		rcu_read_unlock();
-
 		if (flush)
 			kvm_flush_remote_tlbs(kvm);
 
+		rcu_read_unlock();
+
 		if (shared)
 			cond_resched_rwlock_read(&kvm->mmu_lock);
 		else
@@ -817,7 +814,6 @@ static bool zap_gfn_range(struct kvm *kvm, struct kvm_mmu_page *root,
 	}
 
 	rcu_read_unlock();
-	return flush;
 }
 
 /*
@@ -954,6 +950,8 @@ static int tdp_mmu_map_handle_target_level(struct kvm_vcpu *vcpu,
 		ret = RET_PF_SPURIOUS;
 	else if (!tdp_mmu_set_spte_atomic(vcpu->kvm, iter, new_spte))
 		return RET_PF_RETRY;
+	else if (<old spte was present shadow page>)
+		kvm_flush_remote_tlbs(kvm);
 
 	/*
 	 * If the page fault was caused by a write but the page is write

> +static inline bool tdp_mmu_set_spte_atomic(struct kvm *kvm,
> + struct tdp_iter *iter,
> + u64 new_spte)
> +{
> + return __tdp_mmu_set_spte_atomic(kvm, iter, new_spte, NULL);

This helper and refactoring belong in patch 19. It is impossible to review without
the context of its user(s).
