From: Paolo Bonzini <pbonzini@redhat.com>
Subject: [PATCH] KVM: x86/mmu: Recurse down to 1GB level when zapping pages in a range
Date: 2022-03-18
The recursive zapping that was reintroduced by reverting "KVM: x86/mmu:
Zap only TDP MMU leafs in kvm_zap_gfn_range()" can be expensive. Let
zap_gfn_range() step down to the PDPTE (1GB) level before zapping, so
that each recursive zap covers at most 1GB and periodic yielding is
possible at a finer granularity.

Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
---
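For intuition, here is a throwaway userspace sketch, not kernel code:
gfns_per_spte(), yield_points() and the PG_LEVEL_512G define are made up
for the illustration, and only the 512-entries-per-level shape of 4-level
paging mirrors the real TDP MMU. It counts how many SPTEs the iterator
visits for a 512GiB range, i.e. how many chances the zap loop gets to
reschedule, at the root level versus the 1GB level.

#include <stdio.h>
#include <stdint.h>

#define PG_LEVEL_4K   1
#define PG_LEVEL_2M   2
#define PG_LEVEL_1G   3
#define PG_LEVEL_512G 4	/* root level with 4-level paging */

/* Number of 4KiB GFNs covered by one SPTE at the given level. */
static uint64_t gfns_per_spte(int level)
{
	/* 512 entries per page-table level, 4KiB pages at level 1. */
	return 1ULL << (9 * (level - 1));
}

static uint64_t yield_points(uint64_t nr_gfns, int min_level)
{
	/*
	 * Each SPTE visited at min_level is a point where the loop can
	 * check for reschedule; everything below it is torn down
	 * recursively without yielding.
	 */
	return nr_gfns / gfns_per_spte(min_level);
}

int main(void)
{
	uint64_t nr_gfns = 512ULL * 512 * 512;	/* 512GiB worth of 4KiB GFNs */

	printf("zap at root level: %llu resched check(s)\n",
	       (unsigned long long)yield_points(nr_gfns, PG_LEVEL_512G));
	printf("zap at 1GB level:  %llu resched check(s)\n",
	       (unsigned long long)yield_points(nr_gfns, PG_LEVEL_1G));
	return 0;
}

With min_level at the root the whole zap is one unyielding recursion;
at PG_LEVEL_1G the worst case between resched checks is one gigabyte.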
arch/x86/kvm/mmu/tdp_mmu.c | 4 +++-
1 file changed, 3 insertions(+), 1 deletion(-)

diff --git a/arch/x86/kvm/mmu/tdp_mmu.c b/arch/x86/kvm/mmu/tdp_mmu.c
index 87d8910c9ac2..53689603078a 100644
--- a/arch/x86/kvm/mmu/tdp_mmu.c
+++ b/arch/x86/kvm/mmu/tdp_mmu.c
@@ -926,8 +926,10 @@ static bool zap_gfn_range(struct kvm *kvm, struct kvm_mmu_page *root,
 	/*
 	 * No need to try to step down in the iterator when zapping all SPTEs,
 	 * zapping the top-level non-leaf SPTEs will recurse on their children.
+	 * Do not do it above the 1GB level, to avoid making tdp_mmu_set_spte's
+	 * recursion too expensive and allow yielding.
 	 */
-	int min_level = zap_all ? root->role.level : PG_LEVEL_4K;
+	int min_level = zap_all ? PG_LEVEL_1G : PG_LEVEL_4K;
 
 	end = min(end, tdp_mmu_max_gfn_host());

--
2.31.1