Date: 24 Jun 2022
Subject: [PATCH v2 4/4] KVM: x86/mmu: Buffer nested MMU split_desc_cache only by default capacity
From: Sean Christopherson <seanjc@google.com>
Buffer split_desc_cache, the cache used to allocate rmap list entries,
only by the default cache capacity (currently 40), not by doubling the
minimum (513). Aliasing L2 GPAs to L1 GPAs is uncommon, thus eager page
splitting is unlikely to need 500+ entries. And because each object is
(currently) a non-trivial 128 bytes (see struct pte_list_desc), those
extra ~500 entries mean KVM is in all likelihood wasting ~64KB of memory.
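
For reference, a rough back-of-the-envelope using only the numbers quoted
above (513 minimum objects, 40 default capacity, 128 bytes per object; the
exact constants are defined in mmu.c):

  old buffer:    2 * 513            = 1026 objects
  new buffer:    513 + 40           =  553 objects
  extra objects: 1026 - 553         =  473 (~500)
  memory saved:  473 * 128 bytes    ≈  60KB (~64KB when rounding up to ~500 entries)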

Link: https://lore.kernel.org/all/YrTDcrsn0%2F+alpzf@google.com
Reviewed-by: David Matlack <dmatlack@google.com>
Signed-off-by: Sean Christopherson <seanjc@google.com>
---
arch/x86/kvm/mmu/mmu.c | 27 ++++++++++++++++++---------
1 file changed, 18 insertions(+), 9 deletions(-)

diff --git a/arch/x86/kvm/mmu/mmu.c b/arch/x86/kvm/mmu/mmu.c
index eae5c801e442..52664c3caaab 100644
--- a/arch/x86/kvm/mmu/mmu.c
+++ b/arch/x86/kvm/mmu/mmu.c
@@ -6123,17 +6123,26 @@ static bool need_topup_split_caches_or_resched(struct kvm *kvm)

 static int topup_split_caches(struct kvm *kvm)
 {
-	int r;
-
-	lockdep_assert_held(&kvm->slots_lock);
-
 	/*
-	 * Setting capacity == min would cause KVM to drop mmu_lock even if
-	 * just one object was consumed from the cache, so make capacity
-	 * larger than min.
+	 * Allocating rmap list entries when splitting huge pages for nested
+	 * MMUs is uncommon as KVM needs to use a list if and only if there is
+	 * more than one rmap entry for a gfn, i.e. requires an L1 gfn to be
+	 * aliased by multiple L2 gfns and/or from multiple nested roots with
+	 * different roles. Aliasing gfns when using TDP is atypical for VMMs;
+	 * a few gfns are often aliased during boot, e.g. when remapping BIOS,
+	 * but aliasing rarely occurs post-boot or for many gfns. If there is
+	 * only one rmap entry, rmap->val points directly at that one entry and
+	 * doesn't need to allocate a list. Buffer the cache by the default
+	 * capacity so that KVM doesn't have to drop mmu_lock to topup if KVM
+	 * encounters an aliased gfn or two.
 	 */
-	r = __kvm_mmu_topup_memory_cache(&kvm->arch.split_desc_cache,
-					 2 * SPLIT_DESC_CACHE_MIN_NR_OBJECTS,
+	const int capacity = SPLIT_DESC_CACHE_MIN_NR_OBJECTS +
+			     KVM_ARCH_NR_OBJS_PER_MEMORY_CACHE;
+	int r;
+
+	lockdep_assert_held(&kvm->slots_lock);
+
+	r = __kvm_mmu_topup_memory_cache(&kvm->arch.split_desc_cache, capacity,
 					 SPLIT_DESC_CACHE_MIN_NR_OBJECTS);
 	if (r)
 		return r;
--
2.37.0.rc0.161.g10f37bed90-goog
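
As an aside, for readers unfamiliar with the rmap layout the new comment
relies on ("if there is only one rmap entry, rmap->val points directly at
that one entry"), below is a minimal, self-contained sketch of that
single-entry-vs-allocated-list pattern. It is illustrative only: the
struct names, the DEMO_MANY_FLAG low-bit tag, and the fixed two-slot
descriptor are assumptions made for the demo, not code copied from mmu.c.

/*
 * Minimal sketch of a "single entry inline, list only when needed" rmap
 * head: the first entry is stored directly in head->val with no
 * allocation; only the second entry forces a descriptor allocation,
 * which is the allocation a cache like split_desc_cache exists to serve
 * without dropping mmu_lock.
 */
#include <stdint.h>
#include <stdio.h>
#include <stdlib.h>

#define DEMO_MANY_FLAG 1UL	/* low bit tags "val points at a list" */

struct demo_pte_list_desc {
	uint64_t *sptes[2];	/* tiny fixed list, enough for the demo */
};

struct demo_rmap_head {
	unsigned long val;	/* 0, a lone spte pointer, or desc | flag */
};

/* Returns 1 if the call had to allocate a descriptor, 0 otherwise. */
static int demo_rmap_add(struct demo_rmap_head *head, uint64_t *spte)
{
	struct demo_pte_list_desc *desc;

	if (!head->val) {
		/* First entry: store the pointer directly, no allocation. */
		head->val = (unsigned long)spte;
		return 0;
	}

	if (!(head->val & DEMO_MANY_FLAG)) {
		/* Second entry: now a list has to be allocated. */
		desc = calloc(1, sizeof(*desc));
		if (!desc)
			abort();
		desc->sptes[0] = (uint64_t *)head->val;
		desc->sptes[1] = spte;
		head->val = (unsigned long)desc | DEMO_MANY_FLAG;
		return 1;
	}

	/* Growing beyond two entries is omitted from the sketch. */
	return 0;
}

int main(void)
{
	struct demo_rmap_head head = { 0 };
	uint64_t spte_a = 0, spte_b = 0;

	printf("first add allocated:  %d\n", demo_rmap_add(&head, &spte_a));
	printf("second add allocated: %d\n", demo_rmap_add(&head, &spte_b));
	return 0;
}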