From: Sean Christopherson
Date: 2022-06-24
Subject: [PATCH 0/4] KVM: x86/mmu: pte_list_desc fix and cleanups

Reviewing the eager page splitting code made me realize that burning 14
rmap entries per pte_list_desc for nested TDP MMUs is extremely wasteful,
especially since the per-vCPU caches allocate 40 entries by default. For
nested TDP, aliasing multiple L2 gfns to a single L1 gfn is quite rare and
is not performance critical (it's exclusively pre-boot behavior for sane
setups).
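
For reference, the 14 and 40 above come from the current descriptor layout
and the default per-vCPU cache sizing, i.e. roughly this (paraphrased from
mmu.c and kvm_host.h, not a quote of this series):

/* mmu.c: each rmap descriptor is sized to fill whole cache lines */
#define PTE_LIST_EXT 14

struct pte_list_desc {
        struct pte_list_desc *more;
        /* number of valid entries in sptes[] */
        u64 spte_count;
        u64 *sptes[PTE_LIST_EXT];       /* 14 sptes => 128 bytes on 64-bit */
};

/* kvm_host.h: default per-vCPU topup target for each MMU memory cache */
#define KVM_ARCH_NR_OBJS_PER_MEMORY_CACHE 40

I.e. up to 40 * 128 bytes = 5KiB of pte_list_descs per vCPU that a
non-nested TDP guest will never touch.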

Patch 1 fixes a bug where pte_list_desc is neither correctly aligned nor
correctly sized on 32-bit kernels. The primary motivation for the fix is to
be able to add a compile-time assertion on the size being a multiple of the
cache line size; I doubt anyone cares about the performance/memory impact.
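
Concretely, the end state is something like the below (sketch only, not
the actual diff):

struct pte_list_desc {
        struct pte_list_desc *more;
        /* ulong so the field (and the struct) stays naturally sized on 32-bit */
        unsigned long spte_count;
        u64 *sptes[PTE_LIST_EXT];
};

/* the assertion that motivated the change */
static_assert(!(sizeof(struct pte_list_desc) % L1_CACHE_BYTES));

With spte_count as a ulong, the struct is 64 bytes on 32-bit and 128 bytes
on 64-bit, so the assertion holds for both.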

Patch 2 tweaks MMU setup to support a dynamic pte_list_desc size.

Patch 3 reduces the number of sptes per pte_list_desc to 2 for nested TDP
MMUs, i.e. allocates the bare minimum to prioritize the memory footprint
over performance for sane setups.
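
Combined, patches 2 and 3 amount to picking the descriptor capacity at
setup time, after vendor code has determined whether or not TDP is
enabled. A rough sketch (function and variable names here are
illustrative, not necessarily what the patches use):

static u32 pte_list_desc_capacity __ro_after_init;

/* runs after vendor hardware_setup(), so tdp_enabled is final */
void kvm_mmu_hardware_setup(void)
{
        /*
         * Aliasing L2 gfns to L1 gfns is rare, so two entries per
         * descriptor suffice when TDP is enabled; shadow paging keeps
         * the full cache-line-sized descriptor.
         */
        pte_list_desc_capacity = tdp_enabled ? 2 : PTE_LIST_EXT;

        pte_list_desc_cache = kmem_cache_create("pte_list_desc",
                offsetof(struct pte_list_desc, sptes) +
                pte_list_desc_capacity * sizeof(u64 *),
                0, SLAB_ACCOUNT, NULL);
}

With two sptes per descriptor, each object is roughly 32 bytes instead of
128 on 64-bit.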

Patch 4 fills the pte_list_desc cache if and only if rmaps are in use,
i.e. doesn't allocate pte_list_desc when using the TDP MMU until nested
TDP is used.
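
In other words, the topup path ends up looking roughly like this
(kvm_memslots_have_rmaps() already exists; the rest is paraphrased):

static int mmu_topup_memory_caches(struct kvm_vcpu *vcpu, bool maybe_indirect)
{
        int r;

        /* pte_list_descs are needed only once the VM is using rmaps. */
        if (kvm_memslots_have_rmaps(vcpu->kvm)) {
                /* 1 rmap, 1 parent PTE per level, plus prefetched rmaps */
                r = kvm_mmu_topup_memory_cache(&vcpu->arch.mmu_pte_list_desc_cache,
                                               1 + PT64_ROOT_MAX_LEVEL + PTE_PREFETCH_NUM);
                if (r)
                        return r;
        }

        /* remaining cache topups elided; they are unchanged by this patch */
        return 0;
}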

Sean Christopherson (4):
  KVM: x86/mmu: Track the number of entries in a pte_list_desc with a ulong
  KVM: x86/mmu: Defer "full" MMU setup until after vendor hardware_setup()
  KVM: x86/mmu: Shrink pte_list_desc size when KVM is using TDP
  KVM: x86/mmu: Topup pte_list_desc cache iff VM is using rmaps

 arch/x86/include/asm/kvm_host.h |  5 ++-
 arch/x86/kvm/mmu/mmu.c          | 78 +++++++++++++++++++++++----------
 arch/x86/kvm/x86.c              | 17 ++++---
 3 files changed, 70 insertions(+), 30 deletions(-)


base-commit: 4b88b1a518b337de1252b8180519ca4c00015c9e
--
2.37.0.rc0.161.g10f37bed90-goog
