Subject: [PATCH V3 02/12] KVM: X86/MMU: Add using_local_root_page()
From: Lai Jiangshan <jiangshan.ljs@antgroup.com>

In some cases, the MMU uses local root pages. The code often checks
to_shadow_page(mmu->root.hpa) to tell whether a local root page is in
use.

Add using_local_root_page() to directly check whether a local root page
is in use, or will need to be used, even when mmu->root.hpa is not yet
set.

This prepares for making to_shadow_page(mmu->root.hpa) return non-NULL
when local shadow [root] pages are in use.

Signed-off-by: Lai Jiangshan <jiangshan.ljs@antgroup.com>
---
arch/x86/kvm/mmu/mmu.c | 40 +++++++++++++++++++++++++++++++++++++---
1 file changed, 37 insertions(+), 3 deletions(-)
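
For illustration (not part of the patch): a minimal standalone sketch
that evaluates the same condition as the using_local_root_page() helper
added below. The struct and the concrete role values here are simplified
stand-ins for struct kvm_mmu and its roles, chosen only so that each
entry maps to one of the five cases listed in the new comment; it assumes
root_role.level >= PT32E_ROOT_LEVEL always holds, as that comment notes.

/*
 * Standalone userspace sketch, for illustration only. struct mmu_roles
 * is a simplified stand-in for struct kvm_mmu, holding just the three
 * role fields the check reads. The role values per case are illustrative.
 */
#include <stdbool.h>
#include <stdio.h>

#define PT32E_ROOT_LEVEL 3

struct mmu_roles {
	int root_level;		/* stand-in for mmu->root_role.level */
	bool direct;		/* stand-in for mmu->root_role.direct */
	int cpu_level;		/* stand-in for mmu->cpu_role.base.level */
	const char *desc;
};

static bool using_local_root_page(const struct mmu_roles *m)
{
	return m->root_level == PT32E_ROOT_LEVEL ||
	       (!m->direct && m->cpu_level <= PT32E_ROOT_LEVEL);
}

int main(void)
{
	const struct mmu_roles cases[] = {
		{ 3, true,  2, "nonpaging, !tdp_enabled" },
		{ 3, false, 2, "shadow paging, 32-bit guest" },
		{ 3, true,  3, "NPT on a 32-bit host" },
		{ 3, false, 3, "nested NPT, 32-bit L1 on a 32-bit host" },
		{ 4, false, 3, "nested NPT, 32-bit L1 on a 64-bit host" },
		{ 4, true,  4, "TDP for a 64-bit guest (no local root)" },
		{ 4, false, 4, "shadow paging, 64-bit guest (no local root)" },
	};
	unsigned int i;

	for (i = 0; i < sizeof(cases) / sizeof(cases[0]); i++)
		printf("%-45s -> %s\n", cases[i].desc,
		       using_local_root_page(&cases[i]) ?
		       "local root" : "shadow page root");
	return 0;
}

Compiled as a normal userspace program, the first five cases print
"local root" and the last two print "shadow page root".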

diff --git a/arch/x86/kvm/mmu/mmu.c b/arch/x86/kvm/mmu/mmu.c
index efe5a3dca1e0..624b6d2473f7 100644
--- a/arch/x86/kvm/mmu/mmu.c
+++ b/arch/x86/kvm/mmu/mmu.c
@@ -1690,6 +1690,39 @@ static void drop_parent_pte(struct kvm_mmu_page *sp,
	mmu_spte_clear_no_track(parent_pte);
}

+/*
+ * KVM uses the vCPU's local root page (vcpu->mmu->pae_root) when either the
+ * shadow pagetable is using PAE paging or the host is shadowing nested NPT
+ * for a 32-bit L1 hypervisor.
+ *
+ * This covers the following cases:
+ *   nonpaging when !tdp_enabled (direct paging)
+ *   shadow paging for a 32-bit guest when !tdp_enabled (shadow paging)
+ *   NPT on a 32-bit host (not shadowing nested NPT) (direct paging)
+ *   shadow nested NPT for a 32-bit L1 hypervisor on a 32-bit host (shadow paging)
+ *   shadow nested NPT for a 32-bit L1 hypervisor on a 64-bit host (shadow paging)
+ *
+ * For the first four cases, mmu->root_role.level is PT32E_ROOT_LEVEL and the
+ * shadow pagetable uses PAE paging.
+ *
+ * For the last case, the condition is:
+ *	mmu->root_role.level > PT32E_ROOT_LEVEL &&
+ *	!mmu->root_role.direct && mmu->cpu_role.base.level <= PT32E_ROOT_LEVEL
+ * and if this condition holds, it can only be the last case.
+ *
+ * Combining the two conditions, the check becomes:
+ *	mmu->root_role.level == PT32E_ROOT_LEVEL ||
+ *	(!mmu->root_role.direct && mmu->cpu_role.base.level <= PT32E_ROOT_LEVEL)
+ *
+ * (No explicit "mmu->root_role.level > PT32E_ROOT_LEVEL" check is needed,
+ * because mmu->root_role.level >= PT32E_ROOT_LEVEL is already guaranteed.)
+ */
+static bool using_local_root_page(struct kvm_mmu *mmu)
+{
+	return mmu->root_role.level == PT32E_ROOT_LEVEL ||
+	       (!mmu->root_role.direct && mmu->cpu_role.base.level <= PT32E_ROOT_LEVEL);
+}
+
static struct kvm_mmu_page *kvm_mmu_alloc_page(struct kvm_vcpu *vcpu, int direct)
{
	struct kvm_mmu_page *sp;
@@ -4252,10 +4285,11 @@ static bool fast_pgd_switch(struct kvm *kvm, struct kvm_mmu *mmu,
{
	/*
	 * For now, limit the caching to 64-bit hosts+VMs in order to avoid
-	 * having to deal with PDPTEs. We may add support for 32-bit hosts/VMs
-	 * later if necessary.
+	 * having to deal with PDPTEs. Local roots cannot be put into
+	 * mmu->prev_roots[] because mmu->pae_root cannot be shared by
+	 * different roots at the same time.
	 */
-	if (VALID_PAGE(mmu->root.hpa) && !to_shadow_page(mmu->root.hpa))
+	if (unlikely(using_local_root_page(mmu)))
		kvm_mmu_free_roots(kvm, mmu, KVM_MMU_ROOT_CURRENT);

	if (VALID_PAGE(mmu->root.hpa))
--
2.19.1.6.gb485710b