From: Sean Christopherson <seanjc@google.com>
Date: Fri, 18 Feb 2022
Subject: Re: [PATCH v2 06/18] KVM: x86/mmu: do not consult levels when freeing roots
On Thu, Feb 17, 2022, Paolo Bonzini wrote:
> Right now, PGD caching requires a complicated dance of first computing
> the MMU role and passing it to __kvm_mmu_new_pgd(), and then separately calling

Wrap at ~75 chars. I'm starting to wonder if you roll 5x d20 when deciding
what column to wrap at :-)

> kvm_init_mmu().
>
> Part of this is due to kvm_mmu_free_roots() using mmu->root_level and
> mmu->shadow_root_level to distinguish whether the page table uses a single
> root or 4 PAE roots. Because kvm_init_mmu() can overwrite mmu->root_level,
> kvm_mmu_free_roots() must be called before kvm_init_mmu().
>
> However, even after kvm_init_mmu() there is a way to detect whether the
> page table may hold PAE roots, as root.hpa isn't backed by a shadow page
> when it points at PAE roots. Using this method results in simpler code,
> and removes one more obstacle to moving all calls to __kvm_mmu_new_pgd()
> after the MMU has been initialized.
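
For context, the "backed by a shadow page" check works because
to_shadow_page() resolves an HPA to its struct kvm_mmu_page via the
backing page's private data. Roughly, from mmu_internal.h (modulo any
drift in this series):

	static inline struct kvm_mmu_page *to_shadow_page(hpa_t shadow_page)
	{
		struct page *page = pfn_to_page(shadow_page >> PAGE_SHIFT);

		return (struct kvm_mmu_page *)page_private(page);
	}

The special roots, e.g. pae_root, come from plain page allocations with
no struct kvm_mmu_page attached, so page_private() is zero and
to_shadow_page() returns NULL for them.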

I think it's worth adding a blurb about 5-level nNPT. Something like

Note, this is technically wrong when KVM is shadowing 4-level NPT in L1
with 5-level NPT in L0, as the PDPTEs are not used in that case and
mmu->root.hpa will not be backed by a shadow page. But the PDPTEs will
be '0', so processing them does no harm, not to mention that that
particular nNPT case is completely broken in KVM and this code will
need to be reworked to correctly handle 5=>4-level nNPT no matter what.
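
(The "does no harm" part can be seen in the fallback path of
kvm_mmu_free_roots() itself; roughly, modulo drift in this series:

	} else if (mmu->pae_root) {
		for (i = 0; i < 4; ++i) {
			if (!IS_VALID_PAE_ROOT(mmu->pae_root[i]))
				continue;

			mmu_free_root_page(kvm, &mmu->pae_root[i],
					   &invalid_list);
			mmu->pae_root[i] = INVALID_PAE_ROOT;
		}
	}

All-zero PDPTEs fail the IS_VALID_PAE_ROOT() check and are skipped.)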

> Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
> ---
> arch/x86/kvm/mmu/mmu.c | 10 ++++++----
> 1 file changed, 6 insertions(+), 4 deletions(-)
>
> diff --git a/arch/x86/kvm/mmu/mmu.c b/arch/x86/kvm/mmu/mmu.c
> index a478667d7561..e1578f71feae 100644
> --- a/arch/x86/kvm/mmu/mmu.c
> +++ b/arch/x86/kvm/mmu/mmu.c
> @@ -3240,12 +3240,15 @@ void kvm_mmu_free_roots(struct kvm_vcpu *vcpu, struct kvm_mmu *mmu,
>  	struct kvm *kvm = vcpu->kvm;
>  	int i;
>  	LIST_HEAD(invalid_list);
> -	bool free_active_root = roots_to_free & KVM_MMU_ROOT_CURRENT;
> +	bool free_active_root;
> 
>  	BUILD_BUG_ON(KVM_MMU_NUM_PREV_ROOTS >= BITS_PER_LONG);
> 
>  	/* Before acquiring the MMU lock, see if we need to do any real work. */
> -	if (!(free_active_root && VALID_PAGE(mmu->root.hpa))) {
> +	free_active_root = (roots_to_free & KVM_MMU_ROOT_CURRENT)
> +		&& VALID_PAGE(mmu->root.hpa);

Pretty please, put the && on the first line and align the indentation.

	free_active_root = (roots_to_free & KVM_MMU_ROOT_CURRENT) &&
			   VALID_PAGE(mmu->root.hpa);

With that,

Reviewed-by: Sean Christopherson <seanjc@google.com>

> +
> +	if (!free_active_root) {
>  		for (i = 0; i < KVM_MMU_NUM_PREV_ROOTS; i++)
>  			if ((roots_to_free & KVM_MMU_ROOT_PREVIOUS(i)) &&
>  			    VALID_PAGE(mmu->prev_roots[i].hpa))
> @@ -3263,8 +3266,7 @@ void kvm_mmu_free_roots(struct kvm_vcpu *vcpu, struct kvm_mmu *mmu,
>  					   &invalid_list);
> 
>  	if (free_active_root) {
> -		if (mmu->shadow_root_level >= PT64_ROOT_4LEVEL &&
> -		    (mmu->root_level >= PT64_ROOT_4LEVEL || mmu->direct_map)) {
> +		if (to_shadow_page(mmu->root.hpa)) {
>  			mmu_free_root_page(kvm, &mmu->root.hpa, &invalid_list);
>  		} else if (mmu->pae_root) {
>  			for (i = 0; i < 4; ++i) {
> --
> 2.31.1
>
>
