Subject: Re: [PATCH v19 056/130] KVM: x86/tdp_mmu: Init role member of struct kvm_mmu_page at allocation

On Mon, 2024-02-26 at 00:25 -0800, isaku.yamahata@intel.com wrote:
> To handle private page tables, the is_private argument needs to be
> passed down.  Given that the page level is already passed down, it
> would be cumbersome to add one more parameter about sp.  Instead,
> replace the level argument with union kvm_mmu_page_role.  Thus the
> number of arguments won't increase, and more info about sp can be
> passed down.
>
> For a private sp, a secure page table will also be allocated in
> addition to struct kvm_mmu_page and the page table (spt member).  The
> allocation functions (tdp_mmu_alloc_sp() and
> __tdp_mmu_alloc_sp_for_split()) need to know whether the allocation
> is for a conventional page table or a private page table.  Pass union
> kvm_mmu_page_role to those functions and initialize the role member
> of struct kvm_mmu_page.
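
For reference, my reading of that change: after this patch
tdp_mmu_alloc_sp() takes the role and stashes it on the sp, roughly
like this (a sketch of what the quoted text describes, not a verbatim
excerpt from v19):

static struct kvm_mmu_page *tdp_mmu_alloc_sp(struct kvm_vcpu *vcpu,
					     union kvm_mmu_page_role role)
{
	struct kvm_mmu_page *sp;

	sp = kvm_mmu_memory_cache_alloc(&vcpu->arch.mmu_page_header_cache);
	sp->spt = kvm_mmu_memory_cache_alloc(&vcpu->arch.mmu_shadow_page_cache);
	/* Record the role (including the private bit) on the sp. */
	sp->role = role;

	return sp;
}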

tdp_mmu_alloc_sp() is only called in two places: once for the root,
and once for the mid-level tables.
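
Roughly, those call sites look like this today (simplified from the
upstream tdp_mmu.c, without this series applied; not a verbatim
excerpt):

	/* Root, in kvm_tdp_mmu_get_vcpu_root_hpa(): */
	root = tdp_mmu_alloc_sp(vcpu);
	tdp_mmu_init_sp(root, NULL, 0, role);

	/* Mid-level, in kvm_tdp_mmu_map(): */
	sp = tdp_mmu_alloc_sp(vcpu);
	tdp_mmu_init_child_sp(sp, &iter);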

In later patches, when the kvm_mmu_alloc_private_spt() part is added,
the root case doesn't need anything done. So the code has to take
special care in tdp_mmu_alloc_sp() to avoid doing anything for the
root.

It only needs to do the special private spt allocation in the non-root
case. If we open code that case, I think we could maybe drop this
patch, like the diff below.

The benefits are dropping this patch (which already looks to be part
of Paolo's series) and simplifying "KVM: x86/mmu: Add a private
pointer to struct kvm_mmu_page". I'm not sure though; what do you
think? Only build tested.

diff --git a/arch/x86/kvm/mmu/mmu_internal.h b/arch/x86/kvm/mmu/mmu_internal.h
index f1533a753974..d6c2ee8bb636 100644
--- a/arch/x86/kvm/mmu/mmu_internal.h
+++ b/arch/x86/kvm/mmu/mmu_internal.h
@@ -176,30 +176,12 @@ static inline void kvm_mmu_init_private_spt(struct kvm_mmu_page *sp, void *priva
 
 static inline void kvm_mmu_alloc_private_spt(struct kvm_vcpu *vcpu, struct kvm_mmu_page *sp)
 {
-	bool is_root = vcpu->arch.root_mmu.root_role.level == sp->role.level;
-
-	KVM_BUG_ON(!kvm_mmu_page_role_is_private(sp->role), vcpu->kvm);
-	if (is_root)
-		/*
-		 * Because TDX module assigns root Secure-EPT page and set it to
-		 * Secure-EPTP when TD vcpu is created, secure page table for
-		 * root isn't needed.
-		 */
-		sp->private_spt = NULL;
-	else {
-		/*
-		 * Because the TDX module doesn't trust VMM and initializes
-		 * the pages itself, KVM doesn't initialize them.  Allocate
-		 * pages with garbage and give them to the TDX module.
-		 */
-		sp->private_spt = kvm_mmu_memory_cache_alloc(&vcpu->arch.mmu_private_spt_cache);
-		/*
-		 * Because mmu_private_spt_cache is topped up before starting
-		 * kvm page fault resolving, the allocation above shouldn't
-		 * fail.
-		 */
-		WARN_ON_ONCE(!sp->private_spt);
-	}
+	/*
+	 * Because the TDX module doesn't trust VMM and initializes
+	 * the pages itself, KVM doesn't initialize them.  Allocate
+	 * pages with garbage and give them to the TDX module.
+	 */
+	sp->private_spt = kvm_mmu_memory_cache_alloc(&vcpu->arch.mmu_private_spt_cache);
 }
 
 static inline gfn_t kvm_gfn_for_root(struct kvm *kvm, struct kvm_mmu_page *root,
diff --git a/arch/x86/kvm/mmu/tdp_mmu.c b/arch/x86/kvm/mmu/tdp_mmu.c
index ac7bf37b353f..f423a38019fb 100644
--- a/arch/x86/kvm/mmu/tdp_mmu.c
+++ b/arch/x86/kvm/mmu/tdp_mmu.c
@@ -195,9 +195,6 @@ static struct kvm_mmu_page *tdp_mmu_alloc_sp(struct kvm_vcpu *vcpu)
 	sp = kvm_mmu_memory_cache_alloc(&vcpu->arch.mmu_page_header_cache);
 	sp->spt = kvm_mmu_memory_cache_alloc(&vcpu->arch.mmu_shadow_page_cache);
 
-	if (kvm_mmu_page_role_is_private(role))
-		kvm_mmu_alloc_private_spt(vcpu, sp);
-
 	return sp;
 }
 
@@ -1378,6 +1375,8 @@ int kvm_tdp_mmu_map(struct kvm_vcpu *vcpu, struct kvm_page_fault *fault)
 		 * needs to be split.
 		 */
 		sp = tdp_mmu_alloc_sp(vcpu);
+		if (!(raw_gfn & kvm_gfn_shared_mask(kvm)))
+			kvm_mmu_alloc_private_spt(vcpu, sp);
 		tdp_mmu_init_child_sp(sp, &iter);
 
 		sp->nx_huge_page_disallowed = fault->huge_page_disallowed;
@@ -1670,7 +1669,6 @@ static struct kvm_mmu_page *__tdp_mmu_alloc_sp_for_split(struct kvm *kvm, gfp_t
 
 	sp->spt = (void *)__get_free_page(gfp);
 	/* TODO: large page support for private GPA. */
-	WARN_ON_ONCE(kvm_mmu_page_role_is_private(role));
 	if (!sp->spt) {
 		kmem_cache_free(mmu_page_header_cache, sp);
 		return NULL;
@@ -1686,10 +1684,6 @@ static struct kvm_mmu_page *tdp_mmu_alloc_sp_for_split(struct kvm *kvm,
 	struct kvm_mmu_page *sp;
 
 	kvm_lockdep_assert_mmu_lock_held(kvm, shared);
-	KVM_BUG_ON(kvm_mmu_page_role_is_private(role) !=
-		   is_private_sptep(iter->sptep), kvm);
-	/* TODO: Large page isn't supported for private SPTE yet. */
-	KVM_BUG_ON(kvm_mmu_page_role_is_private(role), kvm);
 
 	/*
 	 * Since we are allocating while under the MMU lock we have to be
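
To spell out the resulting non-root path in kvm_tdp_mmu_map() (the
same lines as in the hunk above; raw_gfn and kvm_gfn_shared_mask()
come from this series, where a private GPA has the shared bit clear):

	sp = tdp_mmu_alloc_sp(vcpu);
	if (!(raw_gfn & kvm_gfn_shared_mask(kvm)))
		kvm_mmu_alloc_private_spt(vcpu, sp);
	tdp_mmu_init_child_sp(sp, &iter);

The root is allocated on a different path
(kvm_tdp_mmu_get_vcpu_root_hpa()), which never reaches this code, so
the root naturally ends up with no private_spt and the is_root special
case goes away.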