Date: Fri, 24 Jun 2022
From: Chao Peng <chao.p.peng@linux.intel.com>
Subject: Re: [PATCH v6 6/8] KVM: Handle page fault for private memory
On Fri, Jun 24, 2022 at 09:28:23AM +0530, Nikunj A. Dadhania wrote:
> On 5/19/2022 9:07 PM, Chao Peng wrote:
> > A page fault can carry the information of whether the access is private
> > or not for a KVM_MEM_PRIVATE memslot; this can be filled in by architecture
> > code (like TDX code). To handle a page fault for such an access, KVM maps
> > the page only when the private property of the fault matches the host's
> > view of the page, which is decided by checking whether the corresponding
> > page is populated in the private fd. A page is considered private when it
> > is populated in the private fd; otherwise it is shared.
> >
> > For a successful match, the private pfn is obtained via memfile_notifier
> > callbacks from the private fd, and the shared pfn is obtained with the
> > existing get_user_pages().
> >
> > For a failed match, KVM exits to userspace with KVM_EXIT_MEMORY_FAULT.
> > Userspace can then convert the memory between private and shared from the
> > host's view and retry the access.
> >
> > Co-developed-by: Yu Zhang <yu.c.zhang@linux.intel.com>
> > Signed-off-by: Yu Zhang <yu.c.zhang@linux.intel.com>
> > Signed-off-by: Chao Peng <chao.p.peng@linux.intel.com>
> > ---
> > arch/x86/kvm/mmu.h | 1 +
> > arch/x86/kvm/mmu/mmu.c | 70 +++++++++++++++++++++++++++++++--
> > arch/x86/kvm/mmu/mmu_internal.h | 17 ++++++++
> > arch/x86/kvm/mmu/mmutrace.h | 1 +
> > arch/x86/kvm/mmu/paging_tmpl.h | 5 ++-
> > include/linux/kvm_host.h | 22 +++++++++++
> > 6 files changed, 112 insertions(+), 4 deletions(-)
> >
> > diff --git a/arch/x86/kvm/mmu.h b/arch/x86/kvm/mmu.h
> > index 7e258cc94152..c84835762249 100644
> > --- a/arch/x86/kvm/mmu.h
> > +++ b/arch/x86/kvm/mmu.h
> > @@ -176,6 +176,7 @@ struct kvm_page_fault {
> >
> > /* Derived from mmu and global state. */
> > const bool is_tdp;
> > + const bool is_private;
> > const bool nx_huge_page_workaround_enabled;
> >
> > /*
> > diff --git a/arch/x86/kvm/mmu/mmu.c b/arch/x86/kvm/mmu/mmu.c
> > index afe18d70ece7..e18460e0d743 100644
> > --- a/arch/x86/kvm/mmu/mmu.c
> > +++ b/arch/x86/kvm/mmu/mmu.c
> > @@ -2899,6 +2899,9 @@ int kvm_mmu_max_mapping_level(struct kvm *kvm,
> > if (max_level == PG_LEVEL_4K)
> > return PG_LEVEL_4K;
> >
> > + if (kvm_slot_is_private(slot))
> > + return max_level;
>
> Can you explain the rationale behind the above change?
> AFAIU, this overrides the transparent_hugepage=never setting for both
> shared and private mappings.

As Sean pointed out, this should check fault->is_private instead of the
slot. For a private fault, the level is retrieved and stored in
fault->max_level by kvm_faultin_pfn_private() instead of here.
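
For reference, a rough sketch of how that private fault-in path could look
under the scheme described in the changelog above. The helper names
private_mem_get_pfn() and order_to_level() are placeholders, not necessarily
what the series uses, and the exit-to-userspace path is simplified (the real
code would also fill in the gpa/size/flags of the exit):

static int kvm_faultin_pfn_private(struct kvm_vcpu *vcpu,
				   struct kvm_page_fault *fault)
{
	int order;

	/*
	 * Placeholder helper: ask the private fd (via its memfile_notifier
	 * backend) for the pfn backing this gfn.  Failure means the gfn is
	 * not populated in the private fd, i.e. the host considers it
	 * shared, which mismatches the private access and must be punted
	 * to userspace for conversion.
	 */
	if (private_mem_get_pfn(fault->slot, fault->gfn, &fault->pfn, &order)) {
		vcpu->run->exit_reason = KVM_EXIT_MEMORY_FAULT;
		return -EFAULT;
	}

	/* The private backing store, not the host page tables, bounds the level. */
	fault->max_level = min(order_to_level(order), fault->max_level);
	return 0;	/* continue handling the fault */
}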

For a shared fault, the code continues to query host_level below. For a
private fault, the host level has already been accounted for in
kvm_faultin_pfn_private().
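
In other words, the level decision splits on the fault type rather than on
the slot. Roughly (illustrative only, not a literal replacement for the hunk
above, since kvm_mmu_max_mapping_level() itself does not take the fault):

	if (fault->is_private)
		/* Level already bounded by the private backing store. */
		return fault->max_level;

	/* Shared fault: keep honoring the host mapping level (and THP settings). */
	host_level = host_pfn_mapping_level(kvm, gfn, pfn, slot);
	return min(host_level, max_level);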

Chao
>
> > host_level = host_pfn_mapping_level(kvm, gfn, pfn, slot);
> > return min(host_level, max_level);
> > }
>
> Regards
> Nikunj
