Subject: Re: [RFC PATCH 00/10] KVM: selftests: Add support for test-selectable ucall implementations

On Thu, Dec 30, 2021 at 09:11:12PM +0000, Sean Christopherson wrote:
> On Fri, Dec 10, 2021, Michael Roth wrote:
> > To summarize, x86 relies on a ucall based on using PIO instructions to generate
> > an exit to userspace and provide the GVA of a dynamically-allocated ucall
> > struct that resides in guest memory and contains information about how to
> > handle/interpret the exit. This doesn't work for SEV guests for 3 main reasons:
> >
> > 1) The guest memory is generally encrypted during run-time, so the guest
> > needs to ensure the ucall struct is allocated in shared memory.
> > 2) The guest page table is also encrypted, so the address would need to be a
> > GPA instead of a GVA.
> > 3) The guest vCPU register may also be encrypted in the case of
> > SEV-ES/SEV-SNP, so the approach of examining vCPU register state has
> > additional requirements such as requiring guest code to implement a #VC
> > handler that can provide the appropriate registers via a vmgexit.
> >
> > To address these issues, the SEV selftest RFC1 patchset introduced a set of new
> > SEV-specific interfaces that closely mirrored the functionality of
> > ucall()/get_ucall(), but relied on a pre-allocated/static ucall buffer in
> > shared guest memory so that guest code could pass messages/state to the host
> > by simply writing to this pre-arranged shared memory region and then generating
> > an exit to userspace (via a halt instruction).
> >
> > Paolo suggested instead implementing support for test/guest-specific ucall
> > implementations that could be used as an alternative to the default PIO-based
> > ucall implementations as-needed based on test/guest requirements, while still
> > allowing for tests to use a common set of interfaces like ucall()/get_ucall().
>
> This all seems way more complicated than it needs to be. HLT is _worse_ than
> PIO on x86 because it triggers a userspace exit if and only if the local APIC is
> not in-kernel. That is bound to bite someone.

Hmmm, fair point. It's easy enough to avoid the in-kernel APIC in the current
SEV tests to sidestep the issue, but HLT is also being made available as a
ucall implementation for other tests, and given that the in-kernel APIC is set
up automatically, maybe it's not robust enough.

> not in-kernel. That is bound to bite someone. The only issue with SEV is the
> address, not the VM-Exit mechanism. That doesn't change with SEV-ES, SEV-SNP,
> or TDX, as PIO and HLT will both get reflected as #VC/#VE, i.e. the guest side
> needs to be updated to use VMGEXIT/TDCALL no matter what, at which point having
> the hypercall request PIO emulation is just as easy as requesting HLT.

I'm not aware of any #VC handling needed for HLT in the case of
SEV-ES/SEV-SNP. That was one of the reasons for the SEV tests using
this ucall implementation. Of course, at some point, we'd want full support
for PIO/MMIO/etc. in the #VC handler, but it's not something I'd planned on
adding until after the SEV-SNP tests, since it seems like we'd need to
import a bunch of instruction decoding code from elsewhere in the kernel,
which is a lot of churn that's not immediately necessary for getting at least
some basic tests in place. Since the HLT implementation is only 20 lines of
code it seemed like a reasonable stop-gap until we start getting more CoCo
tests in place. But the in-kernel APIC issue probably needs more
consideration...
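
For reference, the stop-gap is roughly the following (just a sketch; the real
version locates the buffer via the SEV test library, but the shape is the
same):

  /* Pre-arranged buffer in shared (unencrypted) guest memory. */
  static struct ucall *ucall_buf;

  void ucall_shared(uint64_t cmd, int nargs, ...)
  {
          va_list va;
          int i;

          nargs = nargs <= UCALL_MAX_ARGS ? nargs : UCALL_MAX_ARGS;

          ucall_buf->cmd = cmd;
          va_start(va, nargs);
          for (i = 0; i < nargs; ++i)
                  ucall_buf->args[i] = va_arg(va, uint64_t);
          va_end(va);

          /* Exits to userspace as KVM_EXIT_HLT, but only without an in-kernel APIC. */
          asm volatile("hlt" : : : "memory");
  }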

Perhaps for *just* PIO, the instruction decoding can be open-coded so it can
be added to the initial #VC handler implementation, which would avoid the need
for the HLT implementation. I'll take a look at that.
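
Something along these lines is what I'm thinking (very rough sketch; struct
ghcb, the ghcb_set_*() accessors, and VMGEXIT() are stand-ins for GHCB
plumbing the selftests would still need to grow, and only the 1-byte
IN/OUT %dx,%al encodings that ucall actually uses are handled):

  #define SVM_EXIT_IOIO           0x7b
  #define IOIO_TYPE_IN            (1UL << 0)
  #define IOIO_SZ8                (1UL << 4)
  #define IOIO_PORT_SHIFT         16

  static void vc_handle_ioio(struct ex_regs *regs, struct ghcb *ghcb)
  {
          uint8_t opcode = *(uint8_t *)regs->rip;
          uint64_t exitinfo = IOIO_SZ8 |
                              ((regs->rdx & 0xffff) << IOIO_PORT_SHIFT);

          /* 0xEC = IN %dx,%al, 0xEE = OUT %dx,%al; anything else needs a real decoder. */
          if (opcode == 0xec)
                  exitinfo |= IOIO_TYPE_IN;
          else
                  GUEST_ASSERT(opcode == 0xee);

          ghcb_set_rax(ghcb, regs->rax);
          ghcb_set_sw_exit_code(ghcb, SVM_EXIT_IOIO);
          ghcb_set_sw_exit_info_1(ghcb, exitinfo);
          ghcb_set_sw_exit_info_2(ghcb, 0);
          VMGEXIT();

          /* For IN, the hypervisor returns the read value via the GHCB's RAX. */
          if (opcode == 0xec)
                  regs->rax = (regs->rax & ~0xffull) | (ghcb->save.rax & 0xff);
          regs->rip += 1; /* both encodings are a single byte */
  }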

>
> I also don't like having to differentiate between a "shared" and "regular" ucall.
> I kind of like having to explicitly pass the ucall object being used, but that
> puts undue burden on simple single-vCPU tests.

I tried to avoid it, but I got hung up on the fact that pre-allocating
arrays/lists of ucall structs needs to be done for each VM, and so we'd
end up needing some way for a guest to identify which pool its ucall
struct should be allocated from. But you've gotten around that by just
sync_global_to_guest()'ing for each pool at the time ucall_init() is
called, so the guest only ever sees its particular pool. Then the switch
from writing a GVA to writing a GPA solves the translation problem. Nice.
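
Just to make sure I'm reading that right, the flow would be something like the
following (a sketch, assuming a .gpa field gets added to struct ucall and a
KVM_MAX_VCPUS-sized pool; ucall_alloc() and the exact names are placeholders):

  /* Host side, at VM creation / ucall_init() time. */
  static struct ucall *ucall_pool;

  void ucall_init(struct kvm_vm *vm)
  {
          struct ucall *hva;
          vm_vaddr_t gva;
          int i;

          gva = vm_vaddr_alloc(vm, KVM_MAX_VCPUS * sizeof(struct ucall),
                               KVM_UTIL_MIN_VADDR);
          hva = addr_gva2hva(vm, gva);

          /* Stuff each struct's GPA into the struct itself. */
          for (i = 0; i < KVM_MAX_VCPUS; ++i)
                  hva[i].gpa = addr_gva2gpa(vm, gva + i * sizeof(struct ucall));

          ucall_pool = (struct ucall *)gva;
          sync_global_to_guest(vm, ucall_pool);
  }

  /* Guest side: grab a struct from the pool and report its GPA, not its GVA. */
  void ucall(uint64_t cmd, int nargs, ...)
  {
          struct ucall *uc = ucall_alloc();       /* pool allocation elided */

          uc->cmd = cmd;
          /* ... copy args as today ... */

          /* Host can map the reported GPA directly with addr_gpa2hva(). */
          asm volatile("in %[port], %%al"
                       : : [port] "d" (UCALL_PIO_PORT), "D" (uc->gpa)
                       : "rax", "memory");
  }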

>
> The inability to read guest private memory is really the only issue, and that can
> be easily solved without completely revamping the ucall framework, and without
> having to update a huge pile of tests to make them play nice with private memory.

I think the first 5 patches in this series are still relevant cleanups
vs. having a complete standalone ucall implementation for each arch, and Andrew
has also already started looking at other header cleanups related to
patch #1, so maybe Paolo would still like to queue those. They would also
provide a better starting point for having a centralized allocator for
the ucall structs, which you hinted at wanting below.

But the subsequent patches that add the ucall_shared() interfaces should
probably be set aside for now in favor of your proposal.

>
> This would also be a good opportunity to clean up the stupidity of tests having to
> manually call ucall_init(), drop the unused/pointless @arg from ucall_init(), and
> maybe even fix arm64's lurking landmine of not being SMP safe (the address is shared
> by all vCPUs).

I thought you *didn't* want to update a huge pile of tests :) I suppose
it's unavoidable, since with your proposal something like ucall_init()
needs to be called at some point, as opposed to the current implementation
where it is optional. Are you intending to have it called automatically by
vm_create*()?

>
> To reduce the burden on tests and avoid ordering issues with creating vCPUs,
> allocate a ucall struct for every possible vCPU when the VM is created and stuff
> the GPA of the struct in the struct itself so that the guest can communicate the
> GPA instead of the GVA. Then confidential VMs just need to make all structs shared.

So a separate call like:

ucall_make_shared(vm->ucall_list)

? Might need some good documentation/assertions to make sure it gets
called at the right place for confidential VMs, and may need some extra
hooks in the SEV selftest implementation for switching from private to
shared after the memory has already been allocated, but that seems
reasonable.
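
i.e. something like this on the SEV side, where vm_set_memory_shared() is a
placeholder for whatever helper the SEV test infrastructure ends up providing
for flipping an already-allocated region from private to shared (and the .gpa
field is per the pool sketch above):

  /* Called from SEV test setup after ucall_init(), before any vCPU runs. */
  void ucall_make_shared(struct kvm_vm *vm)
  {
          struct ucall *uc;

          list_for_each_entry(uc, &vm->ucall_list, list)
                  vm_set_memory_shared(vm, uc->gpa, sizeof(*uc));
  }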

>
> If all architectures have a way to access a vCPU ID, the ucall structs could be
> stored as a simple array. If not, a list based allocator would probably suffice.

I think a list allocator is nicer; generating #VCs for both the PIO and the
CPUID checks needed for vCPU lookup seems like a lot of extra noise to sift
through while debugging where an errant test is failing, and a list doesn't
seem to have any disadvantage vs. an array.
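
On the guest side the allocation could be as simple as popping from a free
list with a compare-and-swap, e.g. (sketch, assuming a .next field in struct
ucall; ABA issues and the like glossed over):

  static struct ucall *ucall_free_head;

  static struct ucall *ucall_alloc(void)
  {
          struct ucall *uc, *next;

          do {
                  uc = ucall_free_head;
                  GUEST_ASSERT(uc);
                  next = uc->next;
          } while (!__sync_bool_compare_and_swap(&ucall_free_head, uc, next));

          return uc;
  }

  static void ucall_free(struct ucall *uc)
  {
          /* Push back onto the free list once the host has consumed it. */
          do {
                  uc->next = ucall_free_head;
          } while (!__sync_bool_compare_and_swap(&ucall_free_head, uc->next, uc));
  }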

Thanks,

Mike
