Date: 2022-08-19
From: Andrew Jones
Subject: Re: [V3 10/11] KVM: selftests: Add ucall pool based implementation
On Thu, Aug 18, 2022 at 11:29:11PM +0000, Sean Christopherson wrote:
> On Thu, Aug 18, 2022, Andrew Jones wrote:
> > On Thu, Aug 18, 2022 at 04:00:40PM +0000, Sean Christopherson wrote:
> > > But why is "use_ucall_pool" optional? Unless there's a use case that fundamentally
> > > conflicts with the pool approach, let's make the pool approach the _only_ approach.
> > > IIRC, ARM's implementation isn't thread safe, i.e. there's at least one other use
> > > case that _needs_ the pool implementation.
> >
> > Really? The ucall structure is on the vcpu's stack like the other
> > architectures. Ah, you're probably thinking about the shared address used
> > to exit to userspace. The address doesn't matter as long as no VM maps
> > it, but, yes, a multi-VM test where the VMs have different maps could end
> > up breaking ucalls for one or more VMs.
>
> Yah, that's what I was thinking of.
>
> > It wouldn't be hard to make that address per-VM, though, if ever necessary.
> >
> > >
> > > By supporting both, we are signing ourselves up for extra maintenance and pain,
> > > e.g. inevitably we'll break whichever option isn't the default and not notice for
> > > quite some time.
> >
> > uc pools are currently limited to a single VM. That could be changed, but
> > at the expense of even more code to maintain.
>
> Unless I'm missing something, it's actually less code. "globals" that are sync'd
> to the guest aren't truly global, each VM has a separate physical page that is a
> copy of the global, hence the need for sync_global_to_guest().
>
> And FWIW, the code is actually "safe" in practice because I doubt any test spawns
> multiple VMs in parallel, i.e. the host might get confused over the value of
> ucall_pool, but guests won't stomp on each other so long as they're created
> serially. Ditto for ARM's ucall_exit_mmio_addr.
>
> To make this truly thread safe, we just need a way to atomically sync the pointer
> to the guest, and that's easy enough to add.

I like that.
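Something along these lines ought to do it, I think -- write the new value
straight into the VM's copy with a single pointer-sized store instead of
bouncing it through the host global (name and details are purely
illustrative, not taken from the attached patches):

#include <linux/compiler.h>     /* READ_ONCE()/WRITE_ONCE() from tools/include */
#include "kvm_util.h"

/*
 * Illustrative sketch only.  Resolve the guest's copy of a pointer-sized
 * "global" and update it with a single store, so VMs being set up in
 * parallel never depend on what the host global happens to hold.
 */
#define write_vm_global_ptr(vm, g, val) ({                              \
        typeof(g) *_p = addr_gva2hva(vm, (vm_vaddr_t)&(g));             \
                                                                        \
        WRITE_ONCE(*_p, (typeof(g))(val));                              \
})

i.e. ucall init could then write the pool pointer for that VM directly,
rather than assigning ucall_pool on the host and then
sync_global_to_guest()'ing it.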

>
> With that out of the way, all of the conditional "use pool" code goes away, which
> IMO simplifies things overall. Using a pool versus stack memory isn't _that_ much
> more complicated, and we _know_ the stack approach doesn't work for all VM types.
>
> Indeed, this partial conversion is:
>
> 7 files changed, 131 insertions(+), 18 deletions(-)
>
> versus a full conversion:
>
> 6 files changed, 89 insertions(+), 20 deletions(-)
>
> And simplification is only a secondary concern, what I'm really worried about is
> things bitrotting and then some poor sod having to wade through unrelated issues
> because somebody else broke the pool implementation and didn't notice.
>
> Argh, case in point, none of the x86 (or s390 or RISC-V) tests do ucall_init(),
> which would have been a lurking bug if any of them tried to switch to the pool.
>
> Argh + case in point #2, this code is broken, and that bug may have sat unnoticed
> due to only the esoteric SEV test using the pool.
>
> -static inline size_t uc_pool_idx(struct ucall *uc)
> +static noinline void ucall_free(struct ucall *uc)
> {
> - return uc->hva - ucall_pool->ucalls;
> -}
> -
> -static void ucall_free(struct ucall *uc)
> -{
> - clear_bit(uc_pool_idx(uc), ucall_pool->in_use);
> + /* Beware, here be pointer arithmetic. */
> + clear_bit(uc - ucall_pool->ucalls, ucall_pool->in_use);
> }

Dropping uc_pool_idx(), which was only used once, and adding the comment
looks better, but I don't think there was a bug, because uc == uc->hva.
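For context, the alloc side that pairs with ucall_free() is presumably
still roughly this shape (sketch, not the exact code from the patch;
ucall_pool's in_use bitmap, its ucalls array, and atomic_test_and_set_bit()
all come from the series):

static struct ucall *ucall_alloc(void)
{
        struct ucall *uc;
        int i;

        for (i = 0; i < KVM_MAX_VCPUS; i++) {
                if (!atomic_test_and_set_bit(i, ucall_pool->in_use)) {
                        uc = &ucall_pool->ucalls[i];
                        memset(uc->args, 0, sizeof(uc->args));
                        return uc;
                }
        }

        return NULL;    /* pool exhausted */
}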

>
>
> So my very strong vote is to fully switch to a single implementation. That way
> our code either works or it doesn't, i.e. we notice very quickly if it breaks.
>
> Peter, if I can convince Drew of One Pool To Rule Them All, can you slot in the

There are other hills for me to stand on, so I won't insist on this one.

> attached patches and slightly rework the ordering so that all SEV patches are at
> the end? E.g. something like:
>
> KVM: selftests: Automatically do init_ucall() for non-barebones VMs
> KVM: selftests: move vm_phy_pages_alloc() earlier in file
> KVM: selftests: sparsebit: add const where appropriate
> KVM: selftests: add hooks for managing encrypted guest memory
> KVM: selftests: handle encryption bits in page tables
> KVM: selftests: add support for encrypted vm_vaddr_* allocations
> KVM: selftests: Consolidate common code for populating
> KVM: selftests: Consolidate boilerplate code in get_ucall()
> tools: Add atomic_test_and_set_bit()
> KVM: selftests: Add macro to atomically sync per-VM "global" pointers
> KVM: selftests: Add ucall pool based implementation
> KVM: selftests: add library for creating/interacting with SEV guests
> KVM: selftests: Add simple sev vm testing
>
> The patches are tested on x86 and compile tested on arm. I can provide more testing
> if/when necessary. I also haven't addressed any other feedback in the "ucall pool"
> patch, though I'm guessing much of it no longer applies.

I skimmed the patches, but I don't know how to comment on attachments, so
I'll add the two things that came to mind here.

1) Doing ucall_init() at VM create time may be too early for Arm. Arm
probes for an unmapped address to use as its MMIO exit address, so it's
best to do that probing after all of the test's mappings are in place
(see the sketch below, after point 2).

2) We'll need to change the sanity checks in Arm's get_ucall() to not
check that the MMIO address == ucall_exit_mmio_addr since there's no
guarantee that the exiting guest's address matches whatever is lingering
in the host's version. We can either drop the sanity check altogether
or try to come up with something else.
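To make both points concrete, the arm64 pieces I'm talking about look
roughly like this (simplified sketch; try_ucall_mmio_addr() and
is_ucall_exit() are made-up names for illustration):

#include "kvm_util.h"

static vm_vaddr_t *ucall_exit_mmio_addr;        /* host copy, synced per-VM */

/*
 * Point 1: the probe picks a GPA that no memslot covers, so it only gives
 * a reliable answer once the test has finished setting up its memslots
 * and mappings.
 */
static bool try_ucall_mmio_addr(struct kvm_vm *vm, vm_paddr_t gpa)
{
        /* Must not overlap an existing memslot, or stores won't exit as MMIO. */
        if (kvm_userspace_memory_region_find(vm, gpa, gpa + 1))
                return false;

        virt_pg_map(vm, gpa, gpa);      /* identity map so the guest can write to it */

        ucall_exit_mmio_addr = (vm_vaddr_t *)gpa;
        sync_global_to_guest(vm, ucall_exit_mmio_addr);
        return true;
}

/*
 * Point 2: this is the comparison that can't stay once the address is
 * truly per-VM -- the host's global may hold a different VM's address
 * than the one the exiting guest actually wrote to.
 */
static bool is_ucall_exit(struct kvm_vcpu *vcpu)
{
        struct kvm_run *run = vcpu->run;

        if (run->exit_reason != KVM_EXIT_MMIO)
                return false;

        if (run->mmio.phys_addr != (uint64_t)ucall_exit_mmio_addr)
                return false;

        TEST_ASSERT(run->mmio.is_write && run->mmio.len == sizeof(uint64_t),
                    "Unexpected ucall exit mmio address access");
        return true;
}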

Thanks,
drew
