Subject: Re: [RFC V2 PATCH 2/8] selftests: kvm: Add a basic selftest to test private memory
On Wed, Jul 20, 2022 at 4:03 PM Sean Christopherson <seanjc@google.com> wrote:
> ...
> > + * which doesn't handle global offset table updates. Calling standard libc
> > + * functions would normally result in referring to the global offset table.
> > + * Adding O1 here seems to prohibit compiler from replacing the memory
> > + * operations with standard libc functions such as memset.
> > + */
>
> Eww. We should either fix kvm_vm_elf_load() or override the problematic libc
> variants. Playing games with per-function attributes is not maintainable.
>

I will try to spend more time on how kvm_vm_elf_load() can be modified
to handle GOT fixups in different scenarios, including statically and
dynamically linked selftest binaries, since I currently have limited
information here.

But modifying kvm_vm_elf_load() to fix up GOT entries will not be
sufficient on its own, as guest VM code (possibly the whole selftest
binary) will also need to be compiled with flags that keep the
memset/memcpy implementations compatible with the specific guest VM
configuration, e.g. the AVX extension. The same concern is outlined in
https://elixir.bootlin.com/linux/latest/source/tools/testing/selftests/kvm/lib/x86_64/svm.c#L64.

Would it be OK to maintain the selftest binary compilation flags based
on the guest VM configuration?
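
For reference, if we instead end up overriding the problematic libc
variants for guest code, trivial freestanding versions along the lines
of the sketch below (untested; guest_memset()/guest_memcmp() are
illustrative names, actually overriding the libc symbols would mean
using the real names and dropping "static") should avoid both the GOT
references and the ISA-specific libc implementations:

static void guest_memset(void *mem, uint8_t byte, size_t size)
{
	/*
	 * Going through a volatile pointer keeps the compiler from
	 * rewriting the loop as a call to the libc memset(), which is
	 * what the optimize("O1") attribute was working around.
	 */
	volatile uint8_t *p = mem;

	while (size--)
		*p++ = byte;
}

static int guest_memcmp(const void *a, const void *b, size_t size)
{
	const volatile uint8_t *pa = a, *pb = b;

	while (size--) {
		if (*pa != *pb)
			return *pa - *pb;
		pa++;
		pb++;
	}

	return 0;
}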

> > +static bool __attribute__((optimize("O1"))) do_mem_op(enum mem_op op,
> > + void *mem, uint64_t pat, uint32_t size)
>
> Oof. Don't be so aggressive in shortening names, _especially_ when there's no
> established/universal abbreviation. It took me forever to figure out that "pat"
> is "pattern". And for x86, "pat" is especially confusing because it's already
> a very well-established name that just so happens to be relevant to memory types,
> just a different kind of a memory type...
>
> > +{
> > + uint64_t *buf = (uint64_t *)mem;
> > + uint32_t chunk_size = sizeof(pat);
> > + uint64_t mem_addr = (uint64_t)mem;
> > +
> > + if (((mem_addr % chunk_size) != 0) || ((size % chunk_size) != 0))
>
> All the patterns are a repeating byte, why restrict this to 8-byte chunks? Then
> this confusing assert-but-not-an-assert goes away.
>
> > + return false;
> > +
> > + for (uint32_t i = 0; i < (size / chunk_size); i++) {
> > + if (op == SET_PAT)
> > + buf[i] = pat;
> > + if (op == VERIFY_PAT) {
> > + if (buf[i] != pat)
> > + return false;
>
> If overriding memset() and memcmp() doesn't work for whatever reason, add proper
> helpers instead of a do_stuff() wrapper.
>
> > + }
> > + }
> > +
> > + return true;
> > +}
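
Regarding the repeating-byte patterns and the do_mem_op() wrapper
above: splitting this into dedicated, byte-granular helpers roughly as
sketched below (untested) would also drop the 8-byte chunk/alignment
restriction:

static void guest_set_mem_pattern(void *mem, uint8_t pattern, uint32_t size)
{
	/* volatile keeps the compiler from emitting a memset() call. */
	volatile uint8_t *buf = mem;
	uint32_t i;

	for (i = 0; i < size; i++)
		buf[i] = pattern;
}

static bool guest_verify_mem_pattern(const void *mem, uint8_t pattern,
				     uint32_t size)
{
	const volatile uint8_t *buf = mem;
	uint32_t i;

	for (i = 0; i < size; i++) {
		if (buf[i] != pattern)
			return false;
	}

	return true;
}
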
> > +
> > +/* Test to verify guest private accesses on private memory with following steps:
> > + * 1) Upon entry, guest signals VMM that it has started.
> > + * 2) VMM populates the shared memory with known pattern and continues guest
> > + * execution.
> > + * 3) Guest writes a different pattern on the private memory and signals VMM
> > + * that it has updated private memory.
> > + * 4) VMM verifies its shared memory contents to be same as the data populated
> > + * in step 2 and continues guest execution.
> > + * 5) Guest verifies its private memory contents to be same as the data
> > + * populated in step 3 and marks the end of the guest execution.
> > + */
> > +#define PMPAT_ID 0
> > +#define PMPAT_DESC "PrivateMemoryPrivateAccessTest"
> > +
> > +/* Guest code execution stages for private mem access test */
> > +#define PMPAT_GUEST_STARTED 0ULL
> > +#define PMPAT_GUEST_PRIV_MEM_UPDATED 1ULL
> > +
> > +static bool pmpat_handle_vm_stage(struct kvm_vm *vm,
> > + void *test_info,
> > + uint64_t stage)
>
>
> Align parameters, both in prototypes and in invocations. And don't wrap unnecessarily.
>
> static bool pmpat_handle_vm_stage(struct kvm_vm *vm, void *test_info,
> uint64_t stage)
>
>
> Or even let that poke out (probably not in this case, but do keep in mind that the
> 80 char "limit" is a soft limit that can be broken if doing so yields more readable
> code).
>
> static bool pmpat_handle_vm_stage(struct kvm_vm *vm, void *test_info, uint64_t stage)
>
> > +{
> > + void *shared_mem = ((struct test_run_helper *)test_info)->shared_mem;
> > +
> > + switch (stage) {
> > + case PMPAT_GUEST_STARTED: {
> > + /* Initialize the contents of shared memory */
> > + TEST_ASSERT(do_mem_op(SET_PAT, shared_mem,
> > + TEST_MEM_DATA_PAT1, TEST_MEM_SIZE),
> > + "Shared memory update failure");
>
> Align indentation (here and many other places).
>
> > + VM_STAGE_PROCESSED(PMPAT_GUEST_STARTED);
> > + break;
> > + }
> > + case PMPAT_GUEST_PRIV_MEM_UPDATED: {
> > + /* verify host updated data is still intact */
> > + TEST_ASSERT(do_mem_op(VERIFY_PAT, shared_mem,
> > + TEST_MEM_DATA_PAT1, TEST_MEM_SIZE),
> > + "Shared memory view mismatch");
> > + VM_STAGE_PROCESSED(PMPAT_GUEST_PRIV_MEM_UPDATED);
> > + break;
> > + }
> > + default:
> > + printf("Unhandled VM stage %ld\n", stage);
> > + return false;
> > + }
> > +
> > + return true;
> > +}
> > +
> > +static void pmpat_guest_code(void)
> > +{
> > + void *priv_mem = (void *)TEST_MEM_GPA;
> > + int ret;
> > +
> > + GUEST_SYNC(PMPAT_GUEST_STARTED);
> > +
> > + /* Mark the GPA range to be treated as always accessed privately */
> > + ret = kvm_hypercall(KVM_HC_MAP_GPA_RANGE, TEST_MEM_GPA,
> > + TEST_MEM_SIZE >> MIN_PAGE_SHIFT,
> > + KVM_MARK_GPA_RANGE_ENC_ACCESS, 0);
> > + GUEST_ASSERT_1(ret == 0, ret);
>
> "!ret" instead of "ret == 0"
>
> > +
> > + GUEST_ASSERT(do_mem_op(SET_PAT, priv_mem, TEST_MEM_DATA_PAT2,
> > + TEST_MEM_SIZE));
> > + GUEST_SYNC(PMPAT_GUEST_PRIV_MEM_UPDATED);
> > +
> > + GUEST_ASSERT(do_mem_op(VERIFY_PAT, priv_mem,
> > + TEST_MEM_DATA_PAT2, TEST_MEM_SIZE));
> > +
> > + GUEST_DONE();
> > +}
> > +
> > +static struct test_run_helper priv_memfd_testsuite[] = {
> > + [PMPAT_ID] = {
> > + .test_desc = PMPAT_DESC,
> > + .vmst_handler = pmpat_handle_vm_stage,
> > + .guest_fn = pmpat_guest_code,
> > + },
> > +};
>
> ...
>
> > +/* Do private access to the guest's private memory */
> > +static void setup_and_execute_test(uint32_t test_id)
>
> This helper appears to be the bulk of the shared code between tests. This can
> and should be a helper to create a VM with private memory. Not sure what to call
> such a helper, maybe vm_create_with_private_memory()? A little verbose, but
> literal isn't always bad.
>
> > +{
> > + struct kvm_vm *vm;
> > + int priv_memfd;
> > + int ret;
> > + void *shared_mem;
> > + struct kvm_enable_cap cap;
> > +
> > + vm = vm_create_default(VCPU_ID, 0,
> > + priv_memfd_testsuite[test_id].guest_fn);
> > +
> > + /* Allocate shared memory */
> > + shared_mem = mmap(NULL, TEST_MEM_SIZE,
> > + PROT_READ | PROT_WRITE,
> > + MAP_PRIVATE | MAP_ANONYMOUS | MAP_NORESERVE, -1, 0);
> > + TEST_ASSERT(shared_mem != MAP_FAILED, "Failed to mmap() host");
> > +
> > + /* Allocate private memory */
> > + priv_memfd = memfd_create("vm_private_mem", MFD_INACCESSIBLE);
> > + TEST_ASSERT(priv_memfd != -1, "Failed to create priv_memfd");
> > + ret = fallocate(priv_memfd, 0, 0, TEST_MEM_SIZE);
> > + TEST_ASSERT(ret != -1, "fallocate failed");
> > +
> > + priv_memory_region_add(vm, shared_mem,
> > + TEST_MEM_SLOT, TEST_MEM_SIZE,
> > + TEST_MEM_GPA, priv_memfd, 0);
> > +
> > + pr_info("Mapping test memory pages 0x%x page_size 0x%x\n",
> > + TEST_MEM_SIZE/vm_get_page_size(vm),
> > + vm_get_page_size(vm));
> > + virt_map(vm, TEST_MEM_GPA, TEST_MEM_GPA,
> > + (TEST_MEM_SIZE/vm_get_page_size(vm)));
> > +
> > + /* Enable exit on KVM_HC_MAP_GPA_RANGE */
> > + pr_info("Enabling exit on map_gpa_range hypercall\n");
> > + ret = ioctl(vm_get_fd(vm), KVM_CHECK_EXTENSION, KVM_CAP_EXIT_HYPERCALL);
> > + TEST_ASSERT(ret & (1 << KVM_HC_MAP_GPA_RANGE),
> > + "VM exit on MAP_GPA_RANGE HC not supported");
>
> Impressively bizarre indentation :-)
>

Thanks Sean for all the feedback here. I will address the comments in
the next series.
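
For the VM setup, a vm_create_with_private_memory() style helper built
from the existing code could look roughly like the sketch below
(untested; the KVM_CAP_EXIT_HYPERCALL handling and returning the
private memfd to the caller are elided):

static struct kvm_vm *vm_create_with_private_memory(uint32_t test_id,
						    void **shared_mem)
{
	struct kvm_vm *vm;
	void *mem;
	int priv_memfd, ret;

	vm = vm_create_default(VCPU_ID, 0,
			       priv_memfd_testsuite[test_id].guest_fn);

	/* Shared backing for the test memslot. */
	mem = mmap(NULL, TEST_MEM_SIZE, PROT_READ | PROT_WRITE,
		   MAP_PRIVATE | MAP_ANONYMOUS | MAP_NORESERVE, -1, 0);
	TEST_ASSERT(mem != MAP_FAILED, "Failed to mmap() host");

	/* Private backing via an inaccessible memfd. */
	priv_memfd = memfd_create("vm_private_mem", MFD_INACCESSIBLE);
	TEST_ASSERT(priv_memfd != -1, "Failed to create priv_memfd");
	ret = fallocate(priv_memfd, 0, 0, TEST_MEM_SIZE);
	TEST_ASSERT(ret != -1, "fallocate failed");

	priv_memory_region_add(vm, mem, TEST_MEM_SLOT, TEST_MEM_SIZE,
			       TEST_MEM_GPA, priv_memfd, 0);
	virt_map(vm, TEST_MEM_GPA, TEST_MEM_GPA,
		 TEST_MEM_SIZE / vm_get_page_size(vm));

	*shared_mem = mem;
	return vm;
}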

Regards,
Vishal
