Subject: Re: [PATCH] selftests: kvm: fix overlapping addresses in memslot_perf_test
On 30.05.2021 01:13, Maciej S. Szmigiero wrote:
> On 29.05.2021 12:20, Paolo Bonzini wrote:
>> On 28/05/21 21:51, Maciej S. Szmigiero wrote:
>>> On 28.05.2021 21:11, Paolo Bonzini wrote:
>>>> The memory that is allocated in vm_create is already mapped close to
>>>> GPA 0, because test_execute passes the requested memory to
>>>> prepare_vm.  This causes overlapping memory regions and the
>>>> test crashes.  For simplicity just move MEM_GPA higher.
>>>>
>>>> Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
>>>
>>> I am not sure that I understand the issue correctly, is vm_create_default()
>>> already reserving low GPAs (around 0x10000000) on some arches or run
>>> environments?
>>
>> It maps the number of pages you pass in the second argument, see
>> vm_create.
>>
>>    if (phy_pages != 0)
>>      vm_userspace_mem_region_add(vm, VM_MEM_SRC_ANONYMOUS,
>>                                  0, 0, phy_pages, 0);
>>
>> In this case:
>>
>>    data->vm = vm_create_default(VCPU_ID, mempages, guest_code);
>>
>> called here:
>>
>>    if (!prepare_vm(data, nslots, maxslots, tdata->guest_code,
>>                    mem_size, slot_runtime)) {
>>
>> where mempages is mem_size, which is declared as:
>>
>>          uint64_t mem_size = tdata->mem_size ? : MEM_SIZE_PAGES;
>>
>> but actually a better fix is just to pass a small fixed value (e.g. 1024) to vm_create_default,
>> since all other regions are added by hand
>
> Yes, but the argument that is passed to vm_create_default() (mem_size
> in the case of the test) is not passed as phy_pages to vm_create().
> Rather, vm_create_with_vcpus() calculates some upper bound of extra
> memory that is needed to cover that much guest memory (including for
> its page tables).
>
> The biggest possible mem_size from memslot_perf_test is 512 MiB + 1 page,
> according to my calculations this results in phy_pages of 1029 (~4 MiB)
> in the x86-64 case and around 1540 (~6 MiB) in the s390x case (here I am
> not sure about the exact number, since s390x has some additional alignment
> requirements).
>
> Both values are well below 256 MiB (0x10000000UL), so I was wondering
> what kind of circumstances can make these allocations collide
> (maybe I am missing something in my analysis).
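
For reference, the quoted x86-64 estimate of 1029 pages can be reproduced
with a minimal sketch, assuming the pre-patch vm_create_with_vcpus()
formula and the library constants DEFAULT_GUEST_PHY_PAGES == 512,
DEFAULT_STACK_PGS == 5 and PTES_PER_MIN_PAGE == 512 (4 KiB pages with
8-byte PTEs); this is an illustration, not a copy of the library code:

   #include <inttypes.h>
   #include <stdint.h>
   #include <stdio.h>

   int main(void)
   {
           /* 512 MiB + 1 page of guest memory, in 4 KiB pages */
           uint64_t extra_mem_pages = (512ULL << 20) / 4096 + 1;
           /* one vCPU, no per-vCPU pages: just its stack */
           uint64_t vcpu_pages = 5 /* DEFAULT_STACK_PGS */ * 1;
           /* upper bound on page-table pages: N/512 + N/512^2 + ... < N/512 * 2 */
           uint64_t extra_pg_pages = (extra_mem_pages + vcpu_pages) / 512 * 2;
           /* what ends up as phy_pages in vm_create() before the patch */
           uint64_t phy_pages = 512 /* DEFAULT_GUEST_PHY_PAGES */ +
                                vcpu_pages + extra_pg_pages;

           printf("phy_pages = %" PRIu64 "\n", phy_pages); /* prints 1029 */
           return 0;
   }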

I see now that a patch called "selftests: kvm: make allocation of extra
memory take effect" by Zhenzhong was merged last week, which makes
vm_create_default() also allocate the whole memory size passed to it
(instead of just the page tables needed to cover that much memory).

The commit message of this patch says that "perf_test_util and
kvm_page_table_test use it to alloc extra memory currently". However, both
kvm_page_table_test and the lib/perf_test_util framework already add the
required memory explicitly, via a vm_userspace_mem_region_add() call for
the same memory size that they pass to vm_create_default().

So now they allocate this memory twice.
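
Roughly, the pattern looks like this (a simplified sketch; guest_num_pages,
guest_gpa and TEST_MEM_SLOT_INDEX are placeholder names, not the exact
identifiers used by these tests):

   /* After the patch, vm_create_default() already backs guest_num_pages
    * of memory (plus page tables) on its own... */
   vm = vm_create_default(VCPU_ID, guest_num_pages, guest_code);

   /* ...and then the test adds a memslot of the same size by hand: */
   vm_userspace_mem_region_add(vm, VM_MEM_SRC_ANONYMOUS,
                               guest_gpa, TEST_MEM_SLOT_INDEX,
                               guest_num_pages, 0);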

@Zhenzhong: did you notice improper operation of either kvm_page_table_test
or perf_test_util-based tests (demand_paging_test, dirty_log_perf_test,
memslot_modification_stress_test) before your patch?

They seem to work fine for me without the patch (and I guess other
people would have noticed earlier, too, if they were broken).

After this patch, not only do these tests allocate their memory twice, but
it also becomes harder to make vm_create_default() allocate the right
amount of memory for page tables in cases where the test needs to use
vm_userspace_mem_region_add() explicitly for its allocations (because it
wants the allocation placed at a specific GPA or in a specific memslot).

One then basically has to open-code the page table size calculation from
vm_create_with_vcpus() in the particular test, also taking into account
that vm_create_with_vcpus() will not only allocate the passed memory size
(the calculated page table size) but will also behave as if it had to
allocate page-table space for these page tables themselves (even though
the passed memory size is already supposed to cover them).
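
For illustration only (not code from any existing test), getting just the
page-table backing for npages of memory that the test maps itself would
look something like this, under the same assumed constants as above:

   /* The test maps npages itself via vm_userspace_mem_region_add(),
    * so only their page tables need to come from vm_create_default(): */
   uint64_t pt_pages = npages / 512 /* PTES_PER_MIN_PAGE */ * 2;

   /* ...yet after the patch vm_create_default() treats pt_pages as
    * ordinary guest memory and adds page-table space for it again. */
   vm = vm_create_default(VCPU_ID, pt_pages, guest_code);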

Due to the above, I suspect the previous behavior was, in fact, correct.

Thanks,
Maciej
