Subject: Re: [PATCH v1 01/18] KVM: selftests/kvm_util: use array of pointers to maintain vcpus in kvm_vm
On Mon, Oct 24, 2022, Wei Wang wrote:
> Each vcpu has an id associated with it, so referencing a vcpu by indexing
> into an array with "vcpu->id" is intrinsically faster and easier than
> walking the list of vcpus used in the current implementation. Change the
> vcpu list to an array of vcpu pointers. Users then don't need to allocate
> such a vcpu array on their own, and can instead reuse the one maintained
> in kvm_vm.
>
> Signed-off-by: Wei Wang <wei.w.wang@intel.com>
> ---
> .../testing/selftests/kvm/include/kvm_util.h | 4 +++
> .../selftests/kvm/include/kvm_util_base.h | 3 +-
> tools/testing/selftests/kvm/lib/kvm_util.c | 34 ++++++-------------
> tools/testing/selftests/kvm/lib/x86_64/vmx.c | 2 +-
> 4 files changed, 17 insertions(+), 26 deletions(-)
>
> diff --git a/tools/testing/selftests/kvm/include/kvm_util.h b/tools/testing/selftests/kvm/include/kvm_util.h
> index c9286811a4cb..5d5c8968fb06 100644
> --- a/tools/testing/selftests/kvm/include/kvm_util.h
> +++ b/tools/testing/selftests/kvm/include/kvm_util.h
> @@ -10,4 +10,8 @@
> #include "kvm_util_base.h"
> #include "ucall_common.h"
>
> +#define vm_iterate_over_vcpus(vm, vcpu, i) \

vm_for_each_vcpu() would be better aligned with existing KVM terminology.

> + for (i = 0, vcpu = vm->vcpus[0]; \
> + vcpu && i < KVM_MAX_VCPUS; vcpu = vm->vcpus[++i])

I hate pointer arithmetic more than most people, but in this case it avoids the
need to pass in 'i', which in turn cuts down on boilerplate and churn.
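
E.g. something like this (untested sketch; the __vcpup temporary is my own
strawman, not something in this patch):

#define vm_for_each_vcpu(vm, vcpu)						\
	for (struct kvm_vcpu **__vcpup = (vm)->vcpus;				\
	     __vcpup < &(vm)->vcpus[KVM_MAX_VCPUS] && ((vcpu) = *__vcpup);	\
	     __vcpup++)

Callers then only need to declare the vcpu itself:

	struct kvm_vcpu *vcpu;

	vm_for_each_vcpu(vm, vcpu)
		vcpu_arch_free(vcpu);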

> #endif /* SELFTEST_KVM_UTIL_H */
> diff --git a/tools/testing/selftests/kvm/include/kvm_util_base.h b/tools/testing/selftests/kvm/include/kvm_util_base.h
> index e42a09cd24a0..c90a9609b853 100644
> --- a/tools/testing/selftests/kvm/include/kvm_util_base.h
> +++ b/tools/testing/selftests/kvm/include/kvm_util_base.h
> @@ -45,7 +45,6 @@ struct userspace_mem_region {
> };
>
> struct kvm_vcpu {
> - struct list_head list;
> uint32_t id;
> int fd;
> struct kvm_vm *vm;
> @@ -75,7 +74,6 @@ struct kvm_vm {
> unsigned int pa_bits;
> unsigned int va_bits;
> uint64_t max_gfn;
> - struct list_head vcpus;
> struct userspace_mem_regions regions;
> struct sparsebit *vpages_valid;
> struct sparsebit *vpages_mapped;
> @@ -92,6 +90,7 @@ struct kvm_vm {
> int stats_fd;
> struct kvm_stats_header stats_header;
> struct kvm_stats_desc *stats_desc;
> + struct kvm_vcpu *vcpus[KVM_MAX_VCPUS];

We can dynamically allocate the array without too much trouble, though I'm not
sure it's worth it just to shave a few KiB of memory. For __vm_create(), the
number of vCPUs is known when the VM is created. For vm_create_barebones(), we
could do the simple thing of allocating KVM_MAX_VCPUS entries.
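
Rough sketch, assuming "vcpus" becomes a "struct kvm_vcpu **" and kvm_vm grows
a "nr_vcpus" capacity field (hypothetical, not part of this patch):

	static void vm_alloc_vcpu_array(struct kvm_vm *vm, uint32_t nr_vcpus)
	{
		/* Zero-initialize so that empty slots read as NULL. */
		vm->vcpus = calloc(nr_vcpus, sizeof(*vm->vcpus));
		TEST_ASSERT(vm->vcpus, "Failed to allocate vCPU array, nr_vcpus: %u",
			    nr_vcpus);

		/* Capacity, not the number of vCPUs currently added. */
		vm->nr_vcpus = nr_vcpus;
	}

__vm_create() would pass the known vCPU count, vm_create_barebones() would
pass KVM_MAX_VCPUS, and the iterator's upper bound would then become
vm->nr_vcpus instead of KVM_MAX_VCPUS.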

> @@ -534,6 +533,10 @@ __weak void vcpu_arch_free(struct kvm_vcpu *vcpu)
> static void vm_vcpu_rm(struct kvm_vm *vm, struct kvm_vcpu *vcpu)
> {
> int ret;
> + uint32_t vcpu_id = vcpu->id;
> +
> + TEST_ASSERT(!!vm->vcpus[vcpu_id], "vCPU%d wasn't added\n", vcpu_id);

This is unnecessary; there's only one caller, and it's already iterating over
the array of vCPUs.
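
I.e. if the teardown loop ends up as something like (sketch, assuming the
vm_for_each_vcpu() iterator suggested above):

	vm_for_each_vcpu(vm, vcpu)
		vm_vcpu_rm(vm, vcpu);

then vm->vcpus[vcpu->id] is non-NULL by construction.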
