Subject: Re: [PATCH v1 05/18] KVM: selftests/hardware_disable_test: code consolidation and cleanup
On Thu, Oct 27, 2022, Wang, Wei W wrote:
> On Thursday, October 27, 2022 8:16 AM, Sean Christopherson wrote:
> > > diff --git a/tools/testing/selftests/kvm/hardware_disable_test.c
> > >  static void run_test(uint32_t run)
> > >  {
> > >  	struct kvm_vcpu *vcpu;
> > >  	struct kvm_vm *vm;
> > >  	cpu_set_t cpu_set;
> > > -	pthread_t threads[VCPU_NUM];
> > >  	pthread_t throw_away;
> > > -	void *b;
> > > +	pthread_attr_t attr;
> > >  	uint32_t i, j;
> > > +	int r;
> > >
> > >  	CPU_ZERO(&cpu_set);
> > >  	for (i = 0; i < VCPU_NUM; i++)
> > >  		CPU_SET(i, &cpu_set);
> >
> > Uh, what is this test doing? I assume the intent is to avoid spamming all
> > pCPUs in the system, but I don't get the benefit of doing so.
>
> IIUIC, it is to test whether the race condition between the 2 paths:
> #1 kvm_arch_hardware_disable()->drop_user_return_notifiers() and
> #2 fire_user_return_notifiers()->kvm_on_user_return()
> has been solved by disabling interrupts in kvm_on_user_return().
>
> To stress the test, it creates a bunch of threads (continuously making
> syscalls to trigger #2 above) to be scheduled on the same pCPU that runs a
> vCPU, and then the VM is killed, which triggers #1 above.
> The test forks 512 times, hoping that #1 and #2 above eventually happen at
> the same time without an issue.

But why does it matter what pCPU a vCPU is running on? Wouldn't the probability
of triggering a race between kvm_on_user_return() and hardware_disable() be
_higher_ if there are more pCPUs returning to userspace?
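
(For context, the race being exercised is roughly the one below; this is a
paraphrase of the description above, not the exact arch/x86/kvm/x86.c code.)

/* #2: fired when the task that registered the notifier returns to
 * userspace. Interrupts are disabled around the unregister so that #1,
 * arriving on the same pCPU (e.g. via IPI during VM teardown), cannot
 * race with it and tear down the same per-CPU state twice. */
static void kvm_on_user_return(struct user_return_notifier *urn)
{
	struct kvm_user_return_msrs *msrs =
		container_of(urn, struct kvm_user_return_msrs, urn);
	unsigned long flags;

	local_irq_save(flags);
	if (msrs->registered) {
		msrs->registered = false;
		user_return_notifier_unregister(urn);
	}
	local_irq_restore(flags);

	/* ... restore the host values of the user-return MSRs ... */
}

/* #1: on hardware disable, run on each pCPU to drop any user-return
 * notifier still registered for that pCPU. */
static void drop_user_return_notifiers(void)
{
	struct kvm_user_return_msrs *msrs = this_cpu_ptr(user_return_msrs);

	if (msrs->registered)
		kvm_on_user_return(&msrs->urn);
}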
