From: Paolo Bonzini <>
Subject: Re: [PATCH 1/2] KVM: x86: reduce pvclock_gtod_sync_lock critical sections
Date: Thu, 8 Apr 2021 10:15:16 +0200
On 07/04/21 19:40, Marcelo Tosatti wrote:
>> diff --git a/arch/x86/kvm/x86.c b/arch/x86/kvm/x86.c
>> index fe806e894212..0a83eff40b43 100644
>> --- a/arch/x86/kvm/x86.c
>> +++ b/arch/x86/kvm/x86.c
>> @@ -2562,10 +2562,12 @@ static void kvm_gen_update_masterclock(struct kvm *kvm)
>>  
>>  	kvm_hv_invalidate_tsc_page(kvm);
>>  
>> -	spin_lock(&ka->pvclock_gtod_sync_lock);
>>  	kvm_make_mclock_inprogress_request(kvm);
>> +
> 
> Might be good to serialize against two kvm_gen_update_masterclock
> callers? Otherwise one caller could clear KVM_REQ_MCLOCK_INPROGRESS,
> while the other is still at pvclock_update_vm_gtod_copy().
Makes sense, but this stuff has always seemed unnecessarily complicated to me.
KVM_REQ_MCLOCK_INPROGRESS is only needed to kick running vCPUs out of the execution loop; clearing it in kvm_gen_update_masterclock is unnecessary, because KVM_REQ_CLOCK_UPDATE takes pvclock_gtod_sync_lock too and thus will already wait for pvclock_update_vm_gtod_copy to end.
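To make that concrete, this is roughly the serialization I'm relying on (sketch only: the two wrapper functions are invented, but the lock, the master_* fields and pvclock_update_vm_gtod_copy() are the ones in x86.c):

/*
 * Sketch, not the exact upstream code.  Both paths take
 * ka->pvclock_gtod_sync_lock, so a vcpu servicing KVM_REQ_CLOCK_UPDATE
 * already waits for the masterclock rewrite to finish and can never see
 * a half-updated copy.
 */
static void masterclock_writer(struct kvm *kvm)	/* kvm_gen_update_masterclock() side */
{
	struct kvm_arch *ka = &kvm->arch;

	spin_lock(&ka->pvclock_gtod_sync_lock);
	pvclock_update_vm_gtod_copy(kvm);	/* rewrites master_kernel_ns/master_cycle_now */
	spin_unlock(&ka->pvclock_gtod_sync_lock);
}

static void masterclock_reader(struct kvm_vcpu *vcpu)	/* kvm_guest_time_update() side */
{
	struct kvm_arch *ka = &vcpu->kvm->arch;
	u64 host_tsc = 0, kernel_ns = 0;

	spin_lock(&ka->pvclock_gtod_sync_lock);	/* blocks until the writer above is done */
	if (ka->use_master_clock) {
		host_tsc = ka->master_cycle_now;
		kernel_ns = ka->master_kernel_ns;
	}
	spin_unlock(&ka->pvclock_gtod_sync_lock);
	/* ... compute the guest's pvclock from host_tsc/kernel_ns ... */
}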
I think it's possible to use a seqcount in KVM_REQ_CLOCK_UPDATE instead of KVM_REQ_MCLOCK_INPROGRESS. Both cause the vCPUs to spin. I'll take a look.
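Something along these lines (again just a sketch: ka->pvclock_sc and the two helpers are made up, and writers would still need the existing spinlock or kvm->lock for mutual exclusion; the seqcount only lets readers detect an in-progress update and retry):

/* #include <linux/seqlock.h>, seqcount_t pvclock_sc added to struct kvm_arch */

static void masterclock_update_seq(struct kvm *kvm)
{
	struct kvm_arch *ka = &kvm->arch;

	/* Existing lock keeps concurrent updaters out and preemption off. */
	spin_lock(&ka->pvclock_gtod_sync_lock);
	write_seqcount_begin(&ka->pvclock_sc);
	pvclock_update_vm_gtod_copy(kvm);
	write_seqcount_end(&ka->pvclock_sc);
	spin_unlock(&ka->pvclock_gtod_sync_lock);
}

static void masterclock_read_seq(struct kvm_vcpu *vcpu, u64 *kernel_ns, u64 *cycle_now)
{
	struct kvm_arch *ka = &vcpu->kvm->arch;
	unsigned int seq;

	/* vcpu side spins only while an update is actually in flight. */
	do {
		seq = read_seqcount_begin(&ka->pvclock_sc);
		*kernel_ns = ka->master_kernel_ns;
		*cycle_now = ka->master_cycle_now;
	} while (read_seqcount_retry(&ka->pvclock_sc, seq));
}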
Paolo