From: Paolo Bonzini
Subject: Re: [PATCH 1/2] KVM: x86: reduce pvclock_gtod_sync_lock critical sections
Date: 2021-04-08

On 08/04/21 14:00, Marcelo Tosatti wrote:
>>
>> KVM_REQ_MCLOCK_INPROGRESS is only needed to kick running vCPUs out of the
>> execution loop;
> We do not want vcpus with different system_timestamp/tsc_timestamp
> pairs:
>
> * To avoid that problem, do not allow visibility of distinct
> * system_timestamp/tsc_timestamp values simultaneously: use a master
> * copy of host monotonic time values. Update that master copy
> * in lockstep.
>
> So KVM_REQ_MCLOCK_INPROGRESS also ensures that no vcpu enters
> guest mode (via vcpu->requests check before VM-entry) with a
> different system_timestamp/tsc_timestamp pair.

Yes, this is what KVM_REQ_MCLOCK_INPROGRESS does, but it does not have
to be done that way. All you really need is the IPI with
KVM_REQUEST_WAIT, which ensures that updates happen after the vCPUs
have exited guest mode. You don't need to loop on vcpu->requests, for
example, because kvm_guest_time_update could just spin on
pvclock_gtod_sync_lock until pvclock_update_vm_gtod_copy is done.
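
Roughly like this; just a sketch of the ordering, not the eventual
patch. KVM_ARCH_REQ_FLAGS, KVM_REQUEST_WAIT and
kvm_make_all_cpus_request() are the existing APIs, but giving
KVM_REQ_CLOCK_UPDATE the WAIT semantics (today it is a plain
KVM_ARCH_REQ) is an assumption here:

/* Hypothetical: let the existing request carry the WAIT flag. */
#define KVM_REQ_CLOCK_UPDATE \
	KVM_ARCH_REQ_FLAGS(8, KVM_REQUEST_WAIT | KVM_REQUEST_NO_WAKEUP)

static void kvm_sync_masterclock(struct kvm *kvm)
{
	struct kvm_arch *ka = &kvm->arch;

	spin_lock(&ka->pvclock_gtod_sync_lock);

	/*
	 * KVM_REQUEST_WAIT makes the kick synchronous: this returns
	 * only once every vCPU has acked the IPI, i.e. has left guest
	 * mode.  A vCPU that re-enters sees the pending request and
	 * calls kvm_guest_time_update(), which spins on
	 * pvclock_gtod_sync_lock until the update below is finished.
	 */
	kvm_make_all_cpus_request(kvm, KVM_REQ_CLOCK_UPDATE);

	/* Update the master copy while nobody is in guest mode. */
	pvclock_update_vm_gtod_copy(kvm);

	spin_unlock(&ka->pvclock_gtod_sync_lock);
}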

So this morning I tried protecting the kvm->arch fields for kvmclock
using a seqcount, which is also nice because get_kvmclock_ns() no
longer has to bounce the cacheline of pvclock_gtod_sync_lock. I'll
post it tomorrow or next week.
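
Concretely, along these lines; a sketch only, where ka->pvclock_sc is
the new field assumed here and __get_kvmclock_ns() stands in for the
existing computation from the master copy, while
write_seqcount_begin/end and read_seqcount_begin/retry are the stock
seqlock.h API:

/* Writer side, still serialized by pvclock_gtod_sync_lock. */
static void pvclock_update_vm_gtod_copy(struct kvm *kvm)
{
	struct kvm_arch *ka = &kvm->arch;

	write_seqcount_begin(&ka->pvclock_sc);
	/* ... recompute master_kernel_ns / master_cycle_now ... */
	write_seqcount_end(&ka->pvclock_sc);
}

/*
 * Reader side: retry instead of locking, so the reader never writes
 * the lock's cacheline.
 */
u64 get_kvmclock_ns(struct kvm *kvm)
{
	struct kvm_arch *ka = &kvm->arch;
	unsigned int seq;
	u64 ns;

	do {
		seq = read_seqcount_begin(&ka->pvclock_sc);
		ns = __get_kvmclock_ns(kvm);	/* reads the master copy */
	} while (read_seqcount_retry(&ka->pvclock_sc, seq));

	return ns;
}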

Paolo
