Subject: Re: [PATCH v2 1/3] KVM: x86: implement KVM_{GET|SET}_TSC_STATE
On Tue, Dec 15, 2020 at 3:35 AM Marcelo Tosatti <mtosatti@redhat.com> wrote:
>
> On Fri, Dec 11, 2020 at 10:59:59PM +0100, Paolo Bonzini wrote:
> > On 11/12/20 22:04, Thomas Gleixner wrote:
> > > > It's 100ms off with migration, and can be reduced further (customers
> > > > complained about 5 seconds but seem happy with 0.1ms).
> > > What is 100ms? Guaranteed maximum migration time?
> >
> > I suppose it's the length of the interval from KVM_GET_CLOCK and
> > KVM_GET_MSR(IA32_TSC) to KVM_SET_CLOCK and KVM_SET_MSR(IA32_TSC). But the
> > VM is paused for much longer; the sequence for the non-live part of the
> > migration (aka brownout) is as follows:
> >
> >    source                          destination
> >    ------                          -----------
> >    pause
> >    finish sending RAM              receive RAM                ~1 sec
> >    send paused-VM state            finish receiving RAM      \
> >                                    receive paused-VM state    ) 0.1 sec
> >                                    restart                   /
> >
> > The nanosecond and TSC times are sent as part of the paused-VM state at the
> > very end of the live migration process.
> >
> > So it's still true that the time advances during live migration brownout;
> > 0.1 seconds is just the final part of the live migration process. But for
> > _live_ migration there is no need to design things according to "people are
> > happy if their clock is off by 0.1 seconds only".
>
> Agree. What would be a good way to fix this?
>
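
For concreteness, here is a minimal sketch of the two points Paolo describes
above, assuming a QEMU-like userspace VMM that samples the clocks right before
sending the paused-VM state and restores them right before restarting the
guest. The struct and function names are illustrative, the per-vCPU loop and
masterclock details are glossed over, and only the stock ioctls
(KVM_GET_CLOCK/KVM_SET_CLOCK and KVM_GET_MSRS/KVM_SET_MSRS for IA32_TSC) are
used:

#include <linux/kvm.h>
#include <stdint.h>
#include <sys/ioctl.h>

#define MSR_IA32_TSC 0x10

/* Illustrative container for the state sent with the paused-VM data. */
struct saved_clock {
	struct kvm_clock_data kvmclock;	/* nanosecond clock from KVM_GET_CLOCK */
	uint64_t tsc;			/* guest IA32_TSC value */
};

/* kvm_msrs ends in a flexible entries[] array; wrap header + one entry. */
struct one_msr {
	struct kvm_msrs hdr;
	struct kvm_msr_entry entry;
};

/* Source: sample both clocks just before sending the paused-VM state. */
static int save_clock_state(int vm_fd, int vcpu_fd, struct saved_clock *s)
{
	struct one_msr m = { .hdr.nmsrs = 1, .entry.index = MSR_IA32_TSC };

	if (ioctl(vm_fd, KVM_GET_CLOCK, &s->kvmclock) < 0)
		return -1;
	if (ioctl(vcpu_fd, KVM_GET_MSRS, &m) != 1)	/* returns #MSRs read */
		return -1;
	s->tsc = m.entry.data;
	return 0;
}

/* Destination: restore both right before restarting the guest. */
static int restore_clock_state(int vm_fd, int vcpu_fd, const struct saved_clock *s)
{
	struct one_msr m = { .hdr.nmsrs = 1, .entry.index = MSR_IA32_TSC,
			     .entry.data = s->tsc };
	struct kvm_clock_data data = { .clock = s->kvmclock.clock };
						/* flags must be 0 for SET */
	if (ioctl(vcpu_fd, KVM_SET_MSRS, &m) != 1)
		return -1;
	return ioctl(vm_fd, KVM_SET_CLOCK, &data);
}

Everything between the GET pair on the source and the SET pair on the
destination is wall-clock time the guest never observes, which is why the
guest's clock ends up behind by roughly the brownout duration.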

Could you implement the Hyper-V clock interface? It's much, much
simpler than the kvmclock interface. It has the downside that
CLOCK_BOOTTIME won't do what you want, but I'm not really convinced
that's a problem, and you could come up with a minimal extension to
fix that.
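
For reference, the guest-visible part of that interface is essentially one
shared "reference TSC page" containing a sequence-guarded (scale, offset)
pair, with the reference time in 100 ns units computed as
((tsc * scale) >> 64) + offset. A rough guest-side read sketch, with the
struct layout as in the TLFS and the helpers written out locally rather than
borrowed from any particular guest OS:

#include <stdint.h>

/* Layout of the Hyper-V reference TSC page (per the TLFS). */
struct hv_ref_tsc_page {
	volatile uint32_t tsc_sequence;	/* 0 = invalid, fall back to the
					   reference-counter MSR */
	uint32_t reserved;
	volatile uint64_t tsc_scale;	/* 64.64 fixed-point multiplier */
	volatile int64_t tsc_offset;	/* in 100 ns units */
};

static inline uint64_t rdtsc(void)
{
	uint32_t lo, hi;

	__asm__ __volatile__("rdtsc" : "=a"(lo), "=d"(hi));
	return ((uint64_t)hi << 32) | lo;
}

/* High 64 bits of a 64x64 multiply; stand-in for a proper helper. */
static inline uint64_t mul_u64_u64_shr64(uint64_t a, uint64_t b)
{
	return (uint64_t)(((unsigned __int128)a * b) >> 64);
}

/* Reference time in 100 ns units, or 0 if the page is not valid.
   (volatile stands in for the READ_ONCE/barriers a real guest would use.) */
static uint64_t hv_read_ref_time(const struct hv_ref_tsc_page *p)
{
	uint32_t seq;
	uint64_t scale, tsc;
	int64_t offset;

	do {
		seq = p->tsc_sequence;
		if (seq == 0)
			return 0;
		scale = p->tsc_scale;
		offset = p->tsc_offset;
		tsc = rdtsc();
	} while (p->tsc_sequence != seq);	/* host republished; retry */

	return mul_u64_u64_shr64(tsc, scale) + (uint64_t)offset;
}

On migration the host side then only has to republish scale and offset
(bumping the sequence) on the destination, rather than juggling per-vCPU
kvmclock pvti pages plus a separately-migrated IA32_TSC, which is presumably
what "much, much simpler" buys you.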
