Subject: RE: [PATCH] KVM: x86/xen: Update Xen CPUID Leaf 4 (tsc info) sub-leaves, if present
> -----Original Message-----
[snip]
> > > diff --git a/arch/x86/kvm/x86.c b/arch/x86/kvm/x86.c
> > > index 00e23dc518e0..8b45f9975e45 100644
> > > --- a/arch/x86/kvm/x86.c
> > > +++ b/arch/x86/kvm/x86.c
> > > @@ -3123,6 +3123,7 @@ static int kvm_guest_time_update(struct kvm_vcpu *v)
> > > 	if (vcpu->xen.vcpu_time_info_cache.active)
> > > 		kvm_setup_guest_pvclock(v, &vcpu->xen.vcpu_time_info_cache, 0);
> > > 	kvm_hv_setup_tsc_page(v->kvm, &vcpu->hv_clock);
> > > +	kvm_xen_setup_tsc_info(v);
> >
> > This can be called inside this if statement, no?
> >
> > if (unlikely(vcpu->hw_tsc_khz != tgt_tsc_khz)) {
> >
> > }
> >

I think it ought to be done whenever the shared copy of Xen's vcpu_info is updated (it will always match on real Xen), so unconditionally calling it here seems reasonable.
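
For illustration, roughly how the two placements under discussion would sit in kvm_guest_time_update() (a simplified sketch; only the names from the quoted hunks are real, and the elided frequency-rescaling code is summarised in a comment):

	/* Alternative placement: refresh only when the effective TSC frequency changes. */
	if (unlikely(vcpu->hw_tsc_khz != tgt_tsc_khz)) {
		/* ... existing frequency rescaling ... */
		kvm_xen_setup_tsc_info(v);
	}

	/* Placement in the patch: refresh alongside the other per-update calls. */
	kvm_hv_setup_tsc_page(v->kvm, &vcpu->hv_clock);
	kvm_xen_setup_tsc_info(v);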

> > > 	return 0;
> > > }
> > >
> > > diff --git a/arch/x86/kvm/xen.c b/arch/x86/kvm/xen.c
> > > index 610beba35907..a016ff85264d 100644
> > > --- a/arch/x86/kvm/xen.c
> > > +++ b/arch/x86/kvm/xen.c
> > > @@ -10,6 +10,9 @@
> > > #include "xen.h"
> > > #include "hyperv.h"
> > > #include "lapic.h"
> > > +#include "cpuid.h"
> > > +
> > > +#include <asm/xen/cpuid.h>
> > >
> > > #include <linux/eventfd.h>
> > > #include <linux/kvm_host.h>
> > > @@ -1855,3 +1858,41 @@ void kvm_xen_destroy_vm(struct kvm *kvm)
> > > 	if (kvm->arch.xen_hvm_config.msr)
> > > 		static_branch_slow_dec_deferred(&kvm_xen_enabled);
> > > }
> > > +
> > > +void kvm_xen_set_cpuid(struct kvm_vcpu *vcpu)
> >
> > This is a very, very misleading name. It does not "set" anything. Given that
> > this patch adds "set" and "setup", I expected the "set" to, you know, set the CPUID
> > leaves and the "setup" to prepare for that, not the other way around.
> >
> > If the leaves really do need to be cached, kvm_xen_after_set_cpuid() is probably
> > the least awful name.
> >

Ok, I'll rename it kvm_xen_after_set_cpuid().

> > > +{
> > > +	u32 base = 0;
> > > +	u32 function;
> > > +
> > > +	for_each_possible_hypervisor_cpuid_base(function) {
> > > +		struct kvm_cpuid_entry2 *entry = kvm_find_cpuid_entry(vcpu, function, 0);
> > > +
> > > +		if (entry &&
> > > +		    entry->ebx == XEN_CPUID_SIGNATURE_EBX &&
> > > +		    entry->ecx == XEN_CPUID_SIGNATURE_ECX &&
> > > +		    entry->edx == XEN_CPUID_SIGNATURE_EDX) {
> > > +			base = function;
> > > +			break;
> > > +		}
> > > +	}
> > > +	if (!base)
> > > +		return;
> > > +
> > > +	function = base | XEN_CPUID_LEAF(3);
> > > +	vcpu->arch.xen.tsc_info_1 = kvm_find_cpuid_entry(vcpu, function, 1);
> > > +	vcpu->arch.xen.tsc_info_2 = kvm_find_cpuid_entry(vcpu, function, 2);
> >
> > Is it really necessary to cache the leaves? Guest CPUID isn't optimized, but it's
> > not _that_ slow, and unless I'm missing something, updating the TSC frequency and
> > scaling info should be uncommon, i.e. not performance critical.

If we're updating the values in the leaves on every entry into the guest (as with calls to kvm_setup_guest_pvclock()), then I think the cached pointers are worthwhile.
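
For what it's worth, a minimal sketch of what the per-update path could look like with the cached pointers, assuming the usual layout of the Xen time leaf (sub-leaf 1: ECX = tsc_to_system_mul, EDX = tsc_shift; sub-leaf 2: EAX = TSC frequency in kHz) and the hv_clock/hw_tsc_khz state that kvm_guest_time_update() already maintains; the actual body of kvm_xen_setup_tsc_info() isn't in the quoted hunks:

	/* Hypothetical sketch only -- not the patch's actual implementation. */
	void kvm_xen_setup_tsc_info(struct kvm_vcpu *v)
	{
		struct kvm_vcpu_xen *xen = &v->arch.xen;

		/* Pointers cached by kvm_xen_after_set_cpuid(); may be NULL. */
		if (xen->tsc_info_1) {
			xen->tsc_info_1->ecx = v->arch.hv_clock.tsc_to_system_mul;
			xen->tsc_info_1->edx = v->arch.hv_clock.tsc_shift;
		}
		if (xen->tsc_info_2)
			xen->tsc_info_2->eax = v->arch.hw_tsc_khz;
	}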

Paul
