Date: Mon, 8 Nov 2021 17:23:45 +0800
From: Like Xu <>
Subject: Re: [PATCH 1/3] KVM: x86: Copy kvm_pmu_ops by value to eliminate layer of indirection
On 5/11/2021 11:30 pm, Sean Christopherson wrote:
> On Wed, Nov 03, 2021, Like Xu wrote:
>> Replace the kvm_pmu_ops pointer in common x86 with an instance of the
>> struct to save one pointer dereference when invoking functions. Copy the
>> struct by value to set the ops during kvm_init().
>>
>> Using kvm_x86_ops.hardware_enable to track whether or not the
>> ops have been initialized, i.e. a vendor KVM module has been loaded.
>>
>> Signed-off-by: Like Xu <likexu@tencent.com>
>> ---
>>  arch/x86/kvm/pmu.c        | 41 +++++++++++++++++++++------------------
>>  arch/x86/kvm/pmu.h        |  4 +++-
>>  arch/x86/kvm/vmx/nested.c |  2 +-
>>  arch/x86/kvm/x86.c        |  3 +++
>>  4 files changed, 29 insertions(+), 21 deletions(-)
>>
>> diff --git a/arch/x86/kvm/pmu.c b/arch/x86/kvm/pmu.c
>> index 0772bad9165c..0db1887137d9 100644
>> --- a/arch/x86/kvm/pmu.c
>> +++ b/arch/x86/kvm/pmu.c
>> @@ -47,6 +47,9 @@
>>   *        * AMD:   [0 .. AMD64_NUM_COUNTERS-1] <=> gp counters
>>   */
>>
>> +struct kvm_pmu_ops kvm_pmu_ops __read_mostly;
>> +EXPORT_SYMBOL_GPL(kvm_pmu_ops);
>> +
>
> ...
>
>> diff --git a/arch/x86/kvm/vmx/nested.c b/arch/x86/kvm/vmx/nested.c
>> index b4ee5e9f9e20..1e793e44b5ff 100644
>> --- a/arch/x86/kvm/vmx/nested.c
>> +++ b/arch/x86/kvm/vmx/nested.c
>> @@ -4796,7 +4796,7 @@ void nested_vmx_pmu_entry_exit_ctls_update(struct kvm_vcpu *vcpu)
>>  		return;
>>
>>  	vmx = to_vmx(vcpu);
>> -	if (kvm_x86_ops.pmu_ops->is_valid_msr(vcpu, MSR_CORE_PERF_GLOBAL_CTRL)) {
>> +	if (kvm_pmu_ops.is_valid_msr(vcpu, MSR_CORE_PERF_GLOBAL_CTRL)) {
>
> I would much prefer we export kvm_pmu_is_valid_msr() and go through that for nVMX
> than export all of kvm_pmu_ops for this one case.
Applied. Is it an abuse to export a function for only one case?
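
For v2, the shape of it is roughly this (untested sketch; the helper already
exists in pmu.c, only the export and the nVMX caller change are new):

--- a/arch/x86/kvm/pmu.c
+++ b/arch/x86/kvm/pmu.c
 bool kvm_pmu_is_valid_msr(struct kvm_vcpu *vcpu, u32 msr)
 {
-	return kvm_x86_ops.pmu_ops->msr_idx_to_pmc(vcpu, msr) ||
-		kvm_x86_ops.pmu_ops->is_valid_msr(vcpu, msr);
+	return kvm_pmu_ops.msr_idx_to_pmc(vcpu, msr) ||
+		kvm_pmu_ops.is_valid_msr(vcpu, msr);
 }
+EXPORT_SYMBOL_GPL(kvm_pmu_is_valid_msr);

--- a/arch/x86/kvm/vmx/nested.c
+++ b/arch/x86/kvm/vmx/nested.c
-	if (kvm_pmu_ops.is_valid_msr(vcpu, MSR_CORE_PERF_GLOBAL_CTRL)) {
+	if (kvm_pmu_is_valid_msr(vcpu, MSR_CORE_PERF_GLOBAL_CTRL)) {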
>
>>  		vmx->nested.msrs.entry_ctls_high |=
>>  			VM_ENTRY_LOAD_IA32_PERF_GLOBAL_CTRL;
>>  		vmx->nested.msrs.exit_ctls_high |=
>> diff --git a/arch/x86/kvm/x86.c b/arch/x86/kvm/x86.c
>> index ac83d873d65b..72d286595012 100644
>> --- a/arch/x86/kvm/x86.c
>> +++ b/arch/x86/kvm/x86.c
>> @@ -11317,6 +11317,9 @@ int kvm_arch_hardware_setup(void *opaque)
>>  	memcpy(&kvm_x86_ops, ops->runtime_ops, sizeof(kvm_x86_ops));
>>  	kvm_ops_static_call_update();
>>
>> +	if (kvm_x86_ops.hardware_enable)
>
> Huh?  Did you intend this to be?
>
> 	if (kvm_x86_ops.pmu_ops)
>
> Either way, I don't see the point, VMX and SVM unconditionally provide the ops.
Let me drop it.
>
> I would also say land this memcpy() above kvm_ops_static_call_update(), then the
> enabling patch can do the static call updates in kvm_ops_static_call_update()
> instead of adding another helper.
Both applied.
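
The setup path in kvm_arch_hardware_setup() then becomes roughly (sketch;
`ops->pmu_ops` assumes the kvm_x86_init_ops move suggested below):

	memcpy(&kvm_x86_ops, ops->runtime_ops, sizeof(kvm_x86_ops));
	memcpy(&kvm_pmu_ops, ops->pmu_ops, sizeof(kvm_pmu_ops));
	/* the enabling patch is expected to update the pmu static calls here too */
	kvm_ops_static_call_update();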
>
>> +	memcpy(&kvm_pmu_ops, kvm_x86_ops.pmu_ops, sizeof(kvm_pmu_ops));
>
> As part of this change, the pmu_ops should be moved to kvm_x86_init_ops and tagged
> as __initdata.  That'll save those precious few bytes, and more importantly make
> the original ops unreachable, i.e. make it harder to sneak in post-init modification
> bugs.
Applied. Opportunistically:
-struct kvm_pmu_ops [amd|intel]_pmu_ops = {
+struct kvm_pmu_ops [amd|intel]_pmu_ops __initdata = {
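
And for the move into kvm_x86_init_ops, something along these lines (sketch;
the exact member placement is an assumption until v2 is posted):

--- a/arch/x86/include/asm/kvm_host.h
+++ b/arch/x86/include/asm/kvm_host.h
 struct kvm_x86_init_ops {
 	...
+	struct kvm_pmu_ops *pmu_ops;
 	struct kvm_x86_ops *runtime_ops;
 };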
>
>> +
>>  	if (!kvm_cpu_cap_has(X86_FEATURE_XSAVES))
>>  		supported_xss = 0;
>>
>> --
>> 2.33.0
>>
>