Subject: Re: [PATCH v2 4/7] kvm/mips: rework guest entry logic
On Wed, Jan 19, 2022 at 10:58:51AM +0000, Mark Rutland wrote:
> In kvm_arch_vcpu_ioctl_run() we use guest_enter_irqoff() and
> guest_exit_irqoff() directly, with interrupts masked between these. As
> we don't handle any timer ticks during this window, we will not account
> time spent within the guest as guest time, which is unfortunate.
>
> Additionally, we do not inform lockdep or tracing that interrupts will
> be enabled during guest execution, which can lead to misleading traces
> and warnings that interrupts have been enabled for overly-long periods.
>
> This patch fixes these issues by using the new timing and context
> entry/exit helpers to ensure that interrupts are handled during guest
> vtime but with RCU watching, with a sequence:
>
> guest_timing_enter_irqoff();
>
> guest_state_enter_irqoff();
> < run the vcpu >
> guest_state_exit_irqoff();
>
> < take any pending IRQs >
>
> guest_timing_exit_irqoff();

Looking again, this patch isn't sufficient.

On MIPS a guest exit is handled by kvm_mips_handle_exit() *before* we
return to the "< run the vcpu >" step above, so we will end up using RCU
(and potentially instrumentation) before guest_state_exit_irqoff() has
been called.

This'll need some more thought...
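
One direction that might work (a rough, untested sketch; the
__kvm_mips_handle_exit() name is made up here) would be to split the
existing handler into a noinstr wrapper plus an instrumentable body, with
the wrapper transiently leaving the EQS:

        /* Existing exit handling; safe to instrument once out of the EQS. */
        static int __kvm_mips_handle_exit(struct kvm_vcpu *vcpu)
        {
                /* ... current body of kvm_mips_handle_exit() ... */
        }

        /*
         * Called from the low-level guest exit path with IRQs masked, while
         * still inside the guest EQS, so this must be noinstr until the EQS
         * has been exited.
         */
        int noinstr kvm_mips_handle_exit(struct kvm_vcpu *vcpu)
        {
                int ret;

                /* Leave the guest EQS before running any instrumentable code. */
                guest_state_exit_irqoff();

                ret = __kvm_mips_handle_exit(vcpu);

                /*
                 * Re-enter the EQS before returning to the exit asm, whether
                 * we resume the guest or bail out to the run loop, so that the
                 * guest_state_exit_irqoff() in kvm_mips_vcpu_enter_exit()
                 * stays balanced.
                 */
                guest_state_enter_irqoff();

                return ret;
        }

That would keep the EQS bracketing balanced with kvm_mips_vcpu_enter_exit()
below, but it's completely untested and I haven't audited all the ways we
can leave guest context.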

Mark.

> Since instrumentation may make use of RCU, we must also ensure that no
> instrumented code is run during the EQS. I've split out the critical
> section into a new kvm_mips_vcpu_enter_exit() helper which is marked
> noinstr.
>
> Signed-off-by: Mark Rutland <mark.rutland@arm.com>
> Cc: Aleksandar Markovic <aleksandar.qemu.devel@gmail.com>
> Cc: Frederic Weisbecker <frederic@kernel.org>
> Cc: Huacai Chen <chenhuacai@kernel.org>
> Cc: Paolo Bonzini <pbonzini@redhat.com>
> Cc: Paul E. McKenney <paulmck@kernel.org>
> Cc: Thomas Bogendoerfer <tsbogend@alpha.franken.de>
> ---
> arch/mips/kvm/mips.c | 37 ++++++++++++++++++++++++++++++++++---
> 1 file changed, 34 insertions(+), 3 deletions(-)
>
> diff --git a/arch/mips/kvm/mips.c b/arch/mips/kvm/mips.c
> index aa20d074d3883..1a961c2434fee 100644
> --- a/arch/mips/kvm/mips.c
> +++ b/arch/mips/kvm/mips.c
> @@ -438,6 +438,24 @@ int kvm_arch_vcpu_ioctl_set_guest_debug(struct kvm_vcpu *vcpu,
> return -ENOIOCTLCMD;
> }
>
> +/*
> + * Actually run the vCPU, entering an RCU extended quiescent state (EQS) while
> + * the vCPU is running.
> + *
> + * This must be noinstr as instrumentation may make use of RCU, and this is not
> + * safe during the EQS.
> + */
> +static int noinstr kvm_mips_vcpu_enter_exit(struct kvm_vcpu *vcpu)
> +{
> + int ret;
> +
> + guest_state_enter_irqoff();
> + ret = kvm_mips_callbacks->vcpu_run(vcpu);
> + guest_state_exit_irqoff();
> +
> + return ret;
> +}
> +
> int kvm_arch_vcpu_ioctl_run(struct kvm_vcpu *vcpu)
> {
> int r = -EINTR;
> @@ -458,7 +476,7 @@ int kvm_arch_vcpu_ioctl_run(struct kvm_vcpu *vcpu)
> lose_fpu(1);
>
> local_irq_disable();
> - guest_enter_irqoff();
> + guest_timing_enter_irqoff();
> trace_kvm_enter(vcpu);
>
> /*
> @@ -469,10 +487,23 @@ int kvm_arch_vcpu_ioctl_run(struct kvm_vcpu *vcpu)
> */
> smp_store_mb(vcpu->mode, IN_GUEST_MODE);
>
> - r = kvm_mips_callbacks->vcpu_run(vcpu);
> + r = kvm_mips_vcpu_enter_exit(vcpu);
> +
> + /*
> + * We must ensure that any pending interrupts are taken before
> + * we exit guest timing so that timer ticks are accounted as
> + * guest time. Transiently unmask interrupts so that any
> + * pending interrupts are taken.
> + *
> + * TODO: is there a barrier which ensures that pending interrupts are
> + * recognised? Currently this just hopes that the CPU takes any pending
> + * interrupts between the enable and disable.
> + */
> + local_irq_enable();
> + local_irq_disable();
>
> trace_kvm_out(vcpu);
> - guest_exit_irqoff();
> + guest_timing_exit_irqoff();
> local_irq_enable();
>
> out:
> --
> 2.30.2
>
>
