Subject: Re: [PATCH 07/16] KVM: arm64: Wire MMIO guard hypercalls
On Tue, 27 Jul 2021 19:11:46 +0100,
Will Deacon <will@kernel.org> wrote:
>
> On Thu, Jul 15, 2021 at 05:31:50PM +0100, Marc Zyngier wrote:
> > Plumb in the hypercall interface to allow a guest to discover,
> > enroll, map and unmap MMIO regions.
> >
> > Signed-off-by: Marc Zyngier <maz@kernel.org>
> > ---
> > arch/arm64/kvm/hypercalls.c | 20 ++++++++++++++++++++
> > include/linux/arm-smccc.h | 28 ++++++++++++++++++++++++++++
> > 2 files changed, 48 insertions(+)
> >
> > diff --git a/arch/arm64/kvm/hypercalls.c b/arch/arm64/kvm/hypercalls.c
> > index 30da78f72b3b..a3deeb907fdd 100644
> > --- a/arch/arm64/kvm/hypercalls.c
> > +++ b/arch/arm64/kvm/hypercalls.c
> > @@ -5,6 +5,7 @@
> > #include <linux/kvm_host.h>
> >
> > #include <asm/kvm_emulate.h>
> > +#include <asm/kvm_mmu.h>
> >
> > #include <kvm/arm_hypercalls.h>
> > #include <kvm/arm_psci.h>
> > @@ -129,10 +130,29 @@ int kvm_hvc_call_handler(struct kvm_vcpu *vcpu)
> >  	case ARM_SMCCC_VENDOR_HYP_KVM_FEATURES_FUNC_ID:
> >  		val[0] = BIT(ARM_SMCCC_KVM_FUNC_FEATURES);
> >  		val[0] |= BIT(ARM_SMCCC_KVM_FUNC_PTP);
> > +		val[0] |= BIT(ARM_SMCCC_KVM_FUNC_MMIO_GUARD_INFO);
> > +		val[0] |= BIT(ARM_SMCCC_KVM_FUNC_MMIO_GUARD_ENROLL);
> > +		val[0] |= BIT(ARM_SMCCC_KVM_FUNC_MMIO_GUARD_MAP);
> > +		val[0] |= BIT(ARM_SMCCC_KVM_FUNC_MMIO_GUARD_UNMAP);
> >  		break;
> >  	case ARM_SMCCC_VENDOR_HYP_KVM_PTP_FUNC_ID:
> >  		kvm_ptp_get_time(vcpu, val);
> >  		break;
> > +	case ARM_SMCCC_VENDOR_HYP_KVM_MMIO_GUARD_INFO_FUNC_ID:
> > +		val[0] = PAGE_SIZE;
> > +		break;
>
> I get the nagging feeling that querying the stage-2 page-size outside of
> MMIO guard is going to be useful once we start looking at memory sharing,
> so perhaps rename this to something more generic?

At this stage, why not follow the architecture and simply expose it as
ID_AA64MMFR0_EL1.TGran{4,64,16}_2? That's exactly what it is for, and
we already check for this in KVM itself.

>
> > +	case ARM_SMCCC_VENDOR_HYP_KVM_MMIO_GUARD_ENROLL_FUNC_ID:
> > +		set_bit(KVM_ARCH_FLAG_MMIO_GUARD, &vcpu->kvm->arch.flags);
> > +		val[0] = SMCCC_RET_SUCCESS;
> > +		break;
> > +	case ARM_SMCCC_VENDOR_HYP_KVM_MMIO_GUARD_MAP_FUNC_ID:
> > +		if (kvm_install_ioguard_page(vcpu, vcpu_get_reg(vcpu, 1)))
> > +			val[0] = SMCCC_RET_SUCCESS;
> > +		break;
> > +	case ARM_SMCCC_VENDOR_HYP_KVM_MMIO_GUARD_UNMAP_FUNC_ID:
> > +		if (kvm_remove_ioguard_page(vcpu, vcpu_get_reg(vcpu, 1)))
> > +			val[0] = SMCCC_RET_SUCCESS;
> > +		break;
>
> I think there's a slight discrepancy between MAP and UNMAP here in that
> calling UNMAP on something that hasn't been mapped will fail, whereas
> calling MAP on something that's already been mapped will succeed. I think
> that might mean you can't reason about the final state of the page if two
> vCPUs race to call these functions in some cases (and both succeed).

I'm not sure that's the expected behaviour for ioremap(), for example
(you can ioremap two portions of the same page successfully).

I guess MAP could return something indicating that the page is already
mapped, but I wouldn't want to return a hard failure in this case.

M.

--
Without deviation from the norm, progress is not possible.
