    From: Shameer Kolothum
    Subject: RE: [PATCH v4 0/4] kvm/arm: New VMID allocator based on asid
    Date: 2022-01-05
    Hi,

    A gentle ping on this series. Please take a look and let me know whether
    the new approach taken in this revision is good enough.

    Appreciate your feedback.

    Thanks,
    Shameer

    > -----Original Message-----
    > From: linux-arm-kernel [mailto:linux-arm-kernel-bounces@lists.infradead.org]
    > On Behalf Of Shameer Kolothum
    > Sent: 22 November 2021 12:19
    > To: linux-arm-kernel@lists.infradead.org; kvmarm@lists.cs.columbia.edu;
    > linux-kernel@vger.kernel.org
    > Cc: maz@kernel.org; will@kernel.org; catalin.marinas@arm.com;
    > james.morse@arm.com; julien.thierry.kdev@gmail.com;
    > suzuki.poulose@arm.com; jean-philippe@linaro.org;
    > Alexandru.Elisei@arm.com; qperret@google.com; Jonathan Cameron
    > <jonathan.cameron@huawei.com>; Linuxarm <linuxarm@huawei.com>
    > Subject: [PATCH v4 0/4] kvm/arm: New VMID allocator based on asid
    >
    > Changes from v3:
    > - Main change is in patch #4, where the VMID is now set to an
    > invalid one on vCPU schedule out. Introduced INVALID_ACTIVE_VMID,
    > which is basically VMID 0 with generation 1. Since the basic
    > allocator algorithm reserves vmid #0, it is never used as an
    > active VMID. This (hopefully) fixes the issue of unnecessarily
    > reserving VMID space with active_vmids when those VMs are no
    > longer active[0], and at the same time addresses the problem
    > noted in v3 wherein everything ends up in the slow path[1].
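    >
    > The active_vmids handling on schedule out is roughly as below (a
    > sketch only; the exact names and macros may differ from patch #4):
    >
    >   static unsigned int kvm_arm_vmid_bits;
    >   static DEFINE_PER_CPU(atomic64_t, active_vmids);
    >
    >   /*
    >    * active_vmids packs [generation | vmid], with the VMID in the
    >    * low kvm_arm_vmid_bits bits. Generation 1 with VMID 0 can never
    >    * match a real allocation, since vmid #0 is reserved.
    >    */
    >   #define VMID_FIRST_VERSION   (1UL << kvm_arm_vmid_bits)
    >   #define INVALID_ACTIVE_VMID  VMID_FIRST_VERSION
    >
    >   /* Called on vCPU schedule out to drop the per-CPU active VMID. */
    >   void kvm_arm_vmid_clear_active(void)
    >   {
    >           atomic64_set(this_cpu_ptr(&active_vmids), INVALID_ACTIVE_VMID);
    >   }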
    >
    > Testing:
    >  -Ran with VMID bits set to 4 and maxcpus set to 8 on D06. The test
    > involves running 50 guests with 4 vCPUs each concurrently; each
    > guest executes hackbench 5 times before exiting. No crash was
    > observed over a 4-day continuous run.
    >   The latest branch is here:
    >    https://github.com/hisilicon/kernel-dev/tree/private-v5.16-rc1-vmid-v4
    >
    >  -TLA+ model. Modified the asidalloc model to incorporate the new
    > VMID algorithm. The main differences are:
    >   -flush_tlb_all() instead of local_flush_tlb_all() on rollover.
    >   -Introduced INVALID_VMID and vCPU Sched Out logic.
    >   -No CnP (Removed UniqueASIDAllCPUs & UniqueASIDActiveTask invariants).
    >   -Removed the UniqueVMIDPerCPU invariant for now, as it looks like
    > the speculative fetching combined with flush_tlb_all() opens a
    > small window where this invariant gets violated. If I change the
    > logic back to local_flush_tlb_all(), UniqueVMIDPerCPU seems to
    > be fine. With my limited knowledge of TLA+ modelling, it is not
    > clear to me whether this is a problem with the above logic or
    > with the VMID model implementation. I would really appreciate any
    > help with the model.
    > The initial VMID TLA+ model is here,
    > https://github.com/shamiali2008/kernel-tla/tree/private-vmidalloc-v1
    >
    > Please take a look and let me know.
    >
    > Thanks,
    > Shameer
    >
    > [0]
    > https://lore.kernel.org/kvmarm/20210721160614.GC11003@willie-the-truck/
    > [1]
    > https://lore.kernel.org/kvmarm/20210803114034.GB30853@willie-the-truck/
    >
    > History:
    > --------
    > v2 --> v3
    > -Dropped adding a new static key and cpufeature for retrieving
    > supported VMID bits. Instead, we now make use of the
    > kvm_arm_vmid_bits variable (patch #2).
    >
    > -Since we expect less frequent rollover in the case of VMIDs,
    > the TLB invalidation is now broadcast on rollover instead
    > of keeping per-CPU flush_pending info and issuing a local
    > context flush (a rough sketch is included at the end of this list).
    >
    > -Clear active_vmids on vCPU schedule out to avoid unnecessarily
    > reserving the VMID space (patch #3).
    >
    > -I have kept the struct kvm_vmid as it is for now (instead of a
    > typedef as suggested), as we may soon add another variable to
    > it when we introduce Pinned KVM VMID support.
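    >
    > A rough sketch of the rollover path mentioned above (illustrative
    > only; the flush_context() name and details here are placeholders
    > rather than the exact code in patch #1):
    >
    >   /* Rollover: the current generation of VMIDs is exhausted. */
    >   static void flush_context(void)
    >   {
    >           /* ...clear the vmid map and re-reserve currently active VMIDs... */
    >
    >           /*
    >            * Rollover is expected to be rare for VMIDs, so broadcast
    >            * the invalidation of all VM contexts here, instead of
    >            * tracking per-CPU flush_pending state and issuing a local
    >            * flush on the next context switch as the ASID allocator
    >            * does.
    >            */
    >           kvm_call_hyp(__kvm_flush_vm_context);
    >   }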
    >
    > RFCv1 --> v2
    > -Dropped "pinned VMID" support for now.
    > -Dropped RFC tag.
    > RFCv1
    > https://lore.kernel.org/kvmarm/20210506165232.1969-1-shameerali.kolothum.thodi@huawei.com/
    >
    > Julien Grall (1):
    > KVM: arm64: Align the VMID allocation with the arm64 ASID
    >
    > Shameer Kolothum (3):
    > KVM: arm64: Introduce a new VMID allocator for KVM
    > KVM: arm64: Make VMID bits accessible outside of allocator
    > KVM: arm64: Make active_vmids invalid on vCPU schedule out
    >
    > arch/arm64/include/asm/kvm_host.h     |  10 +-
    > arch/arm64/include/asm/kvm_mmu.h      |   4 +-
    > arch/arm64/kernel/image-vars.h        |   3 +
    > arch/arm64/kvm/Makefile               |   2 +-
    > arch/arm64/kvm/arm.c                  | 106 +++-----------
    > arch/arm64/kvm/hyp/nvhe/mem_protect.c |   3 +-
    > arch/arm64/kvm/mmu.c                  |   1 -
    > arch/arm64/kvm/vmid.c                 | 196 ++++++++++++++++++++++++++
    > 8 files changed, 228 insertions(+), 97 deletions(-)
    > create mode 100644 arch/arm64/kvm/vmid.c
    >
    > --
    > 2.17.1
    >
    >