Subject: Re: [PATCH v13 4/8] KVM: arm64: Introduce MTE VM feature
On Mon, May 24, 2021 at 11:45:09AM +0100, Steven Price wrote:
> diff --git a/arch/arm64/kvm/mmu.c b/arch/arm64/kvm/mmu.c
> index c5d1f3c87dbd..226035cf7d6c 100644
> --- a/arch/arm64/kvm/mmu.c
> +++ b/arch/arm64/kvm/mmu.c
> @@ -822,6 +822,42 @@ transparent_hugepage_adjust(struct kvm_memory_slot *memslot,
>  	return PAGE_SIZE;
>  }
>
> +static int sanitise_mte_tags(struct kvm *kvm, kvm_pfn_t pfn,
> +			     unsigned long size)
> +{
> +	if (kvm_has_mte(kvm)) {

Nitpick (less indentation):

	if (!kvm_has_mte(kvm))
		return 0;

> +		/*
> +		 * The page will be mapped in stage 2 as Normal Cacheable, so
> +		 * the VM will be able to see the page's tags and therefore
> +		 * they must be initialised first. If PG_mte_tagged is set,
> +		 * tags have already been initialised.
> +		 * pfn_to_online_page() is used to reject ZONE_DEVICE pages
> +		 * that may not support tags.
> +		 */
> +		unsigned long i, nr_pages = size >> PAGE_SHIFT;
> +		struct page *page = pfn_to_online_page(pfn);
> +
> +		if (!page)
> +			return -EFAULT;
> +
> +		for (i = 0; i < nr_pages; i++, page++) {
> +			/*
> +			 * There is a potential (but very unlikely) race
> +			 * between two VMs which are sharing a physical page
> +			 * entering this at the same time. However by splitting
> +			 * the test/set the only risk is tags being overwritten
> +			 * by the mte_clear_page_tags() call.
> +			 */

And I think the real risk here is when the page is writable by at least
one of the VMs sharing the page. This excludes KSM, so it only leaves
the MAP_SHARED mappings.

> +			if (!test_bit(PG_mte_tagged, &page->flags)) {
> +				mte_clear_page_tags(page_address(page));
> +				set_bit(PG_mte_tagged, &page->flags);
> +			}
> +		}

If we want to cover this race (I'd say in a separate patch), we can call
mte_sync_page_tags(page, __pte(0), false, true) directly (hopefully I
got the arguments right). We can avoid the big lock in most cases if
kvm_arch_prepare_memory_region() sets a VM_MTE_RESET (tag clear etc.)
and __alloc_zeroed_user_highpage() clears the tags on allocation (as we
do for VM_MTE but the new flag would not affect the stage 1 VMM page
attributes).
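
A rough sketch of the first option (untested, and assuming the arguments
above are right and mte_sync_page_tags() is made non-static so KVM can
call it), mirroring the test_and_set_bit() pattern in mte_sync_tags():

	for (i = 0; i < nr_pages; i++, page++) {
		/*
		 * Claim the page atomically so only one caller initialises
		 * the tags; with these arguments mte_sync_page_tags() skips
		 * the swap-restore path and just clears the tags.
		 */
		if (!test_and_set_bit(PG_mte_tagged, &page->flags))
			mte_sync_page_tags(page, __pte(0), false, true);
	}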

> +	}
> +
> +	return 0;
> +}
> +
>  static int user_mem_abort(struct kvm_vcpu *vcpu, phys_addr_t fault_ipa,
>  			  struct kvm_memory_slot *memslot, unsigned long hva,
>  			  unsigned long fault_status)
> @@ -971,8 +1007,13 @@ static int user_mem_abort(struct kvm_vcpu *vcpu, phys_addr_t fault_ipa,
>  	if (writable)
>  		prot |= KVM_PGTABLE_PROT_W;
>
> -	if (fault_status != FSC_PERM && !device)
> +	if (fault_status != FSC_PERM && !device) {
> +		ret = sanitise_mte_tags(kvm, pfn, vma_pagesize);
> +		if (ret)
> +			goto out_unlock;

Maybe it was discussed in a previous version, but why do we need this in
addition to kvm_set_spte_gfn()?

> +
>  		clean_dcache_guest_page(pfn, vma_pagesize);
> +	}
>
>  	if (exec_fault) {
>  		prot |= KVM_PGTABLE_PROT_X;
> @@ -1168,12 +1209,17 @@ bool kvm_unmap_gfn_range(struct kvm *kvm, struct kvm_gfn_range *range)
>  bool kvm_set_spte_gfn(struct kvm *kvm, struct kvm_gfn_range *range)
>  {
>  	kvm_pfn_t pfn = pte_pfn(range->pte);
> +	int ret;
>
>  	if (!kvm->arch.mmu.pgt)
>  		return 0;
>
>  	WARN_ON(range->end - range->start != 1);
>
> +	ret = sanitise_mte_tags(kvm, pfn, PAGE_SIZE);
> +	if (ret)
> +		return false;
> +
>  	/*
>  	 * We've moved a page around, probably through CoW, so let's treat it
>  	 * just like a translation fault and clean the cache to the PoC.

Otherwise the patch looks fine.

--
Catalin
