Subject: Re: [PATCH v4 2/2] arm64: kvm: Introduce MTE VCPU feature
From: Steven Price <>
Date: Wed, 18 Nov 2020 16:01:20 +0000
On 17/11/2020 16:07, Catalin Marinas wrote:
> Hi Steven,
>
> On Mon, Oct 26, 2020 at 03:57:27PM +0000, Steven Price wrote:
>> diff --git a/arch/arm64/kvm/mmu.c b/arch/arm64/kvm/mmu.c
>> index 19aacc7d64de..38fe25310ca1 100644
>> --- a/arch/arm64/kvm/mmu.c
>> +++ b/arch/arm64/kvm/mmu.c
>> @@ -862,6 +862,26 @@ static int user_mem_abort(struct kvm_vcpu *vcpu, phys_addr_t fault_ipa,
>>  	if (vma_pagesize == PAGE_SIZE && !force_pte)
>>  		vma_pagesize = transparent_hugepage_adjust(memslot, hva,
>>  							   &pfn, &fault_ipa);
>> +
>> +	/*
>> +	 * The otherwise redundant test for system_supports_mte() allows the
>> +	 * code to be compiled out when CONFIG_ARM64_MTE is not present.
>> +	 */
>> +	if (system_supports_mte() && kvm->arch.mte_enabled && pfn_valid(pfn)) {
>> +		/*
>> +		 * VM will be able to see the page's tags, so we must ensure
>> +		 * they have been initialised.
>> +		 */
>> +		struct page *page = pfn_to_page(pfn);
>> +		long i, nr_pages = compound_nr(page);
>> +
>> +		/* if PG_mte_tagged is set, tags have already been initialised */
>> +		for (i = 0; i < nr_pages; i++, page++) {
>> +			if (!test_and_set_bit(PG_mte_tagged, &page->flags))
>> +				mte_clear_page_tags(page_address(page));
>> +		}
>> +	}
>
> If this page was swapped out and mapped back in, where does the
> restoring from swap happen?
Restoring from swap happens above this, in the call to gfn_to_pfn_prot(): that faults the page into the host, at which point any tags saved at swap-out are restored by the arch hooks. By the time the loop above runs, such pages already have PG_mte_tagged set, so their tags are not cleared again.
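To make the ordering concrete, here is a minimal userspace sketch (not the kernel code) of the test-and-set pattern in the loop above: a page whose tags were already restored from swap has the flag set, so the initialisation is skipped for it. All names (fake_page, fake_test_and_set_bit, fake_clear_page_tags) are hypothetical stand-ins for the real kernel helpers.

#include <stdbool.h>
#include <stdio.h>

#define FAKE_PG_MTE_TAGGED 0

struct fake_page {
	unsigned long flags;
};

/* Stand-in for test_and_set_bit(): sets the bit, returns its old value. */
static bool fake_test_and_set_bit(int nr, unsigned long *flags)
{
	bool old = *flags & (1UL << nr);

	*flags |= 1UL << nr;
	return old;
}

/* Stand-in for mte_clear_page_tags(): just logs which page it would touch. */
static void fake_clear_page_tags(int idx)
{
	printf("initialising tags for page %d\n", idx);
}

int main(void)
{
	/*
	 * Model a compound page of 4 subpages. Subpage 1 was swapped back
	 * in earlier, so its tags are already restored and the flag is set.
	 */
	struct fake_page pages[4] = {
		{ 0 }, { 1UL << FAKE_PG_MTE_TAGGED }, { 0 }, { 0 },
	};
	int i;

	for (i = 0; i < 4; i++) {
		/*
		 * Mirrors the loop in user_mem_abort(): only initialise
		 * tags the first time the flag transitions to set.
		 */
		if (!fake_test_and_set_bit(FAKE_PG_MTE_TAGGED,
					   &pages[i].flags))
			fake_clear_page_tags(i);
	}
	return 0;
}

Running this prints "initialising tags" for pages 0, 2 and 3 only; page 1 is left alone, which is the property the test_and_set_bit() in the patch relies on.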
> I may have asked in the past, is user_mem_abort() the only path for
> mapping Normal pages into stage 2?
That is my understanding (and yes you asked before) and no one has corrected me! ;)
Steve