Subject: Re: [PATCH v4 2/2] arm64: kvm: Introduce MTE VCPU feature
From: Steven Price
Date: 2020-11-18
On 17/11/2020 16:07, Catalin Marinas wrote:
> Hi Steven,
>
> On Mon, Oct 26, 2020 at 03:57:27PM +0000, Steven Price wrote:
>> diff --git a/arch/arm64/kvm/mmu.c b/arch/arm64/kvm/mmu.c
>> index 19aacc7d64de..38fe25310ca1 100644
>> --- a/arch/arm64/kvm/mmu.c
>> +++ b/arch/arm64/kvm/mmu.c
>> @@ -862,6 +862,26 @@ static int user_mem_abort(struct kvm_vcpu *vcpu, phys_addr_t fault_ipa,
>>  	if (vma_pagesize == PAGE_SIZE && !force_pte)
>>  		vma_pagesize = transparent_hugepage_adjust(memslot, hva,
>>  							   &pfn, &fault_ipa);
>> +
>> +	/*
>> +	 * The otherwise redundant test for system_supports_mte() allows the
>> +	 * code to be compiled out when CONFIG_ARM64_MTE is not present.
>> +	 */
>> +	if (system_supports_mte() && kvm->arch.mte_enabled && pfn_valid(pfn)) {
>> +		/*
>> +		 * VM will be able to see the page's tags, so we must ensure
>> +		 * they have been initialised.
>> +		 */
>> +		struct page *page = pfn_to_page(pfn);
>> +		long i, nr_pages = compound_nr(page);
>> +
>> +		/* if PG_mte_tagged is set, tags have already been initialised */
>> +		for (i = 0; i < nr_pages; i++, page++) {
>> +			if (!test_and_set_bit(PG_mte_tagged, &page->flags))
>> +				mte_clear_page_tags(page_address(page));
>> +		}
>> +	}
>
> If this page was swapped out and mapped back in, where does the
> restoring from swap happen?

Restoring from swap happens above this, in the call to gfn_to_pfn_prot(),
which faults the page back in through the normal user fault path, so the
tags are restored along with the data before we ever see the pfn here.
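
For reference, here's a simplified sketch of that path as I understand it
(the function names are the real ones in the tree, but the error handling
and hugepage logic are stripped out, so take it as illustration only):

static int user_mem_abort_sketch(struct kvm_vcpu *vcpu, gfn_t gfn,
				 bool write_fault)
{
	bool writable;
	kvm_pfn_t pfn;

	/*
	 * gfn_to_pfn_prot() ends up in get_user_pages(), so a page
	 * that was swapped out is faulted back in through the normal
	 * user fault path. On that path set_pte_at() calls
	 * mte_sync_tags(), which restores the saved tags for a
	 * PG_mte_tagged page before the pfn is handed back to us.
	 */
	pfn = gfn_to_pfn_prot(vcpu->kvm, gfn, write_fault, &writable);
	if (is_error_noslot_pfn(pfn))
		return -EFAULT;

	/* ... the MTE tag-clearing hunk quoted above then runs on this pfn ... */

	return 0;
}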

> I may have asked in the past, is user_mem_abort() the only path for
> mapping Normal pages into stage 2?
>

That is my understanding (and yes you asked before) and no one has
corrected me! ;)

Steve
