Date: 23 Nov 2020
Subject: [PATCH mm v11 26/42] arm64: mte: Reset the page tag in page->flags
    From: Vincenzo Frascino <vincenzo.frascino@arm.com>

    The hardware tag-based KASAN mode, for compatibility with the other
    modes, stores the tag associated with a page in page->flags. Due to
    this, the kernel faults on access when it allocates a page with an
    initial tag and the user subsequently changes the tags in memory:
    the tag recorded in page->flags becomes stale and no longer matches
    the tags of the page.

    Reset the tag that the kernel associates with a page in all the
    relevant places to prevent kernel faults on access.
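
    Each site below applies the same two-step pattern. As a minimal
    sketch, using only calls that appear in this patch, with page being
    the struct page whose tags are about to be (re)written:

	/* Reset the KASAN tag recorded in page->flags first ... */
	page_kasan_tag_reset(page);
	/* ... make that update visible before the tag writes ... */
	smp_wmb();
	/* ... then write the memory tags of the page itself. */
	mte_clear_page_tags(page_address(page));

    (The hibernation path is the one exception: there the tag in
    page->flags has already been restored, so no reset or barrier is
    needed before restoring the memory tags.)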

    Note: an alternative to this approach would be to modify
    page_to_virt(). That, however, could be racy: if one CPU checks the
    PG_mte_tagged bit and decides that the page is not tagged, while
    another CPU concurrently maps the same page with PROT_MTE and tags
    it, the subsequent kernel access would fail.
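
    For illustration only, a rough sketch of that rejected alternative;
    untagged_linear_address() is a made-up stand-in for the untagged
    linear-map address computation, not a real kernel helper:

	/* Hypothetical page_to_virt() variant -- NOT the approach taken. */
	void *addr = untagged_linear_address(page);	/* assumed helper */

	if (test_bit(PG_mte_tagged, &page->flags))	/* racy check */
		addr = (void *)__tag_set(addr, page_kasan_tag(page));

	/*
	 * Race window: after the test_bit() another CPU can map the page
	 * with PROT_MTE and tag it; the address built above then no longer
	 * matches the memory tags and the kernel access faults.
	 */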

    Signed-off-by: Vincenzo Frascino <vincenzo.frascino@arm.com>
    Signed-off-by: Andrey Konovalov <andreyknvl@google.com>
    Reviewed-by: Catalin Marinas <catalin.marinas@arm.com>
    ---
    Change-Id: I8451d438bb63364de2a3e68041e3a27866921d4e
    ---
     arch/arm64/kernel/hibernate.c | 5 +++++
     arch/arm64/kernel/mte.c       | 9 +++++++++
     arch/arm64/mm/copypage.c      | 9 +++++++++
     arch/arm64/mm/mteswap.c       | 9 +++++++++
     4 files changed, 32 insertions(+)
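
    For context on the smp_wmb() added below: it pairs with an address
    dependency on the reader side. Roughly, and only as a sketch of what
    a reader that builds a tagged address from page->flags does
    (untagged_linear_address() is again an assumed stand-in):

	u8 tag = page_kasan_tag(page);	/* load the tag from page->flags */
	void *addr = (void *)__tag_set(untagged_linear_address(page), tag);
	READ_ONCE(*(char *)addr);	/* access depends on the flags load */

    Because the access depends on the value loaded from page->flags, the
    reader needs no explicit barrier; the writer's smp_wmb() supplies the
    other half of the ordering.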

    diff --git a/arch/arm64/kernel/hibernate.c b/arch/arm64/kernel/hibernate.c
    index 42003774d261..9c9f47e9f7f4 100644
    --- a/arch/arm64/kernel/hibernate.c
    +++ b/arch/arm64/kernel/hibernate.c
    @@ -371,6 +371,11 @@ static void swsusp_mte_restore_tags(void)
     		unsigned long pfn = xa_state.xa_index;
     		struct page *page = pfn_to_online_page(pfn);
     
    +		/*
    +		 * It is not required to invoke page_kasan_tag_reset(page)
    +		 * at this point since the tags stored in page->flags are
    +		 * already restored.
    +		 */
     		mte_restore_page_tags(page_address(page), tags);
     
     		mte_free_tag_storage(tags);
    diff --git a/arch/arm64/kernel/mte.c b/arch/arm64/kernel/mte.c
    index 8f99c65837fd..86d554ce98b6 100644
    --- a/arch/arm64/kernel/mte.c
    +++ b/arch/arm64/kernel/mte.c
    @@ -34,6 +34,15 @@ static void mte_sync_page_tags(struct page *page, pte_t *ptep, bool check_swap)
     			return;
     	}
     
    +	page_kasan_tag_reset(page);
    +	/*
    +	 * We need smp_wmb() between setting the flags and clearing the
    +	 * tags: if another thread reads page->flags and builds a tagged
    +	 * address out of it, the address dependency orders its memory
    +	 * access, but on the current thread nothing else guarantees that
    +	 * the new page->flags are visible before the tags are updated.
    +	 */
    +	smp_wmb();
     	mte_clear_page_tags(page_address(page));
     }

    diff --git a/arch/arm64/mm/copypage.c b/arch/arm64/mm/copypage.c
    index 70a71f38b6a9..b5447e53cd73 100644
    --- a/arch/arm64/mm/copypage.c
    +++ b/arch/arm64/mm/copypage.c
    @@ -23,6 +23,15 @@ void copy_highpage(struct page *to, struct page *from)
     
     	if (system_supports_mte() && test_bit(PG_mte_tagged, &from->flags)) {
     		set_bit(PG_mte_tagged, &to->flags);
    +		page_kasan_tag_reset(to);
    +		/*
    +		 * We need smp_wmb() between setting the flags and clearing
    +		 * the tags: if another thread reads page->flags and builds a
    +		 * tagged address out of it, the address dependency orders its
    +		 * memory access, but on the current thread nothing else
    +		 * guarantees that the new page->flags are visible before the
    +		 * tags are updated.
    +		 */
    +		smp_wmb();
     		mte_copy_page_tags(kto, kfrom);
     	}
     }
    diff --git a/arch/arm64/mm/mteswap.c b/arch/arm64/mm/mteswap.c
    index c52c1847079c..7c4ef56265ee 100644
    --- a/arch/arm64/mm/mteswap.c
    +++ b/arch/arm64/mm/mteswap.c
    @@ -53,6 +53,15 @@ bool mte_restore_tags(swp_entry_t entry, struct page *page)
     	if (!tags)
     		return false;
     
    +	page_kasan_tag_reset(page);
    +	/*
    +	 * We need smp_wmb() between setting the flags and clearing the
    +	 * tags: if another thread reads page->flags and builds a tagged
    +	 * address out of it, the address dependency orders its memory
    +	 * access, but on the current thread nothing else guarantees that
    +	 * the new page->flags are visible before the tags are updated.
    +	 */
    +	smp_wmb();
     	mte_restore_page_tags(page_address(page), tags);
     
     	return true;
    --
    2.29.2.454.gaff20da3a2-goog