    Subject: [PATCH 5.4 151/168] arm64: mm: use a 48-bit ID map when possible on 52-bit VA builds

    From: Greg Kroah-Hartman <gregkh@linuxfoundation.org>

    From: Ard Biesheuvel <ardb@kernel.org>

    [ Upstream commit 7ba8f2b2d652cd8d8a2ab61f4be66973e70f9f88 ]

    52-bit VA kernels can run on hardware that is only 48-bit capable, but
    configure the ID map as 52-bit by default. This was not a problem until
    recently, because the special T0SZ value for a 52-bit VA space was never
    programmed into the TCR register anyway, and because a 52-bit ID map
    happens to use the same number of translation levels as a 48-bit one.
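    For reference, the T0SZ field is just the complement of the VA size: a
    48-bit VA wants T0SZ=16 and a 52-bit VA wants T0SZ=12. The snippet below
    is an illustrative user-space sketch of that arithmetic, not kernel code;
    t0sz_for() is a hypothetical stand-in for what the kernel's TCR_T0SZ()
    macro computes (ignoring the field shift).

    /*
     * Illustrative only: T0SZ programmed into TCR_EL1 is 64 minus the
     * VA size handled by TTBR0. t0sz_for() is a stand-in for TCR_T0SZ().
     */
    #include <stdio.h>

    static unsigned int t0sz_for(unsigned int va_bits)
    {
    	return 64 - va_bits;
    }

    int main(void)
    {
    	printf("48-bit VA -> T0SZ=%u\n", t0sz_for(48));	/* 16 */
    	printf("52-bit VA -> T0SZ=%u\n", t0sz_for(52));	/* 12 */
    	return 0;
    }
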

    This behavior was changed by commit 1401bef703a4 ("arm64: mm: Always update
    TCR_EL1 from __cpu_set_tcr_t0sz()"), which causes the unsupported T0SZ
    value for a 52-bit VA to be programmed into TCR_EL1. While some hardware
    simply ignores this, Mark reports that Amberwing systems choke on this,
    resulting in a broken boot. But even before that commit, the unsupported
    idmap_t0sz value was exposed to KVM and used to program TCR_EL2 incorrectly
    as well.

    Given that we already have to deal with address spaces being either 48-bit
    or 52-bit in size, the cleanest approach seems to be to simply default to
    a 48-bit VA ID map, and only switch to a 52-bit one if the placement of the
    kernel in DRAM requires it. This is guaranteed not to happen unless the
    system is actually 52-bit VA capable.
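    Concretely, the ID map only needs to be extended when the identity-mapped
    part of the image sits at a physical address that a 48-bit map cannot
    reach. The sketch below illustrates that check in user-space C;
    default_idmap_fits() and the sample addresses are assumptions for
    illustration, while head.S does the equivalent with a clz instruction on
    the physical address of __idmap_text_end (see the hunk below).

    /*
     * Illustrative only, not kernel code: the default (VA_BITS_MIN = 48)
     * ID map is sufficient unless the identity-mapped text needs more
     * than 48 bits of physical address.
     */
    #include <stdbool.h>
    #include <stdint.h>
    #include <stdio.h>

    #define VA_BITS_MIN	48

    static bool default_idmap_fits(uint64_t idmap_end)
    {
    	/* clz >= T0SZ means the top T0SZ bits are clear, i.e. the
    	 * address is reachable with a VA_BITS_MIN-sized ID map. */
    	return __builtin_clzll(idmap_end) >= 64 - VA_BITS_MIN;
    }

    int main(void)
    {
    	/* image below 2^48: keep the default 48-bit ID map */
    	printf("%d\n", default_idmap_fits(0x0000ffffffff0000ULL));
    	/* image above 2^48: switch to a 52-bit ID map */
    	printf("%d\n", default_idmap_fits(0x0004000000000000ULL));
    	return 0;
    }
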

    Fixes: 90ec95cda91a ("arm64: mm: Introduce VA_BITS_MIN")
    Reported-by: Mark Salter <msalter@redhat.com>
    Link: http://lore.kernel.org/r/20210310003216.410037-1-msalter@redhat.com
    Signed-off-by: Ard Biesheuvel <ardb@kernel.org>
    Link: https://lore.kernel.org/r/20210310171515.416643-2-ardb@kernel.org
    Signed-off-by: Will Deacon <will@kernel.org>
    Signed-off-by: Sasha Levin <sashal@kernel.org>
    ---
     arch/arm64/include/asm/mmu_context.h | 5 +----
     arch/arm64/kernel/head.S             | 2 +-
     arch/arm64/mm/mmu.c                  | 2 +-
     3 files changed, 3 insertions(+), 6 deletions(-)

    diff --git a/arch/arm64/include/asm/mmu_context.h b/arch/arm64/include/asm/mmu_context.h
    index 3827ff4040a3..3a5d9f1c91b6 100644
    --- a/arch/arm64/include/asm/mmu_context.h
    +++ b/arch/arm64/include/asm/mmu_context.h
    @@ -63,10 +63,7 @@ extern u64 idmap_ptrs_per_pgd;
     
     static inline bool __cpu_uses_extended_idmap(void)
     {
    -	if (IS_ENABLED(CONFIG_ARM64_VA_BITS_52))
    -		return false;
    -
    -	return unlikely(idmap_t0sz != TCR_T0SZ(VA_BITS));
    +	return unlikely(idmap_t0sz != TCR_T0SZ(vabits_actual));
     }
     
     /*
    diff --git a/arch/arm64/kernel/head.S b/arch/arm64/kernel/head.S
    index 438de2301cfe..a2e0b3754943 100644
    --- a/arch/arm64/kernel/head.S
    +++ b/arch/arm64/kernel/head.S
    @@ -337,7 +337,7 @@ __create_page_tables:
     	 */
     	adrp	x5, __idmap_text_end
     	clz	x5, x5
    -	cmp	x5, TCR_T0SZ(VA_BITS)	// default T0SZ small enough?
    +	cmp	x5, TCR_T0SZ(VA_BITS_MIN) // default T0SZ small enough?
     	b.ge	1f			// .. then skip VA range extension
     
     	adr_l	x6, idmap_t0sz
    diff --git a/arch/arm64/mm/mmu.c b/arch/arm64/mm/mmu.c
    index d10247fab0fd..99bc0289ab2b 100644
    --- a/arch/arm64/mm/mmu.c
    +++ b/arch/arm64/mm/mmu.c
    @@ -38,7 +38,7 @@
     #define NO_BLOCK_MAPPINGS	BIT(0)
     #define NO_CONT_MAPPINGS	BIT(1)
     
    -u64 idmap_t0sz = TCR_T0SZ(VA_BITS);
    +u64 idmap_t0sz = TCR_T0SZ(VA_BITS_MIN);
     u64 idmap_ptrs_per_pgd = PTRS_PER_PGD;
     
     u64 __section(".mmuoff.data.write") vabits_actual;
    --
    2.30.1

