    From: Mark Brown <broonie@kernel.org>
    Subject: [PATCH 5.4 217/232] arm64: Always force a branch protection mode when the compiler has one

    commit b8fdef311a0bd9223f10754f94fdcf1a594a3457 upstream.

    Compilers with branch protection support can be configured to enable it by
    default, and it is likely that distributions will do this as part of deploying
    branch protection system wide. As well as the slight overhead from having
    some extra NOPs for unused branch protection features, this can cause more
    serious problems when the kernel is providing pointer authentication to
    userspace but is not built for pointer authentication itself. In that case our
    switching of keys for userspace can affect the kernel unexpectedly, causing
    pointer authentication instructions in the kernel to corrupt addresses.
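
    To make the failure mode concrete: with -mbranch-protection=pac-ret (or
    =standard) in effect, the compiler signs the return address in function
    prologues and authenticates it in epilogues, so a kernel built that way
    silently depends on the pointer authentication keys that get switched for
    userspace. Below is a minimal sketch showing the difference in generated
    code (illustration only, not part of the patch; it assumes an AArch64
    compiler with branch protection support, and the file names are made up):

    # Illustration only: compare code generated with and without branch
    # protection. Assumes an AArch64 gcc/clang that accepts -mbranch-protection;
    # recipe lines must be indented with a tab.
    CC ?= gcc

    demo.c:
    	printf 'long f(long (*g)(void)) { return g() + 1; }\n' > $@

    # pac.s contains PACIASP/AUTIASP (return address signing); nopac.s does not.
    pac.s: demo.c
    	$(CC) -O2 -S -mbranch-protection=pac-ret -o $@ $<

    nopac.s: demo.c
    	$(CC) -O2 -S -mbranch-protection=none -o $@ $<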

    To ensure that we get consistent and reliable behaviour, always explicitly
    initialise the branch protection mode so that the kernel is built the same
    way regardless of the compiler defaults.
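
    For readers unfamiliar with it, cc-option is a try-compile probe: the flag
    is added to KBUILD_CFLAGS only if the compiler accepts it, so older
    toolchains that don't know -mbranch-protection are left alone. Below is a
    standalone sketch of the same probing technique (illustration only, not
    part of the patch; the variable names and the use of /dev/null as the
    probe input are assumptions for this demo):

    # Illustration only: a minimal stand-in for the kernel's cc-option helper.
    CC ?= gcc
    # Expands to the flag if $(CC) accepts it, or to nothing otherwise.
    branch-prot-flag := $(shell $(CC) -Werror -mbranch-protection=none \
                                -c -x c /dev/null -o /dev/null 2>/dev/null \
                                && echo -mbranch-protection=none)
    demo-cflags += $(branch-prot-flag)

    show-flags:   # the recipe line below must be indented with a tab
    	@echo "demo CFLAGS: $(demo-cflags)"

    With a toolchain that lacks the option the probe fails quietly and the
    variable stays empty; with an AArch64 compiler that supports it, "make
    show-flags" prints the flag.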

    [This is a reworked version of b8fdef311a0bd9223f1075 ("arm64: Always
    force a branch protection mode when the compiler has one") for backport.
    Kernels prior to 74afda4016a7 ("arm64: compile the kernel with ptrauth
    return address signing") don't have any Makefile machinery for forcing
    on pointer auth, but they still have issues if the compiler defaults it
    on, so they need this reworked version. -- broonie]

    Fixes: 7503197562567 ("arm64: add basic pointer authentication support")
    Reported-by: Szabolcs Nagy <szabolcs.nagy@arm.com>
    Signed-off-by: Mark Brown <broonie@kernel.org>
    Cc: stable@vger.kernel.org
    [catalin.marinas@arm.com: remove Kconfig option in favour of Makefile check]
    Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
    Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>

    ---
    arch/arm64/Makefile | 4 ++++
    1 file changed, 4 insertions(+)

    --- a/arch/arm64/Makefile
    +++ b/arch/arm64/Makefile
    @@ -72,6 +72,10 @@ stack_protector_prepare: prepare0
    include/generated/asm-offsets.h))
    endif

    +# Ensure that if the compiler supports branch protection we default it
    +# off.
    +KBUILD_CFLAGS += $(call cc-option,-mbranch-protection=none)
    +
    ifeq ($(CONFIG_CPU_BIG_ENDIAN), y)
    KBUILD_CPPFLAGS += -mbig-endian
    CHECKFLAGS += -D__AARCH64EB__
