Subject: Re: [PATCH v2] ARM: Implement Clang's SLS mitigation
    On Fri, Feb 12, 2021 at 11:53 AM 'Jian Cai' via Clang Built Linux
    <clang-built-linux@googlegroups.com> wrote:

    The oneline of the commit is "ARM: Implement Clang's SLS mitigation,"
    but that's not precise: GCC implements the same flag with the same
    arguments, so there is nothing compiler-specific about this patch.
    (Though perhaps different section names are used; see below.)

    >
    > This patch adds CONFIG_HARDEN_SLS_ALL that can be used to turn on
    > -mharden-sls=all, which mitigates the straight-line speculation
    > vulnerability, speculative execution of the instruction following some
    > unconditional jumps. Notice -mharden-sls= has other options as below,
    > and this config turns on the strongest option.
    >
    > all: enable all mitigations against Straight Line Speculation that are implemented.
    > none: disable all mitigations against Straight Line Speculation.
    > retbr: enable the mitigation against Straight Line Speculation for RET and BR instructions.
    > blr: enable the mitigation against Straight Line Speculation for BLR instructions.
    >
    > Link: https://reviews.llvm.org/D93221
    > Link: https://reviews.llvm.org/D81404
    > Link: https://developer.arm.com/support/arm-security-updates/speculative-processor-vulnerability/downloads/straight-line-speculation
    > https://developer.arm.com/support/arm-security-updates/speculative-processor-vulnerability/frequently-asked-questions#SLS2
    >
    > Suggested-by: Manoj Gupta <manojgupta@google.com>
    > Suggested-by: Nathan Chancellor <nathan@kernel.org>
    > Suggested-by: David Laight <David.Laight@aculab.com>
    > Signed-off-by: Jian Cai <jiancai@google.com>
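
    For anyone else reading along, my rough mental model of what
    -mharden-sls=all emits, pieced together from the LLVM reviews linked
    above (a sketch, so the exact sequences may vary between compiler
    versions):

        // retbr: a speculation barrier is inserted after every ret/br
        ret
        dsb sy      // or a single "sb" where the SB extension is available
        isb

        // blr: "blr x0" becomes "bl __llvm_slsblr_thunk_x0", with one
        // such thunk generated per register:
        __llvm_slsblr_thunk_x0:
            br  x0
            dsb sy
            isb

    Each thunk lands in its own .text.__llvm_slsblr_thunk_* section, which
    is where the orphaned sections in the warnings below come from.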

    I observe lots of linker warnings with this applied on linux-next:

    ld.lld: warning: init/built-in.a(main.o):(.text.__llvm_slsblr_thunk_x0) is being placed in '.text.__llvm_slsblr_thunk_x0'

    You need to modify arch/arm64/kernel/vmlinux.lds.S and
    arch/arm/kernel/vmlinux.lds.S (and possibly
    arch/arm/boot/compressed/vmlinux.lds.S as well) to add these sections
    back into .text so that the linkers don't place these orphaned
    sections in wild places. The resulting aarch64 kernel image doesn't
    even boot (under emulation).
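
    Something along these lines in the .text output section is the kind of
    change I have in mind (an untested sketch; the exact spot in each
    vmlinux.lds.S would need checking):

        .text : {
                /* ... existing contents ... */
                *(.text.__llvm_slsblr_thunk_*)  /* keep the SLS BLR thunks in .text */
        }

    An explicit match on the thunk sections seems less risky than a
    blanket *(.text.*) pattern.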

    For 32b ARM:

    ld.lld: warning: init/built-in.a(main.o):(.text.__llvm_slsblr_thunk_arm_r0) is being placed in '.text.__llvm_slsblr_thunk_arm_r0'
    ...
    ld.lld: warning: init/built-in.a(main.o):(.text.__llvm_slsblr_thunk_thumb_r0) is being placed in '.text.__llvm_slsblr_thunk_thumb_r0'
    ...
    <trimmed, but there's close to 60 of these>

    And the image doesn't boot (under emulation).
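
    (If someone wants to reproduce, a quick QEMU smoke test is all it
    takes to see whether the image comes up; the flags below are just an
    example of that kind of test, not my exact invocation:

        qemu-system-aarch64 -machine virt -cpu cortex-a57 -m 512M \
            -kernel arch/arm64/boot/Image -nographic \
            -append "console=ttyAMA0 earlycon"

    A working Image at least prints the early boot log before panicking
    about the missing root device.)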

    > ---
    >
    > Changes v1 -> v2:
    > Update the description and patch based on Nathan and David's comments.
    >
    > arch/arm/Makefile | 4 ++++
    > arch/arm64/Makefile | 4 ++++
    > security/Kconfig.hardening | 7 +++++++
    > 3 files changed, 15 insertions(+)
    >
    > diff --git a/arch/arm/Makefile b/arch/arm/Makefile
    > index 4aaec9599e8a..11d89ef32da9 100644
    > --- a/arch/arm/Makefile
    > +++ b/arch/arm/Makefile
    > @@ -48,6 +48,10 @@ CHECKFLAGS += -D__ARMEL__
    > KBUILD_LDFLAGS += -EL
    > endif
    >
    > +ifeq ($(CONFIG_HARDEN_SLS_ALL), y)
    > +KBUILD_CFLAGS += -mharden-sls=all
    > +endif
    > +
    > #
    > # The Scalar Replacement of Aggregates (SRA) optimization pass in GCC 4.9 and
    > # later may result in code being generated that handles signed short and signed
    > diff --git a/arch/arm64/Makefile b/arch/arm64/Makefile
    > index 90309208bb28..ca7299b356a9 100644
    > --- a/arch/arm64/Makefile
    > +++ b/arch/arm64/Makefile
    > @@ -34,6 +34,10 @@ $(warning LSE atomics not supported by binutils)
    > endif
    > endif
    >
    > +ifeq ($(CONFIG_HARDEN_SLS_ALL), y)
    > +KBUILD_CFLAGS += -mharden-sls=all
    > +endif
    > +
    > cc_has_k_constraint := $(call try-run,echo \
    > 'int main(void) { \
    > asm volatile("and w0, w0, %w0" :: "K" (4294967295)); \
    > diff --git a/security/Kconfig.hardening b/security/Kconfig.hardening
    > index 269967c4fc1b..9266d8d1f78f 100644
    > --- a/security/Kconfig.hardening
    > +++ b/security/Kconfig.hardening
    > @@ -121,6 +121,13 @@ choice
    >
    > endchoice
    >
    > +config HARDEN_SLS_ALL
    > + bool "enable SLS vulnerability hardening"
    > + def_bool $(cc-option,-mharden-sls=all)

    This fails to set CONFIG_HARDEN_SLS_ALL for me with:

    $ ARCH=arm CROSS_COMPILE=arm-linux-gnueabi- make LLVM=1 LLVM_IAS=1 -j72 defconfig
    $ grep SLS_ALL .config
    # CONFIG_HARDEN_SLS_ALL is not set

    but it's flipped on for arm64 defconfig:

    $ ARCH=arm64 CROSS_COMPILE=aarch64-linux-gnu- make LLVM=1 LLVM_IAS=1 -j72 defconfig
    $ grep SLS_ALL .config
    CONFIG_HARDEN_SLS_ALL=y

    What's going on there? Is the cc-option Kconfig macro broken for
    Clang when cross-compiling 32b ARM? I can still enable
    CONFIG_HARDEN_SLS_ALL via menuconfig, but I wonder whether the default
    value ends up wrong because the cc-option check is failing.
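
    If I am reading scripts/Kconfig.include right, $(cc-option,...) boils
    down to roughly the following, so the check should be reproducible by
    hand (a sketch; the exact flags Kconfig passes may differ):

        # roughly what Kconfig's cc-option runs for this check
        clang --target=arm-linux-gnueabi -Werror -mharden-sls=all \
            -S -x c /dev/null -o /dev/null; echo $?

    If that exits non-zero for the 32b target but zero for
    --target=aarch64-linux-gnu, then the differing defaults just reflect
    what the compiler accepts and not a Kconfig macro problem.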

    > + help
    > + Enables straight-line speculation vulnerability hardening
    > + at highest level.
    > +
    > config GCC_PLUGIN_STRUCTLEAK_VERBOSE
    > bool "Report forcefully initialized variables"
    > depends on GCC_PLUGIN_STRUCTLEAK
    > --

    --
    Thanks,
    ~Nick Desaulniers
