Subject: Re: [PATCH 2/4] arm64: implement support for static call trampolines
On Mon, 25 Oct 2021 at 16:47, Peter Zijlstra <peterz@infradead.org> wrote:
>
> On Mon, Oct 25, 2021 at 04:19:16PM +0200, Peter Zijlstra wrote:
> > On Mon, Oct 25, 2021 at 04:08:37PM +0200, Ard Biesheuvel wrote:
>
> > > > Ooohh, but what if you go from !func to NOP.
> > > >
> > > > assuming:
> > > >
> > > > .literal = 0
> > > > BTI C
> > > > RET
> > > >
> > > > Then
> > > >
> > > >     CPU0                    CPU1
> > > >
> > > >     [S] literal = func      [I] NOP
> > > >     [S] insn[1] = NOP       [L] x16 = literal (NULL)
> > > >                             b x16
> > > >                             *BANG*
> > > >
> > > > Is that possible? (total lack of memory ordering etc..)
> > > >
> > >
> > > The CBZ will branch to the RET instruction if x16 == 0x0, so this
> > > should not happen.
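(For context, this is what the whole trampoline looks like at the moment
-- cf. the layout comment in the patch below:)

    0:      .quad   0x0             // <literal>
            bti     c               // <--- trampoline entry point
            <branch or nop>
            ldr     x16, 0b
            cbz     x16, 1f         // literal is NULL: fall through
            br      x16
    1:      ret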
> >
> > Oooh, I missed that :/ I was about to suggest writing the address of a
> > bare 'ret' trampoline instead of NULL into the literal.
>
> Perhaps a little something like so.. Shaves 2 instructions off each
> trampoline.
>
> --- a/arch/arm64/include/asm/static_call.h
> +++ b/arch/arm64/include/asm/static_call.h
> @@ -11,9 +11,7 @@
>      "   hint 34 /* BTI C */     \n" \
>      insn "                      \n" \
>      "   ldr x16, 0b             \n" \
> -    "   cbz x16, 1f             \n" \
>      "   br x16                  \n" \
> -    "1: ret                     \n" \
>      "   .popsection             \n")
>
>  #define ARCH_DEFINE_STATIC_CALL_TRAMP(name, func)               \
> --- a/arch/arm64/kernel/patching.c
> +++ b/arch/arm64/kernel/patching.c
> @@ -90,6 +90,11 @@ int __kprobes aarch64_insn_write(void *a
>      return __aarch64_insn_write(addr, &i, AARCH64_INSN_SIZE);
>  }
>
> +asm("__static_call_ret:     \n"
> +    "   ret                 \n");
> +

This breaks BTI: the helper lacks a landing pad (BTI C), yet it will be
called indirectly, via 'br x16'.
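If we keep the bare asm helper, the minimal fix would be to add the
landing pad explicitly, e.g. (untested sketch):

    asm("__static_call_ret:         \n"
        "   hint 34 /* BTI C */     \n"
        "   ret                     \n");

Since the trampoline branches here via 'br x16', a BTI C landing pad is
sufficient.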

> +extern void __static_call_ret(void);
> +

Better to have an ordinary C function here (with consistent linkage),
but we need to take the address in a way that works with Clang CFI.
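Something like this is what I have in mind (untested sketch, assuming we
can use the existing function_nocfi() helper to get at the actual
function address rather than the CFI jump table entry):

    /* Ordinary C function: the compiler emits the landing pad for us. */
    void __static_call_ret(void)
    {
    }

and in arch_static_call_transform():

    insns.literal = (u64)function_nocfi(__static_call_ret);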

As the two additional instructions are on an ice-cold path anyway, I'm
not sure this is an obvious improvement, tbh.

>  void arch_static_call_transform(void *site, void *tramp, void *func, bool tail)
>  {
>      /*
> @@ -97,9 +102,7 @@ void arch_static_call_transform(void *si
>       * 0x0  bti c       <--- trampoline entry point
>       * 0x4  <branch or nop>
>       * 0x8  ldr x16, <literal>
> -     * 0xc  cbz x16, 20
> -     * 0x10 br x16
> -     * 0x14 ret
> +     * 0xc  br x16
>       */
>      struct {
>          u64 literal;
> @@ -113,6 +116,7 @@ void arch_static_call_transform(void *si
>      insns.insn[0] = cpu_to_le32(insn);
>
>      if (!func) {
> +        insns.literal = (unsigned long)&__static_call_ret;
>          insn = aarch64_insn_gen_branch_reg(AARCH64_INSN_REG_LR,
>                                             AARCH64_INSN_BRANCH_RETURN);
>      } else {
