Subject: Re: [PATCH 05/29] x86: Base IBT bits
On 18/02/2022 16:49, Peter Zijlstra wrote:
> +/*
> + * A bit convoluted, but matches both endbr32 and endbr64 without
> + * having either as literal in the text.
> + */
> +static inline bool is_endbr(const void *addr)
> +{
> +	unsigned int val = ~*(unsigned int *)addr;
> +	val |= 0x01000000U;
> +	return val == ~0xfa1e0ff3;
> +}

At this point, I feel I've earned an "I told you so". :)
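
For anyone following along at home: the trick relies on endbr32
(f3 0f 1e fb) and endbr64 (f3 0f 1e fa) differing only in bit 24 of the
little-endian word, which the "val |= 0x01000000U" folds away before
the compare.  A quick standalone check of the arithmetic, separate from
either patch:

#include <assert.h>

int main(void)
{
    unsigned int endbr64 = 0xfa1e0ff3U;  /* f3 0f 1e fa, little endian */
    unsigned int endbr32 = 0xfb1e0ff3U;  /* f3 0f 1e fb, little endian */
    unsigned int nops    = 0x90909090U;  /* must not match */

    assert((~endbr64 | 0x01000000U) == ~0xfa1e0ff3U);
    assert((~endbr32 | 0x01000000U) == ~0xfa1e0ff3U);
    assert((~nops    | 0x01000000U) != ~0xfa1e0ff3U);

    return 0;
}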

Clang 13 sees straight through the trickery and generates:

is_endbr:                               # @is_endbr
        movl    $-16777217, %eax                # imm = 0xFEFFFFFF
        andl    (%rdi), %eax
        cmpl    $-98693133, %eax                # imm = 0xFA1E0FF3
        sete    %al
        retq
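
The imm32 is the problem.  "cmp eax, imm32" is opcode 3d, so that cmpl
assembles to

        3d f3 0f 1e fa          cmpl   $0xfa1e0ff3, %eax

and bytes 1-4 of it are a literal endbr64 in .text, i.e. a legal
indirect branch target in the middle of is_endbr() itself.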

Here's one I prepared earlier:

/*
 * In some cases we need to inspect/insert endbr64 instructions.
 *
 * The naive way, mem{cmp,cpy}(ptr, "\xf3\x0f\x1e\xfa", 4), optimises
 * unsafely by placing 0xfa1e0ff3 in an imm32 operand, and marks a legal
 * indirect branch target as far as the CPU is concerned.
 *
 * gen_endbr64() is written deliberately to avoid the problematic
 * operand, and marked __const__ as it is safe for the optimiser to
 * hoist/merge/etc.
 */
static inline uint32_t __attribute_const__ gen_endbr64(void)
{
    uint32_t res;

    asm ( "mov $~0xfa1e0ff3, %[res]\n\t"
          "not %[res]\n\t"
          : [res] "=&r" (res) );

    return res;
}

which should be robust against even the most enterprising optimiser.
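
As an untested sketch (for illustration only, not from either patch),
is_endbr() could then be built on top of it, still matching both
encodings by folding bit 24 away on both sides at runtime:

static inline bool is_endbr(const void *addr)
{
    /*
     * The loaded value and the generated constant both get bit 24 set,
     * so endbr32 (f3 0f 1e fb) and endbr64 (f3 0f 1e fa) compare equal,
     * and no imm32 operand anywhere holds 0xfa1e0ff3.
     */
    uint32_t val = *(const uint32_t *)addr | 0x01000000U;

    return val == (gen_endbr64() | 0x01000000U);
}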

~Andrew

P.S. Clang IAS had better never get "clever" enough to optimise what it
finds in asm statements...