Subject: Re: [patch 00/38] x86/retbleed: Call depth tracking mitigation
On Mon, Jul 18, 2022 at 03:48:04PM -0700, Sami Tolvanen wrote:
> On Mon, Jul 18, 2022 at 2:18 PM Peter Zijlstra <peterz@infradead.org> wrote:
> >
> > On Mon, Jul 18, 2022 at 10:44:14PM +0200, Thomas Gleixner wrote:
> > > And we need input from the Clang folks because their CFI work also puts
> > > stuff in front of the function entry, which nicely collides.
> >
> > Right, I need to go look at the latest kCFI patches, that sorta got
> > side-tracked for working on all the retbleed muck :/
> >
> > Basically kCFI wants to preface every (indirect callable) function with:
> >
> > __cfi_\func:
> > int3
> > movl $0x12345678, %rax
> > int3
> > int3
> > \func:
>
> Yes, and in order to avoid scattering the code with call target
> gadgets, the preamble should remain immediately before the function.

I think we have a little room, but yes, -6 is just right to hit the UD2.

> > Ofc, we can still put the whole:
> >
> > sarq $5, PER_CPU_VAR(__x86_call_depth);
> > jmp \func_direct
> >
> > thing in front of that.
>
> Sure, that would work.

So if we assume \func starts with ENDBR, and further assume we've fixed
up every direct jmp/call to land at +4, we can overwrite the ENDBR with
part of the SARQ; that leaves us 6 more bytes, placing the immediate at
-10 if I'm not mis-counting.
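
If I'm counting right, the byte map would look something like this
(just a sketch; the sarq byte count assumes the 10-byte %gs-relative
encoding, and which register the hash movl uses doesn't matter here):

\func-11: b8 78 56 34 12                 movl $0x12345678, %eax  # hash at \func-10
\func-6:  65 48 c1 3c 25 .. .. .. .. 05  sarq $5, PER_CPU_VAR(__x86_call_depth)
\func+4:                                 ...   # function body; every direct
                                               # call/jmp fixed up to land here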

Now, the call sites are:

41 81 7b fa 78 56 34 12 cmpl $0x12345678, -6(%r11)
74 02 je 1f
0f 0b ud2
e8 00 00 00 00 1: call __x86_indirect_thunk_r11

That means the offset of +10 lands in the middle of the CALL
instruction, and since we only have 16 thunks there is a limited number
of byte patterns available there.
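
Spelling out the gadget math for my own sanity (offsets within the call
site above):

offset  0: 41 81 7b fa 78 56 34 12  cmpl   # hash immediate at bytes 4-7
offset  8: 74 02                    je
offset 10: 0f 0b                    ud2
offset 12: e8 00 00 00 00           call

hash at \func-6:  forged target = imm+6  = offset 10, straight into the UD2
hash at \func-10: forged target = imm+10 = offset 14, i.e. inside the rel32
                  of the CALL to the thunk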

This really isn't as nice as the -6 but might just work well enough,
hmm?

> > But it does somewhat destroy the version I had that only needs the
> > 10 bytes padding for the sarq.
>
> There's also the question of how function alignment should work in the
> KCFI case. Currently, the __cfi_ preamble is 16-byte aligned, which
> obviously means the function itself isn't.

That seems unfortunate; at least the Intel parts have a 16-byte i-fetch
window (IIRC), so aligning the actual instructions at 16 bytes gets you
the best bang for the buck wrt ifetch (functions are random access, not
sequential).

Also, since we're talking at least 4 bytes more padding over the 7 that
are required by the kCFI scheme, the FineIBT alternative gets a little
more room to breathe. I'm thinking we can have the FineIBT landing site
at -16.

__fineibt_\func:
endbr64 # 4
xorl $0x12345678, %r10d # 7
je \func+4 # 2
ud2 # 2

\func:
nop4
...

With the callsite looking like:

nop7
movl $0x12345678, %r10d # 7
call *%r11 # 3

or something like that (everything having IBT has eIBRS at the very
least).
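
FWIW the -16 placement adds up, if my byte counting is right:

\func-16: endbr64                   # 4
\func-12: xorl  $0x12345678, %r10d  # 7
\func-5:  je    \func+4             # 2
\func-3:  ud2                       # 2
\func-1:  (1 byte of padding)
\func+0:  nop4                      # direct call/jmp sites land at +4

(and I assume the nop7 at the call site is simply so that, with the byte
counts above, 7+7+3 = 17 bytes exactly covers the 17-byte kCFI sequence
when we rewrite it in place).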
