From: Linus Torvalds
Date: Fri, 26 Jan 2018
Subject: Re: [PATCH] x86/retpoline/entry: Disable the entire SYSCALL64 fast path with retpolines on
On Fri, Jan 26, 2018 at 7:57 AM, Andy Lutomirski <luto@kernel.org> wrote:
>
> I gave a rearrangement like this a try yesterday, and it's a bit of a
> mess. Part of the problem is that there are a bunch of pieces of code
> that expect sys_xyz() to be actual callable functions.

That's not supposed to be a mess.

That's part of why we do that whole indirection through SYSC##xyz to
sys##_xyz: the asm-callable ones will do the full casting of
troublesome arguments (some architectures have C calling-convention
rules with security implications, so we need to make sure that the
arguments actually follow the right rules and that 'int' arguments are
properly sign-extended etc).
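
Hand-expanded for a made-up two-argument syscall, the current
expansion looks roughly like this (names simplified; the real macros
in include/linux/syscalls.h build it with the __MAP()/__SC_CAST()
helpers):

    /* Inner body: plain C function with the natural argument types. */
    static inline long SYSC_example(int fd, unsigned long len);

    /*
     * Asm-callable entry point: takes every argument as a full
     * register-width 'long' and casts down.  The (int) cast is what
     * guarantees proper sign extension even if the caller left
     * garbage in the upper half of the register.
     */
    asmlinkage long SyS_example(long fd, long len)
    {
            return SYSC_example((int)fd, (unsigned long)len);
    }

    static inline long SYSC_example(int fd, unsigned long len)
    {
            /* ... actual syscall body ... */
            return 0;
    }

(sys_example itself is just emitted as an alias of SyS_example, so asm
code has a real callable symbol.)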

So that whole indirection could be made to *also* create another
version of the syscall that instead takes the arguments from pt_regs.

We already do exactly that for the tracing events: look at how
FTRACE_SYSCALLS ends up creating that extra metadata.
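
Trimmed down, each SYSCALL_DEFINEx() use emits a static record along
these lines when that option is on (the real struct lives in
include/trace/syscall.h and carries a bit more):

    struct syscall_metadata {
            const char      *name;
            int             nb_args;
            const char      **types;        /* argument type names */
            const char      **args;         /* argument names */
    };

    /* Emitted next to the syscall function itself. */
    static struct syscall_metadata __syscall_meta_example = {
            .name    = "sys_example",
            .nb_args = 2,
            .types   = (const char *[]){ "int", "unsigned long" },
            .args    = (const char *[]){ "fd", "len" },
    };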

The pt_regs version should be done the same way: don't make 'sys_xyz()'
take a struct pt_regs; instead make those SYSCALL_DEFINE*() macros
create a _new_ function called 'ptregs_xyz()', and then that function
does the argument unpacking.
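
For the made-up example above, and assuming the x86-64 argument
registers (di, si, dx, r10, r8, r9), the generated unpacking function
would look something like:

    /* Generated by the same macro, next to SyS_example(). */
    asmlinkage long ptregs_example(struct pt_regs *regs)
    {
            /*
             * Pull the arguments out of the saved user registers and
             * hand off to the normal asm-callable version, which does
             * the usual casting and sign extension.
             */
            return SyS_example(regs->di, regs->si);
    }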

Then the x86 system call table can just be switched over to call those
pt_regs versions instead.
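
Conceptually that's just a different function pointer in each slot,
something like this (simplified from arch/x86/entry/syscall_64.c, with
a hypothetical ptregs_ni_syscall stub for the unimplemented slots):

    typedef long (*sys_call_ptr_t)(struct pt_regs *);

    /* Hypothetical -ENOSYS stub matching the new signature. */
    static long ptregs_ni_syscall(struct pt_regs *regs)
    {
            return -ENOSYS;
    }

    const sys_call_ptr_t sys_call_table[__NR_syscall_max + 1] = {
            [0 ... __NR_syscall_max] = ptregs_ni_syscall,
            [__NR_example]           = ptregs_example,
    };

The entry code then makes one indirect call with the pt_regs pointer
as the only argument, instead of shuffling six registers into the C
calling convention first.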

Linus
