Subject: Re: [PATCH] x86/retpoline/entry: Disable the entire SYSCALL64 fast path with retpolines on
On Fri, Jan 26, 2018 at 09:40:23AM -0800, Linus Torvalds wrote:
> On Fri, Jan 26, 2018 at 7:57 AM, Andy Lutomirski <luto@kernel.org> wrote:
> >
> > I gave the rearrangement like this a try yesterday and it's a bit of a
> > mess. Part of the problem is that there are a bunch of pieces of code
> > that expect sys_xyz() to be actual callable functions.
>
> That's not supposed to be a mess.
>
> That's part of why we do that whole indirection through SYSC##xyz to
> sys##_xyz: the asm-callable ones will do the full casting of
> troublesome arguments (some architectures have C calling sequence
> rules that have security issues, so we need to make sure that the
> arguments actually follow the right rules and 'int' arguments are
> properly sign-extended etc).
>
> So that whole indirection could be made to *also* create another
> version of the syscall that instead took the arguments from ptregs.
>
> We already do exactly that for the tracing events: look how
> FTRACE_SYSCALLS ends up creating that extra metadata.
>
> The ptreg version should be done the same way: don't make 'sys_xyz()'
> take a struct ptregs, instead make those SYSCALL_DEFINE*() macros
> create a _new_ function called 'ptregs_xyz()' and then that function
> does the argument unpacking.
>
> Then the x86 system call table can just be switched over to call those
> ptreg versions instead.

Umm... What about other architectures? Or do you want SYSCALL_DEFINE...
to be per-arch? I wonder how much that "go through pt_regs" approach would
hurt on something like sparc...
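
[For illustration only, here is a minimal sketch of the scheme Linus describes
above: have the SYSCALL_DEFINEn() macro emit, alongside the normal callable
sys_xyz(), an extra ptregs_xyz() wrapper that unpacks its arguments from
struct pt_regs, so the x86 syscall table can be pointed at the wrapper. This
is not the actual kernel macro; the two-argument-only variant, the trimmed
pt_regs layout, and the simple (type)(long) cast standing in for the per-arch
argument fixups (e.g. sign-extending 'int' arguments) are all placeholders.]

struct pt_regs {
	/* x86-64 syscall argument registers, trimmed for the sketch */
	unsigned long di, si, dx, r10, r8, r9;
};

#define SYSCALL_DEFINE2(name, t1, a1, t2, a2)				\
	static long SYSC_##name(t1 a1, t2 a2);				\
	long sys_##name(t1 a1, t2 a2)	/* asm-callable entry */	\
	{								\
		return SYSC_##name(a1, a2);				\
	}								\
	long ptregs_##name(struct pt_regs *regs)			\
	{								\
		/* unpack + re-cast args so C type rules are honoured */\
		return SYSC_##name((t1)(long)regs->di,			\
				   (t2)(long)regs->si);			\
	}								\
	static long SYSC_##name(t1 a1, t2 a2)

/* example use: the body is written once, both entry points fall out */
SYSCALL_DEFINE2(dup2, unsigned int, oldfd, unsigned int, newfd)
{
	/* ... real implementation would go here ... */
	return 0;
}

[With wrappers like ptregs_dup2() generated everywhere, the arch syscall
table could become an array of long (*)(struct pt_regs *) and the entry code
would no longer need to move six registers into C argument positions on the
fast path.]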
