Date: Wed, 9 Dec 2015 16:08:22 -0500
Subject: Re: [PATCH] x86/entry/64: Remove duplicate syscall table for fast path
From: Brian Gerst <>
On Wed, Dec 9, 2015 at 1:53 PM, Andy Lutomirski <luto@amacapital.net> wrote:
> On Wed, Dec 9, 2015 at 5:02 AM, Brian Gerst <brgerst@gmail.com> wrote:
>> Instead of using a duplicate syscall table for the fast path, create stubs for
>> the syscalls that need pt_regs that save the extra registers if a flag for the
>> slow path is not set.
>>
>> Signed-off-by: Brian Gerst <brgerst@gmail.com>
>> To: Andy Lutomirski <luto@amacapital.net>
>> Cc: Andy Lutomirski <luto@kernel.org>
>> Cc: the arch/x86 maintainers <x86@kernel.org>
>> Cc: Linux Kernel Mailing List <linux-kernel@vger.kernel.org>
>> Cc: Borislav Petkov <bp@alien8.de>
>> Cc: Frédéric Weisbecker <fweisbec@gmail.com>
>> Cc: Denys Vlasenko <dvlasenk@redhat.com>
>> Cc: Linus Torvalds <torvalds@linux-foundation.org>
>> ---
>>
>> Applies on top of Andy's syscall cleanup series.
>
> A couple questions:
>
>> @@ -306,15 +306,37 @@ END(entry_SYSCALL_64)
>>
>>  ENTRY(stub_ptregs_64)
>>  	/*
>> -	 * Syscalls marked as needing ptregs that go through the fast path
>> -	 * land here.  We transfer to the slow path.
>> +	 * Syscalls marked as needing ptregs land here.
>> +	 * If we are on the fast path, we need to save the extra regs.
>> +	 * If we are on the slow path, the extra regs are already saved.
>>  	 */
>> -	DISABLE_INTERRUPTS(CLBR_NONE)
>> -	TRACE_IRQS_OFF
>> -	addq	$8, %rsp
>> -	jmp	entry_SYSCALL64_slow_path
>> +	movq	PER_CPU_VAR(cpu_current_top_of_stack), %r10
>> +	testl	$TS_SLOWPATH, ASM_THREAD_INFO(TI_status, %r10, 0)
>> +	jnz	1f
>
> OK (but see below), but why not do:
>
>   addq $8, %rsp
>   jmp entry_SYSCALL64_slow_path
I've always been averse to doing things like that because it breaks
call/return branch prediction.  Also, are there any side effects to
calling enter_from_user_mode() more than once?
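
To spell out the branch-prediction point, the pattern under discussion
pairs a call with no matching ret (a sketch only; the fast-path label
is assumed rather than quoted from the entry code):

	/* fast path in entry_SYSCALL_64: */
	call	*sys_call_table(, %rax, 8)	/* pushes a return address and
						   an entry on the CPU's
						   return-stack predictor */

	/* proposed hand-off to the slow path inside stub_ptregs_64: */
	addq	$8, %rsp			/* discard that return address */
	jmp	entry_SYSCALL64_slow_path	/* no ret consumes the predictor
						   entry, so a later ret is
						   liable to mispredict */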
> here instead of the stack munging below?
>
>> +	subq	$SIZEOF_PTREGS, %r10
>> +	SAVE_EXTRA_REGS base=r10
>> +	movq	%r10, %rbx
>> +	call	*%rax
>> +	movq	%rbx, %r10
>> +	RESTORE_EXTRA_REGS base=r10
>> +	ret
>> +1:
>> +	jmp	*%rax
>>  END(stub_ptregs_64)
After some thought, that can be simplified.  The register-saving path
is only taken from the fast path, so pt_regs is at 8(%rsp).
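
For reference, this is the stack at stub entry on the fast path (my
reading of the entry code, assuming the full pt_regs frame is already
allocated at SYSCALL entry):

	/*
	 *	 (%rsp)  return address back into entry_SYSCALL_64
	 *	8(%rsp)  base of struct pt_regs (the r15-r12/rbp/rbx slots
	 *	         are allocated but not yet filled)
	 *
	 * so the extra regs can be saved at a fixed offset from %rsp,
	 * with no need to locate pt_regs via cpu_current_top_of_stack.
	 */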
> Also, can we not get away with keying off rip or rsp instead of
> ti->status?  That should be faster and less magical IMO.
Checking whether the return address is the instruction after the
fast-path dispatch would work.
Simplified version:

ENTRY(stub_ptregs_64)
	cmpl	$fast_path_return, (%rsp)
	jne	1f
	SAVE_EXTRA_REGS offset=8
	call	*%rax
	RESTORE_EXTRA_REGS offset=8
	ret
1:
	jmp	*%rax
END(stub_ptregs_64)
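
This assumes the fast-path dispatch grows a label on the instruction
after the indirect call, along these lines:

	/* fast path in entry_SYSCALL_64: */
	call	*sys_call_table(, %rax, 8)
fast_path_return:			/* the address stub_ptregs_64
					   compares against (%rsp) */
	movq	%rax, RAX(%rsp)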
--
Brian Gerst