From: Andy Lutomirski <>
Date: Wed, 9 Dec 2015 13:15:54 -0800
Subject: Re: [PATCH] x86/entry/64: Remove duplicate syscall table for fast path
On Wed, Dec 9, 2015 at 1:08 PM, Brian Gerst <brgerst@gmail.com> wrote:
> On Wed, Dec 9, 2015 at 1:53 PM, Andy Lutomirski <luto@amacapital.net> wrote:
>> On Wed, Dec 9, 2015 at 5:02 AM, Brian Gerst <brgerst@gmail.com> wrote:
>>> Instead of using a duplicate syscall table for the fast path, create stubs for
>>> the syscalls that need pt_regs that save the extra registers if a flag for the
>>> slow path is not set.
>>>
>>> Signed-off-by: Brian Gerst <brgerst@gmail.com>
>>> To: Andy Lutomirski <luto@amacapital.net>
>>> Cc: Andy Lutomirski <luto@kernel.org>
>>> Cc: the arch/x86 maintainers <x86@kernel.org>
>>> Cc: Linux Kernel Mailing List <linux-kernel@vger.kernel.org>
>>> Cc: Borislav Petkov <bp@alien8.de>
>>> Cc: Frédéric Weisbecker <fweisbec@gmail.com>
>>> Cc: Denys Vlasenko <dvlasenk@redhat.com>
>>> Cc: Linus Torvalds <torvalds@linux-foundation.org>
>>> ---
>>>
>>> Applies on top of Andy's syscall cleanup series.
>>
>> A couple questions:
>>
>>> @@ -306,15 +306,37 @@ END(entry_SYSCALL_64)
>>>
>>> ENTRY(stub_ptregs_64)
>>> /*
>>> - * Syscalls marked as needing ptregs that go through the fast path
>>> - * land here. We transfer to the slow path.
>>> + * Syscalls marked as needing ptregs land here.
>>> + * If we are on the fast path, we need to save the extra regs.
>>> + * If we are on the slow path, the extra regs are already saved.
>>> */
>>> - DISABLE_INTERRUPTS(CLBR_NONE)
>>> - TRACE_IRQS_OFF
>>> - addq $8, %rsp
>>> - jmp entry_SYSCALL64_slow_path
>>> + movq PER_CPU_VAR(cpu_current_top_of_stack), %r10
>>> + testl $TS_SLOWPATH, ASM_THREAD_INFO(TI_status, %r10, 0)
>>> + jnz 1f
>>
>> OK (but see below), but why not do:
>>
>> addq $8, %rsp
>> jmp entry_SYSCALL64_slow_path
>
> I've always been averse to doing things like that because it breaks
> call/return branch prediction.
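[Editorial note: spelled out, the alternative Andy suggests is essentially the
code the patch deletes, restated here with comments. A sketch against the
patch under discussion, not tested:

ENTRY(stub_ptregs_64)
        /*
         * We got here via the fast path's indirect call, so the top of
         * the stack holds a return address back into the fast path.
         * Discard it and restart the syscall via the slow path, which
         * saves the full pt_regs before dispatching.
         */
        DISABLE_INTERRUPTS(CLBR_NONE)   /* the slow path expects IRQs off */
        TRACE_IRQS_OFF
        addq    $8, %rsp                /* drop the fast-path return address */
        jmp     entry_SYSCALL64_slow_path
END(stub_ptregs_64)

Brian's objection: the call that entered this stub pushed an entry onto the
CPU's return stack buffer, but no matching ret ever executes, so a later ret
predicted from that stale entry will mispredict.]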
I'd agree with you there, except that the syscalls in question matter so little for performance that a handful of cycles from a return misprediction isn't worth worrying about. We're still avoiding IRET regardless (to the extent possible), and that was always the major factor.
> Also, are there any side effects to calling enter_from_user_mode()
> more than once?
A warning that invariants are broken if you have an appropriately configured kernel.
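[Editorial note: for reference, enter_from_user_mode() in this era of
arch/x86/entry/common.c looks roughly like the following; this is paraphrased
from memory, so treat the exact guards as approximate:

#ifdef CONFIG_CONTEXT_TRACKING
/* Called on kernel entry from user mode, with IRQs off. */
__visible void enter_from_user_mode(void)
{
        /*
         * If context tracking already believes we are in the kernel,
         * an invariant was violated -- e.g. this ran twice for a single
         * entry -- and CT_WARN_ON() emits a warning on kernels
         * configured to check it.
         */
        CT_WARN_ON(ct_state() != CONTEXT_USER);
        user_exit();
}
#endif
]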
>
>> here instead of the stack munging below?
>>
>>> + subq $SIZEOF_PTREGS, %r10
>>> + SAVE_EXTRA_REGS base=r10
>>> + movq %r10, %rbx
>>> + call *%rax
>>> + movq %rbx, %r10
>>> + RESTORE_EXTRA_REGS base=r10
>>> + ret
>>> +1:
>>> + jmp *%rax
>>> END(stub_ptregs_64)
>
> After some thought, that can be simplified. It's only executed on the
> fast path, so pt_regs is at 8(%rsp).
>
>> Also, can we not get away with keying off rip or rsp instead of
>> ti->status? That should be faster and less magical IMO.
>
> Checking if the return address is the instruction after the fast path
> dispatch would work.
>
> Simplified version:
> ENTRY(stub_ptregs_64)
> cmpl $fast_path_return, (%rsp)
Does that instruction actually work the way you want it to? (Does it link?) I think you might need to use leaq the way I did in my patch (see the leaq-based sketch at the end of this exchange).
> jne 1f
> SAVE_EXTRA_REGS offset=8
> call *%rax
> RESTORE_EXTRA_REGS offset=8
> ret
> 1:
> jmp *%rax
> END(stub_ptregs_64)
This'll work, I think, but I'd still prefer to keep as much complexity as possible in the slow path. I could be convinced otherwise, though -- this variant is reasonably clean.
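[Editorial note: combining the rip-keyed check with Andy's preference for
keeping the fixup in the slow path gives roughly the sketch below. The
fast_path_return label, the leaq form of the comparison (per Andy's remark
above), and the use of %r11 as scratch are assumptions for illustration, not
code from the thread:

ENTRY(stub_ptregs_64)
        /*
         * Were we called from the fast path?  If so, the return address
         * on the stack is fast_path_return, a label placed immediately
         * after the fast path's dispatch call.  A RIP-relative leaq
         * avoids the relocation question about a symbolic cmp
         * immediate; %r11 is caller-clobbered at this point.
         */
        leaq    fast_path_return(%rip), %r11
        cmpq    %r11, (%rsp)
        jne     1f

        /*
         * Fast path: rather than saving the extra regs here, discard
         * the return address and restart via the slow path, which
         * builds the full pt_regs itself.
         */
        DISABLE_INTERRUPTS(CLBR_NONE)   /* the slow path expects IRQs off */
        TRACE_IRQS_OFF
        addq    $8, %rsp
        jmp     entry_SYSCALL64_slow_path

1:
        jmp     *%rax   /* called from the slow path: pt_regs is complete */
END(stub_ptregs_64)
]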
--Andy