Subject: Re: [PATCH V3 3/7] x86/entry: Switch the stack after error_entry() returns
On Wed, Mar 16, 2022 at 11:05 PM Peter Zijlstra <peterz@infradead.org> wrote:
>
> > SYM_CODE_END(error_entry)
>
> So the new Changelog doesn't seem to help me much. But looking at both
> fixup_bad_iret() and sync_regs(), they both have:
>
> __this_cpu_read(cpu_tss_rw.x86_tss.sp0) - 1


For a long time in the old days (before KPTI), tss.sp0 really was the
kernel thread stack. But nowadays it is no longer the kernel thread
stack; it is the trampoline stack (or entry stack) in the cpu entry area.

And a bad IRET can only happen when IRET is returning to userspace,
which is also done from the trampoline stack. So fixup_bad_iret() is
really just moving the lower, partial pt_regs up so that it is
concatenated with the user IRET frame (the head of pt_regs) on the
entry stack.

So fixup_bad_iret() will NOT have set up pt_regs on the thread stack,
and sync_regs() is still needed after fixup_bad_iret().
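
Roughly, the current flow for a bad IRET from userspace is the
following (only a sketch of the behaviour described above, not the real
code in arch/x86/kernel/traps.c; the real fixup_bad_iret() also carries
the error_entry() return address around, so its actual argument and
return types differ):

	struct pt_regs *bad_iret_path(struct pt_regs *bad_regs)
	{
		/*
		 * fixup_bad_iret(): build a complete pt_regs on the
		 * *entry* stack by moving the partial pt_regs up so it
		 * joins the user IRET frame the CPU pushed there, and
		 * return a pointer to it.  Everything still lives in
		 * the cpu entry area.
		 */
		struct pt_regs *entry_regs = fixup_bad_iret(bad_regs);

		/*
		 * sync_regs(): copy that pt_regs from the entry stack
		 * to the real thread stack.  This is why it is still
		 * needed after fixup_bad_iret().
		 */
		return sync_regs(entry_regs);
	}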

An earlier patch
https://lore.kernel.org/lkml/20200817062355.2884-3-jiangshanlai@gmail.com/
tried changing fixup_bad_iret() to copy the pt_regs directly to the
kernel thread stack.
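
The core idea there was roughly the following (again just a sketch of
the idea, not the patch itself; task_top_of_stack() is the existing
helper for the top of the thread stack, and the copy offsets follow the
current copy logic in fixup_bad_iret()):

	struct pt_regs *fixup_bad_iret_to_thread_stack(struct pt_regs *bad_regs)
	{
		struct pt_regs tmp;
		/* Destination: top of the thread stack, not the entry stack. */
		struct pt_regs *new_regs =
			(struct pt_regs *)task_top_of_stack(current) - 1;

		/* Join the partial pt_regs with the user IRET frame, as today... */
		__memcpy(&tmp.ip, (void *)bad_regs->sp, 5 * 8);
		__memcpy(&tmp, bad_regs, offsetof(struct pt_regs, ip));

		/*
		 * ...but place the result on the thread stack, so a later
		 * sync_regs() would not be needed on this path.
		 */
		__memcpy(new_regs, &tmp, sizeof(tmp));

		return new_regs;
	}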

And the V1 of the patchset that converts this ASM code to C also tried
it:
https://lore.kernel.org/lkml/20210831175025.27570-13-jiangshanlai@gmail.com/

But the current patchset focuses on the ASM code only, so I don't think
we need to change it here. It will be much simpler to change the
behavior of fixup_bad_iret() once error_entry() is converted to C.

>
> as hard-coded destination. Now, fixup_bad_iret() sets up a complete
> ptregs there and then returns a pointer to this stack.
>
> sync_regs otoh, does a straight up pt_regs sized copy from arg0 to this
> new stack.
>
> Therefore it appears to me that doing sync_regs() after fixup_bad_iret()
> is a complete NO-OP and only confuses things further.
>
> Would not something like the below clarify things?
>
> --- a/arch/x86/entry/entry_64.S
> +++ b/arch/x86/entry/entry_64.S
> @@ -1004,6 +1004,7 @@ SYM_CODE_START_LOCAL(error_entry)
> .Lerror_entry_from_usermode_after_swapgs:
> /* Put us onto the real thread stack. */
> call sync_regs
> +.Lerror_entry_from_usermode_after_sync_regs:
> RET
>
> /*
> @@ -1058,8 +1059,12 @@ SYM_CODE_START_LOCAL(error_entry)
> */
> leaq 8(%rsp), %rdi /* arg0 = pt_regs pointer */
> call fixup_bad_iret
> - mov %rax, %rdi
> - jmp .Lerror_entry_from_usermode_after_swapgs
> + /*
> + * fixup_bad_iret() will have setup pt_regs on the thread stack, and
> + * returns a pointer to that stack exactly like sync_regs() would've
> + * done. As such, calling sync_regs again makes no sense.
> + */
> + jmp .Lerror_entry_from_usermode_after_sync_regs
> SYM_CODE_END(error_entry)
>
> SYM_CODE_START_LOCAL(error_return)
