Subject: Re: [tip:x86/pti] x86/retpoline: Fill RSB on context switch for affected CPUs
On Mon, 2018-01-15 at 14:35 +0000, David Laight wrote:
> From: David Woodhouse
> >
> > Sent: 14 January 2018 17:04
> > x86/retpoline: Fill RSB on context switch for affected CPUs
> >
> > On context switch from a shallow call stack to a deeper one, as the CPU
> > does 'ret' up the deeper side it may encounter RSB entries (predictions for
> > where the 'ret' goes to) which were populated in userspace.
> >
> > This is problematic if neither SMEP nor KPTI (the latter of which marks
> > userspace pages as NX for the kernel) are active, as malicious code in
> > userspace may then be executed speculatively.
> ...
>
> Do we have a guarantee that all CPUs actually detect the related RSB underflow?
>
> It wouldn't surprise me if at least some CPUs just let it wrap.
>
> This would mean that userspace would see return predictions based
> on the values the kernel 'stuffed' into the RSB to fill it.
>
> Potentially this leaks a kernel address to userspace.

Yeah, KASLR is dead unless we do a full IBPB before *every* VMLAUNCH or
return to userspace anyway, isn't it? With KPTI we could put the RSB-
stuffer into the syscall trampoline page perhaps...
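
The "full IBPB" there is just a write of the IBPB bit to the
IA32_PRED_CMD MSR. A minimal sketch, with an invented helper name
(ring 0 only, and whatever wrapper the tree grows for this may well
look different):

	#include <stdint.h>

	#define MSR_IA32_PRED_CMD	0x00000049u	/* IA32_PRED_CMD */
	#define PRED_CMD_IBPB		(1u << 0)	/* flush indirect branch predictions */

	/* Hypothetical helper; must run at ring 0. */
	static inline void ibpb(void)
	{
		uint32_t lo = PRED_CMD_IBPB, hi = 0;

		/* wrmsr takes the MSR index in %ecx and the value in %edx:%eax */
		asm volatile("wrmsr"
			     :: "c" (MSR_IA32_PRED_CMD), "a" (lo), "d" (hi)
			     : "memory");
	}

Paying that on every return to userspace or VMLAUNCH is exactly the
cost being weighed against just accepting the KASLR exposure.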

For this to be a concern for userspace, I think it does have to be true
that only the lower address bits of each RSB entry are used, which adds
a little complexity but probably isn't insurmountable?
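
For reference, the RSB-stuffer itself is basically a counted loop of
calls into tiny speculation traps. A rough sketch, assuming 32 entries,
with the labels and scratch register chosen purely for illustration
(not the exact sequence in the pending patch, which is an assembler
macro):

	static inline void fill_rsb(void)
	{
		unsigned long loops;

		asm volatile(
			"	mov	$16, %[i]	\n"	/* 32 entries, two calls per iteration */
			"1:	call	2f		\n"	/* RSB entry now points at label 3 (kernel text) */
			"3:	pause			\n"	/* speculation trap: a speculative */
			"	lfence			\n"	/* 'ret' landing here just spins   */
			"	jmp	3b		\n"
			"2:	call	4f		\n"	/* RSB entry pointing at label 5 */
			"5:	pause			\n"
			"	lfence			\n"
			"	jmp	5b		\n"
			"4:	dec	%[i]		\n"
			"	jnz	1b		\n"
			"	add	$256, %%rsp	\n"	/* pop the 32 real return addresses (32 * 8 bytes) */
			: [i] "=&r" (loops)
			:: "memory");
	}

Every entry this puts into the RSB is the address of one of those
traps, i.e. a kernel text address. So on a part which recycles stale
entries instead of detecting underflow, that address (or its lower
bits) is what userspace would see predicted, hence the KASLR worry.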
