Subject: Re: [GIT pull] x86/asm for 5.1
On Mon, Mar 11, 2019 at 12:23 PM Linus Torvalds
<torvalds@linux-foundation.org> wrote:
> On Mon, Mar 11, 2019, 12:14 Kees Cook <keescook@chromium.org> wrote:
>>
>> >
>> > this_cpu_write(cpu_tlbstate.cr4, __read_cr4() | cr4_pin);
>> >
>> ..
>>
>> The protection needs to be around the actual "mov %rdi, %cr4" that
>> native_write_cr4() exposes,
>
>
> You misunderstand.
>
> The above is just the "initialise cr4 shadow cache" case.
>
> If you do the above, I think we may have cr4 values initialised early enough that all CPUs can then just do the "check that the pinned bits were set" test unconditionally in the actual routine that changes cr4.

Oh! I see what you mean -- separate the OR and the test. Okay, I'll look
at that too.
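
Just to make sure I follow, the split would look roughly like this
(a rough sketch only; cr4_pin, the WARN handling, and the exact call
sites are placeholders, not the final patch):

/* Sketch of the split being discussed -- not the actual patch.
 * cr4_pin is assumed to be a mask of CR4 bits (e.g. SMEP/SMAP)
 * that must stay set once pinning is established.
 */
static unsigned long cr4_pin __ro_after_init;

/* Early init: OR the pinned bits into the per-cpu shadow once. */
static void cr4_init_shadow(void)
{
	this_cpu_write(cpu_tlbstate.cr4, __read_cr4() | cr4_pin);
}

/* Write path: every caller goes through the same unconditional
 * "are the pinned bits still set?" test before the mov to %cr4.
 */
static inline void native_write_cr4(unsigned long val)
{
	if (unlikely((val & cr4_pin) != cr4_pin)) {
		WARN_ONCE(1, "attempt to clear pinned CR4 bits\n");
		val |= cr4_pin;
	}
	asm volatile("mov %0,%%cr4" : : "r" (val) : "memory");
}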

--
Kees Cook
