Subject: Re: [PATCH] x86/retpoline: Also fill return buffer after idle

On Mon, 2018-01-08 at 15:51 -0800, Andi Kleen wrote:
> From: Andi Kleen <ak@linux.intel.com>
>
> This is an extension of the earlier patch to fill the return buffer
> on context switch. It uses the assembler macros added earlier.
>
> When we go into deeper idle states, the return buffer could be cleared
> in MWAIT, but another thread that wakes up earlier might poison the
> indirect branch predictor. Then, when the return buffer underflows,
> there might be an uncontrolled indirect branch.
>
> To guard against this always fill the return buffer when exiting idle.
>
> Needed on Skylake and some Broadwells.
>
> Signed-off-by: Andi Kleen <ak@linux.intel.com>
> ---
>  arch/x86/entry/entry_32.S    |  8 ++++++++
>  arch/x86/entry/entry_64.S    |  8 ++++++++
>  arch/x86/include/asm/mwait.h | 11 ++++++++++-
>  3 files changed, 26 insertions(+), 1 deletion(-)
>
> diff --git a/arch/x86/entry/entry_32.S b/arch/x86/entry/entry_32.S
> index 7dee84a3cf83..2687cce8a02e 100644
> --- a/arch/x86/entry/entry_32.S
> +++ b/arch/x86/entry/entry_32.S
> @@ -1092,3 +1092,11 @@ ENTRY(rewind_stack_do_exit)
>   call do_exit
>  1: jmp 1b
>  END(rewind_stack_do_exit)
> +
> +ENTRY(fill_return_buffer)
> +#ifdef CONFIG_RETPOLINE
> + ALTERNATIVE "ret", "", X86_FEATURE_RETPOLINE
> + FILL_RETURN_BUFFER
> +#endif
> + ret
> +END(fill_return_buffer)
> diff --git a/arch/x86/entry/entry_64.S b/arch/x86/entry/entry_64.S
> index a33033e2bfe0..92fbec1b0eb5 100644
> --- a/arch/x86/entry/entry_64.S
> +++ b/arch/x86/entry/entry_64.S
> @@ -1831,3 +1831,11 @@ ENTRY(rewind_stack_do_exit)
>  
>   call do_exit
>  END(rewind_stack_do_exit)
> +
> +ENTRY(fill_return_buffer)
> +#ifdef CONFIG_RETPOLINE
> + ALTERNATIVE "ret", "", X86_FEATURE_RETPOLINE
> + FILL_RETURN_BUFFER
> +#endif
> + ret
> +END(fill_return_buffer)
> diff --git a/arch/x86/include/asm/mwait.h b/arch/x86/include/asm/mwait.h
> index 39a2fb29378a..1d9f9269b5e7 100644
> --- a/arch/x86/include/asm/mwait.h
> +++ b/arch/x86/include/asm/mwait.h
> @@ -87,6 +87,8 @@ static inline void __sti_mwait(unsigned long eax, unsigned long ecx)
>        :: "a" (eax), "c" (ecx));
>  }
>  
> +extern __visible void fill_return_buffer(void);
> +
>  /*
>   * This uses new MONITOR/MWAIT instructions on P4 processors with PNI,
>   * which can obviate IPI to trigger checking of need_resched.
> @@ -107,8 +109,15 @@ static inline void mwait_idle_with_hints(unsigned long eax, unsigned long ecx)
>   }
>  
>  	__monitor((void *)&current_thread_info()->flags, 0, 0);
> - if (!need_resched())
> + if (!need_resched()) {
>   __mwait(eax, ecx);
> + /*
> +  * idle could have cleared the return buffer,
> +  * so fill it to prevent uncontrolled
> +  * speculation.
> +  */
> + fill_return_buffer();
> + }
>   }
>   current_clr_polling();
>  }

Probably doesn't matter right there, but it's going to end up being used
elsewhere with IBRS/IBPB, and the compiler is going to think it needs to
save all the call-clobbered registers around that call. Do we want to
make it use inline asm instead?
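
For illustration, a rough and untested sketch of what an inline-asm
variant might look like. The loop count, labels and the x86-64-only
stack fixup are illustrative (loosely modeled on the FILL_RETURN_BUFFER
pattern, not the series' actual macro), and a real version would
presumably still use ALTERNATIVE so non-retpoline CPUs skip the loop at
runtime:

static __always_inline void fill_return_buffer(void)
{
#ifdef CONFIG_RETPOLINE
	unsigned long loops = 16;

	/*
	 * Stuff the RSB with benign entries: each iteration pushes a
	 * return address that points at a speculation trap, so a later
	 * RSB underflow cannot consume poisoned predictions. The
	 * kernel is built without a red zone, so the transient stack
	 * use inside the asm is safe.
	 */
	asm volatile ("1:\n\t"
		      "call 2f\n\t"	     /* push a benign return address */
		      "3: pause\n\t"	     /* speculation trap, reached */
		      "lfence\n\t"	     /* only ever speculatively */
		      "jmp 3b\n\t"
		      "2: dec %0\n\t"
		      "jnz 1b\n\t"
		      "add $(16 * 8), %%rsp" /* drop the 16 pushed addresses */
		      : "+r" (loops) : : "memory");
#endif
}

The point being that the compiler then only has to account for the one
register in the constraint list, rather than assuming every
call-clobbered register is live across an out-of-line call.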