From: Balbir Singh <sblbir@amazon.com>
Subject: Re: [PATCH v2 3/4] arch/x86: Optionally flush L1D on context switch

On Wed, 2020-04-08 at 01:52 +0200, Thomas Gleixner wrote:
>
> Balbir,
>
> Balbir Singh <sblbir@amazon.com> writes:
> > diff --git a/arch/x86/include/asm/tlbflush.h b/arch/x86/include/asm/tlbflush.h
> > index 6f66d841262d..69e6ea20679c 100644
> > --- a/arch/x86/include/asm/tlbflush.h
> > +++ b/arch/x86/include/asm/tlbflush.h
> > @@ -172,7 +172,7 @@ struct tlb_state {
> > 	/* Last user mm for optimizing IBPB */
> > 	union {
> > 		struct mm_struct *last_user_mm;
> > -		unsigned long last_user_mm_ibpb;
> > +		unsigned long last_user_mm_spec;
> > -static inline unsigned long mm_mangle_tif_spec_ib(struct task_struct *next)
> > +static inline unsigned long mm_mangle_tif_spec_bits(struct task_struct *next)
> > -static void cond_ibpb(struct task_struct *next)
> > +static void cond_mitigation(struct task_struct *next)
> > {
> > +	unsigned long prev_mm, next_mm;
> > +
> > 	if (!next || !next->mm)
> > 		return;
>
> can you please split out these preparatory changes into a separate
> patch?
>

Will do and repost a new iteration.
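
For reference, a minimal user-space sketch of the direction the rename is
taking: the low bits of the cached last-user mm pointer record which per-task
speculation mitigations were pending when that mm last ran, so that
cond_mitigation() can cover more than IBPB. The types, names and bit values
below are made up for illustration only and are not the code that will be
posted:

/*
 * Standalone sketch of the bit-mangling scheme implied by the quoted
 * diff: pack per-task mitigation flags into the low bits of the cached
 * last-user mm pointer.  Names and bit values here are hypothetical.
 */
#include <stdio.h>
#include <stdbool.h>

#define LAST_USER_MM_IBPB	0x1UL	/* hypothetical: IBPB was requested */
#define LAST_USER_MM_L1D_FLUSH	0x2UL	/* hypothetical: L1D flush was requested */
#define LAST_USER_MM_SPEC_MASK	(LAST_USER_MM_IBPB | LAST_USER_MM_L1D_FLUSH)

struct mm_struct { long dummy; };	/* stand-in, aligned so low bits are free */

struct task_struct {			/* stand-in for the relevant thread flags */
	struct mm_struct *mm;
	bool spec_ib;
	bool spec_l1d_flush;
};

/* Pack the task's mitigation bits into its (aligned) mm pointer. */
static unsigned long mm_mangle_spec_bits(const struct task_struct *next)
{
	unsigned long bits = 0;

	if (next->spec_ib)
		bits |= LAST_USER_MM_IBPB;
	if (next->spec_l1d_flush)
		bits |= LAST_USER_MM_L1D_FLUSH;

	return (unsigned long)next->mm | bits;
}

int main(void)
{
	struct mm_struct mm_a, mm_b;
	struct task_struct prev = { .mm = &mm_a, .spec_ib = true };
	struct task_struct next = { .mm = &mm_b, .spec_l1d_flush = true };

	unsigned long prev_mm = mm_mangle_spec_bits(&prev);
	unsigned long next_mm = mm_mangle_spec_bits(&next);

	/* A cond_mitigation()-style check compares the unmangled pointers
	 * and the stashed bits to decide which flushes are needed. */
	printf("different mm: %d, next wants L1D flush: %d\n",
	       (prev_mm & ~LAST_USER_MM_SPEC_MASK) !=
	       (next_mm & ~LAST_USER_MM_SPEC_MASK),
	       !!(next_mm & LAST_USER_MM_L1D_FLUSH));
	return 0;
}

The real helper keys these bits off the task's TIF_SPEC_* thread flags rather
than plain booleans; the sketch only shows the pointer-plus-low-bits encoding.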

Balbir Singh
