From: Andy Lutomirski <>
Date: Wed, 31 Jan 2018 13:03:24 -0800
Subject: Re: [PATCH v2] x86: Align TLB invalidation info
On Wed, Jan 31, 2018 at 1:00 PM, Nadav Amit <namit@vmware.com> wrote:
> The TLB invalidation info is allocated on the stack, which might cause
> it to be unaligned. Since this information may be transferred to
> different cores for TLB shootdown, this might result in an additional
> cache-line bouncing between the cores.
>
> We do not use __cacheline_aligned() since it also defines the section,
> which is inappropriate for stack variables.
>
> Signed-off-by: Nadav Amit <namit@vmware.com>
>
> Cc: Andy Lutomirski <luto@kernel.org>
> Cc: Dave Hansen <dave.hansen@linux.intel.com>
Acked-by: Andy Lutomirski <luto@kernel.org>
This is basically free and adds no mess, so I think it's probably okay even in the absence of evidence that it's a huge win.
But Dave is right: the commit message needs updating. The change reduces the number of cache lines that become shared and then get exclusively owned by the originator from two to one. That isn't really "bouncing".
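
To make the two-to-one arithmetic concrete, here's a quick userspace sketch. The 64-byte line size and the 32-byte struct size are assumptions for illustration, not measurements from this tree: a stack variable with only its natural 8-byte alignment can start at any 8-byte offset within a line, and at the larger offsets a 32-byte object spills into a second line, while pinning the alignment to the line size makes it always exactly one.

#include <stdio.h>

#define CACHE_LINE 64	/* assumed line size; stands in for SMP_CACHE_BYTES */
#define INFO_SIZE  32	/* assumed sizeof(struct flush_tlb_info); illustrative only */

/* How many cache lines an object of `size` bytes touches when it starts
 * `off` bytes past a line boundary. */
static unsigned lines_spanned(unsigned off, unsigned size)
{
	return (off + size - 1) / CACHE_LINE + 1;
}

int main(void)
{
	/* With only natural (8-byte) alignment, a stack struct can land at
	 * any 8-byte offset within a line; the later offsets straddle two. */
	for (unsigned off = 0; off < CACHE_LINE; off += 8)
		printf("offset %2u: %u line(s)\n",
		       off, lines_spanned(off, INFO_SIZE));

	/* __aligned(SMP_CACHE_BYTES) forces offset 0, i.e. always one line. */
	printf("aligned  : %u line(s)\n", lines_spanned(0, INFO_SIZE));
	return 0;
}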
>
> --
> v1 -> v2: use __aligned instead of all the mess (Andy)
> ---
>  arch/x86/mm/tlb.c | 2 +-
>  1 file changed, 1 insertion(+), 1 deletion(-)
>
> diff --git a/arch/x86/mm/tlb.c b/arch/x86/mm/tlb.c
> index 5bfe61a5e8e3..9690112e3a82 100644
> --- a/arch/x86/mm/tlb.c
> +++ b/arch/x86/mm/tlb.c
> @@ -576,7 +576,7 @@ void flush_tlb_mm_range(struct mm_struct *mm, unsigned long start,
>  {
>  	int cpu;
>
> -	struct flush_tlb_info info = {
> +	struct flush_tlb_info info __aligned(SMP_CACHE_BYTES) = {
>  		.mm = mm,
>  	};
>
> --
> 2.14.1
>
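
As an aside, for anyone wondering why the patch open-codes __aligned() rather than using __cacheline_aligned: the commit message says it, but roughly (this is a sketch of the macros' shape, patterned after include/linux/cache.h; details may differ across trees) the difference is that __cacheline_aligned also carries a section attribute, which the compiler rejects on local variables:

#define SMP_CACHE_BYTES 64	/* assumed value for this sketch */

#define __aligned(x)	__attribute__((__aligned__(x)))

/* __cacheline_aligned also places the object in a dedicated section: */
#define __cacheline_aligned \
	__attribute__((__aligned__(SMP_CACHE_BYTES), \
		       __section__(".data..cacheline_aligned")))

void example(void)
{
	/* Alignment alone is fine for an automatic variable: */
	struct { long a, b; } ok __aligned(SMP_CACHE_BYTES);

	/* But a section attribute only makes sense for static storage, so
	 * this would fail to build ("section attribute cannot be specified
	 * for local variables"):
	 *
	 *	struct { long a, b; } bad __cacheline_aligned;
	 */
	(void)ok;
}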