Subject: Re: [PATCHv3 4/8] x86/mm: Handle LAM on context switch
On Fri, Jun 10, 2022 at 05:35:23PM +0300, Kirill A. Shutemov wrote:

> diff --git a/arch/x86/include/asm/tlbflush.h b/arch/x86/include/asm/tlbflush.h
> index 4af5579c7ef7..5b93dad93ff4 100644
> --- a/arch/x86/include/asm/tlbflush.h
> +++ b/arch/x86/include/asm/tlbflush.h
> @@ -86,6 +86,9 @@ struct tlb_state {
>  		unsigned long last_user_mm_spec;
>  	};
>
> +#ifdef CONFIG_X86_64
> +	u64 lam_cr3_mask;
> +#endif
>  	u16 loaded_mm_asid;
>  	u16 next_asid;
>

Urgh.. so there's a comment there that states:

/*
* 6 because 6 should be plenty and struct tlb_state will fit in two cache
* lines.
*/
#define TLB_NR_DYN_ASIDS 6
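
(If one wanted that assumption enforced rather than merely stated, a
build-time check along these lines would do; purely a sketch, not
something the quoted code has:

	#include <linux/build_bug.h>	/* static_assert() */
	#include <linux/cache.h>	/* L1_CACHE_BYTES */

	/*
	 * Hypothetical guard: fail the build if struct tlb_state ever
	 * outgrows the two cachelines the comment above promises.
	 */
	static_assert(sizeof(struct tlb_state) <= 2 * L1_CACHE_BYTES,
		      "struct tlb_state no longer fits in two cachelines");

placed anywhere struct tlb_state is complete.)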

And then look at tlb_state:

struct tlb_state {
	struct mm_struct *         loaded_mm;            /*     0     8 */
	union {
		struct mm_struct * last_user_mm;         /*     8     8 */
		long unsigned int  last_user_mm_spec;    /*     8     8 */
	};                                               /*     8     8 */
	u16                        loaded_mm_asid;       /*    16     2 */
	u16                        next_asid;            /*    18     2 */
	bool                       invalidate_other;     /*    20     1 */

	/* XXX 1 byte hole, try to pack */

	short unsigned int         user_pcid_flush_mask; /*    22     2 */
	long unsigned int          cr4;                  /*    24     8 */
	struct tlb_context         ctxs[6];              /*    32    96 */

	/* size: 128, cachelines: 2, members: 8 */
	/* sum members: 127, holes: 1, sum holes: 1 */
};
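
(FWIW, that layout dump is pahole output; given a vmlinux built with
debug info, something like

	pahole -C tlb_state vmlinux

reproduces it.)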

If you add that u64 as you do, you'll wreck all that.

Either use that one spare byte, or find room elsewhere, I suppose.
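
A sketch of the first option, with hypothetical names (the u8 "lam"
field and the tlbstate_lam_cr3_mask() helper are not in the patch),
assuming the CR3 LAM enable bits (bits 61/62 per the LAM spec) can be
kept as a shifted-down u8 and rebuilt on use:

	struct tlb_state {
		struct mm_struct *loaded_mm;
		union {
			struct mm_struct *last_user_mm;
			unsigned long last_user_mm_spec;
		};
		u16 loaded_mm_asid;
		u16 next_asid;
		bool invalidate_other;
	#ifdef CONFIG_X86_64
		u8 lam;			/* CR3 LAM bits >> 61; fills the hole */
	#endif
		unsigned short user_pcid_flush_mask;
		unsigned long cr4;
		struct tlb_context ctxs[TLB_NR_DYN_ASIDS];
	};

	/* Rebuild the full mask only when a CR3 value is composed. */
	static inline u64 tlbstate_lam_cr3_mask(void)
	{
		return (u64)this_cpu_read(cpu_tlbstate.lam) << 61;
	}

That keeps the struct at 128 bytes / two cachelines, and the mask is
one shift away at CR3 build time.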
