Date: Tue, 8 Nov 2022 19:54:35 -0800
Subject: Re: [PATCHv11.1 04/16] x86/mm: Handle LAM on context switch
From: Andy Lutomirski <>
On 11/7/22 13:35, Kirill A. Shutemov wrote:
> Linear Address Masking mode for userspace pointers encoded in CR3 bits.
> The mode is selected per-process and stored in mm_context_t.
>
> switch_mm_irqs_off() now respects selected LAM mode and constructs CR3
> accordingly.
>
> The active LAM mode gets recorded in the tlb_state.
>
> +static inline unsigned long mm_lam_cr3_mask(struct mm_struct *mm)
> +{
> +        return mm->context.lam_cr3_mask;
READ_ONCE -- otherwise this has a data race and might generate sanitizer complaints.
> +}
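
Concretely, something like this (a sketch of the fix I'm suggesting; the
store side would need a matching WRITE_ONCE() wherever lam_cr3_mask gets
set):

        static inline unsigned long mm_lam_cr3_mask(struct mm_struct *mm)
        {
                /*
                 * A remote thread can enable LAM and update this field at
                 * any time, so read it exactly once. Pairs with WRITE_ONCE()
                 * on the store side and keeps KCSAN quiet about the race.
                 */
                return READ_ONCE(mm->context.lam_cr3_mask);
        }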
> @@ -491,6 +496,8 @@ void switch_mm_irqs_off(struct mm_struct *prev, struct mm_struct *next,
>  {
>          struct mm_struct *real_prev = this_cpu_read(cpu_tlbstate.loaded_mm);
>          u16 prev_asid = this_cpu_read(cpu_tlbstate.loaded_mm_asid);
> +        unsigned long prev_lam = tlbstate_lam_cr3_mask();
> +        unsigned long new_lam = mm_lam_cr3_mask(next);
So I'm reading this again after drinking a cup of coffee. new_lam is next's LAM mask according to mm_struct (and thus can change asynchronously due to a remote CPU). prev_lam is based on tlbstate and can't change asynchronously, at least not with IRQs off.
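
To spell out the asymmetry (a sketch, assuming the READ_ONCE() change
above):

        /*
         * Owned by next's mm: a remote CPU may rewrite it at any moment,
         * so snapshot it exactly once and only use the snapshot.
         */
        unsigned long new_lam = mm_lam_cr3_mask(next);

        /* Owned by this CPU's tlbstate: stable while IRQs are off. */
        unsigned long prev_lam = tlbstate_lam_cr3_mask();

The safety argument then rests on that single snapshot of new_lam being
what lands both in CR3 and back in tlbstate; re-reading the mm's mask
later would reopen the race.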
>          bool was_lazy = this_cpu_read(cpu_tlbstate_shared.is_lazy);
>          unsigned cpu = smp_processor_id();
>          u64 next_tlb_gen;
> @@ -520,7 +527,7 @@ void switch_mm_irqs_off(struct mm_struct *prev, struct mm_struct *next,
>           * isn't free.
>           */
>  #ifdef CONFIG_DEBUG_VM
> -        if (WARN_ON_ONCE(__read_cr3() != build_cr3(real_prev->pgd, prev_asid))) {
> +        if (WARN_ON_ONCE(__read_cr3() != build_cr3(real_prev->pgd, prev_asid, prev_lam))) {
So is the only purpose of tlbstate_lam_cr3_mask() to enable this warning to work?
>                  /*
>                   * If we were to BUG here, we'd be very likely to kill
>                   * the system so hard that we don't see the call trace.
> @@ -552,9 +559,15 @@ void switch_mm_irqs_off(struct mm_struct *prev, struct mm_struct *next,
>           * instruction.
>           */
>          if (real_prev == next) {
> +                /* Not actually switching mm's */
>                  VM_WARN_ON(this_cpu_read(cpu_tlbstate.ctxs[prev_asid].ctx_id) !=
>                             next->context.ctx_id);
>
> +                /*
> +                 * If this races with another thread that enables lam, 'new_lam'
> +                 * might not match 'prev_lam'.
> +                 */
> +
Indeed.
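
If the mismatch ever needs to be repaired here rather than merely
tolerated, I'd expect something like this in the real_prev == next path
(hypothetical, untested):

        /*
         * Hypothetical: fold a remotely-enabled LAM mode into CR3 even
         * though the mm itself isn't changing.
         */
        if (prev_lam != new_lam)
                write_cr3(build_cr3(next->pgd, prev_asid, new_lam));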
>                  /*
>                   * Even in lazy TLB mode, the CPU should stay set in the
>                   * mm_cpumask. The TLB shootdown code can figure out from
> @@ -622,15 +635,16 @@ void switch_mm_irqs_off(struct mm_struct *prev, struct mm_struct *next,
>          barrier();
>  }
> @@ -691,6 +705,10 @@ void initialize_tlbstate_and_flush(void)
>          /* Assert that CR3 already references the right mm. */
>          WARN_ON((cr3 & CR3_ADDR_MASK) != __pa(mm->pgd));
>
> +        /* LAM expected to be disabled in CR3 and init_mm */
> +        WARN_ON(cr3 & (X86_CR3_LAM_U48 | X86_CR3_LAM_U57));
> +        WARN_ON(mm_lam_cr3_mask(&init_mm));
> +
I think the callers all have init_mm selected, but the rest of this function is not really written with this assumption. (But it does force ASID 0, which is at least a bizarre thing to do for non-init-mm.)
What's the purpose of this warning? I'm okay with keeping it, but maybe also add a warning that fires if mm != &init_mm.
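
I.e., next to the existing asserts, something like (sketch):

        /* The rest of this function silently assumes init_mm is loaded. */
        WARN_ON(mm != &init_mm);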