From: Catalin Marinas <>
Date: Wed, 13 Nov 2013 16:16:35 +0000
Subject: Re: [PATCH 2/2] s390/mm,tlb: race of lazy TLB flush vs. recreation of TLB entries
On 13 November 2013 08:16, Martin Schwidefsky <schwidefsky@de.ibm.com> wrote:
> diff --git a/arch/s390/include/asm/mmu_context.h b/arch/s390/include/asm/mmu_context.h
> index 5d1f950..e91afeb 100644
> --- a/arch/s390/include/asm/mmu_context.h
> +++ b/arch/s390/include/asm/mmu_context.h
> @@ -48,13 +48,38 @@ static inline void update_mm(struct mm_struct *mm, struct task_struct *tsk)
>  static inline void switch_mm(struct mm_struct *prev, struct mm_struct *next,
>  			     struct task_struct *tsk)
>  {
> -	cpumask_set_cpu(smp_processor_id(), mm_cpumask(next));
> -	update_mm(next, tsk);
> +	int cpu = smp_processor_id();
> +
> +	if (prev == next)
> +		return;
> +	if (atomic_inc_return(&next->context.attach_count) >> 16) {
> +		/* Delay update_mm until all TLB flushes are done. */
> +		set_tsk_thread_flag(tsk, TIF_TLB_WAIT);
> +	} else {
> +		cpumask_set_cpu(cpu, mm_cpumask(next));
> +		update_mm(next, tsk);
> +		if (next->context.flush_mm)
> +			/* Flush pending TLBs */
> +			__tlb_flush_mm(next);
> +	}
>  	atomic_dec(&prev->context.attach_count);
>  	WARN_ON(atomic_read(&prev->context.attach_count) < 0);
> -	atomic_inc(&next->context.attach_count);
> -	/* Check for TLBs not flushed yet */
> -	__tlb_flush_mm_lazy(next);
> +}
> +
> +#define finish_switch_mm finish_switch_mm
> +static inline void finish_switch_mm(struct mm_struct *mm,
> +				     struct task_struct *tsk)
> +{
> +	if (!test_and_clear_tsk_thread_flag(tsk, TIF_TLB_WAIT))
> +		return;
> +
> +	while (atomic_read(&mm->context.attach_count) >> 16)
> +		cpu_relax();
> +
> +	cpumask_set_cpu(smp_processor_id(), mm_cpumask(mm));
> +	update_mm(mm, tsk);
> +	if (mm->context.flush_mm)
> +		__tlb_flush_mm(mm);
>  }
Some care is needed here with preemption (we had this on arm and I think we need a fix on arm64 as well). Basically, you set TIF_TLB_WAIT on a thread but get preempted just before finish_switch_mm(). The new thread has the same mm as the preempted one, so switch_mm() exits early without setting the flag on it. finish_switch_mm() then does nothing, even though you have switched to the new thread and the deferred update for that mm never happens. The fix is to make the flag per-mm rather than per-thread (see commit bdae73cd374e).
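
[For illustration only: a minimal, self-contained user-space sketch of the per-mm flag approach described above. All names (mm_ctx, update_pending, etc.) are illustrative stand-ins, not the actual s390 or arm code; the attach_count convention (upper bits non-zero while a flush is in progress) just mirrors the patch quoted above.]

/*
 * Sketch of deferring the mm update with a per-mm flag instead of a
 * per-thread one, so any thread of the mm can complete the update.
 */
#include <stdatomic.h>
#include <stdbool.h>
#include <stdio.h>

struct mm_ctx {
        atomic_int attach_count;        /* upper 16 bits non-zero while a TLB flush is in progress */
        atomic_bool update_pending;     /* per-mm replacement for the per-thread TIF_TLB_WAIT */
};

struct task {
        struct mm_ctx *mm;
};

static void switch_mm(struct task *prev, struct task *next)
{
        if (prev->mm == next->mm)
                return;                 /* early exit: same address space, nothing recorded */

        if (atomic_load(&next->mm->attach_count) >> 16)
                /* flush in progress: remember the pending update in the mm itself */
                atomic_store(&next->mm->update_pending, true);
        else
                printf("immediate update_mm() for mm %p\n", (void *)next->mm);
}

static void finish_switch_mm(struct task *tsk)
{
        /* any thread of the mm can complete the deferred update */
        if (!atomic_exchange(&tsk->mm->update_pending, false))
                return;

        while (atomic_load(&tsk->mm->attach_count) >> 16)
                ;                       /* wait for the flusher (cpu_relax() in the kernel) */

        printf("deferred update_mm() for mm %p\n", (void *)tsk->mm);
}

int main(void)
{
        struct mm_ctx mm, other;
        struct task a = { .mm = &mm }, b = { .mm = &mm }, c = { .mm = &other };

        atomic_init(&mm.attach_count, 1 << 16);   /* a flush is in progress */
        atomic_init(&mm.update_pending, false);
        atomic_init(&other.attach_count, 0);
        atomic_init(&other.update_pending, false);

        switch_mm(&c, &a);              /* update deferred, recorded in the mm */
        switch_mm(&a, &b);              /* a preempted by b (same mm): early exit... */
        atomic_store(&mm.attach_count, 1);        /* flusher done */
        finish_switch_mm(&b);           /* ...but b still sees the per-mm flag */
        return 0;
}

Compiled with cc -std=c11, the deferred update still runs once the flush has drained, even though the thread that originally deferred it (a) never reached finish_switch_mm(); with a per-thread flag the b path would have skipped it.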
--
Catalin