Date: Fri, 15 Nov 2013 10:13:26 +0100
From: Martin Schwidefsky <>
Subject: Re: [PATCH 2/2] s390/mm,tlb: race of lazy TLB flush vs. recreation of TLB entries
On Thu, 14 Nov 2013 09:10:07 +0100 Martin Schwidefsky <schwidefsky@de.ibm.com> wrote:
> On Wed, 13 Nov 2013 16:16:35 +0000
> Catalin Marinas <catalin.marinas@arm.com> wrote:
>
> > On 13 November 2013 08:16, Martin Schwidefsky <schwidefsky@de.ibm.com> wrote:
> > > diff --git a/arch/s390/include/asm/mmu_context.h b/arch/s390/include/asm/mmu_context.h
> > > index 5d1f950..e91afeb 100644
> > > --- a/arch/s390/include/asm/mmu_context.h
> > > +++ b/arch/s390/include/asm/mmu_context.h
> > > @@ -48,13 +48,38 @@ static inline void update_mm(struct mm_struct *mm, struct task_struct *tsk)
> > >  static inline void switch_mm(struct mm_struct *prev, struct mm_struct *next,
> > >  			     struct task_struct *tsk)
> > >  {
> > > -	cpumask_set_cpu(smp_processor_id(), mm_cpumask(next));
> > > -	update_mm(next, tsk);
> > > +	int cpu = smp_processor_id();
> > > +
> > > +	if (prev == next)
> > > +		return;
> > > +	if (atomic_inc_return(&next->context.attach_count) >> 16) {
> > > +		/* Delay update_mm until all TLB flushes are done. */
> > > +		set_tsk_thread_flag(tsk, TIF_TLB_WAIT);
> > > +	} else {
> > > +		cpumask_set_cpu(cpu, mm_cpumask(next));
> > > +		update_mm(next, tsk);
> > > +		if (next->context.flush_mm)
> > > +			/* Flush pending TLBs */
> > > +			__tlb_flush_mm(next);
> > > +	}
> > >  	atomic_dec(&prev->context.attach_count);
> > >  	WARN_ON(atomic_read(&prev->context.attach_count) < 0);
> > > -	atomic_inc(&next->context.attach_count);
> > > -	/* Check for TLBs not flushed yet */
> > > -	__tlb_flush_mm_lazy(next);
> > > +}
> > > +
> > > +#define finish_switch_mm finish_switch_mm
> > > +static inline void finish_switch_mm(struct mm_struct *mm,
> > > +				    struct task_struct *tsk)
> > > +{
> > > +	if (!test_and_clear_tsk_thread_flag(tsk, TIF_TLB_WAIT))
> > > +		return;
> > > +
> > > +	while (atomic_read(&mm->context.attach_count) >> 16)
> > > +		cpu_relax();
> > > +
> > > +	cpumask_set_cpu(smp_processor_id(), mm_cpumask(mm));
> > > +	update_mm(mm, tsk);
> > > +	if (mm->context.flush_mm)
> > > +		__tlb_flush_mm(mm);
> > >  }
> >
> > Some care is needed here with preemption (we had this on arm and I
> > think we need a fix on arm64 as well). Basically you set TIF_TLB_WAIT
> > on a thread but you get preempted just before finish_switch_mm(). The
> > new thread has the same mm as the preempted one and switch_mm() exits
> > early without setting another flag. So finish_switch_mm() wouldn't do
> > anything but you still switched to the new mm. The fix is to make the
> > flag per mm rather than per thread (see commit bdae73cd374e).
>
> Interesting. For s390 I need to make sure that each task attaching an
> mm waits for the completion of concurrent TLB flush operations. If the
> scheduler does not switch the mm I don't care, the mm is still attached.
> For the s390 issue a TIF bit seems appropriate. But I have to add a
> preempt_enable/preempt_disable pair to finish_switch_mm, otherwise the
> task can get hit by preemption after the while loop.
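For context, the scheme in the patch above relies on mm->context.attach_count
being split into two 16-bit halves: the low half counts tasks attached to the
mm, the high half counts TLB flush operations in flight. A minimal sketch of
the flusher side, assuming the convention introduced by patch 1/2 of the
series (the function name and the 0x10000 constant are illustrative, not the
literal s390 code):

	/* Sketch only: raise the flusher half of attach_count, do the
	 * hardware flush, then drop it again.  A task that attaches the
	 * mm in between sees a non-zero high half in switch_mm() and
	 * parks itself in finish_switch_mm() until the flush is done. */
	static inline void tlb_flush_mm_sketch(struct mm_struct *mm)
	{
		/* high 16 bits = number of flushes in progress */
		atomic_add(0x10000, &mm->context.attach_count);
		__tlb_flush_mm(mm);	/* the actual TLB flush */
		atomic_sub(0x10000, &mm->context.attach_count);
	}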
I almost committed a patch to add the preempt_enable/preempt_disable pair
when I realized that it is not needed after all. If a preemptive schedule
hits in finish_switch_mm, a full switch_mm/finish_switch_mm pair will be
done when the task is picked up again by a CPU. The worst that can happen
is that update_mm is done a second time, which is ok. All good :-)
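Spelled out, the reason this is safe is that finish_switch_mm() runs from
the scheduler tail on every context switch, so being preempted inside it
only means the whole pair runs once more. A rough sketch of the call site,
assuming the hook is invoked from the end of the scheduler's task-switch
path (illustrative, not the real kernel/sched/core.c):

	static void finish_task_switch_sketch(struct task_struct *next)
	{
		/* ... usual finish_task_switch() cleanup ... */

		/*
		 * Complete any delayed mm attach.  If we are preempted
		 * while still inside finish_switch_mm(), the next time
		 * this task is scheduled the full switch_mm() /
		 * finish_switch_mm() pair runs again; at worst
		 * update_mm() executes twice, which only reloads the
		 * address-space parameters and is harmless.
		 */
		finish_switch_mm(next->mm, next);
	}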
-- 
blue skies,
   Martin.
"Reality continues to ruin my life." - Calvin.