Subject: Re: [PATCH 11/18] mm: fix TLB flush race between migration, and change_protection_range
On Tue, Dec 10, 2013 at 05:19:36PM +0000, Mel Gorman wrote:
> On Tue, Dec 10, 2013 at 09:25:39AM -0500, Rik van Riel wrote:
> > On 12/09/2013 02:09 AM, Mel Gorman wrote:
> >
> > After reading the locking thread that Paul McKenney started,
> > I wonder if I got the barriers wrong in these functions...
>
> If Documentation/memory-barriers.txt could not be used to frighten small
> children before, it certainly can now.

Depends on the children. Some might find it quite attractive, sort of
like running while carrying a knife.

> > > +#if defined(CONFIG_NUMA_BALANCING) || defined(CONFIG_COMPACTION)
> > > +/*
> > > + * Memory barriers to keep this state in sync are graciously provided by
> > > + * the page table locks, outside of which no page table modifications happen.
> > > + * The barriers below prevent the compiler from re-ordering the instructions
> > > + * around the memory barriers that are already present in the code.
> > > + */
> > > +static inline bool tlb_flush_pending(struct mm_struct *mm)
> > > +{
> > > +	barrier();
> >
> > Should this be smp_mb__after_unlock_lock(); ?
>
> I think this is still ok. Minimally, it's missing the unlock/lock pair that
> would cause smp_mb__after_unlock_lock() to be treated as a full barrier
> on architectures that care. The CPU executing this code has already seen
> the pmd_numa update if it's in the fault handler, so it just needs to be
> sure not to reorder the check with respect to the page copy.

You really do need a lock operation somewhere shortly before the
smp_mb__after_unlock_lock().
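To illustrate with a minimal sketch (lock choice and placement here are
illustrative, not code from this patch):

	spin_unlock(&ptl);		/* UNLOCK, by this or another CPU */
	...
	spin_lock(&ptl);		/* LOCK shortly before the barrier */
	smp_mb__after_unlock_lock();	/* upgrades the UNLOCK+LOCK pair to
					   a full memory barrier on
					   architectures that need it */

Without a lock operation shortly before it, there is nothing for it to
upgrade and it gives you no portable ordering guarantee.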

> > > +	return mm->tlb_flush_pending;
> > > +}
> > > +static inline void set_tlb_flush_pending(struct mm_struct *mm)
> > > +{
> > > +	mm->tlb_flush_pending = true;
> > > +	barrier();
> > > +}
>
> That now needs an smp_mb__before_spinlock() to guarantee that the store to
> mm->tlb_flush_pending does not leak into the section updating the page
> tables and get re-ordered. The result would pair with tlb_flush_pending()
> to guarantee that a pagetable update that starts in parallel will be
> visible in time to flush the TLB before the copy starts.

That would be required even if UNLOCK+LOCK continued being a full barrier.
A lock acquisition by itself never was guaranteed to be a full barrier.
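Concretely, the reordering in question looks roughly like this (a sketch
with illustrative names, not the actual change_protection code):

	mm->tlb_flush_pending = true;	/* plain store */
	spin_lock(&ptl);		/* acquire semantics only: the store
					   above is allowed to move past the
					   LOCK into the critical section */
	/* ... page table updates ... */
	spin_unlock(&ptl);

Placing smp_mb__before_spinlock() between the store and the spin_lock()
closes that hole: it guarantees the tlb_flush_pending store is ordered
before the page table stores inside the critical section.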

Thanx, Paul

> > > +/* Clearing is done after a TLB flush, which also provides a barrier. */
> > > +static inline void clear_tlb_flush_pending(struct mm_struct *mm)
> > > +{
> > > +	barrier();
> > > +	mm->tlb_flush_pending = false;
> > > +}
> >
>
> This should be ok. Stores updating the page tables complete before the ptl
> is unlocked, in addition to the TLB flush itself being a barrier that
> guarantees this update takes place afterwards.
>
> Peter/Paul?
>
> diff --git a/include/linux/mm_types.h b/include/linux/mm_types.h
> index c122bb1..33e5519 100644
> --- a/include/linux/mm_types.h
> +++ b/include/linux/mm_types.h
> @@ -482,7 +482,12 @@ static inline bool tlb_flush_pending(struct mm_struct *mm)
>  static inline void set_tlb_flush_pending(struct mm_struct *mm)
>  {
>  	mm->tlb_flush_pending = true;
> -	barrier();
> +
> +	/*
> +	 * Guarantee that the tlb_flush_pending store does not leak into the
> +	 * critical section updating the page tables
> +	 */
> +	smp_mb__before_spinlock();
>  }
>  /* Clearing is done after a TLB flush, which also provides a barrier. */
>  static inline void clear_tlb_flush_pending(struct mm_struct *mm)
>
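For reference, the pairing the diff above is meant to establish, roughly
(the migration side is simplified and its placement illustrative, not
lifted from the actual code):

    CPU A (change_protection)        CPU B (migration/fault path)
    -------------------------        ----------------------------
    set_tlb_flush_pending(mm);
    spin_lock(ptl);
    /* make ptes NUMA/protected */
    spin_unlock(ptl);                spin_lock(ptl);
                                     if (tlb_flush_pending(mm))
                                             /* flush or back off before
                                                copying the page */
                                     spin_unlock(ptl);
    flush_tlb_range(...);
    clear_tlb_flush_pending(mm);

If CPU B sees the updated ptes under the lock, it must also see
tlb_flush_pending == true, so the page copy cannot overtake a
still-pending TLB flush.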


