Subject: Re: [PATCH 11/18] mm: fix TLB flush race between migration, and change_protection_range
On 12/09/2013 02:09 AM, Mel Gorman wrote:

After reading the locking thread that Paul McKenney started,
I wonder if I got the barriers wrong in these functions...

> +#if defined(CONFIG_NUMA_BALANCING) || defined(CONFIG_COMPACTION)
> +/*
> + * Memory barriers to keep this state in sync are graciously provided by
> + * the page table locks, outside of which no page table modifications happen.
> + * The barriers below prevent the compiler from re-ordering the instructions
> + * around the memory barriers that are already present in the code.
> + */
> +static inline bool tlb_flush_pending(struct mm_struct *mm)
> +{
> + barrier();

Should this be smp_mb__after_unlock_lock(); ?

> + return mm->tlb_flush_pending;
> +}
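
For the read side, this is roughly what I have in mind, assuming the
caller already holds the page table lock at that point and that the
smp_mb__after_unlock_lock() primitive from Paul's thread is available
(just a sketch, not tested):

static inline bool tlb_flush_pending(struct mm_struct *mm)
{
	/*
	 * Assumes the caller has just taken the page table lock;
	 * smp_mb__after_unlock_lock() promotes the flusher's UNLOCK
	 * plus our LOCK to a full barrier, so the read of the flag
	 * below cannot slip above the lock acquisition.
	 */
	smp_mb__after_unlock_lock();
	return mm->tlb_flush_pending;
}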
> +static inline void set_tlb_flush_pending(struct mm_struct *mm)
> +{
> + mm->tlb_flush_pending = true;
> + barrier();
> +}
> +/* Clearing is done after a TLB flush, which also provides a barrier. */
> +static inline void clear_tlb_flush_pending(struct mm_struct *mm)
> +{
> + barrier();
> + mm->tlb_flush_pending = false;
> +}

And these smp_mb__before_spinlock() ?
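
For the set side, something along these lines, assuming
set_tlb_flush_pending() is always called just before the caller takes
the page table lock in change_protection() (again, only a sketch):

static inline void set_tlb_flush_pending(struct mm_struct *mm)
{
	mm->tlb_flush_pending = true;
	/*
	 * Pairs with the spin_lock() the caller takes next, so the
	 * store above is ordered before the PTE updates done under
	 * that lock.
	 */
	smp_mb__before_spinlock();
}

The clear side is probably fine as it is, since the TLB flush that
precedes it already acts as a barrier.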

Paul? Peter?

--
All rights reversed

