Subject: Re: [PATCH v1 02/25] mm/swap: Don't abuse the seqcount latching API

On Wed, May 20, 2020 at 03:22:15PM +0300, Konstantin Khlebnikov wrote:
> On 20/05/2020 00.45, Ahmed S. Darwish wrote:
> > Commit eef1a429f234 ("mm/swap.c: piggyback lru_add_drain_all() calls")
> > implemented an optimization mechanism to exit the to-be-started LRU
> > drain operation (name it A) if another drain operation *started and
> > finished* while (A) was blocked on the LRU draining mutex.
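
In outline, the mechanism that commit added is the following (a paraphrase of
the hunk quoted further down, with comments that call the other drain (B),
matching the (A) wording above; not new code):

        seq = raw_read_seqcount_latch(&seqcount);       /* snapshot before blocking       */
        mutex_lock(&lock);                              /* (A) may sleep behind drain (B) */
        if (__read_seqcount_retry(&seqcount, seq))      /* counter moved: (B) started and */
                goto done;                              /* finished, nothing left for (A) */
        raw_write_seqcount_latch(&seqcount);            /* announce our own drain         */
        /* ... queue and flush the per-CPU drain work ... */
done:
        mutex_unlock(&lock);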

That commit is horrible...

> Well, I thought it fits perfectly =)
>
> Maybe it's worth adding helpers with appropriate semantics?
> This is a pretty common pattern.

Where are the other sites?
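
Purely as an illustration, a helper packaging that "skip the work if someone
else completed a run while we slept on the lock" semantics could look like the
sketch below. The name, signature and generation-counter convention are made
up for this sketch; it is not an existing kernel API:

        /*
         * Hypothetical helper, illustration only: acquire @lock, but return
         * false if a complete run of the protected operation started and
         * finished while we were waiting for it.  Convention: the lock
         * holder increments @gen, under @lock, at the start of every run.
         */
        static bool mutex_lock_piggyback(struct mutex *lock, unsigned int *gen)
        {
                /* Acquire load: orders this snapshot before the mutex_lock() below. */
                unsigned int this_gen = smp_load_acquire(gen);

                mutex_lock(lock);

                if (READ_ONCE(*gen) != this_gen) {
                        /*
                         * Someone bumped @gen after our snapshot and then
                         * released @lock: their run covers our request too.
                         */
                        mutex_unlock(lock);
                        return false;
                }

                return true;
        }

lru_add_drain_all() would then be roughly: if mutex_lock_piggyback() returns
false, return; otherwise bump the generation, drain, and unlock.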

> > @@ -725,21 +735,48 @@ void lru_add_drain_all(void)
> >  	if (WARN_ON(!mm_percpu_wq))
> >  		return;
> > -	seq = raw_read_seqcount_latch(&seqcount);
> >  	mutex_lock(&lock);
> >  	/*
> > -	 * Piggyback on drain started and finished while we waited for lock:
> > -	 * all pages pended at the time of our enter were drained from vectors.
> >  	 */
> > -	if (__read_seqcount_retry(&seqcount, seq))
> >  		goto done;

Since there is no ordering in raw_read_seqcount_latch(), and
mutex_lock() is an ACQUIRE, there's no guarantee the read actually
happens before the mutex is acquired.
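
Spelled out against the quoted code (the comments are mine, only to illustrate
the hazard):

        seq = raw_read_seqcount_latch(&seqcount);       /* plain read, no barrier */

        mutex_lock(&lock);                              /* ACQUIRE */

        /*
         * The ACQUIRE only prevents accesses *after* mutex_lock() from being
         * reordered before it; it does not prevent the earlier seqcount read
         * from being reordered into the critical section.  If that happens,
         * the snapshot is effectively taken with the lock already held, and
         * the check below can no longer observe a drain that started and
         * finished while we were waiting.
         */
        if (__read_seqcount_retry(&seqcount, seq))
                goto done;

An smp_mb() after the snapshot, or taking the snapshot with an acquire load,
would be one way to forbid that reordering; whether that is the right fix here
is a separate question.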

> > -	raw_write_seqcount_latch(&seqcount);
> >  	cpumask_clear(&has_work);
