Subject: Re: [PATCH-mm v3] mm/list_lru: Optimize memcg_reparent_list_lru_node()
On Sun, Mar 27, 2022 at 08:57:15PM -0400, Waiman Long wrote:
> On 3/22/22 22:12, Muchun Song wrote:
> > On Wed, Mar 23, 2022 at 9:55 AM Waiman Long <longman@redhat.com> wrote:
> > > On 3/22/22 21:06, Muchun Song wrote:
> > > > On Wed, Mar 9, 2022 at 10:40 PM Waiman Long <longman@redhat.com> wrote:
> > > > > Since commit 2c80cd57c743 ("mm/list_lru.c: fix list_lru_count_node()
> > > > > to be race free"), we are tracking the total number of lru
> > > > > entries in a list_lru_node in its nr_items field. In the case of
> > > > > memcg_reparent_list_lru_node(), there is nothing to be done if nr_items
> > > > > is 0. We don't even need to take the nlru->lock as no new lru entry
> > > > > could be added by a racing list_lru_add() to the draining src_idx memcg
> > > > > at this point.
> > > > Hi Waiman,
> > > >
> > > > Sorry for the late reply. Quick question: what if there is an inflight
> > > > list_lru_add()? How about the following race?
> > > >
> > > > CPU0:                                 CPU1:
> > > > list_lru_add()
> > > >     spin_lock(&nlru->lock)
> > > >     l = list_lru_from_kmem(memcg)
> > > >                                       memcg_reparent_objcgs(memcg)
> > > >                                       memcg_reparent_list_lrus(memcg)
> > > >                                           memcg_reparent_list_lru()
> > > >                                               memcg_reparent_list_lru_node()
> > > >                                                   if (!READ_ONCE(nlru->nr_items))
> > > >                                                       // Miss reparenting
> > > >                                                       return
> > > >     // Assume 0->1
> > > >     l->nr_items++
> > > >     // Assume 0->1
> > > >     nlru->nr_items++
> > > >
> > > > IIUC, we use nlru->lock to serialise this scenario.
> > > I guess this race is theoretically possible but very unlikely, since it
> > > would require a very long pause between list_lru_from_kmem() and the
> > > increment of nr_items.
> > It is more likely in a VM.
> >
> > > How about the following changes to make sure that this race can't happen?
> > >
> > > diff --git a/mm/list_lru.c b/mm/list_lru.c
> > > index c669d87001a6..c31a0a8ad4e7 100644
> > > --- a/mm/list_lru.c
> > > +++ b/mm/list_lru.c
> > > @@ -395,9 +395,10 @@ static void memcg_reparent_list_lru_node(struct list_lru *lru, int nid,
> > >          struct list_lru_one *src, *dst;
> > >
> > >          /*
> > > -        * If there is no lru entry in this nlru, we can skip it immediately.
> > > +        * If there is no lru entry in this nlru and the nlru->lock is free,
> > > +        * we can skip it immediately.
> > >          */
> > > -       if (!READ_ONCE(nlru->nr_items))
> > > +       if (!READ_ONCE(nlru->nr_items) && !spin_is_locked(&nlru->lock))
> > I think we should also insert an smp_rmb() between those two loads.
>
> Thinking about this some more, I believe that adding the spin_is_locked()
> check will be enough for x86. However, that will likely not be enough for
> arches with more relaxed memory semantics. So the safest way to avoid this
> possible race is to move the check inside the lock critical section, though
> that comes with a slightly higher overhead for the 0 nr_items case. I will
> send out a patch to correct that. Thanks for bringing this possible race to
> my attention.

Yes, I think the spin_is_locked() check is not enough:
CPU0                                  CPU1
READ_ONCE(&nlru->nr_items) -> 0
                                      spin_lock(&nlru->lock);
                                      nlru->nr_items++;
                                      spin_unlock(&nlru->lock);
&& !spin_is_locked(&nlru->lock) -> 0
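
For reference, a minimal sketch of the lock-protected variant described
above (illustrative only, not the actual follow-up patch; helper names
follow mm/list_lru.c in this series):

static void memcg_reparent_list_lru_node(struct list_lru *lru, int nid,
					 int src_idx, struct mem_cgroup *dst_memcg)
{
	struct list_lru_node *nlru = &lru->node[nid];
	int dst_idx = dst_memcg->kmemcg_id;
	struct list_lru_one *src, *dst;

	spin_lock_irq(&nlru->lock);

	/*
	 * Checking nr_items under nlru->lock is race-free: a list_lru_add()
	 * that has already done list_lru_from_kmem() still holds the lock,
	 * so it must finish its increments before we can observe nr_items.
	 */
	if (!nlru->nr_items)
		goto out;

	src = list_lru_from_memcg_idx(lru, nid, src_idx);
	dst = list_lru_from_memcg_idx(lru, nid, dst_idx);

	list_splice_init(&src->list, &dst->list);
	if (src->nr_items) {
		dst->nr_items += src->nr_items;
		set_shrinker_bit(dst_memcg, nid, lru_shrinker_id(lru));
		src->nr_items = 0;
	}
out:
	spin_unlock_irq(&nlru->lock);
}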


Getting back to the original patch, I wonder if instead we could batch the
reparenting of lrus so we don't have to grab and release nlru->lock for each
lru being reparented.
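
For context, here is the (simplified, details elided) structure that
batching would amortize; every list_lru on memcg_list_lrus currently takes
and drops its per-node lock independently during reparenting:

void memcg_reparent_list_lrus(struct mem_cgroup *memcg,
			      struct mem_cgroup *parent)
{
	struct list_lru *lru;

	mutex_lock(&list_lrus_mutex);
	list_for_each_entry(lru, &memcg_list_lrus, list)
		/*
		 * For each node, memcg_reparent_list_lru_node() does one
		 * spin_lock_irq(&nlru->lock)/spin_unlock_irq() round trip.
		 */
		memcg_reparent_list_lru(lru, memcg->kmemcg_id, parent);
	mutex_unlock(&list_lrus_mutex);
}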


Thanks!
