Subject: Re: [PATCH 2/4] lib/percpu-refcount: introduce percpu_ref_resurge()
On Wed, Sep 19, 2018 at 03:55:07PM +0800, Ming Lei wrote:
> On Wed, Sep 19, 2018 at 01:19:10PM +0800, jianchao.wang wrote:
> > Hi Ming
> >
> > On 09/18/2018 06:19 PM, Ming Lei wrote:
> > > + unsigned long __percpu *percpu_count;
> > > +
> > > + WARN_ON_ONCE(__ref_is_percpu(ref, &percpu_count));
> > > +
> > > + /* get one extra ref for avoiding race with .release */
> > > + rcu_read_lock_sched();
> > > + atomic_long_add(1, &ref->count);
> > > + rcu_read_unlock_sched();
> > > + }
> >
> > The rcu_read_lock_sched here is redundant; we are already inside a
> > spin_lock_irqsave critical section.
>
> Right.
>
> >
> > The atomic_long_add(1, &ref->count) may have two results:
> > 1. ref->count > 1
> > it will not drop to zero any more.
> > 2. ref->count == 1
> > it had already dropped to zero, and .release may be running.
>
> IMO, both cases are fine and supported, or do you have any other
> concern about this approach?

My earlier reply was too quick, :-)

Yeah, the .release() may be running.

For the blk-mq/NVMe use case, it won't be an issue. We may document this
race and let the user handle it if it turns out to be a problem.
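
For reference, a minimal sketch of what the hunk quoted above boils down
to; the function name is illustrative and not the actual patch, and the
caller is assumed to already serialize against percpu_ref_kill() (e.g.
under a spinlock), which is why the rcu_read_lock_sched() pair is dropped:

#include <linux/percpu-refcount.h>

/* Hypothetical helper, not the actual patch. */
static void percpu_ref_resurge_sketch(struct percpu_ref *ref)
{
	unsigned long __percpu *percpu_count;

	/* the ref must still be in atomic (non-percpu) mode */
	WARN_ON_ONCE(__ref_is_percpu(ref, &percpu_count));

	/*
	 * Take one extra reference. Two outcomes:
	 *  1) count was > 0: it can no longer reach zero, so
	 *     ->release() cannot start running under us;
	 *  2) count was already 0: ->release() may be running
	 *     concurrently, and the caller has to handle (or rule
	 *     out) that race, as discussed above.
	 */
	atomic_long_add(1, &ref->count);
}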


thanks,
Ming
