Subject: Re: [PATCH 23/32] Generic dynamic per cpu refcounting
On Wed, 26 Dec 2012 18:00:02 -0800
Kent Overstreet <koverstreet@google.com> wrote:

> This implements a refcount with similar semantics to
> atomic_get()/atomic_dec_and_test(), that starts out as just an atomic_t
> but dynamically switches to per cpu refcounting when the rate of
> gets/puts becomes too high.
>
> It also implements two stage shutdown, as we need it to tear down the
> percpu counts. Before dropping the initial refcount, you must call
> percpu_ref_kill(); this puts the refcount in "shutting down mode" and
> switches back to a single atomic refcount with the appropriate barriers
> (synchronize_rcu()).
>
> It's also legal to call percpu_ref_kill() multiple times - it only
> returns true once, so callers don't have to reimplement shutdown
> synchronization.
>
> For the sake of simplicity/efficiency, the heuristic is pretty simple -
> it just switches to percpu refcounting if there are more than x gets
> in one second (completely arbitrarily, 4096).
>
> It'd be more correct to count the number of cache misses or something
> else more profile driven, but doing so would require accessing the
> shared ref twice per get - by just counting the number of gets(), we can
> stick that counter in the high bits of the refcount and increment both
> with a single atomic64_add(). But I expect this'll be good enough in
> practice.
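
To make the described lifecycle concrete, here is a minimal usage sketch built only from the semantics quoted above: the ref starts out as a single atomic, gets/puts behave like atomic_get()/atomic_dec_and_test(), and percpu_ref_kill() must be called (returning true exactly once) before the initial reference is dropped. The percpu_ref_init/get/put/kill names follow the description; the exact signatures, return values, header name, and the my_object wrapper are assumptions for illustration, not the patch itself.

#include <linux/percpu-refcount.h>	/* header name assumed */

struct my_object {
	struct percpu_ref	ref;
	/* ... */
};

static void my_object_free(struct my_object *obj);

static void my_object_create(struct my_object *obj)
{
	percpu_ref_init(&obj->ref);		/* starts life as a plain atomic_t */
}

static void my_object_use(struct my_object *obj)
{
	percpu_ref_get(&obj->ref);		/* may become a per-cpu increment */
	/* ... use obj ... */
	if (percpu_ref_put(&obj->ref))		/* like atomic_dec_and_test() */
		my_object_free(obj);		/* can only be true after kill */
}

static void my_object_teardown(struct my_object *obj)
{
	/*
	 * Stage one: collapse the per-cpu counts back into a single
	 * atomic (synchronize_rcu() happens inside).  Safe to call more
	 * than once; only the first caller sees true.
	 */
	if (percpu_ref_kill(&obj->ref)) {
		/* Stage two: drop the initial reference. */
		if (percpu_ref_put(&obj->ref))
			my_object_free(obj);
	}
}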
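The packing trick from the last paragraph can also be sketched: keep the live refcount in the low bits of one 64-bit word and a "gets this interval" counter in the high bits, so a single atomic64_add() bumps both and the rate check never touches a second shared word. The field split, names, and where the 4096 threshold is tested below are illustrative assumptions, not the patch's actual layout.

#include <linux/atomic.h>
#include <linux/types.h>

#define REF_COUNT_BITS	32
#define REF_GET_INCR	((1ULL << REF_COUNT_BITS) + 1)	/* bump both fields at once */

static void packed_ref_get(atomic64_t *packed)
{
	/* low 32 bits: refcount; high 32 bits: gets since the last rate check */
	atomic64_add(REF_GET_INCR, packed);
}

static bool packed_ref_should_go_percpu(atomic64_t *packed)
{
	u64 v = atomic64_read(packed);

	/* "more than 4096 gets in one second", per the description above */
	return (v >> REF_COUNT_BITS) > 4096;
}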

I still don't "get" why this code exists. It is spectacularly,
stunningly undocumented, and if someone were to ask me "under what
circumstances should I use percpu-refcount", I would not be able to
help them.


