From: Jiebin Sun <jiebin.sun@intel.com>
Subject: [PATCH v3 1/2] percpu: Add percpu_counter_add_local
Add percpu_counter_add_local() for updating only the local counter,
without aggregating into the global counter.

This function can be used together with percpu_counter_sum() if you
need a highly accurate counter. It can bring a significant performance
improvement when percpu_counter_add() is called frequently and
percpu_counter_sum() is not in the critical path.
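
For illustration, a minimal sketch of the intended usage (the counter
name and call sites here are hypothetical, not part of this patch):

	struct percpu_counter stat;

	percpu_counter_init(&stat, 0, GFP_KERNEL);

	/* Hot path: update only the local per-cpu counter; no
	 * batch-size check, no aggregation into the global count. */
	percpu_counter_add_local(&stat, 1);

	/* Infrequent reader: sum all the per-cpu counters to get an
	 * accurate total. */
	s64 total = percpu_counter_sum(&stat);

	percpu_counter_destroy(&stat);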

Please use percpu_counter_add_batch() instead if you need the counter
to be timely but only approximately accurate, and the cost of calling
percpu_counter_add_batch() is acceptable.
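
By contrast, a sketch of the batch-based alternative (again using the
hypothetical counter from the example above):

	/* Aggregate into the global count once the local delta
	 * exceeds the batch size. */
	percpu_counter_add_batch(&stat, 1, batch);

	/* Timely but approximate read: returns the global count
	 * alone, without summing the per-cpu counters. */
	s64 approx = percpu_counter_read(&stat);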

Signed-off-by: Jiebin Sun <jiebin.sun@intel.com>
---
 include/linux/percpu_counter.h |  7 +++++++
 lib/percpu_counter.c           | 14 ++++++++++++++
2 files changed, 21 insertions(+)

diff --git a/include/linux/percpu_counter.h b/include/linux/percpu_counter.h
index 01861eebed79..344d69ae0fb1 100644
--- a/include/linux/percpu_counter.h
+++ b/include/linux/percpu_counter.h
@@ -40,6 +40,7 @@ int __percpu_counter_init(struct percpu_counter *fbc, s64 amount, gfp_t gfp,

void percpu_counter_destroy(struct percpu_counter *fbc);
void percpu_counter_set(struct percpu_counter *fbc, s64 amount);
+void percpu_counter_add_local(struct percpu_counter *fbc, s64 amount);
void percpu_counter_add_batch(struct percpu_counter *fbc, s64 amount,
s32 batch);
s64 __percpu_counter_sum(struct percpu_counter *fbc);
@@ -138,6 +139,12 @@ percpu_counter_add(struct percpu_counter *fbc, s64 amount)
preempt_enable();
}

+static inline void
+percpu_counter_add_local(struct percpu_counter *fbc, s64 amount)
+{
+	percpu_counter_add(fbc, amount);
+}
+
static inline void
percpu_counter_add_batch(struct percpu_counter *fbc, s64 amount, s32 batch)
{
diff --git a/lib/percpu_counter.c b/lib/percpu_counter.c
index ed610b75dc32..36907eb573a8 100644
--- a/lib/percpu_counter.c
+++ b/lib/percpu_counter.c
@@ -72,6 +72,20 @@ void percpu_counter_set(struct percpu_counter *fbc, s64 amount)
}
EXPORT_SYMBOL(percpu_counter_set);

+/*
+ * Use this function together with percpu_counter_sum() if you need a
+ * highly accurate counter. Since percpu_counter_sum() adds up all the
+ * per-cpu counters, there is no need to check the batch size and
+ * aggregate into the global count on each add. If percpu_counter_sum()
+ * is used infrequently while the add is in the critical path, this
+ * combination can bring a significant performance improvement over
+ * percpu_counter_add_batch().
+ */
+void percpu_counter_add_local(struct percpu_counter *fbc, s64 amount)
+{
+	this_cpu_add(*fbc->counters, amount);
+}
+EXPORT_SYMBOL(percpu_counter_add_local);
+
/*
* This function is both preempt and irq safe. The former is due to explicit
* preemption disable. The latter is guaranteed by the fact that the slow path
--
2.31.1