Date: Sun, 18 Sep 2022 13:08:16 +0200
Subject: Re: [PATCH v6 1/2] percpu: Add percpu_counter_add_local and percpu_counter_sub_local
From: Manfred Spraul <>
Hi Jiebin,
On 9/13/22 21:25, Jiebin Sun wrote:
>
> +/*
> + * With percpu_counter_add_local() and percpu_counter_sub_local(), counts
> + * are accumulated in local per cpu counter and not in fbc->count until
> + * local count overflows PERCPU_COUNTER_LOCAL_BATCH. This makes counter
> + * write efficient.
> + * But percpu_counter_sum(), instead of percpu_counter_read(), needs to be
> + * used to add up the counts from each CPU to account for all the local
> + * counts. So percpu_counter_add_local() and percpu_counter_sub_local()
> + * should be used when a counter is updated frequently and read rarely.
> + */
> +static inline void
> +percpu_counter_add_local(struct percpu_counter *fbc, s64 amount)
> +{
> +	percpu_counter_add_batch(fbc, amount, PERCPU_COUNTER_LOCAL_BATCH);
> +}
> +
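As a side note for readers of the thread, a minimal usage sketch of the proposed API (the counter and variable names below are made up for illustration; the real user is in patch 2/2 of this series, and the counter must have been set up with percpu_counter_init()):

	/* hypothetical counter, e.g. bytes queued in a msg queue */
	struct percpu_counter msg_bytes;

	/* write side (hot path): the update stays in the per-cpu
	 * counter until the local count crosses
	 * PERCPU_COUNTER_LOCAL_BATCH */
	percpu_counter_add_local(&msg_bytes, msgsz);

	/* read side (rare): must fold in all per-cpu counts, since
	 * most updates never reach fbc->count */
	s64 total = percpu_counter_sum(&msg_bytes);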
Unrelated to your patch, and not relevant for ipc/msg as the functions are not called from interrupts, but: Aren't there races with interrupts?
> /*
>  * This function is both preempt and irq safe. The former is due to explicit
>  * preemption disable. The latter is guaranteed by the fact that the slow path
>  * is explicitly protected by an irq-safe spinlock whereas the fast patch uses
>  * this_cpu_add which is irq-safe by definition. Hence there is no need muck
>  * with irq state before calling this one
>  */
> void percpu_counter_add_batch(struct percpu_counter *fbc, s64 amount, s32 batch)
> {
> 	s64 count;
>
> 	preempt_disable();
> 	count = __this_cpu_read(*fbc->counters) + amount;
> 	if (abs(count) >= batch) {
> 		unsigned long flags;
> 		raw_spin_lock_irqsave(&fbc->lock, flags);
> 		fbc->count += count;
> 		__this_cpu_sub(*fbc->counters, count - amount);
> 		raw_spin_unlock_irqrestore(&fbc->lock, flags);
> 	} else {
> 		this_cpu_add(*fbc->counters, amount);
> 	}
> 	preempt_enable();
> }
> EXPORT_SYMBOL(percpu_counter_add_batch);

Race 1:
start: __this_cpu_read(*fbc->counters) = INT_MAX-1.
Call: percpu_counter_add_batch(fbc, 1, INT_MAX);
Result:
count=INT_MAX;
if (abs(count) >= batch) { // branch taken
before the raw_spin_lock_irqsave():
Interrupt
Within interrupt:
percpu_counter_add_batch(fbc, -2*(INT_MAX-1), INT_MAX)
count=-(INT_MAX-1);
branch not taken
this_cpu_add() updates fbc->counters, new value is -(INT_MAX-1)
exit interrupt
raw_spin_lock_irqsave()
__this_cpu_sub(*fbc->counters, count - amount)
will subtract INT_MAX-1 from *fbc->counters. But the value is already -(INT_MAX-1) -> underflow.
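(Numerically: the per-cpu counters are s32. After the interrupt, *fbc->counters holds -(INT_MAX-1); subtracting another INT_MAX-1 gives -2*(INT_MAX-1), far below INT_MIN.)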
Race 2: (much simpler)
start: __this_cpu_read(*fbc->counters) = 0.
Call: percpu_counter_add_batch(fbc, INT_MAX-1, INT_MAX);
amount = INT_MAX-1;
- branch not taken.
before this_cpu_add(): interrupt
within the interrupt: call percpu_counter_add_batch(fbc, INT_MAX-1, INT_MAX)
new value of *fbc->counters: INT_MAX-1.
exit interrupt
outside interrupt:
this_cpu_add(*fbc->counters, amount);
<<< overflow.
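(Numerically: this_cpu_add() adds amount = INT_MAX-1 on top of the INT_MAX-1 that the interrupt left in *fbc->counters, giving 2*(INT_MAX-1), well above what the s32 per-cpu counter can hold.)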
Attached is an incomplete patch (untested). If needed, I could check the whole file and add/move the required local_irq_save() calls.
--
Manfred

From 6a1d2a4beb180241b63f9bf57454bbe031915dd1 Mon Sep 17 00:00:00 2001
From: Manfred Spraul <manfred@colorfullife.com>
Date: Sun, 18 Sep 2022 12:17:27 +0200
Subject: [PATCH] lib/percpu_counter: [RFC] potential overflow/underflow
If an interrupt happens between __this_cpu_read(*fbc->counters) and this_cpu_add(*fbc->counters, amount), and that interrupt modifies the per-cpu counter, then the this_cpu_add() that runs after the interrupt returns may under- or overflow.
Thus: Disable interrupts.
Note: The patch is incomplete. If the race is real, then more functions than just percpu_counter_add_batch() need to be updated.
In particular, the !CONFIG_SMP code looks wrong to me as well:

> static inline void
> percpu_counter_add(struct percpu_counter *fbc, s64 amount)
> {
> 	preempt_disable();
> 	fbc->count += amount;
> 	preempt_enable();
> }

The update of fbc->count is not IRQ safe.
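For illustration, an equally untested sketch of what an irq-safe UP variant might look like (not part of the attached patch):

	static inline void
	percpu_counter_add(struct percpu_counter *fbc, s64 amount)
	{
		unsigned long flags;

		/* disabling irqs also prevents preemption, so the
		 * preempt_disable()/preempt_enable() pair can go */
		local_irq_save(flags);
		fbc->count += amount;
		local_irq_restore(flags);
	}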
Signed-off-by: Manfred Spraul <manfred@colorfullife.com>
---
 lib/percpu_counter.c | 8 +++++---
 1 file changed, 5 insertions(+), 3 deletions(-)
diff --git a/lib/percpu_counter.c b/lib/percpu_counter.c
index ed610b75dc32..39de94d59b4f 100644
--- a/lib/percpu_counter.c
+++ b/lib/percpu_counter.c
@@ -82,18 +82,20 @@ EXPORT_SYMBOL(percpu_counter_set);
 void percpu_counter_add_batch(struct percpu_counter *fbc, s64 amount, s32 batch)
 {
 	s64 count;
+	unsigned long flags;
 
 	preempt_disable();
+	local_irq_save(flags);
 	count = __this_cpu_read(*fbc->counters) + amount;
 	if (abs(count) >= batch) {
-		unsigned long flags;
-		raw_spin_lock_irqsave(&fbc->lock, flags);
+		raw_spin_lock(&fbc->lock);
 		fbc->count += count;
 		__this_cpu_sub(*fbc->counters, count - amount);
-		raw_spin_unlock_irqrestore(&fbc->lock, flags);
+		raw_spin_unlock(&fbc->lock);
 	} else {
 		this_cpu_add(*fbc->counters, amount);
 	}
+	local_irq_restore(flags);
 	preempt_enable();
 }
 EXPORT_SYMBOL(percpu_counter_add_batch);
-- 
2.37.2