From: Leonardo Bras <>
Subject: [RFC PATCH 4/4] slub: apply new local_schedule_work_on() interface
Date: Sat, 29 Jul 2023 05:37:35 -0300
Make use of the new local_*lock_n*() and local_queue_work_on() interfaces to improve performance and latency on PREEMPT_RT kernels.
For functions whose work may be queued on a different CPU, replace local_*lock*() with local_*lock_n*(), and replace queue_work_on() with local_queue_work_on(). Likewise, replace flush_work() with local_flush_work().
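For reference, here is a rough sketch of what the _n lock variants could look like, mirroring the existing __local_lock_irqsave() helpers in include/linux/local_lock_internal.h. The real definitions live earlier in this series; the bodies below are illustrative assumptions, not copied from those patches:

#ifndef CONFIG_PREEMPT_RT
/*
 * !PREEMPT_RT: same as local_lock_irqsave(), except lockdep is told
 * about the instance selected by @cpu rather than this_cpu_ptr(lock).
 * On !RT the work still runs on @cpu itself, so the lock is only ever
 * taken locally and disabling local interrupts remains sufficient.
 */
#define __local_lock_irqsave_n(lock, flags, cpu)		\
	do {							\
		local_irq_save(flags);				\
		local_lock_acquire(per_cpu_ptr(lock, cpu));	\
	} while (0)
#else
/*
 * PREEMPT_RT: local_lock_t is backed by a per-CPU spinlock_t, so a
 * remote CPU's instance can be taken directly.
 */
#define __local_lock_irqsave_n(lock, flags, cpu)		\
	do {							\
		typecheck(unsigned long, flags);		\
		flags = 0;					\
		migrate_disable();				\
		spin_lock(per_cpu_ptr(lock, cpu));		\
	} while (0)
#endif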
This should have no relevant performance impact on non-RT kernels: for work that may be queued on a different CPU, the local_*lock's this_cpu_ptr() becomes per_cpu_ptr(smp_processor_id()), which resolves to the same address.
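Concretely, on !PREEMPT_RT the work item still runs on the CPU it was queued for, so both lookups yield the same pointer (illustrative snippet, not part of the patch):

	/* running in flush_cpu_slab() on @cpu itself: */
	c = this_cpu_ptr(s->cpu_slab);				/* old lookup */
	c = per_cpu_ptr(s->cpu_slab, smp_processor_id());	/* new lookup, same address */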
Signed-off-by: Leonardo Bras <leobras@redhat.com>
---
 mm/slub.c | 17 ++++++++++-------
 1 file changed, 10 insertions(+), 7 deletions(-)
diff --git a/mm/slub.c b/mm/slub.c
index e3b5d5c0eb3a..feb4a502d9a8 100644
--- a/mm/slub.c
+++ b/mm/slub.c
@@ -2733,13 +2733,14 @@ static inline void unfreeze_partials_cpu(struct kmem_cache *s,
 
 #endif	/* CONFIG_SLUB_CPU_PARTIAL */
 
-static inline void flush_slab(struct kmem_cache *s, struct kmem_cache_cpu *c)
+static inline void flush_slab(struct kmem_cache *s, struct kmem_cache_cpu *c,
+			      int cpu)
 {
 	unsigned long flags;
 	struct slab *slab;
 	void *freelist;
 
-	local_lock_irqsave(&s->cpu_slab->lock, flags);
+	local_lock_irqsave_n(&s->cpu_slab->lock, flags, cpu);
 
 	slab = c->slab;
 	freelist = c->freelist;
@@ -2748,7 +2749,7 @@ static inline void flush_slab(struct kmem_cache *s, struct kmem_cache_cpu *c)
 	c->freelist = NULL;
 	c->tid = next_tid(c->tid);
 
-	local_unlock_irqrestore(&s->cpu_slab->lock, flags);
+	local_unlock_irqrestore_n(&s->cpu_slab->lock, flags, cpu);
 
 	if (slab) {
 		deactivate_slab(s, slab, freelist);
@@ -2790,14 +2791,16 @@ static void flush_cpu_slab(struct work_struct *w)
 	struct kmem_cache *s;
 	struct kmem_cache_cpu *c;
 	struct slub_flush_work *sfw;
+	int cpu;
 
+	cpu = w->data.counter;
 	sfw = container_of(w, struct slub_flush_work, work);
 
 	s = sfw->s;
-	c = this_cpu_ptr(s->cpu_slab);
+	c = per_cpu_ptr(s->cpu_slab, cpu);
 
 	if (c->slab)
-		flush_slab(s, c);
+		flush_slab(s, c, cpu);
 
 	unfreeze_partials(s);
 }
@@ -2829,14 +2832,14 @@ static void flush_all_cpus_locked(struct kmem_cache *s)
 		INIT_WORK(&sfw->work, flush_cpu_slab);
 		sfw->skip = false;
 		sfw->s = s;
-		queue_work_on(cpu, flushwq, &sfw->work);
+		local_queue_work_on(cpu, flushwq, &sfw->work);
 	}
 
 	for_each_online_cpu(cpu) {
 		sfw = &per_cpu(slub_flush, cpu);
 		if (sfw->skip)
 			continue;
-		flush_work(&sfw->work);
+		local_flush_work(&sfw->work);
 	}
 
 	mutex_unlock(&flush_lock);
-- 
2.41.0
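A subtlety worth noting in flush_cpu_slab() above: the target CPU is recovered from w->data.counter, which only works if the queueing side recorded it there. Presumably local_queue_work_on(), introduced earlier in this series, does so when it redirects the work away from an isolated CPU. A conceptual sketch of that assumed pairing follows; the placement is an assumption, and a real implementation would have to coexist with the WORK_STRUCT_* flag bits that the workqueue core keeps in work->data:

	/* queueing side, assumed behavior of local_queue_work_on(): */
	atomic_long_set(&work->data, cpu);	/* stash the target CPU */

	/* handler side, as in flush_cpu_slab() above: */
	cpu = w->data.counter;			/* read the stashed CPU back */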