Subject: [RFC v2 32/34] mm, slub: make slab_lock() disable irqs with PREEMPT_RT
We need to disable irqs around slab_lock() (a bit spinlock) to make it
irq-safe. The calls to slab_lock() are nested under spin_lock_irqsave(), which
doesn't disable irqs on PREEMPT_RT, so add the explicit disabling there when
PREEMPT_RT is enabled.
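
For reference, a minimal sketch of the __slab_lock()/__slab_unlock() helpers
this patch builds on, assuming (as the hunks below suggest) that the new bool
parameter simply wraps the bit spinlock in local_irq_save()/local_irq_restore():

static __always_inline void
__slab_lock(struct page *page, unsigned long *flags, bool disable_irqs)
{
	/* On PREEMPT_RT the outer spin_lock_irqsave() leaves irqs enabled */
	if (disable_irqs)
		local_irq_save(*flags);
	bit_spin_lock(PG_locked, &page->flags);
}

static __always_inline void
__slab_unlock(struct page *page, unsigned long *flags, bool disable_irqs)
{
	__bit_spin_unlock(PG_locked, &page->flags);
	if (disable_irqs)
		local_irq_restore(*flags);
}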

We also distinguish cmpxchg_double_slab(), where we do the disabling
explicitly, from __cmpxchg_double_slab(), which is meant for contexts where
irqs are already disabled. However, those contexts typically achieve that via
spin_lock_irqsave(), which again doesn't disable irqs on PREEMPT_RT. Thus,
change __cmpxchg_double_slab() to behave the same as cmpxchg_double_slab() on
PREEMPT_RT.
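
Why the fallback needs irqs off: it takes the page's bit spinlock, which is
not irq-safe by itself, so an interrupt handler taking slab_lock() on the same
page would deadlock. A simplified sketch of the ___cmpxchg_double_slab() slow
path (cmpxchg_double() fast path and debug/stat handling elided; this is an
illustration of the structure, not the exact upstream code):

static inline bool
___cmpxchg_double_slab(struct kmem_cache *s, struct page *page,
		void *freelist_old, unsigned long counters_old,
		void *freelist_new, unsigned long counters_new,
		const char *n, bool disable_irqs)
{
	unsigned long flags;

	/* lockless cmpxchg_double() fast path elided */

	__slab_lock(page, &flags, disable_irqs);
	if (page->freelist == freelist_old &&
	    page->counters == counters_old) {
		/* compare matched: publish the new freelist and counters */
		page->freelist = freelist_new;
		page->counters = counters_new;
		__slab_unlock(page, &flags, disable_irqs);
		return true;
	}
	__slab_unlock(page, &flags, disable_irqs);

	return false;
}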

Signed-off-by: Vlastimil Babka <vbabka@suse.cz>
---
mm/slub.c | 13 +++++++++----
1 file changed, 9 insertions(+), 4 deletions(-)

diff --git a/mm/slub.c b/mm/slub.c
index 6721169f816d..12e966f07f19 100644
--- a/mm/slub.c
+++ b/mm/slub.c
@@ -374,12 +374,12 @@ __slab_unlock(struct page *page, unsigned long *flags, bool disable_irqs)
static __always_inline void
slab_lock(struct page *page, unsigned long *flags)
{
- __slab_lock(page, flags, false);
+ __slab_lock(page, flags, IS_ENABLED(CONFIG_PREEMPT_RT));
}

static __always_inline void slab_unlock(struct page *page, unsigned long *flags)
{
- __slab_unlock(page, flags, false);
+ __slab_unlock(page, flags, IS_ENABLED(CONFIG_PREEMPT_RT));
}

static inline bool ___cmpxchg_double_slab(struct kmem_cache *s, struct page *page,
@@ -422,14 +422,19 @@ static inline bool ___cmpxchg_double_slab(struct kmem_cache *s, struct page *pag
return false;
}

-/* Interrupts must be disabled (for the fallback code to work right) */
+/*
+ * Interrupts must be disabled (for the fallback code to work right), typically
+ * by an _irqsave() lock variant. Except on PREEMPT_RT where locks are different
+ * so we disable interrupts explicitly here.
+ */
static inline bool __cmpxchg_double_slab(struct kmem_cache *s, struct page *page,
void *freelist_old, unsigned long counters_old,
void *freelist_new, unsigned long counters_new,
const char *n)
{
return ___cmpxchg_double_slab(s, page, freelist_old, counters_old,
- freelist_new, counters_new, n, false);
+ freelist_new, counters_new, n,
+ IS_ENABLED(CONFIG_PREEMPT_RT));
}

static inline bool cmpxchg_double_slab(struct kmem_cache *s, struct page *page,
--
2.31.1