Subject: 307af6c879377 "mbcache: automatically delete entries from cache on freeing" ==> PREEMPT_RT grumble
From: Mike
Date: 2022-09-06
Hi Jan,

diff --git a/fs/mbcache.c b/fs/mbcache.c
index d1ebb5df2856..96f1d49d30a5 100644
--- a/fs/mbcache.c
+++ b/fs/mbcache.c
@@ -106,21 +106,28 @@ int mb_cache_entry_create(struct mb_cache *cache, gfp_t mask, u32 key,
}
}
hlist_bl_add_head(&entry->e_hash_list, head);
- hlist_bl_unlock(head);
-
+ /*
+ * Add entry to LRU list before it can be found by
+ * mb_cache_entry_delete() to avoid races
+ */
spin_lock(&cache->c_list_lock);
list_add_tail(&entry->e_list, &cache->c_list);
- /* Grab ref for LRU list */
- atomic_inc(&entry->e_refcnt);
cache->c_entry_count++;
spin_unlock(&cache->c_list_lock);
+ hlist_bl_unlock(head);

return 0;
}
EXPORT_SYMBOL(mb_cache_entry_create);

The above movement of hlist_bl_unlock() is a problem for RT with respect to
both the taking and the releasing of ->c_list_lock: on RT that spinlock
becomes an rtmutex, i.e. a sleeping lock, while hlist_bl_lock()/
hlist_bl_unlock() operate on a bit spinlock that blocks preemption, so
->c_list_lock is now taken and released with preemption disabled.

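To spell out the resulting nesting (hlist_bl_lock() is taken earlier in
mb_cache_entry_create(); the annotations are mine, purely illustrative):

	hlist_bl_lock(head);			/* bit spinlock, preemption off */
	...
	spin_lock(&cache->c_list_lock);		/* sleeping rtmutex on RT -> splat */
	list_add_tail(&entry->e_list, &cache->c_list);
	cache->c_entry_count++;
	spin_unlock(&cache->c_list_lock);
	hlist_bl_unlock(head);
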
Is that scope increase necessary? If so, it looks like ->c_list_lock could
probably become a raw_spinlock_t without anyone noticing.
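Untested sketch of what I mean (every ->c_list_lock user in fs/mbcache.c
would need the same spin_ -> raw_spin_ treatment, including the
spin_lock_init() where the cache is created):

	struct mb_cache {
		...
		raw_spinlock_t		c_list_lock;	/* was spinlock_t */
		...
	};

	/* e.g. in mb_cache_entry_create(), under hlist_bl_lock(head) */
	raw_spin_lock(&cache->c_list_lock);
	list_add_tail(&entry->e_list, &cache->c_list);
	cache->c_entry_count++;
	raw_spin_unlock(&cache->c_list_lock);
	hlist_bl_unlock(head);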

-Mike
