Subject: Re: [PATCH] slub: Hold list_lock unconditionally before the call to add_full.
On Sat, 8 Feb 2014, Gautham R Shenoy wrote:

> Hi,
>
> According to the lockdep annotation, and to the comment that existed
> before the lockdep annotations were introduced,
> mm/slub.c:add_full(s, n, page) expects to be called with n->list_lock
> held.
>
> However, there's a call path in deactivate_slab() when
>
> (new.inuse || n->nr_partial <= s->min_partial) &&
> !(new.freelist) &&
> !(kmem_cache_debug(s))
>
> which ends up calling add_full() without holding
> n->list_lock.
>
> This was discovered while onlining/offlining cpus in 3.14-rc1 due to
> the lockdep annotations added by commit
> c65c1877bd6826ce0d9713d76e30a7bed8e49f38 ("slub: use lockdep_assert_held").
>
> Fix this by taking n->list_lock unconditionally, regardless of the
> state of kmem_cache_debug(s).
>
> Cc: Peter Zijlstra <peterz@infradead.org>
> Cc: Pekka Enberg <penberg@kernel.org>
> Signed-off-by: Gautham R. Shenoy <ego@linux.vnet.ibm.com>
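
For context, the call path being described looks roughly like the
following. This is a simplified paraphrase of
mm/slub.c:deactivate_slab() as of 3.14-rc1, not verbatim kernel code:

	if (!new.inuse && n->nr_partial > s->min_partial)
		m = M_FREE;
	else if (new.freelist) {
		m = M_PARTIAL;
		if (!lock) {
			lock = 1;
			spin_lock(&n->list_lock);
		}
	} else {
		m = M_FULL;
		/* list_lock is only taken here for debug caches */
		if (kmem_cache_debug(s) && !lock) {
			lock = 1;
			spin_lock(&n->list_lock);
		}
	}
	...
	if (m == M_PARTIAL)
		add_partial(n, page, tail);
	else if (m == M_FULL)
		/* reached without list_lock when !kmem_cache_debug(s) */
		add_full(s, n, page);

So for a non-debug cache whose slab ends up M_FULL, add_full() is
entered without n->list_lock held, which is what trips the new lockdep
assertion.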

No, taking the lock is not needed unless kmem_cache_debug(s) is actually
set, specifically s->flags & SLAB_STORE_USER.

You want the patch at http://marc.info/?l=linux-kernel&m=139147105027693
instead, which is already in -mm and linux-next.
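
The shape of that fix, sketched from this reply rather than copied from
the actual -mm patch (so treat it as a reconstruction): the lockdep
assertion in add_full() sits below the SLAB_STORE_USER early return, so
it only applies when the full list is actually modified:

	static void add_full(struct kmem_cache *s,
		struct kmem_cache_node *n, struct page *page)
	{
		if (!(s->flags & SLAB_STORE_USER))
			return;	/* full list untouched, no lock required */

		lockdep_assert_held(&n->list_lock);
		list_add(&page->lru, &n->full);
	}

With that ordering, the !kmem_cache_debug(s) path through
deactivate_slab() no longer triggers the assertion, and no locking is
added to the non-debug path.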

