Subject: Re: [PATCH] slab, slub: remove size disparity on debug kernel
On Tue, 13 Mar 2018, Shakeel Butt wrote:

> However, for SLUB on a debug kernel the sizes were the same. On further
> inspection it was found that SLUB always uses kmem_cache.object_size to
> compute kmem_cache.size, while SLAB uses the given kmem_cache.size. On
> a debug kernel a slab's size can be larger than its object_size. Thus,
> when creating a non-root slab, SLAB uses the root's size as the base to
> calculate the non-root slab's size, so the non-root slab's size can be
> larger than the root slab's size. For SLUB, the non-root slab's size is
> derived from the root's object_size, so the size remains the same for
> the root and non-root slabs.
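
To make the disparity concrete, here is a toy userspace model of the two
behaviours. This is not kernel code: add_debug_metadata() and the 64-byte
overhead are invented purely for illustration.

#include <stdio.h>

/* Toy model of the two fields being discussed. */
struct kmem_cache {
	unsigned int object_size;	/* payload bytes only */
	unsigned int size;		/* payload plus debug metadata */
};

/* Invented stand-in for the red zones, poisoning and tracking data a
 * debug kernel adds around each object. */
static unsigned int add_debug_metadata(unsigned int base)
{
	return base + 64;
}

int main(void)
{
	struct kmem_cache root = { .object_size = 192 };

	root.size = add_debug_metadata(root.object_size);		/* 256 */

	/* SLAB-style child: derived from root.size, so metadata is added
	 * on top of metadata and the child ends up larger than the root. */
	unsigned int slab_child = add_debug_metadata(root.size);	/* 320 */

	/* SLUB-style child: derived from root.object_size, so the child
	 * ends up the same size as the root. */
	unsigned int slub_child = add_debug_metadata(root.object_size);	/* 256 */

	printf("root %u, SLAB child %u, SLUB child %u\n",
	       root.size, slab_child, slub_child);
	return 0;
}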

Note that the object_size and size may differ for SLUB based on kernel
parameters and slab configuration. For SLAB these are compilation options.
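
On a SLUB kernel the two values can also be observed from userspace via the
per-cache sysfs attributes. A small sketch, assuming CONFIG_SLUB plus sysfs
and using "kmalloc-192" only as an example cache name; with slub_debug
enabled slab_size will exceed object_size:

#include <stdio.h>

/* Read one numeric attribute of a SLUB cache from sysfs. */
static long read_attr(const char *cache, const char *attr)
{
	char path[256];
	long val = -1;
	FILE *f;

	snprintf(path, sizeof(path), "/sys/kernel/slab/%s/%s", cache, attr);
	f = fopen(path, "r");
	if (!f)
		return -1;
	if (fscanf(f, "%ld", &val) != 1)
		val = -1;
	fclose(f);
	return val;
}

int main(void)
{
	const char *cache = "kmalloc-192";

	/* object_size is the payload, slab_size the full per-object
	 * footprint; with debugging enabled the latter is larger. */
	printf("%s: object_size=%ld slab_size=%ld\n", cache,
	       read_attr(cache, "object_size"),
	       read_attr(cache, "slab_size"));
	return 0;
}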

> @@ -379,7 +379,7 @@ struct kmem_cache *find_mergeable(unsigned int size, unsigned int align,
> }
>
> static struct kmem_cache *create_cache(const char *name,
> - unsigned int object_size, unsigned int size, unsigned int align,
> + unsigned int object_size, unsigned int align,
> slab_flags_t flags, unsigned int useroffset,

Why were both the size and object_size passed during cache creation in the
first place? From the flags etc. the slab logic should be able to compute
the actual number of bytes required for each object and its metadata.
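
For what it's worth, a simplified sketch of that computation, loosely
modelled on what SLUB's calculate_sizes() does; calc_size(), the layout
arithmetic and the alignment handling here are simplified illustrations,
not the real kernel code:

#include <stdio.h>

/* Illustrative flag values; the real definitions live in
 * include/linux/slab.h. */
#define SLAB_RED_ZONE	0x00000400U
#define SLAB_STORE_USER	0x00010000U

/* Given only the object size, the debug flags and the alignment, work out
 * the per-object footprint including metadata -- the point being that a
 * separate "size" argument is not needed up front. */
static unsigned int calc_size(unsigned int object_size, unsigned int flags,
			      unsigned int align)
{
	unsigned int size = object_size;

	if (flags & SLAB_RED_ZONE)
		size += 2 * sizeof(unsigned long);	/* red zones around the object */
	if (flags & SLAB_STORE_USER)
		size += 2 * 4 * sizeof(unsigned long);	/* alloc/free tracking */

	return (size + align - 1) & ~(align - 1);	/* round up to alignment */
}

int main(void)
{
	printf("plain: %u bytes\n", calc_size(192, 0, 8));
	printf("debug: %u bytes\n",
	       calc_size(192, SLAB_RED_ZONE | SLAB_STORE_USER, 8));
	return 0;
}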
