From: Hyeonggon Yoo <42.hyeyoo@gmail.com>
Subject: Re: [PATCH 09/12] mm, slub: split out allocations from pre/post hooks

On Mon, Nov 28, 2022 at 12:01:39AM +0100, Vlastimil Babka wrote:
> On 11/27/22 11:54, Hyeonggon Yoo wrote:
> > On Mon, Nov 21, 2022 at 06:11:59PM +0100, Vlastimil Babka wrote:
> >> In the following patch we want to introduce CONFIG_SLUB_TINY allocation
> >> paths that don't use the percpu slab. To prepare, refactor the
> >> allocation functions:
> >>
> >> Split out __slab_alloc_node() from slab_alloc_node() where the former
> >> does the actual allocation and the latter calls the pre/post hooks.
> >>
> >> Analogically, split out __kmem_cache_alloc_bulk() from
> >> kmem_cache_alloc_bulk().
> >>
> >> Signed-off-by: Vlastimil Babka <vbabka@suse.cz>
> >> ---
> >> mm/slub.c | 127 +++++++++++++++++++++++++++++++++---------------------
> >> 1 file changed, 77 insertions(+), 50 deletions(-)
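
The shape of this refactoring can be pictured with a small userspace sketch. The demo_* names below are invented purely for illustration (this is not the real mm/slub.c code); the point is only that the wrapper runs the pre/post hooks while the __-prefixed helper does nothing but the actual allocation, so an alternative allocation path such as the planned CONFIG_SLUB_TINY one could wrap the same hooks around its own helper.

#include <stdbool.h>
#include <stddef.h>
#include <stdio.h>
#include <stdlib.h>
#include <string.h>

struct demo_cache { size_t object_size; };

/* pre-hook: accounting/debug checks that may veto the allocation */
static bool demo_pre_alloc_hook(struct demo_cache *s)
{
	return s != NULL;	/* stand-in for the memcg/debug checks */
}

/* post-hook: per-object work such as debug checks or zero-init */
static void demo_post_alloc_hook(struct demo_cache *s, void **p, int nr,
				 bool init)
{
	for (int i = 0; i < nr; i++)
		if (init)
			memset(p[i], 0, s->object_size);
}

/* __demo_alloc_bulk(): only the raw allocation, no hooks */
static int __demo_alloc_bulk(struct demo_cache *s, size_t size, void **p)
{
	size_t i;

	for (i = 0; i < size; i++) {
		p[i] = malloc(s->object_size);
		if (!p[i]) {
			while (i--)
				free(p[i]);
			return 0;
		}
	}
	return (int)size;
}

/* demo_alloc_bulk(): pre-hook, then the raw allocation, then post-hook */
static int demo_alloc_bulk(struct demo_cache *s, size_t size, void **p)
{
	int i;

	if (!demo_pre_alloc_hook(s))
		return 0;

	i = __demo_alloc_bulk(s, size, p);

	if (i != 0)
		demo_post_alloc_hook(s, p, i, true);

	return i;
}

int main(void)
{
	struct demo_cache cache = { .object_size = 64 };
	void *objs[4];
	int n = demo_alloc_bulk(&cache, 4, objs);

	printf("allocated %d objects\n", n);
	while (n--)
		free(objs[n]);
	return 0;
}
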
> >
> > [...]
> >
> >> +
> >> +/* Note that interrupts must be enabled when calling this function. */
> >> +int kmem_cache_alloc_bulk(struct kmem_cache *s, gfp_t flags, size_t size,
> >> +			  void **p)
> >> +{
> >> +	int i;
> >> +	struct obj_cgroup *objcg = NULL;
> >> +
> >> +	/* memcg and kmem_cache debug support */
> >> +	s = slab_pre_alloc_hook(s, NULL, &objcg, size, flags);
> >> +	if (unlikely(!s))
> >> +		return false;
> >> +
> >> +	i = __kmem_cache_alloc_bulk(s, flags, size, p, objcg);
> >> +
> >> +	/*
> >> +	 * memcg and kmem_cache debug support and memory initialization.
> >> +	 * Done outside of the IRQ disabled fastpath loop.
> >> +	 */
> >> +	if (i != 0)
> >> +		slab_post_alloc_hook(s, objcg, flags, size, p,
> >> +				slab_want_init_on_alloc(flags, s));
> >
> > This patch looks mostly good, but I'm wondering what happens if someone calls it with size == 0,
> > so that it does not call slab_post_alloc_hook()?
>
> Hmm, looks like in that case we miss doing obj_cgroup_put(objcg) in
> memcg_slab_post_alloc_hook(). Good catch, thanks!
>
> Fixing up, and also fixing a "return false" from an int function
> (which also existed before this patch).
>
> ----8<----
> diff --git a/mm/slub.c b/mm/slub.c
> index b74c4664160e..88f0ce49caab 100644
> --- a/mm/slub.c
> +++ b/mm/slub.c
> @@ -3888,10 +3888,13 @@ int kmem_cache_alloc_bulk(struct kmem_cache *s, gfp_t flags, size_t size,
>  	int i;
>  	struct obj_cgroup *objcg = NULL;
>  
> +	if (!size)
> +		return 0;
> +
>  	/* memcg and kmem_cache debug support */
>  	s = slab_pre_alloc_hook(s, NULL, &objcg, size, flags);
>  	if (unlikely(!s))
> -		return false;
> +		return 0;
>  
>  	i = __kmem_cache_alloc_bulk(s, flags, size, p, objcg);
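
The get/put pairing behind that fix can be pictured with a tiny, purely illustrative C program; the demo_* names are made up and this is not kernel code. The pre-hook takes an objcg reference that only the post-hook drops, so a size == 0 call that skips the post-hook would leak it, while returning early before the pre-hook keeps the pair balanced (and, being an int function, it returns 0 rather than false).

#include <assert.h>
#include <stdbool.h>
#include <stddef.h>

static int demo_objcg_refcount;		/* stands in for the objcg reference */

static bool demo_pre_alloc_hook(void)
{
	demo_objcg_refcount++;		/* the pre-hook takes a reference */
	return true;
}

static void demo_post_alloc_hook(void)
{
	demo_objcg_refcount--;		/* the post-hook drops it again */
}

static int demo_alloc_bulk(size_t size)
{
	int i;

	if (!size)			/* the added guard: no hooks run at all */
		return 0;

	if (!demo_pre_alloc_hook())
		return 0;		/* 0, not false: this is an int function */

	i = (int)size;			/* pretend we allocated 'size' objects */

	if (i != 0)			/* post-hook is skipped when nothing was allocated */
		demo_post_alloc_hook();

	return i;
}

int main(void)
{
	demo_alloc_bulk(0);		/* without the early return, this call would leak a reference */
	demo_alloc_bulk(8);
	assert(demo_objcg_refcount == 0);	/* the get/put pair stayed balanced */
	return 0;
}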

With that,

Reviewed-by: Hyeonggon Yoo <42.hyeyoo@gmail.com>

Thanks!

--
Thanks,
Hyeonggon
