Subject: Re: [PATCH v1] mm/slub: enable debugging memory wasting of kmalloc
On 7/13/22 09:36, Feng Tang wrote:
> Hi Vlastimil,
>
> On Mon, Jul 11, 2022 at 10:15:21AM +0200, Vlastimil Babka wrote:
>> On 7/1/22 15:59, Feng Tang wrote:
>> > kmalloc's API family is critical for mm, with one shortcoming that
>> > its object size is fixed to be a power of 2. When a user requests memory
>> > for '2^n + 1' bytes, 2^(n+1) bytes will actually be allocated, so in
>> > the worst case around 50% of the memory space is wasted.
>> >
>> > We've met a kernel boot OOM panic (v5.10), and from the dumped slab info:
>> >
>> > [ 26.062145] kmalloc-2k 814056KB 814056KB
>> >
>> > From debugging we found a huge number of 'struct iova_magazine' objects,
>> > whose size is 1032 bytes (1024 + 8), so each allocation wastes 1016
>> > bytes. Though the issue was solved by providing the right (bigger) amount
>> > of RAM, it would still be nice to optimize the size (either use a
>> > kmalloc-friendly size or create a dedicated slab for it).
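As an aside, the rounding is easy to demonstrate with ksize(). A minimal,
untested module sketch (waste_demo_init() is made up just for illustration):

#include <linux/module.h>
#include <linux/slab.h>

static int __init waste_demo_init(void)
{
	/* a 1032-byte request is served from kmalloc-2048 */
	void *p = kmalloc(1032, GFP_KERNEL);

	if (!p)
		return -ENOMEM;

	/* prints a usable size of 2048, i.e. ~1016 bytes unused */
	pr_info("requested 1032 bytes, ksize() reports %zu\n", ksize(p));
	kfree(p);
	return 0;
}
module_init(waste_demo_init);
MODULE_LICENSE("GPL");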
> [...]
>>
>> Hi and thanks.
>> I would suggest some improvements to consider:
>>
>> - don't use the struct track to store orig_size, although it's an obvious
>> first choice. It's unused waste for the free_track, and also for any
>> non-kmalloc caches. I'd carve out an extra int next to the struct tracks,
>> only for kmalloc caches (probably a new kmem_cache flag set on creation
>> will be needed to easily distinguish them).
>> Besides the saved space, you can then set the field from ___slab_alloc()
>> directly and avoid having to pass orig_size to alloc_debug_processing()
>> etc. as well.
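To make that more concrete, this is roughly the kind of helper I had in mind
(untested sketch; slub_debug_orig_size() is a made-up name for a check of
slub_debug plus the new kmalloc flag):

static inline void set_orig_size(struct kmem_cache *s, void *object,
				 unsigned int orig_size)
{
	void *p = kasan_reset_tag(object);

	if (!slub_debug_orig_size(s))
		return;

	/* the extra int lives right behind the two struct tracks */
	p += get_info_end(s);
	p += sizeof(struct track) * 2;

	*(unsigned int *)p = orig_size;
}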
>
> Here is a draft patch following your suggestion; please check whether I
> missed anything. (A quick test showed it achieves a similar effect to the
> v1 patch.) Thanks!

Thanks, overall it looks good at first glance!

> ---
> diff --git a/include/linux/slab.h b/include/linux/slab.h
> index 0fefdf528e0d..d3dacb0f013f 100644
> --- a/include/linux/slab.h
> +++ b/include/linux/slab.h
> @@ -29,6 +29,8 @@
> #define SLAB_RED_ZONE ((slab_flags_t __force)0x00000400U)
> /* DEBUG: Poison objects */
> #define SLAB_POISON ((slab_flags_t __force)0x00000800U)
> +/* Indicate a slab of kmalloc */

"Indicate a kmalloc cache" would be more precise.

> +#define SLAB_KMALLOC ((slab_flags_t __force)0x00001000U)
> /* Align objs on cache lines */
> #define SLAB_HWCACHE_ALIGN ((slab_flags_t __force)0x00002000U)
> /* Use GFP_DMA memory */
> diff --git a/mm/slub.c b/mm/slub.c
> index 26b00951aad1..3b0f80927817 100644

<snip>
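BTW, tagging the kmalloc caches at creation time should be all that's needed
for the new flag. Roughly, in mm/slab_common.c (abbreviated, untested sketch,
not meant as the exact hunk):

static struct kmem_cache *__init create_kmalloc_cache(const char *name,
		unsigned int size, slab_flags_t flags,
		unsigned int useroffset, unsigned int usersize)
{
	struct kmem_cache *s = kmem_cache_zalloc(kmem_cache, GFP_NOWAIT);

	if (!s)
		panic("Out of memory when creating slab %s\n", name);

	/* OR in the new flag so kmalloc-* caches can be told apart later */
	create_boot_cache(s, name, size, flags | SLAB_KMALLOC,
			  useroffset, usersize);
	list_add(&s->list, &slab_caches);
	s->refcount = 1;
	return s;
}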

>
>> - the knowledge of the actual size could be used to improve the poisoning
>> checks as well, to detect cases where there's a buffer overrun beyond
>> orig_size but not beyond the cache's object size. e.g. if you kmalloc(48)
>> and overrun up to 64 we won't detect it now, but with orig_size stored we
>> could?
>
> The above patch doesn't touch this. And I have a question: for the
> [orig_size, object_size) area, shall we fill it with POISON_XXX no matter
> whether the REDZONE flag is set or not?

Ah, it looks like we use redzoning, not poisoning, for the padding from
s->object_size up to the word boundary. So it would be more consistent to use
the redzone pattern (RED_ACTIVE) and check against the dynamic orig_size.
Probably no change is needed for the RED_INACTIVE handling, though.
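Something along these lines in check_object(), after the existing Redzone
checks, is what I'm thinking of (untested sketch; get_orig_size() would be
the counterpart of the helper that stores the size, and "kmalloc Redzone" is
just a suggested report string):

	/* for allocated kmalloc objects, also verify the area between the
	 * originally requested size and s->object_size against the
	 * SLUB_RED_ACTIVE pattern */
	if ((s->flags & SLAB_KMALLOC) && val == SLUB_RED_ACTIVE) {
		unsigned int orig_size = get_orig_size(s, object);

		if (orig_size < s->object_size &&
		    !check_bytes_and_report(s, slab, object, "kmalloc Redzone",
					    p + orig_size, val,
					    s->object_size - orig_size))
			return 0;
	}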

> Thanks,
> Feng
>
>> Thanks!
>> Vlastimil
