From: Jakub Kicinski <kuba@kernel.org>
Date: Thu, 22 Sep 2022
Subject: Re: [PATCH 02/12] skbuff: Proactively round up to kmalloc bucket size

On Wed, 21 Sep 2022 20:10:03 -0700 Kees Cook wrote:
> diff --git a/net/core/skbuff.c b/net/core/skbuff.c
> index 974bbbbe7138..4fe4c7544c1d 100644
> --- a/net/core/skbuff.c
> +++ b/net/core/skbuff.c
> @@ -427,14 +427,15 @@ struct sk_buff *__alloc_skb(unsigned int size, gfp_t gfp_mask,
>  	 */
>  	size = SKB_DATA_ALIGN(size);
>  	size += SKB_DATA_ALIGN(sizeof(struct skb_shared_info));
> -	data = kmalloc_reserve(size, gfp_mask, node, &pfmemalloc);
> -	if (unlikely(!data))
> -		goto nodata;
> -	/* kmalloc(size) might give us more room than requested.
> +	/* kmalloc(size) might give us more room than requested, so
> +	 * allocate the true bucket size up front.
>  	 * Put skb_shared_info exactly at the end of allocated zone,
>  	 * to allow max possible filling before reallocation.
>  	 */
> -	osize = ksize(data);
> +	osize = kmalloc_size_roundup(size);
> +	data = kmalloc_reserve(osize, gfp_mask, node, &pfmemalloc);
> +	if (unlikely(!data))
> +		goto nodata;
>  	size = SKB_WITH_OVERHEAD(osize);
>  	prefetchw(data + size);
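
For anyone who hasn't seen the new helper yet: rather than allocating
size bytes and then asking ksize() how much slack the slab really
granted, the patch asks kmalloc_size_roundup() for the bucket size
first and allocates that amount explicitly. A minimal sketch of the
same pattern as a standalone helper (alloc_rounded() and the
GFP_KERNEL flag are illustrative here, not part of the patch):

	#include <linux/slab.h>

	/* Hypothetical helper, for illustration only: allocate the full
	 * kmalloc bucket for a request of "size" bytes and report the
	 * usable size back to the caller.
	 */
	static void *alloc_rounded(size_t size, size_t *alloc_size)
	{
		void *buf;

		/* kmalloc_size_roundup() returns the size of the bucket
		 * that a kmalloc(size) request would actually land in.
		 */
		*alloc_size = kmalloc_size_roundup(size);
		buf = kmalloc(*alloc_size, GFP_KERNEL);
		if (unlikely(!buf))
			return NULL;

		/* All *alloc_size bytes are usable; no ksize() needed. */
		return buf;
	}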

I'd rename osize here to alloc_size for consistency, but one could
argue it either way :)

Acked-by: Jakub Kicinski <kuba@kernel.org>
