From: Vlastimil Babka <vbabka@suse.cz>
Date: Mon, 24 Oct 2022
Subject: Re: [PATCH v4] skbuff: Proactively round up to kmalloc bucket size
On 10/22/22 01:49, Kees Cook wrote:
> Instead of discovering the kmalloc bucket size _after_ allocation, round
> up proactively so the allocation is explicitly made for the full size,
> allowing the compiler to correctly reason about the resulting size of
> the buffer through the existing __alloc_size() hint.
>
> This will allow kernels built with CONFIG_UBSAN_BOUNDS or the
> coming dynamic bounds checking under CONFIG_FORTIFY_SOURCE to gain
> back the __alloc_size() hints that were temporarily reverted in commit
> 93dd04ab0b2b ("slab: remove __alloc_size attribute from __kmalloc_track_caller").
>
> Cc: "David S. Miller" <davem@davemloft.net>
> Cc: Eric Dumazet <edumazet@google.com>
> Cc: Jakub Kicinski <kuba@kernel.org>
> Cc: Paolo Abeni <pabeni@redhat.com>
> Cc: netdev@vger.kernel.org
> Cc: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
> Cc: Nick Desaulniers <ndesaulniers@google.com>
> Cc: David Rientjes <rientjes@google.com>
> Cc: Vlastimil Babka <vbabka@suse.cz>
> Signed-off-by: Kees Cook <keescook@chromium.org>

Acked-by: Vlastimil Babka <vbabka@suse.cz>

Nit below:

> ---
> v4: use kmalloc_size_roundup() in callers, not kmalloc_reserve()
> v3: https://lore.kernel.org/lkml/20221018093005.give.246-kees@kernel.org
> v2: https://lore.kernel.org/lkml/20220923202822.2667581-4-keescook@chromium.org
> ---
> net/core/skbuff.c | 50 +++++++++++++++++++++++------------------------
> 1 file changed, 25 insertions(+), 25 deletions(-)
>
> diff --git a/net/core/skbuff.c b/net/core/skbuff.c
> index 651a82d30b09..77af430296e2 100644
> --- a/net/core/skbuff.c
> +++ b/net/core/skbuff.c
> @@ -508,14 +508,14 @@ struct sk_buff *__alloc_skb(unsigned int size, gfp_t gfp_mask,
> */
> size = SKB_DATA_ALIGN(size);
> size += SKB_DATA_ALIGN(sizeof(struct skb_shared_info));
> - data = kmalloc_reserve(size, gfp_mask, node, &pfmemalloc);
> + osize = kmalloc_size_roundup(size);
> + data = kmalloc_reserve(osize, gfp_mask, node, &pfmemalloc);
> if (unlikely(!data))
> goto nodata;
> /* kmalloc(size) might give us more room than requested.

The comment above should now say kmalloc_size_roundup(size), or maybe it
could be deleted completely now?
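
If it stays, maybe something like this would do (just a suggested
wording, not taken from the patch):

	/* kmalloc_size_roundup(size) might give us more room than requested. */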

> * Put skb_shared_info exactly at the end of allocated zone,
> * to allow max possible filling before reallocation.
> */
> - osize = ksize(data);
> size = SKB_WITH_OVERHEAD(osize);
> prefetchw(data + size);
>
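
For anyone skimming the thread, the pattern the patch moves to is roughly
the following. This is a minimal sketch of the approach, not code from the
patch itself; the helper name alloc_rounded and its signature are made up
for illustration:

	#include <linux/slab.h>

	/*
	 * Round up to the kmalloc bucket size *before* allocating, so the
	 * allocation is explicitly made for the full usable size and the
	 * compiler can reason about it through __alloc_size(), instead of
	 * discovering the real size afterwards with ksize().
	 */
	static void *alloc_rounded(size_t size, gfp_t gfp, size_t *usable)
	{
		size_t osize = kmalloc_size_roundup(size);
		void *data = kmalloc(osize, gfp);

		if (data)
			*usable = osize;	/* full bucket size, known up front */
		return data;
	}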
