Subject: Re: [PATCH 1/1 v2] skbuff: Fix a potential race while recycling page_pool packets
> > >           atomic_sub_return(skb->nohdr ? (1 << SKB_DATAREF_SHIFT) + 1 : 1,

[...]

> > > &shinfo->dataref))
> > > - return;
> > > + goto exit;
> >
> > Is it possible this patch may break the head frag page for the original skb,
> > supposing its head frag page is from the page pool and the change below clears
> > the pp_recycle for the original skb, causing a page leak for the page pool?
>
> I don't see how. The assumption here is that when atomic_sub_return
> gets down to 0 we will still have an skb with skb->pp_recycle set, and
> it will flow down and encounter skb_free_head below. All we are doing
> is skipping those steps and clearing skb->pp_recycle for all but the
> last buffer; the last one to be freed will trigger the recycling.
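
For reference, here is the patched flow, condensed from the hunks quoted in
this thread plus the surrounding net/core/skbuff.c code around v5.14-rc (the
untouched lines are filled in from that tree and may differ slightly by
version):

static void skb_release_data(struct sk_buff *skb)
{
	struct skb_shared_info *shinfo = skb_shinfo(skb);
	int i;

	if (skb->cloned &&
	    atomic_sub_return(skb->nohdr ? (1 << SKB_DATAREF_SHIFT) + 1 : 1,
			      &shinfo->dataref))
		goto exit;		/* another clone still holds the data */

	skb_zcopy_clear(skb, true);

	for (i = 0; i < shinfo->nr_frags; i++)
		__skb_frag_unref(&shinfo->frags[i], skb->pp_recycle);

	if (shinfo->frag_list)
		kfree_skb_list(shinfo->frag_list);

	skb_free_head(skb);		/* may hand the head back to the page_pool */
exit:
	skb->pp_recycle = 0;		/* cleared whether or not we freed anything */
}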

I think the assumption here is that
1. We clone an skb
2. The original skb goes into pskb_expand_head()
3. skb_release_data() will be called for the original skb

With the dataref bumped we'll skip the recycling for it, but we'll also
skip recycling or unmapping the current head (which is a page_pool-mapped
buffer).
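
That sequence, condensed from the cloned-skb path of pskb_expand_head() in
net/core/skbuff.c around v5.14-rc (error handling, offset fixups and the
uncloned path elided), looks roughly like this:

int pskb_expand_head(struct sk_buff *skb, int nhead, int ntail, gfp_t gfp_mask)
{
	u8 *data;
	int i;

	/* allocate a new, kmalloc'ed head and copy the old data over */
	data = kmalloc_reserve(/* ... */);
	memcpy(data + nhead, skb->head, skb_tail_pointer(skb) - skb->head);
	/* ... */

	if (skb_cloned(skb)) {
		/* grab extra references on the frags for the relocated shinfo */
		for (i = 0; i < skb_shinfo(skb)->nr_frags; i++)
			skb_frag_ref(skb, i);
		if (skb_has_frag_list(skb))
			skb_clone_fraglist(skb);

		/* drop this skb's reference on the old shared data; with the
		 * clone still holding a dataref, the patched function above
		 * takes the early exit and only clears skb->pp_recycle */
		skb_release_data(skb);
	} else {
		skb_free_head(skb);
	}

	skb->head      = data;
	skb->head_frag = 0;	/* the new head is plain kmalloc memory */
	/* ... */
	return 0;
}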

>
> > >
> > > skb_zcopy_clear(skb, true);
> > >
> > > @@ -674,6 +674,8 @@ static void skb_release_data(struct sk_buff *skb)
> > > kfree_skb_list(shinfo->frag_list);
> > >
> > > skb_free_head(skb);
> > > +exit:
> > > + skb->pp_recycle = 0;
>
> Note the path here. We don't clear skb->pp_recycle for the last buffer
> where "dataref == 0" until *AFTER* the head has been freed, and all
> clones will have skb->pp_recycle = 1 as long as they are a clone of
> the original skb that had it set.
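
(For completeness: skb_free_head() consults skb->pp_recycle through the
skb_pp_recycle() helper before deciding what to do with a page-frag head,
which is why the flag has to survive until the last dataref holder actually
frees the head. Roughly, from the same ~v5.14-rc tree:)

static void skb_free_head(struct sk_buff *skb)
{
	unsigned char *head = skb->head;

	if (skb->head_frag) {
		/* only heads still marked pp_recycle go back to their pool */
		if (skb_pp_recycle(skb, head))
			return;
		skb_free_frag(head);
	} else {
		kfree(head);
	}
}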
