From: Alexander Duyck <>
Date: Thu, 15 Jul 2021 07:25:16 -0700
Subject: Re: [PATCH 1/1 v2] skbuff: Fix a potential race while recycling page_pool packets
On Wed, Jul 14, 2021 at 9:02 PM Yunsheng Lin <linyunsheng@huawei.com> wrote:
>
> On 2021/7/9 14:29, Ilias Apalodimas wrote:
> > As Alexander points out, when we are trying to recycle a cloned/expanded
> > SKB we might trigger a race. The recycling code relies on the
> > pp_recycle bit to trigger, which we carry over to cloned SKBs.
> > If that cloned SKB gets expanded or if we get references to the frags,
> > call skb_release_data() and overwrite skb->head, we are creating separate
> > instances accessing the same page frags. Since skb_release_data()
> > will first try to recycle the frags, there's a potential race between
> > the original and cloned SKB, since both will have the pp_recycle bit set.
> >
> > Fix this by explicitly marking those SKBs as not recyclable.
> > The atomic_sub_return effectively limits us to a single release case,
> > and when we are calling skb_release_data we are also releasing the
> > option to perform the recycling, or releasing the pages from the page pool.
> >
> > Fixes: 6a5bcd84e886 ("page_pool: Allow drivers to hint on SKB recycling")
> > Reported-by: Alexander Duyck <alexanderduyck@fb.com>
> > Suggested-by: Alexander Duyck <alexanderduyck@fb.com>
> > Signed-off-by: Ilias Apalodimas <ilias.apalodimas@linaro.org>
> > ---
> > Changes since v1:
> > - Set the recycle bit to 0 during skb_release_data instead of the
> >   individual functions triggering the issue, in order to catch all
> >   cases
> >
> >  net/core/skbuff.c | 4 +++-
> >  1 file changed, 3 insertions(+), 1 deletion(-)
> >
> > diff --git a/net/core/skbuff.c b/net/core/skbuff.c
> > index 12aabcda6db2..f91f09a824be 100644
> > --- a/net/core/skbuff.c
> > +++ b/net/core/skbuff.c
> > @@ -663,7 +663,7 @@ static void skb_release_data(struct sk_buff *skb)
> >  	if (skb->cloned &&
> >  	    atomic_sub_return(skb->nohdr ? (1 << SKB_DATAREF_SHIFT) + 1 : 1,
> >  			      &shinfo->dataref))
> > -		return;
> > +		goto exit;
>
> Is it possible this patch may break the head frag page for the original skb,
> supposing its head frag page is from the page pool and the change below clears
> the pp_recycle for the original skb, causing a page leak in the page pool?
I don't see how. The assumption here is that when atomic_sub_return gets down to 0 we will still have an skb with skb->pp_recycle set, and it will flow down and encounter skb_free_head below. All we are doing is skipping those steps and clearing skb->pp_recycle for all but the last buffer; the last one to free it will trigger the recycling.
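To make that concrete, here is a minimal userspace C sketch of the accounting the patch produces. It is a simulation with simplified stand-in types, not the kernel code, and it leaves out the nohdr/SKB_DATAREF_SHIFT encoding: every caller clears its own pp_recycle on the way out, but only the release that drops dataref to zero reaches the free-and-maybe-recycle step.

#include <stdatomic.h>
#include <stdbool.h>
#include <stdio.h>

/* Simplified stand-ins for sk_buff / skb_shared_info fields. */
struct shared_info { atomic_int dataref; };
struct skb {
	bool cloned;
	bool pp_recycle;
	struct shared_info *shinfo;
};

/* Mirrors the patched skb_release_data() control flow: a cloned skb
 * whose decrement leaves dataref > 0 jumps straight to exit, so only
 * the final release reaches the free (and possible recycle) step.
 */
static void release_data(struct skb *skb)
{
	if (skb->cloned &&
	    atomic_fetch_sub(&skb->shinfo->dataref, 1) - 1 > 0)
		goto exit;

	/* the recycling decision happens here, flag still intact */
	printf("last reference: freeing head, pp_recycle=%d\n",
	       skb->pp_recycle);
exit:
	skb->pp_recycle = false;	/* cleared on every path */
}

int main(void)
{
	struct shared_info si = { .dataref = 2 };	/* original + clone */
	struct skb orig  = { true, true, &si };
	struct skb clone = { true, true, &si };

	release_data(&clone);	/* dataref 2 -> 1: skips the free */
	release_data(&orig);	/* dataref 1 -> 0: frees and recycles */
	return 0;
}

Running it, only the second release prints the "last reference" line, and it does so with pp_recycle still set to 1.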
> >
> >  	skb_zcopy_clear(skb, true);
> >
> > @@ -674,6 +674,8 @@ static void skb_release_data(struct sk_buff *skb)
> >  		kfree_skb_list(shinfo->frag_list);
> >
> >  	skb_free_head(skb);
> > +exit:
> > +	skb->pp_recycle = 0;
Note the path here. We don't clear skb->pp_recycle for the last buffer where "dataref == 0" until *AFTER* the head has been freed, and all clones will have skb->pp_recycle = 1 as long as they are a clone of the original skb that had it set.
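Extending the sketch above with a hypothetical clone_skb() (again simplified; skb_clone() in the kernel does considerably more): pp_recycle is a per-skb field while dataref lives in the shared info, which is why clearing the flag on one holder cannot hide it from the other.

/* Continuing the userspace model above: the clone shares shinfo
 * (so dataref is bumped) but carries its own copy of the per-skb
 * pp_recycle bit, so release_data() clearing the flag on one
 * holder never affects the other.
 */
static void clone_skb(struct skb *orig, struct skb *clone)
{
	*clone = *orig;		/* copies pp_recycle = 1 into the clone */
	atomic_fetch_add(&orig->shinfo->dataref, 1);
	orig->cloned = true;
	clone->cloned = true;
}

So the last holder to call release_data() always reaches the recycling decision with its own flag intact, matching the ordering described above.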