    Date: Fri, 9 Jul 2021
    From: Ilias Apalodimas <ilias.apalodimas@linaro.org>
    Subject: Re: [PATCH 1/1 v2] skbuff: Fix a potential race while recycling page_pool packets

    On Fri, Jul 09, 2021 at 07:34:38AM -0700, Alexander Duyck wrote:
    > On Thu, Jul 8, 2021 at 11:30 PM Ilias Apalodimas
    > <ilias.apalodimas@linaro.org> wrote:
    > >
    > > As Alexander points out, when we are trying to recycle a cloned/expanded
    > > SKB we might trigger a race. The recycling code relies on the
    > > pp_recycle bit to trigger, which we carry over to cloned SKBs.
    > > If that cloned SKB then gets expanded (i.e. we take references to the
    > > frags, call skb_release_data() and overwrite skb->head), we end up with
    > > separate instances accessing the same page frags. Since skb_release_data()
    > > will first try to recycle the frags, there is a potential race between
    > > the original and the cloned SKB, since both have the pp_recycle bit set.
    > >
    > > Fix this by explicitly marking those SKBs as not recyclable.
    > > The atomic_sub_return() effectively limits us to a single release case,
    > > and when we are calling skb_release_data() we are also releasing the
    > > option to perform the recycling, or releasing the pages from the page pool.
    > >
    > > Fixes: 6a5bcd84e886 ("page_pool: Allow drivers to hint on SKB recycling")
    > > Reported-by: Alexander Duyck <alexanderduyck@fb.com>
    > > Suggested-by: Alexander Duyck <alexanderduyck@fb.com>
    > > Signed-off-by: Ilias Apalodimas <ilias.apalodimas@linaro.org>
    > > ---
    > > Changes since v1:
    > > - Set the recycle bit to 0 during skb_release_data instead of the
    > > individual functions triggering the issue, in order to catch all
    > > cases
    > > net/core/skbuff.c | 4 +++-
    > > 1 file changed, 3 insertions(+), 1 deletion(-)
    > >
    > > diff --git a/net/core/skbuff.c b/net/core/skbuff.c
    > > index 12aabcda6db2..f91f09a824be 100644
    > > --- a/net/core/skbuff.c
    > > +++ b/net/core/skbuff.c
    > > @@ -663,7 +663,7 @@ static void skb_release_data(struct sk_buff *skb)
    > >  	if (skb->cloned &&
    > >  	    atomic_sub_return(skb->nohdr ? (1 << SKB_DATAREF_SHIFT) + 1 : 1,
    > >  			      &shinfo->dataref))
    > > -		return;
    > > +		goto exit;
    > >
    > >  	skb_zcopy_clear(skb, true);
    > >
    > > @@ -674,6 +674,8 @@ static void skb_release_data(struct sk_buff *skb)
    > >  		kfree_skb_list(shinfo->frag_list);
    > >
    > >  	skb_free_head(skb);
    > > +exit:
    > > +	skb->pp_recycle = 0;
    > >  }
    > >
    > >  /*
    > > --
    > > 2.32.0.rc0
    > >
    >
    > This is probably the cleanest approach with the least amount of
    > change, but one thing I am concerned about is that we end up having
    > to dirty a cacheline that I am not sure is otherwise touched during
    > skb cleanup. I am not sure if that will be an issue or not. If it
    > is, then an alternative or follow-on patch could move the pp_recycle
    > flag into the skb_shared_info flags itself and then make certain
    > that we clear it around the same time we are setting
    > shinfo->dataref to 1.
    >

    Yep that's a viable alternative. Let's see if there's any measurable
    impact.
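
    For anyone skimming the archive, a rough sketch of what that alternative
    could look like is below. The flag name and helper are hypothetical,
    purely for illustration and not from any posted patch; the point is only
    that the recycle hint would sit next to dataref in skb_shared_info and be
    cleared wherever dataref is (re)initialised to 1, so skb cleanup would not
    have to dirty an extra cacheline:

    /* Illustrative sketch only -- SKBFL_PP_RECYCLE and the helper name are
     * made up, not part of any posted patch.
     */
    #define SKBFL_PP_RECYCLE	BIT(7)	/* an assumed free bit in shinfo->flags */

    static inline void skb_shinfo_reset_dataref(struct skb_shared_info *shinfo)
    {
    	/* Clearing the hint here piggybacks on the write that already
    	 * dirties the shinfo cacheline when dataref is set to 1.
    	 */
    	shinfo->flags &= ~SKBFL_PP_RECYCLE;
    	atomic_set(&shinfo->dataref, 1);
    }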

    > Otherwise this looks good to me.
    >
    > Reviewed-by: Alexander Duyck <alexanderduyck@fb.com>

    Thanks Alexander!
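
    For readers less familiar with the recycling path, the sequence the patch
    guards against is roughly the following (a simplified illustration only;
    "skb" is assumed to be an SKB whose frags came from a page_pool-aware
    driver, and error handling is omitted):

    /* Both the original SKB and its clone carry pp_recycle = 1. */
    struct sk_buff *clone = skb_clone(skb, GFP_ATOMIC);

    /* Expanding the clone gives it its own head and shinfo, but it still
     * references the same page_pool-backed frag pages as the original.
     */
    pskb_expand_head(clone, 0, 0, GFP_ATOMIC);

    /* Without the fix, both frees reach skb_release_data() with pp_recycle
     * still set, so both may try to return the same frag pages to the
     * page_pool -- the race this patch closes by clearing pp_recycle.
     */
    kfree_skb(skb);
    kfree_skb(clone);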
