From: Jesper Dangaard Brouer <>
Subject: Re: [PATCH 1/1 v2] skbuff: Fix a potential race while recycling page_pool packets
Date: Mon, 12 Jul 2021 13:52:46 +0200
On 09/07/2021 16.34, Alexander Duyck wrote:
> On Thu, Jul 8, 2021 at 11:30 PM Ilias Apalodimas
> <ilias.apalodimas@linaro.org> wrote:
>>
>> As Alexander points out, when we are trying to recycle a cloned/expanded
>> SKB we might trigger a race. The recycling code relies on the
>> pp_recycle bit to trigger, which we carry over to cloned SKBs.
>> If that cloned SKB gets expanded, or if we get references to the frags,
>> call skb_release_data(), and overwrite skb->head, we are creating separate
>> instances accessing the same page frags. Since skb_release_data()
>> will first try to recycle the frags, there's a potential race between
>> the original and cloned SKB, since both will have the pp_recycle bit set.
>>
>> Fix this by explicitly marking those SKBs as not recyclable.
>> The atomic_sub_return effectively limits us to a single release case,
>> and when we are calling skb_release_data we are also releasing the
>> option to perform the recycling, or releasing the pages from the page pool.
>>
>> Fixes: 6a5bcd84e886 ("page_pool: Allow drivers to hint on SKB recycling")
>> Reported-by: Alexander Duyck <alexanderduyck@fb.com>
>> Suggested-by: Alexander Duyck <alexanderduyck@fb.com>
>> Signed-off-by: Ilias Apalodimas <ilias.apalodimas@linaro.org>
>> ---
>> Changes since v1:
>> - Set the recycle bit to 0 during skb_release_data instead of the
>>   individual functions triggering the issue, in order to catch all
>>   cases
>>
>>  net/core/skbuff.c | 4 +++-
>>  1 file changed, 3 insertions(+), 1 deletion(-)
>>
>> diff --git a/net/core/skbuff.c b/net/core/skbuff.c
>> index 12aabcda6db2..f91f09a824be 100644
>> --- a/net/core/skbuff.c
>> +++ b/net/core/skbuff.c
>> @@ -663,7 +663,7 @@ static void skb_release_data(struct sk_buff *skb)
>>         if (skb->cloned &&
>>             atomic_sub_return(skb->nohdr ? (1 << SKB_DATAREF_SHIFT) + 1 : 1,
>>                               &shinfo->dataref))
>> -               return;
>> +               goto exit;
>>
>>         skb_zcopy_clear(skb, true);
>>
>> @@ -674,6 +674,8 @@ static void skb_release_data(struct sk_buff *skb)
>>                 kfree_skb_list(shinfo->frag_list);
>>
>>         skb_free_head(skb);
>> +exit:
>> +       skb->pp_recycle = 0;
>>  }
>>
>> --
>> 2.32.0.rc0
>>
>
> This is probably the cleanest approach with the least amount of
> change, but one thing I am concerned with in this approach is that we
> end up having to dirty a cacheline that I am not sure is otherwise
> touched during skb cleanup. I am not sure if that will be an issue or
> not. If it is then an alternative or follow-on patch could move the
> pp_recycle flag into the skb_shared_info flags itself and then make
> certain that we clear it around the same time we are setting
> shinfo->dataref to 1.
>
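To spell the race out, here is a simplified sketch of the sequence the
patch guards against (alloc_pp_skb() is a made-up placeholder for a
driver RX path that builds a page_pool-backed SKB; the rest are the
usual core helpers):

/* Hypothetical illustration of the race, not kernel code as-is. */
struct sk_buff *orig  = alloc_pp_skb();              /* orig->pp_recycle == 1  */
struct sk_buff *clone = skb_clone(orig, GFP_ATOMIC); /* clone->pp_recycle == 1 */

/* Expanding the clone gives it its own head and shared-info, but the
 * new shared-info still references the same page_pool frag pages.
 */
pskb_expand_head(clone, 0, 0, GFP_ATOMIC);

/* Both SKBs now point at the same frags with pp_recycle set. Before
 * this patch, freeing them from two contexts let both release paths
 * try to hand the shared pages back to the page_pool.
 */
kfree_skb(orig);   /* e.g. on CPU 0 */
kfree_skb(clone);  /* e.g. on CPU 1 */

With the fix, skb_release_data() clears pp_recycle even on the
early-return (cloned) path, so an SKB that gives up its data reference
also gives up the option to recycle those pages.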
The skb->cloned and skb->pp_recycle bitfields are on the same cache
line (which also holds nohdr, destructor, and active_extensions). Thus
we know this line must already be in the CPU's cache, regardless of
this change. I do acknowledge that it might be in the cache-coherency
"Shared" state, and that writing skb->pp_recycle = 0 *might* force the
CPU to change the coherency state, but I don't expect this to be a
performance problem.
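For reference, the bitfield cluster in question looks roughly like this
(paraphrased from include/linux/skbuff.h around v5.13; the exact
neighbouring bits can differ between kernel versions):

/* Paraphrased excerpt of struct sk_buff: cloned and pp_recycle share
 * a byte, so the cache line that skb_release_data() already reads via
 * skb->cloned is the same line the patch now writes.
 */
__u8			cloned:1,
			nohdr:1,
			fclone:2,
			peeked:1,
			head_frag:1,
			pfmemalloc:1,
			pp_recycle:1;	/* page_pool recycle indicator */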
> Otherwise this looks good to me.
>
> Reviewed-by: Alexander Duyck <alexanderduyck@fb.com>

Acked-by: Jesper Dangaard Brouer <brouer@redhat.com>
I've gone over the code path with Ilias on IRC and convinced myself
that this fix is correct, thus ACK.