Subject: [PATCH 4.19 11/79] xen-netfront: do not assume sk_buff_head list is empty in error handling
From: Dongli Zhang <dongli.zhang@oracle.com>

    [ Upstream commit 00b368502d18f790ab715e055869fd4bb7484a9b ]

    When skb_shinfo(skb) is not able to cache an extra fragment (that is,
    skb_shinfo(skb)->nr_frags >= MAX_SKB_FRAGS), xennet_fill_frags() assumes
    the sk_buff_head list is already empty. As a result, cons is increased by
    only 1 and control returns to the error handling path in xennet_poll().

    However, if the sk_buff_head list is not empty, queue->rx.rsp_cons may be
    set incorrectly. That is, queue->rx.rsp_cons would point to rx ring
    buffer entries whose queue->rx_skbs[i] and queue->grant_rx_ref[i] have
    already been cleared to NULL. This leads to a NULL pointer access in the
    next iteration that processes rx ring buffer entries.

    Below is how xennet_poll() does error handling. All remaining entries in
    tmpq are accounted to queue->rx.rsp_cons without assuming how many
    outstanding skbs remain in the list.

     985 static int xennet_poll(struct napi_struct *napi, int budget)
    ... ...
    1032                 if (unlikely(xennet_set_skb_gso(skb, gso))) {
    1033                         __skb_queue_head(&tmpq, skb);
    1034                         queue->rx.rsp_cons += skb_queue_len(&tmpq);
    1035                         goto err;
    1036                 }

    It is better to always perform the error handling in the same way.

    Fixes: ad4f15dc2c70 ("xen/netfront: don't bug in case of too many frags")
    Signed-off-by: Dongli Zhang <dongli.zhang@oracle.com>
    Signed-off-by: David S. Miller <davem@davemloft.net>
    Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
    ---
    drivers/net/xen-netfront.c | 2 +-
    1 file changed, 1 insertion(+), 1 deletion(-)

    --- a/drivers/net/xen-netfront.c
    +++ b/drivers/net/xen-netfront.c
    @@ -909,7 +909,7 @@ static RING_IDX xennet_fill_frags(struct
     			__pskb_pull_tail(skb, pull_to - skb_headlen(skb));
     		}
     		if (unlikely(skb_shinfo(skb)->nr_frags >= MAX_SKB_FRAGS)) {
    -			queue->rx.rsp_cons = ++cons;
    +			queue->rx.rsp_cons = ++cons + skb_queue_len(list);
     			kfree_skb(nskb);
     			return ~0U;
     		}
