Subject: [PATCH 6.0 053/862] net: thunderbolt: Enable DMA paths only after rings are enabled
    From: Mika Westerberg <mika.westerberg@linux.intel.com>

    commit ff7cd07f306406493f7b78890475e85b6d0811ed upstream.

    If the other host starts sending packets early, it is possible that we
    are still in the middle of populating the initial Rx ring with packets.
    This causes tbnet_poll() to mess up the queue and leads to list
    corruption. This happens specifically when connected with macOS, as it
    seems to start sending various IP discovery packets as soon as its side
    of the paths is configured.

    To prevent this we move the DMA path enabling to happen after we have
    primed the Rx ring. This makes sure no incoming packets can arrive
    before we are ready to handle them.

    Fixes: e69b6c02b4c3 ("net: Add support for networking over Thunderbolt cable")
    Cc: stable@vger.kernel.org
    Signed-off-by: Mika Westerberg <mika.westerberg@linux.intel.com>
    Signed-off-by: David S. Miller <davem@davemloft.net>
    Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
    ---
    drivers/net/thunderbolt.c | 28 +++++++++++++++++-----------
    1 file changed, 17 insertions(+), 11 deletions(-)

    --- a/drivers/net/thunderbolt.c
    +++ b/drivers/net/thunderbolt.c
    @@ -612,18 +612,13 @@ static void tbnet_connected_work(struct
     		return;
     	}
     
    -	/* Both logins successful so enable the high-speed DMA paths and
    -	 * start the network device queue.
    +	/* Both logins successful so enable the rings, high-speed DMA
    +	 * paths and start the network device queue.
    +	 *
    +	 * Note we enable the DMA paths last to make sure we have primed
    +	 * the Rx ring before any incoming packets are allowed to
    +	 * arrive.
     	 */
    -	ret = tb_xdomain_enable_paths(net->xd, net->local_transmit_path,
    -				      net->rx_ring.ring->hop,
    -				      net->remote_transmit_path,
    -				      net->tx_ring.ring->hop);
    -	if (ret) {
    -		netdev_err(net->dev, "failed to enable DMA paths\n");
    -		return;
    -	}
    -
     	tb_ring_start(net->tx_ring.ring);
     	tb_ring_start(net->rx_ring.ring);
     
    @@ -635,10 +630,21 @@ static void tbnet_connected_work(struct
     	if (ret)
     		goto err_free_rx_buffers;
     
    +	ret = tb_xdomain_enable_paths(net->xd, net->local_transmit_path,
    +				      net->rx_ring.ring->hop,
    +				      net->remote_transmit_path,
    +				      net->tx_ring.ring->hop);
    +	if (ret) {
    +		netdev_err(net->dev, "failed to enable DMA paths\n");
    +		goto err_free_tx_buffers;
    +	}
    +
     	netif_carrier_on(net->dev);
     	netif_start_queue(net->dev);
     	return;
     
    +err_free_tx_buffers:
    +	tbnet_free_buffers(&net->tx_ring);
     err_free_rx_buffers:
     	tbnet_free_buffers(&net->rx_ring);
     err_stop_rings:

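    [Editor's note] For reference, a condensed sketch of the ordering that
    tbnet_connected_work() ends up with after this patch, with error unwinding
    trimmed. The Rx buffer priming step sits between the two hunks and is not
    visible in the diff context, so tbnet_prime_rx_buffers() below is a
    placeholder name rather than the driver's actual helper; everything else
    uses identifiers taken from the diff.

    	/* 1. Start the rings so descriptors can be queued. */
    	tb_ring_start(net->tx_ring.ring);
    	tb_ring_start(net->rx_ring.ring);

    	/* 2. Prime the Rx ring with receive buffers (placeholder name). */
    	ret = tbnet_prime_rx_buffers(net);
    	if (ret)
    		goto err_free_rx_buffers;

    	/*
    	 * 3. Only now open the DMA paths, so the remote host cannot
    	 *    deliver a frame into an unpopulated Rx ring.
    	 */
    	ret = tb_xdomain_enable_paths(net->xd, net->local_transmit_path,
    				      net->rx_ring.ring->hop,
    				      net->remote_transmit_path,
    				      net->tx_ring.ring->hop);
    	if (ret) {
    		netdev_err(net->dev, "failed to enable DMA paths\n");
    		goto err_free_tx_buffers;
    	}

    	/* 4. Finally let the stack transmit. */
    	netif_carrier_on(net->dev);
    	netif_start_queue(net->dev);

    The err_free_tx_buffers label added by the patch unwinds the Tx buffers
    before falling through to the existing Rx cleanup.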