Subject: [ 021/171 ] sfc: Properly sync RX DMA buffer when it is not the last in the page
3.6.11.2 stable review patch.
If anyone has any objections, please let me know.

------------------

From: Ben Hutchings <bhutchings@solarflare.com>

[ Upstream commit 3a68f19d7afb80f548d016effbc6ed52643a8085 ]

We may currently allocate two RX DMA buffers to a page, and only unmap
the page when the second is completed. We do not sync the first RX
buffer to be completed; this can result in packet loss or corruption
if the last RX buffer completed in a NAPI poll is the first in a page
and is not DMA-coherent. (In the middle of a NAPI poll, we will
handle the following RX completion and unmap the page *before* looking
at the content of the first buffer.)
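
The underlying rule is the streaming DMA API's ownership model: after
dma_map_page(), the device owns the buffer, and the CPU may only read
it again after dma_unmap_page() or dma_sync_single_for_cpu(). Below is
a minimal sketch of the pattern the fix applies (illustrative only,
not the sfc code; the function and parameter names are hypothetical):

#include <linux/dma-mapping.h>

static void rx_buf_complete(struct device *dev, dma_addr_t buf_addr,
                            unsigned int used_len, bool last_in_page,
                            dma_addr_t page_addr, size_t page_size)
{
        if (last_in_page) {
                /* Unmapping the page returns the whole mapping,
                 * including this buffer, to the CPU.
                 */
                dma_unmap_page(dev, page_addr, page_size,
                               DMA_FROM_DEVICE);
        } else if (used_len) {
                /* The page stays mapped for its other buffer, so
                 * sync only the bytes the NIC wrote into this one.
                 */
                dma_sync_single_for_cpu(dev, buf_addr, used_len,
                                        DMA_FROM_DEVICE);
        }
        /* Only now is it safe to read the packet contents. */
}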

Signed-off-by: Ben Hutchings <bhutchings@solarflare.com>
Signed-off-by: Steven Rostedt <rostedt@goodmis.org>
---
 drivers/net/ethernet/sfc/rx.c | 15 ++++++++++-----
 1 file changed, 10 insertions(+), 5 deletions(-)
diff --git a/drivers/net/ethernet/sfc/rx.c b/drivers/net/ethernet/sfc/rx.c
index 719319b..16ef366 100644
--- a/drivers/net/ethernet/sfc/rx.c
+++ b/drivers/net/ethernet/sfc/rx.c
@@ -240,7 +240,8 @@ static int efx_init_rx_buffers_page(struct efx_rx_queue *rx_queue)
 }
 
 static void efx_unmap_rx_buffer(struct efx_nic *efx,
-                                struct efx_rx_buffer *rx_buf)
+                                struct efx_rx_buffer *rx_buf,
+                                unsigned int used_len)
 {
         if ((rx_buf->flags & EFX_RX_BUF_PAGE) && rx_buf->u.page) {
                 struct efx_rx_page_state *state;
@@ -251,6 +252,10 @@ static void efx_unmap_rx_buffer(struct efx_nic *efx,
                                        state->dma_addr,
                                        efx_rx_buf_size(efx),
                                        DMA_FROM_DEVICE);
+                } else if (used_len) {
+                        dma_sync_single_for_cpu(&efx->pci_dev->dev,
+                                                rx_buf->dma_addr, used_len,
+                                                DMA_FROM_DEVICE);
                 }
         } else if (!(rx_buf->flags & EFX_RX_BUF_PAGE) && rx_buf->u.skb) {
                 dma_unmap_single(&efx->pci_dev->dev, rx_buf->dma_addr,
@@ -273,7 +278,7 @@ static void efx_free_rx_buffer(struct efx_nic *efx,
 static void efx_fini_rx_buffer(struct efx_rx_queue *rx_queue,
                                struct efx_rx_buffer *rx_buf)
 {
-        efx_unmap_rx_buffer(rx_queue->efx, rx_buf);
+        efx_unmap_rx_buffer(rx_queue->efx, rx_buf, 0);
         efx_free_rx_buffer(rx_queue->efx, rx_buf);
 }
 
@@ -539,10 +544,10 @@ void efx_rx_packet(struct efx_rx_queue *rx_queue, unsigned int index,
                 goto out;
         }
 
-        /* Release card resources - assumes all RX buffers consumed in-order
-         * per RX queue
+        /* Release and/or sync DMA mapping - assumes all RX buffers
+         * consumed in-order per RX queue
          */
-        efx_unmap_rx_buffer(efx, rx_buf);
+        efx_unmap_rx_buffer(efx, rx_buf, len);
 
         /* Prefetch nice and early so data will (hopefully) be in cache by
          * the time we look at it.
-- 
1.7.10.4


