Subject: RE: [PATCH v1 1/1] platform/mellanox: mlxbf-tmfifo: Fix a memory barrier issue


> -----Original Message-----
> From: Liming Sun <limings@nvidia.com>
> Sent: Friday, May 7, 2021 6:19 PM
> To: Andy Shevchenko <andy@infradead.org>; Darren Hart
> <dvhart@infradead.org>; Vadim Pasternak <vadimp@nvidia.com>
> Cc: Liming Sun <limings@nvidia.com>; linux-kernel@vger.kernel.org;
> platform-driver-x86@vger.kernel.org
> Subject: [PATCH v1 1/1] platform/mellanox: mlxbf-tmfifo: Fix a memory
> barrier issue
>
> The virtio framework uses wmb() when updating avail->idx. That guarantees
> the ordering of the writes, but not the ordering of the loads in the code
> that reads the ring. This commit adds a load barrier after reading
> avail->idx to make sure all the data in the descriptor is visible. It also
> adds a barrier before returning the packet to the virtio framework to make
> sure the reads and writes are visible to the virtio code.
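
As a side note for readers less familiar with the split-ring protocol, the
ordering described above can be illustrated with a small stand-alone analogue
that uses C11 acquire/release atomics in place of the kernel's
virtio_wmb()/virtio_rmb() helpers. The demo_ring, producer_publish and
consumer_poll names below are invented for illustration only and are not part
of the driver:

/*
 * Stand-alone analogue of the avail->idx ordering, using C11 atomics
 * instead of the kernel's virtio_wmb()/virtio_rmb(). Simplified sketch,
 * not the driver's real data structures.
 */
#include <stdatomic.h>
#include <stdint.h>
#include <stdio.h>

#define RING_SIZE 4

struct demo_ring {
        uint16_t data[RING_SIZE];       /* stands in for the descriptors */
        _Atomic uint16_t avail_idx;     /* stands in for vr->avail->idx  */
};

/* Producer side: fill the descriptor, then publish the new index. */
static void producer_publish(struct demo_ring *r, uint16_t val)
{
        uint16_t idx = atomic_load_explicit(&r->avail_idx,
                                            memory_order_relaxed);

        r->data[idx % RING_SIZE] = val;
        /* release ~ wmb(): descriptor stores complete before idx update */
        atomic_store_explicit(&r->avail_idx, idx + 1, memory_order_release);
}

/* Consumer side: only read the descriptor after seeing the new index. */
static int consumer_poll(struct demo_ring *r, uint16_t next_avail,
                         uint16_t *out)
{
        /* acquire ~ the load barrier added after reading avail->idx */
        uint16_t idx = atomic_load_explicit(&r->avail_idx,
                                            memory_order_acquire);

        if (next_avail == idx)
                return 0;               /* nothing new */

        *out = r->data[next_avail % RING_SIZE];
        return 1;
}

int main(void)
{
        struct demo_ring r = { .avail_idx = 0 };
        uint16_t v;

        producer_publish(&r, 42);
        if (consumer_poll(&r, 0, &v))
                printf("consumed %u\n", (unsigned)v);
        return 0;
}

The release store plays the role of the producer's wmb(), and the acquire
load plays the role of the load barrier the patch adds after reading
avail->idx on the consumer side.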

I suppose it should be sent as a bugfix?
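
If so, the usual way to mark it would be a Fixes: tag plus a stable Cc in
the trailer block, along the lines of the placeholder example below (the sha
and subject would have to point at the actual commit that introduced the
ordering problem):

Fixes: <12-char sha> ("<subject of the offending commit>")
Cc: stable@vger.kernel.org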

>
> Signed-off-by: Liming Sun <limings@nvidia.com>
> ---
> drivers/platform/mellanox/mlxbf-tmfifo.c | 11 ++++++++++-
> 1 file changed, 10 insertions(+), 1 deletion(-)
>
> diff --git a/drivers/platform/mellanox/mlxbf-tmfifo.c b/drivers/platform/mellanox/mlxbf-tmfifo.c
> index bbc4e71..38800e8 100644
> --- a/drivers/platform/mellanox/mlxbf-tmfifo.c
> +++ b/drivers/platform/mellanox/mlxbf-tmfifo.c
> @@ -294,6 +294,9 @@ static irqreturn_t mlxbf_tmfifo_irq_handler(int irq, void *arg)
>          if (vring->next_avail == virtio16_to_cpu(vdev, vr->avail->idx))
>                  return NULL;
>
> +        /* Make sure 'avail->idx' is visible already. */
> +        virtio_rmb(false);
> +
>          idx = vring->next_avail % vr->num;
>          head = virtio16_to_cpu(vdev, vr->avail->ring[idx]);
>          if (WARN_ON(head >= vr->num))
> @@ -322,7 +325,7 @@ static void mlxbf_tmfifo_release_desc(struct mlxbf_tmfifo_vring *vring,
>           * done or not. Add a memory barrier here to make sure the update above
>           * completes before updating the idx.
>           */
> -        mb();
> +        virtio_mb(false);
>          vr->used->idx = cpu_to_virtio16(vdev, vr_idx + 1);
>  }
>
> @@ -733,6 +736,12 @@ static bool mlxbf_tmfifo_rxtx_one_desc(struct mlxbf_tmfifo_vring *vring,
>                  desc = NULL;
>                  fifo->vring[is_rx] = NULL;
>
> +                /*
> +                 * Make sure the load/store are in order before
> +                 * returning back to virtio.
> +                 */
> +                virtio_mb(false);
> +
>                  /* Notify upper layer that packet is done. */
>                  spin_lock_irqsave(&fifo->spin_lock[is_rx], flags);
>                  vring_interrupt(0, vring->vq);
> --
> 1.8.3.1
