Subject: Re: [PATCH 1/2] vdpa/mlx5: Fix suspend/resume index restoration
From: Si-Wei Liu <si-wei.liu@oracle.com>
Date: 2021-02-17


On 2/16/2021 8:20 AM, Eli Cohen wrote:
> When we suspend the VM, the VDPA interface will be reset. When the VM is
> resumed again, clear_virtqueues() will clear the available and used
> indices, resulting in the hardware virtqueue objects becoming out of sync.
> We can avoid this function altogether since qemu will clear them if
> required, e.g. when the VM went through a reboot.
>
> Moreover, since the hw available and used indices should always be
> identical on query and should be restored to the same value for
> virtqueues that complete in order, we set the single value provided
> by set_vq_state(). In get_vq_state() we return the value of the hardware
> used index.
>
> Fixes: 1a86b377aa21 ("vdpa/mlx5: Add VDPA driver for supported mlx5 devices")
> Signed-off-by: Eli Cohen <elic@nvidia.com>
Acked-by: Si-Wei Liu <si-wei.liu@oracle.com>

> ---
> drivers/vdpa/mlx5/net/mlx5_vnet.c | 17 ++++-------------
> 1 file changed, 4 insertions(+), 13 deletions(-)
>
> diff --git a/drivers/vdpa/mlx5/net/mlx5_vnet.c b/drivers/vdpa/mlx5/net/mlx5_vnet.c
> index b8e9d525d66c..a51b0f86afe2 100644
> --- a/drivers/vdpa/mlx5/net/mlx5_vnet.c
> +++ b/drivers/vdpa/mlx5/net/mlx5_vnet.c
> @@ -1169,6 +1169,7 @@ static void suspend_vq(struct mlx5_vdpa_net *ndev, struct mlx5_vdpa_virtqueue *m
>                  return;
>          }
>          mvq->avail_idx = attr.available_index;
> +        mvq->used_idx = attr.used_index;
>  }
>
>  static void suspend_vqs(struct mlx5_vdpa_net *ndev)
> @@ -1426,6 +1427,7 @@ static int mlx5_vdpa_set_vq_state(struct vdpa_device *vdev, u16 idx,
>                  return -EINVAL;
>          }
>
> +        mvq->used_idx = state->avail_index;
>          mvq->avail_idx = state->avail_index;
>          return 0;
>  }
> @@ -1443,7 +1445,7 @@ static int mlx5_vdpa_get_vq_state(struct vdpa_device *vdev, u16 idx, struct vdpa
>           * that cares about emulating the index after vq is stopped.
>           */
>          if (!mvq->initialized) {
> -                state->avail_index = mvq->avail_idx;
> +                state->avail_index = mvq->used_idx;
>                  return 0;
>          }
>
> @@ -1452,7 +1454,7 @@ static int mlx5_vdpa_get_vq_state(struct vdpa_device *vdev, u16 idx, struct vdpa
>                  mlx5_vdpa_warn(mvdev, "failed to query virtqueue\n");
>                  return err;
>          }
> -        state->avail_index = attr.available_index;
> +        state->avail_index = attr.used_index;
>          return 0;
>  }
>
> @@ -1532,16 +1534,6 @@ static void teardown_virtqueues(struct mlx5_vdpa_net *ndev)
>          }
>  }
>
> -static void clear_virtqueues(struct mlx5_vdpa_net *ndev)
> -{
> -        int i;
> -
> -        for (i = ndev->mvdev.max_vqs - 1; i >= 0; i--) {
> -                ndev->vqs[i].avail_idx = 0;
> -                ndev->vqs[i].used_idx = 0;
> -        }
> -}
> -
>  /* TODO: cross-endian support */
>  static inline bool mlx5_vdpa_is_little_endian(struct mlx5_vdpa_dev *mvdev)
>  {
> @@ -1777,7 +1769,6 @@ static void mlx5_vdpa_set_status(struct vdpa_device *vdev, u8 status)
>          if (!status) {
>                  mlx5_vdpa_info(mvdev, "performing device reset\n");
>                  teardown_driver(ndev);
> -                clear_virtqueues(ndev);
>                  mlx5_vdpa_destroy_mr(&ndev->mvdev);
>                  ndev->mvdev.status = 0;
>                  ++mvdev->generation;
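
Side note for anyone revisiting this after migration testing: dropping
clear_virtqueues() is safe because the ring base always comes back down from
userspace anyway. A minimal sketch of that path, assuming a vhost-vdpa
backend; restore_vring_base(), vdpa_fd and qidx are made-up names for
illustration and error handling is trimmed:

#include <linux/vhost.h>
#include <sys/ioctl.h>

/* Sketch only, not part of this patch: a vhost-vdpa client such as qemu
 * hands the ring base back to the driver after a reset, so the driver does
 * not need to zero the indices itself. */
static int restore_vring_base(int vdpa_fd, unsigned int qidx, unsigned int idx)
{
        struct vhost_vring_state s = {
                .index = qidx,
                .num = idx,     /* 0 after a guest reboot, saved index on resume */
        };

        /* mlx5_vdpa_set_vq_state() applies this single value to both the hw
         * available and used indices (virtqueues completing in order). */
        return ioctl(vdpa_fd, VHOST_SET_VRING_BASE, &s);
}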
