Subject: Re: [PATCH 4/4] virtio-net: sleep instead of busy waiting for cvq command

On 2022/12/27 14:58, Michael S. Tsirkin wrote:
> On Tue, Dec 27, 2022 at 12:33:53PM +0800, Jason Wang wrote:
>> On Tue, Dec 27, 2022 at 10:25 AM Xuan Zhuo <xuanzhuo@linux.alibaba.com> wrote:
>>> On Mon, 26 Dec 2022 15:49:08 +0800, Jason Wang <jasowang@redhat.com> wrote:
>>>> We used to busy wait on the cvq command; this tends to be
>>>> problematic since:
>>>>
>>>> 1) the CPU could wait forever on a buggy/malicious device
>>>> 2) there's no way to terminate the process that triggers the cvq
>>>> command
>>>>
>>>> So this patch switches to using virtqueue_wait_for_used() to sleep with
>>>> a timeout (1s) instead of busy polling for the cvq command forever. This
>>> I don't think that a fixed 1s is a good choice.
>> Well, it could be tweaked to be a little bit longer.
>>
>> One way, as discussed, is to let the device advertise a timeout then
>> the driver can validate if it's valid and use that timeout. But it
>> needs extension to the spec.
> Controlling timeout from device is a good idea, e.g. hardware devices
> would benefit from a shorter timeout, hypervisor devices from a longer
> timeout or no timeout.


Yes.
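
Just to illustrate what that could look like on the driver side (a sketch
only: the feature bit and config field below are hypothetical and would
need the spec extension mentioned above):

/* Hypothetical sketch: VIRTIO_NET_F_CVQ_TIMEOUT and the cvq_timeout_ms
 * config field do not exist in the spec today.
 */
#define VIRTNET_CVQ_TIMEOUT_MIN_MS  100
#define VIRTNET_CVQ_TIMEOUT_MAX_MS  (60 * 1000)
#define VIRTNET_CVQ_TIMEOUT_DEF_MS  1000

static u32 virtnet_cvq_timeout_ms(struct virtio_device *vdev)
{
        u32 ms;

        if (!virtio_has_feature(vdev, VIRTIO_NET_F_CVQ_TIMEOUT))
                return VIRTNET_CVQ_TIMEOUT_DEF_MS;

        virtio_cread(vdev, struct virtio_net_config, cvq_timeout_ms, &ms);

        /* Validate the advertised value instead of trusting the device. */
        return clamp_t(u32, ms, VIRTNET_CVQ_TIMEOUT_MIN_MS,
                       VIRTNET_CVQ_TIMEOUT_MAX_MS);
}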


>
>>> Some of the DPUs are very
>>> lazy when handling cvq commands.
>> Such a design needs to be revisited; the cvq (control path) should have
>> a better priority or QoS than the datapath.
> Spec says nothing about this, so driver can't assume this either.


Well, my understanding is that this is more than what the spec can
define; it's a kind of best practice.

The current code is one example: the driver may choose to busy poll,
which causes CPU spikes.
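
For illustration, the sleeping wait this series moves to boils down to
roughly the following (a sketch only; the real virtqueue_wake_up() /
virtqueue_wait_for_used() helpers are added earlier in the series, and
the cvq_done completion field here is just a stand-in for this example):

/* Sketch: approximate the new behaviour with a completion. */
static void virtnet_cvq_done_sketch(struct virtqueue *cvq)
{
        struct virtnet_info *vi = cvq->vdev->priv;

        /* Runs from the cvq interrupt and wakes the sleeping waiter. */
        complete(&vi->cvq_done);
}

static bool virtnet_cvq_wait_sketch(struct virtnet_info *vi)
{
        /* Sleep instead of spinning with cpu_relax(); HZ jiffies == 1s.
         * Returns false on timeout so the caller can break the device.
         */
        return wait_for_completion_timeout(&vi->cvq_done, HZ) > 0;
}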


>
>>> In particular, we would also end up directly breaking the device.
>> It's kind of hardening for malicious devices.
> ATM no amount of hardening can prevent a malicious hypervisor from
> blocking the guest. Recovering when a hardware device is broken would be
> nice but I think if we do bother then we should try harder to recover,
> such as by driving device reset.


Probably, but as discussed in another thread, it needs cooperation from
the upper layer (networking core).


>
>
> Also, does your patch break surprise removal? There's no callback
> in this case ATM.


I think not (see reply in another thread).

Thanks


>
>>> I think it is necessary to add a virtio-net parameter to allow users to
>>> define this timeout themselves, although I don't think this is a good way.
>> Very hard and unfriendly to the end users.
>>
>> Thanks
>>
>>> Thanks.
>>>
>>>
>>>> gives the scheduler a breath and lets the process respond to a
>>>> signal. If the device doesn't respond within the timeout, break the
>>>> device.
>>>>
>>>> Signed-off-by: Jason Wang <jasowang@redhat.com>
>>>> ---
>>>> Changes since V1:
>>>> - break the device when timeout
>>>> - get buffer manually since the virtio core check more_used() instead
>>>> ---
>>>> drivers/net/virtio_net.c | 24 ++++++++++++++++--------
>>>> 1 file changed, 16 insertions(+), 8 deletions(-)
>>>>
>>>> diff --git a/drivers/net/virtio_net.c b/drivers/net/virtio_net.c
>>>> index efd9dd55828b..6a2ea64cfcb5 100644
>>>> --- a/drivers/net/virtio_net.c
>>>> +++ b/drivers/net/virtio_net.c
>>>> @@ -405,6 +405,7 @@ static void disable_rx_mode_work(struct virtnet_info *vi)
>>>> vi->rx_mode_work_enabled = false;
>>>> spin_unlock_bh(&vi->rx_mode_lock);
>>>>
>>>> + virtqueue_wake_up(vi->cvq);
>>>> flush_work(&vi->rx_mode_work);
>>>> }
>>>>
>>>> @@ -1497,6 +1498,11 @@ static bool try_fill_recv(struct virtnet_info *vi, struct receive_queue *rq,
>>>> return !oom;
>>>> }
>>>>
>>>> +static void virtnet_cvq_done(struct virtqueue *cvq)
>>>> +{
>>>> + virtqueue_wake_up(cvq);
>>>> +}
>>>> +
>>>> static void skb_recv_done(struct virtqueue *rvq)
>>>> {
>>>> struct virtnet_info *vi = rvq->vdev->priv;
>>>> @@ -1984,6 +1990,8 @@ static int virtnet_tx_resize(struct virtnet_info *vi,
>>>> return err;
>>>> }
>>>>
>>>> +static int virtnet_close(struct net_device *dev);
>>>> +
>>>> /*
>>>> * Send command via the control virtqueue and check status. Commands
>>>> * supported by the hypervisor, as indicated by feature bits, should
>>>> @@ -2026,14 +2034,14 @@ static bool virtnet_send_command(struct virtnet_info *vi, u8 class, u8 cmd,
>>>> if (unlikely(!virtqueue_kick(vi->cvq)))
>>>> return vi->ctrl->status == VIRTIO_NET_OK;
>>>>
>>>> - /* Spin for a response, the kick causes an ioport write, trapping
>>>> - * into the hypervisor, so the request should be handled immediately.
>>>> - */
>>>> - while (!virtqueue_get_buf(vi->cvq, &tmp) &&
>>>> - !virtqueue_is_broken(vi->cvq))
>>>> - cpu_relax();
>>>> + if (virtqueue_wait_for_used(vi->cvq)) {
>>>> + virtqueue_get_buf(vi->cvq, &tmp);
>>>> + return vi->ctrl->status == VIRTIO_NET_OK;
>>>> + }
>>>>
>>>> - return vi->ctrl->status == VIRTIO_NET_OK;
>>>> + netdev_err(vi->dev, "CVQ command timeout, break the virtio device.");
>>>> + virtio_break_device(vi->vdev);
>>>> + return VIRTIO_NET_ERR;
>>>> }
>>>>
>>>> static int virtnet_set_mac_address(struct net_device *dev, void *p)
>>>> @@ -3526,7 +3534,7 @@ static int virtnet_find_vqs(struct virtnet_info *vi)
>>>>
>>>> /* Parameters for control virtqueue, if any */
>>>> if (vi->has_cvq) {
>>>> - callbacks[total_vqs - 1] = NULL;
>>>> + callbacks[total_vqs - 1] = virtnet_cvq_done;
>>>> names[total_vqs - 1] = "control";
>>>> }
>>>>
>>>> --
>>>> 2.25.1
>>>>
>>>> _______________________________________________
>>>> Virtualization mailing list
>>>> Virtualization@lists.linux-foundation.org
>>>> https://lists.linuxfoundation.org/mailman/listinfo/virtualization
