Subject: Re: [PATCH v2 19/19] vdpasim: control virtqueue support
On Wed, Jun 22, 2022 at 05:04:44PM +0200, Eugenio Perez Martin wrote:
>On Wed, Jun 22, 2022 at 12:21 PM Eugenio Perez Martin
><eperezma@redhat.com> wrote:
>>
>> On Tue, Jun 21, 2022 at 5:20 PM Stefano Garzarella <sgarzare@redhat.com> wrote:
>> >
>> > Hi Gautam,
>> >
>> > On Wed, Mar 30, 2022 at 8:21 PM Gautam Dawar <gautam.dawar@xilinx.com> wrote:
>> > >
>> > > This patch introduces control virtqueue support for the vDPA
>> > > simulator. This is a requirement for supporting advanced features
>> > > like multiqueue.
>> > >
>> > > A requirement for the control virtqueue is to isolate its memory
>> > > access from the rx/tx virtqueues. This is because when a vDPA device
>> > > is used by a VM, the control virtqueue is not assigned to the VM
>> > > directly. Instead, userspace (Qemu) presents a shadow control
>> > > virtqueue in order to record the device state.
>> > >
>> > > The isolation is done via the virtqueue groups and ASID support in
>> > > vDPA through vhost-vdpa. The simulator is extended to have:
>> > >
>> > > 1) three virtqueues: RXVQ, TXVQ and CVQ (control virtqueue)
>> > > 2) two virtqueue groups: group 0 contains RXVQ and TXVQ; group 1
>> > > contains CVQ
>> > > 3) two address spaces; the simulator simply implements each address
>> > > space by mapping it 1:1 to an IOTLB.
>> > >
>> > > For the VM use cases, userspace (Qemu) may assign AS 0 to group 0
>> > > and AS 1 to group 1. So we have:
>> > >
>> > > 1) The IOTLB for virtqueue group 0 contains the guest's mappings,
>> > > so RX and TX can be assigned to the guest directly.
>> > > 2) The IOTLB for virtqueue group 1 contains the CVQ mappings, i.e.
>> > > buffers allocated and managed by the VMM only. So the CVQ of
>> > > vhost-vdpa is visible only to the VMM, and the guest cannot access
>> > > it.
>> > >
>> > > For the other use cases, AS 0 is associated with all virtqueue
>> > > groups by default, so all virtqueues share the same mapping.
>> > >
>> > > To demonstrate the functionality, VIRTIO_NET_F_CTRL_MAC_ADDR is
>> > > implemented in the simulator so that the driver can set the MAC
>> > > address.
>> > >
>> > > Signed-off-by: Jason Wang <jasowang@redhat.com>
>> > > Signed-off-by: Gautam Dawar <gdawar@xilinx.com>
>> > > ---
>> > > drivers/vdpa/vdpa_sim/vdpa_sim.c | 91 ++++++++++++++++++++++------
>> > > drivers/vdpa/vdpa_sim/vdpa_sim.h | 2 +
>> > > drivers/vdpa/vdpa_sim/vdpa_sim_net.c | 88 ++++++++++++++++++++++++++-
>> > > 3 files changed, 161 insertions(+), 20 deletions(-)
>> > >
>> > > diff --git a/drivers/vdpa/vdpa_sim/vdpa_sim.c b/drivers/vdpa/vdpa_sim/vdpa_sim.c
>> > > index 659e2e2e4b0c..51bd0bafce06 100644
>> > > --- a/drivers/vdpa/vdpa_sim/vdpa_sim.c
>> > > +++ b/drivers/vdpa/vdpa_sim/vdpa_sim.c
>> > > @@ -96,11 +96,17 @@ static void vdpasim_do_reset(struct vdpasim *vdpasim)
>> > > {
>> > > int i;
>> > >
>> > > - for (i = 0; i < vdpasim->dev_attr.nvqs; i++)
>> > > + spin_lock(&vdpasim->iommu_lock);
>> > > +
>> > > + for (i = 0; i < vdpasim->dev_attr.nvqs; i++) {
>> > > vdpasim_vq_reset(vdpasim, &vdpasim->vqs[i]);
>> > > + vringh_set_iotlb(&vdpasim->vqs[i].vring, &vdpasim->iommu[0],
>> > > + &vdpasim->iommu_lock);
>> > > + }
>> > > +
>> > > + for (i = 0; i < vdpasim->dev_attr.nas; i++)
>> > > + vhost_iotlb_reset(&vdpasim->iommu[i]);
>> > >
>> > > - spin_lock(&vdpasim->iommu_lock);
>> > > - vhost_iotlb_reset(vdpasim->iommu);
>> > > spin_unlock(&vdpasim->iommu_lock);
>> > >
>> > > vdpasim->features = 0;
>> > > @@ -145,7 +151,7 @@ static dma_addr_t vdpasim_map_range(struct vdpasim *vdpasim, phys_addr_t paddr,
>> > > dma_addr = iova_dma_addr(&vdpasim->iova, iova);
>> > >
>> > > spin_lock(&vdpasim->iommu_lock);
>> > > - ret = vhost_iotlb_add_range(vdpasim->iommu, (u64)dma_addr,
>> > > + ret = vhost_iotlb_add_range(&vdpasim->iommu[0], (u64)dma_addr,
>> > > (u64)dma_addr + size - 1, (u64)paddr, perm);
>> > > spin_unlock(&vdpasim->iommu_lock);
>> > >
>> > > @@ -161,7 +167,7 @@ static void vdpasim_unmap_range(struct vdpasim *vdpasim, dma_addr_t dma_addr,
>> > > size_t size)
>> > > {
>> > > spin_lock(&vdpasim->iommu_lock);
>> > > - vhost_iotlb_del_range(vdpasim->iommu, (u64)dma_addr,
>> > > + vhost_iotlb_del_range(&vdpasim->iommu[0], (u64)dma_addr,
>> > > (u64)dma_addr + size - 1);
>> > > spin_unlock(&vdpasim->iommu_lock);
>> > >
>> > > @@ -250,8 +256,9 @@ struct vdpasim *vdpasim_create(struct vdpasim_dev_attr *dev_attr)
>> > > else
>> > > ops = &vdpasim_config_ops;
>> > >
>> > > - vdpasim = vdpa_alloc_device(struct vdpasim, vdpa, NULL, ops, 1,
>> > > - 1, dev_attr->name, false);
>> > > + vdpasim = vdpa_alloc_device(struct vdpasim, vdpa, NULL, ops,
>> > > + dev_attr->ngroups, dev_attr->nas,
>> > > + dev_attr->name, false);
>> > > if (IS_ERR(vdpasim)) {
>> > > ret = PTR_ERR(vdpasim);
>> > > goto err_alloc;
>> > > @@ -278,16 +285,20 @@ struct vdpasim *vdpasim_create(struct vdpasim_dev_attr *dev_attr)
>> > > if (!vdpasim->vqs)
>> > > goto err_iommu;
>> > >
>> > > - vdpasim->iommu = vhost_iotlb_alloc(max_iotlb_entries, 0);
>> > > + vdpasim->iommu = kmalloc_array(vdpasim->dev_attr.nas,
>> > > + sizeof(*vdpasim->iommu), GFP_KERNEL);
>> > > if (!vdpasim->iommu)
>> > > goto err_iommu;
>> > >
>> > > + for (i = 0; i < vdpasim->dev_attr.nas; i++)
>> > > + vhost_iotlb_init(&vdpasim->iommu[i], 0, 0);
>> > > +
>> > > vdpasim->buffer = kvmalloc(dev_attr->buffer_size, GFP_KERNEL);
>> > > if (!vdpasim->buffer)
>> > > goto err_iommu;
>> > >
>> > > for (i = 0; i < dev_attr->nvqs; i++)
>> > > - vringh_set_iotlb(&vdpasim->vqs[i].vring, vdpasim->iommu,
>> > > + vringh_set_iotlb(&vdpasim->vqs[i].vring, &vdpasim->iommu[0],
>> > > &vdpasim->iommu_lock);
>> > >
>> > > ret = iova_cache_get();
>> > > @@ -401,7 +412,11 @@ static u32 vdpasim_get_vq_align(struct vdpa_device *vdpa)
>> > >
>> > > static u32 vdpasim_get_vq_group(struct vdpa_device *vdpa, u16 idx)
>> > > {
>> > > - return 0;
>> > > + /* RX and TX belongs to group 0, CVQ belongs to group 1 */
>> > > + if (idx == 2)
>> > > + return 1;
>> > > + else
>> > > + return 0;
>> >
>> > This code only works for the vDPA-net simulator; since
>> > vdpasim_get_vq_group() is also shared with the other simulators (e.g.
>> > vdpa_sim_blk),
>>
>> That's totally right.
>>
>> > should we move this net-specific code into
>> > vdpa_sim_net.c, maybe adding a callback implemented by the different
>> > simulators?
>> >
>>
>> At this moment, VDPASIM_BLK_VQ_NUM is fixed to 1, so maybe the right
>> thing to do for the -rc phase is to check if idx > vdpasim.attr.nvqs?
>> It's a more general fix.
>>
>
>Actually, that is already checked by vhost/vdpa.c.
>
>Taking that into account, is it worth introducing the change for 5.19?
>I'm totally ok with the change for 5.20.
>
>Thanks!
>
>> For the general case, yes, a callback should be issued to the actual
>> simulator so it's not a surprise when VDPASIM_BLK_VQ_NUM increases,
>> either dynamically or by anyone testing it.

Exactly. Since those parameters are not yet configurable at runtime
(someday I hope they will be), I often recompile the module after
changing them, so in my opinion we should fix this in 5.19.

Obviously it's an advanced use case, and if someone recompiles the
module after changing some hardwired value, they can expect to have to
change something else as well.
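
For reference, the callback approach could look roughly like this (just
an untested sketch; the get_vq_group field in vdpasim_dev_attr is a
name I'm making up here):

	/* vdpa_sim.h: let each simulator provide its own vq-to-group map */
	struct vdpasim_dev_attr {
		/* ... existing fields ... */
		u32 (*get_vq_group)(struct vdpasim *vdpasim, u16 idx);
	};

	/* vdpa_sim.c: fall back to group 0 when no callback is set */
	static u32 vdpasim_get_vq_group(struct vdpa_device *vdpa, u16 idx)
	{
		struct vdpasim *vdpasim = vdpa_to_sim(vdpa);

		if (vdpasim->dev_attr.get_vq_group)
			return vdpasim->dev_attr.get_vq_group(vdpasim, idx);

		return 0;
	}

	/* vdpa_sim_net.c: RX (0) and TX (1) in group 0, CVQ (2) in group 1 */
	static u32 vdpasim_net_get_vq_group(struct vdpasim *vdpasim, u16 idx)
	{
		return idx == 2 ? 1 : 0;
	}

That way vdpa_sim_blk keeps returning group 0 for every virtqueue
without knowing anything about the net-specific layout.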

So, I'm also fine with leaving it that way for 5.19, but if you want I
can fix it earlier.
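
As an aside, for anyone who wants to try the AS-to-group assignment
described in the commit message from userspace: it goes through the
vhost-vdpa uAPI added earlier in this series, roughly like this
(minimal sketch, error handling mostly trimmed; /dev/vhost-vdpa-0 is
just an example path):

	#include <fcntl.h>
	#include <stdio.h>
	#include <sys/ioctl.h>
	#include <linux/vhost.h>

	int main(void)
	{
		struct vhost_vring_state s;
		int fd = open("/dev/vhost-vdpa-0", O_RDWR);

		if (fd < 0)
			return 1;

		/* ask which group the control virtqueue (index 2) is in */
		s.index = 2;
		if (ioctl(fd, VHOST_VDPA_GET_VRING_GROUP, &s) < 0)
			return 1;
		printf("CVQ is in group %u\n", s.num);

		/* bind that group to AS 1, mapped only by the VMM */
		s.index = s.num;	/* virtqueue group */
		s.num = 1;		/* address space id */
		if (ioctl(fd, VHOST_VDPA_SET_GROUP_ASID, &s) < 0)
			return 1;

		return 0;
	}

After that, IOTLB updates for ASID 1 only affect the CVQ, which is
exactly the isolation the commit message describes.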

Thanks,
Stefano
