Subject: Re: [PATCH] virtio: Force DMA restricted devices through DMA API
On Tue, Jul 19, 2022 at 07:56:09AM -0400, Michael S. Tsirkin wrote:
> On Tue, Jul 19, 2022 at 10:02:56AM +0000, Keir Fraser wrote:
> > If virtio devices are tagged for "restricted-dma-pool", then that
> > pool should be used for virtio ring setup, via the DMA API.
> >
> > In particular, this fixes virtio_balloon for ARM PKVM, where the usual
> > workaround of setting VIRTIO_F_ACCESS_PLATFORM in the virtio device
> > doesn't work because the virtio_balloon driver clears the flag. This
> > seems a more robust fix than fiddling the flag again.
> >
> > Signed-off-by: Keir Fraser <keirf@google.com>
>
>
> So the reason balloon disables ACCESS_PLATFORM is simply
> because it passes physical addresses to the device and
> expects the device to be able to poke at them.
>
> I worry about modifying DMA semantics yet again - it has as much of a
> chance to break some legacy configs as it has to fix some.
>
>
> And I don't really know much about restricted-dma-pool, but
> I'd like to understand why it makes sense to set it for
> the balloon, since it pokes at any and all system memory.

So this is set in the device tree by the host, telling the guest to bounce
all DMA through a restricted memory window (basically swiotlb). The original
motivation is simply to isolate DMA, to the extent possible, on IOMMU-less
systems.
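
To make that concrete, the guest DT ends up with something roughly like the
fragment below. This is only a sketch: the addresses, sizes and node names
are invented, unrelated properties (interrupts etc.) are left out, and the
exact shape will differ per platform. The device opts in by pointing
memory-region at the pool node:

    reserved-memory {
        #address-cells = <2>;
        #size-cells = <2>;
        ranges;

        /* Bounce pool shared with the host; placement and size are
         * illustrative only. */
        restricted_dma: restricted-dma@50000000 {
            compatible = "restricted-dma-pool";
            reg = <0x0 0x50000000 0x0 0x400000>;
        };
    };

    virtio@a000000 {
        compatible = "virtio,mmio";
        reg = <0x0 0x0a000000 0x0 0x200>;
        /* All DMA for this device is bounced through the pool above. */
        memory-region = <&restricted_dma>;
    };

The OF/DMA core then attaches that region to the device as its swiotlb pool,
which is what the is_swiotlb_for_alloc() check in the patch below keys off.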

However it is also useful for PKVM because the host is not trusted to access
ordinary protected VM memory. To allow I/O via the host, restricted-dma-pool
is used to allocate a bounce aperture during VM boot, which is then explicitly
shared with the host. For correct PKVM virtio operation, all
data *and metadata* (virtio rings and descriptors) must be allocated in or
bounced through this aperture.

Insofar as virtio device accesses to virtio rings in guest memory essentially
*are* DMA (from the pov of the guest), I think it makes sense to respect the
bounce buffer for those rings, if so configured by the device tree.

> > ---
> >  drivers/virtio/virtio_ring.c | 8 ++++++++
> >  1 file changed, 8 insertions(+)
> >
> > diff --git a/drivers/virtio/virtio_ring.c b/drivers/virtio/virtio_ring.c
> > index a5ec724c01d8..12be2607c648 100644
> > --- a/drivers/virtio/virtio_ring.c
> > +++ b/drivers/virtio/virtio_ring.c
> > @@ -12,6 +12,7 @@
> >  #include <linux/hrtimer.h>
> >  #include <linux/dma-mapping.h>
> >  #include <linux/spinlock.h>
> > +#include <linux/swiotlb.h>
> >  #include <xen/xen.h>
> >
> >  #ifdef DEBUG
> > @@ -248,6 +249,13 @@ static bool vring_use_dma_api(struct virtio_device *vdev)
> >  	if (!virtio_has_dma_quirk(vdev))
> >  		return true;
> >
> > +	/* If the device is configured to use a DMA restricted pool,
> > +	 * we had better use it.
> > +	 */
> > +	if (IS_ENABLED(CONFIG_DMA_RESTRICTED_POOL) &&
> > +	    is_swiotlb_for_alloc(vdev->dev.parent))
> > +		return true;
> > +
> >  	/* Otherwise, we are left to guess. */
> >  	/*
> >  	 * In theory, it's possible to have a buggy QEMU-supposed
> > --
> > 2.37.0.170.g444d1eabd0-goog