    Subject: Re: [Xen-devel] [PATCH 0/1] drm/xen-zcopy: Add Xen zero-copy helper DRM driver
    From: Oleksandr Andrushchenko
    Date: 2018-04-20

    On 04/20/2018 10:19 AM, Daniel Vetter wrote:
    > On Wed, Apr 18, 2018 at 11:10:58AM +0100, Roger Pau Monné wrote:
    >> On Wed, Apr 18, 2018 at 11:01:12AM +0300, Oleksandr Andrushchenko wrote:
    >>> On 04/18/2018 10:35 AM, Roger Pau Monné wrote:
    >>>> On Wed, Apr 18, 2018 at 09:38:39AM +0300, Oleksandr Andrushchenko wrote:
    >>>>> On 04/17/2018 11:57 PM, Dongwon Kim wrote:
    >>>>>> On Tue, Apr 17, 2018 at 09:59:28AM +0200, Daniel Vetter wrote:
    >>>>>>> On Mon, Apr 16, 2018 at 12:29:05PM -0700, Dongwon Kim wrote:
    >>>>> 3.2 Backend exports dma-buf to xen-front
    >>>>>
    >>>>> In this case Dom0 pages are shared with DomU. As before, DomU can only write
    >>>>> to these pages, not any other page from Dom0, so it can still be considered
    >>>>> safe.
    >>>>> But the following must be considered (highlighted in xen-front's kernel
    >>>>> documentation):
    >>>>>  - If guest domain dies then pages/grants received from the backend cannot
    >>>>>    be claimed back - think of it as memory lost to Dom0 (won't be used for
    >>>>>    any other guest)
    >>>>>  - Misbehaving guest may send too many requests to the backend exhausting
    >>>>>    its grant references and memory (consider this from security POV). As the
    >>>>>    backend runs in the trusted domain we also assume that it is trusted as
    >>>>>    well, e.g. must take measures to prevent DDoS attacks.
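    >>>>>
    >>>>> For illustration, a minimal kernel-side sketch of how a backend could
    >>>>> grant its pages to the frontend; this uses the in-kernel grant-table
    >>>>> API, and the domid and page list are assumed to come from the display
    >>>>> protocol handshake:
    >>>>>
    >>>>> #include <xen/grant_table.h>
    >>>>> #include <xen/page.h>
    >>>>>
    >>>>> static int share_buffer_with_frontend(domid_t otherend,
    >>>>>                                       struct page **pages, int n,
    >>>>>                                       grant_ref_t *refs)
    >>>>> {
    >>>>>         int i, ref;
    >>>>>
    >>>>>         for (i = 0; i < n; i++) {
    >>>>>                 /* read-write grant: the frontend writes into these pages */
    >>>>>                 ref = gnttab_grant_foreign_access(otherend,
    >>>>>                                 xen_page_to_gfn(pages[i]), 0);
    >>>>>                 if (ref < 0)
    >>>>>                         return ref;
    >>>>>                 refs[i] = ref;
    >>>>>         }
    >>>>>         return 0;
    >>>>> }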
    >>>> I cannot parse the above sentence:
    >>>>
    >>>> "As the backend runs in the trusted domain we also assume that it is
    >>>> trusted as well, e.g. must take measures to prevent DDoS attacks."
    >>>>
    >>>> What's the relation between being trusted and protecting from DoS
    >>>> attacks?
    >>> I mean that we trust the backend to prevent Dom0
    >>> from crashing in case DomU's frontend misbehaves, e.g.
    >>> if the frontend sends too many memory requests etc.
    >>>> In any case, all? PV protocols are implemented with the frontend
    >>>> sharing pages to the backend, and I think there's a reason why this
    >>>> model is used, and it should continue to be used.
    >>> This is the first use-case above. But there are real-world
    >>> use-cases (embedded, in my case) where physically contiguous memory
    >>> needs to be shared; one of the possible ways to achieve this is
    >>> to share contiguous memory from Dom0 to DomU (the second use-case above)
    >>>> Having to add logic in the backend to prevent such attacks means
    >>>> that:
    >>>>
    >>>> - We need more code in the backend, which increases complexity and
    >>>> chances of bugs.
    >>>> - Such code/logic could be wrong, thus allowing DoS.
    >>> You can live without this code at all, but then it is up to the
    >>> backend, which may bring Dom0 down because of DomU's frontend doing evil
    >>> things
    >> IMO we should design protocols that do not allow such attacks instead
    >> of having to defend against them.
    >>
    >>>>> 4. xen-front/backend/xen-zcopy synchronization
    >>>>>
    >>>>> 4.1. As I already said in 2), all the inter-VM communication happens between
    >>>>> xen-front and the backend; xen-zcopy is NOT involved in that.
    >>>>> When xen-front wants to destroy a display buffer (dumb/dma-buf) it issues a
    >>>>> XENDISPL_OP_DBUF_DESTROY command (the opposite of XENDISPL_OP_DBUF_CREATE).
    >>>>> This call is synchronous, so xen-front expects that the backend does free the
    >>>>> buffer pages on return.
    >>>>>
    >>>>> 4.2. Backend, on XENDISPL_OP_DBUF_DESTROY:
    >>>>>   - closes all dumb handles/fd's of the buffer according to [3]
    >>>>>   - issues DRM_IOCTL_XEN_ZCOPY_DUMB_WAIT_FREE IOCTL to xen-zcopy to make sure
    >>>>>     the buffer is freed (think of it as waiting for the dma-buf->release
    >>>>>     callback)
    >>>> So this zcopy thing keeps some kind of track of the memory usage? Why
    >>>> can't the user-space backend keep track of the buffer usage?
    >>> Because there is no dma-buf UAPI which allows tracking the buffer life cycle
    >>> (e.g. waiting until dma-buf's .release callback is called)
    >>>>>   - replies to xen-front that the buffer can be destroyed.
    >>>>> This way deletion of the buffer happens synchronously on both Dom0 and DomU
    >>>>> sides. In case DRM_IOCTL_XEN_ZCOPY_DUMB_WAIT_FREE returns with a time-out
    >>>>> error (BTW, the wait time is a parameter of this IOCTL), Xen will defer grant
    >>>>> reference removal and will retry later until those are freed.
    >>>>>
    >>>>> Hope this helps to understand how buffers are synchronously deleted in
    >>>>> the case of xen-zcopy with a single protocol command.
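    >>>>>
    >>>>> A user-space sketch of the backend's destroy sequence from 4.2; note
    >>>>> that the ioctl number and argument layout below are assumed from the
    >>>>> description above (a buffer handle plus a wait time), not taken from
    >>>>> the driver's UAPI header:
    >>>>>
    >>>>> #include <stdint.h>
    >>>>> #include <unistd.h>
    >>>>> #include <sys/ioctl.h>
    >>>>> #include <linux/ioctl.h>
    >>>>>
    >>>>> struct drm_xen_zcopy_dumb_wait_free {  /* assumed layout */
    >>>>>         uint32_t wait_handle;  /* identifies the buffer to wait for */
    >>>>>         uint32_t wait_to_ms;   /* the wait time parameter of this IOCTL */
    >>>>> };
    >>>>> /* assumed ioctl number, for the sketch only */
    >>>>> #define DRM_IOCTL_XEN_ZCOPY_DUMB_WAIT_FREE \
    >>>>>         _IOW('d', 0x02, struct drm_xen_zcopy_dumb_wait_free)
    >>>>>
    >>>>> static int backend_destroy_buffer(int zcopy_fd, int dmabuf_fd,
    >>>>>                                   uint32_t wait_handle)
    >>>>> {
    >>>>>         struct drm_xen_zcopy_dumb_wait_free req = {
    >>>>>                 .wait_handle = wait_handle,
    >>>>>                 .wait_to_ms  = 3000,
    >>>>>         };
    >>>>>
    >>>>>         close(dmabuf_fd); /* close all handles/fd's of the buffer [3] */
    >>>>>         /* blocks until dma-buf->release fires or the time-out hits */
    >>>>>         return ioctl(zcopy_fd, DRM_IOCTL_XEN_ZCOPY_DUMB_WAIT_FREE, &req);
    >>>>> }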
    >>>>>
    >>>>> I think the above logic can also be re-used by the hyper-dmabuf driver with
    >>>>> some additional work:
    >>>>>
    >>>>> 1. xen-zcopy can be split into 2 parts and extended:
    >>>>> 1.1. Xen gntdev driver [4], [5], to allow creating a dma-buf from grefs and
    >>>>> vice versa,
    >>>> I don't know much about the dma-buf implementation in Linux, but
    >>>> gntdev is a user-space device, and AFAICT user-space applications
    >>>> don't have any notion of dma buffers. How are such buffers useful for
    >>>> user-space? Why can't this just be called memory?
    >>> A dma-buf is seen by user-space as a file descriptor and you can
    >>> pass it to different drivers then. For example, you can share a buffer
    >>> used by a display driver for scanout with a GPU, to compose a picture
    >>> into it:
    >>> 1. User-space (US) allocates a display buffer from the display driver
    >>> 2. US asks the display driver to export the dma-buf which backs that buffer,
    >>> US gets the buffer's fd: dma_buf_fd
    >>> 3. US asks GPU driver to import a buffer and provides it with dma_buf_fd
    >>> 4. GPU renders contents into display buffer (dma_buf_fd)
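    >>>
    >>> A minimal user-space sketch of steps 1-3, using the generic DRM dumb
    >>> buffer and PRIME ioctls (the device nodes and buffer size here are
    >>> assumptions; error handling omitted):
    >>>
    >>> #include <fcntl.h>
    >>> #include <sys/ioctl.h>
    >>> #include <drm/drm.h>
    >>> #include <drm/drm_mode.h>
    >>>
    >>> int main(void)
    >>> {
    >>>         int display = open("/dev/dri/card0", O_RDWR); /* display driver */
    >>>         int gpu     = open("/dev/dri/card1", O_RDWR); /* GPU driver */
    >>>
    >>>         /* 1. US allocates a display buffer from the display driver */
    >>>         struct drm_mode_create_dumb create = {
    >>>                 .width = 1920, .height = 1080, .bpp = 32,
    >>>         };
    >>>         ioctl(display, DRM_IOCTL_MODE_CREATE_DUMB, &create);
    >>>
    >>>         /* 2. US exports the dma-buf backing that buffer: dma_buf_fd */
    >>>         struct drm_prime_handle exp = {
    >>>                 .handle = create.handle, .flags = DRM_CLOEXEC,
    >>>         };
    >>>         ioctl(display, DRM_IOCTL_PRIME_HANDLE_TO_FD, &exp);
    >>>
    >>>         /* 3. US imports dma_buf_fd into the GPU driver, which can then
    >>>          * render (4) directly into the display buffer */
    >>>         struct drm_prime_handle imp = { .fd = exp.fd };
    >>>         ioctl(gpu, DRM_IOCTL_PRIME_FD_TO_HANDLE, &imp);
    >>>         return 0;
    >>> }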
    >> After speaking with Oleksandr on IRC, I think the main usage of the
    >> gntdev extension is to:
    >>
    >> 1. Create a dma-buf from a set of grant references.
    >> 2. Share dma-buf and get a list of grant references.
    >>
    >> I think this set of operations could be broken into:
    >>
    >> 1.1 Map grant references into user-space using the gntdev.
    >> 1.2 Create a dma-buf out of a set of user-space virtual addresses.
    >>
    >> 2.1 Map a dma-buf into user-space.
    >> 2.2 Get grefs out of the user-space addresses where the dma-buf is
    >> mapped.
    >>
    >> So it seems like what's actually missing is a way to:
    >>
    >> - Create a dma-buf from a list of user-space virtual addresses.
    >> - Allow mapping a dma-buf into user-space, so it can then be used with
    >> the gntdev.
    >>
    >> I think this is generic enough that it could be implemented by a
    >> device not tied to Xen. AFAICT the hyper-dmabuf guys also wanted
    >> something similar to this.
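    >>
    >> For reference, 1.1 is already possible today; a sketch with the
    >> existing gntdev UAPI (the domid and grant refs are assumed to come
    >> from the display protocol; 4 KiB pages assumed; error handling
    >> omitted):
    >>
    >> #include <fcntl.h>
    >> #include <stdint.h>
    >> #include <stdlib.h>
    >> #include <sys/ioctl.h>
    >> #include <sys/mman.h>
    >> #include <xen/gntdev.h>
    >>
    >> void *map_grefs(uint32_t domid, uint32_t *refs, uint32_t n)
    >> {
    >>         int fd = open("/dev/xen/gntdev", O_RDWR);
    >>         struct ioctl_gntdev_map_grant_ref *m;
    >>         uint32_t i;
    >>
    >>         m = malloc(sizeof(*m) + (n - 1) * sizeof(m->refs[0]));
    >>         m->count = n;
    >>         for (i = 0; i < n; i++) {
    >>                 m->refs[i].domid = domid;
    >>                 m->refs[i].ref = refs[i];
    >>         }
    >>         ioctl(fd, IOCTL_GNTDEV_MAP_GRANT_REF, m);
    >>
    >>         /* 1.1: the grants are now mappable at offset m->index */
    >>         return mmap(NULL, n * 4096, PROT_READ | PROT_WRITE,
    >>                     MAP_SHARED, fd, m->index);
    >> }
    >>
    >> What is missing is step 1.2: turning those mapped addresses into a
    >> dma-buf.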
    > You can't just wrap random userspace memory into a dma-buf. We've just had
    > this discussion with kvm/qemu folks, who proposed just that, and after a
    > bit of discussion they'll now try to have a driver which just wraps a
    > memfd into a dma-buf.
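    >
    > A sketch of what such a memfd-wrapping UAPI could look like; everything
    > here (the /dev/udmabuf node, the ioctl and the struct) is a guess at an
    > interface that is still under discussion, not merged code:
    >
    > #define _GNU_SOURCE
    > #include <fcntl.h>
    > #include <unistd.h>
    > #include <sys/ioctl.h>
    > #include <sys/mman.h>
    > #include <linux/ioctl.h>
    > #include <linux/types.h>
    >
    > struct udmabuf_create {        /* hypothetical */
    >         __u32 memfd;
    >         __u32 flags;
    >         __u64 offset;
    >         __u64 size;
    > };
    > #define UDMABUF_CREATE _IOW('u', 0x42, struct udmabuf_create)
    >
    > int memfd_to_dmabuf(size_t size)
    > {
    >         int memfd = memfd_create("buf", MFD_ALLOW_SEALING);
    >         struct udmabuf_create create = { 0 };
    >         int dev;
    >
    >         ftruncate(memfd, size);
    >         /* the pages must not shrink away while the dma-buf exists */
    >         fcntl(memfd, F_ADD_SEALS, F_SEAL_SHRINK);
    >
    >         dev = open("/dev/udmabuf", O_RDWR);
    >         create.memfd = memfd;
    >         create.size  = size;
    >         return ioctl(dev, UDMABUF_CREATE, &create); /* a dma-buf fd */
    > }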
    So, we have to decide whether we introduce a new driver
    (say, under drivers/xen/xen-dma-buf) or extend the existing
    gntdev/balloon drivers to support dma-buf use-cases.

    Can anybody from the Xen community express their preference here?

    And I hope that there is no objection to having it all in the kernel,
    without going to user-space with VAs and back (device-X driver)
    >
    > Yes i915 and amdgpu and a few other drivers do have facilities to wrap
    > userspace memory into a gpu buffer object. But we don't allow those to be
    > exported to other drivers, because the core mm magic needed to make this
    > all work is way too tricky, even within the context of just 1 driver. And
    > dma-buf does not have the required callbacks and semantics to make it
    > work.
    > -Daniel
    >
    >>> Finally, this is indeed some memory, but a bit more [1]
    >>>> Also, (with my FreeBSD maintainer hat on) how is this going to translate
    >>>> to other OSes? So far the operations performed by the gntdev device
    >>>> are mostly OS-agnostic because they just map/unmap memory, and in fact
    >>>> they are implemented by both Linux and FreeBSD.
    >>> At the moment I can only see the Linux implementation and it seems
    >>> to be perfectly ok as we do not change Xen's APIs etc. and only
    >>> use the existing ones (remember, we only extend the gntdev/balloon
    >>> drivers, so all the changes are in the Linux kernel).
    >>> As a second option, we could leave the gntdev/balloon drivers
    >>> untouched and have the re-worked xen-zcopy driver be a separate entity,
    >>> say drivers/xen/dma-buf
    >>>>> implement a "wait" ioctl (wait for dma-buf->release): currently these are
    >>>>> DRM_XEN_ZCOPY_DUMB_FROM_REFS, DRM_XEN_ZCOPY_DUMB_TO_REFS and
    >>>>> DRM_XEN_ZCOPY_DUMB_WAIT_FREE (a hypothetical UAPI sketch follows this list)
    >>>>> 1.2. Xen balloon driver [6] to allow allocating contiguous buffers (not needed
    >>>>> by the current hyper-dmabuf, but a must for xen-zcopy use-cases)
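    >>>>>
    >>>>> A purely hypothetical sketch of what such a gntdev extension's UAPI
    >>>>> could look like; every name and field below is made up for
    >>>>> illustration:
    >>>>>
    >>>>> #include <linux/types.h>
    >>>>>
    >>>>> /* 1.1: create a dma-buf from grant refs (plus a "wait" companion) */
    >>>>> struct ioctl_gntdev_dmabuf_exp_from_refs {   /* hypothetical */
    >>>>>         __u32 flags;
    >>>>>         __u32 count;   /* number of grant refs */
    >>>>>         __u32 fd;      /* out: the exported dma-buf fd */
    >>>>>         __u32 domid;   /* domain that granted the pages */
    >>>>>         __u32 refs[1]; /* variable-size list of grant refs */
    >>>>> };
    >>>>>
    >>>>> struct ioctl_gntdev_dmabuf_exp_wait_released { /* hypothetical */
    >>>>>         __u32 fd;         /* dma-buf to wait on */
    >>>>>         __u32 wait_to_ms; /* as in DRM_XEN_ZCOPY_DUMB_WAIT_FREE */
    >>>>> };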
    >>>> I think this needs clarifying. In which memory space do you need those
    >>>> regions to be contiguous?
    >>> Use-case: Dom0 has a HW driver which only works with contig memory
    >>> and I want DomU to be able to directly write into that memory, thus
    >>> achieving zero copy
    >>>> Do they need to be contiguous in host physical memory, or guest
    >>>> physical memory?
    >>> Host
    >>>> If it's in guest memory space, isn't there any generic interface that
    >>>> you can use?
    >>>>
    >>>> If it's in host physical memory space, why do you need this buffer to
    >>>> be contiguous in host physical memory space? The IOMMU should hide all
    >>>> this.
    >>> There are drivers/HW which can only work with contig memory and,
    >>> if it is backed by an IOMMU, it still has to be contig in IPA
    >>> space (the real device doesn't know that it is actually IPA contig, not PA)
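    >>>
    >>> A kernel-side sketch of that Dom0 use-case (the device and size are
    >>> assumptions): dma_alloc_coherent() returns a buffer that is
    >>> contiguous from the device's point of view, IPA-contig behind an
    >>> IOMMU and PA-contig without one:
    >>>
    >>> #include <linux/dma-mapping.h>
    >>>
    >>> static void *alloc_contig_scanout(struct device *dev, size_t size,
    >>>                                   dma_addr_t *dma)
    >>> {
    >>>         /* the HW only sees a single contiguous range either way */
    >>>         return dma_alloc_coherent(dev, size, dma, GFP_KERNEL);
    >>> }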
    >> What's IPA contig?
    >>
    >> Thanks, Roger.
