Date: 2022-11-16
Subject: Re: [PATCH v3] udmabuf: add vmap and vunmap methods to udmabuf_ops
From: Christian König <christian.koenig@amd.com>
On 15.11.22 21:04, Lukasz Wiecaszek wrote:
> The reason behind this patch is associated with the videobuf2
> subsystem (or more generally with the V4L2 framework) and
> user-created dma buffers (udmabuf). In some circumstances, when
> dealing with V4L2_MEMORY_DMABUF buffers, the videobuf2 subsystem
> wants to use the dma_buf_vmap() method on the attached dma buffer.
> As udmabuf does not implement the .vmap operation, such a
> dma_buf_vmap() call naturally fails.
>
> videobuf2_common: __vb2_queue_alloc: allocated 3 buffers, 1 plane(s) each
> videobuf2_common: __prepare_dmabuf: buffer for plane 0 changed
> videobuf2_common: __prepare_dmabuf: failed to map dmabuf for plane 0
> videobuf2_common: __buf_prepare: buffer preparation failed: -14
>
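(Not part of the patch; for context only. A udmabuf is typically created
from user space via the UDMABUF_CREATE ioctl on /dev/udmabuf and then
handed to V4L2 as a V4L2_MEMORY_DMABUF buffer. The sketch below is
illustrative: error handling is omitted and the name and size are
arbitrary.)

#define _GNU_SOURCE
#include <fcntl.h>
#include <unistd.h>
#include <sys/ioctl.h>
#include <sys/mman.h>
#include <linux/udmabuf.h>

static int create_udmabuf(size_t size)		/* size must be page aligned */
{
	struct udmabuf_create create = { 0 };
	int memfd, devfd, buffd;

	memfd = memfd_create("vb2-frame", MFD_ALLOW_SEALING);
	ftruncate(memfd, size);
	fcntl(memfd, F_ADD_SEALS, F_SEAL_SHRINK);	/* required by udmabuf */

	devfd = open("/dev/udmabuf", O_RDWR);
	create.memfd  = memfd;
	create.flags  = UDMABUF_FLAGS_CLOEXEC;
	create.offset = 0;
	create.size   = size;
	buffd = ioctl(devfd, UDMABUF_CREATE, &create);	/* new dma-buf fd */

	close(devfd);
	/* the memfd is usually kept for the application's own CPU mapping */
	return buffd;	/* queued to V4L2 as a V4L2_MEMORY_DMABUF buffer */
}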
> The patch itself is straightforward. It adds implementations of the
> .vmap and .vunmap methods to 'struct dma_buf_ops udmabuf_ops'.
> The .vmap method uses vm_map_ram() to map the pages linearly into
> the kernel virtual address space; .vunmap removes the mapping
> created earlier by .vmap. All locking and 'vmapping counting' is
> done in dma_buf.c, so repeating it in .vmap/.vunmap would be
> redundant.
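
(Not part of the patch; a heavily simplified sketch, paraphrased rather
than verbatim kernel code, of what the dma-buf core wrapper in
drivers/dma-buf/dma-buf.c does. It shows the reservation-lock
requirement and the vmapping counter that make extra locking in
.vmap/.vunmap unnecessary.)

int dma_buf_vmap(struct dma_buf *dmabuf, struct iosys_map *map)
{
	struct iosys_map ptr;
	int ret;

	/* The importer must already hold the reservation lock. */
	dma_resv_assert_held(dmabuf->resv);

	if (!dmabuf->ops->vmap)
		return -EINVAL;

	if (dmabuf->vmapping_counter) {
		/* Already vmapped: hand out the cached mapping. */
		dmabuf->vmapping_counter++;
		*map = dmabuf->vmap_ptr;
		return 0;
	}

	ret = dmabuf->ops->vmap(dmabuf, &ptr);	/* e.g. vmap_udmabuf() */
	if (ret)
		return ret;

	/* Cache the mapping; .vunmap only runs on the last unmap. */
	dmabuf->vmap_ptr = ptr;
	dmabuf->vmapping_counter = 1;
	*map = ptr;
	return 0;
}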
>
> Signed-off-by: Lukasz Wiecaszek <lukasz.wiecaszek@gmail.com>

> Reported-by: kernel test robot <lkp@intel.com>

Please drop this line; the kernel test robot should only be mentioned if
the original report came from it.

And keep in mind that it might be necessary to implement begin/end cpu
access callbacks as well.
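
(As an illustration only, not taken from this patch: an importer that
wants CPU access through the vmap would typically bracket it roughly as
below, taking the reservation lock around dma_buf_vmap()/dma_buf_vunmap()
since the new udmabuf callbacks assert it is held. cpu_touch_dmabuf is a
hypothetical helper name.)

static int cpu_touch_dmabuf(struct dma_buf *dmabuf)
{
	struct iosys_map map;
	int ret;

	/* Flush/invalidate so the CPU sees coherent data. */
	ret = dma_buf_begin_cpu_access(dmabuf, DMA_FROM_DEVICE);
	if (ret)
		return ret;

	dma_resv_lock(dmabuf->resv, NULL);
	ret = dma_buf_vmap(dmabuf, &map);
	dma_resv_unlock(dmabuf->resv);
	if (ret)
		goto out_end;

	/* ... CPU reads/writes through map.vaddr ... */

	dma_resv_lock(dmabuf->resv, NULL);
	dma_buf_vunmap(dmabuf, &map);
	dma_resv_unlock(dmabuf->resv);

out_end:
	dma_buf_end_cpu_access(dmabuf, DMA_FROM_DEVICE);
	return ret;
}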

Apart from that, the patch is Acked-by: Christian König
<christian.koenig@amd.com>.

Regards,
Christian.

> ---
> v1: https://lore.kernel.org/linux-media/202211120352.G7WPASoP-lkp@intel.com/T/#t
> v2: https://lore.kernel.org/linux-media/20221114052944.GA7264@thinkpad-p72/T/#t
>
> v2 -> v3: Added .vunmap to 'struct dma_buf_ops udmabuf_ops'
> v1 -> v2: Patch prepared and tested against 6.1.0-rc2+
>
> drivers/dma-buf/udmabuf.c | 28 ++++++++++++++++++++++++++++
> 1 file changed, 28 insertions(+)
>
> diff --git a/drivers/dma-buf/udmabuf.c b/drivers/dma-buf/udmabuf.c
> index 283816fbd72f..740d6e426ee9 100644
> --- a/drivers/dma-buf/udmabuf.c
> +++ b/drivers/dma-buf/udmabuf.c
> @@ -13,6 +13,8 @@
> #include <linux/slab.h>
> #include <linux/udmabuf.h>
> #include <linux/hugetlb.h>
> +#include <linux/vmalloc.h>
> +#include <linux/iosys-map.h>
>
> static int list_limit = 1024;
> module_param(list_limit, int, 0644);
> @@ -60,6 +62,30 @@ static int mmap_udmabuf(struct dma_buf *buf, struct vm_area_struct *vma)
> 	return 0;
> }
>
> +static int vmap_udmabuf(struct dma_buf *buf, struct iosys_map *map)
> +{
> +	struct udmabuf *ubuf = buf->priv;
> +	void *vaddr;
> +
> +	dma_resv_assert_held(buf->resv);
> +
> +	vaddr = vm_map_ram(ubuf->pages, ubuf->pagecount, -1);
> +	if (!vaddr)
> +		return -EINVAL;
> +
> +	iosys_map_set_vaddr(map, vaddr);
> +	return 0;
> +}
> +
> +static void vunmap_udmabuf(struct dma_buf *buf, struct iosys_map *map)
> +{
> +	struct udmabuf *ubuf = buf->priv;
> +
> +	dma_resv_assert_held(buf->resv);
> +
> +	vm_unmap_ram(map->vaddr, ubuf->pagecount);
> +}
> +
> static struct sg_table *get_sg_table(struct device *dev, struct dma_buf *buf,
> 				     enum dma_data_direction direction)
> {
> @@ -162,6 +188,8 @@ static const struct dma_buf_ops udmabuf_ops = {
> 	.unmap_dma_buf = unmap_udmabuf,
> 	.release = release_udmabuf,
> 	.mmap = mmap_udmabuf,
> +	.vmap = vmap_udmabuf,
> +	.vunmap = vunmap_udmabuf,
> 	.begin_cpu_access = begin_cpu_udmabuf,
> 	.end_cpu_access = end_cpu_udmabuf,
> };
