From: John Stultz
Date: 2019-07-23
Subject: Re: [PATCH v6 2/5] dma-buf: heaps: Add heap helpers
On Thu, Jul 18, 2019 at 3:06 AM Christoph Hellwig <hch@infradead.org> wrote:
>
> > +void INIT_HEAP_HELPER_BUFFER(struct heap_helper_buffer *buffer,
> > +			     void (*free)(struct heap_helper_buffer *))
>
> Please use a lower case naming following the naming scheme for the
> rest of the file.

Yes! Apologies, this was a hold-over from when the initialization
function was an inline function in the style of
INIT_WORK/INIT_LIST_HEAD. That's no longer appropriate now that it's a
real function. I'll change it.
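i.e. something like this, keeping the signature:

void init_heap_helper_buffer(struct heap_helper_buffer *buffer,
			     void (*free)(struct heap_helper_buffer *));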

> > +static void *dma_heap_map_kernel(struct heap_helper_buffer *buffer)
> > +{
> > +	void *vaddr;
> > +
> > +	vaddr = vmap(buffer->pages, buffer->pagecount, VM_MAP, PAGE_KERNEL);
> > +	if (!vaddr)
> > +		return ERR_PTR(-ENOMEM);
> > +
> > +	return vaddr;
> > +}
>
> Unless I'm misreading the patches this is used for the same pages that
> also might be dma mapped. In this case you need to use
> flush_kernel_vmap_range and invalidate_kernel_vmap_range in the right
> places to ensure coherency between the vmap and device view. Please
> also document the buffer ownership, as this really can get complicated.

Forgive me, I wasn't familiar with those calls, but if so this seems
like it would apply to all dma-buf exporters, and I don't see any
similar flush_kernel_vmap_range() calls there (both functions are
seemingly only used by xfs, md and bio).

We do have the dma_heap_dma_buf_begin_cpu_access()/dma_heap_dma_buf_end_cpu_access()
hooks (see more on these below), which sync the buffers for each
attachment (via dma_sync_sg_for_cpu()/dma_sync_sg_for_device()) and
are used around cpu access to the buffers. Are you suggesting the
flush_kernel_vmap_range() call be added to those hooks, or are the
dma_sync_sg_for_* calls sufficient there?
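For concreteness, is something like the following roughly what you
have in mind? (Untested sketch; the hook, locking, and
vmap_cnt/vaddr bits are from this patch's helpers, modulo exact
struct layout, and the invalidate call is the addition:)

static int dma_heap_dma_buf_begin_cpu_access(struct dma_buf *dmabuf,
					     enum dma_data_direction direction)
{
	struct heap_helper_buffer *buffer = dmabuf->priv;
	struct dma_heaps_attachment *a;

	mutex_lock(&buffer->lock);
	/* make an existing kernel vmap see what the device wrote */
	if (buffer->vmap_cnt)
		invalidate_kernel_vmap_range(buffer->vaddr,
					     buffer->pagecount * PAGE_SIZE);
	list_for_each_entry(a, &buffer->attachments, list)
		dma_sync_sg_for_cpu(a->dev, a->table.sgl, a->table.nents,
				    direction);
	mutex_unlock(&buffer->lock);

	return 0;
}

with the mirror-image flush_kernel_vmap_range() call in the
end_cpu_access hook, before the dma_sync_sg_for_device() loop.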

> > +static vm_fault_t dma_heap_vm_fault(struct vm_fault *vmf)
> > +{
> > +	struct vm_area_struct *vma = vmf->vma;
> > +	struct heap_helper_buffer *buffer = vma->vm_private_data;
> > +
> > +	vmf->page = buffer->pages[vmf->pgoff];
> > +	get_page(vmf->page);
> > +
> > +	return 0;
> > +}
>
> Is there any exclusion between mmap / vmap and the device accessing
> the data? Without that you are going to run into a lot of coherency
> problems.

This has actually been a concern of mine recently, but at the higher
dma-buf core level. Conceptually, there are the
dma_buf_map_attachment() and dma_buf_unmap_attachment() calls drivers
use to map the buffer for an attached device, and the
dma_buf_begin_cpu_access() and dma_buf_end_cpu_access() calls which
are to be made around any cpu access to the mapped pages. These look
like serializing functions, but they don't actually seem to have any
enforcement mechanism to exclude parallel access.
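That is, nothing stops these two sequences from running concurrently
(illustrative fragment only; error handling omitted):

static void device_path(struct dma_buf_attachment *attach)
{
	struct sg_table *sgt;

	sgt = dma_buf_map_attachment(attach, DMA_BIDIRECTIONAL);
	/* ... device DMAs to/from the buffer ... */
	dma_buf_unmap_attachment(attach, sgt, DMA_BIDIRECTIONAL);
}

static void cpu_path(struct dma_buf *dmabuf)
{
	dma_buf_begin_cpu_access(dmabuf, DMA_FROM_DEVICE);
	/* ... cpu touches the pages via kmap/vmap/mmap ... */
	dma_buf_end_cpu_access(dmabuf, DMA_FROM_DEVICE);
}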

To me it seems like adding the exclusion between those operations
should be done at the dma-buf core level, and it would actually be
helpful for optimizing some of the cache maintenance rules w/ dma-buf.
Does this sound closer to what you're suggesting, or am I
misunderstanding your point?
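e.g. something like the below (purely hypothetical, just to
illustrate what I mean by doing it in the core; the cpu_access_count
field doesn't exist today):

int dma_buf_begin_cpu_access(struct dma_buf *dmabuf,
			     enum dma_data_direction direction)
{
	int ret = 0;

	mutex_lock(&dmabuf->lock);
	dmabuf->cpu_access_count++;		/* hypothetical field */
	if (dmabuf->ops->begin_cpu_access)
		ret = dmabuf->ops->begin_cpu_access(dmabuf, direction);
	if (ret)
		dmabuf->cpu_access_count--;
	mutex_unlock(&dmabuf->lock);

	return ret;
}

struct sg_table *dma_buf_map_attachment(struct dma_buf_attachment *attach,
					enum dma_data_direction direction)
{
	struct sg_table *sg_table;

	mutex_lock(&attach->dmabuf->lock);
	if (attach->dmabuf->cpu_access_count) {
		/* cpu access in flight: fail (or block) the device map */
		mutex_unlock(&attach->dmabuf->lock);
		return ERR_PTR(-EBUSY);
	}
	sg_table = attach->dmabuf->ops->map_dma_buf(attach, direction);
	mutex_unlock(&attach->dmabuf->lock);

	return sg_table;
}

with dma_buf_end_cpu_access() dropping the count (and probably a
waitqueue rather than -EBUSY).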

Again, I really appreciate the review and feedback!

Thanks so much!
-john
