Subject: Re: [PATCH] staging: android: ion: Restrict cache maintenance to dma mapped memory
On 02/09/2018 10:21 PM, Liam Mark wrote:
> The ION begin_cpu_access and end_cpu_access functions use the
> dma_sync_sg_for_cpu and dma_sync_sg_for_device APIs to perform cache
> maintenance.
>
> Currently it is possible to apply cache maintenance, via the
> begin_cpu_access and end_cpu_access APIs, to ION buffers which are not
> dma mapped.
>
> The dma sync sg APIs should not be called on sg lists which have not been
> dma mapped, as this can result in cache maintenance being applied to the
> wrong address. If an sg list has not been dma mapped then its dma_address
> field has not been populated, and some dma ops, such as swiotlb_dma_ops,
> use the dma_address field to calculate the address on which to apply
> cache maintenance.
>
> Fix the ION begin_cpu_access and end_cpu_access functions to only apply
> cache maintenance to buffers which have been dma mapped.
>
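For context, a minimal sketch of the misuse the commit message describes
(hypothetical code, not taken from the patch): sg_dma_address() is only
meaningful once dma_map_sg() has populated the list.

#include <linux/dma-mapping.h>
#include <linux/scatterlist.h>

/* Illustrative only: a device and an sg_table describing a buffer that
 * has not yet been dma mapped.
 */
static void cpu_access_sketch(struct device *dev, struct sg_table *table)
{
	/*
	 * Wrong: nothing has populated the entries' dma_address yet, so
	 * dma ops such as swiotlb, which derive the address to clean or
	 * invalidate from sg_dma_address(), operate on the wrong memory.
	 */
	dma_sync_sg_for_cpu(dev, table->sgl, table->nents, DMA_FROM_DEVICE);

	/*
	 * Correct: map first so each entry's dma_address is assigned,
	 * then perform the sync, and unmap when done.
	 */
	if (dma_map_sg(dev, table->sgl, table->nents, DMA_FROM_DEVICE)) {
		dma_sync_sg_for_cpu(dev, table->sgl, table->nents,
				    DMA_FROM_DEVICE);
		/* ... CPU access to the buffer ... */
		dma_unmap_sg(dev, table->sgl, table->nents, DMA_FROM_DEVICE);
	}
}
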
I think this looks okay. I was initially concerned about concurrency around
setting the dma_mapped flag, but I think that should be handled by the
caller synchronizing map/unmap/cpu_access calls (we might need to re-evaluate
this in the future).
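
As a rough illustration of the ordering relied on here (assumed
importer-side usage, not code from this thread): the attachment is mapped
before any cpu_access call, and cpu_access does not race with unmap, so
the dma_mapped flag is stable while the attachments list is walked.

#include <linux/dma-buf.h>
#include <linux/err.h>

static int importer_use_buffer(struct dma_buf *dmabuf, struct device *dev)
{
	struct dma_buf_attachment *att;
	struct sg_table *sgt;
	int ret;

	att = dma_buf_attach(dmabuf, dev);
	if (IS_ERR(att))
		return PTR_ERR(att);

	/* ion_map_dma_buf() runs here and sets a->dma_mapped. */
	sgt = dma_buf_map_attachment(att, DMA_BIDIRECTIONAL);
	if (IS_ERR(sgt)) {
		dma_buf_detach(dmabuf, att);
		return PTR_ERR(sgt);
	}

	/* With the fix, this attachment is now eligible for syncing. */
	ret = dma_buf_begin_cpu_access(dmabuf, DMA_FROM_DEVICE);
	if (!ret) {
		/* ... CPU reads the buffer ... */
		dma_buf_end_cpu_access(dmabuf, DMA_FROM_DEVICE);
	}

	/* ion_unmap_dma_buf() clears a->dma_mapped again. */
	dma_buf_unmap_attachment(att, sgt, DMA_BIDIRECTIONAL);
	dma_buf_detach(dmabuf, att);
	return ret;
}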

I would like to hold off on queuing this for just a little bit until I
finish working on the Ion unit test (doing this in the complete opposite
order, of course). I'm assuming this passed your internal tests, Liam?

Thanks,
Laura

> Fixes: 2a55e7b5e544 ("staging: android: ion: Call dma_map_sg for syncing and mapping")
> Signed-off-by: Liam Mark <lmark@codeaurora.org>
> ---
> drivers/staging/android/ion/ion.c | 16 ++++++++++++----
> 1 file changed, 12 insertions(+), 4 deletions(-)
>
> diff --git a/drivers/staging/android/ion/ion.c b/drivers/staging/android/ion/ion.c
> index f480885e346b..e5df5272823d 100644
> --- a/drivers/staging/android/ion/ion.c
> +++ b/drivers/staging/android/ion/ion.c
> @@ -214,6 +214,7 @@ struct ion_dma_buf_attachment {
> struct device *dev;
> struct sg_table *table;
> struct list_head list;
> + bool dma_mapped;
> };
>
> static int ion_dma_buf_attach(struct dma_buf *dmabuf, struct device *dev,
> @@ -235,6 +236,7 @@ static int ion_dma_buf_attach(struct dma_buf *dmabuf, struct device *dev,
>
> a->table = table;
> a->dev = dev;
> + a->dma_mapped = false;
> INIT_LIST_HEAD(&a->list);
>
> attachment->priv = a;
> @@ -272,6 +274,7 @@ static struct sg_table *ion_map_dma_buf(struct dma_buf_attachment *attachment,
> direction))
> return ERR_PTR(-ENOMEM);
>
> + a->dma_mapped = true;
> return table;
> }
>
> @@ -279,7 +282,10 @@ static void ion_unmap_dma_buf(struct dma_buf_attachment *attachment,
> struct sg_table *table,
> enum dma_data_direction direction)
> {
> + struct ion_dma_buf_attachment *a = attachment->priv;
> +
> dma_unmap_sg(attachment->dev, table->sgl, table->nents, direction);
> + a->dma_mapped = false;
> }
>
> static int ion_mmap(struct dma_buf *dmabuf, struct vm_area_struct *vma)
> @@ -345,8 +351,9 @@ static int ion_dma_buf_begin_cpu_access(struct dma_buf *dmabuf,
>
> mutex_lock(&buffer->lock);
> list_for_each_entry(a, &buffer->attachments, list) {
> - dma_sync_sg_for_cpu(a->dev, a->table->sgl, a->table->nents,
> - direction);
> + if (a->dma_mapped)
> + dma_sync_sg_for_cpu(a->dev, a->table->sgl,
> + a->table->nents, direction);
> }
> mutex_unlock(&buffer->lock);
>
> @@ -367,8 +374,9 @@ static int ion_dma_buf_end_cpu_access(struct dma_buf *dmabuf,
>
> mutex_lock(&buffer->lock);
> list_for_each_entry(a, &buffer->attachments, list) {
> - dma_sync_sg_for_device(a->dev, a->table->sgl, a->table->nents,
> - direction);
> + if (a->dma_mapped)
> + dma_sync_sg_for_device(a->dev, a->table->sgl,
> + a->table->nents, direction);
> }
> mutex_unlock(&buffer->lock);
>
>
