Subject: Re: [PATCH] dma-direct: avoid redundant memory sync for swiotlb
On Wed, Apr 13, 2022 at 09:02:02AM +0800, Chao Gao wrote:
> dma_direct_sync_single_for_cpu() also calls arch_sync_dma_for_cpu_all()
> and arch_dma_mark_clean() in some cases. If SWIOTLB does sync internally,
> should these two functions be called by SWIOTLB?
>
> Personally, it might be better if swiotlb could just focus on bounce buffer
> alloc/free. Adding more DMA coherence logic into swiotlb would make it
> a little more complicated.
>
> How about an open-coded version of dma_direct_sync_single_for_cpu
> in dma_direct_unmap_page with swiotlb_sync_single_for_cpu replaced by
> swiotlb_tbl_unmap_single?
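
For reference, a rough sketch of the open-coded path suggested above,
based on the shape of dma_direct_unmap_page() and
dma_direct_sync_single_for_cpu() in kernel/dma/direct.h around v5.18-rc.
This is illustrative only and untested; the helpers and their exact
signatures are assumed from mainline at that point:

/* Sketch: fold the swiotlb copy-back into the unmap path so the bounce
 * buffer is copied back to the original buffer exactly once. */
static inline void dma_direct_unmap_page(struct device *dev, dma_addr_t addr,
		size_t size, enum dma_data_direction dir, unsigned long attrs)
{
	phys_addr_t phys = dma_to_phys(dev, addr);

	if (!(attrs & DMA_ATTR_SKIP_CPU_SYNC) && !dev_is_dma_coherent(dev)) {
		/* Cache maintenance still targets the original buffer. */
		arch_sync_dma_for_cpu(phys, size, dir);
		arch_sync_dma_for_cpu_all();
	}

	if (unlikely(is_swiotlb_buffer(dev, phys)))
		/* Copy back and free the bounce slot in one step. */
		swiotlb_tbl_unmap_single(dev, phys, size, dir, attrs);

	if (!(attrs & DMA_ATTR_SKIP_CPU_SYNC) && dir == DMA_FROM_DEVICE)
		arch_dma_mark_clean(phys, size);
}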

I don't think the swiotlb and non-coherent case ever fully worked.
Before the merge of swiotlb into dma-direct they obviously were
mutually exclusive, and even now all the cache maintenance is done
on the physical address of the original data, not the swiotlb buffer.
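
For context, this is roughly the current shape of
dma_direct_sync_single_for_cpu() (paraphrased from kernel/dma/direct.h,
around v5.18-rc, not a verbatim copy): the arch cache maintenance is
applied to paddr, the original buffer, while only the copy in
swiotlb_sync_single_for_cpu() touches the bounce slot.

static inline void dma_direct_sync_single_for_cpu(struct device *dev,
		dma_addr_t addr, size_t size, enum dma_data_direction dir)
{
	/* Physical address of the original buffer, not of the swiotlb slot. */
	phys_addr_t paddr = dma_to_phys(dev, addr);

	if (!dev_is_dma_coherent(dev)) {
		arch_sync_dma_for_cpu(paddr, size, dir);
		arch_sync_dma_for_cpu_all();
	}

	/* The bounce-buffer copy-back happens here, for the swiotlb slot. */
	if (unlikely(is_swiotlb_buffer(dev, paddr)))
		swiotlb_sync_single_for_cpu(dev, paddr, size, dir);

	if (dir == DMA_FROM_DEVICE)
		arch_dma_mark_clean(paddr, size);
}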

If we want to fix that properly, all the arch DMA calls will need to
move into swiotlb, but that is a much bigger patch.
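
Purely as an illustration of that direction (not an actual patch), moving
the arch calls would mean the swiotlb copy-back path maintains the cache
lines of the bounce slot it actually copies from; swiotlb_bounce_for_cpu()
below is a hypothetical helper name, and swiotlb_bounce() is the existing
internal copy routine:

/* Hypothetical: sync the bounce slot the device wrote before the CPU
 * copies its contents back to the original buffer. */
static void swiotlb_bounce_for_cpu(struct device *dev, phys_addr_t tlb_addr,
		size_t size, enum dma_data_direction dir)
{
	if (!dev_is_dma_coherent(dev))
		arch_sync_dma_for_cpu(tlb_addr, size, dir);

	swiotlb_bounce(dev, tlb_addr, size, DMA_FROM_DEVICE);
}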

So for now I'd be happy with the one-liner presented here, but
eventually the whole area could use an overhaul.
