From: Max Filippov <jcmvbkbc@gmail.com>
Date: Mon, 27 Mar 2023
Subject: Re: [PATCH 02/21] xtensa: dma-mapping: use normal cache invalidation rules
On Mon, Mar 27, 2023 at 5:14 AM Arnd Bergmann <arnd@kernel.org> wrote:
>
> From: Arnd Bergmann <arnd@arndb.de>
>
> xtensa is one of the platforms that has both write-back and write-through
> caches, and needs to account for both in its DMA mapping operations.
>
> It does this through a set of operations that is different from any
> other architecture. This is not a problem by itself, but it makes it
> rather hard to figure out whether this is correct, and to unify this
> implementation with the others.
>
> Change the semantics to the usual ones for non-speculating CPUs:
>
> - On DMA_TO_DEVICE, call __flush_dcache_range() to perform the
>   writeback even on writethrough caches, where this is a nop.
>
> - On DMA_FROM_DEVICE, invalidate the mapping before the DMA rather
>   than afterwards.
>
> - On DMA_BIDIRECTIONAL, combine the pre-writeback with the
>   post-invalidate into a call to __flush_invalidate_dcache_range()
>   that turns into a simple invalidate on writethrough caches.
>
> Signed-off-by: Arnd Bergmann <arnd@arndb.de>
> ---
>  arch/xtensa/Kconfig                  |  1 -
>  arch/xtensa/include/asm/cacheflush.h |  6 +++---
>  arch/xtensa/kernel/pci-dma.c         | 29 +++++-----------------------
>  3 files changed, 8 insertions(+), 28 deletions(-)
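
For anyone reading along, a rough sketch (not part of the patch) of how
the three cases above map onto the generic arch_sync_dma_for_device()
hook, using the xtensa cache primitives named in the description. It
assumes a lowmem buffer so that phys_to_virt() gives a usable kernel
virtual address; the real code walks the buffer page by page to handle
highmem:

#include <linux/dma-map-ops.h>
#include <linux/io.h>
#include <asm/cacheflush.h>

void arch_sync_dma_for_device(phys_addr_t paddr, size_t size,
                              enum dma_data_direction dir)
{
        unsigned long vaddr = (unsigned long)phys_to_virt(paddr);

        switch (dir) {
        case DMA_TO_DEVICE:
                /* write back dirty lines; a nop on writethrough caches */
                __flush_dcache_range(vaddr, size);
                break;
        case DMA_FROM_DEVICE:
                /* drop stale lines before the device writes the buffer */
                __invalidate_dcache_range(vaddr, size);
                break;
        case DMA_BIDIRECTIONAL:
                /*
                 * write back, then invalidate; on writethrough caches
                 * this degenerates to a plain invalidate
                 */
                __flush_invalidate_dcache_range(vaddr, size);
                break;
        default:
                break;
        }
}

/*
 * With the invalidate moved before the DMA, a non-speculating CPU needs
 * no arch_sync_dma_for_cpu() hook at all (presumably what the one-line
 * Kconfig removal in the diffstat is about).
 */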

Reviewed-by: Max Filippov <jcmvbkbc@gmail.com>

--
Thanks.
-- Max
