From: Christophe Leroy <christophe.leroy@csgroup.eu>
Subject: Re: [PATCH] net: fs_enet: sync rx dma buffer before reading
Date: 2022-05-20


On 20/05/2022 at 14:35, Måns Rullgård wrote:
> Christophe Leroy <christophe.leroy@csgroup.eu> writes:
>
>>> On 19/05/2022 at 21:24, Mans Rullgard wrote:
>>> The dma_sync_single_for_cpu() call must precede reading the received
>>> data. Fix this.
>>
>> See original commit 070e1f01827c. It explicitly says that the cache
>> must be invalidated _AFTER_ the copy.
>>
>> The cache is initially invalidated by dma_map_single(), so before the
>> copy the cache is already clean.
>>
>> After the copy, the data is in the cache. In order to allow re-use of
>> the skb, it must be put back in the same condition as before, that is,
>> the cache must be invalidated so that it is in the same state as after
>> dma_map_single().
>>
>> So I think your change is wrong.
>
> OK, looking at it more closely, the change is at least unnecessary since
> there will be a cache invalidation between each use of the buffer either
> way. Please disregard the patch. Sorry for the noise.
>

I also looked into it more deeply.

Indeed, this was implemented in kernel 4.8 or 4.9. At that time,
dma_unmap_single() was a no-op: it did not do any sync/invalidation at
all, and invalidation was done only at mapping time. So when we were
reusing the skb, it was necessary to invalidate the cache _AFTER_ the
copy, as if it were a new mapping.

Today a sync is done at both map and unmap, so it doesn't really matter
whether we do the invalidation before or after the copy when we re-use
the skb.
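
For reference, this is roughly the canonical placement of the syncs for
a copy-and-recycle RX path as described by the generic DMA API
documentation (Documentation/core-api/dma-api.rst). It is only an
illustrative sketch, not the actual fs_enet code; the function and
variable names (rx_copy_recycle, ndev, dma, len) are made up:

#include <linux/dma-mapping.h>
#include <linux/etherdevice.h>
#include <linux/netdevice.h>
#include <linux/skbuff.h>

/*
 * Illustrative sketch only -- not the fs_enet code.  Shows the generic
 * sync placement for an RX buffer that is copied out and then handed
 * straight back to the device.
 */
static void rx_copy_recycle(struct net_device *ndev, struct sk_buff *skb,
			    dma_addr_t dma, unsigned int len)
{
	struct sk_buff *copy;

	/* Make the data written by the device visible to the CPU. */
	dma_sync_single_for_cpu(ndev->dev.parent, dma, len, DMA_FROM_DEVICE);

	copy = netdev_alloc_skb(ndev, len + NET_IP_ALIGN);
	if (copy) {
		skb_reserve(copy, NET_IP_ALIGN);
		skb_copy_to_linear_data(copy, skb->data, len);
		skb_put(copy, len);
		copy->protocol = eth_type_trans(copy, ndev);
		netif_rx(copy);
	}

	/* Give the original buffer back to the device for the next packet. */
	dma_sync_single_for_device(ndev->dev.parent, dma, len, DMA_FROM_DEVICE);
}

With this placement, the buffer ends up invalidated before the device
writes to it again, which matches the state it was in right after
dma_map_single(); doing the invalidation after the copy instead, as the
original commit did, achieves the same end state.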

Christophe