From: Christophe Leroy <christophe.leroy@csgroup.eu>
Subject: Re: [PATCH] net: fs_enet: sync rx dma buffer before reading
Date: Sat, 21 May 2022


On 20/05/2022 at 19:43, Jakub Kicinski wrote:
> On Fri, 20 May 2022 12:54:56 +0000 Christophe Leroy wrote:
>> On 20/05/2022 at 14:35, Måns Rullgård wrote:
>>> Christophe Leroy <christophe.leroy@csgroup.eu> writes:
>>>> See original commit 070e1f01827c. It explicitly says that the cache
>>>> must be invalidated _AFTER_ the copy.
>>>>
>>>> The cache is initially invalidated by dma_map_single(), so before the
>>>> copy the cache is already clean.
>>>>
>>>> After the copy, data is in the cache. In order to allow re-use of the
>>>> skb, it must be put back in the same condition as before; in other
>>>> words, the cache must be invalidated so that it is in the same state
>>>> as after dma_map_single().
>>>>
>>>> So I think your change is wrong.
>>>
>>> OK, looking at it more closely, the change is at least unnecessary since
>>> there will be a cache invalidation between each use of the buffer either
>>> way. Please disregard the patch. Sorry for the noise.
>>>
>>
>> I also looked deeper.
>>
>> Indeed, it was implemented in kernel 4.9 or 4.8. At that time,
>> dma_unmap_single() was a no-op: it did no sync/invalidation at all,
>> and invalidation was done only at mapping time. So when we were
>> reusing the skb, it was necessary to invalidate the cache _AFTER_ the
>> copy, as if it were a new mapping.
>>
>> Today a sync is done at both map and unmap, so it doesn't really matter
>> whether we do the invalidation before or after the copy when we re-use
>> the skb.
>
> Hm, I think the patch is necessary, sorry if you're also saying that
> and I'm misinterpreting.

Well, I'm saying the contrary.

On mainline, the patch can be applied as is; it won't do any harm.

However, if it gets applied to kernel 4.9 (where it would land based on
the Fixes: tag), it will break the driver at least on powerpc 8xx.

In 4.9, dma_direct_map_page() invalidates the cache, but
dma_direct_unmap_page() is a no-op. That means when we re-use an skb,
as fs_enet does when the received packet is small, the cache must be
invalidated _AFTER_ reading the received data.

The driver works like this:

    allocate an SKB with the largest possible packet size
    dma_direct_map_page()  ==> cache invalidation
    loop forever
        wait for some received data in DMA
        if (received packet is small)
            allocate a new SKB with the size of the received packet
            copy received data into the new SKB
            hand the new SKB to the network layer
            invalidate the cache
        else
            dma_direct_unmap_page()  ==> no-op
            hand the SKB to the network layer
            allocate a new SKB with the largest possible packet size
            dma_direct_map_page()  ==> cache invalidation
        endif
    endloop
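
In C, that small-packet branch has to end up looking roughly like this
(a minimal sketch of the 4.9-era logic, not the actual fs_enet code;
RX_BUF_SIZE and rx_copy_small() are made-up names, and the protocol/
device setup a real driver needs is omitted):

#include <linux/dma-mapping.h>
#include <linux/netdevice.h>
#include <linux/skbuff.h>

#define RX_BUF_SIZE 1536	/* placeholder: largest possible packet */

/*
 * Minimal sketch of the small-packet RX path under the 4.9-era DMA
 * behaviour (invalidate on map, no-op unmap). Illustrative only.
 */
static void rx_copy_small(struct device *dev, struct sk_buff *skb,
			  dma_addr_t dma, int pkt_len)
{
	struct sk_buff *skbn = netdev_alloc_skb(NULL, pkt_len);

	if (!skbn)
		return;

	/* The copy pulls the DMA buffer contents into the data cache. */
	skb_copy_to_linear_data(skbn, skb->data, pkt_len);
	skb_put(skbn, pkt_len);

	/*
	 * Invalidate _AFTER_ the copy: the original skb stays mapped and
	 * is re-used for the next receive, so its cache lines must be
	 * clean again, exactly as they were right after dma_map_single().
	 * In 4.9, where unmap is a no-op, this sync is the only thing
	 * that restores that state.
	 */
	dma_sync_single_for_device(dev, dma, RX_BUF_SIZE, DMA_FROM_DEVICE);

	netif_rx(skbn);		/* hand the copy to the network stack */
}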


If you don't invalidate the cache _AFTER_ the copy, you have stale data
in the cache when you later hand a non-small received packet to the
network stack.
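
To make the failure concrete, without that invalidation the sequence
would be (a hypothetical trace, assuming a non-coherent data cache):

    1. small packet A arrives; DMA writes it into the buffer
    2. CPU copies A out; the buffer contents now sit in the data cache
    3. no invalidation, so the cache lines covering the buffer stay valid
    4. large packet B arrives; DMA writes it to memory behind the cache
    5. the SKB is handed to the stack; the CPU reads the stale cached
       copy of A instead of B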

Invalidating _BEFORE_ the copy is useless as it has already been
invalidated at mapping time.



In mainline, the DMA handling has been made generic, and cache
invalidation is performed at both mapping and unmapping (which is, by
the way, sub-optimal), so by chance it would still work after the patch.
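
For reference, the generic code behaves roughly like this on a
cache-incoherent platform (paraphrased from mainline's
kernel/dma/direct.c and kernel/dma/direct.h, trimmed to the cache
maintenance calls; swiotlb and attrs handling omitted, so this is not
the literal source):

#include <linux/dma-direct.h>
#include <linux/dma-map-ops.h>

dma_addr_t dma_direct_map_page(struct device *dev, struct page *page,
			       unsigned long offset, size_t size,
			       enum dma_data_direction dir, unsigned long attrs)
{
	phys_addr_t phys = page_to_phys(page) + offset;

	if (!dev_is_dma_coherent(dev))
		arch_sync_dma_for_device(phys, size, dir);	/* invalidate */

	return phys_to_dma(dev, phys);
}

void dma_direct_unmap_page(struct device *dev, dma_addr_t addr, size_t size,
			   enum dma_data_direction dir, unsigned long attrs)
{
	if (!dev_is_dma_coherent(dev))
		arch_sync_dma_for_cpu(dma_to_phys(dev, addr), size, dir);	/* invalidate again */
}

So on mainline, every use of the buffer is preceded by an invalidation
either way, which is why the ordering around the copy no longer matters
there.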

>
> Without the dma_sync_single_for_cpu(), if swiotlb is used, the data
> will not be copied back into the original buffer.

I don't know how SWIOTLB works, or even what it is. Do any of the
microcontrollers embedding Freescale Ethernet use it at all?

Christophe