Subject: RE: [PATCH net-next v5 10/10] net: axienet: Introduce dmaengine support
> -----Original Message-----
> From: Jakub Kicinski <kuba@kernel.org>
> Sent: Monday, August 14, 2023 9:00 PM
> To: Pandey, Radhey Shyam <radhey.shyam.pandey@amd.com>
> Cc: vkoul@kernel.org; robh+dt@kernel.org;
> krzysztof.kozlowski+dt@linaro.org; conor+dt@kernel.org; Simek, Michal
> <michal.simek@amd.com>; davem@davemloft.net; edumazet@google.com;
> pabeni@redhat.com; linux@armlinux.org.uk; dmaengine@vger.kernel.org;
> devicetree@vger.kernel.org; linux-arm-kernel@lists.infradead.org; linux-
> kernel@vger.kernel.org; netdev@vger.kernel.org; git (AMD-Xilinx)
> <git@amd.com>
> Subject: Re: [PATCH net-next v5 10/10] net: axienet: Introduce dmaengine
> support
>
> On Sat, 12 Aug 2023 15:27:19 +0000 Pandey, Radhey Shyam wrote:
> > > Drop on error, you're not stopping the queue correctly, just drop,
> > > return OK and avoid bugs.
> >
> > As I understand it, returning NETDEV_TX_OK means the driver took care
> > of the packet. So, in line with the non-dmaengine xmit
> > (axienet_start_xmit_legacy), should we stop the queue and return
> > TX_BUSY?
>
> You should only return BUSY if there is no space. All other errors
> should lead to drops and an increment of tx_errors. Otherwise a
> problem with handling a single packet may stall the NIC forever.
> It is somewhat confusing that we return TX_OK in that case, but it
> is what it is.
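
A minimal sketch of that policy, just to confirm my understanding; the
two helpers below (axienet_dma_tx_ring_space(),
axienet_dma_map_and_submit()) are placeholders, not the driver's actual
functions:

#include <linux/netdevice.h>
#include <linux/skbuff.h>

static netdev_tx_t axienet_start_xmit_dmaengine(struct sk_buff *skb,
						struct net_device *ndev)
{
	/* Only "no room in the ring" pushes back on the stack. */
	if (!axienet_dma_tx_ring_space(ndev, skb_shinfo(skb)->nr_frags + 1)) {
		netif_stop_queue(ndev);
		return NETDEV_TX_BUSY;	/* the stack will retry this skb */
	}

	if (axienet_dma_map_and_submit(ndev, skb)) {
		/* Any other failure: count it, drop the skb, and still
		 * return TX_OK so one bad packet cannot stall the queue.
		 */
		ndev->stats.tx_errors++;
		dev_kfree_skb_any(skb);
	}

	return NETDEV_TX_OK;
}
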
>
> > > Why create a cache?
> > > Isn't it cleaner to create a fake ring buffer of sgl? Most packets
> > > will not have MAX_SKB_FRAGS of memory. On a ring buffer you can use
> > > only as many sg entries as the packet requires. Also no need to
> > > alloc/free.
> >
> > The kmem_cache is used with the intent of using the slab cache
> > interface and reusing objects in the kernel. The slab cache maintains
> > a cache of objects: when we free an object, instead of deallocating
> > it, the cache takes it back. Next time we want to create a new
> > object, the slab cache hands us one from that pool.
> >
> > If we maintain a custom circular ring buffer (struct circ_buf) we
> > have to create two such rings, one for TX and the other for RX. For
> > multichannel this multiplies by the number of queues. We also have to
> > ensure proper occupancy checks and head/tail pointer updates.
> >
> > With a kmem_cache pool we offload the queue maintenance ops to the
> > framework, with the benefit of optimized alloc/dealloc. Let me know
> > if it looks functionally fine and whether I can retain it for this
> > baseline dmaengine support version.
>
> The kmem_cache is not the worst possible option, but note that the
> objects you're allocating (with zeroing) are 512+ bytes. That's
> pretty large, when most packets will not have the full 16 fragments.
> A ring buffer would let you better match the allocation size to
> the packets. Not to mention that it can be done fully locklessly.
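
For reference, the kmem_cache approach under discussion amounts to
roughly the following (the structure layout and names are illustrative,
not the exact code from the patch):

#include <linux/skbuff.h>
#include <linux/scatterlist.h>
#include <linux/slab.h>

/* Illustrative per-packet object; with the sgl sized for the worst case
 * of MAX_SKB_FRAGS fragments it comes to 512+ bytes per allocation.
 */
struct axi_skbuf_dma_desc {
	struct scatterlist	sgl[MAX_SKB_FRAGS + 1];
	struct sk_buff		*skb;
};

static struct kmem_cache *axi_desc_cache;

static int axi_desc_cache_init(void)
{
	axi_desc_cache = kmem_cache_create("axienet_dma_desc",
					   sizeof(struct axi_skbuf_dma_desc),
					   0, 0, NULL);
	return axi_desc_cache ? 0 : -ENOMEM;
}

/* xmit path: one zeroed object per packet, returned on TX completion. */
static struct axi_skbuf_dma_desc *axi_desc_get(void)
{
	return kmem_cache_zalloc(axi_desc_cache, GFP_ATOMIC);
}

static void axi_desc_put(struct axi_skbuf_dma_desc *d)
{
	kmem_cache_free(axi_desc_cache, d);
}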

I modified the implementation to use a circular ring buffer for TX
and RX. It seems to be working in initial testing, and I am now
running perf tests.
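
Roughly, the new TX side follows the standard circ_buf producer/consumer
pattern; the sizes and names below are from my sketch (reusing the
illustrative axi_skbuf_dma_desc object above), not necessarily what v6
will look like:

#include <linux/circ_buf.h>
#include <linux/compiler.h>
#include <asm/barrier.h>

#define AXI_TX_RING_SIZE	128	/* power of two, for CIRC_SPACE() */

struct axi_tx_ring {
	struct axi_skbuf_dma_desc	*descs[AXI_TX_RING_SIZE];
	int				head;	/* producer: xmit path     */
	int				tail;	/* consumer: TX completion */
};

/* Producer side (xmit): claim the next free slot, or report "ring full"
 * so the caller can stop the queue and return NETDEV_TX_BUSY.
 */
static struct axi_skbuf_dma_desc *axi_tx_ring_get(struct axi_tx_ring *r)
{
	int head = r->head;
	int tail = READ_ONCE(r->tail);

	if (!CIRC_SPACE(head, tail, AXI_TX_RING_SIZE))
		return NULL;

	return r->descs[head];
}

static void axi_tx_ring_push(struct axi_tx_ring *r)
{
	/* Publish the filled slot to the completion handler. */
	smp_store_release(&r->head, (r->head + 1) & (AXI_TX_RING_SIZE - 1));
}

/* Consumer side (dmaengine TX completion callback). */
static struct axi_skbuf_dma_desc *axi_tx_ring_pop(struct axi_tx_ring *r)
{
	int head = smp_load_acquire(&r->head);
	int tail = r->tail;
	struct axi_skbuf_dma_desc *d;

	if (!CIRC_CNT(head, tail, AXI_TX_RING_SIZE))
		return NULL;

	d = r->descs[tail];
	smp_store_release(&r->tail, (tail + 1) & (AXI_TX_RING_SIZE - 1));
	return d;
}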

Just one question on when to submit v6: should I wait until the
dmaengine patches ([01/10]-[07/10]) are part of net-next, or can I
send it now?

Thanks,
Radhey
