From: Haiyang Zhang <haiyangz@microsoft.com>
Subject: RE: [PATCH V3,net-next, 2/4] net: mana: Refactor RX buffer allocation code to prepare for various MTU

> -----Original Message-----
> From: Leon Romanovsky <leon@kernel.org>
> Sent: Thursday, April 13, 2023 9:04 AM
> To: Haiyang Zhang <haiyangz@microsoft.com>
> Cc: linux-hyperv@vger.kernel.org; netdev@vger.kernel.org; Dexuan Cui
> <decui@microsoft.com>; KY Srinivasan <kys@microsoft.com>; Paul Rosswurm
> <paulros@microsoft.com>; olaf@aepfle.de; vkuznets@redhat.com;
> davem@davemloft.net; wei.liu@kernel.org; edumazet@google.com;
> kuba@kernel.org; pabeni@redhat.com; Long Li <longli@microsoft.com>;
> ssengar@linux.microsoft.com; linux-rdma@vger.kernel.org;
> daniel@iogearbox.net; john.fastabend@gmail.com; bpf@vger.kernel.org;
> ast@kernel.org; Ajay Sharma <sharmaajay@microsoft.com>;
> hawk@kernel.org; linux-kernel@vger.kernel.org
> Subject: Re: [PATCH V3,net-next, 2/4] net: mana: Refactor RX buffer allocation
> code to prepare for various MTU
>
> On Wed, Apr 12, 2023 at 02:16:01PM -0700, Haiyang Zhang wrote:
> > Move out common buffer allocation code from mana_process_rx_cqe() and
> > mana_alloc_rx_wqe() to helper functions.
> > Refactor related variables so they can be changed in one place, and buffer
> > sizes are in sync.
> >
> > Signed-off-by: Haiyang Zhang <haiyangz@microsoft.com>
> > Reviewed-by: Jesse Brandeburg <jesse.brandeburg@intel.com>
> > ---
> > V3:
> > Refactored into multiple patches for readability. Suggested by Jacob Keller.
> >
> > V2:
> > Refactored into multiple patches for readability. Suggested by Yunsheng Lin.
> >
> > ---
> >  drivers/net/ethernet/microsoft/mana/mana_en.c | 154 ++++++++++--------
> >  include/net/mana/mana.h                       |   6 +-
> >  2 files changed, 91 insertions(+), 69 deletions(-)
>
> <...>
>
> > +static void *mana_get_rxfrag(struct mana_rxq *rxq, struct device *dev,
> > + dma_addr_t *da, bool is_napi)
> > +{
> > + struct page *page;
> > + void *va;
> > +
> > + /* Reuse XDP dropped page if available */
> > + if (rxq->xdp_save_va) {
> > + va = rxq->xdp_save_va;
> > + rxq->xdp_save_va = NULL;
> > + } else {
> > + page = dev_alloc_page();
>
> Documentation/networking/page_pool.rst
>  10 Basic use involves replacing alloc_pages() calls with the
>  11 page_pool_alloc_pages() call. Drivers should use page_pool_dev_alloc_pages()
>  12 replacing dev_alloc_pages().
>
> General question, is this sentence applicable to all new code or only
> for XDP related paths?

Quote from the context before that sentence --

=============
Page Pool API
=============
The page_pool allocator is optimized for the XDP mode that uses one frame
per-page, but it can fallback on the regular page allocator APIs.
Basic use involves replacing alloc_pages() calls with the
page_pool_alloc_pages() call. Drivers should use page_pool_dev_alloc_pages()
replacing dev_alloc_pages().

--unquote

So the page pool is optimized for XDP, and that sentence applies to drivers
that have already set up a page pool for XDP optimization:

static inline struct page *page_pool_dev_alloc_pages(struct page_pool *pool) /* needs a pool set up first */

Back to our mana driver: we don't have a page pool set up yet (we will consider it in the future),
so we cannot call page_pool_dev_alloc_pages(pool) in this place yet.
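
For reference, a rough sketch of what that future adoption could look like, if/when we
add a per-RX-queue page pool. The field rxq->page_pool, the helper names, and the use of
rxq->num_rx_buf as the pool size below are hypothetical, purely for illustration:

	/* Hypothetical sketch only, not part of this patch: rxq->page_pool and
	 * these helpers do not exist in the driver today.
	 */
	#include <net/page_pool.h>

	static int mana_create_page_pool(struct mana_rxq *rxq, struct device *dev)
	{
		struct page_pool_params pprm = {
			.order = 0,			/* one page per RX fragment */
			.pool_size = rxq->num_rx_buf,	/* assumed ring-size field */
			.nid = NUMA_NO_NODE,
			.dev = dev,
		};

		rxq->page_pool = page_pool_create(&pprm);
		if (IS_ERR(rxq->page_pool)) {
			int err = PTR_ERR(rxq->page_pool);

			rxq->page_pool = NULL;
			return err;
		}
		return 0;
	}

	/* With a pool in place, the dev_alloc_page() call in mana_get_rxfrag()
	 * could become a pool allocation, as the documentation suggests.
	 */
	static struct page *mana_pp_alloc_frag(struct mana_rxq *rxq)
	{
		return page_pool_dev_alloc_pages(rxq->page_pool);
	}

The buffer free path would also have to return pages through page_pool_put_full_page()
instead of put_page(), which is part of what we still need to work out before switching over.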

Thanks,
- Haiyang
