Date: 2024-05-08
From: Pavel Begunkov
Subject: Re: [RFC PATCH net-next v8 02/14] net: page_pool: create hooks for custom page providers
On 5/8/24 15:25, Jason Gunthorpe wrote:
> On Wed, May 08, 2024 at 12:30:07PM +0100, Pavel Begunkov wrote:
>
>>> I'm not going to pretend to know about page pool details, but dmabuf
>>> is the way to get the bulk of pages into a pool within the net stack's
>>> allocator and keep that bulk properly refcounted while in use.
>>>
>>> An object like dmabuf is needed for the general case because there are
>>> not going to be per-page references or anything otherwise available.
>>
>> They are already pinned, memory is owned by the provider, io_uring
>> in this case, and it should not be freed circumventing io_uring,
>> and at this stage calling release_pages() is not such a hassle,
>> especially compared to introducing an additional object.
>
> Something needs to co-ordinate when the net stack's allocator is done
> with the bulk of pages and when io_uring can do the final
> put_user_page() to free it. DMABUF is not an unreasonable choice for
> this.

When a page pool dies, it notifies io_uring via the ->destroy callback.
Vice versa, when io_uring wants to terminate zerocopy, it releases
the interface queue, which kills the page pool, ending in
->destroy again.
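
For reference, a rough sketch of the shape this takes with the hooks
from this series (names, signatures, and the mp_priv plumbing are
approximated for illustration, not taken from the final patches):

	/* Provider hooks the page pool calls out to; sketch only. */
	struct memory_provider_ops {
		int (*init)(struct page_pool *pool);
		void (*destroy)(struct page_pool *pool);
		struct page *(*alloc_pages)(struct page_pool *pool, gfp_t gfp);
		bool (*release_page)(struct page_pool *pool, struct page *page);
	};

	/* io_uring side: the pool dying lands here, and io_uring killing
	 * the interface queue tears the pool down, landing here as well. */
	static void io_pp_zc_destroy(struct page_pool *pool)
	{
		struct io_zc_ifq *ifq = pool->mp_priv;	/* illustrative priv */

		io_zc_ifq_put(ifq);	/* hypothetical: drop the pool's ref */
	}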

>>> topic to me, and honestly hacking into the allocator free function
>>> seems a bit weird..
>>
>> Do you also think that DMA_BUF_IOCTL_SYNC is a weird hack, because
>> it "delays free" by pinning the dmabuf object and letting the user
>> read memory instead of copying it? I can find many examples
>
> It seems to me the flow you want is for the driver to allocate a page,
> put it on an rx ring, process it through the netstack, and deliver it
> to io_uring. io_uring would then sit on the allocation until userspace
> is done and return it back to the netstack allocator.

That's right, with a note that "driver allocating a page" is
not an alloc_page() but grabbing a user page/frag the user
registered beforehand.
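
To make "grabbing" concrete, the provider's alloc hook would look
roughly like this (helper names are made up for illustration):

	/* Sketch: hand out a page the user registered and we pinned at
	 * setup time; alloc_page() never enters the picture. */
	static struct page *io_pp_zc_alloc_pages(struct page_pool *pool,
						 gfp_t gfp)
	{
		struct io_zc_ifq *ifq = pool->mp_priv;
		struct page *page;

		/* prefer pages userspace has already returned via the ring */
		page = io_zc_get_refilled_page(ifq);	/* hypothetical */
		if (!page)
			page = io_zc_get_cached_page(ifq);	/* hypothetical */
		return page;	/* NULL reads as allocation failure */
	}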

> Hooking the free of the netstack allocator and then deferring it seems

FWIW, it's not about the page pool's ->release_page; that's a slow
path and in an ideal world wouldn't be called outside of teardown.

> like a weird and indirect way to get there. Why can't io_uring just be
> the entity that does the final free and not mess with the allocator's
> logic?

Then the user has to do a syscall (e.g. via io_uring) to return pages,
and there we'd need to care about how to put the pages efficiently,
i.e. hitting the page pool's fast path, e.g. by hoping NAPI is
scheduled, and scheduled on the CPU we're running on, or maybe
transferring the pages to the right CPU first.

Compare that with userspace putting pages into a ring and the allocator
taking them from there when needed, without any extra synchronisation
or hassle, precisely because it's the sole consumer.
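
To illustrate the "sole consumer" point: the refill ring degenerates to
a single-producer/single-consumer queue. Userspace bumps the tail, and
since the allocator is the only reader, the consumer side needs nothing
beyond acquire/release ordering. A toy sketch, not the actual uAPI:

	struct zc_refill_ring {
		unsigned int head;	/* consumer index, kernel only */
		unsigned int tail;	/* producer index, user writes */
		unsigned int mask;	/* ring entries - 1 */
		unsigned int *ids;	/* registered page/frag ids */
	};

	/* Called from the pool's alloc path: we're the sole consumer,
	 * so a load-acquire of the user-written tail is all it takes. */
	static bool zc_ring_get(struct zc_refill_ring *r, unsigned int *id)
	{
		unsigned int tail = smp_load_acquire(&r->tail);

		if (r->head == tail)
			return false;	/* userspace returned nothing yet */
		*id = r->ids[r->head & r->mask];
		smp_store_release(&r->head, r->head + 1); /* free the slot */
		return true;
	}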

--
Pavel Begunkov
