Subject: Re: [PATCHv7 02/14] mm: Add support for unaccepted memory
On 8/5/22 11:17, Vlastimil Babka wrote:
>> 3. Pull the page off the 2M/4M lists, drop the zone lock, accept it,
>> then put it back.
> Worth trying, IMHO. Perhaps easier to manage if the lists are distinct from
> the normal ones, as I suggested.

I was playing with another series recently where I did this, momentarily
taking pages off some of the high-order lists and dropping the zone lock.

Kirill, if you go looking at this, just make sure that you don't let
this happen to too much memory at once. You might end up yanking memory
out of the allocator that's not reflected in NR_FREE_PAGES.
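
One way to keep that bounded is a global count of how many pages are
currently off the lists for acceptance. Purely a sketch; the counter,
the limit, and the call sites below are made up:

	// Hypothetical: pages currently pulled off the free lists for acceptance
	static atomic_long_t nr_off_list_pages = ATOMIC_LONG_INIT(0);

	// Before dropping the zone lock to go accept the page:
	if (atomic_long_add_return(1 << order, &nr_off_list_pages) >
	    MAX_OFF_LIST_PAGES) {	// made-up limit
		atomic_long_sub(1 << order, &nr_off_list_pages);
		// too much is already off the lists; put the page back instead
	}

	// After the accepted page goes back on a free list:
	atomic_long_sub(1 << order, &nr_off_list_pages);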

You might, for instance, want to make sure that only a small number of
threads can have pulled memory off the free lists at once. Something
*logically* like this:

// Limit to two threads accepting at once:
atomic_t nr_accepting_threads = ATOMIC_INIT(2);

retry:
	page = del_page_from_free_list();
	if (!PageAccepted(page)) {
		if (atomic_dec_return(&nr_accepting_threads) < 0) {
			// already at the thread limit; undo and back off
			atomic_inc(&nr_accepting_threads);
			add_page_to_free_list(page, ...);
			spin_unlock_irq(&zone->lock);
			// wait for a slot...
			spin_lock_irq(&zone->lock);
			goto retry;
		} else {
			spin_unlock_irq(&zone->lock);
			accept_page(page);
			spin_lock_irq(&zone->lock);
			add_page_to_free_list(page, ...);
			// do merging if it was a 2M page
			atomic_inc(&nr_accepting_threads); // release the slot
		}
	}

It's a little nasty because the whole thing is not a sleepable context.
I also know that the merging code needs some refactoring if you want to
do merging with 2M pages here. It might all get easier if you move all
the page allocator stuff to only work at the 4M granularity.
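
For the "wait for a slot" step above, since it's not a sleepable
context, the best you can probably do is spin with the zone lock
dropped. Hypothetical helper, just a sketch:

	static void wait_for_accept_slot(atomic_t *nr_slots)
	{
		// Can't sleep here, so spin until another thread
		// releases its slot.
		while (atomic_read(nr_slots) <= 0)
			cpu_relax();
	}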

In any case, I'm not trying to dissuade anyone from listening to the
other reviewer feedback. Just trying to save you a few cycles on a
similar problem I was looking at recently.
