Subject: Re: [patch] mm, compaction: drain pcps for zone when kcompactd fails
On Thu, Mar 01, 2018 at 01:23:34PM +0100, Vlastimil Babka wrote:
> On 03/01/2018 12:42 PM, David Rientjes wrote:
> > It's possible for buddy pages to become stranded on pcps that, if drained,
> > could be merged with other buddy pages on the zone's free area to form
> > large order pages, including up to MAX_ORDER.
>
> BTW I wonder if we could be smarter and quicker about the drains. Let a
> pcp struct page be easily recognized as such, and store the cpu number
> in there. Migration scanner could then maintain a cpumask, and recognize
> if the only missing pages for coalescing a cc->order block are on the
> pcplists, and then do a targeted drain.
> But that only makes sense to implement if it can make a noticeable
> difference to offset the additional overhead, of course.

Perhaps we should turn this around ... rather than waiting for the
coalescer to come along, when we're about to put a page on the pcp list,
check whether its buddy is PageBuddy(). If so, send it to the buddy
allocator so it can get merged instead of putting it on the pcp list.
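
Something like this on the order-0 free path; a sketch only, where
free_to_buddy() and free_to_pcp() are stand-ins for the real free
paths, and I'm eliding the pfn_valid()/zone checks and the locking
needed to look at the buddy safely:

        static void free_unref_page_sketch(struct page *page)
        {
                unsigned long pfn = page_to_pfn(page);
                /* The order-0 buddy is the pfn with bit 0 flipped. */
                struct page *buddy = pfn_to_page(pfn ^ 1);

                if (PageBuddy(buddy) && page_order(buddy) == 0)
                        free_to_buddy(page);    /* merge with the buddy now */
                else
                        free_to_pcp(page);      /* the usual fast path */
        }

If the buddy isn't free, we've lost nothing; the page goes to the pcp
list exactly as it does today.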

I can see the negatives of that; if you're in a situation where you've
got a 2^12 block free and allocate one page, that's 12 splits. Then you
free the page and it does 12 joins. Then you allocate again and do 12
splits ...

That seems like a relatively rare scenario; we're generally going to
have a lot of pages in motion on any workload we care about, and there
are always going to be pages on the pcp lists.

It's not an alternative to David's patch: if page A and page A+1 both
land on pcp lists, neither was PageBuddy() when the other was freed, so
the check above never fires and only a drain can merge them. But it
should push out the point where his bigger hammer has to kick in.
