From: Matthew Wilcox
Subject: Re: [PATCH] mm: avoid blocking lock_page() in kcompactd
On Tue, Jan 28, 2020 at 10:13:52AM +0100, Michal Hocko wrote:
> On Tue 28-01-20 00:30:44, Matthew Wilcox wrote:
> > On Tue, Jan 28, 2020 at 09:17:12AM +0100, Michal Hocko wrote:
> > > On Mon 27-01-20 11:06:53, Matthew Wilcox wrote:
> > > > On Mon, Jan 27, 2020 at 04:00:24PM +0100, Michal Hocko wrote:
> > > > > On Sun 26-01-20 15:39:35, Matthew Wilcox wrote:
> > > > > > On Sun, Jan 26, 2020 at 11:53:55AM -0800, Cong Wang wrote:
> > > > > > > I suspect the process gets stuck in the retry loop in try_charge(), as
> > > > > > > the _shortest_ stacktrace of the perf samples indicated:
> > > > > > >
> > > > > > > cycles:ppp:
> > > > > > > ffffffffa72963db mem_cgroup_iter
> > > > > > > ffffffffa72980ca mem_cgroup_oom_unlock
> > > > > > > ffffffffa7298c15 try_charge
> > > > > > > ffffffffa729a886 mem_cgroup_try_charge
> > > > > > > ffffffffa720ec03 __add_to_page_cache_locked
> > > > > > > ffffffffa720ee3a add_to_page_cache_lru
> > > > > > > ffffffffa7312ddb iomap_readpages_actor
> > > > > > > ffffffffa73133f7 iomap_apply
> > > > > > > ffffffffa73135da iomap_readpages
> > > > > > > ffffffffa722062e read_pages
> > > > > > > ffffffffa7220b3f __do_page_cache_readahead
> > > > > > > ffffffffa7210554 filemap_fault
> > > > > > > ffffffffc039e41f __xfs_filemap_fault
> > > > > > > ffffffffa724f5e7 __do_fault
> > > > > > > ffffffffa724c5f2 __handle_mm_fault
> > > > > > > ffffffffa724cbc6 handle_mm_fault
> > > > > > > ffffffffa70a313e __do_page_fault
> > > > > > > ffffffffa7a00dfe page_fault
> > >
> > > I am not deeply familiar with the readahead code. But is there really a
> > > high order allocation (order > 1) that would trigger compaction in the
> > > phase when pages are locked?
> >
> > Thanks to sl*b, yes:
> >
> > radix_tree_node 80890 102536 584 28 4 : tunables 0 0 0 : slabdata 3662 3662 0
> >
> > so it's allocating 4 pages for an allocation of a 576 byte node.
>
> I am not really sure that we do sync migration for costly orders.

Doesn't the stack trace above indicate that we're doing migration as
the result of an allocation in add_to_page_cache_lru()?
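
For reference, decoding the slabinfo line quoted above: the columns after the
object counts are object size, objects per slab, and pages per slab, so
radix_tree_node packs 28 584-byte objects into a 4-page slab (28 * 584 =
16352 bytes, just under 16KiB).  Each new slab is therefore an order-2 page
allocation, i.e. exactly the "order > 1" case that can enter compaction.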

> > > Btw. the compaction rejects to consider file backed pages when __GFP_FS
> > > is not present AFAIR.
> >
> > Ah, that would save us.
>
> So the NOFS comes from the mapping GFP mask, right? That is something I
> was hoping to get rid of eventually :/ Anyway it would be better to have
> an explicit NOFS with a comment explaining why we need that. If for
> nothing else then for documentation.

I'd also like to see the mapping GFP mask go away, but rather than seeing
an explicit GFP_NOFS here, I'd rather see the memalloc_nofs API used.
I just don't understand the whole problem space well enough to know
where to put the call for best effect.
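
For reference, by the memalloc_nofs API I mean the scope interface
memalloc_nofs_save()/memalloc_nofs_restore() from <linux/sched/mm.h>.
A minimal sketch of the kind of thing I have in mind (the callsite is only
illustrative; where to actually place it is exactly the open question):

	unsigned int nofs_flags;

	/*
	 * Everything allocated inside this scope behaves as if GFP_NOFS
	 * had been passed: the allocator masks off __GFP_FS via
	 * current_gfp_context(), so reclaim/compaction will not recurse
	 * into the filesystem.
	 */
	nofs_flags = memalloc_nofs_save();
	error = add_to_page_cache_lru(page, mapping, index, gfp);
	memalloc_nofs_restore(nofs_flags);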
