Date:	Mon, 9 Sep 2019
From:	Michal Hocko
Subject:	Re: [patch for-5.3 0/4] revert immediate fallback to remote hugepages
On Thu 05-09-19 14:06:28, David Rientjes wrote:
> On Wed, 4 Sep 2019, Andrea Arcangeli wrote:
>
> > > This is an admittedly hacky solution that shouldn't cause anybody to
> > > regress based on NUMA and the semantics of MADV_HUGEPAGE for the past
> > > 4 1/2 years for users whose workload does fit within a socket.
> >
> > How can you live with the below if you can't live with 5.3-rc6? Here
> > you allocate remote THP if the local THP allocation fails.
> >
> > >             page = __alloc_pages_node(hpage_node,
> > >                                       gfp | __GFP_THISNODE, order);
> > > +
> > > +           /*
> > > +            * If hugepage allocations are configured to always
> > > +            * synchronous compact or the vma has been madvised
> > > +            * to prefer hugepage backing, retry allowing remote
> > > +            * memory as well.
> > > +            */
> > > +           if (!page && (gfp & __GFP_DIRECT_RECLAIM))
> > > +                   page = __alloc_pages_node(hpage_node,
> > > +                                             gfp | __GFP_NORETRY, order);
> > > +
> >
> > You're still going to get THP allocated remotely _before_ you have a
> > chance to allocate 4k locally this way. __GFP_NORETRY won't make any
> > difference when there's THP immediately available on the remote nodes.
> >
>
> This is incorrect: the fallback allocation here is only if the initial
> allocation with __GFP_THISNODE fails. In that case, we were able to
> compact memory to make a local hugepage available without incurring
> excessive swap based on the RFC patch that appears as patch 3 in this
> series.

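To make sure we are talking about the same flow, the hunk quoted above
boils down to the following two steps (my own simplified restatement in
plain C; the wrapper name is made up, only the __alloc_pages_node()
calls and the gfp handling mirror the quoted patch):

#include <linux/gfp.h>

/* hypothetical wrapper, only to illustrate the quoted hunk */
static struct page *thp_alloc_preferred(gfp_t gfp, unsigned int order,
					int hpage_node)
{
	struct page *page;

	/* 1) try hard for a hugepage on the preferred (local) node only */
	page = __alloc_pages_node(hpage_node, gfp | __GFP_THISNODE, order);

	/*
	 * 2) only if that fails, and the caller can sleep
	 *    (__GFP_DIRECT_RECLAIM), retry without __GFP_THISNODE so that
	 *    remote nodes may back the hugepage as well
	 */
	if (!page && (gfp & __GFP_DIRECT_RECLAIM))
		page = __alloc_pages_node(hpage_node,
					  gfp | __GFP_NORETRY, order);

	return page;
}
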
That RFC patch (patch 3 in this series) is quite obscure: it is specific
to pageblock_order+ sizes and for some reason requires __GFP_IO without
any explanation of why. The problem is not THP specific, right? Any
other high-order allocation has the same problem AFAICS. So it is just a
hack, and that is why it is hard to reason about.

I believe it would be best to start by explaining why we do not see the
same problem with order-0 requests. We do not enter the slow path, and
thus memory reclaim, as long as some other node still passes the
watermark check, right? So essentially we are relying on kswapd to keep
nodes balanced so that allocation requests can be satisfied from the
local node. We also have kcompactd to do background compaction. Why do
we want to rely on direct compaction instead? What is the fundamental
difference?
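
Roughly, the order-0 behavior I have in mind is the following (pseudo-C;
every helper below is a placeholder for the real zonelist walk,
zone_watermark_ok() and friends, not an actual kernel interface):

/* placeholder helpers, standing in for the real fast path machinery */
struct page *order0_fastpath(gfp_t gfp, unsigned int order)
{
	struct zone *zone;

	/* walk the zonelist, local node first */
	for_each_allowed_zone(zone, gfp) {
		/* any node above its low watermark can serve the request */
		if (zone_above_low_watermark(zone, order))
			return take_from_freelist(zone, order);
	}

	/*
	 * Only when every node fails the watermark check do we enter the
	 * slow path (reclaim/compaction).  kswapd and kcompactd are
	 * supposed to keep nodes balanced in the background so that this
	 * rarely happens for order-0.
	 */
	return enter_slowpath(gfp, order);
}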

Your changelog goes to great length about some problems in compaction,
but I really do not see a description of the underlying problem. We
cannot do any sensible fix/heuristic without capturing that, IMHO.
Either there is some fundamental difference between direct and
background compaction, in which case doing the former is necessary and
we should be doing it by default for all higher-order requests that are
sleepable (aka __GFP_DIRECT_RECLAIM), or there is something to fix to
make background compaction more proactive.
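
Put differently, the question is which of the two branches below we want
to rely on for high-order requests (again pseudo-C with made-up helper
names, just to spell out the direct vs. background split):

static struct page *highorder_slowpath(gfp_t gfp, unsigned int order)
{
	if (gfp & __GFP_DIRECT_RECLAIM) {
		/*
		 * Sleepable request: we may compact synchronously in the
		 * allocation context (direct compaction) before failing.
		 */
		return compact_and_retry(gfp, order);
	}

	/*
	 * Otherwise all we can do is wake the background compactor and
	 * hope it keeps enough high-order pages around.
	 */
	wake_background_compactor();
	return NULL;
}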

> > I said one good thing about this patch series, that it fixes the swap
> > storms. But upstream 5.3 fixes the swap storms too and what you sent
> > is not nearly equivalent to the mempolicy that Michal was willing
> > to provide you and that we thought you needed to get bigger guarantees
> > of getting only local 2m or local 4k pages.
> >
>
> I haven't seen such a patch series, is there a link?

Not yet, unfortunately. So far I haven't heard that you are even
interested in that policy; you have never commented on it, IIRC.
--
Michal Hocko
SUSE Labs
