Subject: Re: [PATCH 0/5 v1] mm, oom: Introduce per numa node oom for CONSTRAINT_MEMORY_POLICY
On Thu 12-05-22 12:46:29, Gang Li wrote:
> TLDR:
> If a mempolicy is in effect (oc->constraint == CONSTRAINT_MEMORY_POLICY), out_of_memory() will
> select a victim on the specific node to kill, so that the kernel can avoid accidental killing on NUMA systems.
>
> Problem:
> Before this patch series, the oom killer would only kill the process with the highest memory usage,
> by selecting the process with the highest oom_badness on the entire system.
>
> This works fine on UMA systems, but may cause accidental killing on NUMA systems.
>
> As shown below, if process c.out is bound to Node1 and keeps allocating pages from Node1,
> a.out will be killed first. But killing a.out doesn't free any memory on Node1, so c.out
> will be killed next.
>
> A lot of our AMD machines have 8 NUMA nodes. On these systems, there is a greater chance
> of triggering this problem.

Sorry, I have only now found this email thread. The limitation of the
NUMA constrained oom is well known and long standing. Basically the
whole thing is a best effort as we are lacking per numa node memory
stats. I can see that you are trying to fill up that gap but this is
not really free. Have you measured the runtime overhead? Accounting is
done in very performance sensitive paths and it would be rather
unfortunate to make everybody pay the overhead while binding to a
specific node or sets of nodes is not the most common usecase.
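
For illustration, a minimal userspace sketch (plain C, not kernel code; the
struct and helper names below are made up) of why per-node rss accounting is
not free: instead of one atomic update per charge/uncharge, the hot path has
to know the page's node and touch a second, per-node counter as well.

/* Illustrative only: hypothetical names, not the patch's actual code. */
#include <stdatomic.h>
#include <stdio.h>

#define MAX_NUMNODES 8                       /* assumed node count for the sketch */

struct mm_counters {
	atomic_long rss;                     /* today: one counter per rss type */
	atomic_long node_rss[MAX_NUMNODES];  /* proposed: one per NUMA node */
};

/* current fast path: a single relaxed atomic increment */
static void inc_rss(struct mm_counters *mc)
{
	atomic_fetch_add_explicit(&mc->rss, 1, memory_order_relaxed);
}

/*
 * per-node variant: the page's node must be resolved and a second atomic
 * updated, paid on every mapped/unmapped page, even for tasks that never
 * set a mempolicy
 */
static void inc_rss_node(struct mm_counters *mc, int nid)
{
	atomic_fetch_add_explicit(&mc->rss, 1, memory_order_relaxed);
	atomic_fetch_add_explicit(&mc->node_rss[nid], 1, memory_order_relaxed);
}

int main(void)
{
	struct mm_counters mc = {0};

	inc_rss(&mc);          /* old cost */
	inc_rss_node(&mc, 1);  /* new cost: extra lookup + extra atomic */
	printf("rss=%ld node1=%ld\n",
	       atomic_load(&mc.rss), atomic_load(&mc.node_rss[1]));
	return 0;
}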

Also, have you tried to have a look at cpusets? Those should make it easier
to do a proper selection, as it should be possible to iterate over tasks
belonging to a specific cpuset much more easily - essentially something
similar to the memcg oom killer. We do not do that right now, and from a very
brief look at CONSTRAINT_CPUSET it seems that this code is not really doing
much these days. Maybe that would be a more appropriate way to deal with more
precise node aware oom killing?
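
A rough userspace sketch of that selection idea (plain C; the task fields and
helpers are hypothetical stand-ins, not kernel APIs): only tasks whose allowed
nodes intersect the constrained nodemask are candidates, analogous to the
memcg oom killer walking only tasks of the offending memcg.

/* Illustrative only: made-up types, not kernel code. */
#include <stdio.h>

struct task {
	const char *comm;
	unsigned long allowed_nodes;  /* bitmask of nodes the task may allocate from */
	long badness;                 /* simplified oom score */
	struct task *next;
};

/* pick the "worst" task that can actually release memory on the
 * constrained nodes; everything else is skipped up front */
static struct task *select_bad_process(struct task *tasks,
				       unsigned long constrained_nodes)
{
	struct task *victim = NULL;

	for (struct task *t = tasks; t; t = t->next) {
		if (!(t->allowed_nodes & constrained_nodes))
			continue;     /* killing it would not help these nodes */
		if (!victim || t->badness > victim->badness)
			victim = t;
	}
	return victim;
}

int main(void)
{
	struct task c = { "c.out", 1UL << 1, 100, NULL }; /* confined to node 1 */
	struct task a = { "a.out", 1UL << 0, 500, &c };   /* confined to node 0 */

	/* OOM constrained to node 1: a.out is skipped even though it is bigger */
	struct task *victim = select_bad_process(&a, 1UL << 1);

	printf("victim: %s\n", victim ? victim->comm : "none");
	return 0;
}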

[...]
> 21 files changed, 317 insertions(+), 111 deletions(-)

The code footprint is not free either. And more importantly, does this
even work much more reliably? I can see quite some NUMA_NO_NODE
accounting (e.g. copy_pte_range!). Is this somehow fixable?

Also, how do those numbers add up? Let's say you increase the counter as
NUMA_NO_NODE but later on, during the cleanup, you decrease it based on the
page's node?
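
To make the question concrete, a toy example (plain C, hypothetical counters)
of how such bookkeeping could drift if the increment is recorded under
NUMA_NO_NODE while the decrement uses the page's real node:

/* Illustrative only: hypothetical per-node counters, not kernel code. */
#include <stdio.h>

#define NUMA_NO_NODE	(-1)
#define MAX_NODES	2

/* slot MAX_NODES stands in for the NUMA_NO_NODE bucket */
static long node_rss[MAX_NODES + 1];

static int slot(int nid)
{
	return nid == NUMA_NO_NODE ? MAX_NODES : nid;
}

int main(void)
{
	node_rss[slot(NUMA_NO_NODE)] += 1;  /* charge path: node not known yet */
	node_rss[slot(1)]            -= 1;  /* uncharge path: page sits on node 1 */

	/* node 1 now looks negative while the NUMA_NO_NODE bucket leaks +1 */
	printf("node0=%ld node1=%ld no_node=%ld\n",
	       node_rss[0], node_rss[1], node_rss[MAX_NODES]);
	return 0;
}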

Last but not least, I am really not following the MM_NO_TYPE concept. I can
only see add_mm_counter users without any decrements. What is going on
there?
--
Michal Hocko
SUSE Labs
