Subject: Re: Terminate process that fails on a constrained allocation
From: Paul Jackson <pj@sgi.com>
Date: 2006-02-08

Can anyone give us a good reason why we shouldn't just remove the oom
killer entirely?

Christoph wrote:
> If a task has restricted its memory allocation to one node and does
> excessive allocations then that process needs to die not other processes
> that are harmlessly running on the node and that may not be allocating
> memory at the time.

That _exact_ same argument applies to a system that only has one node.
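
For concreteness, an untested sketch (not from the original mail) of
the constrained case Christoph describes, assuming libnuma's
<numaif.h> wrapper for set_mempolicy():

/*
 * Bind all of this task's allocations to one node, then touch
 * memory until that node runs dry.  The question in this thread
 * is which task the kernel should kill at that point.
 * Build with -lnuma; MPOL_BIND and set_mempolicy() come from
 * libnuma's <numaif.h>.
 */
#include <numaif.h>
#include <stdlib.h>
#include <string.h>

int main(void)
{
        unsigned long nodemask = 1UL << 0;      /* node 0 only */

        /* Every allocation after this must come from node 0. */
        if (set_mempolicy(MPOL_BIND, &nodemask, 8 * sizeof(nodemask)))
                return 1;

        for (;;) {
                char *p = malloc(1 << 20);
                if (!p)
                        break;
                /* malloc() may keep succeeding under overcommit;
                 * the memset() faults the pages in on node 0. */
                memset(p, 1, 1 << 20);
        }
        return 0;
}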

If we want to remove the oom killer, let's just remove the oom killer.


> People are accustomed to having random processes killed? <shudder>

That's what the oom killer does ... well, it makes an honest effort
not to be random.
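
The effort looks roughly like this.  What follows is a userspace
paraphrase of the kind of scoring badness() in mm/oom_kill.c does;
the real code has more inputs and different arithmetic.  Bigger
score means killed first:

#include <stdio.h>

static unsigned long int_sqrt(unsigned long x)
{
        unsigned long r = 0;

        while ((r + 1) * (r + 1) <= x)
                r++;
        return r;
}

struct task {
        unsigned long total_vm;  /* address space size, in pages */
        unsigned long cpu_secs;  /* CPU time consumed */
        unsigned long run_secs;  /* wall time since the task started */
        int nice;                /* positive nice == lower priority */
        int privileged;          /* e.g. has CAP_SYS_ADMIN */
};

static unsigned long badness_like(const struct task *t)
{
        /* Start from memory footprint; killing the biggest
         * consumer frees the most memory. */
        unsigned long points = t->total_vm;

        /* Discount tasks that have done real work or run a
         * long time; they are probably important. */
        if (t->cpu_secs)
                points /= int_sqrt(t->cpu_secs);
        if (t->run_secs)
                points /= int_sqrt(int_sqrt(t->run_secs));

        /* Niced tasks are fairer game; privileged ones likely
         * matter to the system. */
        if (t->nice > 0)
                points *= 2;
        if (t->privileged)
                points /= 4;

        return points;
}

int main(void)
{
        struct task hog    = { .total_vm = 500000, .cpu_secs = 5,
                               .run_secs = 60 };
        struct task daemon = { .total_vm = 20000, .cpu_secs = 3600,
                               .run_secs = 86400, .privileged = 1 };

        printf("hog: %lu, daemon: %lu\n",
               badness_like(&hog), badness_like(&daemon));
        return 0;
}

The shape is the point: memory footprint pushes a task's score up,
accumulated work and privilege push it down.  That is what the
"honest effort" amounts to.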

So, yes, since it has been there a long time, people are used to
it. Maybe they don't like it, maybe with good reason. But it
is there.


> OOM killing makes
> sense for global allocations if the system is really tight on memory and
> survival is the main goal

If that argument justifies OOM killing on a simple UMA system, then
surely, for -some- critical tasks, it justifies it on a big NUMA system.

Either OOM killing is useful in some cases or it is not.
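
For the cases where it is, and a critical task still must not be the
victim, kernels that provide /proc/<pid>/oom_adj offer an opt-out:
writing -17 (OOM_DISABLE) exempts a task from the killer.  An
untested sketch, assuming that file exists on the running kernel:

/*
 * Exempt a task from the OOM killer by writing -17 (OOM_DISABLE)
 * to /proc/<pid>/oom_adj.  Assumes a kernel that provides the
 * file, and sufficient privilege to write it.
 */
#include <stdio.h>
#include <sys/types.h>
#include <unistd.h>

static int oom_protect(pid_t pid)
{
        char path[64];
        FILE *f;

        snprintf(path, sizeof(path), "/proc/%d/oom_adj", (int)pid);
        f = fopen(path, "w");
        if (!f)
                return -1;
        fprintf(f, "-17\n");
        return fclose(f);
}

int main(void)
{
        /* Called from inside the critical task itself. */
        return oom_protect(getpid()) ? 1 : 0;
}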


--
I won't rest till it's the best ...
Programmer, Linux Scalability
Paul Jackson <pj@sgi.com> 1.925.600.0401
