Subject: Re: [PATCH] mm: introduce oom_kill_disable sysctl knob
On Mon 09-11-20 07:39:33, Minchan Kim wrote:
> On Mon, Nov 09, 2020 at 08:37:06AM +0100, Michal Hocko wrote:
> > On Fri 06-11-20 12:32:38, Minchan Kim wrote:
> > > It's hard to run tests that are supposed to work under heavy
> > > memory pressure (e.g., by injecting a memory hogger), because the
> > > out-of-memory killer easily kicks out one of the processes, so the
> > > system either breaks or soon loses the memory-pressure state since
> > > it quickly has plenty of free memory again.
> >
> > I do not follow the reasoning here. So you want to test a
> > close-to-no-memory-available situation and the oom killer stands in
> > the way because it provides relief?
>
> Yup, technically I'd like to have consistent memory pressure that
> causes direct reclaim on processes on the system and swapping in/out.

> >
> > > Even though we could set an existing process's oom_score_adj to
> > > -1000, it couldn't cover upcoming processes forked for the job.
> >
> > Why?
>
> The thing is that the system has out-of-control processes created on
> demand, so the only option to prevent the OOM kill is
> echo -1000 > /proc/`pidof <the process>`/oom_score_adj
> once they are forked. However, I have no idea when they are forked, so
> I have to race with the OOM killer by polling /proc, and the OOM
> killer is frequently ahead of me.
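
IIUC the workaround you describe amounts to a userspace polling loop
along these lines (a rough, untested sketch; the scan period and the
error handling are arbitrary):

	/* repeatedly walk /proc and pin every task to OOM_SCORE_ADJ_MIN */
	#include <ctype.h>
	#include <dirent.h>
	#include <stdio.h>
	#include <unistd.h>

	static void protect_all_tasks(void)
	{
		DIR *proc = opendir("/proc");
		struct dirent *de;
		char path[64];
		FILE *f;

		if (!proc)
			return;
		while ((de = readdir(proc))) {
			if (!isdigit((unsigned char)de->d_name[0]))
				continue;	/* not a pid entry */
			snprintf(path, sizeof(path),
				 "/proc/%s/oom_score_adj", de->d_name);
			f = fopen(path, "w");
			if (!f)
				continue;	/* task already gone */
			fputs("-1000", f);
			fclose(f);
		}
		closedir(proc);
	}

	int main(void)
	{
		for (;;) {
			protect_all_tasks();
			usleep(10 * 1000);	/* arbitrary 10ms scan period */
		}
	}

and the window between fork() and the next scan is where the oom killer
keeps winning the race.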

I am still confused. Why would you want all/most processes to be hidden
from the oom killer?

> > > This knob is handy for keeping the system under memory pressure.
> >
> > This sounds like a very dubious reason to introduce a knob to cripple
> > the system.
> >
> > I can see some reason to control the oom handling policy, because
> > the effect of the oom killer is really disruptive, but a global
> > on/off switch sounds like too coarse an interface. Really, what kind
> > of production environment would ever go with the oom killer disabled
> > completely?
>
> I don't think a shipping production system will use it. It would be
> a testing-only option.

Then it doesn't really belong in the kernel IMHO.

> My intention is to use such a heavy memory load to see various system
> behaviors before the production launch, because such pressure usually
> happens in real workloads once we have shipped, but it is hard to
> generate such a corner case without artificial memory pressure.

But changing the oom behavior will result in a completely different
system behavior. So you would be testing something that doesn't really
happen in any production system.

> Any suggestion?

Not really, because I still do not understand your objective. You can
generate memory pressure and tune it for your specific testing scenario.
Sure, there will be some interference from background noise (kernel
subsystems reacting to external events, processes being created etc.),
but why is that a problem? This is normal for any running system.
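
Something as simple as the sketch below (untested; the working-set size
and the pacing are arbitrary knobs you would tune per machine, and you
can run several instances) is usually enough to keep direct reclaim and
swap busy without touching the oom policy at all:

	/* allocate a working set and keep touching it so reclaim stays busy */
	#include <stdio.h>
	#include <stdlib.h>
	#include <string.h>
	#include <unistd.h>

	int main(int argc, char **argv)
	{
		/* working-set size in MiB, tune it per machine/test */
		size_t mb = argc > 1 ? strtoul(argv[1], NULL, 0) : 1024;
		size_t size = mb << 20;
		size_t page = sysconf(_SC_PAGESIZE);
		char *buf = malloc(size);
		size_t i;

		if (!buf) {
			perror("malloc");
			return 1;
		}
		memset(buf, 1, size);		/* fault the whole range in */
		for (;;) {
			/* touch every page so it keeps getting re-activated */
			for (i = 0; i < size; i += page)
				buf[i]++;
			usleep(1000);		/* arbitrary pacing */
		}
	}

Scaling the size and the number of instances up or down should give you
a reasonably steady pressure level for the test.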

--
Michal Hocko
SUSE Labs
