Date: 19 Nov 2013
From: Michal Hocko
Subject: user defined OOM policies
Hi,
it has been quite some time since LSFMM 2013, where this was
discussed[1]. In short, there are use cases with a strong demand for
better user/admin policy control over global OOM situations. The
per-process oom_{adj,score}, which is used for prioritization, is no
longer sufficient because other criteria might matter as well. For
example, it often doesn't make sense to kill just a part of a
workload; killing the whole group would be a better fit. I am pretty
sure there are many other criteria, some of them workload specific and
thus not appropriate for a generic implementation.
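
To make the limitation concrete, the per-process knob mentioned above
is /proc/<pid>/oom_score_adj. A minimal sketch of what a process can
do for itself today (the value 500 is just an illustrative choice):

/* Deprioritize the calling process for the global OOM killer by
 * raising its oom_score_adj (valid range is -1000..1000). */
#include <stdio.h>

int main(void)
{
	FILE *f = fopen("/proc/self/oom_score_adj", "w");

	if (!f) {
		perror("oom_score_adj");
		return 1;
	}
	fprintf(f, "500\n");	/* illustrative value, not a recommendation */
	fclose(f);
	return 0;
}

This only ranks individual tasks, which is exactly why it cannot
express group-wide or workload-specific policies.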

We have basically ended up with 3 options AFAIR:
1) allow the memcg approach (memory.oom_control) at the root
   level, for both OOM notification and blocking the OOM killer,
   and handle the situation from userspace the same way we can
   for other memcgs (see the sketch below this list).
2) allow modules to hook into the OOM killer path and take the
   appropriate action.
3) create a generic filtering mechanism which could be controlled
   from userspace by a set of rules (e.g. something analogous to
   packet filtering).
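
To illustrate what 1) would buy us at the root level, here is a
minimal sketch of the userspace side as it already works for a
non-root memcg today (cgroup v1 interface; the group path
/sys/fs/cgroup/memory/mygroup is just a made-up example):

#include <stdio.h>
#include <string.h>
#include <stdint.h>
#include <fcntl.h>
#include <unistd.h>
#include <sys/eventfd.h>

int main(void)
{
	const char *grp = "/sys/fs/cgroup/memory/mygroup"; /* example path */
	char path[256], line[64];
	uint64_t count;
	int efd, ofd, cfd;

	efd = eventfd(0, 0);	/* the kernel signals this fd on OOM */

	snprintf(path, sizeof(path), "%s/memory.oom_control", grp);
	ofd = open(path, O_RDONLY);

	snprintf(path, sizeof(path), "%s/cgroup.event_control", grp);
	cfd = open(path, O_WRONLY);

	if (efd < 0 || ofd < 0 || cfd < 0) {
		perror("setup");
		return 1;
	}

	/* register the eventfd as an OOM notifier for this group */
	snprintf(line, sizeof(line), "%d %d", efd, ofd);
	write(cfd, line, strlen(line));

	/* optionally "echo 1 > memory.oom_control" here to disable the
	 * in-kernel killer and leave the decision entirely to userspace */

	for (;;) {
		/* blocks until the group hits its limit and cannot reclaim */
		read(efd, &count, sizeof(count));
		/* policy goes here: kill the whole group, raise its
		 * limit, notify an admin, ... */
		fprintf(stderr, "OOM event in %s\n", grp);
	}
	return 0;
}

Option 1) would essentially make the same notification/blocking
protocol available for the global (root level) OOM as well.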

As there was no real follow-up discussion after the conference, I
would like to reopen it here on the mailing list and try to reach
some outcome.

I will follow up with some of my ideas, but let's keep this post clean
and short for starters. Also, if there are other ideas, please go ahead...

I wasn't sure who was present in the room and interested in the
discussion, so I am CCing the people I remember...

Ideas?

Thanks

---
[1] http://lwn.net/Articles/548180/
--
Michal Hocko
SUSE Labs

