Subject: Re: [RFC] memory reserve for userspace oom-killer
On Wed, Apr 21, 2021 at 06:26:37AM -0700, Shakeel Butt wrote:
> On Tue, Apr 20, 2021 at 7:58 PM Roman Gushchin <guro@fb.com> wrote:
> >
> [...]
> > >
> > > Michal has suggested ALLOC_OOM which is less risky.
> >
> > The problem is that even if you'll serve the oom daemon task with pages
> > from a reserve/custom pool, it doesn't guarantee anything, because the task
> > still can wait for a long time on some mutex, taken by another process,
> > throttled somewhere in the reclaim.
>
> I am assuming here by mutex you are referring to locks which
> oom-killer might have to take to read metrics or any possible lock
> which oom-killer might have to take which some other process can take
> too.
>
> Have you observed this situation happening with oomd on production?

I'm not aware of any oomd-specific issues. I'm not sure they don't exist
at all, but so far it hasn't been a problem for us. Maybe it's because you
tend to have less pagecache (as I understand), or maybe it comes down to
specific oomd policies/settings.

I know we had different pains with mmap_sem in atop and similar programs,
where reading process data stalled on mmap_sem for a long time.
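
(A minimal sketch, not taken from atop itself, of the kind of read that
stalls: reading /proc/<pid>/cmdline is served by access_remote_vm() in the
kernel, which takes the target's mmap_sem (mmap_lock on recent kernels), so
the caller can sleep for as long as another thread of the target holds that
lock, e.g. while throttled in reclaim, regardless of how much memory the
monitoring daemon itself has in reserve.)

#include <stdio.h>
#include <stdlib.h>
#include <sys/types.h>

/*
 * Illustrative only: read /proc/<pid>/cmdline the way a monitoring tool
 * might.  The fread() below can block on the target's mmap_sem inside
 * the kernel.  Note cmdline args are NUL-separated; only the first one
 * is printed here.
 */
static void read_cmdline(pid_t pid)
{
	char path[64], buf[4096];
	FILE *f;
	size_t n;

	snprintf(path, sizeof(path), "/proc/%d/cmdline", (int)pid);
	f = fopen(path, "r");
	if (!f)
		return;

	n = fread(buf, 1, sizeof(buf) - 1, f);	/* may sleep on mmap_sem */
	buf[n] = '\0';
	fclose(f);
	printf("%d: %s\n", (int)pid, buf);
}

int main(int argc, char **argv)
{
	if (argc > 1)
		read_cmdline((pid_t)atoi(argv[1]));
	return 0;
}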

Thanks!
