Subject: Re: [PATCH v2 3/3] mm, oom: introduce memory.oom.group
On 2018/08/02 20:21, Michal Hocko wrote:
> On Thu 02-08-18 19:53:13, Tetsuo Handa wrote:
>> On 2018/08/02 9:32, Roman Gushchin wrote:
> [...]
>>> +struct mem_cgroup *mem_cgroup_get_oom_group(struct task_struct *victim,
>>> +					     struct mem_cgroup *oom_domain)
>>> +{
>>> +	struct mem_cgroup *oom_group = NULL;
>>> +	struct mem_cgroup *memcg;
>>> +
>>> +	if (!cgroup_subsys_on_dfl(memory_cgrp_subsys))
>>> +		return NULL;
>>> +
>>> +	if (!oom_domain)
>>> +		oom_domain = root_mem_cgroup;
>>> +
>>> +	rcu_read_lock();
>>> +
>>> +	memcg = mem_cgroup_from_task(victim);
>>
>> Isn't this racy? I guess that the memcg of this "victim" can change to
>> somewhere else from the one it was in when the final candidate was determined.
>
> How is this any different from the existing code? We select a victim and
> then kill it. The victim might move away and won't be part of the oom
> memcg anymore but we will still kill it. I do not remember this ever
> being a problem. Migration is a privileged operation. If you loosen this
> restriction you shouldn't allow moving outside of the oom domain.

The existing code kills one process (plus other processes sharing the mm, if any).
But memory.oom.group kills multiple processes. Thus, whether the decision was made
based on the correct memcg becomes important.
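
To illustrate (a rough sketch only, not the code in this patch; kill_memcg_member()
is a hypothetical callback name and reference handling is omitted), the group kill
walks every task in whatever memcg was looked up from the victim:

static int kill_memcg_member(struct task_struct *task, void *unused)
{
	/* Sketch: skip init and kernel threads, SIGKILL everything else. */
	if (!is_global_init(task) && !(task->flags & PF_KTHREAD))
		send_sig(SIGKILL, task, 1);
	return 0;
}

	/* In the OOM kill path, after the selected victim has been killed: */
	oom_group = mem_cgroup_get_oom_group(victim, oc->memcg);
	if (oom_group)
		mem_cgroup_scan_tasks(oom_group, kill_memcg_member, NULL);

If mem_cgroup_from_task(victim) returns a memcg the victim has already left, all
of those SIGKILLs land on the wrong group.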

>
>> This "victim" might have already passed exit_mm()/cgroup_exit() from do_exit().
>
> Why does this matter? The victim hasn't been killed yet so if it exits
> on its own I do not think we really have to tear the whole cgroup down.

The existing code does not send SIGKILL if find_lock_task_mm() fails. Who can
guarantee that the victim is not already inside do_exit() when this code is executed?
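
For reference, the guard I am referring to looks roughly like this (paraphrased
from oom_kill_process() in mm/oom_kill.c; the comments are mine):

	p = find_lock_task_mm(victim);
	if (!p) {
		/*
		 * No thread using this mm still holds it (e.g. the victim
		 * already passed exit_mm()), so no SIGKILL is sent.
		 */
		put_task_struct(victim);
		return;
	} else if (victim != p) {
		/* Kill via a thread that still holds the mm. */
		get_task_struct(p);
		put_task_struct(victim);
		victim = p;
	}

The question is whether the same window can be hit between victim selection and
the memcg lookup added by this patch.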
