Subject: Re: [PATCH] cgroup, blkcg: prevent dirty inodes to pin dying memory cgroups

On Mon, Oct 07, 2019 at 04:57:15PM +0200, Vlastimil Babka wrote:
> On 10/5/19 12:11 AM, Roman Gushchin wrote:
> >
> > One possible approach to this problem is to switch inodes associated
> > with dying wbs to the root wb. Switching is a best-effort operation
> > which can fail silently, so unfortunately we can't just run once over
> > a list of associated inodes (even if we had such a list). So we
> > really have to scan all inodes.
> >
> > In the proposed patch I schedule a work item on each memory cgroup
> > deletion, which is probably too often. Alternatively, we can do it
> > periodically under some conditions (e.g. when the number of dying
> > memory cgroups is larger than X). So it's basically a gc run.
> >
> > I wonder if there are any better ideas?
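
For context, a heavily simplified sketch of what such a gc-style pass
could look like (wb_is_dying() below is just illustrative shorthand for
"the wb's memcg css is offline", and the locking and refcounting details
of the real patch are omitted):

static void switch_dying_wb_inodes(struct super_block *sb, void *arg)
{
	struct inode *inode;

	spin_lock(&sb->s_inode_list_lock);
	list_for_each_entry(inode, &sb->s_inodes, i_sb_list) {
		/* best effort: inode_switch_wbs() may fail silently */
		if (inode->i_wb && wb_is_dying(inode->i_wb))
			inode_switch_wbs(inode, root_mem_cgroup->css.id);
	}
	spin_unlock(&sb->s_inode_list_lock);
}

/* scheduled from css_offline(), or periodically once the number of
 * dying memory cgroups crosses some threshold X */
static void dying_wb_gc_workfn(struct work_struct *work)
{
	iterate_supers(switch_dying_wb_inodes, NULL);
}
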
>
> I don't know this area, so this will likely be easily shown to be
> impossible, but perhaps it's useful to do that explicitly.
>
> What if instead of reparenting each inode, we "reparent" the wb?

It seems like a questionable idea, at least at the moment of offlining.
Dirty memory left behind by a cgroup should be written back subject to
the corresponding limits, and reparenting the wb can easily break them.

Also, it's not clear to me how the dirty stats would be reparented.

> But I see it's not a small object either. Could we then add some bias
> to the inode switching conditions so that anyone else touching an inode
> from a dead wb would get it switched immediately?

You mean touching it for writing? That's doable, but it doesn't solve
the case where there are only readers, and that case is quite common.
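
For the write side, something like this in the dirtying path would
probably be enough (again just a sketch: wb_is_dying() is made up and
the hook point is an assumption, not what any posted patch does):

static void maybe_escape_dying_wb(struct inode *inode)
{
	struct bdi_writeback *wb = inode_to_wb(inode);

	/* switch right away instead of waiting for the foreign
	 * inode detection heuristics to kick in */
	if (wb && wb_is_dying(wb))
		inode_switch_wbs(inode, root_mem_cgroup->css.id);
}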

> And what would happen if we reused the reparented wbs for newly created
> cgroups? Would it "punish" them for the old inodes?
>

No idea, to be honest.

Thank you!
