Subject: Re: [RFC PATCH 00/15] Use obj_cgroup APIs to charge the LRU pages
On Tue, Mar 30, 2021 at 05:30:10PM -0400, Johannes Weiner wrote:
> On Tue, Mar 30, 2021 at 11:58:31AM -0700, Roman Gushchin wrote:
> > On Tue, Mar 30, 2021 at 11:34:11AM -0700, Shakeel Butt wrote:
> > > On Tue, Mar 30, 2021 at 3:20 AM Muchun Song <songmuchun@bytedance.com> wrote:
> > > >
> > > > Since the following patchsets were applied, all kernel memory is charged
> > > > with the new obj_cgroup APIs:
> > > >
> > > > [v17,00/19] The new cgroup slab memory controller
> > > > [v5,0/7] Use obj_cgroup APIs to charge kmem pages
> > > >
> > > > But user memory allocations (LRU pages) can pin memcgs for a long time -
> > > > this happens at a larger scale and is causing recurring problems in the
> > > > real world: page cache doesn't get reclaimed for a long time, or is used
> > > > by the second, third, fourth, ... instance of the same job that was
> > > > restarted into a new cgroup every time. Unreclaimable dying cgroups pile
> > > > up, waste memory, and make page reclaim very inefficient.
> > > >
> > > > We can fix this problem by converting LRU pages, and most other raw memcg
> > > > pins, to the objcg direction; then the LRU pages will no longer pin their
> > > > memcgs.
> > > >
> > > > This patchset aims to make LRU pages drop their reference to the memory
> > > > cgroup by using the obj_cgroup APIs. With it applied, the number of dying
> > > > cgroups no longer increases when we run the following test script.
> > > >
> > > > ```bash
> > > > #!/bin/bash
> > > >
> > > > cat /proc/cgroups | grep memory
> > > >
> > > > cd /sys/fs/cgroup/memory
> > > >
> > > > for i in {1..500}
> > > > do
> > > >     mkdir test
> > > >     echo $$ > test/cgroup.procs                   # move the shell into the new memcg
> > > >     sleep 60 &                                    # fork a child that gets charged to it
> > > >     echo $$ > cgroup.procs                        # move the shell back to the root memcg
> > > >     echo `cat test/cgroup.procs` > cgroup.procs   # move the child back as well
> > > >     rmdir test                                    # offline the memcg; leftover charges keep it dying
> > > > done
> > > >
> > > > cat /proc/cgroups | grep memory
> > > > ```
> > > >
> > > > Patch 1 fixes page charging in page replacement.
> > > > Patches 2-5 are code cleanups and simplifications.
> > > > Patches 6-15 convert the LRU page pins to the objcg direction.
> > >
> > > The main concern I have with *just* reparenting LRU pages is that for
> > > long-running systems, the root memcg will become a dumping ground.
> > > In addition, a job running multiple times on a machine will see
> > > inconsistent memory usage if it re-accesses the file pages which were
> > > reparented to the root memcg.
> >
> > I agree, but reparenting also doesn't work well in combination with
> > memory protections (e.g. memory.low).
> >
> > Imagine the following configuration:
> > workload.slice
> > - workload_gen_1.service memory.min = 30G
> > - workload_gen_2.service memory.min = 30G
> > - workload_gen_3.service memory.min = 30G
> > ...
> >
> > A parent cgroup and several generations of the child cgroup, each protected
> > by memory.min. Once the memory gets reparented, it's not protected anymore.
>
> That doesn't sound right.
>
> A deleted cgroup today exerts no control over its abandoned
> pages. css_reset() will blow out any control settings.

I know. Currently it works in the following way: once cgroup gen_1 is deleted,
its memory is not protected anymore, so eventually it gets evicted and
re-faulted as gen_2 (or gen_N) memory. Muchun's patchset doesn't change this,
of course. But long-term we likely want to re-charge such pages to new cgroups
and avoid unnecessary evictions and re-faults. Switching to obj_cgroups doesn't
help with that and will likely complicate the change. So I'm a bit skeptical here.

Also, in my experience the page cache is not the main/worst memcg reference
holder (writeback structures are way worse). Pages are relatively large
(in comparison to some slab objects), and it's rare that a single page is all
that pins an otherwise dead memcg. And switching to obj_cgroup doesn't
completely eliminate the problem: we just switch from accumulating larger
mem_cgroups to accumulating smaller obj_cgroups.

With all this said, I'm not necessarily opposing the patchset, but it's
necessary to discuss how it fits into the long-term picture.
E.g. if we're going to use the obj_cgroup API for page-sized objects, shouldn't
we split it back into the reparenting and byte-sized accounting parts,
as I initially suggested? And shouldn't we move the reparenting part to
the cgroup core level, so we could use it for other controllers
(e.g. to fix the writeback problem)?
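
E.g. something along these lines, purely hypothetical and with made-up names:

```c
/*
 * Purely hypothetical sketch (all names made up): the reparenting half
 * of obj_cgroup lifted into the cgroup core, so that any controller
 * (memory, io/writeback, ...) could take a reparentable reference to a
 * css instead of pinning the css itself until the last object goes away.
 */
struct css_ref {
	struct percpu_ref refcnt;		/* taken by long-lived objects */
	struct cgroup_subsys_state *css;	/* switched to the parent css when
						 * the original css is offlined */
	struct list_head list;			/* anchored on the owning css */
};
```

The byte-sized accounting would then remain a memcg-specific layer on top.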

>
> If you're talking about protection previously inherited by
> workload.slice, that continues to apply as it always has.
>
> None of this is really accidental. By definition, the workload.slice
> control domain includes workload_gen_1.service. And by definition, the
> workload_gen_1.service domain ceases to exist when you delete it.
>
> There are no (or shouldn't be any!) semantic changes from the physical
> unlinking from a dead control domain.
>
> > Also, I'm somewhat concerned about the interaction of reparenting
> > with writeback and dirty throttling. How do they work together?
>
> What interaction specifically?
>
> When you delete a cgroup that had both the block and the memory
> controller enabled, the control domain of both goes away and it
> becomes subject to whatever control domain is above it (if any).
>
> A higher control domain in turn takes a recursive view of the subtree,
> see mem_cgroup_wb_stats(), so when control is exerted, it applies
> regardless of how and where pages are physically linked in children.
>
> It's also already possible to enable e.g. block control only at a very
> high level and memory control down to a lower level. By design, this
> code can live with different domain sizes for memory and block.

I'm totally happy if it's safe; I just don't know this code well enough
to be sure without taking a closer look.
