From: Michal Hocko <mhocko@suse.com>
Date: Wed, 24 Aug 2022
Subject: Re: [RFC PATCH] memcg: use root_mem_cgroup when css is inherited
On Tue 23-08-22 09:21:16, Suren Baghdasaryan wrote:
> On Tue, Aug 23, 2022 at 4:51 AM Michal Hocko <mhocko@suse.com> wrote:
> >
> > On Tue 23-08-22 17:20:59, Zhaoyang Huang wrote:
> > > On Tue, Aug 23, 2022 at 4:33 PM Michal Hocko <mhocko@suse.com> wrote:
> > > >
> > > > On Tue 23-08-22 14:03:04, Zhaoyang Huang wrote:
> > > > > On Tue, Aug 23, 2022 at 1:21 PM Michal Hocko <mhocko@suse.com> wrote:
> > > > > >
> > > > > > On Tue 23-08-22 10:31:57, Zhaoyang Huang wrote:
> > > > [...]
> > > > > > > I would like to quote the comments from the Google side for
> > > > > > > more detail; the same behavior can also be observed from
> > > > > > > different vendors.
> > > > > > > "Also be advised that when you enable memcg v2 you will be using
> > > > > > > per-app memcg configuration which implies noticeable overhead because
> > > > > > > every app will have its own group. For example pagefault path will
> > > > > > > regress by about 15%. And obviously there will be some memory overhead
> > > > > > > as well. That's the reason we don't enable them in Android by
> > > > > > > default."
> > > > > >
> > > > > > This should be reported and investigated, because per-application
> > > > > > memcgs vs. memcgs in general shouldn't make much of a difference
> > > > > > on the performance side. I can see a potential performance impact
> > > > > > in the no-memcg vs. memcg case, but even then 15% is quite a lot.
> > > > > Reduced memory-reclaim efficiency caused by the many per-memcg
> > > > > LRUs should be one of the reasons; we have verified this by
> > > > > comparing runs with per-app memcg on and off. Besides, workingset
> > > > > detection could theoretically also break, as each LRU becomes too
> > > > > short to capture the workingset.
> > > >
> > > > Do you have any data to back these claims? Is this something that
> > > > could be handled on the configuration level, e.g. by applying low
> > > > limit protection to keep the workingset in memory?
> > > I don't think so. IMO, workingset detection works when pages are
> > > evicted from the LRU and later refault, which provides a refault
> > > distance for those pages. Applying memcg protection keeps all LRU
> > > pages from being evicted, which makes the mechanism fail.
> >
> > It is really hard to help you out without any actual data. The idea,
> > though, was to use the low limit protection to adaptively configure
> > the respective memcgs to reduce refaults. You already have refault
> > data available, so increasing the limit for frequently refaulting
> > memcgs would reduce the thrashing.
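A minimal sketch of that adaptive scheme, assuming cgroup v2 is mounted
at /sys/fs/cgroup and the per-app groups live in a hypothetical apps/
subtree; the subtree path, the 16MiB step, and the polling period are
all placeholder choices:

#!/usr/bin/env python3
# Sketch: raise memory.low for memcgs that keep refaulting, so their
# workingset gets more protection. Assumes cgroup v2 at /sys/fs/cgroup;
# the apps/ subtree, step size and period are hypothetical.
import time
from pathlib import Path

APPS = Path("/sys/fs/cgroup/apps")   # hypothetical per-app subtree
STEP = 16 * 1024 * 1024              # grow protection 16MiB at a time
PERIOD = 10                          # seconds between adjustments

def refaults(cg):
    # workingset_refault was split into _anon/_file variants in newer
    # kernels, so sum every counter carrying that prefix.
    total = 0
    for line in (cg / "memory.stat").read_text().splitlines():
        key, _, val = line.partition(" ")
        if key.startswith("workingset_refault"):
            total += int(val)
    return total

last = {}
while True:
    for cg in APPS.iterdir():
        if not (cg / "memory.stat").exists():
            continue
        now = refaults(cg)
        if now > last.get(cg.name, now):
            raw = (cg / "memory.low").read_text().strip()
            cur = 0 if raw == "max" else int(raw)
            (cg / "memory.low").write_text(str(cur + STEP))
        last[cg.name] = now
    time.sleep(PERIOD)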
>
> Sorry for joining late.
> A couple years ago I tested root-memcg vs per-app memcg configurations
> on an Android phone. Here is a snapshot from my findings:
>
> Problem
> =======
> We see a tangible increase in major faults and workingset refaults
> when transitioning from a root-only memory cgroup to per-application
> cgroups on Android.
>
> Test results
> ============
> Results while running a memory-demanding workload:
>                          root memcg   per-app memcg      delta
> workingset_refault          1771228         3874281   +118.73%
> workingset_nodereclaim         4543           13928   +206.58%
> pgpgin                     13319208        20618944    +54.81%
> pgpgout                     1739552         3080664     +77.1%
> pgpgoutclean                2616571         4805755    +83.67%
> pswpin                       359211         3918716   +990.92%
> pswpout                     1082238         5697463   +426.45%
> pgfree                     28978393        32531010    +12.26%
> pgactivate                  2586562         8731113   +237.56%
> pgdeactivate                3811074        11670051   +206.21%
> pgfault                    38692510        46096963    +19.14%
> pgmajfault                   441288         4100020    +829.1%
> pgrefill                    4590451        12768165   +178.15%
>
> Results while running an application cycle test (20 apps, 20 cycles):
>                          root memcg   per-app memcg      delta
> workingset_refault         10634691        11429223     +7.47%
> workingset_nodereclaim        37477           59033    +57.52%
> pgpgin                     70662840        69569516     -1.55%
> pgpgout                     2605968         2695596     +3.44%
> pgpgoutclean               13514955        14980610    +10.84%
> pswpin                      1489851         3780868   +153.77%
> pswpout                     4125547         8050819    +95.15%
> pgfree                     99823083       105104637     +5.29%
> pgactivate                  7685275        11647913    +51.56%
> pgdeactivate               14193660        21459784    +51.19%
> pgfault                    89173166       100598528    +12.81%
> pgmajfault                  1856172         4227190   +127.74%
> pgrefill                   16643554        23203927    +39.42%

Thanks! It would be interesting to see per-memcg stats as well. Are
there any outliers? Are there any signs of over-reclaim (more pages
scanned & reclaimed by both kswapd and direct reclaim)?
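For reference, one quick way to pull those per-memcg numbers is to walk
the cgroup v2 tree and dump the reclaim counters from each group's
memory.stat. A rough sketch follows; which breakdowns are present (e.g.
the pgscan_kswapd/pgscan_direct split) depends on the kernel version:

#!/usr/bin/env python3
# Dump per-memcg reclaim counters to spot outliers and over-reclaim.
# Assumes cgroup v2 mounted at /sys/fs/cgroup; the set of available
# counters in memory.stat varies by kernel version.
from pathlib import Path

PREFIXES = ("pgscan", "pgsteal", "pgrefill", "workingset_")

for stat in Path("/sys/fs/cgroup").rglob("memory.stat"):
    vals = {}
    for line in stat.read_text().splitlines():
        key, _, val = line.partition(" ")
        if key.startswith(PREFIXES):
            vals[key] = int(val)
    if any(vals.values()):
        print(stat.parent, vals)

Sampling this twice and diffing would show which memcgs dominate the
scan/steal activity between the two runs.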

> Tests were conducted on an Android phone with 4GB RAM.
> A similar regression was reported a couple of years ago here:
> https://www.spinics.net/lists/linux-mm/msg121665.html
>
> I plan on checking the difference again on newer kernels (likely 5.15)
> after LPC this September.

Thanks, that would be useful!

--
Michal Hocko
SUSE Labs
