Subject: Re: [RFC PATCH] mm: memcg: Do not count memory.low reclaim if it does not happen

On Thu, Mar 24, 2022 at 11:17:14AM -0700, Roman Gushchin <roman.gushchin@linux.dev> wrote:
> Ok, so it’s not really about the implementation details of the reclaim
> mechanism (I mean rounding up to the batch size etc),

Actually, that was the part I initially deemed more serious.
It's point 2 of the RFC-ness:

| 2) The observed behavior slightly impacts distribution of parent's memory.low.
| Constructed example is a passive protected workload in s1 and active in s2
| (active ~ counteracts the reclaim with allocations). It could strip
| protection from s1 one by one (one:=SWAP_CLUSTER_MAX/2^sc.priority).
| That may be considered either wrong (s1 should have been more protected) or
| correct (s2 deserves protection due to its activity).
| I don't have (didn't collect) data for this, so I think just masking the
| false events is sufficient (or independent).
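
For illustration (my numbers, not from the patch): assuming SWAP_CLUSTER_MAX=32
and 4 KiB pages, the quoted per-pass strip size works out to:

SWAP_CLUSTER_MAX=32 # kernel constant, cited here as an assumption
for prio in 0 1 2 3 4 5; do
	pages=$((SWAP_CLUSTER_MAX >> prio))
	echo "sc.priority=$prio: up to $pages pages ($((pages * 4)) KiB) per pass"
done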

> Idk, I don’t have a strong argument against this change (except that
> it changes the existing behavior), but I also don’t see why such
> events are harmful. Do you mind elaborating a bit more?

So I've collected some demo data now.

systemd-run \
-u precious.service --slice=test-protected.slice \
-p MemoryLow=50M \
/root/memeater 50 # allocates 50M anon, doesn't use it

systemd-run \
-u victim.service --slice=test-protected.slice \
-p MemoryLow=0M \
/root/memeater -m 50 50 # allocates 50M anon, uses it

echo "Started workloads"

systemctl set-property --runtime test.slice MemoryMax=200M
systemctl set-property --runtime test-protected.slice MemoryLow=50M

sleep 5

# MemorySwapMax=0M so that reclaim pushes test-protected.slice to swap
systemd-run \
-u pressure.service --slice=test.slice \
-p MemorySwapMax=0M \
/root/memeater -m 170 170

sleep 5
systemd-cgtop -b -1 -m test.slice
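
To double check which service actually lost memory (not part of the demo
above; paths assume cgroup2 mounted at /sys/fs/cgroup):

for s in precious victim; do
	cg=/sys/fs/cgroup/test.slice/test-protected.slice/$s.service
	echo "== $s.service =="
	cat "$cg/memory.current"        # bytes currently charged
	grep ^low "$cg/memory.events"   # the "low" event count in question
done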

Result with memory_recursiveprot

> Control Group                                     Tasks  %CPU  Memory  Input/s  Output/s
> test.slice                                            3     -  199.9M        -         -
> test.slice/pressure.service                           1     -  170.5M        -         -
> test.slice/test-protected.slice                       2     -   29.4M        -         -
> test.slice/test-protected.slice/victim.service        1     -   29.1M        -         -
> test.slice/test-protected.slice/precious.service      1     -  292.0K        -         -

Result without memory_recursiveprot

> Control Group                                     Tasks  %CPU  Memory  Input/s  Output/s
> test.slice                                            3     -  199.8M        -         -
> test.slice/pressure.service                           1     -  170.5M        -         -
> test.slice/test-protected.slice                       2     -   29.3M        -         -
> test.slice/test-protected.slice/precious.service      1     -   28.7M        -         -
> test.slice/test-protected.slice/victim.service        1     -  560.0K        -         -

(kernel 5.17.0, systemd 249.10)
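
FWIW, one way to check whether the reported low events correspond to actual
reclaim (a sketch; pgscan/pgsteal are the aggregate counters in cgroup v2
memory.stat):

cg=/sys/fs/cgroup/test.slice/test-protected.slice/precious.service
grep ^low "$cg/memory.events"                   # protection breach events
grep -E '^(pgscan|pgsteal) ' "$cg/memory.stat"  # actual scan/reclaim activity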

So with this result, I'd say the event reporting is an independent change
(admittedly, it was thanks to the current implementation (not my proposal)
that I noticed this issue).
/me scratches head, let me review my other approaches...


Michal
