Subject: Re: [PATCH v5 2/2] mm/memcontrol.c: Reduce reclaim retries in mem_cgroup_resize_limit()
On 02/21/2018 11:17 PM, Andrew Morton wrote:
> On Fri, 19 Jan 2018 16:11:18 +0100 Michal Hocko <mhocko@kernel.org> wrote:
>
>> And to be honest, I do not really see why keeping retrying from
>> mem_cgroup_resize_limit should be so much faster than keep retrying from
>> the direct reclaim path. We are doing SWAP_CLUSTER_MAX batches anyway.
>> mem_cgroup_resize_limit loop adds _some_ overhead but I am not really
>> sure why it should be that large.
>
> Maybe restarting the scan lots of times results in rescanning lots of
> ineligible pages at the start of the list before doing useful work?
>
> Andrey, are you able to determine where all that CPU time is being spent?
>

I should have been more specific about the test I did. The full script looks like this:

# create the test memcg and move this shell into it
mkdir -p /sys/fs/cgroup/memory/test
echo $$ > /sys/fs/cgroup/memory/test/tasks

# fill the page cache from a 4G file, then keep re-reading it in a loop
cat 4G_file > /dev/null
while true; do cat 4G_file > /dev/null; done &
loop_pid=$!

# shrink the limit to 50M while the loop keeps refilling the cache
perf stat echo 50M > /sys/fs/cgroup/memory/test/memory.limit_in_bytes

# remove the limit and stop the reader
echo -1 > /sys/fs/cgroup/memory/test/memory.limit_in_bytes
kill $loop_pid


I think the additional loops add some overhead, and while it's not that big by itself,
this small overhead allows the looping task to refill slightly more pages, increasing
the total number of pages that mem_cgroup_resize_limit() needs to reclaim.
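
For reference, here's roughly the shape of the retry loop in
mem_cgroup_resize_limit() before this series (locking and the memory/memsw
invariant checks are left out, so treat it as an illustration rather than
the exact kernel code):

	do {
		if (signal_pending(current)) {
			ret = -EINTR;
			break;
		}

		/* Succeeds once usage fits under the new limit. */
		ret = page_counter_limit(counter, limit);
		if (!ret)
			break;

		/*
		 * Reclaim a small batch (rounded up internally to
		 * SWAP_CLUSTER_MAX).  Pages refaulted by the reader
		 * loop between iterations have to be reclaimed again.
		 */
		if (!try_to_free_mem_cgroup_pages(memcg, 1,
						  GFP_KERNEL, !memsw)) {
			ret = -EBUSY;
			break;
		}
	} while (true);

Every extra bit of time spent per iteration gives the reader loop a chance to
pull more of the file back into the page cache, which is what the numbers
below show.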

The amount of reclaimed pages can be measured with the following commands (the
second one extracts the nr_reclaimed= values from the trace and sums them up):
perf record -e vmscan:mm_vmscan_memcg_reclaim_end echo 50M > /sys/fs/cgroup/memory/test/memory.limit_in_bytes
perf script | cut -d '=' -f 2 | paste -sd+ | bc

I got 1259841 pages (4.9G) reclaimed with the patch vs 1394312 pages (5.4G) without it.
