Date: 2014-10-08
From: Michal Hocko
Subject: Re: [patch 3/3] mm: memcontrol: fix transparent huge page allocations under pressure

[I do not have time to go over all the points here and will be offline
until Monday - I will get back to the rest then]

On Tue 07-10-14 21:11:06, Johannes Weiner wrote:
> On Tue, Oct 07, 2014 at 03:59:50PM +0200, Michal Hocko wrote:
[...]
> > I am completely missing any notes about potential excessive
> > swapouts or longer reclaim stalls which are a natural side effect of direct
> > reclaim with a larger target (or is this something we do not agree on?).
>
> Yes, we disagree here. Why is reclaiming 2MB once worse than entering
> reclaim 16 times to reclaim SWAP_CLUSTER_MAX?

You can enter DEF_PRIORITY reclaim 16 times and still reclaim your
target, but you need at least 512<<DEF_PRIORITY pages on your LRUs to do
it in a single run at that priority. So small groups in particular will
pay more and would be subject to the problems mentioned above (e.g.
over-reclaim).
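To put a number on it, here is a minimal userspace sketch of the
priority arithmetic (my own illustration, not kernel code; it assumes
4K pages and relies on a priority-p pass scanning roughly
lru_size >> p pages per LRU, as get_scan_count() does):

	#include <stdio.h>

	#define DEF_PRIORITY	12	/* include/linux/mmzone.h */

	int main(void)
	{
		unsigned long nr_to_reclaim = 512;	/* one 2MB THP in 4K pages */

		/* Invert scan = lru_size >> priority: the smallest LRU
		 * whose DEF_PRIORITY scan window covers the target. */
		unsigned long lru_pages = nr_to_reclaim << DEF_PRIORITY;

		printf("need >= %lu LRU pages (~%lu MB) for one pass\n",
		       lru_pages, lru_pages * 4 / 1024);
		return 0;
	}

IOW ~8GB worth of LRU pages before a single DEF_PRIORITY pass can cover
a 2MB THP target on its own.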

> There is no inherent difference in reclaiming a big chunk and
> reclaiming many small chunks that add up to the same size.

[...]

> > Another part that matters is the size. Memcgs might be really small and
> > that changes the math. Large reclaim target will get to low prio reclaim
> > and thus the excessive reclaim.
>
> I already addressed page size vs. memcg size before.
>
> However, low priority reclaim does not result in excessive reclaim.
> The reclaim goal is checked every time it scanned SWAP_CLUSTER_MAX
> pages, and it exits if the goal has been met. See shrink_lruvec(),
> shrink_zone() etc.

Now I am confused. shrink_zone() will bail out, but shrink_lruvec()
will loop over the nr[] counts until they are empty and only adjusts
the numbers to be roughly proportional once:

	if (nr_reclaimed < nr_to_reclaim || scan_adjusted)
		continue;

	/*
	 * For kswapd and memcg, reclaim at least the number of pages
	 * requested. Ensure that the anon and file LRUs are scanned
	 * proportionally what was requested by get_scan_count(). We
	 * stop reclaiming one LRU and reduce the amount scanning
	 * proportional to the original scan target.
	 */
	[...]
	scan_adjusted = true;

Or do you rely on
	/*
	 * It's just vindictive to attack the larger once the smaller
	 * has gone to zero. And given the way we stop scanning the
	 * smaller below, this makes sure that we only make one nudge
	 * towards proportionality once we've got nr_to_reclaim.
	 */
	if (!nr_file || !nr_anon)
		break;

and SCAN_FILE because !inactive_file_is_low?
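For reference, the loop as a whole - condensed by hand from
mm/vmscan.c around 3.17, with the proportional-adjustment details
elided, so not a verbatim copy:

	/* shrink_lruvec(), condensed: nr[] holds the per-LRU scan
	 * targets computed by get_scan_count(). */
	while (nr[LRU_INACTIVE_ANON] || nr[LRU_ACTIVE_FILE] ||
	       nr[LRU_INACTIVE_FILE]) {
		for_each_evictable_lru(lru) {
			if (nr[lru]) {
				nr_to_scan = min(nr[lru], SWAP_CLUSTER_MAX);
				nr[lru] -= nr_to_scan;
				nr_reclaimed += shrink_list(lru, nr_to_scan,
							    lruvec, sc);
			}
		}

		/* The target is only consulted here, and only until the
		 * one-off proportionality nudge below has been applied. */
		if (nr_reclaimed < nr_to_reclaim || scan_adjusted)
			continue;

		/* ... compute nr_file/nr_anon from the remaining nr[],
		 * stop scanning one LRU and shrink the other's target
		 * proportionally ... */
		if (!nr_file || !nr_anon)
			break;
		/* ... */
		scan_adjusted = true;
	}

AFAICS nothing in there caps the total scanned at nr_to_reclaim once
scan_adjusted is set, so the loop keeps going until nr[] is drained.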

[...]
--
Michal Hocko
SUSE Labs

