Subject: Re: [PATCH 1/2] mm: prevent to add a page to swap if may_writepage is unset
On Wed,  9 Jan 2013 15:21:13 +0900
Minchan Kim <minchan@kernel.org> wrote:
>

This changelog is quite hard to understand :(

> Recently, Luigi reported there is lots of free swap space left when
> OOM happens. It's easily reproduced on zram-over-swap, where
> many instances of memory hogs are running and laptop_mode is enabled.
>
> Luigi reported there was no problem when he disabled laptop_mode.
> The problem, as I investigated it, is as follows.
>
> try_to_free_pages disables may_writepage if laptop_mode is enabled.
> shrink_page_list then adds lots of anon pages to the swap cache via
> add_to_swap, which marks the pages Dirty and rotates them to the head
> of the inactive LRU without pageout. If this is repeated, the inactive
> anon LRU ends up full of Dirty and SwapCache pages.

"Dirty and SwapCache" is ambigious. Does it mean "dirty pages and
swapcache pages" or does it mean "dirty swapcache pages". The latter,
I expect.
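
Concretely, the path being described is roughly the following (paraphrased
and simplified from the mm code of that era, not a verbatim quote):
add_to_swap() marks the page dirty on success, and the dirty-page handling
further down shrink_page_list() then refuses to write it while
may_writepage is 0, so the page goes back onto the inactive list still
dirty:

	if (PageAnon(page) && !PageSwapCache(page)) {
		if (!add_to_swap(page))		/* sets PageDirty on success */
			goto activate_locked;
		may_enter_fs = 1;
	}
	/* ... later in shrink_page_list() ... */
	if (PageDirty(page)) {
		/* ... */
		if (!may_enter_fs)
			goto keep_locked;
		if (!sc->may_writepage)
			goto keep_locked;	/* rotated back, still dirty */
		/* otherwise pageout() writes it to swap */
	}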

>
> In that case, isolate_lru_pages fails because it tries to isolate only
> clean pages due to may_writepage == 0.
>
> may_writepage can become 1 only if total_scanned is higher than
> writeback_threshold in do_try_to_free_pages but unfortunately, the
> VM can't isolate anon pages from the inactive anon lru list by
> above reason and we have already reclaimed all file-backed pages.
> So it ends up OOM killing.

Here, please expand upon "by above reason". Explain here exactly why
scanning is unsuccessful.
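
For reference, the escape hatch being referred to looks roughly like this
(simplified from do_try_to_free_pages() around this time, not a verbatim
quote):

	writeback_threshold = sc->nr_to_reclaim + sc->nr_to_reclaim / 2;
	if (total_scanned > writeback_threshold) {
		wakeup_flusher_threads(laptop_mode ? 0 : total_scanned,
				       WB_REASON_TRY_TO_FREE_PAGES);
		sc->may_writepage = 1;
	}

i.e. may_writepage only comes back on once the scan has actually covered
enough pages.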

> This patch prevents adding a page to the swap cache unnecessarily when
> may_writepage is unset, so the anonymous lru list isn't full of
> Dirty/Swapcache pages. Then the VM can isolate pages from the anon lru
> list, which ends up setting may_writepage to 1 and lets us swap out
> anon lru pages. I confirmed that when OOM triggered, swap space was full.
>
> ...
>
> --- a/mm/vmscan.c
> +++ b/mm/vmscan.c
> @@ -780,6 +780,8 @@ static unsigned long shrink_page_list(struct list_head *page_list,
>  		if (PageAnon(page) && !PageSwapCache(page)) {
>  			if (!(sc->gfp_mask & __GFP_IO))
>  				goto keep_locked;
> +			if (!sc->may_writepage)
> +				goto keep_locked;
>  			if (!add_to_swap(page))
>  				goto activate_locked;
>  			may_enter_fs = 1;

Needs a comment explaining why we bail out in this case, please.
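
For instance, something along these lines (wording purely illustrative,
just restating the changelog's rationale):

		if (PageAnon(page) && !PageSwapCache(page)) {
			if (!(sc->gfp_mask & __GFP_IO))
				goto keep_locked;
			/*
			 * Don't add pages to swap cache while writeback is
			 * disallowed: they would only become dirty swapcache
			 * pages we cannot page out, clogging the inactive
			 * anon LRU.
			 */
			if (!sc->may_writepage)
				goto keep_locked;
			if (!add_to_swap(page))
				goto activate_locked;
			may_enter_fs = 1;
		}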

If I'm understanding it correctly, this change causes the kernel to
move less anonymous memory onto the inactive anon LRU and thereby
causes the scanner to be more successful in locating clean swapcache
pages on that list? But that makes no sense, because from your
description it appears the intent of the patch is to use *more* swap.

