Subject: Re: [PATCH] vmscan: bail out of page reclaim after swap_cluster_max pages
Date: 2008-11-25
> Sometimes the VM spends the first few priority rounds rotating back
> referenced pages and submitting IO. Once we get to a lower priority,
> sometimes the VM ends up freeing way too many pages.
>
> The fix is relatively simple: in shrink_zone() we can check how many
> pages we have already freed; direct reclaim tasks then break out of the
> scanning loop once they have freed enough pages and have reached
> a lower priority level.
>
> However, in order to do this we do need to know how many pages we already
> freed, so move nr_reclaimed into scan_control.
>
> Signed-off-by: Rik van Riel <riel@redhat.com>
> ---
> Kosaki, this should address the zone scanning pressure issue.
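
A minimal sketch of what the quoted description amounts to, not the actual
diff: nr_reclaimed is kept in struct scan_control so shrink_zone() can
consult it, and direct reclaim (not kswapd) bails out of the per-zone scan
loop once it has reclaimed more than swap_cluster_max pages at a priority
below DEF_PRIORITY. Names such as shrink_list(), for_each_evictable_lru()
and current_is_kswapd() follow the mm/vmscan.c conventions of this era but
should be checked against the real patch.

static void shrink_zone(int priority, struct zone *zone,
			struct scan_control *sc)
{
	unsigned long nr[NR_LRU_LISTS];
	enum lru_list l;

	/* ... compute nr[l], the number of pages to scan per LRU list ... */

	while (nr[LRU_INACTIVE_ANON] || nr[LRU_ACTIVE_FILE] ||
					nr[LRU_INACTIVE_FILE]) {
		for_each_evictable_lru(l) {
			if (nr[l]) {
				unsigned long nr_to_scan;

				nr_to_scan = min(nr[l],
					(unsigned long)sc->swap_cluster_max);
				nr[l] -= nr_to_scan;

				/* reclaimed pages accumulate in scan_control */
				sc->nr_reclaimed += shrink_list(l, nr_to_scan,
							zone, sc, priority);
			}
		}

		/*
		 * Direct reclaim tasks that have freed enough pages stop
		 * scanning, but only below DEF_PRIORITY, so the first pass
		 * still applies even pressure to all zones.
		 */
		if (sc->nr_reclaimed > sc->swap_cluster_max &&
		    priority < DEF_PRIORITY && !current_is_kswapd())
			break;
	}

	/* ... balance the active list, throttle, etc. ... */
}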

hmmmm. I still don't like the behavior when priority==DEF_PRIORITY,
but I should also back that up with code and benchmark numbers.

Therefore, I'll try to measure this patch this week.

thanks.



