From: Vladimir Davydov <vdavydov@parallels.com>
Subject: [PATCH 3/3] mm: vmscan: shrink_slab: do not skip caches with < batch_size objects
Date: 17 Jan 2014
In its current implementation, shrink_slab() won't scan caches that
have fewer than batch_size objects. If only a few shrinkers are
registered, this behavior does not cause any problems, because
batch_size is usually small, but if we have a lot of slab shrinkers,
which is perfectly possible since FS shrinkers are now per-superblock,
we can end up with hundreds of megabytes of practically unreclaimable
kmem objects. For instance, mounting a thousand ext2 FS images with a
hundred files in each and iterating over all the files using du(1)
results in about 200 MB of FS caches that cannot be dropped even with
the aid of the vm.drop_caches sysctl! Fix this.

Signed-off-by: Vladimir Davydov <vdavydov@parallels.com>
Cc: Andrew Morton <akpm@linux-foundation.org>
Cc: Mel Gorman <mgorman@suse.de>
Cc: Michal Hocko <mhocko@suse.cz>
Cc: Johannes Weiner <hannes@cmpxchg.org>
Cc: Rik van Riel <riel@redhat.com>
Cc: Dave Chinner <dchinner@redhat.com>
Cc: Glauber Costa <glommer@gmail.com>
---
mm/vmscan.c | 25 +++++++++++++++++++------
1 file changed, 19 insertions(+), 6 deletions(-)

diff --git a/mm/vmscan.c b/mm/vmscan.c
index f6d716d..2e710d4 100644
--- a/mm/vmscan.c
+++ b/mm/vmscan.c
@@ -275,7 +275,7 @@ shrink_slab_node(struct shrink_control *shrinkctl, struct shrinker *shrinker,
 	 * a large delta change is calculated directly.
 	 */
 	if (delta < freeable / 4)
-		total_scan = min(total_scan, freeable / 2);
+		total_scan = min(total_scan, max(freeable / 2, batch_size));
 
 	/*
 	 * Avoid risking looping forever due to too large nr value:
@@ -289,21 +289,34 @@ shrink_slab_node(struct shrink_control *shrinkctl, struct shrinker *shrinker,
 				nr_pages_scanned, lru_pages,
 				freeable, delta, total_scan);
 
-	while (total_scan >= batch_size) {
+	/*
+	 * To avoid CPU cache thrashing, we should not scan less than
+	 * batch_size objects in one pass, but if the cache has less
+	 * than batch_size objects in total, and we really want to
+	 * shrink them all, go ahead and scan what we have, otherwise
+	 * last batch_size objects will never get reclaimed.
+	 */
+	if (total_scan < batch_size &&
+	    !(freeable < batch_size && total_scan >= freeable))
+		goto out;
+
+	do {
 		unsigned long ret;
+		unsigned long nr_to_scan = min(total_scan, batch_size);
 
-		shrinkctl->nr_to_scan = batch_size;
+		shrinkctl->nr_to_scan = nr_to_scan;
 		ret = shrinker->scan_objects(shrinker, shrinkctl);
 		if (ret == SHRINK_STOP)
 			break;
 		freed += ret;
 
-		count_vm_events(SLABS_SCANNED, batch_size);
-		total_scan -= batch_size;
+		count_vm_events(SLABS_SCANNED, nr_to_scan);
+		total_scan -= nr_to_scan;
 
 		cond_resched();
-	}
+	} while (total_scan >= batch_size);
 
+out:
 	/*
 	 * move the unused scan count back into the shrinker in a
 	 * manner that handles concurrent updates. If we exhausted the
--
1.7.10.4

