From: Hugh Dickins <hughd@google.com>
Date: 2011-02-24
Subject: [PATCH] memcg: more mem_cgroup_uncharge batching
It seems odd that truncate_inode_pages_range(), called not only when
truncating but also when evicting inodes, has mem_cgroup_uncharge_start()
and _end() batching in its second loop to clear up a few leftovers, but
not in its first loop, which does almost all the work: add them there too.

Signed-off-by: Hugh Dickins <hughd@google.com>
---

mm/truncate.c | 2 ++
1 file changed, 2 insertions(+)
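
For anyone not familiar with the memcg batching API, here is a minimal
sketch of the pattern this patch extends to the first loop (hypothetical
helper names, not the actual truncate code): uncharges issued between
mem_cgroup_uncharge_start() and mem_cgroup_uncharge_end() are accumulated
on the current task and folded into a single res_counter update when the
batch is closed.

#include <linux/memcontrol.h>
#include <linux/pagevec.h>

/*
 * Stand-in for the per-page teardown (truncate_complete_page() in the
 * real code), whose call chain is what ends up uncharging the page
 * from its memcg.
 */
static void drop_one_page(struct page *page);

/*
 * Sketch of the batching pattern: while the batch is open, each
 * per-page uncharge is only recorded; the res_counter is updated
 * once, at mem_cgroup_uncharge_end().
 */
static void drop_pagevec_batched(struct pagevec *pvec)
{
	int i;

	mem_cgroup_uncharge_start();		/* open the uncharge batch */
	for (i = 0; i < pagevec_count(pvec); i++)
		drop_one_page(pvec->pages[i]);	/* uncharge deferred */
	mem_cgroup_uncharge_end();		/* one res_counter update */
}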

--- 2.6.38-rc6/mm/truncate.c	2011-01-21 20:54:14.000000000 -0800
+++ linux/mm/truncate.c	2011-02-23 16:12:19.000000000 -0800
@@ -225,6 +225,7 @@ void truncate_inode_pages_range(struct a
 	next = start;
 	while (next <= end &&
 	       pagevec_lookup(&pvec, mapping, next, PAGEVEC_SIZE)) {
+		mem_cgroup_uncharge_start();
 		for (i = 0; i < pagevec_count(&pvec); i++) {
 			struct page *page = pvec.pages[i];
 			pgoff_t page_index = page->index;
@@ -247,6 +248,7 @@ void truncate_inode_pages_range(struct a
 			unlock_page(page);
 		}
 		pagevec_release(&pvec);
+		mem_cgroup_uncharge_end();
 		cond_resched();
 	}

