Date: 13 Jan 2014
From: Hugh Dickins <hughd@google.com>
Subject: [PATCH 1/3] mm/memcg: fix last_dead_count memory wastage
Shorten mem_cgroup_reclaim_iter.last_dead_count from unsigned long to
int: it is assigned from an int and compared with an int, and it sits
next to an unsigned int, so there is no point in it being unsigned
long, which wasted 104 bytes in every mem_cgroup_per_zone.

Signed-off-by: Hugh Dickins <hughd@google.com>
---
Putting this one first as it should be nicely uncontroversial.
I'm assuming it's much too late for v3.13, so all three are diffed
against mmotm.

mm/memcontrol.c | 2 +-
1 file changed, 1 insertion(+), 1 deletion(-)

--- mmotm/mm/memcontrol.c 2014-01-10 18:25:02.236448954 -0800
+++ linux/mm/memcontrol.c 2014-01-12 22:21:10.700570471 -0800
@@ -149,7 +149,7 @@ struct mem_cgroup_reclaim_iter {
 	 * matches memcg->dead_count of the hierarchy root group.
 	 */
 	struct mem_cgroup *last_visited;
-	unsigned long last_dead_count;
+	int last_dead_count;
 
 	/* scan generation, increased every round-trip */
 	unsigned int generation;
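
For reference, a minimal userspace sketch (not part of the patch) of
where the 104 bytes come from, assuming a 64-bit LP64 build and the
mmotm layout above, with DEF_PRIORITY == 12 so each mem_cgroup_per_zone
carries DEF_PRIORITY + 1 == 13 reclaim_iter entries; the struct names
below are stand-ins for illustration, not the kernel definitions, and
last_visited is reduced to a plain void pointer:

#include <stdio.h>

#define DEF_PRIORITY	12	/* kernel value at the time */

struct iter_old {			/* layout before the patch */
	void *last_visited;		/* 8 bytes (struct mem_cgroup * in the kernel) */
	unsigned long last_dead_count;	/* 8 bytes */
	unsigned int generation;	/* 4 bytes + 4 bytes tail padding */
};

struct iter_new {			/* layout after the patch */
	void *last_visited;		/* 8 bytes */
	int last_dead_count;		/* 4 bytes, packs with generation */
	unsigned int generation;	/* 4 bytes, no tail padding needed */
};

int main(void)
{
	unsigned long old = (DEF_PRIORITY + 1) * sizeof(struct iter_old);
	unsigned long new = (DEF_PRIORITY + 1) * sizeof(struct iter_new);

	printf("per iter: %zu -> %zu bytes\n",
	       sizeof(struct iter_old), sizeof(struct iter_new));
	printf("per mem_cgroup_per_zone: %lu -> %lu bytes, saving %lu\n",
	       old, new, old - new);
	return 0;
}

On LP64 this prints 24 -> 16 bytes per iterator and 312 -> 208 bytes
for the 13 entries, i.e. the 104 bytes quoted in the changelog.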
