From: Joonsoo Kim <iamjoonsoo.kim@lge.com>
Subject: [PATCH] mm/slab: reverse iteration on find_mergeable()
Date: 2014-10-31
Unlike SLUB, in SLAB an object does not always start at the beginning of
the slab. This caused a misalignment problem after slab merging was
introduced by commit 12220dea07f1 ("mm/slab: support slab merge"). An
alignment mismatch check was added ("mm/slab: fix unalignment problem on
Malta with EVA due to slab merge") to prevent merging in that case.

This has the undesirable result that merging happens with infrequently
used kmem_caches when there are kmem_caches with the same size but
different alignment. For example, kmem_caches whose object size is
256 bytes are merged into pool_workqueue rather than kmalloc-256,
because the kmem_caches for kmalloc are at the tail of the list.

To prevent this situation, this patch reverses the iteration order in
find_mergeable() so that frequently used kmem_caches are found first.
This change helps a kmem_cache merge into a frequently used kmem_cache,
such as the kmalloc kmem_caches.

Signed-off-by: Joonsoo Kim <iamjoonsoo.kim@lge.com>
---
mm/slab_common.c | 2 +-
1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/mm/slab_common.c b/mm/slab_common.c
index 2657084..f6510d9 100644
--- a/mm/slab_common.c
+++ b/mm/slab_common.c
@@ -250,7 +250,7 @@ struct kmem_cache *find_mergeable(size_t size, size_t align,
 	size = ALIGN(size, align);
 	flags = kmem_cache_flags(size, flags, name, NULL);
 
-	list_for_each_entry(s, &slab_caches, list) {
+	list_for_each_entry_reverse(s, &slab_caches, list) {
 		if (slab_unmergeable(s))
 			continue;
 
--
1.7.9.5
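
[Editor's illustration, not part of the patch: a minimal userspace sketch
of why the iteration direction matters. The cache names and the list order
below are assumptions taken from the changelog example (kmalloc caches at
the tail of slab_caches, pool_workqueue nearer the head), and
find_candidate() is a hypothetical stand-in for the loop in
find_mergeable().]

/*
 * Userspace sketch only -- not kernel code.  Models the head-to-tail
 * order of a hypothetical slab_caches list with two 256-byte caches.
 */
#include <stdio.h>

struct fake_cache {
	const char *name;
	unsigned int size;	/* object size in bytes */
};

/*
 * Head-to-tail order: later-created caches near the head, boot-time
 * kmalloc caches at the tail (as described in the changelog).
 */
static const struct fake_cache caches[] = {
	{ "pool_workqueue", 256 },
	{ "kmalloc-256",    256 },
};

static const struct fake_cache *find_candidate(unsigned int size, int reverse)
{
	int n = (int)(sizeof(caches) / sizeof(caches[0]));
	int i;

	if (!reverse) {
		/* models list_for_each_entry(): head to tail */
		for (i = 0; i < n; i++)
			if (caches[i].size == size)
				return &caches[i];
	} else {
		/* models list_for_each_entry_reverse(): tail to head */
		for (i = n - 1; i >= 0; i--)
			if (caches[i].size == size)
				return &caches[i];
	}
	return NULL;
}

int main(void)
{
	/* forward walk picks pool_workqueue; reverse walk picks kmalloc-256 */
	printf("forward: %s\n", find_candidate(256, 0)->name);
	printf("reverse: %s\n", find_candidate(256, 1)->name);
	return 0;
}

In the kernel itself the whole change is the one-liner in the hunk above:
list_for_each_entry() becomes list_for_each_entry_reverse(), so the walk
starts from the tail of slab_caches, where the kmalloc caches sit.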

