Subject: Re: [RT LATENCY] 249 microsecond latency caused by slub's unfreeze_partials() code.
On Thu, Apr 04, 2013 at 01:53:25PM +0000, Christoph Lameter wrote:
> On Thu, 4 Apr 2013, Joonsoo Kim wrote:
>
> > Pekka already applied it.
> > Do we need an update?
>
> Well I thought the passing of the count via lru.next would be something
> worthwhile to pick up.

Hello, Pekka.

Here is a patch implementing Christoph's idea.
Rather than updating my previous patch, I have rewritten it on top of
your slab/next tree.
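
To make the idea concrete: once acquire_slab() has taken a page off the
partial list, the page's list linkage is dead space, so lru.next can
temporarily carry an integer for get_partial_node() to read back. Below
is a minimal userspace sketch of that idiom, with simplified stand-in
types rather than the kernel's real struct page; it assumes
sizeof(unsigned long) == sizeof(void *), which holds on the
architectures Linux supports.

#include <assert.h>
#include <stdio.h>

/* Simplified stand-ins for illustration; not the kernel's definitions. */
struct list_head {
	void *next;
	void *prev;
};

struct page {
	unsigned long objects;	/* total objects in this slab page */
	unsigned long inuse;	/* objects already allocated from it */
	struct list_head lru;	/* linkage; dead once off the partial list */
};

/*
 * acquire_slab() side of the trick: the page was just removed from
 * the partial list, so lru.next is free to carry the inuse count.
 */
static void stash_inuse(struct page *page)
{
	page->lru.next = (void *)page->inuse;
}

/*
 * get_partial_node() side: recover the count and compute how many
 * objects the acquisition made available.
 */
static unsigned long acquired_objects(const struct page *page)
{
	return page->objects - (unsigned long)page->lru.next;
}

int main(void)
{
	struct page page = { .objects = 32, .inuse = 5 };

	stash_inuse(&page);
	assert(acquired_objects(&page) == 27);	/* 32 total - 5 in use */
	printf("acquired %lu objects\n", acquired_objects(&page));
	return 0;
}

The same arithmetic appears in the patch below as
available += (page->objects - (unsigned long)page->lru.next).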

Thanks.

------------------------8<-------------------------------
From e1c18793dd2a9d9cef87b07faf975364b71276d7 Mon Sep 17 00:00:00 2001
From: Joonsoo Kim <iamjoonsoo.kim@lge.com>
Date: Fri, 5 Apr 2013 10:49:36 +0900
Subject: [PATCH] slub: use page->lru.next to calculate nr of acquired objects

We can pass the inuse count via page->lru.next in order to calculate the
number of acquired objects, which is a cleaner approach. It removes one
function argument and simplifies the code.

Cc: Christoph Lameter <cl@linux.com>
Suggested-by: Christoph Lameter <cl@linux.com>
Signed-off-by: Joonsoo Kim <iamjoonsoo.kim@lge.com>

diff --git a/mm/slub.c b/mm/slub.c
index 21b3f00..8a35464 100644
--- a/mm/slub.c
+++ b/mm/slub.c
@@ -1493,11 +1493,12 @@ static inline void remove_partial(struct kmem_cache_node *n,
  */
 static inline void *acquire_slab(struct kmem_cache *s,
 		struct kmem_cache_node *n, struct page *page,
-		int mode, int *objects)
+		int mode)
 {
 	void *freelist;
 	unsigned long counters;
 	struct page new;
+	unsigned long inuse;
 
 	/*
 	 * Zap the freelist and set the frozen bit.
@@ -1507,7 +1508,7 @@ static inline void *acquire_slab(struct kmem_cache *s,
 	freelist = page->freelist;
 	counters = page->counters;
 	new.counters = counters;
-	*objects = new.objects - new.inuse;
+	inuse = page->inuse;
 	if (mode) {
 		new.inuse = page->objects;
 		new.freelist = NULL;
@@ -1525,6 +1526,7 @@ static inline void *acquire_slab(struct kmem_cache *s,
 		return NULL;
 
 	remove_partial(n, page);
+	page->lru.next = (void *)inuse;
 	WARN_ON(!freelist);
 	return freelist;
 }
@@ -1541,7 +1543,6 @@ static void *get_partial_node(struct kmem_cache *s, struct kmem_cache_node *n,
 	struct page *page, *page2;
 	void *object = NULL;
 	int available = 0;
-	int objects;
 
 	/*
 	 * Racy check. If we mistakenly see no partial slabs then we
@@ -1559,11 +1560,11 @@ static void *get_partial_node(struct kmem_cache *s, struct kmem_cache_node *n,
 		if (!pfmemalloc_match(page, flags))
 			continue;
 
-		t = acquire_slab(s, n, page, object == NULL, &objects);
+		t = acquire_slab(s, n, page, object == NULL);
 		if (!t)
 			break;
 
-		available += objects;
+		available += (page->objects - (unsigned long)page->lru.next);
 		if (!object) {
 			c->page = page;
 			stat(s, ALLOC_FROM_PARTIAL);
--
1.7.9.5

