Date: Thu, 11 Dec 2014 14:35:18 +0100
From: Jesper Dangaard Brouer <>
Subject: Re: [PATCH 0/7] slub: Fastpath optimization (especially for RT) V1
On Wed, 10 Dec 2014 10:30:17 -0600 Christoph Lameter <cl@linux.com> wrote:
[...]
> Slab Benchmarks on a kernel with CONFIG_PREEMPT show an improvement of
> 20%-50% of fastpath latency:
>
> Before:
>
> Single thread testing
> [...]
> 2. Kmalloc: alloc/free test
> [...]
> 10000 times kmalloc(256)/kfree -> 116 cycles
> [...]
>
> After:
>
> Single thread testing
> [...]
> 2. Kmalloc: alloc/free test
> [...]
> 10000 times kmalloc(256)/kfree -> 60 cycles
> [...]
It looks like an impressive saving, 116 -> 60 cycles. I just don't see the same kind of improvement with my similar tests [1][2].
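For context, the quoted numbers come from timing back-to-back kmalloc(256)/kfree pairs and averaging over the 10000 iterations; roughly the pattern below (my sketch of the measured fast-path, not the actual test harness; the function name is just a placeholder):

#include <linux/slab.h>     /* kmalloc/kfree */
#include <linux/printk.h>   /* pr_info */
#include <asm/timex.h>      /* get_cycles() */

/* Sketch of the alloc/free pattern behind the quoted 116/60 cycle
 * numbers: 10000 back-to-back kmalloc(256)/kfree pairs, reporting
 * the average TSC cycles per pair. Not the actual harness. */
static void bench_kmalloc_256(void)
{
	cycles_t start, stop;
	int i;

	start = get_cycles();
	for (i = 0; i < 10000; i++)
		kfree(kmalloc(256, GFP_KERNEL));
	stop = get_cycles();

	pr_info("kmalloc(256)/kfree -> %llu cycles\n",
		(unsigned long long)(stop - start) / 10000);
}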
My test[1] is just a fast-path loop over kmem_cache_alloc+free on 256-byte objects, essentially the loop sketched below. (Results are after explicitly inlining the new function is_pointer_to_page().)
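A simplified sketch of that loop (the real code in [1] wraps it in my time_bench framework, which records both TSC cycles and ns per iteration; the cache name, GFP flag and loop count here are placeholders):

#include <linux/slab.h>

/* Sketch of the fast-path loop in [1]: a dedicated 256-byte cache,
 * with an immediate free after each alloc so every iteration stays
 * on the alloc/free fastpath. */
static int bench_kmem_cache_256(int loops)
{
	struct kmem_cache *cache;
	void *obj;
	int i;

	/* Name and flags are placeholders, not what [1] uses. */
	cache = kmem_cache_create("bench_256", 256, 0, 0, NULL);
	if (!cache)
		return -ENOMEM;

	for (i = 0; i < loops; i++) {
		obj = kmem_cache_alloc(cache, GFP_KERNEL);
		if (!obj)
			break;
		kmem_cache_free(cache, obj);
	}

	kmem_cache_destroy(cache);
	return 0;
}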
baseline: 47 cycles(tsc) 19.032 ns
patchset: 45 cycles(tsc) 18.135 ns
I do see the improvement, but it is not as high as I would have expected: 47 -> 45 cycles is only around 4%, versus the 116 -> 60 cycles (~48%) quoted above.
(CPU E5-2695)
[1] https://github.com/netoptimizer/prototype-kernel/blob/master/kernel/lib/time_bench_kmem_cache1.c
[2] https://github.com/netoptimizer/prototype-kernel/blob/master/kernel/mm/qmempool_bench.c
--
Best regards,
  Jesper Dangaard Brouer
  MSc.CS, Sr. Network Kernel Developer at Red Hat
  Author of http://www.iptv-analyzer.org
  LinkedIn: http://www.linkedin.com/in/brouer