Date: 2008-08-24
From: Andi Kleen <ak@linux.intel.com>
Subject: Re: [PATCH 2/2] smp_call_function: use rwlocks on queues rather than rcu
> > Ah so it was already 25% slower even without kmalloc? I thought
> > that was with kmalloc already. That doesn't sound good. Any idea where
> > that slowdown comes from?
>
> Just longer code path, I think. It calls the generic

I did IPI measurements quite some time ago, and what I remember
from them is that IPI latencies were in the ballpark of a few thousand
cycles.
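
For reference, that kind of number comes from timing a synchronous
cross-CPU call with the TSC. A minimal sketch of such a measurement
(hypothetical code, not the original measurement; noop_func and
measure_ipi_roundtrip are made-up names):

#include <linux/kernel.h>
#include <linux/smp.h>
#include <linux/timex.h>

/* Time one synchronous cross-CPU call to estimate the IPI round trip
 * in TSC cycles. */
static void noop_func(void *info)
{
}

static void measure_ipi_roundtrip(int target_cpu)
{
	cycles_t t0, t1;

	t0 = get_cycles();
	/* wait=1: return only after target_cpu has executed noop_func */
	smp_call_function_single(target_cpu, noop_func, NULL, 1);
	t1 = get_cycles();

	printk(KERN_INFO "IPI round trip to CPU %d: %llu cycles\n",
	       target_cpu, (unsigned long long)(t1 - t0));
}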

> smp_call_function_mask(), which then does a popcount on the cpu mask
> (which it needs to do anyway), sees only one bit set, and then punts to
> the smp_call_function_single() path.
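
That path, roughly (a simplified sketch of the generic code being
described here, not the exact kernel source):

#include <linux/cpumask.h>
#include <linux/smp.h>

int smp_call_function_mask(cpumask_t mask, void (*func)(void *), void *info,
			   int wait)
{
	int cpu;

	/* popcount on the mask: how many CPUs are targeted? */
	switch (cpus_weight(mask)) {
	case 0:
		/* nothing to do */
		return 0;
	case 1:
		/* single target: punt to the single-CPU path */
		cpu = first_cpu(mask);
		return smp_call_function_single(cpu, func, info, wait);
	default:
		/* ... the real multi-CPU IPI path goes here ... */
		return 0;
	}
}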

But that is more in the range of a few tens of cycles (or maybe one to
two hundred if you have a NR_CPUS==4096 kernel with a really large cpumask).

Doesn't really explain a 25% slowdown, I would say.

Are you sure there isn't a new cache miss in there or something? Actually,
given that the whole IPI round trip is only a few thousand cycles, it would
take multiple misses to account for such a slowdown.

-Andi

--
ak@linux.intel.com

