Subject: Re: [BENCHMARK] 2.5.44-mm6 contest results
Con Kolivas wrote:
>
> io_load:
> Kernel       [runs]   Time  CPU%  Loads  LCPU%  Ratio
> 2.5.44       [3]     873.8     9     69     12  12.24
> 2.5.44-mm1   [3]     347.3    22     35     15   4.86
> 2.5.44-mm2   [3]     294.2    28     19     10   4.12
> 2.5.44-mm4   [3]     358.7    23     25     10   5.02
> 2.5.44-mm5   [4]     270.7    29     18     11   3.79
> 2.5.44-mm6   [3]     284.1    28     20     10   3.98
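
(The Ratio column isn't defined in the quoted output; from the numbers it
appears to be the loaded time divided by an unloaded reference compile time
of roughly 71.4 seconds, which every row is consistent with:

\[
\text{Ratio} \approx \frac{T_{\text{loaded}}}{T_{\text{ref}}},
\qquad \frac{873.8}{71.4} \approx 12.24,
\qquad \frac{284.1}{71.4} \approx 3.98
\]

That inference is mine, not something stated in the thread.)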

Jens, I think I prefer fifo_batch=16. We do need to expose
these in /somewhere so people can fiddle with them.
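
For anyone who wants to fiddle once the knobs do exist, here is a minimal
userspace sketch of poking such a tunable. The path below is purely
hypothetical (nothing in 2.5.44 actually exposes fifo_batch anywhere yet);
only the open/write pattern is the point:

#include <stdio.h>
#include <stdlib.h>

int main(void)
{
	/* Hypothetical path: fifo_batch is not exposed in 2.5.44. */
	const char *path = "/sys/block/hda/queue/iosched/fifo_batch";
	FILE *f = fopen(path, "w");

	if (!f) {
		perror("fopen");
		return EXIT_FAILURE;
	}
	fprintf(f, "16\n");	/* the fifo_batch value preferred above */
	fclose(f);
	return EXIT_SUCCESS;
}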

>...
> mem_load:
> Kernel       [runs]   Time  CPU%  Loads  LCPU%  Ratio
> 2.5.44       [3]     114.3    67     30      2   1.60
> 2.5.44-mm1   [3]     159.7    47     38      2   2.24
> 2.5.44-mm2   [3]     116.6    64     29      2   1.63
> 2.5.44-mm4   [3]     114.9    65     28      2   1.61
> 2.5.44-mm5   [4]     114.1    65     30      2   1.60
> 2.5.44-mm6   [3]     226.9    33     50      2   3.18
>
> Mem load has dropped off again

Well, that's one interpretation. The other is "goody, that pesky
kernel compile isn't slowing down my important memory-intensive
whateveritis so much". It's a tradeoff.

It appears that this change was caused by increasing the default
value of /proc/sys/vm/page-cluster from 3 to 4. I am surprised.

It was of only small benefit in the other tests, so I'll ditch that one.
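
For reference, page-cluster is a log2 value: each swapin reads
2^page_cluster pages in one batch, so going from 3 to 4 doubles each
swap readahead from 8 pages to 16. A small sketch of just that
arithmetic (not the kernel's code, and assuming 4KB pages):

#include <stdio.h>

int main(void)
{
	const unsigned long page_size_kb = 4;	/* assumed 4KB pages */
	int page_cluster;

	/* /proc/sys/vm/page-cluster: 2^page_cluster pages per swapin */
	for (page_cluster = 3; page_cluster <= 4; page_cluster++) {
		unsigned long pages = 1UL << page_cluster;
		printf("page-cluster=%d -> %lu pages (%lu KB) per swapin\n",
		       page_cluster, pages, pages * page_size_kb);
	}
	return 0;
}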

Thanks.

(You're still testing with all IO against the same disk, yes? Please
remember that things change quite significantly when the swap IO
or the io_load is against a different device.)
