From: Dave Chinner <david@fromorbit.com>
Subject: Re: [numa shrinker] 9b17c62382: -36.6% regression on sparse file copy
On Mon, Jan 06, 2014 at 04:20:48PM +0800, fengguang.wu@intel.com wrote:
> Hi Dave,
>
> We noticed throughput drop in test case
>
> vm-scalability/300s-lru-file-readtwice (*)
>
> between v3.11 and v3.12, and it's still low as of v3.13-rc6:
>
>           v3.11                 v3.12                  v3.13-rc6
> ---------------  -------------------------  -------------------------
>   14934707 ~ 0%       -48.8%  7647311 ~ 0%       -47.6%  7829487 ~ 0%   vm-scalability.throughput
>              ^^       ^^^^^^
>         stddev%       change%
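
(Reading the quoted row: the "~ 0%" figures are the run-to-run stddev, and
the change percentages are relative to the v3.11 baseline, i.e.
(7647311 - 14934707) / 14934707 ≈ -48.8% and
(7829487 - 14934707) / 14934707 ≈ -47.6%.)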

What does this vm-scalability.throughput number mean?

> (*) The test case basically does
>
> truncate -s 135080058880 /tmp/vm-scalability.img
> mkfs.xfs -q /tmp/vm-scalability.img
> mount -o loop /tmp/vm-scalability.img /tmp/vm-scalability
>
> nr_cpu=120
> for i in $(seq 1 $nr_cpu)
> do
>         sparse_file=/tmp/vm-scalability/sparse-lru-file-readtwice-$i
>         truncate $sparse_file -s 36650387592
>         dd if=$sparse_file of=/dev/null &
>         dd if=$sparse_file of=/dev/null &
> done

So a page cache load of reading 120x36GB files twice concurrently?
There's no increase in system time, so it can't be that the
shrinkers are running wild.
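
(One way to confirm that directly is to count shrinker invocations while
the workload runs. A minimal sketch, assuming the
vmscan:mm_shrink_slab_start tracepoint present in v3.11..v3.13 and debugfs
mounted at /sys/kernel/debug:)

  # Count shrinker calls over a 60s window of the readtwice workload.
  cd /sys/kernel/debug/tracing
  echo 1 > events/vmscan/mm_shrink_slab_start/enable
  echo > trace
  sleep 60                                  # dd loops running in background
  echo 0 > events/vmscan/mm_shrink_slab_start/enable
  grep -c mm_shrink_slab_start trace        # invocations seen in the window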

FWIW, I'm at LCA right now, so it's going to be a week before I can
look at this. If you can find any behavioural difference in the
shrinkers in the meantime (e.g. from perf profiles, on different
filesystems, etc.) I'd appreciate it...
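
(For reference, a sketch of how comparable profiles could be gathered on
each kernel while the dd loops run; the symbol filter below is only a
guess at what is worth looking at, not a known hotspot:)

  # System-wide profile during the workload, then pull out the
  # shrinker/list_lru related symbols for a v3.11 vs v3.12 comparison.
  perf record -a -g -- sleep 60
  perf report --stdio --sort symbol > perf-report-$(uname -r).txt
  grep -iE 'shrink|list_lru|prune' perf-report-$(uname -r).txt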

Cheers,

Dave.
--
Dave Chinner
david@fromorbit.com

