Date: 2016-05-07
From: Waiman Long <Waiman.Long@hpe.com>
Subject: Re: [PATCH v2] locking/rwsem: Add reader-owned state to the owner field
On 05/07/2016 12:56 AM, Ingo Molnar wrote:
> * Waiman Long <Waiman.Long@hpe.com> wrote:
>
>> On a 4-socket Haswell machine running a 4.6-rc1 based kernel, fio
>> multithreaded randrw and randwrite tests were run on the same file on
>> an XFS partition on top of an NVDIMM. The aggregated bandwidths
>> before and after the patch were as follows:
>>
>> Test       BW before patch   BW after patch   % change
>> ----       ---------------   --------------   --------
>> randrw     988 MB/s          1192 MB/s        +21%
>> randwrite  1513 MB/s         1623 MB/s        +7.3%
> What testcase/suite is this? I'd like to run this on other machines as well.
>
> Thanks,
>
> Ingo

I just used fio on an NVDIMM-based XFS filesystem; it is essentially
like a ramfs filesystem in terms of performance. Attached are the two
config files that I used, one for the randrw test and one for the
randwrite test.
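
Roughly, the setup looks like this (the pmem device name and job file
names below are placeholders, not part of my actual setup; adjust them
for your machine):

    # Make an XFS filesystem on the NVDIMM and mount it at /mnt, so
    # that the job files' targets /mnt/fio and /mnt/fio2 land on it.
    mkfs.xfs /dev/pmem0
    mount -o dax /dev/pmem0 /mnt   # DAX mount; a plain mount works too
    fio randrw.job                 # first attached job file (rw=randrw)
    fio randwrite.job              # second attached job file (rw=randwrite)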

Cheers,
Longman
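
# --- Attached job file 1: randrw test ---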
[global]
direct=1
ioengine=libaio
norandommap
randrepeat=0
bs=4K
size=1G
iodepth=1 # pmem has no queue depth
runtime=30
time_based=1
group_reporting
thread=1
gtod_reduce=1 # reduce=1 except for latency test
gtod_cpu=1


## cross-CPU combinations
numjobs=18
cpus_allowed=0-39

cpus_allowed_policy=split

[drive_0]
filename=/mnt/fio
cpus_allowed=0-17
rw=randrw

[drive_1]
filename=/mnt/fio2
cpus_allowed=18-35
rw=randrw
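
# --- Attached job file 2: randwrite test ---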

[global]
direct=1
ioengine=libaio
norandommap
randrepeat=0
bs=4K
size=1G
iodepth=1 # pmem has no queue depth
runtime=30
time_based=1
group_reporting
thread=1
gtod_reduce=1 # reduce=1 except for latency test
gtod_cpu=1


## cross-CPU combinations
numjobs=18
cpus_allowed=0-39

cpus_allowed_policy=split

[drive_0]
filename=/mnt/fio
cpus_allowed=0-17
rw=randwrite

[drive_1]
filename=/mnt/fio2
cpus_allowed=18-35
rw=randwrite
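
# Note: with cpus_allowed_policy=split, fio gives each of the 18
# threads in a job its own CPU from that job's cpus_allowed set, so
# the two jobs here never share a CPU.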
