Date:    Sat, 7 May 2016 23:04:12 -0400
From:    Waiman Long <>
Subject: Re: [PATCH v2] locking/rwsem: Add reader-owned state to the owner field
On 05/07/2016 12:56 AM, Ingo Molnar wrote:
> * Waiman Long <Waiman.Long@hpe.com> wrote:
>
>> On a 4-socket Haswell machine running on a 4.6-rc1 based kernel, the
>> fio test with multithreaded randrw and randwrite tests on the same
>> file on a XFS partition on top of a NVDIMM were run, the aggregated
>> bandwidths before and after the patch were as follows:
>>
>>   Test        BW before patch    BW after patch    % change
>>   ----        ---------------    --------------    --------
>>   randrw          988 MB/s          1192 MB/s        +21%
>>   randwrite      1513 MB/s          1623 MB/s        +7.3%
>
> What testcase/suite is this? I'd like to run this on other machines as well.
>
> Thanks,
>
> 	Ingo
I just used fio on an NVDIMM-based XFS filesystem. It is essentially like a ramfs filesystem in terms of performance. Attached are the config files that I used.
Cheers,
Longman

[global]
direct=1
ioengine=libaio
norandommap
randrepeat=0
bs=4K
size=1G
iodepth=1		# pmem has no queue depth
runtime=30
time_based=1
group_reporting
thread=1
gtod_reduce=1		# reduce=1 except for latency test
gtod_cpu=1

## cross-CPU combinations
numjobs=18
cpus_allowed=0-39
cpus_allowed_policy=split
[drive_0]
filename=/mnt/fio
cpus_allowed=0-17
rw=randrw
[drive_1]
filename=/mnt/fio2
cpus_allowed=18-35
rw=randrw

[global]
direct=1
ioengine=libaio
norandommap
randrepeat=0
bs=4K
size=1G
iodepth=1		# pmem has no queue depth
runtime=30
time_based=1
group_reporting
thread=1
gtod_reduce=1		# reduce=1 except for latency test
gtod_cpu=1

## cross-CPU combinations
numjobs=18
cpus_allowed=0-39
cpus_allowed_policy=split
[drive_0]
filename=/mnt/fio
cpus_allowed=0-17
rw=randwrite
[drive_1]
filename=/mnt/fio2
cpus_allowed=18-35
rw=randwrite
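
For reference, a minimal sketch of how these runs would be invoked (the job-file names randrw.fio and randwrite.fio are my assumption, not from the original mail; the /mnt paths come from the filename= lines above):

  # Hypothetical job-file names; group_reporting in [global] means each
  # run prints one aggregated bandwidth line per group of jobs.
  fio randrw.fio
  fio randwrite.fio

Running each job file on kernels with and without the patch and comparing the aggregate bandwidth lines yields the numbers in the table quoted above.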