Date: Fri, 20 Mar 2015 13:27:32 +0800
Subject: Re: [PATCH v2 4/4] block: loop: support to submit I/O via kernel aio based
From: Ming Lei <>
On Fri, Mar 20, 2015 at 12:37 AM, Maxim Patlasov <mpatlasov@parallels.com> wrote:
> On 03/18/2015 07:57 PM, Ming Lei wrote:
>>
>> On Thu, Mar 19, 2015 at 2:28 AM, Maxim Patlasov <mpatlasov@parallels.com> wrote:
>>>
>>> On 01/13/2015 07:44 AM, Ming Lei wrote:
>>>>
>>>> Part of the patch is based on Dave's previous post.
>>>>
>>>> This patch submits I/O to the fs via kernel aio, and we
>>>> can obtain the following benefits:
>>>>
>>>> - double caching in both the loop file system and the backing
>>>>   file is avoided
>>>> - context switches are decreased a lot, so CPU utilization
>>>>   is decreased
>>>> - cached memory is decreased a lot
>>>>
>>>> One main side effect is that throughput is decreased when
>>>> accessing the raw loop block device (not through a filesystem)
>>>> with kernel aio.
>>>>
>>>> This patch has passed xfstests (./check -g auto), with both test
>>>> and scratch devices being loop block devices and ext4 as the
>>>> file system.
>>>>
>>>> Results of two fio tests follow:
>>>>
>>>> 1. fio test inside an ext4 file system over a loop block device
>>>> 1) How to run
>>>> - linux kernel base: 3.19.0-rc3-next-20150108 (loop-mq merged)
>>>> - loop over SSD image 1 in ext4
>>>> - linux psync, 16 jobs, size 200M, ext4 over loop block
>>>> - test result: IOPS from fio output
>>>>
>>>> 2) Throughput result:
>>>> -------------------------------------------------------------
>>>> test cases      |randread |read  |randwrite |write |
>>>> -------------------------------------------------------------
>>>> base            |16799    |59508 |31059     |58829
>>>> -------------------------------------------------------------
>>>> base+kernel aio |15480    |64453 |30187     |57222
>>>> -------------------------------------------------------------
>>>
>>> Ming, it's important to understand the overhead of the aio_kernel_()
>>> implementation. So could you please add test results for a raw SSD
>>> device to the table above next time (in v3 of your patches)?
>>
>> What aio_kernel_() does is just call ->read_iter()/->write_iter(),
>> so it should not introduce extra overhead.
>>
>> From a performance view, the effect comes only from switching to
>> O_DIRECT. With O_DIRECT, double caching can be avoided, and both
>> page cache usage and CPU utilization are decreased.
>
> The way you reused the loop_queue_rq() --> queue_work() functionality
> (added earlier, by commit b5dd2f604) may affect the performance of
> O_DIRECT operations. It can be easily demonstrated on a ram-drive, but
> measurements on real storage h/w would be more convincing.
The test data in the commit log is from real storage h/w: the drive is attached via a SATA 3.0Gbps link.
blk-mq may affect performance a bit on a ram-drive too, which can be demonstrated with a null_blk test (blk_mq vs. bio), but that doesn't look like a big deal since a ram-drive isn't a real use case.
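
For reference, here is a minimal synchronous sketch of the submission path discussed above, with made-up names (this is not the series' actual aio_kernel_*() code, which completes the kiocb asynchronously): the point is only that the request's pages are wrapped in an iov_iter and handed to ->read_iter()/->write_iter() on the O_DIRECT-opened backing file.

/*
 * Editor's sketch, not code from the patchset: lo_rw_iter_sketch()
 * is a made-up name illustrating, synchronously, what the kernel-aio
 * path boils down to.
 */
#include <linux/fs.h>
#include <linux/uio.h>
#include <linux/bio.h>

static ssize_t lo_rw_iter_sketch(struct file *file, struct bio_vec *bvec,
				 unsigned long nr_segs, size_t count,
				 loff_t pos, int rw)
{
	struct iov_iter iter;
	struct kiocb kiocb;

	/* Wrap the request's pages (bio_vecs) in an iov_iter. */
	iov_iter_bvec(&iter, ITER_BVEC | rw, bvec, nr_segs, count);

	/* Synchronous kiocb for simplicity; kernel aio would set a
	 * completion callback instead of waiting inline. */
	init_sync_kiocb(&kiocb, file);
	kiocb.ki_pos = pos;

	/* With the backing file opened O_DIRECT, this bypasses the
	 * backing file's page cache, avoiding the double cache. */
	return (rw == WRITE) ? file->f_op->write_iter(&kiocb, &iter)
			     : file->f_op->read_iter(&kiocb, &iter);
}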
> Btw, when you wrote "linux psync, 16 jobs, size 200M, ext4 over loop block"
> -- does it mean that there were 16 threads in userspace submitting I/O
> concurrently? If yes, a throughput comparison for a single-job test would
> also be useful to look at.
Yes, it is the 'numjobs' option in the fio config file; with 'sync' I/O, throughput can only be pushed higher by increasing the number of I/O threads.

No problem, a throughput comparison for a single job will be provided in v3.
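
For clarity, the fio job described above would look roughly like the sketch below; the actual job file was not posted, so the mount point is an assumption, and the read, randwrite and write cases just change the rw= line. numjobs=1 gives the single-job run promised above.

; Reconstructed fio job based on the test description in this thread;
; the actual job file was not posted.
[global]
ioengine=psync
size=200M
; assumed mount point of the ext4-over-loop filesystem
directory=/mnt/loop-ext4
; 16 concurrent jobs submitting I/O; numjobs=1 for the single-job run
numjobs=16

[randread]
rw=randread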
Thanks,
Ming Lei