Subject: Re: [RFC for-6.2/block V2] block: Change the granularity of io ticks from ms to ns

On 12/7/22 14:32, Gulam Mohamed wrote:
> As per the review comment from Jens Axboe, I am re-sending this patch
> against "for-6.2/block".
>

Why is this marked as RFC? Are you waiting for something more to be
resolved before this can be merged?

>
> Use ktime to change the granularity of I/O accounting in the block layer
> from milliseconds to nanoseconds, to get proper latency values for
> devices whose latency is in the microsecond range. After changing the
> granularity to nanoseconds, the iostat command, which was showing
> incorrect values for %util, now shows correct values.
>
> We have not yet worked on dropping the logic for STAT_PRECISE_TIMESTAMPS.
> We will do that if this patch is acceptable.
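
For anyone reading along without the diff (it is trimmed from the quote
here), the shape of the change as I read it from the changelog is roughly
the sketch below; the helper names (ktime_get_ns(), update_io_ticks(),
part_stat_lock()) come from my reading of the existing accounting code in
block/blk-core.c, not from the patch itself. The point is that the stamp
feeding io_ticks is taken in nanoseconds instead of jiffies, so a
sub-millisecond request contributes its real busy time:

    u64 now = ktime_get_ns();   /* was: unsigned long now = jiffies; */

    part_stat_lock();
    update_io_ticks(req->part, now, true);  /* io_ticks accumulates ns */
    part_stat_unlock();
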
>
> The iostat command was run after starting fio with the following command
> on an NVMe disk. For the same fio job, iostat was showing ~100% %util for
> disks whose latencies are in the microsecond range. With the kernel
> changes (granularity in nanoseconds), %util shows the correct values.
> The details of the test and its output follow:
>
> fio command
> -----------
> [global]
> bs=128K
> iodepth=1
> direct=1
> ioengine=libaio
> group_reporting
> time_based
> runtime=90
> thinktime=1ms
> numjobs=1
> name=raw-write
> rw=randrw
> ignore_error=EIO:EIO
> [job1]
> filename=/dev/nvme0n1
>
> Correct values after kernel changes:
> ====================================
> iostat output
> -------------
> iostat -d /dev/nvme0n1 -x 1
>
> Device r_await w_await aqu-sz rareq-sz wareq-sz svctm %util
> nvme0n1 0.08 0.05 0.06 128.00 128.00 0.07 6.50
>
> Device r_await w_await aqu-sz rareq-sz wareq-sz svctm %util
> nvme0n1 0.08 0.06 0.06 128.00 128.00 0.07 6.30
>
> Device r_await w_await aqu-sz rareq-sz wareq-sz svctm %util
> nvme0n1 0.06 0.05 0.06 128.00 128.00 0.06 5.70
>
> From fio
> --------
> Read Latency: clat (usec): min=32, max=2335, avg=79.54, stdev=29.95
> Write Latency: clat (usec): min=38, max=130, avg=57.76, stdev= 3.25
>
> Values before kernel changes
> ============================
> iostat output
> -------------
>
> iostat -d /dev/nvme0n1 -x 1
>
> Device r_await w_await aqu-sz rareq-sz wareq-sz svctm %util
> nvme0n1 0.08 0.06 0.06 128.00 128.00 1.07 97.70
>
> Device r_await w_await aqu-sz rareq-sz wareq-sz svctm %util
> nvme0n1 0.08 0.06 0.06 128.00 128.00 1.08 98.80
>
> Device r_await w_await aqu-sz rareq-sz wareq-sz svctm %util
> nvme0n1 0.08 0.05 0.06 128.00 128.00 1.06 97.20
>
> From fio
> --------
> Read Latency: clat (usec): min=33, max=468, avg=79.56, stdev=28.04
> Write Latency: clat (usec): min=9, max=139, avg=57.10, stdev= 3.79
>
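
The gap between the two %util columns above is easy to reproduce on
paper. A throwaway userspace check (the ~80 us latency and 1 ms
thinktime are lifted from the fio output quoted above, and the tick
model is deliberately simplified, so treat the output as ballpark
figures only):

    #include <stdio.h>

    int main(void)
    {
            double lat_us = 80.0;       /* avg completion latency (us) */
            double think_us = 1000.0;   /* fio thinktime between I/Os  */
            double cycle_us = lat_us + think_us;

            /* ns granularity: io_ticks grows by real busy time per cycle. */
            double util_ns = lat_us / cycle_us * 100.0;

            /* ms granularity: with roughly one I/O per millisecond, nearly
             * every 1 ms tick sees activity, so ~1000 us is charged per
             * cycle. */
            double util_ms = 1000.0 / cycle_us * 100.0;

            printf("ns granularity: %%util ~ %.1f%%\n", util_ns); /* ~7.4  */
            printf("ms granularity: %%util ~ %.1f%%\n", util_ms); /* ~92.6 */
            return 0;
    }

That lines up with the ~6% vs ~98% figures the iostat runs show.
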
> Changes in V2:
> 1. Changed try_cmpxchg() to try_cmpxchg64() in update_io_ticks(), as the
> values being compared are u64, which was causing a build error on i386
> and microblaze.
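
On the V2 note: that matches my expectation; once the stamp is compared as
a u64, the compare-and-swap has to use the explicit 64-bit helper, because
plain try_cmpxchg() does not build for 8-byte values on 32-bit targets such
as i386 and microblaze. Roughly (again only a sketch based on the existing
update_io_ticks() in block/blk-core.c, not the actual hunk, and I am
guessing that bd_stamp becomes a u64):

    void update_io_ticks(struct block_device *part, u64 now, bool end)
    {
            u64 stamp;
    again:
            stamp = READ_ONCE(part->bd_stamp);
            if (now > stamp &&
                likely(try_cmpxchg64(&part->bd_stamp, &stamp, now)))
                    __part_stat_add(part, io_ticks, end ? now - stamp : 1);
            if (part->bd_partno) {
                    /* also account the whole-disk device */
                    part = bdev_whole(part);
                    goto again;
            }
    }
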
>
> Signed-off-by: Gulam Mohamed <gulam.mohamed@oracle.com>
> ---

I believe this has no effect on overall performance; if so, I'd
document that.

Based on the quantitative data in the commit log this looks good to
me; I take it you have audited all the drivers and the relevant places
in the block layer.

Looks good.

Reviewed-by: Chaitanya Kulkarni <kch@nvidia.com>

-ck