Subject: Re: [REGRESSION] 5-10% increase in IO latencies with nohz balance patch
On Fri, Dec 03, 2021 at 12:03:27PM +0000, Valentin Schneider wrote:
> On 30/11/21 00:26, Valentin Schneider wrote:
> > On 29/11/21 14:49, Josef Bacik wrote:
> >> On Mon, Nov 29, 2021 at 06:31:17PM +0000, Valentin Schneider wrote:
> >>> On 29/11/21 13:15, Josef Bacik wrote:
> >>> > On Mon, Nov 29, 2021 at 06:03:24PM +0000, Valentin Schneider wrote:
> >>> >> Would you happen to have execution traces by any chance? If not I should be
> >>> >> able to get one out of that fsperf thingie.
> >>> >>
> >>> >
> >>> > I don't, if you want to tell me how I can do it right now. I've disabled
> >>> > everything on this box for now so it's literally just sitting there waiting to
> >>> > have things done to it. Thanks,
> >>> >
> >>>
> >>> I see you have Ftrace enabled in your config, so that ought to do it:
> >>>
> >>> trace-cmd record -e 'sched:*' -e 'cpu_idle' $your_test_cmd
> >>>
> >>
> >> http://toxicpanda.com/performance/trace.dat
> >>
> >> it's about 16 MiB. Enjoy,
> >>
> >
> > Neat, thanks!
> >
> > Runqueue depth seems to be very rarely greater than 1, tasks with ~1ms
> > runtime and lots of sleeping (also bursty kworker activity with activations
> > of tens of µs), and some cores (Internet tells me that Xeon Bronze 3204
> > doesn't have SMT) spend most of their time idling. Not the most apocalyptic
> > task placement vs ILB selection, but the task activation patterns roughly
> > look like what I was thinking of - there might be hope for me yet.
> >
> > I'll continue the headscratching after tomorrow's round of thinking juice.
> >
>
> Could you give the 4 top patches, i.e. those above
> 8c92606ab810 ("sched/cpuacct: Make user/system times in cpuacct.stat more precise")
> a try?
>
> https://git.gitlab.arm.com/linux-arm/linux-vs.git -b mainline/sched/nohz-next-update-regression
>
> I gave that a quick test on the platform that caused me to write the patch
> you bisected and it looks like it didn't break the original fix. If the above
> counter-measures aren't sufficient, I'll have to go poke at your
> reproducers...
>
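
For anyone wanting to poke at this themselves, the capture Valentin suggested
earlier in the thread boils down to something like the following (the workload
command is just a placeholder for whatever you're testing):

  # record scheduler and cpuidle events while the workload runs, then dump
  # the resulting trace.dat as text for a quick look
  trace-cmd record -e 'sched:*' -e 'cpu_idle' ./your-workload-here
  trace-cmd report trace.dat | less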

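And pulling the branch above into an existing kernel checkout is roughly
(remote and local branch names here are arbitrary):

  git remote add linux-vs https://git.gitlab.arm.com/linux-arm/linux-vs.git
  git fetch linux-vs mainline/sched/nohz-next-update-regression
  git checkout -b nohz-regression-test FETCH_HEAD
  # the 4 patches under test are the ones sitting on top of
  # 8c92606ab810 ("sched/cpuacct: Make user/system times in cpuacct.stat more precise")
  git log --oneline 8c92606ab810..
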
It's better, but there's still around a 6% regression. Comparing these patches
to the average of the last few days' worth of runs, you're 5% better than
before, so it's progress, but the regression isn't completely erased.

metric                  baseline       current         stdev     diff
======================================================================
write_io_kbytes           125000        125000             0    0.00%
read_clat_ns_p99               0             0             0    0.00%
write_bw_bytes          1.73e+08      1.74e+08    5370366.50    0.69%
read_iops                      0             0             0    0.00%
write_clat_ns_p50       18265.60      18150.40        345.21   -0.63%
read_io_kbytes                 0             0             0    0.00%
read_io_bytes                  0             0             0    0.00%
write_clat_ns_p99       84684.80      90316.80       6607.94    6.65%
read_bw_bytes                  0             0             0    0.00%
elapsed                        1             1             0    0.00%
write_lat_ns_min               0             0             0    0.00%
sys_cpu                    91.22         91.00          1.40   -0.24%
write_lat_ns_max               0             0             0    0.00%
read_lat_ns_min                0             0             0    0.00%
write_iops              42308.54      42601.71       1311.12    0.69%
read_lat_ns_max                0             0             0    0.00%
read_clat_ns_p50               0             0             0    0.00%
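
(For clarity, the diff column above is just the relative change from baseline,
assuming fsperf computes it that way; e.g. for write_clat_ns_p99:

  $ awk 'BEGIN { printf "%.2f%%\n", (90316.80 - 84684.80) / 84684.80 * 100 }'
  6.65%

i.e. the ~6% mentioned above.)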

Thanks,

Josef
