Subject: Re: [REGRESSION] 5-10% increase in IO latencies with nohz balance patch
On Fri, Feb 18, 2022 at 12:00:41PM +0100, Thorsten Leemhuis wrote:
> Hi, this is your Linux kernel regression tracker speaking. Top-posting
> for once, to make this easily accessible to everyone.
>
> FWIW, this is a gentle reminder that I'm still tracking this regression.
> Afaics nothing happened in the last few weeks.
>
> If the discussion continued somewhere else, please let me know; you can
> do this directly or simply tell my regression tracking bot yourself by
> sending a reply to this mail with a paragraph containing a regzbot
> command like "#regzbot monitor
> https://lore.kernel.org/r/some_msgi@example.com/"
>
> If you think there are valid reasons to drop this regression from the
> tracking, let me know; you can do this directly or simply tell my
> regression tracking bot yourself by sending a reply to this mail with a
> paragraph containing a regzbot command like "#regzbot invalid: Some
> explanation" (without the quotes).
>
> Anyway: I'm putting it on the back burner now to reduce the noise, as this
> afaics is less important than other regressions:
>
> #regzbot backburner: Culprit is hard to track down
> #regzbot poke
>
> You'll likely get two more mails like this after the next two merge
> windows, then I'll drop it if I don't hear anything back.
>
> Ciao, Thorsten (wearing his 'the Linux kernel's regression tracker' hat)
>
> P.S.: As the Linux kernel's regression tracker I'm getting a lot of
> reports on my desk. I can only look briefly into most of them and lack
> knowledge about most of the areas they concern. I thus unfortunately
> will sometimes get things wrong or miss something important. I hope
> that's not the case here; if you think it is, don't hesitate to tell me
> in a public reply, it's in everyone's interest to set the public record
> straight.
>
>

Roman and I sat down to mess with this some more and had some weird
observations.

On our Facebook internal boxes we couldn't reproduce it. If we disable all the
normal FB-specific stuff so the box is "quiet", 5.16 performs better. However
these are all single-socket machines with stupidly high core counts; my local
machine is a 2-socket, 6-core box.

On my box testing in isolation was actually pretty noisy as well. In the end
I rigged up fsperf to run 1000 runs and graph each kernel on top of each
other. What came out was really strange.
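
For reference, the overlay comparison is roughly the sketch below (not fsperf
itself; the filenames and file format are made up, assuming each kernel's
per-run p99 got dumped one value per line):

    import numpy as np
    import matplotlib.pyplot as plt

    # Made-up input: one write completion p99 (ns) per line, one line per
    # run, dumped separately for each kernel under test.
    for label, path in [("5.15", "p99-5.15.txt"), ("5.16", "p99-5.16.txt")]:
        runs = np.loadtxt(path)
        plt.plot(np.arange(len(runs)), runs, label=label, alpha=0.7)

    plt.xlabel("run number")
    plt.ylabel("write completion p99 (ns)")
    plt.legend()
    plt.savefig("p99-overlay.png")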

1. The "good" kernel had a period for the first ~100 runs that were very low,
the p50 was ~9000ns, but after those first 100 runs it jumped up and was right
ontop of 5.16. This explains why it shows up on my overnight tests, the box
literally reboots and runs tests. So there's a "warmup" period for the
scheduler, once it's been hammered on enough it matches 5.16 exactly, otherwise
its faster at the beginning.
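
The cutover is easy to spot numerically too; a rough sketch (same made-up
one-value-per-run format as above, this time for the per-run p50 on the
"good" kernel) comparing the first ~100 runs against the rest:

    import numpy as np

    p50 = np.loadtxt("p50-5.15.txt")  # made up: one p50 (ns) per run
    warmup = 100                      # eyeballed from the graph

    print("median p50, first %d runs: %.0f ns"
          % (warmup, np.median(p50[:warmup])))
    print("median p50, rest: %.0f ns" % np.median(p50[warmup:]))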

2. The regression essentially disappears looking at the graphs over 1000
runs. The results are so jittery that this was the only way we could honestly
look at them and see anything. The only place the "regression" shows up is in
the write completion latency p99: 5.15 ranges between 75000-85000 ns, whereas
5.16 ranges between 80000 and 100000 ns. However, again, this is only on my
machine, and the p50 latencies and the actual bw_bytes are the same.
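
Those two p99 bands overlap, which is a big part of why we aren't willing to
call it; a toy check using just the numbers above:

    # p99 ranges (ns) seen over 1000 runs, straight from the graphs.
    good = (75000, 85000)    # 5.15
    bad = (80000, 100000)    # 5.16

    lo, hi = max(good[0], bad[0]), min(good[1], bad[1])
    if lo < hi:
        print("p99 bands overlap on [%d, %d] ns" % (lo, hi))  # [80000, 85000]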

Given that this test is relatively bursty anyway, that we can't reproduce the
regression internally, and that 5.16 actually consistently performs better
internally, we've decided to drop this; it's simply too noisy to get a handle
on and actually call a problem.

#regzbot invalid: test too noisy and the results aren't clear-cut.

Thanks,

Josef
