Subject: Re: [RFC] Alloc in vsprintf
On 6/27/22 04:25, David Laight wrote:
> From: Linus Torvalds
>> Sent: 26 June 2022 21:19
> ..
>> That does require teaching the sprint_symbol() functions that they
>> need to take a "length of buffer" and return how much they used, but
>> that would seem to be a sensible thing anyway, and what the code
>> should always have done?
>
> It needs to return the 'length it would have used'.
> While occasionally useful, I'm pretty sure this is actually
> a side effect of the way that libc snprintf() was originally
> implemented (sprintf() had an on-stack FILE).
>
> In any case it might be simplest to pass all these functions
> the write pointer and buffer limit and have them return the
> new write pointer.
> It is likely to generate much better code than passing
> a structure by reference.
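
To spell out the two conventions being discussed - sym_snprint() and
sym_sprint() below are made-up names for illustration, not the actual
sprint_symbol() API:

    #include <stddef.h>

    /*
     * Convention 1, snprintf-style (hypothetical signature): takes the
     * buffer size, returns the length it *would* have used, so the
     * caller detects truncation by checking ret >= size.
     */
    int sym_snprint(char *buf, size_t size, unsigned long addr);

    /*
     * Convention 2, write pointer + limit (hypothetical): returns the
     * new write pointer.  Like the kernel's vsnprintf() internals, the
     * pointer may advance past @end - bytes are only stored while it's
     * below the limit - so "length it would have used" is just ret - buf.
     */
    static char *sym_sprint(char *buf, char *end, const char *name)
    {
            while (*name) {
                    if (buf < end)
                            *buf = *name;
                    buf++;
                    name++;
            }
            return buf;
    }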

I've said it before, and now I'm going to say it again more forcefully:

This obsession with perfect machine code in every situation, regardless
of whether it's shown up in benchmarks or profiles, regardless of what
it does to the readability of the C code that we humans work with, has
to stop.

Has. To. Stop. Full stop.

We have to be thinking about the people who come after us and have to
read and maintain this stuff. Linux should still be in use 50, 100 years
from now, and if it's not, it's because we _fucked up_, and in software
the way you fuck up is by writing code no one can understand - by
writing code that people become afraid to touch without breaking it.

This happens, routinely, and it's _painful_ when it does.

A big learning experience for me when I was a much younger engineer,
freshly starting at Google, was working next to a bunch of guys who were
all chasing and fixing bugs in ext4 - and they weren't exactly enjoying
it. bcache uncovered one or two of them too, and I got to debug that and
then had to argue that it wasn't a bug in bcache (we were calling
bio->bi_end_io in process context, which uncovered a locking bug in
ext4). The codebase had become a mess that they were too scared to
refactor, in part because there were too many options that were
impossible to test - my big lesson from that is that the code you're
scared to refactor, that's the code that needs it the most.

And I could name some very senior kernel people who write code that's
too brittle in the name of chasing performance - in fact I will name
one, because I know he won't take it personally: the writeback
throttling code that Jens wrote was buggy for _ages_; at least one of
my users was regularly tripping over it, and I couldn't make out what the
hell that code was trying to do - not for lack of trying.

Other code nightmares:
- The old O_DIRECT code, which was a huge performance sink that no one
could touch without breaking something (I spent months on a beautiful
refactoring that cut it in half by LOC and improved performance
drastically, but I couldn't get it to completely pass xfstests. That
sucked).
- The old generic_file_buffered_read(), which was a 250 line monster
filled with gotos - all in the name of performance, mind you - that
people barely touched, and when people absolutely had to, they'd do so in
the most minimal way possible, which ended up just adding to the mess
(e.g. the IOCB_NOWAIT changes) - up until I finally cut it apart, and
then, curiously, right after that a ton more patches landed. It's almost like
cleaning stuff up and prioritizing readability makes it easier for
people to work on.
- merge_bvec_fn was also quite the tale - also done in the name of
performance, noticing a theme here?

I like to write fast code too. Of course I do, I'm a kernel engineer, I
wouldn't be a real one if I didn't.

But that means writing code that is _optimizable_, which means writing
code that's easy to go back and change when profiling
discovers something. Which means keeping things as simple as is
reasonably possible, and prioritizing good data types and abstractions
and structure.

When I'm first writing code and thinking about performance, here's what
I think about:
- algorithmic complexity
- good data structures (vectors instead of lists, where it matters -
it often doesn't)
- memory layout: keep pointer indirection at an absolute minimum (see
the sketch below)
- locking

And honestly, not much else. Because on modern machines, with the kind
of code we feed our CPUs in the kernel, memory layout and
locking are what matter and not much else. Not shaving every cycle.
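
To illustrate the memory layout point - a toy example of mine, not code
from any particular kernel subsystem: summing a flat array is sequential,
prefetchable loads, while a linked list of separately-allocated nodes
makes every step wait on a dependent pointer load.

    #include <stddef.h>

    struct item { int key, val; };

    /* Flat array: one allocation, sequential cache lines. */
    static long sum_vec(const struct item *v, size_t n)
    {
            long sum = 0;

            for (size_t i = 0; i < n; i++)
                    sum += v[i].val;        /* independent, prefetchable loads */
            return sum;
    }

    struct node { struct node *next; int val; };

    /* Linked list: each step can't start until the previous node's
     * next pointer has come back from memory. */
    static long sum_list(const struct node *l)
    {
            long sum = 0;

            for (; l; l = l->next)
                    sum += l->val;          /* serialized, dependent loads */
            return sum;
    }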

I already demonstrated this to Rasmus, with actual numbers, in the
printbuf discussion - yes, the compiler constantly reloading is a
shame and it shows up in the text size, and perhaps we'll want to
revisit the -fno-strict-aliasing thing someday. (I'm fully in agreement
with Linus on why he hates strict aliasing: it was something the
compiler people sprung on everyone else without discussion, a clear
escape hatch, or _tooling to deal with existing codebases_ - but the
tooling has improved since then, so it might not be complete insanity
anymore.)
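
To make the reloading concrete - a toy sketch, not code from the printbuf
series: the compiler can't prove that the store through the char buffer
doesn't alias the struct's own fields, so pos and size get re-read from
memory on every iteration unless you cache them in locals.

    /* Roughly the shape of the proposed struct printbuf (illustrative). */
    struct printbuf {
            char            *buf;
            unsigned        size;
            unsigned        pos;
    };

    /* The store through out->buf may alias out->pos and out->size as
     * far as the compiler can tell, so both are reloaded every
     * iteration: */
    static void prt_fill(struct printbuf *out, char c, unsigned n)
    {
            while (n--) {
                    if (out->pos < out->size)
                            out->buf[out->pos] = c;
                    out->pos++;
            }
    }

    /* Caching the fields in locals keeps them in registers - this is
     * what shows up in text size, not in the microbenchmarks: */
    static void prt_fill_cached(struct printbuf *out, char c, unsigned n)
    {
            unsigned pos = out->pos, size = out->size;
            char *buf = out->buf;

            while (n--) {
                    if (pos < size)
                            buf[pos] = c;
                    pos++;
            }
            out->pos = pos;
    }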

...but if you look at the actual microbenchmarks I showed Rasmus? It
turns out to have pretty much no effect on performance - it's in the
noise. Waiting on loads from memory is what matters to us, and not much
else.
