Subject: Re: [PATCH v2] minmax: clamp more efficiently by avoiding extra comparison
Hi Andrew,

On Fri, Sep 23, 2022 at 03:54:12PM -0700, Andrew Morton wrote:
> On Fri, 23 Sep 2022 17:40:01 +0200 "Jason A. Donenfeld" <Jason@zx2c4.com> wrote:
>
> > Currently the clamp algorithm does:
> >
> > 	if (val > hi)
> > 		val = hi;
> > 	if (val < lo)
> > 		val = lo;
> >
> > But since hi > lo by definition, this can be made more efficient with:
> >
> > 	if (val > hi)
> > 		val = hi;
> > 	else if (val < lo)
> > 		val = lo;
> >
> > So fix up the clamp and clamp_t functions to do this, adding the same
> > argument checking as for min and min_t.
> >
>
> The patch adds 140 bytes of text to mm/memblock.o, for example.
> Presumably from the additional branch. Larger text means larger cache
> footprint means slower.
>
> So where's the proof that this change gives us a more efficient kernel?

For x86, code generation ought to be the same, because the compiler can
still generate cmovs for both forms:

unsigned int clamp1(unsigned int val, unsigned int lo, unsigned int hi)
{
	if (val >= hi)
		val = hi;
	if (val <= lo)
		val = lo;
	return val;
}

unsigned int clamp2(unsigned int val, unsigned int lo, unsigned int hi)
{
	if (val >= hi)
		val = hi;
	else if (val <= lo)
		val = lo;
	return val;
}

On x86_64 this is:

clamp1:
	cmp	edi, edx
	mov	eax, esi
	cmova	edi, edx
	cmp	edi, esi
	cmovnb	eax, edi
	ret
clamp2:
	cmp	edi, esi
	mov	eax, edx
	cmovnb	esi, edi
	cmp	edi, edx
	cmovb	eax, esi
	ret

The latter one cleverly compares hi and lo first. I observe the same when
hi and lo are constants instead. So no change.
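
(To make the constants case concrete: by that I mean a variant along
these lines, where the 16 and 255 bounds are arbitrary values picked
just for the experiment.)

unsigned int clamp3(unsigned int val)
{
	/* Arbitrary illustrative bounds; any constant lo < hi works. */
	if (val >= 255)
		val = 255;
	else if (val <= 16)
		val = 16;
	return val;
}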

On ARM64 the same thing happens:

clamp1:
	cmp	w0, w2
	csel	w8, w0, w2, lo
	cmp	w8, w1
	csel	w0, w8, w1, hi
	ret
clamp2:
	cmp	w0, w1
	csel	w8, w0, w1, hi
	cmp	w0, w2
	csel	w0, w8, w2, lo
	ret

On MIPS64, on the other hand, we save some arithmetic and the number of
branches remains the same:

clamp1:
	sltu	$3,$6,$4
	bne	$3,$0,.L2
	move	$2,$6

	move	$2,$4
.L2:
	sltu	$3,$2,$5
	bnel	$3,$0,.L7
	move	$2,$5

.L7:
	jr	$31
	nop

clamp2:
	sltu	$3,$4,$6
	beq	$3,$0,.L13
	move	$2,$6

	sltu	$3,$4,$5
	bne	$3,$0,.L12
	move	$2,$4

.L13:
	jr	$31
	nop

.L12:
	jr	$31
	move	$2,$5

So it seems like, at least in isolation, this is only a win?

It's possible that, when inlined into a more complex function, the
various cases are optimized together and a branch is introduced if the
compiler thinks it's better; but in isolation I'm not seeing that happen.

Or maybe older compilers do something worse? On x86_64, clang doesn't do
the smart thing until clang 13, and gcc doesn't until gcc 11. So maybe
your text size blew up because your compiler is old?

Either way, I agree that a text size increase is not a good idea, and we
should avoid it if we can.

Worth noting, by the way, is that the input validation check already
caught a bug when the 0day test bot choked on it:

https://lore.kernel.org/linux-hwmon/20220924101151.4168414-1-Jason@zx2c4.com/
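
For readers following along, the general shape of that kind of input
validation is something like the below -- not the exact macro from the
patch, just an illustration using the existing BUILD_BUG_ON_MSG()
helper from <linux/build_bug.h>, with a made-up name. It fails the
build when both bounds are compile-time constants and lo > hi:

#define clamp_input_check(lo, hi)					\
	BUILD_BUG_ON_MSG(__builtin_constant_p(lo) &&			\
			 __builtin_constant_p(hi) && (lo) > (hi),	\
			 "clamp() low limit greater than high limit")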

So, options:
1) Keep this patch as-is, because it is useful on modern compilers.
2) Add an ifdef on compiler version, so we generate the best code in
   each case (rough sketch below).
3) Go back to testing twice, but keep the checker macro, because it's
   apparently useful.
4) Do nothing and discard this series.
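
For option 2, I'm imagining something with roughly this shape -- just a
sketch, not the actual minmax.h macros, with a made-up helper name and
the version cutoffs taken from the observations above:

#if (defined(__clang__) && __clang_major__ >= 13) || \
    (!defined(__clang__) && defined(__GNUC__) && __GNUC__ >= 11)
/* Newer compilers keep the single-comparison form branchless. */
static inline unsigned int do_clamp(unsigned int val, unsigned int lo, unsigned int hi)
{
	if (val >= hi)
		val = hi;
	else if (val <= lo)
		val = lo;
	return val;
}
#else
/* Older compilers: fall back to the two independent comparisons. */
static inline unsigned int do_clamp(unsigned int val, unsigned int lo, unsigned int hi)
{
	if (val >= hi)
		val = hi;
	if (val <= lo)
		val = lo;
	return val;
}
#endif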

Any of those are okay with me. Opinions?

Jason
