lkml.org 
[lkml]   [2022]   [May]   [25]   [last100]   RSS Feed
Views: [wrap][no wrap]   [headers]  [forward] 
 
Messages in this thread
/
From
Date
SubjectRe: [PATCH 1/2] locking/lockref: Use try_cmpxchg64 in CMPXCHG_LOOP macro
On Wed, May 25, 2022 at 7:40 AM Uros Bizjak <ubizjak@gmail.com> wrote:
>
> Use try_cmpxchg64 instead of cmpxchg64 in CMPXCHG_LOOP macro.
> x86 CMPXCHG instruction returns success in ZF flag, so this
> change saves a compare after cmpxchg (and related move instruction
> in front of cmpxchg). The main loop of lockref_get improves from:

Ack on this one regardless of the 32-bit x86 question.

HOWEVER.

I'd like other architectures to pipe up too, because I think right now
x86 is the only one that implements that "arch_try_cmpxchg()" family
of operations natively, and I think the generic fallback for when it
is missing might be kind of nasty.

Maybe it ends up generating ok code, but it's also possible that it
just didn't matter when it was only used in one place in the
scheduler.

The lockref_get() case can be quite hot under some loads, it would be
sad if this made other architectures worse.

Anyway, maybe that try_cmpxchg() fallback is fine, and works out well
on architectures that use load-locked / store-conditional as-is.

But just to verify, I'm adding arm/powerpc/s390/mips people to the cc. See

https://lore.kernel.org/all/20220525144013.6481-2-ubizjak@gmail.com/

for the original email and the x86-64 code example.

Linus

\
 
 \ /
  Last update: 2022-05-25 18:48    [W:0.094 / U:2.076 seconds]
©2003-2020 Jasper Spaans|hosted at Digital Ocean and TransIP|Read the blog|Advertise on this site