From: Nicholas Piggin
Subject: Re: [PATCH 03/13] locking/qspinlock: split common mcs queueing code into its own function
Excerpts from Peter Zijlstra's message of July 6, 2022 3:01 am:
> On Tue, Jul 05, 2022 at 12:38:10AM +1000, Nicholas Piggin wrote:
>> +void queued_spin_lock_slowpath(struct qspinlock *lock, u32 val)
>> +{
>> +	if (pv_enabled()) {
>> +		queued_spin_lock_mcs_queue(lock);
>> +		return;
>> +	}
>> +
>> +	if (virt_spin_lock(lock))
>> +		return;
>> +
>
> This reminds me; at the time I meant to make queued_spin_lock_slowpath()
> a static_call() and redirect the function appropriately at boot time.
> But that was before static_call() was merged and I never seem to have
> gotten around to doing that afterwards...

Wouldn't hurt. OTOH, hyper-optimising the contended path is unlikely
to be measurable. Optimising coherency in the contended path
absolutely matters, but straight-line code much less so. That said,
don't let me stop you :)
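
Something like this, perhaps? A rough, untested sketch; it assumes the
existing native_queued_spin_lock_slowpath() and
__pv_queued_spin_lock_slowpath() entry points, plus a hypothetical
pv_qspinlock_init() hook run once during early boot:

#include <linux/static_call.h>

/* Default to the native slowpath; paravirt setup repoints it at boot. */
DEFINE_STATIC_CALL(qspinlock_slowpath, native_queued_spin_lock_slowpath);

void queued_spin_lock_slowpath(struct qspinlock *lock, u32 val)
{
	/* One patched direct call; no pv_enabled() test in the hot path. */
	static_call(qspinlock_slowpath)(lock, val);
}

/* Hypothetical init hook, called once during early boot. */
void __init pv_qspinlock_init(void)
{
	if (pv_enabled())
		static_call_update(qspinlock_slowpath,
				   __pv_queued_spin_lock_slowpath);
}

The virt_spin_lock() case could presumably be folded in the same way,
by pointing the call at a virt variant selected at boot.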

Thanks,
Nick
