Date: Wed, 11 May 2022
Subject: Re: [PATCH 2/2] locking/qrwlock: Reduce cacheline contention for rwlocks used in interrupt context
From: Waiman Long
On 5/11/22 09:34, Peter Zijlstra wrote:
> On Wed, May 11, 2022 at 08:44:55AM -0400, Waiman Long wrote:
>
>>> I'm confused; prior to this change:
>>>
>>>    CPU0                      CPU1
>>>
>>>    write_lock_irq(&l)
>>>                              read_lock(&l)
>>>                              <IRQ>
>>>                              read_lock(&l)
>>>                              ...
>>>
>>> was not a deadlock, but now it would be, AFAICT.
>> Oh you are right. I missed that scenario in my analysis. My bad.
> No worries; I suppose we can also still do something like:
>
> void queued_read_lock_slowpath(struct qrwlock *lock, int cnts)
> {
>         /*
>          * the big comment
>          */
>         if (unlikely(in_interrupt())) {
>                 /*
>                  * If not write-locked, insta-grant the reader.
>                  */
>                 if (!(cnts & _QW_LOCKED))
>                         return;
>
>                 /*
>                  * Otherwise, wait for the writer to go away.
>                  */
>                 atomic_cond_read_acquire(&lock->cnts, !(VAL & _QW_LOCKED));
>                 return;
>         }
>
>         ...
> }
>
> Which saves one load in some cases... not sure it's worth it though.

Yes, that is a micro-optimization we could make, but the gain, if any,
should be minor.
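
For reference, the current mainline entry looks roughly like the sketch
below (based on kernel/locking/qrwlock.c of the v5.17 era; the queue-handling
tail is elided). The in_interrupt() bypass is exactly what keeps the scenario
above from deadlocking today, but it always goes through
atomic_cond_read_acquire(), i.e. at least one load of lock->cnts, even when
the lock is not write-locked. Passing the already-known cnts in, as in your
sketch, lets that case return without touching the cacheline again:

void queued_read_lock_slowpath(struct qrwlock *lock)
{
        /*
         * Readers come here when they cannot get the lock without waiting.
         */
        if (unlikely(in_interrupt())) {
                /*
                 * Readers in interrupt context will get the lock immediately
                 * if the writer is just waiting (not holding the lock yet),
                 * so spin with ACQUIRE semantics until the lock is available
                 * without waiting in the queue. Note this still performs at
                 * least one load of lock->cnts even when no writer holds
                 * the lock.
                 */
                atomic_cond_read_acquire(&lock->cnts, !(VAL & _QW_LOCKED));
                return;
        }

        /*
         * Otherwise back out the reader bias taken in the fast path
         * and queue up behind any waiters.
         */
        atomic_sub(_QR_BIAS, &lock->cnts);
        /* ... wait_lock queue handling elided ... */
}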

Cheers,
Longman
