Subject: Re: [PATCH v5 3/3] locking/qrwlock: Don't contend with readers when setting _QW_WAITING
On 06/22/2015 12:21 PM, Will Deacon wrote:
> Hi Waiman,
>
> On Fri, Jun 19, 2015 at 04:50:02PM +0100, Waiman Long wrote:
>> The current cmpxchg() loop in setting the _QW_WAITING flag for writers
>> in queue_write_lock_slowpath() will contend with incoming readers
>> causing possibly extra cmpxchg() operations that are wasteful. This
>> patch changes the code to do a byte cmpxchg() to eliminate contention
>> with new readers.
> [...]
>
>> diff --git a/arch/x86/include/asm/qrwlock.h b/arch/x86/include/asm/qrwlock.h
>> index a8810bf..5678b0a 100644
>> --- a/arch/x86/include/asm/qrwlock.h
>> +++ b/arch/x86/include/asm/qrwlock.h
>> @@ -7,8 +7,7 @@
>> #define queued_write_unlock queued_write_unlock
>> static inline void queued_write_unlock(struct qrwlock *lock)
>> {
>> - barrier();
>> - ACCESS_ONCE(*(u8 *)&lock->cnts) = 0;
>> + smp_store_release(&lock->wmode, 0);
>> }
>> #endif
> I reckon you could actually use this in the asm-generic header and remove
> the x86 arch version altogether. Most architectures support single-copy
> atomic byte access and those that don't (alpha?) can just not use qrwlock
> (or override write_unlock with atomic_sub).
>
> I already have a patch making this change, so I'm happy either way.
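[Editorial sketch, not the literal patch hunk: the slowpath change the quoted commit message describes replaces a full-word cmpxchg() on lock->cnts with a byte-wide cmpxchg() on just the writer byte, so concurrent readers, which only increment the reader-count bytes of the same word, can no longer make the cmpxchg() fail. The raw u8 cast and the assumption that the writer byte is the low byte of lock->cnts are illustrative; the kernel expresses the layout with a union.]

	/*
	 * Illustrative sketch only: set _QW_WAITING with a byte-wide
	 * cmpxchg() on the writer byte. Readers touch only the
	 * reader-count bytes, so they cannot cause this cmpxchg() to
	 * fail, unlike a full-word cmpxchg() on lock->cnts.
	 */
	for (;;) {
		u8 *wmode = (u8 *)&lock->cnts;	/* writer byte of the lock word */

		if (!READ_ONCE(*wmode) &&
		    cmpxchg(wmode, 0, _QW_WAITING) == 0)
			break;			/* waiting flag is now set */

		cpu_relax();
	}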

Yes, I am aware of that. If you have a patch to make that change, I am
fine with that too.
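
[Editorial sketch of the two write-unlock flavours under discussion; the smp_store_release() form is taken from the quoted diff, while the fallback helper name and its body are assumptions shown for contrast, in the style of the existing asm-generic implementation, not kernel API.]

	/* With single-copy atomic byte stores: release just the writer byte. */
	static inline void queued_write_unlock(struct qrwlock *lock)
	{
		smp_store_release(&lock->wmode, 0);
	}

	/*
	 * Without atomic byte stores: clear the writer bits from the whole
	 * lock word with an atomic subtraction (hypothetical helper name).
	 */
	static inline void queued_write_unlock_fallback(struct qrwlock *lock)
	{
		smp_mb__before_atomic();
		atomic_sub(_QW_LOCKED, &lock->cnts);
	}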

Cheers,
Longman

