Subject: Re: [PATCH] locking/rwsem: reduce spinlock contention in wakeup after up_read/up_write
On 04/18/2015 11:40 AM, Peter Zijlstra wrote:
> On Fri, Apr 17, 2015 at 10:03:18PM -0400, Waiman Long wrote:
>> @@ -478,7 +515,28 @@ struct rw_semaphore *rwsem_wake(struct rw_semaphore *sem)
>> {
>> unsigned long flags;
>>
>> - raw_spin_lock_irqsave(&sem->wait_lock, flags);
>> + /*
>> + * If a spinner is present, it is not necessary to do the wakeup.
>> + * Try to do wakeup only if the trylock succeeds to minimize
>> + * spinlock contention which may introduce too much delay in the
>> + * unlock operation.
>> + *
>> + * In case the spinning writer is just going to break out of the
>> + * waiting loop, it will still do a trylock in
>> + * rwsem_down_write_failed() before sleeping.
>> + * IOW, if rwsem_has_spinner() is true, it will guarantee at least
>> + * one trylock attempt on the rwsem.
> successful trylock? I think we're having 'issues' on if failed trylocks
> (and cmpxchg) imply full barriers.
>
>> + *
>> + * spinning writer
>> + * ---------------
>> + * [S] osq_unlock()
>> + * MB
>> + * [RmW] rwsem_try_write_lock()
>> + */
> Ordering comes in pairs, this is incomplete.

I am sorry that I was a bit sloppy here. I have just sent out an updated
patch to remedy this: I added an smp_mb__after_atomic() to ensure proper
memory ordering. However, I am not sure whether this primitive or just a
simple smp_rmb() would be more expensive on other, non-x86 architectures.
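
For illustration only, here is a rough sketch (not the updated patch itself)
of how the trylock fast path and the barrier could fit together in
rwsem_wake(). It assumes the rwsem_has_spinner() helper from the patch; the
wakeup body under the wait_lock is the existing code, and whether the barrier
on this side ends up being smp_rmb() here or smp_mb__after_atomic() after the
atomic update in the unlock path is exactly the open question above:

	/* Rough sketch for discussion only, not the updated patch. */
	struct rw_semaphore *rwsem_wake(struct rw_semaphore *sem)
	{
		unsigned long flags;

		if (rwsem_has_spinner(sem)) {
			/*
			 * Pairs with the full barrier in the spinning writer
			 * between osq_unlock() and rwsem_try_write_lock().
			 * Make sure the spinner check above cannot be
			 * reordered past the wait_lock probe below.
			 */
			smp_rmb();

			/*
			 * If the wait_lock is contended, give up on the
			 * wakeup here; the spinner will still do a trylock
			 * on the rwsem before sleeping, so no wakeup is lost.
			 */
			if (!raw_spin_trylock_irqsave(&sem->wait_lock, flags))
				return sem;
			goto locked;
		}
		raw_spin_lock_irqsave(&sem->wait_lock, flags);
	locked:
		/* do nothing if list empty */
		if (!list_empty(&sem->wait_list))
			sem = __rwsem_do_wake(sem, RWSEM_WAKE_ANY);

		raw_spin_unlock_irqrestore(&sem->wait_lock, flags);

		return sem;
	}

Whichever primitive turns out cheaper, the point you raised stands: the
barrier on this side has to pair with the store side in the spinning writer,
i.e. the [S] osq_unlock() / MB / [RmW] rwsem_try_write_lock() sequence quoted
above.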

Cheers,
Longman

