Subject: Re: [PATCH 4/5] locking/percpu-rwsem: Extract __percpu_down_read_trylock()
From: Oleg Nesterov
Date: 2019-11-18
Hi Peter, sorry for the delay.

I'll re-read this series tomorrow, but everything looks correct at first
glance...

Except one very minor problem in this patch, see below.

On 11/13, Peter Zijlstra wrote:
>
> -bool __percpu_down_read(struct percpu_rw_semaphore *sem, bool try)
> +static bool __percpu_down_read_trylock(struct percpu_rw_semaphore *sem)
> {
> __this_cpu_inc(*sem->read_count);
>
> @@ -70,14 +70,21 @@ bool __percpu_down_read(struct percpu_rw
> * If !readers_block the critical section starts here, matched by the
> * release in percpu_up_write().
> */
> - if (likely(!smp_load_acquire(&sem->readers_block)))
> + if (likely(!atomic_read_acquire(&sem->readers_block)))

I don't think this can be compiled ;) ->readers_block is "int" until the next
patch makes it atomic_t and renames it to ->block.
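IOW, this patch should keep

	if (likely(!smp_load_acquire(&sem->readers_block)))

and do the atomic_read_acquire() conversion in the next patch, along with the
int -> atomic_t change.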

And. I think __percpu_down_read_trylock() should do

	if (atomic_read(&sem->block))
		return false;

at the start, before __this_cpu_inc(read_count).

Suppose that the pending writer sleeps in rcuwait_wait_event(readers_active_check).
If a new reader comes, it is better not to wake that writer up: the reader will
fail the trylock anyway, and the writer will just re-check readers_active_check()
and go back to sleep.
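
Something like this (just a sketch, assuming the atomic_t ->block from the next
patch is already in place; the rest of the body mirrors your patch):

	static bool __percpu_down_read_trylock(struct percpu_rw_semaphore *sem)
	{
		/*
		 * If a writer already blocked readers, don't bump read_count
		 * at all; this way we never wake the writer just to undo the
		 * increment below.
		 */
		if (atomic_read(&sem->block))
			return false;

		__this_cpu_inc(*sem->read_count);

		smp_mb(); /* A matches D */

		/*
		 * If !block the critical section starts here, matched by the
		 * release in percpu_up_write().
		 */
		if (likely(!atomic_read_acquire(&sem->block)))
			return true;

		this_cpu_dec(*sem->read_count);

		/* This is the wakeup the early check avoids. */
		rcuwait_wake_up(&sem->writer);

		return false;
	}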

Oleg.
