Subject: Re: [RFC PATCH] locking/percpu-rwsem: use this_cpu_{inc|dec}() for read_count
On 09/18, Peter Zijlstra wrote:
>
> On Fri, Sep 18, 2020 at 12:01:12PM +0200, peterz@infradead.org wrote:
> > + u64 sum = per_cpu_sum(*(u64 *)sem->read_count);
>
> Moo, that doesn't work, we have to do two separate sums.

Or we can re-introduce "atomic_t slow_read_ctr".

percpu_up_read_irqsafe(sem)
{
	preempt_disable();
	atomic_dec_release(&sem->slow_read_ctr);
	if (!rcu_sync_is_idle(&sem->rss))
		rcuwait_wake_up(&sem->writer);
	preempt_enable();
}

readers_active_check(sem)
{
	unsigned int sum = per_cpu_sum(*sem->read_count) +
			   (unsigned int)atomic_read(&sem->slow_read_ctr);
	if (sum)
		return false;
	...
}

Of course, this assumes that atomic_t->counter underflows "correctly", just
like "unsigned int".

But again, do we really want this?

Oleg.
