Subject: Re: rcu/tree: Protect rcu_rdp_is_offloaded() invocations on RT

On 2021-09-22 13:38:20 [+0200], Frederic Weisbecker wrote:
> > The part with rcutree.use_softirq=0 on RT does not make it any better,
> > right?
>
> The rcuc kthread disables softirqs before calling rcu_core(), so it behaves
> pretty much the same as a softirq. Or am I missing something?

Oh, no you don't.
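
For reference, the rcuc kthread path boils down to roughly this (trimmed
from rcu_cpu_kthread() in kernel/rcu/tree.c; the retry loop and status
bookkeeping are omitted, so treat it as a sketch rather than the verbatim
source):

static void rcu_cpu_kthread(unsigned int cpu)
{
	char work, *workp = this_cpu_ptr(&rcu_data.rcu_cpu_has_work);

	local_bh_disable();		/* BH off, mirroring softirq context */
	local_irq_disable();
	work = *workp;
	*workp = 0;
	local_irq_enable();
	if (work)
		rcu_core();		/* invoked with BH disabled */
	local_bh_enable();
}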

> > So you rely on some implicit behaviour which breaks with RT such as:
> >
> >                  CPU 0
> >  -----------------------------------------------
> >   RANDOM TASK-A                   RANDOM TASK-B
> >   ------                          -----------
> >   int *X = &per_cpu(CPUX, 0)      int *X = &per_cpu(CPUX, 0)
> >   int A, B;
> >   spin_lock(&D);
> >                                   spin_lock(&C);
> >                                   WRITE_ONCE(*X, 0);
> >   A = READ_ONCE(*X);
> >                                   WRITE_ONCE(*X, 1);
> >   B = READ_ONCE(*X);
> >
> > while spinlocks C and D are just random locks not related to CPUX; it
> > just happens that they are held at that time. So for !RT you guarantee
> > that A == B, while that is not the case on RT.
>
> Not sure which spinlocks you are referring to here. Also most RCU spinlocks
> are raw.

I was giving an example where you could also rely on the implicit
locking provided by spin_lock(), which breaks on RT.
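
To make the example above concrete, here is a minimal kernel-style
sketch of the same pattern; cpux_var, lock_c and lock_d are made-up
names just for illustration, not anything from the patch in question:

#include <linux/percpu.h>
#include <linux/spinlock.h>
#include <linux/bug.h>

static DEFINE_PER_CPU(int, cpux_var);
static DEFINE_SPINLOCK(lock_c);		/* random lock held by the writer */
static DEFINE_SPINLOCK(lock_d);		/* random lock held by the reader */

/* TASK-B on CPU 0: updates the per-CPU variable under an unrelated lock. */
static void task_b_write(void)
{
	int *X = &per_cpu(cpux_var, 0);

	spin_lock(&lock_c);
	WRITE_ONCE(*X, 0);
	WRITE_ONCE(*X, 1);
	spin_unlock(&lock_c);
}

/*
 * TASK-A on CPU 0: reads the per-CPU variable twice under another
 * unrelated lock.  On !RT, spin_lock() disables preemption, so
 * task_b_write() cannot run between the two reads and A == B always
 * holds.  On RT, spinlock_t is a sleeping lock and preemption stays
 * enabled, so the writer can preempt between the reads and the
 * WARN_ON() can fire.
 */
static void task_a_read(void)
{
	int *X = &per_cpu(cpux_var, 0);
	int A, B;

	spin_lock(&lock_d);
	A = READ_ONCE(*X);
	B = READ_ONCE(*X);
	spin_unlock(&lock_d);

	WARN_ON(A != B);
}

Note that raw_spinlock_t keeps the !RT behaviour and still disables
preemption on RT, which is why it matters that most RCU spinlocks are
raw.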

Sebastian
