Subject: [tip: core/rcu] rcutorture: Add test of holding scheduler locks across rcu_read_unlock()
The following commit has been merged into the core/rcu branch of tip:

Commit-ID: 52b1fc3f798d02a3a9d1cf7a84e98a795223410a
Gitweb: https://git.kernel.org/tip/52b1fc3f798d02a3a9d1cf7a84e98a795223410a
Author: Paul E. McKenney <paulmck@kernel.org>
AuthorDate: Sat, 28 Mar 2020 18:53:25 -07:00
Committer: Paul E. McKenney <paulmck@kernel.org>
CommitterDate: Mon, 27 Apr 2020 11:03:50 -07:00

rcutorture: Add test of holding scheduler locks across rcu_read_unlock()

Now that it should be safe to hold scheduler locks across
rcu_read_unlock(), even in cases where the corresponding RCU read-side
critical section might have been preempted and boosted, this commit adds
a test of this capability to rcutorture. This has been tested on current
mainline (which can deadlock in this situation), and lockdep duly reported
the expected deadlock. On -rcu, lockdep is silent, thus far anyway.

Cc: Ingo Molnar <mingo@redhat.com>
Cc: Peter Zijlstra <peterz@infradead.org>
Cc: Juri Lelli <juri.lelli@redhat.com>
Cc: Vincent Guittot <vincent.guittot@linaro.org>
Signed-off-by: Paul E. McKenney <paulmck@kernel.org>
---
kernel/rcu/rcutorture.c | 10 +++++++++-
1 file changed, 9 insertions(+), 1 deletion(-)

diff --git a/kernel/rcu/rcutorture.c b/kernel/rcu/rcutorture.c
index 5453bd5..b348cf8 100644
--- a/kernel/rcu/rcutorture.c
+++ b/kernel/rcu/rcutorture.c
@@ -1147,6 +1147,7 @@ static void rcutorture_one_extend(int *readstate, int newstate,
struct torture_random_state *trsp,
struct rt_read_seg *rtrsp)
{
+ unsigned long flags;
int idxnew = -1;
int idxold = *readstate;
int statesnew = ~*readstate & newstate;
@@ -1181,8 +1182,15 @@ static void rcutorture_one_extend(int *readstate, int newstate,
rcu_read_unlock_bh();
if (statesold & RCUTORTURE_RDR_SCHED)
rcu_read_unlock_sched();
- if (statesold & RCUTORTURE_RDR_RCU)
+ if (statesold & RCUTORTURE_RDR_RCU) {
+ bool lockit = !statesnew && !(torture_random(trsp) & 0xffff);
+
+ if (lockit)
+ raw_spin_lock_irqsave(&current->pi_lock, flags);
cur_ops->readunlock(idxold >> RCUTORTURE_RDR_SHIFT);
+ if (lockit)
+ raw_spin_unlock_irqrestore(&current->pi_lock, flags);
+ }

/* Delay if neither beginning nor end and there was a change. */
if ((statesnew || statesold) && *readstate && newstate)
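
For reference, the pattern exercised by the new code boils down to ending
an RCU reader while a scheduler lock is held. Below is a minimal,
illustrative sketch, not part of the patch: the function name is made up,
and it assumes ordinary kernel task context.

	static void hold_pi_lock_across_unlock(void)	/* hypothetical example */
	{
		unsigned long flags;

		rcu_read_lock();
		/* ... read-side accesses; the reader may be preempted and boosted ... */

		/* Take a scheduler lock, then leave the RCU read-side critical section. */
		raw_spin_lock_irqsave(&current->pi_lock, flags);
		rcu_read_unlock();
		raw_spin_unlock_irqrestore(&current->pi_lock, flags);
	}

On kernels where rcu_read_unlock() may itself need scheduler locks to undo
priority boosting, this sequence could deadlock; on -rcu it is expected to
be safe, which is what the rcutorture change above checks, doing so only
rarely (roughly one in 65536 unlocks, per the 0xffff mask) to keep the
common path undisturbed.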