From: Byungchul Park <byungchul.park@lge.com>
Subject: [PATCH] lib/spinlock_debug: Prevent unnecessary recursive spin_dump()
Date: Thu, 24 Mar 2016

For your information, there is another patch I posted that looks similar
to this one but is in fact entirely different:
https://lkml.org/lkml/2016/3/11/192. That patch tries to solve a deadlock
problem.

In that patch, I tried to make printk() asynchronous to avoid the
deadlock, but I found that Sergey and Jan were already working on
asynchronous printk(): https://lkml.org/lkml/2016/3/23/316. So I decided
to wait for that work to be done and then rework my patch on top of
Sergey and Jan's.

This patch, on the other hand, is not about avoiding the deadlock but
about avoiding unnecessary recursion. See the call sequence shown below.

-----8<-----
From c4be9976ec21debc3883fc2d97ba4a905102876f Mon Sep 17 00:00:00 2001
From: Byungchul Park <byungchul.park@lge.com>
Date: Thu, 24 Mar 2016 11:37:25 +0900
Subject: [PATCH] lib/spinlock_debug: Prevent unnecessary recursive spin_dump()

Printing "lockup suspected" for the same lock more than once is
meaningless. Furtheremore, it can cause an indefinite recursion if it
occures within a printk(). For example,

printk
  spin_lock(A)
    spin_dump // lockup suspected for A
      printk
        spin_lock(A)
          spin_dump // lockup suspected for A
            ... indefinitely

where "A" can be any lock which is used within printk().

The recursion stops once the lock causing the lockup is released.
However, these warning messages, repeated and accumulated
unnecessarily, can eat up the printk log buffer, and producing them
wastes CPU time as well. We have to avoid this situation.

Of course, this patch cannot detect the recursion perfectly. In rare
cases it can still print the "lockup suspected" message recursively
several times, for example when a report on another CPU overwrites the
marker in the meantime (see the sketch below). But at least we can
detect and prevent unnecessary indefinite recursion without missing any
important output. Perfect detection would require more complex code and
more memory, which is not worth it.
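
To make the rare case concrete, here is a deterministic single-threaded
sketch (hypothetical sim_* names, not kernel code) that replays one racy
interleaving by hand: a report on another CPU overwrites the
unsynchronized marker between this CPU's enter_lockup_suspected() and
its recursive check, so one extra message slips through.

#include <stdio.h>

struct sim_lock { const char *name; };

/* the shared marker, unsynchronized just like in the patch */
static struct sim_lock *sus_lock;
static int sus_cpu = -1;

static void enter(struct sim_lock *lock, int cpu)
{
	sus_lock = lock;
	sus_cpu = cpu;
}

static int detect(struct sim_lock *lock, int cpu)
{
	return sus_lock == lock && sus_cpu == cpu;
}

int main(void)
{
	struct sim_lock A = { "A" }, B = { "B" };

	enter(&A, 0);	/* CPU0 begins reporting a lockup on A */
	enter(&B, 1);	/* CPU1 reports on B, overwriting the marker */

	/* CPU0's printk() now recurses on A, but the check misses: */
	if (!detect(&A, 0))
		puts("duplicate \"lockup suspected\" for A slips through");
	return 0;
}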

Signed-off-by: Byungchul Park <byungchul.park@lge.com>
---
kernel/locking/spinlock_debug.c | 31 ++++++++++++++++++++++++++++++-
1 file changed, 30 insertions(+), 1 deletion(-)

diff --git a/kernel/locking/spinlock_debug.c b/kernel/locking/spinlock_debug.c
index 0374a59..653eea9 100644
--- a/kernel/locking/spinlock_debug.c
+++ b/kernel/locking/spinlock_debug.c
@@ -103,6 +103,31 @@ static inline void debug_spin_unlock(raw_spinlock_t *lock)
 	lock->owner_cpu = -1;
 }
 
+static raw_spinlock_t *sus_lock;
+static unsigned int sus_cpu = -1;
+static pid_t sus_pid = -1;
+
+static inline void enter_lockup_suspected(raw_spinlock_t *lock)
+{
+	sus_lock = lock;
+	sus_cpu = raw_smp_processor_id();
+	sus_pid = task_pid_nr(current);
+}
+
+static inline void exit_lockup_suspected(raw_spinlock_t *lock)
+{
+	sus_lock = NULL;
+	sus_cpu = -1;
+	sus_pid = -1;
+}
+
+static inline int detect_recursive_lockup_suspected(raw_spinlock_t *lock)
+{
+	return sus_lock == lock &&
+	       sus_cpu == raw_smp_processor_id() &&
+	       sus_pid == task_pid_nr(current);
+}
+
 static void __spin_lock_debug(raw_spinlock_t *lock)
 {
 	u64 i;
@@ -114,7 +139,11 @@ static void __spin_lock_debug(raw_spinlock_t *lock)
 		__delay(1);
 	}
 	/* lockup suspected: */
-	spin_dump(lock, "lockup suspected");
+	if (likely(!detect_recursive_lockup_suspected(lock))) {
+		enter_lockup_suspected(lock);
+		spin_dump(lock, "lockup suspected");
+		exit_lockup_suspected(lock);
+	}
 #ifdef CONFIG_SMP
 	trigger_all_cpu_backtrace();
 #endif
--
1.9.1