Subject: [PATCH] tracing, sched: mark preempt_schedule() notrace

Currently, preempt_schedule() is not marked notrace, which can lead to
infinite recursion via __trace_graph_return():

preempt_schedule()
  __trace_graph_return()
    ftrace_preempt_disable()      (samples need_resched() as false)
    ftrace_preempt_enable()
      preempt_enable_notrace()
        preempt_schedule()        (need_resched() may be true again)
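
To make the loop concrete, here is a small userspace model of that call
chain. It is not kernel code: the names mirror the kernel helpers
involved, but every body is a simplified stand-in written only for this
illustration, and it assumes ftrace_preempt_disable() sampled
need_resched() as false while need_resched() becomes true again before
preempt_enable_notrace() checks it.

/*
 * Userspace model of the recursion above -- NOT kernel code.
 */
#include <stdbool.h>
#include <stdio.h>

static int depth;                      /* how deep the recursion went      */
static bool resched_pending = true;    /* models need_resched() being set  */

static bool need_resched(void) { return resched_pending; }

static void preempt_schedule(void);    /* forward declaration              */

/* Models the function-graph return handler that runs when a traced
 * function (here: preempt_schedule) returns. */
static void trace_graph_return_model(void)
{
	bool resched = false;          /* ftrace_preempt_disable() sampled
					* need_resched() as false */

	/* ftrace_preempt_enable(resched): with resched == false the kernel
	 * takes the preempt_enable_notrace() path, which re-checks
	 * need_resched() and may call preempt_schedule() again. */
	if (!resched && need_resched())
		preempt_schedule();    /* re-entry: this is the recursion  */
}

/* Models a *traced* preempt_schedule(): returning from it triggers the
 * graph-return handler. Marking the real function notrace removes the
 * hook, so the handler never runs for it and the loop cannot start. */
static void preempt_schedule(void)
{
	if (++depth > 5) {             /* cap the demo instead of blowing
					* the stack */
		printf("re-entered preempt_schedule, depth %d\n", depth);
		resched_pending = false;
		return;
	}
	/* ... the real function would call schedule() here ... */
	trace_graph_return_model();
}

int main(void)
{
	preempt_schedule();
	return 0;
}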


This hardly ever happens, but marking preempt_schedule() notrace makes
it safer.

One interesting thing is that preempt_schedule() is on the kprobes
subsystem blacklist. "__kprobes" implies "notrace", but preempt_schedule()
cannot be marked __kprobes because it is already marked __sched. Its
presence on the blacklist made me ask whether it should be marked
"notrace" -- and the answer is YES.
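
For completeness: if I recall the definition in include/linux/compiler.h
correctly, notrace is just the no_instrument_function attribute, which
keeps gcc from emitting the mcount call that ftrace hooks, so neither the
entry hook nor the graph-return handler runs for the function:

/* from memory; please double-check against the tree this applies to */
#define notrace __attribute__((no_instrument_function))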

Signed-off-by: Lai Jiangshan <laijs@cn.fujitsu.com>
---
diff --git a/kernel/sched.c b/kernel/sched.c
index 5184580..2e9e209 100644
--- a/kernel/sched.c
+++ b/kernel/sched.c
@@ -5534,7 +5534,7 @@ out:
  * off of preempt_enable. Kernel preemptions off return from interrupt
  * occur there and call schedule directly.
  */
-asmlinkage void __sched preempt_schedule(void)
+asmlinkage void __sched notrace preempt_schedule(void)
 {
 	struct thread_info *ti = current_thread_info();
