Subject: Re: [RFC][PATCH 03/11] sched: Robustify preemption leak checks

On Tue, 29 Sep 2015 11:28:28 +0200
Peter Zijlstra <peterz@infradead.org> wrote:

> When we warn about a preempt_count leak, reset the preempt_count to
> the known good value so that the problem does not ripple forward.
>
> Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
> ---
>  kernel/exit.c       | 4 +++-
>  kernel/sched/core.c | 4 +++-
>  2 files changed, 6 insertions(+), 2 deletions(-)
>
> --- a/kernel/exit.c
> +++ b/kernel/exit.c
> @@ -706,10 +706,12 @@ void do_exit(long code)
>  	smp_mb();
>  	raw_spin_unlock_wait(&tsk->pi_lock);
>
> -	if (unlikely(in_atomic()))
> +	if (unlikely(in_atomic())) {
>  		pr_info("note: %s[%d] exited with preempt_count %d\n",
>  			current->comm, task_pid_nr(current),
>  			preempt_count());
> +		preempt_count_set(PREEMPT_ENABLED);
> +	}

Looks good.
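
For context, here is a hypothetical module showing the kind of leak this
hunk is aimed at: a kthread whose thread function returns while still
holding a preempt count. None of this code is from the patch, and the
leaky_fn/leak_init names are invented; only the kthread/preempt calls
are real kernel APIs.

#include <linux/kthread.h>
#include <linux/module.h>
#include <linux/preempt.h>

static int leaky_fn(void *data)
{
	preempt_disable();	/* bug: no matching preempt_enable() */

	/*
	 * kthread() calls do_exit() when the thread function returns.
	 * in_atomic() is true there, so the "exited with preempt_count 1"
	 * note prints, and with this hunk the count is then reset to
	 * PREEMPT_ENABLED so the rest of do_exit(), which can sleep,
	 * does not keep tripping the atomic checks.
	 */
	return 0;
}

static int __init leak_init(void)
{
	kthread_run(leaky_fn, NULL, "leaky");
	return 0;
}
module_init(leak_init);
MODULE_LICENSE("GPL");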

>
>  	/* sync mm's RSS info before statistics gathering */
>  	if (tsk->mm)
> --- a/kernel/sched/core.c
> +++ b/kernel/sched/core.c
> @@ -2960,8 +2960,10 @@ static inline void schedule_debug(struct
>  	 * schedule() atomically, we ignore that path. Otherwise whine
>  	 * if we are scheduling when we should not.
>  	 */
> -	if (unlikely(in_atomic_preempt_off() && prev->state != TASK_DEAD))
> +	if (unlikely(in_atomic_preempt_off() && prev->state != TASK_DEAD)) {
>  		__schedule_bug(prev);
> +		preempt_count_set(PREEMPT_DISABLED);
> +	}

Of course, if this was not a preemption leak but something that called
schedule() from within a preempt_disable()/preempt_enable() section,
then when schedule() returns, preemption will be enabled, right?
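
To trace that scenario concretely, here is a hypothetical caller (the
buggy_section() name is invented; the calls are real kernel APIs)
showing how the counts move once the reset is applied:

#include <linux/preempt.h>
#include <linux/sched.h>

static void buggy_section(void)
{
	preempt_disable();	/* preempt_count: 0 -> 1 */

	/*
	 * Scheduling here is the bug. schedule() takes its own preempt
	 * reference (count: 1 -> 2), so schedule_debug() sees
	 * in_atomic_preempt_off(), calls __schedule_bug(), and, with
	 * this patch, resets the count to PREEMPT_DISABLED (1).
	 */
	schedule();

	/*
	 * schedule() dropped its own reference on the way out, so it
	 * returned with preempt_count 0: the remainder of this
	 * "critical section" runs with preemption enabled, as noted
	 * above.
	 */

	preempt_enable();	/* 0 -> underflow; CONFIG_DEBUG_PREEMPT
				 * warns about that separately */
}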

-- Steve


>  	rcu_sleep_check();
>
>  	profile_hit(SCHED_PROFILING, __builtin_return_address(0));
>


