    Subject: Re: [RFC][PATCH 03/11] sched: Robustify preemption leak checks
    On Tue, Sep 29, 2015 at 11:28:28AM +0200, Peter Zijlstra wrote:
    > When we warn about a preempt_count leak, reset the preempt_count to
    > the known good value such that the problem does not ripple forward.
    >
    > Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
    > ---
    > kernel/exit.c | 4 +++-
    > kernel/sched/core.c | 4 +++-
    > 2 files changed, 6 insertions(+), 2 deletions(-)
    >
    > --- a/kernel/exit.c
    > +++ b/kernel/exit.c
    > @@ -706,10 +706,12 @@ void do_exit(long code)
    > smp_mb();
    > raw_spin_unlock_wait(&tsk->pi_lock);
    >
    > - if (unlikely(in_atomic()))
    > + if (unlikely(in_atomic())) {
    > pr_info("note: %s[%d] exited with preempt_count %d\n",
    > current->comm, task_pid_nr(current),
    > preempt_count());
    > + preempt_count_set(PREEMPT_ENABLED);
    > + }
    >
    > /* sync mm's RSS info before statistics gathering */
    > if (tsk->mm)
    > --- a/kernel/sched/core.c
    > +++ b/kernel/sched/core.c
    > @@ -2960,8 +2960,10 @@ static inline void schedule_debug(struct
    > * schedule() atomically, we ignore that path. Otherwise whine
    > * if we are scheduling when we should not.
    > */
    > - if (unlikely(in_atomic_preempt_off() && prev->state != TASK_DEAD))
    > + if (unlikely(in_atomic_preempt_off() && prev->state != TASK_DEAD)) {
    > __schedule_bug(prev);
    > + preempt_count_set(PREEMPT_DISABLED);
    > + }

    That one would be a bit fragile if we kept PREEMPT_ACTIVE, but since we are removing
    it...
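
    [ Illustrative aside, not part of the patch: a tiny standalone sketch of
      the fragility being pointed at. Before this series, the preemption path
      folded PREEMPT_ACTIVE into preempt_count around __schedule(), so a
      blanket reset to PREEMPT_DISABLED in schedule_debug() would also wipe
      that flag, and the matching preempt_count_sub() on the way out would
      underflow. The constants and flow below are simplified userspace
      stand-ins, not kernel code. ]

    #include <stdio.h>

    #define PREEMPT_DISABLED  1U           /* one preempt_disable() level */
    #define PREEMPT_ACTIVE    0x10000000U  /* stand-in value; the real one was per-arch */

    int main(void)
    {
            /* A task preempted with PREEMPT_ACTIVE set that also leaked
             * one extra preempt_disable() before reaching __schedule(). */
            unsigned int count = PREEMPT_ACTIVE | (PREEMPT_DISABLED + 1);

            /* The debug check fires and "repairs" the leak with a hard reset. */
            if ((count & ~PREEMPT_ACTIVE) != PREEMPT_DISABLED)
                    count = PREEMPT_DISABLED;

            /* ...but the reset also cleared PREEMPT_ACTIVE, which the
             * preemption path still expects to subtract out afterwards. */
            printf("PREEMPT_ACTIVE survived the reset? %s\n",
                   (count & PREEMPT_ACTIVE) ? "yes" : "no");
            return 0;
    }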

    Reviewed-by: Frederic Weisbecker <fweisbec@gmail.com>

