Date:	Tue, 29 Sep 2015 17:17:13 +0200
From:	Peter Zijlstra <>
Subject:	Re: [RFC][PATCH 03/11] sched: Robustify preemption leak checks
On Tue, Sep 29, 2015 at 11:07:34AM -0400, Steven Rostedt wrote:
> On Tue, 29 Sep 2015 11:28:28 +0200
> Peter Zijlstra <peterz@infradead.org> wrote:
> > --- a/kernel/sched/core.c
> > +++ b/kernel/sched/core.c
> > @@ -2960,8 +2960,10 @@ static inline void schedule_debug(struct task_struct *prev)
> >  	 * schedule() atomically, we ignore that path. Otherwise whine
> >  	 * if we are scheduling when we should not.
> >  	 */
> > -	if (unlikely(in_atomic_preempt_off() && prev->state != TASK_DEAD))
> > +	if (unlikely(in_atomic_preempt_off() && prev->state != TASK_DEAD)) {
> >  		__schedule_bug(prev);
> > +		preempt_count_set(PREEMPT_DISABLED);
> > +	}
>
> Of course, if this was not a preemption leak, but something that called
> schedule within a preempt_disable()/preempt_enable() section, when it
> returns, preemption will be enabled, right?
Indeed.. But it ensures only the task that incorrectly called schedule() gets screwed and not everybody else.
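To make that concrete, a sketch of the kind of buggy call site Steven
describes (hypothetical, not anything in this patch):

	/* hypothetical buggy call site, for illustration only */
	preempt_disable();	/* caller believes preemption is off */
	schedule();		/* invalid context: __schedule_bug() whines,
				 * and the new hunk resets preempt_count to
				 * PREEMPT_DISABLED, so schedule() returns
				 * with preemption enabled */
	/* the remainder of this "atomic" section now runs preemptible;
	 * the buggy caller is screwed, but only the buggy caller */
	preempt_enable();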
This is most important on x86, which has a per-cpu preempt_count that is
not saved/restored across a context switch (after this series). So if you
schedule with an invalid (!= 2*PREEMPT_DISABLE_OFFSET) preempt_count, the
next task is messed up too.
Enforcing this invariant limits the borkage to just the one task.
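As a crude user-space model of why resetting the count confines the damage
(all names below are made up for illustration; this is not kernel code):

	#include <stdio.h>

	/* On x86 (after this series) preempt_count lives in a per-cpu
	 * variable and is not saved/restored on a context switch, so the
	 * incoming task inherits whatever the outgoing task left there. */
	static int percpu_preempt_count;

	#define PREEMPT_DISABLED 1	/* expected count in __schedule() */

	static void buggy_task(void)
	{
		/* preempt_disable() with no matching preempt_enable(): leak */
		percpu_preempt_count += 1;
	}

	/* model of the new hunk: whine, then repair the count */
	static void schedule_debug_model(void)
	{
		if (percpu_preempt_count != PREEMPT_DISABLED) {
			fprintf(stderr, "BUG: scheduling while atomic\n");
			percpu_preempt_count = PREEMPT_DISABLED;
		}
	}

	int main(void)
	{
		percpu_preempt_count = PREEMPT_DISABLED;
		buggy_task();		/* leaks one preempt_count */
		schedule_debug_model();	/* without the reset, the next task
					 * would start with count == 2 and
					 * be messed up too */
		printf("next task inherits preempt_count = %d\n",
		       percpu_preempt_count);
		return 0;
	}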