Date: Thu, 9 Oct 2014 18:57:13 +0200
From: Oleg Nesterov <>
Subject: Re: [PATCH v2 1/2] sched: schedule_tail() should disable preemption
Peter,
let me first say that I understand that cleanups are always subjective. So if you do not like it - I won't argue at all.
On 10/09, Peter Zijlstra wrote:
>
> On Thu, Oct 09, 2014 at 04:57:26PM +0200, Oleg Nesterov wrote:
> >
> > but first we need to remove ->saved_preempt_count.
>
> Why do you want to kill that?
Because imo this makes the code a bit simpler. But (perhaps) mostly because personally I dislike any "special" member in task_struct/thread_info, and it seems to me that ->saved_preempt_count buys nothing. We only need it to record/restore the counter before/after switch_to(), and a local variable looks better to me.
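To recall why we have it at all: on x86, __switch_to() carries the per-cpu counter across the stack switch through this member. Roughly like this (quoting from memory, so the exact lines may differ):

	#ifdef CONFIG_PREEMPT_COUNT
		/*
		 * Park the outgoing task's counter in its thread_info and
		 * load the incoming task's saved value into the per-cpu
		 * variable.
		 */
		task_thread_info(prev_p)->saved_preempt_count =
			this_cpu_read(__preempt_count);
		this_cpu_write(__preempt_count,
			       task_thread_info(next_p)->saved_preempt_count);
	#endif

A local variable in context_switch() can do the same record/restore without touching thread_info at all.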
But again, see above. If the maintainer doesn't like the cleanup - then it should be counted as uglification ;)
> Your earlier proposal would penalize every
> !x86 arch by adding extra code to the scheduler core while they already
> automagically preserve their thread_info::preempt_count.
Sure, and it can't even be compiled on !x86.
But this is simple, we just need a new helper, preempt_count_restore(), defined as a nop in asm-generic/preempt.h. Well, perhaps another helper makes sense, preempt_count_raw(), which simply reads the counter, but this is minor.
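Something like this on x86, just a sketch of what I mean, the helper does not exist anywhere yet:

	static __always_inline int preempt_count_raw(void)
	{
		/*
		 * Read the per-cpu counter as-is. Unlike preempt_count(),
		 * which masks out PREEMPT_NEED_RESCHED, this would return
		 * the raw value we want to record before switch_to().
		 */
		return raw_cpu_read_4(__preempt_count);
	}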
After the patch below we can remove ->saved_preempt_count, including init_task_preempt_count(), which is no longer needed after the change in schedule_tail().
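To recall, on x86 that helper only seeds the to-be-removed member at fork time (again from memory, so roughly):

	#define init_task_preempt_count(p) do { \
		task_thread_info(p)->saved_preempt_count = PREEMPT_DISABLED; \
	} while (0)

Once schedule_tail() sets the counter itself, seeding it at fork time has no user left.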
No?
Oleg.
diff --git a/arch/x86/include/asm/preempt.h b/arch/x86/include/asm/preempt.h
index 8f32718..695307f 100644
--- a/arch/x86/include/asm/preempt.h
+++ b/arch/x86/include/asm/preempt.h
@@ -27,6 +27,11 @@ static __always_inline void preempt_count_set(int pc)
 	raw_cpu_write_4(__preempt_count, pc);
 }
 
+static __always_inline void preempt_count_restore(int pc)
+{
+	preempt_count_set(pc);
+}
+
 /*
  * must be macros to avoid header recursion hell
  */
diff --git a/include/asm-generic/preempt.h b/include/asm-generic/preempt.h
index eb6f9e6..14de30e 100644
--- a/include/asm-generic/preempt.h
+++ b/include/asm-generic/preempt.h
@@ -20,6 +20,10 @@ static __always_inline void preempt_count_set(int pc)
 	*preempt_count_ptr() = pc;
 }
 
+static __always_inline void preempt_count_restore(int pc)
+{
+}
+
 /*
  * must be macros to avoid header recursion hell
  */
diff --git a/kernel/sched/core.c b/kernel/sched/core.c
index cfe9905..ad8ca02 100644
--- a/kernel/sched/core.c
+++ b/kernel/sched/core.c
@@ -2279,6 +2279,8 @@ asmlinkage __visible void schedule_tail(struct task_struct *prev)
 {
 	struct rq *rq;
 
+	preempt_count_set(PREEMPT_DISABLED);
+
 	/* finish_task_switch() drops rq->lock and enables preemption */
 	preempt_disable();
 	rq = this_rq();
@@ -2299,6 +2301,7 @@ context_switch(struct rq *rq, struct task_struct *prev,
 	       struct task_struct *next)
 {
 	struct mm_struct *mm, *oldmm;
+	int pc;
 
 	prepare_task_switch(rq, prev, next);
 
@@ -2333,10 +2336,12 @@ context_switch(struct rq *rq, struct task_struct *prev,
 #endif
 
 	context_tracking_task_switch(prev, next);
+
+	pc = preempt_count();
 	/* Here we just switch the register state and the stack. */
 	switch_to(prev, next, prev);
 
-	barrier();
+	preempt_count_restore(pc);
 	/*
 	 * this_rq must be evaluated again because prev may have moved
 	 * CPUs since it called schedule(), thus the 'rq' on its stack
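And to explain the new preempt_count_set(PREEMPT_DISABLED) in schedule_tail(): a new task does not return through context_switch() at all, so the preempt_count_restore() above never runs for it. Very roughly (a hypothetical C rendering of the asm entry path, only to illustrate the flow):

	/* hypothetical C sketch of what ret_from_fork does for a new task */
	void ret_from_fork_sketch(struct task_struct *prev)
	{
		/*
		 * First kernel code of a new task. It arrives here straight
		 * from switch_to(), bypassing preempt_count_restore() in
		 * context_switch(), so schedule_tail() has to seed the
		 * (per-cpu on x86) counter itself before it can use
		 * preempt_disable()/preempt_enable().
		 */
		schedule_tail(prev);

		/* ... then return to user mode or call the kthread function */
	}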