Subject: Re: [PATCH v2 1/2] sched: schedule_tail() should disable preemption
On Thu, Oct 09, 2014 at 06:57:13PM +0200, Oleg Nesterov wrote:
> > Your earlier proposal would penalize every
> > !x86 arch by adding extra code to the scheduler core while they already
> > automagically preserve their thread_info::preempt_count.
>
> Sure, and it can't even be compiled on !x86.
>
> But this is simple, we just need a new helper, preempt_count_restore(),
> defined as a nop in asm-generic/preempt.h. Well, perhaps another helper
> makes sense, preempt_count_raw(), which simply reads the counter, but
> this is minor.
>
> After the patch below we can remove ->saved_preempt_count, along with
> init_task_preempt_count(), which is no longer needed after the change in
> schedule_tail().

Ah, right, this makes more sense.
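
Something like this, I suppose (untested sketch; the asm-generic nop is
what you described, the x86 side is my guess at what the helper would do,
assuming it just writes the saved value back into the per-cpu
__preempt_count):

	/* include/asm-generic/preempt.h: thread_info::preempt_count is
	 * preserved across switch_to() anyway, so nothing to do here. */
	static __always_inline void preempt_count_restore(int pc)
	{
	}

	/* arch/x86/include/asm/preempt.h: the per-cpu counter does not
	 * follow the task, so (presumably) put back the value saved
	 * before switch_to(). */
	static __always_inline void preempt_count_restore(int pc)
	{
		raw_cpu_write_4(__preempt_count, pc);
	}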

> @@ -2333,10 +2336,12 @@ context_switch(struct rq *rq, struct task_struct *prev,
> #endif
>
> context_tracking_task_switch(prev, next);
> +
> + pc = preempt_count();

The only problem here is that you can lose PREEMPT_NEED_RESCHED; I
haven't thought about whether that is a problem here or not.
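
That is, on x86 preempt_count() deliberately strips the (inverted)
need-resched bit from the value it returns, so saving that value and
writing it back raw after switch_to() cannot preserve whatever state the
bit has by then (quoting arch/x86/include/asm/preempt.h from memory):

	static __always_inline int preempt_count(void)
	{
		/* PREEMPT_NEED_RESCHED is kept in __preempt_count but
		 * masked out of the count reported to callers. */
		return raw_cpu_read_4(__preempt_count) & ~PREEMPT_NEED_RESCHED;
	}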

> /* Here we just switch the register state and the stack. */
> switch_to(prev, next, prev);
> -
> barrier();
> + preempt_count_restore(pc);
> /*
> * this_rq must be evaluated again because prev may have moved
> * CPUs since it called schedule(), thus the 'rq' on its stack
>

