Date: 3 Oct 2014
From: Oleg Nesterov
Subject: Re: [tip:x86/asm] x86: Speed up ___preempt_schedule*() by using THUNK helpers
On 10/03, Linus Torvalds wrote:
>
> On Fri, Oct 3, 2014 at 5:01 PM, Linus Torvalds
> <torvalds@linux-foundation.org> wrote:
> >
> > The real fix would appear to be to use
> > "preempt_enable_no_resched_notrace()", which your patch did, but
> > without the loop.
>
> Actually, the real fix would be to not be stupid, and just make the
> code do something like
>
> > if (likely(!preemptible()))
> > return;
> >
> > __preempt_count_add(PREEMPT_ACTIVE);
> > prev_ctx = exception_enter();
> >
> > __schedule();
> >
> > exception_exit(prev_ctx);
> > __preempt_count_sub(PREEMPT_ACTIVE);
>
> and *not* enable preemption around the scheduling at all. The whole
> enable and then re-disable seems entirely broken, and comes from the
> code using "preempt_schedule()" which doesn't work while preemption is
> disabled. So don't do that then.
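
(For reference, folding the fragment above into a complete function would look roughly like the sketch below. The signature and the prev_ctx declaration are taken from the current preempt_schedule_context(); the body is just the quoted code, so treat it as an illustration, not a patch.)

/* Sketch only: Linus's fragment wrapped into a complete function. */
asmlinkage __visible void __sched notrace preempt_schedule_context(void)
{
	enum ctx_state prev_ctx;

	if (likely(!preemptible()))
		return;

	__preempt_count_add(PREEMPT_ACTIVE);
	/*
	 * Keep context tracking informed across __schedule() while
	 * preemption stays disabled the whole time, so there is no
	 * enable/re-disable window at all.
	 */
	prev_ctx = exception_enter();
	__schedule();
	exception_exit(prev_ctx);
	__preempt_count_sub(PREEMPT_ACTIVE);
}

Note that, like the fragment, this has no need_resched() retry loop, unlike the current preempt_schedule().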

Again, it is too late for me... I am most probably wrong, but it seems to me
that the real fix should try to kill preempt_schedule_context() altogether
and teach preempt_schedule() to play well with CONTEXT_TRACKING.
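
Roughly, something like the sketch below (illustration only, not a patch). IIRC
exception_enter()/exception_exit() already compile to no-ops when
CONFIG_CONTEXT_TRACKING is off, so a single function could cover both configs
and the separate preempt_schedule_context() wrapper could simply go away; the
do/while (need_resched()) loop and barrier() are just carried over from the
current preempt_schedule().

/* Sketch only: preempt_schedule() made context-tracking aware itself. */
asmlinkage __visible void __sched notrace preempt_schedule(void)
{
	enum ctx_state prev_ctx;

	if (likely(!preemptible()))
		return;

	do {
		__preempt_count_add(PREEMPT_ACTIVE);
		/* no-ops unless CONFIG_CONTEXT_TRACKING is enabled */
		prev_ctx = exception_enter();
		__schedule();
		exception_exit(prev_ctx);
		__preempt_count_sub(PREEMPT_ACTIVE);

		/*
		 * Check again in case we missed a preemption opportunity
		 * between schedule and now.
		 */
		barrier();
	} while (need_resched());
}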

Oleg.


