From: Oleg Nesterov
Date: 2014-10-08
Subject: [PATCH 0/2] (Was: sched: fix the PREEMPT_ACTIVE check in __trace_sched_switch_state())
On 10/08, Peter Zijlstra wrote:
>
> On Tue, Oct 07, 2014 at 09:50:46PM +0200, Oleg Nesterov wrote:
> > And note that another caller of task_preempt_count(), set_task_cpu(), is
> > fine, but it doesn't really need this helper either.
> >
> > And afaics we do not need ->saved_preempt_count at all, the trivial
> > patch below makes it unnecessary, we can kill it and all its users.
> >
> > Not only will this simplify the code, it will also make the per-cpu
> > preempt counter (well, almost) arch-agnostic.
> >
> > Or I missed something?
>
> Two things: per-cpu isn't always faster on some archs, and load-store
> archs have problems with PREEMPT_NEED_RESCHED, although arguably you
> could do a per-cpu preempt count without that.

Ah, but I didn't mean we should make it per-cpu on every arch.

I meant that (imo) this change can clean up the x86 code, and it can also
help if we want to switch another arch to a per-cpu preempt_count.
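
Just to recall what x86 does today (a simplified sketch from memory; see
arch/x86/include/asm/preempt.h and __switch_to(), the exact code may
differ in details):

	DECLARE_PER_CPU(int, __preempt_count);

	/*
	 * PREEMPT_NEED_RESCHED lives in the high bit of the same per-cpu
	 * word, stored inverted, so that preempt_enable() can check
	 * "count == 0 && need_resched" with a single decl-and-test of
	 * the whole word.
	 */
	static __always_inline int preempt_count(void)
	{
		return raw_cpu_read_4(__preempt_count) & ~PREEMPT_NEED_RESCHED;
	}

	/* __switch_to(): make the per-cpu counter look per-task */
	task_thread_info(prev_p)->saved_preempt_count =
		this_cpu_read(__preempt_count);
	this_cpu_write(__preempt_count,
		       task_thread_info(next_p)->saved_preempt_count);

On x86 that decl is a single irq-safe RMW instruction; a load-store arch
would have to do load/modify/store and could race with an irq folding
NEED_RESCHED into the same word in between, which iiuc is the problem you
mention above.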

> > Do you think this makes sense? If yes, I'll try to make the patches.
>
> It penalizes everything but x86, I think.

I don't think so.

But please forget that for the moment; let's discuss it later. Let me start
with two simple preparations which imho make sense anyway. Then we will see.

1/2 looks like the obvious bugfix (iirc we already discussed this a bit);
2/2 depends on it.
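
For reference, the gist of the 1/2 change in __trace_sched_switch_state()
is something like this (from memory; the exact hunk may differ):

	--- a/include/trace/events/sched.h
	+++ b/include/trace/events/sched.h
	@@ static inline long __trace_sched_switch_state(struct task_struct *p)
	 	/*
	 	 * For all intents and purposes a preempted task is a running task.
	 	 */
	-	if (task_preempt_count(p) & PREEMPT_ACTIVE)
	+	if (preempt_count() & PREEMPT_ACTIVE)
	 		state = TASK_RUNNING | TASK_STATE_MAX;

prev is still current when trace_sched_switch() is called, so the live
counter is the right thing to look at; ->saved_preempt_count is only
meaningful right after switch_to().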

Oleg.


