Subject: Re: [PATCH] x86: enable preemption in delay

On Sun, 25 May 2008, Thomas Gleixner wrote:
> > -	preempt_disable();	/* TSC's are per-cpu */
> > +	preempt_disable();
> > +	cpu = smp_processor_id();
> >  	rdtscl(bclock);
> >  	do {
> >  		rep_nop();
> >  		rdtscl(now);
> > +		/* Allow RT tasks to run */
> > +		preempt_enable();
> > +		preempt_disable();
> > +		/*
> > +		 * It is possible that we moved to another CPU,
> > +		 * and since TSC's are per-cpu we need to
> > +		 * calculate that. The delay must guarantee that
> > +		 * we wait "at least" the amount of time. Being
> > +		 * moved to another CPU could make the wait longer
> > +		 * but we just need to make sure we waited long
> > +		 * enough. Rebalance the counter for this CPU.
> > +		 */
> > +		if (unlikely(cpu != smp_processor_id())) {
>
> Eeek, once you migrated you do this all the time. you need to update
> cpu here.

Good catch! I'll update that.

>
> > +		if ((now-bclock) >= loops)
> > +			break;
>
> Also this is really dangerous with unsynchronized TSCs. You might get
> migrated and return immediately because the TSC on the other CPU is
> far ahead.

No it isn't ;-)

The now and bclock values are both from before the migration. The cpus were
the same because we were under preempt_disable() at the time. I recalculate
after the change has been noticed.

But you are right, I forgot to update cpu. :-/
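
Something like this is what I have in mind (just a sketch, not the updated
patch; the loop is restructured here so that "now" is always re-read on the
current CPU before the exit test, and cpu gets updated when a migration is
noticed):

static void delay_tsc(unsigned long loops)
{
	unsigned long bclock, now;
	int cpu;

	preempt_disable();
	cpu = smp_processor_id();
	rdtscl(bclock);
	for (;;) {
		/* now and bclock always come from the same CPU here */
		rdtscl(now);
		if ((now - bclock) >= loops)
			break;

		/* Allow RT tasks to run */
		preempt_enable();
		rep_nop();
		preempt_disable();

		/*
		 * If we migrated, both now and bclock were read on the
		 * old CPU, so (now - bclock) is still a valid amount of
		 * waited time. Credit it against loops and restart the
		 * measurement from the new CPU's TSC.
		 */
		if (unlikely(cpu != smp_processor_id())) {
			loops -= (now - bclock);
			cpu = smp_processor_id();
			rdtscl(bclock);
		}
	}
	preempt_enable();
}

Since the rebalance only subtracts time measured on the old CPU and then
restarts the measurement against the new CPU's TSC, an unsynchronized TSC
can only make the wait longer, never let it return early.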

>
> What you really want is something like the patch below, but we should
> reuse the sched_clock_cpu() thingy to make that simpler. Looking into
> that right now.
>

Sure, but this should be simple enough.

-- Steve


