Date: Mon, 11 Jun 2007
From: Paul E. McKenney <paulmck@linux.vnet.ibm.com>
Subject: Re: v2.6.21.4-rt11

On Mon, Jun 11, 2007 at 09:36:34AM +0200, Ingo Molnar wrote:
>
> * Paul E. McKenney <paulmck@linux.vnet.ibm.com> wrote:
>
> > 2.6.21.4-rt12 boots on 4-CPU Opteron and passes several hours of
> > rcutorture. However, if I simply do "modprobe rcutorture", the kernel
> > threads do not spread across the CPUs as I would expect them to, even
> > given CFS. Instead, the readers all stack up on a single CPU, and I
> > have to use the "taskset" command to spread them out manually. Is
> > there some config parameter I am missing out on?
>
> hm, what affinity do they start out with? Could they all be pinned to
> CPU#0 by default?

They start off with affinity masks of 0xf on a 4-CPU system. I would
expect them to load-balance across the four CPUs, but they all stay on
the same CPU until long after I lose patience (many minutes).
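
For reference, the starting masks can be checked with something like the
sketch below; the truncated kthread name "rcu_torture_rea" is an
assumption on my part, so adjust it to whatever ps shows, and the exact
taskset output wording may differ across versions:

	# Sketch: print each reader kthread's current affinity mask
	# (expect "f" on a 4-CPU box).  The comm name is assumed to be
	# the truncated "rcu_torture_rea".
	for pid in $(pgrep rcu_torture_rea); do
		taskset -p "$pid"
	done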

Since there are eight readers, I use the following commands:

taskset -p 3 pid1
taskset -p 3 pid2
taskset -p 6 pid3
taskset -p 6 pid4
taskset -p c pid5
taskset -p c pid6
taskset -p 9 pid7
taskset -p 9 pid8

where the "pidn" are all replaced by the pids of the torture readers.

Before I do this, the processes are all sharing a single CPU. After I
do this, they are spread reasonably nicely over the CPUs. I do need to
allow some migration in order to fully test the realtime RCU variants
in the various preemption scenarios.
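
One way to watch the spread (again assuming the kthread name above) is
to look at the last-run CPU of each thread; "psr" is the processor field
in ps:

	# Sketch: list pid, last-run CPU, and name for the rcutorture kthreads.
	ps -eo pid,psr,comm | grep rcu_torture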

Thanx, Paul
