Subject: Re: [PATCH 5/6] perf: Optimise topology iteration
From: Lin Ming <>
Date: Mon, 21 Feb 2011 11:29:24 +0800
On Mon, 2011-02-21 at 05:15 +0800, Andi Kleen wrote:
> On Mon, Feb 21, 2011 at 12:57:39AM +0800, Lin Ming wrote:
> > Currently we iterate the full machine looking for a matching core_id/nb
> > for the percore and the amd northbridge stuff, using a smaller topology
> > mask makes sense.
>
> This is still wrong for CPU hotplug. The CPU "owning" the per core
> does not necessarily need to be online anymore.
This remains an issue for the hotplug case, no matter whether we use for_each_online_cpu or topology_thread_cpumask.
> Please drop this patch.
Looking at the code again, I think for_each_online_cpu is wrong for percore; we should use topology_thread_cpumask instead.
	for_each_online_cpu(i) {
		struct intel_percore *pc = per_cpu(cpu_hw_events, i).per_core;

		if (pc && pc->core_id == core_id) {
			kfree(cpuc->per_core);
			cpuc->per_core = pc;
			break;
		}
	}
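Something like the sketch below is what I have in mind (rough and untested, not the final patch; it assumes the same per_cpu(cpu_hw_events, i).per_core layout as above and that "cpu" is the logical CPU currently being brought up):

	/*
	 * Sketch: only search the SMT siblings of the incoming CPU,
	 * instead of every online CPU, so a matching core_id on
	 * another socket can never be picked up.
	 */
	for_each_cpu(i, topology_thread_cpumask(cpu)) {
		struct intel_percore *pc = per_cpu(cpu_hw_events, i).per_core;

		if (pc && pc->core_id == core_id) {
			kfree(cpuc->per_core);
			cpuc->per_core = pc;
			break;
		}
	}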
Assume 2 sockets,
	//socket 0
	cpu 0: core_id 0
	cpu 1: core_id 0

	//socket 1
	cpu 2: core_id 0
	cpu 3: core_id 0
If for_each_online_cpu is used, all 4 logical cpus end up sharing the same percore, because the search also matches core_id 0 on the other socket. This is wrong.
If topology_thread_cpumask is used, then cpu 0 and cpu 1 share one percore and cpu 2 and cpu 3 share another percore. This is what we want.
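In this (made-up) 2-socket example, with 2 threads per core, the thread sibling masks would be:

	topology_thread_cpumask(0) == topology_thread_cpumask(1) == { 0, 1 }
	topology_thread_cpumask(2) == topology_thread_cpumask(3) == { 2, 3 }

so the lookup can never cross the socket boundary even though all four cpus report core_id 0.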
Lin Ming
>
> -Andi