From: Erich Focht <>
Subject: Re: Externalize SLIT table
Date: Fri, 5 Nov 2004 18:13:24 +0100
Hi Jack,
the patch looks fine, of course.

> # cat ./node/node0/distance
> 10 20 64 42 42 22

Great!
But:

> # cat ./cpu/cpu8/distance
> 42 42 64 64 22 22 42 42 10 10 20 20 ...
What exactly do you mean by cpu_to_cpu distance? In analogy with the node distance I'd say it is the time (latency) for moving data from the register of one CPU into the register of another CPU:

  cpu*/distance :    cpu  ->  memory  ->  cpu
                    node1     node?      node2
On most architectures this means flushing a cacheline to memory on one side and reading it on the other side. What you actually implement is the latency from memory (on one node) to a particular cpu (on some node):

  memory  ->  cpu
   node1      node2
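To make the distinction concrete, here is a minimal sketch in C of what a real cpu -> memory -> cpu distance could look like if derived from the node distance matrix. The names slit[][], nr_nodes and node_of_cpu() are my assumptions for illustration, and taking the cheapest memory node is just one possible convention:

#include <limits.h>

#define MAX_NODES 8

/* assumed: a userspace copy of the SLIT node distance matrix */
extern int slit[MAX_NODES][MAX_NODES];
extern int nr_nodes;

/* assumed: mapping from a cpu number to the node it sits on */
extern int node_of_cpu(int cpu);

/*
 * cpu -> memory -> cpu: the data goes through memory on some
 * node m, so add the two legs and take the cheapest m.
 */
int cpu_to_cpu_distance(int cpu1, int cpu2)
{
        int n1 = node_of_cpu(cpu1), n2 = node_of_cpu(cpu2);
        int m, best = INT_MAX;

        for (m = 0; m < nr_nodes; m++)
                if (slit[n1][m] + slit[m][n2] < best)
                        best = slit[n1][m] + slit[m][n2];
        return best;
}

What cpu*/distance exports instead is only the single leg from each node's memory to the cpu, i.e. one row of the matrix per cpu.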
That's only half of the story, and actually misleading. I don't think hiding the complexity is a good idea here. Questions that come to my mind: Where is the memory? Is the SLIT matrix really symmetric (a cpu_to_cpu distance only makes sense for a symmetric matrix)? I remember talking to IBM people about hardware where the node distance matrix was asymmetric.
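At least the symmetry question is easy to check from userspace with the files your patch exports; a rough sketch, with the sysfs path /sys/devices/system/node/node*/distance and the node count assumed:

#include <stdio.h>

#define MAX_NODES 8

/* read the node distance matrix from sysfs, report asymmetries */
int main(void)
{
        int d[MAX_NODES][MAX_NODES], n = 0, i, j;
        char path[64];
        FILE *f;

        for (i = 0; i < MAX_NODES; i++) {
                snprintf(path, sizeof(path),
                         "/sys/devices/system/node/node%d/distance", i);
                if (!(f = fopen(path, "r")))
                        break;
                for (j = 0; j < MAX_NODES; j++)
                        if (fscanf(f, "%d", &d[i][j]) != 1)
                                break;
                fclose(f);
                n++;
        }
        for (i = 0; i < n; i++)
                for (j = i + 1; j < n; j++)
                        if (d[i][j] != d[j][i])
                                printf("asymmetric: %d->%d=%d, %d->%d=%d\n",
                                       i, j, d[i][j], j, i, d[j][i]);
        return 0;
}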
Why do you want this distance anyway? libnuma offers you _node_ masks for allocating memory from a particular node. And when you want to arrange a complex MPI process structure, you'll have to think about the latency for moving data from one process's buffer to the other process's buffer. The buffers live on nodes, not on cpus.
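For that kind of placement libnuma is already enough; a minimal sketch using the standard numa(3) calls (numa_alloc_onnode(), numa_run_on_node(); build with -lnuma), with the node numbers and buffer size made up:

#include <numa.h>
#include <stdio.h>

int main(void)
{
        char *buf;

        if (numa_available() < 0) {
                fprintf(stderr, "no NUMA support\n");
                return 1;
        }

        /* put the message buffer into node 1's memory ... */
        buf = numa_alloc_onnode(1 << 20, 1);
        if (!buf)
                return 1;

        /* ... and run this process on node 2's cpus */
        numa_run_on_node(2);

        /* access latency is now governed by the node1/node2 distance */
        buf[0] = 42;

        numa_free(buf, 1 << 20);
        return 0;
}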
Regards,
Erich