Subject: Re: [PATCH] mm: add node physical memory range to sysfs
On Wed, 2012-12-12 at 20:49 -0800, Dave Hansen wrote:
> On 12/12/2012 06:03 PM, Davidlohr Bueso wrote:
> > On Wed, 2012-12-12 at 17:48 -0800, Dave Hansen wrote:
> >> But if we went and did it per-DIMM (showing which physical addresses and
> >> NUMA nodes a DIMM maps to), wouldn't that be redundant with this
> >> proposed interface?
> >
> > If DIMMs overlap between nodes, then we wouldn't have an exact range for
> > the node in question. The two approaches would complement each other.
>
> How is that possible? If NUMA nodes are defined by distances from CPUs
> to memory, how could a DIMM have more than a single distance to any
> given CPU?

Can't this occur when interleaving emulated nodes with physical ones?

>
> >> How do you plan to use this in practice, btw?
> >
> > It started because I needed to determine a node's address range in order
> > to remove it from the e820 mappings and have the system "ignore" the
> > node's memory.
>
> Actually, now that I think about it, can you check in the
> /sys/devices/system/ directories for memory and nodes? We have linkages
> there for each memory section to every NUMA node, and you can also
> derive the physical address from the phys_index in each section. That
> should allow you to work out physical addresses for a given node.
>

I had looked at the memory-hotplug interface, but 'phys_index' only covers
present memory sections and so doesn't include holes, while
->node_spanned_pages does.
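For reference, here is a minimal userspace sketch (untested, my own
illustration rather than anything in-tree) of the lookup you describe,
assuming the standard sysfs layout and using node0 as an example; a
section's start address is just phys_index times the global memory block
size:

#include <ctype.h>
#include <dirent.h>
#include <stdio.h>
#include <string.h>

int main(void)
{
	/* Memory block size in bytes, exported by the kernel as hex. */
	unsigned long long block_size, phys_index;
	char path[256];
	struct dirent *de;
	FILE *f;
	DIR *d;

	f = fopen("/sys/devices/system/memory/block_size_bytes", "r");
	if (!f || fscanf(f, "%llx", &block_size) != 1)
		return 1;
	fclose(f);

	/* The memoryX entries under a node are symlinks back into
	 * /sys/devices/system/memory. */
	d = opendir("/sys/devices/system/node/node0");
	if (!d)
		return 1;

	while ((de = readdir(d)) != NULL) {
		if (strncmp(de->d_name, "memory", 6) != 0 ||
		    !isdigit((unsigned char)de->d_name[6]))
			continue;

		snprintf(path, sizeof(path),
			 "/sys/devices/system/node/node0/%s/phys_index",
			 de->d_name);
		f = fopen(path, "r");
		if (!f)
			continue;
		/* Only *present* sections have a memoryX directory, so
		 * holes in the node's span never appear here. */
		if (fscanf(f, "%llx", &phys_index) == 1)
			printf("%s: starts at 0x%llx\n", de->d_name,
			       phys_index * block_size);
		fclose(f);
	}
	closedir(d);
	return 0;
}

Since only present sections get a memoryX entry, walking these links
can't recover the node's full spanned range, which is what I was after.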

Thanks,
Davidlohr


