Subject: Re: [PATCH 4/4] sched/numa: Adjust imb_numa_nr to a better approximation of memory channels
On Wed, May 18, 2022 at 11:41:12AM +0200, Peter Zijlstra wrote:
> On Wed, May 11, 2022 at 03:30:38PM +0100, Mel Gorman wrote:
> > For a single LLC per node, a NUMA imbalance is allowed up until 25%
> > of CPUs sharing a node could be active. One intent of the cut-off is
> > to avoid an imbalance of memory channels but there is no topological
> > information based on active memory channels. Furthermore, there can
> > be differences between nodes depending on the number of populated
> > DIMMs.
> >
> > A cut-off of 25% was arbitrary but generally worked. It does have severe
> > corner cases though when a parallel workload using 25% of all available
> > CPUs over-saturates memory channels. This can happen due to the initial
> > forking of tasks that get pulled more to one node after early wakeups
> > (e.g. a barrier synchronisation), an imbalance that is not quickly
> > corrected by the load balancer. The LB may fail to act quickly as the
> > parallel tasks are considered to be poor migrate candidates due to
> > locality or cache hotness.
> >
> > On a range of modern Intel CPUs, 12.5% appears to be a better cut-off
> > assuming all memory channels are populated and is used as the new cut-off
> > point. A minimum of 1 is specified to allow a communicating pair to
> > remain local even for CPUs with low numbers of cores. For modern AMDs,
> > there are multiple LLCs and they are not affected.
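
As a quick worked example of the cut-off change described above (the 64-CPU
figure is purely illustrative): on a node with 64 CPUs, the old 25% cut-off
tolerates an imbalance of up to 64/4 = 16 tasks before the load balancer
steps in, while the new 12.5% cut-off tolerates max(1, 64/8) = 8.

$ cpus=64
$ echo "old 25%: $((cpus / 4))  new 12.5%: $((cpus / 8 < 1 ? 1 : cpus / 8))"
old 25%: 16  new 12.5%: 8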
>
> Can the hardware tell us about memory channels?

It's in the SMBIOS table somewhere as it's available via dmidecode. For
example, on a 2-socket machine:

$ dmidecode -t memory | grep -E "Size|Bank"
Size: 8192 MB
Bank Locator: P0_Node0_Channel0_Dimm0
Size: No Module Installed
Bank Locator: P0_Node0_Channel0_Dimm1
Size: 8192 MB
Bank Locator: P0_Node0_Channel1_Dimm0
Size: No Module Installed
Bank Locator: P0_Node0_Channel1_Dimm1
Size: 8192 MB
Bank Locator: P0_Node0_Channel2_Dimm0
Size: No Module Installed
Bank Locator: P0_Node0_Channel2_Dimm1
Size: 8192 MB
Bank Locator: P0_Node0_Channel3_Dimm0
Size: No Module Installed
Bank Locator: P0_Node0_Channel3_Dimm1
Size: 8192 MB
Bank Locator: P1_Node1_Channel0_Dimm0
Size: No Module Installed
Bank Locator: P1_Node1_Channel0_Dimm1
Size: 8192 MB
Bank Locator: P1_Node1_Channel1_Dimm0
Size: No Module Installed
Bank Locator: P1_Node1_Channel1_Dimm1
Size: 8192 MB
Bank Locator: P1_Node1_Channel2_Dimm0
Size: No Module Installed
Bank Locator: P1_Node1_Channel2_Dimm1
Size: 8192 MB
Bank Locator: P1_Node1_Channel3_Dimm0
Size: No Module Installed
Bank Locator: P1_Node1_Channel3_Dimm1

SMBIOS contains information on the number of channels and whether they
are populated with at least one DIMM.
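
For illustration, the populated channels per socket can be counted from that
output. A rough sketch (the field splitting assumes the Bank Locator naming
shown above, which is vendor-specific, and the MB/GB sizes dmidecode prints):

$ dmidecode -t memory | awk '
	/^[[:space:]]*Size:/ { populated = /[0-9]+ [MG]B/ }  # module present vs "No Module Installed"
	/Bank Locator:/ && populated {
		split($3, f, "_")                             # e.g. P0_Node0_Channel0_Dimm0
		chan[f[1] "_" f[3]] = 1                       # one entry per channel with >= 1 DIMM
	}
	END {
		for (c in chan) { split(c, g, "_"); socket[g[1]]++ }
		for (s in socket) print s ": " socket[s] " populated channels"
	}' | sort
P0: 4 populated channels
P1: 4 populated channels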

I'm not aware of how it can be done in-kernel on a cross-architectural
basis. Reading through the arch manual, it states how many channels are
in a given processor family and the information is available during memory
check errors (apparently via the EDAC driver). It's sometimes available via
PMUs but I couldn't find a place where it's generically available for
topology.c in a way that would work on all x86-64 machines, let alone every
other architecture.

It's not even clear that parsing SMBIOS in early boot would be a good
idea. It could result in different imbalance thresholds for each NUMA
domain, or weird corner cases where asymmetric NUMA node populations
result in run-to-run variance that is difficult to analyse.

--
Mel Gorman
SUSE Labs
