 
Subject: Re: [PATCH 2/2] sched/fair: Adjust the allowed NUMA imbalance when SD_NUMA spans multiple LLCs
On Sat, Dec 04, 2021 at 12:14:33AM +1300, Barry Song wrote:
> > > Hi Mel, you used to have 25% * numa_weight if a node has only one LLC.
> > > For a system with 4 NUMA nodes, in case sd spans 2 nodes and the child is
> > > 1 NUMA node, then nr_groups=2, num_online_nodes()=4, and imb_numa_nr will
> > > be child->span_weight/2/2/4?
> > >
> > > Does this patch change the behaviour for machines where a NUMA node equals an LLC?
> > >
> >
> > Yes, it changes behaviour. Instead of a flat 25%, it takes into account
> > the number of LLCs per node and the number of nodes overall.
>
> Considering the number of nodes overall seems quite weird to me.
> For example, take the machines below:
>
> 1P * 2DIE = 2NUMA: node1 - node0
> 2P * 2DIE = 4NUMA: node1 - node0 ------ node2 - node3
> 4P * 2DIE = 8NUMA: node1 - node0 ------ node2 - node3
>                    node5 - node4 ------ node6 - node7
>
> If one service pins node1 and node0 in all of the above configurations, it
> seems the app will end up with different behaviour on each machine.
>

The intent is to balance between LLCs across the whole machine, hence
accounting for the number of online nodes.
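
For illustration only, here is a rough sketch in C of the arithmetic being
discussed, not the patch itself; the function and parameter names are
placeholders, and the divisors are taken from the example quoted above:

/*
 * Sketch of the quoted calculation -- not the actual kernel code.
 * Previously the allowed imbalance was a flat 25% of the NUMA domain
 * weight; the example above instead divides the child domain weight
 * by 2, by the number of LLC groups per node and by num_online_nodes()
 * (i.e. child->span_weight/2/2/4 for 2 LLC groups and 4 nodes).
 */
static unsigned int imb_numa_nr_sketch(unsigned int child_span_weight,
				       unsigned int nr_llc_groups,
				       unsigned int nr_online_nodes)
{
	return child_span_weight / 2 / nr_llc_groups / nr_online_nodes;
}

For the quoted topology (2 LLC groups per node, 4 online nodes) that works
out to span_weight/16, versus span_weight/4 under the old flat 25% rule,
which is why the result differs between differently sized machines.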

> The other example is:
> In a 2P machine, if one app pins the first two NUMA nodes and the other app
> pins the last two, why would num_online_nodes() matter to them?
> There is no balancing requirement between the two packages.
>

The previous 25% imbalance also did not take pinning into account and
the choice was somewhat arbitrary.

--
Mel Gorman
SUSE Labs
