 
From: Mel Gorman
Subject: [PATCH v4 0/2] Adjust NUMA imbalance for multiple LLCs
Date: 10 Dec 2021
Changelog since V3
o Calculate imb_numa_nr for multiple SD_NUMA domains
o Restore behaviour where communicating pairs remain on the same node

Commit 7d2b5dd0bcc4 ("sched/numa: Allow a floating imbalance between NUMA
nodes") allowed an imbalance between NUMA nodes such that communicating
tasks would not be pulled apart by the load balancer. This works fine when
there is a 1:1 relationship between LLC and node, but it can be suboptimal
when a node contains multiple LLCs, as independent tasks may prematurely
be stacked on CPUs sharing a cache instead of spreading across LLCs.
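
To illustrate the idea only (a userspace sketch, not the kernel code; the
function name, 25% threshold and example values are assumptions), the
floating imbalance amounts to tolerating an imbalance while the destination
node is running far fewer tasks than it has CPUs, so communicating tasks
are left together:

#include <stdbool.h>
#include <stdio.h>

/*
 * Illustrative sketch: tolerate imbalance between NUMA nodes while the
 * destination node runs fewer tasks than a fraction of its CPUs.  The
 * real heuristic lives in kernel/sched/fair.c; the cut-off here is an
 * assumption for demonstration.
 */
static bool allow_numa_imbalance(int dst_running, int dst_weight)
{
	/* Allow imbalance while fewer than 25% of the CPUs are busy. */
	return dst_running < (dst_weight >> 2);
}

int main(void)
{
	/* Hypothetical node with 128 CPUs and 20 running tasks. */
	printf("allow imbalance: %d\n", allow_numa_imbalance(20, 128));
	return 0;
}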

The series addresses two problems -- inconsistent use of scheduler domain
weights and sub-optimal performance when there are many LLCs per NUMA node.
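
The second patch ties the allowed imbalance to the number of LLCs the
SD_NUMA domain spans.  A minimal sketch of that idea follows, with an
assumed, simplified formula and example topology rather than the patch's
exact calculation:

#include <stdio.h>

/*
 * Hypothetical sketch: derive an allowed NUMA imbalance (imb_numa_nr)
 * from the number of LLCs the NUMA domain spans instead of a fixed
 * fraction of the node's CPUs.  The exact formula is defined in the
 * patches; the values below are assumptions for illustration only.
 */
static int calc_imb_numa_nr(int numa_weight, int llc_weight)
{
	int nr_llcs = numa_weight / llc_weight;

	/*
	 * With a single LLC per node, fall back to a fraction of the
	 * node's CPUs; with multiple LLCs, scale the allowed imbalance
	 * with the LLC count so independent tasks spread across caches
	 * sooner.
	 */
	if (nr_llcs <= 1)
		return numa_weight >> 2;
	return nr_llcs;
}

int main(void)
{
	/* Example: a node with 64 CPUs split into 8 LLCs of 8 CPUs each. */
	printf("imb_numa_nr = %d\n", calc_imb_numa_nr(64, 8));
	return 0;
}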

Mel Gorman (2):
  sched/fair: Use weight of SD_NUMA domain in find_busiest_group
  sched/fair: Adjust the allowed NUMA imbalance when SD_NUMA spans
    multiple LLCs

 include/linux/sched/topology.h |  1 +
 kernel/sched/fair.c            | 36 +++++++++++++++++----------------
 kernel/sched/topology.c        | 37 ++++++++++++++++++++++++++++++++++
 3 files changed, 57 insertions(+), 17 deletions(-)

--
2.31.1
