Date: 2023-07-26
Subject: Re: [PATCH] nvme-pci: do not set the NUMA node of device if it has none

>> If a device has no NUMA node information associated with it, the driver
>> puts the device in node first_memory_node (say node 0). As a side
>> effect, this tells userspace IRQ-balancing programs that the device is
>> in node 0, so they prefer CPUs in node 0 to handle the IRQs associated
>> with the queues. For example, irqbalance will only let CPUs in node 0
>> handle the interrupts. This reduces random-access performance on CPUs
>> in node 1, since the interrupt for command completion fires on node 0.
>>
>> For example, AWS EC2's i3.16xlarge instance does not expose NUMA
>> information for the NVMe devices. This means all NVMe devices have
>> NUMA_NO_NODE by default. Without this patch, random 4k read performance
>> measured via fio on CPUs from node 1 (around 165k IOPS) is almost 50%
>> lower than on CPUs from node 0 (around 315k IOPS). With this patch,
>> CPUs on both nodes get similar performance (around 315k IOPS).
>
> irqbalance doesn't work with this driver though: the interrupts are
> managed by the kernel. Is there some other reason to explain the perf
> difference?
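
The interrupts are indeed kernel-managed here: nvme-pci allocates its
vectors with PCI_IRQ_AFFINITY, so the kernel spreads them across CPUs
and userspace cannot rebind them. Roughly (a paraphrased sketch of
nvme_setup_irqs() in drivers/nvme/host/pci.c, not the exact upstream
code):

	struct irq_affinity affd = {
		.pre_vectors	= 1,	/* vector reserved for the admin queue */
		.calc_sets	= nvme_calc_irq_sets,
		.priv		= dev,
	};

	/*
	 * PCI_IRQ_AFFINITY makes these managed IRQs: the kernel picks
	 * and pins the CPU affinity, and irqbalance cannot change it.
	 */
	pci_alloc_irq_vectors_affinity(pdev, 1, nr_io_queues + 1,
			PCI_IRQ_ALL_TYPES | PCI_IRQ_AFFINITY, &affd);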

Maybe it's because the numa_node goes to the tagset, which allocates
stuff based on that NUMA node?
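
If so, the node assignment would flow roughly like this (a paraphrased
sketch of the pre-patch paths, not the exact upstream source):

	/* nvme_probe(): the fallback this patch removes */
	int node = dev_to_node(&pdev->dev);
	if (node == NUMA_NO_NODE)	/* no NUMA info from firmware */
		set_dev_node(&pdev->dev, first_memory_node);

	/*
	 * Tag set setup: the device node is inherited by the tag set,
	 * so blk-mq allocates tags, requests and other per-queue
	 * structures from that node's memory.
	 */
	dev->tagset.numa_node = dev_to_node(dev->dev);
	blk_mq_alloc_tag_set(&dev->tagset);

With the fallback gone, numa_node stays NUMA_NO_NODE and those
allocations get no node preference instead of all being pinned to
node 0.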
