From: Pratyush Yadav <>
Subject: Re: [PATCH] nvme-pci: do not set the NUMA node of device if it has none
Date: Wed, 26 Jul 2023 17:30:33 +0200
Hi all,

On Wed, Jul 26 2023, Christoph Hellwig wrote:
> On Wed, Jul 26, 2023 at 10:58:36AM +0300, Sagi Grimberg wrote:
>>>> For example, AWS EC2's i3.16xlarge instance does not expose NUMA
>>>> information for the NVMe devices. This means all NVMe devices have
>>>> NUMA_NO_NODE by default. Without this patch, random 4k read performance
>>>> measured via fio on CPUs from node 1 (around 165k IOPS) is almost 50%
>>>> less than CPUs from node 0 (around 315k IOPS). With this patch, CPUs on
>>>> both nodes get similar performance (around 315k IOPS).
>>>
>>> irqbalance doesn't work with this driver though: the interrupts are
>>> managed by the kernel. Is there some other reason to explain the perf
>>> difference?
Hmm, I did not know that. I have not gone and looked at the code, but I think the same reasoning should hold, just with s/irqbalance/the kernel/. If the kernel's IRQ affinity logic sees the device is on node 0, it will deliver its interrupts to CPUs on node 0.
In my tests I can see that the interrupts for NVMe queues are sent only to CPUs from node 0 without this patch. With this patch, CPUs from both nodes get the interrupts.
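For reference, this is roughly how nvme-pci requests kernel-managed interrupts (paraphrased from drivers/nvme/host/pci.c; the exact code varies across kernel versions). The affinity of these vectors is computed by the kernel when they are allocated, which is why irqbalance cannot move them afterwards:

	/*
	 * One pre-vector for the admin queue; the I/O queue vectors get
	 * kernel-managed affinity spread across the CPUs.
	 */
	struct irq_affinity affd = {
		.pre_vectors	= 1,
	};
	int result;

	result = pci_alloc_irq_vectors_affinity(pdev, 1, nr_io_queues + 1,
			PCI_IRQ_ALL_TYPES | PCI_IRQ_AFFINITY, &affd);
	if (result < 0)
		return result;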
>>
>> Maybe its because the numa_node goes to the tagset which allocates
>> stuff based on that numa-node ?
>
> Yeah, the only explanation I could come up with is that without this
> the allocations gets spread, and that somehow helps. All of this
> is a little obscure, but so is the NVMe practice of setting the node id
> to first_memory_node, which no other driver does. I'd really like to
> understand what's going on here first. After that this patch probably
> is the right thing, I'd just like to understand why.
See above for my conjecture on why this happens.
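For anyone not staring at the code: the first_memory_node practice Christoph mentions lives in nvme_probe(), and the resulting node is what later ends up in the tagset. Roughly (paraphrased, not the exact hunk):

	node = dev_to_node(&pdev->dev);
	if (node == NUMA_NO_NODE)
		set_dev_node(&pdev->dev, first_memory_node);

	/* ... and later, when the I/O tagset is set up: */
	dev->tagset.numa_node = dev_to_node(dev->dev);

The patch drops the fallback, so the device keeps NUMA_NO_NODE and the tagset inherits it.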
More specifically, I discovered this when running an application pinned to a node 1 CPU that reads from an NVMe device. I noticed it was performing worse than when pinned to node 0.
If the process is free to move around, it might not see such a large performance difference, since it could migrate to a node 0 CPU. But if it is pinned to a CPU on node 1, the interrupt will always hit a node 0 CPU, adding latency to every read completion.
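To illustrate the allocation side of Sagi's conjecture with a toy example (alloc_queue_buffer is a hypothetical helper, not actual nvme code): a concrete node pins every allocation to that node, while NUMA_NO_NODE lets the allocator place memory near the CPU doing the allocating.

	#include <linux/numa.h>
	#include <linux/slab.h>

	/*
	 * Hypothetical helper: node == 0 pins the buffer to node 0 (remote
	 * for CPUs on node 1); node == NUMA_NO_NODE places it on the node
	 * local to whichever CPU runs the allocation.
	 */
	static void *alloc_queue_buffer(int node)
	{
		return kzalloc_node(PAGE_SIZE, GFP_KERNEL, node);
	}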
I have a simple fio job that reproduces this. Save the config below [1] as fio.txt and then run:

	numactl --cpunodebind 1 fio ./fio.txt

You can run it on any host with an NVMe device that has no NUMA node. I have tested this on AWS EC2's i3.16xlarge instance type.
[1]
[global]
ioengine=libaio
filename=/dev/nvme0n1
group_reporting=1
direct=1
verify=0
norandommap=0
size=10%
time_based=1
runtime=30
ramp_time=0
randrepeat=0
log_max_value=1
unified_rw_reporting=1
percentile_list=50:99:99.9:99.99:99.999
bwavgtime=10000

[4k_randread_qd16_4w]
stonewall
bs=4k
rw=randread
iodepth=32
numjobs=1
--
Regards,
Pratyush Yadav
Amazon Development Center Germany GmbH
Krausenstr. 38
10117 Berlin
Geschaeftsfuehrung: Christian Schlaeger, Jonathan Weiss
Eingetragen am Amtsgericht Charlottenburg unter HRB 149173 B
Sitz: Berlin
Ust-ID: DE 289 237 879