Subject: Re: [RFC PATCH] sched&net: avoid over-pulling tasks due to network interrupts
From: Eric Dumazet <>
Date: Mon, 8 Nov 2021 08:27:50 -0800
On 11/8/21 1:27 AM, Peter Zijlstra wrote:
> On Mon, Nov 08, 2021 at 07:08:09AM +1300, Barry Song wrote:
>> On Sat, Nov 6, 2021 at 1:25 AM Peter Zijlstra <peterz@infradead.org> wrote:
>>>
>>> On Fri, Nov 05, 2021 at 06:51:36PM +0800, Barry Song wrote:
>>>> From: Barry Song <song.bao.hua@hisilicon.com>
>>>>
>>>> In LPC2021, both Libo Chen and Tim Chen reported the over-pulling
>>>> of tasks by network interrupts[1]. For example, while running one
>>>> database with the ethernet card located in numa0, numa1 might be
>>>> almost idle because interrupts keep pulling tasks to numa0 via
>>>> wake-affine. I have seen the same problem. One way to solve it is
>>>> to move the network code to a normal wakeup rather than a sync
>>>> wakeup, which pulls tasks more aggressively in the scheduler core.
>>>>
>>>> On kunpeng920 with 4 NUMA nodes, the ethernet card is located at
>>>> numa0 and the storage disk at numa2. While using sysbench to
>>>> connect to this mysql machine, I am seeing numa1 idle even though
>>>> numa0, 2 and 3 are quite busy.
>>>>
>>>> I am not saying this patch is exactly the right approach, but I'd
>>>> like to use this RFC to connect the net and scheduler people and
>>>> start the discussion in this wider range.
>>>
>>> Well the normal way would be to use multi-queue crud and/or receive
>>> packet steering to get the interrupt/wakeup back to the cpu that data
>>> came from.
>>
>> The test case has been a multi-queue ethernet card whose irqs are
>> balanced to NUMA0 by irqbalance, or pinned to NUMA0 (where the card
>> is located) by a script like:
>>
>> #!/bin/bash
>> irq_list=(`cat /proc/interrupts | grep network_name | awk -F: '{print $1}'`)
>> cpunum=0
>> for irq in ${irq_list[@]}
>> do
>>         echo $cpunum > /proc/irq/$irq/smp_affinity_list
>>         echo `cat /proc/irq/$irq/smp_affinity_list`
>>         (( cpunum+=1 ))
>> done
>>
>> I have heard some people are working around this issue by pinning
>> multi-queue IRQs to multiple NUMA nodes, which spreads the interrupts
>> and avoids over-pulling tasks to one NUMA node only, but loses
>> ethernet locality?
>
> So you're doing explicitly the wrong thing with your script above and
> then complaining that the scheduler follows it and destroys your data
> locality?
>
> The network folks made RPS/RFS specifically to spread the processing of
> the packets back to the CPUs/Nodes the TX happened on to increase data
> locality. Why not use that?
>
+1
This documentation should describe how this can be done:

Documentation/networking/scaling.rst
Hopefully it is not completely outdated.
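For reference, scaling.rst configures RPS and RFS through a handful of
sysfs/procfs knobs. Below is a minimal sketch, assuming a 16-CPU machine
and a multi-queue device named eth0; the device name, CPU mask and table
sizes are illustrative assumptions, not values taken from this thread:

#!/bin/bash
# Sketch only: enable RPS/RFS as described in
# Documentation/networking/scaling.rst. "eth0", the "ffff" mask
# (CPUs 0-15) and the 32768 table size are illustrative.
dev=eth0

# RFS: size the global flow table (scaling.rst suggests 32768
# for a moderately loaded server).
echo 32768 > /proc/sys/net/core/rps_sock_flow_entries

nqueues=$(ls -d /sys/class/net/$dev/queues/rx-* | wc -l)

for rxq in /sys/class/net/$dev/queues/rx-*
do
        # RPS: hex bitmap of CPUs allowed to do packet processing
        # for this RX queue.
        echo ffff > $rxq/rps_cpus
        # RFS: per-queue flow count; scaling.rst suggests
        # rps_sock_flow_entries / N for an N-queue device.
        echo $((32768 / nqueues)) > $rxq/rps_flow_cnt
done

With rps_flow_cnt set, RFS steers each flow's receive processing toward
the CPU where the consuming application last ran, restoring the locality
that pinning every IRQ to NUMA0 destroys, which is Peter's point above.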