Date: Mon, 17 Apr 2023 16:05:18 +0800
From: Ming Lei <>
Subject: Re: [PATCH v2] nvme/tcp: Add support to set the tcp worker cpu affinity
On Mon, Apr 17, 2023 at 03:50:46PM +0800, Li Feng wrote:
> 
> > On Apr 17, 2023, at 15:37, Ming Lei <ming.lei@redhat.com> wrote:
> > 
> > On Thu, Apr 13, 2023 at 09:29:41PM +0800, Li Feng wrote:
> >> The default worker affinity policy is to use all online cpus, e.g. from 0
> >> to N-1. However, some cpus are busy with other jobs, so nvme-tcp will
> >> have bad performance.
> > 
> > Can you explain in detail how nvme-tcp performs worse in this situation?
> > 
> > If some of the CPUs are known to be busy, you can submit the nvme-tcp io
> > jobs on other, non-busy CPUs via taskset, or the scheduler is supposed to
> > choose proper CPUs for you. And usually an nvme-tcp device should be
> > saturated with a limited io depth or number of jobs/cpus.
> > 
> > 
> > Thanks,
> > Ming
> 
> Taskset can't work on nvme-tcp io-queues, because the worker cpu is decided
> at the nvme-tcp 'connect' stage, not at the io submission stage. Assume
> there is only one io-queue and its bound cpu is CPU0: the io work stays on
> CPU0 no matter which cpus the io jobs run on.
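For reference, queue->io_cpu is picked once per queue when the queue is set
up, not per IO. A simplified sketch of that assignment (modeled on
nvme_tcp_set_queue_io_cpu() in drivers/nvme/host/tcp.c; the real code also
offsets read and poll queues, and details vary by kernel version):

	static void nvme_tcp_set_queue_io_cpu(struct nvme_tcp_queue *queue)
	{
		/* simplified: the real code offsets n by queue type */
		int n = nvme_tcp_queue_id(queue) - 1;

		/*
		 * Pick the n-th online cpu, wrapping around. This is fixed
		 * for the lifetime of the queue, so a later taskset of the
		 * submitting task does not move the io_work.
		 */
		queue->io_cpu = cpumask_next_wrap(n - 1, cpu_online_mask,
						  -1, false);
	}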
OK, it looks like the problem is queue->io_cpu, see nvme_tcp_queue_request().
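That function pins the actual socket sends to the fixed cpu: unless the
submitter already happens to run on queue->io_cpu, the io work is always
bounced there. Roughly (a simplified sketch paraphrased from
drivers/nvme/host/tcp.c, details vary by kernel version):

	static inline void nvme_tcp_queue_request(struct nvme_tcp_request *req,
			bool sync, bool last)
	{
		struct nvme_tcp_queue *queue = req->queue;
		bool empty;

		empty = llist_add(&req->lentry, &queue->req_list) &&
			list_empty(&queue->send_list) && !queue->request;

		/* send inline only if we are already on queue->io_cpu */
		if (queue->io_cpu == raw_smp_processor_id() &&
		    sync && empty && mutex_trylock(&queue->send_mutex)) {
			nvme_tcp_send_all(queue);
			mutex_unlock(&queue->send_mutex);
		} else if (last) {
			/* otherwise io_work always runs on the fixed io_cpu */
			queue_work_on(queue->io_cpu, nvme_tcp_wq,
				      &queue->io_work);
		}
	}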
But I am wondering: why doesn't nvme-tcp queue the io work on the current
cpu? And why was queue->io_cpu introduced at all? Given that blk-mq defines
cpu affinity for each hw queue, the driver is supposed to submit IO requests
to hardware from the local CPU.
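That is, if the io_work were queued with plain queue_work(), the workqueue
core would run it on the submitting CPU, which would line up with the blk-mq
mapping. A hypothetical sketch, not what the driver does today:

	/*
	 * hypothetical: run io_work on whatever cpu submitted the IO,
	 * following the blk-mq hw queue mapping, instead of a cpu fixed
	 * at connect time
	 */
	queue_work(nvme_tcp_wq, &queue->io_work);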
Sagi and guys, any ideas about why queue->io_cpu was introduced?
Thanks,
Ming