Date: 2023-04-17
Subject: Re: [PATCH] nvme/tcp: Add support to set the tcp worker cpu affinity
From:
Hey Li,

> The default worker affinity policy uses all online CPUs, i.e. 0 to N-1.
> However, when some CPUs are busy with other jobs, nvme-tcp performance
> suffers.
>
> This patch adds a module parameter to set the CPU affinity of the nvme-tcp
> socket worker threads. The parameter is a comma-separated list of CPU
> numbers. The list is parsed and the resulting cpumask is used to set the
> affinity of the socket worker threads. If the list is empty or parsing
> fails, the default affinity is used.
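
For readers skimming the thread, the mechanics described above boil down to
roughly the following. This is a minimal sketch, not the patch itself; the
parameter name and the hook point are assumptions, only cpulist_parse() and
the cpumask helpers are stock kernel API.

	/*
	 * Sketch only: parse a cpulist modparam into a cpumask and fall
	 * back to all online CPUs when it is empty or malformed.
	 */
	#include <linux/module.h>
	#include <linux/cpumask.h>

	static char *affinity_list;	/* hypothetical parameter name */
	module_param(affinity_list, charp, 0444);
	MODULE_PARM_DESC(affinity_list,
			 "comma-separated list of CPUs for the socket worker threads");

	static struct cpumask sock_wq_mask;

	static void nvme_tcp_set_wq_affinity(void)
	{
		if (!affinity_list ||
		    cpulist_parse(affinity_list, &sock_wq_mask) ||
		    !cpumask_intersects(&sock_wq_mask, cpu_online_mask))
			cpumask_copy(&sock_wq_mask, cpu_online_mask);
		/* the resulting mask would then drive the choice of queue->io_cpu */
	}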

I can see how this may benefit a specific set of workloads, but I have a
few issues with this.

- This exposes a user interface for something that is really
internal to the driver.

- This can be misleading and tricky to get right; my concern is that
it would only benefit a very niche case.

- If the setting should exist, it should not be global.

- I prefer not to introduce new modparams.

- I'd prefer to find a way to support your use-case without introducing
a config knob for it.

- It is not backed by performance numbers and, more importantly, does not
report the impact on key metrics (bw/iops/lat), regressions or lack
thereof.
