Subject: Re: [PATCH 0/2] NVMe_over_TCP: support specifying the congestion-control
I feel I should elaborate on this a bit more to explain the
motivation behind this feature.

As you know, InfiniBand/RoCE provides NVMe-oF with a lossless network
environment (that is, zero packet loss), which is a great advantage
for performance.

In contrast, 'TCP/IP + Ethernet' typically forms a lossy network
environment in which packet drops occur frequently. Once a packet is
dropped, a retransmission timeout may be triggered, and a
retransmission timeout does great damage to performance.

So although NVMe/TCP can offer bandwidth competitive with that of
NVMe/RDMA, the packet drops of the former remain a flaw in its
performance.

However, with the combination of the following conditions, NVMe/TCP
can be almost as competitive as NVMe/RDMA in the data center:

- Ethernet NICs that support QoS configuration (mapping the TOS/DSCP
field of the IP header to a priority, and supporting PFC)

- Ethernet switches that support ECN marking and adjusting the buffer
size of each priority

- NVMe/TCP support for specifying the tos of its TCP traffic
(already implemented)

- NVMe/TCP support for specifying dctcp as the congestion control of
its TCP sockets (the work of this feature)

So this feature is the last item needed on the software side to
complete the above combination.
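
To make the last two items concrete, below is a minimal userspace C
sketch of the per-socket mechanism involved. To be clear, this is only
an illustration of the socket options, not the nvme-tcp patch itself:
in the kernel, nvme-tcp would use the in-kernel equivalents rather
than setsockopt(), and the TOS value chosen here is an arbitrary
example.

#include <stdio.h>
#include <string.h>
#include <unistd.h>
#include <sys/socket.h>
#include <netinet/in.h>
#include <netinet/tcp.h>

int main(void)
{
	const char cc[] = "dctcp";
	int tos = 0x60;	/* example TOS byte (DSCP CS3); deployment-specific */
	char buf[16] = {0};
	socklen_t len = sizeof(buf);
	int fd;

	fd = socket(AF_INET, SOCK_STREAM, 0);
	if (fd < 0) {
		perror("socket");
		return 1;
	}

	/* Select DCTCP as the congestion control for this socket only. */
	if (setsockopt(fd, IPPROTO_TCP, TCP_CONGESTION, cc, strlen(cc)) < 0)
		perror("setsockopt(TCP_CONGESTION)");

	/*
	 * Mark the traffic with a TOS/DSCP value so that NICs and
	 * switches can map it to a priority (see the first two items
	 * of the combination above).
	 */
	if (setsockopt(fd, IPPROTO_IP, IP_TOS, &tos, sizeof(tos)) < 0)
		perror("setsockopt(IP_TOS)");

	/* Read back the effective congestion control for verification. */
	if (getsockopt(fd, IPPROTO_TCP, TCP_CONGESTION, buf, &len) == 0)
		printf("congestion control: %s\n", buf);

	close(fd);
	return 0;
}

Note that for the dctcp selection to succeed, the tcp_dctcp module
must be available on the host, and an unprivileged caller additionally
needs "dctcp" listed in net.ipv4.tcp_allowed_congestion_control (a
caller with CAP_NET_ADMIN may select any loaded algorithm).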
