Subject: Re: [PATCH RFC v2 net-next 0/5] net: Qdisc backpressure infrastructure
On Mon, Aug 22, 2022 at 09:17:37AM -0700, Jakub Kicinski wrote:
> On Mon, 22 Aug 2022 02:10:17 -0700 Peilin Ye wrote:
> > Currently sockets (especially UDP ones) can drop a lot of packets at TC
> > egress when rate limited by shaper Qdiscs like HTB. This patch series
> > tries to solve this by introducing a Qdisc backpressure mechanism.
> >
> > RFC v1 [1] used a throttle & unthrottle approach, which introduced several
> > issues, including a thundering herd problem and a socket reference count
> > issue [2]. This RFC v2 uses a different approach to avoid those issues:
> >
> > 1. When a shaper Qdisc drops a packet that belongs to a local socket due
> > to TC egress congestion, we make part of the socket's sndbuf
> > temporarily unavailable, so it sends slower.
> >
> > 2. Later, when TC egress becomes idle again, we gradually recover the
> > socket's sndbuf back to normal. Patch 2 implements this step using a
> > timer for UDP sockets.
> >
> > The thundering herd problem is avoided, since we no longer wake up all
> > throttled sockets at the same time in qdisc_watchdog(). The socket
> > reference count issue is also avoided, since we no longer maintain a
> > socket list on the Qdisc.
> >
> > Performance is better than in RFC v1. There is one concern about fairness
> > between flows for the TBF Qdisc, which could be addressed by using an SFQ
> > inner Qdisc.
> >
> > Please see the individual patches for details and numbers. Any comments,
> > suggestions would be much appreciated. Thanks!
> >
> > [1] https://lore.kernel.org/netdev/cover.1651800598.git.peilin.ye@bytedance.com/
> > [2] https://lore.kernel.org/netdev/20220506133111.1d4bebf3@hermes.local/
>
> Similarly to Eric's comments on v1, I'm not seeing a clear motivation
> here. Modern high-speed UDP users will have a CC in user space, back
> off, and set transmission times on the packets. Could you describe your
> _actual_ use case / application in more detail?

Not everyone implements QUIC or CC; it is really hard to implement CC
from scratch. This backpressure mechanism is much simpler than CC (TCP
or QUIC), since it clearly does not deal with any remote congestion.

And although this patchset only implements backpressure for UDP, it can
easily be applied to any other protocol; the mechanism is
protocol-independent.
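
To make steps 1 and 2 quoted above a bit more concrete, here is a
minimal userspace sketch of the idea -- the names (backpressure_on_drop,
backpressure_recover, sock_bp) and the halve-then-linearly-recover
policy are made up for illustration, this is not the actual patches:

/*
 * Minimal userspace sketch of the Qdisc backpressure idea described
 * above -- hypothetical names and policy, NOT the actual kernel code.
 *
 * Step 1: when the shaper drops a locally originated packet, make part
 *         of the socket's sndbuf temporarily unavailable, so it sends
 *         slower.
 * Step 2: while egress is idle again, a periodic timer gradually gives
 *         the reserved sndbuf back until it is fully restored.
 */
#include <stdio.h>

struct sock_bp {
	unsigned int sk_sndbuf;     /* configured send buffer size     */
	unsigned int bp_reserved;   /* part made temporarily unusable  */
};

/* Effective sndbuf the socket is currently allowed to fill. */
static unsigned int bp_effective_sndbuf(const struct sock_bp *sk)
{
	return sk->sk_sndbuf - sk->bp_reserved;
}

/* Step 1: called when a shaper Qdisc drops this socket's packet. */
static void backpressure_on_drop(struct sock_bp *sk)
{
	/* Reserve half of what is still available (policy is arbitrary). */
	sk->bp_reserved += bp_effective_sndbuf(sk) / 2;
}

/* Step 2: called periodically (e.g. from a timer) while egress is idle. */
static void backpressure_recover(struct sock_bp *sk, unsigned int step)
{
	sk->bp_reserved = sk->bp_reserved > step ? sk->bp_reserved - step : 0;
}

int main(void)
{
	struct sock_bp sk = { .sk_sndbuf = 212992, .bp_reserved = 0 };

	backpressure_on_drop(&sk);   /* TC egress congested, packet dropped */
	backpressure_on_drop(&sk);
	printf("effective sndbuf after drops: %u\n", bp_effective_sndbuf(&sk));

	while (sk.bp_reserved)       /* egress idle again: timer ticks */
		backpressure_recover(&sk, 16384);
	printf("effective sndbuf recovered:   %u\n", bp_effective_sndbuf(&sk));
	return 0;
}

The exact reduction and recovery numbers above are arbitrary; the point
is only the shape of the mechanism: no per-Qdisc socket list, and no
mass wakeup when the Qdisc becomes idle.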

Thanks.
