Subject: Re: [PATCH RFC v2 net-next 0/5] net: Qdisc backpressure infrastructure
On Mon, Aug 29, 2022 at 09:53:43AM -0700, Eric Dumazet wrote:
> On Mon, Aug 29, 2022 at 9:47 AM Cong Wang <xiyou.wangcong@gmail.com> wrote:
> >
> > On Mon, Aug 22, 2022 at 09:22:39AM -0700, Eric Dumazet wrote:
> > > On Mon, Aug 22, 2022 at 2:10 AM Peilin Ye <yepeilin.cs@gmail.com> wrote:
> > > >
> > > > From: Peilin Ye <peilin.ye@bytedance.com>
> > > >
> > > > Hi all,
> > > >
> > > > Currently, sockets (especially UDP ones) can drop a lot of packets at TC
> > > > egress when rate limited by shaper Qdiscs like HTB. This patch series
> > > > tries to solve this by introducing a Qdisc backpressure mechanism.
> > > >
> > > > RFC v1 [1] used a throttle & unthrottle approach, which introduced several
> > > > issues, including a thundering herd problem and a socket reference count
> > > > issue [2]. This RFC v2 uses a different approach to avoid those issues:
> > > >
> > > > 1. When a shaper Qdisc drops a packet that belongs to a local socket due
> > > > to TC egress congestion, we make part of the socket's sndbuf
> > > > temporarily unavailable, so it sends slower.
> > > >
> > > > 2. Later, when TC egress becomes idle again, we gradually recover the
> > > > socket's sndbuf back to normal. Patch 2 implements this step using a
> > > > timer for UDP sockets.
> > > >
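As an illustration of steps 1 and 2 above, here is a minimal, self-contained
userspace sketch of the sndbuf-budget idea. The struct and function names
(sk_sketch, qdisc_backpressure_drop(), qdisc_backpressure_recover()) and the
constants are made up for the example and are not taken from the patches,
which operate on the kernel's struct sock.

/*
 * Sketch only: on every drop at the shaper, shrink the sender's usable
 * send budget; a periodic timer later grows it back toward the
 * configured limit.
 */
#include <stdio.h>

struct sk_sketch {
	unsigned int sndbuf;    /* configured send buffer limit (bytes) */
	unsigned int overlimit; /* bytes made temporarily unavailable   */
};

/* usable send budget = configured limit minus the backpressured part */
static unsigned int sk_usable_sndbuf(const struct sk_sketch *sk)
{
	return sk->sndbuf - sk->overlimit;
}

/* step 1: the shaper dropped a packet owned by this local socket */
static void qdisc_backpressure_drop(struct sk_sketch *sk)
{
	unsigned int usable = sk_usable_sndbuf(sk);

	/* withhold half of what is still usable, keeping a small floor */
	if (usable > 4096)
		sk->overlimit += usable / 2;
}

/* step 2: periodic timer tick once TC egress has gone idle again */
static void qdisc_backpressure_recover(struct sk_sketch *sk)
{
	/* give back a quarter of the withheld budget per tick */
	sk->overlimit -= sk->overlimit / 4;
	if (sk->overlimit < 1024)
		sk->overlimit = 0;
}

int main(void)
{
	struct sk_sketch sk = { .sndbuf = 212992, .overlimit = 0 };
	int i;

	for (i = 0; i < 3; i++) {
		qdisc_backpressure_drop(&sk);
		printf("after drop %d: usable sndbuf = %u\n",
		       i + 1, sk_usable_sndbuf(&sk));
	}

	for (i = 0; i < 8; i++) {
		qdisc_backpressure_recover(&sk);
		printf("after recovery tick %d: usable sndbuf = %u\n",
		       i + 1, sk_usable_sndbuf(&sk));
	}

	return 0;
}

The point of the sketch is the shape of the mechanism: the sender's usable
sndbuf shrinks on congestion so it sends slower, and the budget is restored
gradually instead of waking every throttled socket at once.
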
> > > > The thundering herd problem is avoided, since we no longer wake up all
> > > > throttled sockets at the same time in qdisc_watchdog(). The socket
> > > > reference count issue is also avoided, since we no longer maintain a
> > > > socket list on the Qdisc.
> > > >
> > > > Performance is better than in RFC v1. There is one concern about fairness
> > > > between flows for the TBF Qdisc, which could be addressed by using an SFQ
> > > > inner Qdisc.
> > > >
> > > > Please see the individual patches for details and numbers. Any comments or
> > > > suggestions would be much appreciated. Thanks!
> > > >
> > > > [1] https://lore.kernel.org/netdev/cover.1651800598.git.peilin.ye@bytedance.com/
> > > > [2] https://lore.kernel.org/netdev/20220506133111.1d4bebf3@hermes.local/
> > > >
> > > > Peilin Ye (5):
> > > > net: Introduce Qdisc backpressure infrastructure
> > > > net/udp: Implement Qdisc backpressure algorithm
> > > > net/sched: sch_tbf: Use Qdisc backpressure infrastructure
> > > > net/sched: sch_htb: Use Qdisc backpressure infrastructure
> > > > net/sched: sch_cbq: Use Qdisc backpressure infrastructure
> > > >
> > >
> > > I think the whole idea is wrong.
> > >
> >
> > Be more specific?
> >
> > > Packet schedulers can be remote (offloaded, or on another box)
> >
> > This is not the case we are dealing with (yet).
> >
> > >
> > > The idea of going back to socket level from a packet scheduler should
> > > really be a last resort.
> >
> > I think it should be the first resort, as we should push backpressure back
> > to the source, rather than to anything in the middle.
> >
> > >
> > > Issue of having UDP sockets being able to flood a network is tough, I
> > > am not sure the core networking stack
> > > should pretend it can solve the issue.
> >
> > It seems you misunderstand it here: we are not dealing with UDP on the
> > network, just on an end host. The backpressure we are dealing with is
> > from Qdisc to socket, on the _TX side_ of a single host.
> >
> > >
> > > Note that FQ based packet schedulers can also help already.
> >
> > It only helps TCP pacing.
>
> FQ : Fair Queue.
>
> It definitely helps without the pacing part...

True, but the fair queuing part has nothing to do with this patchset...
Only the pacing part is related to this topic, and it applies only to
TCP.

Thanks.
