From: Eric Dumazet <>
Date: Wed, 5 Jan 2022 05:38:18 -0800
Subject: Re: Expensive tcp_collapse with high tcp_rmem limit
On Wed, Jan 5, 2022 at 4:15 AM Daniel Dao <dqminh@cloudflare.com> wrote:
>
> Hello,
>
> We are looking at increasing the maximum value of TCP receive buffer in order
> to take better advantage of high BDP links. For historical reasons (
> https://blog.cloudflare.com/the-story-of-one-latency-spike/), this was set to
> a lower than default value.
>
> We are still occasionally seeing long time spent in tcp_collapse, and the time
> seems to be proportional with max rmem. For example, with net.ipv4.tcp_rmem = 8192 2097152 16777216,
> we observe tcp_collapse latency with the following bpftrace command:
>
I suggest you add more traces, like the payload/truesize ratio when these events happen, and tp->rcv_ssthresh and sk->sk_rcvbuf.
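For example, something along these lines should do it (untested sketch, and it assumes a kernel built with BTF so bpftrace can resolve struct sock / struct tcp_sock; tcp_collapse() takes the socket as its first argument):

  bpftrace -e 'kprobe:tcp_collapse {
      $sk = (struct sock *)arg0;
      $tp = (struct tcp_sock *)arg0;  // tcp_sock embeds struct sock as its first member
      printf("collapse: sk_rcvbuf=%d rcv_ssthresh=%u\n",
             $sk->sk_rcvbuf, $tp->rcv_ssthresh);
  }'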
The TCP stack by default assumes a conservative [1] payload/truesize ratio of 50%, meaning that a 16MB sk->sk_rcvbuf would translate to a TCP RWIN of 8MB.
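(For reference, the 50% figure is essentially what tcp_win_from_space() computes with the default net.ipv4.tcp_adv_win_scale = 1:

    win = space - (space >> tcp_adv_win_scale)
        = 16MB - 8MB
        = 8MB

i.e. half of the receive buffer is assumed to be skb overhead rather than payload.)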
I suspect that you use XDP and a standard MTU of 1500. Drivers in XDP mode use one page (4096 bytes on x86) per incoming frame, so in this case the ratio is ~1428/4096 = 35%.
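You can check this directly by looking at skb->len vs skb->truesize on the receive path, e.g. (again a rough sketch assuming BTF; tcp_rcv_established() is just one convenient entry point):

  bpftrace -e 'kprobe:tcp_rcv_established {
      $skb = (struct sk_buff *)arg1;
      @len = hist($skb->len);           // payload bytes per skb
      @truesize = hist($skb->truesize); // memory actually charged to the socket
  }'

With a page-per-frame XDP driver, truesize should cluster around 4KB while len stays near the MTU.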
This poor ratio is one of the reasons we switched to a 4K MTU at Google: it gives us an effective ratio close to 100% (even if XDP is used).
[1] TCP's 50% ratio can be defeated by small MSS values and malicious traffic.
> bpftrace -e 'kprobe:tcp_collapse { @start[tid] = nsecs; } kretprobe:tcp_collapse /@start[tid] != 0/ { $us = (nsecs - @start[tid])/1000; @us = hist($us); delete(@start[tid]); printf("%ld us\n", $us);} interval:s:6000 { exit(); }'
> Attaching 3 probes...
> 15496 us
> 14301 us
> 12248 us
> @us:
> [8K, 16K) 3 |@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@|
>
> Spending up to 16ms with 16MiB maximum receive buffer seems high. Are there any
> recommendations on possible approaches to reduce the tcp_collapse latency ?
> Would clamping the duration of a tcp_collapse call be reasonable, since we only
> need to spend enough time to free space to queue the required skb ?
It depends on whether the incoming skb is queued in the in-order queue or the out-of-order queue. For out-of-order skbs, we have a strategy in tcp_prune_ofo_queue() which should work reasonably well since commit 72cd43ba64fc17 ("tcp: free batches of packets in tcp_prune_ofo_queue()").
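If you want to see which path you are actually hitting, the same histogram trick from your command above can be pointed at tcp_prune_ofo_queue() (sketch, assuming the symbol has not been inlined away; check /proc/kallsyms):

  bpftrace -e 'kprobe:tcp_prune_ofo_queue { @start[tid] = nsecs; }
      kretprobe:tcp_prune_ofo_queue /@start[tid]/ {
          @us = hist((nsecs - @start[tid]) / 1000);
          delete(@start[tid]);
      }'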
Given the nature of tcp_collapse(), limiting it to even 1ms of processing time would still allow malicious traffic to hurt you quite a lot.