Subject: Re: [QA-TCP] How to send tcp small packages immediately?

On 10/24/2014 12:41 AM, Zhangjie (HZ) wrote:
> Hi,
>
> I use netperf to test the performance of small TCP packets, with TCP_NODELAY set:
>
> netperf -H 129.9.7.164 -l 100 -- -m 512 -D
>
> Among the packets I captured with tcpdump, there are not only small packets but also lots of
> big ones (skb->len=65160).
>
> IP 129.9.7.186.60840 > 129.9.7.164.34607: tcp 65160
> IP 129.9.7.164.34607 > 129.9.7.186.60840: tcp 0
> IP 129.9.7.164.34607 > 129.9.7.186.60840: tcp 0
> IP 129.9.7.164.34607 > 129.9.7.186.60840: tcp 0
> IP 129.9.7.186.60840 > 129.9.7.164.34607: tcp 65160
> IP 129.9.7.164.34607 > 129.9.7.186.60840: tcp 0
> IP 129.9.7.164.34607 > 129.9.7.186.60840: tcp 0
> IP 129.9.7.164.34607 > 129.9.7.186.60840: tcp 0
> IP 129.9.7.186.60840 > 129.9.7.164.34607: tcp 80
> IP 129.9.7.186.60840 > 129.9.7.164.34607: tcp 512
> IP 129.9.7.186.60840 > 129.9.7.164.34607: tcp 512
>
> So, how should I test small TCP packets? Besides TCP_NODELAY, what else should be set?

Well, I don't think there is anything else you can set. Even with
TCP_NODELAY set, TCP segment size will still be controlled by factors
such as the congestion window.
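
If you want to see what the sender's TCP stack is actually doing while
the test runs, ss from iproute2 should show the per-connection
congestion window and MSS. A quick sketch, assuming the trace host is
the netperf sender and a reasonably recent iproute2 is installed:

  # show TCP internals (cwnd, mss, rtt) for connections to the netserver
  ss -tin dst 129.9.7.164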

I am ass-u-me-ing your packet trace is at the sender. I suppose if your
sender were fast enough relative to the path, that might combine with
the congestion window to result in the very large segments.

Not to say there cannot be a bug somewhere with TSO overriding
TCP_NODELAY, but in broad terms, even TCP_NODELAY does not guarantee
small TCP segments. That has been something of a bane for my attempts
to use TCP for aggregate small-packet performance measurements via
netperf for quite some time.
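
If you want to rule the stateless offloads in or out, one thing to try
(just a sketch; eth0 is a placeholder for whatever interface the sender
is actually using) is disabling TSO/GSO on the sending interface and
seeing whether the big segments disappear. A request/response test is
another way to keep the individual sends small on the wire:

  # disable segmentation offloads on the sender's NIC
  ethtool -K eth0 tso off gso off

  # request/response with 512-byte requests and TCP_NODELAY set
  netperf -H 129.9.7.164 -t TCP_RR -l 100 -- -r 512,512 -D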

And since you seem to have included a virtualization mailing list I
would also ass-u-me that virtualization is involved somehow. Knuth only
knows how that will affect the timing of events, which will be very much
involved in matters of congestion window and such. I suppose it is even
possible that, if the packet trace is on a VM receiver, some delays in
getting the VM running could mean GRO ends up building large segments
that get pushed up the stack.
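
And if the trace was in fact taken on the receiving side, disabling GRO
(and LRO, if the NIC does it) there would tell you whether those big
segments are a receive-side coalescing artifact rather than what was
actually on the wire. Again just a sketch, with eth0 standing in for
the real interface name:

  # disable receive-side coalescing on the receiver's NIC
  ethtool -K eth0 gro off lro off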

happy benchmarking,

rick jones

