From: Alejandro Comisario
Date: Wed, 22 Jan 2014
Subject: Re: kvm virtio ethernet ring on guest side over high throughput (packet per second)
Thank you so much Stefan for the help and cc'ing Michael & Jason.
Like you advised yesterday on IRC, today we are running some tests with
the application setting TCP_NODELAY in its socket options.

So we will try that and get back to you with further information.
In the meantime, here are the options the VMs are running with:

# ------------------------------------------------------------------------------------------------------------------------------------------------------------------
/usr/bin/kvm -S -M pc-1.0 -cpu
core2duo,+lahf_lm,+rdtscp,+pdpe1gb,+aes,+popcnt,+x2apic,+sse4.2,+sse4.1,+dca,+xtpr,+cx16,+tm2,+est,+vmx,+ds_cpl,+pbe,+tm,+ht,+ss,+acpi,+ds
-enable-kvm -m 32768 -smp 8,sockets=1,cores=6,threads=2 -name
instance-00000254 -uuid d25b1b20-409e-4d7f-bd92-2ef4073c7c2b
-nodefconfig -nodefaults -chardev
socket,id=charmonitor,path=/var/lib/libvirt/qemu/instance-00000254.monitor,server,nowait
-mon chardev=charmonitor,id=monitor,mode=control -rtc base=utc
-no-shutdown -kernel /var/lib/nova/instances/instance-00000254/kernel
-initrd /var/lib/nova/instances/instance-00000254/ramdisk -append
root=/dev/vda console=ttyS0 -drive
file=/var/lib/nova/instances/instance-00000254/disk,if=none,id=drive-virtio-disk0,format=qcow2,cache=writethrough
-device virtio-blk-pci,bus=pci.0,addr=0x4,drive=drive-virtio-disk0,id=virtio-disk0
-netdev tap,fd=19,id=hostnet0 -device
virtio-net-pci,netdev=hostnet0,id=net0,mac=fa:16:3e:27:d4:6d,bus=pci.0,addr=0x3
-chardev file,id=charserial0,path=/var/lib/nova/instances/instance-00000254/console.log
-device isa-serial,chardev=charserial0,id=serial0 -chardev
pty,id=charserial1 -device isa-serial,chardev=charserial1,id=serial1
-usb -device usb-tablet,id=input0 -vnc 0.0.0.0:4 -k en-us -vga cirrus
-device virtio-balloon-pci,id=balloon0,bus=pci.0,addr=0x5
# ------------------------------------------------------------------------------------------------------------------------------------------------------------------
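
As a side note, since Stefan mentions vhost_net further down: if we end up
testing it, my understanding (please correct me if I'm wrong) is that on a
plain QEMU command line it would just mean adding vhost=on to the tap netdev,
roughly like this (same fd/id/mac as above, shown only for illustration):

  -netdev tap,fd=19,vhost=on,id=hostnet0
  -device virtio-net-pci,netdev=hostnet0,id=net0,mac=fa:16:3e:27:d4:6d,bus=pci.0,addr=0x3

In our case nova/libvirt builds that command line, so I guess the equivalent
would be a <driver name='vhost'/> element in the interface definition, but I
still have to confirm that against the versions we run.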

best regards


Alejandro Comisario
#melicloud CloudBuilders
Arias 3751, Piso 7 (C1430CRG)
Ciudad de Buenos Aires - Argentina
Cel: +549(11) 15-3770-1857
Tel : +54(11) 4640-8443


On Wed, Jan 22, 2014 at 12:22 PM, Stefan Hajnoczi <stefanha@gmail.com> wrote:
> On Tue, Jan 21, 2014 at 04:06:05PM -0200, Alejandro Comisario wrote:
>
> CCed Michael Tsirkin and Jason Wang who work on KVM networking.
>
>> Hi guys, in the past, when using physical servers, we had several
>> throughput issues with our APIs. We measure throughput in packets per
>> second rather than bandwidth (Mb/s), since our APIs answer with lots of
>> very small packets (3.5k maximum response, 1.5k average). On those
>> physical servers, whenever we reached throughput capacity (detected via
>> client timeouts), we tuned the ethernet ring configuration and the
>> problem disappeared.
>>
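Just to make the above concrete: on the physical boxes the "ethernet ring
configuration" change was basically growing the NIC ring buffers with
ethtool, something along these lines (device name and values are only an
example, the real ones varied per NIC model):

  ethtool -g eth0                   # show current / maximum ring sizes
  ethtool -G eth0 rx 4096 tx 4096   # grow the RX/TX rings
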
>> Today, with KVM and over 10k virtual instances, when we want to increase
>> the throughput of the KVM instances we run into the fact that, when using
>> virtio on the guests, the ring has a maximum configuration of 256 TX/RX,
>> and on the host side the attached vnet interface has a txqueuelen of 500.
>>
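On the txqueuelen side, if raising it on the attached vnet turns out to be
worth trying, I assume it is just a matter of something like this (vnetX
being the tap attached to the guest, and 1000 picked arbitrarily):

  ip link set dev vnetX txqueuelen 1000
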
>> What I want to know is: how can I tune the guest to support more packets
>> per second if I know that's my bottleneck?
>
> I suggest investigating performance in a systematic way. Set up a
> benchmark that saturates the network. Post the details of the benchmark
> and the results that you are seeing.
>
> Then, we can discuss how to investigate the root cause of the bottleneck.
>
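For the benchmark, I was thinking of something request/response based to
mimic our API traffic, for example netperf TCP_RR with payloads close to
ours (the guest address is a placeholder and the sizes are just a first
guess: small request, ~1.5k response):

  netperf -H <guest_ip> -t TCP_RR -l 60 -- -r 128,1500

The idea being to saturate on transactions per second rather than bandwidth.
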
>> * does virtio expose a way to configure more entries in the virtual ethernet ring?
>
> No, ring size is hardcoded in QEMU (on the host).
>
>> * does the use of vhost_net help me increase packets per
>> second and not only bandwidth?
>
> vhost_net is generally the most performant network option.
>
>> Has anyone struggled with this before and knows where I can look into it?
>> There's LOTS of information about networking performance tuning for
>> KVM, but nothing related to increasing throughput in terms of pps
>> capacity.
>>
>> These are a couple of the configurations we are running right now on the
>> compute nodes:
>>
>> * 2x1Gb bonded interfaces (if you want to know the more than 20 NIC models
>> we are using, just ask)
>> * Multi-queue interfaces, pinned via IRQ to different cores
>> * Linux bridges, no VLAN, no Open vSwitch
>> * Ubuntu 12.04, kernel 3.2.0-[40-48]
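
In case it helps, the IRQ pinning mentioned in the list above is done roughly
like this on the compute nodes (the IRQ number and CPU mask are only an
illustration, the real ones come from /proc/interrupts and differ per box):

  grep eth /proc/interrupts            # find the IRQ of each NIC queue
  echo 4 > /proc/irq/120/smp_affinity  # pin that IRQ to CPU 2 (mask 0x4)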

