From: Sasha Levin <levinsasha928@gmail.com>
Date: Wed, 29 Aug 2012
Subject: Re: [PATCH v2 2/2] virtio-ring: Allocate indirect buffers from cache when possible
On 08/29/2012 05:38 PM, Michael S. Tsirkin wrote:
> On Wed, Aug 29, 2012 at 05:03:03PM +0200, Sasha Levin wrote:
>> On 08/29/2012 01:07 PM, Michael S. Tsirkin wrote:
>>> On Tue, Aug 28, 2012 at 03:35:00PM +0200, Sasha Levin wrote:
>>>> On 08/28/2012 03:20 PM, Michael S. Tsirkin wrote:
>>>>> On Tue, Aug 28, 2012 at 03:04:03PM +0200, Sasha Levin wrote:
>>>>>> Currently if VIRTIO_RING_F_INDIRECT_DESC is enabled we will
>>>>>> use indirect descriptors and allocate them using a simple
>>>>>> kmalloc().
>>>>>>
>>>>>> This patch adds a cache which will allow indirect buffers under
>>>>>> a configurable size to be allocated from that cache instead.
>>>>>>
>>>>>> Signed-off-by: Sasha Levin <levinsasha928@gmail.com>
>>>>>
>>>>> I imagine this helps performance? Any numbers?
>>>>
>>>> I ran benchmarks on the original RFC; I've re-tested it now and got similar
>>>> numbers to the original ones (virtio-net using vhost-net, thresh=16):
>>>>
>>>> Before:
>>>> Recv   Send    Send
>>>> Socket Socket  Message  Elapsed
>>>> Size   Size    Size     Time     Throughput
>>>> bytes  bytes   bytes    secs.    10^6bits/sec
>>>>
>>>>  87380  16384  16384    10.00    4512.12
>>>>
>>>> After:
>>>> Recv   Send    Send
>>>> Socket Socket  Message  Elapsed
>>>> Size   Size    Size     Time     Throughput
>>>> bytes  bytes   bytes    secs.    10^6bits/sec
>>>>
>>>>  87380  16384  16384    10.00    5399.18
>>>>
>>>>
>>>> Thanks,
>>>> Sasha
>>>
>>> This is with both patches 1 + 2?
>>> Sorry, could you please also test what happens if you apply
>>> - just patch 1
>>> - just patch 2
>>>
>>> Thanks!
>>
>> Sure thing!
>>
>> I've also re-run it on an IBM server-type host instead of my laptop. Here are the
>> results:
>>
>> Vanilla kernel:
>>
>> MIGRATED TCP STREAM TEST from 0.0.0.0 (0.0.0.0) port 0 AF_INET to 192.168.33.1
>> () port 0 AF_INET
>> enable_enobufs failed: getprotobyname
>> Recv   Send    Send
>> Socket Socket  Message  Elapsed
>> Size   Size    Size     Time     Throughput
>> bytes  bytes   bytes    secs.    10^6bits/sec
>>
>>  87380  16384  16384    10.00    7922.72
>>
>> Patch 1, with threshold=16:
>>
>> MIGRATED TCP STREAM TEST from 0.0.0.0 (0.0.0.0) port 0 AF_INET to 192.168.33.1
>> () port 0 AF_INET
>> enable_enobufs failed: getprotobyname
>> Recv   Send    Send
>> Socket Socket  Message  Elapsed
>> Size   Size    Size     Time     Throughput
>> bytes  bytes   bytes    secs.    10^6bits/sec
>>
>>  87380  16384  16384    10.00    8415.07
>>
>> Patch 2:
>>
>> MIGRATED TCP STREAM TEST from 0.0.0.0 (0.0.0.0) port 0 AF_INET to 192.168.33.1
>> () port 0 AF_INET
>> enable_enobufs failed: getprotobyname
>> Recv   Send    Send
>> Socket Socket  Message  Elapsed
>> Size   Size    Size     Time     Throughput
>> bytes  bytes   bytes    secs.    10^6bits/sec
>>
>>  87380  16384  16384    10.00    8931.05
>>
>>
>> Note that these are simple tests with netperf listening on one end and a simple
>> 'netperf -H [host]' run within the guest. If there are other tests that might be
>> interesting, please let me know.
>>
>>
>> Thanks,
>> Sasha
>
>
> And which parameter did you use for patch 2?
>

Same as in the first one, 16. The only difference in patch 2 is that we use a
kmem_cache, so there's no point in changing the threshold vs. patch 1.
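
To illustrate, a rough sketch of the idea (illustrative names only, not the
actual patch code), assuming the 16-descriptor threshold used in the tests
above: indirect descriptor tables at or below the threshold come from a
dedicated kmem_cache, and larger ones fall back to plain kmalloc().

#include <linux/init.h>
#include <linux/slab.h>
#include <linux/virtio_ring.h>

static struct kmem_cache *indirect_cache;	/* hypothetical name */
static unsigned int indirect_thresh = 16;	/* hypothetical tunable */

static int __init indirect_cache_init(void)
{
	/* Each cache object holds one full threshold-sized table. */
	indirect_cache = kmem_cache_create("vring_indirect",
			indirect_thresh * sizeof(struct vring_desc),
			0, 0, NULL);
	return indirect_cache ? 0 : -ENOMEM;
}

static struct vring_desc *alloc_indirect(unsigned int total_sg, gfp_t gfp)
{
	if (total_sg <= indirect_thresh)
		return kmem_cache_alloc(indirect_cache, gfp);

	return kmalloc(total_sg * sizeof(struct vring_desc), gfp);
}

static void free_indirect(struct vring_desc *desc, unsigned int total_sg)
{
	/* Must mirror the allocation decision above. */
	if (total_sg <= indirect_thresh)
		kmem_cache_free(indirect_cache, desc);
	else
		kfree(desc);
}

Since every cache object is sized for the full threshold, small allocations
waste a bit of memory in exchange for the faster, slab-backed allocation path.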


Thanks,
Sasha

