Subject: Re: [PATCH net-next V2 3/3] virtio-net: rx busy polling support

On Thursday 17 July 2014 08:25 AM, Jason Wang wrote:
> On 07/16/2014 04:38 PM, Varka Bhadram wrote:
>> On 07/16/2014 11:51 AM, Jason Wang wrote:
>>> Add basic support for rx busy polling.
>>>
>>> Test was done between a kvm guest and an external host. The two hosts
>>> were connected through 40Gb mlx4 cards. With both busy_poll and
>>> busy_read set to 50 in the guest, a 1-byte netperf TCP_RR test shows a
>>> 116% improvement: the transaction rate increased from 9151.94 to 19787.37.
>>>
>>> Cc: Rusty Russell <rusty@rustcorp.com.au>
>>> Cc: Michael S. Tsirkin <mst@redhat.com>
>>> Cc: Vlad Yasevich <vyasevic@redhat.com>
>>> Cc: Eric Dumazet <eric.dumazet@gmail.com>
>>> Signed-off-by: Jason Wang <jasowang@redhat.com>
>>> ---
>>> drivers/net/virtio_net.c | 190 ++++++++++++++++++++++++++++++++++++++++++++++-
>>> 1 file changed, 187 insertions(+), 3 deletions(-)
>>>
>>> diff --git a/drivers/net/virtio_net.c b/drivers/net/virtio_net.c
>>> index e417d93..4830713 100644
>>> --- a/drivers/net/virtio_net.c
>>> +++ b/drivers/net/virtio_net.c
>>> @@ -27,6 +27,7 @@
>>> #include <linux/slab.h>
>>> #include <linux/cpu.h>
>>> #include <linux/average.h>
>>> +#include <net/busy_poll.h>
>>> static int napi_weight = NAPI_POLL_WEIGHT;
>>> module_param(napi_weight, int, 0444);
>>> @@ -94,8 +95,143 @@ struct receive_queue {
>>> /* Name of this receive queue: input.$index */
>>> char name[40];
>>> +
>>> +#ifdef CONFIG_NET_RX_BUSY_POLL
>>> +	unsigned int state;
>>> +#define VIRTNET_RQ_STATE_IDLE		0
>>> +#define VIRTNET_RQ_STATE_NAPI		1 /* NAPI or refill owns this RQ */
>>> +#define VIRTNET_RQ_STATE_POLL		2 /* poll owns this RQ */
>>> +#define VIRTNET_RQ_STATE_DISABLED	4 /* RQ is disabled */
>>> +#define VIRTNET_RQ_OWNED (VIRTNET_RQ_STATE_NAPI | VIRTNET_RQ_STATE_POLL)
>>> +#define VIRTNET_RQ_LOCKED (VIRTNET_RQ_OWNED | VIRTNET_RQ_STATE_DISABLED)
>>> +#define VIRTNET_RQ_STATE_NAPI_YIELD	8  /* NAPI or refill yielded this RQ */
>>> +#define VIRTNET_RQ_STATE_POLL_YIELD	16 /* poll yielded this RQ */
>>> +	spinlock_t lock;
>>> +#endif /* CONFIG_NET_RX_BUSY_POLL */
>>> +#endif /* CONFIG_NET_RX_BUSY_POLL */
>>> };
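
Side note for readers of this thread: the hunk above only shows the
NAPI/refill-side locking helpers. The busy-poll side presumably takes
the same spinlock, but with BHs disabled, since busy polling runs in
process context while NAPI runs in softirq context. A rough sketch of
what that counterpart could look like, following the ixgbe-style
pattern (the name virtnet_rq_lock_poll and the exact body are my
guesses, not quoted from the patch):

static inline bool virtnet_rq_lock_poll(struct receive_queue *rq)
{
	bool rc = true;

	/* _bh variant: the busy-poll path runs in process context and
	 * must not race with the softirq NAPI path on this lock.
	 */
	spin_lock_bh(&rq->lock);
	if (rq->state & VIRTNET_RQ_LOCKED) {
		/* NAPI, refill or disable currently owns this RQ */
		rq->state |= VIRTNET_RQ_STATE_POLL_YIELD;
		rc = false;
	} else {
		/* we don't care if someone yielded */
		rq->state = VIRTNET_RQ_STATE_POLL;
	}
	spin_unlock_bh(&rq->lock);
	return rc;
}
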
>>> +#ifdef CONFIG_NET_RX_BUSY_POLL
>>> +static inline void virtnet_rq_init_lock(struct receive_queue *rq)
>>> +{
>>> +
>>> +	spin_lock_init(&rq->lock);
>>> +	rq->state = VIRTNET_RQ_STATE_IDLE;
>>> +}
>>> +
>>> +/* called from the device poll routine or refill routine to get ownership of a
>>> + * receive queue.
>>> + */
>>> +static inline bool virtnet_rq_lock_napi_refill(struct receive_queue *rq)
>>> +{
>>> +	int rc = true;
>>> +
>> bool instead of int...?
> Yes, that would be better.
>>> +	spin_lock(&rq->lock);
>>> +	if (rq->state & VIRTNET_RQ_LOCKED) {
>>> +		WARN_ON(rq->state & VIRTNET_RQ_STATE_NAPI);
>>> +		rq->state |= VIRTNET_RQ_STATE_NAPI_YIELD;
>>> +		rc = false;
>>> +	} else
>>> +		/* we don't care if someone yielded */
>>> +		rq->state = VIRTNET_RQ_STATE_NAPI;
>>> +	spin_unlock(&rq->lock);
>> Is the lock meant to protect rq->state...?
>>
>> If yes:
>> 	spin_lock(&rq->lock);
>> 	if (rq->state & VIRTNET_RQ_LOCKED) {
>> 		rq->state |= VIRTNET_RQ_STATE_NAPI_YIELD;
>> 		spin_unlock(&rq->lock);
>> 		WARN_ON(rq->state & VIRTNET_RQ_STATE_NAPI);
>> 		rc = false;
>> 	} else {
>> 		/* we don't care if someone yielded */
>> 		rq->state = VIRTNET_RQ_STATE_NAPI;
>> 		spin_unlock(&rq->lock);
>> 	}
> I don't see any difference. Is this meant to catch driver bugs
> earlier? Btw, several other rx busy polling capable drivers do the
> same thing.

We need not include the WARN_ON() and the rc = false assignment inside the critical section.
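
Something like the following sketch (illustrative only, not tested):
take a snapshot of rq->state under the lock, then do the WARN_ON() and
the rc assignment after dropping it. This also avoids re-reading
rq->state without the lock held, unlike my snippet above:

	unsigned int state;

	spin_lock(&rq->lock);
	state = rq->state;
	if (state & VIRTNET_RQ_LOCKED)
		rq->state |= VIRTNET_RQ_STATE_NAPI_YIELD;
	else
		/* we don't care if someone yielded */
		rq->state = VIRTNET_RQ_STATE_NAPI;
	spin_unlock(&rq->lock);

	if (state & VIRTNET_RQ_LOCKED) {
		/* warn on the snapshot, outside the critical section */
		WARN_ON(state & VIRTNET_RQ_STATE_NAPI);
		rc = false;
	}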

--
Regards,
Varka Bhadram


