From: Yiwen Jiang <jiangyiwen@huawei.com>
Subject: [V9fs-developer] [PATCH] net/9p: Fix a deadlock case in the virtio transport
Date: 2018-07-14

When the client has multiple threads issuing I/O requests all the
time and the server performs very well, the CPU may keep running
in IRQ context for a long time, because virtqueue_get_buf() keeps
finding completed buffers on each pass of the *while* loop.

So keep chan->lock held across the whole loop.
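
For reference, with this change applied req_done() reads roughly as
follows (a sketch only; the local variable declarations are carried
over unchanged from the existing function, and the diff below is the
authoritative change):

static void req_done(struct virtqueue *vq)
{
	struct virtio_chan *chan = vq->vdev->priv;
	unsigned int len;
	struct p9_req_t *req;
	unsigned long flags;

	p9_debug(P9_DEBUG_TRANS, ": request done\n");

	/* Take chan->lock once before the drain loop instead of
	 * acquiring and releasing it on every iteration. */
	spin_lock_irqsave(&chan->lock, flags);
	while (1) {
		req = virtqueue_get_buf(chan->vq, &len);
		if (req == NULL)
			break;
		chan->ring_bufs_avail = 1;
		/* Wakeup if anyone waiting for VirtIO ring space. */
		wake_up(chan->vc_wq);
		if (len)
			p9_client_cb(chan->client, req, REQ_STATUS_RCVD);
	}
	spin_unlock_irqrestore(&chan->lock, flags);
}

Compared with the current code, the lock is taken once before the
loop and released once after it, so checking the virtqueue and
updating ring_bufs_avail happen in a single critical section.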

Signed-off-by: Yiwen Jiang <jiangyiwen@huawei.com>
---
net/9p/trans_virtio.c | 8 +++-----
1 file changed, 3 insertions(+), 5 deletions(-)

diff --git a/net/9p/trans_virtio.c b/net/9p/trans_virtio.c
index 05006cb..9b0f5f2 100644
--- a/net/9p/trans_virtio.c
+++ b/net/9p/trans_virtio.c
@@ -148,20 +148,18 @@ static void req_done(struct virtqueue *vq)
 
 	p9_debug(P9_DEBUG_TRANS, ": request done\n");
 
+	spin_lock_irqsave(&chan->lock, flags);
 	while (1) {
-		spin_lock_irqsave(&chan->lock, flags);
 		req = virtqueue_get_buf(chan->vq, &len);
-		if (req == NULL) {
-			spin_unlock_irqrestore(&chan->lock, flags);
+		if (req == NULL)
 			break;
-		}
 		chan->ring_bufs_avail = 1;
-		spin_unlock_irqrestore(&chan->lock, flags);
 		/* Wakeup if anyone waiting for VirtIO ring space. */
 		wake_up(chan->vc_wq);
 		if (len)
 			p9_client_cb(chan->client, req, REQ_STATUS_RCVD);
 	}
+	spin_unlock_irqrestore(&chan->lock, flags);
 }
 
 /**
--
1.8.3.1