Subject: Re: [PATCH] nvme: Cleanup nvme_dev_start()
On Mon, 20 Jan 2014, Alexander Gordeev wrote:
> This update fixes an oddity where a device is first added
> to dev_list and then removed from it in case of initialization
> failure, instead of being added only in case of success.
>
> Signed-off-by: Alexander Gordeev <agordeev@redhat.com>
> ---
> drivers/block/nvme-core.c | 19 ++++++++-----------
> 1 files changed, 8 insertions(+), 11 deletions(-)
>
> diff --git a/drivers/block/nvme-core.c b/drivers/block/nvme-core.c
> index e1e4ad4..e4e12be 100644
> --- a/drivers/block/nvme-core.c
> +++ b/drivers/block/nvme-core.c
> @@ -2105,29 +2105,26 @@ static int nvme_dev_start(struct nvme_dev *dev)
> if (result)
> goto unmap;
>
> - spin_lock(&dev_list_lock);
> - list_add(&dev->node, &dev_list);
> - spin_unlock(&dev_list_lock);
> -
> result = set_queue_count(dev, num_online_cpus());
> if (result == -EBUSY)

For whatever reason, some of these devices unfortunately don't support
legacy interrupts. We expect an interrupt when the completion is posted
for setting the queue count, but failing that, we rely on the polling
thread to invoke the completion, so the device needs to be on the dev_list
before calling set_queue_count.
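
To illustrate the dependency, here is a simplified sketch of the polling
thread, assuming the shape of nvme-core.c from this period; the names
(nvme_kthread, nvme_process_cq) and the locking details are approximations
rather than an exact copy of the upstream code:

/*
 * Simplified sketch of the driver's polling thread.  It only ever
 * looks at devices that are on dev_list, so a device that has not
 * yet been added to the list never has its completion queues polled.
 */
static int nvme_kthread(void *data)
{
	struct nvme_dev *dev;

	while (!kthread_should_stop()) {
		set_current_state(TASK_INTERRUPTIBLE);

		spin_lock(&dev_list_lock);
		list_for_each_entry(dev, &dev_list, node) {
			int i;

			for (i = 0; i < dev->queue_count; i++) {
				struct nvme_queue *nvmeq = dev->queues[i];

				if (!nvmeq)
					continue;
				spin_lock_irq(&nvmeq->q_lock);
				/* Reap completions even if no IRQ ever fires */
				nvme_process_cq(nvmeq);
				spin_unlock_irq(&nvmeq->q_lock);
			}
		}
		spin_unlock(&dev_list_lock);

		schedule_timeout(round_jiffies_relative(HZ));
	}
	return 0;
}

Because the thread only walks devices that are on dev_list, issuing
set_queue_count() before the list_add means a controller without working
legacy interrupts never gets its admin completion reaped, and the command
effectively hangs.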

