Subject: Re: [PATCH] null_blk: allow teardown on request timeout

On 10/17/22 03:16, Ming Lei wrote:
> On Mon, Oct 17, 2022 at 10:04:26AM +0000, Chaitanya Kulkarni wrote:
>> On 10/17/22 02:50, Ming Lei wrote:
>>> On Mon, Oct 17, 2022 at 09:30:47AM +0000, Chaitanya Kulkarni wrote:
>>>>
>>>>>> + /*
>>>>>> + * Unblock any pending dispatch I/Os before we destroy the device.
>>>>>> + * From null_destroy_dev()->del_gendisk() will set GD_DEAD flag
>>>>>> + * causing any new I/O from __bio_queue_enter() to fail with -ENODEV.
>>>>>> + */
>>>>>> + blk_mq_unquiesce_queue(nullb->q);
>>>>>> +
>>>>>> + null_destroy_dev(nullb);
>>>>>
>>>>> destroying device is never good cleanup for handling timeout/abort, and it
>>>>> should have been the last straw any time.
>>>>>
>>>>
>>>> That is exactly why I've added the rq_abort_limit, so until the limit
>>>> is not reached null_abort_work() will not get scheduled and device is
>>>> not destroyed.
>>>
>>> I meant destroying device should only be done iff the normal abort handler
>>> can't recover the device, however, your patch simply destroys device
>>> without running any abort handling.
>>>
>>
>> I did not understand your comment, can you please elaborate on exactly
>> where and which abort handlers needs to be called in this patch before
>> null_destroy_nullb() ?
>
> In case of request timeout, there may be something wrong which needs
> to be recovered.
>

In the case of null_blk there is no real backend controller, hence there
is nothing to try to recover. The only recovery scenario we can exercise
is allowing multiple timed-out requests and waiting for rq_abort_limit
to be reached before teardown.
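To make the counting idea concrete, here is a rough user-space sketch
(not the actual driver code; only the rq_abort_limit name mirrors the
patch, the rest is illustrative): individual timeouts are tolerated,
and only once the limit is hit is the teardown work scheduled, exactly
once.

/*
 * Illustrative user-space model of the rq_abort_limit counting logic.
 * In the driver the "schedule teardown" step would be something like
 * scheduling a work item that destroys the device.
 */
#include <stdio.h>
#include <stdbool.h>

static unsigned int rq_abort_limit = 3;	/* configurable threshold */
static unsigned int nr_timed_out;
static bool teardown_scheduled;

static void handle_request_timeout(unsigned int tag)
{
	nr_timed_out++;
	printf("request %u timed out (%u/%u)\n",
	       tag, nr_timed_out, rq_abort_limit);

	if (nr_timed_out < rq_abort_limit)
		return;		/* below the limit: keep the device alive */

	if (!teardown_scheduled) {
		teardown_scheduled = true;
		printf("limit reached, scheduling teardown work\n");
	}
}

int main(void)
{
	for (unsigned int tag = 0; tag < 5; tag++)
		handle_request_timeout(tag);
	return 0;
}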

>>
>> the objective of this patch it to simulate the teardown scenario
>> from timeout handler so it can get tested on regular basis with
>> null_blk ...
>
> Why does teardown scenario have to be triggered for timeout? That

The ideal way is to read the controller status from the timeout handler
and check whether the controller is recoverable. If it is not
recoverable, we need to gracefully shut down the device; continuing to
issue I/Os to a non-recoverable device can create more damage to the
device and potentially to the system. I've encountered this scenario
where an SSD was getting hot because the device F/W had a bug in
temperature control reporting and it became non-responsive, timing out
requests during the SSD qualification process. The lack of teardown
made the system non-responsive, and we only figured it out by logging
the temperature with vendor unique commands.
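
To illustrate what I mean by reading the status first, here is a hedged
sketch of that decision flow for a real controller. The status values
and the read_controller_status() helper are made up for illustration
(for NVMe this would be something along the lines of checking the
controller status register); null_blk has no backend, so this check
simply cannot be performed there.

/*
 * Illustrative decision flow for a timeout handler backed by a real
 * controller: recover when possible, tear down only when the device
 * is clearly dead.
 */
#include <stdio.h>

enum ctrl_status { CTRL_OK, CTRL_RECOVERABLE, CTRL_DEAD };

/* placeholder for reading real controller state */
static enum ctrl_status read_controller_status(void)
{
	return CTRL_DEAD;
}

static void handle_timeout(void)
{
	switch (read_controller_status()) {
	case CTRL_OK:
	case CTRL_RECOVERABLE:
		printf("abort/reset the request, keep the device\n");
		break;
	case CTRL_DEAD:
		printf("graceful teardown: cancel I/O, remove device\n");
		break;
	}
}

int main(void)
{
	handle_timeout();
	return 0;
}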

> looks you think teardown & destroying device for timeout is one normal
> and common way, but I think it is not, the device shouldn't be removed

No, I do not think that way; null_blk has no backend to check for the
status of the device. As explained earlier, the decision to remove the
device should only be made after reading the controller's state and
confirming that it is in a non-recoverable state. That check is not
possible for null_blk, so we cannot call abort routines before
destroying the device.

> if it still can work. I have got such kind of complaints of disk

Of course it can still work, see the explanation above, but if the
device status is non-recoverable then it should be allowed to
gracefully tear down, canceling the I/Os and removing the device from
the system.

> disappeared just by request timeout, such as, nvme-pci.
>

It would be great if we could start a new thread on the linux-nvme list
to address the complaints that you have received; I'll be happy to
review and reply.

-ck
