Subject: Re: [PATCH v19 2/3] scsi: ufs: L2P map management for HPB read
On 2021-02-08 16:53, Daejun Park wrote:
>>>> > @@ -342,13 +1208,14 @@ void ufshpb_suspend(struct ufs_hba *hba)
>>>> >  	struct scsi_device *sdev;
>>>> >
>>>> >  	shost_for_each_device(sdev, hba->host) {
>>>> > -		hpb = sdev->hostdata;
>>>> > +		hpb = ufshpb_get_hpb_data(sdev);
>>>> >  		if (!hpb)
>>>> >  			continue;
>>>> >
>>>> >  		if (ufshpb_get_state(hpb) != HPB_PRESENT)
>>>> >  			continue;
>>>> >  		ufshpb_set_state(hpb, HPB_SUSPEND);
>>>> > +		ufshpb_cancel_jobs(hpb);
>>>>
>>>> There may be a deadlock problem here - in the case of runtime
>>>> suspend, when ufshpb_suspend() is invoked, all of the hba's child
>>>> scsi devices are in the RPM_SUSPENDED state. When this line tries to
>>>> cancel a running map work, i.e. when ufshpb_get_map_req() calls the
>>>> lines below, it will get stuck at blk_queue_enter().
>>>>
>>>> req = blk_get_request(hpb->sdev_ufs_lu->request_queue,
>>>>                       REQ_OP_SCSI_IN, 0);
>>>>
>>>> Please check block layer power management, and see also commit
>>>> d55d15a33
>>>> ("scsi: block: Do not accept any requests while suspended").
>>>
>>> I agree with your comment.
>>> How about adding the BLK_MQ_REQ_NOWAIT flag to blk_get_request() to
>>> avoid the hang?
>>>
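
For reference, the proposed change would amount to roughly the
following in ufshpb_get_map_req() (a sketch only; the error-handling
path shown here is an assumption, not part of the patch):

    req = blk_get_request(hpb->sdev_ufs_lu->request_queue,
                          REQ_OP_SCSI_IN, BLK_MQ_REQ_NOWAIT);
    if (IS_ERR(req))
            /* fail fast instead of sleeping in blk_queue_enter() */
            return NULL;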
>>
>> That won't work - BLK_MQ_REQ_NOWAIT lets one fail fast from
>> blk_mq_get_tag(), but blk_queue_enter() comes before
>> __blk_mq_alloc_request().
>>
> In blk_queue_enter(), the BLK_MQ_REQ_NOWAIT flag makes the function
> return an error rather than wait for runtime resume. Please refer to
> the following code.

Oops, sorry, my memory needs to be refreshed on that part.

But won't the BLK_MQ_REQ_NOWAIT flag break your original purpose? Even
when runtime suspend is out of the picture, if traffic on the request
queue is heavy, map_work() will be stopped frequently whenever it
cannot get a request from the queue - that will pull down the
efficiency of each map_work() run, which may hurt random read
performance...
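
If NOWAIT stays, one way to bound that cost might be to retry rather
than give up - extending the sketch above, the map work could be
rescheduled on -EBUSY (the delayed-work form of map_work and the 10ms
interval are assumptions, for illustration only):

    if (IS_ERR(req)) {
            /* queue busy or suspended: retry this map request later
             * instead of dropping the L2P map update */
            schedule_delayed_work(&hpb->map_work,
                                  msecs_to_jiffies(10));
            return NULL;
    }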

Can Guo.

>
> int blk_queue_enter(struct request_queue *q, blk_mq_req_flags_t flags)
> {
> 	const bool pm = flags & BLK_MQ_REQ_PM;
>
> 	while (true) {
> 		bool success = false;
>
> 		rcu_read_lock();
> 		if (percpu_ref_tryget_live(&q->q_usage_counter)) {
> 			/*
> 			 * The code that increments the pm_only counter is
> 			 * responsible for ensuring that that counter is
> 			 * globally visible before the queue is unfrozen.
> 			 */
> 			if ((pm && queue_rpm_status(q) != RPM_SUSPENDED) ||
> 			    !blk_queue_pm_only(q)) {
> 				success = true;
> 			} else {
> 				percpu_ref_put(&q->q_usage_counter);
> 			}
> 		}
> 		rcu_read_unlock();
>
> 		if (success)
> 			return 0;
>
> 		if (flags & BLK_MQ_REQ_NOWAIT)
> 			return -EBUSY; <-- out from the function.
>
> Thanks,
> Daejun
