Subject: Re: [PATCH v2 5/5] ufs: core: Add error handling for MCQ mode
On 4/25/2023 5:21 PM, Bart Van Assche wrote:
> On 4/17/23 14:05, Bao D. Nguyen wrote:
>> +        /* MCQ mode */
>> +        if (is_mcq_enabled(hba))
>> +            return ufshcd_clear_cmds(hba, 1UL << lrbp->task_tag);
>
> The above code will trigger an overflow if lrbp->task_tag >= 8 *
> sizeof(unsigned long). That's not acceptable.
ufshcd_clear_cmds() takes a bit mask, and there are multiple places in the
UFS code that share this limitation if the queue depth grows beyond 64.
I am thinking:
1. Current UFS controllers on the market probably support a queue depth of
64 or less, so this may not be a problem today if the host controller
capability is capped at a queue depth of 64, but it can become a problem
in multiple places in the code later.
2. In MCQ mode, pass a tag number into ufshcd_clear_cmds(), while in SDB
mode keep passing the tag's bit mask as before (rough sketch below).
3. Use an sbitmap to support a larger queue depth?
Thanks for any suggestions.
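
For option 2, something along these lines is what I have in mind (rough
sketch only; ufshcd_clear_cmd() and ufshcd_mcq_clear_cmd() are placeholder
names, not existing functions):

	/*
	 * Sketch: interpret the identifier per mode so that a tag >=
	 * BITS_PER_LONG is never shifted into a bit mask.
	 */
	static int ufshcd_clear_cmd(struct ufs_hba *hba, u32 task_tag)
	{
		if (is_mcq_enabled(hba))
			/* MCQ: clear a single command identified by its tag. */
			return ufshcd_mcq_clear_cmd(hba, task_tag); /* placeholder */

		/* SDB: task_tag is always < BITS_PER_LONG, so the shift is safe. */
		return ufshcd_clear_cmds(hba, 1UL << task_tag);
	}

The caller (e.g. ufshcd_abort()) would then pass lrbp->task_tag directly in
both modes and no longer build the mask itself.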

>
>>   static irqreturn_t ufshcd_transfer_req_compl(struct ufs_hba *hba)
>>   {
>> +    struct ufshcd_lrb *lrbp;
>> +    u32 hwq_num, utag;
>> +    int tag;
>> +
>>       /* Resetting interrupt aggregation counters first and reading the
>>        * DOOR_BELL afterward allows us to handle all the completed
>> requests.
>>        * In order to prevent other interrupts starvation the DB is
>> read once
>> @@ -5580,7 +5590,22 @@ static irqreturn_t
>> ufshcd_transfer_req_compl(struct ufs_hba *hba)
>>        * Ignore the ufshcd_poll() return value and return IRQ_HANDLED
>> since we
>>        * do not want polling to trigger spurious interrupt complaints.
>>        */
>> -    ufshcd_poll(hba->host, UFSHCD_POLL_FROM_INTERRUPT_CONTEXT);
>> +    if (!is_mcq_enabled(hba)) {
>> +        ufshcd_poll(hba->host, UFSHCD_POLL_FROM_INTERRUPT_CONTEXT);
>> +        goto out;
>> +    }
>> +
>> +    /* MCQ mode */
>> +    for (tag = 0; tag < hba->nutrs; tag++) {
>> +        lrbp = &hba->lrb[tag];
>> +        if (lrbp->cmd) {
>> +            utag = blk_mq_unique_tag(scsi_cmd_to_rq(lrbp->cmd));
>> +            hwq_num = blk_mq_unique_tag_to_hwq(utag);
>> +            ufshcd_poll(hba->host, hwq_num);
>> +        }
>> +    }
>
> Is my understanding correct that ufshcd_transfer_req_compl() is only
> called from single doorbell code paths and hence that the above change
> is not necessary?
ufshcd_transfer_req_compl() can be invoked in MCQ mode, for example from
ufshcd_err_handler(), as below:
ufshcd_err_handler()-->ufshcd_complete_requests()-->ufshcd_transfer_req_compl()

>
>
>> +    if (is_mcq_enabled(hba)) {
>> +        struct ufshcd_lrb *lrbp;
>> +        int tag;
>> +
>> +        for (tag = 0; tag < hba->nutrs; tag++) {
>> +            lrbp = &hba->lrb[tag];
>> +            if (lrbp->cmd) {
>> +                ret = ufshcd_try_to_abort_task(hba, tag);
>> +                dev_err(hba->dev, "Aborting tag %d / CDB %#02x %s\n",
>> tag,
>> +                    hba->lrb[tag].cmd ? hba->lrb[tag].cmd->cmnd[0] : -1,
>> +                    ret ? "failed" : "succeeded");
>> +            }
>> +            if (ret) {
>> +                needs_reset = true;
>> +                goto out;
>> +            }
>> +        }
>> +    } else {
>> +        /* Clear pending transfer requests */
>> +        for_each_set_bit(tag, &hba->outstanding_reqs, hba->nutrs) {
>> +            ret = ufshcd_try_to_abort_task(hba, tag);
>> +            dev_err(hba->dev, "Aborting tag %d / CDB %#02x %s\n", tag,
>> +                hba->lrb[tag].cmd ? hba->lrb[tag].cmd->cmnd[0] : -1,
>> +                ret ? "failed" : "succeeded");
>> +            if (ret) {
>> +                needs_reset = true;
>> +                goto out;
>> +            }
>>           }
>>       }
>
> Please introduce helper functions for the MCQ and SDB code paths such
> that the nesting level of the above code is reduced.
Sure, I will make that change.
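
Something like the following is what I have in mind (sketch only; the
helper names are placeholders):

	/* Sketch: abort all outstanding commands in MCQ mode. */
	static bool ufshcd_abort_all_mcq(struct ufs_hba *hba)
	{
		struct ufshcd_lrb *lrbp;
		int tag, ret;

		for (tag = 0; tag < hba->nutrs; tag++) {
			lrbp = &hba->lrb[tag];
			if (!lrbp->cmd)
				continue;
			ret = ufshcd_try_to_abort_task(hba, tag);
			dev_err(hba->dev, "Aborting tag %d / CDB %#02x %s\n", tag,
				lrbp->cmd ? lrbp->cmd->cmnd[0] : -1,
				ret ? "failed" : "succeeded");
			if (ret)
				return true;	/* caller sets needs_reset */
		}
		return false;
	}

	/* Sketch: abort all outstanding commands in single doorbell mode. */
	static bool ufshcd_abort_all_sdb(struct ufs_hba *hba)
	{
		int tag, ret;

		for_each_set_bit(tag, &hba->outstanding_reqs, hba->nutrs) {
			ret = ufshcd_try_to_abort_task(hba, tag);
			dev_err(hba->dev, "Aborting tag %d / CDB %#02x %s\n", tag,
				hba->lrb[tag].cmd ? hba->lrb[tag].cmd->cmnd[0] : -1,
				ret ? "failed" : "succeeded");
			if (ret)
				return true;	/* caller sets needs_reset */
		}
		return false;
	}

The error handler would then reduce to something like:

	needs_reset = is_mcq_enabled(hba) ? ufshcd_abort_all_mcq(hba) :
					    ufshcd_abort_all_sdb(hba);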

>
> Thanks,
>
> Bart.
