Subject: Re: [PATCH v4 06/10] scsi: ufs: Remove host_sem used in suspend/resume
From: Adrian Hunter
Date: 2021-07-07
On 28/06/21 10:26 am, Can Guo wrote:
> On 2021-06-24 18:04, Adrian Hunter wrote:
>> On 24/06/21 9:31 am, Can Guo wrote:
>>> On 2021-06-24 14:23, Adrian Hunter wrote:
>>>> On 24/06/21 9:12 am, Can Guo wrote:
>>>>> On 2021-06-24 13:52, Adrian Hunter wrote:
>>>>>> On 24/06/21 5:16 am, Can Guo wrote:
>>>>>>> On 2021-06-23 22:30, Adrian Hunter wrote:
>>>>>>>> On 23/06/21 10:35 am, Can Guo wrote:
>>>>>>>>> To protect system suspend/resume from being disturbed by error handling,
>>>>>>>>> instead of using host_sem, let the error handler call lock_system_sleep()
>>>>>>>>> and unlock_system_sleep(), which achieve the same purpose. Remove the
>>>>>>>>> host_sem used in the suspend/resume paths to make the code more readable.
>>>>>>>>>
>>>>>>>>> Suggested-by: Bart Van Assche <bvanassche@acm.org>
>>>>>>>>> Signed-off-by: Can Guo <cang@codeaurora.org>
>>>>>>>>> ---
>>>>>>>>>  drivers/scsi/ufs/ufshcd.c | 12 +++++++-----
>>>>>>>>>  1 file changed, 7 insertions(+), 5 deletions(-)
>>>>>>>>>
>>>>>>>>> diff --git a/drivers/scsi/ufs/ufshcd.c b/drivers/scsi/ufs/ufshcd.c
>>>>>>>>> index 3695dd2..a09e4a2 100644
>>>>>>>>> --- a/drivers/scsi/ufs/ufshcd.c
>>>>>>>>> +++ b/drivers/scsi/ufs/ufshcd.c
>>>>>>>>> @@ -5907,6 +5907,11 @@ static void ufshcd_clk_scaling_suspend(struct ufs_hba *hba, bool suspend)
>>>>>>>>>
>>>>>>>>>  static void ufshcd_err_handling_prepare(struct ufs_hba *hba)
>>>>>>>>>  {
>>>>>>>>> +    /*
>>>>>>>>> +     * It is not safe to perform error handling while suspend or resume is
>>>>>>>>> +     * in progress. Hence the lock_system_sleep() call.
>>>>>>>>> +     */
>>>>>>>>> +    lock_system_sleep();
>>>>>>>>
>>>>>>>> It looks to me like the system takes this lock quite early, even before
>>>>>>>> freezing tasks, so if anything needs the error handler to run it will
>>>>>>>> deadlock.
>>>>>>>
>>>>>>> Hi Adrian,
>>>>>>>
>>>>>>> UFS/hba system suspend/resume does not invoke error handling synchronously.
>>>>>>> So, whatever UFS error (which schedules the error handler) happens during
>>>>>>> suspend/resume, the error handler will just wait here until system
>>>>>>> suspend/resume releases the lock. Hence there is no worry of deadlock here.
>>>>>>
>>>>>> It looks to me like the state can change to UFSHCD_STATE_EH_SCHEDULED_FATAL
>>>>>> and since user processes are not frozen, nor file systems sync'ed, everything
>>>>>> is going to deadlock.
>>>>>> i.e.
>>>>>> I/O is blocked waiting on error handling
>>>>>> error handling is blocked waiting on lock_system_sleep()
>>>>>> suspend is blocked waiting on I/O
>>>>>>
>>>>>
>>>>> Hi Adrian,
>>>>>
>>>>> First of all, enter_state(suspend_state_t state) uses mutex_trylock(&system_transition_mutex).
>>>>
>>>> Yes, in the case I am outlining it gets the mutex.
>>>>
>>>>> Second, even if that happens, in ufshcd_queuecommand() the logic below will
>>>>> break the cycle by fast-failing the PM request (the code below is from the
>>>>> code tip with this whole series applied).
>>>>
>>>> It won't get that far because the suspend will be waiting to sync filesystems.
>>>> Filesystems will be waiting on I/O.
>>>> I/O will be waiting on the error handler.
>>>> The error handler will be waiting on system_transition_mutex.
>>>> But system_transition_mutex is already held by PM core.
>>>
>>> Hi Adrian,
>>>
>>> You are right.... I missed the action of syncing filesystems...
>>>
>>> Going back to using host_sem in suspend_prepare()/resume_complete() won't
>>> have this deadlock problem, right?
>>
>> I am not sure, but what was the problem that the V3 patch was fixing?
>> Can you give an example?
>
> V3 was moving host_sem from wl_system_suspend/resume() to
> ufshcd_suspend_prepare()/ufshcd_resume_complete(). It is to
> make sure error handling does not run concurrently with system
> PM, since error handling recovers/clears the runtime PM errors
> of all the scsi devices under the hba (in patch #8). Having the
> error handler do so (in patch #8) is because the runtime PM
> framework may save the runtime errors of the supplier to one or
> more consumers (unlike the child-parent relationship); for
> example, if wlu resume fails, sda and/or other scsi devices may
> save the resume error and then be left runtime suspended
> permanently.

Sorry for the slow reply. I was going to do some more investigation but
never found time.

I was wondering if it would be simpler to do the error recovery for
wl_system_suspend/resume() before exiting wl_system_suspend/resume().

Then it would be possible to do something along these lines (rough sketch after the list):
- prevent runtime suspend while the error handler is outstanding
- at suspend, block queuing of the error handler work and flush it
- at resume, allow queuing of the error handler work
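
For illustration only, something like the untested sketch below - the
"eh_allowed" flag and the helper names are made up, and it assumes the
existing hba->eh_wq / hba->eh_work; every place the error handler gets
scheduled would still need auditing:

	/* Wherever the error handler is scheduled today: */
	static void sketch_schedule_eh(struct ufs_hba *hba)
	{
		unsigned long flags;

		spin_lock_irqsave(hba->host->host_lock, flags);
		/* hba->eh_allowed is a hypothetical new flag */
		if (hba->eh_allowed && queue_work(hba->eh_wq, &hba->eh_work))
			/* keep hba from runtime suspending while eh is outstanding */
			pm_runtime_get_noresume(hba->dev);
		spin_unlock_irqrestore(hba->host->host_lock, flags);
	}

	/* From the suspend / ->prepare() side: */
	static void sketch_block_eh(struct ufs_hba *hba)
	{
		unsigned long flags;

		spin_lock_irqsave(hba->host->host_lock, flags);
		hba->eh_allowed = false;	/* no new eh work from here on */
		spin_unlock_irqrestore(hba->host->host_lock, flags);

		flush_work(&hba->eh_work);	/* wait out anything in flight */
	}

	/* From the resume / ->complete() side: */
	static void sketch_allow_eh(struct ufs_hba *hba)
	{
		unsigned long flags;

		spin_lock_irqsave(hba->host->host_lock, flags);
		hba->eh_allowed = true;
		spin_unlock_irqrestore(hba->host->host_lock, flags);
	}

The error handler itself would then drop the reference at the end, e.g. with
pm_runtime_put_noidle(hba->dev).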
