Subject: Re: [PATCH 2/3] drm/msm/gpu: Park scheduler threads for system suspend

On 2022-03-17 13:35, Rob Clark wrote:
> On Thu, Mar 17, 2022 at 9:45 AM Christian König
> <christian.koenig@amd.com> wrote:
>> Am 17.03.22 um 17:18 schrieb Rob Clark:
>>> On Thu, Mar 17, 2022 at 9:04 AM Christian König
>>> <christian.koenig@amd.com> wrote:
>>>> Am 17.03.22 um 16:10 schrieb Rob Clark:
>>>>> [SNIP]
>>>>> userspace frozen != kthread frozen .. that is what this patch is
>>>>> trying to address, so we aren't racing between shutting down the hw
>>>>> and the scheduler shoveling more jobs at us.
>>>> Well, exactly that's the problem. The scheduler is supposed to keep
>>>> shoveling more jobs at us until its queue is empty.
>>>>
>>>> Thinking more about it, we will then keep some dma_fence instances
>>>> unsignaled, and that is an extremely bad idea since it can lead to
>>>> deadlocks during suspend.
>>> Hmm, perhaps that is true if you need to migrate things out of vram?
>>> It is at least not a problem when vram is not involved.
>> No, it's much wider than that.
>>
>> See what can happen is that the memory management shrinkers want to wait
>> for a dma_fence during suspend.
> we don't wait on fences in the shrinker, we only purge or evict things
> that are already ready. Actually, waiting on fences in the shrinker path
> sounds like a pretty bad idea.
>
>> And if you stop the scheduler they will just wait forever.
>>
>> What you need to do instead is to drain the scheduler, e.g. call
>> drm_sched_entity_flush() with a proper timeout for each entity you have
>> created.
> yeah, it would work to drain the scheduler.. I guess that might be the
> more portable approach as far as a generic solution for suspend goes.
>
> BR,
> -R
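
[Editor's note: a rough sketch of the drain Christian suggests and Rob agrees
to above. drm_sched_entity_flush() is the real scheduler helper; the foo_gpu
driver, its context list/lock, and the one-second timeout are made-up
placeholders for illustration, not code from the patch.]

#include <drm/gpu_scheduler.h>
#include <linux/jiffies.h>
#include <linux/list.h>
#include <linux/mutex.h>

/* Hypothetical driver state, for illustration only. */
struct foo_context {
	struct drm_sched_entity entity;
	struct list_head node;
};

struct foo_gpu {
	struct mutex ctx_lock;
	struct list_head ctx_list;	/* every foo_context the driver created */
};

/* Drain all scheduler entities before powering down the hw. */
static int foo_gpu_drain_entities(struct foo_gpu *gpu)
{
	struct foo_context *ctx;
	int ret = 0;

	mutex_lock(&gpu->ctx_lock);
	list_for_each_entry(ctx, &gpu->ctx_list, node) {
		/* Wait (bounded) for this entity's job queue to go idle. */
		long left = drm_sched_entity_flush(&ctx->entity,
						   msecs_to_jiffies(1000));
		if (left <= 0) {
			ret = -ETIMEDOUT;
			break;
		}
	}
	mutex_unlock(&gpu->ctx_lock);

	/* Only if this returns 0 is it safe to shut the hw down. */
	return ret;
}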


I am not sure how this drains the scheduler. Suppose we do the waiting in
drm_sched_entity_flush(); what prevents someone from pushing another job
into the same entity's queue right after that? Shouldn't we first disable
further pushing of jobs into the entity before we wait for
sched->job_scheduled?

Andrey
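
[Editor's note: to make the ordering being asked about concrete, here is a
minimal sketch building on the hypothetical foo_gpu above: the submit path is
gated by a suspending flag before the entities are flushed, so no new job can
be pushed while the drain waits on sched->job_scheduled. All foo_* names are
made up; only drm_sched_entity_flush() (called inside foo_gpu_drain_entities())
is the real API.]

/* Assumes an extra 'bool suspending;' member in the hypothetical foo_gpu. */

/* Submit path: refuse (or park) new work once suspend has begun. */
static int foo_gpu_submit(struct foo_gpu *gpu, struct drm_sched_job *job)
{
	if (READ_ONCE(gpu->suspending))
		return -EBUSY;	/* or wait for resume, depending on policy */

	return foo_gpu_queue_job(gpu, job);	/* hypothetical: pushes into the entity */
}

/* Suspend path: set the flag *before* draining, then flush the entities. */
static int foo_gpu_suspend_drain(struct foo_gpu *gpu)
{
	WRITE_ONCE(gpu->suspending, true);

	/* Nothing new can land in the entities while we wait below. */
	return foo_gpu_drain_entities(gpu);
}

Whether the submit side blocks, returns -EBUSY, or simply relies on userspace
already being frozen is exactly the policy question this thread is debating;
the point of the sketch is only that the gate has to come before the wait.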


>
>> Regards,
>> Christian.
>>
>>>> So this patch here is an absolutely clear NAK from my side. If amdgpu is
>>>> doing something similar, that is a severe bug and needs to be addressed
>>>> somehow.
>>> I think amdgpu's use of kthread_park is not related to suspend, but I
>>> didn't look too closely.
>>>
>>> And perhaps the solution for this problem is more complex in the case
>>> of amdgpu; I'm not super familiar with the constraints there. But I
>>> think it is a fine solution for integrated GPUs.
>>>
>>> BR,
>>> -R
>>>
>>>> Regards,
>>>> Christian.
>>>>
>>>>> BR,
>>>>> -R
>>>>>
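
[Editor's note: for reference, the approach named in the patch subject, and
NAKed by Christian above, amounts to parking the scheduler kthreads around
system suspend. A hedged sketch follows, assuming the scheduler still runs on
a kthread exposed as sched->thread, as it did in kernels of this era; the
foo_* names are placeholders, not the actual msm code.]

#include <drm/gpu_scheduler.h>
#include <linux/kthread.h>

/* Park every ring's scheduler so it stops pulling jobs off its queue. */
static void foo_gpu_park_scheds(struct drm_gpu_scheduler **scheds, int nr)
{
	int i;

	for (i = 0; i < nr; i++)
		kthread_park(scheds[i]->thread);
}

/* Unpark on resume so queued jobs start flowing to the hw again. */
static void foo_gpu_unpark_scheds(struct drm_gpu_scheduler **scheds, int nr)
{
	int i;

	for (i = 0; i < nr; i++)
		kthread_unpark(scheds[i]->thread);
}

The objection raised above is that parking leaves queued jobs, and therefore
their dma_fences, pending across suspend instead of letting them drain, which
is what the drm_sched_entity_flush() variant avoids.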
