Subject: Re: [PATCH 3/4] drm/udl: Kill pending URBs at suspend and disconnect
From: Thomas Zimmermann <tzimmermann@suse.de>
Hi

Am 09.08.22 um 09:15 schrieb Takashi Iwai:
> On Tue, 09 Aug 2022 09:13:16 +0200,
> Thomas Zimmermann wrote:
>>
>> Hi
>>
>> Am 04.08.22 um 09:58 schrieb Takashi Iwai:
>>> At both suspend and disconnect, we should rather cancel the pending
>>> URBs immediately. For the suspend case, the display will be turned
>>> off, so it makes no sense to process the rendering. And for the
>>> disconnect case, the device may be no longer accessible, hence we
>>> shouldn't do any submission.
>>>
>>> Tested-by: Thomas Zimmermann <tzimmermann@suse.de>
>>> Signed-off-by: Takashi Iwai <tiwai@suse.de>
>>> ---
>>> drivers/gpu/drm/udl/udl_drv.h | 2 ++
>>> drivers/gpu/drm/udl/udl_main.c | 25 ++++++++++++++++++++++---
>>> drivers/gpu/drm/udl/udl_modeset.c | 2 ++
>>> 3 files changed, 26 insertions(+), 3 deletions(-)
>>>
>>> diff --git a/drivers/gpu/drm/udl/udl_drv.h b/drivers/gpu/drm/udl/udl_drv.h
>>> index f01e50c5b7b7..28aaf75d71cf 100644
>>> --- a/drivers/gpu/drm/udl/udl_drv.h
>>> +++ b/drivers/gpu/drm/udl/udl_drv.h
>>> @@ -39,6 +39,7 @@ struct urb_node {
>>> struct urb_list {
>>> struct list_head list;
>>> + struct list_head in_flight;
>>> spinlock_t lock;
>>> wait_queue_head_t sleep;
>>> int available;
>>> @@ -84,6 +85,7 @@ static inline struct urb *udl_get_urb(struct drm_device *dev)
>>> int udl_submit_urb(struct drm_device *dev, struct urb *urb,
>>> size_t len);
>>> int udl_sync_pending_urbs(struct drm_device *dev);
>>> +void udl_kill_pending_urbs(struct drm_device *dev);
>>> void udl_urb_completion(struct urb *urb);
>>> int udl_init(struct udl_device *udl);
>>> diff --git a/drivers/gpu/drm/udl/udl_main.c b/drivers/gpu/drm/udl/udl_main.c
>>> index 93615648414b..47204b7eb10e 100644
>>> --- a/drivers/gpu/drm/udl/udl_main.c
>>> +++ b/drivers/gpu/drm/udl/udl_main.c
>>> @@ -135,7 +135,7 @@ void udl_urb_completion(struct urb *urb)
>>> urb->transfer_buffer_length = udl->urbs.size; /* reset to actual */
>>> spin_lock_irqsave(&udl->urbs.lock, flags);
>>> - list_add_tail(&unode->entry, &udl->urbs.list);
>>> + list_move(&unode->entry, &udl->urbs.list);
>>> udl->urbs.available++;
>>> spin_unlock_irqrestore(&udl->urbs.lock, flags);
>>> @@ -180,6 +180,7 @@ static int udl_alloc_urb_list(struct drm_device *dev, int count, size_t size)
>>> retry:
>>> udl->urbs.size = size;
>>> INIT_LIST_HEAD(&udl->urbs.list);
>>> + INIT_LIST_HEAD(&udl->urbs.in_flight);
>>> init_waitqueue_head(&udl->urbs.sleep);
>>> udl->urbs.count = 0;
>>> @@ -246,7 +247,7 @@ struct urb *udl_get_urb_timeout(struct drm_device *dev, long timeout)
>>> }
>>> unode = list_first_entry(&udl->urbs.list, struct urb_node, entry);
>>> - list_del_init(&unode->entry);
>>> + list_move(&unode->entry, &udl->urbs.in_flight);
>>> udl->urbs.available--;
>>> unlock:
>>> @@ -279,7 +280,7 @@ int udl_sync_pending_urbs(struct drm_device *dev)
>>> spin_lock_irq(&udl->urbs.lock);
>>> /* 2 seconds as a sane timeout */
>>> if (!wait_event_lock_irq_timeout(udl->urbs.sleep,
>>> - udl->urbs.available == udl->urbs.count,
>>> + list_empty(&udl->urbs.in_flight),
>>> udl->urbs.lock,
>>> msecs_to_jiffies(2000)))
>>> ret = -ETIMEDOUT;
>>> @@ -287,6 +288,23 @@ int udl_sync_pending_urbs(struct drm_device *dev)
>>> return ret;
>>> }
>>> +/* kill pending URBs */
>>> +void udl_kill_pending_urbs(struct drm_device *dev)
>>> +{
>>> + struct udl_device *udl = to_udl(dev);
>>> + struct urb_node *unode;
>>> +
>>> + spin_lock_irq(&udl->urbs.lock);
>>> + while (!list_empty(&udl->urbs.in_flight)) {
>>> + unode = list_first_entry(&udl->urbs.in_flight,
>>> + struct urb_node, entry);
>>> + spin_unlock_irq(&udl->urbs.lock);
>>> + usb_kill_urb(unode->urb);
>>> + spin_lock_irq(&udl->urbs.lock);
>>> + }
>>> + spin_unlock_irq(&udl->urbs.lock);
>>> +}
>>> +
>>> int udl_init(struct udl_device *udl)
>>> {
>>> struct drm_device *dev = &udl->drm;
>>> @@ -335,6 +353,7 @@ int udl_drop_usb(struct drm_device *dev)
>>> {
>>> struct udl_device *udl = to_udl(dev);
>>> + udl_kill_pending_urbs(dev);
>>> udl_free_urb_list(dev);
>>> put_device(udl->dmadev);
>>> udl->dmadev = NULL;
>>> diff --git a/drivers/gpu/drm/udl/udl_modeset.c b/drivers/gpu/drm/udl/udl_modeset.c
>>> index 50025606b6ad..169110d8fc2e 100644
>>> --- a/drivers/gpu/drm/udl/udl_modeset.c
>>> +++ b/drivers/gpu/drm/udl/udl_modeset.c
>>> @@ -397,6 +397,8 @@ udl_simple_display_pipe_disable(struct drm_simple_display_pipe *pipe)
>>> struct urb *urb;
>>> char *buf;
>>> + udl_kill_pending_urbs(dev);
>>> +
>>
>> I already reviewed the patchset, but I have another comment: I think
>> we should only kill URBs from within the suspend handler. The same
>> goes for the call to the URB-sync function in patch 2.
>>
>> This disable function is part of the regular modeset path, so it's
>> probably not appropriate to outright remove pending URBs here; it can
>> lead to failed modesets that would otherwise have succeeded.
>
> Well, the device shall be turned off right after that point, so all
> the pending rendering makes little sense, no?

udl_simple_display_pipe_disable() only disables the display, not the
device. Killing URBs here could cancel a valid modeset operation that is
still in progress, and it's unclear what state the device would be left
in afterwards.
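
For illustration, a minimal sketch of what killing the URBs only from
the suspend path might look like, assuming it is hooked into udl's USB
suspend callback in udl_drv.c; the exact placement and ordering relative
to drm_mode_config_helper_suspend() are assumptions here, not part of
the posted series:

	/*
	 * Hypothetical sketch only: cancel in-flight URBs from the USB
	 * suspend callback rather than from the pipe-disable path.
	 */
	static int udl_usb_suspend(struct usb_interface *interface,
				   pm_message_t message)
	{
		struct drm_device *dev = usb_get_intfdata(interface);
		int ret;

		/* Stop scanout updates before touching the URB queues. */
		ret = drm_mode_config_helper_suspend(dev);
		if (ret)
			return ret;

		/*
		 * The display is off now; cancel any in-flight URBs so
		 * they don't complete against a suspended device.
		 */
		udl_kill_pending_urbs(dev);

		return 0;
	}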

Best regards
Thomas

>
>
> Takashi

--
Thomas Zimmermann
Graphics Driver Developer
SUSE Software Solutions Germany GmbH
Maxfeldstr. 5, 90409 Nürnberg, Germany
(HRB 36809, AG Nürnberg)
Geschäftsführer: Ivo Totev