 
Subject: Re: [Question] iforce_serio lockup
On Sun, Aug 28, 2022 at 05:05:09PM +0800, Hillf Danton wrote:
> On Fri, 29 Jul 2022 21:24:13 +0200 Greg Tulli <greg.iforce@gmail.com> wrote:
> >
> > Chain exists of:
> > &iforce->xmit_lock --> &port_lock_key --> &serport->lock
> >
> > Possible unsafe locking scenario:
> >
> >        CPU0                    CPU1
> >        ----                    ----
> >   lock(&serport->lock);
> >                                lock(&port_lock_key);
> >                                lock(&serport->lock);
> >   lock(&iforce->xmit_lock);
> >
> > *** DEADLOCK ***
> >
> > 2022-07-11 11:32 GMT+02:00, Greg T <greg.iforce@gmail.com>:
> >
> > > That problem vanishes if we don't call iforce_process_packet directly
> > > from iforce_serio_irq, but from a tasklet. Is that a right approach?
>
> Another option is to do the wakeup without serport->lock held, given
> iforce->xmit_lock. Your test will provide info about any races that
> may come up due to dropping serport->lock.

No, I think we should use work to process the outbound buffer/queue,
as the comment in include/linux/tty_ldisc.h recommends. I believe a
single work instance in struct iforce_serio will suffice:
iforce_serio_xmit() should already be able to handle concurrent
invocations, so we just need to schedule the work from
iforce_serio_write_wakeup() (and it is fine if it is already
scheduled) and let iforce_serio_xmit() do its job.
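
A minimal sketch of that shape (the xmit_work field, the work handler
and the INIT_WORK placement are assumptions on top of the existing
driver, not actual code; iforce_serio_xmit() and
iforce_serio_write_wakeup() are the names discussed above):

#include <linux/serio.h>
#include <linux/workqueue.h>
#include "iforce.h"	/* struct iforce */

static void iforce_serio_xmit(struct iforce *iforce); /* existing TX path */

struct iforce_serio {
	struct iforce iforce;
	struct serio *serio;
	struct work_struct xmit_work;	/* assumed new field */
};

/* Runs in process context, with neither serport->lock nor
 * port_lock_key held, so the chain above cannot close. */
static void iforce_serio_xmit_work(struct work_struct *work)
{
	struct iforce_serio *iforce_serio =
		container_of(work, struct iforce_serio, xmit_work);

	iforce_serio_xmit(&iforce_serio->iforce);
}

static void iforce_serio_write_wakeup(struct serio *serio)
{
	struct iforce_serio *iforce_serio = serio_get_drvdata(serio);

	/*
	 * schedule_work() is a no-op while the work is still pending,
	 * which is fine: iforce_serio_xmit() drains the whole queue
	 * each time it runs.
	 */
	schedule_work(&iforce_serio->xmit_work);
}

(connect() would gain INIT_WORK(&iforce_serio->xmit_work,
iforce_serio_xmit_work) before registering the port.)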

We can wait for the buffer to empty in iforce_serio_stop_io(); that
should take care of any work still running, but we may also call
cancel_work_sync() for good measure.
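
Continuing the sketch, the teardown could look roughly like this (the
wait queue, the empty-buffer condition, and the one-second timeout are
assumptions):

static void iforce_serio_stop_io(struct iforce *iforce)
{
	struct iforce_serio *iforce_serio =
		container_of(iforce, struct iforce_serio, iforce);

	/* Wait until iforce_serio_xmit() has drained the ring. */
	wait_event_timeout(iforce->wait,
			   iforce->xmit.head == iforce->xmit.tail,
			   msecs_to_jiffies(1000));

	/* The work should be idle once the buffer is empty, but
	 * cancel_work_sync() makes that certain. */
	cancel_work_sync(&iforce_serio->xmit_work);

	serio_close(iforce_serio->serio);
}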

Thanks.

--
Dmitry
