Subject: Re: [PATCH v2 4/6] dmaengine: dw: Print warning if multi-block is unsupported
On Mon, May 11, 2020 at 06:13:44AM +0300, Serge Semin wrote:
> On Fri, May 08, 2020 at 10:06:22PM +0300, Andy Shevchenko wrote:
> > On Fri, May 08, 2020 at 12:53:34PM +0100, Mark Brown wrote:
> > > On Fri, May 08, 2020 at 02:26:04PM +0300, Andy Shevchenko wrote:
> > > > On Fri, May 08, 2020 at 01:53:02PM +0300, Serge Semin wrote:
> > >
> > > > > Multi-block support provides a way to map the kernel-specific SG table so
> > > > > the DW DMA device would handle it as a whole instead of handling the
> > > > > SG-list items, or so-called LLP block items, one by one. So if a true LLP
> > > > > list isn't supported by the DW DMA engine, then soft-LLP mode will be
> > > > > utilized to load and execute each LLP block one by one. A problem may
> > > > > happen for multi-block DMA slave transfers, when the slave device buffers
> > > > > (for example Tx and Rx FIFOs) depend on each other and are smaller
> > > > > than the block size. In this case writing data to the DMA slave Tx buffer
> > > > > may cause an Rx buffer overflow if the Rx DMA channel is paused to
> > > > > reinitialize the DW DMA controller with the next Rx LLP item. In particular,
> > > > > we've discovered this problem in the framework of the DW APB SPI device
> > >
> > > > Mark, do we have any adjustment knobs in SPI core to cope with this?
> > >
> > > Frankly I'm not sure I follow what the issue is - is an LLP block item
> > > different from an SG list entry? As far as I can tell the problem is
> > > that the DMA controller does not support chaining transactions together
> > > and possibly also has a limit on the transfer size? Or possibly some
> > > issue with the DMA controller locking the CPU out of the I/O bus for
> > > noticeable periods? I can't really think what we could do about that if
> > > the issue is transfer sizes, that just seems like hardware which is
> > > never going to work reliably. If the issue is not being able to chain
> > > transfers then possibly an option to linearize messages into a single
> > > transfer, as suggested to cope with PIO devices with ill-considered
> > > automated chip-select handling, though at some point you have to worry
> > > about the cost of the memcpy() vs the cost of just doing PIO.
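
A minimal sketch of that linearization idea (the helper name spi_linearize_tx()
is hypothetical, not an existing SPI core API): collapse all tx_buf segments of a
message into one contiguous bounce buffer so it can be submitted as a single
transfer, at exactly the memcpy() cost mentioned above.

#include <linux/slab.h>
#include <linux/string.h>
#include <linux/spi/spi.h>

/* Hypothetical helper: flatten all Tx segments of a message into one
 * contiguous bounce buffer; the caller submits it as a single transfer. */
static void *spi_linearize_tx(struct spi_message *msg, size_t *total)
{
	struct spi_transfer *xfer;
	size_t off = 0;
	void *bounce;

	*total = 0;
	list_for_each_entry(xfer, &msg->transfers, transfer_list)
		*total += xfer->len;

	bounce = kzalloc(*total, GFP_KERNEL);
	if (!bounce)
		return NULL;

	list_for_each_entry(xfer, &msg->transfers, transfer_list) {
		if (xfer->tx_buf)
			memcpy(bounce + off, xfer->tx_buf, xfer->len);
		off += xfer->len;
	}
	return bounce;
}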
> >
> > My understanding is that the programmed transfers (as separate items in the SG
> > list) can be desynchronized due to the LLP emulation in the DMA driver. And the
> > suggestion probably is that using only single-entry (block) SG lists will do the
> > trick (I guess we can configure the SPI core to change or not change CS between them).
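
For illustration, a minimal sketch of that single-entry approach, assuming
'chan', 'buf' and 'len' come from the surrounding driver code: with one SG entry
there is a single hardware block, so the soft-LLP reload window never opens.

#include <linux/dmaengine.h>
#include <linux/scatterlist.h>

static struct dma_async_tx_descriptor *
prep_single_block(struct dma_chan *chan, void *buf, size_t len)
{
	struct scatterlist sg;

	/* One SG entry => one hardware block => no LLP emulation */
	sg_init_one(&sg, buf, len);
	if (!dma_map_sg(chan->device->dev, &sg, 1, DMA_TO_DEVICE))
		return NULL;

	return dmaengine_prep_slave_sg(chan, &sg, 1, DMA_MEM_TO_DEV,
				       DMA_PREP_INTERRUPT);
}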
>
> CS has nothing to do with this.

I meant that when you split a message into single-entry SG transfers, you may
need to tell the SPI core whether or not to toggle CS between them.

> The problem is purely in the LLP emulation and in the Tx
> channel being enabled before the Rx channel is initialized during the next LLP
> reload. Yes, if we have Tx and Rx SG/LLP lists consisting of a single item, then
> there is no problem. Though it would be good to fix the issue in general instead
> of imposing such a fatal restriction. If we had some fence blocking one channel
> until the other is reinitialized, the problem could theoretically be solved.
>
> It could be an interdependent DMA channels functionality. If two channels are
> interdependent, then the Rx channel could pause the Tx channel while it's in the
> IRQ handling procedure (or at some other point... call a callback?). This !might!
> fix the problem, but with no 100% guarantee of success. It will work only if the
> IRQ handler is executed with small latency, so that the Tx channel is paused
> before the Rx FIFO has been filled and overrun.
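
A rough sketch of what that pausing could look like using the generic dmaengine
calls (the callback name and the Tx/Rx pairing are hypothetical; as said above,
this only helps if the handler runs before the Rx FIFO overflows):

#include <linux/dmaengine.h>

/* Hypothetical Rx block-completion callback: fence the paired Tx
 * channel while the Rx channel is reloaded with the next LLP item. */
static void dw_rx_block_complete(void *param)
{
	struct dma_chan *txc = param;	/* paired Tx channel */

	dmaengine_pause(txc);
	/* ...reinitialize the Rx channel with the next LLP item... */
	dmaengine_resume(txc);
}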
>
> Another solution could be to reinitialize the interdependent channels
> synchronously: the Tx channel stops and waits until the Rx channel has finished
> retrieving data from the SPI Rx FIFO. Though this solution implies
> the Tx and Rx buffers of the SG/LLP items being of the same size.
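
That synchronous variant could be sketched with a completion (the names are
again illustrative, and, as noted, it assumes equally sized Tx and Rx SG/LLP
items):

#include <linux/completion.h>

static DECLARE_COMPLETION(rx_block_done);

/* Rx side: signal that the current Rx block has drained the SPI Rx FIFO */
static void dw_rx_block_done(void *param)
{
	complete(&rx_block_done);
}

/* Tx side: fence on the Rx channel before loading the next Tx LLP item */
static void dw_tx_reload_next_block(void)
{
	wait_for_completion(&rx_block_done);
	reinit_completion(&rx_block_done);
	/* ...now safe to load and start the next Tx block... */
}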
>
> Although I don't really like either of these solutions enough to spend time on
> their development.

I think you don't need to go too far with it; we can get an easier solution (as
discussed in the continuation of this thread).

> > > > > working in conjunction with DW DMA. Since there is no comprehensive way to
> > > > > fix it right now, let's at least print a warning for the first found
> > > > > multi-blockless DW DMAC channel. This shall point a developer to the
> > > > > possible cause of the problem if one experiences sudden data loss.
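
For reference, that warning could look roughly like this (the 'multi_block'
capability flag is modelled on the DW DMAC platform data; the exact check in the
real patch may differ):

#include <linux/device.h>

static void dw_warn_if_no_multiblock(struct device *dev, bool multi_block,
				     unsigned int chan_id)
{
	if (!multi_block)
		dev_warn_once(dev,
			      "channel %u lacks multi-block support, slave transfers may lose data\n",
			      chan_id);
}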
> > >
> > > I thought from the description of the SPI driver I just reviewed that
> > > this hardware didn't have DMA? Or are there separate blocks in the
> > > hardware that have a more standard instantiation of the DesignWare SPI
> > > controller with DMA attached?
> >
> > I speculate that the right words there should be 'we don't enable DMA right now
> > due to some issues' (see above).
>
> It's your speculation, and it's kind of offensive, implicitly implying I was
> lying.

Sorry if you think so. I didn't imply you were lying; I simply didn't get the big
picture, but here you elaborate it better, thank you.

> If our System SPI controller had DMA, I would have said that and would
> have supported it in the driver, and probably wouldn't have bothered with
> dedicated driver development. Again, the Baikal-T1 System Boot SPI controller
> doesn't have DMA, doesn't have an IRQ, is equipped with only an 8-byte FIFO, is
> embedded into the Boot Controller, provides a dirmap interface to an SPI flash,
> and so on. Baikal-T1 has also got two more normal DW APB SSI interfaces with
> 64-byte FIFOs, IRQ and DMA.

--
With Best Regards,
Andy Shevchenko

