Subject: Re: [PATCH 1/2] ASoC: dwc: add a quirk DW_I2S_QUIRK_STOP_ON_SHUTDOWN to dwc driver


On 4/23/21 7:46 PM, Mark Brown wrote:
> On Fri, Apr 23, 2021 at 09:54:38PM +0530, Vijendar Mukunda wrote:
>
>> For CZ/StoneyRidge platforms, ACP DMA between ACP SRAM and
>> I2S FIFO should be stopped before stopping I2S Controller DMA.
>
>> When DMA is in progress and a stop request is received, i.e. while a
>> transfer between ACP SRAM and the I2S FIFO is still ongoing, stopping
>> the I2S DMA before the ACP DMA results in a DMA channel stop failure.
>
> This again... copying in Peter for the sequencing discussion. If we
> need to do this I'm not convinced that bodging it in the driver is a
> good idea, and especially not deferring it outside of the trigger
> operation - for example on a suspend or pause we won't actually do a
> shutdown() so the trigger will end up not happening which seems like it
> may cause problems.

It will certainly leave the I2S running and can lead to hard-to-explain
issues.

> We'd probably be better off with the core knowing
> what's going on and being able to reorder the callbacks although
> designing an interface for that seems a bit annoying.

I agree, it would be better to have some sort of flag which tells the
core that there is an integration issue between the DMA and the
peripheral. I believe this only affects playback?
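
Something like this in the core could do it (a rough sketch only; the
stop_dma_first name and this simplified soc_pcm_trigger(), with error
rollback and the link trigger omitted, are assumptions for
illustration, not existing code):

/*
 * Sketch: a dai_link flag that lets the core reverse the component
 * (DMA) vs DAI stop order for links with this integration issue.
 */
static int soc_pcm_trigger(struct snd_pcm_substream *substream, int cmd)
{
	struct snd_soc_pcm_runtime *rtd = substream->private_data;
	int ret;

	switch (cmd) {
	case SNDRV_PCM_TRIGGER_STOP:
	case SNDRV_PCM_TRIGGER_SUSPEND:
	case SNDRV_PCM_TRIGGER_PAUSE_PUSH:
		if (rtd->dai_link->stop_dma_first) {
			/* quirk: stop the DMA (component) first */
			ret = snd_soc_pcm_component_trigger(substream, cmd);
			if (ret < 0)
				return ret;
			return snd_soc_pcm_dai_trigger(substream, cmd);
		}
		/* default order: DAI first, then the DMA */
		ret = snd_soc_pcm_dai_trigger(substream, cmd);
		if (ret < 0)
			return ret;
		return snd_soc_pcm_component_trigger(substream, cmd);
	default:
		/* start/resume/pause-release path unchanged here */
		ret = snd_soc_pcm_component_trigger(substream, cmd);
		if (ret < 0)
			return ret;
		return snd_soc_pcm_dai_trigger(substream, cmd);
	}
}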

>> This issue can't be fixed in ACP DMA driver due to design constraint.
>
> What is the design constraint here - can't we fix the design? Or is it
> a hardware design constraint (presumably broken signalling between the
> I2S and DMA blocks)?

From the description my guess is that a stop on the DMA wants to flush
its FIFO (complete the in-progress packet/segment). Since the peripheral
is stopped it will not pull in more data -> the DMA will time out
internally.

The question is: how is the ACP DMA driver's terminate_all implemented?
It cannot really wait for the DMA to stop, as we cannot use
terminate_all_sync() in trigger; it must just set a stop bit and make
sure at synchronize() time that it has stopped, right?
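
I.e. something along these lines (the acp_* names and register offsets
are invented for illustration; only the dmaengine terminate_all /
synchronize split is the point):

#include <linux/dmaengine.h>
#include <linux/io.h>
#include <linux/iopoll.h>

/* invented register layout, for illustration only */
#define ACP_DMA_CNTL(n)		(0x00 + (n) * 0x10)
#define ACP_DMA_CH_STS(n)	(0x04 + (n) * 0x10)
#define ACP_DMA_CH_STOP		BIT(0)
#define ACP_DMA_CH_RUN		BIT(0)

struct acp_dma_chan {
	struct dma_chan chan;
	void __iomem *base;
	int id;
};

#define to_acp_dma_chan(c) container_of(c, struct acp_dma_chan, chan)

/* Called from trigger (atomic context): request the stop, do not wait. */
static int acp_dma_terminate_all(struct dma_chan *chan)
{
	struct acp_dma_chan *ac = to_acp_dma_chan(chan);

	writel(ACP_DMA_CH_STOP, ac->base + ACP_DMA_CNTL(ac->id));
	return 0;
}

/* Called from a sleepable context: wait until the channel is idle. */
static void acp_dma_synchronize(struct dma_chan *chan)
{
	struct acp_dma_chan *ac = to_acp_dma_chan(chan);
	u32 sts;

	if (readl_poll_timeout(ac->base + ACP_DMA_CH_STS(ac->id), sts,
			       !(sts & ACP_DMA_CH_RUN), 10, 10000))
		dev_warn(chan->device->dev, "ch%d did not stop\n", ac->id);
}

/* wired up at probe time:
 *	dma_dev->device_terminate_all = acp_dma_terminate_all;
 *	dma_dev->device_synchronize = acp_dma_synchronize;
 */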

What happens if the time between the DMA stop and the DAI stop is less
than it would take to flush the DMA FIFO? You would have the same issue,
but in a rather hard-to-reproduce way?

As a side note: TI's k3-udma initially had a similar issue at the design
phase on the playback side, which got solved by a flush bit on the
channel to detach it from the peripheral and set it to free-run to drain
w/o the peripheral.
On capture, however, I need to push a dummy 'drain' packet to flush out
the data from the DMA (if the stop happens when we do not have an active
descriptor on the channel).

With a flag to reorder the DMA/DAI stop sequence it might work most of
the time, but imho it is going to introduce a nasty time bomb of
failures. Also, your DAI will underflow instead of the DMA reporting an
error.

--
Péter
