Date: 2012-11-07
Subject: Re: [PATCH 13/20] async_tx: do DMA unmap in core for XOR operations

On Mon, Nov 5, 2012 at 2:00 AM, Bartlomiej Zolnierkiewicz
<b.zolnierkie@samsung.com> wrote:
> diff --git a/include/linux/dmaengine.h b/include/linux/dmaengine.h
> index 440b609..0df69f1 100644
> --- a/include/linux/dmaengine.h
> +++ b/include/linux/dmaengine.h
> @@ -392,6 +392,10 @@ void dma_chan_cleanup(struct kref *kref);
> typedef bool (*dma_filter_fn)(struct dma_chan *chan, void *filter_param);
>
> typedef void (*dma_async_tx_callback)(void *dma_async_param);
> +
> +/* max value of ->max_xor from struct dma_device */
> +#define DMA_ASYNC_TX_MAX_ENT 128

This balloons the descriptor size. It looks like the ppc4xx driver will
end up attempting 16MB allocations after this. I think the core should
limit this to something like 16, or at most 32. ppc4xx is also going to
be impacted by the removal of channel-switching support in the core.
Adding Anatolij as a heads-up.
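
For the record, a back-of-the-envelope sketch of where a number like
16MB comes from (assuming a 64-bit dma_addr_t; the 8192-descriptor pool
size below is a hypothetical round figure, not something pulled from the
ppc4xx source):

/* Illustrative only: cost of the two fixed-size unmap arrays. */
#include <stdio.h>

#define DMA_ASYNC_TX_MAX_ENT	128

typedef unsigned long long dma_addr_t;	/* 8 bytes on a 64-bit build */

int main(void)
{
	/* two arrays (src + dst) of DMA_ASYNC_TX_MAX_ENT entries each */
	size_t per_desc = 2 * DMA_ASYNC_TX_MAX_ENT * sizeof(dma_addr_t);
	size_t pool_size = 8192;	/* hypothetical descriptor pool */

	printf("extra bytes per descriptor: %zu\n", per_desc);	/* 2048 */
	printf("extra bytes for the pool:   %zu\n",
	       per_desc * pool_size);	/* 16777216, i.e. 16MB */
	return 0;
}

With the cap lowered to 16, the same hypothetical pool would only grow
by 2MB; at 32 it would be 4MB.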

> +
> /**
> * struct dma_async_tx_descriptor - async transaction descriptor
> * ---dma generic offload fields---
> @@ -402,8 +406,9 @@ typedef void (*dma_async_tx_callback)(void *dma_async_param);
> * @phys: physical address of the descriptor
> * @chan: target channel for this operation
> * @tx_submit: set the prepared descriptor(s) to be executed by the engine
> - * @dma_src: DMA source address (needed for DMA unmap)
> - * @dma_dst: DMA destination address (needed for DMA unmap)
> + * @dma_src: DMA source addresses (needed for DMA unmap)
> + * @dma_src_cnt: number of DMA source addresses (needed for DMA unmap)
> + * @dma_dst: DMA destination addresses (needed for DMA unmap)
> * @dma_len: DMA length (needed for DMA unmap)
> * @callback: routine to call after this operation is complete
> * @callback_param: general parameter to pass to the callback routine
> @@ -420,8 +425,9 @@ struct dma_async_tx_descriptor {
> dma_addr_t phys;
> struct dma_chan *chan;
> dma_cookie_t (*tx_submit)(struct dma_async_tx_descriptor *tx);
> - dma_addr_t dma_src;
> - dma_addr_t dma_dst;
> + dma_addr_t dma_src[DMA_ASYNC_TX_MAX_ENT];
> + unsigned int dma_src_cnt;
> + dma_addr_t dma_dst[DMA_ASYNC_TX_MAX_ENT];
> size_t dma_len;
> dma_async_tx_callback callback;
> void *callback_param;

For engines that don't care about raid, this unmap data should go at
the end of the struct, so that the more frequently used callback fields
hopefully land in the same cacheline as the rest of the hot submission
state.
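
To illustrate, a minimal sketch of that ordering, using only the fields
visible in the quoted hunk (the real struct has more members, elided
here; types come from the surrounding dmaengine.h):

struct dma_async_tx_descriptor {
	/* hot: touched on every submit/complete */
	dma_addr_t phys;
	struct dma_chan *chan;
	dma_cookie_t (*tx_submit)(struct dma_async_tx_descriptor *tx);
	dma_async_tx_callback callback;
	void *callback_param;
	/* ...remaining members elided... */

	/* cold: only read at unmap time, pushed past the hot cacheline */
	size_t dma_len;
	unsigned int dma_src_cnt;
	dma_addr_t dma_src[DMA_ASYNC_TX_MAX_ENT];
	dma_addr_t dma_dst[DMA_ASYNC_TX_MAX_ENT];
};

As posted, the 2KB of arrays sits between tx_submit and callback, so
the two are guaranteed to end up in different cachelines.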

