Subject: RE: [EXT] [PATCH net] net: mvneta: fix the Rx desc buffer DMA unmapping
From: Yelena Krivosheev <yelena@marvell.com>
Hi Gregory.

Let me clarify what the static function mvneta_rxq_drop_pkts() does:

static void mvneta_rxq_drop_pkts(struct mvneta_port *pp,
				 struct mvneta_rx_queue *rxq)
{
	int rx_done, i;

	rx_done = mvneta_rxq_busy_desc_num_get(pp, rxq);
	if (rx_done)
		mvneta_rxq_desc_num_update(pp, rxq, rx_done, rx_done);

	if (pp->bm_priv) {	<---------------- this is the HWBM case
		for (i = 0; i < rx_done; i++) {
			struct mvneta_rx_desc *rx_desc =
				mvneta_rxq_next_desc_get(rxq);
			u8 pool_id = MVNETA_RX_GET_BM_POOL_ID(rx_desc);
			struct mvneta_bm_pool *bm_pool;

			bm_pool = &pp->bm_priv->bm_pools[pool_id];
			/* Return dropped buffer to the pool */
			mvneta_bm_pool_put_bp(pp->bm_priv, bm_pool,
					      rx_desc->buf_phys_addr);
		}
		return;
	}

	<---------------- this is the SWBM-only case
	for (i = 0; i < rxq->size; i++) {
		struct mvneta_rx_desc *rx_desc = rxq->descs + i;
		void *data = rxq->buf_virt_addr[i];

		if (!data || !(rx_desc->buf_phys_addr))
			continue;

		dma_unmap_single(pp->dev->dev.parent, rx_desc->buf_phys_addr,
				 MVNETA_RX_BUF_SIZE(pp->pkt_size), DMA_FROM_DEVICE);
		__free_page(data);
	}
}

So I suggest fixing this dma_unmap_single() call too.
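For illustration, the SWBM loop would then become something like the sketch below (untested; it assumes the Rx buffers are pages mapped with dma_map_page(), as in the patch under discussion):

	for (i = 0; i < rxq->size; i++) {
		struct mvneta_rx_desc *rx_desc = rxq->descs + i;
		void *data = rxq->buf_virt_addr[i];

		if (!data || !(rx_desc->buf_phys_addr))
			continue;

		/* Sketch only: unmap as a page, to match the
		 * dma_map_page() assumed to be done at buffer
		 * allocation time in the SWBM path.
		 */
		dma_unmap_page(pp->dev->dev.parent, rx_desc->buf_phys_addr,
			       PAGE_SIZE, DMA_FROM_DEVICE);
		__free_page(data);
	}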

Thanks.
Yelena


-----Original Message-----
From: Gregory CLEMENT [mailto:gregory.clement@bootlin.com]
Sent: Thursday, September 20, 2018 6:00 PM
To: Antoine Tenart <antoine.tenart@bootlin.com>
Cc: Yelena Krivosheev <yelena@marvell.com>; davem@davemloft.net; netdev@vger.kernel.org; linux-kernel@vger.kernel.org; thomas.petazzoni@bootlin.com; maxime.chevallier@bootlin.com; miquel.raynal@bootlin.com; Nadav Haklai <nadavh@marvell.com>; Stefan Chulski <stefanc@marvell.com>; Yan Markman <ymarkman@marvell.com>; mw@semihalf.com
Subject: Re: [EXT] [PATCH net] net: mvneta: fix the Rx desc buffer DMA unmapping

Hi Antoine,

On Thu, Sep 20 2018, Antoine Tenart <antoine.tenart@bootlin.com> wrote:

> Hi Yelena,
>
> On Thu, Sep 20, 2018 at 10:14:56AM +0000, Yelena Krivosheev wrote:
>>
>> Please check and fix all cases of dma_unmap_single() usage.
>> See mvneta_rxq_drop_pkts()
>> ...
>> 	if (!data || !(rx_desc->buf_phys_addr))
>> 		continue;
>> 	dma_unmap_single(pp->dev->dev.parent, rx_desc->buf_phys_addr,
>> 			 MVNETA_RX_BUF_SIZE(pp->pkt_size), DMA_FROM_DEVICE);
>> 	__free_page(data);
>> ...
>
> I had a look at the one reported by CONFIG_DMA_API_DEBUG, and at DMA
> unmapping calls using PAGE_SIZE. As you pointed out there might be
> other parts, thanks!

Actually, Jisheng submitted a similar patch a few weeks ago, and as I pointed out at the time, the dma_unmap in mvneta_rxq_drop_pkts() can be called when the allocation was done with HWBM, which uses a dma_map_single().

I thought that in this case keeping dma_unmap_single() is the thing to do, even if in the SWBM case it is less optimal.
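To make the constraint explicit: the DMA API requires the unmap call to mirror the map call, which is exactly what CONFIG_DMA_API_DEBUG checks. A minimal illustration with a hypothetical buffer (not mvneta code):

	/* Hypothetical buffer, for illustration only. */
	dma_addr_t pa = dma_map_single(dev, vaddr, len, DMA_FROM_DEVICE);

	if (dma_mapping_error(dev, pa))
		return -ENOMEM;

	/* ... the device DMAs into the buffer ... */

	/* Must mirror the mapping call: */
	dma_unmap_single(dev, pa, len, DMA_FROM_DEVICE);

	/* Unmapping with dma_unmap_page() here instead would trigger a
	 * CONFIG_DMA_API_DEBUG warning, since the buffer was mapped with
	 * dma_map_single().
	 */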

Gregory

>
> Antoine
>
> --
> Antoine Ténart, Bootlin
> Embedded Linux and Kernel engineering
> https://bootlin.com

--
Gregory Clement, Bootlin
Embedded Linux and Kernel engineering
http://bootlin.com