Date: Tue, 7 Mar 2000 04:47:09 -0800
From: "David S. Miller" <davem@redhat.com>
Subject: Re: DMA mapping
Date: Tue, 7 Mar 2000 12:13:17 +0000
From: Giuliano Procida <myxie@dev.madge.com>

> > At this cost you receive portability and a clean API to work with.
> > Also, you receive a lower driver maintenance cost, see below.

> Portable, once the 32/64 bit issues are sorted out.
IA64 should use a 32-bit dma_addr_t for 2.4.x.
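(For reference, dma_addr_t is just a per-architecture typedef; on 2.4-era x86 it is declared along these lines:)

	/* include/asm-i386/types.h (2.4-era): every address a PCI
	 * device can see fits in 32 bits on x86, so:
	 */
	typedef u32 dma_addr_t;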
> I have a nice piece of hardware that manages state (a 32-bit handle)
> for me. It is just a shame that I will have to hack further state
> tracking on top.
If it were a nice piece of hardware, perhaps a 64-bit handle would have been used, so that you could stick the whole address in there on any current architecture and then you wouldn't even need the silly bus_to_virt calls :-)))))
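In practice the extra tracking is small: one shadow entry per hardware descriptor, holding both the CPU-side and bus-side addresses. A minimal sketch (structure and names are hypothetical, not from any real driver):

	/* Software shadow of one RX descriptor: the CPU-side handle
	 * plus the bus address needed later for unmapping.
	 */
	struct rx_shadow {
		struct sk_buff *skb;      /* CPU-side handle */
		dma_addr_t      mapping;  /* bus-side handle */
	};

	static struct rx_shadow rx_ring_info[RX_RING_SIZE];

	/* When refilling descriptor i, remember both addresses so
	 * that bus_to_virt() is never needed:
	 */
	rx_ring_info[i].skb = skb;
	rx_ring_info[i].mapping = pci_map_single(pdev, skb->data,
						 RX_BUF_SIZE,
						 PCI_DMA_FROMDEVICE);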
> Going off at a slight tangent... all the costs associated with
> mapping and unmapping skbs become a lot smaller if the buffers can be
> recycled in some fashion. Is there a proper (clean, documented) way
> of doing this? The ATM devops interface provided a hook for this and
> may still do so, but I don't know if it is safe.
Sockets hold references to SKBs until the data is read by the application or the socket is closed (at which point the receive queue is purged). So on receive, you would not benefit much from recycling.
> I suppose you saw the recycling mechanism provided by the Microsoft
> NT driver interfaces and considered whether such a scheme would
> benefit Linux as well?
The map/unmap costs on x86 are NULL. On other machines there are cache (either at the CPU or at the PCI controller) coherency issues to deal with, so even if we recycled there would still be overhead at "DMA transfer done" time.
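To make the "DMA transfer done" cost concrete, here is a receive-completion sketch against the 2.4-era pci_unmap_single() interface (ring layout and names as in the hypothetical fragment above). On x86 the unmap compiles away; on other ports it is where the IOMMU teardown or cache flushing happens, recycling or not:

	/* Descriptor i has completed; make the data visible to the
	 * CPU.  This is the unavoidable per-transfer cost on
	 * non-coherent platforms, and a NOP on x86.
	 */
	struct sk_buff *skb = rx_ring_info[i].skb;

	pci_unmap_single(pdev, rx_ring_info[i].mapping,
			 RX_BUF_SIZE, PCI_DMA_FROMDEVICE);

	skb_put(skb, len);
	skb->protocol = eth_type_trans(skb, dev);
	netif_rx(skb);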
There was a proposed scheme to make the DMA mapping allocation potentially cheaper, but the 2.4.x release is too close to add new interfaces now.
Essentially, the driver asks for a "pool" of DMA mapping areas at driver init and owns that pool until driver shutdown (at which time it releases the "pool"). So something like:
	cookie = pci_create_pool(pdev, POOL_ENT_SIZE, NUM_POOL_ENTS);
	...
	dma_addr = pci_map_from_pool(pdev, cookie, pool_entry_index,
				     skb->data, skb->len);
	...
	pci_unmap_from_pool(pdev, cookie, pool_entry_index,
			    dma_addr, skb->len);
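Fleshed out, a driver's use of this proposed (never-merged, so purely illustrative) interface might look like the following; pci_destroy_pool() is a guess at the matching release call:

	/* driver init: reserve NUM_POOL_ENTS mapping slots up front */
	cookie = pci_create_pool(pdev, POOL_ENT_SIZE, NUM_POOL_ENTS);

	/* hot path, refill of descriptor i: no allocator to run,
	 * just install this buffer in pre-reserved slot i
	 */
	rx_ring_info[i].mapping =
		pci_map_from_pool(pdev, cookie, i, skb->data, skb->len);

	/* hot path, completion of descriptor i */
	pci_unmap_from_pool(pdev, cookie, i,
			    rx_ring_info[i].mapping, skb->len);

	/* driver shutdown: give the slots back */
	pci_destroy_pool(pdev, cookie);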
But this is 2.5.x material at this point. The entry allocation algorithms used by ports which use dynamic DMA mappings are pretty efficient, and ways are being discovered to make them even faster. So it's not a critical issue.
On x86 all of these operations are NOPs anyway, if that is the platform whose performance you care about. :-)
I think if you really feel that this extra software driver state is a big deal, walk over your driver and try to discover ways to somehow eliminate at least one register or descriptor read or write in a hot code path. I bet the gains from that will far outweigh whatever performance burden you will suffer on x86 from keeping track of DMA and cpu addresses.
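For illustration (all names hypothetical), compare polling an uncached device register for TX completions with testing a status bit the NIC writes back into the descriptor ring in host memory; dropping the readl() from the hot loop is exactly the kind of win meant here:

	/* Costly: an uncached readl() across the PCI bus on every
	 * iteration, just to learn how far the hardware has gotten.
	 */
	while (tx_consumer != readl(ioaddr + TX_HW_CONSUMER))
		free_tx_slot(tx_consumer++);

	/* Cheaper: the NIC DMAs a completion bit back into the
	 * descriptor itself, so the loop only touches host memory.
	 */
	while (le32_to_cpu(tx_ring[tx_consumer].status) & DESC_DONE)
		free_tx_slot(tx_consumer++);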
Later,
David S. Miller
davem@redhat.com