Subject: Re: [PATCH v2 09/14] device core: Add ability to handle multiple dma offsets

Hi Jim,

On Wed, 2020-05-27 at 11:43 -0400, Jim Quinlan wrote:
> Hi Nicolas,
>
> On Wed, May 27, 2020 at 11:00 AM Nicolas Saenz Julienne
> <nsaenzjulienne@suse.de> wrote:
> > Hi Jim,
> > one thing comes to mind: there is a small test suite in
> > drivers/of/unittest.c (specifically of_unittest_pci_dma_ranges());
> > you could extend it to include your use cases.
> Sure, will check it out.
> > On Tue, 2020-05-26 at 15:12 -0400, Jim Quinlan wrote:
> > > The new field in struct device 'dma_pfn_offset_map' is used to facilitate
> > > the use of multiple pfn offsets between cpu addrs and dma addrs. It is
> > > similar to 'dma_pfn_offset' except that the offset chosen depends on the
> > > cpu or dma address involved.
> > >
> > > Signed-off-by: Jim Quinlan <james.quinlan@broadcom.com>
> > > ---
> > > drivers/of/address.c        | 65 +++++++++++++++++++++++++++++++++++--
> > > drivers/usb/core/message.c  |  3 ++
> > > drivers/usb/core/usb.c      |  3 ++
> > > include/linux/device.h      | 10 +++++-
> > > include/linux/dma-direct.h  | 10 ++++--
> > > include/linux/dma-mapping.h | 46 ++++++++++++++++++++++++++
> > > kernel/dma/Kconfig          | 13 ++++++++
> > > 7 files changed, 144 insertions(+), 6 deletions(-)
> > >
> >
> > [...]
> >
> > > @@ -977,10 +1020,19 @@ int of_dma_get_range(struct device *dev, struct device_node *np, u64 *dma_addr,
> > >  		pr_debug("dma_addr(%llx) cpu_addr(%llx) size(%llx)\n",
> > >  			 range.bus_addr, range.cpu_addr, range.size);
> > >
> > > +		num_ranges++;
> > >  		if (dma_offset && range.cpu_addr - range.bus_addr != dma_offset) {
> > > -			pr_warn("Can't handle multiple dma-ranges with different offsets on node(%pOF)\n", node);
> > > -			/* Don't error out as we'd break some existing DTs */
> > > -			continue;
> > > +			if (!IS_ENABLED(CONFIG_DMA_PFN_OFFSET_MAP)) {
> > > +				pr_warn("Can't handle multiple dma-ranges with different offsets on node(%pOF)\n", node);
> > > +				pr_warn("Perhaps set DMA_PFN_OFFSET_MAP=y?\n");
> > > +				/*
> > > +				 * Don't error out as we'd break some existing
> > > +				 * DTs that are using configs w/o
> > > +				 * CONFIG_DMA_PFN_OFFSET_MAP set.
> > > +				 */
> > > +				continue;
> >
> > dev->bus_dma_limit is set in of_dma_configure(), this function's caller,
> > based on dma_start's value (set after this continue). So you'd be
> > effectively setting the dev->bus_dma_limit to whatever we get from the
> > first dma-range.
> I'm not seeing that at all. On the evaluation of each dma-range,
> dma_start and dma_end are re-evaluated to be the lowest and highest
> bus values of the dma-ranges seen so far. After all dma-ranges are
> examined, dev->bus_dma_limit is set to the highest. In fact, the
> current code -- i.e. before my commits -- already does this for
> multiple dma-ranges, as long as the cpu-bus offset is the same in
> all of the dma-ranges.

Sorry I got carried away, you're right.
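
As a sanity check on my reading, here is a standalone mock-up of the
tracking Jim describes (the range values are made up and the
bus_dma_limit derivation is my paraphrase; the real logic lives in
of_dma_get_range() and its caller of_dma_configure()):

/* Standalone, hypothetical mock-up -- not kernel code. */
#include <stdint.h>
#include <stdio.h>

struct range { uint64_t bus_addr, cpu_addr, size; };

int main(void)
{
	/* Two made-up dma-ranges with different cpu-bus offsets. */
	struct range ranges[] = {
		{ .bus_addr = 0x00000000, .cpu_addr = 0x40000000, .size = 0x20000000 },
		{ .bus_addr = 0x80000000, .cpu_addr = 0xc0000000, .size = 0x20000000 },
	};
	uint64_t dma_start = UINT64_MAX, dma_end = 0;

	for (size_t i = 0; i < sizeof(ranges) / sizeof(ranges[0]); i++) {
		/* Track the lowest/highest bus addresses seen so far. */
		if (ranges[i].bus_addr < dma_start)
			dma_start = ranges[i].bus_addr;
		if (ranges[i].bus_addr + ranges[i].size > dma_end)
			dma_end = ranges[i].bus_addr + ranges[i].size;
	}

	/* of_dma_configure() then derives the limit from dma_end. */
	printf("dma_start=%#llx dma_end=%#llx bus_dma_limit=%#llx\n",
	       (unsigned long long)dma_start,
	       (unsigned long long)dma_end,
	       (unsigned long long)(dma_end - 1));
	return 0;
}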

So I understand there is an underlying assumption that the non-DMAable
memory space will always sit on top of the bus memory space, as
intertwined as it might be, so that every phys_to_dma() call on
non-DMAable memory falls above bus_dma_limit.
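
A hypothetical sketch of that assumption (the struct, names and numbers
below are illustrative only, not the patch's actual dma_pfn_offset_map
implementation): two made-up dma-ranges with different cpu-bus offsets,
where anything outside them translates 1:1 and so lands above
bus_dma_limit:

/* Standalone, hypothetical sketch -- not kernel code. */
#include <stdint.h>
#include <stdio.h>

struct dma_range { uint64_t cpu_addr, bus_addr, size; };

/* Made-up layout: the bus window is [0x0, 0x80000000). */
static const struct dma_range ranges[] = {
	{ .cpu_addr = 0x40000000ULL, .bus_addr = 0x00000000ULL, .size = 0x40000000ULL },
	{ .cpu_addr = 0xc0000000ULL, .bus_addr = 0x40000000ULL, .size = 0x40000000ULL },
};
#define BUS_DMA_LIMIT 0x7fffffffULL

static uint64_t phys_to_bus(uint64_t paddr)
{
	for (size_t i = 0; i < sizeof(ranges) / sizeof(ranges[0]); i++)
		if (paddr >= ranges[i].cpu_addr &&
		    paddr < ranges[i].cpu_addr + ranges[i].size)
			return paddr - ranges[i].cpu_addr + ranges[i].bus_addr;
	return paddr;	/* non-DMAable memory: 1:1, above the bus window */
}

int main(void)
{
	uint64_t addrs[] = { 0x40001000ULL, 0xc0001000ULL, 0x100000000ULL };

	for (size_t i = 0; i < 3; i++)
		printf("cpu %#llx -> bus %#llx, capable=%d\n",
		       (unsigned long long)addrs[i],
		       (unsigned long long)phys_to_bus(addrs[i]),
		       phys_to_bus(addrs[i]) <= BUS_DMA_LIMIT ? 1 : 0);
	return 0;
}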

Regards,
Nicolas
