Subject: Re: [PATCH] dmapool: push new blocks in ascending order
On Thu, Feb 23, 2023 at 12:41:37PM -0800, Andrew Morton wrote:
> On Tue, 21 Feb 2023 11:07:32 -0700 Keith Busch <kbusch@kernel.org> wrote:
>
> > On Tue, Feb 21, 2023 at 10:02:34AM -0800, Christoph Hellwig wrote:
> > > On Tue, Feb 21, 2023 at 08:54:00AM -0800, Keith Busch wrote:
> > > > From: Keith Busch <kbusch@kernel.org>
> > > >
> > > > Some users of the dmapool need their allocations to happen in ascending
> > > > order. The recent optimizations pushed the blocks in reverse order, so
> > > > restore the previous behavior by linking the next available block from
> > > > low-to-high.
> > >
> > > Who are those users?
> > >
> > > Also should we document this behavior somewhere so that it isn't
> > > accidentally changed again some time in the future?
> >
> > usb/chipidea/udc.c qh_pool called "ci_hw_qh".
>
> It would be helpful to know why these users need this side-effect. Did
> the drivers break? Or just get slower?

The affected driver was reported to be unusable without this behavior.

> Are those drivers misbehaving by assuming this behavior? Should we

I do think they're using the wrong API. You shouldn't use the dmapool if your
blocks need to be arranged in contiguous address order; such drivers should
just use dma_alloc_coherent() directly instead.
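
For illustration, a rough sketch of that approach (names and sizes here are
hypothetical, not taken from the chipidea driver): allocate one coherent
region up front and carve it into fixed-size blocks, so the blocks are
contiguous and in ascending address order by construction.

#include <linux/dma-mapping.h>

#define NR_QH	64	/* hypothetical block count */
#define QH_SIZE	64	/* hypothetical block size/alignment */

struct my_ctrl {
	struct device	*dev;
	void		*qh_cpu;	/* CPU address of the region */
	dma_addr_t	qh_dma;		/* matching DMA address */
};

static int my_alloc_qhs(struct my_ctrl *ctrl)
{
	ctrl->qh_cpu = dma_alloc_coherent(ctrl->dev, NR_QH * QH_SIZE,
					  &ctrl->qh_dma, GFP_KERNEL);
	if (!ctrl->qh_cpu)
		return -ENOMEM;

	/*
	 * Block i lives at qh_cpu + i * QH_SIZE (DMA address
	 * qh_dma + i * QH_SIZE), so ascending index means ascending
	 * address regardless of any allocator ordering.
	 */
	return 0;
}

static void my_free_qhs(struct my_ctrl *ctrl)
{
	dma_free_coherent(ctrl->dev, NR_QH * QH_SIZE,
			  ctrl->qh_cpu, ctrl->qh_dma);
}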

> require that they be altered instead of forever constraining the dmapool
> implementation in this fashion?

This change isn't really constraining dmapool where it matters; it only
affects the order in which a new page's blocks are chained onto the free
list, which is a one-time initialization step.
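
For reference, a simplified sketch of that ordering (not the actual
mm/dmapool.c code): walking a new page's offsets from high to low and
pushing each block onto the head of the free list leaves the lowest address
at the head, so subsequent allocations hand blocks back in ascending order.

#include <linux/types.h>

struct blk {
	struct blk *next;
};

/*
 * Chain a freshly added page's blocks onto a free list so that the
 * lowest-address block ends up at the head.  Pushing from the highest
 * offset down gives low-to-high allocation order.
 */
static struct blk *chain_page_blocks(void *vaddr, size_t page_size,
				     size_t blk_size)
{
	struct blk *head = NULL;
	char *base = vaddr;
	size_t off = page_size;

	while (off >= blk_size) {
		struct blk *b;

		off -= blk_size;
		b = (struct blk *)(base + off);
		b->next = head;		/* previous head becomes the next block */
		head = b;
	}
	return head;
}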

As for altering those drivers, I'll reach out to someone on that side for
comment (I'm not currently familiar with the affected subsystem).
