Subject: Re: [PATCH 09/10] mm/hmm: allow to mirror vma of a file on a DAX backed filesystem
On Wed, Mar 13, 2019 at 09:06:04AM -0700, Andrew Morton wrote:
> On Tue, 12 Mar 2019 20:10:19 -0400 Jerome Glisse <jglisse@redhat.com> wrote:
>
> > > You're correct. We chose to go this way because the HMM code is so
> > > large and all-over-the-place that developing it in a standalone tree
> > > seemed impractical - better to feed it into mainline piecewise.
> > >
> > > This decision very much assumed that HMM users would definitely be
> > > merged, and that it would happen soon. I was skeptical for a long time
> > > and was eventually persuaded by quite a few conversations with various
> > > architecture and driver maintainers indicating that these HMM users
> > > would be forthcoming.
> > >
> > > In retrospect, the arrival of HMM clients took quite a lot longer than
> > > was anticipated and I'm not sure that all of the anticipated usage
> > > sites will actually be using it. I wish I'd kept records of
> > > who-said-what, but I didn't and the info is now all rather dissipated.
> > >
> > > So the plan didn't really work out as hoped. Lesson learned, I would
> > > now very much prefer that new HMM feature work's changelogs include
> > > links to the driver patchsets which will be using those features and
> > > acks and review input from the developers of those driver patchsets.
> >
> > This is what I am doing now, and this patchset falls into that. I
> > did post the ODP and nouveau bits that use the 2 new functions
> > (dma map and unmap). I expect both the ODP and nouveau bits for
> > that to be merged during the next merge window.
> >
> > Also, with 5.1, everything that is upstream is used by nouveau at
> > least. There are posted patches to use HMM for AMD, Intel, Radeon,
> > ODP, and PPC. Some are going through several revisions, so I do
> > not know exactly when each will make it upstream, but I keep
> > working on all of this.
> >
> > So the guidelines we agree on:
> > - no new infrastructure without a user
> > - the device driver maintainer for whom new infrastructure is done
> > must either sign off or review it, or explicitly say that they
> > want the feature. I do not expect all driver maintainers to have
> > the bandwidth to do a proper review of the mm part of the infra-
> > structure, and it would not be fair to ask that of them. They
> > can still provide feedback on the API exposed to the device
> > driver.
>
> The patchset in -mm ("HMM updates for 5.1") has review from Ralph
> Campbell @ nvidia. Are there any other maintainers who we should have
> feedback from?

John Hubbard also gave his review on a couple of them, IIRC.

>
> > - driver bits must be posted at the same time as the new infra-
> > structure, even if they target the next release cycle, to avoid
> > inter-tree dependency
> > - driver bits must be merged as soon as possible
>
> Are there links to driver patchsets which we can add to the changelogs?
>

The issue with that is that I often post the infrastructure bits first
and then the driver bits, so I have an email circular dependency :) I
can always post the driver bits first and then add links to the driver
bits. Or I can reply after posting so that I can cross-link both.

Or I can post the driver bits to mm the first time around and mark them
as "not for Andrew", or with any tag that makes it clear that those
patches will be merged through the appropriate driver tree.

In any case, for this patchset there is:

https://patchwork.kernel.org/patch/10786625/

Also, this patchset refactors some of the HMM internals for a better
API, so it is also used by nouveau, which is already upstream.


> > Things we do not agree on:
> > - If the driver bits miss the +1 target for any reason, directly
> > revert the new infrastructure. I think it should not be black
> > and white, and the reasons why the driver bits missed the merge
> > window should be taken into account. If the feature is still
> > wanted and the driver bits missed the window for simple reasons,
> > then it means that we push everything back by 2 releases, i.e.
> > the revert is done in +1, then we re-post the infrastructure in
> > +2 and finally re-push the driver bits in +3, so we lose 1 cycle.
> > Hence I would rather that the revert only happen if it is clear
> > that the infrastructure is not ready or cannot be used in a
> > timely fashion (over a couple of kernel releases) by any driver.
>
> I agree that this should be more a philosophy than a set of hard rules.
>
