Subject: Re: [PATCH v3 1/3] nvmem: Update the OF binding to use a subnode for the cells list
On Tue, 1 May 2018 17:49:03 +0100
Srinivas Kandagatla <srinivas.kandagatla@linaro.org> wrote:

> On 18/04/18 14:34, Alban wrote:
> > On Wed, 18 Apr 2018 13:53:56 +0100
> > Srinivas Kandagatla <srinivas.kandagatla@linaro.org> wrote:
> >
> >> On 18/04/18 13:32, Alban wrote:
> >>>> I was also suggesting that you use an nvmem-cell subnode, but make
> >>>> it a proper nvmem provider device, rather than reusing its parent
> >>>> device.
> >>>>
> >>>> You would end up with something like this in DT:
> >>>>
> >>>> flash@0 {
> >>>>     #address-cells = <1>;
> >>>>     #size-cells = <1>;
> >>>>     compatible = "s25sl064a";
> >>>>     reg = <0>;
> >>>>
> >>>>     nvmem-cells {
> >>>>         compatible = "mtd-nvmem";
> >>>>         #address-cells = <1>;
> >>>>         #size-cells = <1>;
> >>>>
> >>>>         calibration: calib@404 {
> >>>>             reg = <0x404 0x10>;
> >>>>         };
> >>>>     };
> >>>> };
> >>> But the root cause is in the nvmem binding, this conflict could exist
> >> No, the root cause is passing the wrong device instance to the nvmem
> >> core, and trying to work around that is the actual issue.
> >
> > The data is stored on the MTD, so the nvmem provider is the MTD device.
> > I don't think it is a good idea to have a virtual device in the DT to
> > accommodate the nvmem API.
> >
> Yep, I agree! This is the same issue if we make nvmem-cells a child of
> the nvmem provider too.
>
> However, I would like to see this moving forward.
>
> I can think of one possible solution here, which is adding a
> "nvmem-mtd-cell" or "nvmem-cell" compatible string to each cell.

I would definitely use "nvmem-cell"; from the binding point of view it
doesn't matter what the underlying storage is.
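
Just as a sketch of how I understand your suggestion, each cell node
would then carry that compatible directly (the label, node name and
offsets here are only examples):

    calibration: calib@404 {
        compatible = "nvmem-cell";
        reg = <0x404 0x10>;
    };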

> The problem you mentioned regarding #address-cells and #size-cells with
> the provider needs to be addressed in the nvmem core.
>
> Currently the nvmem core only supports 32-bit offsets; if you are
> expecting 64-bit offsets then we should add that as a feature to the
> nvmem core.
>
> The nvmem core as it is today should work fine with 32-bit offsets for
> MTD cases.

That's not what I meant; 32 bits should be more than enough for now.
What I meant is that if a binding already has child nodes using a
unit-address, then we would end up with two different uses of the same
"address space".

> what do you think?

AFAIU the only thing that we disagree on now is whether the nodes
representing the cells should be direct children of the provider or
grouped in a dedicated subnode. For the MTD case both solutions would
solve the binding clash. I would really appreciate it if the DT people
could chip in so that we can settle this and get the MTD support merged.
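
To recap for the DT folks, the two layouts being discussed look roughly
like this (compatible strings, node names and offsets are only examples):

    /* A: cells as direct children of the provider */
    flash@0 {
        compatible = "s25sl064a";
        reg = <0>;
        #address-cells = <1>;
        #size-cells = <1>;

        calibration: calib@404 {
            compatible = "nvmem-cell";
            reg = <0x404 0x10>;
        };
    };

    /* B: cells grouped in a dedicated subnode */
    flash@0 {
        compatible = "s25sl064a";
        reg = <0>;

        nvmem-cells {
            compatible = "mtd-nvmem";
            #address-cells = <1>;
            #size-cells = <1>;

            calibration: calib@404 {
                reg = <0x404 0x10>;
            };
        };
    };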

Alban