Subject: Re: [RFC PATCH] gpio: uapi: v2 proposal

On Tue, Jun 09, 2020 at 10:03:42AM +0200, Bartosz Golaszewski wrote:
> On Sat, 6 Jun 2020 at 03:56, Kent Gibson <warthog618@gmail.com> wrote:
> >
>
> [snip!]
>
> > >
> > > I'd say yes - consolidation and reuse of data structures is always
> > > good and normally they are going to be wrapped in some kind of
> > > low-level user-space library anyway.
> > >
> >
> > Ok, and I've changed the values field name to bitmap, along with the change
> > to a bitmap type, so the stuttering is gone.
> >
> > And, as the change to bitmap substantially reduced the size of
> > gpioline_config, I now embed that in the gpioline_info instead of
> > duplicating all the other fields. The values field will be zeroed
> > when returned within info.
> >
>
> Could you post an example, I'm not sure I follow.
>

The gpioline_info_v2 now looks like this:

/**
 * struct gpioline_info_v2 - Information about a certain GPIO line
 * @name: the name of this GPIO line, such as the output pin of the line on
 * the chip, a rail or a pin header name on a board, as specified by the
 * gpio chip, may be empty
 * @consumer: a functional name for the consumer of this GPIO line as set
 * by whatever is using it, will be empty if there is no current user but
 * may also be empty if the consumer doesn't set this up
 * @config: the configuration of the line. Note that the values field is
 * always zeroed.
 * @offset: the local offset on this GPIO device, fill this in when
 * requesting the line information from the kernel
 * @padding: reserved for future use
 */
struct gpioline_info_v2 {
	char name[GPIO_MAX_NAME_SIZE];
	char consumer[GPIO_MAX_NAME_SIZE];
	struct gpioline_config config;
	__u32 offset;
	__u32 padding[GPIOLINE_INFO_V2_PAD_SIZE]; /* for future use */
};

Previously the info struct duplicated all of the config fields other than
the values.

When the info is populated by the kernel, config.values is always zeroed.
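
For illustration, reading line info then looks something like the sketch
below. The GPIO_GET_LINEINFO_V2_IOCTL name is an assumption for this
example rather than something taken from the patch:

/*
 * Sketch only - GPIO_GET_LINEINFO_V2_IOCTL is an assumed name, and the
 * v2 structures come from the proposed uapi header, not mainline.
 */
#include <stdio.h>
#include <string.h>
#include <sys/ioctl.h>
#include <linux/gpio.h>

int print_line_info(int chip_fd, __u32 offset)
{
	struct gpioline_info_v2 info;

	memset(&info, 0, sizeof(info));
	info.offset = offset;	/* the line to query */

	if (ioctl(chip_fd, GPIO_GET_LINEINFO_V2_IOCTL, &info) < 0) {
		perror("lineinfo v2");
		return -1;
	}

	/* config is embedded in the info; config.values is always zeroed */
	printf("line %u: name='%s' consumer='%s'\n",
	       info.offset, info.name, info.consumer);
	return 0;
}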

[snip!]
>
> > > >
> > > > I'm also kicking around the idea of adding sequence numbers to events,
> > > > one per line and one per handle, so userspace can more easily detect
> > > > mis-ordering or buffer overflows. Does that make any sense?
> > > >
> > >
> > > Hmm, now that you mention it - and in the light of the recent post by
> > > Ryan Lovelett about polling precision - I think it makes sense to have
> > > this. Especially since it's very easy to add.
> > >
> >
> > OK. I was only thinking about the edge events, but you might want it
> > for your line info events on the chip fd as well?
> >
>
> I don't see the need for it now, but you never know. Let's leave it
> out for now and if we ever need it - we now have the appropriate
> padding.
>

OK. It is a trivial change - I've already got the patch for it.
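
As a rough illustration of what userspace could do with it - this check
assumes each edge event carries a per-handle sequence number that
increments by one per event (the field name and exact semantics here are
an assumption, not from the patch):

/* Sketch only - the sequence number semantics are assumed. */
#include <stdio.h>
#include <linux/types.h>

void check_seqno(__u32 seqno)
{
	static __u32 last;

	/*
	 * A gap means events were dropped (e.g. kfifo overflow),
	 * a decrease means mis-ordering.
	 */
	if (last && seqno != last + 1)
		fprintf(stderr, "expected seqno %u, got %u\n",
			last + 1, seqno);
	last = seqno;
}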

> > > > And would it be useful for userspace to be able to influence the size of
> > > > the event buffer (currently fixed at 16 events per line)?
> > > >
> > >
> > > Good question. I would prefer to not overdo it though. The event
> > > request would need to contain the desired kfifo size and we'd only
> > > allow to set it on request, right?
> > >
> >
> > Yeah, it would only be relevant if edge detection was set and, as per
> > edge detection itself, would only be settable via the request, not
> > via set_config. It would only be a suggestion, as the kfifo size gets
> > rounded up to a power of 2 anyway. It would be capped - I'm open to
> > suggestions for a suitable max value. And the 0 value would mean use
> > the default - currently 16 per line.
> >
>
> This sounds good. How about 512 for max value for now and we can
> always increase it if needed. I don't think we should explicitly cap
> it though - let the user specify any value and just silently limit it
> to 512 in the kernel.
>

It will be an internal cap only - no error if the user requests more.
I was thinking 1024, which corresponds to the maximum default - 16*64.
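
In other words, the sizing would come down to something like this
(illustrative names, not the actual patch):

/* Sketch of the intended sizing - names are illustrative. */
#define EVENT_BUFFER_SIZE_DEFAULT	16	/* per requested line */
#define EVENT_BUFFER_SIZE_MAX		1024	/* 16 * 64 lines */

static unsigned int event_buffer_size(unsigned int requested,
				      unsigned int num_lines)
{
	if (!requested)
		return EVENT_BUFFER_SIZE_DEFAULT * num_lines;

	/* silently cap - no error returned to userspace */
	if (requested > EVENT_BUFFER_SIZE_MAX)
		requested = EVENT_BUFFER_SIZE_MAX;

	/* kfifo_alloc() will round this up to a power of 2 anyway */
	return requested;
}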

> > If you want the equivalent for the info watch then I'm not sure where to
> > hook it in. It should be at the chip scope, and there isn't any
> > suitable ioctl to hook it into so it would need a new one - maybe a
> > set_config for the chip? But the buffer size would only be settable up
> > until you add a watch.
> >
>
> I don't think we need this. Status changes are naturally much less
> frequent and the potential for buffer overflow is miniscule here.
>

Agreed.

Cheers,
Kent.
