Subject: Re: [PATCH 1/2] media: docs-rst: Document memory-to-memory video decoder interface
On Tue, Aug 7, 2018 at 4:37 PM Hans Verkuil <hverkuil@xs4all.nl> wrote:
>
> On 08/07/2018 09:05 AM, Tomasz Figa wrote:
> > On Thu, Jul 26, 2018 at 7:57 PM Hans Verkuil <hverkuil@xs4all.nl> wrote:
> >>>> What if you set the format to 0x0 but the stream does not have meta data with
> >>>> the resolution? How does userspace know if 0x0 is allowed or not? If this is
> >>>> specific to the chosen coded pixel format, should we add a new flag for those
> >>>> formats indicating that the coded data contains resolution information?
> >>>
> >>> Yes, this would definitely be on a per-format basis. Not sure what you
> >>> mean by a flag, though? E.g. if the format is set to H264, then it's
> >>> bound to include resolution information. If the format doesn't include
> >>> it, then userspace is already aware of this fact, because it needs to
> >>> get this from some other source (e.g. container).
> >>>
> >>>>
> >>>> That way userspace knows if 0x0 can be used, and the driver can reject 0x0
> >>>> for formats that do not support it.
> >>>
> >>> As above, but I might be misunderstanding your suggestion.
> >>
> >> So my question is: is this tied to the pixel format, or should we make it
> >> explicit with a flag like V4L2_FMT_FLAG_CAN_DECODE_WXH?
> >>
> >> The advantage of a flag is that you don't need a switch on the format to
> >> know whether or not 0x0 is allowed. And the flag can just be set in
> >> v4l2-ioctls.c.
> >
> > As far as my understanding goes, what data is included in the stream
> > is definitely specified by the format. For example, an H264 elementary
> > stream will always include this data as part of its SPS.
> >
> > However, having such a flag internally, not exposed to userspace, could
> > indeed be useful to avoid all drivers having such a switch. That wouldn't
> > belong in this documentation, though, since it would just be kernel
> > API.
>
> Why would you keep this internally only?
>

Well, either keep it internal or make it read-only for user space,
since the behavior is already defined by the selected pixel format.
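
Just to illustrate what I mean by read-only: user space could then probe
it from ENUM_FMT on the OUTPUT queue, along the lines below. Untested
sketch; the V4L2_FMT_FLAG_CAN_DECODE_WXH name and value are only the
proposal from this thread, not part of the mainline UAPI:

#include <linux/videodev2.h>
#include <stdbool.h>
#include <string.h>
#include <sys/ioctl.h>

/* Hypothetical flag proposed in this thread; value picked as the next free bit. */
#ifndef V4L2_FMT_FLAG_CAN_DECODE_WXH
#define V4L2_FMT_FLAG_CAN_DECODE_WXH 0x0004
#endif

/*
 * Returns true if the coded OUTPUT format reports that the bitstream
 * carries resolution information, i.e. setting the format to 0x0 is
 * acceptable.
 */
static bool coded_format_has_resolution(int fd, __u32 pixelformat)
{
        struct v4l2_fmtdesc fmt;
        unsigned int i;

        for (i = 0; ; i++) {
                memset(&fmt, 0, sizeof(fmt));
                fmt.index = i;
                fmt.type = V4L2_BUF_TYPE_VIDEO_OUTPUT_MPLANE;

                if (ioctl(fd, VIDIOC_ENUM_FMT, &fmt) < 0)
                        return false;

                if (fmt.pixelformat == pixelformat)
                        return fmt.flags & V4L2_FMT_FLAG_CAN_DECODE_WXH;
        }
}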

> >>>> I wonder if we should make these min buffer controls required. It might be easier
> >>>> that way.
> >>>
> >>> Agreed. Although userspace is still free to ignore it, because REQBUFS
> >>> would do the right thing anyway.
> >>
> >> It's never been entirely clear to me what the purpose of those min buffers controls
> >> is. REQBUFS ensures that the number of buffers is at least the minimum needed to
> >> make the HW work. So why would you need these controls? It only makes sense if they
> >> return something different from REQBUFS.
> >>
> >
> > The purpose of those controls is to let the client allocate a number
> > of buffers bigger than the minimum, without the need to allocate the
> > minimum number of buffers first (just to learn the number), free them
> > and then allocate a bigger number again.
>
> I don't feel this is particularly useful. One problem with the minimum number
> of buffers as used in the kernel is that it is often the minimum number of
> buffers required to make the hardware work, but it may not be optimal. E.g.
> quite a few capture drivers set the minimum to 2, which is enough for the
> hardware, but it will likely lead to dropped frames. You really need 3
> (one is being DMAed, one is queued and linked into the DMA engine and one is
> being processed by userspace).
>
> I would actually prefer this to be the recommended minimum number of buffers,
> which is >= the minimum REQBUFS uses.
>
> I.e., if you use this number and you have no special requirements, then you'll
> get good performance.

I guess we could make it so. It would make existing user space request
more buffers than it did with the original meaning, but that shouldn't
be a big problem.
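
For reference, the intended usage on the user space side would be roughly
the following, with the V4L2_CID_MIN_BUFFERS_FOR_CAPTURE control as it
exists today (untested sketch):

#include <linux/videodev2.h>
#include <string.h>
#include <sys/ioctl.h>

/*
 * Query the minimum number of CAPTURE buffers and request a few extra
 * ones on top of it, in a single REQBUFS call.
 */
static int allocate_capture_buffers(int fd, unsigned int extra)
{
        struct v4l2_control ctrl;
        struct v4l2_requestbuffers reqbufs;

        memset(&ctrl, 0, sizeof(ctrl));
        ctrl.id = V4L2_CID_MIN_BUFFERS_FOR_CAPTURE;
        if (ioctl(fd, VIDIOC_G_CTRL, &ctrl) < 0)
                ctrl.value = 1; /* control not exposed; let REQBUFS adjust */

        memset(&reqbufs, 0, sizeof(reqbufs));
        reqbufs.count = ctrl.value + extra;
        reqbufs.type = V4L2_BUF_TYPE_VIDEO_CAPTURE_MPLANE;
        reqbufs.memory = V4L2_MEMORY_MMAP;
        if (ioctl(fd, VIDIOC_REQBUFS, &reqbufs) < 0)
                return -1;

        /* The driver may still adjust the count to its own minimum. */
        return reqbufs.count;
}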

>
> >
> >>>
> >>>>
> >>>>> +7. If all the following conditions are met, the client may resume the
> >>>>> + decoding instantly, by using :c:func:`VIDIOC_DECODER_CMD` with
> >>>>> + ``V4L2_DEC_CMD_START`` command, as in case of resuming after the drain
> >>>>> + sequence:
> >>>>> +
> >>>>> + * ``sizeimage`` of new format is less than or equal to the size of
> >>>>> + currently allocated buffers,
> >>>>> +
> >>>>> + * the number of buffers currently allocated is greater than or equal to
> >>>>> + the minimum number of buffers acquired in step 6.
> >>>>
> >>>> You might want to mention that if there are insufficient buffers, then
> >>>> VIDIOC_CREATE_BUFS can be used to add more buffers.
> >>>>
> >>>
> >>> This might be a bit tricky, since at least s5p-mfc and coda can only
> >>> work on a fixed buffer set and one would need to fully reinitialize
> >>> the decoding to add one more buffer, which would effectively be the
> >>> full resolution change sequence, as below, just with REQBUFS(0),
> >>> REQBUFS(N) replaced with CREATE_BUFS.
> >>
> >> What happens today in those drivers if you try to call CREATE_BUFS?
> >
> > s5p-mfc doesn't set the .vidioc_create_bufs pointer in its
> > v4l2_ioctl_ops, so I suppose that would be -ENOTTY?
>
> Correct for s5p-mfc.

As Philipp clarified, coda supports adding buffers on the fly. I
briefly looked at venus and mtk-vcodec and they seem to use the m2m
implementation of CREATE_BUFS. Not sure if anyone has tested that,
though. So the only hardware I know for sure cannot support this is
s5p-mfc.
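
For drivers that do support it, the flow I have in mind would be roughly
as below. Untested sketch; it assumes the new CAPTURE format has already
been established after the resolution change and skips the
sizeimage/buffer-count checks and error handling from the documented
sequence:

#include <linux/videodev2.h>
#include <string.h>
#include <sys/ioctl.h>

/*
 * Add one more CAPTURE buffer matching the current (new) format and
 * resume decoding without reallocating the whole queue.
 */
static int add_capture_buffer_and_resume(int fd)
{
        struct v4l2_create_buffers create;
        struct v4l2_decoder_cmd cmd;

        memset(&create, 0, sizeof(create));
        create.count = 1;
        create.memory = V4L2_MEMORY_MMAP;
        create.format.type = V4L2_BUF_TYPE_VIDEO_CAPTURE_MPLANE;
        if (ioctl(fd, VIDIOC_G_FMT, &create.format) < 0)
                return -1;
        if (ioctl(fd, VIDIOC_CREATE_BUFS, &create) < 0)
                return -1; /* e.g. ENOTTY on s5p-mfc */

        memset(&cmd, 0, sizeof(cmd));
        cmd.cmd = V4L2_DEC_CMD_START;
        return ioctl(fd, VIDIOC_DECODER_CMD, &cmd);
}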

Best regards,
Tomasz
