    From: Daniel Vetter <daniel@ffwll.ch>
    Date: Wed, 21 Jul 2021
    Subject: Re: [Linaro-mm-sig] [PATCH] drm/msm: Add fence->wait() op
    On Wed, Jul 21, 2021 at 09:34:43AM -0700, Rob Clark wrote:
    > On Wed, Jul 21, 2021 at 12:59 AM Daniel Vetter <daniel@ffwll.ch> wrote:
    > >
    > > On Wed, Jul 21, 2021 at 12:32 AM Rob Clark <robdclark@gmail.com> wrote:
    > > >
    > > > On Tue, Jul 20, 2021 at 1:55 PM Daniel Vetter <daniel@ffwll.ch> wrote:
    > > > >
    > > > > On Tue, Jul 20, 2021 at 8:26 PM Rob Clark <robdclark@gmail.com> wrote:
    > > > > >
    > > > > > On Tue, Jul 20, 2021 at 11:03 AM Christian König
    > > > > > <ckoenig.leichtzumerken@gmail.com> wrote:
    > > > > > >
    > > > > > > Hi Rob,
    > > > > > >
    > > > > > > Am 20.07.21 um 17:07 schrieb Rob Clark:
    > > > > > > > From: Rob Clark <robdclark@chromium.org>
    > > > > > > >
    > > > > > > > Somehow we had neither ->wait() nor dma_fence_signal() calls, and no
    > > > > > > > one noticed. Oops.
    > > > > > >
    > > > > > >
    > > > > > > I'm not sure if that is a good idea.
    > > > > > >
    > > > > > > The dma_fence->wait() callback is pretty much deprecated and should not
    > > > > > > be used any more.
    > > > > > >
    > > > > > > What exactly do you need that for?
    > > > > >
    > > > > > Well, the alternative is to track the set of fences which have
    > > > > > signalling enabled, and then figure out which ones to signal, which
    > > > > > seems like a lot more work, vs just re-purposing the wait
    > > > > > implementation we already have for non-dma_fence cases ;-)
    > > > > >
    > > > > > Why is the ->wait() callback (pretty much) deprecated?
    > > > >
    > > > > Because if you need it that means for your driver dma_fence_add_cb is
    > > > > broken, which means a _lot_ of things don't work. Like dma_buf poll
    > > > > (compositors have patches to start using that), and I think
    > > > > drm/scheduler also becomes rather unhappy.
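
    For reference, the "track the set of fences which have signalling
    enabled" alternative would look roughly like the hypothetical sketch
    below: .enable_signaling() queues the fence on a per-context list, and
    the completion path later signals whatever the GPU has finished. The
    pending_fences list and signal_node field are illustrative additions,
    not existing msm fields.

    /* Hypothetical sketch: remember which fences have signalling enabled
     * so the completion path knows what needs dma_fence_signal().
     */
    static bool msm_fence_enable_signaling(struct dma_fence *fence)
    {
            struct msm_fence *f = to_msm_fence(fence);

            /* called by dma-fence core with fence->lock (fctx->spinlock) held */
            list_add_tail(&f->signal_node, &f->fctx->pending_fences);

            /* not signalled yet; the driver signals it from the completion path */
            return true;
    }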
    > > >
    > > > I'm starting to page back in how this works.. fence cb's aren't broken
    > > > (which is also why dma_fence_wait() was not completely broken),
    > > > because in retire_submits() we call
    > > > dma_fence_is_signaled(submit->hw_fence).
    > > >
    > > > But the reason that the custom wait function cleans up a tiny bit of
    > > > jank is that the wait_queue_head_t gets signaled earlier, before we
    > > > start iterating the submits and doing all that retire_submit() stuff
    > > > (unpin/unref bo's, etc). I suppose I could just split things up to
    > > > call dma_fence_signal() earlier, and *then* do the retire_submits()
    > > > stuff.
    > >
    > > Yeah reducing the latency there sounds like a good idea.
    > > -Daniel
    > >
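
    A minimal sketch of that split (the submit-list and helper names here are
    illustrative, not the actual msm_gpu fields, and locking is omitted):
    signal completed fences first so waiters wake up as early as possible,
    then do the heavier per-submit cleanup.

    /* Hypothetical reordering of the retire path: pass 1 is cheap and wakes
     * waiters immediately, pass 2 does the expensive unpin/unref/free work.
     */
    static void retire_submits(struct msm_gpu *gpu)
    {
            struct msm_gem_submit *submit, *tmp;

            /* pass 1: signal anything the GPU has completed */
            list_for_each_entry(submit, &gpu->retire_list, node) {
                    if (fence_completed(submit->ring->fctx, submit->hw_fence->seqno))
                            dma_fence_signal(submit->hw_fence);
            }

            /* pass 2: clean up the signalled submits (may free them) */
            list_for_each_entry_safe(submit, tmp, &gpu->retire_list, node) {
                    if (dma_fence_is_signaled(submit->hw_fence))
                            retire_submit(gpu, submit);
            }
    }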
    >
    > Hmm, no, turns out that isn't the problem.. or, well, it is probably a
    > good idea to call dma_fence_signal() earlier. But it seems like
    > waking up from wait_event_* is faster than wake_up_state(wait->task,
    > TASK_NORMAL). I suppose with the wake_up_state() approach the waiter
    > still has to wait for the scheduler to get around to running the newly
    > runnable task.
    >
    > So for now, I'm going back to my own wait function (plus the earlier
    > dma_fence_signal())
    >
    > Before removing dma_fence_ops::wait(), I guess we want to re-think
    > dma_fence_default_wait().. but I think that would require a
    > dma_fence_context base class (rather than just a raw integer).

    Uh, that's not great ... can't we fix this instead of papering over it in
    drivers? Aside from maybe different wakeup flags, it's all supposed to
    work exactly the same underneath, and whether a wait queue is used or not
    really shouldn't matter.
    -Daniel
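
    For context on the latency difference Rob is seeing: dma_fence_default_wait()
    installs a fence callback that re-wakes the sleeping task via wake_up_state(),
    while the driver path sleeps on fctx->event and is woken through the wait
    queue. In both cases the woken task still has to be scheduled before it runs,
    so apart from the wakeup flags the two paths should indeed behave the same,
    which is the point being made above. A simplified (not verbatim) sketch of
    the callback-based wait:

    /* Simplified sketch of a callback-based wait: install a callback that marks
     * the sleeping task runnable when dma_fence_signal() runs, then sleep until
     * the fence signals or the timeout expires.
     */
    struct wait_cb {
            struct dma_fence_cb base;
            struct task_struct *task;
    };

    static void wait_cb_func(struct dma_fence *fence, struct dma_fence_cb *cb)
    {
            struct wait_cb *wait = container_of(cb, struct wait_cb, base);

            wake_up_state(wait->task, TASK_NORMAL);
    }

    static signed long sketch_wait(struct dma_fence *fence, bool intr,
                    signed long timeout)
    {
            struct wait_cb cb = { .task = current };

            /* dma_fence_add_callback() fails if the fence is already signalled */
            if (dma_fence_add_callback(fence, &cb.base, wait_cb_func))
                    return timeout;

            while (timeout > 0) {
                    set_current_state(intr ? TASK_INTERRUPTIBLE : TASK_UNINTERRUPTIBLE);
                    if (dma_fence_is_signaled(fence))
                            break;
                    timeout = schedule_timeout(timeout);
                    if (intr && signal_pending(current)) {
                            timeout = -ERESTARTSYS;
                            break;
                    }
            }
            __set_current_state(TASK_RUNNING);
            dma_fence_remove_callback(fence, &cb.base);

            return timeout;
    }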

    >
    > BR,
    > -R
    >
    > > >
    > > > BR,
    > > > -R
    > > >
    > > > > It essentially exists only for old drivers where ->enable_signalling
    > > > > is unreliable and we paper over that with a retry loop in ->wait and
    > > > > pray no one notices that it's too butchered. The proper fix is to have
    > > > > a driver thread to guarantee that ->enable_signalling works reliably,
    > > > > so you don't need a ->wait.
    > > > >
    > > > > Can you type up a kerneldoc patch for dma_fence_ops->wait to hammer
    > > > > this in please?
    > > > > -Daniel
    > > > >
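
    Concretely, that "proper fix" is a completion path (irq handler, retire
    worker, or a dedicated thread) that is guaranteed to call dma_fence_signal()
    on every fence whose signalling was enabled, roughly like the hypothetical
    sketch below. It pairs with the enable_signaling sketch further up; the
    pending_fences list and signal_node field are again illustrative, and it
    assumes the fences were initialised with &fctx->spinlock as their
    dma_fence lock.

    /* Hypothetical completion path: signal every pending fence the GPU has
     * completed, in seqno order, so dma_fence_add_callback()/poll/default-wait
     * all work without a custom ->wait.
     */
    static void msm_fence_signal_completed(struct msm_fence_context *fctx)
    {
            struct msm_fence *f, *tmp;
            unsigned long flags;

            spin_lock_irqsave(&fctx->spinlock, flags);
            list_for_each_entry_safe(f, tmp, &fctx->pending_fences, signal_node) {
                    if (!fence_completed(fctx, f->base.seqno))
                            break;  /* list is kept in seqno order */
                    list_del_init(&f->signal_node);
                    /* fence->lock is &fctx->spinlock, so the _locked variant is fine */
                    dma_fence_signal_locked(&f->base);
            }
            spin_unlock_irqrestore(&fctx->spinlock, flags);
    }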
    > > > > >
    > > > > > BR,
    > > > > > -R
    > > > > >
    > > > > > > Regards,
    > > > > > > Christian.
    > > > > > >
    > > > > > > >
    > > > > > > > Note that this removes the !timeout case, which has not been used in
    > > > > > > > a long time.
    > > > > > >
    > > > > > >
    > > > > > > >
    > > > > > > > Signed-off-by: Rob Clark <robdclark@chromium.org>
    > > > > > > > ---
    > > > > > > > drivers/gpu/drm/msm/msm_fence.c | 59 +++++++++++++++++++--------------
    > > > > > > > 1 file changed, 34 insertions(+), 25 deletions(-)
    > > > > > > >
    > > > > > > > diff --git a/drivers/gpu/drm/msm/msm_fence.c b/drivers/gpu/drm/msm/msm_fence.c
    > > > > > > > index cd59a5918038..8ee96b90ded6 100644
    > > > > > > > --- a/drivers/gpu/drm/msm/msm_fence.c
    > > > > > > > +++ b/drivers/gpu/drm/msm/msm_fence.c
    > > > > > > > @@ -38,11 +38,10 @@ static inline bool fence_completed(struct msm_fence_context *fctx, uint32_t fenc
    > > > > > > > return (int32_t)(fctx->completed_fence - fence) >= 0;
    > > > > > > > }
    > > > > > > >
    > > > > > > > -/* legacy path for WAIT_FENCE ioctl: */
    > > > > > > > -int msm_wait_fence(struct msm_fence_context *fctx, uint32_t fence,
    > > > > > > > - ktime_t *timeout, bool interruptible)
    > > > > > > > +static signed long wait_fence(struct msm_fence_context *fctx, uint32_t fence,
    > > > > > > > + signed long remaining_jiffies, bool interruptible)
    > > > > > > > {
    > > > > > > > - int ret;
    > > > > > > > + signed long ret;
    > > > > > > >
    > > > > > > > if (fence > fctx->last_fence) {
    > > > > > > > DRM_ERROR_RATELIMITED("%s: waiting on invalid fence: %u (of %u)\n",
    > > > > > > > @@ -50,33 +49,34 @@ int msm_wait_fence(struct msm_fence_context *fctx, uint32_t fence,
    > > > > > > > return -EINVAL;
    > > > > > > > }
    > > > > > > >
    > > > > > > > - if (!timeout) {
    > > > > > > > - /* no-wait: */
    > > > > > > > - ret = fence_completed(fctx, fence) ? 0 : -EBUSY;
    > > > > > > > + if (interruptible) {
    > > > > > > > + ret = wait_event_interruptible_timeout(fctx->event,
    > > > > > > > + fence_completed(fctx, fence),
    > > > > > > > + remaining_jiffies);
    > > > > > > > } else {
    > > > > > > > - unsigned long remaining_jiffies = timeout_to_jiffies(timeout);
    > > > > > > > -
    > > > > > > > - if (interruptible)
    > > > > > > > - ret = wait_event_interruptible_timeout(fctx->event,
    > > > > > > > - fence_completed(fctx, fence),
    > > > > > > > - remaining_jiffies);
    > > > > > > > - else
    > > > > > > > - ret = wait_event_timeout(fctx->event,
    > > > > > > > - fence_completed(fctx, fence),
    > > > > > > > - remaining_jiffies);
    > > > > > > > -
    > > > > > > > - if (ret == 0) {
    > > > > > > > - DBG("timeout waiting for fence: %u (completed: %u)",
    > > > > > > > - fence, fctx->completed_fence);
    > > > > > > > - ret = -ETIMEDOUT;
    > > > > > > > - } else if (ret != -ERESTARTSYS) {
    > > > > > > > - ret = 0;
    > > > > > > > - }
    > > > > > > > + ret = wait_event_timeout(fctx->event,
    > > > > > > > + fence_completed(fctx, fence),
    > > > > > > > + remaining_jiffies);
    > > > > > > > + }
    > > > > > > > +
    > > > > > > > + if (ret == 0) {
    > > > > > > > + DBG("timeout waiting for fence: %u (completed: %u)",
    > > > > > > > + fence, fctx->completed_fence);
    > > > > > > > + ret = -ETIMEDOUT;
    > > > > > > > + } else if (ret != -ERESTARTSYS) {
    > > > > > > > + ret = 0;
    > > > > > > > }
    > > > > > > >
    > > > > > > > return ret;
    > > > > > > > }
    > > > > > > >
    > > > > > > > +/* legacy path for WAIT_FENCE ioctl: */
    > > > > > > > +int msm_wait_fence(struct msm_fence_context *fctx, uint32_t fence,
    > > > > > > > + ktime_t *timeout, bool interruptible)
    > > > > > > > +{
    > > > > > > > + return wait_fence(fctx, fence, timeout_to_jiffies(timeout), interruptible);
    > > > > > > > +}
    > > > > > > > +
    > > > > > > > /* called from workqueue */
    > > > > > > > void msm_update_fence(struct msm_fence_context *fctx, uint32_t fence)
    > > > > > > > {
    > > > > > > > @@ -114,10 +114,19 @@ static bool msm_fence_signaled(struct dma_fence *fence)
    > > > > > > > return fence_completed(f->fctx, f->base.seqno);
    > > > > > > > }
    > > > > > > >
    > > > > > > > +static signed long msm_fence_wait(struct dma_fence *fence, bool intr,
    > > > > > > > + signed long timeout)
    > > > > > > > +{
    > > > > > > > + struct msm_fence *f = to_msm_fence(fence);
    > > > > > > > +
    > > > > > > > + return wait_fence(f->fctx, fence->seqno, timeout, intr);
    > > > > > > > +}
    > > > > > > > +
    > > > > > > > static const struct dma_fence_ops msm_fence_ops = {
    > > > > > > > .get_driver_name = msm_fence_get_driver_name,
    > > > > > > > .get_timeline_name = msm_fence_get_timeline_name,
    > > > > > > > .signaled = msm_fence_signaled,
    > > > > > > > + .wait = msm_fence_wait,
    > > > > > > > };
    > > > > > > >
    > > > > > > > struct dma_fence *
    > > > > > >
    > > > >
    > > > >
    > > > >
    > > > > --
    > > > > Daniel Vetter
    > > > > Software Engineer, Intel Corporation
    > > > > http://blog.ffwll.ch
    > >
    > >
    > >
    > > --
    > > Daniel Vetter
    > > Software Engineer, Intel Corporation
    > > http://blog.ffwll.ch

    --
    Daniel Vetter
    Software Engineer, Intel Corporation
    http://blog.ffwll.ch
