From: Mauro Carvalho Chehab <mchehab+huawei@kernel.org>
Subject: [PATCH v3 79/79] media: hantro: document the usage of pm_runtime_get_sync()
Date: Tue, 27 Apr 2021

Unlike other *_get()/*_put() functions, where the usage count is
incremented only when there is no error, pm_runtime_get_sync() behaves
differently: it increments the counter *even* on errors.

That is error-prone behavior, as people often forget to decrement the
usage counter on failure.
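As a sketch of the pitfall (generic code, not from this driver; "dev"
and "ret" are just placeholders), a caller that treats
pm_runtime_get_sync() like the other *_get() helpers leaks a usage
count on failure, because the error path has to drop the reference by
hand:

	ret = pm_runtime_get_sync(dev);
	if (ret < 0) {
		/*
		 * The usage count was incremented even though the resume
		 * failed, so it has to be dropped manually; forgetting
		 * this pm_runtime_put_noidle() is the common mistake.
		 */
		pm_runtime_put_noidle(dev);
		return ret;
	}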

However, the hantro driver depends on this behavior, as it
unconditionally decrements the usage count when the m2m job finishes,
which makes sense.
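A condensed sketch of that pairing (simplified from hantro_job_finish();
the real signature and the v4l2 m2m bookkeeping are elided here):

	static void hantro_job_finish(struct hantro_dev *vpu,
				      struct hantro_ctx *ctx)
	{
		/*
		 * Drop the reference unconditionally: this pairs with the
		 * unconditional increment done by pm_runtime_get_sync()
		 * in device_run(), whether or not the resume succeeded.
		 */
		pm_runtime_mark_last_busy(vpu->dev);
		pm_runtime_put_autosuspend(vpu->dev);

		clk_bulk_disable(vpu->variant->num_clocks, vpu->clocks);

		/* ... m2m job completion elided ... */
	}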

So, instead of using pm_runtime_resume_and_get(), which would decrement
the counter on error, keep the current API, but add documentation
explaining the rationale for keeping pm_runtime_get_sync().
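For contrast, pm_runtime_resume_and_get() wraps exactly the fixup shown
above; its core is roughly the following (a sketch of the helper, not a
verbatim copy of the kernel header):

	static inline int pm_runtime_resume_and_get(struct device *dev)
	{
		int ret;

		ret = pm_runtime_get_sync(dev);
		if (ret < 0) {
			/* On failure, drop the reference taken above. */
			pm_runtime_put_noidle(dev);
			return ret;
		}

		return 0;
	}

Had device_run() used it, a failed resume would leave the usage count
untouched, and the unconditional pm_runtime_put_autosuspend() in
hantro_job_finish() would then underflow the counter.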

Signed-off-by: Mauro Carvalho Chehab <mchehab+huawei@kernel.org>
---
 drivers/staging/media/hantro/hantro_drv.c | 7 +++++++
 1 file changed, 7 insertions(+)

diff --git a/drivers/staging/media/hantro/hantro_drv.c b/drivers/staging/media/hantro/hantro_drv.c
index 595e82a82728..96f940c1c85c 100644
--- a/drivers/staging/media/hantro/hantro_drv.c
+++ b/drivers/staging/media/hantro/hantro_drv.c
@@ -155,6 +155,13 @@ static void device_run(void *priv)
 	ret = clk_bulk_enable(ctx->dev->variant->num_clocks, ctx->dev->clocks);
 	if (ret)
 		goto err_cancel_job;
+
+	/*
+	 * The pm_runtime_get_sync() will increment dev->power.usage_count,
+	 * even on errors. That's the expected behavior here, since the
+	 * hantro_job_finish() function at the error handling code
+	 * will internally call pm_runtime_put_autosuspend().
+	 */
 	ret = pm_runtime_get_sync(ctx->dev->dev);
 	if (ret < 0)
 		goto err_cancel_job;
--
2.30.2