    Subject: [PATCH 5.13 412/800] drm/amd/display: take dc_lock in short pulse handler only
    From: Aurabindo Pillai <aurabindo.pillai@amd.com>

    [ Upstream commit d2aa1356834d845ffdac0d8c01b58aa60d1bdc65 ]

    [Why]
    Conditions that end up modifying the global dc state must be locked.
    However, during mst allocate payload sequence, lock is already taken.
    With a StarTech 1.2 DP hub, we get an HPD RX interrupt for a reason
    other than to indicate down reply availability right after sending the
    payload allocation. The handler again takes the dc lock before calling
    the dc's HPD RX handler. Due to this contention, the DRM thread that
    waits for the MST down reply never gets a chance to finish waiting
    successfully and ends up timing out. Once the lock is released, the HPD
    RX handler fires and goes ahead to read from the MST hub, but by then
    it is too late: the hub doesn't light up all displays, since DRM lacks
    error handling when payload allocation fails.

    [How]
    Take the lock only if there is a change in link status or if the
    automated test pattern bit is set. The latter fixes the NULL pointer
    dereference seen when running certain DP Link Layer Compliance tests.

    Fixes: c8ea79a8a276 ("drm/amd/display: NULL pointer error during compliance test")

    Signed-off-by: Aurabindo Pillai <aurabindo.pillai@amd.com>
    Reviewed-by: Nicholas Kazlauskas <nicholas.kazlauskas@amd.com>
    Signed-off-by: Alex Deucher <alexander.deucher@amd.com>
    Signed-off-by: Sasha Levin <sashal@kernel.org>
    ---
    .../gpu/drm/amd/display/amdgpu_dm/amdgpu_dm.c | 19 +++++++++++++++++--
    .../gpu/drm/amd/display/dc/core/dc_link_dp.c | 2 +-
    .../gpu/drm/amd/display/dc/inc/dc_link_dp.h | 4 ++++
    3 files changed, 22 insertions(+), 3 deletions(-)

    diff --git a/drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm.c b/drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm.c
    index dcb4e585c270..2b2d7b9f26f1 100644
    --- a/drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm.c
    +++ b/drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm.c
    @@ -28,6 +28,7 @@

    #include "dm_services_types.h"
    #include "dc.h"
    +#include "dc_link_dp.h"
    #include "dc/inc/core_types.h"
    #include "dal_asic_id.h"
    #include "dmub/dmub_srv.h"
    @@ -2696,6 +2697,7 @@ static void handle_hpd_rx_irq(void *param)
             enum dc_connection_type new_connection_type = dc_connection_none;
             struct amdgpu_device *adev = drm_to_adev(dev);
             union hpd_irq_data hpd_irq_data;
    +        bool lock_flag = 0;

             memset(&hpd_irq_data, 0, sizeof(hpd_irq_data));

    @@ -2726,15 +2728,28 @@ static void handle_hpd_rx_irq(void *param)
                     }
             }

    -        if (!amdgpu_in_reset(adev)) {
    +        /*
    +         * TODO: We need the lock to avoid touching DC state while it's being
    +         * modified during automated compliance testing, or when link loss
    +         * happens. While this should be split into subhandlers and proper
    +         * interfaces to avoid having to conditionally lock like this in the
    +         * outer layer, we need this workaround temporarily to allow MST
    +         * lightup in some scenarios to avoid timeout.
    +         */
    +        if (!amdgpu_in_reset(adev) &&
    +            (hpd_rx_irq_check_link_loss_status(dc_link, &hpd_irq_data) ||
    +             hpd_irq_data.bytes.device_service_irq.bits.AUTOMATED_TEST)) {
                     mutex_lock(&adev->dm.dc_lock);
    +                lock_flag = 1;
    +        }
    +
     #ifdef CONFIG_DRM_AMD_DC_HDCP
             result = dc_link_handle_hpd_rx_irq(dc_link, &hpd_irq_data, NULL);
     #else
             result = dc_link_handle_hpd_rx_irq(dc_link, NULL, NULL);
     #endif
    +        if (!amdgpu_in_reset(adev) && lock_flag)
                     mutex_unlock(&adev->dm.dc_lock);
    -        }

     out:
             if (result && !is_mst_root_connector) {
    diff --git a/drivers/gpu/drm/amd/display/dc/core/dc_link_dp.c b/drivers/gpu/drm/amd/display/dc/core/dc_link_dp.c
    index 3ff3d9e90983..72bd7bc681a8 100644
    --- a/drivers/gpu/drm/amd/display/dc/core/dc_link_dp.c
    +++ b/drivers/gpu/drm/amd/display/dc/core/dc_link_dp.c
    @@ -1976,7 +1976,7 @@ enum dc_status read_hpd_rx_irq_data(
             return retval;
     }

    -static bool hpd_rx_irq_check_link_loss_status(
    +bool hpd_rx_irq_check_link_loss_status(
             struct dc_link *link,
             union hpd_irq_data *hpd_irq_dpcd_data)
     {
    diff --git a/drivers/gpu/drm/amd/display/dc/inc/dc_link_dp.h b/drivers/gpu/drm/amd/display/dc/inc/dc_link_dp.h
    index 3ae05c96d557..a9c0c7f7a55d 100644
    --- a/drivers/gpu/drm/amd/display/dc/inc/dc_link_dp.h
    +++ b/drivers/gpu/drm/amd/display/dc/inc/dc_link_dp.h
    @@ -67,6 +67,10 @@ bool perform_link_training_with_retries(
             struct pipe_ctx *pipe_ctx,
             enum signal_type signal);

    +bool hpd_rx_irq_check_link_loss_status(
    +        struct dc_link *link,
    +        union hpd_irq_data *hpd_irq_dpcd_data);
    +
     bool is_mst_supported(struct dc_link *link);

     bool detect_dp_sink_caps(struct dc_link *link);
    --
    2.30.2

