    Subject: Re: [PATCH] drm/msm/dp: remove limitation of link rate at 5.4G to support HBR3
    From: Dmitry Baryshkov <dmitry.baryshkov@linaro.org>
    Date: 2 Nov 2022

    On 01/11/2022 17:37, Doug Anderson wrote:
    > Hi,
    >
    > On Mon, Oct 31, 2022 at 5:15 PM Dmitry Baryshkov
    > <dmitry.baryshkov@linaro.org> wrote:
    >>
    >> On 01/11/2022 03:08, Doug Anderson wrote:
    >>> Hi,
    >>>
    >>> On Mon, Oct 31, 2022 at 2:11 PM Kuogee Hsieh <quic_khsieh@quicinc.com> wrote:
    >>>>
    >>>> Hi Dmitry,
    >>>>
    >>>>
    >>>> The link rate is advertised by the sink, but it is adjusted (reduced)
    >>>> by the host during link training.
    >>>>
    >>>> Therefore it should be fine even if the host does not support the
    >>>> HBR3 rate: it will drop to a lower link rate during the link
    >>>> training procedure.
    >>>>
    >>>> kuogee
    >>>>
    >>>> On 10/31/2022 11:46 AM, Dmitry Baryshkov wrote:
    >>>>> On 31/10/2022 20:27, Kuogee Hsieh wrote:
    >>>>>> An HBR3-capable device shall also support TPS4. Since the TPS4
    >>>>>> feature has already been implemented, it is no longer necessary
    >>>>>> to limit the link rate to HBR2 (5.4G). This patch removes that
    >>>>>> limitation to support the HBR3 (8.1G) link rate.
    >>>>>
    >>>>> The DP driver supports several platforms including sdm845 and can
    >>>>> support, if I'm not mistaken, platforms up to msm8998/sdm630/660.
    >>>>> Could you please confirm that all these SoCs have support for HBR3?
    >>>>>
    >>>>> With that fact being confirmed:
    >>>>>
    >>>>> Reviewed-by: Dmitry Baryshkov <dmitry.baryshkov@linaro.org>
    >>>>>
    >>>>>
    >>>>>>
    >>>>>> Signed-off-by: Kuogee Hsieh <quic_khsieh@quicinc.com>
    >>>>>> ---
    >>>>>> drivers/gpu/drm/msm/dp/dp_panel.c | 4 ----
    >>>>>> 1 file changed, 4 deletions(-)
    >>>>>>
    >>>>>> diff --git a/drivers/gpu/drm/msm/dp/dp_panel.c b/drivers/gpu/drm/msm/dp/dp_panel.c
    >>>>>> index 5149ceb..3344f5a 100644
    >>>>>> --- a/drivers/gpu/drm/msm/dp/dp_panel.c
    >>>>>> +++ b/drivers/gpu/drm/msm/dp/dp_panel.c
    >>>>>> @@ -78,10 +78,6 @@ static int dp_panel_read_dpcd(struct dp_panel *dp_panel)
    >>>>>>  	if (link_info->num_lanes > dp_panel->max_dp_lanes)
    >>>>>>  		link_info->num_lanes = dp_panel->max_dp_lanes;
    >>>>>>  
    >>>>>> -	/* Limit support upto HBR2 until HBR3 support is added */
    >>>>>> -	if (link_info->rate >= (drm_dp_bw_code_to_link_rate(DP_LINK_BW_5_4)))
    >>>>>> -		link_info->rate = drm_dp_bw_code_to_link_rate(DP_LINK_BW_5_4);
    >>>>>> -
    >>>>>>  	drm_dbg_dp(panel->drm_dev, "version: %d.%d\n", major, minor);
    >>>>>>  	drm_dbg_dp(panel->drm_dev, "link_rate=%d\n", link_info->rate);
    >>>>>>  	drm_dbg_dp(panel->drm_dev, "lane_count=%d\n", link_info->num_lanes);
    >>>
    >>> Stephen might remember better, but I could have sworn that the problem
    >>> was that there might be something in the middle that couldn't support
    >>> the higher link rate. In other words, I think we have:
    >>>
    >>> SoC <--> TypeC Port Controller <--> Display
    >>>
    >>> The SoC might support HBR3 and the display might support HBR3, but the
    >>> TCPC (Type C Port Controller) might not. I think that the TCPC is a
    >>> silent/passive component so it can't really let anyone know about its
    >>> limitations.
    >>>
    >>> In theory I guess you could rely on link training to just happen to
    >>> fail if you drive the link too fast for the TCPC to handle. Does this
    >>> actually work reliably?
    >>>
    >>> I think the other option that was discussed in the past was to add
    >>> something in the device tree for this. Either you could somehow model
    >>> the TCPC in DRM and thus know that a given model of TCPC limits the
    >>> link rate or you could hack in a property in the DP controller to
    >>> limit it.
    >>
    >> The latest pmic_glink proposal from Bjorn includes adding a drm_bridge
    >> for the TCPC. Such a bridge can in theory limit the supported modes
    >> and rates.
    >
    > Excellent! Even so, I think this isn't totally a solved problem,
    > right? Even though a bridge seems like a good place for this, last I
    > remember checking the bridge API wasn't expressive enough to solve
    > this problem. A bridge could limit pixel clocks just fine, but here we
    > need to take other considerations into account to know whether a given
    > pixel clock can work at 5.4 Gbps or not. For instance, with 4 lanes we
    > could maybe manage a given pixel clock at 5.4 Gbps, but not if we only
    > have 2 lanes. I don't think that the DP controller passes the
    > number of lanes to other parts of the bridge chain, though maybe
    > there's some trick for it?

    I hope that somebody will fix the MSM DP driver's data-lanes property
    usage to follow the usual convention (as part of the DT graph). Then it
    would be possible to query the number of lanes from the bridge.
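
    Roughly, once the bridge knows both the lane count and the requested
    mode, the check it would have to perform looks like the sketch below.
    The names are made up for illustration and this is not the existing
    drm_bridge API; the arithmetic is just that DP 1.x uses 8b/10b channel
    coding, so only 80% of the raw symbol rate carries pixel data.

        /*
         * Hypothetical sketch: can a mode fit on this link?
         *
         * pixel_khz - mode pixel clock in kHz
         * bpp       - bits per pixel
         * lanes     - number of DP lanes routed by the TCPC
         * rate_khz  - per-lane symbol clock in kHz (540000 for HBR2)
         */
        static bool tcpc_link_can_carry_mode(unsigned long pixel_khz, int bpp,
                                             int lanes, unsigned long rate_khz)
        {
                /* Raw link bandwidth: 10 bits per symbol, per lane. */
                unsigned long long raw = (unsigned long long)rate_khz * 10 * lanes;
                /* 8b/10b coding leaves 8/10 of that as pixel payload. */
                unsigned long long payload = raw * 8 / 10;

                return (unsigned long long)pixel_khz * bpp <= payload;
        }

    For example, 4K60 (594000 kHz) at 30 bpp needs ~17.8 Gbit/s, more than
    the 17.28 Gbit/s payload of four HBR2 lanes, so it needs HBR3; the same
    mode at 24 bpp fits four HBR2 lanes, and neither fits on two lanes.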

    > ...I guess the other problem is that all existing users aren't
    > currently modeling their TCPC in this way. What happens to them?

    There are no existing users. Bryan implemented TCPM support at some
    point, but we never pushed this upstream.
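
    As for relying on link training, the fallback Kuogee describes earlier
    in the thread amounts to something like the sketch below (the helper
    names and the max_sink_rate field are hypothetical, not the actual
    dp_ctrl code): the source retries training at progressively lower
    rates, so a rate that some component cannot carry degrades rather than
    hard-fails.

        /*
         * Hypothetical sketch of link-rate fallback during training; the
         * helpers here are made up for illustration.
         */
        static int dp_train_with_fallback(struct dp_link *link)
        {
                /* DP link rates as kHz symbol clocks: HBR3, HBR2, HBR, RBR. */
                static const unsigned long rates[] = { 810000, 540000, 270000, 162000 };
                int i;

                for (i = 0; i < ARRAY_SIZE(rates); i++) {
                        if (rates[i] > link->max_sink_rate)
                                continue;       /* never exceed what the sink advertised */
                        if (dp_link_train_at(link, rates[i]) == 0)
                                return 0;       /* training converged at this rate */
                }

                return -EIO;                    /* no rate worked */
        }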

    --
    With best wishes
    Dmitry
