Date: Wed, 2 Nov 2022 20:42:47 +0300
Subject: Re: [PATCH] drm/msm/dp: remove limitation of link rate at 5.4G to support HBR3
From: Dmitry Baryshkov <>
On 02/11/2022 20:25, Doug Anderson wrote:
> Hi,
>
> On Wed, Nov 2, 2022 at 10:15 AM Dmitry Baryshkov
> <dmitry.baryshkov@linaro.org> wrote:
>>
>> On 01/11/2022 17:37, Doug Anderson wrote:
>>> Hi,
>>>
>>> On Mon, Oct 31, 2022 at 5:15 PM Dmitry Baryshkov
>>> <dmitry.baryshkov@linaro.org> wrote:
>>>>
>>>> On 01/11/2022 03:08, Doug Anderson wrote:
>>>>> Hi,
>>>>>
>>>>> On Mon, Oct 31, 2022 at 2:11 PM Kuogee Hsieh <quic_khsieh@quicinc.com> wrote:
>>>>>>
>>>>>> Hi Dmitry,
>>>>>>
>>>>>>
>>>>>> Link rate is advertised by sink, but adjusted (reduced the link rate)
>>>>>> by host during link training.
>>>>>>
>>>>>> Therefore should be fine if host did not support HBR3 rate.
>>>>>>
>>>>>> It will reduce to lower link rate during link training procedures.
>>>>>>
>>>>>> kuogee
>>>>>>
>>>>>> On 10/31/2022 11:46 AM, Dmitry Baryshkov wrote:
>>>>>>> On 31/10/2022 20:27, Kuogee Hsieh wrote:
>>>>>>>> An HBR3-capable device shall also support TPS4. Since TPS4 feature
>>>>>>>> had been implemented already, it is not necessary to limit link
>>>>>>>> rate at HBR2 (5.4G). This patch remove this limitation to support
>>>>>>>> HBR3 (8.1G) link rate.
>>>>>>>
>>>>>>> The DP driver supports several platforms including sdm845 and can
>>>>>>> support, if I'm not mistaken, platforms up to msm8998/sdm630/660.
>>>>>>> Could you please confirm that all these SoCs have support for HBR3?
>>>>>>>
>>>>>>> With that fact being confirmed:
>>>>>>>
>>>>>>> Reviewed-by: Dmitry Baryshkov <dmitry.baryshkov@linaro.org>
>>>>>>>
>>>>>>>
>>>>>>>>
>>>>>>>> Signed-off-by: Kuogee Hsieh <quic_khsieh@quicinc.com>
>>>>>>>> ---
>>>>>>>>  drivers/gpu/drm/msm/dp/dp_panel.c | 4 ----
>>>>>>>>  1 file changed, 4 deletions(-)
>>>>>>>>
>>>>>>>> diff --git a/drivers/gpu/drm/msm/dp/dp_panel.c
>>>>>>>> b/drivers/gpu/drm/msm/dp/dp_panel.c
>>>>>>>> index 5149ceb..3344f5a 100644
>>>>>>>> --- a/drivers/gpu/drm/msm/dp/dp_panel.c
>>>>>>>> +++ b/drivers/gpu/drm/msm/dp/dp_panel.c
>>>>>>>> @@ -78,10 +78,6 @@ static int dp_panel_read_dpcd(struct dp_panel
>>>>>>>> *dp_panel)
>>>>>>>>          if (link_info->num_lanes > dp_panel->max_dp_lanes)
>>>>>>>>                  link_info->num_lanes = dp_panel->max_dp_lanes;
>>>>>>>> -       /* Limit support upto HBR2 until HBR3 support is added */
>>>>>>>> -       if (link_info->rate >=
>>>>>>>> (drm_dp_bw_code_to_link_rate(DP_LINK_BW_5_4)))
>>>>>>>> -               link_info->rate = drm_dp_bw_code_to_link_rate(DP_LINK_BW_5_4);
>>>>>>>> -
>>>>>>>>          drm_dbg_dp(panel->drm_dev, "version: %d.%d\n", major, minor);
>>>>>>>>          drm_dbg_dp(panel->drm_dev, "link_rate=%d\n", link_info->rate);
>>>>>>>>          drm_dbg_dp(panel->drm_dev, "lane_count=%d\n",
>>>>>>>> link_info->num_lanes);
>>>>>
>>>>> Stephen might remember better, but I could have sworn that the problem
>>>>> was that there might be something in the middle that couldn't support
>>>>> the higher link rate. In other words, I think we have:
>>>>>
>>>>> SoC <--> TypeC Port Controller <--> Display
>>>>>
>>>>> The SoC might support HBR3 and the display might support HBR3, but the
>>>>> TCPC (Type C Port Controller) might not. I think that the TCPC is a
>>>>> silent/passive component so it can't really let anyone know about its
>>>>> limitations.
>>>>>
>>>>> In theory I guess you could rely on link training to just happen to
>>>>> fail if you drive the link too fast for the TCPC to handle. Does this
>>>>> actually work reliably?
>>>>>
>>>>> I think the other option that was discussed in the past was to add
>>>>> something in the device tree for this. Either you could somehow model
>>>>> the TCPC in DRM and thus know that a given model of TCPC limits the
>>>>> link rate or you could hack in a property in the DP controller to
>>>>> limit it.
>>>>
>>>> Latest pmic_glink proposal from Bjorn include adding the drm_bridge for
>>>> the TCPC. Such bridge can in theory limit supported modes and rates.
>>>
>>> Excellent! Even so, I think this isn't totally a solved problem,
>>> right? Even though a bridge seems like a good place for this, last I
>>> remember checking the bridge API wasn't expressive enough to solve
>>> this problem. A bridge could limit pixel clocks just fine, but here we
>>> need to take into account other considerations to know if a given
>>> pixel clock can work at 5.4 GHz or not. For instance, if we're at 4
>>> lanes we could maybe make a given pixel clock at 5.4 GHz but not if we
>>> only have 2 lanes. I don't think that the DP controller passes the
>>> number of lanes to other parts of the bridge chain, though maybe
>>> there's some trick for it?
>>
>> I hope that somebody would fix MSM DP's data-lanes property usage to
>> follow the usual way (a part of DT graph). Then it would be possible to
>> query the amount of the lanes from the bridge.
>
> Sorry, can you explain how exactly this works?
This was related to your point about communicating the number of data lanes. Currently the DP nodes have data-lanes in the device node itself, which contradicts the typical definition and usage of the property: it is supposed to live in the graph endpoint. Once that is fixed, the drm_of_get_data_lanes_count() and drm_of_get_data_lanes_count_ep() helpers can be used to query the data-lanes value.
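Roughly something like this (untested sketch, not the actual driver code; it assumes data-lanes gets moved into the DP output endpoint, and the port/endpoint indices and the helper name are made up for illustration):

#include <linux/of.h>
#include <drm/drm_of.h>

/*
 * Hypothetical helper: query the lane count from the data-lanes
 * property of the DP output endpoint below @np.  Port 1 / endpoint 0
 * are made-up indices; 1 to 4 lanes are accepted.
 */
static int dp_get_dt_lane_count(struct device_node *np)
{
	int num_lanes;

	num_lanes = drm_of_get_data_lanes_count_ep(np, 1, 0, 1, 4);
	if (num_lanes < 0)
		num_lanes = 4;	/* property absent or invalid: assume 4 lanes */

	return num_lanes;
}

The returned value could then be used to clamp dp_panel->max_dp_lanes instead of the current flat property read.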
> I suspect that _somehow_ we need to get info from the TCPC to the DP
> controller driver about the maximum link rate. I think anything where
> the TCPC uses mode_valid() to reject modes and tries to make decisions
> based on "pixel clock" is going to be bad. If nothing else, I think
> that during link training that DP controller can try many different
> things to see what works. It may try varying the number of lanes, the
> BPC, the link rate, etc. I don't think mode_valid() is called each
> time through here.
In the worst case this can become a new max-data-rate property, or the existing link-frequencies property can be reused. But it needs to be defined in the board file (or in the TCPC driver if that is where the hardware limitation lies).
Given the existing dp_panel code, I think the fix can be to check for the link-frequencies property and to limit link_info->rate based on its value.
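Roughly along these lines (untested sketch, not the actual fix; the helper name is made up, and it assumes link-frequencies carries u64 values in Hz listed in ascending order, so the largest one is converted to kHz to match the unit of link_info->rate; where exactly the property lives, controller node vs. graph endpoint, is still an open question):

#include <linux/math64.h>
#include <linux/of.h>

/*
 * Hypothetical helper: return the largest link-frequencies entry in
 * kHz, or 0 if the property is absent.
 */
static u32 dp_panel_get_max_dt_rate(struct device_node *np)
{
	u64 freq;
	int cnt;

	cnt = of_property_count_u64_elems(np, "link-frequencies");
	if (cnt <= 0)
		return 0;

	if (of_property_read_u64_index(np, "link-frequencies", cnt - 1, &freq))
		return 0;

	return div_u64(freq, 1000);	/* Hz -> kHz, matching link_info->rate */
}

And then in dp_panel_read_dpcd(), next to the existing num_lanes clamping, something like:

	max_rate = dp_panel_get_max_dt_rate(np);
	if (max_rate && link_info->rate > max_rate)
		link_info->rate = max_rate;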
>>> ...I guess the other problem is that all existing users aren't
>>> currently modeling their TCPC in this way. What happens to them?
>>
>> There are no existing users. Bryan implemented TCPM support at some
>> point, but we never pushed this upstream.
>
> I mean existing DP users, like sc7180-trogdor devices. If the TCPC
> isn't modeled, then these need to continue defaulting to HBR2 since at
> least some of the boards have HBR2-only TCPCs.
Ack. I think somebody has to describe the DP links properly on those platforms, e.g. by adding the usb-connector nodes, etc. (I assume that the existing sc7180/7280 platforms use USB-C connectors shared with USB rather than dedicated DP/uDP connectors). Let's see how Bjorn's proposal goes.
--
With best wishes
Dmitry