Subject: Re: [PATCH] soc: qcom: rpmhpd: Use highest corner until sync_state

On 8/10/2022 2:25 AM, Bjorn Andersson wrote:
> On Mon 08 Aug 03:17 CDT 2022, Rajendra Nayak wrote:
>
>>
>> On 8/6/2022 9:20 PM, Bjorn Andersson wrote:
>>> In some cases the hardware that the bootloader has left configured
>>> depends on RPMH power domains for its operation, up until the point
>>> where the related Linux device driver probes and can either inherit
>>> that configuration or power down the hardware gracefully.
>>>
>>> Unfortunately, as Linux probes the relevant drivers in sequence, there
>>> are periods during the Linux boot flow where either the genpd refcount
>>> will reach 0, or worse, where the active performance_state votes do not
>>> meet the requirements of the state that the hardware was left in.
>>>
>>> One specific example of this is during boot of e.g. SM8150/SC8180X,
>>> where the display clock controller probes without any particular
>>> performance state needs (it only needs to access its registers). This
>>> drops the MMCX rail to MIN_SVS, which isn't sufficient to sustain the
>>> clock rates that the later-probing MDP is configured for. This results
>>> in an unrecoverable system state.
>>>
>>> Handle both these cases by keeping the referenced RPMH power-domains
>>> voted at their highest state until sync_state indicates that all
>>> devices referencing the RPMH power-domain driver have been probed.
>>>
>>> Signed-off-by: Bjorn Andersson <bjorn.andersson@linaro.org>
>>> ---
>>> drivers/soc/qcom/rpmhpd.c | 35 ++++++++++++++++++++++++++++++++++-
>>> 1 file changed, 34 insertions(+), 1 deletion(-)
>>>
>>> diff --git a/drivers/soc/qcom/rpmhpd.c b/drivers/soc/qcom/rpmhpd.c
>>> index 092f6ab09acf..494bb6c75ed7 100644
>>> --- a/drivers/soc/qcom/rpmhpd.c
>>> +++ b/drivers/soc/qcom/rpmhpd.c
>>> @@ -39,6 +39,7 @@
>>>   * @res_name: Resource name used for cmd-db lookup
>>>   * @addr: Resource address as looped up using resource name from
>>>   *        cmd-db
>>> + * @state_synced: Indicator that sync_state has been invoked for the rpmhpd resource
>>>   */
>>>  struct rpmhpd {
>>>          struct device *dev;
>>> @@ -54,6 +55,7 @@ struct rpmhpd {
>>>          bool enabled;
>>>          const char *res_name;
>>>          u32 addr;
>>> +        bool state_synced;
>>>  };
>>>
>>>  struct rpmhpd_desc {
>>> @@ -493,7 +495,13 @@ static int rpmhpd_aggregate_corner(struct rpmhpd *pd, unsigned int corner)
>>>          unsigned int this_active_corner = 0, this_sleep_corner = 0;
>>>          unsigned int peer_active_corner = 0, peer_sleep_corner = 0;
>>>
>>> -        to_active_sleep(pd, corner, &this_active_corner, &this_sleep_corner);
>>> +        if (pd->state_synced) {
>>> +                to_active_sleep(pd, corner, &this_active_corner, &this_sleep_corner);
>>> +        } else {
>>> +                /* Clamp to highest corner if sync_state hasn't happened */
>>> +                this_active_corner = pd->level_count - 1;
>>> +                this_sleep_corner = pd->level_count - 1;
>>> +        }
>>>
>>>          if (peer && peer->enabled)
>>>                  to_active_sleep(peer, peer->corner, &peer_active_corner,
>>> @@ -708,11 +716,36 @@ static int rpmhpd_probe(struct platform_device *pdev)
>>>          return of_genpd_add_provider_onecell(pdev->dev.of_node, data);
>>>  }
>>>
>>> +static void rpmhpd_sync_state(struct device *dev)
>>> +{
>>> +        const struct rpmhpd_desc *desc = of_device_get_match_data(dev);
>>> +        struct rpmhpd **rpmhpds = desc->rpmhpds;
>>> +        unsigned int corner;
>>> +        struct rpmhpd *pd;
>>> +        unsigned int i;
>>> +
>>> +        mutex_lock(&rpmhpd_lock);
>>> +        for (i = 0; i < desc->num_pds; i++) {
>>> +                pd = rpmhpds[i];
>>> +                if (!pd)
>>> +                        continue;
>>> +
>>> +                pd->state_synced = true;
>> []
>>
>>> +                if (pd->enabled)
>>> +                        corner = max(pd->corner, pd->enable_corner);
>>> +                else
>>> +                        corner = 0;
>>> +                rpmhpd_aggregate_corner(pd, corner);
>>
>> can't this be
>>         if (pd->enabled) {
>>                 corner = max(pd->corner, pd->enable_corner);
>>                 rpmhpd_aggregate_corner(pd, corner);
>>         }
>
> Please correct me if I'm wrong, but that would imply that if the
> power-domain was disabled by all clients by the time we hit sync_state,
> we would be left at max state until one of them tickles it again.
>
> The way it's written here means that in addition to keeping things at
> the highest corner, we also keep the domain on until sync_state hits,
> but no longer.

Ah, that's right, this seems like the right thing to do.

>>
>> Also, we do check the return value of rpmhpd_aggregate_corner() elsewhere,
>> but I guess there isn't much different we could do here even if there is
>> an error?
>>
>
> You're right, ignoring the return value here means that an error would
> be completely silent. There's not much we can do about it, but it might
> be useful to the developer to know that this happened.
>
> So a dev_err() seems in order.
>
> Thanks,
> Bjorn
>
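
Right, something along these lines perhaps (just a sketch, assuming a
local 'ret' in rpmhpd_sync_state() and reusing its 'dev' argument for
the print, exact message up to you):

                /* assumes 'int ret' declared at the top of rpmhpd_sync_state() */
                ret = rpmhpd_aggregate_corner(pd, corner);
                if (ret)
                        dev_err(dev, "failed to sync %s: %d\n",
                                pd->res_name, ret);
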
>>> +        }
>>> +        mutex_unlock(&rpmhpd_lock);
>>> +}
>>> +
>>>  static struct platform_driver rpmhpd_driver = {
>>>          .driver = {
>>>                  .name = "qcom-rpmhpd",
>>>                  .of_match_table = rpmhpd_match_table,
>>>                  .suppress_bind_attrs = true,
>>> +                .sync_state = rpmhpd_sync_state,
>>>          },
>>>          .probe = rpmhpd_probe,
>>>  };
