From: Ulf Hansson <>
Date: Wed, 23 Jun 2021 11:55:24 +0200
Subject: Re: [PATCH] PM: domains: Shrink locking area of the gpd_list_lock
On Wed, 23 Jun 2021 at 10:31, Stephen Boyd <swboyd@chromium.org> wrote:
>
> Quoting Ulf Hansson (2021-06-22 09:27:09)
> > On Mon, 21 Jun 2021 at 22:10, Stephen Boyd <swboyd@chromium.org> wrote:
> > > diff --git a/drivers/base/power/domain.c b/drivers/base/power/domain.c
> > > index b6a782c31613..18063046961c 100644
> > > --- a/drivers/base/power/domain.c
> > > +++ b/drivers/base/power/domain.c
> > > @@ -1984,8 +1984,8 @@ int pm_genpd_init(struct generic_pm_domain *genpd,
> > >
> > >         mutex_lock(&gpd_list_lock);
> > >         list_add(&genpd->gpd_list_node, &gpd_list);
> > > -       genpd_debug_add(genpd);
> > >         mutex_unlock(&gpd_list_lock);
> > > +       genpd_debug_add(genpd);
> > >
> > >         return 0;
> > >  }
> > > @@ -2162,9 +2162,11 @@ static int genpd_add_provider(struct device_node *np, genpd_xlate_t xlate,
> > >         cp->xlate = xlate;
> > >         fwnode_dev_initialized(&np->fwnode, true);
> > >
> > > +       mutex_lock(&gpd_list_lock);
> >
> > By looking at the existing code, $subject patch makes the behavior
> > consistent and fixes the problem that the locks must always be
> > taken/released in the same order.
> >
> > However, as I have been looking at this before (but never got to the
> > point of sending a patch), I am actually thinking that it would be
> > better to decouple the two locks, instead of further combining them.
> >
> > In other words, we shouldn't lock/unlock the &gpd_list_lock here in
> > this function. Of course, that also means we need to fixup the code in
> > of_genpd_del_provider() accordingly.
>
> Yes I was wondering why this list lock was used here at all. It seems to
> be a substitute for calling genpd_lock()? I opted to just push the list
The genpd_lock should be used to protect some of the data in the genpd struct, such as genpd->provider and genpd->has_provider.

Clearly, some of that data is currently protected by gpd_list_lock instead, which is suboptimal.
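To show the idea, here is an untested sketch against of_genpd_add_provider_simple(), where these fields get assigned today (placement is just my suggestion, not the actual patch):

	/* Update the per-genpd provider data under the genpd lock... */
	genpd_lock(genpd);
	genpd->provider = &np->fwnode;
	genpd->has_provider = true;
	genpd_unlock(genpd);
	/* ...so gpd_list_lock no longer needs to cover these writes. */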
> lock as far down as possible to fix the problem, which is holding it
> over the calls into OPP.
Yes, we don't want that.
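In other words, the lock should only cover the list access itself. A rough, untested sketch of what that could look like in the of_genpd_add_provider_onecell() loop (error handling trimmed, just to indicate the direction):

	bool present;

	/* Hold gpd_list_lock only for the genpd_present() list walk... */
	mutex_lock(&gpd_list_lock);
	present = genpd_present(genpd);
	mutex_unlock(&gpd_list_lock);
	if (!present)
		goto error;

	/* ...and call into the OPP layer without the lock held. */
	if (genpd->set_performance_state) {
		ret = dev_pm_opp_of_add_table_indexed(&genpd->dev, i);
		if (ret)
			goto error;
	}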
>
> If I've read the code correctly it serves no purpose to grab the
> gpd_list_lock here in genpd_add_provider() because we grab the
> of_genpd_mutex and that is protecting the of_genpd_providers list
> everywhere else. Is that right? Put another way, this hunk of the patch
> can be dropped and then your concern will be addressed and there isn't
> anything more to do.
It can certainly be dropped from the $subject patch; please re-spin to update that.
However, there are additional changes that deserve to be made to improve the behavior around the locks. More precisely, the &gpd_list_lock and the &of_genpd_mutex should be completely decoupled, but there are some other related things as well.
It's probably easier if I post a patch, on top of yours, to improve the behavior further. I would appreciate it if you could help with the test/review then.
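For genpd_add_provider(), I am thinking of something along these lines (untested, only to show the direction; it is today's function body minus the gpd_list_lock usage):

static int genpd_add_provider(struct device_node *np, genpd_xlate_t xlate,
			      void *data)
{
	struct of_genpd_provider *cp;

	cp = kzalloc(sizeof(*cp), GFP_KERNEL);
	if (!cp)
		return -ENOMEM;

	cp->node = of_node_get(np);
	cp->data = data;
	cp->xlate = xlate;
	fwnode_dev_initialized(&np->fwnode, true);

	/*
	 * The of_genpd_providers list is protected by of_genpd_mutex
	 * everywhere, so there is no need to involve gpd_list_lock here.
	 */
	mutex_lock(&of_genpd_mutex);
	list_add(&cp->link, &of_genpd_providers);
	mutex_unlock(&of_genpd_mutex);

	pr_debug("Added domain provider from %pOF\n", np);

	return 0;
}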
> >
> > >
> > >         mutex_lock(&of_genpd_mutex);
> > >         list_add(&cp->link, &of_genpd_providers);
> > >         mutex_unlock(&of_genpd_mutex);
> > > +       mutex_unlock(&gpd_list_lock);
> > >         pr_debug("Added domain provider from %pOF\n", np);
> > >
> > >         return 0;
> > > @@ -2314,8 +2314,6 @@ int of_genpd_add_provider_onecell(struct device_node *np,
> > >                 }
> > >         }
> > >
> > > -       mutex_unlock(&gpd_list_lock);
> > > -
> > >         return ret;
> > >  }
> > >  EXPORT_SYMBOL_GPL(of_genpd_add_provider_onecell);
> >
> > I will continue to have a look at this and provide some more comments
> > asap, but overall the change is a step in the right direction.
> >
> > Possibly, we may even consider applying it as is and work on the
> > things I pointed out above, as improvements on top. Let's see, give me
> > a day or so.
>
> Ok sure.
Kind regards
Uffe