Subject: Re: [PATCH v6 2/2] hwspinlock: add sun6i hardware spinlock support
On Tue, 2 Mar 2021 18:20:02 +0100
Maxime Ripard <maxime@cerno.tech> wrote:

> Hi,
>
> On Mon, Mar 01, 2021 at 03:06:08PM +0100, Wilken Gottwalt wrote:
> > On Mon, 1 Mar 2021 14:13:05 +0100
> > Maxime Ripard <mripard@kernel.org> wrote:
> >
> > > On Sat, Feb 27, 2021 at 02:03:54PM +0100, Wilken Gottwalt wrote:
> > > > Adds the sun6i_hwspinlock driver for the hardware spinlock unit found in
> > > > most of the sun6i compatible SoCs.
> > > >
> > > > This unit provides at least 32 spinlocks in hardware. The implementation
> > > > supports 32, 64, 128 or 256 32-bit registers. A lock is taken by
> > > > reading a register and released by writing a 0 to it. This driver
> > > > supports all 4 spinlock setups, but for now only the first setup (32
> > > > locks) seems to exist in available devices. This spinlock unit is shared
> > > > between all ARM cores and the embedded companion core. All of them can
> > > > take/release a lock in a single-cycle operation. It can be used to
> > > > synchronize access to devices shared by the ARM cores and the companion core.
> > > >
> > > > There are two ways to check if a lock is taken. The first way is to read
> > > > a lock. If a 0 is returned, the lock was free and is now taken. If a 1
> > > > is returned, the lock is already taken and the caller has to try again.
> > > > The second way is to read a 32-bit wide status register where every bit
> > > > represents one of the first 32 locks. According to the datasheets this
> > > > status register covers only the first 32 locks. This is why the first
> > > > approach (lock read/write) is used, so that all 256 locks in future
> > > > devices can be covered. The driver also reports the number of supported
> > > > locks via debugfs.
> > > >
> > > > Signed-off-by: Wilken Gottwalt <wilken.gottwalt@posteo.net>
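
For reference, the take/release semantics described above map onto hwspinlock
trylock/unlock callbacks roughly like this; only a sketch, and the struct
fields, function names and the use of the lock's priv pointer are my
assumptions, not necessarily the actual driver code:

#include <linux/hwspinlock.h>
#include <linux/io.h>

#include "hwspinlock_internal.h"

/* Reading the lock register returns 0 when the lock was free and is now
 * held by the caller, or 1 when it is already taken. A non-zero return
 * here tells the hwspinlock core the trylock succeeded.
 */
static int sun6i_hwspinlock_trylock(struct hwspinlock *lock)
{
	void __iomem *lock_addr = lock->priv;

	return readl(lock_addr) == 0;
}

/* Writing a 0 to the lock register releases the lock again. */
static void sun6i_hwspinlock_unlock(struct hwspinlock *lock)
{
	void __iomem *lock_addr = lock->priv;

	writel(0, lock_addr);
}
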
> >
> > Nope, I had to replace the devm_hwspin_lock_register function with the
> > hwspin_lock_register function because, as Bjorn pointed out, it can
> > fail and needs to be handled correctly. And having a devm_* function does not
> > play well with the non-devm clock/reset functions and winding back if an
> > error occurs. It also messes with the call order in the remove function. So
> > I went back to the classic way where I have full control over the call order.
>
> If you're talking about the clock and reset line reassertion, I don't
> really see what the trouble is. Sure, it's not going to be in the exact
> same order in remove, but it's still going to execute in the proper
> order (ie, clock disable, then reset disable, then clock put and reset
> put). And you can use devm_add_action if you want to handle things
> automatically.
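
For reference, the devm_add_action pattern you mention would look roughly like
this; the callback name and the sun6i_hwspinlock_data layout here are just
placeholders for illustration:

static void sun6i_hwspinlock_disable(void *data)
{
	struct sun6i_hwspinlock_data *priv = data;

	clk_disable_unprepare(priv->ahb_clk);
	reset_control_assert(priv->reset);
}

	/* in probe, once the clock is enabled and the reset is deasserted */
	err = devm_add_action_or_reset(&pdev->dev, sun6i_hwspinlock_disable, priv);
	if (err)
		return err;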

See, in v5 the result of devm_hwspin_lock_register was returned directly. The
remove callback and the bank_fail/clk_fail labels would not run if the
registration failed. In v6 this is fixed.

+	platform_set_drvdata(pdev, priv);
+
+	return devm_hwspin_lock_register(&pdev->dev, priv->bank, &sun6i_hwspinlock_ops,
+					 SPINLOCK_BASE_ID, priv->nlocks);
+bank_fail:
+	clk_disable_unprepare(priv->ahb_clk);
+clk_fail:
+	reset_control_assert(priv->reset);
+
+	return err;
+}
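
In v6 the registering and the error unwinding follow the classic pattern
instead, roughly along these lines; this is only a sketch of the call order,
and the sun6i_hwspinlock_data name and the remove callback details are not
taken verbatim from the patch:

	err = hwspin_lock_register(priv->bank, &pdev->dev, &sun6i_hwspinlock_ops,
				   SPINLOCK_BASE_ID, priv->nlocks);
	if (err)
		goto bank_fail;

	return 0;

bank_fail:
	clk_disable_unprepare(priv->ahb_clk);
clk_fail:
	/* earlier error paths (clock/reset setup, not shown) jump here */
	reset_control_assert(priv->reset);

	return err;
}

static int sun6i_hwspinlock_remove(struct platform_device *pdev)
{
	struct sun6i_hwspinlock_data *priv = platform_get_drvdata(pdev);

	hwspin_lock_unregister(priv->bank);
	clk_disable_unprepare(priv->ahb_clk);
	reset_control_assert(priv->reset);

	return 0;
}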

So, is v6 fine for you even if it uses a more classic approach?

greetings,
Will
