 
Subject: RE: [Linuxarm] Re: [PATCH for next v1 0/2] gpio: few clean up patches to replace spin_lock_irqsave with spin_lock


> -----Original Message-----
> From: Andy Shevchenko [mailto:andy.shevchenko@gmail.com]
> Sent: Thursday, February 11, 2021 3:57 AM
> To: Song Bao Hua (Barry Song) <song.bao.hua@hisilicon.com>
> Cc: luojiaxing <luojiaxing@huawei.com>; Linus Walleij
> <linus.walleij@linaro.org>; Grygorii Strashko <grygorii.strashko@ti.com>;
> Santosh Shilimkar <ssantosh@kernel.org>; Kevin Hilman <khilman@kernel.org>;
> open list:GPIO SUBSYSTEM <linux-gpio@vger.kernel.org>; Linux Kernel Mailing
> List <linux-kernel@vger.kernel.org>; linuxarm@openeuler.org
> Subject: Re: [Linuxarm] Re: [PATCH for next v1 0/2] gpio: few clean up patches
> to replace spin_lock_irqsave with spin_lock
>
> On Wed, Feb 10, 2021 at 11:50:45AM +0000, Song Bao Hua (Barry Song) wrote:
> > > -----Original Message-----
> > > From: Andy Shevchenko [mailto:andy.shevchenko@gmail.com]
> > > Sent: Wednesday, February 10, 2021 11:51 PM
> > > On Wed, Feb 10, 2021 at 5:43 AM luojiaxing <luojiaxing@huawei.com> wrote:
> > > > On 2021/2/9 17:42, Andy Shevchenko wrote:
>
> ...
>
> > > > Between IRQ handler A and IRQ handler A, there's no need for spin_lock_irqsave().
> > >
> > > Right, but it's not the case in the patches you provided.
> >
> > The code still holds the spin_lock. So if two CPUs call the same IRQ
> > handler, the spin_lock makes them spin; and if interrupts are threaded,
> > the spin_lock makes the two threads run the same handler one by one.
>
> If you run on an SMP system and it happens that spin_lock_irqsave() is called
> immediately after spin_unlock(), you will get into trouble. Am I mistaken?

Hi Andy,
Thanks for your reply.

But I don't agree that spin_lock_irqsave() called immediately after
spin_unlock() could be a problem on SMP.
When the 1st CPU releases the spinlock by spin_unlock(), it has completed its
section of accessing the critical data; only then can the 2nd CPU acquire the
spinlock. The two CPUs' accesses to the same data never overlap.
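
To make this concrete, here is a minimal sketch of the pattern in question
(hypothetical foo_* names, not the actual patches):

#include <linux/interrupt.h>
#include <linux/spinlock.h>
#include <linux/types.h>

/* Hypothetical driver state; the foo_* names are illustrative only. */
struct foo_chip {
	spinlock_t lock;
	u32 pending;
};

static irqreturn_t foo_irq_handler(int irq, void *dev_id)
{
	struct foo_chip *chip = dev_id;

	/*
	 * A hardirq handler already runs with local interrupts disabled,
	 * so the plain spin_lock() cannot be interrupted on this CPU,
	 * while the lock itself serializes concurrent invocations of the
	 * same handler on other CPUs.
	 */
	spin_lock(&chip->lock);		/* was: spin_lock_irqsave() */
	chip->pending = 0;		/* touch the shared critical data */
	spin_unlock(&chip->lock);

	return IRQ_HANDLED;
}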

>
> I think this entire activity is a carefully crafted minefield for future
> syzkaller runs and fuzzers alike. I don't believe there are no side effects
> in the long term on all possible systems and configurations (including
> forced-threaded IRQ handlers).

Also, I don't understand why a forced-threaded IRQ could be a problem. Once
the IRQ handler runs as a thread, the situation is actually simpler than with
a non-threaded IRQ: all threads, including the IRQ threads, must hold the
spin_lock, so they cannot access the same critical data at the same time
either.
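
Again as a hedged sketch (reusing the hypothetical foo_chip above): the
process-context side keeps spin_lock_irqsave(), which is what protects the
non-threaded case, while under forced threading both contenders are plain
threads and the lock alone serializes them:

/*
 * Hypothetical process-context reader for the same foo_chip. Here
 * spin_lock_irqsave() is still required: in the non-threaded case the
 * hardirq handler could fire on this CPU while the lock is held. In the
 * forced-threaded case disabling interrupts is merely redundant, since
 * the handler is just another thread spinning on the same lock.
 */
static u32 foo_read_pending(struct foo_chip *chip)
{
	unsigned long flags;
	u32 val;

	spin_lock_irqsave(&chip->lock, flags);
	val = chip->pending;
	spin_unlock_irqrestore(&chip->lock, flags);

	return val;
}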

>
> I would love to see a better explanation in the commit message of such patches
> which makes it clear that there are *no* side effects.
>

People had the same questions before, but I think the discussion around this
commit led to a better commit log:

https://git.kernel.org/pub/scm/linux/kernel/git/torvalds/linux.git/commit/?id=4eb7d0cd59

> For time being, NAK to the all patches of this kind.

Fair enough. If you expect a better explanation, I agree the commit log is
too short.

>
> --
> With Best Regards,
> Andy Shevchenko
>

Thanks
Barry
