Subject: Re: [PATCH 1/1] ppc/crash: Skip spinlocks during crash
Hello Christophe, thanks for the feedback.

I noticed an error in this patch and sent a v2, which can be seen here:
http://patchwork.ozlabs.org/patch/1262468/

Comments inline:

On Fri, 2020-03-27 at 07:50 +0100, Christophe Leroy wrote:
> > @@ -142,6 +144,8 @@ static inline void arch_spin_lock(arch_spinlock_t *lock)
> >  		if (likely(__arch_spin_trylock(lock) == 0))
> >  			break;
> >  		do {
> > +			if (unlikely(crash_skip_spinlock))
> > +				return;

Complete function for reference:

static inline void arch_spin_lock(arch_spinlock_t *lock)
{
	while (1) {
		if (likely(__arch_spin_trylock(lock) == 0))
			break;
		do {
			if (unlikely(crash_skip_spinlock))
				return;
			HMT_low();
			if (is_shared_processor())
				splpar_spin_yield(lock);
		} while (unlikely(lock->slock != 0));
		HMT_medium();
	}
}
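
The flag itself is not quoted in this mail; the idea is a single global that
the crash path sets once the other CPUs are being stopped. A minimal sketch
of what that could look like (declaration placement and the helper name are
my assumptions for illustration, not the actual patch hunks):

/* Sketch only -- placement and helper name are assumptions. */
extern bool crash_skip_spinlock;	/* e.g. next to arch_spin_lock() */

static void crash_stop_spinning(void)	/* hypothetical helper */
{
	/* Let CPUs stuck in arch_spin_lock() on a lock held by interrupted
	 * code bail out instead of spinning forever during the crash. */
	crash_skip_spinlock = true;
	smp_mb();	/* make the store visible to the spinning CPUs */
}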

> You are adding a test that reads a global var in the middle of such a hot
> path? That must kill performance.

I thought that, in the worst-case scenario, it would increase the delay of
an arch_spin_lock() call by at most one spin cycle. Here is my reasoning
(spelled out on the loop below):

- If the lock is already free, nothing changes.
- Otherwise, the caller keeps spinning until the lock is released.
- Each iteration of that wait loop just got slightly longer.
- Worst case: one extra iteration, since lock->slock can become 0 right
  after the new test.
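
To make that concrete, here is the same inner loop from the function quoted
above, annotated with where the one extra pass can come from (nothing new
here, just comments on the already-quoted code):

do {
	if (unlikely(crash_skip_spinlock))	/* new read-only test; the
						 * branch is only taken while
						 * crashing */
		return;
	HMT_low();
	if (is_shared_processor())
		splpar_spin_yield(lock);
} while (unlikely(lock->slock != 0));	/* if the owner released the lock
					 * right after the new test above,
					 * we pay at most one extra pass */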

Could you please point out where I failed to see the performance penalty?
(I need to get better at this :) )


> Can we do it differently?

Sure, a less intrusive way of doing it would be to free the locks the crash
path needs before proceeding (rough sketch below). I just thought it would
be harder to maintain.
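
Roughly, something like the following, where the lock and the helper name
are made up for illustration (this is not from the patch):

/* Hypothetical sketch of the alternative: the crash path forcibly releases
 * the specific locks it is about to need, so arch_spin_lock() itself stays
 * untouched.  Only safe because the other CPUs have already been stopped. */
extern arch_spinlock_t some_lock_the_crash_path_needs;	/* stand-in name */

static void crash_release_needed_locks(void)		/* hypothetical */
{
	arch_spin_unlock(&some_lock_the_crash_path_needs);
}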

> Christophe

Best regards,
Leonardo