Subject: Re: [PATCH] locking/qspinlock: Add bug check for exceeding MAX_NODES
From: Waiman Long <>
Date: Wed, 16 Jan 2019 11:53:42 -0500
On 01/16/2019 11:47 AM, Will Deacon wrote:
> On Tue, Jan 15, 2019 at 04:55:44PM -0500, Waiman Long wrote:
>> On some architectures, it is possible to have nested NMIs taking
>> spinlocks. Even though the chance of having more than 4 nested
>> spinlocks with contention is extremely small, there is still a
>> possibility that it may happen some day, leading to a system panic.
>>
>> What we don't want is silent corruption followed by a system panic
>> somewhere else. So add a BUG_ON() check to make sure that a system
>> panic caused by this will show the correct root cause.
>>
>> Signed-off-by: Waiman Long <longman@redhat.com>
>> ---
>>  kernel/locking/qspinlock.c | 10 ++++++++++
>>  1 file changed, 10 insertions(+)
>>
>> diff --git a/kernel/locking/qspinlock.c b/kernel/locking/qspinlock.c
>> index 8a8c3c2..f823221 100644
>> --- a/kernel/locking/qspinlock.c
>> +++ b/kernel/locking/qspinlock.c
>> @@ -412,6 +412,16 @@ void queued_spin_lock_slowpath(struct qspinlock *lock, u32 val)
>>  	idx = node->count++;
>>  	tail = encode_tail(smp_processor_id(), idx);
>>
>> +	/*
>> +	 * 4 nodes are allocated based on the assumption that there will
>> +	 * not be nested NMIs taking spinlocks. That may not be true in
>> +	 * some architectures even though the chance of needing more than
>> +	 * 4 nodes will still be extremely unlikely. Adding a bug check
>> +	 * here to make sure there won't be a silent corruption in case
>> +	 * this condition happens.
>> +	 */
>> +	BUG_ON(idx >= MAX_NODES);
>> +
> Hmm, I really don't like the idea of putting a BUG_ON() on the spin_lock()
> path. I'd prefer it if (a) we didn't add extra conditional code for the
> common case and (b) didn't bring down the machine. Could we emit a
> lockdep-style splat, instead?
>
> Will
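[Editor's note: a minimal sketch of what Will's suggestion could look like, assuming the check stays at the same spot in queued_spin_lock_slowpath(); this is not a posted patch, and the warning text is illustrative only.]

	/*
	 * Sketch: warn loudly once instead of panicking, so the machine
	 * stays up while the splat still points at the root cause
	 * (running out of per-CPU MCS nodes).
	 */
	if (unlikely(idx >= MAX_NODES))
		WARN_ONCE(1, "qspinlock: MCS node index %d exceeds MAX_NODES on CPU %d\n",
			  idx, smp_processor_id());

Note that this still adds a conditional to the common path, which is the other half of Will's objection; avoiding that entirely would require handling the overflow case out of line.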
I am going to drop this patch. I am working on another one that will handle the no-MCS-node-available case by spinning directly on the lock cacheline under this rare circumstance. Of course, that will incur a little bit of performance overhead in the slow path, which I am trying to measure right now.
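[Editor's note: a hedged sketch of the fallback Longman describes, under the assumption that it slots in right after the MCS node index is computed; the exact form of the follow-up patch is not shown in this thread.]

	/*
	 * Sketch: if all per-CPU MCS nodes are already in use (deeply
	 * nested NMI/lock contention), give up on queueing and spin
	 * directly on the lock word until it can be acquired, then skip
	 * the MCS queue handling entirely.
	 */
	if (unlikely(idx >= MAX_NODES)) {
		while (!queued_spin_trylock(lock))
			cpu_relax();
		goto release;
	}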
Cheers, Longman