Subject: [tip:locking/core] locking/qspinlock/x86: Fix performance regression under unaccelerated VMs
Commit-ID:  43b3f02899f74ae9914a39547cc5492156f0027a
Gitweb: http://git.kernel.org/tip/43b3f02899f74ae9914a39547cc5492156f0027a
Author: Peter Zijlstra <peterz@infradead.org>
AuthorDate: Fri, 4 Sep 2015 17:25:23 +0200
Committer: Ingo Molnar <mingo@kernel.org>
CommitDate: Fri, 11 Sep 2015 07:49:42 +0200

locking/qspinlock/x86: Fix performance regression under unaccelerated VMs

Dave ran into horrible performance on a VM without PARAVIRT_SPINLOCKS
set, and Linus noted that the test-and-set fallback implementation was
needlessly expensive.

One should spin on the lock word with a plain load, not an RMW: failed
cmpxchg attempts keep bouncing the cache line between the waiting CPUs,
whereas loads let every waiter spin on a shared, read-only copy.

While there, remove 'queued' from the name, as the lock isn't queued
at all, but a simple test-and-set.
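
To make the difference concrete, here is a minimal userspace sketch of
the two spin strategies using C11 atomics (illustration only, not
kernel code; the helper names tas_lock_rmw(), tas_lock_ttas() and
cpu_relax_hint() are invented for this example):

#include <stdatomic.h>

static inline void cpu_relax_hint(void)
{
#if defined(__x86_64__) || defined(__i386__)
	__builtin_ia32_pause();	/* x86 PAUSE, rough cpu_relax() analogue */
#endif
}

/* Old behaviour: every spin iteration is an RMW (here an xchg, in the
 * kernel a cmpxchg), which pulls the lock's cache line exclusive into
 * the spinning CPU on each attempt. */
static void tas_lock_rmw(atomic_int *lock)
{
	while (atomic_exchange(lock, 1))
		cpu_relax_hint();
}

/* New behaviour (test-and-test-and-set): wait with plain loads, so the
 * line stays shared read-only in all waiters' caches, and attempt the
 * RMW only once the lock looks free. */
static void tas_lock_ttas(atomic_int *lock)
{
	do {
		while (atomic_load(lock))
			cpu_relax_hint();
	} while (atomic_exchange(lock, 1));
}

static void tas_unlock(atomic_int *lock)
{
	atomic_store(lock, 0);
}

int main(void)
{
	atomic_int lock = 0;

	tas_lock_rmw(&lock);	/* old style: works, but scales badly */
	tas_unlock(&lock);

	tas_lock_ttas(&lock);	/* new style: RMW only when it can succeed */
	tas_unlock(&lock);
	return 0;
}

Under contention the second form only issues the expensive RMW when it
has a realistic chance of succeeding, which is exactly the
atomic_read() + atomic_cmpxchg() structure in the hunk below.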

Suggested-by: Linus Torvalds <torvalds@linux-foundation.org>
Reported-by: Dave Chinner <david@fromorbit.com>
Tested-by: Dave Chinner <david@fromorbit.com>
Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
Cc: Peter Zijlstra <peterz@infradead.org>
Cc: Thomas Gleixner <tglx@linutronix.de>
Cc: Waiman Long <Waiman.Long@hp.com>
Cc: stable@vger.kernel.org # v4.2+
Link: http://lkml.kernel.org/r/20150904152523.GR18673@twins.programming.kicks-ass.net
Signed-off-by: Ingo Molnar <mingo@kernel.org>
---
 arch/x86/include/asm/qspinlock.h | 16 ++++++++++++----
 include/asm-generic/qspinlock.h  |  4 ++--
 kernel/locking/qspinlock.c       |  2 +-
 3 files changed, 15 insertions(+), 7 deletions(-)

diff --git a/arch/x86/include/asm/qspinlock.h b/arch/x86/include/asm/qspinlock.h
index 9d51fae..8dde3bd 100644
--- a/arch/x86/include/asm/qspinlock.h
+++ b/arch/x86/include/asm/qspinlock.h
@@ -39,15 +39,23 @@ static inline void queued_spin_unlock(struct qspinlock *lock)
 }
 #endif
 
-#define virt_queued_spin_lock virt_queued_spin_lock
+#define virt_spin_lock virt_spin_lock
 
-static inline bool virt_queued_spin_lock(struct qspinlock *lock)
+static inline bool virt_spin_lock(struct qspinlock *lock)
 {
 	if (!static_cpu_has(X86_FEATURE_HYPERVISOR))
 		return false;
 
-	while (atomic_cmpxchg(&lock->val, 0, _Q_LOCKED_VAL) != 0)
-		cpu_relax();
+	/*
+	 * On hypervisors without PARAVIRT_SPINLOCKS support we fall
+	 * back to a Test-and-Set spinlock, because fair locks have
+	 * horrible lock 'holder' preemption issues.
+	 */
+
+	do {
+		while (atomic_read(&lock->val) != 0)
+			cpu_relax();
+	} while (atomic_cmpxchg(&lock->val, 0, _Q_LOCKED_VAL) != 0);
 
 	return true;
 }
diff --git a/include/asm-generic/qspinlock.h b/include/asm-generic/qspinlock.h
index 83bfb87..e2aadbc 100644
--- a/include/asm-generic/qspinlock.h
+++ b/include/asm-generic/qspinlock.h
@@ -111,8 +111,8 @@ static inline void queued_spin_unlock_wait(struct qspinlock *lock)
 		cpu_relax();
 }
 
-#ifndef virt_queued_spin_lock
-static __always_inline bool virt_queued_spin_lock(struct qspinlock *lock)
+#ifndef virt_spin_lock
+static __always_inline bool virt_spin_lock(struct qspinlock *lock)
 {
 	return false;
 }
diff --git a/kernel/locking/qspinlock.c b/kernel/locking/qspinlock.c
index 337c881..87e9ce6a 100644
--- a/kernel/locking/qspinlock.c
+++ b/kernel/locking/qspinlock.c
@@ -289,7 +289,7 @@ void queued_spin_lock_slowpath(struct qspinlock *lock, u32 val)
 	if (pv_enabled())
 		goto queue;
 
-	if (virt_queued_spin_lock(lock))
+	if (virt_spin_lock(lock))
 		return;
 
 	/*
