Subject: [patch] new spinlock variant, spinlock-2.3.30-A4

the attached patch implements a new spinlock variant, which is 2 bytes
shorter per lock/unlock (thus less icache footprint) than the previous
variant. The inline part of the kernel shrinks by 1654 bytes. Speed is
the same. The patch is against pre3-2.3.30 and works just fine for me.
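
to illustrate where the 2 bytes go, here is how the two locking
instructions encode (a sketch assuming a plain register-indirect
operand; exact sizes vary with the addressing mode):

lock ; btsl $0, (%eax)	# F0 0F BA 28 00 - 5 bytes (old)
lock ; decb (%eax)	# F0 FE 08       - 3 bytes (new)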


there is also another spinlock variant I came up with: it does not use
any LOCK-ed instruction. This is for systems with up to 8 CPUs (it is
easy to extend to 16 CPUs, or to simplify to 4 CPUs). Every spinlock is
NR_CPUS bytes, and each byte represents one CPU; in the 8-CPU case the
spinlock is the two 32-bit words at (%esi) and 4(%esi). Eg. on CPU3 the
following code is executed:


movb $1, 3(%esi)	# spin_lock: CPU3 marks its own byte busy
movl 4(%esi), %edx	# atomic 32-bit read of bytes 4-7
addl (%esi), %edx	# add the 32-bit word holding bytes 0-3
cmpl $0x01000000, %edx	# is our byte (byte 3) the only one set?
jne slow_path		# no - some other CPU stored to its byte

...

movb $0, 3(%esi)	# spin_unlock: clear our byte, no LOCK needed

the method uses the fact that 8-bit writes and 32-bit reads are
guaranteed to be atomic by Intel, and that Processor Ordering is
enforced. By the time we add (%esi) and 4(%esi), all other CPUs'
potential stores must have been observed, so we get into the slow path
if there was any other write to either of the two words. We get into
the 'lock acquired' path if and only if ours was the only store to all
8 bytes: with only byte 3 set, (%esi) reads exactly 0x01000000 and
4(%esi) reads 0, so the sum matches. (The slow path is reasonably
straightforward.)
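
in C, the idea looks roughly like this (a sketch only, not part of the
patch below: nolock_t and the helper names are made up here, and a real
slow path would want a tie-break so two backed-off CPUs cannot
livelock):

#define NOLOCK_NR_CPUS 8

typedef struct {
	volatile unsigned char byte[NOLOCK_NR_CPUS];
} nolock_t;

static inline unsigned int nolock_sum(nolock_t *l)
{
	/* two atomic 32-bit reads covering all 8 per-CPU bytes */
	return *(volatile unsigned int *)&l->byte[0] +
	       *(volatile unsigned int *)&l->byte[4];
}

static inline void nolock_lock(nolock_t *l, int cpu)
{
	for (;;) {
		l->byte[cpu] = 1;	/* atomic 8-bit store */
		/* our byte contributes exactly 1 << (8*(cpu&3)) */
		if (nolock_sum(l) == 1U << (8 * (cpu & 3)))
			return;		/* ours was the only store */
		l->byte[cpu] = 0;	/* contention: back off */
		while (nolock_sum(l))	/* wait until all bytes clear */
			;
	}
}

static inline void nolock_unlock(nolock_t *l, int cpu)
{
	l->byte[cpu] = 0;	/* atomic 8-bit store */
}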

the above (together with the necessary glue instructions) executes in
about 15 cycles on a Xeon, while the current spinlocks execute in 22
cycles. (~9 of those 15 cycles are due to a pipeline stall caused by
the partial byte dependency: the 8-bit store is immediately followed by
an overlapping 32-bit load.)

I'm not at all convinced this is worth the trouble, but it's an
interesting, LOCK-less variant nevertheless ;)

-- mingo
--- linux/include/asm-i386/spinlock.h.orig Tue Nov 30 06:49:09 1999
+++ linux/include/asm-i386/spinlock.h Tue Nov 30 08:59:54 1999
@@ -33,7 +33,7 @@
#define SPINLOCK_MAGIC_INIT /* */
#endif

-#define SPIN_LOCK_UNLOCKED (spinlock_t) { 0 SPINLOCK_MAGIC_INIT }
+#define SPIN_LOCK_UNLOCKED (spinlock_t) { 1 SPINLOCK_MAGIC_INIT }

#define spin_lock_init(x) do { *(x) = SPIN_LOCK_UNLOCKED; } while(0)
/*
@@ -43,21 +43,24 @@
* We make no fairness assumptions. They have a cost.
*/

-#define spin_unlock_wait(x) do { barrier(); } while(((volatile spinlock_t *)(x))->lock)
+#define spin_unlock_wait(x) do { barrier(); } while(((volatile spinlock_t *)(x))->lock != 1)

+/*
+ * This spinlock variant works for up to 255 CPUs.
+ */
#define spin_lock_string \
"\n1:\t" \
- "lock ; btsl $0,%0\n\t" \
- "jc 2f\n" \
+ "lock ; decb %0\n\t" \
+ "jnz 2f\n" \
".section .text.lock,\"ax\"\n" \
"2:\t" \
- "testb $1,%0\n\t" \
+ "cmpb $1,%0\n\t" \
"jne 2b\n\t" \
"jmp 1b\n" \
".previous"

#define spin_unlock_string \
- "movb $0,%0"
+ "movb $1,%0"

extern inline void spin_lock(spinlock_t *lock)
{
@@ -85,7 +88,7 @@
:"=m" (__dummy_lock(lock)));
}

-#define spin_trylock(lock) (!test_and_set_bit(0,(lock)))
+#define spin_trylock(lock) (atomic_dec_and_test((atomic_t *)(lock)))

/*
* Read-write spinlocks, allowing multiple readers