Date: Mon, 22 Nov 1999
From: mingo
Subject: Re: [patch] Re: spin_unlock optimization(i386)

On Mon, 22 Nov 1999, Jamie Lokier wrote:

> An rmb() will prevent reads that occur before it from being
> speculatively executed after it, and vice versa. rmb() expands to asm
> volatile ("lock; addl $0,0(%%esp)" : : : "memory"). Being a locked
> operation, it serves as a processor barrier for both reads and writes;
> being in cache, it is fast on a PPro. [...]
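
(for reference, the i386 definition of that era was roughly the
following - a sketch from memory; rmb() was simply an alias for mb():)

    /* full barrier: a locked no-op add to the top of the stack.
       the lock prefix serializes the CPU, and the "memory" clobber
       stops the compiler from reordering across it. */
    #define mb()  __asm__ __volatile__("lock; addl $0,0(%%esp)" : : : "memory")
    #define rmb() mb()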

this is the misconception. Yes, it's in cache. Yes, it should be fast.
Still it's not fast - and not faster than 'btrl $0, %0'. It also has a
bigger icache footprint (together with the mov). We went through this a
dozen times; if you want to improve things in LOCK-land then please also
time things with the cycle counter, as we did, and consider
spin_unlock()'s total opcode size. (It's not necessarily trivial to make
the cycle counter measurement accurate: rdtsc is not a serializing
instruction and interacts with the measured region.) I'm afraid this
area isn't likely to bring anything new/better.
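
(to spell out the two unlock sequences being compared - a sketch, the
macro names are mine and the spinlock_t layout/constraints simplified:)

    /* the current spin_unlock() core: one locked read-modify-write
       on the lock word */
    #define spin_unlock_current(lp) \
        __asm__ __volatile__("lock; btrl $0,%0" \
            : "=m" ((lp)->lock) : : "memory")

    /* the proposed variant: plain store plus rmb() as the barrier -
       the mov *and* the locked no-op, so more opcode bytes in the
       unlock path */
    #define spin_unlock_proposed(lp) \
        __asm__ __volatile__("movl $0,%0\n\tlock; addl $0,0(%%esp)" \
            : "=m" ((lp)->lock) : : "memory")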

nevertheless the situation isn't all that bad - our spinlocks are lean
and mean already :)
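
(and if you want to redo the cycle-counter measurement, the usual trick
is to bracket rdtsc with a serializing instruction such as cpuid - a
rough sketch, the helper name is mine:)

    /* cpuid is a serializing instruction: it keeps the measured
       region from leaking past the rdtsc reads.  eax=0 picks a
       cheap leaf; ebx/ecx get clobbered by cpuid. */
    static inline unsigned long long serialized_rdtsc(void)
    {
        unsigned int lo, hi;
        __asm__ __volatile__(
            "cpuid\n\t"
            "rdtsc"
            : "=a" (lo), "=d" (hi)
            : "a" (0)
            : "ebx", "ecx");
        return ((unsigned long long)hi << 32) | lo;
    }

    /* time many iterations and subtract the empty-loop overhead -
       a one-shot measurement is dominated by the cpuid cost itself */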

-- mingo


