Date: 22 May 2003
From: Hans Boehm <hboehm@hpl.hp.com>
Subject: Re: [Linux-ia64] Re: web page on O(1) scheduler

On Wed, 21 May 2003, Arjan van de Ven wrote:

> oh you mean the OpenMP broken behavior of calling sched_yield() in a
> tight loop to implement spinlocks ?
>
> I'd guess that instead of second guessing the runtime, they should use
> the pthreads primitives which are the fastest for the platform one would
> hope.. (eg futexes nowadays)
>

That really depends on the circumstances. I think there will always
be applications that use custom synchronization because they need the
last bit of performance in their specific environment.

I just ran a quick test to compare

a) 10,000,000 lock/unlock pairs with a rudimentary custom user-level spinlock
implementation, and
b) 10,000,000 pthread_mutex_lock/pthread_mutex_unlock pairs.

All locking is done by a single thread; this is the completely contention-free
case.

On a 1GHz Itanium 2 I get

Custom lock:   180 msecs
Pthread lock: 1382 msecs

On a 2GHz Xeon, I get

Custom lock:   646 msecs
Pthread lock: 1659 msecs

There are good reasons for the differences:

1) The pthread implementation needs an atomic operation on lock exit to
check whether waiters need to be awoken. The spin lock just needs a
release barrier, which is cheap on both platforms. (See the sketch after
this list.)

2) The custom lock can be inlined. The pthread one normally involves a
dynamic library call. That has to be the case if you want to be able
to plug in different thread implementations.

3) Pthread locks come in various flavors, and the interface is designed
such that you have to check at runtime which flavor you have.
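
To make point 1 concrete, here is a rough sketch of the difference
(illustrative only, not part of the measurements above), written in C11
<stdatomic.h> notation for compactness; spinlock_style_unlock and
mutex_style_unlock are made-up names:

#include <stdatomic.h>

static atomic_int lock_word;  /* 0 = free, 1 = held, 2 = held with waiters */

/* Spinlock-style release: a plain store with release semantics is enough. */
static inline void spinlock_style_unlock(void)
{
    atomic_store_explicit(&lock_word, 0, memory_order_release);
}

/* Futex-style mutex release: must atomically find out whether any waiter
   needs to be woken, so a read-modify-write (an exchange here) is
   unavoidable even when the lock is uncontended. */
static inline void mutex_style_unlock(void)
{
    if (atomic_exchange_explicit(&lock_word, 0, memory_order_release) == 2) {
        /* a real implementation would wake one waiter here, e.g. via futex */
    }
}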

In the contention case there are other interesting issues, since it's
often far more efficient to spin before attempting to yield, and pthreads
implementations don't always do that.
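
A rough sketch of such a contended slow path (illustrative only, with
arbitrary thresholds, and using only the AO_ primitives that appear in the
attached test program):

#include <sched.h>
#include <unistd.h>
#include "atomic_ops.h"

/* Hypothetical slow path: spin briefly, then yield, then sleep. */
void spin_lock_slow(AO_TS_T *lock)
{
    int tries = 0;
    while (AO_test_and_set_acquire(lock) != AO_TS_CLEAR) {
        ++tries;
        if (tries < 100) {
            /* short wait: keep spinning; the holder will likely release soon */
        } else if (tries < 1000) {
            sched_yield();   /* medium wait: let the lock holder run */
        } else {
            usleep(1000);    /* long wait: stop burning CPU */
        }
    }
}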

The original case may also have involved barrier synchronization instead
of locks, in which case there is probably at least as much motivation to
"roll your own".

To reproduce my results from the attached files (ao is my current attempt at
a portable atomic operations library):

tar xvfz ao-0.2.tar.gz
cp time_lock.c ao-0.2
cd ao-0.2
gcc -O2 time_lock.c -lpthread
./a.out

--
Hans Boehm
(hboehm@hpl.hp.com)
[attachment: ao-0.2.tar.gz (application/x-gzip), omitted]

/* time_lock.c */
#include <pthread.h>
#include <stdio.h>
#include <stdlib.h>     /* for abort() */
#include <sys/time.h>
#include "atomic_ops.h"

/* Timing code stolen from Ellis/Kovac/Boehm GCBench. */
#define currentTime() stats_rtclock()
#define elapsedTime(x) (x)

unsigned long
stats_rtclock( void )
{
    struct timeval t;
    struct timezone tz;

    if (gettimeofday( &t, &tz ) == -1)
        return 0;
    /* cast avoids signed overflow of tv_sec * 1000 where long is 32 bits */
    return ((unsigned long)t.tv_sec * 1000 + t.tv_usec / 1000);
}

AO_TS_T my_spin_lock = AO_TS_INITIALIZER;

pthread_mutex_t my_pthread_lock = PTHREAD_MUTEX_INITIALIZER;

void spin_lock_ool(AO_TS_T *lock)
{
    /* Should repeatedly retry the AO_test_and_set_acquire, perhaps */
    /* after trying a plain read. Should "exponentially" back off */
    /* between tries. For short time periods it should spin, for */
    /* medium ones it should use sched_yield, and for longer ones usleep. */

    /* For now we punt, since this is a contention-free test. */
    abort();
}

static inline void spin_lock(AO_TS_T *lock)
{
    if (__builtin_expect(AO_test_and_set_acquire(lock) != AO_TS_CLEAR, 0))
        spin_lock_ool(lock);
}

static inline void spin_unlock(AO_TS_T *lock)
{
    AO_CLEAR(lock);
}

int main()
{
    unsigned long start_time, end_time;
    int i;

    start_time = currentTime();
    for (i = 0; i < 10000000; ++i) {
        spin_lock(&my_spin_lock);
        spin_unlock(&my_spin_lock);
    }
    end_time = currentTime();
    fprintf(stderr, "Custom lock: %lu msecs\n",
            elapsedTime(end_time - start_time));

    start_time = currentTime();
    for (i = 0; i < 10000000; ++i) {
        pthread_mutex_lock(&my_pthread_lock);
        pthread_mutex_unlock(&my_pthread_lock);
    }
    end_time = currentTime();
    fprintf(stderr, "Pthread lock: %lu msecs\n",
            elapsedTime(end_time - start_time));
    return 0;
}