Subject: Re: Are Linux pipes slower than the FreeBSD ones ?
On Thursday 06 March 2008 23:11, Dmitry Antipov wrote:
> Nick Piggin wrote:
> > One thing to try is pinning both processes on the same CPU. This
> > may be what the FreeBSD scheduler is preferring to do, and it ends
> > up being really a tradeoff that helps some workloads and hurts
> > others. With a very unscientific test with an old kernel, the
> > pipe.c test gets anywhere from about 1.5 to 3 times faster when
> > running it as taskset 1 ./pipe
>
> Sounds interesting. What kernel version did you try? Can you
> send your .config to me?
>
> I've tried this trick on 2.6.25-rc4, and got ~20% more throughput for
> large (> 8K) buffers at the cost of going ~30% down for the small ones.

Seems some people are still concerned about this benchmark. OK I
tried with Linux 2.6.25-rc6 (just because it's what I've got on
this system). Versus FreeBSD 7.0.

Unfortunately, I don't think FreeBSD supports binding a process to a
CPU, and on either system when the scheduler is allowed to choose
what happens, results are more variable than you would like.

That being said, I found that Linux often outscored FreeBSD in all 3
tests of pipe_v3. FreeBSD does appear to get a slightly higher
throughput at 64K in test #1, so maybe its data copy routines are
slightly better. OTOH, I found Linux is better at 64K in test #2. For
the small sizes, I found Linux was usually faster than FreeBSD in tests
1 and 2, and around the same in test 3.

The other thing is that this test is pretty much a random context
switch benchmark that really depends on slight variations in how the
scheduler runs things. If you happen to be able to keep the
pipe from filling or emptying completely, you can run both processes
at the same time on different CPUs. If you run both processes on the
same CPU, then you want to avoid preempting the producer until it
fills the pipe, then you want to avoid preempting the consumer until
it empties the pipe in order to minimise context switches.

For example, if I call nice(20) at the start of the reader processes
in tests #1 and #2, I get around a 5x speedup in Linux when running
reader and writer on the same CPU.

I won't bother posting actual numbers... if anybody is interested I
can mail raw results offline.

But again, it isn't such a great test because a higher number doesn't
really mean you'll do better with any real program, and optimising for
a higher number here could actually harm real programs.

pipe test v3 is also doing funny things with "simulating" real
accesses. It should generally write into the buffer before
write(2)ing it, and read from the buffer after read(2)ing it. Instead
it writes to the buffer after write(2) and after read(2). Also, it
should probably touch a significant number of the pages and
cachelines transferred in each case, rather than the one or two
stores it does right now. There are a lot of ways you can copy data
around, so even if you defeat page flipping (for small transfers),
you still don't know whether one method of copying data around is
better than another.
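To make that concrete, something along these lines (just a sketch
with hypothetical helper names, and the 64-byte cacheline size is an
assumption): the producer stores into every cacheline before
write(2), and the consumer loads from every cacheline after read(2).

#include <stddef.h>

#define CACHELINE 64

/* producer: generate the data before handing it to write(2) */
static void touch_before_write(char *buf, size_t len)
{
	for (size_t i = 0; i < len; i += CACHELINE)
		buf[i]++;
}

/* consumer: actually use the data that read(2) returned */
static unsigned long use_after_read(const char *buf, size_t len)
{
	unsigned long sum = 0;

	for (size_t i = 0; i < len; i += CACHELINE)
		sum += buf[i];
	return sum;        /* returned so the loop can't be optimised away */
}

/* usage in the benchmark loops would be roughly:
 *   writer: touch_before_write(buf, n); write(fd, buf, n);
 *   reader: n = read(fd, buf, sizeof(buf)); use_after_read(buf, n);
 */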

Basically I will just reiterate what I said before: it is really
difficult to draw any conclusions from a test like this, and from the
numbers I see, you certainly can't say FreeBSD is faster than Linux.

If you want to run this kind of microbenchmark, something like lmbench
at least has been around for a long time and been reviewed (whether or
not it is any more meaningful, I don't know). Or do you have a real
workload that this pipe test simulates?

Thanks,
Nick


