Date: Mon, 5 Apr 1999
From: Doug Ledford <dledford@redhat.com>
Subject: Re: [patch] arca-vm-2.2.5

Andrea Arcangeli wrote:
>
> On Mon, 5 Apr 1999, Chuck Lever wrote:
>
> >buckets out of 32K buckets have hundreds of buffers). I have a new hash
> >function that works very well, and even helps inter-run variance and
> >perceived interactive response. I'll post more on this soon.
>
> Cool! ;)) But could you tell me _how_ you design a hash function? Do you
> do the math or go by instinct? I've never gone into the details of
> benchmarking or designing a hash function, simply because I don't like
> fuzzy hashes and instead want to replace them with RB-trees all over the
> place (and I know the math behind RB-trees), but I'd like to learn how to
> design a hash function anyway (even if I don't want to discover it all
> myself without reading docs, as usual ;).
>
> >but also the page hash function uses the hash table size as a shift value
> >when computing the index, so it may combine the interesting bits in a
> >different (worse) way when you change the hash table size. I'm planning
> >to instrument the page hash to see exactly what's going on.
>
> Agreed. This is true. I thought about that, and I am resizing the hash
> table back to the original 11 bits now (since you are confirming that I
> broke the hash function).
>
> >IMHO, I'd think that if the buffer cache has a large hash table, you'd
> >want the page cache to have a table as large or larger. Both track
> >roughly the same number of objects... and the page cache is probably
> >used more often
>
> Agreed.
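
(For reference, the page hash being discussed looks roughly like the
sketch below. This is paraphrased rather than lifted verbatim from the
2.2 tree, but it shows the property Chuck is pointing at: PAGE_HASH_BITS
is used both as the table size and as the shift that folds the upper
bits back into the index, so changing the table size also changes how
the interesting bits get combined.)

/* User-space model of a 2.2-style page cache hash -- a sketch, not the
 * kernel source.  Note PAGE_HASH_BITS appears twice: once as the table
 * size and once as the shift that folds the high bits back in. */
#include <stdio.h>

#define PAGE_SHIFT     12
#define PAGE_HASH_BITS 11                       /* 2048 buckets */
#define PAGE_HASH_SIZE (1UL << PAGE_HASH_BITS)

static unsigned long page_hashfn(unsigned long inode, unsigned long offset)
{
        unsigned long i = inode >> 6;           /* stand-in for scaling by inode size */
        unsigned long o = offset >> PAGE_SHIFT;
        unsigned long x = i + o;

        /* fold the bits above the table size back into the index */
        return (x + (x >> PAGE_HASH_BITS)) & (PAGE_HASH_SIZE - 1);
}

int main(void)
{
        unsigned long ino = 0xc0123450UL;       /* made-up inode address */
        unsigned long off;

        for (off = 0; off < 8 * 4096UL; off += 4096)
                printf("offset %6lu -> bucket %lu\n", off, page_hashfn(ino, off));
        return 0;
}

Bump PAGE_HASH_BITS and both the mask and the fold shift move, which is
why a bigger table can end up combining bits worse unless the hash
function is reworked at the same time.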

Hmmm... I've talked about this a few times with Alan Cox and Stephen
Tweedie. I didn't bother to instrument the hash function, because in
this case I knew it was tuned to the size of the inode structs. But I
did implement a variable-sized page cache hash table array. I did this
because at 512MB of RAM, having only 2048 hash buckets (11 bits) became
a real bottleneck in page cache hash entry lookups. Essentially, once
the number of physical pages per hash bucket gets too large, the hash
chains can grow excessively long and take far too long to walk when the
page is not in the page cache (or when it is, but sits at the end of a
long chain). Since these lookups happen as often as they do, the
overhead adds up quickly.

I'm attaching the patch I've been using here for a while; feel free to
try it out. One of the best tests for the effect of this patch is a
bonnie run on a machine with more disk speed than CPU power, where the
page cache lookup becomes the bottleneck. Under other conditions you
see the same disk throughput, but with lower CPU utilization. On a
dual PII-400 machine with 512MB of RAM and a 4-drive Cheetah RAID0
array, the patch made a 10MByte/s difference in throughput. On another
machine it made roughly a 6 to 8% difference in CPU utilization.
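
In case it helps picture the idea, sizing the table at boot from the
amount of physical memory looks something like the sketch below. This
is just an illustration, not the attached patch; the names and the
pages-per-bucket target are made up.

/* Sketch: size the page cache hash at boot instead of hardwiring
 * PAGE_HASH_BITS.  Illustration only -- not the attached patch; the
 * name alloc_page_hash and the /4 target are stand-ins. */
#include <stdio.h>
#include <stdlib.h>

struct page;                                    /* opaque for this sketch */
static struct page **page_hash_table;
static unsigned int page_hash_bits;

static void alloc_page_hash(unsigned long num_physpages)
{
        unsigned long goal = num_physpages / 4; /* aim for ~4 pages per bucket */
        unsigned int bits = 11;                 /* never below the old 2048 */

        while ((1UL << bits) < goal)
                bits++;

        page_hash_bits = bits;
        page_hash_table = calloc(1UL << bits, sizeof(*page_hash_table));

        printf("%lu buckets for %lu physical pages\n",
               1UL << bits, num_physpages);
}

int main(void)
{
        /* 512MB of 4K pages -> 131072 pages -> 32768 buckets, not 2048 */
        alloc_page_hash((512UL << 20) >> 12);
        return 0;
}

The real patch may pick its target differently; the point is just that
the bucket count tracks RAM instead of staying fixed at 2048.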


--
Doug Ledford <dledford@redhat.com>
Opinions expressed are my own, but
they should be everybody's.

[attachment: application/octet-stream -- the patch, not rendered here]