
Re: Implementing database tuning checks (2.1.29/db4.2.52.2/bdb)

--On Thursday, April 22, 2004 5:48 PM +0200 Buchan Milne <bgmilne@obsidian.co.za> wrote:

I have implemented parts of the tuning recommendations in
http://www.openldap.org/faq/data/cache/191.html using a script to report
the suggested minimum cache size.

However, the section described by this paragraph has been giving me trouble:
"Unlike the B-trees, where you only need to touch one data page to find
an entry of interest, doing an index lookup generally touches multiple
keys, and the point of a hash structure is that the keys are evenly
distributed across the data space. That means there's no convenient
compact subset of the database that you can keep in the cache to insure
quick operation, you can pretty much expect references to be scattered
across the whole thing. My strategy here would be to provide enough
cache for at least 50% of all of the hash data. (Number of hash buckets
+ number of overflow pages + number of duplicate pages) * page size / 2."

How does one determine (i.e., which options to db_stat, and which values
in its output) the number of hash buckets, overflow pages, and duplicate
pages for an index file?

BTW, I did some simple benchmarks a while back, and the performance
differences with cache size settings varying from 256kB up to 50MB, on a
db with ~250MB total in database files, were marginal. Do others really
see huge (objectively measured) differences?

Hi Buchan,

I found that database tuning vastly affected my query times when I was experimenting with my server. I eventually made the cache large enough to hold the entire DB.
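As for pulling the numbers out of db_stat: on the versions I've used, `db_stat -d <file>` against a Hash index prints one statistic per line as a value followed by a description, including the hash bucket, bucket overflow page, and duplicate page counts, plus the underlying page size. A sketch of the FAQ's 50% rule built on that output (the label strings are what my db_stat prints, so verify them against yours):

```shell
#!/bin/sh
# Sketch: compute the FAQ's suggested hash index cache size,
#   (hash buckets + overflow pages + duplicate pages) * page size / 2,
# from `db_stat -d` output piped in on stdin. The matched label strings
# are assumptions taken from the db_stat output I have seen.
suggest_hash_cache() {
    awk '
        /Underlying database page size/   { ps = $1 }
        /Number of hash buckets/          { b  = $1 }
        /Number of bucket overflow pages/ { o  = $1 }
        /Number of duplicate pages/       { d  = $1 }
        END { print (b + o + d) * ps / 2 }
    '
}

# Example invocation (the path is hypothetical):
# db_stat -d /var/lib/ldap/uid.bdb | suggest_hash_cache
```

Repeat per index file and sum the results to size the overall cache.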

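For reference, holding the whole DB in cache just means a suitably large set_cachesize in the environment's DB_CONFIG file; a sketch with example numbers only (the arguments are gbytes, bytes, and number of cache segments):

```
# DB_CONFIG in the database directory -- numbers are examples, not a recommendation
# set_cachesize <gbytes> <bytes> <ncache>
set_cachesize 0 268435456 1
```

The environment has to be recreated (e.g., after a slapcat/slapadd or db_recover) for the new cache size to take effect.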

Quanah Gibson-Mount
Principal Software Developer
ITSS/TSS/Computing Systems
ITSS/TSS/Infrastructure Operations
Stanford University
GnuPG Public Key: http://www.stanford.edu/~quanah/pgp.html