
Implementing database tuning checks (2.1.29/db4.2.52.2/bdb)



I have implemented parts of the tuning recommendations in 
http://www.openldap.org/faq/data/cache/191.html using a script to report 
the suggested minimum cache size.

However, the part of those recommendations covered by the following paragraph 
has been giving me trouble:
"Unlike the B-trees, where you only need to touch one data page to find an 
entry of interest, doing an index lookup generally touches multiple keys, 
and the point of a hash structure is that the keys are evenly distributed 
across the data space. That means there's no convenient compact subset of 
the database that you can keep in the cache to insure quick operation, you 
can pretty much expect references to be scattered across the whole thing. 
My strategy here would be to provide enough cache for at least 50% of all 
of the hash data. (Number of hash buckets + number of overflow pages + 
number of duplicate pages) * page size / 2."
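
To make the arithmetic concrete, this is roughly what my script does for each 
hash-indexed file once it has the three counts and the page size. It's only a 
minimal Python sketch; the function name is mine and the numbers are made up:

def hash_index_cache_bytes(hash_buckets, overflow_pages, duplicate_pages, page_size):
    # Suggested cache for a hash index: 50% of all the hash data,
    # per the FAQ formula quoted above.
    return (hash_buckets + overflow_pages + duplicate_pages) * page_size // 2

# Made-up example: 2000 buckets, 150 overflow pages and 50 duplicate
# pages at a 4KB page size come out to about 4.3MB of cache.
print(hash_index_cache_bytes(2000, 150, 50, 4096))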

How does one determine the number of hash buckets, overflow pages and 
duplicate pages for an index file (i.e. which options to db_stat, and which 
values in its output)?
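
For what it's worth, this is the sort of thing I would wire the values into, 
assuming "db_stat -d file" prints one value-then-description pair per line. 
The description strings in the commented-out part are only guesses at 
db_stat's wording, which is really what I'm asking about, and the file path 
is just an example:

# Rough sketch: collect the numeric statistics that db_stat -d reports
# for a single index file into a dict keyed by the description text.
import subprocess

def db_stat_values(index_file, db_stat="db_stat"):
    out = subprocess.run([db_stat, "-d", index_file],
                         capture_output=True, text=True, check=True).stdout
    values = {}
    for line in out.splitlines():
        value, sep, description = line.strip().partition("\t")
        if sep and value.isdigit():
            values[description.strip().lower()] = int(value)
    return values

# The keys below are guesses at db_stat's description strings:
# stats = db_stat_values("/var/lib/ldap/mail.bdb")
# cache = ((stats["number of hash buckets"]
#           + stats["number of overflow pages"]
#           + stats["number of duplicate pages"])
#          * stats["underlying database page size"] // 2)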

BTW, I did some simple benchmarks a while back, and the performance 
differences with cache sizes ranging from 256kB up to 50MB, on a database 
with ~250MB total in database files, were marginal. Do others really see 
large (objectively measured) differences?

Regards,
Buchan