Re: Ang. How to trace activity on Window platform ?
--On Monday, June 14, 2004 4:59 PM +0200 Frank Hoffsummer wrote:
I am testing OpenLDAP 2.2.12 on a Solaris 9 machine with BDB 4.2.52.
I want to arrive at the best cache size for a database with, e.g., 300,000
entries. I have read all kinds of list messages and FAQ entries regarding this.
After loading the database with 300,000 entries, I ran a script that
essentially sums up (internal_pages + 1) * (page_size) for all *bdb files
(dn2id.bdb, id2entry.bdb and my indices)
That total is 13,918,208 bytes (13.27 MB).
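Such a sizing script can be sketched roughly as follows. It assumes the db_stat utility that ships with Berkeley DB and its 4.x output format (value first, description second); the helper function name is my own invention, not a standard tool:

```shell
#!/bin/sh
# Compute one file's suggested cache contribution from `db_stat -d <file>`
# output, per the formula above: (internal_pages + 1) * page_size.
cache_from_stats() {
    awk '
        /Underlying database page size/ { psize = $1 }
        /Number of tree internal pages/ { ipages = $1 }
        END { print (ipages + 1) * psize }
    '
}

# Example use over all database files (paths are illustrative):
# total=0
# for f in /var/openldap-data/*.bdb; do
#     n=$(db_stat -d "$f" | cache_from_stats)
#     total=$((total + n))
# done
# echo "suggested cache: $total bytes"
```
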
So, according to what I take to be the consensus on this list, my
recommended cache size is about 14MB, and increasing it significantly should not
yield significant performance gains - until my cache size becomes large
enough to hold all DBs in memory (that would be 792MB in my case).
Well, this is not what I see in my tests.
Increasing the cache size (via set_cachesize in DB_CONFIG) significantly
improves read and write performance, even if the cache thus set becomes
larger than the size of all DBs (792MB in my case).
Has anyone else seen this?
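For reference, the cache is set in DB_CONFIG with the set_cachesize directive, which takes gigabytes, bytes, and the number of cache segments; a sketch for a 14MB cache (values are illustrative):

```
# DB_CONFIG in the BDB database directory
# set_cachesize <gbytes> <bytes> <ncache>
set_cachesize 0 14680064 1
```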
On a related note, suppose we have a rapidly growing database and want to
adjust cache sizes. Would the proper procedure be:
1) dump database
2) change cache size in DB_CONFIG
3) load database
It is sufficient to stop slapd, change the size in DB_CONFIG, and then
start slapd again.
Note that using a memory cache instead of a disk cache can significantly
affect performance as well (see man slapd.conf).
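As an illustration of the slapd.conf entry cache mentioned above, the bdb backend's cachesize directive is counted in entries, not bytes; the suffix, directory, and value below are placeholders:

```
# slapd.conf (bdb backend section)
database  bdb
suffix    "dc=example,dc=com"
directory /var/openldap-data
# entry cache: number of entries slapd keeps in memory
cachesize 10000
```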
As for putting the whole DB in memory, that is the only way I've actually
run my systems (Solaris 8). It was essentially required under older
versions of BDB (4.1.x), and I've never bothered to go back and try without
it under newer versions.
Principal Software Developer
GnuPG Public Key: http://www.stanford.edu/~quanah/pgp.html