
Hello all,

I am testing OpenLDAP 2.2.12 on a Solaris 9 machine with BDB 4.2.52 (+2 patches).
I want to arrive at the best cache size for a database with, e.g., 300,000 entries.

I have read all kinds of list messages and FAQ entries regarding this, in particular:

After loading the database with 300,000 entries, I ran a script that essentially sums up (internal_pages + 1) * (page_size) for all *.bdb files (dn2id.bdb, id2entry.bdb, and my indices).
That total is 13,918,208 bytes (about 13.3 MB).
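For illustration, here is a minimal sketch of that kind of sizing script in Python. It assumes you feed it the output of BDB's `db_stat -d <file>` for each database file, and that the output uses db_stat's usual "value&lt;TAB&gt;description" line format with btree field names; hash databases report different fields, so adjust accordingly:

```python
def recommended_cache_bytes(db_stat_output: str) -> int:
    """Compute (internal_pages + 1) * page_size from `db_stat -d` output.

    Assumes btree-style db_stat output, where each line looks like
    "<value><TAB><description>".  Field descriptions may vary slightly
    across BDB versions.
    """
    page_size = 0
    internal_pages = 0
    for line in db_stat_output.splitlines():
        parts = line.split("\t", 1)
        if len(parts) != 2:
            continue
        value, desc = parts[0].strip(), parts[1].strip()
        if desc == "Underlying database page size":
            page_size = int(value)
        elif desc == "Number of tree internal pages":
            internal_pages = int(value)
    return (internal_pages + 1) * page_size

# Example with made-up numbers in db_stat -d style:
sample = ("4096\tUnderlying database page size\n"
          "120\tNumber of tree internal pages\n")
print(recommended_cache_bytes(sample))  # (120 + 1) * 4096 = 495616
```

Summing this value over dn2id.bdb, id2entry.bdb, and every index file gives the total quoted above.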

So, according to what I take to be the consensus on this list, my recommended cache size is about 14 MB, and increasing it beyond that should not yield further significant performance gains until the cache becomes large enough to hold all the DB files in memory (792 MB in my case).

Well, this is not what I see in my tests.

Increasing the cache size (via set_cachesize in DB_CONFIG) significantly improves read and write performance, even when the cache thus set becomes larger than the total size of all DB files (792 MB in my case).
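For reference, the DB_CONFIG directive takes the cache size as gigabytes, bytes, and a number of cache segments. A 1 GB cache in one segment (the value here is just an example, not a recommendation) would be:

```
# DB_CONFIG in the database directory
# set_cachesize <gbytes> <bytes> <ncache>
set_cachesize 1 0 1
```

As I understand it, a change here only takes effect once the environment region files are recreated, e.g. after running db_recover.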

Has anyone else seen this?

On a related note: suppose we have a rapidly growing database and want to adjust the cache size. Would the proper procedure be

1) dump database
2) change cache size in DB_CONFIG
3) load database
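As a sketch, the dump/reload route with the standard OpenLDAP tools might look like the following (paths, config file, and file names are placeholders for my setup, not a tested recipe):

```shell
# 1) stop slapd, then dump the database
slapcat -f /etc/openldap/slapd.conf -l backup.ldif

# 2) edit set_cachesize in DB_CONFIG, then clear the old database
#    and environment files so the new cache size takes effect
rm -f /var/lib/ldap/*.bdb /var/lib/ldap/__db.* /var/lib/ldap/log.*

# 3) reload the database and restart slapd
slapadd -f /etc/openldap/slapd.conf -l backup.ldif
```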

or could the same be accomplished with db_recover alone?

Thanks for any insights you might have.