
Can cachesize be too big?


Is it possible to have a BerkeleyDB cache size (set by set_cachesize in
DB_CONFIG) be TOO big? For example, if the cachesize is greater than
physical memory? Or for that matter, total system memory (swap + physical)?
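For context, here is roughly what each instance's setting looks like. A DB_CONFIG `set_cachesize` line takes three arguments (gigabytes, bytes, number of cache segments); the file path below is just an illustrative assumption:

```
# Example DB_CONFIG fragment (e.g. in the instance's database directory);
# arguments are: gbytes bytes ncache
set_cachesize 1 0 1            # a 1GB cache in a single segment
# or, for the 500MB instances:
# set_cachesize 0 524288000 1  # 500MB = 524288000 bytes
```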

We have two identical machines with 2GB of physical memory and 2GB of swap
(4GB total).

Both machines are running OpenLDAP 2.1.25 + BerkeleyDB 4.1.25-1.
Machine A:
 slapd-1:  cachesize=500MB
 slapd-2:  cachesize=500MB
 slapd-3:  cachesize=1GB
Total:     2GB

Machine B:
 slapd-1:  cachesize=500MB
 slapd-2:  cachesize=500MB
 slapd-3:  cachesize=1GB
 slapd-4:  cachesize=500MB
 slapd-5:  cachesize=500MB
 slapd-6:  cachesize=1GB
 slapd-7:  cachesize=500MB
 slapd-8:  cachesize=500MB
 slapd-9:  cachesize=1GB
 slapd-10: cachesize=500MB
 slapd-11: cachesize=500MB
 slapd-12: cachesize=1GB
Total:     6GB

Machine A is rock solid, whereas Machine B craps out daily (some instances
shoot up to 100% CPU, forcing a kill and a db_recover run).

Could the fact that the total cachesize configured on Machine B (6GB)
exceeds the machine's total memory (4GB) be the culprit? I don't think I
have ever actually seen memory usage reach capacity (or even come close),
so I would guess not, but is it a possibility?
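One way to answer that empirically would be to sample the combined resident memory of the slapd processes while they are under load and compare it against physical RAM. A rough sketch, assuming a Linux box with procps-style `ps` and `free` (note RSS understates the commitment, since untouched cache pages are not yet resident):

```shell
#!/bin/sh
# Sum the resident set size (RSS) of every slapd process, in MB,
# and show it alongside current physical/swap usage.
ps -C slapd -o rss= \
  | awk '{sum += $1} END {printf "slapd total RSS: %d MB\n", sum/1024}'
free -m
```

Running this periodically (say, from cron) around the time an instance spins up to 100% CPU would show whether the machine is anywhere near its 4GB limit when things go wrong.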

// Diplomacy is the art of saying 'nice doggy'
// until you can find a rock.