Re: bdb corruption
Nope, not currently using a DB_CONFIG. I just created a file
%DATA%/DB_CONFIG with the following entry:
set_cachesize 0 52428800 0
My resident memory size is no larger than 26 MB, so if that's the
right number to size the cache from, I should be fine. But the actual
storage size (especially for the attributes I'm indexing) is around
90 MB, so is that the value I should be basing it on instead?
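For what it's worth, here's how that set_cachesize line breaks down. The helper below is just my own sketch of the (gbytes, bytes, ncache) arithmetic, not part of any BDB tool:

```python
# set_cachesize takes <gbytes> <bytes> <ncache>; the total cache is
# gbytes * 2^30 + bytes, split across ncache regions (0 means one region).
# This sketch converts a target size in megabytes into that triple.

def cachesize_args(megabytes: int, ncache: int = 0) -> str:
    total = megabytes * 1024 * 1024              # target size in bytes
    gbytes, remainder = divmod(total, 1024 ** 3)  # split off whole gigabytes
    return f"set_cachesize {gbytes} {remainder} {ncache}"

print(cachesize_args(50))   # the 50 MB entry quoted above: set_cachesize 0 52428800 0
print(cachesize_args(90))   # sized to the ~90 MB of indexed data instead
```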
I've got a bit of a problem. I'm using OpenLDAP in a high volume
environment using the berkeley DB back end, and what's happening is that
every few days the database gets corrupted. Searches take excruciatingly
long to complete. Sometimes I'm able to do a slapcat (even though it
takes forever, only a few entries a second) and rebuild the LDAP database
with that; other times it gets so bad that even the slapcat fails
(hangs with no more output halfway through an entry) and I have to
restore from backup and the replication log.
Do you have a DB_CONFIG file set up? If so, what are the parameters?
What OS are you on? BDB corruption is problematic in 4.1.X if you don't
properly configure your environment.
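For 4.1.x, a minimal DB_CONFIG along these lines is a reasonable starting point. The cache and log sizes here are illustrative assumptions, not tested recommendations; the file goes in the database directory itself:

```
# DB_CONFIG, placed in the BDB database directory
# set_cachesize <gbytes> <bytes> <ncache>: 0 94371840 1 is ~90 MB in one region
set_cachesize 0 94371840 1
# a larger log buffer and log region are commonly suggested alongside a bigger cache
set_lg_bsize 2097152
set_lg_regionmax 1048576
```

Note that environment region sizes only take effect when the environment is recreated, so after changing DB_CONFIG, stop slapd and run db_recover in that directory before restarting.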