
Re: High volume ldap queries causing corruption in bdb backend.

Dear All,

I'm seeing similar problems to Peter. We have OpenLDAP 2.1.22 servers using Berkeley DB 4.1.25 backends, with back-monitor enabled on all of them.

OS: Solaris 2.8 on Netras, and Debian Linux running on a Dell 2650.

LDAP is used for classroom authentication (via Samba), authentication of web pages,
and for email lookups.

None of the servers is heavily loaded, CPU- or memory-wise. We've previously seen high
slapd CPU usage, but that doesn't seem to be a symptom this time.

After some time (weeks), ldapsearch operations slow to a crawl: what was instantaneous
now takes tens of seconds.

I've stopped the database and run db_stat with a variety of options, and the database
seems OK; lock and locker counts are well below the configured maximums.

Numbers like:
56M     Total number of locks requested.
56M     Total number of locks released.
2915    Total number of lock requests failing because DB_LOCK_NOWAIT was set.
32      Total number of locks not immediately available due to conflicts.
0       Number of deadlocks.
0       Lock timeout value.
0       Number of locks that have timed out.
0       Transaction timeout value.
0       Number of transactions that have timed out.
408KB   The size of the lock region.
1557    The number of region locks granted after waiting.
106M    The number of region locks granted without waiting.

I'm thinking that 1557 region locks granted after waiting isn't too bad out of 106M (these figures are for lock region 4).
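To put a number on that reasoning, the contention ratio implied by the figures above can be worked out directly (a quick sketch using the numbers quoted from the db_stat output; the 106M figure is approximate):

```python
# Region lock figures quoted from the db_stat output above
granted_after_waiting = 1557
granted_without_waiting = 106_000_000  # "106M", approximate

total = granted_after_waiting + granted_without_waiting
contention_pct = 100.0 * granted_after_waiting / total
print(f"{contention_pct:.4f}% of region lock grants involved a wait")
# -> roughly 0.0015%, i.e. negligible contention
```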

Cache hit rates are 99-100% by the time the server dies. I noticed that stopping the server this time
produced a core file, but I didn't see anything useful in it.

Any hints for setting slapd logging levels to try and get to the bottom of this
(or back-monitor searches to look at to see traffic levels)? I've currently got logging switched off.

My DB_CONFIG file is just:

# Set DB cache size: set_cachesize <gbytes> <bytes> <ncache>
set_cachesize   0       52428800        0
#               0 GB    50 MB           0 -> allocate contiguously
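For what it's worth, the 52428800 in the second argument is exactly 50 MB, matching the comment; a one-line check (plain arithmetic, nothing BDB-specific):

```python
# set_cachesize takes <gbytes> <bytes> <ncache>; the <bytes> argument here
# should be 50 MB expressed in bytes
cache_bytes = 50 * 1024 * 1024
print(cache_bytes)  # 52428800
```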

Cheers, Duncan