The futex() problem
Judging from the list archives, this is something that has bitten others
besides me.
I want to upgrade the OpenLDAP software we're running, currently 2.1.30.
It runs famously, but I'm really after the server-paged results support in
the 2.2 releases.
I've had 2.2.23 and 2.2.24 up and running on our Red Hat machines:
Red Hat Enterprise Linux AS release 3, kernel 2.4.21-15.0.4.ELsmp.
Both versions exhibit the deadlock on futex() problem.
Others have mentioned that tuning BDB when using back-bdb can help avoid
the problem, but I've played with the settings and it didn't seem to make
a difference. Hopefully I'm just doing something wrong.
Here's my DB_CONFIG:
set_cachesize 0 104857600 0
set_flags DB_TXN_NOSYNC
... this is a "test" instance which gets blown away and reloaded a lot,
thus the DB_TXN_NOSYNC. Production server wouldn't have that.
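For comparison, the deadlock discussions on this list tend to focus on the
lock-table sizes rather than the cache. A DB_CONFIG sketch with the settings
people commonly raise follows; the numbers are illustrative guesses, not
tested recommendations:

```
# Hypothetical DB_CONFIG sketch -- values are illustrative only.
# 100 MB cache, as above.
set_cachesize 0 104857600 0
# Skip synchronous log flushes on commit (test instance only).
set_flags DB_TXN_NOSYNC
# Raise the lock-table limits; exhausting these under heavy write
# load is a commonly cited cause of stuck slapd threads.
set_lk_max_locks 3000
set_lk_max_objects 1500
set_lk_max_lockers 1500
# Larger in-memory log buffer for bulk updates.
set_lg_bsize 2097152
```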
The database def from slapd.conf:
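(The actual definition didn't survive into this message; a typical back-bdb
stanza of that era looks like the following, where the suffix, rootdn, and
paths are placeholders, not my real config:)

```
# Hypothetical back-bdb definition -- all names and paths are placeholders.
database        bdb
suffix          "dc=example,dc=com"
rootdn          "cn=Manager,dc=example,dc=com"
directory       /appl/ldap/var/openldap-data
# Checkpoint the transaction log every 512 KB or 5 minutes.
checkpoint      512 5
index           objectClass eq
index           cn,sn,uid   eq,sub
```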
Built using BerkeleyDB 4.3.27 (I see 4.3.28 is out now, but haven't tried
it) with the following:
./configure --with-slapd --with-slurpd \
--without-ldapd --with-threads=posix \
--enable-static --quiet --enable-local \
--enable-cldap --disable-rlookups --without-kerberos \
--with-tls=openssl --enable-crypt --prefix=/appl/ldap \
--libexecdir=/appl/ldap/sbin --localstatedir=/var/run \
--datadir=/appl/ldap/data --mandir=/usr/share/man \
--sysconfdir=/appl/ldap/etc --with-subdir=no
With slapd built this way, I managed to do some big directory updates and
queries running every two seconds for four days without a hitch. Then,
after I STOPPED beating on slapd, *that* is when it decided to deadlock on
futex().
Our directory contains about 150k entities, and the stress-tests I was
doing involved making tens of thousands of changes at a time.
Can someone point me in the right direction for dealing with this? Are my
BDB tunings not generous enough? The FAQ-o-Matic seems to mostly deal with
ldbm, and the discussions for tuning BDB made it sound like a ten megabyte
cache was more than enough.
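One way to check whether the cache and lock tables are actually generous
enough is Berkeley DB's own db_stat utility run against the live environment.
The path below assumes the database directory used here, and the binary may
be installed under a versioned name such as db4.3_stat:

```
# Hypothetical invocations -- adjust the -h path to your database directory.
# Memory-pool (cache) statistics: look at the cache hit rate.
db_stat -m -h /appl/ldap/var/openldap-data
# Lock-region statistics: look for the maximum locks/lockers/objects
# in use approaching the configured limits.
db_stat -c -h /appl/ldap/var/openldap-data
```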