RE: (ITS#3564) Programmatic Insert Scalability Problem



I had some problems with OpenLDAP's update speed. Digging through the
source, it turned out that it was locking every 'row' (I am using the bdb
backend) of a multi-valued attribute just to add one value. The code is
full of things like this, and I am not sure whether they are terrible
mistakes or necessary for correct operation. Either way, the real problem
was that one of our admins had turned off caching for bdb because the
startup and shutdown times were too long with a 2 GB in-memory cache.
When I increased the cache to be larger than the index file where the
attribute lived (about 11 MB), my time dropped from 120 seconds to under
1 second.

I now send a few hundred thousand changes every day with no trouble. The
moral here is that even small changes can be sped up massively by
correctly sizing your cache, and a small (1 MB) change in cache size can
mean a 10x difference in some cases.
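
For the bdb backend, that cache is set with set_cachesize in the
DB_CONFIG file in the database directory. A minimal sketch (the 16 MB
figure is just an example, picked to cover an ~11 MB index file with room
to spare):

    # DB_CONFIG -- set_cachesize <gbytes> <bytes> <ncache>
    # One contiguous 16 MB environment cache:
    set_cachesize 0 16777216 1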


-Bill Kuker


-----Original Message-----
From: owner-openldap-bugs@OpenLDAP.org
[mailto:owner-openldap-bugs@OpenLDAP.org] On Behalf Of
Armbrust.Daniel@mayo.edu
Sent: Wednesday, March 16, 2005 2:08 PM
To: openldap-its@OpenLDAP.org
Subject: RE: (ITS#3564) Programmatic Insert Scalability Problem

I've probably confused the issue by using the term bulk loading when I
shouldn't have.

I'm simply trying to add a large amount of content into the server while
it is running, through a connection to the running server.  My program
that actually processes the data is written in Java, and uses Sun's
standard API for accessing an LDAP server.
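
For concreteness, the add path looks roughly like this through JNDI (the
standard API in question); the URL, credentials, and entry below are
placeholders, not our real data:

    import java.util.Hashtable;
    import javax.naming.Context;
    import javax.naming.NamingException;
    import javax.naming.directory.*;

    public class LdapAdd {
        public static void main(String[] args) throws NamingException {
            Hashtable env = new Hashtable();
            env.put(Context.INITIAL_CONTEXT_FACTORY,
                    "com.sun.jndi.ldap.LdapCtxFactory");
            env.put(Context.PROVIDER_URL, "ldap://localhost:389");
            env.put(Context.SECURITY_AUTHENTICATION, "simple");
            env.put(Context.SECURITY_PRINCIPAL,
                    "cn=Manager,dc=example,dc=com");
            env.put(Context.SECURITY_CREDENTIALS, "secret");

            DirContext ctx = new InitialDirContext(env);
            try {
                // One entry per add; the real program loops over many
                // of these against the live server.
                BasicAttributes attrs = new BasicAttributes(true);
                BasicAttribute oc = new BasicAttribute("objectClass");
                oc.add("top");
                oc.add("person");
                attrs.put(oc);
                attrs.put("cn", "Test User");
                attrs.put("sn", "User");
                ctx.createSubcontext("cn=Test User,dc=example,dc=com",
                        attrs);
            } finally {
                ctx.close();
            }
        }
    }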

If I had (or could easily generate) my data as LDIF, I would do so (and
then use slapadd, as we have in the past), but it's not a simple or
trivial task.  Plus, the programmatic API is already implemented and
works with small data sets (and with large data sets on other LDAP
implementations).
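
For reference, the slapadd route we've used before looks something like
this (the entry and file name are just placeholders); generating the LDIF
is the hard part:

    # entries.ldif
    dn: cn=Test User,dc=example,dc=com
    objectClass: top
    objectClass: person
    cn: Test User
    sn: User

then, with slapd stopped:

    slapadd -l entries.ldif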


Dan