
RE: Openldap scalability issues



To be fair, that is 10 to 15 million DNs, and each of those entries averages about 10 attributes, most of which are indexed.  So it is closer to 100 million (or more) individual values being written.  I would guess that it is our extensive indexing that takes most of the time.
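
To give a feel for what I mean by extensive indexing, our slapd.conf has index lines along these lines (the attribute names below are illustrative, not our exact list); every indexed attribute means additional index database writes for each entry slapadd processes:

    # slapd.conf excerpt - each index line adds writes per entry loaded
    index objectClass   eq
    index cn            pres,eq,sub
    index sn            pres,eq,sub
    index mail          eq,sub
    index uid           eq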

Hardware specs:
Dual-CPU AMD 2800+
3 GB RAM
IDE 7200 RPM drives

Running Fedora Core 3.


Since the bulk load times are on par with the time it takes to load a MySQL database with the same data, I'm pretty happy with OpenLDAP there.

The limitations I'm just discovering with runtime additions, however, are quite a bit harder for us to deal with.
I'm going to kick off another load later today, giving the cache all the memory I have available, to see how far I can get.
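
For the record, the cache tuning I'm talking about is the Berkeley DB cache, set via DB_CONFIG in the database directory (the sizes below are just what I intend to try on this box, not a recommendation):

    # DB_CONFIG (BDB backend)
    # set_cachesize <gigabytes> <bytes> <number of cache segments>
    set_cachesize 2 0 1
    # larger in-memory transaction log buffer for sustained writes
    set_lg_bsize 2097152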

Dan

-----Original Message-----
From: Mike Jackson [mailto:mj@sci.fi] 
Sent: Monday, February 21, 2005 11:52 AM
To: Armbrust, Daniel C.
Cc: openldap-software@OpenLDAP.org
Subject: Re: Openldap scalability issues

Armbrust, Daniel C. wrote:
> I'm looking for some configuration advice to make OpenLDAP scale better for databases that hold millions of records.
> In the past we have been loading most of our large databases from LDIF with slapadd.  We have been successful in
> loading databases with 10 to 15 million entries.  The largest ones usually take a day or two - but we can deal with that.

Hi,
  Can you please share the specs of your hardware and OS? A day or two? That should be a big, 
blinking red light for the designers, unless you are using a 486 DX4-100 or something similar...

  As an example, I offline-loaded 10 million entries on my home machine (2.4 GHz, 512 MB RAM) in 3h50m, 
using one of the longtime commercial LDAP servers, which will remain unnamed since this is the 
"OpenLDAP" forum :-)

--
mike