
Re: large ldap server recommendation



ram wrote:

  I am using LDAP for authentication & address book for a large
mailserver setup with around 300k users (this will grow to 500k)

We have about 50,000 students and 20,000 faculty & staff, with over four million records. My understanding is that we're one of the larger OpenLDAP sites around, although Stanford and some other major universities might be larger.


The LDAP server is an 8GB RAM box running RHEL-5 with
openldap-servers-2.3.27-5

Our production OpenLDAP servers have 32GB of RAM on Solaris 9, running OpenLDAP 2.3.35 (we've seen some weird problems with 2.3.40, and haven't even seriously looked at the 2.4.* stuff). We've been experimenting with assigning 10GB or 20GB to the Berkeley DB cache in this environment, and we're not sure whether the larger amount is causing the weird problems we've had this week. I couldn't tell you what our IDL or slapd caches are set to, however.
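For reference, the Berkeley DB cache for back-bdb is normally set in a DB_CONFIG file in the database directory. A sketch for a 10GB cache (the path and the log-tuning values here are illustrative, not our production settings):

```
# DB_CONFIG in the slapd database directory (e.g. /var/lib/ldap)
# set_cachesize <gbytes> <bytes> <ncache>
# 10GB BDB cache, split across 2 cache regions
set_cachesize 10 0 2
# keep transaction-log buffers at sane sizes (illustrative values)
set_lg_regionmax 262144
set_lg_bsize 2097152
```

Changes to DB_CONFIG only take effect after the environment is recreated, so you typically stop slapd and run db_recover (or slapd's own recovery) after editing it.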


I am confused about which database type to use, ldbm or bdb. Currently I
have the users on bdb with a lot of problems. The ldap server dies all of
a sudden and I have to recover the data to get it started

I don't know that much about OpenLDAP yet, but based on my reading of the OpenLDAP documentation and what I know of different database formats like dbm and db outside of OpenLDAP, you want to avoid dbm if at all possible.
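In slapd.conf terms, the choice being asked about is just the database directive. A minimal back-bdb stanza might look roughly like the following (the suffix, rootdn, directory, and index lines are made-up placeholders, not a recommendation for any particular site):

```
# slapd.conf fragment -- back-bdb instead of the older back-ldbm
database        bdb
suffix          "dc=example,dc=com"
rootdn          "cn=Manager,dc=example,dc=com"
directory       /var/lib/ldap
# index only the attributes you actually search on
index           objectClass  eq
index           uid,mail     eq
# entry cache (in entries, not bytes) and checkpoint interval
cachesize       10000
checkpoint      512 30
```

The checkpoint directive matters for the crash-recovery problem the original poster describes: with regular checkpoints, BDB recovery after a crash has far less log to replay.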


Can someone help me

The one thing that is kind of weird about our environment is that they've gone add-happy in assigning an OpenLDAP unique identifier to each and every student who has ever passed through the University, and to each and every person who has ever worked at the University, including temps or consultants who may have only been here for a few days. So I think we're issuing a lot more IDs than we have currently active people.


One thing you're going to want to look at is how many records you have for each active user, and how big those records are, so that you have an idea of how much memory you're going to need. In our case, we've been told that with an average record size of about 3KB, we should be able to fit all the BDB files into a 16GB RAM cache. Checking the various *.bdb files we have on our production server, I see that they take up just under 14GB on disk, so a 16GB RAM cache for them would seem reasonable. However, that doesn't account for the index (*.hdb) files....
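The sizing above is just back-of-the-envelope multiplication. A quick sketch, using the four-million record count and ~3KB average record size from this message (the 1.4x overhead factor is my own guess to cover BDB page overhead and growth, not a measured figure):

```python
# Rough BDB cache sizing from record count and average record size.
records = 4_000_000            # records in the directory (from this message)
avg_record_bytes = 3 * 1024    # ~3KB average entry size (from this message)

raw_bytes = records * avg_record_bytes
raw_gb = raw_bytes / 2**30
print(f"raw data: {raw_gb:.1f} GB")        # ~11.4 GB

# Pad for BDB page overhead and growth; 1.4x is an assumed fudge factor.
overhead = 1.4
cache_gb = raw_gb * overhead
print(f"suggested cache: {cache_gb:.1f} GB")  # ~16 GB
```

The ~16GB result lines up with the 14GB of *.bdb files we see on disk plus some headroom, which is why a 16GB cache seemed reasonable to us.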

To be honest, my past experience with other databases is that the indexes are frequently an order of magnitude larger than the raw data itself, so we could be in some really big trouble here, but at this point I'm just guessing blind.

--
Brad Knowles <b.knowles@its.utexas.edu>
Senior System Administrator, UT Austin ITS-Unix
COM 24  |  5-9342