
Re: OpenLDAP system architecture?



On Thu, 2008-01-24 at 15:08 -0800, Howard Chu wrote:

> In my experience, 4 million objects (at around 3KB per entry) is near the 
> limit of what will fit into 16GB of RAM. Sounds like you need a server with 
> more than 16GB if you want to keep growing and not be waiting on disks.

I've been out sick the last couple of workdays, but I did discover a new
piece of information today -- the machines in question have 32GB of RAM
and are using 20GB of cache for the databases, although I do not yet
know how that's split between the BDB cache, the IDL cache, and the
slapd entry cache for the HDB databases.  Finding that out will be one
of my next tasks.
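
(For the curious, those knobs would live in roughly these places,
assuming a back-hdb database -- the sizes below are made up purely for
illustration:

    # DB_CONFIG in the database directory -- Berkeley DB cache
    set_cachesize  8  0  4       # 8GB BDB cache, split across 4 segments

    # slapd.conf, in the hdb database section
    cachesize     2000000        # slapd entry cache, counted in entries
    idlcachesize  6000000        # IDL cache, in slots; ~3x cachesize is
                                 # the usual suggestion for back-hdb

Note that the BDB cache is sized in bytes, while the entry and IDL
caches are counted in entries/slots rather than bytes, so they don't add
up to a single figure in any obvious way.)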

I also don't know whether the slapd processes are individually limited
to 2GB of RAM (as mentioned at
<http://www.openldap.org/faq/data/cache/1076.html>).
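
(If it turns out to matter, a quick way to check for the 32-bit case is
to look at the binary and at the limits in the environment that starts
slapd -- the path here is just a guess for our boxes:

    file /usr/local/libexec/slapd   # "ELF 64-bit ..." vs "ELF 32-bit ..."
    ulimit -a                       # run from slapd's startup environment

A 64-bit slapd on a 64-bit kernel shouldn't be stuck behind a 2GB
per-process address-space ceiling in any case.)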


Okay, so performance-wise there doesn't seem to be much advantage to a
master/slave or master/proxy/slave system architecture -- that's more of
a directory integrity and synchronization thing.
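
(In case it's useful context, the sort of master/slave setup being
discussed would typically be a syncrepl arrangement, roughly like this
on a consumer -- the provider URL, search base, and credentials are
placeholders:

    # slapd.conf on a slave/consumer, in the hdb database section
    syncrepl  rid=001
              provider=ldap://master.example.edu
              type=refreshAndPersist
              searchbase="dc=example,dc=edu"
              bindmethod=simple
              binddn="cn=replicator,dc=example,dc=edu"
              credentials=secret

i.e. it buys you extra copies of the data and somewhere to fail over to,
rather than a faster single server.)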

I would imagine that partitioning the OpenLDAP database into multiple
slices and putting different schemas onto different servers should help
with performance, although you'd have to make sure that the appropriate
clients are querying the appropriate servers for that slice of the data.
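
As a rough sketch of what I have in mind (the suffixes and paths here
are just made-up examples), each slice would become its own database
section, either on separate servers behind referrals or glued under one
suffix with the subordinate keyword:

    # slapd.conf -- one slice per database
    database    hdb
    suffix      "ou=people,dc=example,dc=edu"
    directory   /var/openldap-data/people

    database    hdb
    suffix      "ou=groups,dc=example,dc=edu"
    directory   /var/openldap-data/groups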
Beyond that, are there any other system architecture things I
could/should be looking at with regard to performance?

Thanks!

-- 
Brad Knowles <b.knowles@its.utexas.edu>
Sr. System Administrator, UT Austin ITS-Unix
COM 24 | 5-9342