I'm planning to populate my OpenLDAP database with a large amount of data. In the final phase there will be approximately 5 million entries (150,000,000 attributes, 10 indexes) in the LDAP tree.
Most of the data will be inserted with slapadd or ldapadd during the installation phase; once running, an application will only read from the directory, and writes will happen hardly ever.
How should I configure slapd and BerkeleyDB (BDB) so they handle this load efficiently?
Does anyone out there work with a similar amount of data and could give me advice on how to set up slapd.conf and DB_CONFIG?
I'd really like to see some examples of those two files.
As I understand it, the parameters with the most influence on performance are cachesize (slapd.conf), set_cachesize (DB_CONFIG), and the maximum log size (DB_CONFIG). What else should I pay attention to?
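For reference, here is the kind of sketch I have in mind so far. The suffix and all numeric values are guesses on my part, not tested against this data set, and the cache sizes would certainly need tuning:

```
# DB_CONFIG (guessed values, untested)
# 0 GB + 512 MB BDB cache in a single segment
set_cachesize 0 536870912 1
# larger log buffer and log files to speed up bulk loading
set_lg_bsize 2097152
set_lg_max 10485760
```

```
# slapd.conf excerpt (guessed values, untested)
database        bdb
suffix          "dc=example,dc=com"
# number of entries cached by slapd itself
cachesize       10000
index           objectClass eq
```

Is this roughly the right direction, or am I missing something important?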
I'm using the following software and hardware:
BerkeleyDB 4.2.52 + 4 patches (BDB)
4 x 1.6 GHz CPUs
1 GB RAM
OS: MontaVista Linux CGE 3.1
Thanks a lot,