What is the recommended way of setting up big distributed directories? I understand the first part:
1) Each organisational unit gets its own LDAP server, or at least administrative rights for one part of the LDAP tree.
2) Connect the LDAP servers with referrals.
3) Add a failover replica for each master server.
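For step 2, a subordinate referral can be represented as an entry of objectClass "referral" (RFC 3296) on the parent server; the DN, host name, and suffix below are invented for illustration:

```
# Hypothetical referral entry held by the parent server, pointing at
# the server that masters ou=sales. All names are examples only.
dn: ou=sales,dc=example,dc=com
objectClass: referral
objectClass: extensibleObject
ou: sales
ref: ldap://sales-ldap.example.com/ou=sales,dc=example,dc=com
```

For requests that fall outside a server's naming contexts, slapd can also return a superior referral via the global "referral" directive in slapd.conf, e.g. `referral ldap://root-ldap.example.com`.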
Now we have a big structure that looks robust and maintainable, and clients don't need to know which server actually holds the data. So far so good.
The question is: what about scaling and performance? I'm just reading some internal docs where the use of referrals is considered a bad idea for performance reasons, and I wonder whether that is really so. If it is, would it make sense to use the setup described above as a backend, and let the clients connect to a bunch of caching-only servers (ldap or meta backend) instead?
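A caching front end of the kind I mean could look roughly like this; this is only a sketch, the host names and numbers are invented, and the proxy-cache overlay's directive names vary between OpenLDAP releases:

```
# slapd.conf on a caching-only front-end server (illustrative).
# The real work is done by the ldap backend plus the proxy cache
# overlay (slapo-pcache in recent releases).
database        ldap
suffix          "dc=example,dc=com"
uri             "ldap://backend-ldap.example.com/"

# Cache parameters (all values invented): backing database type,
# max cached entries, number of attribute sets, max entries per
# cached query, consistency-check period in seconds.
overlay         pcache
pcache          bdb 10000 1 50 100
```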
Two related questions:
A) How do ACLs work in such a setup? I can imagine that one may get better performance if ACLs are evaluated on the caching server:
1) Person A requests something from the caching server. The request is forwarded to the backend server, the resulting output is cached, and the part of it permitted by the ACLs is shown to person A.
2) Person B sends the same request. The cache is consulted, filtered according to the ACLs, and that's it.
On the other hand, I'm not sure one should really delegate security-relevant configuration to the caching servers.
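If one did evaluate ACLs on the caching server, they would be ordinary slapd access directives in its slapd.conf; this fragment is purely illustrative:

```
# Illustrative ACLs on the front end; whether filtering cached
# results here is safe is exactly the open question above.
access to attrs=userPassword
        by self write
        by anonymous auth
        by * none
access to *
        by users read
        by * none
```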
B) Does OpenLDAP have some built-in mechanisms for preventing DoS attacks and dumb clients from ruining the fun for everyone? Two simple examples would be:
1) A client quickly opens many LDAP binds, and thus puts a server under high load.
2) A client sends some very general request that spawns a search across all the referred LDAP servers, and returns huge amounts of data.
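For case 2 at least, slapd has global size/time limits and a per-who "limits" directive in slapd.conf; the values below are invented examples:

```
# Global caps on what a single search may return / consume
sizelimit       500
timelimit       60

# Stricter limits for specific classes of clients
limits anonymous size=100 time=10
limits users     size=500 time=60
```

Whether anything comparable exists for throttling rapid binds (case 1), I don't know.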
T-Mobile Austria GmbH,
Information Technologies / Services
Knowledge Management & Process Automation
Dr. Denis Havlik, eMail: email@example.com
Rennweg 97-99, BT2E0304031 Phone: +43-1-79-585/6237
A-1030 Vienna Fax: +43-1-79-585/6584