
Re: distributed directories


My two cents on some of your questions:

> Hi, folks
> What is the recommended way for setting up big distributed directories.
> I understand the first part:
> 1) Each organisational unit gets its own LDAP server, or at least
> administrative rights for one part of the LDAP tree.
> 2) Add a failover replica for each master server.
> 3) Connect LDAP servers with referrals.
> Now we have a big structure that actually looks robust and maintainable,
> and clients don't need to know which server actually holds the data. So
> far so good.
> The question is "what about the scaling and performance?". I'm just
> reading some internal docs where the use of referrals is considered a
> bad idea because of the performance, and I wonder if that is really so.
> If yes, would it make sense to use the setup as described above as a
> backend, and let the clients connect to a bunch of caching-only servers
> (ldap- or meta- backend) instead?
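The referral setup described above can be sketched in LDIF: a server that does not hold a subtree publishes a referral object pointing at the server that does (DNs and the hostname below are made-up examples):

```ldif
# Hypothetical referral entry on the server holding dc=example,dc=com:
# queries under ou=engineering are referred to the server that masters
# that subtree.  See the OpenLDAP Admin Guide on referrals.
dn: ou=engineering,dc=example,dc=com
objectClass: referral
objectClass: extensibleObject
ou: engineering
ref: ldap://engineering.example.com/ou=engineering,dc=example,dc=com
```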

If you have very precisely detailed queries that use a certain filter
template and require a defined set of attributes, the use of back-ldap
with proxy caching could be a solution.  I can't say much about
performance, but it should be fine.  Another approach we often follow is
to have a mostly unloaded master which keeps a set of replicas up to date,
and the replicas carry all the load.
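A minimal slapd.conf sketch of the back-ldap proxy cache idea might look like the following (hostname, suffix, attribute set and numbers are invented, and the overlay directive names may differ between releases; check the proxy cache documentation for your version):

```conf
# Hypothetical caching proxy: a back-ldap database pointing at the
# backend server, with the proxy cache overlay answering repeated
# queries from its local cache.
database        ldap
suffix          "dc=example,dc=com"
uri             "ldap://backend.example.com/"

overlay         proxycache
# cache DB type, max entries, number of attribute sets,
# entry limit per query, consistency-check period (seconds)
proxycache      bdb 100000 1 1000 100
proxyattrset    0 mail cn sn telephoneNumber
# cache searches matching this filter template for one hour
proxytemplate   (mail=) 0 3600
```

Note this only pays off for queries that match a known filter template and attribute set, as said above.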

> Two related questions:
> A) How do ACLs work in such a setup? I can imagine that one may get
> better performance if ACLs are determined on the caching server:
>         1) Person A requests something from the caching server. This
> request is forwarded to the backend server, the resulting output is
> cached, and the part of it (dependent on ACLs) shown to person A.
>         2) Person B sends the same request. The cache is parsed, filtered
> according to ACLs, and that's it.
>     On the other hand, I'm not sure if one should really delegate the
> security-relevant configuration to caching servers?

In general it is not a good idea, but it depends on the trust you can
place in the caching servers.  In the scenario you're describing it
appears that you can trust them (it's basically an internally distributed
DSA, so the fact that there is more than one instance of the DSA is only a
technical detail; it basically works as a single DSA).

> B) Does OpenLDAP have some built-in mechanisms for preventing DoS
> attacks and dumb clients from ruining the fun for everyone? Two simple
> examples would be:
>         1) A client quickly spawns many LDAP binds, and thus puts a
> server under high load
>         2) A client sends some very general request that will span a
> search over all the referred LDAP servers, and return huge amounts of
> data.

a) you can disable some operations; there are different means to do this
b) you can limit access to certain resources based on the identity of the
client (see slapd.conf(5), limits statement)

This lets you deny certain expensive operations to everybody, or limit
the size (and even the occurrence) of searches for certain categories of
DNs: e.g. give your own application the ability to search the whole
DIT, while allowing anonymous clients to receive only 10 entries, if
the filter is selective enough, or none at all, with no overhead on the
server side, if the estimated candidate list exceeds a certain size.
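As a concrete illustration of both mechanisms (the DNs and numbers are invented; see slapd.conf(5) for the exact syntax in your release):

```conf
# a) disable selected operations globally ("restrict" statement)
restrict        extended

# b) per-identity resource limits ("limits" statement):
# the trusted application may search the whole DIT...
limits dn.exact="cn=app,dc=example,dc=com" size=unlimited time=unlimited
# ...while anonymous clients get at most 10 entries, and the search is
# refused outright if the estimated candidate list exceeds 100 entries
limits anonymous size.soft=10 size.hard=10 size.unchecked=100
```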


Pierangelo Masarati