
Re: cn=config: sharing, conditionals

Howard Chu writes:
>Hallvard B Furuseth wrote:

[Rearranging a little]

>> I'm sure there are good reasons for plenty of other things to differ,
>> while config-replication could still be useful.  Depends on how flexible
>> partial config-replication is intended to be.
> Since there is currently no support at all, I think it's important to get 
> something usable first, and worry about those other cases later.

Yes, I'm not suggesting we delay everything until something wonderfully
flexible can be implemented all at once.  Though if something flexible
can be done just as easily, that's of course nice.  Otherwise, just
treat these as cases to keep in mind.  An inflexible design might be
cumbersome to extend, except by making a new and independent feature.
(Such as the cn=config + suffixmassage you just suggested :-)

Also, most of the scenarios I'm describing are not relevant for our
site today, though some might become relevant someday.

>> And things like <authz-policy> and <allow>.  Security settings, if you
>> run a master inside a well protected subnet and partial slaves on more
>> open ones.
> It would be a mistake to run any servers with lesser security settings than 
> any other server, if all of them are sharing data.

Which they might not do.

But when they do: machines can still have different physical protection,
or may be set up specially, so you may be able to trust entities on some
machines that you'd rather not trust on others.  In particular, trust
with write/admin access on the master - e.g. an authz-regexp mapping an
ldapi:// uid/gid to an admin DN.  Or, I expect, (a particular user on) a
particular network address, when both peers are inside a safe subnet.
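To illustrate the kind of mapping I mean - DN names here are just
made-up examples - slapd.conf on the trusted master might carry
something like:

```
# Map the local root user connecting via ldapi:// with SASL EXTERNAL
# to an admin DN.  The target DN is an invented example:
authz-regexp
  gidNumber=0\+uidNumber=0,cn=peercred,cn=external,cn=auth
  "cn=admin,dc=example,dc=com"
```

A slave on a more exposed network would simply omit that mapping, which
is exactly the kind of per-server difference I'm talking about.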

>> Master and slave can share some but not all databases, and you might
>> want to replicate config of those they share - but this way the database
>> numbers will differ.  Might not share all related schema either.
> It would most likely be a mistake not to share schema. Of course, there's 
> nothing preventing us from putting ServerMatch on schema entries too. And no, 
> the database numbers would be the same - all of the database configs would be 
> replicated, but some of them would be inactive on various servers.

Unless you have different servers for different purposes, which share
_some_ data - e.g. a database with user/group info.  Though then it may
be about time to give up the idea of replicating config.  It might only
be feasible to replicate the config of the shared database anyway.

But a simple case is slave = master + schema/data under development.
An inactive database in the master solves the numbering, but I'd likely
prefer to keep schema under development out of the master.
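For reference, the numbering in question is the {n} index in the
cn=config entry DNs - a sketch with invented suffix/directory names:

```
# Databases are ordered by the {n} index in their config DN, so adding
# a development-only database on the slave shifts later indexes:
dn: olcDatabase={1}bdb,cn=config
objectClass: olcDatabaseConfig
objectClass: olcBdbConfig
olcSuffix: dc=example,dc=com
olcDbDirectory: /var/lib/ldap/example
```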

>> <threads>, cache settings, <argsfile>, etc. if your servers run on
>> different OSes.  Which can be useful so that if an OS-specific problem
>> hits one server, others are in no danger.
> Threads, maybe. cache settings, perhaps. argsfile is just a command
> line parameter, so not relevant.

Sorry, I meant pidfile.  It can be OS-specific where to write it -
e.g. /var/run/(openldap/)slapd.pid for RedHat Linux's /etc/rc scripts.
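I.e. a per-OS slapd.conf line along these lines:

```
# OS-specific: Red Hat's /etc/rc scripts expect the pid file here,
# other platforms may want a different path:
pidfile /var/run/openldap/slapd.pid
```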

> Most of the sites we work with deploy
> identical hardware for their pools of servers. The most common form of
> load balancing is round-robin DNS, which is only "fair" if all of the
> servers are equivalent in their load handling capability.

Yes, then the poorest server needs to be good enough for its task.
Which might or might not be a problem.

In our case, failover is the primary reason for multiple servers, since
I think a single server is currently good enough as far as load is
concerned.  We'd solve load problems by throwing more hardware at them.
But then, this is not relevant for us currently anyway.  We've
considered multiple platforms but have not (yet) bothered; it's just on
the nice-to-have list.