[Date Prev][Date Next] [Chronological] [Thread] [Top]

Re: Migrating to new production servers via syncrepl



>>> "samuli.seppanen@gmail.com" <samuli.seppanen@gmail.com> wrote on
02.10.2013
at 12:25 in message <524BF485.1040509@gmail.com>:
> Hi,
> 
> I'm phasing out two OpenLDAP production servers[1] in a master-master
> configuration. The production servers can't afford more than a few mins
> of downtime, so migrating using slapcat/slapadd is out of the question.
> So, what I'm ending up doing is migrating using syncrepl. Here's the
> plan, with arrows pointing from the provider to the consumer:
> 
> old1 <-> old2 -> new1 -> new2
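
The consumer side of each arrow is a single syncrepl stanza. A minimal
sketch for new1 pulling from old2 (slapd.conf syntax; the rid, bind DN and
credentials below are placeholders, not taken from your setup):

```
# new1, slapd.conf -- illustrative values only
syncrepl rid=001
        provider=ldap://old2:389
        type=refreshAndPersist
        searchbase="dc=domain,dc=com"
        bindmethod=simple
        binddn="cn=admin,dc=domain,dc=com"
        credentials=secret
        retry="60 +"
```

new2 would carry an equivalent stanza with provider=ldap://new1:389.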

I'd like to know a procedure for adding a new node to an existing
multi-master sync configuration, assuming the master is so big and busy that
a consistent slapcat is not possible. Or are slapcats always consistent
(read-locked)?

In that case I'd suggest extending the two-node multi-master to a temporary
three- or four-node multi-master, then switching servers, then removing the
old nodes (first from the configuration, then from the network).
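
On the new third node that would look roughly like this (slapd.conf syntax;
the serverID, rids and credentials are illustrative, and the existing nodes
need matching syncrepl stanzas pointing back at the new node):

```
# new node, slapd.conf -- illustrative values only
serverID 3
syncrepl rid=101
        provider=ldap://old1:389
        type=refreshAndPersist
        searchbase="dc=domain,dc=com"
        bindmethod=simple
        binddn="cn=admin,dc=domain,dc=com"
        credentials=secret
        retry="60 +"
# second stanza identical apart from rid=102 and provider=ldap://old2:389
mirrormode on
```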

> 
> Once the replicas on "new1" and "new2" are complete, I plan to
> 
> 1) Direct all LDAP reads to "new1"
> 2) Direct all LDAP writes to "new1" (=make it briefly the only active
> LDAP server)
> 3) Change the replication config on "new1" so that it fetches changes
> from "new2" instead of "old2"
> 4) Restart slapd on new1 (it uses slapd.conf) to activate the change
> 5) Start offloading LDAP reads/writes to "new2" (e.g. using a load balancer)
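
Step 3 boils down to changing one line in new1's slapd.conf (a sketch; the
rid and layout are typical, not taken from your actual file):

```
# Before:
syncrepl rid=001
        provider=ldap://old2:389
        ...

# After:
syncrepl rid=001
        provider=ldap://new2:389
        ...
```

Running slaptest against the edited file before the restart in step 4 is a
cheap way to catch typos.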
> 
> A couple of questions:
> 
> - Does this plan make sense in general?
> - Which caveats should I be aware of?
> - How can I ensure the replicas are complete and in the same state[1]?
> - Does switching the replication provider for "new1" from "old2" to
> "new2" cause any side-effects?
> 
> Also, when is a full reload of the replica[2] required/suggested? I've
> managed to end up with incomplete replicas on "old2" a couple of times
> even if I've wiped /var/lib/ldap and started replication from scratch.
> 
> Any suggestions or pointers are most welcome! Also let me know if you
> need more info (configs, etc) and I'll provide it.
> 
> Best regards,
> 
> Samuli Seppänen
> 
> ---
> 
> [1] I've used these commands so far:
> 
> $ cd /var/lib/ldap
> $ db_stat -d <database-file>
> 
> If the numbers (data items etc) match, can I be sure the replicas are
> identical?
> 
> I've also checked the contextCSN on the servers using something like this:
> 
> $ ldapsearch -H ldap://old2:389 -D "cn=admin,dc=domain,dc=com" -x -W -v
> -s base -b "dc=domain,dc=com" contextCSN
> $ ldapsearch -H ldap://new1:389 -D "cn=admin,dc=domain,dc=com" -x -W -v
> -s base -b "dc=domain,dc=com" contextCSN
> $ ldapsearch -H ldap://new2:389 -D "cn=admin,dc=domain,dc=com" -x -W -v
> -s base -b "dc=domain,dc=com" contextCSN
> 
> The output seems to be identical for "old2" and "new1", but "new2"
> differs, even though the database seems identical if checked with
> db_stat. I assume this is normal given the replication chaining (see
> above).
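
A more direct check than db_stat is to compare the contextCSN values
themselves. contextCSN is an operational attribute, so it must be requested
by name, and in a multi-master setup the suffix entry can carry one value
per serverID, so sort before diffing. A sketch (hosts and bind DN as in your
post; the live queries are shown commented out because they need a running
directory and prompt for a password via -W):

```shell
# Filter ldapsearch output down to sorted contextCSN lines.
csn_lines() { grep '^contextCSN:' | sort; }

# Usage against the three servers:
#
#   for h in old2 new1 new2; do
#     ldapsearch -H "ldap://$h:389" -D "cn=admin,dc=domain,dc=com" -x -W \
#       -s base -b "dc=domain,dc=com" contextCSN | csn_lines > "/tmp/csn.$h"
#   done
#
# The replicas agree when all three files are identical:
#
#   diff /tmp/csn.old2 /tmp/csn.new1 && diff /tmp/csn.new1 /tmp/csn.new2
```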
> 
> [2] Starting slapd with the "-c rid=<rid>" switch should do this, correct?