
Re: Recovering from a Single-Node downtime in a Multi-Master Setup



On 10.12.2013 18:12, Quanah Gibson-Mount wrote:
--On Tuesday, December 10, 2013 11:08 AM -0600 espeake@oreillyauto.com wrote:

Do the slapcat on ldap2, then delete the db files on ldap1 and run the slapadd. You will not get duplicates because all of the CSNs will be the same. This is what I have done for my migrations to the most recent versions, doing my own builds. Works great that way.
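The procedure described above might be sketched roughly as follows. This is a hedged sketch only: the database number, file paths, and the service command are assumptions that depend on your build (the LTB package, for instance, uses its own layout) and must be adapted.

```shell
# Sketch of resynchronising ldap1 from ldap2. Paths, the database
# number (-n 1), and the service command are assumptions.

# 1. On ldap2: export the database (slapcat can run against a live slapd
#    for the common backends).
slapcat -n 1 -l /tmp/backup.ldif

# 2. On ldap1: stop slapd and clear out the old database files.
service slapd stop
rm -f /usr/local/openldap/var/openldap-data/*

# 3. On ldap1: reload from the export. -w writes syncrepl context
#    (contextCSN) information, so the server does not re-pull everything.
slapadd -n 1 -q -w -l /tmp/backup.ldif

# 4. Restart slapd; syncrepl then catches up with any changes made on
#    the other masters since the slapcat was taken.
service slapd start
```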

Huh?

The CSNs contain the server IDs.  Servers ignore their own changes.


I admit I have not (yet!) been forced to recover an OpenLDAP master server from a serious disaster.

How should I recover any single master in a topology where the total number of masters is more than two, and modifications are made on all of them? I'm using the LTB project package - its start/stop script can also back up the server configuration and all databases.

Will replication manage to bring the recovered master up to date after a slapadd of the slapcat output taken from *any* of the other masters? Or is the more proper way to restore the last backup of the configuration and database and let replication do the rest? Of course, preventing clients from contacting the LDAP server until replication finishes is a must in such a situation.
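One way to check whether the recovered master has converged, before letting clients back in, is to compare the contextCSN values across the masters. A hedged sketch; the host names and base DN are placeholders:

```shell
# Read the replication cookie (contextCSN) from each master; when the
# values match on all servers, they have converged. Hosts and base DN
# are placeholders for this example.
for h in ldap1 ldap2; do
  echo "== $h =="
  ldapsearch -x -H "ldap://$h" -s base -b "dc=example,dc=com" contextCSN
done
```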

Best regards,
--
Olo