
Re: High Availability / Clustering

How do the virtual IPs ensure atomicity (can't two writes come in at the same time to both masters)? And what about the replication-loop problems I've read about on the list with multi-master replication?


On Tuesday, April 15, 2003, at 12:56 AM, Markus Storm wrote:


our servers are located behind a loadbalancer, too, but we're using multi-master replication.
We use different virtual IPs for reading and writing. This isn't applicable in all situations - YMMV -
but that way you can guarantee atomicity for writes (all writes funnel through a single master at a time) while evenly distributing reads across all servers.
No manual intervention needed whatsoever.
This system has been in production for more than 2 years providing the backend for a large-scale ISP.
The only pity is that the OpenLDAP developers still call multi-master 'experimental' and refuse to touch it.
I've been unable to change their minds in these 2 years ... I guess they simply don't like it.
But we consider it to be a far greater achievement than another 5 or 10% of performance.
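For illustration, the read/write VIP split described above can be sketched client-side: all modify-type operations go to a single write VIP, while reads rotate across the read VIPs. This is only a sketch; the hostnames and the round-robin policy are invented (in the real setup the load balancer, not the client, distributes the reads):

```python
import itertools

# Hypothetical virtual IPs -- substitute your own. The write VIP fronts
# exactly one master at a time, so all writes funnel through it.
WRITE_URI = "ldap://ldap-write.example.com"
READ_URIS = ["ldap://ldap-read1.example.com",
             "ldap://ldap-read2.example.com",
             "ldap://ldap-read3.example.com"]

_read_cycle = itertools.cycle(READ_URIS)

def uri_for(operation):
    """Route modify-type operations to the write VIP; rotate everything
    else (searches, compares, binds) across the read VIPs."""
    if operation in ("add", "modify", "delete", "modrdn"):
        return WRITE_URI
    return next(_read_cycle)
```

The point of the split is that write ordering is decided by one server, so the multi-master conflict question never arises for clients that honor it.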


Dave Augustus wrote:

Hi Lee,

Here is another twist that provides load-balancing and failover for reads:

I have LDAP slaves available for reads via Linux Virtual Server (LVS).
Any servers that require LDAP point at the LVS IP address for lookups.

Then I have 2 additional machines running in a master/slave configuration via
heartbeat - they share a common IP. When the master (server A) goes down
(for whatever reason), the slave (server B) is promoted. When the original
master (server A) comes up again, it comes up as a slave and then
determines via heartbeat whether it needs to promote itself to be the master
again. Any writes that occurred while it was offline are attempted by the
slurpd running on the new master (server B) or are available via the
replog.rej files.
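For reference, this style of setup rests on two small pieces of configuration: slurpd replication directives in the master's slapd.conf, and a heartbeat haresources line that assigns the shared IP and the slapd init script to the preferred node. The hostnames, DNs, addresses, and credentials below are placeholders, not values from the setup described above:

```
# master's slapd.conf (slurpd-era OpenLDAP) -- placeholder values
replogfile  /var/lib/ldap/replog
replica     host=serverB.example.com:389
            binddn="cn=replicator,dc=example,dc=com"
            bindmethod=simple credentials=secret

# /etc/ha.d/haresources -- serverA is the preferred owner of the
# shared IP; heartbeat starts slapd on whichever node holds it
serverA 192.168.1.100 slapd
```

Failed updates that slurpd could not push end up in per-replica .rej files next to the replog, which is why they remain recoverable after a failover.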

It is all just a matter of some scripts, heartbeat and OpenLDAP. This
system has been in production for 6 months and works great!

Dave Augustus

On Thu, 2003-04-10 at 16:47, Lee wrote:

I've read through most of the archives concerning high-availability options for OpenLDAP. Right now we're trying to make our LDAP infrastructure highly available, as well as load-balanced across three servers (more to be added later). From my research it seems we have a few options:

1) Experimental Multi-Master: This seems to have a number of atomicity issues. Plus it doesn't really solve the problem of LDAP clients not failing over properly to the second listed server if the first goes down.

2) Plain old Master + Multiple Slaves: Same client-failover issue as above, plus no high availability for writes.

3) Master + Slaves, promotion of a slave if the master goes down: Same LDAP-client failover problem as in 1) and 2) above, plus lots of issues with reclaiming master status after failure.

4) 1 master server cluster using shared storage: This seems like the only viable solution. The problem is we need support for many servers connecting to one shared storage device, as well as some sort of reliable locking mechanism that's compatible with OpenLDAP.

Has anyone successfully implemented 4)? If so, can you recommend any specific hardware and/or software that works nicely with OpenLDAP?

Thanks, L