Cluster replication: multiple masters



	Suppose I have a TCP/IP load balancer that directs incoming LDAP
traffic (on a public IP address) to any one of a group of computers (all
on a private network).  I'm talking about a standard load-balanced HA
cluster, i.e., the sort of thing you'd see at
http://www.linuxvirtualserver.org/ .
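
	(For concreteness, here is the kind of setup I mean -- an LVS/NAT
virtual service on the public IP forwarding port 389 to the backends.
All addresses below are made up:

    # Public VIP 203.0.113.10; backends 10.0.0.1 through 10.0.0.3
    ipvsadm -A -t 203.0.113.10:389 -s rr
    ipvsadm -a -t 203.0.113.10:389 -r 10.0.0.1:389 -m
    ipvsadm -a -t 203.0.113.10:389 -r 10.0.0.2:389 -m
    ipvsadm -a -t 203.0.113.10:389 -r 10.0.0.3:389 -m
)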

	Because of the load balancer, all of the (backend) LDAP servers
appear to have the same public IP address.  Thus, redirecting writes to
a "master" is not an option.  (Or rather, not a practical one.  I'd have
to set up another public IP, with special forwarding rules on the load
balancer, and then I'd have to automate the changing of those rules in
an LDAP master failover situation.  I'd also have to maintain and
propagate a "master" config file separate from the "slave" config file,
and then have some kind of election take place so that if the master
fails the slaves can figure out who the new master will be.  Too much
work and complexity for nothing.)

	I want LDAP writes to any one of the backend computers to be
immediately propagated to all of the other backend computers, such that
once somebody makes a write to one backend computer, that new data can
be read from any of the other backend computers.

	So, after all that: Is there any reason I can't just make every
backend computer a "master", configured to have all of the other backend
computers as "slaves"?  Thus every computer would be a master to all the
others, and every computer would also be a slave to all the others.
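
	Concretely, I'm imagining a slurpd-style slapd.conf like the one
below on each node (a sketch only -- the addresses, replicator DN, and
credentials are all made up):

    # slapd.conf on 10.0.0.1; same pattern on the other nodes
    replogfile /var/lib/ldap/replog
    # Push my writes to the other two backends:
    replica host=10.0.0.2:389
            binddn="cn=Replicator,dc=example,dc=com"
            bindmethod=simple credentials=secret
    replica host=10.0.0.3:389
            binddn="cn=Replicator,dc=example,dc=com"
            bindmethod=simple credentials=secret
    # Accept updates pushed from the other backends:
    updatedn "cn=Replicator,dc=example,dc=com"
    # (no updateref, since every node accepts writes directly)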

	Any reason that won't work?

	(And: what happens if a config file lists a node's own IP address
as a replica?  Does that result in some kind of infinite loop?  Or is
slurpd smart enough to ignore replica entries that point back at itself?
It would be nice if I only needed one config file that listed all of the
computers as replicas --including itself-- instead of having to remove
the local IP from each backend computer's copy independently.)
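
	That is, ideally every node could ship with the identical
fragment below (addresses made up; binddn/bindmethod/credentials omitted
for brevity).  Whether slurpd skips, or loops on, the entry for its own
address is exactly what I'm asking:

    # Same slapd.conf fragment on every node.
    # On 10.0.0.1, the first entry is the node's own address:
    replica host=10.0.0.1:389
    replica host=10.0.0.2:389
    replica host=10.0.0.3:389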

	Any information is greatly appreciated.  I have searched the FAQ
and read the Admin Guide, but did not find anything.  And retrieving 51
different archive files, saving them out of my email client to the hard
drive, and then grepping for I'm-not-sure-what didn't seem like a very
productive road, so I did not search the archives.  My apologies for
that.


Thank You,
Derek Simkowiak