
Re: MirrorMode behind fail over loadbalancer



I think so too; the idea is that you treat the second master server as a slave in practice, meaning you never send updates to it unless the primary master is down.

Effectively, the difference from a Master/Slave setup is that you will not have to promote the Slave to a Master or adjust any replication agreement settings in the event of a server failure.
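As I understand it, both nodes carry the same full master configuration, differing only in serverID and the provider URL, so there is nothing to "promote". A rough slapd.conf sketch (hostnames, suffix, and credentials here are made up):

# srv1 -- srv2 is identical except "serverID 2" and provider=ldap://srv1.example.com
serverID   1

database   bdb
suffix     "dc=example,dc=com"
syncrepl   rid=001
           provider=ldap://srv2.example.com
           type=refreshAndPersist
           retry="60 +"
           searchbase="dc=example,dc=com"
           bindmethod=simple
           binddn="cn=replicator,dc=example,dc=com"
           credentials=secret
mirrormode on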

Is that a fair analysis?

Sellers

On Jan 22, 2008, at 5:23 PM, Matthew Hardin wrote:

Diaa Radwan wrote:
We have two OpenLDAP 2.4.7 servers configured as MirrorMode. We are
planning to add a load balancer in front of both servers in the
production environment. We don't want to go through conflict issues,
which were described earlier as a messy process.


---------     ---------
.  Srv1 .     .  Srv2 .
---------     ---------
      \         /
     .-----------.
     .   LoadB   .
     .-----------.

As per my understanding, the load balancer (in failover mode) redirects
all traffic to the active server (srv1); if the active server goes down,
traffic is redirected to the standby server (srv2). When srv1 comes back
online, the load balancer will redirect all traffic to srv1 again, even
while srv1 is still in the process of syncing with srv2. The load
balancer will not consider the sync process; it will just redirect the
traffic.

It was previously stated on the mailing list that writes should go to
only one server at a time. Will any conflicts occur when a server is
receiving a bulk sync and, at the same time, receiving update
(attribute-level) or add requests?

 
Yes, this is a possibility. At Symas we do not advise our customers to switch back to a failed server immediately when it comes back online. Your MirrorMode servers should be peers in every sense of the word: they should have the same disk, memory, network, and processor configuration. Therefore it won't matter which server is fielding write requests.

When your first server goes offline, your load balancer should switch to the second and continue in that configuration until that one goes offline. Presumably by then you will have gotten your first server back online and it will have synchronized itself. If your second server goes offline, then the load balancer can switch back to the first.

The synchronization status can be checked by looking at the operational attribute 'contextCSN' in the root object of the replicated naming context (remember to use '+' or call the attribute name out explicitly when using ldapsearch).
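For example, to read it with ldapsearch (hostnames and suffix here are placeholders, and this assumes anonymous reads are allowed; add -D/-W otherwise):

ldapsearch -x -LLL -H ldap://srv1.example.com -s base -b "dc=example,dc=com" contextCSN
ldapsearch -x -LLL -H ldap://srv2.example.com -s base -b "dc=example,dc=com" contextCSN

In a mirror pair you will normally see one contextCSN value per serverID that has accepted writes.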
What happens if there is an attribute-level conflict? How can it be avoided?
Suggestions are highly welcome.

 
Best to follow the procedure from the previous paragraph. If you absolutely _must_ switch back to the first server as soon as possible, wait until the contextCSN attributes in the mirror pair are equal to one another, or at least reasonably close. Note that in a system with a heavy write load they may never stay equal long enough to make a clean switch, so 'close' is good enough.
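A rough shell sketch of that check (same placeholder hostnames/suffix and bind assumptions as above):

csn1=$(ldapsearch -x -LLL -H ldap://srv1.example.com -s base -b "dc=example,dc=com" contextCSN | grep '^contextCSN:' | sort)
csn2=$(ldapsearch -x -LLL -H ldap://srv2.example.com -s base -b "dc=example,dc=com" contextCSN | grep '^contextCSN:' | sort)
if [ "$csn1" = "$csn2" ]; then echo "in sync"; else echo "still converging"; fi

Run it (or just eyeball the values) until the two sets match, or until the timestamps are close enough for your purposes.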
--
Diaa Radwan

 
Hope this helps,

-Matt

--

Matthew Hardin
Symas Corporation - The LDAP Guys
http://www.symas.com


______________________________________________
Chris G. Sellers | NITLE Technology
AIM: imthewherd | GTalk: cgseller@gmail.com