
Re: how to configure multi-master



Buchan,

Thank you very much for taking the time to walk through those scenarios and include your suggestions. Most of what you have suggested resonates with me, and I have a follow-up question to see whether you would recommend something different for our particular situation.

We have a small LDAP database, but we have had several LDAP outages (mostly due to BDB corruption that we have yet to diagnose, beyond noting that all the versions involved are several years old). Each outage ends up taking out all of our Unix, OS X, and Windows systems (we run Samba on LDAP). Our slave LDAP server seems to stay in a good state during these outages, but most of our systems are, for one reason or another, unable to communicate with the slave --- and even if we pointed all the systems at it, we would not be able to write to it while the master is down, which is a deal breaker for us. Changes need to keep happening, and we need to be confident that those diffs will make it back into the master when it is revived.

What is your suggestion for our specific scenario?

Thanks in advance,
Kevin

Buchan Milne wrote:
On Tuesday 22 July 2008 18:53:56 Kevin Elliott wrote:
Folks,

With all this talk about multimaster, could someone point me to some
resources that describe industry standard implementations and best
practices of OpenLDAP in multimaster mode for the purposes of high
availability and robustness? I have yet to see comprehensive documents
that describe solutions for most small to medium businesses, and would
love to see something you recommend.


In my opinion (I may have missed some scenarios):

1) If you need failover reads, have sufficient slaves, and ensure that all software and configurations are able/configured to fail over. In my case, that means I probably need to build sudo against OpenLDAP on Solaris instead of against the Sun LDAP SDK, and I might need to find a solution for bind_sdb-ldap (which doesn't seem to be able to take multiple hostnames in the LDAP URI).
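
For illustration, client-side failover with the OpenLDAP libraries and pam_ldap/nss_ldap is usually just a matter of listing more than one server in the URI; a sketch along these lines (hostnames and base DN are placeholders, adjust to your environment):

    # /etc/openldap/ldap.conf -- OpenLDAP client library
    URI   ldap://ldap1.example.com/ ldap://ldap2.example.com/
    BASE  dc=example,dc=com

    # /etc/ldap.conf -- PADL pam_ldap/nss_ldap
    uri   ldap://ldap1.example.com/ ldap://ldap2.example.com/
    base  dc=example,dc=com

The hosts are tried in order, so list the preferred server first.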


2) If you need a site that only has a slave to be able to propagate changes, ensure that your software is configured to chase referrals on updates (e.g. samba can, pam_ldap can, etc.).
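
The slave's side of this is the updateref directive, which answers any write attempt with a referral to the master; a minimal sketch, with the master hostname as a placeholder:

    # slapd.conf on the slave: write attempts get a referral to the master,
    # which referral-chasing clients (samba, pam_ldap, ...) will then follow
    updateref  ldap://master.example.com/

Clients that cannot or will not chase that referral are covered by the next point.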

3) If you have a site that only has a slave, but changes need to be propagated by clients of this slave using software that does not chase referrals, use the chain overlay.
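
A rough sketch of what that can look like in slapd.conf on the slave (URI, bind DN, and credentials are placeholders):

    # the chain overlay follows the write referral itself, so even clients
    # that cannot chase referrals get their updates through to the master
    overlay             chain
    chain-uri           ldap://master.example.com/
    chain-idassert-bind bindmethod=simple binddn="cn=proxy,dc=example,dc=com" credentials=secret mode=self
    chain-return-error  TRUE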

If you have users using the OpenLDAP command-line utilities (which won't chase referrals with authentication), teach the users to send changes to your master. If they can't do that, they shouldn't be using these utilities.
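
That just means pointing the tool at the master explicitly, for example (host, bind DN, and LDIF file are placeholders):

    ldapmodify -H ldap://master.example.com/ \
        -D "cn=someadmin,dc=example,dc=com" -W -f change.ldif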

4) If you need consistent but highly available writes, use cluster middleware. If you have shared storage available (e.g. a SAN), use it. If you don't, use a software shared-storage implementation (e.g. drbd).
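
As a sketch of the drbd variant (hostnames, devices, and addresses are placeholders): the LDAP database directory lives on the replicated device, and the cluster middleware mounts it and starts slapd on whichever node is currently active:

    # /etc/drbd.conf fragment
    resource ldap {
      protocol C;
      on ldap1.example.com {
        device    /dev/drbd0;
        disk      /dev/sdb1;
        address   192.168.1.1:7788;
        meta-disk internal;
      }
      on ldap2.example.com {
        device    /dev/drbd0;
        disk      /dev/sdb1;
        address   192.168.1.2:7788;
        meta-disk internal;
      }
    }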

5) If you need more write throughput (and tuning will not help you further), split your DIT, or scale up (get faster disks, more disks, a SAN, etc.). Scaling out won't help.
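
Splitting the DIT here means giving each master only part of the tree and referring the rest elsewhere; a very rough sketch with placeholder names:

    # slapd.conf on server A: masters only ou=people, refers the rest to B
    referral   ldap://ldap-b.example.com/

    database   bdb
    suffix     "ou=people,dc=example,dc=com"
    directory  /var/lib/ldap/people

    # slapd.conf on server B: masters only ou=groups, refers the rest to A
    referral   ldap://ldap-a.example.com/

    database   bdb
    suffix     "ou=groups,dc=example,dc=com"
    directory  /var/lib/ldap/groups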

6) If you need to be able to write to the same DIT portion on different servers simultaneously, you should consider whether the possible data synchronisation issues could pose a problem. If they don't, multi-master may be for you.
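
If you do decide it is for you and you are on OpenLDAP 2.4, the general shape is mirror-mode syncrepl between the masters; a sketch for the first server, with all names and credentials as placeholders (the second server is identical except for serverID 2 and a provider URI pointing back at the first; indented lines continue the preceding directive):

    serverID   1

    database   bdb
    suffix     "dc=example,dc=com"
    directory  /var/lib/ldap

    syncrepl   rid=001
               provider=ldap://ldap2.example.com/
               type=refreshAndPersist
               retry="5 5 300 +"
               searchbase="dc=example,dc=com"
               bindmethod=simple
               binddn="cn=replicator,dc=example,dc=com"
               credentials=secret
    mirrormode on

    overlay    syncprov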


I have seen people on this list wanting multi-master to solve most of the items above, where only one of them (6) may be a valid reason.


BTW, I use multi-master on my "personal" infrastructure, which consists of a desktop machine at home, a laptop that is used at home, at work, and other places, and a desktop at work. Both desktops are domain controllers backed by LDAP, and I have multi-master configured between these 3 machines to ensure that password changes by domain members (at home or at work) will be propagated to all LDAP servers. However, I think this is probably an abuse of multi-master, and I don't think I will be logging any ITSs in the event that I lose any changes ....

In production, I have one HA cluster (RHEL3 with Red Hat Cluster Suite on an EMC SAN for shared storage) as the master for one environment (with 2 slaves in the production site, and one "failover" master and one slave in the DR site). The other environment (which is actually bigger) has a standalone master and load-balanced slaves for the "production" site, and standalone slaves for other sites. I don't think I will be risking data consistency on > 1 million entries with multi-master.



Regards,
Buchan