Re: Multi Master Environment for OpenLDAP 2.3
--On Friday, February 02, 2007 11:29 AM +0200 Buchan Milne
Hmm, I load balance OpenLDAP 2.3. Why ? Because, on the occasions we have
heavy writes (averaging 30-40 modifications per second over a few hours),
one slave isn't guaranteed to respond to our radius servers within the
time we have to send a response back to the radius proxy. Under the high
write load, 1 slave would not handle the full read load (about 1000
operations per second), but each slave can normally still handle about
250 operations per second without too much delay.
Sure, pure read performance is much higher than I need (easily 10000
searches/sec on boxes with no write load), but you need to consider
worst-case load over the entire lifetime of the installation, and write
load can have a big impact.
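The sizing argument above is just back-of-envelope arithmetic, sketched here with the figures quoted in this thread (the ~250 ops/sec per slave under heavy write load is the number that drives the result):

```python
import math

def replicas_needed(peak_read_ops: float, per_replica_ops: float) -> int:
    """Minimum replicas to absorb peak_read_ops when each replica can
    only sustain per_replica_ops while replication writes are applied."""
    return math.ceil(peak_read_ops / per_replica_ops)

# ~1000 reads/sec at peak, ~250/sec per slave while 30-40 mods/sec
# of replication traffic is being applied.
print(replicas_needed(1000, 250))  # 4 replicas
```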
There are other reasons to load balance, as well. Consider 24/7 uptime
requirements. If I have only 1 server, that is a SPOF. Even if I have 2
servers, the risk of not meeting the 24/7 needs is rather great. At Stanford,
we have one master, isolated for nearly pure write purposes, 1 HA Standby
master (waiting on 2.4), and 4 replicas that are geographically diverse.
This also means I can upgrade the entire replica cluster with no outage.
And of course, as Buchan notes, if there were a high volume of writes to hit
the servers, this helps mitigate the impact.
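For a topology like the one described (one write master feeding read-only replicas), each 2.3 replica carries a syncrepl consumer stanza roughly like the following; the hostnames, suffix, and credentials here are placeholders, not Stanford's actual config:

```
# slapd.conf on a read-only replica (OpenLDAP 2.3 syncrepl consumer)
syncrepl rid=001
        provider=ldap://master.example.com
        type=refreshAndPersist
        searchbase="dc=example,dc=com"
        bindmethod=simple
        binddn="cn=replicator,dc=example,dc=com"
        credentials=secret
        retry="60 +"

# refer write attempts back to the master
updateref ldap://master.example.com
```

Because clients only read from the replicas and writes are referred to the master, any one replica can be taken down and upgraded without a client-visible outage.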
And although OpenLDAP is blazingly fast, when you start throwing things in
like SASL/GSSAPI binds, your performance can take a major hit. Although my
hardware can sustain 15,000 authorizations/second with simple binds (30,000
searches/second), it drops to 100 searches/second with SASL/GSSAPI in play
if the application connects, binds, searches, disconnects. If it uses
persistent connections, then it is again blazingly fast, since the
massively slow bind step is removed.
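The connect/bind/search/disconnect penalty can be illustrated with a toy cost model (not real LDAP code; the per-operation costs are back-solved from the rough figures above, so treat them as assumptions):

```python
def throughput(search_cost: float, bind_cost: float,
               searches_per_conn: int) -> float:
    """Searches/sec when a connection's bind cost is amortized over
    searches_per_conn searches before the client disconnects."""
    per_search = search_cost + bind_cost / searches_per_conn
    return 1.0 / per_search

search_cost = 1 / 30_000           # ~30k plain searches/sec
bind_cost = 1 / 100 - search_cost  # GSSAPI bind dominates at ~100 ops/sec

print(round(throughput(search_cost, bind_cost, 1)))      # 100: bind per search
print(round(throughput(search_cost, bind_cost, 10_000))) # persistent: near 30k
```

The model makes the point plainly: once the bind is paid only once per long-lived connection, throughput climbs back toward the raw search rate.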
Principal Software Developer
ITS/Shared Application Services
GnuPG Public Key: http://www.stanford.edu/~quanah/pgp.html