
Re: Backends for Reliability



On Sunday 05 February 2006 14:24, Ian A. Tegebo wrote:
> My shop is mostly running PostgreSQL with perl scripts that access our
> data.  We got more data and users over time and had a hard time keeping
> up with load, data consistency, and interoperability with other
> software.
>
> In comes OpenLDAP.  Great.  LDAP can replicate our data and provide data
> consistency and interoperability, but crashes with back-bdb have left
> us with data loss;

So, configure checkpoints, and if you're running a release older than 2.3,
ensure checkpointing actually happens (e.g. via cron) and never let slapd
start up without first running database recovery.
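For example (the paths and values below are illustrative, not anything from
the original post), the back-bdb checkpoint directive plus a cron line and an
init-script line cover all three points:

    # slapd.conf: checkpoint every 512kb of log data or 30 minutes
    checkpoint  512 30

    # cron, for pre-2.3 setups where the directive isn't honoured
    # reliably: force a Berkeley DB checkpoint every 30 minutes
    */30 * * * *  /usr/bin/db_checkpoint -1 -h /var/lib/ldap

    # init script, before slapd is started: run database recovery
    # against the BDB environment
    /usr/bin/db_recover -h /var/lib/ldap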

Or, upgrade.

> granted, I did not employ all of Sleepycat's recovery 
> mechanisms, but that left our management with some doubts about its
> reliability.

Not configuring the reliability features, then having doubts about reliability 
seems counter-intuitive.

[...]

> 			Main Goal
> We want to be able to centralize our data in a cluster and then
> distribute and access it with slapd.

What kind of cluster?

> 			---------
>
> With MySQL or PostgreSQL we could do the first part, but then not
> easily be able to access it from slapd.  Of course, going the other
> way is easy.  If we centralized all of our data in slapd, we'd be
> even more freaked out about data loss as we would create a single
> point of failure at the master slapd's backend.

So, fix your reliability issues first (as above).

>
> In comes Berkeley DB's High Availability product.  This looks fantastic
> in terms of our Main Goal:
>
> http://www.sleepycat.com/docs/ref/rep/intro.html
>
> Unfortunately:
>
> http://www.openldap.org/lists/openldap-software/200402/msg00666.html
>
> And this is an argument I've been putting off; I contend that slapd
> replication is a good thing, but that there is a compelling reason to
> include Berkeley DB replication.  If I rely solely on slapd for
> replication I can run into trouble.
>
> (Please correct my understanding of replication mechanisms.)
>
> Right now, only masters/providers can modify their backends from client
> requests.  So if the master/provider goes down, the slaves/consumers may
> have the data, but they cannot accept updates nor forward requests for
> writes.  I cannot think of a way that slaves/consumers could fail over
> to another master/provider to allow updates to happen; and if they did,
> you've created the problem of having to sync/rebuild the old provider
> when it comes back up.

That depends on whether or not the data storage backend is kept consistent
between the masters.
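For reference, a consumer in this picture is just an ordinary syncrepl slave
whose writes are referred to the single provider. A minimal sketch (all
names, suffixes and credentials here are invented):

    # consumer slapd.conf excerpt (illustrative values)
    database    bdb
    suffix      "dc=example,dc=com"
    directory   /var/lib/ldap

    syncrepl    rid=001
                provider=ldap://master.example.com
                type=refreshAndPersist
                searchbase="dc=example,dc=com"
                bindmethod=simple
                binddn="cn=replica,dc=example,dc=com"
                credentials=secret

    # any write sent to this consumer is referred here
    updateref   ldap://master.example.com

The single updateref is exactly the failover gap you describe: if that host
is down, the referral points nowhere useful.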

> I started to imagine a situation where slave/consumers would have
> multiple updateref URLs that referred to slapds running on top of the
> Berkeley DB cluster; they could fail over to subsequent ones after a
> timeout.  Alternatively, one could use round-robin DNS: sweet.

Or, you could use HA clustering software and point the slaves at a floating
IP managed by it. Most HA clustering software can manage both shared storage
and floating IPs per-service, and shared storage need not be expensive (see
drbd etc).
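As a rough sketch of that (hostnames, devices and addresses all invented),
with e.g. Heartbeat v1 and drbd the floating IP, the replicated volume
holding the BDB environment, and slapd all move between nodes together:

    # /etc/ha.d/haresources: node1 is the preferred master
    node1 IPaddr::192.168.0.100 \
          drbddisk::r0 \
          Filesystem::/dev/drbd0::/var/lib/ldap::ext3 \
          slapd

    # consumer slapd.conf: writes always chase the floating IP
    updateref ldap://192.168.0.100

The slaves never need to know which physical node is currently active.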

It's not easy to help you evaluate your ideas when you haven't provided
enough detail on your requirements (or the resources available to you); with
that detail, simpler solutions might well turn out to be possible.

Regards,
Buchan


-- 
Buchan Milne
ISP Systems Specialist
B.Eng,RHCE(803004789010797),LPIC-2(LPI000074592)
