
Re: replication to 50.000 servers



On Wednesday, September 5, 2001, at 01:51 AM, Roel van Meer wrote:

> Hi list,
> I am in need of a solution where I store configuration in a central
> database, which gets replicated to about 50.000 different machines.
> I thought about using LDAP for this.

Er... bad idea, with *every* LDAP implementation (and every DB system)
I know of.

Maybe DNS would work better, maybe slaving the machines to a few
high-speed servers would, maybe a content distribution system...

> Apart from finding a solution where I run 50.000 slurpd processes,
> is there another way to do this? The updates don't have to be done
> immediately.

Five databases, replicated? One lookup server for the master tree portion, and 100 branch servers fanning out below it? Roughly like the sketch below.
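A minimal sketch of the top tier's slapd.conf, assuming the stock
slurpd-style replication of the day (hostnames and credentials are
invented for illustration):

  # master tier: log changes to a replog; slurpd pushes them to the
  # branch servers, and each branch repeats this for its own leaves
  replogfile  /var/lib/ldap/replog

  replica     host=branch1.example.com:389
              binddn="cn=Replicator,dc=example,dc=com"
              bindmethod=simple credentials=secret

  replica     host=branch2.example.com:389
              binddn="cn=Replicator,dc=example,dc=com"
              bindmethod=simple credentials=secret

That way no single slurpd ever pushes to more than a handful of
consumers, no matter how many leaves sit at the bottom.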


Think, for a second, about DNS:

1. .             (~20 servers?)
2. com.          (~20 servers?)
3. company.com.  (2?)

By breaking it down, fewer than a hundred servers direct queries for
*billions* of domains.
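The same arithmetic covers your case: with a uniform fan-out of 224,
two tiers are already enough, since

  1 master -> 224 intermediates -> 224 x 224 = 50.176 leaves

and no single server ever replicates to more than 224 consumers.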

Referrals work in a similar way: each server holds only its own portion of the tree and refers clients to the server that holds the rest.
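In LDIF, a referral object that hands one branch off to another server
looks roughly like this (DN and URL invented for illustration):

  dn: ou=branch17,dc=example,dc=com
  objectClass: referral
  objectClass: extensibleObject
  ou: branch17
  ref: ldap://branch17.example.com/ou=branch17,dc=example,dc=com

A client (or a chaining server) searching under that branch gets
pointed at branch17.example.com instead of the box it first asked.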

> (I thought about a cron job that checks for which destinations
> updates have been committed to their update logs, and runs a slurpd
> process for those domains for a limited time only.)
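That part is workable; slurpd has a one-shot mode for exactly this.
Something along these lines from cron (paths invented; check your
build's defaults) would drain the pending log and exit:

  #!/bin/sh
  # -o: one-shot mode, process the given replog and exit
  # -r: the replication log to process
  REPLOG=/var/lib/ldap/replog
  test -s "$REPLOG" || exit 0
  /usr/local/libexec/slurpd -o -r "$REPLOG" \
      -f /usr/local/etc/openldap/slapd.conf

But the real question is whether you need that many replicas at all: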

Could you define your throughput needs, how the data flows through the system, and, most importantly, why you need 50.000 identical copies of the same data rather than fewer central servers? In most cases it doesn't make sense to replicate that much data just to avoid live searches.


-Bop

--2D426F70|759328624|00101101010000100110111101110000
ron@opus1.com, 520-326-6109, http://www.opus1.com/ron/
The opinions expressed in this email are not necessarily those of myself,
my employers, or any of the other little voices in my head.