Re: replication to 50.000 servers
On Wednesday, September 5, 2001, at 01:51 AM, Roel van Meer wrote:
> I am in need of a solution where I store configuration in a central
> database, which gets replicated to about 50.000 different machines.
> I thought about using ldap for this.
Er... bad idea.
With *every* ldap (and every db system) I know of.
Maybe DNS could work better, maybe slaving the machines to a few
high-speed servers would work better, maybe a content-distribution
scheme would work better.
> Apart from finding a solution where I run 50.000 slurpd processes,
> is there another way to do this? The updates don't have to be done [...]
5 databases, replicated? One lookup server for one master tree
portion, and 100 branch servers?
Think, for a second, about DNS:

  1. .            (20 servers?)
  2. com.         (20?)
  3. company.com. (2?)
By breaking the namespace down, fewer than a hundred servers direct
queries for *billions* of domains.
Referrals work in a similar way.
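To make the analogy concrete, here is a minimal sketch (all server
names and entries are hypothetical) of how a lookup walks a
partitioned tree: each server holds only its own branch plus referrals
to the servers for subordinate branches, so no server needs a full
replica of everything.

```python
# Hypothetical sketch: chasing referrals down a partitioned tree, the
# way DNS delegation (and LDAP referrals) avoid one giant replicated
# database.  Three small servers instead of 50.000 full copies.
SERVERS = {
    "root": {
        "entries": {},
        "referrals": {"dc=com": "com-server"},
    },
    "com-server": {
        "entries": {},
        "referrals": {"dc=company,dc=com": "company-server"},
    },
    "company-server": {
        "entries": {"cn=config,dc=company,dc=com": {"setting": "42"}},
        "referrals": {},
    },
}

def lookup(dn, server="root", hops=None):
    """Chase referrals until some server actually holds the entry."""
    hops = hops or [server]
    node = SERVERS[server]
    if dn in node["entries"]:
        return node["entries"][dn], hops
    # follow the most specific matching referral (longest suffix first)
    for suffix, target in sorted(node["referrals"].items(),
                                 key=lambda kv: -len(kv[0])):
        if dn.endswith(suffix):
            return lookup(dn, target, hops + [target])
    raise KeyError(dn)

entry, path = lookup("cn=config,dc=company,dc=com")
print(path)  # three hops through the hierarchy, not 50.000 replicas
```

Each client query touches only the handful of servers on its path,
which is exactly why a hundred-odd DNS servers can front billions of
names.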
> (I thought about a cron job that checks for which destinations
> updates have been committed to their update logs, and runs a slurpd
> process for those domains for a limited time only.)
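That on-demand idea could be sketched roughly as below. This is a
hypothetical dry-run only: the replog path, the `replica:` header
lines, and the slurpd one-shot invocation are assumptions you should
check against your own replogfile and slurpd(8) before relying on
them.

```python
# Hypothetical sketch of the quoted cron-job idea: scan a slurpd
# replication log, find which replicas have pending changes, and print
# the one-shot slurpd commands a wrapper could then run.  Assumes
# replog entries begin with "replica: host:port" header lines.

def replicas_with_pending_updates(replog_text):
    """Collect the distinct replica hosts named in a replog."""
    pending = set()
    for line in replog_text.splitlines():
        if line.startswith("replica:"):
            pending.add(line.split(":", 1)[1].strip())
    return sorted(pending)

sample_replog = """\
replica: slave1.company.com:389
time: 999648000
dn: cn=config,dc=company,dc=com
changetype: modify

replica: slave2.company.com:389
time: 999648060
dn: cn=config,dc=company,dc=com
changetype: modify
"""

for host in replicas_with_pending_updates(sample_replog):
    # a cron-driven wrapper could run slurpd in one-shot mode (-o)
    # against a per-replica replog; path below is made up
    print("slurpd -o -r /var/spool/replog.%s" % host)
```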
Could you define your throughput needs, how the data flows
through the system, and most importantly, why you would need
50.000 identical copies of the same data rather than fewer
central servers? In most cases it doesn't make sense to replicate
this much data just to replace live searches.
email@example.com, 520-326-6109, http://www.opus1.com/ron/
The opinions expressed in this email are not necessarily those of
my employers, or any of the other little voices in my head.