
RE: Intense LDAP write operations (sort of)




Some comments.

Once one gets into distribution - i.e. different name spaces in different
servers - and one applies distributed searching with domain-based access
controls and distributed trust/authentication enforced, as per X.500, then
things get a little more complex regarding management and performance.

In addition, even setting the above points aside, other factors make life
very difficult for those who only have replicating LDAP servers or LDAP
protocol "routing" technologies.

Attribute indexing.
Some servers must be indexed manually or have limited indexing capabilities
(we index everything automatically) - so in the former case a search on
attribute type A in server X behaves quite differently from a search on
attribute type M in the same server. When you string a lot of these servers
together with, e.g., LDAP routing functions, the problem is compounded,
because there is NO SEARCH (or update) CONSISTENCY across this type of
system - and if these operations are queued with updates, then how long is
the update? It's a bit like the web. So how do you interpret a MIB when you
don't know what the indexing, routing and queueing properties of these
servers are?
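
One can see the asymmetry directly by timing the same filters against
servers with different indexing. A minimal sketch using Python's ldap3
library; the hostnames, base DN and which attributes are indexed where are
purely assumed for illustration:

    import time
    from ldap3 import Server, Connection, SUBTREE

    # Hypothetical servers: 'dsa-indexed' indexes everything,
    # 'dsa-limited' indexes only cn. All names here are assumptions.
    HOSTS = ['ldap://dsa-indexed.example.com', 'ldap://dsa-limited.example.com']
    BASE = 'dc=example,dc=com'
    FILTERS = ['(cn=smith*)',             # indexed on both servers
               '(telephoneNumber=555*)']  # unindexed on dsa-limited

    for host in HOSTS:
        conn = Connection(Server(host), auto_bind=True)   # anonymous bind
        for flt in FILTERS:
            start = time.monotonic()
            conn.search(BASE, flt, search_scope=SUBTREE, attributes=['cn'])
            elapsed = time.monotonic() - start
            # An unindexed filter degrades to a full DIT scan, so the same
            # query can take milliseconds on one server and minutes on the
            # other - and a "router" in front of both hides which is which.
            print(f'{host} {flt}: {len(conn.entries)} entries in {elapsed:.3f}s')
        conn.unbind()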

DIT sizes.
Some vendors enjoy the fact that they can hold one billion entries per
server... why would you do this? My answer is: because you HAVE TO - no
distribution. And specifically, if one can only index a few attributes with
DIT sizes like this in one server, that will place considerable search
overheads on the server - and if those searches are queued with updates,
then how long is the update?
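
To put rough numbers on that, a back-of-envelope sketch - the entry count
is the vendor claim above, the scan rate is an assumption for illustration
only:

    # Back-of-envelope: an unindexed search over a monolithic DIT.
    # Both figures below are assumptions for illustration only.
    entries = 1_000_000_000        # one billion entries in a single server
    scan_rate = 200_000            # entries evaluated per second (assumed)
    scan_seconds = entries / scan_rate
    print(f'Full scan: {scan_seconds:,.0f} s '
          f'(about {scan_seconds / 3600:.1f} hours)')
    # ~5,000 s, i.e. roughly 1.4 hours, for ONE unindexed search - and any
    # update queued behind it waits that long before it is even applied.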

Component Matching.
If you do an update while someone else (a security administrator) is
retrieving millions of certs - because the directory does not have cert/CRL
component matching - how long is the update?
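
The contrast looks something like the following sketch, again with ldap3.
The first search is the expensive pattern the paragraph describes; the
second assumes a DSA that supports certificate component matching, and the
filter syntax and serial-number check are illustrative assumptions, not any
particular product's feature:

    from cryptography import x509
    from ldap3 import Server, Connection, SUBTREE

    conn = Connection(Server('ldap://dsa.example.com'), auto_bind=True)

    # Expensive pattern: no component matching, so the administrator pulls
    # EVERY certificate and filters client-side. Millions of binary values
    # cross the wire, and updates queue behind this read for its duration.
    conn.search('dc=example,dc=com', '(userCertificate;binary=*)',
                search_scope=SUBTREE, attributes=['userCertificate;binary'])
    wanted = [e for e in conn.entries
              if x509.load_der_x509_certificate(
                     e['userCertificate;binary'].raw_values[0]
                 ).serial_number == 12345]

    # With server-side component matching the DSA evaluates the assertion
    # inside the certificate and returns only the hits - one small result,
    # no bulk transfer. The filter syntax below is illustrative; whether a
    # given server supports it at all is exactly the point above.
    conn.search('dc=example,dc=com',
                '(userCertificate:componentFilterMatch:='
                'item:{ component "toBeSigned.serialNumber", '
                'rule integerMatch, value 12345 })',
                search_scope=SUBTREE, attributes=['cn'])
    conn.unbind()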

Replicate everything to everywhere: LDAP server mode.
If one update takes less than a few milliseconds in one server... wow. But
if you MUST HAVE a number of replicated servers all over the world, then
surely the update time and cost become explosive: comms, processing,
logging, integrity and reliability, backups and all that. Let's say, as a
scenario, that each update across ten servers is worth at least $10 of
operational and resource costs.
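
Taking that scenario at face value, the arithmetic gets ugly fast. A toy
cost model - the update volume is an assumed figure, the $10 is the
scenario above:

    # Toy cost model for "replicate everything" - figures are assumptions,
    # except the $10/update, which is the scenario stated above.
    replicas = 10
    cost_per_update = 10.00       # USD per update across all replicas
    updates_per_day = 50_000      # assumed profile/password change volume
    daily_cost = cost_per_update * updates_per_day
    print(f'{updates_per_day:,} updates/day x ${cost_per_update:.2f} each '
          f'= ${daily_cost:,.0f}/day across {replicas} servers')
    # 50,000 x $10 = $500,000/day - and the figure grows with every replica
    # added, since each one must transmit, apply, log and back up every
    # update.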


System integrity:
If you don't apply ACI, DIT integrity, and object class and attribute
checking before the update, and two-phase commit during the update, what
have you got? Well, if it isn't a distributed directory, then all it is is
an unscaleable, high-operational-cost, low-integrity, single-server LDAP
attribute store.
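
For the update path, "checking before, two-phase commit during" has roughly
the following shape. A minimal sketch, not any product's actual protocol:
validate() stands in for the ACI/DIT/object class/attribute checks, and
Replica for a real DSA interface:

    # Minimal two-phase-commit shape for a replicated directory update.
    class Replica:
        def __init__(self, name):
            self.name, self.staged, self.committed = name, None, {}

        def prepare(self, dn, change):   # phase 1: stage, vote, apply nothing
            self.staged = (dn, change)
            return True

        def commit(self):                # phase 2: make the change durable
            dn, change = self.staged
            self.committed[dn] = change
            self.staged = None

        def abort(self):                 # discard the staged change
            self.staged = None

    def validate(dn, change):
        # ACI, DIT integrity, OC and attribute checking belong HERE,
        # before any replica is touched.
        return bool(dn) and bool(change)

    def replicated_update(replicas, dn, change):
        if not validate(dn, change):
            return False
        if all(r.prepare(dn, change) for r in replicas):  # all must vote yes
            for r in replicas:
                r.commit()
            return True
        for r in replicas:               # one "no" vote aborts everywhere
            r.abort()
        return False

    dsas = [Replica(f'dsa{i}') for i in range(3)]
    print(replicated_update(dsas, 'cn=jones,dc=example,dc=com',
                            {'telephoneNumber': '555 0100'}))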

AND WHY would I want a single-server attribute store like that to run an
online business with?

It may have taken 20 years to get a few million customers... with a poor
directory strategy you can lose them in minutes - and your stock value with
them!



After all, a directory service is the scaleable, extensible, object-oriented
information infrastructure with logical views (based on trusted interfaces,
consistent authentication and ACI) for customers and staff alike. It is
necessary for user service provisioning, DEN, PKI, CRM, PBX, CTI and user
authentication applications - as well as for White, Blue, Green and Yellow
Pages (customer-facing) and Catalogue (purchasing) type applications.

Why would I even consider something that could not provide scaling
flexibility, consistent behaviour and information integrity when developing
one's global EC infrastructure?


When you get into distributed information systems, life is different. It is
not very useful to say that a single server performs a fixed update
operation in xx milliseconds, particularly when its integrity features are
not explained. The issue that we address is building a deterministic,
high-integrity, operational, distributed directory infrastructure for 300M+
identified and profiled users - globally.




regards as always alan



-----Original Message-----
From: John_Payne@motorcity2.lotus.com
[mailto:John_Payne@motorcity2.lotus.com]
Sent: Saturday, June 10, 2000 4:52 AM
To: ietf-ldapext@netscape.com
Subject: RE: Intense LDAP write operations (sort of)




O.K. This may be getting a little bit off-topic, but this discussion about
replication, intense write operations etc. does raise one very good issue.
Tuning such a distributed/replicated information base is a very complex and
ongoing activity based on changing usage patterns, volatility of the
information, etc. I have seen very little discussion here about statistics
gathering and monitoring/management of the directory. Is there another
RFC/mailing list for this topic?