
RE: Intense LDAP write operations



[Moderator, I am not sure whether I am violating the intent of this list by continuing this discussion. If so, please let me know and I can take this to a separate discussion group. Thanks in advance.]

Alan,

Actually, the exercise was undertaken to see how far we could push the limits before eDirectory broke down. (It didn't, in this case; we had to stop the exercise at 1+ billion objects to show it at an event.)

A billion objects may sound like a lot today, but so did 640KB of address space for PCs in 1984!

Not all entries need necessarily be personal secrets. In your specific example, it does make sense to have a global directory for mobile phone numbers (projected to cross 1 billion by 2003) that is updated by regional admins but seen as one directory by the mobile phone owners. Global knowledge bases like the Human Genome will contain about 3 billion codes. Replication may take a 'long' time and may even be filtered, but since these entries don't change much, it makes sense to have the directory automatically replicate them to where scientists actually need them.

I don't think the assumption that there is a trade-off between capacity, concurrency and search time is necessarily true. With proper design (as in NDS eDirectory) that exploits the usage pattern for directories, it is possible to retain essentially flat search times through hundreds of millions of entries.
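
To put rough numbers on 'essentially flat': with any B-tree-style index, lookup cost grows only with the logarithm of the entry count. A back-of-the-envelope sketch in Python (the branching factor of 200 is purely an illustrative assumption, not an eDirectory figure):

import math

# With a B-tree-like index of branching factor b, a lookup
# touches about ceil(log_b(n)) index nodes. b = 200 is an
# illustrative assumption, not an eDirectory figure.
BRANCHING = 200

for n in (10**6, 10**8, 10**9):
    depth = math.ceil(math.log(n, BRANCHING))
    print(f"{n:>13,} entries -> ~{depth} index-node reads per lookup")

# 1,000,000 entries -> ~3 reads; 100,000,000 -> ~4; 1,000,000,000 -> ~4,
# which is why search time stays nearly constant as the tree grows.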

Subbu K. K.
>>> "Lloyd, Alan" <Alan.Lloyd@ca.com> 06/09/00 11:45AM >>>
If you are worried about capacity and scalability - well, replicated systems
don't scale. Just consider the telephone system - we log on where we are and
use local calls, national calls and long-distance calls, i.e. the
authentication base is distributed. X.500 is a distributed system in that it
forms a distributed system infrastructure, i.e. one does not need to
replicate AU to the US, the US to the UK, and so on.

It seems odd that one would put a billion objects in one server and then
have to replicate them all over the place - presumably because one could not
have a distributed system.
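
To make the distinction concrete, here is a minimal client-side sketch using
Python's ldap3 library (the host names and base DN are invented for
illustration): the local DSA masters only its own naming context and hands
back referrals for entries mastered elsewhere, so nothing has to be
replicated AU-to-US.

from ldap3 import Server, Connection, ALL

# The client binds to its local DSA only. auto_referrals=True makes
# the client chase any referrals to the DSA that actually masters an
# entry, rather than requiring every DSA to hold the whole tree.
server = Server('ldap://dsa.au.example.com', get_info=ALL)
conn = Connection(server, auto_bind=True, auto_referrals=True)

conn.search('o=example', '(cn=Alan Lloyd)', attributes=['mail'])
print(conn.entries)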

In addition, the real test for a directory is not lots of objects that
replicate - it's the ability to have those objects distributed, as well as
having a distributed service that can handle high user concurrency - e.g.
hundreds of millions of users, all logging on in seconds and all wanting
directory service information... in seconds.
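
Rough arithmetic on that concurrency claim (the population, logon window and
DSA count below are all illustrative assumptions, not measurements):

users = 200_000_000      # assumed logon population
window_s = 30 * 60       # assume logons spread over a 30-minute peak
dsas = 500               # assume 500 DSAs sharing the load

total = users / window_s
per_dsa = total / dsas
print(f"{total:,.0f} binds/s overall, ~{per_dsa:,.0f} binds/s per DSA")
# -> 111,111 binds/s overall, ~222 binds/s per DSA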

In terms of design - with multi-LDAP/DAP/DSP/DISP backbone load balancing /
alternate DSA modules, coupled to DSA processes with one or more DBs, etc.
- we can do fault-tolerant, load-balancing, distributed/replicated,
mesh-based global directory services and integrate LDAP servers. So the
read/write requirements are met in the operational system design of the
directory service.
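
The DSP mesh itself is server-side internals, but the client-visible half of
that fault tolerance / load balancing can be sketched with Python's ldap3
library, which lets a client spread operations over a pool of equivalent
DSAs (host names are invented):

from ldap3 import Server, ServerPool, Connection, ROUND_ROBIN

# A pool of interchangeable DSAs, tried round-robin, with failover
# when one is unreachable.
pool = ServerPool(
    [Server('ldap://dsa1.example.com'),
     Server('ldap://dsa2.example.com'),
     Server('ldap://dsa3.example.com')],
    ROUND_ROBIN,
    active=True,    # keep cycling until a server responds
    exhaust=False,  # failed servers stay in the pool for later retry
)
conn = Connection(pool, auto_bind=True)
conn.search('o=example', '(uid=subbu)', attributes=['cn'])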


Replicated systems... one only does this for fault tolerance. If you
replicate names/passwords/certs/CRLs and ACI for millions of entries all
over the world, then one will have a considerable security issue to deal
with.

regards alan

-----Original Message-----
From: Subbu K. K. [mailto:KKSUBRAMANIAM@novell.com] 
Sent: Thursday, June 08, 2000 7:33 PM
To: TGullotta@access360.com; ietf-ldapext@netscape.com;
Martin.Rahm@nokia.com 
Subject: RE: Intense LDAP write operations


Not really. It is just that when you have millions of objects, they tend to
be read much more often than modified. For instance, when a user object is
created, it is read thousands of times before it gets updated.
Operationally, NDS does update its database internally for replication
status etc. The last login time attribute does get updated on every login,
but this is an operational attribute.
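
As a sketch of that read-mostly pattern, using Python's ldap3 library (the
host, the DN and the 'loginTime' attribute name are assumptions for
illustration, not confirmed NDS names):

from ldap3 import Server, Connection, MODIFY_REPLACE

conn = Connection(Server('ldap://nds.example.com'), auto_bind=True)

# The common case: the entry is read on every authentication...
conn.search('o=example', '(cn=jsmith)', attributes=['cn', 'mail'])

# ...while the only routine write is one operational attribute.
# 'loginTime' is an assumed attribute name here.
conn.modify('cn=jsmith,ou=users,o=example',
            {'loginTime': [(MODIFY_REPLACE, ['20000609114500Z'])]})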

NDS itself uses a transactional database to ensure full integrity, and
incremental replication to ensure scalability. It has been tested to hold
more than 1 billion objects on a single E450. As installed, its database
takes less than a megabyte.

Subbu K. K.

-------------------
K. K. Subramaniam
Product Manager, NDS eDirectory (UNIX/Linux)
Novell Software, Bangalore
Ph: +91 (80) 572-1856 x2212

>>> <TGullotta@access360.com> 06/07/00 09:08PM >>>
Subbu,

Has Novell done any benchmarks on read vs. write operations? It seems that
in a lot of directory implementations, although not every entry is being
updated that frequently, if you have a million or so users, just periodic
updates to entries could add up when you look at the total number of users.
I would think this is realistic. Is it a problem with NDS?
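
Back-of-the-envelope on how those updates add up (the once-a-day update
frequency is a purely assumed figure):

users = 1_000_000
updates_per_user_per_day = 1          # assumed update frequency
seconds_per_day = 24 * 60 * 60

writes_per_sec = users * updates_per_user_per_day / seconds_per_day
print(f"~{writes_per_sec:.0f} writes/s on average")   # ~12 writes/s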

Tony Gullotta
Lead System Architect
access360


-----Original Message-----
From: Subbu K. K. [mailto:KKSUBRAMANIAM@novell.com] 
Sent: Wednesday, June 07, 2000 12:15 AM
To: ietf-ldapext@netscape.com; Martin.Rahm@nokia.com 
Subject: Re: Intense LDAP write operations


I don't think this was the sweet spot planned for directories. Writes entail
applying locks on the data store, which impacts regular search, retrieval
and any sync operations. You also need to deal with the atomicity of updates
to a group of attributes, and with the latency of propagating changes across
multiple copies of the data store. A transactional database may be a better
choice.
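
One nuance on atomicity: a single LDAP Modify on one entry may carry several
attribute changes, and RFC 2251 requires the server to apply them as a
single atomic operation; what LDAP does not promise is atomicity across
multiple entries or across replicas. A sketch with Python's ldap3 library
(host, DN and values are illustrative):

from ldap3 import Server, Connection, MODIFY_REPLACE

conn = Connection(Server('ldap://dir.example.com'), auto_bind=True)

# Both changes travel in one Modify request, so a reader never sees
# the entry with only one of them applied. Cross-entry or
# cross-replica atomicity is a different matter entirely.
conn.modify('cn=jsmith,ou=users,o=example', {
    'telephoneNumber': [(MODIFY_REPLACE, ['+1 555 0100'])],
    'mail':            [(MODIFY_REPLACE, ['jsmith@example.com'])],
})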

Are you sure you need a directory service and not a transactional database
engine that supports LDAP in addition to SQL?

Subbu K. K.
---------------------------------------------
K. K. Subramaniam
Product Manager, Novell Directory Services (UNIX)
Novell

>>> <Martin.Rahm@nokia.com> 06/07/00 12:27PM >>>
Hi,

I am working on a problem where LDAP would be used to keep a directory of
user data. One concern I have is that LDAP is supposed to be less efficient
when it comes to intense write operations.

Can LDAP handle millions of users' data in a directory where some of the
data (a few attributes) needs to be updated very frequently? How would that
affect performance, and is LDAP still effective when the searched data must
be returned promptly, with very little delay?

If anyone has any comments on this, I would appreciate it,

Martin Rahm