Some openldap fixes...
In the past few months, Peter Zijlstra <email@example.com> and I have
been busy extending and sometimes rewriting openldap 1.2.11 in order
to make it faster and more stable.
We are both employees of Freeler, a large free ISP in the Netherlands.
After we successfully implemented our authentication database using
openldap, we came up with the idea of using openldap for our mail
backends. We quickly found out that openldap was definitely not able
to cope with the number of modifications to the database which would
be necessary to operate such a database, in terms of both stability
and speed.
Some issues were so simple to fix that we decided to stick with openldap
and not buy some expensive ldap server. Because we started using openldap
(well, our own blend) for other projects as well, the fixes became more
'intrusive'. When openldap 2.0 was released, we noticed that we still
had numerous enhancements which didn't make it into 2.0. After some debate
we have decided to do 'the right thing' ;) and give the patches back
to you. Some stuff is quick 'n dirty, other parts are more readable/usable. For
example, we have only coded for BDB2, because that's the best one anyway.
Our main purpose is to give you a lot of new ideas and suggestions and to
show you where we think openldap should be heading. We feel that openldap
should be able to handle complex queries and a lot of modifications on
large trees and not only the one-level nearly-static business-card databases
it's used for today :)
Because the diff is almost 85k, we'll upload it to your ftp site as
/incoming/ldap-ilab-0917200.patch . We'll now summarize the changes and fixes:
- Removal of flush after key removal. This is WAY faster and not dangerous.
- Because flushing after every key insert was too slow and no write syncing
resulted in data loss, we introduced 'lazy syncing', which only syncs all
databases if the last sync was more than a second ago.
- Added some entries to the connection array. select sometimes returns fd
1024, which leads to a crash because the array goes from 0..1023.
- Imposed an upper limit on the size of cached entries. We'd rather have a lot of
small entries in the cache than a few big ones.
- Reduce priority of the select thread. Otherwise the system will suffer
from thread saturation. The whole connection part is terribly inefficient
anyway when used with threads and will be replaced shortly.
- Added a usleep to cache_find_entry_id in order to avoid excessive cache
mutex locking and run away threads. Right now it's an ugly fix, but the
idea is ok.
- Added an interface option so you can specify the interface you want to
  listen on.
- The pagesize for id2entry is now configurable. If you put larger items
in the ldap db, you'll want to raise this value, but not necessarily
the pagesize for all the db's.
- Not included in the patch, but well worth mentioning is the fact that
you can put extra yieldpoints into the BDB2 code when using a cooperative
mt package. This will greatly enhance responsiveness. If you want more
info on this, please ask.
Peter made the rest of the descriptions:
- removed NEXTID file
* instead use the value of the last key in the id2entry db
- removed the explicit dn index
* couldn't find any use, and the program kept working just fine :)
- fixed search scope:
* modified/removed the filter alteration in (onelevel|subtree)_candidates;
scope enforcement is done in ldbm_back_search anyway.
* fixed that butt-ugly for-loop in ldbm_back_search
* replaced all calls to idl_allids with give_children
id2children.c::ID_BLOCK *give_children( Backend *, Entry *base )
uses the id2children db to construct a list of all id's within the
specified base/scope pair.
- fixed scopelessness of indices:
* changed the index key from: <value>
to: <value>|<string-reversed normalized parent dn>
* altered the search algo such that:
- onelevel: use key: <value>|<string-reversed normalized base dn>
- subtree: use all keys that begin with this key
thus if the onelevel key would be: foobar|ln=c,releerf=o,eno=nc
the subtree query would match every key starting with that prefix.
- remove hard limit on idl block splits, this is VERY annoying when
the db gets a bit bigger ;(
- rewrite cache in order to avoid one mutex for the whole cache
- rewrite str2entry and entry2str so that they use a number of memory
slots. This is to avoid the mutex locking they employ now.
If at first you don't succeed, destroy all evidence that you tried.