
Re: back-bdb IDL limitations



We've tested ranges from 2 million to 20 million entries, and incremental adds are rather inconvenient at that scale. First, we would need to add a flag telling slapadd to skip the first N entries and start adding from some offset into the input LDIF, because manually splitting such a large input file is a pain. Second, we would need slapadd to report exactly where it was in the input when it failed (the entry number may be enough; a byte offset would probably be better). But of course, if the add is interrupted because the entire machine crashed, you may never see that feedback. I guess one question to answer is what kinds of failures you are protecting against, and what each approach costs. For the most part, it is simpler to just wipe out the database and start over.
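
To make the idea concrete, here is a rough sketch (hypothetical, not actual slapadd code) of how the skip flag and the failure report could fit together: scan the blank-line-separated entries of an LDIF stream, skip the first N, and on failure print the entry number and the byte offset where that entry began, so a re-run can pick up from there. process_entry() is a stand-in for the real database add.

    /*
     * Hypothetical sketch of "slapadd -- skip first N entries" plus
     * failure reporting.  Not actual slapd code; process_entry() is
     * a stub for the real add step.
     */
    #include <stdio.h>
    #include <stdlib.h>
    #include <string.h>

    /* Stub: 0 = success, nonzero = failure adding this entry. */
    static int process_entry(const char *ldif, long entry_no)
    {
        (void)ldif; (void)entry_no;
        return 0;
    }

    int main(int argc, char **argv)
    {
        long skip = (argc > 1) ? atol(argv[1]) : 0; /* entries to skip */
        char line[4096], entry[65536];
        size_t elen = 0;
        long entry_no = 0;
        long entry_off = 0;   /* byte offset where the current entry began */
        long pos = 0;         /* running byte offset in the input */

        for (;;) {
            char *got = fgets(line, sizeof(line), stdin);
            int at_eof = (got == NULL);
            int blank = !at_eof && (line[0] == '\n' || line[0] == '\r');

            if (at_eof || blank) {
                if (elen > 0) {   /* a blank line (or EOF) ends an entry */
                    entry_no++;
                    if (entry_no > skip &&
                        process_entry(entry, entry_no) != 0) {
                        fprintf(stderr,
                            "failed at entry %ld (byte offset %ld); "
                            "restart with skip=%ld\n",
                            entry_no, entry_off, entry_no - 1);
                        return 1;
                    }
                    elen = 0;
                }
                if (at_eof)
                    break;
                pos += (long)strlen(line);
                entry_off = pos;  /* next entry starts after the blank line */
                continue;
            }

            {
                size_t llen = strlen(line);
                /* Accumulate the entry text; a real tool would fail
                 * loudly instead of truncating an oversized entry. */
                if (elen + llen < sizeof(entry)) {
                    memcpy(entry + elen, line, llen);
                    elen += llen;
                    entry[elen] = '\0';
                }
                pos += (long)llen;
            }
        }
        return 0;
    }

Of course this only reports the position; as noted above, if the whole machine goes down, that message may never reach you, which is why wipe-and-reload remains the simpler answer.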

Jonghyuk Choi wrote:


Is that because the recovery overhead can be huge? Even so, transactions seem indispensable in situations like incremental slapadd.
- Jong-Hyuk


------------------------
Jong Hyuk Choi
IBM Thomas J. Watson Research Center - Enterprise Linux Group
P. O. Box 218, Yorktown Heights, NY 10598
email: jongchoi@us.ibm.com
(phone) 914-945-3979    (fax) 914-945-4425   TL: 862-3979




*"David Boreham" <david_list@boreham.org>* Sent by: owner-openldap-devel@OpenLDAP.org

03/04/2005 12:06 PM

To: <openldap-devel@OpenLDAP.org>
cc: Subject: Re: back-bdb IDL limitations





2) There should be a non-zero need for transaction-protected operation of the slap tools, to cope with a system failure during a lengthy directory population.
I think you will find that it's almost always faster to simply disable transactions and
re-start the slapadd in the event of a failure.
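
For what it's worth, the cost being avoided here is Berkeley DB's write-ahead logging: with the transaction subsystem enabled every committed add must be logged, while without it there is nothing to recover, you simply reload. A minimal sketch, assuming Berkeley DB 4.x and its documented db_env_create()/DB_ENV->open() interface, of the two environment configurations (the open_env() helper and the /var/openldap-data path are illustrative, not slapd's actual code):

    /*
     * Minimal sketch, assuming Berkeley DB 4.x: a bulk-load
     * environment opened with only the memory pool, versus a fully
     * transactional one.  With no write-ahead log there is nothing
     * to flush per add, but after a crash the database files are in
     * an undefined state and the load must start over.
     */
    #include <db.h>
    #include <stddef.h>

    static DB_ENV *open_env(const char *home, int transactional)
    {
        DB_ENV *env;
        u_int32_t flags = DB_CREATE | DB_INIT_MPOOL;  /* cache only */

        if (transactional)   /* add logging, locking, transactions */
            flags |= DB_INIT_TXN | DB_INIT_LOG | DB_INIT_LOCK;

        if (db_env_create(&env, 0) != 0)
            return NULL;

        if (env->open(env, home, flags, 0600) != 0) {
            env->close(env, 0);
            return NULL;
        }
        return env;
    }

    /* Bulk load: open_env("/var/openldap-data", 0), add everything,
     * close, then reopen transactionally for normal operation. */

The trade-off is exactly the one under discussion: the non-transactional load is much faster, and if it dies partway through, wiping the files and re-running the load is usually still quicker than paying for per-entry durability.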



--
Howard Chu
Chief Architect, Symas Corp.       Director, Highland Sun
http://www.symas.com               http://highlandsun.com/hyc
Symas: Premier OpenSource Development and Support