Re: slapadding a large bdb database
On Fri, Aug 02, 2002 at 10:55:34AM -0700, Howard Chu wrote:
% Sorry to make you go on a wild goose chase. This is probably also due to a
% bug (ITS#1939) in 2.1.3 back-bdb/idl.c. This has been patched in the CVS.
% The symptoms of this bug are that adds fail somewhere around 65535
% entries, just as you've encountered. This is an indication that an index
% slot in an index database (objectclass, most likely) has just been maxed
% out. There is code to convert this index slot from a fixed-size list into
% a range specifier, and that code is broken in 2.1.3. If you can, try
% getting the current idl.c (rev 1.51 as of today) from CVS and rebuild your
% slapd, that should take care of this problem.
That works great, thanks!
I have one more question, though. I slapadded the database in two parts
because the transaction logs grew so large that they filled up the
filesystem. I split the ldif, slapadded the first half, then slapadded the
second. After the second slapadd had finished its recovery, I removed
the old transaction logs (db_archive listed them as freeable). The bdb files
themselves come to about 1GB (roughly the same size as our existing ldbm
database, maybe a little bigger). Does this sound sane?
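For the archives, the split-and-load procedure looked roughly like the sketch
below. The filenames, paths, and per-part entry counts are illustrative, not
what I actually used; the one real constraint is that LDIF entries are
separated by blank lines, so the split has to land on an entry boundary:

```shell
# Toy LDIF with four entries (the real file had ~459k entries):
printf 'dn: cn=a\ncn: a\n\ndn: cn=b\ncn: b\n\ndn: cn=c\ncn: c\n\ndn: cn=d\ncn: d\n' > full.ldif

# Split on entry boundaries (blank lines): the first n entries go to
# part1.ldif, the remainder to part2.ldif.
awk -v n=2 '
    BEGIN { f = "part1.ldif" }
    { print > f }
    /^$/ { if (++c == n) f = "part2.ldif" }
' full.ldif

# Then load each part, clearing freeable transaction logs in between
# (config and database paths are hypothetical):
#   slapadd -f /etc/openldap/slapd.conf -l part1.ldif
#   db_archive -d -h /var/lib/ldap    # remove logs db_archive lists as freeable
#   slapadd -f /etc/openldap/slapd.conf -l part2.ldif
#   db_archive -d -h /var/lib/ldap
```

Clearing the freeable logs between the two loads is what keeps the log
directory from overflowing the filesystem.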
The problem comes when I start slapd for the first time with the new bdb
database. slapd starts and runs recovery just fine. However, I can't find
entries I know should be in the directory (they were in the ldif). I tried
searching on different attrs (guessing at possible index corruption), but
that didn't help. Downed slapd, reindexed with slapindex, brought slapd back
up. Same results. Of the 458,904 entries in the ldif, I can only "find"
31,729 of them.
Also, I was getting lots of:
Aug 3 16:08:17 oh slapindex: bdb(o=frontier): Duplicate data items are not
supported with sorted data
interspersed with some:
Aug 3 16:08:17 oh slapindex: => bdb_dn2id_add: put failed: DB_KEYEXIST:
Key/data pair already exists -30996
while running slapindex.
Did I do something wrong? Have any idea what would cause this?
% And again, for tuning purposes, you'll get better performance if you can
% put the transaction logs on a separate physical disk from the actual
% databases. Bumping up the log buffer size and setting NOSYNC also helps,
% of course.
Due to administrative reasons, writing the transaction logs to a separate
spindle is difficult right now. I just needed something to speed up the ldbm
-> bdb conversion because of the massive amount of writes involved. Once the
conversion is done, we see relatively few writes to the directory, so I'm
not so worried.
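In case it helps anyone else doing a bulk load, my understanding is that the
tuning Howard describes maps onto something like the following DB_CONFIG
fragment (placed in the bdb database directory; the paths and sizes here are
illustrative guesses, not tested values):

```
# DB_CONFIG in the bdb database directory -- values are illustrative
set_lg_dir   /logs/ldap         # transaction logs on a separate spindle
set_lg_bsize 2097152            # larger in-memory log buffer (2MB)
set_flags    DB_TXN_NOSYNC      # don't flush the log at every commit

# Alternatively, NOSYNC can be set with the dbnosync directive in the
# back-bdb database section of slapd.conf.
```

NOSYNC trades durability for speed, so it seems best suited to a one-time
bulk load like this conversion rather than normal operation.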
John Morrissey _o /\ ---- __o
email@example.com _-< \_ / \ ---- < \,
www.horde.net/ __(_)/_(_)________/ \_______(_) /_(_)__