
Re: Index Generation Failed error



Pierangelo Masarati wrote:
Suhel Momin wrote:
I have made changes to keep the log in memory using the DB_LOG_INMEMORY flag.
The log buffer size is set to 10MB.
Now when I try to add 64K entries, 65526 entries get added but the 65527th entry always fails.
The error I get is "Index Generation Failed".
I tried to debug the issue and found that the cursor->c_del call in bdb_idl_insert_key fails, returning the error code DB_LOG_BUFFER_FULL.


Am I missing something or is this a known problem?
Do I need to do anything more while keeping logs in memory?

Have you read the Berkeley DB documentation about DB_LOG_INMEMORY? This is the expected result of creating an in-memory log buffer smaller than the largest transaction you want to run. Either use a larger buffer, or write smaller chunks. And read the docs <http://www.oracle.com/technology/documentation/berkeley-db/db/articles/inmemory/C/index.html>.

In this case they cannot write smaller chunks; it's purely a function of how we generate indexes. When an index slot grows to about 65536 elements, we delete that list and replace it with a 3-element range. That delete operation consumes a great deal of log space, because every duplicate record under the key is deleted in a single transaction.
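The list-to-range collapse can be sketched roughly as follows. This is a simplified illustration, not OpenLDAP's actual bdb_idl_* code: IDL_MAX, NOID, and idl_to_range are stand-in names. The point is the encoding: a counted, sorted ID list becomes a 3-element range {NOID, lo, hi} once it passes the cap.

```c
/* Illustrative sketch only -- names are simplified stand-ins, not
 * OpenLDAP's actual symbols.  An index slot holds a counted, sorted
 * ID list: ids[0] = count, ids[1..count] = the IDs.  Past the cap
 * (~64K) the list is collapsed into a range.  On disk, the collapse
 * first deletes every duplicate record under the key; each delete
 * writes a log record, which is what overflows a small in-memory
 * log buffer. */
#define IDL_MAX 65536UL
#define NOID    0UL

typedef unsigned long ID;

/* Collapse a counted ID list in place into a 3-element range. */
void idl_to_range(ID *ids)
{
    ID lo = ids[1];          /* smallest ID in the sorted list */
    ID hi = ids[ids[0]];     /* largest ID in the sorted list */
    ids[0] = NOID;           /* NOID in slot 0 marks a range */
    ids[1] = lo;
    ids[2] = hi;
}
```

After the collapse, membership tests only compare against lo and hi, so the slot never grows again; the cost is paid once, in that one large delete.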


The solution of course is to use a larger buffer.
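For back-bdb that would mean raising the log buffer size in the database directory's DB_CONFIG file. The directives below are standard Berkeley DB ones; the 32MB figure is only an example, not a tuned recommendation:

```
set_flags DB_LOG_INMEMORY
set_lg_bsize 33554432
```

set_lg_bsize takes the buffer size in bytes; anything comfortably larger than the biggest single transaction (here, the ~64K-record index delete) should avoid DB_LOG_BUFFER_FULL.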
--
  -- Howard Chu
  Chief Architect, Symas Corp.  http://www.symas.com
  Director, Highland Sun        http://highlandsun.com/hyc/
  Chief Architect, OpenLDAP     http://www.openldap.org/project/