RE: purpose of set_lg_regionmax
On Mon, 29 Sep 2003, Howard Chu wrote:
> There really isn't any good explanation.
that sucks 8-/...
> Yes, every attribute that you configure for indexing uses one file to
> store its index. There's also the id2entry and dn2id files which are
always created. What constitutes a "large number" in this case is
> unknown. The best answer I got from Sleepycat is "it depends."
Is there any recommended wisdom in setting the various log related options?
I have seen a wide variety of settings in the mailing list archives but not
necessarily a strong recommendation or explanation as to why they were
chosen. Here is my current DB_CONFIG:
set_cachesize 0 536870912 1
I forgot to mention that when I originally was having problems, it turned
out I was running out of locks. At the time, I was getting no space errors
while trying to load the ldif. Based on the output from db_stat -c, I
increased the maximum number of locks and lock objects. That's when I
started receiving the fatal region errors.
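In DB_CONFIG terms, raising the lock limits looks something like the following sketch (the keywords are the standard BDB 4.x ones; the particular values here are illustrative only, not the ones I actually used):

```
# Raise the lock table limits; the BDB defaults are on the
# order of 1000 each. Values below are illustrative only.
set_lk_max_locks 5000
set_lk_max_lockers 5000
set_lk_max_objects 5000
```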
I originally was using the default log size of 10MB and had increased the
log buffer size to 2MB based on a recommendation in either the FAQ or on
the mailing list. Yesterday, I somewhat capriciously increased the log size
to 100MB, the log buffer size to 20MB, and the log region size to 40MB. I'm
currently trying to load the ldif again to see what happens.
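For reference, the log settings described above would come out to something like this in DB_CONFIG (sizes in bytes; I'm assuming the standard BDB DB_CONFIG keywords):

```
set_cachesize 0 536870912 1
# 100MB per log file (the default is 10MB)
set_lg_max 104857600
# 20MB in-memory log buffer
set_lg_bsize 20971520
# 40MB log region
set_lg_regionmax 41943040
```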
I would really prefer to have a better understanding of the rationale for
tuning the log related variables. What are the pros/cons of increasing the
log size? The only thing the Sleepycat documentation really mentions is
that making the log files bigger means it takes longer to reach the
maximum number of log files. I am actually using slapcat for backups
rather than trying to back up the actual database files, so any
backup-related issues are immaterial. I would prefer settings that
maximize performance and reliability. The server has 2GB of memory
dedicated to LDAP; is there any harm in setting the log buffer and log
region sizes relatively high?
I didn't see any db_stat options that seemed applicable to the log region
other than -l, which lists the log region size (output as 60MB even though
I set the maximum to 40MB?). Is there any way to tell what the current
utilization of the log region is, for tuning purposes?
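Concretely, checking these counters looks like the following (assuming the database environment lives in /var/lib/ldap; adjust -h to your actual directory):

```
# Lock statistics -- this is what showed the lock exhaustion
db_stat -c -h /var/lib/ldap
# Log statistics, including the reported log region size
db_stat -l -h /var/lib/ldap
```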
> Whenever you start making wholesale changes like deleting and recreating
> raw database files, you really really really need to run db_recover
I completely deleted all of the .bdb files, the __db files, and the log
files before running slapadd, so I was essentially creating the database
from scratch, which is why I didn't need to run db_recover.
> But you can certainly run into situations where things fail because
> you've run out of BDB cache. In that case, checking db_stat -m will tell
> you some useful things about your cache state, and you'll probably want
> to run db_recover to destroy the existing cache so you can configure a
> larger one.
I was running db_stat -c every 30 seconds, and nothing appeared out of the
ordinary until it began to fail due to fatal region errors...
Berkeley DB configuration has a little bit too much black magic in it.
Paul B. Henson | (909) 979-6361 | http://www.csupomona.edu/~henson/
Operating Systems and Network Analyst | email@example.com
California State Polytechnic University | Pomona CA 91768