
Re: HELP: Continuing Openldap Database Corruption!



Lee,

Here is a script I wrote that does exactly that, but you need db-4.2 because of a bug in reporting subdatabase statistics. If you want this to work with db-4.1, you need the patch listed in this email:

http://www.openldap.org/lists/openldap-software/200311/msg00479.html

schu

Lee wrote:
That is an excellent explanation. Since this is so easily calculated, why not include it as either:

a) As part of the OpenLDAP daemon, such that if the database type is bdb, slapd runs db_stat -m on startup, calculates the appropriate values, checks that the system has something like twice that much memory, and generates a DB_CONFIG. This could even be an option in slapd.conf (e.g. AUTOTUNE_BDB = 1)

b) As an included shell script (e.g. openldap_bdb_tune)?

Thank you for all the help.

Lee

I have the cache set to 15MB. Besides using 15MB of memory, is there any downside to it being this big (i.e. if the cache is huge, does performance degrade because it takes longer to search the entire cache)?
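
For reference, a 15MB cache in DB_CONFIG would look something like this (a minimal sketch; the path assumes the default back-bdb data directory, and set_cachesize takes gigabytes, bytes, and the number of cache regions):

# Sketch: request a single 15MB cache region
# (assumes the default data directory; adjust the path for your install)
cat > /usr/local/var/openldap-data/DB_CONFIG <<'EOF'
set_cachesize 0 15728640 1
EOF
# Run db_recover afterwards so the new size takes effect (see below).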


On Dec 16, 2003, at 8:54 AM, Andreas wrote:

On Tue, Dec 16, 2003 at 08:38:23AM -0500, Frank Swasey wrote:

Check with db_stat -m if it is enough. Also remember that, if this file
is created after the database, this setting has no effect.


But a db_recover is sufficient to rebuild the environment, so the setting does take effect.
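
For example (a sketch, assuming the default data directory and that slapd is stopped while you run it):

# Rebuild the environment so a new or changed DB_CONFIG is picked up
db_recover -h /usr/local/var/openldap-data
# Then, after slapd has been serving traffic for a while, check the
# memory pool statistics to see whether the cache is large enough
db_stat -m -h /usr/local/var/openldap-data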


Check out this excellent post by Howard:

http://www.openldap.org/lists/openldap-software/200311/msg00469.html

#!/bin/sh
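# Estimate the BDB cache size needed to hold the hot index structures:
# the btree internal pages of dn2id and id2entry, plus the hash buckets,
# overflow pages, and duplicate pages of every attribute index, all
# multiplied by the underlying page size.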

DATADIR="/usr/local/var/openldap-data"
DB_STAT="/usr/local/BerkeleyDB.4.1/bin/db_stat"

INDEXMEM=0
TOTALHASHBUCKETS=0
TOTALOVERFLOWPAGES=0
TOTALDUPLICATEPAGES=0

# dn2id and id2entry are btrees; their internal (non-leaf) pages should fit in the cache
DN2INTERNALPAGES=`$DB_STAT -d $DATADIR/dn2id.bdb \
  | grep "Number of tree internal pages" \
  | awk '{ print $1 }'`

DN2PAGESIZE=`$DB_STAT -d $DATADIR/dn2id.bdb \
  | grep "Underlying database page size" \
  | awk '{ print $1 }'`

ID2INTERNALPAGES=`$DB_STAT -d $DATADIR/id2entry.bdb \
  | grep "Number of tree internal pages" \
  | awk '{ print $1 }'`

ID2PAGESIZE=`$DB_STAT -d $DATADIR/id2entry.bdb \
  | grep "Underlying database page size" \
  | awk '{ print $1 }'`


INDEXMEM=`expr $INDEXMEM + \( $DN2INTERNALPAGES \* $DN2PAGESIZE \)`
INDEXMEM=`expr $INDEXMEM + \( $ID2INTERNALPAGES \* $ID2PAGESIZE \)`

INTERNALPAGESSIZE=$INDEXMEM
INTERNALPAGES=`expr $DN2INTERNALPAGES + $ID2INTERNALPAGES`


# The remaining files in the environment are the attribute indexes (hash
# databases); list them from the memory pool stats, skipping dn2id and id2entry.
DATABASES=`$DB_STAT -m -h $DATADIR | grep -v "dn2id" | grep -v "id2entry" \
  | awk '/^Pool File:/ {print $3}' \
  | awk -F. '{print $1}'`

for DATABASE in $DATABASES; do

  HASHBUCKETS=`$DB_STAT -d $DATADIR/$DATABASE.bdb -s $DATABASE \
    | grep "Number of hash buckets" \
    | awk '{ print $1 }'`

  OVERFLOWPAGES=`$DB_STAT -d $DATADIR/$DATABASE.bdb -s $DATABASE \
    | grep "Number of bucket overflow pages" \
    | awk '{ print $1 }'`

  DUPLICATEPAGES=`$DB_STAT -d $DATADIR/$DATABASE.bdb -s $DATABASE \
    | grep "Number of duplicate pages" \
    | awk '{ print $1 }'`

  TOTALHASHBUCKETS=`expr $TOTALHASHBUCKETS + $HASHBUCKETS`
  TOTALOVERFLOWPAGES=`expr $TOTALOVERFLOWPAGES + $OVERFLOWPAGES`
  TOTALDUPLICATEPAGES=`expr $TOTALDUPLICATEPAGES + $DUPLICATEPAGES`

  # Note: this assumes every index database uses the same page size as dn2id
  TMP=`expr \( $HASHBUCKETS + $OVERFLOWPAGES + $DUPLICATEPAGES \) \* $DN2PAGESIZE`
  INDEXMEM=`expr $INDEXMEM + $TMP`

done

echo
echo
echo "Number of tree internal pages (dn2id + id2entry): $INTERNALPAGES"
echo "Tree internal pages size (internal pages * pagesize): $INTERNALPAGESSIZE"
echo "Total hash buckets for all indexes: $TOTALHASHBUCKETS"
echo "Total overflow pages for all indexes: $TOTALOVERFLOWPAGES"
echo "Total duplicate pages for all indexes: $TOTALDUPLICATEPAGES"
echo "Cache Size needed (internal pages + hash buckets + overflow pages + duplicate pages ) * pagesize: $INDEXMEM"