
Re: Implementing database tuning checks (2.1.29/db4.2.52.2/bdb)



I have tried to use this script, but when I do I get the following
output:

db_archive: Ignoring log file: /var/lib/ldap/log.0000000001:
unsupported log version 7
(there is one such line for each of the log files)

Minimum BDB cache size
Date: Thu May 20 17:04:00 2004

Hash dbs                             HBk Ovr Dup  PgSz    Cache

Btree dbs                                    IPg  PgSz    Cache

Minimum cache size:                                           0

This is on Debian testing. I have been trying to work through the
FAQ's details by hand, but it is quite difficult, so I would love a
script that automates this.
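
My guess is that slapd_db_archive here was built against a different
BDB release than the one that wrote these log files (version 7 is, I
believe, the BDB 4.1 log format, while 4.2 writes version 8). Since
every log file gets ignored, "db_archive -s" returns no database
names, the script's main loop never runs, and the report comes out
empty with a zero total. All of the BDB command-line utilities accept
-V to print the library version they were built against:

    /usr/sbin/slapd_db_archive -V

Comparing that with the BDB library slapd is actually linked against
should confirm the mismatch.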

Micah


On Thu, 22 Apr 2004, Kirk A. Turner-Rustin wrote:

> On Thu, 22 Apr 2004, Buchan Milne wrote:
> 
> > I have implemented parts of the tuning recommendations in 
> > http://www.openldap.org/faq/data/cache/191.html using a script to report 
> > the suggested minimum cache size.
> > 
> [snip]
> > 
> > How does one determine (i.e. which options to db_stat, and which
> > resulting values) the number of hash buckets, overflow pages and
> > duplicate pages for an index file?
> 
> This is how I interpreted Howard's post. Change the values of
> $db_stat, $db_config_home, and $db_archive to taste. It works with
> OpenLDAP 2.1.25 and BDB 4.2.52 on Red Hat Linux 9.0. Sorry if any
> of the below gets word-wrapped in your mailer.
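> 
> In short: for each btree db the suggested minimum is (internal pages
> + 1) * page size, and for each hash db it is (hash buckets + bucket
> overflow pages + duplicate pages) * page size / 2. For example, a
> btree index with 3 internal pages and 4096-byte pages would need
> (3 + 1) * 4096 = 16384 bytes of cache.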
> 
> #!/usr/bin/perl
> use strict;
> use warnings;
> my $db_config_home = "/var/lib/ldap";
> my $db_stat = "/usr/sbin/slapd_db_stat -h $db_config_home";
> my $db_archive = "/usr/sbin/slapd_db_archive -h $db_config_home";
> my @all_dbs = `$db_archive -s`; chomp(@all_dbs);
> my (
>     @stats,
>     %d_btree,
>     %d_hash
> );
> #-----------------------+
> # Get stats for each db |
> #-----------------------+
> foreach my $db (@all_dbs) {
>     @stats = `$db_stat -d $db`;
>     chomp(@stats);
>     # Btree: cache all internal pages plus one leaf page.
>     if ($stats[0] =~ /Btree/) {
>         $d_btree{$db}{"page_size"}      = (split(/\s/, $stats[4]))[0];
>         $d_btree{$db}{"internal_pages"} = (split(/\s/, $stats[8]))[0];
>         $d_btree{$db}{"cache_size"}     =
>             ($d_btree{$db}{"internal_pages"} + 1) *
> 		$d_btree{$db}{"page_size"};
>     }
>     else {
>         # Hash: half of (buckets + bucket overflow + duplicate pages).
>         $d_hash{$db}{"page_size"}      = (split(/\s/, $stats[3]))[0];
>         $d_hash{$db}{"hash_buckets"}   = (split(/\s/, $stats[7]))[0];
>         $d_hash{$db}{"bkt_of_pages"}   = (split(/\s/, $stats[11]))[0];
>         $d_hash{$db}{"dup_pages"}      = (split(/\s/, $stats[13]))[0];
>         $d_hash{$db}{"cache_size"}     =
>             (($d_hash{$db}{"hash_buckets"} +
> 		$d_hash{$db}{"bkt_of_pages"} +
> 		    $d_hash{$db}{"dup_pages"}) *
> 			$d_hash{$db}{"page_size"})/2;
>     }
> }
> #-----------------------+
> # Write stats to stdout |
> #-----------------------+
> my $total_cache_size = 0;   # stays 0 if db_archive finds no databases
> print "Minimum BDB cache size\n";
> print "Date: " . scalar(localtime()) . "\n";
> print "\n";
> printf (
>         "%-35s  %3s %3s %3s %5s  %7s\n",
>         "Hash dbs", "HBk", "Ovr", "Dup", "PgSz", "Cache"
> );
> while (my ($dbname, $h_ref) = each (%d_hash)) {
>     printf (
>         "%-35s  %3d %3d %3d %5d  %7d\n",
>         $dbname,
>         $$h_ref{'hash_buckets'},
>         $$h_ref{'bkt_of_pages'},
>         $$h_ref{'dup_pages'},
>         $$h_ref{'page_size'},
>         $$h_ref{'cache_size'}
>     );
>     $total_cache_size += $$h_ref{'cache_size'};
> }
> print "\n";
> printf ("%-44s %3s %5s  %7s\n", "Btrieve dbs", "IPg", "PgSz", "Cache");
> while (my ($dbname, $h_ref) = each (%d_btree)) {
>     printf (
>         "%-44s %3s %5s  %7s\n",
>         $dbname,
>         $$h_ref{'internal_pages'},
>         $$h_ref{'page_size'},
>         $$h_ref{'cache_size'}
>     );
>     $total_cache_size += $$h_ref{'cache_size'};
> }
> print "\n";
> printf ("%-55s %7d\n", "Minimum cache size:", $total_cache_size);
> exit;
> 
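> If the db_stat line positions differ on your system (they can shift
> between BDB releases, which would silently break the hardcoded
> $stats[] indexes above), matching on the description text of each
> line is more robust. Here is a minimal sketch along the same lines;
> the label strings are the ones my 4.2 db_stat prints, so check them
> against your own output before trusting the numbers:
> 
> #!/usr/bin/perl
> use strict;
> use warnings;
> my $db_config_home = "/var/lib/ldap";
> my $db_stat    = "/usr/sbin/slapd_db_stat -h $db_config_home";
> my $db_archive = "/usr/sbin/slapd_db_archive -h $db_config_home";
> my $total = 0;
> for my $db (`$db_archive -s`) {
>     chomp $db;
>     # Index every "value <whitespace> description" line by description.
>     my %stat;
>     for (`$db_stat -d $db`) {
>         $stat{$2} = $1 if /^(\d+)\s+(.+?)\s*$/;
>     }
>     my $pgsz = $stat{"Underlying database page size"};
>     if (exists $stat{"Number of tree internal pages"}) {
>         # Btree: all internal pages plus one leaf page.
>         $total += ($stat{"Number of tree internal pages"} + 1) * $pgsz;
>     }
>     else {
>         # Hash: half of (buckets + bucket overflow + duplicate pages).
>         $total += ($stat{"Number of hash buckets"} +
>                    $stat{"Number of bucket overflow pages"} +
>                    $stat{"Number of duplicate pages"}) * $pgsz / 2;
>     }
> }
> print "Minimum cache size: $total\n";
> 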
> -- 
> Kirk Turner-Rustin
> Information Systems
> Ohio Wesleyan University
> http://www.owu.edu
> ktrustin@owu.edu
>