
Re: OpenLDAP 2.1.3 - Berkeley BDB 4.0 database backup inconsistencies



Mon, 2002-07-29 at 20:37, Peter A. Savitch wrote:

> TE> My experience is that, even by following the docs for deleting Berkeley
> TE> DB 4.0/OpenLDAP 2.1.3 log files and creating new ones with db_checkpoint,
> TE> it just does not work. Newly created logfiles give unrecoverable errors
> TE> with slapd, even with a minimum (often more) of 2 db_checkpoints in
> TE> advance and a db_recover -c. Punctum finale.

> Are you sure you can run db_checkpoint safely? BDB is an `embedded'
> DB, so I'm not sure all `destructive' tools are guaranteed to be
> `consistent' across processes - I mean external programs, as opposed
> to slapd's own threads. I guess the (only) way is to use the
> slapd-bdb(5) `checkpoint' directive and the BDB db_archive(1) tool.

Peter,

This is a bit long - see the end of my message.

> Since db_archive detects log files not being in use I consider it
> `safe'. So, I ask You to carry out the following experiment (I'm
> interested in that, too):

> Set the BDB backend `checkpoint' option:

> database bdb
> ...
> checkpoint 1024 0

O.k., did it - though with "checkpoint 256 0" to save time; this is a
test installation with relatively little change. Until now I had been
running "db_checkpoint -1" at uneven intervals.

> By that I expect the BDB backend to checkpoint its log every 1024k,
> regardless of time. Then, set up some kind of

> /usr/bin/db_archive -a -h /path/to/ldap/bdb | xargs gzip -9

That's no good; it doesn't do anything useful. db_archive is not an
archiving tool: it just writes information to stdout, and while one can
pipe that output elsewhere (with -l or -s as needed), the output is
file names, not something to feed to gzip.
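
To be concrete about what it does and doesn't do - everything below
only prints file names to stdout (paths as in Peter's example):

# log files BDB no longer needs for recovery (-a = absolute pathnames)
/usr/bin/db_archive -a -h /path/to/ldap/bdb

# -l lists all the log files, -s the database files themselves
/usr/bin/db_archive -l -h /path/to/ldap/bdb
/usr/bin/db_archive -s -h /path/to/ldap/bdb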

> OR
> 
> for log in `/usr/bin/db_archive -a -h /path/to/ldap/bdb`; do
>   /bin/mv -f "${log}" /path/to/backup
> done

I'd rather not :-)
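
If I were to touch the log files at all, I'd sooner gzip copies of them
into the backup directory and leave the originals where BDB put them -
untested, but roughly:

for log in `/usr/bin/db_archive -a -h /path/to/ldap/bdb`; do
  gzip -9 -c "${log}" > /path/to/backup/`basename ${log}`.gz
done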

> to run every hour.

> Does it still ruin your database? Please let me know.

Basically, yes it does.

But, thanks to you, I got the courage to go further. I can now see that
I could write a shell script (no, not Perl) backup routine which could
back up and/or restore the whole database with impunity (including
splitting up, renaming and gzipping old log files), BUT at the moment I
can see no way of getting rid of old log files. When, as an
organization, one has gone on for a year, say, and has 365 gzipped
multi-MB log files, how in the name of all that's good is one supposed
to have room (or time) to restore them all? Deleting old log files
gives "input past end, unrecoverable" kinds of errors, even if one has
db_checkpointed them till one is blue in the face.
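
For what it's worth, the backup half of such a script would be roughly
the following - a sketch only, with the same placeholder paths as
above, and a *.bdb glob that simply matches the backend's database
files on my installation:

#!/bin/sh
# sketch only - all paths are placeholders
DB=/path/to/ldap/bdb
BK=/path/to/backup/`date +%Y%m%d`
mkdir -p "${BK}"

# belt and braces: an LDIF dump, taken with slapd stopped
/usr/sbin/slapcat -l "${BK}/dump.ldif"

# force a checkpoint, then copy the database files themselves
/usr/bin/db_checkpoint -1 -h "${DB}"
cp -p "${DB}"/*.bdb "${BK}"

# ... plus the log-gzipping loop from further up

Pruning the old log files afterwards is where it all falls apart.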

This is *really badly* documented stuff. However, as a sysadmin, I have
more than a few times experienced the absolute necessity of having 100%
reliable backups that would restore data no matter what. Without them
I'd have got the sack and the firm(s) would have gone bankrupt -
neither of which ever happened :c)

Best,

Tony


-- 

Tony Earnshaw

The usefulness of RTFM is vastly overrated.

e-mail:		tonni@billy.demon.nl
www:		http://www.billy.demon.nl
gpg public key:	http://www.billy.demon.nl/tonni.armor

Telephone:	(+31) (0)172 530428
Mobile:		(+31) (0)6 51153356

GPG Fingerprint = 3924 6BF8 A755 DE1A 4AD6 FA2B F7D7 6051 3BE7 B981

