
BDB Backend: How to force transaction log rotation for log archival based on db_archive



Hi,

 

I use OpenLDAP with the Berkeley DB backend and db_archive to move unused transaction logs to a backup location for disaster recovery. The intention is to reduce the data-loss window to N minutes, so I run db_archive every N minutes. If transaction throughput is high enough that the maximum transaction log size is reached regularly, a new transaction log is created, database checkpoints render the old log unused, and the strategy works as expected. If, however, throughput is minimal, the current log is never rotated: archiving only happens once the maximum transaction log size is hit and a checkpoint frees the old log. Until then the recent changes sit in the still-active log, are never archived, and therefore cannot be restored after a disaster.
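
For reference, the archive step boils down to the C-API call behind db_archive, DB_ENV->log_archive(). A minimal sketch of what I run every N minutes might look roughly like this (the environment path, function name and the copy step are placeholders of mine, and error handling is stripped to the bare minimum):

/*
 * Sketch of the periodic archival step: list the log files that
 * Berkeley DB no longer needs for normal recovery (the same set
 * db_archive prints) and copy them to the backup location.
 */
#include <stdio.h>
#include <stdlib.h>
#include <db.h>

int
archive_unused_logs(const char *env_home)
{
    DB_ENV *dbenv;
    char **list, **p;
    int ret;

    if ((ret = db_env_create(&dbenv, 0)) != 0)
        return ret;

    /* Join the existing environment; do not create or recover it here. */
    if ((ret = dbenv->open(dbenv, env_home,
        DB_INIT_LOG | DB_INIT_MPOOL | DB_INIT_TXN, 0)) != 0)
        goto err;

    /*
     * DB_ARCH_ABS returns absolute pathnames of log files that are no
     * longer in use, i.e. safe to archive (and, if desired, remove).
     */
    if ((ret = dbenv->log_archive(dbenv, &list, DB_ARCH_ABS)) != 0)
        goto err;

    if (list != NULL) {
        for (p = list; *p != NULL; ++p)
            printf("would copy %s to the backup location\n", *p); /* placeholder */
        free(list);
    }

err:
    (void)dbenv->close(dbenv, 0);
    return ret;
}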

 

Is there a way to force a transaction log rotation, so that even changes which do not by themselves fill a transaction log get archived at regular intervals? Reducing the maximum transaction log size would improve the situation but not resolve it. All I can come up with is forcing a rotation by writing a custom, appropriately sized log record with the Berkeley DB API call DB_ENV->log_put(), then triggering a checkpoint and calling db_archive, but that still doesn't sound like a production-ready solution.
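
For illustration only, that workaround would look roughly like the sketch below (the function name and padding size are my own placeholders; the padding would have to match the configured maximum log size from set_lg_max, and again this is meant to show the idea, not a production-ready solution):

/*
 * Append a padding record large enough to push the active log over its
 * size limit, force a checkpoint so the previous log file becomes
 * unused, and then let the usual archive step pick it up.
 */
#include <errno.h>
#include <string.h>
#include <stdlib.h>
#include <db.h>

int
force_log_rotation(DB_ENV *dbenv, u_int32_t pad_size)
{
    DBT rec;
    DB_LSN lsn;
    int ret;

    memset(&rec, 0, sizeof(rec));
    rec.data = calloc(1, pad_size);   /* dummy payload, contents unused */
    rec.size = pad_size;
    if (rec.data == NULL)
        return ENOMEM;

    /* Append the padding record and flush it to stable storage. */
    if ((ret = dbenv->log_put(dbenv, &lsn, &rec, DB_FLUSH)) != 0)
        goto err;

    /* Force a checkpoint so the now-full previous log becomes unused. */
    ret = dbenv->txn_checkpoint(dbenv, 0, 0, DB_FORCE);

err:
    free(rec.data);
    return ret;
}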

 

Thanks in advance for any additional info on this subject!

 

Cheers,

 

Horst