With the current slurpd/slapd replication, how do we ensure that a slave slapd's data store (Berkeley DB) stays in sync with the master's if we unplug that slave from the cluster and plug it back in after a few hours of maintenance? I am aware of the rejection logs, but with this kind of replication there is no way to guarantee that updates are applied in order unless each transaction carries a unique sequence number. Since I have to manually replay the rejection file using one-shot mode, how do I guarantee that, when I add the failed slapd back to the cluster, the most recent update will not be overwritten by an older change replayed from the reject log?
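For reference, this is roughly how I am replaying the reject file; the paths and replica hostname are just examples from my setup (-o is slurpd's one-shot mode and -r points it at the log/reject file to replay):

    # stop the normally running slurpd first, then replay the rejects once
    slurpd -f /etc/openldap/slapd.conf \
           -r /var/openldap/slurpd/replica/slave.example.com:389.rej \
           -o

My worry is exactly about what happens between stopping that replay and re-adding the slave: any change that reached the slave through normal replication in the meantime could be clobbered by an older entry from the reject file.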
Has anyone run into similar problems, and is this scenario accounted for?