
Re: trouble when switching from bdb to mdb database



Well, it looks like using a single user for replication is a bad idea with MDB.

debug log:
slapd[23170]: do_bind: version=3 dn="cn=repmgr,ou=ldapusers,o=test1" method=128
slapd[23170]: daemon: epoll: listen=7 active_threads=0 tvp=zero
slapd[23170]: => mdb_entry_get: ndn: "cn=repmgr,ou=ldapusers,o=test1"
slapd[23170]: daemon: epoll: listen=8 active_threads=0 tvp=zero
slapd[23170]: => mdb_entry_get: oc: "(null)", at: "(null)"
slapd[23170]: daemon: epoll: listen=9 active_threads=0 tvp=zero

After this, strace shows the mentioned 'Assertion failed'.
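
For reference, this is the shape of syncrepl stanza assumed behind the log above, with every master binding as the same cn=repmgr identity; the rid, provider URL, retry schedule and credentials are placeholders, not the real configuration:

    syncrepl rid=001
             provider=ldap://master1.example.com
             type=refreshAndPersist
             retry="30 +"
             searchbase="o=test1"
             bindmethod=simple
             binddn="cn=repmgr,ou=ldapusers,o=test1"
             credentials=secret
    mirrormode on

Each master would carry a stanza like this (with a distinct rid) for every other provider.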

On 13 Nov 2013, at 22:17, Aleksander Dzierżanowski <olo@e-lista.pl> wrote:

> Hi.
> 
> I have a properly running setup of three multimaster OpenLDAP servers (version 2.4.36 from the ltb project) with the bdb database backend. Everything was working flawlessly, so I decided to try out the 'new shiny' mdb database with the same configuration - the only thing I changed was removing the 'cache' settings and adding 'maxsize'.
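> 
> Roughly, the per-database change meant here looks like this (the values below are only illustrative, not the real ones):
> 
>     # before: bdb backend with its cache settings
>     database     bdb
>     suffix       "o=test1"
>     cachesize    10000
>     idlcachesize 30000
> 
>     # after: mdb backend, cache settings dropped, map size added
>     database mdb
>     suffix   "o=test1"
>     maxsize  1073741824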
> 
> What I'm doing and observing:
> - clear all config and databases on all masters. Generate a new configuration from slapd.conf using the 'slaptest' tool (commands are sketched after this list).
> - on master1 I add three base organizations, let's say o=test1 + o=test2 + o=test3, using slapadd [without the -w switch]
> - on master1 I add some entries using the ldapadd command, so all organizations now have a contextCSN attribute.
> - starting master1 - everything OK
> - starting master2 - everything OK, including successful replication from master1
> - starting master3 - everything OK, including replication, but… some or all of the other masters die unexpectedly.
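> 
> A sketch of the commands meant above (paths, LDIF file names and the bind DN are only illustrative):
> 
>     # regenerate cn=config from slapd.conf
>     slaptest -f /etc/openldap/slapd.conf -F /etc/openldap/slapd.d
> 
>     # load a base organization offline, without -w (same for o=test2 and o=test3)
>     slapadd -b o=test1 -l base-org1.ldif
> 
>     # add further entries online, after which contextCSN appears
>     ldapadd -x -H ldap://master1 -D "cn=manager,o=test1" -W -f entries.ldif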
> 
> strace of the dying process shows:
> ---
> write(2, "slapd: id2entry.c:509: mdb_opinfo_get: Assertion `!rc' failed.\n", 63) = 63
> …
> 
> debug log last lines:
> …
> => mdb_entry_get: ndn: "o=test1"
> => mdb_entry_get: oc: "(null)", at: "contextCSN"
> …
> 
> But when I do 'slapcat' I can clearly see contextCSN for all o=test[123] databases...
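> 
> For example, a check along these lines per database:
> 
>     slapcat -b o=test1 | grep contextCSN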
> 
> Is it a bug or some possible replication configuration issue?
> …
> Olo
>