
Re: (ITS#4702) out-of-memory on huge DB ?

> > slapcat -f slapd.conf
> So you understand that without the -l flag to slapcat, you are
> writing to stdout instead of a file?  Is it possible that is what is
> causing slapcat to crash?

I used slapcat without -l for two reasons:

Usually I pipe slapcat's stdout into gzip, so the LDIF is compressed
in real time:

slapcat -f slapd.conf | gzip > out.ldif.gz

and also because, with a 32-bit slapcat, writing to stdout avoids the
2 GB file-size limit on the output LDIF.
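For reference, the pipeline plus a post-hoc integrity check (a minimal sketch; `gzip -t` only verifies the archive, the output filename is just an example):

```shell
# Stream slapcat's stdout through gzip so no uncompressed LDIF file
# ever exists on disk; only the compressed file is written.
slapcat -f slapd.conf | gzip > out.ldif.gz

# Sanity-check the compressed archive afterwards.
gzip -t out.ldif.gz && echo "archive OK"
```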

> > slapcat -f slapd.conf

Here the full command was:

slapcat -f slapd.conf > fulldump.ldif

However, I retried with "-l file" on 32-bit Linux and Solaris (why not?).

On Linux, slapcat stopped when the output file reached 2 GB; there was
still a little memory available, and the allocation rate was similar
to the previous trials.

On Solaris, to speed up the test I set ulimit -d 1024000; slapcat
reached the 2 GB output file without stopping (usually Solaris just
continues, simply writing out the additional data).
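To reproduce that setup, the data-segment cap can be applied in a subshell so it only affects the test run (a sketch; the value is in KB for ulimit -d, and the config path is just my own):

```shell
# Cap the process data segment at ~1 GB so the allocator hits its
# limit sooner; the subshell keeps the limit local to this test run.
(
  ulimit -d 1024000
  slapcat -f slapd.conf -l fulldump.ldif
)
```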

Then, some time later:

ch_malloc of 16392 bytes failed
ch_malloc.c:57: failed assertion `0'
Abort (core dumped)

From a while loop running ps, the last line was:

18246 1652488 1656144 41.1       31:23 09:22:35 slapcat
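The while/ps loop was along these lines (a sketch; the exact ps columns I printed are assumed here, and the option names vary slightly between Linux and Solaris):

```shell
# Print slapcat's VSZ/RSS, CPU usage and accumulated time every 30 s,
# until the process exits and pgrep stops finding it.
while pid=$(pgrep -x slapcat); do
  ps -o pid=,vsz=,rss=,pcpu=,time= -p "$pid"
  sleep 30
done
```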

> On my linux 2.6 system (64 bit), with a 2.5 million entry db, which is:
> ldap00:/var/lib/ldap/slamd/64-2.5mil-db42# du -h *.bdb
> 84M     cn.bdb
> 618M    dn2id.bdb
> 85M     employeeNumber.bdb
> 3.2G    id2entry.bdb
> 3.7M    objectClass.bdb
> 85M     uid.bdb

For these runs I tried a DB like:

ldap@labsLDAP:/TEST_producer/openldap-data> du -h *.bdb

3,0M    testCode.bdb
2,9G    dn2id.bdb
600M    entryCSN.bdb
545M    entryUUID.bdb
15G     id2entry.bdb
1,9M    objectClass.bdb
270M    testId.bdb

> So, that does mean one thing -- It definitely outgrew the 366MB of BDB
> cache defined.  On the other hand, once it reached its max size, it
> stayed steady there until completion was reached.

In my trials, instead, the allocation never stops :(
Perhaps due to the larger DB?