
(ITS#4702) out-of-memory on huge DB ?



Full_Name: Paolo Rossi
Version: 2.3.27
OS: Solaris 8
URL: ftp://ftp.openldap.org/incoming/
Submission from: (NULL) (88.149.168.114)


Hi, while running some tests on a very large DB to see how syncrepl behaves in this
scenario, I've found some strange behavior:

Solaris 8 on 2xUSIII+ 4GB RAM
OpenLDAP 2.3.27
BDB 4.2.52.4

backend hdb

1 provider, 1 consumer, 1 consumer with a filter.
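For reference, the consumer side is configured roughly like this (a minimal
slapd.conf sketch; the hostname, suffix, and credentials are placeholders, not
my real values):

  # syncrepl stanza on the plain consumer (placeholder values)
  syncrepl rid=001
          provider=ldap://provider.example.com:389
          type=refreshAndPersist
          searchbase="dc=example,dc=com"
          scope=sub
          schemachecking=off
          bindmethod=simple
          binddn="cn=syncuser,dc=example,dc=com"
          credentials=secret

  # the filtered consumer adds a narrower filter to the same stanza,
  # e.g. filter="(objectClass=inetOrgPerson)"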

With a 1-million-DN directory holding 2 sub-DNs under each DN, all the systems work
fine. Then I tried a 10-million-DN directory with 3 sub-DNs each (a very big LDAP
tree; the openldap-data dir is about 20GB):

slapadd with -w on the provider: it works.
some ldapsearch queries: they work.

stop the provider,
slapcat the provider to obtain the LDIF for the consumer preload and... boom
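For the record, the steps were roughly these commands (paths and file names are
placeholders):

  # on the provider
  slapadd -f /etc/openldap/slapd.conf -w -l full.ldif    # -w writes the syncrepl contextCSN
  ldapsearch -x -H ldap://localhost -b "dc=example,dc=com" "(cn=test*)"   # spot checks

  # stop slapd, then dump the DB for the consumer preload
  slapcat -f /etc/openldap/slapd.conf -l preload.ldif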

After about 150 minutes of slapcat, memory was exhausted (see this snapshot from top):

PID    USERNAME  SIZE   RES    TIME    CPU     COMMAND
21495  ldap      4072M  3591M  150:07  21.36%  slapd

Memory filled up, then slapd dumped core with these console messages:

ch_malloc of 16392 bytes failed
ch_malloc.c:57: failed assertion `0'
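For context, ch_malloc() is slapd's checked allocator, which aborts on allocation
failure; its failure path is roughly the following (a sketch from memory, not the
exact source), which is why running out of memory ends in a failed assertion and a
core dump:

  #include <assert.h>
  #include <stdio.h>
  #include <stdlib.h>

  /* rough sketch of slapd's ch_malloc(); the real code uses
   * ber_memalloc_x() and the Debug() logging macro instead */
  void *
  ch_malloc( size_t size )
  {
          void *p = malloc( size );

          if ( p == NULL ) {
                  fprintf( stderr, "ch_malloc of %lu bytes failed\n",
                          (unsigned long) size );
                  assert( 0 );    /* the "failed assertion `0'" above */
                  exit( EXIT_FAILURE );
          }
          return p;
  }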

The output LDIF covered about 85% of the full directory.

I tried again, with the same results.


Then I tried to syncrepl the entire DB by turning on the empty consumers (crazy
idea, I know ;) ), but the memory allocated by the provider again reached 4GB and...
boom

core dumped

In slapd.log:

ch_calloc of 1 elems of 80 bytes failed

Second try:
ch_malloc of 16 bytes failed

This seems to be an issue like ITS#4010.

Any ideas?

Regards