Re: restricting slapd memory consumption
Quanah Gibson-Mount wrote:
--On Wednesday, April 02, 2008 10:41 AM +0200 Ralf Narozny wrote:
Yep, as I wrote in my initial mail, we are using 2.3.32 (for testing so far).
The current release is 2.3.41. Why are you using such an old release?
There've been many known problems fixed since that release, including
security issues, and possibly some memory leaks.
We are using that version because it is a project that is not highly
prioritized and the package was compiled long ago. But I have just
initiated an upgrade. Memory leaks are certainly a show stopper.
What version of BDB are you using? Have you applied all the recommended patches?
We are using BDB 4.4.20, with no patches as far as I know. Any
recommended version or necessary patches?
And I wrote that we are using BDB, which is configured to use 4 GB of
shared memory. The only problem I have is that with 1,000,000 entries
configured as entry cache, slapd uses 11 GB out of 16 GB of RAM after the
insert with ldapadd, which means roughly 7 GB goes to the entry cache.
What are the contents of your DB_CONFIG file? What is the output of du -c -h
*.bdb in your database directory? What is your idlcachesize setting?
How many file descriptors do you allocate to slapd?
set_cachesize 4 0 2
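(For reference, set_cachesize takes three arguments: gigabytes, additional bytes, and the number of cache regions, so the line above requests a 4 GB cache split into 2 regions. A fuller DB_CONFIG might look like the sketch below; the log directives are illustrative examples, not settings from this thread.)

```
# DB_CONFIG sketch -- only set_cachesize comes from this thread
# 4 GB cache, 0 extra bytes, split into 2 regions
set_cachesize 4 0 2
# Illustrative transaction-log tuning (example values)
set_lg_bsize 2097152
set_lg_max 10485760
```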
du -c -h *.bdb:
idlcachesize: 0, because we only use it for customer logins, so there
are no searches besides the ones for the customer ID.
If you mean the number of file descriptors ready to be used, it is
currently 4096, but it could be raised if necessary.
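For what it's worth, the soft file-descriptor limit of the shell that starts slapd can be checked and raised along these lines (8192 is just an example value):

```shell
# Show the current soft limit on open file descriptors
ulimit -n
# Raise it before launching slapd (only works up to the hard limit;
# the "|| true" keeps a refusal from aborting a startup script)
ulimit -n 8192 2>/dev/null || true
ulimit -n
```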
Since you are using ldapadd to populate the database (rather than
slapadd), the entry cache is allocated immediately. As I described in
a previous email writeup I did on performance tuning of slapd, the
DB_CONFIG setting is the most important (it should be sized to hold
your entire DB), followed by the entry cache size.
Yep, that is my problem: we will surely need a lot of entries in cache,
because we have more than 2 million customers logging in per day.
Can you tell me the subject of that mail writeup so I can find it in the archives?
The entry cache really only needs to be the size of your active set of
entries, rather than all entries. For example, at a previous
position, even though we had about 500,000 entries in our database,
our entry cache was only 20,000. And it was very performant.
We have 23 million entries, of which more than 2 million are active
entries... we cannot really reduce the size of our database. Maybe by a few
thousand, but not by millions ;-)
Our entries have (in LDIF, of course) an average size of under 200 bytes.
So if roughly 6 GB of the 7 GB used is the entry cache, each entry
consumes about 6 KB of RAM. Is that correct?
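A quick back-of-the-envelope check of that arithmetic (a sketch using the numbers from this thread; the 6 GB entry-cache share is an estimate):

```python
# Figures from the thread: 11 GB resident after the ldapadd run,
# 4 GB of it the BDB shared-memory cache, leaving ~7 GB, of which
# roughly 6 GB is assumed to be the entry cache of 1,000,000 entries.
GiB = 1024 ** 3

entry_cache_bytes = 6 * GiB
cached_entries = 1_000_000

per_entry = entry_cache_bytes / cached_entries
print(f"~{per_entry / 1024:.1f} KiB per cached entry")  # ~6.3 KiB
```

So roughly 6 KB of in-memory overhead per ~200-byte LDIF entry, which is the ratio the question is asking about.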
If so, is there any documentation on how to configure slapd for a
large number of entries like ours?
How many entries do you have?