Re: restricting slapd memory consumption
Buchan Milne schrieb:
On Wed, Apr 2, 2008 at 10:41 AM, Ralf Narozny <firstname.lastname@example.org> wrote:

Buchan Milne wrote:
/me notes that it would be nice to have more detail on the entry
cache, and the amount of entry cache that is used ...

Pierangelo Masarati schrieb:
There is some information about the cache available via back-monitor,
such as the number of entries in the cache.

Ralf Narozny wrote:
I configured slapd to create a monitor, but the information you want
is not present. Maybe I missed something to configure, but the manual
is not too thoroughly written yet ;-)

bash-3.1$ ldapsearch -x -H ldap://:9011 -b 'cn=Databases,cn=Monitor' \
    'objectclass=*' '*' '+'
ldapsearch -D 'cn=root,cn=monitor' -W -b 'cn=Databases,cn=Monitor'
(as far as I understood, this should show all data for the entries
below the search base)

Pierangelo Masarati schrieb:
Well, that information is only available since OpenLDAP 2.4; I infer
you're using an earlier distribution. In any case, the monitor has nothing
to do with the entry cache configuration; it only shows the current usage.
Refer to slapd.conf or back-config for what is configured for your system.
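For reference, the two caches involved are configured separately: slapd's own entry cache (the cachesize directive in the back-bdb database section of slapd.conf) and Berkeley DB's shared-memory cache (set_cachesize in DB_CONFIG in the database directory). A minimal sketch with hypothetical suffix/directory values, roughly matching the numbers discussed in this thread:

```
# slapd.conf (back-bdb section) -- example values, not a recommendation
database        bdb
suffix          "dc=example,dc=com"
directory       /var/lib/ldap
cachesize       1000000       # entries held in slapd's in-memory entry cache

# DB_CONFIG (in the database directory) -- Berkeley DB's own cache
set_cachesize   4 0 1         # gbytes bytes ncache: 4 GB in one region
```

Note that these are independent pools: memory for cached entries comes on top of the BDB shared memory, which is why total resident size exceeds the BDB cache alone.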
Yep, as I wrote in my initial mail, we are using 2.3.32 (for testing, so
far). And I wrote that we are using BDB, which is configured to use 4GB of
shared mem. The only problem I have is that with 1000000 entries configured
as entry cache, slapd uses 11GB out of 16GB of RAM after the insert.
Firstly, what is the problem with slapd using 11 of 16GB ? My
production LDAP servers typically run consuming at least 4GB of the
available 6GB, and that's the way I want it (or, maybe using a tad
more, but leaving enough free to run a slapcat without causing the
server to swap). Unused ram is wasted ram (at least on Unix) ...
No problem, I want to have slapd use about 14GB of the memory, but I'm
not able to predict the RAM slapd uses, that's why I ask :-)
On the other hand, I read about the importance of the BDB cache and that
it should be using most of the available resources. But I cannot raise
it above 4GB, because my machine will start to swap after inserting a
few million entries. And that is really the worst case.
Which makes it use 7GB for entry cache (and whatever else).
Plus the overhead of approx 10MB per-thread, a few kB per file descriptor etc.
Our entries have (in LDIF, of course) an average size of below 200 bytes. So
taking 6GB out of the 7GB used as the size of the entry cache, that would
mean each entry consumes about 6K of RAM. Is that correct?
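That arithmetic can be sketched as follows (the assumption that roughly 6 GB of the 7 GB is entry cache, with the rest going to thread stacks and other overhead, is Ralf's, from the message above):

```shell
# 11 GB resident - 4 GB BDB shared memory = 7 GB; assume ~6 GB of that
# is the entry cache holding the 1,000,000 cached entries.
entry_cache_bytes=$(( 6 * 1024 * 1024 * 1024 ))
entries=1000000
per_entry=$(( entry_cache_bytes / entries ))
echo "${per_entry} bytes per cached entry"   # ~6 KB, vs ~200 bytes in LDIF
```

So yes, roughly a 30x blow-up from the on-disk LDIF size to the in-memory entry, which is the figure questioned here.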
Roughly ... assuming that you are using a decent memory allocator, and
that you have applied the memory leak patches for Berkeley DB (I don't
see that you provided your Berkeley DB version). The glibc memory
allocator is probably going to do quite badly in this specific
scenario (bulk add over the wire); using one of the better allocators
(e.g. hoard, tcmalloc) would probably reduce this value considerably.
Howard has published extensive benchmarks on this ...
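Trying an alternative allocator does not require rebuilding slapd; preloading it at startup is usually enough. A sketch only, since the library path and slapd invocation vary per distribution and are assumptions here:

```
# Illustrative only: start slapd with tcmalloc preloaded instead of
# the glibc allocator. Adjust the library path and flags for your system.
LD_PRELOAD=/usr/lib/libtcmalloc.so /usr/sbin/slapd -f /etc/openldap/slapd.conf -h ldap:///
```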
It is hard to know what info to provide for you to be able to help:
BDB 4.4.20 with no patches.
Linux Kernel 22.214.171.124 SMP
I'm not the one compiling the package, so I mostly have no idea of how to
change anything like that.
If so, is there any documentation on how to configure the slapd for a
larger amount of entries like ours?
Yes, which ones have you read so far?
I searched the OpenLDAP docs and a few pages all over the net, but
since they all assume that about 500k entries is a lot, they do not really
help me with my growing 23 million entries. :-(
The size of the bdb files:
Which will never fit into my 16 GB :-)
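To put a number on that: extrapolating from the roughly 6K of RAM per cached entry mentioned above (an estimate, not a measured figure), a full entry cache over the whole database would need:

```shell
# Estimated RAM for an entry cache holding all 23 million entries,
# at about 6K (6144 bytes) per cached entry.
echo "$(( 23000000 * 6144 / 1024 / 1024 / 1024 )) GB"   # far beyond 16 GB
```

which is why the entry cache can only ever hold a fraction of a database this size, and the BDB cache sizing matters more.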
Anything else you would need?