
Re: Memory consumption issue



Thorsten Kohlhepp wrote:
Pierangelo Masarati wrote:
Thorsten Kohlhepp wrote:
Hi,

Hallvard B Furuseth wrote:
Andrew Findlay writes:
Retrieving 2M entries in a single operation is going to tax any LDAP
server, especially if you do not request paged results. Consider what
it must do:

1) Make a list of every entry ID
2) Retrieve the data for every entry
3) Build a message containing 2M entries
4) Send the message
No, each entry is sent in a separate message.
I also thought each entry would be sent separately, because building a
single message with 2M entries wouldn't make sense, and it would also take
much longer to respond. The first entry of the search is returned
immediately, which indicates that each entry is sent in its own message.

There's no need to experiment. This is clearly indicated in the protocol specification (RFC4511, but it has always been like this).
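As an aside, a client that really must walk the whole tree can at least bound what is in flight by requesting the paged results control (RFC 2696). A sketch with OpenLDAP's ldapsearch, where the host and base DN are made-up values:

    # Page through the tree 1000 entries at a time (example values only)
    ldapsearch -x -H ldap://ldap.example.com -b "dc=example,dc=com" \
        -E pr=1000/noprompt "(objectClass=*)" > all-entries.ldif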


However, OpenLDAP does build a list of all entry IDs to examine and
possibly send, subject to the indexes available for the filters. And it
must read-lock all these entries so that an update operation won't mess
things up while it is sending, and so that updates will be atomic as seen
by the search request.
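To keep that candidate list small, the attributes used in filters should be indexed. A minimal slapd.conf sketch, with illustrative attribute choices:

    # equality index for objectClass, equality/substring for common lookups
    index objectClass   eq
    index cn,sn,mail    eq,sub

Note that a bare presence filter like "(objectClass=*)" still selects every entry, index or not.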


I don't know what BDB does when there are 2M entries to examine though.
Maybe it just gives up and examines all entries, as LDBM did.

The server has 4 GB of RAM and 2 GB of swap, so it will survive even if we
pull the entire tree with ldapsearch. But we would like to put other
services on the same server as well, and those could slow down if LDAP is
already using a lot of memory.

I know doing an ldapsearch "(objectClass=*)" is a bad way to get all
entries,

Too bad there's no other way. If you find any, please let us know.

but I want to make sure that a badly formatted search can't slow
down the entire server by consuming a lot of memory.

If you want to inhibit expensive searches, take a look at the "limits" statement of slapd.conf(5). In particular, consider size.unchecked.
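For illustration, a fragment along these lines (the selectors and numbers are made up, not recommendations) stops anonymous clients from triggering huge candidate sets:

    # give up if the candidate ID set exceeds 10000 entries, and
    # return at most 500 entries to anonymous clients
    limits anonymous size.unchecked=10000 size.soft=100 size.hard=500
    limits users     size.soft=500 size.hard=2000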


Another question why isn't it releasing the used memory after the search
finished?

Depending on the backend and on the database, caching may take place (and should, if you want performance). For details about Berkeley DB caching, see Sleepycat's documentation. For details about back-bdb and back-hdb caching, see cachesize, idlcachesize, and dncachesize in slapd-bdb(5), and <http://www.openldap.org/doc/admin24/tuning.html>.
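By way of example, those caches are configured per database in slapd.conf; the numbers below are arbitrary placeholders, not recommendations:

    database     hdb
    suffix       "dc=example,dc=com"
    cachesize    10000     # entries held in slapd's entry cache
    idlcachesize 30000     # index IDL cache slots
    dncachesize  100000    # cached DNs (back-hdb)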


Of course it will cache the entries, but I defined a cache size of 8.4m, an entry cachesize of 1000, and an idlcachesize of 1000. When the search finishes, slapd consumes 937316 kB. That is way over the cachesize.
What am I doing wrong?

What is 8.4m? 8.4 minutes? The Berkeley DB cache size is expressed by two numbers (plus a segment count): the first counts gigabytes and the second counts bytes. If by "8.4m" you mean 8.4 MB, then your cache is very likely way underestimated.
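Concretely, the BDB cache is normally set through a DB_CONFIG file in the database directory. A sketch assuming a 512 MB cache (the size is an example only):

    # set_cachesize <gbytes> <bytes> <ncache>; 0 GB + 512 MB, one segment
    set_cachesize 0 536870912 1

The database environment must be re-created (e.g. via db_recover) before a changed cache size takes effect.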


An entry cachesize of 1000 means 1000 entries, so it may well amount to lots of kB (or MB) depending on the actual size of your entries (an entry in memory is usually more than twice the size of its textual representation, since all values are stored in both pretty and normalized form, plus overhead).
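To put rough, assumed numbers on it: if an entry's LDIF text averages 4 kB, its in-memory form may run to 8 kB or more, so a 1000-entry cache alone can plausibly account for some 8 MB before counting the BDB cache, the IDL cache, and allocator overhead; and memory freed by slapd is typically retained by the allocator rather than returned to the OS, which is why the footprint does not shrink after the search finishes.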

In any case, if you fear leaks, please do run slapd under valgrind and report any issue. It will help make slapd better.
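Something like the following should work; slapd stays in the foreground when a debug level is given, which is what valgrind needs (the slapd paths and flags are placeholders for your installation):

    valgrind --leak-check=full --num-callers=30 \
        /usr/local/libexec/slapd -d 1 -f /etc/openldap/slapd.conf \
        -h ldap://localhost:389 2> slapd-valgrind.log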

p.


Ing. Pierangelo Masarati OpenLDAP Core Team

SysNet s.r.l.
via Dossi, 8 - 27100 Pavia - ITALIA
http://www.sys-net.it
-----------------------------------
Office:  +39 02 23998309
Mobile:  +39 333 4963172
Fax:     +39 0382 476497
Email:   ando@sys-net.it
-----------------------------------