
Re: openldap profiling tools



Sébastien Georget wrote:
> I did not mention it because I was looking for a general way to find
> what the bottleneck is in an openldap installation. I can stress the
> server and play with the configuration to find which parameters
> improve performance (reduce failed authentications), but it's a long
> and not so easy process. I thought that server-side performance
> measuring tools would have helped in tuning the configuration, with
> information such as:

It sounds like you're asking a pretty general question about code profiling then, not something specific to LDAP or OpenLDAP.


> - time spent processing filters
> - time spent processing ACLs
> - time spent in backend-specific code

On x86 Linux valgrind can be useful here (with the calltree module). On Solaris and Linux I use FunctionCheck 1.5.4 for profiling. (Originally written by Yannick Perret, significantly enhanced by me.)


http://freshmeat.net/projects/fncchk/
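
As a rough sketch of the valgrind approach (assuming the calltree/callgrind
tool is installed for your valgrind; the tool name and paths below are only
examples, adjust for your install): run slapd in the foreground under the
profiler, drive your stress test at it, then annotate the output:

    valgrind --tool=callgrind /usr/local/libexec/slapd -d 0 \
        -f /usr/local/etc/openldap/slapd.conf -h ldap://:389/

    callgrind_annotate callgrind.out.<pid>

Running with -d 0 keeps slapd from detaching, so the profiler sees the whole
process. The annotated output gives per-function costs, which is where the
filter/ACL/backend breakdown shows up. It runs much slower than normal, so
don't read the absolute numbers as throughput.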

> - number of connections rejected
> - peak requests/s
> - ...

> Here is the configuration I am using:
> - openldap 2.2.27 with 8 bdb backends (1 + 7 subordinates)
> - 250 entries, will certainly grow to 2000-3000
>> What sort of entry cache do you have?
> cachesize 2000, defaults for the other parameters. I plan to run tests
> with: idlcachesize, conn_max_pending, timelimit, idletimeout
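
For reference, cachesize and idlcachesize are per-database directives while
the others are global, so in slapd.conf they end up in different sections;
a sketch, with placeholder values only:

    # global section
    conn_max_pending 100
    idletimeout      30
    timelimit        3600

    database         bdb
    suffix           "dc=example,dc=com"
    # entry cache, counted in entries
    cachesize        5000
    # IDL cache, counted in IDL slots; commonly a few times cachesize
    idlcachesize     15000

With only a few hundred (or even a few thousand) entries, an entry cache big
enough to hold everything is cheap and takes that variable out of the tests.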

> Since the bdb files are very small (<200k) I assumed they stay in
> memory and did not look at the DB_CONFIG files; should I?

Probably. Use db_stat -m; that will tell you whether the current (default) settings are working well or not.
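
For example, a minimal DB_CONFIG in each database directory might do no more
than set the BDB cache and log buffer sizes (numbers below are only
placeholders):

    # 5MB environment cache, single segment
    set_cachesize 0 5242880 1
    # larger in-memory log buffer
    set_lg_bsize 262144

Then after running your stress test:

    db_stat -m -h /path/to/database/directory

and look at the "Requested pages found in the cache" percentage near the top
of the output. If it's well below 100%, the BDB cache is too small no matter
how small the files look on disk.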


--
 -- Howard Chu
 Chief Architect, Symas Corp.       Director, Highland Sun
 http://www.symas.com               http://highlandsun.com/hyc
 Symas: Premier OpenSource Development and Support