
Re: openldap profiling tools



Quanah Gibson-Mount wrote:
>> I wonder if there are some tools to 'profile' OpenLDAP to see where
>> the bottleneck is in our current configuration (indexes, backend
>> configuration, ACLs...).
>>
>> I'm also interested in any comments from administrators of such
>> clusters or platforms with 'high' peak rates. 'high' means ~1000
>> requests/s; can OpenLDAP (one server) handle this?
>
> Unfortunately, you leave out some useful information.
> 
> What version of OpenLDAP are you running?
> What database backend are you using?
> What backing software for the database are you using?
> How many entries do you have?
> What sort of entry cache do you have?

I did not mention it because I was looking for a general way to find
the bottleneck in an OpenLDAP installation. I can stress the server and
play with the configuration to find which parameters improve
performance (reduce failed authentications), but it's a long and
not-so-easy process. I thought that server-side performance-measuring
tools would help in tuning the configuration, with information such as:
- time spent processing filters
- time spent processing ACLs
- time spent in backend-specific code
- number of connections rejected
- peak requests/s
- ...
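Some of these counters are exposed by slapd's monitor backend, if your
build includes it; a minimal sketch, assuming a server on localhost and
a rootdn of cn=manager,dc=example,dc=com (both hypothetical):

```
# slapd.conf fragment (assumes slapd was built with --enable-monitor):
database monitor

# Then query the live operation counters over LDAP:
ldapsearch -x -H ldap://localhost -b "cn=Monitor" \
    -D "cn=manager,dc=example,dc=com" -W \
    "(objectClass=monitorCounterObject)" monitorCounter
```

This gives totals per operation type (searches, binds, connections)
rather than time spent in filters or ACLs, so it complements rather
than replaces real profiling.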

Here is the configuration I am using:
- OpenLDAP 2.2.27 with 8 bdb backends (1 superior + 7 subordinates)
- 250 entries, which will certainly grow to 2000-3000
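For context, a superior/subordinate split like this is declared in
slapd.conf roughly as follows; the suffixes are hypothetical, and the
subordinate databases are typically listed before their superior:

```
# slapd.conf sketch (hypothetical suffixes and paths)
database     bdb
suffix       "ou=people,dc=example,dc=com"
directory    /var/lib/ldap/people
subordinate

database     bdb
suffix       "dc=example,dc=com"
directory    /var/lib/ldap/root
```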

> What backing software for the database are you using?
I'm not sure I understand this question.

> What sort of entry cache do you have?
cachesize 2000
defaults for the other parameters
I plan to run tests with: idlcachesize, conn_max_pending, timelimit,
idletimeout
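For reference, the entry and IDL caches are per-database settings in
slapd.conf; a hedged sketch for a directory growing toward 2000-3000
entries (numbers are illustrative, not recommendations):

```
# slapd.conf, inside each bdb database section (illustrative values)
cachesize    3000    # entry cache: large enough to hold the whole DIT
idlcachesize 9000    # IDL cache; ~3x cachesize is a common rule of
                     # thumb for back-bdb
```

(conn_max_pending and idletimeout, by contrast, are global settings,
not per-database ones.)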

Since the bdb files are very small (<200k) I assumed they stay in
memory, so I did not look at the DB_CONFIG files. Should I?
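Probably yes: without a DB_CONFIG file, BDB falls back to its own small
default cache (256 KB) rather than sizing itself to RAM. A minimal
sketch, one per database directory (values are illustrative):

```
# DB_CONFIG in each bdb database directory (illustrative values)
set_cachesize    0 2097152 0   # 2 MB BDB cache, ample for <200k of data
set_lg_regionmax 262144        # log region size
set_lg_bsize     2097152       # in-memory log buffer
```

Note that cache-size changes only take effect when the BDB environment
is recreated (e.g. after db_recover).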

Sébastien
-- 
Sébastien Georget
INRIA Sophia-Antipolis, Service DREAM, B.P. 93
06902 Sophia-Antipolis Cedex, FRANCE
E-mail : sebastien.georget@sophia.inria.fr