
strange performance result


I ran into a strange performance problem while trying to insert a flat list of 10 million entries into openldap (each entry is ~1 KB).  The list has no indexes, since the application always retrieves data based on the DN.  I am running openldap 2.1.16 under RedHat 8.  The following is a list of the problems I am running into.

1.  occasional spikes in data insertion time of 10x or more.

Many data entries are inserted into openldap in under 100 ms.  However, every 6 or so entries, an entry will take 1 to 2 seconds to insert.  This happens even though all the records are exactly the same size.
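In case it helps anyone reproduce this, the spike measurement can be sketched as below.  `add_entry` is just a stand-in parameter for whatever performs the real LDAP add (e.g. a python-ldap `add_s` call); the 10x-over-median threshold matches the ratio I described.

```python
import statistics
import time

def find_spikes(latencies, spike_factor=10.0):
    """Return indices of inserts that took spike_factor x the median or more."""
    median = statistics.median(latencies)
    return [i for i, t in enumerate(latencies) if t >= spike_factor * median]

def time_inserts(add_entry, entries):
    """Time each call to add_entry (a stand-in for the real LDAP add)."""
    latencies = []
    for entry in entries:
        start = time.perf_counter()
        add_entry(entry)
        latencies.append(time.perf_counter() - start)
    return latencies
```

With this I can log exactly which entry numbers are slow and see whether the spikes line up with some periodic server-side event (checkpoints, cache flushes, etc.).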

2.  the CPU and file IO are grossly underutilized during data insertion

The server is a dual-CPU DELL box running Linux (RedHat 8).  CPU utilization seems to peak at 20% and IO utilization at just 15%.  I tried running multiple data insertion processes in parallel, to no avail: the collective throughput stays flat and the CPU and IO utilization stay the same.  There seems to be a common point of contention in the system that is severely limiting the degree of parallelism.  Another thing I noticed is that the CPU and IO utilization seem to degrade gradually as the list gets larger and larger.
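The parallel test I ran can be sketched like this (again `add_entry` is a stand-in; in the real test each worker held its own connection).  If aggregate throughput stays flat as `workers` increases, the contention is on the server side rather than in the client:

```python
import time
from concurrent.futures import ThreadPoolExecutor

def parallel_insert(add_entry, entries, workers=4):
    """Insert entries from several threads; return aggregate entries/sec.

    add_entry is a placeholder for the real per-connection LDAP add.
    """
    entries = list(entries)
    start = time.perf_counter()
    with ThreadPoolExecutor(max_workers=workers) as pool:
        # Drain the iterator so all inserts complete before we stop the clock.
        list(pool.map(add_entry, entries))
    elapsed = time.perf_counter() - start
    return len(entries) / elapsed if elapsed > 0 else float("inf")
```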

I tried tweaking dbnosync and cachesize, but it didn't help.  I am pretty sure the network (100-megabit Ethernet) is not the issue: I ran the data insertion app directly on the machine that runs openldap and got the same result.
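For reference, the settings in question live in slapd.conf, plus the Berkeley DB environment settings in a DB_CONFIG file in the database directory.  A sketch, assuming the bdb backend (the suffix, paths, and sizes below are illustrative, not my actual values):

```
# slapd.conf (bdb backend; sizes are illustrative)
database        bdb
suffix          "dc=example,dc=com"
directory       /var/lib/ldap
cachesize       10000          # entries held in slapd's entry cache
dbnosync                       # don't flush to disk on every write (unsafe on crash)
checkpoint      512 5          # checkpoint every 512 KB of log or 5 minutes

# DB_CONFIG in /var/lib/ldap (Berkeley DB environment)
set_cachesize   0 268435456 1  # 256 MB BDB cache
set_lg_bsize    2097152        # larger transaction log buffer
```

I'd be curious whether the DB_CONFIG cache (as opposed to slapd's own cachesize) makes a difference for anyone with a similar workload.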

Is this because the flat nature of the large list causes the DN lookup table to become the bottleneck?  Would artificially grouping the data into smaller subgroups help?
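By grouping I mean something like the sketch below: hash each entry's id to one of N intermediate nodes so no single level of the tree has millions of siblings.  All the names here (the base suffix, the `ou=bucketNNN` containers) are made up for illustration:

```python
import hashlib

def bucketed_dn(entry_id, base="dc=example,dc=com", buckets=256):
    """Map a flat entry id to a DN under one of `buckets` intermediate nodes.

    The idea is to break one huge flat level into many smaller subtrees;
    the container names are purely illustrative.
    """
    h = int(hashlib.md5(entry_id.encode()).hexdigest(), 16) % buckets
    return "cn=%s,ou=bucket%03d,%s" % (entry_id, h, base)
```

Since the application always knows the id, it could recompute the same bucket on lookup, so reads would still be a single DN-based retrieval.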

Any help is sincerely appreciated.

