Well, since nobody seems to have done any measurements on this issue
so far, I have done some.
The test machine was an Athlon 500 with 512MB of RAM and one IDE hard
disk. The test platform was SUSE Linux 9.0 with Berkeley DB 4.1.25 and
OpenLDAP 2.1.22.
The LDAP database was alone on a freshly created filesystem.
The first step (import) was to slapadd 10,000 objects (into a database
already containing 2 objects). Five attributes were indexed in this
process. Since slapadd uses the same backend routines as the
bdb backend, this should give some impression of write performance
(please correct me if I'm wrong here). I used a DB_CONFIG file
containing only the line set_cachesize 0 134217728 1 (a 128MB cache,
which Berkeley DB inflates by about 25% overhead to roughly 160MB in
one pool). A test without this file was aborted due to my impatience.
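The import setup can be sketched as follows; only the set_cachesize
line is taken from the test, while the database directory and LDIF
file name are placeholders:

```shell
# Write the one-line DB_CONFIG into the BDB database directory
# ("bdb-test" is a placeholder path).
mkdir -p bdb-test
cat > bdb-test/DB_CONFIG <<'EOF'
set_cachesize 0 134217728 1
EOF

# 134217728 bytes = 128MB; Berkeley DB adds ~25% overhead for caches
# below 500MB, which is why db_stat reports roughly 160MB in one pool.
# The timed import itself would then look like (LDIF name assumed):
#   time slapadd -l test-10000.ldif
```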
In the second step the LDAP server was started and 20,000 random search
operations for an indexed attribute (cn) were performed over a 100Mbit
Ethernet LAN. The bdb backend entry cache was left at the default of
1000 entries to simulate an environment where only 10% of the entries
fit into the entry cache.
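A minimal sketch of such a search workload: generate 20000 random
equality filters on the indexed cn attribute. The "userNNNN" naming
scheme, server URL, and base DN below are assumptions, not taken from
the actual test setup.

```shell
# Build 20000 random (cn=...) filters, one per line.
awk 'BEGIN {
  srand(42)
  for (i = 0; i < 20000; i++)
    printf "(cn=user%04d)\n", int(rand() * 10000)
}' > queries.txt

# Each filter would then be issued against the server, e.g.:
#   while read -r f; do
#     ldapsearch -x -H ldap://testserver -b "dc=example,dc=com" "$f" dn
#   done < queries.txt
```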
Observations and Results
During the imports the CPU load was relatively low (below 50% all the
time). A db_stat call after the import showed that the chosen bdb cache
was sufficient for these operations. The hard disk was accessed all the
time, so a faster disk (or separating the write processes onto
different disks) could improve speed, while a faster CPU could not.
The filesystems showed a significant difference in write performance:
the gap between the slowest and the fastest filesystem was more than a
factor of 2. To my surprise, the fastest filesystem in the test was not
the non-journaling ext2 but IBM's JFS. The import was significantly
slower on reiserfs and ext3 than on any of the other filesystems.
For the searches the CPU was a limiting factor. The CPU load was at 100%
nearly all the time and the disk wasn't accessed at all. The network
transfer rate was about 300KB per second. Most probably all
filesystem I/O was done in the OS block buffer cache (which was quite
large). The read performance was approximately the same on all
filesystems. This result may be different if the ratio between main
memory of the system and the size of the database changes.
Filesystem   import     search
ext2         4:20 min   33 s
ext3         6:41 min   33 s
reiserfs     7:09 min   31 s
xfs          4:36 min   30 s
jfs          3:11 min   31 s
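As a sanity check, the throughput implied by the table (20,000
searches per timed run) can be computed with a short shell loop:

```shell
# Searches per second implied by the table: 20000 searches per run,
# divided by the measured wall-clock time in seconds.
for entry in ext2:33 ext3:33 reiserfs:31 xfs:30 jfs:31; do
  fs=${entry%%:*}
  s=${entry##*:}
  printf '%-9s ~%d searches/s\n' "$fs" $((20000 / s))
done
```

All five values land between 600 and 670 searches per second,
consistent with the roughly 650 operations per second quoted below.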
The differences in write performance should not be overestimated:
proper tuning of the underlying Berkeley DB has a larger impact on the
directory's write performance than the choice of the underlying
filesystem. However, choosing ext3 or reiserfs for OpenLDAP's
Berkeley DB will result in a significant reduction in write
performance, and the benefits of using these filesystems should be
reassessed in this context.
The read tests showed that a sufficiently sized main memory will
outweigh all other factors. A fast CPU may further increase the
throughput (but the measured 650 search operations per second should
suffice for most applications).