Searches are slow *with* Indexes

Hi folks,
I hope one of you can help me with a problem I've been bashing my head against the wall trying to solve.

I've built and installed OpenLDAP 2.0.7 with BerkeleyDB 3.1.17 on Solaris 2.6. Everything appears to work fine, except that searches are always linear, even when I've built indexes for the requested search attributes.

To test the server, I loaded it up with roughly 25,000 entries. Each entry looks similar to this (all of the indexed attributes are unique):

dn: cn=User Name + amid=811, o=alumni.sfu.ca
cn: User Name
cn: U Name
sn: Name
givenname: U
objectClass: top
objectClass: person
objectClass: organizationalPerson
objectClass: inetOrgPerson
objectClass: sfuPerson
maildrop: userid@alumni.sfu.ca
uid: userid
userPassword: {crypt}<blanked>
mail: u_name@alumni.sfu.ca
mail: user_name@alumni.sfu.ca
mail: userid@alumni.sfu.ca

The index lines from the slapd.conf file look like this:

index   default pres,eq
index   uid eq,sub
index   mail eq,sub
index   maildrop eq,sub

Index files are being built, and running slapd in debug mode shows it reading the relevant index files.

A search for "(uid=userid)", with slapd running in debug mode, shows slapd reading the uid.dbb file. After it reads the file, the following lines appear in the debug output:

=> ldbm_cache_open( "/tmp/sfu/uid.dbb", 16384, 600 )
<= ldbm_cache_open (cache 3)
=> key_read
<= index_read 1 candidates
<= equality_candidates 1
<= filter_candidates 1
<= list_candidates 25036
<= filter_candidates 25036
<= list_candidates 25036
<= filter_candidates 25036
====> cache_return_entry_r( 1 ): returned (0)
=> id2entry_r( 1 )

It then proceeds to iterate through every entry in the database looking for matches.
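As I understand the candidate logic (a simplified sketch of my mental model, NOT the actual OpenLDAP source; the IDs and helper names here are invented), a single filter component that can't use an index contributes the full ID list, and OR-ing candidate lists unions that back up to every entry in the database - which would produce exactly the kind of "1 candidate, then 25036" pattern above:

```python
# Simplified model of how ldbm-style candidate lists combine (my mental
# model, NOT the OpenLDAP source; IDs and names are invented).

ALL_IDS = frozenset(range(1, 25037))   # every entry ID in a 25,036-entry DB

# An equality index maps (attribute, value) -> set of candidate entry IDs.
eq_index = {("uid", "userid"): {1}}

def equality_candidates(attr, value):
    """Indexed lookup; an unindexed attribute yields ALL_IDS (no pruning)."""
    return eq_index.get((attr, value), ALL_IDS)

def and_candidates(lists):
    """AND filter: intersect the component candidate lists."""
    out = ALL_IDS
    for ids in lists:
        out = out & ids
    return out

def or_candidates(lists):
    """OR filter: union the component candidate lists."""
    out = frozenset()
    for ids in lists:
        out = out | ids
    return out

# One indexed component alone prunes to a single candidate:
print(len(and_candidates([equality_candidates("uid", "userid")])))   # 1

# OR it with any component that can't use an index, and you're back
# to a full scan over the whole database:
print(len(or_candidates([equality_candidates("uid", "userid"),
                         equality_candidates("foo", "bar")])))       # 25036
```

If that model is right, something in the evaluated filter (or in how slapd wraps it) must be falling back to the full ID list, even though my uid lookup itself returns one candidate.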

So what am I doing wrong? Why are my index files being ignored? I've tried deleting the database and recreating it from a fresh ldif file, with no change. I've tried increasing the cachesize and dbcachesize parameters. This, predictably, decreases search times, but not by much, and chews up 100+ MB of RAM - with a bigger cache, slapd just iterates through every entry in the database from RAM instead of from disk.
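To illustrate why the cache result doesn't surprise me - a toy comparison (made-up data, nothing to do with my actual entries): with everything in memory, a linear scan still touches all 25,000 entries per search, while a working equality index touches one:

```python
# Toy illustration (made-up data): caching removes disk I/O, but a linear
# scan still examines every entry; only an index avoids the O(n) walk.

entries = [{"uid": "user%05d" % i} for i in range(25000)]

# What a working eq index effectively gives you: value -> entry.
uid_index = {e["uid"]: e for e in entries}

def linear_search(uid):
    # Full scan, even when 'entries' is entirely in RAM.
    return [e for e in entries if e["uid"] == uid]

def indexed_search(uid):
    # Single hash lookup.
    return uid_index.get(uid)

print(linear_search("user00042"))   # [{'uid': 'user00042'}]
print(indexed_search("user00042"))  # {'uid': 'user00042'}
```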

Any suggestions appreciated.

Steve Hillman                           hillman@sfu.ca
Senior Systems Administrator            (604) 291-3960
Simon Fraser University