
Re: replicating back-sql data


> You don't indicate what version of OpenLDAP's slapd
> you're using.

I have one server on 2.2.15 and another running a CVS snapshot last updated about a week ago.

> > ldapsearch -H ldaps://xx.uen.org -D uid=bmidgley,dc=my,dc=uen,dc=org -x \
> >     -W -d 256 -z 10 "(uid=bmidgley)"
> >
> > It looks like the backend selects all records from ldap_entries. It
> > takes about 6 seconds on our fastest db server.

> This is strange, since your filter should result in an exact match search.

I thought so too.

> > Is there a way to replicate this back-sql data into a traditional
> > openldap backend to improve performance? I know slurpd needs a log so
> > that won't work. Is there anything out there that will just do a brute
> > force push from one ldap db to another? I think this would be more
> > reasonable than constant heavyweight user lookups. We could deal with
> > some lag time in the updates.
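For the record, a crude one-shot push can be scripted with the standard client tools: dump the back-sql server to LDIF and load it into the replica. The replica host, manager DN, and search base below are placeholders, so treat this as a sketch rather than a tested recipe:

```
# Dump everything under the base from the back-sql server to LDIF.
ldapsearch -H ldaps://xx.uen.org -x \
        -D "uid=bmidgley,dc=my,dc=uen,dc=org" -W \
        -b "dc=my,dc=uen,dc=org" -LLL '(objectClass=*)' > dump.ldif

# Load it into the replica. -c continues past entries that already
# exist, so rerunning catches new adds; note it will NOT pick up
# modifications to entries that are already present.
ldapadd -c -x -H ldap://replica.example.org \
        -D "cn=Manager,dc=my,dc=uen,dc=org" -W -f dump.ldif
```

To also propagate deletes and modifications you would have to wipe and reload the replica each run, which is why a purpose-built sync tool is the better answer.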

> SysNet developed a tool to synchronize SQL and LDAP
> servers in either push or pull mode; you may contact them.

OK, I will look into it.

> I guess something could be attempted using syncrepl,
> but since there are no timestamps in the default
> back-sql implementation, this might not be a viable option.
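For reference, the consumer side of such an attempt would be an ordinary refreshOnly syncrepl stanza on the replica's database section; all values here are hypothetical, and the lack of entryCSN/timestamp data in stock back-sql is exactly what would undermine it:

```
# slapd.conf on the replica (hypothetical values)
database        bdb
suffix          "dc=my,dc=uen,dc=org"
rootdn          "cn=Manager,dc=my,dc=uen,dc=org"
directory       /var/lib/ldap

syncrepl        rid=001
                provider=ldaps://xx.uen.org
                type=refreshOnly
                interval=00:01:00:00
                searchbase="dc=my,dc=uen,dc=org"
                scope=sub
                filter="(objectClass=*)"
                bindmethod=simple
                binddn="uid=bmidgley,dc=my,dc=uen,dc=org"
                credentials=secret
```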


> Are you using a view or a table for ldap_entries?
> The view approach may definitely impact performance,
> although the table approach requires maintaining one
> extra table.

It is a view, but it also does some pattern-based filtering so only some of our users appear in LDAP.
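The trade-off can be sketched roughly as follows; only ldap_entries and its standard columns (id, dn, oc_map_id, parent, keyval) come from back-sql, while the users table and the pattern are illustrative:

```sql
-- View approach: the filter is re-evaluated on every search that
-- consults ldap_entries, and the planner may not push an index
-- lookup on dn through the view.
CREATE VIEW ldap_entries AS
SELECT u.id, u.dn, 1 AS oc_map_id, 0 AS parent, u.id AS keyval
  FROM users u
 WHERE u.username LIKE 'staff%';    -- hypothetical pattern filter

-- Table approach: materialize the same rows once and index them;
-- the cost moves to keeping the table current (triggers on users,
-- or a periodic refresh job).
CREATE TABLE ldap_entries AS
SELECT u.id, u.dn, 1 AS oc_map_id, 0 AS parent, u.id AS keyval
  FROM users u
 WHERE u.username LIKE 'staff%';

CREATE INDEX ldap_entries_dn_idx ON ldap_entries (dn);
```

If the 6-second scans come from the view, materializing plus an index on dn is the usual first thing to try.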