
Re: issue w/ LDAP that I have encountered

No, I don't take any offense; I actually really appreciate it. I originally planned to take that approach, and might still do so in the future, but right now I'm too pressed for time and need to have something in place. As illogical as it sounds, given the situation I'm in it's actually "better" to have a solution now that might need fixing later ( if what you say bears out, and I'll admit I knew back-sql was experimental, even if I didn't know about the serious performance issues you mention ) than to a) have a solution that won't interoperate w/out me writing code I don't have time for right now, and b) have to do that work to implement another DB right now.

I'm in one of those "hey, we've been talking about it for 10 months, but we're only giving you time and equipment for 2 weeks to get it done" situations. I've had to re-learn the little I knew about LDAP in the past week ( I have only basic experience w/ it, and have been too busy fighting fires for the past 10 months to learn more ).
BTW, I just fixed the problem I posted about. I was right: the has_ldapinfo_dn_ru statement did scrag my DB, and re-creating the tables fixed it. I'm still getting no love from LDAP, though; even a simple ldapsearch gives me nothing:
[root@uiln001 openldap]# ldapsearch
SASL/GSSAPI authentication started
SASL username: root/admin@TLC2.UH.EDU
SASL installing layers
# extended LDIF
# LDAPv3
# base <> with scope subtree
# filter: (objectclass=*)
# requesting: ALL

# search result
search: 5
result: 32 No such object

# numResponses: 1
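For reference, the "base <>" line in the output shows the search ran against the empty base, so "No such object" only means nothing exists at the root. Two checks that usually narrow this down (the -x simple bind here is an assumption; substitute your SASL options as needed):

```
# Search explicitly under the intended suffix instead of the empty base:
ldapsearch -x -b 'dc=tlc2,dc=uh,dc=edu' '(objectClass=*)'

# Ask the rootDSE which naming contexts the server actually advertises:
ldapsearch -x -s base -b '' namingContexts
```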

I know I'm missing something dead obvious, but I can't figure out what. Shouldn't it be giving me something back, even though I haven't been able to add any data to the directory? When I try to add the following LDIF:
dn: dc=tlc2,dc=uh,dc=edu
dc: tlc2
objectClass: top
objectClass: domain
I get the following:

[root@uiln001 ldap]# ldapadd -f base.ldif
SASL/GSSAPI authentication started
SASL username: root/admin@TLC2.UH.EDU
SASL installing layers
adding new entry "dc=tlc2,dc=uh,dc=edu"
ldap_add: Server is unwilling to perform (53)
       additional info: operation not permitted within namingContext
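For what it's worth, unwillingToPerform (53) with that additional-info text comes from back-sql when it can't carry out a write at that DN. Two things worth double-checking: that the sql database's suffix in slapd.conf matches the DN being added, and that the back-sql metadata tables (ldap_entries, ldap_oc_mappings, etc.) were re-created along with the rest. An illustrative slapd.conf excerpt (all values here are assumptions):

```
database        sql
suffix          "dc=tlc2,dc=uh,dc=edu"
rootdn          "cn=Manager,dc=tlc2,dc=uh,dc=edu"
```

If the suffix still reads something like the stock "dc=my-domain,dc=com", adds under dc=tlc2,dc=uh,dc=edu will be refused.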

Anyway, any and all help would be welcome, and I really appreciate the advice you've given so far.
Derek R.

"As a rule, dictatorships guarantee safe streets and
terror of the doorbell. In democracy the streets
may be unsafe after dark, but the most likely visitor
in the early hours will be the milkman."
-- Adam Michnik

Quanah Gibson-Mount wrote:

--On Thursday, July 06, 2006 3:03 PM -0500 "Derek R." <derekr@tlc2.uh.edu> wrote:

Thanks for your reply. The only reason the db is expendable is because
I'm setting it up right now, so it won't be expendable once it's
populated w/ LDAP data. The reason I have chosen an SQL-based solution
is because we are planning on integrating all of the data in LDAP ( user
account info, user organizational data, DNS records, DHCP, etc. ) w/
ticket-tracking and other management software, and we have decided that
an SQL solution offers us the best interoperability as well as the widest
range of choices should we need to move to a different DB later on.
I appreciate the tip on using the Heimdal implementation. Should I
encounter any issues in my initial testing, I will try Heimdal out.
However, right now I'm just trying to get things working, and if I have
time ( before my deadline, which is creeping inexorably closer ) I will
do performance testing and tuning.

This is all my opinion, so please don't take any offense... ;)

I would re-examine your premises on this. The back-sql stuff is still fairly experimental, judging from all the development I see going on with it, and it is orders of magnitude slower than back-{b,h}db. If you want to run an *LDAP* service, I would highly advise using one of those two backends.

At Stanford, we have what are probably very similar data-source issues, ticket integration, etc., to what you have. Our solution was to do the following:

(1) Have a central RDBMS that stores the data
(2) Have a process that takes that data, converts it to LDIF, and writes it to the LDAP master

This makes data audit/cleanup/integration, etc., much easier, while still letting us run a high-performance, high-availability LDAP cluster.
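Step (2) above can be sketched in shell. The field layout (uid|cn|mail), the ou=people container, and the suffix below are all assumptions standing in for whatever the real export and schema look like; a real run would feed rows from something like psql or mysql instead of a here-doc:

```shell
# Sketch of step (2): turn delimited rows exported from the RDBMS into LDIF.
rows_to_ldif() {
  awk -F'|' '{
    printf "dn: uid=%s,ou=people,dc=tlc2,dc=uh,dc=edu\n", $1
    print  "objectClass: inetOrgPerson"
    printf "uid: %s\ncn: %s\nmail: %s\n\n", $1, $2, $3
  }'
}

# Example input standing in for a database export:
rows_to_ldif <<'EOF'
drichardson|Derek Richardson|derekr@tlc2.uh.edu
EOF
```

The generated LDIF can then be pushed to the LDAP master with ldapadd/ldapmodify, keeping the SQL side as the system of record.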

I gave a presentation on how things work at Stanford at ODD#3; you can find it at:



Quanah Gibson-Mount
Principal Software Developer
ITS/Shared Application Services
Stanford University
GnuPG Public Key: http://www.stanford.edu/~quanah/pgp.html
Derek Richardson
Linux Cluster Administrator
Texas Learning and Computation Center, University of Houston
218 Philip G. Hoffman Hall, Houston, Texas 77204-3058