
Re: more SASLprep/protocol problems



Kurt D. Zeilenga writes:
> I think there are opposing objectives.

Yes...

> One faction of the community is attempting to improve
> interoperability between independently developed implementations.
> One faction is attempting to support legacy systems.

I disagree.  This has nothing to do with legacy systems.  Maybe LDAP
will someday conquer the world to the point where an operating system
whose password handling was not written with LDAP in mind is a legacy
system, but that time is far away.  Today, that is a _normal_ system.

> In independently developed systems, there is the assumption that
> there may be no consistency between input devices and operating
> system platforms.  Using a preparation algorithm is a sound way
> of addressing these problems.

If they use that, there is also the assumption that the system's
password handling is written with LDAP in mind, or is otherwise
restricted so it can handle prepared passwords.  (Ours is,
incidentally: it only accepts ASCII passwords.  So this is all a
somewhat abstract argument to me.)
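
For what it's worth, a rough Python sketch of that kind of
restriction (illustration only, not our actual code): SASLprep
passes printable ASCII through unchanged, so a check like this
sidesteps the whole question.

    def acceptable_password(pw: str) -> bool:
        # Printable ASCII only.  SASLprep (RFC 4013) leaves these
        # characters as they are, so the prepared and unprepared
        # forms of such a password are identical.
        return all(0x20 <= ord(c) <= 0x7e for c in pw)

    print(acceptable_password('correct horse'))   # True
    print(acceptable_password('caf\u00e9'))       # False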

> As the server doesn't generally
> have knowledge of user input devices and operating system
> platforms used, preparation by the client is a sound approach.

Sometimes true.  But as I mentioned in the encrypted passwords thread,
if the server only knows a hashed version of the password, and the hash
was not computed over a SASLprep'd password (i.e. it was not done with
LDAP in mind), the server must reverse the preparation before matching
the passwords, and then it _must_ know the character set the password
was stored with.  Thus preparation can force just the opposite of what
it is supposed to give us.
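
To make that concrete, here is a rough Python sketch of the mismatch
I mean.  The toy_prep() below is only a simplified stand-in for real
SASLprep, and the character set and hash are just examples:

    import hashlib, unicodedata

    def toy_prep(pw):
        # Toy stand-in for SASLprep (RFC 4013): map non-ASCII space
        # characters to SPACE and apply NFKC.  The real algorithm
        # does more (prohibited characters, bidi checks, ...).
        pw = ''.join(' ' if unicodedata.category(c) == 'Zs' else c
                     for c in pw)
        return unicodedata.normalize('NFKC', pw)

    password = 'p\u00a0ssw\u00e9rd'   # NO-BREAK SPACE and e-acute

    # Legacy system: hashes the raw bytes in whatever charset it
    # used, here ISO 8859-1, with no preparation at all.
    stored = hashlib.sha256(password.encode('iso-8859-1')).digest()

    # LDAP client: sends the SASLprep'd password as UTF-8.
    received = toy_prep(password).encode('utf-8')

    print(hashlib.sha256(received).digest() == stored)   # False

    # To match, the server would have to undo the preparation *and*
    # re-encode in ISO 8859-1, i.e. know the original character set.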

The server doesn't have to know that if preparation is not done: then
it can leave such problems to the user, who has presumably been
instructed by the sysadmin about how to handle any problems with
mismatched input devices.  E.g. "only use ASCII passwords".

> The specification is primarily written to promote interoperability
> of independently developed implementations.

I know.  I still think this is an area where the interoperability comes
at too great a cost.  Interoperability is good when it comes in
_addition_ to working well on campus, but if it causes the system not
to work at all on campus, any interoperability which is offered doesn't
seem very relevant to me.

> This is not to say that we should abandon "legacy" issues, but to
> note why there is a "SHOULD".

And that is why I don't think there should be a "SHOULD", since I don't
think it's a legacy issue.  Or rather, I think implementations "SHOULD"
support both options.

> How's this?

Nope...

-- 
Hallvard