RE: LDUP Working Group Agenda - Summary of objections to requirements draft



[Alan]
This is good and should be discussed.

Let's put the goals down first - can the LDUP and URP standards make
scalable, reliable directory information systems for business use?

I agree, and have voiced the opinion that once we go "sub-atomic" on
directory objects/entries, that is implementation, not standardisation.
For instance, in every DSA we dismantle a complete DSA's namespace into
RDB tables (32 patents) so that we can automatically index, two-phase
commit, etc. We can do database replication (sub-atomic - a product
feature) and DISP and DSP write-through (a standards feature).

Can any vendor/person on this list confirm their desire to do directory
entry sub-atomic interoperability testing, and what are the tests when
they can do that?

[Albert]
Thanks for the agreement. However, I would like to focus on user
requirements rather than vendor desires. When reviewing the LDUP archives
I noticed you and others expressing various "reservations"; however, that
previous discussion was focused on detailed design and on what vendors
desired to implement. I believe that a focus on vendor desires rather
than user requirements is what led to the fundamental technical errors in
the current architecture and URP drafts.

Now that the requirements draft is up for "final call" (regrettably AFTER
extensive architecture and design work), a decision MUST be made as to
whether these ARE the requirements that users of the directory service
have, and that must be satisfied by the proposed standards intended to
meet them.

Let's focus on that REQUIREMENTS issue. It's what is on the agenda NOW.

The basic problem, underlying all three of the key issues I summarized,
is that the requirements draft simply contains NO requirements whatsoever
relevant to multi-master replication.

It does start from goals similar to your "scalable, reliable directory
information systems for business use". But then it simply does not state
what is required of the standards in order to achieve those goals.

All it requires of multi-master standards is:

      5.6  The replication model MUST support both master-slave and
           authoritative multi-updateable replica relationships.

Perhaps that's why others didn't find anything to object to? They may
have only been looking for requirements that could cause difficulties
in their vendor implementations.

But even that is completely botched by the meaningless definition:

Updateable Replica - A Non-authoritative read-writeable copy of the
      replicated information. Such that during conflict resolution a
      authoritative master takes precedents in resolving conflicts.

Simple proofreading is clearly required before final call.

But how could ANYONE have actually READ that and thought that either the
document or the intentions behind it were ready for "final call"?

There is obviously no possibility of an "authoritative master" taking
precedence in resolving conflicts, since the definition of multi-master
replication, copied from the WG charter, correctly specifies:

Multi-Master Replication - A replication model where entries can be
      written and updated on any of several updateable replica copies
      without requiring communication with other updateable replicas
      before the write or update is performed.

There is no such thing as a "non-authoritative read-writeable copy"
in multi-master.
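
To spell out why, here is a rough sketch (Python, purely illustrative;
the change-sequence-number tuple and all names below are my own
assumptions, not taken from any LDUP draft) of how conflict resolution
necessarily works once any replica can accept a write:

    # Rough illustration only: symmetric conflict resolution in a
    # multi-master model. The CSN tuple and names are assumptions,
    # not taken from the LDUP drafts.

    def resolve(change_a, change_b):
        # Every replica applies the same deterministic rule, here
        # "highest change sequence number wins". The outcome depends
        # only on the changes themselves, never on which replica is
        # supposedly "authoritative".
        return change_a if change_a["csn"] > change_b["csn"] else change_b

    # Two replicas accept conflicting writes independently of each other...
    write_at_replica_1 = {"csn": (1000, "replica-1"), "mail": "a@example.com"}
    write_at_replica_2 = {"csn": (1001, "replica-2"), "mail": "b@example.com"}

    # ...and both must reach the same result when they exchange updates,
    # which is exactly why no copy can be "non-authoritative".
    assert resolve(write_at_replica_1, write_at_replica_2) == \
           resolve(write_at_replica_2, write_at_replica_1)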

Nobody with the slightest understanding of what the WG is actually
supposed to be working on could have such a complete misconception about
updateable replicas, regardless of their expertise in other areas.

Of course it's rather hard for people to read the draft, since it is
being discussed while expired and deleted, and is only available from the
LDUP email archives:

http://www.imc.org/ietf-ldup/mail-archive/msg00471.html


[Alan]
The rules of the IETF (IMHO) state that 2 or more implementations must
interwork to make an RFC. Can those involved say so? Otherwise, please
consider:


Something which is "lightweight" cannot be "unpredictable" and
"preposterously complex".



regards alan

[Albert]
My understanding is that the current architecture and design approach is the
result of a "vendor bake-off", so there must be sufficient vendor support
for interoperability testing. Presumably Telstra is planning an
implementation
since the URP design came from there and they already have a simulator.
Likewise
Novell looks strongly committed together with Netscape and Sun. Even Oracle
seems to
be involved despite knowing a thing or two about the consequences of
non-atomicity
in databases. Of the majors, only Microsoft seems to have stayed well clear,
perhaps laughing quietly on the way to the bank.

Despite the complexities, I don't think the URP draft is unimplementable
at all. Presumably there would have been loud screams from the various
vendors if it were.

The criteria for any testing would presumably be based on whether an
implementation complies with the proposed standards. Acceptance of those
proposed standards would presumably depend on whether they satisfy the
requirements in the requirements document. Since that document does not
actually state ANY meaningful requirement for multi-master replication,
it would not be difficult to meet those requirements.

Actually, as currently drafted, with not even convergence clearly
required, an implementation could pass ANY test suite by simply dropping
all updates and returning the error code serverClocksOutOfSync (72). See
p. 40 of:

http://www.ietf.org/internet-drafts/draft-ietf-ldup-model-04.txt
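
To make the absurdity concrete, here is a hypothetical sketch (Python;
the function and constant names are invented for illustration, nothing
here comes from the drafts) of the entire update-handling logic such a
trivially "conforming" server would need:

    # Hypothetical sketch of the degenerate server described above: it
    # drops every update and reports serverClocksOutOfSync (72).
    # Names are invented for illustration.

    SERVER_CLOCKS_OUT_OF_SYNC = 72  # result code cited above, p. 40

    def handle_update(update):
        # Discard the update entirely. With no stated convergence
        # requirement, nothing in the requirements draft rules this out.
        return SERVER_CLOCKS_OUT_OF_SYNC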

That's what happens when you let implementors define their own
requirements, though it is rather more spectacular than usual in this
case, since they have come up with something simply absurd.

It is the users, including system administrators, who will be hit by the
complexities resulting from non-convergence, inconsistency and the lack
of any audit trail.

There is no shortage of vendor products that are "unpredictable" and
"preposterously complex" to use. Sticking an IETF label of approved LDAP
standard on such products won't help them as much as they seem to think.

The question is whether what the vendors plan to implement will in fact
meet user requirements for LDAP replication (whether "business" or other)
and whether it will negatively impact existing LDAP standards.

If there is a user requirement and LDAP standard for guaranteed eventual
convergence, it doesn't matter a damn if every potential implementor is
ready, willing and able to do something else - the result would be
useless to users.
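
For what it's worth, a guaranteed-convergence requirement is simple
enough to state as a test: once every replica has seen the same set of
updates, every copy of an entry must be identical. A minimal sketch (the
replica objects and the read_entry() call are my own assumptions, for
illustration only):

    # Minimal sketch of what a stated convergence requirement would
    # demand. The replica objects and read_entry() are assumptions.

    def has_converged(replicas, dn):
        entries = [replica.read_entry(dn) for replica in replicas]
        return all(entry == entries[0] for entry in entries)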

Likewise if there is a user requirement for atomic operations or
modifiersName. Given the LDAP standards, anything that doesn't provide
those just isn't LDAP.

BTW, a "proposed standard" from a WG does not require 2 or more
implementations. That is only essential at the later stage of Draft
Standard.

An Informational RFC need not have any implementations at all. It can
just be a poem, provided it offers some information of interest to the
Internet community.

The requirements draft now being considered for WG "final call" is
naturally intended as an "Informational" RFC and would normally go
through without much external review.

Unfortunately it does not provide ANY useful information as to what
requirements the LDUP WG intends to meet for its main focus on
multi-master LDAP replication, despite considerable detail on matters
common to single-master and multi-master replication.

If unchallenged, that would result in the WG continuing to focus on
vendor desires instead of user requirements in completing its proposed
standards.

Since a requirements draft is intended to guide the development of
standards, and this one does not, it should be published only with
simultaneous application of the status "Historic" (your spelling may
vary).