
RE: LDUP Working Group Agenda - Summary of objections to requirements draft



[Issue A - ACI Replication]
> [Albert]
> I am also sending this to LDAPEXT as the direction taken by LDUP
> could have major consequences for LDAPEXT (e.g. the current
> proposals for ACL simply will not replicate correctly

[Steve]
Servers that don't store access controls as values of the ldapACI
attribute will have to give the appearance that they do for the
purposes of LDUP. Having done that, the access controls will replicate
as any other attribute values would.

[Albert]
Understood. Naturally LDUP can only replicate access controls that are
stored as LDAP attributes. RFC 2820 already requires that ACL standards
MUST satisfy that necessity (3.1 G3). Since LDUP will replicate access
controls the same way that it will replicate other attributes, they
will encounter the same problems that other applications will encounter.

The problem is that URP merges conflicting concurrent changes to
attributes and attribute values made at different replicas, whereas the
currently proposed LDAPEXT ACI-related attributes (like many other
applications) rely on application semantics that assume they can only
be updated atomically.

[Issue B - Transactions]
> and future support for transactions will be
> difficult if not impossible).

[Steve]
I've outlined an extension to LDUP for providing strong transactional
consistency with a configurable degree of availability. Though you
might not agree with the philosophy, no one has yet pointed out any
fatal flaw in the procedure.

[snip]

[Albert]
When an actual draft has been published, feedback can be expected.

Meanwhile, the outline you gave clearly indicates that the proposal
relates to DSAs contacting other DSAs before performing an update.

This does not satisfy the definition of multi-master replication in
the LDUP charter:

"A replication model where entries can be written and updated on any
of several replica copies, without requiring communication with other
masters before the write or update is performed."

http://www.ietf.org/html.charters/ldup-charter.html

It therefore cannot meet the real performance and availability
requirements that motivate work on this (although those real
requirements for multi-master, mentioned in the introduction, are
completely obscured, buried or forgotten in the actual text of the
"final" requirements doc).

There are many well known ways to support full transaction semantics
in a non-multi-master environment, and yours may or may not do well in
comparison with others. Personally I cannot see what advantage it has
over Alan Lloyd's advocacy of failover single master, but no doubt you
will explain the advantages in a draft.

However, like Alan's proposal, it cannot resolve any problems
concerning future standardization of transactions that may result from
deployment of multi-master LDUP standards, because it is not multi-master.

Both the advantage and the irrelevance of Alan's proposal lie in the
fact that it does not need any new standards, as it is just an
implementation of existing single-master standards. I see no reason to
solve a problem with new standards when there is no clear advantage
over what can be achieved within existing standards. Multi-master does
provide local availability of updateable replicas, including
availability without continuous connectivity, which cannot be achieved
by either your proposal or Alan's.

Work is proceeding, outside LDUP, but within LDAPEXT, on how to group
updates and/or do transactions in a standardized manner.

Some applications already use existing directory services to add their
own non-standardized transactional semantics, relying only on the
LDAP/X.500 data model which *does* support atomic operations on a
single entry, but does not provide any means for locking a set of
entries or performing an operation atomically on that set.

Methods include writing transaction identifiers to special attributes
in each entry of the set, updating all the entries, and then checking
that nobody else wrote to any of them in the meantime. This requires
that only DUAs cooperating in a transactional application have write
access to all relevant attributes of the entries.
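
For concreteness, here is a minimal sketch of that method in Python
with the python-ldap API. The txnId attribute name is hypothetical,
and a real application would also need retry and cleanup logic:

    # Sketch of application-supplied "transactions" over plain LDAP.
    # Assumes a hypothetical txnId attribute that all cooperating DUAs
    # agree to use, and that only cooperating DUAs have write access.
    import uuid
    import ldap  # python-ldap

    def transactional_update(conn, changes):
        """changes: {dn: modlist}; True if no other DUA raced us."""
        txn = str(uuid.uuid4()).encode("utf-8")
        dns = list(changes)
        # 1. Stamp every entry in the set with our transaction id.
        for dn in dns:
            conn.modify_s(dn, [(ldap.MOD_REPLACE, "txnId", [txn])])
        # 2. Apply the real changes to all entries.
        for dn in dns:
            conn.modify_s(dn, changes[dn])
        # 3. Verify nobody else stamped any entry in the meantime.
        for dn in dns:
            attrs = conn.search_s(dn, ldap.SCOPE_BASE,
                                  attrlist=["txnId"])[0][1]
            if attrs.get("txnId") != [txn]:
                return False  # conflict: caller must retry or repair
        return True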

This sort of thing can work in a single master environment, but becomes
more complex in a multi-master environment. It is still possible
provided attributes as a whole are replicated atomically, even
though updates to different attributes of a single entry may be
merged - as shown by the fact that Active Directory applications can
do this, clumsily, using "consistency guids" and "child counts".

However it becomes much more difficult, and perhaps impossible, with
URP, because even concurrent changes to individual attribute values of
a single attribute may be merged.

I cannot think of a way to do it satisfactorily in that environment. I
suspect you cannot either, or you would not be turning your mind to future
proposals for far more complex fundamental changes to DSA implementations,
involving locking a majority of DSAs prior to each update.

If you can think of a way for applications that need transactions
to supply that support themselves within the currently proposed LDUP
multi-master framework, as they can with single master replication,
please spell it out.

However there is no urgency to do so at the moment. All that is
needed now is a clear statement in the requirements document that
LDUP standards "SHOULD NOT impede future work on transactions".

Actual demonstration, or justification of an inability to do so, can
be left to subsequent documents and liaison with people actually
working on grouping or transactions (an area outside the scope of LDUP
because it also applies to single server directories).

Outlines of future proposals are of course interesting and relevant to
the WG, but your outline concerning future transactional support
has only one relevance to the final call on the requirements document.

If you are confident about it, you should support adding a requirement that
LDUP standards "SHOULD NOT impede future work on transactions", or perhaps a
stronger requirement that it "MUST NOT".

I have provided a definition of "SHOULD NOT impede" (and of "MUST NOT
impede") in my individual submission:

http://www.ietf.org/internet-drafts/draft-langer-ldup-mdcr-00.txt

[Issue C - Convergence]
> SUMMARY OF OBJECTIONS TO REQUIREMENTS DRAFT
>
> The three key points are:
>
> 1) There is no requirement for convergence or "eventual consistency".
>
> This looks like just poor expression, but in fact the LDUP
> architecture and Update Reconciliation Procedures do specify
> proposed standards that guarantee long term divergence by relying
> on timestamps

[Steve]
The timestamp in the CSN is just a version number that happens
to increment without visible update activity. A version number
scheme that leaves gaps in the runs of version numbers isn't
broken as long as the version numbers from a server are
monotonically increasing. The LDUP CSN is monotonically increasing too.
The only difference with the LDUP CSN is that the gaps just happen
to have a correlation to elapsed time.
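
For illustration (a sketch only, not the exact LDUP CSN encoding), a
per-server CSN of this kind stays monotonic even if the local clock
stalls or jumps backwards:

    # Sketch: a per-server change sequence number whose "timestamp"
    # field is really just a version number. Monotonicity is preserved
    # by the sequence counter even when the clock does not advance.
    import time

    class CsnGenerator:
        def __init__(self, replica_id):
            self.replica_id = replica_id
            self.last_time = 0
            self.seq = 0

        def next_csn(self):
            now = int(time.time())
            if now <= self.last_time:
                self.seq += 1   # clock stalled: reuse time, bump seq
            else:
                self.last_time, self.seq = now, 0
            # CSNs from one server compare as increasing tuples
            return (self.last_time, self.seq, self.replica_id)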

> and allowing DSAs to transmit changes out of order and drop changes
> when clocks are out of sync. This is easily fixed, once a
> requirement to fix it is agreed on.
>
> Details on how to fix it, based on the Coda replication protocols
> also adopted by Active Directory, and a semi-formal proof that the
> fix would be robust in the face of DSAs crashing and being restored
> from backups, network partitioning etc etc, are included in my
> draft below.

Your proof neglects the effects of the purging mechanism. Restoring
a crashed DSA from a backup works if no change information is ever
removed. However if a backup restores a DSA to a state prior to the
purge point of any of the other replicas there exists the possibility
that the other DSAs have forgotten changes that the restored DSA
needs to bring it up to date and consistent with the others.

I have a procedure for solving the replica consistency problems of
restored replicas and rejoined partitions but it is written in terms
of a log-based implementation using a different purging mechanism.
I'm still in the process of recasting it in state-based terms with
an update vector.

[snip]

[Albert]
My reference to "relying on timestamps and allowing DSAs to transmit
changes out of order and drop changes when clocks are out of sync"
should be read as a single sentence. Clocks are not necessarily even
monotonically increasing, because they sometimes jump around when
administrators make mistakes, e.g. with time zone and daylight savings
settings.

Anyway, as you are not seeking a "final call" on URP without a
mechanism spelled out for ensuring eventual convergence, I am quite
happy to wait and see whether it can be done without a log-based
mechanism when you publish a draft.

My proposal, based on a vector similar to that in the current LDUP
drafts, does take account of the purging mechanism: it prevents a
purge at *any* replica until after *every* replica has received the
change. It does so precisely for the reason you mentioned.
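
In vector terms, a minimal sketch of that rule (assuming each replica
maintains an acknowledgement vector of the latest change it has
received from every originating replica) looks like this:

    # Sketch: a change with CSN c originating at replica R may be
    # purged only once every replica has acknowledged receiving it.
    # The safe purge horizon for R is therefore the minimum over all
    # replicas' acknowledgement vectors.
    def purge_horizon(ack_vectors, origin):
        """ack_vectors: {replica: {origin: latest_csn_received}}."""
        return min(vec.get(origin, 0) for vec in ack_vectors.values())

    # Replica "C" has only seen A's changes up to CSN 7, so no change
    # from A later than CSN 7 may be purged at *any* replica yet.
    acks = {"A": {"A": 12, "B": 9},
            "B": {"A": 10, "B": 9},
            "C": {"A": 7,  "B": 8}}
    assert purge_horizon(acks, "A") == 7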

The solution I proposed is based on existing implementations
(Coda and AD) that have been thoroughly researched and tested
and are known to work and to have a number of advantages. For
marketing purposes, AD also describes that as "state based"
rather than "log based", by coyly calling the log an "index". I prefer
to stick to the technical necessities and leave marketing doubletalk
to others.

That proposal separates report propagation from update processing and so
would be equally applicable to URP or MDCR. It also enables a natural
transition from single master to multi-master implementations instead of
attempting to force implementation of multi-master as the current LDUP
proposals do.

The Coda mechanism relies on change reports being transmitted in order
and on version numbers rather than timestamps.

If it used timestamps it would not work because clocks cannot be
guaranteed monotonic.

If you have some other mechanism that can also be proved to work, but
is somehow able to do it using timestamps and changes transmitted in
random order, that's quite an achievement, as the Coda research was a
major project. I look forward to reading the draft but repeat my
recommendation that you study the Coda research.

As already stated, I do not believe this is a fundamental problem
inherent in URP, since URP need not rely on timestamps.

There may well be other and better ways to do it, as long as we are
agreed that it MUST be done. My description was certainly incomplete,
as a protocol for determining which replicas are currently active
and which are excluded is necessary for any method. I wrote it
because in the current proposals the mechanism was not merely
incomplete but absent, and there was explicit language about
dropping updates. When I checked back to the requirements I found
that what I thought was common ground in assuming a requirement for
eventual convergence, was so vague that simply dropping updates
and leaving replicas with different states for the same entries
could arguably be consistent with the stated requirements.

If the current architecture and URP documents have left this
for further work, that is fine by me, though it would have been
better to have said so explicitly in the drafts.

I simply object to the combination of current drafts that indicate
intended non-convergence by dropping updates, together with an attempt
to finalize a requirements document that does not make it utterly
clear that this would be unacceptable.

The current wording says the scope includes two models, "Eventual
Consistency" and "Limited Effort Eventual Consistency", defining the
latter as "where replicas may purge updates therefore dropping
propagation changes when some replica time boundary is exceeded, thus
leaving some changes replicated to a portion of the replica topology".

Instead of ruling out the second, it explicitly says "LDAP replication
should be flexible enough to cover the above range of capabilities".
The actual current drafts do "purge updates therefore dropping propagation
changes when some replica time boundary is exceeded, thus leaving some
changes replicated to a portion of the replica topology."

A requirement "to be flexible enough to cover this" is "simply
absurd". That is really an understatement.

Are we agreed that the final requirements draft should unambiguously
require eventual convergence under ALL circumstances?

As long as that requirement IS met, I don't really care that much HOW.

[Issue D - Atomic operations]
> 2) There is no requirement for atomic operations.
>
> Again this is obscured by poor expression and nonsensical
> definitions, but in fact the architecture and URP drafts merge
> changes to individual attribute values made concurrently at
> different replicas. The fact that this obviously breaks the ACL
> standards being developed in [LDAPEXT],

[Steve]
URP doesn't merge changes to individual values. The current ldapACI
attribute type definition equates two values if they are semantically
the same, so URP will only have the effect of changing the meaning of
a collection of ACIs because of an explicit user change request to
remove ACIs.

[Albert]
There is obviously no way that URP *could* merge two attribute values
that do not match for equality. I was not accusing you of doing an XOR
of the individual bits of attribute values! Nor do I see any problem in
the fact that URP will sometimes update a value with a slightly different
bit string that does match for equality.

The problem I was referring to is that URP merges the addition or deletion
of different attribute values of a single entry made concurrently by DUA
operations at different replicas.

That means the set of ACI attribute values which each of two users
thought they were establishing will differ from the set of ACI
attribute values that actually results from their concurrent actions.
The example below from Alison's document shows the 9 states that could
result from concurrent updates to two attributes at one replica, with
eventual convergence to one of those changes. Exactly the same applies
to concurrent changes to two values of a single attribute such as
ldapACI. In addition, the eventual convergence can be to a mixture of
both changes, with some attributes or attribute values changed by one
concurrent operation and others changed by the other. If two users each
delete a value from an attribute with 2 values, it ends up empty, and
possibly a schema violation, although each thought they were setting it
to the single value the other thought they were deleting.
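
The two-value deletion case is easy to reproduce with plain set
semantics (a sketch of value-level merging in general, not of URP's
exact procedure):

    # Sketch: merging two concurrent deletes value-by-value. Each user
    # read {a, b}, deleted one value, and expected the other to remain.
    initial = {"a", "b"}
    delete_by_user1 = {"a"}   # user 1 expects the result {"b"}
    delete_by_user2 = {"b"}   # user 2 expects the result {"a"}

    # Value-level merging sees no conflict: the two deletes touch
    # different values, so both are applied.
    merged = initial - delete_by_user1 - delete_by_user2
    assert merged == set()    # empty, which neither user intended; a
                              # schema violation if the attribute is
                              # mandatory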

Frankly I think that idea should also be just dismissed as "simply absurd".

It isn't something people should find out about only after standards are
finalized, buried among the other ingredients like monosodium glutamate.

It should be highlighted in the requirements document, preferably
with a heading like "WARNING: Lark's Vomit".

However it has obviously been taken seriously, so I accept an
obligation to answer it seriously. I just want to make damn sure that
others who would be affected by it, and who could feel equally amazed,
are aware of this intention, and may therefore come to the meeting,
since I can't be there - and/or join or rejoin discussions of the WG
"last call" on requirements on the LDUP mailing list.

[Steve]
On the other hand, MDCR will arbitrarily throw away
previously accepted ACIs because of a potential, but probably
non-existent, semantic conflict between the original change requests.
MDCR assumes that changes to the same entry at different replicas
are automatically in conflict and one of them has to lose.

I contend that URP does less damage and therefore has a better chance
of achieving a favourable final AC state than MDCR.

> despite an overlap in authorship between the two documents, strongly
> confirms that the consequences for existing applications are simply
> not [understood] and should be studied through a requirements
> analysis by actively explaining the implications and soliciting
> input from other areas (operations etc) that may be affected.
>
> Fixing this would require substantial changes to the current
> architecture and URP. I have sketched one possible way to do so in
> the draft below.

[Albert]
Your contention should certainly be examined carefully. (See below).

My aim at present is simply to ensure that LDAPEXT
is fully aware of the consequences of letting a "final call" on
LDUP requirements go unchallenged.

The failure to require atomicity clearly affects ACI, and I believe
also many other aspects of the base LDAP standards and other extensions
to them. The requirements document should clearly explain the potential
impact and actively solicit input, instead of glossing it over and
inventing a new and absurd definition of "atomicity" to prevent thought
in a thoroughly Orwellian manner.

What is needed is not "a better chance of achieving a favorable
[Access Control] state".

The requirement ought to be: "LDUP standards SHALL absolutely
guarantee correct operation of replicated access controls under
all circumstances, during convergence as well as after".

If either MDCR or URP, or any other proposal, cannot provide that
guarantee it is not acceptable, whether or not it has a "better
chance" than some other unacceptable proposal.

In my view that guarantee can only be met satisfactorily by preserving
the current LDAP/X.500 data model, in which entries are the unit on which
operations are performed atomically, not individual attribute values.

Obviously we disagree about that. Can we nevertheless agree
that meeting that guarantee ought to be a requirement?

[Issue E - modifiersName]
> 3) There is no requirement to support mandatory operational
> attributes of LDAP.
>
> The operational attribute "modifiersName" cannot be supported
> meaningfully as nobody in particular can be said to be responsible
> for a change that has in fact been merged from two or more
> concurrent changes made independently and without knowledge of each
> other.

[Steve]
Even though we talk about changes being "merged" the reality is
that one of them will take precedence over the others. The
operational attribute updates from that one change also take
precedence, so the value of modifiersName corresponds to the
latest apparent change to the entry, which is exactly what
you'd expect from the single master case.

> This severely complicates system administration as the first thing
> anybody would want to know after receiving a problem report is "who
> changed what".

You've overestimated the utility of the modifiersName attribute.
It only tells you who last changed something, not what they changed.

[snip]

[Albert]
On a single master system, "modifiersName" tells you that what they
changed is what was in the entry immediately before the change and
what they could have read before making that change. If it wasn't
broken before, then it broke because of that change by that DUA.

If problems result from the rare race condition in which DUA B updates
an entry between DUA A reading the previous state and writing the
change based on that read, it tells you that the application should be
fixed to prevent such problems in future - e.g. by explicitly replacing
every attribute, unchanged as well as changed, instead of just adding
or deleting the attribute values that actually changed.
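
A sketch of that style of fix with python-ldap (the helper and its
calling convention are hypothetical): read the full entry, then assert
the complete intended state with MOD_REPLACE, so that conflict
resolution converges to one whole state rather than a blend:

    # Sketch: assert the full intended entry state instead of issuing
    # a narrow add/delete of only the changed values.
    import ldap  # python-ldap

    def replace_whole_entry(conn, dn, new_values):
        """new_values: {attr: [bytes, ...]} - complete intended state.
        Required attributes such as objectClass must be included."""
        current = conn.search_s(dn, ldap.SCOPE_BASE)[0][1]
        modlist = []
        for attr in set(current) | set(new_values):
            if attr in new_values:
                # replace even unchanged attributes, not just changed
                modlist.append((ldap.MOD_REPLACE, attr,
                                new_values[attr]))
            else:
                # attribute no longer wanted: delete all its values
                modlist.append((ldap.MOD_DELETE, attr, None))
        conn.modify_s(dn, modlist)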

That is usually unnecessary in single master environments but is the sort
of thing that DOES become necessary for some existing applications
in ANY multi-master environment, due to the much longer interval in
which conflicting concurrent changes can occur, despite having read
the state of the LOCAL copy of an entry "immediately" before changing
that entry.

That kind of fix could work, perhaps even with URP, by ensuring that
convergence will be to the new state resulting from only one of the
concurrent changes having actually occurred, and the others having
had no effect. With that fix, modifiersName has the same semantics as
on existing X.500/LDAP directories - as required by the existing
standards.

However, most applications will NOT be initially fixed for transition
to a multi-master environment. MDCR attempts to minimize the system
administration problems that WILL result, by ensuring convergence to
the results of applying only 1 concurrent change, whether the applications
have been fixed or not.

This means that when something breaks that was not broken before, you
know it was broken by something done by the DUA that last changed it.
With URP, you just know it broke, and it broke after you turned on
multi-master replication.

Neither the directory service nor any application of it is intended to
support concurrent changes to a single entry. Such concurrency is an
unavoidable problem that must be dealt with by any multi-master system.
This should be clearly explained in the requirements document and input
actively sought from other areas likely to be affected.

With URP, modifiersName only tells you that some of the attributes and
values in an entry that is causing a problem were changed by that DUA -
and were changed blindly, with no way of knowing what the rest of the
entry state being changed might actually be, or what changes to that
state might result, given unknown concurrent changes by other DUAs. The
problem may in fact be due to the unknown concurrent changes by other
DUAs.

The same is true for entries that are causing no problems and in both
cases there is no way to be sure of "who did what".

So where does a sysadmin begin when trying to track down a problem?

Personally I would start by ripping out multi-master replication,
because it is obviously broken and causing problems. For what it is
worth, that is incidentally the anecdotal result I have been getting in
some discussions about future plans: "we're not interested in
multi-master, it looks too complicated and unreliable".

Nobody wants to have to administer a "directory" where they cannot be sure
"who changed what". In fact the only valid value for a URP "modifiersName"
attribute could be "nobody".

[Issue F - Predictability and Understandability]
> In my view, both an explicit requirement for atomic operations and a
> requirement that the results be 1) predictable and 2) make some kind of
> sense to users, should be in the final requirements draft.

[Steve]
I don't think obliterating all trace of a user's previously
accepted change to an entry because someone else changed some unrelated
information in the same entry qualifies as either predictable or
sensible to users.

[snip]

Regards,
Steven

[Albert]
Neither do I. MDCR keeps not only a trace, but a full audit trail at
every replica, in which each change is associated with the
modifiersName of the DUA responsible, the replica at which the change
occurred, and the previous state of the entry. This makes it possible
to make the rejected concurrent changes available for fixing by users
instead of administrators - in much the same way that URP does this by
placing orphans in "lost and found" for those conflicts that it does
resolve atomically instead of "reconciling". It also makes it possible
to add notifications in conjunction with LCUP or by email.

URP just throws all this information away, pretending that there is no
problem with conflicting concurrent changes, by obliterating the fact that
there ever was a conflict.

I believe it is highly predictable, and easily understood by users,
that if somebody else changes an entry at around the same time that you
do, only one of the changes can succeed, so their change may be
eventually accepted and yours eventually rejected. This also happens on
single-master and single-server directories, except that in
multi-master "around the same time" is a longer interval, it will
therefore happen more often, and the successful change does not always
have the latest timestamp.

MDCR also tries to minimize the frequency with which it will happen, by
maintaining a tree to serialize any concurrent changes that *could* be
serialized.

Some applications would still be affected, and the MDCR draft suggests
possible ways of dealing with that, such as moving attributes or
attribute values that can safely be updated concurrently into separate
child entries, and providing backwards compatibility through methods
based on David Chadwick's "Compound (families) of entries" draft
(expired).

However, it is still certainly a major concern, and if the WG were
considering any such proposal on its agenda, the potential problems
should also be highlighted in a requirements document, to solicit input
from other areas that might be affected, for the same reason that URP's
approach should be.

The fact is you can have locally available updatable replicas
(multi-master), atomic operations and irrevocable operations - pick
any 2. The choices that must be made between atomicity and durability
of operations for multi-master replication should be clearly explained
in requirements drafts, so that the consequences of those choices can
be evaluated based on feedback.

Instead the LDUP WG has chosen to provide no information and solicit no
feedback. In my view that does not merit publication even as an
"Informational" RFC, except perhaps with the status "Historic".

What is not predictable or easily understood by users or administrators is
when changes appear on entries that were not made by anybody in particular
and yet identify somebody in particular as "modifiersName".

URP produces "Extraordinary States" and "Transient Extraordinary
States" and can multiply 2 changes made concurrently into 9 different
transient states at different replicas for the same entry (with a
combinatorial explosion for larger numbers of attribute values changed
or concurrent operations).
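
The arithmetic behind that claim is simple to check (a sketch: each
attribute touched by two concurrent operations can be seen in its old
state or after either change, so k attributes give 3**k transient
states):

    # Sketch: enumerate the transient states visible at a replica
    # while two concurrent operations, each touching k attributes,
    # propagate through the topology.
    from itertools import product

    def transient_states(k):
        return list(product(["old", "op1", "op2"], repeat=k))

    assert len(transient_states(2)) == 9    # the 9 states in Alison's
                                            # table below
    assert len(transient_states(4)) == 81   # explosion: 3**k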

The consistency model of URP was summarized in Alison's "Contribution
to Profiles Document (Consistency Discussion)":

http://www.imc.org/ietf-ldup/mail-archive/msg00548.html

The attached Word version, with more easily read tables, is:

http://www.imc.org/ietf-ldup/mail-archive/doc00000.doc

It certainly isn't predictable. In fact I doubt that it can be
analysed in polynomial time.

If you think that it is predictable and makes some kind of sense to
users, how about trying to summarize it for the requirements document?

Here's an extract; please write a paragraph summary for the LDUP
requirements document, in the marketing brochure style of that
document, emphasizing how this predictability and understandability
further enhances the "simple, highly efficient and flexible" LDUP
standards "to meet the needs of both the internet and enterprise
environments".

***
3.1  Latest Known Wins Consistency

The "Latest Known Wins" approach has the outcome that all attributes will
"eventually" reach the latest value entered into the system.  During the
"eventuality" being reached, intermediate states reflecting a combination of
states which have existed over time may be held by an individual DSA.  These
intermediate states are best described using an example.  Two operations, T1
at time 1 and T2 at time 2, occur on an entry updating attributes A and B.
This may be viewed as follows :
T0     T1     T2
A0     A1     A2
B0     B1     B2

"Latest Known Wins" consistency has the semantics that all directories will
eventually have the state A2, B2 (providing no further operations in the
distributed system affect attributes A and B).  However, at times between
either update occurring and this "eventuality", the directory state may be
any combination of the individual states of A and B depending on the
primitives currently received and processed by each DSA.

That is, any of the following states may exist :
* A0, B0 (Neither Operation has yet occurred)
* A0, B1 (Partial completion of Operation 1)
* A0, B2 (Partial completion of Operation 2)
* A1, B0 (Partial completion of Operation 1)
* A1, B1 (Operation 1 has occurred)
* A1, B2 (Operation 1 completed, Partial completion of Operation 2)
* A2, B0 (Partial completion of Operation 2)
* A2, B1 (Operation 1 completed, Partial completion of Operation 2)
* A2, B2 (Operation 2 has occurred)

 The length of time for "eventuality" to be reached is dependent on
* replication agreement structure, (e.g. does the overall topology supported
by all relevant replication agreements allow ease of communication between
all DSAs?)
* scheduling policies, (e.g. are changes made onchange, or via periodic
policies (e.g. one per day?))
* DSA reliability, (e.g. are any DSAs bottlenecks in the transition of
updates from one network area to another?)
* communications links reliability. (e.g. are any DSAs separated from the
remainder of the DSAs through slow or unreliable communications links?)
Additionally, the above also influences the number and duration of these
intermediate states occupied by a DSA.

It is possible within a small community of reliable DSAs with reliable
communications links to greatly minimise the period of time spent in these
intermediate states.

***

In my view the only accurate summary would be: "LDUP currently does not
have a viable proposal for replication standards, as it did not do a
requirements analysis before embarking on design, and the result has
inevitably turned out to be broken. We are therefore seeking input on
requirements before proceeding further."

The problem for URP is that the directory service has no way of
defining what is "unrelated information" in the same entry. The whole
basis of the LDAP/X.500 data model is that an "entry" consists of
RELATED information about a "single entity". The actual relationships
among attributes and attribute values are maintained by users and their
applications, not by the directory service, so it simply cannot assume
that they are unrelated. It does assume that different entries are
unrelated, except for maintaining the tree structure, thus making
applications themselves responsible for transactions affecting multiple
entries.

This is intentionally a VERY weak requirement on the directory service,
necessary for it to be distributed and globally scalable.

By assuming that attributes of an individual entry are unrelated,
Active Directory breaks the LDAP/X.500 data model, and could be in real
trouble if a viable standards-based approach to multi-master LDAP
replication gets going.

By going further and treating even attribute values of a single
attribute as though they are unrelated, URP makes it highly unlikely
that a viable alternative to AD will actually be deployable. Why would
anyone want anything even less consistent?

Anyway, you seem to be agreeing that predictability and
understandability should be criteria by which proposals are evaluated.
Can we agree that this should be written into the requirements
document?

We obviously disagree about whether atomicity is necessary for
predictability and understandability and should also be written into
the requirements document, but here's a reminder of the close
relationship between the two:

"Re: LDUP warmup exercise: atomicity in LDAPv3", from Tim Howes (16 December
1998):

***
"Each LDAP operation (add, modify, delete, moddn) as
a whole is atomic. The whole operation either happens
or it doesn't. Changes cannot be half-applied to any
single LDAP server.

The replication consistency model must assume and
build on this basic fact to define how multiple LDAP
replicas converge to the same state over time, in
the absence of additional changes. This kind of loose
consistency model is pretty fundamental to the notion
of a directory.

My two cents on what's important in a replication
consistency model are that it must be 1) predictable,
and that it should 2) make some kind of sense to
people using the system.

All this talk of consistency at different levels
(e.g., between applications using the directory at
the same time) is a red herring. Our job is to define
a consistency model for the directory itself. Some
applications may find this model sufficient for their
needs. Others may have to build more elaborate models
on top. But let's start with the basics.    -- Tim"

http://www.imc.org/ietf-ldup/mail-archive/msg00214.html

***

Could you please respond to it? Do you agree that LDUP
has a responsibility to state its consistency model for entries
in the requirements document? Whatever that model should be,
the requirements document avoids any input about it by
simply not mentioning it, and obscures this by irrelevant
discussion of consistency between the state of different
replicas.

Seeya, Albert

PS I have corrected two typos in my original with the
intended words "understood" and "LDAPEXT" instead of
"understand" and "LDUP", and marked those corrections
with [brackets].

Also, I have not attempted to list other areas of LDAPEXT
work that would be affected by the current LDUP approach,
such as signed operations (RFC 2649) - broken by not
maintaining the concept of an "operation".