
Re: Controls and criticality



In short,

if the control is critical, the server cannot ignore it. It must either make use of it as prescribed or fail.
if the control is non-critical, the server can choose to ignore it. However, it must decide to do so before making any use of it: either the control is ignored entirely, or it is used exactly as prescribed.


Some control specifications are simply broken. No part of 'making use of the control' should depend on the value of criticality.

-- Kurt

On Nov 1, 2008, at 9:08 AM, Pierangelo Masarati wrote:

I see some inconsistency, and possibly some misunderstanding (probably
mostly on my side), about controls and criticality. This has been
discussed many times, but slapd probably has not converged yet.


The specs (RFC4511) say that controls CAN be critical or non-critical.
Implementations (DSAs) MAY or MAY NOT recognize controls.  I see many
cases; some of them are trivial, but others are not.

Let's consider a matrix, where columns represent criticality (FALSE,
TRUE) and rows represent implementation (FALSE, TRUE).

 implemented \ criticality  |  FALSE |  TRUE  |
----------------------------+--------+--------+
                     FALSE  |   F,F  |   F,T  |
----------------------------+--------+--------+
                      TRUE  |   T,F  |   T,T  |
----------------------------+--------+--------+

The specs state that if a control is not implemented, criticality
determines whether the control is ignored (crit == FALSE) or the
operation fails (crit == TRUE).  If the control is implemented but
cannot be applied, criticality again determines whether the control is
ignored (crit == FALSE) or the operation fails (crit == TRUE).

It should be clear that the criticality field is already overloaded: in
the first case the implementation does not know the control at all,
while in the second case it knows the control well enough to determine
that it cannot be applied.
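The matrix above can be condensed into a tiny decision function. This is a plain illustrative sketch, not slapd code; the failure result name follows RFC4511's unavailableCriticalExtension, and the `applicable` parameter is my shorthand for "recognized but cannot be applied to this operation":

```python
def handle_control(recognized, critical, applicable=True):
    """Decide what to do with one request control, per RFC 4511.

    recognized -- the DSA implements this control type
    critical   -- the control's criticality field
    applicable -- the DSA can actually apply it to this operation
    """
    if not recognized or not applicable:
        if critical:
            # fail the whole operation without performing it
            return "fail: unavailableCriticalExtension"
        # pretend the control was never sent
        return "ignore control, perform operation"
    # recognized and applicable: use it as its spec prescribes
    return "apply control, perform operation"
```

Note how the same `critical` flag drives both the "not recognized" and the "recognized but not applicable" branches: that is exactly the overloading described above.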



Let's now add a third, fuzzy dimension: control semantics.  In many
specs, the control's semantics try to bend the meaning of criticality
towards common sense.

The specs say that all four combinations F,F; F,T; T,F; T,T are
perfectly legitimate, and describe how a DSA MUST behave in all cases
(or CAN, when an arbitrary behavior is legitimate).

Let me start with intuitive considerations: there are cases in which
some of the four possible behaviors should (lowercase "should") not be
allowed by common sense.  However, this seems to violate a strict
interpretation of the specs.  I'll give examples shortly.  But in any
case, I believe slapd should behave:

1) consistently

2) biased towards security

Examples:

a) RFC4370, proxied authorization control, states that "Clients MUST
include the criticality flag and MUST set it to TRUE. Servers MUST
reject any request containing a Proxy Authorization Control without a
criticality flag or with the flag set to FALSE with a protocolError
error." and explains why: "These requirements protect clients from
submitting a request that is executed with an unintended authorization
identity."  This is just common sense, and is part of the control's
semantics, but it disagrees with RFC4511, which is normative.

Currently, slapd does not follow this common-sense advice, and allows
non-critical proxied authorization requests.  Please note the first
"MUST" (for the DUA) and the second "MUST" (for the DSA; the latter is
in violation of RFC4511).
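What RFC4370 asks of the server could be sketched as follows. This is a hypothetical helper, not slapd's actual code; I model an absent criticality field as None, even though on the wire an absent field simply BER-decodes to its FALSE default:

```python
def check_proxy_authz(control_present, criticality):
    """RFC 4370: a request carrying a Proxy Authorization Control whose
    criticality is absent or FALSE must be rejected with protocolError,
    before the operation is processed under any authorization identity.
    """
    if not control_present:
        return "continue"
    if criticality is not True:
        # absent (None) or FALSE: reject before any processing
        return "protocolError"
    return "continue"
```

This is the behavior the quoted text requires of the DSA, and the one slapd currently does not enforce.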

b) <draft-zeilenga-ldap-dontusecopy-04> (a work in progress) states that
"The criticality MUST be TRUE. There is no corresponding response
control." This again is common sense: if the control is not recognized,
the operation's semantics would be totally different, as (out of sync)
copies could be used instead of the original data, and preventing
exactly that is the purpose of the control. Please note the "MUST"
(uppercase; for the DUA).


Currently, slapd follows the advice of this work in progress, but the
draft will probably need to be changed before it can be published as an
RFC, because as it stands it does not comply with RFC4511 (see
ITS#5785).




Note that RFC4511 provides a sentence, about what control
specifications are to provide, that sounds a little ambiguous:
"[...] direction as to what value the sender should provide for the
criticality field (note: the semantics of the criticality field are
defined above should not be altered by the control's specification)".

In the end, the responsibility for getting into trouble is placed
entirely on the DUA; the specs "are to" (lowercase, no "MUST") provide
what the sender "should" (lowercase, no "SHOULD") provide for the
criticality field.


I believe that this is absolutely correct, but as a DSA implementor, and
not only a DUA implementor, I also believe that it is the responsibility
of the DSA to make sure its integrity and security are not compromised
by a poorly behaving DUA. I recommend we take two actions:


1) make sure slapd behaves consistently in all those cases, no matter
whether we choose to follow the letter of RFC4511 or to lean towards
the security and integrity side


2) push for a modification of RFC4511 that somehow allows the semantics
of a control to override those of the criticality field. I believe the
criticality field is a mandatory mechanism to promote interoperability
when it comes to extensions. However, there are cases where
interoperability either exists or it doesn't. In those cases, we cannot
simply delegate security and integrity to the DUA, since in the end the
DSA is responsible for both.


I believe the main consequence of allowing the DSA to reject (as
something closely related to a protocol violation) a control whose
criticality does not conform to that control's specified semantics
would be this: with criticality set to FALSE, two implementations would
behave differently depending on whether the control is implemented or
not.  But this degree of arbitrariness does not differ much from what
clients already implicitly allow when they set criticality to FALSE.

Sending dontUseCopy with criticality FALSE is an error. RFC4511 says we
should ignore the control and thus possibly return outdated data in
response.  Receiving an error in response to an error, instead of
incorrect data because the control was ignored, seems more appropriate;
not per se, but because it does less harm than behaving according to
the letter of RFC4511.
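The two possible behaviors could be contrasted like this (a hypothetical sketch; strict_per_spec is a made-up knob to show the alternative I'm arguing for, not a slapd option):

```python
def handle_dont_use_copy(critical, strict_per_spec):
    """Contrast the RFC4511-literal handling of a non-critical
    dontUseCopy with the stricter rejection argued for above."""
    if critical:
        # normal case per the draft: serve authoritative data or fail
        return "serve original data or fail: unavailableCriticalExtension"
    # criticality FALSE violates the dontUseCopy spec ("MUST be TRUE")
    if strict_per_spec:
        return "protocolError"  # an error in response to an error
    # letter of RFC4511: ignore the non-critical control
    return "ignore control; may return out-of-date copy"
```

The strict branch is the one that "brings less harm": the client made a protocol-level mistake, and gets an error rather than silently stale data.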

This is my 2c (well, reading back, it looks like more than just 2 :)

p.