
Re: ASN1 - syncInfoValue



At 11:12 AM 3/16/2005, Lee Jensen wrote:
>On Wed, 2005-03-16 at 10:40 -0800, Kurt D. Zeilenga wrote:
>> At 09:57 AM 3/16/2005, Lee Jensen wrote:
>> >I figured out my problem from yesterday... It seems the decoding
>> >specification I was using did not mark certain boolean elements as
>> >OPTIONAL, so they were expected and the decoding did not work properly.
>> 
>> Note that OPTIONAL != DEFAULT FALSE.
>> 
>> I suggest you look at how DEFAULT FALSE is handled elsewhere in
>> net::ldap.  Such ASN.1 constructions are common in LDAP.
>
>Actually that's the reason I ended up using OPTIONAL... Below is another
>part of Net::LDAP's ASN1 spec:
>    Control ::= SEQUENCE {
>        type             LDAPOID,
>        critical         BOOLEAN OPTIONAL, -- DEFAULT FALSE,
>        value            OCTET STRING OPTIONAL }
>
>It would seem that Convert::ASN1 doesn't support default values, which
>I believe it should. It causes a bit of ambiguity, because when LDAP
>returns a syncInfoMessage the refreshDeletes flag is simply missing, as
>opposed to False...  Maybe I should work on adding default-value
>functionality to Convert::ASN1 while I'm at it. Otherwise I have to
>assume in my code that non-existent means false, but that causes
>problems with DEFAULT TRUE... Lame...

Note that the LDAP TS specifically requires that DEFAULT values
not be transferred.  It is a protocol error to send FALSE for a
field declared DEFAULT FALSE.  See Section 5.1 of RFC 2251.

So, yes, I think it would be good to add DEFAULT support to
Convert::ASN1.  But that's a discussion for the net::ldap list.
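To make the rule concrete, here is a minimal sketch (in Python, not Convert::ASN1's actual interface) of how an encoder/decoder pair handles DEFAULT fields: a value equal to its declared DEFAULT is omitted on encode, and an absent field is filled back in on decode. The field names are illustrative only.

```python
# Sketch of DEFAULT-value handling, per RFC 2251 section 5.1:
# DEFAULT values are never transferred on the wire.

DEFAULTS = {"critical": False}  # per-field ASN.1 DEFAULT values (example)

def encode_fields(values):
    """Drop any field whose value equals its declared DEFAULT."""
    return {k: v for k, v in values.items()
            if not (k in DEFAULTS and v == DEFAULTS[k])}

def decode_fields(fields):
    """Re-apply DEFAULT values for fields absent on the wire."""
    out = dict(DEFAULTS)
    out.update(fields)
    return out

wire = encode_fields({"type": "1.2.3", "critical": False})
assert "critical" not in wire            # DEFAULT value not transferred
assert decode_fields(wire)["critical"] is False
```

This also resolves the ambiguity Lee mentions: with DEFAULT support, "missing" on the wire unambiguously decodes to the declared default, whether that default is FALSE or TRUE.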

>> >I have another issue. I am just wondering: how exactly are the
>> >syncUUIDs encoded?
>> 
>> Each syncUUID is an OCTET STRING of size 16.  So it's
>> encoded as 04 10 followed by the 16 octets of the UUID.
>> 
>> >When I try to print out the decoded OCTET
>> >STRINGS they are not standard ascii or utf-8 characters.
>> 
>> You cannot just print the UUID as its octets are not character
>> data.  You need to encode them (using say the UUID string
>> format) before printing.
>
>K, this all makes sense... I guess I was just confused about the
>discrepancy between the string representation stored as entryUUID in
>LDAP and the value I was getting back from the sync message. I'm not
>aware of whether Perl has a conversion function.

unpack?
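In Perl, unpack can indeed do the conversion in one line, e.g. `join '-', unpack 'H8 H4 H4 H4 H12', $syncuuid`. For illustration, here is the same round-trip using Python's standard uuid module (the sample octets are arbitrary):

```python
import uuid

# A syncUUID arrives as 16 raw octets (the payload after the 04 10
# BER tag/length header).
raw = bytes(range(16))

# 16 octets -> dashed string form (what entryUUID looks like)
s = str(uuid.UUID(bytes=raw))
assert s == "00010203-0405-0607-0809-0a0b0c0d0e0f"

# string form -> 16 octets (for comparing against syncUUIDs)
assert uuid.UUID(s).bytes == raw
```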

>Am I correct in using
>the following RFC to implement conversion functions??
>http://mirror.switch.ch/cgi-bin/search/nph-findstd?preview=draft-mealling-uuid-urn-05.txt&scope=draft

Assuming that's the latest uuid-urn I-D, yes.

>> You actually should store them as a 16-octet value
>> and compare them to the 16-octet value **represented by**
>> the entryUUID value.  That is,
>> do NOT:
>>         compare(uuid2str(syncuuid), entryuuid)
>> do:
>>         compare(syncuuid, str2uuid(entryuuid))
>> 
>> as a single UUID has multiple possible string representations.
>> 
>> That is, follow the general rule that comparisons in LDAP are
>> to be done between abstract values, not physical representations
>> of those abstract values.
>
>I'm confused as to why the "abstract" representation is binary data
>and the physical is a string. How is the data actually stored in the
>database?  I assume you use the binary representation as a key, not the
>string. I would also assume that when you, say, request the entryUUID
>attribute with ldapsearch, it gets changed from its binary form to the
>string representation.

The protocol doesn't care how it's stored; that's an
implementation detail.  The protocol uses the 16-octet form here
as it's more compact.

In slapd(8), we actually store both.  Basically, we generate the
string representation once and return it many times (instead
of generating it on demand).  This fits better into our general
unnormalized vs. normalized framework for attribute values.
Remember, the UUID string representation is not canonical,
so all lookups, compares, and mappings are done using normalized
values (in this case, the 16-octet value).

It's possible to design an implementation that stores only
one representation internally, as well as implementations that
use other representations internally.  For instance, one could
internally represent the UUID as a "large" integer.

If I were writing an LDAPsync client, I'd use the 16-octet
representations internally as that is generally more efficient
(both in space and time) and only produce the string representations
when I needed to present a UUID to a user (such as in debugging).
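A small sketch (in Python, with a hypothetical matches() helper) of why the comparison must be done on normalized 16-octet values rather than strings: the same UUID has multiple string spellings, so comparing strings can produce spurious mismatches.

```python
import uuid

def str2uuid(s: str) -> bytes:
    """Normalize an entryUUID string to its 16-octet value."""
    return uuid.UUID(s).bytes

# Two different string spellings of the same UUID...
a = "DE305D54-75B4-431B-ADB2-EB6B9E546014"
b = "de305d54-75b4-431b-adb2-eb6b9e546014"
assert a != b                      # string compare: spurious mismatch
assert str2uuid(a) == str2uuid(b)  # normalized compare: equal

def matches(syncuuid: bytes, entryuuid: str) -> bool:
    # do: compare(syncuuid, str2uuid(entryuuid))
    # NOT: compare(uuid2str(syncuuid), entryuuid)
    return syncuuid == str2uuid(entryuuid)

assert matches(str2uuid(a), b)
```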

>Again, I really appreciate the help and clarification...
>
>Lee Jensen