[Date Prev][Date Next] [Chronological] [Thread] [Top]

Re: LDAP C API: Data Types and Legacy Implementations



Mark C Smith writes (rearranged by priority):

>> We need a way to printf() and sscanf() these types.
>> ...
> 
> I disagree.  Why?  Presumably applications will convert between
> ber_int_t and another type of sufficient size if they need to use
> printf() or scanf().

I was unclear; I meant that as one way to do printf/scanf.

> Or will that approach cause problems?

The problem is that a portable program can't know what "another type of
sufficient size" might be.  Before `long long' and C9X's `intmax_t' were
introduced, we could use `long'.  Maybe 10 years from now, we can expect
<inttypes.h> to exist and use `intmax_t'.  Until then, we'd need a
configure program to test if `long long' or worse exists, and whether or
not printf/sscanf supports it.

So, to reiterate, we need either a promise "these types are not wider
than `long'", or macros with format characters that can printf/scanf the
new types directly.

#

>> I don't quite see what the new type ber_tag_t gives us when we don't
>> know the mapping between (class,encoding,tag-value) tuples and ber_tag_t
>> values.  (Well, we know that CLASS and ENCODING must always be in the
>> low octet, otherwise most code which tests them will fail).
>> 
>> So I suggest to either add:
>>         unsigned long lber_tag_t_to_value(ber_tag_t)
>>     and
>>         ber_tag_t     lber_value_to_tag_t(unsigned long)
>> 
>> or: that the mapping between ber_tag_t and (class,encoding,tag-value) is
>> defined in the draft -- I suggest to read it as a raw integer from the
>> network with the least significant byte first; that way the bitmask
>> 0xe0 will extract the encoding and class even for multi-octet tags.
> 
> I don't think we need to add new functions to address this issue, but
> some clarifying text might be useful.  Maybe I misunderstand what you
> suggest, but it would strike me as very odd to interpret a ber_tag_t
> "least significant byte first."

Odd, yes.  That and functions to avoid it are just the 'least bad
choices' I can think of which will let us inspect or construct a
ber_tag_t, e.g. to get at the CONSTRUCTED bit.

> In all implementations of the ber...() functions I know of, the tag
> values are simple integers where leading zero bits are essentially
> ignored.

If they inherit from umich ldap, they malfunction with multi-octet tags
anyway.  Umich ldap sometimes assumes that host integers are big-endian
and sometimes not: for example, ber_printf doesn't correctly write
multi-octet tags read with ber_peek_tag.  Also, it sometimes assumes
tags are single-octet, and of course that 'char' is 8-bit.

Anyway, the reason I raise this point again is mostly aesthetic: I think
this looks worse with a nice-looking named type than with an unexplained
'unsigned long'.

#

> I am not convinced we need to add ber_uint_t as it won't be used in any
> of the API calls defined by the draft.  Can you provide an example that
> shows why it is needed?

Not in the API.  It's just that unsigned versions of integer types tend
to be useful.  (Pointers to) unsigned somethings get used when the
signed something works wrongly in shift operations, or can get overflow
traps, or the program says things like ((unsigned something)-1)/2 to get
the max value (unless you #define LBER_INT_<MIN/MAX>), and so on.
Usually 'unsigned long' is fine (if the something is known to be no
larger than 'long'), but not always.
That's all.

-- 
Hallvard