Re: CHAR_BIT > 8 in LDAP C API
Kurt D. Zeilenga writes:
>At 08:53 PM 6/20/99 +0200, Hallvard B Furuseth wrote:
>>How should bit strings be represented on hosts where
>>'char' has more than 8 bits?
>
> You place the bits in an array sufficient to hold your bits. If
> you have 15 bits to send and your char is 8 bits, you need 2 chars.
> If char is 9 bits, you need 2. If char is 7 bits, you need 3.
> If char is 16 bits, you need 1.
Thanks. (I was wondering whether we'd use only 8 bits per char, like
octet strings, in case that wasn't clear.) A 7-bit char is no trouble:
char is always >= 8 bits.
> Regardless of the number of bits, I note that we might want
> to clarify the bit order and padding of last 'char' requirements.
Left-shifted, though one must read the ASN.1 spec to see that.
I agree it would be nice to have that in the draft (along with your
"use all bits in the byte" answer).
> More interesting is 'char' less than 8 bits as the value 0xff
> (boolean truth) is not representable. This could be changed
> to say "zero for FALSE or non-zero for TRUE"
TRUE is passed as int, not char, so sending TRUE as 0xff is no
trouble. I think that should stay, since RFC 2251 uses 0xff for TRUE.
> the page 5 architecture clarification ('int' at least 32) be extended:
>
> This API is designed for use in environments where
> 'char' is at least 8 bits in size
I think the requirement is:
'char' is unsigned or two's complement
Otherwise each 'unsigned char' in an octet string must be converted to
'char' before it can be used as a string, and vice versa.
Besides, even fgets() and printf() are broken on a host that fails the
above requirement, so only a freestanding implementation could violate
it and still call itself ISO C.
> and 'int' is at least 32 bits in size.
I hope this requirement will be unnecessary when we get ber_int_t.
--
Hallvard