
Re: Large delay before transmitting search result



Rene B. Elgaard writes:

> Why is it [ber_printf] shifting the buffer 1 byte ?

A BER-encoded sequence/set consists of {tag, length, data}.  The number
of octets in the length field depends on the size of the data.
ber_printf() does not know in advance how big the data will be, so it
reserves 5 bytes for the length up front.

When the sequence is complete, liblber fills in the actual length and
moves the data up to close the gap.  This happens recursively, since
there are sequences/sets inside sequences/sets.  Also liblber grows the
buffer dynamically, so there can be some reallocs moving the data too.

> If this could be optimized somehow, it could lead to a huge
> improvement in response time.

For now I suggest we use the full 5-byte length encoding whenever the
length exceeds some threshold.  That is valid in BER and in LDAP's
somewhat restricted BER, but not in DER.  Possibly some LDAP
implementations, or even some part of OpenLDAP, expect DER anyway, so we
might have to clean up further or revert such a change.

Maybe the realloc strategy can also be tweaked.

Next would be two-pass generation of sequences/sets: first use 5-byte
lengths and remember their positions, then shorten them.  That would
eliminate all but one memmove.  It needs some new data structures to
track the length positions, though.

Maybe it'd be easier to let the caller do more of the work: call liblber
twice, first just so it can compute the lengths, then again with the
same data so it can fill them in.

To get rid of the final memmove, we'd have to feed a data structure
describing the entire BER element to liblber, so it could get it right
on the first attempt.  Not anytime soon; that would require rewriting
both liblber and everything that uses it.


I'm not currently volunteering to do any of this, sorry.

-- 
Hallvard