paged results gives invalid cookie (ITS#3089)

Full_Name: Daniel Armbrust
Version: HEAD
OS: Redhat 9
Submission from: (NULL) (

When I do a search (from Java client code) that returns a fairly large number of
results, with the page size set quite small (usually less than 100), at some
point while iterating over the results I get an error when I ask for the next
page:  [LDAP: error code 53 - paged results cookie is invalid or old].
This seems to happen more frequently with a small page size - making the page
size larger usually either changes the point where it fails, or makes it not
fail at all.
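For reference, the client-side loop is the standard JNDI paged-results pattern: send the control (OID 1.2.840.113556.1.4.319, the same one shown in the debug log below), drain a page, pull the cookie from the response control, and resend it for the next page. This is a minimal sketch, not my exact code - the host, base DN, and filter here are placeholders:

```java
import java.util.Hashtable;
import javax.naming.Context;
import javax.naming.NamingEnumeration;
import javax.naming.directory.SearchControls;
import javax.naming.directory.SearchResult;
import javax.naming.ldap.Control;
import javax.naming.ldap.InitialLdapContext;
import javax.naming.ldap.LdapContext;
import javax.naming.ldap.PagedResultsControl;
import javax.naming.ldap.PagedResultsResponseControl;

public class PagedSearch {
    public static void run() throws Exception {
        Hashtable<String, String> env = new Hashtable<>();
        env.put(Context.INITIAL_CONTEXT_FACTORY, "com.sun.jndi.ldap.LdapCtxFactory");
        env.put(Context.PROVIDER_URL, "ldap://localhost:389"); // placeholder host
        LdapContext ctx = new InitialLdapContext(env, null);

        int pageSize = 50; // small page sizes trigger the failure sooner
        ctx.setRequestControls(new Control[] {
                new PagedResultsControl(pageSize, Control.CRITICAL) });

        SearchControls sc = new SearchControls();
        sc.setSearchScope(SearchControls.SUBTREE_SCOPE);

        byte[] cookie;
        do {
            // drain one page of results
            NamingEnumeration<SearchResult> results =
                    ctx.search("dc=example,dc=org", "(objectClass=*)", sc);
            while (results.hasMore()) {
                results.next();
            }

            // extract the server's cookie from the response control
            cookie = null;
            Control[] controls = ctx.getResponseControls();
            if (controls != null) {
                for (Control c : controls) {
                    if (c instanceof PagedResultsResponseControl) {
                        cookie = ((PagedResultsResponseControl) c).getCookie();
                    }
                }
            }

            // resend the control with the cookie to request the next page;
            // this is the request the server eventually rejects with err=53
            ctx.setRequestControls(new Control[] {
                    new PagedResultsControl(pageSize, cookie, Control.CRITICAL) });
        } while (cookie != null && cookie.length > 0);
        ctx.close();
    }
}
```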

The debug log on the server looks like this when the failure happens:
ber_scanf fmt ({miiiib) ber:
>>> dnPrettyNormal: <dc=concepts,
=> ldap_bv2dn(dc=concepts,
<= ldap_bv2dn(dc=concepts,
=> ldap_dn2bv(272)
<= ldap_dn2bv(dc=concepts,codingScheme=ICD9,dc=codingSchemes,service=ICD9Service,dc=HL7,dc=org,272)=0
=> ldap_dn2bv(272)
<= ldap_dn2bv(dc=concepts,codingScheme=ICD9,dc=codingschemes,service=icd9service,dc=hl7,dc=org,272)=0
<<< dnPrettyNormal: <dc=concepts,codingScheme=ICD9,dc=codingSchemes,service=ICD9Service,dc=HL7,dc=org>,
ber_scanf fmt ({mm}) ber:
ber_scanf fmt ({M}}) ber:
=> get_ctrls
ber_scanf fmt ({m) ber:
ber_scanf fmt (b) ber:
ber_scanf fmt (m) ber:
=> get_ctrls: oid="1.2.840.113556.1.4.319" (critical)
ber_scanf fmt ({im}) ber:
<= get_ctrls: n=1 rc=53 err="paged results cookie is invalid or old"
send_ldap_result: conn=0 op=4068 p=3
send_ldap_response: msgid=4069 tag=101 err=53
ber_flush: 53 bytes to sd 9
do_search: get_ctrls failed
connection_get(9): got connid=0
connection_read(9): checking for input on id=0
ber_get_next on fd 9 failed errno=104 (Connection reset by peer)
connection_read(9): input error=-2 id=0, closing.
connection_closing: readying conn=0 sd=9 for close
connection_close: conn=0 sd=9

With a given page size and a given database, the error always happens at the
same point.  Different databases have different failure points (and for that
matter, the same database with the same page size fails at a different point
depending on the speed of the hardware it is running on - it seems to happen
more often on faster hardware).

I could consistently reproduce this behavior with versions older than 2.2.7.  I
could not reproduce it in 2.2.7 (so I assumed it was fixed), but now I can
reproduce it with the code I checked out from HEAD today.

Also, with today's code, once this error occurs, the server will not allow me
to get any paged results at all until I restart it (with versions prior to
2.2.7, it would allow me to get results up to the failure point).

If necessary, I should be able to provide a database that consistently
reproduces this error.