
SUMMARY: Re: cachesize does not exceed 1000 entries



Folks,
I finally found the problem after much frustration --> Solaris. Once I tried the same query on Linux and was able to retrieve >1000 entries, I figured the problem was with the LDAP client on Solaris.
Looking in the logs, Solaris was using paged results from back-bdb. When doing an LDAP query that consults nss_ldap, such as "finger", libsldap.so is hard-coded with a LISTPAGESIZE of 1000 (see http://cvs.opensolaris.org/source/xref/usr/src/lib/libsldap/common/ns_internal.h). I grabbed the libsldap source from OpenSolaris.org, increased the page size limit, and hacked it so it would work on Solaris 9, and lo and behold it worked -- I was able to retrieve >1000 entries.
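(A quick way to see this paging behavior from the command line, with any reasonably recent OpenLDAP ldapsearch, is to request paged results explicitly; the URI, base and filter below are only placeholders for your own setup.)

  # ask the server for 1000-entry pages, the same page size libsldap uses;
  # with /noprompt the client keeps fetching pages until the result set ends
  ldapsearch -x -H ldap://ldap.example.com -b "dc=example,dc=com" \
      -E pr=1000/noprompt "(objectClass=inetOrgPerson)" cn uid

If one client stops at exactly 1000 entries and another doesn't, the client-side page handling is the likely culprit.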


However, I didn't feel comfortable putting my hacked libraries on production Solaris servers, so I stripped all the pagedResults control handling from back-bdb/search.c, and this worked too with the stock Solaris client.
This is how I'm going to leave the system for now.
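(If you go this route, one quick sanity check is to look at which controls the server advertises in its root DSE; the pagedResults control is OID 1.2.840.113556.1.4.319. Note that advertising a control and honoring it in a given backend are separate questions, and the URI below is a placeholder.)

  # list the controls advertised in the root DSE
  ldapsearch -x -H ldap://ldap.example.com -b "" -s base "(objectClass=*)" supportedControl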


Is it possible to request more flexibility in handling these controls (e.g., pagedResults) in a future version of OpenLDAP?

Thanks for everyone's help.

Robert Petkus

Pierangelo Masarati wrote:

Pierangelo Masarati wrote:

>>Folks,
>>I'm still having the same problem -- I'd <<really>> like to get this
>>working because it really seems like a waste to proxy and not cache.
>>I took Aaron's suggestion, querying and tracing the backend LDAP server
>>to see if its responses were limited to 1000 entries, but this was not
>>the case.
>
>Do you mean that the proxies, without pcache, return more than 1000 entries?
I suggest you try to query the remote database directly with ldapsearch,
using a query that matches the proxy attrset and the proxytemplate, and
check whether you hit any limit; moreover, on the remote server, with the
debug level set to "-d 4", look for a group of lines of the form
<log>
SRCH "dc=example,dc=com" 2 0 0 0 0
filter: (objectClass=inetOrgPerson)
attrs: cn uid uidnumber gidnumber gecos description homedirectory loginshell
</log>
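(For illustration, a direct test of that shape could look like the following; the URI and base are placeholders, and the filter and attribute list should match your proxyattrset and proxytemplate. In the SRCH line above, the third and fourth numbers after the base DN are the sizelimit and timelimit requested by the client.)

  # query the remote server directly, bypassing the proxy
  ldapsearch -x -H ldap://remote.example.com -b "dc=example,dc=com" \
      "(objectClass=inetOrgPerson)" \
      cn uid uidnumber gidnumber gecos description homedirectory loginshell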
If any limit was requested by the client (e.g., in the following example,
SIZELIMIT and TIMELIMIT were set to 1000 and 3600 in ldaprc), you'll see
<log>
SRCH "dc=telco,dc=com" 2 0 1000 3600 0
filter: (objectClass=inetOrgPerson)
attrs: cn uid uidnumber gidnumber gecos description homedirectory loginshell
</log>
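(As an illustration, an ldaprc along these lines would produce that kind of request; the values are the ones used in the example.)

  # ~/.ldaprc or ./ldaprc -- client-side defaults picked up by the LDAP tools
  SIZELIMIT 1000
  TIMELIMIT 3600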
If you see any sizelimit appear in the request, then it's your client
picking it up from some ldap.conf, ldaprc or .ldaprc file in the default
location, in your home directory or in your working directory.
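(One way to track down where such a limit comes from, assuming the usual file locations -- the system-wide path depends on how your client libraries were built -- is simply to grep for it.)

  # check the common OpenLDAP client configuration locations
  grep -i -E 'sizelimit|timelimit' /etc/openldap/ldap.conf ~/.ldaprc ./ldaprc 2>/dev/null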


>>Some curiosities here:
>>1. If I use ldap as the database with overlay pcache, "cachesize" is
>>completely ignored and always defaults to 1000. However, the
>>"sizelimit" is adhered to but <not> if it exceeds 1000.
>
>I note that, as of 2.2, "sizelimit" is a wrapper around the "limits"
>command, which is per-database, while "sizelimit" should be global, so I
>suspect you should place "sizelimit" before any database statement; I need
>to check.
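(To make the placement concrete, a minimal slapd.conf sketch follows; the suffix is a placeholder, and whether the global form is honored by your version is precisely what is being checked above.)

  # global section -- before any "database" line
  sizelimit unlimited

  database        bdb
  suffix          "dc=example,dc=com"
  # per-database alternative: the limits directive
  limits          users size=unlimited time=unlimited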
>
>>2. If I use meta as the database with overlay pcache, "cachesize" is
>>functional > 1000 entries but searches only work once -- the initial
>>search is cached but subsequent searches don't retrieve from cache but
>>just query the database endlessly.
>
>I note that you shouldn't use back-meta if you're proxying a single
>target; however, the behavior you see shouldn't occur. I'll try to
>reproduce it and, if need be, you should file an ITS. I'll be back.
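(For what it's worth, a single-target pcache setup over back-ldap usually takes roughly this shape; the URI, suffix, directory and numbers are placeholders to adapt, and the directive names used here -- proxycache, proxyattrset, proxytemplate -- are the ones from the pcache overlay of that era. Note that the template filter is written with the assertion value left empty.)

  database        ldap
  suffix          "dc=example,dc=com"
  uri             ldap://remote.example.com

  overlay         pcache
  # proxycache <cache-DB> <max_entries> <numattrsets> <entry_limit> <cc_period>
  proxycache      bdb 10000 1 2000 100
  proxyattrset    0 cn uid uidnumber gidnumber gecos description homedirectory loginshell
  proxytemplate   (objectClass=) 0 3600
  # directives for the local cache database follow; "cachesize" here is
  # the back-bdb in-memory entry cache, not a pcache limit
  cachesize       10000
  directory       /var/lib/ldap/pcache
  index           objectClass eq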
I couldn't reproduce the issue with HEAD code using back-meta; a database
with 2000 entries matching the proxytemplate was successfully cached
during the first query, and subsequent searches were answered from the
cache. I suggest you upgrade and check whether the problem persists;
if it does, please file an ITS with detailed instructions to reproduce it.


p.


SysNet - via Dossi,8 27100 Pavia Tel: +390382573859 Fax: +390382476497