
Re: openldap 2.4.16 hangs with dncachesize smaller than max number of records



Howard,

The idea was exactly to use a large dncachesize so that searches and 
random database access would not be affected.
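For context, the settings I mean are roughly these (the values here are illustrative, not my exact slapd.conf):

```
# back-bdb cache directives in slapd.conf (illustrative values).
# The intent was a dncachesize at least as large as the total number
# of entries, so DN lookups never have to fall back to disk.
cachesize      500000      # full entries kept in memory
dncachesize    20000000    # one slot per entry in the DB
idlcachesize   500000      # cached index result lists
```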

The issue is that after the DB is filled, any search hangs from time to 
time even though there are many entries in the cache. I was expecting 
some performance impact when an entry is evicted or re-cached, but if 
ldapsearch were stopped and a new one started, the search should be 
faster, since millions of entries are already cached.

What I'm seeing is exactly the opposite: after the cache is filled, even 
a new search or query is very slow. See the example below, taken after I 
stopped the first ldapsearch (which filled the cache) and started a new one:

[root@brtldp12 ~]# date;cat /backup/test_temp_CONTENT.ldif|egrep -e 
'^pnnumber' |wc -l;sleep 1;date;cat /backup/test_temp_CONTENT.ldif|egrep 
-e '^pnnumber' |wc -l
Wed Jun 17 21:29:10 BRT 2009
3380
Wed Jun 17 21:29:11 BRT 2009
3380
[root@brtldp12 ~]# date;cat /backup/test_temp_CONTENT.ldif|egrep -e 
'^pnnumber' |wc -l;sleep 1;date;cat /backup/test_temp_CONTENT.ldif|egrep 
-e '^pnnumber' |wc -l
Wed Jun 17 21:29:21 BRT 2009
3514
Wed Jun 17 21:29:22 BRT 2009
3536
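The check above boils down to a small sketch (the LDIF path and attribute name are the ones from my example; `grep -c` stands in for the `egrep | wc -l` pipeline):

```shell
# Count the dumped '^pnnumber' lines twice, one second apart; a zero
# delta means slapd returned no new entries during that interval.
LDIF=${LDIF:-/backup/test_temp_CONTENT.ldif}
c1=$(grep -c '^pnnumber' "$LDIF")
sleep 1
c2=$(grep -c '^pnnumber' "$LDIF")
echo "delta: $((c2 - c1)) entries/s"
```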

Note in the first two lines that slapd appears to hang: even after one 
second there is no increment in the file into which I'm dumping the 
returned records. I was expecting the opposite, since the cached entries 
should be in memory and be returned to ldapsearch faster.

I'm not totally sure how the cache works, but I don't see any I/O that 
could justify these hangs, nor any other hardware resource limitation.

I think this behavior is related to dncachesize being smaller than the 
maximum number of records. I'm not sure, without looking at the code, 
whether some constraint is hit after the cache is filled that could 
explain this behavior. I can attach gdb and try to grab more 
information for you.

Thanks,

Rodrigo.

Howard Chu wrote:
> Rodrigo Costa wrote:
>>
>> Howard,
>>
>> I tried bigger caches but I do not have enough memory to apply them.
>> This was the reason I only tried the dncachesize to speed up search 
>> queries.
>>
>> I also have this same database running in an old OpenLDAP version (2.1.30),
>> even with a few more records in it. In that version, as I understand it,
>> there isn't any cache in OpenLDAP, only in BDB.
>
> False. back-bdb was originally written without entry caching, but it 
> was never released that way, and entry and IDL caching are both 
> present in 2.1.30.
>
>> See how it is
>> behaving in terms of memory and CPU under OpenLDAP 2.1.30:
>>
>>   PID USER     PRI  NI  SIZE  RSS SHARE STAT %CPU %MEM   TIME CPU 
>> COMMAND
>>   3651 root      24   0  136M 135M  126M S     0.0  1.1   0:00   1 slapd
>>
>> Note the really small memory consumption and really reasonable
>> performance. The only issue I have with this version is the
>> replication mechanism; I would like to increase availability by
>> using syncrepl instead of slurpd.
>>
>> The problem is that for versions after 2.1 it looks like we need
>> enough memory to cache the whole database, since there are many situations
>> where slapd's CPU or memory usage increases and performance drops
>> considerably.
>
> Not all, but as the documentation says, the dncache needs to be that 
> large. None of the other caches are as critical.
>
>> I tried removing every cache setting from slapd.conf to see whether the
>> previous version's performance, reading directly from disk, could be
>> reproduced. I saw a good start, around 500 entries returned per second, but
>> after some time slapd hung and did not return any more records to
>> ldapsearch. It also consumed all 4 CPUs:
>>
>>    PID USER      PR  NI  VIRT  RES  SHR S %CPU %MEM    TIME+
>> COMMAND
>>   4409 ldap      15   0  400m 183m 132m S  400  1.5  27:22.70 slapd
>>
>> And even after I stopped the ldapsearch, slapd continued consuming
>> all CPUs. I believe it entered a dead loop.
>
> You should get a gdb snapshot of this situation so we can see where 
> the loop is occurring.
>
>> I do not have a heavily loaded system, but given the number of records
>> and DBs I have some memory resource constraints.
>
> Performance is directly related to memory. If the DB is too large to 
> fit in memory then you're stuck with disk I/Os for most operations and 
> nothing can improve that. You're being misled by your observation of 
> "initially good results" - you're just seeing the filesystem cache at 
> first, but when it gets exhausted then you see the true performance of 
> your disks.
>
>> I also tried some smaller caches, like :
>>
>> cachesize         500000
>> dncachesize     500000
>> idlcachesize    500000
>> cachefree       10000
>>
>> But it also hangs the search after some time.
>>
>> I was wondering if there is a way to run slapd without caching, reading
>> from disk (like the first-time read that inserts a record into the cache),
>> which is enough for small/medium systems in terms of queries. That way I
>> could keep the 2.1.30 behavior together with the new syncrepl
>> replication.
>
> The 2.4 code is quite different from 2.1, there is no way to get the 
> same behavior.
>
> -- 
>   -- Howard Chu
>  CTO, Symas Corp.           http://www.symas.com
>  Director, Highland Sun     http://highlandsun.com/hyc/
>  Chief Architect, OpenLDAP  http://www.openldap.org/project/
>
>