RE: simpler lockobj for back-bdb entry cache locking
> -----Original Message-----
> From: Jonghyuk Choi [mailto:email@example.com]
> I also remember the discussion a while ago.
> Anyway, I spoke too soon on the DB access numbers.
> It turned out that most DB accesses when everything is cached
> are for non-existent entries in the index databases - because
> they are not cached in the IDL cache.
> In the example case, it was (objectClass=referral) in the filter
> (the referral objectClass is not contained in any of the
> directory entries used for the test).
> To see the effect, I took the filter out :
> servers/slapd/slapd : 54.5%
> libpthread : 5.3%
> libc : 17.0%
> vmlinux : 14.4%
> libdb : 2.1% (now all of this is locking)
> DirMark : 2460 ~ 2534 ops/sec
That's pretty good, considering that we were only at 2100 four days ago. Can't
argue with a 20% speedup...
> I wonder which one is the right approach :
> 1) to have the IDL cache store indexing keys whose results are empty
> 2) to add an IDL cache and/or index DB entry for special cases such as
> (objectClass=referral). Option 1) would also cover user-supplied empty
> indexing keys while option 2) only covers those generated by the system.
> In case of 1) we could also limit the number of empty IDL entries to a
> certain value.
1) sounds good to me. I don't know if it's necessary to limit the number of
empty IDLs in the cache. If there's already an LRU scheme, that should be
sufficient. If there's an empty IDL that is referenced all the time, it
shouldn't get pushed out of the cache just because some secondary limit was
hit.
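For the archives, option 1) (negative caching under the existing LRU) amounts to something like the sketch below. The names (idl_cache_get/idl_cache_put, IDL_EMPTY, the fixed slot count) are illustrative, not the actual back-bdb code; the point is just that an "empty" result is stored and evicted like any other IDL, so a lookup for a key with no matches never has to hit the index database:

```c
/* Minimal sketch of negative IDL caching with LRU eviction.
 * All names here are hypothetical, not back-bdb's real API. */
#include <assert.h>
#include <string.h>

#define CACHE_SLOTS 4          /* tiny, for illustration only */
#define IDL_EMPTY   ((long)-1) /* sentinel: key exists, zero matches */

struct idl_slot {
    char key[64];   /* index key, e.g. "objectClass=referral" */
    long idl;       /* entry-ID-list handle, or IDL_EMPTY */
    unsigned stamp; /* LRU tick of last use */
    int used;
};

static struct idl_slot cache[CACHE_SLOTS];
static unsigned tick;

/* Return the cached IDL; IDL_EMPTY counts as a hit. 0 means a miss,
 * i.e. we must go to the index database. */
long idl_cache_get(const char *key)
{
    for (int i = 0; i < CACHE_SLOTS; i++)
        if (cache[i].used && strcmp(cache[i].key, key) == 0) {
            cache[i].stamp = ++tick;  /* touch for LRU */
            return cache[i].idl;
        }
    return 0;
}

/* Insert a result (possibly IDL_EMPTY), evicting the
 * least-recently-used slot when the cache is full. */
void idl_cache_put(const char *key, long idl)
{
    int victim = 0;
    for (int i = 0; i < CACHE_SLOTS; i++) {
        if (!cache[i].used) { victim = i; break; }
        if (cache[i].stamp < cache[victim].stamp) victim = i;
    }
    strncpy(cache[victim].key, key, sizeof cache[victim].key - 1);
    cache[victim].key[sizeof cache[victim].key - 1] = '\0';
    cache[victim].idl = idl;
    cache[victim].stamp = ++tick;
    cache[victim].used = 1;
}
```

Since a heavily referenced empty IDL keeps getting touched, the LRU alone keeps it resident, which is why a separate cap on empty entries seems unnecessary.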
> I also changed the lockobj value from e_id to e_nname again :
> servers/slapd/slapd : 53.9%
> libpthread : 5.6%
> libc : 17.0%
> vmlinux : 14.3%
> libdb : 2.4% (hashing overhead added - but this time, it's less than
> DirMark is in the same range as the above.
> In the new profiling results, only libc and
> thread scheduling / connection management are left.
> e_id : ftp://ftp.openldap.org/incoming/profile4.txt
> e_nname : ftp://ftp.openldap.org/incoming/profile5.txt
> Any new observations and comments on profiles welcome ...
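To illustrate where the hashing overhead in the e_nname numbers comes from: with e_id the lock object is already a fixed-size value, while a normalized DN is variable-length and must be hashed down first. The sketch below uses FNV-1a purely as a stand-in for whatever hash the lock manager actually applies; the function names are hypothetical:

```c
/* Sketch of deriving a fixed-size lock object from e_id vs. e_nname.
 * FNV-1a is illustrative, not necessarily the hash libdb uses. */
#include <assert.h>
#include <stdint.h>

/* With e_id, the lock object is the fixed-size ID itself: no work. */
uint32_t lockobj_from_id(uint32_t e_id)
{
    return e_id;
}

/* With e_nname, the variable-length normalized DN is hashed down to a
 * fixed-size value, which is the extra cost showing up under libdb. */
uint32_t lockobj_from_nname(const char *e_nname)
{
    uint32_t h = 2166136261u;        /* FNV-1a offset basis */
    for (; *e_nname; e_nname++) {
        h ^= (unsigned char)*e_nname;
        h *= 16777619u;              /* FNV-1a prime */
    }
    return h;
}
```

The cost is one pass over the DN per lock request, which matches the observation that libdb's share ticks up slightly (2.1% to 2.4%) while DirMark stays in the same range.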
In my own tests I've observed that the server response time has improved as
well. Previously with 10 clients I saw slapd spawn 12 threads to handle all
the queries. With the current code, it only spawned 9 - which means it was
actually dispatching operations faster than the clients could submit them.
(Client and server both on the same machine.) Pretty astonishing...
-- Howard Chu
Chief Architect, Symas Corp. Director, Highland Sun
Symas: Premier OpenSource Development and Support