RE: simpler lockobj for back-bdb entry cache locking
I also remember the discussion a while ago.
Anyway, I spoke too soon on the DB access numbers.
It turned out that when everything is cached, most remaining DB accesses
come from lookups of non-existent keys in the index databases - because
negative results are not cached in the IDL cache.
In the example case, it was (objectClass=referral) in search_candidates()
(the referral objectClass is not contained in any of the directory entries
used for the test).
To see the effect, I took the filter out :
servers/slapd/slapd : 54.5%
libpthread : 5.3%
vmlinux : 14.4%
libdb : 2.1% (now this is all for locking)
DirMark : 2460 ~ 2534 ops/sec
I wonder which one is the right approach :
1) have the IDL cache store indexing keys whose result is empty
2) add IDL cache and/or index DB entries for special cases such as
(objectClass=referral)
Option 1) would also cover user-supplied empty indexing keys, while
option 2) only covers those generated by the system. In case of 1) we
could also limit the number of empty IDL entries to a certain value.
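A minimal sketch of option 1) with the size cap mentioned above - a
"negative" cache of index keys known to yield an empty IDL, evicting
round-robin once full. All names and sizes here are illustrative
assumptions, not the actual back-bdb IDL cache structures:

```c
#include <string.h>

/* Hypothetical bounded negative-IDL cache: remembers index keys whose
 * lookup produced an empty result, so the index DB access can be
 * skipped next time.  The slot cap bounds memory use even against
 * arbitrary user-supplied keys. */

#define NEG_CACHE_SLOTS 4      /* small cap, for illustration only */
#define NEG_KEY_MAX     64

typedef struct neg_cache {
    char keys[NEG_CACHE_SLOTS][NEG_KEY_MAX];
    int  used;                 /* number of occupied slots */
    int  next;                 /* round-robin eviction cursor */
} neg_cache;

/* Record that a key produced an empty IDL; evict the oldest entry
 * once the cap is reached. */
static void neg_cache_add(neg_cache *c, const char *key)
{
    if (c->used < NEG_CACHE_SLOTS) {
        strncpy(c->keys[c->used++], key, NEG_KEY_MAX - 1);
    } else {
        strncpy(c->keys[c->next], key, NEG_KEY_MAX - 1);
        c->next = (c->next + 1) % NEG_CACHE_SLOTS;
    }
}

/* Returns 1 if the key is known to have an empty IDL. */
static int neg_cache_hit(const neg_cache *c, const char *key)
{
    for (int i = 0; i < c->used; i++)
        if (strcmp(c->keys[i], key) == 0)
            return 1;
    return 0;
}
```

With this, the (objectClass=referral) lookup would hit the negative
cache on every search after the first, instead of going to libdb.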
I also changed the lockobj value from e_id to e_nname again :
servers/slapd/slapd : 53.9%
libpthread : 5.6%
libc : 17.0%
vmlinux : 14.3%
libdb : 2.4% (hashing overhead added - but this time it's smaller than before)
DirMark is in the same range as the above.
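The key-length sensitivity is easy to see in outline. This is not
BerkeleyDB's actual __lock_ohash or __ham_func5, just a generic
byte-wise hash sketch with a work counter, showing that a fixed-width
e_id key costs a constant few bytes of hashing while an e_nname key
costs one step per DN byte:

```c
#include <stddef.h>

/* Illustrative byte-wise hash (djb2-style); 'work' counts per-byte
 * steps so the cost difference between key types can be observed. */
static unsigned long byte_hash(const void *key, size_t len, size_t *work)
{
    const unsigned char *p = key;
    unsigned long h = 5381;
    for (size_t i = 0; i < len; i++) {
        h = h * 33 + p[i];
        if (work) (*work)++;       /* one unit of work per key byte */
    }
    return h;
}
```

Hashing an 8-byte ID takes 8 steps regardless of the entry; hashing a
normalized DN takes as many steps as the DN is long, which is where
the extra libdb overhead for e_nname comes from.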
In the new profiling results, mostly libc and thread scheduling /
connection management overheads are left.
e_id : ftp://ftp.openldap.org/incoming/profile4.txt
e_nname : ftp://ftp.openldap.org/incoming/profile5.txt
Any new observations and comments on profiles welcome ...
Jong Hyuk Choi
IBM Thomas J. Watson Research Center - Enterprise Linux Group
P. O. Box 218, Yorktown Heights, NY 10598
(phone) 914-945-3979 (fax) 914-945-4425 TL: 862-3979
"Howard Chu" <email@example.com> on 04/11/2003 03:48:15 AM
To: Jonghyuk Choi/Watson/IBM@IBMUS, <firstname.lastname@example.org>
Subject: RE: simpler lockobj for back-bdb entry cache locking
> -----Original Message-----
> From: Jonghyuk Choi [mailto:email@example.com]
> When lockobj is changed from e_nname to e_id,
> 1% of db overhead is cut down from the execution trace
> and DirMark is up to 2381.2 ops/sec.
> The hashing function of BerkeleyDB seems performance sensitive
> to the key length. (__lock_ohash and __ham_func5)
Yah, I thought so. When we were first designing the locking approach I
figured that by using the DN we could quickly prevent multiple attempts to
Add or ModRDN duplicate DNs. I'm not sure this is a valid concern now. At one
time I added code to ModRDN to lock the newDN while the operation was in
progress, but it didn't prove to be helpful for avoiding clashes.
I'm also working out hierarchical entry cache code, so much of this will
change soon. No harm in using e_id as an interim step though.
-- Howard Chu
Chief Architect, Symas Corp. Director, Highland Sun
Symas: Premier OpenSource Development and Support