
RE: back-bdb DB_RECOVER and soft restart



Matthew Hardin writes:
>>From: Hallvard B Furuseth [mailto:h.b.furuseth@usit.uio.no]
>>Matthew Hardin writes:
>>> On startup each instance of back-bdb will do the following:
>>> (...)
>>> 2. Attempt to place a write lock on the lock file. (...)
>>> (...)
>>> 5. Wait for a read lock on the lock file and leave it there for the
>>>    life of the back-bdb instance.
>>
>> Why get a read lock when you already have a write lock?
> 
> (...) Demoting a write lock to a read lock (...)

Oh... that was the point I was missing.  But if a file descriptor has a
write lock, are you sure that acquiring a read lock will lose the write
lock?  On all operating systems that use Berkeley DB, and all kinds of
locks OpenLDAP might use?
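For what it's worth, with POSIX fcntl() record locks specifically, setting a
read lock on a region the process already write-locks replaces the existing
lock atomically, so demotion does work there - the open question is whether
the same holds for every lock API and platform OpenLDAP targets.  A minimal
sketch of the fcntl() case (the file name and single-byte region are just
illustrative):

```c
/* Sketch: demoting a write lock to a read lock with POSIX fcntl().
 * Per POSIX, F_SETLK with a new lock type on an already-locked region
 * replaces the existing lock atomically within the same process. */
#include <fcntl.h>
#include <string.h>
#include <unistd.h>

static int set_lock(int fd, short type)
{
    struct flock fl;

    memset(&fl, 0, sizeof(fl));
    fl.l_type = type;        /* F_WRLCK, F_RDLCK or F_UNLCK */
    fl.l_whence = SEEK_SET;
    fl.l_start = 0;
    fl.l_len = 1;            /* lock byte #0 only */
    return fcntl(fd, F_SETLK, &fl);
}

/* Usage: set_lock(fd, F_WRLCK) to write-lock, then later
 * set_lock(fd, F_RDLCK) to demote it to a read lock. */
```

Whether lockf() or other wrappers preserve this behavior is exactly what
would need checking per platform.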

>> Besides, if you will use libraries/liblutil/lockf.c:lutil_lockf, that
>> may use lockf() which only supports 'locks', not 'read/write locks'.
> 
> Thanks for the tip. I'll have to look into that.

BTW, I think this is a (somewhat inefficient) way to get a read/write
lock with up to 2047 concurrent read locks using lockf():

lockf() locks a _section_ of the file.  So,
- to get a write lock, wait for a lock on all 2048 bytes.
- to get a read lock, first wait for a lock on byte #0.  This will wait
  for any write lock to be released.  Then try - without waiting - to
  lock one random byte among bytes #1-2047.  That byte will be the read
  lock.  Repeat until success, or give up and fail if no byte is free.
  Finally, release the lock on byte #0.
If there is a read lock, a process waiting for a write lock can be
bypassed by later processes asking for read locks, so it may wait
forever.  If you don't want that, let write locking lock byte #0 first
and then all 2048 bytes.

-- 
Hallvard