
Re: (ITS#4943) tpool.c pause vs. finish

hyc@symas.com writes:
>h.b.furuseth@usit.uio.no wrote:
>> Eh?  Yes, that's what I was talking about.  But I'm not sure what you
>> mean with them exiting immediately - or how they could.  Also they stay
>> around until there are no pending requests.  A pause request could
>> arrive more or less at any time, I presume.  (I don't know if there is a
>> pause while the max threads number is being modified.)
> Yes. As stated back in the distant past, cn=config fully serializes
> all write operations. The pool is paused for each write op.

Oh, right.

> When the op completes and the pool is resumed, the condition variable
> is broadcasted, so all the other threads will wake up, and the first
> to do so will see that they're above the max and exit, before any
> other tasks are started.

No, they won't exit pool_wrapper() while there are pending requests, but
if that is a bug it is easy to fix.  And by that description the wait I
inserted after context_reset() in pool_wrapper() should not be there,
since it releases ltp_mutex before the ID is removed from thread_keys[].

However another thread could be waiting in pool_pause() and start
another pause.  And the thread which did pool_resume() will be active
and could call pool_pause() again (but I think it won't in the same LDAP
operation with the current bconfig.c code).  So if pool_wrapper() does
not go active for context_reset() we must ensure that a pause is not
started while a thread in pool_wrapper() is dying.  That leaves
thread_keys[] protected by either a pause or one of two mutexes,
depending on circumstances.  I'll sleep on it before trying to wrap my
head around that one :-)

>> (...) We did have code which compared thread IDs with '==' for
>> quite some time.  Do you remember if anyone complained, or if the
>> switch to using ldap_pvt_thread_equal() was a "just in case" fix?
> Yes, I needed to use pthread_equal() on OS/390 since their thread type
> was a structure.

Ouch.  That kills my tls.c patch too.  I so hoped not to hear of an
implementation like that.

>> BTW I don't know if the unsigned long needs to be the thread ID or just
>> _some_ unique integer.  A question to openssl-users@openssl.org got no
>> answer.
> It just needs to be a unique integer. I've already had this
> conversation with the OpenSSL folks...

Got a reference to that which I can beat them with?  (To get them to
update their doc if nothing else.)

>>>>>>> The best fix may be to just use the address of the thread's
>>>>>>> stack as the thread ID. That's actually perfect for our purpose.
>> (...)
>> Hmm.  I asked the wrong question, should have read some man pages first.
>> Still, thr_cthreads.c & thr_lwp.c don't use LDAP_PVT_THREAD_STACK_SIZE.
>> man pthread_attr_getstacksize says it sets the _minimum_ stack size, but
>> perhaps several M is large enough that one can expect it won't get
>> multiples of that.
> That kinda implies that no one has been using the cthreads or lwp support
> recently, since I doubt their default stack size would work with back-bdb's
> filterindex demands.

Um, okay...  could remove them from configure and tell anyone who needs
them to shout.

How about pool_context() called from subsystems which create their own
threads (like Perl code or plugins)?

I guess it's just that I don't see why using a private,
hopefully-OS-independent hack is an improvement over OS-provided
thread-specific data (when that is supported).

Any risk that there are systems around which support non-contiguous
stacks?  (Allocate a new stack chunk if the current one is used up, and
chain it to the old one.)  I don't have a clue how common they are or
were, nor of the arguments for/against implementing them.