
Re: Problem with file descriptors limit



> It seems that the problem is about slapd limits.

slapd limits are unlikely to be the problem. slapd itself easily scales
to many thousands of connections (likely more, given enough resources).
A more common culprit is that libwrap uses fopen(), which on some
platforms can't handle as many fds as open() and friends can.

Now, on Linux, open() and fopen() share the same limit in my experience,
and that limit should be the value set by ulimit -n. (There's a stress
program at http://access1.sun.com/technotes/01406.html that makes a
good test exercise.)


If you're setting ulimit -n and still not getting the desired results,
I'd wonder whether:

* whatever mechanism you use to start slapd is overriding the limit.
Try swapping slapd out of your init script for the stress program to
verify the limit actually survives startup.

* you actually have 8,192 (or however many) files open. Try
  ls /proc/`cat slapd.pid`/fd | wc -l
while libwrap is complaining.


With all that said: I've seen libwrap issues before, but the server
doesn't crash from them. You might have something more evil going on;
it might be worth filing a bug with SuSE.