
Re: Problem with file descriptors limit

> > > It's unlikely to have a problem with slapd limits. slapd itself can
> > > easily scale to many thousands of connections (likely more if given
> > > enough resources). But a common issue is that libwrap uses fopen(), which
> > > sometimes can't handle as many fds as some other calls.
> However, libwrap shouldn't have many files open. IME, it seems when libwrap
> complains about open file limits, it's just the most visible file limit
> error.

The question isn't how many files libwrap itself has open; since libwrap
runs inside the slapd process, what matters is the total number of fds the
slapd process is using. So, for instance, on a Solaris server:

# pfiles `pgrep slapd` | grep ': S' | tail -5
 266: S_IFSOCK mode:0666 dev:249,0 ino:63473 uid:0 gid:0 size:0
 267: S_IFREG mode:0600 dev:136,3 ino:5529 uid:0 gid:1 size:77824
 268: S_IFSOCK mode:0666 dev:249,0 ino:56848 uid:0 gid:0 size:0
 272: S_IFSOCK mode:0666 dev:249,0 ino:2820 uid:0 gid:0 size:0
 307: S_IFSOCK mode:0666 dev:249,0 ino:3576 uid:0 gid:0 size:0

Now, when the next connection hits (which it will; accept() itself won't
fail with EMFILE yet), and libwrap fopen()s /etc/hosts.{allow,deny},
fopen() is going to fail with "Too many open files." That's because
open() would return (say) fd 269, and Solaris's fopen() can't use a
descriptor that high: 32-bit Solaris stdio can only attach a FILE stream
to descriptors below 256. This has nothing to do with my ulimits nor hard
kernel limitations; it's a libc implementation decision. slapd's own use
of fopen() is such that it's very unlikely to hit this; libwrap's use of
fopen() on every incoming connection makes it show up very often.

I'd imagine this has little to do with Linux, though, since on Linux I've
always seen open() and fopen() subject to the same limit. But I certainly
haven't seen it all, and there's definite potential for the two calls to
have different limits.