
Re: Problem with file descriptors limit

On Monday 26 June 2006 20:48, Aaron Richton wrote:
> > > > It's unlikely to have a problem with slapd limits. slapd itself can
> > > > easily scale to many thousands of connections (likely more if given
> > > > enough resources). But a common issue is that libwrap uses fopen(),
> > > > which sometimes can't handle as many fds as some other calls.
> >
> > However, libwrap shouldn't have many files open. IME, it seems when
> > libwrap complains about open file limits, it's just the most visible file
> > limit error.
> The question isn't how many files libwrap has open; since libwrap runs in
> the same slapd process, the total number of fd's used by slapd is in
> question.
> So, for instance, on a Solaris server: 

IIRC, the OP was using Linux.

> Now, when the next connection hits (which it will, without EMFILE), and
> libwrap fopen()s /etc/hosts.{allow,deny}, it's going to return "Too many
> open files." This is because open() is going to return (say) 269, and
> fopen() won't do that on Solaris (it's limited to 256). This has nothing
> to do with my ulimits nor hard kernel limitations; it's a libc
> implementation decision. slapd's use of fopen() is such that it's very
> unlikely to be affected by this. libwrap's use of fopen() is such that
> it's visible very often.

AFAIK, fopen() under Linux doesn't have a separate limit.

> I'd imagine this has little to do with Linux, though, because I've always
> known Linux to have open() and fopen() with the same limit. But I
> certainly haven't seen it all, and there's a definite potential for
> different limits to exist between the two calls.

Put it this way: since we put 'ulimit -n 4096' in the init script on the Linux 
machines hitting this problem, we haven't seen problems with file limits.


Buchan Milne
ISP Systems Specialist
