
Re: 1024 fd limit ?



On Thu, Nov 16, 2000 at 02:25:28PM +0100, Yann Dupont wrote:
> 
> Strictly OpenLDAP-related issue now: if we can't get past this 1024 fd
> barrier, is it possible to use a forking model in that case? If I
> understand correctly, threads share all resources (and so, the 1024
> limit applies to the whole set of threads) - forks, instead, are real
> processes... and don't suffer from the same problem...
> But I doubt there is such a mechanism in openldap 2.0.7 (and I imagine
> it would complicate the code a little...)
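
On the fork-versus-threads point: the open-file limit is accounted per process, so all of slapd's threads share one descriptor table and hit the limit together, while forked children each get their own table and their own limit. A rough sketch with plain POSIX calls (nothing from the OpenLDAP tree) showing that a child's RLIMIT_NOFILE is independent of its parent's:

#include <stdio.h>
#include <unistd.h>
#include <sys/types.h>
#include <sys/wait.h>
#include <sys/resource.h>

int main(void)
{
    struct rlimit rl;
    pid_t pid;

    getrlimit(RLIMIT_NOFILE, &rl);
    printf("parent, before fork   : soft fd limit = %ld\n", (long)rl.rlim_cur);

    pid = fork();
    if (pid == 0) {
        /* child: shrink its own copy of the limit */
        rl.rlim_cur = 64;
        setrlimit(RLIMIT_NOFILE, &rl);
        getrlimit(RLIMIT_NOFILE, &rl);
        printf("child, after setrlimit: soft fd limit = %ld\n", (long)rl.rlim_cur);
        _exit(0);
    }
    wait(NULL);

    getrlimit(RLIMIT_NOFILE, &rl);
    printf("parent, after child   : soft fd limit = %ld\n", (long)rl.rlim_cur);
    return 0;
}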
If you look in the OpenLDAP source, you will find that OpenLDAP looks up the open-file limit (OPEN_MAX) at run time in several places:
localhost:/home/luc/srcs/openldap/openldap-2.0.7>grep OPEN_MAX -r .
clients/finger/main.c:  tblsize = sysconf( _SC_OPEN_MAX );
clients/gopher/go500.c: dtblsize = sysconf( _SC_OPEN_MAX );
clients/gopher/go500gw.c:       dtblsize = sysconf( _SC_OPEN_MAX );
libraries/libldap/os-ip.c:      tblsize = sysconf( _SC_OPEN_MAX );
libraries/libldap_r/os-ip.c:    tblsize = sysconf( _SC_OPEN_MAX );
libraries/liblutil/detach.c:    nbits = sysconf( _SC_OPEN_MAX );
servers/slapd/daemon.c: dtblsize = sysconf( _SC_OPEN_MAX );
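
In servers/slapd/daemon.c, for example, the lookup is roughly along these lines (a paraphrase for illustration, not the literal OpenLDAP code):

#include <stdio.h>
#include <unistd.h>

int main(void)
{
    long dtblsize;

#ifdef _SC_OPEN_MAX
    /* ask the running system for the per-process descriptor table size */
    dtblsize = sysconf(_SC_OPEN_MAX);
#else
    /* older BSD-style fallback */
    dtblsize = getdtablesize();
#endif

    printf("descriptor table size: %ld\n", dtblsize);
    return 0;
}

So slapd sizes its descriptor table from whatever the system reports at run time, not from a number compiled into slapd itself.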

So perhaps you need to modify the file limits.h in the kernel source (include/linux/limits.h) and change this line:

#define OPEN_MAX         256 /* # open files a process may have */
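
To see which number actually wins on your box, you can compare the compile-time constant with the limit in effect at run time, e.g. (assuming Linux/glibc; a quick check, not part of OpenLDAP):

#include <stdio.h>
#include <limits.h>
#include <unistd.h>
#include <sys/resource.h>

int main(void)
{
    struct rlimit rl;

#ifdef OPEN_MAX
    printf("compile-time OPEN_MAX : %d\n", OPEN_MAX);
#endif
    printf("sysconf(_SC_OPEN_MAX): %ld\n", sysconf(_SC_OPEN_MAX));

    if (getrlimit(RLIMIT_NOFILE, &rl) == 0)
        printf("RLIMIT_NOFILE (soft) : %ld\n", (long)rl.rlim_cur);

    return 0;
}

That way you can tell whether a change to the header actually changes what slapd ends up seeing.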

That's my 2 cents.
Luc