
Re: Max Concurrent connections / fopen() limits on Solaris



On Tue, Nov 20, 2001 at 11:07:44AM +0000, Dave Lewney wrote:
> Alister Winfield wrote:

> > We use 1.2.13 and get peaks of > 500 conns. Slapd fails at about 1000
> > connections due to the use of select. I have a local workaround using
> > poll that is being 'tested' hard and that will go as high as the limits
> > in the kernel.

> > Have you checked with limit / ulimit before starting slapd that the soft
> > limit isn't set to something like 64. I have ulimit -n 1024 in my start
> > script to avoid cutting the server off earlier than it can handle.

> True, but even with a ulimit set to 1024 I still fell foul of bug
> #4472643 during replication from a heavily loaded master server. Word
> from Sun is that it might be fixed in Solaris 9 and a possible
> workaround might be to use the system call "open" rather than fopen. My
> workaround is to point clients at the replicas and leave the master
> unloaded. Any comments?

open() vs fopen(): has anyone tried compiling OpenLDAP against AT&T's
"sfio" library instead of Sun's native stdio? We had a custom plugin
for our Netscape/iPlanet threaded Web servers that ran into fopen() limits
with stdio, but solved those problems by using sfio instead. IIRC, with
Solaris 2.6 sfio "only" doubled or tripled the number of file handles our
plugin could deal with (based on some contrived testing), but for us that
was more than sufficient.

	http://www.research.att.com/sw/tools/sfio/

I'd be curious to hear if you have more luck with that.

-Peter