I run a cluster of several dual-processor OpenLDAP servers that are starting
to saturate one of their CPUs with OpenLDAP's connection-handling thread.
The machines are healthy otherwise (memory consumption, disk I/O,
interrupts, etc.), so this single connection-handling thread is the current
bottleneck in our installation. AFAICT, once this one thread starts
consuming an entire CPU, we've hit the wall even though the bulk of the
other CPU is idle.
I've considered running two slapd instances, one per bound port/interface,
but that would require twice the disk space and twice the memory. Ideally, I
would like to bind to two ports/interfaces and split connection handling
between two threads, one thread per bound socket. Our load balancers would
see each port/interface combination as a separate host and load balance
across the two, splitting the connection-handling CPU load fairly evenly
between the two CPUs.
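For reference, a single slapd can already be told to listen on multiple
URLs via its -h option, though as far as I can tell all of them are still
serviced by the one listener thread. What I'm hoping for is something like
the following (ports and addresses are just examples), but with each
listener handled by its own thread:

```shell
# Hypothetical sketch: one slapd process bound to two listener URLs.
# Today slapd accepts a space-separated URL list like this, but drives
# both sockets from a single connection-handling thread; the idea would
# be one thread per listed listener.
slapd -h "ldap://192.0.2.10:389/ ldap://192.0.2.10:390/" \
      -f /etc/openldap/slapd.conf
```

The load balancers would then treat 192.0.2.10:389 and 192.0.2.10:390 as
two separate backends, splitting the accept/read load across both CPUs
while the process still shares one cache and one copy of the database.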
Is this idea feasible? Is there a better way of solving this, without
cutting the number of LDAP operations being performed or throwing more
hardware at it?
John Morrissey _o /\ ---- __o
firstname.lastname@example.org _-< \_ / \ ---- < \,
www.horde.net/ __(_)/_(_)________/ \_______(_) /_(_)__