
Re: slapd memory and performance problems



> Upon starting slapd, each thread has a size of about 4800KB.  This
> thread size seems to double after 8 hours or so and grows to about 15M.
> Also, the total memory available on the box seems to degrade over time,
> not just from the increased thread size, but from what appears to be a
> memory leak somewhere, since just restarting slapd does not increase the
> available amount of system memory.  Only a reboot recovers system RAM.
>
> I'm running RedHat 8.0 on a platform with an Intel 1GHz processor and
> 512MB of RAM.  Processor idle time never dips below 90%, no swap space
> is ever used, but memory, as I said, diminishes over time.  However,
> performance stays constant and doesn't change as memory degrades (until
> memory drops to below 5M or so, at which time the server must be
> rebooted).

This sounds very strange. Since your problem persists after the process has exited, it is clearly not related to slapd itself. I'll discuss it anyway, since I suspect other people will run into the same confusion when tuning their systems, Linux or not.


How are you measuring the remaining memory? There is "Free" memory, "Cached" memory, "Used" memory, and on some kernel versions, "shared". (2.4 kernels no longer report shared correctly, since computing it takes too long.)

"Free" memory really only means Wasted memory, and ideally this number should be as Low as possible. On a healthy system it will only be large at boot or after a massive process has just exited. Free memory does not include swap; it's just the amount of RAM not being used for anything at all.

When physical memory is available, touching the filesystem causes disk pages to be copied into otherwise Free pages. These are then counted as Cached memory. Future disk reads and writes go to that Cached memory instead of the disk, which is generally a large performance improvement.

If a process starts requesting memory, the kernel first pulls from Free pages, then starts tossing out cache in order to make room. "Cached" memory should therefore be considered Available RAM that is currently being borrowed to improve I/O performance, but is still available should a process need it.
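On Linux you can watch this accounting directly. Here is a minimal sketch (POSIX sh plus awk, reading /proc/meminfo) that totals up what is really available:

```shell
#!/bin/sh
# Pull the kernel's memory accounting straight out of /proc/meminfo.
# "MemFree" is truly idle RAM; "Buffers" and "Cached" are RAM borrowed
# for I/O caching, which the kernel reclaims on demand -- so they still
# count as available to processes.
free_kb=$(awk '/^MemFree:/ {print $2}' /proc/meminfo)
buff_kb=$(awk '/^Buffers:/ {print $2}' /proc/meminfo)
cache_kb=$(awk '/^Cached:/ {print $2}' /proc/meminfo)

echo "Free:      ${free_kb} kB"
echo "Available: $((free_kb + buff_kb + cache_kb)) kB (free + reclaimable cache)"
```

On a long-running box expect the first number to be small and the second to be large; that is the healthy state, not a leak.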

If this explanation was beneath you, then I humbly apologize. I provide it as I've seen many, Many people get confused about this and think their (otherwise healthy) servers were out of ram.

As for swap, it is normal for the kernel to page long-unused "Used" memory out to swap to increase the space available for caching.
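To see how much swap is actually in use at any moment (rather than guessing from ps), something like this works against /proc/meminfo:

```shell
#!/bin/sh
# Swap usage is just SwapTotal minus SwapFree, both reported in kB
# by /proc/meminfo on any reasonably modern Linux kernel.
swap_total=$(awk '/^SwapTotal:/ {print $2}' /proc/meminfo)
swap_free=$(awk '/^SwapFree:/ {print $2}' /proc/meminfo)

echo "Swap in use: $((swap_total - swap_free)) kB of ${swap_total} kB"
```

A few MB of idle daemons paged out here is normal and harmless; it frees RAM for the cache.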

You said the processes grew to 15MB in size. Even if you forced the old default of 32 threads, that is still significantly less than the 512MB of RAM available to your machine. If you have enough swap to push the other unused daemons out of the way, and you tune your database layer correctly (heh), this should be workable. If you plan on using the server heavily for anything else at the same time, though, you will probably want to adjust entry caching and the DB_CONFIG so that slapd consumes no more than half of physical memory, or you'll be battling the swapper. A dedicated directory server will let you get away with far more outrageous abuses of the vm system, though...
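For reference, the knobs in question look roughly like this; the numbers are purely illustrative for a 512MB box, not recommendations, so check your backend's documentation before copying them:

```conf
# DB_CONFIG, placed in the database directory -- BerkeleyDB's own cache:
#   set_cachesize <gbytes> <bytes> <number-of-segments>
# e.g. about 128MB in one segment:
set_cachesize 0 134217728 1

# slapd.conf -- how many entries slapd itself caches in memory:
cachesize 1000
```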

You said no swap was being used. That would be extremely abnormal for the conditions you describe. How much swap do you have configured? What kernel release?

You say free memory keeps diminishing and is not fully returned when the process exits. This Really sounds like Free ram getting replaced by Cached pages. How are you determining that a reboot is necessary? I.e., what happens if you don't? What does the output of vmstat 1 10 look like during these conditions?
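When reading that vmstat output, these are the columns that matter for this problem. The sketch below assumes vmstat is installed (it ships in the procps package on Red Hat):

```shell
#!/bin/sh
# Columns to watch in vmstat output:
#   free       -- truly idle RAM; small numbers here are normal
#   buff/cache -- reclaimable I/O cache (usually the "missing" memory)
#   si/so      -- kB swapped in/out per second; sustained nonzero values
#                 mean real memory pressure, not mere cache growth
if command -v vmstat >/dev/null 2>&1; then
    vmstat 1 10
else
    echo "vmstat not installed (procps package)"
fi
```

Note that the first line of vmstat output is an average since boot; only the subsequent samples reflect current activity.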

Having 5MB "Free" would be normal for a system with that amount of ram. (And this scales fairly linearly; our 2GB servers generally have 20MB free)

Matthew Backes
lucca@csun.edu