
Re: Performance issues lately.

dtrace is a fantastic tool, but it is also hard to use :) But if I use the existing scripts from the DTraceToolkit, like iotop:

  UID    PID   PPID CMD              DEVICE  MAJ MIN D            BYTES
    0  15099      1 slapd            sd0      32   0 R            16384
    0      3      0 fsflush          sd0      32   0 R            98304
    0      3      0 fsflush          sd2      32 128 R            98304
    0      0      0 sched            sd0      32   0 W         36930560
    0      0      0 sched            sd2      32 128 W         39273472

the output would suggest that it is indeed paging memory out to make room.
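Since the writes attributed to sched are typically the pageout thread, the paging theory can be cross-checked directly with the DTrace vminfo provider. A minimal sketch (the script name and the 10-second interval are my own arbitrary choices; run it during a slow spell):

```shell
# Write a small DTrace script that sums page-scanner and pageout
# activity every 10 seconds, using the documented vminfo provider.
cat > paging.d <<'EOF'
#!/usr/sbin/dtrace -s
/* arg0 of a vminfo probe is the amount the statistic was incremented by */
vminfo:::scan    { @vm["pages scanned"]   = sum(arg0); }
vminfo:::pgpgout { @vm["pages paged out"] = sum(arg0); }
tick-10s         { printa(@vm); trunc(@vm); }
EOF
chmod +x paging.d
# On the LDAP host: ./paging.d
# A sustained non-zero "pages scanned" figure means the page scanner is
# running, i.e. genuine memory pressure rather than ordinary file I/O.
```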

I also wonder how it interacts with ZFS and the ARC for memory, and whether I should make some changes with that in mind as well. I have tried the default recordsize and 16k, and compression on and off. Not that it made any difference in this case.
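One detail worth noting on the ZFS side: recordsize only applies to blocks written after the property change, so an existing database has to be dumped and reloaded before a new recordsize takes effect, and primarycache=metadata stops the ARC from double-caching data that BDB already caches itself. A sketch, assuming (hypothetically) the database lives on a dataset called tank/ldap:

```shell
# Hypothetical dataset name; adjust to the real pool/dataset.
zfs get recordsize,compression,primarycache tank/ldap

# Match recordsize to BDB's page size (affects newly written blocks only):
zfs set recordsize=16k tank/ldap

# Cache only metadata in the ARC, leaving data caching to BDB's own cache:
zfs set primarycache=metadata tank/ldap
```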

Thanks for the suggestions, I will keep monitoring and see what else I can discover. I have requested to increase the cachesize to 8G on one of the slaves, which I can probably do tomorrow at 2am.
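For reference, with BDB that cache is normally set via set_cachesize in the DB_CONFIG file in the database directory, taking effect when the environment is recreated. A sketch assuming that layout (the directory path here is hypothetical):

```shell
# Syntax is: set_cachesize <gbytes> <bytes> <ncache> -- 8 GB, one segment:
echo 'set_cachesize 8 0 1' >> /var/openldap-data/DB_CONFIG

# After restarting slapd, "db_stat -m -h /var/openldap-data" reports the
# cache size actually in effect, plus the cache hit/miss counters.
```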


Doug Leavitt wrote:
I suggest that you use dtrace to get a better understanding of what is
going on.

You can start with some pre-existing, documented scripts from the DTraceToolkit:


The dtrace guide is here:


There are many examples in the DTraceToolkit that should help sort out
what other processes or system resources are affecting the LDAP server's
performance in your specific situation.


On 11/14/10 07:42 PM, Jorgen Lundman wrote:

Howard Chu wrote:

If it slows down after you wait a while, that means some other process
on the machine is using the system RAM and forcing the BDB data out of
the system cache. Find out what other program is hogging the memory,
it's obvious that BDB is not doing anything wrong on its own.

If I db_stat another large file, like dn2id.bdb, the subsequent
id2entry.bdb will be slower. So maybe it is fighting itself.

However, since I am executing separate "db_stat" processes each time,
the setcachesize would have no chance to help improve things. I will
have to try different values for slapd running.

Could be that I should investigate various Solaris-specific process limits
as well. It is all 64-bit now, but per-process limits may still apply.

Jorgen Lundman       | <lundman@lundman.net>
Unix Administrator   | +81 (0)3 -5456-2687 ext 1017 (work)
Shibuya-ku, Tokyo    | +81 (0)90-5578-8500          (cell)
Japan                | +81 (0)3 -3375-1767          (home)