
Re: Antw: Re: Slapd running very slow




On 2015-04-23 5:56 PM, Howard Chu wrote:
> Geoff Swan wrote:
>>
>>
>> On 2015-04-23 4:43 PM, Howard Chu wrote:
>>> Geoff Swan wrote:
>>>>
>>>>
>>>> On 2015-04-23 4:07 PM, Ulrich Windl wrote:
>>>>>>>> Geoff Swan <gswan3@bigpond.net.au> wrote on 22.04.2015 at 23:18
>>>>>>>> in message
>>>>> <5538100A.3060301@bigpond.net.au>:
>>>>>
>>>>> [...]
>>>>>> Free stats look fine to me. No swap is being used, or has been used
>>>>>> yet
>>>>>> on this system.
>>>>>>
>>>>>>              total       used       free     shared    buffers     cached
>>>>>> Mem:     132005400   21928192  110077208      10612     363064   19230072
>>>>>> -/+ buffers/cache:    2335056  129670344
>>>>>> Swap:      8388604          0    8388604
>>>>>>
>>>>>> The effect I am seeing is that despite a reasonably fast disc
>>>>>> system, the kernel's writeback of dirty pages is painfully slow.
>>>>> Disk I/O stats are also available via sar; may we see them?
>>>>> Finally there's blktrace, where you can follow the timing and
>>>>> position of each individual block being written, but that's not
>>>>> quite easy to run and analyze (unless I missed the easy way).
>>>>>
>>>>> I suspect scattered writes that bring your I/O rate down.
>>>>>
>>>>> Regards,
>>>>> Ulrich
>>>>>
>>>> sysstat (which provides sar) is not installed on this system, so I
>>>> can't give you that output.
>>>
>>> As I said in my previous email - use iostat and/or vmstat to monitor
>>> paging activity and system wait time. You can also install atop and
>>> get all of the relevant stats on one screen.
>>>
>>> http://www.atoptool.nl/
>>>
>>> I find it indispensable these days.
>>>
>> BTW, just using nmon whilst running a test script that writes 20K small
>> objects shows a write speed of around 3-5 MB/s to the disc (without
>> dbnosync, so as to see the immediate effects). This is on a SAS disc
>> with a 140MB/s capability, checked using regular tools such as dd
>> (writing 1000 files of 384K each). I'm failing to understand why the
>> write operations from the kernel page cache are so drastically slower.
>> With dbnosync enabled, lmdb writes to the pages fast (as expected);
>> however, the pdflush writeback that follows runs at the same slow
>> speed, causing delays for further processes.
>
> In normal (safe) operation, every transaction commit performs 2
> fsyncs. Your 140MB/s throughput spec isn't relevant here; your disk's
> IOPS rate is what matters. You can use NOMETASYNC to do only 1 fsync
> per commit.
>
Put atop on the test machine. Nice utility.
Ran the test script again and got these results around the memory and
disk lines, which appear to confirm the low IOPS. Any clues on improving
this?
The dd test on the disc was just to confirm that the C600 SAS
controller/driver and disc could achieve the throughput.

MEM | tot   31.4G | free  30.4G | cache 326.5M | buff  167.2M | slab   85.2M | shmem  10.0M | vmbal   0.0M | hptot   0.0M | hpuse   0.0M |
SWP | tot   16.0G | free  16.0G |              |              |              |              |              | vmcom   5.4G | vmlim  31.7G |
DSK |         sda | busy    75% | read       0 | write   3400 | KiB/r      0 | KiB/w      3 | MBr/s  0.00 | MBw/s   1.28 | avio 2.20 ms |