
Re: Less aggressive syncrepl ?





Quanah Gibson-Mount wrote:
--On Tuesday, March 02, 2010 7:58 PM +0100 masarati@aero.polimi.it wrote:


25 consumers doing a full refresh probably ate up all threads available
on the producer. You should either cascade your consumers (build a
replication chain where a layer of consumers acts as producers for the
remaining), or increase the number of threads on the producer.

Using delta-syncrepl can also help reduce such a load.


So, if I understand it correctly:


* I could increase the threads setting on ldapmaster, from the default (16) to, say, 32.

It is a 4-core server dedicated to slapd, though most documentation does seem to discourage raising the thread count. On the other hand, something like rsync on this box generally only yields 500KB/s to 1MB/s, most likely due to disk I/O, and it had 25 or so servers trying to do a complete sync/consistency check at once. So perhaps there just isn't more to get out of it while the setup is this way.
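
If we do try it, I believe it is just the global threads directive in slapd.conf (olcThreads under cn=config). A minimal sketch, assuming slapd.conf-style configuration:

        # global section of slapd.conf on ldapmaster
        # default is 16; more threads let more consumers refresh in
        # parallel, but it won't help if we are disk-I/O bound
        threads 32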



* Make master sync only to slave01 and slave02, and have them, in turn, sync to everyone else. Is this the recommended setup?

Change the other 23 servers to sync with slave01 and slave02 instead. This way the master only has a few servers to sync with, and each slave takes its share of the workload. But can I specify more than one provider in the syncrepl directive, as a fail-over?

                provider=ldap://ldapslave01,ldap://ldapslave02 ?

But I would guess you cannot. (It only has one master right now anyway; I am just curious.)
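
As far as I can tell from the docs, provider= takes exactly one URI, so fail-over would have to happen outside the directive (e.g. a DNS name that resolves to both slaves). For the cascade itself, here is a rough sketch of what I think the pieces would look like; the searchbase, binddn and credentials are placeholders:

        # on slave01/slave02: keep pulling from the master...
        syncrepl rid=001
                 provider=ldap://ldapmaster
                 type=refreshAndPersist
                 searchbase="dc=example,dc=com"
                 bindmethod=simple
                 binddn="cn=replicator,dc=example,dc=com"
                 credentials=secret

        # ...and also act as a provider for downstream consumers
        overlay syncprov

        # on the other 23 servers: point at a slave instead of the master
        syncrepl rid=002
                 provider=ldap://ldapslave01
                 type=refreshAndPersist
                 searchbase="dc=example,dc=com"
                 bindmethod=simple
                 binddn="cn=replicator,dc=example,dc=com"
                 credentials=secret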



* Investigate delta-sync

This is unknown to me; I will need to research whether our current OpenLDAP version supports it. Perhaps I can try it on the test servers first.
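
From what I have read so far, delta-syncrepl keeps a change log in a separate accesslog database on the provider, and consumers replay those changes instead of refetching whole entries. A rough sketch based on the admin guide examples, with the DNs and directories as placeholders:

        # provider: accesslog database holding the change log
        database hdb
        suffix "cn=accesslog"
        directory /var/lib/ldap/accesslog
        overlay syncprov
        syncprov-nopresent TRUE
        syncprov-reloadhint TRUE

        # provider: main database logs its successful writes there
        database hdb
        suffix "dc=example,dc=com"
        directory /var/lib/ldap/data
        overlay syncprov
        overlay accesslog
        logdb cn=accesslog
        logops writes
        logsuccess TRUE

        # consumer: the usual syncrepl statement plus the delta parameters
        syncrepl rid=003
                 provider=ldap://ldapmaster
                 type=refreshAndPersist
                 searchbase="dc=example,dc=com"
                 logbase="cn=accesslog"
                 logfilter="(&(objectClass=auditWriteObject)(reqResult=0))"
                 syncdata=accesslog
                 bindmethod=simple
                 binddn="cn=replicator,dc=example,dc=com"
                 credentials=secret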


* Logical tree split

I guess I could potentially run multiple masters with separate trees (one for mail, one for www, one for dns, etc.), but it would be nice not to have to do that.
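
If it did come to that, I imagine it would just be separate database sections with their own suffixes, each replicated independently; the suffixes here are made up:

        # hypothetical per-service databases on separate masters
        database hdb
        suffix "ou=mail,dc=example,dc=com"
        directory /var/lib/ldap/mail

        database hdb
        suffix "ou=dns,dc=example,dc=com"
        directory /var/lib/ldap/dns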



For the time being I stopped syncrepl on all but 10 servers, so that we could have a read/write ldapmaster while everything sorted itself out. Those 10 servers needed about 14 hours to sync. I have now re-added the remaining servers, and we should be back to stable in another 14 hours or so.


--
Jorgen Lundman       | <lundman@lundman.net>
Unix Administrator   | +81 (0)3 -5456-2687 ext 1017 (work)
Shibuya-ku, Tokyo    | +81 (0)90-5578-8500          (cell)
Japan                | +81 (0)3 -3375-1767          (home)