
Re: Re: Re: High availability (was Re: Question about the config branch)



Hello Kurt D. Zeilenga,

    What about the idea of using BDB's replication mechanism to keep the data synchronized?

    I think syncing the LDAP data entry by entry is very slow. If there are millions of entries, it could take tens of hours.
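    A back-of-the-envelope calculation supports the "tens of hours" estimate; the 5 million entries and 20 ms per-entry round trip below are assumed figures, not measurements:

```python
# Rough estimate of entry-by-entry sync time.  Both numbers are
# hypothetical: 5M entries, and ~20 ms per entry for one read on the
# provider plus one write on the consumer over the network.
entries = 5_000_000
seconds_per_entry = 0.020

total_hours = entries * seconds_per_entry / 3600
print(f"{total_hours:.1f} hours")  # ~27.8 hours
```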

    What do you think?

Best regards, 
  
======= At 2005-12-23, 01:21:19 you wrote: =======

>At 08:26 PM 12/21/2005, sparklezou@hotmail.com wrote:
>>Hi Kurt,
>>
>>  Your Questions;
>>  a) the slave is currently unavailable?
>>  b) the slave doesn't respond in a reasonable amount of time?
>>  c) the slave responds with an error?
>>
>>  My answer:
>>  As I described before, the master doesn't know how many slaves get copies from this DS. And each slave only knows where the master is; it doesn't know about the other slaves that get copies from the master. The master and the slaves are considered a cluster.
>>
>>  So if we add all of this information to the config branch so that the master and slaves can see each other, your questions are easy to solve.
>>
>>  a) If the slave is currently unavailable, the master knows it, because there is no connection from this slave, and the master sets this slave's status to asynchronous. When the slave becomes available again, the master knows it and pushes the pending updates to this slave to bring it back in sync.
>
>So, in this case, consistency between the master and this slave
>is not ensured to be better than eventual.
>
>
>>  b) Set the slave's status to asynchronous and proceed as in (a).
>
>So, in this case, consistency between the master and this slave
>is not ensured to be better than eventual.
>
>
>>  c) Require that at least one (or more) slave respond with success.
>
>Not sure how it can be assured that at least one slave will respond
>with success.
>
>>If the slave responds with an error, set its status to asynchronous and proceed as in (a).
>
>So, in this case, consistency between the master and this slave
>is not ensured to be better than eventual.
>
>
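The master-side bookkeeping described in (a), (b) and (c) above could be sketched roughly as follows; every name here is illustrative, and none of it is actual slapd code:

```python
# Illustrative sketch of the proposed master-side slave tracking:
# a slave that is unreachable or returns an error is marked ASYNC and
# its missed updates are queued; when it reconnects, the queue is
# drained in order to bring it back in sync.
from collections import deque

SYNC, ASYNC = "sync", "async"

class SlaveState:
    def __init__(self, name):
        self.name = name
        self.status = SYNC
        self.pending = deque()  # updates not yet applied on this slave

class Master:
    def __init__(self):
        self.slaves = {}

    def add_slave(self, name):
        self.slaves[name] = SlaveState(name)

    def replicate(self, update, send):
        """send(slave_name, update) -> True on success, False on
        failure, timeout, or no connection (cases a, b and c)."""
        acks = 0
        for s in self.slaves.values():
            if s.status == SYNC and send(s.name, update):
                acks += 1
            else:
                s.status = ASYNC          # the slave fell behind
                s.pending.append(update)  # remember what it missed
        return acks >= 1  # c) require at least one slave to succeed

    def slave_reconnected(self, name, send):
        s = self.slaves[name]
        while s.pending and send(s.name, s.pending[0]):
            s.pending.popleft()           # push missed updates in order
        if not s.pending:
            s.status = SYNC               # caught up again
```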
>>  What do you think?
>
>Well, my thought is that your scheme doesn't actually ensure
>better than eventual.  Certainly, from your description,
>you aren't providing transactional consistency (meaning
>that either all servers are updated or none are updated
>to the new state requested by the update).
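Transactional consistency in that all-or-none sense is usually obtained with a two-phase commit: every replica first votes on the update, and it is committed anywhere only if all votes are yes. A minimal sketch, purely illustrative and not anything slapd implements:

```python
# Minimal two-phase-commit sketch of "either all servers are updated
# or none are": phase 1 asks every replica to prepare (vote); phase 2
# commits only if every vote was yes, otherwise aborts everywhere.
def two_phase_commit(replicas, update):
    prepared = []
    for r in replicas:
        if r.prepare(update):     # phase 1: can you apply this?
            prepared.append(r)
        else:                     # any "no" vote aborts the whole update
            for p in prepared:
                p.abort(update)
            return False
    for r in replicas:            # phase 2: everyone voted yes
        r.commit(update)
    return True
```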
>
>>Question:
>>  (d) Why should the client wait for the slave when waiting won't necessarily ensure the slave is in sync?
>>
>>   It is to ensure that all of the DSs in the cluster stay synchronized.
>
>Waiting certainly doesn't ensure consistency between the
>servers in the cluster, and as you pointed out by your answers
>in the above, a "success" result to the client doesn't mean that
>all servers in the cluster have a consistent state.
>
>So the waiting seems for naught.
>
>>If the master fails, one of the slaves could take over as master.
>
>Well, yes, but data could be lost in doing so as you haven't
>provided transactional consistency.
>
>>  What do you think?
>
>It seems you are after transactional consistency but
>employing methods which cannot produce better than
>eventual consistency.
>
>
>>Best Regards,
>>Sparklezou
>>
>>
>>sparklezou wrote:
>>>    In fact, I would like to implement HA using replication.
>>>Today I found that this topic was discussed in an earlier session:
>>>http://www.openldap.org/lists/openldap-devel/200310/msg00068.html
>>>The idea described there is much the same as mine.
>>>    I want to implement as following:
>>>
>>>(1) Client                   Master                Slave
>>>  |------ modify(0) -->    |                     |
>>>  |                        | --- modify(1) --->  |
>>>  |                        |                     |
>>>  |                        | <--- SUCCESS -----  |
>>>  |                        |                     |
>>>  | <---- SUCCESS  -----   |                     |
>>
>>What do you propose the master do when
>>a) the slave is currently unavailable?
>>b) the slave doesn't respond in a reasonable amount of time?
>>c) the slave responds with an error?
>>
>>Why should the client wait for the slave when
>>waiting won't necessarily ensure the slave is
>>in sync?
>>
>>And I note that simply forwarding the modify request to the
>>slave will result in out-of-sync servers, as CSNs, timestamps,
>>etc. will differ.  To properly accomplish this approach,
>>you'd need a special forwarding operation (much like
>>we need a chaining operation when forwarding requests
>>from slaves to other servers).
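The divergence is easy to see: a CSN embeds the stamping server's own clock reading, so two servers independently executing the "same" modify record different metadata. A toy illustration (the CSN format below is schematic, not OpenLDAP's exact syntax):

```python
# Toy illustration of why independently executed modifies diverge:
# each server stamps the entry with a CSN derived from its *own*
# clock, so forwarding the raw modify to a slave yields different
# metadata than the master recorded.  The CSN format is schematic.
import time

def make_csn(server_id, now):
    stamp = time.strftime("%Y%m%d%H%M%SZ", time.gmtime(now))
    return f"{stamp}#{server_id:06d}"

# The master applies the modify at time t; the forwarded copy reaches
# the slave slightly later, and the stamped CSNs no longer match.
t = 1135300879
master_csn = make_csn(0, t)
slave_csn = make_csn(1, t + 2)   # clock skew / propagation delay
print(master_csn == slave_csn)   # False: the entries are "out of sync"
```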
>>
>>
>>>(2)
>>>Client                   Slave                 Master
>>>  |------ modify(0) -->    |  (Chaining the modify request to Master)
>>>  |                        | --- modify(0) --->  |
>>>  |                        |                     |
>>>  |                        |  (Master updates the data and tries to update the Slave)
>>>  |                        | <--- modify(1) ---  |
>>>  |                        |                     |
>>>  |                        | --- SUCCESS(1) -->  |
>>>  |                        |                     |
>>>  |                        | <-- SUCCESS(0) ---  |
>>>  |                        |                     |
>>>  | <---- SUCCESS(0) ---   |
>>
>>Not sure this approach has any value over the slave asking for
>>a copy of the entry post master-update (using a post-read
>>control attached to the chaining request) and sync'ing to this.
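In outline, that post-read approach might look like the sketch below; the classes are illustrative stand-ins, not OpenLDAP APIs:

```python
# Sketch of the post-read idea: instead of re-executing the modify
# locally, the slave chains it to the master with a post-read request
# attached, gets back the entry *as the master now stores it* (CSN and
# all), and installs that copy verbatim.  All names are hypothetical.
class Master:
    def __init__(self):
        self.entries = {}

    def modify_with_post_read(self, dn, changes):
        entry = dict(self.entries.get(dn, {}))
        entry.update(changes)
        entry["entryCSN"] = "csn-assigned-by-master"  # authoritative metadata
        self.entries[dn] = entry
        return dict(entry)  # post-read: the entry as it exists after the update

class Slave:
    def __init__(self, master):
        self.master = master
        self.entries = {}

    def chain_modify(self, dn, changes):
        # Sync to the post-read copy rather than applying `changes`
        # independently (which would stamp a different CSN).
        self.entries[dn] = self.master.modify_with_post_read(dn, changes)
```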
>>
>>>But what is currently implemented in OpenLDAP 2.3 is:
>>>
>>>(3) Client                   Master                Slave
>>>  |------ modify(0) -->    |                     |
>>>  |<---- SUCCESS(0) -----  |                     |
>>>  |                        |                     |
>>>  |-Query the entry again->|                     |
>>>  |                        | --- modify(1) --->  |
>>>  |                        |                     |
>>>  |                        | <--- SUCCESS(1)---- |
>>>  |                        |                     |
>>
>>
>>This adheres to LDAP's "eventually consistent" distributed
>>data model.
>>
>>
>>>(4)
>>>Client                   Slave                 Master
>>>  |------ modify(0) -->    |  (Chaining the modify request to Master)
>>>  |                        | --- modify(0) --->  |
>>>  |                        |                     |
>>>  |                        | <-- SUCCESS(0) ---  |
>>>  |                        |                     |
>>>  | <---- SUCCESS(0) ---   |                     |
>>>  |                        |                     |
>>>  |--------- Query the entry on Master again --->|
>>>  |                        |                     |
>>>  |                        | <--- modify(1) ---  |
>>>  |                        |                     |
>>>  |                        | --- SUCCESS(1) -->  |
>>
>>Not sure this is implemented, but I certainly suggested that
>>if, prior to a search request, the slave chained an update
>>request, the slave should also chain the search request.
>>This is because it's reasonable (given LDAP's referral
>>model) for a client doing an update to assume the lack of a
>>referral means it's talking to the master.
>>
>>> It's NOT a good solution for implementing HA, because the data is not
>>>kept in sync at all times.
>>>
>>> I would like to join the replication development team to do it.
>>>
>>> And I want to implement failover dynamically. If the master is down, one
>>>of the slaves will become the master.
>>>
>>> So I want to add some more entries to the config branch to let each DS
>>>know how many DSs are in the same cluster, so that the master and slaves know each other. Currently in OpenLDAP, the master doesn't know how many slaves there are; each slave only knows where the master is, but doesn't know about the other slaves.
>>>
>>> So I would like to join OpenLDAP to implement these functions.
>>>
>>>Best regards,   
>>>======= At 2005-12-16, 18:12:21 you wrote: =======
>>>
>>> 
>>>>sparklezou@hotmail.com wrote:
>>>>   
>>>>>Dear all,
>>>>>
>>>>>  I read the source code of the config branch. I'm puzzled as to why the
>>>>>schema for the config branch is defined in the source code, not in a .schema file? It's a little difficult to extend the config schema. ^_^
>>>>>     
>>>>Most elements of the config schema map directly to data structures in the
>>>>slapd code. It would be pointless to alter the config schema without making corresponding code changes. It would be inappropriate to store the config schema in a regular file; that would imply that it can be changed freely, which is incorrect.
>>>>
>>>>So, since you are asking about extending the config schema, that must
>>>>mean you have written code to provide a new feature. It would be helpful for the purpose of this discussion if you describe what your new feature is, so we can focus on the proper hooks for integrating it.
>>>>
>>>>--
>>>>Howard Chu
>>>>Chief Architect, Symas Corp.  http://www.symas.com
>>>>Director, Highland Sun        http://highlandsun.com/hyc
>>>>OpenLDAP Core Team            http://www.openldap.org/project/
>>>>
>>>>   
>>>
>>>= = = = = = = = = = = = = = = = = = = =
>>>                       
>>>sparklezou
>>>sparklezou@hotmail.com
>>>2005-12-21
>>>
>>>
>>>
>

= = = = = = = = = = = = = = = = = = = =
			
sparklezou
sparklezou@hotmail.com
2005-12-30