Re: Feature request: multimaster
- To: Pierangelo Masarati <ando@sys-net.it>
- Subject: Re: Feature request: multimaster
- From: Howard Chu <hyc@symas.com>
- Date: Wed, 02 Feb 2005 14:13:28 -0800
- Cc: openldap-devel <openldap-devel@OpenLDAP.org>
- In-reply-to: <41FB67EA.4080903@sys-net.it>
- References: <41FB67EA.4080903@sys-net.it>
- User-agent: Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8b) Gecko/20050130
Pierangelo Masarati wrote:
We are in the process of investigating the feasibility of adding the
above capability to slapd.
First of all a fundamental clarification: multimaster here means that
there is a pool of DSAs that are in sync (e.g. by syncrepl). Each
time, only one of them acts as the provider, but they all know of each
other. As soon as the provider is not reachable any more, the
consumers elect another provider among the remaining DSAs in the pool,
much like Berkeley DB replication does, selecting the one with the
latest contextCSN or, if necessary, resolving the conflict somehow (e.g. a
ballot with random sleep). Appropriate measures are required to
welcome the original provider back in the pool: it should become a
consumer and sync with the new provider, but conflicts might occur if
it was modified after losing providership.
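The election step described above can be sketched as follows. This is a minimal illustration, not OpenLDAP code: the function name and the pool representation are invented for the example, and it relies only on the fact that a contextCSN begins with a GeneralizedTime timestamp, so CSNs compare chronologically as plain strings.

```python
def elect_provider(pool):
    """Elect a new provider from the surviving DSAs in the pool.

    `pool` maps each DSA's URI to its latest contextCSN value.  A
    contextCSN starts with a GeneralizedTime timestamp, so ordinary
    string comparison orders CSNs chronologically.
    """
    # The DSA holding the highest contextCSN has seen the most recent
    # change, so promoting it loses the least data.
    best_csn = max(pool.values())
    candidates = sorted(uri for uri, csn in pool.items() if csn == best_csn)
    # On a tie the proposal suggests a ballot with random sleep; for
    # illustration we simply break ties deterministically by URI.
    return candidates[0]
```

In a real implementation the tie-break would have to be coordinated among the DSAs (each waiting a random interval before claiming providership), since no single node can run this function over global state.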
This essentially requires the replication configuration (i.e. the
syncprov overlay and, mostly, the syncrepl directive, updatedn and
updateref) to be modifiable at run time, via some mechanism to be
defined. For this purpose, unless back-config becomes available soon,
I'd like to investigate the possibility of temporarily delegating this
to back-monitor, e.g. by exposing the syncrepl and updateref directives
and allowing them to be modified via the protocol. Although
promotions/demotions should be performed internally, manual
intervention via the protocol should still be possible.
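A protocol-level demotion might look like the ldapmodify sketch below. Note that this is purely hypothetical: back-monitor exposed no such writable entries at the time, and both the entry DN and the attribute name are invented for illustration.

```ldif
# Hypothetical sketch: demote this DSA to consumer by pointing its
# updateref at the newly elected provider.  Neither the DN nor the
# monitorUpdateRef attribute is real back-monitor schema.
dn: cn=Database 1,cn=Databases,cn=Monitor
changetype: modify
replace: monitorUpdateRef
monitorUpdateRef: ldap://new-provider.example.com
```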
The rest of the multimaster functionality, that is the capability to
accept writes anywhere in the pool, should be delegated to the chain
overlay, in order to guarantee the consistency of the operation in a
transparent manner, at the cost of a delay between the successful
return of the update and the actual appearance of the changes on the
database.
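For reference, routing writes from a consumer to the current provider with the chain overlay looks roughly like the slapd.conf fragment below. The host names and credentials are placeholders, and the exact directive set depends on the slapd release in use.

```
# Sketch: chase the updateref referral returned for writes on a
# consumer, chaining the operation to the current provider instead
# of handing the referral back to the client.
overlay             chain
chain-uri           ldap://provider.example.com
chain-idassert-bind bindmethod=simple
                    binddn="cn=proxy,dc=example,dc=com"
                    credentials=secret
                    mode=self
chain-return-error  TRUE
```

In the multimaster scheme above, the chain-uri would be the piece that has to be rewritten at run time whenever a new provider is elected.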
I should note that if you go very far down this path, you are going to
wind up re-implementing X.500 DSP and DISP:
- X.500 servers have explicit knowledge references to their superior,
subordinate, and peer servers
- cooperating servers maintain a single connection to each other; all
chaining activity is multiplexed over this connection using DSP
- when any server connection dies, its connected neighbors notice
immediately
All of those features are necessary in order for what you propose to
work, and they're already part of X.500.
--
Howard Chu
Chief Architect, Symas Corp. Director, Highland Sun
http://www.symas.com http://highlandsun.com/hyc
Symas: Premier OpenSource Development and Support