Re: Distributed ppolicy state
- To: Howard Chu <email@example.com>
- Subject: Re: Distributed ppolicy state
- From: "Brett @Google" <firstname.lastname@example.org>
- Date: Fri, 23 Oct 2009 00:34:28 +1000
- Cc: email@example.com
- In-reply-to: <4AE00D64.firstname.lastname@example.org>
- References: <4AE00D64.email@example.com>
On Thu, Oct 22, 2009 at 5:44 PM, Howard Chu <firstname.lastname@example.org> wrote:
> In the case of a local, load-balanced cluster of replicas, where the network latency between DSAs is very low, the natural coalescing of updates may not occur as often. Still, it would be better if the updates didn't happen at all. And in such an environment, where the DSAs are so close together that latency is low, distributing reads is still cheaper than distributing writes. So, the correct way to implement this global state is to keep it distributed separately during writes, and collect it during reads.
I'd think that to indicate the topology you would create some administrative name, perhaps a simple string like "sales west" or "cluster one", to denote a topological region, and you would specify for each DSA which administrative region it is logically part of. This administrative region name, plus the unique identifier of the principal in question, could then be used as a key holding a simple locked/unlocked boolean value on the replica's parent.
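To make the keying concrete, here is a minimal sketch of that idea. The key format, the region names, and the in-memory dict standing in for the provider-side state are all my own illustrative assumptions, not anything slapd actually does:

```python
# Hypothetical sketch: compose a lock-state key from an administrative
# region name plus the principal's unique identifier. The "#" separator
# and the example identifiers are assumptions for illustration only.

def lock_key(region: str, principal_uuid: str) -> str:
    """Key under which the boolean locked/unlocked state would be
    stored on the replica's parent (provider)."""
    return f"{region}#{principal_uuid}"

# Trivial in-memory stand-in for provider-side state.
lock_state: dict[str, bool] = {}

key = lock_key("sales west", "cdef1234-0000-1111-2222-333344445555")
lock_state[key] = True   # principal locked for the whole region
```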
The above would give coarse-grained control with little overhead. All DSAs could keep track of password failures locally, and push the lock value up to their provider only when the retry limit has been exceeded for a particular principal and administrative domain on a particular server, thus locking that principal across the whole administrative region. This administrative "lock" value would be replicated downward to the other DSAs in the administrative domain, locking that principal on every DSA there.
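The coarse-grained flow above could be sketched as follows. The class names, the retry limit, and the push/replicate methods are invented for illustration; a real implementation would ride on syncrepl rather than method calls:

```python
# Hedged sketch of the coarse-grained scheme: each DSA counts failures
# locally, and pushes only a boolean lock upstream once retries are
# exceeded; the provider then replicates that lock down to all DSAs in
# the region. All names and the limit below are assumptions.

MAX_RETRIES = 3

class Dsa:
    def __init__(self, region: str, provider=None):
        self.region = region
        self.provider = provider            # parent DSA, None at the top
        self.failures: dict[str, int] = {}  # principal -> local count
        self.locks: set[str] = set()        # "region#principal" keys

    def record_failure(self, principal: str) -> None:
        self.failures[principal] = self.failures.get(principal, 0) + 1
        if self.failures[principal] > MAX_RETRIES and self.provider:
            # Push only the lock value, not every failure, upstream.
            self.provider.receive_lock(f"{self.region}#{principal}")

class Provider(Dsa):
    def __init__(self, region: str):
        super().__init__(region)
        self.consumers: list[Dsa] = []

    def receive_lock(self, key: str) -> None:
        self.locks.add(key)
        # Replicate the lock downward to every DSA in the region.
        for c in self.consumers:
            c.locks.add(key)

provider = Provider("cluster one")
dsa = Dsa("cluster one", provider)
provider.consumers.append(dsa)
for _ in range(MAX_RETRIES + 1):
    dsa.record_failure("uid=jdoe")
```

After the fourth failure, "cluster one#uid=jdoe" appears in the lock set of the provider and of every consumer in the region.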
Alternatively, for more fine-grained capture of password failure counts, each consumer could push a key consisting of administrative name + unique identifier of the principal in question + its replica ID, with a simple count of password failures. The value would be stored locally, but pushed up to the provider only when it changes, and since each consumer would have its own private namespace on the provider, there would be no collisions nor any need to wait for exclusive access to write to it.
The provider could aggregate these values periodically without the need for an exclusive lock, and the aggregated value could then be replicated downwards to the replicas for use in controlling access to accounts.
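A small sketch of that provider-side aggregation, under my own assumptions about the key layout (region, principal, replica ID tuples) and with plain dicts standing in for replicated entries:

```python
# Sketch of the fine-grained scheme: each consumer writes its failure
# count under a private (region, principal, replica-id) key, so no
# exclusive lock is needed; the provider periodically sums the counts
# per principal and replicates the totals downward.

from collections import defaultdict

def aggregate(counts: dict[tuple, int]) -> dict[tuple, int]:
    """Sum per-replica failure counts down to (region, principal)."""
    totals = defaultdict(int)
    for (region, principal, _rid), n in counts.items():
        totals[(region, principal)] += n
    return dict(totals)

# Example per-replica state as it might sit on the provider; the
# uid/rid values are illustrative only.
per_replica = {
    ("sales west", "uid=jdoe", "rid=001"): 2,
    ("sales west", "uid=jdoe", "rid=002"): 1,
    ("sales west", "uid=asmith", "rid=002"): 1,
}
totals = aggregate(per_replica)
# totals[("sales west", "uid=jdoe")] == 3
```

Since each replica only ever writes its own key, the aggregation pass can read a consistent-enough snapshot without blocking writers.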