
RE: Antw: RE: Openldap in container advice, how have you done it?



The IP address is known when I start the container; that would mean I 
need to sed a prepared ldif and import it into slapd at runtime. 
That would also require the availability of some secret to be able 
to import it. 

I have already prepared the container for ldif fetching, but it would 
be nicer if I could specify something like an environment variable in 
olcSyncrepl.
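
Roughly what I have in mind, as an untested sketch (the template file 
name, provider host, replication dn and the REPL_SECRET environment 
variable are just placeholders):

# syncrepl.ldif.in, a template shipped in the image (continuation lines
# start with two spaces: one for LDIF folding, one as a literal space)
dn: olcDatabase={2}hdb,cn=config
changetype: modify
replace: olcSyncrepl
olcSyncrepl: rid=RID_TOKEN provider=ldap://master.example.com
  bindmethod=simple binddn="cn=replicator,dc=example,dc=com"
  credentials=SECRET_TOKEN searchbase="dc=example,dc=com"
  type=refreshAndPersist retry="60 +"

# at container start, once slapd is listening on ldapi:///
RID=$(hostname -i | awk -F. '{print $4}')   # or any scheme within 0..999
sed -e "s/RID_TOKEN/$RID/" \
    -e "s/SECRET_TOKEN/$REPL_SECRET/" /etc/openldap/syncrepl.ldif.in \
  | ldapmodify -Y EXTERNAL -H ldapi:///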



-----Original Message-----
From: Ulrich Windl [mailto:Ulrich.Windl@rz.uni-regensburg.de] 
Sent: Monday, 12 August 2019 8:56
To: Marc Roos
Subject: Antw: RE: Openldap in container advice, how have you done it?

 >>> "Marc Roos" <M.Roos@f1-outsourcing.eu> schrieb am 10.08.2019 um 
14:07 in Nachricht 
<"H00000710014b895.1565438831.sx.f1-outsourcing.eu*"@MHS>:

> Ok, so a long rep id is not going to work:
> 
>   modifying entry "olcDatabase={2}hdb,cn=config"
>   ldap_modify: Other (e.g., implementation specific) error (80)
>       additional info: Error: parse_syncrepl_line: syncrepl id 
>       1911533132 is out of range [0..999]

Why not derive the ID from some container ID or from the container's IP 
address?
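
For example, something along these lines (a rough sketch, untested; 
any mapping has to stay within 0..999, and a hash of the address can 
of course collide between containers):

# map the container's IPv4 address to a syncrepl rid in [0..999]
IP=$(hostname -i)            # assumes a single IPv4 address is returned
# simplest: the last octet (only unique within one /24)
RID=${IP##*.}
# or hash the whole address into the allowed range (can collide)
RID=$(( $(echo "$IP" | cksum | cut -d' ' -f1) % 1000 ))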

> 
> 
> 
> 
> -----Original Message-----
> From: Marc Roos
> Sent: Saturday, 10 August 2019 1:24
> To: openldap-technical@openldap.org
> Subject: Openldap in container advice, how have you done it?
> 
> 
> 
> I was thinking of putting read-only slapd(s) in a container 
> environment so other tasks can query their data. Up until now I have 
> had replication only between vm's.
> 
> To be more flexible I thought of using stateless containers. Things 
> that could be caveats:
> 
> - replication id's
> Say I spawn another instance, I need to have a new replication id to 
> get updates from the master. But what if the task is killed, should I 
> keep this replication id? Or is it better to always use a random 
> unique replication id whenever a slapd container is launched? Maybe 
> use the launch date/time (date +'%g%H%M%S%2N') as repid? Does this 
> give issues with the master? What if I test with launching instances 
> and the master thinks there are a hundred slaves that are not 
> connecting anymore?
> 
> - updating of a newly spawned slapd instance
> When the new task is launched, its database is not up to date yet. 
> Can I prevent connections to the slapd until it is fully synced?
> Say I have user id's in slapd; it could be that when launching a new 
> instance, a user is not available yet. When clients request that 
> data they do not get it, and this user could be 'offline' until that 
> specific instance of slapd is fully updated.
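
What I had in mind for that is roughly the following readiness check, 
as an untested sketch (the suffix and provider URL are placeholders, 
and it assumes contextCSN on the suffix entry is readable with the 
binds used):

# only report the container ready once the local contextCSN has
# caught up with the provider's (single provider, no serverID games)
SUFFIX="dc=example,dc=com"
PROVIDER="ldap://master.example.com"
local_csn=$(ldapsearch -LLL -x -H ldapi:/// -s base -b "$SUFFIX" contextCSN |
            awk '/^contextCSN:/ {print $2}')
provider_csn=$(ldapsearch -LLL -x -H "$PROVIDER" -s base -b "$SUFFIX" contextCSN |
            awk '/^contextCSN:/ {print $2}')
[ -n "$local_csn" ] && [ "$local_csn" = "$provider_csn" ]   # exit 0 = ready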
> 
> - to prevent lots of records syncing
> Can I just copy the data of /var/lib/ldap of any running instance 
> into the container default image? Or does it have some unique id's 
> that would prevent this data from being used by multiple instances? 
> Is there some advice on how to do this?
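
The alternative I was also considering is seeding from an LDIF dump 
instead of copying the raw bdb files, roughly like this (untested 
sketch; database number 2 just matches the {2}hdb entry above, and 
the ldap user/group name is distribution dependent). Since slapcat 
keeps entryUUID/entryCSN, syncrepl should only have to catch up from 
there:

# on an existing replica: dump the database, operational attrs included
slapcat -n 2 -l /seed/data.ldif
# in the new container, before slapd starts: load it in quick mode
slapadd -n 2 -q -l /seed/data.ldif
chown -R ldap:ldap /var/lib/ldap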
> 
> - doing some /var/lib/ldap cleanup
> I am cleaning with db_checkpoint -1 -h /var/lib/ldap, and db_archive -d. 
> Is there an option so that slapd can initiate this? 
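
From what I have read this can at least partly be pushed into the 
configuration (untested sketch; the values are just examples):

# let slapd checkpoint the hdb database itself: every 512 kB of log
# data or every 30 minutes, whichever comes first
dn: olcDatabase={2}hdb,cn=config
changetype: modify
replace: olcDbCheckpoint
olcDbCheckpoint: 512 30

# and in /var/lib/ldap/DB_CONFIG, have Berkeley DB remove old log files
set_flags DB_LOG_AUTOREMOVE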
> 
> - keep a uniform configuration environment, or better a few 
> different slapd instances?
> In my current environment the vm slave slapd's only sync the data 
> from the master that the master's acls allow access to. That results 
> in the ldap database being quite small on some vm's and larger on 
> others.
> I am thinking of having the container slapd instances hold all data, 
> and just limiting client access via the acls. But this means a lot 
> more indexes on the slapd.
> 
> What else am I missing?