Re: Ldap Servers sharing same mount point
> When I modified gdbm_open function in the file
> libraries/libldbm/ldbm.c to have GDBM_NOLOCK, I was
> able to do ldapsearch from two LDAP servers sharing
> the same NFS mount. But, doing a write from one LDAP
> server and read from other one, caused the writing
> LDAP server to crash. (So, this could not be
> It seems that once the index file (dn2id.gdbm) is opened by
> the server, the lock it acquires persists until the ldap
> server is stopped. But there should be no harm (I
> mean none of the index/database files will be
> corrupted) if only one slapd modifies the
> database/processes the queries, while the other one
> only processes queries. I am not an expert on
> the OpenLDAP source code, but any idea whether this is
> feasible, and if so, how much time would it take?
Santhosh, you don't seem to appreciate the amount of trouble that
you are headed for. You are trying to get slapd to do something
it was definitely not designed to do. I would agree that multiple
slapd processes could read the same directory if they are *all*
read-only. To do what you are asking, you will need to put in
code to detect changes to files so you can throw away old cached
data. The options are, of course, endless -- you can do anything
you want once you start programming, but I think your efforts would
be better spent moving to 2.x, getting real referrals working, and
working within the design of LDAP/slapd where you could get a lot
more support from these lists.
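For what it's worth, the behavior you describe is consistent with ordinary
advisory file locking: the first slapd locks dn2id.gdbm and holds the lock
for the life of the process, so the second server's open fails with EAGAIN
(errno 11 on Linux, "Resource Temporarily Unavailable" -- the very error in
your log). A minimal sketch of that failure mode, using POSIX flock() as a
stand-in rather than the actual GDBM code:

```python
import errno
import fcntl
import os
import tempfile

# Hypothetical stand-in for dn2id.gdbm; this is not OpenLDAP code,
# only an illustration of advisory-locking semantics.
path = os.path.join(tempfile.mkdtemp(), "dn2id.gdbm")

# "First slapd": opens the index file and takes an exclusive lock.
fd1 = os.open(path, os.O_RDWR | os.O_CREAT)
fcntl.flock(fd1, fcntl.LOCK_EX | fcntl.LOCK_NB)

# "Second slapd": the non-blocking lock attempt fails immediately.
fd2 = os.open(path, os.O_RDWR)
try:
    fcntl.flock(fd2, fcntl.LOCK_EX | fcntl.LOCK_NB)
    print("second lock acquired")
except OSError as e:
    # EWOULDBLOCK == EAGAIN: "Resource Temporarily Unavailable"
    print("second lock refused, errno", e.errno)

os.close(fd1)
os.close(fd2)
```

(And note that flock-style locks are generally unreliable over NFS anyway,
which is one more reason GDBM-on-NFS sharing is fragile.)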
> Are the index/database files modified during
> ldapsearch ?
They could be.
> (My assumption is that they are modified only for
But there is the potential (unless your particular setup avoids it)
that a modify/add/etc. could occur at the same time, in another
thread within slapd, or in the master slapd in your case, so
the database could be modified while a search is in progress.
> I did not have any luck with slave slapd setup to get
> data propagated to the master server.
That's what the referral is for. While I understand there is work
on LDUP (where each server still has its own copy of the database),
which is closer to what you are thinking of, the LDAP way
is to issue a referral to the client requesting the modify, telling
it to go somewhere else to make the change. This keeps replication
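Concretely, on a slurpd-style slave the setup is a couple of lines in
slapd.conf. This is only a sketch (host names and DNs are made up, and the
updateref directive is the 2.x spelling):

```
# slave's slapd.conf (sketch)
database    ldbm
suffix      "dc=example,dc=com"
directory   /usr/local/var/openldap-data
# DN the replication daemon binds as to push updates:
updatedn    "cn=replicator,dc=example,dc=com"
# any other client attempting a write gets a referral to the master:
updateref   ldap://master.example.com
```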
Another drawback of using a common directory through an NFS mount
is that if you lose the system which is serving out the directory
via NFS, you're down, period. One of the ideas of replicas is
that a replica can stand on its own, at least for read-only operations.
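On the master side, replication to such standalone replicas is driven by
slurpd reading a change log. Again only a sketch, with invented host names
and credentials:

```
# master's slapd.conf (sketch): slapd appends changes to replogfile,
# and slurpd replays them to each listed replica over LDAP
replogfile  /usr/local/var/openldap-slurp/replog
replica     host=replica1.example.com:389
            binddn="cn=replicator,dc=example,dc=com"
            bindmethod=simple credentials=secret
```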
> I appreciate any help.
> --- "Christian J. Chuba" <firstname.lastname@example.org>
> > Hugo, a slight clarification. We are only trying to use
> > two slapd servers to 'read' the directory, not update the
> > directory. Are you saying that only one slapd server can
> > access the same back-end storage, even for read operations?
> > ----- Original Message -----
> > From: <Hugo.van.der.Kooij@caiw.nl>
> > To: "Iddyamadom Santhoshkumar" <email@example.com>
> > Cc: <OpenLDAP-software@OpenLDAP.org>
> > Sent: Sunday, December 03, 2000 5:59 AM
> > Subject: Re: Ldap Servers sharing same mount point
> > > On Sat, 2 Dec 2000, Iddyamadom Santhoshkumar wrote:
> > >
> > > > As part of setting up two ldap servers (for
> > > > availability) we thought of setting up two machines
> > > > with OpenLdap installed on both of them. Both "slapd"s
> > > > point to the same database (same "directory" entry in
> > > > slapd.conf on both machines). It is possible to do
> > > > ldapsearch only on the machine where slapd is started
> > > > first. On the other machine, ldapsearch gives
> > > > "ldap_search: No Such object". With detailed logging,
> > > > it seems the problem is that ldbm_cache_open returns
> > > > error 11, "Resource Temporarily Unavailable", while
> > > > trying to open dn2id.gdbm
> > >
> > > I don't think you can have two servers accessing the
> > > same backend database like this. Perhaps it works with
> > > a SQL backend database, but it is a nightmare setup.
> > >
> > > For multiple servers, the replication daemon is the
> > > path to go.
> > >
> > > Hugo.
> > >
> > > --
> > > Hugo van der Kooij; Oranje Nassaustraat 16; 3155 VJ Maasland
> > > firstname.lastname@example.org  http://home.kabelfoon.nl/~hvdkooij/
> > >