
how about back mysql




Hi,

I am using back-ldbm with BDB 2.4.14 from Sleepycat,
which comes with the default RPM on Red Hat Linux 6.2.
What I want is to set up multiple LDAP servers on
different machines, with all of them contacting one
machine (the data server) for the LDAP database files
(in this case the .dbb files).
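
To make this concrete, each front-end slapd would point its
back-ldbm directory at the NFS mount of the data server, with a
slapd.conf fragment roughly like the one below (the suffix and
the mount point are only placeholders):

    # back-ldbm database section on each front-end slapd
    database        ldbm
    suffix          "o=example"
    # /mnt/ldapdata is the NFS mount of the data server's
    # directory that holds the .dbb files
    directory       /mnt/ldapdata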

If it is not possible with NFS, does anyone know whether it
is possible with the Coda file system?
Also, can anyone point me to information about how ldbm
manages and works with BDB?
 
If not with BDB, is it possible with MySQL? If yes,
will I have any performance problems?

Since I am using version 2 of LDAP with the "schemacheck off"
option in slapd.conf, I can avoid storing an objectClass in
LDAP for each entry, thus eliminating the need for multiple
attribute values, and right now I have no other data that has
multiple values for a single attribute.
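
In case it is relevant, the schema-related part of my
slapd.conf is just this one global line:

    # turn off schema checking so that entries need not
    # carry an objectClass attribute
    schemacheck     off
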
I am using OpenLDAP for IMAP authentication, for qmail, and
for storing some other user details, like hintquestion,
hintanswer, and service expiry, for an application to contact
and authenticate against.
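
To give an idea of the data, a typical entry looks roughly
like the LDIF below; the DN, the values, and the serviceexpiry
attribute name are only placeholders (the hint attributes come
from my local schema):

    dn: uid=jdoe, ou=users, o=example
    uid: jdoe
    mail: jdoe@example.com
    userpassword: {crypt}placeholder
    hintquestion: first school attended
    hintanswer: placeholder
    serviceexpiry: 20020630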
 
Your suggestions will be greatly appreciated.
Looking forward to a reply.

Thanks,
RJ

> 
> On Sat, 01 Dec 2001 Howard Chu wrote :
> > If you're using back-ldbm I'm pretty sure that none of the available
> > database libraries (ndbm, gdbm, bdb) support multiple access across NFS.
> > None of (ndbm, gdbm) are even suited for multiple access on a single host.
> > gdbm runs in N-readers/1-writer mode; since back-ldbm opens the database in
> > RW mode there's no way multiple slapd instances could access the database.
> > BDB supports locking on a per-page basis (and so, perhaps it will allow
> > multiple slapds on a single database) but in general its lock system lives
> > in shared memory, which also does not propagate across NFS. (The BDB
> > documentation explicitly says not to store BDB databases in NFS, for this
> > and other reasons.) [...] (with working lockd) then gdbm
> > shouldn't allow you to even open the database multiple times. If you don't
> > have a working lockd, then you will probably corrupt the database very
> > quickly. For bdb, you will just corrupt the database.
> > 
> > Note that bdb (at least in version 3, I dunno about earlier versions) has an
> > RPC interface that could allow you to run multihost access without using
> > NFS. But there's nothing in slapd that will set this up for you.
> > 
> >   -- Howard Chu
> >   Chief Architect, Symas Corp.       Director, Highland Sun
> >   http://www.symas.com               OpenSource Development and Support
> > 
> > > -----Original Message-----
> > > From: owner-openldap-software@OpenLDAP.org
> > > [mailto:owner-openldap-software@OpenLDAP.org]On Behalf Of rj
> > > Sent: Friday, November 30, 2001 9:41 PM
> > > To: openldap-software@OpenLDAP.org
> > > Subject: multiple slapd Accessing one data area
> > >
> > > Hi Dear List,
> > >
> > > Is it possible in OpenLDAP 1.2.9 for multiple slapd instances on
> > > different systems to access one data area (all the .dbb files)?
> > > If yes, where do I specify it?
> > >
> > > One option that I can think of now is mounting via NFS,
> > > and what I have done is this:
> > >
> > > On machine A I have an LDAP server with its data in the /aaa
> > > directory. From system B I mount /aaa of machine A in
> > > read/write mode, and in slapd.conf I give the directory as the
> > > mount point. When I start the servers on both machines I am
> > > able to perform searches successfully, but any modification,
> > > add or delete only gets reflected when I restart the server.
> > > Is it a problem with the file system or with the underlying
> > > database "locking" mechanism? Can someone make this clear to
> > > me, or do we have other options?
> > >
> > > Thanks in advance
> > >
> > > Regards,
> > > RJ