RE: PLEASE HELP! maximum size of DB in 2.0.7
- To: "Dan Shriver" <firstname.lastname@example.org>, <openldap-software@OpenLDAP.org>
- Subject: RE: PLEASE HELP! maximum size of DB in 2.0.7
- From: "Serikstad, Kevin (CCI-Atlanta)" <Kevin.Serikstad@cox.com>
- Date: Mon, 27 Aug 2001 16:38:35 -0400
- Content-class: urn:content-classes:message
- Thread-index: AcEvNl/Sapsnl4/dTqKWbDiC/wd52AAAXnzg
- Thread-topic: PLEASE HELP! maximum size of DB in 2.0.7
It is possible you've hit the file system's file size limit and not a
limit in OpenLDAP. I don't know the process for enabling large files on
an XFS file system, but with UFS, for example, you have to format the
file system for large files and then mount it with the "largefiles"
option. With VxFS, you can enable large files simply by mounting it
with the "largefiles" option set.
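For illustration only, on a Solaris host with VxFS the check-and-enable sequence might look something like the following; the device path and mount point are placeholders, not details from this thread:

```shell
# Report whether the VxFS file system currently allows large files
# (prints "largefiles" or "nolargefiles")
fsadm -F vxfs /export/ldap

# Flip the large-file flag on an existing VxFS file system
fsadm -F vxfs -o largefiles /export/ldap

# Remount with the largefiles option so files may exceed 2 GB
# (/dev/vx/dsk/datadg/ldapvol is a hypothetical volume name)
mount -F vxfs -o largefiles /dev/vx/dsk/datadg/ldapvol /export/ldap
```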
Maybe you have a man page on mounting XFS file systems. On Solaris, I
can run "man mount_ufs" and it brings up the supported mount options
for that file system type.
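One quick way to tell whether the 2 GB ceiling comes from the file system rather than from slapd is to try creating a sparse file just past the 2^31-byte mark yourself. This is a generic probe, not something from the original mail; the path is a placeholder:

```shell
# Write a single byte at offset 2^31; if the file system (or its mount
# options) caps files at 2 GB, dd fails with "File too large" and the
# limit is not OpenLDAP's.
dd if=/dev/zero of=/tmp/lf_probe bs=1 count=1 seek=2147483648

# On success the sparse file is 2^31 + 1 bytes long
ls -l /tmp/lf_probe
rm -f /tmp/lf_probe
```

If dd succeeds, the file system can hold files over 2 GB and the limit is more likely in the software or its build (e.g. lack of large-file support when it was compiled).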
> -----Original Message-----
> From: Dan Shriver [SMTP:email@example.com]
> Sent: Monday, August 27, 2001 4:22 PM
> To: openldap-software@OpenLDAP.org
> Subject: PLEASE HELP! maximum size of DB in 2.0.7
> We recently were trying to add 5 million entries to a directory
> server on a system with an XFS file system and OpenLDAP 2.0.7.
> When the DB grew to 2^31 bytes it refused to accept any more entries.
> A ~2 GB file is far too small for us. What is the workaround?
> Is this bug just in 2.0.7 (we did not try loading 2.0.11 on the
> other box since it is a production machine, but if we do deploy
> with OpenLDAP we plan to use 2.0.11)?