
Re: can slapcat dump dbb file which size larger than 9G?



Wait a minute.  Slapcat, in its default invocation, dumps to stdout.  That means you should be able to get around the limit by using an OS redirect...

slapcat > huge_output.ldif

If your shell can't handle that (I've run into similar issues with tar), you should be able to do something like this...

slapcat | gzip > huge_output.ldif.gz

I don't have this much data to test with, but this should work (along with the -z parameter suggested previously).  The resulting output should be well under 2 GB (unless your data size is greater than 15 to 20 GB), since compression near 90% can be expected on LDIF text...

slapcat -z 9999999 | gzip > huge_output.ldif.gz
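And if even the compressed file would blow past the filesystem limit, the stream can be chopped into fixed-size pieces with split and reassembled later.  A minimal sketch (the file names and the dummy data generator are just placeholders for illustration; for a real dump you'd pipe `slapcat | gzip` into split with a chunk size like 1024m):

```shell
# Stand-in data: in practice this stream would come from `slapcat | gzip`
seq 1 100000 | sed 's/^/dn: uid=/' > dump.ldif

# Compress and split the stream into 1 MB chunks
# (no single output file exceeds the chunk size)
gzip -c dump.ldif | split -b 1m - huge_output.ldif.gz.

# Reassemble and verify the round trip
cat huge_output.ldif.gz.* | gunzip > restored.ldif
cmp dump.ldif restored.ldif && echo OK
```

The chunk files sort lexically (huge_output.ldif.gz.aa, .ab, ...), so a plain `cat` glob restores them in order.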

Thanks,
Gary Allen Vollink



james wrote:
RH 7.2 and RH AS 2.1 support files larger than 2 GB,

I'm afraid slapcat cannot support this.

But I don't know how to modify slapcat.c to let it dump files larger than 2 GB.
 

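[As for modifying the source: on Linux, a 32-bit program hits the 2 GB limit unless large-file support is compiled in, and this is normally enabled with the standard glibc feature-test macros rather than by editing slapcat.c itself.  A hypothetical, untested rebuild of OpenLDAP with 64-bit file offsets might look like this:]

```shell
# Untested sketch: enable 64-bit file offsets (LFS) when building OpenLDAP
env CPPFLAGS="-D_LARGEFILE_SOURCE -D_FILE_OFFSET_BITS=64" ./configure
make depend && make
```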

Tony Earnshaw <tonye@billy.demon.nl> wrote:
Tue, 2003-12-30 at 03:49, james wrote:

> There are more than 2,500,000 entries in my openldap's id2entry.dbb,
> and the data file's size is larger than 9 G.
> I want to dump all the entries to an ldif file using slapcat
> (/usr/local/sbin/slapcat), but slapcat exited after dumping only
> 970,000 entries.
>
> What can I do to get all the data in my id2entry.dbb?
> OS: Linux RH 7.2 or RH Advanced Server 2.1
> openldap: 2.0.8
> Berkeley DB: 3.19
>
Could be you're trying to save a file greater than the OS 2GB limit. In
which case you'll have to save bits of the tree at a time with
ldapsearch, GQ or similar.

--Tonni

--
mail: billy - at - billy.demon.nl
http://billy.demon.nl
