
Re: [LMDB] Large transactions



Jürgen Baier wrote:
Hi,

thanks for the answer. However, I still have a follow-up question on this benchmark.

When I add 1 billion key/value pairs (16-byte MD5 keys) to the LMDB database in a single transaction (I get similar results when I add the same data across multiple transactions), I see the following run times:

Windows, without MDB_WRITEMAP: 46h
Windows, with MDB_WRITEMAP: 6h (!)
Linux (ext4), without MDB_WRITEMAP: 75h
Linux (ext4), with MDB_WRITEMAP: 73h

MDB_WRITEMAP seems to have a huge impact on write performance on Windows, but on Linux I do not see similar improvements.

So I have two questions:

1) Could the difference between Linux and Windows performance with the MDB_WRITEMAP option be related to the fact that LMDB currently uses sparse files on Linux, but not on Windows?

Unlikely.

2) Is there a way to speed up Linux? Is there a way to pre-allocate the data.mdb on startup?

Try it and see. Use the env fd with fallocate(2).
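
As an untested sketch of that suggestion (the 64 GiB map size and the ./db directory are just example assumptions; adjust to your data), something along these lines pre-allocates data.mdb right after opening the environment:

/* Sketch: pre-allocate the LMDB data file via fallocate(2) on the env fd. */
#define _GNU_SOURCE          /* for fallocate() */
#include <fcntl.h>
#include <stdio.h>
#include <lmdb.h>

int main(void)
{
    MDB_env *env;
    mdb_filedes_t fd;
    size_t mapsize = (size_t)64 << 30;   /* assumed 64 GiB map size */
    int rc;

    if ((rc = mdb_env_create(&env)) != 0)
        return rc;
    mdb_env_set_mapsize(env, mapsize);
    if ((rc = mdb_env_open(env, "./db", MDB_WRITEMAP, 0664)) != 0)
        return rc;

    /* Fetch the data file's descriptor and allocate the full map size,
     * so later writes don't have to extend a sparse file page by page. */
    mdb_env_get_fd(env, &fd);
    if (fallocate(fd, 0, 0, mapsize) != 0)
        perror("fallocate");

    mdb_env_close(env);
    return 0;
}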

Thanks,

Jürgen


On 21.11.17 21:17, Howard Chu wrote:
Jürgen Baier wrote:
Hi,

I have a question about LMDB (I hope this is the right mailing list for such a question).

I'm running a benchmark (similar to my intended use case) that does not behave as I hoped. I store 1 billion key/value pairs in a single LMDB database. _In a single transaction._ The keys are MD5 hashes of random data (16 bytes) and the value is the string "test".

The documentation about mdb_page_spill says (as far as I understand) that this function is called to prevent MDB_TXN_FULL situations. Does this mean that my transaction is simply too large to be handled efficiently by LMDB?

Yes.
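
For reference, a rough sketch of what committing the load in batches could look like (the batch size of 1,000,000 and the next_md5_key() helper are hypothetical, not part of the LMDB API):

/* Sketch: bulk-load in batches, committing every `batch` puts instead of
 * holding one billion puts in a single write transaction. */
#include <stddef.h>
#include <lmdb.h>

/* Hypothetical helper: fills key[16] with the next MD5 key to insert. */
extern void next_md5_key(unsigned char key[16]);

int bulk_load(MDB_env *env, MDB_dbi dbi, size_t total)
{
    const size_t batch = 1000000;        /* assumed: commit every million puts */
    unsigned char keybuf[16];
    MDB_val key = { sizeof(keybuf), keybuf };
    MDB_val val = { 4, "test" };
    MDB_txn *txn = NULL;
    int rc;

    for (size_t i = 0; i < total; i++) {
        /* Start a fresh write transaction at the beginning of each batch. */
        if (txn == NULL && (rc = mdb_txn_begin(env, NULL, 0, &txn)) != 0)
            return rc;
        next_md5_key(keybuf);
        if ((rc = mdb_put(txn, dbi, &key, &val, 0)) != 0) {
            mdb_txn_abort(txn);
            return rc;
        }
        /* End of a batch: commit and let the next iteration begin a new txn. */
        if ((i + 1) % batch == 0) {
            if ((rc = mdb_txn_commit(txn)) != 0)
                return rc;
            txn = NULL;
        }
    }
    return txn ? mdb_txn_commit(txn) : 0;
}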




--
  -- Howard Chu
  CTO, Symas Corp.           http://www.symas.com
  Director, Highland Sun     http://highlandsun.com/hyc/
  Chief Architect, OpenLDAP  http://www.openldap.org/project/