
Re: better malloc strategies

Aaron Richton wrote:
DB. Really fixing that requires a smarter malloc.

You've mentioned libumem as that "smarter malloc" along with portability concerns: are you using libumem as a drop-in, or are you seeing performance improvements based off umem_alloc(3MALLOC) use?

I've tested again with libhoard 3.5.1, and it's actually superior to libumem in both speed and fragmentation. Here are some results comparing glibc 2.3.3 malloc, libhoard, and libumem:

                 glibc size  glibc time  hoard size  hoard time  umem size  umem time
initial size          660M
startup single       1254M     02:37.0       1678M     03:19.9      1560M    03:01.1
startup single       1713M     02:31.5       1683M     01:38.5      1698M    01:42.5
startup single       1727M     02:01.6       1684M     01:41.0      1732M    01:48.2
startup single       1727M     01:42.7       1684M     01:44.1      1753M    01:46.9
4 at once            1784M     07:31.2       1766M     02:23.4      1818M    02:30.3
4 at once            1800M     06:24.3       1782M     02:22.2      1840M    02:28.5
4 at once            1954M     07:54.0       1783M     02:24.2      1841M    02:31.1
4 at once            2002M     06:30.7       1783M     02:25.2      1841M    02:26.5

The initial size is the size of the slapd process right after startup, with the process totally idle. The id2entry DB is 1.3GB with about 360,000 entries, BDB cache at 512M, entry cache at 70,000 entries, cachefree 7000. The subsequent rows show the size of the slapd process after running a single search filtering on an unindexed attribute, basically spanning the entire DB. The entries range in size from a few KB to a couple of megabytes. Since not everything fits in RAM, disk speed is obviously a factor here, but the disk and DB are identical from run to run.

After running the single ldapsearch 4 times, I then ran the same search again with 4 jobs in parallel. Some process growth is of course expected to cover the resources for 3 additional threads (about 60MB is about right, since this is an x86_64 system).

The machine only had 2GB of RAM, and you can see that with glibc malloc, kswapd got really busy in the 4-way run. The times might improve slightly after I add more RAM to the box. But clearly glibc malloc is fragmenting the heap like crazy. The current version of libhoard looks like the winner here.
  -- Howard Chu
  Chief Architect, Symas Corp.  http://www.symas.com
  Director, Highland Sun        http://highlandsun.com/hyc
  OpenLDAP Core Team            http://www.openldap.org/project/