
RE: back-bdb performance



I already tried this; I saw a further slowdown from adding child
transactions. As I see it, the update order is already fixed/constant. The
only thing a child transaction should gain you is less retry work
if you abort from a deadlock. All I measured was the additional overhead
of creating the extra transactions; overall throughput dropped.
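To make the tradeoff concrete, here is a schematic sketch of the cost model (plain Python, not Berkeley DB calls; the function names, the deadlock model, and the per-child overhead constant are all illustrative assumptions, not back-bdb code). Retrying only a failed child transaction avoids redoing earlier work after a deadlock abort, but every child pays its own begin/commit cost, so with few deadlocks that overhead can dominate, which is consistent with the throughput drop described above.

```python
# Schematic model, NOT Berkeley DB API calls. Each write operation costs
# one unit of work; a deadlock aborts and forces a retry. Compare redoing
# the whole parent transaction versus redoing only the failed child.

CHILD_TXN_OVERHEAD = 1   # assumed cost of an extra txn begin/commit per child

def flat_txn(n_ops, deadlock_at):
    """One big transaction: a deadlock aborts and redoes everything."""
    work = 0
    pending_deadlocks = list(deadlock_at)  # ops that deadlock once each
    while True:
        aborted = False
        for i in range(n_ops):
            work += 1
            if pending_deadlocks and pending_deadlocks[0] == i:
                pending_deadlocks.pop(0)
                aborted = True
                break          # abort the whole txn, retry from scratch
        if not aborted:
            return work

def child_txns(n_ops, deadlock_at):
    """One child transaction per op: only the failed op is retried,
    at the price of per-child transaction overhead."""
    work = 0
    pending_deadlocks = set(deadlock_at)
    for i in range(n_ops):
        while True:
            work += 1 + CHILD_TXN_OVERHEAD
            if i in pending_deadlocks:
                pending_deadlocks.discard(i)  # deadlock once, then succeed
            else:
                break
    return work

# With no deadlocks, child transactions are pure overhead.
print(flat_txn(10, []))    # 10
print(child_txns(10, []))  # 20
# Even with a deadlock late in the batch, the saved retry work
# (9 redone ops) can be smaller than the accumulated overhead.
print(flat_txn(10, [8]))   # 9 wasted + 10 = 19
print(child_txns(10, [8])) # 20 + 2 = 22
```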

If you'd care to give me some hints about what you saw in the update
order, I'm open.

  -- Howard Chu
  Chief Architect, Symas Corp.       Director, Highland Sun
  http://www.symas.com               http://highlandsun.com/hyc
  Symas: Premier OpenSource Development and Support

> -----Original Message-----
> From: owner-openldap-devel@OpenLDAP.org
> [mailto:owner-openldap-devel@OpenLDAP.org]On Behalf Of Marijn Meijles
> Sent: Thursday, December 13, 2001 8:38 AM
> To: openldap-devel@OpenLDAP.org
> Subject: Re: back-bdb performance
>
>
> You wrote:
> >
> > Concurrent write access is a problem because of the possibility of
> deadlock. Using the Concurrent Data Store may seem to cause a
> > bottleneck, but it may be faster overall because it avoids deadlock.
> > When I split my 10000 entry ldif file into two separate files and
> > spawn two ldapadd commands on the same server, back-bdb time goes to
> > 1 or 2 minutes (up from 30-some seconds) depending on the deadlock
> > detector. Back-ldbm is still around 46 seconds.
>
> It's fairly easy to get this back to 30-something. Just think hard
> about the update order and use child transactions as a trick. I
> just finished it and it works like a charm.
>
> --
> Marijn@bitpit.net
> ---
> The light at the end of a tunnel may be an oncoming train.
>