
Re: (ITS#3724) back_meta more sizelimit problems

On Fri, 2005-05-13 at 12:42 +0200, Pierangelo Masarati wrote:
> > On Fri, 2005-05-13 at 12:11 +0200, Pierangelo Masarati wrote:
> >> > These tests were applied to 2.2.23 with the patch from its#3720.
> >>
> >> Could you please check if the same problem, or different problems occur
> >> without the patch, i.e. if there's any regression issue?
> never mind, I've got it.
> > I've done this and the behaviour is the same. No regression issues.
> >
> >> I'll try to reproduce the issue; I fear a conflict between back-meta and
> >> backglue sizelimit handling.
> Indeed there's a conflict: backglue decreases the enforced sizelimit by
> the amount of entries returned by each subordinate database, so back-meta
> sees a sizelimit of 1 when it is after the bdb and the bdb returns one
> entry; but back-meta uses the overall entry count to check if the sizelimit
> has been exceeded.  I need to use a local count.  A fix is about to come.
> Thanks for reporting.

Thanks. I will test the fix.

> By the way, let me note that glueing a back-meta with a local database may
> not be a good choice because backglue performs operations serially, while
> back-meta performs them in parallel; if the remote server response is
> orders of magnitude slower than that of the local database, you may want
> to turn the subordinate bdb into an additional target of the back-meta,
> possibly using an ldapi:// listener, so that operations are run in parallel
> on all databases, and the performance penalty of the remote servers is
> spread across all the operation time.
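In case it helps anyone, a rough slapd.conf fragment for that layout might look something like the following (suffixes, hostnames and the socket path are placeholders; check slapd-meta(5) for the exact target syntax):

```
# back-meta with the remote server and the former local bdb as
# parallel targets; the bdb is reached through an ldapi:// listener
database        meta
suffix          "dc=example,dc=com"
uri             "ldap://remote.example.com/ou=remote,dc=example,dc=com"
uri             "ldapi://%2Fvar%2Frun%2Fslapd.sock/ou=local,dc=example,dc=com"
```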

Indeed, the remote server response is at least one order of magnitude
slower. However, the majority of the hits against our server (10-20 per
second or so, from an IMAP server) involve a subtree that consists of
the local database entries only. The searches that include the remote
proxy (web-server-type things) occur at a rate of around one per minute.
So I am happy with the serial latency situation we have. If the rates
become more even, I will consider your approach. Thanks for the
suggestion though. I was hoping to investigate the (your?) proxycache
facility when I have some time in an effort to improve individual search
latency across the whole tree.


> p.
-- 
Dr MDT Evans, Computing Services, Queen Mary, University of London