Re: (ITS#3851) Berkeley DB Scalability Patch
--On Thursday, July 28, 2005 9:02 AM -0400 Jong-Hyuk wrote:
> Quanah, thanks for your inputs with experimental data. I believe that you
> need more data points to draw a meaningful comparison, though.
> Especially, I'd like to see 1) what the result for larger DITs, say ones
> having 1 ~ 4 million entries, would be; and 2) the BDB cache sizes and
> actual RSS are needed for each of these experiments. I'm afraid that you
> are looking at only one point in the whole scalability picture.
> Actually, I was busy doing baseline scalability evaluation so far. I was
> able to add up to 64 million entries within a reasonable amount of time
> (currently doing the same with 128 million).
I don't think you read my email very closely.
1) I noted that I don't have systems with large amounts of RAM to test large DITs.
2) I didn't say this was a meaningful comparison; I said this is behavior I noticed when using a large set of indices.
I am quite aware that I'm only looking at one small data point. But what is
significant to me about that data point, given what you said in your previous
emails about this patch, is that you are not using a significant number of
indices. You are only using two (one objectclass eq, one cn eq,sub). What
I saw in my 100k-entry test is that when I went from 3 indices to 21, the
scalability patch begins to suffer. That is why I asked, at the end of my
email:
>> Have you done any testing of your patch on large scale DB's with a
>> good number of indices?
That is the question I'm interested in having answered. It's great if
the patch holds up when you do 500-billion-entry databases. If it can't
function when you have more than some number X of indices, where X is
rather small, then its usefulness becomes suspect.
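To make the two scenarios concrete, here is roughly what the index configurations being compared would look like in slapd.conf. The two-index setup is taken from what was described for the patch tests; the attribute list in the larger configuration is purely my own illustration, since the original mails only name objectclass and cn:

```
# Minimal indexing, as described for the patch tests:
index objectClass            eq
index cn                     eq,sub

# A heavier configuration along the lines of the ~21-index test
# (attribute choices here are illustrative, not from the original mail):
index objectClass            eq
index cn,sn,givenName        eq,sub
index uid,mail               eq
index uidNumber,gidNumber    eq
index memberUid              eq
index entryUUID,entryCSN     eq
```

Each additional indexed attribute means another set of BDB index databases that must be maintained and cached, which is why index count, and not just entry count, matters for the scalability picture.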