Re: 8 hours tests ends with inconsistent DB.
Quanah Gibson-Mount wrote:
--On Saturday, June 12, 2004 7:31 PM +0200 paul k <firstname.lastname@example.org> wrote:
While totally agreeing with your deployment strategies, I would not count
them as proper arguments for the matter at hand (benchmarking OL). I may
be ignorant, but I'd expect a poorly configured system to perform poorly
and, of course, not to do things as you expect, but not to misbehave or
fail. If the underlying DB is not configured explicitly, fall back to
sane defaults; if you hit the resource limits of the hardware or whatever
mechanism, the application should behave gracefully.
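(To make "configured explicitly" concrete: for the BDB backend this would mean a DB_CONFIG file in the database directory. A minimal sketch might look like the following; the values are illustrative assumptions, not recommendations, and need tuning to the actual hardware:

```
# Illustrative DB_CONFIG sketch for the BDB backend.
# Values are assumptions for the example, tune per system.
set_cachesize 0 268435456 1   # 256 MB BDB cache, one segment
set_lk_max_locks 3000         # raise lock-table limits for heavy load
set_lk_max_objects 3000
set_lk_max_lockers 1500
```

Without such a file, BDB falls back to its own built-in defaults, which are far smaller than most deployments need.)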
I'm sorry, but what's being done here is not benchmarking.
Hi Quanah, thanks for your answer.
I apologize for not using the term "benchmark" correctly. One may have
different expectations about the result of a "benchmark":
b) comparable results
If b) applies, you are right about the missing specs, but I guess the OP
just wanted to test the limits of OL, and the issue was not poor
performance but a corrupt DB and a lack of stability under extreme load.
You cannot blame Sleepycat for that (bugs aside). From OL's view, the
backend is (or should be) just a resource like RAM, HD, or whatever, with
its inherent limits. The question is what happens when those limits are
hit. (I know this is an oversimplified view, not taken from this world ;).
Howard has answered