
Re: Updating the test suite (Was: commit: ldap/tests run.in)

Howard Chu writes:
> I guess it would be nice to fix things so we don't have to edit the
> scripts and mess up our commits, but how do you set up the variables
> in a script that invokes slapd (or any program) multiple times, and
> you only want special treatment for one specific invocation?

I'm not certain how I feel about my own suggestion here, but:

Each test could give each program invocation a name which is unique
within the test.

Always invoke programs with a function Invoke():

  Invoke NAME [--bg 'description' | --retcode] command arg...

Invoke() would

* Export $LDAP_TESTNAME=<NAME> for the program, in case you use a script
  which checks the test name.

* Search for an environment variable containing a command to prepend to
  the program.  In order (though the search can be optimized a bit with
  some setup when the script starts):

    $LDAP_TESTER_<test number>_<NAME>,
    $LDAP_TESTER_<test number>,
    $LDAP_TESTER_<BG, RETCODE or FG, depending on the options>

* And since we've got that function anyway,

  - call Set_pidinfo() for background processes.  The pid variable name
    would be PID_<NAME> or something.  Kill_named() should accept the
    invocation name as well as the pid number.
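
A minimal sketch of such an Invoke(), assuming a variable $TESTNO which
holds the current test number (that name is my invention; Invoke,
LDAP_TESTNAME and the LDAP_TESTER_* variables are as described above;
the --bg bookkeeping via Set_pidinfo() is elided):

```shell
Invoke() {
	NAME=$1; shift
	MODE=FG
	case $1 in
	--bg)      MODE=BG; shift 2 ;;	# drop the option and its description
	--retcode) MODE=RETCODE; shift ;;
	esac

	# Export the invocation name for wrapper scripts that check it.
	LDAP_TESTNAME=$NAME; export LDAP_TESTNAME

	# Look for a command to prepend, most specific variable first.
	WRAPPER=
	for var in "LDAP_TESTER_${TESTNO}_${NAME}" \
		   "LDAP_TESTER_${TESTNO}" \
		   "LDAP_TESTER_${MODE}"
	do
		eval WRAPPER=\"\$$var\"
		[ -n "$WRAPPER" ] && break
	done

	# For MODE=BG a real version would append '&' and record the pid
	# (Set_pidinfo); omitted here.
	$WRAPPER "$@"
}
```

Then e.g. LDAP_TESTER_003_S1='valgrind --log-file=foobar' would wrap
only invocation S1 of test003, while LDAP_TESTER_FG would wrap every
foreground invocation.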

The invocation names could be a problem.  They should be short, they
should normally not be renamed, and trying to come up with meaningful
names for everything would be a royal pain.

When people don't feel creative I'd suggest just S<decimals> for
searches, C<decimals> for compares etc. - where <decimals> is read as
the fractional part after a decimal comma.  That way we won't get
out-of-order names: to insert a search between S3 and S4, use S35.

>> Actually that --soft argument to my Demand_RC() causes a lot of problems
>> when I look at it closely.  For the time being I think this would at most
>> implement that 'run all' does not abort if one test fails.

In case this wasn't clear: I meant do _not_ implement Demand_RC --soft,
only implement that './run -ignore all' would run all the tests.

> I still have to think about how it would get used.  After a fresh build,
> you should not ignore any errors by default.  An option like this will
> only be used after you've hit one error and you want to move on to the
> following tests.

cvs update; make -s
Run all tests under 'valgrind --log-file=foobar' and go home.
Next day, there is almost no info because test003 crashed:-(
An option to keep running would avoid that.

> As such, I think a more relevant solution will be to allow telling
> "all" to begin at test XX and advance from there.  Perhaps another
> option to say all, but excluding XX, YY, and ZZ because we already
> know they will fail.

Yes, those sound useful too.

_Ignoring_ errors is a bit strong.  ./run -i all should at least
end with a report of which tests failed, and maybe return failure.
Except then 'make test' would stop after the first _backend_ which
had a failure....

Which reminds me - I'd also like a way to pass ./run arguments to 'make
test'.  Maybe ./run should not override variables which are already set
in the environment, except from its command line options.
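
One way to get that effect inside ./run - a sketch only; BACKEND and
the bdb default are illustrative names, not the real script:

```shell
# Default a variable only when the environment has not already set it,
# so 'make test' (or the user) can pass settings via the environment.
# A -b command line option would still assign BACKEND= directly later.
: "${BACKEND:=bdb}"
```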

OTOH, when this thread is over we may have reorganized things to the
point where there is little need for any of that.  Or I could just
go on doing
(for b in bdb ldbm hdb; do for s in scripts/test*; do
	./run -b $b `basename $s`; done; done)
which isn't all that much to write.

>> it gets a lot slower with valgrind
>> --memcheck on an already slow box.  I remember _some_ sleep was too
>> fast for me in that case, but I don't remember if it was this one.
>> Anyway, we could reduce the default sleep time but take the max sleep
>> time from an optional environment variable.
> I would guess the sleeps in the replication tests (to allow propagation)
> should get this variable treatment. The default sleeps are more than
> long enough right now, and they would need to be stretched further for a
> valgrind case.
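
That variable treatment could look like this - a sketch, with SLEEP_MAX
as an assumed name for the optional environment variable:

```shell
# Keep short default sleeps, but let an optional SLEEP_MAX environment
# variable stretch every sleep, e.g. for slow valgrind runs.
sleep_secs() {
	secs=$1
	if [ -n "${SLEEP_MAX:-}" ] && [ "$SLEEP_MAX" -gt "$secs" ]; then
		secs=$SLEEP_MAX
	fi
	echo "$secs"
}
Sleep() {
	sleep "$(sleep_secs "$1")"
}
```

The replication tests would then call 'Sleep 5' instead of 'sleep 5',
and a valgrind run would export SLEEP_MAX=30 or so.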


>> to find the test I want.  Then after completing the test name I have to
>> go back and delete "scripts/" after writing the test name.  That's three
>> extra keystrokes - horrible!  (As opposed to - how many? - in this part
>> of the discussion?:-)
> OK. Does no harm...

OTOH, my own explanation didn't exactly convince me that we need this:-)