Code review comment for lp:~vila/bzr/shell-like-tests-borked

Martin Pool (mbp) wrote :

2009/9/2 Vincent Ladeuil <email address hidden>:

Thanks for doing this, you beat me to it.

I haven't read the patch itself yet, but I am keen to see some
experiments in this direction.

> <snip/>
>
>    jam> 1) My first concern is that the above doesn't provide a
>    jam> way to do any assertions. It just provides a way to run
>    jam> a bunch of commands.
>
> No. You can provide input, expected stdout and expected stderr:
>
> bzr init
> cat >file
> <content
> bzr add file
>>adding file
> bzr add foo
> 2>bzr: ERROR: No such file: u'foo'

Maybe this is a hole in the documentation then, if jam didn't notice it?
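
For what it's worth, the syntax shown above can be captured by a very small parser. This is a hypothetical sketch (not the bzrlib implementation), assuming a plain line starts a new command, '<' prefixes input, '>' prefixes expected stdout and '2>' prefixes expected stderr:

```python
def parse_script(text):
    """Parse a shell-like test script into command blocks.

    Each block is a dict with the command line plus any input,
    expected stdout and expected stderr lines attached to it.
    """
    commands = []
    current = None
    for line in text.splitlines():
        if line.startswith('2>'):
            current['stderr'].append(line[2:])
        elif line.startswith('>'):
            current['stdout'].append(line[1:])
        elif line.startswith('<'):
            current['input'].append(line[1:])
        elif line.strip():
            # A plain, non-blank line starts a new command.
            current = {'cmd': line, 'input': [],
                       'stdout': [], 'stderr': []}
            commands.append(current)
    return commands
```

Note that a command like `cat >file` is unambiguous here: the '>' prefix only matters at the start of a line, so shell redirections inside a command line are left alone.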

>
> Well, ok, I slightly lied for the last one because bzr outputs an
> abspath and that's a bit hard to embed (still doable but
> impractical).

(Also a bug that we have the unicode prefix.)

Perhaps it should eventually pretend that everything is being run in eg /test/

>
> <snip/>
>
>    jam> Though we then don't have a way to declare a difference
>    jam> between stdout and stderr.
>
> Yes, we do. I didn't distinguish between stdout and stderr at
> first, but then I realized that it would make things more
> complicated.
>
> So the feature here is that if you specify a content for stdout
> or stderr, you want it to match exactly.
>
> I feel that strict matching is a bit restrictive (as shown above
> already), but I'm not yet sold on regexps there (or on how to
> use them); I'd like more use cases first.

Robert said that the loose matching in doctest can be separated out.
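
As a sketch of what separated-out loose matching could look like (hypothetical, modelled on doctest's ELLIPSIS option rather than taken from the patch), an expected line could stay a literal match by default, with '...' standing in for variable parts such as abspaths:

```python
import re

def loose_match(expected, actual):
    """doctest-style loose matching: '...' in the expected
    output matches any run of text; everything else is literal.
    A sketch only; the real design was still under discussion.
    """
    pattern = '.*'.join(re.escape(part)
                        for part in expected.split('...'))
    return re.match(pattern + r'\Z', actual, re.DOTALL) is not None
```

That would let the abspath example above be written as `2>bzr: ERROR: No such file: u'...'` without embedding the test's working directory.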

>    jam> Nor do we have a way to say that this command should
>    jam> succeed cleanly, or this command should fail in a
>    jam> particular way.
>
> The idea is that a script should "succeed" as described,
> i.e. match the expected std[out|err] if specified for each
> command.
>
> This may not be tested correctly yet but see
> test_unexpected_output.

You could make it pretend it's being run under a shell that prints a
message if it exits with a non-zero status, as bash and zsh can be
configured to do.
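
A minimal sketch of that behaviour (hypothetical, using subprocess rather than bzrlib's in-process command machinery, since the suite no longer spawns):

```python
import subprocess
import sys

def run_and_report(argv):
    """Run a command and, like a shell configured to report
    failures, print a message when it exits non-zero.
    A sketch only, not bzrlib code.
    """
    result = subprocess.run(argv, capture_output=True, text=True)
    if result.returncode != 0:
        print('command %r exited with status %d'
              % (argv, result.returncode))
    return result.returncode
```

The script runner could then fail the test on any unexpected non-zero status, while a `2>` expectation would mark the failure as intended.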

>    jam> Maybe the overhead isn't as big as it once was, because
>    jam> we don't spawn anymore. But given Robert's recent
>    jam> threads about wanting to *speed up* selftest, this
>    jam> doesn't seem to be going in the right direction.
>
> The intent is not to have faster tests but to allow more people
> to write tests.
>
> Once the tests are written that way, we can rewrite them in more
> optimal ways; the bug or desired feature has been captured, and
> that's the goal.
>
> And the original writers can even look at the rewritten form and
> learn more quickly, since they know what the test is supposed to
> do in its first form.

I think that's a valid goal, and will lower a barrier to contribution,
but not ultimately where we want to end up. It could go wrong like
this: people who can only read the easy tests can't add new tests in
modules written in the old style, and eventually can't even read the
tests they previously wrote themselves.

I don't think we want a style of writing tests that the core
developers never use, because it will rot: so this needs to be
competitively fast and easy to use effectively, at least for some
category of tests.

But I think that's quite possible: parsing it out of a string into the
functions that the test would otherwise call shouldn't be a big
overhead.

The big thing we do need to avoid here is people creating history by a
long series of add/commit sequences, unless they're actually testing
add and commit. But there could perhaps be a pseudocommand like

 setup_simple_history

and that may in fact be faster than what some tests do.
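
One way such pseudocommands could be wired in (a hypothetical sketch; `setup_simple_history` is the name suggested above, and its body here is invented):

```python
# Registry of pseudocommands the script runner would check
# before treating a line as a real bzr command.
PSEUDOCOMMANDS = {}

def pseudocommand(func):
    """Register func under its own name as a pseudocommand."""
    PSEUDOCOMMANDS[func.__name__] = func
    return func

@pseudocommand
def setup_simple_history(state):
    # Build a small canned history in one call instead of a
    # long add/commit sequence (recorded on a plain dict here;
    # the real thing would build it via bzrlib APIs directly).
    state.setdefault('revisions', []).extend(['rev1', 'rev2'])
    return 0

def run_line(line, state):
    """Dispatch a script line to a pseudocommand if one matches."""
    name = line.split()[0]
    if name in PSEUDOCOMMANDS:
        return PSEUDOCOMMANDS[name](state)
    raise NotImplementedError('real commands are not handled '
                              'in this sketch')
```

Building the history through library calls rather than replaying user commands is exactly why this could end up faster than what some tests currently do.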

--
Martin <http://launchpad.net/~mbp/>
