>>>>> "jam" == John A Meinel writes:
<snip/>
jam> 1) My first concern is that the above doesn't provide a
jam> way to do any assertions. It just provides a way to run
jam> a bunch of commands.
No. You can provide input, expected stdout and expected stderr:
bzr init
cat >file
<content
bzr add file
>adding file
bzr add foo
2>bzr: ERROR: No such file: u'foo'
Well, ok, I slightly lied about the last one: bzr outputs an
abspath there, which is hard to embed literally in the script
(still doable, but impractical).
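To make the format above concrete, here is a rough sketch of how such a script could be split into commands with their input and expected output. All names here (`parse_script` and friends) are hypothetical illustrations, not the actual bzrlib code:

```python
# Sketch of parsing the script format shown above:
# '<' lines are stdin fed to the preceding command, '>' lines are
# expected stdout, '2>' lines are expected stderr. A leading '>' only
# counts at the start of a line, so 'cat >file' stays a command line.

def parse_script(text):
    """Split a script into (command, input, stdout, stderr) tuples."""
    commands = []
    cmd = None
    for line in text.splitlines():
        if not line.strip():
            continue
        if line.startswith('2>'):
            cmd[3].append(line[2:])      # expected stderr
        elif line.startswith('>'):
            cmd[2].append(line[1:])      # expected stdout
        elif line.startswith('<'):
            cmd[1].append(line[1:])      # input for the command
        else:
            cmd = [line, [], [], []]     # a new command
            commands.append(cmd)
    return [tuple(c) for c in commands]

script = """\
bzr init
cat >file
<content
bzr add file
>adding file
"""
for command, cin, cout, cerr in parse_script(script):
    print(command, cin, cout, cerr)
```

A runner would then execute each command, feed it the collected input, and compare the captured streams against the expectations.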
<snip/>
jam> Though we then don't have a way to declare a difference
jam> between stdout and stderr.
Yes, we do. I didn't distinguish between stdout and stderr at
first, but then I realized that not distinguishing them would
make things more complicated.
So the feature here is that if you specify a content for stdout
or stderr, you want it to match exactly.
I feel that strict matching is a bit restrictive (as shown
above), but I'm not yet sold on regexps there (nor on how to
embed them); I'd like more use cases first.
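One possible middle ground between strict matching and full regexps, purely as an illustration and not part of the proposal: let `...` in an expected line match any run of characters, the way doctest's ELLIPSIS option does, so an abspath can be elided:

```python
import re

def output_matches(expected, actual):
    """Exact match, except that '...' in the expected text matches
    any run of characters (e.g. an absolute path). Sketch only."""
    pattern = '.*'.join(re.escape(part) for part in expected.split('...'))
    return re.match(pattern + r'\Z', actual, re.DOTALL) is not None

print(output_matches("bzr: ERROR: No such file: u'...'",
                     "bzr: ERROR: No such file: u'/tmp/work/foo'"))
```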
jam> Nor do we have a way to say that this command should
jam> succeed cleanly, or this command should fail in a
jam> particular way.
The idea is that a script should "succeed" as described,
i.e. match the expected std[out|err] if specified for each
command.
This may not be tested correctly yet but see
test_unexpected_output.
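That success criterion can be sketched as follows (hypothetical names; whatever actually runs the command would supply the captured streams):

```python
# Sketch: a command "succeeds" iff each captured stream equals the
# expected one, whenever an expectation was given for that stream.
def check_command(expected_out, expected_err, actual_out, actual_err):
    if expected_out is not None and actual_out != expected_out:
        raise AssertionError('unexpected stdout: %r' % actual_out)
    if expected_err is not None and actual_err != expected_err:
        raise AssertionError('unexpected stderr: %r' % actual_err)

# No expectation for stderr, so only stdout is checked here.
check_command('adding file\n', None, 'adding file\n', '')
```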
<snip/>
jam> Maybe the overhead isn't as big as it once was, because
jam> we don't spawn anymore. But given Robert's recent
jam> threads about wanting to *speed up* selftest, this
jam> doesn't seem to be going in the right direction.
The intent is not to have faster tests but to allow more people
to write tests.
Once the tests are written that way, we can rewrite them in more
optimal ways; the bug or desired feature has been captured, and
that's the goal.
And the original writers can even look at the rewritten form and
learn more quickly, since they know what the test is supposed to
do in its first form.
jam> 2)
jam> +class TestCaseWithScript(tests.TestCaseWithTransport):
jam> ...
jam> + def do_bzr(self, input, args):
jam> ...
jam> + def do_cat(self, input, args):
jam> ...
jam> It feels to me like things such as "do_bzr" should be part of a helper
jam> class, not part of the TestCase itself.
That doesn't fly for jailed commands, so I didn't try.
<snip/>
jam> By making _script_class a member variable, test cases
jam> that want to override behavior can do so by subclassing
jam> ScriptClass rather than directly overriding
Use cases for overriding?
And don't forget that we can run several scripts in the same test
like we do with run_bzr().
<snip/>
jam> 3) I think the functionality is well tested.
jam> My biggest concerns are how is this actually going to
jam> benefit us.
I hope I clarified things a bit.