Merge lp:~clint-fewbar/charm-tools/charm-tests-spec into lp:~charmers/charm-tools/trunk

Proposed by Clint Byrum
Status: Merged
Merged at revision: 122
Proposed branch: lp:~clint-fewbar/charm-tools/charm-tests-spec
Merge into: lp:~charmers/charm-tools/trunk
Diff against target: 188 lines (+182/-0)
1 file modified
doc/source/charm-tests.rst (+182/-0)
To merge this branch: bzr merge lp:~clint-fewbar/charm-tools/charm-tests-spec
Reviewer: charmers (review pending)
Review via email:

Description of the change

This is a specification for implementing automated tests for the approved juju charms. It is being generated from this branch into an HTML file every 15 minutes here:

Revision history for this message
Jelmer Vernooij (jelmer) wrote :

This seems very reasonable. Some nitpicky comments:

It seems that a non-zero exit code indicates a failure, and a zero exit code indicates success? I guess this is fairly standard, but it would be nice to have it documented explicitly. Is it necessary to use 1 for failure (and perhaps anything else for errors), or is any non-zero exit code sufficient?

Based on the example, it also seems like results can be reported to stdout. Is the "ERROR: " prefix necessary for problems, and how many ERROR: lines can there be? What happens to output that doesn't start with ERROR: or INFO: ? What happens with stderr?

Do packages listed in requirements.yaml have to be in main, or can they be in universe/multiverse too?

Revision history for this message
Martin Pool (mbp) wrote :

> This seems very reasonable.

Yes, thanks for working on this.

> Some nitpicky comments:
> It seems that a non-zero exit code indicates a failure, and a zero exit code
> indicates success? I guess this is fairly standard, but it would be nice to
> have it documented explicitly. Is it necessary to use 1 for failure (and
> perhaps anything else for errors), or is any non-zero exit code sufficient?

I think it would be well worthwhile making this explicit from day one: most test frameworks eventually need to distinguish "test failed" from "test couldn't be run", "results were inconclusive", "test timed out", etc. If you specify that a failure must be, say, 1, then you can add other things later if you want.

Revision history for this message
Clint Byrum (clint-fewbar) wrote :

OOPS! I pushed this into charm-tools trunk by accident; I did mean to address all the questions asked.

* I will write up explicit explanations of all possible exit codes, including reserving a few for the future and suggesting what the currently used ones are for.

* Re the results output, I was thinking it would be best if we simply printed them on stdout, with any other problems ending up on stderr. I will define an expected, loose format, but I think the exit code is the place to specify PASS/FAIL/SKIP/ERROR (meaning a problem outside an assertion).
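A minimal sketch of what such a convention could look like; the specific reserved values below (2 for ERROR, 100 for SKIP) are assumptions for illustration, not part of the spec yet:

```shell
#!/bin/sh
# Hypothetical exit-code convention for charm tests. Only PASS=0 and
# FAIL=1 are implied by the discussion above; ERROR and SKIP values
# are placeholders pending the written-up spec.
PASS=0     # test ran, all assertions held
FAIL=1     # test ran, an assertion failed
ERROR=2    # a problem outside an assertion (setup, timeout, tooling)
SKIP=100   # the test chose not to run in this environment

# Map an observed unit state to an exit code, printing the loose
# INFO:/ERROR: stdout format discussed above.
code_for_state() {
    case $1 in
        started)
            echo "INFO: PASS" ; return $PASS ;;
        install_error|config_error|start_error)
            echo "ERROR: unit hit $1" ; return $FAIL ;;
        *)
            echo "ERROR: inconclusive state $1" ; return $ERROR ;;
    esac
}

code_for_state started
```

Keeping the distinction in the exit code (rather than parsing stdout) lets a runner add new reserved values later without breaking existing tests.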

Preview Diff

1=== added directory 'doc'
2=== added directory 'doc/source'
3=== added file 'doc/source/charm-tests.rst'
4--- doc/source/charm-tests.rst 1970-01-01 00:00:00 +0000
5+++ doc/source/charm-tests.rst 2012-01-26 01:33:28 +0000
6@@ -0,0 +1,182 @@
8+Charm Testing
16+Juju has been designed from the start to foster a large collection of
17+"charms". Charms are expected to number in the thousands, and be self
18+contained, with well defined interfaces for defining their relationships
19+to one another.
21+Because this is a large complex system, not unlike a Linux software
22+distribution, there is a need to test the charms and how they interact
23+with one another. This specification defines a plan for implementing
24+a simple framework to help this happen.
26+Static tests have already been implemented in the ``charm proof`` command
27+as part of ``charm-tools``. Any static testing of charms is beyond the
28+scope of this specification.
30+Phase 1 - Generic tests
33+All charms share some of the same characteristics. They all have a
34+yaml file called ``metadata.yaml``, and when deployed, juju will always
35+attempt to progress the state of the service from install to config to
36+started. Because of this, all charms can be tested using the following
39+ deploy charm
40+ while state != started
41+ if timeout is reached, FAIL
42+ if state == install_error, config_error, or start_error, FAIL
43+ if state == started, PASS
45+Other generic tests may be identified, so a collection of generic tests should be the focus of an implementation.
47+Note that this requirement is already satisfied by Mark Mims' jenkins tester:
50+Phase 2 - Charm Specific tests
53+Charm authors will have the best insight into whether or not a charm is
54+working properly.
56+To facilitate tests attached to charms, a simple structure will be
57+utilized to attach tests to charms. Under the charm root directory,
58+a sub-directory named 'tests' will be scanned by a test runner for
59+executable files matching the glob ``*.test``. These will be run in
60+lexical order by the test runner, with a predictable environment. The
61+tests can make the following assumptions:
63+* A minimal install of the release of Ubuntu which the charm is targeted
64+ at will be available.
65+* A version of juju is installed and available in the system path.
66+* The default environment is bootstrapped.
67+* The CWD is the charm root directory
68+* Full network access to deployed nodes will be allowed.
69+* The bare name of any charm in arguments to juju will be resolved to a
70+ charm url and/or repository arguments of the test runner's choice. This
71+ means that if you need mysql, you do not do ``juju deploy cs:mysql`` or
72+ ``juju deploy --repository ~/charms local:mysql``, but just ``juju deploy
73+ mysql``. A wrapper will resolve this according to the circumstances of
74+ the test.
75+* A special sub-command of juju, ``deploy-previous``, will deploy the
76+ last successfully tested charm instead of the one from the current
77+ delta. This will allow testing upgrade-charm.
79+The following restrictions will be enforced:
81+* bootstrap and destroy-environment will be unavailable
82+* ``~/.juju`` will not be accessible to the tests
84+The following restrictions may be enforced:
86+* Internet access will be restricted from the testing host.
88+If present, tests/requirements.yaml will be read to determine packages
89+that need to be installed in order to facilitate the tests. The packages
90+can *only* be installed from the official, default Ubuntu archive for the
91+release which the charm is intended for. The format of requirements.yaml
92+is as such::
94+ packages: [ package1, package2, package3 ]
96+If a tool is needed to perform a test and not available in the Ubuntu
97+archive, it can also be included in the ``tests/`` directory, as long
98+as the file which contains it does not end in ``.test``. Note that build
99+tools cannot be assumed to be available on the testing system.
101+Test Runner
104+A test runner will periodically poll the collection of charms for changes
105+since the last test run. If there have been changes, the entire set of
106+changes will be tested as one delta. This delta will be recorded in the
107+test results in such a way where a developer can repeat the exact set
108+of changes for debugging purposes.
110+All of the charms will be scanned for tests in lexical order by
111+series, charm name, branch name. Non official charms which have not
112+been reviewed by charmers will not have their tests run until the test
113+runner's restrictions have been vetted for security, since we will be
114+running potentially malicious code. It is left to the implementor to
115+determine what mix of juju, client platform, and environment settings
116+are appropriate, as all of these are variables that will affect the
117+running charms, and so may affect the outcome.
122+Deploy requirements and Poll
125+The following example test script uses a tool that is not widely available
126+yet, ``get-unit-info``. In the future enhancements should be made to
127+juju core to allow such things to be made into plugins. Until then,
128+it can be included in each test dir that uses it, or we can build a
129+ package of tools that are common to tests::
131+ #!/bin/sh
133+ set -e
135+ teardown() {
136+ juju destroy-service memcached
137+ juju destroy-service mysql
138+ juju destroy-service mediawiki
139+ if [ -n "$datadir" ] ; then
140+ if [ -f $datadir/passed ]; then
141+ rm -r $datadir
142+ else
143+ echo $datadir preserved
144+ fi
145+ fi
146+ }
147+ trap teardown EXIT
150+ juju deploy mediawiki
151+ juju deploy mysql
152+ juju deploy memcached
153+ juju add-relation mediawiki:db mysql:db
154+ juju add-relation memcached mediawiki
155+ juju expose mediawiki
157+ for try in `seq 1 600` ; do
158+ host=`juju status | tests/get-unit-info mediawiki public-address`
159+ if [ -z "$host" ] ; then
160+ sleep 1
161+ else
162+ break
163+ fi
164+ done
166+ if [ -z "$host" ] ; then
167+ echo ERROR: status timed out
168+ exit 1
169+ fi
171+ datadir=`mktemp -d ${TMPDIR:-/tmp}/wget.test.XXXXXXX`
172+ echo INFO: datadir=$datadir
174+ wget --tries=100 --timeout=6 http://$host/ -O - -a $datadir/wget.log | grep -q '<title>'
176+ if [ $try -eq 600 ] ; then
177+ echo ERROR: Timed out waiting.
178+ exit 1
179+ fi
181+ touch $datadir/passed
183+ trap - EXIT
184+ teardown
186+ echo INFO: PASS
187+ exit 0
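The ``get-unit-info`` tool itself is not in this diff. A minimal stand-in, assuming the old-style ``juju status`` YAML layout (``services: <name>: units: <unit>: public-address: ...``) and a naive indentation scan rather than a real YAML parser, might look like:

```python
#!/usr/bin/env python
"""Hypothetical sketch of the get-unit-info helper used by the example
test script. It scans `juju status` output on stdin for the named
service and prints the first matching key found beneath it. A real
helper would use a YAML parser instead of indentation matching."""
import sys


def get_unit_info(status_text, service, key):
    in_service = False
    service_indent = 0
    for line in status_text.splitlines():
        stripped = line.strip()
        indent = len(line) - len(line.lstrip())
        if stripped == service + ":":
            # Entered the service's section; remember its indentation.
            in_service = True
            service_indent = indent
            continue
        if in_service:
            if stripped and indent <= service_indent:
                break  # left the service's section without a match
            if stripped.startswith(key + ":"):
                return stripped.split(":", 1)[1].strip()
    return ""


if __name__ == "__main__":
    # Usage: juju status | get-unit-info mediawiki public-address
    print(get_unit_info(sys.stdin.read(), sys.argv[1], sys.argv[2]))
```

Printing an empty string when the key is absent matches the script above, which loops until the ``host`` variable becomes non-empty.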

