Merge lp:~clint-fewbar/charm-tools/charm-tests-spec into lp:~charmers/charm-tools/trunk

Proposed by Clint Byrum
Status: Merged
Merged at revision: 122
Proposed branch: lp:~clint-fewbar/charm-tools/charm-tests-spec
Merge into: lp:~charmers/charm-tools/trunk
Diff against target: 188 lines (+182/-0)
1 file modified
doc/source/charm-tests.rst (+182/-0)
To merge this branch: bzr merge lp:~clint-fewbar/charm-tools/charm-tests-spec
Reviewer: charmers (status: Pending)
Review via email: mp+90232@code.launchpad.net

Description of the change

This is a specification for implementing automated tests for the approved juju charms. It is being generated from this branch into an HTML file every 15 minutes here:

http://people.canonical.com/~clint/charm-tests.html

Revision history for this message
Jelmer Vernooij (jelmer) wrote :

This seems very reasonable. Some nitpicky comments:

It seems that a non-zero exit code indicates a failure, and a zero exit code indicates success? I guess this is fairly standard, but it would be nice to have it documented explicitly. Is it necessary to use 1 for failure (and perhaps anything else for errors), or is any non-zero exit code sufficient?

Based on the example, it also seems like results can be reported to stdout. Is the "ERROR: " prefix necessary for problems, and how many ERROR: lines can there be? What happens to output that doesn't start with ERROR: or INFO:? What happens with stderr?

Do packages listed in requirements.yaml have to be in main, or can they be in universe/multiverse too?

Revision history for this message
Martin Pool (mbp) wrote :

> This seems very reasonable.

Yes, thanks for working on this.

> Some nitpicky comments:
>
> It seems that a non-zero exit code indicates a failure, and a zero exit code
> indicates success? I guess this is fairly standard, but it would be nice to
> have it documented explicitly. Is it necessary to use 1 for failure (and
> perhaps anything else for errors), or is any non-zero exit code sufficient?

I think it would be well worthwhile making this explicit from day one: most test frameworks eventually need to distinguish "test failed" from "test couldn't be run", "results were inconclusive", "test timed out", etc. If you specify that a failure must be, say, 1, then you can add other things later if you want.

Revision history for this message
Clint Byrum (clint-fewbar) wrote :

OOPS! I pushed this into charm-tools trunk by accident; I did mean to address all the questions asked.

* I will write up explicit explanations of all possible exit codes, including reserving a few for the future and suggesting what the currently used ones are for.

* Re the results output, I was thinking that it would be best if we simply printed them on stdout, with any other problems ending up on stderr. I will define an expected, loose format, but I think the exit code is the place to specify PASS/FAIL/SKIP/ERROR (ERROR meaning a problem outside an assertion).
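
One way to make such a convention explicit is a small table of reserved exit codes. The specific numbers below are illustrative assumptions, not something decided in this thread:

```shell
# Illustrative exit-code convention for charm tests.
# These numbers are assumptions, not part of the merged spec.
EXIT_PASS=0    # all assertions held
EXIT_FAIL=1    # an assertion failed
EXIT_ERROR=2   # a problem outside any assertion (setup, infrastructure)
EXIT_SKIP=100  # the test chose not to run in this environment
```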

Preview Diff

=== added directory 'doc'
=== added directory 'doc/source'
=== added file 'doc/source/charm-tests.rst'
--- doc/source/charm-tests.rst 1970-01-01 00:00:00 +0000
+++ doc/source/charm-tests.rst 2012-01-26 01:33:28 +0000
@@ -0,0 +1,182 @@
==============
Charm Testing
==============

Intro
=====

**DRAFT**

Juju has been designed from the start to foster a large collection of
"charms". Charms are expected to number in the thousands, and to be
self-contained, with well defined interfaces for defining their
relationships to one another.

Because this is a large, complex system, not unlike a Linux software
distribution, there is a need to test the charms and how they interact
with one another. This specification defines a plan for implementing
a simple framework to help this happen.

Static tests have already been implemented in the ``charm proof`` command
as part of ``charm-tools``. Any static testing of charms is beyond the
scope of this specification.

Phase 1 - Generic tests
=======================

All charms share some of the same characteristics. They all have a
YAML file called ``metadata.yaml``, and when deployed, juju will always
attempt to progress the state of the service from install to config to
started. Because of this, all charms can be tested using the following
algorithm::

    deploy charm
    while state != started
        if timeout is reached, FAIL
        if state == install_error, config_error, or start_error, FAIL
        if state == started, PASS

Other generic tests may be identified, so the implementation should
focus on building a collection of generic tests.

Note that this requirement is already satisfied by Mark Mims' jenkins
tester: http://charmtests.markmims.com/

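The loop above can be sketched in shell. The ``unit_state`` helper used here is hypothetical (juju provides no such sub-command itself); it stands in for whatever extracts a unit's state from ``juju status``:

```shell
#!/bin/sh
# Minimal sketch of the generic deploy-and-poll test loop, assuming a
# hypothetical helper `unit_state <service>` that prints the unit's
# current state.

# Map one observed state to a result: 0 = PASS, 1 = FAIL, 2 = keep waiting.
classify_state() {
    case "$1" in
        started) return 0 ;;
        install_error|config_error|start_error) return 1 ;;
        *) return 2 ;;
    esac
}

# Poll until the service starts, fails, or the timeout (in seconds) expires.
poll_until_started() {
    service="$1" timeout="$2" t=0
    while [ "$t" -lt "$timeout" ]; do
        state=`unit_state "$service"`
        classify_state "$state"
        case $? in
            0) echo PASS; return 0 ;;
            1) echo FAIL; return 1 ;;
        esac
        sleep 1
        t=$((t + 1))
    done
    echo FAIL
    return 1
}
```
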
Phase 2 - Charm Specific tests
==============================

Charm authors will have the best insight into whether or not a charm is
working properly.

To facilitate tests attached to charms, a simple structure will be
used to attach tests to charms. Under the charm root directory,
a sub-directory named ``tests`` will be scanned by a test runner for
executable files matching the glob ``*.test``. These will be run in
lexical order by the test runner, with a predictable environment. The
tests can make the following assumptions:

* A minimal install of the release of Ubuntu which the charm is targeted
  at will be available.
* A version of juju is installed and available in the system path.
* The default environment is bootstrapped.
* The CWD is the charm root directory.
* Full network access to deployed nodes will be allowed.
* The bare name of any charm in arguments to juju will be resolved to a
  charm URL and/or repository arguments of the test runner's choice. This
  means that if you need mysql, you do not do ``juju deploy cs:mysql`` or
  ``juju deploy --repository ~/charms local:mysql``, but just ``juju deploy
  mysql``. A wrapper will resolve this according to the circumstances of
  the test.
* A special sub-command of juju, ``deploy-previous``, will deploy the
  last successfully tested charm instead of the one from the current
  delta. This will allow testing ``upgrade-charm``.

The following restrictions will be enforced:

* ``bootstrap`` and ``destroy-environment`` will be unavailable.
* ``~/.juju`` will not be accessible to the tests.

The following restrictions may be enforced:

* Internet access will be restricted from the testing host.

If present, ``tests/requirements.yaml`` will be read to determine packages
that need to be installed in order to facilitate the tests. The packages
can *only* be installed from the official, default Ubuntu archive for the
release which the charm is intended for. The format of requirements.yaml
is as follows::

    packages: [ package1, package2, package3 ]

If a tool is needed to perform a test and is not available in the Ubuntu
archive, it can also be included in the ``tests/`` directory, as long
as the file which contains it does not end in ``.test``. Note that build
tools cannot be assumed to be available on the testing system.

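Putting these conventions together, a charm's test directory might look
like the following; the file names are illustrative only::

    <charm root>/
        metadata.yaml
        hooks/
            ...
        tests/
            requirements.yaml    # optional extra packages
            00-setup.test        # run first (lexical order)
            10-deploy.test       # run second
            get-unit-info        # helper; no .test suffix, so not run directly
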
Test Runner
===========

A test runner will periodically poll the collection of charms for changes
since the last test run. If there have been changes, the entire set of
changes will be tested as one delta. This delta will be recorded in the
test results in such a way that a developer can repeat the exact set
of changes for debugging purposes.

All of the charms will be scanned for tests in lexical order by
series, charm name, branch name. Non-official charms which have not
been reviewed by charmers will not have their tests run until the test
runner's restrictions have been vetted for security, since we will be
running potentially malicious code. It is left to the implementor to
determine what mix of juju, client platform, and environment settings
are appropriate, as all of these are variables that will affect the
running charms, and so may affect the outcome.

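The per-charm step of such a runner can be sketched as follows. The PASS/FAIL report format is an assumption, not part of this spec:

```shell
#!/bin/sh
# Sketch of the per-charm step of the test runner: run every executable
# tests/*.test file in lexical order and report one PASS/FAIL line per
# file. The report format here is an assumption.

run_charm_tests() {
    charmdir="$1"
    failed=0
    # `sort` gives the lexical order the spec requires.
    for t in `ls "$charmdir"/tests/*.test 2>/dev/null | sort`; do
        [ -x "$t" ] || continue
        name=`basename "$t"`
        # Tests assume CWD is the charm root, so run them from there.
        if ( cd "$charmdir" && "./tests/$name" ); then
            echo "PASS: $name"
        else
            echo "FAIL: $name"
            failed=1
        fi
    done
    [ "$failed" -eq 0 ]
}
```
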
Example
=======

Deploy requirements and Poll
----------------------------

The following example test script uses a tool that is not widely available
yet, ``get-unit-info``. In the future, enhancements should be made to
juju core to allow such things to be made into plugins. Until then,
it can be included in each test dir that uses it, or we can build a
package of tools that are common to tests::

    #!/bin/sh

    set -e

    teardown() {
        juju destroy-service memcached
        juju destroy-service mysql
        juju destroy-service mediawiki
        if [ -n "$datadir" ] ; then
            if [ -f "$datadir/passed" ]; then
                rm -r "$datadir"
            else
                echo "$datadir preserved"
            fi
        fi
    }
    trap teardown EXIT

    juju deploy mediawiki
    juju deploy mysql
    juju deploy memcached
    juju add-relation mediawiki:db mysql:db
    juju add-relation memcached mediawiki
    juju expose mediawiki

    for try in `seq 1 600` ; do
        host=`juju status | tests/get-unit-info mediawiki public-address`
        if [ -z "$host" ] ; then
            sleep 1
        else
            break
        fi
    done

    if [ -z "$host" ] ; then
        echo ERROR: status timed out
        exit 1
    fi

    datadir=`mktemp -d ${TMPDIR:-/tmp}/wget.test.XXXXXXX`
    echo INFO: datadir=$datadir

    if ! wget --tries=100 --timeout=6 "http://$host/" -O - -a "$datadir/wget.log" \
            | grep -q '<title>' ; then
        echo ERROR: mediawiki front page never returned a title
        exit 1
    fi

    touch "$datadir/passed"

    trap - EXIT
    teardown

    echo INFO: PASS
    exit 0

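For illustration, ``get-unit-info`` could be as small as the function below, written here as a shell function around ``awk``. The assumed ``juju status`` layout (services, then units, then per-unit fields, at successive YAML indent levels) is an assumption about juju's output; a real tool should use a proper YAML parser:

```shell
# Hypothetical sketch of the tests/get-unit-info helper: read
# `juju status` YAML on stdin and print one field of the first unit
# of the named service. Usage: juju status | get_unit_info <svc> <field>
get_unit_info() {
    awk -v svc="$1" -v fld="$2" '
        $0 == "  " svc ":"  { in_svc = 1; next }   # service block starts
        in_svc && /^  [^ ]/ { in_svc = 0 }         # next service ends it
        in_svc && $1 == fld ":" { print $2; exit } # first matching field
    '
}
```
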
