Merge lp:~clint-fewbar/charm-tools/charm-tests-spec into lp:~charmers/charm-tools/trunk

Proposed by Clint Byrum
Status: Rejected
Rejected by: Mark Mims
Proposed branch: lp:~clint-fewbar/charm-tools/charm-tests-spec
Merge into: lp:~charmers/charm-tools/trunk
Diff against target: 217 lines (+147/-33)
1 file modified
doc/source/charm-tests.rst (+147/-33)
To merge this branch: bzr merge lp:~clint-fewbar/charm-tools/charm-tests-spec
Reviewer: Mark Mims (community)
Status: Disapprove
Review via email: mp+90813@code.launchpad.net

Description of the change

Fixes per community feedback.

128. By Clint Byrum

updating example to reflect output guidelines

129. By Clint Byrum

Explain what happens to services left behind after tests exit

Revision history for this message
Mark Mims (mark-mims) wrote :

just grabbed this to learn more lp...

for the record this is in lp:juju/docs now

review: Disapprove

Unmerged revisions

129. By Clint Byrum

Explain what happens to services left behind after tests exit

128. By Clint Byrum

updating example to reflect output guidelines

127. By Clint Byrum

clarify requirements.yaml and move get-unit-info tip into footnotes

126. By Clint Byrum

rearranging examples and clarifying exit codes

Preview Diff

=== modified file 'doc/source/charm-tests.rst'
--- doc/source/charm-tests.rst	2012-01-26 01:24:15 +0000
+++ doc/source/charm-tests.rst	2012-02-01 21:14:18 +0000
@@ -80,10 +80,11 @@
 * Internet access will be restricted from the testing host.
 
 If present, tests/requirements.yaml will be read to determine packages
-that need to be installed in order to facilitate the tests. The packages
-can *only* be installed from the official, default Ubuntu archive for the
-release which the charm is intended for. The format of requirements.yaml
-is as such::
+that need to be installed on the host running tests in order to facilitate
+the tests. The packages can *only* be installed from the official,
+default Ubuntu archive for the release for which the charm is intended,
+from any of the repositories enabled by default in that release. The
+format of requirements.yaml is as follows::
 
   packages: [ package1, package2, package3 ]
 
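To make the requirements.yaml format concrete: a charm whose tests fetch
pages over HTTP and probe ports might declare something like the
following (the package names here are illustrative, not part of the
proposal)::

  packages: [ curl, netcat ]
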
@@ -92,35 +93,52 @@
 as the file which contains it does not end in ``.test``. Note that build
 tools cannot be assumed to be available on the testing system.
 
-Test Runner
-===========
-
-A test runner will periodically poll the collection of charms for changes
-since the last test run. If there have been changes, the entire set of
-changes will be tested as one delta. This delta will be recorded in the
-test results in such a way where a developer can repeat the exact set
-of changes for debugging purposes.
-
-All of the charms will be scanned for tests in lexical order by
-series, charm name, branch name. Non official charms which have not
-been reviewed by charmers will not have their tests run until the test
-runner's restrictions have been vetted for security, since we will be
-running potentially malicious code. It is left to the implementor to
-determine what mix of juju, client platform, and environment settings
-are appropriate, as all of these are variables that will affect the
-running charms, and so may affect the outcome.
-
-Example
-=======
+Purpose of tests
+----------------
+
+The purpose of these tests is to assert that the charm works well on the
+intended platform and performs the expected configuration steps. Examples
+of things to test in each charm beyond install/start are:
+
+* After install, expose, and adding of required relations, the service is
+  listening on the intended ports and is functional.
+* Adding, removing, and re-adding a relation should work without error.
+* Setting config values should result in the config value being reflected
+  in the service's configuration.
+* Adding multiple units to a web app charm and relating to a load balancer
+  results in the same HTML on both units directly and on the load balancer.
+
+Exit Codes
+----------
+
+Upon exit, the test's exit code will be evaluated to mean the following:
+
+* 0: Test passed
+* 1: Test failed
+* 100: Test skipped because of an incomplete environment
+
+Output
+------
+
+There is a general convention that output should follow, though it will
+not be interpreted by machine. On stdout, a message indicating the reason
+for the exit code should be printed, with a prefix string corresponding to
+the exit codes defined above. The correlation is:
+
+* PASS - 0
+* FAIL - 1
+* SKIP - 100
+
+Example Tests
+-------------
 
 Deploy requirements and Poll
-----------------------------
-
-The following example test script uses a tool that is not widely available
-yet, ``get-unit-info``. In the future enhancements should be made to
-juju core to allow such things to be made into plugins. Until then,
-it can be included in each test dir that uses it, or we can build a
-package of tools that are common to tests.::
+~~~~~~~~~~~~~~~~~~~~~~~~~~~~
+
+The test below [*]_ deploys mediawiki with mysql and memcached related to
+it, and then verifies that it returns a page via HTTP with "<title>"
+somewhere in the content::
+
 
   #!/bin/sh
 
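Taken together, the Purpose, Exit Codes, and Output sections added in the
hunk above imply a small amount of common boilerplate. A minimal sketch of
a conforming test, assuming nothing beyond a local port check (the check
itself is only an illustration), might look like::

  #!/bin/sh
  # Follows the PASS/FAIL/SKIP convention described above.

  # Exit 100 when the environment is incomplete.
  if [ -z "`which nc`" ] ; then
    echo "SKIP: cannot run test without netcat"
    exit 100
  fi

  # Exit 1 with a FAIL-prefixed reason when an assertion fails.
  if ! nc localhost 80 < /dev/null ; then
    echo "FAIL: nothing listening on port 80"
    exit 1
  fi

  echo "PASS: port 80 is listening"
  exit 0
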
@@ -158,7 +176,7 @@
   done
 
   if [ -z "$host" ] ; then
-    echo ERROR: status timed out
+    echo FAIL: status timed out
     exit 1
   fi
 
@@ -168,7 +186,7 @@
   wget --tries=100 --timeout=6 http://$host/ -O - -a $datadir/wget.log | grep -q '<title>'
 
   if [ $try -eq 600 ] ; then
-    echo ERROR: Timed out waiting.
+    echo FAIL: Timed out waiting.
     exit 1
   fi
 
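Both example tests repeat the poll-then-fail pattern touched up in the
hunks above. A reusable helper along these lines (hypothetical, and
assuming the tests/get-unit-info tool described in the footnote of the
next hunk) could factor it out::

  # Wait up to $2 seconds (default 600) for service $1 to publish a
  # public address via `juju status`; sets $host on success, otherwise
  # fails the test following the output convention above.
  wait_for_host() {
    service="$1"
    timeout="${2:-600}"
    for try in `seq 1 $timeout` ; do
      host=`juju status | tests/get-unit-info $service public-address`
      [ -n "$host" ] && return 0
      sleep 1
    done
    echo "FAIL: timed out waiting for $service"
    exit 1
  }

  wait_for_host mediawiki
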
@@ -180,3 +198,99 @@
   echo INFO: PASS
   exit 0
 
+Test config settings
+~~~~~~~~~~~~~~~~~~~~
+
+The following example test checks whether the default_port change
+the admin asks for is actually respected post-deploy::
+
+  #!/bin/sh
+
+  if [ -z "`which nc`" ] ; then
+    echo "SKIP: cannot run tests without netcat"
+    exit 100
+  fi
+
+  set -e
+
+  teardown() {
+    juju destroy-service mongodb
+  }
+  trap teardown EXIT
+
+  juju deploy mongodb
+  juju expose mongodb
+
+  for try in `seq 1 600` ; do
+    host=`juju status | tests/get-unit-info mongodb public-address`
+    if [ -z "$host" ] ; then
+      sleep 1
+    else
+      break
+    fi
+  done
+
+  if [ -z "$host" ] ; then
+    echo FAIL: status timed out
+    exit 1
+  fi
+
+  assert_is_listening() {
+    local port=$1
+    listening=""
+    for try in `seq 1 10` ; do
+      if ! nc $host $port < /dev/null ; then
+        continue
+      fi
+      listening="$port"
+      break
+    done
+
+    if [ -z "$listening" ] ; then
+      echo "FAIL: not listening on port $port after 10 retries"
+      return 1
+    else
+      echo "PASS: listening on port $listening"
+      return 0
+    fi
+  }
+
+  assert_is_listening 27017
+
+  juju set mongodb default_port=55555
+
+  assert_is_listening 55555
+  echo PASS: config change tests passed.
+  exit 0
+
+.. [*] get-unit-info
+   The example test scripts use a tool that is not widely available yet,
+   ``get-unit-info``. In the future, enhancements should be made to juju
+   core to allow such things to be made into plugins. Until then, it can
+   be included in each test dir that uses it, or we can build a package
+   of tools that are common to tests.
+
+Test Runner
+===========
+
+A test runner will periodically poll the collection of charms for changes
+since the last test run. If there have been changes, the entire set of
+changes will be tested as one delta. This delta will be recorded in the
+test results in such a way that a developer can repeat the exact set
+of changes for debugging purposes.
+
+All of the charms will be scanned for tests in lexical order by
+series, charm name, branch name. Non-official charms which have not
+been reviewed by charmers will not have their tests run until the test
+runner's restrictions have been vetted for security, since we will be
+running potentially malicious code. It is left to the implementor to
+determine what mix of juju, client platform, and environment settings
+are appropriate, as all of these are variables that will affect the
+running charms, and so may affect the outcome.
+
+If tests exit with services still in the environment, the test runner
+may clean them up, whether by destroying the environment or destroying
+the services explicitly, and the machines may be terminated as well.
+Tests should clean up any services they start so that the test script
+is more independent and idempotent. Any artifacts needed from the test
+machines should be retrieved and displayed before the test exits.
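
The footnote deliberately leaves ``get-unit-info`` unspecified, since the
tool is not widely available. As a rough sketch of what such a helper
might look like (hypothetical, indentation-naive, and no substitute for a
real YAML parser), reading ``juju status`` output from stdin::

  #!/bin/sh
  # Sketch of get-unit-info: print the first value of field $2 seen
  # after the block for service $1 begins in `juju status` YAML.
  awk -v svc="$1:" -v fld="$2:" '
    $1 == svc { found = 1 }
    found && $1 == fld { print $2; exit }
  '

It would be invoked exactly as in the examples above, e.g. ``juju status |
tests/get-unit-info mongodb public-address``.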
