Merge lp:~mskalka/juju-ci-tools/assess-destroy-model into lp:juju-ci-tools
Status: Merged
Merged at revision: 1988
Proposed branch: lp:~mskalka/juju-ci-tools/assess-destroy-model
Merge into: lp:juju-ci-tools
Diff against target: 348 lines (+339/-0), 2 files modified: assess_destroy_model.py (+164/-0), tests/test_assess_destroy_model.py (+175/-0)
To merge this branch: bzr merge lp:~mskalka/juju-ci-tools/assess-destroy-model
Related bugs:
Reviewers: Curtis Hovey (community), code: Approve; Aaron Bentley (community): Approve
Review via email: mp+321795@code.launchpad.net
Commit message
Description of the change
In response to: https:/
Adds a test that bootstraps, gets the controller ID, adds a model, destroys it, then gets the controller ID again and asserts the IDs are equal. It will fail if juju can't get the controller ID or if they differ.
Additionally it adds an optional arg to client.
Also changes the syntax of client.switch() to default to the client's controller if none is specified. In the past switching back to the default jujupy model used in tests was ambiguous as the model and controller had the same name. Although it wasn't used in this test it was bugging me to no end.
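The defaulting behavior described here can be sketched as a small standalone helper (the names and the default value are illustrative only; the real jujupy ModelClient.switch signature may differ):

```python
def switch_target(model=None, controller='my-controller'):
    """Build the argument for `juju switch`, falling back to the
    client's own controller when no model is given (illustrative
    sketch; not the actual jujupy implementation)."""
    if model is None:
        return controller
    # The 'controller:model' form stays unambiguous even when a model
    # and a controller share the same name.
    return '{}:{}'.format(controller, model)

print(switch_target())             # my-controller
print(switch_target(model='foo'))  # my-controller:foo
```

The explicit `controller:model` target is what removes the ambiguity mentioned above when a model and its controller are named identically.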
- 1975. By Michael Skalka: docstring edit
- 1976. By Michael Skalka: backed out client.py changes
- 1977. By Michael Skalka: restructured test, unittests, backed out client.py changes
Michael Skalka (mskalka) wrote:
Curtis,
I backed out the changes made to client.py. I will rework and resubmit those as their own MP.
I changed how the test behaves to better align with the committed bug fix. The test now checks to see if Juju drops its current model when that model is destroyed, and raises errors if it doesn't. Is this more in line with what you had in mind for this test?
Curtis Hovey (sinzui) wrote:
Thank you Michael. This test needs another round of changes. In general, never use "assert"; always raise JujuAssertionError when juju fails. Juju errors are being logged as info, but the test will still pass. You might want to consider using
with logged_
to log exceptions without needing to write log.error.
- 1978. By Michael Skalka: changes as per MP comments
- 1979. By Michael Skalka: added unittest for failure cases
Michael Skalka (mskalka) wrote:
Made the requested changes regarding the asserts. Also added unittests for failure cases where they were missing. I'm not sure exactly what you meant by "Drop this since you didn't write tests for it." but I assumed that was it.
Aaron Bentley (abentley) wrote:
get_current_model and get_current_
I recommend using --format json or --format yaml in general. It is much more precise than the default textual output. For example, with list-models, parsed[
- 1980. By Michael Skalka: changes as per MP comments
Michael Skalka (mskalka) wrote:
I modified the functions to use --format yaml. I didn't know that was even an option with list-models. It's definitely a lot cleaner, so thank you for the suggestion. As for why this uses list-models instead of switch, the bug specifies list-models and I wanted to keep it as close to the original case as possible.
Aaron Bentley (abentley) wrote:
--format json or --format yaml is an option with basically every query command (show-x, list-x). I prefer json because the parser is much faster (which can affect unit test speed), you don't need to tell it to be safe, and it's part of the standard library. Unfortunately, juju's json output is not human-readable. (You can pretty-print it by piping it to jq, though.)
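The parsing difference can be illustrated with a minimal sketch (the sample document below is invented, not real juju output; field names vary by juju version):

```python
import json

# Invented sample resembling `juju list-models --format json` output.
raw = '{"models": [{"name": "controller"}], "current-model": "foo-tmp-env"}'

# json is in the standard library and json.loads is safe by default,
# so there is no extra dependency and no safe-load caveat.
parsed = json.loads(raw)
print(parsed['current-model'])  # foo-tmp-env

# The yaml route would need the third-party PyYAML package, and you
# must remember to call yaml.safe_load rather than the unsafe
# yaml.load:
#   import yaml
#   parsed = yaml.safe_load(raw_yaml)
```

Both formats parse into the same dicts and lists; the choice mostly comes down to dependencies, parser speed, and safety defaults, as noted above.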
I still recommend explaining the difference between get_current_model, get_current_
However, this looks like a response to bug #1505504. Cheryl adjusted the scope of bug #1582021 to be just the timing hole: https:/
Michael Skalka (mskalka) wrote:
Hmm. I don't see a report linked for the timing hole, but I'll search through Juju's bug reports. I'm not sure how we could adequately or reliably test for that with the tools we have right now, however. Possibly by repeatedly adding and deleting a model or series of models in rapid succession and listing each time? I'm happy to update the test if you have a better idea of what we should be attempting.
Aaron Bentley (abentley) wrote:
> Hmm. I don't see a report linked for the timing hole
The report of the timing hole is bug #1582021 itself.
It is the fact that "list-models" errors with "model not found". It is not erroring because the current-model is wrong. It is erroring because "we first get all models, then query each one for more details"
"bigdata" was in the first list, but was not accessible when queried, producing the "model not found" error.
A correct test for this would be something like:
juju bootstrap --default-model foobar
juju destroy-model foobar
# Avoid bug 1505504.
juju switch controller
# If we have a timing hole, this fails with "model not found"
juju list-models
This test should be run against 2.0-beta7 (or earlier) to prove that it fails, with 2.0-beta8 to prove that it fails even with bug #1505504 fixed, and then with current juju to prove that it passes.
- 1981. By Michael Skalka: cleanup
- 1982. By Michael Skalka: more cleanup as per MP
- 1983. By Michael Skalka: lint
Michael Skalka (mskalka) wrote:
Ok. I restructured the test a little to reflect what you think is the proper procedure and swapped to json at your recommendation.
Now it:
- bootstraps and stores that as a ModelClient object
- adds a model, which is captured as a new ModelClient object
- destroys that model using that object
- lists models so we know we avoid bug 1505504
- switches back to the original model using our first client object
- runs list-models on that client object to try to catch that timing hole
I confirmed the test fails on 2.0-beta8; it raises a CalledProcessError on list-models, which is the expected outcome. It passes on current juju versions.
Aaron Bentley (abentley) wrote:
This looks good, though please update the error message as noted below. Also, Curtis should re-review before you merge to make sure his concerns are dealt with.
A few notes about testing:
assertRaisesRegexp (aka assertRaisesRegex in Python3) is much nicer than assertRaises followed by checking message content.
Also, assertIn should be used in preference to assertTrue(x in y).
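The two idioms recommended here can be sketched with an invented test case (using the Python 3 spelling assertRaisesRegex):

```python
import unittest


class IdiomExamples(unittest.TestCase):
    # Invented tests purely to demonstrate the two idioms.

    def test_raises_with_message(self):
        # assertRaisesRegex checks the exception type and matches its
        # message in one step, instead of assertRaises followed by
        # manual inspection of the message.
        with self.assertRaisesRegex(ValueError, 'model not found'):
            raise ValueError('ERROR: model not found')

    def test_membership(self):
        # assertIn reports both operands on failure, unlike
        # assertTrue(x in y), which only reports "False is not true".
        self.assertIn('controller', ['controller', 'foo-tmp-env'])


suite = unittest.TestLoader().loadTestsFromTestCase(IdiomExamples)
result = unittest.TextTestRunner(verbosity=0).run(suite)
print(result.wasSuccessful())  # True
```

The payoff of both idioms is in the failure output: the assertion framework reports exactly what mismatched instead of a bare boolean failure.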
- 1984. By Michael Skalka: unittest changes as per MP comments
- 1985. By Michael Skalka: docstring edit
Preview Diff
1 | === added file 'assess_destroy_model.py' |
2 | --- assess_destroy_model.py 1970-01-01 00:00:00 +0000 |
3 | +++ assess_destroy_model.py 2017-04-07 20:13:21 +0000 |
4 | @@ -0,0 +1,164 @@ |
5 | +#!/usr/bin/env python |
6 | +"""Assess if Juju tracks the model when the current model is destroyed.""" |
7 | + |
8 | +from __future__ import print_function |
9 | + |
10 | +import argparse |
11 | +import logging |
12 | +import sys |
13 | +import subprocess |
14 | +import json |
15 | + |
16 | +from deploy_stack import ( |
17 | + BootstrapManager, |
18 | + ) |
19 | +from utility import ( |
20 | + add_basic_testing_arguments, |
21 | + configure_logging, |
22 | + JujuAssertionError, |
23 | + ) |
24 | + |
25 | + |
26 | +__metaclass__ = type |
27 | + |
28 | + |
29 | +log = logging.getLogger("assess_destroy_model") |
30 | + |
31 | +TEST_MODEL = 'test-tmp-env' |
32 | + |
33 | + |
34 | +def assess_destroy_model(client): |
35 | + """Tests if Juju tracks the model properly through deletion. |
36 | + |
37 | + In normal behavior Juju should drop the current model selection if that |
38 | + model is destroyed. This will fail if Juju does not drop its current |
39 | + selection. |
40 | + |
41 | + :param client: Jujupy ModelClient object |
42 | + """ |
43 | + |
44 | + current_model = get_current_model(client) |
45 | + controller = get_current_controller(client) |
46 | + log.info('Current model: {}'.format(current_model)) |
47 | + |
48 | + new_client = add_model(client) |
49 | + destroy_model(client, new_client) |
50 | + |
51 | + log.info('Juju successfully dropped its current model. ' |
52 | + 'Switching to {} to complete test'.format(current_model)) |
53 | + switch_model(client, current_model, controller) |
54 | + |
55 | + log.info('SUCCESS') |
56 | + |
57 | + |
58 | +def add_model(client): |
59 | + Adds a model to the current juju environment. |
60 | + |
61 | + Will raise an exception if Juju does not switch to the new model. |
62 | + |
63 | + :param client: Jujupy ModelClient object |
64 | + """ |
65 | + log.info('Adding model "{}" to current controller'.format(TEST_MODEL)) |
66 | + new_client = client.add_model(TEST_MODEL) |
67 | + new_model = get_current_model(new_client) |
68 | + if new_model == TEST_MODEL: |
69 | + log.info('Current model and newly added model match') |
70 | + else: |
71 | + error = ('Juju failed to switch to new model after creation. ' |
72 | + 'Expected {} got {}'.format(TEST_MODEL, new_model)) |
73 | + raise JujuAssertionError(error) |
74 | + return new_client |
75 | + |
76 | + |
77 | +def destroy_model(client, new_client): |
78 | + log.info('Destroying model "{}"'.format(TEST_MODEL)) |
79 | + new_client.destroy_model() |
80 | + new_model = get_current_model(client) |
81 | + if new_model: |
82 | + error = 'Juju failed to unset model after it was destroyed' |
83 | + raise JujuAssertionError(error) |
84 | + |
85 | + |
86 | +def switch_model(client, current_model, current_controller): |
87 | + """Switches back to the old model. |
88 | + |
89 | + :param client: Jujupy ModelClient object |
90 | + :param current_model: String name of initial testing model |
91 | + :param current_controller: String name of testing controller |
92 | + """ |
93 | + client.switch(model=current_model, controller=current_controller) |
94 | + new_model = get_current_model(client) |
95 | + if new_model == current_model: |
96 | + log.info('Current model and switch target match') |
97 | + else: |
98 | + error = ('Juju failed to switch back to existing model. ' |
99 | + 'Expected {} got {}'.format(TEST_MODEL, new_model)) |
100 | + raise JujuAssertionError(error) |
101 | + |
102 | + |
103 | +def get_current_controller(client): |
104 | + Gets the current controller from Juju's switch command. |
105 | + |
106 | + :param client: Jujupy ModelClient object |
107 | + :return: String name of current controller |
108 | + """ |
109 | + raw = client.get_juju_output('switch', include_e=False).decode('utf-8') |
110 | + raw = raw.split(':')[0] |
111 | + return raw |
112 | + |
113 | + |
114 | +def get_current_model(client): |
115 | + """Gets the current model from Juju's list-models command. |
116 | + |
117 | + :param client: Jujupy ModelClient object |
118 | + :return: String name of current model |
119 | + """ |
120 | + raw = list_models(client) |
121 | + try: |
122 | + return raw['current-model'] |
123 | + except KeyError: |
124 | + log.warning('No model is currently selected.') |
125 | + return None |
126 | + |
127 | + |
128 | +def list_models(client): |
129 | + """Helper function to get the output of juju's list-models command. |
130 | + |
131 | + Instead of using 'juju switch' or client.backend.get_active_model() we use |
132 | + list-models because that was what the bug report this test was generated |
133 | + around used. It also allows for flexibility in the future to get more |
134 | + detailed information about the models that Juju thinks it has |
135 | + if we need it. |
136 | + |
137 | + :param client: Jujupy ModelClient object |
138 | + :return: Dict of list-models command |
139 | + """ |
140 | + try: |
141 | + raw = client.get_juju_output('list-models', '--format', 'json', |
142 | + include_e=False).decode('utf-8') |
143 | + except subprocess.CalledProcessError as e: |
144 | + log.error('Failed to list current models due to error: {}'.format(e)) |
145 | + raise |
146 | + return json.loads(raw) |
147 | + |
148 | + |
149 | +def parse_args(argv): |
150 | + """Parse all arguments.""" |
151 | + parser = argparse.ArgumentParser( |
152 | + description='Test if juju drops selection of the current model ' |
153 | + 'when that model is destroyed.') |
154 | + add_basic_testing_arguments(parser) |
155 | + return parser.parse_args(argv) |
156 | + |
157 | + |
158 | +def main(argv=None): |
159 | + args = parse_args(argv) |
160 | + configure_logging(args.verbose) |
161 | + bs_manager = BootstrapManager.from_args(args) |
162 | + with bs_manager.booted_context(args.upload_tools): |
163 | + assess_destroy_model(bs_manager.client) |
164 | + return 0 |
165 | + |
166 | + |
167 | +if __name__ == '__main__': |
168 | + sys.exit(main()) |
169 | |
170 | === added file 'tests/test_assess_destroy_model.py' |
171 | --- tests/test_assess_destroy_model.py 1970-01-01 00:00:00 +0000 |
172 | +++ tests/test_assess_destroy_model.py 2017-04-07 20:13:21 +0000 |
173 | @@ -0,0 +1,175 @@ |
174 | +"""Tests for assess_destroy_model module.""" |
175 | + |
176 | +import logging |
177 | +import StringIO |
178 | +from textwrap import dedent |
179 | + |
180 | +from mock import ( |
181 | + Mock, |
182 | + call, |
183 | + patch, |
184 | + ) |
185 | + |
186 | +from assess_destroy_model import ( |
187 | + add_model, |
188 | + destroy_model, |
189 | + switch_model, |
190 | + parse_args, |
191 | + main, |
192 | + ) |
193 | +from jujupy import ( |
194 | + fake_juju_client, |
195 | + ) |
196 | +from tests import ( |
197 | + parse_error, |
198 | + TestCase, |
199 | + ) |
200 | +from utility import ( |
201 | + JujuAssertionError |
202 | + ) |
203 | + |
204 | +list_model_initial = dedent( |
205 | + """{ |
206 | + "models": [ |
207 | + {"name": "controller", |
208 | + "controller-name": "foo-tmp-env"}, |
209 | + {"name": "foo-tmp-env", |
210 | + "controller-name": "foo-tmp-env"}], |
211 | + "current-model": "foo-tmp-env" |
212 | + }""") |
213 | + |
214 | +list_model_destroy = dedent( |
215 | + """{ |
216 | + "models": [ |
217 | + {"name": "controller", |
218 | + "controller-name": "foo-tmp-env"}, |
219 | + {"name": "foo-tmp-env", |
220 | + "controller-name": "foo-tmp-env"}] |
221 | + }""") |
222 | + |
223 | +list_model_add = dedent( |
224 | + """{ |
225 | + "models": [ |
226 | + {"name": "controller", |
227 | + "controller-name": "foo-tmp-env"}, |
228 | + {"name": "test-tmp-env", |
229 | + "controller-name": "foo-tmp-env"}, |
230 | + {"name": "foo-tmp-env", |
231 | + "controller-name": "foo-tmp-env"}], |
232 | + "current-model": "test-tmp-env" |
233 | + }""") |
234 | + |
235 | +list_model_switch = dedent( |
236 | + """{ |
237 | + "models": [ |
238 | + {"name": "controller", |
239 | + "controller-name": "foo-tmp-env"}, |
240 | + {"name": "foo-tmp-env", |
241 | + "controller-name": "foo-tmp-env"}], |
242 | + "current-model": "bar-tmp-env" |
243 | + }""") |
244 | + |
245 | + |
246 | +class TestParseArgs(TestCase): |
247 | + |
248 | + def test_common_args(self): |
249 | + args = parse_args(["an-env", "/bin/juju", "/tmp/logs", "an-env-mod"]) |
250 | + self.assertEqual("an-env", args.env) |
251 | + self.assertEqual("/bin/juju", args.juju_bin) |
252 | + self.assertEqual("/tmp/logs", args.logs) |
253 | + self.assertEqual("an-env-mod", args.temp_env_name) |
254 | + self.assertEqual(False, args.debug) |
255 | + |
256 | + def test_help(self): |
257 | + fake_stdout = StringIO.StringIO() |
258 | + with parse_error(self) as fake_stderr: |
259 | + with patch("sys.stdout", fake_stdout): |
260 | + parse_args(["--help"]) |
261 | + self.assertEqual("", fake_stderr.getvalue()) |
262 | + self.assertNotIn("TODO", fake_stdout.getvalue()) |
263 | + |
264 | + |
265 | +class TestMain(TestCase): |
266 | + |
267 | + def test_main(self): |
268 | + argv = ["an-env", "/bin/juju", "/tmp/logs", "an-env-mod", "--verbose"] |
269 | + client = Mock(spec=["is_jes_enabled"]) |
270 | + with patch("assess_destroy_model.configure_logging", |
271 | + autospec=True) as mock_cl: |
272 | + with patch("assess_destroy_model.BootstrapManager.booted_context", |
273 | + autospec=True) as mock_bc: |
274 | + with patch('deploy_stack.client_from_config', |
275 | + return_value=client) as mock_cfc: |
276 | + with patch("assess_destroy_model.assess_destroy_model", |
277 | + autospec=True) as mock_assess: |
278 | + main(argv) |
279 | + mock_cl.assert_called_once_with(logging.DEBUG) |
280 | + mock_cfc.assert_called_once_with('an-env', "/bin/juju", debug=False, |
281 | + soft_deadline=None) |
282 | + self.assertEqual(mock_bc.call_count, 1) |
283 | + mock_assess.assert_called_once_with(client) |
284 | + |
285 | + |
286 | +class TestAssess(TestCase): |
287 | + |
288 | + def test_add_model(self): |
289 | + fake_client = Mock(wraps=fake_juju_client()) |
290 | + fake_client.bootstrap() |
291 | + fake_client.get_juju_output.return_value = list_model_add |
292 | + fake_client.add_model.return_value = fake_client |
293 | + add_model(fake_client) |
294 | + self.assertEqual( |
295 | + [call.bootstrap(), |
296 | + call.add_model('test-tmp-env'), |
297 | + call.get_juju_output('list-models', '--format', 'json', |
298 | + include_e=False)], |
299 | + fake_client.mock_calls) |
300 | + |
301 | + def test_add_model_fails(self): |
302 | + fake_client = Mock(wraps=fake_juju_client()) |
303 | + fake_client.bootstrap() |
304 | + fake_client.get_juju_output.return_value = list_model_initial |
305 | + fake_client.add_model.return_value = fake_client |
306 | + pattern = 'Expected test-tmp-env got foo-tmp-env' |
307 | + with self.assertRaisesRegexp(JujuAssertionError, pattern): |
308 | + add_model(fake_client) |
309 | + |
310 | + def test_destroy_model(self): |
311 | + fake_client = Mock(wraps=fake_juju_client()) |
312 | + fake_client.bootstrap() |
313 | + fake_client.get_juju_output.return_value = list_model_destroy |
314 | + destroy_model(fake_client, fake_client) |
315 | + self.assertEqual( |
316 | + [call.bootstrap(), |
317 | + call.destroy_model(), |
318 | + call.get_juju_output('list-models', '--format', 'json', |
319 | + include_e=False)], |
320 | + fake_client.mock_calls) |
321 | + |
322 | + def test_destroy_model_fails(self): |
323 | + fake_client = Mock(wraps=fake_juju_client()) |
324 | + fake_client.bootstrap() |
325 | + fake_client.get_juju_output.return_value = list_model_initial |
326 | + pattern = 'Juju failed to unset model after it was destroyed' |
327 | + with self.assertRaisesRegexp(JujuAssertionError, pattern): |
328 | + destroy_model(fake_client, fake_client) |
329 | + |
330 | + def test_switch_model(self): |
331 | + fake_client = Mock(wraps=fake_juju_client()) |
332 | + fake_client.bootstrap() |
333 | + fake_client.get_juju_output.return_value = list_model_initial |
334 | + switch_model(fake_client, 'foo-tmp-env', 'foo-tmp-env') |
335 | + self.assertEqual( |
336 | + [call.bootstrap(), |
337 | + call.switch(controller='foo-tmp-env', model='foo-tmp-env'), |
338 | + call.get_juju_output('list-models', '--format', 'json', |
339 | + include_e=False)], |
340 | + fake_client.mock_calls) |
341 | + |
342 | + def test_switch_model_fails(self): |
343 | + fake_client = Mock(wraps=fake_juju_client()) |
344 | + fake_client.bootstrap() |
345 | + fake_client.get_juju_output.return_value = list_model_switch |
346 | + pattern = 'Expected test-tmp-env got bar-tmp-env' |
347 | + with self.assertRaisesRegexp(JujuAssertionError, pattern): |
348 | + switch_model(fake_client, 'foo-tmp-env', 'test-tmp-env') |
Hi Michael.
I don't think this test recreates what happened, nor does it verify how we know juju is correct. I have many thoughts inline, many of them regarding the extra changes in this branch.