Merge ~peter-sabaini/charm-mongodb:upgrade-functests into charm-mongodb:master
Status: Merged
Approved by: Peter Sabaini
Approved revision: d39567042500595199b43817df89ace3c1f393f5
Merged at revision: ed31d71d1f45df6027bdcf8cd7ea760b9eced5d5
Proposed branch: ~peter-sabaini/charm-mongodb:upgrade-functests
Merge into: charm-mongodb:master

Diff against target: 1188 lines (+269/-107), 13 files modified:
- .gitignore (+2/-1)
- Makefile (+10/-6)
- actions/backup.py (+4/-4)
- dev/null (+0/-74)
- hooks/hooks.py (+18/-18)
- tests/bundles/bionic-shard.yaml (+27/-0)
- tests/bundles/bionic.yaml (+9/-0)
- tests/bundles/xenial.yaml (+9/-0)
- tests/test_requirements.txt (+2/-0)
- tests/tests.yaml (+17/-0)
- tests/tests_mongodb.py (+137/-0)
- tox.ini (+30/-0)
- unit_tests/test_hooks.py (+4/-4)

Related bugs: none

Reviewer | Review Type | Date Requested | Status
---|---|---|---
Adam Dyess (community) | Approve | |
Paul Goins | Approve | |

Review via email: mp+382331@code.launchpad.net
Commit message
Functional testing update
Description of the change
🤖 Canonical IS Merge Bot (canonical-is-mergebot) wrote:
Paul Goins (vultaire) wrote:
I haven't actually tested this myself, but it looks good. It doesn't look like this provides parity with the Amulet suite (correct me if I'm wrong), but it looks like it does port us over to Python 3 and provides a start of a new Zaza-based test suite.
I'm +1 for this, but would like one more pair of eyes given the size of this.
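For readers unfamiliar with the Python 3 port Paul mentions, here is a minimal runnable sketch of the syntax changes that recur throughout the preview diff below (except-as clauses, `range` replacing `xrange`, and `0o`-prefixed octal literals). The `safe_parse` helper is illustrative only, not code from the charm:

```python
# Illustration of the Python 2 -> 3 syntax changes applied in
# hooks/hooks.py and actions/backup.py (illustrative, not charm code).

def safe_parse(value):
    try:
        return int(value)
    except ValueError as e:  # Python 2 spelled this: except ValueError, e
        return str(e)

retries = [i for i in range(3)]  # Python 2 code used xrange(3)
mode = 0o755                     # Python 2 code wrote 0755

print(safe_parse("41"), retries, oct(mode))
```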
Peter Sabaini (peter-sabaini) wrote:
Thanks Paul.
On 15.04.20 23:12, Paul Goins wrote:
> Review: Approve
>
> I haven't actually tested this myself, but it looks good. It doesn't look like this provides parity with the Amulet suite (correct me if I'm wrong), but it looks like it does port us over to Python 3 and provides a start of a new Zaza-based test suite.
>
> I'm +1 for this, but would like one more pair of eyes given the size of this.
>
> Diff comments:
>
>> diff --git a/Makefile b/Makefile
>> index c23418b..68db95f 100644
>> --- a/Makefile
>> +++ b/Makefile
>> @@ -31,14 +32,14 @@ lint: .venv
>> .venv/bin/flake8 --exclude hooks/charmhelpers actions $(ACTIONS) hooks tests unit_tests
>> .venv/bin/
>>
>> -test: .venv
>> +unit:
>> @echo Starting unit tests...
>> - .venv/bin/nosetests -s --nologcapture --with-coverage $(EXTRA) unit_tests/
>> - .venv/bin/nosetests -s --nologcapture --with-coverage $(EXTRA) actions/
>> + @tox -e unit
>>
>> -functional_test:
>> - @echo Starting amulet tests...
>> - @juju test -v -p AMULET_HTTP_PROXY --timeout 900
>> +functional:
>> + @echo Starting functional tests...
>> + rm -rf .venv
>
> .venv seems tangential to the functional target at this point because of moving to tox. We can remove it, but I'm not sure it's necessary. (Feel free to ignore.)
I've added it because the Zaza deploy uploads the charm including the .venv, and balks at the symlink to /usr/bin/python.
I will add a comment to that effect.
>> + @tox -e functional
>>
>> sync:
>> @mkdir -p bin
>> diff --git a/tox.ini b/tox.ini
>> new file mode 100644
>> index 0000000..013abf0
>> --- /dev/null
>> +++ b/tox.ini
>> @@ -0,0 +1,34 @@
>> +[tox]
>> +skipsdist=True
>> +envlist = unit, functional, lint
>> +skip_missing_
>> +
>> +[testenv]
>> +setenv =
>> + PYTHONPATH = .
>> +passenv =
>> + HOME
>> + JUJU_REPOSITORY
>> + MODEL_SETTINGS
>> +
>> +[testenv:unit]
>> +basepython = python2
>> +commands =
>> + nosetests -s --nologcapture --with-coverage unit_tests/ actions/
>> +deps = -r{toxinidir}
>> +
>> +[testenv:
>> +basepython = python3
>> +commands =
>> + functest-run-suite --keep-model
>> +deps =
>> + git+https:/
>> + pymongo
>> +
>> +[testenv:
>> +basepython = python3
>> +commands =
>> + functest-run-suite --keep-model --smoke
>> +deps =
>> + git+https:/
>> + pymongo
>
> Since we have this list of dependencies in two different places, I feel like it may be better to put it in a tests/test_
Fair point, will do
>
>
>
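The shared-requirements approach Peter agrees to above would typically look like the following tox sketch. The environment names and exact requirements path are assumptions here, since the quoted tox.ini hunks are truncated, though the diff stats do show a new tests/test_requirements.txt (+2/-0):

```ini
# Sketch only: point both functional environments at one requirements file
# instead of repeating the zaza and pymongo deps in each [testenv] section.
[testenv:functional]
basepython = python3
commands = functest-run-suite --keep-model
deps = -r{toxinidir}/tests/test_requirements.txt

[testenv:functional_smoke]
basepython = python3
commands = functest-run-suite --keep-model --smoke
deps = -r{toxinidir}/tests/test_requirements.txt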
Adam Dyess (addyess) wrote:
Wow, this is great work. Everything reads so clearly with the Zaza tests.
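The readability Adam praises comes down to small topology assertions like the one below; this is a runnable distillation of the pattern (the deleted Amulet suite used the same `Counter` trick in `_expect_replicaset_counts`), not the actual contents of tests/tests_mongodb.py:

```python
from collections import Counter

# replSetGetStatus reports a numeric state per replica-set member:
# 1 = PRIMARY, 2 = SECONDARY.
def expect_replicaset_counts(member_states, primaries, secondaries):
    """Return True iff the member states match the expected topology."""
    counts = Counter(member_states)
    return counts[1] == primaries and counts[2] == secondaries

# A healthy 3-unit deployment, as in tests/bundles/bionic.yaml (num_units: 3):
assert expect_replicaset_counts([1, 2, 2], primaries=1, secondaries=2)
# A set that lost its primary should fail the check:
assert not expect_replicaset_counts([2, 2, 2], primaries=1, secondaries=2)
print("replicaset topology checks passed")
```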
🤖 Canonical IS Merge Bot (canonical-is-mergebot) wrote:
Change successfully merged at revision ed31d71d1f45df6
Preview Diff
1 | diff --git a/.gitignore b/.gitignore |
2 | index a936365..e6ff298 100644 |
3 | --- a/.gitignore |
4 | +++ b/.gitignore |
5 | @@ -3,8 +3,9 @@ |
6 | .pydevproject |
7 | .coverage |
8 | .settings |
9 | +.idea/ |
10 | *.pyc |
11 | -.venv/ |
12 | +.tox/ |
13 | bin/* |
14 | scripts/charm-helpers-sync.py |
15 | exec.d/* |
16 | diff --git a/Makefile b/Makefile |
17 | index c23418b..8e20153 100644 |
18 | --- a/Makefile |
19 | +++ b/Makefile |
20 | @@ -20,6 +20,7 @@ clean: |
21 | rm -f .coverage |
22 | find . -name '*.pyc' -delete |
23 | rm -rf .venv |
24 | + rm -rf .tox |
25 | (which dh_clean && dh_clean) || true |
26 | |
27 | .venv: |
28 | @@ -31,14 +32,14 @@ lint: .venv |
29 | .venv/bin/flake8 --exclude hooks/charmhelpers actions $(ACTIONS) hooks tests unit_tests |
30 | .venv/bin/charm-proof |
31 | |
32 | -test: .venv |
33 | +unit: |
34 | @echo Starting unit tests... |
35 | - .venv/bin/nosetests -s --nologcapture --with-coverage $(EXTRA) unit_tests/ |
36 | - .venv/bin/nosetests -s --nologcapture --with-coverage $(EXTRA) actions/ |
37 | + @tox -e unit |
38 | |
39 | -functional_test: |
40 | - @echo Starting amulet tests... |
41 | - @juju test -v -p AMULET_HTTP_PROXY --timeout 900 |
42 | +functional: |
43 | + @echo Starting functional tests... |
44 | + rm -rf .venv # rm the python2 venv from unittests as it fails the juju deploy |
45 | + @tox -e functional |
46 | |
47 | sync: |
48 | @mkdir -p bin |
49 | @@ -48,3 +49,6 @@ sync: |
50 | publish: lint unit_test |
51 | bzr push lp:charms/mongodb |
52 | bzr push lp:charms/trusty/mongodb |
53 | + |
54 | +# The targets below don't depend on a file |
55 | +.PHONY: lint test unittest functional publish sync |
56 | diff --git a/actions/backup.py b/actions/backup.py |
57 | index ac7c4d9..cac42cd 100644 |
58 | --- a/actions/backup.py |
59 | +++ b/actions/backup.py |
60 | @@ -35,9 +35,9 @@ def restore(): |
61 | def backup_command(cmd, args, dir): |
62 | try: |
63 | mkdir(dir) |
64 | - except OSError, e: |
65 | + except OSError as e: |
66 | pass # Ignoring, the directory already exists |
67 | - except Exception, e: |
68 | + except Exception as e: |
69 | action_set({"directory creation exception": e}) |
70 | action_fail(str(e)) |
71 | return |
72 | @@ -48,10 +48,10 @@ def backup_command(cmd, args, dir): |
73 | try: |
74 | output = execute(command, dir) |
75 | action_set({"output": output}) |
76 | - except subprocess.CalledProcessError, e: |
77 | + except subprocess.CalledProcessError as e: |
78 | action_set({"error_code": e.returncode, |
79 | "exception": e, "output": e.output}) |
80 | action_fail(str(e)) |
81 | - except Exception, e: |
82 | + except Exception as e: |
83 | action_set({"exception": e}) |
84 | action_fail(str(e)) |
85 | diff --git a/hooks/hooks.py b/hooks/hooks.py |
86 | index e2bc2b5..a4a3ee5 100755 |
87 | --- a/hooks/hooks.py |
88 | +++ b/hooks/hooks.py |
89 | @@ -150,7 +150,7 @@ def port_check(host=None, port=None, protocol='TCP'): |
90 | s.shutdown(socket.SHUT_RDWR) |
91 | juju_log("port_check: %s:%s/%s is open" % (host, port, protocol)) |
92 | return(True) |
93 | - except Exception, e: |
94 | + except Exception as e: |
95 | juju_log("port_check: Unable to connect to %s:%s/%s." % |
96 | (host, port, protocol)) |
97 | juju_log("port_check: Exception: %s" % str(e)) |
98 | @@ -195,7 +195,7 @@ def update_file(filename=None, new_data=None, old_data=None): |
99 | with open(filename, 'w') as f: |
100 | f.write(new_data) |
101 | retVal = True |
102 | - except Exception, e: |
103 | + except Exception as e: |
104 | juju_log(str(e)) |
105 | retVal = False |
106 | finally: |
107 | @@ -211,7 +211,7 @@ def process_check(pid=None): |
108 | else: |
109 | juju_log("process_check: pid not defined.") |
110 | retVal = (None, None) |
111 | - except Exception, e: |
112 | + except Exception as e: |
113 | juju_log("process_check exception: %s" % str(e)) |
114 | retVal = (None, None) |
115 | finally: |
116 | @@ -490,7 +490,7 @@ def mongo_client_smart(host='localhost', command=None): |
117 | '--eval', 'printjson(%s)' % command] |
118 | juju_log("mongo_client_smart executing: %s" % str(cmd_line), level=DEBUG) |
119 | |
120 | - for i in xrange(MONGO_CLIENT_RETRIES): |
121 | + for i in range(MONGO_CLIENT_RETRIES): |
122 | try: |
123 | cmd_output = subprocess.check_output(cmd_line) |
124 | juju_log('mongo_client_smart executed, output: %s' % |
125 | @@ -614,7 +614,7 @@ def enable_replset(replicaset_name=None): |
126 | |
127 | juju_log('enable_replset will return: %s' % str(retVal), level=DEBUG) |
128 | |
129 | - except Exception, e: |
130 | + except Exception as e: |
131 | juju_log(str(e), level=WARNING) |
132 | retVal = False |
133 | finally: |
134 | @@ -632,7 +632,7 @@ def remove_replset_from_upstart(): |
135 | mongodb_init_config = re.sub(r' --replSet .\w+', '', |
136 | mongodb_init_config) |
137 | retVal = update_file(default_mongodb_init_config, mongodb_init_config) |
138 | - except Exception, e: |
139 | + except Exception as e: |
140 | juju_log(str(e)) |
141 | retVal = False |
142 | finally: |
143 | @@ -643,7 +643,7 @@ def step_down_replset_primary(): |
144 | """Steps down the primary |
145 | """ |
146 | retVal = mongo_client('localhost', 'rs.stepDown()') |
147 | - for i in xrange(MONGO_CLIENT_RETRIES): |
148 | + for i in range(MONGO_CLIENT_RETRIES): |
149 | if not am_i_primary(): |
150 | juju_log("step_down_replset_primary returns: %s" % retVal, |
151 | level=DEBUG) |
152 | @@ -665,7 +665,7 @@ def remove_rest_from_upstart(): |
153 | mongodb_init_config = regex_sub([(' --rest ', ' ')], |
154 | mongodb_init_config) |
155 | retVal = update_file(default_mongodb_init_config, mongodb_init_config) |
156 | - except Exception, e: |
157 | + except Exception as e: |
158 | juju_log(str(e)) |
159 | retVal = False |
160 | finally: |
161 | @@ -743,7 +743,7 @@ def disable_configsvr(port=None): |
162 | os.kill(int(pid), signal.SIGTERM) |
163 | os.unlink('/var/run/mongodb/configsvr.pid') |
164 | retVal = True |
165 | - except Exception, e: |
166 | + except Exception as e: |
167 | juju_log('no config server running ...') |
168 | juju_log("Exception: %s" % str(e)) |
169 | retVal = False |
170 | @@ -835,7 +835,7 @@ def disable_mongos(port=None): |
171 | os.kill(int(pid), signal.SIGTERM) |
172 | os.unlink('/var/run/mongodb/mongos.pid') |
173 | retVal = True |
174 | - except Exception, e: |
175 | + except Exception as e: |
176 | juju_log('no mongo router running ...') |
177 | juju_log("Exception: %s" % str(e)) |
178 | retVal = False |
179 | @@ -948,7 +948,7 @@ def backup_cronjob(disable=False): |
180 | |
181 | with open(script_filename, 'w') as output: |
182 | output.writelines(rendered) |
183 | - chmod(script_filename, 0755) |
184 | + chmod(script_filename, 0o755) |
185 | |
186 | juju_log('Installing cron.d/mongodb') |
187 | |
188 | @@ -1085,7 +1085,7 @@ def config_changed(): |
189 | # update config-server information and port |
190 | try: |
191 | (configsvr_pid, configsvr_cmd_line) = configsvr_status() |
192 | - except Exception, e: |
193 | + except Exception as e: |
194 | configsvr_pid = None |
195 | configsvr_cmd_line = None |
196 | juju_log("config_changed: configsvr_status failed.") |
197 | @@ -1101,7 +1101,7 @@ def config_changed(): |
198 | # update mongos information and port |
199 | try: |
200 | (mongos_pid, mongos_cmd_line) = mongos_status() |
201 | - except Exception, e: |
202 | + except Exception as e: |
203 | mongos_pid = None |
204 | mongos_cmd_line = None |
205 | juju_log("config_changed: mongos_status failed.") |
206 | @@ -1137,7 +1137,7 @@ def stop_hook(): |
207 | retVal = service('stop', 'mongodb') |
208 | os.remove('/var/lib/mongodb/mongod.lock') |
209 | # FIXME Need to check if this is still needed |
210 | - except Exception, e: |
211 | + except Exception as e: |
212 | juju_log(str(e)) |
213 | retVal = False |
214 | finally: |
215 | @@ -1227,7 +1227,7 @@ def rs_add(host): |
216 | juju_log("Executing: %s" % cmd_line, level=DEBUG) |
217 | run(cmd_line) |
218 | |
219 | - for i in xrange(MONGO_CLIENT_RETRIES): |
220 | + for i in range(MONGO_CLIENT_RETRIES): |
221 | c = MongoClient('localhost') |
222 | subprocess.check_output(cmd_line) |
223 | r = run_admin_command(c, 'replSetGetStatus') |
224 | @@ -1243,7 +1243,7 @@ def rs_add(host): |
225 | |
226 | def am_i_primary(): |
227 | c = MongoClient('localhost') |
228 | - for i in xrange(10): |
229 | + for i in range(10): |
230 | try: |
231 | r = run_admin_command(c, 'replSetGetStatus') |
232 | pretty_r = pprint.pformat(r) |
233 | @@ -1310,7 +1310,7 @@ def get_mongod_version(): |
234 | Mainly used for application_set_version in config-changed hook |
235 | """ |
236 | |
237 | - c = MongoClient('localhost') |
238 | + c = MongoClient('localhost', serverSelectionTimeoutMS=60000) |
239 | return c.server_info()['version'] |
240 | |
241 | |
242 | @@ -1612,7 +1612,7 @@ def run(command, exit_on_error=True): |
243 | juju_log(command) |
244 | return subprocess.check_output( |
245 | command, stderr=subprocess.STDOUT, shell=True) |
246 | - except subprocess.CalledProcessError, e: |
247 | + except subprocess.CalledProcessError as e: |
248 | juju_log("status=%d, output=%s" % (e.returncode, e.output)) |
249 | if exit_on_error: |
250 | sys.exit(e.returncode) |
251 | diff --git a/tests/00_setup.sh b/tests/00_setup.sh |
252 | deleted file mode 100755 |
253 | index 4f58709..0000000 |
254 | --- a/tests/00_setup.sh |
255 | +++ /dev/null |
256 | @@ -1,9 +0,0 @@ |
257 | -#!/bin/bash |
258 | - |
259 | -set -e |
260 | - |
261 | -sudo apt-get install python-setuptools -y |
262 | -sudo add-apt-repository ppa:juju/stable -y |
263 | - |
264 | -sudo apt-get update |
265 | -sudo apt-get install amulet python3 python3-requests python3-pymongo juju-core charm-tools python-mock python-pymongo -y |
266 | diff --git a/tests/base_deploy.py b/tests/base_deploy.py |
267 | deleted file mode 100644 |
268 | index b136cca..0000000 |
269 | --- a/tests/base_deploy.py |
270 | +++ /dev/null |
271 | @@ -1,90 +0,0 @@ |
272 | -#!/usr/bin/env python3 |
273 | - |
274 | -import amulet |
275 | -import requests |
276 | -import sys |
277 | -import time |
278 | -import traceback |
279 | -from pymongo import MongoClient |
280 | - |
281 | - |
282 | -class BasicMongo(object): |
283 | - def __init__(self, units, series, deploy_timeout): |
284 | - self.units = units |
285 | - self.series = series |
286 | - self.deploy_timeout = deploy_timeout |
287 | - self.d = amulet.Deployment(series=self.series) |
288 | - self.addy = None |
289 | - |
290 | - def deploy(self): |
291 | - try: |
292 | - self.d.setup(self.deploy_timeout) |
293 | - self.d.sentry.wait(self.deploy_timeout) |
294 | - except amulet.helpers.TimeoutError: |
295 | - message = 'The environment did not setup in %d seconds.', |
296 | - self.deploy_timeout |
297 | - amulet.raise_status(amulet.SKIP, msg=message) |
298 | - |
299 | - self.sentry_dict = {svc: self.d.sentry[svc] |
300 | - for svc in list(self.d.sentry.unit)} |
301 | - |
302 | - def validate_status_interface(self): |
303 | - addy = self.addy |
304 | - fmt = "http://{}:28017" |
305 | - if ":" in addy: |
306 | - fmt = "http://[{}]:28017" |
307 | - |
308 | - time_between = 10 |
309 | - tries = self.deploy_timeout / time_between |
310 | - |
311 | - try: |
312 | - r = requests.get(fmt.format(addy), verify=False) |
313 | - r.raise_for_status() |
314 | - except requests.exception.ConnectionError as ex: |
315 | - sys.stderr.write( |
316 | - 'Connection error, sleep and retry... to {}: {}\n'. |
317 | - format(addy, ex)) |
318 | - tb_lines = traceback.format_exception(ex.__class__, |
319 | - ex, ex.__traceback__) |
320 | - tb_text = ''.join(tb_lines) |
321 | - sys.stderr.write(tb_text) |
322 | - tries = tries - 1 |
323 | - if tries < 0: |
324 | - sys.stderr.write('retry limit caught, failing...\n') |
325 | - time.sleep(time_between) |
326 | - |
327 | - def validate_world_connectivity(self): |
328 | - addy = self.addy |
329 | - # ipv6 proper formating |
330 | - if ":" in addy: |
331 | - addy = "[{}]".format(addy) |
332 | - |
333 | - client = MongoClient(addy) |
334 | - db = client['test'] |
335 | - |
336 | - # Can we successfully insert? |
337 | - insert_id = db.amulet.insert({'assert': True}) |
338 | - if insert_id is None: |
339 | - amulet.raise_status(amulet.FAIL, msg="Failed to insert test data") |
340 | - |
341 | - # Can we delete from a shard using the Mongos hub? |
342 | - result = db.amulet.remove(insert_id) |
343 | - if 'err' in result and result['err'] is not None: |
344 | - amulet.raise_status(amulet.FAIL, msg="Failed to remove test data") |
345 | - |
346 | - def validate_running_services(self): |
347 | - for service in self.sentry_dict: |
348 | - grep_command = 'grep RELEASE /etc/lsb-release' |
349 | - release = self.sentry_dict[service].run(grep_command) |
350 | - release = str(release).split('=')[1] |
351 | - if release >= '15.10': |
352 | - status_string = 'active (running)' # systemd |
353 | - else: |
354 | - status_string = 'mongodb start/running' # upstart |
355 | - |
356 | - output = self.sentry_dict[service].run('service mongodb status') |
357 | - service_active = str(output).find(status_string) |
358 | - if service_active == -1: |
359 | - message = "Failed to find running MongoDB on host {}".format( |
360 | - service) |
361 | - amulet.raise_status(amulet.SKIP, msg=message) |
362 | diff --git a/tests/bundles/bionic-shard.yaml b/tests/bundles/bionic-shard.yaml |
363 | new file mode 100644 |
364 | index 0000000..b7ee354 |
365 | --- /dev/null |
366 | +++ b/tests/bundles/bionic-shard.yaml |
367 | @@ -0,0 +1,27 @@ |
368 | +series: bionic |
369 | +description: "mongodb-charm test bundle" |
370 | +applications: |
371 | + configsvr: |
372 | + charm: "../../." |
373 | + num_units: 1 |
374 | + options: |
375 | + replicaset: configsvr |
376 | + mongodb: |
377 | + charm: "../../." |
378 | + num_units: 1 |
379 | + options: |
380 | + replicaset: testset |
381 | + shard1: |
382 | + charm: "../../." |
383 | + num_units: 1 |
384 | + options: |
385 | + replicaset: shard1 |
386 | + shard2: |
387 | + charm: "../../." |
388 | + num_units: 1 |
389 | + options: |
390 | + replicaset: shard2 |
391 | +relations: |
392 | + - [ "configsvr:configsvr", "mongodb:mongos-cfg" ] |
393 | + - [ "mongodb:mongos", "shard1:database" ] |
394 | + - [ "mongodb:mongos", "shard2:database" ] |
395 | diff --git a/tests/bundles/bionic.yaml b/tests/bundles/bionic.yaml |
396 | new file mode 100644 |
397 | index 0000000..ebfb98e |
398 | --- /dev/null |
399 | +++ b/tests/bundles/bionic.yaml |
400 | @@ -0,0 +1,9 @@ |
401 | +series: bionic |
402 | +description: "mongodb-charm test bundle" |
403 | +applications: |
404 | + mongodb: |
405 | + charm: "../../." |
406 | + num_units: 3 |
407 | + options: |
408 | + replicaset: testset |
409 | + backup_directory: /var/backups |
410 | diff --git a/tests/bundles/xenial.yaml b/tests/bundles/xenial.yaml |
411 | new file mode 100644 |
412 | index 0000000..9d29342 |
413 | --- /dev/null |
414 | +++ b/tests/bundles/xenial.yaml |
415 | @@ -0,0 +1,9 @@ |
416 | +series: xenial |
417 | +description: "mongodb-charm test bundle" |
418 | +applications: |
419 | + mongodb: |
420 | + charm: "../../." |
421 | + num_units: 3 |
422 | + options: |
423 | + replicaset: testset |
424 | + backup_directory: /var/backups |
425 | diff --git a/tests/deploy_replicaset-trusty b/tests/deploy_replicaset-trusty |
426 | deleted file mode 100755 |
427 | index 9ed77a8..0000000 |
428 | --- a/tests/deploy_replicaset-trusty |
429 | +++ /dev/null |
430 | @@ -1,6 +0,0 @@ |
431 | -#!/usr/bin/env python3 |
432 | - |
433 | -import deploy_replicaset |
434 | - |
435 | -t = deploy_replicaset.Replicaset('trusty') |
436 | -t.run() |
437 | \ No newline at end of file |
438 | diff --git a/tests/deploy_replicaset-xenial b/tests/deploy_replicaset-xenial |
439 | deleted file mode 100755 |
440 | index 4f142dd..0000000 |
441 | --- a/tests/deploy_replicaset-xenial |
442 | +++ /dev/null |
443 | @@ -1,6 +0,0 @@ |
444 | -#!/usr/bin/env python3 |
445 | - |
446 | -import deploy_replicaset |
447 | - |
448 | -t = deploy_replicaset.Replicaset('xenial') |
449 | -t.run() |
450 | diff --git a/tests/deploy_replicaset.py b/tests/deploy_replicaset.py |
451 | deleted file mode 100644 |
452 | index 6fa0290..0000000 |
453 | --- a/tests/deploy_replicaset.py |
454 | +++ /dev/null |
455 | @@ -1,150 +0,0 @@ |
456 | -#!/usr/bin/env python3 |
457 | - |
458 | -import amulet |
459 | -import logging |
460 | -import re |
461 | -import sys |
462 | -import time |
463 | -import traceback |
464 | -from pymongo import MongoClient |
465 | -from pymongo.errors import OperationFailure |
466 | -from collections import Counter |
467 | - |
468 | -from base_deploy import BasicMongo |
469 | - |
470 | -# max amount of time to wait before testing for replicaset status |
471 | -wait_for_replicaset = 600 |
472 | -logger = logging.getLogger(__name__) |
473 | - |
474 | - |
475 | -class Replicaset(BasicMongo): |
476 | - def __init__(self, series): |
477 | - super(Replicaset, self).__init__(units=3, |
478 | - series=series, |
479 | - deploy_timeout=1800) |
480 | - |
481 | - def _expect_replicaset_counts(self, |
482 | - primaries_count, |
483 | - secondaries_count, |
484 | - time_between=10): |
485 | - unit_status = [] |
486 | - tries = wait_for_replicaset / time_between |
487 | - |
488 | - for service in self.sentry_dict: |
489 | - addy = self.sentry_dict[service].info['public-address'] |
490 | - if ":" in addy: |
491 | - addy = "[{}]".format(addy) |
492 | - while True: |
493 | - try: |
494 | - client = MongoClient(addy) |
495 | - r = client.admin.command('replSetGetStatus') |
496 | - break |
497 | - except OperationFailure as ex: |
498 | - sys.stderr.write( |
499 | - 'OperationFailure, sleep and retry... to {}: {}\n'. |
500 | - format(addy, ex)) |
501 | - tb_lines = traceback.format_exception(ex.__class__, |
502 | - ex, ex.__traceback__) |
503 | - tb_text = ''.join(tb_lines) |
504 | - sys.stderr.write(tb_text) |
505 | - tries = tries - 1 |
506 | - if tries < 0: |
507 | - sys.stderr.write('retry limit caught, failing...\n') |
508 | - break |
509 | - time.sleep(time_between) |
510 | - unit_status.append(r['myState']) |
511 | - client.close() |
512 | - |
513 | - primaries = Counter(unit_status)[1] |
514 | - if primaries != primaries_count: |
515 | - message = "Expected %d PRIMARY unit(s)! Found: %s %s" % ( |
516 | - primaries_count, |
517 | - primaries, |
518 | - unit_status) |
519 | - amulet.raise_status(amulet.FAIL, message) |
520 | - |
521 | - secondrs = Counter(unit_status)[2] |
522 | - if secondrs != secondaries_count: |
523 | - message = ("Expected %d secondary units! (Found %s) %s" % |
524 | - (secondaries_count, secondrs, unit_status)) |
525 | - amulet.raise_status(amulet.FAIL, message) |
526 | - |
527 | - def deploy(self): |
528 | - self.d.add('mongodb', charm='mongodb', units=self.units) |
529 | - self.d.expose('mongodb') |
530 | - super(Replicaset, self).deploy() |
531 | - self.wait_for_replicaset = 600 |
532 | - |
533 | - def validate_status_interface(self): |
534 | - self.addy = self.d.sentry['mongodb'][0].info['public-address'] |
535 | - super(Replicaset, self).validate_status_interface() |
536 | - |
537 | - def validate_replicaset_setup(self): |
538 | - self.d.sentry.wait(self.deploy_timeout) |
539 | - self._expect_replicaset_counts(1, 2) |
540 | - |
541 | - def validate_replicaset_relation_joined(self): |
542 | - self.d.add_unit('mongodb', units=2) |
543 | - self.d.sentry.wait(wait_for_replicaset) |
544 | - self.sentry_dict = {svc: self.d.sentry[svc] |
545 | - for svc in list(self.d.sentry.unit)} |
546 | - self._expect_replicaset_counts(1, 4) |
547 | - |
548 | - def validate_world_connectivity(self): |
549 | - # figuring out which unit is primary |
550 | - primary = False |
551 | - while not primary: |
552 | - for unit in self.sentry_dict: |
553 | - unit_address = self.sentry_dict[unit].info['public-address'] |
554 | - if ":" in unit_address: |
555 | - unit_address = "[{}]".format(unit_address) |
556 | - c = MongoClient(unit_address) |
557 | - r = c.admin.command('replSetGetStatus') |
558 | - if r['myState'] == 1: |
559 | - # reusing address without possible brackets [] |
560 | - primary = self.sentry_dict[unit].info['public-address'] |
561 | - break |
562 | - time.sleep(.1) |
563 | - |
564 | - self.addy = primary |
565 | - super(Replicaset, self).validate_world_connectivity() |
566 | - |
567 | - def validate_running_services(self): |
568 | - super(Replicaset, self).validate_running_services() |
569 | - |
570 | - def validate_workload_status(self): |
571 | - primaries = 0 |
572 | - secondaries = 0 |
573 | - regex = re.compile('^Unit is ready as (PRIMARY|SECONDARY)$') |
574 | - self.d.sentry.wait_for_messages({'mongodb': regex}) |
575 | - |
576 | - # count how many primaries and secondaries were reported in the |
577 | - # workload status |
578 | - for unit_name, unit in self.d.sentry.get_status()['mongodb'].items(): |
579 | - workload_msg = unit['workload-status']['message'] |
580 | - matched = re.match(regex, workload_msg) |
581 | - |
582 | - if not matched: |
583 | - msg = "'{}' does not match '{}'".format(workload_msg, regex) |
584 | - amulet.raise_status(amulet.FAIL, msg=msg) |
585 | - elif matched.group(1) == 'PRIMARY': |
586 | - primaries += 1 |
587 | - elif matched.group(1) == 'SECONDARY': |
588 | - secondaries += 1 |
589 | - else: |
590 | - amulet.raise_status(amulet.FAIL, |
591 | - msg='Unknown state: %s' % matched.group(1)) |
592 | - |
593 | - logger.debug('Secondary units found: %d' % secondaries) |
594 | - if primaries > 1: |
595 | - msg = "Found %d primaries, expected 1" % primaries |
596 | - amulet.raise_status(amulet.FAIL, msg=msg) |
597 | - |
598 | - def run(self): |
599 | - self.deploy() |
600 | - self.validate_status_interface() |
601 | - self.validate_running_services() |
602 | - self.validate_replicaset_setup() |
603 | - self.validate_replicaset_relation_joined() |
604 | - self.validate_world_connectivity() |
605 | - self.validate_workload_status() |
606 | diff --git a/tests/deploy_shard-trusty b/tests/deploy_shard-trusty |
607 | deleted file mode 100755 |
608 | index 200110f..0000000 |
609 | --- a/tests/deploy_shard-trusty |
610 | +++ /dev/null |
611 | @@ -1,6 +0,0 @@ |
612 | -#!/usr/bin/env python3 |
613 | - |
614 | -import deploy_shard |
615 | - |
616 | -t = deploy_shard.ShardNode('trusty') |
617 | -t.run() |
618 | \ No newline at end of file |
619 | diff --git a/tests/deploy_shard-xenial b/tests/deploy_shard-xenial |
620 | deleted file mode 100755 |
621 | index cb00363..0000000 |
622 | --- a/tests/deploy_shard-xenial |
623 | +++ /dev/null |
624 | @@ -1,6 +0,0 @@ |
625 | -#!/usr/bin/env python3 |
626 | - |
627 | -import deploy_shard |
628 | - |
629 | -t = deploy_shard.ShardNode('xenial') |
630 | -t.run() |
631 | diff --git a/tests/deploy_shard.py b/tests/deploy_shard.py |
632 | deleted file mode 100644 |
633 | index d384ef4..0000000 |
634 | --- a/tests/deploy_shard.py |
635 | +++ /dev/null |
636 | @@ -1,80 +0,0 @@ |
637 | -#!/usr/bin/env python3 |
638 | - |
639 | -import amulet |
640 | - |
641 | -from base_deploy import BasicMongo |
642 | - |
643 | - |
644 | -class ShardNode(BasicMongo): |
645 | - def __init__(self, series): |
646 | - super(ShardNode, self).__init__(units=1, |
647 | - series=series, |
648 | - deploy_timeout=900) |
649 | - |
650 | - def deploy(self): |
651 | - self.d.add('configsvr', charm='mongodb', units=self.units) |
652 | - self.d.add('mongos', charm='mongodb', units=self.units) |
653 | - self.d.add('shard1', charm='mongodb', units=self.units) |
654 | - self.d.add('shard2', charm='mongodb', units=self.units) |
655 | - |
656 | - # Setup the config svr |
657 | - self.d.configure('configsvr', {'replicaset': 'configsvr'}) |
658 | - |
659 | - # define each shardset |
660 | - self.d.configure('shard1', {'replicaset': 'shard1'}) |
661 | - self.d.configure('shard2', {'replicaset': 'shard2'}) |
662 | - |
663 | - self.d.configure('mongos', {}) |
664 | - |
665 | - # Connect the config servers to mongo shell |
666 | - self.d.relate('configsvr:configsvr', 'mongos:mongos-cfg') |
667 | - |
668 | - # connect each shard to the mongo shell |
669 | - self.d.relate('mongos:mongos', 'shard1:database') |
670 | - self.d.relate('mongos:mongos', 'shard2:database') |
671 | - self.d.expose('configsvr') |
672 | - self.d.expose('mongos') |
673 | - super(ShardNode, self).deploy() |
674 | - |
675 | - self.sentry_dict = { |
676 | - 'config-sentry': self.d.sentry['configsvr'][0], |
677 | - 'mongos-sentry': self.d.sentry['mongos'][0], |
678 | - 'shard1-sentry': self.d.sentry['shard1'][0], |
679 | - 'shard2-sentry': self.d.sentry['shard2'][0] |
680 | - } |
681 | - |
682 | - def validate_world_connectivity(self): |
683 | - self.addy = self.d.sentry['mongos'][0].info['public-address'] |
684 | - super(ShardNode, self).validate_world_connectivity() |
685 | - |
686 | - def validate_running_services(self): |
687 | - super(ShardNode, self).validate_running_services() |
688 | - |
689 | - def validate_status_interface(self): |
690 | - self.addy = self.sentry_dict['config-sentry'].info['public-address'] |
691 | - super(ShardNode, self).validate_status_interface() |
692 | - |
693 | - def validate_manual_connection(self): |
694 | - fmt = "mongo {}" |
695 | - addy = self.d.sentry['mongos'][0].info['public-address'] |
696 | - if ":" in addy: |
697 | - fmt = "mongo --ipv6 {}:27017" |
698 | - jujuruncmd = fmt.format(addy) |
699 | - output, code = self.d.sentry['shard1'][0].run(jujuruncmd) |
700 | - if code != 0: |
701 | - msg = ("Manual Connection failed, unit shard1:{} code:{} cmd:{}" |
702 | - .format(output, code, jujuruncmd)) |
703 | - amulet.raise_status(amulet.SKIP, msg=msg) |
704 | - |
705 | - output, code = self.d.sentry['shard2'][0].run(jujuruncmd) |
706 | - if code != 0: |
707 | - msg = ("Manual Connection failed, unit shard2:{} code:{} cmd:{}" |
708 | - .format(output, code, jujuruncmd)) |
709 | - amulet.raise_status(amulet.SKIP, msg=msg) |
710 | - |
711 | - def run(self): |
712 | - self.deploy() |
713 | - self.validate_world_connectivity() |
714 | - self.validate_status_interface() |
715 | - self.validate_running_services() |
716 | - self.validate_manual_connection() |
717 | diff --git a/tests/deploy_single-trusty b/tests/deploy_single-trusty |
718 | deleted file mode 100755 |
719 | index 59a2231..0000000 |
720 | --- a/tests/deploy_single-trusty |
721 | +++ /dev/null |
722 | @@ -1,8 +0,0 @@ |
723 | -#!/usr/bin/env python3 |
724 | - |
725 | -import deploy_single |
726 | - |
727 | -t = deploy_single.SingleNode('trusty') |
728 | -t.run() |
729 | - |
730 | - |
731 | diff --git a/tests/deploy_single-xenial b/tests/deploy_single-xenial |
732 | deleted file mode 100755 |
733 | index 3f718d8..0000000 |
734 | --- a/tests/deploy_single-xenial |
735 | +++ /dev/null |
736 | @@ -1,8 +0,0 @@ |
737 | -#!/usr/bin/env python3 |
738 | - |
739 | -import deploy_single |
740 | - |
741 | -t = deploy_single.SingleNode('xenial') |
742 | -t.run() |
743 | - |
744 | - |
diff --git a/tests/deploy_single.py b/tests/deploy_single.py
deleted file mode 100644
index c3181b1..0000000
--- a/tests/deploy_single.py
+++ /dev/null
@@ -1,23 +0,0 @@
-#!/usr/bin/env python3
-
-from base_deploy import BasicMongo
-
-
-class SingleNode(BasicMongo):
-    def __init__(self, series):
-        super(SingleNode, self).__init__(units=1,
-                                         series=series,
-                                         deploy_timeout=900)
-
-    def deploy(self):
-        self.d.add('mongodb', charm='mongodb', units=self.units)
-        self.d.expose('mongodb')
-        super(SingleNode, self).deploy()
-
-    def validate_world_connectivity(self):
-        self.addy = self.d.sentry['mongodb'][0].info['public-address']
-        super(SingleNode, self).validate_world_connectivity()
-
-    def run(self):
-        self.deploy()
-        self.validate_world_connectivity()
diff --git a/tests/deploy_with_ceilometer-trusty b/tests/deploy_with_ceilometer-trusty
deleted file mode 100755
index b73d870..0000000
--- a/tests/deploy_with_ceilometer-trusty
+++ /dev/null
@@ -1,6 +0,0 @@
-#!/usr/bin/env python3
-
-import deploy_with_ceilometer
-
-t = deploy_with_ceilometer.TestCeilometer('trusty')
-t.run()
diff --git a/tests/deploy_with_ceilometer-xenial b/tests/deploy_with_ceilometer-xenial
deleted file mode 100755
index 4678457..0000000
--- a/tests/deploy_with_ceilometer-xenial
+++ /dev/null
@@ -1,7 +0,0 @@
-#!/usr/bin/env python3
-
-import deploy_with_ceilometer
-
-#Not running this because of: https://launchpad.net/bugs/1656651
-#t = deploy_with_ceilometer.TestCeilometer('xenial')
-#t.run()
diff --git a/tests/deploy_with_ceilometer.py b/tests/deploy_with_ceilometer.py
deleted file mode 100644
index 133eee3..0000000
--- a/tests/deploy_with_ceilometer.py
+++ /dev/null
@@ -1,36 +0,0 @@
-#!/usr/bin/env python3
-
-import amulet
-
-from base_deploy import BasicMongo
-
-
-class TestCeilometer(BasicMongo):
-    def __init__(self, series):
-        super(TestCeilometer, self).__init__(units=1, series=series,
-                                             deploy_timeout=900)
-
-    def deploy(self):
-        self.d.add('mongodb', charm='mongodb', units=self.units)
-        self.d.add('ceilometer', 'cs:{}/ceilometer'.format(self.series))
-        self.d.relate('mongodb:database', 'ceilometer:shared-db')
-        self.d.expose('mongodb')
-        super(TestCeilometer, self).deploy()
-
-    def validate_world_connectivity(self):
-        self.addy = self.d.sentry['mongodb'][0].info['public-address']
-        super(TestCeilometer, self).validate_world_connectivity()
-
-    def validate_mongo_relation(self):
-        unit = self.d.sentry['ceilometer'][0]
-        mongo = self.d.sentry['mongodb'][0].info['public-address']
-        mongo_reladdr = self.d.sentry['mongodb'][0].relation(
-            'database', 'ceilometer:shared-db')
-        cont = unit.file_contents('/etc/ceilometer/ceilometer.conf')
-        if (mongo not in cont and mongo_reladdr.get(
-                'hostname', 'I SURE HOPE NOT') not in cont):
-            amulet.raise_status(amulet.FAIL, "Unable to verify ceilometer cfg")
-
-    def run(self):
-        self.deploy()
-        self.validate_world_connectivity()
diff --git a/tests/deploy_with_storage-trusty b/tests/deploy_with_storage-trusty
deleted file mode 100755
index f7e8314..0000000
--- a/tests/deploy_with_storage-trusty
+++ /dev/null
@@ -1,6 +0,0 @@
-#!/usr/bin/env python3
-
-import deploy_with_storage
-
-t = deploy_with_storage.WithStorage('trusty')
-t.run()
diff --git a/tests/deploy_with_storage-xenial b/tests/deploy_with_storage-xenial
deleted file mode 100755
index 6b0d660..0000000
--- a/tests/deploy_with_storage-xenial
+++ /dev/null
@@ -1,8 +0,0 @@
-#!/usr/bin/env python3
-
-import deploy_with_storage
-
-# We are not testing this against xenial yet, because
-# cs:~chris-gondolin/trusty/storage-5 does not exist for xenial (yet)
-#t = deploy_with_storage.WithStorage('xenial')
-#t.run()
diff --git a/tests/deploy_with_storage.py b/tests/deploy_with_storage.py
deleted file mode 100644
index 92a5fe6..0000000
--- a/tests/deploy_with_storage.py
+++ /dev/null
@@ -1,74 +0,0 @@
-#!/usr/bin/env python3
-
-from base_deploy import BasicMongo
-
-import amulet
-from pymongo import MongoClient
-from collections import Counter
-
-
-class WithStorage(BasicMongo):
-    def __init__(self, series):
-        super(WithStorage, self).__init__(units=2,
-                                          series=series,
-                                          deploy_timeout=1800)
-
-    def deploy(self):
-        self.d.add('mongodb',
-                   charm='mongodb',
-                   units=self.units,
-                   constraints={'root-disk': '20480M'})
-
-        storage_charm = 'cs:~chris-gondolin/{}/storage-5'.format(self.series)
-        self.d.add('storage', charm=storage_charm, series=self.series)
-        self.d.configure('storage', {'provider': 'local'})
-        super(WithStorage, self).deploy()
-        self.d.expose('mongodb')
-
-        ordered_units = sorted(self.d.sentry['mongodb'],
-                               key=lambda u: u.info['unit'])
-        self.sentry_dict = {
-            'mongodb0-sentry': ordered_units[0],
-            'mongodb1-sentry': ordered_units[1]
-        }
-
-    def validate_status(self):
-        self.d.sentry.wait_for_status(self.d.juju_env, ['mongodb'])
-
-    def validate_replicaset_setup(self):
-        self.d.sentry.wait(self.deploy_timeout)
-
-        unit_status = []
-        for service in self.sentry_dict:
-            addy = self.sentry_dict[service].info['public-address']
-            if ":" in addy:
-                addy = "[{}]".format(addy)
-            client = MongoClient(addy)
-            r = client.admin.command('replSetGetStatus')
-            unit_status.append(r['myState'])
-            client.close()
-
-        prims = Counter(unit_status)[1]
-        if prims != 1:
-            message = "Only one PRIMARY unit allowed! Found: %s" % (prims)
-            amulet.raise_status(amulet.FAIL, message)
-
-        secnds = Counter(unit_status)[2]
-        if secnds != 1:
-            message = "Only one SECONDARY unit allowed! (Found %s)" % (secnds)
-            amulet.raise_status(amulet.FAIL, message)
-
-    def run(self):
-        self.deploy()
-        self.validate_status()
-        self.validate_replicaset_setup()
-
-        print("Adding storage relation, and sleeping for 2 min.")
-        try:
-            self.d.relate('mongodb:data', 'storage:data')
-        except OSError as e:
-            print("Ignoring error: {}", e)
-        self.d.sentry.wait(120)  # 2 minute
-
-        self.validate_status()
-        self.validate_replicaset_setup()
diff --git a/tests/test_requirements.txt b/tests/test_requirements.txt
new file mode 100644
index 0000000..9fbeeab
--- /dev/null
+++ b/tests/test_requirements.txt
@@ -0,0 +1,2 @@
+git+https://github.com/openstack-charmers/zaza.git#egg=zaza
+pymongo
diff --git a/tests/tests.yaml b/tests/tests.yaml
new file mode 100644
index 0000000..dcbd01c
--- /dev/null
+++ b/tests/tests.yaml
@@ -0,0 +1,17 @@
+charm_name: mongodb-charm
+tests:
+  - model_alias_xenial:
+    - tests.tests_mongodb.BasicMongodbCharmTest
+    - tests.tests_mongodb.ReplicatedMongodbCharmTest
+    - tests.tests_mongodb.XenialMongodbCharmTest
+  - model_alias_bionic:
+    - tests.tests_mongodb.BasicMongodbCharmTest
+    - tests.tests_mongodb.ReplicatedMongodbCharmTest
+  - model_alias_shard:
+    - tests.tests_mongodb.ShardedMongodbCharmTest
+gate_bundles:
+  - model_alias_xenial: xenial
+  - model_alias_bionic: bionic
+  - model_alias_shard: bionic-shard
+smoke_bundles:
+  - model_alias_bionic: bionic
diff --git a/tests/tests_mongodb.py b/tests/tests_mongodb.py
new file mode 100644
index 0000000..dc50d8d
--- /dev/null
+++ b/tests/tests_mongodb.py
@@ -0,0 +1,137 @@
+#!/usr/bin/env python3
+import requests
+import unittest
+
+from pymongo import MongoClient
+from requests.adapters import HTTPAdapter
+from requests.packages.urllib3.util.retry import Retry
+from zaza import model
+from zaza.charm_lifecycle import utils as lifecycle_utils
+
+
+MONGO_STARTUP = 0
+MONGO_PRIMARY = 1
+MONGO_SECONDARY = 2
+MONGO_RECOVERING = 3
+MONGO_FATAL = 4
+MONGO_STARTUP2 = 5
+MONGO_UNKNOWN = 6
+MONGO_ARBITER = 7
+MONGO_DOWN = 8
+MONGO_ROLLBACK = 9
+MONGO_REMOVED = 10
+
+
+def requests_retry_session(
+    retries=3, backoff_factor=2, status_forcelist=(500, 502, 504), session=None
+):
+    """Create a http session with retry"""
+    session = session or requests.Session()
+    retry = Retry(
+        total=retries,
+        read=retries,
+        connect=retries,
+        backoff_factor=backoff_factor,
+        status_forcelist=status_forcelist,
+    )
+    adapter = HTTPAdapter(max_retries=retry)
+    session.mount("http://", adapter)
+    session.mount("https://", adapter)
+    return session
+
+
+class MongodbCharmTestBase(unittest.TestCase):
+    @classmethod
+    def setUpClass(cls):
+        cls.model_name = model.get_juju_model()
+        cls.test_config = lifecycle_utils.get_charm_config()
+        model.block_until_all_units_idle()
+        addr = model.get_lead_unit_ip("mongodb")
+        if ":" in addr:  # ipv6 formatting
+            cls.leader_address = "[{}]".format(addr)
+        else:
+            cls.leader_address = addr
+        cls.db_client = MongoClient(cls.leader_address)
+
+    def cat_unit(self, unit, path):
+        unit_res = model.run_on_unit(unit, "sudo cat {}".format(path))
+        return unit_res["Stdout"]
+
+    def web_admin_interface(self, ipaddr, port=28017):
+        url = "http://{}:{}".format(ipaddr, port)
+        resp = requests_retry_session(retries=10).get(url)
+        return resp
+
+
+class BasicMongodbCharmTest(MongodbCharmTestBase):
+    def test_db_insert(self):
+        """Test if we can insert and remove a value"""
+        test_db = self.db_client.test_db
+        insert_id = test_db.testcoll.insert({"assert": True})
+        self.assertTrue(insert_id is not None, "Failed to insert test data")
+        result = test_db.testcoll.remove(insert_id)
+        self.assertTrue("err" not in result, "Failed to remove test data")
+
+    def test_service_running(self):
+        """Test if we have a mongod running on all units"""
+        for i in (0, 1, 2):
+            running_for = model.get_unit_service_start_time(
+                "mongodb/{}".format(i), "mongod", timeout=20
+            )
+            self.assertGreater(running_for, 0)
+
+
+class XenialMongodbCharmTest(MongodbCharmTestBase):
+    def test_status_interface(self):
+        """Check if we can access the web admin port -- xenial only"""
+        resp = self.web_admin_interface(self.leader_address)
+        resp.raise_for_status()
+
+
+class ReplicatedMongodbCharmTest(MongodbCharmTestBase):
+    def get_set_status(self):
+        unit_status = []
+        for addr in model.get_app_ips("mongodb"):
+            unit_client = MongoClient(addr)
+            unit_status.append(unit_client.admin.command("replSetGetStatus"))
+        return unit_status
+
+    def test_replset_numbers(self):
+        """Test if we have 1 primary and 2 secondary mongodbs"""
+        unit_status = self.get_set_status()
+        primaries = [u for u in unit_status if u["myState"] == MONGO_PRIMARY]
+        secondaries = [u for u in unit_status if u["myState"] == MONGO_SECONDARY]
+        self.assertEqual(len(primaries), 1)
+        self.assertEqual(len(secondaries), 2)
+
+    def test_replset_consistent_members(self):
+        """Test if all units have the same view on membership"""
+        unit_status = self.get_set_status()
+        prim_members = [u for u in unit_status if u["myState"] == MONGO_PRIMARY][0][
+            "members"
+        ]
+        secondary_members = [
+            u["members"] for u in unit_status if u["myState"] == MONGO_SECONDARY
+        ]
+
+        def extract(member_dict):
+            # Extract a subset of membership info as a frozenset. Name is ipaddr:port, health a float and stateStr a str
+            return frozenset(
+                v for k, v in member_dict.items() if k in ["name", "health", "stateStr"]
+            )
+
+        ref = set(
+            map(extract, prim_members)
+        )  # Our reference view on membership comes from the primary
+        secondaries = [set(map(extract, sec)) for sec in secondary_members]
+        for sec in secondaries:
+            self.assertEqual(ref, sec)
+
+
+class ShardedMongodbCharmTest(MongodbCharmTestBase):
+    def test_mongos_running(self):
+        """Test if the mongos service is running"""
+        running_for = model.get_unit_service_start_time(
+            "mongodb/0", "mongos", timeout=20
+        )
+        self.assertGreater(running_for, 0)
diff --git a/tox.ini b/tox.ini
new file mode 100644
index 0000000..6e3a650
--- /dev/null
+++ b/tox.ini
@@ -0,0 +1,30 @@
+[tox]
+skipsdist=True
+envlist = unit, functional, lint
+skip_missing_interpreters = True
+
+[testenv]
+setenv =
+    PYTHONPATH = .
+passenv =
+    HOME
+    JUJU_REPOSITORY
+    MODEL_SETTINGS
+
+[testenv:unit]
+basepython = python2
+commands =
+    nosetests -s --nologcapture --with-coverage unit_tests/ actions/
+deps = -r{toxinidir}/test_requirements.txt
+
+[testenv:functional]
+basepython = python3
+commands =
+    functest-run-suite --keep-model
+deps = -r{toxinidir}/tests/test_requirements.txt
+
+[testenv:func-smoke]
+basepython = python3
+commands =
+    functest-run-suite --keep-model --smoke
+deps = -r{toxinidir}/tests/test_requirements.txt
diff --git a/unit_tests/test_hooks.py b/unit_tests/test_hooks.py
index b44d11f..c6e3c22 100644
--- a/unit_tests/test_hooks.py
+++ b/unit_tests/test_hooks.py
@@ -187,8 +187,8 @@ class MongoHooksTest(CharmTestCase):
     @patch('time.sleep')
     def test_am_i_primary(self, mock_sleep, mock_mongo_client,
                           mock_run_admin_cmd):
-        mock_run_admin_cmd.side_effect = [{'myState': x} for x in xrange(5)]
-        expected_results = [True if x == 1 else False for x in xrange(5)]
+        mock_run_admin_cmd.side_effect = [{'myState': x} for x in range(5)]
+        expected_results = [True if x == 1 else False for x in range(5)]

         # Check expected return values each time...
         for exp in expected_results:
@@ -203,7 +203,7 @@ class MongoHooksTest(CharmTestCase):
                           mock_run_admin_cmd):
         msg = 'replSetInitiate - should come online shortly'
         mock_run_admin_cmd.side_effect = [OperationFailure(msg)
-                                          for x in xrange(10)]
+                                          for x in range(10)]

         try:
             hooks.am_i_primary()
@@ -262,7 +262,7 @@ class MongoHooksTest(CharmTestCase):
     def test_mongo_client_smart_error_cases(self, mock_ck_output, mock_sleep):
         mock_ck_output.side_effect = [CalledProcessError(1, 'cmd',
                                       output='fake-error')
-                                      for x in xrange(11)]
+                                      for x in range(11)]
         rv = hooks.mongo_client_smart(command='fake-cmd')
         self.assertFalse(rv)
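The `MONGO_*` constants introduced in `tests/tests_mongodb.py` mirror the `myState` codes returned by MongoDB's `replSetGetStatus` admin command. As a quick offline illustration of what those codes mean (the `state_name` helper below is not part of the charm, just a sketch):

```python
# Map replSetGetStatus "myState" codes to their MongoDB state names.
# Same codes as the MONGO_* constants in tests/tests_mongodb.py.
MONGO_STATES = {
    0: "STARTUP", 1: "PRIMARY", 2: "SECONDARY", 3: "RECOVERING",
    4: "FATAL", 5: "STARTUP2", 6: "UNKNOWN", 7: "ARBITER",
    8: "DOWN", 9: "ROLLBACK", 10: "REMOVED",
}


def state_name(my_state):
    """Return the symbolic name for a myState code, or 'INVALID'."""
    return MONGO_STATES.get(my_state, "INVALID")


print(state_name(1))  # PRIMARY
print(state_name(2))  # SECONDARY
```

This is what `test_replset_numbers` relies on when it counts units whose status reports `myState == MONGO_PRIMARY`.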
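`test_replset_consistent_members` compares each secondary's view of the replica set against the primary's. The comparison logic can be exercised offline with fabricated `replSetGetStatus()["members"]` documents; the hostnames below are made up, and this sketch keeps `(key, value)` pairs in the frozenset (rather than bare values as in the charm test) for readability:

```python
def extract(member_dict):
    # Reduce a member document to the fields every unit should agree on.
    return frozenset(
        (k, v) for k, v in member_dict.items()
        if k in ("name", "health", "stateStr")
    )


def views_consistent(primary_members, secondary_views):
    # The primary's view is the reference; every secondary must match it.
    ref = set(map(extract, primary_members))
    return all(ref == set(map(extract, view)) for view in secondary_views)


# Fabricated data in the shape of replSetGetStatus()["members"];
# "uptime" is deliberately ignored by extract().
primary_view = [
    {"name": "10.0.0.1:27017", "health": 1.0, "stateStr": "PRIMARY", "uptime": 512},
    {"name": "10.0.0.2:27017", "health": 1.0, "stateStr": "SECONDARY", "uptime": 498},
]
stale_view = [dict(primary_view[0], health=0.0), primary_view[1]]

print(views_consistent(primary_view, [primary_view]))  # True
print(views_consistent(primary_view, [stale_view]))    # False
```

Fields like `uptime` differ between units by nature, which is why the comparison is restricted to `name`, `health` and `stateStr`.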
This merge proposal is being monitored by mergebot. Change the status to Approved to merge.