Merge ~canonical-is-bootstack/charm-grafana:aluria/add-tests-lint into charm-grafana:master
Status: Merged
Approved by: Alvaro Uria
Approved revision: 5c703269d4d15afcb379f01eaff21952486c30c8
Merged at revision: 93a72a956e90762f95c9f76be915126286e4afe7
Proposed branch: ~canonical-is-bootstack/charm-grafana:aluria/add-tests-lint
Merge into: charm-grafana:master
Diff against target: 929 lines (+719/-38), 12 files modified:
.gitignore (+20/-11), Makefile (+44/-0), lib/charms/layer/grafana.py (+2/-3), reactive/grafana.py (+26/-24), requirements.txt (+1/-0), tests/functional/bundle.yaml.j2 (+162/-0), tests/functional/conftest.py (+67/-0), tests/functional/juju_tools.py (+68/-0), tests/functional/overlay.yaml.j2 (+11/-0), tests/functional/requirements.txt (+7/-0), tests/functional/test_deploy.py (+240/-0), tox.ini (+71/-0)
Related bugs: (none)
Reviewer: Paul Goins — Approve
Review via email: mp+379659@code.launchpad.net
Commit message
Functional tests support on multiple versions
* OpenStack: Xenial-Queens, Bionic-Queens and Bionic-Stein
* Ubuntu series in Grafana: Xenial and Bionic
* Install methods: apt, snap and random wrong method
* Checks: Juju status messages, and imported dashboards exist
Description of the change
🤖 Canonical IS Merge Bot (canonical-is-mergebot) wrote:
Unable to determine commit message from repository - please click "Set commit message" and enter the commit message manually.
Alvaro Uria (aluria) wrote:
* pytest fixture scopes
- module scope used on "model" and "deploy_openstack"
- function scope used on "deploy_app"
With the above, I can create a model per Ubuntu/OpenStack version we
want to test, and deploy OpenStack (setting openstack-origin or
source by rendering bundle.yaml.j2).
Then, in the "deploy_app" function-scoped fixture, I can deploy each
permutation of series and install method supported by the charm,
relating it to prometheus via a rendered overlay.yaml.j2 config file.
For now, we check that the "juju status" message matches the expected
one for each install method, and we verify that the automatically
imported dashboards (the ones whose names start with "juju-") exist.
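The series/install-method permutation described above can be sketched in plain Python; the names below are illustrative, mirroring the APP_SERIES_INSTALL_METHOD_MAP list in tests/functional/test_deploy.py from this diff:

```python
# One function-scoped "deploy_app" run per (series, install_method)
# pair; the wrong method deliberately exercises the blocked state.
APP_SERIES = ["xenial", "bionic"]
INSTALL_METHODS = ["apt", "snap", "wrong-method"]

PERMUTATIONS = [
    (series, method)
    for series in APP_SERIES
    for method in INSTALL_METHODS
]

print(len(PERMUTATIONS))  # 2 series x 3 install methods = 6 deployments
```

Each pair then becomes one parametrized fixture instance, so a failing permutation is reported individually.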
* There are no unit tests because the charm is coded in a single
reactive script. There is a bug open to rewrite the charm following
template-
(which includes these same functional tests, but without support for
testing multiple OpenStack versions).
The rest of the changes appeared after I rebased from master and were not intended. I have also checked the previous MP to fix [1], and it doesn't work; there will be another MP after this one is merged.
Alvaro Uria (aluria) wrote:
BTW, my functional test runs:
* https:/
* https:/
Paul Goins (vultaire) wrote:
This looks good. And yes, I read everything which wasn't from template-
My only real concern is regarding the use of JUJU_REPOSITORY in the Makefile, and the lack of a default. I'd like to see either a default added or a mention that JUJU_REPOSITORY must be defined. (Of course, ideal would be to replace JUJU_REPOSITORY with CHARM_BUILD_DIR.)
That said, I don't think this needs to be fixed now. Other than that, it basically looks OK, so I'm marking this as approved.
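The missing-default concern can be addressed the way test_deploy.py in this diff already handles CHARM_BUILD_DIR: read the environment variable with an explicit fallback. A minimal sketch (the fallback path is illustrative):

```python
import os

# Fall back to a default build location when JUJU_REPOSITORY is unset;
# the path here is illustrative, mirroring test_deploy.py in this MP.
CHARM_BUILD_DIR = os.getenv('JUJU_REPOSITORY', '/tmp/charm-builds/grafana').rstrip('/')
print(CHARM_BUILD_DIR)
```

In the Makefile itself, the equivalent would be assigning a default only when the variable is not already set in the environment.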
Alvaro Uria (aluria) wrote:
Thank you for the review, Paul. I agree that in the long term we want to use the new charm-tools env vars (CHARM_BUILD_DIR is one of them). I used the old env var because template-
I've also removed the unused custom fixtures "app" and "units". Thanks again.
🤖 Canonical IS Merge Bot (canonical-is-mergebot) wrote:
Change successfully merged at revision 93a72a956e90762
Preview Diff
1 | diff --git a/.gitignore b/.gitignore |
2 | index 88f01bf..32e2995 100644 |
3 | --- a/.gitignore |
4 | +++ b/.gitignore |
5 | @@ -1,13 +1,22 @@ |
6 | -*.swp |
7 | -*~ |
8 | -.idea |
9 | -.tox |
10 | +# Byte-compiled / optimized / DLL files |
11 | +__pycache__/ |
12 | +*.py[cod] |
13 | +*$py.class |
14 | + |
15 | +# Log files |
16 | +*.log |
17 | + |
18 | +.tox/ |
19 | .coverage |
20 | -.vscode |
21 | -/builds/ |
22 | -/deb_dist |
23 | -/dist |
24 | -/repo-info |
25 | -__pycache__ |
26 | -/report/ |
27 | |
28 | +# vi |
29 | +.*.swp |
30 | + |
31 | +# pycharm |
32 | +.idea/ |
33 | + |
34 | +# version data |
35 | +repo-info |
36 | + |
37 | +# reports |
38 | +report/* |
39 | diff --git a/Makefile b/Makefile |
40 | new file mode 100644 |
41 | index 0000000..0066378 |
42 | --- /dev/null |
43 | +++ b/Makefile |
44 | @@ -0,0 +1,44 @@ |
45 | +help: |
46 | + @echo "This project supports the following targets" |
47 | + @echo "" |
48 | + @echo " make help - show this text" |
49 | + @echo " make lint - run flake8" |
50 | + @echo " make test - run the lint and functional tests" |
51 | + @echo " make functional - run the tests defined in the functional subdirectory" |
52 | + @echo " make release - build the charm" |
53 | + @echo " make clean - remove unneeded files" |
54 | + @echo "" |
55 | + |
56 | +submodules: |
57 | + @echo "Cloning submodules" |
58 | + @git submodule update --init --recursive |
59 | + |
60 | +lint: |
61 | + @echo "Running flake8" |
62 | + @tox -e lint |
63 | + |
64 | +test: lint functional |
65 | + |
66 | +functional: build |
67 | + @PYTEST_KEEP_MODEL=$(PYTEST_KEEP_MODEL) \ |
68 | + PYTEST_CLOUD_NAME=$(PYTEST_CLOUD_NAME) \ |
69 | + PYTEST_CLOUD_REGION=$(PYTEST_CLOUD_REGION) \ |
70 | + tox -e functional |
71 | + |
72 | +build: |
73 | + @echo "Building charm to base directory $(JUJU_REPOSITORY)" |
74 | + @-git describe --tags > ./repo-info |
75 | + @CHARM_LAYERS_DIR=./layers CHARM_INTERFACES_DIR=./interfaces TERM=linux \ |
76 | + JUJU_REPOSITORY=$(JUJU_REPOSITORY) charm build . --force |
77 | + |
78 | +release: clean build |
79 | + @echo "Charm is built at $(JUJU_REPOSITORY)/builds" |
80 | + |
81 | +clean: |
82 | + @echo "Cleaning files" |
83 | + @if [ -d .tox ] ; then rm -r .tox ; fi |
84 | + @if [ -d .pytest_cache ] ; then rm -r .pytest_cache ; fi |
85 | + @find . -iname __pycache__ -exec rm -r {} + |
86 | + |
87 | +# The targets below don't depend on a file |
88 | +.PHONY: lint test unittest functional build release clean help |
89 | diff --git a/lib/charms/layer/grafana.py b/lib/charms/layer/grafana.py |
90 | index b482203..67b53ea 100644 |
91 | --- a/lib/charms/layer/grafana.py |
92 | +++ b/lib/charms/layer/grafana.py |
93 | @@ -2,13 +2,12 @@ |
94 | |
95 | import json |
96 | import requests |
97 | +from charmhelpers.core import unitdata |
98 | from charmhelpers.core.hookenv import ( |
99 | config, |
100 | log, |
101 | ) |
102 | |
103 | -from charmhelpers.core import unitdata |
104 | - |
105 | |
106 | def get_admin_password(): |
107 | kv = unitdata.kv() |
108 | @@ -30,7 +29,7 @@ def import_dashboard(dashboard, name=None): |
109 | name = dashboard['dashboard'].get('title') or 'Untitled' |
110 | headers = {'Content-Type': 'application/json'} |
111 | import_url = 'http://localhost:{}/api/dashboards/db'.format( |
112 | - config('port')) |
113 | + config('port')) |
114 | passwd = get_admin_password() |
115 | if passwd is None: |
116 | return (False, 'Unable to retrieve grafana password.') |
117 | diff --git a/reactive/grafana.py b/reactive/grafana.py |
118 | index 713aa8f..a34f41b 100644 |
119 | --- a/reactive/grafana.py |
120 | +++ b/reactive/grafana.py |
121 | @@ -4,13 +4,11 @@ import glob |
122 | import json |
123 | import os |
124 | import re |
125 | -import requests |
126 | import shutil |
127 | -import six |
128 | import subprocess |
129 | import time |
130 | -from jsondiff import diff |
131 | |
132 | +from charmhelpers import fetch |
133 | from charmhelpers.contrib.charmsupport import nrpe |
134 | from charmhelpers.core import ( |
135 | hookenv, |
136 | @@ -18,11 +16,9 @@ from charmhelpers.core import ( |
137 | unitdata, |
138 | ) |
139 | from charmhelpers.core.templating import render |
140 | -from charmhelpers import fetch |
141 | -from charms.reactive.helpers import ( |
142 | - any_file_changed, |
143 | - is_state, |
144 | -) |
145 | + |
146 | +from charms.layer import snap |
147 | +from charms.layer.grafana import import_dashboard |
148 | from charms.reactive import ( |
149 | hook, |
150 | remove_state, |
151 | @@ -30,11 +26,20 @@ from charms.reactive import ( |
152 | when, |
153 | when_not, |
154 | ) |
155 | +from charms.reactive.helpers import ( |
156 | + any_file_changed, |
157 | + is_state, |
158 | +) |
159 | |
160 | -from charms.layer import snap |
161 | -from charms.layer.grafana import import_dashboard |
162 | from jinja2 import Environment, FileSystemLoader, exceptions |
163 | |
164 | +from jsondiff import diff |
165 | + |
166 | +import requests |
167 | + |
168 | +import six |
169 | + |
170 | + |
171 | SVCNAME = {'snap': 'snap.grafana.grafana', |
172 | 'apt': 'grafana-server'} |
173 | SNAP_NAME = 'grafana' |
174 | @@ -120,8 +125,7 @@ def install_packages(): |
175 | set_state('grafana.installed') |
176 | hookenv.status_set('active', 'Completed installing grafana') |
177 | elif source == 'snap' and \ |
178 | - (host.lsb_release()['DISTRIB_CODENAME'] >= 'xenial' or |
179 | - host.lsb_release()['DISTRIB_CODENAME'] < 'p'): |
180 | + (host.lsb_release()['DISTRIB_CODENAME'] >= 'xenial' or host.lsb_release()['DISTRIB_CODENAME'] < 'p'): |
181 | # NOTE(aluria): precise is the last supported Ubuntu release, so |
182 | # anything below 'p' is actually newer than xenial (systemd support) |
183 | snap.install(SNAP_NAME, channel=channel, force_dangerous=False) |
184 | @@ -433,8 +437,9 @@ def configure_website(website): |
185 | |
186 | |
187 | def validate_datasources(): |
188 | - """TODO: make sure datasources option is merged with |
189 | - relation data |
190 | + """Verify that datasources configuration is valid, if existing. |
191 | + |
192 | + TODO: make sure datasources option is merged with relation data |
193 | TODO: make sure datasources are validated |
194 | """ |
195 | config = hookenv.config() |
196 | @@ -448,7 +453,8 @@ def validate_datasources(): |
197 | |
198 | |
199 | def check_datasource(ds): |
200 | - """ |
201 | + """Check for and add datasources not currently in grafana DB. |
202 | + |
203 | CREATE TABLE `data_source` ( |
204 | `id` INTEGER PRIMARY KEY AUTOINCREMENT NOT NULL |
205 | , `org_id` INTEGER NOT NULL |
206 | @@ -470,7 +476,6 @@ def check_datasource(ds): |
207 | , `with_credentials` INTEGER NOT NULL DEFAULT 0); |
208 | INSERT INTO "data_source" VALUES(1,1,0,'prometheus','BootStack Prometheus','proxy','http://localhost:9090','','','',0,'','',1,'{}','2016-01-22 12:11:06','2016-01-22 12:11:11',0); |
209 | """ # noqa E501 |
210 | - |
211 | # ds will be similar to: |
212 | # {'service_name': 'prometheus', |
213 | # 'url': 'http://10.0.3.216:9090', |
214 | @@ -505,9 +510,8 @@ def check_datasource(ds): |
215 | |
216 | # This isn't exposed in charmhelpers: https://github.com/juju/charm-helpers/issues/367 |
217 | def render_custom(source, context, **parameters): |
218 | - """ |
219 | - Renders a template from the template folder with custom environment |
220 | - parameters. |
221 | + """Render a template from the template folder with custom environment parameters. |
222 | + |
223 | source: template file name to render from |
224 | context: template context variables |
225 | parameters: initialization parameters for the jinja Environment |
226 | @@ -750,7 +754,8 @@ def generate_query(ds, is_default, id=None): |
227 | @when('grafana.started') |
228 | @when_not('grafana.admin_password.set') |
229 | def check_adminuser(): |
230 | - """ |
231 | + """Create Adminuser if not existing. |
232 | + |
233 | CREATE TABLE `user` ( |
234 | `id` INTEGER PRIMARY KEY AUTOINCREMENT NOT NULL |
235 | , `version` INTEGER NOT NULL |
236 | @@ -770,7 +775,6 @@ def check_adminuser(): |
237 | ); |
238 | INSERT INTO "user" VALUES(1,0,'admin','root+bootstack-ps45@canonical.com','BootStack Team','309bc4e78bc60d02dc0371d9e9fa6bf9a809d5dc25c745b9e3f85c3ed49c6feccd4ffc96d1db922f4297663a209e93f7f2b6','LZeJ3nSdrC','hseJcLcnPN','',1,1,0,'light','2016-01-22 12:00:08','2016-01-22 12:02:13'); |
239 | """ # noqa E501 |
240 | - |
241 | # XXX: If you add any dependencies on config items here, |
242 | # be sure to update config_changed() accordingly! |
243 | |
244 | @@ -806,9 +810,7 @@ def check_adminuser(): |
245 | query = cur.execute('SELECT id, login, salt FROM user') |
246 | for row in query.fetchall(): |
247 | if row[1] == 'admin': |
248 | - nagios_context = config.get('nagios_context', False) |
249 | - if not nagios_context: |
250 | - nagios_context = 'UNKNOWN' |
251 | + nagios_context = config.get('nagios_context', 'UNKNOWN') |
252 | email = 'root+%s@canonical.com' % nagios_context |
253 | hpasswd = hpwgen(passwd, row[2]) |
254 | if hpasswd: |
255 | diff --git a/requirements.txt b/requirements.txt |
256 | new file mode 100644 |
257 | index 0000000..8462291 |
258 | --- /dev/null |
259 | +++ b/requirements.txt |
260 | @@ -0,0 +1 @@ |
261 | +# Include python requirements here |
262 | diff --git a/tests/functional/bundle.yaml.j2 b/tests/functional/bundle.yaml.j2 |
263 | new file mode 100644 |
264 | index 0000000..20f0672 |
265 | --- /dev/null |
266 | +++ b/tests/functional/bundle.yaml.j2 |
267 | @@ -0,0 +1,162 @@ |
268 | +series: {{ series }} |
269 | +applications: |
270 | + ceph-mon: |
271 | + charm: cs:ceph-mon |
272 | + num_units: 3 |
273 | + options: |
274 | + expected-osd-count: 3 |
275 | + source: {{ openstack_origin }} |
276 | + ceph-osd: |
277 | + charm: cs:ceph-osd |
278 | + num_units: 3 |
279 | + options: |
280 | + osd-devices: /srv/osd |
281 | + use-direct-io: False |
282 | + bluestore: False |
283 | + source: {{ openstack_origin }} |
284 | + cinder: |
285 | + charm: cs:cinder |
286 | + num_units: 1 |
287 | + options: |
288 | + worker-multiplier: 0.1 |
289 | + block-device: None |
290 | + glance-api-version: 2 |
291 | + openstack-origin: {{ openstack_origin }} |
292 | + cinder-ceph: |
293 | + charm: cs:cinder-ceph |
294 | + num_units: 0 |
295 | + glance: |
296 | + charm: cs:glance |
297 | + num_units: 1 |
298 | + options: |
299 | + worker-multiplier: 0.1 |
300 | + openstack-origin: {{ openstack_origin }} |
301 | + keystone: |
302 | + charm: cs:keystone |
303 | + num_units: 1 |
304 | + options: |
305 | + worker-multiplier: 0.1 |
306 | + openstack-origin: {{ openstack_origin }} |
307 | + mysql: |
308 | + charm: cs:percona-cluster |
309 | + num_units: 1 |
310 | + options: |
311 | + max-connections: 1000 |
312 | + innodb-buffer-pool-size: 256M |
313 | + tuning-level: fast |
314 | + source: {{ openstack_origin }} |
315 | + neutron-api: |
316 | + charm: cs:neutron-api |
317 | + num_units: 1 |
318 | + options: |
319 | + worker-multiplier: 0.1 |
320 | + neutron-security-groups: true |
321 | + overlay-network-type: "gre vxlan" |
322 | + flat-network-providers: physnet1 |
323 | + openstack-origin: {{ openstack_origin }} |
324 | + neutron-openvswitch: |
325 | + charm: cs:neutron-openvswitch |
326 | + num_units: 0 |
327 | + nova-cloud-controller: |
328 | + charm: cs:nova-cloud-controller |
329 | + num_units: 1 |
330 | + options: |
331 | + worker-multiplier: 0.1 |
332 | + network-manager: Neutron |
333 | + ram-allocation-ratio: '64' |
334 | + cpu-allocation-ratio: '64' |
335 | + openstack-origin: {{ openstack_origin }} |
336 | + nova-compute: |
337 | + charm: cs:nova-compute |
338 | + num_units: 1 |
339 | + options: |
340 | + enable-live-migration: False |
341 | + enable-resize: False |
342 | + migration-auth-type: ssh |
343 | + force-raw-images: False |
344 | + openstack-origin: {{ openstack_origin }} |
345 | + rabbitmq-server: |
346 | + charm: cs:rabbitmq-server |
347 | + num_units: 1 |
348 | + |
349 | + prometheus: |
350 | + charm: cs:prometheus2 |
351 | + num_units: 1 |
352 | + options: |
353 | + label-juju-units: true |
354 | + telegraf: |
355 | + charm: cs:telegraf |
356 | + options: |
357 | + hostname: '{unit}' |
358 | + prometheus_output_port: default |
359 | + prometheus-ceph-exporter: |
360 | + charm: cs:prometheus-ceph-exporter |
361 | + num_units: 1 |
362 | + |
363 | +relations: |
364 | +- - nova-compute:ceph-access |
365 | + - cinder-ceph:ceph-access |
366 | +- - nova-compute:amqp |
367 | + - rabbitmq-server:amqp |
368 | +- - keystone:shared-db |
369 | + - mysql:shared-db |
370 | +- - nova-cloud-controller:identity-service |
371 | + - keystone:identity-service |
372 | +- - glance:identity-service |
373 | + - keystone:identity-service |
374 | +- - neutron-api:identity-service |
375 | + - keystone:identity-service |
376 | +- - neutron-openvswitch:neutron-plugin-api |
377 | + - neutron-api:neutron-plugin-api |
378 | +- - neutron-api:shared-db |
379 | + - mysql:shared-db |
380 | +- - neutron-api:amqp |
381 | + - rabbitmq-server:amqp |
382 | +- - glance:shared-db |
383 | + - mysql:shared-db |
384 | +- - glance:amqp |
385 | + - rabbitmq-server:amqp |
386 | +- - nova-cloud-controller:image-service |
387 | + - glance:image-service |
388 | +- - nova-compute:image-service |
389 | + - glance:image-service |
390 | +- - nova-cloud-controller:cloud-compute |
391 | + - nova-compute:cloud-compute |
392 | +- - nova-cloud-controller:amqp |
393 | + - rabbitmq-server:amqp |
394 | +- - nova-compute:neutron-plugin |
395 | + - neutron-openvswitch:neutron-plugin |
396 | +- - neutron-openvswitch:amqp |
397 | + - rabbitmq-server:amqp |
398 | +- - nova-cloud-controller:shared-db |
399 | + - mysql:shared-db |
400 | +- - nova-cloud-controller:neutron-api |
401 | + - neutron-api:neutron-api |
402 | +- - cinder:image-service |
403 | + - glance:image-service |
404 | +- - cinder:amqp |
405 | + - rabbitmq-server:amqp |
406 | +- - cinder:identity-service |
407 | + - keystone:identity-service |
408 | +- - cinder:cinder-volume-service |
409 | + - nova-cloud-controller:cinder-volume-service |
410 | +- - cinder-ceph:storage-backend |
411 | + - cinder:storage-backend |
412 | +- - ceph-mon:client |
413 | + - nova-compute:ceph |
414 | +- - cinder:shared-db |
415 | + - mysql:shared-db |
416 | +- - ceph-mon:client |
417 | + - cinder-ceph:ceph |
418 | +- - ceph-mon:client |
419 | + - glance:ceph |
420 | +- - ceph-osd:mon |
421 | + - ceph-mon:osd |
422 | +- - prometheus-ceph-exporter |
423 | + - ceph-mon:client |
424 | +- - prometheus:target |
425 | + - telegraf:prometheus-client |
426 | +- - prometheus:target |
427 | + - prometheus-ceph-exporter |
428 | +- - telegraf |
429 | + - ceph-mon |
430 | diff --git a/tests/functional/conftest.py b/tests/functional/conftest.py |
431 | new file mode 100644 |
432 | index 0000000..1d2d6e9 |
433 | --- /dev/null |
434 | +++ b/tests/functional/conftest.py |
435 | @@ -0,0 +1,67 @@ |
436 | +#!/usr/bin/python3 |
437 | +""" |
438 | +Reusable pytest fixtures for functional testing |
439 | + |
440 | +Environment variables |
441 | +--------------------- |
442 | + |
443 | +PYTEST_CLOUD_REGION, PYTEST_CLOUD_NAME: cloud name and region to use for juju model creation |
444 | + |
445 | +PYTEST_KEEP_MODEL: if set, the testing model won't be torn down at the end of the testing session |
446 | + |
447 | +""" |
448 | + |
449 | +import asyncio |
450 | +import os |
451 | +import uuid |
452 | +import pytest |
453 | +import subprocess |
454 | + |
455 | +from juju.controller import Controller |
456 | +from juju_tools import JujuTools |
457 | + |
458 | + |
459 | +@pytest.fixture(scope='module') |
460 | +def event_loop(): |
461 | + """Override the default pytest event loop to allow for fixtures using a broader scope""" |
462 | + loop = asyncio.get_event_loop_policy().new_event_loop() |
463 | + asyncio.set_event_loop(loop) |
464 | + loop.set_debug(True) |
465 | + yield loop |
466 | + loop.close() |
467 | + asyncio.set_event_loop(None) |
468 | + |
469 | + |
470 | +@pytest.fixture(scope='module') |
471 | +async def controller(): |
472 | + """Connect to the current controller""" |
473 | + _controller = Controller() |
474 | + await _controller.connect_current() |
475 | + yield _controller |
476 | + await _controller.disconnect() |
477 | + |
478 | + |
479 | +@pytest.fixture(scope='module') |
480 | +async def model(controller): |
481 | + """This model lives only for the duration of the test""" |
482 | + model_name = "functest-{}".format(str(uuid.uuid4())[-12:]) |
483 | + _model = await controller.add_model(model_name, |
484 | + cloud_name=os.getenv('PYTEST_CLOUD_NAME'), |
485 | + region=os.getenv('PYTEST_CLOUD_REGION'), |
486 | + ) |
487 | + # https://github.com/juju/python-libjuju/issues/267 |
488 | + subprocess.check_call(['juju', 'models']) |
489 | + while model_name not in await controller.list_models(): |
490 | + await asyncio.sleep(1) |
491 | + yield _model |
492 | + await _model.disconnect() |
493 | + if not os.getenv('PYTEST_KEEP_MODEL'): |
494 | + await controller.destroy_model(model_name) |
495 | + while model_name in await controller.list_models(): |
496 | + await asyncio.sleep(1) |
497 | + |
498 | + |
499 | +@pytest.fixture(scope='module') |
500 | +async def jujutools(controller, model): |
501 | + tools = JujuTools(controller, model) |
502 | + return tools |
503 | diff --git a/tests/functional/juju_tools.py b/tests/functional/juju_tools.py |
504 | new file mode 100644 |
505 | index 0000000..4b4884f |
506 | --- /dev/null |
507 | +++ b/tests/functional/juju_tools.py |
508 | @@ -0,0 +1,68 @@ |
509 | +import pickle |
510 | +import juju |
511 | +import base64 |
512 | + |
513 | +# from juju.errors import JujuError |
514 | + |
515 | + |
516 | +class JujuTools: |
517 | + def __init__(self, controller, model): |
518 | + self.controller = controller |
519 | + self.model = model |
520 | + |
521 | + async def run_command(self, cmd, target): |
522 | + """ |
523 | + Runs a command on a unit. |
524 | + |
525 | + :param cmd: Command to be run |
526 | + :param unit: Unit object or unit name string |
527 | + """ |
528 | + unit = ( |
529 | + target |
530 | + if isinstance(target, juju.unit.Unit) |
531 | + else await self.get_unit(target) |
532 | + ) |
533 | + action = await unit.run(cmd) |
534 | + return action.results |
535 | + |
536 | + async def remote_object(self, imports, remote_cmd, target): |
537 | + """ |
538 | + Runs command on target machine and returns a python object of the result |
539 | + |
540 | + :param imports: Imports needed for the command to run |
541 | + :param remote_cmd: The python command to execute |
542 | + :param target: Unit object or unit name string |
543 | + """ |
544 | + python3 = "python3 -c '{}'" |
545 | + python_cmd = ('import pickle;' |
546 | + 'import base64;' |
547 | + '{}' |
548 | + 'print(base64.b64encode(pickle.dumps({})), end="")' |
549 | + .format(imports, remote_cmd)) |
550 | + cmd = python3.format(python_cmd) |
551 | + results = await self.run_command(cmd, target) |
552 | + return pickle.loads(base64.b64decode(bytes(results['Stdout'][2:-1], 'utf8'))) |
553 | + |
554 | + async def file_stat(self, path, target): |
555 | + """ |
556 | + Runs stat on a file |
557 | + |
558 | + :param path: File path |
559 | + :param target: Unit object or unit name string |
560 | + """ |
561 | + imports = 'import os;' |
562 | + python_cmd = ('os.stat("{}")' |
563 | + .format(path)) |
564 | + print("Calling remote cmd: " + python_cmd) |
565 | + return await self.remote_object(imports, python_cmd, target) |
566 | + |
567 | + async def file_contents(self, path, target): |
568 | + """ |
569 | + Returns the contents of a file |
570 | + |
571 | + :param path: File path |
572 | + :param target: Unit object or unit name string |
573 | + """ |
574 | + cmd = 'cat {}'.format(path) |
575 | + result = await self.run_command(cmd, target) |
576 | + return result['Stdout'] |
577 | diff --git a/tests/functional/overlay.yaml.j2 b/tests/functional/overlay.yaml.j2 |
578 | new file mode 100644 |
579 | index 0000000..cab782a |
580 | --- /dev/null |
581 | +++ b/tests/functional/overlay.yaml.j2 |
582 | @@ -0,0 +1,11 @@ |
583 | +applications: |
584 | + {{ appname }}: |
585 | + series: {{ series }} |
586 | + charm: {{ charm_path }} |
587 | + num_units: 1 |
588 | + options: |
589 | + install_method: {{ install_method }} |
590 | + |
591 | +relations: |
592 | +- - prometheus:grafana-source |
593 | + - {{ appname }}:grafana-source |
594 | diff --git a/tests/functional/requirements.txt b/tests/functional/requirements.txt |
595 | new file mode 100644 |
596 | index 0000000..b2e6ce6 |
597 | --- /dev/null |
598 | +++ b/tests/functional/requirements.txt |
599 | @@ -0,0 +1,7 @@ |
600 | +flake8 |
601 | +Jinja2 |
602 | +juju |
603 | +mock |
604 | +pytest |
605 | +pytest-asyncio |
606 | +requests |
607 | diff --git a/tests/functional/test_deploy.py b/tests/functional/test_deploy.py |
608 | new file mode 100644 |
609 | index 0000000..45dbc16 |
610 | --- /dev/null |
611 | +++ b/tests/functional/test_deploy.py |
612 | @@ -0,0 +1,240 @@ |
613 | +import asyncio |
614 | +import json |
615 | +import os |
616 | +import subprocess |
617 | + |
618 | +import pytest |
619 | +import requests |
620 | +import yaml |
621 | +import jinja2 |
622 | + |
623 | +# Treat all tests as coroutines |
624 | +pytestmark = pytest.mark.asyncio |
625 | + |
626 | +OPENSTACK_TIMEOUT = 3600 |
627 | +APP_TIMEOUT = 900 |
628 | + |
629 | +CHARM_BUILD_DIR = os.getenv('JUJU_REPOSITORY', '/tmp/charm-builds/grafana').rstrip('/') |
630 | +BUNDLE_PATH = os.path.join(CHARM_BUILD_DIR, "..", "bundle.yaml") |
631 | +OVERLAY_PATH = os.path.join(CHARM_BUILD_DIR, "..", "overlay.yaml") |
632 | + |
633 | +APP_SERIES = [ |
634 | + "xenial", |
635 | + "bionic", |
636 | + # "trusty", # not supported in LXD containers |
637 | +] |
638 | +FAKE_INSTALL_METHOD = 'wrong-method' |
639 | +INSTALL_METHOD = { |
640 | + 'apt': {"message": "Started grafana-server"}, |
641 | + 'snap': {"message": "Started snap.grafana.grafana"}, |
642 | + 'wrong-method': None, |
643 | +} |
644 | +OPENSTACK_VERSIONS = { |
645 | + "xenial-queens": "cloud:xenial-queens", |
646 | + "bionic-queens": "distro", |
647 | + "bionic-stein": "cloud:bionic-stein", |
648 | +} |
649 | + |
650 | +APP_SERIES_INSTALL_METHOD_MAP = [ |
651 | + [series, install_method] |
652 | + for series in APP_SERIES |
653 | + for install_method in INSTALL_METHOD |
654 | + if [series, install_method] != ["trusty", "snap"] # snaps not supported in trusty |
655 | +] |
656 | + |
657 | + |
658 | +################ |
659 | +# Helper funcs # |
660 | +################ |
661 | + |
662 | +def render(templates_dir, template_name, context): |
663 | + templates = jinja2.Environment( |
664 | + loader=jinja2.FileSystemLoader(templates_dir)) |
665 | + template = templates.get_template(template_name) |
666 | + return template.render(context) |
667 | + |
668 | + |
669 | +################### |
670 | +# Custom fixtures # |
671 | +################### |
672 | + |
673 | +@pytest.fixture(scope='module', params=OPENSTACK_VERSIONS) |
674 | +async def model(request, controller): |
675 | + ubuntu_os_version = request.param |
676 | + model_name = 'functest-grafana-{}'.format(ubuntu_os_version) |
677 | + if model_name not in await controller.list_models(): |
678 | + _model = await controller.add_model(model_name, |
679 | + cloud_name=os.getenv('PYTEST_CLOUD_NAME'), |
680 | + region=os.getenv('PYTEST_CLOUD_REGION'), |
681 | + ) |
682 | + else: |
683 | + _model = await controller.get_model(model_name) |
684 | + |
685 | + # https://github.com/juju/python-libjuju/issues/267 |
686 | + subprocess.check_call(['juju', 'models']) |
687 | + while model_name not in await controller.list_models(): |
688 | + await asyncio.sleep(1) |
689 | + yield _model |
690 | + await _model.disconnect() |
691 | + if not os.getenv('PYTEST_KEEP_MODEL'): |
692 | + await controller.destroy_model(model_name) |
693 | + while model_name in await controller.list_models(): |
694 | + await asyncio.sleep(1) |
695 | + |
696 | + |
697 | +@pytest.fixture(scope='module') |
698 | +async def deploy_openstack(model): |
699 | + dir_path = os.path.dirname(os.path.realpath(__file__)) |
700 | + |
701 | + # functest-grafana-SERIES-OSVERSION |
702 | + _, _, series, os_version = model.info.name.split("-") |
703 | + |
704 | + context = { |
705 | + 'series': series, |
706 | + 'openstack_origin': OPENSTACK_VERSIONS["{}-{}".format(series, os_version)], |
707 | + } |
708 | + |
709 | + # render overlay config file |
710 | + rendered = render(dir_path, 'bundle.yaml.j2', context) |
711 | + with open(BUNDLE_PATH, 'w') as fd: |
712 | + fd.write(rendered) |
713 | + |
714 | + if os.path.exists(BUNDLE_PATH): |
715 | + # Note(aluria): using await model.deploy(BUNDLE_PATH) will raise an error |
716 | + # because ceph-mon application already exists and cannot be added |
717 | + # OTOH, 'juju deploy bundle.yaml' will try to finalize a previous run |
718 | + subprocess.check_call([ |
719 | + 'juju', |
720 | + 'deploy', |
721 | + '-m', |
722 | + model.info.name, |
723 | + BUNDLE_PATH, |
724 | + ]) |
725 | + assert True |
726 | + else: |
727 | + assert False |
728 | + |
729 | + |
730 | +@pytest.fixture( |
731 | + scope='function', |
732 | + params=APP_SERIES_INSTALL_METHOD_MAP, |
733 | + ids=["{}-{}".format(series, install_method) |
734 | + for series, install_method in APP_SERIES_INSTALL_METHOD_MAP], |
735 | +) |
736 | +async def deploy_app(request, model): |
737 | + series, install_method = request.param |
738 | + app_name = 'grafana-{series}-{imethod}'.format( |
739 | + series=series, |
740 | + imethod=install_method, |
741 | + ) |
742 | + # functest-grafana-SERIES-OSVERSION |
743 | + ubuntu_os_version = model.info.name[len("functest-grafana-"):] |
744 | + |
745 | + dir_path = os.path.dirname(os.path.realpath(__file__)) |
746 | + context = { |
747 | + 'appname': app_name, |
748 | + 'series': series, |
749 | + 'install_method': install_method, |
750 | + 'charm_path': CHARM_BUILD_DIR, |
751 | + 'openstack_origin': OPENSTACK_VERSIONS[ubuntu_os_version], |
752 | + } |
753 | + # render bundle config file |
754 | + rendered = render(dir_path, 'bundle.yaml.j2', context) |
755 | + with open(BUNDLE_PATH, 'w') as fd: |
756 | + fd.write(rendered) |
757 | + |
758 | + # render overlay config file |
759 | + rendered = render(dir_path, 'overlay.yaml.j2', context) |
760 | + with open(OVERLAY_PATH, 'w') as fd: |
761 | + fd.write(rendered) |
762 | + |
763 | + subprocess.check_call([ |
764 | + 'juju', |
765 | + 'deploy', |
766 | + '-m', |
767 | + model.info.name, |
768 | + BUNDLE_PATH, |
769 | + '--overlay', |
770 | + OVERLAY_PATH, |
771 | + ]) |
772 | + while True: |
773 | + try: |
774 | + grafana_app = model.applications[app_name] |
775 | + break |
776 | + except KeyError: |
777 | + await asyncio.sleep(5) |
778 | + yield grafana_app |
779 | + |
780 | + |
781 | +######### |
782 | +# Tests # |
783 | +######### |
784 | + |
785 | +async def test_openstack_deploy(deploy_openstack, model): |
786 | + with open(BUNDLE_PATH, 'r') as fd: |
787 | + num_apps = len(yaml.safe_load(fd)["applications"].keys()) |
788 | + |
789 | + await model.block_until(lambda: (len(model.applications) >= num_apps |
790 | + and all(app.status == 'active' |
791 | + for name, app in model.applications.items() |
792 | + if name.find(FAKE_INSTALL_METHOD) == -1)), |
793 | + timeout=OPENSTACK_TIMEOUT) |
794 | + assert True |
795 | + |
796 | + |
797 | +async def test_grafana_status(deploy_app, model): |
798 | + config = await deploy_app.get_config() |
799 | + install_method = config['install_method']['value'] |
800 | + if install_method == FAKE_INSTALL_METHOD: |
801 | + status = 'blocked' |
802 | + message = 'Unsupported install_method' |
803 | + else: |
804 | + status = 'active' |
805 | + message = INSTALL_METHOD[install_method].get("message") |
806 | + |
807 | + await model.block_until(lambda: (deploy_app.status == status |
808 | + and all(unit.workload_status_message == message |
809 | + for unit in deploy_app.units)), |
810 | + timeout=APP_TIMEOUT) |
811 | + assert True |
812 | + |
813 | + |
814 | +async def test_grafana_imported_dashboards(deploy_app, model): |
815 | + config = await deploy_app.get_config() |
816 | + install_method = config['install_method']['value'] |
817 | + if install_method == FAKE_INSTALL_METHOD: |
818 | + status = 'blocked' |
819 | + message = 'Unsupported install_method' |
820 | + else: |
821 | + status = 'active' |
822 | + message = INSTALL_METHOD[install_method].get("message") |
823 | + |
824 | + port = config['port']['value'] |
825 | + units = deploy_app.units |
826 | + for grafana_unit in units: |
827 | + await model.block_until(lambda: (grafana_unit.workload_status == status |
828 | + and grafana_unit.agent_status == 'idle' |
829 | + and grafana_unit.workload_status_message == message), |
830 | + timeout=APP_TIMEOUT) |
831 | + if install_method == FAKE_INSTALL_METHOD: |
832 | + # no further checks |
833 | + continue |
834 | + |
835 | + action = await grafana_unit.run_action('get-admin-password') |
836 | + action = await action.wait() |
837 | + passwd = action.results['password'] |
838 | + host = grafana_unit.public_address |
839 | + req = requests.get('http://{host}:{port}' |
840 | + '/api/search?dashboardIds'.format(host=host, port=port), |
841 | + auth=('admin', passwd)) |
842 | + if req.status_code == 200: |
843 | + dashboards = json.loads(req.text) |
844 | + if len(dashboards) >= 3: |
845 | + assert all(dash.get('type', '') == 'dash-db' |
846 | + and dash.get('uri', '').startswith('db/juju-') |
847 | + for dash in dashboards) |
848 | + continue |
849 | + assert len(dashboards) >= 3 |
850 | + assert req.status_code == 200 |
851 | + |
852 | + assert len(units) > 0 |
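The `deploy_app` fixture above simply polls `model.applications` until the freshly deployed application shows up. A minimal, self-contained sketch of that polling pattern (the `wait_for_application` helper and `FakeModel` stub are illustrative stand-ins, not part of the charm or of libjuju):

```python
import asyncio

async def wait_for_application(model, app_name, interval=0.05):
    # Same loop as the fixture: retry until the application key exists.
    while True:
        try:
            return model.applications[app_name]
        except KeyError:
            await asyncio.sleep(interval)

class FakeModel:
    """Stub with the one attribute the helper needs."""
    def __init__(self):
        self.applications = {}

async def demo():
    model = FakeModel()

    async def add_app():
        # Simulate `juju deploy` registering the application a bit later.
        await asyncio.sleep(0.1)
        model.applications["grafana"] = "grafana/0"

    task = asyncio.create_task(add_app())
    app = await wait_for_application(model, "grafana")
    await task
    return app

print(asyncio.run(demo()))  # grafana/0
```

In the real fixture the loop runs against a libjuju `Model`, and the 5-second sleep bounds how often Juju's model state is re-read.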
853 | diff --git a/tox.ini b/tox.ini |
854 | new file mode 100644 |
855 | index 0000000..288fd77 |
856 | --- /dev/null |
857 | +++ b/tox.ini |
858 | @@ -0,0 +1,71 @@ |
859 | +[tox] |
860 | +skipsdist=True |
861 | +envlist = unit, functional |
862 | +skip_missing_interpreters = True |
863 | + |
864 | +[testenv] |
865 | +basepython = python3 |
866 | +setenv = |
867 | + PYTHONPATH = . |
868 | + |
869 | +[testenv:unit] |
870 | +commands = pytest -v --ignore {toxinidir}/tests/functional \ |
871 | + --cov=lib \ |
872 | + --cov=reactive \ |
873 | + --cov=actions \ |
874 | + --cov-report=term \ |
875 | + --cov-report=annotate:report/annotated \ |
876 | + --cov-report=html:report/html |
877 | +deps = -r{toxinidir}/tests/unit/requirements.txt |
878 | + -r{toxinidir}/requirements.txt |
879 | +setenv = PYTHONPATH={toxinidir}/lib |
880 | + |
881 | +[testenv:functional] |
882 | +passenv = |
883 | + HOME |
884 | + JUJU_REPOSITORY |
885 | + PATH |
886 | + PYTEST_KEEP_MODEL |
887 | + PYTEST_CLOUD_NAME |
888 | + PYTEST_CLOUD_REGION |
889 | +commands = pytest -v --ignore {toxinidir}/tests/unit |
890 | +deps = -r{toxinidir}/tests/functional/requirements.txt |
891 | + -r{toxinidir}/requirements.txt |
892 | + |
893 | +[testenv:lint] |
894 | +commands = flake8 |
895 | +deps = |
896 | + flake8 |
897 | + flake8-docstrings |
898 | + flake8-import-order |
899 | + pep8-naming |
900 | + flake8-colors |
901 | + |
902 | +[flake8] |
903 | +exclude = |
904 | + .git, |
905 | + __pycache__, |
906 | + .tox, |
907 | +# H405: Multi line docstrings should start with a one line summary followed by |
908 | +# an empty line. |
909 | +# D100: Missing docstring in public module |
910 | +# D101: Missing docstring in public class |
911 | +# D102: Missing docstring in public method |
912 | +# D103: Missing docstring in public function |
913 | +# D104: Missing docstring in public package |
914 | +# D105: Missing docstring in magic method |
915 | +# D107: Missing docstring in __init__ |
916 | +# D200: One-line docstring should fit on one line with quotes |
917 | +# D202: No blank lines allowed after function docstring |
920 | +# D203: 1 blank line required before class docstring |
919 | +# D204: 1 blank line required after class docstring |
920 | +# D205: 1 blank line required between summary line and description |
921 | +# D208: Docstring is over-indented |
922 | +# D400: First line should end with a period |
923 | +# D401: First line should be in imperative mood |
924 | +# I201: Missing newline between import groups |
925 | +# I100: Import statements are in the wrong order |
926 | + |
927 | +ignore = H405,D100,D101,D102,D103,D104,D105,D107,D200,D202,D203,D204,D205,D208,D400,D401,I100,I201,W503 |
928 | +max-line-length = 120 |
929 | +max-complexity = 10 |
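The dashboard check in `test_grafana_imported_dashboards` boils down to validating the JSON list returned by Grafana's `/api/search` endpoint. A self-contained sketch of that validation, with a made-up sample payload (the helper name and sample entries are illustrative, not real Grafana output):

```python
import json

def juju_dashboards_ok(payload, minimum=3):
    # Mirror the test's assertions: at least `minimum` dashboards, and every
    # entry must be a Juju-imported database dashboard.
    dashboards = json.loads(payload)
    if len(dashboards) < minimum:
        return False
    return all(dash.get('type', '') == 'dash-db'
               and dash.get('uri', '').startswith('db/juju-')
               for dash in dashboards)

sample = json.dumps([
    {'type': 'dash-db', 'uri': 'db/juju-telegraf'},
    {'type': 'dash-db', 'uri': 'db/juju-mysql'},
    {'type': 'dash-db', 'uri': 'db/juju-ceph'},
])
print(juju_dashboards_ok(sample))  # True
```

The functional test performs the same checks inline after fetching the admin password via the `get-admin-password` action and querying each unit's public address.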
This merge proposal is being monitored by mergebot. Change the status to Approved to merge.