Merge ~aluria/charm-prometheus-blackbox-exporter:rewrite into ~aluria/charm-prometheus-blackbox-exporter:refactor-peer-discovery
Proposed by: | Alvaro Uria |
Status: | Rejected |
---|---|
Rejected by: | Alvaro Uria |
Proposed branch: | ~aluria/charm-prometheus-blackbox-exporter:rewrite |
Merge into: | ~aluria/charm-prometheus-blackbox-exporter:refactor-peer-discovery |
Diff against target: |
1373 lines (+977/-224) 21 files modified
.gitignore (+22/-0) Makefile (+56/-0) icon.svg (+12/-0) interfaces/.empty (+1/-0) layer.yaml (+8/-4) layers/.empty (+1/-0) lib/lib_bb_peer_exporter.py (+158/-0) lib/lib_network.py (+94/-0) metadata.yaml (+9/-6) reactive/prometheus-blackbox-exporter.py (+133/-214) requirements.txt (+1/-0) tests/functional/bundle.yaml.j2 (+13/-0) tests/functional/conftest.py (+68/-0) tests/functional/juju_tools.py (+71/-0) tests/functional/requirements.txt (+7/-0) tests/functional/test_deploy.py (+150/-0) tests/unit/conftest.py (+69/-0) tests/unit/requirements.txt (+7/-0) tests/unit/test_lib_bb_peer_exporter.py (+46/-0) tox.ini (+50/-0) wheelhouse.txt (+1/-0) |
Related bugs: |
Reviewer | Review Type | Date Requested | Status |
---|---|---|---|
Peter Sabaini (community) | Needs Fixing | ||
Alvaro Uria | Pending | ||
Review via email: mp+374432@code.launchpad.net
Commit message
Description of the change
Revision history for this message
Alvaro Uria (aluria) wrote:
Thank you for the review. A new MP can be found at [1]. I'll go ahead and mark this MP as rejected.
Unmerged commits
- 371fd4b... by Alvaro Uria
Rewrite charm using template-python-pytest
* Cloned template-python-pytest
* Moved the reactive script helpers to lib_bb_peer_exporter (generic
helpers) and lib_network (network-related helpers)
* Created minimal unit tests (for the libs) and functional tests
* wheelhouse.txt installs psutil as well as netifaces and pyroute2.
* Linting now runs flake8-docstrings, flake8-import-order and other
extra checks. All scripts have been updated following the linter's
recommendations.
* Better use of the peer-discovery interface. Available (and transient)
states raised when units join or leave a peer relation are used to
trigger config changes.
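The lib_network helpers mentioned above rely on two small stdlib techniques: building per-interface network blocks with `ipaddress`, and parsing peer relation data (which Juju hands back as strings) with `ast.literal_eval`, as the diff's `safe_loads` does. A minimal, self-contained sketch of both (`to_network` is an illustrative condensation, not the charm's exact API):

```python
import ast
import ipaddress


def to_network(addr, netmask):
    """Turn an addr/netmask pair into the ip/net dict built per interface."""
    iface = ipaddress.IPv4Interface("{}/{}".format(addr, netmask))
    return {"ip": str(iface.ip), "net": str(iface.network)}


def safe_loads(data_str):
    """Safely evaluate relation data back into a Python object."""
    if not isinstance(data_str, str) or not data_str:
        return []
    return ast.literal_eval(data_str)


# A unit advertises its networks; a peer receives them as a string
# and rebuilds the structure before filtering for shared networks.
advertised = [to_network("10.0.0.5", "255.255.255.0")]
received = safe_loads(str(advertised))
assert received[0]["net"] == "10.0.0.0/24"
```

`ast.literal_eval` only accepts Python literals, so a malformed or hostile relation value raises instead of executing code, which is why the charm prefers it over `eval`.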
Preview Diff
1 | diff --git a/.gitignore b/.gitignore |
2 | new file mode 100644 |
3 | index 0000000..32e2995 |
4 | --- /dev/null |
5 | +++ b/.gitignore |
6 | @@ -0,0 +1,22 @@ |
7 | +# Byte-compiled / optimized / DLL files |
8 | +__pycache__/ |
9 | +*.py[cod] |
10 | +*$py.class |
11 | + |
12 | +# Log files |
13 | +*.log |
14 | + |
15 | +.tox/ |
16 | +.coverage |
17 | + |
18 | +# vi |
19 | +.*.swp |
20 | + |
21 | +# pycharm |
22 | +.idea/ |
23 | + |
24 | +# version data |
25 | +repo-info |
26 | + |
27 | +# reports |
28 | +report/* |
29 | diff --git a/Makefile b/Makefile |
30 | new file mode 100644 |
31 | index 0000000..eb95ca8 |
32 | --- /dev/null |
33 | +++ b/Makefile |
34 | @@ -0,0 +1,56 @@ |
35 | +PROJECTPATH = $(dir $(realpath $(firstword $(MAKEFILE_LIST)))) |
36 | +DIRNAME = $(notdir $(PROJECTPATH:%/=%)) |
37 | + |
38 | +ifndef CHARM_BUILD_DIR |
39 | + CHARM_BUILD_DIR := /tmp/$(DIRNAME)-builds |
40 | + $(warning Warning CHARM_BUILD_DIR was not set, defaulting to $(CHARM_BUILD_DIR)) |
41 | +endif |
42 | + |
43 | +help: |
44 | + @echo "This project supports the following targets" |
45 | + @echo "" |
46 | + @echo " make help - show this text" |
47 | + @echo " make lint - run flake8" |
48 | + @echo " make test - run the unittests and lint" |
49 | + @echo " make unittest - run the tests defined in the unittest subdirectory" |
50 | + @echo " make functional - run the tests defined in the functional subdirectory" |
51 | + @echo " make release - build the charm" |
52 | + @echo " make clean - remove unneeded files" |
53 | + @echo "" |
54 | + |
55 | +lint: |
56 | + @echo "Running flake8" |
57 | + @tox -e lint |
58 | + |
59 | +test: lint unittest functional |
60 | + |
61 | +unittest: |
62 | + @tox -e unit |
63 | + |
64 | +functional: build |
65 | + @PYTEST_KEEP_MODEL=$(PYTEST_KEEP_MODEL) \ |
66 | + PYTEST_CLOUD_NAME=$(PYTEST_CLOUD_NAME) \ |
67 | + PYTEST_CLOUD_REGION=$(PYTEST_CLOUD_REGION) \ |
68 | + CHARM_BUILD_DIR=$(CHARM_BUILD_DIR) \ |
69 | + tox -e functional |
70 | + |
71 | +build: |
72 | + @echo "Building charm to base directory $(CHARM_BUILD_DIR)" |
73 | + @CHARM_LAYERS_DIR=./layers \ |
74 | + CHARM_INTERFACES_DIR=./interfaces \ |
75 | + TERM=linux \ |
76 | + CHARM_BUILD_DIR=$(CHARM_BUILD_DIR) \ |
77 | + charm build . --force |
78 | + |
79 | +release: clean build |
80 | + @echo "Charm is built at $(CHARM_BUILD_DIR)" |
81 | + |
82 | +clean: |
83 | + @echo "Cleaning files" |
84 | + @if [ -d $(CHARM_BUILD_DIR) ] ; then rm -r $(CHARM_BUILD_DIR) ; fi |
85 | + @if [ -d .tox ] ; then rm -r .tox ; fi |
86 | + @if [ -d .pytest_cache ] ; then rm -r .pytest_cache ; fi |
87 | + @find . -iname __pycache__ -exec rm -r {} + |
88 | + |
89 | +# The targets below don't depend on a file |
90 | +.PHONY: lint test unittest functional build release clean help |
91 | diff --git a/icon.svg b/icon.svg |
92 | new file mode 100644 |
93 | index 0000000..ffa6296 |
94 | --- /dev/null |
95 | +++ b/icon.svg |
96 | @@ -0,0 +1,12 @@ |
97 | +<?xml version="1.0" encoding="UTF-8"?> |
98 | +<svg width="100px" height="100px" viewBox="0 0 100 100" version="1.1" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink"> |
99 | + <!-- Generator: Sketch 45.2 (43514) - http://www.bohemiancoding.com/sketch --> |
100 | + <title>prometheus</title> |
101 | + <desc>Created with Sketch.</desc> |
102 | + <defs></defs> |
103 | + <g id="Page-1" stroke="none" stroke-width="1" fill="none" fill-rule="evenodd"> |
104 | + <g id="prometheus" fill-rule="nonzero" fill="#B8B8B8"> |
105 | + <path d="M50.0004412,3.12178676e-15 C22.3871247,3.12178676e-15 0,22.3848726 0,49.9995588 C0,77.6133626 22.3871247,100 50.0004412,100 C77.6137577,100 100,77.6133626 100,49.9995588 C100,22.3848726 77.6128753,-1.77635684e-15 50.0004412,3.12178676e-15 Z M49.8864141,88.223871 C42.8334797,88.223871 37.1152482,83.6412396 37.1152482,77.9899953 L62.6575799,77.9899953 C62.6575799,83.6404692 56.9393484,88.223871 49.8864141,88.223871 Z M70.9802642,74.6007896 L28.7901876,74.6007896 L28.7901876,67.159792 L70.9810563,67.159792 L70.9810563,74.6007896 L70.9802642,74.6007896 Z M70.8289715,63.3298895 L28.9105881,63.3298895 C28.771177,63.1734883 28.6285974,63.0193985 28.493939,62.860686 C24.1753633,57.7603128 23.1582959,55.0976406 22.1705366,52.3841188 C22.1539023,52.2947467 27.4071661,53.4280774 31.1324525,54.2432125 C31.1324525,54.2432125 33.0493552,54.6746641 35.8518351,55.1716037 C33.1610425,52.1036753 31.5633596,48.2036619 31.5633596,44.2173581 C31.5633596,35.4658266 38.4642091,27.8183486 35.974612,21.6370353 C38.397672,21.8288772 40.9894511,26.6110549 41.164507,34.0882636 C43.740444,30.6258652 44.8185037,24.3027893 44.8185037,20.4258893 C44.8185037,16.4118494 47.5378123,11.7490913 50.257913,11.5896084 C47.8332688,15.4765242 50.886055,18.8087166 53.5998189,27.0748652 C54.6176783,30.1797752 54.4877725,35.4049611 55.2735442,38.7186628 C55.5341479,31.8362408 56.7508266,21.794207 61.2397056,18.3271859 C59.2594343,22.6933211 61.5327858,28.1565758 63.0876948,30.7830368 C65.5963025,35.020507 67.1171509,38.2309685 67.1171509,44.302878 C67.1171509,48.3739312 65.5717472,52.2069155 62.964918,55.2031922 C65.9289881,54.6623369 67.9757966,54.1746426 67.9757966,54.1746426 L77.6014994,52.3479077 C77.6022915,52.3471373 76.2034279,57.9421388 70.8289715,63.3298895 Z" id="path3023"></path> |
106 | + </g> |
107 | + </g> |
108 | +</svg> |
109 | \ No newline at end of file |
110 | diff --git a/interfaces/.empty b/interfaces/.empty |
111 | new file mode 100644 |
112 | index 0000000..792d600 |
113 | --- /dev/null |
114 | +++ b/interfaces/.empty |
115 | @@ -0,0 +1 @@ |
116 | +# |
117 | diff --git a/layer.yaml b/layer.yaml |
118 | index 6036e6d..929f452 100644 |
119 | --- a/layer.yaml |
120 | +++ b/layer.yaml |
121 | @@ -1,11 +1,15 @@ |
122 | +# exclude the interfaces and layers folders we use for submodules |
123 | +exclude: |
124 | + - interfaces |
125 | + - layers |
126 | +# include required layers here |
127 | includes: |
128 | - 'layer:basic' |
129 | - - "interface:juju-info" |
130 | + - 'layer:status' |
131 | - 'interface:http' |
132 | + - 'interface:juju-info' |
133 | - 'interface:peer-discovery' |
134 | -repo: 'https://git.launchpad.net/prometheus-blackbox-exporter-charm' |
135 | -ignore: |
136 | - - '.*.swp' |
137 | +repo: 'https://git.launchpad.net/prometheus-blackbox-peer-exporter-charm' |
138 | options: |
139 | basic: |
140 | use_venv: true |
141 | diff --git a/layers/.empty b/layers/.empty |
142 | new file mode 100644 |
143 | index 0000000..792d600 |
144 | --- /dev/null |
145 | +++ b/layers/.empty |
146 | @@ -0,0 +1 @@ |
147 | +# |
148 | diff --git a/lib/lib_bb_peer_exporter.py b/lib/lib_bb_peer_exporter.py |
149 | new file mode 100644 |
150 | index 0000000..6de8dc2 |
151 | --- /dev/null |
152 | +++ b/lib/lib_bb_peer_exporter.py |
153 | @@ -0,0 +1,158 @@ |
154 | +"""General helpers.""" |
155 | +import os |
156 | +import subprocess |
157 | + |
158 | +from charmhelpers import fetch |
159 | +from charmhelpers.core import hookenv, host, unitdata |
160 | +from charmhelpers.core.templating import render |
161 | + |
162 | +from charms.reactive.helpers import any_file_changed, data_changed |
163 | + |
164 | +import lib_network |
165 | + |
166 | +import yaml |
167 | + |
168 | + |
169 | +APT_PKG_NAME = 'prometheus-blackbox-exporter' |
170 | +SVC_NAME = 'prometheus-blackbox-exporter' |
171 | +SVC_PATH = os.path.join('/usr/bin', SVC_NAME) |
172 | +PORT_DEF = 9115 |
173 | +BLACKBOX_EXPORTER_YML_TMPL = 'blackbox.yaml.j2' |
174 | +CONF_FILE_PATH = '/etc/prometheus/blackbox.yml' |
175 | + |
176 | + |
177 | +class BBPeerExporterError(Exception): |
178 | + """Handle exceptions encountered in BBPeerExporterHelper.""" |
179 | + |
180 | + pass |
181 | + |
182 | + |
183 | +class BBPeerExporterHelper(): |
184 | + """General helpers.""" |
185 | + |
186 | + def __init__(self): |
187 | + """Load config.yaml.""" |
188 | + self.charm_config = hookenv.config() |
189 | + |
190 | + @property |
191 | + def config_changed(self): |
192 | + """Verify and update checksum if config has changed.""" |
193 | + return data_changed('blackbox-peer-exporter.config', self.charm_config) |
194 | + |
195 | + def peer_relation_data_changed(self, keymap): |
196 | + """Verify and update checksum if peer relation data has changed.""" |
197 | + return data_changed('blackbox-peer.relation_data', keymap) |
198 | + |
199 | + def bbexporter_relation_data_changed(self, keymap): |
200 | + """Verify and update checksum if provides relation data has changed.""" |
201 | + return data_changed('blackbox-exporter.relation_data', keymap) |
202 | + |
203 | + @property |
204 | + def is_blackbox_exporter_relation_enabled(self): |
205 | + """Verify if the blackbox-export relation exists.""" |
206 | + kv = unitdata.kv() |
207 | + return kv.get('blackbox_exporter', False) |
208 | + |
209 | + @property |
210 | + def enable_blackbox_exporter_relation(self): |
211 | + """Enable the blackbox-export flag for one time functions.""" |
212 | + kv = unitdata.kv() |
213 | + kv.set('blackbox_exporter', True) |
214 | + |
215 | + @property |
216 | + def disable_blackbox_exporter_relation(self): |
217 | + """Disable the blackbox-export flag for one time functions.""" |
218 | + kv = unitdata.kv() |
219 | + kv.set('blackbox_exporter', False) |
220 | + |
221 | + @property |
222 | + def modules(self): |
223 | + """Return the modules config parameter.""" |
224 | + return self.charm_config['modules'] |
225 | + |
226 | + @property |
227 | + def scrape_interval(self): |
228 | + """Return the scrape-interval config parameter.""" |
229 | + return self.charm_config['scrape-interval'] |
230 | + |
231 | + @property |
232 | + def port_def(self): |
233 | + """Return the port exposed by blackbox-exporter.""" |
234 | + return PORT_DEF |
235 | + |
236 | + @property |
237 | + def templates_changed(self): |
238 | + """Verify if any stored template has changed.""" |
239 | + return any_file_changed(['templates/{}'.format(tmpl) |
240 | + for tmpl in [BLACKBOX_EXPORTER_YML_TMPL]]) |
241 | + |
242 | + def install_packages(self): |
243 | + """Install the APT package and sets Linux capabilities.""" |
244 | + fetch.install(APT_PKG_NAME, fatal=True) |
245 | + cmd = ["setcap", "cap_net_raw+ep", SVC_PATH] |
246 | + try: |
247 | + subprocess.check_output(cmd) |
248 | + except subprocess.CalledProcessError as error: |
249 | + hookenv.log('unable to set linux capabilities: {}'.format(str(error)), |
250 | + hookenv.ERROR) |
251 | + raise BBPeerExporterError('Unable to set linux capabilities') |
252 | + |
253 | + def render_modules(self): |
254 | + """Generate /etc/prometheus/blackbox.yml from the template.""" |
255 | + try: |
256 | + modules = yaml.safe_load(self.modules) |
257 | + if 'modules' in modules: |
258 | + modules = modules['modules'] |
259 | + except (yaml.parser.ParserError, yaml.scanner.ScannerError) as error: |
260 | + hookenv.log('error retrieving modules yaml config: {}'.format(str(error)), |
261 | + hookenv.ERROR) |
262 | + # render "null" value |
263 | + modules = None |
264 | + |
265 | + context = {'modules': yaml.safe_dump(modules, default_flow_style=False)} |
266 | + render(source=BLACKBOX_EXPORTER_YML_TMPL, target=CONF_FILE_PATH, context=context) |
267 | + hookenv.open_port(PORT_DEF) |
268 | + |
269 | + def restart_bbexporter(self): |
270 | + """Restart the exporter daemon.""" |
271 | + if not host.service_running(SVC_NAME): |
272 | + hookenv.log('Starting {}...'.format(SVC_NAME)) |
273 | + host.service_start(SVC_NAME) |
274 | + else: |
275 | + hookenv.log('Restarting {}, config file changed...'.format(SVC_NAME)) |
276 | + host.service_restart(SVC_NAME) |
277 | + |
278 | + def get_icmp_and_tcp_targets(self, unit_networks, unit_ports, principal_unit): |
279 | + """Return a dict with structured unit-networks and unit-ports for Prometheus.""" |
280 | + peers_net_info = {key: [] for key in ('networks', 'icmp_targets', 'tcp_targets')} |
281 | + unit_networks = [] if unit_networks is None else lib_network.safe_loads(unit_networks) |
282 | + unit_ports = [] if unit_ports is None else lib_network.safe_loads(unit_ports) |
283 | + if not unit_networks: |
284 | + return peers_net_info |
285 | + |
286 | + local_networks = set(net['net'] for net in lib_network.get_unit_ipv4_networks()) |
287 | + for unit_network in unit_networks: |
288 | + # We can't probe a network we can't reach directly |
289 | + if unit_network['net'] not in local_networks: |
290 | + continue |
291 | + |
292 | + probe_dst_ip = unit_network['ip'] |
293 | + peers_net_info['networks'].append(unit_network['net']) |
294 | + peers_net_info['icmp_targets'].append({ |
295 | + 'network': unit_network['net'], |
296 | + 'interface': unit_network['iface'], |
297 | + 'ip-address': probe_dst_ip, |
298 | + 'principal-unit': principal_unit, |
299 | + 'module': 'icmp', |
300 | + 'source-ip': lib_network.get_source_ip_ipv4(probe_dst_ip), |
301 | + }) |
302 | + |
303 | + for port in unit_ports: |
304 | + peers_net_info['tcp_targets'].append({ |
305 | + 'ip-address': probe_dst_ip, |
306 | + 'port': port, |
307 | + 'principal-unit': principal_unit, |
308 | + 'module': 'tcp_connect', |
309 | + }) |
310 | + |
311 | + return peers_net_info |
312 | diff --git a/lib/lib_network.py b/lib/lib_network.py |
313 | new file mode 100644 |
314 | index 0000000..1d13452 |
315 | --- /dev/null |
316 | +++ b/lib/lib_network.py |
317 | @@ -0,0 +1,94 @@ |
318 | +"""Network helpers.""" |
319 | +import ast |
320 | +import ipaddress |
321 | +import re |
322 | +import socket |
323 | + |
324 | +import netifaces |
325 | + |
326 | +import psutil |
327 | + |
328 | +import pyroute2 |
329 | + |
330 | +IFACE_BLACKLIST_PATTERN = '^(lo|virbr|docker|lxdbr|vhost|tun|tap)' |
331 | + |
332 | + |
333 | +def _is_valid_ipv4_addr(address): |
334 | + """Filter out non-IPv4 addresses.""" |
335 | + try: |
336 | + ip = ipaddress.IPv4Address(address.get('addr')) |
337 | + invalid = any((ip.is_multicast, ip.is_reserved, ip.is_link_local, ip.is_loopback)) |
338 | + return not invalid |
339 | + except (ipaddress.AddressValueError, ValueError): |
340 | + return False |
341 | + |
342 | + |
343 | +def _get_local_ifaces(): |
344 | + """Ignore interfaces used by Docker, KVM, Contrail, etc.""" |
345 | + iface_blacklist_re = re.compile(IFACE_BLACKLIST_PATTERN) |
346 | + ifaces_whitelist = (iface for iface in netifaces.interfaces() |
347 | + if not iface_blacklist_re.search(iface)) |
348 | + return ifaces_whitelist |
349 | + |
350 | + |
351 | +def _get_ipv4_addresses(iface): |
352 | + """Return all IPv4 IPs from a given interface. |
353 | + |
354 | + Note that the interface name will exist as this function is called after |
355 | + the list of interfaces has been retrieved. |
356 | + """ |
357 | + ipv4_addresses = netifaces.ifaddresses(iface).get(netifaces.AF_INET, []) |
358 | + filtered_ipv4_addrs = (ipv4_addr for ipv4_addr in ipv4_addresses |
359 | + if _is_valid_ipv4_addr(ipv4_addr)) |
360 | + return filtered_ipv4_addrs |
361 | + |
362 | + |
363 | +def get_source_ip_ipv4(destination): |
364 | + """Return the src ip of a connection towards "destination". |
365 | + |
366 | + This function is similar to running "ip route get <ipv4_addr>" |
367 | + |
368 | + Disclaimer: source ip configuration for the blackbox exporter is done in |
369 | + the module definition. To avoid having to create a module for every local |
370 | + address we are simply letting the exporter use whatever source IP the |
371 | + kernel deems appropriate. Be aware that this function is doing a route |
372 | + lookup, and it is not - strictly speaking - returning the source IP of the |
373 | + packet the blackbox exporter will generate. The two addresses should always |
374 | + be the same, but YMMV. |
375 | + """ |
376 | + with pyroute2.IPRoute() as ipr: |
377 | + routes = ipr.route('get', dst=destination) |
378 | + return routes[0].get_attr('RTA_PREFSRC') |
379 | + |
380 | + |
381 | +def get_unit_ipv4_networks(): |
382 | + """Return a list of IPv4 network blocks available on the host.""" |
383 | + networks = [] |
384 | + for iface in _get_local_ifaces(): |
385 | + for ip_addr in _get_ipv4_addresses(iface): |
386 | + network = "{}/{}".format(ip_addr['addr'], ip_addr['netmask']) |
387 | + ip_ipv4 = ipaddress.IPv4Interface(network) |
388 | + networks.append( |
389 | + {"iface": iface, |
390 | + "ip": str(ip_ipv4.ip), |
391 | + "net": str(ip_ipv4.network)} |
392 | + ) |
393 | + return networks |
394 | + |
395 | + |
396 | +def get_unit_open_ports(): |
397 | + """Return TCP connections listening on ANY_ADDRESS.""" |
398 | + ports = [str(conn.laddr.port) for conn in psutil.net_connections() |
399 | + if conn.status == "LISTEN" and |
400 | + conn.type == socket.SOCK_STREAM and |
401 | + conn.family.value in (netifaces.AF_INET, netifaces.AF_INET6) and |
402 | + conn.laddr.ip in ("0.0.0.0", "::")] |
403 | + |
404 | + return list(set(ports)) |
405 | + |
406 | + |
407 | +def safe_loads(data_str): |
408 | + """Evaluate a string and convert it into a Python object.""" |
409 | + if not isinstance(data_str, str) or not data_str: |
410 | + return [] |
411 | + return ast.literal_eval(data_str) |
412 | diff --git a/metadata.yaml b/metadata.yaml |
413 | index 89d3c6e..6e0be63 100644 |
414 | --- a/metadata.yaml |
415 | +++ b/metadata.yaml |
416 | @@ -1,10 +1,13 @@ |
417 | -name: prometheus-blackbox-exporter |
418 | -display-name: Prometheus Blackbox Exporter |
419 | -summary: Blackbox exporter for Prometheus |
420 | -maintainer: Jacek Nykis <jacek.nykis@canonical.com> |
421 | +name: prometheus-blackbox-peer-exporter |
422 | +display-name: Prometheus Blackbox Peer Exporter |
423 | +summary: Blackbox peer exporter for Prometheus |
424 | +maintainer: Llama (LMA) Charmers <llama-charmers@lists.launchpad.net> |
425 | description: | |
426 | - The blackbox exporter allows blackbox probing of |
427 | - endpoints over HTTP, HTTPS, DNS, TCP and ICMP. |
428 | + The blackbox peer exporter allows blackbox probing of |
429 | + endpoints over HTTP, HTTPS, DNS, TCP and ICMP. This |
430 | + charm allows all units to probe their peers, whereas |
431 | + the prometheus-blackbox-exporter charm deploys a single |
432 | + unit that can probe external endpoints. |
433 | tags: |
434 | - monitoring |
435 | series: |
436 | diff --git a/reactive/prometheus-blackbox-exporter.py b/reactive/prometheus-blackbox-exporter.py |
437 | index 7d4e877..a057bba 100644 |
438 | --- a/reactive/prometheus-blackbox-exporter.py |
439 | +++ b/reactive/prometheus-blackbox-exporter.py |
440 | @@ -1,250 +1,169 @@ |
441 | -import ast |
442 | -import re |
443 | -import subprocess |
444 | -import yaml |
445 | -import sys |
446 | - |
447 | -from charmhelpers.core import host, hookenv |
448 | -from charmhelpers.core.templating import render |
449 | -from charms.reactive import ( |
450 | - when, when_not, set_state, remove_state |
451 | -) |
452 | -from ipaddress import ( |
453 | - IPv4Interface, IPv4Address |
454 | -) |
455 | -from pyroute2 import IPRoute |
456 | -from netifaces import interfaces, ifaddresses, AF_INET |
457 | -from charms.reactive.helpers import any_file_changed, data_changed |
458 | -from charms.reactive import hook |
459 | +"""Reactive script describing the prometheus-blackbox-peer-exporter behavior.""" |
460 | |
461 | -from charmhelpers.fetch import apt_install |
462 | +from charmhelpers.core import hookenv |
463 | |
464 | -hooks = hookenv.Hooks() |
465 | +from charms.layer import status |
466 | +from charms.reactive import ( |
467 | + clear_flag, is_flag_set, set_flag, when, when_any, when_not |
468 | +) |
469 | |
470 | -APT_PKG_NAME = 'prometheus-blackbox-exporter' |
471 | -SVC_NAME = 'prometheus-blackbox-exporter' |
472 | -EXECUTABLE = '/usr/bin/prometheus-blackbox-exporter' |
473 | -PORT_DEF = 9115 |
474 | -BLACKBOX_EXPORTER_YML_TMPL = 'blackbox.yaml.j2' |
475 | -CONF_FILE_PATH = '/etc/prometheus/blackbox.yml' |
476 | -IFACE_BLACKLIST_PATTERN = re.compile('^(lo|virbr|docker|lxdbr|vhost|tun|tap)') |
477 | +from lib_bb_peer_exporter import BBPeerExporterError, BBPeerExporterHelper |
478 | |
479 | +import lib_network |
480 | |
481 | -def templates_changed(tmpl_list): |
482 | - return any_file_changed(['templates/{}'.format(x) for x in tmpl_list]) |
483 | +helper = BBPeerExporterHelper() |
484 | |
485 | |
486 | -@when_not('blackbox-exporter.installed') |
487 | +@when_not('prometheus-blackbox-peer-exporter.installed') |
488 | def install_packages(): |
489 | - hookenv.status_set('maintenance', 'Installing software') |
490 | - config = hookenv.config() |
491 | - apt_install(APT_PKG_NAME, fatal=True) |
492 | - cmd = ["sudo", "setcap", "cap_net_raw+ep", EXECUTABLE] |
493 | - subprocess.check_output(cmd) |
494 | - set_state('blackbox-exporter.installed') |
495 | - set_state('blackbox-exporter.do-check-reconfig') |
496 | - |
497 | - |
498 | -def get_modules(): |
499 | - config = hookenv.config() |
500 | + """Install APT package or get blocked until user action.""" |
501 | + status.maintenance('Installing software') |
502 | try: |
503 | - modules = yaml.safe_load(config.get('modules')) |
504 | - except: |
505 | - return None |
506 | - |
507 | - if 'modules' in modules: |
508 | - return yaml.safe_dump(modules['modules'], default_flow_style=False) |
509 | - else: |
510 | - return yaml.safe_dump(modules, default_flow_style=False) |
511 | + helper.install_packages() |
512 | + set_flag('prometheus-blackbox-peer-exporter.installed') |
513 | + set_flag('prometheus-blackbox-peer-exporter.do-check-reconfig') |
514 | + except BBPeerExporterError as error: |
515 | + status.blocked(error) |
516 | |
517 | |
518 | -@when('blackbox-exporter.installed') |
519 | -@when('blackbox-exporter.do-reconfig-yaml') |
520 | +@when('prometheus-blackbox-peer-exporter.installed') |
521 | +@when('prometheus-blackbox-peer-exporter.do-reconfig-yaml') |
522 | def write_blackbox_exporter_config_yaml(): |
523 | - modules = get_modules() |
524 | - render(source=BLACKBOX_EXPORTER_YML_TMPL, |
525 | - target=CONF_FILE_PATH, |
526 | - context={'modules': modules} |
527 | - ) |
528 | - hookenv.open_port(PORT_DEF) |
529 | - set_state('blackbox-exporter.do-restart') |
530 | - remove_state('blackbox-exporter.do-reconfig-yaml') |
531 | + """Generate /etc/prometheus/blackbox.yml.""" |
532 | + helper.render_modules() |
533 | + set_flag('prometheus-blackbox-peer-exporter.do-restart') |
534 | + clear_flag('prometheus-blackbox-peer-exporter.do-reconfig-yaml') |
535 | |
536 | |
537 | -@when('blackbox-exporter.started') |
538 | +@when('prometheus-blackbox-peer-exporter.started') |
539 | def check_config(): |
540 | - set_state('blackbox-exporter.do-check-reconfig') |
541 | + """Trigger a config check.""" |
542 | + set_flag('prometheus-blackbox-peer-exporter.do-check-reconfig') |
543 | |
544 | |
545 | -@when('blackbox-exporter.do-check-reconfig') |
546 | +@when('prometheus-blackbox-peer-exporter.do-check-reconfig') |
547 | def check_reconfig_blackbox_exporter(): |
548 | - config = hookenv.config() |
549 | - |
550 | - if data_changed('blackbox-exporter.config', config): |
551 | - set_state('blackbox-exporter.do-reconfig-yaml') |
552 | - |
553 | - if templates_changed([BLACKBOX_EXPORTER_YML_TMPL]): |
554 | - set_state('blackbox-exporter.do-reconfig-yaml') |
555 | + """Trigger a blackbox.yml config regeneration on changes.""" |
556 | + if helper.config_changed or helper.templates_changed: |
557 | + set_flag('prometheus-blackbox-peer-exporter.do-reconfig-yaml') |
558 | |
559 | - remove_state('blackbox-exporter.do-check-reconfig') |
560 | + clear_flag('prometheus-blackbox-peer-exporter.do-check-reconfig') |
561 | |
562 | |
563 | -@when('blackbox-exporter.do-restart') |
564 | +@when('prometheus-blackbox-peer-exporter.do-restart') |
565 | def restart_blackbox_exporter(): |
566 | - if not host.service_running(SVC_NAME): |
567 | - hookenv.log('Starting {}...'.format(SVC_NAME)) |
568 | - host.service_start(SVC_NAME) |
569 | - else: |
570 | - hookenv.log('Restarting {}, config file changed...'.format(SVC_NAME)) |
571 | - host.service_restart(SVC_NAME) |
572 | - hookenv.status_set('active', 'Ready') |
573 | - set_state('blackbox-exporter.started') |
574 | - remove_state('blackbox-exporter.do-restart') |
575 | + """Restart the blackbox-exporter daemon.""" |
576 | + helper.restart_bbexporter() |
577 | + status.active('Ready') |
578 | + set_flag('prometheus-blackbox-peer-exporter.started') |
579 | + clear_flag('prometheus-blackbox-peer-exporter.do-restart') |
580 | + |
581 | |
582 | # Relations |
583 | -@hook('blackbox-peer-relation-{joined,departed}') |
584 | -def configure_blackbox_exporter_relation(peers): |
585 | - hookenv.log('Running blackbox exporter relation.') |
586 | - hookenv.status_set('maintenance', 'Configuring blackbox peer relations.') |
587 | - config = hookenv.config() |
588 | +@when_any('blackbox-peer.joined', |
589 | + 'blackbox-peer.departed') |
590 | +def blackbox_peer_changed(): |
591 | + """Shares unit data with its peers in the blackbox-peer relation. |
592 | + |
593 | + If a single unit exists, data will NOT be shared (the main goal of this |
594 | + service is to probe against its peers). When a unit is removed, all its |
595 | + peers may need to rebuild their probes (one less). |
596 | + """ |
597 | + hookenv.log('Running blackbox peer relations.') |
598 | + status.maintenance('Configuring blackbox peer relations.') |
599 | + rids = hookenv.relation_ids("blackbox-peer") |
600 | + if not rids or len(rids) != 1: |
601 | + hookenv.log('[*] More than one blackbox-peer relation', hookenv.ERROR) |
602 | + return |
603 | + |
604 | + rid, unit = rids[0], hookenv.local_unit() |
605 | + relation_settings = hookenv.relation_get(rid=rid, unit=unit) |
606 | + relation_settings.update({ |
607 | + 'principal-unit': hookenv.principal_unit(), |
608 | + 'private-address': hookenv.unit_get('private-address'), |
609 | + 'unit-networks': lib_network.get_unit_ipv4_networks(), |
610 | + 'unit-ports': lib_network.get_unit_open_ports(), |
611 | + }) |
612 | + if helper.peer_relation_data_changed(relation_settings): |
613 | + hookenv.relation_set(relation_id=rid, relation_settings=relation_settings) |
614 | |
615 | - icmp_targets = [] |
616 | - tcp_targets = [] |
617 | - networks = [] |
618 | - for rid in hookenv.relation_ids('blackbox-peer'): |
619 | - for unit in hookenv.related_units(rid): |
620 | - unit_ports = hookenv.relation_get('unit-ports', rid=rid, unit=unit) |
621 | - principal_unit = hookenv.relation_get('principal-unit', rid=rid, unit=unit) |
622 | - unit_networks = hookenv.relation_get('unit-networks', rid=rid, unit=unit) |
623 | - if unit_networks is not None: |
624 | - unit_networks = ast.literal_eval(unit_networks) |
625 | - for unit_network in unit_networks: |
626 | - # Chcek if same network exists on this unit |
627 | - if unit_network['net'] in [net['net'] for net in get_unit_networks()]: |
628 | - networks.append(unit_network['net']) |
629 | - probe_dst_ip = unit_network['ip'] |
630 | - icmp_targets.append({ |
631 | - 'network': unit_network['net'], |
632 | - 'interface': unit_network['iface'], |
633 | - 'ip-address': probe_dst_ip, |
634 | - 'principal-unit': principal_unit, |
635 | - 'module': 'icmp', |
636 | - 'source-ip': _get_source_ip(probe_dst_ip) |
637 | - }) |
638 | - |
639 | - if unit_ports is not None: |
640 | - unit_ports = ast.literal_eval(unit_ports) |
641 | - for port in unit_ports: |
642 | - tcp_targets.append({ |
643 | - 'ip-address': unit_network['ip'], |
644 | - 'port': port, |
645 | - 'principal-unit': principal_unit, |
646 | - 'module': 'tcp_connect', |
647 | - }) |
648 | - |
649 | - relation_settings = {} |
650 | - relation_settings['icmp_targets'] = icmp_targets |
651 | - relation_settings['tcp_targets'] = tcp_targets |
652 | - relation_settings['networks'] = networks |
653 | - relation_settings['ip_address'] = hookenv.unit_get('private-address') |
654 | - relation_settings['port'] = PORT_DEF |
655 | - relation_settings['job_name'] = hookenv.principal_unit() |
656 | - relation_settings['scrape_interval'] = config.get('scrape-interval') |
657 | + # Share with Prometheus - all peer units' network data with present unit |
658 | + if is_flag_set('blackbox-exporter.available'): |
659 | + configure_blackbox_exporter_relation() |
660 | |
661 | + status.active('Ready') |
662 | |
663 | - for rel_id in hookenv.relation_ids('blackbox-exporter'): |
664 | - relation_settings['ip_address'] = \ |
665 | - hookenv.ingress_address(rid=rel_id, unit=hookenv.local_unit()) |
666 | - hookenv.relation_set(relation_id=rel_id, relation_settings=relation_settings) |
667 | - |
668 | - hookenv.status_set('active', 'Ready') |
669 | |
670 | - |
671 | -def _get_source_ip(destination): |
672 | - """ |
673 | - Get the source ip of a connection towards destination without having to run |
674 | - ip r g via subprocess |
675 | - |
676 | - Disclaimer: source ip configuration for the blackbox exporter is done in |
677 | - the module definition. To avoid having to create a module for every local |
678 | - address we are simply letting the exporter use whatever source IP the |
679 | - kernel deems appropriate. Be aware that this function is doing a route |
680 | - lookup, and it is not - strictly speaking - returning the source IP of the |
681 | - packet the blackbox exporter will generate. The two addresses should always |
682 | - be the same, but YMMV. |
683 | - """ |
684 | - with IPRoute() as ipr: |
685 | - routes = ipr.route('get', dst=destination) |
686 | - return routes[0].get_attr('RTA_PREFSRC') |
687 | +@when('blackbox-peer.connected') |
688 | +@when('blackbox-exporter.available') |
689 | +def blackbox_peer_and_exporter_any_hook(): |
690 | + """First time the prometheus2:blackbox-exporter relation is seen.""" |
691 | + if not helper.is_blackbox_exporter_relation_enabled: |
692 | + helper.enable_blackbox_exporter_relation |
693 | + configure_blackbox_exporter_relation() |
694 | |
695 | |
696 | -def _is_valid_ip(address): |
697 | - """ |
698 | - Filter out "uninteresting" addresses |
699 | - """ |
700 | - ip = IPv4Address(address.get('addr')) |
701 | - return not (ip.is_multicast or |
702 | - ip.is_reserved or |
703 | - ip.is_link_local or |
704 | - ip.is_loopback) |
705 | +@when('blackbox-peer.connected') |
706 | +@when_not('blackbox-exporter.available') |
707 | +def blackbox_peer_and_no_exporter_any_hook(): |
708 | + """When the prometheus2:blackbox-exporter relation is gone.""" |
709 | + if helper.is_blackbox_exporter_relation_enabled: |
710 | + helper.disable_blackbox_exporter_relation |
711 | |
712 | |
713 | -def _is_valid_iface(iface): |
714 | - """ |
715 | - Ignore interfaces used by Docker, KVM, Contrail, etc |
716 | - """ |
717 | - if IFACE_BLACKLIST_PATTERN.search(iface): |
718 | - return False |
719 | - else: |
720 | - return True |
721 | +def configure_blackbox_exporter_relation(): |
722 | + """Retrieve data from peers and share the bundle of probes with Prometheus.""" |
723 | + hookenv.log('Running blackbox exporter relation.') |
724 | |
725 | + # Read the peer relation |
726 | + rids = hookenv.relation_ids("blackbox-peer") |
727 | + if not rids or len(rids) != 1: |
728 | + hookenv.log('[*] Expected exactly one blackbox-peer relation, found {}'.format(len(rids)), hookenv.ERROR) |
729 | + return |
730 | |
731 | -def get_unit_networks(): |
732 | + status.maintenance('Configuring blackbox peer relations.') |
733 | networks = [] |
734 | - for iface in filter(_is_valid_iface, interfaces()): |
735 | - ip_addresses = ifaddresses(iface) |
736 | - for ip_address in filter(_is_valid_ip, ip_addresses.get(AF_INET, [])): |
737 | - ip_v4 = IPv4Interface( |
738 | - "{}/{}".format(ip_address['addr'], ip_address['netmask']) |
739 | - ) |
740 | - networks.append( |
741 | - {"iface": iface, |
742 | - "ip": str(ip_v4.ip), |
743 | - "net": str(ip_v4.network)} |
744 | - ) |
745 | - return networks |
746 | - |
747 | -def get_principal_unit_open_ports(): |
748 | - cmd = "lsof -P -iTCP -sTCP:LISTEN".split() |
749 | - result = subprocess.check_output(cmd) |
750 | - result = result.decode(sys.stdout.encoding) |
751 | - |
752 | - ports = [] |
753 | - for r in result.split('\n'): |
754 | - for p in r.split(): |
755 | - if '*:' in p: |
756 | - ports.append(p.split(':')[1]) |
757 | - ports = [p for p in set(ports)] |
758 | - |
759 | - return ports |
760 | - |
761 | -@hook('blackbox-peer-relation-{joined,departed}') |
762 | -def blackbox_peer_departed(peers): |
763 | - hookenv.log('Blackbox peer unit joined/departed.') |
764 | - set_state('blackbox-exporter.redo-peer-relation') |
765 | - |
766 | -@when('blackbox-exporter.redo-peer-relation') |
767 | -def setup_blackbox_peer_relation(peers): |
768 | - # Set blackbox-peer relations |
769 | - hookenv.log('Running blackbox peer relations.') |
770 | - hookenv.status_set('maintenance', 'Configuring blackbox peer relations.') |
771 | - for rid in hookenv.relation_ids('blackbox-peer'): |
772 | - relation_settings = hookenv.relation_get(rid=rid, unit=hookenv.local_unit()) |
773 | - relation_settings['principal-unit'] = hookenv.principal_unit() |
774 | - relation_settings['private-address'] = hookenv.unit_get('private-address') |
775 | - relation_settings['unit-networks'] = get_unit_networks() |
776 | - relation_settings['unit-ports'] = get_principal_unit_open_ports() |
777 | - hookenv.relation_set(relation_id=rid, relation_settings=relation_settings) |
778 | + icmp_targets = [] |
779 | + tcp_targets = [] |
780 | + rid = rids[0] |
781 | + for peer_unit in hookenv.related_units(rid): |
782 | + # principal-unit: ubuntu/1 |
783 | + principal_unit = hookenv.relation_get('principal-unit', rid=rid, unit=peer_unit) |
784 | + # unit-networks: '[{''iface'': ''eth0'', ''ip'': ''10.66.111.152'', ''net'': ''10.66.111.0/24''}]' |
785 | + unit_networks = hookenv.relation_get('unit-networks', rid=rid, unit=peer_unit) |
786 | + # unit-ports: '[''22'', ''9115'']' |
787 | + unit_ports = hookenv.relation_get('unit-ports', rid=rid, unit=peer_unit) |
788 | + |
789 | + peer_net_info = helper.get_icmp_and_tcp_targets( |
790 | + unit_networks, unit_ports, principal_unit) |
791 | + |
792 | + if peer_net_info['networks']: |
793 | + networks.extend(peer_net_info['networks']) |
794 | + |
795 | + if peer_net_info['icmp_targets']: |
796 | + icmp_targets.extend(peer_net_info['icmp_targets']) |
797 | + |
798 | + if peer_net_info['tcp_targets']: |
799 | + tcp_targets.extend(peer_net_info['tcp_targets']) |
800 | + |
801 | + relation_settings = { |
802 | + 'networks': networks, |
803 | + 'icmp_targets': icmp_targets, |
804 | + 'tcp_targets': tcp_targets, |
805 | + 'ip_address': hookenv.unit_get('private-address'), |
806 | + 'port': helper.port_def, |
807 | + 'job_name': hookenv.principal_unit(), |
808 | + 'scrape_interval': helper.scrape_interval, |
809 | + } |
810 | + |
811 | + if not helper.bbexporter_relation_data_changed(relation_settings): |
812 | + status.active('Ready') |
813 | + return |
814 | + |
815 | + # Share information with the "prometheus" application |
816 | + for rel_id in hookenv.relation_ids('blackbox-exporter'): |
817 | + relation_settings['ip_address'] = \ |
818 | + hookenv.ingress_address(rid=rel_id, unit=hookenv.local_unit()) |
819 | + hookenv.relation_set(relation_id=rel_id, relation_settings=relation_settings) |
820 | |
821 | - hookenv.status_set('active', 'Ready') |
822 | - remove_state('blackbox-exporter.redo-peer-relation') |
823 | + status.active('Ready') |
824 | diff --git a/requirements.txt b/requirements.txt |
825 | new file mode 100644 |
826 | index 0000000..8462291 |
827 | --- /dev/null |
828 | +++ b/requirements.txt |
829 | @@ -0,0 +1 @@ |
830 | +# Include python requirements here |
831 | diff --git a/tests/functional/bundle.yaml.j2 b/tests/functional/bundle.yaml.j2 |
832 | new file mode 100644 |
833 | index 0000000..e52b841 |
834 | --- /dev/null |
835 | +++ b/tests/functional/bundle.yaml.j2 |
836 | @@ -0,0 +1,13 @@ |
837 | +applications: |
838 | + {{ ubuntu_appname }}: |
839 | + series: {{ series }} |
840 | + charm: cs:ubuntu |
841 | + num_units: 2 |
842 | + |
843 | + {{ bb_appname }}: |
844 | + series: {{ series }} |
845 | + charm: {{ charm_path }} |
846 | + |
847 | +relations: |
848 | + - - {{ ubuntu_appname }} |
849 | + - {{ bb_appname }} |
850 | diff --git a/tests/functional/conftest.py b/tests/functional/conftest.py |
851 | new file mode 100644 |
852 | index 0000000..9679d97 |
853 | --- /dev/null |
854 | +++ b/tests/functional/conftest.py |
855 | @@ -0,0 +1,68 @@ |
856 | +""" |
857 | +Reusable pytest fixtures for functional testing. |
858 | + |
859 | +Environment variables |
860 | +--------------------- |
861 | + |
862 | +PYTEST_CLOUD_REGION, PYTEST_CLOUD_NAME: cloud name and region to use for juju model creation |
863 | + |
864 | +PYTEST_KEEP_MODEL: if set, the testing model won't be torn down at the end of the testing session |
865 | +""" |
866 | + |
867 | +import asyncio |
868 | +import os |
869 | +import subprocess |
870 | +import uuid |
871 | + |
872 | +from juju.controller import Controller |
873 | + |
874 | +from juju_tools import JujuTools |
875 | + |
876 | +import pytest |
877 | + |
878 | + |
879 | +@pytest.fixture(scope='module') |
880 | +def event_loop(): |
881 | + """Override the default pytest event loop to allow for fixtures using a broader scope.""" |
882 | + loop = asyncio.get_event_loop_policy().new_event_loop() |
883 | + asyncio.set_event_loop(loop) |
884 | + loop.set_debug(True) |
885 | + yield loop |
886 | + loop.close() |
887 | + asyncio.set_event_loop(None) |
888 | + |
889 | + |
890 | +@pytest.fixture(scope='module') |
891 | +async def controller(): |
892 | + """Connect to the current controller.""" |
893 | + _controller = Controller() |
894 | + await _controller.connect_current() |
895 | + yield _controller |
896 | + await _controller.disconnect() |
897 | + |
898 | + |
899 | +@pytest.fixture(scope='module') |
900 | +async def model(controller): |
901 | + """Create a temporary model to run the tests.""" |
902 | + model_name = "functest-{}".format(str(uuid.uuid4())[-12:]) |
903 | + _model = await controller.add_model(model_name, |
904 | + cloud_name=os.getenv('PYTEST_CLOUD_NAME'), |
905 | + region=os.getenv('PYTEST_CLOUD_REGION'), |
906 | + ) |
907 | + # https://github.com/juju/python-libjuju/issues/267 |
908 | + subprocess.check_call(['juju', 'models']) |
909 | + while model_name not in await controller.list_models(): |
910 | + await asyncio.sleep(1) |
911 | + yield _model |
912 | + await _model.disconnect() |
913 | + if not os.getenv('PYTEST_KEEP_MODEL'): |
914 | + await controller.destroy_model(model_name) |
915 | + while model_name in await controller.list_models(): |
916 | + await asyncio.sleep(1) |
917 | + |
918 | + |
919 | +@pytest.fixture(scope='module') |
920 | +async def jujutools(controller, model): |
921 | + """Load helpers to run commands on the units.""" |
922 | + tools = JujuTools(controller, model) |
923 | + return tools |
924 | diff --git a/tests/functional/juju_tools.py b/tests/functional/juju_tools.py |
925 | new file mode 100644 |
926 | index 0000000..850c296 |
927 | --- /dev/null |
928 | +++ b/tests/functional/juju_tools.py |
929 | @@ -0,0 +1,71 @@ |
930 | +"""Juju helpers to run commands on the units.""" |
931 | +import base64 |
932 | +import pickle |
933 | + |
934 | +import juju |
935 | + |
936 | + |
937 | +class JujuTools: |
938 | + """Load helpers to run commands on units.""" |
939 | + |
940 | + def __init__(self, controller, model): |
941 | + """Load initialized controller and model.""" |
942 | + self.controller = controller |
943 | + self.model = model |
944 | + |
945 | + async def run_command(self, cmd, target): |
946 | + """ |
947 | + Run a command on a unit. |
948 | + |
949 | + :param cmd: Command to be run |
950 | + :param target: Unit object or unit name string |
951 | + """ |
952 | + unit = ( |
953 | + target |
954 | + if isinstance(target, juju.unit.Unit) |
955 | + else await self.get_unit(target) |
956 | + ) |
957 | + action = await unit.run(cmd) |
958 | + return action.results |
959 | + |
960 | + async def remote_object(self, imports, remote_cmd, target): |
961 | + """ |
962 | + Run command on target machine and returns a python object of the result. |
963 | + |
964 | + :param imports: Imports needed for the command to run |
965 | + :param remote_cmd: The python command to execute |
966 | + :param target: Unit object or unit name string |
967 | + """ |
968 | + python3 = "python3 -c '{}'" |
969 | + python_cmd = ('import pickle;' |
970 | + 'import base64;' |
971 | + '{}' |
972 | + 'print(base64.b64encode(pickle.dumps({})), end="")' |
973 | + .format(imports, remote_cmd)) |
974 | + cmd = python3.format(python_cmd) |
975 | + results = await self.run_command(cmd, target) |
976 | + return pickle.loads(base64.b64decode(bytes(results['Stdout'][2:-1], 'utf8'))) |
977 | + |
978 | + async def file_stat(self, path, target): |
979 | + """ |
980 | + Run stat on a file. |
981 | + |
982 | + :param path: File path |
983 | + :param target: Unit object or unit name string |
984 | + """ |
985 | + imports = 'import os;' |
986 | + python_cmd = ('os.stat("{}")' |
987 | + .format(path)) |
988 | + print("Calling remote cmd: " + python_cmd) |
989 | + return await self.remote_object(imports, python_cmd, target) |
990 | + |
991 | + async def file_contents(self, path, target): |
992 | + """ |
993 | + Return the contents of a file. |
994 | + |
995 | + :param path: File path |
996 | + :param target: Unit object or unit name string |
997 | + """ |
998 | + cmd = 'cat {}'.format(path) |
999 | + result = await self.run_command(cmd, target) |
1000 | + return result['Stdout'] |
1001 | diff --git a/tests/functional/requirements.txt b/tests/functional/requirements.txt |
1002 | new file mode 100644 |
1003 | index 0000000..3d8a11b |
1004 | --- /dev/null |
1005 | +++ b/tests/functional/requirements.txt |
1006 | @@ -0,0 +1,7 @@ |
1007 | +flake8 |
1008 | +jinja2 |
1009 | +juju |
1010 | +mock |
1011 | +pytest |
1012 | +pytest-asyncio |
1013 | +requests |
1014 | diff --git a/tests/functional/test_deploy.py b/tests/functional/test_deploy.py |
1015 | new file mode 100644 |
1016 | index 0000000..173ab29 |
1017 | --- /dev/null |
1018 | +++ b/tests/functional/test_deploy.py |
1019 | @@ -0,0 +1,150 @@ |
1020 | +"""Tests around Juju deployed charms.""" |
1021 | +import asyncio |
1022 | +import os |
1023 | +import stat |
1024 | +import subprocess |
1025 | + |
1026 | +import jinja2 |
1027 | + |
1028 | +import pytest |
1029 | + |
1030 | +# Treat all tests as coroutines |
1031 | +pytestmark = pytest.mark.asyncio |
1032 | + |
1033 | +CHARM_BUILD_DIR = os.getenv('CHARM_BUILD_DIR', '.').rstrip('/') |
1034 | + |
1035 | +# series = ['bionic', |
1036 | +# pytest.param('eoan', marks=pytest.mark.xfail(reason='canary')), |
1037 | +# ] |
1038 | +series = ['bionic'] |
1039 | +sources = [('local', os.path.join(CHARM_BUILD_DIR, 'prometheus-blackbox-peer-exporter')), |
1040 | + # ('jujucharms', 'cs:...'), |
1041 | + ] |
1042 | + |
1043 | + |
1044 | +def render(templates_dir, template_name, context): |
1045 | + """Render a configuration file from a template.""" |
1046 | + templates = jinja2.Environment(loader=jinja2.FileSystemLoader(templates_dir)) |
1047 | + template = templates.get_template(template_name) |
1048 | + return template.render(context) |
1049 | + |
1050 | +# Uncomment for re-using the current model, useful for debugging functional tests |
1051 | +# @pytest.fixture(scope='module') |
1052 | +# async def model(): |
1053 | +# from juju.model import Model |
1054 | +# model = Model() |
1055 | +# await model.connect_current() |
1056 | +# yield model |
1057 | +# await model.disconnect() |
1058 | + |
1059 | + |
1060 | +# Custom fixtures |
1061 | +@pytest.fixture(params=series) |
1062 | +def series(request): |
1063 | + """Return series scope.""" |
1064 | + return request.param |
1065 | + |
1066 | + |
1067 | +@pytest.fixture(params=sources, ids=[s[0] for s in sources]) |
1068 | +def source(request): |
1069 | + """Return location of deployed charm (local disk or charm store).""" |
1070 | + return request.param |
1071 | + |
1072 | + |
1073 | +@pytest.fixture |
1074 | +async def app(model, series, source): |
1075 | + """Return Juju application object for the deployed series.""" |
1076 | + app_name = 'prometheus-blackbox-peer-exporter-{}-{}'.format(series, source[0]) |
1077 | + return await model._wait_for_new('application', app_name) |
1078 | + |
1079 | + |
1080 | +async def test_bbpeerexporter_deploy(model, series, source, request): |
1081 | + """Start a deploy for each series. |
1082 | + |
1083 | + subprocess is used because libjuju fails with JAAS |
1084 | + https://github.com/juju/python-libjuju/issues/221 |
1085 | + """ |
1086 | + dir_path = os.path.dirname(os.path.realpath(__file__)) |
1087 | + bundle_path = os.path.join(CHARM_BUILD_DIR, 'bundle.yaml') |
1088 | + |
1089 | + ubuntu_appname = "ubuntu-{}-{}".format(series, source[0]) |
1090 | + bb_appname = 'prometheus-blackbox-peer-exporter-{}-{}'.format(series, source[0]) |
1091 | + |
1092 | + context = { |
1093 | + "ubuntu_appname": ubuntu_appname, |
1094 | + "bb_appname": bb_appname, |
1095 | + "series": series, |
1096 | + "charm_path": os.path.join(CHARM_BUILD_DIR, "prometheus-blackbox-peer-exporter") |
1097 | + } |
1098 | + rendered = render(dir_path, "bundle.yaml.j2", context) |
1099 | + with open(bundle_path, "w") as fd: |
1100 | + fd.write(rendered) |
1101 | + |
1102 | + if not os.path.exists(bundle_path): |
1103 | + assert False |
1104 | + |
1105 | + cmd = ["juju", "deploy", "-m", model.info.name, bundle_path] |
1106 | + if request.node.get_closest_marker('xfail'): |
1107 | + # If series is 'xfail' force install to allow testing against versions not in |
1108 | + # metadata.yaml |
1109 | + cmd.append('--force') |
1110 | + subprocess.check_call(cmd) |
1111 | + while True: |
1112 | + try: |
1113 | + model.applications[bb_appname] |
1114 | + break |
1115 | + except KeyError: |
1116 | + await asyncio.sleep(5) |
1117 | + assert True |
1118 | + |
1119 | + |
1120 | +# async def test_charm_upgrade(model, app): |
1121 | +# """Test juju upgrade-charm, from one source to another.""" |
1122 | +# if app.name.endswith('local'): |
1123 | +# pytest.skip("No need to upgrade the local deploy") |
1124 | +# unit = app.units[0] |
1125 | +# await model.block_until(lambda: unit.agent_status == 'idle', |
1126 | +# timeout=60) |
1127 | +# subprocess.check_call(['juju', |
1128 | +# 'upgrade-charm', |
1129 | +# '--switch={}'.format(sources[0][1]), |
1130 | +# '-m', model.info.name, |
1131 | +# app.name, |
1132 | +# ]) |
1133 | +# await model.block_until(lambda: unit.agent_status == 'executing') |
1134 | + |
1135 | + |
1136 | +# Tests |
1137 | +async def test_bbpeerexporter_status(model, app): |
1138 | + """Verify status for all deployed series of the charm.""" |
1139 | + await model.block_until(lambda: app.status == 'active', |
1140 | + timeout=900) |
1141 | + unit = app.units[0] |
1142 | + await model.block_until(lambda: unit.agent_status == 'idle', |
1143 | + timeout=900) |
1144 | + |
1145 | + |
1146 | +# async def test_example_action(app): |
1147 | +# unit = app.units[0] |
1148 | +# action = await unit.run_action('example-action') |
1149 | +# action = await action.wait() |
1150 | +# assert action.status == 'completed' |
1151 | + |
1152 | + |
1153 | +async def test_run_command(app, jujutools): |
1154 | + """Test simple juju-run command.""" |
1155 | + unit = app.units[0] |
1156 | + cmd = 'hostname -i' |
1157 | + results = await jujutools.run_command(cmd, unit) |
1158 | + assert results['Code'] == '0' |
1159 | + assert unit.public_address in results['Stdout'] |
1160 | + |
1161 | + |
1162 | +async def test_file_stat(app, jujutools): |
1163 | + """Verify a file exists in the deployed unit.""" |
1164 | + unit = app.units[0] |
1165 | + path = '/var/lib/juju/agents/unit-{}/charm/metadata.yaml'.format(unit.entity_id.replace('/', '-')) |
1166 | + fstat = await jujutools.file_stat(path, unit) |
1167 | + assert stat.filemode(fstat.st_mode) == '-rw-r--r--' |
1168 | + assert fstat.st_uid == 0 |
1169 | + assert fstat.st_gid == 0 |
1170 | diff --git a/tests/unit/conftest.py b/tests/unit/conftest.py |
1171 | new file mode 100644 |
1172 | index 0000000..5410860 |
1173 | --- /dev/null |
1174 | +++ b/tests/unit/conftest.py |
1175 | @@ -0,0 +1,69 @@ |
1176 | +"""Reusable pytest fixtures for functional testing.""" |
1177 | +import unittest.mock as mock |
1178 | + |
1179 | +import pytest |
1180 | + |
1181 | + |
1182 | +# If layer options are used, add this to prometheusblackboxpeerexporter |
1183 | +# and import layer in lib_bb_peer_exporter |
1184 | +@pytest.fixture |
1185 | +def mock_layers(monkeypatch): |
1186 | + """Mock imported modules and calls.""" |
1187 | + import sys |
1188 | + sys.modules['charms.layer'] = mock.Mock() |
1189 | + sys.modules['charms.reactive.helpers'] = mock.Mock() |
1190 | + sys.modules['reactive'] = mock.Mock() |
1191 | + # sys.modules['lib_network'] = mock.Mock() |
1192 | + # Mock any functions in layers that need to be mocked here |
1193 | + |
1194 | + def options(layer): |
1195 | + # mock options for layers here |
1196 | + # if layer == 'status': |
1197 | + # return None |
1198 | + return None |
1199 | + |
1200 | + # monkeypatch.setattr('lib_bb_peer_exporter.lib_network', options) |
1201 | + |
1202 | + |
1203 | +@pytest.fixture |
1204 | +def mock_hookenv_config(monkeypatch): |
1205 | + """Mock hookenv.config().""" |
1206 | + import yaml |
1207 | + |
1208 | + def mock_config(): |
1209 | + cfg = {} |
1210 | + yml = yaml.safe_load(open('./config.yaml')) |
1211 | + |
1212 | + # Load all defaults |
1213 | + for key, value in yml['options'].items(): |
1214 | + cfg[key] = value['default'] |
1215 | + |
1216 | + # Manually add cfg from other layers here, e.g.: |
1217 | + # cfg['my-other-layer'] = 'mock' |
1218 | + return cfg |
1219 | + |
1220 | + monkeypatch.setattr('lib_bb_peer_exporter.hookenv.config', mock_config) |
1221 | + |
1222 | + |
1223 | +@pytest.fixture |
1224 | +def mock_remote_unit(monkeypatch): |
1225 | + """Mock a remote unit name.""" |
1226 | + monkeypatch.setattr('lib_bb_peer_exporter.hookenv.remote_unit', lambda: 'unit-mock/0') |
1227 | + |
1228 | + |
1229 | +@pytest.fixture |
1230 | +def mock_charm_dir(monkeypatch): |
1231 | + """Mock CHARM_DIR Juju environment variable.""" |
1232 | + monkeypatch.setattr('lib_bb_peer_exporter.hookenv.charm_dir', lambda: '/mock/charm/dir') |
1233 | + |
1234 | + |
1235 | +@pytest.fixture |
1236 | +def bbpeerexporter(tmpdir, mock_layers, mock_hookenv_config, mock_charm_dir, monkeypatch): |
1237 | + """Test the basic structure of the helper class.""" |
1238 | + from lib_bb_peer_exporter import BBPeerExporterHelper |
1239 | + helper = BBPeerExporterHelper() |
1240 | + |
1241 | + # Any other functions that load helper will get this version |
1242 | + monkeypatch.setattr('lib_bb_peer_exporter.BBPeerExporterHelper', lambda: helper) |
1243 | + |
1244 | + return helper |
1245 | diff --git a/tests/unit/requirements.txt b/tests/unit/requirements.txt |
1246 | new file mode 100644 |
1247 | index 0000000..31b5d37 |
1248 | --- /dev/null |
1249 | +++ b/tests/unit/requirements.txt |
1250 | @@ -0,0 +1,7 @@ |
1251 | +charmhelpers |
1252 | +charms.reactive |
1253 | +netifaces |
1254 | +psutil |
1255 | +pyroute2 |
1256 | +pytest |
1257 | +pytest-cov |
1258 | diff --git a/tests/unit/test_lib_bb_peer_exporter.py b/tests/unit/test_lib_bb_peer_exporter.py |
1259 | new file mode 100644 |
1260 | index 0000000..9c0ab0f |
1261 | --- /dev/null |
1262 | +++ b/tests/unit/test_lib_bb_peer_exporter.py |
1263 | @@ -0,0 +1,46 @@ |
1264 | +"""Tests around lib_bb_peer_exporter module.""" |
1265 | + |
1266 | + |
1267 | +class TestLib(): |
1268 | + """Test suite for lib_bb_peer_exporter module.""" |
1269 | + |
1270 | + def test_bbpeerexporter(self, bbpeerexporter): |
1271 | + """See if the helper fixture works to load charm configs.""" |
1272 | + assert isinstance(bbpeerexporter.charm_config, dict) |
1273 | + assert bbpeerexporter.modules |
1274 | + assert bbpeerexporter.scrape_interval == "60s" |
1275 | + assert bbpeerexporter.port_def == 9115 |
1276 | + |
1277 | + def test_get_icmp_and_tcp_targets(self, bbpeerexporter, monkeypatch): |
1278 | + """Correct output over given input.""" |
1279 | + principal_unit = "ubuntu/1" |
1280 | + unit_networks = "[{'iface': 'eth0', 'ip': '10.66.111.152', 'net': '10.66.111.0/24'}]" |
1281 | + unit_ports = "['22', '9115']" |
1282 | + monkeypatch.setattr( |
1283 | + 'lib_network.get_source_ip_ipv4', |
1284 | + lambda x: '10.66.111.1') |
1285 | + monkeypatch.setattr( |
1286 | + 'lib_network.get_unit_ipv4_networks', |
1287 | + lambda: [{'net': '192.168.0.0/23'}, {'net': '10.66.111.0/24'}]) |
1288 | + |
1289 | + expected = { |
1290 | + "networks": ['10.66.111.0/24'], |
1291 | + "icmp_targets": [{ |
1292 | + 'network': '10.66.111.0/24', |
1293 | + 'interface': 'eth0', |
1294 | + 'ip-address': '10.66.111.152', |
1295 | + 'principal-unit': principal_unit, |
1296 | + 'module': 'icmp', |
1297 | + 'source-ip': '10.66.111.1', |
1298 | + }], |
1299 | + "tcp_targets": [{'ip-address': '10.66.111.152', 'port': '22', |
1300 | + 'principal-unit': principal_unit, 'module': 'tcp_connect', |
1301 | + }, |
1302 | + { |
1303 | + 'ip-address': '10.66.111.152', 'port': '9115', |
1304 | + 'principal-unit': principal_unit, 'module': 'tcp_connect', |
1305 | + }], |
1306 | + } |
1307 | + |
1308 | + actual = bbpeerexporter.get_icmp_and_tcp_targets(unit_networks, unit_ports, principal_unit) |
1309 | + assert actual == expected |
1310 | diff --git a/tox.ini b/tox.ini |
1311 | new file mode 100644 |
1312 | index 0000000..bd1f448 |
1313 | --- /dev/null |
1314 | +++ b/tox.ini |
1315 | @@ -0,0 +1,50 @@ |
1316 | +[tox] |
1317 | +skipsdist=True |
1318 | +envlist = unit, functional |
1319 | +skip_missing_interpreters = True |
1320 | + |
1321 | +[testenv] |
1322 | +basepython = python3 |
1323 | +setenv = |
1324 | + PYTHONPATH = . |
1325 | + |
1326 | +[testenv:unit] |
1327 | +commands = pytest -v --ignore {toxinidir}/tests/functional \ |
1328 | + --cov=lib \ |
1329 | + --cov=reactive \ |
1330 | + --cov=actions \ |
1331 | + --cov-report=term \ |
1332 | + --cov-report=annotate:report/annotated \ |
1333 | + --cov-report=html:report/html |
1334 | +deps = -r{toxinidir}/tests/unit/requirements.txt |
1335 | + -r{toxinidir}/requirements.txt |
1336 | +setenv = PYTHONPATH={toxinidir}/lib |
1337 | + |
1338 | +[testenv:functional] |
1339 | +passenv = |
1340 | + HOME |
1341 | + CHARM_BUILD_DIR |
1342 | + PATH |
1343 | + PYTEST_KEEP_MODEL |
1344 | + PYTEST_CLOUD_NAME |
1345 | + PYTEST_CLOUD_REGION |
1346 | +commands = pytest -v --ignore {toxinidir}/tests/unit |
1347 | +deps = -r{toxinidir}/tests/functional/requirements.txt |
1348 | + -r{toxinidir}/requirements.txt |
1349 | + |
1350 | +[testenv:lint] |
1351 | +commands = flake8 |
1352 | +deps = |
1353 | + flake8 |
1354 | + flake8-docstrings |
1355 | + flake8-import-order |
1356 | + pep8-naming |
1357 | + flake8-colors |
1358 | + |
1359 | +[flake8] |
1360 | +exclude = |
1361 | + .git, |
1362 | + __pycache__, |
1363 | + .tox, |
1364 | +max-line-length = 120 |
1365 | +max-complexity = 10 |
1366 | diff --git a/wheelhouse.txt b/wheelhouse.txt |
1367 | index ea2fb5e..f06761b 100644 |
1368 | --- a/wheelhouse.txt |
1369 | +++ b/wheelhouse.txt |
1370 | @@ -1,2 +1,3 @@ |
1371 | netifaces |
1372 | +psutil |
1373 | pyroute2 |
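As context for the removed `_get_source_ip` helper in the diff above: it performs a route lookup via pyroute2's `IPRoute`. The same kernel route selection can be observed with the stdlib alone by connecting a UDP socket, since a datagram `connect()` triggers route selection without sending any packets. A minimal sketch; the function name is illustrative, not part of the charm:

```python
import socket


def get_source_ip(destination, port=9):
    """Return the source IP the kernel selects for traffic to *destination*.

    Connecting a UDP socket performs route selection without emitting any
    packets. The same disclaimer as the pyroute2 version applies: this is a
    route lookup, not the source address of an actual probe packet.
    """
    with socket.socket(socket.AF_INET, socket.SOCK_DGRAM) as sock:
        sock.connect((destination, port))
        return sock.getsockname()[0]


# Routing towards loopback always selects the loopback source address.
src = get_source_ip('127.0.0.1')
```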
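The `remote_object` helper in `juju_tools.py` transports Python objects off the unit as base64-encoded pickles; the decode path -- including the `[2:-1]` slice that strips the `b'...'` repr wrapper from Juju's captured `Stdout` -- can be exercised locally. A round-trip sketch, using a plain dict as a stand-in for the real remote result:

```python
import base64
import pickle

# Remote side: the shipped one-liner pickles the result object and
# base64-encodes it so it survives transport through captured stdout.
remote_result = {'st_size': 42, 'st_uid': 0}  # stand-in for e.g. os.stat()
payload = base64.b64encode(pickle.dumps(remote_result))

# Local side: Juju hands back Stdout as the repr of a bytes object
# ("b'gAN...'"), which is why remote_object() slices off the first two and
# last characters before decoding.
stdout = str(payload)  # mimics results['Stdout']
decoded = pickle.loads(base64.b64decode(bytes(stdout[2:-1], 'utf8')))
```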
Added mostly minor comments, questions and nits.
The only thing of substance I'm wondering about is the use of unit ports (`get_unit_open_ports()`) -- as I understand it, those would be used as probe targets? If so, I'm wondering whether they wouldn't be too dynamic. E.g. how are they going to be updated if services change or are removed?
I also wonder about the scope of the blackbox exporter here: should it measure network health or service health? If a service goes down, its listening port goes down as well, triggering an alert even though the network might be healthy.
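On the open-ports question: the removed code shelled out to `lsof -P -iTCP -sTCP:LISTEN` and the rewrite pulls `psutil` into the wheelhouse; on Linux, both ultimately read `/proc/net/tcp`. A dependency-free sketch of what "unit open ports" means here (Linux-only; the function name is illustrative):

```python
import socket


def get_listening_tcp_ports():
    """Return the sorted list of locally listening TCP ports.

    Parses /proc/net/tcp{,6} directly; connection state 0A is TCP_LISTEN.
    This is the same information `lsof -iTCP -sTCP:LISTEN` or
    psutil.net_connections() would report, without the external dependency.
    """
    ports = set()
    for path in ('/proc/net/tcp', '/proc/net/tcp6'):
        try:
            with open(path) as proc_file:
                rows = proc_file.readlines()[1:]  # skip the header row
        except OSError:
            continue
        for row in rows:
            fields = row.split()
            local_addr, state = fields[1], fields[3]
            if state == '0A':  # TCP_LISTEN
                # The port is hex-encoded after the last ':'.
                ports.add(int(local_addr.rsplit(':', 1)[1], 16))
    return sorted(ports)


# Bind a throwaway listener and confirm it shows up.
server = socket.socket()
server.bind(('127.0.0.1', 0))
server.listen(1)
port = server.getsockname()[1]
found = port in get_listening_tcp_ports()
server.close()
```

Whichever enumeration is used, the dynamism concern above stands: the set changes whenever services are (re)configured, so the peer relation data would need to be refreshed on update-status or a similar hook to stay accurate.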