Merge ~aluria/charm-prometheus-blackbox-exporter:rewrite into ~aluria/charm-prometheus-blackbox-exporter:refactor-peer-discovery

Proposed by Alvaro Uria
Status: Rejected
Rejected by: Alvaro Uria
Proposed branch: ~aluria/charm-prometheus-blackbox-exporter:rewrite
Merge into: ~aluria/charm-prometheus-blackbox-exporter:refactor-peer-discovery
Diff against target: 1373 lines (+977/-224)
21 files modified
.gitignore (+22/-0)
Makefile (+56/-0)
icon.svg (+12/-0)
interfaces/.empty (+1/-0)
layer.yaml (+8/-4)
layers/.empty (+1/-0)
lib/lib_bb_peer_exporter.py (+158/-0)
lib/lib_network.py (+94/-0)
metadata.yaml (+9/-6)
reactive/prometheus-blackbox-exporter.py (+133/-214)
requirements.txt (+1/-0)
tests/functional/bundle.yaml.j2 (+13/-0)
tests/functional/conftest.py (+68/-0)
tests/functional/juju_tools.py (+71/-0)
tests/functional/requirements.txt (+7/-0)
tests/functional/test_deploy.py (+150/-0)
tests/unit/conftest.py (+69/-0)
tests/unit/requirements.txt (+7/-0)
tests/unit/test_lib_bb_peer_exporter.py (+46/-0)
tox.ini (+50/-0)
wheelhouse.txt (+1/-0)
Reviewer                     Review Type    Date Requested    Status
Peter Sabaini (community)                                     Needs Fixing
Alvaro Uria                                                   Pending
Review via email: mp+374432@code.launchpad.net
Revision history for this message
Peter Sabaini (peter-sabaini) wrote :

Added mostly minor comments, questions and nits.

The only thing of substance I'm wondering about is the use of unit ports (.get_unit_open_ports()) -- as I understand it, those would be used as reference points (probe targets)? If that is the case, I'm wondering if these wouldn't be too dynamic. E.g. how are they going to be updated if services change or are removed?

I also wonder about the scope of the bb exporter here. Should the bb exporter measure network health or service health? If a service goes down, its listening port would go down as well, triggering an alert even though the network might be healthy.
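To illustrate the concern: the port discovery is a point-in-time snapshot of whatever is listening on the wildcard address, so the probe-target set silently changes as services start and stop. A minimal standalone sketch (the `Conn` namedtuple is a hypothetical stand-in for psutil's connection records, used so the filter can be shown without a live system):

```python
import socket
from collections import namedtuple

# Hypothetical stand-in for a psutil connection record (status, socket type,
# local address tuple), for illustration only.
Conn = namedtuple("Conn", ["status", "type", "laddr"])


def wildcard_listen_ports(conns):
    """Return TCP ports bound to ANY_ADDRESS, mirroring the filtering
    that the charm's get_unit_open_ports() applies to psutil data."""
    return sorted({str(c.laddr[1]) for c in conns
                   if c.status == "LISTEN"
                   and c.type == socket.SOCK_STREAM
                   and c.laddr[0] in ("0.0.0.0", "::")})


before = [Conn("LISTEN", socket.SOCK_STREAM, ("0.0.0.0", 22)),
          Conn("LISTEN", socket.SOCK_STREAM, ("0.0.0.0", 9115))]
# Stopping the service that listened on 9115 drops it from the snapshot:
after_stop = before[:1]

print(wildcard_listen_ports(before))      # ['22', '9115']
print(wildcard_listen_ports(after_stop))  # ['22']
```

The two snapshots differ, which is exactly the dynamism being questioned: nothing in the charm re-triggers discovery when a service merely stops.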

review: Needs Fixing
Revision history for this message
Alvaro Uria (aluria) wrote :

Thank you for the review. A new MP can be found at [1]. I'll go ahead and mark this MP as rejected.

1. https://code.launchpad.net/~aluria/charm-prometheus-blackbox-exporter-peer/+git/charm-prometheus-blackbox-exporter-peer/+merge/374648

Unmerged commits

371fd4b... by Alvaro Uria

Rewrite charm using template-python-pytest

 * Cloned template-python-pytest
 * Moved the reactive script helpers to lib_bb_peer_exporter (generic)
 and lib_network (network-related helpers)
 * Created minimal unit tests (for the libs) and functional tests
 * wheelhouse.txt installs psutil as well as netifaces and pyroute2.
 * Linting now runs flake8-docstrings, flake8-import-order and other
 extra checks. All scripts have been updated following parser
 recommendations.
 * Better use of the peer-discovery interface. Available (and transient)
 states when units join or leave a peer relation are used to trigger
 config changes.

Preview Diff

diff --git a/.gitignore b/.gitignore
new file mode 100644
index 0000000..32e2995
--- /dev/null
+++ b/.gitignore
@@ -0,0 +1,22 @@
+# Byte-compiled / optimized / DLL files
+__pycache__/
+*.py[cod]
+*$py.class
+
+# Log files
+*.log
+
+.tox/
+.coverage
+
+# vi
+.*.swp
+
+# pycharm
+.idea/
+
+# version data
+repo-info
+
+# reports
+report/*
diff --git a/Makefile b/Makefile
new file mode 100644
index 0000000..eb95ca8
--- /dev/null
+++ b/Makefile
@@ -0,0 +1,56 @@
+PROJECTPATH = $(dir $(realpath $(firstword $(MAKEFILE_LIST))))
+DIRNAME = $(notdir $(PROJECTPATH:%/=%))
+
+ifndef CHARM_BUILD_DIR
+    CHARM_BUILD_DIR := /tmp/$(DIRNAME)-builds
+    $(warning Warning CHARM_BUILD_DIR was not set, defaulting to $(CHARM_BUILD_DIR))
+endif
+
+help:
+	@echo "This project supports the following targets"
+	@echo ""
+	@echo " make help - show this text"
+	@echo " make lint - run flake8"
+	@echo " make test - run the unittests and lint"
+	@echo " make unittest - run the tests defined in the unittest subdirectory"
+	@echo " make functional - run the tests defined in the functional subdirectory"
+	@echo " make release - build the charm"
+	@echo " make clean - remove unneeded files"
+	@echo ""
+
+lint:
+	@echo "Running flake8"
+	@tox -e lint
+
+test: lint unittest functional
+
+unittest:
+	@tox -e unit
+
+functional: build
+	@PYTEST_KEEP_MODEL=$(PYTEST_KEEP_MODEL) \
+	    PYTEST_CLOUD_NAME=$(PYTEST_CLOUD_NAME) \
+	    PYTEST_CLOUD_REGION=$(PYTEST_CLOUD_REGION) \
+	    CHARM_BUILD_DIR=$(CHARM_BUILD_DIR) \
+	    tox -e functional
+
+build:
+	@echo "Building charm to base directory $(CHARM_BUILD_DIR)"
+	@CHARM_LAYERS_DIR=./layers \
+	    CHARM_INTERFACES_DIR=./interfaces \
+	    TERM=linux \
+	    CHARM_BUILD_DIR=$(CHARM_BUILD_DIR) \
+	    charm build . --force
+
+release: clean build
+	@echo "Charm is built at $(CHARM_BUILD_DIR)"
+
+clean:
+	@echo "Cleaning files"
+	@if [ -d $(CHARM_BUILD_DIR) ] ; then rm -r $(CHARM_BUILD_DIR) ; fi
+	@if [ -d .tox ] ; then rm -r .tox ; fi
+	@if [ -d .pytest_cache ] ; then rm -r .pytest_cache ; fi
+	@find . -iname __pycache__ -exec rm -r {} +
+
+# The targets below don't depend on a file
+.PHONY: lint test unittest functional build release clean help
diff --git a/icon.svg b/icon.svg
new file mode 100644
index 0000000..ffa6296
--- /dev/null
+++ b/icon.svg
@@ -0,0 +1,12 @@
+<?xml version="1.0" encoding="UTF-8"?>
+<svg width="100px" height="100px" viewBox="0 0 100 100" version="1.1" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink">
+    <!-- Generator: Sketch 45.2 (43514) - http://www.bohemiancoding.com/sketch -->
+    <title>prometheus</title>
+    <desc>Created with Sketch.</desc>
+    <defs></defs>
+    <g id="Page-1" stroke="none" stroke-width="1" fill="none" fill-rule="evenodd">
+        <g id="prometheus" fill-rule="nonzero" fill="#B8B8B8">
+            <path d="M50.0004412,3.12178676e-15 C22.3871247,3.12178676e-15 0,22.3848726 0,49.9995588 C0,77.6133626 22.3871247,100 50.0004412,100 C77.6137577,100 100,77.6133626 100,49.9995588 C100,22.3848726 77.6128753,-1.77635684e-15 50.0004412,3.12178676e-15 Z M49.8864141,88.223871 C42.8334797,88.223871 37.1152482,83.6412396 37.1152482,77.9899953 L62.6575799,77.9899953 C62.6575799,83.6404692 56.9393484,88.223871 49.8864141,88.223871 Z M70.9802642,74.6007896 L28.7901876,74.6007896 L28.7901876,67.159792 L70.9810563,67.159792 L70.9810563,74.6007896 L70.9802642,74.6007896 Z M70.8289715,63.3298895 L28.9105881,63.3298895 C28.771177,63.1734883 28.6285974,63.0193985 28.493939,62.860686 C24.1753633,57.7603128 23.1582959,55.0976406 22.1705366,52.3841188 C22.1539023,52.2947467 27.4071661,53.4280774 31.1324525,54.2432125 C31.1324525,54.2432125 33.0493552,54.6746641 35.8518351,55.1716037 C33.1610425,52.1036753 31.5633596,48.2036619 31.5633596,44.2173581 C31.5633596,35.4658266 38.4642091,27.8183486 35.974612,21.6370353 C38.397672,21.8288772 40.9894511,26.6110549 41.164507,34.0882636 C43.740444,30.6258652 44.8185037,24.3027893 44.8185037,20.4258893 C44.8185037,16.4118494 47.5378123,11.7490913 50.257913,11.5896084 C47.8332688,15.4765242 50.886055,18.8087166 53.5998189,27.0748652 C54.6176783,30.1797752 54.4877725,35.4049611 55.2735442,38.7186628 C55.5341479,31.8362408 56.7508266,21.794207 61.2397056,18.3271859 C59.2594343,22.6933211 61.5327858,28.1565758 63.0876948,30.7830368 C65.5963025,35.020507 67.1171509,38.2309685 67.1171509,44.302878 C67.1171509,48.3739312 65.5717472,52.2069155 62.964918,55.2031922 C65.9289881,54.6623369 67.9757966,54.1746426 67.9757966,54.1746426 L77.6014994,52.3479077 C77.6022915,52.3471373 76.2034279,57.9421388 70.8289715,63.3298895 Z" id="path3023"></path>
+        </g>
+    </g>
+</svg>
\ No newline at end of file
diff --git a/interfaces/.empty b/interfaces/.empty
new file mode 100644
index 0000000..792d600
--- /dev/null
+++ b/interfaces/.empty
@@ -0,0 +1 @@
+#
diff --git a/layer.yaml b/layer.yaml
index 6036e6d..929f452 100644
--- a/layer.yaml
+++ b/layer.yaml
@@ -1,11 +1,15 @@
+# exclude the interfaces and layers folders we use for submodules
+exclude:
+  - interfaces
+  - layers
+# include required layers here
 includes:
   - 'layer:basic'
-  - "interface:juju-info"
-  - 'interface:http'
+  - 'layer:status'
+  - 'interface:http'
+  - 'interface:juju-info'
   - 'interface:peer-discovery'
-repo: 'https://git.launchpad.net/prometheus-blackbox-exporter-charm'
-ignore:
-  - '.*.swp'
+repo: 'https://git.launchpad.net/prometheus-blackbox-peer-exporter-charm'
 options:
   basic:
     use_venv: true
diff --git a/layers/.empty b/layers/.empty
new file mode 100644
index 0000000..792d600
--- /dev/null
+++ b/layers/.empty
@@ -0,0 +1 @@
+#
diff --git a/lib/lib_bb_peer_exporter.py b/lib/lib_bb_peer_exporter.py
new file mode 100644
index 0000000..6de8dc2
--- /dev/null
+++ b/lib/lib_bb_peer_exporter.py
@@ -0,0 +1,158 @@
1"""General helpers."""
2import os
3import subprocess
4
5from charmhelpers import fetch
6from charmhelpers.core import hookenv, host, unitdata
7from charmhelpers.core.templating import render
8
9from charms.reactive.helpers import any_file_changed, data_changed
10
11import lib_network
12
13import yaml
14
15
16APT_PKG_NAME = 'prometheus-blackbox-exporter'
17SVC_NAME = 'prometheus-blackbox-exporter'
18SVC_PATH = os.path.join('/usr/bin', SVC_NAME)
19PORT_DEF = 9115
20BLACKBOX_EXPORTER_YML_TMPL = 'blackbox.yaml.j2'
21CONF_FILE_PATH = '/etc/prometheus/blackbox.yml'
22
23
24class BBPeerExporterError(Exception):
25 """Handle exceptions encountered in BBPeerExporterHelper."""
26
27 pass
28
29
30class BBPeerExporterHelper():
31 """General helpers."""
32
33 def __init__(self):
34 """Load config.yaml."""
35 self.charm_config = hookenv.config()
36
37 @property
38 def config_changed(self):
39 """Verify and update checksum if config has changed."""
40 return data_changed('blackbox-peer-exporter.config', self.charm_config)
41
42 def peer_relation_data_changed(self, keymap):
43 """Verify and update checksum if peer relation data has changed."""
44 return data_changed('blackbox-peer.relation_data', keymap)
45
46 def bbexporter_relation_data_changed(self, keymap):
47 """Verify and update checksum if provides relation data has changed."""
48 return data_changed('blackbox-exporter.relation_data', keymap)
49
50 @property
51 def is_blackbox_exporter_relation_enabled(self):
52 """Verify if the blackbox-export relation exists."""
53 kv = unitdata.kv()
54 return kv.get('blackbox_exporter', False)
55
56 @property
57 def enable_blackbox_exporter_relation(self):
58 """Enable the blackbox-export flag for one time functions."""
59 kv = unitdata.kv()
60 kv.set('blackbox_exporter', True)
61
62 @property
63 def disable_blackbox_exporter_relation(self):
64 """Disable the blackbox-export flag for one time functions."""
65 kv = unitdata.kv()
66 kv.set('blackbox_exporter', False)
67
68 @property
69 def modules(self):
70 """Return the modules config parameter."""
71 return self.charm_config['modules']
72
73 @property
74 def scrape_interval(self):
75 """Return the scrape-interval config parameter."""
76 return self.charm_config['scrape-interval']
77
78 @property
79 def port_def(self):
80 """Return the port exposed by blackbox-exporter."""
81 return PORT_DEF
82
83 @property
84 def templates_changed(self):
85 """Verify if any stored template has changed."""
86 return any_file_changed(['templates/{}'.format(tmpl)
87 for tmpl in [BLACKBOX_EXPORTER_YML_TMPL]])
88
89 def install_packages(self):
90 """Install the APT package and sets Linux capabilities."""
91 fetch.install(APT_PKG_NAME, fatal=True)
92 cmd = ["setcap", "cap_net_raw+ep", SVC_PATH]
93 try:
94 subprocess.check_output(cmd)
95 except subprocess.CalledProcessError as error:
96 hookenv.log('unable to set linux capabilities: {}'.format(str(error)),
97 hookenv.ERROR)
98 raise BBPeerExporterError('Unable to set linux capabilities')
99
100 def render_modules(self):
101 """Generate /etc/prometheus/blackbox.yml from the template."""
102 try:
103 modules = yaml.safe_load(self.modules)
104 if 'modules' in modules:
105 modules = modules['modules']
106 except (yaml.parser.ParserError, yaml.scanner.ScannerError) as error:
107 hookenv.log('error retrieving modules yaml config: {}'.format(str(error)),
108 hookenv.ERROR)
109 # render "null" value
110 modules = None
111
112 context = {'modules': yaml.safe_dump(modules, default_flow_style=False)}
113 render(source=BLACKBOX_EXPORTER_YML_TMPL, target=CONF_FILE_PATH, context=context)
114 hookenv.open_port(PORT_DEF)
115
116 def restart_bbexporter(self):
117 """Restart the exporter daemon."""
118 if not host.service_running(SVC_NAME):
119 hookenv.log('Starting {}...'.format(SVC_NAME))
120 host.service_start(SVC_NAME)
121 else:
122 hookenv.log('Restarting {}, config file changed...'.format(SVC_NAME))
123 host.service_restart(SVC_NAME)
124
125 def get_icmp_and_tcp_targets(self, unit_networks, unit_ports, principal_unit):
126 """Return a dict with structured unit-networks and unit-ports for Prometheus."""
127 peers_net_info = {key: [] for key in ('networks', 'icmp_targets', 'tcp_targets')}
128 unit_networks = [] if unit_networks is None else lib_network.safe_loads(unit_networks)
129 unit_ports = [] if unit_ports is None else lib_network.safe_loads(unit_ports)
130 if not unit_networks:
131 return peers_net_info
132
133 local_networks = set(net['net'] for net in lib_network.get_unit_ipv4_networks())
134 for unit_network in unit_networks:
135 # We can't probe a network we can't reach directly
136 if unit_network['net'] not in local_networks:
137 continue
138
139 probe_dst_ip = unit_network['ip']
140 peers_net_info['networks'].append(unit_network['net'])
141 peers_net_info['icmp_targets'].append({
142 'network': unit_network['net'],
143 'interface': unit_network['iface'],
144 'ip-address': probe_dst_ip,
145 'principal-unit': principal_unit,
146 'module': 'icmp',
147 'source-ip': lib_network.get_source_ip_ipv4(probe_dst_ip),
148 })
149
150 for port in unit_ports:
151 peers_net_info['tcp_targets'].append({
152 'ip-address': probe_dst_ip,
153 'port': port,
154 'principal-unit': principal_unit,
155 'module': 'tcp_connect',
156 })
157
158 return peers_net_info
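The reachability filter in get_icmp_and_tcp_targets() can be exercised in isolation. A minimal standalone sketch of the same selection logic (no charmhelpers dependencies; the sample peer/local data is hypothetical):

```python
def icmp_targets(peer_networks, local_networks, principal_unit):
    """Build ICMP probe targets only for peer networks this unit can
    reach directly, mirroring the charm's get_icmp_and_tcp_targets()."""
    local = {net["net"] for net in local_networks}
    targets = []
    for peer in peer_networks:
        # We can't probe a network we can't reach directly
        if peer["net"] not in local:
            continue
        targets.append({"network": peer["net"],
                        "interface": peer["iface"],
                        "ip-address": peer["ip"],
                        "principal-unit": principal_unit,
                        "module": "icmp"})
    return targets


# Example: the peer shares two networks, but only 10.0.0.0/24 is local here,
# so only one ICMP target is produced.
peers = [{"iface": "eth0", "ip": "10.0.0.7", "net": "10.0.0.0/24"},
         {"iface": "eth1", "ip": "192.168.9.2", "net": "192.168.9.0/24"}]
local = [{"iface": "eth0", "ip": "10.0.0.5", "net": "10.0.0.0/24"}]
targets = icmp_targets(peers, local, "ubuntu/1")
```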
diff --git a/lib/lib_network.py b/lib/lib_network.py
new file mode 100644
index 0000000..1d13452
--- /dev/null
+++ b/lib/lib_network.py
@@ -0,0 +1,94 @@
1"""Network helpers."""
2import ast
3import ipaddress
4import re
5import socket
6
7import netifaces
8
9import psutil
10
11import pyroute2
12
13IFACE_BLACKLIST_PATTERN = '^(lo|virbr|docker|lxdbr|vhost|tun|tap)'
14
15
16def _is_valid_ipv4_addr(address):
17 """Filter out non-IPv4 addresses."""
18 try:
19 ip = ipaddress.IPv4Address(address.get('addr'))
20 invalid = any((ip.is_multicast, ip.is_reserved, ip.is_link_local, ip.is_loopback))
21 return not invalid
22 except (ipaddress.AddressValueError, ValueError):
23 return False
24
25
26def _get_local_ifaces():
27 """Ignore interfaces used by Docker, KVM, Contrail, etc."""
28 iface_blacklist_re = re.compile(IFACE_BLACKLIST_PATTERN)
29 ifaces_whitelist = (iface for iface in netifaces.interfaces()
30 if not iface_blacklist_re.search(iface))
31 return ifaces_whitelist
32
33
34def _get_ipv4_addresses(iface):
35 """Return all IPv4 IPs from a given interface.
36
37 Note that the interface name will exist as this function is called after
38 the list of interfaces has been retrieved.
39 """
40 ipv4_addresses = netifaces.ifaddresses(iface).get(netifaces.AF_INET, [])
41 filtered_ipv4_addrs = (ipv4_addr for ipv4_addr in ipv4_addresses
42 if _is_valid_ipv4_addr(ipv4_addr))
43 return filtered_ipv4_addrs
44
45
46def get_source_ip_ipv4(destination):
47 """Return the src ip of a connection towards "destination".
48
49 This function is similar to running "ip route get <ipv4_addr>"
50
51 Disclaimer: source ip configuration for the blackbox exporter is done in
52 the module definition. To avoid having to create a module for every local
53 address we are simply letting the exporter use whatever source IP the
54 kernel deems appropriate. Be aware that this function is doing a route
55 lookup, and it is not - strictly speaking - returning the source IP of the
56 packet the blackbox exporter will generate. The two addresses should always
57 be the same, but YMMV.
58 """
59 with pyroute2.IPRoute() as ipr:
60 routes = ipr.route('get', dst=destination)
61 return routes[0].get_attr('RTA_PREFSRC')
62
63
64def get_unit_ipv4_networks():
65 """Return a list of IPv4 network blocks available on the host."""
66 networks = []
67 for iface in _get_local_ifaces():
68 for ip_addr in _get_ipv4_addresses(iface):
69 network = "{}/{}".format(ip_addr['addr'], ip_addr['netmask'])
70 ip_ipv4 = ipaddress.IPv4Interface(network)
71 networks.append(
72 {"iface": iface,
73 "ip": str(ip_ipv4.ip),
74 "net": str(ip_ipv4.network)}
75 )
76 return networks
77
78
79def get_unit_open_ports():
80 """Return TCP connections listening on ANY_ADDRESS."""
81 ports = [str(conn.laddr.port) for conn in psutil.net_connections()
82 if conn.status == "LISTEN" and
83 conn.type == socket.SOCK_STREAM and
84 conn.family.value in (netifaces.AF_INET, netifaces.AF_INET6) and
85 conn.laddr.ip in ("0.0.0.0", "::")]
86
87 return list(set(ports))
88
89
90def safe_loads(data_str):
91 """Evaluate a string and convert it into a Python object."""
92 if not isinstance(data_str, str) or not data_str:
93 return []
94 return ast.literal_eval(data_str)
diff --git a/metadata.yaml b/metadata.yaml
index 89d3c6e..6e0be63 100644
--- a/metadata.yaml
+++ b/metadata.yaml
@@ -1,10 +1,13 @@
-name: prometheus-blackbox-exporter
-display-name: Prometheus Blackbox Exporter
-summary: Blackbox exporter for Prometheus
-maintainer: Jacek Nykis <jacek.nykis@canonical.com>
+name: prometheus-blackbox-peer-exporter
+display-name: Prometheus Blackbox Peer Exporter
+summary: Blackbox peer exporter for Prometheus
+maintainer: Llama (LMA) Charmers <llama-charmers@lists.launchpad.net>
 description: |
-  The blackbox exporter allows blackbox probing of
-  endpoints over HTTP, HTTPS, DNS, TCP and ICMP.
+  The blackbox peer exporter allows blackbox probing of
+  endpoints over HTTP, HTTPS, DNS, TCP and ICMP. This
+  charm allows all units to probe their peers, whereas
+  the prometheus-blackbox-exporter charm deploys a single
+  unit that can probe external endpoints.
 tags:
   - monitoring
 series:
diff --git a/reactive/prometheus-blackbox-exporter.py b/reactive/prometheus-blackbox-exporter.py
index 7d4e877..a057bba 100644
--- a/reactive/prometheus-blackbox-exporter.py
+++ b/reactive/prometheus-blackbox-exporter.py
@@ -1,250 +1,169 @@
1import ast1"""Reactive script describing the prometheus-blackbox-peer-exporter behavior."""
2import re
3import subprocess
4import yaml
5import sys
6
7from charmhelpers.core import host, hookenv
8from charmhelpers.core.templating import render
9from charms.reactive import (
10 when, when_not, set_state, remove_state
11)
12from ipaddress import (
13 IPv4Interface, IPv4Address
14)
15from pyroute2 import IPRoute
16from netifaces import interfaces, ifaddresses, AF_INET
17from charms.reactive.helpers import any_file_changed, data_changed
18from charms.reactive import hook
192
20from charmhelpers.fetch import apt_install3from charmhelpers.core import hookenv
214
22hooks = hookenv.Hooks()5from charms.layer import status
6from charms.reactive import (
7 clear_flag, is_flag_set, set_flag, when, when_any, when_not
8)
239
24APT_PKG_NAME = 'prometheus-blackbox-exporter'10from lib_bb_peer_exporter import BBPeerExporterError, BBPeerExporterHelper
25SVC_NAME = 'prometheus-blackbox-exporter'
26EXECUTABLE = '/usr/bin/prometheus-blackbox-exporter'
27PORT_DEF = 9115
28BLACKBOX_EXPORTER_YML_TMPL = 'blackbox.yaml.j2'
29CONF_FILE_PATH = '/etc/prometheus/blackbox.yml'
30IFACE_BLACKLIST_PATTERN = re.compile('^(lo|virbr|docker|lxdbr|vhost|tun|tap)')
3111
12import lib_network
3213
33def templates_changed(tmpl_list):14helper = BBPeerExporterHelper()
34 return any_file_changed(['templates/{}'.format(x) for x in tmpl_list])
3515
3616
37@when_not('blackbox-exporter.installed')17@when_not('prometheus-blackbox-peer-exporter.installed')
38def install_packages():18def install_packages():
39 hookenv.status_set('maintenance', 'Installing software')19 """Install APT package or get blocked until user action."""
40 config = hookenv.config()20 status.maintenance('Installing software')
41 apt_install(APT_PKG_NAME, fatal=True)
42 cmd = ["sudo", "setcap", "cap_net_raw+ep", EXECUTABLE]
43 subprocess.check_output(cmd)
44 set_state('blackbox-exporter.installed')
45 set_state('blackbox-exporter.do-check-reconfig')
46
47
48def get_modules():
49 config = hookenv.config()
50 try:21 try:
51 modules = yaml.safe_load(config.get('modules'))22 helper.install_packages()
52 except:23 set_flag('prometheus-blackbox-peer-exporter.installed')
53 return None24 set_flag('prometheus-blackbox-peer-exporter.do-check-reconfig')
5425 except BBPeerExporterError as error:
55 if 'modules' in modules:26 status.blocked(error)
56 return yaml.safe_dump(modules['modules'], default_flow_style=False)
57 else:
58 return yaml.safe_dump(modules, default_flow_style=False)
5927
6028
61@when('blackbox-exporter.installed')29@when('prometheus-blackbox-peer-exporter.installed')
62@when('blackbox-exporter.do-reconfig-yaml')30@when('prometheus-blackbox-peer-exporter.do-reconfig-yaml')
63def write_blackbox_exporter_config_yaml():31def write_blackbox_exporter_config_yaml():
64 modules = get_modules()32 """Generate /etc/prometheus/blackbox.yml."""
65 render(source=BLACKBOX_EXPORTER_YML_TMPL,33 helper.render_modules()
66 target=CONF_FILE_PATH,34 set_flag('prometheus-blackbox-peer-exporter.do-restart')
67 context={'modules': modules}35 clear_flag('prometheus-blackbox-peer-exporter.do-reconfig-yaml')
68 )
69 hookenv.open_port(PORT_DEF)
70 set_state('blackbox-exporter.do-restart')
71 remove_state('blackbox-exporter.do-reconfig-yaml')
7236
7337
74@when('blackbox-exporter.started')38@when('prometheus-blackbox-peer-exporter.started')
75def check_config():39def check_config():
76 set_state('blackbox-exporter.do-check-reconfig')40 """Trigger a config check."""
41 set_flag('prometheus-blackbox-peer-exporter.do-check-reconfig')
7742
7843
79@when('blackbox-exporter.do-check-reconfig')44@when('prometheus-blackbox-peer-exporter.do-check-reconfig')
80def check_reconfig_blackbox_exporter():45def check_reconfig_blackbox_exporter():
81 config = hookenv.config()46 """Trigger a blackbox.yml config regeneration on changes."""
8247 if helper.config_changed or helper.templates_changed:
83 if data_changed('blackbox-exporter.config', config):48 set_flag('prometheus-blackbox-peer-exporter.do-reconfig-yaml')
84 set_state('blackbox-exporter.do-reconfig-yaml')
85
86 if templates_changed([BLACKBOX_EXPORTER_YML_TMPL]):
87 set_state('blackbox-exporter.do-reconfig-yaml')
8849
89 remove_state('blackbox-exporter.do-check-reconfig')50 clear_flag('prometheus-blackbox-peer-exporter.do-check-reconfig')
9051
9152
92@when('blackbox-exporter.do-restart')53@when('prometheus-blackbox-peer-exporter.do-restart')
93def restart_blackbox_exporter():54def restart_blackbox_exporter():
94 if not host.service_running(SVC_NAME):55 """Restart the blackbox-exporter daemon."""
95 hookenv.log('Starting {}...'.format(SVC_NAME))56 helper.restart_bbexporter()
96 host.service_start(SVC_NAME)57 status.active('Ready')
97 else:58 set_flag('prometheus-blackbox-peer-exporter.started')
98 hookenv.log('Restarting {}, config file changed...'.format(SVC_NAME))59 clear_flag('prometheus-blackbox-peer-exporter.do-restart')
99 host.service_restart(SVC_NAME)60
100 hookenv.status_set('active', 'Ready')
101 set_state('blackbox-exporter.started')
102 remove_state('blackbox-exporter.do-restart')
10361
104# Relations62# Relations
105@hook('blackbox-peer-relation-{joined,departed}')63@when_any('blackbox-peer.joined',
106def configure_blackbox_exporter_relation(peers):64 'blackbox-peer.departed')
107 hookenv.log('Running blackbox exporter relation.')65def blackbox_peer_changed():
108 hookenv.status_set('maintenance', 'Configuring blackbox peer relations.')66 """Shares unit data with its peers in the blackbox-peer relation.
109 config = hookenv.config()67
68 If a single unit exists, data will NOT be shared (the main goal of this
69 service is to probe against its peers). When a unit is removed, all its
70 peers may need to rebuild their probes (one less).
71 """
72 hookenv.log('Running blackbox peer relations.')
73 status.maintenance('Configuring blackbox peer relations.')
74 rids = hookenv.relation_ids("blackbox-peer")
75 if not rids or len(rids) != 1:
76 hookenv.log('[*] More than one blackbox-peer relation', hookenv.ERROR)
77 return
78
79 rid, unit = rids[0], hookenv.local_unit()
80 relation_settings = hookenv.relation_get(rid=rid, unit=unit)
81 relation_settings.update({
82 'principal-unit': hookenv.principal_unit(),
83 'private-address': hookenv.unit_get('private-address'),
84 'unit-networks': lib_network.get_unit_ipv4_networks(),
85 'unit-ports': lib_network.get_unit_open_ports(),
86 })
87 if helper.peer_relation_data_changed(relation_settings):
88 hookenv.relation_set(relation_id=rid, relation_settings=relation_settings)
11089
111 icmp_targets = []90 # Share with Prometheus - all peer units' network data with present unit
112 tcp_targets = []91 if is_flag_set('blackbox-exporter.available'):
113 networks = []92 configure_blackbox_exporter_relation()
114 for rid in hookenv.relation_ids('blackbox-peer'):
115 for unit in hookenv.related_units(rid):
116 unit_ports = hookenv.relation_get('unit-ports', rid=rid, unit=unit)
117 principal_unit = hookenv.relation_get('principal-unit', rid=rid, unit=unit)
118 unit_networks = hookenv.relation_get('unit-networks', rid=rid, unit=unit)
119 if unit_networks is not None:
120 unit_networks = ast.literal_eval(unit_networks)
121 for unit_network in unit_networks:
122 # Chcek if same network exists on this unit
123 if unit_network['net'] in [net['net'] for net in get_unit_networks()]:
124 networks.append(unit_network['net'])
125 probe_dst_ip = unit_network['ip']
126 icmp_targets.append({
127 'network': unit_network['net'],
128 'interface': unit_network['iface'],
129 'ip-address': probe_dst_ip,
130 'principal-unit': principal_unit,
131 'module': 'icmp',
132 'source-ip': _get_source_ip(probe_dst_ip)
133 })
134
135 if unit_ports is not None:
136 unit_ports = ast.literal_eval(unit_ports)
137 for port in unit_ports:
138 tcp_targets.append({
139 'ip-address': unit_network['ip'],
140 'port': port,
141 'principal-unit': principal_unit,
142 'module': 'tcp_connect',
143 })
144
145 relation_settings = {}
146 relation_settings['icmp_targets'] = icmp_targets
147 relation_settings['tcp_targets'] = tcp_targets
148 relation_settings['networks'] = networks
149 relation_settings['ip_address'] = hookenv.unit_get('private-address')
150 relation_settings['port'] = PORT_DEF
151 relation_settings['job_name'] = hookenv.principal_unit()
152 relation_settings['scrape_interval'] = config.get('scrape-interval')
15393
94 status.active('Ready')
15495
155 for rel_id in hookenv.relation_ids('blackbox-exporter'):
156 relation_settings['ip_address'] = \
157 hookenv.ingress_address(rid=rel_id, unit=hookenv.local_unit())
158 hookenv.relation_set(relation_id=rel_id, relation_settings=relation_settings)
159
160 hookenv.status_set('active', 'Ready')
16196
16297@when('blackbox-peer.connected')
163def _get_source_ip(destination):98@when('blackbox-exporter.available')
164 """99def blackbox_peer_and_exporter_any_hook():
165 Get the source ip of a connection towards destination without having to run100 """First time the prometheus2:blackbox-exporter relation is seen."""
166 ip r g via subprocess101 if not helper.is_blackbox_exporter_relation_enabled:
167102 helper.enable_blackbox_exporter_relation
168 Disclaimer: source ip configuration for the blackbox exporter is done in103 configure_blackbox_exporter_relation()
169 the module definition. To avoid having to create a module for every local
170 address we are simply letting the exporter use whatever source IP the
171 kernel deems appropriate. Be aware that this function is doing a route
172 lookup, and it is not - strictly speaking - returning the source IP of the
173 packet the blackbox exporter will generate. The two addresses should always
174 be the same, but YMMV.
175 """
176 with IPRoute() as ipr:
177 routes = ipr.route('get', dst=destination)
178 return routes[0].get_attr('RTA_PREFSRC')
179104
180105
181def _is_valid_ip(address):106@when('blackbox-peer.connected')
182 """107@when_not('blackbox-exporter.available')
183 Filter out "uninteresting" addresses108def blackbox_peer_and_no_exporter_any_hook():
184 """109 """When the prometheus2:blackbox-exporter relation is gone."""
185 ip = IPv4Address(address.get('addr'))110 if helper.is_blackbox_exporter_relation_enabled:
186 return not (ip.is_multicast or111 helper.disable_blackbox_exporter_relation
187 ip.is_reserved or
188 ip.is_link_local or
189 ip.is_loopback)
190112
191113
192def _is_valid_iface(iface):114def configure_blackbox_exporter_relation():
193 """115 """Retrieve data from peers and share the bundle of probes with Prometheus."""
194 Ignore interfaces used by Docker, KVM, Contrail, etc116 hookenv.log('Running blackbox exporter relation.')
195 """
196 if IFACE_BLACKLIST_PATTERN.search(iface):
197 return False
198 else:
199 return True
200117
118 # Reads peer relations
119 rids = hookenv.relation_ids("blackbox-peer")
120 if not rids or len(rids) != 1:
121 hookenv.log('[*] More than one blackbox-peer relation', hookenv.ERROR)
122 return
201123
202def get_unit_networks():124 status.maintenance('Configuring blackbox peer relations.')
203 networks = []125 networks = []
204 for iface in filter(_is_valid_iface, interfaces()):126 icmp_targets = []
[removed -- reactive/prometheus-blackbox-exporter.py (old), lines 205-250]
205        ip_addresses = ifaddresses(iface)
206        for ip_address in filter(_is_valid_ip, ip_addresses.get(AF_INET, [])):
207            ip_v4 = IPv4Interface(
208                "{}/{}".format(ip_address['addr'], ip_address['netmask'])
209            )
210            networks.append(
211                {"iface": iface,
212                 "ip": str(ip_v4.ip),
213                 "net": str(ip_v4.network)}
214            )
215        return networks
216
217    def get_principal_unit_open_ports():
218        cmd = "lsof -P -iTCP -sTCP:LISTEN".split()
219        result = subprocess.check_output(cmd)
220        result = result.decode(sys.stdout.encoding)
221
222        ports = []
223        for r in result.split('\n'):
224            for p in r.split():
225                if '*:' in p:
226                    ports.append(p.split(':')[1])
227        ports = [p for p in set(ports)]
228
229        return ports
230
231    @hook('blackbox-peer-relation-{joined,departed}')
232    def blackbox_peer_departed(peers):
233        hookenv.log('Blackbox peer unit joined/departed.')
234        set_state('blackbox-exporter.redo-peer-relation')
235
236    @when('blackbox-exporter.redo-peer-relation')
237    def setup_blackbox_peer_relation(peers):
238        # Set blackbox-peer relations
239        hookenv.log('Running blackbox peer relations.')
240        hookenv.status_set('maintenance', 'Configuring blackbox peer relations.')
241        for rid in hookenv.relation_ids('blackbox-peer'):
242            relation_settings = hookenv.relation_get(rid=rid, unit=hookenv.local_unit())
243            relation_settings['principal-unit'] = hookenv.principal_unit()
244            relation_settings['private-address'] = hookenv.unit_get('private-address')
245            relation_settings['unit-networks'] = get_unit_networks()
246            relation_settings['unit-ports'] = get_principal_unit_open_ports()
247            hookenv.relation_set(relation_id=rid, relation_settings=relation_settings)
248
249        hookenv.status_set('active', 'Ready')
250        remove_state('blackbox-exporter.redo-peer-relation')

[added -- reactive/prometheus-blackbox-exporter.py (new), lines 127-169]
127        tcp_targets = []
128        rid = rids[0]
129        for peer_unit in hookenv.related_units(rid):
130            # principal-unit: ubuntu/1
131            principal_unit = hookenv.relation_get('principal-unit', rid=rid, unit=peer_unit)
132            # unit-networks: '[{''iface'': ''eth0'', ''ip'': ''10.66.111.152'', ''net'': ''10.66.111.0/24''}]'
133            unit_networks = hookenv.relation_get('unit-networks', rid=rid, unit=peer_unit)
134            # unit-ports: '[''22'', ''9115'']'
135            unit_ports = hookenv.relation_get('unit-ports', rid=rid, unit=peer_unit)
136
137            peer_net_info = helper.get_icmp_and_tcp_targets(
138                unit_networks, unit_ports, principal_unit)
139
140            if peer_net_info['networks']:
141                networks.extend(peer_net_info['networks'])
142
143            if peer_net_info['icmp_targets']:
144                icmp_targets.extend(peer_net_info['icmp_targets'])
145
146            if peer_net_info['tcp_targets']:
147                tcp_targets.extend(peer_net_info['tcp_targets'])
148
149        relation_settings = {
150            'networks': networks,
151            'icmp_targets': icmp_targets,
152            'tcp_targets': tcp_targets,
153            'ip_address': hookenv.unit_get('private-address'),
154            'port': helper.port_def,
155            'job_name': hookenv.principal_unit(),
156            'scrape_interval': helper.scrape_interval,
157        }
158
159        if not helper.bbexporter_relation_data_changed(relation_settings):
160            status.active('Ready')
161            return
162
163        # Shares information with the "prometheus" application
164        for rel_id in hookenv.relation_ids('blackbox-exporter'):
165            relation_settings['ip_address'] = \
166                hookenv.ingress_address(rid=rel_id, unit=hookenv.local_unit())
167            hookenv.relation_set(relation_id=rel_id, relation_settings=relation_settings)
168
169        status.active('Ready')
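The removed `get_principal_unit_open_ports()` scrapes `lsof -P -iTCP -sTCP:LISTEN` output but its `'*:' in p` check only matches wildcard binds. A stand-alone sketch of a sturdier parse that also catches address-specific listeners, run against a canned sample (the sample is illustrative, not captured from a real unit):

```python
def parse_listen_ports(lsof_output):
    """Extract listening TCP ports from `lsof -P -iTCP -sTCP:LISTEN` output."""
    ports = set()
    for line in lsof_output.splitlines()[1:]:   # skip the header row
        fields = line.split()
        if len(fields) < 9:
            continue
        # NAME column, e.g. "*:22" or "10.66.111.152:9115"
        addr, _, port = fields[8].rpartition(':')
        if addr and port.isdigit():
            ports.add(port)
    return sorted(ports, key=int)


# Canned sample for illustration only.
SAMPLE = """COMMAND PID USER FD TYPE DEVICE SIZE/OFF NODE NAME
sshd 900 root 3u IPv4 123 0t0 TCP *:22 (LISTEN)
blackbox 1200 nobody 5u IPv6 456 0t0 TCP *:9115 (LISTEN)
blackbox 1200 nobody 6u IPv4 457 0t0 TCP 10.66.111.152:9115 (LISTEN)
"""
print(parse_listen_ports(SAMPLE))  # → ['22', '9115']
```

This also deduplicates a service that listens on both a wildcard and a specific address, which the removed list-comprehension-over-set idiom handled less directly.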
diff --git a/requirements.txt b/requirements.txt
new file mode 100644
index 0000000..8462291
--- /dev/null
+++ b/requirements.txt
@@ -0,0 +1 @@
1# Include python requirements here
diff --git a/tests/functional/bundle.yaml.j2 b/tests/functional/bundle.yaml.j2
new file mode 100644
index 0000000..e52b841
--- /dev/null
+++ b/tests/functional/bundle.yaml.j2
@@ -0,0 +1,13 @@
1applications:
2 {{ ubuntu_appname }}:
3 series: {{ series }}
4 charm: cs:ubuntu
5 num_units: 2
6
7 {{ bb_appname }}:
8 series: {{ series }}
9 charm: {{ charm_path }}
10
11relations:
12 - - {{ ubuntu_appname }}
13 - {{ bb_appname }}
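This template is rendered by the functional tests' `render()` helper via a Jinja2 `FileSystemLoader`; a minimal stand-alone equivalent using `jinja2.Template` directly (the context values are illustrative):

```python
import jinja2

BUNDLE_TEMPLATE = """\
applications:
  {{ ubuntu_appname }}:
    series: {{ series }}
    charm: cs:ubuntu
    num_units: 2

  {{ bb_appname }}:
    series: {{ series }}
    charm: {{ charm_path }}

relations:
  - - {{ ubuntu_appname }}
    - {{ bb_appname }}
"""

# Illustrative names in the shape test_deploy.py builds them.
context = {
    "ubuntu_appname": "ubuntu-bionic-local",
    "bb_appname": "prometheus-blackbox-peer-exporter-bionic-local",
    "series": "bionic",
    "charm_path": "/tmp/charm-build/prometheus-blackbox-peer-exporter",
}
rendered = jinja2.Template(BUNDLE_TEMPLATE).render(context)
print("series: bionic" in rendered)  # → True
```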
diff --git a/tests/functional/conftest.py b/tests/functional/conftest.py
new file mode 100644
index 0000000..9679d97
--- /dev/null
+++ b/tests/functional/conftest.py
@@ -0,0 +1,68 @@
1"""
2Reusable pytest fixtures for functional testing.
3
4Environment variables
5---------------------
6
7PYTEST_CLOUD_REGION, PYTEST_CLOUD_NAME: cloud name and region to use for juju model creation
8
9PYTEST_KEEP_MODEL: if set, the testing model won't be torn down at the end of the testing session
10"""
11
12import asyncio
13import os
14import subprocess
15import uuid
16
17from juju.controller import Controller
18
19from juju_tools import JujuTools
20
21import pytest
22
23
24@pytest.fixture(scope='module')
25def event_loop():
26 """Override the default pytest event loop to allow for fixtures using a broader scope."""
27 loop = asyncio.get_event_loop_policy().new_event_loop()
28 asyncio.set_event_loop(loop)
29 loop.set_debug(True)
30 yield loop
31 loop.close()
32 asyncio.set_event_loop(None)
33
34
35@pytest.fixture(scope='module')
36async def controller():
37 """Connect to the current controller."""
38 _controller = Controller()
39 await _controller.connect_current()
40 yield _controller
41 await _controller.disconnect()
42
43
44@pytest.fixture(scope='module')
45async def model(controller):
46 """Create a temporary model to run the tests."""
47 model_name = "functest-{}".format(str(uuid.uuid4())[-12:])
48 _model = await controller.add_model(model_name,
49 cloud_name=os.getenv('PYTEST_CLOUD_NAME'),
50 region=os.getenv('PYTEST_CLOUD_REGION'),
51 )
52 # https://github.com/juju/python-libjuju/issues/267
53 subprocess.check_call(['juju', 'models'])
54 while model_name not in await controller.list_models():
55 await asyncio.sleep(1)
56 yield _model
57 await _model.disconnect()
58 if not os.getenv('PYTEST_KEEP_MODEL'):
59 await controller.destroy_model(model_name)
60 while model_name in await controller.list_models():
61 await asyncio.sleep(1)
62
63
64@pytest.fixture(scope='module')
65async def jujutools(controller, model):
66 """Load helpers to run commands on the units."""
67 tools = JujuTools(controller, model)
68 return tools
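The `model` fixture busy-waits on `controller.list_models()` to work around python-libjuju issue #267. That poll-until pattern can be factored into a small helper; a sketch only (`wait_for` and the stand-in predicate are not part of this branch):

```python
import asyncio


async def wait_for(predicate, interval=0.01, timeout=5.0):
    """Poll an async predicate until truthy, like the fixture's
    `while model_name not in await controller.list_models()` loop."""
    elapsed = 0.0
    while not await predicate():
        await asyncio.sleep(interval)
        elapsed += interval
        if elapsed >= timeout:
            raise TimeoutError('condition not met within {}s'.format(timeout))


calls = {'n': 0}


async def model_listed():
    # Stand-in for the list_models() membership check:
    # becomes true on the third poll.
    calls['n'] += 1
    return calls['n'] >= 3


asyncio.run(wait_for(model_listed))
print(calls['n'])  # → 3
```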
diff --git a/tests/functional/juju_tools.py b/tests/functional/juju_tools.py
new file mode 100644
index 0000000..850c296
--- /dev/null
+++ b/tests/functional/juju_tools.py
@@ -0,0 +1,71 @@
1"""Juju helpers to run commands on the units."""
2import base64
3import pickle
4
5import juju
6
7
8class JujuTools:
9 """Load helpers to run commands on units."""
10
11 def __init__(self, controller, model):
12 """Load initialized controller and model."""
13 self.controller = controller
14 self.model = model
15
16 async def run_command(self, cmd, target):
17 """
18 Run a command on a unit.
19
20 :param cmd: Command to be run
21 :param unit: Unit object or unit name string
22 """
23 unit = (
24 target
25 if isinstance(target, juju.unit.Unit)
26 else await self.get_unit(target)
27 )
28 action = await unit.run(cmd)
29 return action.results
30
31 async def remote_object(self, imports, remote_cmd, target):
32 """
33 Run command on target machine and returns a python object of the result.
34
35 :param imports: Imports needed for the command to run
36 :param remote_cmd: The python command to execute
37 :param target: Unit object or unit name string
38 """
39 python3 = "python3 -c '{}'"
40 python_cmd = ('import pickle;'
41 'import base64;'
42 '{}'
43 'print(base64.b64encode(pickle.dumps({})), end="")'
44 .format(imports, remote_cmd))
45 cmd = python3.format(python_cmd)
46 results = await self.run_command(cmd, target)
47 return pickle.loads(base64.b64decode(bytes(results['Stdout'][2:-1], 'utf8')))
48
49 async def file_stat(self, path, target):
50 """
51 Run stat on a file.
52
53 :param path: File path
54 :param target: Unit object or unit name string
55 """
56 imports = 'import os;'
57 python_cmd = ('os.stat("{}")'
58 .format(path))
59 print("Calling remote cmd: " + python_cmd)
60 return await self.remote_object(imports, python_cmd, target)
61
62 async def file_contents(self, path, target):
63 """
64 Return the contents of a file.
65
66 :param path: File path
67 :param target: Unit object or unit name string
68 """
69 cmd = 'cat {}'.format(path)
70 result = await self.run_command(cmd, target)
71 return result['Stdout']
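`remote_object()` round-trips arbitrary Python objects by printing base64-encoded pickle bytes on the remote unit; the `[2:-1]` slice strips the `b'...'` repr that `print()` produces for bytes. The round trip can be checked locally (the payload is an illustrative stand-in for an `os.stat()` result):

```python
import base64
import pickle

# Remote side: what `python3 -c '... print(base64.b64encode(pickle.dumps(obj)), end="")'`
# writes to stdout -- the *repr* of the bytes, e.g. "b'gASV...'".
payload = {'st_uid': 0, 'st_gid': 0}
remote_stdout = str(base64.b64encode(pickle.dumps(payload)))

# Local side: strip the "b'" prefix and trailing "'" before decoding,
# exactly as remote_object() does with results['Stdout'][2:-1].
obj = pickle.loads(base64.b64decode(bytes(remote_stdout[2:-1], 'utf8')))
print(obj == payload)  # → True
```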
diff --git a/tests/functional/requirements.txt b/tests/functional/requirements.txt
new file mode 100644
index 0000000..3d8a11b
--- /dev/null
+++ b/tests/functional/requirements.txt
@@ -0,0 +1,7 @@
1flake8
2jinja2
3juju
4mock
5pytest
6pytest-asyncio
7requests
diff --git a/tests/functional/test_deploy.py b/tests/functional/test_deploy.py
new file mode 100644
index 0000000..173ab29
--- /dev/null
+++ b/tests/functional/test_deploy.py
@@ -0,0 +1,150 @@
1"""Tests around Juju deployed charms."""
2import asyncio
3import os
4import stat
5import subprocess
6
7import jinja2
8
9import pytest
10
11# Treat all tests as coroutines
12pytestmark = pytest.mark.asyncio
13
14CHARM_BUILD_DIR = os.getenv('CHARM_BUILD_DIR', '.').rstrip('/')
15
16# series = ['bionic',
17# pytest.param('eoan', marks=pytest.mark.xfail(reason='canary')),
18# ]
19series = ['bionic']
20sources = [('local', os.path.join(CHARM_BUILD_DIR, 'prometheus-blackbox-peer-exporter')),
21 # ('jujucharms', 'cs:...'),
22 ]
23
24
25def render(templates_dir, template_name, context):
26 """Render a configuration file from a template."""
27 templates = jinja2.Environment(loader=jinja2.FileSystemLoader(templates_dir))
28 template = templates.get_template(template_name)
29 return template.render(context)
30
31# Uncomment for re-using the current model, useful for debugging functional tests
32# @pytest.fixture(scope='module')
33# async def model():
34# from juju.model import Model
35# model = Model()
36# await model.connect_current()
37# yield model
38# await model.disconnect()
39
40
41# Custom fixtures
42@pytest.fixture(params=series)
43def series(request):
44 """Return series scope."""
45 return request.param
46
47
48@pytest.fixture(params=sources, ids=[s[0] for s in sources])
49def source(request):
50 """Return location of deployed charm (local disk or charm store)."""
51 return request.param
52
53
54@pytest.fixture
55async def app(model, series, source):
56 """Return Juju application object for the deployed series."""
57 app_name = 'prometheus-blackbox-peer-exporter-{}-{}'.format(series, source[0])
58 return await model._wait_for_new('application', app_name)
59
60
61async def test_bbpeerexporter_deploy(model, series, source, request):
62 """Start a deploy for each series.
63
64 subprocess is used because libjuju fails with JAAS
65 https://github.com/juju/python-libjuju/issues/221
66 """
67 dir_path = os.path.dirname(os.path.realpath(__file__))
68 bundle_path = os.path.join(CHARM_BUILD_DIR, 'bundle.yaml')
69
70 ubuntu_appname = "ubuntu-{}-{}".format(series, source[0])
71 bb_appname = 'prometheus-blackbox-peer-exporter-{}-{}'.format(series, source[0])
72
73 context = {
74 "ubuntu_appname": ubuntu_appname,
75 "bb_appname": bb_appname,
76 "series": series,
77 "charm_path": os.path.join(CHARM_BUILD_DIR, "prometheus-blackbox-peer-exporter")
78 }
79 rendered = render(dir_path, "bundle.yaml.j2", context)
80 with open(bundle_path, "w") as fd:
81 fd.write(rendered)
82
83 if not os.path.exists(bundle_path):
84 assert False
85
86 cmd = ["juju", "deploy", "-m", model.info.name, bundle_path]
87 if request.node.get_closest_marker('xfail'):
88 # If series is 'xfail' force install to allow testing against versions not in
89 # metadata.yaml
90 cmd.append('--force')
91 subprocess.check_call(cmd)
92 while True:
93 try:
94 model.applications[bb_appname]
95 break
96 except KeyError:
97 await asyncio.sleep(5)
98 assert True
99
100
101# async def test_charm_upgrade(model, app):
102# """Test juju upgrade-charm, from one source to another."""
103# if app.name.endswith('local'):
104# pytest.skip("No need to upgrade the local deploy")
105# unit = app.units[0]
106# await model.block_until(lambda: unit.agent_status == 'idle',
107# timeout=60)
108# subprocess.check_call(['juju',
109# 'upgrade-charm',
110# '--switch={}'.format(sources[0][1]),
111# '-m', model.info.name,
112# app.name,
113# ])
114# await model.block_until(lambda: unit.agent_status == 'executing')
115
116
117# Tests
118async def test_bbpeerexporter_status(model, app):
119 """Verify status for all deployed series of the charm."""
120 await model.block_until(lambda: app.status == 'active',
121 timeout=900)
122 unit = app.units[0]
123 await model.block_until(lambda: unit.agent_status == 'idle',
124 timeout=900)
125
126
127# async def test_example_action(app):
128# unit = app.units[0]
129# action = await unit.run_action('example-action')
130# action = await action.wait()
131# assert action.status == 'completed'
132
133
134async def test_run_command(app, jujutools):
135 """Test simple juju-run command."""
136 unit = app.units[0]
137 cmd = 'hostname -i'
138 results = await jujutools.run_command(cmd, unit)
139 assert results['Code'] == '0'
140 assert unit.public_address in results['Stdout']
141
142
143async def test_file_stat(app, jujutools):
144 """Verify a file exists in the deployed unit."""
145 unit = app.units[0]
146 path = '/var/lib/juju/agents/unit-{}/charm/metadata.yaml'.format(unit.entity_id.replace('/', '-'))
147 fstat = await jujutools.file_stat(path, unit)
148 assert stat.filemode(fstat.st_mode) == '-rw-r--r--'
149 assert fstat.st_uid == 0
150 assert fstat.st_gid == 0
diff --git a/tests/unit/conftest.py b/tests/unit/conftest.py
new file mode 100644
index 0000000..5410860
--- /dev/null
+++ b/tests/unit/conftest.py
@@ -0,0 +1,69 @@
1"""Reusable pytest fixtures for unit testing."""
2import unittest.mock as mock
3
4import pytest
5
6
7# If layer options are used, add this to prometheusblackboxpeerexporter
8# and import layer in lib_bb_peer_exporter
9@pytest.fixture
10def mock_layers(monkeypatch):
11 """Mock imported modules and calls."""
12 import sys
13 sys.modules['charms.layer'] = mock.Mock()
14 sys.modules['charms.reactive.helpers'] = mock.Mock()
15 sys.modules['reactive'] = mock.Mock()
16 # sys.modules['lib_network'] = mock.Mock()
17 # Mock any functions in layers that need to be mocked here
18
19 def options(layer):
20 # mock options for layers here
21 # if layer == 'status':
22 # return None
23 return None
24
25 # monkeypatch.setattr('lib_bb_peer_exporter.lib_network', options)
26
27
28@pytest.fixture
29def mock_hookenv_config(monkeypatch):
30 """Mock hookenv.config()."""
31 import yaml
32
33 def mock_config():
34 cfg = {}
35 yml = yaml.safe_load(open('./config.yaml'))
36
37 # Load all defaults
38 for key, value in yml['options'].items():
39 cfg[key] = value['default']
40
41 # Manually add cfg from other layers
42 return cfg
43 # cfg['my-other-layer'] = 'mock'
44
45 monkeypatch.setattr('lib_bb_peer_exporter.hookenv.config', mock_config)
46
47
48@pytest.fixture
49def mock_remote_unit(monkeypatch):
50 """Mock a remote unit name."""
51 monkeypatch.setattr('lib_bb_peer_exporter.hookenv.remote_unit', lambda: 'unit-mock/0')
52
53
54@pytest.fixture
55def mock_charm_dir(monkeypatch):
56 """Mock CHARM_DIR Juju environment variable."""
57 monkeypatch.setattr('lib_bb_peer_exporter.hookenv.charm_dir', lambda: '/mock/charm/dir')
58
59
60@pytest.fixture
61def bbpeerexporter(tmpdir, mock_layers, mock_hookenv_config, mock_charm_dir, monkeypatch):
62 """Test the basic structure of the helper class."""
63 from lib_bb_peer_exporter import BBPeerExporterHelper
64 helper = BBPeerExporterHelper()
65
66 # Any other functions that load helper will get this version
67 monkeypatch.setattr('lib_bb_peer_exporter.BBPeerExporterHelper', lambda: helper)
68
69 return helper
diff --git a/tests/unit/requirements.txt b/tests/unit/requirements.txt
new file mode 100644
index 0000000..31b5d37
--- /dev/null
+++ b/tests/unit/requirements.txt
@@ -0,0 +1,7 @@
1charmhelpers
2charms.reactive
3netifaces
4psutil
5pyroute2
6pytest
7pytest-cov
diff --git a/tests/unit/test_lib_bb_peer_exporter.py b/tests/unit/test_lib_bb_peer_exporter.py
new file mode 100644
index 0000000..9c0ab0f
--- /dev/null
+++ b/tests/unit/test_lib_bb_peer_exporter.py
@@ -0,0 +1,46 @@
1"""Tests around lib_bb_peer_exporter module."""
2
3
4class TestLib():
5 """Test suite for lib_bb_peer_exporter module."""
6
7 def test_bbpeerexporter(self, bbpeerexporter):
8 """See if the helper fixture works to load charm configs."""
9 assert isinstance(bbpeerexporter.charm_config, dict)
10 assert bbpeerexporter.modules
11 assert bbpeerexporter.scrape_interval == "60s"
12 assert bbpeerexporter.port_def == 9115
13
14 def test_get_icmp_and_tcp_targets(self, bbpeerexporter, monkeypatch):
15 """Correct output over given input."""
16 principal_unit = "ubuntu/1"
17 unit_networks = "[{'iface': 'eth0', 'ip': '10.66.111.152', 'net': '10.66.111.0/24'}]"
18 unit_ports = "['22', '9115']"
19 monkeypatch.setattr(
20 'lib_network.get_source_ip_ipv4',
21 lambda x: '10.66.111.1')
22 monkeypatch.setattr(
23 'lib_network.get_unit_ipv4_networks',
24 lambda: [{'net': '192.168.0.0/23'}, {'net': '10.66.111.0/24'}])
25
26 expected = {
27 "networks": ['10.66.111.0/24'],
28 "icmp_targets": [{
29 'network': '10.66.111.0/24',
30 'interface': 'eth0',
31 'ip-address': '10.66.111.152',
32 'principal-unit': principal_unit,
33 'module': 'icmp',
34 'source-ip': '10.66.111.1',
35 }],
36 "tcp_targets": [{'ip-address': '10.66.111.152', 'port': '22',
37 'principal-unit': principal_unit, 'module': 'tcp_connect',
38 },
39 {
40 'ip-address': '10.66.111.152', 'port': '9115',
41 'principal-unit': principal_unit, 'module': 'tcp_connect',
42 }],
43 }
44
45 actual = bbpeerexporter.get_icmp_and_tcp_targets(unit_networks, unit_ports, principal_unit)
46 assert actual == expected
diff --git a/tox.ini b/tox.ini
new file mode 100644
index 0000000..bd1f448
--- /dev/null
+++ b/tox.ini
@@ -0,0 +1,50 @@
1[tox]
2skipsdist=True
3envlist = unit, functional
4skip_missing_interpreters = True
5
6[testenv]
7basepython = python3
8setenv =
9 PYTHONPATH = .
10
11[testenv:unit]
12commands = pytest -v --ignore {toxinidir}/tests/functional \
13 --cov=lib \
14 --cov=reactive \
15 --cov=actions \
16 --cov-report=term \
17 --cov-report=annotate:report/annotated \
18 --cov-report=html:report/html
19deps = -r{toxinidir}/tests/unit/requirements.txt
20 -r{toxinidir}/requirements.txt
21setenv = PYTHONPATH={toxinidir}/lib
22
23[testenv:functional]
24passenv =
25 HOME
26 CHARM_BUILD_DIR
27 PATH
28 PYTEST_KEEP_MODEL
29 PYTEST_CLOUD_NAME
30 PYTEST_CLOUD_REGION
31commands = pytest -v --ignore {toxinidir}/tests/unit
32deps = -r{toxinidir}/tests/functional/requirements.txt
33 -r{toxinidir}/requirements.txt
34
35[testenv:lint]
36commands = flake8
37deps =
38 flake8
39 flake8-docstrings
40 flake8-import-order
41 pep8-naming
42 flake8-colors
43
44[flake8]
45exclude =
46 .git,
47 __pycache__,
48 .tox,
49max-line-length = 120
50max-complexity = 10
diff --git a/wheelhouse.txt b/wheelhouse.txt
index ea2fb5e..f06761b 100644
--- a/wheelhouse.txt
+++ b/wheelhouse.txt
@@ -1,2 +1,3 @@
 netifaces
+psutil
 pyroute2
