Merge ~paulgear/ntp-charm/+git/ntp-charm:autopeers into ntp-charm:master

Proposed by Paul Gear
Status: Merged
Merged at revision: 9825def33421f3177ee772e80f8f68dd1367d7f8
Proposed branch: ~paulgear/ntp-charm/+git/ntp-charm:autopeers
Merge into: ntp-charm:master
Diff against target: 1041 lines (+794/-62)
12 files modified
.gitignore (+1/-0)
Makefile (+18/-3)
README.md (+4/-5)
config.yaml (+8/-3)
files/nagios/check_ntpmon.py (+1/-1)
hooks/ntp_hooks.py (+131/-49)
hooks/ntp_scoring.py (+159/-0)
hooks/ntp_source_score.py (+198/-0)
tests/10-deploy-test.py (+1/-1)
unit_tests/test_ntp_hooks.py (+0/-0)
unit_tests/test_ntp_scoring.py (+106/-0)
unit_tests/test_ntp_source_score.py (+167/-0)
Reviewer Review Type Date Requested Status
Stuart Bishop (community) Approve
Review via email: mp+332326@code.launchpad.net

Description of the change

This MP introduces one major piece of functionality and some other minor improvements.

The previously-deprecated auto_peers option now enables new logic which tests connectivity to upstream NTP servers, then uses a simple scoring mechanism to select the most suitable servers to act as a service stratum between the upstream NTP servers and the rest of the juju environment.

This is primarily intended for use in medium-to-large OpenStack environments where Ceph needs a stable, nearby set of NTP sources. At present in a number of our production OpenStack environments this is achieved by creating two separate ntp services; the new auto_peers functionality achieves this without requiring manual setup. It's my expectation that this will reduce manual tuning in some of our less-well-connected OpenStacks, and result in fewer alerts.
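The selection mechanism can be sketched roughly as follows. This is a minimal, self-contained illustration, not the charm's actual code: `select_upstreams`, its parameters, and the score values are hypothetical, and the real implementation exchanges scores over the juju peer relation.

```python
def select_upstreams(peers, our_score, top_n=6):
    """peers: list of (address, score) tuples from the peer relation.
    Return None if this unit is among the top N scorers (it should sync
    with the configured upstreams itself); otherwise return the addresses
    of the top N peers to use as local time sources."""
    if len(peers) < top_n:
        return None  # too few peers to bother with auto-peering
    # peers that scored better than this unit
    better = [p for p in peers if p[1] > our_score]
    if len(better) < top_n:
        return None  # fewer than N peers beat us, so we are in the top N
    # keep only the N best-scoring peers, best first
    best = sorted(better, key=lambda p: p[1], reverse=True)[:top_n]
    return [addr for addr, _ in best]
```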

In some environments, NTP is erroneously deployed to containers, conflicting with NTP on the host; this version of the charm automatically detects when it is running in a container and disables NTP.
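The classification step can be sketched like this, mirroring the charm's approach of parsing the first word of `facter virtual` output; the function name and sample inputs here are illustrative only (the charm shells out to facter rather than taking a string argument):

```python
def classify_virt(facter_output):
    """Classify a `facter virtual` result as physical, container, or vm."""
    fields = facter_output.split()
    if fields and fields[0] in ('physical', 'xen0'):
        return 'physical'
    if fields and fields[0] in ('docker', 'lxc', 'openvz'):
        return 'container'
    # anything else (kvm, vmware, hyperv, ...) is assumed to be a VM
    return 'vm'
```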

Both of the above are reflected in juju status, making it easy for the operator to see the results of the charm's automated decisions and correct them if necessary.

Revision history for this message
Paul Gear (paulgear) wrote :

I will rebase before merge, but the contents shouldn't change from this.

Revision history for this message
Tom Haddon (mthaddon) wrote :

Can you add a description of this change and why it's needed? Also some comments inline.

Revision history for this message
Paul Gear (paulgear) wrote :

Replies to comments inline.

Revision history for this message
Tom Haddon (mthaddon) wrote :

I get a "make lint" error from this:

hooks/ntp_hooks.py:92:1: C901 'write_config' is too complex (10)

You may want to consider ignoring that in the default lint target and adding a "complex-lint" target that doesn't ignore it so you can work it down to zero over time.

How does one use ntp_source_score.py? You mention in the comments for that file that it can be used for diagnostic purposes. That sounds like something that would be nice to expose via juju actions. I don't think that should necessarily block the landing of this, but if you're planning to advertise that functionality at all, I think it should be done via juju actions first.

Revision history for this message
Tom Haddon (mthaddon) wrote :

Also, just noticed there's a unit_tests directory. Would be good to have a Makefile target for running tests.

Revision history for this message
Paul Gear (paulgear) wrote :

> I get a "make lint" error from this:
>
> hooks/ntp_hooks.py:92:1: C901 'write_config' is too complex (10)
>
> You may want to consider ignoring that in the default lint target and adding a
> "complex-lint" target that doesn't ignore it so you can work it down to zero
> over time.

Which version of flake8 are you running? On my system (flake8 3.2.1-1), I have to drop max-complexity from 10 to 8 to get a failure in that method.

> How does one use ntp_source_score.py? You mention in the comments for that
> file that it can be used for diagnostic purposes. That sounds like something that
> would be nice to expose via juju actions. I don't think that should
> necessarily block the landing of this, but if you're planning to advertise
> that functionality at all, I think it should be done via juju actions first.

That would probably be important to have for receiving troubleshooting reports from others, but I wasn't planning to advertise it in the first instance. I'll try to get some time to add one in the next couple of weeks.

Revision history for this message
Tom Haddon (mthaddon) wrote :

> > I get a "make lint" error from this:
> >
> > hooks/ntp_hooks.py:92:1: C901 'write_config' is too complex (10)
> >
> > You may want to consider ignoring that in the default lint target and adding
> a
> > "complex-lint" target that doesn't ignore it so you can work it down to zero
> > over time.
>
> Which version of flake8 are you running? On my system (flake8 3.2.1-1), I
> have to drop max-complexity from 10 to 8 to get a failure in that method.

2.5.4.

$ dpkg -l | grep flake8
ii flake8 2.5.4-2 all code checker using pep8 and pyflakes
ii python-flake8 2.5.4-2 all code checker using pep8 and pyflakes - Python 2.x
ii python3-flake8 2.5.4-2 all code checker using pep8 and pyflakes - Python 3.x

> > How does one use ntp_source_score.py? You mention in the comments for that
> > file that it can be used for diagnostic purposes. That sounds like something that
> > would be nice to expose via juju actions. I don't think that should
> > necessarily block the landing of this, but if you're planning to advertise
> > that functionality at all, I think it should be done via juju actions first.
>
> That would probably be important to have for receiving troubleshooting reports
> from others, but I wasn't planning to advertise it in the first instance.
> I'll try to get some time to add one in the next couple of weeks.

Revision history for this message
Stuart Bishop (stub) wrote :

Comments added inline. Leaving actual approval for Tom and test suite/flake issues.

review: Approve
Revision history for this message
Paul Gear (paulgear) wrote :

Some replies to Stuart's comments inline.

Revision history for this message
Paul Gear (paulgear) wrote :

Setting this back to WIP while I improve test suite coverage.

Revision history for this message
Paul Gear (paulgear) wrote :

I've added quite a few things to the test suite; it's not 100% coverage, but it should be enough to provide sufficient confidence in the code to merge.

Revision history for this message
Stuart Bishop (stub) wrote :

Looks good. A few minor comments. The psutil magic import at the start of ntp_scoring.py needs to be fixed, or it may fail the first time it is run.

review: Approve
Revision history for this message
Paul Gear (paulgear) wrote :

Pushed new version with updates.

Revision history for this message
Stuart Bishop (stub) wrote :

Looks good.

You probably want 'fatal=True' in install_packages(), which I missed last time. I doubt the rest of this charm will work if the packages are missing, so it's better to fail early.

review: Approve
Revision history for this message
Paul Gear (paulgear) wrote :

On 07/11/17 19:10, Stuart Bishop wrote:
> Review: Approve
>
> Looks good.
>
> You probably want 'fatal=True' in install_packages(), which I missed last time. I doubt the rest of this charm will work if the packages are missing, so it's better to fail early.

The rest of the charm should work fine if those packages are missing,
and the scoring & auto-peering should fail gracefully if they are.

I've pushed a fix to handle the case where python3-psutil is missing.
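The graceful-degradation approach looks roughly like this (an illustrative sketch only; `package_divisor` is a hypothetical stand-in for the charm's get_package_divisor, which inspects the process table when psutil is available):

```python
def package_divisor():
    """Degrade gracefully if python3-psutil has not been installed yet,
    rather than letting the hook crash on import."""
    try:
        import psutil  # noqa: F401 -- may be absent on first run
    except ImportError:
        # assume the worst case, roughly 1.1 * 1.1 * 1.25 * 1.25
        return 2
    return 1  # the real code inspects running processes here
```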

Preview Diff

diff --git a/.gitignore b/.gitignore
index ba077a4..c644b99 100644
--- a/.gitignore
+++ b/.gitignore
@@ -1 +1,2 @@
 bin
+__pycache__
diff --git a/Makefile b/Makefile
index 706d2ae..85b5df4 100644
--- a/Makefile
+++ b/Makefile
@@ -1,8 +1,13 @@
 #!/usr/bin/make
-PYTHON := /usr/bin/env python
+PYTHON := /usr/bin/env PYTHONPATH=$(PWD)/hooks python3
+CHARM_NAME := ntp
+CSDEST := cs:~$(LOGNAME)/$(CHARM_NAME)
 
-lint:
-	@python2 -m flake8 --exclude hooks/charmhelpers hooks
+test:
+	$(PYTHON) -m unittest unit_tests/test_ntp_*.py
+
+lint: test
+	@python3 -m flake8 --max-line-length=120 --exclude hooks/charmhelpers hooks
 	@charm proof
 
 bin/charm_helpers_sync.py:
@@ -12,3 +17,13 @@ bin/charm_helpers_sync.py:
 
 sync: bin/charm_helpers_sync.py
 	@$(PYTHON) bin/charm_helpers_sync.py -c charm-helpers-sync.yaml
+
+git:
+	git push $(LOGNAME)
+
+cspush: lint
+	version=`charm push . $(CSDEST) | awk '/^url:/ {print $$2}'` && \
+	charm release $$version
+
+upgrade: cspush
+	juju upgrade-charm $(CHARM_NAME)
diff --git a/README.md b/README.md
index e635e7a..a2605cb 100644
--- a/README.md
+++ b/README.md
@@ -38,14 +38,13 @@ To disable the default list of pool servers, set that to the empty string:
 
 Sources, peers, and pools should be space separated.
 
-When you need a set of services to keep close time to each other, it may
-be useful to have them automatically peer with each other. This means
-any set of services which use the same ntp subordinate will peer together.
+If you have a large number of nodes which need to keep close sync with one
+another but need to keep upstream traffic to a minimum, try auto_peers:
 
     juju set ntp auto_peers=true
 
-This will add all the hosts as peers to each other. Using auto_peers is not
-recommended when more than 10 units are expected to be deployed.
+This will select the most suitable units for connecting with upstream, and
+configure the remaining units to receive time from those units.
 
 Mastered
 ========
diff --git a/config.yaml b/config.yaml
index 1405f4b..1bbc1ed 100644
--- a/config.yaml
+++ b/config.yaml
@@ -59,9 +59,14 @@ options:
     default: false
     type: boolean
     description: >
-      Automatically peer with other units in the same service.
-      DEPRECATED. Please consider using the ntpmaster charm to provide
-      sufficient peers for your environment in favour of auto_peers.
+      Automatically select the most appropriate units in the service to
+      be a service stratum connecting with upstream NTP servers, and use
+      those units as time sources for the remaining units.
+  auto_peers_upstream:
+    default: 6
+    type: int
+    description: >
+      How many units should attempt to connect with upstream NTP servers?
   use_iburst:
     default: true
     type: boolean
diff --git a/files/nagios/check_ntpmon.py b/files/nagios/check_ntpmon.py
index 8d24df4..a3c64be 100755
--- a/files/nagios/check_ntpmon.py
+++ b/files/nagios/check_ntpmon.py
@@ -452,7 +452,7 @@ class NTPPeers(object):
             output = subprocess.check_output(["ntpq", "-pn"], stderr=null)
             if len(output) > 0:
                 lines = output.split("\n")
-        except:
+        except Exception:
             traceback.print_exc(file=sys.stdout)
         return lines
 
diff --git a/hooks/ntp_hooks.py b/hooks/ntp_hooks.py
index d0356c3..efde7ad 100755
--- a/hooks/ntp_hooks.py
+++ b/hooks/ntp_hooks.py
@@ -1,15 +1,14 @@
 #!/usr/bin/python3
 
-import sys
-import charmhelpers.core.hookenv as hookenv
-import charmhelpers.core.host as host
-import charmhelpers.fetch as fetch
-from charmhelpers.core.hookenv import UnregisteredHookError
+from charmhelpers.contrib.charmsupport import nrpe
 from charmhelpers.contrib.templating.jinja import render
-import shutil
+from charmhelpers.core import hookenv, host, unitdata
+import charmhelpers.fetch as fetch
 import os
+import shutil
+import sys
 
-from charmhelpers.contrib.charmsupport import nrpe
+import ntp_scoring
 
 NAGIOS_PLUGINS = '/usr/local/lib/nagios/plugins'
 
@@ -19,22 +18,56 @@ NTP_CONF_ORIG = '{}.orig'.format(NTP_CONF)
 hooks = hookenv.Hooks()
 
 
-def get_peer_nodes():
-    hosts = []
-    hosts.append(hookenv.unit_get('private-address'))
+def get_peer_sources(topN=6):
+    """
+    Get our score and put it on the peer relation.
+    Read the peer private addresses and their scores.
+    Determine whether we're in the top N scores;
+    if so, we're upstream - return None
+    Otherwise, return the list of the top N peers.
+    """
+    if topN is None:
+        topN = 6
+    ourscore = ntp_scoring.get_score()
+    if ourscore is None:
+        hookenv.log('[AUTO_PEER] Our score cannot be determined - check logs for reason')
+        return None
+
+    peers = []
     for relid in hookenv.relation_ids('ntp-peers'):
+        hookenv.relation_set(relation_id=relid, relation_settings={'score': ourscore['score']})
         for unit in hookenv.related_units(relid):
-            hosts.append(hookenv.relation_get('private-address',
-                                              unit, relid))
-    hosts.sort()
-    return hosts
+            addr = hookenv.relation_get('private-address', unit, relid)
+            peerscore = hookenv.relation_get('score', unit, relid)
+            if peerscore is not None:
+                peers.append((addr, float(peerscore)))
+
+    if len(peers) < topN:
+        # we don't have enough peers to do auto-peering
+        hookenv.log('[AUTO_PEER] There are only {} peers; not doing auto-peering'.format(len(peers)))
+        return None
+
+    # list of hosts with scores better than ours
+    hosts = list(filter(lambda x: x[1] > ourscore['score'], peers))
+    hookenv.log('[AUTO_PEER] {} peers better than us, topN == {}'.format(len(hosts), topN))
+
+    # if the list is less than topN long, we're in the topN hosts
+    if len(hosts) < topN:
+        return None
+    else:
+        # sort list of hosts by score, keep only the topN
+        topNhosts = sorted(hosts, key=lambda x: x[1], reverse=True)[0:topN]
+        # return only the host addresses
+        return map(lambda x: x[0], topNhosts)
 
 
 @hooks.hook('install')
 def install():
     fetch.apt_update(fatal=True)
-    fetch.apt_install(["ntp"], fatal=True)
-    shutil.copy(NTP_CONF, NTP_CONF_ORIG)
+    ntp_scoring.install_packages()
+    if ntp_scoring.get_virt_type() != 'container':
+        fetch.apt_install(["ntp"], fatal=True)
+        shutil.copy(NTP_CONF, NTP_CONF_ORIG)
 
 
 def get_sources(sources, iburst=True, source_list=None):
@@ -61,25 +94,52 @@ def get_sources(sources, iburst=True, source_list=None):
             'ntp-peers-relation-changed')
 @host.restart_on_change({NTP_CONF: ['ntp']})
 def write_config():
+    ntp_scoring.install_packages()
+    if ntp_scoring.get_virt_type() == 'container':
+        host.service_stop('ntp')
+        host.service_pause('ntp')
+        hookenv.close_port(123, protocol="UDP")
+        return
+
+    host.service_resume('ntp')
     hookenv.open_port(123, protocol="UDP")
+
     use_iburst = hookenv.config('use_iburst')
-    source = hookenv.config('source')
     orphan_stratum = hookenv.config('orphan_stratum')
-    remote_sources = get_sources(source, iburst=use_iburst)
+    source = hookenv.config('source')
     pools = hookenv.config('pools')
-    remote_pools = get_sources(pools, iburst=use_iburst)
-    for relid in hookenv.relation_ids('master'):
-        for unit in hookenv.related_units(relid=relid):
-            u_addr = hookenv.relation_get(attribute='private-address',
-                                          unit=unit, rid=relid)
-            remote_sources.append({'name': u_addr, 'iburst': 'iburst'})
-
     peers = hookenv.config('peers')
-    remote_peers = get_sources(peers, iburst=use_iburst)
     auto_peers = hookenv.config('auto_peers')
-    if hookenv.relation_ids('ntp-peers') and auto_peers:
-        remote_peers = get_sources(get_peer_nodes(), iburst=use_iburst,
-                                   source_list=remote_peers)
+
+    remote_sources = get_sources(source, iburst=use_iburst)
+    remote_pools = get_sources(pools, iburst=use_iburst)
+    remote_peers = get_sources(peers, iburst=use_iburst)
+
+    kv = unitdata.kv()
+    hookenv.atexit(kv.flush)
+
+    if hookenv.relation_ids('master'):
+        # use master relation only
+        for relid in hookenv.relation_ids('master'):
+            for unit in hookenv.related_units(relid=relid):
+                u_addr = hookenv.relation_get(attribute='private-address',
+                                              unit=unit, rid=relid)
+                remote_sources.append({'name': u_addr, 'iburst': 'iburst'})
+    elif auto_peers and hookenv.relation_ids('ntp-peers'):
+        # use auto_peers
+        auto_peer_list = get_peer_sources(hookenv.config('auto_peers_upstream'))
+        if auto_peer_list is None:
+            # we are upstream - use configured sources, pools, peers
+            kv.set('auto_peer', 'upstream')
+        else:
+            # override all sources with auto_peer_list
+            kv.set('auto_peer', 'client')
+            remote_sources = get_sources(auto_peer_list, iburst=use_iburst)
+            remote_pools = []
+            remote_peers = []
+    else:
+        # use configured sources, pools, peers
+        kv.unset('auto_peer')
 
     if len(remote_sources) == 0 and len(remote_peers) == 0 and len(remote_pools) == 0:
         # we have no peers/pools/servers; restore default ntp.conf provided by OS
@@ -134,7 +194,7 @@ def update_nrpe_config():
 # Hyper-V host clock sync handling - workaround until https://bugs.launchpad.net/bugs/1676635 is SRUed for xenial
 # See also:
 # - https://patchwork.kernel.org/patch/9525945/
-# - https://social.msdn.microsoft.com/Forums/en-US/8c0a1026-0b02-405a-848e-628e68229eaf/i-have-a-lot-of-time-has-been-changed-in-the-journal-of-my-linux-boxes?forum=WAVirtualMachinesforWindows
+# - https://social.msdn.microsoft.com/Forums/en-US/8c0a1026-0b02-405a-848e-628e68229eaf/i-have-a-lot-of-time-has-been-changed-in-the-journal-of-my-linux-boxes?forum=WAVirtualMachinesforWindows  # NOQA: E501
 _device_class = '9527e630-d0ae-497b-adce-e80ab0175caf'
 _vmbus_dir = '/sys/bus/vmbus/'
 
@@ -146,11 +206,11 @@ def find_hyperv_host_sync_device():
             try:
                 f = open(os.path.join(_vmbus_dir, 'devices', d, 'class_id'), 'r')
                 if _device_class in f.readline():
-                    hookenv.log('Hyper-V host time sync device is {}'.format(f.name), level=hookenv.DEBUG)
+                    hookenv.log('Hyper-V host time sync device is {}'.format(f.name))
                     return d
-            except:
+            except Exception:
                 pass
-    except:
+    except Exception:
         pass
     return None
 
@@ -164,13 +224,13 @@ def check_hyperv_host_sync(device_id):
             firstline = f.readline().strip()
             result = firstline == '3'
             enabled = 'enabled' if result else 'disabled'
-            hookenv.log('Hyper-V host time sync is ' + enabled, level=hookenv.DEBUG)
+            hookenv.log('Hyper-V host time sync is ' + enabled)
             if result:
                 return device_id
             else:
                 return None
-        except:
-            hookenv.log('Hyper-V host time sync status file {} not found'.format(statefile), level=hookenv.DEBUG)
+        except Exception:
+            hookenv.log('Hyper-V host time sync status file {} not found'.format(statefile))
             return None
     else:
         return None
@@ -179,12 +239,12 @@ def check_hyperv_host_sync(device_id):
 def disable_hyperv_host_sync(device_id):
     """Unbind the Hyper-V host clock sync driver"""
     try:
-        hookenv.log('Disabling Hyper-V host time sync', level=hookenv.DEBUG)
+        hookenv.log('Disabling Hyper-V host time sync')
         path = os.path.join(_vmbus_dir, 'drivers', 'hv_util', 'unbind')
         f = open(path, 'w')
         print(device_id, file=f)
         return True
-    except:
+    except Exception:
         return False
 
 
@@ -203,21 +263,43 @@ def hyperv_sync_status():
 
 @hooks.hook('update-status')
 def assess_status():
-    hookenv.application_version_set(
-        fetch.get_upstream_version('ntp')
-    )
-    if host.service_running('ntp'):
-        status = 'Unit is ready'
-        status_extra = hyperv_sync_status()
-        if status_extra:
-            status = status + '; ' + status_extra
-        hookenv.status_set('active', status)
+    version = fetch.get_upstream_version('ntp')
+    if version is not None:
+        hookenv.application_version_set(version)
+
+    # create base status
+    if ntp_scoring.get_virt_type() == 'container':
+        state = 'blocked'
+        status = 'NTP not supported in containers: please configure on host'
+    elif host.service_running('ntp'):
+        state = 'active'
+        status = 'Ready'
     else:
-        hookenv.status_set('blocked', 'ntp not running')
+        state = 'blocked'
+        status = 'Not running'
+
+    # append Hyper-V status, if any
+    hyperv_status = hyperv_sync_status()
+    if hyperv_status is not None:
+        status += ', ' + hyperv_status
+
+    # append scoring status, if any
+    # (don't force update of the score from update-status more than once a month)
+    max_age = 31 * 86400
+    scorestr = ntp_scoring.get_score_string(max_seconds=max_age)
+    if scorestr is not None:
+        status += ', ' + scorestr
+
+    # append auto_peer status
+    autopeer = unitdata.kv().get('auto_peer')
+    if autopeer is not None:
+        status += ' [{}]'.format(autopeer)
+
+    hookenv.status_set(state, status)
 
 
 if __name__ == '__main__':
     try:
         hooks.execute(sys.argv)
-    except UnregisteredHookError as e:
+    except hookenv.UnregisteredHookError as e:
         hookenv.log('Unknown hook {} - skipping.'.format(e))
diff --git a/hooks/ntp_scoring.py b/hooks/ntp_scoring.py
new file mode 100755
index 0000000..af55103
--- /dev/null
+++ b/hooks/ntp_scoring.py
@@ -0,0 +1,159 @@
+
+# Copyright (c) 2017 Canonical Ltd
+# Author: Paul Gear
+
+# This module retrieves the score calculated in ntp_source_score, and
+# creates an overall node weighting based on the machine type (bare metal,
+# container, or VM) and software running locally. It reduces the score
+# for nodes with OpenStack ceph, nova, or swift services running, in order
+# to decrease the likelihood that they will be selected as upstreams.
+
+from charmhelpers.core import hookenv, unitdata
+import charmhelpers.fetch as fetch
+import json
+import time
+
+import ntp_source_score
+
+
+def install_packages():
+    fetch.apt_install(["facter", "ntpdate", "python3-psutil", "virt-what"], fatal=False)
+
+
+def get_virt_type():
+    """Work out what type of environment we're running in"""
+    for line in ntp_source_score.run_cmd('facter virtual'):
+        fields = str(line).split()
+        if len(fields) > 0:
+            if fields[0] in ['physical', 'xen0']:
+                return 'physical'
+            if fields[0] in ['docker', 'lxc', 'openvz']:
+                return 'container'
+    # Anything not one of the above-mentioned types is assumed to be a VM
+    return 'vm'
+
+
+def get_virt_multiplier():
+    virt_type = get_virt_type()
+    if virt_type == 'container':
+        # containers should be synchronized from their host
+        return -1
+    elif virt_type == 'physical':
+        hookenv.log('[SCORE] running on physical host - score bump 25%')
+        return 1.25
+    else:
+        hookenv.log('[SCORE] probably running in a VM - score bump 0%')
+        return 1
+
+
+def get_package_divisor():
+    """Check for running ceph, swift, & nova-compute services,
+    and increase divisor for each."""
+    try:
+        import psutil
+    except Exception:
+        # If we can't read the process table, assume a worst-case.
+        # (Normally, if every process is running, this will return
+        # 1.1 * 1.1 * 1.25 * 1.25 = 1.890625.)
+        return 2
+
+    # set the weight for each process (regardless of how many there are running)
+    running = {}
+    for p in psutil.process_iter():
+        if p.name().startswith('nova-compute'):
+            running['nova-compute'] = 1.25
+        if p.name().startswith('ceph-osd'):
+            running['ceph-osd'] = 1.25
+        elif p.name().startswith('ceph-'):
+            running['ceph'] = 1.1
+        elif p.name().startswith('swift-'):
+            running['swift'] = 1.1
+
+    # increase the divisor for each discovered process type
+    divisor = 1
+    for r in running:
+        hookenv.log('[SCORE] %s running - score divisor %.3f' % (r, running[r]))
+        divisor *= running[r]
+    return divisor
+
+
+def check_score(seconds=None):
+    if seconds is None:
+        seconds = time.time()
+    score = {
+        'divisor': 1,
+        'multiplier': 0,
+        'score': -999,
+        'time': seconds,
+    }
+
+    # skip scoring if we have an explicitly configured master
+    relation_sources = hookenv.relation_ids('master')
+    score['master-relations'] = len(relation_sources)
+    if relation_sources is not None and len(relation_sources) > 0:
+        hookenv.log('[SCORE] master relation configured - skipped scoring')
+        return score
+
+    # skip scoring if we're in a container
+    multiplier = get_virt_multiplier()
+    score['multiplier'] = multiplier
+    if multiplier <= 0:
+        hookenv.log('[SCORE] running in a container - skipped scoring')
+        return score
+
+    # skip scoring if auto_peers is off
+    auto_peers = hookenv.config('auto_peers')
+    score['auto-peers'] = auto_peers
+    if not auto_peers:
+        hookenv.log('[SCORE] auto_peers is disabled - skipped scoring')
+        return score
+
+    # skip scoring if we have no sources
+    sources = hookenv.config('source').split()
+    peers = hookenv.config('peers').split()
+    pools = hookenv.config('pools').split()
+    host_list = sources + peers + pools
+    if len(host_list) == 0:
+        hookenv.log('[SCORE] No sources configured')
+        return score
+
+    # Now that we've passed all those checks, check upstreams, calculate a score, and return the result
+    divisor = get_package_divisor()
+    score['divisor'] = divisor
+    score['host-list'] = host_list
+    score['raw'] = ntp_source_score.get_source_score(host_list, verbose=True)
+    score['score'] = score['raw'] * multiplier / divisor
+    hookenv.log('[SCORE] Suitability score: %.3f' % (score['score'],))
+    return score
+
+
+def get_score(max_seconds=86400):
+    # Remove this if/when we convert the charm to reactive
+    kv = unitdata.kv()
+    hookenv.atexit(kv.flush)
+
+    score = kv.get('ntp_score')
+    if score is not None:
+        saved_time = score.get('time', 0)
+    else:
+        saved_time = 0
+
+    now = time.time()
+    if score is None or now - saved_time > max_seconds:
+        score = check_score(now)
+        kv.set('ntp_score', score)
+        hookenv.log('[SCORE] saved %s' % (json.dumps(score),))
+
+    return score
+
+
+def get_score_string(score=None, max_seconds=86400):
+    if score is None:
+        score = get_score(max_seconds)
+    if not hookenv.config('auto_peers') or 'raw' not in score:
+        return None
+    return 'score %.3f (%.1f) at %s' % (
+        score['score'],
+        score['multiplier'] / score['divisor'],
+        time.ctime(score['time'])
+    )
diff --git a/hooks/ntp_source_score.py b/hooks/ntp_source_score.py
new file mode 100755
index 0000000..71b1cef
--- /dev/null
+++ b/hooks/ntp_source_score.py
@@ -0,0 +1,198 @@
1#!/usr/bin/python3
2
3# Copyright (c) 2017 Canonical Ltd
4# Author: Paul Gear
5
6# This module runs ntpdate in test mode against the provided list of sources
7# in order to determine this node's suitability as an NTP server, based on the
8# number of reachable sources, and the network delay in reaching them. Up to
9# MAX_THREADS (default 32) threads will be spawned to run ntpdate, in order
10# to minimise the time taken to calculate a score.
11
12# A main method is included to allow this module to be called separately from
13# juju hooks for diagnostic purposes. It has no dependencies on juju,
14# charmhelpers, or the other modules in this charm.
15
16import argparse
17import math
18import queue
19import random
20import statistics
21import subprocess
22import threading
23import time
24
25rand = random.SystemRandom()
26MAX_THREADS = 32
27
28
29def rms(l):
30 """Return the root mean square of the list"""
31 if len(l) > 0:
32 squares = [x ** 2 for x in l]
33 return math.sqrt(statistics.mean(squares))
34 else:
35 return float('nan')
36
37
38def run_cmd(cmd):
39 """Run the output, return a list of lines returned; ignore errors"""
40 lines = []
41 try:
42 output = subprocess.check_output(cmd.split(), stderr=subprocess.DEVNULL).decode('UTF-8')
43 lines = output.split('\n')
44 except Exception:
45 pass
46 return lines
47
48
49def get_source_delays(source):
50 """Run ntpdate on the source, which may resolve to multiple addresses;
51 return the list of delay values. This can take several seconds, depending
    on bandwidth and distance of the sources."""
    delays = []
    cmd = 'ntpdate -d -t 0.2 ' + source
    for line in run_cmd(cmd):
        fields = line.split()
        if len(fields) >= 2 and fields[0] == 'delay':
            delay = float(fields[1].split(',')[0])
            if delay > 0:
                delays.append(delay)
    return delays


def worker(num, src, dst, debug=False):
    """Thread worker for parallelising ntpdate runs. Gets host name
    from src queue and places host and delay list in dst queue."""
    if debug:
        print('[%d] Starting' % (num,))
    while True:
        host = src.get()
        if host is None:
            break

        # lower-numbered threads sleep for a shorter time, on average
        s = rand.random() * num / MAX_THREADS
        if debug:
            print('[%d] Sleeping %.3f' % (num, s))
        time.sleep(s)

        if debug:
            print('[%d] Getting results for [%s]' % (num, host))
        delays = get_source_delays(host)
        src.task_done()
        if len(delays):
            result = (host, delays)
            dst.put(result)


def get_delay_score(delay):
    """Take a delay in seconds and return a score. Under most sane NTP setups
    will return a value between 0 and 10, where 10 is better and 0 is worse."""
    return -math.log(delay)


def start_workers(threads, num_threads, src, dst, debug=False):
    """Start all of the worker threads."""
    for i in range(num_threads):
        t = threading.Thread(target=worker, args=(i, src, dst, debug))
        t.start()
        threads.append(t)


def stop_workers(threads, src):
    """Send the workers a None object, causing them to stop work.
    We enqueue one stop object for each worker."""
    for i in range(len(threads)):
        src.put(None)


def calculate_score(delays):
    """Return the rms, mean, standard deviation, and overall
    score for the passed list of delay values."""
    score = 0
    if len(delays) > 0:
        r = rms(delays)
        m = statistics.mean(delays)
        s = statistics.pstdev(delays, m)
        source_score = get_delay_score(r)
        score += source_score
    else:
        r = m = s = score = 0
    return (r, m, s, score)


def calculate_results(q, verbose=False):
    """Get the scores for all the hosts.
    Return a hash of hosts and their cumulative scores."""
    results = {}
    while not q.empty():
        (host, delays) = q.get()
        (rms, mean, stdev, score) = calculate_score(delays)
        delaystrings = [str(x) for x in delays]
        if verbose:
            print('%s score=%.3f rms=%.3f mean=%.3f stdevp=%.3f [%s]' %
                  (host, score, rms, mean, stdev, ", ".join(delaystrings)))
        if host in results:
            results[host] += score
        else:
            results[host] = score
    return results


def wait_workers(threads):
    """Wait for the given list of threads to complete."""
    for t in threads:
        t.join()


def run_checks(hosts, debug=False, numthreads=None, verbose=False):
    """Perform a check of the listed hosts.
    Can take several seconds per host."""
    sources = queue.Queue()
    results = queue.Queue()
    threads = []
    for h in hosts:
        sources.put(h)
    if numthreads is None:
        numthreads = min(len(hosts), MAX_THREADS)
    start_workers(threads, numthreads, sources, results, debug)
    sources.join()
    stop_workers(threads, sources)
    # wait_workers(threads)
    return calculate_results(results, verbose)


def get_source_score(hosts, debug=False, numthreads=None, verbose=False):
    """Check NTP connectivity to the given list of sources - return a single overall score"""
    results = run_checks(hosts, debug, numthreads, verbose)
    if results is None:
        return 0

    total = 0
    for host in results:
        total += results[host]
    return total


def display_results(results):
    """Sort the hash by value. Print the results."""
    # http://stackoverflow.com/a/2258273
    result = sorted(results.items(), key=lambda x: x[1], reverse=True)
    for i in result:
        print("%s %.3f" % (i[0], i[1]))


def get_args():
    parser = argparse.ArgumentParser(description='Get NTP server/peer/pool scores')
    parser.add_argument('--debug', '-d', action='store_true', help='Enable thread debug output')
    parser.add_argument('--verbose', '-v', action='store_true', help='Display scoring detail')
    parser.add_argument('hosts', nargs=argparse.REMAINDER, help='List of hosts to check')
    return parser.parse_args()


if __name__ == '__main__':
    args = get_args()
    results = run_checks(args.hosts, debug=args.debug, verbose=args.verbose)
    if results:
        display_results(results)
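For readers skimming the diff, the scoring arithmetic above (get_delay_score applied to the rms of the measured delays, as in calculate_score) can be sketched standalone. This is a minimal re-implementation for illustration only; the delay figures are made up:

```python
import math


def rms(values):
    # root mean square of the sampled delays
    return math.sqrt(sum(v * v for v in values) / len(values))


def score(delays):
    # as in calculate_score: -log of the rms delay, 0 for no data
    return -math.log(rms(delays)) if delays else 0


nearby = score([0.012, 0.011, 0.013])   # ~12ms: a well-connected source
distant = score([0.36, 0.37, 0.36])     # ~360ms: a distant source
assert nearby > distant > 0             # lower delay, higher score
```

Because -log is monotonically decreasing, summing these per-source scores rewards both low delay and a larger number of reachable sources, which is what get_source_score relies on.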
diff --git a/tests/10-deploy-test.py b/tests/10-deploy-test.py
index db65f8c..efa5d99 100755
--- a/tests/10-deploy-test.py
+++ b/tests/10-deploy-test.py
@@ -39,7 +39,7 @@ except amulet.helpers.TimeoutError:
     message = 'The environment did not setup in %d seconds.' % seconds
     # The SKIP status enables skip or fail the test based on configuration.
     amulet.raise_status(amulet.SKIP, msg=message)
-except:
+except Exception:
     raise
 
 # Unable to get the sentry unit for ntp because it is a subordinate.
diff --git a/unit_tests/test_ntp_hooks.py b/unit_tests/test_ntp_hooks.py
new file mode 100644
index 0000000..e69de29
--- /dev/null
+++ b/unit_tests/test_ntp_hooks.py
diff --git a/unit_tests/test_ntp_scoring.py b/unit_tests/test_ntp_scoring.py
new file mode 100644
index 0000000..5840252
--- /dev/null
+++ b/unit_tests/test_ntp_scoring.py
@@ -0,0 +1,106 @@
#!/usr/bin/env python3

from random import shuffle
from unittest.mock import Mock, patch
import unittest

import ntp_scoring


class TestNtpScoring(unittest.TestCase):

    def setUp(self):
        patcher = patch('charmhelpers.core.hookenv.log')
        patcher.start()
        self.addCleanup(patcher.stop)
        patcher = patch('charmhelpers.core.unitdata.kv')
        patcher.start()
        self.addCleanup(patcher.stop)

    @patch('ntp_source_score.run_cmd')
    def testGetVirtTypeValues(self, run_cmd):
        def virt_test(expected, return_value):
            run_cmd.return_value = [return_value]
            self.assertEqual(ntp_scoring.get_virt_type(), expected)
            run_cmd.assert_called_once_with('facter virtual')
            run_cmd.reset_mock()

        virt_test('container', 'docker')
        virt_test('container', 'lxc')
        virt_test('container', 'openvz')
        virt_test('physical', 'physical')
        virt_test('physical', 'xen0')
        virt_test('vm', '')
        virt_test('vm', [])
        virt_test('vm', 1.23)
        virt_test('vm', 'a')
        virt_test('vm', 'kvm')
        virt_test('vm', None)
        virt_test('vm', 'something-else')
        virt_test('vm', 'The quick brown fox jumps over the lazy dogs')

    @patch('ntp_source_score.run_cmd')
    def testGetVirtTypeEmptyList(self, run_cmd):
        run_cmd.return_value = []
        self.assertEqual(ntp_scoring.get_virt_type(), 'vm')
        run_cmd.assert_called_once_with('facter virtual')

    @patch('ntp_source_score.run_cmd')
    def testGetVirtTypeWrongType(self, run_cmd):
        run_cmd.return_value = {}
        self.assertEqual(ntp_scoring.get_virt_type(), 'vm')
        run_cmd.assert_called_once_with('facter virtual')

    @patch('ntp_source_score.run_cmd')
    def testGetVirtMultiplier(self, run_cmd):
        def multiplier_test(expected, return_value):
            run_cmd.return_value = [return_value]
            self.assertEqual(ntp_scoring.get_virt_multiplier(), expected)
            run_cmd.assert_called_once_with('facter virtual')
            run_cmd.reset_mock()

        multiplier_test(-1, 'docker')
        multiplier_test(-1, 'lxc')
        multiplier_test(-1, 'openvz')
        multiplier_test(1.25, 'physical')
        multiplier_test(1.25, 'xen0')
        multiplier_test(1, '')
        multiplier_test(1, [])
        multiplier_test(1, 1.23)
        multiplier_test(1, 'a')
        multiplier_test(1, 'kvm')
        multiplier_test(1, None)
        multiplier_test(1, 'something-else')
        multiplier_test(1, 'The quick brown fox jumps over the lazy dogs')

    def testGetPackageDivisor(self):

        def test_divisor(expected, pslist, precision=6):
            def fake_pslist():
                """yield a list of objects for which name() returns the given list"""
                shuffle(pslist)
                for p in pslist:
                    m = Mock()
                    m.name.return_value = p
                    yield m

            with patch('psutil.process_iter', side_effect=fake_pslist):
                divisor = round(ntp_scoring.get_package_divisor(), precision)
                self.assertEqual(round(expected, precision), divisor)

        with self.assertRaises(TypeError):
            test_divisor(1, None)

        test_divisor(1, [])
        test_divisor(1, ['a', 'b', 'c'])
        test_divisor(1, 'The quick brown fox jumps over the lazy dogs'.split())
        test_divisor(1.1, 'The quick brown fox jumps over the lazy dogs'.split() + ['swift-1'])
        test_divisor(1.1, ['swift-1'])
        test_divisor(1.1, ['ceph-1', 'ceph-2'])
        test_divisor(1.25, ['ceph-osd-1', 'ceph-osd-2', 'ceph-osd-3'])
        test_divisor(1.25, ['nova-compute-1', 'nova-compute-2', 'nova-compute-3', 'nova-compute-4'])
        test_divisor(1.1 * 1.25, ['swift-1', 'nova-compute-2'])
        test_divisor(1.1 * 1.25, ['systemd', 'bind', 'swift-1', 'nova-compute-2', 'test'])
        test_divisor(1.1 * 1.25 * 1.1, ['swift-1', 'nova-compute-2', 'ceph-3'])
        test_divisor(1.1 * 1.25 * 1.25, ['swift-1', 'nova-compute-2', 'ceph-osd-3'])
        test_divisor(1.1 * 1.25 * 1.1 * 1.25, ['swift-1', 'nova-compute-2', 'ceph-3', 'ceph-osd-4'])
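The `m.name.return_value` dance in testGetPackageDivisor is needed because of a unittest.mock quirk: `name` is consumed by the `Mock` constructor to label the mock's repr, so `Mock(name='x')` does not give the mock a usable `.name` attribute. A standalone sketch of the difference (the process name here is illustrative):

```python
from unittest.mock import Mock

# Assigning via return_value works: .name becomes a child mock
# that returns the string when called, like psutil.Process.name().
proc = Mock()
proc.name.return_value = 'ceph-osd-1'
assert proc.name() == 'ceph-osd-1'

# Passing name= to the constructor only sets the repr label;
# .name here is an ordinary auto-created child Mock instead.
labelled = Mock(name='ceph-osd-1')
assert labelled.name() != 'ceph-osd-1'
```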
diff --git a/unit_tests/test_ntp_source_score.py b/unit_tests/test_ntp_source_score.py
new file mode 100644
index 0000000..378faa0
--- /dev/null
+++ b/unit_tests/test_ntp_source_score.py
@@ -0,0 +1,167 @@
#!/usr/bin/env python3

from unittest.mock import patch
import math
import unittest

from ntp_source_score import (
    get_delay_score,
    get_source_delays,
    rms,
    run_cmd,
)

ntpdate_output = """
...
reference time: dda179ee.3ec34fdd Mon, Oct 30 2017 20:14:06.245
originate timestamp: dda17a5b.af7c528b Mon, Oct 30 2017 20:15:55.685
transmit timestamp: dda17a5b.80b4dc04 Mon, Oct 30 2017 20:15:55.502
filter delay: 0.54126 0.36757 0.36655 0.36743
        0.00000 0.00000 0.00000 0.00000
filter offset: 0.099523 0.012978 0.011831 0.011770
        0.000000 0.000000 0.000000 0.000000
delay 0.36655, dispersion 0.01126
offset 0.011831
...
reference time: dda17695.69e65b2f Mon, Oct 30 2017 19:59:49.413
originate timestamp: dda17a5b.afcec2dd Mon, Oct 30 2017 20:15:55.686
transmit timestamp: dda17a5b.80bb2488 Mon, Oct 30 2017 20:15:55.502
filter delay: 0.36520 0.36487 0.36647 0.36604
        0.00000 0.00000 0.00000 0.00000
filter offset: 0.012833 0.013758 0.013731 0.013629
        0.000000 0.000000 0.000000 0.000000
delay 0.36487, dispersion 0.00049
offset 0.013758
...
reference time: dda1782c.6aec9646 Mon, Oct 30 2017 20:06:36.417
originate timestamp: dda17a5b.d2d04ef4 Mon, Oct 30 2017 20:15:55.823
transmit timestamp: dda17a5b.b37c4098 Mon, Oct 30 2017 20:15:55.701
filter delay: 0.28581 0.28406 0.28551 0.28596
        0.00000 0.00000 0.00000 0.00000
filter offset: -0.00802 -0.00854 -0.00791 -0.00787
        0.000000 0.000000 0.000000 0.000000
delay 0.28406, dispersion 0.00050
offset -0.008544
...
reference time: dda17735.4a03e3ca Mon, Oct 30 2017 20:02:29.289
originate timestamp: dda17a5c.1634d231 Mon, Oct 30 2017 20:15:56.086
transmit timestamp: dda17a5b.e6934fad Mon, Oct 30 2017 20:15:55.900
filter delay: 0.37044 0.37077 0.37050 0.37086
        0.00000 0.00000 0.00000 0.00000
filter offset: 0.013993 0.013624 0.013425 0.013362
        0.000000 0.000000 0.000000 0.000000
delay 0.37044, dispersion 0.00046
offset 0.013993
...
reference time: dda17695.69e65b2f Mon, Oct 30 2017 19:59:49.413
originate timestamp: dda17a5c.4944bb52 Mon, Oct 30 2017 20:15:56.286
transmit timestamp: dda17a5c.19cf5199 Mon, Oct 30 2017 20:15:56.100
filter delay: 0.36873 0.36823 0.36911 0.36781
        0.00000 0.00000 0.00000 0.00000
filter offset: 0.014635 0.014599 0.014166 0.014239
        0.000000 0.000000 0.000000 0.000000
delay 0.36781, dispersion 0.00026
offset 0.014239
...
reference time: dda179ee.3ec34fdd Mon, Oct 30 2017 20:14:06.245
originate timestamp: dda17a5c.7bbd3828 Mon, Oct 30 2017 20:15:56.483
transmit timestamp: dda17a5c.4cf92e99 Mon, Oct 30 2017 20:15:56.300
filter delay: 0.36554 0.36617 0.36673 0.36618
        0.00000 0.00000 0.00000 0.00000
filter offset: 0.012466 0.012691 0.012863 0.012346
        0.000000 0.000000 0.000000 0.000000
delay 0.36554, dispersion 0.00018
offset 0.012466
...
"""
ntpdate_delays = [0.36655, 0.36487, 0.28406, 0.37044, 0.36781, 0.36554]


class TestNtpSourceScore(unittest.TestCase):

    def test_rms(self):
        self.assertEqual(rms([0, 0, 0, 0, 0]), 0)
        self.assertEqual(rms([0, 1, 0, 1, 0]), math.sqrt(0.4))
        self.assertEqual(rms([1, 1, 1, 1, 1]), 1)
        self.assertEqual(rms([1, 2, 3, 4, 5]), math.sqrt(11))
        self.assertEqual(rms([0.01, 0.02]), math.sqrt(0.00025))
        self.assertEqual(rms([0.02766, 0.0894, 0.02657, 0.02679]), math.sqrt(0.00254527615))
        self.assertEqual(rms([80, 50, 30]), math.sqrt(3266.66666666666666667))
        self.assertEqual(rms([81, 53, 32]), math.sqrt(3464.66666666666666667))
        self.assertEqual(rms([81.1, 53.9, 32.3]), math.sqrt(3508.57))
        self.assertEqual(rms([81.14, 53.93, 32.30]), math.sqrt(3511.8115))
        self.assertEqual(rms([81.141, 53.935, 32.309]), math.sqrt(3512.23919566666666667))
        self.assertTrue(math.isnan(rms([])))
        with self.assertRaises(TypeError):
            rms(['a', 'b', 'c'])

    @patch('subprocess.check_output')
    def test_run_cmd(self, patched):
        patched.return_value = b'a\nb\nc\n'
        self.assertEqual(run_cmd('ls'), ['a', 'b', 'c', ''])

        patched.return_value = b'4.13.0-14-generic\n'
        self.assertEqual(run_cmd('uname -r'), ['4.13.0-14-generic', ''])

        self.assertEqual(patched.call_count, 2)

    def test_get_source_delays(self):

        @patch('ntp_source_score.run_cmd')
        def test_source_delay(data, expect, patched):
            patched.return_value = data
            self.assertEqual(get_source_delays('ntp.example.com'), expect)
            patched.assert_called_once_with('ntpdate -d -t 0.2 ntp.example.com')

        @patch('ntp_source_score.run_cmd')
        def test_source_delay_error(data, e, patched):
            patched.return_value = data
            with self.assertRaises(e):
                get_source_delays('ntp.example.com')
            patched.assert_called_once_with('ntpdate -d -t 0.2 ntp.example.com')

        test_source_delay([], [])
        test_source_delay('', [])
        test_source_delay('123', [])
        test_source_delay(['123 678', '234 asdf', 'yaled 345 901'], [])
        test_source_delay(['123 678', 'delay 345 901', '234 asdf'], [345])
        test_source_delay(['delay 123 678', 'delay 234 asdf', 'delay 345 901'], [123, 234, 345])
        test_source_delay(ntpdate_output.split('\n'), ntpdate_delays)

        test_source_delay_error(None, TypeError)
        test_source_delay_error(123, TypeError)

    def test_get_delay_score_error(self):
        # You can't have a negative or zero response time
        with self.assertRaises(ValueError):
            get_delay_score(-100)
        with self.assertRaises(ValueError):
            get_delay_score(-1)
        with self.assertRaises(ValueError):
            get_delay_score(-0.1)
        with self.assertRaises(ValueError):
            get_delay_score(0)

    def test_get_delay_scores(self):
        scores = [
            get_delay_score(0.001),  # 1ms delay
            get_delay_score(0.01),
            get_delay_score(0.025),
            get_delay_score(0.05),
            get_delay_score(0.1),
            get_delay_score(0.333),  # anything beyond this should never happen
            get_delay_score(0.999),
            get_delay_score(1),
            get_delay_score(3),      # 3s delay - probably on the moon
            get_delay_score(10),
            get_delay_score(9999),   # 2.79h delay - are you orbiting Saturn or something?
        ]

        for i in range(len(scores)):
            # all lower delays should get a higher score
            for higher in range(i):
                self.assertLess(scores[i], scores[higher])
            # all higher delays should get a lower score
            if i < len(scores) - 1:
                for lower in range(i + 1, len(scores)):
                    self.assertGreater(scores[i], scores[lower])
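The thread pool exercised above (worker/start_workers/stop_workers/run_checks) follows a standard queue fan-out pattern: enqueue the work items, `join()` the source queue so every item has been `task_done()`d, then push one `None` sentinel per worker so each thread exits cleanly. A standalone sketch, with a trivial stand-in for the ntpdate probe (hostnames and the upper-case transform are illustrative only):

```python
import queue
import threading


def worker(src, dst):
    # pull hostnames until the None sentinel arrives
    while True:
        item = src.get()
        if item is None:
            break
        dst.put((item, item.upper()))  # stand-in for get_source_delays()
        src.task_done()


src, dst = queue.Queue(), queue.Queue()
threads = []
for h in ['ntp1.example.com', 'ntp2.example.com']:
    src.put(h)
for _ in range(2):
    t = threading.Thread(target=worker, args=(src, dst))
    t.start()
    threads.append(t)

src.join()            # blocks until every real work item is task_done()
for _ in threads:
    src.put(None)     # one sentinel per worker, enqueued after the join
for t in threads:
    t.join()

results = dict(dst.get() for _ in range(dst.qsize()))
```

Because the sentinels are enqueued only after `src.join()` returns, it does not matter that the workers never call `task_done()` for them; the charm's run_checks relies on the same ordering.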
