Merge ~martin-kalcok/juju-verify:ceph-mon into ~canonical-solutions-engineering/juju-verify:master
Status: Merged
Approved by: Robert Gildein
Approved revision: fbbac866c9ade5d961ce40c9c0a8fb4d0ad671af
Merged at revision: e3a7cfa95e66b493ad11a1a165b8d65fbe389ea7
Proposed branch: ~martin-kalcok/juju-verify:ceph-mon
Merge into: ~canonical-solutions-engineering/juju-verify:master
Diff against target: 460 lines (+306/-8), 10 files modified:
- docs/verifiers/ceph-mon.rst (+33/-0)
- juju_verify/verifiers/__init__.py (+2/-1)
- juju_verify/verifiers/ceph.py (+81/-1)
- setup.cfg (+2/-1)
- tests/functional/requirements.txt (+0/-0)
- tests/functional/tests/bundles/ceph.yaml (+1/-1)
- tests/functional/tests/bundles/hacluster.yaml (+1/-1)
- tests/functional/tests/test_ceph.py (+29/-0)
- tests/unit/verifiers/test_ceph.py (+156/-2)
- tox.ini (+1/-1)
Related bugs:
Reviewer | Review Type | Date Requested | Status
---|---|---|---
Robert Gildein | | | Approve
🤖 prod-jenkaas-bootstack (community) | continuous-integration | | Approve
Canonical Solutions Engineering Team | | | Pending
Canonical Solutions Engineering Team | | | Pending
Review via email: mp+401953@code.launchpad.net |
This proposal supersedes a proposal from 2021-03-24.
Commit message
Add verifier for ceph-mon
This change adds a CephMon verifier for the ceph-mon charm. It uses
existing actions to ensure the cluster has HEALTH_OK status.
The change relies on the ceph-mon get-quorum-status action, introduced in
https:/
It also relies on Juju features introduced in 2.8.10 and adds a dependency on
that version.
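As a rough illustration of the health check the commit message describes (the function and dict below are simplified stand-ins, not the verifier's real API): the verifier inspects the ``get-health`` action output and treats the cluster as healthy only if it reports HEALTH_OK.

```python
def cluster_is_healthy(action_output: dict) -> bool:
    """Return True if the get-health action output reports HEALTH_OK.

    `action_output` is a plain dict standing in for the data returned by
    the charm action; the real verifier extracts the 'message' field from
    a juju Action object.
    """
    message = action_output.get("message", "")
    return "HEALTH_OK" in message


print(cluster_is_healthy({"message": "HEALTH_OK no issues"}))     # True
print(cluster_is_healthy({"message": "HEALTH_WARN 1 osd down"}))  # False
```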
Description of the change
🤖 Canonical IS Merge Bot (canonical-is-mergebot) wrote : Posted in a previous version of this proposal | # |
Unable to determine commit message from repository - please click "Set commit message" and enter the commit message manually.
🤖 prod-jenkaas-bootstack (prod-jenkaas-bootstack) wrote : Posted in a previous version of this proposal | # |
A CI job is currently in progress. A follow up comment will be added when it completes.
🤖 Canonical IS Merge Bot (canonical-is-mergebot) wrote : Posted in a previous version of this proposal | # |
This merge proposal is being monitored by mergebot. Change the status to Approved to merge.
🤖 prod-jenkaas-bootstack (prod-jenkaas-bootstack) wrote : Posted in a previous version of this proposal | # |
PASSED: Continuous integration, rev:afd975b3120
https:/
Executed test runs:
SUCCESS: https:/
None: https:/
Click here to trigger a rebuild:
https:/
Martin Kalcok (martin-kalcok) wrote : Posted in a previous version of this proposal | # |
Code-wise this change is LGTM, thanks. I just have one in-line suggestion that I think is worth considering.
General question: I looked through the documentation and I see that an odd number of monitors is not necessary for quorum, but it is strongly recommended [1]. Do you think it would be worth adding a warning (via logger) telling the user that the action would result in a sub-optimal ceph-mon configuration? (i.e. having 2 mons is not much better than 1, as it provides no fault tolerance).
[1] https:/
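To make the fault-tolerance point above concrete: quorum needs a strict majority, so the number of monitor failures a cluster can absorb is (n - 1) // 2, which is why 2 monitors are no better than 1. A small sketch (hypothetical helper, not part of this proposal):

```python
def mon_fault_tolerance(mon_count: int) -> int:
    """Monitors that can fail while a strict majority stays online."""
    return (mon_count - 1) // 2


# 1 and 2 monitors both tolerate zero failures; only odd counts add tolerance.
for count in (1, 2, 3, 4, 5):
    print(count, mon_fault_tolerance(count))
```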
🤖 prod-jenkaas-bootstack (prod-jenkaas-bootstack) wrote : Posted in a previous version of this proposal | # |
PASSED: Continuous integration, rev:f12707b94e2
Xav Paice (xavpaice) wrote : Posted in a previous version of this proposal | # |
This will want a rebase and functest update once the functest change is finalized.
Robert Gildein (rgildein) wrote : Posted in a previous version of this proposal | # |
Overall, it looks good, the only problem is using "\n".
🤖 Canonical IS Merge Bot (canonical-is-mergebot) wrote : | # |
This merge proposal is being monitored by mergebot. Change the status to Approved to merge.
🤖 prod-jenkaas-bootstack (prod-jenkaas-bootstack) wrote : | # |
PASSED: Continuous integration, rev:8b3228b1e67
🤖 prod-jenkaas-bootstack (prod-jenkaas-bootstack) wrote : | # |
PASSED: Continuous integration, rev:15f5c2d8f5b
Alvaro Uria (aluria) wrote : | # |
Overall, the change LGTM. I've added a suggestion to avoid multiple calls to the "get-health" action. Also, some documentation could be added outlining the expected status of the ceph-mons for the check to pass (i.e. what is checked).
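The merged diff addresses this by keeping one unit per application before calling ``check_cluster_health``. A standalone sketch of that deduplication (the ``Unit`` namedtuple here is a stand-in for ``juju.unit.Unit``):

```python
from collections import namedtuple

# Stand-in for juju.unit.Unit; only the 'application' attribute matters here.
Unit = namedtuple("Unit", ["name", "application"])

units = [
    Unit("ceph-mon/0", "ceph-mon"),
    Unit("ceph-mon/1", "ceph-mon"),
    Unit("ceph-mon-extra/0", "ceph-mon-extra"),
]

# One unit per application, so "get-health" runs once per cluster.
app_map = {unit.application: unit for unit in units}
unique_app_units = list(app_map.values())
print([u.name for u in unique_app_units])  # ['ceph-mon/1', 'ceph-mon-extra/0']
```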
🤖 prod-jenkaas-bootstack (prod-jenkaas-bootstack) wrote : | # |
PASSED: Continuous integration, rev:27201e23b82
🤖 prod-jenkaas-bootstack (prod-jenkaas-bootstack) wrote : | # |
PASSED: Continuous integration, rev:ca7393f1f2d
Robert Gildein (rgildein) wrote : | # |
LGTM, only two nitpicks and one suggestion for pylint ignore.
🤖 prod-jenkaas-bootstack (prod-jenkaas-bootstack) wrote : | # |
PASSED: Continuous integration, rev:fbbac866c9a
🤖 Canonical IS Merge Bot (canonical-is-mergebot) wrote : | # |
Change successfully merged at revision e3a7cfa95e66b49
Preview Diff
1 | diff --git a/docs/verifiers/ceph-mon.rst b/docs/verifiers/ceph-mon.rst |
2 | new file mode 100644 |
3 | index 0000000..43b7cab |
4 | --- /dev/null |
5 | +++ b/docs/verifiers/ceph-mon.rst |
6 | @@ -0,0 +1,33 @@ |
7 | +Ceph-mon verifier |
8 | +================= |
9 | + |
10 | +So far, the "reboot" and "shutdown" actions are supported and they both |
11 | +perform the same set of checks. |
12 | + |
13 | +* check minimum Juju version |
14 | +* check Ceph Monitor quorum |
15 | +* check Ceph clusters health |
16 | + |
17 | + |
18 | +check Juju version |
19 | +------------------ |
20 | + |
21 | +Ceph-mon verification relies on Juju features introduced in 2.8.10. If this minimum |
22 | +version requirement is not met, the verification stops immediately and returns a |
23 | +``Failed`` result. |
24 | + |
25 | +check Ceph Monitor quorum |
26 | +------------------------- |
27 | + |
28 | +This check verifies that the intended action won't remove more than half of the monitors |
29 | +in each affected ceph-mon cluster. A majority of Ceph monitors must remain alive after |
30 | +the change to maintain quorum. |
31 | + |
32 | +check Ceph clusters health |
33 | +-------------------------- |
34 | + |
35 | +This check runs the ``get-health`` action on one of the targeted ``ceph-mon`` units to |
36 | +get the cluster's health. If the targeted units belong to multiple Juju applications, the |
37 | +``get-health`` action is run on one unit per application. A cluster is considered |
38 | +healthy if the action's output contains HEALTH_OK. All affected clusters must be healthy |
39 | +for the verification to succeed. |
40 | diff --git a/juju_verify/verifiers/__init__.py b/juju_verify/verifiers/__init__.py |
41 | index a7a518b..36233c0 100644 |
42 | --- a/juju_verify/verifiers/__init__.py |
43 | +++ b/juju_verify/verifiers/__init__.py |
44 | @@ -25,7 +25,7 @@ from juju.unit import Unit |
45 | from juju_verify.exceptions import CharmException |
46 | from juju_verify.utils.unit import parse_charm_name |
47 | from juju_verify.verifiers.base import BaseVerifier |
48 | -from juju_verify.verifiers.ceph import CephOsd |
49 | +from juju_verify.verifiers.ceph import CephOsd, CephMon |
50 | from juju_verify.verifiers.nova_compute import NovaCompute |
51 | from juju_verify.verifiers.result import Result, Severity |
52 | |
53 | @@ -35,6 +35,7 @@ logger = logging.getLogger(__name__) |
54 | SUPPORTED_CHARMS = { |
55 | 'nova-compute': NovaCompute, |
56 | 'ceph-osd': CephOsd, |
57 | + 'ceph-mon': CephMon, |
58 | } |
59 | |
60 | |
61 | diff --git a/juju_verify/verifiers/ceph.py b/juju_verify/verifiers/ceph.py |
62 | index 4258125..1c5b56e 100644 |
63 | --- a/juju_verify/verifiers/ceph.py |
64 | +++ b/juju_verify/verifiers/ceph.py |
65 | @@ -21,6 +21,7 @@ from collections import defaultdict |
66 | from typing import Dict, Optional, Any, List |
67 | |
68 | from juju.unit import Unit |
69 | +from packaging.version import Version, InvalidVersion |
70 | |
71 | from juju_verify.utils.action import data_from_action |
72 | from juju_verify.utils.unit import ( |
73 | @@ -31,6 +32,7 @@ from juju_verify.utils.unit import ( |
74 | ) |
75 | from juju_verify.verifiers.base import BaseVerifier |
76 | from juju_verify.verifiers.result import aggregate_results, Result, Severity |
77 | +from juju_verify.exceptions import CharmException |
78 | |
79 | logger = logging.getLogger() |
80 | |
81 | @@ -99,6 +101,7 @@ class CephCommon(BaseVerifier): # pylint: disable=W0223 |
82 | verify_charm_unit("ceph-mon", *units) |
83 | result = Result() |
84 | action_map = run_action_on_units(list(units), "get-health") |
85 | + |
86 | for unit, action in action_map.items(): |
87 | cluster_health = data_from_action(action, "message") |
88 | logger.debug("Unit (%s): Ceph cluster health '%s'", unit, cluster_health) |
89 | @@ -178,7 +181,7 @@ class CephCommon(BaseVerifier): # pylint: disable=W0223 |
90 | class CephOsd(CephCommon): |
91 | """Implementation of verification checks for the ceph-osd charm.""" |
92 | |
93 | - NAME = 'ceph-osd' |
94 | + NAME = "ceph-osd" |
95 | |
96 | def __init__(self, units: List[Unit]): |
97 | """Ceph-osd charm verifier.""" |
98 | @@ -333,3 +336,80 @@ class CephOsd(CephCommon): |
99 | def verify_shutdown(self) -> Result: |
100 | """Verify that it's safe to shutdown selected ceph-osd units.""" |
101 | return self.verify_reboot() |
102 | + |
103 | + |
104 | +class CephMon(CephCommon): |
105 | + """Implementation of verification checks for the ceph-mon charm.""" |
106 | + |
107 | + NAME = "ceph-mon" |
108 | + |
109 | + def verify_reboot(self) -> Result: |
110 | + """Verify that it's safe to reboot selected ceph-mon units.""" |
111 | + version_check = self.check_version() |
112 | + if not version_check.success: |
113 | + return version_check |
114 | + |
115 | + # Get one ceph-mon unit per each application |
116 | + app_map = {unit.application: unit for unit in self.units} |
117 | + unique_app_units = app_map.values() |
118 | + |
119 | + return aggregate_results( |
120 | + version_check, |
121 | + self.check_quorum(), |
122 | + self.check_cluster_health(*unique_app_units) |
123 | + ) |
124 | + |
125 | + def verify_shutdown(self) -> Result: |
126 | + """Verify that it's safe to shutdown selected units.""" |
127 | + return self.verify_reboot() |
128 | + |
129 | + def check_quorum(self) -> Result: |
130 | + """Check that the shutdown does not result in <50% mons alive.""" |
131 | + result = Result() |
132 | + |
133 | + action_name = "get-quorum-status" |
134 | + action_results = self.run_action_on_all(action_name) |
135 | + |
136 | + affected_hosts = {unit.machine.hostname for unit in self.units} |
137 | + |
138 | + for unit_id, action in action_results.items(): |
139 | + # run this per unit because we might have multiple clusters |
140 | + known_mons = set(json.loads(data_from_action(action, "known-mons"))) |
141 | + online_mons = set(json.loads(data_from_action(action, "online-mons"))) |
142 | + |
143 | + mon_count = len(known_mons) |
144 | + mons_after_change = len(online_mons - affected_hosts) |
145 | + |
146 | + if mons_after_change <= mon_count // 2: |
147 | + result.add_partial_result(Severity.FAIL, f"Removing unit {unit_id} will" |
148 | + f" lose Ceph mon quorum") |
149 | + |
150 | + if result.empty: |
151 | + result.add_partial_result(Severity.OK, 'Ceph-mon quorum check passed.') |
152 | + |
153 | + return result |
154 | + |
155 | + def check_version(self) -> Result: |
156 | + """Check minimum required version of juju agents on units. |
157 | + |
158 | + Ceph-mon verifier requires that all the units run juju agent >=2.8.10 due to |
159 | + reliance on juju.Machine.hostname feature. |
160 | + """ |
161 | + min_version = Version('2.8.10') |
162 | + result = Result() |
163 | + for unit in self.units: |
164 | + juju_version = unit.safe_data.get('agent-status', {}).get('version', '') |
165 | + try: |
166 | + if Version(juju_version) < min_version: |
167 | + fail_msg = (f'Juju agent on unit {unit.entity_id} has lower than ' |
168 | + f'minimum required version. {juju_version} < ' |
169 | + f'{min_version}') |
170 | + result.add_partial_result(Severity.FAIL, fail_msg) |
171 | + except InvalidVersion as exc: |
172 | + raise CharmException(f'Failed to parse juju version from ' |
173 | + f'unit {unit.entity_id}.') from exc |
174 | + |
175 | + if result.empty: |
176 | + result.add_partial_result(Severity.OK, 'Minimum juju version check passed.') |
177 | + |
178 | + return result |
179 | diff --git a/setup.cfg b/setup.cfg |
180 | index 1eb1e8e..b117068 100644 |
181 | --- a/setup.cfg |
182 | +++ b/setup.cfg |
183 | @@ -25,7 +25,8 @@ include_package_data = True |
184 | python_requires = >=3.6 |
185 | packages = find: |
186 | install_requires = |
187 | - juju |
188 | + juju >= 2.8.6 |
189 | + packaging |
190 | |
191 | [options.extras_require] |
192 | dev = |
193 | diff --git a/tests/functional/requirements.txt b/tests/functional/requirements.txt |
194 | new file mode 100644 |
195 | index 0000000..e69de29 |
196 | --- /dev/null |
197 | +++ b/tests/functional/requirements.txt |
198 | diff --git a/tests/functional/tests/bundles/ceph.yaml b/tests/functional/tests/bundles/ceph.yaml |
199 | index 94ea2d8..b29b301 100644 |
200 | --- a/tests/functional/tests/bundles/ceph.yaml |
201 | +++ b/tests/functional/tests/bundles/ceph.yaml |
202 | @@ -1,7 +1,7 @@ |
203 | series: bionic |
204 | applications: |
205 | ceph-mon: |
206 | - charm: cs:~rgildein/ceph-mon-2 |
207 | + charm: cs:~rgildein/ceph-mon-3 |
208 | num_units: 3 |
209 | options: |
210 | expected-osd-count: 3 |
211 | diff --git a/tests/functional/tests/bundles/hacluster.yaml b/tests/functional/tests/bundles/hacluster.yaml |
212 | index ec0bf13..95ce5e9 100644 |
213 | --- a/tests/functional/tests/bundles/hacluster.yaml |
214 | +++ b/tests/functional/tests/bundles/hacluster.yaml |
215 | @@ -1,7 +1,7 @@ |
216 | series: focal |
217 | applications: |
218 | dashboard-hacluster: |
219 | - charm: cs:~xavpaice/hacluster-0 |
220 | + charm: cs:~xavpaice/hacluster |
221 | keystone: |
222 | charm: cs:keystone |
223 | num_units: 1 |
224 | diff --git a/tests/functional/tests/test_ceph.py b/tests/functional/tests/test_ceph.py |
225 | index 4fce49e..0e84632 100644 |
226 | --- a/tests/functional/tests/test_ceph.py |
227 | +++ b/tests/functional/tests/test_ceph.py |
228 | @@ -1,6 +1,7 @@ |
229 | """Test deployment and functionality of juju-verify.""" |
230 | |
231 | from juju import loop |
232 | +import zaza.model |
233 | from tests.base import BaseTestCase |
234 | |
235 | from juju_verify import juju_verify |
236 | @@ -24,3 +25,31 @@ class CephTests(BaseTestCase): |
237 | verifier = get_verifier(unit_objects) |
238 | result = verifier.verify(check) |
239 | self.assertTrue(result.success is True) |
240 | + |
241 | + def test_single_mon_unit(self): |
242 | + """Test that shutdown of a single mon unit returns OK.""" |
243 | + # juju-verify shutdown --units ceph-mon/0 |
244 | + |
245 | + units = ['ceph-mon/0'] |
246 | + log_level = 'info' |
247 | + check = 'shutdown' |
248 | + juju_verify.config_logger(log_level) |
249 | + model = loop.run(juju_verify.connect_model(zaza.model.CURRENT_MODEL)) |
250 | + unit_objects = loop.run(juju_verify.find_units(model, units)) |
251 | + verifier = get_verifier(unit_objects) |
252 | + result = verifier.verify(check) |
253 | + self.assertTrue(result.success is True) |
254 | + |
255 | + def test_two_mon_unit(self): |
256 | + """Test that shutdown of multiple mon units fails.""" |
257 | + # juju-verify shutdown --units ceph-mon/0 ceph-mon/1 |
258 | + |
259 | + units = ['ceph-mon/0', 'ceph-mon/1'] |
260 | + log_level = 'info' |
261 | + check = 'shutdown' |
262 | + juju_verify.config_logger(log_level) |
263 | + model = loop.run(juju_verify.connect_model(zaza.model.CURRENT_MODEL)) |
264 | + unit_objects = loop.run(juju_verify.find_units(model, units)) |
265 | + verifier = get_verifier(unit_objects) |
266 | + result = verifier.verify(check) |
267 | + self.assertTrue(result.success is False) |
268 | diff --git a/tests/unit/verifiers/test_ceph.py b/tests/unit/verifiers/test_ceph.py |
269 | index 7d539ca..cf4a87e 100644 |
270 | --- a/tests/unit/verifiers/test_ceph.py |
271 | +++ b/tests/unit/verifiers/test_ceph.py |
272 | @@ -17,15 +17,23 @@ |
273 | """CephOsd verifier class test suite.""" |
274 | import json |
275 | from unittest import mock |
276 | -from unittest.mock import MagicMock |
277 | +from unittest.mock import MagicMock, PropertyMock |
278 | |
279 | import pytest |
280 | +from juju.model import Model |
281 | +from juju.unit import Unit |
282 | |
283 | from juju_verify.exceptions import CharmException |
284 | -from juju_verify.verifiers.ceph import AvailabilityZone, CephCommon, CephOsd |
285 | +from juju_verify.verifiers.ceph import AvailabilityZone, CephCommon, CephOsd, CephMon |
286 | from juju_verify.verifiers.result import Result, Severity |
287 | |
288 | |
289 | +CEPH_MON_QUORUM_OK = "Ceph-mon quorum check passed." |
290 | +CEPH_MON_QUORUM_FAIL = "Removing unit {} will lose Ceph mon quorum" |
291 | +JUJU_VERSION_ERR = ("The machine for unit {} does not have a hostname attribute, " |
292 | + "please ensure that Juju controller is 2.8.10 or higher.") |
293 | + |
294 | + |
295 | @pytest.mark.parametrize("az_info, ex_az", [ |
296 | ({"root": "default", "host": "test"}, "root=default,host=test"), |
297 | ({"root": "default", "rack": "nova", "row": "row1", "host": "test"}, |
298 | @@ -365,3 +373,149 @@ def test_verify_shutdown( |
299 | mock_check_ceph_cluster_health.assert_called_once_with() |
300 | mock_check_replication_number.assert_called_once_with() |
301 | mock_check_availability_zone.assert_called_once_with() |
302 | + |
303 | + |
304 | +@pytest.mark.parametrize( |
305 | + "action_return_value, severity, msg, hostname", |
306 | + [('["host0", "host1", "host2"]', Severity.OK, CEPH_MON_QUORUM_OK, "host1"), |
307 | + ('["host0", "host1"]', Severity.FAIL, CEPH_MON_QUORUM_FAIL, "host1"), |
308 | + ('["host0", "host1", "host2"]', Severity.OK, CEPH_MON_QUORUM_OK, "host5"), |
309 | + ], |
310 | +) |
311 | +def test_check_ceph_mon_quorum(mocker, action_return_value, severity, msg, hostname): |
312 | + """Test Ceph quorum verification on CephMon.""" |
313 | + unit_to_remove = 'ceph-mon/0' |
314 | + unit = Unit(unit_to_remove, Model()) |
315 | + verifier = CephMon([unit]) |
316 | + mocker.patch.object(verifier, "run_action_on_all").return_value = { |
317 | + unit_to_remove: action_return_value, |
318 | + } |
319 | + unit.machine = mock.Mock(hostname=hostname) |
320 | + mocker.patch( |
321 | + "juju_verify.verifiers.ceph.data_from_action" |
322 | + ).return_value = action_return_value |
323 | + expected_msg = msg if severity == Severity.OK else msg.format(unit_to_remove) |
324 | + expected_result = Result(severity, expected_msg) |
325 | + result = verifier.check_quorum() |
326 | + assert result == expected_result |
327 | + |
328 | + |
329 | +@pytest.mark.parametrize('juju_version, expected_severity', [ |
330 | + pytest.param('2.8.0', Severity.FAIL, id='version-low'), |
331 | + pytest.param('2.8.10', Severity.OK, id='version-match'), |
332 | + pytest.param('2.8.11', Severity.OK, id='version-high'), |
333 | +]) |
334 | +def test_ceph_mon_version_check(juju_version, expected_severity, mocker): |
335 | + """Test expected results of ceph-mon juju version check.""" |
336 | + mock_unit_data = {'agent-status': {'version': juju_version}} |
337 | + mocker.patch.object(Unit, 'safe_data', |
338 | + new_callable=PropertyMock(return_value=mock_unit_data)) |
339 | + unit_name = 'ceph-mon/0' |
340 | + if expected_severity == Severity.OK: |
341 | + expected_result = Result(Severity.OK, 'Minimum juju version check passed.') |
342 | + else: |
343 | + fail_msg = (f'Juju agent on unit {unit_name} has lower than ' |
344 | + f'minimum required version. {juju_version} < 2.8.10') |
345 | + expected_result = Result(Severity.FAIL, fail_msg) |
346 | + |
347 | + verifier = CephMon([Unit(unit_name, Model())]) |
348 | + result = verifier.check_version() |
349 | + |
350 | + assert result == expected_result |
351 | + |
352 | + |
353 | +def test_ceph_mon_fail_version_parsing(mocker): |
354 | + """Test that exception is raised if parsing of juju version fails.""" |
355 | + bogus_version = 'foo' |
356 | + unit_name = 'ceph-mon/0' |
357 | + mock_unit_data = {'agent-status': {'version': bogus_version}} |
358 | + mocker.patch.object(Unit, 'safe_data', |
359 | + new_callable=PropertyMock(return_value=mock_unit_data)) |
360 | + |
361 | + verifier = CephMon([Unit(unit_name, Model())]) |
362 | + |
363 | + with pytest.raises(CharmException) as exc: |
364 | + _ = verifier.check_version() |
365 | + |
366 | + assert str(exc.value) == f'Failed to parse juju version from unit {unit_name}.' |
367 | + |
368 | + |
369 | +def test_verify_ceph_mon_shutdown(mocker): |
370 | + """Test that verify_shutdown links to verify_reboot.""" |
371 | + mocker.patch.object(CephMon, "verify_reboot") |
372 | + unit = Unit("ceph-mon/0", Model()) |
373 | + verifier = CephMon([unit]) |
374 | + verifier.verify_shutdown() |
375 | + verifier.verify_reboot.assert_called_once() |
376 | + |
377 | + |
378 | +@mock.patch("juju_verify.verifiers.ceph.CephCommon.check_cluster_health") |
379 | +@mock.patch("juju_verify.verifiers.ceph.CephMon.check_quorum") |
380 | +@mock.patch("juju_verify.verifiers.ceph.CephMon.check_version") |
381 | +def test_verify_ceph_mon_reboot(mock_version, mock_quorum, mock_health): |
382 | + """Test reboot verification on CephMon.""" |
383 | + unit = Unit("ceph-mon/0", Model()) |
384 | + mock_health.return_value = Result(Severity.OK, "Ceph cluster is healthy") |
385 | + mock_quorum.return_value = Result(Severity.OK, "Ceph-mon quorum check passed.") |
386 | + mock_version.return_value = Result(Severity.OK, |
387 | + "Minimum juju version check passed.") |
388 | + verifier = CephMon([unit]) |
389 | + result = verifier.verify_reboot() |
390 | + |
391 | + expected_result = Result() |
392 | + expected_result.add_partial_result(Severity.OK, |
393 | + "Minimum juju version check passed.") |
394 | + expected_result.add_partial_result(Severity.OK, 'Ceph-mon quorum check passed.') |
395 | + expected_result.add_partial_result(Severity.OK, 'Ceph cluster is healthy') |
396 | + |
397 | + assert result == expected_result |
398 | + |
399 | + |
400 | +@mock.patch("juju_verify.verifiers.ceph.CephCommon.check_cluster_health") |
401 | +@mock.patch("juju_verify.verifiers.ceph.CephMon.check_quorum") |
402 | +@mock.patch("juju_verify.verifiers.ceph.CephMon.check_version") |
403 | +def test_verify_ceph_mon_reboot_stops_on_failed_version(mock_version, mock_quorum, |
404 | + mock_health): |
405 | + """Test that if the ceph-mon version check fails, no other checks are performed.""" |
406 | + expected_result = Result(Severity.FAIL, 'version too low') |
407 | + mock_version.return_value = expected_result |
408 | + |
409 | + verifier = CephMon([Unit('ceph-mon/0', Model())]) |
410 | + result = verifier.verify_reboot() |
411 | + |
412 | + assert result == expected_result |
413 | + mock_version.assert_called_once() |
414 | + mock_quorum.assert_not_called() |
415 | + mock_health.assert_not_called() |
416 | + |
417 | + |
418 | +@mock.patch("juju_verify.verifiers.ceph.CephCommon.check_cluster_health") |
419 | +@mock.patch("juju_verify.verifiers.ceph.CephMon.check_quorum") |
420 | +@mock.patch("juju_verify.verifiers.ceph.CephMon.check_version") |
421 | +def test_verify_ceph_mon_reboot_checks_health_once(mock_version, mock_quorum, |
422 | + mock_health, mocker): |
423 | + """Test that ceph-mon verification runs 'health check' only once per application.""" |
424 | + model = Model() |
425 | + app_1 = [Unit('ceph-mon/0', model), Unit('ceph-mon/1', model)] |
426 | + app_2 = [Unit('ceph-mon-extra/0', model), Unit('ceph-mon-extra/1', model)] |
427 | + expected_health_check_units = [app_1[-1], app_2[-1]] |
428 | + |
429 | + for unit in app_1: |
430 | + mocker.patch.object(unit, 'data', |
431 | + new_callable=PropertyMock( |
432 | + return_value={'application': 'ceph-mon'}) |
433 | + ) |
434 | + |
435 | + for unit in app_2: |
436 | + mocker.patch.object(unit, 'data', |
437 | + new_callable=PropertyMock( |
438 | + return_value={'application': 'ceph-mon-extra'}) |
439 | + ) |
440 | + |
441 | + verifier = CephMon(app_1 + app_2) |
442 | + result = verifier.verify_reboot() |
443 | + |
444 | + assert result.success |
445 | + mock_version.assert_called_once() |
446 | + mock_quorum.assert_called_once() |
447 | + mock_health.assert_called_with(*expected_health_check_units) |
448 | diff --git a/tox.ini b/tox.ini |
449 | index e58986e..1138ca2 100644 |
450 | --- a/tox.ini |
451 | +++ b/tox.ini |
452 | @@ -34,7 +34,7 @@ commands = |
453 | flake8 {toxinidir}/juju_verify/ {toxinidir}/tests/ |
454 | mypy {toxinidir}/juju_verify/ |
455 | pylint {toxinidir}/juju_verify/ |
456 | - pylint {toxinidir}/tests/ --disable=E1101,R0913,R0914,W0212 |
457 | + pylint {toxinidir}/tests/ --disable=E1101,R0913,R0914,W0212,R0801 |
458 | |
459 | [testenv:build] |
460 | basepython = python3 |
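For reference, the quorum arithmetic in ``CephMon.check_quorum`` above can be exercised standalone. This sketch mirrors the merged logic (JSON-encoded monitor lists, strict-majority comparison) without any Juju objects; the function name is illustrative, not part of the codebase:

```python
import json


def quorum_survives(known_mons_json: str, online_mons_json: str,
                    affected_hosts: set) -> bool:
    """True if a strict majority of known monitors stays online after
    the affected hosts are removed (mirrors CephMon.check_quorum)."""
    known_mons = set(json.loads(known_mons_json))
    online_mons = set(json.loads(online_mons_json))
    mons_after_change = len(online_mons - affected_hosts)
    return mons_after_change > len(known_mons) // 2


mons = '["host0", "host1", "host2"]'
print(quorum_survives(mons, mons, {"host0"}))           # True: 2 of 3 remain
print(quorum_survives(mons, mons, {"host0", "host1"}))  # False: 1 of 3 remains
```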