Merge ~pjdc/charm-k8s-mattermost/+git/charm-k8s-mattermost:sidecar.2 into charm-k8s-mattermost:master
Status: Work in progress
Proposed branch: ~pjdc/charm-k8s-mattermost/+git/charm-k8s-mattermost:sidecar.2
Merge into: charm-k8s-mattermost:master
Diff against target: 1628 lines (+551/-647), 13 files modified:
  .gitignore (+1/-0)
  .jujuignore (+1/-0)
  Makefile (+1/-1)
  README.md (+20/-22)
  config.yaml (+6/-21)
  dev/null (+0/-45)
  lib/charms/nginx_ingress_integrator/v0/ingress.py (+198/-0)
  metadata.yaml (+13/-3)
  requirements.txt (+1/-1)
  src/charm.py (+166/-233)
  tests/unit/test_charm.py (+143/-293)
  tests/unit/test_helpers.py (+0/-27)
  tox.ini (+1/-1)
Related bugs:

Reviewer | Review Type | Date Requested | Status
---|---|---|---
Mattermost Charmers | Pending | |

Review via email: mp+403302@code.launchpad.net
Commit message
convert to sidecar
Includes changes from both Tom and Paul. Rebased onto the renamed old-style charm.
Description of the change
- 9c40771... by Paul Collins: use self.app.name when site_url is empty
- c291831... by Paul Collins: doc updates
Unmerged commits
- c291831... by Paul Collins: doc updates
- 9c40771... by Paul Collins: use self.app.name when site_url is empty
- c51b211... by Paul Collins: document ingress
- 9806d78... by Paul Collins: sidecar charm is on edge for now
- 6059ba4... by Paul Collins: remove _on_upgrade_charm

  We work around this issue by setting use_juju_for_storage=True.
  In upgrade-testing we get stuck in "Waiting for pebble", perhaps
  just because of this, or perhaps because of other changes since
  this workaround was added. Either way, it is no longer needed.

- ddd6233... by Paul Collins: coerce to dict when comparing live vs proposed configs
- df95aa4... by Paul Collins: name licence file with crc32 to force restart

  We no longer remove the licence file when the setting is cleared,
  which is not really a big deal. Licence files will also build up
  on the pods, but they're small and will not survive upgrades anyway.

- 5955df8... by Paul Collins: swap database and pebble checks; defer event after config check
- 2f6bb61... by Paul Collins: split test_get_pebble_config

- 66763d6... by Paul Collins: push the licence into the mattermost container via pebble
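The fallback behaviour from commit 9c40771 ("use self.app.name when site_url is empty") can be illustrated in isolation. This is a minimal sketch mirroring the `_get_external_hostname` helper in the diff, not the charm's exact code:

```python
from urllib.parse import urlparse


def external_hostname(site_url: str, app_name: str) -> str:
    """Return the hostname parsed from site_url, falling back to the
    application name when site_url is empty, so the ingress relation
    still receives a usable service-hostname."""
    if not site_url:
        return app_name
    return urlparse(site_url).hostname
```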
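Commit ddd6233 ("coerce to dict when comparing live vs proposed configs") addresses a subtle point: the live Pebble plan must be coerced to a plain dict (e.g. via `plan.to_dict()`) before comparing it with the proposed layer, otherwise the comparison never matches and the service is restarted on every event. A sketch of the idea, with plain dicts standing in for the ops objects:

```python
def services_changed(live_plan: dict, proposed_layer: dict) -> bool:
    """Compare the 'services' section of the live Pebble plan (already
    coerced to a dict) against the proposed layer; only a genuine
    difference should trigger add_layer and a workload restart."""
    return live_plan.get("services", {}) != proposed_layer.get("services", {})
```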
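Commit df95aa4 names the licence file with a crc32 of its contents, so a licence change produces a new path and therefore a restart. The naming scheme follows the `LICENCE_FILE_TEMPLATE` constant added in src/charm.py and can be sketched as:

```python
from zlib import crc32

# Template as introduced in src/charm.py by this branch.
LICENCE_FILE_TEMPLATE = "/mattermost/licence-{:08x}.txt"


def licence_path(licence: str) -> str:
    """Derive a content-dependent path: a changed licence yields a new
    filename, forcing the workload to restart with the new file."""
    return LICENCE_FILE_TEMPLATE.format(crc32(licence.encode("utf-8")))
```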
Preview Diff
1 | diff --git a/.gitignore b/.gitignore |
2 | index 490cc43..99cead5 100644 |
3 | --- a/.gitignore |
4 | +++ b/.gitignore |
5 | @@ -4,3 +4,4 @@ |
6 | .coverage |
7 | __pycache__ |
8 | build |
9 | +htmlcov |
10 | diff --git a/.jujuignore b/.jujuignore |
11 | index 2c04000..328fb43 100644 |
12 | --- a/.jujuignore |
13 | +++ b/.jujuignore |
14 | @@ -5,6 +5,7 @@ Dockerfile |
15 | Makefile |
16 | .coverage |
17 | .gitignore |
18 | +htmlcov/ |
19 | requirements.txt |
20 | tox.ini |
21 | tests/ |
22 | diff --git a/Makefile b/Makefile |
23 | index 1bf76cc..ef5e3fe 100644 |
24 | --- a/Makefile |
25 | +++ b/Makefile |
26 | @@ -17,7 +17,7 @@ clean: |
27 | @echo "Cleaning files" |
28 | @git clean -fXd |
29 | |
30 | -mattermost-k8s.charm: src/*.py requirements.txt |
31 | +mattermost-k8s.charm: lib/charms/*/v*/*.py src/*.py requirements.txt |
32 | charmcraft build |
33 | |
34 | .PHONY: lint test unittest clean |
35 | diff --git a/README.md b/README.md |
36 | index 2d74794..8a97558 100644 |
37 | --- a/README.md |
38 | +++ b/README.md |
39 | @@ -1,34 +1,32 @@ |
40 | # Mattermost Operator |
41 | |
42 | -A Juju charm deploying and managing Mattermost on Kubernetes, configurable to use a PostgreSQL backend. |
43 | - |
44 | -## Overview |
45 | - |
46 | -Mattermost offers both [a Team Edition and an Enterprise Edition](https://mattermost.com/pricing-feature-comparison/). |
47 | -This charm supports both, with the default image deploying the Team Edition. Supported |
48 | -features include authentication via SAML, Push Notifications, clustering, |
49 | -the storage of images and attachments in S3, and a Prometheus exporter for |
50 | -performance monitoring. This charm also offers seamless Mattermost version |
51 | -upgrades, initiated by switching to an image with a newer version of |
52 | -Mattermost than the one currently deployed. |
53 | +A Juju charm to deploy and manage Mattermost on Kubernetes. |
54 | |
55 | ## Usage |
56 | |
57 | -For details on using Kubernetes with Juju [see here](https://juju.is/docs/kubernetes), and for |
58 | -details on using Juju with MicroK8s for easy local testing [see here](https://juju.is/docs/microk8s-cloud). |
59 | - |
60 | To deploy the charm and relate it to [the PostgreSQL K8s charm](https://charmhub.io/postgresql-k8s) within a Juju |
61 | Kubernetes model: |
62 | |
63 | juju deploy postgresql-k8s |
64 | - juju deploy mattermost-k8s --config juju-external-hostname=foo.internal |
65 | + juju deploy mattermost-k8s --channel edge |
66 | juju relate mattermost-k8s postgresql-k8s:db |
67 | - juju expose mattermost-k8s |
68 | |
69 | -Once the deployment has completed and the "mattermost-k8s" workload state in `juju |
70 | -status` has changed to "active" you can visit http://${mattermost_ip}:8065 in a browser and log in to |
71 | -your Mattermost instance, and you'll be presented with a screen to create an |
72 | -initial admin account. Further accounts must be created using this admin account, or by |
73 | -setting up an external authentication source, such as SAML. |
74 | +Then, expose Mattermost via the Kubernetes ingress: |
75 | + |
76 | + juju deploy nginx-ingress-integrator |
77 | + juju relate mattermost-k8s nginx-ingress-integrator |
78 | + |
79 | +Once the deployment has completed and the workload states in `juju |
80 | +status` have changed to "active", you can add `mattermost-k8s` to `/etc/hosts` |
81 | +with the IP address of your Kubernetes cluster and visit `http://mattermost-k8s/` |
82 | +in a browser. |
83 | + |
84 | +You'll be presented with a screen to create an initial admin |
85 | +account. Further accounts must be created using this admin account, or |
86 | +by setting up an external authentication source, such as SAML. |
87 | + |
88 | +For further details, [please consult the documentation](https://charmhub.io/mattermost-charmers-mattermost/docs). |
89 | |
90 | -For further details, [see here](https://charmhub.io/mattermost-charmers-mattermost/docs). |
91 | +See also: |
92 | + * [using Kubernetes with Juju](https://juju.is/docs/kubernetes) |
93 | + * [using Juju with MicroK8s for easy local testing](https://juju.is/docs/microk8s-cloud) |
94 | diff --git a/config.yaml b/config.yaml |
95 | index 5827200..71b4aaa 100644 |
96 | --- a/config.yaml |
97 | +++ b/config.yaml |
98 | @@ -34,25 +34,6 @@ options: |
99 | Some features are not available without a licence. For more |
100 | information, consult the Mattermost documentation. |
101 | default: '' |
102 | - mattermost_image_path: |
103 | - type: string |
104 | - description: | |
105 | - The location of the image to use, e.g. "registry.example.com/mattermost:v1". |
106 | - |
107 | - Switching to a newer image version will initiate an upgrade of Mattermost. |
108 | - |
109 | - This setting is required. |
110 | - default: mattermostcharmers/mattermost:v5.33.3-20.04_edge |
111 | - mattermost_image_username: |
112 | - type: string |
113 | - description: | |
114 | - The username for accessing the registry specified in mattermost_image_path. |
115 | - default: '' |
116 | - mattermost_image_password: |
117 | - type: string |
118 | - description: | |
119 | - The password associated with mattermost_image_username for accessing the registry specified in mattermost_image_path. |
120 | - default: '' |
121 | outbound_proxy: |
122 | type: string |
123 | description: The proxy to use for outbound requests. |
124 | @@ -96,6 +77,8 @@ options: |
125 | type: string |
126 | description: | |
127 | The push notification server to use. |
128 | + |
129 | + Use of Mattermost's Hosted Push Notification Service requires an Enterprise Edition licence. |
130 | default: '' |
131 | push_notifications_include_message_snippet: |
132 | type: boolean |
133 | @@ -182,11 +165,13 @@ options: |
134 | Whether to use Ubuntu SSO to log in. |
135 | |
136 | This will not work unless the administrators of login.ubuntu.com have created a suitable SAML config first. |
137 | + |
138 | + This feature requires a Mattermost Enterprise Edition licence. |
139 | default: false |
140 | - use_canonical_defaults: |
141 | + use_enterprise_defaults: |
142 | type: boolean |
143 | description: | |
144 | - If set, apply miscellaneous Mattermost settings as used by Canonical. |
145 | + If set, apply miscellaneous Mattermost settings as used in an Enterprise deployment. |
146 | default: false |
147 | use_experimental_saml_library: |
148 | type: boolean |
149 | diff --git a/lib/charms/nginx_ingress_integrator/v0/ingress.py b/lib/charms/nginx_ingress_integrator/v0/ingress.py |
150 | new file mode 100644 |
151 | index 0000000..688a77c |
152 | --- /dev/null |
153 | +++ b/lib/charms/nginx_ingress_integrator/v0/ingress.py |
154 | @@ -0,0 +1,198 @@ |
155 | +"""Library for the ingress relation. |
156 | + |
157 | +This library contains the Requires and Provides classes for handling |
158 | +the ingress interface. |
159 | + |
160 | +Import `IngressRequires` in your charm, with two required options: |
161 | + - "self" (the charm itself) |
162 | + - config_dict |
163 | + |
164 | +`config_dict` accepts the following keys: |
165 | + - service-hostname (required) |
166 | + - service-name (required) |
167 | + - service-port (required) |
168 | + - limit-rps |
169 | + - limit-whitelist |
170 | + - max_body-size |
171 | + - retry-errors |
172 | + - service-namespace |
173 | + - session-cookie-max-age |
174 | + - tls-secret-name |
175 | + |
176 | +See [the config section](https://charmhub.io/nginx-ingress-integrator/configure) for descriptions |
177 | +of each, along with the required type. |
178 | + |
179 | +As an example, add the following to `src/charm.py`: |
180 | +``` |
181 | +from charms.nginx_ingress_integrator.v0.ingress import IngressRequires |
182 | + |
183 | +# In your charm's `__init__` method. |
184 | +self.ingress = IngressRequires(self, {"service-hostname": self.config["external_hostname"], |
185 | + "service-name": self.app.name, |
186 | + "service-port": 80}) |
187 | + |
188 | +# In your charm's `config-changed` handler. |
189 | +self.ingress.update_config({"service-hostname": self.config["external_hostname"]}) |
190 | +``` |
191 | +And then add the following to `metadata.yaml`: |
192 | +``` |
193 | +requires: |
194 | + ingress: |
195 | + interface: ingress |
196 | +``` |
197 | +""" |
198 | + |
199 | +import logging |
200 | + |
201 | +from ops.charm import CharmEvents |
202 | +from ops.framework import EventBase, EventSource, Object |
203 | +from ops.model import BlockedStatus |
204 | + |
205 | +# The unique Charmhub library identifier, never change it |
206 | +LIBID = "db0af4367506491c91663468fb5caa4c" |
207 | + |
208 | +# Increment this major API version when introducing breaking changes |
209 | +LIBAPI = 0 |
210 | + |
211 | +# Increment this PATCH version before using `charmcraft publish-lib` or reset |
212 | +# to 0 if you are raising the major API version |
213 | +LIBPATCH = 5 |
214 | + |
215 | +logger = logging.getLogger(__name__) |
216 | + |
217 | +REQUIRED_INGRESS_RELATION_FIELDS = { |
218 | + "service-hostname", |
219 | + "service-name", |
220 | + "service-port", |
221 | +} |
222 | + |
223 | +OPTIONAL_INGRESS_RELATION_FIELDS = { |
224 | + "limit-rps", |
225 | + "limit-whitelist", |
226 | + "max-body-size", |
227 | + "retry-errors", |
228 | + "service-namespace", |
229 | + "session-cookie-max-age", |
230 | + "tls-secret-name", |
231 | +} |
232 | + |
233 | + |
234 | +class IngressAvailableEvent(EventBase): |
235 | + pass |
236 | + |
237 | + |
238 | +class IngressCharmEvents(CharmEvents): |
239 | + """Custom charm events.""" |
240 | + |
241 | + ingress_available = EventSource(IngressAvailableEvent) |
242 | + |
243 | + |
244 | +class IngressRequires(Object): |
245 | + """This class defines the functionality for the 'requires' side of the 'ingress' relation. |
246 | + |
247 | + Hook events observed: |
248 | + - relation-changed |
249 | + """ |
250 | + |
251 | + def __init__(self, charm, config_dict): |
252 | + super().__init__(charm, "ingress") |
253 | + |
254 | + self.framework.observe(charm.on["ingress"].relation_changed, self._on_relation_changed) |
255 | + |
256 | + self.config_dict = config_dict |
257 | + |
258 | + def _config_dict_errors(self, update_only=False): |
259 | + """Check our config dict for errors.""" |
260 | + blocked_message = "Error in ingress relation, check `juju debug-log`" |
261 | + unknown = [ |
262 | + x |
263 | + for x in self.config_dict |
264 | + if x not in REQUIRED_INGRESS_RELATION_FIELDS | OPTIONAL_INGRESS_RELATION_FIELDS |
265 | + ] |
266 | + if unknown: |
267 | + logger.error( |
268 | + "Ingress relation error, unknown key(s) in config dictionary found: %s", |
269 | + ", ".join(unknown), |
270 | + ) |
271 | + self.model.unit.status = BlockedStatus(blocked_message) |
272 | + return True |
273 | + if not update_only: |
274 | + missing = [x for x in REQUIRED_INGRESS_RELATION_FIELDS if x not in self.config_dict] |
275 | + if missing: |
276 | + logger.error( |
277 | + "Ingress relation error, missing required key(s) in config dictionary: %s", |
278 | + ", ".join(missing), |
279 | + ) |
280 | + self.model.unit.status = BlockedStatus(blocked_message) |
281 | + return True |
282 | + return False |
283 | + |
284 | + def _on_relation_changed(self, event): |
285 | + """Handle the relation-changed event.""" |
286 | + # `self.unit` isn't available here, so use `self.model.unit`. |
287 | + if self.model.unit.is_leader(): |
288 | + if self._config_dict_errors(): |
289 | + return |
290 | + for key in self.config_dict: |
291 | + event.relation.data[self.model.app][key] = str(self.config_dict[key]) |
292 | + |
293 | + def update_config(self, config_dict): |
294 | + """Allow for updates to relation.""" |
295 | + if self.model.unit.is_leader(): |
296 | + self.config_dict = config_dict |
297 | + if self._config_dict_errors(update_only=True): |
298 | + return |
299 | + relation = self.model.get_relation("ingress") |
300 | + if relation: |
301 | + for key in self.config_dict: |
302 | + relation.data[self.model.app][key] = str(self.config_dict[key]) |
303 | + |
304 | + |
305 | +class IngressProvides(Object): |
306 | + """This class defines the functionality for the 'provides' side of the 'ingress' relation. |
307 | + |
308 | + Hook events observed: |
309 | + - relation-changed |
310 | + """ |
311 | + |
312 | + def __init__(self, charm): |
313 | + super().__init__(charm, "ingress") |
314 | + # Observe the relation-changed hook event and bind |
315 | + # self.on_relation_changed() to handle the event. |
316 | + self.framework.observe(charm.on["ingress"].relation_changed, self._on_relation_changed) |
317 | + self.charm = charm |
318 | + |
319 | + def _on_relation_changed(self, event): |
320 | + """Handle a change to the ingress relation. |
321 | + |
322 | + Confirm we have the fields we expect to receive.""" |
323 | + # `self.unit` isn't available here, so use `self.model.unit`. |
324 | + if not self.model.unit.is_leader(): |
325 | + return |
326 | + |
327 | + ingress_data = { |
328 | + field: event.relation.data[event.app].get(field) |
329 | + for field in REQUIRED_INGRESS_RELATION_FIELDS | OPTIONAL_INGRESS_RELATION_FIELDS |
330 | + } |
331 | + |
332 | + missing_fields = sorted( |
333 | + [ |
334 | + field |
335 | + for field in REQUIRED_INGRESS_RELATION_FIELDS |
336 | + if ingress_data.get(field) is None |
337 | + ] |
338 | + ) |
339 | + |
340 | + if missing_fields: |
341 | + logger.error( |
342 | + "Missing required data fields for ingress relation: {}".format( |
343 | + ", ".join(missing_fields) |
344 | + ) |
345 | + ) |
346 | + self.model.unit.status = BlockedStatus( |
347 | + "Missing fields for ingress: {}".format(", ".join(missing_fields)) |
348 | + ) |
349 | + |
350 | + # Create an event that our charm can use to decide it's okay to |
351 | + # configure the ingress. |
352 | + self.charm.on.ingress_available.emit() |
353 | diff --git a/metadata.yaml b/metadata.yaml |
354 | index f19cb73..84b3261 100644 |
355 | --- a/metadata.yaml |
356 | +++ b/metadata.yaml |
357 | @@ -8,10 +8,20 @@ description: | |
358 | Mattermost is a flexible, open source messaging platform that enables |
359 | secure team collaboration. |
360 | https://mattermost.com |
361 | -min-juju-version: 2.8.0 # charm storage in state |
362 | -series: |
363 | - - kubernetes |
364 | + |
365 | +containers: |
366 | + mattermost: |
367 | + resource: mattermost-image |
368 | + |
369 | +resources: |
370 | + mattermost-image: |
371 | + type: oci-image |
372 | + description: Docker image for Mattermost to run |
373 | + |
374 | requires: |
375 | db: |
376 | interface: pgsql |
377 | limit: 1 |
378 | + ingress: |
379 | + interface: ingress |
380 | + limit: 1 |
381 | diff --git a/requirements.txt b/requirements.txt |
382 | index fd6adcd..873493e 100644 |
383 | --- a/requirements.txt |
384 | +++ b/requirements.txt |
385 | @@ -1,2 +1,2 @@ |
386 | -ops |
387 | +https://github.com/canonical/operator/archive/refs/heads/master.zip |
388 | ops-lib-pgsql |
389 | diff --git a/src/charm.py b/src/charm.py |
390 | index 9416f62..d858d04 100755 |
391 | --- a/src/charm.py |
392 | +++ b/src/charm.py |
393 | @@ -4,6 +4,7 @@ |
394 | # Licensed under the GPLv3, see LICENCE file for details. |
395 | |
396 | import os |
397 | +import logging |
398 | import subprocess |
399 | |
400 | from ipaddress import ip_network |
401 | @@ -28,26 +29,24 @@ from ops.model import ( |
402 | WaitingStatus, |
403 | ) |
404 | |
405 | -from utils import extend_list_merging_dicts_matched_by_key |
406 | - |
407 | -import logging |
408 | - |
409 | +from charms.nginx_ingress_integrator.v0.ingress import IngressRequires |
410 | |
411 | pgsql = ops.lib.use("pgsql", 1, "postgresql-charmers@lists.launchpad.net") |
412 | |
413 | logger = logging.getLogger() |
414 | |
415 | |
416 | +CONTAINER_NAME = 'mattermost' # per metadata.yaml |
417 | # Mattermost's default port, and what we expect the image to use |
418 | CONTAINER_PORT = 8065 |
419 | # Default port, enforced via envConfig to prevent operator error |
420 | METRICS_PORT = 8067 |
421 | DATABASE_NAME = 'mattermost' |
422 | -LICENCE_SECRET_KEY_NAME = 'licence' |
423 | REQUIRED_S3_SETTINGS = ['s3_bucket', 's3_region', 's3_access_key_id', 's3_secret_access_key'] |
424 | -REQUIRED_SETTINGS = ['mattermost_image_path'] |
425 | +REQUIRED_SETTINGS = [] |
426 | REQUIRED_SSO_SETTINGS = ['licence', 'site_url'] |
427 | SAML_IDP_CRT = 'saml-idp.crt' |
428 | +LICENCE_FILE_TEMPLATE = "/mattermost/licence-{:08x}.txt" |
429 | |
430 | |
431 | class MattermostDBMasterAvailableEvent(EventBase): |
432 | @@ -76,26 +75,6 @@ def check_ranges(ranges, name): |
433 | return '{}: invalid network(s): {}'.format(name, ', '.join(invalid_networks)) |
434 | |
435 | |
436 | -def get_container(pod_spec, container_name): |
437 | - """Find and return the first container in pod_spec whose name is container_name, otherwise return None.""" |
438 | - for container in pod_spec['containers']: |
439 | - if container['name'] == container_name: |
440 | - return container |
441 | - raise ValueError("Unable to find container named '{}' in pod spec".format(container_name)) |
442 | - |
443 | - |
444 | -def get_env_config(pod_spec, container_name): |
445 | - """Return the envConfig of the container in pod_spec whose name is container_name, otherwise return None. |
446 | - |
447 | - If the container exists but has no envConfig, raise KeyError. |
448 | - """ |
449 | - container = get_container(pod_spec, container_name) |
450 | - if 'envConfig' in container: |
451 | - return container['envConfig'] |
452 | - else: |
453 | - raise ValueError("Unable to find envConfig for container named '{}'".format(container_name)) |
454 | - |
455 | - |
456 | class MattermostK8sCharm(CharmBase): |
457 | |
458 | state = StoredState() |
459 | @@ -104,21 +83,104 @@ class MattermostK8sCharm(CharmBase): |
460 | def __init__(self, *args): |
461 | super().__init__(*args) |
462 | |
463 | - self.framework.observe(self.on.start, self.configure_pod) |
464 | - self.framework.observe(self.on.config_changed, self.configure_pod) |
465 | - self.framework.observe(self.on.leader_elected, self.configure_pod) |
466 | - self.framework.observe(self.on.upgrade_charm, self.configure_pod) |
467 | + self.framework.observe(self.on.start, self._on_config_changed) |
468 | + self.framework.observe(self.on.config_changed, self._on_config_changed) |
469 | + self.framework.observe(self.on.leader_elected, self._on_config_changed) |
470 | + |
471 | + self.framework.observe(self.on.mattermost_pebble_ready, self._on_mattermost_pebble_ready) |
472 | |
473 | # actions |
474 | self.framework.observe(self.on.grant_admin_role_action, self._on_grant_admin_role_action) |
475 | |
476 | + # state |
477 | + self.state.set_default(db_conn_str=None, db_uri=None, db_ro_uris=[], mattermost_pebble_ready=False) |
478 | + |
479 | # database |
480 | - self.state.set_default(db_conn_str=None, db_uri=None, db_ro_uris=[]) |
481 | self.db = pgsql.PostgreSQLClient(self, 'db') |
482 | self.framework.observe(self.db.on.database_relation_joined, self._on_database_relation_joined) |
483 | self.framework.observe(self.db.on.master_changed, self._on_master_changed) |
484 | self.framework.observe(self.db.on.standby_changed, self._on_standby_changed) |
485 | - self.framework.observe(self.on.db_master_available, self.configure_pod) |
486 | + self.framework.observe(self.on.db_master_available, self._on_config_changed) |
487 | + |
488 | + self.ingress = IngressRequires(self, self._make_ingress_config()) |
489 | + |
490 | + def _make_ingress_config(self): |
491 | + return { |
492 | + "max-body-size": self.config["max_file_size"], |
493 | + "service-hostname": self._get_external_hostname(), |
494 | + "service-name": self.app.name, |
495 | + "service-port": CONTAINER_PORT, |
496 | + "tls-secret-name": self.config["tls_secret_name"], |
497 | + } |
498 | + |
499 | + def _get_external_hostname(self): |
500 | + """Extract and return hostname from site_url, otherwise return self.app.name.""" |
501 | + site_url = self.config["site_url"] |
502 | + if not site_url: |
503 | + return self.app.name |
504 | + parsed = urlparse(site_url) |
505 | + return parsed.hostname |
506 | + |
507 | + def _get_pebble_config(self): |
508 | + """Generate our pebble config.""" |
509 | + env_config = self._get_env_config() |
510 | + pebble_config = { |
511 | + "summary": "Mattermost layer", |
512 | + "description": "Mattermost layer", |
513 | + "services": { |
514 | + "mattermost": { |
515 | + "override": "replace", |
516 | + "summary": "Mattermost service", |
517 | + "command": "/mattermost/bin/mattermost", |
518 | + "startup": "enabled", |
519 | + "environment": env_config, |
520 | + } |
521 | + }, |
522 | + } |
523 | + return pebble_config |
524 | + |
525 | + def _on_mattermost_pebble_ready(self, event): |
526 | + """Handle the on pebble ready event.""" |
527 | + self.state.mattermost_pebble_ready = True |
528 | + |
529 | + def _on_config_changed(self, event): |
530 | + """Handle config-changed event.""" |
531 | + if not self.state.db_uri: |
532 | + self.unit.status = WaitingStatus('Waiting for database relation') |
533 | + event.defer() |
534 | + return |
535 | + |
536 | + problems = self._check_for_config_problems() |
537 | + if problems: |
538 | + self.unit.status = BlockedStatus(problems) |
539 | + return |
540 | + |
541 | + if not self.state.mattermost_pebble_ready: |
542 | + self.unit.status = WaitingStatus('Waiting for pebble') |
543 | + event.defer() |
544 | + return |
545 | + |
546 | + pebble_config = self._get_pebble_config() |
547 | + if not pebble_config: |
548 | + # Charm will be in blocked status. |
549 | + event.defer() |
550 | + return |
551 | + |
552 | + container = self.unit.get_container(CONTAINER_NAME) |
553 | + services = container.get_plan().to_dict().get("services", {}) |
554 | + if services != pebble_config["services"]: |
555 | + self.unit.status = MaintenanceStatus('Adding layer to pebble') |
556 | + container.add_layer("mattermost", pebble_config, combine=True) |
557 | + |
558 | + self.unit.status = MaintenanceStatus('Restarting mattermost') |
559 | + service = container.get_service("mattermost") |
560 | + if service.is_running(): |
561 | + container.stop("mattermost") |
562 | + container.start("mattermost") |
563 | + |
564 | + self.ingress.update_config(self._make_ingress_config()) |
565 | + |
566 | + self.unit.status = ActiveStatus() |
567 | |
568 | def _on_database_relation_joined(self, event: pgsql.DatabaseRelationJoinedEvent): |
569 | """Handle db-relation-joined.""" |
570 | @@ -172,40 +234,12 @@ class MattermostK8sCharm(CharmBase): |
571 | |
572 | return '; '.join(filter(None, problems)) |
573 | |
574 | - def _make_pod_spec(self): |
575 | - """Return a pod spec with some core configuration.""" |
576 | - config = self.model.config |
577 | - mattermost_image_details = { |
578 | - 'imagePath': config['mattermost_image_path'], |
579 | - } |
580 | - if config['mattermost_image_username']: |
581 | - mattermost_image_details.update( |
582 | - {'username': config['mattermost_image_username'], 'password': config['mattermost_image_password']} |
583 | - ) |
584 | - pod_config = self._make_pod_config() |
585 | - pod_config.update(self._make_s3_pod_config()) |
586 | - |
587 | - return { |
588 | - 'version': 3, # otherwise resources are ignored |
589 | - 'containers': [ |
590 | - { |
591 | - 'name': self.app.name, |
592 | - 'imageDetails': mattermost_image_details, |
593 | - 'ports': [{'containerPort': CONTAINER_PORT, 'protocol': 'TCP'}], |
594 | - 'envConfig': pod_config, |
595 | - 'kubernetes': { |
596 | - 'readinessProbe': {'httpGet': {'path': '/api/v4/system/ping', 'port': CONTAINER_PORT}}, |
597 | - }, |
598 | - } |
599 | - ], |
600 | - } |
601 | - |
602 | - def _make_pod_config(self): |
603 | + def _make_env_config(self): |
604 | """Return an envConfig with some core configuration.""" |
605 | config = self.model.config |
606 | # https://github.com/mattermost/mattermost-server/pull/14666 |
607 | db_uri = self.state.db_uri.replace('postgresql://', 'postgres://') |
608 | - pod_config = { |
609 | + env_config = { |
610 | 'MATTERMOST_HTTPD_LISTEN_PORT': CONTAINER_PORT, |
611 | 'MM_CONFIG': db_uri, |
612 | 'MM_SQLSETTINGS_DATASOURCE': db_uri, |
613 | @@ -219,18 +253,18 @@ class MattermostK8sCharm(CharmBase): |
614 | } |
615 | |
616 | if config['primary_team']: |
617 | - pod_config['MM_TEAMSETTINGS_EXPERIMENTALPRIMARYTEAM'] = config['primary_team'] |
618 | + env_config['MM_TEAMSETTINGS_EXPERIMENTALPRIMARYTEAM'] = config['primary_team'] |
619 | |
620 | if config['site_url']: |
621 | - pod_config['MM_SERVICESETTINGS_SITEURL'] = config['site_url'] |
622 | + env_config['MM_SERVICESETTINGS_SITEURL'] = config['site_url'] |
623 | |
624 | if config['outbound_proxy']: |
625 | - pod_config['HTTP_PROXY'] = config['outbound_proxy'] |
626 | - pod_config['HTTPS_PROXY'] = config['outbound_proxy'] |
627 | + env_config['HTTP_PROXY'] = config['outbound_proxy'] |
628 | + env_config['HTTPS_PROXY'] = config['outbound_proxy'] |
629 | if config['outbound_proxy_exceptions']: |
630 | - pod_config['NO_PROXY'] = config['outbound_proxy_exceptions'] |
631 | + env_config['NO_PROXY'] = config['outbound_proxy_exceptions'] |
632 | |
633 | - return pod_config |
634 | + return env_config |
635 | |
636 | def _missing_charm_settings(self): |
637 | """Return a list of settings required to satisfy configuration dependencies, or else an empty list.""" |
638 | @@ -241,9 +275,6 @@ class MattermostK8sCharm(CharmBase): |
639 | if config['clustering'] and not config['licence']: |
640 | missing.add('licence') |
641 | |
642 | - if config['mattermost_image_username'] and not config['mattermost_image_password']: |
643 | - missing.add('mattermost_image_password') |
644 | - |
645 | if config['performance_monitoring_enabled'] and not config['licence']: |
646 | missing.add('licence') |
647 | |
648 | @@ -258,142 +289,57 @@ class MattermostK8sCharm(CharmBase): |
649 | |
650 | return sorted(missing) |
651 | |
652 | - def _make_s3_pod_config(self): |
653 | + def _update_env_config_for_s3(self, env_config): |
654 | """Return an envConfig of S3 settings, if any.""" |
655 | config = self.model.config |
656 | if not config['s3_enabled']: |
657 | - return {} |
658 | - |
659 | - return { |
660 | - 'MM_FILESETTINGS_DRIVERNAME': 'amazons3', |
661 | - 'MM_FILESETTINGS_MAXFILESIZE': str(config['max_file_size'] * 1048576), # LP:1881227 |
662 | - 'MM_FILESETTINGS_AMAZONS3SSL': 'true', # defaults to true; belt and braces |
663 | - 'MM_FILESETTINGS_AMAZONS3ENDPOINT': config['s3_endpoint'], |
664 | - 'MM_FILESETTINGS_AMAZONS3BUCKET': config['s3_bucket'], |
665 | - 'MM_FILESETTINGS_AMAZONS3REGION': config['s3_region'], |
666 | - 'MM_FILESETTINGS_AMAZONS3ACCESSKEYID': config['s3_access_key_id'], |
667 | - 'MM_FILESETTINGS_AMAZONS3SECRETACCESSKEY': config['s3_secret_access_key'], |
668 | - 'MM_FILESETTINGS_AMAZONS3SSE': 'true' if config['s3_server_side_encryption'] else 'false', |
669 | - 'MM_FILESETTINGS_AMAZONS3TRACE': 'true' if config['debug'] else 'false', |
670 | - } |
671 | - |
672 | - def _update_pod_spec_for_k8s_ingress(self, pod_spec): |
673 | - """Add resources to pod_spec configuring site ingress, if needed.""" |
674 | - site_url = self.model.config['site_url'] |
675 | - if not site_url: |
676 | - return |
677 | - |
678 | - parsed = urlparse(site_url) |
679 | - |
680 | - if not parsed.scheme.startswith('http'): |
681 | return |
682 | |
683 | - annotations = {'nginx.ingress.kubernetes.io/proxy-body-size': '{}m'.format(self.model.config['max_file_size'])} |
684 | - ingress = { |
685 | - "name": "{}-ingress".format(self.app.name), |
686 | - "spec": { |
687 | - "rules": [ |
688 | - { |
689 | - "host": parsed.hostname, |
690 | - "http": { |
691 | - "paths": [ |
692 | - {"path": "/", "backend": {"serviceName": self.app.name, "servicePort": CONTAINER_PORT}} |
693 | - ] |
694 | - }, |
695 | - } |
696 | - ] |
697 | - }, |
698 | - } |
699 | - if parsed.scheme == 'https': |
700 | - ingress['spec']['tls'] = [{'hosts': [parsed.hostname]}] |
701 | - tls_secret_name = self.model.config['tls_secret_name'] |
702 | - if tls_secret_name: |
703 | - ingress['spec']['tls'][0]['secretName'] = tls_secret_name |
704 | - else: |
705 | - annotations['nginx.ingress.kubernetes.io/ssl-redirect'] = 'false' |
706 | - |
707 | - ingress_whitelist_source_range = self.model.config['ingress_whitelist_source_range'] |
708 | - if ingress_whitelist_source_range: |
709 | - annotations['nginx.ingress.kubernetes.io/whitelist-source-range'] = ingress_whitelist_source_range |
710 | - |
711 | - ingress['annotations'] = annotations |
712 | - |
713 | - # Due to https://github.com/canonical/operator/issues/293 we |
714 | - # can't use pod.set_spec's k8s_resources argument. |
715 | - resources = pod_spec.get('kubernetesResources', {}) |
716 | - resources['ingressResources'] = [ingress] |
717 | - pod_spec['kubernetesResources'] = resources |
718 | - |
719 | - def _get_licence_secret_name(self): |
720 | - """Compute a content-dependent name for the licence secret. |
721 | - |
722 | - The name is varied so that licence updates cause the pods to |
723 | - be respawned. Mattermost reads the licence file on startup |
724 | - and updates the copy in the database, if necessary. |
725 | - """ |
726 | - crc = '{:08x}'.format(crc32(self.model.config['licence'].encode('utf-8'))) |
727 | - return '{}-licence-{}'.format(self.app.name, crc) |
728 | - |
729 | - def _make_licence_volume_configs(self): |
730 | - """Return volume config for the licence secret.""" |
731 | - config = self.model.config |
732 | - if not config['licence']: |
733 | - return [] |
734 | - return [ |
735 | + env_config.update( |
736 | { |
737 | - 'name': 'licence', |
738 | - 'mountPath': '/secrets', |
739 | - 'secret': { |
740 | - 'name': self._get_licence_secret_name(), |
741 | - 'files': [{'key': LICENCE_SECRET_KEY_NAME, 'path': 'licence.txt', 'mode': 0o444}], |
742 | - }, |
743 | + 'MM_FILESETTINGS_DRIVERNAME': 'amazons3', |
744 | + 'MM_FILESETTINGS_MAXFILESIZE': str(config['max_file_size'] * 1048576), # LP:1881227 |
745 | + 'MM_FILESETTINGS_AMAZONS3SSL': 'true', # defaults to true; belt and braces |
746 | + 'MM_FILESETTINGS_AMAZONS3ENDPOINT': config['s3_endpoint'], |
747 | + 'MM_FILESETTINGS_AMAZONS3BUCKET': config['s3_bucket'], |
748 | + 'MM_FILESETTINGS_AMAZONS3REGION': config['s3_region'], |
749 | + 'MM_FILESETTINGS_AMAZONS3ACCESSKEYID': config['s3_access_key_id'], |
750 | + 'MM_FILESETTINGS_AMAZONS3SECRETACCESSKEY': config['s3_secret_access_key'], |
751 | + 'MM_FILESETTINGS_AMAZONS3SSE': 'true' if config['s3_server_side_encryption'] else 'false', |
752 | + 'MM_FILESETTINGS_AMAZONS3TRACE': 'true' if config['debug'] else 'false', |
753 | } |
754 | - ] |
755 | + ) |
756 | |
757 | - def _make_licence_k8s_secrets(self): |
758 | - """Return secret for the licence.""" |
759 | + def _update_env_config_for_licence(self, env_config): |
760 | + """Create or delete the licence file and, in the former case, add its location to env_config.""" |
761 | config = self.model.config |
762 | - if not config['licence']: |
763 | - return [] |
764 | - return [ |
765 | - { |
766 | - 'name': self._get_licence_secret_name(), |
767 | - 'type': 'Opaque', |
768 | - 'stringData': {LICENCE_SECRET_KEY_NAME: config['licence']}, |
769 | - } |
770 | - ] |
771 | + container = self.unit.get_container(CONTAINER_NAME) |
772 | |
773 | - def _update_pod_spec_for_licence(self, pod_spec): |
774 | - """Update pod_spec to make the licence, if configured, available to Mattermost.""" |
775 | - config = self.model.config |
776 | if not config['licence']: |
777 | return |
778 | |
779 | - secrets = pod_spec['kubernetesResources'].get('secrets', []) |
780 | - secrets = extend_list_merging_dicts_matched_by_key(secrets, self._make_licence_k8s_secrets(), key='name') |
781 | - pod_spec['kubernetesResources']['secrets'] = secrets |
782 | - |
783 | - container = get_container(pod_spec, self.app.name) |
784 | - volume_config = container.get('volumeConfig', []) |
785 | - volume_config = extend_list_merging_dicts_matched_by_key( |
786 | - volume_config, self._make_licence_volume_configs(), key='name' |
787 | - ) |
788 | - container['volumeConfig'] = volume_config |
789 | - |
790 | - get_env_config(pod_spec, self.app.name).update( |
791 | - {'MM_SERVICESETTINGS_LICENSEFILELOCATION': '/secrets/licence.txt'}, |
792 | + # XXX: Per https://bugs.launchpad.net/juju/+bug/1854759 and |
793 | + # https://bugs.launchpad.net/juju/+bug/1878120 we don't currently |
794 | + # have a way of interacting with k8s secrets. Current guidance is |
795 | + # to use juju config directly for this, as we're doing here. |
796 | + licence_file = LICENCE_FILE_TEMPLATE.format(crc32(config['licence'].encode('utf-8'))) |
797 | + logger.debug('Asking pebble to create {} in the {} container'.format( |
798 | + licence_file, CONTAINER_NAME)) |
799 | + container.push(licence_file, config['licence']) |
800 | + env_config.update( |
801 | + {'MM_SERVICESETTINGS_LICENSEFILELOCATION': licence_file}, |
802 | ) |
803 | |
804 | - def _update_pod_spec_for_canonical_defaults(self, pod_spec): |
805 | - """Update pod_spec with various Mattermost settings particular to Canonical's deployment. |
806 | + def _update_env_config_for_enterprise_defaults(self, env_config): |
807 | + """Update env_config with various Mattermost settings particular to an Enterprise deployment. |
808 | |
809 | These settings may be less generally useful, and so they are controlled here as a unit. |
810 | """ |
811 | config = self.model.config |
812 | - if not config['use_canonical_defaults']: |
813 | + if not config['use_enterprise_defaults']: |
814 | return |
815 | |
816 | - get_env_config(pod_spec, self.app.name).update( |
817 | + env_config.update( |
818 | { |
819 | # If this is off, users can't turn it on themselves. |
820 | 'MM_SERVICESETTINGS_CLOSEUNUSEDDIRECTMESSAGES': 'true', |
821 | @@ -410,8 +356,8 @@ class MattermostK8sCharm(CharmBase): |
822 | } |
823 | ) |
824 | |
825 | - def _update_pod_spec_for_clustering(self, pod_spec): |
826 | - """Update pod_spec with clustering settings, varying the cluster name on the application name. |
827 | + def _update_env_config_for_clustering(self, env_config): |
828 | + """Update env_config with clustering settings, varying the cluster name on the application name. |
829 | |
830 | This is done so that blue/green deployments in the same model won't talk to each other. |
831 | """ |
832 | @@ -419,7 +365,7 @@ class MattermostK8sCharm(CharmBase): |
833 | if not config['clustering']: |
834 | return |
835 | |
836 | - get_env_config(pod_spec, self.app.name).update( |
837 | + env_config.update( |
838 | { |
839 | "MM_CLUSTERSETTINGS_ENABLE": "true", |
840 | "MM_CLUSTERSETTINGS_CLUSTERNAME": '{}-{}'.format(self.app.name, os.environ['JUJU_MODEL_UUID']), |
841 | @@ -427,18 +373,18 @@ class MattermostK8sCharm(CharmBase): |
842 | } |
843 | ) |
844 | |
845 | - def _update_pod_spec_for_sso(self, pod_spec): |
846 | - """Update pod_spec with settings to use login.ubuntu.com via SAML for single sign-on. |
847 | + def _update_env_config_for_sso(self, env_config): |
848 | + """Update env_config with settings to use login.ubuntu.com via SAML for single sign-on. |
849 | |
850 | SAML_IDP_CRT must be generated and installed manually by a human (see README.md). |
851 | """ |
852 | config = self.model.config |
853 | if not config['sso'] or [setting for setting in REQUIRED_SSO_SETTINGS if not config[setting]]: |
854 | return |
855 | - site_hostname = urlparse(config['site_url']).hostname |
856 | + site_hostname = self._get_external_hostname() |
857 | use_experimental_saml_library = 'true' if config['use_experimental_saml_library'] else 'false' |
858 | |
859 | - get_env_config(pod_spec, self.app.name).update( |
860 | + env_config.update( |
861 | { |
862 | 'MM_EMAILSETTINGS_ENABLESIGNINWITHEMAIL': 'false', |
863 | 'MM_EMAILSETTINGS_ENABLESIGNINWITHUSERNAME': 'false', |
864 | @@ -463,23 +409,24 @@ class MattermostK8sCharm(CharmBase): |
865 | } |
866 | ) |
867 | |
868 | - def _update_pod_spec_for_performance_monitoring(self, pod_spec): |
869 | + def _update_env_config_for_performance_monitoring(self, env_config): |
870 | """Update pod_spec with settings for the Prometheus exporter.""" |
871 | config = self.model.config |
872 | if not config['performance_monitoring_enabled']: |
873 | return |
874 | |
875 | - get_env_config(pod_spec, self.app.name).update( |
876 | + env_config.update( |
877 | { |
878 | 'MM_METRICSSETTINGS_ENABLE': 'true' if config['performance_monitoring_enabled'] else 'false', |
879 | 'MM_METRICSSETTINGS_LISTENADDRESS': ':{}'.format(METRICS_PORT), |
880 | } |
881 | ) |
882 | |
883 | + # XXX: See LP:1923820. |
884 | # Ordinarily pods are selected for scraping by the in-cluster |
885 | # Prometheus based on their annotations. Unfortunately Juju |
886 | - # doesn't support pod annotations yet (LP:1884177). When it |
887 | - # does, here are the annotations we'll need to add: |
888 | + # doesn't support pod annotations yet. |
889 | + # Here are the annotations we'll need to add: |
890 | |
891 | # [ fetch or create annotations dict ] |
892 | # annotations.update({ |
893 | @@ -490,14 +437,14 @@ class MattermostK8sCharm(CharmBase): |
894 | # }) |
895 | # [ store annotations in pod_spec ] |
896 | |
897 | - def _update_pod_spec_for_push(self, pod_spec): |
898 | + def _update_env_config_for_push(self, env_config): |
899 | """Update pod_spec with settings for Mattermost HPNS (hosted push notification service).""" |
900 | config = self.model.config |
901 | if not config['push_notification_server']: |
902 | return |
903 | contents = 'full' if config['push_notifications_include_message_snippet'] else 'id_loaded' |
904 | |
905 | - get_env_config(pod_spec, self.app.name).update( |
906 | + env_config.update( |
907 | { |
908 | 'MM_EMAILSETTINGS_SENDPUSHNOTIFICATIONS': 'true', |
909 | 'MM_EMAILSETTINGS_PUSHNOTIFICATIONCONTENTS': contents, |
910 | @@ -505,49 +452,32 @@ class MattermostK8sCharm(CharmBase): |
911 | } |
912 | ) |
913 | |
914 | - def _update_pod_spec_for_smtp(self, pod_spec): |
915 | + def _update_env_config_for_smtp(self, env_config): |
916 | """Update pod_spec with settings for an outgoing SMTP relay.""" |
917 | config = self.model.config |
918 | if not config['smtp_host']: |
919 | return |
920 | |
921 | - get_env_config(pod_spec, self.app.name).update( |
922 | + env_config.update( |
923 | {'MM_EMAILSETTINGS_SMTPPORT': 25, 'MM_EMAILSETTINGS_SMTPSERVER': config['smtp_host']} |
924 | ) |
925 | |
926 | - def configure_pod(self, event): |
927 | - """Assemble the pod spec and apply it, if possible.""" |
928 | - if not self.state.db_uri: |
929 | - self.unit.status = WaitingStatus('Waiting for database relation') |
930 | - event.defer() |
931 | - return |
932 | - |
933 | - if not self.unit.is_leader(): |
934 | - self.unit.status = ActiveStatus() |
935 | - return |
936 | - |
937 | - problems = self._check_for_config_problems() |
938 | - if problems: |
939 | - self.unit.status = BlockedStatus(problems) |
940 | - return |
941 | - |
942 | - self.unit.status = MaintenanceStatus('Assembling pod spec') |
943 | - pod_spec = self._make_pod_spec() |
944 | - self._update_pod_spec_for_canonical_defaults(pod_spec) |
945 | - self._update_pod_spec_for_clustering(pod_spec) |
946 | - self._update_pod_spec_for_k8s_ingress(pod_spec) |
947 | - self._update_pod_spec_for_licence(pod_spec) |
948 | - self._update_pod_spec_for_performance_monitoring(pod_spec) |
949 | - self._update_pod_spec_for_push(pod_spec) |
950 | - self._update_pod_spec_for_sso(pod_spec) |
951 | - self._update_pod_spec_for_smtp(pod_spec) |
952 | - |
953 | - self.unit.status = MaintenanceStatus('Setting pod spec') |
954 | - self.model.pod.set_spec(pod_spec) |
955 | - self.unit.status = ActiveStatus() |
956 | + def _get_env_config(self): |
957 | + """Assemble our environment configuration.""" |
958 | + env_config = self._make_env_config() |
959 | + self._update_env_config_for_s3(env_config) |
960 | + self._update_env_config_for_enterprise_defaults(env_config) |
961 | + self._update_env_config_for_clustering(env_config) |
962 | + self._update_env_config_for_licence(env_config) |
963 | + self._update_env_config_for_performance_monitoring(env_config) |
964 | + self._update_env_config_for_push(env_config) |
965 | + self._update_env_config_for_sso(env_config) |
966 | + self._update_env_config_for_smtp(env_config) |
967 | + return env_config |
968 | |
969 | def _on_grant_admin_role_action(self, event): |
970 | """Handle the grant-admin-role action.""" |
971 | + # XXX: Currently fails due to https://bugs.launchpad.net/juju/+bug/1923822 |
972 | user = event.params["user"] |
973 | cmd = ["/mattermost/bin/mattermost", "roles", "system_admin", user] |
974 | granted = subprocess.run(cmd, capture_output=True) |
975 | @@ -561,5 +491,8 @@ class MattermostK8sCharm(CharmBase): |
976 | event.set_results({"info": msg}) |
977 | |
978 | |
979 | -if __name__ == '__main__': |
980 | - main(MattermostK8sCharm, use_juju_for_storage=True) |
981 | +if __name__ == '__main__': # pragma: no cover |
982 | + main( |
983 | + MattermostK8sCharm, |
984 | + use_juju_for_storage=True, # https://github.com/canonical/operator/issues/506 |
985 | + ) |
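The licence handling above names the pushed file after a CRC-32 of its content, so that changing the licence changes the path and Mattermost is restarted with the new file. A minimal sketch of that naming scheme; the exact path template here is an assumption, as the charm's own `LICENCE_FILE_TEMPLATE` constant is defined outside this hunk:

```python
import re
from zlib import crc32

# Assumed template; the charm defines its own LICENCE_FILE_TEMPLATE,
# which may differ from this illustrative value.
LICENCE_FILE_TEMPLATE = '/mattermost/licence-{:08x}.txt'


def licence_path(licence_text):
    """Return a content-dependent path: the same licence always maps to
    the same path, while any change to the licence yields a new path,
    forcing a restart when the path is referenced in the environment."""
    return LICENCE_FILE_TEMPLATE.format(crc32(licence_text.encode('utf-8')))
```

Because `crc32` returns an unsigned 32-bit value, `{:08x}` always renders exactly eight hex digits, matching names like `licence-67efe49b.txt` in the tests.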
986 | diff --git a/src/utils.py b/src/utils.py |
987 | deleted file mode 100644 |
988 | index f02316f..0000000 |
989 | --- a/src/utils.py |
990 | +++ /dev/null |
991 | @@ -1,20 +0,0 @@ |
992 | -# Copyright 2020 Canonical Ltd. |
993 | -# Licensed under the GPLv3, see LICENCE file for details. |
994 | - |
995 | -from copy import deepcopy |
996 | - |
997 | - |
998 | -def extend_list_merging_dicts_matched_by_key(dst, src, key): |
999 | - """Merge src, a list of zero or more dictionaries, into dst, also |
1000 | - such a list. Dictionaries with the same key will be copied from dst |
1001 | - and then merged using .update(src). This is not done recursively.""" |
1002 | - result = [] |
1003 | - sbk = {s[key]: s for s in src} |
1004 | - dbk = {d[key]: d for d in dst} |
1005 | - to_merge = set(dbk.keys()).intersection(sbk.keys()) |
1006 | - result.extend([s for s in src if s[key] not in to_merge]) |
1007 | - result.extend([d for d in dst if d[key] not in to_merge]) |
1008 | - for k in sorted(to_merge): |
1009 | - result.append(deepcopy(dbk[k])) |
1010 | - result[-1].update(sbk[k]) |
1011 | - return result |
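The deleted `src/utils.py` helper above merged lists of dicts keyed by a shared field; the sidecar charm builds a flat environment dict instead, so it is no longer needed. For reference, its merge semantics look like this (function body reproduced from the removed file; the volume-config-style demo values are illustrative only):

```python
from copy import deepcopy


def extend_list_merging_dicts_matched_by_key(dst, src, key):
    """Merge src, a list of dicts, into dst: entries sharing the same
    value for `key` are combined via dict.update(), others are kept."""
    result = []
    sbk = {s[key]: s for s in src}
    dbk = {d[key]: d for d in dst}
    to_merge = set(dbk.keys()).intersection(sbk.keys())
    result.extend([s for s in src if s[key] not in to_merge])
    result.extend([d for d in dst if d[key] not in to_merge])
    for k in sorted(to_merge):
        result.append(deepcopy(dbk[k]))
        result[-1].update(sbk[k])
    return result


# Entries with a matching 'name' are merged; unmatched entries pass through.
merged = extend_list_merging_dicts_matched_by_key(
    [{'name': 'a', 'mountPath': '/secrets'}],       # dst
    [{'name': 'a', 'mode': 0o444}, {'name': 'b'}],  # src
    key='name',
)
```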
1012 | diff --git a/tests/unit/test_charm.py b/tests/unit/test_charm.py |
1013 | index 6fb16c7..a30ca8f 100644 |
1014 | --- a/tests/unit/test_charm.py |
1015 | +++ b/tests/unit/test_charm.py |
1016 | @@ -7,49 +7,13 @@ import unittest |
1017 | from unittest.mock import Mock |
1018 | |
1019 | from charm import ( |
1020 | + CONTAINER_PORT, |
1021 | MattermostK8sCharm, |
1022 | METRICS_PORT, |
1023 | ) |
1024 | |
1025 | from ops import testing |
1026 | |
1027 | -CONFIG_IMAGE_NO_CREDS = { |
1028 | - 'clustering': False, |
1029 | - 'mattermost_image_path': 'example.com/mattermost:latest', |
1030 | - 'mattermost_image_username': '', |
1031 | - 'mattermost_image_password': '', |
1032 | - 'performance_monitoring_enabled': False, |
1033 | - 's3_enabled': False, |
1034 | - 's3_server_side_encryption': False, |
1035 | - 'sso': False, |
1036 | -} |
1037 | - |
1038 | -CONFIG_IMAGE_NO_IMAGE = { |
1039 | - 'clustering': False, |
1040 | - 'mattermost_image_path': '', |
1041 | - 'mattermost_image_username': '', |
1042 | - 'mattermost_image_password': '', |
1043 | - 'performance_monitoring_enabled': False, |
1044 | - 's3_enabled': False, |
1045 | - 's3_server_side_encryption': False, |
1046 | - 'sso': False, |
1047 | -} |
1048 | - |
1049 | -CONFIG_IMAGE_NO_PASSWORD = { |
1050 | - 'clustering': False, |
1051 | - 'mattermost_image_path': 'example.com/mattermost:latest', |
1052 | - 'mattermost_image_username': 'production', |
1053 | - 'mattermost_image_password': '', |
1054 | - 'performance_monitoring_enabled': False, |
1055 | - 's3_enabled': False, |
1056 | - 's3_server_side_encryption': False, |
1057 | - 'sso': False, |
1058 | -} |
1059 | - |
1060 | -CONFIG_LICENCE_SECRET = {"licence": "RANDOMSTRING"} |
1061 | - |
1062 | -CONFIG_NO_LICENCE_SECRET = {"licence": ""} |
1063 | - |
1064 | CONFIG_NO_S3_SETTINGS_S3_ENABLED = { |
1065 | 'clustering': False, |
1066 | 'debug': False, |
1067 | @@ -106,6 +70,76 @@ CONFIG_LICENCE_REQUIRED_MIXED_INGRESS = { |
1068 | 'sso': False, |
1069 | } |
1070 | |
1071 | +DUMMY_LICENCE = "This is not a valid licence!" |
1072 | +DUMMY_LICENCE_FILE = '/mattermost/licence-67efe49b.txt' |
1073 | + |
1074 | +GET_PEBBLE_CONFIG_EXPECTED = { |
1075 | + 'MATTERMOST_HTTPD_LISTEN_PORT': 8065, |
1076 | + 'MM_CONFIG': 'postgres://10.0.1.101:5432/', |
1077 | + 'MM_IMAGEPROXYSETTINGS_ENABLE': 'false', |
1078 | + 'MM_IMAGEPROXYSETTINGS_IMAGEPROXYTYPE': 'local', |
1079 | + 'MM_LOGSETTINGS_CONSOLELEVEL': 'INFO', |
1080 | + 'MM_LOGSETTINGS_ENABLECONSOLE': 'true', |
1081 | + 'MM_LOGSETTINGS_ENABLEFILE': 'false', |
1082 | + 'MM_SQLSETTINGS_DATASOURCE': 'postgres://10.0.1.101:5432/', |
1083 | +} |
1084 | + |
1085 | + |
1086 | +def _mm_env(pebble_config): |
1087 | + """Extract mattermost's environment from the given pebble_config.""" |
1088 | + return pebble_config['services']['mattermost']['environment'] |
1089 | + |
1090 | + |
1091 | +class TestMattermostK8sCharmConfig(unittest.TestCase): |
1092 | + def setUp(self): |
1093 | + self.harness = testing.Harness(MattermostK8sCharm) |
1094 | + self.harness.begin() |
1095 | + self.harness.disable_hooks() |
1096 | + self.harness.charm.state.db_uri = 'postgresql://10.0.1.101:5432/' |
1097 | + self._expected = dict(GET_PEBBLE_CONFIG_EXPECTED) |
1098 | + |
1099 | + def test_config_db_uri(self): |
1100 | + self.assertEqual(_mm_env(self.harness.charm._get_pebble_config()), self._expected) |
1101 | + |
1102 | + def test_config_primary_team(self): |
1103 | + self.harness.update_config({"primary_team": "myteam"}) |
1104 | + self._expected['MM_TEAMSETTINGS_EXPERIMENTALPRIMARYTEAM'] = "myteam" |
1105 | + self.assertEqual(_mm_env(self.harness.charm._get_pebble_config()), self._expected) |
1106 | + |
1107 | + def test_config_site_url(self): |
1108 | + self.harness.update_config({"site_url": "https://myteam.mattermost.io/"}) |
1109 | + self._expected['MM_SERVICESETTINGS_SITEURL'] = "https://myteam.mattermost.io/" |
1110 | + self.assertEqual(_mm_env(self.harness.charm._get_pebble_config()), self._expected) |
1111 | + |
1112 | + def test_config_outbound_proxy(self): |
1113 | + self.harness.update_config({"outbound_proxy": "http://squid.internal:3128"}) |
1114 | + self._expected['HTTP_PROXY'] = "http://squid.internal:3128" |
1115 | + self._expected['HTTPS_PROXY'] = "http://squid.internal:3128" |
1116 | + self.harness.update_config({"outbound_proxy_exceptions": "charmhub.io"}) |
1117 | + self._expected['NO_PROXY'] = "charmhub.io" |
1118 | + self.assertEqual(_mm_env(self.harness.charm._get_pebble_config()), self._expected) |
1119 | + |
1120 | + def test_config_smtp_host(self): |
1121 | + self.harness.update_config({"smtp_host": "smtp.internal"}) |
1122 | + self._expected['MM_EMAILSETTINGS_SMTPPORT'] = 25 |
1123 | + self._expected['MM_EMAILSETTINGS_SMTPSERVER'] = 'smtp.internal' |
1124 | + self.assertEqual(_mm_env(self.harness.charm._get_pebble_config()), self._expected) |
1125 | + |
1126 | + @mock.patch('ops.testing._TestingPebbleClient.push') # https://github.com/canonical/operator/issues/518 |
1127 | + def test_config_licence(self, _push): |
1128 | + self.harness.update_config({"licence": DUMMY_LICENCE}) |
1129 | + self._expected['MM_SERVICESETTINGS_LICENSEFILELOCATION'] = DUMMY_LICENCE_FILE |
1130 | + self.assertEqual(_mm_env(self.harness.charm._get_pebble_config()), self._expected) |
1131 | + _push.assert_called_once() |
1132 | + self.assertEqual(_push.call_args.args, (DUMMY_LICENCE_FILE, DUMMY_LICENCE,)) |
1133 | + |
1134 | + def test_get_external_hostname_default(self): |
1135 | + self.assertEqual(self.harness.charm._get_external_hostname(), self.harness.charm.app.name) |
1136 | + |
1137 | + def test_get_external_hostname_site_url(self): |
1138 | + self.harness.update_config({"site_url": "https://chat.example.com/"}) |
1139 | + self.assertEqual(self.harness.charm._get_external_hostname(), "chat.example.com") |
1140 | + |
1141 | |
1142 | class TestMattermostK8sCharmHooksDisabled(unittest.TestCase): |
1143 | def setUp(self): |
1144 | @@ -121,39 +155,21 @@ class TestMattermostK8sCharmHooksDisabled(unittest.TestCase): |
1145 | ) |
1146 | self.assertEqual(self.harness.charm._check_for_config_problems(), expected) |
1147 | |
1148 | - def test_make_pod_config(self): |
1149 | - """Make pod config.""" |
1150 | - self.harness.charm.state.db_uri = 'postgresql://10.0.1.101:5432/' |
1151 | + def test_make_ingress_config(self): |
1152 | + hostname = 'myteam.mattermost.io' |
1153 | + self.harness.update_config({"site_url": "https://{}/".format(hostname)}) |
1154 | + self.harness.update_config({"tls_secret_name": "{}-tls".format(hostname)}) |
1155 | + |
1156 | expected = { |
1157 | - 'MATTERMOST_HTTPD_LISTEN_PORT': 8065, |
1158 | - 'MM_CONFIG': 'postgres://10.0.1.101:5432/', |
1159 | - 'MM_IMAGEPROXYSETTINGS_ENABLE': 'false', |
1160 | - 'MM_IMAGEPROXYSETTINGS_IMAGEPROXYTYPE': 'local', |
1161 | - 'MM_LOGSETTINGS_CONSOLELEVEL': 'INFO', |
1162 | - 'MM_LOGSETTINGS_ENABLECONSOLE': 'true', |
1163 | - 'MM_LOGSETTINGS_ENABLEFILE': 'false', |
1164 | - 'MM_SQLSETTINGS_DATASOURCE': 'postgres://10.0.1.101:5432/', |
1165 | + "max-body-size": self.harness.charm.config["max_file_size"], |
1166 | + "service-hostname": hostname, |
1167 | + "service-name": self.harness.charm.app.name, |
1168 | + "service-port": CONTAINER_PORT, |
1169 | + "tls-secret-name": '{}-tls'.format(hostname), |
1170 | } |
1171 | - self.assertEqual(self.harness.charm._make_pod_config(), expected) |
1172 | - # Now test with `primary_team` set. |
1173 | - self.harness.update_config({"primary_team": "myteam"}) |
1174 | - expected['MM_TEAMSETTINGS_EXPERIMENTALPRIMARYTEAM'] = "myteam" |
1175 | - self.assertEqual(self.harness.charm._make_pod_config(), expected) |
1176 | - # Now test with `site_url` set. |
1177 | - self.harness.update_config({"site_url": "myteam.mattermost.io"}) |
1178 | - expected['MM_SERVICESETTINGS_SITEURL'] = "myteam.mattermost.io" |
1179 | - self.assertEqual(self.harness.charm._make_pod_config(), expected) |
1180 | - # Now test with `outbound_proxy` set. |
1181 | - self.harness.update_config({"outbound_proxy": "http://squid.internal:3128"}) |
1182 | - expected['HTTP_PROXY'] = "http://squid.internal:3128" |
1183 | - expected['HTTPS_PROXY'] = "http://squid.internal:3128" |
1184 | - self.assertEqual(self.harness.charm._make_pod_config(), expected) |
1185 | - # Now test with `outbound_proxy_exceptions` set. |
1186 | - self.harness.update_config({"outbound_proxy_exceptions": "charmhub.io"}) |
1187 | - expected['NO_PROXY'] = "charmhub.io" |
1188 | - self.assertEqual(self.harness.charm._make_pod_config(), expected) |
1189 | + self.assertEqual(self.harness.charm._make_ingress_config(), expected) |
1190 | |
1191 | - def test_make_s3_pod_config(self): |
1192 | + def test_make_s3_env_config(self): |
1193 | """Make s3 pod config.""" |
1194 | self.harness.update_config(CONFIG_NO_S3_SETTINGS_S3_ENABLED) |
1195 | expected = { |
1196 | @@ -168,25 +184,9 @@ class TestMattermostK8sCharmHooksDisabled(unittest.TestCase): |
1197 | 'MM_FILESETTINGS_AMAZONS3SSE': 'false', |
1198 | 'MM_FILESETTINGS_AMAZONS3TRACE': 'false', |
1199 | } |
1200 | - self.assertEqual(self.harness.charm._make_s3_pod_config(), expected) |
1201 | - |
1202 | - def test_missing_charm_settings_image_no_creds(self): |
1203 | - """Credentials are optional.""" |
1204 | - self.harness.update_config(CONFIG_IMAGE_NO_CREDS) |
1205 | - expected = [] |
1206 | - self.assertEqual(sorted(self.harness.charm._missing_charm_settings()), expected) |
1207 | - |
1208 | - def test_missing_charm_settings_image_no_image(self): |
1209 | - """Image path is required.""" |
1210 | - self.harness.update_config(CONFIG_IMAGE_NO_IMAGE) |
1211 | - expected = sorted(['mattermost_image_path']) |
1212 | - self.assertEqual(sorted(self.harness.charm._missing_charm_settings()), expected) |
1213 | - |
1214 | - def test_missing_charm_settings_image_no_password(self): |
1215 | - """Password is required when username is set.""" |
1216 | - self.harness.update_config(CONFIG_IMAGE_NO_PASSWORD) |
1217 | - expected = sorted(['mattermost_image_password']) |
1218 | - self.assertEqual(sorted(self.harness.charm._missing_charm_settings()), expected) |
1219 | + env_config = {} |
1220 | + self.harness.charm._update_env_config_for_s3(env_config) |
1221 | + self.assertEqual(env_config, expected) |
1222 | |
1223 | def test_missing_charm_settings_no_s3_settings_s3_enabled(self): |
1224 | """If S3 is enabled, we need lots of settings to be set.""" |
1225 | @@ -204,240 +204,90 @@ class TestMattermostK8sCharmHooksDisabled(unittest.TestCase): |
1226 | """If push_notification_server is set to an empty string (default) don't update spec""" |
1227 | self.harness.update_config(CONFIG_PUSH_NOTIFICATION_SERVER_UNSET) |
1228 | expected = {} |
1229 | - pod_spec = {} |
1230 | - self.harness.charm._update_pod_spec_for_push(pod_spec) |
1231 | - self.assertEqual(pod_spec, expected) |
1232 | + env_config = {} |
1233 | + self.harness.charm._update_env_config_for_push(env_config) |
1234 | + self.assertEqual(env_config, expected) |
1235 | |
1236 | def test_push_notification_no_message_snippet(self): |
1237 | """Push notification configured, but without message snippets""" |
1238 | self.harness.update_config(CONFIG_PUSH_NOTIFICATION_NO_MESSAGE_SNIPPET) |
1239 | expected = { |
1240 | - 'containers': [ |
1241 | - { |
1242 | - 'name': 'mattermost-k8s', |
1243 | - 'envConfig': { |
1244 | - 'MM_EMAILSETTINGS_SENDPUSHNOTIFICATIONS': 'true', |
1245 | - 'MM_EMAILSETTINGS_PUSHNOTIFICATIONCONTENTS': 'id_loaded', |
1246 | - 'MM_EMAILSETTINGS_PUSHNOTIFICATIONSERVER': 'https://push.mattermost.com/', |
1247 | - }, |
1248 | - } |
1249 | - ], |
1250 | + 'MM_EMAILSETTINGS_SENDPUSHNOTIFICATIONS': 'true', |
1251 | + 'MM_EMAILSETTINGS_PUSHNOTIFICATIONCONTENTS': 'id_loaded', |
1252 | + 'MM_EMAILSETTINGS_PUSHNOTIFICATIONSERVER': 'https://push.mattermost.com/', |
1253 | } |
1254 | - pod_spec = { |
1255 | - 'containers': [{'name': 'mattermost-k8s', 'envConfig': {}}], |
1256 | - } |
1257 | - self.harness.charm._update_pod_spec_for_push(pod_spec) |
1258 | - self.assertEqual(pod_spec, expected) |
1259 | + env_config = {} |
1260 | + self.harness.charm._update_env_config_for_push(env_config) |
1261 | + self.assertEqual(env_config, expected) |
1262 | |
1263 | def test_push_notification_message_snippet(self): |
1264 | """Push notifications configured, including message snippets""" |
1265 | self.harness.update_config(CONFIG_PUSH_NOTIFICATION_MESSAGE_SNIPPET) |
1266 | expected = { |
1267 | - 'containers': [ |
1268 | - { |
1269 | - 'name': 'mattermost-k8s', |
1270 | - 'envConfig': { |
1271 | - 'MM_EMAILSETTINGS_SENDPUSHNOTIFICATIONS': 'true', |
1272 | - 'MM_EMAILSETTINGS_PUSHNOTIFICATIONCONTENTS': 'full', |
1273 | - 'MM_EMAILSETTINGS_PUSHNOTIFICATIONSERVER': 'https://push.mattermost.com/', |
1274 | - }, |
1275 | - } |
1276 | - ], |
1277 | - } |
1278 | - pod_spec = { |
1279 | - 'containers': [{'name': 'mattermost-k8s', 'envConfig': {}}], |
1280 | - } |
1281 | - self.harness.charm._update_pod_spec_for_push(pod_spec) |
1282 | - self.assertEqual(pod_spec, expected) |
1283 | - |
1284 | - def test_get_licence_secret_name(self): |
1285 | - """Test the licence secret name is correctly constructed""" |
1286 | - self.harness.update_config(CONFIG_LICENCE_SECRET) |
1287 | - self.assertEqual(self.harness.charm._get_licence_secret_name(), "mattermost-k8s-licence-b5bbb1bf") |
1288 | - |
1289 | - def test_make_licence_k8s_secrets(self): |
1290 | - """Test making licence k8s secrets""" |
1291 | - self.harness.update_config(CONFIG_NO_LICENCE_SECRET) |
1292 | - self.assertEqual(self.harness.charm._make_licence_k8s_secrets(), []) |
1293 | - self.harness.update_config(CONFIG_LICENCE_SECRET) |
1294 | - expected = [ |
1295 | - {'name': 'mattermost-k8s-licence-b5bbb1bf', 'type': 'Opaque', 'stringData': {'licence': 'RANDOMSTRING'}} |
1296 | - ] |
1297 | - self.assertEqual(self.harness.charm._make_licence_k8s_secrets(), expected) |
1298 | - |
1299 | - def test_make_licence_volume_configs(self): |
1300 | - """Test making licence volume configs""" |
1301 | - self.harness.update_config(CONFIG_NO_LICENCE_SECRET) |
1302 | - self.assertEqual(self.harness.charm._make_licence_volume_configs(), []) |
1303 | - self.harness.update_config(CONFIG_LICENCE_SECRET) |
1304 | - expected = [ |
1305 | - { |
1306 | - 'name': 'licence', |
1307 | - 'mountPath': '/secrets', |
1308 | - 'secret': { |
1309 | - 'name': 'mattermost-k8s-licence-b5bbb1bf', |
1310 | - 'files': [{'key': 'licence', 'path': 'licence.txt', 'mode': 0o444}], |
1311 | - }, |
1312 | - } |
1313 | - ] |
1314 | - self.assertEqual(self.harness.charm._make_licence_volume_configs(), expected) |
1315 | - |
1316 | - def test_update_pod_spec_for_k8s_ingress(self): |
1317 | - """Test making the k8s ingress, and ensuring ingress name is different to app name |
1318 | - |
1319 | - We're specifically testing that the ingress name is not the same as |
1320 | - the app name due to LP#1884674.""" |
1321 | - self.harness.update_config( |
1322 | - { |
1323 | - 'ingress_whitelist_source_range': '', |
1324 | - 'max_file_size': 5, |
1325 | - 'site_url': 'https://chat.example.com', |
1326 | - 'tls_secret_name': 'chat-example-com-tls', |
1327 | - } |
1328 | - ) |
1329 | - ingress_name = 'mattermost-k8s-ingress' |
1330 | - self.assertNotEqual(ingress_name, self.harness.charm.app.name) |
1331 | - expected = { |
1332 | - 'kubernetesResources': { |
1333 | - 'ingressResources': [ |
1334 | - { |
1335 | - 'name': ingress_name, |
1336 | - 'spec': { |
1337 | - 'rules': [ |
1338 | - { |
1339 | - 'host': 'chat.example.com', |
1340 | - 'http': { |
1341 | - 'paths': [ |
1342 | - { |
1343 | - 'path': '/', |
1344 | - 'backend': {'serviceName': 'mattermost-k8s', 'servicePort': 8065}, |
1345 | - } |
1346 | - ] |
1347 | - }, |
1348 | - } |
1349 | - ], |
1350 | - 'tls': [{'hosts': ['chat.example.com'], 'secretName': 'chat-example-com-tls'}], |
1351 | - }, |
1352 | - 'annotations': {'nginx.ingress.kubernetes.io/proxy-body-size': '5m'}, |
1353 | - } |
1354 | - ] |
1355 | - } |
1356 | - } |
1357 | - pod_spec = {} |
1358 | - self.harness.charm._update_pod_spec_for_k8s_ingress(pod_spec) |
1359 | - self.assertEqual(pod_spec, expected) |
1360 | - # And now test with an ingress_whitelist_source_range, and an http |
1361 | - # rather than https site_url to test a few more ingress conditions. |
1362 | - self.harness.update_config( |
1363 | - {'ingress_whitelist_source_range': '10.10.10.10/24', 'site_url': 'http://chat.example.com'} |
1364 | - ) |
1365 | - expected = { |
1366 | - 'kubernetesResources': { |
1367 | - 'ingressResources': [ |
1368 | - { |
1369 | - 'name': ingress_name, |
1370 | - 'spec': { |
1371 | - 'rules': [ |
1372 | - { |
1373 | - 'host': 'chat.example.com', |
1374 | - 'http': { |
1375 | - 'paths': [ |
1376 | - { |
1377 | - 'path': '/', |
1378 | - 'backend': {'serviceName': 'mattermost-k8s', 'servicePort': 8065}, |
1379 | - } |
1380 | - ] |
1381 | - }, |
1382 | - } |
1383 | - ], |
1384 | - }, |
1385 | - 'annotations': { |
1386 | - 'nginx.ingress.kubernetes.io/proxy-body-size': '5m', |
1387 | - 'nginx.ingress.kubernetes.io/ssl-redirect': 'false', |
1388 | - 'nginx.ingress.kubernetes.io/whitelist-source-range': '10.10.10.10/24', |
1389 | - }, |
1390 | - } |
1391 | - ] |
1392 | - } |
1393 | + 'MM_EMAILSETTINGS_SENDPUSHNOTIFICATIONS': 'true', |
1394 | + 'MM_EMAILSETTINGS_PUSHNOTIFICATIONCONTENTS': 'full', |
1395 | + 'MM_EMAILSETTINGS_PUSHNOTIFICATIONSERVER': 'https://push.mattermost.com/', |
1396 | } |
1397 | - pod_spec = {} |
1398 | - self.harness.charm._update_pod_spec_for_k8s_ingress(pod_spec) |
1399 | - self.assertEqual(pod_spec, expected) |
1400 | + env_config = {} |
1401 | + self.harness.charm._update_env_config_for_push(env_config) |
1402 | + self.assertEqual(env_config, expected) |
1403 | |
1404 | - def test_update_pod_spec_for_performance_monitoring(self): |
1405 | + def test_update_env_config_for_performance_monitoring(self): |
1406 | """envConfig is updated, and pre-existing annotations are not clobbered.""" |
1407 | # We can't set annotations yet because of LP:1884177. |
1408 | # When we can, this test will need updating. |
1409 | self.harness.update_config({'performance_monitoring_enabled': True}) |
1410 | - pod_spec = { |
1411 | - 'containers': [{'name': 'mattermost-k8s', 'envConfig': {}}], |
1412 | - } |
1413 | + env_config = {} |
1414 | expected = { |
1415 | - 'containers': [ |
1416 | - { |
1417 | - 'name': 'mattermost-k8s', |
1418 | - 'envConfig': { |
1419 | - 'MM_METRICSSETTINGS_ENABLE': 'true', |
1420 | - 'MM_METRICSSETTINGS_LISTENADDRESS': ':{}'.format(METRICS_PORT), |
1421 | - }, |
1422 | - } |
1423 | - ], |
1424 | + 'MM_METRICSSETTINGS_ENABLE': 'true', |
1425 | + 'MM_METRICSSETTINGS_LISTENADDRESS': ':{}'.format(METRICS_PORT), |
1426 | } |
1427 | - self.harness.charm._update_pod_spec_for_performance_monitoring(pod_spec) |
1428 | - self.assertEqual(pod_spec, expected) |
1429 | + self.harness.charm._update_env_config_for_performance_monitoring(env_config) |
1430 | + self.assertEqual(env_config, expected) |
1431 | |
1432 | @mock.patch.dict('os.environ', {"JUJU_MODEL_UUID": "fakeuuid"}) |
1433 | - def test_update_pod_spec_for_clustering(self): |
1434 | - """Test clustering config.""" |
1435 | + def test_update_env_config_for_clustering_disabled(self): |
1436 | + """Test config when clustering disabled.""" |
1437 | self.harness.update_config({'clustering': False}) |
1438 | - pod_spec = {} |
1439 | - self.harness.charm._update_pod_spec_for_clustering(pod_spec) |
1440 | - self.assertEqual(pod_spec, {}) |
1441 | + env_config = {} |
1442 | + expected = {} |
1443 | + self.harness.charm._update_env_config_for_clustering(env_config) |
1444 | + self.assertEqual(env_config, expected) |
1445 | + |
1446 | + @mock.patch.dict('os.environ', {"JUJU_MODEL_UUID": "fakeuuid"}) |
1447 | + def test_update_env_config_for_clustering_enabled(self): |
1448 | + """Test config when clustering enabled.""" |
1449 | self.harness.update_config({'clustering': True}) |
1450 | - pod_spec = { |
1451 | - 'containers': [{'name': 'mattermost-k8s', 'envConfig': {}}], |
1452 | - } |
1453 | + env_config = {} |
1454 | expected = { |
1455 | - 'containers': [ |
1456 | - { |
1457 | - 'name': 'mattermost-k8s', |
1458 | - 'envConfig': { |
1459 | - 'MM_CLUSTERSETTINGS_ENABLE': 'true', |
1460 | - 'MM_CLUSTERSETTINGS_CLUSTERNAME': 'mattermost-k8s-fakeuuid', |
1461 | - 'MM_CLUSTERSETTINGS_USEIPADDRESS': 'true', |
1462 | - }, |
1463 | - } |
1464 | - ], |
1465 | - } |
1466 | - self.harness.charm._update_pod_spec_for_clustering(pod_spec) |
1467 | - self.assertEqual(pod_spec, expected) |
1468 | - |
1469 | - def test_update_pod_spec_for_canonical_defaults(self): |
1470 | - """Test canonical defaults.""" |
1471 | - self.harness.update_config({'use_canonical_defaults': False}) |
1472 | - pod_spec = {} |
1473 | - self.harness.charm._update_pod_spec_for_canonical_defaults(pod_spec) |
1474 | - self.assertEqual(pod_spec, {}) |
1475 | - self.harness.update_config({'use_canonical_defaults': True}) |
1476 | - pod_spec = { |
1477 | - 'containers': [{'name': 'mattermost-k8s', 'envConfig': {}}], |
1478 | + 'MM_CLUSTERSETTINGS_ENABLE': 'true', |
1479 | + 'MM_CLUSTERSETTINGS_CLUSTERNAME': 'mattermost-k8s-fakeuuid', |
1480 | + 'MM_CLUSTERSETTINGS_USEIPADDRESS': 'true', |
1481 | } |
1482 | + self.harness.charm._update_env_config_for_clustering(env_config) |
1483 | + self.assertEqual(env_config, expected) |
1484 | + |
1485 | + def test_update_env_config_for_enterprise_defaults_disabled(self): |
1486 | + """Test config when enterprise defaults disabled.""" |
1487 | + self.harness.update_config({'use_enterprise_defaults': False}) |
1488 | + env_config = {} |
1489 | + expected = {} |
1490 | + self.harness.charm._update_env_config_for_enterprise_defaults(env_config) |
1491 | + self.assertEqual(env_config, expected) |
1492 | + |
1493 | + def test_update_env_config_for_enterprise_defaults_enabled(self): |
1494 | + """Test config when enterprise defaults enabled.""" |
1495 | + self.harness.update_config({'use_enterprise_defaults': True}) |
1496 | + env_config = {} |
1497 | expected = { |
1498 | - 'containers': [ |
1499 | - { |
1500 | - 'name': 'mattermost-k8s', |
1501 | - 'envConfig': { |
1502 | - 'MM_SERVICESETTINGS_CLOSEUNUSEDDIRECTMESSAGES': 'true', |
1503 | - 'MM_SERVICESETTINGS_ENABLECUSTOMEMOJI': 'true', |
1504 | - 'MM_SERVICESETTINGS_ENABLELINKPREVIEWS': 'true', |
1505 | - 'MM_SERVICESETTINGS_ENABLEUSERACCESSTOKENS': 'true', |
1506 | - 'MM_TEAMSETTINGS_MAXUSERSPERTEAM': '1000', |
1507 | - }, |
1508 | - } |
1509 | - ], |
1510 | + 'MM_SERVICESETTINGS_CLOSEUNUSEDDIRECTMESSAGES': 'true', |
1511 | + 'MM_SERVICESETTINGS_ENABLECUSTOMEMOJI': 'true', |
1512 | + 'MM_SERVICESETTINGS_ENABLELINKPREVIEWS': 'true', |
1513 | + 'MM_SERVICESETTINGS_ENABLEUSERACCESSTOKENS': 'true', |
1514 | + 'MM_TEAMSETTINGS_MAXUSERSPERTEAM': '1000', |
1515 | } |
1516 | - self.harness.charm._update_pod_spec_for_canonical_defaults(pod_spec) |
1517 | - self.assertEqual(pod_spec, expected) |
1518 | + self.harness.charm._update_env_config_for_enterprise_defaults(env_config) |
1519 | + self.assertEqual(env_config, expected) |
1520 | |
1521 | |
1522 | class TestMattermostK8sCharmHooksEnabled(unittest.TestCase): |
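Reviewer note: the test_charm.py hunks above all follow one pattern from the sidecar conversion — the podspec-era helpers that edited a nested `containers`/`envConfig` structure become `_update_env_config_for_*` helpers that mutate a flat environment dict in place. A minimal standalone sketch of that pattern (not the charm's actual code; `METRICS_PORT` is an assumed value here, and the plain `config` dict stands in for the charm's model config):

```python
METRICS_PORT = 8067  # assumed for this sketch; the tests only use it via format()


def update_env_config_for_performance_monitoring(config, env_config):
    """Mutate the flat env dict in place, mirroring the helpers under test."""
    if config.get('performance_monitoring_enabled'):
        env_config.update({
            'MM_METRICSSETTINGS_ENABLE': 'true',
            'MM_METRICSSETTINGS_LISTENADDRESS': ':{}'.format(METRICS_PORT),
        })


env = {}
update_env_config_for_performance_monitoring({'performance_monitoring_enabled': True}, env)
```

Asserting directly on the flat dict is what lets the rewritten tests drop the nested `expected` podspec structures.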
1523 | diff --git a/tests/unit/test_helpers.py b/tests/unit/test_helpers.py |
1524 | index 9987fd3..6820916 100644 |
1525 | --- a/tests/unit/test_helpers.py |
1526 | +++ b/tests/unit/test_helpers.py |
1527 | @@ -5,8 +5,6 @@ import unittest |
1528 | |
1529 | from charm import ( |
1530 | check_ranges, |
1531 | - get_container, |
1532 | - get_env_config, |
1533 | ) |
1534 | |
1535 | POD_SPEC_MULTIPLE_CONTAINERS = { |
1536 | @@ -39,28 +37,3 @@ class TestMattermostCharmHelpers(unittest.TestCase): |
1537 | """Any CIDRs that has host bits set must be rejected, even if others are OK.""" |
1538 | expected = 'range_mixed: invalid network(s): 10.242.0.0/8' |
1539 | self.assertEqual(check_ranges(RANGE_MIXED, 'range_mixed'), expected) |
1540 | - |
1541 | - def test_get_container(self): |
1542 | - """The container with matching name is returned.""" |
1543 | - expected = {'name': 'two', 'envConfig': {'THIS_CONTAINER': 'two'}} |
1544 | - self.assertEqual(get_container(POD_SPEC_MULTIPLE_CONTAINERS, 'two'), expected) |
1545 | - |
1546 | - def test_get_container_nonexistent(self): |
1547 | - """No matching container raises ValueError.""" |
1548 | - with self.assertRaises(ValueError): |
1549 | - get_container(POD_SPEC_MULTIPLE_CONTAINERS, 'eleventy-ten') |
1550 | - |
1551 | - def test_get_env_config(self): |
1552 | - """The envConfig of the container with the matching name is returned.""" |
1553 | - expected = {'THIS_CONTAINER': 'two'} |
1554 | - self.assertEqual(get_env_config(POD_SPEC_MULTIPLE_CONTAINERS, 'two'), expected) |
1555 | - |
1556 | - def test_get_env_config_nonexistent_container(self): |
1557 | - """No matching container raises ValueError.""" |
1558 | - with self.assertRaises(ValueError): |
1559 | - get_env_config(POD_SPEC_MULTIPLE_CONTAINERS, 'eleventy-ten') |
1560 | - |
1561 | - def test_get_env_config_container_no_envconfig(self): |
1562 | - """Container with no envConfig raises ValueError.""" |
1563 | - with self.assertRaises(ValueError): |
1564 | - get_env_config(POD_SPEC_NO_ENVCONFIG, 'one') |
1565 | diff --git a/tests/unit/test_utils.py b/tests/unit/test_utils.py |
1566 | deleted file mode 100644 |
1567 | index 887c17f..0000000 |
1568 | --- a/tests/unit/test_utils.py |
1569 | +++ /dev/null |
1570 | @@ -1,45 +0,0 @@ |
1571 | -# Copyright 2020 Canonical Ltd. |
1572 | -# Licensed under the GPLv3, see LICENCE file for details. |
1573 | - |
1574 | -import unittest |
1575 | - |
1576 | -from copy import deepcopy |
1577 | - |
1578 | -from utils import extend_list_merging_dicts_matched_by_key |
1579 | - |
1580 | - |
1581 | -class TestExtendListMergingDictsByKey(unittest.TestCase): |
1582 | - def test_nothing(self): |
1583 | - """Nothing in, nothing out.""" |
1584 | - self.assertEqual(extend_list_merging_dicts_matched_by_key([], [], key=None), []) |
1585 | - |
1586 | - def test_same(self): |
1587 | - """Identity.""" |
1588 | - self.assertEqual(extend_list_merging_dicts_matched_by_key([{1: 1}], [{1: 1}], key=1), [{1: 1}]) |
1589 | - |
1590 | - def test_different(self): |
1591 | - """Colleagues.""" |
1592 | - self.assertEqual(extend_list_merging_dicts_matched_by_key([{1: 2}], [{1: 1}], key=1), [{1: 1}, {1: 2}]) |
1593 | - |
1594 | - def test_merge_same_key(self): |
1595 | - """Now this is what we came here for!""" |
1596 | - self.assertEqual( |
1597 | - extend_list_merging_dicts_matched_by_key([{1: 1, 3: 4}], [{1: 1, 2: 3}], key=1), [{1: 1, 2: 3, 3: 4}] |
1598 | - ) |
1599 | - |
1600 | - def test_merge_same_key_different_key(self): |
1601 | - """A little of this, a little of that.""" |
1602 | - self.assertEqual( |
1603 | - extend_list_merging_dicts_matched_by_key([{1: 1, 3: 4}], [{1: 1, 2: 3}, {1: 2, 5: 6}, {1: 3, 7: 8}], key=1), |
1604 | - [{1: 2, 5: 6}, {1: 3, 7: 8}, {1: 1, 2: 3, 3: 4}], |
1605 | - ) |
1606 | - |
1607 | - def test_merge_same_key_different_key_deepcopy(self): |
1608 | - """Merge targets are deep-copied beforehand.""" |
1609 | - d = [{1: 1, 3: 4}] |
1610 | - dc = deepcopy(d) |
1611 | - s = [{1: 1, 2: 3}, {1: 2, 5: 6}, {1: 3, 7: 8}] |
1612 | - # Make sure it did something... |
1613 | - self.assertNotEqual(extend_list_merging_dicts_matched_by_key(d, s, key=1), d) |
1614 | - # ...but without altering d. |
1615 | - self.assertEqual(d, dc) |
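Reviewer note: the deleted test_utils.py tests above fully specify the behaviour of the removed `extend_list_merging_dicts_matched_by_key` helper, so the semantics are recoverable. A hypothetical reconstruction that satisfies them (inferred from the tests, not the original utils.py implementation): source items with no key match come first, then each destination item merged over its matching source item, with inputs deep-copied rather than mutated.

```python
from copy import deepcopy


def extend_list_merging_dicts_matched_by_key(dst, src, key):
    """Hypothetical reconstruction inferred from the deleted tests.

    Returns src items whose key value has no match in dst, followed by
    dst items, each merged on top of its matching src item (if any).
    Neither input list is mutated.
    """
    dst_keys = {d[key] for d in dst}
    result = [deepcopy(s) for s in src if s[key] not in dst_keys]
    src_by_key = {s[key]: s for s in src}
    for d in dst:
        merged = deepcopy(src_by_key.get(d[key], {}))
        merged.update(deepcopy(d))
        result.append(merged)
    return result
```

This makes the intent of `test_merge_same_key_different_key` concrete: unmatched source dicts pass through in order, and the merged dict (destination keys winning) lands at the end.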
1616 | diff --git a/tox.ini b/tox.ini |
1617 | index 066c5a8..ec74287 100644 |
1618 | --- a/tox.ini |
1619 | +++ b/tox.ini |
1620 | @@ -11,7 +11,7 @@ setenv = |
1621 | [testenv:unit] |
1622 | commands = |
1623 | pytest --ignore mod --ignore {toxinidir}/tests/functional \ |
1624 | - {posargs:-v --cov=src --cov-report=term-missing --cov-branch} |
1625 | + {posargs:-v --cov=src --cov-report=term-missing --cov-branch --cov-report=html} |
1626 | deps = -r{toxinidir}/tests/unit/requirements.txt |
1627 | -r{toxinidir}/requirements.txt |
1628 | setenv = |