Merge ~gtrkiller/charm-k8s-gunicorn:master into charm-k8s-gunicorn:master
Status: Rejected
Rejected by: Tom Haddon
Proposed branch: ~gtrkiller/charm-k8s-gunicorn:master
Merge into: charm-k8s-gunicorn:master
Diff against target: 1098 lines (+401/-384), 10 files modified
  .gitignore (+1/-0), Makefile (+1/-1), charmcraft.yaml (+8/-0), lib/charms/nginx_ingress_integrator/v0/ingress.py (+174/-0), metadata.yaml (+10/-6), requirements.txt (+0/-1), src/charm.py (+109/-156), tests/unit/scenario.py (+0/-80), tests/unit/test_charm.py (+97/-139), tox.ini (+1/-1)
Related bugs:

Reviewer | Status
---|---
Tom Haddon | Approve
Arturo Enrique Seijas Fernández (community) | Needs Fixing
Mariyan Dimitrov | Pending
David Andersson | Pending
Weii Wang | Pending
Canonical IS Reviewers | Pending

Review via email: mp+428496@code.launchpad.net
Commit message
Adding health checks to axino's sidecar migration branch
Description of the change
This charm was already migrated to sidecar, and apparently works just fine as it was. I tested hitting the charm unit's IP on port 80 on both gunicorn's main branch (podspec) and the branch with the sidecar migration, and they return the same result in the browser. Added a health check that verifies the gunicorn server is ready by hitting the address it is bound to (0.0.0.0:80).
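For context, the health check is implemented as a Pebble layer check in the proposed `_get_pebble_config` method; the sketch below is excerpted from the preview diff further down this page (later revisions of the branch changed the check URL from 0.0.0.0 to 127.0.0.1, which is what the final diff shows):

```python
# Pebble layer proposed in src/charm.py: the "gunicorn-ready" check polls the
# workload over HTTP so Pebble can report whether gunicorn is ready.
pebble_config = {
    "summary": "gunicorn layer",
    "description": "gunicorn layer",
    "services": {
        "gunicorn": {
            "override": "replace",
            "summary": "gunicorn service",
            "command": "/srv/gunicorn/run",
            "startup": "enabled",
        }
    },
    "checks": {
        "gunicorn-ready": {
            "override": "replace",
            "level": "ready",
            # The final revision of this branch uses 127.0.0.1 rather than 0.0.0.0.
            "http": {"url": "http://127.0.0.1:80"},
        },
    },
}
```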
🤖 Canonical IS Merge Bot (canonical-is-mergebot) wrote:
Tom Haddon (mthaddon) wrote:
Please check the diff before marking it as "Needs Review". There are some vscode artifacts here that shouldn't be included, as well as a "sidecar.diff" file.
I haven't taken a deeper look at the other bits yet; I'll wait until those are cleaned up.
Arturo Enrique Seijas Fernández (arturo-seijas):
Tom Haddon (mthaddon) wrote:
Some comments inline.
Also, I get a `ModuleNotFoundError` unless I change:
PYTHONPATH=
To:
PYTHONPATH=
Franco Luciano Forneron Buschiazzo (gtrkiller):
Arturo Enrique Seijas Fernández (arturo-seijas):
Arturo Enrique Seijas Fernández (arturo-seijas):
Tom Haddon (mthaddon):
Franco Luciano Forneron Buschiazzo (gtrkiller):
Arturo Enrique Seijas Fernández (arturo-seijas):
Franco Luciano Forneron Buschiazzo (gtrkiller):
- 62a2e32... by Franco Luciano Forneron Buschiazzo
  changing charmctaft.yaml
Tom Haddon (mthaddon) wrote:
Some comments inline.
- 4f70318... by Franco Luciano Forneron Buschiazzo
  changing check to 127.0.0.1
- d608cbf... by Franco Luciano Forneron Buschiazzo
  changing check to 127.0.0.1 with tests
Arturo Enrique Seijas Fernández (arturo-seijas):
- e91ac73... by Franco Luciano Forneron Buschiazzo
  taking out exceptions and splitting tests
Arturo Enrique Seijas Fernández (arturo-seijas) wrote:
LGTM. Please add a follow-up story for the comment below.
Tom Haddon (mthaddon) wrote:
I'm going to add an approving comment as I think this is a minor change, but see my inline comment about CONTAINER_NAME.
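For reference, `CONTAINER_NAME` is defined in the proposed src/charm.py as follows (excerpted from the preview diff below; the inline comment itself is not reproduced on this page):

```python
import yaml

# Derive the workload container name from the charm name in metadata.yaml,
# stripping the "-k8s" suffix (e.g. "gunicorn-k8s" -> "gunicorn").
CONTAINER_NAME = yaml.full_load(open('metadata.yaml', 'r')).get('name').replace("-k8s", "")
```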
Tom Haddon (mthaddon) wrote:
We've now moved this to Github, so I'll close out this MP.
Unmerged commits
- e91ac73... by Franco Luciano Forneron Buschiazzo
  taking out exceptions and splitting tests
- d608cbf... by Franco Luciano Forneron Buschiazzo
  changing check to 127.0.0.1 with tests
- 4f70318... by Franco Luciano Forneron Buschiazzo
  changing check to 127.0.0.1
- 62a2e32... by Franco Luciano Forneron Buschiazzo
  changing charmctaft.yaml
- c861c1a... by Franco Luciano Forneron Buschiazzo
  deleting unecessary exceptions & changing check address
- f61b2c3... by Franco Luciano Forneron Buschiazzo
  deleting unecessary exceptions
- 482ccbf... by Franco Luciano Forneron Buschiazzo
  putting comments into assert method as third argument
- 445e552... by Franco Luciano Forneron Buschiazzo
  addressing comments
- 40bf392... by Franco Luciano Forneron Buschiazzo
  cleaning up trash files
- e27b30f... by Franco Luciano Forneron Buschiazzo
  adding health checks to axino's sidecar branch
Preview Diff
1 | diff --git a/.gitignore b/.gitignore |
2 | index 58a7dc6..3d7f690 100644 |
3 | --- a/.gitignore |
4 | +++ b/.gitignore |
5 | @@ -3,6 +3,7 @@ |
6 | *.charm |
7 | .tox |
8 | .coverage |
9 | +.vscode |
10 | __pycache__ |
11 | build |
12 | venv |
13 | diff --git a/Makefile b/Makefile |
14 | index 6d1b61d..56217db 100644 |
15 | --- a/Makefile |
16 | +++ b/Makefile |
17 | @@ -1,5 +1,5 @@ |
18 | gunicorn-k8s.charm: src/*.py requirements.txt metadata.yaml config.yaml test |
19 | - charmcraft build |
20 | + charmcraft pack |
21 | |
22 | blacken: |
23 | @echo "Normalising python layout with black." |
24 | diff --git a/charmcraft.yaml b/charmcraft.yaml |
25 | new file mode 100644 |
26 | index 0000000..add3b33 |
27 | --- /dev/null |
28 | +++ b/charmcraft.yaml |
29 | @@ -0,0 +1,8 @@ |
30 | +type: charm |
31 | +bases: |
32 | + - build-on: |
33 | + - name: "ubuntu" |
34 | + channel: "20.04" |
35 | + run-on: |
36 | + - name: "ubuntu" |
37 | + channel: "20.04" |
38 | diff --git a/lib/charms/nginx_ingress_integrator/v0/ingress.py b/lib/charms/nginx_ingress_integrator/v0/ingress.py |
39 | new file mode 100644 |
40 | index 0000000..65d08a7 |
41 | --- /dev/null |
42 | +++ b/lib/charms/nginx_ingress_integrator/v0/ingress.py |
43 | @@ -0,0 +1,174 @@ |
44 | +"""Library for the ingress relation. |
45 | + |
46 | +This library contains the Requires and Provides classes for handling |
47 | +the ingress interface. |
48 | + |
49 | +Import `IngressRequires` in your charm, with two required options: |
50 | + - "self" (the charm itself) |
51 | + - config_dict |
52 | + |
53 | +`config_dict` accepts the following keys: |
54 | + - service-hostname (required) |
55 | + - service-name (required) |
56 | + - service-port (required) |
57 | + - limit-rps |
58 | + - limit-whitelist |
59 | + - max_body-size |
60 | + - retry-errors |
61 | + - service-namespace |
62 | + - session-cookie-max-age |
63 | + - tls-secret-name |
64 | + |
65 | +See `config.yaml` for descriptions of each, along with the required type. |
66 | + |
67 | +As an example: |
68 | +``` |
69 | +from charms.nginx_ingress_integrator.v0.ingress import IngressRequires |
70 | + |
71 | +# In your charm's `__init__` method. |
72 | +self.ingress = IngressRequires(self, {"service-hostname": self.config["external_hostname"], |
73 | + "service-name": self.app.name, |
74 | + "service-port": 80}) |
75 | + |
76 | +# In your charm's `config-changed` handler. |
77 | +self.ingress.update_config({"service-hostname": self.config["external_hostname"]}) |
78 | +``` |
79 | +""" |
80 | + |
81 | +import logging |
82 | + |
83 | +from ops.charm import CharmEvents |
84 | +from ops.framework import EventBase, EventSource, Object |
85 | +from ops.model import BlockedStatus |
86 | + |
87 | +# The unique Charmhub library identifier, never change it |
88 | +LIBID = "db0af4367506491c91663468fb5caa4c" |
89 | + |
90 | +# Increment this major API version when introducing breaking changes |
91 | +LIBAPI = 0 |
92 | + |
93 | +# Increment this PATCH version before using `charmcraft push-lib` or reset |
94 | +# to 0 if you are raising the major API version |
95 | +LIBPATCH = 1 |
96 | + |
97 | +logger = logging.getLogger(__name__) |
98 | + |
99 | +REQUIRED_INGRESS_RELATION_FIELDS = { |
100 | + "service-hostname", |
101 | + "service-name", |
102 | + "service-port", |
103 | +} |
104 | + |
105 | +OPTIONAL_INGRESS_RELATION_FIELDS = { |
106 | + "limit-rps", |
107 | + "limit-whitelist", |
108 | + "max-body-size", |
109 | + "retry-errors", |
110 | + "service-namespace", |
111 | + "session-cookie-max-age", |
112 | + "tls-secret-name", |
113 | +} |
114 | + |
115 | + |
116 | +class IngressAvailableEvent(EventBase): |
117 | + pass |
118 | + |
119 | + |
120 | +class IngressCharmEvents(CharmEvents): |
121 | + """Custom charm events.""" |
122 | + |
123 | + ingress_available = EventSource(IngressAvailableEvent) |
124 | + |
125 | + |
126 | +class IngressRequires(Object): |
127 | + """This class defines the functionality for the 'requires' side of the 'ingress' relation. |
128 | + |
129 | + Hook events observed: |
130 | + - relation-changed |
131 | + """ |
132 | + |
133 | + def __init__(self, charm, config_dict): |
134 | + super().__init__(charm, "ingress") |
135 | + |
136 | + self.framework.observe(charm.on["ingress"].relation_changed, self._on_relation_changed) |
137 | + |
138 | + self.config_dict = config_dict |
139 | + |
140 | + def _config_dict_errors(self, update_only=False): |
141 | + """Check our config dict for errors.""" |
142 | + block_status = False |
143 | + unknown = [ |
144 | + x for x in self.config_dict if x not in REQUIRED_INGRESS_RELATION_FIELDS | OPTIONAL_INGRESS_RELATION_FIELDS |
145 | + ] |
146 | + if unknown: |
147 | + logger.error("Unknown key(s) in config dictionary found: %s", ", ".join(unknown)) |
148 | + block_status = True |
149 | + if not update_only: |
150 | + missing = [x for x in REQUIRED_INGRESS_RELATION_FIELDS if x not in self.config_dict] |
151 | + if missing: |
152 | + logger.error("Missing required key(s) in config dictionary: %s", ", ".join(missing)) |
153 | + block_status = True |
154 | + if block_status: |
155 | + self.model.unit.status = BlockedStatus("Error in ingress relation, check `juju debug-log`") |
156 | + return True |
157 | + return False |
158 | + |
159 | + def _on_relation_changed(self, event): |
160 | + """Handle the relation-changed event.""" |
161 | + # `self.unit` isn't available here, so use `self.model.unit`. |
162 | + if self.model.unit.is_leader(): |
163 | + if self._config_dict_errors(): |
164 | + return |
165 | + for key in self.config_dict: |
166 | + event.relation.data[self.model.app][key] = str(self.config_dict[key]) |
167 | + |
168 | + def update_config(self, config_dict): |
169 | + """Allow for updates to relation.""" |
170 | + if self.model.unit.is_leader(): |
171 | + self.config_dict = config_dict |
172 | + if self._config_dict_errors(update_only=True): |
173 | + return |
174 | + relation = self.model.get_relation("ingress") |
175 | + if relation: |
176 | + for key in self.config_dict: |
177 | + relation.data[self.model.app][key] = str(self.config_dict[key]) |
178 | + |
179 | + |
180 | +class IngressProvides(Object): |
181 | + """This class defines the functionality for the 'provides' side of the 'ingress' relation. |
182 | + |
183 | + Hook events observed: |
184 | + - relation-changed |
185 | + """ |
186 | + |
187 | + def __init__(self, charm): |
188 | + super().__init__(charm, "ingress") |
189 | + # Observe the relation-changed hook event and bind |
190 | + # self.on_relation_changed() to handle the event. |
191 | + self.framework.observe(charm.on["ingress"].relation_changed, self._on_relation_changed) |
192 | + self.charm = charm |
193 | + |
194 | + def _on_relation_changed(self, event): |
195 | + """Handle a change to the ingress relation. |
196 | + |
197 | + Confirm we have the fields we expect to receive.""" |
198 | + # `self.unit` isn't available here, so use `self.model.unit`. |
199 | + if not self.model.unit.is_leader(): |
200 | + return |
201 | + |
202 | + ingress_data = { |
203 | + field: event.relation.data[event.app].get(field) |
204 | + for field in REQUIRED_INGRESS_RELATION_FIELDS | OPTIONAL_INGRESS_RELATION_FIELDS |
205 | + } |
206 | + |
207 | + missing_fields = sorted( |
208 | + [field for field in REQUIRED_INGRESS_RELATION_FIELDS if ingress_data.get(field) is None] |
209 | + ) |
210 | + |
211 | + if missing_fields: |
212 | + logger.error("Missing required data fields for ingress relation: {}".format(", ".join(missing_fields))) |
213 | + self.model.unit.status = BlockedStatus("Missing fields for ingress: {}".format(", ".join(missing_fields))) |
214 | + |
215 | + # Create an event that our charm can use to decide it's okay to |
216 | + # configure the ingress. |
217 | + self.charm.on.ingress_available.emit() |
218 | diff --git a/metadata.yaml b/metadata.yaml |
219 | index 7ce6e17..d2730e5 100644 |
220 | --- a/metadata.yaml |
221 | +++ b/metadata.yaml |
222 | @@ -6,15 +6,17 @@ docs: https://discourse.charmhub.io/t/gunicorn-docs-index/4606 |
223 | description: | |
224 | A charm for deploying and managing Gunicorn workloads |
225 | summary: | |
226 | - A charm for deploying and managing Gunicorn workloads |
227 | -series: [kubernetes] |
228 | -min-juju-version: 2.8.0 # charm storage in state |
229 | + Gunicorn charm |
230 | + |
231 | +containers: |
232 | + gunicorn: |
233 | + resource: gunicorn-image |
234 | + |
235 | resources: |
236 | gunicorn-image: |
237 | type: oci-image |
238 | - description: docker image for Gunicorn |
239 | - auto-fetch: true |
240 | - upstream-source: 'gunicorncharmers/gunicorn-app:20.0.4-20.04_edge' |
241 | + description: Docker image for gunicorn to run |
242 | + |
243 | requires: |
244 | pg: |
245 | interface: pgsql |
246 | @@ -22,3 +24,5 @@ requires: |
247 | influxdb: |
248 | interface: influxdb-api |
249 | limit: 1 |
250 | + ingress: |
251 | + interface: ingress |
252 | diff --git a/requirements.txt b/requirements.txt |
253 | index 6b14e66..fd6adcd 100644 |
254 | --- a/requirements.txt |
255 | +++ b/requirements.txt |
256 | @@ -1,3 +1,2 @@ |
257 | ops |
258 | ops-lib-pgsql |
259 | -https://github.com/juju-solutions/resource-oci-image/archive/master.zip |
260 | diff --git a/src/charm.py b/src/charm.py |
261 | index c25df78..2d6865a 100755 |
262 | --- a/src/charm.py |
263 | +++ b/src/charm.py |
264 | @@ -6,8 +6,8 @@ from jinja2 import Environment, BaseLoader, meta |
265 | import logging |
266 | import yaml |
267 | |
268 | +from charms.nginx_ingress_integrator.v0.ingress import IngressRequires |
269 | import ops |
270 | -from oci_image import OCIImageResource, OCIImageResourceError |
271 | from ops.framework import StoredState |
272 | from ops.charm import CharmBase |
273 | from ops.main import main |
274 | @@ -23,37 +23,109 @@ logger = logging.getLogger(__name__) |
275 | |
276 | REQUIRED_JUJU_CONFIG = ['external_hostname'] |
277 | JUJU_CONFIG_YAML_DICT_ITEMS = ['environment'] |
278 | +CONTAINER_NAME = yaml.full_load(open('metadata.yaml', 'r')).get('name').replace("-k8s", "") |
279 | |
280 | +class GunicornK8sCharm(CharmBase): |
281 | + _stored = StoredState() |
282 | |
283 | -class GunicornK8sCharmJujuConfigError(Exception): |
284 | - """Exception when the Juju config is bad.""" |
285 | + def __init__(self, *args): |
286 | + super().__init__(*args) |
287 | |
288 | - pass |
289 | + self.framework.observe(self.on.config_changed, self._on_config_changed) |
290 | + self.framework.observe(self.on.gunicorn_pebble_ready, self._on_gunicorn_pebble_ready) |
291 | |
292 | + self.ingress = IngressRequires( |
293 | + self, |
294 | + { |
295 | + "service-hostname": self.config["external_hostname"], |
296 | + "service-name": self.app.name, |
297 | + "service-port": 80, |
298 | + }, |
299 | + ) |
300 | |
301 | -class GunicornK8sCharmYAMLError(Exception): |
302 | - """Exception raised when parsing YAML fails""" |
303 | + self._stored.set_default( |
304 | + reldata={}, |
305 | + ) |
306 | |
307 | - pass |
308 | + self._init_postgresql_relation() |
309 | |
310 | + def _get_pebble_config(self, event: ops.framework.EventBase) -> dict: |
311 | + """Generate pebble config.""" |
312 | + pebble_config = { |
313 | + "summary": "gunicorn layer", |
314 | + "description": "gunicorn layer", |
315 | + "services": { |
316 | + "gunicorn": { |
317 | + "override": "replace", |
318 | + "summary": "gunicorn service", |
319 | + "command": "/srv/gunicorn/run", |
320 | + "startup": "enabled", |
321 | + } |
322 | + }, |
323 | + "checks": { |
324 | + "gunicorn-ready": { |
325 | + "override": "replace", |
326 | + "level": "ready", |
327 | + "http": {"url": "http://127.0.0.1:80"}, |
328 | + }, |
329 | + }, |
330 | + } |
331 | |
332 | -class GunicornK8sCharm(CharmBase): |
333 | - _stored = StoredState() |
334 | + # Update pod environment config. |
335 | + pod_env_config = self._make_pod_env() |
336 | + if type(pod_env_config) is bool: |
337 | + logger.error("Error getting pod_env_config: %s", |
338 | + "Could not parse Juju config 'environment' as a YAML dict - check \"juju debug-log -l ERROR\"") |
339 | + self.unit.status = BlockedStatus('Error getting pod_env_config') |
340 | + return {} |
341 | + elif type(pod_env_config) is set: |
342 | + self.unit.status = BlockedStatus( |
343 | + 'Waiting for {} relation(s)'.format(", ".join(sorted(pod_env_config))) |
344 | + ) |
345 | + event.defer() |
346 | + return {} |
347 | |
348 | - def __init__(self, *args): |
349 | - super().__init__(*args) |
350 | + juju_conf = self._check_juju_config() |
351 | + if juju_conf: |
352 | + self.unit.status = BlockedStatus(str(juju_conf)) |
353 | + return {} |
354 | |
355 | - self.image = OCIImageResource(self, 'gunicorn-image') |
356 | + if pod_env_config: |
357 | + pebble_config["services"]["gunicorn"]["environment"] = pod_env_config |
358 | + return pebble_config |
359 | |
360 | - self.framework.observe(self.on.start, self._configure_pod) |
361 | - self.framework.observe(self.on.config_changed, self._configure_pod) |
362 | - self.framework.observe(self.on.leader_elected, self._configure_pod) |
363 | - self.framework.observe(self.on.upgrade_charm, self._configure_pod) |
364 | + def _on_config_changed(self, event: ops.framework.EventBase) -> None: |
365 | + """Handle the config changed event.""" |
366 | |
367 | - # For special-cased relations |
368 | - self._stored.set_default(reldata={}) |
369 | + self._configure_workload(event) |
370 | |
371 | - self._init_postgresql_relation() |
372 | + def _on_gunicorn_pebble_ready(self, event: ops.framework.EventBase) -> None: |
373 | + """Handle the workload ready event.""" |
374 | + |
375 | + self._configure_workload(event) |
376 | + |
377 | + def _configure_workload(self, event: ops.charm.EventBase) -> None: |
378 | + """Configure the workload container.""" |
379 | + pebble_config = self._get_pebble_config(event) |
380 | + if not pebble_config: |
381 | + # Charm will be in blocked status. |
382 | + return |
383 | + |
384 | + # Ensure the ingress relation has the external hostname. |
385 | + self.ingress.update_config({"service-hostname": self.config["external_hostname"]}) |
386 | + |
387 | + container = self.unit.get_container(CONTAINER_NAME) |
388 | + # pebble may not be ready, in which case we just return |
389 | + if not container.can_connect(): |
390 | + self.unit.status = MaintenanceStatus('waiting for pebble to start') |
391 | + logger.debug('waiting for pebble to start') |
392 | + return |
393 | + |
394 | + logger.debug("About to add_layer with pebble_config:\n{}".format(yaml.dump(pebble_config))) |
395 | + container.add_layer(CONTAINER_NAME, pebble_config, combine=True) |
396 | + container.pebble.replan_services() |
397 | + |
398 | + self.unit.status = ActiveStatus() |
399 | |
400 | def _init_postgresql_relation(self) -> None: |
401 | """Initialization related to the postgresql relation""" |
402 | @@ -87,7 +159,7 @@ class GunicornK8sCharm(CharmBase): |
403 | if event.master is None: |
404 | return |
405 | |
406 | - self._configure_pod(event) |
407 | + self._on_config_changed(event) |
408 | |
409 | def _on_standby_changed(self, event: pgsql.StandbyChangedEvent) -> None: |
410 | """Handle changes in the secondary database unit(s).""" |
411 | @@ -103,8 +175,6 @@ class GunicornK8sCharm(CharmBase): |
412 | def _check_juju_config(self) -> None: |
413 | """Check if all the required Juju config options are set, |
414 | and if all the Juju config options are properly formatted |
415 | - |
416 | - :raises GunicornK8sCharmJujuConfigError: if a required config is not set |
417 | """ |
418 | |
419 | # Verify required items |
420 | @@ -114,41 +184,7 @@ class GunicornK8sCharm(CharmBase): |
421 | logger.error("Required Juju config item not set : %s", required) |
422 | errors.append(required) |
423 | if errors: |
424 | - raise GunicornK8sCharmJujuConfigError( |
425 | - "Required Juju config item(s) not set : {}".format(", ".join(sorted(errors))) |
426 | - ) |
427 | - |
428 | - def _make_k8s_ingress(self) -> list: |
429 | - """Return an ingress that you can use in k8s_resources |
430 | - |
431 | - :returns: A list to be used as k8s ingress |
432 | - """ |
433 | - |
434 | - hostname = self.model.config['external_hostname'] |
435 | - |
436 | - ingress = { |
437 | - "name": "{}-ingress".format(self.app.name), |
438 | - "spec": { |
439 | - "rules": [ |
440 | - { |
441 | - "host": hostname, |
442 | - "http": { |
443 | - "paths": [ |
444 | - { |
445 | - "path": "/", |
446 | - "backend": {"serviceName": self.app.name, "servicePort": 80}, |
447 | - } |
448 | - ] |
449 | - }, |
450 | - } |
451 | - ] |
452 | - }, |
453 | - "annotations": { |
454 | - 'nginx.ingress.kubernetes.io/ssl-redirect': 'false', |
455 | - }, |
456 | - } |
457 | - |
458 | - return [ingress] |
459 | + return "Required Juju config item(s) not set : {}".format(", ".join(sorted(errors))) |
460 | |
461 | def _render_template(self, tmpl: str, ctx: dict) -> str: |
462 | """Render a Jinja2 template |
463 | @@ -207,10 +243,7 @@ class GunicornK8sCharm(CharmBase): |
464 | return ctx |
465 | |
466 | def _validate_yaml(self, supposed_yaml: str, expected_type: type) -> None: |
467 | - """Validate that the supplied YAML is parsed into the supplied type. |
468 | - |
469 | - :raises GunicornK8sCharmYAMLError: if the YAML is incorrect, or if it's not parsed into the expected type |
470 | - """ |
471 | + """Validate that the supplied YAML is parsed into the supplied type.""" |
472 | err = False |
473 | parsed = None |
474 | |
475 | @@ -230,7 +263,7 @@ class GunicornK8sCharm(CharmBase): |
476 | ) |
477 | |
478 | if err: |
479 | - raise GunicornK8sCharmYAMLError('YAML parsing failed, please check "juju debug-log -l ERROR"') |
480 | + return err |
481 | |
482 | def _make_pod_env(self) -> dict: |
483 | """Return an envConfig with some core configuration. |
484 | @@ -243,107 +276,27 @@ class GunicornK8sCharm(CharmBase): |
485 | return {} |
486 | |
487 | ctx = self._get_context_from_relations() |
488 | - rendered_env = self._render_template(env, ctx) |
489 | - |
490 | - try: |
491 | - self._validate_yaml(rendered_env, dict) |
492 | - except GunicornK8sCharmYAMLError: |
493 | - raise GunicornK8sCharmJujuConfigError( |
494 | - "Could not parse Juju config 'environment' as a YAML dict - check \"juju debug-log -l ERROR\"" |
495 | - ) |
496 | - |
497 | - env = yaml.safe_load(rendered_env) |
498 | - |
499 | - return env |
500 | - |
501 | - def _make_pod_spec(self) -> dict: |
502 | - """Return a pod spec with some core configuration. |
503 | - |
504 | - :returns: A pod spec |
505 | - """ |
506 | - |
507 | - try: |
508 | - image_details = self.image.fetch() |
509 | - logging.info("using imageDetails: {}") |
510 | - except OCIImageResourceError: |
511 | - logging.exception('An error occurred while fetching the image info') |
512 | - self.unit.status = BlockedStatus('Error fetching image information') |
513 | - return {} |
514 | - |
515 | - pod_env = self._make_pod_env() |
516 | - |
517 | - return { |
518 | - 'version': 3, # otherwise resources are ignored |
519 | - 'containers': [ |
520 | - { |
521 | - 'name': self.app.name, |
522 | - 'imageDetails': image_details, |
523 | - # TODO: debatable. The idea is that if you want to force an update with the same image name, you |
524 | - # don't need to empty kubelet cache on each node to have the right version. |
525 | - # This implies a performance drop upon start. |
526 | - 'imagePullPolicy': 'Always', |
527 | - 'ports': [{'containerPort': 80, 'protocol': 'TCP'}], |
528 | - 'envConfig': pod_env, |
529 | - 'kubernetes': { |
530 | - 'readinessProbe': {'httpGet': {'path': '/', 'port': 80}}, |
531 | - }, |
532 | - } |
533 | - ], |
534 | - } |
535 | - |
536 | - def _configure_pod(self, event: ops.framework.EventBase) -> None: |
537 | - """Assemble the pod spec and apply it, if possible. |
538 | - |
539 | - :param event: Event that triggered the method. |
540 | - """ |
541 | - |
542 | - env = self.model.config['environment'] |
543 | - ctx = self._get_context_from_relations() |
544 | |
545 | - if env: |
546 | - j2env = Environment(loader=BaseLoader) |
547 | - j2template = j2env.parse(env) |
548 | - missing_vars = set() |
549 | + j2env = Environment(loader=BaseLoader) |
550 | + j2template = j2env.parse(env) |
551 | + missing_vars = set() |
552 | |
553 | - for req_var in meta.find_undeclared_variables(j2template): |
554 | - if not ctx.get(req_var): |
555 | - missing_vars.add(req_var) |
556 | + for req_var in meta.find_undeclared_variables(j2template): |
557 | + if not ctx.get(req_var): |
558 | + missing_vars.add(req_var) |
559 | |
560 | - if missing_vars: |
561 | - logger.info( |
562 | - "Missing YAML vars to interpolate the 'environment' config option, " |
563 | - "setting status to 'waiting' : %s", |
564 | - ", ".join(sorted(missing_vars)), |
565 | - ) |
566 | - self.unit.status = BlockedStatus('Waiting for {} relation(s)'.format(", ".join(sorted(missing_vars)))) |
567 | - event.defer() |
568 | - return |
569 | - |
570 | - if not self.unit.is_leader(): |
571 | - self.unit.status = ActiveStatus() |
572 | - return |
573 | + if missing_vars: |
574 | + return missing_vars |
575 | |
576 | - try: |
577 | - self._check_juju_config() |
578 | - except GunicornK8sCharmJujuConfigError as e: |
579 | - self.unit.status = BlockedStatus(str(e)) |
580 | - return |
581 | + rendered_env = self._render_template(env, ctx) |
582 | |
583 | - self.unit.status = MaintenanceStatus('Assembling pod spec') |
584 | + yaml_val = self._validate_yaml(rendered_env, dict) |
585 | + if yaml_val: |
586 | + return yaml_val |
587 | |
588 | - try: |
589 | - pod_spec = self._make_pod_spec() |
590 | - except GunicornK8sCharmJujuConfigError as e: |
591 | - self.unit.status = BlockedStatus(str(e)) |
592 | - return |
593 | - |
594 | - resources = pod_spec.get('kubernetesResources', {}) |
595 | - resources['ingressResources'] = self._make_k8s_ingress() |
596 | + env = yaml.safe_load(rendered_env) |
597 | |
598 | - self.unit.status = MaintenanceStatus('Setting pod spec') |
599 | - self.model.pod.set_spec(pod_spec, k8s_resources={'kubernetesResources': resources}) |
600 | - logger.info("Setting active status") |
601 | - self.unit.status = ActiveStatus() |
602 | + return env |
603 | |
604 | |
605 | if __name__ == "__main__": # pragma: no cover |
606 | diff --git a/tests/unit/scenario.py b/tests/unit/scenario.py |
607 | index 2efb837..15b1e70 100644 |
608 | --- a/tests/unit/scenario.py |
609 | +++ b/tests/unit/scenario.py |
610 | @@ -78,86 +78,6 @@ TEST_CONFIGURE_POD = { |
611 | }, |
612 | } |
613 | |
614 | -TEST_MAKE_POD_SPEC = { |
615 | - 'basic_no_env': { |
616 | - 'config': { |
617 | - 'external_hostname': 'example.com', |
618 | - }, |
619 | - 'pod_spec': { |
620 | - 'version': 3, # otherwise resources are ignored |
621 | - 'containers': [ |
622 | - { |
623 | - 'name': 'gunicorn-k8s', |
624 | - 'imageDetails': { |
625 | - 'imagePath': 'registrypath', |
626 | - 'password': 'password', |
627 | - 'username': 'username', |
628 | - }, |
629 | - 'imagePullPolicy': 'Always', |
630 | - 'ports': [{'containerPort': 80, 'protocol': 'TCP'}], |
631 | - 'envConfig': {}, |
632 | - 'kubernetes': {'readinessProbe': {'httpGet': {'path': '/', 'port': 80}}}, |
633 | - } |
634 | - ], |
635 | - }, |
636 | - }, |
637 | - 'basic_with_env': { |
638 | - 'config': { |
639 | - 'external_hostname': 'example.com', |
640 | - 'environment': 'MYENV: foo', |
641 | - }, |
642 | - 'pod_spec': { |
643 | - 'version': 3, # otherwise resources are ignored |
644 | - 'containers': [ |
645 | - { |
646 | - 'name': 'gunicorn-k8s', |
647 | - 'imageDetails': { |
648 | - 'imagePath': 'registrypath', |
649 | - 'password': 'password', |
650 | - 'username': 'username', |
651 | - }, |
652 | - 'imagePullPolicy': 'Always', |
653 | - 'ports': [{'containerPort': 80, 'protocol': 'TCP'}], |
654 | - 'envConfig': {'MYENV': 'foo'}, |
655 | - 'kubernetes': {'readinessProbe': {'httpGet': {'path': '/', 'port': 80}}}, |
656 | - } |
657 | - ], |
658 | - }, |
659 | - }, |
660 | -} |
661 | - |
662 | - |
663 | -TEST_MAKE_K8S_INGRESS = { |
664 | - 'basic': { |
665 | - 'config': { |
666 | - 'external_hostname': 'example.com', |
667 | - }, |
668 | - 'expected': [ |
669 | - { |
670 | - 'name': 'gunicorn-k8s-ingress', |
671 | - 'spec': { |
672 | - 'rules': [ |
673 | - { |
674 | - 'host': 'example.com', |
675 | - 'http': { |
676 | - 'paths': [ |
677 | - { |
678 | - 'path': '/', |
679 | - 'backend': {'serviceName': 'gunicorn-k8s', 'servicePort': 80}, |
680 | - }, |
681 | - ], |
682 | - }, |
683 | - }, |
684 | - ], |
685 | - }, |
686 | - 'annotations': { |
687 | - 'nginx.ingress.kubernetes.io/ssl-redirect': 'false', |
688 | - }, |
689 | - }, |
690 | - ], |
691 | - }, |
692 | -} |
693 | - |
694 | TEST_RENDER_TEMPLATE = { |
695 | 'working': { |
696 | 'tmpl': "test {{db.x}}", |
697 | diff --git a/tests/unit/test_charm.py b/tests/unit/test_charm.py |
698 | index b5e1f87..d698ea9 100755 |
699 | --- a/tests/unit/test_charm.py |
700 | +++ b/tests/unit/test_charm.py |
701 | @@ -6,20 +6,13 @@ import unittest |
702 | |
703 | from unittest.mock import MagicMock, patch |
704 | |
705 | -from charm import GunicornK8sCharm, GunicornK8sCharmJujuConfigError, GunicornK8sCharmYAMLError |
706 | +from charm import GunicornK8sCharm |
707 | |
708 | from ops import testing |
709 | -from ops.model import ( |
710 | - ActiveStatus, |
711 | - BlockedStatus, |
712 | -) |
713 | |
714 | from scenario import ( |
715 | JUJU_DEFAULT_CONFIG, |
716 | TEST_JUJU_CONFIG, |
717 | - TEST_CONFIGURE_POD, |
718 | - TEST_MAKE_POD_SPEC, |
719 | - TEST_MAKE_K8S_INGRESS, |
720 | TEST_RENDER_TEMPLATE, |
721 | TEST_PG_URI, |
722 | TEST_PG_CONNSTR, |
723 | @@ -49,14 +42,14 @@ class TestGunicornK8sCharm(unittest.TestCase): |
724 | |
725 | with patch('test_charm.GunicornK8sCharm._stored') as mock_stored: |
726 | with patch('pgsql.PostgreSQLClient'): |
727 | - mock_stored.reldata = {'pg': 'foo'} |
728 | - c = GunicornK8sCharm(MagicMock()) |
729 | - self.assertEqual(c._stored.reldata, mock_stored.reldata) |
730 | + with patch('test_charm.GunicornK8sCharm.on'): |
731 | + mock_stored.reldata = {'pg': 'foo'} |
732 | + c = GunicornK8sCharm(MagicMock()) |
733 | + self.assertEqual(c._stored.reldata, mock_stored.reldata) |
734 | |
735 | - def test_on_database_relation_joined(self): |
736 | + def test_on_database_relation_joined_unit_is_leader(self): |
737 | """Test the _on_database_relation_joined function.""" |
738 | |
739 | - # Unit is leader |
740 | mock_event = MagicMock() |
741 | self.harness.disable_hooks() # we don't want leader-set to fire |
742 | self.harness.set_leader(True) |
743 | @@ -65,7 +58,7 @@ class TestGunicornK8sCharm(unittest.TestCase): |
744 | |
745 | self.assertEqual(mock_event.database, self.harness.charm.app.name) |
746 | |
747 | - # Unit is not leader, DB not ready |
748 | + def test_on_database_relation_joined_unit_is_not_leader(self): |
749 | mock_event = MagicMock() |
750 | self.harness.disable_hooks() # we don't want leader-set to fire |
751 | self.harness.set_leader(False) |
752 | @@ -74,7 +67,6 @@ class TestGunicornK8sCharm(unittest.TestCase): |
753 | |
754 | mock_event.defer.assert_called_once() |
755 | |
756 | - # Unit is leader, DB ready |
757 | mock_event = MagicMock() |
758 | self.harness.disable_hooks() # we don't want leader-set to fire |
759 | self.harness.set_leader(False) |
760 | @@ -109,26 +101,27 @@ class TestGunicornK8sCharm(unittest.TestCase): |
761 | mock_event.database = self.harness.charm.app.name |
762 | mock_event.master.conn_str = TEST_PG_CONNSTR |
763 | mock_event.master.uri = TEST_PG_URI |
764 | - with patch('charm.GunicornK8sCharm._configure_pod') as configure_pod: |
765 | + with patch('charm.GunicornK8sCharm._on_config_changed') as on_config_changes: |
766 | r = self.harness.charm._on_master_changed(mock_event) |
767 | |
768 | reldata = self.harness.charm._stored.reldata |
769 | self.assertEqual(reldata['pg']['conn_str'], mock_event.master.conn_str) |
770 | self.assertEqual(reldata['pg']['db_uri'], mock_event.master.uri) |
771 | self.assertEqual(r, None) |
772 | - configure_pod.assert_called_with(mock_event) |
773 | + on_config_changes.assert_called_with(mock_event) |
774 | |
775 | - def test_on_standby_changed(self): |
776 | + def test_on_standby_changed_database_not_ready(self): |
777 | """Test the _on_standby_changed function.""" |
778 | |
779 | - # Database not ready |
780 | mock_event = MagicMock() |
781 | mock_event.database = None |
782 | |
783 | r = self.harness.charm._on_standby_changed(mock_event) |
784 | self.assertEqual(r, None) |
785 | |
786 | - # Database ready |
787 | + def test_on_standby_changed_database_ready(self): |
788 | + """Test the _on_standby_changed function.""" |
789 | + |
790 | mock_event = MagicMock() |
791 | mock_event.database = self.harness.charm.app.name |
792 | |
793 | @@ -149,10 +142,8 @@ class TestGunicornK8sCharm(unittest.TestCase): |
794 | self.harness.update_config(values['config']) |
795 | if values['expected']: |
796 | with self.assertLogs(level='ERROR') as logger: |
797 | - with self.assertRaises(GunicornK8sCharmJujuConfigError) as exc: |
798 | - self.harness.charm._check_juju_config() |
799 | + self.harness.charm._check_juju_config() |
800 | self.assertEqual(sorted(logger.output), sorted(values['logger'])) |
801 | - self.assertEqual(str(exc.exception), values['expected']) |
802 | else: |
803 | self.assertEqual(self.harness.charm._check_juju_config(), None) |
804 | |
805 | @@ -161,16 +152,6 @@ class TestGunicornK8sCharm(unittest.TestCase): |
806 | # The second argument is the list of key to reset |
807 | self.harness.update_config(JUJU_DEFAULT_CONFIG) |
808 | |
809 | - def test_make_k8s_ingress(self): |
810 | - """Check the crafting of the ingress part of the pod spec.""" |
811 | - self.harness.update_config(JUJU_DEFAULT_CONFIG) |
812 | - |
813 | - for scenario, values in TEST_MAKE_K8S_INGRESS.items(): |
814 | - with self.subTest(scenario=scenario): |
815 | - self.harness.update_config(values['config']) |
816 | - self.assertEqual(self.harness.charm._make_k8s_ingress(), values['expected']) |
817 | - self.harness.update_config(JUJU_DEFAULT_CONFIG) # You need to clean the config after each run |
818 | - |
819 | def test_render_template(self): |
820 | """Test template rendering.""" |
821 | |
822 | @@ -225,18 +206,18 @@ class TestGunicornK8sCharm(unittest.TestCase): |
823 | self.assertEqual(sorted(logger.output), sorted(expected_logger)) |
824 | self.assertEqual(r, expected_ret) |
825 | |
826 | - def test_validate_yaml(self): |
827 | + def test_validate_yaml_proper_type_proper_yaml(self): |
828 | """Test the _validate_yaml function.""" |
829 | |
830 | - # Proper YAML and type |
831 | test_str = "a: b\n1: 2" |
832 | expected_type = dict |
833 | |
834 | r = self.harness.charm._validate_yaml(test_str, expected_type) |
835 | |
836 | self.assertEqual(r, None) |
837 | + |
838 | + def test_validate_yaml_incorrect_yaml(self): |
839 | |
840 | - # Incorrect YAML |
841 | test_str = "a: :" |
842 | expected_type = dict |
843 | expected_output = [ |
844 | @@ -246,49 +227,45 @@ class TestGunicornK8sCharm(unittest.TestCase): |
845 | ' a: :\n' |
846 | ' ^' |
847 | ] |
848 | - expected_exception = 'YAML parsing failed, please check "juju debug-log -l ERROR"' |
849 | |
850 | with self.assertLogs(level='ERROR') as logger: |
851 | - with self.assertRaises(GunicornK8sCharmYAMLError) as exc: |
852 | - self.harness.charm._validate_yaml(test_str, expected_type) |
853 | + self.harness.charm._validate_yaml(test_str, expected_type) |
854 | |
855 | self.assertEqual(sorted(logger.output), expected_output) |
856 | - self.assertEqual(str(exc.exception), expected_exception) |
857 | |
858 | - # Proper YAML, incorrect type |
859 | + def test_validate_yaml_incorrect_type_proper_yaml(self): |
860 | + |
861 | test_str = "a: b" |
862 | expected_type = str |
863 | expected_output = [ |
864 | "ERROR:charm:Expected type '<class 'str'>' but got '<class 'dict'>' when " 'parsing YAML : a: b' |
865 | ] |
866 | |
867 | - expected_exception = 'YAML parsing failed, please check "juju debug-log -l ERROR"' |
868 | - |
869 | with self.assertLogs(level='ERROR') as logger: |
870 | - with self.assertRaises(GunicornK8sCharmYAMLError) as exc: |
871 | - self.harness.charm._validate_yaml(test_str, expected_type) |
872 | + self.harness.charm._validate_yaml(test_str, expected_type) |
873 | |
874 | self.assertEqual(sorted(logger.output), expected_output) |
875 | - self.assertEqual(str(exc.exception), expected_exception) |
876 | |
877 | - def test_make_pod_env(self): |
878 | + def test_make_pod_env_empty_conf(self): |
879 | """Test the _make_pod_env function.""" |
880 | |
881 | - # No env |
882 | self.harness.update_config(JUJU_DEFAULT_CONFIG) |
883 | self.harness.update_config({'environment': ''}) |
884 | expected_ret = {} |
885 | |
886 | r = self.harness.charm._make_pod_env() |
887 | - self.assertEqual(r, expected_ret) |
888 | + self.assertEqual(r, expected_ret, "No env") |
889 | + |
890 | + def test_make_pod_env_proper_env_no_temp_rel(self): |
891 | |
892 | - # Proper env, no templating/relation |
893 | self.harness.update_config(JUJU_DEFAULT_CONFIG) |
894 | self.harness.update_config({'environment': 'a: b'}) |
895 | expected_ret = {'a': 'b'} |
896 | |
897 | r = self.harness.charm._make_pod_env() |
898 | self.assertEqual(r, expected_ret) |
899 | + |
900 | + def test_make_pod_env_proper_env_temp_rel(self): |
901 | |
902 | # Proper env with templating/relations |
903 | self.harness.update_config(JUJU_DEFAULT_CONFIG) |
904 | @@ -310,111 +287,92 @@ class TestGunicornK8sCharm(unittest.TestCase): |
905 | r = self.harness.charm._make_pod_env() |
906 | self.assertEqual(r, expected_ret) |
907 | |
908 | + def test_make_pod_env_improper_env(self): |
909 | + |
910 | # Improper env |
911 | self.harness.update_config(JUJU_DEFAULT_CONFIG) |
912 | self.harness.update_config({'environment': 'a: :'}) |
913 | expected_ret = None |
914 | - expected_exception = ( |
915 | - 'Could not parse Juju config \'environment\' as a YAML dict - check "juju debug-log -l ERROR"' |
916 | - ) |
917 | - |
918 | - with self.assertRaises(GunicornK8sCharmJujuConfigError) as exc: |
919 | + expected_output = [ |
920 | + 'ERROR:charm:Error when parsing the following YAML : a: : : mapping values ' |
921 | + 'are not allowed here\n' |
922 | + ' in "<unicode string>", line 1, column 4:\n' |
923 | + ' a: :\n' |
924 | + ' ^' |
925 | + ] |
926 | + with self.assertLogs(level='ERROR') as logger: |
927 | self.harness.charm._make_pod_env() |
928 | + self.assertEqual(logger.output, expected_output) |
929 | |
930 | - self.assertEqual(str(exc.exception), expected_exception) |
931 | - |
932 | - @patch('pgsql.client._leader_get') |
933 | - def test_configure_pod(self, mock_leader_get): |
934 | - """Test the pod configuration.""" |
935 | - |
936 | + def test_get_pebble_config(self): |
937 | + """Test the _get_pebble_config function.""" |
938 | mock_event = MagicMock() |
939 | - self.harness.update_config(JUJU_DEFAULT_CONFIG) |
940 | - |
941 | - self.harness.set_leader(False) |
942 | - self.harness.charm.unit.status = BlockedStatus("Testing") |
943 | - self.harness.charm._configure_pod(mock_event) |
944 | - self.assertEqual(self.harness.charm.unit.status, ActiveStatus()) |
945 | - self.harness.update_config(JUJU_DEFAULT_CONFIG) # You need to clean the config after each run |
946 | - |
947 | - for scenario, values in TEST_CONFIGURE_POD.items(): |
948 | - with self.subTest(scenario=scenario): |
949 | - mock_leader_get.return_value = values['_leader_get'] |
950 | - self.harness.update_config(values['config']) |
951 | - self.harness.set_leader(True) |
952 | - self.harness.charm._configure_pod(mock_event) |
953 | - if values['expected']: |
954 | - self.assertEqual(self.harness.charm.unit.status, BlockedStatus(values['expected'])) |
955 | - else: |
956 | - self.assertEqual(self.harness.charm.unit.status, ActiveStatus()) |
957 | - |
958 | - self.harness.update_config(JUJU_DEFAULT_CONFIG) # You need to clean the config after each run |
959 | - |
960 | - # Test missing vars |
961 | - self.harness.update_config(JUJU_DEFAULT_CONFIG) |
962 | - self.harness.update_config( |
963 | - { |
964 | - 'image_path': 'my_gunicorn_app:devel', |
965 | - 'external_hostname': 'example.com', |
966 | - 'environment': 'DB_URI: {{pg.uri}}', |
967 | - } |
968 | - ) |
969 | - self.harness.set_leader(True) |
970 | - expected_status = 'Waiting for pg relation(s)' |
971 | - |
972 | - self.harness.charm._configure_pod(mock_event) |
973 | - |
974 | - mock_event.defer.assert_called_once() |
975 | - self.assertEqual(self.harness.charm.unit.status, BlockedStatus(expected_status)) |
976 | + expected_ret = { |
977 | + "summary": "gunicorn layer", |
978 | + "description": "gunicorn layer", |
979 | + "services": { |
980 | + "gunicorn": { |
981 | + "override": "replace", |
982 | + "summary": "gunicorn service", |
983 | + "command": "/srv/gunicorn/run", |
984 | + "startup": "enabled", |
985 | + } |
986 | + }, |
987 | + "checks": { |
988 | + "gunicorn-ready": { |
989 | + "override": "replace", |
990 | + "level": "ready", |
991 | + "http": {"url": "http://127.0.0.1:80"}, |
992 | + }, |
993 | + }, |
994 | + } |
995 | |
996 | - # Test no missing vars |
997 | - self.harness.update_config(JUJU_DEFAULT_CONFIG) |
998 | - self.harness.update_config( |
999 | - { |
1000 | - 'image_path': 'my_gunicorn_app:devel', |
1001 | - 'external_hostname': 'example.com', |
1002 | - 'environment': 'DB_URI: {{pg.uri}}', |
1003 | - } |
1004 | - ) |
1005 | + r = self.harness.charm._get_pebble_config(mock_event) |
1006 | + self.assertEqual(r, expected_ret) |
1007 | |
1008 | - reldata = self.harness.charm._stored.reldata |
1009 | - reldata['pg'] = {'conn_str': TEST_PG_CONNSTR, 'db_uri': TEST_PG_URI} |
1010 | - self.harness.set_leader(True) |
1011 | - # Set up random relation |
1012 | - self.harness.disable_hooks() # no need for hooks to fire for this test |
1013 | - relation_id = self.harness.add_relation('myrel', 'myapp') |
1014 | - self.harness.add_relation_unit(relation_id, 'myapp/0') |
1015 | - self.harness.update_relation_data(relation_id, 'myapp/0', {'thing': 'bli'}) |
1016 | - expected_status = 'Waiting for pg relation(s)' |
1017 | + def test_get_pebble_config_error(self): |
1018 | + """Test the _get_pebble_config function when throwing an error.""" |
1019 | + expected_output = "ERROR:charm:Error getting pod_env_config: Could not parse Juju config 'environment' as a YAML dict - check \"juju debug-log -l ERROR\"" |
1020 | + expected_ret = {} |
1021 | + mock_event = MagicMock() |
1022 | + with patch('charm.GunicornK8sCharm._make_pod_env') as make_pod_env: |
1023 | + make_pod_env.return_value = True |
1024 | |
1025 | - self.harness.charm._configure_pod(mock_event) |
1026 | + with self.assertLogs(level='ERROR') as logger: |
1027 | + r = self.harness.charm._get_pebble_config(mock_event) |
1028 | + self.assertEqual(r, expected_ret) |
1029 | + self.assertEqual(expected_output, logger.output[0]) |
1030 | |
1031 | - self.assertEqual(self.harness.charm.unit.status, ActiveStatus()) |
1032 | + def test_on_gunicorn_pebble_ready_no_problem(self): |
1033 | + """Test the _on_gunicorn_pebble_ready function.""" |
1034 | |
1035 | - # Test incorrect YAML |
1036 | - self.harness.update_config(JUJU_DEFAULT_CONFIG) |
1037 | - self.harness.update_config( |
1038 | - { |
1039 | - 'image_path': 'my_gunicorn_app:devel', |
1040 | - 'external_hostname': 'example.com', |
1041 | - 'environment': 'a: :', |
1042 | - } |
1043 | - ) |
1044 | - self.harness.set_leader(True) |
1045 | - expected_status = 'Could not parse Juju config \'environment\' as a YAML dict - check "juju debug-log -l ERROR"' |
1046 | + mock_event = MagicMock() |
1047 | + expected_ret = None |
1048 | |
1049 | - self.harness.charm._configure_pod(mock_event) |
1050 | + r = self.harness.charm._on_gunicorn_pebble_ready(mock_event) |
1051 | + self.assertEqual(r, expected_ret) |
1052 | |
1053 | - self.assertEqual(self.harness.charm.unit.status, BlockedStatus(expected_status)) |
1054 | + def test_configure_workload_no_problem(self): |
1055 | + """Test the _configure_workload function.""" |
1056 | |
1057 | - def test_make_pod_spec(self): |
1058 | - """Check the crafting of the pod spec.""" |
1059 | - self.harness.update_config(JUJU_DEFAULT_CONFIG) |
1060 | + mock_event = MagicMock() |
1061 | + expected_ret = None |
1062 | |
1063 | - for scenario, values in TEST_MAKE_POD_SPEC.items(): |
1064 | - with self.subTest(scenario=scenario): |
1065 | - self.harness.update_config(values['config']) |
1066 | - self.assertEqual(self.harness.charm._make_pod_spec(), values['pod_spec']) |
1067 | - self.harness.update_config(JUJU_DEFAULT_CONFIG) # You need to clean the config after each run |
1068 | + r = self.harness.charm._configure_workload(mock_event) |
1069 | + self.assertEqual(r, expected_ret) |
1070 | + |
1071 | + def test_configure_workload_pebble_not_ready(self): |
1072 | + |
1073 | + mock_event = MagicMock() |
1074 | + expected_ret = None |
1075 | + expected_output = 'waiting for pebble to start' |
1076 | + with patch('ops.model.Container.can_connect') as can_connect: |
1077 | + can_connect.return_value = False |
1078 | + |
1079 | + with self.assertLogs(level='DEBUG') as logger: |
1080 | + r = self.harness.charm._configure_workload(mock_event) |
1081 | + self.assertEqual(r, expected_ret) |
1082 | + self.assertTrue(expected_output in logger.output[0]) |
1083 | |
1084 | |
1085 | if __name__ == '__main__': |
1086 | diff --git a/tox.ini b/tox.ini |
1087 | index 8dedaea..09db789 100644 |
1088 | --- a/tox.ini |
1089 | +++ b/tox.ini |
1090 | @@ -13,7 +13,7 @@ commands = |
1091 | deps = -r{toxinidir}/tests/unit/requirements.txt |
1092 | -r{toxinidir}/requirements.txt |
1093 | setenv = |
1094 | - PYTHONPATH={toxinidir}/src:{toxinidir}/build/lib:{toxinidir}/build/venv |
1095 | + PYTHONPATH={toxinidir}/src:{toxinidir}/lib |
1096 | TZ=UTC |
1097 | |
1098 | [testenv:black] |
This merge proposal is being monitored by mergebot. Change the status to Approved to merge.