Merge ~mthaddon/charm-k8s-gunicorn/+git/charm-k8s-gunicorn:pebble into charm-k8s-gunicorn:master

Proposed by Tom Haddon
Status: Work in progress
Proposed branch: ~mthaddon/charm-k8s-gunicorn/+git/charm-k8s-gunicorn:pebble
Merge into: charm-k8s-gunicorn:master
Diff against target: 577 lines (+329/-89) (has conflicts)
6 files modified
lib/charms/nginx_ingress_integrator/v0/ingress.py (+174/-0)
metadata.yaml (+15/-0)
requirements.txt (+1/-1)
src/charm.py (+136/-66)
tests/unit/scenario.py (+3/-0)
tests/unit/test_charm.py (+0/-22)
Conflict in metadata.yaml
Conflict in src/charm.py
Conflict in tests/unit/scenario.py
Reviewer Review Type Date Requested Status
gunicorn-charmers Pending
Review via email: mp+399520@code.launchpad.net

Commit message

Test of pebble version of the charm

Description of the change

Test of the pebble version of the charm.

$ git clone -b 2.9 https://github.com/juju/juju
$ cd juju
$ make install
$ make microk8s-operator-update # to make the microk8s image and push to Docker
$ export PATH="/home/${USER}/go/bin:$PATH"
$ juju bootstrap microk8s
$ juju add-model gunicorn
$ juju deploy ./gunicorn.charm --resource gunicorn-image='gunicorncharmers/gunicorn-app:20.0.4-20.04_edge'

Once deployed, check `juju status` for the unit's IP address, then visit it in a browser. You should see something like:

------------------
One of the nice things about the new operator framework is how easy it is to get started.
KUBERNETES_SERVICE_PORT_HTTPS: 443
KUBERNETES_SERVICE_PORT: 443
GUNICORN_PORT: tcp://10.152.183.237:65535
HOSTNAME: gunicorn-0
JUJU_CONTAINER_NAME: gunicorn
MODELOPERATOR_PORT_17071_TCP_ADDR: 10.152.183.72
GUNICORN_SERVICE_HOST: 10.152.183.237
PEBBLE_SOCKET: /charm/container/pebble.socket
PWD: /srv/gunicorn
HOME: /root
KUBERNETES_PORT_443_TCP: tcp://10.152.183.1:443
GUNICORN_PORT_65535_TCP_PROTO: tcp
MODELOPERATOR_SERVICE_HOST: 10.152.183.72
GUNICORN_PORT_65535_TCP_ADDR: 10.152.183.237
GUNICORN_SERVICE_PORT_PLACEHOLDER: 65535
GUNICORN_PORT_65535_TCP_PORT: 65535
APP_WSGI: app:app
APP_NAME: my-awesome-app
SHLVL: 0
MODELOPERATOR_PORT_17071_TCP: tcp://10.152.183.72:17071
KUBERNETES_PORT_443_TCP_PROTO: tcp
KUBERNETES_PORT_443_TCP_ADDR: 10.152.183.1
MODELOPERATOR_PORT_17071_TCP_PORT: 17071
GUNICORN_PORT_65535_TCP: tcp://10.152.183.237:65535
MODELOPERATOR_PORT: tcp://10.152.183.72:17071
KUBERNETES_SERVICE_HOST: 10.152.183.1
KUBERNETES_PORT: tcp://10.152.183.1:443
KUBERNETES_PORT_443_TCP_PORT: 443
PATH: /usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin
GUNICORN_SERVICE_PORT: 65535
MODELOPERATOR_PORT_17071_TCP_PROTO: tcp
DEBIAN_FRONTEND: noninteractive
MODELOPERATOR_SERVICE_PORT: 17071
LC_CTYPE: C.UTF-8
SERVER_SOFTWARE: gunicorn/20.0.4
------------------

To test juju config updates, create a YAML file with the following sample contents:

FOOD: burgers
DRINK: ale

Then run `juju config gunicorn environment=@${PATH_TO_FILENAME}`.
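Under the hood, the `environment` config ends up as the `environment` map of the gunicorn service in the pebble layer. A minimal sketch of that mapping (the helper name `make_pebble_layer` is illustrative; the real logic lives in `_get_pebble_config` in src/charm.py):

```python
def make_pebble_layer(env: dict) -> dict:
    """Build a pebble layer dict, attaching env vars to the gunicorn service."""
    layer = {
        "summary": "gunicorn layer",
        "description": "gunicorn layer",
        "services": {
            "gunicorn": {
                "override": "replace",
                "summary": "gunicorn service",
                "command": "/srv/gunicorn/run",
                "startup": "enabled",
            }
        },
    }
    # Only set the environment key when there are vars to pass, mirroring
    # the "only pass pod_env_config if we have some to pass" behaviour.
    if env:
        layer["services"]["gunicorn"]["environment"] = env
    return layer


layer = make_pebble_layer({"FOOD": "burgers", "DRINK": "ale"})
```

The resulting dict can be handed straight to `container.add_layer`, since pebble now accepts a dict directly.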

This charm also handles the upgrade-charm hook (assuming it's not failing due to config issues, as described above).
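The upgrade-charm handling amounts to gating config application on a stored pebble-ready flag: upgrade-charm clears it, pebble-ready sets it, and config-changed does nothing until it is set. A toy model of that gating (the class name is illustrative; the charm stores the flag as `gunicorn_pebble_ready` in `StoredState`):

```python
class PebbleReadyGate:
    """Toy model of the charm's stored gunicorn_pebble_ready flag."""

    def __init__(self):
        self.ready = False  # workload starts out not ready

    def on_upgrade_charm(self):
        # upgrade-charm reschedules the pod, so the workload must
        # re-announce readiness via a fresh pebble-ready event.
        self.ready = False

    def on_pebble_ready(self):
        self.ready = True

    def on_config_changed(self) -> bool:
        # config-changed is a no-op until the workload is ready;
        # returns whether config would actually be applied.
        return self.ready
```

This is why a config change made while the pod is being rescheduled is simply picked up later, when the workload reports ready.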

Next steps:
- Handle relations.
- Update unit tests.
- Document missing functionality versus the current implementation.

786a573... by Tom Haddon

Comment out series in metadata.yaml and add empty manifest.yaml

eba7f4a... by Tom Haddon

Manifest yaml needs an empty dict, try to include env vars in pebble config by defining and then dumping to yaml

52d1486... by Tom Haddon

Apply black formatting

373c49c... by Tom Haddon

Fix up the pebble config structure based on feedback from jnsgruk

096a11a... by Tom Haddon

No need to run _make_pod_env twice

6b856a5... by Tom Haddon

Update to use specific version of the operator framework that snappass-test uses

e53d086... by Tom Haddon

Remove image configuration since we're switching to oci resources

6abe151... by Tom Haddon

Remove unused textwrap

c5ec8a4... by Tom Haddon

Set a default external hostname for easier devel deploys

635d741... by Tom Haddon

Get OCI image details so we can get past pod-spec-set

2b5702f... by Tom Haddon

Comment out legacy hooks for now, only pass pod_env_config if we have some to pass

966c4b3... by Tom Haddon

Begin trying to handle config changed hook

7402e87... by Tom Haddon

Refactor how we generate pebble config

e20825e... by Tom Haddon

Remove unused copy module

610fe9e... by Tom Haddon

Make the return values of the get_pebble_config function of a consistent type, improve logging of pebble_config

fc03593... by Tom Haddon

Remove unused code, migrate checks for template variables into the right place

01b829b... by Tom Haddon

Remove obsolete tests, handle upgrade-charm hook

24dbd63... by Tom Haddon

Remove commented hook handlers - no longer needed as a reference

b4f9f6a... by Tom Haddon

Switch to archive zip so we always get the latest version of the framework

688b00e... by Tom Haddon

Pin to specific version of the operator framework to avoid needing to rebuild juju with version that supports pebble_ready vs. workload_ready

6d3bdd5... by Tom Haddon

Update for new pebble events and metadata.yaml changes

fa0fe9e... by Tom Haddon

Metadata updates for rc8

e993422... by Tom Haddon

Implement the ingress relation

295358c... by Tom Haddon

Use application rather than unit data for ingress relation

6d1a2da... by Tom Haddon

Pass a dict directly to pebble now it supports that, add a comment about is_running helper method

5969f8f... by Tom Haddon

Use the application name rather than the model name to create the ingress service

f8c5a96... by Tom Haddon

Include return value hint for ingress relation changed handler

203f1a7... by Tom Haddon

Use the new ingress library

2cef81d... by Tom Haddon

Increment patch number

59862e5... by Tom Haddon

Update ingress relation, and handle config changes

ff3ed82... by Tom Haddon

Use new approach to ingress

0e93a5d... by Tom Haddon

Use ServiceStatus to determine whether to stop and start the service

309793e... by Tom Haddon

Switch to nginx-ingress-integrator library

c3f137e... by Tom Haddon

Remove manifest.yaml which is now machine-generated

9cf5653... by Tom Haddon

Remove bases from metadata.yaml, no longer needed

Unmerged commits

9cf5653... by Tom Haddon

Remove bases from metadata.yaml, no longer needed

c3f137e... by Tom Haddon

Remove manifest.yaml which is now machine-generated

309793e... by Tom Haddon

Switch to nginx-ingress-integrator library

0e93a5d... by Tom Haddon

Use ServiceStatus to determine whether to stop and start the service

ff3ed82... by Tom Haddon

Use new approach to ingress

59862e5... by Tom Haddon

Update ingress relation, and handle config changes

2cef81d... by Tom Haddon

Increment patch number

203f1a7... by Tom Haddon

Use the new ingress library

f8c5a96... by Tom Haddon

Include return value hint for ingress relation changed handler

5969f8f... by Tom Haddon

Use the application name rather than the model name to create the ingress service

Preview Diff

1diff --git a/lib/charms/nginx_ingress_integrator/v0/ingress.py b/lib/charms/nginx_ingress_integrator/v0/ingress.py
2new file mode 100644
3index 0000000..65d08a7
4--- /dev/null
5+++ b/lib/charms/nginx_ingress_integrator/v0/ingress.py
6@@ -0,0 +1,174 @@
7+"""Library for the ingress relation.
8+
9+This library contains the Requires and Provides classes for handling
10+the ingress interface.
11+
12+Import `IngressRequires` in your charm, with two required options:
13+ - "self" (the charm itself)
14+ - config_dict
15+
16+`config_dict` accepts the following keys:
17+ - service-hostname (required)
18+ - service-name (required)
19+ - service-port (required)
20+ - limit-rps
21+ - limit-whitelist
22+ - max-body-size
23+ - retry-errors
24+ - service-namespace
25+ - session-cookie-max-age
26+ - tls-secret-name
27+
28+See `config.yaml` for descriptions of each, along with the required type.
29+
30+As an example:
31+```
32+from charms.nginx_ingress_integrator.v0.ingress import IngressRequires
33+
34+# In your charm's `__init__` method.
35+self.ingress = IngressRequires(self, {"service-hostname": self.config["external_hostname"],
36+ "service-name": self.app.name,
37+ "service-port": 80})
38+
39+# In your charm's `config-changed` handler.
40+self.ingress.update_config({"service-hostname": self.config["external_hostname"]})
41+```
42+"""
43+
44+import logging
45+
46+from ops.charm import CharmEvents
47+from ops.framework import EventBase, EventSource, Object
48+from ops.model import BlockedStatus
49+
50+# The unique Charmhub library identifier, never change it
51+LIBID = "db0af4367506491c91663468fb5caa4c"
52+
53+# Increment this major API version when introducing breaking changes
54+LIBAPI = 0
55+
56+# Increment this PATCH version before using `charmcraft push-lib` or reset
57+# to 0 if you are raising the major API version
58+LIBPATCH = 1
59+
60+logger = logging.getLogger(__name__)
61+
62+REQUIRED_INGRESS_RELATION_FIELDS = {
63+ "service-hostname",
64+ "service-name",
65+ "service-port",
66+}
67+
68+OPTIONAL_INGRESS_RELATION_FIELDS = {
69+ "limit-rps",
70+ "limit-whitelist",
71+ "max-body-size",
72+ "retry-errors",
73+ "service-namespace",
74+ "session-cookie-max-age",
75+ "tls-secret-name",
76+}
77+
78+
79+class IngressAvailableEvent(EventBase):
80+ pass
81+
82+
83+class IngressCharmEvents(CharmEvents):
84+ """Custom charm events."""
85+
86+ ingress_available = EventSource(IngressAvailableEvent)
87+
88+
89+class IngressRequires(Object):
90+ """This class defines the functionality for the 'requires' side of the 'ingress' relation.
91+
92+ Hook events observed:
93+ - relation-changed
94+ """
95+
96+ def __init__(self, charm, config_dict):
97+ super().__init__(charm, "ingress")
98+
99+ self.framework.observe(charm.on["ingress"].relation_changed, self._on_relation_changed)
100+
101+ self.config_dict = config_dict
102+
103+ def _config_dict_errors(self, update_only=False):
104+ """Check our config dict for errors."""
105+ block_status = False
106+ unknown = [
107+ x for x in self.config_dict if x not in REQUIRED_INGRESS_RELATION_FIELDS | OPTIONAL_INGRESS_RELATION_FIELDS
108+ ]
109+ if unknown:
110+ logger.error("Unknown key(s) in config dictionary found: %s", ", ".join(unknown))
111+ block_status = True
112+ if not update_only:
113+ missing = [x for x in REQUIRED_INGRESS_RELATION_FIELDS if x not in self.config_dict]
114+ if missing:
115+ logger.error("Missing required key(s) in config dictionary: %s", ", ".join(missing))
116+ block_status = True
117+ if block_status:
118+ self.model.unit.status = BlockedStatus("Error in ingress relation, check `juju debug-log`")
119+ return True
120+ return False
121+
122+ def _on_relation_changed(self, event):
123+ """Handle the relation-changed event."""
124+ # `self.unit` isn't available here, so use `self.model.unit`.
125+ if self.model.unit.is_leader():
126+ if self._config_dict_errors():
127+ return
128+ for key in self.config_dict:
129+ event.relation.data[self.model.app][key] = str(self.config_dict[key])
130+
131+ def update_config(self, config_dict):
132+ """Allow for updates to relation."""
133+ if self.model.unit.is_leader():
134+ self.config_dict = config_dict
135+ if self._config_dict_errors(update_only=True):
136+ return
137+ relation = self.model.get_relation("ingress")
138+ if relation:
139+ for key in self.config_dict:
140+ relation.data[self.model.app][key] = str(self.config_dict[key])
141+
142+
143+class IngressProvides(Object):
144+ """This class defines the functionality for the 'provides' side of the 'ingress' relation.
145+
146+ Hook events observed:
147+ - relation-changed
148+ """
149+
150+ def __init__(self, charm):
151+ super().__init__(charm, "ingress")
152+ # Observe the relation-changed hook event and bind
153+ # self.on_relation_changed() to handle the event.
154+ self.framework.observe(charm.on["ingress"].relation_changed, self._on_relation_changed)
155+ self.charm = charm
156+
157+ def _on_relation_changed(self, event):
158+ """Handle a change to the ingress relation.
159+
160+ Confirm we have the fields we expect to receive."""
161+ # `self.unit` isn't available here, so use `self.model.unit`.
162+ if not self.model.unit.is_leader():
163+ return
164+
165+ ingress_data = {
166+ field: event.relation.data[event.app].get(field)
167+ for field in REQUIRED_INGRESS_RELATION_FIELDS | OPTIONAL_INGRESS_RELATION_FIELDS
168+ }
169+
170+ missing_fields = sorted(
171+ [field for field in REQUIRED_INGRESS_RELATION_FIELDS if ingress_data.get(field) is None]
172+ )
173+
174+ if missing_fields:
175+ logger.error("Missing required data fields for ingress relation: {}".format(", ".join(missing_fields)))
176+ self.model.unit.status = BlockedStatus("Missing fields for ingress: {}".format(", ".join(missing_fields)))
177+
178+ # Create an event that our charm can use to decide it's okay to
179+ # configure the ingress.
180+ self.charm.on.ingress_available.emit()
181diff --git a/metadata.yaml b/metadata.yaml
182index b1a1aa7..37ac9d9 100644
183--- a/metadata.yaml
184+++ b/metadata.yaml
185@@ -5,6 +5,7 @@ description: |
186 Gunicorn charm
187 summary: |
188 Gunicorn charm
189+<<<<<<< metadata.yaml
190 series: [kubernetes]
191 min-juju-version: 2.8.0 # charm storage in state
192 resources:
193@@ -13,6 +14,18 @@ resources:
194 description: docker image for Gunicorn
195 auto-fetch: true
196 upstream-source: 'gunicorncharmers/gunicorn-app:20.0.4-20.04_edge'
197+=======
198+
199+containers:
200+ gunicorn:
201+ resource: gunicorn-image
202+
203+resources:
204+ gunicorn-image:
205+ type: oci-image
206+ description: Docker image for gunicorn to run
207+
208+>>>>>>> metadata.yaml
209 requires:
210 pg:
211 interface: pgsql
212@@ -20,3 +33,5 @@ requires:
213 influxdb:
214 interface: influxdb-api
215 limit: 1
216+ ingress:
217+ interface: ingress
218diff --git a/requirements.txt b/requirements.txt
219index 6b14e66..9313aff 100644
220--- a/requirements.txt
221+++ b/requirements.txt
222@@ -1,3 +1,3 @@
223-ops
224+https://github.com/canonical/operator/archive/refs/heads/master.zip
225 ops-lib-pgsql
226 https://github.com/juju-solutions/resource-oci-image/archive/master.zip
227diff --git a/src/charm.py b/src/charm.py
228index c25df78..13524f7 100755
229--- a/src/charm.py
230+++ b/src/charm.py
231@@ -6,6 +6,7 @@ from jinja2 import Environment, BaseLoader, meta
232 import logging
233 import yaml
234
235+from charms.nginx_ingress_integrator.v0.ingress import IngressRequires
236 import ops
237 from oci_image import OCIImageResource, OCIImageResourceError
238 from ops.framework import StoredState
239@@ -14,8 +15,8 @@ from ops.main import main
240 from ops.model import (
241 ActiveStatus,
242 BlockedStatus,
243- MaintenanceStatus,
244 )
245+from ops.pebble import ServiceStatus
246 import pgsql
247
248
249@@ -28,13 +29,13 @@ JUJU_CONFIG_YAML_DICT_ITEMS = ['environment']
250 class GunicornK8sCharmJujuConfigError(Exception):
251 """Exception when the Juju config is bad."""
252
253- pass
254-
255
256 class GunicornK8sCharmYAMLError(Exception):
257 """Exception raised when parsing YAML fails"""
258
259- pass
260+
261+class GunicornK8sWaitingForRelationsError(Exception):
262+ """Exception when waiting for relations."""
263
264
265 class GunicornK8sCharm(CharmBase):
266@@ -43,18 +44,134 @@ class GunicornK8sCharm(CharmBase):
267 def __init__(self, *args):
268 super().__init__(*args)
269
270+<<<<<<< src/charm.py
271 self.image = OCIImageResource(self, 'gunicorn-image')
272
273 self.framework.observe(self.on.start, self._configure_pod)
274 self.framework.observe(self.on.config_changed, self._configure_pod)
275 self.framework.observe(self.on.leader_elected, self._configure_pod)
276 self.framework.observe(self.on.upgrade_charm, self._configure_pod)
277+=======
278+ self.framework.observe(self.on.config_changed, self._on_config_changed)
279+ self.framework.observe(self.on.upgrade_charm, self._on_upgrade_charm)
280+ self.framework.observe(self.on.gunicorn_pebble_ready, self._on_gunicorn_pebble_ready)
281+
282+ self.ingress = IngressRequires(
283+ self,
284+ {
285+ "service-hostname": self.config["external_hostname"],
286+ "service-name": self.app.name,
287+ "service-port": 80,
288+ },
289+ )
290+>>>>>>> src/charm.py
291
292- # For special-cased relations
293- self._stored.set_default(reldata={})
294+ self._stored.set_default(
295+ gunicorn_pebble_ready=False,
296+ reldata={},
297+ )
298
299 self._init_postgresql_relation()
300
301+ def _get_pebble_config(self, event: ops.framework.EventBase) -> dict:
302+ """Generate pebble config."""
303+ pebble_config = {
304+ "summary": "gunicorn layer",
305+ "description": "gunicorn layer",
306+ "services": {
307+ "gunicorn": {
308+ "override": "replace",
309+ "summary": "gunicorn service",
310+ "command": "/srv/gunicorn/run",
311+ "startup": "enabled",
312+ }
313+ },
314+ }
315+
316+ # Update pod environment config.
317+ try:
318+ pod_env_config = self._make_pod_env()
319+ except GunicornK8sCharmJujuConfigError as e:
320+ logger.exception("Error getting pod_env_config: %s", e)
321+ self.unit.status = BlockedStatus('Error getting pod_env_config')
322+ return {}
323+ except GunicornK8sWaitingForRelationsError as e:
324+ self.unit.status = BlockedStatus(str(e))
325+ event.defer()
326+ return {}
327+
328+ try:
329+ self._check_juju_config()
330+ except GunicornK8sCharmJujuConfigError as e:
331+ self.unit.status = BlockedStatus(str(e))
332+ return {}
333+
334+ if pod_env_config:
335+ pebble_config["services"]["gunicorn"]["environment"] = pod_env_config
336+ return pebble_config
337+
338+ def _on_config_changed(self, event: ops.framework.EventBase) -> None:
339+ """Handle the config changed event."""
340+ if not self._stored.gunicorn_pebble_ready:
341+ logger.info(
342+ "Got a config changed event, but the workload isn't ready yet. Doing nothing, config will be "
343+ "picked up when workload is ready."
344+ )
345+ return
346+
347+ pebble_config = self._get_pebble_config(event)
348+ if not pebble_config:
349+ # Charm will be in blocked status.
350+ return
351+
352+ # Ensure the ingress relation has the external hostname.
353+ self.ingress.update_config({"service-hostname": self.config["external_hostname"]})
354+
355+ container = self.unit.get_container("gunicorn")
356+ plan = container.get_plan().to_dict()
357+ if plan["services"] != pebble_config["services"]:
358+ container.add_layer("gunicorn", pebble_config, combine=True)
359+
360+ status = container.get_service("gunicorn")
361+ if status.current == ServiceStatus.ACTIVE:
362+ container.stop("gunicorn")
363+ container.start("gunicorn")
364+
365+ self.unit.status = ActiveStatus()
366+
367+ def _on_upgrade_charm(self, event: ops.framework.EventBase) -> None:
368+ """Handle the upgrade charm event."""
369+ # An 'upgrade-charm' hook (which will also be triggered by an
370+ # 'attach-resource' event) will cause the pod to be rescheduled:
371+ # even though the name remains the same, the IP may change.
372+ # The workload won't be running, so we need to handle that in the
373+ # course of subsequent events that will be triggered after this.
374+ #
375+ # Setting pebble_ready to `False` will ensure a 'config-changed'
376+ # hook waits for the workload to be ready before doing anything.
377+ self._stored.gunicorn_pebble_ready = False
378+ # An upgrade-charm hook will be followed by others such as config-changed
379+ # and workload-ready, so just do nothing else for now.
380+ return
381+
382+ def _on_gunicorn_pebble_ready(self, event: ops.framework.EventBase) -> None:
383+ """Handle the workload ready event."""
384+ self._stored.gunicorn_pebble_ready = True
385+
386+ pebble_config = self._get_pebble_config(event)
387+ if not pebble_config:
388+ # Charm will be in blocked status.
389+ return
390+
391+ container = event.workload
392+ logger.debug("About to add_layer with pebble_config:\n{}".format(yaml.dump(pebble_config)))
393+ # `container.add_layer` accepts str (YAML) or dict or pebble.Layer
394+ # object directly.
395+ container.add_layer("gunicorn", pebble_config)
396+ # Start the container and set status.
397+ container.autostart()
398+ self.unit.status = ActiveStatus()
399+
400 def _init_postgresql_relation(self) -> None:
401 """Initialization related to the postgresql relation"""
402 if 'pg' not in self._stored.reldata:
403@@ -118,38 +235,6 @@ class GunicornK8sCharm(CharmBase):
404 "Required Juju config item(s) not set : {}".format(", ".join(sorted(errors)))
405 )
406
407- def _make_k8s_ingress(self) -> list:
408- """Return an ingress that you can use in k8s_resources
409-
410- :returns: A list to be used as k8s ingress
411- """
412-
413- hostname = self.model.config['external_hostname']
414-
415- ingress = {
416- "name": "{}-ingress".format(self.app.name),
417- "spec": {
418- "rules": [
419- {
420- "host": hostname,
421- "http": {
422- "paths": [
423- {
424- "path": "/",
425- "backend": {"serviceName": self.app.name, "servicePort": 80},
426- }
427- ]
428- },
429- }
430- ]
431- },
432- "annotations": {
433- 'nginx.ingress.kubernetes.io/ssl-redirect': 'false',
434- },
435- }
436-
437- return [ingress]
438-
439 def _render_template(self, tmpl: str, ctx: dict) -> str:
440 """Render a Jinja2 template
441
442@@ -243,6 +328,7 @@ class GunicornK8sCharm(CharmBase):
443 return {}
444
445 ctx = self._get_context_from_relations()
446+<<<<<<< src/charm.py
447 rendered_env = self._render_template(env, ctx)
448
449 try:
450@@ -300,6 +386,8 @@ class GunicornK8sCharm(CharmBase):
451 env = self.model.config['environment']
452 ctx = self._get_context_from_relations()
453
454+=======
455+>>>>>>> src/charm.py
456 if env:
457 j2env = Environment(loader=BaseLoader)
458 j2template = j2env.parse(env)
459@@ -310,40 +398,22 @@ class GunicornK8sCharm(CharmBase):
460 missing_vars.add(req_var)
461
462 if missing_vars:
463- logger.info(
464- "Missing YAML vars to interpolate the 'environment' config option, "
465- "setting status to 'waiting' : %s",
466- ", ".join(sorted(missing_vars)),
467+ raise GunicornK8sWaitingForRelationsError(
468+ 'Waiting for {} relation(s)'.format(", ".join(sorted(missing_vars)))
469 )
470- self.unit.status = BlockedStatus('Waiting for {} relation(s)'.format(", ".join(sorted(missing_vars))))
471- event.defer()
472- return
473-
474- if not self.unit.is_leader():
475- self.unit.status = ActiveStatus()
476- return
477
478- try:
479- self._check_juju_config()
480- except GunicornK8sCharmJujuConfigError as e:
481- self.unit.status = BlockedStatus(str(e))
482- return
483-
484- self.unit.status = MaintenanceStatus('Assembling pod spec')
485+ rendered_env = self._render_template(env, ctx)
486
487 try:
488- pod_spec = self._make_pod_spec()
489- except GunicornK8sCharmJujuConfigError as e:
490- self.unit.status = BlockedStatus(str(e))
491- return
492+ self._validate_yaml(rendered_env, dict)
493+ except GunicornK8sCharmYAMLError:
494+ raise GunicornK8sCharmJujuConfigError(
495+ "Could not parse Juju config 'environment' as a YAML dict - check \"juju debug-log -l ERROR\""
496+ )
497
498- resources = pod_spec.get('kubernetesResources', {})
499- resources['ingressResources'] = self._make_k8s_ingress()
500+ env = yaml.safe_load(rendered_env)
501
502- self.unit.status = MaintenanceStatus('Setting pod spec')
503- self.model.pod.set_spec(pod_spec, k8s_resources={'kubernetesResources': resources})
504- logger.info("Setting active status")
505- self.unit.status = ActiveStatus()
506+ return env
507
508
509 if __name__ == "__main__": # pragma: no cover
510diff --git a/tests/unit/scenario.py b/tests/unit/scenario.py
511index 3402589..f695a2e 100644
512--- a/tests/unit/scenario.py
513+++ b/tests/unit/scenario.py
514@@ -78,6 +78,7 @@ TEST_CONFIGURE_POD = {
515 },
516 }
517
518+<<<<<<< tests/unit/scenario.py
519 TEST_MAKE_POD_SPEC = {
520 'basic_no_env': {
521 'config': {
522@@ -158,6 +159,8 @@ TEST_MAKE_K8S_INGRESS = {
523 },
524 }
525
526+=======
527+>>>>>>> tests/unit/scenario.py
528 TEST_RENDER_TEMPLATE = {
529 'working': {
530 'tmpl': "test {{db.x}}",
531diff --git a/tests/unit/test_charm.py b/tests/unit/test_charm.py
532index b5e1f87..4572910 100755
533--- a/tests/unit/test_charm.py
534+++ b/tests/unit/test_charm.py
535@@ -18,8 +18,6 @@ from scenario import (
536 JUJU_DEFAULT_CONFIG,
537 TEST_JUJU_CONFIG,
538 TEST_CONFIGURE_POD,
539- TEST_MAKE_POD_SPEC,
540- TEST_MAKE_K8S_INGRESS,
541 TEST_RENDER_TEMPLATE,
542 TEST_PG_URI,
543 TEST_PG_CONNSTR,
544@@ -161,16 +159,6 @@ class TestGunicornK8sCharm(unittest.TestCase):
545 # The second argument is the list of key to reset
546 self.harness.update_config(JUJU_DEFAULT_CONFIG)
547
548- def test_make_k8s_ingress(self):
549- """Check the crafting of the ingress part of the pod spec."""
550- self.harness.update_config(JUJU_DEFAULT_CONFIG)
551-
552- for scenario, values in TEST_MAKE_K8S_INGRESS.items():
553- with self.subTest(scenario=scenario):
554- self.harness.update_config(values['config'])
555- self.assertEqual(self.harness.charm._make_k8s_ingress(), values['expected'])
556- self.harness.update_config(JUJU_DEFAULT_CONFIG) # You need to clean the config after each run
557-
558 def test_render_template(self):
559 """Test template rendering."""
560
561@@ -406,16 +394,6 @@ class TestGunicornK8sCharm(unittest.TestCase):
562
563 self.assertEqual(self.harness.charm.unit.status, BlockedStatus(expected_status))
564
565- def test_make_pod_spec(self):
566- """Check the crafting of the pod spec."""
567- self.harness.update_config(JUJU_DEFAULT_CONFIG)
568-
569- for scenario, values in TEST_MAKE_POD_SPEC.items():
570- with self.subTest(scenario=scenario):
571- self.harness.update_config(values['config'])
572- self.assertEqual(self.harness.charm._make_pod_spec(), values['pod_spec'])
573- self.harness.update_config(JUJU_DEFAULT_CONFIG) # You need to clean the config after each run
574-
575
576 if __name__ == '__main__':
577 unittest.main()
