Merge ~mthaddon/charm-k8s-ingress/+git/charm-k8s-ingress:ingress-relation into charm-k8s-ingress:master
Status: Merged
Approved by: Tom Haddon
Approved revision: b66902993556481b6d8b3dfdea2bbe9d2910625c
Merged at revision: cce119b69540fb49ecd7f6d3e73ba6fa1dca2dbe
Proposed branch: ~mthaddon/charm-k8s-ingress/+git/charm-k8s-ingress:ingress-relation
Merge into: charm-k8s-ingress:master
Diff against target: 539 lines (+211/-125), 5 files modified: README.md (+33/-96), metadata.yaml (+5/-0), src/charm.py (+91/-22), tests/unit/test_charm.py (+80/-5), tox.ini (+2/-2)
Related bugs:
Reviewer | Review Type | Status
---|---|---
🤖 prod-jenkaas-is (community) | continuous-integration | Approve
Stuart Bishop (community) | | Approve

Review via email: mp+400278@code.launchpad.net
Commit message
Implement basic ingress relation, which takes precedence over juju config
Description of the change
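The commit message states that ingress relation data takes precedence over juju config. A minimal sketch of that lookup order, using a hypothetical helper rather than the charm's actual properties:

```python
# Illustrative only: a hypothetical helper mirroring the precedence this
# MP implements in the charm's properties (relation data wins over config).

def effective_value(field, relation_data, config):
    """Return the relation-provided value if set, else the juju config value."""
    return relation_data.get(field) or config.get(field)
```

For example, once a related charm has set `service-name` over the relation, any `service-name` set via `juju config` is ignored, matching the `_service_name` property in the diff.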
🤖 Canonical IS Merge Bot (canonical-is-mergebot) wrote:
Unable to determine commit message from repository - please click "Set commit message" and enter the commit message manually.
🤖 prod-jenkaas-is (prod-jenkaas-is) wrote:
A CI job is currently in progress. A follow up comment will be added when it completes.
🤖 prod-jenkaas-is (prod-jenkaas-is) wrote:
FAILED: Continuous integration, rev:089ed6f6ce7
No commit message was specified in the merge proposal. Click on the following link and set the commit message (if you want jenkins to rebuild you need to trigger it yourself):
https:/
https:/
Executed test runs:
SUCCESS: https:/
None: https:/
Click here to trigger a rebuild:
https:/
🤖 prod-jenkaas-is (prod-jenkaas-is) wrote:
A CI job is currently in progress. A follow up comment will be added when it completes.
🤖 prod-jenkaas-is (prod-jenkaas-is) wrote:
PASSED: Continuous integration, rev:f7061af49cb
https:/
Executed test runs:
SUCCESS: https:/
None: https:/
Click here to trigger a rebuild:
https:/
Stuart Bishop (stub) wrote:
This looks fine.
I think the relation should be using application relation data rather than per-unit relation data, per inline comments. The code should be about the same, essentially changing which event.relation databag is read and written.
🤖 prod-jenkaas-is (prod-jenkaas-is) wrote:
A CI job is currently in progress. A follow up comment will be added when it completes.
🤖 prod-jenkaas-is (prod-jenkaas-is) wrote:
PASSED: Continuous integration, rev:be8f9b5b6e3
https:/
Executed test runs:
SUCCESS: https:/
None: https:/
Click here to trigger a rebuild:
https:/
🤖 Canonical IS Merge Bot (canonical-is-mergebot) wrote:
Change successfully merged at revision cce119b69540fb4
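The heart of the change, visible in the preview diff below, is validating required relation fields before creating any K8s resources. A standalone sketch of that check, with the field names taken from the diff:

```python
# Field names copied from the diff; the function is an illustrative
# reduction of the validation step in _on_ingress_relation_changed.

REQUIRED_INGRESS_RELATION_FIELDS = {
    "service-hostname",
    "service-name",
    "service-port",
}

def missing_required_fields(relation_data):
    """Return sorted required fields the remote application hasn't provided."""
    return sorted(
        field for field in REQUIRED_INGRESS_RELATION_FIELDS
        if relation_data.get(field) is None
    )
```

If the list is non-empty the charm logs the missing fields and sets BlockedStatus; otherwise it stores the data in StoredState and re-runs its config-changed logic.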
Preview Diff
1 | diff --git a/README.md b/README.md |
2 | index 4dcbc5c..fe5769b 100644 |
3 | --- a/README.md |
4 | +++ b/README.md |
5 | @@ -10,7 +10,10 @@ size, etc.). |
6 | |
7 | ## Usage |
8 | |
9 | -TODO: Provide high-level usage, such as required config or relations |
10 | +This charm needs to be built locally as charmhub.io can't yet host charms with |
11 | +"bases" in `metadata.yaml`. Once that's been added, deployment instructions |
12 | +will be updated, but for now, see the "Running the charm locally" section |
13 | +below. |
14 | |
15 | ## Running the charm locally |
16 | |
17 | @@ -19,7 +22,7 @@ To build the charm, run `charmcraft build`. |
18 | You'll need version 1.14 or later of Go (`go version` will confirm your current version), and a custom version of the Juju 2.9 branch, as below: |
19 | |
20 | ``` |
21 | -git clone -b demo-pebble https://github.com/benhoyt/juju |
22 | +git clone -b 2.9 https://github.com/juju/juju |
23 | cd juju |
24 | make install |
25 | make microk8s-operator-update # to make the microk8s image and push to Docker |
26 | @@ -27,109 +30,43 @@ export PATH="/home/${USER}/go/bin:$PATH" |
27 | juju bootstrap microk8s --no-gui |
28 | juju add-model ingress-test |
29 | ``` |
30 | -Now from the directory where you've run `charmcraft build` as above: |
31 | -``` |
32 | -juju deploy ./ingress.charm --resource placeholder-image='google/pause' |
33 | -``` |
34 | -This is currently failing with the following error in `juju debug-log`. |
35 | -``` |
36 | -unit-ingress-0: 05:12:02 INFO juju.worker.uniter awaiting error resolution for "config-changed" hook |
37 | -unit-ingress-0: 05:12:03 ERROR unit.ingress/0.juju-log Uncaught exception while in charm code: |
38 | -Traceback (most recent call last): |
39 | - File "./src/charm.py", line 80, in <module> |
40 | - main(CharmK8SIngressCharm) |
41 | - File "/var/lib/juju/agents/unit-ingress-0/charm/venv/ops/main.py", line 402, in main |
42 | - _emit_charm_event(charm, dispatcher.event_name) |
43 | - File "/var/lib/juju/agents/unit-ingress-0/charm/venv/ops/main.py", line 140, in _emit_charm_event |
44 | - event_to_emit.emit(*args, **kwargs) |
45 | - File "/var/lib/juju/agents/unit-ingress-0/charm/venv/ops/framework.py", line 278, in emit |
46 | - framework._emit(event) |
47 | - File "/var/lib/juju/agents/unit-ingress-0/charm/venv/ops/framework.py", line 722, in _emit |
48 | - self._reemit(event_path) |
49 | - File "/var/lib/juju/agents/unit-ingress-0/charm/venv/ops/framework.py", line 767, in _reemit |
50 | - custom_handler(event) |
51 | - File "./src/charm.py", line 75, in _on_config_changed |
52 | - self._get_pods() |
53 | - File "./src/charm.py", line 63, in _get_pods |
54 | - self.k8s_auth() |
55 | - File "./src/charm.py", line 59, in k8s_auth |
56 | - kubernetes.config.load_incluster_config() |
57 | - File "/var/lib/juju/agents/unit-ingress-0/charm/venv/kubernetes/config/incluster_config.py", line 118, in load_incluster_config |
58 | - InClusterConfigLoader( |
59 | - File "/var/lib/juju/agents/unit-ingress-0/charm/venv/kubernetes/config/incluster_config.py", line 54, in load_and_set |
60 | - self._load_config() |
61 | - File "/var/lib/juju/agents/unit-ingress-0/charm/venv/kubernetes/config/incluster_config.py", line 73, in _load_config |
62 | - raise ConfigException("Service token file does not exists.") |
63 | -kubernetes.config.config_exception.ConfigException: Service token file does not exists. |
64 | -unit-ingress-0: 05:12:03 ERROR juju.worker.uniter.operation hook "config-changed" (via hook dispatching script: dispatch) failed: exit status 1 |
65 | -``` |
66 | -Per https://github.com/kubernetes-client/python/issues/1331 the file it's looking for is |
67 | -/var/run/secrets/kubernetes.io/serviceaccount/token, but this file isn't present in the |
68 | -'charm' container for the deployed instance. The file does exist in the model operator, however. |
69 | - |
70 | -This has been filed as https://bugs.launchpad.net/juju/+bug/1920102. |
71 | - |
72 | -To work around this, we've added a `kube-config` config option. This should be the contents of your |
73 | -kubernetes client configuration. If you're using microk8s you can get this via `microk8s config`, |
74 | -and you'd then deploy the charm as follows: |
75 | +Once https://bugs.launchpad.net/juju/+bug/1920102 has been addressed, this |
76 | +charm will be able to use in-cluster credentials. However, for |
77 | +now, you will need to provide this charm with credentials to be able to talk |
78 | +to the K8s API directly. This is done via the `kube-config` config option, |
79 | +which should be set to the contents of your kubernetes client configuration. |
80 | +If you're using microk8s you can get this via `microk8s config`. As an example |
81 | +you could deploy this charm as follows: |
82 | ``` |
83 | juju deploy ./ingress.charm --resource placeholder-image='google/pause' --config kube-config="$(microk8s config)" |
84 | ``` |
85 | -We've also added some initial config options to configure the ingress. As an |
86 | -example: |
87 | +To create an ingress for your service, you'd then add a relation to a charm |
88 | +that supports the `ingress` relation. As an example: |
89 | ``` |
90 | -juju config ingress service-namespace=ingress-test service-name=gunicorn service-port=80 service-hostname=foo.internal |
91 | +juju deploy ./gunicorn.charm --resource gunicorn-image='gunicorncharmers/gunicorn-app:edge' |
92 | +juju relate ingress gunicorn |
93 | ``` |
94 | -This will give you something like this (deployed alongside the non-sidecar |
95 | -version of the charm for comparison): |
96 | +This will create a K8s ingress called `gunicorn-ingress` and a K8s service |
97 | +called `gunicorn-service`. The gunicorn charm in question, which can be found |
98 | +at https://code.launchpad.net/~mthaddon/charm-k8s-gunicorn/+git/charm-k8s-gunicorn/+ref/pebble, |
99 | +implements the relation as follows, as a trivial example: |
100 | ``` |
101 | -mthaddon@tenaya:~/repos/charm-k8s-ingress/charm-k8s-ingress$ microk8s.kubectl get ingress --all-namespaces |
102 | -NAMESPACE NAME CLASS HOSTS ADDRESS PORTS AGE |
103 | -ingress-test gunicorn-old-ingress <none> foo.internal 80 27m |
104 | -ingress-test gunicorn-ingress <none> foo.internal 80 91s |
105 | -mthaddon@tenaya:~/repos/charm-k8s-ingress/charm-k8s-ingress$ juju status |
106 | -Model Controller Cloud/Region Version SLA Timestamp |
107 | -ingress-test microk8s-localhost microk8s/localhost 2.9-rc7 unsupported 18:12:17+01:00 |
108 | - |
109 | -App Version Status Scale Charm Store Channel Rev OS Address Message |
110 | -gunicorn active 1 gunicorn local 3 ubuntu |
111 | -gunicorn-old gunicorn-app:edge active 1 gunicorn local 1 kubernetes 10.152.183.69 |
112 | -ingress active 1 ingress local 3 ubuntu |
113 | +# In __init__: |
114 | +self.framework.observe(self.on['ingress'].relation_changed, self._on_ingress_changed) |
115 | |
116 | -Unit Workload Agent Address Ports Message |
117 | -gunicorn-old/1* active idle 10.1.234.25 80/TCP |
118 | -gunicorn/0* active idle 10.1.234.21 |
119 | -ingress/0* active idle 10.1.234.28 |
120 | +def _on_ingress_changed(self, event: ops.framework.EventBase) -> None: |
121 | + """Handle the ingress relation changed event.""" |
122 | + if self.unit.is_leader(): |
123 | + event.relation.data[self.app]["service-hostname"] = self.config["external_hostname"] |
124 | + event.relation.data[self.app]["service-name"] = self.model.name |
125 | + event.relation.data[self.app]["service-port"] = "80" |
126 | +``` |
127 | |
128 | -mthaddon@tenaya:~/repos/charm-k8s-ingress/charm-k8s-ingress$ microk8s.kubectl describe ingress -n ingress-test gunicorn-ingress |
129 | -Name: gunicorn-ingress |
130 | -Namespace: ingress-test |
131 | -Address: |
132 | -Default backend: default-http-backend:80 (<error: endpoints "default-http-backend" not found>) |
133 | -Rules: |
134 | - Host Path Backends |
135 | - ---- ---- -------- |
136 | - foo.internal |
137 | - / gunicorn:80 () |
138 | -Annotations: nginx.ingress.kubernetes.io/rewrite-target: / |
139 | -Events: <none> |
140 | -mthaddon@tenaya:~/repos/charm-k8s-ingress/charm-k8s-ingress$ microk8s.kubectl describe ingress -n ingress-test gunicorn-old-ingress |
141 | -Name: gunicorn-old-ingress |
142 | -Namespace: ingress-test |
143 | -Address: |
144 | -Default backend: default-http-backend:80 (<error: endpoints "default-http-backend" not found>) |
145 | -Rules: |
146 | - Host Path Backends |
147 | - ---- ---- -------- |
148 | - foo.internal |
149 | - / gunicorn-old:80 (10.1.234.25:80) |
150 | -Annotations: controller.juju.is/id: ac6cacf4-3ed5-4313-88d2-89eef6ae5822 |
151 | - model.juju.is/id: 5cbdb63d-0847-4b92-8c4b-f4af185b60b0 |
152 | - nginx.ingress.kubernetes.io/ssl-redirect: false |
153 | -Events: <none> |
154 | -mthaddon@tenaya:~/repos/charm-k8s-ingress/charm-k8s-ingress$ |
155 | +Alternatively, you can configure the same ingress via Juju config options, to |
156 | +avoid needing to modify the charm you're deploying it alongside. As an example: |
157 | +``` |
158 | +juju config ingress service-name=gunicorn service-port=80 service-hostname=foo.internal |
159 | ``` |
160 | -This needs further work and testing, but produces an initial ingress. |
161 | |
162 | ## Testing |
163 | |
164 | diff --git a/metadata.yaml b/metadata.yaml |
165 | index 100436a..a6ee850 100644 |
166 | --- a/metadata.yaml |
167 | +++ b/metadata.yaml |
168 | @@ -17,3 +17,8 @@ resources: |
169 | placeholder-image: |
170 | type: oci-image |
171 | description: A placeholder image, operator code won't do anything with it. |
172 | + |
173 | +requires: |
174 | + ingress: |
175 | + interface: ingress |
176 | + limit: 1 |
177 | diff --git a/src/charm.py b/src/charm.py |
178 | index d9639d1..a206ba7 100755 |
179 | --- a/src/charm.py |
180 | +++ b/src/charm.py |
181 | @@ -19,11 +19,29 @@ import kubernetes |
182 | # from kubernetes.client.rest import ApiException as K8sApiException |
183 | |
184 | from ops.charm import CharmBase |
185 | +from ops.framework import StoredState |
186 | from ops.main import main |
187 | -from ops.model import ActiveStatus |
188 | +from ops.model import ( |
189 | + ActiveStatus, |
190 | + BlockedStatus, |
191 | +) |
192 | + |
193 | |
194 | logger = logging.getLogger(__name__) |
195 | |
196 | +REQUIRED_INGRESS_RELATION_FIELDS = { |
197 | + "service-hostname", |
198 | + "service-name", |
199 | + "service-port", |
200 | +} |
201 | + |
202 | +OPTIONAL_INGRESS_RELATION_FIELDS = { |
203 | + "max-body-size", |
204 | + "service-namespace", |
205 | + "session-cookie-max-age", |
206 | + "tls-secret-name", |
207 | +} |
208 | + |
209 | |
210 | def _core_v1_api(): |
211 | """Use the v1 k8s API.""" |
212 | @@ -48,27 +66,78 @@ class CharmK8SIngressCharm(CharmBase): |
213 | """Charm the service.""" |
214 | |
215 | _authed = False |
216 | + _stored = StoredState() |
217 | |
218 | def __init__(self, *args): |
219 | super().__init__(*args) |
220 | self.framework.observe(self.on.config_changed, self._on_config_changed) |
221 | |
222 | + # ingress relation handling. |
223 | + self.framework.observe(self.on["ingress"].relation_changed, self._on_ingress_relation_changed) |
224 | + |
225 | + self._stored.set_default(ingress_relation_data=dict()) |
226 | + |
227 | + def _on_ingress_relation_changed(self, event): |
228 | + """Handle a change to the ingress relation.""" |
229 | + if not self.unit.is_leader(): |
230 | + return |
231 | + |
232 | + ingress_fields = { |
233 | + field: event.relation.data[event.app].get(field) |
234 | + for field in REQUIRED_INGRESS_RELATION_FIELDS | OPTIONAL_INGRESS_RELATION_FIELDS |
235 | + } |
236 | + |
237 | + missing_fields = sorted( |
238 | + [field for field in REQUIRED_INGRESS_RELATION_FIELDS if ingress_fields.get(field) is None] |
239 | + ) |
240 | + |
241 | + if missing_fields: |
242 | + logger.error("Missing required data fields for ingress relation: {}".format(", ".join(missing_fields))) |
243 | + self.unit.status = BlockedStatus("Missing fields for ingress: {}".format(", ".join(missing_fields))) |
244 | + return |
245 | + |
246 | + # Set our relation data to stored_state. |
247 | + self._stored.ingress_relation_data = ingress_fields |
248 | + |
249 | + # Now trigger our config_changed handler. |
250 | + self._on_config_changed(event) |
251 | + |
252 | @property |
253 | def _ingress_name(self): |
254 | """Return an ingress name for use creating a k8s ingress.""" |
255 | # Follow the same naming convention as Juju. |
256 | - return "{}-ingress".format(self.config["service-name"]) |
257 | + return "{}-ingress".format( |
258 | + self._stored.ingress_relation_data.get("service-name") or self.config["service-name"] |
259 | + ) |
260 | |
261 | @property |
262 | def _namespace(self): |
263 | """Return the namespace to operate on.""" |
264 | - return self.config["service-namespace"] or self.model.name |
265 | + return ( |
266 | + self._stored.ingress_relation_data.get("service-namespace") |
267 | + or self.config["service-namespace"] |
268 | + or self.model.name |
269 | + ) |
270 | |
271 | @property |
272 | - def _service_name(self): |
273 | + def _k8s_service_name(self): |
274 | """Return a service name for the use creating a k8s service.""" |
275 | - # Avoid collision with service name created by Juju. |
276 | - return "{}-service".format(self.config["service-name"]) |
277 | + # Avoid collision with service name created by Juju. Currently |
278 | + # Juju creates a K8s service listening on port 65535/TCP so we |
279 | + # need to create a separate one. |
280 | + return "{}-service".format(self._service_name) |
281 | + |
282 | + @property |
283 | + def _service_name(self): |
284 | + """Return the name of the service we're connecting to.""" |
285 | + return self._stored.ingress_relation_data.get("service-name") or self.config["service-name"] |
286 | + |
287 | + @property |
288 | + def _service_port(self): |
289 | + """Return the port for the service we're connecting to.""" |
290 | + if self._stored.ingress_relation_data.get("service-port"): |
291 | + return int(self._stored.ingress_relation_data.get("service-port")) |
292 | + return self.config["service-port"] |
293 | |
294 | def k8s_auth(self): |
295 | """Authenticate to kubernetes.""" |
296 | @@ -92,14 +161,14 @@ class CharmK8SIngressCharm(CharmBase): |
297 | return kubernetes.client.V1Service( |
298 | api_version="v1", |
299 | kind="Service", |
300 | - metadata=kubernetes.client.V1ObjectMeta(name=self._service_name), |
301 | + metadata=kubernetes.client.V1ObjectMeta(name=self._k8s_service_name), |
302 | spec=kubernetes.client.V1ServiceSpec( |
303 | - selector={"app.kubernetes.io/name": self.config["service-name"]}, |
304 | + selector={"app.kubernetes.io/name": self._service_name}, |
305 | ports=[ |
306 | kubernetes.client.V1ServicePort( |
307 | - name="tcp-{}".format(self.config["service-port"]), |
308 | - port=self.config["service-port"], |
309 | - target_port=self.config["service-port"], |
310 | + name="tcp-{}".format(self._service_port), |
311 | + port=self._service_port, |
312 | + target_port=self._service_port, |
313 | ) |
314 | ], |
315 | ), |
316 | @@ -116,8 +185,8 @@ class CharmK8SIngressCharm(CharmBase): |
317 | kubernetes.client.NetworkingV1beta1HTTPIngressPath( |
318 | path="/", |
319 | backend=kubernetes.client.NetworkingV1beta1IngressBackend( |
320 | - service_port=self.config["service-port"], |
321 | - service_name=self._service_name, |
322 | + service_port=self._service_port, |
323 | + service_name=self._k8s_service_name, |
324 | ), |
325 | ) |
326 | ] |
327 | @@ -137,7 +206,7 @@ class CharmK8SIngressCharm(CharmBase): |
328 | self.config["session-cookie-max-age"] |
329 | ) |
330 | annotations["nginx.ingress.kubernetes.io/session-cookie-name"] = "{}_AFFINITY".format( |
331 | - self.config["service-name"].upper() |
332 | + self._service_name.upper() |
333 | ) |
334 | annotations["nginx.ingress.kubernetes.io/session-cookie-samesite"] = "Lax" |
335 | tls_secret_name = self.config["tls-secret-name"] |
336 | @@ -164,7 +233,7 @@ class CharmK8SIngressCharm(CharmBase): |
337 | self.k8s_auth() |
338 | api = _core_v1_api() |
339 | services = api.list_namespaced_service(namespace=self._namespace) |
340 | - return [x.spec.cluster_ip for x in services.items if x.metadata.name == self._service_name] |
341 | + return [x.spec.cluster_ip for x in services.items if x.metadata.name == self._k8s_service_name] |
342 | |
343 | def _define_service(self): |
344 | """Create or update a service in kubernetes.""" |
345 | @@ -172,7 +241,7 @@ class CharmK8SIngressCharm(CharmBase): |
346 | api = _core_v1_api() |
347 | body = self._get_k8s_service() |
348 | services = api.list_namespaced_service(namespace=self._namespace) |
349 | - if self._service_name in [x.metadata.name for x in services.items]: |
350 | + if self._k8s_service_name in [x.metadata.name for x in services.items]: |
351 | # Currently failing with port[1].name required but we're only |
352 | # defining one port above... |
353 | # api.patch_namespaced_service( |
354 | @@ -181,7 +250,7 @@ class CharmK8SIngressCharm(CharmBase): |
355 | # body=body, |
356 | # ) |
357 | api.delete_namespaced_service( |
358 | - name=self._service_name, |
359 | + name=self._k8s_service_name, |
360 | namespace=self._namespace, |
361 | ) |
362 | api.create_namespaced_service( |
363 | @@ -191,7 +260,7 @@ class CharmK8SIngressCharm(CharmBase): |
364 | logger.info( |
365 | "Service updated in namespace %s with name %s", |
366 | self._namespace, |
367 | - self.config["service-name"], |
368 | + self._service_name, |
369 | ) |
370 | else: |
371 | api.create_namespaced_service( |
372 | @@ -201,7 +270,7 @@ class CharmK8SIngressCharm(CharmBase): |
373 | logger.info( |
374 | "Service created in namespace %s with name %s", |
375 | self._namespace, |
376 | - self.config["service-name"], |
377 | + self._service_name, |
378 | ) |
379 | |
380 | def _define_ingress(self): |
381 | @@ -219,7 +288,7 @@ class CharmK8SIngressCharm(CharmBase): |
382 | logger.info( |
383 | "Ingress updated in namespace %s with name %s", |
384 | self._namespace, |
385 | - self.config["service-name"], |
386 | + self._service_name, |
387 | ) |
388 | else: |
389 | api.create_namespaced_ingress( |
390 | @@ -229,7 +298,7 @@ class CharmK8SIngressCharm(CharmBase): |
391 | logger.info( |
392 | "Ingress created in namespace %s with name %s", |
393 | self._namespace, |
394 | - self.config["service-name"], |
395 | + self._service_name, |
396 | ) |
397 | |
398 | def _on_config_changed(self, _): |
399 | @@ -237,7 +306,7 @@ class CharmK8SIngressCharm(CharmBase): |
400 | msg = "" |
401 | # We only want to do anything here if we're the leader to avoid |
402 | # collision if we've scaled out this application. |
403 | - if self.unit.is_leader() and self.config["service-name"]: |
404 | + if self.unit.is_leader() and self._service_name: |
405 | self._define_service() |
406 | self._define_ingress() |
407 | # It's not recommended to do this via ActiveStatus, but we don't |
408 | diff --git a/tests/unit/test_charm.py b/tests/unit/test_charm.py |
409 | index 3eaebc6..3efb784 100644 |
410 | --- a/tests/unit/test_charm.py |
411 | +++ b/tests/unit/test_charm.py |
412 | @@ -1,12 +1,16 @@ |
413 | # Copyright 2021 Tom Haddon |
414 | # See LICENSE file for licensing details. |
415 | |
416 | -import mock |
417 | import unittest |
418 | |
419 | +from unittest.mock import MagicMock, patch |
420 | + |
421 | import kubernetes |
422 | |
423 | -from ops.model import ActiveStatus |
424 | +from ops.model import ( |
425 | + ActiveStatus, |
426 | + BlockedStatus, |
427 | +) |
428 | from ops.testing import Harness |
429 | from charm import CharmK8SIngressCharm |
430 | |
431 | @@ -18,9 +22,9 @@ class TestCharm(unittest.TestCase): |
432 | self.addCleanup(self.harness.cleanup) |
433 | self.harness.begin() |
434 | |
435 | - @mock.patch('charm.CharmK8SIngressCharm._report_service_ips') |
436 | - @mock.patch('charm.CharmK8SIngressCharm._define_ingress') |
437 | - @mock.patch('charm.CharmK8SIngressCharm._define_service') |
438 | + @patch('charm.CharmK8SIngressCharm._report_service_ips') |
439 | + @patch('charm.CharmK8SIngressCharm._define_ingress') |
440 | + @patch('charm.CharmK8SIngressCharm._define_service') |
441 | def test_config_changed(self, _define_service, _define_ingress, _report_service_ips): |
442 | """Test our config changed handler.""" |
443 | # First of all test, with leader set to True. |
444 | @@ -56,10 +60,81 @@ class TestCharm(unittest.TestCase): |
445 | |
446 | def test_namespace(self): |
447 | """Test for the namespace property.""" |
448 | + # If charm config and _stored is empty, use model name. |
449 | + self.assertEqual(self.harness.charm._stored.ingress_relation_data.get("service-namespace"), None) |
450 | self.assertEqual(self.harness.charm.config["service-namespace"], "") |
451 | self.assertEqual(self.harness.charm._namespace, self.harness.charm.model.name) |
452 | + # If we set config, that takes precedence. |
453 | self.harness.update_config({"service-namespace": "mymodelname"}) |
454 | self.assertEqual(self.harness.charm._namespace, "mymodelname") |
455 | + # And if we set _stored, that takes precedence. |
456 | + self.harness.charm._stored.ingress_relation_data["service-namespace"] = "relationnamespace" |
457 | + self.assertEqual(self.harness.charm._namespace, "relationnamespace") |
458 | + |
459 | + def test_service_port(self): |
460 | + """Test the service-port property.""" |
461 | + # First set via config. |
462 | + self.harness.update_config({"service-port": 80}) |
463 | + self.assertEqual(self.harness.charm._service_port, 80) |
464 | + # Now set via the StoredState. This will be set to a string, as all |
465 | + # relation data must be a string. |
466 | + self.harness.charm._stored.ingress_relation_data["service-port"] = "88" |
467 | + self.assertEqual(self.harness.charm._service_port, 88) |
468 | + |
469 | + @patch('charm.CharmK8SIngressCharm._on_config_changed') |
470 | + def test_on_ingress_relation_changed(self, _on_config_changed): |
471 | + """Test ingress relation changed handler.""" |
472 | + # Confirm we do nothing if we're not the leader. |
473 | + self.assertFalse(self.harness.charm.unit.is_leader()) |
474 | + mock_event = MagicMock() |
475 | + self.assertEqual(self.harness.charm._stored.ingress_relation_data, {}) |
476 | + self.harness.charm._on_ingress_relation_changed(mock_event) |
477 | + # Confirm no relation data has been set. |
478 | + self.assertEqual(self.harness.charm._stored.ingress_relation_data, {}) |
479 | + # Confirm config_changed hasn't been called. |
480 | + _on_config_changed.assert_not_called() |
481 | + |
482 | + # Now test on the leader, but with missing fields in the relation data. |
483 | + # We don't want leader-set to fire. |
484 | + self.harness.disable_hooks() |
485 | + self.harness.set_leader(True) |
486 | + mock_event.app = "gunicorn" |
487 | + mock_event.relation.data = {"gunicorn": {"service-name": "gunicorn"}} |
488 | + with self.assertLogs(level="ERROR") as logger: |
489 | + self.harness.charm._on_ingress_relation_changed(mock_event) |
490 | + msg = "ERROR:charm:Missing required data fields for ingress relation: service-hostname, service-port" |
491 | + self.assertEqual(sorted(logger.output), [msg]) |
492 | + # Confirm no relation data has been set. |
493 | + self.assertEqual(self.harness.charm._stored.ingress_relation_data, {}) |
494 | + # Confirm config_changed hasn't been called. |
495 | + _on_config_changed.assert_not_called() |
496 | + # Confirm blocked status. |
497 | + self.assertEqual( |
498 | + self.harness.charm.unit.status, |
499 | + BlockedStatus("Missing fields for ingress: service-hostname, service-port"), |
500 | + ) |
501 | + |
502 | + # Now test with complete relation data. |
503 | + mock_event.relation.data = { |
504 | + "gunicorn": { |
505 | + "service-hostname": "foo.internal", |
506 | + "service-name": "gunicorn", |
507 | + "service-port": "80", |
508 | + } |
509 | + } |
510 | + self.harness.charm._on_ingress_relation_changed(mock_event) |
511 | + expected = { |
512 | + "max-body-size": None, |
513 | + "service-hostname": "foo.internal", |
514 | + "service-name": "gunicorn", |
515 | + "service-namespace": None, |
516 | + "service-port": "80", |
517 | + "session-cookie-max-age": None, |
518 | + "tls-secret-name": None, |
519 | + } |
520 | + self.assertEqual(self.harness.charm._stored.ingress_relation_data, expected) |
521 | + # Confirm config_changed has been called. |
522 | + _on_config_changed.assert_called_once() |
523 | |
524 | def test_get_k8s_ingress(self): |
525 | """Test getting our definition of a k8s ingress.""" |
526 | diff --git a/tox.ini b/tox.ini |
527 | index 066c5a8..b4c85dc 100644 |
528 | --- a/tox.ini |
529 | +++ b/tox.ini |
530 | @@ -43,7 +43,7 @@ exclude = |
531 | .git, |
532 | __pycache__, |
533 | .tox, |
534 | -# Ignore E231 because using black creates errors with this |
535 | -ignore = E231 |
536 | +# Ignore E231, W503 because using black creates errors with this |
537 | +ignore = E231, W503 |
538 | max-line-length = 120 |
539 | max-complexity = 10 |
This merge proposal is being monitored by mergebot. Change the status to Approved to merge.