Merge ~cbelu/charm-k8s-ingress:multiple-relations into charm-k8s-ingress:master
| Status: | Merged |
|---|---|
| Approved by: | Tom Haddon |
| Approved revision: | 59f1ca1b4dec1a3fc7cb659b554800ae15b943d3 |
| Merged at revision: | 59f1ca1b4dec1a3fc7cb659b554800ae15b943d3 |
| Proposed branch: | ~cbelu/charm-k8s-ingress:multiple-relations |
| Merge into: | charm-k8s-ingress:master |
| Diff against target: | 1544 lines (+888/-131), 4 files modified: lib/charms/nginx_ingress_integrator/v0/ingress.py (+4/-4), metadata.yaml (+0/-1), src/charm.py (+328/-67), tests/unit/test_charm.py (+556/-59) |
| Related bugs: | |
| Reviewer | Review Type | Date Requested | Status |
|---|---|---|---|
| Tom Haddon | Approve | | |
Review via email: mp+415394@code.launchpad.net
Commit message
Refactors the charm to add multiple relations support.
Updates IngressBrokenEvent to be based on RelationEvent and to include the relation in the event data, which will help us determine which relation has been broken.
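A condensed sketch of what this looks like in the ingress library (the full change is in the preview diff below; the provider's constructor, which sets `self.charm`, is omitted here):

```python
from ops.charm import CharmEvents, RelationEvent
from ops.framework import EventSource, Object


class IngressBrokenEvent(RelationEvent):
    """Fired when an ingress relation goes away; carries event.relation."""


class IngressCharmEvents(CharmEvents):
    ingress_broken = EventSource(IngressBrokenEvent)


class IngressProvides(Object):
    """Provider side of the ingress interface (condensed)."""

    def _on_relation_broken(self, event):
        """Handle a relation-broken event in the ingress relation."""
        if not self.model.unit.is_leader():
            return
        # Pass the relation along so the charm knows which ingress to clean up.
        self.charm.on.ingress_broken.emit(event.relation)
```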
Separates the config / relation data into the _ConfigOrRelation object, which receives the config and a relation in its constructor. This will allow us to get the necessary data from a specific relation (which is overridden by the config options).
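On the charm side, one such wrapper is built per relation (falling back to a single config-only wrapper when no relation exists yet), roughly as the `_all_config_or_relations` property in the diff below does:

```python
@property
def _all_config_or_relations(self):
    """One _ConfigOrRelation wrapper per ingress relation, or a single
    config-only wrapper if the charm has no relations yet."""
    all_relations = self.model.relations["ingress"] or [None]
    multiple = len(self.model.relations["ingress"]) > 1
    return [
        _ConfigOrRelation(self.model, self.config, relation, multiple)
        for relation in all_relations
    ]
```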
Adds multiple ingress relations support
Removes the limit of 1 on the ingress relation in metadata.yaml.
Currently, the Ingress Name is based on the service-name. However, that name can be a bit inconsistent when it comes to having multiple relations. Changes the Ingress Name used to app.name-ingress.
The nginx/nginx-ingress Controller typically used in EKS doesn't handle multiple Kubernetes Ingress Resources having the same hostname in the same way as the k8s.gcr.io one does; supporting that scenario requires specific annotations and an additional Kubernetes Ingress Resource [1]. But both Controllers can handle one Kubernetes Ingress Resource containing multiple hosts / routes.
This charm will now group all the Ingress rules by the required hostnames from all the relations (with config option overrides) and create one Kubernetes Ingress Resource per required hostname. When a relation is broken, its ingress-related data will be removed from the Ingress Resources. If no relation requires a particular Ingress Resource anymore, it will be removed.
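A simplified sketch of the grouping logic; the real `_process_ingresses` in the diff below additionally checks that annotations agree across relations sharing a hostname, and the name-sanitising helper mirrors `_ingress_name`:

```python
import re

_SUB = re.compile("[^0-9a-zA-Z]")


class ConflictingRoutesException(Exception):
    """Two relations requested the same path on the same hostname."""


def ingress_name(hostname):
    # "foo.internal" becomes "foo-internal-ingress".
    return "{}-ingress".format(_SUB.sub("-", hostname))


def group_paths_by_host(ingresses):
    """Collect the requested HTTP paths for each hostname across relations."""
    paths_by_host = {}
    for ingress in ingresses:
        for rule in ingress.spec.rules:
            bucket = paths_by_host.setdefault(rule.host, [])
            for path in rule.http.paths:
                if any(existing.path == path.path for existing in bucket):
                    raise ConflictingRoutesException(rule.host)
                bucket.append(path)
    # One Kubernetes Ingress Resource is then built per hostname, named
    # ingress_name(host); hostnames no longer needed by any relation are removed.
    return paths_by_host
```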
Adds unit tests to validate the cases in which multiple relations are being used.
Adds TLS information for each additional-hostname defined (each new hostname has its own Kubernetes Ingress Resource). Previously, TLS was being set only for the given service hostname, and not for any additional-hostnames.
On multiple relations, if we're required to have different annotations on the same Ingress Resource (same hostname), we cannot satisfy that request. In this case, we enter BlockedStatus.
On multiple relations, if there are multiple matching route paths on the same hostname, we enter BlockedStatus.
On multiple relations, we should use the service-name, service-port, and route-paths from the relation data first, rather than reading them from the config options. This will prevent Service name conflicts and route path conflicts.
If there is only one relation, the config option will still override the relation data (to maintain backwards compatibility).
Adds unit tests to verify this behaviour.
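In code, the precedence comes down to choosing between two lookup helpers on `_ConfigOrRelation`, as in this condensed excerpt of the service-name property from the diff below:

```python
@property
def _service_name(self):
    """Return the name of the service we're connecting to."""
    if self.multiple_relations:
        # Relation data first, so two related apps cannot claim the same name.
        return self._get_relation_data_or_config("service-name", "")
    # Single relation: config still overrides the relation data.
    return self._get_config_or_relation_data("service-name", "")
```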
[1] https:/
Description of the change
🤖 Canonical IS Merge Bot (canonical-is-mergebot) wrote:
Tom Haddon (mthaddon) wrote:
Thanks very much for working on this!
Have added some comments/questions inline. I haven't reviewed the tests yet, but can do that in a later pass.
Jon Seager (jnsgruk):
Claudiu Belu (cbelu):
Claudiu Belu (cbelu):
Jon Seager (jnsgruk) wrote:
Latest commit LGTM!
Claudiu Belu (cbelu) wrote:
Added TLS information for each additional-hostname defined (each new hostname has its own Kubernetes Ingress Resource). Previously, TLS was being set only for the given service hostname, and not for any additional-hostnames.
Tom Haddon (mthaddon) wrote:
I have a general question about how this will be used, which came out of noticing that if we have annotations that don't match you're logging an error here.
Do we expect that this will be used to generate multiple ingresses for multiple entirely separate hostnames in a model (i.e. not just alternate hostnames on one ingress definition), or more likely that it will be used to generate an ingress for multiple applications that should respond on the same hostname within a model? Also, are we expecting that this will be used to combine applications under the same hostname that have been designed to work together, or that it might be used to combine applications that have no knowledge of each other under the same hostname?
The reason I ask is that it seems to me that if we consider it an error that we have annotations that don't match, I think it might be better to put the nginx-ingress-integrator charm into a blocked state.
The other thing to note here is if we are allowing multiple entirely separate ingress objects pointing to different backend applications, the current approach of having the option to override relation settings via configuration is going to potentially be very confusing. We would end up having to apply these options to all defined ingresses, which may not be what's wanted, or needed. For example, how would you override the service-hostname, which I'd expect you'd always want to be able to do in a production deployment?
One option to resolve this would be to decide to only allow one ingress definition that could be serving content for multiple applications under one hostname, and if applications are related to this charm in a way that would cause it to generate multiple ingress definitions, we put the charm into a blocked state and show an error in juju status saying that this should only be used to generate one ingress definition.
I hope I'm explaining this clearly, let me know what you think.
Claudiu Belu (cbelu) wrote:
The original intent for this was to serve multiple applications under the same hostname. These applications are related to each other, and require each other. Having them under different hostnames would raise some XSS-related issues. Generating different ingresses for different hostnames came as a consequence of allowing multiple relations.
Regarding the annotation mismatch, indeed, we could put the charm into a blocked state until that conflict gets resolved.
Regarding config option overrides: indeed, it's something I've been pondering as well. There are some config options that shouldn't be used in the multiple relations scenario, like path-routes and service-name. Those should come from the relation. But most of the other config options are still fine, and can still be used to override the relation data, including the service-hostname. Overriding the service-hostname will mean that the applications can now be accessed through a different hostname than the one set in the relation data.
Regarding your suggestion for only one ingress definition for only one hostname: We still have the additional-hostnames option to take into account (each additional hostname gets its own Kubernetes Ingress Resource).
Tom Haddon (mthaddon) wrote:
In terms of the "additional-hostnames" option ...
Claudiu Belu (cbelu) wrote:
Added BlockedStatus for conflicting annotations and routes, which can be resolved with juju config. Added unit test to verify that behaviour.
On multiple relations, we now get the service-name, service-port, and route-paths from the relations themselves rather than the config. This way, we are not defining the same service / same routes for every relation.
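The resulting reconcile path, condensed from `_on_config_changed` in the diff below (Kubernetes API error handling and a few details are elided; the exception classes are defined in src/charm.py):

```python
from ops.charm import CharmBase
from ops.model import ActiveStatus, BlockedStatus


class NginxIngressCharm(CharmBase):
    # Handlers and helpers other than the conflict handling are elided.

    def _on_config_changed(self, _event):
        """Reconcile services and ingresses; block on unsatisfiable conflicts."""
        msg = ""
        svc_names = [c._service_name for c in self._all_config_or_relations]
        if self.unit.is_leader() and any(svc_names):
            try:
                self._define_services()
                self._define_ingresses()
                msg = "Ingress with service IP(s): {}".format(
                    ", ".join(self._report_service_ips())
                )
            except ConflictingAnnotationsException:
                self.unit.status = BlockedStatus(
                    "Conflicting annotations from relations. Run juju debug-log "
                    "for details. Set manually via juju config."
                )
                return
            except ConflictingRoutesException:
                self.unit.status = BlockedStatus(
                    "Duplicate route found; cannot add ingress. "
                    "Run juju debug-log for details."
                )
                return
        self.unit.status = ActiveStatus(msg)
```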
Preview Diff
1 | diff --git a/lib/charms/nginx_ingress_integrator/v0/ingress.py b/lib/charms/nginx_ingress_integrator/v0/ingress.py |
2 | index c3fac4c..fea2e50 100644 |
3 | --- a/lib/charms/nginx_ingress_integrator/v0/ingress.py |
4 | +++ b/lib/charms/nginx_ingress_integrator/v0/ingress.py |
5 | @@ -54,7 +54,7 @@ the event was fired). |
6 | |
7 | import logging |
8 | |
9 | -from ops.charm import CharmEvents |
10 | +from ops.charm import CharmEvents, RelationEvent |
11 | from ops.framework import EventBase, EventSource, Object |
12 | from ops.model import BlockedStatus |
13 | |
14 | @@ -96,7 +96,7 @@ class IngressAvailableEvent(EventBase): |
15 | pass |
16 | |
17 | |
18 | -class IngressBrokenEvent(EventBase): |
19 | +class IngressBrokenEvent(RelationEvent): |
20 | pass |
21 | |
22 | |
23 | @@ -218,10 +218,10 @@ class IngressProvides(Object): |
24 | # configure the ingress. |
25 | self.charm.on.ingress_available.emit() |
26 | |
27 | - def _on_relation_broken(self, _): |
28 | + def _on_relation_broken(self, event): |
29 | """Handle a relation-broken event in the ingress relation.""" |
30 | if not self.model.unit.is_leader(): |
31 | return |
32 | |
33 | # Create an event that our charm can use to remove the ingress resource. |
34 | - self.charm.on.ingress_broken.emit() |
35 | + self.charm.on.ingress_broken.emit(event.relation) |
36 | diff --git a/metadata.yaml b/metadata.yaml |
37 | index ff29738..8f07769 100644 |
38 | --- a/metadata.yaml |
39 | +++ b/metadata.yaml |
40 | @@ -19,4 +19,3 @@ resources: |
41 | provides: |
42 | ingress: |
43 | interface: ingress |
44 | - limit: 1 |
45 | diff --git a/src/charm.py b/src/charm.py |
46 | index 48bc2d7..b3bf263 100755 |
47 | --- a/src/charm.py |
48 | +++ b/src/charm.py |
49 | @@ -3,6 +3,7 @@ |
50 | # See LICENSE file for licensing details. |
51 | |
52 | import logging |
53 | +import re |
54 | |
55 | import kubernetes.client |
56 | |
57 | @@ -17,6 +18,8 @@ from ops.model import ActiveStatus, BlockedStatus |
58 | |
59 | logger = logging.getLogger(__name__) |
60 | |
61 | +_INGRESS_SUB_REGEX = re.compile("[^0-9a-zA-Z]") |
62 | + |
63 | BOOLEAN_CONFIG_FIELDS = ["rewrite-enabled"] |
64 | |
65 | |
66 | @@ -30,32 +33,33 @@ def _networking_v1_api(): |
67 | return kubernetes.client.NetworkingV1Api() |
68 | |
69 | |
70 | -class NginxIngressCharm(CharmBase): |
71 | - """Charm the service.""" |
72 | +class ConflictingAnnotationsException(Exception): |
73 | + pass |
74 | |
75 | - _authed = False |
76 | - on = IngressCharmEvents() |
77 | |
78 | - def __init__(self, *args): |
79 | - super().__init__(*args) |
80 | - self.framework.observe(self.on.config_changed, self._on_config_changed) |
81 | - self.framework.observe(self.on.describe_ingresses_action, self._describe_ingresses_action) |
82 | +class ConflictingRoutesException(Exception): |
83 | + pass |
84 | |
85 | - # 'ingress' relation handling. |
86 | - self.ingress = IngressProvides(self) |
87 | - # When the 'ingress' is ready to configure, do so. |
88 | - self.framework.observe(self.on.ingress_available, self._on_config_changed) |
89 | - self.framework.observe(self.on.ingress_broken, self._on_ingress_broken) |
90 | |
91 | - def _describe_ingresses_action(self, event): |
92 | - """Handle the 'describe-ingresses' action.""" |
93 | - self.k8s_auth() |
94 | - api = _networking_v1_api() |
95 | - ingresses = api.list_namespaced_ingress(namespace=self._namespace) |
96 | - event.set_results({"ingresses": ingresses}) |
97 | +class _ConfigOrRelation(object): |
98 | + """Class containing data from the Charm configuration, or from a relation.""" |
99 | |
100 | - def _get_config_or_relation_data(self, field, fallback): |
101 | - """Helper method to get data from config or the ingress relation.""" |
102 | + def __init__(self, model, config, relation, multiple_relations): |
103 | + """Creates a _ConfigOrRelation Object. |
104 | + |
105 | + :param model: The charm model. |
106 | + :param config: The charm's configuration. |
107 | + :param relation: One of the charm's relations, if any. |
108 | + :param multiple_relations: If the charm has more than one relation. |
109 | + """ |
110 | + super().__init__() |
111 | + self.model = model |
112 | + self.config = config |
113 | + self.relation = relation |
114 | + self.multiple_relations = multiple_relations |
115 | + |
116 | + def _get_config(self, field): |
117 | + """Helper method to get data from config.""" |
118 | # Config fields with a default of None don't appear in the dict |
119 | config_data = self.config.get(field, None) |
120 | # A value of False is valid in these fields, so check it's not a null-value instead |
121 | @@ -63,13 +67,41 @@ class NginxIngressCharm(CharmBase): |
122 | return config_data |
123 | if config_data: |
124 | return config_data |
125 | - relation = self.model.get_relation("ingress") |
126 | - if relation: |
127 | + |
128 | + return None |
129 | + |
130 | + def _get_relation(self, field): |
131 | + """Helper method to get data from the relation, if any.""" |
132 | + if self.relation: |
133 | try: |
134 | - return relation.data[relation.app][field] |
135 | + return self.relation.data[self.relation.app][field] |
136 | except KeyError: |
137 | # Our relation isn't passing the information we're querying. |
138 | - return fallback |
139 | + return None |
140 | + return None |
141 | + |
142 | + def _get_config_or_relation_data(self, field, fallback): |
143 | + """Helper method to get data from config or the ingress relation, in that order.""" |
144 | + data = self._get_config(field) |
145 | + if data is not None: |
146 | + return data |
147 | + |
148 | + data = self._get_relation(field) |
149 | + if data is not None: |
150 | + return data |
151 | + |
152 | + return fallback |
153 | + |
154 | + def _get_relation_data_or_config(self, field, fallback): |
155 | + """Helper method to get data from the ingress relation or config, in that order.""" |
156 | + data = self._get_relation(field) |
157 | + if data is not None: |
158 | + return data |
159 | + |
160 | + data = self._get_config(field) |
161 | + if data is not None: |
162 | + return data |
163 | + |
164 | return fallback |
165 | |
166 | @property |
167 | @@ -83,8 +115,14 @@ class NginxIngressCharm(CharmBase): |
168 | @property |
169 | def _ingress_name(self): |
170 | """Return an ingress name for use creating a k8s ingress.""" |
171 | - # Follow the same naming convention as Juju. |
172 | - return "{}-ingress".format(self._get_config_or_relation_data("service-name", "")) |
173 | + # If there are 2 or more services configured to use the same service-hostname, the |
174 | + # controller nginx/nginx-ingress requires them to be in the same Kubernetes Ingress object. |
175 | + # Otherwise, Ingress will be served for only one of the services. |
176 | + # Because of this, we'll have to group all ingresses into the same Kubernetes Resource |
177 | + # based on their requested service-hostname. |
178 | + svc_hostname = self._get_config_or_relation_data("service-hostname", "") |
179 | + ingress_name = _INGRESS_SUB_REGEX.sub("-", svc_hostname) |
180 | + return "{}-ingress".format(ingress_name) |
181 | |
182 | @property |
183 | def _limit_rps(self): |
184 | @@ -160,12 +198,28 @@ class NginxIngressCharm(CharmBase): |
185 | @property |
186 | def _service_name(self): |
187 | """Return the name of the service we're connecting to.""" |
188 | + # NOTE: If the charm has multiple relations, use the service name given by the relation |
189 | + # in order to avoid service name conflicts. |
190 | + if self.multiple_relations: |
191 | + return self._get_relation_data_or_config("service-name", "") |
192 | return self._get_config_or_relation_data("service-name", "") |
193 | |
194 | @property |
195 | def _service_port(self): |
196 | """Return the port for the service we're connecting to.""" |
197 | - return int(self._get_config_or_relation_data("service-port", 0)) |
198 | + # NOTE: If the charm has multiple relations, use the service port given by the relation. |
199 | + if self.multiple_relations: |
200 | + return int(self._get_relation_data_or_config("service-port", 0)) |
201 | + return int(self._get_relation_data_or_config("service-port", 0)) |
202 | + |
203 | + @property |
204 | + def _path_routes(self): |
205 | + """Return the path routes to use for the k8s ingress.""" |
206 | + # NOTE: If the charm has multiple relations, use the path routes given by the relation |
207 | + # in order to avoid same route conflicts. |
208 | + if self.multiple_relations: |
209 | + return self._get_relation_data_or_config("path-routes", "/").split(",") |
210 | + return self._get_config_or_relation_data("path-routes", "/").split(",") |
211 | |
212 | @property |
213 | def _session_cookie_max_age(self): |
214 | @@ -181,15 +235,6 @@ class NginxIngressCharm(CharmBase): |
215 | """Return the tls-secret-name to use for k8s ingress (if any).""" |
216 | return self._get_config_or_relation_data("tls-secret-name", "") |
217 | |
218 | - def k8s_auth(self): |
219 | - """Authenticate to kubernetes.""" |
220 | - if self._authed: |
221 | - return |
222 | - |
223 | - kubernetes.config.load_incluster_config() |
224 | - |
225 | - self._authed = True |
226 | - |
227 | def _get_k8s_service(self): |
228 | """Get a K8s service definition.""" |
229 | return kubernetes.client.V1Service( |
230 | @@ -210,7 +255,6 @@ class NginxIngressCharm(CharmBase): |
231 | |
232 | def _get_k8s_ingress(self): |
233 | """Get a K8s ingress definition.""" |
234 | - paths = self._get_config_or_relation_data("path-routes", "/").split(",") |
235 | ingress_paths = [ |
236 | kubernetes.client.V1HTTPIngressPath( |
237 | path=path, |
238 | @@ -224,7 +268,7 @@ class NginxIngressCharm(CharmBase): |
239 | ), |
240 | ), |
241 | ) |
242 | - for path in paths |
243 | + for path in self._path_routes |
244 | ] |
245 | |
246 | hostnames = [self._service_hostname] |
247 | @@ -290,31 +334,89 @@ class NginxIngressCharm(CharmBase): |
248 | spec=spec, |
249 | ) |
250 | |
251 | + |
252 | +class NginxIngressCharm(CharmBase): |
253 | + """Charm the service.""" |
254 | + |
255 | + _authed = False |
256 | + on = IngressCharmEvents() |
257 | + |
258 | + def __init__(self, *args): |
259 | + super().__init__(*args) |
260 | + self.framework.observe(self.on.config_changed, self._on_config_changed) |
261 | + self.framework.observe(self.on.describe_ingresses_action, self._describe_ingresses_action) |
262 | + |
263 | + # 'ingress' relation handling. |
264 | + self.ingress = IngressProvides(self) |
265 | + # When the 'ingress' is ready to configure, do so. |
266 | + self.framework.observe(self.on.ingress_available, self._on_config_changed) |
267 | + self.framework.observe(self.on.ingress_broken, self._on_ingress_broken) |
268 | + |
269 | + @property |
270 | + def _all_config_or_relations(self): |
271 | + all_relations = self.model.relations["ingress"] or [None] |
272 | + multiple_rels = self._multiple_relations |
273 | + return [ |
274 | + _ConfigOrRelation(self.model, self.config, relation, multiple_rels) |
275 | + for relation in all_relations |
276 | + ] |
277 | + |
278 | + @property |
279 | + def _multiple_relations(self): |
280 | + return len(self.model.relations["ingress"]) > 1 |
281 | + |
282 | + @property |
283 | + def _namespace(self): |
284 | + return self._all_config_or_relations[0]._namespace |
285 | + |
286 | + def _describe_ingresses_action(self, event): |
287 | + """Handle the 'describe-ingresses' action.""" |
288 | + self.k8s_auth() |
289 | + api = _networking_v1_api() |
290 | + |
291 | + ingresses = api.list_namespaced_ingress(namespace=self._namespace) |
292 | + event.set_results({"ingresses": ingresses}) |
293 | + |
294 | + def k8s_auth(self): |
295 | + """Authenticate to kubernetes.""" |
296 | + if self._authed: |
297 | + return |
298 | + |
299 | + kubernetes.config.load_incluster_config() |
300 | + |
301 | + self._authed = True |
302 | + |
303 | def _report_service_ips(self): |
304 | """Report on service IP(s).""" |
305 | self.k8s_auth() |
306 | api = _core_v1_api() |
307 | services = api.list_namespaced_service(namespace=self._namespace) |
308 | + all_k8s_service_names = [rel._k8s_service_name for rel in self._all_config_or_relations] |
309 | return [ |
310 | - x.spec.cluster_ip for x in services.items if x.metadata.name == self._k8s_service_name |
311 | + x.spec.cluster_ip for x in services.items if x.metadata.name in all_k8s_service_names |
312 | ] |
313 | |
314 | - def _define_service(self): |
315 | + def _define_services(self): |
316 | + """Create or update the services in Kubernetes from multiple ingress relations.""" |
317 | + for conf_or_rel in self._all_config_or_relations: |
318 | + self._define_service(conf_or_rel) |
319 | + |
320 | + def _define_service(self, conf_or_rel: _ConfigOrRelation): |
321 | """Create or update a service in kubernetes.""" |
322 | self.k8s_auth() |
323 | api = _core_v1_api() |
324 | - body = self._get_k8s_service() |
325 | + body = conf_or_rel._get_k8s_service() |
326 | services = api.list_namespaced_service(namespace=self._namespace) |
327 | - if self._k8s_service_name in [x.metadata.name for x in services.items]: |
328 | + if conf_or_rel._k8s_service_name in [x.metadata.name for x in services.items]: |
329 | api.patch_namespaced_service( |
330 | - name=self._k8s_service_name, |
331 | + name=conf_or_rel._k8s_service_name, |
332 | namespace=self._namespace, |
333 | body=body, |
334 | ) |
335 | logger.info( |
336 | "Service updated in namespace %s with name %s", |
337 | self._namespace, |
338 | - self._service_name, |
339 | + conf_or_rel._service_name, |
340 | ) |
341 | else: |
342 | api.create_namespaced_service( |
343 | @@ -324,23 +426,23 @@ class NginxIngressCharm(CharmBase): |
344 | logger.info( |
345 | "Service created in namespace %s with name %s", |
346 | self._namespace, |
347 | - self._service_name, |
348 | + conf_or_rel._service_name, |
349 | ) |
350 | |
351 | - def _remove_service(self): |
352 | + def _remove_service(self, conf_or_rel: _ConfigOrRelation): |
353 | """Remove the created service in kubernetes.""" |
354 | self.k8s_auth() |
355 | api = _core_v1_api() |
356 | services = api.list_namespaced_service(namespace=self._namespace) |
357 | - if self._k8s_service_name in [x.metadata.name for x in services.items]: |
358 | + if conf_or_rel._k8s_service_name in [x.metadata.name for x in services.items]: |
359 | api.delete_namespaced_service( |
360 | - name=self._k8s_service_name, |
361 | + name=conf_or_rel._k8s_service_name, |
362 | namespace=self._namespace, |
363 | ) |
364 | logger.info( |
365 | "Service deleted in namespace %s with name %s", |
366 | self._namespace, |
367 | - self._service_name, |
368 | + conf_or_rel._service_name, |
369 | ) |
370 | |
371 | def _look_up_and_set_ingress_class(self, api, body): |
372 | @@ -372,23 +474,153 @@ class NginxIngressCharm(CharmBase): |
373 | |
374 | body.spec.ingress_class_name = ingress_class |
375 | |
376 | - def _define_ingress(self): |
377 | + def _define_ingresses(self, excluded_relation=None): |
378 | + """(Re)Creates the Ingress Resources in Kubernetes from multiple ingress relations. |
379 | + |
380 | + Creates the Kubernetes Ingress Resource if it does not exist, or updates it if it does. |
381 | + The Ingress-related data is retrieved from the Charm's configuration and from the ingress |
382 | + relations. |
383 | + |
384 | + :param excluded_relation: The relation for which Ingress rules should not be created. |
385 | + For example, when a relation is broken, Ingress rules for it are no longer |
386 | + necessary, hence, it is to be excluded. Additionally, any Ingress objects that it |
387 | + required and that are no longer needed by other relations will be deleted. |
388 | + """ |
389 | + config_or_relations = self._all_config_or_relations |
390 | + if excluded_relation: |
391 | + config_or_relations = filter( |
392 | + lambda conf_or_rel: conf_or_rel.relation != excluded_relation, config_or_relations |
393 | + ) |
394 | + |
395 | + ingresses = [conf_or_rel._get_k8s_ingress() for conf_or_rel in config_or_relations] |
396 | + |
397 | + ingresses = self._process_ingresses(ingresses) |
398 | + |
399 | + # Check if we need to remove any Ingresses from the excluded relation. If it has hostnames |
400 | + # that are not used by any other relation, we need to remove them. |
401 | + if excluded_relation: |
402 | + conf_or_rel = _ConfigOrRelation( |
403 | + self.model, self.config, excluded_relation, self._multiple_relations |
404 | + ) |
405 | + excluded_ingress = conf_or_rel._get_k8s_ingress() |
406 | + |
407 | + # The Kubernetes Ingress Resources we're creating only has 1 rule per hostname. |
408 | + used_hostnames = [ingress.spec.rules[0].host for ingress in ingresses] |
409 | + for rule in excluded_ingress.spec.rules: |
410 | + if rule.host not in used_hostnames: |
411 | + self._remove_ingress(self._ingress_name(rule.host)) |
412 | + |
413 | + for ingress in ingresses: |
414 | + self._define_ingress(ingress) |
415 | + |
416 | + def _process_ingresses(self, ingresses): |
417 | + # If there are Ingress rules for the same service-hostname, we need to squash those |
418 | + # rules together. This will be used to group the rules by their host. |
419 | + ingress_paths = {} |
420 | + |
421 | + # Mapping between a hostname and the first ingress defining that hostname. Its metadata |
422 | + # will be used for defining the Kubernetes Ingress Resource for the rules having the |
423 | + # same hostname. The metadata used will be from the first rule that references that |
424 | + # hostname. |
425 | + hostname_ingress = {} |
426 | + for ingress in ingresses: |
427 | + # A relation could have defined additional-hostnames, so there could be more than |
428 | + # one rule. Those hostnames might also be used in other relations. |
429 | + for rule in ingress.spec.rules: |
430 | + if rule.host not in ingress_paths: |
431 | + ingress_paths[rule.host] = rule.http.paths |
432 | + hostname_ingress[rule.host] = ingress |
433 | + continue |
434 | + |
435 | + # Ensure that the annotations for this rule match the others in the same group. |
436 | + other_metadata = hostname_ingress[rule.host].metadata |
437 | + if ingress.metadata.annotations != other_metadata.annotations: |
438 | + logger.error( |
439 | + "Annotations that will be set do not match for the rule:\n" |
440 | + "Rule: %s\nAnnotations: %s\nUsed Annotations: %s", |
441 | + rule, |
442 | + ingress.metadata.annotations, |
443 | + other_metadata.annotations, |
444 | + ) |
445 | + raise ConflictingAnnotationsException() |
446 | + |
447 | + # Ensure that the exact same route for this route was not yet defined. |
448 | + defined_paths = ingress_paths[rule.host] |
449 | + for path in rule.http.paths: |
450 | + duplicate_paths = list(filter(lambda p: p.path == path.path, defined_paths)) |
451 | + if duplicate_paths: |
452 | + logger.error( |
453 | + "Duplicate routes found:\nFirst route: %s\nSecond route: %s", |
454 | + duplicate_paths[0], |
455 | + path, |
456 | + ) |
457 | + raise ConflictingRoutesException() |
458 | + |
459 | + # Extend the list of routes for this hostname. |
460 | + ingress_paths[rule.host].extend(rule.http.paths) |
461 | + |
462 | + # Create the new Kubernetes Ingress Objects that are to be added. |
463 | + ingress_objs = [] |
464 | + for hostname, paths in ingress_paths.items(): |
465 | + # Use the metadata from the first rule. |
466 | + initial_ingress = hostname_ingress[hostname] |
467 | + add_ingress = self._create_k8s_ingress_obj(hostname, initial_ingress, paths) |
468 | + ingress_objs.append(add_ingress) |
469 | + |
470 | + return ingress_objs |
471 | + |
472 | + def _ingress_name(self, hostname): |
473 | + """Returns the Kubernetes Ingress Resource name based on the given hostname.""" |
474 | + ingress_name = _INGRESS_SUB_REGEX.sub("-", hostname) |
475 | + return "{}-ingress".format(ingress_name) |
476 | + |
477 | + def _create_k8s_ingress_obj(self, svc_hostname, initial_ingress, paths): |
478 | + """Creates a Kubernetes Ingress Resources with the given data.""" |
479 | + |
480 | + # Create a Ingress Object with the new ingress rules and return it. |
481 | + rule = kubernetes.client.V1IngressRule( |
482 | + host=svc_hostname, |
483 | + http=kubernetes.client.V1HTTPIngressRuleValue(paths=paths), |
484 | + ) |
485 | + # We're going to use an ingress name based on the desired service hostname. |
486 | + ingress_name = self._ingress_name(svc_hostname) |
487 | + new_spec = kubernetes.client.V1IngressSpec(rules=[rule]) |
488 | + |
489 | + # The service hostname might be configured to use TLS. |
490 | + if initial_ingress.spec.tls: |
491 | + new_spec.tls = [ |
492 | + kubernetes.client.V1IngressTLS( |
493 | + hosts=[svc_hostname], |
494 | + secret_name=initial_ingress.spec.tls[0].secret_name, |
495 | + ) |
496 | + ] |
497 | + return kubernetes.client.V1Ingress( |
498 | + api_version="networking.k8s.io/v1", |
499 | + kind="Ingress", |
500 | + metadata=kubernetes.client.V1ObjectMeta( |
501 | + name=ingress_name, |
502 | + annotations=initial_ingress.metadata.annotations, |
503 | + ), |
504 | + spec=new_spec, |
505 | + ) |
506 | + |
507 | + def _define_ingress(self, body): |
508 | """Create or update an ingress in kubernetes.""" |
509 | self.k8s_auth() |
510 | api = _networking_v1_api() |
511 | - body = self._get_k8s_ingress() |
512 | self._look_up_and_set_ingress_class(api, body) |
513 | + ingress_name = body.metadata.name |
514 | ingresses = api.list_namespaced_ingress(namespace=self._namespace) |
515 | - if self._ingress_name in [x.metadata.name for x in ingresses.items]: |
516 | + if ingress_name in [x.metadata.name for x in ingresses.items]: |
517 | api.replace_namespaced_ingress( |
518 | - name=self._ingress_name, |
519 | + name=ingress_name, |
520 | namespace=self._namespace, |
521 | body=body, |
522 | ) |
523 | logger.info( |
524 | "Ingress updated in namespace %s with name %s", |
525 | self._namespace, |
526 | - self._ingress_name, |
527 | + ingress_name, |
528 | ) |
529 | else: |
530 | api.create_namespaced_ingress( |
531 | @@ -398,20 +630,20 @@ class NginxIngressCharm(CharmBase): |
532 | logger.info( |
533 | "Ingress created in namespace %s with name %s", |
534 | self._namespace, |
535 | - self._ingress_name, |
536 | + ingress_name, |
537 | ) |
538 | |
539 | - def _remove_ingress(self): |
540 | + def _remove_ingress(self, ingress_name): |
541 | """Remove ingress resource.""" |
542 | self.k8s_auth() |
543 | api = _networking_v1_api() |
544 | ingresses = api.list_namespaced_ingress(namespace=self._namespace) |
545 | - if self._ingress_name in [x.metadata.name for x in ingresses.items]: |
546 | - api.delete_namespaced_ingress(self._ingress_name, self._namespace) |
547 | + if ingress_name in [x.metadata.name for x in ingresses.items]: |
548 | + api.delete_namespaced_ingress(ingress_name, self._namespace) |
549 | logger.info( |
550 | "Ingress deleted in namespace %s with name %s", |
551 | self._namespace, |
552 | - self._ingress_name, |
553 | + ingress_name, |
554 | ) |
555 | |
556 | def _on_config_changed(self, _): |
557 | @@ -419,10 +651,12 @@ class NginxIngressCharm(CharmBase): |
558 | msg = "" |
559 | # We only want to do anything here if we're the leader to avoid |
560 | # collision if we've scaled out this application. |
561 | - if self.unit.is_leader() and self._service_name: |
562 | + svc_names = [conf_or_rel._service_name for conf_or_rel in self._all_config_or_relations] |
563 | + if self.unit.is_leader() and any(svc_names): |
564 | try: |
565 | - self._define_service() |
566 | - self._define_ingress() |
567 | + self._define_services() |
568 | + self._define_ingresses() |
569 | + |
570 | # It's not recommended to do this via ActiveStatus, but we don't |
571 | # have another way of reporting status yet. |
572 | msg = "Ingress with service IP(s): {}".format( |
573 | @@ -442,14 +676,29 @@ class NginxIngressCharm(CharmBase): |
574 | return |
575 | else: |
576 | raise |
577 | + except ConflictingAnnotationsException: |
578 | + self.unit.status = BlockedStatus( |
579 | + "Conflicting annotations from relations. Run juju debug-log for details. " |
580 | + "Set manually via juju config." |
581 | + ) |
582 | + return |
583 | + except ConflictingRoutesException: |
584 | + self.unit.status = BlockedStatus( |
585 | + "Duplicate route found; cannot add ingress. Run juju debug-log for details." |
586 | + ) |
587 | + return |
588 | self.unit.status = ActiveStatus(msg) |
589 | |
590 | - def _on_ingress_broken(self, _): |
591 | + def _on_ingress_broken(self, event): |
592 | """Handle the ingress broken event.""" |
593 | - if self.unit.is_leader() and self._ingress_name: |
594 | + conf_or_rel = _ConfigOrRelation(self.model, {}, event.relation, self._multiple_relations) |
595 | + if self.unit.is_leader() and conf_or_rel._ingress_name: |
596 | try: |
597 | - self._remove_ingress() |
598 | - self._remove_service() |
599 | + # NOTE: _define_ingresses will recreate the Kubernetes Ingress Resources based |
600 | + # on the existing relations, and remove any resources that are no longer needed |
601 | + # (they were needed by the event relation). |
602 | + self._define_ingresses(excluded_relation=event.relation) |
603 | + self._remove_service(conf_or_rel) |
604 | except kubernetes.client.exceptions.ApiException as e: |
605 | if e.status == 403: |
606 | logger.error( |
607 | @@ -464,6 +713,18 @@ class NginxIngressCharm(CharmBase): |
608 | return |
609 | else: |
610 | raise |
611 | + except ConflictingAnnotationsException: |
612 | + self.unit.status = BlockedStatus( |
613 | + "Conflicting annotations from relations. Run juju debug-log for details. " |
614 | + "Set manually via juju config." |
615 | + ) |
616 | + return |
617 | + except ConflictingRoutesException: |
618 | + self.unit.status = BlockedStatus( |
619 | + "Duplicate route found; cannot add ingress. Run juju debug-log for details." |
620 | + ) |
621 | + return |
622 | + |
623 | self.unit.status = ActiveStatus() |
624 | |
625 | |
626 | diff --git a/tests/unit/test_charm.py b/tests/unit/test_charm.py |
627 | index 538322a..d11634d 100644 |
628 | --- a/tests/unit/test_charm.py |
629 | +++ b/tests/unit/test_charm.py |
630 | @@ -3,6 +3,7 @@ |
631 | |
632 | import unittest |
633 | |
634 | +from unittest import mock |
635 | from unittest.mock import MagicMock, patch |
636 | |
637 | import kubernetes |
638 | @@ -78,9 +79,10 @@ class TestCharm(unittest.TestCase): |
639 | def test_get_ingress_relation_data(self): |
640 | """Test for getting our ingress relation data.""" |
641 | # Confirm we don't have any relation data yet in the relevant properties |
642 | - self.assertEqual(self.harness.charm._service_name, "") |
643 | - self.assertEqual(self.harness.charm._service_hostname, "") |
644 | - self.assertEqual(self.harness.charm._service_port, 0) |
645 | + conf_or_rel = self.harness.charm._all_config_or_relations[0] |
646 | + self.assertEqual(conf_or_rel._service_name, "") |
647 | + self.assertEqual(conf_or_rel._service_hostname, "") |
648 | + self.assertEqual(conf_or_rel._service_port, 0) |
649 | relation_id = self.harness.add_relation('ingress', 'gunicorn') |
650 | self.harness.add_relation_unit(relation_id, 'gunicorn/0') |
651 | relations_data = { |
652 | @@ -90,9 +92,10 @@ class TestCharm(unittest.TestCase): |
653 | } |
654 | self.harness.update_relation_data(relation_id, 'gunicorn', relations_data) |
655 | # And now confirm we have the expected data in the relevant properties. |
656 | - self.assertEqual(self.harness.charm._service_name, "gunicorn") |
657 | - self.assertEqual(self.harness.charm._service_hostname, "foo.internal") |
658 | - self.assertEqual(self.harness.charm._service_port, 80) |
659 | + conf_or_rel = self.harness.charm._all_config_or_relations[0] |
660 | + self.assertEqual(conf_or_rel._service_name, "gunicorn") |
661 | + self.assertEqual(conf_or_rel._service_hostname, "foo.internal") |
662 | + self.assertEqual(conf_or_rel._service_port, 80) |
663 | |
664 | def test_multiple_routes_with_relation_data(self): |
665 | """Test for getting our ingress relation data.""" |
666 | @@ -138,14 +141,16 @@ class TestCharm(unittest.TestCase): |
667 | 'path_type': 'Prefix', |
668 | }, |
669 | ] |
670 | - result_dict = self.harness.charm._get_k8s_ingress().to_dict() |
671 | + conf_or_rel = self.harness.charm._all_config_or_relations[0] |
672 | + result_dict = conf_or_rel._get_k8s_ingress().to_dict() |
673 | self.assertEqual(result_dict['spec']['rules'][0]['http']['paths'], expected) |
674 | |
675 | def test_max_body_size(self): |
676 | """Test for the max-body-size property.""" |
677 | # First set via config. |
678 | self.harness.update_config({"max-body-size": 80}) |
679 | - self.assertEqual(self.harness.charm._max_body_size, "80m") |
680 | + conf_or_rel = self.harness.charm._all_config_or_relations[0] |
681 | + self.assertEqual(conf_or_rel._max_body_size, "80m") |
682 | # Now set via the StoredState. This will be set to a string, as all |
683 | # relation data must be a string. |
684 | relation_id = self.harness.add_relation('ingress', 'gunicorn') |
685 | @@ -158,10 +163,11 @@ class TestCharm(unittest.TestCase): |
686 | } |
687 | self.harness.update_relation_data(relation_id, 'gunicorn', relations_data) |
688 | # Still 80 because it's set via config. |
689 | - self.assertEqual(self.harness.charm._max_body_size, "80m") |
690 | + self.assertEqual(conf_or_rel._max_body_size, "80m") |
691 | self.harness.update_config({"max-body-size": 0}) |
692 | # Now it's the value from the relation. |
693 | - self.assertEqual(self.harness.charm._max_body_size, "88m") |
694 | + conf_or_rel = self.harness.charm._all_config_or_relations[0] |
695 | + self.assertEqual(conf_or_rel._max_body_size, "88m") |
696 | |
697 | def test_namespace(self): |
698 | """Test for the namespace property.""" |
699 | @@ -199,7 +205,8 @@ class TestCharm(unittest.TestCase): |
700 | def test_owasp_modsecurity_crs(self): |
701 | """Test the owasp-modsecurity-crs property.""" |
702 | # Test undefined. |
703 | - self.assertEqual(self.harness.charm._owasp_modsecurity_crs, False) |
704 | + conf_or_rel = self.harness.charm._all_config_or_relations[0] |
705 | + self.assertEqual(conf_or_rel._owasp_modsecurity_crs, False) |
706 | # Test we have no annotations with this set to False. |
707 | self.harness.disable_hooks() |
708 | self.harness.update_config( |
709 | @@ -209,7 +216,8 @@ class TestCharm(unittest.TestCase): |
710 | "service-port": 80, |
711 | } |
712 | ) |
713 | - result_dict = self.harness.charm._get_k8s_ingress().to_dict() |
714 | + conf_or_rel = self.harness.charm._all_config_or_relations[0] |
715 | + result_dict = conf_or_rel._get_k8s_ingress().to_dict() |
716 | expected = { |
717 | "nginx.ingress.kubernetes.io/proxy-body-size": "20m", |
718 | "nginx.ingress.kubernetes.io/rewrite-target": "/", |
719 | @@ -219,8 +227,8 @@ class TestCharm(unittest.TestCase): |
720 | # Test if we set the value we get the correct annotations and the |
721 | # correct charm property. |
722 | self.harness.update_config({"owasp-modsecurity-crs": True}) |
723 | - self.assertEqual(self.harness.charm._owasp_modsecurity_crs, True) |
724 | - result_dict = self.harness.charm._get_k8s_ingress().to_dict() |
725 | + self.assertEqual(conf_or_rel._owasp_modsecurity_crs, True) |
726 | + result_dict = conf_or_rel._get_k8s_ingress().to_dict() |
727 | expected = { |
728 | "nginx.ingress.kubernetes.io/enable-modsecurity": "true", |
729 | "nginx.ingress.kubernetes.io/enable-owasp-modsecurity-crs": "true", |
730 | @@ -236,21 +244,23 @@ class TestCharm(unittest.TestCase): |
731 | def test_retry_errors(self): |
732 | """Test the retry-errors property.""" |
733 | # Test empty value. |
734 | - self.assertEqual(self.harness.charm._retry_errors, "") |
735 | + conf_or_rel = self.harness.charm._all_config_or_relations[0] |
736 | + self.assertEqual(conf_or_rel._retry_errors, "") |
737 | # Test we deal with spaces or not spaces properly. |
738 | self.harness.update_config({"retry-errors": "error, timeout, http_502, http_503"}) |
739 | - self.assertEqual(self.harness.charm._retry_errors, "error timeout http_502 http_503") |
740 | + self.assertEqual(conf_or_rel._retry_errors, "error timeout http_502 http_503") |
741 | self.harness.update_config({"retry-errors": "error,timeout,http_502,http_503"}) |
742 | - self.assertEqual(self.harness.charm._retry_errors, "error timeout http_502 http_503") |
743 | + self.assertEqual(conf_or_rel._retry_errors, "error timeout http_502 http_503") |
744 | # Test unknown value. |
745 | self.harness.update_config({"retry-errors": "error,timeout,http_502,http_418"}) |
746 | - self.assertEqual(self.harness.charm._retry_errors, "error timeout http_502") |
747 | + self.assertEqual(conf_or_rel._retry_errors, "error timeout http_502") |
748 | |
749 | def test_service_port(self): |
750 | """Test the service-port property.""" |
751 | # First set via config. |
752 | self.harness.update_config({"service-port": 80}) |
753 | - self.assertEqual(self.harness.charm._service_port, 80) |
754 | + conf_or_rel = self.harness.charm._all_config_or_relations[0] |
755 | + self.assertEqual(conf_or_rel._service_port, 80) |
756 | # Now set via the relation. |
757 | relation_id = self.harness.add_relation('ingress', 'gunicorn') |
758 | self.harness.add_relation_unit(relation_id, 'gunicorn/0') |
759 | @@ -261,16 +271,18 @@ class TestCharm(unittest.TestCase): |
760 | } |
761 | self.harness.update_relation_data(relation_id, 'gunicorn', relations_data) |
762 | # Config still overrides the relation value. |
763 | - self.assertEqual(self.harness.charm._service_port, 80) |
764 | + self.assertEqual(conf_or_rel._service_port, 80) |
765 | self.harness.update_config({"service-port": 0}) |
766 | # Now it's the value from the relation. |
767 | - self.assertEqual(self.harness.charm._service_port, 88) |
768 | + conf_or_rel = self.harness.charm._all_config_or_relations[0] |
769 | + self.assertEqual(conf_or_rel._service_port, 88) |
770 | |
771 | def test_service_hostname(self): |
772 | """Test the service-hostname property.""" |
773 | # First set via config. |
774 | self.harness.update_config({"service-hostname": "foo.internal"}) |
775 | - self.assertEqual(self.harness.charm._service_hostname, "foo.internal") |
776 | + conf_or_rel = self.harness.charm._all_config_or_relations[0] |
777 | + self.assertEqual(conf_or_rel._service_hostname, "foo.internal") |
778 | # Now set via the relation. |
779 | relation_id = self.harness.add_relation('ingress', 'gunicorn') |
780 | self.harness.add_relation_unit(relation_id, 'gunicorn/0') |
781 | @@ -281,20 +293,22 @@ class TestCharm(unittest.TestCase): |
782 | } |
783 | self.harness.update_relation_data(relation_id, 'gunicorn', relations_data) |
784 | # Config still overrides the relation value. |
785 | - self.assertEqual(self.harness.charm._service_hostname, "foo.internal") |
786 | + self.assertEqual(conf_or_rel._service_hostname, "foo.internal") |
787 | self.harness.update_config({"service-hostname": ""}) |
788 | # Now it's the value from the relation. |
789 | - self.assertEqual(self.harness.charm._service_hostname, "foo-bar.internal") |
790 | + conf_or_rel = self.harness.charm._all_config_or_relations[0] |
791 | + self.assertEqual(conf_or_rel._service_hostname, "foo-bar.internal") |
792 | |
793 | def test_session_cookie_max_age(self): |
794 | """Test the session-cookie-max-age property.""" |
795 | # First set via config. |
796 | self.harness.update_config({"session-cookie-max-age": 3600}) |
797 | - self.assertEqual(self.harness.charm._session_cookie_max_age, "3600") |
798 | + conf_or_rel = self.harness.charm._all_config_or_relations[0] |
799 | + self.assertEqual(conf_or_rel._session_cookie_max_age, "3600") |
800 | # Confirm if we set this to 0 we get a False value, e.g. it doesn't |
801 | # return a string of "0" which would be evaluated to True. |
802 | self.harness.update_config({"session-cookie-max-age": 0}) |
803 | - self.assertFalse(self.harness.charm._session_cookie_max_age) |
804 | + self.assertFalse(conf_or_rel._session_cookie_max_age) |
805 | # Now set via the relation. |
806 | relation_id = self.harness.add_relation('ingress', 'gunicorn') |
807 | self.harness.add_relation_unit(relation_id, 'gunicorn/0') |
808 | @@ -305,12 +319,18 @@ class TestCharm(unittest.TestCase): |
809 | "session-cookie-max-age": "3688", |
810 | } |
811 | self.harness.update_relation_data(relation_id, 'gunicorn', relations_data) |
812 | - self.assertEqual(self.harness.charm._session_cookie_max_age, "3688") |
813 | + conf_or_rel = self.harness.charm._all_config_or_relations[0] |
814 | + self.assertEqual(conf_or_rel._session_cookie_max_age, "3688") |
815 | |
816 | - def test_tls_secret_name(self): |
817 | + @patch('charm.NginxIngressCharm._report_service_ips') |
818 | + @patch('charm.NginxIngressCharm._define_ingress') |
819 | + @patch('charm.NginxIngressCharm._define_services') |
820 | + def test_tls_secret_name(self, mock_def_svc, mock_def_ingress, mock_report_ips): |
821 | """Test the tls-secret-name property.""" |
822 | + mock_report_ips.return_value = ["10.0.1.12"] |
823 | self.harness.update_config({"tls-secret-name": "gunicorn-tls"}) |
824 | - self.assertEqual(self.harness.charm._tls_secret_name, "gunicorn-tls") |
825 | + conf_or_rel = self.harness.charm._all_config_or_relations[0] |
826 | + self.assertEqual(conf_or_rel._tls_secret_name, "gunicorn-tls") |
827 | # Now set via the relation. |
828 | relation_id = self.harness.add_relation('ingress', 'gunicorn') |
829 | self.harness.add_relation_unit(relation_id, 'gunicorn/0') |
830 | @@ -319,19 +339,62 @@ class TestCharm(unittest.TestCase): |
831 | "service-hostname": "foo.internal", |
832 | "service-port": "80", |
833 | "tls-secret-name": "gunicorn-tls-new", |
834 | + "additional-hostnames": "lish.internal", |
835 | } |
836 | self.harness.update_relation_data(relation_id, 'gunicorn', relations_data) |
837 | # Config still overrides the relation data. |
838 | - self.assertEqual(self.harness.charm._tls_secret_name, "gunicorn-tls") |
839 | + self.assertEqual(conf_or_rel._tls_secret_name, "gunicorn-tls") |
840 | + |
841 | + # The charm will not create any resource if it's not the leader. |
842 | + # Check to see if the charm will use the TLS secret name for the additional hostname. |
843 | + self.harness.set_leader(True) |
844 | self.harness.update_config({"tls-secret-name": ""}) |
845 | # Now it's the value from the relation. |
846 | - self.assertEqual(self.harness.charm._tls_secret_name, "gunicorn-tls-new") |
847 | + conf_or_rel = self.harness.charm._all_config_or_relations[0] |
848 | + self.assertEqual(conf_or_rel._tls_secret_name, "gunicorn-tls-new") |
849 | + |
850 | + mock_def_svc.assert_called_once() |
851 | + base_ingress = conf_or_rel._get_k8s_ingress() |
852 | + |
853 | + tls_lish = kubernetes.client.V1IngressTLS( |
854 | + hosts=["lish.internal"], |
855 | + secret_name=base_ingress.spec.tls[0].secret_name, |
856 | + ) |
857 | + ingress_lish = kubernetes.client.V1Ingress( |
858 | + api_version=base_ingress.api_version, |
859 | + kind=base_ingress.kind, |
860 | + metadata=kubernetes.client.V1ObjectMeta( |
861 | + name="lish-internal-ingress", |
862 | + annotations=base_ingress.metadata.annotations, |
863 | + ), |
864 | + spec=kubernetes.client.V1IngressSpec( |
865 | + tls=[tls_lish], |
866 | + rules=[ |
867 | + kubernetes.client.V1IngressRule( |
868 | + host="lish.internal", |
869 | + http=base_ingress.spec.rules[1].http, |
870 | + ) |
871 | + ], |
872 | + ), |
873 | + ) |
874 | + |
875 | + # Since the hostnames are different, it's expected that 2 different Ingress Resources |
876 | + # are created. base_ingress contains the rules for both. |
877 | + base_ingress.spec.rules = [base_ingress.spec.rules[0]] |
878 | + |
879 | + mock_def_ingress.assert_has_calls( |
880 | + [ |
881 | + mock.call(base_ingress), |
882 | + mock.call(ingress_lish), |
883 | + ] |
884 | + ) |
885 | |
886 | def test_rewrite_enabled_property(self): |
887 | """Test for enabling request rewrites.""" |
888 | # First set via config. |
889 | self.harness.update_config({"rewrite-enabled": True}) |
890 | - self.assertEqual(self.harness.charm._rewrite_enabled, True) |
891 | + conf_or_rel = self.harness.charm._all_config_or_relations[0] |
892 | + self.assertEqual(conf_or_rel._rewrite_enabled, True) |
893 | relation_id = self.harness.add_relation("ingress", "gunicorn") |
894 | self.harness.add_relation_unit(relation_id, "gunicorn/0") |
895 | relations_data = { |
896 | @@ -341,9 +404,10 @@ class TestCharm(unittest.TestCase): |
897 | } |
898 | self.harness.update_relation_data(relation_id, "gunicorn", relations_data) |
899 | # Still /test-target because it's set via config. |
900 | - self.assertEqual(self.harness.charm._rewrite_enabled, True) |
901 | + self.assertEqual(conf_or_rel._rewrite_enabled, True) |
902 | self.harness.update_config({"rewrite-enabled": ""}) |
903 | - self.assertEqual(self.harness.charm._rewrite_enabled, False) |
904 | + conf_or_rel = self.harness.charm._all_config_or_relations[0] |
905 | + self.assertEqual(conf_or_rel._rewrite_enabled, False) |
906 | |
907 | def test_rewrite_annotations(self): |
908 | self.harness.disable_hooks() |
909 | @@ -354,7 +418,8 @@ class TestCharm(unittest.TestCase): |
910 | "service-port": 80, |
911 | } |
912 | ) |
913 | - result_dict = self.harness.charm._get_k8s_ingress().to_dict() |
914 | + conf_or_rel = self.harness.charm._all_config_or_relations[0] |
915 | + result_dict = conf_or_rel._get_k8s_ingress().to_dict() |
916 | expected = { |
917 | "nginx.ingress.kubernetes.io/proxy-body-size": "20m", |
918 | "nginx.ingress.kubernetes.io/rewrite-target": "/", |
919 | @@ -363,7 +428,7 @@ class TestCharm(unittest.TestCase): |
920 | self.assertEqual(result_dict["metadata"]["annotations"], expected) |
921 | |
922 | self.harness.update_config({"rewrite-enabled": False}) |
923 | - result_dict = self.harness.charm._get_k8s_ingress().to_dict() |
924 | + result_dict = conf_or_rel._get_k8s_ingress().to_dict() |
925 | expected = { |
926 | "nginx.ingress.kubernetes.io/proxy-body-size": "20m", |
927 | "nginx.ingress.kubernetes.io/ssl-redirect": "false", |
928 | @@ -378,7 +443,7 @@ class TestCharm(unittest.TestCase): |
929 | "nginx.ingress.kubernetes.io/rewrite-target": "/test-target", |
930 | "nginx.ingress.kubernetes.io/ssl-redirect": "false", |
931 | } |
932 | - result_dict = self.harness.charm._get_k8s_ingress().to_dict() |
933 | + result_dict = conf_or_rel._get_k8s_ingress().to_dict() |
934 | self.assertEqual(result_dict["metadata"]["annotations"], expected) |
935 | |
936 | @patch('charm.NginxIngressCharm._on_config_changed') |
937 | @@ -418,9 +483,10 @@ class TestCharm(unittest.TestCase): |
938 | } |
939 | self.harness.update_relation_data(relation_id, 'gunicorn', relations_data) |
940 | # Test we get the values we expect: |
941 | - self.assertEqual(self.harness.charm._service_hostname, "foo.internal") |
942 | - self.assertEqual(self.harness.charm._service_name, "gunicorn") |
943 | - self.assertEqual(self.harness.charm._service_port, 80) |
944 | + conf_or_rel = self.harness.charm._all_config_or_relations[0] |
945 | + self.assertEqual(conf_or_rel._service_hostname, "foo.internal") |
946 | + self.assertEqual(conf_or_rel._service_name, "gunicorn") |
947 | + self.assertEqual(conf_or_rel._service_port, 80) |
948 | |
949 | @patch('charm.NginxIngressCharm._remove_ingress') |
950 | @patch('charm.NginxIngressCharm._remove_service') |
951 | @@ -430,7 +496,7 @@ class TestCharm(unittest.TestCase): |
952 | # to make sure the relation is created and therefore can be removed. |
953 | self.test_on_ingress_relation_changed() |
954 | relation = self.harness.charm.model.get_relation("ingress") |
955 | - self.harness.charm.on.ingress_relation_broken.emit(relation) |
956 | + self.harness.remove_relation(relation.id) |
957 | self.assertEqual(_remove_ingress.call_count, 1) |
958 | self.assertEqual(_remove_service.call_count, 1) |
959 | |
960 | @@ -440,11 +506,12 @@ class TestCharm(unittest.TestCase): |
961 | self.harness.update_config( |
962 | {"service-hostname": "foo.internal", "service-name": "gunicorn", "service-port": 80} |
963 | ) |
964 | + expected_ingress_name = "foo-internal-ingress" |
965 | expected = kubernetes.client.V1Ingress( |
966 | api_version="networking.k8s.io/v1", |
967 | kind="Ingress", |
968 | metadata=kubernetes.client.V1ObjectMeta( |
969 | - name="gunicorn-ingress", |
970 | + name=expected_ingress_name, |
971 | annotations={ |
972 | "nginx.ingress.kubernetes.io/proxy-body-size": "20m", |
973 | "nginx.ingress.kubernetes.io/rewrite-target": "/", |
974 | @@ -475,14 +542,15 @@ class TestCharm(unittest.TestCase): |
975 | ] |
976 | ), |
977 | ) |
978 | - self.assertEqual(self.harness.charm._get_k8s_ingress(), expected) |
979 | + conf_or_rel = self.harness.charm._all_config_or_relations[0] |
980 | + self.assertEqual(conf_or_rel._get_k8s_ingress(), expected) |
981 | # Test additional hostnames |
982 | self.harness.update_config({"additional-hostnames": "bar.internal,foo.external"}) |
983 | expected = kubernetes.client.V1Ingress( |
984 | api_version="networking.k8s.io/v1", |
985 | kind="Ingress", |
986 | metadata=kubernetes.client.V1ObjectMeta( |
987 | - name="gunicorn-ingress", |
988 | + name=expected_ingress_name, |
989 | annotations={ |
990 | "nginx.ingress.kubernetes.io/proxy-body-size": "20m", |
991 | "nginx.ingress.kubernetes.io/rewrite-target": "/", |
992 | @@ -551,14 +619,14 @@ class TestCharm(unittest.TestCase): |
993 | ] |
994 | ), |
995 | ) |
996 | - self.assertEqual(self.harness.charm._get_k8s_ingress(), expected) |
997 | + self.assertEqual(conf_or_rel._get_k8s_ingress(), expected) |
998 | self.harness.update_config({"additional-hostnames": ""}) |
999 | # Test multiple paths |
1000 | expected = kubernetes.client.V1Ingress( |
1001 | api_version="networking.k8s.io/v1", |
1002 | kind="Ingress", |
1003 | metadata=kubernetes.client.V1ObjectMeta( |
1004 | - name="gunicorn-ingress", |
1005 | + name=expected_ingress_name, |
1006 | annotations={ |
1007 | "nginx.ingress.kubernetes.io/proxy-body-size": "20m", |
1008 | "nginx.ingress.kubernetes.io/rewrite-target": "/", |
1009 | @@ -602,14 +670,14 @@ class TestCharm(unittest.TestCase): |
1010 | ), |
1011 | ) |
1012 | self.harness.update_config({"path-routes": "/admin,/portal"}) |
1013 | - self.assertEqual(self.harness.charm._get_k8s_ingress(), expected) |
1014 | + self.assertEqual(conf_or_rel._get_k8s_ingress(), expected) |
1015 | self.harness.update_config({"path-routes": "/"}) |
1016 | # Test with TLS. |
1017 | expected = kubernetes.client.V1Ingress( |
1018 | api_version="networking.k8s.io/v1", |
1019 | kind="Ingress", |
1020 | metadata=kubernetes.client.V1ObjectMeta( |
1021 | - name="gunicorn-ingress", |
1022 | + name=expected_ingress_name, |
1023 | annotations={ |
1024 | "nginx.ingress.kubernetes.io/proxy-body-size": "20m", |
1025 | "nginx.ingress.kubernetes.io/rewrite-target": "/", |
1026 | @@ -646,7 +714,7 @@ class TestCharm(unittest.TestCase): |
1027 | ), |
1028 | ) |
1029 | self.harness.update_config({"tls-secret-name": "gunicorn_tls"}) |
1030 | - self.assertEqual(self.harness.charm._get_k8s_ingress(), expected) |
1031 | + self.assertEqual(conf_or_rel._get_k8s_ingress(), expected) |
1032 | # Test ingress-class, max_body_size, retry_http_errors and |
1033 | # session-cookie-max-age config options. |
1034 | self.harness.update_config( |
1035 | @@ -662,7 +730,7 @@ class TestCharm(unittest.TestCase): |
1036 | api_version="networking.k8s.io/v1", |
1037 | kind="Ingress", |
1038 | metadata=kubernetes.client.V1ObjectMeta( |
1039 | - name="gunicorn-ingress", |
1040 | + name=expected_ingress_name, |
1041 | annotations={ |
1042 | "nginx.ingress.kubernetes.io/affinity": "cookie", |
1043 | "nginx.ingress.kubernetes.io/affinity-mode": "balanced", |
1044 | @@ -702,10 +770,10 @@ class TestCharm(unittest.TestCase): |
1045 | ] |
1046 | ), |
1047 | ) |
1048 | - self.assertEqual(self.harness.charm._get_k8s_ingress(), expected) |
1049 | + self.assertEqual(conf_or_rel._get_k8s_ingress(), expected) |
1050 | # Test limit-whitelist on its own makes no change. |
1051 | self.harness.update_config({"limit-whitelist": "10.0.0.0/16"}) |
1052 | - self.assertEqual(self.harness.charm._get_k8s_ingress(), expected) |
1053 | + self.assertEqual(conf_or_rel._get_k8s_ingress(), expected) |
1054 | # And if we set limit-rps we get both. Unset other options to minimize output. |
1055 | self.harness.update_config( |
1056 | { |
1057 | @@ -720,7 +788,7 @@ class TestCharm(unittest.TestCase): |
1058 | api_version="networking.k8s.io/v1", |
1059 | kind="Ingress", |
1060 | metadata=kubernetes.client.V1ObjectMeta( |
1061 | - name="gunicorn-ingress", |
1062 | + name=expected_ingress_name, |
1063 | annotations={ |
1064 | "nginx.ingress.kubernetes.io/limit-rps": "5", |
1065 | "nginx.ingress.kubernetes.io/limit-whitelist": "10.0.0.0/16", |
1066 | @@ -753,7 +821,7 @@ class TestCharm(unittest.TestCase): |
1067 | ] |
1068 | ), |
1069 | ) |
1070 | - self.assertEqual(self.harness.charm._get_k8s_ingress(), expected) |
1071 | + self.assertEqual(conf_or_rel._get_k8s_ingress(), expected) |
1072 | |
1073 | def test_get_k8s_service(self): |
1074 | """Test getting our definition of a k8s service.""" |
1075 | @@ -774,7 +842,8 @@ class TestCharm(unittest.TestCase): |
1076 | ], |
1077 | ), |
1078 | ) |
1079 | - self.assertEqual(self.harness.charm._get_k8s_service(), expected) |
1080 | + conf_or_rel = self.harness.charm._all_config_or_relations[0] |
1081 | + self.assertEqual(conf_or_rel._get_k8s_service(), expected) |
1082 | |
1083 | |
1084 | INGRESS_CLASS_PUBLIC_DEFAULT = kubernetes.client.V1beta1IngressClass( |
1085 | @@ -853,27 +922,455 @@ class TestCharmLookUpAndSetIngressClass(unittest.TestCase): |
1086 | def test_zero_ingress_class(self): |
1087 | """If there are no ingress classes, there's nothing to choose from.""" |
1088 | api = _make_mock_api_list_ingress_class(ZERO_INGRESS_CLASS_LIST) |
1089 | - body = self.harness.charm._get_k8s_ingress() |
1090 | + conf_or_rel = self.harness.charm._all_config_or_relations[0] |
1091 | + body = conf_or_rel._get_k8s_ingress() |
1092 | self.harness.charm._look_up_and_set_ingress_class(api, body) |
1093 | self.assertIsNone(body.spec.ingress_class_name) |
1094 | |
1095 | def test_one_ingress_class(self): |
1096 | """If there's one default ingress class, choose that.""" |
1097 | api = _make_mock_api_list_ingress_class(ONE_INGRESS_CLASS_LIST) |
1098 | - body = self.harness.charm._get_k8s_ingress() |
1099 | + conf_or_rel = self.harness.charm._all_config_or_relations[0] |
1100 | + body = conf_or_rel._get_k8s_ingress() |
1101 | self.harness.charm._look_up_and_set_ingress_class(api, body) |
1102 | self.assertEqual(body.spec.ingress_class_name, 'public') |
1103 | |
1104 | def test_two_ingress_classes(self): |
1105 | """If there are two ingress classes, one default, choose that.""" |
1106 | api = _make_mock_api_list_ingress_class(TWO_INGRESS_CLASSES_LIST) |
1107 | - body = self.harness.charm._get_k8s_ingress() |
1108 | + conf_or_rel = self.harness.charm._all_config_or_relations[0] |
1109 | + body = conf_or_rel._get_k8s_ingress() |
1110 | self.harness.charm._look_up_and_set_ingress_class(api, body) |
1111 | self.assertEqual(body.spec.ingress_class_name, 'public') |
1112 | |
1113 | def test_two_ingress_classes_two_default(self): |
1114 | """If there are two ingress classes, both default, choose neither.""" |
1115 | api = _make_mock_api_list_ingress_class(TWO_INGRESS_CLASSES_LIST_TWO_DEFAULT) |
1116 | - body = self.harness.charm._get_k8s_ingress() |
1117 | + conf_or_rel = self.harness.charm._all_config_or_relations[0] |
1118 | + body = conf_or_rel._get_k8s_ingress() |
1119 | self.harness.charm._look_up_and_set_ingress_class(api, body) |
1120 | self.assertIsNone(body.spec.ingress_class_name) |
1121 | + |
1122 | + |
1123 | +class TestCharmMultipleRelations(unittest.TestCase): |
1124 | + def setUp(self): |
1125 | + """Setup the harness object.""" |
1126 | + self.harness = Harness(NginxIngressCharm) |
1127 | + self.addCleanup(self.harness.cleanup) |
1128 | + self.harness.begin() |
1129 | + |
1130 | + def _add_ingress_relation(self, relator_name, rel_data): |
1131 | + relation_id = self.harness.add_relation('ingress', relator_name) |
1132 | + self.harness.add_relation_unit(relation_id, '%s/0' % relator_name) |
1133 | + |
1134 | + self.harness.update_relation_data(relation_id, relator_name, rel_data) |
1135 | + return relation_id |
1136 | + |
1137 | + def test_get_multiple_ingress_relation_data(self): |
1138 | + """Test for getting our multiple ingress relation data.""" |
1139 | + # Confirm we don't have any relation data yet in the relevant properties |
1140 | + # NOTE(claudiub): _all_config_or_relations will always have at least one element. |
1141 | + conf_or_rels = self.harness.charm._all_config_or_relations |
1142 | + self.assertEqual(0, len(self.harness.charm.model.relations['ingress'])) |
1143 | + self.assertEqual(1, len(conf_or_rels)) |
1144 | + |
1145 | + self.assertEqual(conf_or_rels[0]._service_name, "") |
1146 | + self.assertEqual(conf_or_rels[0]._service_hostname, "") |
1147 | + self.assertEqual(conf_or_rels[0]._service_port, 0) |
1148 | + |
1149 | + # Add the first relation. |
1150 | + rel_data = { |
1151 | + "service-name": "gunicorn", |
1152 | + "service-hostname": "foo.in.ternal", |
1153 | + "service-port": "80", |
1154 | + "path-routes": "/gunicorn", |
1155 | + } |
1156 | + rel_id1 = self._add_ingress_relation('gunicorn', rel_data) |
1157 | + |
1158 | + # Add another relation with the same hostname. |
1159 | + rel_data = { |
1160 | + "service-name": "funicorn", |
1161 | + "service-hostname": "foo.in.ternal", |
1162 | + "service-port": "8080", |
1163 | + } |
1164 | + rel_id2 = self._add_ingress_relation('funicorn', rel_data) |
1165 | + |
1166 | + # And now, confirm we have 2 relations and their respective data, and that the expected |
1167 | + # data is in the relevant properties. |
1168 | + conf_or_rels = self.harness.charm._all_config_or_relations |
1169 | + self.assertEqual(2, len(self.harness.charm.model.relations['ingress'])) |
1170 | + self.assertEqual(2, len(conf_or_rels)) |
1171 | + |
1172 | + self.assertEqual(conf_or_rels[0]._service_name, "gunicorn") |
1173 | + self.assertEqual(conf_or_rels[0]._service_hostname, "foo.in.ternal") |
1174 | + self.assertEqual(conf_or_rels[0]._service_port, 80) |
1175 | + |
1176 | + self.assertEqual(conf_or_rels[1]._service_name, "funicorn") |
1177 | + self.assertEqual(conf_or_rels[1]._service_hostname, "foo.in.ternal") |
1178 | + self.assertEqual(conf_or_rels[1]._service_port, 8080) |
1179 | + |
1180 | + # Update the service hostname config option and expect it to be used instead of the |
1181 | + # service hostname set in the relations (config options have precedence over relations) |
1182 | + self.harness.update_config({"service-hostname": "lish.internal"}) |
1183 | + conf_or_rels = self.harness.charm._all_config_or_relations |
1184 | + self.assertEqual(conf_or_rels[0]._service_hostname, "lish.internal") |
1185 | + self.assertEqual(conf_or_rels[1]._service_hostname, "lish.internal") |
1186 | + |
1187 | + # Reset the service hostname config option and expect that the original service hostnames |
1188 | + # are used instead. |
1189 | + self.harness.update_config({"service-hostname": ""}) |
1190 | + conf_or_rels = self.harness.charm._all_config_or_relations |
1191 | + self.assertEqual(conf_or_rels[0]._service_hostname, "foo.in.ternal") |
1192 | + self.assertEqual(conf_or_rels[1]._service_hostname, "foo.in.ternal") |
1193 | + |
1194 | + # Check that the service name, service port, and path routes come from relation data, |
1195 | + # not config options. |
1196 | + self.harness.update_config( |
1197 | + { |
1198 | + "service-name": "foo", |
1199 | + "service-port": "1111", |
1200 | + "path-routes": "/foo", |
1201 | + } |
1202 | + ) |
1203 | + |
1204 | + conf_or_rels = self.harness.charm._all_config_or_relations |
1205 | + self.assertEqual(conf_or_rels[0]._service_name, "gunicorn") |
1206 | + self.assertEqual(conf_or_rels[0]._service_port, 80) |
1207 | + self.assertEqual(conf_or_rels[0]._path_routes, ["/gunicorn"]) |
1208 | + |
1209 | + # Remove the relations and assert that there are no relations to be found. |
1210 | + self.harness.remove_relation(rel_id1) |
1211 | + self.harness.remove_relation(rel_id2) |
1212 | + |
1213 | + self.assertEqual(0, len(self.harness.charm.model.relations['ingress'])) |
1214 | + |
1215 | + @patch('charm.NginxIngressCharm._report_service_ips') |
1216 | + @patch('charm.NginxIngressCharm._remove_ingress') |
1217 | + @patch('charm.NginxIngressCharm._define_ingress') |
1218 | + @patch('charm._core_v1_api') |
1219 | + def test_services_for_multiple_relations( |
1220 | + self, mock_api, mock_define_ingress, mock_remove_ingress, mock_report_ips |
1221 | + ): |
1222 | + """Test for checking Service creation / deletion for multiple relations.""" |
1223 | + # Setting the leader to True will allow us to test the Service creation. |
1224 | + self.harness.set_leader(True) |
1225 | + self.harness.charm._authed = True |
1226 | + |
1227 | + mock_report_ips.return_value = ["10.0.1.12"] |
1228 | + mock_list_services = mock_api.return_value.list_namespaced_service |
1229 | + # We'll consider we don't have any service set yet. |
1230 | + mock_list_services.return_value.items = [] |
1231 | + |
1232 | + # Add the first relation. |
1233 | + rel_data = { |
1234 | + "service-name": "gunicorn", |
1235 | + "service-hostname": "foo.in.ternal", |
1236 | + "service-port": "80", |
1237 | + } |
1238 | + self._add_ingress_relation('gunicorn', rel_data) |
1239 | + |
1240 | + conf_or_rels = self.harness.charm._all_config_or_relations |
1241 | + mock_create_service = mock_api.return_value.create_namespaced_service |
1242 | + mock_create_service.assert_called_once_with( |
1243 | + namespace=self.harness.charm._namespace, |
1244 | + body=conf_or_rels[0]._get_k8s_service(), |
1245 | + ) |
1246 | + |
1247 | + # Reset the create service mock, and add a second relation. Expect the first service to |
1248 | + # be updated, and the second one to be created. |
1249 | + mock_create_service.reset_mock() |
1250 | + mock_service1 = MagicMock() |
1251 | + mock_service1.metadata.name = "gunicorn-service" |
1252 | + mock_list_services.return_value.items = [mock_service1] |
1253 | + |
1254 | + rel_data = { |
1255 | + "service-name": "funicorn", |
1256 | + "service-hostname": "foo.in.ternal", |
1257 | + "service-port": "8080", |
1258 | + } |
1259 | + self._add_ingress_relation('funicorn', rel_data) |
1260 | + |
1261 | + conf_or_rels = self.harness.charm._all_config_or_relations |
1262 | + mock_patch_service = mock_api.return_value.patch_namespaced_service |
1263 | + mock_patch_service.assert_called_once_with( |
1264 | + name=conf_or_rels[0]._k8s_service_name, |
1265 | + namespace=self.harness.charm._namespace, |
1266 | + body=conf_or_rels[0]._get_k8s_service(), |
1267 | + ) |
1268 | + mock_create_service.assert_called_once_with( |
1269 | + namespace=self.harness.charm._namespace, |
1270 | + body=conf_or_rels[1]._get_k8s_service(), |
1271 | + ) |
1272 | + |
1273 | + # Remove the first relation and assert that only the first service is removed. |
1274 | + mock_service2 = MagicMock() |
1275 | + mock_service2.metadata.name = "funicorn-service" |
1276 | + mock_list_services.return_value.items = [mock_service1, mock_service2] |
1277 | + |
1278 | + relation = self.harness.charm.model.relations["ingress"][0] |
1279 | + self.harness.charm.on.ingress_relation_broken.emit(relation) |
1280 | + |
1281 | + mock_delete_service = mock_api.return_value.delete_namespaced_service |
1282 | + mock_delete_service.assert_called_once_with( |
1283 | + name=conf_or_rels[0]._k8s_service_name, |
1284 | + namespace=self.harness.charm._namespace, |
1285 | + ) |
1286 | + |
1287 | + @patch('charm.NginxIngressCharm._report_service_ips') |
1288 | + @patch('charm.NginxIngressCharm._remove_service') |
1289 | + @patch('charm.NginxIngressCharm._define_service') |
1290 | + @patch('charm._networking_v1_api') |
1291 | + def test_ingresses_for_multiple_relations_same_hostname( |
1292 | + self, mock_api, mock_define_service, mock_remove_service, mock_report_ips |
1293 | + ): |
1294 | + """Test for checking Ingress creation / deletion for multiple relations. |
1295 | + |
1296 | + This test will check that the charm will not create multiple Resources for the same |
1297 | + hostname, and that it won't remove the resource if there's still an active relation |
1298 | + using it. |
1299 | + """ |
1300 | + # Setting the leader to True will allow us to test the Ingress creation. |
1301 | + self.harness.set_leader(True) |
1302 | + self.harness.charm._authed = True |
1303 | + |
1304 | + mock_report_ips.return_value = ["10.0.1.12"] |
1305 | + mock_list_ingress = mock_api.return_value.list_namespaced_ingress |
1306 | + # We'll consider we don't have any ingresses set yet. |
1307 | + mock_list_ingress.return_value.items = [] |
1308 | + |
1309 | + # Add the first relation. |
1310 | + rel_data = { |
1311 | + "service-name": "gunicorn", |
1312 | + "service-hostname": "foo.in.ternal", |
1313 | + "service-port": "80", |
1314 | + } |
1315 | + rel_id1 = self._add_ingress_relation('gunicorn', rel_data) |
1316 | + |
1317 | + conf_or_rels = self.harness.charm._all_config_or_relations |
1318 | + mock_create_ingress = mock_api.return_value.create_namespaced_ingress |
1319 | + |
1320 | + # Since we only have one relation, the merged ingress rule should be the same as before |
1321 | + # the merge. |
1322 | + mock_create_ingress.assert_called_once_with( |
1323 | + namespace=self.harness.charm._namespace, |
1324 | + body=conf_or_rels[0]._get_k8s_ingress(), |
1325 | + ) |
1326 | + |
1327 | + # Reset the create ingress mock, and add a second relation. |
1328 | + mock_create_ingress.reset_mock() |
1329 | + mock_ingress1 = MagicMock() |
1330 | + mock_ingress1.metadata.name = "foo-in-ternal-ingress" |
1331 | + mock_list_ingress.return_value.items = [mock_ingress1] |
1332 | + |
1333 | + rel_data = { |
1334 | + "service-name": "funicorn", |
1335 | + "service-hostname": "foo.in.ternal", |
1336 | + "service-port": "8080", |
1337 | + # Since it has the same service-hostname as gunicorn, we need a different route. |
1338 | + "path-routes": "/funicorn", |
1339 | + } |
1340 | + rel_id2 = self._add_ingress_relation('funicorn', rel_data) |
1341 | + |
1342 | + # We're expecting that the K8s Ingress Resource will be replaced by one that contains |
1343 | + # paths from both relations. A new one should not have been created. |
1344 | + mock_create_ingress.assert_not_called() |
1345 | + |
1346 | + conf_or_rels = self.harness.charm._all_config_or_relations |
1347 | + expected_body = conf_or_rels[0]._get_k8s_ingress() |
1348 | + second_body = conf_or_rels[1]._get_k8s_ingress() |
1349 | + |
1350 | + expected_body.spec.rules[0].http.paths.extend(second_body.spec.rules[0].http.paths) |
1351 | + mock_replace_ingress = mock_api.return_value.replace_namespaced_ingress |
1352 | + mock_replace_ingress.assert_called_once_with( |
1353 | + name=conf_or_rels[0]._ingress_name, |
1354 | + namespace=self.harness.charm._namespace, |
1355 | + body=expected_body, |
1356 | + ) |
1357 | + |
1358 | + # Remove the first relation and assert that the Kubernetes Ingress Resource was updated |
1359 | + # and not removed. |
1360 | + mock_create_ingress.reset_mock() |
1361 | + mock_replace_ingress.reset_mock() |
1362 | + |
1363 | + self.harness.remove_relation(rel_id1) |
1364 | + |
1365 | + # Assert that the ingress was replaced, not deleted (we still have a relation). |
1366 | + mock_delete_ingress = mock_api.return_value.delete_namespaced_ingress |
1367 | + mock_delete_ingress.assert_not_called() |
1368 | + |
1369 | + conf_or_rels = self.harness.charm._all_config_or_relations |
1370 | + mock_replace_ingress.assert_called_once_with( |
1371 | + name=conf_or_rels[0]._ingress_name, |
1372 | + namespace=self.harness.charm._namespace, |
1373 | + body=second_body, |
1374 | + ) |
1375 | + |
1376 | + # Remove the second relation. This should cause the K8s Ingress Resource to be removed, |
1377 | + # since we no longer have any relations needing it. |
1378 | + mock_replace_ingress.reset_mock() |
1379 | + mock_delete_ingress.reset_mock() |
1380 | + self.harness.remove_relation(rel_id2) |
1381 | + |
1382 | + mock_create_ingress.assert_not_called() |
1383 | + mock_replace_ingress.assert_not_called() |
1384 | + mock_delete_ingress.assert_called_once_with( |
1385 | + conf_or_rels[0]._ingress_name, |
1386 | + self.harness.charm._namespace, |
1387 | + ) |
1388 | + |
1389 | + @patch('charm.NginxIngressCharm._report_service_ips') |
1390 | + @patch('charm.NginxIngressCharm._remove_service') |
1391 | + @patch('charm.NginxIngressCharm._define_service') |
1392 | + @patch('charm._networking_v1_api') |
1393 | + def test_ingresses_for_multiple_relations_different_hostnames( |
1394 | + self, mock_api, mock_define_service, mock_remove_service, mock_report_ips |
1395 | + ): |
1396 | + """Test for checking Ingress creation / deletion for multiple relations. |
1397 | + |
1398 | + This test will check that the charm will create multiple Resources for different hostnames. |
1399 | + """ |
1400 | + # Setting the leader to True will allow us to test the Ingress creation. |
1401 | + self.harness.set_leader(True) |
1402 | + self.harness.charm._authed = True |
1403 | + |
1404 | + mock_report_ips.return_value = ["10.0.1.12"] |
1405 | + mock_list_ingress = mock_api.return_value.list_namespaced_ingress |
1406 | + # We'll consider we don't have any ingresses set yet. |
1407 | + mock_list_ingress.return_value.items = [] |
1408 | + |
1409 | + # Add the first relation. |
1410 | + rel_data = { |
1411 | + "service-name": "gunicorn", |
1412 | + "service-hostname": "foo.in.ternal", |
1413 | + "service-port": "80", |
1414 | + } |
1415 | + rel_id1 = self._add_ingress_relation('gunicorn', rel_data) |
1416 | + |
1417 | + # Since we only have one relation, the merged ingress rule should be the same as before |
1418 | + # the merge. |
1419 | + conf_or_rels = self.harness.charm._all_config_or_relations |
1420 | + mock_create_ingress = mock_api.return_value.create_namespaced_ingress |
1421 | + mock_create_ingress.assert_called_once_with( |
1422 | + namespace=self.harness.charm._namespace, |
1423 | + body=conf_or_rels[0]._get_k8s_ingress(), |
1424 | + ) |
1425 | + |
1426 | + # Reset the create ingress mock, and add a second relation with a different |
1427 | + # service-hostname. A different K8s Ingress Resource should be created. |
1428 | + mock_create_ingress.reset_mock() |
1429 | + mock_ingress1 = MagicMock() |
1430 | + mock_ingress1.metadata.name = "foo-in-ternal-ingress" |
1431 | + mock_list_ingress.return_value.items = [mock_ingress1] |
1432 | + |
1433 | + rel_data = { |
1434 | + "service-name": "punicorn", |
1435 | + "service-hostname": "lish.in.ternal", |
1436 | + "service-port": "9090", |
1437 | + } |
1438 | + rel_id2 = self._add_ingress_relation('punicorn', rel_data) |
1439 | + |
1440 | + # We're expecting that the first K8s Ingress Resource will be updated, but it will not |
1441 | + # change, and that a new K8s Ingress Resource will be created for the new relation, |
1442 | + # since it has a different service-hostname. |
1443 | + conf_or_rels = self.harness.charm._all_config_or_relations |
1444 | + mock_create_ingress.assert_called_once_with( |
1445 | + namespace=self.harness.charm._namespace, |
1446 | + body=conf_or_rels[1]._get_k8s_ingress(), |
1447 | + ) |
1448 | + mock_replace_ingress = mock_api.return_value.replace_namespaced_ingress |
1449 | + mock_replace_ingress.assert_called_once_with( |
1450 | + name=conf_or_rels[0]._ingress_name, |
1451 | + namespace=self.harness.charm._namespace, |
1452 | + body=conf_or_rels[0]._get_k8s_ingress(), |
1453 | + ) |
1454 | + |
1455 | + # Remove the first relation and assert that only the first ingress is removed. |
1456 | + mock_ingress2 = MagicMock() |
1457 | + mock_ingress2.metadata.name = "lish-in-ternal-ingress" |
1458 | + mock_list_ingress.return_value.items = [mock_ingress1, mock_ingress2] |
1459 | + mock_create_ingress.reset_mock() |
1460 | + mock_replace_ingress.reset_mock() |
1461 | + self.harness.remove_relation(rel_id1) |
1462 | + |
1463 | + # Assert that only the ingress for the first relation was removed. |
1464 | + mock_delete_ingress = mock_api.return_value.delete_namespaced_ingress |
1465 | + mock_delete_ingress.assert_called_once_with( |
1466 | + conf_or_rels[0]._ingress_name, |
1467 | + self.harness.charm._namespace, |
1468 | + ) |
1469 | + mock_create_ingress.assert_not_called() |
1470 | + mock_replace_ingress.assert_called_once_with( |
1471 | + name=conf_or_rels[1]._ingress_name, |
1472 | + namespace=self.harness.charm._namespace, |
1473 | + body=conf_or_rels[1]._get_k8s_ingress(), |
1474 | + ) |
1475 | + |
1476 | + # Remove the second relation. |
1477 | + mock_replace_ingress.reset_mock() |
1478 | + mock_delete_ingress.reset_mock() |
1479 | + self.harness.remove_relation(rel_id2) |
1480 | + |
1481 | + mock_delete_ingress.assert_called_once_with( |
1482 | + conf_or_rels[1]._ingress_name, |
1483 | + self.harness.charm._namespace, |
1484 | + ) |
1485 | + mock_create_ingress.assert_not_called() |
1486 | + mock_replace_ingress.assert_not_called() |
1487 | + |
1488 | + @patch('charm.NginxIngressCharm._report_service_ips') |
1489 | + @patch('charm.NginxIngressCharm._define_ingress') |
1490 | + @patch('charm.NginxIngressCharm._define_service') |
1491 | + @patch('charm._networking_v1_api') |
1492 | + def test_ingresses_for_multiple_relations_blocked( |
1493 | + self, mock_api, mock_define_service, mock_define_ingress, mock_report_ips |
1494 | + ): |
1495 | + """Test for checking the Blocked cases for multiple relations.""" |
1496 | + # Setting the leader to True will allow us to test the Ingress creation. |
1497 | + self.harness.set_leader(True) |
1498 | + self.harness.charm._authed = True |
1499 | + |
1500 | + mock_report_ips.return_value = ["10.0.1.12"] |
1501 | + mock_list_ingress = mock_api.return_value.list_namespaced_ingress |
1502 | + # We'll consider we don't have any ingresses set yet. |
1503 | + mock_list_ingress.return_value.items = [] |
1504 | + |
1505 | + # Add the first relation. |
1506 | + rel_data = { |
1507 | + "service-name": "gunicorn", |
1508 | + "service-hostname": "foo.in.ternal", |
1509 | + "service-port": "80", |
1510 | + } |
1511 | + self._add_ingress_relation('gunicorn', rel_data) |
1512 | + |
1513 | + # Add the second relation. It will have the same service-hostname as the first relation. |
1514 | + # It will also have a conflicting annotation, and conflicting route "/", which will cause |
1515 | + # the charm to enter BlockedStatus. |
1516 | + rel_data = { |
1517 | + "service-name": "funicorn", |
1518 | + "service-hostname": "foo.in.ternal", |
1519 | + "service-port": "9090", |
1520 | + "retry-errors": "error,timeout", |
1521 | + } |
1522 | + rel_id = self._add_ingress_relation('funicorn', rel_data) |
1523 | + |
1524 | + expected_status = BlockedStatus( |
1525 | + "Conflicting annotations from relations. Run juju debug-log for details. " |
1526 | + "Set manually via juju config." |
1527 | + ) |
1528 | + self.assertEqual(expected_status, self.harness.charm.unit.status) |
1529 | + |
1530 | + # Override retry-errors through the config option. It should fix the annotation conflict. |
1531 | + self.harness.update_config({"retry-errors": "error,timeout"}) |
1532 | + |
1533 | + # We still have the issue with the duplicate route. |
1534 | + expected_status = BlockedStatus( |
1535 | + "Duplicate route found; cannot add ingress. Run juju debug-log for details." |
1536 | + ) |
1537 | + self.assertEqual(expected_status, self.harness.charm.unit.status) |
1538 | + |
1539 | + # Update the relation data to have a different route. |
1540 | + rel_data["path-routes"] = "/funicorn" |
1541 | + self.harness.update_relation_data(rel_id, "funicorn", rel_data) |
1542 | + |
1543 | + expected_status = ActiveStatus("Ingress with service IP(s): 10.0.1.12") |
1544 | + self.assertEqual(expected_status, self.harness.charm.unit.status) |
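
For readers skimming the new TestCharmMultipleRelations cases: they assert one Kubernetes Ingress Resource per required hostname. When two relations share a hostname, the existing resource is replaced with a body whose paths are the union of both relations' paths (see the replace_namespaced_ingress assertions), while conflicting annotations or duplicate route paths put the charm into BlockedStatus. The following is only an illustrative sketch of that grouping logic, with a hypothetical helper name; the actual implementation lives in src/charm.py, which is outside this part of the diff.

    def merge_ingresses_by_hostname(bodies):
        """Group per-relation V1Ingress bodies by hostname, merging their route paths.

        `bodies` would be the kubernetes.client.V1Ingress objects produced per
        relation (e.g. by _get_k8s_ingress() on each _ConfigOrRelation).  Raises
        ValueError where the charm instead sets BlockedStatus.
        """
        merged = {}
        for body in bodies:
            hostname = body.spec.rules[0].host
            if hostname not in merged:
                # First relation needing this hostname: keep its body as the base resource.
                merged[hostname] = body
                continue
            existing = merged[hostname]
            # One resource per hostname means annotations must agree across relations.
            if existing.metadata.annotations != body.metadata.annotations:
                raise ValueError("Conflicting annotations from relations.")
            known_paths = {p.path for p in existing.spec.rules[0].http.paths}
            for path in body.spec.rules[0].http.paths:
                if path.path in known_paths:
                    raise ValueError("Duplicate route found; cannot add ingress.")
                existing.spec.rules[0].http.paths.append(path)
        return merged

With the data from test_ingresses_for_multiple_relations_same_hostname, the gunicorn and funicorn bodies for foo.in.ternal would collapse into a single resource carrying both path sets, matching the expected_body the test builds by extending spec.rules[0].http.paths.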
This merge proposal is being monitored by mergebot. Change the status to Approved to merge.