Merge ~hloeung/charm-k8s-content-cache:master into charm-k8s-content-cache:master
| Status | Merged |
| --- | --- |
| Approved by | Haw Loeung |
| Approved revision | 14c23416d119df3f694e8d8ff0b35c54c51ba8df |
| Merged at revision | 49946ba05969ee713fb15c8fb8aa297ab2800d9e |
| Proposed branch | ~hloeung/charm-k8s-content-cache:master |
| Merge into | charm-k8s-content-cache:master |
| Prerequisite | ~hloeung/charm-k8s-content-cache:non-charm |
| Diff against target | 791 lines (+689/-60), 4 files modified: config.yaml (+59/-4), dev/null (+0/-36), src/charm.py (+208/-20), tests/test_charm.py (+422/-0) |
| Related bugs | |
| Reviewer | Review Type | Date Requested | Status |
| --- | --- | --- | --- |
| Tom Haddon | | | Approve |
| Facundo Batista (community) | | | Approve |
| Haw Loeung | | | Needs Resubmitting |

Review via email:
This proposal supersedes a proposal from 2020-08-31.
Commit message
Implement charm functionality and add unit tests
Description of the change
Implement charm functionality and add unit tests. The example deployment proxies and caches archive.ubuntu.com.
| ubuntu@
| Model Controller Cloud/Region Version SLA Timestamp
| charm-k8s micro microk8s/localhost 2.8.1 unsupported 03:20:19Z
|
| App Version Status Scale Charm Store Rev OS Address Notes
| content-cache active 1 charm-k8s-
|
| Unit Workload Agent Address Ports Message
| content-cache/42* active idle 10.1.75.118 80/TCP
Generated Nginx config - https:/
Test against the unit itself (10.1.75.118) - https:/
With the access logs:
| ubuntu@
| - - - [08/Sep/
| - - - [08/Sep/
| - - - [08/Sep/
kubectl describe ingress - https:/
Test against the ingress address (10.152.183.163) - https:/
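One detail from the diff worth noting when reading the generated Nginx config: the cache key zone name is derived from the configured site via `_generate_keys_zone` in src/charm.py. A minimal standalone sketch of that derivation (values match the assertions in tests/test_charm.py):

```python
import hashlib


def generate_keys_zone(name: str) -> str:
    """Sketch of the charm's _generate_keys_zone: the first 12 hex
    characters of the site name's MD5 digest, suffixed with '-cache'."""
    return '{}-cache'.format(hashlib.md5(name.encode('UTF-8')).hexdigest()[0:12])


print(generate_keys_zone('mysite.local'))  # 39c631ffb52d-cache
```

Hashing keeps the key zone name short and valid regardless of the site name's length or characters, e.g. 'my-really-really-really-long-site-name.local' maps to '8b79f9e4b3e8-cache'.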
🤖 Canonical IS Merge Bot (canonical-is-mergebot) wrote (posted in a previous version of this proposal):
Barry Price (barryprice) wrote (posted in a previous version of this proposal):
Looks good, left a few inline comments - you could also consider adding a lint target for the shell script(s) included under the docker/ dir - charm-k8s-bind uses shellcheck for this.
Also seems to be missing the standard COPYRIGHT/LICENSE files, plus the boilerplate copyright headers in individual files, but the code itself looks good.
Haw Loeung (hloeung) (posted in a previous version of this proposal):
Tom Haddon (mthaddon) wrote (posted in a previous version of this proposal):
As discussed we'll want to break this up into two MPs, one with non-operator framework changes that we can review/
One comment inline about the name of the charm to deploy, which as discussed means we want to update metadata.yaml so the name of the charm is "content-cache".
Also, just a note that we'll want docstrings on all unit tests for when it comes to that MP.
Thanks!
🤖 Canonical IS Merge Bot (canonical-is-mergebot) wrote:
This merge proposal is being monitored by mergebot. Change the status to Approved to merge.
Haw Loeung (hloeung) wrote (posted in a previous version of this proposal):
Split up.
* charm and associated unit tests:
* The rest of the stuff (non-charm):
Tom Haddon (mthaddon) wrote:
Two very minor comments inline, and then I think we're ready to put this up for review with charmcrafters. Thanks!
Tom Haddon (mthaddon) wrote:
Actually, I think I've misread the ops code comments on this:
use_juju_
then kubernetes charms that haven't previously used local storage and that
are running on a new enough Juju default to controller-side storage,
otherwise local storage is used.
So I think this means it's not needed at this point (there won't be instances deployed that have used local storage already, so it'll default to using controller-side storage). So we don't need the use_juju_
Tom Haddon (mthaddon) wrote:
Lgtm, will add charmcrafters to reviewers
Facundo Batista (facundo) wrote:
Added some minor inline comments, thanks!
Haw Loeung (hloeung):
Haw Loeung (hloeung):
Facundo Batista (facundo) wrote:
Great, thanks!
Tom Haddon (mthaddon) wrote:
Adding an approving comment so this can be merged.
🤖 Canonical IS Merge Bot (canonical-is-mergebot) wrote:
Change successfully merged at revision 49946ba05969ee7
Preview Diff
1 | diff --git a/config.yaml b/config.yaml |
2 | index 1cf64d6..141b70d 100644 |
3 | --- a/config.yaml |
4 | +++ b/config.yaml |
5 | @@ -1,6 +1,61 @@ |
6 | options: |
7 | - sites: |
8 | + image_path: |
9 | type: string |
10 | - description: > |
11 | - YAML-formatted virtual hosts/sites. See the README.md for more details |
12 | - and examples. |
13 | + description: >- |
14 | + The location of the image to use, e.g. "localhost:32000/myimage:latest" |
15 | + |
16 | + This setting is required. |
17 | + image_username: |
18 | + type: string |
19 | + description: >- |
20 | + The username for accessing the registry specified in image_path. |
21 | + default: "" |
22 | + image_password: |
23 | + type: string |
24 | + description: >- |
25 | + The password associated with image_username for accessing the registry |
26 | + specified in image_path. |
27 | + default: "" |
28 | + site: |
29 | + type: string |
30 | + description: >- |
31 | + The site name, e.g. "mysite.local" |
32 | + |
33 | + This setting is required. |
34 | + backend: |
35 | + type: string |
36 | + description: >- |
37 | + The backend to use for site, e.g. "http://mybackend.local:80" |
38 | + |
39 | + This setting is required. |
40 | + cache_inactive_time: |
41 | + type: string |
42 | + description: >- |
43 | + The maximum age/time inactive objects are stored in cache. |
44 | + default: "10m" |
45 | + cache_max_size: |
46 | + type: string |
47 | + description: >- |
48 | + The size of the Nginx storage cache. |
49 | + default: "10G" |
50 | + cache_use_stale: |
51 | + type: string |
52 | + description: >- |
53 | + Determines in which cases a stale cached response can be used |
54 | + during communication with the proxied server. |
55 | + default: "error timeout updating http_500 http_502 http_503 http_504" |
56 | + cache_valid: |
57 | + type: string |
58 | + description: >- |
59 | + Sets caching time for different response codes |
60 | + default: "200 1h" |
61 | + client_max_body_size: |
62 | + type: string |
63 | + description: >- |
64 | + Override max. request body size (default 1m). |
65 | + default: "" |
66 | + tls_secret_name: |
67 | + type: string |
68 | + description: >- |
69 | + The name of the K8s secret to be associated with the ingress resource. |
70 | + default: "" |
71 | diff --git a/src/charm.py b/src/charm.py |
72 | index c5ab80b..171e521 100755 |
73 | --- a/src/charm.py |
74 | +++ b/src/charm.py |
75 | @@ -1,38 +1,226 @@ |
76 | #!/usr/bin/env python3 |
77 | -# Copyright 2020 hloeung |
78 | + |
79 | +# Copyright (C) 2020 Canonical Ltd. |
80 | # See LICENSE file for licensing details. |
81 | |
82 | +import hashlib |
83 | import logging |
84 | |
85 | from ops.charm import CharmBase |
86 | from ops.main import main |
87 | -from ops.framework import StoredState |
88 | +from ops.model import ( |
89 | + ActiveStatus, |
90 | + BlockedStatus, |
91 | + MaintenanceStatus, |
92 | +) |
93 | |
94 | logger = logging.getLogger(__name__) |
95 | |
96 | +CACHE_PATH = '/var/lib/nginx/proxy/cache' |
97 | +CONTAINER_PORT = 80 |
98 | +REQUIRED_JUJU_CONFIGS = ['image_path', 'site', 'backend'] |
99 | |
100 | -class CharmK8SContentCacheCharm(CharmBase): |
101 | - _stored = StoredState() |
102 | |
103 | +class ContentCacheCharm(CharmBase): |
104 | def __init__(self, *args): |
105 | super().__init__(*args) |
106 | + |
107 | + self.framework.observe(self.on.start, self._on_start) |
108 | self.framework.observe(self.on.config_changed, self._on_config_changed) |
109 | - self.framework.observe(self.on.fortune_action, self._on_fortune_action) |
110 | - self._stored.set_default(things=[]) |
111 | - |
112 | - def _on_config_changed(self, _): |
113 | - current = self.model.config["thing"] |
114 | - if current not in self._stored.things: |
115 | - logger.debug("found a new thing: %r", current) |
116 | - self._stored.things.append(current) |
117 | - |
118 | - def _on_fortune_action(self, event): |
119 | - fail = event.params["fail"] |
120 | - if fail: |
121 | - event.fail(fail) |
122 | + self.framework.observe(self.on.leader_elected, self._on_leader_elected) |
123 | + self.framework.observe(self.on.upgrade_charm, self._on_upgrade_charm) |
124 | + |
125 | + def _on_start(self, event) -> None: |
126 | + """Handle pod started.""" |
127 | + self.model.unit.status = ActiveStatus('Started') |
128 | + |
129 | + def _on_config_changed(self, event) -> None: |
130 | + """Check that we're the leader, and if so, configure/set up pod.""" |
131 | + if not self.unit.is_leader(): |
132 | + logger.info('Spec changes ignored by non-leader') |
133 | + self.unit.status = ActiveStatus('Ready') |
134 | + return |
135 | + msg = 'Configuring pod (config-changed)' |
136 | + logger.info(msg) |
137 | + self.model.unit.status = MaintenanceStatus(msg) |
138 | + |
139 | + self.configure_pod(event) |
140 | + |
141 | + def _on_leader_elected(self, event) -> None: |
142 | + """Check that we're the leader, and if so, configure/set up pod.""" |
143 | + msg = 'Configuring pod (leader-elected)' |
144 | + logger.info(msg) |
145 | + self.model.unit.status = MaintenanceStatus(msg) |
146 | + self.configure_pod(event) |
147 | + |
148 | + def _on_upgrade_charm(self, event) -> None: |
149 | + """Check that we're the leader, and if so, configure/set up pod.""" |
150 | + if not self.unit.is_leader(): |
151 | + logger.info('Spec changes ignored by non-leader') |
152 | + self.unit.status = ActiveStatus('Ready') |
153 | + return |
154 | + msg = 'Configuring pod (upgrade-charm)' |
155 | + logger.info(msg) |
156 | + self.model.unit.status = MaintenanceStatus(msg) |
157 | + self.configure_pod(event) |
158 | + |
159 | + def configure_pod(self, event) -> None: |
160 | + """Assemble both K8s ingress and pod spec and apply.""" |
161 | + missing = self._missing_charm_configs() |
162 | + if missing: |
163 | + msg = 'Required config(s) empty: {}'.format(', '.join(sorted(missing))) |
164 | + logger.warning(msg) |
165 | + self.unit.status = BlockedStatus(msg) |
166 | + return |
167 | + |
168 | + msg = 'Assembling K8s ingress spec' |
169 | + logger.info(msg) |
170 | + self.unit.status = MaintenanceStatus(msg) |
171 | + ingress_spec = self._make_k8s_ingress_spec() |
172 | + k8s_resources = {'kubernetesResources': {'ingressResources': ingress_spec}} |
173 | + |
174 | + msg = 'Assembling pod spec' |
175 | + logger.info(msg) |
176 | + self.unit.status = MaintenanceStatus(msg) |
177 | + pod_spec = self._make_pod_spec() |
178 | + |
179 | + msg = 'Setting pod spec' |
180 | + logger.info(msg) |
181 | + self.unit.status = MaintenanceStatus(msg) |
182 | + self.model.pod.set_spec(pod_spec, k8s_resources=k8s_resources) |
183 | + |
184 | + msg = 'Done applying updated pod spec' |
185 | + logger.info(msg) |
186 | + self.unit.status = ActiveStatus('Ready') |
187 | + |
188 | + def _generate_keys_zone(self, name): |
189 | + """Generate hashed name to be used by Nginx's key zone.""" |
190 | + return '{}-cache'.format(hashlib.md5(name.encode('UTF-8')).hexdigest()[0:12]) |
191 | + |
192 | + def _make_k8s_ingress_spec(self) -> list: |
193 | + """Return an assembled K8s ingress spec to be used by pod.set_spec()'s k8s_resources.""" |
194 | + config = self.model.config |
195 | + |
196 | + annotations = {} |
197 | + ingress = { |
198 | + 'name': '{}-ingress'.format(self.app.name), |
199 | + 'spec': { |
200 | + 'rules': [ |
201 | + { |
202 | + 'host': config['site'], |
203 | + 'http': { |
204 | + 'paths': [ |
205 | + {'path': '/', 'backend': {'serviceName': self.app.name, 'servicePort': CONTAINER_PORT}} |
206 | + ], |
207 | + }, |
208 | + } |
209 | + ], |
210 | + }, |
211 | + } |
212 | + |
213 | + client_max_body_size = config.get('client_max_body_size') |
214 | + if client_max_body_size: |
215 | + annotations['nginx.ingress.kubernetes.io/proxy-body-size'] = client_max_body_size |
216 | + |
217 | + tls_secret_name = config.get('tls_secret_name') |
218 | + if tls_secret_name: |
219 | + ingress['spec']['tls'] = [{'hosts': config['site'], 'secretName': tls_secret_name}] |
220 | else: |
221 | - event.set_results({"fortune": "A bug in the code is worth two in the documentation."}) |
222 | + annotations['nginx.ingress.kubernetes.io/ssl-redirect'] = 'false' |
223 | + |
224 | + if annotations: |
225 | + ingress['annotations'] = annotations |
226 | + |
227 | + return [ingress] |
228 | + |
229 | + def _make_pod_spec(self) -> dict: |
230 | + """Return an assembled K8s pod spec with appropriate configs set.""" |
231 | + config = self.model.config |
232 | + |
233 | + image_details = { |
234 | + 'imagePath': config['image_path'], |
235 | + } |
236 | + if config.get('image_username', None): |
237 | + image_details.update({'username': config['image_username'], 'password': config['image_password']}) |
238 | + |
239 | + pod_config = self._make_pod_config() |
240 | + |
241 | + pod_spec = { |
242 | + 'version': 3, # otherwise resources are ignored |
243 | + 'containers': [ |
244 | + { |
245 | + 'name': self.app.name, |
246 | + 'envConfig': pod_config, |
247 | + 'imageDetails': image_details, |
248 | + 'imagePullPolicy': 'Always', |
249 | + 'kubernetes': { |
250 | + 'livenessProbe': { |
251 | + 'httpGet': {'path': '/', 'port': CONTAINER_PORT}, |
252 | + 'initialDelaySeconds': 3, |
253 | + 'periodSeconds': 3, |
254 | + }, |
255 | + 'readinessProbe': { |
256 | + 'httpGet': {'path': '/', 'port': CONTAINER_PORT}, |
257 | + 'initialDelaySeconds': 3, |
258 | + 'periodSeconds': 3, |
259 | + }, |
260 | + }, |
261 | + 'ports': [{'containerPort': CONTAINER_PORT, 'protocol': 'TCP'}], |
262 | + 'volumeConfig': [ |
263 | + { |
264 | + 'name': 'cache-volume', |
265 | + 'mountPath': CACHE_PATH, |
266 | + 'emptyDir': {'sizeLimit': config['cache_max_size']}, |
267 | + } |
268 | + ], |
269 | + } |
270 | + ], |
271 | + } |
272 | + |
273 | + return pod_spec |
274 | + |
275 | + def _make_pod_config(self) -> dict: |
276 | + """Return dict to be used as pod spec's envConfig.""" |
277 | + config = self.model.config |
278 | + |
279 | + client_max_body_size = '1m' |
280 | + if config.get('client_max_body_size'): |
281 | + client_max_body_size = config.get('client_max_body_size') |
282 | + |
283 | + pod_config = { |
284 | + 'NGINX_BACKEND': config['backend'], |
285 | + 'NGINX_CACHE_INACTIVE_TIME': config.get('cache_inactive_time', '10m'), |
286 | + 'NGINX_CACHE_MAX_SIZE': config.get('cache_max_size', '10G'), |
287 | + 'NGINX_CACHE_PATH': CACHE_PATH, |
288 | + 'NGINX_CACHE_USE_STALE': config['cache_use_stale'], |
289 | + 'NGINX_CACHE_VALID': config['cache_valid'], |
290 | + 'NGINX_CLIENT_MAX_BODY_SIZE': client_max_body_size, |
291 | + 'NGINX_KEYS_ZONE': self._generate_keys_zone(config['site']), |
292 | + 'NGINX_SITE_NAME': config['site'], |
293 | + } |
294 | + |
295 | + # https://bugs.launchpad.net/juju/+bug/1894782 |
296 | + config_fields = { |
297 | + "JUJU_NODE_NAME": "spec.nodeName", |
298 | + "JUJU_POD_NAME": "metadata.name", |
299 | + "JUJU_POD_NAMESPACE": "metadata.namespace", |
300 | + "JUJU_POD_IP": "status.podIP", |
301 | + "JUJU_POD_SERVICE_ACCOUNT": "spec.serviceAccountName", |
302 | + } |
303 | + juju_env_config = {k: {"field": {"path": p, "api-version": "v1"}} for k, p in config_fields.items()} |
304 | + pod_config.update(juju_env_config) |
305 | + |
306 | + return pod_config |
307 | + |
308 | + def _missing_charm_configs(self) -> list: |
309 | + """Check and return list of required but missing configs.""" |
310 | + config = self.model.config |
311 | + missing = [setting for setting in REQUIRED_JUJU_CONFIGS if not config[setting]] |
312 | + |
313 | + if config.get('image_username') and not config.get('image_password'): |
314 | + missing.append('image_password') |
315 | + return sorted(missing) |
316 | |
317 | |
318 | -if __name__ == "__main__": |
319 | - main(CharmK8SContentCacheCharm) |
320 | +if __name__ == '__main__': # pragma: no cover |
321 | + main(ContentCacheCharm) |
322 | diff --git a/tests/test_charm.py b/tests/test_charm.py |
323 | new file mode 100644 |
324 | index 0000000..02466ed |
325 | --- /dev/null |
326 | +++ b/tests/test_charm.py |
327 | @@ -0,0 +1,422 @@ |
328 | +# Copyright (C) 2020 Canonical Ltd. |
329 | +# See LICENSE file for licensing details. |
330 | + |
331 | +import copy |
332 | +import unittest |
333 | +from unittest import mock |
334 | + |
335 | +from ops.model import ( |
336 | + ActiveStatus, |
337 | + BlockedStatus, |
338 | + MaintenanceStatus, |
339 | +) |
340 | +from ops.testing import Harness |
341 | +from charm import ContentCacheCharm |
342 | + |
343 | +BASE_CONFIG = { |
344 | + 'image_path': 'localhost:32000/myimage:latest', |
345 | + 'site': 'mysite.local', |
346 | + 'backend': 'http://mybackend.local:80', |
347 | + 'cache_max_size': '10G', |
348 | + 'cache_use_stale': 'error timeout updating http_500 http_502 http_503 http_504', |
349 | + 'cache_valid': '200 1h', |
350 | +} |
351 | +CACHE_PATH = '/var/lib/nginx/proxy/cache' |
352 | +CONTAINER_PORT = 80 |
353 | +JUJU_ENV_CONFIG = { |
354 | + 'JUJU_NODE_NAME': {'field': {'api-version': 'v1', 'path': 'spec.nodeName'}}, |
355 | + 'JUJU_POD_NAME': {'field': {'api-version': 'v1', 'path': 'metadata.name'}}, |
356 | + 'JUJU_POD_NAMESPACE': {'field': {'api-version': 'v1', 'path': 'metadata.namespace'}}, |
357 | + 'JUJU_POD_IP': {'field': {'api-version': 'v1', 'path': 'status.podIP'}}, |
358 | + 'JUJU_POD_SERVICE_ACCOUNT': {'field': {'api-version': 'v1', 'path': 'spec.serviceAccountName'}}, |
359 | +} |
360 | +K8S_RESOURCES_INGRESS_RULES = { |
361 | + 'host': 'mysite.local', |
362 | + 'http': { |
363 | + 'paths': [ |
364 | + { |
365 | + 'backend': {'serviceName': 'content-cache', 'servicePort': 80}, |
366 | + 'path': '/', |
367 | + } |
368 | + ] |
369 | + }, |
370 | +} |
371 | +K8S_RESOURCES_TMPL = { |
372 | + 'kubernetesResources': { |
373 | + 'ingressResources': [ |
374 | + { |
375 | + 'annotations': {'nginx.ingress.kubernetes.io/ssl-redirect': 'false'}, |
376 | + 'name': 'content-cache-ingress', |
377 | + 'spec': { |
378 | + 'rules': [K8S_RESOURCES_INGRESS_RULES], |
379 | + }, |
380 | + } |
381 | + ] |
382 | + } |
383 | +} |
384 | +POD_SPEC_TMPL = { |
385 | + 'version': 3, |
386 | + 'containers': [ |
387 | + { |
388 | + 'name': 'content-cache', |
389 | + 'envConfig': None, |
390 | + 'imageDetails': None, |
391 | + 'imagePullPolicy': 'Always', |
392 | + 'kubernetes': { |
393 | + 'livenessProbe': { |
394 | + 'httpGet': {'path': '/', 'port': CONTAINER_PORT}, |
395 | + 'initialDelaySeconds': 3, |
396 | + 'periodSeconds': 3, |
397 | + }, |
398 | + 'readinessProbe': { |
399 | + 'httpGet': {'path': '/', 'port': CONTAINER_PORT}, |
400 | + 'initialDelaySeconds': 3, |
401 | + 'periodSeconds': 3, |
402 | + }, |
403 | + }, |
404 | + 'ports': [{'containerPort': CONTAINER_PORT, 'protocol': 'TCP'}], |
405 | + 'volumeConfig': None, |
406 | + } |
407 | + ], |
408 | +} |
409 | + |
410 | + |
411 | +class TestCharm(unittest.TestCase): |
412 | + def setUp(self): |
413 | + self.maxDiff = None |
414 | + self.harness = Harness(ContentCacheCharm) |
415 | + |
416 | + def tearDown(self): |
417 | + # starting from ops 0.8, we also need to do: |
418 | + self.addCleanup(self.harness.cleanup) |
419 | + |
420 | + def test_on_start(self): |
421 | + """Test on_start, nothing but setting ActiveStatus.""" |
422 | + harness = self.harness |
423 | + action_event = mock.Mock() |
424 | + |
425 | + harness.begin() |
426 | + harness.charm._on_start(action_event) |
427 | + self.assertEqual(harness.charm.unit.status, ActiveStatus('Started')) |
428 | + |
429 | + @mock.patch('charm.ContentCacheCharm.configure_pod') |
430 | + def test_on_config_changed(self, configure_pod): |
431 | + """Test on_config_changed, ensure configure_pod is called just once.""" |
432 | + harness = self.harness |
433 | + |
434 | + # Intentionally before harness.begin() to avoid firing leadership events. |
435 | + harness.set_leader(True) |
436 | + harness.begin() |
437 | + |
438 | + config = copy.deepcopy(BASE_CONFIG) |
439 | + harness.update_config(config) |
440 | + self.assertEqual(harness.charm.unit.status, MaintenanceStatus('Configuring pod (config-changed)')) |
441 | + configure_pod.assert_called_once() |
442 | + |
443 | + @mock.patch('charm.ContentCacheCharm.configure_pod') |
444 | + def test_on_config_changed_not_leader(self, configure_pod): |
445 | + """Test on_config_changed, not the leader so configure_pod is not called.""" |
446 | + harness = self.harness |
447 | + |
448 | + # Intentionally before harness.begin() to avoid firing leadership events. |
449 | + harness.set_leader(False) |
450 | + harness.begin() |
451 | + |
452 | + config = copy.deepcopy(BASE_CONFIG) |
453 | + harness.update_config(config) |
454 | + self.assertNotEqual(harness.charm.unit.status, MaintenanceStatus('Configuring pod (config-changed)')) |
455 | + configure_pod.assert_not_called() |
456 | + |
457 | + @mock.patch('charm.ContentCacheCharm.configure_pod') |
458 | + def test_on_leader_elected(self, configure_pod): |
459 | + """Test on_leader_elected, ensure configure_pod is called just once.""" |
460 | + harness = self.harness |
461 | + |
462 | + harness.begin() |
463 | + # Intentionally after harness.begin() to trigger leadership events. |
464 | + harness.set_leader(True) |
465 | + self.assertEqual(harness.charm.unit.status, MaintenanceStatus('Configuring pod (leader-elected)')) |
466 | + configure_pod.assert_called_once() |
467 | + |
468 | + @mock.patch('charm.ContentCacheCharm.configure_pod') |
469 | + def test_on_leader_elected_not_leader(self, configure_pod): |
470 | + """Test on_leader_elected, not the leader so configure_pod is not called.""" |
471 | + harness = self.harness |
472 | + |
473 | + harness.begin() |
474 | + # Intentionally after harness.begin() to trigger leadership events. |
475 | + harness.set_leader(False) |
476 | + self.assertNotEqual(harness.charm.unit.status, MaintenanceStatus('Configuring pod (leader-elected)')) |
477 | + configure_pod.assert_not_called() |
478 | + |
479 | + @mock.patch('charm.ContentCacheCharm.configure_pod') |
480 | + def test_on_upgrade_charm(self, configure_pod): |
481 | + """Test on_upgrade_charm, ensure configure_pod is called just once.""" |
482 | + harness = self.harness |
483 | + |
484 | + harness.set_leader(True) |
485 | + harness.begin() |
486 | + harness.charm.on.upgrade_charm.emit() |
487 | + self.assertEqual(harness.charm.unit.status, MaintenanceStatus('Configuring pod (upgrade-charm)')) |
488 | + configure_pod.assert_called_once() |
489 | + |
490 | + @mock.patch('charm.ContentCacheCharm.configure_pod') |
491 | + def test_on_upgrade_charm_not_leader(self, configure_pod): |
492 | + """Test on_upgrade_charm, not the leader so configure_pod is not called.""" |
493 | + harness = self.harness |
494 | + |
495 | + harness.set_leader(False) |
496 | + harness.begin() |
497 | + harness.charm.on.upgrade_charm.emit() |
498 | + self.assertNotEqual(harness.charm.unit.status, MaintenanceStatus('Configuring pod (upgrade-charm)')) |
499 | + configure_pod.assert_not_called() |
500 | + |
501 | + @mock.patch('charm.ContentCacheCharm._make_pod_spec') |
502 | + def test_configure_pod(self, make_pod_spec): |
503 | + """Test configure_pod, ensure make_pod_spec is called and the generated pod spec is correct.""" |
504 | + harness = self.harness |
505 | + |
506 | + harness.set_leader(True) |
507 | + harness.begin() |
508 | + |
509 | + config = copy.deepcopy(BASE_CONFIG) |
510 | + harness.update_config(config) |
511 | + make_pod_spec.assert_called_once() |
512 | + self.assertEqual(harness.charm.unit.status, ActiveStatus('Ready')) |
513 | + pod_spec = harness.charm._make_pod_spec() |
514 | + k8s_resources = copy.deepcopy(K8S_RESOURCES_TMPL) |
515 | + self.assertEqual(harness.get_pod_spec(), (pod_spec, k8s_resources)) |
516 | + |
517 | + @mock.patch('charm.ContentCacheCharm._make_pod_spec') |
518 | + def test_configure_pod_missing_configs(self, make_pod_spec): |
519 | + """Test configure_pod, missing configs so ensure make_pod_spec is not called and no pod spec set.""" |
520 | + harness = self.harness |
521 | + |
522 | + harness.set_leader(True) |
523 | + harness.begin() |
524 | + |
525 | + config = copy.deepcopy(BASE_CONFIG) |
526 | + config['site'] = None |
527 | + harness.update_config(config) |
528 | + make_pod_spec.assert_not_called() |
529 | + self.assertEqual(harness.charm.unit.status, BlockedStatus('Required config(s) empty: site')) |
530 | + self.assertEqual(harness.get_pod_spec(), None) |
531 | + |
532 | + def test_generate_keys_zone(self): |
533 | + """Test generating hashed name for Nginx's cache key zone.""" |
534 | + harness = self.harness |
535 | + |
536 | + harness.disable_hooks() |
537 | + harness.begin() |
538 | + |
539 | + expected = '39c631ffb52d-cache' |
540 | + self.assertEqual(harness.charm._generate_keys_zone('mysite.local'), expected) |
541 | + expected = '8b79f9e4b3e8-cache' |
542 | + self.assertEqual(harness.charm._generate_keys_zone('my-really-really-really-long-site-name.local'), expected) |
543 | + expected = 'd41d8cd98f00-cache' |
544 | + self.assertEqual(harness.charm._generate_keys_zone(''), expected) |
545 | + |
546 | + def test_make_k8s_ingress_spec(self): |
547 | + """Test generation of K8s ingress spec and ensure it is correct.""" |
548 | + harness = self.harness |
549 | + |
550 | + harness.disable_hooks() |
551 | + harness.begin() |
552 | + |
553 | + config = copy.deepcopy(BASE_CONFIG) |
554 | + harness.update_config(config) |
555 | + k8s_resources = copy.deepcopy(K8S_RESOURCES_TMPL) |
556 | + expected = k8s_resources['kubernetesResources']['ingressResources'] |
557 | + self.assertEqual(harness.charm._make_k8s_ingress_spec(), expected) |
558 | + |
559 | + def test_make_k8s_ingress_spec_client_max_body_size(self): |
560 | + """Test charm config's client_max_body_size with correct annotation in generated K8s ingress spec.""" |
561 | + harness = self.harness |
562 | + |
563 | + harness.disable_hooks() |
564 | + harness.begin() |
565 | + |
566 | + config = copy.deepcopy(BASE_CONFIG) |
567 | + config['client_max_body_size'] = '32m' |
568 | + harness.update_config(config) |
569 | + k8s_resources = copy.deepcopy(K8S_RESOURCES_TMPL) |
570 | + t = k8s_resources['kubernetesResources']['ingressResources'][0]['annotations'] |
571 | + t['nginx.ingress.kubernetes.io/proxy-body-size'] = '32m' |
572 | + expected = k8s_resources['kubernetesResources']['ingressResources'] |
573 | + self.assertEqual(harness.charm._make_k8s_ingress_spec(), expected) |
574 | + |
575 | + def test_make_k8s_ingress_spec_tls_secrets(self): |
576 | + """Test charm config's tls_secret_name with TLS/SSL enabled in generated K8s ingress spec.""" |
577 | + harness = self.harness |
578 | + |
579 | + harness.disable_hooks() |
580 | + harness.begin() |
581 | + |
582 | + config = copy.deepcopy(BASE_CONFIG) |
583 | + config['tls_secret_name'] = '{}-tls'.format(config['site']) |
584 | + harness.update_config(config) |
585 | + k8s_resources = copy.deepcopy(K8S_RESOURCES_TMPL) |
586 | + t = k8s_resources['kubernetesResources']['ingressResources'][0] |
587 | + t.pop('annotations') |
588 | + t['spec']['tls'] = [{'hosts': 'mysite.local', 'secretName': 'mysite.local-tls'}] |
589 | + expected = k8s_resources['kubernetesResources']['ingressResources'] |
590 | + self.assertEqual(harness.charm._make_k8s_ingress_spec(), expected) |
591 | + |
592 | + def test_make_pod_spec(self): |
593 | + """Test make_pod_spec, ensure correct spec and is applied and returned by operator's get_pod_spec.""" |
594 | + harness = self.harness |
595 | + |
596 | + harness.set_leader(True) |
597 | + harness.begin() |
598 | + |
599 | + config = copy.deepcopy(BASE_CONFIG) |
600 | + harness.update_config(config) |
601 | + spec = copy.deepcopy(POD_SPEC_TMPL) |
602 | + t = spec['containers'][0] |
603 | + t['envConfig'] = harness.charm._make_pod_config() |
604 | + t['imageDetails'] = {'imagePath': 'localhost:32000/myimage:latest'} |
605 | + t['volumeConfig'] = [ |
606 | + {'name': 'cache-volume', 'mountPath': '/var/lib/nginx/proxy/cache', 'emptyDir': {'sizeLimit': '10G'}} |
607 | + ] |
608 | + k8s_resources = copy.deepcopy(K8S_RESOURCES_TMPL) |
609 | + expected = (spec, k8s_resources) |
610 | + self.assertEqual(harness.get_pod_spec(), expected) |
611 | + |
612 | + def test_make_pod_spec_image_username(self): |
613 | + """Test charm config image_username/image_password, ensure correct pod spec with details to pod image.""" |
614 | + harness = self.harness |
615 | + |
616 | + harness.set_leader(True) |
617 | + harness.begin() |
618 | + |
619 | + config = copy.deepcopy(BASE_CONFIG) |
620 | + config['image_username'] = 'myuser' |
621 | + config['image_password'] = 'mypassword' |
622 | + harness.update_config(config) |
623 | + spec = copy.deepcopy(POD_SPEC_TMPL) |
624 | + t = spec['containers'][0] |
625 | + t['envConfig'] = harness.charm._make_pod_config() |
626 | + t['imageDetails'] = { |
627 | + 'imagePath': 'localhost:32000/myimage:latest', |
628 | + 'username': 'myuser', |
629 | + 'password': 'mypassword', |
630 | + } |
+        t['volumeConfig'] = [
+            {'name': 'cache-volume', 'mountPath': '/var/lib/nginx/proxy/cache', 'emptyDir': {'sizeLimit': '10G'}}
+        ]
+        k8s_resources = copy.deepcopy(K8S_RESOURCES_TMPL)
+        expected = (spec, k8s_resources)
+        self.assertEqual(harness.get_pod_spec(), expected)
+
+    def test_make_pod_spec_cache_max_size(self):
+        """Test charm config's cache_max_size, ensure correct pod spec utilising volumeConfig with size limit."""
+        harness = self.harness
+
+        harness.set_leader(True)
+        harness.begin()
+
+        config = copy.deepcopy(BASE_CONFIG)
+        config['cache_max_size'] = '201G'
+        harness.update_config(config)
+        spec = copy.deepcopy(POD_SPEC_TMPL)
+        t = spec['containers'][0]
+        t['envConfig'] = harness.charm._make_pod_config()
+        t['imageDetails'] = {'imagePath': 'localhost:32000/myimage:latest'}
+        t['volumeConfig'] = [
+            {'name': 'cache-volume', 'mountPath': '/var/lib/nginx/proxy/cache', 'emptyDir': {'sizeLimit': '201G'}}
+        ]
+        k8s_resources = copy.deepcopy(K8S_RESOURCES_TMPL)
+        expected = (spec, k8s_resources)
+        self.assertEqual(harness.get_pod_spec(), expected)
+
+    def test_make_pod_config(self):
+        """Test make_pod_config, ensure envConfig returned is correct."""
+        harness = self.harness
+
+        harness.disable_hooks()
+        harness.begin()
+
+        config = copy.deepcopy(BASE_CONFIG)
+        harness.update_config(config)
+        expected = {
+            'NGINX_BACKEND': 'http://mybackend.local:80',
+            'NGINX_CACHE_INACTIVE_TIME': '10m',
+            'NGINX_CACHE_MAX_SIZE': '10G',
+            'NGINX_CACHE_PATH': CACHE_PATH,
+            'NGINX_CACHE_USE_STALE': 'error timeout updating http_500 http_502 http_503 http_504',
+            'NGINX_CACHE_VALID': '200 1h',
+            'NGINX_CLIENT_MAX_BODY_SIZE': '1m',
+            'NGINX_KEYS_ZONE': '39c631ffb52d-cache',
+            'NGINX_SITE_NAME': 'mysite.local',
+        }
+        expected.update(JUJU_ENV_CONFIG)
+        self.assertEqual(harness.charm._make_pod_config(), expected)
+
+    def test_make_pod_config_client_max_body_size(self):
+        """Test make_pod_config with charm config client_max_body_size, ensure envConfig returned is correct."""
+        harness = self.harness
+
+        harness.disable_hooks()
+        harness.begin()
+
+        config = copy.deepcopy(BASE_CONFIG)
+        config['client_max_body_size'] = '50m'
+        harness.update_config(config)
+        expected = {
+            'NGINX_BACKEND': 'http://mybackend.local:80',
+            'NGINX_CACHE_INACTIVE_TIME': '10m',
+            'NGINX_CACHE_MAX_SIZE': '10G',
+            'NGINX_CACHE_PATH': CACHE_PATH,
+            'NGINX_CACHE_USE_STALE': 'error timeout updating http_500 http_502 http_503 http_504',
+            'NGINX_CACHE_VALID': '200 1h',
+            'NGINX_CLIENT_MAX_BODY_SIZE': '50m',
+            'NGINX_KEYS_ZONE': '39c631ffb52d-cache',
+            'NGINX_SITE_NAME': 'mysite.local',
+        }
+        expected.update(JUJU_ENV_CONFIG)
+        self.assertEqual(harness.charm._make_pod_config(), expected)
+
+    def test_missing_charm_configs(self):
+        """Test missing_charm_config, ensure required configs present and return those missing."""
+        harness = self.harness
+
+        harness.disable_hooks()
+        harness.begin()
+
+        # None missing.
+        config = copy.deepcopy(BASE_CONFIG)
+        harness.update_config(config)
+        expected = []
+        self.assertEqual(harness.charm._missing_charm_configs(), expected)
+
+        # One missing.
+        config = copy.deepcopy(BASE_CONFIG)
+        config['site'] = None
+        harness.update_config(config)
+        expected = ['site']
+        self.assertEqual(harness.charm._missing_charm_configs(), expected)
+
+        # More than one missing.
+        config = copy.deepcopy(BASE_CONFIG)
+        config['image_path'] = None
+        config['site'] = None
+        harness.update_config(config)
+        expected = ['image_path', 'site']
+        self.assertEqual(harness.charm._missing_charm_configs(), expected)
+
+        # All missing, should be sorted.
+        config = copy.deepcopy(BASE_CONFIG)
+        config['image_path'] = None
+        config['image_username'] = 'myuser'
+        config['backend'] = None
+        config['site'] = None
+        harness.update_config(config)
+        expected = ['backend', 'image_password', 'image_path', 'site']
+        self.assertEqual(harness.charm._missing_charm_configs(), expected)
+
+        # image_password missing when image_username present.
+        config = copy.deepcopy(BASE_CONFIG)
+        config['image_username'] = 'myuser'
+        harness.update_config(config)
+        expected = ['image_password']
+        self.assertEqual(harness.charm._missing_charm_configs(), expected)
diff --git a/tests/unit/test_charm.py b/tests/unit/test_charm.py
deleted file mode 100644
index 2f3db20..0000000
--- a/tests/unit/test_charm.py
+++ /dev/null
@@ -1,36 +0,0 @@
-# Copyright 2020 hloeung
-# See LICENSE file for licensing details.
-
-import unittest
-from unittest.mock import Mock
-
-from ops.testing import Harness
-from charm import CharmK8SContentCacheCharm
-
-
-class TestCharm(unittest.TestCase):
-    def test_config_changed(self):
-        harness = Harness(CharmK8SContentCacheCharm)
-        # from 0.8 you should also do:
-        # self.addCleanup(harness.cleanup)
-        harness.begin()
-        self.assertEqual(list(harness.charm._stored.things), [])
-        harness.update_config({"thing": "foo"})
-        self.assertEqual(list(harness.charm._stored.things), ["foo"])
-
-    def test_action(self):
-        harness = Harness(CharmK8SContentCacheCharm)
-        harness.begin()
-        # the harness doesn't (yet!) help much with actions themselves
-        action_event = Mock(params={"fail": ""})
-        harness.charm._on_fortune_action(action_event)
-
-        self.assertTrue(action_event.set_results.called)
-
-    def test_action_fail(self):
-        harness = Harness(CharmK8SContentCacheCharm)
-        harness.begin()
-        action_event = Mock(params={"fail": "fail this"})
-        harness.charm._on_fortune_action(action_event)
-
-        self.assertEqual(action_event.fail.call_args, [("fail this",)])
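The required-config behaviour exercised by `test_missing_charm_configs` above can be sketched as a standalone function. This is a hypothetical reimplementation inferred purely from the test expectations, not the charm's actual `_missing_charm_configs` code:

```python
def missing_charm_configs(config):
    """Return a sorted list of required charm config keys that are unset.

    Assumption drawn from the test expectations: 'backend', 'image_path'
    and 'site' are always required, and 'image_password' becomes required
    once 'image_username' is set.
    """
    required = ['backend', 'image_path', 'site']
    if config.get('image_username'):
        required.append('image_password')
    # Sorted so the status message listing missing keys is deterministic.
    return sorted(k for k in required if not config.get(k))


# Mirrors the BASE_CONFIG values used by the tests above.
base = {
    'backend': 'http://mybackend.local:80',
    'image_path': 'localhost:32000/myimage:latest',
    'site': 'mysite.local',
}
print(missing_charm_configs(base))                                  # []
print(missing_charm_configs({**base, 'image_username': 'myuser'}))  # ['image_password']
print(missing_charm_configs({'image_username': 'myuser'}))
# ['backend', 'image_password', 'image_path', 'site']
```

The last case matches the "All missing, should be sorted" expectation in the diff: `image_password` sorts between `backend` and `image_path`.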
This merge proposal is being monitored by mergebot. Change the status to Approved to merge.