Merge ~hloeung/charm-k8s-content-cache:master into charm-k8s-content-cache:master
Status: Superseded
Proposed branch: ~hloeung/charm-k8s-content-cache:master
Merge into: charm-k8s-content-cache:master
Diff against target: 1043 lines (+838/-72), 16 files modified:
- .gitignore (+8/-0)
- .jujuignore (+12/-2)
- COPYRIGHT (+16/-0)
- Makefile (+24/-0)
- README.md (+18/-12)
- config.yaml (+58/-3)
- dev/null (+0/-36)
- docker/Dockerfile (+27/-0)
- docker/entrypoint.sh (+22/-0)
- docker/files/nginx-logging-format.conf (+4/-0)
- docker/templates/nginx_cfg.tmpl (+32/-0)
- metadata.yaml (+1/-1)
- src/charm.py (+190/-18)
- tests/requirements.txt (+4/-0)
- tests/test_charm.py (+385/-0)
- tox.ini (+37/-0)
Related bugs: none listed
Reviewer | Review Type | Date Requested | Status
---|---|---|---
Canonical IS Reviewers | Pending | |
Content Cache Charmers | Pending | |
Review via email: mp+389999@code.launchpad.net
This proposal has been superseded by a proposal from 2020-09-08.
Commit message
Implement initial charm functionality and add unit tests
Description of the change
It works. An example deployment proxying/caching archive.ubuntu.com is shown below.
| ubuntu@
| Model Controller Cloud/Region Version SLA Timestamp
| charm-k8s micro microk8s/localhost 2.8.1 unsupported 03:20:19Z
|
| App Version Status Scale Charm Store Rev OS Address Notes
| content-cache active 1 charm-k8s-
|
| Unit Workload Agent Address Ports Message
| content-cache/42* active idle 10.1.75.118 80/TCP
Generated Nginx config - https:/
Test against the unit itself (10.1.75.118) - https:/
With the access logs:
| ubuntu@
| - - - [08/Sep/
| - - - [08/Sep/
| - - - [08/Sep/
kubectl describe ingress - https:/
Test against the ingress address (10.152.183.163) - https:/
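The test URLs above are truncated on this page, but a smoke test along these lines is presumably what was run (a sketch only — the addresses come from the juju status output above, and the request path is a hypothetical example):

| # Hit the unit directly (10.1.75.118), sending the configured site as the Host header:
| curl -s -o /dev/null -D - -H 'Host: archive.ubuntu.com' http://10.1.75.118/ubuntu/dists/focal/Release
|
| # Hit the ingress (10.152.183.163); --resolve pins archive.ubuntu.com to the ingress IP:
| curl -s -o /dev/null -D - --resolve archive.ubuntu.com:80:10.152.183.163 http://archive.ubuntu.com/ubuntu/dists/focal/Release
|
| # Repeating a request should flip the X-Cache-Status response header (added by the
| # generated Nginx config) from MISS to HIT.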
🤖 Canonical IS Merge Bot (canonical-is-mergebot) wrote:
- f29809e... by Haw Loeung: Add additional logging
- 8ababba... by Haw Loeung: Update / set appropriate status messages when active
Barry Price (barryprice) wrote:
Looks good, left a few inline comments - you could also consider adding a lint target for the shell script(s) included under the docker/ dir - charm-k8s-bind uses shellcheck for this (see the sketch below).
Also seems to be missing the standard COPYRIGHT/LICENSE files, plus the boilerplate copyright headers in individual files, but the code itself looks good.
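A minimal lint pass along those lines might look like this (a sketch only — charm-k8s-bind's actual target may differ, and this assumes shellcheck is installed):

| # Run shellcheck over the shell script(s) under docker/:
| shellcheck docker/*.sh

This could then be wired into the Makefile as a dependency of the existing lint target.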
- b11a529... by Haw Loeung: Fixed based on review
Haw Loeung (hloeung):
- 9946b92... by Haw Loeung: Work around lack of JUJU_UNIT - LP:1894782
- 4b3c7ec... by Haw Loeung: README, README now.
- 4e660a4... by Haw Loeung: shellcheck fixes per review
- 53a009b... by Haw Loeung: Fixed comment and reference right LICENSE file
- d98e4e5... by Haw Loeung: Rename charm name removing references to charm and k8s so consistent with other K8s charms
Tom Haddon (mthaddon) wrote:
As discussed, we'll want to break this up into two MPs, one with non-operator framework changes that we can review/
One comment inline about the name of the charm to deploy, which, as discussed, means we want to update metadata.yaml so the name of the charm is "content-cache".
Also, just a note that we'll want docstrings on all unit tests for when it comes to that MP.
Thanks!
Haw Loeung (hloeung) wrote:
Split up.
* charm and associated unit tests:
* The rest of the stuff (non-charm):
Unmerged commits
- d98e4e5... by Haw Loeung: Rename charm name removing references to charm and k8s so consistent with other K8s charms
- 53a009b... by Haw Loeung: Fixed comment and reference right LICENSE file
- 4e660a4... by Haw Loeung: shellcheck fixes per review
- 4b3c7ec... by Haw Loeung: README, README now.
- 9946b92... by Haw Loeung: Work around lack of JUJU_UNIT - LP:1894782
- b11a529... by Haw Loeung: Fixed based on review
- 8ababba... by Haw Loeung: Update / set appropriate status messages when active
- f29809e... by Haw Loeung: Add additional logging
- 507feda... by Haw Loeung: Keep it simple, map back to Nginx' proxy_pass which takes a URL
- baf5812... by Haw Loeung: Allow overriding client_max_body_size / nginx.ingress.kubernetes.io/proxy-body-size
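As a rough illustration of that last commit: the client_max_body_size option flows through to both the rendered Nginx vhost and the ingress annotation, so the override could be exercised roughly like so (example values only; namespace flags omitted):

| # Bump the request body limit; the charm re-renders the pod spec and ingress.
| juju config content-cache client_max_body_size=32m
|
| # The ingress should now carry the nginx.ingress.kubernetes.io/proxy-body-size: 32m annotation.
| kubectl describe ingress content-cache-ingress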
Preview Diff
1 | diff --git a/.gitignore b/.gitignore |
2 | new file mode 100644 |
3 | index 0000000..9759e36 |
4 | --- /dev/null |
5 | +++ b/.gitignore |
6 | @@ -0,0 +1,8 @@ |
7 | +*~ |
8 | +*.charm |
9 | +*.py[cod] |
10 | +*.swp |
11 | +.coverage |
12 | +.tox/ |
13 | +build/ |
14 | +__pycache__/ |
15 | diff --git a/.jujuignore b/.jujuignore |
16 | index 6ccd559..4365792 100644 |
17 | --- a/.jujuignore |
18 | +++ b/.jujuignore |
19 | @@ -1,3 +1,13 @@ |
20 | -/venv |
21 | -*.py[cod] |
22 | +/venv/ |
23 | +*~ |
24 | *.charm |
25 | +*.py[cod] |
26 | +*.swp |
27 | +.coverage |
28 | +.gitignore |
29 | +.tox/ |
30 | +Makefile |
31 | +build/ |
32 | +docker/ |
33 | +tests/ |
34 | +__pycache__/ |
35 | diff --git a/COPYRIGHT b/COPYRIGHT |
36 | new file mode 100644 |
37 | index 0000000..1889423 |
38 | --- /dev/null |
39 | +++ b/COPYRIGHT |
40 | @@ -0,0 +1,16 @@ |
41 | +Format: http://dep.debian.net/deps/dep5/ |
42 | + |
43 | +Files: * |
44 | +Copyright: Copyright 2020, Canonical Ltd. |
45 | +License: GPL-3 |
46 | + This program is free software: you can redistribute it and/or modify |
47 | + it under the terms of the GNU General Public License version 3, as |
48 | + published by the Free Software Foundation. |
49 | + . |
50 | + This program is distributed in the hope that it will be useful, |
51 | + but WITHOUT ANY WARRANTY; without even the implied warranties of |
52 | + MERCHANTABILITY, SATISFACTORY QUALITY, or FITNESS FOR A PARTICULAR |
53 | + PURPOSE. See the GNU General Public License for more details. |
54 | + . |
55 | + You should have received a copy of the GNU General Public License |
56 | + along with this program. If not, see <http://www.gnu.org/licenses/>. |
57 | diff --git a/Makefile b/Makefile |
58 | new file mode 100644 |
59 | index 0000000..af2c5d1 |
60 | --- /dev/null |
61 | +++ b/Makefile |
62 | @@ -0,0 +1,24 @@ |
63 | +# Copyright (C) 2020 Canonical Ltd. |
64 | +# See LICENSE file for licensing details. |
65 | + |
66 | +lint: |
67 | + @echo "Normalising python layout with black." |
68 | + @tox -e black |
69 | + @echo "Running flake8" |
70 | + @tox -e lint |
71 | + |
72 | +# We actually use the build directory created by charmcraft, |
73 | +# but the .charm file makes a much more convenient sentinel. |
74 | +unittest: content-cache.charm |
75 | + @tox -e unit |
76 | + |
77 | +test: lint unittest |
78 | + |
79 | +clean: |
80 | + @echo "Cleaning files" |
81 | + @git clean -fXd |
82 | + |
83 | +content-cache.charm: src/*.py requirements.txt |
84 | + charmcraft build |
85 | + |
86 | +.PHONY: lint test unittest clean |
87 | diff --git a/README.md b/README.md |
88 | index aabaf43..3c24b62 100644 |
89 | --- a/README.md |
90 | +++ b/README.md |
91 | @@ -2,27 +2,33 @@ |
92 | |
93 | ## Description |
94 | |
95 | -TODO: fill out the description |
96 | +Deploy content caching layer into K8s. |
97 | |
98 | ## Usage |
99 | |
100 | -TODO: explain how to use the charm |
101 | +Build the docker image: |
102 | + |
103 | + `cd docker` |
104 | + `docker build . -t myimage:v<revision>` |
105 | + `docker tag myimage:v<revision> localhost:32000/myimage:v<revision>` |
106 | + `docker push localhost:32000/myimage:v<revision>` |
107 | + |
108 | +Deploy: |
109 | + |
110 | + `juju deploy content-cache.charm --config image_path=localhost:32000/myimage:v<revision> --config site=mysite.local --config backend=http://mybackend.local:80` |
111 | + |
112 | +### Test Deployment |
113 | + |
114 | +`curl --resolve mysite.local:80:<ingress IP> http://mysite.local` |
115 | |
116 | ### Scale Out Usage |
117 | |
118 | -... |
119 | +Just run `juju add-unit <application name>`. |
120 | |
121 | ## Developing |
122 | |
123 | -Create and activate a virtualenv, |
124 | -and install the development requirements, |
125 | - |
126 | - virtualenv -p python3 venv |
127 | - source venv/bin/activate |
128 | - pip install -r requirements-dev.txt |
129 | +Just run `make lint`. |
130 | |
131 | ## Testing |
132 | |
133 | -Just run `run_tests`: |
134 | - |
135 | - ./run_tests |
136 | +Just run `make unittest`. |
137 | diff --git a/config.yaml b/config.yaml |
138 | index 1cf64d6..e25deb2 100644 |
139 | --- a/config.yaml |
140 | +++ b/config.yaml |
141 | @@ -1,6 +1,61 @@ |
142 | options: |
143 | - sites: |
144 | + image_path: |
145 | type: string |
146 | description: > |
147 | - YAML-formatted virtual hosts/sites. See the README.md for more details |
148 | - and examples. |
149 | + The location of the image to use, e.g. "localhost:32000/myimage:latest" |
150 | + |
151 | + This setting is required. |
152 | + image_username: |
153 | + type: string |
154 | + description: > |
155 | + The username for accessing the registry specified in image_path. |
156 | + default: "" |
157 | + image_password: |
158 | + type: string |
159 | + description: > |
160 | + The password associated with image_username for accessing the registry |
161 | + specified in image_path. |
162 | + default: "" |
163 | + site: |
164 | + type: string |
165 | + description: > |
166 | + The site name, e.g. "mysite.local" |
167 | + |
168 | + This setting is required. |
169 | + backend: |
170 | + type: string |
171 | + description: > |
172 | + The backend to use for site, e.g. "http://mybackend.local:80" |
173 | + |
174 | + This setting is required. |
175 | + cache_inactive_time: |
176 | + type: string |
177 | + description: > |
178 | + The maximum age/time inactive objects are stored in cache. |
179 | + default: "10m" |
180 | + cache_max_size: |
181 | + type: string |
182 | + description: > |
183 | + The size of the Nginx storage cache. |
184 | + default: "10G" |
185 | + cache_use_stale: |
186 | + type: string |
187 | + description: > |
188 | + Determines in which cases a stale cached response can be used |
189 | + during communication with the proxied server. |
190 | + default: "error timeout updating http_500 http_502 http_503 http_504" |
191 | + cache_valid: |
192 | + type: string |
193 | + description: > |
194 | + Sets caching time for different response codes |
195 | + default: "200 1h" |
196 | + client_max_body_size: |
197 | + type: string |
198 | + description: > |
199 | + Override max. request body size (default 1m). |
200 | + default: "" |
201 | + tls_secret_name: |
202 | + type: string |
203 | + description: > |
204 | + The name of the K8s secret to be associated with the ingress resource. |
205 | + default: "" |
206 | diff --git a/docker/Dockerfile b/docker/Dockerfile |
207 | new file mode 100644 |
208 | index 0000000..7f89b21 |
209 | --- /dev/null |
210 | +++ b/docker/Dockerfile |
211 | @@ -0,0 +1,27 @@ |
212 | +# Copyright (C) 2020 Canonical Ltd. |
213 | +# See LICENSE file for licensing details. |
214 | + |
215 | +FROM ubuntu:latest |
216 | + |
217 | +ENV LANG C.UTF-8 |
218 | +ENV DEBIAN_FRONTEND noninteractive |
219 | + |
220 | +# Don't install recommends to keep image small and avoid nasty surprises |
221 | +# e.g. rpcbind being pulled in by nrpe. |
222 | +RUN apt-get -qy update && \ |
223 | + apt-get -qy dist-upgrade --no-install-recommends && \ |
224 | + apt-get -qy install --no-install-recommends nginx-light && \ |
225 | + apt-get -qy clean && \ |
226 | + rm -f /var/lib/apt/lists/*_* |
227 | + |
228 | +RUN mkdir -p /srv/content-cache |
229 | + |
230 | +COPY entrypoint.sh /srv/content-cache |
231 | +COPY files /srv/content-cache/files/ |
232 | +COPY templates /srv/content-cache/templates/ |
233 | + |
234 | +ENTRYPOINT ["/srv/content-cache/entrypoint.sh"] |
235 | + |
236 | +CMD ["nginx", "-g", "daemon off;"] |
237 | + |
238 | +EXPOSE 80 |
239 | diff --git a/docker/entrypoint.sh b/docker/entrypoint.sh |
240 | new file mode 100755 |
241 | index 0000000..1f3fb10 |
242 | --- /dev/null |
243 | +++ b/docker/entrypoint.sh |
244 | @@ -0,0 +1,22 @@ |
245 | +#!/bin/sh |
246 | + |
247 | +# Copyright (C) 2020 Canonical Ltd. |
248 | +# See LICENSE file for licensing details. |
249 | + |
250 | +set -eu |
251 | + |
252 | +# https://pempek.net/articles/2013/07/08/bash-sh-as-template-engine/ |
253 | +render_template() { |
254 | + eval "echo \"$(cat "$1")\"" |
255 | +} |
256 | + |
257 | +cp /srv/content-cache/files/nginx-logging-format.conf /etc/nginx/conf.d/nginx-logging-format.conf |
258 | + |
259 | +# https://bugs.launchpad.net/juju/+bug/1894782 |
260 | +JUJU_UNIT=$(basename /var/lib/juju/tools/unit-* | sed -e 's/^unit-//' -e 's/-\([0-9]\+\)$/\/\1/') |
261 | +export JUJU_UNIT |
262 | + |
263 | +render_template /srv/content-cache/templates/nginx_cfg.tmpl > /etc/nginx/sites-available/default |
264 | + |
265 | +# Run the real command |
266 | +exec "$@" |
267 | diff --git a/docker/files/nginx-logging-format.conf b/docker/files/nginx-logging-format.conf |
268 | new file mode 100644 |
269 | index 0000000..8591b3e |
270 | --- /dev/null |
271 | +++ b/docker/files/nginx-logging-format.conf |
272 | @@ -0,0 +1,4 @@ |
273 | +log_format content_cache '$http_x_forwarded_for - $remote_user [$time_local] ' |
274 | + '"$request" $status $bytes_sent ' |
275 | + '"$http_referer" "$http_user_agent" $request_time ' |
276 | + '$upstream_cache_status $upstream_response_time'; |
277 | diff --git a/docker/templates/nginx_cfg.tmpl b/docker/templates/nginx_cfg.tmpl |
278 | new file mode 100644 |
279 | index 0000000..02694a6 |
280 | --- /dev/null |
281 | +++ b/docker/templates/nginx_cfg.tmpl |
282 | @@ -0,0 +1,32 @@ |
283 | +proxy_cache_path ${NGINX_CACHE_PATH} use_temp_path=off levels=1:2 keys_zone=${NGINX_KEYS_ZONE}:10m inactive=${NGINX_CACHE_INACTIVE_TIME} max_size=${NGINX_CACHE_MAX_SIZE}; |
284 | + |
285 | +server { |
286 | + server_name ${NGINX_SITE_NAME}; |
287 | + listen *:80; |
288 | + |
289 | + client_max_body_size ${NGINX_CLIENT_MAX_BODY_SIZE}; |
290 | + |
291 | + port_in_redirect off; |
292 | + absolute_redirect off; |
293 | + |
294 | + location / { |
295 | + proxy_pass \"${NGINX_BACKEND}\"; |
296 | + proxy_set_header Host \"${NGINX_SITE_NAME}\"; |
297 | + # Removed the following headers to avoid cache poisoning. |
298 | + proxy_set_header Forwarded \"\"; |
299 | + proxy_set_header X-Forwarded-Host \"\"; |
300 | + proxy_set_header X-Forwarded-Port \"\"; |
301 | + proxy_set_header X-Forwarded-Proto \"\"; |
302 | + proxy_set_header X-Forwarded-Scheme \"\"; |
303 | + |
304 | + add_header X-Cache-Status \"\$upstream_cache_status from ${JUJU_UNIT} (${HOSTNAME})\"; |
305 | + |
306 | + proxy_force_ranges on; |
307 | + proxy_cache ${NGINX_KEYS_ZONE}; |
308 | + proxy_cache_use_stale ${NGINX_CACHE_USE_STALE}; |
309 | + proxy_cache_valid ${NGINX_CACHE_VALID}; |
310 | + } |
311 | + |
312 | + access_log /dev/stdout content_cache; |
313 | + error_log /dev/stdout info; |
314 | +} |
315 | diff --git a/metadata.yaml b/metadata.yaml |
316 | index 2874a21..dacd17e 100644 |
317 | --- a/metadata.yaml |
318 | +++ b/metadata.yaml |
319 | @@ -1,4 +1,4 @@ |
320 | -name: charm-k8s-content-cache |
321 | +name: content-cache |
322 | description: | |
323 | Useful for providing local mirrors of HTTP servers and building |
324 | content delivery networks (CDN). |
325 | diff --git a/src/charm.py b/src/charm.py |
326 | index c5ab80b..1c27501 100755 |
327 | --- a/src/charm.py |
328 | +++ b/src/charm.py |
329 | @@ -1,38 +1,210 @@ |
330 | #!/usr/bin/env python3 |
331 | -# Copyright 2020 hloeung |
332 | + |
333 | +# Copyright (C) 2020 Canonical Ltd. |
334 | # See LICENSE file for licensing details. |
335 | |
336 | +import hashlib |
337 | import logging |
338 | |
339 | from ops.charm import CharmBase |
340 | from ops.main import main |
341 | from ops.framework import StoredState |
342 | +from ops.model import ( |
343 | + ActiveStatus, |
344 | + BlockedStatus, |
345 | + MaintenanceStatus, |
346 | +) |
347 | |
348 | logger = logging.getLogger(__name__) |
349 | |
350 | +CACHE_PATH = '/var/lib/nginx/proxy/cache' |
351 | +CONTAINER_PORT = 80 |
352 | +REQUIRED_JUJU_CONFIGS = ['image_path', 'site', 'backend'] |
353 | + |
354 | |
355 | -class CharmK8SContentCacheCharm(CharmBase): |
356 | +class ContentCacheCharm(CharmBase): |
357 | _stored = StoredState() |
358 | |
359 | def __init__(self, *args): |
360 | super().__init__(*args) |
361 | + |
362 | + self.framework.observe(self.on.start, self._on_start) |
363 | self.framework.observe(self.on.config_changed, self._on_config_changed) |
364 | - self.framework.observe(self.on.fortune_action, self._on_fortune_action) |
365 | - self._stored.set_default(things=[]) |
366 | - |
367 | - def _on_config_changed(self, _): |
368 | - current = self.model.config["thing"] |
369 | - if current not in self._stored.things: |
370 | - logger.debug("found a new thing: %r", current) |
371 | - self._stored.things.append(current) |
372 | - |
373 | - def _on_fortune_action(self, event): |
374 | - fail = event.params["fail"] |
375 | - if fail: |
376 | - event.fail(fail) |
377 | + self.framework.observe(self.on.leader_elected, self._on_leader_elected) |
378 | + self.framework.observe(self.on.upgrade_charm, self._on_upgrade_charm) |
379 | + |
380 | + self._stored.set_default() |
381 | + |
382 | + def _on_start(self, event) -> None: |
383 | + self.model.unit.status = ActiveStatus('Started') |
384 | + |
385 | + def _on_config_changed(self, event) -> None: |
386 | + if not self.model.unit.is_leader(): |
387 | + logger.info('Spec changes ignored by non-leader') |
388 | + self.unit.status = ActiveStatus('Ready') |
389 | + return |
390 | + msg = 'Configuring pod (config-changed)' |
391 | + logger.info(msg) |
392 | + self.model.unit.status = MaintenanceStatus(msg) |
393 | + |
394 | + self.configure_pod(event) |
395 | + |
396 | + def _on_leader_elected(self, event) -> None: |
397 | + msg = 'Configuring pod (leader-elected)' |
398 | + logger.info(msg) |
399 | + self.model.unit.status = MaintenanceStatus(msg) |
400 | + self.configure_pod(event) |
401 | + |
402 | + def _on_upgrade_charm(self, event) -> None: |
403 | + if not self.model.unit.is_leader(): |
404 | + logger.info('Spec changes ignored by non-leader') |
405 | + self.unit.status = ActiveStatus('Ready') |
406 | + return |
407 | + msg = 'Configuring pod (upgrade-charm)' |
408 | + logger.info(msg) |
409 | + self.model.unit.status = MaintenanceStatus(msg) |
410 | + self.configure_pod(event) |
411 | + |
412 | + def configure_pod(self, event) -> None: |
413 | + missing = self._missing_charm_configs() |
414 | + if missing: |
415 | + msg = 'Required config(s) empty: {}'.format(', '.join(sorted(missing))) |
416 | + logger.warning(msg) |
417 | + self.unit.status = BlockedStatus(msg) |
418 | + return |
419 | + |
420 | + msg = 'Assembling K8s ingress spec' |
421 | + logger.info(msg) |
422 | + self.unit.status = MaintenanceStatus(msg) |
423 | + ingress_spec = self._make_k8s_ingress_spec() |
424 | + k8s_resources = {'kubernetesResources': {'ingressResources': ingress_spec}} |
425 | + |
426 | + msg = 'Assembling pod spec' |
427 | + logger.info(msg) |
428 | + self.unit.status = MaintenanceStatus(msg) |
429 | + pod_spec = self._make_pod_spec() |
430 | + |
431 | + msg = 'Setting pod spec' |
432 | + logger.info(msg) |
433 | + self.unit.status = MaintenanceStatus(msg) |
434 | + self.model.pod.set_spec(pod_spec, k8s_resources=k8s_resources) |
435 | + |
436 | + msg = 'Done applying updated pod spec' |
437 | + logger.info(msg) |
438 | + self.unit.status = ActiveStatus('Ready') |
439 | + |
440 | + def _generate_keys_zone(self, name): |
441 | + return '{}-cache'.format(hashlib.md5(name.encode('UTF-8')).hexdigest()[0:12]) |
442 | + |
443 | + def _make_k8s_ingress_spec(self) -> list: |
444 | + config = self.model.config |
445 | + |
446 | + annotations = {} |
447 | + ingress = { |
448 | + 'name': '{}-ingress'.format(self.app.name), |
449 | + 'spec': { |
450 | + 'rules': [ |
451 | + { |
452 | + 'host': config['site'], |
453 | + 'http': { |
454 | + 'paths': [ |
455 | + {'path': '/', 'backend': {'serviceName': self.app.name, 'servicePort': CONTAINER_PORT}} |
456 | + ], |
457 | + }, |
458 | + } |
459 | + ], |
460 | + }, |
461 | + } |
462 | + |
463 | + client_max_body_size = config.get('client_max_body_size') |
464 | + if client_max_body_size: |
465 | + annotations['nginx.ingress.kubernetes.io/proxy-body-size'] = client_max_body_size |
466 | + |
467 | + tls_secret_name = config.get('tls_secret_name') |
468 | + if tls_secret_name: |
469 | + ingress['spec']['tls'] = [{'hosts': config['site'], 'secretName': tls_secret_name}] |
470 | else: |
471 | - event.set_results({"fortune": "A bug in the code is worth two in the documentation."}) |
472 | + annotations['nginx.ingress.kubernetes.io/ssl-redirect'] = 'false' |
473 | + |
474 | + if annotations: |
475 | + ingress['annotations'] = annotations |
476 | + |
477 | + return [ingress] |
478 | + |
479 | + def _make_pod_spec(self) -> dict: |
480 | + config = self.model.config |
481 | + |
482 | + image_details = { |
483 | + 'imagePath': config['image_path'], |
484 | + } |
485 | + if config.get('image_username', None): |
486 | + image_details.update({'username': config['image_username'], 'password': config['image_password']}) |
487 | + |
488 | + pod_config = self._make_pod_config() |
489 | + |
490 | + pod_spec = { |
491 | + 'version': 3, # otherwise resources are ignored |
492 | + 'containers': [ |
493 | + { |
494 | + 'name': self.app.name, |
495 | + 'envConfig': pod_config, |
496 | + 'imageDetails': image_details, |
497 | + 'imagePullPolicy': 'Always', |
498 | + 'kubernetes': { |
499 | + 'livenessProbe': { |
500 | + 'httpGet': {'path': '/', 'port': CONTAINER_PORT}, |
501 | + 'initialDelaySeconds': 3, |
502 | + 'periodSeconds': 3, |
503 | + }, |
504 | + 'readinessProbe': { |
505 | + 'httpGet': {'path': '/', 'port': CONTAINER_PORT}, |
506 | + 'initialDelaySeconds': 3, |
507 | + 'periodSeconds': 3, |
508 | + }, |
509 | + }, |
510 | + 'ports': [{'containerPort': CONTAINER_PORT, 'protocol': 'TCP'}], |
511 | + 'volumeConfig': [ |
512 | + { |
513 | + 'name': 'cache-volume', |
514 | + 'mountPath': CACHE_PATH, |
515 | + 'emptyDir': {'sizeLimit': config['cache_max_size']}, |
516 | + } |
517 | + ], |
518 | + } |
519 | + ], |
520 | + } |
521 | + |
522 | + return pod_spec |
523 | + |
524 | + def _make_pod_config(self) -> dict: |
525 | + config = self.model.config |
526 | + |
527 | + client_max_body_size = '1m' |
528 | + if config.get('client_max_body_size', ''): |
529 | + client_max_body_size = config.get('client_max_body_size') |
530 | + |
531 | + pod_config = { |
532 | + 'NGINX_BACKEND': config['backend'], |
533 | + 'NGINX_CACHE_INACTIVE_TIME': config.get('cache_inactive_time', '10m'), |
534 | + 'NGINX_CACHE_MAX_SIZE': config.get('cache_max_size', '10G'), |
535 | + 'NGINX_CACHE_PATH': CACHE_PATH, |
536 | + 'NGINX_CACHE_USE_STALE': config['cache_use_stale'], |
537 | + 'NGINX_CACHE_VALID': config['cache_valid'], |
538 | + 'NGINX_CLIENT_MAX_BODY_SIZE': client_max_body_size, |
539 | + 'NGINX_KEYS_ZONE': self._generate_keys_zone(config['site']), |
540 | + 'NGINX_SITE_NAME': config['site'], |
541 | + } |
542 | + |
543 | + return pod_config |
544 | + |
545 | + def _missing_charm_configs(self) -> list: |
546 | + config = self.model.config |
547 | + missing = [] |
548 | + |
549 | + missing.extend([setting for setting in REQUIRED_JUJU_CONFIGS if not config[setting]]) |
550 | + |
551 | + return sorted(list(set(missing))) |
552 | |
553 | |
554 | -if __name__ == "__main__": |
555 | - main(CharmK8SContentCacheCharm) |
556 | +if __name__ == '__main__': # pragma: no cover |
557 | + main(ContentCacheCharm) |
558 | diff --git a/tests/requirements.txt b/tests/requirements.txt |
559 | new file mode 100644 |
560 | index 0000000..f6c26b8 |
561 | --- /dev/null |
562 | +++ b/tests/requirements.txt |
563 | @@ -0,0 +1,4 @@ |
564 | +freezegun |
565 | +mock |
566 | +pytest |
567 | +pytest-cov |
568 | diff --git a/tests/test_charm.py b/tests/test_charm.py |
569 | new file mode 100644 |
570 | index 0000000..5d962a5 |
571 | --- /dev/null |
572 | +++ b/tests/test_charm.py |
573 | @@ -0,0 +1,385 @@ |
574 | +# Copyright (C) 2020 Canonical Ltd. |
575 | +# See LICENSE file for licensing details. |
576 | + |
577 | +import copy |
578 | +import unittest |
579 | +from unittest import mock |
580 | + |
581 | +from ops.model import ( |
582 | + ActiveStatus, |
583 | + BlockedStatus, |
584 | + MaintenanceStatus, |
585 | +) |
586 | +from ops.testing import Harness |
587 | +from charm import ContentCacheCharm |
588 | + |
589 | +BASE_CONFIG = { |
590 | + 'image_path': 'localhost:32000/myimage:latest', |
591 | + 'site': 'mysite.local', |
592 | + 'backend': 'http://mybackend.local:80', |
593 | + 'cache_max_size': '10G', |
594 | + 'cache_use_stale': 'error timeout updating http_500 http_502 http_503 http_504', |
595 | + 'cache_valid': '200 1h', |
596 | +} |
597 | +CACHE_PATH = '/var/lib/nginx/proxy/cache' |
598 | +CONTAINER_PORT = 80 |
599 | +POD_SPEC_TMPL = { |
600 | + 'version': 3, |
601 | + 'containers': [ |
602 | + { |
603 | + 'name': 'content-cache', |
604 | + 'envConfig': None, |
605 | + 'imageDetails': None, |
606 | + 'imagePullPolicy': 'Always', |
607 | + 'kubernetes': { |
608 | + 'livenessProbe': { |
609 | + 'httpGet': {'path': '/', 'port': CONTAINER_PORT}, |
610 | + 'initialDelaySeconds': 3, |
611 | + 'periodSeconds': 3, |
612 | + }, |
613 | + 'readinessProbe': { |
614 | + 'httpGet': {'path': '/', 'port': CONTAINER_PORT}, |
615 | + 'initialDelaySeconds': 3, |
616 | + 'periodSeconds': 3, |
617 | + }, |
618 | + }, |
619 | + 'ports': [{'containerPort': CONTAINER_PORT, 'protocol': 'TCP'}], |
620 | + 'volumeConfig': None, |
621 | + } |
622 | + ], |
623 | +} |
624 | +K8S_RESOURCES_TMPL = { |
625 | + 'kubernetesResources': { |
626 | + 'ingressResources': [ |
627 | + { |
628 | + 'annotations': {'nginx.ingress.kubernetes.io/ssl-redirect': 'false'}, |
629 | + 'name': 'content-cache-ingress', |
630 | + 'spec': { |
631 | + 'rules': [ |
632 | + { |
633 | + 'host': 'mysite.local', |
634 | + 'http': { |
635 | + 'paths': [ |
636 | + { |
637 | + 'backend': {'serviceName': 'content-cache', 'servicePort': 80}, |
638 | + 'path': '/', |
639 | + } |
640 | + ] |
641 | + }, |
642 | + } |
643 | + ] |
644 | + }, |
645 | + } |
646 | + ] |
647 | + } |
648 | +} |
649 | + |
650 | + |
651 | +class TestCharm(unittest.TestCase): |
652 | + def setUp(self): |
653 | + self.maxDiff = None |
654 | + self.harness = Harness(ContentCacheCharm) |
655 | + |
656 | + def tearDown(self): |
657 | + # starting from ops 0.8, we also need to do: |
658 | + self.addCleanup(self.harness.cleanup) |
659 | + |
660 | + def test_on_start(self): |
661 | + harness = self.harness |
662 | + action_event = mock.Mock() |
663 | + |
664 | + harness.begin() |
665 | + harness.charm._on_start(action_event) |
666 | + self.assertEqual(harness.charm.unit.status, ActiveStatus('Started')) |
667 | + |
668 | + @mock.patch('charm.ContentCacheCharm.configure_pod') |
669 | + def test_on_config_changed(self, configure_pod): |
670 | + harness = self.harness |
671 | + |
672 | + # Intentionally before harness.begin() to avoid firing leadership events. |
673 | + harness.set_leader(True) |
674 | + harness.begin() |
675 | + |
676 | + config = copy.deepcopy(BASE_CONFIG) |
677 | + harness.update_config(config) |
678 | + self.assertEqual(harness.charm.unit.status, MaintenanceStatus('Configuring pod (config-changed)')) |
679 | + configure_pod.assert_called_once() |
680 | + |
681 | + @mock.patch('charm.ContentCacheCharm.configure_pod') |
682 | + def test_on_config_changed_not_leader(self, configure_pod): |
683 | + harness = self.harness |
684 | + |
685 | + # Intentionally before harness.begin() to avoid firing leadership events. |
686 | + harness.set_leader(False) |
687 | + harness.begin() |
688 | + |
689 | + config = copy.deepcopy(BASE_CONFIG) |
690 | + harness.update_config(config) |
691 | + self.assertNotEqual(harness.charm.unit.status, MaintenanceStatus('Configuring pod (config-changed)')) |
692 | + configure_pod.assert_not_called() |
693 | + |
694 | + @mock.patch('charm.ContentCacheCharm.configure_pod') |
695 | + def test_on_leader_elected(self, configure_pod): |
696 | + harness = self.harness |
697 | + |
698 | + harness.begin() |
699 | + # Intentionally after harness.begin() to trigger leadership events. |
700 | + harness.set_leader(True) |
701 | + self.assertEqual(harness.charm.unit.status, MaintenanceStatus('Configuring pod (leader-elected)')) |
702 | + configure_pod.assert_called_once() |
703 | + |
704 | + @mock.patch('charm.ContentCacheCharm.configure_pod') |
705 | + def test_on_leader_elected_not_leader(self, configure_pod): |
706 | + harness = self.harness |
707 | + |
708 | + harness.begin() |
709 | + # Intentionally after harness.begin() to trigger leadership events. |
710 | + harness.set_leader(False) |
711 | + self.assertNotEqual(harness.charm.unit.status, MaintenanceStatus('Configuring pod (leader-elected)')) |
712 | + configure_pod.assert_not_called() |
713 | + |
714 | + @mock.patch('charm.ContentCacheCharm.configure_pod') |
715 | + def test_on_upgrade_charm(self, configure_pod): |
716 | + harness = self.harness |
717 | + action_event = mock.Mock() |
718 | + |
719 | + # Disable hooks and fire them manually as that seems to be the |
720 | + # only way to test upgrade-charm. |
721 | + harness.disable_hooks() |
722 | + harness.set_leader(True) |
723 | + harness.begin() |
724 | + |
725 | + harness.charm._on_upgrade_charm(action_event) |
726 | + self.assertEqual(harness.charm.unit.status, MaintenanceStatus('Configuring pod (upgrade-charm)')) |
727 | + configure_pod.assert_called_once() |
728 | + |
729 | + @mock.patch('charm.ContentCacheCharm.configure_pod') |
730 | + def test_on_upgrade_charm_not_leader(self, configure_pod): |
731 | + harness = self.harness |
732 | + action_event = mock.Mock() |
733 | + |
734 | + # Disable hooks and fire them manually as that seems to be the |
735 | + # only way to test upgrade-charm. |
736 | + harness.disable_hooks() |
737 | + harness.set_leader(False) |
738 | + harness.begin() |
739 | + |
740 | + harness.charm._on_upgrade_charm(action_event) |
741 | + self.assertNotEqual(harness.charm.unit.status, MaintenanceStatus('Configuring pod (upgrade-charm)')) |
742 | + configure_pod.assert_not_called() |
743 | + |
744 | + @mock.patch('charm.ContentCacheCharm._make_pod_spec') |
745 | + def test_configure_pod(self, make_pod_spec): |
746 | + harness = self.harness |
747 | + |
748 | + harness.set_leader(True) |
749 | + harness.begin() |
750 | + |
751 | + config = copy.deepcopy(BASE_CONFIG) |
752 | + harness.update_config(config) |
753 | + make_pod_spec.assert_called_once() |
754 | + self.assertEqual(harness.charm.unit.status, ActiveStatus('Ready')) |
755 | + pod_spec = harness.charm._make_pod_spec() |
756 | + k8s_resources = copy.deepcopy(K8S_RESOURCES_TMPL) |
757 | + self.assertEqual(harness.get_pod_spec(), (pod_spec, k8s_resources)) |
758 | + |
759 | + @mock.patch('charm.ContentCacheCharm._make_pod_spec') |
760 | + def test_configure_pod_missing_configs(self, make_pod_spec): |
761 | + harness = self.harness |
762 | + |
763 | + harness.set_leader(True) |
764 | + harness.begin() |
765 | + |
766 | + config = copy.deepcopy(BASE_CONFIG) |
767 | + config['site'] = None |
768 | + harness.update_config(config) |
769 | + make_pod_spec.assert_not_called() |
770 | + self.assertEqual(harness.charm.unit.status, BlockedStatus('Required config(s) empty: site')) |
771 | + self.assertEqual(harness.get_pod_spec(), None) |
772 | + |
773 | + def test_generate_keys_zone(self): |
774 | + harness = self.harness |
775 | + |
776 | + harness.disable_hooks() |
777 | + harness.begin() |
778 | + |
779 | + expected = '39c631ffb52d-cache' |
780 | + self.assertEqual(harness.charm._generate_keys_zone('mysite.local'), expected) |
781 | + expected = '8b79f9e4b3e8-cache' |
782 | + self.assertEqual(harness.charm._generate_keys_zone('my-really-really-really-long-site-name.local'), expected) |
783 | + expected = 'd41d8cd98f00-cache' |
784 | + self.assertEqual(harness.charm._generate_keys_zone(''), expected) |
785 | + |
786 | + def test_make_k8s_ingress_spec(self): |
787 | + harness = self.harness |
788 | + |
789 | + harness.disable_hooks() |
790 | + harness.begin() |
791 | + |
792 | + config = copy.deepcopy(BASE_CONFIG) |
793 | + harness.update_config(config) |
794 | + k8s_resources = copy.deepcopy(K8S_RESOURCES_TMPL) |
795 | + expected = k8s_resources['kubernetesResources']['ingressResources'] |
796 | + self.assertEqual(harness.charm._make_k8s_ingress_spec(), expected) |
797 | + |
798 | + def test_make_k8s_ingress_spec_client_max_body_size(self): |
799 | + harness = self.harness |
800 | + |
801 | + harness.disable_hooks() |
802 | + harness.begin() |
803 | + |
804 | + config = copy.deepcopy(BASE_CONFIG) |
805 | + config['client_max_body_size'] = '32m' |
806 | + harness.update_config(config) |
807 | + k8s_resources = copy.deepcopy(K8S_RESOURCES_TMPL) |
808 | + t = k8s_resources['kubernetesResources']['ingressResources'][0]['annotations'] |
809 | + t['nginx.ingress.kubernetes.io/proxy-body-size'] = '32m' |
810 | + expected = k8s_resources['kubernetesResources']['ingressResources'] |
811 | + self.assertEqual(harness.charm._make_k8s_ingress_spec(), expected) |
812 | + |
813 | + def test_make_k8s_ingress_spec_tls_secrets(self): |
814 | + harness = self.harness |
815 | + |
816 | + harness.disable_hooks() |
817 | + harness.begin() |
818 | + |
819 | + config = copy.deepcopy(BASE_CONFIG) |
820 | + config['tls_secret_name'] = '{}-tls'.format(config['site']) |
821 | + harness.update_config(config) |
822 | + k8s_resources = copy.deepcopy(K8S_RESOURCES_TMPL) |
823 | + t = k8s_resources['kubernetesResources']['ingressResources'][0] |
824 | + t.pop('annotations') |
825 | + t['spec']['tls'] = [{'hosts': 'mysite.local', 'secretName': 'mysite.local-tls'}] |
826 | + expected = k8s_resources['kubernetesResources']['ingressResources'] |
827 | + self.assertEqual(harness.charm._make_k8s_ingress_spec(), expected) |
828 | + |
829 | + def test_make_pod_spec(self): |
830 | + harness = self.harness |
831 | + |
832 | + harness.set_leader(True) |
833 | + harness.begin() |
834 | + |
835 | + config = copy.deepcopy(BASE_CONFIG) |
836 | + harness.update_config(config) |
837 | + spec = copy.deepcopy(POD_SPEC_TMPL) |
838 | + t = spec['containers'][0] |
839 | + t['envConfig'] = harness.charm._make_pod_config() |
840 | + t['imageDetails'] = {'imagePath': 'localhost:32000/myimage:latest'} |
841 | + t['volumeConfig'] = [ |
842 | + {'name': 'cache-volume', 'mountPath': '/var/lib/nginx/proxy/cache', 'emptyDir': {'sizeLimit': '10G'}} |
843 | + ] |
844 | + k8s_resources = copy.deepcopy(K8S_RESOURCES_TMPL) |
845 | + expected = (spec, k8s_resources) |
846 | + self.assertEqual(harness.get_pod_spec(), expected) |
847 | + |
848 | + def test_make_pod_spec_image_username(self): |
849 | + harness = self.harness |
850 | + |
851 | + harness.set_leader(True) |
852 | + harness.begin() |
853 | + |
854 | + config = copy.deepcopy(BASE_CONFIG) |
855 | + config['image_username'] = 'myuser' |
856 | + config['image_password'] = 'mypassword' |
857 | + harness.update_config(config) |
858 | + spec = copy.deepcopy(POD_SPEC_TMPL) |
859 | + t = spec['containers'][0] |
860 | + t['envConfig'] = harness.charm._make_pod_config() |
861 | + t['imageDetails'] = { |
862 | + 'imagePath': 'localhost:32000/myimage:latest', |
863 | + 'username': 'myuser', |
864 | + 'password': 'mypassword', |
865 | + } |
866 | + t['volumeConfig'] = [ |
867 | + {'name': 'cache-volume', 'mountPath': '/var/lib/nginx/proxy/cache', 'emptyDir': {'sizeLimit': '10G'}} |
868 | + ] |
869 | + k8s_resources = copy.deepcopy(K8S_RESOURCES_TMPL) |
870 | + expected = (spec, k8s_resources) |
871 | + self.assertEqual(harness.get_pod_spec(), expected) |
872 | + |
873 | + def test_make_pod_spec_cache_max_size(self): |
874 | + harness = self.harness |
875 | + |
876 | + harness.set_leader(True) |
877 | + harness.begin() |
878 | + |
879 | + config = copy.deepcopy(BASE_CONFIG) |
880 | + config['cache_max_size'] = '201G' |
881 | + harness.update_config(config) |
882 | + spec = copy.deepcopy(POD_SPEC_TMPL) |
883 | + t = spec['containers'][0] |
884 | + t['envConfig'] = harness.charm._make_pod_config() |
885 | + t['imageDetails'] = {'imagePath': 'localhost:32000/myimage:latest'} |
886 | + t['volumeConfig'] = [ |
887 | + {'name': 'cache-volume', 'mountPath': '/var/lib/nginx/proxy/cache', 'emptyDir': {'sizeLimit': '201G'}} |
888 | + ] |
889 | + k8s_resources = copy.deepcopy(K8S_RESOURCES_TMPL) |
890 | + expected = (spec, k8s_resources) |
891 | + self.assertEqual(harness.get_pod_spec(), expected) |
892 | + |
893 | + def test_make_pod_config(self): |
894 | + harness = self.harness |
895 | + |
896 | + harness.disable_hooks() |
897 | + harness.begin() |
898 | + |
899 | + config = copy.deepcopy(BASE_CONFIG) |
900 | + harness.update_config(config) |
901 | + expected = { |
902 | + 'NGINX_BACKEND': 'http://mybackend.local:80', |
903 | + 'NGINX_CACHE_INACTIVE_TIME': '10m', |
904 | + 'NGINX_CACHE_MAX_SIZE': '10G', |
905 | + 'NGINX_CACHE_PATH': CACHE_PATH, |
906 | + 'NGINX_CACHE_USE_STALE': 'error timeout updating http_500 http_502 http_503 http_504', |
907 | + 'NGINX_CACHE_VALID': '200 1h', |
908 | + 'NGINX_CLIENT_MAX_BODY_SIZE': '1m', |
909 | + 'NGINX_KEYS_ZONE': '39c631ffb52d-cache', |
910 | + 'NGINX_SITE_NAME': 'mysite.local', |
911 | + } |
912 | + self.assertEqual(harness.charm._make_pod_config(), expected) |
913 | + |
914 | + def test_make_pod_config_client_max_body_size(self): |
915 | + harness = self.harness |
916 | + |
917 | + harness.disable_hooks() |
918 | + harness.begin() |
919 | + |
920 | + config = copy.deepcopy(BASE_CONFIG) |
921 | + config['client_max_body_size'] = '50m' |
922 | + harness.update_config(config) |
923 | + expected = { |
924 | + 'NGINX_BACKEND': 'http://mybackend.local:80', |
925 | + 'NGINX_CACHE_INACTIVE_TIME': '10m', |
926 | + 'NGINX_CACHE_MAX_SIZE': '10G', |
927 | + 'NGINX_CACHE_PATH': CACHE_PATH, |
928 | + 'NGINX_CACHE_USE_STALE': 'error timeout updating http_500 http_502 http_503 http_504', |
929 | + 'NGINX_CACHE_VALID': '200 1h', |
930 | + 'NGINX_CLIENT_MAX_BODY_SIZE': '50m', |
931 | + 'NGINX_KEYS_ZONE': '39c631ffb52d-cache', |
932 | + 'NGINX_SITE_NAME': 'mysite.local', |
933 | + } |
934 | + self.assertEqual(harness.charm._make_pod_config(), expected) |
935 | + |
936 | + def test_missing_charm_configs(self): |
937 | + harness = self.harness |
938 | + |
939 | + harness.disable_hooks() |
940 | + harness.begin() |
941 | + |
942 | + config = copy.deepcopy(BASE_CONFIG) |
943 | + harness.update_config(config) |
944 | + expected = [] |
945 | + self.assertEqual(harness.charm._missing_charm_configs(), expected) |
946 | + |
947 | + config = copy.deepcopy(BASE_CONFIG) |
948 | + config['site'] = None |
949 | + harness.update_config(config) |
950 | + expected = ['site'] |
951 | + self.assertEqual(harness.charm._missing_charm_configs(), expected) |
952 | + |
953 | + config = copy.deepcopy(BASE_CONFIG) |
954 | + config['image_path'] = None |
955 | + config['site'] = None |
956 | + harness.update_config(config) |
957 | + expected = ['image_path', 'site'] |
958 | + self.assertEqual(harness.charm._missing_charm_configs(), expected) |
959 | diff --git a/tests/unit/test_charm.py b/tests/unit/test_charm.py |
960 | deleted file mode 100644 |
961 | index 2f3db20..0000000 |
962 | --- a/tests/unit/test_charm.py |
963 | +++ /dev/null |
964 | @@ -1,36 +0,0 @@ |
965 | -# Copyright 2020 hloeung |
966 | -# See LICENSE file for licensing details. |
967 | - |
968 | -import unittest |
969 | -from unittest.mock import Mock |
970 | - |
971 | -from ops.testing import Harness |
972 | -from charm import CharmK8SContentCacheCharm |
973 | - |
974 | - |
975 | -class TestCharm(unittest.TestCase): |
976 | - def test_config_changed(self): |
977 | - harness = Harness(CharmK8SContentCacheCharm) |
978 | - # from 0.8 you should also do: |
979 | - # self.addCleanup(harness.cleanup) |
980 | - harness.begin() |
981 | - self.assertEqual(list(harness.charm._stored.things), []) |
982 | - harness.update_config({"thing": "foo"}) |
983 | - self.assertEqual(list(harness.charm._stored.things), ["foo"]) |
984 | - |
985 | - def test_action(self): |
986 | - harness = Harness(CharmK8SContentCacheCharm) |
987 | - harness.begin() |
988 | - # the harness doesn't (yet!) help much with actions themselves |
989 | - action_event = Mock(params={"fail": ""}) |
990 | - harness.charm._on_fortune_action(action_event) |
991 | - |
992 | - self.assertTrue(action_event.set_results.called) |
993 | - |
994 | - def test_action_fail(self): |
995 | - harness = Harness(CharmK8SContentCacheCharm) |
996 | - harness.begin() |
997 | - action_event = Mock(params={"fail": "fail this"}) |
998 | - harness.charm._on_fortune_action(action_event) |
999 | - |
1000 | - self.assertEqual(action_event.fail.call_args, [("fail this",)]) |
1001 | diff --git a/tox.ini b/tox.ini |
1002 | new file mode 100644 |
1003 | index 0000000..2daecb5 |
1004 | --- /dev/null |
1005 | +++ b/tox.ini |
1006 | @@ -0,0 +1,37 @@ |
1007 | +[tox] |
1008 | +skipsdist=True |
1009 | +envlist = unit |
1010 | +skip_missing_interpreters = True |
1011 | + |
1012 | +[testenv] |
1013 | +basepython = python3 |
1014 | +setenv = |
1015 | + PYTHONPATH = {toxinidir}/src:{toxinidir}/build/lib:{toxinidir}/build/venv |
1016 | + |
1017 | +[testenv:unit] |
1018 | +commands = |
1019 | + pytest \ |
1020 | + {posargs:-v --cov=src --cov-report=term-missing --cov-branch} |
1021 | +deps = -r{toxinidir}/tests/requirements.txt |
1022 | + -r{toxinidir}/requirements.txt |
1023 | +setenv = |
1024 | + PYTHONPATH={toxinidir}/src:{toxinidir}/build/lib:{toxinidir}/build/venv |
1025 | + TZ=UTC |
1026 | + |
1027 | +[testenv:black] |
1028 | +commands = black --skip-string-normalization --line-length=120 src/ tests/ |
1029 | +deps = black |
1030 | + |
1031 | +[testenv:lint] |
1032 | +commands = flake8 src/ tests/ |
1033 | +# Pin flake8 to 3.7.9 to match focal |
1034 | +deps = |
1035 | + flake8==3.7.9 |
1036 | + |
1037 | +[flake8] |
1038 | +exclude = |
1039 | + .git, |
1040 | + __pycache__, |
1041 | + .tox, |
1042 | +max-line-length = 120 |
1043 | +max-complexity = 10 |
This merge proposal is being monitored by mergebot. Change the status to Approved to merge.