Merge ~cjwatson/charm-clamav-database-mirror:initial into charm-clamav-database-mirror:main
| Status | Merged |
|---|---|
| Merged at revision | d09540ef93fe67e180e1c07145a48127cc634bc6 |
| Proposed branch | ~cjwatson/charm-clamav-database-mirror:initial |
| Merge into | charm-clamav-database-mirror:main |
| Diff against target | 2117 lines (+2009/-0), 18 files modified |

Files modified:

- .gitignore (+9/-0)
- CONTRIBUTING.md (+31/-0)
- LICENSE (+202/-0)
- README.md (+12/-0)
- charmcraft.yaml (+16/-0)
- config.yaml (+13/-0)
- files/cvdupdate.timer (+12/-0)
- lib/charms/operator_libs_linux/v0/apt.py (+1329/-0)
- metadata.yaml (+14/-0)
- pyproject.toml (+33/-0)
- requirements.txt (+2/-0)
- src/charm.py (+106/-0)
- templates/cvdupdate.service.j2 (+22/-0)
- templates/nginx-site.conf.j2 (+9/-0)
- tests/integration/test_charm.py (+34/-0)
- tests/unit/test_charm.py (+88/-0)
- tox.ini (+73/-0)
- update-lib (+4/-0)
| Reviewer | Review Type | Date Requested | Status |
|---|---|---|---|
| Andrey Fedoseev (community) | | | Approve |

Review via email: mp+431861@code.launchpad.net
Commit message
Add new charm
Description of the change
Much of the project skeleton came from `charmcraft init --profile machine`.
`lib/charms/operator_libs_linux/v0/apt.py` is the standard `operator_libs_linux` charm library, included verbatim.
I considered using either the existing `apache2` charm or the existing `nginx` charm. However, `apache2` currently only works up to focal, and the `clamav-cvdupdate` package is only available starting from jammy; while the `nginx` charm doesn't allow deploying a site as a subordinate, which makes it rather cumbersome to use here. Since it's fairly trivial to handle installing `nginx` directly, I just went with that.
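Since the charm installs `nginx` itself, the install path can be sketched against the public API of the vendored apt library shown in the diff below (`update()`, `add_package()`, `PackageNotFoundError`). This is an illustrative sketch, not the actual `src/charm.py`; the helper name, package list, and boolean return convention are assumptions:

```python
def install_mirror_packages(apt_module, packages=("nginx", "clamav-cvdupdate")):
    """Refresh the package index, then install the mirror's dependencies.

    `apt_module` is expected to look like charms.operator_libs_linux.v0.apt:
    update() runs `apt-get update`; add_package() installs a list of names.
    Returns True on success, False if a package was not found.
    (Hypothetical helper, for illustration only.)
    """
    try:
        apt_module.update()
        apt_module.add_package(list(packages))
    except apt_module.PackageNotFoundError:
        return False
    return True
```

In the real charm this would typically run from the install hook, with the failure case surfaced as a blocked status rather than a boolean.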
The test suite doesn't quite have 100% coverage, but there's enough here that it caught a number of my mistakes during development, and I wanted to get this up for review without too much more delay.
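The `host` and `port` options in `config.yaml` below feed `templates/nginx-site.conf.j2`. As a rough stand-in for that rendering (the real template is Jinja2, and its actual directives are not shown here; the ones below are assumptions), the shape of the logic can be mimicked with `str.format`:

```python
# Hypothetical approximation of templates/nginx-site.conf.j2; the real
# template's directives may differ. Doubled braces are literal nginx braces.
NGINX_SITE = """\
server {{
    listen {port};
    server_name {host};
    root /srv/clamav-database-mirror;
    autoindex on;
}}
"""


def render_site(host="localhost", port=80):
    """Render a minimal nginx site using the charm's config option defaults."""
    return NGINX_SITE.format(host=host, port=port)
```

The defaults match `config.yaml` (`host: "localhost"`, `port: 80`); a config-changed hook would re-render and reload nginx when either option changes.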
Preview Diff
1 | diff --git a/.gitignore b/.gitignore |
2 | new file mode 100644 |
3 | index 0000000..a26d707 |
4 | --- /dev/null |
5 | +++ b/.gitignore |
6 | @@ -0,0 +1,9 @@ |
7 | +venv/ |
8 | +build/ |
9 | +*.charm |
10 | +.tox/ |
11 | +.coverage |
12 | +__pycache__/ |
13 | +*.py[cod] |
14 | +.idea |
15 | +.vscode/ |
16 | diff --git a/CONTRIBUTING.md b/CONTRIBUTING.md |
17 | new file mode 100644 |
18 | index 0000000..5863a00 |
19 | --- /dev/null |
20 | +++ b/CONTRIBUTING.md |
21 | @@ -0,0 +1,31 @@ |
22 | +# Contributing |
23 | + |
24 | +To make contributions to this charm, you'll need a working [development setup](https://juju.is/docs/sdk/dev-setup). |
25 | + |
26 | +You can use the environments created by `tox` for development: |
27 | + |
28 | +```shell |
29 | +tox --notest -e unit |
30 | +source .tox/unit/bin/activate |
31 | +``` |
32 | + |
33 | +## Testing |
34 | + |
35 | +This project uses `tox` for managing test environments. There are some pre-configured environments |
36 | +that can be used for linting and formatting code when you're preparing contributions to the charm: |
37 | + |
38 | +```shell |
39 | +tox -e fmt # update your code according to linting rules |
40 | +tox -e lint # code style |
41 | +tox -e unit # unit tests |
42 | +tox -e integration # integration tests |
43 | +tox # runs 'lint' and 'unit' environments |
44 | +``` |
45 | + |
46 | +## Build the charm |
47 | + |
48 | +Build the charm in this git repository using: |
49 | + |
50 | +```shell |
51 | +charmcraft pack |
52 | +``` |
53 | diff --git a/LICENSE b/LICENSE |
54 | new file mode 100644 |
55 | index 0000000..7e9d504 |
56 | --- /dev/null |
57 | +++ b/LICENSE |
58 | @@ -0,0 +1,202 @@ |
59 | + |
60 | + Apache License |
61 | + Version 2.0, January 2004 |
62 | + http://www.apache.org/licenses/ |
63 | + |
64 | + TERMS AND CONDITIONS FOR USE, REPRODUCTION, AND DISTRIBUTION |
65 | + |
66 | + 1. Definitions. |
67 | + |
68 | + "License" shall mean the terms and conditions for use, reproduction, |
69 | + and distribution as defined by Sections 1 through 9 of this document. |
70 | + |
71 | + "Licensor" shall mean the copyright owner or entity authorized by |
72 | + the copyright owner that is granting the License. |
73 | + |
74 | + "Legal Entity" shall mean the union of the acting entity and all |
75 | + other entities that control, are controlled by, or are under common |
76 | + control with that entity. For the purposes of this definition, |
77 | + "control" means (i) the power, direct or indirect, to cause the |
78 | + direction or management of such entity, whether by contract or |
79 | + otherwise, or (ii) ownership of fifty percent (50%) or more of the |
80 | + outstanding shares, or (iii) beneficial ownership of such entity. |
81 | + |
82 | + "You" (or "Your") shall mean an individual or Legal Entity |
83 | + exercising permissions granted by this License. |
84 | + |
85 | + "Source" form shall mean the preferred form for making modifications, |
86 | + including but not limited to software source code, documentation |
87 | + source, and configuration files. |
88 | + |
89 | + "Object" form shall mean any form resulting from mechanical |
90 | + transformation or translation of a Source form, including but |
91 | + not limited to compiled object code, generated documentation, |
92 | + and conversions to other media types. |
93 | + |
94 | + "Work" shall mean the work of authorship, whether in Source or |
95 | + Object form, made available under the License, as indicated by a |
96 | + copyright notice that is included in or attached to the work |
97 | + (an example is provided in the Appendix below). |
98 | + |
99 | + "Derivative Works" shall mean any work, whether in Source or Object |
100 | + form, that is based on (or derived from) the Work and for which the |
101 | + editorial revisions, annotations, elaborations, or other modifications |
102 | + represent, as a whole, an original work of authorship. For the purposes |
103 | + of this License, Derivative Works shall not include works that remain |
104 | + separable from, or merely link (or bind by name) to the interfaces of, |
105 | + the Work and Derivative Works thereof. |
106 | + |
107 | + "Contribution" shall mean any work of authorship, including |
108 | + the original version of the Work and any modifications or additions |
109 | + to that Work or Derivative Works thereof, that is intentionally |
110 | + submitted to Licensor for inclusion in the Work by the copyright owner |
111 | + or by an individual or Legal Entity authorized to submit on behalf of |
112 | + the copyright owner. For the purposes of this definition, "submitted" |
113 | + means any form of electronic, verbal, or written communication sent |
114 | + to the Licensor or its representatives, including but not limited to |
115 | + communication on electronic mailing lists, source code control systems, |
116 | + and issue tracking systems that are managed by, or on behalf of, the |
117 | + Licensor for the purpose of discussing and improving the Work, but |
118 | + excluding communication that is conspicuously marked or otherwise |
119 | + designated in writing by the copyright owner as "Not a Contribution." |
120 | + |
121 | + "Contributor" shall mean Licensor and any individual or Legal Entity |
122 | + on behalf of whom a Contribution has been received by Licensor and |
123 | + subsequently incorporated within the Work. |
124 | + |
125 | + 2. Grant of Copyright License. Subject to the terms and conditions of |
126 | + this License, each Contributor hereby grants to You a perpetual, |
127 | + worldwide, non-exclusive, no-charge, royalty-free, irrevocable |
128 | + copyright license to reproduce, prepare Derivative Works of, |
129 | + publicly display, publicly perform, sublicense, and distribute the |
130 | + Work and such Derivative Works in Source or Object form. |
131 | + |
132 | + 3. Grant of Patent License. Subject to the terms and conditions of |
133 | + this License, each Contributor hereby grants to You a perpetual, |
134 | + worldwide, non-exclusive, no-charge, royalty-free, irrevocable |
135 | + (except as stated in this section) patent license to make, have made, |
136 | + use, offer to sell, sell, import, and otherwise transfer the Work, |
137 | + where such license applies only to those patent claims licensable |
138 | + by such Contributor that are necessarily infringed by their |
139 | + Contribution(s) alone or by combination of their Contribution(s) |
140 | + with the Work to which such Contribution(s) was submitted. If You |
141 | + institute patent litigation against any entity (including a |
142 | + cross-claim or counterclaim in a lawsuit) alleging that the Work |
143 | + or a Contribution incorporated within the Work constitutes direct |
144 | + or contributory patent infringement, then any patent licenses |
145 | + granted to You under this License for that Work shall terminate |
146 | + as of the date such litigation is filed. |
147 | + |
148 | + 4. Redistribution. You may reproduce and distribute copies of the |
149 | + Work or Derivative Works thereof in any medium, with or without |
150 | + modifications, and in Source or Object form, provided that You |
151 | + meet the following conditions: |
152 | + |
153 | + (a) You must give any other recipients of the Work or |
154 | + Derivative Works a copy of this License; and |
155 | + |
156 | + (b) You must cause any modified files to carry prominent notices |
157 | + stating that You changed the files; and |
158 | + |
159 | + (c) You must retain, in the Source form of any Derivative Works |
160 | + that You distribute, all copyright, patent, trademark, and |
161 | + attribution notices from the Source form of the Work, |
162 | + excluding those notices that do not pertain to any part of |
163 | + the Derivative Works; and |
164 | + |
165 | + (d) If the Work includes a "NOTICE" text file as part of its |
166 | + distribution, then any Derivative Works that You distribute must |
167 | + include a readable copy of the attribution notices contained |
168 | + within such NOTICE file, excluding those notices that do not |
169 | + pertain to any part of the Derivative Works, in at least one |
170 | + of the following places: within a NOTICE text file distributed |
171 | + as part of the Derivative Works; within the Source form or |
172 | + documentation, if provided along with the Derivative Works; or, |
173 | + within a display generated by the Derivative Works, if and |
174 | + wherever such third-party notices normally appear. The contents |
175 | + of the NOTICE file are for informational purposes only and |
176 | + do not modify the License. You may add Your own attribution |
177 | + notices within Derivative Works that You distribute, alongside |
178 | + or as an addendum to the NOTICE text from the Work, provided |
179 | + that such additional attribution notices cannot be construed |
180 | + as modifying the License. |
181 | + |
182 | + You may add Your own copyright statement to Your modifications and |
183 | + may provide additional or different license terms and conditions |
184 | + for use, reproduction, or distribution of Your modifications, or |
185 | + for any such Derivative Works as a whole, provided Your use, |
186 | + reproduction, and distribution of the Work otherwise complies with |
187 | + the conditions stated in this License. |
188 | + |
189 | + 5. Submission of Contributions. Unless You explicitly state otherwise, |
190 | + any Contribution intentionally submitted for inclusion in the Work |
191 | + by You to the Licensor shall be under the terms and conditions of |
192 | + this License, without any additional terms or conditions. |
193 | + Notwithstanding the above, nothing herein shall supersede or modify |
194 | + the terms of any separate license agreement you may have executed |
195 | + with Licensor regarding such Contributions. |
196 | + |
197 | + 6. Trademarks. This License does not grant permission to use the trade |
198 | + names, trademarks, service marks, or product names of the Licensor, |
199 | + except as required for reasonable and customary use in describing the |
200 | + origin of the Work and reproducing the content of the NOTICE file. |
201 | + |
202 | + 7. Disclaimer of Warranty. Unless required by applicable law or |
203 | + agreed to in writing, Licensor provides the Work (and each |
204 | + Contributor provides its Contributions) on an "AS IS" BASIS, |
205 | + WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or |
206 | + implied, including, without limitation, any warranties or conditions |
207 | + of TITLE, NON-INFRINGEMENT, MERCHANTABILITY, or FITNESS FOR A |
208 | + PARTICULAR PURPOSE. You are solely responsible for determining the |
209 | + appropriateness of using or redistributing the Work and assume any |
210 | + risks associated with Your exercise of permissions under this License. |
211 | + |
212 | + 8. Limitation of Liability. In no event and under no legal theory, |
213 | + whether in tort (including negligence), contract, or otherwise, |
214 | + unless required by applicable law (such as deliberate and grossly |
215 | + negligent acts) or agreed to in writing, shall any Contributor be |
216 | + liable to You for damages, including any direct, indirect, special, |
217 | + incidental, or consequential damages of any character arising as a |
218 | + result of this License or out of the use or inability to use the |
219 | + Work (including but not limited to damages for loss of goodwill, |
220 | + work stoppage, computer failure or malfunction, or any and all |
221 | + other commercial damages or losses), even if such Contributor |
222 | + has been advised of the possibility of such damages. |
223 | + |
224 | + 9. Accepting Warranty or Additional Liability. While redistributing |
225 | + the Work or Derivative Works thereof, You may choose to offer, |
226 | + and charge a fee for, acceptance of support, warranty, indemnity, |
227 | + or other liability obligations and/or rights consistent with this |
228 | + License. However, in accepting such obligations, You may act only |
229 | + on Your own behalf and on Your sole responsibility, not on behalf |
230 | + of any other Contributor, and only if You agree to indemnify, |
231 | + defend, and hold each Contributor harmless for any liability |
232 | + incurred by, or claims asserted against, such Contributor by reason |
233 | + of your accepting any such warranty or additional liability. |
234 | + |
235 | + END OF TERMS AND CONDITIONS |
236 | + |
237 | + APPENDIX: How to apply the Apache License to your work. |
238 | + |
239 | + To apply the Apache License to your work, attach the following |
240 | + boilerplate notice, with the fields enclosed by brackets "[]" |
241 | + replaced with your own identifying information. (Don't include |
242 | + the brackets!) The text should be enclosed in the appropriate |
243 | + comment syntax for the file format. We also recommend that a |
244 | + file or class name and description of purpose be included on the |
245 | + same "printed page" as the copyright notice for easier |
246 | + identification within third-party archives. |
247 | + |
248 | + Copyright 2022 Canonical Ltd. |
249 | + |
250 | + Licensed under the Apache License, Version 2.0 (the "License"); |
251 | + you may not use this file except in compliance with the License. |
252 | + You may obtain a copy of the License at |
253 | + |
254 | + http://www.apache.org/licenses/LICENSE-2.0 |
255 | + |
256 | + Unless required by applicable law or agreed to in writing, software |
257 | + distributed under the License is distributed on an "AS IS" BASIS, |
258 | + WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. |
259 | + See the License for the specific language governing permissions and |
260 | + limitations under the License. |
261 | diff --git a/README.md b/README.md |
262 | new file mode 100644 |
263 | index 0000000..d78d2a3 |
264 | --- /dev/null |
265 | +++ b/README.md |
266 | @@ -0,0 +1,12 @@ |
267 | +# clamav-database-mirror |
268 | + |
269 | +Charmhub package name: clamav-database-mirror |
270 | +More information: https://charmhub.io/clamav-database-mirror |
271 | + |
272 | +Maintains a local mirror of the latest ClamAV databases. |
273 | + |
274 | +## Other resources |
275 | + |
276 | +- [Contributing](CONTRIBUTING.md) <!-- or link to other contribution documentation --> |
277 | + |
278 | +- See the [Juju SDK documentation](https://juju.is/docs/sdk) for more information about developing and improving charms. |
279 | diff --git a/charmcraft.yaml b/charmcraft.yaml |
280 | new file mode 100644 |
281 | index 0000000..0cd3b34 |
282 | --- /dev/null |
283 | +++ b/charmcraft.yaml |
284 | @@ -0,0 +1,16 @@ |
285 | +# This file configures Charmcraft. |
286 | +# See https://juju.is/docs/sdk/charmcraft-config for guidance. |
287 | + |
288 | +type: charm |
289 | +bases: |
290 | + - build-on: |
291 | + - name: ubuntu |
292 | + channel: "22.04" |
293 | + run-on: |
294 | + - name: ubuntu |
295 | + channel: "22.04" |
296 | +parts: |
297 | + charm: |
298 | + prime: |
299 | + - files/* |
300 | + - templates/* |
301 | diff --git a/config.yaml b/config.yaml |
302 | new file mode 100644 |
303 | index 0000000..f100171 |
304 | --- /dev/null |
305 | +++ b/config.yaml |
306 | @@ -0,0 +1,13 @@ |
307 | +options: |
308 | + host: |
309 | + type: string |
310 | + default: "localhost" |
311 | + description: The mirror's host name. |
312 | + port: |
313 | + type: int |
314 | + default: 80 |
315 | + description: The mirror's listen port. |
316 | + http-proxy: |
317 | + type: string |
318 | + default: "" |
319 | + description: HTTP proxy to use for requests. |
320 | diff --git a/files/cvdupdate.timer b/files/cvdupdate.timer |
321 | new file mode 100644 |
322 | index 0000000..dda6d5d |
323 | --- /dev/null |
324 | +++ b/files/cvdupdate.timer |
325 | @@ -0,0 +1,12 @@ |
326 | +[Unit] |
327 | +Description=Update ClamAV database mirror |
328 | +Documentation=man:cvdupdate(1) |
329 | + |
330 | +[Timer] |
331 | +OnCalendar=*-*-* 0/4:30:00 |
332 | +RandomizedDelaySec=1h |
333 | +FixedRandomDelay=true |
334 | +Persistent=true |
335 | + |
336 | +[Install] |
337 | +WantedBy=timers.target |
338 | diff --git a/lib/charms/operator_libs_linux/v0/apt.py b/lib/charms/operator_libs_linux/v0/apt.py |
339 | new file mode 100644 |
340 | index 0000000..2b5c8f2 |
341 | --- /dev/null |
342 | +++ b/lib/charms/operator_libs_linux/v0/apt.py |
343 | @@ -0,0 +1,1329 @@ |
344 | +# Copyright 2021 Canonical Ltd. |
345 | +# |
346 | +# Licensed under the Apache License, Version 2.0 (the "License"); |
347 | +# you may not use this file except in compliance with the License. |
348 | +# You may obtain a copy of the License at |
349 | +# |
350 | +# http://www.apache.org/licenses/LICENSE-2.0 |
351 | +# |
352 | +# Unless required by applicable law or agreed to in writing, software |
353 | +# distributed under the License is distributed on an "AS IS" BASIS, |
354 | +# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. |
355 | +# See the License for the specific language governing permissions and |
356 | +# limitations under the License. |
357 | + |
358 | +"""Abstractions for the system's Debian/Ubuntu package information and repositories. |
359 | + |
360 | +This module contains abstractions and wrappers around Debian/Ubuntu-style repositories and |
361 | +packages, in order to easily provide an idiomatic and Pythonic mechanism for adding packages and/or |
362 | +repositories to systems for use in machine charms. |
363 | + |
364 | +A sane default configuration is attainable through nothing more than instantiation of the |
365 | +appropriate classes. `DebianPackage` objects provide information about the architecture, version, |
366 | +name, and status of a package. |
367 | + |
368 | +`DebianPackage` will try to look up a package either from `dpkg -l` or from `apt-cache` when |
369 | +provided with a string indicating the package name. If it cannot be located, `PackageNotFoundError` |
370 | +will be raised, since `apt` and `dpkg` otherwise return `100` for all errors, and a meaningful error |
371 | +message if the package is not known is desirable. |
372 | + |
373 | +To install packages with convenience methods: |
374 | + |
375 | +```python |
376 | +try: |
377 | + # Run `apt-get update` |
378 | + apt.update() |
379 | + apt.add_package("zsh") |
380 | + apt.add_package(["vim", "htop", "wget"]) |
381 | +except PackageNotFoundError: |
382 | + logger.error("a specified package not found in package cache or on system") |
383 | +except PackageError as e: |
384 | + logger.error("could not install package. Reason: %s", e.message) |
385 | +``` |
386 | + |
387 | +To find details of a specific package: |
388 | + |
389 | +```python |
390 | +try: |
391 | + vim = apt.DebianPackage.from_system("vim") |
392 | + |
393 | + # To find from the apt cache only |
394 | + # apt.DebianPackage.from_apt_cache("vim") |
395 | + |
396 | + # To find from installed packages only |
397 | + # apt.DebianPackage.from_installed_package("vim") |
398 | + |
399 | + vim.ensure(PackageState.Latest) |
400 | + logger.info("updated vim to version: %s", vim.fullversion) |
401 | +except PackageNotFoundError: |
402 | + logger.error("a specified package not found in package cache or on system") |
403 | +except PackageError as e: |
404 | + logger.error("could not install package. Reason: %s", e.message) |
405 | +``` |
406 | + |
407 | + |
408 | +`RepositoryMapping` will return a dict-like object containing enabled system repositories |
409 | +and their properties (available groups, baseuri, gpg key). This class can add, disable, or |
410 | +manipulate repositories. Items can be retrieved as `DebianRepository` objects. |
411 | + |
412 | +In order to add a new repository with explicit details for fields, a new `DebianRepository` can |
413 | +be added to `RepositoryMapping`. |
414 | + |
415 | +`RepositoryMapping` provides an abstraction around the existing repositories on the system, |
416 | +and can be accessed and iterated over like any `Mapping` object, to retrieve values by key, |
417 | +iterate, or perform other operations. |
418 | + |
419 | +Keys are constructed as `{repo_type}-{}-{release}` in order to uniquely identify a repository. |
420 | + |
421 | +Repositories can be added with explicit values through a Python constructor. |
422 | + |
423 | +Example: |
424 | + |
425 | +```python |
426 | +repositories = apt.RepositoryMapping() |
427 | + |
428 | +if "deb-example.com-focal" not in repositories: |
429 | + repositories.add(DebianRepository(enabled=True, repotype="deb", |
430 | + uri="https://example.com", release="focal", groups=["universe"])) |
431 | +``` |
432 | + |
433 | +Alternatively, any valid `sources.list` line may be used to construct a new |
434 | +`DebianRepository`. |
435 | + |
436 | +Example: |
437 | + |
438 | +```python |
439 | +repositories = apt.RepositoryMapping() |
440 | + |
441 | +if "deb-us.archive.ubuntu.com-xenial" not in repositories: |
442 | + line = "deb http://us.archive.ubuntu.com/ubuntu xenial main restricted" |
443 | + repo = DebianRepository.from_repo_line(line) |
444 | + repositories.add(repo) |
445 | +``` |
446 | +""" |
447 | + |
448 | +import fileinput |
449 | +import glob |
450 | +import logging |
451 | +import os |
452 | +import re |
453 | +import subprocess |
454 | +from collections.abc import Mapping |
455 | +from enum import Enum |
456 | +from subprocess import PIPE, CalledProcessError, check_call, check_output |
457 | +from typing import Iterable, List, Optional, Tuple, Union |
458 | +from urllib.parse import urlparse |
459 | + |
460 | +logger = logging.getLogger(__name__) |
461 | + |
462 | +# The unique Charmhub library identifier, never change it |
463 | +LIBID = "7c3dbc9c2ad44a47bd6fcb25caa270e5" |
464 | + |
465 | +# Increment this major API version when introducing breaking changes |
466 | +LIBAPI = 0 |
467 | + |
468 | +# Increment this PATCH version before using `charmcraft publish-lib` or reset |
469 | +# to 0 if you are raising the major API version |
470 | +LIBPATCH = 7 |
471 | + |
472 | + |
473 | +VALID_SOURCE_TYPES = ("deb", "deb-src") |
474 | +OPTIONS_MATCHER = re.compile(r"\[.*?\]") |
475 | + |
476 | + |
477 | +class Error(Exception): |
478 | + """Base class of most errors raised by this library.""" |
479 | + |
480 | + def __repr__(self): |
481 | + """String representation of Error.""" |
482 | + return "<{}.{} {}>".format(type(self).__module__, type(self).__name__, self.args) |
483 | + |
484 | + @property |
485 | + def name(self): |
486 | + """Return a string representation of the model plus class.""" |
487 | + return "<{}.{}>".format(type(self).__module__, type(self).__name__) |
488 | + |
489 | + @property |
490 | + def message(self): |
491 | + """Return the message passed as an argument.""" |
492 | + return self.args[0] |
493 | + |
494 | + |
495 | +class PackageError(Error): |
496 | + """Raised when there's an error installing or removing a package.""" |
497 | + |
498 | + |
499 | +class PackageNotFoundError(Error): |
500 | + """Raised when a requested package is not known to the system.""" |
501 | + |
502 | + |
503 | +class PackageState(Enum): |
504 | + """A class to represent possible package states.""" |
505 | + |
506 | + Present = "present" |
507 | + Absent = "absent" |
508 | + Latest = "latest" |
509 | + Available = "available" |
510 | + |
511 | + |
512 | +class DebianPackage: |
513 | + """Represents a traditional Debian package and its utility functions. |
514 | + |
515 | + `DebianPackage` wraps information and functionality around a known package, whether installed |
516 | + or available. The version, epoch, name, and architecture can be easily queried and compared |
517 | + against other `DebianPackage` objects to determine the latest version or to install a specific |
518 | + version. |
519 | + |
520 | + The representation of this object as a string mimics the output from `dpkg` for familiarity. |
521 | + |
522 | + Installation and removal of packages is handled through the `state` property or `ensure` |
523 | + method, with the following options: |
524 | + |
525 | + apt.PackageState.Absent |
526 | + apt.PackageState.Available |
527 | + apt.PackageState.Present |
528 | + apt.PackageState.Latest |
529 | + |
530 | + When `DebianPackage` is initialized, the state of a given `DebianPackage` object will be set to |
531 | + `Available`, `Present`, or `Latest`, with `Absent` implemented as a convenience for removal |
532 | + (though it operates essentially the same as `Available`). |
533 | + """ |
534 | + |
535 | + def __init__( |
536 | + self, name: str, version: str, epoch: str, arch: str, state: PackageState |
537 | + ) -> None: |
538 | + self._name = name |
539 | + self._arch = arch |
540 | + self._state = state |
541 | + self._version = Version(version, epoch) |
542 | + |
543 | + def __eq__(self, other) -> bool: |
544 | + """Equality for comparison. |
545 | + |
546 | + Args: |
547 | + other: a `DebianPackage` object for comparison |
548 | + |
549 | + Returns: |
550 | + A boolean reflecting equality |
551 | + """ |
552 | + return isinstance(other, self.__class__) and ( |
553 | + self._name, |
554 | + self._version.number, |
555 | + ) == (other._name, other._version.number) |
556 | + |
557 | + def __hash__(self): |
558 | + """A basic hash so this class can be used in Mappings and dicts.""" |
559 | + return hash((self._name, self._version.number)) |
560 | + |
561 | + def __repr__(self): |
562 | + """A representation of the package.""" |
563 | + return "<{}.{}: {}>".format(self.__module__, self.__class__.__name__, self.__dict__) |
564 | + |
565 | + def __str__(self): |
566 | + """A human-readable representation of the package.""" |
567 | + return "<{}: {}-{}.{} -- {}>".format( |
568 | + self.__class__.__name__, |
569 | + self._name, |
570 | + self._version, |
571 | + self._arch, |
572 | + str(self._state), |
573 | + ) |
574 | + |
575 | + @staticmethod |
576 | + def _apt( |
577 | + command: str, |
578 | + package_names: Union[str, List], |
579 | + optargs: Optional[List[str]] = None, |
580 | + ) -> None: |
581 | + """Wrap package management commands for Debian/Ubuntu systems. |
582 | + |
583 | + Args: |
584 | + command: the command given to `apt-get` |
585 | + package_names: a package name or list of package names to operate on |
586 | +            optargs: an (Optional) list of additional arguments |
587 | + |
588 | + Raises: |
589 | + PackageError if an error is encountered |
590 | + """ |
591 | + optargs = optargs if optargs is not None else [] |
592 | + if isinstance(package_names, str): |
593 | + package_names = [package_names] |
594 | + _cmd = ["apt-get", "-y", *optargs, command, *package_names] |
595 | + try: |
596 | + check_call(_cmd, stderr=PIPE, stdout=PIPE) |
597 | + except CalledProcessError as e: |
598 | + raise PackageError( |
599 | + "Could not {} package(s) [{}]: {}".format(command, [*package_names], e.output) |
600 | + ) from None |
601 | + |
602 | + def _add(self) -> None: |
603 | + """Add a package to the system.""" |
604 | + self._apt( |
605 | + "install", |
606 | + "{}={}".format(self.name, self.version), |
607 | + optargs=["--option=Dpkg::Options::=--force-confold"], |
608 | + ) |
609 | + |
610 | + def _remove(self) -> None: |
611 | + """Removes a package from the system. Implementation-specific.""" |
612 | + return self._apt("remove", "{}={}".format(self.name, self.version)) |
613 | + |
614 | + @property |
615 | + def name(self) -> str: |
616 | + """Returns the name of the package.""" |
617 | + return self._name |
618 | + |
619 | + def ensure(self, state: PackageState): |
620 | + """Ensures that a package is in a given state. |
621 | + |
622 | + Args: |
623 | + state: a `PackageState` to reconcile the package to |
624 | + |
625 | + Raises: |
626 | + PackageError from the underlying call to apt |
627 | + """ |
628 | + if self._state is not state: |
629 | + if state not in (PackageState.Present, PackageState.Latest): |
630 | + self._remove() |
631 | + else: |
632 | + self._add() |
633 | + self._state = state |
634 | + |
635 | + @property |
636 | + def present(self) -> bool: |
637 | + """Returns whether or not a package is present.""" |
638 | + return self._state in (PackageState.Present, PackageState.Latest) |
639 | + |
640 | + @property |
641 | + def latest(self) -> bool: |
642 | + """Returns whether the package is the most recent version.""" |
643 | + return self._state is PackageState.Latest |
644 | + |
645 | + @property |
646 | + def state(self) -> PackageState: |
647 | + """Returns the current package state.""" |
648 | + return self._state |
649 | + |
650 | + @state.setter |
651 | + def state(self, state: PackageState) -> None: |
652 | + """Sets the package state to a given value. |
653 | + |
654 | + Args: |
655 | + state: a `PackageState` to reconcile the package to |
656 | + |
657 | + Raises: |
658 | + PackageError from the underlying call to apt |
659 | + """ |
660 | + if state in (PackageState.Latest, PackageState.Present): |
661 | + self._add() |
662 | + else: |
663 | + self._remove() |
664 | + self._state = state |
665 | + |
666 | + @property |
667 | + def version(self) -> "Version": |
668 | + """Returns the version for a package.""" |
669 | + return self._version |
670 | + |
671 | + @property |
672 | + def epoch(self) -> str: |
673 | + """Returns the epoch for a package. May be unset.""" |
674 | + return self._version.epoch |
675 | + |
676 | + @property |
677 | + def arch(self) -> str: |
678 | + """Returns the architecture for a package.""" |
679 | + return self._arch |
680 | + |
681 | + @property |
682 | + def fullversion(self) -> str: |
683 | + """Returns the name+epoch for a package.""" |
684 | + return "{}.{}".format(self._version, self._arch) |
685 | + |
686 | + @staticmethod |
687 | + def _get_epoch_from_version(version: str) -> Tuple[str, str]: |
688 | + """Pull the epoch, if any, out of a version string.""" |
689 | + epoch_matcher = re.compile(r"^((?P<epoch>\d+):)?(?P<version>.*)") |
690 | + matches = epoch_matcher.search(version).groupdict() |
691 | + return matches.get("epoch", ""), matches.get("version") |
692 | + |
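The epoch-splitting regexp above can be exercised standalone. A small sketch (independent of the charm library; `split_epoch` is an illustrative name) showing how it separates the optional epoch from the rest of a Debian version string — note that the `.get("epoch", "")` default never fires, because the named group is always present in `groupdict()` (as `None` when unmatched), which callers handle via `epoch or ""`:

```python
import re

# Same pattern as _get_epoch_from_version: an optional leading "<digits>:"
# epoch, followed by the remainder of the version string.
epoch_matcher = re.compile(r"^((?P<epoch>\d+):)?(?P<version>.*)")


def split_epoch(version: str):
    matches = epoch_matcher.search(version).groupdict()
    return matches.get("epoch", ""), matches.get("version")


print(split_epoch("1:2.38-3ubuntu1"))  # ('1', '2.38-3ubuntu1')
print(split_epoch("2.38-3ubuntu1"))    # (None, '2.38-3ubuntu1')
```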
693 | + @classmethod |
694 | + def from_system( |
695 | + cls, package: str, version: Optional[str] = "", arch: Optional[str] = "" |
696 | + ) -> "DebianPackage": |
697 | + """Locates a package, either on the system or known to apt, and serializes the information. |
698 | + |
699 | + Args: |
700 | + package: a string representing the package |
701 | +            version: an optional string if a specific version is requested
702 | + arch: an optional architecture, defaulting to `dpkg --print-architecture`. If an |
703 | + architecture is not specified, this will be used for selection. |
704 | + |
705 | + """ |
706 | + try: |
707 | + return DebianPackage.from_installed_package(package, version, arch) |
708 | + except PackageNotFoundError: |
709 | + logger.debug( |
710 | + "package '%s' is not currently installed or has the wrong architecture.", package |
711 | + ) |
712 | + |
713 | + # Ok, try `apt-cache ...` |
714 | + try: |
715 | + return DebianPackage.from_apt_cache(package, version, arch) |
716 | + except (PackageNotFoundError, PackageError): |
717 | + # If we get here, it's not known to the systems. |
718 | + # This seems unnecessary, but virtually all `apt` commands have a return code of `100`, |
719 | + # and providing meaningful error messages without this is ugly. |
720 | + raise PackageNotFoundError( |
721 | + "Package '{}{}' could not be found on the system or in the apt cache!".format( |
722 | + package, ".{}".format(arch) if arch else "" |
723 | + ) |
724 | + ) from None |
725 | + |
726 | + @classmethod |
727 | + def from_installed_package( |
728 | + cls, package: str, version: Optional[str] = "", arch: Optional[str] = "" |
729 | + ) -> "DebianPackage": |
730 | + """Check whether the package is already installed and return an instance. |
731 | + |
732 | + Args: |
733 | + package: a string representing the package |
734 | +            version: an optional string if a specific version is requested
735 | + arch: an optional architecture, defaulting to `dpkg --print-architecture`. |
736 | + If an architecture is not specified, this will be used for selection. |
737 | + """ |
738 | + system_arch = check_output( |
739 | + ["dpkg", "--print-architecture"], universal_newlines=True |
740 | + ).strip() |
741 | + arch = arch if arch else system_arch |
742 | + |
743 | + # Regexps are a really terrible way to do this. Thanks dpkg |
744 | + output = "" |
745 | + try: |
746 | + output = check_output(["dpkg", "-l", package], stderr=PIPE, universal_newlines=True) |
747 | + except CalledProcessError: |
748 | + raise PackageNotFoundError("Package is not installed: {}".format(package)) from None |
749 | + |
750 | +        # Pop off the header output from `dpkg -l` because there's no
751 | +        # flag to omit it
752 | + lines = str(output).splitlines()[5:] |
753 | + |
754 | + dpkg_matcher = re.compile( |
755 | + r""" |
756 | + ^(?P<package_status>\w+?)\s+ |
757 | + (?P<package_name>.*?)(?P<throwaway_arch>:\w+?)?\s+ |
758 | + (?P<version>.*?)\s+ |
759 | + (?P<arch>\w+?)\s+ |
760 | + (?P<description>.*) |
761 | + """, |
762 | + re.VERBOSE, |
763 | + ) |
764 | + |
765 | + for line in lines: |
766 | + try: |
767 | + matches = dpkg_matcher.search(line).groupdict() |
768 | + package_status = matches["package_status"] |
769 | + |
770 | + if not package_status.endswith("i"): |
771 | + logger.debug( |
772 | + "package '%s' in dpkg output but not installed, status: '%s'", |
773 | + package, |
774 | + package_status, |
775 | + ) |
776 | + break |
777 | + |
778 | + epoch, split_version = DebianPackage._get_epoch_from_version(matches["version"]) |
779 | + pkg = DebianPackage( |
780 | + matches["package_name"], |
781 | + split_version, |
782 | + epoch, |
783 | + matches["arch"], |
784 | + PackageState.Present, |
785 | + ) |
786 | + if (pkg.arch == "all" or pkg.arch == arch) and ( |
787 | + version == "" or str(pkg.version) == version |
788 | + ): |
789 | + return pkg |
790 | + except AttributeError: |
791 | + logger.warning("dpkg matcher could not parse line: %s", line) |
792 | + |
793 | + # If we didn't find it, fail through |
794 | + raise PackageNotFoundError("Package {}.{} is not installed!".format(package, arch)) |
795 | + |
796 | + @classmethod |
797 | + def from_apt_cache( |
798 | + cls, package: str, version: Optional[str] = "", arch: Optional[str] = "" |
799 | + ) -> "DebianPackage": |
800 | + """Check whether the package is already installed and return an instance. |
801 | + |
802 | + Args: |
803 | + package: a string representing the package |
804 | +            version: an optional string if a specific version is requested
805 | + arch: an optional architecture, defaulting to `dpkg --print-architecture`. |
806 | + If an architecture is not specified, this will be used for selection. |
807 | + """ |
808 | + system_arch = check_output( |
809 | + ["dpkg", "--print-architecture"], universal_newlines=True |
810 | + ).strip() |
811 | + arch = arch if arch else system_arch |
812 | + |
813 | +        # Parse the record-per-package output of `apt-cache show`
814 | +
816 | + try: |
817 | + output = check_output( |
818 | + ["apt-cache", "show", package], stderr=PIPE, universal_newlines=True |
819 | + ) |
820 | + except CalledProcessError as e: |
821 | + raise PackageError( |
822 | + "Could not list packages in apt-cache: {}".format(e.output) |
823 | + ) from None |
824 | + |
825 | + pkg_groups = output.strip().split("\n\n") |
826 | + keys = ("Package", "Architecture", "Version") |
827 | + |
828 | + for pkg_raw in pkg_groups: |
829 | + lines = str(pkg_raw).splitlines() |
830 | + vals = {} |
831 | + for line in lines: |
832 | + if line.startswith(keys): |
833 | + items = line.split(":", 1) |
834 | + vals[items[0]] = items[1].strip() |
835 | + else: |
836 | + continue |
837 | + |
838 | + epoch, split_version = DebianPackage._get_epoch_from_version(vals["Version"]) |
839 | + pkg = DebianPackage( |
840 | + vals["Package"], |
841 | + split_version, |
842 | + epoch, |
843 | + vals["Architecture"], |
844 | + PackageState.Available, |
845 | + ) |
846 | + |
847 | + if (pkg.arch == "all" or pkg.arch == arch) and ( |
848 | + version == "" or str(pkg.version) == version |
849 | + ): |
850 | + return pkg |
851 | + |
852 | + # If we didn't find it, fail through |
853 | + raise PackageNotFoundError("Package {}.{} is not in the apt cache!".format(package, arch)) |
854 | + |
855 | + |
856 | +class Version: |
857 | + """An abstraction around package versions. |
858 | + |
859 | + This seems like it should be strictly unnecessary, except that `apt_pkg` is not usable inside a |
860 | +    venv, and wedging version comparisons into `DebianPackage` would overcomplicate it.
861 | + |
862 | + This class implements the algorithm found here: |
863 | + https://www.debian.org/doc/debian-policy/ch-controlfields.html#version |
864 | + """ |
865 | + |
866 | + def __init__(self, version: str, epoch: str): |
867 | + self._version = version |
868 | + self._epoch = epoch or "" |
869 | + |
870 | + def __repr__(self): |
871 | +        """A machine-readable representation of the version."""
872 | + return "<{}.{}: {}>".format(self.__module__, self.__class__.__name__, self.__dict__) |
873 | + |
874 | + def __str__(self): |
875 | +        """A human-readable representation of the version."""
876 | + return "{}{}".format("{}:".format(self._epoch) if self._epoch else "", self._version) |
877 | + |
878 | + @property |
879 | + def epoch(self): |
880 | + """Returns the epoch for a package. May be empty.""" |
881 | + return self._epoch |
882 | + |
883 | + @property |
884 | + def number(self) -> str: |
885 | + """Returns the version number for a package.""" |
886 | + return self._version |
887 | + |
888 | + def _get_parts(self, version: str) -> Tuple[str, str]: |
889 | + """Separate the version into component upstream and Debian pieces.""" |
890 | + try: |
891 | + version.rindex("-") |
892 | + except ValueError: |
893 | + # No hyphens means no Debian version |
894 | + return version, "0" |
895 | + |
896 | + upstream, debian = version.rsplit("-", 1) |
897 | + return upstream, debian |
898 | + |
899 | + def _listify(self, revision: str) -> List[str]: |
900 | +        """Split a revision string into a list.
901 | +
902 | +        The list alternates between strings and numbers, padded on either
903 | +        end so that it is always "str, int, str, int..." and always of
904 | +        even length. This allows us to trivially implement the comparison
905 | +        algorithm described.
906 | + """ |
907 | + result = [] |
908 | + while revision: |
909 | + rev_1, remains = self._get_alphas(revision) |
910 | + rev_2, remains = self._get_digits(remains) |
911 | + result.extend([rev_1, rev_2]) |
912 | + revision = remains |
913 | + return result |
914 | + |
915 | + def _get_alphas(self, revision: str) -> Tuple[str, str]: |
916 | + """Return a tuple of the first non-digit characters of a revision.""" |
917 | + # get the index of the first digit |
918 | + for i, char in enumerate(revision): |
919 | + if char.isdigit(): |
920 | + if i == 0: |
921 | + return "", revision |
922 | + return revision[0:i], revision[i:] |
923 | + # string is entirely alphas |
924 | + return revision, "" |
925 | + |
926 | + def _get_digits(self, revision: str) -> Tuple[int, str]: |
927 | + """Return a tuple of the first integer characters of a revision.""" |
928 | + # If the string is empty, return (0,'') |
929 | + if not revision: |
930 | + return 0, "" |
931 | + # get the index of the first non-digit |
932 | + for i, char in enumerate(revision): |
933 | + if not char.isdigit(): |
934 | + if i == 0: |
935 | + return 0, revision |
936 | + return int(revision[0:i]), revision[i:] |
937 | + # string is entirely digits |
938 | + return int(revision), "" |
939 | + |
940 | + def _dstringcmp(self, a, b): # noqa: C901 |
941 | + """Debian package version string section lexical sort algorithm. |
942 | + |
943 | + The lexical comparison is a comparison of ASCII values modified so |
944 | + that all the letters sort earlier than all the non-letters and so that |
945 | + a tilde sorts before anything, even the end of a part. |
946 | + """ |
947 | + if a == b: |
948 | + return 0 |
949 | + try: |
950 | + for i, char in enumerate(a): |
951 | + if char == b[i]: |
952 | + continue |
953 | + # "a tilde sorts before anything, even the end of a part" |
954 | + # (emptyness) |
955 | + if char == "~": |
956 | + return -1 |
957 | + if b[i] == "~": |
958 | + return 1 |
959 | + # "all the letters sort earlier than all the non-letters" |
960 | + if char.isalpha() and not b[i].isalpha(): |
961 | + return -1 |
962 | + if not char.isalpha() and b[i].isalpha(): |
963 | + return 1 |
964 | + # otherwise lexical sort |
965 | + if ord(char) > ord(b[i]): |
966 | + return 1 |
967 | + if ord(char) < ord(b[i]): |
968 | + return -1 |
969 | + except IndexError: |
970 | + # a is longer than b but otherwise equal, greater unless there are tildes |
971 | + if char == "~": |
972 | + return -1 |
973 | + return 1 |
974 | + # if we get here, a is shorter than b but otherwise equal, so check for tildes... |
975 | + if b[len(a)] == "~": |
976 | + return 1 |
977 | + return -1 |
978 | + |
979 | + def _compare_revision_strings(self, first: str, second: str): # noqa: C901 |
980 | + """Compare two debian revision strings.""" |
981 | + if first == second: |
982 | + return 0 |
983 | + |
984 | + # listify pads results so that we will always be comparing ints to ints |
985 | + # and strings to strings (at least until we fall off the end of a list) |
986 | + first_list = self._listify(first) |
987 | + second_list = self._listify(second) |
988 | + if first_list == second_list: |
989 | + return 0 |
990 | + try: |
991 | + for i, item in enumerate(first_list): |
992 | + # explicitly raise IndexError if we've fallen off the edge of list2 |
993 | + if i >= len(second_list): |
994 | + raise IndexError |
995 | + # if the items are equal, next |
996 | + if item == second_list[i]: |
997 | + continue |
998 | + # numeric comparison |
999 | + if isinstance(item, int): |
1000 | + if item > second_list[i]: |
1001 | + return 1 |
1002 | + if item < second_list[i]: |
1003 | + return -1 |
1004 | + else: |
1005 | + # string comparison |
1006 | + return self._dstringcmp(item, second_list[i]) |
1007 | + except IndexError: |
1008 | +            # rev1 is longer than rev2 but otherwise equal, hence greater
1009 | +            # ...except for goddamn tildes, which sort before anything
1010 | +            if first_list[len(second_list)][0][0] == "~":
1011 | +                return -1
1012 | +            return 1
1013 | +        # rev1 is shorter than rev2 but otherwise equal, hence lesser
1014 | +        # ...except for goddamn tildes, which sort before anything
1015 | +        if second_list[len(first_list)][0][0] == "~":
1016 | +            return 1
1017 | +        return -1
1018 | + |
1019 | + def _compare_version(self, other) -> int: |
1020 | + if (self.number, self.epoch) == (other.number, other.epoch): |
1021 | + return 0 |
1022 | + |
1023 | + if self.epoch < other.epoch: |
1024 | + return -1 |
1025 | + if self.epoch > other.epoch: |
1026 | + return 1 |
1027 | + |
1028 | + # If none of these are true, follow the algorithm |
1029 | + upstream_version, debian_version = self._get_parts(self.number) |
1030 | + other_upstream_version, other_debian_version = self._get_parts(other.number) |
1031 | + |
1032 | + upstream_cmp = self._compare_revision_strings(upstream_version, other_upstream_version) |
1033 | + if upstream_cmp != 0: |
1034 | + return upstream_cmp |
1035 | + |
1036 | + debian_cmp = self._compare_revision_strings(debian_version, other_debian_version) |
1037 | + if debian_cmp != 0: |
1038 | + return debian_cmp |
1039 | + |
1040 | + return 0 |
1041 | + |
1042 | + def __lt__(self, other) -> bool: |
1043 | + """Less than magic method impl.""" |
1044 | + return self._compare_version(other) < 0 |
1045 | + |
1046 | + def __eq__(self, other) -> bool: |
1047 | + """Equality magic method impl.""" |
1048 | + return self._compare_version(other) == 0 |
1049 | + |
1050 | + def __gt__(self, other) -> bool: |
1051 | + """Greater than magic method impl.""" |
1052 | + return self._compare_version(other) > 0 |
1053 | + |
1054 | + def __le__(self, other) -> bool: |
1055 | + """Less than or equal to magic method impl.""" |
1056 | + return self.__eq__(other) or self.__lt__(other) |
1057 | + |
1058 | + def __ge__(self, other) -> bool: |
1059 | + """Greater than or equal to magic method impl.""" |
1060 | + return self.__gt__(other) or self.__eq__(other) |
1061 | + |
1062 | + def __ne__(self, other) -> bool: |
1063 | + """Not equal to magic method impl.""" |
1064 | + return not self.__eq__(other) |
1065 | + |
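The alternating split performed by `_listify` is the heart of the comparison algorithm. A standalone sketch (mirroring, not importing, the `_get_alphas`/`_get_digits`/`_listify` helpers above; the simplified names are illustrative) shows the even-length str/int alternation:

```python
def get_alphas(s):
    # Leading run of non-digit characters, plus the remainder.
    for i, ch in enumerate(s):
        if ch.isdigit():
            return s[:i], s[i:]
    return s, ""


def get_digits(s):
    # Leading run of digit characters (as an int), plus the remainder.
    for i, ch in enumerate(s):
        if not ch.isdigit():
            return int(s[:i] or 0), s[i:]
    return int(s or 0), ""


def listify(revision):
    # Always extends by [str, int] pairs, so the result has even length.
    result = []
    while revision:
        alphas, rest = get_alphas(revision)
        digits, rest = get_digits(rest)
        result.extend([alphas, digits])
        revision = rest
    return result


print(listify("1ubuntu2"))  # ['', 1, 'ubuntu', 2]
print(listify("1.0"))       # ['', 1, '.', 0]
```

Because both sides of a comparison are padded into the same shape, the algorithm only ever compares ints to ints and strings to strings until one list runs out.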
1066 | + |
1067 | +def add_package( |
1068 | + package_names: Union[str, List[str]], |
1069 | + version: Optional[str] = "", |
1070 | + arch: Optional[str] = "", |
1071 | + update_cache: Optional[bool] = False, |
1072 | +) -> Union[DebianPackage, List[DebianPackage]]: |
1073 | + """Add a package or list of packages to the system. |
1074 | + |
1075 | + Args: |
1076 | +        package_names: the name(s) of the package(s)
1077 | + version: an (Optional) version as a string. Defaults to the latest known |
1078 | + arch: an optional architecture for the package |
1079 | + update_cache: whether or not to run `apt-get update` prior to operating |
1080 | + |
1081 | + Raises: |
1082 | +        PackageError if any package cannot be installed after a cache refresh and retry.
1083 | + """ |
1084 | + cache_refreshed = False |
1085 | + if update_cache: |
1086 | + update() |
1087 | + cache_refreshed = True |
1088 | + |
1089 | + packages = {"success": [], "retry": [], "failed": []} |
1090 | + |
1091 | + package_names = [package_names] if type(package_names) is str else package_names |
1092 | + if not package_names: |
1093 | + raise TypeError("Expected at least one package name to add, received zero!") |
1094 | + |
1095 | + if len(package_names) != 1 and version: |
1096 | + raise TypeError( |
1097 | + "Explicit version should not be set if more than one package is being added!" |
1098 | + ) |
1099 | + |
1100 | + for p in package_names: |
1101 | + pkg, success = _add(p, version, arch) |
1102 | + if success: |
1103 | + packages["success"].append(pkg) |
1104 | + else: |
1105 | + logger.warning("failed to locate and install/update '%s'", pkg) |
1106 | + packages["retry"].append(p) |
1107 | + |
1108 | + if packages["retry"] and not cache_refreshed: |
1109 | + logger.info("updating the apt-cache and retrying installation of failed packages.") |
1110 | + update() |
1111 | + |
1112 | + for p in packages["retry"]: |
1113 | + pkg, success = _add(p, version, arch) |
1114 | + if success: |
1115 | + packages["success"].append(pkg) |
1116 | + else: |
1117 | + packages["failed"].append(p) |
1118 | + |
1119 | + if packages["failed"]: |
1120 | + raise PackageError("Failed to install packages: {}".format(", ".join(packages["failed"]))) |
1121 | + |
1122 | + return packages["success"] if len(packages["success"]) > 1 else packages["success"][0] |
1123 | + |
1124 | + |
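The retry flow in `add_package` (try each package once, refresh the cache a single time if anything failed, retry only the failures) can be sketched standalone. `install_with_retry` and the fake cache below are illustrative, not part of the library:

```python
# A standalone sketch (not part of the library) of add_package's control
# flow: try each package once, refresh the cache a single time if any
# attempt failed, then retry only the failures.
def install_with_retry(names, install, refresh):
    names = [names] if isinstance(names, str) else names
    success, retry, failed = [], [], []
    for name in names:
        (success if install(name) else retry).append(name)
    if retry:
        refresh()  # stands in for `apt-get update`
        for name in retry:
            (success if install(name) else failed).append(name)
    if failed:
        raise RuntimeError("failed to install: {}".format(", ".join(failed)))
    return success


# Simulated cache: "nginx" only becomes installable after a refresh.
cache = {"curl"}
result = install_with_retry(
    ["curl", "nginx"],
    install=lambda name: name in cache,
    refresh=lambda: cache.add("nginx"),
)
print(result)  # ['curl', 'nginx']
```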
1125 | +def _add( |
1126 | + name: str, |
1127 | + version: Optional[str] = "", |
1128 | + arch: Optional[str] = "", |
1129 | +) -> Tuple[Union[DebianPackage, str], bool]: |
1130 | + """Adds a package. |
1131 | + |
1132 | + Args: |
1133 | +        name: the name of the package
1134 | + version: an (Optional) version as a string. Defaults to the latest known |
1135 | + arch: an optional architecture for the package |
1136 | + |
1137 | +    Returns: a tuple of the `DebianPackage` (or the package name as a `str`
1138 | +        if it is not found) and a boolean indicating success
1139 | + """ |
1140 | + try: |
1141 | + pkg = DebianPackage.from_system(name, version, arch) |
1142 | + pkg.ensure(state=PackageState.Present) |
1143 | + return pkg, True |
1144 | + except PackageNotFoundError: |
1145 | + return name, False |
1146 | + |
1147 | + |
1148 | +def remove_package( |
1149 | + package_names: Union[str, List[str]] |
1150 | +) -> Union[DebianPackage, List[DebianPackage]]: |
1151 | +    """Remove a package or list of packages from the system.
1152 | + |
1153 | +    Args:
1154 | +        package_names: the name(s) of the package(s)
1155 | +
1156 | +    Packages that are not installed are skipped with a log message
1157 | +    rather than raising an error.
1158 | +    """
1159 | + packages = [] |
1160 | + |
1161 | + package_names = [package_names] if type(package_names) is str else package_names |
1162 | + if not package_names: |
1163 | +        raise TypeError("Expected at least one package name to remove, received zero!")
1164 | + |
1165 | + for p in package_names: |
1166 | + try: |
1167 | + pkg = DebianPackage.from_installed_package(p) |
1168 | + pkg.ensure(state=PackageState.Absent) |
1169 | + packages.append(pkg) |
1170 | + except PackageNotFoundError: |
1171 | + logger.info("package '%s' was requested for removal, but it was not installed.", p) |
1172 | + |
1173 | + # the list of packages will be empty when no package is removed |
1174 | + logger.debug("packages: '%s'", packages) |
1175 | + return packages[0] if len(packages) == 1 else packages |
1176 | + |
1177 | + |
1178 | +def update() -> None: |
1179 | + """Updates the apt cache via `apt-get update`.""" |
1180 | + check_call(["apt-get", "update"], stderr=PIPE, stdout=PIPE) |
1181 | + |
1182 | + |
1183 | +class InvalidSourceError(Error): |
1184 | + """Exceptions for invalid source entries.""" |
1185 | + |
1186 | + |
1187 | +class GPGKeyError(Error): |
1188 | + """Exceptions for GPG keys.""" |
1189 | + |
1190 | + |
1191 | +class DebianRepository: |
1192 | + """An abstraction to represent a repository.""" |
1193 | + |
1194 | + def __init__( |
1195 | + self, |
1196 | + enabled: bool, |
1197 | + repotype: str, |
1198 | + uri: str, |
1199 | + release: str, |
1200 | + groups: List[str], |
1201 | + filename: Optional[str] = "", |
1202 | + gpg_key_filename: Optional[str] = "", |
1203 | + options: Optional[dict] = None, |
1204 | + ): |
1205 | + self._enabled = enabled |
1206 | + self._repotype = repotype |
1207 | + self._uri = uri |
1208 | + self._release = release |
1209 | + self._groups = groups |
1210 | + self._filename = filename |
1211 | + self._gpg_key_filename = gpg_key_filename |
1212 | + self._options = options |
1213 | + |
1214 | + @property |
1215 | + def enabled(self): |
1216 | + """Return whether or not the repository is enabled.""" |
1217 | + return self._enabled |
1218 | + |
1219 | + @property |
1220 | + def repotype(self): |
1221 | + """Return whether it is binary or source.""" |
1222 | + return self._repotype |
1223 | + |
1224 | + @property |
1225 | + def uri(self): |
1226 | + """Return the URI.""" |
1227 | + return self._uri |
1228 | + |
1229 | + @property |
1230 | + def release(self): |
1231 | + """Return which Debian/Ubuntu releases it is valid for.""" |
1232 | + return self._release |
1233 | + |
1234 | + @property |
1235 | + def groups(self): |
1236 | + """Return the enabled package groups.""" |
1237 | + return self._groups |
1238 | + |
1239 | + @property |
1240 | + def filename(self): |
1241 | + """Returns the filename for a repository.""" |
1242 | + return self._filename |
1243 | + |
1244 | + @filename.setter |
1245 | + def filename(self, fname: str) -> None: |
1246 | +        """Sets the filename used when a repo is written back to disk.
1247 | + |
1248 | + Args: |
1249 | + fname: a filename to write the repository information to. |
1250 | + """ |
1251 | + if not fname.endswith(".list"): |
1252 | + raise InvalidSourceError("apt source filenames should end in .list!") |
1253 | + |
1254 | + self._filename = fname |
1255 | + |
1256 | + @property |
1257 | + def gpg_key(self): |
1258 | + """Returns the path to the GPG key for this repository.""" |
1259 | + return self._gpg_key_filename |
1260 | + |
1261 | + @property |
1262 | + def options(self): |
1263 | + """Returns any additional repo options which are set.""" |
1264 | + return self._options |
1265 | + |
1266 | + def make_options_string(self) -> str: |
1267 | +        """Generate the complete options string for a repository.
1268 | +
1269 | +        Combines `gpg_key`, if set, with the rest of the options into a
1270 | +        single repo options string.
1271 | + """ |
1272 | + options = self._options if self._options else {} |
1273 | + if self._gpg_key_filename: |
1274 | + options["signed-by"] = self._gpg_key_filename |
1275 | + |
1276 | + return ( |
1277 | + "[{}] ".format(" ".join(["{}={}".format(k, v) for k, v in options.items()])) |
1278 | + if options |
1279 | + else "" |
1280 | + ) |
1281 | + |
1282 | + @staticmethod |
1283 | + def prefix_from_uri(uri: str) -> str: |
1284 | + """Get a repo list prefix from the uri, depending on whether a path is set.""" |
1285 | + uridetails = urlparse(uri) |
1286 | + path = ( |
1287 | + uridetails.path.lstrip("/").replace("/", "-") if uridetails.path else uridetails.netloc |
1288 | + ) |
1289 | + return "/etc/apt/sources.list.d/{}".format(path) |
1290 | + |
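`prefix_from_uri` derives the sources.list.d prefix from the URI path or, when the path is empty, from the host. A quick standalone check of that logic (the example URIs are illustrative):

```python
from urllib.parse import urlparse


def prefix_from_uri(uri):
    # Prefer the path (slashes folded to dashes); fall back to the host.
    parts = urlparse(uri)
    path = parts.path.lstrip("/").replace("/", "-") if parts.path else parts.netloc
    return "/etc/apt/sources.list.d/{}".format(path)


print(prefix_from_uri("http://archive.ubuntu.com/ubuntu"))
# /etc/apt/sources.list.d/ubuntu
print(prefix_from_uri("http://ppa.launchpad.net"))
# /etc/apt/sources.list.d/ppa.launchpad.net
```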
1291 | + @staticmethod |
1292 | + def from_repo_line(repo_line: str, write_file: Optional[bool] = True) -> "DebianRepository": |
1293 | +        """Instantiate a new `DebianRepository` from a `sources.list` entry line.
1294 | + |
1295 | + Args: |
1296 | + repo_line: a string representing a repository entry |
1297 | + write_file: boolean to enable writing the new repo to disk |
1298 | + """ |
1299 | + repo = RepositoryMapping._parse(repo_line, "UserInput") |
1300 | + fname = "{}-{}.list".format( |
1301 | + DebianRepository.prefix_from_uri(repo.uri), repo.release.replace("/", "-") |
1302 | + ) |
1303 | + repo.filename = fname |
1304 | + |
1305 | + options = repo.options if repo.options else {} |
1306 | + if repo.gpg_key: |
1307 | + options["signed-by"] = repo.gpg_key |
1308 | + |
1309 | + # For Python 3.5 it's required to use sorted in the options dict in order to not have |
1310 | + # different results in the order of the options between executions. |
1311 | + options_str = ( |
1312 | + "[{}] ".format(" ".join(["{}={}".format(k, v) for k, v in sorted(options.items())])) |
1313 | + if options |
1314 | + else "" |
1315 | + ) |
1316 | + |
1317 | + if write_file: |
1318 | + with open(fname, "wb") as f: |
1319 | + f.write( |
1320 | + ( |
1321 | + "{}".format("#" if not repo.enabled else "") |
1322 | + + "{} {}{} ".format(repo.repotype, options_str, repo.uri) |
1323 | + + "{} {}\n".format(repo.release, " ".join(repo.groups)) |
1324 | + ).encode("utf-8") |
1325 | + ) |
1326 | + |
1327 | + return repo |
1328 | + |
1329 | + def disable(self) -> None: |
1330 | + """Remove this repository from consideration. |
1331 | + |
1332 | + Disable it instead of removing from the repository file. |
1333 | + """ |
1334 | + searcher = "{} {}{} {}".format( |
1335 | + self.repotype, self.make_options_string(), self.uri, self.release |
1336 | + ) |
1337 | + for line in fileinput.input(self._filename, inplace=True): |
1338 | + if re.match(r"^{}\s".format(re.escape(searcher)), line): |
1339 | + print("# {}".format(line), end="") |
1340 | + else: |
1341 | + print(line, end="") |
1342 | + |
1343 | + def import_key(self, key: str) -> None: |
1344 | + """Import an ASCII Armor key. |
1345 | + |
1346 | + A Radix64 format keyid is also supported for backwards |
1347 | + compatibility. In this case Ubuntu keyserver will be |
1348 | + queried for a key via HTTPS by its keyid. This method |
1349 | +        is less preferable because HTTPS proxy servers may
1350 | + require traffic decryption which is equivalent to a |
1351 | + man-in-the-middle attack (a proxy server impersonates |
1352 | + keyserver TLS certificates and has to be explicitly |
1353 | + trusted by the system). |
1354 | + |
1355 | + Args: |
1356 | + key: A GPG key in ASCII armor format, |
1357 | + including BEGIN and END markers or a keyid. |
1358 | + |
1359 | + Raises: |
1360 | + GPGKeyError if the key could not be imported |
1361 | + """ |
1362 | + key = key.strip() |
1363 | + if "-" in key or "\n" in key: |
1364 | + # Send everything not obviously a keyid to GPG to import, as |
1365 | + # we trust its validation better than our own. eg. handling |
1366 | + # comments before the key. |
1367 | + logger.debug("PGP key found (looks like ASCII Armor format)") |
1368 | + if ( |
1369 | + "-----BEGIN PGP PUBLIC KEY BLOCK-----" in key |
1370 | + and "-----END PGP PUBLIC KEY BLOCK-----" in key |
1371 | + ): |
1372 | + logger.debug("Writing provided PGP key in the binary format") |
1373 | + key_bytes = key.encode("utf-8") |
1374 | + key_name = self._get_keyid_by_gpg_key(key_bytes) |
1375 | + key_gpg = self._dearmor_gpg_key(key_bytes) |
1376 | + self._gpg_key_filename = "/etc/apt/trusted.gpg.d/{}.gpg".format(key_name) |
1377 | + self._write_apt_gpg_keyfile(key_name=self._gpg_key_filename, key_material=key_gpg) |
1378 | + else: |
1379 | + raise GPGKeyError("ASCII armor markers missing from GPG key") |
1380 | + else: |
1381 | + logger.warning( |
1382 | + "PGP key found (looks like Radix64 format). " |
1383 | + "SECURELY importing PGP key from keyserver; " |
1384 | + "full key not provided." |
1385 | + ) |
1386 | + # as of bionic add-apt-repository uses curl with an HTTPS keyserver URL |
1387 | + # to retrieve GPG keys. `apt-key adv` command is deprecated as is |
1388 | + # apt-key in general as noted in its manpage. See lp:1433761 for more |
1389 | + # history. Instead, /etc/apt/trusted.gpg.d is used directly to drop |
1390 | + # gpg |
1391 | + key_asc = self._get_key_by_keyid(key) |
1392 | + # write the key in GPG format so that apt-key list shows it |
1393 | + key_gpg = self._dearmor_gpg_key(key_asc.encode("utf-8")) |
1394 | + self._gpg_key_filename = "/etc/apt/trusted.gpg.d/{}.gpg".format(key) |
1395 | + self._write_apt_gpg_keyfile(key_name=key, key_material=key_gpg) |
1396 | + |
1397 | + @staticmethod |
1398 | + def _get_keyid_by_gpg_key(key_material: bytes) -> str: |
1399 | + """Get a GPG key fingerprint by GPG key material. |
1400 | + |
1401 | + Gets a GPG key fingerprint (40-digit, 160-bit) by the ASCII armor-encoded |
1402 | + or binary GPG key material. Can be used, for example, to generate file |
1403 | + names for keys passed via charm options. |
1404 | + """ |
1405 | + # Use the same gpg command for both Xenial and Bionic |
1406 | + cmd = ["gpg", "--with-colons", "--with-fingerprint"] |
1407 | + ps = subprocess.run( |
1408 | + cmd, |
1409 | + stdout=PIPE, |
1410 | + stderr=PIPE, |
1411 | + input=key_material, |
1412 | + ) |
1413 | + out, err = ps.stdout.decode(), ps.stderr.decode() |
1414 | + if "gpg: no valid OpenPGP data found." in err: |
1415 | + raise GPGKeyError("Invalid GPG key material provided") |
1416 | + # from gnupg2 docs: fpr :: Fingerprint (fingerprint is in field 10) |
1417 | + return re.search(r"^fpr:{9}([0-9A-F]{40}):$", out, re.MULTILINE).group(1) |
1418 | + |
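The fingerprint extraction above relies on gpg's machine-readable colon format, where an `fpr` record carries the fingerprint in field 10. The regexp can be checked against a sample record built from the 40-digit key ID quoted later in this file:

```python
import re

# Sample "fpr" record in gpg --with-colons format: field 1 is the record
# type, fields 2-9 are empty, field 10 is the 40-hex-digit fingerprint.
sample = "fpr:::::::::35F77D63B5CEC106C577ED856E85A86E4652B4E6:"

match = re.search(r"^fpr:{9}([0-9A-F]{40}):$", sample, re.MULTILINE)
print(match.group(1))  # 35F77D63B5CEC106C577ED856E85A86E4652B4E6
```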
1419 | + @staticmethod |
1420 | + def _get_key_by_keyid(keyid: str) -> str: |
1421 | + """Get a key via HTTPS from the Ubuntu keyserver. |
1422 | + |
1423 | + Different key ID formats are supported by SKS keyservers (the longer ones |
1424 | + are more secure, see "dead beef attack" and https://evil32.com/). Since |
1425 | + HTTPS is used, if SSLBump-like HTTPS proxies are in place, they will |
1426 | + impersonate keyserver.ubuntu.com and generate a certificate with |
1427 | + keyserver.ubuntu.com in the CN field or in SubjAltName fields of a |
1428 | + certificate. If such proxy behavior is expected it is necessary to add the |
1429 | + CA certificate chain containing the intermediate CA of the SSLBump proxy to |
1430 | + every machine that this code runs on via ca-certs cloud-init directive (via |
1431 | + cloudinit-userdata model-config) or via other means (such as through a |
1432 | + custom charm option). Also note that DNS resolution for the hostname in a |
1433 | + URL is done at a proxy server - not at the client side. |
1434 | + 8-digit (32 bit) key ID |
1435 | + https://keyserver.ubuntu.com/pks/lookup?search=0x4652B4E6 |
1436 | + 16-digit (64 bit) key ID |
1437 | + https://keyserver.ubuntu.com/pks/lookup?search=0x6E85A86E4652B4E6 |
1438 | + 40-digit key ID: |
1439 | + https://keyserver.ubuntu.com/pks/lookup?search=0x35F77D63B5CEC106C577ED856E85A86E4652B4E6 |
1440 | + |
1441 | + Args: |
1442 | + keyid: An 8, 16 or 40 hex digit keyid to find a key for |
1443 | + |
1444 | + Returns: |
1445 | +            A string containing key material for the specified GPG key id
1446 | +
1448 | + Raises: |
1449 | + subprocess.CalledProcessError |
1450 | + """ |
1451 | + # options=mr - machine-readable output (disables html wrappers) |
1452 | + keyserver_url = ( |
1453 | + "https://keyserver.ubuntu.com" "/pks/lookup?op=get&options=mr&exact=on&search=0x{}" |
1454 | + ) |
1455 | + curl_cmd = ["curl", keyserver_url.format(keyid)] |
1456 | + # use proxy server settings in order to retrieve the key |
1457 | + return check_output(curl_cmd).decode() |
1458 | + |
1459 | + @staticmethod |
1460 | + def _dearmor_gpg_key(key_asc: bytes) -> bytes: |
1461 | + """Converts a GPG key in the ASCII armor format to the binary format. |
1462 | + |
1463 | + Args: |
1464 | + key_asc: A GPG key in ASCII armor format. |
1465 | + |
1466 | + Returns: |
1467 | + A GPG key in binary format as a string |
1468 | + |
1469 | + Raises: |
1470 | + GPGKeyError |
1471 | + """ |
1472 | + ps = subprocess.run(["gpg", "--dearmor"], stdout=PIPE, stderr=PIPE, input=key_asc) |
1473 | + out, err = ps.stdout, ps.stderr.decode() |
1474 | + if "gpg: no valid OpenPGP data found." in err: |
1475 | + raise GPGKeyError( |
1476 | + "Invalid GPG key material. Check your network setup" |
1477 | + " (MTU, routing, DNS) and/or proxy server settings" |
1478 | + " as well as destination keyserver status." |
1479 | + ) |
1480 | + else: |
1481 | + return out |
1482 | + |
1483 | + @staticmethod |
1484 | + def _write_apt_gpg_keyfile(key_name: str, key_material: bytes) -> None: |
1485 | + """Writes GPG key material into a file at a provided path. |
1486 | + |
1487 | + Args: |
1488 | + key_name: A key name to use for a key file (could be a fingerprint) |
1489 | + key_material: A GPG key material (binary) |
1490 | + """ |
1491 | + with open(key_name, "wb") as keyf: |
1492 | + keyf.write(key_material) |
1493 | + |
1494 | + |
1495 | +class RepositoryMapping(Mapping): |
1496 | +    """A representation of known repositories.
1497 | + |
1498 | + Instantiation of `RepositoryMapping` will iterate through the |
1499 | + filesystem, parse out repository files in `/etc/apt/...`, and create |
1500 | + `DebianRepository` objects in this list. |
1501 | + |
1502 | + Typical usage: |
1503 | + |
1504 | + repositories = apt.RepositoryMapping() |
1505 | + repositories.add(DebianRepository( |
1506 | + enabled=True, repotype="deb", uri="https://example.com", release="focal", |
1507 | + groups=["universe"] |
1508 | + )) |
1509 | + """ |
1510 | + |
1511 | + def __init__(self): |
1512 | + self._repository_map = {} |
1513 | +        # Default system sources file, loaded below if present
1514 | + self.default_file = "/etc/apt/sources.list" |
1515 | + |
1516 | + # read sources.list if it exists |
1517 | + if os.path.isfile(self.default_file): |
1518 | + self.load(self.default_file) |
1519 | + |
1520 | + # read sources.list.d |
1521 | + for file in glob.iglob("/etc/apt/sources.list.d/*.list"): |
1522 | + self.load(file) |
1523 | + |
1524 | + def __contains__(self, key: str) -> bool: |
1525 | + """Magic method for checking presence of repo in mapping.""" |
1526 | + return key in self._repository_map |
1527 | + |
1528 | + def __len__(self) -> int: |
1529 | + """Return number of repositories in map.""" |
1530 | + return len(self._repository_map) |
1531 | + |
1532 | + def __iter__(self) -> Iterable[DebianRepository]: |
1533 | + """Iterator magic method for RepositoryMapping.""" |
1534 | + return iter(self._repository_map.values()) |
1535 | + |
1536 | + def __getitem__(self, repository_uri: str) -> DebianRepository: |
1537 | + """Return a given `DebianRepository`.""" |
1538 | + return self._repository_map[repository_uri] |
1539 | + |
1540 | + def __setitem__(self, repository_uri: str, repository: DebianRepository) -> None: |
1541 | + """Add a `DebianRepository` to the cache.""" |
1542 | + self._repository_map[repository_uri] = repository |
1543 | + |
1544 | + def load(self, filename: str): |
1545 | + """Load a repository source file into the cache. |
1546 | + |
1547 | + Args: |
1548 | + filename: the path to the repository file |
1549 | + """ |
1550 | + parsed = [] |
1551 | + skipped = [] |
1552 | + with open(filename, "r") as f: |
1553 | + for n, line in enumerate(f): |
1554 | + try: |
1555 | + repo = self._parse(line, filename) |
1556 | + except InvalidSourceError: |
1557 | + skipped.append(n) |
1558 | + else: |
1559 | + repo_identifier = "{}-{}-{}".format(repo.repotype, repo.uri, repo.release) |
1560 | + self._repository_map[repo_identifier] = repo |
1561 | + parsed.append(n) |
1562 | + logger.debug("parsed repo: '%s'", repo_identifier) |
1563 | + |
1564 | + if skipped: |
1565 | + skip_list = ", ".join(str(s) for s in skipped) |
1566 | + logger.debug("skipped the following lines in file '%s': %s", filename, skip_list) |
1567 | + |
1568 | + if parsed: |
1569 | + logger.info("parsed %d apt package repositories", len(parsed)) |
1570 | + else: |
1571 | + raise InvalidSourceError("all repository lines in '{}' were invalid!".format(filename)) |
1572 | + |
1573 | + @staticmethod |
1574 | + def _parse(line: str, filename: str) -> DebianRepository: |
1575 | + """Parse a line in a sources.list file. |
1576 | + |
1577 | + Args: |
1578 | + line: a single line from `load` to parse |
1579 | + filename: the filename being read |
1580 | + |
1581 | + Raises: |
1582 | + InvalidSourceError if the source type is unknown |
1583 | + """ |
1584 | + enabled = True |
1585 | + repotype = uri = release = gpg_key = "" |
1586 | + options = {} |
1587 | + groups = [] |
1588 | + |
1589 | + line = line.strip() |
1590 | + if line.startswith("#"): |
1591 | + enabled = False |
1592 | + line = line[1:] |
1593 | + |
1594 | +        # Treat anything after an inline "#" as a comment and strip it off.
1595 | + i = line.find("#") |
1596 | + if i > 0: |
1597 | + line = line[:i] |
1598 | + |
1599 | + # Split a source into substrings to initialize a new repo. |
1600 | + source = line.strip() |
1601 | + if source: |
1602 | + # Match any repo options, and get a dict representation. |
1603 | + for v in re.findall(OPTIONS_MATCHER, source): |
1604 | + opts = dict(o.split("=") for o in v.strip("[]").split()) |
1605 | + # Extract the 'signed-by' option for the gpg_key |
1606 | + gpg_key = opts.pop("signed-by", "") |
1607 | + options = opts |
1608 | + |
1609 | + # Remove any options from the source string and split the string into chunks |
1610 | + source = re.sub(OPTIONS_MATCHER, "", source) |
1611 | + chunks = source.split() |
1612 | + |
1613 | + # Check we've got a valid list of chunks |
1614 | + if len(chunks) < 3 or chunks[0] not in VALID_SOURCE_TYPES: |
1615 | +                raise InvalidSourceError("An invalid sources line was found in {}!".format(filename))
1616 | + |
1617 | + repotype = chunks[0] |
1618 | + uri = chunks[1] |
1619 | + release = chunks[2] |
1620 | + groups = chunks[3:] |
1621 | + |
1622 | + return DebianRepository( |
1623 | + enabled, repotype, uri, release, groups, filename, gpg_key, options |
1624 | + ) |
1625 | + else: |
1626 | +            raise InvalidSourceError("An invalid sources line was found in {}!".format(filename))
1627 | + |
1628 | + def add(self, repo: DebianRepository, default_filename: Optional[bool] = False) -> None: |
1629 | + """Add a new repository to the system. |
1630 | + |
1631 | + Args: |
1632 | + repo: a `DebianRepository` object |
1633 | +            default_filename: if True, use the default sources file (note: currently unused)
1634 | + """ |
1635 | + new_filename = "{}-{}.list".format( |
1636 | + DebianRepository.prefix_from_uri(repo.uri), repo.release.replace("/", "-") |
1637 | + ) |
1638 | + |
1639 | + fname = repo.filename or new_filename |
1640 | + |
1641 | + options = repo.options if repo.options else {} |
1642 | + if repo.gpg_key: |
1643 | + options["signed-by"] = repo.gpg_key |
1644 | + |
1645 | + with open(fname, "wb") as f: |
1646 | + f.write( |
1647 | + ( |
1648 | + "{}".format("#" if not repo.enabled else "") |
1649 | + + "{} {}{} ".format(repo.repotype, repo.make_options_string(), repo.uri) |
1650 | + + "{} {}\n".format(repo.release, " ".join(repo.groups)) |
1651 | + ).encode("utf-8") |
1652 | + ) |
1653 | + |
1654 | + self._repository_map["{}-{}-{}".format(repo.repotype, repo.uri, repo.release)] = repo |
1655 | + |
1656 | + def disable(self, repo: DebianRepository) -> None: |
1657 | +        """Disable a repository by commenting it out in its sources file.
1658 | + |
1659 | + Args: |
1660 | + repo: a `DebianRepository` to disable |
1661 | + """ |
1662 | + searcher = "{} {}{} {}".format( |
1663 | + repo.repotype, repo.make_options_string(), repo.uri, repo.release |
1664 | + ) |
1665 | + |
1666 | + for line in fileinput.input(repo.filename, inplace=True): |
1667 | + if re.match(r"^{}\s".format(re.escape(searcher)), line): |
1668 | + print("# {}".format(line), end="") |
1669 | + else: |
1670 | + print(line, end="") |
1671 | + |
1672 | + self._repository_map["{}-{}-{}".format(repo.repotype, repo.uri, repo.release)] = repo |
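The tail of `apt.py` above is dominated by sources.list parsing. As a rough, self-contained sketch of what `RepositoryMapping._parse` does with one line (the `parse_source_line` name and the simplified options matcher are illustrative stand-ins, with no error handling and no `signed-by` extraction), the logic looks like:

```python
import re

# Simplified stand-in for the library's OPTIONS_MATCHER: matches the
# optional "[key=value ...]" block in a one-line-style sources entry.
OPTIONS_MATCHER = re.compile(r"\[.*?\]")


def parse_source_line(line):
    """Parse one sources.list line, roughly as RepositoryMapping._parse does."""
    enabled = True
    line = line.strip()
    if line.startswith("#"):
        # A leading "#" marks the entry as present but disabled.
        enabled = False
        line = line[1:]
    options = {}
    for match in OPTIONS_MATCHER.findall(line):
        # Turn "[k1=v1 k2=v2]" into a dict of options.
        options = dict(o.split("=") for o in match.strip("[]").split())
    # Remove the options block, then split into positional chunks.
    chunks = OPTIONS_MATCHER.sub("", line).split()
    repotype, uri, release, groups = chunks[0], chunks[1], chunks[2], chunks[3:]
    return enabled, repotype, uri, release, groups, options
```

For example, a commented-out `deb` line parses with `enabled=False`, while an active line with a `signed-by` option yields that option in the dict.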
1673 | diff --git a/metadata.yaml b/metadata.yaml |
1674 | new file mode 100644 |
1675 | index 0000000..3cd1f95 |
1676 | --- /dev/null |
1677 | +++ b/metadata.yaml |
1678 | @@ -0,0 +1,14 @@ |
1679 | +name: clamav-database-mirror |
1680 | +display-name: ClamAV Database Mirror |
1681 | +summary: A local mirror of the ClamAV database. |
1682 | +description: | |
1683 | + Maintains a local mirror of the latest ClamAV databases. |
1684 | + |
1685 | + Workloads that perform ClamAV scans in ephemeral environments need to |
1686 | + fetch fresh copies of virus definitions before doing anything else; but if |
1687 | + they do that using the default configuration that points to |
1688 | + database.clamav.net, they will likely find themselves rate-limited in |
1689 | + short order. To avoid that, point those workloads at a deployment of this |
1690 | + charm instead, which maintains a local mirror more economically. |
1691 | +series: |
1692 | + - jammy |
1693 | diff --git a/pyproject.toml b/pyproject.toml |
1694 | new file mode 100644 |
1695 | index 0000000..2edc519 |
1696 | --- /dev/null |
1697 | +++ b/pyproject.toml |
1698 | @@ -0,0 +1,33 @@ |
1699 | +# Testing tools configuration |
1700 | +[tool.coverage.run] |
1701 | +branch = true |
1702 | + |
1703 | +[tool.coverage.report] |
1704 | +show_missing = true |
1705 | + |
1706 | +[tool.pytest.ini_options] |
1707 | +minversion = "6.0" |
1708 | +log_cli_level = "INFO" |
1709 | + |
1710 | +# Formatting tools configuration |
1711 | +[tool.black] |
1712 | +line-length = 99 |
1713 | +target-version = ["py38"] |
1714 | + |
1715 | +[tool.isort] |
1716 | +line_length = 99 |
1717 | +profile = "black" |
1718 | + |
1719 | +# Linting tools configuration |
1720 | +[tool.flake8] |
1721 | +max-line-length = 99 |
1722 | +max-doc-length = 99 |
1723 | +max-complexity = 10 |
1724 | +exclude = [".git", "__pycache__", ".tox", "build", "dist", "*.egg_info", "venv"] |
1725 | +select = ["E", "W", "F", "C", "N", "R", "D", "H"] |
1726 | +# Ignore W503, E501 because using black creates errors with this |
1727 | +# Ignore D107 Missing docstring in __init__ |
1728 | +ignore = ["W503", "E501", "D107"] |
1729 | +# D100, D101, D102, D103: Ignore missing docstrings in tests |
1730 | +per-file-ignores = ["tests/*:D100,D101,D102,D103,D104"] |
1731 | +docstring-convention = "google" |
1732 | diff --git a/requirements.txt b/requirements.txt |
1733 | new file mode 100644 |
1734 | index 0000000..4bbf401 |
1735 | --- /dev/null |
1736 | +++ b/requirements.txt |
1737 | @@ -0,0 +1,2 @@ |
1738 | +jinja2 |
1739 | +ops >= 1.5.0 |
1740 | diff --git a/src/charm.py b/src/charm.py |
1741 | new file mode 100755 |
1742 | index 0000000..fbbf2c0 |
1743 | --- /dev/null |
1744 | +++ b/src/charm.py |
1745 | @@ -0,0 +1,106 @@ |
1746 | +#!/usr/bin/env python3 |
1747 | +# Copyright 2022 Canonical Ltd. |
1748 | +# See LICENSE file for licensing details. |
1749 | +# |
1750 | +# Learn more at: https://juju.is/docs/sdk |
1751 | + |
1752 | +"""Charm maintaining a local mirror of the ClamAV virus databases.
1753 | + |
1754 | +Refer to the following post for a quick-start guide that will help you |
1755 | +develop a new charm using the Operator Framework:
1756 | + |
1757 | + https://discourse.charmhub.io/t/4208 |
1758 | +""" |
1759 | + |
1760 | +import logging |
1761 | +import shutil |
1762 | +import subprocess |
1763 | +from pathlib import Path |
1764 | + |
1765 | +from charms.operator_libs_linux.v0 import apt |
1766 | +from jinja2 import Environment, FileSystemLoader |
1767 | +from ops.charm import CharmBase |
1768 | +from ops.main import main |
1769 | +from ops.model import ActiveStatus, MaintenanceStatus |
1770 | + |
1771 | +logger = logging.getLogger(__name__) |
1772 | + |
1773 | +db_path = "/srv/clamav-database-mirror/database" |
1774 | + |
1775 | + |
1776 | +class ClamAVDatabaseMirrorCharm(CharmBase): |
1777 | + """Charm the service.""" |
1778 | + |
1779 | + def __init__(self, *args): |
1780 | + super().__init__(*args) |
1781 | + self.framework.observe(self.on.install, self._on_install) |
1782 | + self.framework.observe(self.on.config_changed, self._on_config_changed) |
1783 | + |
1784 | + def _set_maintenance_step(self, description): |
1785 | + self.unit.status = MaintenanceStatus(description) |
1786 | + logger.info(description) |
1787 | + |
1788 | + def _install(self): |
1789 | + """Install our dependencies.""" |
1790 | + self._set_maintenance_step("Installing dependencies") |
1791 | + apt.add_package(package_names=["clamav-cvdupdate", "nginx"], update_cache=True) |
1792 | + subprocess.run(["systemctl", "disable", "--now", "nginx.service"], check=True) |
1793 | + Path("/etc/nginx/sites-enabled/default").unlink(missing_ok=True) |
1794 | + |
1795 | + def _run_as(self, user, args, **kwargs): |
1796 | + subprocess.run( |
1797 | + ["setpriv", "--reuid", user, "--regid", user, "--init-groups", "--"] + args, **kwargs |
1798 | + ) |
1799 | + |
1800 | + def _configure_database(self): |
1801 | + """Configure the database.""" |
1802 | + self._set_maintenance_step("Configuring database") |
1803 | +        subprocess.run(["mkdir", "-p", db_path], check=True)
1804 | +        subprocess.run(["chown", "ubuntu:ubuntu", db_path], check=True)
1805 | + self._run_as("ubuntu", ["cvdupdate", "config", "set", "--dbdir", db_path]) |
1806 | + |
1807 | + def _configure_timer(self): |
1808 | + """Create a systemd timer to update the database automatically.""" |
1809 | + self._set_maintenance_step("Configuring timer") |
1810 | + template_env = Environment(loader=FileSystemLoader(str(self.charm_dir / "templates"))) |
1811 | + template = template_env.get_template("cvdupdate.service.j2") |
1812 | + service = template.render(config=self.config) |
1813 | + Path("/lib/systemd/system/cvdupdate.service").write_text(service) |
1814 | + shutil.copy( |
1815 | + self.charm_dir / "files" / "cvdupdate.timer", "/lib/systemd/system/cvdupdate.timer" |
1816 | + ) |
1817 | +        subprocess.run(["systemctl", "daemon-reload"], check=True)
1818 | + # Start the update service once before enabling the timer, to ensure |
1819 | + # that the database is populated immediately. |
1820 | + subprocess.run(["systemctl", "start", "cvdupdate.service"], check=True) |
1821 | + subprocess.run(["systemctl", "enable", "--now", "cvdupdate.timer"], check=True) |
1822 | + |
1823 | + def _configure_nginx(self): |
1824 | + """Configure nginx to serve the database.""" |
1825 | + self._set_maintenance_step("Configuring nginx") |
1826 | + template_env = Environment(loader=FileSystemLoader(str(self.charm_dir / "templates"))) |
1827 | + template = template_env.get_template("nginx-site.conf.j2") |
1828 | + site = template.render(config=self.config, db_path=db_path) |
1829 | + available_conf = Path("/etc/nginx/sites-available/clamav-database-mirror.conf") |
1830 | + enabled_conf = Path("/etc/nginx/sites-enabled/clamav-database-mirror.conf") |
1831 | + available_conf.write_text(site) |
1832 | + if not enabled_conf.exists(): |
1833 | + enabled_conf.symlink_to(available_conf) |
1834 | + subprocess.run(["systemctl", "enable", "nginx.service"], check=True) |
1835 | + subprocess.run(["systemctl", "reload-or-restart", "nginx.service"], check=True) |
1836 | + |
1837 | + def _on_install(self, event): |
1838 | + self._install() |
1839 | + self._configure_database() |
1840 | + self._configure_timer() |
1841 | + self._configure_nginx() |
1842 | + self.unit.status = ActiveStatus() |
1843 | + |
1844 | + def _on_config_changed(self, event): |
1845 | + self._configure_timer() |
1846 | + self._configure_nginx() |
1847 | + self.unit.status = ActiveStatus() |
1848 | + |
1849 | + |
1850 | +if __name__ == "__main__": # pragma: nocover |
1851 | + main(ClamAVDatabaseMirrorCharm) |
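`_configure_nginx` relies on the Debian sites-available/sites-enabled convention: write the config file once, then create the enabling symlink only if it does not already exist, so repeated `config-changed` events stay idempotent. A self-contained sketch of that idiom (the `enable_site` helper is hypothetical, and a temporary directory stands in for `/etc/nginx`):

```python
import tempfile
from pathlib import Path


def enable_site(available_dir, enabled_dir, name, content):
    """Write a site config and enable it via symlink, if not already enabled."""
    available_conf = Path(available_dir, name)
    enabled_conf = Path(enabled_dir, name)
    # Always rewrite the config so configuration changes take effect.
    available_conf.write_text(content)
    # Only create the symlink the first time; later calls are no-ops.
    if not enabled_conf.exists():
        enabled_conf.symlink_to(available_conf)
    return enabled_conf


tmp = tempfile.mkdtemp()
available = Path(tmp, "sites-available")
enabled = Path(tmp, "sites-enabled")
available.mkdir()
enabled.mkdir()
link = enable_site(
    available, enabled, "clamav-database-mirror.conf", "server { listen 8080; }\n"
)
```

Because the symlink points at the rewritten file, a subsequent `systemctl reload-or-restart nginx.service` (as in the charm) picks up new content without re-linking.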
1852 | diff --git a/templates/cvdupdate.service.j2 b/templates/cvdupdate.service.j2 |
1853 | new file mode 100644 |
1854 | index 0000000..3946693 |
1855 | --- /dev/null |
1856 | +++ b/templates/cvdupdate.service.j2 |
1857 | @@ -0,0 +1,22 @@ |
1858 | +[Unit] |
1859 | +Description=Update ClamAV database mirror |
1860 | +Documentation=man:cvdupdate(1) |
1861 | + |
1862 | +[Service] |
1863 | +Type=oneshot |
1864 | +ExecStart=/usr/bin/cvdupdate update |
1865 | +User=ubuntu |
1866 | +{% if config["http-proxy"] -%} |
1867 | +Environment=http_proxy={{ config["http-proxy"] }} |
1868 | +Environment=https_proxy={{ config["http-proxy"] }} |
1869 | +{% endif -%} |
1870 | +ProtectSystem=full |
1871 | +PrivateTmp=true |
1872 | +PrivateDevices=true |
1873 | +ProtectHostname=true |
1874 | +ProtectClock=true |
1875 | +ProtectKernelTunables=true |
1876 | +ProtectKernelModules=true |
1877 | +ProtectKernelLogs=true |
1878 | +ProtectControlGroups=true |
1879 | + |
1880 | diff --git a/templates/nginx-site.conf.j2 b/templates/nginx-site.conf.j2 |
1881 | new file mode 100644 |
1882 | index 0000000..5fec382 |
1883 | --- /dev/null |
1884 | +++ b/templates/nginx-site.conf.j2 |
1885 | @@ -0,0 +1,9 @@ |
1886 | +server { |
1887 | + server_name {{ config["host"] }}; |
1888 | + listen {{ config["port"] }}; |
1889 | + location / { |
1890 | + alias {{ db_path }}/; |
1891 | + autoindex on; |
1892 | + } |
1893 | +} |
1894 | + |
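For illustration only: with hypothetical config values `host: clamav-mirror.internal` and `port: 8080`, the template above would render to roughly:

```
server {
    server_name clamav-mirror.internal;
    listen 8080;
    location / {
        alias /srv/clamav-database-mirror/database/;
        autoindex on;
    }
}
```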
1895 | diff --git a/tests/integration/test_charm.py b/tests/integration/test_charm.py |
1896 | new file mode 100644 |
1897 | index 0000000..ffcc9ec |
1898 | --- /dev/null |
1899 | +++ b/tests/integration/test_charm.py |
1900 | @@ -0,0 +1,34 @@ |
1901 | +#!/usr/bin/env python3 |
1902 | +# Copyright 2022 Canonical Ltd. |
1903 | +# See LICENSE file for licensing details. |
1904 | + |
1905 | +import asyncio |
1906 | +import logging |
1907 | +from pathlib import Path |
1908 | + |
1909 | +import pytest |
1910 | +import yaml |
1911 | +from pytest_operator.plugin import OpsTest |
1912 | + |
1913 | +logger = logging.getLogger(__name__) |
1914 | + |
1915 | +METADATA = yaml.safe_load(Path("./metadata.yaml").read_text()) |
1916 | +APP_NAME = METADATA["name"] |
1917 | + |
1918 | + |
1919 | +@pytest.mark.abort_on_fail |
1920 | +async def test_build_and_deploy(ops_test: OpsTest): |
1921 | +    """Build the charm-under-test and deploy it.
1922 | + |
1923 | + Assert on the unit status before any relations/configurations take place. |
1924 | + """ |
1925 | + # Build and deploy charm from local source folder |
1926 | + charm = await ops_test.build_charm(".") |
1927 | + |
1928 | + # Deploy the charm and wait for active/idle status |
1929 | + await asyncio.gather( |
1930 | + ops_test.model.deploy(charm, application_name=APP_NAME), |
1931 | + ops_test.model.wait_for_idle( |
1932 | + apps=[APP_NAME], status="active", raise_on_blocked=True, timeout=1000 |
1933 | + ), |
1934 | + ) |
1935 | diff --git a/tests/unit/test_charm.py b/tests/unit/test_charm.py |
1936 | new file mode 100644 |
1937 | index 0000000..bae3788 |
1938 | --- /dev/null |
1939 | +++ b/tests/unit/test_charm.py |
1940 | @@ -0,0 +1,88 @@ |
1941 | +# Copyright 2022 Canonical Ltd. |
1942 | +# See LICENSE file for licensing details. |
1943 | +# |
1944 | +# Learn more about testing at: https://juju.is/docs/sdk/testing |
1945 | + |
1946 | +from pathlib import Path |
1947 | + |
1948 | +import ops.testing |
1949 | +import pytest |
1950 | +from ops.model import ActiveStatus |
1951 | +from ops.testing import Harness |
1952 | + |
1953 | +from charm import ClamAVDatabaseMirrorCharm |
1954 | + |
1955 | +# The "fs" fixture is a fake filesystem from pyfakefs; the "fp" fixture is |
1956 | +# from pytest-subprocess. |
1957 | + |
1958 | + |
1959 | +@pytest.fixture(autouse=True) |
1960 | +def simulate_can_connect(monkeypatch): |
1961 | + # Enable more accurate simulation of container networking. |
1962 | + # For more information, see https://juju.is/docs/sdk/testing#heading--simulate-can-connect |
1963 | + monkeypatch.setattr(ops.testing, "SIMULATE_CAN_CONNECT", True) |
1964 | + |
1965 | + |
1966 | +@pytest.fixture |
1967 | +def harness(): |
1968 | + harness = Harness(ClamAVDatabaseMirrorCharm) |
1969 | + harness.begin() |
1970 | + yield harness |
1971 | + harness.cleanup() |
1972 | + |
1973 | + |
1974 | +def test_install(harness, fs, fp): |
1975 | + fs.add_real_directory(harness.charm.charm_dir / "files") |
1976 | + fs.add_real_directory(harness.charm.charm_dir / "templates") |
1977 | + Path("/lib/systemd/system").mkdir(parents=True) |
1978 | + Path("/etc/nginx/sites-available").mkdir(parents=True) |
1979 | + Path("/etc/nginx/sites-enabled").mkdir(parents=True) |
1980 | + fp.keep_last_process(True) |
1981 | + fp.register( |
1982 | + ["apt-cache", "show", "clamav-cvdupdate"], |
1983 | + stdout="Package: clamav-cvdupdate\nArchitecture: all\nVersion: 0.1\n", |
1984 | + ) |
1985 | + fp.register( |
1986 | + ["apt-cache", "show", "nginx"], stdout="Package: nginx\nArchitecture: all\nVersion: 0.2\n" |
1987 | + ) |
1988 | + fp.register([fp.any()]) |
1989 | + |
1990 | + harness.charm.on.install.emit() |
1991 | + |
1992 | + assert harness.model.unit.status == ActiveStatus() |
1993 | + run_as_ubuntu = ["setpriv", "--reuid", "ubuntu", "--regid", "ubuntu", "--init-groups", "--"] |
1994 | + assert list(fp.calls) == [ |
1995 | + ["apt-get", "update"], |
1996 | + ["dpkg", "--print-architecture"], |
1997 | + ["dpkg", "-l", "clamav-cvdupdate"], |
1998 | + ["dpkg", "--print-architecture"], |
1999 | + ["apt-cache", "show", "clamav-cvdupdate"], |
2000 | + [ |
2001 | + "apt-get", |
2002 | + "-y", |
2003 | + "--option=Dpkg::Options::=--force-confold", |
2004 | + "install", |
2005 | + "clamav-cvdupdate=0.1", |
2006 | + ], |
2007 | + ["dpkg", "--print-architecture"], |
2008 | + ["dpkg", "-l", "nginx"], |
2009 | + ["dpkg", "--print-architecture"], |
2010 | + ["apt-cache", "show", "nginx"], |
2011 | + [ |
2012 | + "apt-get", |
2013 | + "-y", |
2014 | + "--option=Dpkg::Options::=--force-confold", |
2015 | + "install", |
2016 | + "nginx=0.2", |
2017 | + ], |
2018 | + ["systemctl", "disable", "--now", "nginx.service"], |
2019 | + ["mkdir", "-p", "/srv/clamav-database-mirror/database"], |
2020 | + ["chown", "ubuntu:ubuntu", "/srv/clamav-database-mirror/database"], |
2021 | + run_as_ubuntu |
2022 | + + ["cvdupdate", "config", "set", "--dbdir", "/srv/clamav-database-mirror/database"], |
2023 | + ["systemctl", "daemon-reload"], |
2024 | + ["systemctl", "start", "cvdupdate.service"], |
2025 | + ["systemctl", "enable", "--now", "cvdupdate.timer"], |
2026 | + ["systemctl", "enable", "nginx.service"], |
2027 | + ["systemctl", "reload-or-restart", "nginx.service"], |
2028 | + ] |
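The unit test above uses pytest-subprocess (the `fp` fixture) to record the argv lists that reach `subprocess.run`. The same call-recording pattern can be sketched with only the standard library by patching `subprocess.run` via `unittest.mock`; `configure_database` here is an illustrative stand-in, not the charm's method:

```python
import subprocess
from unittest import mock


def configure_database(db_path):
    # Reduced stand-in for ClamAVDatabaseMirrorCharm._configure_database:
    # just the two shell commands whose argv we want to capture.
    subprocess.run(["mkdir", "-p", db_path])
    subprocess.run(["chown", "ubuntu:ubuntu", db_path])


with mock.patch("subprocess.run") as run:
    configure_database("/srv/db")
    # Each entry in call_args_list is a call object; args[0] is the argv list.
    recorded = [c.args[0] for c in run.call_args_list]
```

The trade-off is that `mock.patch` replaces the call entirely, while pytest-subprocess can also fake stdout/stderr and exit codes per registered command, which the test above needs for `apt-cache show`.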
2029 | diff --git a/tox.ini b/tox.ini |
2030 | new file mode 100644 |
2031 | index 0000000..dc12427 |
2032 | --- /dev/null |
2033 | +++ b/tox.ini |
2034 | @@ -0,0 +1,73 @@ |
2035 | +# Copyright 2022 Canonical Ltd. |
2036 | +# See LICENSE file for licensing details. |
2037 | + |
2038 | +[tox] |
2039 | +skipsdist=True |
2040 | +skip_missing_interpreters = True |
2041 | +envlist = lint, unit |
2042 | + |
2043 | +[vars] |
2044 | +src_path = {toxinidir}/src/ |
2045 | +tst_path = {toxinidir}/tests/ |
2046 | +all_path = {[vars]src_path} {[vars]tst_path} |
2047 | + |
2048 | +[testenv] |
2049 | +setenv = |
2050 | + PYTHONPATH = {toxinidir}:{toxinidir}/lib:{[vars]src_path} |
2051 | + PYTHONBREAKPOINT=pdb.set_trace |
2052 | + PY_COLORS=1 |
2053 | +passenv = |
2054 | + PYTHONPATH |
2055 | + CHARM_BUILD_DIR |
2056 | + MODEL_SETTINGS |
2057 | + |
2058 | +[testenv:fmt] |
2059 | +description = Apply coding style standards to code |
2060 | +deps = |
2061 | + black |
2062 | + isort |
2063 | +commands = |
2064 | + isort {[vars]all_path} |
2065 | + black {[vars]all_path} |
2066 | + |
2067 | +[testenv:lint] |
2068 | +description = Check code against coding style standards |
2069 | +deps = |
2070 | + black |
2071 | + flake8-docstrings |
2072 | + flake8-builtins |
2073 | + pyproject-flake8 |
2074 | + pep8-naming |
2075 | + isort |
2076 | + codespell |
2077 | +commands = |
2078 | + codespell {toxinidir}/. --skip {toxinidir}/.git --skip {toxinidir}/.tox \ |
2079 | + --skip {toxinidir}/build --skip {toxinidir}/lib --skip {toxinidir}/venv \ |
2080 | + --skip {toxinidir}/.mypy_cache --skip {toxinidir}/icon.svg |
2081 | + # pflake8 wrapper supports config from pyproject.toml |
2082 | + pflake8 {[vars]all_path} |
2083 | + isort --check-only --diff {[vars]all_path} |
2084 | + black --check --diff {[vars]all_path} |
2085 | + |
2086 | +[testenv:unit] |
2087 | +description = Run unit tests |
2088 | +deps = |
2089 | + pyfakefs |
2090 | + pytest |
2091 | + pytest-subprocess |
2092 | + coverage[toml] |
2093 | + -r{toxinidir}/requirements.txt |
2094 | +commands = |
2095 | + coverage run --source={[vars]src_path} \ |
2096 | + -m pytest --ignore={[vars]tst_path}integration -v --tb native -s {posargs} |
2097 | + coverage report |
2098 | + |
2099 | +[testenv:integration] |
2100 | +description = Run integration tests |
2101 | +deps = |
2102 | + pytest |
2103 | + juju |
2104 | + pytest-operator |
2105 | + -r{toxinidir}/requirements.txt |
2106 | +commands = |
2107 | + pytest -v --tb native --ignore={[vars]tst_path}unit --log-cli-level=INFO -s {posargs} |
2108 | diff --git a/update-lib b/update-lib |
2109 | new file mode 100755 |
2110 | index 0000000..943e8fe |
2111 | --- /dev/null |
2112 | +++ b/update-lib |
2113 | @@ -0,0 +1,4 @@ |
2114 | +#! /bin/sh |
2115 | +set -e |
2116 | + |
2117 | +charmcraft fetch-lib charms.operator_libs_linux.v0.apt |