Merge ~paride/simplestreams:apply-black into simplestreams:master
Status: Work in progress
Proposed branch: ~paride/simplestreams:apply-black
Merge into: simplestreams:master
Diff against target: 10555 lines (+3956/-2434), 47 files modified
.git-blame-ignore-revs (+4/-0) .launchpad.yaml (+38/-0) .pre-commit-config.yaml (+15/-0) bin/json2streams (+2/-2) bin/sstream-mirror (+118/-66) bin/sstream-mirror-glance (+197/-114) bin/sstream-query (+84/-47) bin/sstream-sync (+111/-63) pyproject.toml (+6/-0) setup.py (+17/-14) simplestreams/checksum_util.py (+33/-16) simplestreams/contentsource.py (+33/-26) simplestreams/filters.py (+10/-6) simplestreams/generate_simplestreams.py (+53/-38) simplestreams/json2streams.py (+34/-26) simplestreams/log.py (+13/-9) simplestreams/mirrors/__init__.py (+153/-100) simplestreams/mirrors/command_hook.py (+69/-50) simplestreams/mirrors/glance.py (+305/-206) simplestreams/objectstores/__init__.py (+49/-26) simplestreams/objectstores/s3.py (+7/-7) simplestreams/objectstores/swift.py (+48/-37) simplestreams/openstack.py (+126/-66) simplestreams/util.py (+100/-79) tests/httpserver.py (+24/-13) tests/testutil.py (+5/-4) tests/unittests/test_badmirrors.py (+72/-33) tests/unittests/test_command_hook_mirror.py (+17/-12) tests/unittests/test_contentsource.py (+62/-51) tests/unittests/test_generate_simplestreams.py (+322/-179) tests/unittests/test_glancemirror.py (+702/-390) tests/unittests/test_json2streams.py (+126/-95) tests/unittests/test_mirrorreaders.py (+24/-14) tests/unittests/test_mirrorwriters.py (+6/-6) tests/unittests/test_openstack.py (+184/-151) tests/unittests/test_resolvework.py (+90/-38) tests/unittests/test_signed_data.py (+7/-7) tests/unittests/test_util.py (+232/-132) tests/unittests/tests_filestore.py (+19/-13) tools/install-deps (+1/-1) tools/js2signed (+4/-3) tools/make-test-data (+240/-185) tools/sign_helper.py (+8/-4) tools/tab2streams (+31/-15) tools/toolutil.py (+66/-43) tools/ubuntu_versions.py (+71/-43) tox.ini (+18/-4) |
Related bugs: (none)

Reviewer | Review Type | Date Requested | Status
---|---|---|---
Server Team CI bot | continuous-integration | | Approve
simplestreams-dev | | | Pending

Review via email: mp+440053@code.launchpad.net
Commit message
- Apply black and isort
- Add black and isort pre-commit hooks
- Run (read-only) pre-commit in tox environment
- Run tox in lpci
Description of the change
Server Team CI bot (server-team-bot) wrote:
FAILED: Continuous integration, rev:43978e994d5
https:/
Executed test runs:
FAILURE: https:/
FAILURE: https:/
FAILURE: https:/
Server Team CI bot (server-team-bot) wrote:
FAILED: Continuous integration, rev:43978e994d5
https:/
Executed test runs:
FAILURE: https:/
FAILURE: https:/
FAILURE: https:/
Paride Legovini (paride) wrote:
CI failures fixed by:
https:/
- 246f30e... by Paride Legovini
lpci: don't run the pre-commit environment
Apparently the lpci workers do not have access to github.com,
so pre-commit can't download the hooks.
Server Team CI bot (server-team-bot) wrote:
FAILED: Continuous integration, rev:40311284d92
https:/
Executed test runs:
SUCCESS: https:/
FAILURE: https:/
FAILURE: https:/
Server Team CI bot (server-team-bot) wrote:
FAILED: Continuous integration, rev:7833c12c86e
https:/
Executed test runs:
SUCCESS: https:/
FAILURE: https:/
FAILURE: https:/
- 274817d... by Paride Legovini
install-deps: install pkg-config on non-amd64 archs
- a4d7a3f... by Paride Legovini
lpci: expand the test matrix
Cover: (amd64, arm64, ppc64el, s390x) * (focal, jammy, devel)
Server Team CI bot (server-team-bot) wrote:
PASSED: Continuous integration, rev:a4d7a3f8132
https:/
Executed test runs:
SUCCESS: https:/
SUCCESS: https:/
SUCCESS: https:/
SUCCESS: https:/
- 39c2e39... by Paride Legovini
lpci: add test dependency: python3-dev
Server Team CI bot (server-team-bot) wrote:
PASSED: Continuous integration, rev:39c2e398be3
https:/
Executed test runs:
SUCCESS: https:/
SUCCESS: https:/
SUCCESS: https:/
SUCCESS: https:/
- 19608c6... by Paride Legovini
mm
Server Team CI bot (server-team-bot) wrote:
FAILED: Continuous integration, rev:fe4262017af
https:/
Executed test runs:
FAILURE: https:/
SUCCESS: https:/
SUCCESS: https:/
SUCCESS: https:/
- b405e84... by Paride Legovini
tox: pass the lowercase *_proxy variables
Workaround for https://github.com/tox-dev/tox/pull/2378/.
- 363d3b4... by Paride Legovini
tox: skip_install in the pre_commit environment
Server Team CI bot (server-team-bot) wrote:
PASSED: Continuous integration, rev:37a21b666ed
https:/
Executed test runs:
SUCCESS: https:/
SUCCESS: https:/
SUCCESS: https:/
SUCCESS: https:/
- 75441dd... by Paride Legovini
ci: deal with pre-commit modifying files
We don't need to jump through hoops to avoid pre-commit modifying files
during the tox run: if that happens, the CI run will fail anyway.
- 5449069... by Paride Legovini
prefer bin
Server Team CI bot (server-team-bot) wrote:
PASSED: Continuous integration, rev:19608c61d57
https:/
Executed test runs:
SUCCESS: https:/
SUCCESS: https:/
SUCCESS: https:/
SUCCESS: https:/
Server Team CI bot (server-team-bot) wrote:
PASSED: Continuous integration, rev:6feb6ce9622
https:/
Executed test runs:
SUCCESS: https:/
SUCCESS: https:/
SUCCESS: https:/
SUCCESS: https:/
Server Team CI bot (server-team-bot) wrote:
FAILED: Continuous integration, rev:6bbfb4fbe85
https:/
Executed test runs:
FAILURE: https:/
FAILURE: https:/
FAILURE: https:/
FAILURE: https:/
Server Team CI bot (server-team-bot) wrote:
PASSED: Continuous integration, rev:dec84e7e729
https:/
Executed test runs:
SUCCESS: https:/
SUCCESS: https:/
SUCCESS: https:/
SUCCESS: https:/
- 4c88164... by Paride Legovini
use-install-deps
The tool requires sudo to install dependencies.
Server Team CI bot (server-team-bot) wrote:
PASSED: Continuous integration, rev:f79823a18f7
https:/
Executed test runs:
SUCCESS: https:/
SUCCESS: https:/
SUCCESS: https:/
SUCCESS: https:/
Server Team CI bot (server-team-bot) wrote:
FAILED: Continuous integration, rev:5f19b7bdc80
https:/
Executed test runs:
FAILURE: https:/
SUCCESS: https:/
SUCCESS: https:/
SUCCESS: https:/
Server Team CI bot (server-team-bot) wrote:
PASSED: Continuous integration, rev:0a7d8c46717
https:/
Executed test runs:
SUCCESS: https:/
SUCCESS: https:/
SUCCESS: https:/
SUCCESS: https:/
Adam Collard (adam-collard) wrote:
I wouldn't mix a large reformatting like this (applying black across the codebase) with anything else.
I suggest landing the blacken commit, then separately isort, then separately again tox / lp-ci changes.
You'll want to ignore the black commits for e.g. git blame, but the tox and lp-ci config changes should be reviewed in their own right.
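Adam's git blame point maps to git's `blame.ignoreRevsFile` setting, which the `.git-blame-ignore-revs` file added by this MP is designed for. A minimal sketch of how a contributor would wire it up, demonstrated in a throwaway repo (the ignore-file contents here are a placeholder; the MP ships the real file with the black/isort commit hash):

```shell
# Throwaway repo just to demonstrate the setting; in a real clone of
# simplestreams you would only run the `git config` line.
repo=$(mktemp -d)
cd "$repo"
git init -q .

# Placeholder ignore file; the real one lists the reformatting commit hash.
printf '# Apply black and isort formatting\n' > .git-blame-ignore-revs

# Tell git blame to skip the commits listed in the file.
git config blame.ignoreRevsFile .git-blame-ignore-revs

# Confirm the setting is recorded (prints the configured path).
git config blame.ignoreRevsFile
```

With this set, `git blame` attributes lines to the commit that last made a meaningful change rather than to the mechanical reformatting commit.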
Server Team CI bot (server-team-bot) wrote:
PASSED: Continuous integration, rev:5f1da7595cc
https:/
Executed test runs:
SUCCESS: https:/
SUCCESS: https:/
SUCCESS: https:/
SUCCESS: https:/
Paride Legovini (paride) wrote:
Hi Adam, thanks for chiming in. Yeah I started this with the idea of quickly fixing CI (currently failing in master), then I started experimenting with integrating pre-commit, tox and lpci and the changeset grew.
I'll definitely split it up and resubmit it in smaller chunks.
Paride Legovini (paride) wrote:
CI is stuck (running for 17 hours). I'll do a no-op force push to retrigger.
- 6da3cc1... by Paride Legovini
ci: append 127.0.0.1 to no_proxy
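The fix from this commit shows up in the lpci run lines as `no_proxy="${no_proxy:+$no_proxy,}127.0.0.1"`. It relies on a standard POSIX parameter expansion: `${var:+word}` expands to `word` only when `var` is set and non-empty, so the existing value plus a comma separator is emitted only when `no_proxy` already has entries, and `127.0.0.1` is appended without ever producing a leading comma. A minimal sketch of the behavior:

```shell
# ${var:+word} expands to word only if var is set and non-empty,
# so the comma separator appears only when no_proxy already has entries.

no_proxy=""
echo "${no_proxy:+$no_proxy,}127.0.0.1"    # prints: 127.0.0.1

no_proxy="localhost,::1"
echo "${no_proxy:+$no_proxy,}127.0.0.1"    # prints: localhost,::1,127.0.0.1
```

This keeps the local test HTTP server reachable even when the CI workers inject proxy variables into the environment.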
Paride Legovini (paride) wrote:
CI passed! Next steps:
- Cleanup git history
- Split this MP into smaller ones.
I'll get back to this after 2023-06-13.
Adam Collard (adam-collard) wrote:
@Paride - don't forget about this one!
Paride Legovini (paride) wrote:
heh, knowing that other people are waiting for it will help. thanks!
Unmerged commits

- 6da3cc1... by Paride Legovini
  ci: append 127.0.0.1 to no_proxy
- 4c88164... by Paride Legovini
  use-install-deps
  The tool requires sudo to install dependencies.
- 5449069... by Paride Legovini
  prefer bin
- 19608c6... by Paride Legovini
  mm
- 75441dd... by Paride Legovini
  ci: deal with pre-commit modifying files
  We don't need to jump through hoops to avoid pre-commit modifying files
  during the tox run: if that happens, the CI run will fail anyway.
- 363d3b4... by Paride Legovini
  tox: skip_install in the pre_commit environment
- b405e84... by Paride Legovini
  tox: pass the lowercase *_proxy variables
  Workaround for https://github.com/tox-dev/tox/pull/2378/.
- 39c2e39... by Paride Legovini
  lpci: add test dependency: python3-dev
- a4d7a3f... by Paride Legovini
  lpci: expand the test matrix
  Cover: (amd64, arm64, ppc64el, s390x) * (focal, jammy, devel)
- 274817d... by Paride Legovini
  install-deps: install pkg-config on non-amd64 archs
Preview Diff
1 | diff --git a/.git-blame-ignore-revs b/.git-blame-ignore-revs |
2 | new file mode 100644 |
3 | index 0000000..da1123d |
4 | --- /dev/null |
5 | +++ b/.git-blame-ignore-revs |
6 | @@ -0,0 +1,4 @@ |
7 | +# Automatically apply to git blame with `git config blame.ignorerevsfile .git-blame-ignore-revs` |
8 | + |
9 | +# Apply black and isort formatting |
10 | +0d4060f7fa2de1e5b8c8b263100b1ed3a2c479bb |
11 | diff --git a/.launchpad.yaml b/.launchpad.yaml |
12 | new file mode 100644 |
13 | index 0000000..20a3afb |
14 | --- /dev/null |
15 | +++ b/.launchpad.yaml |
16 | @@ -0,0 +1,38 @@ |
17 | +pipeline: |
18 | + - lint |
19 | + - unit-jammy |
20 | + - unit-devel |
21 | + |
22 | +jobs: |
23 | + lint: |
24 | + series: jammy |
25 | + architectures: |
26 | + - amd64 |
27 | + packages: |
28 | + - git |
29 | + - tox |
30 | + run: tox -e flake8,pre-commit |
31 | + unit-jammy: |
32 | + series: jammy |
33 | + architectures: |
34 | + - amd64 |
35 | + - arm64 |
36 | + - ppc64el |
37 | + - s390x |
38 | + packages: |
39 | + - sudo |
40 | + run: | |
41 | + tools/install-deps tox |
42 | + no_proxy="${no_proxy:+$no_proxy,}127.0.0.1" tox -e py3 |
43 | + unit-devel: |
44 | + series: devel |
45 | + architectures: |
46 | + - amd64 |
47 | + - arm64 |
48 | + - ppc64el |
49 | + - s390x |
50 | + packages: |
51 | + - sudo |
52 | + run: | |
53 | + tools/install-deps tox |
54 | + no_proxy="${no_proxy:+$no_proxy,}127.0.0.1" tox -e py3 |
55 | diff --git a/.pre-commit-config.yaml b/.pre-commit-config.yaml |
56 | new file mode 100644 |
57 | index 0000000..0d08ba7 |
58 | --- /dev/null |
59 | +++ b/.pre-commit-config.yaml |
60 | @@ -0,0 +1,15 @@ |
61 | +# Hooks called in the "manual" stage are meant to be read-only. We'll trigger |
62 | +# them from tox, and we don't want the tox pre-commit environment to modify |
63 | +# code, as this may interfere with other environments. |
64 | +# |
65 | +# To update the pinned versions run: pre-commit autoupdate |
66 | + |
67 | +repos: |
68 | + - repo: https://github.com/ambv/black |
69 | + rev: 23.3.0 |
70 | + hooks: |
71 | + - id: black |
72 | + - repo: https://github.com/pycqa/isort |
73 | + rev: 5.12.0 |
74 | + hooks: |
75 | + - id: isort |
76 | diff --git a/bin/json2streams b/bin/json2streams |
77 | index bca6a7b..7dfd873 100755 |
78 | --- a/bin/json2streams |
79 | +++ b/bin/json2streams |
80 | @@ -2,8 +2,8 @@ |
81 | # Copyright (C) 2013, 2015 Canonical Ltd. |
82 | |
83 | import sys |
84 | -from simplestreams.json2streams import main |
85 | |
86 | +from simplestreams.json2streams import main |
87 | |
88 | -if __name__ == '__main__': |
89 | +if __name__ == "__main__": |
90 | sys.exit(main()) |
91 | diff --git a/bin/sstream-mirror b/bin/sstream-mirror |
92 | index 08b71a5..fbe86da 100755 |
93 | --- a/bin/sstream-mirror |
94 | +++ b/bin/sstream-mirror |
95 | @@ -18,11 +18,7 @@ |
96 | import argparse |
97 | import sys |
98 | |
99 | -from simplestreams import filters |
100 | -from simplestreams import log |
101 | -from simplestreams import mirrors |
102 | -from simplestreams import objectstores |
103 | -from simplestreams import util |
104 | +from simplestreams import filters, log, mirrors, objectstores, util |
105 | |
106 | |
107 | class DotProgress(object): |
108 | @@ -39,9 +35,10 @@ class DotProgress(object): |
109 | self.curpath = path |
110 | status = "" |
111 | if self.expected: |
112 | - status = (" %02s%%" % |
113 | - (int(self.bytes_read * 100 / self.expected))) |
114 | - sys.stderr.write('=> %s [%s]%s\n' % (path, total, status)) |
115 | + status = " %02s%%" % ( |
116 | + int(self.bytes_read * 100 / self.expected) |
117 | + ) |
118 | + sys.stderr.write("=> %s [%s]%s\n" % (path, total, status)) |
119 | |
120 | if cur == total: |
121 | sys.stderr.write("\n") |
122 | @@ -52,7 +49,7 @@ class DotProgress(object): |
123 | toprint = int(cur * self.columns / total) - self.printed |
124 | if toprint <= 0: |
125 | return |
126 | - sys.stderr.write('.' * toprint) |
127 | + sys.stderr.write("." * toprint) |
128 | sys.stderr.flush() |
129 | self.printed += toprint |
130 | |
131 | @@ -60,81 +57,135 @@ class DotProgress(object): |
132 | def main(): |
133 | parser = argparse.ArgumentParser() |
134 | |
135 | - parser.add_argument('--keep', action='store_true', default=False, |
136 | - help='keep items in target up to MAX items ' |
137 | - 'even after they have fallen out of the source') |
138 | - parser.add_argument('--max', type=int, default=None, |
139 | - help='store at most MAX items in the target') |
140 | - parser.add_argument('--path', default=None, |
141 | - help='sync from index or products file in mirror') |
142 | - parser.add_argument('--no-item-download', action='store_true', |
143 | - default=False, |
144 | - help='do not download items with a "path"') |
145 | - parser.add_argument('--dry-run', action='store_true', default=False, |
146 | - help='only report what would be done') |
147 | - parser.add_argument('--progress', action='store_true', default=False, |
148 | - help='show progress for downloading files') |
149 | - parser.add_argument('--mirror', action='append', default=[], |
150 | - dest="mirrors", |
151 | - help='additional mirrors to find referenced files') |
152 | - |
153 | - parser.add_argument('--verbose', '-v', action='count', default=0) |
154 | - parser.add_argument('--log-file', default=sys.stderr, |
155 | - type=argparse.FileType('w')) |
156 | - |
157 | - parser.add_argument('--keyring', action='store', default=None, |
158 | - help='keyring to be specified to gpg via --keyring') |
159 | - parser.add_argument('--no-verify', '-U', action='store_false', |
160 | - dest='verify', default=True, |
161 | - help="do not gpg check signed json files") |
162 | - parser.add_argument('--no-checksumming-reader', action='store_false', |
163 | - dest='checksumming_reader', default=True, |
164 | - help=("do not call 'insert_item' with a reader" |
165 | - " that does checksumming.")) |
166 | - |
167 | - parser.add_argument('source_mirror') |
168 | - parser.add_argument('output_d') |
169 | - parser.add_argument('filters', nargs='*', default=[]) |
170 | + parser.add_argument( |
171 | + "--keep", |
172 | + action="store_true", |
173 | + default=False, |
174 | + help="keep items in target up to MAX items " |
175 | + "even after they have fallen out of the source", |
176 | + ) |
177 | + parser.add_argument( |
178 | + "--max", |
179 | + type=int, |
180 | + default=None, |
181 | + help="store at most MAX items in the target", |
182 | + ) |
183 | + parser.add_argument( |
184 | + "--path", |
185 | + default=None, |
186 | + help="sync from index or products file in mirror", |
187 | + ) |
188 | + parser.add_argument( |
189 | + "--no-item-download", |
190 | + action="store_true", |
191 | + default=False, |
192 | + help='do not download items with a "path"', |
193 | + ) |
194 | + parser.add_argument( |
195 | + "--dry-run", |
196 | + action="store_true", |
197 | + default=False, |
198 | + help="only report what would be done", |
199 | + ) |
200 | + parser.add_argument( |
201 | + "--progress", |
202 | + action="store_true", |
203 | + default=False, |
204 | + help="show progress for downloading files", |
205 | + ) |
206 | + parser.add_argument( |
207 | + "--mirror", |
208 | + action="append", |
209 | + default=[], |
210 | + dest="mirrors", |
211 | + help="additional mirrors to find referenced files", |
212 | + ) |
213 | + |
214 | + parser.add_argument("--verbose", "-v", action="count", default=0) |
215 | + parser.add_argument( |
216 | + "--log-file", default=sys.stderr, type=argparse.FileType("w") |
217 | + ) |
218 | + |
219 | + parser.add_argument( |
220 | + "--keyring", |
221 | + action="store", |
222 | + default=None, |
223 | + help="keyring to be specified to gpg via --keyring", |
224 | + ) |
225 | + parser.add_argument( |
226 | + "--no-verify", |
227 | + "-U", |
228 | + action="store_false", |
229 | + dest="verify", |
230 | + default=True, |
231 | + help="do not gpg check signed json files", |
232 | + ) |
233 | + parser.add_argument( |
234 | + "--no-checksumming-reader", |
235 | + action="store_false", |
236 | + dest="checksumming_reader", |
237 | + default=True, |
238 | + help=( |
239 | + "do not call 'insert_item' with a reader" |
240 | + " that does checksumming." |
241 | + ), |
242 | + ) |
243 | + |
244 | + parser.add_argument("source_mirror") |
245 | + parser.add_argument("output_d") |
246 | + parser.add_argument("filters", nargs="*", default=[]) |
247 | |
248 | args = parser.parse_args() |
249 | |
250 | - (mirror_url, initial_path) = util.path_from_mirror_url(args.source_mirror, |
251 | - args.path) |
252 | + (mirror_url, initial_path) = util.path_from_mirror_url( |
253 | + args.source_mirror, args.path |
254 | + ) |
255 | |
256 | def policy(content, path): |
257 | - if initial_path.endswith('sjson'): |
258 | - return util.read_signed(content, |
259 | - keyring=args.keyring, |
260 | - checked=args.verify) |
261 | + if initial_path.endswith("sjson"): |
262 | + return util.read_signed( |
263 | + content, keyring=args.keyring, checked=args.verify |
264 | + ) |
265 | else: |
266 | return content |
267 | |
268 | filter_list = filters.get_filters(args.filters) |
269 | - mirror_config = {'max_items': args.max, 'keep_items': args.keep, |
270 | - 'filters': filter_list, |
271 | - 'item_download': not args.no_item_download, |
272 | - 'checksumming_reader': args.checksumming_reader} |
273 | + mirror_config = { |
274 | + "max_items": args.max, |
275 | + "keep_items": args.keep, |
276 | + "filters": filter_list, |
277 | + "item_download": not args.no_item_download, |
278 | + "checksumming_reader": args.checksumming_reader, |
279 | + } |
280 | |
281 | level = (log.ERROR, log.INFO, log.DEBUG)[min(args.verbose, 2)] |
282 | log.basicConfig(stream=args.log_file, level=level) |
283 | |
284 | - smirror = mirrors.UrlMirrorReader(mirror_url, mirrors=args.mirrors, |
285 | - policy=policy) |
286 | + smirror = mirrors.UrlMirrorReader( |
287 | + mirror_url, mirrors=args.mirrors, policy=policy |
288 | + ) |
289 | tstore = objectstores.FileStore(args.output_d) |
290 | |
291 | - drmirror = mirrors.DryRunMirrorWriter(config=mirror_config, |
292 | - objectstore=tstore) |
293 | + drmirror = mirrors.DryRunMirrorWriter( |
294 | + config=mirror_config, objectstore=tstore |
295 | + ) |
296 | drmirror.sync(smirror, initial_path) |
297 | |
298 | def print_diff(char, items): |
299 | for pedigree, path, size in items: |
300 | fmt = "{char} {pedigree} {path} {size} Mb\n" |
301 | size = int(size / (1024 * 1024)) |
302 | - sys.stderr.write(fmt.format( |
303 | - char=char, pedigree=' '.join(pedigree), path=path, size=size)) |
304 | - |
305 | - print_diff('+', drmirror.downloading) |
306 | - print_diff('-', drmirror.removing) |
307 | + sys.stderr.write( |
308 | + fmt.format( |
309 | + char=char, |
310 | + pedigree=" ".join(pedigree), |
311 | + path=path, |
312 | + size=size, |
313 | + ) |
314 | + ) |
315 | + |
316 | + print_diff("+", drmirror.downloading) |
317 | + print_diff("-", drmirror.removing) |
318 | sys.stderr.write("%d Mb change\n" % (drmirror.size / (1024 * 1024))) |
319 | |
320 | if args.dry_run: |
321 | @@ -147,13 +198,14 @@ def main(): |
322 | |
323 | tstore = objectstores.FileStore(args.output_d, complete_callback=callback) |
324 | |
325 | - tmirror = mirrors.ObjectFilterMirror(config=mirror_config, |
326 | - objectstore=tstore) |
327 | + tmirror = mirrors.ObjectFilterMirror( |
328 | + config=mirror_config, objectstore=tstore |
329 | + ) |
330 | |
331 | tmirror.sync(smirror, initial_path) |
332 | |
333 | |
334 | -if __name__ == '__main__': |
335 | +if __name__ == "__main__": |
336 | main() |
337 | |
338 | # vi: ts=4 expandtab syntax=python |
339 | diff --git a/bin/sstream-mirror-glance b/bin/sstream-mirror-glance |
340 | index 5907137..953a2d3 100755 |
341 | --- a/bin/sstream-mirror-glance |
342 | +++ b/bin/sstream-mirror-glance |
343 | @@ -23,15 +23,11 @@ import argparse |
344 | import os.path |
345 | import sys |
346 | |
347 | -from simplestreams import objectstores |
348 | -from simplestreams.objectstores import swift |
349 | -from simplestreams import log |
350 | -from simplestreams import mirrors |
351 | -from simplestreams import openstack |
352 | -from simplestreams import util |
353 | +from simplestreams import log, mirrors, objectstores, openstack, util |
354 | from simplestreams.mirrors import glance |
355 | +from simplestreams.objectstores import swift |
356 | |
357 | -DEFAULT_FILTERS = ['ftype~(disk1.img|disk.img)', 'arch~(x86_64|amd64|i386)'] |
358 | +DEFAULT_FILTERS = ["ftype~(disk1.img|disk.img)", "arch~(x86_64|amd64|i386)"] |
359 | DEFAULT_KEYRING = "/usr/share/keyrings/ubuntu-cloudimage-keyring.gpg" |
360 | |
361 | |
362 | @@ -44,94 +40,172 @@ class StdoutProgressAggregator(util.ProgressAggregator): |
363 | super(StdoutProgressAggregator, self).__init__(remaining_items) |
364 | |
365 | def emit(self, progress): |
366 | - size = float(progress['size']) |
367 | - written = float(progress['written']) |
368 | - print("%.2f %s (%d of %d images) - %.2f" % |
369 | - (written / size, progress['name'], |
370 | - self.total_image_count - len(self.remaining_items) + 1, |
371 | - self.total_image_count, |
372 | - float(self.total_written) / self.total_size)) |
373 | + size = float(progress["size"]) |
374 | + written = float(progress["written"]) |
375 | + print( |
376 | + "%.2f %s (%d of %d images) - %.2f" |
377 | + % ( |
378 | + written / size, |
379 | + progress["name"], |
380 | + self.total_image_count - len(self.remaining_items) + 1, |
381 | + self.total_image_count, |
382 | + float(self.total_written) / self.total_size, |
383 | + ) |
384 | + ) |
385 | |
386 | |
387 | def main(): |
388 | parser = argparse.ArgumentParser() |
389 | |
390 | - parser.add_argument('--keep', action='store_true', default=False, |
391 | - help='keep items in target up to MAX items ' |
392 | - 'even after they have fallen out of the source') |
393 | - parser.add_argument('--max', type=int, default=None, |
394 | - help='store at most MAX items in the target') |
395 | - |
396 | - parser.add_argument('--region', action='append', default=None, |
397 | - dest='regions', |
398 | - help='operate on specified region ' |
399 | - '[useable multiple times]') |
400 | - |
401 | - parser.add_argument('--mirror', action='append', default=[], |
402 | - dest="mirrors", |
403 | - help='additional mirrors to find referenced files') |
404 | - parser.add_argument('--path', default=None, |
405 | - help='sync from index or products file in mirror') |
406 | - parser.add_argument('--output-dir', metavar="DIR", default=False, |
407 | - help='write image data to storage in dir') |
408 | - parser.add_argument('--output-swift', metavar="prefix", default=False, |
409 | - help='write image data to swift under prefix') |
410 | - |
411 | - parser.add_argument('--name-prefix', metavar="prefix", default=None, |
412 | - help='prefix for each published image name') |
413 | - parser.add_argument('--cloud-name', metavar="name", default=None, |
414 | - required=True, help='unique name for this cloud') |
415 | - parser.add_argument('--modify-hook', metavar="cmd", default=None, |
416 | - required=False, |
417 | - help='invoke cmd on each image prior to upload') |
418 | - parser.add_argument('--content-id', metavar="name", default=None, |
419 | - required=True, |
420 | - help='content-id to use for published data.' |
421 | - ' may contain "%%(region)s"') |
422 | - |
423 | - parser.add_argument('--progress', action='store_true', default=False, |
424 | - help='display per-item download progress') |
425 | - parser.add_argument('--verbose', '-v', action='count', default=0) |
426 | - parser.add_argument('--log-file', default=sys.stderr, |
427 | - type=argparse.FileType('w')) |
428 | - |
429 | - parser.add_argument('--keyring', action='store', default=DEFAULT_KEYRING, |
430 | - help='The keyring for gpg --keyring') |
431 | - |
432 | - parser.add_argument('source_mirror') |
433 | - parser.add_argument('item_filters', nargs='*', default=DEFAULT_FILTERS, |
434 | - help="Filter expression for mirrored items. " |
435 | - "Multiple filter arguments can be specified" |
436 | - "and will be combined with logical AND. " |
437 | - "Expressions are key[!]=literal_string " |
438 | - "or key[!]~regexp.") |
439 | - |
440 | - parser.add_argument('--hypervisor-mapping', action='store_true', |
441 | - default=False, |
442 | - help="Set hypervisor_type attribute on stored images " |
443 | - "and the virt attribute in the associated stream " |
444 | - "data. This is useful in OpenStack Clouds which use " |
445 | - "multiple hypervisor types with in a single region.") |
446 | - |
447 | - parser.add_argument('--custom-property', action='append', default=[], |
448 | - dest="custom_properties", |
449 | - help='additional properties to add to glance' |
450 | - ' image metadata (key=value format).') |
451 | - |
452 | - parser.add_argument('--visibility', action='store', default='public', |
453 | - choices=('public', 'private', 'community', 'shared'), |
454 | - help='Visibility to apply to stored images.') |
455 | - |
456 | - parser.add_argument('--image-import-conversion', action='store_true', |
457 | - default=False, |
458 | - help="Enable conversion of images to raw format using " |
459 | - "image import option in Glance.") |
460 | - |
461 | - parser.add_argument('--set-latest-property', action='store_true', |
462 | - default=False, |
463 | - help="Set 'latest=true' property to latest synced " |
464 | - "os_version/architecture image metadata and remove " |
465 | - "latest property from the old images.") |
466 | + parser.add_argument( |
467 | + "--keep", |
468 | + action="store_true", |
469 | + default=False, |
470 | + help="keep items in target up to MAX items " |
471 | + "even after they have fallen out of the source", |
472 | + ) |
473 | + parser.add_argument( |
474 | + "--max", |
475 | + type=int, |
476 | + default=None, |
477 | + help="store at most MAX items in the target", |
478 | + ) |
479 | + |
480 | + parser.add_argument( |
481 | + "--region", |
482 | + action="append", |
483 | + default=None, |
484 | + dest="regions", |
485 | + help="operate on specified region " "[useable multiple times]", |
486 | + ) |
487 | + |
488 | + parser.add_argument( |
489 | + "--mirror", |
490 | + action="append", |
491 | + default=[], |
492 | + dest="mirrors", |
493 | + help="additional mirrors to find referenced files", |
494 | + ) |
495 | + parser.add_argument( |
496 | + "--path", |
497 | + default=None, |
498 | + help="sync from index or products file in mirror", |
499 | + ) |
500 | + parser.add_argument( |
501 | + "--output-dir", |
502 | + metavar="DIR", |
503 | + default=False, |
504 | + help="write image data to storage in dir", |
505 | + ) |
506 | + parser.add_argument( |
507 | + "--output-swift", |
508 | + metavar="prefix", |
509 | + default=False, |
510 | + help="write image data to swift under prefix", |
511 | + ) |
512 | + |
513 | + parser.add_argument( |
514 | + "--name-prefix", |
515 | + metavar="prefix", |
516 | + default=None, |
517 | + help="prefix for each published image name", |
518 | + ) |
519 | + parser.add_argument( |
520 | + "--cloud-name", |
521 | + metavar="name", |
522 | + default=None, |
523 | + required=True, |
524 | + help="unique name for this cloud", |
525 | + ) |
526 | + parser.add_argument( |
527 | + "--modify-hook", |
528 | + metavar="cmd", |
529 | + default=None, |
530 | + required=False, |
531 | + help="invoke cmd on each image prior to upload", |
532 | + ) |
533 | + parser.add_argument( |
534 | + "--content-id", |
535 | + metavar="name", |
536 | + default=None, |
537 | + required=True, |
538 | + help="content-id to use for published data." |
539 | + ' may contain "%%(region)s"', |
540 | + ) |
541 | + |
542 | + parser.add_argument( |
543 | + "--progress", |
544 | + action="store_true", |
545 | + default=False, |
546 | + help="display per-item download progress", |
547 | + ) |
548 | + parser.add_argument("--verbose", "-v", action="count", default=0) |
549 | + parser.add_argument( |
550 | + "--log-file", default=sys.stderr, type=argparse.FileType("w") |
551 | + ) |
552 | + |
553 | + parser.add_argument( |
554 | + "--keyring", |
555 | + action="store", |
556 | + default=DEFAULT_KEYRING, |
557 | + help="The keyring for gpg --keyring", |
558 | + ) |
559 | + |
560 | + parser.add_argument("source_mirror") |
561 | + parser.add_argument( |
562 | + "item_filters", |
563 | + nargs="*", |
564 | + default=DEFAULT_FILTERS, |
565 | + help="Filter expression for mirrored items. " |
566 | + "Multiple filter arguments can be specified" |
567 | + "and will be combined with logical AND. " |
568 | + "Expressions are key[!]=literal_string " |
569 | + "or key[!]~regexp.", |
570 | + ) |
571 | + |
572 | + parser.add_argument( |
573 | + "--hypervisor-mapping", |
574 | + action="store_true", |
575 | + default=False, |
576 | + help="Set hypervisor_type attribute on stored images " |
577 | + "and the virt attribute in the associated stream " |
578 | + "data. This is useful in OpenStack Clouds which use " |
579 | + "multiple hypervisor types with in a single region.", |
580 | + ) |
581 | + |
582 | + parser.add_argument( |
583 | + "--custom-property", |
584 | + action="append", |
585 | + default=[], |
586 | + dest="custom_properties", |
587 | + help="additional properties to add to glance" |
588 | + " image metadata (key=value format).", |
589 | + ) |
590 | + |
591 | + parser.add_argument( |
592 | + "--visibility", |
593 | + action="store", |
594 | + default="public", |
595 | + choices=("public", "private", "community", "shared"), |
596 | + help="Visibility to apply to stored images.", |
597 | + ) |
598 | + |
599 | + parser.add_argument( |
600 | + "--image-import-conversion", |
601 | + action="store_true", |
602 | + default=False, |
603 | + help="Enable conversion of images to raw format using " |
604 | + "image import option in Glance.", |
605 | + ) |
606 | + |
607 | + parser.add_argument( |
608 | + "--set-latest-property", |
609 | + action="store_true", |
610 | + default=False, |
611 | + help="Set 'latest=true' property on the latest synced " |
612 | + "os_version/architecture image metadata and remove " |
613 | + "latest property from the old images.", |
614 | + ) |
615 | |
616 | args = parser.parse_args() |
617 | |
618 | @@ -139,27 +213,32 @@ def main(): |
619 | if args.modify_hook: |
620 | modify_hook = args.modify_hook.split() |
621 | |
622 | - mirror_config = {'max_items': args.max, 'keep_items': args.keep, |
623 | - 'cloud_name': args.cloud_name, |
624 | - 'modify_hook': modify_hook, |
625 | - 'item_filters': args.item_filters, |
626 | - 'hypervisor_mapping': args.hypervisor_mapping, |
627 | - 'custom_properties': args.custom_properties, |
628 | - 'visibility': args.visibility, |
629 | - 'image_import_conversion': args.image_import_conversion, |
630 | - 'set_latest_property': args.set_latest_property} |
631 | - |
632 | - (mirror_url, args.path) = util.path_from_mirror_url(args.source_mirror, |
633 | - args.path) |
634 | + mirror_config = { |
635 | + "max_items": args.max, |
636 | + "keep_items": args.keep, |
637 | + "cloud_name": args.cloud_name, |
638 | + "modify_hook": modify_hook, |
639 | + "item_filters": args.item_filters, |
640 | + "hypervisor_mapping": args.hypervisor_mapping, |
641 | + "custom_properties": args.custom_properties, |
642 | + "visibility": args.visibility, |
643 | + "image_import_conversion": args.image_import_conversion, |
644 | + "set_latest_property": args.set_latest_property, |
645 | + } |
646 | + |
647 | + (mirror_url, args.path) = util.path_from_mirror_url( |
648 | + args.source_mirror, args.path |
649 | + ) |
650 | |
651 | def policy(content, path): # pylint: disable=W0613 |
652 | - if args.path.endswith('sjson'): |
653 | + if args.path.endswith("sjson"): |
654 | return util.read_signed(content, keyring=args.keyring) |
655 | else: |
656 | return content |
657 | |
658 | - smirror = mirrors.UrlMirrorReader(mirror_url, mirrors=args.mirrors, |
659 | - policy=policy) |
660 | + smirror = mirrors.UrlMirrorReader( |
661 | + mirror_url, mirrors=args.mirrors, policy=policy |
662 | + ) |
663 | if args.output_dir and args.output_swift: |
664 | error("--output-dir and --output-swift are mutually exclusive\n") |
665 | sys.exit(1) |
666 | @@ -169,7 +248,7 @@ def main(): |
667 | |
668 | regions = args.regions |
669 | if regions is None: |
670 | - regions = openstack.get_regions(services=['image']) |
671 | + regions = openstack.get_regions(services=["image"]) |
672 | |
673 | for region in regions: |
674 | if args.output_dir: |
675 | @@ -181,25 +260,29 @@ def main(): |
676 | sys.stderr.write("not writing data anywhere\n") |
677 | tstore = None |
678 | |
679 | - mirror_config['content_id'] = args.content_id % {'region': region} |
680 | + mirror_config["content_id"] = args.content_id % {"region": region} |
681 | |
682 | if args.progress: |
683 | - drmirror = glance.ItemInfoDryRunMirror(config=mirror_config, |
684 | - objectstore=tstore) |
685 | + drmirror = glance.ItemInfoDryRunMirror( |
686 | + config=mirror_config, objectstore=tstore |
687 | + ) |
688 | drmirror.sync(smirror, args.path) |
689 | p = StdoutProgressAggregator(drmirror.items) |
690 | progress_callback = p.progress_callback |
691 | else: |
692 | progress_callback = None |
693 | |
694 | - tmirror = glance.GlanceMirror(config=mirror_config, |
695 | - objectstore=tstore, region=region, |
696 | - name_prefix=args.name_prefix, |
697 | - progress_callback=progress_callback) |
698 | + tmirror = glance.GlanceMirror( |
699 | + config=mirror_config, |
700 | + objectstore=tstore, |
701 | + region=region, |
702 | + name_prefix=args.name_prefix, |
703 | + progress_callback=progress_callback, |
704 | + ) |
705 | tmirror.sync(smirror, args.path) |
706 | |
707 | |
708 | -if __name__ == '__main__': |
709 | +if __name__ == "__main__": |
710 | main() |
711 | |
712 | # vi: ts=4 expandtab syntax=python |
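The hunk above expands `--content-id` per region with old-style %-formatting (`args.content_id % {"region": region}`). A minimal standalone sketch of that substitution (the function name is illustrative, not part of the codebase):

```python
# The --content-id value may contain "%(region)s", expanded per region
# with old-style dict formatting, exactly as sstream-mirror-glance does.
def expand_content_id(content_id, region):
    """Expand an optional %(region)s placeholder in a content-id."""
    return content_id % {"region": region}

print(expand_content_id("com.example.cloud:released:%(region)s", "us-east"))
# A content-id without the placeholder passes through unchanged.
print(expand_content_id("com.example.cloud:released", "us-east"))
```

Note that this is why the help string escapes the placeholder as `%%(region)s`: argparse itself %-formats help text.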
713 | diff --git a/bin/sstream-query b/bin/sstream-query |
714 | index 6534317..3752cac 100755 |
715 | --- a/bin/sstream-query |
716 | +++ b/bin/sstream-query |
717 | @@ -16,11 +16,6 @@ |
718 | # You should have received a copy of the GNU Affero General Public License |
719 | # along with Simplestreams. If not, see <http://www.gnu.org/licenses/>. |
720 | |
721 | -from simplestreams import filters |
722 | -from simplestreams import mirrors |
723 | -from simplestreams import log |
724 | -from simplestreams import util |
725 | - |
726 | import argparse |
727 | import errno |
728 | import json |
729 | @@ -28,6 +23,8 @@ import pprint |
730 | import signal |
731 | import sys |
732 | |
733 | +from simplestreams import filters, log, mirrors, util |
734 | + |
735 | FORMAT_PRETTY = "PRETTY" |
736 | FORMAT_JSON = "JSON" |
737 | DEFAULT_KEYRING = "/usr/share/keyrings/ubuntu-cloudimage-keyring.gpg" |
738 | @@ -43,15 +40,15 @@ class FilterMirror(mirrors.BasicMirrorWriter): |
739 | if config is None: |
740 | config = {} |
741 | self.config = config |
742 | - self.filters = config.get('filters', []) |
743 | - outfmt = config.get('output_format') |
744 | + self.filters = config.get("filters", []) |
745 | + outfmt = config.get("output_format") |
746 | if not outfmt: |
747 | outfmt = "%s" |
748 | self.output_format = outfmt |
749 | self.json_entries = [] |
750 | |
751 | def load_products(self, path=None, content_id=None): |
752 | - return {'content_id': content_id, 'products': {}} |
753 | + return {"content_id": content_id, "products": {}} |
754 | |
755 | def filter_item(self, data, src, target, pedigree): |
756 | return filters.filter_item(self.filters, data, src, pedigree) |
757 | @@ -61,8 +58,8 @@ class FilterMirror(mirrors.BasicMirrorWriter): |
758 | # data is src['products'][ped[0]]['versions'][ped[1]]['items'][ped[2]] |
759 | # contentsource is a ContentSource if 'path' exists in data or None |
760 | data = util.products_exdata(src, pedigree) |
761 | - if 'path' in data: |
762 | - data.update({'item_url': contentsource.url}) |
763 | + if "path" in data: |
764 | + data.update({"item_url": contentsource.url}) |
765 | |
766 | if self.output_format == FORMAT_PRETTY: |
767 | pprint.pprint(data) |
768 | @@ -79,39 +76,71 @@ class FilterMirror(mirrors.BasicMirrorWriter): |
769 | def main(): |
770 | parser = argparse.ArgumentParser() |
771 | |
772 | - parser.add_argument('--max', type=int, default=None, dest='max_items', |
773 | - help='store at most MAX items in the target') |
774 | + parser.add_argument( |
775 | + "--max", |
776 | + type=int, |
777 | + default=None, |
778 | + dest="max_items", |
779 | + help="store at most MAX items in the target", |
780 | + ) |
781 | |
782 | - parser.add_argument('--path', default=None, |
783 | - help='sync from index or products file in mirror') |
784 | + parser.add_argument( |
785 | + "--path", |
786 | + default=None, |
787 | + help="sync from index or products file in mirror", |
788 | + ) |
789 | |
790 | fmt_group = parser.add_mutually_exclusive_group() |
791 | - fmt_group.add_argument('--output-format', '-o', action='store', |
792 | - dest='output_format', default=None, |
793 | - help="specify output format per python str.format") |
794 | - fmt_group.add_argument('--pretty', action='store_const', |
795 | - const=FORMAT_PRETTY, dest='output_format', |
796 | - help="pretty print output") |
797 | - fmt_group.add_argument('--json', action='store_const', |
798 | - const=FORMAT_JSON, dest='output_format', |
799 | - help="output in JSON as a list of dicts.") |
800 | - parser.add_argument('--verbose', '-v', action='count', default=0) |
801 | - parser.add_argument('--log-file', default=sys.stderr, |
802 | - type=argparse.FileType('w')) |
803 | - |
804 | - parser.add_argument('--keyring', action='store', default=DEFAULT_KEYRING, |
805 | - help='keyring to be specified to gpg via --keyring') |
806 | - parser.add_argument('--no-verify', '-U', action='store_false', |
807 | - dest='verify', default=True, |
808 | - help="do not gpg check signed json files") |
809 | - |
810 | - parser.add_argument('mirror_url') |
811 | - parser.add_argument('filters', nargs='*', default=[]) |
812 | + fmt_group.add_argument( |
813 | + "--output-format", |
814 | + "-o", |
815 | + action="store", |
816 | + dest="output_format", |
817 | + default=None, |
818 | + help="specify output format per python str.format", |
819 | + ) |
820 | + fmt_group.add_argument( |
821 | + "--pretty", |
822 | + action="store_const", |
823 | + const=FORMAT_PRETTY, |
824 | + dest="output_format", |
825 | + help="pretty print output", |
826 | + ) |
827 | + fmt_group.add_argument( |
828 | + "--json", |
829 | + action="store_const", |
830 | + const=FORMAT_JSON, |
831 | + dest="output_format", |
832 | + help="output in JSON as a list of dicts.", |
833 | + ) |
834 | + parser.add_argument("--verbose", "-v", action="count", default=0) |
835 | + parser.add_argument( |
836 | + "--log-file", default=sys.stderr, type=argparse.FileType("w") |
837 | + ) |
838 | + |
839 | + parser.add_argument( |
840 | + "--keyring", |
841 | + action="store", |
842 | + default=DEFAULT_KEYRING, |
843 | + help="keyring to be specified to gpg via --keyring", |
844 | + ) |
845 | + parser.add_argument( |
846 | + "--no-verify", |
847 | + "-U", |
848 | + action="store_false", |
849 | + dest="verify", |
850 | + default=True, |
851 | + help="do not gpg check signed json files", |
852 | + ) |
853 | + |
854 | + parser.add_argument("mirror_url") |
855 | + parser.add_argument("filters", nargs="*", default=[]) |
856 | |
857 | cmdargs = parser.parse_args() |
858 | |
859 | - (mirror_url, path) = util.path_from_mirror_url(cmdargs.mirror_url, |
860 | - cmdargs.path) |
861 | + (mirror_url, path) = util.path_from_mirror_url( |
862 | + cmdargs.mirror_url, cmdargs.path |
863 | + ) |
864 | |
865 | level = (log.ERROR, log.INFO, log.DEBUG)[min(cmdargs.verbose, 2)] |
866 | log.basicConfig(stream=cmdargs.log_file, level=level) |
867 | @@ -119,33 +148,41 @@ def main(): |
868 | initial_path = path |
869 | |
870 | def policy(content, path): |
871 | - if initial_path.endswith('sjson'): |
872 | - return util.read_signed(content, |
873 | - keyring=cmdargs.keyring, |
874 | - checked=cmdargs.verify) |
875 | + if initial_path.endswith("sjson"): |
876 | + return util.read_signed( |
877 | + content, keyring=cmdargs.keyring, checked=cmdargs.verify |
878 | + ) |
879 | else: |
880 | return content |
881 | |
882 | smirror = mirrors.UrlMirrorReader(mirror_url, policy=policy) |
883 | |
884 | filter_list = filters.get_filters(cmdargs.filters) |
885 | - cfg = {'max_items': cmdargs.max_items, |
886 | - 'filters': filter_list, |
887 | - 'output_format': cmdargs.output_format} |
888 | + cfg = { |
889 | + "max_items": cmdargs.max_items, |
890 | + "filters": filter_list, |
891 | + "output_format": cmdargs.output_format, |
892 | + } |
893 | |
894 | tmirror = FilterMirror(config=cfg) |
895 | try: |
896 | tmirror.sync(smirror, path) |
897 | if tmirror.output_format == FORMAT_JSON: |
898 | - print(json.dumps(tmirror.json_entries, indent=2, sort_keys=True, |
899 | - separators=(',', ': '))) |
900 | + print( |
901 | + json.dumps( |
902 | + tmirror.json_entries, |
903 | + indent=2, |
904 | + sort_keys=True, |
905 | + separators=(",", ": "), |
906 | + ) |
907 | + ) |
908 | except IOError as e: |
909 | if e.errno == errno.EPIPE: |
910 | sys.exit(0x80 | signal.SIGPIPE) |
911 | raise |
912 | |
913 | |
914 | -if __name__ == '__main__': |
915 | +if __name__ == "__main__": |
916 | main() |
917 | |
918 | # vi: ts=4 expandtab syntax=python |
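sstream-query's positional `filters` use the `key[!]=literal_string` / `key[!]~regexp` syntax documented in the help text above. A hedged sketch of that matching (the real parsing lives in `simplestreams/filters.py`; this simplified version handles only `\w+` keys):

```python
import re

# key=value (exact), key!=value (negated), key~regex (match),
# key!~regex (negated match) -- per the documented filter syntax.
def matches(expr, data, noneval=""):
    m = re.match(r"(\w+)(!?[=~])(.*)", expr)
    if not m:
        raise ValueError("Bad filter expression: %s" % expr)
    key, op, val = m.groups()
    actual = str(data.get(key, noneval))
    if op.endswith("="):
        result = actual == val
    else:
        result = re.match(val + "$", actual) is not None
    return (not result) if op.startswith("!") else result

print(matches("arch=amd64", {"arch": "amd64"}))        # True
print(matches("release~x.*", {"release": "xenial"}))   # True
```

Multiple expressions are ANDed together by the query tools, so an item must satisfy every filter to be printed.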
919 | diff --git a/bin/sstream-sync b/bin/sstream-sync |
920 | index d10b90f..4082400 100755 |
921 | --- a/bin/sstream-sync |
922 | +++ b/bin/sstream-sync |
923 | @@ -16,18 +16,17 @@ |
924 | # You should have received a copy of the GNU Affero General Public License |
925 | # along with Simplestreams. If not, see <http://www.gnu.org/licenses/>. |
926 | |
927 | -from simplestreams import mirrors |
928 | -from simplestreams.mirrors import command_hook |
929 | -from simplestreams import log |
930 | -from simplestreams import util |
931 | - |
932 | import argparse |
933 | import errno |
934 | import os |
935 | import signal |
936 | import sys |
937 | + |
938 | import yaml |
939 | |
940 | +from simplestreams import log, mirrors, util |
941 | +from simplestreams.mirrors import command_hook |
942 | + |
943 | |
944 | def which(program): |
945 | def is_exe(fpath): |
946 | @@ -55,51 +54,90 @@ def main(): |
947 | parser = argparse.ArgumentParser() |
948 | defhook = command_hook.DEFAULT_HOOK_NAME |
949 | |
950 | - hooks = [("--hook-%s" % hook.replace("_", "-"), hook, False) |
951 | - for hook in command_hook.HOOK_NAMES] |
952 | - hooks.append(('--hook', defhook, False,)) |
953 | - |
954 | - parser.add_argument('--config', '-c', |
955 | - help='read config file', |
956 | - type=argparse.FileType('rb')) |
957 | - |
958 | - for (argname, cfgname, _required) in hooks: |
959 | + hooks = [ |
960 | + ("--hook-%s" % hook.replace("_", "-"), hook, False) |
961 | + for hook in command_hook.HOOK_NAMES |
962 | + ] |
963 | + hooks.append( |
964 | + ( |
965 | + "--hook", |
966 | + defhook, |
967 | + False, |
968 | + ) |
969 | + ) |
970 | + |
971 | + parser.add_argument( |
972 | + "--config", "-c", help="read config file", type=argparse.FileType("rb") |
973 | + ) |
974 | + |
975 | + for argname, cfgname, _required in hooks: |
976 | parser.add_argument(argname, dest=cfgname, required=False) |
977 | |
978 | - parser.add_argument('--keep', action='store_true', default=False, |
979 | - dest='keep_items', |
980 | - help='keep items in target up to MAX items ' |
981 | - 'even after they have fallen out of the source') |
982 | - parser.add_argument('--max', type=int, default=None, dest='max_items', |
983 | - help='store at most MAX items in the target') |
984 | - parser.add_argument('--item-skip-download', action='store_true', |
985 | - default=False, |
986 | - help='Do not download items that are to be inserted.') |
987 | - parser.add_argument('--delete', action='store_true', default=False, |
988 | - dest='delete_filtered_items', |
989 | - help='remove filtered items from the target') |
990 | - parser.add_argument('--path', default=None, |
991 | - help='sync from index or products file in mirror') |
992 | - |
993 | - parser.add_argument('--verbose', '-v', action='count', default=0) |
994 | - parser.add_argument('--log-file', default=sys.stderr, |
995 | - type=argparse.FileType('w')) |
996 | - |
997 | - parser.add_argument('--keyring', action='store', default=None, |
998 | - help='keyring to be specified to gpg via --keyring') |
999 | - parser.add_argument('--no-verify', '-U', action='store_false', |
1000 | - dest='verify', default=True, |
1001 | - help="do not gpg check signed json files") |
1002 | - |
1003 | - parser.add_argument('mirror_url') |
1004 | + parser.add_argument( |
1005 | + "--keep", |
1006 | + action="store_true", |
1007 | + default=False, |
1008 | + dest="keep_items", |
1009 | + help="keep items in target up to MAX items " |
1010 | + "even after they have fallen out of the source", |
1011 | + ) |
1012 | + parser.add_argument( |
1013 | + "--max", |
1014 | + type=int, |
1015 | + default=None, |
1016 | + dest="max_items", |
1017 | + help="store at most MAX items in the target", |
1018 | + ) |
1019 | + parser.add_argument( |
1020 | + "--item-skip-download", |
1021 | + action="store_true", |
1022 | + default=False, |
1023 | + help="Do not download items that are to be inserted.", |
1024 | + ) |
1025 | + parser.add_argument( |
1026 | + "--delete", |
1027 | + action="store_true", |
1028 | + default=False, |
1029 | + dest="delete_filtered_items", |
1030 | + help="remove filtered items from the target", |
1031 | + ) |
1032 | + parser.add_argument( |
1033 | + "--path", |
1034 | + default=None, |
1035 | + help="sync from index or products file in mirror", |
1036 | + ) |
1037 | + |
1038 | + parser.add_argument("--verbose", "-v", action="count", default=0) |
1039 | + parser.add_argument( |
1040 | + "--log-file", default=sys.stderr, type=argparse.FileType("w") |
1041 | + ) |
1042 | + |
1043 | + parser.add_argument( |
1044 | + "--keyring", |
1045 | + action="store", |
1046 | + default=None, |
1047 | + help="keyring to be specified to gpg via --keyring", |
1048 | + ) |
1049 | + parser.add_argument( |
1050 | + "--no-verify", |
1051 | + "-U", |
1052 | + action="store_false", |
1053 | + dest="verify", |
1054 | + default=True, |
1055 | + help="do not gpg check signed json files", |
1056 | + ) |
1057 | + |
1058 | + parser.add_argument("mirror_url") |
1059 | cmdargs = parser.parse_args() |
1060 | |
1061 | - known_cfg = [('--item-skip-download', 'item_skip_download', False), |
1062 | - ('--max', 'max_items', False), |
1063 | - ('--keep', 'keep_items', False), |
1064 | - ('--delete', 'delete_filtered_items', False), |
1065 | - ('mirror_url', 'mirror_url', True), |
1066 | - ('--path', 'path', True)] |
1067 | + known_cfg = [ |
1068 | + ("--item-skip-download", "item_skip_download", False), |
1069 | + ("--max", "max_items", False), |
1070 | + ("--keep", "keep_items", False), |
1071 | + ("--delete", "delete_filtered_items", False), |
1072 | + ("mirror_url", "mirror_url", True), |
1073 | + ("--path", "path", True), |
1074 | + ] |
1075 | known_cfg.extend(hooks) |
1076 | |
1077 | cfg = {} |
1078 | @@ -116,31 +154,41 @@ def main(): |
1079 | missing = [] |
1080 | fallback = cfg.get(defhook, getattr(cmdargs, defhook, None)) |
1081 | |
1082 | - for (argname, cfgname, _required) in known_cfg: |
1083 | + for argname, cfgname, _required in known_cfg: |
1084 | val = getattr(cmdargs, cfgname) |
1085 | if val is not None: |
1086 | cfg[cfgname] = val |
1087 | if val == "": |
1088 | cfg[cfgname] = None |
1089 | |
1090 | - if ((cfgname in command_hook.HOOK_NAMES or cfgname == defhook) and |
1091 | - cfg.get(cfgname) is not None): |
1092 | + if ( |
1093 | + cfgname in command_hook.HOOK_NAMES or cfgname == defhook |
1094 | + ) and cfg.get(cfgname) is not None: |
1095 | if which(cfg[cfgname]) is None: |
1096 | msg = "invalid input for %s. '%s' is not executable\n" |
1097 | sys.stderr.write(msg % (argname, val)) |
1098 | sys.exit(1) |
1099 | |
1100 | - if (cfgname in command_hook.REQUIRED_FIELDS and |
1101 | - cfg.get(cfgname) is None and not fallback): |
1102 | - missing.append((argname, cfgname,)) |
1103 | + if ( |
1104 | + cfgname in command_hook.REQUIRED_FIELDS |
1105 | + and cfg.get(cfgname) is None |
1106 | + and not fallback |
1107 | + ): |
1108 | + missing.append( |
1109 | + ( |
1110 | + argname, |
1111 | + cfgname, |
1112 | + ) |
1113 | + ) |
1114 | |
1115 | pfm = util.path_from_mirror_url |
1116 | - (cfg['mirror_url'], cfg['path']) = pfm(cfg['mirror_url'], cfg.get('path')) |
1117 | + (cfg["mirror_url"], cfg["path"]) = pfm(cfg["mirror_url"], cfg.get("path")) |
1118 | |
1119 | if missing: |
1120 | - sys.stderr.write("must provide input for (--hook/%s for default):\n" |
1121 | - % defhook) |
1122 | - for (flag, cfg) in missing: |
1123 | + sys.stderr.write( |
1124 | + "must provide input for (--hook/%s for default):\n" % defhook |
1125 | + ) |
1126 | + for flag, cfg in missing: |
1127 | sys.stderr.write(" cmdline '%s' or cfgname '%s'\n" % (flag, cfg)) |
1128 | sys.exit(1) |
1129 | |
1130 | @@ -148,24 +196,24 @@ def main(): |
1131 | log.basicConfig(stream=cmdargs.log_file, level=level) |
1132 | |
1133 | def policy(content, path): |
1134 | - if cfg['path'].endswith('sjson'): |
1135 | - return util.read_signed(content, |
1136 | - keyring=cmdargs.keyring, |
1137 | - checked=cmdargs.verify) |
1138 | + if cfg["path"].endswith("sjson"): |
1139 | + return util.read_signed( |
1140 | + content, keyring=cmdargs.keyring, checked=cmdargs.verify |
1141 | + ) |
1142 | else: |
1143 | return content |
1144 | |
1145 | - smirror = mirrors.UrlMirrorReader(cfg['mirror_url'], policy=policy) |
1146 | + smirror = mirrors.UrlMirrorReader(cfg["mirror_url"], policy=policy) |
1147 | tmirror = command_hook.CommandHookMirror(config=cfg) |
1148 | try: |
1149 | - tmirror.sync(smirror, cfg['path']) |
1150 | + tmirror.sync(smirror, cfg["path"]) |
1151 | except IOError as e: |
1152 | if e.errno == errno.EPIPE: |
1153 | sys.exit(0x80 | signal.SIGPIPE) |
1154 | raise |
1155 | |
1156 | |
1157 | -if __name__ == '__main__': |
1158 | +if __name__ == "__main__": |
1159 | main() |
1160 | |
1161 | # vi: ts=4 expandtab syntax=python |
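The diff truncates sstream-sync's `which()` helper, which the script uses to reject hook commands that are not executable before syncing. A self-contained PATH lookup along those lines (a sketch of the standard pattern, not the exact upstream body):

```python
import os

# Resolve a command the way sstream-sync validates hooks: an explicit
# path must be an executable file; a bare name is searched on $PATH.
def which(program):
    fpath, _ = os.path.split(program)
    if fpath:
        if os.path.isfile(program) and os.access(program, os.X_OK):
            return program
        return None
    for path in os.environ.get("PATH", "").split(os.pathsep):
        candidate = os.path.join(path, program)
        if os.path.isfile(candidate) and os.access(candidate, os.X_OK):
            return candidate
    return None

print(which("sh"))  # e.g. /bin/sh on most POSIX systems
```

If the lookup returns None for a configured hook, the script prints "'%s' is not executable" and exits nonzero, as shown in the config-validation loop above.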
1162 | diff --git a/pyproject.toml b/pyproject.toml |
1163 | new file mode 100644 |
1164 | index 0000000..d84cc51 |
1165 | --- /dev/null |
1166 | +++ b/pyproject.toml |
1167 | @@ -0,0 +1,6 @@ |
1168 | +[tool.black] |
1169 | +line-length = 79 |
1170 | + |
1171 | +[tool.isort] |
1172 | +profile = "black" |
1173 | +line_length = 79 |
1174 | diff --git a/setup.py b/setup.py |
1175 | index 6b5a29b..301721b 100644 |
1176 | --- a/setup.py |
1177 | +++ b/setup.py |
1178 | @@ -1,8 +1,9 @@ |
1179 | -from setuptools import setup |
1180 | -from glob import glob |
1181 | import os |
1182 | +from glob import glob |
1183 | + |
1184 | +from setuptools import setup |
1185 | |
1186 | -VERSION = '0.1.0' |
1187 | +VERSION = "0.1.0" |
1188 | |
1189 | |
1190 | def is_f(p): |
1191 | @@ -11,18 +12,20 @@ def is_f(p): |
1192 | |
1193 | setup( |
1194 | name="python-simplestreams", |
1195 | - description='Library and tools for using Simple Streams data', |
1196 | + description="Library and tools for using Simple Streams data", |
1197 | version=VERSION, |
1198 | - author='Scott Moser', |
1199 | - author_email='scott.moser@canonical.com', |
1200 | + author="Scott Moser", |
1201 | + author_email="scott.moser@canonical.com", |
1202 | license="AGPL", |
1203 | - url='http://launchpad.net/simplestreams/', |
1204 | - packages=['simplestreams', 'simplestreams.mirrors', |
1205 | - 'simplestreams.objectstores'], |
1206 | - scripts=glob('bin/*'), |
1207 | + url="http://launchpad.net/simplestreams/", |
1208 | + packages=[ |
1209 | + "simplestreams", |
1210 | + "simplestreams.mirrors", |
1211 | + "simplestreams.objectstores", |
1212 | + ], |
1213 | + scripts=glob("bin/*"), |
1214 | data_files=[ |
1215 | - ('lib/simplestreams', glob('tools/hook-*')), |
1216 | - ('share/doc/simplestreams', |
1217 | - [f for f in glob('doc/*') if is_f(f)]), |
1218 | - ] |
1219 | + ("lib/simplestreams", glob("tools/hook-*")), |
1220 | + ("share/doc/simplestreams", [f for f in glob("doc/*") if is_f(f)]), |
1221 | + ], |
1222 | ) |
1223 | diff --git a/simplestreams/checksum_util.py b/simplestreams/checksum_util.py |
1224 | index dc695e3..bc98161 100644 |
1225 | --- a/simplestreams/checksum_util.py |
1226 | +++ b/simplestreams/checksum_util.py |
1227 | @@ -20,7 +20,7 @@ import hashlib |
1228 | CHECKSUMS = ("md5", "sha256", "sha512") |
1229 | |
1230 | try: |
1231 | - ALGORITHMS = list(getattr(hashlib, 'algorithms')) |
1232 | + ALGORITHMS = list(getattr(hashlib, "algorithms")) |
1233 | except AttributeError: |
1234 | ALGORITHMS = list(hashlib.algorithms_available) |
1235 | |
1236 | @@ -56,11 +56,13 @@ class checksummer(object): |
1237 | return self._hasher.hexdigest() |
1238 | |
1239 | def check(self): |
1240 | - return (self.expected is None or self.expected == self.hexdigest()) |
1241 | + return self.expected is None or self.expected == self.hexdigest() |
1242 | |
1243 | def __str__(self): |
1244 | - return ("checksummer (algorithm=%s expected=%s)" % |
1245 | - (self.algorithm, self.expected)) |
1246 | + return "checksummer (algorithm=%s expected=%s)" % ( |
1247 | + self.algorithm, |
1248 | + self.expected, |
1249 | + ) |
1250 | |
1251 | |
1252 | def item_checksums(item): |
1253 | @@ -69,14 +71,16 @@ def item_checksums(item): |
1254 | |
1255 | class SafeCheckSummer(checksummer): |
1256 | """SafeCheckSummer raises ValueError if checksums are not provided.""" |
1257 | + |
1258 | def __init__(self, checksums, allowed=None): |
1259 | if allowed is None: |
1260 | allowed = CHECKSUMS |
1261 | super(SafeCheckSummer, self).__init__(checksums) |
1262 | if self.algorithm not in allowed: |
1263 | raise ValueError( |
1264 | - "provided checksums (%s) did not include any allowed (%s)" % |
1265 | - (checksums, allowed)) |
1266 | + "provided checksums (%s) did not include any allowed (%s)" |
1267 | + % (checksums, allowed) |
1268 | + ) |
1269 | |
1270 | |
1271 | class InvalidChecksum(ValueError): |
1272 | @@ -93,18 +97,31 @@ class InvalidChecksum(ValueError): |
1273 | if not isinstance(self.expected_size, int): |
1274 | msg = "Invalid size '%s' at %s." % (self.expected_size, self.path) |
1275 | else: |
1276 | - msg = ("Invalid %s Checksum at %s. Found %s. Expected %s. " |
1277 | - "read %s bytes expected %s bytes." % |
1278 | - (self.cksum.algorithm, self.path, |
1279 | - self.cksum.hexdigest(), self.cksum.expected, |
1280 | - self.size, self.expected_size)) |
1281 | + msg = ( |
1282 | + "Invalid %s Checksum at %s. Found %s. Expected %s. " |
1283 | + "read %s bytes expected %s bytes." |
1284 | + % ( |
1285 | + self.cksum.algorithm, |
1286 | + self.path, |
1287 | + self.cksum.hexdigest(), |
1288 | + self.cksum.expected, |
1289 | + self.size, |
1290 | + self.expected_size, |
1291 | + ) |
1292 | + ) |
1293 | if self.size: |
1294 | - msg += (" (size %s expected %s)" % |
1295 | - (self.size, self.expected_size)) |
1296 | + msg += " (size %s expected %s)" % ( |
1297 | + self.size, |
1298 | + self.expected_size, |
1299 | + ) |
1300 | return msg |
1301 | |
1302 | |
1303 | def invalid_checksum_for_reader(reader, msg=None): |
1304 | - return InvalidChecksum(path=reader.url, cksum=reader.checksummer, |
1305 | - size=reader.bytes_read, expected_size=reader.size, |
1306 | - msg=msg) |
1307 | + return InvalidChecksum( |
1308 | + path=reader.url, |
1309 | + cksum=reader.checksummer, |
1310 | + size=reader.bytes_read, |
1311 | + expected_size=reader.size, |
1312 | + msg=msg, |
1313 | + ) |
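checksum_util's `checksummer` wraps hashlib and compares against an expected digest, while `SafeCheckSummer` additionally insists the provided checksums include an allowed algorithm. A condensed sketch of that verification flow (the function and the "strongest first" ordering are illustrative):

```python
import hashlib

CHECKSUMS = ("md5", "sha256", "sha512")

# Pick an available algorithm from a dict of expected checksums and
# compare digests, roughly what checksummer.check() amounts to.
def verify(data, checksums):
    """Return True if data matches a provided checksum (or none given)."""
    for algo in reversed(CHECKSUMS):  # prefer the strongest digest
        if algo in checksums:
            return hashlib.new(algo, data).hexdigest() == checksums[algo]
    return True  # no expected checksum: nothing to verify

payload = b"simplestreams"
good = {"sha256": hashlib.sha256(payload).hexdigest()}
print(verify(payload, good))      # True
print(verify(b"tampered", good))  # False
```

On mismatch the real code raises `InvalidChecksum`, whose message (reformatted in the hunk above) reports algorithm, path, digests, and byte counts.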
1314 | diff --git a/simplestreams/contentsource.py b/simplestreams/contentsource.py |
1315 | index ce45097..e5c6c98 100644 |
1316 | --- a/simplestreams/contentsource.py |
1317 | +++ b/simplestreams/contentsource.py |
1318 | @@ -23,12 +23,13 @@ import sys |
1319 | from . import checksum_util |
1320 | |
1321 | if sys.version_info > (3, 0): |
1322 | + import urllib.error as urllib_error |
1323 | import urllib.parse as urlparse |
1324 | import urllib.request as urllib_request |
1325 | - import urllib.error as urllib_error |
1326 | else: |
1327 | - import urlparse |
1328 | import urllib2 as urllib_request |
1329 | + import urlparse |
1330 | + |
1331 | urllib_error = urllib_request |
1332 | |
1333 | READ_BUFFER_SIZE = 1024 * 10 |
1334 | @@ -38,12 +39,14 @@ try: |
1335 | # We try to use requests because we can do gzip encoding with it. |
1336 | # however requests < 1.1 didn't have 'stream' argument to 'get' |
1337 | # making it completely unsuitable for downloading large files. |
1338 | - import requests |
1339 | from distutils.version import LooseVersion |
1340 | + |
1341 | import pkg_resources |
1342 | - _REQ = pkg_resources.get_distribution('requests') |
1343 | + import requests |
1344 | + |
1345 | + _REQ = pkg_resources.get_distribution("requests") |
1346 | _REQ_VER = LooseVersion(_REQ.version) |
1347 | - if _REQ_VER < LooseVersion('1.1'): |
1348 | + if _REQ_VER < LooseVersion("1.1"): |
1349 | raise ImportError("Requests version < 1.1, not suitable for usage.") |
1350 | URL_READER_CLASSNAME = "RequestsUrlReader" |
1351 | except ImportError: |
1352 | @@ -63,8 +66,8 @@ class ContentSource(object): |
1353 | raise NotImplementedError() |
1354 | |
1355 | def set_start_pos(self, offset): |
1356 | - """ Implemented if the ContentSource supports seeking within content. |
1357 | - Used to resume failed transfers. """ |
1358 | + """Implemented if the ContentSource supports seeking within content. |
1359 | + Used to resume failed transfers.""" |
1360 | |
1361 | class SetStartPosNotImplementedError(NotImplementedError): |
1362 | pass |
1363 | @@ -122,8 +125,9 @@ class UrlContentSource(ContentSource): |
1364 | if e.errno != errno.ENOENT: |
1365 | raise |
1366 | continue |
1367 | - myerr = IOError("Unable to open %s. mirrors=%s" % |
1368 | - (self.input_url, self.mirrors)) |
1369 | + myerr = IOError( |
1370 | + "Unable to open %s. mirrors=%s" % (self.input_url, self.mirrors) |
1371 | + ) |
1372 | myerr.errno = errno.ENOENT |
1373 | raise myerr |
1374 | |
1375 | @@ -181,7 +185,7 @@ class IteratorContentSource(ContentSource): |
1376 | raise exc |
1377 | |
1378 | def is_enoent(self, exc): |
1379 | - return (isinstance(exc, IOError) and exc.errno == errno.ENOENT) |
1380 | + return isinstance(exc, IOError) and exc.errno == errno.ENOENT |
1381 | |
1382 | def read(self, size=None): |
1383 | self.open() |
1384 | @@ -189,7 +193,7 @@ class IteratorContentSource(ContentSource): |
1385 | if self.consumed: |
1386 | return bytes() |
1387 | |
1388 | - if (size is None or size < 0): |
1389 | + if size is None or size < 0: |
1390 | # read everything |
1391 | ret = self.leftover |
1392 | self.leftover = bytes() |
1393 | @@ -227,7 +231,7 @@ class IteratorContentSource(ContentSource): |
1394 | class MemoryContentSource(FdContentSource): |
1395 | def __init__(self, url=None, content=""): |
1396 | if isinstance(content, str): |
1397 | - content = content.encode('utf-8') |
1398 | + content = content.encode("utf-8") |
1399 | fd = io.BytesIO(content) |
1400 | if url is None: |
1401 | url = "MemoryContentSource://undefined" |
1402 | @@ -265,8 +269,10 @@ class ChecksummingContentSource(ContentSource): |
1403 | |
1404 | def _set_checksummer(self, checksummer): |
1405 | if checksummer.algorithm not in checksum_util.CHECKSUMS: |
1406 | - raise ValueError("algorithm %s is not valid (%s)" % |
1407 | - (checksummer.algorithm, checksum_util.CHECKSUMS)) |
1408 | + raise ValueError( |
1409 | + "algorithm %s is not valid (%s)" |
1410 | + % (checksummer.algorithm, checksum_util.CHECKSUMS) |
1411 | + ) |
1412 | self.checksummer = checksummer |
1413 | |
1414 | def check(self): |
1415 | @@ -322,7 +328,6 @@ class FileReader(UrlReader): |
1416 | |
1417 | |
1418 | class Urllib2UrlReader(UrlReader): |
1419 | - |
1420 | timeout = TIMEOUT |
1421 | |
1422 | def __init__(self, url, offset=None, user_agent=None): |
1423 | @@ -339,9 +344,9 @@ class Urllib2UrlReader(UrlReader): |
1424 | try: |
1425 | req = urllib_request.Request(url) |
1426 | if user_agent is not None: |
1427 | - req.add_header('User-Agent', user_agent) |
1428 | + req.add_header("User-Agent", user_agent) |
1429 | if offset is not None: |
1430 | - req.add_header('Range', 'bytes=%d-' % offset) |
1431 | + req.add_header("Range", "bytes=%d-" % offset) |
1432 | self.req = opener(req, timeout=self.timeout) |
1433 | except urllib_error.HTTPError as e: |
1434 | if e.code == 404: |
1435 | @@ -373,8 +378,10 @@ class RequestsUrlReader(UrlReader): |
1436 | |
1437 | def __init__(self, url, buflen=None, offset=None, user_agent=None): |
1438 | if requests is None: |
1439 | - raise ImportError("Attempt to use RequestsUrlReader " |
1440 | - "without suitable requests library.") |
1441 | + raise ImportError( |
1442 | + "Attempt to use RequestsUrlReader " |
1443 | + "without suitable requests library." |
1444 | + ) |
1445 | self.url = url |
1446 | (url, user, password) = parse_url_auth(url) |
1447 | if user is None: |
1448 | @@ -384,15 +391,15 @@ class RequestsUrlReader(UrlReader): |
1449 | |
1450 | headers = {} |
1451 | if user_agent is not None: |
1452 | - headers['User-Agent'] = user_agent |
1453 | + headers["User-Agent"] = user_agent |
1454 | if offset is not None: |
1455 | - headers['Range'] = 'bytes=%d-' % offset |
1456 | + headers["Range"] = "bytes=%d-" % offset |
1457 | if headers == {}: |
1458 | headers = None |
1459 | |
1460 | # requests version less than 2.4.1 takes an optional |
1461 | # float for timeout. There is no separate read timeout |
1462 | - if _REQ_VER < LooseVersion('2.4.1'): |
1463 | + if _REQ_VER < LooseVersion("2.4.1"): |
1464 | self.timeout = TIMEOUT |
1465 | |
1466 | self.req = requests.get( |
1467 | @@ -405,13 +412,13 @@ class RequestsUrlReader(UrlReader): |
1468 | self.leftover = bytes() |
1469 | self.consumed = False |
1470 | |
1471 | - if (self.req.status_code == requests.codes.NOT_FOUND): |
1472 | + if self.req.status_code == requests.codes.NOT_FOUND: |
1473 | myerr = IOError("Unable to open %s" % url) |
1474 | myerr.errno = errno.ENOENT |
1475 | raise myerr |
1476 | |
1477 | - ce = self.req.headers.get('content-encoding', '').lower() |
1478 | - if 'gzip' in ce or 'deflate' in ce: |
1479 | + ce = self.req.headers.get("content-encoding", "").lower() |
1480 | + if "gzip" in ce or "deflate" in ce: |
1481 | self._read = self.read_compressed |
1482 | else: |
1483 | self._read = self.read_raw |
1484 | @@ -426,7 +433,7 @@ class RequestsUrlReader(UrlReader): |
1485 | if self.consumed: |
1486 | return bytes() |
1487 | |
1488 | - if (size is None or size < 0): |
1489 | + if size is None or size < 0: |
1490 | # read everything |
1491 | ret = self.leftover |
1492 | self.leftover = bytes() |
1493 | diff --git a/simplestreams/filters.py b/simplestreams/filters.py |
1494 | index 3818949..330a475 100644 |
1495 | --- a/simplestreams/filters.py |
1496 | +++ b/simplestreams/filters.py |
1497 | @@ -15,10 +15,10 @@ |
1498 | # You should have received a copy of the GNU Affero General Public License |
1499 | # along with Simplestreams. If not, see <http://www.gnu.org/licenses/>. |
1500 | |
1501 | -from simplestreams import util |
1502 | - |
1503 | import re |
1504 | |
1505 | +from simplestreams import util |
1506 | + |
1507 | |
1508 | class ItemFilter(object): |
1509 | def __init__(self, content, noneval=""): |
1510 | @@ -37,7 +37,7 @@ class ItemFilter(object): |
1511 | else: |
1512 | raise ValueError("Bad parsing of %s" % content) |
1513 | |
1514 | - self.negator = (op[0] != "!") |
1515 | + self.negator = op[0] != "!" |
1516 | self.op = op |
1517 | self.key = key |
1518 | self.value = val |
1519 | @@ -45,15 +45,19 @@ class ItemFilter(object): |
1520 | self.noneval = noneval |
1521 | |
1522 | def __str__(self): |
1523 | - return "%s %s %s [none=%s]" % (self.key, self.op, |
1524 | - self.value, self.noneval) |
1525 | + return "%s %s %s [none=%s]" % ( |
1526 | + self.key, |
1527 | + self.op, |
1528 | + self.value, |
1529 | + self.noneval, |
1530 | + ) |
1531 | |
1532 | def __repr__(self): |
1533 | return self.__str__() |
1534 | |
1535 | def matches(self, item): |
1536 | val = str(item.get(self.key, self.noneval)) |
1537 | - return (self.negator == bool(self._matches(val))) |
1538 | + return self.negator == bool(self._matches(val)) |
1539 | |
1540 | |
1541 | def get_filters(filters, noneval=""): |
1542 | diff --git a/simplestreams/generate_simplestreams.py b/simplestreams/generate_simplestreams.py |
1543 | index 9b7b919..329a91d 100644 |
1544 | --- a/simplestreams/generate_simplestreams.py |
1545 | +++ b/simplestreams/generate_simplestreams.py |
1546 | @@ -13,54 +13,58 @@ |
1547 | # or FITNESS FOR A PARTICULAR PURPOSE. See the GNU Affero General Public |
1548 | # License for more details. |
1549 | # |
1550 | -from collections import namedtuple |
1551 | -from copy import deepcopy |
1552 | import json |
1553 | import os |
1554 | import sys |
1555 | +from collections import namedtuple |
1556 | +from copy import deepcopy |
1557 | |
1558 | from simplestreams import util |
1559 | |
1560 | - |
1561 | -Item = namedtuple('Item', ['content_id', 'product_name', 'version_name', |
1562 | - 'item_name', 'data']) |
1563 | +Item = namedtuple( |
1564 | + "Item", ["content_id", "product_name", "version_name", "item_name", "data"] |
1565 | +) |
1566 | |
1567 | |
1568 | def items2content_trees(itemslist, exdata): |
1569 | # input is a list with each item having: |
1570 | # (content_id, product_name, version_name, item_name, {data}) |
1571 | ctrees = {} |
1572 | - for (content_id, prodname, vername, itemname, data) in itemslist: |
1573 | + for content_id, prodname, vername, itemname, data in itemslist: |
1574 | if content_id not in ctrees: |
1575 | - ctrees[content_id] = {'content_id': content_id, |
1576 | - 'format': 'products:1.0', 'products': {}} |
1577 | + ctrees[content_id] = { |
1578 | + "content_id": content_id, |
1579 | + "format": "products:1.0", |
1580 | + "products": {}, |
1581 | + } |
1582 | ctrees[content_id].update(exdata) |
1583 | |
1584 | ctree = ctrees[content_id] |
1585 | - if prodname not in ctree['products']: |
1586 | - ctree['products'][prodname] = {'versions': {}} |
1587 | + if prodname not in ctree["products"]: |
1588 | + ctree["products"][prodname] = {"versions": {}} |
1589 | |
1590 | - prodtree = ctree['products'][prodname] |
1591 | - if vername not in prodtree['versions']: |
1592 | - prodtree['versions'][vername] = {'items': {}} |
1593 | + prodtree = ctree["products"][prodname] |
1594 | + if vername not in prodtree["versions"]: |
1595 | + prodtree["versions"][vername] = {"items": {}} |
1596 | |
1597 | - vertree = prodtree['versions'][vername] |
1598 | + vertree = prodtree["versions"][vername] |
1599 | |
1600 | - if itemname in vertree['items']: |
1601 | - raise ValueError("%s: already existed" % |
1602 | - str([content_id, prodname, vername, itemname])) |
1603 | + if itemname in vertree["items"]: |
1604 | + raise ValueError( |
1605 | + "%s: already existed" |
1606 | + % str([content_id, prodname, vername, itemname]) |
1607 | + ) |
1608 | |
1609 | - vertree['items'][itemname] = data |
1610 | + vertree["items"][itemname] = data |
1611 | return ctrees |
1612 | |
1613 | |
1614 | class FileNamer: |
1615 | - |
1616 | - streamdir = 'streams/v1' |
1617 | + streamdir = "streams/v1" |
1618 | |
1619 | @classmethod |
1620 | def get_index_path(cls): |
1621 | - return "%s/%s" % (cls.streamdir, 'index.json') |
1622 | + return "%s/%s" % (cls.streamdir, "index.json") |
1623 | |
1624 | @classmethod |
1625 | def get_content_path(cls, content_id): |
1626 | @@ -68,16 +72,16 @@ class FileNamer: |
1627 | |
1628 | |
1629 | def generate_index(trees, updated, namer): |
1630 | - index = {"index": {}, 'format': 'index:1.0', 'updated': updated} |
1631 | - not_copied_up = ['content_id'] |
1632 | + index = {"index": {}, "format": "index:1.0", "updated": updated} |
1633 | + not_copied_up = ["content_id"] |
1634 | for content_id, content in trees.items(): |
1635 | - index['index'][content_id] = { |
1636 | - 'products': sorted(list(content['products'].keys())), |
1637 | - 'path': namer.get_content_path(content_id), |
1638 | + index["index"][content_id] = { |
1639 | + "products": sorted(list(content["products"].keys())), |
1640 | + "path": namer.get_content_path(content_id), |
1641 | } |
1642 | for k in util.stringitems(content): |
1643 | if k not in not_copied_up: |
1644 | - index['index'][content_id][k] = content[k] |
1645 | + index["index"][content_id][k] = content[k] |
1646 | return index |
1647 | |
1648 | |
1649 | @@ -85,20 +89,29 @@ def write_streams(out_d, trees, updated, namer=None, condense=True): |
1650 | if namer is None: |
1651 | namer = FileNamer |
1652 | index = generate_index(trees, updated, namer) |
1653 | - to_write = [(namer.get_index_path(), index,)] |
1654 | + to_write = [ |
1655 | + ( |
1656 | + namer.get_index_path(), |
1657 | + index, |
1658 | + ) |
1659 | + ] |
1660 | # Don't let products_condense modify the input |
1661 | trees = deepcopy(trees) |
1662 | for content_id in trees: |
1663 | if condense: |
1664 | - util.products_condense(trees[content_id], |
1665 | - sticky=[ |
1666 | - 'path', 'sha256', 'md5', |
1667 | - 'size', 'mirrors' |
1668 | - ]) |
1669 | + util.products_condense( |
1670 | + trees[content_id], |
1671 | + sticky=["path", "sha256", "md5", "size", "mirrors"], |
1672 | + ) |
1673 | content = trees[content_id] |
1674 | - to_write.append((index['index'][content_id]['path'], content,)) |
1675 | + to_write.append( |
1676 | + ( |
1677 | + index["index"][content_id]["path"], |
1678 | + content, |
1679 | + ) |
1680 | + ) |
1681 | out_filenames = [] |
1682 | - for (outfile, data) in to_write: |
1683 | + for outfile, data in to_write: |
1684 | filef = os.path.join(out_d, outfile) |
1685 | util.mkdir_p(os.path.dirname(filef)) |
1686 | json_dump(data, filef) |
1687 | @@ -108,6 +121,8 @@ def write_streams(out_d, trees, updated, namer=None, condense=True): |
1688 | |
1689 | def json_dump(data, filename): |
1690 | with open(filename, "w") as fp: |
1691 | - sys.stderr.write(u"writing %s\n" % filename) |
1692 | - fp.write(json.dumps(data, indent=2, sort_keys=True, |
1693 | - separators=(',', ': ')) + "\n") |
1694 | + sys.stderr.write("writing %s\n" % filename) |
1695 | + fp.write( |
1696 | + json.dumps(data, indent=2, sort_keys=True, separators=(",", ": ")) |
1697 | + + "\n" |
1698 | + ) |
1699 | diff --git a/simplestreams/json2streams.py b/simplestreams/json2streams.py |
1700 | index e9749f6..20b09e7 100755 |
1701 | --- a/simplestreams/json2streams.py |
1702 | +++ b/simplestreams/json2streams.py |
1703 | @@ -1,43 +1,41 @@ |
1704 | #!/usr/bin/env python3 |
1705 | # Copyright (C) 2013, 2015 Canonical Ltd. |
1706 | |
1707 | -from argparse import ArgumentParser |
1708 | import json |
1709 | import os |
1710 | import sys |
1711 | +from argparse import ArgumentParser |
1712 | |
1713 | from simplestreams import util |
1714 | - |
1715 | from simplestreams.generate_simplestreams import ( |
1716 | FileNamer, |
1717 | Item, |
1718 | items2content_trees, |
1719 | json_dump, |
1720 | write_streams, |
1721 | - ) |
1722 | +) |
1723 | |
1724 | |
1725 | class JujuFileNamer(FileNamer): |
1726 | - |
1727 | @classmethod |
1728 | def get_index_path(cls): |
1729 | - return "%s/%s" % (cls.streamdir, 'index2.json') |
1730 | + return "%s/%s" % (cls.streamdir, "index2.json") |
1731 | |
1732 | @classmethod |
1733 | def get_content_path(cls, content_id): |
1734 | - return "%s/%s.json" % (cls.streamdir, content_id.replace(':', '-')) |
1735 | + return "%s/%s.json" % (cls.streamdir, content_id.replace(":", "-")) |
1736 | |
1737 | |
1738 | def dict_to_item(item_dict): |
1739 | """Convert a dict into an Item, mutating input.""" |
1740 | - item_dict.pop('item_url', None) |
1741 | - size = item_dict.get('size') |
1742 | + item_dict.pop("item_url", None) |
1743 | + size = item_dict.get("size") |
1744 | if size is not None: |
1745 | - item_dict['size'] = int(size) |
1746 | - content_id = item_dict.pop('content_id') |
1747 | - product_name = item_dict.pop('product_name') |
1748 | - version_name = item_dict.pop('version_name') |
1749 | - item_name = item_dict.pop('item_name') |
1750 | + item_dict["size"] = int(size) |
1751 | + content_id = item_dict.pop("content_id") |
1752 | + product_name = item_dict.pop("product_name") |
1753 | + version_name = item_dict.pop("version_name") |
1754 | + item_name = item_dict.pop("item_name") |
1755 | return Item(content_id, product_name, version_name, item_name, item_dict) |
1756 | |
1757 | |
1758 | @@ -51,9 +49,11 @@ def write_release_index(out_d): |
1759 | in_path = os.path.join(out_d, JujuFileNamer.get_index_path()) |
1760 | with open(in_path) as in_file: |
1761 | full_index = json.load(in_file) |
1762 | - full_index['index'] = dict( |
1763 | - (k, v) for k, v in list(full_index['index'].items()) |
1764 | - if k == 'com.ubuntu.juju:released:tools') |
1765 | + full_index["index"] = dict( |
1766 | + (k, v) |
1767 | + for k, v in list(full_index["index"].items()) |
1768 | + if k == "com.ubuntu.juju:released:tools" |
1769 | + ) |
1770 | out_path = os.path.join(out_d, FileNamer.get_index_path()) |
1771 | json_dump(full_index, out_path) |
1772 | return out_path |
1773 | @@ -70,7 +70,7 @@ def filenames_to_streams(filenames, updated, out_d, juju_format=False): |
1774 | for items_file in filenames: |
1775 | items.extend(read_items_file(items_file)) |
1776 | |
1777 | - data = {'updated': updated, 'datatype': 'content-download'} |
1778 | + data = {"updated": updated, "datatype": "content-download"} |
1779 | trees = items2content_trees(items, data) |
1780 | if juju_format: |
1781 | write = write_juju_streams |
1782 | @@ -88,23 +88,31 @@ def write_juju_streams(out_d, trees, updated): |
1783 | def parse_args(argv=None): |
1784 | parser = ArgumentParser() |
1785 | parser.add_argument( |
1786 | - 'items_file', metavar='items-file', help='File to read items from', |
1787 | - nargs='+') |
1788 | + "items_file", |
1789 | + metavar="items-file", |
1790 | + help="File to read items from", |
1791 | + nargs="+", |
1792 | + ) |
1793 | parser.add_argument( |
1794 | - 'out_d', metavar='output-dir', |
1795 | - help='The directory to write stream files to.') |
1796 | + "out_d", |
1797 | + metavar="output-dir", |
1798 | + help="The directory to write stream files to.", |
1799 | + ) |
1800 | parser.add_argument( |
1801 | - '--juju-format', action='store_true', |
1802 | - help='Write stream files in juju format.') |
1803 | + "--juju-format", |
1804 | + action="store_true", |
1805 | + help="Write stream files in juju format.", |
1806 | + ) |
1807 | return parser.parse_args(argv) |
1808 | |
1809 | |
1810 | def main(): |
1811 | args = parse_args() |
1812 | updated = util.timestamp() |
1813 | - filenames_to_streams(args.items_file, updated, args.out_d, |
1814 | - args.juju_format) |
1815 | + filenames_to_streams( |
1816 | + args.items_file, updated, args.out_d, args.juju_format |
1817 | + ) |
1818 | |
1819 | |
1820 | -if __name__ == '__main__': |
1821 | +if __name__ == "__main__": |
1822 | sys.exit(main()) |
1823 | diff --git a/simplestreams/log.py b/simplestreams/log.py |
1824 | index 061103a..824f248 100644 |
1825 | --- a/simplestreams/log.py |
1826 | +++ b/simplestreams/log.py |
1827 | @@ -33,18 +33,22 @@ class NullHandler(logging.Handler): |
1828 | |
1829 | def basicConfig(**kwargs): |
1830 | # basically like logging.basicConfig but only output for our logger |
1831 | - if kwargs.get('filename'): |
1832 | - handler = logging.FileHandler(filename=kwargs['filename'], |
1833 | - mode=kwargs.get('filemode', 'a')) |
1834 | - elif kwargs.get('stream'): |
1835 | - handler = logging.StreamHandler(stream=kwargs['stream']) |
1836 | + if kwargs.get("filename"): |
1837 | + handler = logging.FileHandler( |
1838 | + filename=kwargs["filename"], mode=kwargs.get("filemode", "a") |
1839 | + ) |
1840 | + elif kwargs.get("stream"): |
1841 | + handler = logging.StreamHandler(stream=kwargs["stream"]) |
1842 | else: |
1843 | handler = NullHandler() |
1844 | |
1845 | - level = kwargs.get('level', NOTSET) |
1846 | + level = kwargs.get("level", NOTSET) |
1847 | |
1848 | - handler.setFormatter(logging.Formatter(fmt=kwargs.get('format'), |
1849 | - datefmt=kwargs.get('datefmt'))) |
1850 | + handler.setFormatter( |
1851 | + logging.Formatter( |
1852 | + fmt=kwargs.get("format"), datefmt=kwargs.get("datefmt") |
1853 | + ) |
1854 | + ) |
1855 | handler.setLevel(level) |
1856 | |
1857 | logging.getLogger().setLevel(level) |
1858 | @@ -56,7 +60,7 @@ def basicConfig(**kwargs): |
1859 | logger.addHandler(handler) |
1860 | |
1861 | |
1862 | -def _getLogger(name='sstreams'): |
1863 | +def _getLogger(name="sstreams"): |
1864 | return logging.getLogger(name) |
1865 | |
1866 | |
1867 | diff --git a/simplestreams/mirrors/__init__.py b/simplestreams/mirrors/__init__.py |
1868 | index 4a9593a..b3374f8 100644 |
1869 | --- a/simplestreams/mirrors/__init__.py |
1870 | +++ b/simplestreams/mirrors/__init__.py |
1871 | @@ -18,10 +18,10 @@ import errno |
1872 | import io |
1873 | import json |
1874 | |
1875 | +import simplestreams.contentsource as cs |
1876 | import simplestreams.filters as filters |
1877 | import simplestreams.util as util |
1878 | from simplestreams import checksum_util |
1879 | -import simplestreams.contentsource as cs |
1880 | from simplestreams.log import LOG |
1881 | |
1882 | DEFAULT_USER_AGENT = "python-simplestreams/0.1" |
1883 | @@ -29,8 +29,8 @@ DEFAULT_USER_AGENT = "python-simplestreams/0.1" |
1884 | |
1885 | class MirrorReader(object): |
1886 | def __init__(self, policy=util.policy_read_signed): |
1887 | - """ policy should be a function which returns the extracted payload or |
1888 | - raises an exception if the policy is violated. """ |
1889 | + """policy should be a function which returns the extracted payload or |
1890 | + raises an exception if the policy is violated.""" |
1891 | self.policy = policy |
1892 | |
1893 | def load_products(self, path): |
1894 | @@ -39,7 +39,7 @@ class MirrorReader(object): |
1895 | |
1896 | def read_json(self, path): |
1897 | with self.source(path) as source: |
1898 | - raw = source.read().decode('utf-8') |
1899 | + raw = source.read().decode("utf-8") |
1900 | return raw, self.policy(content=raw, path=path) |
1901 | |
1902 | def source(self, path): |
1903 | @@ -164,8 +164,13 @@ class MirrorWriter(object): |
1904 | |
1905 | |
1906 | class UrlMirrorReader(MirrorReader): |
1907 | - def __init__(self, prefix, mirrors=None, policy=util.policy_read_signed, |
1908 | - user_agent=DEFAULT_USER_AGENT): |
1909 | + def __init__( |
1910 | + self, |
1911 | + prefix, |
1912 | + mirrors=None, |
1913 | + policy=util.policy_read_signed, |
1914 | + user_agent=DEFAULT_USER_AGENT, |
1915 | + ): |
1916 | super(UrlMirrorReader, self).__init__(policy=policy) |
1917 | self._cs = cs.UrlContentSource |
1918 | if mirrors is None: |
1919 | @@ -184,13 +189,18 @@ class UrlMirrorReader(MirrorReader): |
1920 | |
1921 | def url_reader_factory(*args, **kwargs): |
1922 | return cs.URL_READER( |
1923 | - *args, user_agent=self.user_agent, **kwargs) |
1924 | + *args, user_agent=self.user_agent, **kwargs |
1925 | + ) |
1926 | + |
1927 | else: |
1928 | url_reader_factory = None |
1929 | |
1930 | if self._trailing_slash_checked: |
1931 | - return self._cs(self.prefix + path, mirrors=mirrors, |
1932 | - url_reader=url_reader_factory) |
1933 | + return self._cs( |
1934 | + self.prefix + path, |
1935 | + mirrors=mirrors, |
1936 | + url_reader=url_reader_factory, |
1937 | + ) |
1938 | |
1939 | # A little hack to fix up the user's path. It's fairly common to |
1940 | # specify URLs without a trailing slash, so we try to do that here as |
1941 | @@ -198,22 +208,31 @@ class UrlMirrorReader(MirrorReader): |
1942 | # returned is not yet open (LP: #1237658) |
1943 | self._trailing_slash_checked = True |
1944 | try: |
1945 | - with self._cs(self.prefix + path, mirrors=None, |
1946 | - url_reader=url_reader_factory) as csource: |
1947 | + with self._cs( |
1948 | + self.prefix + path, mirrors=None, url_reader=url_reader_factory |
1949 | + ) as csource: |
1950 | csource.read(1024) |
1951 | except Exception as e: |
1952 | if isinstance(e, IOError) and (e.errno == errno.ENOENT): |
1953 | - LOG.warning("got ENOENT for (%s, %s), trying with trailing /", |
1954 | - self.prefix, path) |
1955 | - self.prefix = self.prefix + '/' |
1956 | + LOG.warning( |
1957 | + "got ENOENT for (%s, %s), trying with trailing /", |
1958 | + self.prefix, |
1959 | + path, |
1960 | + ) |
1961 | + self.prefix = self.prefix + "/" |
1962 | else: |
1963 | # this raised exception, but it was sneaky to do it |
1964 | # so just ignore it. |
1965 | - LOG.debug("trailing / check on (%s, %s) resulted in %s", |
1966 | - self.prefix, path, e) |
1967 | + LOG.debug( |
1968 | + "trailing / check on (%s, %s) resulted in %s", |
1969 | + self.prefix, |
1970 | + path, |
1971 | + e, |
1972 | + ) |
1973 | |
1974 | - return self._cs(self.prefix + path, mirrors=mirrors, |
1975 | - url_reader=url_reader_factory) |
1976 | + return self._cs( |
1977 | + self.prefix + path, mirrors=mirrors, url_reader=url_reader_factory |
1978 | + ) |
1979 | |
1980 | |
1981 | class ObjectStoreMirrorReader(MirrorReader): |
1982 | @@ -231,7 +250,7 @@ class BasicMirrorWriter(MirrorWriter): |
1983 | if config is None: |
1984 | config = {} |
1985 | self.config = config |
1986 | - self.checksumming_reader = self.config.get('checksumming_reader', True) |
1987 | + self.checksumming_reader = self.config.get("checksumming_reader", True) |
1988 | |
1989 | def load_products(self, path=None, content_id=None): |
1990 | super(BasicMirrorWriter, self).load_products(path, content_id) |
1991 | @@ -243,14 +262,14 @@ class BasicMirrorWriter(MirrorWriter): |
1992 | |
1993 | check_tree_paths(src) |
1994 | |
1995 | - itree = src.get('index') |
1996 | + itree = src.get("index") |
1997 | for content_id, index_entry in itree.items(): |
1998 | if not self.filter_index_entry(index_entry, src, (content_id,)): |
1999 | continue |
2000 | - epath = index_entry.get('path', None) |
2001 | + epath = index_entry.get("path", None) |
2002 | mycs = None |
2003 | if epath: |
2004 | - if index_entry.get('format') in ("index:1.0", "products:1.0"): |
2005 | + if index_entry.get("format") in ("index:1.0", "products:1.0"): |
2006 | self.sync(reader, path=epath) |
2007 | mycs = reader.source(epath) |
2008 | |
2009 | @@ -265,36 +284,37 @@ class BasicMirrorWriter(MirrorWriter): |
2010 | |
2011 | check_tree_paths(src) |
2012 | |
2013 | - content_id = src['content_id'] |
2014 | + content_id = src["content_id"] |
2015 | target = self.load_products(path, content_id) |
2016 | if not target: |
2017 | target = util.stringitems(src) |
2018 | |
2019 | util.expand_tree(target) |
2020 | |
2021 | - stree = src.get('products', {}) |
2022 | - if 'products' not in target: |
2023 | - target['products'] = {} |
2024 | + stree = src.get("products", {}) |
2025 | + if "products" not in target: |
2026 | + target["products"] = {} |
2027 | |
2028 | - tproducts = target['products'] |
2029 | + tproducts = target["products"] |
2030 | |
2031 | filtered_products = [] |
2032 | prodname = None |
2033 | |
2034 | # Apply filters to items before filtering versions |
2035 | for prodname, product in list(stree.items()): |
2036 | - |
2037 | - for vername, version in list(product.get('versions', {}).items()): |
2038 | - for itemname, item in list(version.get('items', {}).items()): |
2039 | + for vername, version in list(product.get("versions", {}).items()): |
2040 | + for itemname, item in list(version.get("items", {}).items()): |
2041 | pgree = (prodname, vername, itemname) |
2042 | if not self.filter_item(item, src, target, pgree): |
2043 | LOG.debug("Filtered out item: %s/%s", itemname, item) |
2044 | - del stree[prodname]['versions'][vername]['items'][ |
2045 | - itemname] |
2046 | - if not stree[prodname]['versions'][vername].get( |
2047 | - 'items', {}): |
2048 | - del stree[prodname]['versions'][vername] |
2049 | - if not stree[prodname].get('versions', {}): |
2050 | + del stree[prodname]["versions"][vername]["items"][ |
2051 | + itemname |
2052 | + ] |
2053 | + if not stree[prodname]["versions"][vername].get( |
2054 | + "items", {} |
2055 | + ): |
2056 | + del stree[prodname]["versions"][vername] |
2057 | + if not stree[prodname].get("versions", {}): |
2058 | del stree[prodname] |
2059 | |
2060 | for prodname, product in stree.items(): |
2061 | @@ -305,50 +325,62 @@ class BasicMirrorWriter(MirrorWriter): |
2062 | if prodname not in tproducts: |
2063 | tproducts[prodname] = util.stringitems(product) |
2064 | tproduct = tproducts[prodname] |
2065 | - if 'versions' not in tproduct: |
2066 | - tproduct['versions'] = {} |
2067 | + if "versions" not in tproduct: |
2068 | + tproduct["versions"] = {} |
2069 | |
2070 | src_filtered_items = [] |
2071 | |
2072 | def _filter(itemkey): |
2073 | - ret = self.filter_version(product['versions'][itemkey], |
2074 | - src, target, (prodname, itemkey)) |
2075 | + ret = self.filter_version( |
2076 | + product["versions"][itemkey], |
2077 | + src, |
2078 | + target, |
2079 | + (prodname, itemkey), |
2080 | + ) |
2081 | if not ret: |
2082 | src_filtered_items.append(itemkey) |
2083 | return ret |
2084 | |
2085 | (to_add, to_remove) = util.resolve_work( |
2086 | - src=list(product.get('versions', {}).keys()), |
2087 | - target=list(tproduct.get('versions', {}).keys()), |
2088 | - maxnum=self.config.get('max_items'), |
2089 | - keep=self.config.get('keep_items'), itemfilter=_filter) |
2090 | - |
2091 | - LOG.info("%s/%s: to_add=%s to_remove=%s", content_id, prodname, |
2092 | - to_add, to_remove) |
2093 | - |
2094 | - tversions = tproduct['versions'] |
2095 | + src=list(product.get("versions", {}).keys()), |
2096 | + target=list(tproduct.get("versions", {}).keys()), |
2097 | + maxnum=self.config.get("max_items"), |
2098 | + keep=self.config.get("keep_items"), |
2099 | + itemfilter=_filter, |
2100 | + ) |
2101 | + |
2102 | + LOG.info( |
2103 | + "%s/%s: to_add=%s to_remove=%s", |
2104 | + content_id, |
2105 | + prodname, |
2106 | + to_add, |
2107 | + to_remove, |
2108 | + ) |
2109 | + |
2110 | + tversions = tproduct["versions"] |
2111 | skipped_versions = [] |
2112 | for vername in to_add: |
2113 | - version = product['versions'][vername] |
2114 | + version = product["versions"][vername] |
2115 | |
2116 | if vername not in tversions: |
2117 | tversions[vername] = util.stringitems(version) |
2118 | |
2119 | added_items = [] |
2120 | - for itemname, item in version.get('items', {}).items(): |
2121 | + for itemname, item in version.get("items", {}).items(): |
2122 | pgree = (prodname, vername, itemname) |
2123 | |
2124 | added_items.append(itemname) |
2125 | |
2126 | - ipath = item.get('path', None) |
2127 | + ipath = item.get("path", None) |
2128 | ipath_cs = None |
2129 | if ipath and reader: |
2130 | if self.checksumming_reader: |
2131 | flat = util.products_exdata(src, pgree) |
2132 | ipath_cs = cs.ChecksummingContentSource( |
2133 | csrc=reader.source(ipath), |
2134 | - size=flat.get('size'), |
2135 | - checksums=checksum_util.item_checksums(flat)) |
2136 | + size=flat.get("size"), |
2137 | + checksums=checksum_util.item_checksums(flat), |
2138 | + ) |
2139 | else: |
2140 | ipath_cs = reader.source(ipath) |
2141 | |
2142 | @@ -356,28 +388,38 @@ class BasicMirrorWriter(MirrorWriter): |
2143 | |
2144 | if len(added_items): |
2145 | # do not insert versions that had all items filtered |
2146 | - self.insert_version(version, src, target, |
2147 | - (prodname, vername)) |
2148 | + self.insert_version( |
2149 | + version, src, target, (prodname, vername) |
2150 | + ) |
2151 | else: |
2152 | skipped_versions.append(vername) |
2153 | |
2154 | for vername in skipped_versions: |
2155 | - if vername in tproduct['versions']: |
2156 | - del tproduct['versions'][vername] |
2157 | + if vername in tproduct["versions"]: |
2158 | + del tproduct["versions"][vername] |
2159 | |
2160 | - if self.config.get('delete_filtered_items', False): |
2161 | - tkeys = tproduct.get('versions', {}).keys() |
2162 | + if self.config.get("delete_filtered_items", False): |
2163 | + tkeys = tproduct.get("versions", {}).keys() |
2164 | for v in src_filtered_items: |
2165 | if v not in to_remove and v in tkeys: |
2166 | to_remove.append(v) |
2167 | - LOG.info("After deletions %s/%s: to_add=%s to_remove=%s", |
2168 | - content_id, prodname, to_add, to_remove) |
2169 | + LOG.info( |
2170 | + "After deletions %s/%s: to_add=%s to_remove=%s", |
2171 | + content_id, |
2172 | + prodname, |
2173 | + to_add, |
2174 | + to_remove, |
2175 | + ) |
2176 | |
2177 | for vername in to_remove: |
2178 | tversion = tversions[vername] |
2179 | - for itemname in list(tversion.get('items', {}).keys()): |
2180 | - self.remove_item(tversion['items'][itemname], src, target, |
2181 | - (prodname, vername, itemname)) |
2182 | + for itemname in list(tversion.get("items", {}).keys()): |
2183 | + self.remove_item( |
2184 | + tversion["items"][itemname], |
2185 | + src, |
2186 | + target, |
2187 | + (prodname, vername, itemname), |
2188 | + ) |
2189 | |
2190 | self.remove_version(tversion, src, target, (prodname, vername)) |
2191 | del tversions[vername] |
2192 | @@ -389,12 +431,14 @@ class BasicMirrorWriter(MirrorWriter): |
2193 | # that could accidentally delete a lot. |
2194 | # |
2195 | del_products = [] |
2196 | - if self.config.get('delete_products', False): |
2197 | - del_products.extend([p for p in list(tproducts.keys()) |
2198 | - if p not in stree]) |
2199 | - if self.config.get('delete_filtered_products', False): |
2200 | - del_products.extend([p for p in filtered_products |
2201 | - if p not in stree]) |
2202 | + if self.config.get("delete_products", False): |
2203 | + del_products.extend( |
2204 | + [p for p in list(tproducts.keys()) if p not in stree] |
2205 | + ) |
2206 | + if self.config.get("delete_filtered_products", False): |
2207 | + del_products.extend( |
2208 | + [p for p in filtered_products if p not in stree] |
2209 | + ) |
2210 | |
2211 | for prodname in del_products: |
2212 | # FIXME: we remove a product here, but unless that acts |
2213 | @@ -421,7 +465,7 @@ class ObjectStoreMirrorWriter(BasicMirrorWriter): |
2214 | try: |
2215 | with self.source(self._reference_count_data_path()) as source: |
2216 | raw = source.read() |
2217 | - return json.load(io.StringIO(raw.decode('utf-8'))) |
2218 | + return json.load(io.StringIO(raw.decode("utf-8"))) |
2219 | except IOError as e: |
2220 | if e.errno == errno.ENOENT: |
2221 | return {} |
2222 | @@ -432,7 +476,7 @@ class ObjectStoreMirrorWriter(BasicMirrorWriter): |
2223 | self.store.insert(self._reference_count_data_path(), source) |
2224 | |
2225 | def _build_rc_id(self, src, pedigree): |
2226 | - return '/'.join([src['content_id']] + list(pedigree)) |
2227 | + return "/".join([src["content_id"]] + list(pedigree)) |
2228 | |
2229 | def _inc_rc(self, path, src, pedigree): |
2230 | rc = self._load_rc_dict() |
2231 | @@ -482,25 +526,30 @@ class ObjectStoreMirrorWriter(BasicMirrorWriter): |
2232 | |
2233 | def insert_item(self, data, src, target, pedigree, contentsource): |
2234 | util.products_set(target, data, pedigree) |
2235 | - if 'path' not in data: |
2236 | + if "path" not in data: |
2237 | return |
2238 | - if not self.config.get('item_download', True): |
2239 | + if not self.config.get("item_download", True): |
2240 | return |
2241 | - LOG.debug("inserting %s to %s", contentsource.url, data['path']) |
2242 | - self.store.insert(data['path'], contentsource, |
2243 | - checksums=checksum_util.item_checksums(data), |
2244 | - mutable=False, size=data.get('size')) |
2245 | - self._inc_rc(data['path'], src, pedigree) |
2246 | + LOG.debug("inserting %s to %s", contentsource.url, data["path"]) |
2247 | + self.store.insert( |
2248 | + data["path"], |
2249 | + contentsource, |
2250 | + checksums=checksum_util.item_checksums(data), |
2251 | + mutable=False, |
2252 | + size=data.get("size"), |
2253 | + ) |
2254 | + self._inc_rc(data["path"], src, pedigree) |
2255 | |
2256 | def insert_index_entry(self, data, src, pedigree, contentsource): |
2257 | - epath = data.get('path', None) |
2258 | + epath = data.get("path", None) |
2259 | if not epath: |
2260 | return |
2261 | - self.store.insert(epath, contentsource, |
2262 | - checksums=checksum_util.item_checksums(data)) |
2263 | + self.store.insert( |
2264 | + epath, contentsource, checksums=checksum_util.item_checksums(data) |
2265 | + ) |
2266 | |
2267 | def insert_products(self, path, target, content): |
2268 | - dpath = self.products_data_path(target['content_id']) |
2269 | + dpath = self.products_data_path(target["content_id"]) |
2270 | self.store.insert_content(dpath, util.dump_data(target)) |
2271 | if not path: |
2272 | return |
2273 | @@ -517,16 +566,16 @@ class ObjectStoreMirrorWriter(BasicMirrorWriter): |
2274 | |
2275 | def remove_item(self, data, src, target, pedigree): |
2276 | util.products_del(target, pedigree) |
2277 | - if 'path' not in data: |
2278 | + if "path" not in data: |
2279 | return |
2280 | - if self._dec_rc(data['path'], src, pedigree): |
2281 | - self.store.remove(data['path']) |
2282 | + if self._dec_rc(data["path"], src, pedigree): |
2283 | + self.store.remove(data["path"]) |
2284 | |
2285 | |
2286 | class ObjectFilterMirror(ObjectStoreMirrorWriter): |
2287 | def __init__(self, *args, **kwargs): |
2288 | super(ObjectFilterMirror, self).__init__(*args, **kwargs) |
2289 | - self.filters = self.config.get('filters', []) |
2290 | + self.filters = self.config.get("filters", []) |
2291 | |
2292 | def filter_item(self, data, src, target, pedigree): |
2293 | return filters.filter_item(self.filters, data, src, pedigree) |
2294 | @@ -552,15 +601,15 @@ class DryRunMirrorWriter(ObjectFilterMirror): |
2295 | |
2296 | def insert_item(self, data, src, target, pedigree, contentsource): |
2297 | data = util.products_exdata(src, pedigree) |
2298 | - if 'size' in data and 'path' in data: |
2299 | + if "size" in data and "path" in data: |
2300 | self.downloading.append( |
2301 | - (pedigree, data['path'], int(data['size']))) |
2302 | + (pedigree, data["path"], int(data["size"])) |
2303 | + ) |
2304 | |
2305 | def remove_item(self, data, src, target, pedigree): |
2306 | data = util.products_exdata(src, pedigree) |
2307 | - if 'size' in data and 'path' in data: |
2308 | - self.removing.append( |
2309 | - (pedigree, data['path'], int(data['size']))) |
2310 | + if "size" in data and "path" in data: |
2311 | + self.removing.append((pedigree, data["path"], int(data["size"]))) |
2312 | |
2313 | @property |
2314 | def size(self): |
2315 | @@ -573,27 +622,31 @@ def _get_data_content(path, data, content, reader): |
2316 | if content is None and path: |
2317 | _, content = reader.read(path) |
2318 | if isinstance(content, bytes): |
2319 | - content = content.decode('utf-8') |
2320 | + content = content.decode("utf-8") |
2321 | |
2322 | if data is None and content: |
2323 | data = util.load_content(content) |
2324 | |
2325 | if not data: |
2326 | - raise ValueError("Data could not be loaded. " |
2327 | - "Path or content is required") |
2328 | + raise ValueError( |
2329 | + "Data could not be loaded. " "Path or content is required" |
2330 | + ) |
2331 | return (data, content) |
2332 | |
2333 | |
2334 | def check_tree_paths(tree, fmt=None): |
2335 | if fmt is None: |
2336 | - fmt = tree.get('format') |
2337 | + fmt = tree.get("format") |
2338 | if fmt == "products:1.0": |
2339 | + |
2340 | def check_path(item, tree, pedigree): |
2341 | - util.assert_safe_path(item.get('path')) |
2342 | + util.assert_safe_path(item.get("path")) |
2343 | + |
2344 | util.walk_products(tree, cb_item=check_path) |
2345 | elif fmt == "index:1.0": |
2346 | - index = tree.get('index') |
2347 | + index = tree.get("index") |
2348 | for content_id in index: |
2349 | - util.assert_safe_path(index[content_id].get('path')) |
2350 | + util.assert_safe_path(index[content_id].get("path")) |
2351 | + |
2352 | |
2353 | # vi: ts=4 expandtab |
2354 | diff --git a/simplestreams/mirrors/command_hook.py b/simplestreams/mirrors/command_hook.py |
2355 | index ab70691..de42623 100644 |
2356 | --- a/simplestreams/mirrors/command_hook.py |
2357 | +++ b/simplestreams/mirrors/command_hook.py |
2358 | @@ -15,15 +15,15 @@ |
2359 | # You should have received a copy of the GNU Affero General Public License |
2360 | # along with Simplestreams. If not, see <http://www.gnu.org/licenses/>. |
2361 | |
2362 | -import simplestreams.mirrors as mirrors |
2363 | -import simplestreams.util as util |
2364 | - |
2365 | -import os |
2366 | import errno |
2367 | +import os |
2368 | import signal |
2369 | import subprocess |
2370 | import tempfile |
2371 | |
2372 | +import simplestreams.mirrors as mirrors |
2373 | +import simplestreams.util as util |
2374 | + |
2375 | REQUIRED_FIELDS = ("load_products",) |
2376 | HOOK_NAMES = ( |
2377 | "filter_index_entry", |
2378 | @@ -92,6 +92,7 @@ class CommandHookMirror(mirrors.BasicMirrorWriter): |
2379 | If the configuration setting 'item_skip_download' is set to True, then |
2380 | 'path_url' will be set instead to a url where the item can be found. |
2381 | """ |
2382 | + |
2383 | def __init__(self, config): |
2384 | if isinstance(config, str): |
2385 | config = util.load_content(config) |
2386 | @@ -100,32 +101,34 @@ class CommandHookMirror(mirrors.BasicMirrorWriter): |
2387 | super(CommandHookMirror, self).__init__(config=config) |
2388 | |
2389 | def load_products(self, path=None, content_id=None): |
2390 | - (_rc, output) = self.call_hook('load_products', |
2391 | - data={'content_id': content_id}, |
2392 | - capture=True) |
2393 | + (_rc, output) = self.call_hook( |
2394 | + "load_products", data={"content_id": content_id}, capture=True |
2395 | + ) |
2396 | fmt = self.config.get("product_load_output_format", "serial_list") |
2397 | |
2398 | - loaded = load_product_output(output=output, content_id=content_id, |
2399 | - fmt=fmt) |
2400 | + loaded = load_product_output( |
2401 | + output=output, content_id=content_id, fmt=fmt |
2402 | + ) |
2403 | return loaded |
2404 | |
2405 | def filter_index_entry(self, data, src, pedigree): |
2406 | mdata = util.stringitems(src) |
2407 | - mdata['content_id'] = pedigree[0] |
2408 | + mdata["content_id"] = pedigree[0] |
2409 | mdata.update(util.stringitems(data)) |
2410 | |
2411 | - (ret, _output) = self.call_hook('filter_index_entry', data=mdata, |
2412 | - rcs=[0, 1]) |
2413 | + (ret, _output) = self.call_hook( |
2414 | + "filter_index_entry", data=mdata, rcs=[0, 1] |
2415 | + ) |
2416 | return ret == 0 |
2417 | |
2418 | def filter_product(self, data, src, target, pedigree): |
2419 | - return self._call_filter('filter_product', src, pedigree) |
2420 | + return self._call_filter("filter_product", src, pedigree) |
2421 | |
2422 | def filter_version(self, data, src, target, pedigree): |
2423 | - return self._call_filter('filter_version', src, pedigree) |
2424 | + return self._call_filter("filter_version", src, pedigree) |
2425 | |
2426 | def filter_item(self, data, src, target, pedigree): |
2427 | - return self._call_filter('filter_item', src, pedigree) |
2428 | + return self._call_filter("filter_item", src, pedigree) |
2429 | |
2430 | def _call_filter(self, name, src, pedigree): |
2431 | data = util.products_exdata(src, pedigree) |
2432 | @@ -133,20 +136,27 @@ class CommandHookMirror(mirrors.BasicMirrorWriter): |
2433 | return ret == 0 |
2434 | |
2435 | def insert_index(self, path, src, content): |
2436 | - return self.call_hook('insert_index', data=src, content=content, |
2437 | - extra={'path': path}) |
2438 | + return self.call_hook( |
2439 | + "insert_index", data=src, content=content, extra={"path": path} |
2440 | + ) |
2441 | |
2442 | def insert_products(self, path, target, content): |
2443 | - return self.call_hook('insert_products', data=target, |
2444 | - content=content, extra={'path': path}) |
2445 | + return self.call_hook( |
2446 | + "insert_products", |
2447 | + data=target, |
2448 | + content=content, |
2449 | + extra={"path": path}, |
2450 | + ) |
2451 | |
2452 | def insert_product(self, data, src, target, pedigree): |
2453 | - return self.call_hook('insert_product', |
2454 | - data=util.products_exdata(src, pedigree)) |
2455 | + return self.call_hook( |
2456 | + "insert_product", data=util.products_exdata(src, pedigree) |
2457 | + ) |
2458 | |
2459 | def insert_version(self, data, src, target, pedigree): |
2460 | - return self.call_hook('insert_version', |
2461 | - data=util.products_exdata(src, pedigree)) |
2462 | + return self.call_hook( |
2463 | + "insert_version", data=util.products_exdata(src, pedigree) |
2464 | + ) |
2465 | |
2466 | def insert_item(self, data, src, target, pedigree, contentsource): |
2467 | mdata = util.products_exdata(src, pedigree) |
2468 | @@ -154,43 +164,47 @@ class CommandHookMirror(mirrors.BasicMirrorWriter): |
2469 | tmp_path = None |
2470 | tmp_del = None |
2471 | extra = {} |
2472 | - if 'path' in data: |
2473 | - extra.update({'item_url': contentsource.url}) |
2474 | - if not self.config.get('item_skip_download', False): |
2475 | + if "path" in data: |
2476 | + extra.update({"item_url": contentsource.url}) |
2477 | + if not self.config.get("item_skip_download", False): |
2478 | try: |
2479 | (tmp_path, tmp_del) = util.get_local_copy(contentsource) |
2480 | - extra['path_local'] = tmp_path |
2481 | + extra["path_local"] = tmp_path |
2482 | finally: |
2483 | contentsource.close() |
2484 | |
2485 | try: |
2486 | - ret = self.call_hook('insert_item', data=mdata, extra=extra) |
2487 | + ret = self.call_hook("insert_item", data=mdata, extra=extra) |
2488 | finally: |
2489 | if tmp_del and os.path.exists(tmp_path): |
2490 | os.unlink(tmp_path) |
2491 | return ret |
2492 | |
2493 | def remove_product(self, data, src, target, pedigree): |
2494 | - return self.call_hook('remove_product', |
2495 | - data=util.products_exdata(src, pedigree)) |
2496 | + return self.call_hook( |
2497 | + "remove_product", data=util.products_exdata(src, pedigree) |
2498 | + ) |
2499 | |
2500 | def remove_version(self, data, src, target, pedigree): |
2501 | - return self.call_hook('remove_version', |
2502 | - data=util.products_exdata(src, pedigree)) |
2503 | + return self.call_hook( |
2504 | + "remove_version", data=util.products_exdata(src, pedigree) |
2505 | + ) |
2506 | |
2507 | def remove_item(self, data, src, target, pedigree): |
2508 | - return self.call_hook('remove_item', |
2509 | - data=util.products_exdata(target, pedigree)) |
2510 | + return self.call_hook( |
2511 | + "remove_item", data=util.products_exdata(target, pedigree) |
2512 | + ) |
2513 | |
2514 | - def call_hook(self, hookname, data, capture=False, rcs=None, extra=None, |
2515 | - content=None): |
2516 | + def call_hook( |
2517 | + self, hookname, data, capture=False, rcs=None, extra=None, content=None |
2518 | + ): |
2519 | command = self.config.get(hookname, self.config.get(DEFAULT_HOOK_NAME)) |
2520 | if not command: |
2521 | # return successful execution with no output |
2522 | - return (0, '') |
2523 | + return (0, "") |
2524 | |
2525 | if isinstance(command, str): |
2526 | - command = ['sh', '-c', command] |
2527 | + command = ["sh", "-c", command] |
2528 | |
2529 | fdata = util.stringitems(data) |
2530 | |
2531 | @@ -200,16 +214,20 @@ class CommandHookMirror(mirrors.BasicMirrorWriter): |
2532 | tfile = os.fdopen(tfd, "w") |
2533 | tfile.write(content) |
2534 | tfile.close() |
2535 | - fdata['content_file_path'] = content_file |
2536 | + fdata["content_file_path"] = content_file |
2537 | |
2538 | if extra: |
2539 | fdata.update(extra) |
2540 | - fdata['HOOK'] = hookname |
2541 | + fdata["HOOK"] = hookname |
2542 | |
2543 | try: |
2544 | - return call_hook(command=command, data=fdata, |
2545 | - unset=self.config.get('unset_value', None), |
2546 | - capture=capture, rcs=rcs) |
2547 | + return call_hook( |
2548 | + command=command, |
2549 | + data=fdata, |
2550 | + unset=self.config.get("unset_value", None), |
2551 | + capture=capture, |
2552 | + rcs=rcs, |
2553 | + ) |
2554 | finally: |
2555 | if content_file: |
2556 | os.unlink(content_file) |
2557 | @@ -219,7 +237,7 @@ def call_hook(command, data, unset=None, capture=False, rcs=None): |
2558 | env = os.environ.copy() |
2559 | data = data.copy() |
2560 | |
2561 | - data[ENV_FIELDS_NAME] = ' '.join([k for k in data if k != ENV_HOOK_NAME]) |
2562 | + data[ENV_FIELDS_NAME] = " ".join([k for k in data if k != ENV_HOOK_NAME]) |
2563 | |
2564 | mcommand = render(command, data, unset=unset) |
2565 | |
2566 | @@ -257,12 +275,12 @@ def load_product_output(output, content_id, fmt="serial_list"): |
2567 | |
2568 | if fmt == "serial_list": |
2569 | # "line" format just is a list of serials that are present |
2570 | - working = {'content_id': content_id, 'products': {}} |
2571 | + working = {"content_id": content_id, "products": {}} |
2572 | for line in output.splitlines(): |
2573 | (product_id, version) = line.split(None, 1) |
2574 | - if product_id not in working['products']: |
2575 | - working['products'][product_id] = {'versions': {}} |
2576 | - working['products'][product_id]['versions'][version] = {} |
2577 | + if product_id not in working["products"]: |
2578 | + working["products"][product_id] = {"versions": {}} |
2579 | + working["products"][product_id]["versions"][version] = {} |
2580 | return working |
2581 | |
2582 | elif fmt == "json": |
2583 | @@ -293,10 +311,11 @@ def run_command(cmd, env=None, capture=False, rcs=None): |
2584 | raise subprocess.CalledProcessError(rc, cmd) |
2585 | |
2586 | if out is None: |
2587 | - out = '' |
2588 | + out = "" |
2589 | elif isinstance(out, bytes): |
2590 | - out = out.decode('utf-8') |
2591 | + out = out.decode("utf-8") |
2592 | |
2593 | return (rc, out) |
2594 | |
2595 | + |
2596 | # vi: ts=4 expandtab syntax=python |
2597 | diff --git a/simplestreams/mirrors/glance.py b/simplestreams/mirrors/glance.py |
2598 | index b96f8eb..22e46e9 100644 |
2599 | --- a/simplestreams/mirrors/glance.py |
2600 | +++ b/simplestreams/mirrors/glance.py |
2601 | @@ -15,42 +15,47 @@ |
2602 | # You should have received a copy of the GNU Affero General Public License |
2603 | # along with Simplestreams. If not, see <http://www.gnu.org/licenses/>. |
2604 | |
2605 | -import simplestreams.filters as filters |
2606 | -import simplestreams.mirrors as mirrors |
2607 | -import simplestreams.util as util |
2608 | -from simplestreams import checksum_util |
2609 | -import simplestreams.openstack as openstack |
2610 | -from simplestreams.log import LOG |
2611 | - |
2612 | -import copy |
2613 | import collections |
2614 | +import copy |
2615 | import errno |
2616 | -import glanceclient |
2617 | import json |
2618 | import os |
2619 | import re |
2620 | |
2621 | +import glanceclient |
2622 | + |
2623 | +import simplestreams.filters as filters |
2624 | +import simplestreams.mirrors as mirrors |
2625 | +import simplestreams.openstack as openstack |
2626 | +import simplestreams.util as util |
2627 | +from simplestreams import checksum_util |
2628 | +from simplestreams.log import LOG |
2629 | + |
2630 | |
2631 | -def get_glanceclient(version='1', **kwargs): |
2632 | +def get_glanceclient(version="1", **kwargs): |
2633 | # newer versions of the glanceclient will do this 'strip_version' for |
2634 | # us, but older versions do not. |
2635 | - kwargs['endpoint'] = _strip_version(kwargs['endpoint']) |
2636 | - pt = ('endpoint', 'token', 'insecure', 'cacert') |
2637 | + kwargs["endpoint"] = _strip_version(kwargs["endpoint"]) |
2638 | + pt = ("endpoint", "token", "insecure", "cacert") |
2639 | kskw = {k: kwargs.get(k) for k in pt if k in kwargs} |
2640 | - if kwargs.get('session'): |
2641 | - sess = kwargs.get('session') |
2642 | + if kwargs.get("session"): |
2643 | + sess = kwargs.get("session") |
2644 | return glanceclient.Client(version, session=sess) |
2645 | else: |
2646 | return glanceclient.Client(version, **kskw) |
2647 | |
2648 | |
2649 | def empty_iid_products(content_id): |
2650 | - return {'content_id': content_id, 'products': {}, |
2651 | - 'datatype': 'image-ids', 'format': 'products:1.0'} |
2652 | + return { |
2653 | + "content_id": content_id, |
2654 | + "products": {}, |
2655 | + "datatype": "image-ids", |
2656 | + "format": "products:1.0", |
2657 | + } |
2658 | |
2659 | |
2660 | def canonicalize_arch(arch): |
2661 | - '''Canonicalize Ubuntu archs for use in OpenStack''' |
2662 | + """Canonicalize Ubuntu archs for use in OpenStack""" |
2663 | newarch = arch.lower() |
2664 | if newarch == "amd64": |
2665 | newarch = "x86_64" |
2666 | @@ -68,21 +73,21 @@ def canonicalize_arch(arch): |
2667 | |
2668 | |
2669 | LXC_FTYPES = { |
2670 | - 'root.tar.gz': 'root-tar', |
2671 | - 'root.tar.xz': 'root-tar', |
2672 | - 'squashfs': 'squashfs', |
2673 | + "root.tar.gz": "root-tar", |
2674 | + "root.tar.xz": "root-tar", |
2675 | + "squashfs": "squashfs", |
2676 | } |
2677 | |
2678 | QEMU_FTYPES = { |
2679 | - 'disk.img': 'qcow2', |
2680 | - 'disk1.img': 'qcow2', |
2681 | + "disk.img": "qcow2", |
2682 | + "disk1.img": "qcow2", |
2683 | } |
2684 | |
2685 | |
2686 | def disk_format(ftype): |
2687 | - '''Canonicalize disk formats for use in OpenStack. |
2688 | + """Canonicalize disk formats for use in OpenStack. |
2689 | Input ftype is a 'ftype' from a simplestream feed. |
2690 | - Return value is the appropriate 'disk_format' for glance.''' |
2691 | + Return value is the appropriate 'disk_format' for glance.""" |
2692 | newftype = ftype.lower() |
2693 | if newftype in LXC_FTYPES: |
2694 | return LXC_FTYPES[newftype] |
2695 | @@ -92,22 +97,22 @@ def disk_format(ftype): |
2696 | |
2697 | |
2698 | def hypervisor_type(ftype): |
2699 | - '''Determine hypervisor type based on image format''' |
2700 | + """Determine hypervisor type based on image format""" |
2701 | newftype = ftype.lower() |
2702 | if newftype in LXC_FTYPES: |
2703 | - return 'lxc' |
2704 | + return "lxc" |
2705 | if newftype in QEMU_FTYPES: |
2706 | - return 'qemu' |
2707 | + return "qemu" |
2708 | return None |
2709 | |
2710 | |
2711 | def virt_type(hypervisor_type): |
2712 | - '''Map underlying hypervisor types into high level virt types''' |
2713 | + """Map underlying hypervisor types into high level virt types""" |
2714 | newhtype = hypervisor_type.lower() |
2715 | - if newhtype == 'qemu': |
2716 | - return 'kvm' |
2717 | - if newhtype == 'lxc': |
2718 | - return 'lxd' |
2719 | + if newhtype == "qemu": |
2720 | + return "kvm" |
2721 | + if newhtype == "lxc": |
2722 | + return "lxd" |
2723 | return None |
2724 | |
2725 | |
2726 | @@ -120,20 +125,29 @@ class GlanceMirror(mirrors.BasicMirrorWriter): |
2727 | `client` argument is used for testing to override openstack module: |
2728 | allows dependency injection of fake "openstack" module. |
2729 | """ |
2730 | - def __init__(self, config, objectstore=None, region=None, |
2731 | - name_prefix=None, progress_callback=None, |
2732 | - client=None): |
2733 | + |
2734 | + def __init__( |
2735 | + self, |
2736 | + config, |
2737 | + objectstore=None, |
2738 | + region=None, |
2739 | + name_prefix=None, |
2740 | + progress_callback=None, |
2741 | + client=None, |
2742 | + ): |
2743 | super(GlanceMirror, self).__init__(config=config) |
2744 | |
2745 | - self.item_filters = self.config.get('item_filters', []) |
2746 | + self.item_filters = self.config.get("item_filters", []) |
2747 | if len(self.item_filters) == 0: |
2748 | - self.item_filters = ['ftype~(disk1.img|disk.img)', |
2749 | - 'arch~(x86_64|amd64|i386)'] |
2750 | + self.item_filters = [ |
2751 | + "ftype~(disk1.img|disk.img)", |
2752 | + "arch~(x86_64|amd64|i386)", |
2753 | + ] |
2754 | self.item_filters = filters.get_filters(self.item_filters) |
2755 | |
2756 | - self.index_filters = self.config.get('index_filters', []) |
2757 | + self.index_filters = self.config.get("index_filters", []) |
2758 | if len(self.index_filters) == 0: |
2759 | - self.index_filters = ['datatype=image-downloads'] |
2760 | + self.index_filters = ["datatype=image-downloads"] |
2761 | self.index_filters = filters.get_filters(self.index_filters) |
2762 | |
2763 | self.loaded_content = {} |
2764 | @@ -146,21 +160,28 @@ class GlanceMirror(mirrors.BasicMirrorWriter): |
2765 | |
2766 | self.name_prefix = name_prefix or "" |
2767 | if region is not None: |
2768 | - self.keystone_creds['region_name'] = region |
2769 | + self.keystone_creds["region_name"] = region |
2770 | |
2771 | self.progress_callback = progress_callback |
2772 | |
2773 | conn_info = client.get_service_conn_info( |
2774 | - 'image', **self.keystone_creds) |
2775 | - self.glance_api_version = conn_info['glance_version'] |
2776 | - self.gclient = get_glanceclient(version=self.glance_api_version, |
2777 | - **conn_info) |
2778 | - self.tenant_id = conn_info['tenant_id'] |
2779 | - |
2780 | - self.region = self.keystone_creds.get('region_name', 'nullregion') |
2781 | - self.cloudname = config.get("cloud_name", 'nullcloud') |
2782 | - self.crsn = '-'.join((self.cloudname, self.region,)) |
2783 | - self.auth_url = self.keystone_creds['auth_url'] |
2784 | + "image", **self.keystone_creds |
2785 | + ) |
2786 | + self.glance_api_version = conn_info["glance_version"] |
2787 | + self.gclient = get_glanceclient( |
2788 | + version=self.glance_api_version, **conn_info |
2789 | + ) |
2790 | + self.tenant_id = conn_info["tenant_id"] |
2791 | + |
2792 | + self.region = self.keystone_creds.get("region_name", "nullregion") |
2793 | + self.cloudname = config.get("cloud_name", "nullcloud") |
2794 | + self.crsn = "-".join( |
2795 | + ( |
2796 | + self.cloudname, |
2797 | + self.region, |
2798 | + ) |
2799 | + ) |
2800 | + self.auth_url = self.keystone_creds["auth_url"] |
2801 | |
2802 | self.content_id = config.get("content_id") |
2803 | self.modify_hook = config.get("modify_hook") |
2804 | @@ -170,7 +191,7 @@ class GlanceMirror(mirrors.BasicMirrorWriter): |
2805 | raise TypeError("content_id is required") |
2806 | |
2807 | self.custom_properties = collections.OrderedDict( |
2808 | - prop.split('=') for prop in config.get("custom_properties", []) |
2809 | + prop.split("=") for prop in config.get("custom_properties", []) |
2810 | ) |
2811 | self.visibility = config.get("visibility", "public") |
2812 | |
2813 | @@ -208,82 +229,100 @@ class GlanceMirror(mirrors.BasicMirrorWriter): |
2814 | for image in images: |
2815 | if self.glance_api_version == "1": |
2816 | image = image.to_dict() |
2817 | - props = image['properties'] |
2818 | + props = image["properties"] |
2819 | else: |
2820 | props = copy.deepcopy(image) |
2821 | |
2822 | - if image['owner'] != self.tenant_id: |
2823 | + if image["owner"] != self.tenant_id: |
2824 | continue |
2825 | |
2826 | - if props.get('content_id') != my_cid: |
2827 | + if props.get("content_id") != my_cid: |
2828 | continue |
2829 | |
2830 | - if image.get('status') != "active": |
2831 | - LOG.warning("Ignoring inactive image %s with status '%s'" % ( |
2832 | - image['id'], image.get('status'))) |
2833 | + if image.get("status") != "active": |
2834 | + LOG.warning( |
2835 | + "Ignoring inactive image %s with status '%s'" |
2836 | + % (image["id"], image.get("status")) |
2837 | + ) |
2838 | continue |
2839 | |
2840 | - source_content_id = props.get('source_content_id') |
2841 | + source_content_id = props.get("source_content_id") |
2842 | |
2843 | - product = props.get('product_name') |
2844 | - version = props.get('version_name') |
2845 | - item = props.get('item_name') |
2846 | + product = props.get("product_name") |
2847 | + version = props.get("version_name") |
2848 | + item = props.get("item_name") |
2849 | if not (version and product and item and source_content_id): |
2850 | - LOG.warning("%s missing required fields" % image['id']) |
2851 | + LOG.warning("%s missing required fields" % image["id"]) |
2852 | continue |
2853 | |
2854 | # get data from the datastore for this item, if it exists |
2855 | # and then update that with glance data (just in case different) |
2856 | try: |
2857 | - item_data = util.products_exdata(store_t, |
2858 | - (product, version, item,), |
2859 | - include_top=False, |
2860 | - insert_fieldnames=False) |
2861 | + item_data = util.products_exdata( |
2862 | + store_t, |
2863 | + ( |
2864 | + product, |
2865 | + version, |
2866 | + item, |
2867 | + ), |
2868 | + include_top=False, |
2869 | + insert_fieldnames=False, |
2870 | + ) |
2871 | except KeyError: |
2872 | item_data = {} |
2873 | |
2874 | # If original simplestreams-metadata is stored on the image, |
2875 | # use that as well. |
2876 | - if 'simplestreams_metadata' in props: |
2877 | + if "simplestreams_metadata" in props: |
2878 | simplestreams_metadata = json.loads( |
2879 | - props.get('simplestreams_metadata')) |
2880 | + props.get("simplestreams_metadata") |
2881 | + ) |
2882 | else: |
2883 | simplestreams_metadata = {} |
2884 | item_data.update(simplestreams_metadata) |
2885 | |
2886 | - item_data.update({'name': image['name'], 'id': image['id']}) |
2887 | - if 'owner_id' not in item_data: |
2888 | - item_data['owner_id'] = self.tenant_id |
2889 | - |
2890 | - util.products_set(glance_t, item_data, |
2891 | - (product, version, item,)) |
2892 | + item_data.update({"name": image["name"], "id": image["id"]}) |
2893 | + if "owner_id" not in item_data: |
2894 | + item_data["owner_id"] = self.tenant_id |
2895 | + |
2896 | + util.products_set( |
2897 | + glance_t, |
2898 | + item_data, |
2899 | + ( |
2900 | + product, |
2901 | + version, |
2902 | + item, |
2903 | + ), |
2904 | + ) |
2905 | |
2906 | - for product in glance_t['products']: |
2907 | - glance_t['products'][product]['region'] = self.region |
2908 | - glance_t['products'][product]['endpoint'] = self.auth_url |
2909 | + for product in glance_t["products"]: |
2910 | + glance_t["products"][product]["region"] = self.region |
2911 | + glance_t["products"][product]["endpoint"] = self.auth_url |
2912 | |
2913 | return glance_t |
2914 | |
2915 | def filter_item(self, data, src, target, pedigree): |
2916 | return filters.filter_item(self.item_filters, data, src, pedigree) |
2917 | |
2918 | - def create_glance_properties(self, content_id, source_content_id, |
2919 | - image_metadata, hypervisor_mapping): |
2920 | + def create_glance_properties( |
2921 | + self, content_id, source_content_id, image_metadata, hypervisor_mapping |
2922 | + ): |
2923 | """ |
2924 | Construct extra properties to store in Glance for an image. |
2925 | |
2926 | Based on source image metadata. |
2927 | """ |
2928 | properties = { |
2929 | - 'content_id': content_id, |
2930 | - 'source_content_id': source_content_id, |
2931 | + "content_id": content_id, |
2932 | + "source_content_id": source_content_id, |
2933 | } |
2934 | # An iterator of properties to carry over: if a property needs |
2935 | # renaming, uses a tuple (old name, new name). |
2936 | - carry_over_simple = ( |
2937 | - 'product_name', 'version_name', 'item_name') |
2938 | + carry_over_simple = ("product_name", "version_name", "item_name") |
2939 | carry_over = carry_over_simple + ( |
2940 | - ('os', 'os_distro'), ('version', 'os_version')) |
2941 | + ("os", "os_distro"), |
2942 | + ("version", "os_version"), |
2943 | + ) |
2944 | for carry_over_property in carry_over: |
2945 | if isinstance(carry_over_property, tuple): |
2946 | name_old, name_new = carry_over_property |
2947 | @@ -291,33 +330,41 @@ class GlanceMirror(mirrors.BasicMirrorWriter): |
2948 | name_old = name_new = carry_over_property |
2949 | properties[name_new] = image_metadata.get(name_old) |
2950 | |
2951 | - if 'arch' in image_metadata: |
2952 | - properties['architecture'] = canonicalize_arch( |
2953 | - image_metadata['arch']) |
2954 | + if "arch" in image_metadata: |
2955 | + properties["architecture"] = canonicalize_arch( |
2956 | + image_metadata["arch"] |
2957 | + ) |
2958 | |
2959 | - if hypervisor_mapping and 'ftype' in image_metadata: |
2960 | - _hypervisor_type = hypervisor_type(image_metadata['ftype']) |
2961 | + if hypervisor_mapping and "ftype" in image_metadata: |
2962 | + _hypervisor_type = hypervisor_type(image_metadata["ftype"]) |
2963 | if _hypervisor_type: |
2964 | - properties['hypervisor_type'] = _hypervisor_type |
2965 | + properties["hypervisor_type"] = _hypervisor_type |
2966 | |
2967 | properties.update(self.custom_properties) |
2968 | |
2969 | if self.set_latest_property: |
2970 | - properties['latest'] = "true" |
2971 | + properties["latest"] = "true" |
2972 | |
2973 | # Store flattened metadata for a source image along with the |
2974 | # image in 'simplestreams_metadata' property. |
2975 | simplestreams_metadata = image_metadata.copy() |
2976 | - drop_keys = carry_over_simple + ('path',) |
2977 | + drop_keys = carry_over_simple + ("path",) |
2978 | for remove_key in drop_keys: |
2979 | if remove_key in simplestreams_metadata: |
2980 | del simplestreams_metadata[remove_key] |
2981 | - properties['simplestreams_metadata'] = json.dumps( |
2982 | - simplestreams_metadata, sort_keys=True) |
2983 | + properties["simplestreams_metadata"] = json.dumps( |
2984 | + simplestreams_metadata, sort_keys=True |
2985 | + ) |
2986 | return properties |
2987 | |
2988 | - def prepare_glance_arguments(self, full_image_name, image_metadata, |
2989 | - image_md5_hash, image_size, image_properties): |
2990 | + def prepare_glance_arguments( |
2991 | + self, |
2992 | + full_image_name, |
2993 | + image_metadata, |
2994 | + image_md5_hash, |
2995 | + image_size, |
2996 | + image_properties, |
2997 | + ): |
2998 | """ |
2999 | Prepare arguments to pass into Glance image creation method. |
3000 | |
3001 | @@ -334,37 +381,39 @@ class GlanceMirror(mirrors.BasicMirrorWriter): |
3002 | GlanceClient.images.create(). |
3003 | """ |
3004 | create_kwargs = { |
3005 | - 'name': full_image_name, |
3006 | - 'container_format': 'bare', |
3007 | - 'is_public': self.visibility == 'public', |
3008 | - 'properties': image_properties, |
3009 | + "name": full_image_name, |
3010 | + "container_format": "bare", |
3011 | + "is_public": self.visibility == "public", |
3012 | + "properties": image_properties, |
3013 | } |
3014 | |
3015 | # In v2 is_public=True is visibility='public' |
3016 | if self.glance_api_version == "2": |
3017 | - del create_kwargs['is_public'] |
3018 | - create_kwargs['visibility'] = self.visibility |
3019 | + del create_kwargs["is_public"] |
3020 | + create_kwargs["visibility"] = self.visibility |
3021 | |
3022 | # v2 automatically calculates size and checksum |
3023 | if self.glance_api_version == "1": |
3024 | - if 'size' in image_metadata: |
3025 | - create_kwargs['size'] = int(image_metadata.get('size')) |
3026 | - if 'md5' in image_metadata: |
3027 | - create_kwargs['checksum'] = image_metadata.get('md5') |
3028 | + if "size" in image_metadata: |
3029 | + create_kwargs["size"] = int(image_metadata.get("size")) |
3030 | + if "md5" in image_metadata: |
3031 | + create_kwargs["checksum"] = image_metadata.get("md5") |
3032 | if image_md5_hash and image_size: |
3033 | - create_kwargs.update({ |
3034 | - 'checksum': image_md5_hash, |
3035 | - 'size': image_size, |
3036 | - }) |
3037 | + create_kwargs.update( |
3038 | + { |
3039 | + "checksum": image_md5_hash, |
3040 | + "size": image_size, |
3041 | + } |
3042 | + ) |
3043 | |
3044 | if self.image_import_conversion: |
3045 | - create_kwargs['disk_format'] = 'raw' |
3046 | - elif 'ftype' in image_metadata: |
3047 | - create_kwargs['disk_format'] = ( |
3048 | - disk_format(image_metadata['ftype']) or 'qcow2' |
3049 | + create_kwargs["disk_format"] = "raw" |
3050 | + elif "ftype" in image_metadata: |
3051 | + create_kwargs["disk_format"] = ( |
3052 | + disk_format(image_metadata["ftype"]) or "qcow2" |
3053 | ) |
3054 | else: |
3055 | - create_kwargs['disk_format'] = 'qcow2' |
3056 | + create_kwargs["disk_format"] = "qcow2" |
3057 | |
3058 | return create_kwargs |
3059 | |
3060 | @@ -378,37 +427,51 @@ class GlanceMirror(mirrors.BasicMirrorWriter): |
3061 | Returns a tuple of |
3062 | (str(local-image-path), int(image-size), str(image-md5-hash)). |
3063 | """ |
3064 | - image_name = image_stream_data.get('pubname') |
3065 | - image_size = image_stream_data.get('size') |
3066 | + image_name = image_stream_data.get("pubname") |
3067 | + image_size = image_stream_data.get("size") |
3068 | |
3069 | if self.progress_callback: |
3070 | + |
3071 | def progress_wrapper(written): |
3072 | self.progress_callback( |
3073 | - dict(status="Downloading", name=image_name, |
3074 | - size=None if image_size is None else int(image_size), |
3075 | - written=written)) |
3076 | + dict( |
3077 | + status="Downloading", |
3078 | + name=image_name, |
3079 | + size=None if image_size is None else int(image_size), |
3080 | + written=written, |
3081 | + ) |
3082 | + ) |
3083 | + |
3084 | else: |
3085 | + |
3086 | def progress_wrapper(written): |
3087 | pass |
3088 | |
3089 | try: |
3090 | tmp_path, _ = util.get_local_copy( |
3091 | - contentsource, progress_callback=progress_wrapper) |
3092 | + contentsource, progress_callback=progress_wrapper |
3093 | + ) |
3094 | |
3095 | if self.modify_hook: |
3096 | (new_size, new_md5) = call_hook( |
3097 | - item=image_stream_data, path=tmp_path, |
3098 | - cmd=self.modify_hook) |
3099 | + item=image_stream_data, path=tmp_path, cmd=self.modify_hook |
3100 | + ) |
3101 | else: |
3102 | new_size = os.path.getsize(tmp_path) |
3103 | - new_md5 = image_stream_data.get('md5') |
3104 | + new_md5 = image_stream_data.get("md5") |
3105 | finally: |
3106 | contentsource.close() |
3107 | |
3108 | return tmp_path, new_size, new_md5 |
3109 | |
3110 | - def adapt_source_entry(self, source_entry, hypervisor_mapping, image_name, |
3111 | - image_md5_hash, image_size): |
3112 | + def adapt_source_entry( |
3113 | + self, |
3114 | + source_entry, |
3115 | + hypervisor_mapping, |
3116 | + image_name, |
3117 | + image_md5_hash, |
3118 | + image_size, |
3119 | + ): |
3120 | """ |
3121 | Adapts the source simplestreams dict `source_entry` for use in the |
3122 | generated local simplestreams index. |
3123 | @@ -416,26 +479,30 @@ class GlanceMirror(mirrors.BasicMirrorWriter): |
3124 | output_entry = source_entry.copy() |
3125 | |
3126 | # Drop attributes not needed for the simplestreams index itself. |
3127 | - for property_name in ('path', 'product_name', 'version_name', |
3128 | - 'item_name'): |
3129 | + for property_name in ( |
3130 | + "path", |
3131 | + "product_name", |
3132 | + "version_name", |
3133 | + "item_name", |
3134 | + ): |
3135 | if property_name in output_entry: |
3136 | del output_entry[property_name] |
3137 | |
3138 | - if hypervisor_mapping and 'ftype' in output_entry: |
3139 | - _hypervisor_type = hypervisor_type(output_entry['ftype']) |
3140 | + if hypervisor_mapping and "ftype" in output_entry: |
3141 | + _hypervisor_type = hypervisor_type(output_entry["ftype"]) |
3142 | if _hypervisor_type: |
3143 | _virt_type = virt_type(_hypervisor_type) |
3144 | if _virt_type: |
3145 | - output_entry['virt'] = _virt_type |
3146 | + output_entry["virt"] = _virt_type |
3147 | |
3148 | - output_entry['region'] = self.region |
3149 | - output_entry['endpoint'] = self.auth_url |
3150 | - output_entry['owner_id'] = self.tenant_id |
3151 | + output_entry["region"] = self.region |
3152 | + output_entry["endpoint"] = self.auth_url |
3153 | + output_entry["owner_id"] = self.tenant_id |
3154 | |
3155 | - output_entry['name'] = image_name |
3156 | + output_entry["name"] = image_name |
3157 | if image_md5_hash and image_size: |
3158 | - output_entry['md5'] = image_md5_hash |
3159 | - output_entry['size'] = str(image_size) |
3160 | + output_entry["md5"] = image_md5_hash |
3161 | + output_entry["size"] = str(image_size) |
3162 | |
3163 | return output_entry |
3164 | |
3165 | @@ -459,58 +526,74 @@ class GlanceMirror(mirrors.BasicMirrorWriter): |
3166 | # (product-name, version-name, image-type) |
3167 | # from the tuple `pedigree` in the source simplestreams index. |
3168 | flattened_img_data = util.products_exdata( |
3169 | - src, pedigree, include_top=False) |
3170 | + src, pedigree, include_top=False |
3171 | + ) |
3172 | |
3173 | tmp_path = None |
3174 | |
3175 | full_image_name = "{}{}".format( |
3176 | self.name_prefix, |
3177 | - flattened_img_data.get('pubname', flattened_img_data.get('name'))) |
3178 | - if not full_image_name.endswith(flattened_img_data['item_name']): |
3179 | - full_image_name += "-{}".format(flattened_img_data['item_name']) |
3180 | + flattened_img_data.get("pubname", flattened_img_data.get("name")), |
3181 | + ) |
3182 | + if not full_image_name.endswith(flattened_img_data["item_name"]): |
3183 | + full_image_name += "-{}".format(flattened_img_data["item_name"]) |
3184 | |
3185 | # Download images locally into a temporary file. |
3186 | tmp_path, new_size, new_md5 = self.download_image( |
3187 | - contentsource, flattened_img_data) |
3188 | + contentsource, flattened_img_data |
3189 | + ) |
3190 | |
3191 | - hypervisor_mapping = self.config.get('hypervisor_mapping', False) |
3192 | + hypervisor_mapping = self.config.get("hypervisor_mapping", False) |
3193 | |
3194 | glance_props = self.create_glance_properties( |
3195 | - target['content_id'], src['content_id'], flattened_img_data, |
3196 | - hypervisor_mapping) |
3197 | + target["content_id"], |
3198 | + src["content_id"], |
3199 | + flattened_img_data, |
3200 | + hypervisor_mapping, |
3201 | + ) |
3202 | LOG.debug("glance properties %s", glance_props) |
3203 | create_kwargs = self.prepare_glance_arguments( |
3204 | - full_image_name, flattened_img_data, new_md5, new_size, |
3205 | - glance_props) |
3206 | + full_image_name, |
3207 | + flattened_img_data, |
3208 | + new_md5, |
3209 | + new_size, |
3210 | + glance_props, |
3211 | + ) |
3212 | |
3213 | target_sstream_item = self.adapt_source_entry( |
3214 | - flattened_img_data, hypervisor_mapping, full_image_name, new_md5, |
3215 | - new_size) |
3216 | + flattened_img_data, |
3217 | + hypervisor_mapping, |
3218 | + full_image_name, |
3219 | + new_md5, |
3220 | + new_size, |
3221 | + ) |
3222 | |
3223 | try: |
3224 | if self.glance_api_version == "1": |
3225 | # Set data as string if v1 |
3226 | - create_kwargs['data'] = open(tmp_path, 'rb') |
3227 | + create_kwargs["data"] = open(tmp_path, "rb") |
3228 | else: |
3229 | # Keep properties for v2 update call |
3230 | - _properties = create_kwargs['properties'] |
3231 | - del create_kwargs['properties'] |
3232 | + _properties = create_kwargs["properties"] |
3233 | + del create_kwargs["properties"] |
3234 | |
3235 | LOG.debug("glance create_kwargs %s", create_kwargs) |
3236 | glance_image = self.gclient.images.create(**create_kwargs) |
3237 | - target_sstream_item['id'] = glance_image.id |
3238 | + target_sstream_item["id"] = glance_image.id |
3239 | |
3240 | if self.glance_api_version == "2": |
3241 | if self.image_import_conversion: |
3242 | # Stage the image before starting import |
3243 | - self.gclient.images.stage(glance_image.id, |
3244 | - open(tmp_path, 'rb')) |
3245 | + self.gclient.images.stage( |
3246 | + glance_image.id, open(tmp_path, "rb") |
3247 | + ) |
3248 | # Import the Glance image |
3249 | self.gclient.images.image_import(glance_image.id) |
3250 | else: |
3251 | # Upload for v2 |
3252 | - self.gclient.images.upload(glance_image.id, |
3253 | - open(tmp_path, 'rb')) |
3254 | + self.gclient.images.upload( |
3255 | + glance_image.id, open(tmp_path, "rb") |
3256 | + ) |
3257 | # Update properties for v2 |
3258 | self.gclient.images.update(glance_image.id, **_properties) |
3259 | |
3260 | @@ -527,15 +610,19 @@ class GlanceMirror(mirrors.BasicMirrorWriter): |
3261 | # self.load_products() instead |
3262 | if self.set_latest_property: |
3263 |          # Search all images with the same target attributes |
3264 | - _filter_properties = {'filters': { |
3265 | - 'latest': 'true', |
3266 | - 'os_version': glance_props['os_version'], |
3267 | - 'architecture': glance_props['architecture']}} |
3268 | + _filter_properties = { |
3269 | + "filters": { |
3270 | + "latest": "true", |
3271 | + "os_version": glance_props["os_version"], |
3272 | + "architecture": glance_props["architecture"], |
3273 | + } |
3274 | + } |
3275 | images = self.gclient.images.list(**_filter_properties) |
3276 | for image in images: |
3277 | if image.id != glance_image.id: |
3278 | - self.gclient.images.update(image.id, |
3279 | - remove_props=['latest']) |
3280 | + self.gclient.images.update( |
3281 | + image.id, remove_props=["latest"] |
3282 | + ) |
3283 | |
3284 | finally: |
3285 | if tmp_path and os.path.exists(tmp_path): |
3286 | @@ -562,10 +649,10 @@ class GlanceMirror(mirrors.BasicMirrorWriter): |
3287 | found = self.gclient.images.get(image_id) |
3288 | if found.size == size and found.checksum == checksum: |
3289 | return |
3290 | - msg = ( |
3291 | - ("Invalid glance image: %s. " % image_id) + |
3292 | - ("Expected size=%s md5=%s. Found size=%s md5=%s." % |
3293 | - (size, checksum, found.size, found.checksum))) |
3294 | + msg = ("Invalid glance image: %s. " % image_id) + ( |
3295 | + "Expected size=%s md5=%s. Found size=%s md5=%s." |
3296 | + % (size, checksum, found.size, found.checksum) |
3297 | + ) |
3298 | if delete: |
3299 | LOG.warning("Deleting image %s: %s", image_id, msg) |
3300 | self.gclient.images.delete(image_id) |
3301 | @@ -587,13 +674,15 @@ class GlanceMirror(mirrors.BasicMirrorWriter): |
3302 | if version_name not in self.inserts[product_name]: |
3303 | self.inserts[product_name][version_name] = {} |
3304 | |
3305 | - if 'ftype' in data: |
3306 | - ftype = data['ftype'] |
3307 | + if "ftype" in data: |
3308 | + ftype = data["ftype"] |
3309 | else: |
3310 | flat = util.products_exdata(src, pedigree, include_top=False) |
3311 | - ftype = flat.get('ftype') |
3312 | + ftype = flat.get("ftype") |
3313 | self.inserts[product_name][version_name][item_name] = ( |
3314 | - ftype, (data, src, target, pedigree, contentsource)) |
3315 | + ftype, |
3316 | + (data, src, target, pedigree, contentsource), |
3317 | + ) |
3318 | |
3319 | def insert_version(self, data, src, target, pedigree): |
3320 | """Upload all images for this version into glance |
3321 | @@ -605,14 +694,20 @@ class GlanceMirror(mirrors.BasicMirrorWriter): |
3322 | product_name, version_name = pedigree |
3323 | inserts = self.inserts.get(product_name, {}).get(version_name, []) |
3324 | |
3325 | - rtar_names = [f for f in inserts |
3326 | - if inserts[f][0] in ('root.tar.gz', 'root.tar.xz')] |
3327 | + rtar_names = [ |
3328 | + f |
3329 | + for f in inserts |
3330 | + if inserts[f][0] in ("root.tar.gz", "root.tar.xz") |
3331 | + ] |
3332 | |
3333 | for _iname, (ftype, iargs) in inserts.items(): |
3334 | if ftype == "squashfs" and rtar_names: |
3335 | - LOG.info("[%s] Skipping ftype 'squashfs' image in preference" |
3336 | - "for root tarball type in %s", |
3337 | - '/'.join(pedigree), rtar_names) |
3338 | + LOG.info( |
3339 | +                    "[%s] Skipping ftype 'squashfs' image in preference " |
3340 | + "for root tarball type in %s", |
3341 | + "/".join(pedigree), |
3342 | + rtar_names, |
3343 | + ) |
3344 | continue |
3345 | self._insert_item(*iargs) |
3346 | |
3347 | @@ -622,9 +717,9 @@ class GlanceMirror(mirrors.BasicMirrorWriter): |
3348 | |
3349 | def remove_item(self, data, src, target, pedigree): |
3350 | util.products_del(target, pedigree) |
3351 | - if 'id' in data: |
3352 | - print("removing %s: %s" % (data['id'], data['name'])) |
3353 | - self.gclient.images.delete(data['id']) |
3354 | + if "id" in data: |
3355 | + print("removing %s: %s" % (data["id"], data["name"])) |
3356 | + self.gclient.images.delete(data["id"]) |
3357 | |
3358 | def filter_index_entry(self, data, src, pedigree): |
3359 | return filters.filter_dict(self.index_filters, data) |
3360 | @@ -637,18 +732,18 @@ class GlanceMirror(mirrors.BasicMirrorWriter): |
3361 | util.products_prune(tree, preserve_empty_products=True) |
3362 | |
3363 | # stop these items from copying up when we call condense |
3364 | - sticky = ['ftype', 'md5', 'sha256', 'size', 'name', 'id'] |
3365 | + sticky = ["ftype", "md5", "sha256", "size", "name", "id"] |
3366 | |
3367 | # LP: #1329805. Juju expects these on the item. |
3368 | - if self.config.get('sticky_endpoint_region', True): |
3369 | - sticky += ['endpoint', 'region'] |
3370 | + if self.config.get("sticky_endpoint_region", True): |
3371 | + sticky += ["endpoint", "region"] |
3372 | |
3373 | util.products_condense(tree, sticky=sticky) |
3374 | |
3375 | tsnow = util.timestamp() |
3376 | - tree['updated'] = tsnow |
3377 | + tree["updated"] = tsnow |
3378 | |
3379 | - dpath = self._cidpath(tree['content_id']) |
3380 | + dpath = self._cidpath(tree["content_id"]) |
3381 | LOG.info("writing data: %s", dpath) |
3382 | self.store.insert_content(dpath, util.dump_data(tree)) |
3383 | |
3384 | @@ -659,17 +754,20 @@ class GlanceMirror(mirrors.BasicMirrorWriter): |
3385 | except IOError as exc: |
3386 | if exc.errno != errno.ENOENT: |
3387 | raise |
3388 | - index = {"index": {}, 'format': 'index:1.0', |
3389 | - 'updated': util.timestamp()} |
3390 | - |
3391 | - index['index'][tree['content_id']] = { |
3392 | - 'updated': tsnow, |
3393 | - 'datatype': 'image-ids', |
3394 | - 'clouds': [{'region': self.region, 'endpoint': self.auth_url}], |
3395 | - 'cloudname': self.cloudname, |
3396 | - 'path': dpath, |
3397 | - 'products': list(tree['products'].keys()), |
3398 | - 'format': tree['format'], |
3399 | + index = { |
3400 | + "index": {}, |
3401 | + "format": "index:1.0", |
3402 | + "updated": util.timestamp(), |
3403 | + } |
3404 | + |
3405 | + index["index"][tree["content_id"]] = { |
3406 | + "updated": tsnow, |
3407 | + "datatype": "image-ids", |
3408 | + "clouds": [{"region": self.region, "endpoint": self.auth_url}], |
3409 | + "cloudname": self.cloudname, |
3410 | + "path": dpath, |
3411 | + "products": list(tree["products"].keys()), |
3412 | + "format": tree["format"], |
3413 | } |
3414 | LOG.info("writing data: %s", ipath) |
3415 | self.store.insert_content(ipath, util.dump_data(index)) |
3416 | @@ -694,13 +792,13 @@ class ItemInfoDryRunMirror(GlanceMirror): |
3417 | |
3418 | def insert_item(self, data, src, target, pedigree, contentsource): |
3419 | data = util.products_exdata(src, pedigree) |
3420 | - if 'size' in data and 'path' in data and 'pubname' in data: |
3421 | - self.items[data['pubname']] = int(data['size']) |
3422 | + if "size" in data and "path" in data and "pubname" in data: |
3423 | + self.items[data["pubname"]] = int(data["size"]) |
3424 | |
3425 | |
3426 | def _checksum_file(fobj, read_size=util.READ_SIZE, checksums=None): |
3427 | if checksums is None: |
3428 | - checksums = {'md5': None} |
3429 | + checksums = {"md5": None} |
3430 | cksum = checksum_util.checksummer(checksums=checksums) |
3431 | while True: |
3432 | buf = fobj.read(read_size) |
3433 | @@ -713,13 +811,13 @@ def _checksum_file(fobj, read_size=util.READ_SIZE, checksums=None): |
3434 | def call_hook(item, path, cmd): |
3435 | env = os.environ.copy() |
3436 | env.update(item) |
3437 | - env['IMAGE_PATH'] = path |
3438 | - env['FIELDS'] = ' '.join(item.keys()) + ' IMAGE_PATH' |
3439 | + env["IMAGE_PATH"] = path |
3440 | + env["FIELDS"] = " ".join(item.keys()) + " IMAGE_PATH" |
3441 | |
3442 | util.subp(cmd, env=env, capture=False) |
3443 | |
3444 | with open(path, "rb") as fp: |
3445 | - md5 = _checksum_file(fp, checksums={'md5': None}) |
3446 | + md5 = _checksum_file(fp, checksums={"md5": None}) |
3447 | |
3448 | return (os.path.getsize(path), md5) |
3449 | |
3450 | @@ -728,12 +826,13 @@ def _strip_version(endpoint): |
3451 | """Strip a version from the last component of an endpoint if present""" |
3452 | |
3453 | # Get rid of trailing '/' if present |
3454 | - if endpoint.endswith('/'): |
3455 | + if endpoint.endswith("/"): |
3456 | endpoint = endpoint[:-1] |
3457 | - url_bits = endpoint.split('/') |
3458 | + url_bits = endpoint.split("/") |
3459 | # regex to match 'v1' or 'v2.0' etc |
3460 | - if re.match(r'v\d+\.?\d*', url_bits[-1]): |
3461 | - endpoint = '/'.join(url_bits[:-1]) |
3462 | + if re.match(r"v\d+\.?\d*", url_bits[-1]): |
3463 | + endpoint = "/".join(url_bits[:-1]) |
3464 | return endpoint |
3465 | |
3466 | + |
3467 | # vi: ts=4 expandtab syntax=python |
3468 | diff --git a/simplestreams/objectstores/__init__.py b/simplestreams/objectstores/__init__.py |
3469 | index f118a92..b9a6bbe 100644 |
3470 | --- a/simplestreams/objectstores/__init__.py |
3471 | +++ b/simplestreams/objectstores/__init__.py |
3472 | @@ -35,9 +35,13 @@ class ObjectStore(object): |
3473 | |
3474 | def insert_content(self, path, content, checksums=None, mutable=True): |
3475 | if not isinstance(content, bytes): |
3476 | - content = content.encode('utf-8') |
3477 | - self.insert(path=path, reader=cs.MemoryContentSource(content=content), |
3478 | - checksums=checksums, mutable=mutable) |
3479 | + content = content.encode("utf-8") |
3480 | + self.insert( |
3481 | + path=path, |
3482 | + reader=cs.MemoryContentSource(content=content), |
3483 | + checksums=checksums, |
3484 | + mutable=mutable, |
3485 | + ) |
3486 | |
3487 | def remove(self, path): |
3488 | # remove path from store |
3489 | @@ -48,9 +52,12 @@ class ObjectStore(object): |
3490 | raise NotImplementedError() |
3491 | |
3492 | def exists_with_checksum(self, path, checksums=None): |
3493 | - return has_valid_checksum(path=path, reader=self.source, |
3494 | - checksums=checksums, |
3495 | - read_size=self.read_size) |
3496 | + return has_valid_checksum( |
3497 | + path=path, |
3498 | + reader=self.source, |
3499 | + checksums=checksums, |
3500 | + read_size=self.read_size, |
3501 | + ) |
3502 | |
3503 | |
3504 | class MemoryObjectStore(ObjectStore): |
3505 | @@ -73,35 +80,43 @@ class MemoryObjectStore(ObjectStore): |
3506 | url = "%s://%s" % (self.__class__, path) |
3507 | return cs.MemoryContentSource(content=self.data[path], url=url) |
3508 | except KeyError: |
3509 | - raise IOError(errno.ENOENT, '%s not found' % path) |
3510 | + raise IOError(errno.ENOENT, "%s not found" % path) |
3511 | |
3512 | |
3513 | class FileStore(ObjectStore): |
3514 | - |
3515 | def __init__(self, prefix, complete_callback=None): |
3516 | - """ complete_callback is called periodically to notify users when a |
3517 | + """complete_callback is called periodically to notify users when a |
3518 | file is being inserted. It takes three arguments: the path that is |
3519 | inserted, the number of bytes downloaded, and the number of total |
3520 | - bytes. """ |
3521 | + bytes.""" |
3522 | self.prefix = prefix |
3523 | self.complete_callback = complete_callback |
3524 | |
3525 | - def insert(self, path, reader, checksums=None, mutable=True, size=None, |
3526 | - sparse=False): |
3527 | - |
3528 | + def insert( |
3529 | + self, |
3530 | + path, |
3531 | + reader, |
3532 | + checksums=None, |
3533 | + mutable=True, |
3534 | + size=None, |
3535 | + sparse=False, |
3536 | + ): |
3537 | wpath = self._fullpath(path) |
3538 | if os.path.isfile(wpath): |
3539 | if not mutable: |
3540 | # if the file exists, and not mutable, return |
3541 | return |
3542 | - if has_valid_checksum(path=path, reader=self.source, |
3543 | - checksums=checksums, |
3544 | - read_size=self.read_size): |
3545 | + if has_valid_checksum( |
3546 | + path=path, |
3547 | + reader=self.source, |
3548 | + checksums=checksums, |
3549 | + read_size=self.read_size, |
3550 | + ): |
3551 | return |
3552 | |
3553 | zeros = None |
3554 | if sparse is True: |
3555 | - zeros = '\0' * self.read_size |
3556 | + zeros = "\0" * self.read_size |
3557 | |
3558 | cksum = checksum_util.checksummer(checksums) |
3559 | out_d = os.path.dirname(wpath) |
3560 | @@ -110,8 +125,9 @@ class FileStore(ObjectStore): |
3561 | util.mkdir_p(out_d) |
3562 | orig_part_size = 0 |
3563 | reader_does_checksum = ( |
3564 | - isinstance(reader, cs.ChecksummingContentSource) and |
3565 | - cksum.algorithm == reader.algorithm) |
3566 | + isinstance(reader, cs.ChecksummingContentSource) |
3567 | + and cksum.algorithm == reader.algorithm |
3568 | + ) |
3569 | |
3570 | if os.path.exists(partfile): |
3571 | try: |
3572 | @@ -121,8 +137,12 @@ class FileStore(ObjectStore): |
3573 | else: |
3574 | reader.set_start_pos(orig_part_size) |
3575 | |
3576 | - LOG.debug("resuming partial (%s) download of '%s' from '%s'", |
3577 | - orig_part_size, path, partfile) |
3578 | + LOG.debug( |
3579 | + "resuming partial (%s) download of '%s' from '%s'", |
3580 | + orig_part_size, |
3581 | + path, |
3582 | + partfile, |
3583 | + ) |
3584 | with open(partfile, "rb") as fp: |
3585 | while True: |
3586 | buf = fp.read(self.read_size) |
3587 | @@ -136,15 +156,17 @@ class FileStore(ObjectStore): |
3588 | os.unlink(partfile) |
3589 | |
3590 | with open(partfile, "ab") as wfp: |
3591 | - |
3592 | while True: |
3593 | try: |
3594 | buf = reader.read(self.read_size) |
3595 | except checksum_util.InvalidChecksum: |
3596 | break |
3597 | buflen = len(buf) |
3598 | - if (buflen != self.read_size and zeros is not None and |
3599 | - zeros[0:buflen] == buf): |
3600 | + if ( |
3601 | + buflen != self.read_size |
3602 | + and zeros is not None |
3603 | + and zeros[0:buflen] == buf |
3604 | + ): |
3605 | wfp.seek(wfp.tell() + buflen) |
3606 | elif buf == zeros: |
3607 | wfp.seek(wfp.tell() + buflen) |
3608 | @@ -209,8 +231,9 @@ class FileStore(ObjectStore): |
3609 | return os.path.join(self.prefix, path) |
3610 | |
3611 | |
3612 | -def has_valid_checksum(path, reader, checksums=None, |
3613 | - read_size=READ_BUFFER_SIZE): |
3614 | +def has_valid_checksum( |
3615 | + path, reader, checksums=None, read_size=READ_BUFFER_SIZE |
3616 | +): |
3617 | if checksums is None: |
3618 | return False |
3619 | try: |
3620 | diff --git a/simplestreams/objectstores/s3.py b/simplestreams/objectstores/s3.py |
3621 | index f1e9602..b07507f 100644 |
3622 | --- a/simplestreams/objectstores/s3.py |
3623 | +++ b/simplestreams/objectstores/s3.py |
3624 | @@ -15,19 +15,19 @@ |
3625 | # You should have received a copy of the GNU Affero General Public License |
3626 | # along with Simplestreams. If not, see <http://www.gnu.org/licenses/>. |
3627 | |
3628 | +import errno |
3629 | +import tempfile |
3630 | +from contextlib import closing |
3631 | + |
3632 | import boto.exception |
3633 | import boto.s3 |
3634 | import boto.s3.connection |
3635 | -from contextlib import closing |
3636 | -import errno |
3637 | -import tempfile |
3638 | |
3639 | -import simplestreams.objectstores as objectstores |
3640 | import simplestreams.contentsource as cs |
3641 | +import simplestreams.objectstores as objectstores |
3642 | |
3643 | |
3644 | class S3ObjectStore(objectstores.ObjectStore): |
3645 | - |
3646 | _bucket = None |
3647 | _connection = None |
3648 | |
3649 | @@ -92,8 +92,8 @@ class S3ObjectStore(objectstores.ObjectStore): |
3650 | if key is None: |
3651 | return False |
3652 | |
3653 | - if 'md5' in checksums: |
3654 | - return checksums['md5'] == key.etag.replace('"', "") |
3655 | + if "md5" in checksums: |
3656 | + return checksums["md5"] == key.etag.replace('"', "") |
3657 | |
3658 | return False |
3659 | |
3660 | diff --git a/simplestreams/objectstores/swift.py b/simplestreams/objectstores/swift.py |
3661 | index f2c0d5b..d33fa3b 100644 |
3662 | --- a/simplestreams/objectstores/swift.py |
3663 | +++ b/simplestreams/objectstores/swift.py |
3664 | @@ -15,30 +15,30 @@ |
3665 | # You should have received a copy of the GNU Affero General Public License |
3666 | # along with Simplestreams. If not, see <http://www.gnu.org/licenses/>. |
3667 | |
3668 | -import simplestreams.objectstores as objectstores |
3669 | -import simplestreams.contentsource as cs |
3670 | -import simplestreams.openstack as openstack |
3671 | - |
3672 | import errno |
3673 | import hashlib |
3674 | -from swiftclient import Connection, ClientException |
3675 | + |
3676 | +from swiftclient import ClientException, Connection |
3677 | + |
3678 | +import simplestreams.contentsource as cs |
3679 | +import simplestreams.objectstores as objectstores |
3680 | +import simplestreams.openstack as openstack |
3681 | |
3682 | |
3683 | def get_swiftclient(**kwargs): |
3684 | # nmap has entries that need name changes from a 'get_service_conn_info' |
3685 | # to a swift Connection name. |
3686 | # pt has names that pass straight through |
3687 | - nmap = {'endpoint': 'preauthurl', 'token': 'preauthtoken'} |
3688 | - pt = ('insecure', 'cacert') |
3689 | + nmap = {"endpoint": "preauthurl", "token": "preauthtoken"} |
3690 | + pt = ("insecure", "cacert") |
3691 | |
3692 | connargs = {v: kwargs.get(k) for k, v in nmap.items() if k in kwargs} |
3693 | connargs.update({k: kwargs.get(k) for k in pt if k in kwargs}) |
3694 | - if kwargs.get('session'): |
3695 | - sess = kwargs.get('session') |
3696 | + if kwargs.get("session"): |
3697 | + sess = kwargs.get("session") |
3698 | try: |
3699 | # If session is available try it |
3700 | - return Connection(session=sess, |
3701 | - cacert=kwargs.get('cacert')) |
3702 | + return Connection(session=sess, cacert=kwargs.get("cacert")) |
3703 | except TypeError: |
3704 |              # The edge case where session is available but swiftclient is |
3705 | # < 3.3.0. Use the old style method for Connection. |
3706 | @@ -52,7 +52,6 @@ class SwiftContentSource(cs.IteratorContentSource): |
3707 | |
3708 | |
3709 | class SwiftObjectStore(objectstores.ObjectStore): |
3710 | - |
3711 | def __init__(self, prefix, region=None): |
3712 | # expect 'swift://bucket/path_prefix' |
3713 | self.prefix = prefix |
3714 | @@ -67,35 +66,41 @@ class SwiftObjectStore(objectstores.ObjectStore): |
3715 | |
3716 | self.keystone_creds = openstack.load_keystone_creds() |
3717 | if region is not None: |
3718 | - self.keystone_creds['region_name'] = region |
3719 | + self.keystone_creds["region_name"] = region |
3720 | |
3721 | - conn_info = openstack.get_service_conn_info('object-store', |
3722 | - **self.keystone_creds) |
3723 | + conn_info = openstack.get_service_conn_info( |
3724 | + "object-store", **self.keystone_creds |
3725 | + ) |
3726 | self.swiftclient = get_swiftclient(**conn_info) |
3727 | |
3728 | # http://docs.openstack.org/developer/swift/misc.html#acls |
3729 | - self.swiftclient.put_container(self.container, |
3730 | - headers={'X-Container-Read': |
3731 | - '.r:*,.rlistings'}) |
3732 | + self.swiftclient.put_container( |
3733 | + self.container, headers={"X-Container-Read": ".r:*,.rlistings"} |
3734 | + ) |
3735 | |
3736 | def insert(self, path, reader, checksums=None, mutable=True, size=None): |
3737 | # store content from reader.read() into path, expecting result checksum |
3738 | - self._insert(path=path, contents=reader, checksums=checksums, |
3739 | - mutable=mutable) |
3740 | + self._insert( |
3741 | + path=path, contents=reader, checksums=checksums, mutable=mutable |
3742 | + ) |
3743 | |
3744 | def insert_content(self, path, content, checksums=None, mutable=True): |
3745 | - self._insert(path=path, contents=content, checksums=checksums, |
3746 | - mutable=mutable) |
3747 | + self._insert( |
3748 | + path=path, contents=content, checksums=checksums, mutable=mutable |
3749 | + ) |
3750 | |
3751 | def remove(self, path): |
3752 | - self.swiftclient.delete_object(container=self.container, |
3753 | - obj=self.path_prefix + path) |
3754 | + self.swiftclient.delete_object( |
3755 | + container=self.container, obj=self.path_prefix + path |
3756 | + ) |
3757 | |
3758 | def source(self, path): |
3759 | def itgen(): |
3760 | (_headers, iterator) = self.swiftclient.get_object( |
3761 | - container=self.container, obj=self.path_prefix + path, |
3762 | - resp_chunk_size=self.read_size) |
3763 | + container=self.container, |
3764 | + obj=self.path_prefix + path, |
3765 | + resp_chunk_size=self.read_size, |
3766 | + ) |
3767 | return iterator |
3768 | |
3769 | return SwiftContentSource(itgen=itgen, url=self.prefix + path) |
3770 | @@ -105,8 +110,9 @@ class SwiftObjectStore(objectstores.ObjectStore): |
3771 | |
3772 | def _head_path(self, path): |
3773 | try: |
3774 | - headers = self.swiftclient.head_object(container=self.container, |
3775 | - obj=self.path_prefix + path) |
3776 | + headers = self.swiftclient.head_object( |
3777 | + container=self.container, obj=self.path_prefix + path |
3778 | + ) |
3779 | except Exception as exc: |
3780 | if is_enoent(exc): |
3781 | return {} |
3782 | @@ -122,19 +128,22 @@ class SwiftObjectStore(objectstores.ObjectStore): |
3783 | if headers_match_checksums(headers, checksums): |
3784 | return |
3785 | |
3786 | - insargs = {'container': self.container, 'obj': self.path_prefix + path, |
3787 | - 'contents': contents} |
3788 | + insargs = { |
3789 | + "container": self.container, |
3790 | + "obj": self.path_prefix + path, |
3791 | + "contents": contents, |
3792 | + } |
3793 | |
3794 | if size is not None and isinstance(contents, str): |
3795 | size = len(contents) |
3796 | |
3797 | if size is not None: |
3798 | - insargs['content_length'] = size |
3799 | + insargs["content_length"] = size |
3800 | |
3801 | - if checksums and checksums.get('md5'): |
3802 | - insargs['etag'] = checksums.get('md5') |
3803 | + if checksums and checksums.get("md5"): |
3804 | + insargs["etag"] = checksums.get("md5") |
3805 | elif isinstance(contents, str): |
3806 | - insargs['etag'] = hashlib.md5(contents).hexdigest() |
3807 | + insargs["etag"] = hashlib.md5(contents).hexdigest() |
3808 | |
3809 | self.swiftclient.put_object(**insargs) |
3810 | |
3811 | @@ -142,13 +151,15 @@ class SwiftObjectStore(objectstores.ObjectStore): |
3812 | def headers_match_checksums(headers, checksums): |
3813 | if not (headers and checksums): |
3814 | return False |
3815 | - if ('md5' in checksums and headers.get('etag') == checksums.get('md5')): |
3816 | + if "md5" in checksums and headers.get("etag") == checksums.get("md5"): |
3817 | return True |
3818 | return False |
3819 | |
3820 | |
3821 | def is_enoent(exc): |
3822 | - return ((isinstance(exc, IOError) and exc.errno == errno.ENOENT) or |
3823 | - (isinstance(exc, ClientException) and exc.http_status == 404)) |
3824 | + return (isinstance(exc, IOError) and exc.errno == errno.ENOENT) or ( |
3825 | + isinstance(exc, ClientException) and exc.http_status == 404 |
3826 | + ) |
3827 | + |
3828 | |
3829 | # vi: ts=4 expandtab |
3830 | diff --git a/simplestreams/openstack.py b/simplestreams/openstack.py |
3831 | index ebf63c4..48101e7 100644 |
3832 | --- a/simplestreams/openstack.py |
3833 | +++ b/simplestreams/openstack.py |
3834 | @@ -20,9 +20,11 @@ import os |
3835 | |
3836 | from keystoneclient.v2_0 import client as ksclient_v2 |
3837 | from keystoneclient.v3 import client as ksclient_v3 |
3838 | + |
3839 | try: |
3840 | from keystoneauth1 import session |
3841 | - from keystoneauth1.identity import (v2, v3) |
3842 | + from keystoneauth1.identity import v2, v3 |
3843 | + |
3844 | _LEGACY_CLIENTS = False |
3845 | except ImportError: |
3846 | # 14.04 level packages do not have this. |
3847 | @@ -31,43 +33,88 @@ except ImportError: |
3848 | |
3849 | |
3850 | OS_ENV_VARS = ( |
3851 | - 'OS_AUTH_TOKEN', 'OS_AUTH_URL', 'OS_CACERT', 'OS_IMAGE_API_VERSION', |
3852 | - 'OS_IMAGE_URL', 'OS_PASSWORD', 'OS_REGION_NAME', 'OS_STORAGE_URL', |
3853 | - 'OS_TENANT_ID', 'OS_TENANT_NAME', 'OS_USERNAME', 'OS_INSECURE', |
3854 | - 'OS_USER_DOMAIN_NAME', 'OS_PROJECT_DOMAIN_NAME', |
3855 | - 'OS_USER_DOMAIN_ID', 'OS_PROJECT_DOMAIN_ID', 'OS_PROJECT_NAME', |
3856 | - 'OS_PROJECT_ID' |
3857 | + "OS_AUTH_TOKEN", |
3858 | + "OS_AUTH_URL", |
3859 | + "OS_CACERT", |
3860 | + "OS_IMAGE_API_VERSION", |
3861 | + "OS_IMAGE_URL", |
3862 | + "OS_PASSWORD", |
3863 | + "OS_REGION_NAME", |
3864 | + "OS_STORAGE_URL", |
3865 | + "OS_TENANT_ID", |
3866 | + "OS_TENANT_NAME", |
3867 | + "OS_USERNAME", |
3868 | + "OS_INSECURE", |
3869 | + "OS_USER_DOMAIN_NAME", |
3870 | + "OS_PROJECT_DOMAIN_NAME", |
3871 | + "OS_USER_DOMAIN_ID", |
3872 | + "OS_PROJECT_DOMAIN_ID", |
3873 | + "OS_PROJECT_NAME", |
3874 | + "OS_PROJECT_ID", |
3875 | ) |
3876 | |
3877 | |
3878 | # only used for legacy client connection |
3879 | -PT_V2 = ('username', 'password', 'tenant_id', 'tenant_name', 'auth_url', |
3880 | - 'cacert', 'insecure', ) |
3881 | +PT_V2 = ( |
3882 | + "username", |
3883 | + "password", |
3884 | + "tenant_id", |
3885 | + "tenant_name", |
3886 | + "auth_url", |
3887 | + "cacert", |
3888 | + "insecure", |
3889 | +) |
3890 | |
3891 | # annoyingly the 'insecure' option in the old client constructor is now called |
3892 | # the 'verify' option in the session.Session() constructor |
3893 | -PASSWORD_V2 = ('auth_url', 'username', 'password', 'user_id', 'trust_id', |
3894 | - 'tenant_id', 'tenant_name', 'reauthenticate') |
3895 | -PASSWORD_V3 = ('auth_url', 'password', 'username', |
3896 | - 'user_id', 'user_domain_id', 'user_domain_name', |
3897 | - 'trust_id', 'system_scope', |
3898 | - 'domain_id', 'domain_name', |
3899 | - 'project_id', 'project_name', |
3900 | - 'project_domain_id', 'project_domain_name', |
3901 | - 'reauthenticate') |
3902 | -SESSION_ARGS = ('cert', 'timeout', 'verify', 'original_ip', 'redirect', |
3903 | - 'addition_headers', 'app_name', 'app_version', |
3904 | - 'additional_user_agent', |
3905 | - 'discovery_cache', 'split_loggers', 'collect_timing') |
3906 | - |
3907 | - |
3908 | -Settings = collections.namedtuple('Settings', 'mod ident arg_set') |
3909 | -KS_VERSION_RESOLVER = {2: Settings(mod=ksclient_v2, |
3910 | - ident=v2, |
3911 | - arg_set=PASSWORD_V2), |
3912 | - 3: Settings(mod=ksclient_v3, |
3913 | - ident=v3, |
3914 | - arg_set=PASSWORD_V3)} |
3915 | +PASSWORD_V2 = ( |
3916 | + "auth_url", |
3917 | + "username", |
3918 | + "password", |
3919 | + "user_id", |
3920 | + "trust_id", |
3921 | + "tenant_id", |
3922 | + "tenant_name", |
3923 | + "reauthenticate", |
3924 | +) |
3925 | +PASSWORD_V3 = ( |
3926 | + "auth_url", |
3927 | + "password", |
3928 | + "username", |
3929 | + "user_id", |
3930 | + "user_domain_id", |
3931 | + "user_domain_name", |
3932 | + "trust_id", |
3933 | + "system_scope", |
3934 | + "domain_id", |
3935 | + "domain_name", |
3936 | + "project_id", |
3937 | + "project_name", |
3938 | + "project_domain_id", |
3939 | + "project_domain_name", |
3940 | + "reauthenticate", |
3941 | +) |
3942 | +SESSION_ARGS = ( |
3943 | + "cert", |
3944 | + "timeout", |
3945 | + "verify", |
3946 | + "original_ip", |
3947 | + "redirect", |
3948 | + "addition_headers", |
3949 | + "app_name", |
3950 | + "app_version", |
3951 | + "additional_user_agent", |
3952 | + "discovery_cache", |
3953 | + "split_loggers", |
3954 | + "collect_timing", |
3955 | +) |
3956 | + |
3957 | + |
3958 | +Settings = collections.namedtuple("Settings", "mod ident arg_set") |
3959 | +KS_VERSION_RESOLVER = { |
3960 | + 2: Settings(mod=ksclient_v2, ident=v2, arg_set=PASSWORD_V2), |
3961 | + 3: Settings(mod=ksclient_v3, ident=v3, arg_set=PASSWORD_V3), |
3962 | +} |
3963 | |
3964 | |
3965 | def load_keystone_creds(**kwargs): |
3966 | @@ -87,32 +134,38 @@ def load_keystone_creds(**kwargs): |
3967 | # take off 'os_' |
3968 | ret[short] = os.environ[name] |
3969 | |
3970 | - if 'insecure' in ret: |
3971 | - if isinstance(ret['insecure'], str): |
3972 | - ret['insecure'] = (ret['insecure'].lower() not in |
3973 | - ("", "0", "no", "off", 'false')) |
3974 | + if "insecure" in ret: |
3975 | + if isinstance(ret["insecure"], str): |
3976 | + ret["insecure"] = ret["insecure"].lower() not in ( |
3977 | + "", |
3978 | + "0", |
3979 | + "no", |
3980 | + "off", |
3981 | + "false", |
3982 | + ) |
3983 | else: |
3984 | - ret['insecure'] = bool(ret['insecure']) |
3985 | + ret["insecure"] = bool(ret["insecure"]) |
3986 | |
3987 | # verify is the key that is used by requests, and thus the Session object. |
3988 | # i.e. verify is either False or a certificate path or file. |
3989 | - if not ret.get('insecure', False) and 'cacert' in ret: |
3990 | - ret['verify'] = ret['cacert'] |
3991 | + if not ret.get("insecure", False) and "cacert" in ret: |
3992 | + ret["verify"] = ret["cacert"] |
3993 | |
3994 | missing = [] |
3995 | - for req in ('username', 'auth_url'): |
3996 | + for req in ("username", "auth_url"): |
3997 | if not ret.get(req, None): |
3998 | missing.append(req) |
3999 | |
4000 | - if not (ret.get('auth_token') or ret.get('password')): |
4001 | + if not (ret.get("auth_token") or ret.get("password")): |
4002 | missing.append("(auth_token or password)") |
4003 | |
4004 | - api_version = get_ks_api_version(ret.get('auth_url', '')) or 2 |
4005 | - if (api_version == 2 and |
4006 | - not (ret.get('tenant_id') or ret.get('tenant_name'))): |
4007 | + api_version = get_ks_api_version(ret.get("auth_url", "")) or 2 |
4008 | + if api_version == 2 and not ( |
4009 | + ret.get("tenant_id") or ret.get("tenant_name") |
4010 | + ): |
4011 | missing.append("(tenant_id or tenant_name)") |
4012 | if api_version == 3: |
4013 | - for k in ('user_domain_name', 'project_domain_name', 'project_name'): |
4014 | + for k in ("user_domain_name", "project_domain_name", "project_name"): |
4015 | if not ret.get(k, None): |
4016 | missing.append(k) |
4017 | |
4018 | @@ -124,8 +177,8 @@ def load_keystone_creds(**kwargs): |
4019 | |
4020 | def get_regions(client=None, services=None, kscreds=None): |
4021 | # if kscreds had 'region_name', then return that |
4022 | - if kscreds and kscreds.get('region_name'): |
4023 | - return [kscreds.get('region_name')] |
4024 | + if kscreds and kscreds.get("region_name"): |
4025 | + return [kscreds.get("region_name")] |
4026 | |
4027 | if client is None: |
4028 | creds = kscreds |
4029 | @@ -139,7 +192,7 @@ def get_regions(client=None, services=None, kscreds=None): |
4030 | regions = set() |
4031 | for service in services: |
4032 | for r in endpoints.get(service, {}): |
4033 | - regions.add(r['region']) |
4034 | + regions.add(r["region"]) |
4035 | |
4036 | return list(regions) |
4037 | |
4038 | @@ -153,15 +206,15 @@ def get_ks_api_version(auth_url=None, env=None): |
4039 | if env is None: |
4040 | env = os.environ |
4041 | |
4042 | - if env.get('OS_IDENTITY_API_VERSION'): |
4043 | - return int(env['OS_IDENTITY_API_VERSION']) |
4044 | + if env.get("OS_IDENTITY_API_VERSION"): |
4045 | + return int(env["OS_IDENTITY_API_VERSION"]) |
4046 | |
4047 | if auth_url is None: |
4048 | auth_url = "" |
4049 | |
4050 | - if auth_url.endswith('/v3'): |
4051 | + if auth_url.endswith("/v3"): |
4052 | return 3 |
4053 | - elif auth_url.endswith('/v2.0'): |
4054 | + elif auth_url.endswith("/v2.0"): |
4055 | return 2 |
4056 | # Return None if we can't determine the keystone version |
4057 | return None |
4058 | @@ -178,20 +231,20 @@ def get_ksclient(**kwargs): |
4059 | if _LEGACY_CLIENTS: |
4060 | return _legacy_ksclient(**kwargs) |
4061 | |
4062 | - api_version = get_ks_api_version(kwargs.get('auth_url', '')) or 2 |
4063 | + api_version = get_ks_api_version(kwargs.get("auth_url", "")) or 2 |
4064 | arg_set = KS_VERSION_RESOLVER[api_version].arg_set |
4065 | # Filter/select the args for the api version from the kwargs dictionary |
4066 | kskw = {k: v for k, v in kwargs.items() if k in arg_set} |
4067 | auth = KS_VERSION_RESOLVER[api_version].ident.Password(**kskw) |
4068 | authkw = {k: v for k, v in kwargs.items() if k in SESSION_ARGS} |
4069 | - authkw['auth'] = auth |
4070 | + authkw["auth"] = auth |
4071 | sess = session.Session(**authkw) |
4072 | client = KS_VERSION_RESOLVER[api_version].mod.Client(session=sess) |
4073 | client.auth_ref = auth.get_access(sess) |
4074 | return client |
4075 | |
4076 | |
4077 | -def get_service_conn_info(service='image', client=None, **kwargs): |
4078 | +def get_service_conn_info(service="image", client=None, **kwargs): |
4079 | # return a dict with token, insecure, cacert, endpoint |
4080 | if not client: |
4081 | client = get_ksclient(**kwargs) |
4082 | @@ -199,16 +252,23 @@ def get_service_conn_info(service='image', client=None, **kwargs): |
4083 | endpoint = _get_endpoint(client, service, **kwargs) |
4084 | # Session client does not have tenant_id set at client.tenant_id |
4085 | # If client.tenant_id not set use method to get it |
4086 | - tenant_id = (client.tenant_id or client.get_project_id(client.session) or |
4087 | - client.auth.client.get_project_id()) |
4088 | - info = {'token': client.auth_token, 'insecure': kwargs.get('insecure'), |
4089 | - 'cacert': kwargs.get('cacert'), 'endpoint': endpoint, |
4090 | - 'tenant_id': tenant_id} |
4091 | + tenant_id = ( |
4092 | + client.tenant_id |
4093 | + or client.get_project_id(client.session) |
4094 | + or client.auth.client.get_project_id() |
4095 | + ) |
4096 | + info = { |
4097 | + "token": client.auth_token, |
4098 | + "insecure": kwargs.get("insecure"), |
4099 | + "cacert": kwargs.get("cacert"), |
4100 | + "endpoint": endpoint, |
4101 | + "tenant_id": tenant_id, |
4102 | + } |
4103 | if not _LEGACY_CLIENTS: |
4104 | - info['session'] = client.session |
4105 | - info['glance_version'] = '2' |
4106 | + info["session"] = client.session |
4107 | + info["glance_version"] = "2" |
4108 | else: |
4109 | - info['glance_version'] = '1' |
4110 | + info["glance_version"] = "1" |
4111 | |
4112 | return info |
4113 | |
4114 | @@ -216,12 +276,12 @@ def get_service_conn_info(service='image', client=None, **kwargs): |
4115 | def _get_endpoint(client, service, **kwargs): |
4116 | """Get an endpoint using the provided keystone client.""" |
4117 | endpoint_kwargs = { |
4118 | - 'service_type': service, |
4119 | - 'interface': kwargs.get('endpoint_type') or 'publicURL', |
4120 | - 'region_name': kwargs.get('region_name'), |
4121 | + "service_type": service, |
4122 | + "interface": kwargs.get("endpoint_type") or "publicURL", |
4123 | + "region_name": kwargs.get("region_name"), |
4124 | } |
4125 | if _LEGACY_CLIENTS: |
4126 | - del endpoint_kwargs['interface'] |
4127 | + del endpoint_kwargs["interface"] |
4128 | |
4129 | endpoint = client.service_catalog.url_for(**endpoint_kwargs) |
4130 | return endpoint |
4131 | diff --git a/simplestreams/util.py b/simplestreams/util.py |
4132 | index 9866893..ebfe741 100644 |
4133 | --- a/simplestreams/util.py |
4134 | +++ b/simplestreams/util.py |
4135 | @@ -16,15 +16,15 @@ |
4136 | # along with Simplestreams. If not, see <http://www.gnu.org/licenses/>. |
4137 | |
4138 | import errno |
4139 | +import json |
4140 | import os |
4141 | import re |
4142 | import subprocess |
4143 | import tempfile |
4144 | import time |
4145 | -import json |
4146 | |
4147 | -import simplestreams.contentsource as cs |
4148 | import simplestreams.checksum_util as checksum_util |
4149 | +import simplestreams.contentsource as cs |
4150 | from simplestreams.log import LOG |
4151 | |
4152 | ALIASNAME = "_aliases" |
4153 | @@ -35,7 +35,7 @@ PGP_SIGNATURE_FOOTER = "-----END PGP SIGNATURE-----" |
4154 | |
4155 | _UNSET = object() |
4156 | |
4157 | -READ_SIZE = (1024 * 10) |
4158 | +READ_SIZE = 1024 * 10 |
4159 | |
4160 | PRODUCTS_TREE_DATA = ( |
4161 | ("products", "product_name"), |
4162 | @@ -84,7 +84,7 @@ def products_exdata(tree, pedigree, include_top=True, insert_fieldnames=True): |
4163 | if include_top and tree: |
4164 | exdata.update(stringitems(tree)) |
4165 | clevel = tree |
4166 | - for (n, key) in enumerate(pedigree): |
4167 | + for n, key in enumerate(pedigree): |
4168 | dictname, fieldname = harchy[n] |
4169 | clevel = clevel.get(dictname, {}).get(key, {}) |
4170 | exdata.update(stringitems(clevel)) |
4171 | @@ -131,48 +131,54 @@ def products_del(tree, pedigree): |
4172 | |
4173 | |
4174 | def products_prune(tree, preserve_empty_products=False): |
4175 | - for prodname in list(tree.get('products', {}).keys()): |
4176 | - keys = list(tree['products'][prodname].get('versions', {}).keys()) |
4177 | + for prodname in list(tree.get("products", {}).keys()): |
4178 | + keys = list(tree["products"][prodname].get("versions", {}).keys()) |
4179 | for vername in keys: |
4180 | - vtree = tree['products'][prodname]['versions'][vername] |
4181 | - for itemname in list(vtree.get('items', {}).keys()): |
4182 | - if not vtree['items'][itemname]: |
4183 | - del vtree['items'][itemname] |
4184 | - |
4185 | - if 'items' not in vtree or not vtree['items']: |
4186 | - del tree['products'][prodname]['versions'][vername] |
4187 | - |
4188 | - if ('versions' not in tree['products'][prodname] or |
4189 | - not tree['products'][prodname]['versions']): |
4190 | - del tree['products'][prodname] |
4191 | - |
4192 | - if (not preserve_empty_products and 'products' in tree and |
4193 | - not tree['products']): |
4194 | - del tree['products'] |
4195 | - |
4196 | - |
4197 | -def walk_products(tree, cb_product=None, cb_version=None, cb_item=None, |
4198 | - ret_finished=_UNSET): |
4199 | + vtree = tree["products"][prodname]["versions"][vername] |
4200 | + for itemname in list(vtree.get("items", {}).keys()): |
4201 | + if not vtree["items"][itemname]: |
4202 | + del vtree["items"][itemname] |
4203 | + |
4204 | + if "items" not in vtree or not vtree["items"]: |
4205 | + del tree["products"][prodname]["versions"][vername] |
4206 | + |
4207 | + if ( |
4208 | + "versions" not in tree["products"][prodname] |
4209 | + or not tree["products"][prodname]["versions"] |
4210 | + ): |
4211 | + del tree["products"][prodname] |
4212 | + |
4213 | + if ( |
4214 | + not preserve_empty_products |
4215 | + and "products" in tree |
4216 | + and not tree["products"] |
4217 | + ): |
4218 | + del tree["products"] |
4219 | + |
4220 | + |
4221 | +def walk_products( |
4222 | + tree, cb_product=None, cb_version=None, cb_item=None, ret_finished=_UNSET |
4223 | +): |
4224 | # walk a product tree. callbacks are called with (item, tree, (pedigree)) |
4225 | - for prodname, proddata in tree['products'].items(): |
4226 | + for prodname, proddata in tree["products"].items(): |
4227 | if cb_product: |
4228 | ret = cb_product(proddata, tree, (prodname,)) |
4229 | if ret_finished != _UNSET and ret == ret_finished: |
4230 | return |
4231 | |
4232 | - if (not cb_version and not cb_item) or 'versions' not in proddata: |
4233 | + if (not cb_version and not cb_item) or "versions" not in proddata: |
4234 | continue |
4235 | |
4236 | - for vername, verdata in proddata['versions'].items(): |
4237 | + for vername, verdata in proddata["versions"].items(): |
4238 | if cb_version: |
4239 | ret = cb_version(verdata, tree, (prodname, vername)) |
4240 | if ret_finished != _UNSET and ret == ret_finished: |
4241 | return |
4242 | |
4243 | - if not cb_item or 'items' not in verdata: |
4244 | + if not cb_item or "items" not in verdata: |
4245 | continue |
4246 | |
4247 | - for itemname, itemdata in verdata['items'].items(): |
4248 | + for itemname, itemdata in verdata["items"].items(): |
4249 | ret = cb_item(itemdata, tree, (prodname, vername, itemname)) |
4250 | if ret_finished != _UNSET and ret == ret_finished: |
4251 | return |
4252 | @@ -205,8 +211,9 @@ def expand_data(data, refs=None, delete=False): |
4253 | expand_data(item, refs) |
4254 | |
4255 | |
4256 | -def resolve_work(src, target, maxnum=None, keep=False, itemfilter=None, |
4257 | - sort_reverse=True): |
4258 | +def resolve_work( |
4259 | + src, target, maxnum=None, keep=False, itemfilter=None, sort_reverse=True |
4260 | +): |
4261 | # if more than maxnum items are in src, only the most recent maxnum will be |
4262 | # stored in target. If keep is true, then the most recent maxnum items |
4263 | # will be kept in target even if they are no longer in src. |
4264 | @@ -241,8 +248,9 @@ def resolve_work(src, target, maxnum=None, keep=False, itemfilter=None, |
4265 | while len(remove) and (maxnum > (after_add - len(remove))): |
4266 | remove.pop(0) |
4267 | |
4268 | - mtarget = sorted([f for f in target + add if f not in remove], |
4269 | - reverse=reverse) |
4270 | + mtarget = sorted( |
4271 | + [f for f in target + add if f not in remove], reverse=reverse |
4272 | + ) |
4273 | if maxnum is not None and len(mtarget) > maxnum: |
4274 | for item in mtarget[maxnum:]: |
4275 | if item in target: |
4276 | @@ -263,12 +271,12 @@ def has_gpgv(): |
4277 | if _HAS_GPGV is not None: |
4278 | return _HAS_GPGV |
4279 | |
4280 | - if which('gpgv'): |
4281 | + if which("gpgv"): |
4282 | try: |
4283 | env = os.environ.copy() |
4284 | - env['LANG'] = 'C' |
4285 | + env["LANG"] = "C" |
4286 | out, err = subp(["gpgv", "--help"], capture=True, env=env) |
4287 | - _HAS_GPGV = 'gnupg' in out.lower() or 'gnupg' in err.lower() |
4288 | + _HAS_GPGV = "gnupg" in out.lower() or "gnupg" in err.lower() |
4289 | except subprocess.CalledProcessError: |
4290 | _HAS_GPGV = False |
4291 | else: |
4292 | @@ -292,11 +300,13 @@ def read_signed(content, keyring=None, checked=True): |
4293 | try: |
4294 | subp(cmd, data=content) |
4295 | except subprocess.CalledProcessError as e: |
4296 | - LOG.debug("failed: %s\n out=%s\n err=%s" % |
4297 | - (' '.join(cmd), e.output[0], e.output[1])) |
4298 | + LOG.debug( |
4299 | + "failed: %s\n out=%s\n err=%s" |
4300 | + % (" ".join(cmd), e.output[0], e.output[1]) |
4301 | + ) |
4302 | raise e |
4303 | |
4304 | - ret = {'body': [], 'signature': [], 'garbage': []} |
4305 | + ret = {"body": [], "signature": [], "garbage": []} |
4306 | lines = content.splitlines() |
4307 | i = 0 |
4308 | for i in range(0, len(lines)): |
4309 | @@ -320,21 +330,22 @@ def read_signed(content, keyring=None, checked=True): |
4310 | else: |
4311 | ret[mode].append(lines[i]) |
4312 | |
4313 | - ret['body'].append('') # need empty line at end |
4314 | - return "\n".join(ret['body']) |
4315 | + ret["body"].append("") # need empty line at end |
4316 | + return "\n".join(ret["body"]) |
4317 | else: |
4318 | raise SignatureMissingException("No signature found!") |
4319 | |
4320 | |
4321 | def load_content(content): |
4322 | if isinstance(content, bytes): |
4323 | - content = content.decode('utf-8') |
4324 | + content = content.decode("utf-8") |
4325 | return json.loads(content) |
4326 | |
4327 | |
4328 | def dump_data(data): |
4329 | - return json.dumps(data, indent=1, sort_keys=True, |
4330 | - separators=(',', ': ')).encode('utf-8') |
4331 | + return json.dumps( |
4332 | + data, indent=1, sort_keys=True, separators=(",", ": ") |
4333 | + ).encode("utf-8") |
4334 | |
4335 | |
4336 | def timestamp(ts=None): |
4337 | @@ -375,24 +386,26 @@ def move_dups(src, target, sticky=None): |
4338 | target.update(updates) |
4339 | |
4340 | |
4341 | -def products_condense(ptree, sticky=None, top='versions'): |
4342 | +def products_condense(ptree, sticky=None, top="versions"): |
4343 | # walk a products tree, copying up item keys as far as they'll go |
4344 | # only move items to a sibling of the 'top'. |
4345 | |
4346 | - if top not in ('versions', 'products'): |
4347 | - raise ValueError("'top' must be one of: %s" % |
4348 | - ','.join(PRODUCTS_TREE_HIERARCHY)) |
4349 | + if top not in ("versions", "products"): |
4350 | + raise ValueError( |
4351 | + "'top' must be one of: %s" % ",".join(PRODUCTS_TREE_HIERARCHY) |
4352 | + ) |
4353 | |
4354 | def call_move_dups(cur, _tree, pedigree): |
4355 | - (_mtype, stname) = (("product", "versions"), |
4356 | - ("version", "items"))[len(pedigree) - 1] |
4357 | + (_mtype, stname) = (("product", "versions"), ("version", "items"))[ |
4358 | + len(pedigree) - 1 |
4359 | + ] |
4360 | move_dups(cur.get(stname, {}), cur, sticky=sticky) |
4361 | |
4362 | walk_products(ptree, cb_version=call_move_dups) |
4363 | walk_products(ptree, cb_product=call_move_dups) |
4364 | - if top == 'versions': |
4365 | + if top == "versions": |
4366 | return |
4367 | - move_dups(ptree['products'], ptree) |
4368 | + move_dups(ptree["products"], ptree) |
4369 | |
4370 | |
4371 | def assert_safe_path(path): |
4372 | @@ -449,16 +462,22 @@ def subp(args, data=None, capture=True, shell=False, env=None): |
4373 | else: |
4374 | stdout, stderr = (subprocess.PIPE, subprocess.PIPE) |
4375 | |
4376 | - sp = subprocess.Popen(args, stdout=stdout, stderr=stderr, |
4377 | - stdin=subprocess.PIPE, shell=shell, env=env) |
4378 | + sp = subprocess.Popen( |
4379 | + args, |
4380 | + stdout=stdout, |
4381 | + stderr=stderr, |
4382 | + stdin=subprocess.PIPE, |
4383 | + shell=shell, |
4384 | + env=env, |
4385 | + ) |
4386 | if isinstance(data, str): |
4387 | - data = data.encode('utf-8') |
4388 | + data = data.encode("utf-8") |
4389 | |
4390 | (out, err) = sp.communicate(data) |
4391 | if isinstance(out, bytes): |
4392 | - out = out.decode('utf-8') |
4393 | + out = out.decode("utf-8") |
4394 | if isinstance(err, bytes): |
4395 | - err = err.decode('utf-8') |
4396 | + err = err.decode("utf-8") |
4397 | |
4398 | rc = sp.returncode |
4399 | if rc != 0: |
4400 | @@ -468,22 +487,22 @@ def subp(args, data=None, capture=True, shell=False, env=None): |
4401 | |
4402 | |
4403 | def get_sign_cmd(path, output=None, inline=False): |
4404 | - cmd = ['gpg'] |
4405 | - defkey = os.environ.get('SS_GPG_DEFAULT_KEY') |
4406 | + cmd = ["gpg"] |
4407 | + defkey = os.environ.get("SS_GPG_DEFAULT_KEY") |
4408 | if defkey: |
4409 | - cmd.extend(['--default-key', defkey]) |
4410 | + cmd.extend(["--default-key", defkey]) |
4411 | |
4412 | - batch = os.environ.get('SS_GPG_BATCH', "1").lower() |
4413 | + batch = os.environ.get("SS_GPG_BATCH", "1").lower() |
4414 | if batch not in ("0", "false"): |
4415 | - cmd.append('--batch') |
4416 | + cmd.append("--batch") |
4417 | |
4418 | if output: |
4419 | - cmd.extend(['--output', output]) |
4420 | + cmd.extend(["--output", output]) |
4421 | |
4422 | if inline: |
4423 | - cmd.append('--clearsign') |
4424 | + cmd.append("--clearsign") |
4425 | else: |
4426 | - cmd.extend(['--armor', '--detach-sign']) |
4427 | + cmd.extend(["--armor", "--detach-sign"]) |
4428 | |
4429 | cmd.extend([path]) |
4430 | return cmd |
4431 | @@ -498,17 +517,17 @@ def make_signed_content_paths(content): |
4432 | if data.get("format") != "index:1.0": |
4433 | return (False, None) |
4434 | |
4435 | - for content_ent in list(data.get('index', {}).values()): |
4436 | - path = content_ent.get('path') |
4437 | + for content_ent in list(data.get("index", {}).values()): |
4438 | + path = content_ent.get("path") |
4439 | if path.endswith(".json"): |
4440 | - content_ent['path'] = signed_fname(path, inline=True) |
4441 | + content_ent["path"] = signed_fname(path, inline=True) |
4442 | |
4443 | return (True, json.dumps(data, indent=1)) |
4444 | |
4445 | |
4446 | def signed_fname(fname, inline=True): |
4447 | if inline: |
4448 | - sfname = fname[0:-len(".json")] + ".sjson" |
4449 | + sfname = fname[0 : -len(".json")] + ".sjson" |
4450 | else: |
4451 | sfname = fname + ".gpg" |
4452 | |
4453 | @@ -555,8 +574,10 @@ def sign_file(fname, inline=True, outfile=None): |
4454 | |
4455 | def sign_content(content, outfile="-", inline=True): |
4456 | rm_f_file(outfile, skip=["-"]) |
4457 | - return subp(args=get_sign_cmd(path="-", output=outfile, inline=inline), |
4458 | - data=content)[0] |
4459 | + return subp( |
4460 | + args=get_sign_cmd(path="-", output=outfile, inline=inline), |
4461 | + data=content, |
4462 | + )[0] |
4463 | |
4464 | |
4465 | def path_from_mirror_url(mirror, path): |
4466 | @@ -566,8 +587,8 @@ def path_from_mirror_url(mirror, path): |
4467 | path_regex = "streams/v1/.*[.](sjson|json)$" |
4468 | result = re.search(path_regex, mirror) |
4469 | if result: |
4470 | - path = mirror[result.start():] |
4471 | - mirror = mirror[:result.start()] |
4472 | + path = mirror[result.start() :] |
4473 | + mirror = mirror[: result.start()] |
4474 | else: |
4475 | path = "streams/v1/index.sjson" |
4476 | |
4477 | @@ -589,21 +610,21 @@ class ProgressAggregator(object): |
4478 | self.total_written = 0 |
4479 | |
4480 | def progress_callback(self, progress): |
4481 | - if self.current_file != progress['name']: |
4482 | + if self.current_file != progress["name"]: |
4483 | if self.remaining_items and self.current_file is not None: |
4484 | del self.remaining_items[self.current_file] |
4485 | - self.current_file = progress['name'] |
4486 | + self.current_file = progress["name"] |
4487 | self.last_emitted = 0 |
4488 | self.current_written = 0 |
4489 | |
4490 | - size = float(progress['size']) |
4491 | - written = float(progress['written']) |
4492 | + size = float(progress["size"]) |
4493 | + written = float(progress["written"]) |
4494 | self.current_written += written |
4495 | self.total_written += written |
4496 | interval = self.current_written - self.last_emitted |
4497 | if interval > size / 100: |
4498 | self.last_emitted = self.current_written |
4499 | - progress['written'] = self.current_written |
4500 | + progress["written"] = self.current_written |
4501 | self.emit(progress) |
4502 | |
4503 | def emit(self, progress): |
4504 | diff --git a/tests/httpserver.py b/tests/httpserver.py |
4505 | index e88eb56..10bde9a 100644 |
4506 | --- a/tests/httpserver.py |
4507 | +++ b/tests/httpserver.py |
4508 | @@ -1,50 +1,61 @@ |
4509 | #!/usr/bin/env python |
4510 | import os |
4511 | import sys |
4512 | + |
4513 | if sys.version_info.major == 2: |
4514 | - from SimpleHTTPServer import SimpleHTTPRequestHandler |
4515 | from BaseHTTPServer import HTTPServer |
4516 | + from SimpleHTTPServer import SimpleHTTPRequestHandler |
4517 | else: |
4518 | - from http.server import SimpleHTTPRequestHandler |
4519 | - from http.server import HTTPServer |
4520 | + from http.server import HTTPServer, SimpleHTTPRequestHandler |
4521 | |
4522 | |
4523 | class LoggingHTTPRequestHandler(SimpleHTTPRequestHandler): |
4524 | - def log_request(self, code='-', size='-'): |
4525 | + def log_request(self, code="-", size="-"): |
4526 | """ |
4527 | Log an accepted request along with user-agent string. |
4528 | """ |
4529 | |
4530 | user_agent = self.headers.get("user-agent") |
4531 | - self.log_message('"%s" %s %s (%s)', |
4532 | - self.requestline, str(code), str(size), user_agent) |
4533 | + self.log_message( |
4534 | + '"%s" %s %s (%s)', |
4535 | + self.requestline, |
4536 | + str(code), |
4537 | + str(size), |
4538 | + user_agent, |
4539 | + ) |
4540 | |
4541 | |
4542 | -def run(address, port, |
4543 | - HandlerClass=LoggingHTTPRequestHandler, ServerClass=HTTPServer): |
4544 | +def run( |
4545 | + address, |
4546 | + port, |
4547 | + HandlerClass=LoggingHTTPRequestHandler, |
4548 | + ServerClass=HTTPServer, |
4549 | +): |
4550 | try: |
4551 | server = ServerClass((address, port), HandlerClass) |
4552 | address, port = server.socket.getsockname() |
4553 | - sys.stderr.write("Serving HTTP: %s %s %s\n" % |
4554 | - (address, port, os.getcwd())) |
4555 | + sys.stderr.write( |
4556 | + "Serving HTTP: %s %s %s\n" % (address, port, os.getcwd()) |
4557 | + ) |
4558 | server.serve_forever() |
4559 | except KeyboardInterrupt: |
4560 | server.socket.close() |
4561 | |
4562 | |
4563 | -if __name__ == '__main__': |
4564 | +if __name__ == "__main__": |
4565 | import sys |
4566 | + |
4567 | if len(sys.argv) == 3: |
4568 | # 2 args: address and port |
4569 | address = sys.argv[1] |
4570 | port = int(sys.argv[2]) |
4571 | elif len(sys.argv) == 2: |
4572 | # 1 arg: port |
4573 | - address = '0.0.0.0' |
4574 | + address = "0.0.0.0" |
4575 | port = int(sys.argv[1]) |
4576 | elif len(sys.argv) == 1: |
4577 | # no args random port (port=0) |
4578 | - address = '0.0.0.0' |
4579 | + address = "0.0.0.0" |
4580 | port = 0 |
4581 | else: |
4582 | sys.stderr.write("Expect [address] [port]\n") |
4583 | diff --git a/tests/testutil.py b/tests/testutil.py |
4584 | index 2c89e3a..2816366 100644 |
4585 | --- a/tests/testutil.py |
4586 | +++ b/tests/testutil.py |
4587 | @@ -1,10 +1,10 @@ |
4588 | import os |
4589 | -from simplestreams import objectstores |
4590 | -from simplestreams import mirrors |
4591 | |
4592 | +from simplestreams import mirrors, objectstores |
4593 | |
4594 | -EXAMPLES_DIR = os.path.abspath(os.path.join(os.path.dirname(__file__), |
4595 | - "..", "examples")) |
4596 | +EXAMPLES_DIR = os.path.abspath( |
4597 | + os.path.join(os.path.dirname(__file__), "..", "examples") |
4598 | +) |
4599 | |
4600 | |
4601 | def get_mirror_reader(name, docdir=None, signed=False): |
4602 | @@ -20,4 +20,5 @@ def get_mirror_reader(name, docdir=None, signed=False): |
4603 | kwargs = {} if signed else {"policy": policy} |
4604 | return mirrors.ObjectStoreMirrorReader(sstore, **kwargs) |
4605 | |
4606 | + |
4607 | # vi: ts=4 expandtab syntax=python |
4608 | diff --git a/tests/unittests/test_badmirrors.py b/tests/unittests/test_badmirrors.py |
4609 | index 6c5546f..d640e28 100644 |
4610 | --- a/tests/unittests/test_badmirrors.py |
4611 | +++ b/tests/unittests/test_badmirrors.py |
4612 | @@ -1,11 +1,12 @@ |
4613 | from unittest import TestCase |
4614 | -from tests.testutil import get_mirror_reader |
4615 | + |
4616 | +from simplestreams import checksum_util, mirrors, util |
4617 | from simplestreams.mirrors import ( |
4618 | - ObjectStoreMirrorWriter, ObjectStoreMirrorReader) |
4619 | + ObjectStoreMirrorReader, |
4620 | + ObjectStoreMirrorWriter, |
4621 | +) |
4622 | from simplestreams.objectstores import MemoryObjectStore |
4623 | -from simplestreams import util |
4624 | -from simplestreams import checksum_util |
4625 | -from simplestreams import mirrors |
4626 | +from tests.testutil import get_mirror_reader |
4627 | |
4628 | |
4629 | class TestBadDataSources(TestCase): |
4630 | @@ -19,7 +20,8 @@ class TestBadDataSources(TestCase): |
4631 | def setUp(self): |
4632 | self.src = self.get_clean_src(self.example, path=self.dlpath) |
4633 | self.target = ObjectStoreMirrorWriter( |
4634 | - config={}, objectstore=MemoryObjectStore()) |
4635 | + config={}, objectstore=MemoryObjectStore() |
4636 | + ) |
4637 | |
4638 | def get_clean_src(self, exname, path): |
4639 | good_src = get_mirror_reader(exname) |
4640 | @@ -34,7 +36,8 @@ class TestBadDataSources(TestCase): |
4641 | del objectstore.data[k] |
4642 | |
4643 | return ObjectStoreMirrorReader( |
4644 | - objectstore=objectstore, policy=lambda content, path: content) |
4645 | + objectstore=objectstore, policy=lambda content, path: content |
4646 | + ) |
4647 | |
4648 | def test_sanity_valid(self): |
4649 | # verify that the tests are fine on expected pass |
4650 | @@ -43,50 +46,77 @@ class TestBadDataSources(TestCase): |
4651 | |
4652 | def test_larger_size_causes_bad_checksum(self): |
4653 | def size_plus_1(item): |
4654 | - item['size'] = int(item['size']) + 1 |
4655 | + item["size"] = int(item["size"]) + 1 |
4656 | return item |
4657 | |
4658 | _moditem(self.src, self.dlpath, self.pedigree, size_plus_1) |
4659 | - self.assertRaises(checksum_util.InvalidChecksum, |
4660 | - self.target.sync, self.src, self.dlpath) |
4661 | + self.assertRaises( |
4662 | + checksum_util.InvalidChecksum, |
4663 | + self.target.sync, |
4664 | + self.src, |
4665 | + self.dlpath, |
4666 | + ) |
4667 | |
4668 | def test_smaller_size_causes_bad_checksum(self): |
4669 | def size_minus_1(item): |
4670 | - item['size'] = int(item['size']) - 1 |
4671 | + item["size"] = int(item["size"]) - 1 |
4672 | return item |
4673 | + |
4674 | _moditem(self.src, self.dlpath, self.pedigree, size_minus_1) |
4675 | - self.assertRaises(checksum_util.InvalidChecksum, |
4676 | - self.target.sync, self.src, self.dlpath) |
4677 | + self.assertRaises( |
4678 | + checksum_util.InvalidChecksum, |
4679 | + self.target.sync, |
4680 | + self.src, |
4681 | + self.dlpath, |
4682 | + ) |
4683 | |
4684 | def test_too_much_content_causes_bad_checksum(self): |
4685 | self.src.objectstore.data[self.item_path] += b"extra" |
4686 | - self.assertRaises(checksum_util.InvalidChecksum, |
4687 | - self.target.sync, self.src, self.dlpath) |
4688 | + self.assertRaises( |
4689 | + checksum_util.InvalidChecksum, |
4690 | + self.target.sync, |
4691 | + self.src, |
4692 | + self.dlpath, |
4693 | + ) |
4694 | |
4695 | def test_too_little_content_causes_bad_checksum(self): |
4696 | orig = self.src.objectstore.data[self.item_path] |
4697 | self.src.objectstore.data[self.item_path] = orig[0:-1] |
4698 | - self.assertRaises(checksum_util.InvalidChecksum, |
4699 | - self.target.sync, self.src, self.dlpath) |
4700 | + self.assertRaises( |
4701 | + checksum_util.InvalidChecksum, |
4702 | + self.target.sync, |
4703 | + self.src, |
4704 | + self.dlpath, |
4705 | + ) |
4706 | |
4707 | def test_busted_checksum_causes_bad_checksum(self): |
4708 | def break_checksum(item): |
4709 | chars = "0123456789abcdef" |
4710 | - orig = item['sha256'] |
4711 | - item['sha256'] = ''.join( |
4712 | - [chars[(chars.find(c) + 1) % len(chars)] for c in orig]) |
4713 | + orig = item["sha256"] |
4714 | + item["sha256"] = "".join( |
4715 | + [chars[(chars.find(c) + 1) % len(chars)] for c in orig] |
4716 | + ) |
4717 | return item |
4718 | |
4719 | _moditem(self.src, self.dlpath, self.pedigree, break_checksum) |
4720 | - self.assertRaises(checksum_util.InvalidChecksum, |
4721 | - self.target.sync, self.src, self.dlpath) |
4722 | + self.assertRaises( |
4723 | + checksum_util.InvalidChecksum, |
4724 | + self.target.sync, |
4725 | + self.src, |
4726 | + self.dlpath, |
4727 | + ) |
4728 | |
4729 | def test_changed_content_causes_bad_checksum(self): |
4730 | # correct size but different content should raise bad checksum |
4731 | - self.src.objectstore.data[self.item_path] = ''.join( |
4732 | - ["x" for c in self.src.objectstore.data[self.item_path]]) |
4733 | - self.assertRaises(checksum_util.InvalidChecksum, |
4734 | - self.target.sync, self.src, self.dlpath) |
4735 | + self.src.objectstore.data[self.item_path] = "".join( |
4736 | + ["x" for c in self.src.objectstore.data[self.item_path]] |
4737 | + ) |
4738 | + self.assertRaises( |
4739 | + checksum_util.InvalidChecksum, |
4740 | + self.target.sync, |
4741 | + self.src, |
4742 | + self.dlpath, |
4743 | + ) |
4744 | |
4745 | def test_no_checksums_cause_bad_checksum(self): |
4746 | def del_checksums(item): |
4747 | @@ -96,32 +126,41 @@ class TestBadDataSources(TestCase): |
4748 | |
4749 | _moditem(self.src, self.dlpath, self.pedigree, del_checksums) |
4750 | with _patched_missing_sum("fail"): |
4751 | - self.assertRaises(checksum_util.InvalidChecksum, |
4752 | - self.target.sync, self.src, self.dlpath) |
4753 | + self.assertRaises( |
4754 | + checksum_util.InvalidChecksum, |
4755 | + self.target.sync, |
4756 | + self.src, |
4757 | + self.dlpath, |
4758 | + ) |
4759 | |
4760 | def test_missing_size_causes_bad_checksum(self): |
4761 | def del_size(item): |
4762 | - del item['size'] |
4763 | + del item["size"] |
4764 | return item |
4765 | |
4766 | _moditem(self.src, self.dlpath, self.pedigree, del_size) |
4767 | with _patched_missing_sum("fail"): |
4768 | - self.assertRaises(checksum_util.InvalidChecksum, |
4769 | - self.target.sync, self.src, self.dlpath) |
4770 | + self.assertRaises( |
4771 | + checksum_util.InvalidChecksum, |
4772 | + self.target.sync, |
4773 | + self.src, |
4774 | + self.dlpath, |
4775 | + ) |
4776 | |
4777 | |
4778 | class _patched_missing_sum(object): |
4779 | """This patches the legacy mode for missing checksum info so |
4780 | that it behaves like the new code path. Thus we can make |
4781 | the test run correctly""" |
4782 | + |
4783 | def __init__(self, mode="fail"): |
4784 | self.mode = mode |
4785 | |
4786 | def __enter__(self): |
4787 | - self.modmcb = getattr(mirrors, '_missing_cksum_behavior', {}) |
4788 | + self.modmcb = getattr(mirrors, "_missing_cksum_behavior", {}) |
4789 | self.orig = self.modmcb.copy() |
4790 | if self.modmcb: |
4791 | - self.modmcb['mode'] = self.mode |
4792 | + self.modmcb["mode"] = self.mode |
4793 | return self |
4794 | |
4795 | def __exit__(self, type, value, traceback): |
4796 | diff --git a/tests/unittests/test_command_hook_mirror.py b/tests/unittests/test_command_hook_mirror.py |
4797 | index 6a72749..e718c9e 100644 |
4798 | --- a/tests/unittests/test_command_hook_mirror.py |
4799 | +++ b/tests/unittests/test_command_hook_mirror.py |
4800 | @@ -1,4 +1,5 @@ |
4801 | from unittest import TestCase |
4802 | + |
4803 | import simplestreams.mirrors.command_hook as chm |
4804 | from tests.testutil import get_mirror_reader |
4805 | |
4806 | @@ -13,12 +14,11 @@ class TestCommandHookMirror(TestCase): |
4807 | self.assertRaises(TypeError, chm.CommandHookMirror, {}) |
4808 | |
4809 | def test_init_with_load_products_works(self): |
4810 | - chm.CommandHookMirror({'load_products': 'true'}) |
4811 | + chm.CommandHookMirror({"load_products": "true"}) |
4812 | |
4813 | def test_stream_load_empty(self): |
4814 | - |
4815 | src = get_mirror_reader("foocloud") |
4816 | - target = chm.CommandHookMirror({'load_products': ['true']}) |
4817 | + target = chm.CommandHookMirror({"load_products": ["true"]}) |
4818 | oruncmd = chm.run_command |
4819 | |
4820 | try: |
4821 | @@ -30,14 +30,16 @@ class TestCommandHookMirror(TestCase): |
4822 | |
4823 | # the 'load_products' should be called once for each content |
4824 | # in the stream. |
4825 | - self.assertEqual(self._run_commands, [['true'], ['true']]) |
4826 | + self.assertEqual(self._run_commands, [["true"], ["true"]]) |
4827 | |
4828 | def test_stream_insert_product(self): |
4829 | - |
4830 | src = get_mirror_reader("foocloud") |
4831 | target = chm.CommandHookMirror( |
4832 | - {'load_products': ['load-products'], |
4833 | - 'insert_products': ['insert-products']}) |
4834 | + { |
4835 | + "load_products": ["load-products"], |
4836 | + "insert_products": ["insert-products"], |
4837 | + } |
4838 | + ) |
4839 | oruncmd = chm.run_command |
4840 | |
4841 | try: |
4842 | @@ -49,15 +51,18 @@ class TestCommandHookMirror(TestCase): |
4843 | |
4844 | # the 'load_products' should be called once for each content |
4845 | # in the stream. same for 'insert-products' |
4846 | - self.assertEqual(len([f for f in self._run_commands |
4847 | - if f == ['load-products']]), 2) |
4848 | - self.assertEqual(len([f for f in self._run_commands |
4849 | - if f == ['insert-products']]), 2) |
4850 | + self.assertEqual( |
4851 | + len([f for f in self._run_commands if f == ["load-products"]]), 2 |
4852 | + ) |
4853 | + self.assertEqual( |
4854 | + len([f for f in self._run_commands if f == ["insert-products"]]), 2 |
4855 | + ) |
4856 | |
4857 | def _run_command(self, cmd, env=None, capture=False, rcs=None): |
4858 | self._run_commands.append(cmd) |
4859 | rc = 0 |
4860 | - output = '' |
4861 | + output = "" |
4862 | return (rc, output) |
4863 | |
4864 | + |
4865 | # vi: ts=4 expandtab syntax=python |
4866 | diff --git a/tests/unittests/test_contentsource.py b/tests/unittests/test_contentsource.py |
4867 | index ef838a2..2b88a15 100644 |
4868 | --- a/tests/unittests/test_contentsource.py |
4869 | +++ b/tests/unittests/test_contentsource.py |
4870 | @@ -2,14 +2,14 @@ import os |
4871 | import shutil |
4872 | import sys |
4873 | import tempfile |
4874 | - |
4875 | -from os.path import join, dirname |
4876 | -from simplestreams import objectstores |
4877 | -from simplestreams import contentsource |
4878 | -from subprocess import Popen, PIPE, STDOUT |
4879 | +from os.path import dirname, join |
4880 | +from subprocess import PIPE, STDOUT, Popen |
4881 | from unittest import TestCase, skipIf |
4882 | + |
4883 | import pytest |
4884 | |
4885 | +from simplestreams import contentsource, objectstores |
4886 | + |
4887 | |
4888 | class RandomPortServer(object): |
4889 | def __init__(self, path): |
4890 | @@ -22,14 +22,15 @@ class RandomPortServer(object): |
4891 | if self.port and self.process: |
4892 | return |
4893 | testserver_path = join( |
4894 | - dirname(__file__), "..", "..", "tests", "httpserver.py") |
4895 | - pre = b'Serving HTTP:' |
4896 | + dirname(__file__), "..", "..", "tests", "httpserver.py" |
4897 | + ) |
4898 | + pre = b"Serving HTTP:" |
4899 | |
4900 | - cmd = [sys.executable, '-u', testserver_path, "0"] |
4901 | + cmd = [sys.executable, "-u", testserver_path, "0"] |
4902 | p = Popen(cmd, cwd=self.path, stdout=PIPE, stderr=STDOUT) |
4903 | line = p.stdout.readline() # pylint: disable=E1101 |
4904 | if line.startswith(pre): |
4905 | - data = line[len(pre):].strip() |
4906 | + data = line[len(pre) :].strip() |
4907 | addr, port_str, cwd = data.decode().split(" ", 2) |
4908 | self.port = int(port_str) |
4909 | self.addr = addr |
4910 | @@ -39,8 +40,9 @@ class RandomPortServer(object): |
4911 | else: |
4912 | p.kill() |
4913 | raise RuntimeError( |
4914 | - "Failed to start server in %s with %s. pid=%s. got: %s" % |
4915 | - (self.path, cmd, self.process, line)) |
4916 | + "Failed to start server in %s with %s. pid=%s. got: %s" |
4917 | + % (self.path, cmd, self.process, line) |
4918 | + ) |
4919 | |
4920 | def read_output(self): |
4921 | return str(self.process.stdout.readline()) |
4922 | @@ -63,13 +65,17 @@ class RandomPortServer(object): |
4923 | if self.process: |
4924 | pid = self.process.pid |
4925 | |
4926 | - return ("RandomPortServer(port=%s, addr=%s, process=%s, path=%s)" % |
4927 | - (self.port, self.addr, pid, self.path)) |
4928 | + return "RandomPortServer(port=%s, addr=%s, process=%s, path=%s)" % ( |
4929 | + self.port, |
4930 | + self.addr, |
4931 | + pid, |
4932 | + self.path, |
4933 | + ) |
4934 | |
4935 | def url_for(self, fpath=""): |
4936 | if self.port is None: |
4937 | raise ValueError("No port available") |
4938 | - return 'http://127.0.0.1:%d/' % self.port + fpath |
4939 | + return "http://127.0.0.1:%d/" % self.port + fpath |
4940 | |
4941 | |
4942 | class BaseDirUsingTestCase(TestCase): |
4943 | @@ -100,7 +106,8 @@ class BaseDirUsingTestCase(TestCase): |
4944 | |
4945 | def getcs(self, path, url_reader=None, rel=None): |
4946 | return contentsource.UrlContentSource( |
4947 | - self.url_for(path, rel=rel), url_reader=url_reader) |
4948 | + self.url_for(path, rel=rel), url_reader=url_reader |
4949 | + ) |
4950 | |
4951 | def path_for(self, fpath, rel=None): |
4952 | # return full path to fpath. |
4953 | @@ -120,8 +127,9 @@ class BaseDirUsingTestCase(TestCase): |
4954 | |
4955 | if not fullpath.startswith(self.tmpd + os.path.sep): |
4956 | raise ValueError( |
4957 | - "%s is not a valid path. Not under tmpdir: %s" % |
4958 | - (fpath, self.tmpd)) |
4959 | + "%s is not a valid path. Not under tmpdir: %s" |
4960 | + % (fpath, self.tmpd) |
4961 | + ) |
4962 | |
4963 | return fullpath |
4964 | |
4965 | @@ -133,17 +141,18 @@ class BaseDirUsingTestCase(TestCase): |
4966 | if not self.server: |
4967 | raise ValueError("No server available, but proto == http") |
4968 | return self.server.url_for( |
4969 | - self.path_for(fpath=fpath, rel=rel)[len(self.tmpd)+1:]) |
4970 | + self.path_for(fpath=fpath, rel=rel)[len(self.tmpd) + 1 :] |
4971 | + ) |
4972 | |
4973 | |
4974 | class TestUrlContentSource(BaseDirUsingTestCase): |
4975 | http = True |
4976 | - fpath = 'foo' |
4977 | - fdata = b'hello world\n' |
4978 | + fpath = "foo" |
4979 | + fdata = b"hello world\n" |
4980 | |
4981 | def setUp(self): |
4982 | super(TestUrlContentSource, self).setUp() |
4983 | - with open(join(self.test_d, self.fpath), 'wb') as f: |
4984 | + with open(join(self.test_d, self.fpath), "wb") as f: |
4985 | f.write(self.fdata) |
4986 | |
4987 | def test_default_url_read_handles_None(self): |
4988 | @@ -171,8 +180,10 @@ class TestUrlContentSource(BaseDirUsingTestCase): |
4989 | |
4990 | @skipIf(contentsource.requests is None, "requests not available") |
4991 | def test_requests_default_timeout(self): |
4992 | - self.assertEqual(contentsource.RequestsUrlReader.timeout, |
4993 | - (contentsource.TIMEOUT, None)) |
4994 | + self.assertEqual( |
4995 | + contentsource.RequestsUrlReader.timeout, |
4996 | + (contentsource.TIMEOUT, None), |
4997 | + ) |
4998 | |
4999 | @skipIf(contentsource.requests is None, "requests not available") |
5000 | def test_requests_url_read_handles_None(self): |
FAILED: Continuous integration, rev:43978e994d50150f5e7b2107bc310ec57ed2efd2
https://jenkins.canonical.com/server-team/job/simplestreams-ci/11/
Executed test runs:
FAILURE: https://jenkins.canonical.com/server-team/job/simplestreams-ci/nodes=metal-amd64/11/
FAILURE: https://jenkins.canonical.com/server-team/job/simplestreams-ci/nodes=metal-ppc64el/11/
FAILURE: https://jenkins.canonical.com/server-team/job/simplestreams-ci/nodes=metal-s390x/11/
Click here to trigger a rebuild:
https://jenkins.canonical.com/server-team/job/simplestreams-ci/11/rebuild