Merge ~ubuntu-docker-images/ubuntu-docker-images/+git/templates:add_script_for_ubuntu into ~ubuntu-docker-images/ubuntu-docker-images/+git/templates:main
| Status: | Merged |
|---|---|
| Merged at revision: | 0c01c112996781987bd31c7b543abffc01fe3226 |
| Proposed branch: | ~ubuntu-docker-images/ubuntu-docker-images/+git/templates:add_script_for_ubuntu |
| Merge into: | ~ubuntu-docker-images/ubuntu-docker-images/+git/templates:main |
| Diff against target: | 587 lines (+557/-1), 3 files modified: .devcontainer/Dockerfile (+1/-1), README.md (+31/-0), generate_ubuntu_yaml.py (+525/-0) |
| Related bugs: | |
| Reviewer | Review Type | Date Requested | Status |
|---|---|---|---|
| Cristovao Cordeiro | Approve | | |
| Samir Akarioh (community) | Approve | | |

Review via email: mp+431915@code.launchpad.net
Commit message
feat: add Ubuntu yaml file generator
This commit adds a script that generates
the ubuntu yaml file
Co-Signed-Off: Samir Akarioh and Cristovao Cordeiro
Description of the change
Samir Akarioh (samiraka) wrote:
by .devcontainer do you mean the Dockerfile?
Cristovao Cordeiro (cjdc) wrote:
> by .devcontainer do you mean the Dockerfile?
yes
Cristovao Cordeiro (cjdc) wrote:
please make the script executable
Samir Akarioh (samiraka) wrote:
no need, it's already installed with renderdown: https:/
Samir Akarioh (samiraka) wrote:
> please make the script executable
What do you mean by that?
Cristovao Cordeiro (cjdc) wrote:
please format the file with black, isort and flake8
Samir Akarioh (samiraka) wrote:
Done
Cristovao Cordeiro (cjdc) wrote (last edit):
> no need, it's already installed with renderdown:
> https:/
and so is boto3
Samir Akarioh (samiraka) wrote:
wouldn't it be better to do an MP on GitHub to add boto3?
Cristovao Cordeiro (cjdc) wrote:
> wouldn't it be better to do an MP on GitHub to add boto3?
does renderdown need it?
Samir Akarioh (samiraka) wrote:
no
Cristovao Cordeiro (cjdc) wrote:
> no
then it shouldn't go there. Let's update the README and .devcontainer in this project instead.
Samir Akarioh (samiraka) wrote:
I updated it
Cristovao Cordeiro (cjdc) wrote:
OK, thanks. Please let me know once the other comments from https:/
Samir Akarioh (samiraka) wrote:
Done
Samir Akarioh (samiraka) wrote:
I reviewed your script; do you want me to make the corrections, or?
Cristovao Cordeiro (cjdc) wrote:
thanks. fixed
Cristovao Cordeiro (cjdc) wrote:
I think you can merge it now Samir
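The dependency discussion above (boto3 added to the .devcontainer, and the runtime libraries the script imports) suggests failing early when an import is going to break. A minimal sketch of such a check, using a hypothetical `check_dependencies` helper that is not part of this proposal:

```python
import importlib.util
from typing import List, Tuple


def check_dependencies(modules: Tuple[str, ...] = ("yaml", "boto3")) -> List[str]:
    """Return the names of modules that cannot be imported.

    Calling this at startup lets the script exit with a clear
    message instead of crashing with an ImportError mid-run.
    """
    return [name for name in modules if importlib.util.find_spec(name) is None]
```

A caller could then do `missing = check_dependencies()` and exit with a pointer to the README when the list is non-empty.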
Preview Diff
1 | diff --git a/.devcontainer/Dockerfile b/.devcontainer/Dockerfile |
2 | index fa24741..3c8c2b3 100644 |
3 | --- a/.devcontainer/Dockerfile |
4 | +++ b/.devcontainer/Dockerfile |
5 | @@ -6,6 +6,6 @@ RUN apt-get update && export DEBIAN_FRONTEND=noninteractive \ |
6 | && apt-get -y install --no-install-recommends python3-mako python3-yaml |
7 | |
8 | RUN cd /usr/bin && git clone https://github.com/valentinviennot/RenderDown \ |
9 | - && pip3 --disable-pip-version-check --no-cache-dir install -r /usr/bin/RenderDown/requirements.txt \ |
10 | + && pip3 --disable-pip-version-check --no-cache-dir install -r /usr/bin/RenderDown/requirements.txt boto3 \ |
11 | && rm -rf /tmp/pip-tmp |
12 | ENV RENDERDOWN /usr/bin/RenderDown/renderdown.py |
13 | diff --git a/README.md b/README.md |
14 | index 0166dea..29d4e38 100644 |
15 | --- a/README.md |
16 | +++ b/README.md |
17 | @@ -15,8 +15,39 @@ The DevContainer will provide you with a working environment out of the box. **Y |
18 | ```bash |
19 | git clone https://github.com/misterw97/RenderDown |
20 | sudo apt update && sudo apt install -y python3-mako python3-yaml |
21 | +pip install boto3 # if you want to run the generate_ubuntu_yaml file |
22 | ``` |
23 | |
24 | +#### Generate_ubuntu_yaml |
25 | + |
26 | +This script generates the ubuntu.yaml file for use by the RenderDown script. It uses the ubuntu.yaml template located in the templates folder. |
27 | + |
28 | +Here are the available arguments and examples of commands: |
29 | + |
30 | +``` |
31 | +usage: generate_ubuntu_yaml.py [-h] [--provider PROVIDER] [--username USERNAME] [--password PASSWORD] [--data-dir DATA_DIR] [--unpublished-suite UNPUBLISHED_SUITE] |
32 | + [--unpublished-tags UNPUBLISHED_TAGS] [--unpublished-archs UNPUBLISHED_ARCHS] |
33 | + |
34 | +Generate documentation about Ubuntu for ECR and DockerHub |
35 | + |
36 | +options: |
37 | + -h, --help show this help message and exit |
38 | + --provider PROVIDER aws or docker |
39 | + --username USERNAME Username of provider |
40 | + --password PASSWORD Password of provider |
41 | + --data-dir DATA_DIR Folder where the output file will be saved; if it does not exist, it is created |
42 | + --unpublished-suite UNPUBLISHED_SUITE |
43 | + an Ubuntu Suite (e.g. jammy). |
44 | + --unpublished-tags UNPUBLISHED_TAGS |
45 | + list of tags separated by spaces (e.g. 'kinetic 22.10 22.10_edge') |
46 | + --unpublished-archs UNPUBLISHED_ARCHS |
47 | + list of archs separated by spaces (e.g. amd64 arm) |
48 | +``` |
49 | + |
50 | +Example: `./generate_ubuntu_yaml.py --provider docker --username admin --password admin` |
51 | + |
52 | +If you give the --unpublished-suite argument, you must also give --unpublished-tags and --unpublished-archs. When these options are given, the script writes your information into the YAML file without checking the registry. |
53 | + |
54 | ## Running |
55 | |
56 | Create README files for all registries and namespaces: |
57 | diff --git a/generate_ubuntu_yaml.py b/generate_ubuntu_yaml.py |
58 | new file mode 100755 |
59 | index 0000000..b7ca361 |
60 | --- /dev/null |
61 | +++ b/generate_ubuntu_yaml.py |
62 | @@ -0,0 +1,525 @@ |
63 | +#!/usr/bin/env python3 |
64 | + |
65 | +import argparse |
66 | +import datetime |
67 | +import json |
68 | +import logging |
69 | +import os |
70 | +import subprocess |
71 | +import sys |
72 | +from typing import Dict, List |
73 | + |
74 | +import boto3 |
75 | +import requests |
76 | +import yaml |
77 | + |
78 | +logging.basicConfig(stream=sys.stdout, level=logging.INFO) |
79 | +NOW = datetime.datetime.now() |
80 | +SCRIPT_DIR = os.path.dirname(os.path.abspath(__file__)) |
81 | + |
82 | + |
83 | +def cli_args() -> argparse.ArgumentParser: |
84 | + """Argument parser""" |
85 | + parser = argparse.ArgumentParser( |
86 | + description="Generate documentation about Ubuntu for ECR and DockerHub" |
87 | + ) |
88 | + |
89 | + parser.add_argument( |
90 | + "--provider", |
91 | + default="docker", |
92 | + dest="provider", |
93 | + help="aws or docker", |
94 | + required=True, |
95 | + ) |
96 | + parser.add_argument( |
97 | + "--username", |
98 | + default="admin", |
99 | + dest="username", |
100 | + help="either the Docker Hub username, or the AWS access key ID", |
101 | + required=True, |
102 | + ) |
103 | + parser.add_argument( |
104 | + "--password", |
105 | + default="admin", |
106 | + dest="password", |
107 | + help="either the Docker Hub password/token, or the AWS secret access key", |
108 | + required=True, |
109 | + ) |
110 | + parser.add_argument( |
111 | + "--token-docker", |
112 | + dest="dockertoken", |
113 | + default=None, |
114 | + help="JWT token for Docker Hub authentication. \ |
115 | + Only useful for the 'docker' provider.", |
116 | + ) |
117 | + parser.add_argument( |
118 | + "--repository-basename", |
119 | + dest="repository", |
120 | + default=None, |
121 | + help="repository basename of the ubuntu images. \ |
122 | + Used to infer existing information.", |
123 | + ) |
124 | + parser.add_argument( |
125 | + "--data-dir", |
126 | + default="data", |
127 | + dest="data_dir", |
128 | + help="""The path of the folder |
129 | + where the data file will be |
130 | + saved (if it does not exist, |
131 | + the script will create it)""", |
132 | + ) |
133 | + parser.add_argument( |
134 | + "--unpublished-suite", |
135 | + dest="unpublished_suite", |
136 | + help="""an Ubuntu suite (e.g. jammy). |
137 | + If given, the tags and arches |
138 | + passed on the command line |
139 | + (required) are used for this |
140 | + section of the yaml file. |
141 | + """, |
142 | + ) |
143 | + parser.add_argument( |
144 | + "--unpublished-tags", |
145 | + dest="unpublished_tags", |
146 | + help="""list of space-separated tags |
147 | + (e.g. 'kinetic 22.10 22.10_edge')""", |
148 | + ) |
149 | + parser.add_argument( |
150 | + "--unpublished-archs", |
151 | + dest="unpublished_archs", |
152 | + help="list of space-separated archs (e.g. amd64 arm)", |
153 | + ) |
154 | + |
155 | + return parser |
156 | + |
157 | + |
158 | +def validate_args( |
159 | + parser: argparse.ArgumentParser, |
160 | +) -> argparse.ArgumentParser.parse_args: |
161 | + """Parse and validate the CLI arguments""" |
162 | + args = parser.parse_args() |
163 | + if any( |
164 | + [ |
165 | + args.unpublished_suite is None, |
166 | + args.unpublished_tags is None, |
167 | + args.unpublished_archs is None, |
168 | + ] |
169 | + ) and not all( |
170 | + [ |
171 | + args.unpublished_suite is None, |
172 | + args.unpublished_tags is None, |
173 | + args.unpublished_archs is None, |
174 | + ] |
175 | + ): |
176 | + parser.error( |
177 | + """--unpublished-suite needs |
178 | + --unpublished-archs and --unpublished-tags""" |
179 | + ) |
180 | + |
181 | + return args |
182 | + |
183 | + |
184 | +def build_image_endpoint(provider: str, repo_base: str = None) -> (str, str): |
185 | + """Define the image's registry URL""" |
186 | + if provider == "aws": |
187 | + registry_url = "docker://public.ecr.aws/" |
188 | + staging_repo = "rocksdev" |
189 | + else: |
190 | + registry_url = "docker://docker.io/" |
191 | + staging_repo = "rocksdev4staging" |
192 | + |
193 | + if repo_base is None: |
194 | + logging.warning("Using staging repository") |
195 | + url = f"{registry_url}{staging_repo}/ubuntu" |
196 | + namespace = staging_repo |
197 | + else: |
198 | + url = f"{registry_url}{repo_base}/ubuntu" |
199 | + namespace = repo_base |
200 | + |
201 | + logging.info(f"Using {url} to collect information") |
202 | + |
203 | + return url, namespace |
204 | + |
205 | + |
206 | +def add_yaml_representer(): |
207 | + def str_presenter(dumper, data): |
208 | + """ |
209 | + Format multiline strings |
210 | + as literal blocks ("|") |
211 | + in the YAML output |
212 | + """ |
213 | + |
214 | + c = "tag:yaml.org,2002:str" |
215 | + if len(data.splitlines()) > 1: # check for multiline string |
216 | + return dumper.represent_scalar(c, data, style="|") |
217 | + return dumper.represent_scalar(c, data) |
218 | + |
219 | + yaml.add_representer(str, str_presenter) |
220 | + yaml.representer.SafeRepresenter.add_representer(str, str_presenter) |
221 | + |
222 | + |
223 | +def _process_run(command: List[str], **kwargs) -> str: |
224 | + """Run a command and handle its output.""" |
225 | + logging.info(f"Execute process: {command!r}, kwargs={kwargs!r}") |
226 | + try: |
227 | + out = subprocess.run( |
228 | + command, |
229 | + **kwargs, |
230 | + capture_output=True, |
231 | + check=True, |
232 | + universal_newlines=True, |
233 | + ) |
234 | + except subprocess.CalledProcessError as err: |
235 | + msg = f"Failed to run command: {err!s}" |
236 | + if err.stderr: |
237 | + msg += f" ({err.stderr.strip()!s})" |
238 | + raise Exception(msg) from err |
239 | + |
240 | + return out.stdout.strip() |
241 | + |
242 | + |
243 | +def get_arches(release: str, image_url: str) -> List[str]: |
244 | + """ |
245 | + Get the arches associated with the release |
246 | + """ |
247 | + logging.info(f"Getting the arches for {release}") |
248 | + command = ["skopeo", "inspect", f"{image_url}:{release}", "--raw"] |
249 | + manifest = json.loads(_process_run(command))["manifests"] |
250 | + arches = [] |
251 | + for arch in manifest: |
252 | + arches.append(arch["platform"]["architecture"]) |
253 | + return arches |
254 | + |
255 | + |
256 | +def get_dockerhub_token(username: str, password: str) -> str: |
257 | + """ |
258 | + Get the token associated with the Docker Hub account |
259 | + """ |
260 | + logging.info("Getting the token from Docker Hub") |
261 | + |
262 | + url_token = "https://hub.docker.com/v2/users/login" |
263 | + data = {"username": username, "password": password} |
264 | + get_token = requests.post(url_token, json=data) |
265 | + get_token.raise_for_status() |
266 | + return get_token.json()["token"] |
267 | + |
268 | + |
269 | +def get_tags_docker( |
270 | + release: str, token: str, image_url: str, image_namespace: str |
271 | +) -> List[str]: |
272 | + """ |
273 | + Get the tags associated with the release |
274 | + """ |
275 | + logging.info(f"Getting the tags from Docker for {release}") |
276 | + tags = [] |
277 | + command = [ |
278 | + "skopeo", |
279 | + "inspect", |
280 | + f"{image_url}:{release}", |
281 | + "--raw", |
282 | + ] |
283 | + result_json = _process_run(command) |
284 | + digest = json.loads(result_json)["manifests"][0]["digest"] |
285 | + |
286 | + url_dockerhub = "https://hub.docker.com/v2/repositories/" |
287 | + url_dockerhub += f"{image_namespace}/ubuntu/tags/?page_size=999" |
288 | + headers = {"Authorization": f"JWT {token}"} |
289 | + get_the_tags = requests.get(url_dockerhub, headers=headers) |
290 | + get_the_tags = get_the_tags.json()["results"] |
291 | + for image in get_the_tags: |
292 | + for info_image in image["images"]: |
293 | + if info_image["digest"] == digest and image["name"] not in tags: |
294 | + tags.append(image["name"]) |
295 | + |
296 | + return tags |
297 | + |
298 | + |
299 | +def get_tags_aws(release: str, client: boto3.Session, image_url: str) -> List[str]: |
300 | + """ |
301 | + Get the tags associated with the release |
302 | + """ |
303 | + logging.info(f"Getting the tags from AWS for {release}") |
304 | + |
305 | + tags = [] |
306 | + command = [ |
307 | + "skopeo", |
308 | + "inspect", |
309 | + f"{image_url}:{release}", |
310 | + ] |
311 | + result_json = _process_run(command) |
312 | + digest = json.loads(result_json)["Digest"] |
313 | + response = client.describe_image_tags(repositoryName="ubuntu") |
314 | + |
315 | + for image in response["imageTagDetails"]: |
316 | + if ( |
317 | + image["imageDetail"]["imageDigest"] == digest |
318 | + and image["imageTag"] not in tags |
319 | + ): |
320 | + tags.append(image["imageTag"]) |
321 | + return tags |
322 | + |
323 | + |
324 | +def get_fullname(release: str) -> str: |
325 | + """ |
326 | + Get the full name associated with the release |
327 | + """ |
328 | + logging.info(f"Getting the full name of {release}") |
329 | + |
330 | + command = ["ubuntu-distro-info", f"--series={release}", "-f"] |
331 | + result_json = _process_run(command) |
332 | + return result_json.replace("Ubuntu", "").strip() |
333 | + |
334 | + |
335 | +def get_support(series: str, is_lts: bool) -> Dict[str, Dict[str, str]]: |
336 | + """Calculates the end of support dates for a given Ubuntu series""" |
337 | + logging.info(f"Getting support information for the {series}") |
338 | + |
339 | + base_cmd = ["ubuntu-distro-info", "--series", series] |
340 | + eol_cmd = base_cmd + ["--day=eol"] |
341 | + |
342 | + eol = int(_process_run(eol_cmd)) |
343 | + eol_date = NOW + datetime.timedelta(days=eol) |
344 | + |
345 | + support = {"support": {"until": f"{eol_date.month:02d}/{eol_date.year}"}} |
346 | + |
347 | + if not is_lts: |
348 | + return support |
349 | + |
350 | + # Then it is LTS, and lts_until=until |
351 | + support["support"]["lts_until"] = support["support"]["until"] |
352 | + |
353 | + eol_esm_cmd = base_cmd + ["--day=eol-esm"] |
354 | + |
355 | + eol_esm = int(_process_run(eol_esm_cmd)) |
356 | + eol_esm_date = NOW + datetime.timedelta(days=eol_esm) |
357 | + eol_esm_value = f"{eol_esm_date.month:02d}/{eol_esm_date.year}" |
358 | + support["support"]["esm_until"] = eol_esm_value |
359 | + |
360 | + return support |
361 | + |
362 | + |
363 | +def get_deprecated(series: str) -> Dict[str, Dict[str, object]]: |
364 | + """ |
365 | + Calculates the deprecation date |
366 | + and upgrade path for a deprecated release |
367 | + """ |
368 | + logging.info(f"Getting deprecation information for {series}") |
369 | + |
370 | + eol_cmd = ["ubuntu-distro-info", "--series", series, "--day=eol"] |
371 | + |
372 | + eol = int(_process_run(eol_cmd)) |
373 | + eol_date = NOW + datetime.timedelta(days=eol) |
374 | + # For now, the upgrade path is always the next release |
375 | + |
376 | + this_release_cmd = ["ubuntu-distro-info", "--series", series, "--day=release"] |
377 | + this_release = int(_process_run(this_release_cmd)) |
378 | + # add 60 days to the release date, to get the next development version |
379 | + next_date = NOW + datetime.timedelta(days=this_release + 60) |
380 | + |
381 | + following_dev_series_cmd = [ |
382 | + "ubuntu-distro-info", |
383 | + "-d", |
384 | + f"--date={next_date.year}-{next_date.month}-{next_date.day}", |
385 | + ] |
386 | + development_suite_at_eol = _process_run(following_dev_series_cmd) |
387 | + |
388 | + upgrade_path_cmd = [ |
389 | + "ubuntu-distro-info", |
390 | + "--series", |
391 | + development_suite_at_eol, |
392 | + "-r", |
393 | + ] |
394 | + upgrade_path = _process_run(upgrade_path_cmd).replace(" LTS", "").strip() |
395 | + |
396 | + return { |
397 | + "deprecated": { |
398 | + "date": f"{eol_date.month:02d}/{eol_date.year}", |
399 | + "path": {"track": upgrade_path}, |
400 | + } |
401 | + } |
402 | + |
403 | + |
404 | +def is_deprecated(series: str) -> bool: |
405 | + |
406 | + """Checks whether a series is completely deprecated (both LTS and ESM)""" |
407 | + logging.info(f"Checking if {series} is deprecated") |
408 | + supported_cmd = "ubuntu-distro-info --supported" |
409 | + supported_esm_cmd = supported_cmd + "-esm" |
410 | + all_supported = _process_run(supported_cmd.split(" ")) + _process_run( |
411 | + supported_esm_cmd.split(" ") |
412 | + ) |
413 | + return series not in all_supported |
414 | + |
415 | + |
416 | +def is_lts(series: str) -> bool: |
417 | + |
418 | + """Checks if a given series is LTS""" |
419 | + logging.info(f"Checking if {series} is LTS") |
420 | + |
421 | + cmd = ["ubuntu-distro-info", "--series", series, "-f"] |
422 | + |
423 | + return "LTS" in _process_run(cmd) |
424 | + |
425 | + |
426 | +def get_lowest_risk(tags: List[str]) -> str: |
427 | + """ |
428 | + Get the lowest risk associated with the release |
429 | + """ |
430 | + risk_sorted = ["stable", "candidate", "beta", "edge"] |
431 | + |
432 | + all_tags_str = " ".join(tags) |
433 | + for risk in risk_sorted: |
434 | + if risk in all_tags_str: |
435 | + return risk |
436 | + |
437 | + return "edge" |
438 | + |
439 | + |
440 | +def get_release(series: str) -> str: |
441 | + command = ["ubuntu-distro-info", f"--series={series}", "-r"] |
442 | + |
443 | + return _process_run(command) |
444 | + |
445 | + |
446 | +def infer_registry_user( |
447 | + provider: str, username: str, password: str, dh_token: str = None |
448 | +) -> object: |
449 | + user = None |
450 | + if provider == "aws": |
451 | + logging.info("Connecting to AWS") |
452 | + session = boto3.Session( |
453 | + region_name="us-east-1", |
454 | + aws_access_key_id=username, |
455 | + aws_secret_access_key=password, |
456 | + ) |
457 | + user = session.client("ecr-public") |
458 | + else: |
459 | + logging.info("Fetching Docker Hub token") |
460 | + if dh_token: |
461 | + user = dh_token |
462 | + else: |
463 | + user = get_dockerhub_token(username, password) |
464 | + |
465 | + return user |
466 | + |
467 | + |
468 | +def build_releases_data( |
469 | + list_of_series: List[str], |
470 | + all_tags: List[str], |
471 | + image_url: str, |
472 | + image_ns: str, |
473 | + arguments: argparse.ArgumentParser.parse_args, |
474 | + registry_user: object, |
475 | +) -> Dict: |
476 | + """Build the releases info data structure""" |
477 | + releases = [] |
478 | + for count, series in enumerate(list_of_series): |
479 | + if series not in all_tags and series != arguments.unpublished_suite: |
480 | + logging.warning( |
481 | + f"Series {series} does not exist in {image_url}. Skipping it..." |
482 | + ) |
483 | + continue |
484 | + |
485 | + release_data = {} |
486 | + |
487 | + release = get_release(series) |
488 | + if "LTS" in release: |
489 | + release_data["type"] = "LTS" |
490 | + |
491 | + release_data["track"] = release.replace(" LTS", "").strip() |
492 | + |
493 | + if arguments.unpublished_suite and arguments.unpublished_suite == series: |
494 | + release_data["architectures"] = arguments.unpublished_archs.split() |
495 | + release_data["version"] = get_fullname(arguments.unpublished_suite) |
496 | + release_data["risk"] = get_lowest_risk(arguments.unpublished_tags.split()) |
497 | + release_data["tags"] = arguments.unpublished_tags.split() |
498 | + else: |
499 | + release_data["architectures"] = get_arches(series, image_url) |
500 | + release_data["version"] = get_fullname(series) |
501 | + if arguments.provider == "docker": |
502 | + release_data["tags"] = get_tags_docker( |
503 | + series, registry_user, image_url, image_ns |
504 | + ) |
505 | + else: |
506 | + release_data["tags"] = get_tags_aws(series, registry_user, image_url) |
507 | + release_data["risk"] = get_lowest_risk(release_data["tags"]) |
508 | + |
509 | + if is_deprecated(series): |
510 | + release_data["deprecated"] = get_deprecated(series) |
511 | + else: |
512 | + release_data["support"] = get_support(series, is_lts(series)) |
513 | + |
514 | + releases.append(release_data) |
515 | + |
516 | + return releases |
517 | + |
518 | + |
519 | +def read_ubuntu_data_template() -> Dict: |
520 | + """Reads and parses the YAML contents of the data template""" |
521 | + template_file = f"{SCRIPT_DIR}/templates/ubuntu.yaml" |
522 | + logging.info(f"Opening the template file {template_file}") |
523 | + with open(template_file) as file: |
524 | + try: |
525 | + return yaml.safe_load(file) |
526 | + except yaml.YAMLError as exc: |
527 | + logging.error("Error when loading the ubuntu template file") |
528 | + raise exc |
529 | + |
530 | + |
531 | +def create_data_dir(path: str): |
532 | + """Create data dir if it doesn't exist""" |
533 | + if not os.path.exists(path): |
534 | + logging.info(f"Creating the {path} folder") |
535 | + |
536 | + os.makedirs(path) |
537 | + |
538 | + |
539 | +def write_ubuntu_data_file(file_path: str, content: Dict): |
540 | + """Write the YAML content into the ubuntu file path""" |
541 | + with open(file_path, "w") as file: |
542 | + logging.info(f"Create the yaml file {file_path}") |
543 | + yaml.dump(content, file) |
544 | + |
545 | + |
546 | +def main(): |
547 | + arguments = validate_args(cli_args()) |
548 | + registry_user = infer_registry_user( |
549 | + arguments.provider, |
550 | + arguments.username, |
551 | + arguments.password, |
552 | + arguments.dockertoken, |
553 | + ) |
554 | + |
555 | + add_yaml_representer() |
556 | + url, ns = build_image_endpoint(arguments.provider, repo_base=arguments.repository) |
557 | + |
558 | + logging.info(f"Getting all tags from {url}") |
559 | + command_tags = ["skopeo", "list-tags", url] |
560 | + existing_tags = json.loads(_process_run(command_tags))["Tags"] |
561 | + |
562 | + logging.info("Getting all the series from ubuntu-distro-info") |
563 | + command_suites = ["ubuntu-distro-info", "--all"] |
564 | + series_names = _process_run(command_suites).split("\n") |
565 | + |
566 | + if arguments.unpublished_suite and arguments.unpublished_suite not in series_names: |
567 | + logging.error( |
568 | + f"The provided unpublished suite {arguments.unpublished_suite} " |
569 | + "is not recognized. Ignoring it" |
570 | + ) |
571 | + |
572 | + logging.info("Building releases info") |
573 | + releases = build_releases_data( |
574 | + series_names, existing_tags, url, ns, arguments, registry_user |
575 | + ) |
576 | + |
577 | + dict_file = read_ubuntu_data_template() |
578 | + dict_file["releases"] = releases |
579 | + |
580 | + create_data_dir(arguments.data_dir) |
581 | + |
582 | + ubuntu_data_file = f"{arguments.data_dir}/ubuntu.yaml" |
583 | + write_ubuntu_data_file(ubuntu_data_file, dict_file) |
584 | + |
585 | + |
586 | +if __name__ == "__main__": |
587 | + main() |
I guess PyYAML needs to be installed. If so, please adjust the README and .devcontainer accordingly.
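The comment above is right that PyYAML is a runtime dependency: the script's `add_yaml_representer` relies on it to emit multiline strings as literal blocks. A standalone sketch of that technique, using the standard PyYAML `add_representer` API and mirroring the representer in the diff above:

```python
import yaml


def str_presenter(dumper, data):
    # Emit multiline strings in literal block style ("|") so the
    # generated ubuntu.yaml stays human-readable; single-line
    # strings keep the default scalar style.
    tag = "tag:yaml.org,2002:str"
    if len(data.splitlines()) > 1:
        return dumper.represent_scalar(tag, data, style="|")
    return dumper.represent_scalar(tag, data)


# Register for both the default Dumper and the safe dumper.
yaml.add_representer(str, str_presenter)
yaml.representer.SafeRepresenter.add_representer(str, str_presenter)

print(yaml.dump({"notes": "first line\nsecond line\n"}))
```

Without the representer, PyYAML would fold the string onto one quoted line with embedded `\n` escapes; with it, the value is rendered as an indented block under `notes: |`.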