Merge ~ubuntu-docker-images/ubuntu-docker-images/+git/templates:add_script_for_ubuntu into ~ubuntu-docker-images/ubuntu-docker-images/+git/templates:main
Status: Merged
Merged at revision: 0c01c112996781987bd31c7b543abffc01fe3226
Proposed branch: ~ubuntu-docker-images/ubuntu-docker-images/+git/templates:add_script_for_ubuntu
Merge into: ~ubuntu-docker-images/ubuntu-docker-images/+git/templates:main
Diff against target: 587 lines (+557/-1), 3 files modified
  .devcontainer/Dockerfile (+1/-1)
  README.md (+31/-0)
  generate_ubuntu_yaml.py (+525/-0)
Related bugs: (none)
Reviewers: Cristovao Cordeiro (Approve), Samir Akarioh (community) (Approve)
Review via email: mp+431915@code.launchpad.net
Commit message
feat: add Ubuntu YAML file generator

This commit adds a script that generates
the ubuntu YAML data file.

Co-Signed-Off: Samir Akarioh and Cristovao Cordeiro
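One detail of the generator worth calling out: it maps each release's registry tags to a single risk channel, picking the lowest risk named in any tag. The selection logic, taken directly from the proposed generate_ubuntu_yaml.py, can be run in isolation:

```python
from typing import List


def get_lowest_risk(tags: List[str]) -> str:
    """Return the lowest-risk channel mentioned in the tags.

    Risks are ordered from lowest to highest; if no tag names a
    risk explicitly, the release is assumed to be on "edge".
    """
    risk_sorted = ["stable", "candidate", "beta", "edge"]

    all_tags_str = " ".join(tags)
    for risk in risk_sorted:
        if risk in all_tags_str:
            return risk

    return "edge"


# "beta" outranks "edge" in the ordering, so it wins here
print(get_lowest_risk(["22.10_beta", "kinetic_edge"]))  # beta
```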
Description of the change
Samir Akarioh (samiraka) wrote:
by .devcontainer, do you mean the Dockerfile?
Cristovao Cordeiro (cjdc) wrote:
> by .devcontainer, do you mean the Dockerfile?
yes
Cristovao Cordeiro (cjdc) wrote:
please make the script executable
Samir Akarioh (samiraka) wrote:
no need, it's already installed with renderdown: https:/
Samir Akarioh (samiraka) wrote:
> please make the script executable
What do you mean by that?
Cristovao Cordeiro (cjdc) wrote:
please format the file with black, isort, and flake8
Samir Akarioh (samiraka) wrote:
Done
Cristovao Cordeiro (cjdc) wrote (last edit):
> no need, it's already installed with renderdown:
> https:/
and so is boto3
Samir Akarioh (samiraka) wrote:
wouldn't it be better to do an MP on GitHub to add boto3?
Cristovao Cordeiro (cjdc) wrote:
> wouldn't it be better to do an MP on GitHub to add boto3?
does renderdown need it?
Samir Akarioh (samiraka) wrote:
no
Cristovao Cordeiro (cjdc) wrote:
> no
then it shouldn't go there. let's update the README and .devcontainer in this project instead
Samir Akarioh (samiraka) wrote:
I updated it
Cristovao Cordeiro (cjdc) wrote:
ok thanks. pls let me know once the other comments from https:/
Samir Akarioh (samiraka) wrote:
Done
Samir Akarioh (samiraka) wrote:
I reviewed your script; do you want me to make the corrections, or?
Cristovao Cordeiro (cjdc) wrote:
thanks. fixed
Cristovao Cordeiro (cjdc) wrote:
I think you can merge it now, Samir
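The review settled on requiring the three --unpublished-* flags together or not at all; the script enforces this with an any/not-all check in validate_args. A distilled sketch of that check (the helper name is illustrative, not from the script):

```python
def unpublished_flags_valid(suite, tags, archs) -> bool:
    """All-or-none rule: --unpublished-suite, --unpublished-tags and
    --unpublished-archs must either all be given or all be omitted."""
    values = [suite, tags, archs]
    return all(v is None for v in values) or all(v is not None for v in values)


# Giving only --unpublished-suite is rejected:
print(unpublished_flags_valid("kinetic", None, None))  # False
```

In the script itself, a False result triggers parser.error(), which prints the usage message and exits.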
Preview Diff
diff --git a/.devcontainer/Dockerfile b/.devcontainer/Dockerfile
index fa24741..3c8c2b3 100644
--- a/.devcontainer/Dockerfile
+++ b/.devcontainer/Dockerfile
@@ -6,6 +6,6 @@ RUN apt-get update && export DEBIAN_FRONTEND=noninteractive \
     && apt-get -y install --no-install-recommends python3-mako python3-yaml
 
 RUN cd /usr/bin && git clone https://github.com/valentinviennot/RenderDown \
-    && pip3 --disable-pip-version-check --no-cache-dir install -r /usr/bin/RenderDown/requirements.txt \
+    && pip3 --disable-pip-version-check --no-cache-dir install -r /usr/bin/RenderDown/requirements.txt boto3 \
     && rm -rf /tmp/pip-tmp
 ENV RENDERDOWN /usr/bin/RenderDown/renderdown.py
diff --git a/README.md b/README.md
index 0166dea..29d4e38 100644
--- a/README.md
+++ b/README.md
@@ -15,8 +15,39 @@ The DevContainer will provide you with a working environment out of the box. **Y
 ```bash
 git clone https://github.com/misterw97/RenderDown
 sudo apt update && sudo apt install -y python3-mako python3-yaml
+pip install boto3 # if you want to run the generate_ubuntu_yaml script
 ```
 
+#### Generate_ubuntu_yaml
+
+This script generates the ubuntu.yaml file consumed by the RenderDown script. It uses the ubuntu.yaml template located in the templates folder.
+
+Here are the available arguments and examples of commands:
+
+```
+usage: generate_ubuntu_yaml.py [-h] [--provider PROVIDER] [--username USERNAME] [--password PASSWORD] [--data-dir DATA_DIR] [--unpublished-suite UNPUBLISHED_SUITE]
+                               [--unpublished-tags UNPUBLISHED_TAGS] [--unpublished-archs UNPUBLISHED_ARCHS]
+
+Generate documentation about Ubuntu for ECR and DockerHub
+
+options:
+  -h, --help            show this help message and exit
+  --provider PROVIDER   aws or docker
+  --username USERNAME   Username of provider
+  --password PASSWORD   Password of provider
+  --data-dir DATA_DIR   The output folder. If it does not exist, it is created
+  --unpublished-suite UNPUBLISHED_SUITE
+                        an Ubuntu suite (e.g. jammy).
+  --unpublished-tags UNPUBLISHED_TAGS
+                        list of tags separated by commas (e.g. 'kinetic 22.10 22.10_edge',kinetic)
+  --unpublished-archs UNPUBLISHED_ARCHS
+                        list of archs separated by commas (e.g. amd64,arm)
+```
+
+Example: `./generate_ubuntu_yaml.py --provider docker --username admin --password admin`
+
+If you give the --unpublished-suite argument, you also need to give --unpublished-tags and --unpublished-archs. When these options are given, the script writes your information into the YAML file without checking the registry.
+
 ## Running
 
 Create README files for all registries and namespaces:
diff --git a/generate_ubuntu_yaml.py b/generate_ubuntu_yaml.py
new file mode 100755
index 0000000..b7ca361
--- /dev/null
+++ b/generate_ubuntu_yaml.py
@@ -0,0 +1,525 @@
+#!/usr/bin/env python3
+
+import argparse
+import datetime
+import json
+import logging
+import os
+import subprocess
+import sys
+from typing import Dict, List, Tuple
+
+import boto3
+import requests
+import yaml
+
+logging.basicConfig(stream=sys.stdout, level=logging.INFO)
+NOW = datetime.datetime.now()
+SCRIPT_DIR = os.path.dirname(os.path.abspath(__file__))
+
+
+def cli_args() -> argparse.ArgumentParser:
+    """Argument parser"""
+    parser = argparse.ArgumentParser(
+        description="Generate documentation about Ubuntu for ECR and DockerHub"
+    )
+
+    parser.add_argument(
+        "--provider",
+        default="docker",
+        dest="provider",
+        help="aws or docker",
+        required=True,
+    )
+    parser.add_argument(
+        "--username",
+        default="admin",
+        dest="username",
+        help="either the Docker Hub username, or the AWS access key ID",
+        required=True,
+    )
+    parser.add_argument(
+        "--password",
+        default="admin",
+        dest="password",
+        help="either the Docker Hub password/token, or the AWS secret access key",
+        required=True,
+    )
+    parser.add_argument(
+        "--token-docker",
+        dest="dockertoken",
+        default=None,
+        help="JWT token for Docker Hub authentication. \
+            Only useful for the 'docker' provider.",
+    )
+    parser.add_argument(
+        "--repository-basename",
+        dest="repository",
+        default=None,
+        help="repository basename of the ubuntu images. \
+            Used to infer existing information.",
+    )
+    parser.add_argument(
+        "--data-dir",
+        default="data",
+        dest="data_dir",
+        help="""The path of the folder
+            where the data file will be
+            saved (if it does not exist,
+            the script will create it)""",
+    )
+    parser.add_argument(
+        "--unpublished-suite",
+        dest="unpublished_suite",
+        help="""an Ubuntu suite (e.g. jammy).
+            If given, the tags and arches
+            passed on the command line (required)
+            are used for this section
+            of the YAML file.
+            """,
+    )
+    parser.add_argument(
+        "--unpublished-tags",
+        dest="unpublished_tags",
+        help="""list of tags
+            (e.g. 'kinetic 22.10 22.10_edge')""",
+    )
+    parser.add_argument(
+        "--unpublished-archs",
+        dest="unpublished_archs",
+        help="list of archs (e.g. amd64 arm)",
+    )
+
+    return parser
+
+
+def validate_args(
+    parser: argparse.ArgumentParser,
+) -> argparse.Namespace:
+    """Parse and validate the CLI arguments"""
+    args = parser.parse_args()
+    if any(
+        [
+            args.unpublished_suite is None,
+            args.unpublished_tags is None,
+            args.unpublished_archs is None,
+        ]
+    ) and not all(
+        [
+            args.unpublished_suite is None,
+            args.unpublished_tags is None,
+            args.unpublished_archs is None,
+        ]
+    ):
+        parser.error(
+            """--unpublished-suite needs
+            --unpublished-archs and --unpublished-tags"""
+        )
+
+    return args
+
+
+def build_image_endpoint(provider: str, repo_base: str = None) -> Tuple[str, str]:
+    """Define the image's registry URL"""
+    if provider == "aws":
+        registry_url = "docker://public.ecr.aws/"
+        staging_repo = "rocksdev"
+    else:
+        registry_url = "docker://docker.io/"
+        staging_repo = "rocksdev4staging"
+
+    if repo_base is None:
+        logging.warning("Using staging repository")
+        url = f"{registry_url}{staging_repo}/ubuntu"
+        namespace = staging_repo
+    else:
+        url = f"{registry_url}{repo_base}/ubuntu"
+        namespace = repo_base
+
+    logging.info(f"Using {url} to collect information")
+
+    return url, namespace
+
+
+def add_yaml_representer():
+    def str_presenter(dumper, data):
+        """
+        Format multiline strings
+        as block scalars in the
+        YAML file
+        """
+
+        c = "tag:yaml.org,2002:str"
+        if len(data.splitlines()) > 1:  # check for multiline string
+            return dumper.represent_scalar(c, data, style="|")
+        return dumper.represent_scalar(c, data)
+
+    yaml.add_representer(str, str_presenter)
+    yaml.representer.SafeRepresenter.add_representer(str, str_presenter)
+
+
+def _process_run(command: List[str], **kwargs) -> str:
+    """Run a command and handle its output."""
+    logging.info(f"Execute process: {command!r}, kwargs={kwargs!r}")
+    try:
+        out = subprocess.run(
+            command,
+            **kwargs,
+            capture_output=True,
+            check=True,
+            universal_newlines=True,
+        )
+    except subprocess.CalledProcessError as err:
+        msg = f"Failed to run command: {err!s}"
+        if err.stderr:
+            msg += f" ({err.stderr.strip()!s})"
+        raise Exception(msg) from err
+
+    return out.stdout.strip()
+
+
+def get_arches(release: str, image_url: str) -> List[str]:
+    """
+    Get the arches associated with the release
+    """
+    logging.info(f"Getting the arches for {release}")
+    command = ["skopeo", "inspect", f"{image_url}:{release}", "--raw"]
+    manifest = json.loads(_process_run(command))["manifests"]
+    arches = []
+    for arch in manifest:
+        arches.append(arch["platform"]["architecture"])
+    return arches
+
+
+def get_dockerhub_token(username: str, password: str) -> str:
+    """
+    Get the token associated with the Docker Hub account
+    """
+    logging.info("Getting the token from Docker")
+
+    url_token = "https://hub.docker.com/v2/users/login"
+    data = {"username": username, "password": password}
+    get_token = requests.post(url_token, json=data)
+    get_token.raise_for_status()
+    return get_token.json()["token"]
+
+
+def get_tags_docker(
+    release: str, token: str, image_url: str, image_namespace: str
+) -> List[str]:
+    """
+    Get the tags associated with the release
+    """
+    logging.info(f"Getting the tags from Docker for {release}")
+    tags = []
+    command = [
+        "skopeo",
+        "inspect",
+        f"{image_url}:{release}",
+        "--raw",
+    ]
+    result_json = _process_run(command)
+    digest = json.loads(result_json)["manifests"][0]["digest"]
+
+    url_dockerhub = "https://hub.docker.com/v2/repositories/"
+    url_dockerhub += f"{image_namespace}/ubuntu/tags/?page_size=999"
+    headers = {"Authorization": f"JWT {token}"}
+    get_the_tags = requests.get(url_dockerhub, headers=headers)
+    get_the_tags = get_the_tags.json()["results"]
+    for image in get_the_tags:
+        for info_image in image["images"]:
+            if info_image["digest"] == digest and image["name"] not in tags:
+                tags.append(image["name"])
+
+    return tags
+
+
+def get_tags_aws(release: str, client: boto3.Session, image_url: str) -> List[str]:
+    """
+    Get the tags associated with the release
+    """
+    logging.info(f"Getting the tags from AWS for {release}")
+
+    tags = []
+    command = [
+        "skopeo",
+        "inspect",
+        f"{image_url}:{release}",
+    ]
+    result_json = _process_run(command)
+    digest = json.loads(result_json)["Digest"]
+    response = client.describe_image_tags(repositoryName="ubuntu")
+
+    for image in response["imageTagDetails"]:
+        if (
+            image["imageDetail"]["imageDigest"] == digest
+            and image["imageTag"] not in tags
+        ):
+            tags.append(image["imageTag"])
+    return tags
+
+
+def get_fullname(release: str) -> str:
+    """
+    Get the full name associated with the release
+    """
+    logging.info(f"Getting full name of {release}")
+
+    command = ["ubuntu-distro-info", f"--series={release}", "-f"]
+    result_json = _process_run(command)
+    return result_json.replace("Ubuntu", "").strip()
+
+
+def get_support(series: str, is_lts: bool) -> Dict[str, Dict[str, str]]:
+    """Calculates the end-of-support dates for a given Ubuntu series"""
+    logging.info(f"Getting support information for {series}")
+
+    base_cmd = ["ubuntu-distro-info", "--series", series]
+    eol_cmd = base_cmd + ["--day=eol"]
+
+    eol = int(_process_run(eol_cmd))
+    eol_date = NOW + datetime.timedelta(days=eol)
+
+    support = {"support": {"until": f"{eol_date.month:02d}/{eol_date.year}"}}
+
+    if not is_lts:
+        return support
+
+    # It is LTS, so lts_until equals until
+    support["support"]["lts_until"] = support["support"]["until"]
+
+    eol_esm_cmd = base_cmd + ["--day=eol-esm"]
+
+    eol_esm = int(_process_run(eol_esm_cmd))
+    eol_esm_date = NOW + datetime.timedelta(days=eol_esm)
+    eol_esm_value = f"{eol_esm_date.month:02d}/{eol_esm_date.year}"
+    support["support"]["esm_until"] = eol_esm_value
+
+    return support
+
+
+def get_deprecated(series: str) -> Dict[str, Dict[str, object]]:
+    """
+    Calculates the deprecation date
+    and upgrade path for a deprecated release
+    """
+    logging.info(f"Getting deprecation information for {series}")
+
+    eol_cmd = ["ubuntu-distro-info", "--series", series, "--day=eol"]
+
+    eol = int(_process_run(eol_cmd))
+    eol_date = NOW + datetime.timedelta(days=eol)
+    # For now, the upgrade path is always the next release
+
+    this_release_cmd = ["ubuntu-distro-info", "--series", series, "--day=release"]
+    this_release = int(_process_run(this_release_cmd))
+    # add 60 days to the release date, to get the next development version
+    next_date = NOW + datetime.timedelta(days=this_release + 60)
+
+    following_dev_series_cmd = [
+        "ubuntu-distro-info",
+        "-d",
+        f"--date={next_date.year}-{next_date.month}-{next_date.day}",
+    ]
+    development_suite_at_eol = _process_run(following_dev_series_cmd)
+
+    upgrade_path_cmd = [
+        "ubuntu-distro-info",
+        "--series",
+        development_suite_at_eol,
+        "-r",
+    ]
+    upgrade_path = _process_run(upgrade_path_cmd).replace(" LTS", "")
+
+    return {
+        "deprecated": {
+            "date": f"{eol_date.month:02d}/{eol_date.year}",
+            "path": {"track": upgrade_path},
+        }
+    }
+
+
+def is_deprecated(series: str) -> bool:
+    """Checks whether a series is completely deprecated (both LTS and ESM)"""
+    logging.info(f"Checking if {series} is deprecated")
+    supported_cmd = "ubuntu-distro-info --supported"
+    supported_esm_cmd = supported_cmd + "-esm"
+    all_supported = _process_run(supported_cmd.split(" ")) + _process_run(
+        supported_esm_cmd.split(" ")
+    )
+    return series not in all_supported
+
+
+def is_lts(series: str) -> bool:
+    """Checks if a given series is LTS"""
+    logging.info(f"Checking if {series} is LTS")
+
+    cmd = ["ubuntu-distro-info", "--series", series, "-f"]
+
+    return "LTS" in _process_run(cmd)
+
+
+def get_lowest_risk(tags: List[str]) -> str:
+    """
+    Get the lowest risk associated with the release
+    """
+    risk_sorted = ["stable", "candidate", "beta", "edge"]
+
+    all_tags_str = " ".join(tags)
+    for risk in risk_sorted:
+        if risk in all_tags_str:
+            return risk
+
+    return "edge"
+
+
+def get_release(series: str) -> str:
+    command = ["ubuntu-distro-info", f"--series={series}", "-r"]
+
+    return _process_run(command)
+
+
+def infer_registry_user(
+    provider: str, username: str, password: str, dh_token: str = None
+) -> object:
+    user = None
+    if provider == "aws":
+        logging.info("Connecting to AWS")
+        session = boto3.Session(
+            region_name="us-east-1",
+            aws_access_key_id=username,
+            aws_secret_access_key=password,
+        )
+        user = session.client("ecr-public")
+    else:
+        logging.info("Fetching Docker Hub token")
+        if dh_token:
+            user = dh_token
+        else:
+            user = get_dockerhub_token(username, password)
+
+    return user
+
+
+def build_releases_data(
+    list_of_series: List[str],
+    all_tags: List[str],
+    image_url: str,
+    image_ns: str,
+    arguments: argparse.Namespace,
+    registry_user: object,
+) -> Dict:
+    """Build the releases info data structure"""
+    releases = []
+    for series in list_of_series:
+        if series not in all_tags and series != arguments.unpublished_suite:
+            logging.warning(
+                f"Series {series} does not exist in {image_url}. Skipping it..."
+            )
+            continue
+
+        release_data = {}
+
+        release = get_release(series)
+        if "LTS" in release:
+            release_data["type"] = "LTS"
+
+        release_data["track"] = release.replace(" LTS", "")
+
+        if arguments.unpublished_suite and arguments.unpublished_suite == series:
+            release_data["architectures"] = arguments.unpublished_archs.split()
+            release_data["version"] = get_fullname(arguments.unpublished_suite)
+            release_data["risk"] = get_lowest_risk(arguments.unpublished_tags.split())
+            release_data["tags"] = arguments.unpublished_tags.split()
+        else:
+            release_data["architectures"] = get_arches(series, image_url)
+            release_data["version"] = get_fullname(series)
+            if arguments.provider == "docker":
+                release_data["tags"] = get_tags_docker(
+                    series, registry_user, image_url, image_ns
+                )
+            else:
+                release_data["tags"] = get_tags_aws(series, registry_user, image_url)
+            release_data["risk"] = get_lowest_risk(release_data["tags"])
+
+        if is_deprecated(series):
+            release_data["deprecated"] = get_deprecated(series)
+        else:
+            release_data["support"] = get_support(series, is_lts(series))
+
+        releases.append(release_data)
+
+    return releases
+
+
+def read_ubuntu_data_template() -> Dict:
+    """Reads and parses the YAML contents of the data template"""
+    template_file = f"{SCRIPT_DIR}/templates/ubuntu.yaml"
+    logging.info(f"Opening the template file {template_file}")
+    with open(template_file) as file:
+        try:
+            return yaml.safe_load(file)
+        except yaml.YAMLError as exc:
+            logging.error("Error when loading the ubuntu template file")
+            raise exc
+
+
+def create_data_dir(path: str):
+    """Create the data dir if it doesn't exist"""
+    if not os.path.exists(path):
+        logging.info(f"Creating the {path} folder")
+
+        os.makedirs(path)
+
+
+def write_ubuntu_data_file(file_path: str, content: Dict):
+    """Write the YAML content into the ubuntu file path"""
+    with open(file_path, "w") as file:
+        logging.info(f"Create the yaml file {file_path}")
+        yaml.dump(content, file)
+
+
+def main():
+    arguments = validate_args(cli_args())
+    registry_user = infer_registry_user(
+        arguments.provider,
+        arguments.username,
+        arguments.password,
+        arguments.dockertoken,
+    )
+
+    add_yaml_representer()
+    url, ns = build_image_endpoint(arguments.provider, repo_base=arguments.repository)
+
+    logging.info(f"Getting all tags from {url}")
+    command_tags = ["skopeo", "list-tags", url]
+    existing_tags = json.loads(_process_run(command_tags))["Tags"]
+
+    logging.info("Getting all the series from ubuntu-distro-info")
+    command_suites = ["ubuntu-distro-info", "--all"]
+    series_names = _process_run(command_suites).split("\n")
+
+    if arguments.unpublished_suite and arguments.unpublished_suite not in series_names:
+        logging.error(
+            f"The provided unpublished suite {arguments.unpublished_suite} "
+            "is not recognized. Ignoring it"
+        )
+
+    logging.info("Building releases info")
+    releases = build_releases_data(
+        series_names, existing_tags, url, ns, arguments, registry_user
+    )
+
+    dict_file = read_ubuntu_data_template()
+    dict_file["releases"] = releases
+
+    create_data_dir(arguments.data_dir)
+
+    ubuntu_data_file = f"{arguments.data_dir}/ubuntu.yaml"
+    write_ubuntu_data_file(ubuntu_data_file, dict_file)
+
+
+if __name__ == "__main__":
+    main()
I guess PyYAML needs to be installed. If so, please adjust the README and .devcontainer accordingly
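A quick way to settle questions like this one is to check whether the module is importable in the target environment. This is a sketch; the helper name is illustrative, and note that PyYAML is imported as `yaml` (the devcontainer already provides it via the `python3-yaml` apt package):

```python
import importlib.util


def module_available(name: str) -> bool:
    # find_spec locates the module without importing it;
    # it returns None when the module is not installed
    return importlib.util.find_spec(name) is not None


# If this prints False, install python3-yaml (apt) or pyyaml (pip)
print(module_available("yaml"))
```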