Merge autopkgtest-cloud:wip/mojo-juju-2 into autopkgtest-cloud:master
| Status: | Work in progress |
|---|---|
| Proposed branch: | autopkgtest-cloud:wip/mojo-juju-2 |
| Merge into: | autopkgtest-cloud:master |
| Diff against target: | 8216 lines (+4981/-205) (has conflicts), 97 files modified |
.gitignore (+1/-1) COPYRIGHT (+3/-2) README.md (+11/-0) autopkgtest-cloud (+1/-0) charms/focal/autopkgtest-cloud-worker/README.md (+1/-1) charms/focal/autopkgtest-cloud-worker/actions.yaml (+5/-0) charms/focal/autopkgtest-cloud-worker/actions/reload-units (+21/-0) charms/focal/autopkgtest-cloud-worker/actions/update-sources (+19/-0) charms/focal/autopkgtest-cloud-worker/autopkgtest-cloud/tools/armhf-lxd-cluster-template.userdata.in (+174/-0) charms/focal/autopkgtest-cloud-worker/autopkgtest-cloud/tools/build-adt-image (+73/-0) charms/focal/autopkgtest-cloud-worker/autopkgtest-cloud/tools/cleanup-instances (+164/-0) charms/focal/autopkgtest-cloud-worker/autopkgtest-cloud/tools/cloud-worker-maintenance (+24/-26) charms/focal/autopkgtest-cloud-worker/autopkgtest-cloud/tools/copy-security-group (+7/-1) charms/focal/autopkgtest-cloud-worker/autopkgtest-cloud/tools/create-armhf-cluster-member (+75/-0) charms/focal/autopkgtest-cloud-worker/autopkgtest-cloud/tools/create-nova-image-new-release (+180/-0) charms/focal/autopkgtest-cloud-worker/autopkgtest-cloud/tools/ensure-keypair (+23/-0) charms/focal/autopkgtest-cloud-worker/autopkgtest-cloud/tools/exec-in-region (+11/-2) charms/focal/autopkgtest-cloud-worker/autopkgtest-cloud/tools/metrics (+142/-0) charms/focal/autopkgtest-cloud-worker/autopkgtest-cloud/tools/run-autopkgtest (+198/-0) charms/focal/autopkgtest-cloud-worker/autopkgtest-cloud/worker-config-production/setup-canonical.sh (+0/-7) charms/focal/autopkgtest-cloud-worker/autopkgtest-cloud/worker/worker (+203/-70) charms/focal/autopkgtest-cloud-worker/autopkgtest-cloud/worker/worker.conf (+1/-3) charms/focal/autopkgtest-cloud-worker/config.yaml (+104/-0) charms/focal/autopkgtest-cloud-worker/layer.yaml (+33/-0) charms/focal/autopkgtest-cloud-worker/lib/systemd.py (+402/-0) charms/focal/autopkgtest-cloud-worker/lib/utils.py (+79/-0) charms/focal/autopkgtest-cloud-worker/metadata.yaml (+16/-4) charms/focal/autopkgtest-cloud-worker/reactive/autopkgtest_cloud_worker.py 
(+597/-0) charms/focal/autopkgtest-cloud-worker/tests/00-setup (+5/-0) charms/focal/autopkgtest-cloud-worker/tests/10-deploy (+35/-0) charms/focal/autopkgtest-cloud-worker/units/autopkgtest-lxd-remote@.service (+5/-6) charms/focal/autopkgtest-cloud-worker/units/autopkgtest@.service (+10/-8) charms/focal/autopkgtest-cloud-worker/units/build-adt-image@.service (+11/-0) charms/focal/autopkgtest-cloud-worker/units/build-adt-image@.timer (+11/-0) charms/focal/autopkgtest-cloud-worker/units/cloud-worker-maintenance.service (+8/-0) charms/focal/autopkgtest-cloud-worker/units/cloud-worker-maintenance.timer (+10/-0) charms/focal/autopkgtest-cloud-worker/units/ensure-adt-image@.service (+12/-0) charms/focal/autopkgtest-cloud-worker/units/metrics.service (+9/-0) charms/focal/autopkgtest-cloud-worker/units/metrics.timer (+8/-0) charms/focal/autopkgtest-cloud-worker/units/pull-package-configs.service (+9/-0) charms/focal/autopkgtest-cloud-worker/units/pull-package-configs.timer (+9/-0) charms/focal/autopkgtest-web/README.md (+1/-1) charms/focal/autopkgtest-web/config.yaml (+0/-12) charms/focal/autopkgtest-web/layer.yaml (+19/-0) charms/focal/autopkgtest-web/metadata.yaml (+7/-5) charms/focal/autopkgtest-web/reactive/autopkgtest_web.py (+327/-0) charms/focal/autopkgtest-web/units/amqp-status-collector.service (+10/-0) charms/focal/autopkgtest-web/units/autopkgtest-web.target (+5/-0) charms/focal/autopkgtest-web/units/cache-amqp.service (+8/-0) charms/focal/autopkgtest-web/units/cache-amqp.timer (+8/-0) charms/focal/autopkgtest-web/units/download-all-results.service (+12/-0) charms/focal/autopkgtest-web/units/download-results.service (+11/-0) charms/focal/autopkgtest-web/units/publish-db.service (+6/-0) charms/focal/autopkgtest-web/units/publish-db.timer (+8/-0) charms/focal/autopkgtest-web/units/update-github-jobs.service (+9/-0) charms/focal/autopkgtest-web/units/update-github-jobs.timer (+8/-0) charms/focal/autopkgtest-web/webcontrol/amqp-status-collector (+10/-4) 
charms/focal/autopkgtest-web/webcontrol/browse.cgi (+3/-3) charms/focal/autopkgtest-web/webcontrol/cache-amqp (+25/-6) charms/focal/autopkgtest-web/webcontrol/download-all-results (+227/-0) charms/focal/autopkgtest-web/webcontrol/download-results (+119/-0) charms/focal/autopkgtest-web/webcontrol/download/utils.py (+59/-0) charms/focal/autopkgtest-web/webcontrol/publish-db (+77/-14) charms/focal/autopkgtest-web/webcontrol/request/app.py (+2/-2) charms/focal/autopkgtest-web/webcontrol/request/submit.py (+2/-6) charms/focal/autopkgtest-web/webcontrol/static/jquery (+1/-0) charms/focal/autopkgtest-web/webcontrol/templates/browse-layout.html (+1/-0) charms/focal/autopkgtest-web/webcontrol/update-github-jobs (+6/-5) dev/null (+0/-16) docs/Makefile (+20/-0) docs/administration.rst (+169/-0) docs/architecture.rst (+125/-0) docs/conf.py (+54/-0) docs/deploying.rst (+180/-0) docs/docs.rst (+8/-0) docs/index.rst (+15/-0) docs/lxd.rst (+69/-0) docs/make.bat (+35/-0) docs/results.rst (+66/-0) docs/webcontrol.rst (+34/-0) mojo/add-floating-ip (+157/-0) mojo/collect (+2/-0) mojo/make-lxd-secgroup (+27/-0) mojo/manifest (+5/-0) mojo/manifest-upgrade (+3/-0) mojo/postdeploy (+15/-0) mojo/service-bundle (+282/-0) mojo/staging (+1/-0) mojo/upgrade-charm (+11/-0) webcontrol (+1/-0) worker-config-production/worker-bos01-arm64.conf (+3/-0) worker-config-production/worker-bos01-ppc64el.conf (+3/-0) worker-config-production/worker-bos01-s390x.conf (+3/-0) worker-config-production/worker-bos02-arm64.conf (+3/-0) worker-config-production/worker-bos02-ppc64el.conf (+3/-0) worker-config-production/worker-bos02-s390x.conf (+3/-0) worker-config-production/worker.conf (+3/-0)

Conflicts:
- worker-config-production/worker-bos01-arm64.conf
- worker-config-production/worker-bos01-ppc64el.conf
- worker-config-production/worker-bos01-s390x.conf
- worker-config-production/worker-bos02-arm64.conf
- worker-config-production/worker-bos02-ppc64el.conf
- worker-config-production/worker-bos02-s390x.conf
- worker-config-production/worker.conf
Related bugs:
| Reviewer | Review Type | Date Requested | Status |
|---|---|---|---|
| Julian Andres Klode (community) | Approve | | |
| Ubuntu Release Team | Pending | | |

Review via email: mp+369232@code.launchpad.net
Commit message
Description of the change
https:/
I've moved from the old Juju and a custom deployment script to the new Juju and deployment via Mojo (https:/
This ***still needs work*** to be ready for the production environment (see the above Trello card) - so do not merge(!), but it is ready for some review now.
Julian Andres Klode (juliank) wrote:
This branch is working for me locally, so it seems good to go.
- c761ac9... by Iain Lane

  autopkgtest-cloud worker, charm: Collect big/long/no run packages from git

  Instead of in the charm's own config. This lets any release team member
  merge changes to these lists, without needing to go to an
  autopkgtest-cloud admin for deployment.

- 800824f... by Iain Lane
  copy-security-group: Omit copying the project_id and tenant_id attributes

  These will be implicitly created. On newer Openstack versions including
  these causes an error:

      neutronclient.common.exceptions.BadRequest: Specifying 'project_id' or
      'tenant_id' other than the authenticated project in request requires
      admin privileges

- 956412d... by Iain Lane
  download-results: Make sure to ack the complete notifications all the time

  Otherwise when we ignore them they'll be left hanging around in the
  queue.

- 3e30d78... by Iain Lane
  download-all-results: Fetch results from Swift API

  This lets us fetch all results, not just the newest ones. That's useful
  in case `download-results` breaks.

- a5cdac9... by Iain Lane
  service-bundle: Add a first armhf worker

  Just one for now - the current user is still up and we mainly just need
  to make sure that it's working.

- 3f5c036... by Iain Lane
  .gitignore: Ignore sphinx build artifacts

- 22663d7... by Iain Lane

  make-lxd-secgroup: Update router IP, add haproxy

- 2b9414c... by Iain Lane

  autopkgtest-lxd-remote@: Be wanted by autopkgtest.target

  It needs to get installed somewhere.
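  "Be wanted by autopkgtest.target" refers to the unit's [Install] section. An illustrative sketch (the actual `autopkgtest-lxd-remote@.service` in this branch may differ):

  ```ini
  # Illustrative only -- not the actual unit shipped in this branch.
  # Giving the template unit an [Install] section naming the target means
  # that enabling an instance links it under autopkgtest.target.wants/,
  # so it starts and stops together with autopkgtest.target.
  [Install]
  WantedBy=autopkgtest.target
  ```

  Enabling an instance (e.g. `systemctl enable autopkgtest-lxd-remote@foo.service`, where `foo` is a hypothetical instance name) then creates the symlink that actually "installs" the unit.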
- 68823f3... by Iain Lane

  update-github-jobs: Exit if PENDING_DIR doesn't exist
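  A hedged sketch of the guard described above; the real update-github-jobs has its own PENDING_DIR path (the one below is hypothetical) and exit logic:

  ```python
  import os

  # Hypothetical path -- the real script defines its own PENDING_DIR.
  PENDING_DIR = "/run/autopkgtest_webcontrol/github-pending"

  def pending_dir_exists(pending_dir=PENDING_DIR):
      """True if the pending-results directory exists.

      When it doesn't, no GitHub requests have been submitted yet, so the
      script has nothing to update and can exit 0 early instead of crashing.
      """
      return os.path.isdir(pending_dir)
  ```

  The script would call `sys.exit(0)` when this returns False.
  
  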
- 0250661... by Iain Lane

  service-bundle: Fix swift creds
- eac3f7b... by Iain Lane

  docs/: Add docs for readthedocs

- 666dd40... by Iain Lane

  .gitignore: Ignore built docs

- 5aff327... by Iain Lane

  Trim the README now we have readthedocs docs
- eaa78aa... by Iain Lane

  service-bundle: expose rabbitmq

  It needs to be externally accessible, e.g. by snakefruit.
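  In a Juju bundle such as mojo/service-bundle, exposing an application is a per-application flag. A hedged sketch (the application stanza below is illustrative, not copied from this branch's bundle):

  ```yaml
  # Illustrative bundle fragment -- names and unit counts are assumptions.
  applications:
    rabbitmq-server:
      charm: rabbitmq-server
      num_units: 1
      expose: true   # equivalent to running `juju expose rabbitmq-server`
  ```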
- 3ec671a... by Iain Lane

  docs: Add instructions for how to update until we get webhooks

- 9f16fe1... by Iain Lane

  service-bundle: Add impish

- 07e8a1f... by Iain Lane

  webcontrol: Add link to RTD docs
- 7ce7401... by Iain Lane

  docs: Document running download-all-results

  It needs to be run after seed-new-release to get the results into
  autopkgtest.db.

- 7ed840c... by Iain Lane
  docs: add a TODO note

- 3d07862... by Iain Lane

  docs: Specify the mirror when building new lxd images
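  The mirror is passed via the MIRROR environment variable, as the crontab entries added elsewhere in this branch do:

  ```sh
  # Build the development-release armhf image against the internal mirror,
  # matching the cron line in armhf-lxd-cluster-template.userdata.in.
  MIRROR=http://ftpmaster.internal/ubuntu/ \
      ~/autopkgtest/tools/autopkgtest-build-lxd images:ubuntu/$(distro-info --devel)/armhf
  ```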
- a731689... by Iain Lane

  docs: Fix whitespace errors, thanks git diff
Unmerged commits
- a731689... by Iain Lane

  docs: Fix whitespace errors, thanks git diff

- 3d07862... by Iain Lane

  docs: Specify the mirror when building new lxd images

- 7ed840c... by Iain Lane

  docs: add a TODO note

- 7ce7401... by Iain Lane

  docs: Document running download-all-results

  It needs to be run after seed-new-release to get the results into
  autopkgtest.db.

- 07e8a1f... by Iain Lane

  webcontrol: Add link to RTD docs
- 9f16fe1... by Iain Lane

  service-bundle: Add impish

- 3ec671a... by Iain Lane

  docs: Add instructions for how to update until we get webhooks

- eaa78aa... by Iain Lane

  service-bundle: expose rabbitmq

  It needs to be externally accessible, e.g. by snakefruit.

- 5aff327... by Iain Lane

  Trim the README now we have readthedocs docs

- 666dd40... by Iain Lane

  .gitignore: Ignore built docs
Preview Diff
1 | diff --git a/.gitignore b/.gitignore |
2 | index 74e308d..accb26a 100644 |
3 | --- a/.gitignore |
4 | +++ b/.gitignore |
5 | @@ -1 +1 @@ |
6 | -deployment/charms/xenial/rabbitmq-server |
7 | +docs/_build |
8 | diff --git a/COPYRIGHT b/COPYRIGHT |
9 | index a07e413..234b0d9 100644 |
10 | --- a/COPYRIGHT |
11 | +++ b/COPYRIGHT |
12 | @@ -1,5 +1,6 @@ |
13 | -autopkgtest-cloud is (C) 2015-2016 Canonical Ltd. |
14 | -Author: Martin Pitt <martin.pitt@ubuntu.com> |
15 | +autopkgtest-cloud is (C) 2015-2019 Canonical Ltd. |
16 | +Author: Martin Pitt <martin.pitt@ubuntu.com>, |
17 | + Iain Lane <iain.lane@canonical.com> |
18 | |
19 | This program is free software: you can redistribute it and/or modify |
20 | it under the terms of the GNU General Public License as published by |
21 | diff --git a/README.md b/README.md |
22 | new file mode 100644 |
23 | index 0000000..75a195d |
24 | --- /dev/null |
25 | +++ b/README.md |
26 | @@ -0,0 +1,11 @@ |
27 | +autopkgtest-cloud |
28 | +================= |
29 | + |
30 | +Introduction |
31 | +------------ |
32 | + |
33 | +This is Ubuntu's environment for running autopkgtests, and collecting / |
34 | +delivering their results. |
35 | + |
36 | +See docs/ or [our readthedocs page](https://autopkgtest-cloud.readthedocs.io) |
37 | +for the complete documentation for developers and admins of autopkgtest-cloud. |
38 | diff --git a/autopkgtest-cloud b/autopkgtest-cloud |
39 | new file mode 120000 |
40 | index 0000000..a64523e |
41 | --- /dev/null |
42 | +++ b/autopkgtest-cloud |
43 | @@ -0,0 +1 @@ |
44 | +charms/focal/autopkgtest-cloud-worker/autopkgtest-cloud |
45 | \ No newline at end of file |
46 | diff --git a/deployment/charms/xenial/autopkgtest-cloud-worker/README.md b/charms/focal/autopkgtest-cloud-worker/README.md |
47 | similarity index 87% |
48 | rename from deployment/charms/xenial/autopkgtest-cloud-worker/README.md |
49 | rename to charms/focal/autopkgtest-cloud-worker/README.md |
50 | index a66b49e..44da3e0 100644 |
51 | --- a/deployment/charms/xenial/autopkgtest-cloud-worker/README.md |
52 | +++ b/charms/focal/autopkgtest-cloud-worker/README.md |
53 | @@ -6,4 +6,4 @@ This charm sets up an autopkgtest cloud worker. This receives test requests from |
54 | Contact Information |
55 | ------------------- |
56 | |
57 | -Author: Martin Pitt <martin.pitt@ubuntu.com> |
58 | +Author: Iain Lane <iain.lane@canonical.com> |
59 | diff --git a/charms/focal/autopkgtest-cloud-worker/actions.yaml b/charms/focal/autopkgtest-cloud-worker/actions.yaml |
60 | new file mode 100644 |
61 | index 0000000..98243ff |
62 | --- /dev/null |
63 | +++ b/charms/focal/autopkgtest-cloud-worker/actions.yaml |
64 | @@ -0,0 +1,5 @@ |
65 | +reload-units: |
66 | + description: Reload all systemd units. Use this after you upgrade the charm to change the worker code, for example. |
67 | + |
68 | +update-sources: |
69 | + description: Update (git pull) autopkgtest/autodep8 |
70 | diff --git a/charms/focal/autopkgtest-cloud-worker/actions/reload-units b/charms/focal/autopkgtest-cloud-worker/actions/reload-units |
71 | new file mode 100755 |
72 | index 0000000..52ab519 |
73 | --- /dev/null |
74 | +++ b/charms/focal/autopkgtest-cloud-worker/actions/reload-units |
75 | @@ -0,0 +1,21 @@ |
76 | +#!/usr/local/sbin/charm-env python3 |
77 | + |
78 | +from systemd import get_units, reload_unit |
79 | + |
80 | + |
81 | +if __name__ == "__main__": |
82 | + (cloud, lxd, _) = get_units() |
83 | + cloud_object_paths = [ |
84 | + cloud[region][arch][n] |
85 | + for region in cloud |
86 | + for arch in cloud[region] |
87 | + for n in cloud[region][arch] |
88 | + ] |
89 | + lxd_object_paths = [ |
90 | + lxd[arch][ip][n] |
91 | + for arch in lxd |
92 | + for ip in lxd[arch] |
93 | + for n in lxd[arch][ip] |
94 | + ] |
95 | + for op in cloud_object_paths + lxd_object_paths: |
96 | + reload_unit(op) |
97 | diff --git a/charms/focal/autopkgtest-cloud-worker/actions/update-sources b/charms/focal/autopkgtest-cloud-worker/actions/update-sources |
98 | new file mode 100755 |
99 | index 0000000..7bd2f9d |
100 | --- /dev/null |
101 | +++ b/charms/focal/autopkgtest-cloud-worker/actions/update-sources |
102 | @@ -0,0 +1,19 @@ |
103 | +#!/usr/local/sbin/charm-env python3 |
104 | + |
105 | +import sys |
106 | + |
107 | +from reactive.autopkgtest_cloud_worker import ( |
108 | + AUTOPKGTEST_LOCATION, |
109 | + AUTODEP8_LOCATION, |
110 | +) |
111 | + |
112 | +import pygit2 |
113 | +from utils import pull, install_autodep8, UnixUser |
114 | + |
115 | + |
116 | +if __name__ == "__main__": |
117 | + for location in [AUTOPKGTEST_LOCATION, AUTODEP8_LOCATION]: |
118 | + repo = pygit2.Repository(location) |
119 | + with UnixUser("ubuntu"): |
120 | + pull(repo) |
121 | + install_autodep8(AUTODEP8_LOCATION) |
122 | diff --git a/charms/focal/autopkgtest-cloud-worker/autopkgtest-cloud/tools/armhf-lxd-cluster-template.userdata.in b/charms/focal/autopkgtest-cloud-worker/autopkgtest-cloud/tools/armhf-lxd-cluster-template.userdata.in |
123 | new file mode 100644 |
124 | index 0000000..aca8491 |
125 | --- /dev/null |
126 | +++ b/charms/focal/autopkgtest-cloud-worker/autopkgtest-cloud/tools/armhf-lxd-cluster-template.userdata.in |
127 | @@ -0,0 +1,174 @@ |
128 | +#cloud-config |
129 | + |
130 | + # cloud-init user-data file for setting up an autopkgtest slave for running |
131 | + # armhf LXD containers. This should be booted on an arm64 nova m1.large instance. |
132 | + |
133 | +manage_etc_hosts: true |
134 | +apt_update: true |
135 | +apt_upgrade: true |
136 | + |
137 | +packages: |
138 | + - rng-tools |
139 | + - distro-info |
140 | + - libdpkg-perl |
141 | + - python3-netifaces |
142 | + - python3-jinja2 |
143 | + |
144 | +snap: |
145 | + commands: |
146 | + 00: ['refresh', '--channel=latest/stable', 'lxd'] |
147 | + |
148 | +# we don't need much space for the OS, keep most of the disk free for a btrfs partition for lxd |
149 | +growpart: |
150 | + mode: off |
151 | + |
152 | +write_files: |
153 | + - path: /etc/environment |
154 | + content: | |
155 | + PATH="/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games" |
156 | + http_proxy=http://squid.internal:3128 |
157 | + https_proxy=http://squid.internal:3128 |
158 | + no_proxy=127.0.0.1,127.0.1.1,localhost,localdomain,novalocal,internal,archive.ubuntu.com,security.ubuntu.com,ddebs.ubuntu.com,10.0.0.0/8 |
159 | + - path: /etc/sysctl.d/99-autopkgtest-inotify.conf |
160 | + content: | |
161 | + # Increase the user inotify instance limit to allow for about |
162 | + # 100 containers to run before the limit is hit again |
163 | + fs.inotify.max_user_instances = 1024 |
164 | + |
165 | +# LP: #1572026, RT#90771 |
166 | + - path: /etc/rc.local |
167 | + permissions: '0755' |
168 | + content: | |
169 | + #!/bin/sh -e |
170 | + iptables -t mangle -A FORWARD -p tcp --tcp-flags SYN,RST SYN -j TCPMSS --clamp-mss-to-pmtu |
171 | + |
172 | +{%- if role == 'bootstrap' or role == 'leader' %} |
173 | + - path: /var/tmp/setup/ubuntu-crontab |
174 | + content: | |
175 | + PATH=/snap/bin:/usr/bin:/bin |
176 | + # development release |
177 | + 37 2 * * * MIRROR=http://ftpmaster.internal/ubuntu/ ~/autopkgtest/tools/autopkgtest-build-lxd images:ubuntu/$(distro-info --devel)/armhf |
178 | + # stable releases |
179 | + 1 0 * * 0 for r in $(distro-info --supported | grep -v $(distro-info --devel)); do MIRROR=http://ftpmaster.internal/ubuntu/ ~/autopkgtest/tools/autopkgtest-build-lxd images:ubuntu/$r/armhf; done |
180 | +{%- endif %} |
181 | + |
182 | + - path: /var/tmp/setup/lxd-init |
183 | + content: | |
184 | + config: |
185 | + core.https_address: '{{ '{{' }} myip {{ '}}' }}:8443' |
186 | + core.trust_password: autopkgtest |
187 | + networks: |
188 | + - config: |
189 | + bridge.mtu: "1458" |
190 | + ipv4.address: 10.0.4.1/24 |
191 | + ipv4.nat: "true" |
192 | + ipv6.address: none |
193 | + raw.dnsmasq: dhcp-option=6,91.189.91.130 |
194 | + description: "" |
195 | + name: lxdbr0 |
196 | + type: bridge |
197 | + storage_pools: |
198 | + - config: |
199 | + source: /srv/lxd |
200 | + description: "" |
201 | + name: default |
202 | + driver: dir |
203 | + profiles: |
204 | + - config: |
205 | + raw.lxc: |- |
206 | + lxc.apparmor.profile=unconfined |
207 | + lxc.seccomp.profile= |
208 | + security.nesting: "true" |
209 | + description: Default LXD profile |
210 | + devices: |
211 | + eth0: |
212 | + name: eth0 |
213 | + nictype: bridged |
214 | + parent: lxdbr0 |
215 | + type: nic |
216 | + root: |
217 | + path: / |
218 | + pool: default |
219 | + type: disk |
220 | + name: default |
221 | + cluster: |
222 | + enabled: true |
223 | + images_minimal_replica: -1 |
224 | + server_name: {{uuid}} |
225 | +{%- if role == 'node' %} |
226 | + server_address: {{ '{{' }} myip {{ '}}' }}:8443 |
227 | + cluster_address: {{server}}:8443 |
228 | + cluster_certificate: "{{cert}} |
229 | + " |
230 | + cluster_password: autopkgtest |
231 | + member_config: |
232 | + - entity: storage-pool |
233 | + name: default |
234 | + key: source |
235 | + value: "" |
236 | +{%- endif %} |
237 | + |
238 | +# we can't do these things during runcmd as this is run *during* boot |
239 | + - path: /var/tmp/setup/post-boot-setup.sh |
240 | + content: | |
241 | + # wait until we are fully booted |
242 | + while true; do O=$(systemctl is-system-running || true); [ "$O" = running -o "$O" = degraded ] && break; sleep 1; done |
243 | + systemctl daemon-reload |
244 | + systemctl start rng-tools |
245 | + systemctl stop snap.lxd.daemon.unix.socket snap.lxd.daemon.service || true |
246 | + systemctl start snap.lxd.daemon.unix.socket snap.lxd.daemon.service |
247 | + python3 -c "from jinja2 import Template; import netifaces as ni; ni.ifaddresses('eth0'); ip = ni.ifaddresses('eth0')[ni.AF_INET][0]['addr']; f = open('/var/tmp/setup/lxd-init'); print(Template(f.read()).render(myip=ip))" | lxd init --preseed |
248 | +{%- if role == 'bootstrap' or role == 'leader' %} |
249 | + for r in $(distro-info --supported); do /home/ubuntu/autopkgtest/tools/autopkgtest-build-lxd images:ubuntu/$r/armhf; done |
250 | +{%- endif %} |
251 | + echo "Finished building - rebooting so changed kernel command line comes into effect..." |
252 | + reboot |
253 | + |
254 | +# use gdisk to fix GPT partition table to span the whole disk |
255 | +# http://askubuntu.com/questions/446991/gparted-claims-whole-hard-drive-is-unallocated-and-gives-warning-about-gpt-table |
256 | +# then create new ~ 80 GB partition and mount it as /srv |
257 | +# use compat_uts_machine=armv7l so that 'linux32 uname -m' shows armv7l, like the buildds |
258 | +runcmd: |
259 | + - printf 'x\ne\nw\ny\n' | gdisk /dev/vda |
260 | + - printf "d\n1\nn\n\n\n+8000M\n\n\nw\ny\n" | gdisk /dev/vda |
261 | + - partprobe |
262 | + - resize2fs /dev/vda1 |
263 | + - lsblk -dno PARTUUID /dev/vda1 | awk '{ print "GRUB_FORCE_PARTUUID=" $0 }' | tee /etc/default/grub.d/40-force-partuuid.cfg |
264 | + - update-grub |
265 | + - echo 'mkpart primary btrfs 8495MB 100%\n' | parted /dev/vda |
266 | + - for timeout in `seq 10`; do [ -e /dev/vda2 ] && break; sleep 1; done |
267 | + |
268 | + - mkfs.btrfs -L autopkgtest /dev/vda2 |
269 | + - systemctl stop snap.lxd.daemon.service snap.lxd.daemon.unix.socket |
270 | + - echo 'LABEL=autopkgtest /srv btrfs defaults 0 0' >> /etc/fstab |
271 | + - echo '/srv/var-snap-lxd-common /var/snap/lxd/common none defaults,bind 0 0' >> /etc/fstab |
272 | + - mount /srv |
273 | + - mkdir -m 0755 /srv/var-snap-lxd-common |
274 | + - mount /var/snap/lxd |
275 | + - mkdir -m 1777 -p /srv/tmp/ |
276 | + - echo '/srv/tmp /tmp none rw,bind 0 0' >> /etc/fstab |
277 | + - mount /tmp |
278 | + - mkdir -p /srv/lxd |
279 | + - fallocate -l 512M /swapfile |
280 | + - chmod 600 /swapfile |
281 | + - mkswap /swapfile |
282 | + - echo '/swapfile none swap defaults 0 0' >> /etc/fstab |
283 | + - swapon /swapfile |
284 | + - systemctl start snap.lxd.daemon.service snap.lxd.daemon.unix.socket |
285 | + |
286 | + - /etc/rc.local |
287 | + |
288 | + - sed -i '/GRUB_CMDLINE_LINUX_DEFAULT=/ { s/quiet splash//; s/"$/ compat_uts_machine=armv7l"/ }' /etc/default/grub |
289 | + - update-grub |
290 | + |
291 | + - apt-get install -y linux-image-generic |
292 | + - apt-get purge --auto-remove -y linux-headers-generic rsyslog mlocate |
293 | + - dpkg -P `dpkg -l linux-headers* | grep ^ii | awk '{print $2}'` |
294 | + - adduser ubuntu lxd |
295 | + |
296 | +{%- if role == 'bootstrap' or role == 'leader' %} |
297 | + - apt-get install -y --no-install-recommends git |
298 | + - su - -c '[ -e autopkgtest ] || git clone https://git.launchpad.net/~ubuntu-release/autopkgtest/+git/development autopkgtest' ubuntu |
299 | + - su - -c 'cat /var/tmp/setup/ubuntu-crontab | crontab -' ubuntu |
300 | +{%- endif %} |
301 | + - sh -xe /var/tmp/setup/post-boot-setup.sh & |
302 | diff --git a/charms/focal/autopkgtest-cloud-worker/autopkgtest-cloud/tools/build-adt-image b/charms/focal/autopkgtest-cloud-worker/autopkgtest-cloud/tools/build-adt-image |
303 | new file mode 100755 |
304 | index 0000000..8d29c52 |
305 | --- /dev/null |
306 | +++ b/charms/focal/autopkgtest-cloud-worker/autopkgtest-cloud/tools/build-adt-image |
307 | @@ -0,0 +1,73 @@ |
308 | +#!/bin/bash |
309 | +# Build adt cloud images with create-nova-image-new-release for the given |
310 | +# cloud, release and arch |
311 | + |
312 | +set -eu |
313 | + |
314 | +IFS="[- ]" read -r RELEASE REGION ARCH bootstrap <<< "$@" |
315 | + |
316 | +if [ -z "${RELEASE}" ] || [ -z "${REGION}" ] || [ -z "${ARCH}" ]; then |
317 | + echo "Usage: $0 RELEASE REGION ARCH" >&2 |
318 | + exit 1 |
319 | +fi |
320 | + |
321 | +[ -z "${MIRROR:-}" ] && . ~/mirror.rc |
322 | +export MIRROR |
323 | + |
324 | +[ -z "${NET_NAME:-}" ] && . ~/net-name.rc |
325 | +export NET_NAME |
326 | + |
327 | +if [ -z "${USE_CLOUD_CONFIG_FROM_ENV:-}" ]; then |
328 | + if [ -e ~/cloudrcs/${REGION}-${ARCH}.rc ]; then |
329 | + . ~/cloudrcs/${REGION}-${ARCH}.rc |
330 | + else |
331 | + . ~/cloudrcs/${REGION}.rc |
332 | + fi |
333 | +fi |
334 | + |
335 | +IMAGE_NAME="adt/ubuntu-${RELEASE}-${ARCH}-server-$(date '+%Y%m%d').img" |
336 | + |
337 | +if openstack image show "${IMAGE_NAME}" >/dev/null 2>/dev/null; then |
338 | + echo "$0: Image '${IMAGE_NAME}' already exists, not re-creating." >&2 |
339 | + exit 0 |
340 | +fi |
341 | + |
342 | +IMGS="$(openstack image list -f value)" |
343 | + |
344 | +PREVIMG=$(echo "$IMGS" | grep "adt/ubuntu-${RELEASE}-${ARCH}" | tail -n1 | awk '{print $2}') |
345 | + |
346 | +if [ -n "${ENSURE_ONLY:-}" ] && [ -n "${PREVIMG}" ]; then |
347 | + echo "$0: ENSURE_ONLY passed and we have a previous image (${PREVIMG}), doing nothing." >&2 |
348 | + exit 0 |
349 | +fi |
350 | + |
351 | +RELEASE_VERSION="$(distro-info --series "${RELEASE}" --release | awk '{print $1}')" |
352 | +IMG=$(echo "$IMGS" | grep -E "${RELEASE}-(?:daily|${RELEASE_VERSION})-${ARCH}" | tail -n1 | awk '{print $2}') |
353 | +if [ -z "$IMG" ]; then |
354 | + # Fall back to previous adt image and upgrade that |
355 | + IMG="${PREVIMG}" |
356 | +fi |
357 | + |
358 | +# If we have neither, fall back to the latest stable release |
359 | +BASE_RELEASE=${BASE_RELEASE:-$(distro-info --stable)} |
360 | +if [ -z "$IMG" ] && [ "${RELEASE}" = "$(distro-info --devel)" ] && [ -n "${BASE_RELEASE:-}" ]; then |
361 | + echo "$0: No image exists for ${RELEASE}; trying to bootstrap from ${BASE_RELEASE}" |
362 | + IMG=$(echo "$IMGS" | grep "${BASE_RELEASE}-daily-${ARCH}" | tail -n1 | awk '{print $2}') |
363 | + [ -n "$IMG" ] || IMG=$(echo "$IMGS" | grep "adt/ubuntu-${BASE_RELEASE}-${ARCH}" | tail -n1 | awk '{print $2}') |
364 | +fi |
365 | + |
366 | +if [ -z "$IMG" ]; then |
367 | + echo "$0: No matching images found" >&2 |
368 | + exit 1 |
369 | +fi |
370 | + |
371 | +echo "$REGION-$ARCH: using image $IMG" |
372 | +KEYNAME=${KEYNAME:-testbed-$(hostname)} |
373 | +$(dirname $0)/create-nova-image-new-release $RELEASE $ARCH $IMG "${KEYNAME}" "$IMAGE_NAME" |
374 | +# clean old images |
375 | +openstack image list --private -f value | grep --color=none -v "$IMAGE_NAME" | while read id img state; do |
376 | + if $(echo ${img} | grep -qs "adt/ubuntu-${RELEASE}-${ARCH}") && [ ${state} = active ]; then |
377 | + echo "Cleaning up old image $img ($id)" |
378 | + openstack image delete $id |
379 | + fi |
380 | +done |
381 | diff --git a/charms/focal/autopkgtest-cloud-worker/autopkgtest-cloud/tools/cleanup-instances b/charms/focal/autopkgtest-cloud-worker/autopkgtest-cloud/tools/cleanup-instances |
382 | new file mode 100755 |
383 | index 0000000..aad8328 |
384 | --- /dev/null |
385 | +++ b/charms/focal/autopkgtest-cloud-worker/autopkgtest-cloud/tools/cleanup-instances |
386 | @@ -0,0 +1,164 @@ |
387 | +#!/usr/bin/python3 |
388 | +# clean up broken/orphaned instances |
389 | +import logging |
390 | +import os |
391 | +import re |
392 | +import subprocess |
393 | +import time |
394 | +from urllib.error import HTTPError |
395 | + |
396 | +import novaclient.client |
397 | +from influxdb import InfluxDBClient |
398 | +from keystoneauth1 import session |
399 | +from keystoneauth1.identity import v2, v3 |
400 | + |
401 | +try: |
402 | + INFLUXDB_CONTEXT = os.environ["INFLUXDB_CONTEXT"] |
403 | + INFLUXDB_DATABASE = os.environ["INFLUXDB_DATABASE"] |
404 | + INFLUXDB_HOSTNAME = os.environ["INFLUXDB_HOSTNAME"] |
405 | + INFLUXDB_PASSWORD = os.environ["INFLUXDB_PASSWORD"] |
406 | + INFLUXDB_PORT = os.environ["INFLUXDB_PORT"] |
407 | + INFLUXDB_USERNAME = os.environ["INFLUXDB_USERNAME"] |
408 | + |
409 | + influx_client = InfluxDBClient( |
410 | + INFLUXDB_HOSTNAME, |
411 | + INFLUXDB_PORT, |
412 | + INFLUXDB_USERNAME, |
413 | + INFLUXDB_PASSWORD, |
414 | + INFLUXDB_DATABASE, |
415 | + ) |
416 | +except KeyError: |
417 | + influx_client = None |
418 | + |
419 | +try: |
420 | + region = os.environ["REGION"] |
421 | +except KeyError: # probably being run manually; exec-in-region sets this |
422 | + region = None |
423 | + influx_client = None |
424 | + |
425 | +# number of seconds at which an instance counts as "implausibly old" long_tests |
426 | +# has a copy timeout, test timeout and a maximum build time (Restrictions: |
427 | +# build-needed) of 40,000 + 40,000s + 40,000s ~ 33 h |
428 | +MAX_AGE = 36 * 3600 |
429 | + |
430 | +measurements = [] |
431 | + |
432 | +if os.environ.get("DEBUG"): |
433 | + logging.basicConfig(level=logging.DEBUG) |
434 | + |
435 | +if os.environ.get("OS_IDENTITY_API_VERSION") == "3": |
436 | + auth = v3.Password( |
437 | + auth_url=os.environ["OS_AUTH_URL"], |
438 | + username=os.environ["OS_USERNAME"], |
439 | + password=os.environ["OS_PASSWORD"], |
440 | + project_name=os.environ["OS_PROJECT_NAME"], |
441 | + user_domain_name=os.environ["OS_USER_DOMAIN_NAME"], |
442 | + project_domain_name=os.environ["OS_PROJECT_DOMAIN_NAME"], |
443 | + ) |
444 | +else: |
445 | + auth = v2.Password( |
446 | + auth_url=os.environ["OS_AUTH_URL"], |
447 | + username=os.environ["OS_USERNAME"], |
448 | + password=os.environ["OS_PASSWORD"], |
449 | + tenant_name=os.environ["OS_TENANT_NAME"], |
450 | + ) |
451 | + |
452 | +sess = session.Session(auth=auth) |
453 | + |
454 | + |
455 | +nova = novaclient.client.Client( |
456 | + "2", session=sess, region_name=os.environ["OS_REGION_NAME"] |
457 | +) |
458 | + |
459 | +for instance in nova.servers.list(): |
460 | + age = time.mktime(time.gmtime()) - time.mktime( |
461 | + time.strptime(instance.created, "%Y-%m-%dT%H:%M:%SZ") |
462 | + ) |
463 | + logging.debug( |
464 | + "%s: status %s, age %is, networks %s" |
465 | + % (instance.name, instance.status, age, instance.networks) |
466 | + ) |
467 | + |
468 | + # check state |
469 | + if instance.status == "ERROR": |
470 | + try: |
471 | + message = str(instance.fault) |
472 | + except AttributeError: |
473 | + message = "fault message not available" |
474 | + msg = "instance {} ({}) is in error state (message: {})".format( |
475 | + instance.name, instance.id, message |
476 | + ) |
477 | + logging.warning("{}, deleting".format(msg)) |
478 | + measurements.append( |
479 | + { |
480 | + "measurement": "autopkgtest_delete_event", |
481 | + "fields": {"deleted": True}, |
482 | + "tags": { |
483 | + "deletion_reason": "instance_in_error", |
484 | + "instance": INFLUXDB_CONTEXT, |
485 | + "message": msg, |
486 | + "region": region, |
487 | + }, |
488 | + } |
489 | + ) |
490 | + instance.delete() |
491 | + continue |
492 | + |
493 | + if not instance.name.startswith("adt-"): |
494 | + continue |
495 | + |
496 | + # check age |
497 | + if age > MAX_AGE: |
498 | + message = "instance {} ({}) is {:.1f} hours old, deleting".format( |
499 | + instance.name, instance.id, (float(age) / 3600) |
500 | + ) |
501 | + logging.warning("{}, deleting".format(message)) |
502 | + measurements.append( |
503 | + { |
504 | + "measurement": "autopkgtest_delete_event", |
505 | + "fields": {"deleted": True}, |
506 | + "tags": { |
507 | + "deletion_reason": "instance_too_old", |
508 | + "instance": INFLUXDB_CONTEXT, |
509 | + "message": message, |
510 | + "region": region, |
511 | + }, |
512 | + } |
513 | + ) |
514 | + instance.delete() |
515 | + |
516 | + # check matching adt-run process for instance name |
517 | + try: |
518 | + if ( |
519 | + subprocess.call( |
520 | + [ |
521 | + "pgrep", |
522 | + "-f", |
523 | + "python.*autopkgtest.* --name %s" |
524 | + % re.escape(instance.name), |
525 | + ], |
526 | + stdout=subprocess.PIPE, |
527 | + ) |
528 | + != 0 |
529 | + ): |
530 | + message = "instance {} ({}) has no associated autopkgtest".format( |
531 | + instance.name, instance.id |
532 | + ) |
533 | + measurements.append( |
534 | + { |
535 | + "measurement": "autopkgtest_delete_event", |
536 | + "fields": {"deleted": True}, |
537 | + "tags": { |
538 | + "deletion_reason": "instance_no_autopkgtest", |
539 | + "instance": INFLUXDB_CONTEXT, |
540 | + "message": message, |
541 | + "region": region, |
542 | + }, |
543 | + } |
544 | + ) |
545 | + instance.delete() |
546 | + except IndexError: |
547 | + logging.warning("instance %s has invalid name" % instance.name) |
548 | + |
549 | + if measurements and influx_client: |
550 | + influx_client.write_points(measurements) |
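The cleanup tool above accumulates one `autopkgtest_delete_event` point per deletion and submits the whole batch with a single `write_points()` call at the end. A minimal sketch of one such point with the same measurement/fields/tags layout (the `delete_event` helper and the example values are illustrative, not part of the branch):

```python
def delete_event(reason, message, instance_context, region):
    # One InfluxDB point in the shape cleanup-instances accumulates;
    # write_points() later submits the whole batch in one call.
    return {
        "measurement": "autopkgtest_delete_event",
        "fields": {"deleted": True},
        "tags": {
            "deletion_reason": reason,
            "instance": instance_context,
            "message": message,
            "region": region,
        },
    }

point = delete_event(
    "instance_too_old",
    "instance adt-foo (1234) is 5.0 hours old",
    "production",  # hypothetical INFLUXDB_CONTEXT value
    "lcy01",       # hypothetical region name
)
```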
551 | diff --git a/tools/cleanup-lxd b/charms/focal/autopkgtest-cloud-worker/autopkgtest-cloud/tools/cleanup-lxd |
552 | similarity index 100% |
553 | rename from tools/cleanup-lxd |
554 | rename to charms/focal/autopkgtest-cloud-worker/autopkgtest-cloud/tools/cleanup-lxd |
555 | diff --git a/tools/cloud-worker-maintenance b/charms/focal/autopkgtest-cloud-worker/autopkgtest-cloud/tools/cloud-worker-maintenance |
556 | similarity index 55% |
557 | rename from tools/cloud-worker-maintenance |
558 | rename to charms/focal/autopkgtest-cloud-worker/autopkgtest-cloud/tools/cloud-worker-maintenance |
559 | index 06d01d4..b1471a9 100755 |
560 | --- a/tools/cloud-worker-maintenance |
561 | +++ b/charms/focal/autopkgtest-cloud-worker/autopkgtest-cloud/tools/cloud-worker-maintenance |
562 | @@ -3,41 +3,39 @@ |
563 | # Run this from cron a few times a day. |
564 | set -eu |
565 | |
566 | -CLOUD_RCS="$(find -maxdepth 1 -name '*.rc' ! -type l)" |
567 | -TOOLS_DIR="$(dirname $0)" |
568 | +TOOLS_DIR="$(dirname "${0}")" |
569 | |
570 | # clean up orphaned/broken instances |
571 | -for cloudrc in $CLOUD_RCS; do |
572 | - . $cloudrc |
573 | - # this might fail because a cloud is temporarily down |
574 | - $TOOLS_DIR/cleanup-instances || true |
575 | -done |
576 | +if [ -d "${HOME}/cloudrcs/" ]; then |
577 | + for cloudrc in "${HOME}"/cloudrcs/*.rc; do |
578 | + if [ -e "${cloudrc}" ]; then |
579 | + region="$(basename "${cloudrc%*.rc}")" |
580 | + # this might fail because a cloud is temporarily down |
581 | + ( "${TOOLS_DIR}/exec-in-region" "${region}" "$TOOLS_DIR/cleanup-instances" || true ) |
582 | + fi |
583 | + done |
584 | +fi |
585 | |
586 | # restart died workers (they commonly die because of hitting the quota with |
587 | # lots of orphaned instances) |
588 | -if [ -d /run/systemd/system ]; then |
589 | - # check for failed workers and show their log tails |
590 | - for unit in $(systemctl --no-legend --all --state=inactive list-units 'autopkgtest@*.service' | awk '{print $1}'); do |
591 | - if [ "$(systemctl show --property=ExecMainStatus $unit)" = "ExecMainStatus=99" ] || |
592 | - [ "$(systemctl show --property=Result $unit)" != "Result=success" ]; then |
593 | - # ignore "quota exceeded" failures, not much that we can do about them |
594 | - log="$(journalctl --output=cat --no-pager --lines=300 -u $unit)" |
595 | - if ! echo "$log" | grep -q 'novaclient.exceptions.Forbidden: Quota exceeded'; then |
596 | - cat <<EOF |
597 | +# check for failed workers and show their log tails |
598 | +for unit in $(systemctl --no-legend --all --state=inactive list-units 'autopkgtest@*.service' | awk '{print $1}'); do |
599 | +if [ "$(systemctl show --property=ExecMainStatus "${unit}")" = "ExecMainStatus=99" ] || |
600 | + [ "$(systemctl show --property=Result "${unit}")" != "Result=success" ]; then |
601 | + # ignore "quota exceeded" failures, not much that we can do about them |
602 | + log="$(journalctl --output=cat --no-pager --lines=300 -u "${unit}")" |
603 | + if ! echo "${log}" | grep -q 'novaclient.exceptions.Forbidden: Quota exceeded'; then |
604 | + cat <<EOF |
605 | ==== worker $unit failed ==== |
606 | $log |
607 | |
608 | EOF |
609 | - fi |
610 | - fi |
611 | - sudo systemctl reset-failed "$unit" |
612 | - done |
613 | - |
614 | - sudo systemctl stop autopkgtest.target |
615 | - sudo systemctl start autopkgtest.target |
616 | -else |
617 | - sudo initctl --quiet start autopkgtest-worker-launch |
618 | + fi |
619 | fi |
620 | +sudo systemctl reset-failed "${unit}" |
621 | +done |
622 | + |
623 | +sudo systemctl restart autopkgtest.target |
624 | |
625 | # cleanup lxd cruft |
626 | -$TOOLS_DIR/cleanup-lxd |
627 | +"${TOOLS_DIR}/cleanup-lxd" |
628 | diff --git a/tools/copy-security-group b/charms/focal/autopkgtest-cloud-worker/autopkgtest-cloud/tools/copy-security-group |
629 | similarity index 95% |
630 | rename from tools/copy-security-group |
631 | rename to charms/focal/autopkgtest-cloud-worker/autopkgtest-cloud/tools/copy-security-group |
632 | index 5a956a1..6a6a072 100755 |
633 | --- a/tools/copy-security-group |
634 | +++ b/charms/focal/autopkgtest-cloud-worker/autopkgtest-cloud/tools/copy-security-group |
635 | @@ -18,7 +18,8 @@ from neutronclient.v2_0 import client |
636 | |
637 | # Members in a security group rule that cannot be copied. |
638 | RULE_MEMBERS_IGNORE = ["id", "tags", "updated_at", |
639 | - "created_at", "revision_number"] |
640 | + "created_at", "revision_number", |
641 | + "project_id", "tenant_id", ] |
642 | |
643 | |
644 | def main(): |
645 | @@ -30,6 +31,11 @@ def main(): |
646 | help='only delete group') |
647 | args = parser.parse_args() |
648 | |
649 | + # we get called from ExecStartPre of lxd units too (where |
650 | + # copy-security-group isn't required), just bail out if that's the case |
651 | + if 'lxd' in args.name: |
652 | + return |
653 | + |
654 | if os.environ.get('OS_IDENTITY_API_VERSION') == '3': |
655 | auth = v3.Password(auth_url=os.environ['OS_AUTH_URL'], |
656 | username=os.environ['OS_USERNAME'], |
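The expanded `RULE_MEMBERS_IGNORE` list drives a simple filter when a rule is re-created in the destination security group: server-assigned members (now including `project_id`/`tenant_id`) must be dropped before the rule can be posted back to the API. A sketch of that filtering step (the `copyable_rule` name is mine):

```python
RULE_MEMBERS_IGNORE = ["id", "tags", "updated_at", "created_at",
                       "revision_number", "project_id", "tenant_id"]

def copyable_rule(rule):
    # Keep only the members that can be passed back to the API when
    # re-creating the rule in the destination security group.
    return {k: v for k, v in rule.items() if k not in RULE_MEMBERS_IGNORE}

rule = copyable_rule({"id": "abc", "project_id": "p1",
                      "direction": "ingress", "port_range_min": 22})
```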
657 | diff --git a/charms/focal/autopkgtest-cloud-worker/autopkgtest-cloud/tools/create-armhf-cluster-member b/charms/focal/autopkgtest-cloud-worker/autopkgtest-cloud/tools/create-armhf-cluster-member |
658 | new file mode 100755 |
659 | index 0000000..75e4c74 |
660 | --- /dev/null |
661 | +++ b/charms/focal/autopkgtest-cloud-worker/autopkgtest-cloud/tools/create-armhf-cluster-member |
662 | @@ -0,0 +1,75 @@ |
663 | +#!/usr/bin/env python3 |
664 | + |
665 | +# There are three modes. |
666 | +# - bootstrap: The first node of a new cluster. Doesn't connect to anything |
667 | +# else. Sets up cron jobs to regularly rebuild images. |
668 | +# - node: Any subsequent nodes. Connect to the given target IP (can be any |
669 | +# other node). No cron job created. |
670 | +# - leader: A new bootstrap-like node, but for an existing cluster. Cron jobs |
671 | +# are set up as for bootstrap nodes. Use this when the bootstrap node goes |
672 | +# down. |
673 | + |
674 | +import os |
675 | +import subprocess |
676 | +import sys |
677 | +import uuid |
678 | + |
679 | +from jinja2 import Template |
680 | + |
681 | +IN_FILE = os.path.join( |
682 | + os.path.dirname(os.path.realpath(__file__)), |
683 | + "armhf-lxd-cluster-template.userdata.in", |
684 | +) |
685 | + |
686 | + |
687 | +def get_ssl_cert(ip): |
688 | + out = subprocess.check_output( |
689 | + [ |
690 | + "ssh", |
691 | + "-o", |
692 | + "UserKnownHostsFile=/dev/null", |
693 | + "-o", |
694 | + "StrictHostKeyChecking=no", |
695 | + "ubuntu@{}".format(ip), |
696 | + "sed ':a;N;$!ba;s/\\n/\\n\\n/g' /var/snap/lxd/common/lxd/cluster.crt", |
697 | + ] |
698 | + ) |
699 | + |
700 | + return " ".join(out.decode("utf-8").rstrip("\n").splitlines(True)) |
701 | + |
702 | + |
703 | +def usage(): |
704 | + print( |
705 | + "usage: {} [bootstrap|leader|node] [IP of cluster to join]".format( |
706 | + os.path.basename(sys.argv[0]) |
707 | + ), |
708 | + file=sys.stderr, |
709 | + ) |
710 | + |
711 | + |
712 | +try: |
713 | + role = sys.argv[1] |
714 | + if role in ("leader", "node"): |
715 | + to_join = sys.argv[2] |
716 | +except IndexError: |
717 | + usage() |
718 | + sys.exit(1) |
719 | + |
720 | +if role not in ["bootstrap", "leader", "node"]: |
721 | + usage() |
722 | + sys.exit(1) |
723 | + |
724 | +with open(IN_FILE, "r") as f: |
725 | + template = Template(f.read()) |
726 | + if role == "bootstrap": |
727 | + print(template.render(role=role, uuid=uuid.uuid1())) |
728 | + else: |
729 | + cert = get_ssl_cert(to_join) |
730 | + print( |
731 | + template.render( |
732 | + cert=cert, |
733 | + role=role, |
734 | + server=to_join, |
735 | + uuid=uuid.uuid1(), |
736 | + ) |
737 | + ) |
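`get_ssl_cert` above pipes the cluster certificate through `sed` to double its newlines, then space-joins the kept line endings so the PEM survives being embedded in the YAML userdata. A stdlib-only approximation of that post-processing (the `reflow_cert` name is mine; the real code runs the `sed` half remotely over ssh):

```python
def reflow_cert(pem):
    # Double every interior newline (what the sed expression does),
    # then join the kept line endings with single spaces, matching
    # " ".join(out.rstrip("\n").splitlines(True)) in the script.
    doubled = pem.rstrip("\n").replace("\n", "\n\n")
    return " ".join(doubled.splitlines(True))

cert = reflow_cert(
    "-----BEGIN CERTIFICATE-----\nMIIB\n-----END CERTIFICATE-----\n"
)
```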
738 | diff --git a/charms/focal/autopkgtest-cloud-worker/autopkgtest-cloud/tools/create-nova-image-new-release b/charms/focal/autopkgtest-cloud-worker/autopkgtest-cloud/tools/create-nova-image-new-release |
739 | new file mode 100755 |
740 | index 0000000..0451144 |
741 | --- /dev/null |
742 | +++ b/charms/focal/autopkgtest-cloud-worker/autopkgtest-cloud/tools/create-nova-image-new-release |
743 | @@ -0,0 +1,180 @@ |
744 | +#!/bin/bash |
745 | +# create an autopkgtest nova image for a new release, based on a generic image |
746 | +# Author: Martin Pitt <martin.pitt@ubuntu.com> |
747 | +set -eu |
748 | +RELEASE="${1:-}" |
749 | +ARCH="${2:-}" |
750 | +BASEIMG="${3:-}" |
751 | +KEYNAME="${4:-}" |
752 | +IMAGE_NAME="${5:-}" |
753 | +SETUP_TESTBED="${SETUP_TESTBED:-/home/ubuntu/autopkgtest/setup-commands/setup-testbed}" |
754 | + |
755 | +function release_ge() { |
756 | + distro-info --all | mawk " |
757 | + /^$1\$/ { l = NR }; |
758 | + /^$2\$/ { r = NR }; |
759 | + END { |
760 | + if (!l) { |
761 | + print \"Unknown release $1\" > \"/dev/stderr\"; |
762 | + exit 1; |
763 | + } |
764 | + if (!r) { |
765 | + print \"Unknown release $2\" > \"/dev/stderr\"; |
766 | + exit 1; |
767 | + } |
768 | + if (l >= r) { |
769 | + exit 0 |
770 | + } else { |
771 | + exit 1 |
772 | + } |
773 | + }" |
774 | +} |
775 | + |
776 | +if [ -z "$IMAGE_NAME" ]; then |
777 | + echo "Usage: $0 <release> <base image name> <key name> <output image name>" >&2 |
778 | + exit 1 |
779 | +fi |
780 | + |
781 | +if [ "${ARCH}" = amd64 ] && release_ge "${RELEASE}" focal; then |
782 | + ARCHES="[amd64, i386]" |
783 | + RUNCMD="$(printf "runcmd:\n - [ dpkg, --add-architecture, i386]\n")" |
784 | +else |
785 | + ARCHES="default" |
786 | + RUNCMD= |
787 | +fi |
788 | + |
789 | +# unbreak my server option :-( |
790 | +userdata=`mktemp` |
791 | +trap "rm $userdata" EXIT TERM INT QUIT PIPE |
792 | +cat <<EOF > $userdata |
793 | +#cloud-config |
794 | + |
795 | +manage_etc_hosts: true |
796 | +package_update: true |
797 | + |
798 | +apt: |
799 | + primary: |
800 | + - arches: ${ARCHES} |
801 | + uri: ${MIRROR} |
802 | +packages: |
803 | + - linux-generic |
804 | +${RUNCMD} |
805 | +EOF |
806 | + |
807 | +# create new instance |
808 | +INSTNAME="${BASEIMG}-adt-prepare" |
809 | +eval "$(openstack network show -f shell ${NET_NAME})" |
810 | + |
811 | +NET_ID=${id} |
812 | + |
813 | +retries=20 |
814 | +while true; do |
815 | + eval "$(openstack server create -f shell --flavor m1.small --image $BASEIMG --user-data $userdata --key-name $KEYNAME --wait $INSTNAME --nic net-id=${NET_ID})" |
816 | + if openstack server show "${id}" >/dev/null 2>/dev/null; then |
817 | + break |
818 | + fi |
819 | + retries=$((retries - 1)) |
820 | + if [ $retries -lt 0 ]; then |
821 | + echo "failed 20 times, giving up: openstack server create -f shell --flavor m1.small --image $BASEIMG --user-data $userdata --key-name $KEYNAME --wait $INSTNAME --nic net-id=${NET_ID}" >&2 |
822 | + exit 1 |
823 | + fi |
824 | + echo "openstack server create failed, waiting ten seconds before retrying" |
825 | + # sometimes this makes a zombie instance |
826 | + openstack server delete "${IMAGE_NAME}" >/dev/null 2>/dev/null || true |
827 | + sleep 10 |
828 | +done |
829 | + |
830 | +SRVID="${id}" |
831 | + |
832 | +trap "openstack server delete ${SRVID}" EXIT TERM INT QUIT PIPE |
833 | + |
834 | +# determine IP address |
835 | +eval "$(openstack server show -f shell ${SRVID})" |
836 | +ipaddr=$(echo ${addresses} | awk 'BEGIN { FS="=" } { print $2 }') |
837 | + |
838 | +SSH_CMD="ssh -o UserKnownHostsFile=/dev/null -o StrictHostKeyChecking=no ubuntu@$ipaddr" |
839 | +echo "Waiting for ssh (may cause some error messages)..." |
840 | +timeout 300 sh -c "while ! $SSH_CMD true; do sleep 5; done" |
841 | + |
842 | +echo "Waiting until cloud-init is done..." |
843 | +timeout 25m $SSH_CMD 'while [ ! -e /var/lib/cloud/instance/boot-finished ]; do sleep 1; done' |
844 | + |
845 | +echo "Running setup script..." |
846 | +cat "${SETUP_TESTBED}" | $SSH_CMD "sudo env MIRROR='${MIRROR:-}' RELEASE='$RELEASE' sh -" |
847 | + |
848 | +echo "Running Canonical setup script..." |
849 | +CANONICAL_SCRIPT=$(dirname $(dirname $(readlink -f $0)))/worker-config-production/setup-canonical.sh |
850 | +cat "$CANONICAL_SCRIPT" | $SSH_CMD "sudo env MIRROR='${MIRROR:-}' RELEASE='$RELEASE' sh -" |
851 | + |
852 | +arch=$($SSH_CMD dpkg --print-architecture) |
853 | + |
854 | +echo "Check that the upgraded image boots..." |
855 | +while true; do |
856 | + if openstack server reboot --wait "${SRVID}"; then |
857 | + break |
858 | + fi |
859 | + |
860 | + retries=$((retries - 1)) |
861 | + if [ $retries -lt 0 ]; then |
862 | + echo "failed 20 times, giving up: openstack server reboot --wait ${SRVID}" >&2 |
863 | + exit 1 |
864 | + fi |
865 | + echo "openstack server reboot failed, waiting 10 seconds before retrying" >&2 |
866 | + sleep 10 |
867 | +done |
868 | + |
869 | +if ! timeout 300 sh -c "while ! $SSH_CMD true; do sleep 5; done"; then |
870 | + echo "ERROR: upgraded instance does not reboot; console-log:" >&2 |
871 | + openstack console log show "${SRVID}" >&2 || true |
872 | + exit 1 |
873 | +fi |
874 | + |
875 | +echo "Cleaning systemd journal..." |
876 | +$SSH_CMD sudo journalctl --rotate --vacuum-time=12h || true |
877 | + |
878 | +echo "Powering off to get a clean file system..." |
879 | +$SSH_CMD sudo poweroff || true |
880 | +eval "$(openstack server show -f shell ${SRVID})" |
881 | +while [ ${os_ext_sts_vm_state} != "stopped" ]; do |
882 | + sleep 1 |
883 | + eval "$(openstack server show -f shell ${SRVID})" |
884 | +done |
885 | + |
886 | +echo "Creating image $IMAGE_NAME ..." |
887 | +# this tends to fail occasionally, retry |
888 | +retries=20 |
889 | +while true; do |
890 | + if openstack server image create --wait --name "${IMAGE_NAME}" "${SRVID}"; then |
891 | + break |
892 | + fi |
893 | + inner_retries=20 |
894 | + while [ $inner_retries -gt 0 ]; do |
895 | + # server image create often loses its connection but it's actually |
896 | + # working - if the image is uploading, wait a bit for it to finish |
897 | + eval $(openstack image show -f shell --prefix=image_ "${IMAGE_NAME}") |
898 | + eval $(openstack server show -f shell --prefix=server_ "${SRVID}") |
899 | + if [ "${server_os_ext_sts_task_state}" = "image_uploading" ] || |
900 | + [ "${image_status}" = "saving" ]; then |
901 | + echo "image ${IMAGE_NAME} is uploading, waiting..." >&2 |
902 | + sleep 20 |
903 | + inner_retries=$((inner_retries - 1)) |
904 | + elif [ "${image_status}" = "active" ]; then |
905 | + exit 0 |
906 | + elif [ "${image_status}" = "queued" ]; then # borked, try the outer retry loop |
907 | + break |
908 | + else |
909 | + echo "image creation failed, aborting (status: ${image_status})." >&2 |
910 | + openstack image delete "${IMAGE_NAME}" >/dev/null 2>/dev/null || true |
911 | + exit 1 |
912 | + fi |
913 | + done |
914 | + retries=$((retries - 1)) |
915 | + if [ $retries -lt 0 ]; then |
916 | + echo "failed 20 times, giving up: openstack image create --wait --name ${IMAGE_NAME} ${SRVID}" >&2 |
917 | + exit 1 |
918 | + fi |
919 | + echo "openstack server image create failed, waiting 10 seconds before retrying" >&2 |
920 | + # sometimes this makes a zombie image |
921 | + openstack image delete "${IMAGE_NAME}" >/dev/null 2>/dev/null || true |
922 | + sleep 10 |
923 | +done |
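`release_ge` above shells out to `distro-info` and `mawk` to decide whether one release is at least as new as another. The same comparison in Python, given the oldest-first release list that `distro-info --all` prints (the function and the release subset here are illustrative only):

```python
def release_ge(all_releases, left, right):
    # True when `left` is the same release as `right` or newer, i.e.
    # appears at the same index or later in the oldest-first list.
    try:
        return all_releases.index(left) >= all_releases.index(right)
    except ValueError as err:
        raise SystemExit("Unknown release: {}".format(err))

RELEASES = ["bionic", "focal", "jammy"]  # illustrative subset
```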
924 | diff --git a/tools/create-nova-image-with-proposed-package b/charms/focal/autopkgtest-cloud-worker/autopkgtest-cloud/tools/create-nova-image-with-proposed-package |
925 | similarity index 100% |
926 | rename from tools/create-nova-image-with-proposed-package |
927 | rename to charms/focal/autopkgtest-cloud-worker/autopkgtest-cloud/tools/create-nova-image-with-proposed-package |
928 | diff --git a/charms/focal/autopkgtest-cloud-worker/autopkgtest-cloud/tools/ensure-keypair b/charms/focal/autopkgtest-cloud-worker/autopkgtest-cloud/tools/ensure-keypair |
929 | new file mode 100755 |
930 | index 0000000..be664d6 |
931 | --- /dev/null |
932 | +++ b/charms/focal/autopkgtest-cloud-worker/autopkgtest-cloud/tools/ensure-keypair |
933 | @@ -0,0 +1,23 @@ |
934 | +#!/bin/bash |
935 | + |
936 | +set -eu |
937 | + |
938 | +IFS="[- ]" read -r RELEASE REGION ARCH bootstrap <<< "$@" |
939 | + |
940 | +if [ -e ~/cloudrcs/${REGION}-${ARCH}.rc ]; then |
941 | + . ~/cloudrcs/${REGION}-${ARCH}.rc |
942 | +else |
943 | + . ~/cloudrcs/${REGION}.rc |
944 | +fi |
945 | + |
946 | +if ! [ -e "${HOME}/.ssh/id_rsa" ]; then |
947 | + echo "No SSH key exists, generating one..." |
948 | + ssh-keygen -N '' -f ~/.ssh/id_rsa |
949 | +fi |
950 | + |
951 | +KEYNAME="testbed-$(hostname)" |
952 | + |
953 | +if ! openstack keypair show "${KEYNAME}" >/dev/null 2>/dev/null; then |
954 | + echo "Uploading SSH key ~/.ssh/id_rsa.pub to ${REGION} with key name: ${KEYNAME}" |
955 | + openstack keypair create --public-key ~/.ssh/id_rsa.pub "${KEYNAME}" |
956 | +fi |
957 | diff --git a/tools/exec-in-region b/charms/focal/autopkgtest-cloud-worker/autopkgtest-cloud/tools/exec-in-region |
958 | similarity index 76% |
959 | rename from tools/exec-in-region |
960 | rename to charms/focal/autopkgtest-cloud-worker/autopkgtest-cloud/tools/exec-in-region |
961 | index aac5bea..0261108 100755 |
962 | --- a/tools/exec-in-region |
963 | +++ b/charms/focal/autopkgtest-cloud-worker/autopkgtest-cloud/tools/exec-in-region |
964 | @@ -9,14 +9,23 @@ if [ $# -lt 2 ]; then |
965 | fi |
966 | |
967 | REGION="$1" |
968 | -REGION="${REGION%-*}" |
969 | +if ! [ -e "${HOME}/cloudrcs/${REGION}.rc" ]; then |
970 | + REGION="${REGION%-*}" |
971 | +fi |
972 | + |
973 | +if ! [ -e "${HOME}/cloudrcs/${REGION}.rc" ]; then |
974 | + echo "${REGION}.rc not found, exiting." >&2 |
975 | + exit 0 |
976 | +fi |
977 | |
978 | shift 1 |
979 | |
980 | +export REGION |
981 | + |
982 | if [ "${REGION#lxd-}" != "$REGION" ]; then |
983 | LXD_ARCH=${REGION#*-}; LXD_ARCH=${LXD_ARCH%%-*} |
984 | else |
985 | - . ./${REGION}.rc |
986 | + . ${HOME}/cloudrcs/${REGION}.rc |
987 | fi |
988 | |
989 | exec "$@" |
990 | diff --git a/tools/filter-amqp b/charms/focal/autopkgtest-cloud-worker/autopkgtest-cloud/tools/filter-amqp |
991 | similarity index 100% |
992 | rename from tools/filter-amqp |
993 | rename to charms/focal/autopkgtest-cloud-worker/autopkgtest-cloud/tools/filter-amqp |
994 | diff --git a/charms/focal/autopkgtest-cloud-worker/autopkgtest-cloud/tools/metrics b/charms/focal/autopkgtest-cloud-worker/autopkgtest-cloud/tools/metrics |
995 | new file mode 100755 |
996 | index 0000000..d2e14a8 |
997 | --- /dev/null |
998 | +++ b/charms/focal/autopkgtest-cloud-worker/autopkgtest-cloud/tools/metrics |
999 | @@ -0,0 +1,142 @@ |
1000 | +#!/usr/bin/python3 |
1001 | + |
1002 | +from gi.repository import GLib, Gio |
1003 | +from influxdb import InfluxDBClient |
1004 | + |
1005 | +import json |
1006 | +import os |
1007 | +import subprocess |
1008 | + |
1009 | +SYSTEM_BUS = Gio.bus_get_sync(Gio.BusType.SYSTEM) |
1010 | + |
1011 | +INFLUXDB_CONTEXT = os.environ["INFLUXDB_CONTEXT"] |
1012 | +INFLUXDB_DATABASE = os.environ["INFLUXDB_DATABASE"] |
1013 | +INFLUXDB_HOSTNAME = os.environ["INFLUXDB_HOSTNAME"] |
1014 | +INFLUXDB_PASSWORD = os.environ["INFLUXDB_PASSWORD"] |
1015 | +INFLUXDB_PORT = os.environ["INFLUXDB_PORT"] |
1016 | +INFLUXDB_USERNAME = os.environ["INFLUXDB_USERNAME"] |
1017 | + |
1018 | + |
1019 | +def make_submission(counts, measurement): |
1020 | + out = [] |
1021 | + for arch in counts: |
1022 | + (active, error) = counts[arch] |
1023 | + m = { |
1024 | + "measurement": measurement, |
1025 | + "fields": {"count": active}, |
1026 | + "tags": { |
1027 | + "arch": arch, |
1028 | + "status": "active", |
1029 | + "instance": INFLUXDB_CONTEXT, |
1030 | + }, |
1031 | + } |
1032 | + out.append(m) |
1033 | + m = { |
1034 | + "measurement": measurement, |
1035 | + "fields": {"count": error}, |
1036 | + "tags": { |
1037 | + "arch": arch, |
1038 | + "status": "error", |
1039 | + "instance": INFLUXDB_CONTEXT, |
1040 | + }, |
1041 | + } |
1042 | + out.append(m) |
1043 | + return out |
1044 | + |
1045 | + |
1046 | +def get_units(): |
1047 | + counts = {} |
1048 | + |
1049 | + (units,) = SYSTEM_BUS.call_sync( |
1050 | + "org.freedesktop.systemd1", |
1051 | + "/org/freedesktop/systemd1", |
1052 | + "org.freedesktop.systemd1.Manager", |
1053 | + "ListUnits", |
1054 | + None, # parameters |
1055 | + GLib.VariantType("(a(ssssssouso))"), # reply type |
1056 | + Gio.DBusCallFlags.NONE, |
1057 | + -1, # timeout |
1058 | + None, |
1059 | + ) # cancellable |
1060 | + |
1061 | + for unit in units: |
1062 | + (name, _, _, _, sub, _, _, _, _, _) = unit |
1063 | + if not name.startswith("autopkgtest@"): |
1064 | + continue |
1065 | + |
1066 | + name_cloud = name.strip("autopkgtest@").rstrip(".service") |
1067 | + |
1068 | + if name_cloud.startswith("lxd-"): |
1069 | + # lxd-armhf-1.2.3.4-1 |
1070 | + name_cloud = name_cloud[4:] |
1071 | + try: |
1072 | + (arch, _, _) = name_cloud.split("-", -1) |
1073 | + (active, error) = counts.setdefault(arch, (0, 0)) |
1074 | + if sub == "running": |
1075 | + active += 1 |
1076 | + else: |
1077 | + error += 1 |
1078 | + counts[arch] = (active, error) |
1079 | + except ValueError: |
1080 | + # XXX: for some reason we get autopkgtest@lxd-armhf-a.b.c.d |
1081 | + # (without the number), not sure why |
1082 | + pass |
1083 | + continue |
1084 | + |
1085 | + try: |
1086 | + (region, arch, n) = name_cloud.split("-", -1) |
1087 | + except ValueError: |
1088 | + # autopkgtest@lcy01-1.service |
1089 | + (region, n) = name_cloud.split("-", -1) |
1090 | + arch = "amd64" |
1091 | + (active, error) = counts.setdefault(arch, (0, 0)) |
1092 | + |
1093 | + if sub == "running": |
1094 | + active += 1 |
1095 | + else: |
1096 | + error += 1 |
1097 | + counts[arch] = (active, error) |
1098 | + |
1099 | + return make_submission(counts, "autopkgtest_unit_status") |
1100 | + |
1101 | + |
1102 | +def get_remotes(): |
1103 | + counts = {} |
1104 | + out = subprocess.check_output( |
1105 | + ["lxc", "remote", "list", "--format=json"], universal_newlines=True |
1106 | + ) |
1107 | + |
1108 | + for r in json.loads(out): |
1109 | + if not r.startswith("lxd"): |
1110 | + continue |
1111 | + |
1112 | + (_, arch, ip) = r.split("-", 3) |
1113 | + (active, error) = counts.setdefault(arch, (0, 0)) |
1114 | + |
1115 | + try: |
1116 | + cl = subprocess.check_output( |
1117 | + ["lxc", "cluster", "list", f"{r}:", "--format=json"], |
1118 | + universal_newlines=True, |
1119 | + ) |
1120 | + except subprocess.CalledProcessError: # all backends down, eek |
1121 | + continue |
1122 | + |
1123 | + for node in json.loads(cl): |
1124 | + if node["status"] == "Online": |
1125 | + active += 1 |
1126 | + else: |
1127 | + error += 1 |
1128 | + counts[arch] = (active, error) |
1129 | + |
1130 | + return make_submission(counts, "autopkgtest_cluster_status") |
1131 | + |
1132 | + |
1133 | +influx_client = InfluxDBClient( |
1134 | + INFLUXDB_HOSTNAME, |
1135 | + INFLUXDB_PORT, |
1136 | + INFLUXDB_USERNAME, |
1137 | + INFLUXDB_PASSWORD, |
1138 | + INFLUXDB_DATABASE, |
1139 | +) |
1140 | + |
1141 | +influx_client.write_points(get_units() + get_remotes()) |
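The arch/region bookkeeping in `get_units` comes down to splitting the unit name. A sketch of that parsing (names are mine; the real code also special-cases `lxd-*` units, and note it uses `str.strip`/`str.rstrip`, which operate on character *sets* rather than prefixes, so slicing as below is worth considering for names that begin or end with those characters):

```python
def parse_unit(name):
    # Split "autopkgtest@REGION-ARCH-N.service" into its parts; older
    # unit names omit the architecture, which defaults to amd64.
    core = name[len("autopkgtest@"):-len(".service")]
    parts = core.split("-")
    if len(parts) == 3:
        region, arch, n = parts
    else:
        region, n = parts
        arch = "amd64"
    return region, arch, n
```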
1142 | diff --git a/tools/retry-github-test b/charms/focal/autopkgtest-cloud-worker/autopkgtest-cloud/tools/retry-github-test |
1143 | similarity index 100% |
1144 | rename from tools/retry-github-test |
1145 | rename to charms/focal/autopkgtest-cloud-worker/autopkgtest-cloud/tools/retry-github-test |
1146 | diff --git a/charms/focal/autopkgtest-cloud-worker/autopkgtest-cloud/tools/run-autopkgtest b/charms/focal/autopkgtest-cloud-worker/autopkgtest-cloud/tools/run-autopkgtest |
1147 | new file mode 100755 |
1148 | index 0000000..5c2bd67 |
1149 | --- /dev/null |
1150 | +++ b/charms/focal/autopkgtest-cloud-worker/autopkgtest-cloud/tools/run-autopkgtest |
1151 | @@ -0,0 +1,198 @@ |
1152 | +#!/usr/bin/python3 |
1153 | +# Request runs of autopkgtests for packages |
1154 | +# Imported from lp:ubuntu-archive-scripts, lightly modified to not rely on a |
1155 | +# britney config file, to be used for administration or testing. |
1156 | + |
1157 | +from datetime import datetime |
1158 | +import os |
1159 | +import sys |
1160 | +import argparse |
1161 | +import json |
1162 | +import urllib.parse |
1163 | + |
1164 | +import amqplib.client_0_8 as amqp |
1165 | + |
1166 | +my_dir = os.path.dirname(os.path.realpath(sys.argv[0])) |
1167 | + |
1168 | + |
1169 | +def parse_args(): |
1170 | + """Parse command line arguments""" |
1171 | + |
1172 | + parser = argparse.ArgumentParser() |
1173 | + parser.add_argument( |
1174 | + "-s", "--series", required=True, help="Distro series name (required)." |
1175 | + ) |
1176 | + parser.add_argument( |
1177 | + "-a", |
1178 | + "--architecture", |
1179 | + action="append", |
1180 | + default=[], |
1181 | + help="Only run test(s) on given architecture name(s). " |
1182 | + "Can be specified multiple times (default: all).", |
1183 | + ) |
1184 | + parser.add_argument( |
1185 | + "--trigger", |
1186 | + action="append", |
1187 | + default=[], |
1188 | + metavar="SOURCE/VERSION", |
1189 | + help="Add triggering package to request. " |
1190 | + "Can be specified multiple times.", |
1191 | + ) |
1192 | + parser.add_argument( |
1193 | + "--ppa", |
1194 | + metavar="LPUSER/PPANAME", |
1195 | + action="append", |
1196 | + default=[], |
1197 | + help="Enable PPA for requested test(s). " |
1198 | + "Can be specified multiple times.", |
1199 | + ) |
1200 | + parser.add_argument( |
1201 | + "--env", |
1202 | + metavar="KEY=VALUE", |
1203 | + action="append", |
1204 | + default=[], |
1205 | + help="List of VAR=value strings. " |
1206 | + "This can be used to influence a test's behaviour " |
1207 | + "from a test request. " |
1208 | + "Can be specified multiple times.", |
1209 | + ) |
1210 | + parser.add_argument( |
1211 | + "--test-git", |
1212 | + metavar="URL [branchname]", |
1213 | + help="A single URL or URL branchname. " |
1214 | + "The test will be git cloned from that URL and ran " |
1215 | + "from the checkout. This will not build binary " |
1216 | + "packages from the branch and run tests against " |
1217 | + "those, the test dependencies will be taken from the " |
1218 | + "archive, or PPA if given. In this case the " |
1219 | + "srcpkgname will only be used for the result path in " |
1220 | + "swift and be irrelevant for the actual test.", |
1221 | + ) |
1222 | + parser.add_argument( |
1223 | + "--build-git", |
1224 | + metavar="URL [branchname]", |
1225 | + help="A single URL or URL branchname. " |
1226 | + "Like --test-git`, but will first build binary " |
1227 | + "packages from the branch and run tests against those.", |
1228 | + ) |
1229 | + parser.add_argument( |
1230 | + "--test-bzr", |
1231 | + help="A single URL. " |
1232 | + "The test will be checked out with bzr from that URL. " |
1233 | + "Otherwise this has the same behaviour as test-git.", |
1234 | + ) |
1235 | + parser.add_argument( |
1236 | + "--all-proposed", |
1237 | + action="store_true", |
1238 | + help="Disable apt pinning and use all of -proposed", |
1239 | + ) |
1240 | + parser.add_argument( |
1241 | + "--bulk", |
1242 | + action="store_true", |
1243 | + help="Mark this as a bulk (low priority) test where possible", |
1244 | + ) |
1245 | + parser.add_argument( |
1246 | + "package", nargs="+", help="Source package name(s) whose tests to run." |
1247 | + ) |
1248 | + args = parser.parse_args() |
1249 | + |
1250 | + if not args.trigger and not args.ppa: |
1251 | + parser.error("One of --trigger or --ppa must be given") |
1252 | + |
1253 | + if not args.architecture: |
1254 | + parser.error("--architecture must be given") |
1255 | + |
1256 | + # verify syntax of triggers |
1257 | + for t in args.trigger: |
1258 | + try: |
1259 | + (src, ver) = t.split("/") |
1260 | + except ValueError: |
1261 | + parser.error( |
1262 | + 'Invalid trigger format "%s", must be "sourcepkg/version"' % t |
1263 | + ) |
1264 | + |
1265 | + # verify syntax of PPAs |
1266 | + for t in args.ppa: |
1267 | + try: |
1268 | + (user, name) = t.split("/") |
1269 | + except ValueError: |
1270 | + parser.error( |
1271 | + 'Invalid ppa format "%s", must be "lpuser/ppaname"' % t |
1272 | + ) |
1273 | + |
1274 | + return args |
1275 | + |
1276 | + |
1277 | +if __name__ == "__main__": |
1278 | + args = parse_args() |
1279 | + |
1280 | + context = "" |
1281 | + params = {} |
1282 | + if args.bulk: |
1283 | + context = "huge-" |
1284 | + if args.trigger: |
1285 | + params["triggers"] = args.trigger |
1286 | + if args.ppa: |
1287 | + params["ppas"] = args.ppa |
1288 | + context = "ppa-" |
1289 | + if args.env: |
1290 | + params["env"] = args.env |
1291 | + if args.test_git: |
1292 | + params["test-git"] = args.test_git |
1293 | + context = "upstream-" |
1294 | + elif args.build_git: |
1295 | + params["build-git"] = args.build_git |
1296 | + context = "upstream-" |
1297 | + if args.test_bzr: |
1298 | + params["test-bzr"] = args.test_bzr |
1299 | + context = "upstream-" |
1300 | + if args.all_proposed: |
1301 | + params["all-proposed"] = True |
1302 | + try: |
1303 | + params["requester"] = os.environ["SUDO_USER"] |
1304 | + except KeyError: |
1305 | + pass |
1306 | + params["submit-time"] = datetime.strftime( |
1307 | + datetime.utcnow(), "%Y-%m-%d %H:%M:%S%z" |
1308 | + ) |
1309 | + params = "\n" + json.dumps(params) |
1310 | + |
1311 | + try: |
1312 | + creds = urllib.parse.urlsplit( |
1313 | + "amqp://{user}:{password}@{host}".format( |
1314 | + user=os.environ["RABBIT_USER"], |
1315 | + password=os.environ["RABBIT_PASSWORD"], |
1316 | + host=os.environ["RABBIT_HOST"], |
1317 | + ), |
1318 | + allow_fragments=False, |
1319 | + ) |
1320 | + except KeyError: |
1321 | + with open(os.path.expanduser("~/rabbitmq.cred"), "r") as f: |
1322 | + env_dict = dict( |
1323 | + tuple(line.replace("\n", "").replace('"', "").split("=")) |
1324 | + for line in f.readlines() |
1325 | + if not line.startswith("#") |
1326 | + ) |
1327 | + creds = urllib.parse.urlsplit( |
1328 | + "amqp://{user}:{password}@{host}".format( |
1329 | + user=env_dict["RABBIT_USER"], |
1330 | + password=env_dict["RABBIT_PASSWORD"], |
1331 | + host=env_dict["RABBIT_HOST"], |
1332 | + ), |
1333 | + allow_fragments=False, |
1334 | + ) |
1335 | + assert creds.scheme == "amqp" |
1336 | + |
1337 | + with amqp.Connection( |
1338 | + creds.hostname, userid=creds.username, password=creds.password |
1339 | + ) as amqp_con: |
1340 | + with amqp_con.channel() as ch: |
1341 | + for arch in args.architecture: |
1342 | + queue = "debci-%s%s-%s" % (context, args.series, arch) |
1343 | + for pkg in args.package: |
1344 | + ch.basic_publish( |
1345 | + amqp.Message( |
1346 | + pkg + params, delivery_mode=2 |
1347 | + ), # persistent |
1348 | + routing_key=queue, |
1349 | + ) |
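The message run-autopkgtest publishes is the package name, a newline, then a JSON parameter dict, routed to a `debci-[context]SERIES-ARCH` queue where the context prefix (`huge-`, `ppa-`, `upstream-`) depends on the request. A sketch of just that assembly, covering only the trigger/PPA parameters (the `build_request` helper is mine, not in the branch):

```python
import json

def build_request(package, series, arch, triggers, ppas=()):
    # PPA requests are routed to the ppa- queues, mirroring how the
    # script picks its context prefix.
    context = "ppa-" if ppas else ""
    params = {"triggers": list(triggers)}
    if ppas:
        params["ppas"] = list(ppas)
    queue = "debci-%s%s-%s" % (context, series, arch)
    body = package + "\n" + json.dumps(params)
    return queue, body

queue, body = build_request("hello", "focal", "amd64",
                            ["glibc/2.31-0ubuntu9"])
```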
1350 | diff --git a/tools/seed-new-release b/charms/focal/autopkgtest-cloud-worker/autopkgtest-cloud/tools/seed-new-release |
1351 | similarity index 100% |
1352 | rename from tools/seed-new-release |
1353 | rename to charms/focal/autopkgtest-cloud-worker/autopkgtest-cloud/tools/seed-new-release |
1354 | diff --git a/worker-config-production/setup-canonical.sh b/charms/focal/autopkgtest-cloud-worker/autopkgtest-cloud/worker-config-production/setup-canonical.sh |
1355 | similarity index 92% |
1356 | rename from worker-config-production/setup-canonical.sh |
1357 | rename to charms/focal/autopkgtest-cloud-worker/autopkgtest-cloud/worker-config-production/setup-canonical.sh |
1358 | index 5bcbc23..af76f24 100644 |
1359 | --- a/worker-config-production/setup-canonical.sh |
1360 | +++ b/charms/focal/autopkgtest-cloud-worker/autopkgtest-cloud/worker-config-production/setup-canonical.sh |
1361 | @@ -39,10 +39,3 @@ if [ -d /run/systemd/system ] && systemd-detect-virt --quiet --vm; then |
1362 | ExecStart=/usr/bin/perl -E 'open \$\$f, "/bin/bash" or die; open \$\$rnd, ">/dev/random" or die; for (\$\$i = 0; \$\$i < 10; ++\$\$i) {read \$\$f, \$\$d, 64; ioctl \$\$rnd, 0x40085203, pack("ii", 64*8, 64) . \$\$d}' |
1363 | EOF |
1364 | fi |
1365 | - |
1366 | -# Make our amd64 runners capable of testing i386 starting with 20.04 |
1367 | -if [ $(lsb_release -r -s | sed 's/\..*//') -gt 19 ] && [ "$(dpkg --print-architecture)" = amd64 ] |
1368 | -then |
1369 | - dpkg --add-architecture i386 |
1370 | - apt-get update |
1371 | -fi |
1372 | diff --git a/worker/worker b/charms/focal/autopkgtest-cloud-worker/autopkgtest-cloud/worker/worker |
1373 | similarity index 72% |
1374 | rename from worker/worker |
1375 | rename to charms/focal/autopkgtest-cloud-worker/autopkgtest-cloud/worker/worker |
1376 | index d1916d2..685f07d 100755 |
1377 | --- a/worker/worker |
1378 | +++ b/charms/focal/autopkgtest-cloud-worker/autopkgtest-cloud/worker/worker |
1379 | @@ -2,7 +2,7 @@ |
1380 | # autopkgtest cloud worker |
1381 | # Author: Martin Pitt <martin.pitt@ubuntu.com> |
1382 | # |
1383 | -# Requirements: python3-amqplib python3-swiftclient |
1384 | +# Requirements: python3-amqplib python3-swiftclient python3-influxdb |
1385 | # Requirements for running autopkgtest from git: python3-debian libdpkg-perl |
1386 | |
1387 | import os |
1388 | @@ -27,10 +27,29 @@ import distro_info |
1389 | import swiftclient |
1390 | import systemd.journal |
1391 | |
1392 | +from influxdb import InfluxDBClient |
1393 | from urllib.error import HTTPError |
1394 | |
1395 | ALL_RELEASES = distro_info.UbuntuDistroInfo().get_all(result='object') |
1396 | |
1397 | +try: |
1398 | + INFLUXDB_CONTEXT = os.environ["INFLUXDB_CONTEXT"] |
1399 | + INFLUXDB_DATABASE = os.environ["INFLUXDB_DATABASE"] |
1400 | + INFLUXDB_HOSTNAME = os.environ["INFLUXDB_HOSTNAME"] |
1401 | + INFLUXDB_PASSWORD = os.environ["INFLUXDB_PASSWORD"] |
1402 | + INFLUXDB_PORT = os.environ["INFLUXDB_PORT"] |
1403 | + INFLUXDB_USERNAME = os.environ["INFLUXDB_USERNAME"] |
1404 | + |
1405 | + influx_client = InfluxDBClient( |
1406 | + INFLUXDB_HOSTNAME, |
1407 | + INFLUXDB_PORT, |
1408 | + INFLUXDB_USERNAME, |
1409 | + INFLUXDB_PASSWORD, |
1410 | + INFLUXDB_DATABASE, |
1411 | + ) |
1412 | +except KeyError: |
1413 | + influx_client = None |
1414 | + |
1415 | my_path = os.path.dirname(__file__) |
1416 | root_path = os.path.dirname(os.path.abspath(my_path)) |
1417 | args = None |
1418 | @@ -39,9 +58,15 @@ swift_creds = {} |
1419 | exit_requested = None |
1420 | running_test = False |
1421 | status_exchange_name = 'teststatus.fanout' |
1422 | +complete_exchange_name = 'testcomplete.fanout' |
1423 | amqp_con = None |
1424 | systemd_logging_handler = systemd.journal.JournalHandler() |
1425 | |
1426 | +# Read in from config files |
1427 | +big_packages = set() |
1428 | +long_tests = set() |
1429 | +never_run = set() |
1430 | + |
1431 | FAIL_CODES = (4, 6, 12, 14, 20) |
1432 | |
1433 | # In the case of a tmpfail, look for these strings in the log and if they're |
1434 | @@ -104,6 +129,32 @@ FAIL_PKG_STRINGS = {'systemd*': ['timed out waiting for testbed to reboot', |
1435 | 'llvm-toolchain-*': ['clang: error: unable to execute command: Segmentation fault (core dumped)']} |
1436 | |
1437 | |
1438 | +def submit_metric(architecture, code, pkgname, current_region, retry, release): |
1439 | + if influx_client is None: |
1440 | + return |
1441 | + |
1442 | + region = ( |
1443 | + current_region |
1444 | + if current_region.startswith("lxd") |
1445 | + else current_region.split("-")[0] |
1446 | + ) |
1447 | + |
1448 | + point = { |
1449 | + "measurement": "autopkgtest_exit_event", |
1450 | + "fields": {"exited": True}, |
1451 | + "tags": { |
1452 | + "arch": architecture, |
1453 | + "exit_status": code, |
1454 | + "instance": INFLUXDB_CONTEXT, |
1455 | + "package": pkgname, |
1456 | + "region": region, |
1457 | + "retry": retry, |
1458 | + "series": release, |
1459 | + }, |
1460 | + } |
1461 | + influx_client.write_points([point]) |
1462 | + |
1463 | + |
1464 | def getglob(d, glob, default=None): |
1465 | for k in d: |
1466 | if fnmatch.fnmatch(glob, k): |
1467 | @@ -147,6 +198,44 @@ def parse_args(): |
1468 | return parser.parse_args() |
1469 | |
1470 | |
1471 | +def read_per_package_configs(cfg): |
1472 | + def read_per_package_file(filename): |
1473 | + out = set() |
1474 | + with open(filename, 'r') as f: |
1475 | + entries = { |
1476 | + line.strip() |
1477 | + for line in f.readlines() |
1478 | + if line.strip() and not line.startswith("#") |
1479 | + } |
1480 | + |
1481 | + for entry in entries: |
1482 | + try: |
1483 | + (package, arch, release) = entry.split("/", 3) |
1484 | + except ValueError: |
1485 | + release = "all" |
1486 | + try: |
1487 | + (package, arch) = entry.split("/", 2) |
1488 | + except ValueError: |
1489 | + arch = "all" |
1490 | + package = entry |
1491 | + # record the normalized entry for every line, not only on the except path |
1492 | + out.add(f"{package}/{arch}/{release}") |
1493 | + return out |
1494 | + global big_packages, long_tests, never_run |
1495 | + |
1496 | + dir = cfg.get('autopkgtest', 'per_package_config_dir').strip() |
1497 | + |
1498 | + big_packages = read_per_package_file(os.path.join(dir, "big_packages")) |
1499 | + long_tests = read_per_package_file(os.path.join(dir, "long_tests")) |
1500 | + never_run = read_per_package_file(os.path.join(dir, "never_run")) |
1501 | + |
1502 | + |
1503 | +def request_matches_per_package(package, arch, release, s): |
1504 | + return (any(fnmatch.fnmatchcase(f"{package}/{arch}/{release}", entry) for entry in s) or |
1505 | + any(fnmatch.fnmatchcase(f"{package}/{arch}/all", entry) for entry in s) or |
1506 | + any(fnmatch.fnmatchcase(f"{package}/all/all", entry) for entry in s)) |
1507 | + |
1508 | + |
1509 | def process_output_dir(dir, pkgname, code): |
1510 | '''Post-process output directory''' |
1511 | |
1512 | @@ -162,6 +251,21 @@ def process_output_dir(dir, pkgname, code): |
1513 | testpkg_version.write('%s unknown' % pkgname) |
1514 | files.add('testpkg-version') |
1515 | |
1516 | + with open(os.path.join(dir, 'testpkg-version'), 'r') as tpv: |
1517 | + testpkg_version = tpv.read().split()[1] |
1518 | + |
1519 | + try: |
1520 | + with open(os.path.join(dir, 'duration'), 'r') as dur: |
1521 | + duration = dur.read() |
1522 | + except FileNotFoundError: |
1523 | + duration = None |
1524 | + |
1525 | + try: |
1526 | + with open(os.path.join(dir, 'requester'), 'r') as req: |
1527 | + requester = req.read() |
1528 | + except FileNotFoundError: |
1529 | + requester = None |
1530 | + |
1531 | # these are small and we need only these for gating and indexing |
1532 | resultfiles = ['exitcode'] |
1533 | # these might not be present in infrastructure failure cases |
1534 | @@ -191,6 +295,8 @@ def process_output_dir(dir, pkgname, code): |
1535 | else: |
1536 | os.unlink(path) |
1537 | |
1538 | + return (testpkg_version, duration, requester) |
1539 | + |
1540 | |
1541 | def series_to_version(series): |
1542 | versions = [x.version for x in ALL_RELEASES if x.series == series] |
1543 | @@ -201,24 +307,24 @@ def series_to_version(series): |
1544 | def i386_cross_series(series): |
1545 | # the first version where i386 is a partial architecture and only cross |
1546 | # testing is done is 20.04 |
1547 | - ver = series_to_version(series) |
1548 | - if ver: |
1549 | - return ver > '19.10' |
1550 | + return series_to_version(series) > '19.10' |
1551 | |
1552 | - return False |
1553 | |
1554 | +def host_arch(release, architecture): |
1555 | + if architecture != 'i386': |
1556 | + return architecture |
1557 | |
1558 | -def subst(s, autopkgtest_checkout, big_package, release, architecture, pkgname): |
1559 | - if i386_cross_series(release) and architecture == 'i386': |
1560 | - host_arch = 'amd64' |
1561 | - else: |
1562 | - host_arch = architecture |
1563 | + if not i386_cross_series(release): |
1564 | + return architecture |
1565 | + |
1566 | + return 'amd64' |
1567 | + |
1568 | + |
1569 | +def subst(s, big_package, release, architecture, hostarch, pkgname): |
1570 | subst = { |
1571 | - 'CHECKOUTDIR': autopkgtest_checkout, |
1572 | - 'AUTOPKGTEST_CLOUD_DIR': root_path, |
1573 | 'RELEASE': release, |
1574 | 'ARCHITECTURE': architecture, |
1575 | - 'HOSTARCH': host_arch, |
1576 | + 'HOSTARCH': hostarch, |
1577 | 'PACKAGENAME': pkgname, |
1578 | 'PACKAGESIZE': cfg.get('virt', |
1579 | big_package and 'package_size_big' or 'package_size_default'), |
1580 | @@ -322,7 +428,11 @@ def request(msg): |
1581 | if extra.startswith("ADT_"): |
1582 | del systemd_logging_handler._extra[extra] |
1583 | |
1584 | - blacklisted = False |
1585 | + # Re-read in case the big/long/no run lists changed, would be better to |
1587 | + # do this only when needed via inotify. |
1587 | + read_per_package_configs(cfg) |
1588 | + |
1589 | + dont_run = False |
1590 | |
1591 | # FIXME: make this more elegant |
1592 | fields = msg.delivery_info['routing_key'].split('-') |
1593 | @@ -368,23 +478,14 @@ def request(msg): |
1594 | pkgname, release, architecture, params) |
1595 | |
1596 | current_region = os.environ.get('REGION') |
1597 | - if current_region: |
1598 | - if ('%s/%s/%s/%s' % (release, architecture, pkgname, current_region) in |
1599 | - cfg.get('autopkgtest', 'blacklist').split()) or \ |
1600 | - ('all/%s/%s/%s' % (architecture, pkgname, current_region) in |
1601 | - cfg.get('autopkgtest', 'blacklist').split()): |
1602 | - logging.warning('Blacklisted on this region (%s), ignoring and sleeping for 5 minutes' % current_region) |
1603 | - msg.channel.basic_reject(msg.delivery_tag, requeue=True) |
1604 | - time.sleep(300) |
1605 | - return |
1606 | |
1607 | # build autopkgtest command line |
1608 | work_dir = tempfile.mkdtemp(prefix='autopkgtest-work.') |
1609 | out_dir = os.path.join(work_dir, 'out') |
1610 | |
1611 | - if '%s/%s/%s' % (release, architecture, pkgname) in cfg.get('autopkgtest', 'blacklist').split(): |
1612 | - logging.warning('Blacklisted, ignoring') |
1613 | - blacklisted = True |
1614 | + if request_matches_per_package(pkgname, architecture, release, never_run): |
1615 | + logging.warning('Marked to never run, ignoring') |
1616 | + dont_run = True |
1617 | |
1618 | # these will be written later on |
1619 | code = 99 |
1620 | @@ -394,21 +495,28 @@ def request(msg): |
1621 | |
1622 | # now let's fake up a log file |
1623 | with open(os.path.join(out_dir, 'log'), 'w') as log: |
1624 | - log.write('This package is blacklisted. To get the entry removed, contact a member of the release team.') |
1625 | + log.write('This package is marked to never run. To get the entry removed, contact a member of the release team.') |
1626 | |
1627 | + triggers = None |
1628 | # a json file containing the env |
1629 | if 'triggers' in params: |
1630 | + triggers = ' '.join(params['triggers']) |
1631 | with open(os.path.join(out_dir, 'testinfo.json'), 'w') as testinfo: |
1632 | d = {'custom_environment': |
1633 | - ['ADT_TEST_TRIGGERS=%s' % ' '.join(params['triggers'])]} |
1634 | + ['ADT_TEST_TRIGGERS=%s' % triggers]} |
1635 | json.dump(d, testinfo, indent=True) |
1636 | |
1637 | # and the testpackage version (pkgname blacklisted) |
1638 | + # XXX: replace "blacklisted" here, but needs changes in |
1639 | + # proposed-migration and hints |
1640 | with open(os.path.join(out_dir, 'testpkg-version'), 'w') as testpkg_version: |
1641 | testpkg_version.write('%s blacklisted' % pkgname) |
1642 | |
1643 | container = 'autopkgtest-' + release |
1644 | - big_pkg = any((fnmatch.fnmatchcase(pkgname, pkg) for pkg in cfg.get('autopkgtest', 'big_packages').strip().split())) |
1645 | + big_pkg = request_matches_per_package(pkgname, |
1646 | + architecture, |
1647 | + release, |
1648 | + big_packages) |
1649 | |
1650 | autopkgtest_checkout = cfg.get('autopkgtest', 'checkout_dir').strip() |
1651 | if autopkgtest_checkout: |
1652 | @@ -426,11 +534,11 @@ def request(msg): |
1653 | |
1654 | c = cfg.get('autopkgtest', 'setup_command').strip() |
1655 | if c: |
1656 | - c = subst(c, autopkgtest_checkout, big_pkg, release, architecture, pkgname) |
1657 | + c = subst(c, big_pkg, release, architecture, host_arch(release, architecture), pkgname) |
1658 | argv += ['--setup-commands', c] |
1659 | c = cfg.get('autopkgtest', 'setup_command2').strip() |
1660 | if c: |
1661 | - c = subst(c, autopkgtest_checkout, big_pkg, release, architecture, pkgname) |
1662 | + c = subst(c, big_pkg, release, architecture, host_arch(release, architecture), pkgname) |
1663 | argv += ['--setup-commands', c] |
1664 | |
1665 | if 'ppas' in params and params['ppas']: |
1666 | @@ -506,7 +614,7 @@ def request(msg): |
1667 | argv += testargs |
1668 | if args.debug: |
1669 | argv.append('--debug') |
1670 | - if pkgname in cfg.get('autopkgtest', 'long_tests').strip().split(): |
1671 | + if request_matches_per_package(pkgname, architecture, release, long_tests): |
1672 | argv.append('--timeout-copy=40000') |
1673 | argv.append('--timeout-test=40000') |
1674 | argv.append('--timeout-build=40000') |
1675 | @@ -521,8 +629,10 @@ def request(msg): |
1676 | for e in params.get('env', []): |
1677 | argv.append('--env=%s' % e) |
1678 | |
1679 | + triggers = None |
1680 | if 'triggers' in params: |
1681 | - argv.append('--env=ADT_TEST_TRIGGERS=%s' % ' '.join(params['triggers'])) |
1682 | + triggers = ' '.join(params['triggers']) |
1683 | + argv.append('--env=ADT_TEST_TRIGGERS=%s' % triggers) |
1684 | |
1685 | # want to run against a non-default kernel? |
1686 | for t in params['triggers']: |
1687 | @@ -576,17 +686,14 @@ def request(msg): |
1688 | if 'testname' in params: |
1689 | argv.append('--testname=%s' % params['testname']) |
1690 | |
1691 | - if i386_cross_series(release) and architecture == 'i386': |
1692 | - img_arch = 'amd64' |
1693 | - else: |
1694 | - img_arch = architecture |
1695 | - |
1696 | argv.append('--') |
1697 | - argv += subst(cfg.get('virt', 'args'), autopkgtest_checkout, big_pkg, |
1698 | - release, img_arch, pkgname).split() |
1699 | + argv += subst(cfg.get('virt', 'args'), big_pkg, |
1700 | + release, architecture, |
1701 | + host_arch(release, architecture), |
1702 | + pkgname).split() |
1703 | |
1704 | # run autopkgtest; retry up to three times on tmpfail issues |
1705 | - if not blacklisted: |
1706 | + if not dont_run: |
1707 | global running_test |
1708 | running_test = True |
1709 | start_time = time.time() |
1710 | @@ -606,6 +713,7 @@ def request(msg): |
1711 | logging.warning('%sLog follows:', "Retrying in 5 minutes. " if retry < 2 else "") |
1712 | logging.error(contents) |
1713 | if retry < 2: |
1714 | + submit_metric(architecture, code, pkgname, current_region, True, release) |
1715 | cleanup_and_sleep(out_dir) |
1716 | else: |
1717 | break |
1718 | @@ -637,6 +745,7 @@ def request(msg): |
1719 | logging.warning('Testbed failure%s. Log follows:', ", retrying in 5 minutes" if retry < 2 else "") |
1720 | logging.error(contents) |
1721 | if retry < 2: |
1722 | + submit_metric(architecture, code, pkgname, current_region, True, release) |
1723 | cleanup_and_sleep(out_dir) |
1724 | else: # code == 0, no retry needed |
1725 | break |
1726 | @@ -645,6 +754,7 @@ def request(msg): |
1727 | logging.warning('Three fails in a row - considering this a failure rather than tmpfail') |
1728 | code = 4 |
1729 | else: |
1730 | + submit_metric(architecture, code, pkgname, current_region, False, release) |
1731 | logging.error('Three tmpfails in a row, aborting worker. Log follows:') |
1732 | logging.error(log_contents(out_dir)) |
1733 | sys.exit(99) |
1734 | @@ -652,6 +762,7 @@ def request(msg): |
1735 | duration = int(time.time() - retry_start_time) |
1736 | |
1737 | logging.info('autopkgtest exited with code %i', code) |
1738 | + submit_metric(architecture, code, pkgname, current_region, False, release) |
1739 | if code == 1: |
1740 | logging.error('autopkgtest exited with unexpected error code 1') |
1741 | sys.exit(1) |
1742 | @@ -664,7 +775,7 @@ def request(msg): |
1743 | with open(os.path.join(out_dir, 'requester'), 'w') as f: |
1744 | f.write('%s\n' % params['requester']) |
1745 | |
1746 | - process_output_dir(out_dir, pkgname, code) |
1747 | + (testpkg_version, duration, requester) = process_output_dir(out_dir, pkgname, code) |
1748 | |
1749 | # If two tests for the same package with different triggers finish at the |
1750 | # same second, we get collisions with just the timestamp; disambiguate with |
1751 | @@ -740,6 +851,23 @@ def request(msg): |
1752 | swift_con.close() |
1753 | shutil.rmtree(work_dir) |
1754 | |
1755 | + global amqp_con |
1756 | + complete_amqp = amqp_con.channel() |
1757 | + complete_amqp.access_request('/complete', active=True, read=False, write=True) |
1758 | + complete_amqp.exchange_declare(complete_exchange_name, 'fanout', durable=True, auto_delete=False) |
1759 | + complete_msg = json.dumps ({'architecture': architecture, |
1760 | + 'container': container, |
1761 | + 'duration': duration, |
1762 | + 'exitcode': code, |
1763 | + 'package': pkgname, |
1764 | + 'testpkg_version': testpkg_version, |
1765 | + 'release': release, |
1766 | + 'requester': requester, |
1767 | + 'swift_dir': swift_dir, |
1768 | + 'triggers': triggers}) |
1769 | + complete_amqp.basic_publish(amqp.Message(complete_msg, delivery_mode=2), |
1770 | + complete_exchange_name, '') |
1771 | + |
1772 | logging.info('Acknowledging request %s' % body) |
1773 | msg.channel.basic_ack(msg.delivery_tag) |
1774 | running_test = False |
1775 | @@ -754,13 +882,11 @@ def amqp_connect(cfg, callback): |
1776 | Return queue object. |
1777 | ''' |
1778 | global amqp_con |
1779 | - logging.info('Connecting to AMQP server %s', cfg.get('amqp', 'host')) |
1780 | - if cfg.has_option('amqp', 'user') and cfg.get('amqp', 'user'): |
1781 | - amqp_con = amqp.Connection(cfg.get('amqp', 'host'), |
1782 | - userid=cfg.get('amqp', 'user'), |
1783 | - password=cfg.get('amqp', 'password')) |
1784 | - else: |
1785 | - amqp_con = amqp.Connection(cfg.get('amqp', 'host')) |
1786 | + logging.info('Connecting to AMQP server %s', os.environ['RABBIT_HOST']) |
1787 | + amqp_con = amqp.Connection(os.environ['RABBIT_HOST'], |
1788 | + userid=os.environ['RABBIT_USER'], |
1789 | + password=os.environ['RABBIT_PASSWORD'], |
1790 | + confirm_publish=True) |
1791 | queue = amqp_con.channel() |
1792 | # avoids greedy grabbing of the entire queue while being too busy |
1793 | queue.basic_qos(0, 1, True) |
1794 | @@ -813,15 +939,9 @@ def main(): |
1795 | # load configuration |
1796 | cfg = configparser.ConfigParser( |
1797 | {'setup_command': '', 'setup_command2': '', |
1798 | - 'long_tests': '', 'big_packages': '', |
1799 | - 'checkout_dir': '', 'blacklist': '', |
1800 | + 'checkout_dir': '', |
1801 | 'package_size_default': '', 'package_size_big': '', |
1802 | - 'extra_args': '', |
1803 | - 'region_name': os.environ.get('OS_REGION_NAME', '<undefined swift region>'), |
1804 | - 'username': os.environ.get('OS_USERNAME', '<undefined swift user>'), |
1805 | - 'auth_url': os.environ.get('OS_AUTH_URL', '<undefined swift auth URL>'), |
1806 | - 'password': os.environ.get('OS_PASSWORD', '<undefined swift password>'), |
1807 | - 'tenant': os.environ.get('OS_TENANT_NAME', '<undefined swift tenant name>')}, |
1808 | + 'extra_args': ''}, |
1809 | allow_no_value=True) |
1810 | cfg.read(args.config) |
1811 | |
1812 | @@ -829,19 +949,32 @@ def main(): |
1813 | format='%(levelname)s: %(message)s', |
1814 | handlers=[systemd_logging_handler]) |
1815 | |
1816 | - # build swift credentials |
1817 | - os_options = {} |
1818 | - os_options['region_name'] = cfg.get('swift', 'region_name') |
1819 | - if '/v2.0' in cfg.get('swift', 'auth_url'): |
1820 | - auth_version = '2.0' |
1821 | - else: |
1822 | - auth_version = '1' |
1823 | - swift_creds = {'authurl': cfg.get('swift', 'auth_url'), |
1824 | - 'user': cfg.get('swift', 'username'), |
1825 | - 'key': cfg.get('swift', 'password'), |
1826 | - 'tenant_name': cfg.get('swift', 'tenant'), |
1827 | - 'os_options': os_options, |
1828 | - 'auth_version': auth_version} |
1829 | + auth_version = os.environ['SWIFT_AUTH_VERSION'] |
1830 | + |
1831 | + if auth_version == '2': |
1832 | + swift_creds = { |
1833 | + 'authurl': os.environ['SWIFT_AUTH_URL'], |
1834 | + 'user': os.environ['SWIFT_USERNAME'], |
1835 | + 'key': os.environ['SWIFT_PASSWORD'], |
1836 | + 'tenant_name': os.environ['SWIFT_TENANT'], |
1837 | + 'os_options': { |
1838 | + 'region_name': os.environ['SWIFT_REGION'] |
1839 | + }, |
1840 | + 'auth_version': os.environ['SWIFT_AUTH_VERSION'] |
1841 | + } |
1842 | + else: # 3 |
1843 | + swift_creds = { |
1844 | + 'authurl': os.environ['SWIFT_AUTH_URL'], |
1845 | + 'user': os.environ['SWIFT_USERNAME'], |
1846 | + 'key': os.environ['SWIFT_PASSWORD'], |
1847 | + 'os_options': { |
1848 | + 'region_name': os.environ['SWIFT_REGION'], |
1849 | + 'project_domain_name': os.environ['SWIFT_PROJECT_DOMAIN_NAME'], |
1850 | + 'project_name': os.environ['SWIFT_PROJECT_NAME'], |
1851 | + 'user_domain_name': os.environ['SWIFT_USER_DOMAIN_NAME'] |
1852 | + }, |
1853 | + 'auth_version': auth_version |
1854 | + } |
1855 | |
1856 | # ensure that we can connect to swift |
1857 | swiftclient.Connection(**swift_creds).close() |
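The `request_matches_per_package` helper introduced in the worker diff above replaces the old flat `big_packages`/`long_tests`/`blacklist` lists. Its matching semantics can be exercised in isolation (entries are normalized `package/arch/release` strings, `all` acts as a wildcard slot, and each field may itself be an `fnmatch` glob, per the hunk):

```python
import fnmatch


def request_matches_per_package(package, arch, release, entries):
    # Try the exact triple first, then the "all" fallbacks for release
    # and arch, against every configured glob entry.
    candidates = ("%s/%s/%s" % (package, arch, release),
                  "%s/%s/all" % (package, arch),
                  "%s/all/all" % package)
    return any(fnmatch.fnmatchcase(candidate, entry)
               for candidate in candidates
               for entry in entries)
```

For example, a `linux-*/all/all` entry covers every `linux-` source package on all architectures and releases, while `glibc/amd64/all` only applies to amd64 requests.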
1858 | diff --git a/worker/worker.conf b/charms/focal/autopkgtest-cloud-worker/autopkgtest-cloud/worker/worker.conf |
1859 | similarity index 90% |
1860 | rename from worker/worker.conf |
1861 | rename to charms/focal/autopkgtest-cloud-worker/autopkgtest-cloud/worker/worker.conf |
1862 | index 3ae8b03..bee4851 100644 |
1863 | --- a/worker/worker.conf |
1864 | +++ b/charms/focal/autopkgtest-cloud-worker/autopkgtest-cloud/worker/worker.conf |
1865 | @@ -17,9 +17,7 @@ releases = trusty xenial |
1866 | architectures = i386 amd64 |
1867 | setup_command = |
1868 | # setup_command2 = $CHECKOUTDIR/setup-commands/setup-testbed |
1869 | -big_packages = binutils chromium-browser glibc libreoffice linux linux-* tdb firefox akonadi julia libtext-bidi-perl rocs camitk kineticstools r-cran-igraph botch mathicgb openjdk-8 openjdk-lts |
1870 | -long_tests = gutenprint gmp-ecm open-iscsi |
1871 | -blacklist = |
1872 | +per_package_config_dir = ~/autopkgtest-package-configs/ |
1873 | |
1874 | [virt] |
1875 | # normal packages get $PACKAGESIZE set to this value |
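The new `per_package_config_dir` option above points at files (`big_packages`, `long_tests`, `never_run` in the diff) whose lines are normalized to `package/arch/release` triples. A simplified sketch of that normalization, assuming the in-tree parser's intent (the actual code uses nested `try`/`except` over `str.split`):

```python
def normalize_entry(entry):
    # "pkg", "pkg/arch" and "pkg/arch/release" all become
    # "pkg/arch/release", with missing fields defaulting to "all".
    parts = entry.strip().split("/")
    parts += ["all"] * (3 - len(parts))
    return "/".join(parts[:3])


def read_per_package_lines(lines):
    # Skip comments and blank lines, normalize everything else.
    return {normalize_entry(line) for line in lines
            if line.strip() and not line.lstrip().startswith("#")}
```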
1876 | diff --git a/charms/focal/autopkgtest-cloud-worker/config.yaml b/charms/focal/autopkgtest-cloud-worker/config.yaml |
1877 | new file mode 100644 |
1878 | index 0000000..dc4d4fa |
1879 | --- /dev/null |
1880 | +++ b/charms/focal/autopkgtest-cloud-worker/config.yaml |
1881 | @@ -0,0 +1,104 @@ |
1882 | +options: |
1883 | + nova-rcs: |
1884 | + type: string |
1885 | + description: "base64 encoded tarball of nova OpenStack credential .rc files" |
1886 | + default: ~ |
1887 | + n-workers: |
1888 | + type: string |
1889 | + description: "yaml dict region -> arch -> n_workers corresponding to the number of worker instances to run" |
1890 | + default: ~ |
1891 | + lxd-remotes: |
1892 | + type: string |
1893 | + description: "yaml dict arch -> ip -> n_workers corresponding to the number of LXD workers to run" |
1894 | + default: ~ |
1895 | + mail-notify: |
1896 | + type: string |
1897 | + description: "email addresses (space separated) to notify on worker failure" |
1898 | + default: ~ |
1899 | + swift-auth-url: |
1900 | + type: string |
1901 | + description: "swift auth URL" |
1902 | + default: ~ |
1903 | + swift-username: |
1904 | + type: string |
1905 | + description: "username for swift" |
1906 | + default: ~ |
1907 | + swift-password: |
1908 | + type: string |
1909 | + description: "password for swift" |
1910 | + default: ~ |
1911 | + swift-region: |
1912 | + type: string |
1913 | + description: "the swift region to use" |
1914 | + default: ~ |
1915 | + swift-project-domain-name: |
1916 | + type: string |
1917 | + description: "the swift project domain (v3 only)" |
1918 | + default: ~ |
1919 | + swift-project-name: |
1920 | + type: string |
1921 | + description: "the swift project name (v3 only)" |
1922 | + default: ~ |
1923 | + swift-user-domain-name: |
1924 | + type: string |
1925 | + description: "the swift user domain name (v3 only)" |
1926 | + default: ~ |
1927 | + swift-auth-version: |
1928 | + type: int |
1929 | + description: "the keystone API version to use for swift (2 or 3)" |
1930 | + default: 2 |
1931 | + swift-tenant: |
1932 | + type: string |
1933 | + description: "the swift tenant name (v2 only)" |
1934 | + default: ~ |
1935 | + releases: |
1936 | + type: string |
1937 | + description: "the releases to test for" |
1938 | + default: ~ |
1939 | + mirror: |
1940 | + type: string |
1941 | + description: "the archive mirror to use for tests" |
1942 | + default: "http://archive.ubuntu.com/ubuntu/" |
1943 | + net-name: |
1944 | + type: string |
1945 | + description: "the neutron network name to use for the SSH/nova runner (must have port 22 inbound open)" |
1946 | + worker-setup-command: |
1947 | + type: string |
1948 | + description: "autopkgtest's --setup-command argument" |
1949 | + default: ~ |
1950 | + worker-setup-command2: |
1951 | + type: string |
1952 | + description: "second autopkgtest --setup-command argument" |
1953 | + default: ~ |
1954 | + worker-default-flavor: |
1955 | + type: string |
1956 | + description: "the default nova flavor to run tests under" |
1957 | + default: "m1.small" |
1958 | + worker-big-flavor: |
1959 | + type: string |
1960 | + description: "the flavor to run big-packages tests under" |
1961 | + default: "m1.large" |
1962 | + worker-args: |
1963 | + type: string |
1964 | + description: "autopkgtest virt args" |
1965 | + default: "null" |
1966 | + influxdb-hostname: |
1967 | + default: ~ |
1968 | + description: The hostname of the remote influxdb to submit metrics to |
1969 | + type: string |
1970 | + influxdb-port: |
1971 | + default: 8086 |
1972 | + description: The port of the remote influxdb to submit metrics to |
1973 | + type: int |
1974 | + influxdb-username: |
1975 | + default: ~ |
1976 | + description: The influxdb username |
1977 | + influxdb-password: |
1978 | + default: ~ |
1979 | + description: The influxdb password |
1980 | + influxdb-database: |
1981 | + default: ~ |
1982 | + description: The influxdb database to use |
1983 | + influxdb-context: |
1984 | + default: "production" |
1985 | + description: Submit all metrics with this as the "context" tag, to differentiate staging vs. production submissions |
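The `swift-*` options above feed the `SWIFT_*` environment variables the worker reads, with `swift-auth-version` selecting between the keystone v2 and v3 credential shapes shown in the worker diff. A standalone sketch of that selection, taking a plain dict instead of `os.environ` so it can be exercised without a deployment:

```python
def build_swift_creds(env):
    # Assemble swiftclient.Connection keyword arguments from SWIFT_*
    # settings, mirroring the v2/v3 split in the worker's main().
    creds = {
        "authurl": env["SWIFT_AUTH_URL"],
        "user": env["SWIFT_USERNAME"],
        "key": env["SWIFT_PASSWORD"],
        "os_options": {"region_name": env["SWIFT_REGION"]},
        "auth_version": env["SWIFT_AUTH_VERSION"],
    }
    if env["SWIFT_AUTH_VERSION"] == "2":
        creds["tenant_name"] = env["SWIFT_TENANT"]
    else:  # keystone v3
        creds["os_options"].update({
            "project_domain_name": env["SWIFT_PROJECT_DOMAIN_NAME"],
            "project_name": env["SWIFT_PROJECT_NAME"],
            "user_domain_name": env["SWIFT_USER_DOMAIN_NAME"],
        })
    return creds
```

The resulting dict is what the worker passes to `swiftclient.Connection(**swift_creds)`.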
1986 | diff --git a/charms/focal/autopkgtest-cloud-worker/layer.yaml b/charms/focal/autopkgtest-cloud-worker/layer.yaml |
1987 | new file mode 100644 |
1988 | index 0000000..e0a99e0 |
1989 | --- /dev/null |
1990 | +++ b/charms/focal/autopkgtest-cloud-worker/layer.yaml |
1991 | @@ -0,0 +1,33 @@ |
1992 | +includes: |
1993 | + - 'layer:apt' |
1994 | + - 'layer:basic' |
1995 | + - 'layer:status' |
1996 | + - 'interface:rabbitmq' |
1997 | + - 'interface:juju-info' |
1998 | +options: |
1999 | + basic: |
2000 | + packages: |
2001 | + - gir1.2-glib-2.0 |
2002 | + - python3-gi |
2003 | + - python3-pygit2 |
2004 | + include_system_packages: true |
2005 | + apt: |
2006 | + packages: |
2007 | + - bsd-mailx |
2008 | + - dctrl-tools |
2009 | + - distro-info |
2010 | + - git |
2011 | + - libdpkg-perl |
2012 | + - lxd-client |
2013 | + - make |
2014 | + - python3-amqplib |
2015 | + - python3-debian |
2016 | + - python3-distro-info |
2017 | + - python3-glanceclient |
2018 | + - python3-influxdb |
2019 | + - python3-keystoneauth1 |
2020 | + - python3-neutronclient |
2021 | + - python3-novaclient |
2022 | + - python3-openstackclient |
2023 | + - python3-swiftclient |
2024 | + - ssmtp |
2025 | diff --git a/charms/focal/autopkgtest-cloud-worker/lib/systemd.py b/charms/focal/autopkgtest-cloud-worker/lib/systemd.py |
2026 | new file mode 100644 |
2027 | index 0000000..8c9856e |
2028 | --- /dev/null |
2029 | +++ b/charms/focal/autopkgtest-cloud-worker/lib/systemd.py |
2030 | @@ -0,0 +1,402 @@ |
2031 | +import os |
2032 | +import shutil |
2033 | +from textwrap import dedent |
2034 | + |
2035 | +from collections import defaultdict |
2036 | +from gi.repository import GLib, Gio |
2037 | +from lib.utils import UbuntuRelease |
2038 | + |
2039 | + |
2040 | +SYSTEM_BUS = Gio.bus_get_sync(Gio.BusType.SYSTEM) |
2041 | + |
2042 | + |
2043 | +def get_unit_names(region, arch, ns): |
2044 | + if arch == "amd64": |
2045 | + unit_names = ["autopkgtest@{}-{}.service".format(region, n) for n in ns] |
2046 | + else: |
2047 | + unit_names = [ |
2048 | + "autopkgtest@{}-{}-{}.service".format(region, arch, n) for n in ns |
2049 | + ] |
2050 | + |
2051 | + return unit_names |
2052 | + |
2053 | + |
2054 | +def reload(): |
2055 | + SYSTEM_BUS.call_sync( |
2056 | + "org.freedesktop.systemd1", |
2057 | + "/org/freedesktop/systemd1", |
2058 | + "org.freedesktop.systemd1.Manager", |
2059 | + "Reload", |
2060 | + None, # parameters |
2061 | + None, # reply type |
2062 | + Gio.DBusCallFlags.NONE, |
2063 | + -1, # timeout |
2064 | + None, |
2065 | + ) # cancellable |
2066 | + |
2067 | + |
2068 | +def enabledisable(unit_names, enabledisable, enabledisablearg, startstop): |
2069 | + print( |
2070 | + "calling {enabledisable} then {startstop} on {unit_names}".format( |
2071 | + **locals() |
2072 | + ) |
2073 | + ) |
2074 | + SYSTEM_BUS.call_sync( |
2075 | + "org.freedesktop.systemd1", |
2076 | + "/org/freedesktop/systemd1", |
2077 | + "org.freedesktop.systemd1.Manager", |
2078 | + enabledisable, |
2079 | + enabledisablearg, |
2080 | + GLib.VariantType( |
2081 | + "*" |
2082 | + ), # reply type, using '*' because we don't actually care |
2083 | + Gio.DBusCallFlags.NONE, |
2084 | + -1, # timeout |
2085 | + None, |
2086 | + ) # cancellable |
2087 | + |
2088 | + for unit in unit_names: |
2089 | + (_,) = SYSTEM_BUS.call_sync( |
2090 | + "org.freedesktop.systemd1", |
2091 | + "/org/freedesktop/systemd1", |
2092 | + "org.freedesktop.systemd1.Manager", |
2093 | + startstop, |
2094 | + GLib.Variant("(ss)", (unit, "replace")), |
2095 | + GLib.VariantType("(o)"), # reply type |
2096 | + Gio.DBusCallFlags.NONE, |
2097 | + -1, # timeout |
2098 | + None, |
2099 | + ) # cancellable |
2100 | + |
2101 | + |
2102 | +def enable(unit_names): |
2103 | + enabledisable( |
2104 | + unit_names, |
2105 | + "EnableUnitFiles", |
2106 | + GLib.Variant("(asbb)", (unit_names, False, True)), |
2107 | + "StartUnit", |
2108 | + ) |
2109 | + reload() |
2110 | + |
2111 | + |
2112 | +def disable(unit_names): |
2113 | + enabledisable( |
2114 | + unit_names, |
2115 | + "DisableUnitFiles", |
2116 | + GLib.Variant("(asb)", (unit_names, False)), |
2117 | + "StopUnit", |
2118 | + ) |
2119 | + for unit in unit_names: |
2120 | + SYSTEM_BUS.call_sync( |
2121 | + "org.freedesktop.systemd1", |
2122 | + "/org/freedesktop/systemd1", |
2123 | + "org.freedesktop.systemd1.Manager", |
2124 | + "ResetFailedUnit", |
2125 | + GLib.Variant("(s)", (unit,)), |
2126 | + None, # reply type |
2127 | + Gio.DBusCallFlags.NONE, |
2128 | + -1.0, # timeout |
2129 | + None, |
2130 | + ) # cancellable |
2131 | + try: |
2132 | + dropindir = os.path.join( |
2133 | + os.path.sep, "etc", "systemd", "system", "{}.d".format(unit) |
2134 | + ) |
2135 | + shutil.rmtree(dropindir) |
2136 | + except FileNotFoundError: |
2137 | + pass |
2138 | + |
2139 | + reload() |
2140 | + |
2141 | + |
2142 | +def get_units(): |
2143 | + (units,) = SYSTEM_BUS.call_sync( |
2144 | + "org.freedesktop.systemd1", |
2145 | + "/org/freedesktop/systemd1", |
2146 | + "org.freedesktop.systemd1.Manager", |
2147 | + "ListUnits", |
2148 | + None, # parameters |
2149 | + GLib.VariantType("(a(ssssssouso))"), # reply type |
2150 | + Gio.DBusCallFlags.NONE, |
2151 | + -1, # timeout |
2152 | + None, |
2153 | + ) # cancellable |
2154 | + |
2155 | + build_adt_image_object_paths = defaultdict(lambda: defaultdict(dict)) |
2156 | + unit_object_paths = defaultdict(lambda: defaultdict(dict)) |
2157 | + lxd_object_paths = defaultdict(lambda: defaultdict(dict)) |
2158 | + |
2159 | + for unit in units: |
2160 | + (name, _, _, active, _, _, object_path, _, _, _) = unit |
2161 | + if name.startswith("build-adt-image@") and name.endswith(".timer"): |
2162 | + name_release_region_arch = name[16:][:-6] |
2163 | + (release, region, arch) = name_release_region_arch.split("-", -1) |
2164 | + build_adt_image_object_paths[region][arch][release] = object_path |
2165 | + continue |
2166 | + |
2167 | + if not name.startswith("autopkgtest@"): |
2168 | + continue |
2169 | + |
2170 | + name_cloud = name[12:][:-8] |
2171 | + |
2172 | + if name_cloud.startswith("lxd-"): |
2173 | + # lxd-armhf-1.2.3.4-1 |
2174 | + name_cloud = name_cloud[4:] |
2175 | + try: |
2176 | + (arch, ip, n) = name_cloud.split("-", -1) |
2177 | + lxd_object_paths[arch][ip][n] = object_path |
2178 | + except ValueError: |
2179 | + # XXX: for some reason we get autopkgtest@lxd-armhf-a.b.c.d |
2180 | + # (without the number), not sure why |
2181 | + pass |
2182 | + continue |
2183 | + |
2184 | + try: |
2185 | + (region, arch, n) = name_cloud.split("-", -1) |
2186 | + except ValueError: |
2187 | + # autopkgtest@lcy01-1.service |
2188 | + (region, n) = name_cloud.split("-", -1) |
2189 | + arch = "amd64" |
2190 | + unit_object_paths[region][arch][n] = object_path |
2191 | + |
2192 | + return (unit_object_paths, lxd_object_paths, build_adt_image_object_paths) |
2193 | + |
2194 | + |
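The unit-name conventions handled by `get_units` are easy to get wrong, so here is a standalone sketch of the same parsing; `parse_unit` is a hypothetical helper for illustration, not part of the charm.

```python
def parse_unit(name):
    """Classify an autopkgtest systemd unit name, mirroring get_units.

    Returns a tagged tuple, or None for unrelated units. The IP in lxd
    unit names survives a plain split("-") because it uses dots.
    """
    if name.startswith("build-adt-image@") and name.endswith(".timer"):
        # build-adt-image@<release>-<region>-<arch>.timer
        release, region, arch = name[16:-6].split("-")
        return ("timer", release, region, arch)
    if not (name.startswith("autopkgtest@") and name.endswith(".service")):
        return None
    body = name[12:-8]
    if body.startswith("lxd-"):
        # autopkgtest@lxd-<arch>-<ip>-<n>.service
        arch, ip, n = body[4:].split("-")
        return ("lxd", arch, ip, n)
    try:
        # autopkgtest@<region>-<arch>-<n>.service
        region, arch, n = body.split("-")
    except ValueError:
        # amd64 units omit the arch: autopkgtest@<region>-<n>.service
        region, n = body.split("-")
        arch = "amd64"
    return ("cloud", region, arch, n)
```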
2195 | +def update_cloud_dropins(region, arch, n, releases): |
2196 | + # re-created on every run, so that changes take effect on charm upgrade |
2197 | + def get_arches(release): |
2198 | + if arch == "amd64" and UbuntuRelease(release) < UbuntuRelease("focal"): |
2199 | + return ["amd64", "i386"] |
2200 | + else: |
2201 | + return [arch] |
2202 | + |
2203 | + ensure_adt_units = " ".join( |
2204 | + [ |
2205 | + "ensure-adt-image@{}-{}-{}.service".format(release, region, arch) |
2206 | + for release in releases |
2207 | + for arch in get_arches(release) |
2208 | + ] |
2209 | + ) |
2210 | + |
2211 | + for unit in get_unit_names(region, arch, range(1, n + 1)): |
2212 | + dropindir = os.path.join( |
2213 | + os.path.sep, "etc", "systemd", "system", "{}.d".format(unit) |
2214 | + ) |
2215 | + try: |
2216 | + os.makedirs(dropindir) |
2217 | + except FileExistsError: |
2218 | + shutil.rmtree(dropindir) |
2219 | + os.makedirs(dropindir) |
2220 | + |
2221 | + with open(os.path.join(dropindir, "ensure-adt-image.conf"), "w") as f: |
2222 | + f.write( |
2223 | + dedent( |
2224 | + """\ |
2225 | + [Unit] |
2226 | + Requires={ensureadt} |
2227 | + After={ensureadt} |
2228 | + """ |
2229 | + ).format(ensureadt=ensure_adt_units) |
2230 | + ) |
2231 | + reload() |
2232 | + |
2233 | + |
2234 | +def update_lxd_dropins(arch, ip, n): |
2235 | + # re-created on every run, so that changes take effect on charm upgrade |
2236 | + units = [ |
2237 | + "autopkgtest@lxd-{}-{}-{}.service".format(arch, ip, m) |
2238 | + for m in range(1, n + 1) |
2239 | + ] |
2240 | + |
2241 | + for unit in units: |
2242 | + dropindir = os.path.join( |
2243 | + os.path.sep, "etc", "systemd", "system", "{}.d".format(unit) |
2244 | + ) |
2245 | + try: |
2246 | + os.makedirs(dropindir) |
2247 | + except FileExistsError: |
2248 | + pass |
2249 | + |
2250 | + with open( |
2251 | + os.path.join(dropindir, "autopkgtest-lxd-remote.conf"), "w" |
2252 | + ) as f: |
2253 | + remote_unit = "autopkgtest-lxd-remote@lxd-{}-{}.service".format( |
2254 | + arch, ip |
2255 | + ) |
2256 | + f.write( |
2257 | + dedent( |
2258 | + """\ |
2259 | + [Unit] |
2260 | + Requires={remote_unit} |
2261 | + After={remote_unit} |
2262 | + """ |
2263 | + ).format(remote_unit=remote_unit) |
2264 | + ) |
2265 | + reload() |
2266 | + |
2267 | + |
2268 | +def enable_timer(region, arch, releases): |
2269 | + unit_names = [ |
2270 | + "build-adt-image@{}-{}-{}.timer".format(release, region, arch) |
2271 | + for release in releases |
2272 | + ] |
2273 | + enabledisable( |
2274 | + unit_names, |
2275 | + "EnableUnitFiles", |
2276 | + GLib.Variant("(asbb)", (unit_names, False, True)), |
2277 | + "StartUnit", |
2278 | + ) |
2279 | + reload() |
2280 | + |
2281 | + |
2282 | +def disable_timer(region, arch, releases): |
2283 | + unit_names = [ |
2284 | + "build-adt-image@{}-{}-{}.timer".format(release, region, arch) |
2285 | + for release in releases |
2286 | + ] |
2287 | + enabledisable( |
2288 | + unit_names, |
2289 | + "DisableUnitFiles", |
2290 | + GLib.Variant("(asb)", (unit_names, False)), |
2291 | + "StopUnit", |
2292 | + ) |
2293 | + reload() |
2294 | + |
2295 | + |
2296 | +def set_up_systemd_units(target_cloud_config, target_lxd_config, releases): |
2297 | + """Reconcile the currently enabled units with the units that the |
2298 | + configuration file asks for: work out the difference between the two, |
2299 | + and enable / disable units as required. This is done per region-arch |
2300 | + pair. amd64 is a special case: its unit names do not include the |
2301 | + arch.""" |
2302 | + |
2303 | + ( |
2304 | + cloud_unit_object_paths, |
2305 | + lxd_unit_object_paths, |
2306 | + build_adt_image_object_paths, |
2307 | + ) = get_units() |
2308 | + all_cloud_regions = set( |
2309 | + list(cloud_unit_object_paths.keys()) + list(target_cloud_config.keys()) |
2310 | + ) |
2311 | + |
2312 | + focal = UbuntuRelease("focal") |
2313 | + i386_releases = [ |
2314 | + release for release in releases if UbuntuRelease(release) < focal |
2315 | + ] |
2316 | + |
2317 | + for region in sorted(all_cloud_regions): |
2318 | + all_arches_for_this_region = set( |
2319 | + list(cloud_unit_object_paths[region].keys()) |
2320 | + + list(target_cloud_config.get(region, {}).keys()) |
2321 | + ) |
2322 | + for arch in sorted(all_arches_for_this_region): |
2323 | + n_units = len(cloud_unit_object_paths[region][arch]) |
2324 | + target_n_units = target_cloud_config.get(region, {}).get(arch, 0) |
2325 | + print( |
2326 | + "Got {target_n_units} units for {region}/{arch}".format( |
2327 | + **locals() |
2328 | + ) |
2329 | + ) |
2330 | + if target_n_units > 0: |
2331 | + update_cloud_dropins(region, arch, target_n_units, releases) |
2332 | + enable_timer(region, arch, releases) |
2333 | + if arch == "amd64": |
2334 | + enable_timer(region, "i386", i386_releases) |
2335 | + if releases == []: |
2336 | + target_n_units = 0 |
2337 | + if target_n_units == 0: |
2338 | + disable_timer(region, arch, releases) |
2339 | + if arch == "amd64": |
2340 | + disable_timer(region, "i386", i386_releases) |
2341 | + if n_units < target_n_units: |
2342 | + # need to enable some units |
2343 | + delta = target_n_units - n_units |
2344 | + unit_names = get_unit_names( |
2345 | + region, arch, range(n_units + 1, n_units + delta + 1) |
2346 | + ) |
2347 | + enable(unit_names) |
2348 | + elif n_units > target_n_units: |
2349 | + # need to disable some units |
2350 | + delta = n_units - target_n_units |
2351 | + unit_names = get_unit_names( |
2352 | + region, arch, range(target_n_units + 1, n_units + 1) |
2353 | + ) |
2354 | + disable(unit_names) |
2355 | + |
2356 | + currently_enabled_releases = set( |
2357 | + build_adt_image_object_paths[region][arch].keys() |
2358 | + ) |
2359 | + releases_to_disable = currently_enabled_releases - set(releases) |
2360 | + |
2361 | + if releases_to_disable: |
2362 | + print( |
2363 | + "Disabling build-adt-image timers for {region}/{arch}/{releases_to_disable}".format( |
2364 | + **locals() |
2365 | + ) |
2366 | + ) |
2367 | + disable_timer(region, arch, releases_to_disable) |
2368 | + |
2369 | + # now do lxd. the target config is a dict arch -> IP -> nworkers |
2370 | + all_lxd_arches = set( |
2371 | + list(lxd_unit_object_paths.keys()) + list(target_lxd_config.keys()) |
2372 | + ) |
2373 | + for arch in sorted(all_lxd_arches): |
2374 | + all_ips_for_this_arch = set( |
2375 | + list(lxd_unit_object_paths[arch].keys()) |
2376 | + + list(target_lxd_config.get(arch, {}).keys()) |
2377 | + ) |
2378 | + for ip in sorted(all_ips_for_this_arch): |
2379 | + n_units = len(lxd_unit_object_paths[arch][ip]) |
2380 | + target_n_units = target_lxd_config.get(arch, {}).get(ip, 0) |
2381 | + if target_n_units > 0: |
2382 | + update_lxd_dropins(arch, ip, target_n_units) |
2383 | + if n_units == target_n_units: |
2384 | + continue |
2385 | + elif n_units < target_n_units: |
2386 | + # need to enable some units |
2387 | + delta = target_n_units - n_units |
2388 | + unit_names = [ |
2389 | + "autopkgtest@lxd-{}-{}-{}.service".format(arch, ip, n) |
2390 | + for n in range(n_units + 1, n_units + delta + 1) |
2391 | + ] |
2392 | + enable(unit_names) |
2393 | + else: |
2394 | + # need to disable some units |
2395 | + delta = n_units - target_n_units |
2396 | + unit_names = [ |
2397 | + "autopkgtest@lxd-{}-{}-{}.service".format(arch, ip, n) |
2398 | + for n in range(target_n_units + 1, n_units + 1) |
2399 | + ] |
2400 | + disable(unit_names) |
2401 | + |
2402 | + |
2403 | +def reload_unit(object_path): |
2404 | + SYSTEM_BUS.call_sync( |
2405 | + "org.freedesktop.systemd1", |
2406 | + object_path, |
2407 | + "org.freedesktop.systemd1.Unit", |
2408 | + "Reload", |
2409 | + GLib.Variant("(s)", ("replace",)), |
2410 | + GLib.VariantType("(o)"), # reply type |
2411 | + Gio.DBusCallFlags.NONE, |
2412 | + -1, # timeout |
2413 | + None, |
2414 | + ) # cancellable |
2415 | + |
2416 | + |
2417 | +def reload_units(): |
2418 | + (cloud_unit_object_paths, lxd_unit_object_paths, _) = get_units() |
2419 | + |
2420 | + for region in cloud_unit_object_paths: |
2421 | + for arch in cloud_unit_object_paths[region]: |
2422 | + for n in cloud_unit_object_paths[region][arch]: |
2423 | + object_path = cloud_unit_object_paths[region][arch][n] |
2424 | + print("Reloading {}".format(object_path)) |
2425 | + reload_unit(object_path) |
2426 | + |
2427 | + for arch in lxd_unit_object_paths: |
2428 | + for ip in lxd_unit_object_paths[arch]: |
2429 | + for n in lxd_unit_object_paths[arch][ip]: |
2430 | + object_path = lxd_unit_object_paths[arch][ip][n] |
2431 | + print("Reloading {}".format(object_path)) |
2432 | + reload_unit(object_path) |
2433 | diff --git a/charms/focal/autopkgtest-cloud-worker/lib/utils.py b/charms/focal/autopkgtest-cloud-worker/lib/utils.py |
2434 | new file mode 100644 |
2435 | index 0000000..22d52ee |
2436 | --- /dev/null |
2437 | +++ b/charms/focal/autopkgtest-cloud-worker/lib/utils.py |
2438 | @@ -0,0 +1,79 @@ |
2439 | +from charmhelpers.core.hookenv import log |
2440 | +from distro_info import UbuntuDistroInfo |
2441 | +from functools import total_ordering |
2442 | + |
2443 | +import os |
2444 | +import pwd |
2445 | +import subprocess |
2446 | + |
2447 | + |
2448 | +class UnixUser(object): |
2449 | + def __init__(self, username): |
2450 | + self.username = username |
2451 | + pwnam = pwd.getpwnam(username) |
2452 | + self.uid = pwnam.pw_uid |
2453 | + self.gid = pwnam.pw_gid |
2454 | + |
2455 | + def __enter__(self): |
2456 | + log( |
2457 | + "Raising UID to {uid} and GID to {gid}".format( |
2458 | + uid=self.uid, gid=self.gid |
2459 | + ), |
2460 | + "INFO", |
2461 | + ) |
2462 | + os.setresgid(self.gid, self.gid, os.getgid()) |
2463 | + os.setresuid(self.uid, self.uid, os.getuid()) |
2464 | + |
2465 | + def __exit__(self, exc_type, exc_val, exc_tb): |
2466 | + _, _, suid = os.getresuid() |
2467 | + _, _, sgid = os.getresgid() |
2468 | + log( |
2469 | + "Restoring UID to {suid} and GID to {sgid}".format( |
2470 | + suid=suid, sgid=sgid |
2471 | + ), |
2472 | + "INFO", |
2473 | + ) |
2474 | + os.setresuid(suid, suid, suid) |
2475 | + os.setresgid(sgid, sgid, sgid) |
2476 | + |
2477 | + |
2478 | +def install_autodep8(location): |
2479 | + os.chdir(location) |
2480 | + with UnixUser("ubuntu"): |
2481 | + subprocess.check_call(["make"]) |
2482 | + subprocess.check_call(["make", "install"]) |
2483 | + |
2484 | + |
2485 | +def pull(repository): |
2486 | + """Do the equivalent of git fetch origin && git reset --hard origin/master.""" |
2487 | + origin = [ |
2488 | + remote for remote in repository.remotes if remote.name == "origin" |
2489 | + ][0] |
2490 | + origin.fetch() |
2491 | + remote_master_id = repository.lookup_reference( |
2492 | + "refs/remotes/origin/master" |
2493 | + ).target |
2494 | + repository.checkout_tree(repository.get(remote_master_id)) |
2495 | + master_ref = repository.lookup_reference("refs/heads/master") |
2496 | + master_ref.set_target(remote_master_id) |
2497 | + repository.head.set_target(remote_master_id) |
2498 | + |
2499 | + |
2500 | +@total_ordering |
2501 | +class UbuntuRelease(object): |
2502 | + all_releases = UbuntuDistroInfo().all |
2503 | + |
2504 | + def __init__(self, release): |
2505 | + if release not in UbuntuRelease.all_releases: |
2506 | + raise ValueError("Unknown release '{}'".format(release)) |
2507 | + self.release = release |
2508 | + |
2509 | + def __eq__(self, other): |
2510 | + return UbuntuRelease.all_releases.index( |
2511 | + self.release |
2512 | + ) == UbuntuRelease.all_releases.index(other.release) |
2513 | + |
2514 | + def __le__(self, other): |
2515 | + return UbuntuRelease.all_releases.index( |
2516 | + self.release |
2517 | + ) <= UbuntuRelease.all_releases.index(other.release) |
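The `UbuntuRelease` ordering above is what drives the i386 gating in `set_up_systemd_units` (i386 images are only built for releases before focal). A self-contained sketch of the same idea, with a hardcoded list standing in for `UbuntuDistroInfo().all` so it runs without the `distro-info` package:

```python
from functools import total_ordering

# Illustrative subset; the real class uses UbuntuDistroInfo().all, which
# lists every Ubuntu release in chronological order.
ALL_RELEASES = ["xenial", "bionic", "focal", "groovy"]


@total_ordering
class Release:
    def __init__(self, release):
        if release not in ALL_RELEASES:
            raise ValueError("Unknown release '{}'".format(release))
        self.release = release

    def __eq__(self, other):
        return ALL_RELEASES.index(self.release) == ALL_RELEASES.index(other.release)

    def __le__(self, other):
        # total_ordering derives __lt__, __gt__, __ge__ from __le__ and __eq__
        return ALL_RELEASES.index(self.release) <= ALL_RELEASES.index(other.release)


# i386 is only relevant for releases older than focal:
i386_releases = [r for r in ALL_RELEASES if Release(r) < Release("focal")]
```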
2518 | diff --git a/deployment/charms/xenial/autopkgtest-cloud-worker/metadata.yaml b/charms/focal/autopkgtest-cloud-worker/metadata.yaml |
2519 | similarity index 50% |
2520 | rename from deployment/charms/xenial/autopkgtest-cloud-worker/metadata.yaml |
2521 | rename to charms/focal/autopkgtest-cloud-worker/metadata.yaml |
2522 | index b96a32b..ba9667d 100644 |
2523 | --- a/deployment/charms/xenial/autopkgtest-cloud-worker/metadata.yaml |
2524 | +++ b/charms/focal/autopkgtest-cloud-worker/metadata.yaml |
2525 | @@ -1,12 +1,24 @@ |
2526 | name: autopkgtest-cloud-worker |
2527 | summary: autopkgtest cloud worker |
2528 | -maintainer: Martin Pitt <martin.pitt@ubuntu.com> |
2529 | +maintainer: Iain Lane <iain.lane@canonical.com> |
2530 | +series: |
2531 | + - focal |
2532 | description: | |
2533 | An autopkgtest worker receives test requests from AMQP, runs autopkgtest with |
2534 | - the ssh/nova runner, and puts the test results into Swift. |
2535 | -categories: |
2536 | - - ops |
2537 | + the ssh/nova or lxd runner, and puts the test results into Swift. |
2538 | +tags: |
2539 | + - ops |
2540 | requires: |
2541 | amqp: |
2542 | interface: rabbitmq |
2543 | + haproxy-lxd-armhf: |
2544 | + interface: juju-info |
2545 | subordinate: false |
2546 | +storage: |
2547 | + tmp: |
2548 | + type: filesystem |
2549 | + description: working files for running tests |
2550 | + shared: false |
2551 | + read-only: false |
2552 | + minimum-size: 200G |
2553 | + location: /tmp |
2554 | diff --git a/charms/focal/autopkgtest-cloud-worker/reactive/autopkgtest_cloud_worker.py b/charms/focal/autopkgtest-cloud-worker/reactive/autopkgtest_cloud_worker.py |
2555 | new file mode 100644 |
2556 | index 0000000..d459b31 |
2557 | --- /dev/null |
2558 | +++ b/charms/focal/autopkgtest-cloud-worker/reactive/autopkgtest_cloud_worker.py |
2559 | @@ -0,0 +1,597 @@ |
2560 | +from charms.layer import status |
2561 | +from charms.reactive import ( |
2562 | + when, |
2563 | + when_all, |
2564 | + when_any, |
2565 | + when_not, |
2566 | + when_not_all, |
2567 | + clear_flag, |
2568 | + set_flag, |
2569 | + hook, |
2570 | + not_unless, |
2571 | +) |
2572 | +from charms.reactive.relations import endpoint_from_flag |
2573 | +from charmhelpers.core.hookenv import ( |
2574 | + charm_dir, |
2575 | + config, |
2576 | + log, |
2577 | + storage_get, |
2578 | + storage_list, |
2579 | +) |
2580 | +from utils import install_autodep8, UnixUser |
2581 | + |
2582 | +from textwrap import dedent |
2583 | + |
2584 | +import glob |
2585 | +import os |
2586 | +import pygit2 |
2587 | +import socket |
2588 | +import subprocess |
2589 | +import yaml |
2590 | + |
2591 | + |
2592 | +AUTOPKGTEST_LOCATION = os.path.expanduser("~ubuntu/autopkgtest") |
2593 | +AUTOPKGTEST_CLOUD_LOCATION = os.path.expanduser("~ubuntu/autopkgtest-cloud") |
2594 | +AUTODEP8_LOCATION = os.path.expanduser("~ubuntu/autodep8") |
2595 | +AUTOPKGTEST_PER_PACKAGE_LOCATION = os.path.expanduser( |
2596 | + "~ubuntu/autopkgtest-package-configs/" |
2597 | +) |
2598 | + |
2599 | +AUTOPKGTEST_CLONE_LOCATION = ( |
2600 | + "https://git.launchpad.net/~ubuntu-release/autopkgtest/+git/development" |
2601 | +) |
2602 | +AUTODEP8_CLONE_LOCATION = ( |
2603 | + "https://git.launchpad.net/~ubuntu-release/+git/autodep8" |
2604 | +) |
2605 | + |
2606 | +AUTOPKGTEST_PER_PACKAGE_CLONE_LOCATION = "https://git.launchpad.net/~ubuntu-release/autopkgtest-cloud/+git/autopkgtest-package-configs" |
2607 | + |
2608 | +RABBITMQ_CRED_PATH = os.path.expanduser("~ubuntu/rabbitmq.cred") |
2609 | + |
2610 | + |
2611 | +@when_not("autopkgtest.autopkgtest_cloud_symlinked") |
2612 | +def symlink_autopkgtest_cloud(): |
2613 | + with UnixUser("ubuntu"): |
2614 | + try: |
2615 | + autopkgtest_cloud = os.path.join(charm_dir(), "autopkgtest-cloud") |
2616 | + os.symlink(autopkgtest_cloud, AUTOPKGTEST_CLOUD_LOCATION) |
2617 | + except FileExistsError: |
2618 | + pass |
2619 | + set_flag("autopkgtest.autopkgtest_cloud_symlinked") |
2620 | + |
2621 | + |
2622 | +@when("apt.installed.git") |
2623 | +@when_not("autopkgtest.autodep8_cloned") |
2624 | +def clone_autodep8(): |
2625 | + status.maintenance("Cloning autodep8") |
2626 | + log("Cloning autodep8", "INFO") |
2627 | + with UnixUser("ubuntu"): |
2628 | + try: |
2629 | + pygit2.clone_repository(AUTODEP8_CLONE_LOCATION, AUTODEP8_LOCATION) |
2630 | + log("autodep8 cloned", "INFO") |
2631 | + status.maintenance("autodep8 cloned") |
2632 | + set_flag("autopkgtest.autodep8_cloned") |
2633 | + except ValueError: |
2634 | + log("autodep8 already cloned", "INFO") |
2635 | + |
2636 | + |
2637 | +@when("apt.installed.git") |
2638 | +@when_not("autopkgtest.autopkgtest_package_configs_cloned") |
2639 | +def clone_autopkgtest_package_configs(): |
2640 | + status.maintenance("Cloning autopkgtest-package-configs") |
2641 | + log("Cloning autopkgtest-package-configs", "INFO") |
2642 | + with UnixUser("ubuntu"): |
2643 | + try: |
2644 | + pygit2.clone_repository( |
2645 | + AUTOPKGTEST_PER_PACKAGE_CLONE_LOCATION, |
2646 | + AUTOPKGTEST_PER_PACKAGE_LOCATION, |
2647 | + ) |
2648 | + log("autopkgtest-package-configs cloned", "INFO") |
2649 | + status.maintenance("autopkgtest-package-configs cloned") |
2650 | + set_flag("autopkgtest.autopkgtest_package_configs_cloned") |
2651 | + except ValueError: |
2652 | + log("autopkgtest-package-configs already cloned", "INFO") |
2653 | + |
2654 | + |
2655 | +@when("autopkgtest.autodep8_cloned") |
2656 | +@when_not("autopkgtest.autodep8_installed") |
2657 | +def initially_install_autodep8(): |
2658 | + install_autodep8(AUTODEP8_LOCATION) |
2659 | + set_flag("autopkgtest.autodep8_installed") |
2660 | + |
2661 | + |
2662 | +@when("apt.installed.git") |
2663 | +@when_not("autopkgtest.autopkgtest_cloned") |
2664 | +def clone_autopkgtest(): |
2665 | + status.maintenance("Cloning autopkgtest") |
2666 | + log("Cloning autopkgtest", "INFO") |
2667 | + with UnixUser("ubuntu"): |
2668 | + try: |
2669 | + pygit2.clone_repository( |
2670 | + AUTOPKGTEST_CLONE_LOCATION, AUTOPKGTEST_LOCATION |
2671 | + ) |
2672 | + status.maintenance("autopkgtest cloned") |
2673 | + log("autopkgtest cloned", "INFO") |
2674 | + set_flag("autopkgtest.autopkgtest_cloned") |
2675 | + except ValueError: |
2676 | + # exists |
2677 | + log("autopkgtest already cloned", "INFO") |
2678 | + |
2679 | + |
2680 | +@when_all( |
2681 | + "autopkgtest.autopkgtest_cloud_symlinked", |
2682 | + "autopkgtest.rabbitmq-configured", |
2683 | + "autopkgtest.influx-creds-written", |
2684 | +) |
2685 | +def set_up_systemd_units(): |
2686 | + for unit in glob.glob(os.path.join(charm_dir(), "units", "*")): |
2687 | + base = os.path.basename(unit) |
2688 | + dest = os.path.join(os.path.sep, "etc", "systemd", "system", base) |
2689 | + |
2690 | + def link_and_enable(): |
2691 | + os.symlink(unit, dest) |
2692 | + if "@" not in base: |
2693 | + subprocess.check_call(["systemctl", "enable", base]) |
2694 | + |
2695 | + try: |
2696 | + link_and_enable() |
2697 | + except FileExistsError: |
2698 | + if not os.path.islink(dest): |
2699 | + os.unlink(dest) |
2700 | + link_and_enable() |
2701 | + set_flag("autopkgtest.systemd_units_linked_and_enabled") |
2702 | + |
2703 | + |
2704 | +@when("autopkgtest.systemd_units_linked_and_enabled") |
2705 | +@when_not("autopkgtest.target_running") |
2706 | +def start(): |
2707 | + subprocess.check_call( |
2708 | + ["systemctl", "enable", "--now", "autopkgtest.target"] |
2709 | + ) |
2710 | + set_flag("autopkgtest.target_running") |
2711 | + |
2712 | + |
2713 | +@hook("stop") |
2714 | +@when("autopkgtest.target_running") |
2715 | +def stop(): |
2716 | + subprocess.check_call( |
2717 | + ["systemctl", "disable", "--now", "autopkgtest.target"] |
2718 | + ) |
2719 | + clear_flag("autopkgtest.target_running") |
2720 | + |
2721 | + |
2722 | +@when_all("autopkgtest.target-restart-needed", "autopkgtest.target_running") |
2723 | +def restart_target(): |
2724 | + subprocess.check_call(["systemctl", "restart", "autopkgtest.target"]) |
2725 | + clear_flag("autopkgtest.target-restart-needed") |
2726 | + |
2727 | + |
2728 | +@when_not("autopkgtest.target-restart-needed", "apt.queued_installs") |
2729 | +@when("autopkgtest.target_running") |
2730 | +def is_active(): |
2731 | + status.active("autopkgtest workers running and processing requests") |
2732 | + |
2733 | + |
2734 | +@when_all( |
2735 | + "autopkgtest.systemd_units_linked_and_enabled", |
2736 | + "autopkgtest.daemon-reload-needed", |
2737 | +) |
2738 | +@hook("upgrade-charm") |
2739 | +def daemon_reload(): |
2740 | + subprocess.check_call(["systemctl", "daemon-reload"]) |
2741 | + clear_flag("autopkgtest.daemon-reload-needed") |
2742 | + |
2743 | + |
2744 | +@hook("install") |
2745 | +def enable_persistent_journal(): |
2746 | + try: |
2747 | + journal_dir = os.path.join(os.path.sep, "var", "log", "journal") |
2748 | + os.makedirs(journal_dir) |
2749 | + subprocess.check_call( |
2750 | + ["systemd-tmpfiles", "--create", "--prefix", journal_dir] |
2751 | + ) |
2752 | + subprocess.check_call(["systemctl", "restart", "systemd-journald"]) |
2753 | + except FileExistsError: |
2754 | + pass |
2755 | + |
2756 | + |
2757 | +@when("amqp.connected") |
2758 | +@when_not("amqp.available") |
2759 | +def setup_rabbitmq(rabbitmq): |
2760 | + rabbitmq.request_access("autopkgtest-worker", "/") |
2761 | + status.waiting("Waiting on RabbitMQ to configure vhost") |
2762 | + |
2763 | + |
2764 | +@when("amqp.available") |
2765 | +@when_not("autopkgtest.rabbitmq-configured") |
2766 | +def set_up_rabbitmq(rabbitmq): |
2767 | + username = rabbitmq.username() |
2768 | + password = rabbitmq.password() |
2769 | + host = rabbitmq.private_address() |
2770 | + status.maintenance("Configuring rabbitmq") |
2771 | + log("Setting up rabbitmq connection to: {}@{}".format(username, host)) |
2772 | + with open(RABBITMQ_CRED_PATH, "w") as cred_file: |
2773 | + cred_file.write( |
2774 | + dedent( |
2775 | + """\ |
2776 | + RABBIT_HOST="{host}" |
2777 | + RABBIT_USER="{username}" |
2778 | + RABBIT_PASSWORD="{password}" |
2779 | + """.format( |
2780 | + **locals() |
2781 | + ) |
2782 | + ) |
2783 | + ) |
2784 | + status.maintenance("rabbitmq configured") |
2785 | + set_flag("autopkgtest.rabbitmq-configured") |
2786 | + |
2787 | + |
2788 | +@when_not("amqp.available") |
2789 | +def clear_rabbitmq(): |
2790 | + try: |
2791 | + log("rabbitmq not available, deleting credentials file") |
2792 | + os.unlink(RABBITMQ_CRED_PATH) |
2793 | + clear_flag("autopkgtest.rabbitmq-configured") |
2794 | + except FileNotFoundError: |
2795 | + pass |
2796 | + |
2797 | + |
2798 | +@when("config.changed.nova-rcs") |
2799 | +def update_nova_rcs(): |
2800 | + from tarfile import TarFile |
2801 | + from io import BytesIO |
2802 | + |
2803 | + import base64 |
2804 | + |
2805 | + rctar = config().get("nova-rcs") |
2806 | + |
2807 | + if not rctar: |
2808 | + # set to the empty value |
2809 | + return |
2810 | + |
2811 | + log("Extracting cloud .rc files...", "INFO") |
2812 | + |
2813 | + clear_old_rcs() |
2814 | + |
2815 | + bytes_file = BytesIO(base64.b64decode(rctar)) |
2816 | + tar = TarFile(fileobj=bytes_file) |
2817 | + |
2818 | + log("...got {}".format(", ".join(tar.getnames())), "INFO") |
2819 | + |
2820 | + tar.extractall(os.path.expanduser("~ubuntu/cloudrcs/")) |
2821 | + |
2822 | + |
2823 | +@when("config.default.nova-rcs") |
2824 | +def clear_old_rcs(): |
2825 | + rcfiles = glob.glob(os.path.expanduser("~ubuntu/cloudrcs/*.rc")) |
2826 | + |
2827 | + if not rcfiles: |
2828 | + return |
2829 | + |
2830 | + log("Deleting old cloud .rc files...", "INFO") |
2831 | + |
2832 | + for rcfile in rcfiles: |
2833 | + log("...deleting {}...".format(rcfile)) |
2834 | + os.unlink(rcfile) |
2835 | + |
2836 | + log("...done", "INFO") |
2837 | + |
2838 | + |
2839 | +@when_all( |
2840 | + "autopkgtest.autopkgtest_cloud_symlinked", |
2841 | + "autopkgtest.systemd_units_linked_and_enabled", |
2842 | + "config.set.releases", |
2843 | +) |
2844 | +@when_any( |
2845 | + "config.set.n-workers", |
2846 | + "config.set.lxd-remotes", |
2847 | +) |
2848 | +@when_not("autopkgtest.systemd-units-initially-enabled") |
2849 | +def enable_units_initially(): |
2850 | + enable_disable_units() |
2851 | + |
2852 | + |
2853 | +@when_all( |
2854 | + "autopkgtest.autopkgtest_cloud_symlinked", |
2855 | + "autopkgtest.systemd_units_linked_and_enabled", |
2856 | + "config.set.releases", |
2857 | +) |
2858 | +@when_any( |
2859 | + "config.changed.n-workers", |
2860 | + "config.changed.lxd-remotes", |
2861 | + "config.changed.releases", |
2862 | +) |
2863 | +def enable_disable_units(): |
2864 | + from lib.systemd import set_up_systemd_units |
2865 | + |
2866 | + nworkers = config().get("n-workers") or "" |
2867 | + lxdremotes = config().get("lxd-remotes") or "" |
2868 | + releases = config().get("releases") or "" |
2869 | + |
2870 | + nworkers_yaml = yaml.load(nworkers, Loader=yaml.CSafeLoader) |
2871 | + lxdremotes_yaml = yaml.load(lxdremotes, Loader=yaml.CSafeLoader) |
2872 | + |
2873 | + log("Setting up systemd worker units", "INFO") |
2874 | + status.maintenance("Enabling and starting worker units") |
2875 | + set_up_systemd_units( |
2876 | + nworkers_yaml or {}, lxdremotes_yaml or {}, releases.split() |
2877 | + ) |
2878 | + |
2879 | + set_flag("autopkgtest.reload-needed") |
2880 | + set_flag("autopkgtest.daemon-reload-needed") |
2881 | + set_flag("autopkgtest.target-restart-needed") |
2882 | + set_flag("autopkgtest.systemd-units-initially-enabled") |
2883 | + |
2884 | + |
2885 | +@when("apt.installed.lxd-client") |
2886 | +@when_any( |
2887 | + "config.set.n-workers", "config.set.lxd-remotes", "config.set.releases" |
2888 | +) |
2889 | +@when_not("autopkgtest.ubuntu_added_to_lxd_group") |
2890 | +def add_ubuntu_user_to_lxd_group(): |
2891 | + subprocess.check_call(["adduser", "ubuntu", "lxd"]) |
2892 | + set_flag("autopkgtest.ubuntu_added_to_lxd_group") |
2893 | + |
2894 | + |
2895 | +@when("config.changed.swift-auth-version") |
2896 | +def auth_version_changed(): |
2897 | + auth_version = config().get("swift-auth-version") |
2898 | + |
2899 | + if auth_version not in [2, 3]: |
2900 | + raise ValueError("Only auth versions 2 and 3 are supported") |
2901 | + |
2902 | + if auth_version == 2: |
2903 | + set_flag("autopkgtest.v2_auth") |
2904 | + clear_flag("autopkgtest.v3_auth") |
2905 | + else: |
2906 | + clear_flag("autopkgtest.v2_auth") |
2907 | + set_flag("autopkgtest.v3_auth") |
2908 | + |
2909 | + |
2910 | +@not_unless("autopkgtest.v3_auth") |
2911 | +@when_any( |
2912 | + "config.changed.swift-auth-url", |
2913 | + "config.changed.swift-username", |
2914 | + "config.changed.swift-password", |
2915 | + "config.changed.swift-region", |
2916 | + "config.changed.swift-project-domain-name", |
2917 | + "config.changed.swift-project-name", |
2918 | + "config.changed.swift-user-domain-name", |
2919 | + "config.changed.swift-auth-version", |
2920 | +) |
2921 | +@when_all( |
2922 | + "config.set.swift-auth-url", |
2923 | + "config.set.swift-username", |
2924 | + "config.set.swift-password", |
2925 | + "config.set.swift-region", |
2926 | + "config.set.swift-project-domain-name", |
2927 | + "config.set.swift-project-name", |
2928 | + "config.set.swift-user-domain-name", |
2929 | + "config.set.swift-auth-version", |
2930 | +) |
2931 | +def write_v3_config(): |
2932 | + write_swift_config() |
2933 | + |
2934 | + |
2935 | +@not_unless("autopkgtest.v2_auth") |
2936 | +@when_any( |
2937 | + "config.changed.swift-auth-url", |
2938 | + "config.changed.swift-username", |
2939 | + "config.changed.swift-password", |
2940 | + "config.changed.swift-region", |
2941 | + "config.changed.swift-auth-version", |
2942 | + "config.changed.swift-tenant", |
2943 | +) |
2944 | +@when_all( |
2945 | + "config.set.swift-auth-url", |
2946 | + "config.set.swift-username", |
2947 | + "config.set.swift-password", |
2948 | + "config.set.swift-region", |
2949 | + "config.set.swift-auth-version", |
2950 | + "config.set.swift-tenant", |
2951 | +) |
2952 | +def write_v2_config(): |
2953 | + write_swift_config() |
2954 | + |
2955 | + |
2956 | +def write_swift_config(): |
2957 | + with open( |
2958 | + os.path.expanduser("~ubuntu/swift-password.cred"), "w" |
2959 | + ) as swift_password_file: |
2960 | + for key in config(): |
2961 | + if key.startswith("swift") and config()[key] is not None: |
2962 | + swift_password_file.write( |
2963 | + '{}="{}"\n'.format( |
2964 | + key.upper().replace("-", "_"), |
2965 | + str(config()[key]).strip(), |
2966 | + ) |
2967 | + ) |
2968 | + |
2969 | + |
2970 | +@when_any( |
2971 | + "config.changed.worker-default-flavor", |
2972 | + "config.changed.worker-big-flavor", |
2973 | + "config.changed.worker-args", |
2974 | + "config.changed.worker-setup-command", |
2975 | + "config.changed.worker-setup-command2", |
2976 | + "config.changed.releases", |
2977 | + "config.changed.n-workers", |
2978 | + "config.changed.lxd-remotes", |
2979 | + "config.changed.mirror", |
2980 | +) |
2981 | +@when_any("config.set.nova-rcs", "config.set.lxd-remotes") |
2982 | +def write_worker_config(): |
2983 | + import configparser |
2984 | + |
2985 | + nworkers = config().get("n-workers") or "" |
2986 | + lxdremotes = config().get("lxd-remotes") or "" |
2987 | + |
2988 | + if not nworkers and not lxdremotes: |
2989 | + return |
2990 | + |
2991 | + nworkers_yaml = yaml.load(nworkers, Loader=yaml.CSafeLoader) or {} |
2992 | + lxdremotes_yaml = yaml.load(lxdremotes, Loader=yaml.CSafeLoader) or {} |
2993 | + |
2994 | + conf = { |
2995 | + "autopkgtest": { |
2996 | + "checkout_dir": AUTOPKGTEST_LOCATION, |
2997 | + "releases": config().get("releases"), |
2998 | + "setup_command": config().get("worker-setup-command"), |
2999 | + "setup_command2": config().get("worker-setup-command2"), |
3000 | + "per_package_config_dir": AUTOPKGTEST_PER_PACKAGE_LOCATION, |
3001 | + }, |
3002 | + "virt": { |
3003 | + "package_size_default": config().get("worker-default-flavor"), |
3004 | + "package_size_big": config().get("worker-big-flavor"), |
3005 | + "args": config().get("worker-args"), |
3006 | + }, |
3007 | + } |
3008 | + |
3009 | + def subst(s): |
3010 | + replacements = { |
3011 | + "/CHECKOUTDIR/": AUTOPKGTEST_LOCATION, |
3012 | + "/HOSTNAME/": socket.gethostname(), |
3013 | + "/AUTOPKGTEST_CLOUD_DIR/": AUTOPKGTEST_CLOUD_LOCATION, |
3014 | + "/MIRROR/": config().get("mirror"), |
3015 | + "/NET_NAME/": config().get("net-name"), |
3016 | + } |
3017 | + |
3018 | + for k in replacements: |
3019 | + if replacements[k]: |
3020 | + s = s.replace(k, replacements[k]) |
3021 | + |
3022 | + return s |
3023 | + |
3024 | + # prune anything which wasn't set (nested dict) |
3025 | + conf = { |
3026 | + k: {l: subst(w) for l, w in v.items() if w is not None} |
3027 | + for k, v in conf.items() |
3028 | + } |
3029 | + |
3030 | + def write(conf_file): |
3031 | + with open(conf_file, "w") as cf: |
3032 | + cp = configparser.ConfigParser() |
3033 | + cp.read_dict(conf) |
3034 | + cp.write(cf) |
3035 | + |
3036 | + for region in nworkers_yaml: |
3037 | + for arch in nworkers_yaml[region]: |
3038 | + if arch == "amd64": |
3039 | + conf_file = os.path.join( |
3040 | + os.path.expanduser("~ubuntu"), |
3041 | + "worker-{}.conf".format(region), |
3042 | + ) |
3043 | + conf["autopkgtest"]["architectures"] = "amd64 i386" |
3044 | + write(conf_file) |
3045 | + break |
3046 | + else: |
3047 | + conf_file = os.path.join( |
3048 | + os.path.expanduser("~ubuntu"), |
3049 | + "worker-{}-{}.conf".format(region, arch), |
3050 | + ) |
3051 | + conf["autopkgtest"]["architectures"] = arch |
3052 | + write(conf_file) |
3053 | + |
3054 | + for arch in lxdremotes_yaml: |
3055 | + conf_file = os.path.join( |
3056 | + os.path.expanduser("~ubuntu"), "worker-lxd-{}.conf".format(arch) |
3057 | + ) |
3058 | + conf["autopkgtest"]["architectures"] = arch |
3059 | + write(conf_file) |
3060 | + |
3061 | + set_flag("autopkgtest.daemon-reload-needed") |
3062 | + set_flag("autopkgtest.reload-needed") |
3063 | + |
3064 | + |
3065 | +@when("config.changed.net-name") |
3066 | +def write_net_name(): |
3067 | + clear_flag("autopkgtest.net-name-written") |
3068 | + with open(os.path.expanduser("~ubuntu/net-name.rc"), "w") as f: |
3069 | + f.write('NET_NAME="{}"\n'.format(config().get("net-name"))) |
3070 | + set_flag("autopkgtest.net-name-written") |
3071 | + set_flag("autopkgtest.reload-needed") |
3072 | + |
3073 | + |
3074 | +@when("config.changed.mirror") |
3075 | +def write_mirror(): |
3076 | + with open(os.path.expanduser("~ubuntu/mirror.rc"), "w") as f: |
3077 | + f.write('MIRROR="{}"\n'.format(config().get("mirror"))) |
3078 | + set_flag("autopkgtest.reload-needed") |
3079 | + |
3080 | + |
3081 | +@when("autopkgtest.reload-needed") |
3082 | +@when_not("autopkgtest.daemon-reload-needed") |
3083 | +def reload_systemd_units(): |
3084 | + from lib.systemd import reload_units |
3085 | + |
3086 | + reload_units() |
3087 | + clear_flag("autopkgtest.reload-needed") |
3088 | + |
3089 | + |
3090 | +@hook("tmp-storage-attached") |
3091 | +def fix_tmp_permissions(): |
3092 | + storageids = storage_list("tmp") |
3093 | + if not storageids: |
3094 | + status.blocked("Cannot locate attached storage") |
3095 | + return |
3096 | + storageid = storageids[0] |
3097 | + |
3098 | + mount = storage_get("location", storageid) |
3099 | + |
3100 | + os.chmod(mount, 0o777) |
3101 | + |
3102 | + |
3103 | +@when_any( |
3104 | + "config.changed.influxdb-hostname", |
3105 | + "config.changed.influxdb-port", |
3106 | + "config.changed.influxdb-username", |
3107 | + "config.changed.influxdb-password", |
3108 | + "config.changed.influxdb-database", |
3109 | + "config.changed.influxdb-context", |
3110 | +) |
3111 | +@when_all( |
3112 | + "config.set.influxdb-hostname", |
3113 | + "config.set.influxdb-port", |
3114 | + "config.set.influxdb-username", |
3115 | + "config.set.influxdb-password", |
3116 | + "config.set.influxdb-database", |
3117 | + "config.set.influxdb-context", |
3118 | +) |
3119 | +def write_influx_creds(): |
3120 | + influxdb_hostname = config().get("influxdb-hostname") |
3121 | + influxdb_port = config().get("influxdb-port") |
3122 | + influxdb_username = config().get("influxdb-username") |
3123 | + influxdb_password = config().get("influxdb-password") |
3124 | + influxdb_database = config().get("influxdb-database") |
3125 | + influxdb_context = config().get("influxdb-context") |
3126 | + |
3127 | + with open(os.path.expanduser("~ubuntu/influx.cred"), "w") as cf: |
3128 | + cf.write( |
3129 | + dedent( |
3130 | + f"""\ |
3131 | + INFLUXDB_HOSTNAME={influxdb_hostname} |
3132 | + INFLUXDB_PORT={influxdb_port} |
3133 | + INFLUXDB_USERNAME={influxdb_username} |
3134 | + INFLUXDB_PASSWORD={influxdb_password} |
3135 | + INFLUXDB_DATABASE={influxdb_database} |
3136 | + INFLUXDB_CONTEXT={influxdb_context} |
3137 | + """ |
3138 | + ) |
3139 | + ) |
3140 | + set_flag("autopkgtest.influx-creds-written") |
3141 | + |
3142 | + |
3143 | +@when_not_all( |
3144 | + "config.set.influxdb-hostname", |
3145 | + "config.set.influxdb-port", |
3146 | + "config.set.influxdb-username", |
3147 | + "config.set.influxdb-password", |
3148 | + "config.set.influxdb-database", |
3149 | + "config.set.influxdb-context", |
3150 | +) |
3151 | +def unset_influx_creds(): |
3152 | + try: |
3153 | + os.unlink(os.path.expanduser("~ubuntu/influx.cred")) |
3154 | + except FileNotFoundError: |
3155 | + pass |
3156 | + clear_flag("autopkgtest.influx-creds-written") |
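The worker charm above builds its configuration as a nested `{section: {option: value}}` dict, prunes options whose Juju config was left unset, and serialises the result with `configparser.read_dict`. A minimal standalone sketch of that pattern (the section and option names here are illustrative, not the charm's full set):

```python
import configparser
import io

# Nested dict mirroring the charm's {section: {option: value}} shape;
# None marks options whose Juju config was left unset.
conf = {
    "autopkgtest": {"checkout_dir": "/home/ubuntu/autopkgtest", "releases": None},
    "virt": {"args": "-v"},
}

# Prune anything which wasn't set, as the charm does before writing.
conf = {
    section: {opt: val for opt, val in options.items() if val is not None}
    for section, options in conf.items()
}

# read_dict() accepts the nested mapping directly; write() emits INI text.
cp = configparser.ConfigParser()
cp.read_dict(conf)
buf = io.StringIO()
cp.write(buf)
ini_text = buf.getvalue()
```

The same `conf` dict can then be written once per region/arch file, with only the `architectures` key varying between writes, which is what the loop over `nworkers_yaml` does.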
3157 | diff --git a/charms/focal/autopkgtest-cloud-worker/tests/00-setup b/charms/focal/autopkgtest-cloud-worker/tests/00-setup |
3158 | new file mode 100755 |
3159 | index 0000000..f0616a5 |
3160 | --- /dev/null |
3161 | +++ b/charms/focal/autopkgtest-cloud-worker/tests/00-setup |
3162 | @@ -0,0 +1,5 @@ |
3163 | +#!/bin/bash |
3164 | + |
3165 | +sudo add-apt-repository ppa:juju/stable -y |
3166 | +sudo apt-get update |
3167 | +sudo apt-get install amulet python-requests -y |
3168 | diff --git a/charms/focal/autopkgtest-cloud-worker/tests/10-deploy b/charms/focal/autopkgtest-cloud-worker/tests/10-deploy |
3169 | new file mode 100755 |
3170 | index 0000000..2cd32f6 |
3171 | --- /dev/null |
3172 | +++ b/charms/focal/autopkgtest-cloud-worker/tests/10-deploy |
3173 | @@ -0,0 +1,35 @@ |
3174 | +#!/usr/bin/python3 |
3175 | + |
3176 | +import amulet |
3177 | +import requests |
3178 | +import unittest |
3179 | + |
3180 | + |
3181 | +class TestCharm(unittest.TestCase): |
3182 | + def setUp(self): |
3183 | + self.d = amulet.Deployment() |
3184 | + |
3185 | + self.d.add('autopkgtest-cloud-worker') |
3186 | + self.d.expose('autopkgtest-cloud-worker') |
3187 | + |
3188 | + self.d.setup(timeout=900) |
3189 | + self.d.sentry.wait() |
3190 | + |
3191 | + self.unit = self.d.sentry['autopkgtest-cloud-worker'][0] |
3192 | + |
3193 | + def test_service(self): |
3194 | + # test we can access over http |
3195 | + page = requests.get('http://{}'.format(self.unit.info['public-address'])) |
3196 | + self.assertEqual(page.status_code, 200) |
3197 | + # Now you can use self.d.sentry[SERVICE][UNIT] to address each of the units and perform |
3198 | + # more in-depth steps. Each self.d.sentry[SERVICE][UNIT] has the following methods: |
3199 | + # - .info - An array of the information of that unit from Juju |
3200 | + # - .file(PATH) - Get the details of a file on that unit |
3201 | + # - .file_contents(PATH) - Get plain text output of PATH file from that unit |
3202 | + # - .directory(PATH) - Get details of directory |
3203 | + # - .directory_contents(PATH) - List files and folders in PATH on that unit |
3204 | + # - .relation(relation, service:rel) - Get relation data from return service |
3205 | + |
3206 | + |
3207 | +if __name__ == '__main__': |
3208 | + unittest.main() |
3209 | diff --git a/deployment/charms/xenial/autopkgtest-cloud-worker/autopkgtest-lxd-socat@.service b/charms/focal/autopkgtest-cloud-worker/units/autopkgtest-lxd-remote@.service |
3210 | similarity index 68% |
3211 | rename from deployment/charms/xenial/autopkgtest-cloud-worker/autopkgtest-lxd-socat@.service |
3212 | rename to charms/focal/autopkgtest-cloud-worker/units/autopkgtest-lxd-remote@.service |
3213 | index 368468c..99cd02a 100644 |
3214 | --- a/deployment/charms/xenial/autopkgtest-cloud-worker/autopkgtest-lxd-socat@.service |
3215 | +++ b/charms/focal/autopkgtest-cloud-worker/units/autopkgtest-lxd-remote@.service |
3216 | @@ -1,22 +1,21 @@ |
3217 | # instance format is lxd-<arch>-<ip> |
3218 | [Unit] |
3219 | Description=LXD remote and socat for %i |
3220 | -# TODO: This probably does not work |
3221 | -Before=autopkgtest@.service |
3222 | Wants=network-online.target |
3223 | After=network-online.target |
3224 | +StopWhenUnneeded=yes |
3225 | |
3226 | [Service] |
3227 | User=ubuntu |
3228 | +Type=oneshot |
3229 | ExecStartPre=-/usr/bin/lxc remote remove %i |
3230 | -ExecStart=/bin/sh -ec 'I="%i"; exec socat UNIX-LISTEN:/tmp/%i.socket,unlink-early,mode=666,fork TCP:$${I##*-}:8443' |
3231 | -ExecStartPost=/bin/sleep 2 |
3232 | -ExecStartPost=/usr/bin/lxc remote add %i unix:///tmp/%i.socket |
3233 | -ExecStopPost=-/usr/bin/lxc remote remove %i |
3234 | +ExecStart=/bin/sh -ec 'I="%i"; exec lxc remote add --protocol lxd --auth-type tls --password autopkgtest --accept-certificate $${I} $${I##*-}:8443' |
3235 | +ExecStop=/usr/bin/lxc remote remove %i |
3236 | Restart=on-failure |
3237 | RestartSec=5min |
3238 | StartLimitInterval=1h |
3239 | StartLimitBurst=3 |
3240 | +RemainAfterExit=yes |
3241 | |
3242 | [Install] |
3243 | WantedBy=autopkgtest.target |
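The unit's `ExecStart` line relies on the shell expansion `${I##*-}` to keep only the text after the last dash of the `lxd-<arch>-<ip>` instance name. The equivalent parsing can be sketched in Python (a hypothetical helper, not part of the charm):

```python
def parse_lxd_instance(instance: str) -> tuple:
    """Split a systemd instance name of the form lxd-<arch>-<ip>.

    ``${I##*-}`` in the unit strips everything up to the last dash,
    leaving the IP; rsplit("-", 1) is the Python equivalent. Dashes
    never occur inside the dotted IP, so splitting on the last dash
    is safe.
    """
    prefix, ip = instance.rsplit("-", 1)
    # prefix is "lxd-<arch>"; drop the leading "lxd-" to get the arch
    arch = prefix.split("-", 1)[1]
    return arch, ip
```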
3244 | diff --git a/deployment/charms/xenial/autopkgtest-cloud-worker/autopkgtest.target b/charms/focal/autopkgtest-cloud-worker/units/autopkgtest.target |
3245 | similarity index 100% |
3246 | rename from deployment/charms/xenial/autopkgtest-cloud-worker/autopkgtest.target |
3247 | rename to charms/focal/autopkgtest-cloud-worker/units/autopkgtest.target |
3248 | diff --git a/deployment/charms/xenial/autopkgtest-cloud-worker/autopkgtest@.service b/charms/focal/autopkgtest-cloud-worker/units/autopkgtest@.service |
3249 | similarity index 73% |
3250 | rename from deployment/charms/xenial/autopkgtest-cloud-worker/autopkgtest@.service |
3251 | rename to charms/focal/autopkgtest-cloud-worker/units/autopkgtest@.service |
3252 | index 266b43f..00fc900 100644 |
3253 | --- a/deployment/charms/xenial/autopkgtest-cloud-worker/autopkgtest@.service |
3254 | +++ b/charms/focal/autopkgtest-cloud-worker/units/autopkgtest@.service |
3255 | @@ -4,28 +4,30 @@ ConditionFileNotEmpty=/home/ubuntu/rabbitmq.cred |
3256 | ConditionFileNotEmpty=/home/ubuntu/swift-password.cred |
3257 | Wants=network-online.target |
3258 | After=network-online.target |
3259 | +StopWhenUnneeded=yes |
3260 | |
3261 | [Service] |
3262 | User=ubuntu |
3263 | WorkingDirectory=~ |
3264 | -Environment=WORKER_CONF_DIR=autopkgtest-cloud/worker-config-production |
3265 | +TimeoutStartSec=5min |
3266 | Environment=SECGROUP=autopkgtest@%i.secgroup |
3267 | +EnvironmentFile=/home/ubuntu/influx.cred |
3268 | EnvironmentFile=/home/ubuntu/rabbitmq.cred |
3269 | EnvironmentFile=/home/ubuntu/swift-password.cred |
3270 | ExecStartPre=/home/ubuntu/autopkgtest-cloud/tools/exec-in-region %i /home/ubuntu/autopkgtest-cloud/tools/copy-security-group $SECGROUP |
3271 | ExecStopPost=/home/ubuntu/autopkgtest-cloud/tools/exec-in-region %i /home/ubuntu/autopkgtest-cloud/tools/copy-security-group --delete-only $SECGROUP |
3272 | -ExecStart=/bin/sh -ec 'REGION="%i"; REGION="$${REGION%-*}"; CONF=/tmp/worker-$${REGION}.conf; \ |
3273 | - CONF_T="$WORKER_CONF_DIR/worker-$${REGION}.conf"; \ |
3274 | - [ -e $$CONF_T ] || CONF_T="$WORKER_CONF_DIR/worker-$${REGION%%%%-*}.conf"; \ |
3275 | - [ -e $$CONF_T ] || CONF_T="$WORKER_CONF_DIR/worker.conf"; \ |
3276 | +ExecStart=/bin/sh -ec 'REGION="%i"; REGION="$${REGION%-*}"; \ |
3277 | + CONF="$HOME/worker-$${REGION}.conf"; \ |
3278 | + [ -e $$CONF ] || CONF="$HOME/worker-$${REGION%%%%-*}.conf"; \ |
3279 | + [ -e $$CONF ] || CONF="$HOME/worker-$${REGION%%-*}.conf"; \ |
3280 | + [ -e $$CONF ] || CONF="$HOME/worker.conf"; \ |
3281 | if [ "$${REGION#lxd-}" != "$$REGION" ]; then \ |
3282 | LXD_ARCH=$${REGION#*-}; LXD_ARCH=$${LXD_ARCH%%-*}; \ |
3283 | else \ |
3284 | - . ./$${REGION}.rc; \ |
3285 | + . ./cloudrcs/$${REGION}.rc; \ |
3286 | fi; \ |
3287 | - sed "s/#RABBITHOST#/$RABBIT_HOST/; s/#RABBITUSER#/$RABBIT_USER/; s/#RABBITPASSWORD#/$RABBIT_PASSWORD/; s/#SWIFTPASSWORD#/$SWIFT_PASSWORD/; s/#KEYNAME#/testbed-$$(hostname)/" $$CONF_T > $${CONF}.$$$$ ; \ |
3288 | - mv $${CONF}.$$$$ $${CONF}; \ |
3289 | exec env REGION=$${REGION} $HOME/autopkgtest-cloud/worker/worker --config $$CONF -v LXD_ARCH=$$LXD_ARCH -v LXD_REMOTE=$$REGION -v SECGROUP=$SECGROUP' |
3290 | +ExecReload=/bin/kill -HUP $MAINPID |
3291 | SuccessExitStatus=0 99 SIGTERM SIGHUP |
3292 | RestartSec=5min |
3293 | Restart=on-failure |
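The rewritten `ExecStart` above probes progressively less specific config file names until one exists (`worker-<region>.conf`, then the name truncated at the first dash, then at the last dash, then `worker.conf`). A hedged Python rendering of that lookup order, assuming the same `$HOME` layout (note `%%` in the unit file is systemd escaping for a literal `%`):

```python
import os

def resolve_worker_conf(region: str, home: str) -> str:
    """Mirror the unit's fallback chain: most to least specific name."""
    candidates = [
        "worker-{}.conf".format(region),                    # exact region
        "worker-{}.conf".format(region.split("-", 1)[0]),   # ${REGION%%-*}: before first dash
        "worker-{}.conf".format(region.rsplit("-", 1)[0]),  # ${REGION%-*}: before last dash
        "worker.conf",                                      # global default
    ]
    for name in candidates:
        path = os.path.join(home, name)
        if os.path.exists(path):
            return path
    return os.path.join(home, "worker.conf")
```

In the unit itself, `REGION` has already had the trailing worker index stripped (`REGION="${REGION%-*}"`) before this chain runs.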
3294 | diff --git a/charms/focal/autopkgtest-cloud-worker/units/build-adt-image@.service b/charms/focal/autopkgtest-cloud-worker/units/build-adt-image@.service |
3295 | new file mode 100644 |
3296 | index 0000000..2a9b9c9 |
3297 | --- /dev/null |
3298 | +++ b/charms/focal/autopkgtest-cloud-worker/units/build-adt-image@.service |
3299 | @@ -0,0 +1,11 @@ |
3300 | +[Unit] |
3301 | +Description=Build autopkgtest image (%i) |
3302 | +Wants=network-online.target |
3303 | +After=network-online.target |
3304 | + |
3305 | +[Service] |
3306 | +Type=oneshot |
3307 | +User=ubuntu |
3308 | +WorkingDirectory=~ |
3309 | +ExecStartPre=/usr/bin/flock -x -w 60 /home/ubuntu/ensure-keypair.lock /home/ubuntu/autopkgtest-cloud/tools/ensure-keypair "%i" |
3310 | +ExecStart=/home/ubuntu/autopkgtest-cloud/tools/build-adt-image "%i" |
3311 | diff --git a/charms/focal/autopkgtest-cloud-worker/units/build-adt-image@.timer b/charms/focal/autopkgtest-cloud-worker/units/build-adt-image@.timer |
3312 | new file mode 100644 |
3313 | index 0000000..5688e72 |
3314 | --- /dev/null |
3315 | +++ b/charms/focal/autopkgtest-cloud-worker/units/build-adt-image@.timer |
3316 | @@ -0,0 +1,11 @@ |
3317 | +[Unit] |
3318 | +Description=Timer to build autopkgtest image (%i) |
3319 | +After=autopkgtest.target |
3320 | + |
3321 | +[Timer] |
3322 | +OnBootSec=24h |
3323 | +OnUnitInactiveSec=24h |
3324 | +RandomizedDelaySec=6h |
3325 | + |
3326 | +[Install] |
3327 | +WantedBy=autopkgtest.target |
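Both image units serialise key-pair creation through `flock -x -w 60 /home/ubuntu/ensure-keypair.lock` (the ensure variant waits up to an hour). The same exclusive-lock-with-timeout idiom can be sketched in Python; this is illustrative, not charm code, and polls with `LOCK_NB` because `flock(2)` itself has no timeout:

```python
import contextlib
import fcntl
import time

@contextlib.contextmanager
def exclusive_lock(path, timeout=60.0, interval=0.5):
    """Exclusive advisory lock, like `flock -x -w <timeout> <path>`.

    flock(2) has no timeout of its own, so retry non-blocking attempts
    until the deadline passes, mirroring flock(1)'s -w behaviour.
    """
    deadline = time.monotonic() + timeout
    with open(path, "w") as lockfile:
        while True:
            try:
                fcntl.flock(lockfile, fcntl.LOCK_EX | fcntl.LOCK_NB)
                break
            except BlockingIOError:
                if time.monotonic() >= deadline:
                    raise TimeoutError("could not lock {}".format(path))
                time.sleep(interval)
        try:
            yield lockfile
        finally:
            fcntl.flock(lockfile, fcntl.LOCK_UN)
```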
3328 | diff --git a/charms/focal/autopkgtest-cloud-worker/units/cloud-worker-maintenance.service b/charms/focal/autopkgtest-cloud-worker/units/cloud-worker-maintenance.service |
3329 | new file mode 100644 |
3330 | index 0000000..452c798 |
3331 | --- /dev/null |
3332 | +++ b/charms/focal/autopkgtest-cloud-worker/units/cloud-worker-maintenance.service |
3333 | @@ -0,0 +1,8 @@ |
3334 | +[Unit] |
3335 | +Description=Cloud worker maintenance |
3336 | + |
3337 | +[Service] |
3338 | +Type=oneshot |
3339 | +User=ubuntu |
3340 | +EnvironmentFile=/home/ubuntu/influx.cred |
3341 | +ExecStart=/home/ubuntu/autopkgtest-cloud/tools/cloud-worker-maintenance |
3342 | diff --git a/charms/focal/autopkgtest-cloud-worker/units/cloud-worker-maintenance.timer b/charms/focal/autopkgtest-cloud-worker/units/cloud-worker-maintenance.timer |
3343 | new file mode 100644 |
3344 | index 0000000..a1a53ae |
3345 | --- /dev/null |
3346 | +++ b/charms/focal/autopkgtest-cloud-worker/units/cloud-worker-maintenance.timer |
3347 | @@ -0,0 +1,10 @@ |
3348 | +[Unit] |
3349 | +Description=Cloud worker maintenance (timer) |
3350 | + |
3351 | +[Timer] |
3352 | +OnBootSec=1h |
3353 | +OnUnitInactiveSec=1h |
3354 | + |
3355 | +[Install] |
3356 | +WantedBy=autopkgtest.target |
3357 | + |
3358 | diff --git a/charms/focal/autopkgtest-cloud-worker/units/ensure-adt-image@.service b/charms/focal/autopkgtest-cloud-worker/units/ensure-adt-image@.service |
3359 | new file mode 100644 |
3360 | index 0000000..1ed07c6 |
3361 | --- /dev/null |
3362 | +++ b/charms/focal/autopkgtest-cloud-worker/units/ensure-adt-image@.service |
3363 | @@ -0,0 +1,12 @@ |
3364 | +[Unit] |
3365 | +Description=Ensure there is an autopkgtest image (%i) |
3366 | +Wants=network-online.target |
3367 | +After=network-online.target |
3368 | + |
3369 | +[Service] |
3370 | +Type=oneshot |
3371 | +User=ubuntu |
3372 | +Environment=ENSURE_ONLY=1 |
3373 | +WorkingDirectory=~ |
3374 | +ExecStartPre=/usr/bin/flock -x -w 3600 /home/ubuntu/ensure-keypair.lock /home/ubuntu/autopkgtest-cloud/tools/ensure-keypair "%i" |
3375 | +ExecStart=/home/ubuntu/autopkgtest-cloud/tools/build-adt-image "%i" |
3376 | diff --git a/charms/focal/autopkgtest-cloud-worker/units/metrics.service b/charms/focal/autopkgtest-cloud-worker/units/metrics.service |
3377 | new file mode 100644 |
3378 | index 0000000..0546655 |
3379 | --- /dev/null |
3380 | +++ b/charms/focal/autopkgtest-cloud-worker/units/metrics.service |
3381 | @@ -0,0 +1,9 @@ |
3382 | +[Unit] |
3383 | +Description=Collect & submit metrics |
3384 | + |
3385 | +[Service] |
3386 | +Type=oneshot |
3387 | +User=ubuntu |
3388 | +Group=ubuntu |
3389 | +EnvironmentFile=/home/ubuntu/influx.cred |
3390 | +ExecStart=/home/ubuntu/autopkgtest-cloud/tools/metrics |
3391 | diff --git a/charms/focal/autopkgtest-cloud-worker/units/metrics.timer b/charms/focal/autopkgtest-cloud-worker/units/metrics.timer |
3392 | new file mode 100644 |
3393 | index 0000000..4bfe712 |
3394 | --- /dev/null |
3395 | +++ b/charms/focal/autopkgtest-cloud-worker/units/metrics.timer |
3396 | @@ -0,0 +1,8 @@ |
3397 | +[Unit] |
3398 | +Description=collect metrics (timer) |
3399 | + |
3400 | +[Timer] |
3401 | +OnCalendar=*:0/5 |
3402 | + |
3403 | +[Install] |
3404 | +WantedBy=autopkgtest.target |
3405 | diff --git a/charms/focal/autopkgtest-cloud-worker/units/pull-package-configs.service b/charms/focal/autopkgtest-cloud-worker/units/pull-package-configs.service |
3406 | new file mode 100644 |
3407 | index 0000000..ba15be6 |
3408 | --- /dev/null |
3409 | +++ b/charms/focal/autopkgtest-cloud-worker/units/pull-package-configs.service |
3410 | @@ -0,0 +1,9 @@ |
3411 | +[Unit] |
3412 | +Description=Pull /home/ubuntu/autopkgtest-package-configs/ |
3413 | + |
3414 | +[Service] |
3415 | +Type=oneshot |
3416 | +User=ubuntu |
3417 | +Group=ubuntu |
3418 | +WorkingDirectory=/home/ubuntu/autopkgtest-package-configs/ |
3419 | +ExecStart=/usr/bin/git pull |
3420 | diff --git a/charms/focal/autopkgtest-cloud-worker/units/pull-package-configs.timer b/charms/focal/autopkgtest-cloud-worker/units/pull-package-configs.timer |
3421 | new file mode 100644 |
3422 | index 0000000..51b53ec |
3423 | --- /dev/null |
3424 | +++ b/charms/focal/autopkgtest-cloud-worker/units/pull-package-configs.timer |
3425 | @@ -0,0 +1,9 @@ |
3426 | +[Unit] |
3427 | +Description=Pull package configs repo (timer) |
3428 | +After=autopkgtest.target |
3429 | + |
3430 | +[Timer] |
3431 | +OnCalendar=minutely |
3432 | + |
3433 | +[Install] |
3434 | +WantedBy=autopkgtest.target |
3435 | diff --git a/deployment/charms/xenial/autopkgtest-web/README.md b/charms/focal/autopkgtest-web/README.md |
3436 | similarity index 87% |
3437 | rename from deployment/charms/xenial/autopkgtest-web/README.md |
3438 | rename to charms/focal/autopkgtest-web/README.md |
3439 | index 95d0265..8c6d189 100644 |
3440 | --- a/deployment/charms/xenial/autopkgtest-web/README.md |
3441 | +++ b/charms/focal/autopkgtest-web/README.md |
3442 | @@ -6,5 +6,5 @@ browsable through a web server. |
3443 | |
3444 | Contact Information |
3445 | ------------------- |
3446 | -Author: Martin Pitt <martin.pitt@ubuntu.com> |
3447 | +Author: Iain Lane <iain.lane@canonical.com> |
3448 | Report bugs at: http://bugs.launchpad.net/auto-package-testing |
3449 | diff --git a/deployment/charms/xenial/autopkgtest-web/config.yaml b/charms/focal/autopkgtest-web/config.yaml |
3450 | similarity index 83% |
3451 | rename from deployment/charms/xenial/autopkgtest-web/config.yaml |
3452 | rename to charms/focal/autopkgtest-web/config.yaml |
3453 | index a4dd11c..9786139 100644 |
3454 | --- a/deployment/charms/xenial/autopkgtest-web/config.yaml |
3455 | +++ b/charms/focal/autopkgtest-web/config.yaml |
3456 | @@ -7,14 +7,6 @@ options: |
3457 | type: string |
3458 | default: |
3459 | description: "Public external swift storage URL" |
3460 | - ssl-cert: |
3461 | - type: string |
3462 | - default: |
3463 | - description: "Apache SSL certificate (SSL will be disabled if not present)" |
3464 | - ssl-key: |
3465 | - type: string |
3466 | - default: |
3467 | - description: "Apache SSL key (SSL will be disabled if not present)" |
3468 | hostname: |
3469 | type: string |
3470 | default: autopkgtest.local |
3471 | @@ -35,7 +27,3 @@ options: |
3472 | type: string |
3473 | default: |
3474 | description: "$no_proxy environment variable (for accessing sites other than github, like login.ubuntu.com and launchpad.net)" |
3475 | - mail-notify: |
3476 | - type: string |
3477 | - description: "email addresses (space separated) to notify on worker failure" |
3478 | - default: |
3479 | diff --git a/charms/focal/autopkgtest-web/layer.yaml b/charms/focal/autopkgtest-web/layer.yaml |
3480 | new file mode 100644 |
3481 | index 0000000..e6e30c2 |
3482 | --- /dev/null |
3483 | +++ b/charms/focal/autopkgtest-web/layer.yaml |
3484 | @@ -0,0 +1,19 @@ |
3485 | +includes: |
3486 | + - 'layer:apt' |
3487 | + - 'layer:basic' |
3488 | + - 'layer:status' |
3489 | + - 'interface:rabbitmq' |
3490 | + - 'interface:apache-website' |
3491 | +options: |
3492 | + apt: |
3493 | + packages: |
3494 | + - amqp-tools |
3495 | + - distro-info |
3496 | + - git |
3497 | + - libjs-bootstrap |
3498 | + - libjs-jquery |
3499 | + - make |
3500 | + - python3-amqplib |
3501 | + - python3-distro-info |
3502 | + - python3-flask |
3503 | + - python3-flask-openid |
3504 | diff --git a/deployment/charms/xenial/autopkgtest-web/metadata.yaml b/charms/focal/autopkgtest-web/metadata.yaml |
3505 | similarity index 64% |
3506 | rename from deployment/charms/xenial/autopkgtest-web/metadata.yaml |
3507 | rename to charms/focal/autopkgtest-web/metadata.yaml |
3508 | index ff188a4..ea04545 100644 |
3509 | --- a/deployment/charms/xenial/autopkgtest-web/metadata.yaml |
3510 | +++ b/charms/focal/autopkgtest-web/metadata.yaml |
3511 | @@ -1,16 +1,18 @@ |
3512 | name: autopkgtest-web |
3513 | summary: autopkgtest web frontend with results from Swift |
3514 | -maintainer: Martin Pitt <martin.pitt@ubuntu.com> |
3515 | +series: |
3516 | + - focal |
3517 | +maintainer: Iain Lane <iain.lane@canonical.com> |
3518 | description: | |
3519 | This regularly downloads new autopkgtest results from Swift and makes them |
3520 | browsable through a web server. |
3521 | -categories: |
3522 | +tags: |
3523 | - web_server |
3524 | - monitoring |
3525 | -subordinate: false |
3526 | +subordinate: true |
3527 | requires: |
3528 | amqp: |
3529 | interface: rabbitmq |
3530 | -provides: |
3531 | website: |
3532 | - interface: http |
3533 | + interface: apache-website |
3534 | + scope: container |
3535 | diff --git a/charms/focal/autopkgtest-web/reactive/autopkgtest_web.py b/charms/focal/autopkgtest-web/reactive/autopkgtest_web.py |
3536 | new file mode 100644 |
3537 | index 0000000..b2c1b5b |
3538 | --- /dev/null |
3539 | +++ b/charms/focal/autopkgtest-web/reactive/autopkgtest_web.py |
3540 | @@ -0,0 +1,327 @@ |
3541 | +from charms.layer import status |
3542 | +from charms.reactive import ( |
3543 | + when, |
3544 | + when_all, |
3545 | + when_any, |
3546 | + when_not, |
3547 | + set_flag, |
3548 | + clear_flag, |
3549 | + hook, |
3550 | +) |
3551 | +from charmhelpers.core.hookenv import charm_dir, config |
3552 | + |
3553 | +from textwrap import dedent |
3554 | + |
3555 | +import glob |
3556 | +import os |
3557 | +import shutil |
3558 | +import subprocess |
3559 | + |
3560 | +AUTOPKGTEST_CLOUD_CONF = os.path.expanduser("~ubuntu/autopkgtest-cloud.conf") |
3561 | +GITHUB_SECRETS_PATH = os.path.expanduser("~ubuntu/github-secrets.json") |
3562 | +GITHUB_STATUS_CREDENTIALS_PATH = os.path.expanduser( |
3563 | + "~ubuntu/github-status-credentials.txt" |
3564 | +) |
3565 | + |
3566 | + |
3567 | +@when_not("autopkgtest-web.autopkgtest_web_symlinked") |
3568 | +def symlink_autopkgtest_cloud(): |
3569 | + autopkgtest_cloud = os.path.join(charm_dir(), "webcontrol") |
3570 | + os.symlink(autopkgtest_cloud, os.path.expanduser("~ubuntu/webcontrol")) |
3571 | + set_flag("autopkgtest-web.autopkgtest_web_symlinked") |
3572 | + |
3573 | + |
3574 | +@when("amqp.connected") |
3575 | +@when_not("amqp.available") |
3576 | +def setup_rabbitmq(rabbitmq): |
3577 | + rabbitmq.request_access("webcontrol", "/") |
3578 | + status.waiting("Waiting on RabbitMQ to configure vhost") |
3579 | + |
3580 | + |
3581 | +@when_all( |
3582 | + "amqp.available", |
3583 | + "config.set.storage-url-internal", |
3584 | + "config.set.hostname", |
3585 | +) |
3586 | +def write_autopkgtest_cloud_conf(rabbitmq): |
3587 | + swiftinternal = config().get("storage-url-internal") |
3588 | + hostname = config().get("hostname") |
3589 | + rabbituser = rabbitmq.username() |
3590 | + rabbitpassword = rabbitmq.password() |
3591 | + rabbithost = rabbitmq.private_address() |
3592 | + clear_flag("autopkgtest-web.config-written") |
3593 | + with open(f"{AUTOPKGTEST_CLOUD_CONF}.new", "w") as f: |
3594 | + f.write( |
3595 | + dedent( |
3596 | + """\ |
3597 | + [web] |
3598 | + database=/home/ubuntu/autopkgtest.db |
3599 | + database_ro=/home/ubuntu/public/autopkgtest.db |
3600 | + SwiftURL={swiftinternal} |
3601 | + ExternalURL=https://{hostname}/results |
3602 | + |
3603 | + [amqp] |
3604 | + uri=amqp://{rabbituser}:{rabbitpassword}@{rabbithost}""".format( |
3605 | + **locals() |
3606 | + ) |
3607 | + ) |
3608 | + ) |
3609 | + os.rename(f"{AUTOPKGTEST_CLOUD_CONF}.new", AUTOPKGTEST_CLOUD_CONF) |
3610 | + set_flag("autopkgtest-web.config-written") |
3611 | + |
3612 | + |
3613 | +@when_all( |
3614 | + "autopkgtest-web.autopkgtest_web_symlinked", |
3615 | + "autopkgtest-web.config-written", |
3616 | +) |
3617 | +def set_up_systemd_units(): |
3618 | + any_changed = False |
3619 | + for unit in glob.glob(os.path.join(charm_dir(), "units", "*")): |
3620 | + base = os.path.basename(unit) |
3621 | + try: |
3622 | + os.symlink( |
3623 | + unit, |
3624 | + os.path.join(os.path.sep, "etc", "systemd", "system", base), |
3625 | + ) |
3626 | + any_changed = True |
3627 | + status.maintenance("Installing systemd units") |
3628 | + subprocess.check_call(["systemctl", "enable", base]) |
3629 | + except FileExistsError: |
3630 | + pass |
3631 | + |
3632 | + if any_changed: |
3633 | + subprocess.check_call(["systemctl", "daemon-reload"]) |
3634 | + subprocess.check_call( |
3635 | + ["systemctl", "restart", "autopkgtest-web.target"] |
3636 | + ) |
3637 | + |
3638 | + status.active("systemd units installed and enabled") |
3639 | + |
3640 | + |
3641 | +@when_all( |
3642 | + "config.set.hostname", |
3643 | + "config.set.storage-url-internal", |
3644 | + "website.available", |
3645 | +) |
3646 | +@when_not("autopkgtest-web.website-initially-configured") |
3647 | +def initially_configure_website(website): |
3648 | + set_up_web_config(website) |
3649 | + |
3650 | + |
3651 | +@when_any( |
3652 | + "config.changed.hostname", |
3653 | + "config.changed.https-proxy", |
3654 | + "config.changed.no-proxy", |
3655 | + "config.changed.storage-url-internal", |
3656 | +) |
3657 | +@when_all( |
3658 | + "config.set.hostname", |
3659 | + "config.set.storage-url-internal", |
3660 | + "website.available", |
3661 | + "autopkgtest-web.website-initially-configured" |
3662 | +) |
3663 | +def set_up_web_config(apache): |
3664 | + webcontrol_dir = os.path.join(charm_dir(), "webcontrol") |
3665 | + sn = config().get("hostname") |
3666 | + https_proxy = config().get("https-proxy") |
3667 | + no_proxy = config().get("no-proxy") |
3668 | + storage_url_internal = config().get("storage-url-internal") |
3669 | + |
3670 | + conf_enabled = os.path.join(os.path.sep, "etc", "apache2", "conf-enabled") |
3671 | + environment_d = os.path.join(os.path.sep, "etc", "environment.d") |
3672 | + |
3673 | + if https_proxy or no_proxy: |
3674 | + try: |
3675 | + os.makedirs(environment_d) |
3676 | + except FileExistsError: |
3677 | + pass |
3678 | + |
3679 | + https_proxy_env = os.path.join(environment_d, "https_proxy.conf") |
3680 | + https_proxy_apache = os.path.join(conf_enabled, "https_proxy.conf") |
3681 | + |
3682 | + try: |
3683 | + os.unlink(https_proxy_env) |
3684 | + except FileNotFoundError: |
3685 | + pass |
3686 | + try: |
3687 | + os.unlink(https_proxy_apache) |
3688 | + except FileNotFoundError: |
3689 | + pass |
3690 | + |
3691 | + if https_proxy: |
3692 | + with open(https_proxy_env, "w") as f: |
3693 | + f.write("https_proxy={}".format(https_proxy)) |
3694 | + with open(https_proxy_apache, "w") as f: |
3695 | + f.write("SetEnv https_proxy {}".format(https_proxy)) |
3696 | + |
3697 | + no_proxy_env = os.path.join(environment_d, "no_proxy.conf") |
3698 | + no_proxy_apache = os.path.join(conf_enabled, "no_proxy.conf") |
3699 | + |
3700 | + try: |
3701 | + os.unlink(no_proxy_env) |
3702 | + except FileNotFoundError: |
3703 | + pass |
3704 | + try: |
3705 | + os.unlink(no_proxy_apache) |
3706 | + except FileNotFoundError: |
3707 | + pass |
3708 | + |
3709 | + if no_proxy: |
3710 | + with open(no_proxy_env, "w") as f: |
3711 | + f.write("no_proxy={}".format(no_proxy)) |
3712 | + with open(no_proxy_apache, "w") as f: |
3713 | + f.write("SetEnv no_proxy {}".format(no_proxy)) |
3714 | + |
3715 | + server_name = "ServerName {}".format(sn) if sn else "" |
3716 | + apache.send_site_config( |
3717 | + dedent( |
3718 | + """\ |
3719 | + <Directory {webcontrol_dir}> |
3720 | + Require all granted |
3721 | + </Directory> |
3722 | + |
3723 | + <VirtualHost *:80> |
3724 | + DocumentRoot {webcontrol_dir} |
3725 | + ErrorLog ${{APACHE_LOG_DIR}}/error.log |
3726 | + CustomLog ${{APACHE_LOG_DIR}}/access.log combined |
3727 | + AddType 'text/plain' .log |
3728 | + AddCharset UTF-8 .log |
3729 | + DirectoryIndex browse.cgi |
3730 | + RedirectMatch permanent "^/packages(.*)/$" "/packages$1" |
3731 | + Redirect permanent /running.shtml /running |
3732 | + |
3733 | + # webcontrol CGI scripts |
3734 | + ScriptAlias /request.cgi {webcontrol_dir}/request.cgi/ |
3735 | + ScriptAlias /login {webcontrol_dir}/request.cgi/login |
3736 | + ScriptAlias /logout {webcontrol_dir}/request.cgi/logout |
3737 | + ScriptAlias / {webcontrol_dir}/browse.cgi/ |
3738 | + |
3739 | + <Location "/results/"> |
3740 | + ProxyPass "{storage_url_internal}/" |
3741 | + ProxyPassReverse "{storage_url_internal}/" |
3742 | + </Location> |
3743 | + |
3744 | + {server_name} |
3745 | + </VirtualHost> |
3746 | + """.format( |
3747 | + **locals() |
3748 | + ) |
3749 | + ) |
3750 | + ) |
3751 | + set_flag("autopkgtest-web.website-initially-configured") |
3752 | + apache.send_ports([80]) # haproxy is doing SSL termination |
3753 | + apache.send_enabled() |
3754 | + |
3755 | + |
3756 | +@when_all("config.changed.github-secrets", "config.set.github-secrets") |
3757 | +def write_github_secrets(): |
3758 | + github_secrets = config().get("github-secrets") |
3759 | + |
3760 | + with open(GITHUB_SECRETS_PATH, "w") as f: |
3761 | + f.write(github_secrets) |
3762 | + |
3763 | + try: |
3764 | + os.symlink( |
3765 | + GITHUB_SECRETS_PATH, |
3766 | + os.path.expanduser("~www-data/github-secrets.json"), |
3767 | + ) |
3768 | + except FileExistsError: |
3769 | + pass |
3770 | + |
3771 | + |
3772 | +@when_not("config.set.github-secrets") |
3773 | +def clear_github_secrets(): |
3774 | + try: |
3775 | + os.unlink(GITHUB_SECRETS_PATH) |
3776 | + except FileNotFoundError: |
3777 | + pass |
3778 | + |
3779 | + try: |
3780 | + os.unlink(os.path.expanduser("~www-data/github-secrets.json")) |
3781 | + except FileNotFoundError: |
3782 | + pass |
3783 | + |
3784 | + |
3785 | +@when_all( |
3786 | + "config.changed.github-status-credentials", |
3787 | + "config.set.github-status-credentials", |
3788 | +) |
3789 | +def write_github_status_credentials(): |
3790 | + github_status_credentials = config().get("github-status-credentials") |
3791 | + |
3792 | + with open(GITHUB_STATUS_CREDENTIALS_PATH, "w") as f: |
3793 | + f.write(github_status_credentials) |
3794 | + |
3795 | + try: |
3796 | + os.symlink( |
3797 | + GITHUB_STATUS_CREDENTIALS_PATH, |
3798 | + os.path.expanduser("~www-data/github-status-credentials.txt"), |
3799 | + ) |
3800 | + except FileExistsError: |
3801 | + pass |
3802 | + |
3803 | + |
3804 | +@when_not("config.set.github-status-credentials") |
3805 | +def clear_github_status_credentials(): |
3806 | + try: |
3807 | + os.unlink(GITHUB_STATUS_CREDENTIALS_PATH) |
3808 | + except FileNotFoundError: |
3809 | + pass |
3810 | + |
3811 | + try: |
3812 | + os.unlink( |
3813 | + os.path.expanduser("~www-data/github-status-credentials.txt") |
3814 | + ) |
3815 | + except FileNotFoundError: |
3816 | + pass |
3817 | + |
3818 | + |
3819 | +@when_not("autopkgtest-web.bootstrap-symlinked") |
3820 | +def symlink_bootstrap(): |
3821 | + try: |
3822 | + os.symlink( |
3823 | + os.path.join( |
3824 | + os.path.sep, "usr", "share", "javascript", "bootstrap" |
3825 | + ), |
3826 | + os.path.join(charm_dir(), "webcontrol", "static", "bootstrap"), |
3827 | + ) |
3828 | + set_flag("autopkgtest-web.bootstrap-symlinked") |
3829 | + except FileExistsError: |
3830 | + pass |
3831 | + |
3832 | + |
3833 | +@when_not("autopkgtest-web.runtime-dir-created") |
3834 | +def make_runtime_tmpfiles(): |
3835 | + with open("/etc/tmpfiles.d/autopkgtest-web-runtime.conf", "w") as r: |
3836 | + r.write("D %t/autopkgtest_webcontrol 0755 www-data www-data\n") |
3837 | + subprocess.check_call(["systemd-tmpfiles", "--create"]) |
3838 | + set_flag("autopkgtest-web.runtime-dir-created") |
3839 | + |
3840 | + |
3841 | +@when_not("autopkgtest-web.running-json-symlinked") |
3842 | +def symlink_running(): |
3843 | + try: |
3844 | + os.symlink( |
3845 | + os.path.join( |
3846 | + os.path.sep, "run", "amqp-status-collector", "running.json" |
3847 | + ), |
3848 | + os.path.join(charm_dir(), "webcontrol", "static", "running.json"), |
3849 | + ) |
3850 | + set_flag("autopkgtest-web.running-json-symlinked") |
3851 | + except FileExistsError: |
3852 | + pass |
3853 | + |
3854 | + |
3855 | +@when_not("autopkgtest-web.public-db-symlinked") |
3856 | +def symlink_public_db(): |
3857 | + publicdir = os.path.expanduser("~ubuntu/public/") |
3858 | + os.makedirs(publicdir, exist_ok=True) |
3859 | + shutil.chown(publicdir, user="ubuntu", group="ubuntu") |
3860 | + try: |
3861 | + os.symlink( |
3862 | + os.path.join(publicdir, "autopkgtest.db"), |
3863 | + os.path.join(charm_dir(), "webcontrol", "static", "autopkgtest.db"), |
3864 | + ) |
3865 | + set_flag("autopkgtest-web.public-db-symlinked") |
3866 | + except FileExistsError: |
3867 | + pass |
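The reactive handlers above repeat the same idempotent symlink idiom: attempt `os.symlink`, tolerate `FileExistsError`, then set the flag. A minimal standalone sketch of that pattern as a helper (`ensure_symlink` is a hypothetical name, not part of the charm; the target path is illustrative):

```python
import os
import tempfile


def ensure_symlink(target, link_path):
    """Create link_path -> target, tolerating an existing link.

    Returns True if the link exists (created now or previously), mirroring
    the try/except FileExistsError pattern used by the charm handlers.
    """
    try:
        os.symlink(target, link_path)
        return True
    except FileExistsError:
        # Already present; treat as success so the reactive flag can be set.
        return os.path.islink(link_path)


# Calling twice shows the idempotence; a dangling target is fine for a symlink.
d = tempfile.mkdtemp()
link = os.path.join(d, "running.json")
first = ensure_symlink("/run/amqp-status-collector/running.json", link)
second = ensure_symlink("/run/amqp-status-collector/running.json", link)
```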
3868 | diff --git a/charms/focal/autopkgtest-web/units/amqp-status-collector.service b/charms/focal/autopkgtest-web/units/amqp-status-collector.service |
3869 | new file mode 100644 |
3870 | index 0000000..fbbc297 |
3871 | --- /dev/null |
3872 | +++ b/charms/focal/autopkgtest-web/units/amqp-status-collector.service |
3873 | @@ -0,0 +1,10 @@ |
3874 | +[Unit] |
3875 | +Description=Read running test status from RabbitMQ (running.json) |
3876 | + |
3877 | +[Service] |
3878 | +RuntimeDirectory=amqp-status-collector |
3879 | +ExecStart=/home/ubuntu/webcontrol/amqp-status-collector |
3880 | +Restart=on-failure |
3881 | + |
3882 | +[Install] |
3883 | +WantedBy=autopkgtest-web.target |
3884 | diff --git a/charms/focal/autopkgtest-web/units/autopkgtest-web.target b/charms/focal/autopkgtest-web/units/autopkgtest-web.target |
3885 | new file mode 100644 |
3886 | index 0000000..52b8a1c |
3887 | --- /dev/null |
3888 | +++ b/charms/focal/autopkgtest-web/units/autopkgtest-web.target |
3889 | @@ -0,0 +1,5 @@ |
3890 | +[Unit] |
3891 | +Description=autopkgtest web |
3892 | + |
3893 | +[Install] |
3894 | +WantedBy=multi-user.target |
3895 | diff --git a/charms/focal/autopkgtest-web/units/cache-amqp.service b/charms/focal/autopkgtest-web/units/cache-amqp.service |
3896 | new file mode 100644 |
3897 | index 0000000..d7aa53d |
3898 | --- /dev/null |
3899 | +++ b/charms/focal/autopkgtest-web/units/cache-amqp.service |
3900 | @@ -0,0 +1,8 @@ |
3901 | +[Unit] |
3902 | +Description=Cache queue entries from AMQP |
3903 | + |
3904 | +[Service] |
3905 | +Type=oneshot |
3906 | +User=ubuntu |
3907 | +StateDirectory=cache-amqp |
3908 | +ExecStart=/home/ubuntu/webcontrol/cache-amqp |
3909 | diff --git a/charms/focal/autopkgtest-web/units/cache-amqp.timer b/charms/focal/autopkgtest-web/units/cache-amqp.timer |
3910 | new file mode 100644 |
3911 | index 0000000..c9cf3df |
3912 | --- /dev/null |
3913 | +++ b/charms/focal/autopkgtest-web/units/cache-amqp.timer |
3914 | @@ -0,0 +1,8 @@ |
3915 | +[Unit] |
3916 | +Description=Cache queue entries from AMQP (timer) |
3917 | + |
3918 | +[Timer] |
3919 | +OnCalendar=minutely |
3920 | + |
3921 | +[Install] |
3922 | +WantedBy=autopkgtest-web.target |
3923 | diff --git a/charms/focal/autopkgtest-web/units/download-all-results.service b/charms/focal/autopkgtest-web/units/download-all-results.service |
3924 | new file mode 100644 |
3925 | index 0000000..d605794 |
3926 | --- /dev/null |
3927 | +++ b/charms/focal/autopkgtest-web/units/download-all-results.service |
3928 | @@ -0,0 +1,12 @@ |
3929 | +[Unit] |
3930 | +Description=Download all results |
3931 | +Before=download-results.service |
3932 | +Conflicts=download-results.service |
3933 | + |
3934 | +[Service] |
3935 | +User=ubuntu |
3936 | +Type=oneshot |
3937 | +ExecStart=/home/ubuntu/webcontrol/download-all-results |
3938 | + |
3939 | +[Install] |
3940 | +WantedBy=autopkgtest-web.target |
3941 | diff --git a/charms/focal/autopkgtest-web/units/download-results.service b/charms/focal/autopkgtest-web/units/download-results.service |
3942 | new file mode 100644 |
3943 | index 0000000..7ea4171 |
3944 | --- /dev/null |
3945 | +++ b/charms/focal/autopkgtest-web/units/download-results.service |
3946 | @@ -0,0 +1,11 @@ |
3947 | +[Unit] |
3948 | +Description=Download results |
3949 | +After=download-all-results.service |
3950 | + |
3951 | +[Service] |
3952 | +User=ubuntu |
3953 | +ExecStart=/home/ubuntu/webcontrol/download-results |
3954 | +Restart=on-failure |
3955 | + |
3956 | +[Install] |
3957 | +WantedBy=download-all-results.service |
3958 | diff --git a/charms/focal/autopkgtest-web/units/publish-db.service b/charms/focal/autopkgtest-web/units/publish-db.service |
3959 | new file mode 100644 |
3960 | index 0000000..ce6e5a4 |
3961 | --- /dev/null |
3962 | +++ b/charms/focal/autopkgtest-web/units/publish-db.service |
3963 | @@ -0,0 +1,6 @@ |
3964 | +[Unit] |
3965 | +Description=publish autopkgtest.db |
3966 | + |
3967 | +[Service] |
3968 | +Type=oneshot |
3969 | +ExecStart=/home/ubuntu/webcontrol/publish-db |
3970 | diff --git a/charms/focal/autopkgtest-web/units/publish-db.timer b/charms/focal/autopkgtest-web/units/publish-db.timer |
3971 | new file mode 100644 |
3972 | index 0000000..4d00b15 |
3973 | --- /dev/null |
3974 | +++ b/charms/focal/autopkgtest-web/units/publish-db.timer |
3975 | @@ -0,0 +1,8 @@ |
3976 | +[Unit] |
3977 | +Description=publish autopkgtest.db (timer) |
3978 | + |
3979 | +[Timer] |
3980 | +OnCalendar=minutely |
3981 | + |
3982 | +[Install] |
3983 | +WantedBy=autopkgtest-web.target |
3984 | diff --git a/charms/focal/autopkgtest-web/units/update-github-jobs.service b/charms/focal/autopkgtest-web/units/update-github-jobs.service |
3985 | new file mode 100644 |
3986 | index 0000000..5681a9f |
3987 | --- /dev/null |
3988 | +++ b/charms/focal/autopkgtest-web/units/update-github-jobs.service |
3989 | @@ -0,0 +1,9 @@ |
3990 | +[Unit] |
3991 | +Description=Update GitHub job status |
3992 | + |
3993 | +[Service] |
3994 | +Type=oneshot |
3995 | +User=www-data |
3996 | +Group=www-data |
3997 | +EnvironmentFile=/etc/environment.d/*.conf |
3998 | +ExecStart=/home/ubuntu/webcontrol/update-github-jobs |
3999 | diff --git a/charms/focal/autopkgtest-web/units/update-github-jobs.timer b/charms/focal/autopkgtest-web/units/update-github-jobs.timer |
4000 | new file mode 100644 |
4001 | index 0000000..f9df3bd |
4002 | --- /dev/null |
4003 | +++ b/charms/focal/autopkgtest-web/units/update-github-jobs.timer |
4004 | @@ -0,0 +1,8 @@ |
4005 | +[Unit] |
4006 | +Description=Update GitHub job status (timer) |
4007 | + |
4008 | +[Timer] |
4009 | +OnCalendar=minutely |
4010 | + |
4011 | +[Install] |
4012 | +WantedBy=autopkgtest-web.target |
4013 | diff --git a/webcontrol/Makefile b/charms/focal/autopkgtest-web/webcontrol/Makefile |
4014 | similarity index 100% |
4015 | rename from webcontrol/Makefile |
4016 | rename to charms/focal/autopkgtest-web/webcontrol/Makefile |
4017 | diff --git a/webcontrol/amqp-status-collector b/charms/focal/autopkgtest-web/webcontrol/amqp-status-collector |
4018 | similarity index 92% |
4019 | rename from webcontrol/amqp-status-collector |
4020 | rename to charms/focal/autopkgtest-web/webcontrol/amqp-status-collector |
4021 | index bd45a07..66a9ed4 100755 |
4022 | --- a/webcontrol/amqp-status-collector |
4023 | +++ b/charms/focal/autopkgtest-web/webcontrol/amqp-status-collector |
4024 | @@ -1,6 +1,6 @@ |
4025 | #!/usr/bin/python3 |
4026 | # Pick up running tests, their status and logtail from the "teststatus" fanout |
4027 | -# queue, and regularly write it into /tmp/running.json |
4028 | +# queue, and regularly write it into /run/amqp-status-collector/running.json |
4029 | |
4030 | import os |
4031 | import json |
4032 | @@ -13,11 +13,17 @@ import logging |
4033 | import amqplib.client_0_8 as amqp |
4034 | |
4035 | exchange_name = 'teststatus.fanout' |
4036 | +running_name = os.path.join(os.path.sep, |
4037 | + 'run', |
4038 | + 'amqp-status-collector', |
4039 | + 'running.json') |
4040 | +running_name_new = "{}.new".format(running_name) |
4041 | |
4042 | # package -> runhash -> release -> arch -> (params, duration, logtail) |
4043 | running_tests = {} |
4044 | last_update = 0 |
4045 | |
4046 | + |
4047 | def amqp_connect(): |
4048 | '''Connect to AMQP server''' |
4049 | |
4050 | @@ -42,9 +48,9 @@ def update_output(amqp_channel, force_update=False): |
4051 | if not force_update and now - last_update < 10: |
4052 | return |
4053 | |
4054 | - with open('/tmp/running.json.new', 'w', encoding='utf-8') as f: |
4055 | + with open(running_name_new, 'w', encoding='utf-8') as f: |
4056 | json.dump(running_tests, f) |
4057 | - os.rename('/tmp/running.json.new', '/tmp/running.json') |
4058 | + os.rename(running_name_new, running_name) |
4059 | |
4060 | |
4061 | def process_message(msg): |
4062 | @@ -91,7 +97,7 @@ amqp_con = amqp_connect() |
4063 | status_ch = amqp_con.channel() |
4064 | status_ch.access_request('/data', active=True, read=True, write=False) |
4065 | status_ch.exchange_declare(exchange_name, 'fanout', durable=False, auto_delete=True) |
4066 | -queue_name = 'listener-%s' % socket.getfqdn() |
4067 | +queue_name = 'running-listener-%s' % socket.getfqdn() |
4068 | status_ch.queue_declare(queue_name, durable=False, auto_delete=True) |
4069 | status_ch.queue_bind(queue_name, exchange_name, queue_name) |
4070 | |
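The hunk above moves `running.json` from `/tmp` into the unit's `RuntimeDirectory` while keeping the write-to-`.new`-then-rename pattern. A self-contained sketch of that atomic-publish idiom (paths here are illustrative; `os.rename` is atomic within one filesystem, so readers such as browse.cgi never observe a half-written file):

```python
import json
import os
import tempfile


def publish_json(data, path):
    """Write JSON atomically: dump to a sibling .new file, then rename over path."""
    new_path = "{}.new".format(path)
    with open(new_path, "w", encoding="utf-8") as f:
        json.dump(data, f)
    # Atomic replace: consumers of `path` always see a complete document.
    os.rename(new_path, path)


d = tempfile.mkdtemp()
target = os.path.join(d, "running.json")
publish_json({"pkg": {"runhash": {}}}, target)
with open(target) as f:
    loaded = json.load(f)
```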
4071 | diff --git a/webcontrol/browse.cgi b/charms/focal/autopkgtest-web/webcontrol/browse.cgi |
4072 | similarity index 98% |
4073 | rename from webcontrol/browse.cgi |
4074 | rename to charms/focal/autopkgtest-web/webcontrol/browse.cgi |
4075 | index 949d622..429876f 100755 |
4076 | --- a/webcontrol/browse.cgi |
4077 | +++ b/charms/focal/autopkgtest-web/webcontrol/browse.cgi |
4078 | @@ -31,7 +31,7 @@ def init_config(): |
4079 | |
4080 | db_con = sqlite3.connect('file:%s?mode=ro' % cp['web']['database_ro'], uri=True) |
4081 | try: |
4082 | - url = cp['web']['SwiftURLExternal'] |
4083 | + url = cp['web']['ExternalURL'] |
4084 | except KeyError: |
4085 | url = cp['web']['SwiftURL'] |
4086 | swift_container_url = os.path.join(url, 'autopkgtest-%s') |
4087 | @@ -121,7 +121,7 @@ def get_queue_info(): |
4088 | Return (releases, arches, context -> release -> arch -> (queue_size, [requests])). |
4089 | ''' |
4090 | |
4091 | - with open('/tmp/queued.json', 'r') as json_file: |
4092 | + with open('/var/lib/cache-amqp/queued.json', 'r') as json_file: |
4093 | queue_info_j = json.load(json_file) |
4094 | |
4095 | releases = queue_info_j["releases"] |
4096 | @@ -281,7 +281,7 @@ def running(): |
4097 | queue_lengths.setdefault(c, {}).setdefault(r, {})[a] = queue_length |
4098 | |
4099 | try: |
4100 | - with open('/tmp/running.json') as f: |
4101 | + with open('/run/amqp-status-collector/running.json') as f: |
4102 | # package -> runhash -> release -> arch -> (params, duration, logtail) |
4103 | running_info = json.load(f) |
4104 | except FileNotFoundError: |
4105 | diff --git a/webcontrol/cache-amqp b/charms/focal/autopkgtest-web/webcontrol/cache-amqp |
4106 | similarity index 85% |
4107 | rename from webcontrol/cache-amqp |
4108 | rename to charms/focal/autopkgtest-web/webcontrol/cache-amqp |
4109 | index bb3497f..8992943 100755 |
4110 | --- a/webcontrol/cache-amqp |
4111 | +++ b/charms/focal/autopkgtest-web/webcontrol/cache-amqp |
4112 | @@ -97,9 +97,13 @@ class AutopkgtestQueueContents: |
4113 | exclusive=False, |
4114 | auto_delete=False, |
4115 | ) |
4116 | - logging.info("Queue %s has %d items", queue_name, queue_size) |
4117 | + logging.info( |
4118 | + "Queue %s has %d items", queue_name, queue_size |
4119 | + ) |
4120 | requests = self.get_queue_requests(queue_name) |
4121 | - result.setdefault(context, {}).setdefault(release, {})[arch] = { |
4122 | + result.setdefault(context, {}).setdefault(release, {})[ |
4123 | + arch |
4124 | + ] = { |
4125 | "size": queue_size, |
4126 | "requests": requests, |
4127 | } |
4128 | @@ -115,7 +119,14 @@ class AutopkgtestQueueContents: |
4129 | |
4130 | |
4131 | if __name__ == "__main__": |
4132 | - parser = argparse.ArgumentParser(description="Fetch AMQP queues into a file") |
4133 | + try: |
4134 | + state_directory = os.environ["STATE_DIRECTORY"] |
4135 | + except KeyError: |
4136 | + state_directory = "/var/lib/cache-amqp/" |
4137 | + |
4138 | + parser = argparse.ArgumentParser( |
4139 | + description="Fetch AMQP queues into a file" |
4140 | + ) |
4141 | parser.add_argument( |
4142 | "--debug", |
4143 | dest="debug", |
4144 | @@ -124,7 +135,11 @@ if __name__ == "__main__": |
4145 | help="Print debugging (give twice for super verbose output)", |
4146 | ) |
4147 | parser.add_argument( |
4148 | - "-o", "--output", dest="output", type=str, default="/tmp/queued.json" |
4149 | + "-o", |
4150 | + "--output", |
4151 | + dest="output", |
4152 | + type=str, |
4153 | + default=os.path.join(state_directory, "queued.json"), |
4154 | ) |
4155 | |
4156 | args = parser.parse_args() |
4157 | @@ -133,7 +148,9 @@ if __name__ == "__main__": |
4158 | cp.read(os.path.expanduser("~ubuntu/autopkgtest-cloud.conf")) |
4159 | |
4160 | logger = logging.getLogger() |
4161 | - formatter = logging.Formatter("%(asctime)s: %(message)s", "%Y-%m-%d %H:%M:%S") |
4162 | + formatter = logging.Formatter( |
4163 | + "%(asctime)s: %(message)s", "%Y-%m-%d %H:%M:%S" |
4164 | + ) |
4165 | ch = logging.StreamHandler() |
4166 | ch.setFormatter(formatter) |
4167 | logger.addHandler(ch) |
4168 | @@ -158,7 +175,9 @@ if __name__ == "__main__": |
4169 | aq = AutopkgtestQueueContents(amqp_uri, database) |
4170 | queue_contents = aq.get_queue_contents() |
4171 | |
4172 | - with tempfile.NamedTemporaryFile(mode="w", dir="/tmp", delete=False) as tf: |
4173 | + with tempfile.NamedTemporaryFile( |
4174 | + mode="w", dir=os.path.dirname(args.output), delete=False |
4175 | + ) as tf: |
4176 | try: |
4177 | json.dump(queue_contents, tf, indent=2) |
4178 | umask = os.umask(0) |
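Two related changes in this hunk: the output default now honours systemd's `STATE_DIRECTORY`, and the temporary file is created in the output's own directory so the final rename cannot fail with a cross-device error. A condensed sketch of both (function names are illustrative, not from the script):

```python
import json
import os
import tempfile


def state_dir(default="/var/lib/cache-amqp/"):
    """Resolve systemd's StateDirectory, falling back when run by hand."""
    return os.environ.get("STATE_DIRECTORY", default)


def write_queue_cache(contents, output):
    # Create the temp file next to the destination so os.rename stays on
    # one filesystem (renames across devices raise EXDEV).
    with tempfile.NamedTemporaryFile(
        mode="w", dir=os.path.dirname(output), delete=False
    ) as tf:
        json.dump(contents, tf)
    os.rename(tf.name, output)


d = tempfile.mkdtemp()
os.environ["STATE_DIRECTORY"] = d
out = os.path.join(state_dir(), "queued.json")
write_queue_cache({"releases": []}, out)
```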
4179 | diff --git a/charms/focal/autopkgtest-web/webcontrol/download-all-results b/charms/focal/autopkgtest-web/webcontrol/download-all-results |
4180 | new file mode 100755 |
4181 | index 0000000..baf3b54 |
4182 | --- /dev/null |
4183 | +++ b/charms/focal/autopkgtest-web/webcontrol/download-all-results |
4184 | @@ -0,0 +1,227 @@ |
4185 | +#!/usr/bin/python3 |
4186 | + |
4187 | +# Authors: Iain Lane, Martin Pitt |
4188 | + |
4189 | +# This script uses the OpenStack Swift API to list all of the contents of our |
4190 | +# containers, diffs that against the local database, and then downloads any |
4191 | +# missing results. |
4192 | + |
4193 | +# While in normal operation the download-results script is supposed to receive |
4194 | +# notification of completed jobs, in case of bugs or network outages etc, this |
4195 | +# script can be used to find any results which were missed and insert them. |
4196 | + |
4197 | +import os |
4198 | +import sys |
4199 | +import logging |
4200 | +import sqlite3 |
4201 | +import io |
4202 | +import tarfile |
4203 | +import json |
4204 | +import configparser |
4205 | +import urllib.parse |
4206 | +import time |
4207 | + |
4208 | +from distro_info import UbuntuDistroInfo |
4209 | +from download.utils import get_test_id, init_db |
4210 | +from urllib.request import urlopen |
4211 | + |
4212 | +LOGGER = logging.getLogger(__name__) |
4213 | + |
4214 | +config = None |
4215 | +db_con = None |
4216 | + |
4217 | + |
4218 | +def list_remote_container(container_url): |
4219 | + LOGGER.debug("Listing container %s", container_url) |
4220 | + out = [] |
4221 | + |
4222 | + def get_batch(start=None): |
4223 | + url = f"{container_url}/?format=json" |
4224 | + if start is not None: |
4225 | + url += f"&marker={urllib.parse.quote(start)}" |
4226 | + |
4227 | + LOGGER.debug('Retrieving "%s"', url) |
4228 | + resp = urlopen(url) |
4229 | + json_string = resp.read() |
4230 | + json_data = json.loads(json_string) |
4231 | + |
4232 | + if not json_data: |
4233 | + return None |
4234 | + |
4235 | + out.extend([e["name"] for e in json_data]) |
4236 | + name = out[-1] |
4237 | + |
4238 | + return name |
4239 | + |
4240 | + marker = get_batch() |
4241 | + |
4242 | + while True: |
4243 | + new_marker = get_batch(marker) |
4244 | + if not new_marker or new_marker == marker: |
4245 | + break |
4246 | + marker = new_marker |
4247 | + |
4248 | + out = [name for name in out if name.endswith("result.tar")] |
4249 | + LOGGER.debug("Found %d items in %s", len(out), container_url) |
4250 | + ret = {} |
4251 | + for r in out: |
4252 | + (_, _, _, _, run_id, _) = r.split("/") |
4253 | + ret[run_id] = r |
4254 | + return ret |
4255 | + |
4256 | + |
4257 | +def list_our_results(release): |
4258 | + LOGGER.debug("Finding already recorded results for %s", release) |
4259 | + c = db_con.cursor() |
4260 | + c.execute( |
4261 | + "SELECT run_id FROM result INNER JOIN test ON test.id = result.test_id WHERE test.release=?", |
4262 | + (release,), |
4263 | + ) |
4264 | + return {run_id for (run_id,) in c.fetchall()} |
4265 | + |
4266 | + |
4267 | +def fetch_one_result(url): |
4268 | + """Download one result URL from swift and add it to the DB""" |
4269 | + (release, arch, _, src, run_id, _) = url.split("/")[-6:] |
4270 | + test_id = get_test_id(db_con, release, arch, src) |
4271 | + |
4272 | + try: |
4273 | + f = urlopen(url, timeout=30) |
4274 | + if f.getcode() == 200: |
4275 | + tar_bytes = io.BytesIO(f.read()) |
4276 | + f.close() |
4277 | + else: |
4278 | + raise NotImplementedError( |
4279 | + "fetch_one_result(%s): cannot handle HTTP code %i" |
4280 | + % (url, f.getcode()) |
4281 | + ) |
4282 | + except IOError as e: |
4283 | + LOGGER.error("Failure to fetch %s: %s", url, str(e)) |
4284 | + # we tolerate "not found" (something went wrong on uploading the |
4285 | + # result), but other things indicate infrastructure problems |
4286 | + if hasattr(e, "code") and e.code == 404: |
4287 | + return |
4288 | + sys.exit(1) |
4289 | + |
4290 | + try: |
4291 | + with tarfile.open(None, "r", tar_bytes) as tar: |
4292 | + exitcode = int(tar.extractfile("exitcode").read().strip()) |
4293 | + try: |
4294 | + srcver = ( |
4295 | + tar.extractfile("testpkg-version").read().decode().strip() |
4296 | + ) |
4297 | + except KeyError as e: |
4298 | + # not found |
4299 | + if exitcode in (4, 12, 20): |
4300 | + # repair it |
4301 | + srcver = "%s unknown" % (src) |
4302 | + else: |
4303 | + raise |
4304 | + (ressrc, ver) = srcver.split() |
4305 | + testinfo = json.loads( |
4306 | + tar.extractfile("testinfo.json").read().decode() |
4307 | + ) |
4308 | + duration = int(tar.extractfile("duration").read().strip()) |
4309 | + # KeyError means the file is not there, i.e. there isn't a human |
4310 | + # requester |
4311 | + try: |
4312 | + requester = tar.extractfile("requester").read().decode().strip() |
4313 | + except KeyError as e: |
4314 | + requester = "" |
4315 | + except (KeyError, ValueError, tarfile.TarError) as e: |
4316 | + LOGGER.debug("%s is damaged, ignoring: %s", url, str(e)) |
4317 | + return |
4318 | + |
4319 | + if src != ressrc: |
4320 | + LOGGER.error( |
4321 | + "%s is a result for package %s, but expected package %s", |
4322 | + url, |
4323 | + ressrc, |
4324 | + src, |
4325 | + ) |
4326 | + return |
4327 | + |
4328 | + # parse recorded triggers in test result |
4329 | + for e in testinfo.get("custom_environment", []): |
4330 | + if e.startswith("ADT_TEST_TRIGGERS="): |
4331 | + test_triggers = e.split("=", 1)[1] |
4332 | + break |
4333 | + else: |
4334 | + LOGGER.error("%s result has no ADT_TEST_TRIGGERS, ignoring", url) |
4335 | + return |
4336 | + |
4337 | + LOGGER.debug( |
4338 | + "Fetched test result for %s/%s/%s/%s %s (triggers: %s): exit code %i", |
4339 | + release, |
4340 | + arch, |
4341 | + src, |
4342 | + ver, |
4343 | + run_id, |
4344 | + test_triggers, |
4345 | + exitcode, |
4346 | + ) |
4347 | + |
4348 | + try: |
4349 | + c = db_con.cursor() |
4350 | + c.execute( |
4351 | + "INSERT INTO result VALUES (?, ?, ?, ?, ?, ?, ?)", |
4352 | + ( |
4353 | + test_id, |
4354 | + run_id, |
4355 | + ver, |
4356 | + test_triggers, |
4357 | + duration, |
4358 | + exitcode, |
4359 | + requester, |
4360 | + ), |
4361 | + ) |
4362 | + db_con.commit() |
4363 | + except sqlite3.IntegrityError: |
4364 | + LOGGER.info("%s was already recorded - skipping", run_id) |
4365 | + |
4366 | + |
4367 | +def fetch_container(release, container_url): |
4368 | + """Download new results from a swift container""" |
4369 | + |
4370 | + marker = "" |
4371 | + |
4372 | + our_results = list_our_results(release) |
4373 | + known_results = list_remote_container(container_url) |
4374 | + |
4375 | + need_to_fetch = set(known_results.keys()) - our_results |
4376 | + |
4377 | + LOGGER.debug("Need to download %d items", len(need_to_fetch)) |
4378 | + |
4379 | + for run_id in need_to_fetch: |
4380 | + fetch_one_result(os.path.join(container_url, known_results[run_id])) |
4381 | + |
4382 | + |
4383 | +if __name__ == "__main__": |
4384 | + LOGGER.setLevel(logging.DEBUG) |
4385 | + ch = logging.StreamHandler() |
4386 | + if "DEBUG" in os.environ: |
4387 | + ch.setLevel(logging.DEBUG) |
4388 | + formatter = logging.Formatter("%(asctime)s - %(levelname)s - %(message)s") |
4389 | + ch.setFormatter(formatter) |
4390 | + LOGGER.addHandler(ch) |
4391 | + |
4392 | + releases = list( |
4393 | + set(UbuntuDistroInfo().supported() + UbuntuDistroInfo().supported_esm()) |
4394 | + ) |
4395 | + releases.sort(key=UbuntuDistroInfo().all.index) |
4396 | + |
4397 | + config = configparser.ConfigParser() |
4398 | + config.read(os.path.expanduser("~ubuntu/autopkgtest-cloud.conf")) |
4399 | + |
4400 | + try: |
4401 | + for release in releases: |
4402 | + db_con = init_db(config["web"]["database"]) |
4403 | + fetch_container( |
4404 | + release, |
4405 | + os.path.join( |
4406 | + config["web"]["SwiftURL"], f"autopkgtest-{release}" |
4407 | + ), |
4408 | + ) |
4409 | + finally: |
4410 | + if db_con: |
4411 | + db_con.close() |
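`list_remote_container` above pages through the Swift container listing with the `marker` query parameter, stopping when a batch is empty or repeats the marker. The loop can be sketched in isolation against a fake fetcher (the real script calls `urlopen` on the container URL; Swift returns up to 10,000 names per request):

```python
def paginate(fetch_batch):
    """Collect all names from a marker-paginated listing.

    fetch_batch(marker) returns the page of names after `marker`;
    an empty or repeated page ends the loop.
    """
    names, marker = [], None
    while True:
        page = fetch_batch(marker)
        if not page or page[-1] == marker:
            break
        names.extend(page)
        marker = page[-1]  # last name seen becomes the next marker
    return names


# Fake three-page container listing standing in for the Swift API.
PAGES = {None: ["a", "b"], "b": ["c"], "c": []}
result = paginate(lambda m: PAGES.get(m, []))
```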
4412 | diff --git a/charms/focal/autopkgtest-web/webcontrol/download-results b/charms/focal/autopkgtest-web/webcontrol/download-results |
4413 | new file mode 100755 |
4414 | index 0000000..a62babe |
4415 | --- /dev/null |
4416 | +++ b/charms/focal/autopkgtest-web/webcontrol/download-results |
4417 | @@ -0,0 +1,119 @@ |
4418 | +#!/usr/bin/python3 |
4419 | + |
4420 | +import configparser |
4421 | +import json |
4422 | +import logging |
4423 | +import os |
4424 | +import socket |
4425 | +import sqlite3 |
4426 | +import urllib.parse |
4427 | + |
4428 | +from download.utils import get_test_id, init_db |
4429 | +from urllib.request import urlopen |
4430 | + |
4431 | +import amqplib.client_0_8 as amqp |
4432 | + |
4433 | +EXCHANGE_NAME = "testcomplete.fanout" |
4434 | + |
4435 | + |
4436 | +def amqp_connect(): |
4437 | + """Connect to AMQP server""" |
4438 | + |
4439 | + cp = configparser.ConfigParser() |
4440 | + cp.read(os.path.expanduser("~ubuntu/autopkgtest-cloud.conf")) |
4441 | + amqp_uri = cp["amqp"]["uri"] |
4442 | + parts = urllib.parse.urlsplit(amqp_uri, allow_fragments=False) |
4443 | + amqp_con = amqp.Connection( |
4444 | + parts.hostname, userid=parts.username, password=parts.password |
4445 | + ) |
4446 | + logging.info( |
4447 | + "Connected to AMQP server at %s@%s" % (parts.username, parts.hostname) |
4448 | + ) |
4449 | + |
4450 | + return amqp_con |
4451 | + |
4452 | + |
4453 | +def db_connect(): |
4454 | + """Connect to SQLite DB""" |
4455 | + cp = configparser.ConfigParser() |
4456 | + cp.read(os.path.expanduser("~ubuntu/autopkgtest-cloud.conf")) |
4457 | + |
4458 | + db_con = init_db(cp["web"]["database"]) |
4459 | + |
4460 | + return db_con |
4461 | + |
4462 | + |
4463 | +def process_message(msg, db_con): |
4464 | + body = msg.body |
4465 | + if isinstance(body, bytes): |
4466 | + body = body.decode("UTF-8", errors="replace") |
4467 | + info = json.loads(body) |
4468 | + logging.info("Received notification of completed test {}".format(info)) |
4469 | + |
4470 | + arch = info["architecture"] |
4471 | + container = info["container"] |
4472 | + duration = info["duration"] |
4473 | + exitcode = info["exitcode"] |
4474 | + package = info["package"] |
4475 | + release = info["release"] |
4476 | + requester = (info["requester"] or "").strip() |
4477 | + version = info["testpkg_version"] |
4478 | + (_, _, _, _, run_id) = info["swift_dir"].split("/") |
4479 | + |
4480 | + # we don't handle PPA requests |
4481 | + if container != ("autopkgtest-{}".format(release)): |
4482 | + logging.debug("Ignoring non-distro request: {}".format(info)) |
4483 | + msg.channel.basic_ack(msg.delivery_tag) |
4484 | + return |
4485 | + |
4486 | + try: |
4487 | + triggers = info["triggers"] |
4488 | + if not triggers: |
4489 | + raise KeyError |
4490 | + except KeyError: |
4491 | + logging.error( |
4492 | + "%s/%s/%s result has no ADT_TEST_TRIGGERS, ignoring", |
4493 | + package, |
4494 | + arch, |
4495 | + release, |
4496 | + ) |
4497 | + msg.channel.basic_ack(msg.delivery_tag) |
4498 | + return |
4499 | + |
4500 | + test_id = get_test_id(db_con, release, arch, package) |
4501 | + |
4502 | + try: |
4503 | + c = db_con.cursor() |
4504 | + c.execute( |
4505 | + "INSERT INTO result VALUES (?, ?, ?, ?, ?, ?, ?)", |
4506 | + (test_id, run_id, version, triggers, duration, exitcode, requester), |
4507 | + ) |
4508 | + db_con.commit() |
4509 | + except sqlite3.IntegrityError: |
4510 | + logging.info("...which was already recorded - skipping") |
4511 | + |
4512 | + msg.channel.basic_ack(msg.delivery_tag) |
4513 | + |
4514 | + |
4515 | +if __name__ == "__main__": |
4516 | + logging.basicConfig( |
4517 | + level=("DEBUG" in os.environ and logging.DEBUG or logging.INFO) |
4518 | + ) |
4519 | + |
4520 | + db_con = db_connect() |
4521 | + amqp_con = amqp_connect() |
4522 | + status_ch = amqp_con.channel() |
4523 | + status_ch.access_request("/complete", active=True, read=True, write=False) |
4524 | + status_ch.exchange_declare( |
4525 | + EXCHANGE_NAME, "fanout", durable=True, auto_delete=False |
4526 | + ) |
4527 | + queue_name = "complete-listener-%s" % socket.getfqdn() |
4528 | + status_ch.queue_declare(queue_name, durable=True, auto_delete=False) |
4529 | + status_ch.queue_bind(queue_name, EXCHANGE_NAME, queue_name) |
4530 | + |
4531 | + logging.info("Listening to requests on %s" % queue_name) |
4532 | + status_ch.basic_consume( |
4533 | + "", callback=lambda msg: process_message(msg, db_con) |
4534 | + ) |
4535 | + while status_ch.callbacks: |
4536 | + status_ch.wait() |
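Both download paths rely on the `(test_id, run_id)` primary key for idempotence: inserting an already-recorded run raises `sqlite3.IntegrityError`, which is caught and logged as a skip, so AMQP redeliveries and backfill overlaps are harmless. A minimal sketch of that insert-or-skip pattern (schema trimmed for illustration):

```python
import sqlite3

db = sqlite3.connect(":memory:")
db.execute(
    "CREATE TABLE result (test_id INTEGER, run_id TEXT,"
    " exitcode INTEGER, PRIMARY KEY(test_id, run_id))"
)


def record_result(db, test_id, run_id, exitcode):
    """Insert a result; return False if it was already recorded."""
    try:
        db.execute(
            "INSERT INTO result VALUES (?, ?, ?)", (test_id, run_id, exitcode)
        )
        db.commit()
        return True
    except sqlite3.IntegrityError:
        # Same run delivered twice (e.g. AMQP redelivery) - skip.
        return False


first = record_result(db, 1, "20230101_000000", 0)
dup = record_result(db, 1, "20230101_000000", 0)
```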
4537 | diff --git a/deployment/charms/xenial/bootstrap-node/exec.d/.gitkeep b/charms/focal/autopkgtest-web/webcontrol/download/__init__.py |
4538 | similarity index 100% |
4539 | rename from deployment/charms/xenial/bootstrap-node/exec.d/.gitkeep |
4540 | rename to charms/focal/autopkgtest-web/webcontrol/download/__init__.py |
4541 | diff --git a/charms/focal/autopkgtest-web/webcontrol/download/__pycache__/__init__.cpython-37.pyc b/charms/focal/autopkgtest-web/webcontrol/download/__pycache__/__init__.cpython-37.pyc |
4542 | new file mode 100644 |
4543 | index 0000000..3998d24 |
4544 | Binary files /dev/null and b/charms/focal/autopkgtest-web/webcontrol/download/__pycache__/__init__.cpython-37.pyc differ |
4545 | diff --git a/charms/focal/autopkgtest-web/webcontrol/download/__pycache__/utils.cpython-37.pyc b/charms/focal/autopkgtest-web/webcontrol/download/__pycache__/utils.cpython-37.pyc |
4546 | new file mode 100644 |
4547 | index 0000000..723b94d |
4548 | Binary files /dev/null and b/charms/focal/autopkgtest-web/webcontrol/download/__pycache__/utils.cpython-37.pyc differ |
4549 | diff --git a/charms/focal/autopkgtest-web/webcontrol/download/utils.py b/charms/focal/autopkgtest-web/webcontrol/download/utils.py |
4550 | new file mode 100644 |
4551 | index 0000000..70ec37d |
4552 | --- /dev/null |
4553 | +++ b/charms/focal/autopkgtest-web/webcontrol/download/utils.py |
4554 | @@ -0,0 +1,59 @@ |
4555 | +import logging |
4556 | +import sqlite3 |
4557 | + |
4558 | +def init_db(path): |
4559 | + '''Create DB if it does not exist, and connect to it''' |
4560 | + |
4561 | + db = sqlite3.connect(path) |
4562 | + c = db.cursor() |
4563 | + try: |
4564 | + c.execute('CREATE TABLE IF NOT EXISTS test (' |
4565 | + ' id INTEGER PRIMARY KEY, ' |
4566 | + ' release CHAR[20], ' |
4567 | + ' arch CHAR[20], ' |
4568 | + ' package char[120])') |
4569 | + c.execute('CREATE TABLE IF NOT EXISTS result (' |
4570 | + ' test_id INTEGER, ' |
4571 | + ' run_id CHAR[30], ' |
4572 | + ' version VARCHAR[200], ' |
4573 | + ' triggers TEXT, ' |
4574 | + ' duration INTEGER, ' |
4575 | + ' exitcode INTEGER, ' |
4576 | + ' requester TEXT, ' |
4577 | + ' PRIMARY KEY(test_id, run_id), ' |
4578 | + ' FOREIGN KEY(test_id) REFERENCES test(id))') |
4579 | + db.commit() |
4580 | + logging.debug('database %s created', path) |
4581 | + except sqlite3.OperationalError as e: |
4582 | + if 'already exists' not in str(e): |
4583 | + raise |
4584 | + logging.debug('database %s already exists', path) |
4585 | + |
4586 | + return db |
4587 | + |
4588 | +def get_test_id(db_con, release, arch, src): |
4589 | + if not get_test_id._cache: |
4590 | + # prime the cache with all test IDs; much more efficient than doing |
4591 | + # thousands of individual queries |
4592 | + c = db_con.cursor() |
4593 | + c.execute('SELECT * FROM test') |
4594 | + while True: |
4595 | + row = c.fetchone() |
4596 | + if row is None: |
4597 | + break |
4598 | + get_test_id._cache[row[1] + '/' + row[2] + '/' + row[3]] = row[0] |
4599 | + |
4600 | + cache_idx = release + '/' + arch + '/' + src |
4601 | + try: |
4602 | + return get_test_id._cache[cache_idx] |
4603 | + except KeyError: |
4604 | + # create new ID |
4605 | + c = db_con.cursor() |
4606 | + c.execute('INSERT INTO test VALUES (NULL, ?, ?, ?)', (release, arch, src)) |
4607 | + test_id = c.lastrowid |
4608 | + db_con.commit() |
4609 | + get_test_id._cache[cache_idx] = test_id |
4610 | + return test_id |
4611 | + |
4612 | + |
4613 | +get_test_id._cache = {} |
4614 | diff --git a/webcontrol/db-update-current-versions b/charms/focal/autopkgtest-web/webcontrol/publish-db |
4615 | similarity index 55% |
4616 | rename from webcontrol/db-update-current-versions |
4617 | rename to charms/focal/autopkgtest-web/webcontrol/publish-db |
4618 | index 05f3799..c6b02ba 100755 |
4619 | --- a/webcontrol/db-update-current-versions |
4620 | +++ b/charms/focal/autopkgtest-web/webcontrol/publish-db |
4621 | @@ -1,6 +1,7 @@ |
4622 | #!/usr/bin/python3 |
4623 | # download/read Sources.gz for all known releases and write them into |
4624 | -# autopkgtest.db. This is being used for statistics. |
4625 | +# a copy of autopkgtest.db, which is then published to the public location. |
4626 | +# This is being used for statistics. |
4627 | |
4628 | import os |
4629 | import logging |
4630 | @@ -8,6 +9,9 @@ import sqlite3 |
4631 | import configparser |
4632 | import urllib.request |
4633 | import gzip |
4634 | +import fcntl |
4635 | +import shutil |
4636 | +import tempfile |
4637 | |
4638 | import apt_pkg |
4639 | |
4640 | @@ -18,11 +22,13 @@ archive_url = 'http://archive.ubuntu.com/ubuntu' |
4641 | components = ['main', 'restricted', 'universe', 'multiverse'] |
4642 | |
4643 | |
4644 | -def init_db(path): |
4645 | +def init_db(path, path_current): |
4646 | '''Create DB if it does not exist, and connect to it''' |
4647 | |
4648 | db = sqlite3.connect(path) |
4649 | + db_current = sqlite3.connect(path_current) |
4650 | c = db.cursor() |
4651 | + current_c = db_current.cursor() |
4652 | try: |
4653 | c.execute('CREATE TABLE current_version(' |
4654 | ' release CHAR[20], ' |
4655 | @@ -36,27 +42,71 @@ def init_db(path): |
4656 | raise |
4657 | logging.debug('table already exists') |
4658 | |
4659 | + try: |
4660 | + c.execute('CREATE TABLE url_last_checked(' |
4661 | + ' url CHAR[100], ' |
4662 | + ' timestamp CHAR[50], ' |
4663 | + ' PRIMARY KEY(url))') |
4664 | + db.commit() |
4665 | + try: |
4666 | + c.executemany('INSERT INTO url_last_checked (url, timestamp) VALUES (?, ?)', |
4667 | + current_c.execute ('SELECT url, timestamp FROM url_last_checked')) |
4668 | + logging.debug('database table url_last_checked created') |
4669 | + except sqlite3.OperationalError as e: |
4670 | + if 'no such table' not in str(e): |
4671 | + raise |
4672 | + logging.debug('no url_last_checked yet, first run probably') |
4673 | + except sqlite3.OperationalError as e: |
4674 | + if 'already exists' not in str(e): |
4675 | + raise |
4676 | + logging.debug('table already exists') |
4677 | + |
4678 | + db_current.close() |
4679 | return db |
4680 | |
4681 | |
4682 | +def get_last_checked(db_con, url): |
4683 | + c = db_con.cursor() |
4684 | + c.execute('SELECT timestamp FROM url_last_checked WHERE url=?', (url, )) |
4685 | + |
4686 | + try: |
4687 | + (ts, ) = c.fetchone() |
4688 | + return ts |
4689 | + except TypeError: # not found |
4690 | + return None |
4691 | + |
4692 | + |
4693 | def get_sources(db_con, release): |
4694 | c = db_con.cursor() |
4695 | for component in components: |
4696 | for pocket in (release, release + '-updates'): |
4697 | logging.debug('Processing %s/%s', pocket, component) |
4698 | try: |
4699 | + url = os.path.join(archive_url, 'dists', pocket, component, 'source/Sources.gz') |
4700 | + request = urllib.request.Request(url) |
4701 | + last_checked = get_last_checked(db_con, url) |
4702 | + if last_checked: |
4703 | + request.add_header('If-Modified-Since', last_checked) |
4704 | + |
4705 | # meh, apt_pkg.TagFile() apparently cannot directly read from an urllib stream |
4706 | - f = urllib.request.urlretrieve(os.path.join(archive_url, 'dists', pocket, component, 'source/Sources.gz'))[0] |
4707 | - with gzip.open(f) as fd: |
4708 | - for section in apt_pkg.TagFile(fd): |
4709 | - c.execute('INSERT OR REPLACE INTO current_version VALUES (?, ?, ?)', |
4710 | - (release, section['Package'], section['Version'])) |
4711 | - except urllib.error.HTTPError: |
4712 | - # EOLed release; no need to update. |
4713 | - f = None |
4714 | - finally: |
4715 | - if f: |
4716 | - os.unlink(f) |
4717 | + with tempfile.TemporaryFile() as temp_file: |
4718 | + with urllib.request.urlopen(request) as response: |
4719 | + temp_file.write(response.read()) |
4720 | + last_modified = response.getheader('Last-Modified') |
4721 | + if last_modified: |
4722 | + c.execute('INSERT OR REPLACE INTO url_last_checked (url, timestamp) ' |
4723 | + 'VALUES (?, ?)', (url, last_modified)) |
4724 | + db_con.commit() |
4725 | + |
4726 | + temp_file.seek(0) |
4727 | + with gzip.open(temp_file) as fd: |
4728 | + for section in apt_pkg.TagFile(fd): |
4729 | + c.execute('INSERT OR REPLACE INTO current_version VALUES (?, ?, ?)', |
4730 | + (release, section['Package'], section['Version'])) |
4731 | + except urllib.error.HTTPError as e: |
4732 | + if e.code == 304: |
4733 | + logging.debug('url {} not modified'.format(url)) |
4734 | + pass |
4735 | |
4736 | |
4737 | if __name__ == '__main__': |
4738 | @@ -66,9 +116,22 @@ if __name__ == '__main__': |
4739 | config = configparser.ConfigParser() |
4740 | config.read(os.path.expanduser('~ubuntu/autopkgtest-cloud.conf')) |
4741 | |
4742 | - db_con = init_db(config['web']['database']) |
4743 | + target = config['web']['database_ro'] |
4744 | + target_new = '{}.new'.format(target) |
4745 | + |
4746 | + with open(config['web']['database'], 'rb') as src: |
4747 | + with open(target_new, 'wb') as dest: |
4748 | + fcntl.flock(dest, fcntl.LOCK_EX | fcntl.LOCK_NB) |
4749 | + shutil.copyfileobj(src, dest) |
4750 | + dest.flush() |
4751 | + os.fsync(dest.fileno()) |
4752 | + |
4753 | + db_con = init_db(target_new, target) |
4754 | |
4755 | db_con.execute('DELETE FROM current_version') |
4756 | for row in db_con.execute('SELECT DISTINCT release FROM test'): |
4757 | get_sources(db_con, row[0]) |
4758 | db_con.commit() |
4759 | + db_con.close() |
4760 | + |
4761 | + os.rename(target_new, target) |
4762 | diff --git a/webcontrol/request-test.cgi b/charms/focal/autopkgtest-web/webcontrol/request-test.cgi |
4763 | similarity index 100% |
4764 | rename from webcontrol/request-test.cgi |
4765 | rename to charms/focal/autopkgtest-web/webcontrol/request-test.cgi |
4766 | diff --git a/webcontrol/request.cgi b/charms/focal/autopkgtest-web/webcontrol/request.cgi |
4767 | similarity index 100% |
4768 | rename from webcontrol/request.cgi |
4769 | rename to charms/focal/autopkgtest-web/webcontrol/request.cgi |
4770 | diff --git a/webcontrol/request/__init__.py b/charms/focal/autopkgtest-web/webcontrol/request/__init__.py |
4771 | similarity index 100% |
4772 | rename from webcontrol/request/__init__.py |
4773 | rename to charms/focal/autopkgtest-web/webcontrol/request/__init__.py |
4774 | diff --git a/webcontrol/request/app.py b/charms/focal/autopkgtest-web/webcontrol/request/app.py |
4775 | similarity index 97% |
4776 | rename from webcontrol/request/app.py |
4777 | rename to charms/focal/autopkgtest-web/webcontrol/request/app.py |
4778 | index 6813cd0..b19945f 100644 |
4779 | --- a/webcontrol/request/app.py |
4780 | +++ b/charms/focal/autopkgtest-web/webcontrol/request/app.py |
4781 | @@ -7,7 +7,7 @@ from collections import ChainMap |
4782 | from html import escape as _escape |
4783 | |
4784 | from flask import Flask, request, session, redirect |
4785 | -from flask.ext.openid import OpenID |
4786 | +from flask_openid import OpenID |
4787 | |
4788 | from request.submit import Submit |
4789 | |
4790 | @@ -110,7 +110,7 @@ def maybe_escape(value): |
4791 | |
4792 | |
4793 | # Initialize app |
4794 | -PATH = os.path.join(os.getenv('TMPDIR', '/tmp'), 'autopkgtest_webcontrol') |
4795 | +PATH = os.path.join(os.path.sep, 'run', 'autopkgtest_webcontrol') |
4796 | os.makedirs(PATH, exist_ok=True) |
4797 | app = Flask('request') |
4798 | # keep secret persistent between CGI invocations |
4799 | diff --git a/webcontrol/request/submit.py b/charms/focal/autopkgtest-web/webcontrol/request/submit.py |
4800 | similarity index 94% |
4801 | rename from webcontrol/request/submit.py |
4802 | rename to charms/focal/autopkgtest-web/webcontrol/request/submit.py |
4803 | index 3f15a8b..e1326b1 100644 |
4804 | --- a/webcontrol/request/submit.py |
4805 | +++ b/charms/focal/autopkgtest-web/webcontrol/request/submit.py |
4806 | @@ -16,6 +16,7 @@ from urllib.error import HTTPError |
4807 | from datetime import datetime |
4808 | |
4809 | import amqplib.client_0_8 as amqp |
4810 | +from distro_info import UbuntuDistroInfo |
4811 | |
4812 | # Launchpad REST API base |
4813 | LP = 'https://api.launchpad.net/1.0/' |
4814 | @@ -36,14 +37,9 @@ class Submit: |
4815 | cp = configparser.ConfigParser() |
4816 | cp.read(os.path.expanduser('~ubuntu/autopkgtest-cloud.conf')) |
4817 | |
4818 | - cfg = configparser.ConfigParser() |
4819 | - cfg.read(os.path.expanduser('~ubuntu/autopkgtest-cloud/worker-config-production/worker.conf')) |
4820 | - |
4821 | # read valid releases and architectures from DB |
4822 | self.db_con = sqlite3.connect('file:%s?mode=ro' % cp['web']['database_ro'], uri=True) |
4823 | - self.releases = [] |
4824 | - for release in cfg['autopkgtest']['releases'].split(): |
4825 | - self.releases.append(release) |
4826 | + self.releases = set(UbuntuDistroInfo().supported() + UbuntuDistroInfo().supported_esm()) |
4827 | logging.debug('Valid releases: %s' % self.releases) |
4828 | |
4829 | self.architectures = set() |
4830 | diff --git a/webcontrol/request/tests/__init__.py b/charms/focal/autopkgtest-web/webcontrol/request/tests/__init__.py |
4831 | similarity index 100% |
4832 | rename from webcontrol/request/tests/__init__.py |
4833 | rename to charms/focal/autopkgtest-web/webcontrol/request/tests/__init__.py |
4834 | diff --git a/webcontrol/request/tests/test_app.py b/charms/focal/autopkgtest-web/webcontrol/request/tests/test_app.py |
4835 | similarity index 100% |
4836 | rename from webcontrol/request/tests/test_app.py |
4837 | rename to charms/focal/autopkgtest-web/webcontrol/request/tests/test_app.py |
4838 | diff --git a/webcontrol/request/tests/test_submit.py b/charms/focal/autopkgtest-web/webcontrol/request/tests/test_submit.py |
4839 | similarity index 100% |
4840 | rename from webcontrol/request/tests/test_submit.py |
4841 | rename to charms/focal/autopkgtest-web/webcontrol/request/tests/test_submit.py |
4842 | diff --git a/webcontrol/setup.py b/charms/focal/autopkgtest-web/webcontrol/setup.py |
4843 | similarity index 100% |
4844 | rename from webcontrol/setup.py |
4845 | rename to charms/focal/autopkgtest-web/webcontrol/setup.py |
4846 | diff --git a/charms/focal/autopkgtest-web/webcontrol/static/jquery b/charms/focal/autopkgtest-web/webcontrol/static/jquery |
4847 | new file mode 120000 |
4848 | index 0000000..7e2f826 |
4849 | --- /dev/null |
4850 | +++ b/charms/focal/autopkgtest-web/webcontrol/static/jquery |
4851 | @@ -0,0 +1 @@ |
4852 | +/usr/share/javascript/jquery |
4853 | \ No newline at end of file |
4854 | diff --git a/webcontrol/static/style.css b/charms/focal/autopkgtest-web/webcontrol/static/style.css |
4855 | similarity index 100% |
4856 | rename from webcontrol/static/style.css |
4857 | rename to charms/focal/autopkgtest-web/webcontrol/static/style.css |
4858 | diff --git a/webcontrol/templates/browse-error.html b/charms/focal/autopkgtest-web/webcontrol/templates/browse-error.html |
4859 | similarity index 100% |
4860 | rename from webcontrol/templates/browse-error.html |
4861 | rename to charms/focal/autopkgtest-web/webcontrol/templates/browse-error.html |
4862 | diff --git a/webcontrol/templates/browse-home.html b/charms/focal/autopkgtest-web/webcontrol/templates/browse-home.html |
4863 | similarity index 100% |
4864 | rename from webcontrol/templates/browse-home.html |
4865 | rename to charms/focal/autopkgtest-web/webcontrol/templates/browse-home.html |
4866 | diff --git a/webcontrol/templates/browse-layout.html b/charms/focal/autopkgtest-web/webcontrol/templates/browse-layout.html |
4867 | similarity index 96% |
4868 | rename from webcontrol/templates/browse-layout.html |
4869 | rename to charms/focal/autopkgtest-web/webcontrol/templates/browse-layout.html |
4870 | index 5c0d57f..b47203f 100644 |
4871 | --- a/webcontrol/templates/browse-layout.html |
4872 | +++ b/charms/focal/autopkgtest-web/webcontrol/templates/browse-layout.html |
4873 | @@ -31,6 +31,7 @@ |
4874 | <li><a href="{{base_url}}running">Running</a></li> |
4875 | <li><a href="{{base_url}}statistics">Statistics</a></li> |
4876 | <li><a href="https://wiki.ubuntu.com/ProposedMigration#autopkgtests">Documentation</a></li> |
4877 | + <li><a href="https://autopkgtest-cloud.readthedocs.io/">Docs for admins</a></li> |
4878 | </ul> |
4879 | </div><!--/.nav-collapse --> |
4880 | </div> |
4881 | diff --git a/webcontrol/templates/browse-package.html b/charms/focal/autopkgtest-web/webcontrol/templates/browse-package.html |
4882 | similarity index 100% |
4883 | rename from webcontrol/templates/browse-package.html |
4884 | rename to charms/focal/autopkgtest-web/webcontrol/templates/browse-package.html |
4885 | diff --git a/webcontrol/templates/browse-results.html b/charms/focal/autopkgtest-web/webcontrol/templates/browse-results.html |
4886 | similarity index 100% |
4887 | rename from webcontrol/templates/browse-results.html |
4888 | rename to charms/focal/autopkgtest-web/webcontrol/templates/browse-results.html |
4889 | diff --git a/webcontrol/templates/browse-running.html b/charms/focal/autopkgtest-web/webcontrol/templates/browse-running.html |
4890 | similarity index 100% |
4891 | rename from webcontrol/templates/browse-running.html |
4892 | rename to charms/focal/autopkgtest-web/webcontrol/templates/browse-running.html |
4893 | diff --git a/webcontrol/templates/browse-statistics.html b/charms/focal/autopkgtest-web/webcontrol/templates/browse-statistics.html |
4894 | similarity index 100% |
4895 | rename from webcontrol/templates/browse-statistics.html |
4896 | rename to charms/focal/autopkgtest-web/webcontrol/templates/browse-statistics.html |
4897 | diff --git a/webcontrol/templates/browse-testlist.html b/charms/focal/autopkgtest-web/webcontrol/templates/browse-testlist.html |
4898 | similarity index 100% |
4899 | rename from webcontrol/templates/browse-testlist.html |
4900 | rename to charms/focal/autopkgtest-web/webcontrol/templates/browse-testlist.html |
4901 | diff --git a/webcontrol/update-github-jobs b/charms/focal/autopkgtest-web/webcontrol/update-github-jobs |
4902 | similarity index 94% |
4903 | rename from webcontrol/update-github-jobs |
4904 | rename to charms/focal/autopkgtest-web/webcontrol/update-github-jobs |
4905 | index 97a1cbd..f2bb430 100755 |
4906 | --- a/webcontrol/update-github-jobs |
4907 | +++ b/charms/focal/autopkgtest-web/webcontrol/update-github-jobs |
4908 | @@ -16,9 +16,9 @@ from urllib.error import HTTPError |
4909 | from request.submit import Submit |
4910 | |
4911 | |
4912 | -PENDING_DIR = '/tmp/autopkgtest_webcontrol/github-pending' |
4913 | +PENDING_DIR = '/run/autopkgtest_webcontrol/github-pending' |
4914 | swift_url = None |
4915 | -swift_url_public = None |
4916 | +external_url = None |
4917 | |
4918 | |
4919 | def result_matches_job(result_url, params): |
4920 | @@ -126,7 +126,7 @@ def process_job(jobfile): |
4921 | code = result_matches_job(result_url, params) |
4922 | if code is not None: |
4923 | finish_job(jobfile, params, code, |
4924 | - result_url.replace(swift_url, swift_url_public) + '/log.gz') |
4925 | + result_url.replace(swift_url, external_url) + '/log.gz') |
4926 | break |
4927 | except HTTPError as e: |
4928 | logging.error('job %s URL %s failed: %s', os.path.basename(jobfile), query_url, e) |
4929 | @@ -139,14 +139,15 @@ if __name__ == '__main__': |
4930 | logging.basicConfig(level='DEBUG') |
4931 | if not os.path.isdir(PENDING_DIR): |
4932 | logging.info('%s does not exist, nothing to do', PENDING_DIR) |
4933 | + sys.exit(0) |
4934 | |
4935 | config = configparser.ConfigParser() |
4936 | config.read(os.path.expanduser('~ubuntu/autopkgtest-cloud.conf')) |
4937 | swift_url = config['web']['SwiftURL'] |
4938 | try: |
4939 | - swift_url_public = config['web']['SwiftURLExternal'] |
4940 | + external_url = config['web']['ExternalURL'] |
4941 | except KeyError: |
4942 | - swift_url_public = swift_url |
4943 | + external_url = swift_url |
4944 | |
4945 | jobs = sys.argv[1:] |
4946 | |
4947 | diff --git a/deployment/charms/xenial/autopkgtest-cloud-worker/config.yaml b/deployment/charms/xenial/autopkgtest-cloud-worker/config.yaml |
4948 | deleted file mode 100644 |
4949 | index 8732343..0000000 |
4950 | --- a/deployment/charms/xenial/autopkgtest-cloud-worker/config.yaml |
4951 | +++ /dev/null |
4952 | @@ -1,15 +0,0 @@ |
4953 | -options: |
4954 | - swift-password: |
4955 | - type: string |
4956 | - description: "password for swift" |
4957 | - default: |
4958 | - nova-rcs: |
4959 | - type: string |
4960 | - description: "base64 encoded tarball of nova OpenStack credential .rc files" |
4961 | - lxd-remotes: |
4962 | - type: string |
4963 | - description: "lines 'arch IP numworkers' for workers on LXD remotes" |
4964 | - mail-notify: |
4965 | - type: string |
4966 | - description: "email addresses (space separated) to notify on worker failure" |
4967 | - default: |
4968 | diff --git a/deployment/charms/xenial/autopkgtest-cloud-worker/hooks/amqp-relation-broken b/deployment/charms/xenial/autopkgtest-cloud-worker/hooks/amqp-relation-broken |
4969 | deleted file mode 100755 |
4970 | index 3d26678..0000000 |
4971 | --- a/deployment/charms/xenial/autopkgtest-cloud-worker/hooks/amqp-relation-broken |
4972 | +++ /dev/null |
4973 | @@ -1,9 +0,0 @@ |
4974 | -#!/bin/sh |
4975 | -set -e |
4976 | - |
4977 | -CONFIG=/home/ubuntu/rabbitmq.cred |
4978 | - |
4979 | -juju-log "amqp-relation-broken: Removing rabbit config" |
4980 | -rm -f $CONFIG |
4981 | - |
4982 | -hooks/stop |
4983 | diff --git a/deployment/charms/xenial/autopkgtest-cloud-worker/hooks/amqp-relation-changed b/deployment/charms/xenial/autopkgtest-cloud-worker/hooks/amqp-relation-changed |
4984 | deleted file mode 100755 |
4985 | index 4a5b045..0000000 |
4986 | --- a/deployment/charms/xenial/autopkgtest-cloud-worker/hooks/amqp-relation-changed |
4987 | +++ /dev/null |
4988 | @@ -1,25 +0,0 @@ |
4989 | -#!/bin/sh |
4990 | -set -e |
4991 | - |
4992 | -RABBIT_USER=autopkgtest-worker |
4993 | -CONFIG=/home/ubuntu/rabbitmq.cred |
4994 | - |
4995 | -juju-log "amqp-relation-changed: requesting credentials for $RABBIT_USER" |
4996 | -relation-set username=$RABBIT_USER |
4997 | -relation-set vhost=/ |
4998 | -RABBIT_HOST=`relation-get private-address` |
4999 | -RABBIT_PASSWORD=`relation-get password` |
5000 | -if [ -z "$RABBIT_HOST" -o -z "$RABBIT_PASSWORD" ]; then |
Don't worry about the conflicts IMHO; the worker-configs are irrelevant in the new deployment anyhow.
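For context on the `get_sources()` change in publish-db above: it switches the Sources.gz download to a conditional GET. The request carries an `If-Modified-Since` header holding the timestamp saved from the previous response's `Last-Modified` header, and a 304 Not Modified answer means the file is unchanged and re-parsing can be skipped. A minimal sketch of the pattern (the helper names are illustrative, not from the branch):

```python
import urllib.error
import urllib.request


def build_conditional_request(url, last_checked):
    """Build a request asking the server to reply 304 Not Modified if the
    resource is unchanged since last_checked (an HTTP date string saved
    from an earlier Last-Modified response header)."""
    request = urllib.request.Request(url)
    if last_checked:
        request.add_header('If-Modified-Since', last_checked)
    return request


def fetch_if_modified(request):
    """Return (body, last_modified) for a fresh resource, or (None, None)
    when the server answers 304 Not Modified."""
    try:
        with urllib.request.urlopen(request) as response:
            return response.read(), response.getheader('Last-Modified')
    except urllib.error.HTTPError as e:
        if e.code == 304:
            return None, None
        raise
```

The stored `Last-Modified` value is echoed back verbatim, as the patch does via the `url_last_checked` table, so no date parsing is needed on the client side.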
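The `__main__` block of publish-db also introduces an atomic-publish pattern worth noting: the database is copied to `<target>.new` under an exclusive non-blocking flock (so a concurrent run fails fast instead of racing), flushed and fsynced, and only then renamed over the read-only target, so readers never observe a partially written file. A self-contained sketch of that pattern (function name is illustrative):

```python
import fcntl
import os
import shutil


def publish_copy(src_path, target):
    """Atomically publish a copy of src_path at target.

    Writes to target + '.new' first; rename() is atomic on POSIX, so
    readers of target always see either the old or the new complete file.
    """
    target_new = target + '.new'
    with open(src_path, 'rb') as src, open(target_new, 'wb') as dest:
        # Non-blocking exclusive lock: a second concurrent publisher
        # raises BlockingIOError instead of corrupting the copy.
        fcntl.flock(dest, fcntl.LOCK_EX | fcntl.LOCK_NB)
        shutil.copyfileobj(src, dest)
        dest.flush()
        os.fsync(dest.fileno())
    os.rename(target_new, target)
```

In the branch the rename happens only after `get_sources()` has finished updating the copy, which is what makes the published `database_ro` safe for the read-only web frontend.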