Merge lp:~chad.smith/vmbuilder/jenkins_kvm_azure_netplan_hotplug into lp:vmbuilder
Status: Superseded
Proposed branch: lp:~chad.smith/vmbuilder/jenkins_kvm_azure_netplan_hotplug
Merge into: lp:vmbuilder
Diff against target: 31474 lines (+30578/-0), 176 files modified
azure_config.sh (+95/-0) base_indicies.sh (+28/-0) build-juju-local.sh (+108/-0) builder_config.sh (+78/-0) checksum.sh (+27/-0) config/cloud-azure.cfg (+9/-0) config/cloud-maas.cfg (+11/-0) config/cloud-maasv2.cfg (+10/-0) config/cloud-maasv3.cfg (+10/-0) config/cloud-precise.cfg (+24/-0) config/cloud-trusty-pp64el.cfg (+13/-0) config/cloud-trusty.cfg (+26/-0) config/cloud-vps.cfg (+6/-0) config/cloud.cfg (+17/-0) copy_to_final.sh (+52/-0) create-vhd.sh (+97/-0) ec2_publisher.sh (+98/-0) functions/bzr_check.sh (+14/-0) functions/bzr_commit.sh (+23/-0) functions/common (+37/-0) functions/locker (+49/-0) functions/merge_templates (+53/-0) functions/mk_template.sh (+41/-0) functions/retry (+16/-0) generate-ubuntu-lists.sh (+44/-0) get_serial.sh (+157/-0) jenkins/CloudImages_Azure.sh (+162/-0) jenkins/CloudImages_Base.sh (+96/-0) jenkins/CloudImages_Base_Release_Delta.sh (+255/-0) jenkins/CloudImages_Juju.sh (+253/-0) jenkins/CloudImages_Update_Builder.sh (+68/-0) jenkins/CloudImages_Vagrant.sh (+232/-0) jenkins/MAAS_Builder.sh (+171/-0) jenkins/MAAS_Promotion.sh (+31/-0) jenkins/MAASv2_Builder.sh (+191/-0) jenkins/MAASv2_Cleaner.sh (+55/-0) jenkins/MAASv3_Builder.sh (+67/-0) jenkins/Promote_Daily.sh (+55/-0) jenkins/Promote_MAAS_Daily.sh (+48/-0) jenkins/Publish_EC2.sh (+64/-0) jenkins/Publish_Results_to_Tracker.sh (+34/-0) jenkins/README.txt (+1/-0) jenkins/Test_Azure.sh (+17/-0) jenkins/build_lib.sh (+33/-0) jenkins/env-test.sh (+2/-0) launch_kvm.sh (+222/-0) maas_config.sh (+75/-0) make-seed.sh (+147/-0) overlay.sh (+23/-0) pylib/changelogger.py (+222/-0) pylib/changelogger/ChangeLogger.py (+222/-0) pylib/requests/__init__.py (+77/-0) pylib/requests/adapters.py (+388/-0) pylib/requests/api.py (+120/-0) pylib/requests/auth.py (+193/-0) pylib/requests/cacert.pem (+5026/-0) pylib/requests/certs.py (+24/-0) pylib/requests/compat.py (+115/-0) pylib/requests/cookies.py (+454/-0) pylib/requests/exceptions.py (+75/-0) pylib/requests/hooks.py (+45/-0) pylib/requests/models.py (+803/-0) pylib/requests/packages/__init__.py (+3/-0) pylib/requests/packages/chardet/__init__.py (+32/-0) pylib/requests/packages/chardet/big5freq.py (+925/-0) pylib/requests/packages/chardet/big5prober.py (+42/-0) pylib/requests/packages/chardet/chardetect.py (+46/-0) pylib/requests/packages/chardet/chardistribution.py (+231/-0) pylib/requests/packages/chardet/charsetgroupprober.py (+106/-0) pylib/requests/packages/chardet/charsetprober.py (+62/-0) pylib/requests/packages/chardet/codingstatemachine.py (+61/-0) pylib/requests/packages/chardet/compat.py (+34/-0) pylib/requests/packages/chardet/constants.py (+39/-0) pylib/requests/packages/chardet/cp949prober.py (+44/-0) pylib/requests/packages/chardet/escprober.py (+86/-0) pylib/requests/packages/chardet/escsm.py (+242/-0) pylib/requests/packages/chardet/eucjpprober.py (+90/-0) pylib/requests/packages/chardet/euckrfreq.py (+596/-0) pylib/requests/packages/chardet/euckrprober.py (+42/-0) pylib/requests/packages/chardet/euctwfreq.py (+428/-0) pylib/requests/packages/chardet/euctwprober.py (+41/-0) pylib/requests/packages/chardet/gb2312freq.py (+472/-0) pylib/requests/packages/chardet/gb2312prober.py (+41/-0) pylib/requests/packages/chardet/hebrewprober.py (+283/-0) pylib/requests/packages/chardet/jisfreq.py (+569/-0) pylib/requests/packages/chardet/jpcntx.py (+219/-0) pylib/requests/packages/chardet/langbulgarianmodel.py (+229/-0) pylib/requests/packages/chardet/langcyrillicmodel.py (+329/-0) pylib/requests/packages/chardet/langgreekmodel.py (+225/-0) 
pylib/requests/packages/chardet/langhebrewmodel.py (+201/-0) pylib/requests/packages/chardet/langhungarianmodel.py (+225/-0) pylib/requests/packages/chardet/langthaimodel.py (+200/-0) pylib/requests/packages/chardet/latin1prober.py (+139/-0) pylib/requests/packages/chardet/mbcharsetprober.py (+86/-0) pylib/requests/packages/chardet/mbcsgroupprober.py (+54/-0) pylib/requests/packages/chardet/mbcssm.py (+575/-0) pylib/requests/packages/chardet/sbcharsetprober.py (+120/-0) pylib/requests/packages/chardet/sbcsgroupprober.py (+69/-0) pylib/requests/packages/chardet/sjisprober.py (+91/-0) pylib/requests/packages/chardet/universaldetector.py (+170/-0) pylib/requests/packages/chardet/utf8prober.py (+76/-0) pylib/requests/packages/urllib3/__init__.py (+58/-0) pylib/requests/packages/urllib3/_collections.py (+205/-0) pylib/requests/packages/urllib3/connection.py (+204/-0) pylib/requests/packages/urllib3/connectionpool.py (+710/-0) pylib/requests/packages/urllib3/contrib/ntlmpool.py (+120/-0) pylib/requests/packages/urllib3/contrib/pyopenssl.py (+422/-0) pylib/requests/packages/urllib3/exceptions.py (+126/-0) pylib/requests/packages/urllib3/fields.py (+177/-0) pylib/requests/packages/urllib3/filepost.py (+100/-0) pylib/requests/packages/urllib3/packages/__init__.py (+4/-0) pylib/requests/packages/urllib3/packages/ordered_dict.py (+260/-0) pylib/requests/packages/urllib3/packages/six.py (+385/-0) pylib/requests/packages/urllib3/packages/ssl_match_hostname/__init__.py (+13/-0) pylib/requests/packages/urllib3/packages/ssl_match_hostname/_implementation.py (+105/-0) pylib/requests/packages/urllib3/poolmanager.py (+258/-0) pylib/requests/packages/urllib3/request.py (+141/-0) pylib/requests/packages/urllib3/response.py (+308/-0) pylib/requests/packages/urllib3/util/__init__.py (+27/-0) pylib/requests/packages/urllib3/util/connection.py (+45/-0) pylib/requests/packages/urllib3/util/request.py (+68/-0) pylib/requests/packages/urllib3/util/response.py (+13/-0) pylib/requests/packages/urllib3/util/ssl_.py (+133/-0) pylib/requests/packages/urllib3/util/timeout.py (+234/-0) pylib/requests/packages/urllib3/util/url.py (+162/-0) pylib/requests/sessions.py (+637/-0) pylib/requests/status_codes.py (+88/-0) pylib/requests/structures.py (+127/-0) pylib/requests/utils.py (+673/-0) register-vagrant-version.sh (+107/-0) rss-cleanup.sh (+16/-0) rss-generate.sh (+103/-0) should_build.py (+484/-0) standalone.sh (+303/-0) templates/default.tmpl (+420/-0) templates/example-addin.tmpl (+140/-0) templates/handle-xdeb.py (+15/-0) templates/img-azure-12.04-addin.tmpl (+47/-0) templates/img-azure-14.04-addin.tmpl (+58/-0) templates/img-azure-14.10-addin.tmpl (+60/-0) templates/img-azure-15.04-addin.tmpl (+59/-0) templates/img-azure-15.10-addin.tmpl (+50/-0) templates/img-azure-15.10-docker.tmpl (+25/-0) templates/img-azure-16.04-addin.tmpl (+58/-0) templates/img-azure-16.04-docker.tmpl (+8/-0) templates/img-azure-16.10-addin.tmpl (+58/-0) templates/img-azure-16.10-docker.tmpl (+8/-0) templates/img-azure-17.04-addin.tmpl (+58/-0) templates/img-azure-17.10-addin.tmpl (+58/-0) templates/img-azure-18.04-addin.tmpl (+58/-0) templates/img-azure-extra.tmpl (+19/-0) templates/img-azure.tmpl (+354/-0) templates/img-build.tmpl (+135/-0) templates/img-extra-nets.tmpl (+141/-0) templates/img-juju-addin.tmpl (+250/-0) templates/img-juju.tmpl (+455/-0) templates/img-maas.tmpl (+96/-0) templates/img-maasv2.tmpl (+137/-0) templates/img-maasv3.tmpl (+85/-0) templates/img-smartcloud.tmpl (+112/-0) templates/img-update.tmpl (+292/-0) 
templates/img-vagrant.tmpl (+294/-0) templates/img-vps.tmpl (+67/-0) tests/azure-node-settings-tool.py (+111/-0) tests/azure.sh (+286/-0) tests/decider.py (+285/-0) tests/jenkins-ssh (+68/-0) tests/passless-sudoifer (+57/-0) tests/run-azure.sh (+29/-0) tests/test-azure.py (+233/-0) tests/tracker.py (+187/-0) tracker.sh (+16/-0) tweet.sh (+44/-0) ubuntu-adj2version (+53/-0) update_release_directory.sh (+17/-0) wait_package.sh (+27/-0)
To merge this branch: bzr merge lp:~chad.smith/vmbuilder/jenkins_kvm_azure_netplan_hotplug
Related bugs:

Reviewer | Review Type | Date Requested | Status
---|---|---|---
VMBuilder | | | Pending

Review via email: mp+347174@code.launchpad.net
This proposal has been superseded by a proposal from 2018-05-31.
Commit message

Update Azure's nic hotplug script to use netplan if available instead of ENI.

Also avoid appending unnecessary include directives in /etc/network/interfaces on netplan-enabled systems.
Description of the change
WIP: I think I targeted the wrong branch, will resubmit tomorrow
diff should be http://
Azure images deliver a script under /usr/local/ (ephemeral_eth.sh) which configures hotplugged nics using ENI (/etc/network/interfaces) configuration.
In Bionic and later, cloud-init writes a fallback interface config in /etc/netplan/50-cloud-init.yaml.
This changeset adds a check in ephemeral_eth.sh for whether the netplan command exists:
- If netplan is present, a separate yaml file is written under /etc/netplan/ for each hotplugged nic instead of an ENI config.
- After the netplan yaml is created, 'netplan apply' is called to bring up that device with dhcp. The netplan config marks these nics as "optional: true" so that subsequent boots will not wait on them to come up in case they are later detached. (A sketch of such a per-nic yaml follows below.)
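For illustration only, a hotplugged eth1 handled this way could end up with a per-nic netplan yaml roughly like the one below, then be brought up via 'netplan apply'. The file name and overall layout are assumptions; the dhcp4 and "optional: true" settings come from the description above.

    # Hypothetical per-nic file under /etc/netplan/ written by ephemeral_eth.sh
    network:
      version: 2
      ethernets:
        eth1:
          dhcp4: true
          optional: true   # don't block later boots waiting on this nic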
Attaching nics in Azure is done through its UI or API. The attach/detach operation in Azure requires the instance to be stopped before the operation and started again afterwards.
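As a concrete illustration of that workflow (a hedged sketch; the resource names are placeholders and not part of this proposal), the stop/attach/start cycle with the Azure CLI looks roughly like:

    # Sketch: deallocate the vm, attach a freshly created nic, start it again
    az vm deallocate -g my-rg -n my-vm
    az network nic create -g my-rg -n hotplug-nic --vnet-name my-vnet --subnet my-subnet
    az vm nic add -g my-rg --vm-name my-vm --nics hotplug-nic
    az vm start -g my-rg -n my-vm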
Potential gap:
There is no attempt to clean up old netplan yaml files, or to designate a new primary/mandatory nic, because the original hotplug script did not deal with udev rules for hotplug removal of nics (via Azure network interface detachment).
This could present a minor issue if eth1 is attached (and optional by design) and eth0 gets detached. In that case, systemd may still wait for eth0 to come up because of the mandatory eth0 definition in /etc/netplan/50-cloud-init.yaml (sketched below).
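For context, cloud-init's fallback file has roughly the following shape (a sketch, not the literal file; the macaddress is a placeholder). Because the eth0 stanza carries no "optional: true", boot waits on it even if the nic is gone:

    # Approximate shape of cloud-init's /etc/netplan/50-cloud-init.yaml fallback
    network:
      version: 2
      ethernets:
        eth0:
          dhcp4: true
          match:
            macaddress: "00:0d:3a:00:00:00"   # placeholder; pinned to the original nic
          set-name: eth0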
Daniel Axtens (daxtens) wrote:
- 800. By Chad Smith

  Revert changes to ephemeral_eth.sh and emit a netplan 90-hotplug-azure.yaml.

  cloud-init only sets up a network configuration at initial boot, pinned to the original macaddress. If we are building a netplan-enabled image, emit a static netplan yaml which will complement the original /etc/netplan/50-cloud-init.yaml fallback definition. If the original eth0 is no longer attached to the vm, cloud-init's netplan yaml will not match by macaddress and the system will fall through to match the following hotpluggedeth0 definition:

      hotpluggedeth0:
        dhcp4: true
        match:
          driver: hv_netvsc
          name: 'eth0'

- 801. By Chad Smith

  Move the /etc/network/interfaces include directive back out of config_udev. The appended include directive in /etc/network/interfaces needs to exist for both upstart and udev solutions, so it can't live exclusively within the config_udev_or_netplan function. It needs to be present on all non-netplan environments (upstart and ENI), but test that we are not a netplan-enabled image before manipulating /etc/network/interfaces. (A sketch of this guard follows below.)
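A minimal sketch of the guard revision 801 describes; the include-directive text and paths are assumptions for illustration, while the "are we a netplan image" test mirrors the commit message:

    # Hypothetical: append the ENI include directive only on non-netplan images
    if ! command -v netplan >/dev/null 2>&1; then
        grep -q '^source /etc/network/interfaces.d/' /etc/network/interfaces ||
            echo 'source /etc/network/interfaces.d/*.cfg' >> /etc/network/interfaces
    fi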
Unmerged revisions

- 801. By Chad Smith

  Move the /etc/network/interfaces include directive back out of config_udev. The appended include directive in /etc/network/interfaces needs to exist for both upstart and udev solutions, so it can't live exclusively within the config_udev_or_netplan function. It needs to be present on all non-netplan environments (upstart and ENI), but test that we are not a netplan-enabled image before manipulating /etc/network/interfaces.

- 800. By Chad Smith

  Revert changes to ephemeral_eth.sh and emit a netplan 90-hotplug-azure.yaml. cloud-init only sets up a network configuration at initial boot, pinned to the original macaddress. If we are building a netplan-enabled image, emit a static netplan yaml which will complement the original /etc/netplan/50-cloud-init.yaml fallback definition. If the original eth0 is no longer attached to the vm, cloud-init's netplan yaml will not match by macaddress and the system will fall through to match the following hotpluggedeth0 definition:

      hotpluggedeth0:
        dhcp4: true
        match:
          driver: hv_netvsc
          name: 'eth0'

- 799. By Chad Smith

  Update Azure's nic hotplug script to use netplan if available instead of ENI. Also avoid appending unnecessary include directives in /etc/network/interfaces on netplan-enabled systems.

- 798. By Dan Watkins

  Install Azure model assertion in Azure bionic images

- 797. By Dan Watkins

  Install linux-azure in bionic Azure images [a=Odd_Bloke][r=fginther, tribaal]
  MP: https://code.launchpad.net/~ubuntu-on-ec2/vmbuilder/jenkins_kvm-oddbloke/+merge/341846

- 796. By Philip Roche

  Merge lp:~ubuntu-on-ec2/vmbuilder/jenkins_kvm-oddbloke into lp:~ubuntu-on-ec2/vmbuilder/jenkins_kvm [a=daniel-thewatkins][r=fginther, philroche] Use HTTPS for Vagrant box redirects (LP: #1754948)
  MP: https://code.launchpad.net/~ubuntu-on-ec2/vmbuilder/jenkins_kvm-oddbloke/+merge/341339

- 795. By Dan Watkins

  Drop unscd from bionic Azure images [a=Odd_Bloke][r=fginther, philroche]
  MP: https://code.launchpad.net/~daniel-thewatkins/vmbuilder/jenkins_kvm-drop-unscd/+merge/337830

- 794. By Dan Watkins

  do not explicitly install cloud-init [a=mwhudson][r=fginther, Odd_Bloke, philroche]
  MP: https://code.launchpad.net/~mwhudson/vmbuilder/jenkins_kvm.mwhudson/+merge/334878

- 793. By Francis Ginther

  Update source image and package set for artful. Dropping packages that no longer exist. [a=fginther][r=daniel-thewatkins, philroche, tribaal]
  MP: https://code.launchpad.net/~fginther/vmbuilder/new-artful-builder/+merge/332487

- 792. By Francis Ginther

  Add a bb-series version of the Azure suite-specific template files, img-azure-18.04-addin.tmpl. [a=fginther][r=daniel-thewatkins, rcj]
  MP: https://code.launchpad.net/~fginther/vmbuilder/jenkins_kvm-add-azure-18.04/+merge/332368
Preview Diff
=== added file 'azure_config.sh'
--- azure_config.sh 1970-01-01 00:00:00 +0000
+++ azure_config.sh 2018-05-31 04:33:07 +0000
@@ -0,0 +1,95 @@
+#!/bin/bash
+
+# Load up some libraries
+my_dir="$( cd "$( dirname "$0" )" && pwd )"
+source "${my_dir}/functions/locker"
+source "${my_dir}/functions/common"
+source "${my_dir}/functions/retry"
+source "${my_dir}/functions/merge_templates"
+
+usage() {
+    cat <<EOM
+${0##/} - Populated values in build temple.
+
+Required:
+    --template    Template file
+    --extra       Extra, arbitrary addin
+    --serial      The build serial
+    --out         The output file
+    --tar         Name of tar file
+    --tar-d       Name of directory to tar up
+    --version     The version number of the distro
+    --proposed    Build against proposed
+    --docker      Install Docker/Docker compose
+EOM
+}
+
+short_opts="h"
+long_opts="out:,template:,serial:,tar:,tar-d:,version:,proposed,docker,extra:"
+getopt_out=$(getopt --name "${0##*/}" \
+    --options "${short_opts}" --long "${long_opts}" -- "$@") &&
+    eval set -- "${getopt_out}" || { echo "BAD INVOCATION!"; usage; exit 1; }
+
+serial=${SERIAL:-$(date +%Y%m%d)}
+
+# Standard templates
+template_f="$(readlink -f ${0%/*}/templates/img-azure.tmpl)"
+template_netaddin_f="$(readlink -f ${0%/*}/templates/img-extra-nets.tmpl)"
+template_extra_f="$(readlink -f ${0%/*}/templates/img-azure-extra.tmpl)"
+extra_addins=()
+
+while [ $# -ne 0 ]; do
+    cur=${1}; next=${2};
+    case "$cur" in
+        --template) template_f=$2; shift;;
+        --extra) extra_addins+=($2); shift;;
+        --serial) serial=$2; shift;;
+        --tar) tar_f=$2; shift;;
+        --tar-d) tar_d=$2; shift;;
+        --out) out_f=$2; shift;;
+        --version) version=$2; shift;;
+        --proposed) proposed="true";;
+        --docker) docker="1";;
+        --) shift; break;;
+    esac
+    shift;
+done
+
+fail() { echo "${@}" 2>&1; exit 1;}
+fail_usage() { fail "Must define $@"; }
+
+# Create the template file for image conversion
+sed -e "s,%S,${serial},g" \
+    -e "s,%v,${version},g" \
+    -e "s,%P,${proposed:-false},g" \
+    ${template_f} > ${out_f}.base ||
+    fail "Unable to write template file"
+
+# Support per-suite addins
+net_addin=1
+
+# Disable the extra nets for Azure due due to the systemd changes
+dist_ge ${version} vivid && net_addin=0
+
+# Order the addins
+default_addin="${template_f//.tmpl/}-${version}-addin.tmpl"
+docker_addin="${template_f//.tmpl/}-${version}-docker.tmpl"
+
+addins=(${default_addin})
+[ ${net_addin:-0} -eq 1 ] && addins+=("${template_netaddin_f}")
+[ ${docker:-0} -eq 1 -a -f "${docker_addin}" ] && addins+=("${docker_addin}")
+addins+=("${extra_addins[@]}" "${template_extra_f}")
+
+merge_templates ${out_f}.base ${out_f} ${addins[@]}
+
+debug "=================================================="
+debug "Content of template:"
+cat ${out_f}
+debug "=================================================="
+
+if [ -n "${tar_d}" ]; then
+    tar -C "${tar_d}" -cf "${tar_f}" . &&
+        debug "TAR'd up ${tar_d}" ||
+        fail "Failed to tar up ${tar_d}"
+fi
+exit 0
=== added file 'base_indicies.sh'
--- base_indicies.sh 1970-01-01 00:00:00 +0000
+++ base_indicies.sh 2018-05-31 04:33:07 +0000
@@ -0,0 +1,28 @@
+#!/bin/bash -xe
+#
+# Simple job for creating indicies
+suite="${1:-$SUITE}"
+serial="${2:-$SERIAL}"
+
+umask 022
+cronrun="/srv/builder/vmbuilder/bin/cronrun"
+
+# Override and set some home variables
+export HOME="/srv/builder/vmbuilder"
+export CDIMAGE_BIN="${CDIMAGE_BIN:-$HOME/cdimage/bin}"
+export CDIMAGE_ROOT="${CDIMAGE_ROOT:-$HOME/cdimage}"
+export PUBLISH_SCRIPTS="${PUBLISH_SCRIPTS:-$HOME/ec2-publishing-scripts}"
+export PATH="${PUBLISH_SCRIPTS}:${CDIMAGE_BIN}:${PATH}"
+
+fail() { echo "${@}" 2>&1; exit 1;}
+
+echo "Checksumming result directories"
+work_d="${WORKD:-/srv/ec2-images}/${suite}/${serial}"
+
+${CDIMAGE_BIN}/checksum-directory "${work_d}" &&
+    checksum-directory "${work_d}/unpacked" ||
+    fail "Failed to checksum result directories"
+
+${PUBLISH_SCRIPTS}/update-build-indexes daily ${work_d} ${suite} &&
+    update-build-indexes daily ${work_d} ${suite} ||
+    fail "Failed to make the indexes for ${work_d}"
=== added file 'build-juju-local.sh'
--- build-juju-local.sh 1970-01-01 00:00:00 +0000
+++ build-juju-local.sh 2018-05-31 04:33:07 +0000
@@ -0,0 +1,108 @@
+#!/bin/bash
+
+# Read in the common files
+myname=$(readlink -f ${0})
+mydir=$(dirname ${myname})
+mypdir=$(dirname ${mydir})
+
+# Scope stuff locally here
+# Create a temporary directory for the fun
+tmp_dir=$(mktemp -d builder.XXXXX --tmpdir=${TMPDIR:-/tmp})
+export TMPDIR=${tmp_dir}
+export WORKSPACE=${mydir}
+export HOME=${mydir}
+export LOCAL_BUILD=1
+
+clean() { [ -d ${tmp_dir} ] && rm -rf ${tmp_dir};
+          [ -d "${mydir}/Virtualbox\ VMS" ] && rm -rf "${mydir}/Virtualbox\ VMS";
+          exit "${@}"; }
+error() { echo "$@"; }
+debug() { error "$(date -R):" "$@"; }
+fail() { debug "${1:-Something bad happend}"; clean 1; }
+
+# Fly with the safety on!
+trap fail EXIT
+trap fail SIGINT
+
+test_cmd_exists() {
+    which $1 >> /dev/null || fail "Command $1 does not exist! Please install $2"
+}
+
+if [ "$(lsb_release -r -s | sed 's/\.//')" -lt 1404 ]; then
+    fail "This must be run on Ubuntu 14.04 or higher"
+fi
+
+test_cmd_exists qemu-nbd qemu-utils
+test_cmd_exists vboxmanage virtualbox
+test_cmd_exists bzr bzr
+test_cmd_exists sstream-query simplestreams
+
+# This defines what gets built
+build_for=(${BUILD_FOR:-trusty:amd64 precise:amd64})
+[ -n "${JUJU_CORE_PKG}" -o -n "${JUJU_LOCAL_PKG}" ] && \
+    [ ${#build_for[@]} -ge 2 ] && \
+    fail "JUJU_CORE_PKG and JUJU_LOCAL_PKG can be specified only for a single build target."
+
+for build in ${build_for[@]};
+do
+    suite=${build%%:*}
+    arch=${build##*:}
+    builder_img="${mydir}/${suite}-builder-${arch}.img"
+    results_d_arch="${mydir}/${suite}-${arch}"
+    built_img="${suite}-server-cloudimg-${arch}-juju-vagrant-disk1.img"
+
+    [ ! -e "${results_d_arch}" ] &&
+        mkdir -p "${results_d_arch}"
+
+    cmd=(
+        "${mydir}/standalone.sh"
+        "--cloud_cfg ${mydir}/config/cloud-vps.cfg"
+        "--template ${mydir}/templates/img-juju.tmpl"
+        "--suite ${suite}"
+        "--arch ${arch}"
+        "--use_img ${builder_img}"
+        "--final_img ${built_img}"
+        "--resize_final 40"
+    )
+
+    [ ! -e "${builder_img}" ] && cmd+=("--fetch_new")
+    if [ -n "${JUJU_CORE_PKG}" -o -n "${JUJU_LOCAL_PKG}" ]; then
+        cmd+=("--cloud-init-file ${mydir}/templates/handle-xdeb.py:text/part-handler")
+        if [ -n "${JUJU_CORE_PKG}" ]; then
+            cmd+=("--cloud-init-file ${JUJU_CORE_PKG}:application/x-deb")
+            echo "JUJU_CORE_PKG=$(basename $JUJU_CORE_PKG)" > ${tmp_dir}/juju-sources.sh
+        fi
+        if [ -n "${JUJU_LOCAL_PKG}" ]; then
+            cmd+=("--cloud-init-file ${JUJU_LOCAL_PKG}:application/x-deb")
+            echo "JUJU_LOCAL_PKG=$(basename $JUJU_LOCAL_PKG)" >> ${tmp_dir}/juju-sources.sh
+        fi
+        cmd+=("--cloud-init-file ${tmp_dir}/juju-sources.sh:application/x-shellscript")
+    fi
+
+    [ -e "${results_d_arch}/${suite}-server-cloudimg-${arch}-juju-vagrant-disk1.img" ] ||
+        ( cd ${results_d_arch} && ${cmd[@]} )
+
+    # The following Vagrant-ifies the build
+    SUITE=${suite} \
+    ARCH_TYPE=${arch} \
+    SERIAL="current" \
+    SRV_D="${mydir}/${suite}-${arch}" \
+    OUTPUT_D="${mydir}/${suite}-${arch}" \
+    WORKSPACE="${mydir}/${suite}-${arch}" \
+    ${mydir}/jenkins/CloudImages_Juju.sh
+
+    expected_box="${results_d_arch}/${suite}-server-cloudimg-${arch}-juju-vagrant-disk1.box"
+    [ -f "${expected_box}" ] || fail "unable to find ${expected_box}; build failed!"
+    results_out+=("${build} ${expected_box}")
+done
+
+# Clear the traps
+trap - EXIT
+trap - SIGINT
+trap
+
+debug "Results are in following locations"
+echo -e "${results_out[@]}"
+
+debug "Done with the build!"
+clean 0
=== added file 'builder_config.sh'
--- builder_config.sh 1970-01-01 00:00:00 +0000
+++ builder_config.sh 2018-05-31 04:33:07 +0000
@@ -0,0 +1,78 @@
+#!/bin/bash
+short_opts="h"
+long_opts="distro:,arch:,build-type:,bzr-automated-builds:,bzr-pubscripts:,bzr-livebuild:,bzr-vmbuilder:,out:,template:,serial:,proposed"
+getopt_out=$(getopt --name "${0##*/}" \
+    --options "${short_opts}" --long "${long_opts}" -- "$@") &&
+    eval set -- "${getopt_out}" || { echo "BAD INVOCATION!"; usage; exit 1; }
+
+usage() {
+    cat <<EOM
+${0##/} - Populated values in build temple.
+
+Required:
+    --distro      Distro code name, i.e. precise
+    --arch        Arch, i.e. amd64, i386, armel, armhf
+    --template    Template file
+    --serial      The build serial
+    --out         The output file
+    --proposed    Build against -proposed
+
+Optional:
+    --bzr-automated-builds    bzr branch for automated ec2 builds
+    --bzr-pubscripts          bzr branch of EC2 Publishing Scripts
+    --bzr-livebuild           bzr branch of live-builder
+    --bzr-vmbuilder           bzr branch of vmbuilder
+EOM
+}
+
+
+fail() { echo "${@}" 2>&1; exit 1;}
+
+serial=$(date +%Y%m%d)
+bzr_automated_builds="http://bazaar.launchpad.net/~ubuntu-on-ec2/vmbuilder/automated-ec2-builds"
+bzr_pubscripts="http://bazaar.launchpad.net/~ubuntu-on-ec2/ubuntu-on-ec2/ec2-publishing-scripts"
+bzr_livebuild="http://bazaar.launchpad.net/~ubuntu-on-ec2/live-build/cloud-images"
+bzr_vmbuilder="http://bazaar.launchpad.net/~ubuntu-on-ec2/vmbuilder/0.11a"
+template_f="${PWD}/img-build.tmpl"
+proposed=0
+
+while [ $# -ne 0 ]; do
+    cur=${1}; next=${2};
+    case "$cur" in
+        --distro) distro=$2; shift;;
+        --arch) arch=$2; shift;;
+        --build-type) build_type=$2; shift;;
+        --bzr-automated-builds) bzr_automated_builds=$2; shift;;
+        --bzr-pubscripts) bzr_pubscripts=$2; shift;;
+        --bzr-livebuild) bzr_livebuild=$2; shift;;
+        --bzr-vmbuilder) bzr_vmbuilder=$2; shift;;
+        --template) template_f=$2; shift;;
+        --serial) serial=$2; shift;;
+        --out) out_f=$2; shift;;
+        --proposed) proposed=1;;
+        --) shift; break;;
+    esac
+    shift;
+done
+
+fail_usage() { fail "Must define $@"; }
+
+[ -z "${distro}" ] && fail_usage "--distro"
+[ -z "${arch}" ] && fail_usage "--arch"
+[ -z "${build_type}" ] && fail_usage "--build-type"
+[ -z "${out_f}" ] && fail_usage "--out"
+
+sed -e "s,%d,${distro},g" \
+    -e "s,%a,${arch},g" \
+    -e "s,%b,${build_type},g" \
+    -e "s,%A,${bzr_automated_builds},g" \
+    -e "s,%P,${bzr_pubscripts},g" \
+    -e "s,%L,${bzr_livebuild},g" \
+    -e "s,%V,${bzr_vmbuilder},g" \
+    -e "s,%S,${serial},g" \
+    -e "s,%p,${proposed:-0},g" \
+    -e "s,%C,$(awk 1 ORS='\\n' < "${HOME}/.lp_creds")," \
+    ${template_f} > ${out_f} ||
+    fail "Unable to write template file"
+
+exit 0
=== added file 'checksum.sh'
--- checksum.sh 1970-01-01 00:00:00 +0000
+++ checksum.sh 2018-05-31 04:33:07 +0000
@@ -0,0 +1,27 @@
+# Override and set some home variables
+export HOME="${USE_HOME:-/srv/builder}"
+export CDIMAGE_BIN="${HOME}/cdimage/bin"
+export CDIMAGE_ROOT="${HOME}/cdimage"
+export PATH="${CDIMAGE_BIN}:${PATH}"
+
+SUITE_DIR="${BASE_D}/${SUITE}"
+[ -n "${STREAM}" -a "${STREAM}" != "daily" ] &&
+    SUITE_DIR="${BASE_D}/${STREAM}/${SUITE}"
+SERIAL_DIR="${SUITE_DIR}/${SERIAL}"
+
+echo "Checksumming the new version..."
+checksum-directory "${SERIAL_DIR}"
+
+if [ ! -d ${SERIAL_DIR}/unpacked ]; then
+    echo "Adding build info to the new version..."
+    mkdir -p ${SERIAL_DIR}/unpacked
+    cat << EOF > ${SERIAL_DIR}/unpacked/build-info.txt
+SERIAL=$SERIAL
+EOF
+fi
+
+checksum-directory ${SERIAL_DIR}/unpacked
+
+if [ "${UPDATE_CURRENT:-false}" = "true" ]; then
+    ./update_release_directory.sh "${SUITE_DIR}"
+fi
=== added directory 'config'
=== added file 'config/cloud-azure.cfg'
--- config/cloud-azure.cfg 1970-01-01 00:00:00 +0000
+++ config/cloud-azure.cfg 2018-05-31 04:33:07 +0000
@@ -0,0 +1,9 @@
+#cloud-config
+package_upgrade: true
+password: ubuntu
+chpasswd: { expire: False }
+ssh_pwauth: True
+packages:
+  - pastebinit
+  - zerofree
+  - ubuntu-dev-tools
=== added file 'config/cloud-maas.cfg'
--- config/cloud-maas.cfg 1970-01-01 00:00:00 +0000
+++ config/cloud-maas.cfg 2018-05-31 04:33:07 +0000
@@ -0,0 +1,11 @@
+#cloud-config
+package_upgrade: true
+password: ubuntu
+packages:
+  - bzr
+  - kpartx
+  - qemu-kvm
+  - qemu-kvm-extras
+  - qemu-kvm-extras-static
+  - zerofree
+
=== added file 'config/cloud-maasv2.cfg'
--- config/cloud-maasv2.cfg 1970-01-01 00:00:00 +0000
+++ config/cloud-maasv2.cfg 2018-05-31 04:33:07 +0000
@@ -0,0 +1,10 @@
+#cloud-config
+#This is generic enough to build for both MAAS and general cloud images
+package_upgrade: true
+password: ubuntu
+packages:
+  - bzr
+  - qemu-utils
+  - zerofree
+  - gdisk
+  - proot
=== added file 'config/cloud-maasv3.cfg'
--- config/cloud-maasv3.cfg 1970-01-01 00:00:00 +0000
+++ config/cloud-maasv3.cfg 2018-05-31 04:33:07 +0000
@@ -0,0 +1,10 @@
+#cloud-config
+#This is generic enough to build for both MAAS and general cloud images
+package_upgrade: true
+password: ubuntu
+packages:
+  - bzr
+  - qemu-utils
+  - zerofree
+  - gdisk
+  - proot
=== added file 'config/cloud-precise.cfg'
--- config/cloud-precise.cfg 1970-01-01 00:00:00 +0000
+++ config/cloud-precise.cfg 2018-05-31 04:33:07 +0000
@@ -0,0 +1,24 @@
+#cloud-config
+package_upgrade: true
+password: ubuntu
+chpasswd: { expire: False }
+ssh_pwauth: True
+ssh_import_id:
+  - daniel-thewatkins
+  - philroche
+  - rcj
+packages:
+  - bzr
+  - debootstrap
+  - python-vm-builder
+  - pastebinit
+  - kpartx
+  - qemu-kvm
+  - qemu-kvm-extras
+  - qemu-kvm-extras-static
+  - debhelper
+  - virtualbox
+  - u-boot-tools
+  - zerofree
+  - gdisk
+  - ubuntu-dev-tools
=== added file 'config/cloud-trusty-pp64el.cfg'
--- config/cloud-trusty-pp64el.cfg 1970-01-01 00:00:00 +0000
+++ config/cloud-trusty-pp64el.cfg 2018-05-31 04:33:07 +0000
@@ -0,0 +1,13 @@
+#cloud-config
+packages:
+  - bzr
+  - debootstrap
+  - kpartx
+  - debhelper
+  - zerofree
+  - gdisk
+  - qemu-utils
+  - ubuntu-dev-tools
+  - gcc
+  - make
+  - zlib1g-dev
=== added file 'config/cloud-trusty.cfg'
--- config/cloud-trusty.cfg 1970-01-01 00:00:00 +0000
+++ config/cloud-trusty.cfg 2018-05-31 04:33:07 +0000
@@ -0,0 +1,26 @@
+#cloud-config
+#This is generic enough to build for both MAAS and general cloud images
+package_upgrade: true
+password: ubuntu
+chpasswd: { expire: False }
+ssh_pwauth: True
+ssh_import_id:
+  - daniel-thewatkins
+  - philroche
+  - rcj
+apt_sources:
+  - source: deb $MIRROR $RELEASE multiverse
+packages:
+  - bzr
+  - debootstrap
+  - kpartx
+  - qemu-kvm
+  - qemu-user-static
+  - debhelper
+  - virtualbox
+  - zerofree
+  - gdisk
+  - proot
+  - u-boot-tools
+  - ubuntu-dev-tools
+  - zlib1g-dev
=== added file 'config/cloud-vps.cfg'
--- config/cloud-vps.cfg 1970-01-01 00:00:00 +0000
+++ config/cloud-vps.cfg 2018-05-31 04:33:07 +0000
@@ -0,0 +1,6 @@
+#cloud-config
+packages:
+  - pastebinit
+  - zerofree
+  - btrfs-tools
+  - ubuntu-dev-tools
=== added file 'config/cloud.cfg'
--- config/cloud.cfg 1970-01-01 00:00:00 +0000
+++ config/cloud.cfg 2018-05-31 04:33:07 +0000
@@ -0,0 +1,17 @@
+#cloud-config
+# Generic cloud-config for builder instance
+package_upgrade: true
+password: ubuntu
+chpasswd: { expire: False }
+ssh_pwauth: True
+apt_sources:
+  - source: deb $MIRROR $RELEASE multiverse
+packages:
+  - bzr
+  - zerofree
+  - gdisk
+  - gcc
+  - make
+  - git
+  - ubuntu-dev-tools
+  - zlib1g-dev
=== added file 'copy_to_final.sh'
--- copy_to_final.sh 1970-01-01 00:00:00 +0000
+++ copy_to_final.sh 2018-05-31 04:33:07 +0000
@@ -0,0 +1,52 @@
+#!/bin/bash
+#
+# copies the files to their staging location
+
+DISTRO="${DISTRO:-$1}"
+WORKSPACE="${WORKSPACE:-$2}"
+SERIAL="${SERIAL:-$3}"
+BTYPE="${BTYPE:-$4}"
+BTYPE="${BTYPE:-server}"
+
+# Allow for legacy positional arguments
+test_build="${5:-0}"
+sandbox_build="${6:-0}"
+proposed_build="${7:-0}"
+
+# Allow for environment variable to control this
+TEST_BUILD="${TEST_BUILD:-$test_build}"
+SANDBOX_BUILD="${SANDBOX_BUILD:-$sandbox_build}"
+PROPOSED_BUILD="${PROPOSED_BUILD:-$proposed_build}"
+
+ROOT_D="${ROOT_D:-/srv/ec2-images}"
+base_d="${ROOT_D}/${DISTRO}/${SERIAL}"
+[ "${TEST_BUILD}" -eq 1 ] && base_d="${ROOT_D}/test_builds/${DISTRO}/${SERIAL}"
+[ "${SANDBOX_BUILD}" -eq 1 ] && base_d="${ROOT_D}/sandbox/${DISTRO}/${SERIAL}"
+[ "${PROPOSED_BUILD}" -eq 1 ] && base_d="${ROOT_D}/proposed/${DISTRO}/${SERIAL}"
+[ "${BTYPE}" = "desktop" ] && base_d="${ROOT_D}/desktop/${DISTRO}/${SERIAL}"
+
+# Make sure that the HWE directory is created
+if [[ "${BTYPE}" =~ server-hwe ]]; then
+    base_d="${base_d}/${BTYPE//server-/}"
+    [ ! -e "${base_d}" ] && mkdir -p "${base_d}"
+fi
+
+for roottar in $(find . -iname "*root.tar.gz"); do
+    echo "Generating file listing"
+
+    case ${roottar} in
+        *amd64*) arch_name="amd64";;
+        *i386*) arch_name="i386";;
+        *armel*) arch_name="armel";;
+        *armhf*) arch_name="armhf";;
+        *ppc64*) arch_name="ppc64el";;
+        *arm64*) arch_name="arm64";;
+        *) arch_name="unknown-$(date +%s)";;
+    esac
+
+    tar -tzvf ${roottar} >> "${WORKSPACE}/file-list-${arch_name}.log" ||
+        echo "Non fatal error. Failed to gather file list for ${roottar}"
+done
+
+cp -au ${DISTRO}-*/* ${base_d} || exit 1
+exit 0
=== added file 'create-vhd.sh'
--- create-vhd.sh 1970-01-01 00:00:00 +0000
+++ create-vhd.sh 2018-05-31 04:33:07 +0000
@@ -0,0 +1,97 @@
+#!/bin/bash
+source "./functions/locker"
+
+usage() {
+    cat << EOF
+This program is used to convert raw images to VHD files.
+
+    --suite: the Ubuntu Code name to build against
+    --source_file: the name of the raw image file to convert
+    --size: the size of the converted image in G (defaults to 30G)
+EOF
+    exit 1
+}
+
+# Defaults
+vhd_size=30
+
+# Command line parsing
+short_opts="h"
+long_opts="suite:,source_file:,size:"
+getopt_out=$(getopt --name "${0##*/}" --options "${short_opts}"\
+    --long "${long_opts}" -- "$@")
+if [ $? -eq 0 ]; then
+    eval set -- "${getopt_out}"
+else
+    usage
+    exit 1
+fi
+
+while [ $# -ne 0 ]; do
+    cur=${1}; next=${2};
+
+    case "${cur}" in
+        --size) vhd_size="${2}"; shift;;
+        --source_file) source_file="${2}"; shift;;
+        --suite) suite="${2}"; shift;;
+        -h|--help) usage; exit 0;;
+        ?) usage; exit 1;;
+        --) shift; break;;
+    esac
+    shift;
+done
+
+if [ -z "$source_file" -o -z "$suite" ]; then
+    echo "--source_file and --suite required."
+    exit 1
+fi
+
+raw_name=$(readlink -f "$source_file")
+case ${suite} in
+    precise|trusty|wily|xenial)
+        vhd_name="${PWD}/${suite}-server-cloudimg-amd64-disk1.vhd"
+        ;;
+    *)
+        vhd_name="${PWD}/${suite}-server-cloudimg-amd64.vhd"
+        ;;
+esac
+
+# Copy the raw image to make it ready for VHD production
+cp --sparse=always "${raw_name}" "${raw_name}.pre-vhd" &&
+    debug "Copied raw image VHD production" ||
+    fail "Failed to copy raw image to ${raw_name}.pre-vhd"
+
+# Resize the copied RAW image
+debug "Truncating image to ${vhd_size}G"
+truncate -s "${vhd_size}G" "${raw_name}.pre-vhd" &&
+    debug "Truncated image at ${vhd_size}G" ||
+    fail "Failed to truncate disk image"
+
+# Convert to VHD first, step 1 of cheap hack
+# This is a cheap hack...half the time the next command
+# will fail with "VERR_INVALID_PARAMETER", so this is the,
+# er, workaround
+debug "Converting to VHD"
+_vbox_cmd convertfromraw --format VHD \
+    "${raw_name}.pre-vhd" \
+    "${vhd_name}.pre" &&
+    debug "Converted raw disk to VHD" ||
+    fail "Failed to convert raw image to VHD"
+
+# Clone the disk to fixed, VHD for Azure
+debug "Converting to VHD format from raw..."
+debug ".....this might take a while...."
+_vbox_cmd clonehd --format VHD --variant Fixed \
+    "${vhd_name}.pre" \
+    "${vhd_name}" &&
+    debug "Converted raw disk to VHD format using VirtualBox" ||
+    fail "Failed to convert raw image to VHD disk!"
+
+# Remove the unneeded files
+rm "${vhd_name}.pre" "${raw_name}.pre-vhd"
+
+debug "Image Characteristics:"
+_vbox_cmd showhdinfo "${vhd_name}"
+
+
+debug "Raw image converted to VHD"
=== added file 'ec2_publisher.sh'
--- ec2_publisher.sh 1970-01-01 00:00:00 +0000
+++ ec2_publisher.sh 2018-05-31 04:33:07 +0000
@@ -0,0 +1,98 @@
+#!/bin/bash
+#
+# Simple execution wrapper for publishing images to EC2 from within Jenkins
+#
+suite="${1}"
+serial="${2}"
+btype="${3}"
+work_d="${4}"
+test_build="${5:-0}"
+sandbox_build="${6:-0}"
+allow_existing="${7:-0}"
+pub_type="daily"
+
+umask 022
+ec2_pub_scripts="${EC2_PUB_LOC:-${PWD}/ec2-publishing-scripts}"
+cronrun="/srv/builder/vmbuilder/bin/cronrun"
+
+# Override and set some home variables
+export HOME="/srv/builder/vmbuilder"
+export EC2_DAILY="${EC2_DAILY:-$HOME/ec2-daily}"
+export CDIMAGE_BIN="${CDIMAGE_BIN:-$HOME/cdimage/bin}"
+export CDIMAGE_ROOT="${CDIMAGE_ROOT:-$HOME/cdimage}"
+AUTO_BUILDS="${AUTO_BUILDS:-$EC2_DAILY/automated-ec2-builds}"
+PUBLISH_SCRIPTS="${PUBLISH_SCRIPTS:-$HOME/ec2-publishing-scripts}"
+XC2_PATH="${EC2_DAILY}/xc2"
+S3CMD_PATH="${S3CMD_PATH:-$EC2_DAILY/s3cmd}"
+MISC_PATH="${MISC_PATH:-$EC2_DAILY/misc}"
+VMBUILDER_PATH="${VMBUILDER_PATH:-$EC2_DAILY/vmbuilder}"
+( which euca-version >> /dev/null >&1 ) || EUCA2OOLS_PATH="${EC2_DAILY}/euca2ools"
+BOTO_PATH="${EC2_DAILY}/boto"
+
+export EC2_AMITOOL_HOME="${EC2_DAILY}/ec2-ami-tools"
+export LIVE_BUILD_PATH="${EC2_DAILY}/live-build"
+MYPATH=${VMBUILDER_PATH}:${XC2_PATH}:${S3CMD_PATH}:${PUBLISH_SCRIPTS}:${AUTO_BUILDS}:${VMBUILDER_PATH}:${EC2_AMITOOL_HOME}/bin:$HOME/bin:${CDIMAGE_BIN}
+
+[ -n "${EUCA2OOLS_PATH}" ] && MYPATH="${MYPATH}:${EUCA2OOLS_PATH}/bin"
+
+export PYTHONPATH="${BOTO_PATH}:${EUCA2OOLS_PATH}"
+export PATH=${MYPATH}:/usr/bin:/usr/sbin:/usr/bin:/sbin:/bin
+export JAVA_HOME=/usr
+export START_D=${EC2_DAILY}
+export PUBLISH_BASE=/srv/ec2-images
+export XC2_RETRY_ON="Server.InternalError Read.timeout Server.Unavailable Unable.to.connect"
+
+export PATH="/srv/builder/vmbuilder/cdimage/bin:${ec2_pub_scripts}:${PATH}"
+
+fail() { echo "${@}" 2>&1; exit 1;}
+
+[ -e "${ec2_pub_scripts}" ] ||
+    fail "Please make sure that ec2-publishing-scripts in the current path or define EC2_PUB_LOC"
+
+[ "$#" -eq 4 -o "$#" -eq 5 -o "$#" -eq 6 -o "$#" -eq 7 ] ||
+    fail "Incorrect number of parameters. Must invoke with: <suite> <serial> <build type> <directory>"
+
+[ "${test_build}" -eq 1 ] && {
+    echo "Build has been marked as a test build!";
+    echo "Publishing image to sandbox location";
+    pub_type="testing";
+}
+
+[ "${sandbox_build}" -eq 1 ] && {
+    echo "Build has been marked as a sandbox build!";
+    echo "Publishing image to Sandbox location";
+    pub_type="sandbox";
+}
+
+echo "Checksumming result directories"
+checksum-directory "${work_d}" &&
+    checksum-directory "${work_d}/unpacked" ||
+    fail "Failed to checksum result directories"
+
+# Drop ebs-standard and ebs-io1 from publication for xenial and after
+if [[ "${suite}" > "xenial" || "${suite}" == "xenial" ]] ; then
+    export OVERRIDE_ITEMS_EBS="i386:ebs-ssd amd64:ebs-ssd"
+    export OVERRIDE_ITEMS_HVM="amd64:hvm-ssd"
+fi
+
+echo "Publishing to EC2"
+pub_args=(--verbose)
+[ "${allow_existing}" -eq 1 ] && pub_args+=(--allow-existing)
+${cronrun} publish-build \
+    "${pub_args[@]}" \
+    "${suite}" \
+    "${btype}" \
+    "${pub_type}" \
+    "${work_d}" ||
+    fail "failed publish-build ${suite} ${btype} daily ${work_d}"
+
+# Update current
+base_d="${work_d%/*}"
+serial_d="${work_d##*/}"
+current_d="${base_d}/current"
+[ -e "${current_d}" ] && rm "${current_d}"
+( cd "${base_d}" && ln -s "${serial_d}" current ) ||
+    fail "failed to update current directory"
+
+exit 0
+
796 | === added directory 'functions' | |||
797 | === added file 'functions/bzr_check.sh' | |||
798 | --- functions/bzr_check.sh 1970-01-01 00:00:00 +0000 | |||
799 | +++ functions/bzr_check.sh 2018-05-31 04:33:07 +0000 | |||
800 | @@ -0,0 +1,14 @@ | |||
801 | 1 | #!/bin/bash | ||
802 | 2 | |||
803 | 3 | error() { echo "$@" 1>&2; } | ||
804 | 4 | fail() { error "$@"; exit 1; } | ||
805 | 5 | debug() { error "$(date -R):" "$@"; } | ||
806 | 6 | |||
807 | 7 | check_branch() { | ||
808 | 8 | [ -e "${2}" ] && rm -rf "${2}" | ||
809 | 9 | debug "Checking out ${1} to ${2}" | ||
810 | 10 | bzr checkout --lightweight "${1}" "${2}" && | ||
811 | 11 | debug "Checked out ${1}" || | ||
812 | 12 | fail "Failed to checkout ${1}" | ||
813 | 13 | } | ||
814 | 14 | |||
815 | 0 | 15 | ||
816 | === added file 'functions/bzr_commit.sh' | |||
817 | --- functions/bzr_commit.sh 1970-01-01 00:00:00 +0000 | |||
818 | +++ functions/bzr_commit.sh 2018-05-31 04:33:07 +0000 | |||
819 | @@ -0,0 +1,23 @@ | |||
820 | 1 | #!/bin/bash | ||
821 | 2 | info_dir=${1} | ||
822 | 3 | oargs=${*//${1}/} | ||
823 | 4 | TEMP_D="" | ||
824 | 5 | error() { echo "$@" 1>&2; } | ||
825 | 6 | fail() { [ $# -eq 0 ] || error "$@"; exit 1; } | ||
826 | 7 | |||
827 | 8 | echo "Commit comment is: ${oargs}" | ||
828 | 9 | if [ ! -d "${info_dir}/.bzr" ]; then | ||
829 | 10 | ( cd "${info_dir}" && bzr init && bzr add --quiet . && | ||
830 | 11 | bzr commit --quiet -m "initial state" ) >/dev/null && | ||
831 | 12 | error "initialized bzr directory in ${info_dir}" || | ||
832 | 13 | fail "failed to initialize bzr directory in ${info_dir}" | ||
833 | 14 | fi | ||
834 | 15 | |||
835 | 16 | bzr add "${info_dir}" | ||
836 | 17 | if bzr diff "${info_dir}" >/dev/null; then | ||
837 | 18 | error "no changes were made to ${info_dir}" | ||
838 | 19 | else | ||
839 | 20 | bzr commit -m "${oargs[*]}" "${info_dir}" || | ||
840 | 21 | fail "failed to bzr commit in ${info_dir}" | ||
841 | 22 | fi | ||
842 | 23 | |||
843 | 0 | 24 | ||
844 | === added file 'functions/common' | |||
845 | --- functions/common 1970-01-01 00:00:00 +0000 | |||
846 | +++ functions/common 2018-05-31 04:33:07 +0000 | |||
847 | @@ -0,0 +1,37 @@ | |||
848 | 1 | # Common functions | ||
849 | 2 | # vi: syntax=sh expandtab ts=4 | ||
850 | 3 | |||
851 | 4 | error() { echo "$@" 1>&2; } | ||
852 | 5 | fail() { error "$@"; exit 1; } | ||
853 | 6 | debug() { echo "$(date -R): $@" 1>&2; } | ||
854 | 7 | run() { echo "$(date -R): running cmd: ${@}"; | ||
855 | 8 | env ${@} && debug "Command successful: ${@}" || | ||
856 | 9 | fail "failed to run cmd: ${@}"; } | ||
857 | 10 | |||
858 | 11 | dist_ge() { [[ "$1" > "$2" || "$1" == "$2" ]]; } | ||
859 | 12 | dist_le() { [[ "$1" < "$2" || "$1" == "$2" ]]; } | ||
860 | 13 | |||
861 | 14 | map_version_to_suite() { | ||
862 | 15 | version=(${1//-LTS/ LTS}) | ||
863 | 16 | awk '-F[, ]' \ | ||
864 | 17 | '$2 ~ /LTS/ && $1 ==V {print $5}; $2 !~ /LTS/ && $1 == V {print $4}' \ | ||
865 | 18 | V="${version[0]}" /usr/share/distro-info/ubuntu.csv | ||
866 | 19 | } | ||
867 | 20 | |||
868 | 21 | map_suite_to_version() { | ||
869 | 22 | suite=${1} | ||
870 | 23 | awk '-F[, ]' \ | ||
871 | 24 | '$2 ~ /LTS/ && $5 == S {print $1"-"$2}; $2 !~ /LTS/ && $4 == S {print $1}' \ | ||
872 | 25 | S="${suite}" /usr/share/distro-info/ubuntu.csv | ||
873 | 26 | } | ||
874 | 27 | |||
875 | 28 | # Look for common names | ||
876 | 29 | [ -z "${kvm}" -a -n "${kvm_builder}" ] && kvm="${kvm_builder}" | ||
877 | 30 | [ -z "${kvm_builder}" -a -n "${kvm}" ] && kvm_builder="${kvm}" | ||
878 | 31 | |||
879 | 32 | [ -n "${kvm}" ] && scripts="${kvm}" | ||
880 | 33 | [ -n "${kvm_builder}" ] && scripts="${kvm_builder}" | ||
881 | 34 | |||
882 | 35 | export kvm="${scripts}" | ||
883 | 36 | export kvm_builder="${scripts}" | ||
884 | 37 | export scripts | ||
885 | 0 | 38 | ||
886 | === added file 'functions/locker' | |||
887 | --- functions/locker 1970-01-01 00:00:00 +0000 | |||
888 | +++ functions/locker 2018-05-31 04:33:07 +0000 | |||
889 | @@ -0,0 +1,49 @@ | |||
890 | 1 | # This prevents concurrent commands from running. | ||
891 | 2 | _script=$(readlink -f "${BASH_SOURCE[0]:?}") | ||
892 | 3 | _my_dir=$(dirname "$_script") | ||
893 | 4 | source "${_my_dir}/common" | ||
894 | 5 | source "${_my_dir}/retry" | ||
895 | 6 | |||
896 | 7 | cmd_lock() { | ||
897 | 8 | LOCKFILE="/tmp/wrapper-`basename $1`" | ||
898 | 9 | LOCKFD=99 | ||
899 | 10 | |||
900 | 11 | _lock() { flock -$1 $LOCKFD; } | ||
901 | 12 | _no_more_locking() { _lock u; _lock xn && rm -f $LOCKFILE; } | ||
902 | 13 | _prepare_locking() { eval "exec $LOCKFD>\"$LOCKFILE\""; trap _no_more_locking EXIT; } | ||
903 | 14 | |||
904 | 15 | _prepare_locking | ||
905 | 16 | |||
906 | 17 | exlock_now() { _lock xn; } # obtain an exclusive lock immediately or fail | ||
907 | 18 | exlock() { _lock x; } # obtain an exclusive lock | ||
908 | 19 | shlock() { _lock s; } # obtain a shared lock | ||
909 | 20 | unlock() { _lock u; } # drop a lock | ||
910 | 21 | |||
911 | 22 | count=0 | ||
912 | 23 | max_count=60 | ||
913 | 24 | |||
914 | 25 | while (! exlock_now ); | ||
915 | 26 | do | ||
916 | 27 | let wait_time=$RANDOM%30 | ||
917 | 28 | error "Waiting ${wait_time} seconds due to concurrent ${1} command" | ||
918 | 29 | sleep ${wait_time} | ||
919 | 30 | |||
920 | 31 | count=$(expr ${count} + 1) | ||
921 | 32 | |||
922 | 33 | if [ ${count} -gt ${max_count} ]; then | ||
923 | 34 | echo "Max wait expired. Failing." | ||
924 | 35 | exit 1 | ||
925 | 36 | fi | ||
926 | 37 | done | ||
927 | 38 | |||
928 | 39 | error "Executing command, lock is free for: ${@}" | ||
929 | 40 | "${@}" | ||
930 | 41 | unlock | ||
931 | 42 | } | ||
932 | 43 | |||
933 | 44 | _vbox_cmd() { | ||
934 | 45 | # Virtual box is a real pain. This function uses the locker function above to | ||
935 | 46 | # wrap up vboxmanage to prevent its stupid issues with concurrency. | ||
936 | 47 | cmd_lock vboxmanage ${@} || | ||
937 | 48 | fail "Failed to execute locked command: vboxmange ${@}" | ||
938 | 49 | } | ||
939 | 0 | 50 | ||
940 | === added file 'functions/merge_templates' | |||
941 | --- functions/merge_templates 1970-01-01 00:00:00 +0000 | |||
942 | +++ functions/merge_templates 2018-05-31 04:33:07 +0000 | |||
943 | @@ -0,0 +1,53 @@ | |||
944 | 1 | #!/bin/bash | ||
945 | 2 | # vi: ts=4 noexpandtab syntax=sh | ||
946 | 3 | # | ||
947 | 4 | # This is just like mk_template.sh, but differs in that it handles | ||
948 | 5 | # an arbitrary number of templates being merged in. | ||
949 | 6 | # | ||
950 | 7 | # ARG1 - base template | ||
951 | 8 | # ARG2 - final templates | ||
952 | 9 | # ARG* - addin templates | ||
953 | 10 | |||
954 | 11 | # This merges templates together | ||
955 | 12 | merge_templates() { | ||
956 | 13 | local cur_dir=${PWD} | ||
957 | 14 | local args=(${@}) | ||
958 | 15 | local main_template=${1}; args=("${args[@]:1}") | ||
959 | 16 | local new_template=${2}; args=("${args[@]:1}") | ||
960 | 17 | local addins=("${args[@]}") | ||
961 | 18 | |||
962 | 19 | if [ "${#addins[@]}" -ge 1 ]; then | ||
963 | 20 | ntmp_dir=$(mktemp -d template.XXXXX --tmpdir=${TMPDIR:-/tmp}) | ||
964 | 21 | cd ${ntmp_dir} | ||
965 | 22 | |||
966 | 23 | # Split the base template "ADDIN_HERE" | ||
967 | 24 | awk '/ADDIN_HERE/{n++}{print >"template" n ".txt" }' \ | ||
968 | 25 | ${main_template} || | ||
969 | 26 | fail "failed to split template!" | ||
970 | 27 | |||
971 | 28 | # Combine the split template with the addin in the middle | ||
972 | 29 | cat template.txt \ | ||
973 | 30 | ${addins[@]} \ | ||
974 | 31 | template1.txt \ | ||
975 | 32 | > ${new_template} | ||
976 | 33 | |||
977 | 34 | # Do some variable replacement | ||
978 | 35 | sed -e "s,ADDIN_HERE,# END Addins,g" \ | ||
979 | 36 | -e "s,%%PPA%%,${PPA},g" \ | ||
980 | 37 | -e "s,%%PROPOSED%%,${PROPOSED:-0},g" \ | ||
981 | 38 | -i ${new_template} || | ||
982 | 39 | fail "Unable to finalize template!" | ||
983 | 40 | |||
984 | 41 | else | ||
985 | 42 | |||
986 | 43 | sed -e "s,ADDIN_HERE,# END Addins,g" \ | ||
987 | 44 | "${main_template}" > "${new_template}" | ||
988 | 45 | |||
989 | 46 | fi | ||
990 | 47 | |||
991 | 48 | # Remove the temp directory if it exists | ||
992 | 49 | [ -n "${ntmp_dir}" ] && rm -rf "${ntmp_dir}" | ||
993 | 50 | |||
994 | 51 | # Get back to where we started | ||
995 | 52 | cd ${cur_dir} | ||
996 | 53 | } | ||
997 | 0 | 54 | ||
998 | === added file 'functions/mk_template.sh' | |||
999 | --- functions/mk_template.sh 1970-01-01 00:00:00 +0000 | |||
1000 | +++ functions/mk_template.sh 2018-05-31 04:33:07 +0000 | |||
1001 | @@ -0,0 +1,41 @@ | |||
1002 | 1 | #!/bin/bash | ||
1003 | 2 | |||
1004 | 3 | # This merges templates together | ||
1005 | 4 | merge_template() { | ||
1006 | 5 | cur_dir=${PWD} | ||
1007 | 6 | main_template=${1} | ||
1008 | 7 | addin_template=${2} | ||
1009 | 8 | new_template=${3} | ||
1010 | 9 | |||
1011 | 10 | if [ -n "${addin_template}" ]; then | ||
1012 | 11 | ntmp_dir=$(mktemp -d template.XXXXX --tmpdir=${TMPDIR:-/tmp}) | ||
1013 | 12 | cd ${ntmp_dir} | ||
1014 | 13 | |||
1015 | 14 | # Split the base template "ADDIN_HERE" | ||
1016 | 15 | awk '/ADDIN_HERE/{n++}{print >"template" n ".txt" }' \ | ||
1017 | 16 | ${main_template} || | ||
1018 | 17 | fail "failed to split template!" | ||
1019 | 18 | |||
1020 | 19 | # Combine the split template with the addin in the middle | ||
1021 | 20 | cat template.txt \ | ||
1022 | 21 | ${addin_template} \ | ||
1023 | 22 | template1.txt \ | ||
1024 | 23 | > ${new_template} | ||
1025 | 24 | |||
1026 | 25 | # Do some variable replacement | ||
1027 | 26 | sed -e "s,ADDIN_HERE,# END Addins,g" \ | ||
1028 | 27 | -e "s,%%PPA%%,${PPA},g" \ | ||
1029 | 28 | -e "s,%%PROPOSED%%,${PROPOSED:-0},g" \ | ||
1030 | 29 | -i ${new_template} || | ||
1031 | 30 | fail "Unable to finalize template!" | ||
1032 | 31 | |||
1033 | 32 | else | ||
1034 | 33 | cp "${main_template}" "${new_template}" | ||
1035 | 34 | fi | ||
1036 | 35 | |||
1037 | 36 | # Remove the temp directory if it exists | ||
1038 | 37 | [ -n "${ntmp_dir}" ] && rm -rf "${ntmp_dir}" | ||
1039 | 38 | |||
1040 | 39 | # Get back to where we started | ||
1041 | 40 | cd ${cur_dir} | ||
1042 | 41 | } | ||
1043 | 0 | 42 | ||
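A hedged usage sketch of the three-argument form (the addin and output paths are illustrative; templates/img-build.tmpl is the template the jenkins scripts pass in). Note that merge_template cds into a mktemp directory, so callers should pass absolute paths:

    fail() { echo "$@" >&2; exit 1; }   # merge_template expects fail() in scope
    source functions/mk_template.sh
    merge_template "$PWD/templates/img-build.tmpl" \
        "$PWD/addins/example.tmpl" \
        "$PWD/final-template.txt"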
1044 | === added file 'functions/retry' | |||
1045 | --- functions/retry 1970-01-01 00:00:00 +0000 | |||
1046 | +++ functions/retry 2018-05-31 04:33:07 +0000 | |||
1047 | @@ -0,0 +1,16 @@ | |||
1048 | 1 | # Code for retrying commands | ||
1049 | 2 | |||
1050 | 3 | retry() { | ||
1051 | 4 | local trycount=${1} sleep=${2} | ||
1052 | 5 | shift; shift; | ||
1053 | 6 | local i=0 smsg=" sleeping ${sleep}: $*" ret=0 | ||
1054 | 7 | for((i=0;i<${trycount};i++)); do | ||
1055 | 8 | "$@" && return 0 | ||
1056 | 9 | ret=$? | ||
1057 | 10 | [ $(($i+1)) -eq ${trycount} ] && smsg="" | ||
1058 | 11 | debug 1 "Warning: cmd failed [try $(($i+1))/${trycount}].${smsg}" | ||
1059 | 12 | sleep $sleep | ||
1060 | 13 | done | ||
1061 | 14 | return $ret | ||
1062 | 15 | } | ||
1063 | 16 | |||
1064 | 0 | 17 | ||
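retry expects a debug() helper in scope, which the jenkins scripts define; a minimal usage sketch (the command being retried is illustrative):

    debug() { echo "$@"; }   # stand-in for the callers' debug helper
    source functions/retry
    # Up to 3 attempts, sleeping 10 seconds between failures; the exit
    # status of the last attempt is returned if all of them fail:
    retry 3 10 rsync -a builder@mirror:/srv/images/ /srv/images/ ||
        echo "gave up after 3 tries"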
1065 | === added file 'generate-ubuntu-lists.sh' | |||
1066 | --- generate-ubuntu-lists.sh 1970-01-01 00:00:00 +0000 | |||
1067 | +++ generate-ubuntu-lists.sh 2018-05-31 04:33:07 +0000 | |||
1068 | @@ -0,0 +1,44 @@ | |||
1069 | 1 | #!/bin/bash | ||
1070 | 2 | # Generate a list of Ubuntu releases | ||
1071 | 3 | |||
1072 | 4 | final_d="${FINAL_D:-/srv/jenkins}" | ||
1073 | 5 | tmpd=$(mktemp -d) | ||
1074 | 6 | |||
1075 | 7 | trap "rm -rf ${tmpd}" EXIT SIGINT | ||
1076 | 8 | |||
1077 | 9 | # Get the regular info | ||
1078 | 10 | ubuntu-distro-info --supported \ | ||
1079 | 11 | > ${tmpd}/ubuntu-supported.txt | ||
1080 | 12 | |||
1081 | 13 | ubuntu-distro-info --all \ | ||
1082 | 14 | > ${tmpd}/ubuntu-all.txt | ||
1083 | 15 | |||
1084 | 16 | ubuntu-distro-info --unsupported \ | ||
1085 | 17 | > ${tmpd}/ubuntu-unsupported.txt | ||
1086 | 18 | |||
1087 | 19 | ubuntu-distro-info --release --supported \ | ||
1088 | 20 | > ${tmpd}/ubuntu-versions.txt | ||
1089 | 21 | |||
1090 | 22 | # Populate releases which may be missing | ||
1091 | 23 | for suite in vivid:15.04 wily:15.10 xenial:16.04; | ||
1092 | 24 | do | ||
1093 | 25 | echo "${suite%%:*}" >> ${tmpd}/ubuntu-supported.txt | ||
1094 | 26 | echo "${suite%%:*}" >> ${tmpd}/ubuntu-all.txt | ||
1095 | 27 | echo "${suite##*:}" >> ${tmpd}/ubuntu-versions.txt | ||
1096 | 28 | done | ||
1097 | 29 | |||
1098 | 30 | # Sort and make it pretty | ||
1099 | 31 | cat ${tmpd}/ubuntu-supported.txt \ | ||
1100 | 32 | | sort -r -u > ${final_d}/ubuntu-supported.txt | ||
1101 | 33 | |||
1102 | 34 | cat ${tmpd}/ubuntu-all.txt \ | ||
1103 | 35 | | egrep -v warty \ | ||
1104 | 36 | | sort -r -u > ${final_d}/ubuntu-all.txt | ||
1105 | 37 | |||
1106 | 38 | cat ${tmpd}/ubuntu-versions.txt \ | ||
1107 | 39 | | sed "s, ,-,g" \ | ||
1108 | 40 | | sort -r -u \ | ||
1109 | 41 | > ${final_d}/ubuntu-versions.txt | ||
1110 | 42 | |||
1111 | 43 | sort -r -u ${tmpd}/ubuntu-unsupported.txt \ | ||
1112 | 44 | > ${final_d}/ubuntu-unsupported.txt | ||
1113 | 0 | 45 | ||
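The suite:version pairs above are split with bash prefix/suffix stripping; a quick illustration:

    suite="xenial:16.04"
    echo "${suite%%:*}"   # xenial -- strip the longest suffix starting at ':'
    echo "${suite##*:}"   # 16.04  -- strip the longest prefix ending at ':'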
1114 | === added file 'get_serial.sh' | |||
1115 | --- get_serial.sh 1970-01-01 00:00:00 +0000 | |||
1116 | +++ get_serial.sh 2018-05-31 04:33:07 +0000 | |||
1117 | @@ -0,0 +1,157 @@ | |||
1118 | 1 | #!/bin/bash -xe | ||
1119 | 2 | # | ||
1120 | 3 | # Determine the build serial and place files into the build serial location. | ||
1121 | 4 | # Also, handle the unlikely race condition in case multiple builders arrive | ||
1122 | 5 | # at the same point. | ||
1123 | 6 | # Copies the files to their staging location and prevents race conditions | ||
1124 | 7 | # when populating the aggregate build directory. | ||
1125 | 8 | # | ||
1126 | 9 | # OUTPUT: | ||
1127 | 10 | # - serial.txt file in ${WORKSPACE} | ||
1128 | 11 | # - build_properties (OR ${BUILD_PROPERTIES}) file in ${PWD} | ||
1129 | 12 | # - build-info.txt in ${base_d}/unpacked (or ${base_nd}/unpacked}) | ||
1130 | 13 | # NOTE: see code for how base_d and base_nd are computed | ||
1131 | 14 | |||
1132 | 15 | # Required options to even do a build | ||
1133 | 16 | DISTRO="${DISTRO:-$1}" | ||
1134 | 17 | WORKSPACE="${WORKSPACE:-$2}" # where is the workspace | ||
1135 | 18 | BUILD_ID="${BUILD_ID:-$3}" # build id | ||
1136 | 19 | |||
1137 | 20 | # Convert hwe builds to regular for the sake of tooling | ||
1138 | 21 | btype="${4:-server}" # server or something else | ||
1139 | 22 | if [[ "${btype}" =~ hwe ]]; then | ||
1140 | 23 | hwe_btype="${btype}" | ||
1141 | 24 | btype="server"; BTYPE="server" | ||
1142 | 25 | fi | ||
1143 | 26 | |||
1144 | 27 | # Support the legacy broken positional stuff. This should have been | ||
1145 | 28 | # done with environment variables or flags | ||
1146 | 29 | test_build="${5:-0}" # test build? | ||
1147 | 30 | sandbox_build="${6:-0}" # should it be a sandbox build | ||
1148 | 31 | allow_existing="${7:-1}" # allow existing | ||
1149 | 32 | publish_image="${8:-0}" # publish the image | ||
1150 | 33 | proposed_build="${9:-0}" # build from proposed | ||
1151 | 34 | |||
1152 | 35 | # Make this less confusing by allowing someone to use environment | ||
1153 | 36 | # variables. | ||
1154 | 37 | # TODO: utlemming: convert this to --flags | ||
1155 | 38 | BTYPE="${BTYPE:-$btype}" | ||
1156 | 39 | TEST_BUILD="${TEST_BUILD:-$test_build}" | ||
1157 | 40 | SANDBOX_BUILD="${SANDBOX_BUILD:-$sandbox_build}" | ||
1158 | 41 | PUBLISH_IMAGE="${PUBLISH_IMAGE:-$publish_image}" | ||
1159 | 42 | PROPOSED_BUILD="${PROPOSED_BUILD:-$proposed_build}"; ALLOW_EXISTING="${ALLOW_EXISTING:-$allow_existing}" | ||
1160 | 43 | |||
1161 | 44 | ROOT_D="${ROOT_D:-/srv/ec2-images}" | ||
1162 | 45 | base_d="${ROOT_D}/${DISTRO}" | ||
1163 | 46 | [ "${TEST_BUILD}" -eq 1 ] && base_d="${ROOT_D}/test_builds/${DISTRO}" | ||
1164 | 47 | [ "${SANDBOX_BUILD}" -eq 1 ] && base_d="${ROOT_D}/sandbox/${DISTRO}" && TEST_BUILD=0 | ||
1165 | 48 | [ "${BTYPE}" = "desktop" ] && base_d="${ROOT_D}/desktop/${DISTRO}" | ||
1166 | 49 | [ "${PROPOSED_BUILD}" -eq 1 ] && base_d="${ROOT_D}/proposed/${DISTRO}" && | ||
1167 | 50 | TEST_BUILD=0 && SANDBOX_BUILD=0 | ||
1168 | 51 | |||
1169 | 52 | let wait_time=$RANDOM%50 | ||
1170 | 53 | sleep $wait_time # Make build collisions a bit harder | ||
1171 | 54 | |||
1172 | 55 | make_hwe_meta() { | ||
1173 | 56 | # Create a sub build-info.txt for HWE builds | ||
1174 | 57 | serial="${1##*/}" | ||
1175 | 58 | hwe_unpacked="${base_d}/${serial}/${hwe_btype//$BTYPE-/}/unpacked" | ||
1176 | 59 | if [ -n "${hwe_btype}" ]; then | ||
1177 | 60 | [ -d "${hwe_unpacked}" ] || mkdir -p "${hwe_unpacked}" | ||
1178 | 61 | cat << EOF > "${hwe_unpacked}/build-info.txt" | ||
1179 | 62 | serial=${serial} | ||
1180 | 63 | orig_prefix=${DISTRO}-${hwe_btype}-cloudimg | ||
1181 | 64 | suite=${DISTRO} | ||
1182 | 65 | build_name=${hwe_btype} | ||
1183 | 66 | EOF | ||
1184 | 67 | fi | ||
1185 | 68 | } | ||
1186 | 69 | |||
1187 | 70 | make_meta() { | ||
1188 | 71 | # Write the property file for publishing. This is used | ||
1189 | 72 | # to trigger the EC2 publishing job | ||
1190 | 73 | serial=${1##*/} | ||
1191 | 74 | cat << EOM > "${BUILD_PROPERTIES:-$WORKSPACE/build_properties}" | ||
1192 | 75 | BUILD_TYPE=${BTYPE} | ||
1193 | 76 | SERIAL=${serial} | ||
1194 | 77 | SUITE=${DISTRO} | ||
1195 | 78 | TEST_BUILD=${TEST_BUILD} | ||
1196 | 79 | SANDBOX_BUILD=${SANDBOX_BUILD} | ||
1197 | 80 | PUBLISH_IMAGE=${PUBLISH_IMAGE} | ||
1198 | 81 | ALLOW_EXISTING=${ALLOW_EXISTING} | ||
1199 | 82 | PROPOSED_BUILD=${PROPOSED_BUILD} | ||
1200 | 83 | EOM | ||
1201 | 84 | |||
1202 | 85 | # Write the build-info.txt file. This is used in | ||
1203 | 86 | # the publishing process | ||
1204 | 87 | [ -d "${1}/unpacked" ] || mkdir -p "${1}/unpacked" | ||
1205 | 88 | cat << EOF > "${1}/unpacked/build-info.txt" | ||
1206 | 89 | serial=${serial} | ||
1207 | 90 | orig_prefix=${DISTRO}-${BTYPE}-cloudimg | ||
1208 | 91 | suite=${DISTRO} | ||
1209 | 92 | build_name=${BTYPE} | ||
1210 | 93 | EOF | ||
1211 | 94 | make_hwe_meta ${serial} | ||
1212 | 95 | exit 0 | ||
1213 | 96 | } | ||
1214 | 97 | |||
1215 | 98 | [ -e "/tmp/${DISTRO}-${BUILD_ID}" ] && { | ||
1216 | 99 | echo "Another builder is/has reserved this part of the build. Deferring..." | ||
1217 | 100 | while [ -z "${destdir}" ] | ||
1218 | 101 | do | ||
1219 | 102 | sleep 5 | ||
1220 | 103 | finaldir="" | ||
1221 | 104 | |||
1222 | 105 | [ -e "${WORKSPACE}/serial.txt" ] && { | ||
1223 | 106 | read serial < "${WORKSPACE}/serial.txt" | ||
1224 | 107 | destdir="${base_d}/${serial}" | ||
1225 | 108 | } | ||
1226 | 109 | |||
1227 | 110 | while read destdir | ||
1228 | 111 | do | ||
1229 | 112 | echo "Candidate serial found: ${destdir##*/}" | ||
1230 | 113 | finaldir="${destdir}" | ||
1231 | 114 | done < /tmp/${DISTRO}-${BUILD_ID} | ||
1232 | 115 | |||
1233 | 116 | if [ -n "${finaldir}" ]; then | ||
1234 | 117 | echo "Aggregation directory reported as ${finaldir}" | ||
1235 | 118 | echo "${finaldir##*/}" > "${WORKSPACE}/serial.txt" | ||
1236 | 119 | make_hwe_meta "${finaldir##*/}" | ||
1237 | 120 | exit 0 | ||
1238 | 121 | else | ||
1239 | 122 | echo "destdir is not defined!" && exit 10 | ||
1240 | 123 | fi | ||
1241 | 124 | |||
1242 | 125 | done | ||
1243 | 126 | } | ||
1244 | 127 | |||
1245 | 128 | # if we get here, then we know that the build dir hasn't been created yet | ||
1246 | 129 | touch /tmp/${DISTRO}-$BUILD_ID | ||
1247 | 130 | test_base_d="${base_d}/$(date +%Y%m%d)" | ||
1248 | 131 | |||
1249 | 132 | make_and_write() { | ||
1250 | 133 | serial="${1##*/}" | ||
1251 | 134 | echo "Creating aggregation directory ${1}" | ||
1252 | 135 | echo "${serial}" > "${WORKSPACE}/serial.txt" | ||
1253 | 136 | mkdir -p "${1}" && | ||
1254 | 137 | echo "${1}" >> /tmp/${DISTRO}-$BUILD_ID || | ||
1255 | 138 | exit 10 | ||
1256 | 139 | |||
1257 | 140 | # Copy stuff to where it should go | ||
1258 | 141 | make_meta "${1}" | ||
1259 | 142 | } | ||
1260 | 143 | |||
1261 | 144 | if [ ! -d "${test_base_d}" ]; then | ||
1262 | 145 | make_and_write "${test_base_d}" | ||
1263 | 146 | else | ||
1264 | 147 | for bs in {1..30} | ||
1265 | 148 | do | ||
1266 | 149 | base_nd="${test_base_d}.${bs}" | ||
1267 | 150 | serial="${base_nd##*/}" | ||
1268 | 151 | echo "Checking on directory ${base_nd}" | ||
1269 | 152 | [ ! -d "${base_nd}" ] && make_and_write "${base_nd}" | ||
1270 | 153 | make_hwe_meta "${serial}" | ||
1271 | 154 | done | ||
1272 | 155 | fi | ||
1273 | 156 | |||
1274 | 157 | exit 0 | ||
1275 | 0 | 158 | ||
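get_serial.sh is driven positionally; CloudImages_Base.sh below invokes it through retry like this (the suite name here is illustrative), which maps onto the variables read at the top of the script:

    #               DISTRO WORKSPACE      BUILD_ID      btype  test sandbox allow publish
    ./get_serial.sh xenial "${WORKSPACE}" "${BUILD_ID}" server 0    0       1     1
    # On success, ${WORKSPACE}/serial.txt holds the chosen serial, and
    # build_properties plus unpacked/build-info.txt are written as documented.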
1276 | === added directory 'jenkins' | |||
1277 | === added file 'jenkins/CloudImages_Azure.sh' | |||
1278 | --- jenkins/CloudImages_Azure.sh 1970-01-01 00:00:00 +0000 | |||
1279 | +++ jenkins/CloudImages_Azure.sh 2018-05-31 04:33:07 +0000 | |||
1280 | @@ -0,0 +1,162 @@ | |||
1281 | 1 | #!/bin/bash | ||
1282 | 2 | fail() { [ $# -eq 0 ] || echo "$@"; exit 1; } | ||
1283 | 3 | |||
1284 | 4 | # set default umask | ||
1285 | 5 | umask 022 | ||
1286 | 6 | |||
1287 | 7 | # Pre-setup: Read the build properties from the previous build | ||
1288 | 8 | # and discard what we don't want | ||
1289 | 9 | [ -e build.info ] && cp build.info build_properties | ||
1290 | 10 | source build_properties | ||
1291 | 11 | |||
1292 | 12 | |||
1293 | 13 | # Load up some libraries | ||
1294 | 14 | my_dir="$( cd "$( dirname "$0" )" && pwd )" | ||
1295 | 15 | base_dir=$(dirname ${my_dir}) | ||
1296 | 16 | source "${base_dir}/functions/locker" | ||
1297 | 17 | source "${base_dir}/functions/common" | ||
1298 | 18 | source "${base_dir}/functions/retry" | ||
1299 | 19 | |||
1300 | 20 | debug() { echo "${@}"; } | ||
1301 | 21 | fail() { echo "${@}" >&2; exit 1;} | ||
1302 | 22 | |||
1303 | 23 | |||
1304 | 24 | # Shuffle stuff around | ||
1305 | 25 | [ -e build_properties ] && mv build_properties parent_build_properties | ||
1306 | 26 | [ -e build.log ] && mv build.log parent_build.log | ||
1307 | 27 | [ -e "${SUITE}-build.sh" ] && rm "${SUITE}-build.sh" | ||
1308 | 28 | |||
1309 | 29 | echo "-------------------" | ||
1310 | 30 | echo " Distro: ${SUITE}" | ||
1311 | 31 | echo " Serial: ${SERIAL}" | ||
1312 | 32 | echo " Type: ${BUILD_TYPE}" | ||
1313 | 33 | echo "-------------------" | ||
1314 | 34 | |||
1315 | 35 | set -x | ||
1316 | 36 | # Variables | ||
1317 | 37 | case ${SUITE} in | ||
1318 | 38 | precise|trusty|wily|xenial) | ||
1319 | 39 | disk_name="${SUITE}-server-cloudimg-amd64-disk1.img" | ||
1320 | 40 | raw_name="${PWD}/${SUITE}-server-cloudimg-amd64-disk1.raw" | ||
1321 | 41 | vhd_name="${PWD}/${SUITE}-server-cloudimg-amd64-disk1.vhd" | ||
1322 | 42 | ;; | ||
1323 | 43 | *) | ||
1324 | 44 | disk_name="${SUITE}-server-cloudimg-amd64.img" | ||
1325 | 45 | raw_name="${PWD}/${SUITE}-server-cloudimg-amd64.raw" | ||
1326 | 46 | vhd_name="${PWD}/${SUITE}-server-cloudimg-amd64.vhd" | ||
1327 | 47 | ;; | ||
1328 | 48 | esac | ||
1329 | 49 | disk_root="${DISK_ROOT:-/srv/ec2-images/${SUITE}/${SERIAL}}" | ||
1330 | 50 | raw_disk="${PWD}/results.raw" | ||
1331 | 51 | launch_config="${PWD}/launch_config.sh" | ||
1332 | 52 | register_config="${PWD}/register_config.sh" | ||
1333 | 53 | pkg_tar="${PWD}/pkg.tar" | ||
1334 | 54 | pkg_tar_d="${PKG_TAR_D:-${kvm_builder}/azure_pkgs}" | ||
1335 | 55 | proposed="${PROPOSED:-false}" | ||
1336 | 56 | vhd_size=${VHD_SIZE:-30} | ||
1337 | 57 | |||
1338 | 58 | # Convert the image to a raw disk to work with. The raw image is used | ||
1339 | 59 | # to populate the daily VHD in Azure | ||
1340 | 60 | debug "Converting QCow2 to Raw Disk" | ||
1341 | 61 | qemu-img \ | ||
1342 | 62 | convert -O raw \ | ||
1343 | 63 | "${disk_root}/${disk_name}" \ | ||
1344 | 64 | "${raw_name}" && | ||
1345 | 65 | debug "Converted QCow2 to Raw disk for manipulation" || | ||
1346 | 66 | fail "Failed to convert QCow2 to Raw disk" | ||
1347 | 67 | |||
1348 | 68 | config_opts=(${CONFIG_OPTS}) | ||
1349 | 69 | config_opts+=( | ||
1350 | 70 | --version $(${kvm_builder}/ubuntu-adj2version ${SUITE}) | ||
1351 | 71 | --serial "${SERIAL}" | ||
1352 | 72 | --out "${launch_config}" | ||
1353 | 73 | ) | ||
1354 | 74 | |||
1355 | 75 | # Turns on building from proposed | ||
1356 | 76 | [ "${proposed}" == "true" ] && | ||
1357 | 77 | config_opts+=(--proposed) | ||
1358 | 78 | |||
1359 | 79 | # Setup the configuration | ||
1360 | 80 | ${kvm_builder}/azure_config.sh \ | ||
1361 | 81 | ${config_opts[@]} || | ||
1362 | 82 | fail "Failed to configure instance runtime" | ||
1363 | 83 | |||
1364 | 84 | # Populate the full disk for 12.04 | ||
1365 | 85 | root_size=2 | ||
1366 | 86 | if [ "${SUITE}" == "precise" ]; then | ||
1367 | 87 | root_size=29 | ||
1368 | 88 | truncate -s 29G "${raw_name}" && | ||
1369 | 89 | debug "Resized 12.04 image to full size" || | ||
1370 | 90 | fail "Failed to resize 12.04 to full size" | ||
1371 | 91 | fi | ||
1372 | 92 | |||
1373 | 93 | case ${SUITE} in | ||
1374 | 94 | precise|trusty|xenial) | ||
1375 | 95 | builder_img=/srv/builder/images/precise-builder-latest.img | ||
1376 | 96 | ;; | ||
1377 | 97 | *) | ||
1378 | 98 | builder_img=/srv/builder/images/artful-builder-latest.img | ||
1379 | 99 | ;; | ||
1380 | 100 | esac | ||
1381 | 101 | |||
1382 | 102 | # Launch KVM to do the work | ||
1383 | 103 | ${kvm_builder}/launch_kvm.sh \ | ||
1384 | 104 | --id ${BUILD_ID} \ | ||
1385 | 105 | --user-data "${launch_config}" \ | ||
1386 | 106 | --cloud-config "${kvm_builder}/config/cloud-azure.cfg" \ | ||
1387 | 107 | --extra-disk "${raw_name}" \ | ||
1388 | 108 | --raw-disk "${WORKSPACE}/${SUITE}-output.raw" \ | ||
1389 | 109 | --raw-size ${root_size} \ | ||
1390 | 110 | --img-url ${builder_img} || | ||
1391 | 111 | fail "KVM instance failed to build image." | ||
1392 | 112 | |||
1393 | 113 | rm "${WORKSPACE}/${SUITE}-output.raw" | ||
1394 | 114 | |||
1395 | 115 | |||
1396 | 116 | # Copy the raw image to make it ready for VHD production | ||
1397 | 117 | cp --sparse=always "${raw_name}" "${raw_name}.pre-vhd" && | ||
1398 | 118 | debug "Copied raw image for VHD production" || | ||
1399 | 119 | fail "Failed to copy raw image to ${raw_name}.pre-vhd" | ||
1400 | 120 | |||
1401 | 121 | # Resize the copied RAW image | ||
1402 | 122 | debug "Truncating image to ${vhd_size}G" | ||
1403 | 123 | truncate -s "${vhd_size}G" "${raw_name}.pre-vhd" && | ||
1404 | 124 | debug "Truncated image at ${vhd_size}G" || | ||
1405 | 125 | fail "Failed to truncate disk image" | ||
1406 | 126 | |||
1407 | 127 | # Convert to VHD first, step 1 of cheap hack | ||
1408 | 128 | # This is a cheap hack...half the time the next command | ||
1409 | 129 | # will fail with "VERR_INVALID_PARAMETER", so this is the, | ||
1410 | 130 | # er, workaround | ||
1411 | 131 | debug "Converting to VHD" | ||
1412 | 132 | _vbox_cmd convertfromraw --format VHD \ | ||
1413 | 133 | "${raw_name}.pre-vhd" \ | ||
1414 | 134 | "${vhd_name}.pre" && | ||
1415 | 135 | debug "Converted raw disk to VHD" || | ||
1416 | 136 | fail "Failed to convert raw image to VHD" | ||
1417 | 137 | |||
1418 | 138 | # Clone the disk to fixed, VHD for Azure | ||
1419 | 139 | debug "Converting to VHD format from raw..." | ||
1420 | 140 | debug ".....this might take a while...." | ||
1421 | 141 | _vbox_cmd clonehd --format VHD --variant Fixed \ | ||
1422 | 142 | "${vhd_name}.pre" \ | ||
1423 | 143 | "${vhd_name}" && | ||
1424 | 144 | debug "Converted raw disk to VHD format using VirtualBox" || | ||
1425 | 145 | fail "Failed to convert raw image to VHD disk!" | ||
1426 | 146 | |||
1427 | 147 | # Remove the unneeded files | ||
1428 | 148 | rm "${vhd_name}.pre" "${raw_name}.pre-vhd" | ||
1429 | 149 | |||
1430 | 150 | debug "Image Characteristics:" | ||
1431 | 151 | _vbox_cmd showhdinfo "${vhd_name}" | ||
1432 | 152 | |||
1433 | 153 | |||
1434 | 154 | debug "Raw image converted to VHD" | ||
1435 | 155 | |||
1436 | 156 | # Archive the bzip2 file | ||
1437 | 157 | #debug "Archiving the VHD image" | ||
1438 | 158 | #pbzip2 -f "${vhd_name}" && | ||
1439 | 159 | # debug "Created archive of the VHD image" || | ||
1440 | 160 | # fail "Failed to compress image" | ||
1441 | 161 | |||
1442 | 162 | exit 0 | ||
1443 | 0 | 163 | ||
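_vbox_cmd is not defined in this hunk; it presumably comes from functions/locker, sourced above. Conceptually it wraps VBoxManage so concurrent jenkins jobs do not race; a hedged sketch under that assumption (the lock path is invented for illustration):

    _vbox_cmd() {
        # Serialize VirtualBox invocations across concurrent jobs.
        flock /tmp/vbox-builder.lock VBoxManage "$@"
    }

The two-step convertfromraw/clonehd dance above then produces the fixed-variant VHD that Azure expects.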
1444 | === added file 'jenkins/CloudImages_Base.sh' | |||
1445 | --- jenkins/CloudImages_Base.sh 1970-01-01 00:00:00 +0000 | |||
1446 | +++ jenkins/CloudImages_Base.sh 2018-05-31 04:33:07 +0000 | |||
1447 | @@ -0,0 +1,96 @@ | |||
1448 | 1 | #!/bin/bash | ||
1449 | 2 | |||
1450 | 3 | # Set default umask | ||
1451 | 4 | umask 022 | ||
1452 | 5 | |||
1453 | 6 | DISTRO=${DISTRO:-$SUITE} | ||
1454 | 7 | DISTRO=${DISTRO:?Must define distro} | ||
1455 | 8 | build_config="${PWD}/${DISTRO}-build.sh" | ||
1456 | 9 | |||
1457 | 10 | # Read in the common functions | ||
1458 | 11 | my_dir="$( cd "$( dirname "$0" )" && pwd )" | ||
1459 | 12 | base_dir=$(dirname ${my_dir}) | ||
1460 | 13 | source "${base_dir}/functions/locker" | ||
1461 | 14 | source "${base_dir}/functions/common" | ||
1462 | 15 | source "${base_dir}/functions/retry" | ||
1463 | 16 | source "${my_dir}/build_lib.sh" | ||
1464 | 17 | select_build_config | ||
1465 | 18 | |||
1466 | 19 | # Only block for serial if serial is unknown | ||
1467 | 20 | [ -z "${SERIAL}" ] && { | ||
1468 | 21 | # Get the serial number | ||
1469 | 22 | retry 3 10 \ | ||
1470 | 23 | "${base_dir}/get_serial.sh" \ | ||
1471 | 24 | "${DISTRO}" "${WORKSPACE}" "${BUILD_ID}" "${BTYPE}" 0 0 1 1 || | ||
1472 | 25 | fail "Failed to get serial for this build" | ||
1473 | 26 | |||
1474 | 27 | # Read back the serial number | ||
1475 | 28 | read SERIAL < serial.txt | ||
1476 | 29 | [ -z ${SERIAL} ] && echo "NO SERIAL" && exit 10 | ||
1477 | 30 | } | ||
1478 | 31 | |||
1479 | 32 | # Create the configurations | ||
1480 | 33 | cmd=("${base_dir}/builder_config.sh" | ||
1481 | 34 | --distro "${DISTRO}" | ||
1482 | 35 | --build-type "${BTYPE}" | ||
1483 | 36 | --arch "${ARCH_TYPE}" | ||
1484 | 37 | --template ${base_dir}/templates/img-build.tmpl | ||
1485 | 38 | --serial "${SERIAL}" | ||
1486 | 39 | --out "${build_config}" | ||
1487 | 40 | ) | ||
1488 | 41 | |||
1489 | 42 | # Allow building from proposed | ||
1490 | 43 | [ "${PROPOSED_BUILD:-0}" -eq 1 ] && cmd+=("--proposed") | ||
1491 | 44 | [ "${USE_BUILDDS:-0}" -eq 1 ] && cmd+=("--bzr-automated-builds lp:~ubuntu-on-ec2/vmbuilder/automated-ec2-builds-buildd") | ||
1492 | 45 | [ -n "${BZR_AUTOMATED_EC2}" ] && cmd+=("--bzr-automated-builds ${BZR_AUTOMATED_EC2}") | ||
1493 | 46 | [ -n "${BZR_PUBSCRIPTS}" ] && cmd+=("--bzr-pubscripts ${BZR_PUBSCRIPTS}") | ||
1494 | 47 | [ -n "${BZR_LIVEBUILD}" ] && cmd+=("--bzr-livebuild ${BZR_LIVEBUILD}") | ||
1495 | 48 | [ -n "${BZR_VMBUILDER}" ] && cmd+=("--bzr-vmbuilder ${BZR_VMBUILDER}") | ||
1496 | 49 | |||
1497 | 50 | # Do the build | ||
1498 | 51 | ${cmd[@]} || fail "Failed to configure instance configuration" | ||
1499 | 52 | unset cmd | ||
1500 | 53 | |||
1501 | 54 | # Exit after configuring for arm if so configured | ||
1502 | 55 | if [[ "${ARCH_TYPE}" =~ (arm|aarch64|arm64) ]]; then | ||
1503 | 56 | echo "This is an ARM build. ARM rules will apply" | ||
1504 | 57 | [ "${BUILD_ARM}" -eq 0 ] && exit 0 | ||
1505 | 58 | fi | ||
1506 | 59 | |||
1507 | 60 | # Launch the builder | ||
1508 | 61 | # Retry building the image twice, waiting five | ||
1509 | 62 | # minutes. This should buffer most failures caused | ||
1510 | 63 | # by bad mirrors. | ||
1511 | 64 | export MAX_CYCLES=2160 | ||
1512 | 65 | retry 2 300 \ | ||
1513 | 66 | "${base_dir}/launch_kvm.sh" \ | ||
1514 | 67 | --id "${BUILD_ID}" \ | ||
1515 | 68 | --user-data "${build_config}" \ | ||
1516 | 69 | --cloud-config "${base_dir}/config/${cloud_init_cfg}" \ | ||
1517 | 70 | --img-url "${BUILDER_CLOUD_IMAGE}" \ | ||
1518 | 71 | --raw-disk "${WORKSPACE}/${DISTRO}.raw" \ | ||
1519 | 72 | --raw-size 20 || | ||
1520 | 73 | fail "KVM instance failed" | ||
1521 | 74 | |||
1522 | 75 | tar -xvvf "${WORKSPACE}/${DISTRO}.raw" || | ||
1523 | 76 | fail "Result tar failed to unpack" | ||
1524 | 77 | |||
1525 | 78 | rm "${WORKSPACE}/${DISTRO}.raw" || | ||
1526 | 79 | fail "Failed to remove unnecessary file" | ||
1527 | 80 | |||
1528 | 81 | # Put the bits in place | ||
1529 | 82 | "${base_dir}/copy_to_final.sh" \ | ||
1530 | 83 | "${DISTRO}" \ | ||
1531 | 84 | "${WORKSPACE}" \ | ||
1532 | 85 | "${SERIAL}" \ | ||
1533 | 86 | "${BTYPE}" \ | ||
1534 | 87 | "${TEST_BUILD}" \ | ||
1535 | 88 | "${SANDBOX_BUILD}" \ | ||
1536 | 89 | "${PROPOSED_BUILD}" || | ||
1537 | 90 | fail "Failed to place final files to destination" | ||
1538 | 91 | |||
1539 | 92 | # Copy the build properties into the workspace. This is set by get_serial.sh | ||
1540 | 93 | [ "${BUILD_PROPERTIES}" != "${WORKSPACE}/build_properties" ] && | ||
1541 | 94 | cp ${BUILD_PROPERTIES} ${WORKSPACE}/build_properties | ||
1542 | 95 | |||
1543 | 96 | echo "ARCH=${ARCH_TYPE}" >> build_properties | ||
1544 | 0 | 97 | ||
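One design note on the cmd array above: it is expanded unquoted (${cmd[@]}), which is what lets two-word elements such as "--bzr-pubscripts ${BZR_PUBSCRIPTS}" re-split into separate arguments. A quoting-safe sketch would keep flag and value as separate elements instead:

    cmd=("${base_dir}/builder_config.sh" --distro "${DISTRO}")
    [ -n "${BZR_PUBSCRIPTS}" ] && cmd+=(--bzr-pubscripts "${BZR_PUBSCRIPTS}")
    "${cmd[@]}" || fail "Failed to configure instance configuration"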
1545 | === added file 'jenkins/CloudImages_Base_Release_Delta.sh' | |||
1546 | --- jenkins/CloudImages_Base_Release_Delta.sh 1970-01-01 00:00:00 +0000 | |||
1547 | +++ jenkins/CloudImages_Base_Release_Delta.sh 2018-05-31 04:33:07 +0000 | |||
1548 | @@ -0,0 +1,255 @@ | |||
1549 | 1 | #!/bin/bash -x | ||
1550 | 2 | |||
1551 | 3 | # Set default umask | ||
1552 | 4 | umask 022 | ||
1553 | 5 | |||
1554 | 6 | # Skip promotion if this file exists | ||
1555 | 7 | HOLIDAY_FILE=/srv/jenkins/HOLIDAY | ||
1556 | 8 | |||
1557 | 9 | # Write the build properties file | ||
1558 | 10 | cat << EOF > "${WORKSPACE}/build_properties" | ||
1559 | 11 | SUITE=${SUITE} | ||
1560 | 12 | STREAM=${STREAM} | ||
1561 | 13 | SERIAL=${SERIAL} | ||
1562 | 14 | BUILD_TYPE=${BUILD_TYPE} | ||
1563 | 15 | |||
1564 | 16 | EOF | ||
1565 | 17 | |||
1566 | 18 | # Write the environment variables to the run file | ||
1567 | 19 | env > ${SUITE}.run | ||
1568 | 20 | |||
1569 | 21 | fail() { echo "$@"; exit 1;} | ||
1570 | 22 | dist_ge() { [[ "$1" > "$2" || "$1" == "$2" ]]; } | ||
1571 | 23 | |||
1572 | 24 | arches=(i386 amd64 armel armhf arm64 ppc64el) | ||
1573 | 25 | exec_c="/srv/builder/vmbuilder/bin/cronrun" | ||
1574 | 26 | rel_base="/srv/ec2-images/releases/${SUITE}/release" | ||
1575 | 27 | rel_link=$(readlink ${rel_base}) | ||
1576 | 28 | |||
1577 | 29 | [ "${BUILD_TYPE}" = "desktop" ] && | ||
1578 | 30 | echo "Not valid for desktop builds" && | ||
1579 | 31 | exit 0 | ||
1580 | 32 | |||
1581 | 33 | # Find the existing manifest file | ||
1582 | 34 | old_manifest=$(find -L ${rel_base} -maxdepth 1 -iname '*amd64.manifest') || | ||
1583 | 35 | echo "Unable to find release manifest file" | ||
1584 | 36 | |||
1585 | 37 | # Find the new manifest file | ||
1586 | 38 | new_manifest_d="/srv/ec2-images/${SUITE}/${SERIAL}" | ||
1587 | 39 | [ "${TEST_BUILD:-0}" -eq 1 ] && new_manifest_d="/srv/ec2-images/test_builds/${SUITE}/${SERIAL}" | ||
1588 | 40 | [ "${SANDBOX_BUILD:-0}" -eq 1 ] && new_manifest_d="/srv/ec2-images/sandbox/${SUITE}/${SERIAL}" | ||
1589 | 41 | new_manifest=$(find ${new_manifest_d} -maxdepth 1 -iname '*amd64.manifest') || | ||
1590 | 42 | fail "Unable to find new manifest file" | ||
1591 | 43 | |||
1592 | 44 | # Find the previous serial if there was one | ||
1593 | 45 | previous_serial=$(find /srv/ec2-images/${SUITE}/ -maxdepth 1 -type d |\ | ||
1594 | 46 | awk -F\/ '{print$NF}' | sort -rn | grep "." | grep -v "${SERIAL}" | head -n1) || | ||
1595 | 47 | echo "Unable to find prior daily manifest" | ||
1596 | 48 | |||
1597 | 49 | previous_manifest=${new_manifest//$SERIAL/$previous_serial} | ||
1598 | 50 | |||
1599 | 51 | # Generate the pure package diffs | ||
1600 | 52 | for arch in "${arches[@]}" | ||
1601 | 53 | do | ||
1602 | 54 | nm=${new_manifest//amd64/$arch} | ||
1603 | 55 | om=${old_manifest//amd64/$arch} | ||
1604 | 56 | pm=${previous_manifest/amd64/$arch} | ||
1605 | 57 | |||
1606 | 58 | [ -e "${nm}" ] && | ||
1607 | 59 | cp "${nm}" "${WORKSPACE}/manifest-${arch}-daily-${SERIAL}.txt" | ||
1608 | 60 | |||
1609 | 61 | # Generate the diff from daily to release | ||
1610 | 62 | if [ -e "${nm}" -a -e "${om}" ]; then | ||
1611 | 63 | release_diff=${new_manifest##*/} | ||
1612 | 64 | release_diff=${release_diff//.manifest/-$rel_link-to-daily_manifest.diff} | ||
1613 | 65 | release_diff=${release_diff//amd64/$arch} | ||
1614 | 66 | diff -u ${om} ${nm} > "${WORKSPACE}/${release_diff}" | ||
1615 | 67 | cp ${om} "${WORKSPACE}/manifest-${arch}-release.txt" | ||
1616 | 68 | fi | ||
1617 | 69 | |||
1618 | 70 | # Generate the diff from daily to old daily | ||
1619 | 71 | if [ -e "${nm}" -a -e "${pm}" ]; then | ||
1620 | 72 | daily_diff=${new_manifest##*/} | ||
1621 | 73 | daily_diff=${daily_diff//.manifest/-$previous_serial-to-$SERIAL-manifest.diff} | ||
1622 | 74 | daily_diff=${daily_diff//amd64/$arch} | ||
1623 | 75 | diff -u ${pm} ${nm} > "${WORKSPACE}/${daily_diff}" | ||
1624 | 76 | cp ${pm} "${WORKSPACE}/manifest-${arch}-previous_daily-${previous_serial}.txt" | ||
1625 | 77 | fi | ||
1626 | 78 | done | ||
1627 | 79 | |||
1628 | 80 | # Determine if there is a different version of a particular package. | ||
1629 | 81 | # If so, write out the package information to the specified trigger file | ||
1630 | 82 | package_differences() { | ||
1631 | 83 | # If a pattern is used, it should match a single entry in the manifest | ||
1632 | 84 | package_pattern=$1 | ||
1633 | 85 | trigger_file=$2 | ||
1634 | 86 | |||
1635 | 87 | echo "Checking for differences in ${package_pattern}" | ||
1636 | 88 | |||
1637 | 89 | v1="$(awk "/${package_pattern}/ {print\$NF}" ${old_manifest})" | ||
1638 | 90 | v2="$(awk "/${package_pattern}/ {print\$NF}" ${new_manifest})" | ||
1639 | 91 | |||
1640 | 92 | if [ "x${v1}" != "x${v2}" ]; then | ||
1641 | 93 | echo " Package changed old:${v1}, new:${v2}" | ||
1642 | 94 | cat << PKGDIFF >> ${trigger_file} | ||
1643 | 95 | '${package_pattern}': | ||
1644 | 96 | - old: '${v1}' | ||
1645 | 97 | - new: '${v2}' | ||
1646 | 98 | |||
1647 | 99 | PKGDIFF | ||
1648 | 100 | else | ||
1649 | 101 | echo " No difference old:${v1}, new:${v2}" | ||
1650 | 102 | fi | ||
1651 | 103 | } | ||
1652 | 104 | |||
1653 | 105 | # Set packages to trigger an automated promotion in this array | ||
1654 | 106 | # This list of packages is controlled through application of | ||
1655 | 107 | # lp:~cloudware/cpc-core/+git/cpc_policy:policies/0003_automated_daily_promotion.rst | ||
1656 | 108 | # trigger_set MUST NOT be modified without accompanying policy doc change | ||
1657 | 109 | declare -a trigger_set | ||
1658 | 110 | # This is an array of package names where the string is any awk-friendly | ||
1659 | 111 | # pattern supported by the expression in package_differences(), but it must | ||
1660 | 112 | # only match a single package | ||
1661 | 113 | #trigger_set=('example_package' 'example_package2-*') | ||
1662 | 114 | trigger_set=('pollinate') | ||
1663 | 115 | |||
1664 | 116 | # Append the kernel package to the trigger_set array | ||
1665 | 117 | if dist_ge ${SUITE} quantal; then | ||
1666 | 118 | trigger_set[${#trigger_set[@]}]='linux-image.*generic' | ||
1667 | 119 | else | ||
1668 | 120 | trigger_set[${#trigger_set[@]}]='linux-image-virtual' | ||
1669 | 121 | fi | ||
1670 | 122 | |||
1671 | 123 | # For legacy reasons the jenkins jobs use a "kernel" trigger file | ||
1672 | 124 | # for automated build promotion. All package changes will | ||
1673 | 125 | # use this single trigger file until the need arises for more | ||
1674 | 126 | # granularity. | ||
1675 | 127 | trigger_file="${WORKSPACE}/${SUITE}-kernel-trigger" | ||
1676 | 128 | if [ -e "${trigger_file}" ] ; then | ||
1677 | 129 | echo "Cleaning up old trigger file in workspace" | ||
1678 | 130 | rm --verbose "${trigger_file}" | ||
1679 | 131 | fi | ||
1680 | 132 | |||
1681 | 133 | # Check all packages in the trigger_set array | ||
1682 | 134 | for pkg in ${trigger_set[@]} ; do | ||
1683 | 135 | package_differences "${pkg}" "${trigger_file}" | ||
1684 | 136 | done | ||
1685 | 137 | |||
1686 | 138 | # If the trigger file exists, determine if it should be pushed to the | ||
1687 | 139 | # build trigger directory for action | ||
1688 | 140 | if [ -e "${trigger_file}" ] ; then | ||
1689 | 141 | if [ ! -e "${rel_base}" ]; then | ||
1690 | 142 | echo "${SUITE} not released, not triggering" | ||
1691 | 143 | rm --verbose "${trigger_file}" | ||
1692 | 144 | elif [ ! -f ${HOLIDAY_FILE} ]; then | ||
1693 | 145 | echo "Creating trigger file with contents:" | ||
1694 | 146 | cat "${trigger_file}" | ||
1695 | 147 | cp --verbose "${trigger_file}" \ | ||
1696 | 148 | "${TRIGGER_LOCATION:-/srv/builder/triggers/kernel}/${PARENT_BUILDER_ID}.trigger" | ||
1697 | 149 | else | ||
1698 | 150 | echo "Not creating trigger, ${HOLIDAY_FILE} found" | ||
1699 | 151 | if [ -e "${TRIGGER_LOCATION:-/srv/builder/triggers/kernel}/${PARENT_BUILDER_ID}.trigger" ]; then | ||
1700 | 152 | echo "Removing existing trigger from old build with same parent ID." | ||
1701 | 153 | rm --verbose "${TRIGGER_LOCATION:-/srv/builder/triggers/kernel}/${PARENT_BUILDER_ID}.trigger" | ||
1702 | 154 | fi | ||
1703 | 155 | fi | ||
1704 | 156 | else | ||
1705 | 157 | echo "No trigger file found" | ||
1706 | 158 | fi | ||
1707 | 159 | |||
1708 | 160 | # Copy the diffs from the workspace into the published unpacked directory | ||
1709 | 161 | cp ${WORKSPACE}/*.diff ${new_manifest_d}/unpacked | ||
1710 | 162 | |||
1711 | 163 | # Generate the mfdiff between the dailies | ||
1712 | 164 | [ -e "${previous_manifest}" -a -e "${new_manifest}" ] && | ||
1713 | 165 | ${exec_c} mfdiff amd64 ${SUITE} ${previous_manifest} ${new_manifest} >\ | ||
1714 | 166 | "${WORKSPACE}/${SUITE}-daily.changelog" | ||
1715 | 167 | |||
1716 | 168 | # Generate the diff between daily and the released image | ||
1717 | 169 | [ -e "${old_manifest}" -a -e "${new_manifest}" ] && | ||
1718 | 170 | ${exec_c} mfdiff amd64 ${SUITE} ${old_manifest} ${new_manifest} >\ | ||
1719 | 171 | "${WORKSPACE}/${SUITE}-${rel_link}-to-daily.changelog" | ||
1720 | 172 | |||
1721 | 173 | # Copy the changelogs from the workspace into the published unpacked directory | ||
1722 | 174 | cp ${WORKSPACE}/*.changelog ${new_manifest_d}/unpacked | ||
1723 | 175 | |||
1724 | 176 | # The rest of the operations are for released images only | ||
1725 | 177 | [ ! -e "${rel_base}" ] && | ||
1726 | 178 | echo "No current release, aborting comparison" && | ||
1727 | 179 | exit 0 | ||
1728 | 180 | |||
1729 | 181 | # Tar up the deltas | ||
1730 | 182 | tar -C ${WORKSPACE} -jcvf "${WORKSPACE}/${SUITE}-${SERIAL}.tar.bz2" \ | ||
1731 | 183 | *.changelog \ | ||
1732 | 184 | *.txt \ | ||
1733 | 185 | *.diff || | ||
1734 | 186 | fail "Failed to create tarball" | ||
1735 | 187 | |||
1736 | 188 | # Start the email report work | ||
1737 | 189 | changed_pkgs=$(grep '=>' ${SUITE}-${rel_link}-to-daily.changelog | \ | ||
1738 | 190 | sed -e 's,====,,g' -e 's,^, *,g' | sort -k2) | ||
1739 | 191 | |||
1740 | 192 | # Generate the email template | ||
1741 | 193 | VER=$(${kvm}/ubuntu-adj2version ${SUITE}) | ||
1742 | 194 | |||
1743 | 195 | case ${VER} in | ||
1744 | 196 | *8.04*) VER="${VER} LTS"; | ||
1745 | 197 | CODENAME="Hardy Heron";; | ||
1746 | 198 | *10.04*) VER="${VER} LTS"; | ||
1747 | 199 | CODENAME="Lucid Lynx";; | ||
1748 | 200 | *11.04*) CODENAME="Natty Narwhal";; | ||
1749 | 201 | *11.10*) CODENAME="Oneiric Ocelot";; | ||
1750 | 202 | *12.04*) VER="${VER} LTS"; | ||
1751 | 203 | CODENAME="Precise Pangolin";; | ||
1752 | 204 | *12.10*) CODENAME="Quantal Quetzal";; | ||
1753 | 205 | *13.04*) CODENAME="Raring Ringtail";; | ||
1754 | 206 | *13.10*) CODENAME="Saucy Salamander";; | ||
1755 | 207 | *14.04*) VER="${VER} LTS"; | ||
1756 | 208 | CODENAME="Trusty Tahr";; | ||
1757 | 209 | *14.10*) CODENAME="Utopic Unicorn";; | ||
1758 | 210 | *15.04*) CODENAME="Vivid Vervet";; | ||
1759 | 211 | *15.10*) CODENAME="Wily Werewolf";; | ||
1760 | 212 | esac | ||
1761 | 213 | |||
1762 | 214 | email_name="${WORKSPACE}/${SUITE}-release_announcement.email" | ||
1763 | 215 | cat << EOF > "${email_name}" | ||
1764 | 216 | SUBJECT: Refreshed Cloud Images of ${VER} (${CODENAME}) [${SERIAL}] | ||
1765 | 217 | TO: ec2ubuntu@googlegroups.com; ubuntu-cloud@lists.ubuntu.com; ubuntu-cloud-announce@lists.ubuntu.com | ||
1766 | 218 | |||
1767 | 219 | A new release of the Ubuntu Cloud Images for stable Ubuntu release ${VER} (${CODENAME}) is available at [1]. These new images supersede the existing images [2]. Images are available for download or immediate use on EC2 via published AMI IDs. Users who wish to update their existing installations can do so with: | ||
1768 | 220 | 'sudo apt-get update && sudo apt-get dist-upgrade && sudo reboot'. | ||
1769 | 221 | |||
1770 | 222 | EOF | ||
1771 | 223 | |||
1772 | 224 | if [ "${old_linux_kernel}" != "${new_linux_kernel}" ]; then | ||
1773 | 225 | cat << EOF >> "${email_name}" | ||
1774 | 226 | The Linux kernel was updated from ${old_linux_kernel} [3] to ${new_linux_kernel} [4] | ||
1775 | 227 | |||
1776 | 228 | EOF | ||
1777 | 229 | fi | ||
1778 | 230 | |||
1779 | 231 | cat << EOF >> "${email_name}" | ||
1780 | 232 | The following packages have been updated. Please see the full changelogs | ||
1781 | 233 | for a complete listing of changes: | ||
1782 | 234 | ${changed_pkgs} | ||
1783 | 235 | |||
1784 | 236 | |||
1785 | 237 | The following is a complete changelog for this image. | ||
1786 | 238 | $(cat ${SUITE}-${rel_link}-to-daily.changelog) | ||
1787 | 239 | |||
1788 | 240 | -- | ||
1789 | 241 | [1] http://cloud-images.ubuntu.com/releases/${SUITE}/release-${SERIAL}/ | ||
1790 | 242 | [2] http://cloud-images.ubuntu.com/releases/${SUITE}/${rel_link}/ | ||
1791 | 243 | EOF | ||
1792 | 244 | |||
1793 | 245 | if [ "${old_linux_kernel}" != "${new_linux_kernel}" ]; then | ||
1794 | 246 | cat << EOF >> "${email_name}" | ||
1795 | 247 | [3] http://changelogs.ubuntu.com/changelogs/pool/main/l/linux/linux_${old_linux_kernel}/changelog | ||
1796 | 248 | [4] http://changelogs.ubuntu.com/changelogs/pool/main/l/linux/linux_${new_linux_kernel}/changelog | ||
1797 | 249 | EOF | ||
1798 | 250 | fi | ||
1799 | 251 | |||
1800 | 252 | # Create release notes | ||
1801 | 253 | lnc=$(wc -l ${email_name} | awk '{print$1}') | ||
1802 | 254 | tail -n `expr $lnc - 3` ${email_name} > "${WORKSPACE}/release_notes.txt" | ||
1803 | 255 | cp ${WORKSPACE}/release_notes.txt ${new_manifest_d}/unpacked | ||
1804 | 0 | 256 | ||
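package_differences relies on awk printing the last field of the matching manifest row, so it assumes "name<tab>version" manifest lines and a pattern that matches exactly one row. A worked sketch with invented manifest contents:

    printf 'linux-image-generic\t4.4.0.21\npollinate\t4.25\n' > old.manifest
    printf 'linux-image-generic\t4.4.0.21\npollinate\t4.31\n' > new.manifest
    # Double quotes let the pattern variable interpolate, hence the \$NF:
    pkg="pollinate"
    awk "/${pkg}/ {print\$NF}" old.manifest   # -> 4.25
    awk "/${pkg}/ {print\$NF}" new.manifest   # -> 4.31
    # Differing versions append an old/new stanza to the trigger file.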
1805 | === added file 'jenkins/CloudImages_Juju.sh' | |||
1806 | --- jenkins/CloudImages_Juju.sh 1970-01-01 00:00:00 +0000 | |||
1807 | +++ jenkins/CloudImages_Juju.sh 2018-05-31 04:33:07 +0000 | |||
1808 | @@ -0,0 +1,253 @@ | |||
1809 | 1 | #!/bin/bash | ||
1810 | 2 | |||
1811 | 3 | # Set default umask | ||
1812 | 4 | umask 022 | ||
1813 | 5 | |||
1814 | 6 | # Read in the common files | ||
1815 | 7 | my_name=$(readlink -f ${0}) | ||
1816 | 8 | my_dir=$(dirname ${my_name}) | ||
1817 | 9 | my_pdir=$(dirname ${my_dir}) | ||
1818 | 10 | |||
1819 | 11 | # Source in the common functions | ||
1820 | 12 | source "${my_pdir}/functions/common" | ||
1821 | 13 | source "${my_pdir}/functions/retry" | ||
1822 | 14 | source "${my_pdir}/functions/locker" | ||
1823 | 15 | export HOME=${WORKSPACE} | ||
1824 | 16 | |||
1825 | 17 | # needed for building on Jenkins | ||
1826 | 18 | [ -e "build_properties" ] && source build_properties | ||
1827 | 19 | |||
1828 | 20 | # Copy the target disk images | ||
1829 | 21 | ARCH_TYPE=${ARCH_TYPE:-$ARCH} | ||
1830 | 22 | disk_orig="${SUITE}-server-cloudimg-${ARCH_TYPE}-disk1.img" | ||
1831 | 23 | disk_cp="${disk_orig//$ARCH_TYPE/$ARCH_TYPE-juju-vagrant}" | ||
1832 | 24 | disk_root="${SRV_D:-/srv/ec2-images}/${SUITE}/${SERIAL:-current}" | ||
1833 | 25 | disk_working="${WORKSPACE}/${disk_cp}" | ||
1834 | 26 | final_disk="${WORKSPACE}/box-disk1.vdi" | ||
1835 | 27 | final_location="${OUTPUT_D:-/srv/ec2-images}/vagrant/${SUITE}/${SERIAL}" | ||
1836 | 28 | box_name="${disk_working//.img/.box}" | ||
1837 | 29 | raw_f="${WORKSPACE}/raw_f-$(date +%s).img" | ||
1838 | 30 | build_host_suite=$(lsb_release -c -s) | ||
1839 | 31 | |||
1840 | 32 | jenkins_build() { | ||
1841 | 33 | [ -e "build_properties" ] && | ||
1842 | 34 | source build_properties || | ||
1843 | 35 | fail "Failed to read build_properties. I don't know what I'm doing!" | ||
1844 | 36 | |||
1845 | 37 | # Bail if something isn't right | ||
1846 | 38 | SUITE=${SUITE:?Suite must be defined} | ||
1847 | 39 | SERIAL=${SERIAL:?Serial must be defined} | ||
1848 | 40 | |||
1849 | 41 | cp "${disk_root}/${disk_orig}" "${disk_working}" || | ||
1850 | 42 | fail "Unable to copy ${disk_orig} from ${disk_root}" | ||
1851 | 43 | |||
1852 | 44 | qemu-img resize ${disk_working} 40G | ||
1853 | 45 | |||
1854 | 46 | # Launch KVM to do the work | ||
1855 | 47 | ${my_pdir}/launch_kvm.sh \ | ||
1856 | 48 | --id "${ARCH_TYPE}-${BUILD_ID}" \ | ||
1857 | 49 | --user-data "${my_pdir}/config/cloud-vps.cfg" \ | ||
1858 | 50 | --cloud-config "${my_pdir}/templates/img-juju.tmpl" \ | ||
1859 | 51 | --extra-disk "${disk_working}" \ | ||
1860 | 52 | --disk-gb 1 \ | ||
1861 | 53 | --raw-disk "${raw_f}" \ | ||
1862 | 54 | --raw-size 1 \ | ||
1863 | 55 | --img-url /srv/builder/images/precise-builder-latest.img || | ||
1864 | 56 | fail "KVM instance failed to build image." | ||
1865 | 57 | } | ||
1866 | 58 | |||
1867 | 59 | # Assume that we're building in Jenkins unless otherwise stated | ||
1868 | 60 | # What this allows us to do is to use the standalone builder for testing | ||
1869 | 61 | # and finish running the bits below | ||
1870 | 62 | [ "${LOCAL_BUILD:-0}" -eq 1 ] || jenkins_build | ||
1871 | 63 | |||
1872 | 64 | # Convert to VDI via a raw intermediate. | ||
1873 | 65 | qemu-img convert -O raw ${disk_working} ${disk_working//.img/.raw} | ||
1874 | 66 | |||
1875 | 67 | _vbox_cmd convertfromraw \ | ||
1876 | 68 | --format vdi \ | ||
1877 | 69 | ${disk_working//.img/.raw} ${final_disk} | ||
1878 | 70 | |||
1879 | 71 | # Create the VM | ||
1880 | 72 | vmname="ubuntu-cloudimg-${SUITE}-juju-vagrant-${ARCH_TYPE}" | ||
1881 | 73 | _vbox_cmd modifyhd --compact ${final_disk} | ||
1882 | 74 | |||
1883 | 75 | dist_v="Ubuntu" | ||
1884 | 76 | [ "${ARCH_TYPE}" = "amd64" ] && dist_v="Ubuntu_64" | ||
1885 | 77 | _vbox_cmd createvm \ | ||
1886 | 78 | --name ${vmname} \ | ||
1887 | 79 | --ostype ${dist_v} \ | ||
1888 | 80 | --register | ||
1889 | 81 | |||
1890 | 82 | _vbox_cmd modifyvm ${vmname} \ | ||
1891 | 83 | --memory 2048 \ | ||
1892 | 84 | --boot1 disk \ | ||
1893 | 85 | --boot2 none \ | ||
1894 | 86 | --boot3 none \ | ||
1895 | 87 | --boot4 none \ | ||
1896 | 88 | --vram 12 \ | ||
1897 | 89 | --pae off \ | ||
1898 | 90 | --acpi on \ | ||
1899 | 91 | --ioapic on \ | ||
1900 | 92 | --rtcuseutc on \ | ||
1901 | 93 | --bioslogodisplaytime 0 \ | ||
1902 | 94 | --nic1 nat \ | ||
1903 | 95 | --nictype1 virtio | ||
1904 | 96 | |||
1905 | 97 | if [ "${ARCH_TYPE}" = "i386" ]; then | ||
1906 | 98 | _vbox_cmd modifyvm ${vmname} \ | ||
1907 | 99 | --ioapic off \ | ||
1908 | 100 | --pae on | ||
1909 | 101 | fi | ||
1910 | 102 | |||
1911 | 103 | |||
1912 | 104 | _vbox_cmd modifyvm ${vmname} --natpf1 "guestssh,tcp,,2222,,22" | ||
1913 | 105 | |||
1914 | 106 | storage_cmd=( | ||
1915 | 107 | _vbox_cmd storagectl "${vmname}" | ||
1916 | 108 | --name "SATAController" | ||
1917 | 109 | --add sata | ||
1918 | 110 | --controller IntelAhci | ||
1919 | 111 | --hostiocache on | ||
1920 | 112 | ) | ||
1921 | 113 | |||
1922 | 114 | if [ "$(lsb_release -r -s | sed 's/\.//')" -lt 1404 ]; then | ||
1923 | 115 | storage_cmd+=(--sataportcount 1) | ||
1924 | 116 | else | ||
1925 | 117 | storage_cmd+=(--portcount 1) | ||
1926 | 118 | fi | ||
1927 | 119 | |||
1928 | 120 | ${storage_cmd[@]} | ||
1929 | 121 | |||
1930 | 122 | _vbox_cmd storageattach ${vmname} \ | ||
1931 | 123 | --storagectl "SATAController" \ | ||
1932 | 124 | --port 0 \ | ||
1933 | 125 | --device 0 \ | ||
1934 | 126 | --type hdd \ | ||
1935 | 127 | --medium ${final_disk} | ||
1936 | 128 | |||
1937 | 129 | # Set extra-data | ||
1938 | 130 | _vbox_cmd setextradata ${vmname} installdate ${SERIAL} | ||
1939 | 131 | _vbox_cmd setextradata ${vmname} supported false | ||
1940 | 132 | |||
1941 | 133 | # Set the Guest information to get rid of error message | ||
1942 | 134 | [ -e vagrant_image.pkgs ] && { | ||
1943 | 135 | |||
1944 | 136 | vbox_version="" | ||
1945 | 137 | while read -r line | ||
1946 | 138 | do | ||
1947 | 139 | line=( $(echo ${line}) ) | ||
1948 | 140 | [[ ${line[0]} =~ virtualbox-guest-utils ]] && vbox_version=${line[1]} | ||
1949 | 141 | done < vagrant_image.pkgs | ||
1950 | 142 | debug "Guest Additions version is ${vbox_version}" | ||
1951 | 143 | |||
1952 | 144 | # Set the revision to some arbitrary value | ||
1953 | 145 | _vbox_cmd guestproperty set ${vmname} \ | ||
1954 | 146 | "/VirtualBox/GuestAdd/Revision" '8000' | ||
1955 | 147 | |||
1956 | 148 | # Set the Ubuntu packaged version correctly | ||
1957 | 149 | _vbox_cmd guestproperty set ${vmname} \ | ||
1958 | 150 | "/VirtualBox/GuestAdd/VersionExt" \ | ||
1959 | 151 | "${vbox_version//-dfsg-*/_Ubuntu}" | ||
1960 | 152 | |||
1961 | 153 | # Set the version string appropriately | ||
1962 | 154 | _vbox_cmd guestproperty set ${vmname} \ | ||
1963 | 155 | "/VirtualBox/GuestAdd/Version" \ | ||
1964 | 156 | "${vbox_version//-dfsg-*/}" | ||
1965 | 157 | } | ||
1966 | 158 | |||
1967 | 159 | mkdir ${WORKSPACE}/box | ||
1968 | 160 | _vbox_cmd export ${vmname} --output ${WORKSPACE}/box/box.ovf | ||
1969 | 161 | |||
1970 | 162 | # Create the Vagrant file | ||
1971 | 163 | #macaddr="02:$(openssl rand -hex 5)" | ||
1972 | 164 | macaddr=$(awk '-F"' '/<Adapter slot="0" enabled="true"/ {print$6}' ${WORKSPACE}/box/box.ovf) | ||
1973 | 165 | cat << EOF > ${WORKSPACE}/box/Vagrantfile | ||
1974 | 166 | \$script = <<SCRIPT | ||
1975 | 167 | bzr branch lp:jujuredirector /tmp/jujuredir | ||
1976 | 168 | |||
1977 | 169 | if ! grep precise /etc/lsb-release > /dev/null; then | ||
1978 | 170 | cat << EOM > "/etc/apt/apt.conf.d/90proxy" | ||
1979 | 171 | Acquire::http::Proxy "http://10.0.3.1:8000"; | ||
1980 | 172 | EOM | ||
1981 | 173 | |||
1982 | 174 | for series in precise trusty; do | ||
1983 | 175 | version=\$(grep \$series /usr/share/distro-info/ubuntu.csv | cut -d, -f1 | cut -d' ' -f1) | ||
1984 | 176 | expected_filename=/var/cache/lxc/cloud-\${series}/ubuntu-\${version}-server-cloudimg-${ARCH_TYPE}-root.tar.gz | ||
1985 | 177 | if [ ! -e \$expected_filename ]; then | ||
1986 | 178 | mkdir -p "/var/cache/lxc/cloud-\${series}" | ||
1987 | 179 | curl -o "\$expected_filename" \ | ||
1988 | 180 | http://cloud-images.ubuntu.com/releases/\${series}/release/ubuntu-\${version}-server-cloudimg-${ARCH_TYPE}-root.tar.gz | ||
1989 | 181 | fi | ||
1990 | 182 | done | ||
1991 | 183 | |||
1992 | 184 | # Set up squid in the LXC template | ||
1993 | 185 | for lxc_template in \$(ls /var/cache/lxc/cloud-*/*-root.tar.gz); do | ||
1994 | 186 | gunzip "\$lxc_template" | ||
1995 | 187 | unwrapped_name=\$(dirname "\$lxc_template")/\$(basename "\$lxc_template" .gz) | ||
1996 | 188 | mkdir -p etc/apt/apt.conf.d | ||
1997 | 189 | echo 'Acquire::http::Proxy "http://10.0.3.1:8000";' > etc/apt/apt.conf.d/90proxy | ||
1998 | 190 | tar rf "\$unwrapped_name" etc/apt/apt.conf.d/90proxy | ||
1999 | 191 | gzip "\$unwrapped_name" | ||
2000 | 192 | rm -rf etc | ||
2001 | 193 | done | ||
2002 | 194 | fi | ||
2003 | 195 | |||
2004 | 196 | bash /tmp/jujuredir/setup-juju.sh 6079 | ||
2005 | 197 | echo "export JUJU_REPOSITORY=/charms" >> /home/vagrant/.bashrc | ||
2006 | 198 | SCRIPT | ||
2007 | 199 | |||
2008 | 200 | system 'mkdir', '-p', 'charms' | ||
2009 | 201 | |||
2010 | 202 | Vagrant.configure("2") do |config| | ||
2011 | 203 | # This Vagrantfile is auto-generated by 'vagrant package' to contain | ||
2012 | 204 | # the MAC address of the box. Custom configuration should be placed in | ||
2013 | 205 | # the actual 'Vagrantfile' in this box. | ||
2014 | 206 | |||
2015 | 207 | config.vm.base_mac = "${macaddr}" | ||
2016 | 208 | config.vm.network :forwarded_port, guest: 22, host: 2122, host_ip: "127.0.0.1" | ||
2017 | 209 | config.vm.network :forwarded_port, guest: 80, host: 6080, host_ip: "127.0.0.1" | ||
2018 | 210 | config.vm.network :forwarded_port, guest: 6079, host: 6079, host_ip: "127.0.0.1" | ||
2019 | 211 | config.vm.network "private_network", ip: "172.16.250.15" | ||
2020 | 212 | config.vm.provider "virtualbox" do |vb| | ||
2021 | 213 | vb.customize ["modifyvm", :id, "--natdnshostresolver1", "on"] | ||
2022 | 214 | end | ||
2023 | 215 | config.vm.provision "shell", inline: \$script | ||
2024 | 216 | |||
2025 | 217 | config.vm.synced_folder "charms/", "/charms" | ||
2026 | 218 | end | ||
2027 | 219 | |||
2028 | 220 | # Load include vagrant file if it exists after the auto-generated | ||
2029 | 221 | # so it can override any of the settings | ||
2030 | 222 | include_vagrantfile = File.expand_path("../include/_Vagrantfile", __FILE__) | ||
2031 | 223 | load include_vagrantfile if File.exist?(include_vagrantfile) | ||
2032 | 224 | EOF | ||
2033 | 225 | |||
2034 | 226 | # Now pack it all up.... | ||
2035 | 227 | tar -C ${WORKSPACE}/box -Scvf ${box_name} box.ovf Vagrantfile box-disk1.vmdk || | ||
2036 | 228 | fail "Unable to create box file" | ||
2037 | 229 | |||
2038 | 230 | # Some minor cleanup | ||
2039 | 231 | rm ${disk_working} ${disk_working//.img/.raw} || /bin/true | ||
2040 | 232 | rm -rf ${WORKSPACE}/box *.vdi | ||
2041 | 233 | [ -e "${raw_f}" ] && rm "${raw_f}" | ||
2042 | 234 | |||
2043 | 235 | # Bail here if this is a local build | ||
2044 | 236 | [ "${LOCAL_BUILD:-0}" -eq 1 ] && exit 0 | ||
2045 | 237 | |||
2046 | 238 | # Put the box in place | ||
2047 | 239 | mkdir -p "${final_location}" || | ||
2048 | 240 | fail "Unable to create the vagrant image location" | ||
2049 | 241 | |||
2050 | 242 | cp ${box_name} ${final_location} || | ||
2051 | 243 | fail "Failed to place vagrant image in final home" | ||
2052 | 244 | |||
2053 | 245 | # Now Checksum it all | ||
2054 | 246 | |||
2055 | 247 | # Override and set some home variables | ||
2056 | 248 | export HOME="/srv/builder" | ||
2057 | 249 | export CDIMAGE_BIN="${HOME}/cdimage/bin" | ||
2058 | 250 | PUBLISH_SCRIPTS=${HOME}/ec2-publishing-scripts | ||
2059 | 251 | export CDIMAGE_ROOT="${HOME}/cdimage" | ||
2060 | 252 | export PATH="${PUBLISH_SCRIPTS}:${CDIMAGE_BIN}:${PATH}" | ||
2061 | 253 | checksum-directory ${final_location} | ||
2062 | 0 | 254 | ||
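The base MAC for the Vagrantfile is scraped out of the exported OVF by splitting the Adapter element on double quotes; field 6 is the MACAddress value, assuming the attribute order VBoxManage export emits (which the script depends on). An illustrative line:

    line='<Adapter slot="0" enabled="true" MACAddress="080027AB12CD" type="virtio">'
    echo "${line}" | awk -F'"' '/<Adapter slot="0" enabled="true"/ {print $6}'
    # -> 080027AB12CD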
2063 | === added file 'jenkins/CloudImages_Update_Builder.sh' | |||
2064 | --- jenkins/CloudImages_Update_Builder.sh 1970-01-01 00:00:00 +0000 | |||
2065 | +++ jenkins/CloudImages_Update_Builder.sh 2018-05-31 04:33:07 +0000 | |||
2066 | @@ -0,0 +1,68 @@ | |||
2067 | 1 | #!/bin/bash | ||
2068 | 2 | |||
2069 | 3 | # Set default umask | ||
2070 | 4 | umask 022 | ||
2071 | 5 | |||
2072 | 6 | # Read in the common files | ||
2073 | 7 | source "${kvm}/functions/common" | ||
2074 | 8 | source "${kvm}/functions/retry" | ||
2075 | 9 | |||
2076 | 10 | # Apply the build stuff | ||
2077 | 11 | find . -iname "*build_properties" | xargs -I FILE cp FILE . | ||
2078 | 12 | [ -e "build_properties" ] && | ||
2079 | 13 | source build_properties || | ||
2080 | 14 | fail "Failed to read build_properties. I don't know what I'm doing!" | ||
2081 | 15 | |||
2082 | 16 | [ -e failed ] && rm failed | ||
2083 | 17 | [ -e success ] && rm success | ||
2084 | 18 | |||
2085 | 19 | # Copy the target disk image | ||
2086 | 20 | case ${SUITE} in | ||
2087 | 21 | trusty|xenial) | ||
2088 | 22 | disk_orig="${SUITE}-server-cloudimg-${ARCH}-disk1.img" | ||
2089 | 23 | builder_img=/srv/builder/images/trusty-builder-latest.img | ||
2090 | 24 | ;; | ||
2091 | 25 | zesty) | ||
2092 | 26 | # Zesty needs yakkety or newer due to ext4 tool changes | ||
2093 | 27 | disk_orig="${SUITE}-server-cloudimg-${ARCH}.img" | ||
2094 | 28 | builder_img=/srv/builder/images/zesty-builder-latest.img | ||
2095 | 29 | ;; | ||
2096 | 30 | *) | ||
2097 | 31 | disk_orig="${SUITE}-server-cloudimg-${ARCH}.img" | ||
2098 | 32 | builder_img=/srv/builder/images/artful-builder-latest.img | ||
2099 | 33 | ;; | ||
2100 | 34 | esac | ||
2101 | 35 | |||
2102 | 36 | disk_cp="${disk_orig//cloudimg/cloudimg-builder-$(date +%Y%m%d)}" | ||
2103 | 37 | disk_root="/srv/ec2-images/${SUITE}/${SERIAL:-current}" | ||
2104 | 38 | disk_working="${WORKSPACE}/${disk_cp}" | ||
2105 | 39 | raw_f="${WORKSPACE}/raw_f-$(date +%s).img" | ||
2106 | 40 | |||
2107 | 41 | cp "${disk_root}/${disk_orig}" "${disk_working}" || | ||
2108 | 42 | fail "Unable to copy ${disk_orig} from ${disk_root}" | ||
2109 | 43 | |||
2110 | 44 | qemu-img resize "${disk_working}" 5G || | ||
2111 | 45 | fail "unable to resize disk" | ||
2112 | 46 | |||
2113 | 47 | # Launch KVM to do the work | ||
2114 | 48 | ${kvm}/launch_kvm.sh \ | ||
2115 | 49 | --id "${ARCH}-${BUILD_ID}" \ | ||
2116 | 50 | --user-data "${kvm}/config/cloud-vps.cfg" \ | ||
2117 | 51 | --cloud-config "${kvm}/templates/img-update.tmpl" \ | ||
2118 | 52 | --extra-disk "${disk_working}" \ | ||
2119 | 53 | --disk-gb 5 \ | ||
2120 | 54 | --raw-disk "${raw_f}" \ | ||
2121 | 55 | --raw-size 1 \ | ||
2122 | 56 | --img-url ${builder_img} || | ||
2123 | 57 | fail "KVM instance failed to build image." | ||
2124 | 58 | |||
2125 | 59 | # Remove the results | ||
2126 | 60 | rm "${raw_f}" || /bin/true | ||
2127 | 61 | |||
2128 | 62 | # Compress it down... | ||
2129 | 63 | mv "${disk_working}" "${disk_working}.new" | ||
2130 | 64 | qemu-img convert "${disk_working}.new" -c -O qcow2 "${disk_working}" || | ||
2131 | 65 | fail "Failed to create compressed image" | ||
2132 | 66 | |||
2133 | 67 | rm "${disk_working}.new" | ||
2134 | 68 | |||
2135 | 0 | 69 | ||
2136 | === added file 'jenkins/CloudImages_Vagrant.sh' | |||
2137 | --- jenkins/CloudImages_Vagrant.sh 1970-01-01 00:00:00 +0000 | |||
2138 | +++ jenkins/CloudImages_Vagrant.sh 2018-05-31 04:33:07 +0000 | |||
2139 | @@ -0,0 +1,232 @@ | |||
2140 | 1 | #!/bin/bash | ||
2141 | 2 | |||
2142 | 3 | # Set default umask | ||
2143 | 4 | umask 022 | ||
2144 | 5 | |||
2145 | 6 | # Read in the common files | ||
2146 | 7 | source "${kvm}/functions/common" | ||
2147 | 8 | source "${kvm}/functions/retry" | ||
2148 | 9 | source "${kvm}/functions/locker" | ||
2149 | 10 | export HOME=${WORKSPACE} | ||
2150 | 11 | |||
2151 | 12 | # Apply the build stuff | ||
2152 | 13 | [ -e "build_properties" ] && | ||
2153 | 14 | source build_properties || | ||
2154 | 15 | fail "Failed to read build_properties. I don't know what I'm doing!" | ||
2155 | 16 | |||
2156 | 17 | rm {failed,success} || /bin/true | ||
2157 | 18 | |||
2158 | 19 | # Copy the target disk image | ||
2159 | 20 | ARCH_TYPE=${ARCH_TYPE:-$ARCH} | ||
2160 | 21 | disk_orig="${SUITE}-server-cloudimg-${ARCH_TYPE}-disk1.img" | ||
2161 | 22 | disk_cp="${disk_orig//$ARCH_TYPE/$ARCH_TYPE-vagrant}" | ||
2162 | 23 | disk_root="${SRV_D:-/srv/ec2-images}/${SUITE}/${SERIAL:-current}" | ||
2163 | 24 | disk_working="${WORKSPACE}/${disk_cp}" | ||
2164 | 25 | final_disk="${WORKSPACE}/box-disk1.vdi" | ||
2165 | 26 | final_location="${OUTPUT_D:-/srv/ec2-images}/vagrant/${SUITE}/${SERIAL}" | ||
2166 | 27 | box_name="${disk_working//.img/.box}" | ||
2167 | 28 | raw_f="${WORKSPACE}/raw_f-$(date +%s).img" | ||
2168 | 29 | |||
2169 | 30 | [ -e "${final_location}/${box_name##*/}" -a "${REBUILD}" != "true" ] && exit 0 | ||
2170 | 31 | |||
2171 | 32 | cp "${disk_root}/${disk_orig}" "${disk_working}" || | ||
2172 | 33 | fail "Unable to copy ${disk_orig} from ${disk_root}" | ||
2173 | 34 | |||
2174 | 35 | # Resize it to 4G, but not the full 40G because we want it sparse | ||
2175 | 36 | qemu-img resize ${disk_working} 4G | ||
2176 | 37 | |||
2177 | 38 | # Launch KVM to do the work | ||
2178 | 39 | ${kvm}/launch_kvm.sh \ | ||
2179 | 40 | --id "${ARCH_TYPE}-${BUILD_ID}" \ | ||
2180 | 41 | --user-data "${kvm}/config/cloud-vps.cfg" \ | ||
2181 | 42 | --cloud-config "${kvm}/templates/img-vagrant.tmpl" \ | ||
2182 | 43 | --extra-disk "${disk_working}" \ | ||
2183 | 44 | --disk-gb 1 \ | ||
2184 | 45 | --raw-disk "${raw_f}" \ | ||
2185 | 46 | --raw-size 1 \ | ||
2186 | 47 | --img-url /srv/builder/images/precise-builder-latest.img || | ||
2187 | 48 | fail "KVM instance failed to build image." | ||
2188 | 49 | |||
2189 | 50 | # Convert to VDI via a raw intermediate. | ||
2190 | 51 | qemu-img convert -O raw ${disk_working} ${disk_working//.img/.raw} | ||
2191 | 52 | truncate -s 40G ${disk_working//.img/.raw} | ||
2192 | 53 | |||
2193 | 54 | _vbox_cmd convertfromraw \ | ||
2194 | 55 | --format vdi \ | ||
2195 | 56 | ${disk_working//.img/.raw} ${final_disk} | ||
2196 | 57 | |||
2197 | 58 | # Create the VM | ||
2198 | 59 | vmname="ubuntu-cloudimg-${SUITE}-vagrant-${ARCH_TYPE}" | ||
2199 | 60 | _vbox_cmd modifyhd --compact ${final_disk} | ||
2200 | 61 | |||
2201 | 62 | dist_v="Ubuntu" | ||
2202 | 63 | [ "${ARCH_TYPE}" = "amd64" ] && dist_v="Ubuntu_64" | ||
2203 | 64 | _vbox_cmd createvm \ | ||
2204 | 65 | --name ${vmname} \ | ||
2205 | 66 | --ostype ${dist_v} \ | ||
2206 | 67 | --register | ||
2207 | 68 | |||
2208 | 69 | _vbox_cmd modifyvm ${vmname} \ | ||
2209 | 70 | --memory 512 \ | ||
2210 | 71 | --boot1 disk \ | ||
2211 | 72 | --boot2 none \ | ||
2212 | 73 | --boot3 none \ | ||
2213 | 74 | --boot4 none \ | ||
2214 | 75 | --vram 12 \ | ||
2215 | 76 | --pae off \ | ||
2216 | 77 | --acpi on \ | ||
2217 | 78 | --ioapic on \ | ||
2218 | 79 | --rtcuseutc on | ||
2219 | 80 | # --natnet1 default \ | ||
2220 | 81 | |||
2221 | 82 | if [ "${ARCH_TYPE}" = "i386" ]; then | ||
2222 | 83 | _vbox_cmd modifyvm ${vmname} \ | ||
2223 | 84 | --ioapic off \ | ||
2224 | 85 | --pae on | ||
2225 | 86 | fi | ||
2226 | 87 | |||
2227 | 88 | |||
2228 | 89 | _vbox_cmd modifyvm ${vmname} --natpf1 "guestssh,tcp,,2222,,22" | ||
2229 | 90 | |||
2230 | 91 | _vbox_cmd storagectl "${vmname}" \ | ||
2231 | 92 | --name "SATAController" \ | ||
2232 | 93 | --add sata \ | ||
2233 | 94 | --controller IntelAhci \ | ||
2234 | 95 | --sataportcount 1 \ | ||
2235 | 96 | --hostiocache on | ||
2236 | 97 | |||
2237 | 98 | _vbox_cmd storageattach ${vmname} \ | ||
2238 | 99 | --storagectl "SATAController" \ | ||
2239 | 100 | --port 0 \ | ||
2240 | 101 | --device 0 \ | ||
2241 | 102 | --type hdd \ | ||
2242 | 103 | --medium ${final_disk} | ||
2243 | 104 | |||
2244 | 105 | # Set extra-data | ||
2245 | 106 | _vbox_cmd setextradata ${vmname} installdate ${SERIAL} | ||
2246 | 107 | _vbox_cmd setextradata ${vmname} supported false | ||
2247 | 108 | |||
2248 | 109 | # Set the Guest information to get rid of error message | ||
2249 | 110 | [ -e vagrant_image.pkgs ] && { | ||
2250 | 111 | |||
2251 | 112 | vbox_version="" | ||
2252 | 113 | while read -r line | ||
2253 | 114 | do | ||
2254 | 115 | line=( $(echo ${line}) ) | ||
2255 | 116 | [[ ${line[0]} =~ virtualbox-guest-utils ]] && vbox_version=${line[1]} | ||
2256 | 117 | done < vagrant_image.pkgs | ||
2257 | 118 | debug "Guest Additions version is ${vbox_version}" | ||
2258 | 119 | |||
2259 | 120 | # Set the revision to some arbitrary value | ||
2260 | 121 | _vbox_cmd guestproperty set ${vmname} \ | ||
2261 | 122 | "/VirtualBox/GuestAdd/Revision" '8000' | ||
2262 | 123 | |||
2263 | 124 | # Set the Ubuntu packaged version correctly | ||
2264 | 125 | _vbox_cmd guestproperty set ${vmname} \ | ||
2265 | 126 | "/VirtualBox/GuestAdd/VersionExt" \ | ||
2266 | 127 | "${vbox_version//-dfsg-*/_Ubuntu}" | ||
2267 | 128 | |||
2268 | 129 | # Set the version string appropriately | ||
2269 | 130 | _vbox_cmd guestproperty set ${vmname} \ | ||
2270 | 131 | "/VirtualBox/GuestAdd/Version" \ | ||
2271 | 132 | "${vbox_version//-dfsg-*/}" | ||
2272 | 133 | } | ||
2273 | 134 | |||
2274 | 135 | mkdir box | ||
2275 | 136 | _vbox_cmd export ${vmname} --output box/box.ovf | ||
2276 | 137 | |||
2277 | 138 | # Create the Vagrant file | ||
2278 | 139 | #macaddr="02:$(openssl rand -hex 5)" | ||
2279 | 140 | macaddr=$(awk '-F"' '/<Adapter slot="0" enabled="true"/ {print$6}' ${WORKSPACE}/box/box.ovf) | ||
2280 | 141 | cat << EOF > ${WORKSPACE}/box/Vagrantfile | ||
2281 | 142 | Vagrant::Config.run do |config| | ||
2282 | 143 | # This Vagrantfile is auto-generated by 'vagrant package' to contain | ||
2283 | 144 | # the MAC address of the box. Custom configuration should be placed in | ||
2284 | 145 | # the actual 'Vagrantfile' in this box. | ||
2285 | 146 | config.vm.base_mac = "${macaddr}" | ||
2286 | 147 | end | ||
2287 | 148 | |||
2288 | 149 | # Load include vagrant file if it exists after the auto-generated | ||
2289 | 150 | # so it can override any of the settings | ||
2290 | 151 | include_vagrantfile = File.expand_path("../include/_Vagrantfile", __FILE__) | ||
2291 | 152 | load include_vagrantfile if File.exist?(include_vagrantfile) | ||
2292 | 153 | EOF | ||
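# For reference, the awk above pulls the MAC out of the exported OVF; the line
# it matches looks roughly like this (attribute values are illustrative):
#   <Adapter slot="0" enabled="true" MACAddress="080027ABCDEF" ...>
# With -F'"' the sixth quote-delimited field is the MACAddress value.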
2293 | 154 | |||
2294 | 155 | # Now pack it all up.... | ||
2295 | 156 | tar -C ${WORKSPACE}/box -Scvf ${box_name} box.ovf Vagrantfile box-disk1.vmdk || | ||
2296 | 157 | fail "Unable to create box file" | ||
2297 | 158 | |||
2298 | 159 | # Some minor cleanup | ||
2299 | 160 | rm ${disk_working} ${disk_working//.img/.raw} || /bin/true | ||
2300 | 161 | rm -rf ${WORKSPACE}/box *.vdi | ||
2301 | 162 | rm "${raw_f}" || /bin/true | ||
2302 | 163 | |||
2303 | 164 | # Put the box in place | ||
2304 | 165 | mkdir -p "${final_location}" || | ||
2305 | 166 | fail "Unable to create the vagrant image location" | ||
2306 | 167 | |||
2307 | 168 | cp ${box_name} ${final_location} || | ||
2308 | 169 | fail "Failed to place vagrant image in final home" | ||
2309 | 170 | |||
2310 | 171 | # box_d is where the boxes are stored | ||
2311 | 172 | box_d="${OUTPUT_D:-/srv/ec2-images}/vagrant/${SUITE}" | ||
2312 | 173 | |||
2313 | 174 | # Only proceed if the required boxes exist | ||
2314 | 175 | boxes=($(find ${box_d}/${SERIAL} -regextype posix-extended -regex ".*(amd64|i386)-vagrant-disk1.box")) | ||
2315 | 176 | if [ "${#boxes[@]}" -ne 2 ]; then | ||
2316 | 177 | echo "Not updating current; required boxes are missing" | ||
2317 | 178 | [[ ! "${boxes[@]}" =~ "amd64" ]] && echo "Missing build for amd64" | ||
2318 | 179 | [[ ! "${boxes[@]}" =~ "i386" ]] && echo "Missing build for i386" | ||
2319 | 180 | |||
2320 | 181 | # We don't want to fail here. | ||
2321 | 182 | exit 0 | ||
2322 | 183 | else | ||
2323 | 184 | echo "Updating current links; all builds are present" | ||
2324 | 185 | fi | ||
2325 | 186 | |||
2326 | 187 | # Update the link to current | ||
2327 | 188 | current_l="${box_d}/current" | ||
2328 | 189 | [ -e "${current_l}" ] && rm "${current_l}" | ||
2329 | 190 | ( cd "${box_d}" && ln -s "${SERIAL}" current ) | ||
2330 | 191 | |||
2331 | 192 | # Cleanup old builds | ||
2332 | 193 | builds=($(find ${box_d} -mindepth 1 -maxdepth 1 -type d | sort -r)) | ||
2333 | 194 | build_count="${#builds[@]}" | ||
2334 | 195 | |||
2335 | 196 | echo "------------------------" | ||
2336 | 197 | echo "Clean-up for prior builds" | ||
2337 | 198 | echo "Found ${build_count} builds for consideration" | ||
2338 | 199 | |||
2339 | 200 | for b in ${builds[@]} | ||
2340 | 201 | do | ||
2341 | 202 | echo " - found build ${b}" | ||
2342 | 203 | done | ||
2343 | 204 | echo "" | ||
2344 | 205 | |||
2345 | 206 | [ "${build_count}" -gt 4 ] && { | ||
2346 | 207 | for item in $(seq 4 ${build_count}) | ||
2347 | 208 | do | ||
2348 | 209 | [ -e "${builds[$item]}" ] && { | ||
2349 | 210 | echo "Removing old build ${builds[$item]}" | ||
2350 | 211 | rm -rf ${builds[$item]} || | ||
2351 | 212 | echo "Failed to remove build ${builds[$item]}" | ||
2352 | 213 | } | ||
2353 | 214 | done | ||
2354 | 215 | |||
2355 | 216 | for item in $(seq 0 3) | ||
2356 | 217 | do | ||
2357 | 218 | [ -e "${builds[$item]}" ] && | ||
2358 | 219 | echo "Preserving build ${builds[$item]}" | ||
2359 | 220 | done | ||
2360 | 221 | |||
2361 | 222 | } || echo "No builds marked for removal" | ||
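# The keep-the-N-newest pruning above recurs in the MAAS builders below; the
# same logic as a generic sketch (a hypothetical helper, not part of this tree):
#   prune_builds() {   # usage: prune_builds <dir> <keep-count>
#       local dir=$1 keep=$2 b
#       local builds=( $(find "$dir" -mindepth 1 -maxdepth 1 -type d | sort -r) )
#       for b in "${builds[@]:$keep}"; do
#           echo "Removing old build $b"
#           rm -rf "$b"
#       done
#   }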
2362 | 223 | |||
2363 | 224 | |||
2364 | 225 | # Override and set some home variables | ||
2365 | 226 | export HOME="/srv/builder" | ||
2366 | 227 | export CDIMAGE_BIN="${HOME}/cdimage/bin" | ||
2367 | 228 | PUBLISH_SCRIPTS=${HOME}/ec2-publishing-scripts | ||
2368 | 229 | export CDIMAGE_ROOT="${HOME}/cdimage" | ||
2369 | 230 | export PATH="${PUBLISH_SCRIPTS}:${CDIMAGE_BIN}:${PATH}" | ||
2370 | 231 | checksum-directory ${final_location} | ||
2371 | 232 | |||
2372 | 0 | 233 | ||
2373 | === added file 'jenkins/MAAS_Builder.sh' | |||
2374 | --- jenkins/MAAS_Builder.sh 1970-01-01 00:00:00 +0000 | |||
2375 | +++ jenkins/MAAS_Builder.sh 2018-05-31 04:33:07 +0000 | |||
2376 | @@ -0,0 +1,171 @@ | |||
2377 | 1 | #!/bin/bash | ||
2378 | 2 | set -x | ||
2379 | 3 | fail() { [ $# -eq 0 ] || echo "$@"; exit 1; } | ||
2380 | 4 | |||
2381 | 5 | find . -iname "*build_properties" -exec cp {} . \; || | ||
2382 | 6 | echo "Unable to copy build properties; this might be a v2 build" | ||
2383 | 7 | |||
2384 | 8 | [ -z "${SERIAL}" -a -z "${SUITE}" -a -e "build_properties" ] && { | ||
2385 | 9 | source build_properties || | ||
2386 | 10 | fail "Failed to read build_properties. I don't know what I'm doing!"; | ||
2387 | 11 | } | ||
2388 | 12 | |||
2389 | 13 | # Read in the common functions | ||
2390 | 14 | my_dir="$( cd "$( dirname "$0" )" && pwd )" | ||
2391 | 15 | base_dir=$(dirname ${my_dir}) | ||
2392 | 16 | export PATH="${base_dir}:${my_dir}:${PATH}" | ||
2393 | 17 | source "${base_dir}/functions/locker" | ||
2394 | 18 | source "${base_dir}/functions/common" | ||
2395 | 19 | source "${base_dir}/functions/retry" | ||
2396 | 20 | source ${my_dir}/build_lib.sh | ||
2397 | 21 | select_build_config | ||
2398 | 22 | |||
2399 | 23 | export WORKSPACE="${WORKSPACE:-$WORKSPACE_R}" | ||
2400 | 24 | out_f="${WORKSPACE}/maas-${SUITE}-${STREAM}-config.sh" | ||
2401 | 25 | raw_f="${WORKSPACE}/${SUITE}-output.raw" | ||
2402 | 26 | query_t="${WORKSPACE}/cloud-images-query.tar" | ||
2403 | 27 | base_name="${SUITE}-server-cloudimg" | ||
2404 | 28 | rel_base_name="ubuntu-$(ubuntu-adj2version ${SUITE})-${STREAM//-/}-server-cloudimg" | ||
2405 | 29 | |||
2406 | 30 | export maas_branch="${MAAS_BRANCH:-http://bazaar.launchpad.net/~smoser/maas/maas.ubuntu.com.images-ephemeral}" | ||
2407 | 31 | |||
2408 | 32 | case "${STREAM}" in | ||
2409 | 33 | release) build_f="/srv/ec2-images/releases/${SUITE}/release-${SERIAL}"; | ||
2410 | 34 | base_name=${rel_base_name}; | ||
2411 | 35 | out_d="/srv/maas-images/ephemeral/releases/${SUITE}/release-${SERIAL}" | ||
2412 | 36 | ;; | ||
2413 | 37 | daily) build_f="/srv/ec2-images/${SUITE}/${SERIAL}"; | ||
2414 | 38 | out_d="/srv/maas-images/ephemeral/daily/${SUITE}/${SERIAL}"; | ||
2415 | 39 | ;; | ||
2416 | 40 | alpha*|beta*) build_f="/srv/ec2-images/releases/${SUITE}/${STREAM}"; | ||
2417 | 41 | base_name=${rel_base_name}; | ||
2418 | 42 | out_d="/srv/maas-images/ephemeral/release/${SUITE}/${STREAM}"; | ||
2419 | 43 | ;; | ||
2420 | 44 | *) fail "Unknown stream ${STREAM}.";; | ||
2421 | 45 | esac | ||
2422 | 46 | |||
2423 | 47 | final_out_d="${out_d}" | ||
2424 | 48 | |||
2425 | 49 | [ -e "${final_out_d}" -a "${REBUILD:-false}" = "false" ] && | ||
2426 | 50 | fail "Build already exists. Rebuild is set to false. Failing this build" | ||
2427 | 51 | |||
2428 | 52 | # Tar up query for use in-image | ||
2429 | 53 | [ ! -e "${query_t}" ] && { | ||
2430 | 54 | tar cvf ${query_t} \ | ||
2431 | 55 | ${QUERY_D:-/srv/ec2-images/query} \ | ||
2432 | 56 | ${build_f} \ | ||
2433 | 57 | --exclude "*img" --exclude "*azure*" --exclude "*html" \ | ||
2434 | 58 | --exclude "*armel*" --exclude "*root.tar.gz" \ | ||
2435 | 59 | --exclude "*floppy" || | ||
2436 | 60 | fail "Failed to pack up build elements for MAAS builder"; } | ||
2437 | 61 | |||
2438 | 62 | # Generate the template file | ||
2439 | 63 | ci_cfg="${kvm_builder}/config/cloud-maas.cfg" | ||
2440 | 64 | template="${kvm_builder}/templates/img-maas.tmpl" | ||
2441 | 65 | [ "${IS_MAAS_V2:-0}" -eq 1 ] && { | ||
2442 | 66 | template="${kvm_builder}/templates/img-maasv2.tmpl" | ||
2443 | 67 | ci_cfg="${kvm_builder}/config/cloud-maasv2.cfg" | ||
2444 | 68 | } | ||
2445 | 69 | |||
2446 | 70 | maas_config.sh \ | ||
2447 | 71 | --distro "${SUITE}" \ | ||
2448 | 72 | --stream "${STREAM}" \ | ||
2449 | 73 | --template "${template}" \ | ||
2450 | 74 | --base-name "${base_name}" \ | ||
2451 | 75 | --local "${build_f}" \ | ||
2452 | 76 | --serial "${SERIAL}" \ | ||
2453 | 77 | --out "${out_f}" \ | ||
2454 | 78 | --out_d "${out_d}" || | ||
2455 | 79 | fail "Failed to configure KVM instance for building" | ||
2456 | 80 | |||
2457 | 81 | [ -n "${cloud_init_cfg}" ] && ci_cfg="${kvm_builder}/config/${cloud_init_cfg}" | ||
2458 | 82 | |||
2459 | 83 | # Launch KVM to do the work | ||
2460 | 84 | launch_kvm.sh \ | ||
2461 | 85 | --id ${BUILD_ID} \ | ||
2462 | 86 | --user-data "${out_f}" \ | ||
2463 | 87 | --cloud-config "${ci_cfg}" \ | ||
2464 | 88 | --extra-disk "${query_t}" \ | ||
2465 | 89 | --disk-gb 50 \ | ||
2466 | 90 | --raw-disk "${raw_f}" \ | ||
2467 | 91 | --raw-size 20 \ | ||
2468 | 92 | --img-url ${BUILDER_CLOUD_IMAGE} || | ||
2469 | 93 | fail "KVM instance failed to build image." | ||
2470 | 94 | |||
2471 | 95 | # Extract the result set | ||
2472 | 96 | tar -xvvf "${raw_f}" || | ||
2473 | 97 | fail "Failed to extract information from instance" | ||
2474 | 98 | |||
2475 | 99 | # Useful for off-host builds, like ppc64el. Just make sure that any off-host | ||
2476 | 100 | # builds are done before the on-host builds. | ||
2477 | 101 | [ "${BUILD_ONLY:-0}" -eq 1 ] && exit 0 | ||
2478 | 102 | |||
2479 | 103 | # Extracted results should land here | ||
2480 | 104 | [ ! -e "${WORKSPACE}/${out_d}" ] && fail "Expected result directory is missing: ${WORKSPACE}/${out_d}" | ||
2481 | 105 | |||
2482 | 106 | # Checksum the results (and sign 'em) | ||
2483 | 107 | export CDIMAGE_ROOT="/srv/builder/vmbuilder/cdimage" | ||
2484 | 108 | /srv/builder/vmbuilder/bin/cronrun checksum-directory "${WORKSPACE}/${out_d}" || | ||
2485 | 109 | fail "Failed to create checksums and GPG signatures" | ||
2486 | 110 | |||
2487 | 111 | set -x | ||
2488 | 112 | # Put the bits where they go... | ||
2489 | 113 | mkdir -p "${final_out_d}" && | ||
2490 | 114 | cp -a ${WORKSPACE}${out_d}/* "${final_out_d}" && | ||
2491 | 115 | echo "Copied bits to final location ${final_out_d}" || | ||
2492 | 116 | fail "Unable to copy build bits to final location" | ||
2493 | 117 | |||
2494 | 118 | # Produce build-info | ||
2495 | 119 | cat << EOF > "${final_out_d}/build-info.txt" | ||
2496 | 120 | serial=${SERIAL} | ||
2497 | 121 | orig_prefix=${SUITE}-ephemeral-maas | ||
2498 | 122 | suite=${SUITE} | ||
2499 | 123 | build_name=ephemeral | ||
2500 | 124 | EOF | ||
2501 | 125 | |||
2502 | 126 | # Clean up the dailies | ||
2503 | 127 | if [ "${STREAM}" = "daily" ]; then | ||
2504 | 128 | base_d="${out_d%/*}" | ||
2505 | 129 | builds=( $(find ${base_d} -maxdepth 1 -mindepth 1 -type d | sort -r) ) | ||
2506 | 130 | build_count=${#builds[@]} | ||
2507 | 131 | |||
2508 | 132 | # Delete all but the six most recent builds | ||
2509 | 133 | if [ ${build_count} -gt 6 ]; then | ||
2510 | 134 | for item in $(seq 6 ${build_count}) | ||
2511 | 135 | do | ||
2512 | 136 | [ -e "${builds[$item]}" ] && { | ||
2513 | 137 | rm -rf ${builds[$item]}; | ||
2514 | 138 | echo "Build ${SUITE} ${builds[$item]##*/} has been deleted"; | ||
2515 | 139 | } | ||
2516 | 140 | done | ||
2517 | 141 | |||
2518 | 142 | for item in $(seq 0 5) | ||
2519 | 143 | do | ||
2520 | 144 | echo "Preserving ${SUITE} ${builds[$item]##*/}" | ||
2521 | 145 | done | ||
2522 | 146 | else | ||
2523 | 147 | echo "No builds marked for deletion" | ||
2524 | 148 | fi | ||
2525 | 149 | fi | ||
2526 | 150 | |||
2527 | 151 | # Generate the Query2 tree | ||
2528 | 152 | src_tree="${WORKSPACE}/maas_src" | ||
2529 | 153 | bzr branch "${maas_branch}" "${src_tree}" | ||
2530 | 154 | ${src_tree}/tree2query \ | ||
2531 | 155 | --commit-msg "Build ${BUILD_ID}" \ | ||
2532 | 156 | --namespace maas \ | ||
2533 | 157 | /srv/maas-images | ||
2534 | 158 | |||
2535 | 159 | # Update current | ||
2536 | 160 | if [ "${STREAM}" = "daily" ]; then | ||
2537 | 161 | cur_d="/srv/maas-images/ephemeral/daily/${SUITE}/current" | ||
2538 | 162 | [ -e "${cur_d}" ] && rm "${cur_d}" | ||
2539 | 163 | ln -s "${final_out_d}" "${cur_d}" || | ||
2540 | 164 | echo "Failed to update ${cur_d}" | ||
2541 | 165 | fi | ||
2542 | 166 | |||
2543 | 167 | |||
2544 | 168 | # Remove the results | ||
2545 | 169 | rm "${raw_f}" || | ||
2546 | 170 | fail "Failed to clean up files!" | ||
2547 | 171 | |||
2548 | 0 | 172 | ||
2549 | === added file 'jenkins/MAAS_Promotion.sh' | |||
2550 | --- jenkins/MAAS_Promotion.sh 1970-01-01 00:00:00 +0000 | |||
2551 | +++ jenkins/MAAS_Promotion.sh 2018-05-31 04:33:07 +0000 | |||
2552 | @@ -0,0 +1,31 @@ | |||
2553 | 1 | #!/bin/bash | ||
2554 | 2 | |||
2555 | 3 | fail() { [ $# -eq 0 ] || echo "$@"; exit 1; } | ||
2556 | 4 | |||
2557 | 5 | if [ "${TAG}" == "release" ]; then | ||
2558 | 6 | TAG="release-${SERIAL}" | ||
2559 | 7 | fi | ||
2560 | 8 | |||
2561 | 9 | src_d="/srv/maas-images/ephemeral/daily/${SUITE}/${SERIAL}" | ||
2562 | 10 | final_out_d="/srv/maas-images/ephemeral/releases/${SUITE}/${TAG}" | ||
2563 | 11 | |||
2564 | 12 | [ -e ${src_d} ] || | ||
2565 | 13 | fail "Source ${src_d} does not exist" | ||
2566 | 14 | |||
2567 | 15 | [ -e ${final_out_d} ] && | ||
2568 | 16 | fail "Serial has already been promoted" | ||
2569 | 17 | |||
2570 | 18 | mkdir -p "${final_out_d}" && | ||
2571 | 19 | rsync -a ${src_d}/ ${final_out_d} && | ||
2572 | 20 | echo "Copied bits to final location ${final_out_d}" || | ||
2573 | 21 | fail "Unable to copy build bits to final location" | ||
2574 | 22 | |||
2575 | 23 | # Generate the Query2 tree | ||
2576 | 24 | export maas_branch="${MAAS_BRANCH:-http://bazaar.launchpad.net/~smoser/maas/maas.ubuntu.com.images-ephemeral}" | ||
2577 | 25 | src_tree="${WORKSPACE}/maas_src" | ||
2578 | 26 | bzr branch "${maas_branch}" "${src_tree}" | ||
2579 | 27 | ${src_tree}/tree2query \ | ||
2580 | 28 | --commit-msg "Build ${BUILD_ID}" \ | ||
2581 | 29 | --namespace maas \ | ||
2582 | 30 | /srv/maas-images | ||
2583 | 31 | |||
2584 | 0 | 32 | ||
2585 | === added file 'jenkins/MAASv2_Builder.sh' | |||
2586 | --- jenkins/MAASv2_Builder.sh 1970-01-01 00:00:00 +0000 | |||
2587 | +++ jenkins/MAASv2_Builder.sh 2018-05-31 04:33:07 +0000 | |||
2588 | @@ -0,0 +1,191 @@ | |||
2589 | 1 | #!/bin/bash | ||
2590 | 2 | set -x | ||
2591 | 3 | fail() { [ $# -eq 0 ] || echo "$@"; exit 1; } | ||
2592 | 4 | |||
2593 | 5 | find . -iname "*build_properties" -exec cp {} . \; || | ||
2594 | 6 | echo "Unable to copy build properties; this might be a v2 build" | ||
2595 | 7 | |||
2596 | 8 | [ -z "${SERIAL}" -a -z "${SUITE}" -a -e "build_properties" ] && { | ||
2597 | 9 | source build_properties || | ||
2598 | 10 | fail "Failed to read build_properties. I don't know what I'm doing!"; | ||
2599 | 11 | } | ||
2600 | 12 | |||
2601 | 13 | STREAM="${STREAM:-daily}" | ||
2602 | 14 | # Read in the common functions | ||
2603 | 15 | my_dir="$( cd "$( dirname "$0" )" && pwd )" | ||
2604 | 16 | base_dir=$(dirname ${my_dir}) | ||
2605 | 17 | export PATH="${base_dir}:${my_dir}:${PATH}" | ||
2606 | 18 | source "${base_dir}/functions/locker" | ||
2607 | 19 | source "${base_dir}/functions/common" | ||
2608 | 20 | source "${base_dir}/functions/retry" | ||
2609 | 21 | source ${my_dir}/build_lib.sh | ||
2610 | 22 | select_build_config | ||
2611 | 23 | |||
2612 | 24 | export WORKSPACE="${WORKSPACE:-$WORKSPACE_R}" | ||
2613 | 25 | out_f="${WORKSPACE}/maas-${SUITE}-${STREAM}-config.sh" | ||
2614 | 26 | raw_f="${WORKSPACE}/${SUITE}-output.raw" | ||
2615 | 27 | query_t="${WORKSPACE}/cloud-images-query.tar" | ||
2616 | 28 | base_name="${SUITE}-server-cloudimg" | ||
2617 | 29 | rel_base_name="ubuntu-${VERSION:-$(ubuntu-adj2version ${SUITE})}-${STREAM//-/}-server-cloudimg" | ||
2618 | 30 | |||
2619 | 31 | export maas_branch_v1="http://bazaar.launchpad.net/~smoser/maas/maas.ubuntu.com.images-ephemeral" | ||
2620 | 32 | export maas_branch="${MAAS_BRANCH:-$maas_branch_v1}" | ||
2621 | 33 | |||
2622 | 34 | case "${STREAM}" in | ||
2623 | 35 | release) build_f="/srv/ec2-images/releases/${SUITE}/release-${SERIAL}"; | ||
2624 | 36 | base_name=${rel_base_name}; | ||
2625 | 37 | out_d="/srv/maas-images/ephemeral/releases/${SUITE}/release-${SERIAL}" | ||
2626 | 38 | ;; | ||
2627 | 39 | daily) build_f="/srv/ec2-images/${SUITE}/${SERIAL}"; | ||
2628 | 40 | out_d="/srv/maas-images/ephemeral/daily/${SUITE}/${SERIAL}"; | ||
2629 | 41 | ;; | ||
2630 | 42 | alpha*|beta*) build_f="/srv/ec2-images/releases/${SUITE}/${STREAM}"; | ||
2631 | 43 | base_name=${rel_base_name}; | ||
2632 | 44 | out_d="/srv/maas-images/ephemeral/releases/${SUITE}/${STREAM}"; | ||
2633 | 45 | ;; | ||
2634 | 46 | *) fail "Unknown stream ${STREAM}.";; | ||
2635 | 47 | esac | ||
2636 | 48 | |||
2637 | 49 | final_out_d="${out_d}" | ||
2638 | 50 | |||
2639 | 51 | [ -e "${final_out_d}" -a "${REBUILD:-false}" = "false" ] && | ||
2640 | 52 | fail "Build already exists. Rebuild is set to false. Failing this build" | ||
2641 | 53 | |||
2642 | 54 | if [ ! -e "${query_t}" ]; then | ||
2643 | 55 | |||
2644 | 56 | if [ "${MAASv2:-0}" -eq 1 ]; then | ||
2645 | 57 | # MAAS v2 doesn't need this information | ||
2646 | 58 | out_d="/tmp/maas_final" | ||
2647 | 59 | touch ${WORKSPACE}/maasv2 | ||
2648 | 60 | tar cvf ${query_t} ${WORKSPACE}/maasv2 | ||
2649 | 61 | |||
2650 | 62 | if [ -e "${WORKSPACE}/tmp/maas-final" ]; then | ||
2651 | 63 | tar cvf ${query_t} maas-final || | ||
2652 | 64 | fail "Failed to create tarball of MAAS images" | ||
2653 | 65 | fi | ||
2654 | 66 | |||
2655 | 67 | else | ||
2656 | 68 | # MAAS v1 needs the query data | ||
2657 | 69 | tar cvf ${query_t} \ | ||
2658 | 70 | ${QUERY_D:-/srv/ec2-images/query} \ | ||
2659 | 71 | ${build_f} \ | ||
2660 | 72 | --exclude "*img" --exclude "*azure*" --exclude "*html" \ | ||
2661 | 73 | --exclude "*armel*" --exclude "*root.tar.gz" \ | ||
2662 | 74 | --exclude "*floppy" || | ||
2663 | 75 | fail "Failed to pack up build elements for MAAS builder"; | ||
2664 | 76 | fi | ||
2665 | 77 | fi | ||
2666 | 78 | |||
2667 | 79 | # Select the right template | ||
2668 | 80 | tmpl="${kvm_builder}/templates/img-maas.tmpl" | ||
2669 | 81 | [ "${MAASv2:-0}" -eq 1 ] && tmpl="${tmpl//maas.tmpl/maasv2.tmpl}" | ||
2670 | 82 | |||
2671 | 83 | # Construct the right template | ||
2672 | 84 | maas_config.sh \ | ||
2673 | 85 | --distro "${SUITE}" \ | ||
2674 | 86 | --stream "${STREAM}" \ | ||
2675 | 87 | --template "${tmpl}" \ | ||
2676 | 88 | --base-name "${base_name}" \ | ||
2677 | 89 | --local "${build_f}" \ | ||
2678 | 90 | --serial "${SERIAL}" \ | ||
2679 | 91 | --out "${out_f}" \ | ||
2680 | 92 | --maas-branch "${maas_branch}" \ | ||
2681 | 93 | --out_d "${out_d}" || | ||
2682 | 94 | fail "Failed to configure KVM instance for building" | ||
2683 | 95 | set +x | ||
2684 | 96 | |||
2685 | 97 | ci_cfg="${kvm_builder}/config/cloud-maasv2.cfg" | ||
2686 | 98 | [ "$(uname -m)" == "ppc64" ] && ci_cfg="${kvm_builder}/config/cloud-trusty-pp64el.cfg" | ||
2687 | 99 | |||
2688 | 100 | # Launch KVM to do the work | ||
2689 | 101 | launch_kvm.sh \ | ||
2690 | 102 | --id ${BUILD_ID} \ | ||
2691 | 103 | --user-data "${out_f}" \ | ||
2692 | 104 | --cloud-config "${ci_cfg}" \ | ||
2693 | 105 | --extra-disk "${query_t}" \ | ||
2694 | 106 | --disk-gb 50 \ | ||
2695 | 107 | --raw-disk "${raw_f}" \ | ||
2696 | 108 | --raw-size 20 \ | ||
2697 | 109 | --mem 1G \ | ||
2698 | 110 | --img-url ${BUILDER_CLOUD_IMAGE} || | ||
2699 | 111 | fail "KVM instance failed to build image." | ||
2700 | 112 | |||
2701 | 113 | # Extract the result set | ||
2702 | 114 | tar -xvvf "${raw_f}" || | ||
2703 | 115 | fail "Failed to extract information from instance" | ||
2704 | 116 | |||
2705 | 117 | # Useful for off-host builds, like ppc64el. Just make sure that any off-host | ||
2706 | 118 | # builds are done before the on-host builds. | ||
2707 | 119 | |||
2708 | 120 | [ "${BUILD_ONLY:-0}" -eq 1 ] && exit 0 | ||
2709 | 121 | [ "${MAASv2:-0}" -eq 1 ] && exit 0 | ||
2710 | 122 | |||
2711 | 123 | # Extracted results should land here | ||
2712 | 124 | [ ! -e "${WORKSPACE}/${out_d}" ] && fail "Expected result directory is missing: ${WORKSPACE}/${out_d}" | ||
2713 | 125 | |||
2714 | 126 | # Checksum the results (and sign 'em) | ||
2715 | 127 | export CDIMAGE_ROOT="/srv/builder/vmbuilder/cdimage" | ||
2716 | 128 | /srv/builder/vmbuilder/bin/cronrun checksum-directory "${WORKSPACE}/${out_d}" || | ||
2717 | 129 | fail "Failed to create checksums and GPG signatures" | ||
2718 | 130 | |||
2719 | 131 | set -x | ||
2720 | 132 | # Put the bits where they go... | ||
2721 | 133 | mkdir -p "${final_out_d}" && | ||
2722 | 134 | cp -a ${WORKSPACE}${out_d}/* "${final_out_d}" && | ||
2723 | 135 | echo "Copied bits to final location ${final_out_d}" || | ||
2724 | 136 | fail "Unable to copy build bits to final location" | ||
2725 | 137 | |||
2726 | 138 | # Produce build-info | ||
2727 | 139 | cat << EOF > "${final_out_d}/build-info.txt" | ||
2728 | 140 | serial=${SERIAL} | ||
2729 | 141 | orig_prefix=${SUITE}-ephemeral-maas | ||
2730 | 142 | suite=${SUITE} | ||
2731 | 143 | build_name=ephemeral | ||
2732 | 144 | EOF | ||
2733 | 145 | |||
2734 | 146 | # Clean up the dailies | ||
2735 | 147 | if [ "${STREAM}" = "daily" ]; then | ||
2736 | 148 | base_d="${out_d%/*}" | ||
2737 | 149 | builds=( $(find ${base_d} -maxdepth 1 -mindepth 1 -type d | sort -r) ) | ||
2738 | 150 | build_count=${#builds[@]} | ||
2739 | 151 | |||
2740 | 152 | # Delete all but the six most recent builds | ||
2741 | 153 | if [ ${build_count} -gt 6 ]; then | ||
2742 | 154 | for item in $(seq 6 ${build_count}) | ||
2743 | 155 | do | ||
2744 | 156 | [ -e "${builds[$item]}" ] && { | ||
2745 | 157 | rm -rf ${builds[$item]}; | ||
2746 | 158 | echo "Build ${SUITE} ${builds[$item]##*/} has been deleted"; | ||
2747 | 159 | } | ||
2748 | 160 | done | ||
2749 | 161 | |||
2750 | 162 | for item in $(seq 0 5) | ||
2751 | 163 | do | ||
2752 | 164 | echo "Preserving ${SUITE} ${builds[$item]##*/}" | ||
2753 | 165 | done | ||
2754 | 166 | else | ||
2755 | 167 | echo "No builds marked for deletion" | ||
2756 | 168 | fi | ||
2757 | 169 | fi | ||
2758 | 170 | |||
2759 | 171 | # Generate the Query2 tree | ||
2760 | 172 | src_tree="${WORKSPACE}/maas_src" | ||
2761 | 173 | bzr branch "${maas_branch_v1}" "${src_tree}" | ||
2762 | 174 | ${src_tree}/tree2query \ | ||
2763 | 175 | --commit-msg "Build ${BUILD_ID}" \ | ||
2764 | 176 | --namespace maas \ | ||
2765 | 177 | /srv/maas-images | ||
2766 | 178 | |||
2767 | 179 | # Update current | ||
2768 | 180 | if [ "${STREAM}" = "daily" ]; then | ||
2769 | 181 | cur_d="/srv/maas-images/ephemeral/daily/${SUITE}/current" | ||
2770 | 182 | [ -e "${cur_d}" ] && rm "${cur_d}" | ||
2771 | 183 | ln -s "${final_out_d}" "${cur_d}" || | ||
2772 | 184 | echo "Failed to update ${cur_d}" | ||
2773 | 185 | fi | ||
2774 | 186 | |||
2775 | 187 | |||
2776 | 188 | # Remove the results | ||
2777 | 189 | rm "${raw_f}" || | ||
2778 | 190 | fail "Failed to clean up files!" | ||
2779 | 191 | |||
2780 | 0 | 192 | ||
2781 | === added file 'jenkins/MAASv2_Cleaner.sh' | |||
2782 | --- jenkins/MAASv2_Cleaner.sh 1970-01-01 00:00:00 +0000 | |||
2783 | +++ jenkins/MAASv2_Cleaner.sh 2018-05-31 04:33:07 +0000 | |||
2784 | @@ -0,0 +1,55 @@ | |||
2785 | 1 | #!/bin/bash | ||
2786 | 2 | # | ||
2787 | 3 | # Clean up MAAS v2/v3 builds/streams | ||
2788 | 4 | # | ||
2789 | 5 | my_dir="$(dirname $0)" | ||
2790 | 6 | my_p_dir="$(dirname $my_dir)" | ||
2791 | 7 | source ${my_p_dir}/functions/common | ||
2792 | 8 | source ${my_p_dir}/functions/bzr_check.sh | ||
2793 | 9 | |||
2794 | 10 | # Number of builds to publish in the stream | ||
2795 | 11 | MAX_BUILDS=${MAX_BUILDS:-3} | ||
2796 | 12 | # Age threshold for reaping files not referenced in the stream data | ||
2797 | 13 | REAP_AGE=${REAP_AGE:-2d} | ||
2798 | 14 | |||
2799 | 15 | WORKSPACE=${WORKSPACE:-$PWD} | ||
2800 | 16 | OUTDIR=${JENKINS_HOME:?}/.config/MAASv2_Cleaner/ | ||
2801 | 17 | DAILY_ROOT=/srv/maas-images/ephemeral-v2/daily/ | ||
2802 | 18 | RELEASE_ROOT=/srv/maas-images/ephemeral-v2/releases/ | ||
2803 | 19 | INDEX_PATH=streams/v1/index.json | ||
2804 | 20 | |||
2805 | 21 | # Local checkouts | ||
2806 | 22 | sstreams=${WORKSPACE}/sstreams | ||
2807 | 23 | maasv2=${WORKSPACE}/maasv2 | ||
2808 | 24 | check_branch ${BZR_SIMPLESTREAMS:-lp:simplestreams} ${sstreams} | ||
2809 | 25 | check_branch ${BZR_MAASv2:-lp:maas-images} ${maasv2} | ||
2810 | 26 | |||
2811 | 27 | for METADATA_ROOT in /srv/maas-images/ephemeral-v2/daily/ \ | ||
2812 | 28 | /srv/maas-images/ephemeral-v3/daily/; do | ||
2813 | 29 | case $METADATA_ROOT in | ||
2814 | 30 | *v2*) | ||
2815 | 31 | orphan_json="${OUTDIR}/daily.json" | ||
2816 | 32 | ;; | ||
2817 | 33 | *v3*) | ||
2818 | 34 | orphan_json="${OUTDIR}/daily-v3.json" | ||
2819 | 35 | ;; | ||
2820 | 36 | *) | ||
2821 | 37 | echo "Unexpected METADATA_ROOT" | ||
2822 | 38 | exit 1 | ||
2823 | 39 | ;; | ||
2824 | 40 | esac | ||
2825 | 41 | run PYTHONPATH=${sstreams}:${maasv2} \ | ||
2826 | 42 | ${maasv2}/bin/meph2-util clean-md \ | ||
2827 | 43 | ${MAX_BUILDS} ${METADATA_ROOT}/${INDEX_PATH} | ||
2828 | 44 | |||
2829 | 45 | run PYTHONPATH=${sstreams}:${maasv2} \ | ||
2830 | 46 | ${maasv2}/bin/meph2-util find-orphans \ | ||
2831 | 47 | "${orphan_json}" \ | ||
2832 | 48 | ${METADATA_ROOT} ${METADATA_ROOT}/${INDEX_PATH} | ||
2833 | 49 | |||
2834 | 50 | run PYTHONPATH=${sstreams}:${maasv2} \ | ||
2835 | 51 | ${maasv2}/bin/meph2-util reap-orphans \ | ||
2836 | 52 | --older ${REAP_AGE} \ | ||
2837 | 53 | "${orphan_json}" \ | ||
2838 | 54 | ${METADATA_ROOT} | ||
2839 | 55 | done | ||
2840 | 0 | 56 | ||
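A typical manual run of the cleaner might look like the following (a sketch;
the JENKINS_HOME path and retention values are illustrative, and MAX_BUILDS
and REAP_AGE are the overrides documented in the script):

    JENKINS_HOME=/var/lib/jenkins MAX_BUILDS=5 REAP_AGE=7d \
        WORKSPACE=$(mktemp -d) bash jenkins/MAASv2_Cleaner.sh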
2841 | === added file 'jenkins/MAASv3_Builder.sh' | |||
2842 | --- jenkins/MAASv3_Builder.sh 1970-01-01 00:00:00 +0000 | |||
2843 | +++ jenkins/MAASv3_Builder.sh 2018-05-31 04:33:07 +0000 | |||
2844 | @@ -0,0 +1,67 @@ | |||
2845 | 1 | #!/bin/bash -x | ||
2846 | 2 | |||
2847 | 3 | fail() { [ $# -eq 0 ] || echo "$@"; exit 1; } | ||
2848 | 4 | |||
2849 | 5 | [ -z "${SERIAL}" -a -z "${SUITE}" -a -e "build_properties" ] && { | ||
2850 | 6 | source build_properties || | ||
2851 | 7 | fail "Failed to read build_properties."; | ||
2852 | 8 | } | ||
2853 | 9 | |||
2854 | 10 | # Read in the common functions | ||
2855 | 11 | my_dir="$( cd "$( dirname "$0" )" && pwd )" | ||
2856 | 12 | base_dir=$(dirname ${my_dir}) | ||
2857 | 13 | export PATH="${base_dir}:${my_dir}:${PATH}" | ||
2858 | 14 | source "${base_dir}/functions/locker" | ||
2859 | 15 | source "${base_dir}/functions/common" | ||
2860 | 16 | source "${base_dir}/functions/retry" | ||
2861 | 17 | source ${my_dir}/build_lib.sh | ||
2862 | 18 | select_build_config | ||
2863 | 19 | |||
2864 | 20 | case "${STREAM:?}" in | ||
2865 | 21 | daily) build_f="/srv/ec2-images/${SUITE}/${SERIAL}"; | ||
2866 | 22 | base_name="${SUITE}-server-cloudimg" | ||
2867 | 23 | ;; | ||
2868 | 24 | *) fail "Unknown/unsupported stream ${STREAM}.";; | ||
2869 | 25 | esac | ||
2870 | 26 | |||
2871 | 27 | export WORKSPACE="${WORKSPACE:-$WORKSPACE_R}" | ||
2872 | 28 | out_f="${WORKSPACE:?}/maas-${SUITE}-${STREAM}-config.sh" | ||
2873 | 29 | raw_f="${WORKSPACE}/${SUITE}-output.raw" | ||
query_t="${WORKSPACE}/cloud-images-query.tar" | ||
2874 | 30 | |||
2875 | 31 | export maas_branch="${MAAS_BRANCH:?}" | ||
2876 | 32 | |||
2877 | 33 | touch ${WORKSPACE}/maasv3 | ||
2878 | 34 | tar cvf ${query_t} ${WORKSPACE}/maasv3 | ||
2879 | 35 | |||
2880 | 36 | # Construct the right template | ||
2881 | 37 | maas_config.sh \ | ||
2882 | 38 | --distro "${SUITE}" \ | ||
2883 | 39 | --stream "${STREAM}" \ | ||
2884 | 40 | --template "${kvm_builder}/templates/img-maasv3.tmpl" \ | ||
2885 | 41 | --base-name "${base_name}" \ | ||
2886 | 42 | --local "${build_f}" \ | ||
2887 | 43 | --serial "${SERIAL}" \ | ||
2888 | 44 | --out "${out_f}" \ | ||
2889 | 45 | --maas-branch "${maas_branch}" \ | ||
2890 | 46 | --out_d "/tmp/maas_final" || | ||
2891 | 47 | fail "Failed to configure KVM instance for building" | ||
2892 | 48 | |||
2893 | 49 | ci_cfg="${kvm_builder}/config/cloud-maasv3.cfg" | ||
2894 | 50 | [ "$(uname -m)" == "ppc64" ] && ci_cfg="${kvm_builder}/config/cloud-trusty-pp64el.cfg" | ||
2895 | 51 | |||
2896 | 52 | # Launch KVM to do the work | ||
2897 | 53 | launch_kvm.sh \ | ||
2898 | 54 | --id ${BUILD_ID} \ | ||
2899 | 55 | --user-data "${out_f}" \ | ||
2900 | 56 | --cloud-config "${ci_cfg}" \ | ||
2901 | 57 | --extra-disk "${query_t}" \ | ||
2902 | 58 | --disk-gb 50 \ | ||
2903 | 59 | --raw-disk "${raw_f}" \ | ||
2904 | 60 | --raw-size 20 \ | ||
2905 | 61 | --mem 1G \ | ||
2906 | 62 | --img-url ${BUILDER_CLOUD_IMAGE} || | ||
2907 | 63 | fail "KVM instance failed to build image." | ||
2908 | 64 | |||
2909 | 65 | # Extract the result set | ||
2910 | 66 | tar -xvvf "${raw_f}" || | ||
2911 | 67 | fail "Failed to extract information from instance" | ||
2912 | 0 | 68 | ||
2913 | === added file 'jenkins/Promote_Daily.sh' | |||
2914 | --- jenkins/Promote_Daily.sh 1970-01-01 00:00:00 +0000 | |||
2915 | +++ jenkins/Promote_Daily.sh 2018-05-31 04:33:07 +0000 | |||
2916 | @@ -0,0 +1,55 @@ | |||
2917 | 1 | #!/bin/bash | ||
2918 | 2 | echo "---------------------------------------------------" | ||
2919 | 3 | echo "Instructed to Promote Daily job:" | ||
2920 | 4 | echo " Suite: ${SUITE}" | ||
2921 | 5 | echo " Serial: ${SERIAL}" | ||
2922 | 6 | echo " Milestone: ${MILESTONE_LABEL}" | ||
2923 | 7 | echo " Stream: ${BTYPE}" | ||
2924 | 8 | echo " Public: ${MAKE_PUBLIC}" | ||
2925 | 9 | echo " PrePublish: ${PREPUBLISH}" | ||
2926 | 10 | echo "" | ||
2927 | 11 | echo "---------------------------------------------------" | ||
2928 | 12 | |||
2929 | 13 | cat << EOF > "${WORKSPACE}/build_properties" | ||
2930 | 14 | SUITE=${SUITE} | ||
2931 | 15 | SERIAL=${SERIAL} | ||
2932 | 16 | MILESTONE=${MILESTONE_LABEL} | ||
2933 | 17 | STREAM=${BTYPE} | ||
2934 | 18 | PUBLIC=${MAKE_PUBLIC} | ||
2935 | 19 | PREPUBLISH=${PREPUBLISH} | ||
2936 | 20 | EOF | ||
2937 | 21 | |||
2938 | 22 | export HOME="/srv/builder/vmbuilder" | ||
2939 | 23 | |||
2940 | 24 | cmd=( | ||
2941 | 25 | '/srv/builder/vmbuilder/bin/cronrun' | ||
2942 | 26 | 'promote-daily' | ||
2943 | 27 | '--verbose' | ||
2944 | 28 | '--allow-existing' ) | ||
2945 | 29 | |||
2946 | 30 | if [ "${PREPUBLISH}" == "true" ]; then | ||
2947 | 31 | echo "Pre-publishing rules, will not make public" | ||
2948 | 32 | else | ||
2949 | 33 | [ "${MAKE_PUBLIC}" == "true" ] && cmd+=('--make-public') | ||
2950 | 34 | fi | ||
2951 | 35 | |||
2952 | 36 | case ${BTYPE} in | ||
2953 | 37 | *server*hwe*) pub_path="/srv/ec2-images/server/${SUITE}/${SERIAL}/${BTYPE//server-/}";; | ||
2954 | 38 | *) pub_path="/srv/ec2-images/${BTYPE}/${SUITE}/${SERIAL}" | ||
2955 | 39 | ;; | ||
2956 | 40 | esac | ||
2957 | 41 | |||
2958 | 42 | if [ "${REPUBLISH}" == "true" ]; then | ||
2959 | 43 | cmd+=('--republish') | ||
2960 | 44 | if [ "${MILESTONE_LABEL}" == "release" ]; then | ||
2961 | 45 | pub_path="/srv/ec2-images/releases/${SUITE}/release-${SERIAL}" | ||
2962 | 46 | else | ||
2963 | 47 | pub_path="/srv/ec2-images/releases/${SUITE}/${MILESTONE_LABEL}" | ||
2964 | 48 | fi | ||
2965 | 49 | [[ "${BTYPE}" =~ server-hwe ]] && pub_path="${pub_path}/${BTYPE//server-/}" | ||
2966 | 50 | fi | ||
2967 | 51 | |||
2968 | 52 | cmd+=("${MILESTONE_LABEL}" ${pub_path}) | ||
2969 | 53 | |||
2970 | 54 | echo "Executing command: ${cmd[@]}" | ||
2971 | 55 | exec "${cmd[@]}" | ||
2972 | 0 | 56 | ||
2973 | === added file 'jenkins/Promote_MAAS_Daily.sh' | |||
2974 | --- jenkins/Promote_MAAS_Daily.sh 1970-01-01 00:00:00 +0000 | |||
2975 | +++ jenkins/Promote_MAAS_Daily.sh 2018-05-31 04:33:07 +0000 | |||
2976 | @@ -0,0 +1,48 @@ | |||
2977 | 1 | #!/bin/bash | ||
2978 | 2 | |||
2979 | 3 | export maas_branch="${MAAS_BRANCH:-http://bazaar.launchpad.net/~smoser/maas/maas.ubuntu.com.images-ephemeral}" | ||
2980 | 4 | |||
2981 | 5 | fail() { [ $# -eq 0 ] || echo "$@"; exit 1; } | ||
2982 | 6 | |||
2983 | 7 | [ -z "${SERIAL}" ] && fail "Serial must be defined" | ||
2984 | 8 | [ -z "${SUITE}" ] && fail "Suite must be defined" | ||
2985 | 9 | [ -z "${MILESTONE}" ] && fail "Milestone must be defined" | ||
2986 | 10 | |||
2987 | 11 | cp_d="/srv/maas-images/ephemeral/daily/${SUITE}/${SERIAL}" | ||
2988 | 12 | finald="/srv/maas-images/ephemeral/releases/${SUITE}" | ||
2989 | 13 | |||
2990 | 14 | case "${MILESTONE}" in | ||
2991 | 15 | release) final_d="${finald}/release-${SERIAL}";; | ||
2992 | 16 | alpha|beta) final_d="${finald}/${MILESTONE}";; | ||
2993 | 17 | esac | ||
2994 | 18 | |||
2995 | 19 | # Sanity check | ||
2996 | 20 | [ ! -e "${cp_d}" ] && fail "Serial ${SERIAL} for ${SUITE} does not exist" | ||
2997 | 21 | [ -e "${final_d}" ] && fail "Already released ${SERIAL} for ${SUITE} as ${MILESTONE}" | ||
2998 | 22 | |||
2999 | 23 | # Make the home directory | ||
3000 | 24 | mkdir -p "${final_d}" || | ||
3001 | 25 | fail "Unable to create final destination" | ||
3002 | 26 | |||
3003 | 27 | # Put the files in final destination | ||
3004 | 28 | cp -au ${cp_d}/* "${final_d}" || | ||
3005 | 29 | fail "Failed to copy source files for promotion" | ||
3006 | 30 | |||
3007 | 31 | # Generate the Query2 tree | ||
3008 | 32 | src_tree="${WORKSPACE}/maas_src" | ||
3009 | 33 | bzr branch "${maas_branch}" "${src_tree}" | ||
3010 | 34 | ${src_tree}/tree2query \ | ||
3011 | 35 | --commit-msg "Build ${BUILD_ID}" \ | ||
3012 | 36 | --namespace maas \ | ||
3013 | 37 | /srv/maas-images | ||
3014 | 38 | |||
3015 | 39 | # Update the "release" link | ||
3016 | 40 | if [ "${MILESTONE}" = "release" ]; then | ||
3017 | 41 | cur_d="/srv/maas-images/ephemeral/releases/${SUITE}/release" | ||
3018 | 42 | [ -e "${cur_d}" ] && rm "${cur_d}" | ||
3019 | 43 | ln -s "${final_d}" "${cur_d}" || | ||
3020 | 44 | echo "Failed to update ${cur_d}" | ||
3021 | 45 | fi | ||
3022 | 46 | |||
3023 | 47 | # Sync the stuff | ||
3024 | 48 | KEY=maas /srv/builder/vmbuilder/bin/trigger-sync | ||
3025 | 0 | 49 | ||
3026 | === added file 'jenkins/Publish_EC2.sh' | |||
3027 | --- jenkins/Publish_EC2.sh 1970-01-01 00:00:00 +0000 | |||
3028 | +++ jenkins/Publish_EC2.sh 2018-05-31 04:33:07 +0000 | |||
3029 | @@ -0,0 +1,64 @@ | |||
3030 | 1 | #!/bin/bash -x | ||
3031 | 2 | |||
3032 | 3 | # Add in the retry stub | ||
3033 | 4 | source "${kvm}/functions/retry" | ||
3034 | 5 | source "${kvm}/functions/common" | ||
3035 | 6 | |||
3036 | 7 | # Exit if trigger job does not want this published | ||
3037 | 8 | [ "${PUBLISH_IMAGE}" -eq 0 ] && exit 0 | ||
3038 | 9 | |||
3039 | 10 | # Set the build directories | ||
3040 | 11 | WORK_D="/srv/ec2-images/${BUILD_TYPE}/${SUITE}/${SERIAL}" | ||
3041 | 12 | [ "${TEST_BUILD}" -eq 1 ] && WORK_D="/srv/ec2-images/test_builds/${BUILD_TYPE}/${SUITE}/${SERIAL}" | ||
3042 | 13 | [ "${SANDBOX_BUILD}" -eq 1 ] && WORK_D="/srv/ec2-images/sandbox/${BUILD_TYPE}/${SUITE}/${SERIAL}" | ||
3043 | 14 | |||
3044 | 15 | # Handle the special case of HWE builds; otherwise they would be published | ||
3045 | 16 | # under the plain server names and collide with the regular server builds. | ||
3046 | 17 | [[ "${HWE_SUFFIX}" =~ hwe ]] && | ||
3047 | 18 | WORK_D="${WORK_D}/${HWE_SUFFIX}" && | ||
3048 | 19 | BUILD_TYPE="${BUILD_TYPE}-${HWE_SUFFIX}" | ||
3049 | 20 | |||
3050 | 21 | |||
3051 | 22 | echo "Using ${WORK_D} as the directory" | ||
3052 | 23 | [ -e "${WORK_D}" ] || { echo "Working directory does not exist!"; exit 1; } | ||
3053 | 24 | |||
3054 | 25 | ec2_pub="${PWD}/ec2-publishing-scripts" | ||
3055 | 26 | |||
3056 | 27 | # Check out the scripts needed | ||
3057 | 28 | [ -e "${ec2_pub}" ] && rm -rf "${ec2_pub}" | ||
3058 | 29 | bzr branch "${EC2_PUB_SCRIPTS}" "${ec2_pub}" | ||
3059 | 30 | |||
3060 | 31 | # Add some elements to the path | ||
3061 | 32 | VMBUILDER_PATH="${VMBUILDER_PATH:-/srv/builder/vmbuilder}" | ||
3062 | 33 | VMBUILDER_BIN="${VMBUILDER_PATH}/bin" | ||
3063 | 34 | XC2_PATH="${VMBUILDER_PATH}/ec2-daily/xc2" | ||
3064 | 35 | export PUBLISH_SCRIPTS="${PUBLISH_SCRIPTS:-$VMBUILDER_PATH/ec2-publishing-scripts}" | ||
3065 | 36 | export PATH="${VMBUILDER_BIN}:${VMBUILDER_PATH}:${XC2_PATH}:${PATH}" | ||
3066 | 37 | export HOME="/srv/builder/vmbuilder" | ||
3067 | 38 | export CDIMAGE_ROOT="${CDIMAGE_ROOT:-/srv/builder/cdimage}" | ||
3068 | 39 | export EC2_PUB_LOC="${ec2_pub}" | ||
3069 | 40 | |||
3070 | 41 | ec2publish() { | ||
3071 | 42 | # Run the publisher job | ||
3072 | 43 | ${kvm}/ec2_publisher.sh \ | ||
3073 | 44 | ${SUITE} \ | ||
3074 | 45 | ${SERIAL} \ | ||
3075 | 46 | ${BUILD_TYPE} \ | ||
3076 | 47 | ${WORK_D} \ | ||
3077 | 48 | ${TEST_BUILD} \ | ||
3078 | 49 | ${SANDBOX_BUILD} \ | ||
3079 | 50 | ${ALLOW_EXISTING} | ||
3080 | 51 | } | ||
3081 | 52 | |||
3082 | 53 | # Retry the publishing up to 6 times, 120 seconds apart | ||
3083 | 54 | retry 6 120 ec2publish || | ||
3084 | 55 | fail "Failed six attempts to publish EC2 images!" | ||
3085 | 56 | |||
3086 | 57 | # Add the new daily to the tracker | ||
3087 | 58 | #exec_tracker=${ADD_TO_TRACKER:-0} | ||
3088 | 59 | #[ "${exec_tracker}" -eq 1 ] && { | ||
3089 | 60 | # ${kvm}/tracker.sh daily ${SUITE} ${SERIAL} && | ||
3090 | 61 | # exit $? || fail "Unable to execute tracker!" | ||
3091 | 62 | # } | ||
3092 | 63 | # | ||
3093 | 64 | #exit 0 | ||
3094 | 0 | 65 | ||
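The retry helper sourced from functions/retry is not shown in this file; a
minimal sketch consistent with the "retry <attempts> <delay> <command>" call
above (an assumption about the interface, not the actual implementation):

    retry() {
        local attempts=$1 delay=$2 try
        shift 2
        for ((try = 1; try <= attempts; try++)); do
            "$@" && return 0               # success: stop retrying
            [ "$try" -lt "$attempts" ] && sleep "$delay"
        done
        return 1                           # all attempts failed
    }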
3095 | === added file 'jenkins/Publish_Results_to_Tracker.sh' | |||
3096 | --- jenkins/Publish_Results_to_Tracker.sh 1970-01-01 00:00:00 +0000 | |||
3097 | +++ jenkins/Publish_Results_to_Tracker.sh 2018-05-31 04:33:07 +0000 | |||
3098 | @@ -0,0 +1,34 @@ | |||
3099 | 1 | #!/bin/bash | ||
3100 | 2 | |||
3101 | 3 | # Environmental variables: | ||
3102 | 4 | # HOST: the Jenkins host URL to poll from | ||
3103 | 5 | # SUITE: Ubuntu codename | ||
3104 | 6 | # MILESTONE: i.e. Alpha 2 | ||
3105 | 7 | # SERIAL: What is the build serial, i.e 20130213 | ||
3106 | 8 | # OUT: File to execute | ||
3107 | 9 | |||
3108 | 10 | set -x | ||
3109 | 11 | |||
3110 | 12 | # Setup the QA tracker code | ||
3111 | 13 | bzr branch http://bazaar.launchpad.net/~jibel/+junk/qatracker | ||
3112 | 14 | cd qatracker | ||
3113 | 15 | sed -i "s/iso.qa.ubuntu.com/cloud.qa.ubuntu.com/g" tracker_update_result | ||
3114 | 16 | |||
3115 | 17 | bzr branch http://bazaar.launchpad.net/~ubuntu-qa-website-devel/ubuntu-qa-website/python-qatracker | ||
3116 | 18 | ln -s python-qatracker/qatracker.py . | ||
3117 | 19 | export PATH="${PATH}:${WORKSPACE}/qatracker" | ||
3118 | 20 | |||
3119 | 21 | # Get the actual working script | ||
3120 | 22 | ${scripts}/tests/tracker.py \ | ||
3121 | 23 | --host ${HOST} \ | ||
3122 | 24 | --suite ${SUITE} \ | ||
3123 | 25 | --test ${TEST} \ | ||
3124 | 26 | --milestone "${MILESTONE}" \ | ||
3125 | 27 | --serial ${SERIAL} \ | ||
3126 | 28 | --out "${WORKSPACE}/script.sh" | ||
3127 | 29 | |||
3128 | 30 | # Execute the script | ||
3129 | 31 | env API_USER="${API_USER}" \ | ||
3130 | 32 | API_KEY="${API_KEY}" \ | ||
3131 | 33 | bash ${WORKSPACE}/script.sh 2>&1 | tee publish.log | ||
3132 | 34 | exit ${PIPESTATUS[0]} | ||
3133 | 0 | 35 | ||
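Given the environment contract documented at the top of this script, a manual
run might look like this (all values illustrative):

    env HOST=https://jenkins.example.com SUITE=trusty TEST=server \
        MILESTONE="Alpha 2" SERIAL=20130213 WORKSPACE=$PWD \
        API_USER=qa-bot API_KEY=s3cret \
        bash jenkins/Publish_Results_to_Tracker.sh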
3134 | === added file 'jenkins/README.txt' | |||
3135 | --- jenkins/README.txt 1970-01-01 00:00:00 +0000 | |||
3136 | +++ jenkins/README.txt 2018-05-31 04:33:07 +0000 | |||
3137 | @@ -0,0 +1,1 @@ | |||
3138 | 1 | This directory contains the jobs that Jenkins executes. Most of the jobs just set up environment components and then call another script, usually one directory below. | ||
3139 | 0 | 2 | ||
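As the README describes, a job body is typically a thin wrapper; a
hypothetical example of the pattern (names and values illustrative):

    #!/bin/bash
    # Export the environment the worker script expects, then delegate.
    export SUITE=trusty SERIAL=$(date +%Y%m%d) STREAM=daily
    exec "${WORKSPACE}/jenkins/MAAS_Builder.sh"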
3140 | === added file 'jenkins/Test_Azure.sh' | |||
3141 | --- jenkins/Test_Azure.sh 1970-01-01 00:00:00 +0000 | |||
3142 | +++ jenkins/Test_Azure.sh 2018-05-31 04:33:07 +0000 | |||
3143 | @@ -0,0 +1,17 @@ | |||
3144 | 1 | #!/bin/bash | ||
3145 | 2 | fail() { [ $# -eq 0 ] || echo "$@"; exit 1; } | ||
3146 | 3 | |||
3147 | 4 | umask 022 | ||
3148 | 5 | set -x | ||
3149 | 6 | source watch_properties || fail "Failed to read watch properties" | ||
3150 | 7 | |||
3151 | 8 | echo "-------------------" | ||
3152 | 9 | echo "Image for testing:" | ||
3153 | 10 | cat watch_properties | ||
3154 | 11 | echo "-------------------" | ||
3155 | 12 | |||
3156 | 13 | |||
3157 | 14 | my_dir="$( cd "$( dirname "$0" )" && pwd )" | ||
3158 | 15 | base_dir=$(dirname ${my_dir}) | ||
3159 | 16 | |||
3160 | 17 | ${my_dir}/tests/azure.sh ${1} | ||
3161 | 0 | 18 | ||
3162 | === added file 'jenkins/build_lib.sh' | |||
3163 | --- jenkins/build_lib.sh 1970-01-01 00:00:00 +0000 | |||
3164 | +++ jenkins/build_lib.sh 2018-05-31 04:33:07 +0000 | |||
3165 | @@ -0,0 +1,33 @@ | |||
3166 | 1 | #!/bin/bash | ||
3167 | 2 | |||
3168 | 3 | # set default umask | ||
3169 | 4 | umask 022 | ||
3170 | 5 | |||
3171 | 6 | # Read in the common functions | ||
3172 | 7 | my_dir="$( cd "$( dirname "$0" )" && pwd )" | ||
3173 | 8 | base_dir=$(dirname ${my_dir}) | ||
3174 | 9 | export PATH="${base_dir}:${my_dir}:${PATH}" | ||
3175 | 10 | source "${base_dir}/functions/locker" | ||
3176 | 11 | source "${base_dir}/functions/common" | ||
3177 | 12 | source "${base_dir}/functions/retry" | ||
3178 | 13 | |||
3179 | 14 | dist_ge() { [[ "$1" > "$2" || "$1" == "$2" ]]; } | ||
3180 | 15 | |||
3181 | 16 | [ -z "${DISTRO}" -a -n "${SUITE}" ] && DISTRO="${SUITE}" | ||
3182 | 17 | |||
3183 | 18 | select_build_config() { | ||
3184 | 19 | |||
3185 | 20 | [ -z "${BUILDER_CLOUD_IMAGE}" ] && { | ||
3186 | 21 | # Use the latest 14.04 LTS image to do the build. | ||
3187 | 22 | BUILDER_CLOUD_IMAGE="http://cloud-images.ubuntu.com/releases/trusty/release/ubuntu-14.04-server-cloudimg-amd64-uefi1.img" | ||
3188 | 23 | export cloud_init_cfg="cloud-trusty.cfg" | ||
3189 | 24 | } | ||
3190 | 25 | |||
3191 | 26 | # For ppc64el, we use ppc64el images | ||
3192 | 27 | [ "${ARCH_TYPE}" == "ppc64el" ] && { | ||
3193 | 28 | export cloud_init_cfg="cloud-trusty-pp64el.cfg" | ||
3194 | 29 | BUILDER_CLOUD_IMAGE="${BUILDER_CLOUD_IMAGE//amd64/ppc64el}" | ||
3195 | 30 | export BUILDER_CLOUD_IMAGE="${BUILDER_CLOUD_IMAGE//uefi1/disk1}" | ||
3196 | 31 | } | ||
3197 | 32 | echo "Using ${BUILDER_CLOUD_IMAGE} to do the build" | ||
3198 | 33 | } | ||
3199 | 0 | 34 | ||
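select_build_config is consumed by the builder jobs after sourcing this file;
a sketch of the intended usage:

    source jenkins/build_lib.sh
    ARCH_TYPE=ppc64el select_build_config
    # BUILDER_CLOUD_IMAGE now points at the ppc64el disk1 image and
    # cloud_init_cfg at cloud-trusty-pp64el.cfg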
3200 | === added file 'jenkins/env-test.sh' | |||
3201 | --- jenkins/env-test.sh 1970-01-01 00:00:00 +0000 | |||
3202 | +++ jenkins/env-test.sh 2018-05-31 04:33:07 +0000 | |||
3203 | @@ -0,0 +1,2 @@ | |||
3204 | 1 | #!/bin/bash | ||
3205 | 2 | env | ||
3206 | 0 | 3 | ||
3207 | === added file 'launch_kvm.sh' | |||
3208 | --- launch_kvm.sh 1970-01-01 00:00:00 +0000 | |||
3209 | +++ launch_kvm.sh 2018-05-31 04:33:07 +0000 | |||
3210 | @@ -0,0 +1,222 @@ | |||
3211 | 1 | #!/bin/bash | ||
3212 | 2 | usage() { | ||
3213 | 3 | cat << EOF | ||
3214 | 4 | This program is a KVM wrapper for performing tasks inside a KVM Environment. | ||
3215 | 5 | Its primary goal is to help developers do dangerous tasks that their IS/IT | ||
3216 | 6 | department won't allow them to do on an existing machine. | ||
3217 | 7 | --id <ARG> The ID you want to use to identify the KVM image | ||
3218 | 8 | this is used to name the image | ||
3219 | 9 | --disk-gb <ARG> Disk size, in GB, to add to the image | ||
3220 | 10 | (defaults to adding 15GB) | ||
3221 | 11 | --smp <ARG> KVM SMP options, defaults to: | ||
3222 | 12 | ${smp_opt} | ||
3223 | 13 | --mem <ARG> How much RAM do you want to use | ||
3224 | 14 | --user-data <ARG> Cloud-Init user-data file | ||
3225 | 15 | --cloud-config <ARG> Cloud-Init cloud-config file | ||
3226 | 16 | --img-url <ARG> Location of the image file. | ||
3227 | 17 | --raw-disk <ARG> Name of RAW disk to create and attach. | ||
3228 | 18 | --raw-size <ARG> Size of RAW disk in GB. | ||
3229 | 19 | --extra-disk <ARG> Add an extra disk, starting with /dev/vdd | ||
3230 | 20 | --cloud-init-file <ARG> Additional file for the cloud-init data | ||
3231 | 21 | EOF | ||
3232 | 22 | exit 1 | ||
3233 | 23 | } | ||
3234 | 24 | |||
3235 | 25 | short_opts="h" | ||
3236 | 26 | long_opts="id:,ssh_port,disk-gb:,mem:,bzr-automated-ec2-builds:,cloud-config:,user-data:,kernel-url:,img-url:,raw-disk:,raw-size:,smp:,extra-disk:,cloud-init-file:,help" | ||
3237 | 27 | getopt_out=$(getopt --name "${0##*/}" \ | ||
3238 | 28 | --options "${short_opts}" --long "${long_opts}" -- "$@") && | ||
3239 | 29 | eval set -- "${getopt_out}" || | ||
3240 | 30 | usage | ||
3241 | 31 | |||
3242 | 32 | builder_id=$(uuidgen) | ||
3243 | 33 | uuid=${builder_id} | ||
3244 | 34 | bname="server" | ||
3245 | 35 | size_gb=15 | ||
3246 | 36 | mem=512 | ||
3247 | 37 | smp_opt="4" | ||
3248 | 38 | ud="" | ||
3249 | 39 | cloud_config="" | ||
3250 | 40 | img_loc="${BUILDER_CLOUD_IMAGE:-http://cloud-images.ubuntu.com/releases/precise/release/ubuntu-12.04-server-cloudimg-amd64-disk1.img}" | ||
3251 | 41 | KVM_PID="" | ||
3252 | 42 | |||
3253 | 43 | while [ $# -ne 0 ]; do | ||
3254 | 44 | cur=${1}; next=${2}; | ||
3255 | 45 | case "$cur" in | ||
3256 | 46 | --id) uuid="$2"; shift;; | ||
3257 | 47 | --disk-gb) size_gb="$2"; shift;; | ||
3258 | 48 | --mem) mem="$2"; shift;; | ||
3259 | 49 | --cloud-config) ccloud="$2"; shift;; | ||
3260 | 50 | --user-data) ud="$2"; shift;; | ||
3261 | 51 | --img-url) img_loc="$2"; shift;; | ||
3262 | 52 | --raw-disk) raw_disk="$2"; shift;; | ||
3263 | 53 | --raw-size) raw_size="$2"; shift;; | ||
3264 | 54 | --smp) smp_opt="$2"; shift;; | ||
3265 | 55 | --extra-disk) [ -z "${extra_disk}" ] && extra_disk=$2 || extra_disk="${extra_disk} $2"; shift;; | ||
3266 | 56 | --cloud-init-file) [ -z "${cloud_init_files}" ] && cloud_init_files=$2 || cloud_init_files="${cloud_init_files} $2"; shift;; | ||
3267 | 57 | -h|--help) usage; exit 0;; | ||
3268 | 58 | --) shift; break;; | ||
3269 | 59 | esac | ||
3270 | 60 | shift; | ||
3271 | 61 | done | ||
3272 | 62 | |||
3273 | 63 | work_d="$(mktemp -d /tmp/kvm-builder.XXXX)" | ||
3274 | 64 | kvm_pidfile="$(mktemp --tmpdir=${work_d})" | ||
3275 | 65 | |||
3276 | 66 | error() { echo "$@" 1>&2; } | ||
3277 | 67 | cleanup() { | ||
3278 | 68 | [ -n "${KVM_PID}" ] && kill -9 ${KVM_PID}; | ||
3279 | 69 | [ -n "${TAIL_PID}" ] && kill -9 ${TAIL_PID}; | ||
3280 | 70 | rm -rf "${work_d}"; | ||
3281 | 71 | } | ||
3282 | 72 | fail() { error "$@"; cleanup; exit 1; } | ||
3283 | 73 | debug() { error "$(date -R):" "$@"; } | ||
3284 | 74 | sysfail() { fail "Failure in commands detected; purging "; } | ||
3285 | 75 | |||
3286 | 76 | # Make sure that we kill everything | ||
3287 | 77 | trap sysfail SIGINT SIGTERM | ||
3288 | 78 | |||
3289 | 79 | [ -z "${ud}" ] && fail "Must define user-data script via --user-data" | ||
3290 | 80 | [ -z "${ccloud}" ] && fail "Must define cloud-config script via --cloud-config" | ||
3291 | 81 | |||
3292 | 82 | debug "Creating Cloud-Init configuration..." | ||
3293 | 83 | write_mime_args=( | ||
3294 | 84 | -o "${work_d}/user-data.txt" | ||
3295 | 85 | "${ccloud}" | ||
3296 | 86 | "${ud}") | ||
3297 | 87 | write_mime_args+=(${cloud_init_files[@]}) | ||
3298 | 88 | write_mime_location="$(which write-mime-multipart)" | ||
3299 | 89 | if which python3 > /dev/null; then | ||
3300 | 90 | "${write_mime_location}" ${write_mime_args[@]} || fail "Unable to create user-data" | ||
3301 | 91 | else | ||
3302 | 92 | python "${write_mime_location}" ${write_mime_args[@]} || fail "Unable to create user-data" | ||
3303 | 93 | fi | ||
3304 | 94 | |||
3305 | 95 | echo "instance-id: $(uuidgen)" > "${work_d}/meta-data" | ||
3306 | 96 | echo "local-hostname: builder" >> "${work_d}/meta-data" | ||
3307 | 97 | |||
3308 | 98 | debug "Creating Seed for Cloud-Init..." | ||
3309 | 99 | "${0%/*}/make-seed.sh" "${work_d}/seed.img" "${work_d}/user-data.txt" "${work_d}/meta-data" || | ||
3310 | 100 | fail "Failed to create Configuration ISO" | ||
3311 | 101 | |||
3312 | 102 | # Place the image in place | ||
3313 | 103 | debug "Build image location is ${img_loc}" | ||
3314 | 104 | if [[ "${img_loc}" =~ "http" ]]; then | ||
3315 | 105 | debug "Fetching cloud image from ${img_loc}" | ||
3316 | 106 | curl -s -o "${work_d}/img-${builder_id}" "${img_loc}" || | ||
3317 | 107 | fail "Unable to fetch pristine image from '${img_loc}'" | ||
3318 | 108 | else | ||
3319 | 109 | cp "${img_loc}" "${work_d}/img-${builder_id}" || | ||
3320 | 110 | fail "Unable to copy '${img_loc}'" | ||
3321 | 111 | fi | ||
3322 | 112 | |||
3323 | 113 | debug "Adding ${size_gb}G to image size" | ||
3324 | 114 | qemu-img resize "${work_d}/img-${builder_id}" +"${size_gb}G" || | ||
3325 | 115 | fail "Unable to resize image to ${size_gb}G" | ||
3326 | 116 | |||
3327 | 117 | if [ -n "${raw_disk}" -a ! -e "${raw_disk}" ]; then | ||
3328 | 118 | if [ -n "${raw_size}" ]; then | ||
3329 | 119 | dd if=/dev/zero of=${raw_disk} bs=1k count=1 seek=$((${raw_size} * 1024000)) && | ||
3330 | 120 | debug "Created new raw disk" || | ||
3331 | 121 | fail "Unable to create raw disk" | ||
3332 | 122 | else | ||
3333 | 123 | fail "Undefined raw disk size" | ||
3334 | 124 | fi | ||
3335 | 125 | else | ||
3336 | 126 | debug "Using existing raw disk." | ||
3337 | 127 | fi | ||
3338 | 128 | |||
3339 | 129 | |||
3340 | 130 | debug "________________________________________________" | ||
3341 | 131 | debug "Launching instance..." | ||
3342 | 132 | kvm_cmd=( | ||
3343 | 133 | ${QEMU_COMMAND:-kvm} | ||
3344 | 134 | -name ${uuid} | ||
3345 | 135 | -drive file=${work_d}/img-${builder_id},if=virtio,bus=0,cache=unsafe,unit=0 | ||
3346 | 136 | -drive file=${raw_disk},if=virtio,format=raw,bus=0,unit=1 | ||
3347 | 137 | -drive file=${work_d}/seed.img,if=virtio,media=cdrom,bus=0,cache=unsafe,unit=2 | ||
3348 | 138 | -net nic,model=virtio | ||
3349 | 139 | -net user | ||
3350 | 140 | -no-reboot | ||
3351 | 141 | -display none | ||
3352 | 142 | -daemonize | ||
3353 | 143 | -serial file:${work_d}/console.log | ||
3354 | 144 | -pidfile ${kvm_pidfile} | ||
3355 | 145 | ) | ||
3356 | 146 | kvm_cmd+=(${QEMU_ARGS[@]}) | ||
3357 | 147 | |||
3358 | 148 | # Arch-dependent settings | ||
3359 | 149 | if [[ "$(uname -p)" =~ "ppc64" ]]; then | ||
3360 | 150 | # Use more memory for building on PPC64 | ||
3361 | 151 | kvm_cmd+=(-m 4G) | ||
3362 | 152 | else | ||
3363 | 153 | kvm_cmd+=(-smp ${smp_opt} -m ${mem}) | ||
3364 | 154 | fi | ||
3365 | 155 | |||
3366 | 156 | # Allow for kernel and append | ||
3367 | 157 | if [ -n "${QEMU_KERNEL}" ]; then | ||
3368 | 158 | root="/dev/vda1" | ||
3369 | 159 | if [[ "$(uname -p)" =~ "ppc64" ]]; then | ||
3370 | 160 | root="/dev/vda" | ||
3371 | 161 | fi | ||
3372 | 162 | kvm_cmd+=(-kernel ${QEMU_KERNEL} | ||
3373 | 163 | -append "earlyprintk root=${root} console=hvc0" | ||
3374 | 164 | ) | ||
3375 | 165 | fi | ||
3376 | 166 | |||
3377 | 167 | unit_c=3 | ||
3378 | 168 | for disk in ${extra_disk} | ||
3379 | 169 | do | ||
3380 | 170 | if [[ $(file ${disk}) =~ (disk|qcow|QCOW|vmdk|VMDK|vdi|VDI) ]]; then | ||
3381 | 171 | debug "Adding extra disk $disk to KVM configuration" | ||
3382 | 172 | kvm_cmd+=(-drive file=${disk},if=virtio,bus=1,unit=${unit_c}) | ||
3383 | 173 | else | ||
3384 | 174 | debug "Adding extra disk as a raw-formatted disk" | ||
3385 | 175 | kvm_cmd+=(-drive file=${disk},if=virtio,format=raw,bus=1,unit=${unit_c}) | ||
3386 | 176 | fi | ||
3387 | 177 | unit_c=$((unit_c+1)) | ||
3388 | 178 | done | ||
3389 | 179 | |||
3390 | 180 | debug "KVM command is: ${kvm_cmd[@]}" | ||
3391 | 181 | "${kvm_cmd[@]}" || | ||
3392 | 182 | fail "Failed to launch KVM image" | ||
3393 | 183 | |||
3394 | 184 | read KVM_PID < ${kvm_pidfile} | ||
3395 | 185 | debug "KVM PID is: ${KVM_PID}" | ||
3396 | 186 | |||
3397 | 187 | tail -f "${work_d}/console.log" & | ||
3398 | 188 | TAIL_PID=$! | ||
3399 | 189 | |||
3400 | 190 | # Wait on the pid until the max timeout | ||
3401 | 191 | count=0 | ||
3402 | 192 | max_count=${MAX_CYCLES:-720} | ||
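# 720 cycles x 10s sleep gives a two-hour ceiling before the build is killed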
3403 | 193 | while ps ${KVM_PID} > /dev/null 2>&1 | ||
3404 | 194 | do | ||
3405 | 195 | sleep 10 | ||
3406 | 196 | count=$((count + 1)) | ||
3407 | 197 | if [ "${count}" -gt "${max_count}" ]; then | ||
3408 | 198 | kill -15 ${KVM_PID} | ||
3409 | 199 | debug "Build timed out...killing PID ${KVM_PID}" | ||
3410 | 200 | fi | ||
3411 | 201 | done | ||
3412 | 202 | |||
3413 | 203 | debug "________________________________________________" | ||
3414 | 204 | debug "KVM PID has ended. Work is done" | ||
3415 | 205 | kill -15 ${TAIL_PID} | ||
3416 | 206 | |||
3417 | 207 | unset KVM_PID | ||
3418 | 208 | unset TAIL_PID | ||
3419 | 209 | |||
3420 | 210 | [ -n "${raw_disk}" ] && | ||
3421 | 211 | debug "Extracting raw tarball" && | ||
3422 | 212 | { tar xvvf "${raw_disk}" || /bin/true; } | ||
3423 | 213 | |||
3424 | 214 | [ ! -e success ] && | ||
3425 | 215 | fail "Tarball contents reported failure" | ||
3426 | 216 | |||
3427 | 217 | cp "${work_d}/console.log" . | ||
3428 | 218 | |||
3429 | 219 | # Wait for Cloud-Init to finish any work | ||
3430 | 220 | debug "Cleaning up..." | ||
3431 | 221 | cleanup | ||
3432 | 222 | exit 0 | ||
3433 | 0 | 223 | ||
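Tying the options together, a minimal invocation might look like this (a
sketch; the paths and image URL are illustrative, mirroring how the Jenkins
jobs above call this script):

    ./launch_kvm.sh \
        --id "amd64-manual-test" \
        --user-data config/cloud.cfg \
        --cloud-config config/cloud-trusty.cfg \
        --disk-gb 10 \
        --raw-disk "$PWD/output.raw" --raw-size 5 \
        --img-url http://cloud-images.ubuntu.com/releases/trusty/release/ubuntu-14.04-server-cloudimg-amd64-disk1.img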
3434 | === added file 'maas_config.sh' | |||
3435 | --- maas_config.sh 1970-01-01 00:00:00 +0000 | |||
3436 | +++ maas_config.sh 2018-05-31 04:33:07 +0000 | |||
3437 | @@ -0,0 +1,75 @@ | |||
3438 | 1 | #!/bin/bash | ||
3439 | 2 | short_opts="h" | ||
3440 | 3 | long_opts="distro:,stream:,maas-branch:,out:,template:,serial:,local:,base-name:,out_d:" | ||
3441 | 4 | getopt_out=$(getopt --name "${0##*/}" \ | ||
3442 | 5 | --options "${short_opts}" --long "${long_opts}" -- "$@") && | ||
3443 | 6 | eval set -- "${getopt_out}" || { echo "BAD INVOCATION!"; usage; exit 1; } | ||
3444 | 7 | |||
3445 | 8 | usage() { | ||
3446 | 9 | cat <<EOM | ||
3447 | 10 | ${0##*/} - Populates values in the build template. | ||
3448 | 11 | |||
3449 | 12 | Required: | ||
3450 | 13 | --distro Distro code name, i.e. precise | ||
3451 | 14 | --template Template file | ||
3452 | 15 | --stream Stream, i.e. daily, release | ||
3453 | 16 | --base-name The name of the file to work on | ||
3454 | 17 | --serial The build serial | ||
--local Local directory holding the build output | ||
3455 | 18 | --out The output file | ||
3456 | 19 | --out_d Where to stuff the output files | ||
3457 | 20 | |||
3458 | 21 | Optional: | ||
3459 | 22 | --maas-branch bzr branch for maas image code | ||
3460 | 23 | EOM | ||
3461 | 24 | } | ||
3462 | 25 | |||
3463 | 26 | |||
3464 | 27 | fail() { echo "${@}" 1>&2; exit 1; } | ||
3465 | 28 | |||
3466 | 29 | serial="${serial:-$(date +%Y%m%d)}" | ||
3467 | 30 | maas_branch="${maas_branch:-http://bazaar.launchpad.net/~smoser/maas/maas.ubuntu.com.images-ephemeral}" | ||
3468 | 31 | template_f="${PWD}/img-maas.tmpl" | ||
3469 | 32 | |||
3470 | 33 | while [ $# -ne 0 ]; do | ||
3471 | 34 | cur=${1}; next=${2}; | ||
3472 | 35 | case "$cur" in | ||
3473 | 36 | --distro) distro=$2; shift;; | ||
3474 | 37 | --stream) stream=$2; shift;; | ||
3475 | 38 | --local) local_d=$2; shift;; | ||
3476 | 39 | --maas-branch) maas_branch=$2; shift;; | ||
3477 | 40 | --base-name) base_name=$2; shift;; | ||
3478 | 41 | --template) template_f=$2; shift;; | ||
3479 | 42 | --out) out_f=$2; shift;; | ||
3480 | 43 | --out_d) out_d=$2; shift;; | ||
3481 | 44 | --) shift; break;; | ||
3482 | 45 | esac | ||
3483 | 46 | shift; | ||
3484 | 47 | done | ||
3485 | 48 | |||
3486 | 49 | fail_usage() { fail "Must define $@"; } | ||
3487 | 50 | |||
3488 | 51 | [ -z "${distro}" ] && fail_usage "--distro" | ||
3489 | 52 | [ -z "${stream}" ] && fail_usage "--stream" | ||
3490 | 53 | [ -z "${local_d}" ] && fail_usage "--local" | ||
3491 | 54 | [ -z "${out_f}" ] && fail_usage "--out" | ||
3492 | 55 | [ -z "${out_d}" ] && fail_usage "--out_d" | ||
3493 | 56 | [ -z "${base_name}" ] && fail_usage "--base-name" | ||
3494 | 57 | |||
3495 | 58 | case "$distro" in | ||
3496 | 59 | trusty) arches="${ARCH_TYPE:-i386 amd64 armhf}"; | ||
3497 | 60 | [[ "$(uname -m)" =~ ppc64 ]] && arches="ppc64el";; | ||
3498 | 61 | *) arches="${ARCH_TYPE:-i386 amd64 armhf}";; | ||
3499 | 62 | esac | ||
3500 | 63 | |||
3501 | 64 | sed -e "s,%d,${distro},g" \ | ||
3502 | 65 | -e "s,%S,${stream},g" \ | ||
3503 | 66 | -e "s,%M,${maas_branch},g" \ | ||
3504 | 67 | -e "s,%D,${local_d},g" \ | ||
3505 | 68 | -e "s,%B,${base_name},g" \ | ||
3506 | 69 | -e "s,%s,${serial},g" \ | ||
3507 | 70 | -e "s,%O,${out_d},g" \ | ||
3508 | 71 | -e "s,%A,${arches},g" \ | ||
3509 | 72 | ${template_f} > ${out_f} || | ||
3510 | 73 | fail "Unable to write template file" | ||
3511 | 74 | |||
3512 | 75 | exit 0 | ||
3513 | 0 | 76 | ||
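The sed block above is a plain token expansion; a sketch of a template
fragment and its rendered output (the fragment and the do_build name are
illustrative, not the real img-maas.tmpl):

    $ cat fragment.tmpl
    do_build --suite %d --stream %S --serial %s --arches "%A"
    $ maas_config.sh --distro trusty --stream daily --serial 20140601 \
          --base-name trusty-server-cloudimg \
          --local /srv/ec2-images/trusty/20140601 \
          --template fragment.tmpl --out out.sh --out_d /srv/maas-images/out
    $ cat out.sh
    do_build --suite trusty --stream daily --serial 20140601 --arches "i386 amd64 armhf"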
3514 | === added file 'make-seed.sh' | |||
3515 | --- make-seed.sh 1970-01-01 00:00:00 +0000 | |||
3516 | +++ make-seed.sh 2018-05-31 04:33:07 +0000 | |||
3517 | @@ -0,0 +1,147 @@ | |||
3518 | 1 | #!/bin/bash | ||
3519 | 2 | |||
3520 | 3 | VERBOSITY=0 | ||
3521 | 4 | TEMP_D="" | ||
3522 | 5 | DEF_DISK_FORMAT="raw" | ||
3523 | 6 | DEF_FILESYSTEM="iso9660" | ||
3524 | 7 | |||
3525 | 8 | error() { echo "$@" 1>&2; } | ||
3526 | 9 | errorp() { printf "$@" 1>&2; } | ||
3527 | 10 | fail() { [ $# -eq 0 ] || error "$@"; exit 1; } | ||
3528 | 11 | failp() { [ $# -eq 0 ] || errorp "$@"; exit 1; } | ||
3529 | 12 | |||
3530 | 13 | Usage() { | ||
3531 | 14 | cat <<EOF | ||
3532 | 15 | Usage: ${0##*/} [ options ] output user-data [meta-data] | ||
3533 | 16 | |||
3534 | 17 | Create a seed disk for the cloud-init NoCloud datasource | ||
3535 | 18 | |||
3536 | 19 | options: | ||
3537 | 20 | -h | --help show usage | ||
3538 | 21 | -d | --disk-format D disk format to output. default: raw | ||
3539 | 22 | -f | --filesystem F filesystem format (vfat or iso), default: iso9660 | ||
3540 | 23 | |||
3541 | 24 | -i | --interfaces F write network interfaces file into metadata | ||
3542 | 25 | -m | --dsmode M add 'dsmode' ('local' or 'net') to the metadata | ||
3543 | 26 | default in cloud-init is 'net', meaning network is | ||
3544 | 27 | required. | ||
3545 | 28 | |||
3546 | 29 | Example: | ||
3547 | 30 | * cat my-user-data | ||
3548 | 31 | #cloud-config | ||
3549 | 32 | password: passw0rd | ||
3550 | 33 | chpasswd: { expire: False } | ||
3551 | 34 | ssh_pwauth: True | ||
3552 | 35 | * echo "instance-id: \$(uuidgen || echo i-abcdefg)" > my-meta-data | ||
3553 | 36 | * ${0##*/} my-seed.img my-user-data my-meta-data | ||
3554 | 37 | EOF | ||
3555 | 38 | } | ||
3556 | 39 | |||
3557 | 40 | bad_Usage() { Usage 1>&2; [ $# -eq 0 ] || error "$@"; exit 1; } | ||
3558 | 41 | cleanup() { | ||
3559 | 42 | [ -z "${TEMP_D}" -o ! -d "${TEMP_D}" ] || rm -Rf "${TEMP_D}" | ||
3560 | 43 | } | ||
3561 | 44 | |||
3562 | 45 | debug() { | ||
3563 | 46 | local level=${1}; shift; | ||
3564 | 47 | [ "${level}" -gt "${VERBOSITY}" ] && return | ||
3565 | 48 | error "${@}" | ||
3566 | 49 | } | ||
3567 | 50 | |||
3568 | 51 | short_opts="hi:d:f:m:o:v" | ||
3569 | 52 | long_opts="disk-format:,dsmode:,filesystem:,help,interfaces:,output:,verbose" | ||
3570 | 53 | getopt_out=$(getopt --name "${0##*/}" \ | ||
3571 | 54 | --options "${short_opts}" --long "${long_opts}" -- "$@") && | ||
3572 | 55 | eval set -- "${getopt_out}" || | ||
3573 | 56 | bad_Usage | ||
3574 | 57 | |||
3575 | 58 | ## <<insert default variables here>> | ||
3576 | 59 | output="" | ||
3577 | 60 | userdata="" | ||
3578 | 61 | metadata="" | ||
3579 | 62 | filesystem=$DEF_FILESYSTEM | ||
3580 | 63 | diskformat=$DEF_DISK_FORMAT | ||
3581 | 64 | interfaces=_unset | ||
3582 | 65 | dsmode="" | ||
3583 | 66 | |||
3584 | 67 | |||
3585 | 68 | while [ $# -ne 0 ]; do | ||
3586 | 69 | cur=${1}; next=${2}; | ||
3587 | 70 | case "$cur" in | ||
3588 | 71 | -h|--help) Usage ; exit 0;; | ||
3589 | 72 | -v|--verbose) VERBOSITY=$((${VERBOSITY}+1));; | ||
3590 | 73 | -d|--disk-format) diskformat=$next; shift;; | ||
3591 | 74 | -f|--filesystem) filesystem=$next; shift;; | ||
3592 | 75 | -m|--dsmode) dsmode=$next; shift;; | ||
3593 | 76 | -i|--interfaces) interfaces=$next; shift;; | ||
3594 | 77 | --) shift; break;; | ||
3595 | 78 | esac | ||
3596 | 79 | shift; | ||
3597 | 80 | done | ||
3598 | 81 | |||
3599 | 82 | ## check arguments here | ||
3600 | 83 | ## how many args do you expect? | ||
3601 | 84 | [ $# -ge 2 ] || bad_Usage "must provide output, userdata" | ||
3602 | 85 | [ $# -le 3 ] || bad_Usage "confused by additional args" | ||
3603 | 86 | |||
3604 | 87 | output=$1 | ||
3605 | 88 | userdata=$2 | ||
3606 | 89 | metadata=$3 | ||
3607 | 90 | |||
3608 | 91 | [ -n "$metadata" -a "${interfaces}" != "_unset" ] && | ||
3609 | 92 | fail "metadata and --interfaces are incompatible" | ||
3610 | 93 | [ -n "$metadata" -a -n "$dsmode" ] && | ||
3611 | 94 | fail "metadata and dsmode are incompatible" | ||
3612 | 95 | [ "$interfaces" = "_unset" -o -r "$interfaces" ] || | ||
3613 | 96 | fail "$interfaces: not a readable file" | ||
3614 | 97 | |||
3615 | 98 | TEMP_D=$(mktemp -d "${TMPDIR:-/tmp}/${0##*/}.XXXXXX") || | ||
3616 | 99 | fail "failed to make tempdir" | ||
3617 | 100 | trap cleanup EXIT | ||
3618 | 101 | |||
3619 | 102 | if [ -n "$metadata" ]; then | ||
3620 | 103 | cp "$metadata" "$TEMP_D/meta-data" || fail "$metadata: failed to copy" | ||
3621 | 104 | else | ||
3622 | 105 | { | ||
3623 | 106 | echo "instance-id: iid-local01" | ||
3624 | 107 | [ -n "$dsmode" ] && echo "dsmode: $dsmode" | ||
3625 | 108 | [ "$interfaces" != "_unset" ] && echo "interfaces: |" && | ||
3626 | 109 | sed 's,^, ,' "$interfaces" | ||
3627 | 110 | } > "$TEMP_D/meta-data" | ||
3628 | 111 | fi | ||
3629 | 112 | |||
3630 | 113 | if [ "$userdata" = "-" ]; then | ||
3631 | 114 | cat > "$TEMP_D/user-data" || fail "failed to read from stdin" | ||
3632 | 115 | else | ||
3633 | 116 | cp "$userdata" "$TEMP_D/user-data" || fail "$userdata: failed to copy" | ||
3634 | 117 | fi | ||
3635 | 118 | |||
3636 | 119 | ## alternatively, create a vfat filesystem with same files | ||
3637 | 120 | img="$TEMP_D/seed.img" | ||
3638 | 121 | truncate --size 100K "$img" || fail "failed to truncate image" | ||
3639 | 122 | |||
3640 | 123 | case "$filesystem" in | ||
3641 | 124 | iso9660|iso) | ||
3642 | 125 | genisoimage -output "$img" -volid cidata \ | ||
3643 | 126 | -joliet -rock "$TEMP_D/user-data" "$TEMP_D/meta-data" \ | ||
3644 | 127 | > "$TEMP_D/err" 2>&1 || | ||
3645 | 128 | { cat "$TEMP_D/err" 1>&2; fail "failed to genisoimage"; } | ||
3646 | 129 | ;; | ||
3647 | 130 | vfat) | ||
3648 | 131 | mkfs.vfat -n cidata "$img" || fail "failed mkfs.vfat" | ||
3649 | 132 | mcopy -oi "$img" "$TEMP_D/user-data" "$TEMP_D/meta-data" :: || | ||
3650 | 133 | fail "failed to copy user-data, meta-data to img" | ||
3651 | 134 | ;; | ||
3652 | 135 | *) fail "unknown filesystem $filesystem";; | ||
3653 | 136 | esac | ||
3654 | 137 | |||
3655 | 138 | [ "$output" = "-" ] && output="$TEMP_D/final" | ||
3656 | 139 | qemu-img convert -f raw -O "$diskformat" "$img" "$output" || | ||
3657 | 140 | fail "failed to convert to disk format $diskformat" | ||
3658 | 141 | |||
3659 | 142 | [ "$output" != "$TEMP_D/final" ] || { cat "$output" && output="-"; } || | ||
3660 | 143 | fail "failed to write to -" | ||
3661 | 144 | |||
3662 | 145 | error "wrote ${output} with filesystem=$filesystem and diskformat=$diskformat" | ||
3663 | 146 | # vi: ts=4 noexpandtab | ||
3664 | 147 | |||
3665 | 0 | 148 | ||
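
Note on make-seed.sh: the flow is (1) write user-data and meta-data into a temp directory, (2) pack them into a filesystem whose volume label is 'cidata' (the label cloud-init's NoCloud datasource looks for), and (3) qemu-img convert the raw image into the requested disk format. A rough Python equivalent, assuming genisoimage and qemu-img are installed, as the script itself does:

    # sketch: the same NoCloud seed flow via subprocess
    import os
    import subprocess
    import tempfile

    def make_seed(output, user_data, meta_data, disk_format='raw'):
        d = tempfile.mkdtemp()
        for name, content in (('user-data', user_data), ('meta-data', meta_data)):
            with open(os.path.join(d, name), 'w') as f:
                f.write(content)
        img = os.path.join(d, 'seed.img')
        # the 'cidata' volume label is required by the NoCloud datasource
        subprocess.check_call(['genisoimage', '-output', img, '-volid', 'cidata',
                               '-joliet', '-rock',
                               os.path.join(d, 'user-data'),
                               os.path.join(d, 'meta-data')])
        subprocess.check_call(['qemu-img', 'convert', '-f', 'raw',
                               '-O', disk_format, img, output])

    make_seed('my-seed.img', '#cloud-config\npassword: passw0rd\n',
              'instance-id: iid-local01\n')
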
3666 | === added file 'overlay.sh' | |||
3667 | --- overlay.sh 1970-01-01 00:00:00 +0000 | |||
3668 | +++ overlay.sh 2018-05-31 04:33:07 +0000 | |||
3669 | @@ -0,0 +1,23 @@ | |||
3670 | 1 | #!/bin/bash | ||
3671 | 2 | # Overlays a new branch over this branch. This allows you to reuse code | ||
3672 | 3 | # from this branch against a development branch or a private branch. | ||
3673 | 4 | |||
3674 | 5 | my_script_path=$(readlink -f ${0}) | ||
3675 | 6 | my_s_dir=$(dirname ${my_script_path}) | ||
3676 | 7 | |||
3677 | 8 | source "${my_s_dir}/functions/bzr_check.sh" | ||
3678 | 9 | source "${my_s_dir}/functions/common" | ||
3679 | 10 | |||
3680 | 11 | rsync_merge() { | ||
3681 | 12 | debug "Merging ${1} with private" | ||
3682 | 13 | [ -d ${1} ] || fail "no such directory ${1} for merging!" | ||
3683 | 14 | rsync -av ${1}/* ${my_s_dir} || | ||
3684 | 15 | fail "failed to merge ${1}" | ||
3685 | 16 | } | ||
3686 | 17 | |||
3687 | 18 | for i in ${@} | ||
3688 | 19 | do | ||
3689 | 20 | check_tmp=$(mktemp --directory --tmpdir=${my_s_dir} bzrbranch.XXX) | ||
3690 | 21 | check_branch ${i} ${check_tmp} | ||
3691 | 22 | rsync_merge ${check_tmp} | ||
3692 | 23 | done | ||
3693 | 0 | 24 | ||
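
Note on overlay.sh: each argument is branched into a throwaway directory inside the script tree (via check_branch from functions/bzr_check.sh) and then rsync'd over it, so a private or development branch can shadow files in this one. A rough Python equivalent, assuming bzr and rsync are available (the branch URL below is illustrative):

    # sketch: branch-and-overlay, mirroring overlay.sh
    import os
    import subprocess
    import tempfile

    def overlay(branch_url, dest):
        tmp = tempfile.mkdtemp(prefix='bzrbranch.', dir=dest)
        os.rmdir(tmp)  # bzr branch wants to create the target directory itself
        subprocess.check_call(['bzr', 'branch', branch_url, tmp])
        # trailing '/' copies the branch contents over dest, not the dir itself
        subprocess.check_call(['rsync', '-av', tmp + '/', dest])

    overlay('lp:~user/vmbuilder/private-branch',
            os.path.dirname(os.path.abspath(__file__)))
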
3694 | === added directory 'pylib' | |||
3695 | === added directory 'pylib/changelogger' | |||
3696 | === added file 'pylib/changelogger.py' | |||
3697 | --- pylib/changelogger.py 1970-01-01 00:00:00 +0000 | |||
3698 | +++ pylib/changelogger.py 2018-05-31 04:33:07 +0000 | |||
3699 | @@ -0,0 +1,222 @@ | |||
3700 | 1 | from __future__ import print_function | ||
3701 | 2 | |||
3702 | 3 | import logging | ||
3703 | 4 | import re | ||
3704 | 5 | import requests | ||
3705 | 6 | import subprocess | ||
3706 | 7 | from debian.changelog import (Changelog) | ||
3707 | 8 | |||
3708 | 9 | # http://changelogs.ubuntu.com/changelogs/pool/main/l/linux-meta/linux-meta_4.2.0.17.19/changelog | ||
3709 | 10 | changelog_url_base = \ | ||
3710 | 11 | "http://changelogs.ubuntu.com/changelogs/pool/{}/{}/{}/{}_{}/changelog" | ||
3711 | 12 | |||
3712 | 13 | |||
3713 | 14 | class ChangeDelta(object): | ||
3714 | 15 | |||
3715 | 16 | def __init__(self, pkg, changelog=None): | ||
3716 | 17 | self._pkg = pkg | ||
3717 | 18 | self._changelogs = changelog | ||
3718 | 19 | |||
3719 | 20 | def _set_pkg(self, pkg): | ||
3720 | 21 | if pkg is not None: | ||
3721 | 22 | self._pkg = str(pkg) | ||
3722 | 23 | |||
3723 | 24 | def _get_pkg(self): | ||
3724 | 25 | return self._pkg | ||
3725 | 26 | |||
3726 | 27 | pkg = property(_get_pkg, _set_pkg) | ||
3727 | 28 | |||
3728 | 29 | def _get_changelogs(self): | ||
3729 | 30 | try: | ||
3730 | 31 | return self._changelogs | ||
3731 | 32 | except AttributeError: | ||
3732 | 33 | return [] | ||
3733 | 34 | |||
3734 | 35 | def _set_changelogs(self, changelogs): | ||
3735 | 36 | self._changelogs = changelogs | ||
3736 | 37 | |||
3737 | 38 | changelogs = property(_get_changelogs, _set_changelogs) | ||
3738 | 39 | |||
3739 | 40 | def iter_changelogs(self, text=False): | ||
3740 | 41 | for block in self.changelogs: | ||
3741 | 42 | if text: | ||
3742 | 43 | yield str(block) | ||
3743 | 44 | else: | ||
3744 | 45 | yield block | ||
3745 | 46 | |||
3746 | 47 | def _get_bug_cves(self): | ||
3747 | 48 | """extract information from the changelog block""" | ||
3748 | 49 | mappings, bugs, cves = ({}, [], []) | ||
3749 | 50 | cve_re = re.compile(r"CVE-\d+-\d+") | ||
3750 | 51 | bug_re = re.compile(r"\(LP:.#(\d+)\)") | ||
3751 | 52 | |||
3752 | 53 | for block in self.iter_changelogs(): | ||
3753 | 54 | _block = str(block) | ||
3754 | 55 | cves_in_block = cve_re.findall(_block) | ||
3755 | 56 | cves.extend(cves_in_block) | ||
3756 | 57 | |||
3757 | 58 | bugs_in_block = bug_re.findall(_block) | ||
3758 | 59 | bugs.extend(bugs_in_block) | ||
3759 | 60 | |||
3760 | 61 | ver = str(block.version) | ||
3761 | 62 | mappings[ver] = {'cves': cves_in_block, | ||
3762 | 63 | 'bugs': bugs_in_block} | ||
3763 | 64 | |||
3764 | 65 | self.mappings = mappings | ||
3765 | 66 | self.cves = cves | ||
3766 | 67 | self.bugs = bugs | ||
3767 | 68 | |||
3768 | 69 | def _get_cves(self): | ||
3769 | 70 | self._get_bug_cves() | ||
3770 | 71 | return self._cves | ||
3771 | 72 | |||
3772 | 73 | def _set_cves(self, cves): | ||
3773 | 74 | self._cves = cves | ||
3774 | 75 | |||
3775 | 76 | cves = property(_get_cves, _set_cves) | ||
3776 | 77 | |||
3777 | 78 | def cve_in_delta(self): | ||
3778 | 79 | if len(self.cves) > 0: | ||
3779 | 80 | return True | ||
3780 | 81 | return False | ||
3781 | 82 | |||
3782 | 83 | def _get_bugs(self): | ||
3783 | 84 | self._get_bug_cves() | ||
3784 | 85 | return self._bugs | ||
3785 | 86 | |||
3786 | 87 | def _set_bugs(self, bugs): | ||
3787 | 88 | self._bugs = bugs | ||
3788 | 89 | |||
3789 | 90 | bugs = property(_get_bugs, _set_bugs) | ||
3790 | 91 | |||
3791 | 92 | def _get_min_version(self): | ||
3792 | 93 | if self.changelogs: | ||
3793 | 94 | return self.changelogs[-1].version | ||
3794 | 95 | |||
3795 | 96 | min_version = property(_get_min_version) | ||
3796 | 97 | |||
3797 | 98 | def _get_max_version(self): | ||
3798 | 99 | if self.changelogs: | ||
3799 | 100 | return self.changelogs[0].version | ||
3800 | 101 | |||
3801 | 102 | max_version = property(_get_max_version) | ||
3802 | 103 | |||
3803 | 104 | def _get_mappings(self): | ||
3804 | 105 | try: | ||
3805 | 106 | return self._mappings | ||
3806 | 107 | except AttributeError: | ||
3807 | 108 | return {} | ||
3808 | 109 | |||
3809 | 110 | def _set_mappings(self, mapping): | ||
3810 | 111 | self._mappings = mapping | ||
3811 | 112 | |||
3812 | 113 | mappings = property(_get_mappings, _set_mappings) | ||
3813 | 114 | |||
3814 | 115 | def format_changelogs(self): | ||
3815 | 116 | changeblocks = "\n".join(self.iter_changelogs(text=True)) | ||
3816 | 117 | return changeblocks | ||
3817 | 118 | |||
3818 | 119 | def __str__(self): | ||
3819 | 120 | return self.format_changelogs() | ||
3820 | 121 | |||
3821 | 122 | |||
3822 | 123 | class ReadChangeLog(Changelog): | ||
3823 | 124 | |||
3824 | 125 | def __init__(self, pkg, version): | ||
3825 | 126 | self.logger = logging.getLogger("__changelog_{}__".format(pkg)) | ||
3826 | 127 | logging.basicConfig(format= | ||
3827 | 128 | '%(asctime)s %(levelname)s - [PARSING {}] %(message)s'.format( | ||
3828 | 129 | pkg)) | ||
3829 | 130 | self.logger.setLevel(logging.DEBUG) | ||
3830 | 131 | self.logger.debug("Parsing changelog for {}".format(version)) | ||
3831 | 132 | |||
3832 | 133 | ch_url = self.get_changelog_url(pkg, version) | ||
3833 | 134 | self.logger.debug("URL: {}".format(ch_url)) | ||
3834 | 135 | try: | ||
3835 | 136 | raw_changelog = self.get_changelog_from_url(ch_url) | ||
3836 | 137 | Changelog.__init__(self, raw_changelog) | ||
3837 | 138 | except Exception as e: | ||
3838 | 139 | self.logger.debug("Failed to parse changelog!\n{}".format(e)) | ||
3839 | 140 | |||
3840 | 141 | self.min_version = self._blocks[-1].version | ||
3841 | 142 | self.max_version = self.version | ||
3842 | 143 | |||
3843 | 144 | self.logger.debug("Opened changelog:") | ||
3844 | 145 | self.logger.debug(" Versions {} through {}".format(self.min_version, | ||
3845 | 146 | self.max_version)) | ||
3846 | 147 | |||
3847 | 148 | def get_changelog_url(self, pkg, version, pocket='main', url=None): | ||
3848 | 149 | """Return the changelog URL""" | ||
3849 | 150 | |||
3850 | 151 | url = url or changelog_url_base | ||
3851 | 152 | pdir = pkg[0] | ||
3852 | 153 | if pkg.startswith("lib"): | ||
3853 | 154 | pdir = pkg[:4] | ||
3854 | 155 | return url.format(pocket, pdir, pkg, pkg, version) | ||
3855 | 156 | |||
3856 | 157 | def get_changelog_from_url(self, url): | ||
3857 | 158 | """Fetch the change log""" | ||
3858 | 159 | try: | ||
3859 | 160 | chlog = requests.get(url) | ||
3860 | 161 | if chlog.status_code == requests.codes.ok: | ||
3861 | 162 | return chlog.text | ||
3862 | 163 | else: | ||
3863 | 164 | chlog.raise_for_status() | ||
3864 | 165 | |||
3865 | 166 | except requests.exceptions.HTTPError as e: | ||
3866 | 167 | self.logger.critical("Failed to fetch changelog at {}:\n{}".format( | ||
3867 | 168 | url, e)) | ||
3868 | 169 | |||
3869 | 170 | def compare_versions(self, v1, operator, v2): | ||
3870 | 171 | """Dirty, slow hack to compare versions""" | ||
3871 | 172 | cmd = ['/usr/bin/dpkg', '--compare-versions', str(v1), str(operator), | ||
3872 | 173 | str(v2)] | ||
3873 | 174 | try: | ||
3874 | 175 | subprocess.check_call(cmd) | ||
3875 | 176 | except subprocess.CalledProcessError as e: | ||
3876 | 177 | return False | ||
3877 | 178 | |||
3878 | 179 | return True | ||
3879 | 180 | |||
3880 | 181 | def iter_changeblocks(self): | ||
3881 | 182 | """Iterate over the change logs""" | ||
3882 | 183 | for block in self._blocks: | ||
3883 | 184 | yield block | ||
3884 | 185 | |||
3885 | 186 | def get_changes_between(self, minv=None, maxv=None, commits=None): | ||
3886 | 187 | """Get the changes between two versions""" | ||
3887 | 188 | blocks = [] | ||
3888 | 189 | # Don't waste CPU time if we are getting the whole log | ||
3889 | 190 | if minv is None and maxv is None: | ||
3890 | 191 | for block in self.iter_changeblocks(): | ||
3891 | 192 | blocks.append(block) | ||
3892 | 193 | |||
3893 | 194 | # Now deal with changes between | ||
3894 | 195 | minver = minv or self.min_version | ||
3895 | 196 | maxver = maxv or self.max_version | ||
3896 | 197 | |||
3897 | 198 | # Allow for comparing the latest version against arbitrary counts | ||
3898 | 199 | # i.e. you don't have to know the prior version | ||
3899 | 200 | if isinstance(minver, int) and minver <= -1: | ||
3900 | 201 | minver = self.versions[(abs(minver) - 1)] | ||
3901 | 202 | |||
3902 | 203 | if minv or maxv: | ||
3903 | 204 | for block in self.iter_changeblocks(): | ||
3904 | 205 | bver = block.version | ||
3905 | 206 | if minv: | ||
3906 | 207 | if not self.compare_versions(bver, 'ge', minver): | ||
3907 | 208 | continue | ||
3908 | 209 | if not maxv: | ||
3909 | 210 | blocks.append(block) | ||
3910 | 211 | elif self.compare_versions(bver, 'le', maxver): | ||
3911 | 212 | blocks.append(block) | ||
3912 | 213 | elif maxv: | ||
3913 | 214 | if not self.compare_versions(bver, 'le', maxver): | ||
3914 | 215 | continue | ||
3915 | 216 | if not minv: | ||
3916 | 217 | blocks.append(block) | ||
3917 | 218 | elif self.compare_versions(bver, 'ge', minver): | ||
3918 | 219 | blocks.append(block) | ||
3919 | 220 | |||
3920 | 221 | ret = ChangeDelta(self.package, blocks) | ||
3921 | 222 | return ret | ||
3922 | 0 | 223 | ||
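
Note on changelogger.py: ReadChangeLog fetches and parses a package changelog from changelogs.ubuntu.com, and get_changes_between() narrows it to a version window, returning a ChangeDelta whose properties surface the LP bug numbers and CVEs mentioned in that window. A usage sketch (the package and version strings are illustrative; it needs python-debian, requests, and network access):

    # sketch: pull the delta between two linux-meta versions
    log = ReadChangeLog('linux-meta', '4.2.0.17.19')
    delta = log.get_changes_between(minv='4.2.0.16.18')
    if delta.cve_in_delta():
        print('CVEs referenced:', delta.cves)
    print('LP bugs referenced:', delta.bugs)
    print(delta.format_changelogs())
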
3923 | === added file 'pylib/changelogger/ChangeLogger.py' | |||
3924 | --- pylib/changelogger/ChangeLogger.py 1970-01-01 00:00:00 +0000 | |||
3925 | +++ pylib/changelogger/ChangeLogger.py 2018-05-31 04:33:07 +0000 | |||
3926 | @@ -0,0 +1,222 @@ | |||
3927 | 1 | from __future__ import print_function | ||
3928 | 2 | |||
3929 | 3 | import logging | ||
3930 | 4 | import re | ||
3931 | 5 | import requests | ||
3932 | 6 | import subprocess | ||
3933 | 7 | from debian.changelog import (Changelog) | ||
3934 | 8 | |||
3935 | 9 | # http://changelogs.ubuntu.com/changelogs/pool/main/l/linux-meta/linux-meta_4.2.0.17.19/changelog | ||
3936 | 10 | changelog_url_base = \ | ||
3937 | 11 | "http://changelogs.ubuntu.com/changelogs/pool/{}/{}/{}/{}_{}/changelog" | ||
3938 | 12 | |||
3939 | 13 | |||
3940 | 14 | class ChangeDelta(object): | ||
3941 | 15 | |||
3942 | 16 | def __init__(self, pkg, changelog=None): | ||
3943 | 17 | self._pkg = pkg | ||
3944 | 18 | self._changelogs = changelog | ||
3945 | 19 | |||
3946 | 20 | def _set_pkg(self, pkg): | ||
3947 | 21 | if pkg is not None: | ||
3948 | 22 | self._pkg = str(pkg) | ||
3949 | 23 | |||
3950 | 24 | def _get_pkg(self): | ||
3951 | 25 | return self._pkg | ||
3952 | 26 | |||
3953 | 27 | pkg = property(_get_pkg, _set_pkg) | ||
3954 | 28 | |||
3955 | 29 | def _get_changelogs(self): | ||
3956 | 30 | try: | ||
3957 | 31 | return self._changelogs | ||
3958 | 32 | except AttributeError: | ||
3959 | 33 | return [] | ||
3960 | 34 | |||
3961 | 35 | def _set_changelogs(self, changelogs): | ||
3962 | 36 | self._changelogs = changelogs | ||
3963 | 37 | |||
3964 | 38 | changelogs = property(_get_changelogs, _set_changelogs) | ||
3965 | 39 | |||
3966 | 40 | def iter_changelogs(self, text=False): | ||
3967 | 41 | for block in self.changelogs: | ||
3968 | 42 | if text: | ||
3969 | 43 | yield str(block) | ||
3970 | 44 | else: | ||
3971 | 45 | yield block | ||
3972 | 46 | |||
3973 | 47 | def _get_bug_cves(self): | ||
3974 | 48 | """extract information from the changelog block""" | ||
3975 | 49 | mappings, bugs, cves = ({}, [], []) | ||
3976 | 50 | cve_re = re.compile(r"CVE-\d+-\d+") | ||
3977 | 51 | bug_re = re.compile(r"\(LP:.#(\d+)\)") | ||
3978 | 52 | |||
3979 | 53 | for block in self.iter_changelogs(): | ||
3980 | 54 | _block = str(block) | ||
3981 | 55 | cves_in_block = cve_re.findall(_block) | ||
3982 | 56 | cves.extend(cves_in_block) | ||
3983 | 57 | |||
3984 | 58 | bugs_in_block = bug_re.findall(_block) | ||
3985 | 59 | bugs.extend(bugs_in_block) | ||
3986 | 60 | |||
3987 | 61 | ver = str(block.version) | ||
3988 | 62 | mappings[ver] = {'cves': cves_in_block, | ||
3989 | 63 | 'bugs': bugs_in_block} | ||
3990 | 64 | |||
3991 | 65 | self.mappings = mappings | ||
3992 | 66 | self.cves = cves | ||
3993 | 67 | self.bugs = bugs | ||
3994 | 68 | |||
3995 | 69 | def _get_cves(self): | ||
3996 | 70 | self._get_bug_cves() | ||
3997 | 71 | return self._cves | ||
3998 | 72 | |||
3999 | 73 | def _set_cves(self, cves): | ||
4000 | 74 | self._cves = cves | ||
4001 | 75 | |||
4002 | 76 | cves = property(_get_cves, _set_cves) | ||
4003 | 77 | |||
4004 | 78 | def cve_in_delta(self): | ||
4005 | 79 | if len(self.cves) > 0: | ||
4006 | 80 | return True | ||
4007 | 81 | return False | ||
4008 | 82 | |||
4009 | 83 | def _get_bugs(self): | ||
4010 | 84 | self._get_bug_cves() | ||
4011 | 85 | return self._bugs | ||
4012 | 86 | |||
4013 | 87 | def _set_bugs(self, bugs): | ||
4014 | 88 | self._bugs = bugs | ||
4015 | 89 | |||
4016 | 90 | bugs = property(_get_bugs, _set_bugs) | ||
4017 | 91 | |||
4018 | 92 | def _get_min_version(self): | ||
4019 | 93 | if self.changelogs: | ||
4020 | 94 | return self.changelogs[-1].version | ||
4021 | 95 | |||
4022 | 96 | min_version = property(_get_min_version) | ||
4023 | 97 | |||
4024 | 98 | def _get_max_version(self): | ||
4025 | 99 | if self.changelogs: | ||
4026 | 100 | return self.changelogs[0].version | ||
4027 | 101 | |||
4028 | 102 | max_version = property(_get_max_version) | ||
4029 | 103 | |||
4030 | 104 | def _get_mappings(self): | ||
4031 | 105 | try: | ||
4032 | 106 | return self._mappings | ||
4033 | 107 | except AttributeError: | ||
4034 | 108 | return {} | ||
4035 | 109 | |||
4036 | 110 | def _set_mappings(self, mapping): | ||
4037 | 111 | self._mappings = mapping | ||
4038 | 112 | |||
4039 | 113 | mappings = property(_get_mappings, _set_mappings) | ||
4040 | 114 | |||
4041 | 115 | def format_changelogs(self): | ||
4042 | 116 | changeblocks = "\n".join(self.iter_changelogs(text=True)) | ||
4043 | 117 | return changeblocks | ||
4044 | 118 | |||
4045 | 119 | def __str__(self): | ||
4046 | 120 | return self.format_changelogs() | ||
4047 | 121 | |||
4048 | 122 | |||
4049 | 123 | class ReadChangelog(Changelog): | ||
4050 | 124 | |||
4051 | 125 | def __init__(self, pkg, version): | ||
4052 | 126 | self.logger = logging.getLogger("__changelog_{}__".format(pkg)) | ||
4053 | 127 | logging.basicConfig(format= | ||
4054 | 128 | '%(asctime)s %(levelname)s - [PARSING {}] %(message)s'.format( | ||
4055 | 129 | pkg)) | ||
4056 | 130 | self.logger.setLevel(logging.DEBUG) | ||
4057 | 131 | self.logger.debug("Parsing changelog for {}".format(version)) | ||
4058 | 132 | |||
4059 | 133 | ch_url = self.get_changelog_url(pkg, version) | ||
4060 | 134 | self.logger.debug("URL: {}".format(ch_url)) | ||
4061 | 135 | try: | ||
4062 | 136 | raw_changelog = self.get_changelog_from_url(ch_url) | ||
4063 | 137 | Changelog.__init__(self, raw_changelog) | ||
4064 | 138 | except Exception as e: | ||
4065 | 139 | self.logger.debug("Failed to parse changelog!\n{}".format(e)) | ||
4066 | 140 | |||
4067 | 141 | self.min_version = self._blocks[-1].version | ||
4068 | 142 | self.max_version = self.version | ||
4069 | 143 | |||
4070 | 144 | self.logger.debug("Opened changelog:") | ||
4071 | 145 | self.logger.debug(" Versions {} through {}".format(self.min_version, | ||
4072 | 146 | self.max_version)) | ||
4073 | 147 | |||
4074 | 148 | def get_changelog_url(self, pkg, version, pocket='main', url=None): | ||
4075 | 149 | """Return the changelog URL""" | ||
4076 | 150 | |||
4077 | 151 | url = url or changelog_url_base | ||
4078 | 152 | pdir = pkg[0] | ||
4079 | 153 | if pkg.startswith("lib"): | ||
4080 | 154 | pdir = pkg[:4] | ||
4081 | 155 | return url.format(pocket, pdir, pkg, pkg, version) | ||
4082 | 156 | |||
4083 | 157 | def get_changelog_from_url(self, url): | ||
4084 | 158 | """Fetch the change log""" | ||
4085 | 159 | try: | ||
4086 | 160 | chlog = requests.get(url) | ||
4087 | 161 | if chlog.status_code == requests.codes.ok: | ||
4088 | 162 | return chlog.text | ||
4089 | 163 | else: | ||
4090 | 164 | chlog.raise_for_status() | ||
4091 | 165 | |||
4092 | 166 | except requests.exceptions.HTTPError as e: | ||
4093 | 167 | self.logger.critical("Failed to fetch changelog at {}:\n{}".format( | ||
4094 | 168 | url, e)) | ||
4095 | 169 | |||
4096 | 170 | def compare_versions(self, v1, operator, v2): | ||
4097 | 171 | """Dirty, slow hack to compare versions""" | ||
4098 | 172 | cmd = ['/usr/bin/dpkg', '--compare-versions', str(v1), str(operator), | ||
4099 | 173 | str(v2)] | ||
4100 | 174 | try: | ||
4101 | 175 | subprocess.check_call(cmd) | ||
4102 | 176 | except subprocess.CalledProcessError as e: | ||
4103 | 177 | return False | ||
4104 | 178 | |||
4105 | 179 | return True | ||
4106 | 180 | |||
4107 | 181 | def iter_changeblocks(self): | ||
4108 | 182 | """Iterate over the change logs""" | ||
4109 | 183 | for block in self._blocks: | ||
4110 | 184 | yield block | ||
4111 | 185 | |||
4112 | 186 | def get_changes_between(self, minv=None, maxv=None, commits=None): | ||
4113 | 187 | """Get the changes between two versions""" | ||
4114 | 188 | blocks = [] | ||
4115 | 189 | # Don't waste CPU time if we are getting the whole log | ||
4116 | 190 | if minv is None and maxv is None: | ||
4117 | 191 | for block in self.iter_changeblocks(): | ||
4118 | 192 | blocks.append(block) | ||
4119 | 193 | |||
4120 | 194 | # Now deal with changes between | ||
4121 | 195 | minver = minv or self.min_version | ||
4122 | 196 | maxver = maxv or self.max_version | ||
4123 | 197 | |||
4124 | 198 | # Allow for comparing the latest version against arbitrary counts | ||
4125 | 199 | # i.e. you don't have to know the prior version | ||
4126 | 200 | if isinstance(minver, int) and minver <= -1: | ||
4127 | 201 | minver = self.versions[(abs(minver) - 1)] | ||
4128 | 202 | |||
4129 | 203 | if minv or maxv: | ||
4130 | 204 | for block in self.iter_changeblocks(): | ||
4131 | 205 | bver = block.version | ||
4132 | 206 | if minv: | ||
4133 | 207 | if not self.compare_versions(bver, 'ge', minver): | ||
4134 | 208 | continue | ||
4135 | 209 | if not maxv: | ||
4136 | 210 | blocks.append(block) | ||
4137 | 211 | elif self.compare_versions(bver, 'le', maxver): | ||
4138 | 212 | blocks.append(block) | ||
4139 | 213 | elif maxv: | ||
4140 | 214 | if not self.compare_versions(bver, 'le', maxver): | ||
4141 | 215 | continue | ||
4142 | 216 | if not minv: | ||
4143 | 217 | blocks.append(block) | ||
4144 | 218 | elif self.compare_versions(bver, 'ge', minver): | ||
4145 | 219 | blocks.append(block) | ||
4146 | 220 | |||
4147 | 221 | ret = ChangeDelta(self.package, blocks) | ||
4148 | 222 | return ret | ||
4149 | 0 | 223 | ||
4150 | === added file 'pylib/changelogger/__init__.py' | |||
4151 | === added directory 'pylib/requests' | |||
4152 | === added file 'pylib/requests/__init__.py' | |||
4153 | --- pylib/requests/__init__.py 1970-01-01 00:00:00 +0000 | |||
4154 | +++ pylib/requests/__init__.py 2018-05-31 04:33:07 +0000 | |||
4155 | @@ -0,0 +1,77 @@ | |||
4156 | 1 | # -*- coding: utf-8 -*- | ||
4157 | 2 | |||
4158 | 3 | # __ | ||
4159 | 4 | # /__) _ _ _ _ _/ _ | ||
4160 | 5 | # / ( (- (/ (/ (- _) / _) | ||
4161 | 6 | # / | ||
4162 | 7 | |||
4163 | 8 | """ | ||
4164 | 9 | requests HTTP library | ||
4165 | 10 | ~~~~~~~~~~~~~~~~~~~~~ | ||
4166 | 11 | |||
4167 | 12 | Requests is an HTTP library, written in Python, for human beings. Basic GET | ||
4168 | 13 | usage: | ||
4169 | 14 | |||
4170 | 15 | >>> import requests | ||
4171 | 16 | >>> r = requests.get('http://python.org') | ||
4172 | 17 | >>> r.status_code | ||
4173 | 18 | 200 | ||
4174 | 19 | >>> 'Python is a programming language' in r.content | ||
4175 | 20 | True | ||
4176 | 21 | |||
4177 | 22 | ... or POST: | ||
4178 | 23 | |||
4179 | 24 | >>> payload = dict(key1='value1', key2='value2') | ||
4180 | 25 | >>> r = requests.post("http://httpbin.org/post", data=payload) | ||
4181 | 26 | >>> print(r.text) | ||
4182 | 27 | { | ||
4183 | 28 | ... | ||
4184 | 29 | "form": { | ||
4185 | 30 | "key2": "value2", | ||
4186 | 31 | "key1": "value1" | ||
4187 | 32 | }, | ||
4188 | 33 | ... | ||
4189 | 34 | } | ||
4190 | 35 | |||
4191 | 36 | The other HTTP methods are supported - see `requests.api`. Full documentation | ||
4192 | 37 | is at <http://python-requests.org>. | ||
4193 | 38 | |||
4194 | 39 | :copyright: (c) 2014 by Kenneth Reitz. | ||
4195 | 40 | :license: Apache 2.0, see LICENSE for more details. | ||
4196 | 41 | |||
4197 | 42 | """ | ||
4198 | 43 | |||
4199 | 44 | __title__ = 'requests' | ||
4200 | 45 | __version__ = '2.3.0' | ||
4201 | 46 | __build__ = 0x020300 | ||
4202 | 47 | __author__ = 'Kenneth Reitz' | ||
4203 | 48 | __license__ = 'Apache 2.0' | ||
4204 | 49 | __copyright__ = 'Copyright 2014 Kenneth Reitz' | ||
4205 | 50 | |||
4206 | 51 | # Attempt to enable urllib3's SNI support, if possible | ||
4207 | 52 | try: | ||
4208 | 53 | from .packages.urllib3.contrib import pyopenssl | ||
4209 | 54 | pyopenssl.inject_into_urllib3() | ||
4210 | 55 | except ImportError: | ||
4211 | 56 | pass | ||
4212 | 57 | |||
4213 | 58 | from . import utils | ||
4214 | 59 | from .models import Request, Response, PreparedRequest | ||
4215 | 60 | from .api import request, get, head, post, patch, put, delete, options | ||
4216 | 61 | from .sessions import session, Session | ||
4217 | 62 | from .status_codes import codes | ||
4218 | 63 | from .exceptions import ( | ||
4219 | 64 | RequestException, Timeout, URLRequired, | ||
4220 | 65 | TooManyRedirects, HTTPError, ConnectionError | ||
4221 | 66 | ) | ||
4222 | 67 | |||
4223 | 68 | # Set default logging handler to avoid "No handler found" warnings. | ||
4224 | 69 | import logging | ||
4225 | 70 | try: # Python 2.7+ | ||
4226 | 71 | from logging import NullHandler | ||
4227 | 72 | except ImportError: | ||
4228 | 73 | class NullHandler(logging.Handler): | ||
4229 | 74 | def emit(self, record): | ||
4230 | 75 | pass | ||
4231 | 76 | |||
4232 | 77 | logging.getLogger(__name__).addHandler(NullHandler()) | ||
4233 | 0 | 78 | ||
4234 | === added file 'pylib/requests/adapters.py' | |||
4235 | --- pylib/requests/adapters.py 1970-01-01 00:00:00 +0000 | |||
4236 | +++ pylib/requests/adapters.py 2018-05-31 04:33:07 +0000 | |||
4237 | @@ -0,0 +1,388 @@ | |||
4238 | 1 | # -*- coding: utf-8 -*- | ||
4239 | 2 | |||
4240 | 3 | """ | ||
4241 | 4 | requests.adapters | ||
4242 | 5 | ~~~~~~~~~~~~~~~~~ | ||
4243 | 6 | |||
4244 | 7 | This module contains the transport adapters that Requests uses to define | ||
4245 | 8 | and maintain connections. | ||
4246 | 9 | """ | ||
4247 | 10 | |||
4248 | 11 | import socket | ||
4249 | 12 | |||
4250 | 13 | from .models import Response | ||
4251 | 14 | from .packages.urllib3.poolmanager import PoolManager, proxy_from_url | ||
4252 | 15 | from .packages.urllib3.response import HTTPResponse | ||
4253 | 16 | from .packages.urllib3.util import Timeout as TimeoutSauce | ||
4254 | 17 | from .compat import urlparse, basestring, urldefrag, unquote | ||
4255 | 18 | from .utils import (DEFAULT_CA_BUNDLE_PATH, get_encoding_from_headers, | ||
4256 | 19 | prepend_scheme_if_needed, get_auth_from_url) | ||
4257 | 20 | from .structures import CaseInsensitiveDict | ||
4258 | 21 | from .packages.urllib3.exceptions import MaxRetryError | ||
4259 | 22 | from .packages.urllib3.exceptions import TimeoutError | ||
4260 | 23 | from .packages.urllib3.exceptions import SSLError as _SSLError | ||
4261 | 24 | from .packages.urllib3.exceptions import HTTPError as _HTTPError | ||
4262 | 25 | from .packages.urllib3.exceptions import ProxyError as _ProxyError | ||
4263 | 26 | from .cookies import extract_cookies_to_jar | ||
4264 | 27 | from .exceptions import ConnectionError, Timeout, SSLError, ProxyError | ||
4265 | 28 | from .auth import _basic_auth_str | ||
4266 | 29 | |||
4267 | 30 | DEFAULT_POOLBLOCK = False | ||
4268 | 31 | DEFAULT_POOLSIZE = 10 | ||
4269 | 32 | DEFAULT_RETRIES = 0 | ||
4270 | 33 | |||
4271 | 34 | |||
4272 | 35 | class BaseAdapter(object): | ||
4273 | 36 | """The Base Transport Adapter""" | ||
4274 | 37 | |||
4275 | 38 | def __init__(self): | ||
4276 | 39 | super(BaseAdapter, self).__init__() | ||
4277 | 40 | |||
4278 | 41 | def send(self): | ||
4279 | 42 | raise NotImplementedError | ||
4280 | 43 | |||
4281 | 44 | def close(self): | ||
4282 | 45 | raise NotImplementedError | ||
4283 | 46 | |||
4284 | 47 | |||
4285 | 48 | class HTTPAdapter(BaseAdapter): | ||
4286 | 49 | """The built-in HTTP Adapter for urllib3. | ||
4287 | 50 | |||
4288 | 51 | Provides a general-case interface for Requests sessions to contact HTTP and | ||
4289 | 52 | HTTPS urls by implementing the Transport Adapter interface. This class will | ||
4290 | 53 | usually be created by the :class:`Session <Session>` class under the | ||
4291 | 54 | covers. | ||
4292 | 55 | |||
4293 | 56 | :param pool_connections: The number of urllib3 connection pools to cache. | ||
4294 | 57 | :param pool_maxsize: The maximum number of connections to save in the pool. | ||
4295 | 58 | :param int max_retries: The maximum number of retries each connection | ||
4296 | 59 | should attempt. Note, this applies only to failed connections and | ||
4297 | 60 | timeouts, never to requests where the server returns a response. | ||
4298 | 61 | :param pool_block: Whether the connection pool should block for connections. | ||
4299 | 62 | |||
4300 | 63 | Usage:: | ||
4301 | 64 | |||
4302 | 65 | >>> import requests | ||
4303 | 66 | >>> s = requests.Session() | ||
4304 | 67 | >>> a = requests.adapters.HTTPAdapter(max_retries=3) | ||
4305 | 68 | >>> s.mount('http://', a) | ||
4306 | 69 | """ | ||
4307 | 70 | __attrs__ = ['max_retries', 'config', '_pool_connections', '_pool_maxsize', | ||
4308 | 71 | '_pool_block'] | ||
4309 | 72 | |||
4310 | 73 | def __init__(self, pool_connections=DEFAULT_POOLSIZE, | ||
4311 | 74 | pool_maxsize=DEFAULT_POOLSIZE, max_retries=DEFAULT_RETRIES, | ||
4312 | 75 | pool_block=DEFAULT_POOLBLOCK): | ||
4313 | 76 | self.max_retries = max_retries | ||
4314 | 77 | self.config = {} | ||
4315 | 78 | self.proxy_manager = {} | ||
4316 | 79 | |||
4317 | 80 | super(HTTPAdapter, self).__init__() | ||
4318 | 81 | |||
4319 | 82 | self._pool_connections = pool_connections | ||
4320 | 83 | self._pool_maxsize = pool_maxsize | ||
4321 | 84 | self._pool_block = pool_block | ||
4322 | 85 | |||
4323 | 86 | self.init_poolmanager(pool_connections, pool_maxsize, block=pool_block) | ||
4324 | 87 | |||
4325 | 88 | def __getstate__(self): | ||
4326 | 89 | return dict((attr, getattr(self, attr, None)) for attr in | ||
4327 | 90 | self.__attrs__) | ||
4328 | 91 | |||
4329 | 92 | def __setstate__(self, state): | ||
4330 | 93 | # Can't handle by adding 'proxy_manager' to self.__attrs__ because | ||
4331 | 94 | # self.poolmanager uses a lambda function, which isn't pickleable. | ||
4332 | 95 | self.proxy_manager = {} | ||
4333 | 96 | self.config = {} | ||
4334 | 97 | |||
4335 | 98 | for attr, value in state.items(): | ||
4336 | 99 | setattr(self, attr, value) | ||
4337 | 100 | |||
4338 | 101 | self.init_poolmanager(self._pool_connections, self._pool_maxsize, | ||
4339 | 102 | block=self._pool_block) | ||
4340 | 103 | |||
4341 | 104 | def init_poolmanager(self, connections, maxsize, block=DEFAULT_POOLBLOCK): | ||
4342 | 105 | """Initializes a urllib3 PoolManager. This method should not be called | ||
4343 | 106 | from user code, and is only exposed for use when subclassing the | ||
4344 | 107 | :class:`HTTPAdapter <requests.adapters.HTTPAdapter>`. | ||
4345 | 108 | |||
4346 | 109 | :param connections: The number of urllib3 connection pools to cache. | ||
4347 | 110 | :param maxsize: The maximum number of connections to save in the pool. | ||
4348 | 111 | :param block: Block when no free connections are available. | ||
4349 | 112 | """ | ||
4350 | 113 | # save these values for pickling | ||
4351 | 114 | self._pool_connections = connections | ||
4352 | 115 | self._pool_maxsize = maxsize | ||
4353 | 116 | self._pool_block = block | ||
4354 | 117 | |||
4355 | 118 | self.poolmanager = PoolManager(num_pools=connections, maxsize=maxsize, | ||
4356 | 119 | block=block) | ||
4357 | 120 | |||
4358 | 121 | def cert_verify(self, conn, url, verify, cert): | ||
4359 | 122 | """Verify an SSL certificate. This method should not be called from user | ||
4360 | 123 | code, and is only exposed for use when subclassing the | ||
4361 | 124 | :class:`HTTPAdapter <requests.adapters.HTTPAdapter>`. | ||
4362 | 125 | |||
4363 | 126 | :param conn: The urllib3 connection object associated with the cert. | ||
4364 | 127 | :param url: The requested URL. | ||
4365 | 128 | :param verify: Whether we should actually verify the certificate. | ||
4366 | 129 | :param cert: The SSL certificate to verify. | ||
4367 | 130 | """ | ||
4368 | 131 | if url.lower().startswith('https') and verify: | ||
4369 | 132 | |||
4370 | 133 | cert_loc = None | ||
4371 | 134 | |||
4372 | 135 | # Allow self-specified cert location. | ||
4373 | 136 | if verify is not True: | ||
4374 | 137 | cert_loc = verify | ||
4375 | 138 | |||
4376 | 139 | if not cert_loc: | ||
4377 | 140 | cert_loc = DEFAULT_CA_BUNDLE_PATH | ||
4378 | 141 | |||
4379 | 142 | if not cert_loc: | ||
4380 | 143 | raise Exception("Could not find a suitable SSL CA certificate bundle.") | ||
4381 | 144 | |||
4382 | 145 | conn.cert_reqs = 'CERT_REQUIRED' | ||
4383 | 146 | conn.ca_certs = cert_loc | ||
4384 | 147 | else: | ||
4385 | 148 | conn.cert_reqs = 'CERT_NONE' | ||
4386 | 149 | conn.ca_certs = None | ||
4387 | 150 | |||
4388 | 151 | if cert: | ||
4389 | 152 | if not isinstance(cert, basestring): | ||
4390 | 153 | conn.cert_file = cert[0] | ||
4391 | 154 | conn.key_file = cert[1] | ||
4392 | 155 | else: | ||
4393 | 156 | conn.cert_file = cert | ||
4394 | 157 | |||
4395 | 158 | def build_response(self, req, resp): | ||
4396 | 159 | """Builds a :class:`Response <requests.Response>` object from a urllib3 | ||
4397 | 160 | response. This should not be called from user code, and is only exposed | ||
4398 | 161 | for use when subclassing the | ||
4399 | 162 | :class:`HTTPAdapter <requests.adapters.HTTPAdapter>` | ||
4400 | 163 | |||
4401 | 164 | :param req: The :class:`PreparedRequest <PreparedRequest>` used to generate the response. | ||
4402 | 165 | :param resp: The urllib3 response object. | ||
4403 | 166 | """ | ||
4404 | 167 | response = Response() | ||
4405 | 168 | |||
4406 | 169 | # Fallback to None if there's no status_code, for whatever reason. | ||
4407 | 170 | response.status_code = getattr(resp, 'status', None) | ||
4408 | 171 | |||
4409 | 172 | # Make headers case-insensitive. | ||
4410 | 173 | response.headers = CaseInsensitiveDict(getattr(resp, 'headers', {})) | ||
4411 | 174 | |||
4412 | 175 | # Set encoding. | ||
4413 | 176 | response.encoding = get_encoding_from_headers(response.headers) | ||
4414 | 177 | response.raw = resp | ||
4415 | 178 | response.reason = response.raw.reason | ||
4416 | 179 | |||
4417 | 180 | if isinstance(req.url, bytes): | ||
4418 | 181 | response.url = req.url.decode('utf-8') | ||
4419 | 182 | else: | ||
4420 | 183 | response.url = req.url | ||
4421 | 184 | |||
4422 | 185 | # Add new cookies from the server. | ||
4423 | 186 | extract_cookies_to_jar(response.cookies, req, resp) | ||
4424 | 187 | |||
4425 | 188 | # Give the Response some context. | ||
4426 | 189 | response.request = req | ||
4427 | 190 | response.connection = self | ||
4428 | 191 | |||
4429 | 192 | return response | ||
4430 | 193 | |||
4431 | 194 | def get_connection(self, url, proxies=None): | ||
4432 | 195 | """Returns a urllib3 connection for the given URL. This should not be | ||
4433 | 196 | called from user code, and is only exposed for use when subclassing the | ||
4434 | 197 | :class:`HTTPAdapter <requests.adapters.HTTPAdapter>`. | ||
4435 | 198 | |||
4436 | 199 | :param url: The URL to connect to. | ||
4437 | 200 | :param proxies: (optional) A Requests-style dictionary of proxies used on this request. | ||
4438 | 201 | """ | ||
4439 | 202 | proxies = proxies or {} | ||
4440 | 203 | proxy = proxies.get(urlparse(url.lower()).scheme) | ||
4441 | 204 | |||
4442 | 205 | if proxy: | ||
4443 | 206 | proxy = prepend_scheme_if_needed(proxy, 'http') | ||
4444 | 207 | proxy_headers = self.proxy_headers(proxy) | ||
4445 | 208 | |||
4446 | 209 | if proxy not in self.proxy_manager: | ||
4447 | 210 | self.proxy_manager[proxy] = proxy_from_url( | ||
4448 | 211 | proxy, | ||
4449 | 212 | proxy_headers=proxy_headers, | ||
4450 | 213 | num_pools=self._pool_connections, | ||
4451 | 214 | maxsize=self._pool_maxsize, | ||
4452 | 215 | block=self._pool_block) | ||
4453 | 216 | |||
4454 | 217 | conn = self.proxy_manager[proxy].connection_from_url(url) | ||
4455 | 218 | else: | ||
4456 | 219 | # Only scheme should be lower case | ||
4457 | 220 | parsed = urlparse(url) | ||
4458 | 221 | url = parsed.geturl() | ||
4459 | 222 | conn = self.poolmanager.connection_from_url(url) | ||
4460 | 223 | |||
4461 | 224 | return conn | ||
4462 | 225 | |||
4463 | 226 | def close(self): | ||
4464 | 227 | """Disposes of any internal state. | ||
4465 | 228 | |||
4466 | 229 | Currently, this just closes the PoolManager, which closes pooled | ||
4467 | 230 | connections. | ||
4468 | 231 | """ | ||
4469 | 232 | self.poolmanager.clear() | ||
4470 | 233 | |||
4471 | 234 | def request_url(self, request, proxies): | ||
4472 | 235 | """Obtain the url to use when making the final request. | ||
4473 | 236 | |||
4474 | 237 | If the message is being sent through a HTTP proxy, the full URL has to | ||
4475 | 238 | be used. Otherwise, we should only use the path portion of the URL. | ||
4476 | 239 | |||
4477 | 240 | This should not be called from user code, and is only exposed for use | ||
4478 | 241 | when subclassing the | ||
4479 | 242 | :class:`HTTPAdapter <requests.adapters.HTTPAdapter>`. | ||
4480 | 243 | |||
4481 | 244 | :param request: The :class:`PreparedRequest <PreparedRequest>` being sent. | ||
4482 | 245 | :param proxies: A dictionary of schemes to proxy URLs. | ||
4483 | 246 | """ | ||
4484 | 247 | proxies = proxies or {} | ||
4485 | 248 | scheme = urlparse(request.url).scheme | ||
4486 | 249 | proxy = proxies.get(scheme) | ||
4487 | 250 | |||
4488 | 251 | if proxy and scheme != 'https': | ||
4489 | 252 | url, _ = urldefrag(request.url) | ||
4490 | 253 | else: | ||
4491 | 254 | url = request.path_url | ||
4492 | 255 | |||
4493 | 256 | return url | ||
4494 | 257 | |||
4495 | 258 | def add_headers(self, request, **kwargs): | ||
4496 | 259 | """Add any headers needed by the connection. As of v2.0 this does | ||
4497 | 260 | nothing by default, but is left for overriding by users that subclass | ||
4498 | 261 | the :class:`HTTPAdapter <requests.adapters.HTTPAdapter>`. | ||
4499 | 262 | |||
4500 | 263 | This should not be called from user code, and is only exposed for use | ||
4501 | 264 | when subclassing the | ||
4502 | 265 | :class:`HTTPAdapter <requests.adapters.HTTPAdapter>`. | ||
4503 | 266 | |||
4504 | 267 | :param request: The :class:`PreparedRequest <PreparedRequest>` to add headers to. | ||
4505 | 268 | :param kwargs: The keyword arguments from the call to send(). | ||
4506 | 269 | """ | ||
4507 | 270 | pass | ||
4508 | 271 | |||
4509 | 272 | def proxy_headers(self, proxy): | ||
4510 | 273 | """Returns a dictionary of the headers to add to any request sent | ||
4511 | 274 | through a proxy. This works with urllib3 magic to ensure that they are | ||
4512 | 275 | correctly sent to the proxy, rather than in a tunnelled request if | ||
4513 | 276 | CONNECT is being used. | ||
4514 | 277 | |||
4515 | 278 | This should not be called from user code, and is only exposed for use | ||
4516 | 279 | when subclassing the | ||
4517 | 280 | :class:`HTTPAdapter <requests.adapters.HTTPAdapter>`. | ||
4518 | 281 | |||
4519 | 282 | :param proxies: The url of the proxy being used for this request. | ||
4520 | 283 | :param kwargs: Optional additional keyword arguments. | ||
4521 | 284 | """ | ||
4522 | 285 | headers = {} | ||
4523 | 286 | username, password = get_auth_from_url(proxy) | ||
4524 | 287 | |||
4525 | 288 | if username and password: | ||
4526 | 289 | headers['Proxy-Authorization'] = _basic_auth_str(username, | ||
4527 | 290 | password) | ||
4528 | 291 | |||
4529 | 292 | return headers | ||
4530 | 293 | |||
4531 | 294 | def send(self, request, stream=False, timeout=None, verify=True, cert=None, proxies=None): | ||
4532 | 295 | """Sends PreparedRequest object. Returns Response object. | ||
4533 | 296 | |||
4534 | 297 | :param request: The :class:`PreparedRequest <PreparedRequest>` being sent. | ||
4535 | 298 | :param stream: (optional) Whether to stream the request content. | ||
4536 | 299 | :param timeout: (optional) The timeout on the request. | ||
4537 | 300 | :param verify: (optional) Whether to verify SSL certificates. | ||
4538 | 301 | :param cert: (optional) Any user-provided SSL certificate to be trusted. | ||
4539 | 302 | :param proxies: (optional) The proxies dictionary to apply to the request. | ||
4540 | 303 | """ | ||
4541 | 304 | |||
4542 | 305 | conn = self.get_connection(request.url, proxies) | ||
4543 | 306 | |||
4544 | 307 | self.cert_verify(conn, request.url, verify, cert) | ||
4545 | 308 | url = self.request_url(request, proxies) | ||
4546 | 309 | self.add_headers(request) | ||
4547 | 310 | |||
4548 | 311 | chunked = not (request.body is None or 'Content-Length' in request.headers) | ||
4549 | 312 | |||
4550 | 313 | timeout = TimeoutSauce(connect=timeout, read=timeout) | ||
4551 | 314 | |||
4552 | 315 | try: | ||
4553 | 316 | if not chunked: | ||
4554 | 317 | resp = conn.urlopen( | ||
4555 | 318 | method=request.method, | ||
4556 | 319 | url=url, | ||
4557 | 320 | body=request.body, | ||
4558 | 321 | headers=request.headers, | ||
4559 | 322 | redirect=False, | ||
4560 | 323 | assert_same_host=False, | ||
4561 | 324 | preload_content=False, | ||
4562 | 325 | decode_content=False, | ||
4563 | 326 | retries=self.max_retries, | ||
4564 | 327 | timeout=timeout | ||
4565 | 328 | ) | ||
4566 | 329 | |||
4567 | 330 | # Send the request. | ||
4568 | 331 | else: | ||
4569 | 332 | if hasattr(conn, 'proxy_pool'): | ||
4570 | 333 | conn = conn.proxy_pool | ||
4571 | 334 | |||
4572 | 335 | low_conn = conn._get_conn(timeout=timeout) | ||
4573 | 336 | |||
4574 | 337 | try: | ||
4575 | 338 | low_conn.putrequest(request.method, | ||
4576 | 339 | url, | ||
4577 | 340 | skip_accept_encoding=True) | ||
4578 | 341 | |||
4579 | 342 | for header, value in request.headers.items(): | ||
4580 | 343 | low_conn.putheader(header, value) | ||
4581 | 344 | |||
4582 | 345 | low_conn.endheaders() | ||
4583 | 346 | |||
4584 | 347 | for i in request.body: | ||
4585 | 348 | low_conn.send(hex(len(i))[2:].encode('utf-8')) | ||
4586 | 349 | low_conn.send(b'\r\n') | ||
4587 | 350 | low_conn.send(i) | ||
4588 | 351 | low_conn.send(b'\r\n') | ||
4589 | 352 | low_conn.send(b'0\r\n\r\n') | ||
4590 | 353 | |||
4591 | 354 | r = low_conn.getresponse() | ||
4592 | 355 | resp = HTTPResponse.from_httplib( | ||
4593 | 356 | r, | ||
4594 | 357 | pool=conn, | ||
4595 | 358 | connection=low_conn, | ||
4596 | 359 | preload_content=False, | ||
4597 | 360 | decode_content=False | ||
4598 | 361 | ) | ||
4599 | 362 | except: | ||
4600 | 363 | # If we hit any problems here, clean up the connection. | ||
4601 | 364 | # Then, reraise so that we can handle the actual exception. | ||
4602 | 365 | low_conn.close() | ||
4603 | 366 | raise | ||
4604 | 367 | else: | ||
4605 | 368 | # All is well, return the connection to the pool. | ||
4606 | 369 | conn._put_conn(low_conn) | ||
4607 | 370 | |||
4608 | 371 | except socket.error as sockerr: | ||
4609 | 372 | raise ConnectionError(sockerr, request=request) | ||
4610 | 373 | |||
4611 | 374 | except MaxRetryError as e: | ||
4612 | 375 | raise ConnectionError(e, request=request) | ||
4613 | 376 | |||
4614 | 377 | except _ProxyError as e: | ||
4615 | 378 | raise ProxyError(e) | ||
4616 | 379 | |||
4617 | 380 | except (_SSLError, _HTTPError) as e: | ||
4618 | 381 | if isinstance(e, _SSLError): | ||
4619 | 382 | raise SSLError(e, request=request) | ||
4620 | 383 | elif isinstance(e, TimeoutError): | ||
4621 | 384 | raise Timeout(e, request=request) | ||
4622 | 385 | else: | ||
4623 | 386 | raise | ||
4624 | 387 | |||
4625 | 388 | return self.build_response(request, resp) | ||
4626 | 0 | 389 | ||
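
Note on adapters.py (vendored requests 2.3.0): the docstrings repeatedly mark add_headers(), init_poolmanager(), and request_url() as the supported override points for subclasses. A minimal sketch of the intended subclassing pattern (the header name is illustrative):

    # sketch: per-request header injection via the add_headers() hook
    import requests
    from requests.adapters import HTTPAdapter

    class TaggedAdapter(HTTPAdapter):
        def add_headers(self, request, **kwargs):
            # called once for every outgoing PreparedRequest
            request.headers['X-Build-Job'] = 'cloud-images'

    s = requests.Session()
    s.mount('https://', TaggedAdapter(max_retries=3))
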
4627 | === added file 'pylib/requests/api.py' | |||
4628 | --- pylib/requests/api.py 1970-01-01 00:00:00 +0000 | |||
4629 | +++ pylib/requests/api.py 2018-05-31 04:33:07 +0000 | |||
4630 | @@ -0,0 +1,120 @@ | |||
4631 | 1 | # -*- coding: utf-8 -*- | ||
4632 | 2 | |||
4633 | 3 | """ | ||
4634 | 4 | requests.api | ||
4635 | 5 | ~~~~~~~~~~~~ | ||
4636 | 6 | |||
4637 | 7 | This module implements the Requests API. | ||
4638 | 8 | |||
4639 | 9 | :copyright: (c) 2012 by Kenneth Reitz. | ||
4640 | 10 | :license: Apache2, see LICENSE for more details. | ||
4641 | 11 | |||
4642 | 12 | """ | ||
4643 | 13 | |||
4644 | 14 | from . import sessions | ||
4645 | 15 | |||
4646 | 16 | |||
4647 | 17 | def request(method, url, **kwargs): | ||
4648 | 18 | """Constructs and sends a :class:`Request <Request>`. | ||
4649 | 19 | Returns :class:`Response <Response>` object. | ||
4650 | 20 | |||
4651 | 21 | :param method: method for the new :class:`Request` object. | ||
4652 | 22 | :param url: URL for the new :class:`Request` object. | ||
4653 | 23 | :param params: (optional) Dictionary or bytes to be sent in the query string for the :class:`Request`. | ||
4654 | 24 | :param data: (optional) Dictionary, bytes, or file-like object to send in the body of the :class:`Request`. | ||
4655 | 25 | :param headers: (optional) Dictionary of HTTP Headers to send with the :class:`Request`. | ||
4656 | 26 | :param cookies: (optional) Dict or CookieJar object to send with the :class:`Request`. | ||
4657 | 27 | :param files: (optional) Dictionary of 'name': file-like-objects (or {'name': ('filename', fileobj)}) for multipart encoding upload. | ||
4658 | 28 | :param auth: (optional) Auth tuple to enable Basic/Digest/Custom HTTP Auth. | ||
4659 | 29 | :param timeout: (optional) Float describing the timeout of the request in seconds. | ||
4660 | 30 | :param allow_redirects: (optional) Boolean. Set to True if POST/PUT/DELETE redirect following is allowed. | ||
4661 | 31 | :param proxies: (optional) Dictionary mapping protocol to the URL of the proxy. | ||
4662 | 32 | :param verify: (optional) if ``True``, the SSL cert will be verified. A CA_BUNDLE path can also be provided. | ||
4663 | 33 | :param stream: (optional) if ``False``, the response content will be immediately downloaded. | ||
4664 | 34 | :param cert: (optional) if String, path to ssl client cert file (.pem). If Tuple, ('cert', 'key') pair. | ||
4665 | 35 | |||
4666 | 36 | Usage:: | ||
4667 | 37 | |||
4668 | 38 | >>> import requests | ||
4669 | 39 | >>> req = requests.request('GET', 'http://httpbin.org/get') | ||
4670 | 40 | <Response [200]> | ||
4671 | 41 | """ | ||
4672 | 42 | |||
4673 | 43 | session = sessions.Session() | ||
4674 | 44 | return session.request(method=method, url=url, **kwargs) | ||
4675 | 45 | |||
4676 | 46 | |||
4677 | 47 | def get(url, **kwargs): | ||
4678 | 48 | """Sends a GET request. Returns :class:`Response` object. | ||
4679 | 49 | |||
4680 | 50 | :param url: URL for the new :class:`Request` object. | ||
4681 | 51 | :param \*\*kwargs: Optional arguments that ``request`` takes. | ||
4682 | 52 | """ | ||
4683 | 53 | |||
4684 | 54 | kwargs.setdefault('allow_redirects', True) | ||
4685 | 55 | return request('get', url, **kwargs) | ||
4686 | 56 | |||
4687 | 57 | |||
4688 | 58 | def options(url, **kwargs): | ||
4689 | 59 | """Sends an OPTIONS request. Returns :class:`Response` object. | ||
4690 | 60 | |||
4691 | 61 | :param url: URL for the new :class:`Request` object. | ||
4692 | 62 | :param \*\*kwargs: Optional arguments that ``request`` takes. | ||
4693 | 63 | """ | ||
4694 | 64 | |||
4695 | 65 | kwargs.setdefault('allow_redirects', True) | ||
4696 | 66 | return request('options', url, **kwargs) | ||
4697 | 67 | |||
4698 | 68 | |||
4699 | 69 | def head(url, **kwargs): | ||
4700 | 70 | """Sends a HEAD request. Returns :class:`Response` object. | ||
4701 | 71 | |||
4702 | 72 | :param url: URL for the new :class:`Request` object. | ||
4703 | 73 | :param \*\*kwargs: Optional arguments that ``request`` takes. | ||
4704 | 74 | """ | ||
4705 | 75 | |||
4706 | 76 | kwargs.setdefault('allow_redirects', False) | ||
4707 | 77 | return request('head', url, **kwargs) | ||
4708 | 78 | |||
4709 | 79 | |||
4710 | 80 | def post(url, data=None, **kwargs): | ||
4711 | 81 | """Sends a POST request. Returns :class:`Response` object. | ||
4712 | 82 | |||
4713 | 83 | :param url: URL for the new :class:`Request` object. | ||
4714 | 84 | :param data: (optional) Dictionary, bytes, or file-like object to send in the body of the :class:`Request`. | ||
4715 | 85 | :param \*\*kwargs: Optional arguments that ``request`` takes. | ||
4716 | 86 | """ | ||
4717 | 87 | |||
4718 | 88 | return request('post', url, data=data, **kwargs) | ||
4719 | 89 | |||
4720 | 90 | |||
4721 | 91 | def put(url, data=None, **kwargs): | ||
4722 | 92 | """Sends a PUT request. Returns :class:`Response` object. | ||
4723 | 93 | |||
4724 | 94 | :param url: URL for the new :class:`Request` object. | ||
4725 | 95 | :param data: (optional) Dictionary, bytes, or file-like object to send in the body of the :class:`Request`. | ||
4726 | 96 | :param \*\*kwargs: Optional arguments that ``request`` takes. | ||
4727 | 97 | """ | ||
4728 | 98 | |||
4729 | 99 | return request('put', url, data=data, **kwargs) | ||
4730 | 100 | |||
4731 | 101 | |||
4732 | 102 | def patch(url, data=None, **kwargs): | ||
4733 | 103 | """Sends a PATCH request. Returns :class:`Response` object. | ||
4734 | 104 | |||
4735 | 105 | :param url: URL for the new :class:`Request` object. | ||
4736 | 106 | :param data: (optional) Dictionary, bytes, or file-like object to send in the body of the :class:`Request`. | ||
4737 | 107 | :param \*\*kwargs: Optional arguments that ``request`` takes. | ||
4738 | 108 | """ | ||
4739 | 109 | |||
4740 | 110 | return request('patch', url, data=data, **kwargs) | ||
4741 | 111 | |||
4742 | 112 | |||
4743 | 113 | def delete(url, **kwargs): | ||
4744 | 114 | """Sends a DELETE request. Returns :class:`Response` object. | ||
4745 | 115 | |||
4746 | 116 | :param url: URL for the new :class:`Request` object. | ||
4747 | 117 | :param \*\*kwargs: Optional arguments that ``request`` takes. | ||
4748 | 118 | """ | ||
4749 | 119 | |||
4750 | 120 | return request('delete', url, **kwargs) | ||
4751 | 0 | 121 | ||
=== added file 'pylib/requests/auth.py'
--- pylib/requests/auth.py 1970-01-01 00:00:00 +0000
+++ pylib/requests/auth.py 2018-05-31 04:33:07 +0000
@@ -0,0 +1,193 @@
# -*- coding: utf-8 -*-

"""
requests.auth
~~~~~~~~~~~~~

This module contains the authentication handlers for Requests.
"""

import os
import re
import time
import hashlib

from base64 import b64encode

from .compat import urlparse, str
from .cookies import extract_cookies_to_jar
from .utils import parse_dict_header

CONTENT_TYPE_FORM_URLENCODED = 'application/x-www-form-urlencoded'
CONTENT_TYPE_MULTI_PART = 'multipart/form-data'


def _basic_auth_str(username, password):
    """Returns a Basic Auth string."""

    return 'Basic ' + b64encode(('%s:%s' % (username, password)).encode('latin1')).strip().decode('latin1')


class AuthBase(object):
    """Base class that all auth implementations derive from"""

    def __call__(self, r):
        raise NotImplementedError('Auth hooks must be callable.')


class HTTPBasicAuth(AuthBase):
    """Attaches HTTP Basic Authentication to the given Request object."""
    def __init__(self, username, password):
        self.username = username
        self.password = password

    def __call__(self, r):
        r.headers['Authorization'] = _basic_auth_str(self.username, self.password)
        return r


class HTTPProxyAuth(HTTPBasicAuth):
    """Attaches HTTP Proxy Authentication to a given Request object."""
    def __call__(self, r):
        r.headers['Proxy-Authorization'] = _basic_auth_str(self.username, self.password)
        return r


class HTTPDigestAuth(AuthBase):
    """Attaches HTTP Digest Authentication to the given Request object."""
    def __init__(self, username, password):
        self.username = username
        self.password = password
        self.last_nonce = ''
        self.nonce_count = 0
        self.chal = {}
        self.pos = None

    def build_digest_header(self, method, url):

        realm = self.chal['realm']
        nonce = self.chal['nonce']
        qop = self.chal.get('qop')
        algorithm = self.chal.get('algorithm')
        opaque = self.chal.get('opaque')

        # Initialize so the guard below works for unsupported algorithms
        # (otherwise the `is None` check would raise NameError).
        hash_utf8 = None
        if algorithm is None:
            _algorithm = 'MD5'
        else:
            _algorithm = algorithm.upper()
        # lambdas assume digest modules are imported at the top level
        if _algorithm == 'MD5' or _algorithm == 'MD5-SESS':
            def md5_utf8(x):
                if isinstance(x, str):
                    x = x.encode('utf-8')
                return hashlib.md5(x).hexdigest()
            hash_utf8 = md5_utf8
        elif _algorithm == 'SHA':
            def sha_utf8(x):
                if isinstance(x, str):
                    x = x.encode('utf-8')
                return hashlib.sha1(x).hexdigest()
            hash_utf8 = sha_utf8

        KD = lambda s, d: hash_utf8("%s:%s" % (s, d))

        if hash_utf8 is None:
            return None

        # XXX not implemented yet
        entdig = None
        p_parsed = urlparse(url)
        path = p_parsed.path
        if p_parsed.query:
            path += '?' + p_parsed.query

        A1 = '%s:%s:%s' % (self.username, realm, self.password)
        A2 = '%s:%s' % (method, path)

        HA1 = hash_utf8(A1)
        HA2 = hash_utf8(A2)

        if nonce == self.last_nonce:
            self.nonce_count += 1
        else:
            self.nonce_count = 1
        ncvalue = '%08x' % self.nonce_count
        s = str(self.nonce_count).encode('utf-8')
        s += nonce.encode('utf-8')
        s += time.ctime().encode('utf-8')
        s += os.urandom(8)

        cnonce = (hashlib.sha1(s).hexdigest()[:16])
        noncebit = "%s:%s:%s:%s:%s" % (nonce, ncvalue, cnonce, qop, HA2)
        if _algorithm == 'MD5-SESS':
            HA1 = hash_utf8('%s:%s:%s' % (HA1, nonce, cnonce))

        if qop is None:
            respdig = KD(HA1, "%s:%s" % (nonce, HA2))
        elif qop == 'auth' or 'auth' in qop.split(','):
            respdig = KD(HA1, noncebit)
        else:
            # XXX handle auth-int.
            return None

        self.last_nonce = nonce

        # XXX should the partial digests be encoded too?
        base = 'username="%s", realm="%s", nonce="%s", uri="%s", ' \
               'response="%s"' % (self.username, realm, nonce, path, respdig)
        if opaque:
            base += ', opaque="%s"' % opaque
        if algorithm:
            base += ', algorithm="%s"' % algorithm
        if entdig:
            base += ', digest="%s"' % entdig
        if qop:
            base += ', qop="auth", nc=%s, cnonce="%s"' % (ncvalue, cnonce)

        return 'Digest %s' % (base)

    def handle_401(self, r, **kwargs):
        """Takes the given response and tries digest-auth, if needed."""

        if self.pos is not None:
            # Rewind the file position indicator of the body to where
            # it was to resend the request.
            r.request.body.seek(self.pos)
        num_401_calls = getattr(self, 'num_401_calls', 1)
        s_auth = r.headers.get('www-authenticate', '')

        if 'digest' in s_auth.lower() and num_401_calls < 2:

            setattr(self, 'num_401_calls', num_401_calls + 1)
            pat = re.compile(r'digest ', flags=re.IGNORECASE)
            self.chal = parse_dict_header(pat.sub('', s_auth, count=1))

            # Consume content and release the original connection
            # to allow our new request to reuse the same one.
            r.content
            r.raw.release_conn()
            prep = r.request.copy()
            extract_cookies_to_jar(prep._cookies, r.request, r.raw)
            prep.prepare_cookies(prep._cookies)

            prep.headers['Authorization'] = self.build_digest_header(
                prep.method, prep.url)
            _r = r.connection.send(prep, **kwargs)
            _r.history.append(r)
            _r.request = prep

            return _r

        setattr(self, 'num_401_calls', 1)
        return r

    def __call__(self, r):
        # If we have a saved nonce, skip the 401
        if self.last_nonce:
            r.headers['Authorization'] = self.build_digest_header(r.method, r.url)
        try:
            self.pos = r.body.tell()
        except AttributeError:
            pass
        r.register_hook('response', self.handle_401)
        return r
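A quick hedged sketch of how these handlers are attached; the endpoint and credentials are placeholders:

    import sys
    sys.path.insert(0, 'pylib')  # assumes the vendored copy is importable
    import requests
    from requests.auth import HTTPBasicAuth, HTTPDigestAuth

    # Basic auth sets the Authorization header up front; digest auth
    # registers a response hook and answers the server's 401 challenge.
    r = requests.get('http://example.invalid/basic', auth=HTTPBasicAuth('user', 'pass'))
    r = requests.get('http://example.invalid/digest', auth=HTTPDigestAuth('user', 'pass'))
    print(r.status_code)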
=== added file 'pylib/requests/cacert.pem'
--- pylib/requests/cacert.pem 1970-01-01 00:00:00 +0000
+++ pylib/requests/cacert.pem 2018-05-31 04:33:07 +0000
@@ -0,0 +1,5026 @@
# This Source Code Form is subject to the terms of the Mozilla Public
# License, v. 2.0. If a copy of the MPL was not distributed with this
# file, You can obtain one at http://mozilla.org/MPL/2.0/.

# Issuer: CN=GTE CyberTrust Global Root O=GTE Corporation OU=GTE CyberTrust Solutions, Inc.
# Subject: CN=GTE CyberTrust Global Root O=GTE Corporation OU=GTE CyberTrust Solutions, Inc.
# Label: "GTE CyberTrust Global Root"
# Serial: 421
# MD5 Fingerprint: ca:3d:d3:68:f1:03:5c:d0:32:fa:b8:2b:59:e8:5a:db
# SHA1 Fingerprint: 97:81:79:50:d8:1c:96:70:cc:34:d8:09:cf:79:44:31:36:7e:f4:74
# SHA256 Fingerprint: a5:31:25:18:8d:21:10:aa:96:4b:02:c7:b7:c6:da:32:03:17:08:94:e5:fb:71:ff:fb:66:67:d5:e6:81:0a:36
-----BEGIN CERTIFICATE-----
MIICWjCCAcMCAgGlMA0GCSqGSIb3DQEBBAUAMHUxCzAJBgNVBAYTAlVTMRgwFgYD
VQQKEw9HVEUgQ29ycG9yYXRpb24xJzAlBgNVBAsTHkdURSBDeWJlclRydXN0IFNv
bHV0aW9ucywgSW5jLjEjMCEGA1UEAxMaR1RFIEN5YmVyVHJ1c3QgR2xvYmFsIFJv
b3QwHhcNOTgwODEzMDAyOTAwWhcNMTgwODEzMjM1OTAwWjB1MQswCQYDVQQGEwJV
UzEYMBYGA1UEChMPR1RFIENvcnBvcmF0aW9uMScwJQYDVQQLEx5HVEUgQ3liZXJU
cnVzdCBTb2x1dGlvbnMsIEluYy4xIzAhBgNVBAMTGkdURSBDeWJlclRydXN0IEds
b2JhbCBSb290MIGfMA0GCSqGSIb3DQEBAQUAA4GNADCBiQKBgQCVD6C28FCc6HrH
iM3dFw4usJTQGz0O9pTAipTHBsiQl8i4ZBp6fmw8U+E3KHNgf7KXUwefU/ltWJTS
r41tiGeA5u2ylc9yMcqlHHK6XALnZELn+aks1joNrI1CqiQBOeacPwGFVw1Yh0X4
04Wqk2kmhXBIgD8SFcd5tB8FLztimQIDAQABMA0GCSqGSIb3DQEBBAUAA4GBAG3r
GwnpXtlR22ciYaQqPEh346B8pt5zohQDhT37qw4wxYMWM4ETCJ57NE7fQMh017l9
3PR2VX2bY1QY6fDq81yx2YtCHrnAlU66+tXifPVoYb+O7AWXX1uw16OFNMQkpw0P
lZPvy5TYnh+dXIVtx6quTx8itc2VrbqnzPmrC3p/
-----END CERTIFICATE-----

# Issuer: CN=Thawte Server CA O=Thawte Consulting cc OU=Certification Services Division
# Subject: CN=Thawte Server CA O=Thawte Consulting cc OU=Certification Services Division
# Label: "Thawte Server CA"
# Serial: 1
# MD5 Fingerprint: c5:70:c4:a2:ed:53:78:0c:c8:10:53:81:64:cb:d0:1d
# SHA1 Fingerprint: 23:e5:94:94:51:95:f2:41:48:03:b4:d5:64:d2:a3:a3:f5:d8:8b:8c
# SHA256 Fingerprint: b4:41:0b:73:e2:e6:ea:ca:47:fb:c4:2f:8f:a4:01:8a:f4:38:1d:c5:4c:fa:a8:44:50:46:1e:ed:09:45:4d:e9
-----BEGIN CERTIFICATE-----
MIIDEzCCAnygAwIBAgIBATANBgkqhkiG9w0BAQQFADCBxDELMAkGA1UEBhMCWkEx
FTATBgNVBAgTDFdlc3Rlcm4gQ2FwZTESMBAGA1UEBxMJQ2FwZSBUb3duMR0wGwYD
VQQKExRUaGF3dGUgQ29uc3VsdGluZyBjYzEoMCYGA1UECxMfQ2VydGlmaWNhdGlv
biBTZXJ2aWNlcyBEaXZpc2lvbjEZMBcGA1UEAxMQVGhhd3RlIFNlcnZlciBDQTEm
MCQGCSqGSIb3DQEJARYXc2VydmVyLWNlcnRzQHRoYXd0ZS5jb20wHhcNOTYwODAx
MDAwMDAwWhcNMjAxMjMxMjM1OTU5WjCBxDELMAkGA1UEBhMCWkExFTATBgNVBAgT
DFdlc3Rlcm4gQ2FwZTESMBAGA1UEBxMJQ2FwZSBUb3duMR0wGwYDVQQKExRUaGF3
dGUgQ29uc3VsdGluZyBjYzEoMCYGA1UECxMfQ2VydGlmaWNhdGlvbiBTZXJ2aWNl
cyBEaXZpc2lvbjEZMBcGA1UEAxMQVGhhd3RlIFNlcnZlciBDQTEmMCQGCSqGSIb3
DQEJARYXc2VydmVyLWNlcnRzQHRoYXd0ZS5jb20wgZ8wDQYJKoZIhvcNAQEBBQAD
gY0AMIGJAoGBANOkUG7I/1Zr5s9dtuoMaHVHoqrC2oQl/Kj0R1HahbUgdJSGHg91
yekIYfUGbTBuFRkC6VLAYttNmZ7iagxEOM3+vuNkCXDF/rFrKbYvScg71CcEJRCX
A few things:
1) netplan is the default on Artful too. I think your detection code is right, but your commit message is potentially wrong?
2) If I understand cloud-init and netplan correctly, couldn't you achieve the same effect by just adding the following as /etc/netplan/99-azure-hotplug.yaml? Then you could drop ephemeral_eth.sh entirely on Artful and Bionic.
network:
    version: 2
    ethernets:
        ephemeral:
            dhcp4: true
            match:
                driver: hv_netvsc
                name: "eth*"
            optional: true
3) Looking at the code itself, you should probably use /run/netplan for ephemeral files, rather than /etc/netplan. That also solves your cleanup problem.
4) And it's worth knowing that netplan apply will look for network devices that are 'down', unbind them from their drivers, and rebind them. With your approach, netplan apply is run once per extra device, so with 4 extra devices the first one configured won't be replugged, the second will be replugged once, the third twice, and so on. This *probably* isn't problematic, but it makes me nervous, especially doing it in rapid succession.
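To make points 2)-4) concrete, here is a hedged sketch (not the branch's actual code; the layout is illustrative) of writing the hotplug policy once under /run/netplan and running a single netplan apply, which avoids both the cleanup pass and the repeated replugging:

    # Illustrative only: one ephemeral netplan policy, applied once.
    import pathlib
    import subprocess

    HOTPLUG_YAML = """\
    network:
        version: 2
        ethernets:
            ephemeral:
                dhcp4: true
                match:
                    driver: hv_netvsc
                    name: "eth*"
                optional: true
    """

    run_netplan = pathlib.Path('/run/netplan')  # tmpfs, cleared on reboot, so no cleanup needed
    run_netplan.mkdir(parents=True, exist_ok=True)
    (run_netplan / '99-azure-hotplug.yaml').write_text(HOTPLUG_YAML)

    # One apply covers every matching NIC, instead of one apply per hotplugged device.
    subprocess.check_call(['netplan', 'apply'])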