Merge ~chad.smith/cloud-init:ubuntu/bionic into cloud-init:ubuntu/bionic
Status: Merged
Merged at revision: ef4aa258cb508c9be6343fcbefa0735066f4ad21
Proposed branch: ~chad.smith/cloud-init:ubuntu/bionic
Merge into: cloud-init:ubuntu/bionic
Diff against target: 6964 lines (+3990/-570), 81 files modified
.github/pull_request_template.md (+9/-0) ChangeLog (+36/-0) cloudinit/analyze/__main__.py (+86/-2) cloudinit/analyze/show.py (+192/-10) cloudinit/analyze/tests/test_boot.py (+170/-0) cloudinit/apport.py (+1/-0) cloudinit/config/cc_apt_configure.py (+3/-1) cloudinit/config/cc_lxd.py (+1/-1) cloudinit/config/cc_set_passwords.py (+34/-19) cloudinit/config/cc_ssh.py (+55/-0) cloudinit/config/cc_ubuntu_drivers.py (+49/-1) cloudinit/config/tests/test_ssh.py (+166/-0) cloudinit/config/tests/test_ubuntu_drivers.py (+81/-18) cloudinit/distros/__init__.py (+22/-22) cloudinit/distros/arch.py (+14/-0) cloudinit/distros/debian.py (+2/-2) cloudinit/distros/freebsd.py (+16/-16) cloudinit/distros/opensuse.py (+2/-0) cloudinit/distros/parsers/sys_conf.py (+7/-0) cloudinit/distros/ubuntu.py (+15/-0) cloudinit/net/__init__.py (+112/-43) cloudinit/net/cmdline.py (+16/-9) cloudinit/net/dhcp.py (+90/-0) cloudinit/net/network_state.py (+12/-4) cloudinit/net/sysconfig.py (+12/-0) cloudinit/net/tests/test_dhcp.py (+119/-1) cloudinit/net/tests/test_init.py (+262/-9) cloudinit/settings.py (+1/-0) cloudinit/sources/DataSourceAzure.py (+141/-32) cloudinit/sources/DataSourceCloudSigma.py (+2/-6) cloudinit/sources/DataSourceExoscale.py (+258/-0) cloudinit/sources/DataSourceGCE.py (+20/-2) cloudinit/sources/DataSourceHetzner.py (+3/-0) cloudinit/sources/DataSourceOVF.py (+6/-1) cloudinit/sources/DataSourceOracle.py (+99/-7) cloudinit/sources/__init__.py (+27/-0) cloudinit/sources/helpers/azure.py (+152/-8) cloudinit/sources/helpers/vmware/imc/config_custom_script.py (+42/-101) cloudinit/sources/tests/test_oracle.py (+228/-11) cloudinit/stages.py (+50/-15) cloudinit/tests/helpers.py (+2/-1) cloudinit/tests/test_stages.py (+132/-19) cloudinit/url_helper.py (+5/-4) cloudinit/version.py (+1/-1) debian/changelog (+60/-3) debian/cloud-init.templates (+3/-3) debian/patches/ubuntu-advantage-revert-tip.patch (+4/-8) doc/examples/cloud-config-datasources.txt (+1/-1) 
doc/examples/cloud-config-user-groups.txt (+1/-0) doc/rtd/conf.py (+0/-5) doc/rtd/topics/analyze.rst (+84/-0) doc/rtd/topics/capabilities.rst (+1/-0) doc/rtd/topics/datasources.rst (+1/-0) doc/rtd/topics/datasources/exoscale.rst (+68/-0) doc/rtd/topics/datasources/oracle.rst (+24/-1) doc/rtd/topics/debugging.rst (+13/-0) doc/rtd/topics/format.rst (+13/-12) doc/rtd/topics/network-config-format-v2.rst (+1/-1) doc/rtd/topics/network-config.rst (+5/-4) integration-requirements.txt (+2/-1) systemd/cloud-init-generator.tmpl (+6/-1) templates/ntp.conf.debian.tmpl (+2/-1) tests/cloud_tests/platforms.yaml (+1/-0) tests/cloud_tests/platforms/nocloudkvm/instance.py (+9/-4) tests/cloud_tests/platforms/platforms.py (+1/-1) tests/cloud_tests/setup_image.py (+2/-1) tests/unittests/test_datasource/test_azure.py (+112/-15) tests/unittests/test_datasource/test_common.py (+13/-0) tests/unittests/test_datasource/test_ec2.py (+2/-1) tests/unittests/test_datasource/test_exoscale.py (+203/-0) tests/unittests/test_datasource/test_gce.py (+18/-0) tests/unittests/test_distros/test_netconfig.py (+86/-0) tests/unittests/test_ds_identify.py (+25/-0) tests/unittests/test_handler/test_handler_apt_source_v3.py (+11/-0) tests/unittests/test_handler/test_handler_ntp.py (+15/-10) tests/unittests/test_net.py (+197/-23) tests/unittests/test_reporting_hyperv.py (+65/-0) tests/unittests/test_vmware/test_custom_script.py (+63/-53) tools/build-on-freebsd (+40/-33) tools/ds-identify (+32/-14) tools/xkvm (+53/-8) |
Related bugs:

Reviewer | Review Type | Date Requested | Status
---|---|---|---
Ryan Harper | | | Approve
Server Team CI bot | continuous-integration | | Approve

Review via email: mp+371686@code.launchpad.net
Commit message
Upstream snapshot for SRU into bionic
Also enables Exoscale in debian/
Description of the change
Server Team CI bot (server-team-bot) wrote :
PASSED: Continuous integration, rev:5b00861163e
https:/
Executed test runs:
SUCCESS: Checkout
SUCCESS: Unit & Style Tests
SUCCESS: Ubuntu LTS: Build
SUCCESS: Ubuntu LTS: Integration
IN_PROGRESS: Declarative: Post Actions
Server Team CI bot (server-team-bot) wrote :
PASSED: Continuous integration, rev:ef4aa258cb5
https:/
Executed test runs:
SUCCESS: Checkout
SUCCESS: Unit & Style Tests
SUCCESS: Ubuntu LTS: Build
SUCCESS: Ubuntu LTS: Integration
IN_PROGRESS: Declarative: Post Actions
Ryan Harper (raharper) wrote :
Looks good. Thanks.
Preview Diff
diff --git a/.github/pull_request_template.md b/.github/pull_request_template.md
new file mode 100644
index 0000000..170a71e
--- /dev/null
+++ b/.github/pull_request_template.md
@@ -0,0 +1,9 @@
+***This GitHub repo is only a mirror. Do not submit pull requests
+here!***
+
+Thank you for taking the time to write and submit a change to
+cloud-init! Please follow [our hacking
+guide](https://cloudinit.readthedocs.io/en/latest/topics/hacking.html)
+to submit your change to cloud-init's [Launchpad git
+repository](https://code.launchpad.net/cloud-init/), where cloud-init
+development happens.
diff --git a/ChangeLog b/ChangeLog
index bf48fd4..a98f8c2 100644
--- a/ChangeLog
+++ b/ChangeLog
@@ -1,3 +1,39 @@
+19.2:
+ - net: add rfc3442 (classless static routes) to EphemeralDHCP
+   (LP: #1821102)
+ - templates/ntp.conf.debian.tmpl: fix missing newline for pools
+   (LP: #1836598)
+ - Support netplan renderer in Arch Linux [Conrad Hoffmann]
+ - Fix typo in publicly viewable documentation. [David Medberry]
+ - Add a cdrom size checker for OVF ds to ds-identify
+   [Pengpeng Sun] (LP: #1806701)
+ - VMWare: Trigger the post customization script via cc_scripts module.
+   [Xiaofeng Wang] (LP: #1833192)
+ - Cloud-init analyze module: Added ability to analyze boot events.
+   [Sam Gilson]
+ - Update debian eni network configuration location, retain Ubuntu setting
+   [Janos Lenart]
+ - net: skip bond interfaces in get_interfaces
+   [Stanislav Makar] (LP: #1812857)
+ - Fix a couple of issues raised by a coverity scan
+ - Add missing dsname for Hetzner Cloud datasource [Markus Schade]
+ - doc: indicate that netplan is default in Ubuntu now
+ - azure: add region and AZ properties from imds compute location metadata
+ - sysconfig: support more bonding options [Penghui Liao]
+ - cloud-init-generator: use libexec path to ds-identify on redhat systems
+   (LP: #1833264)
+ - tools/build-on-freebsd: update to python3 [Gonéri Le Bouder]
+ - Allow identification of OpenStack by Asset Tag
+   [Mark T. Voelker] (LP: #1669875)
+ - Fix spelling error making 'an Ubuntu' consistent. [Brian Murray]
+ - run-container: centos: comment out the repo mirrorlist [Paride Legovini]
+ - netplan: update netplan key mappings for gratuitous-arp (LP: #1827238)
+ - freebsd: fix the name of cloudcfg VARIANT [Gonéri Le Bouder]
+ - freebsd: ability to grow root file system [Gonéri Le Bouder]
+ - freebsd: NoCloud data source support [Gonéri Le Bouder] (LP: #1645824)
+ - Azure: Return static fallback address as if failed to find endpoint
+   [Jason Zions (MSFT)]
+
 19.1:
  - freebsd: add chpasswd pkg in the image [Gonéri Le Bouder]
  - tests: add Eoan release [Paride Legovini]
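One of the ChangeLog entries above adds RFC 3442 (classless static routes) handling to EphemeralDHCP. The wire format of that DHCP option is compact: each route is a prefix length, the significant octets of the destination, then a four-octet gateway. The sketch below is an independent, illustrative decoder of that encoding, not cloud-init's implementation (which lives in cloudinit/net/dhcp.py):

```python
def decode_rfc3442(option_bytes):
    """Decode RFC 3442 classless static routes (DHCP option 121).

    option_bytes: list of ints. Each route is encoded as:
      prefix-length, ceil(prefix/8) destination octets, 4 gateway octets.
    Returns a list of (destination_cidr, gateway) tuples.
    """
    routes = []
    i = 0
    while i < len(option_bytes):
        plen = option_bytes[i]
        i += 1
        octets = (plen + 7) // 8  # only significant destination octets appear
        dest = option_bytes[i:i + octets] + [0] * (4 - octets)
        i += octets
        gw = option_bytes[i:i + 4]
        i += 4
        routes.append(('%d.%d.%d.%d/%d' % (*dest, plen),
                       '%d.%d.%d.%d' % tuple(gw)))
    return routes
```

For example, the bytes `[24, 192, 168, 1, 192, 168, 0, 1]` decode to a route to 192.168.1.0/24 via 192.168.0.1, and a zero prefix length encodes a default route with no destination octets at all.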
diff --git a/cloudinit/analyze/__main__.py b/cloudinit/analyze/__main__.py
index f861365..99e5c20 100644
--- a/cloudinit/analyze/__main__.py
+++ b/cloudinit/analyze/__main__.py
@@ -7,7 +7,7 @@ import re
 import sys
 
 from cloudinit.util import json_dumps
-
+from datetime import datetime
 from . import dump
 from . import show
 
@@ -52,9 +52,93 @@ def get_parser(parser=None):
                              dest='outfile', default='-',
                              help='specify where to write output. ')
     parser_dump.set_defaults(action=('dump', analyze_dump))
+    parser_boot = subparsers.add_parser(
+        'boot', help='Print list of boot times for kernel and cloud-init')
+    parser_boot.add_argument('-i', '--infile', action='store',
+                             dest='infile', default='/var/log/cloud-init.log',
+                             help='specify where to read input. ')
+    parser_boot.add_argument('-o', '--outfile', action='store',
+                             dest='outfile', default='-',
+                             help='specify where to write output.')
+    parser_boot.set_defaults(action=('boot', analyze_boot))
     return parser
 
 
+def analyze_boot(name, args):
+    """Report a list of how long different boot operations took.
+
+    For Example:
+    -- Most Recent Boot Record --
+        Kernel Started at: <time>
+        Kernel ended boot at: <time>
+        Kernel time to boot (seconds): <time>
+        Cloud-init activated by systemd at: <time>
+        Time between Kernel end boot and Cloud-init activation (seconds):<time>
+        Cloud-init start: <time>
+    """
+    infh, outfh = configure_io(args)
+    kernel_info = show.dist_check_timestamp()
+    status_code, kernel_start, kernel_end, ci_sysd_start = \
+        kernel_info
+    kernel_start_timestamp = datetime.utcfromtimestamp(kernel_start)
+    kernel_end_timestamp = datetime.utcfromtimestamp(kernel_end)
+    ci_sysd_start_timestamp = datetime.utcfromtimestamp(ci_sysd_start)
+    try:
+        last_init_local = \
+            [e for e in _get_events(infh) if e['name'] == 'init-local' and
+                'starting search' in e['description']][-1]
+        ci_start = datetime.utcfromtimestamp(last_init_local['timestamp'])
+    except IndexError:
+        ci_start = 'Could not find init-local log-line in cloud-init.log'
+        status_code = show.FAIL_CODE
+
+    FAILURE_MSG = 'Your Linux distro or container does not support this ' \
+                  'functionality.\n' \
+                  'You must be running a Kernel Telemetry supported ' \
+                  'distro.\nPlease check ' \
+                  'https://cloudinit.readthedocs.io/en/latest' \
+                  '/topics/analyze.html for more ' \
+                  'information on supported distros.\n'
+
+    SUCCESS_MSG = '-- Most Recent Boot Record --\n' \
+                  ' Kernel Started at: {k_s_t}\n' \
+                  ' Kernel ended boot at: {k_e_t}\n' \
+                  ' Kernel time to boot (seconds): {k_r}\n' \
+                  ' Cloud-init activated by systemd at: {ci_sysd_t}\n' \
+                  ' Time between Kernel end boot and Cloud-init ' \
+                  'activation (seconds): {bt_r}\n' \
+                  ' Cloud-init start: {ci_start}\n'
+
+    CONTAINER_MSG = '-- Most Recent Container Boot Record --\n' \
+                    ' Container started at: {k_s_t}\n' \
+                    ' Cloud-init activated by systemd at: {ci_sysd_t}\n' \
+                    ' Cloud-init start: {ci_start}\n' \
+
+    status_map = {
+        show.FAIL_CODE: FAILURE_MSG,
+        show.CONTAINER_CODE: CONTAINER_MSG,
+        show.SUCCESS_CODE: SUCCESS_MSG
+    }
+
+    kernel_runtime = kernel_end - kernel_start
+    between_process_runtime = ci_sysd_start - kernel_end
+
+    kwargs = {
+        'k_s_t': kernel_start_timestamp,
+        'k_e_t': kernel_end_timestamp,
+        'k_r': kernel_runtime,
+        'bt_r': between_process_runtime,
+        'k_e': kernel_end,
+        'k_s': kernel_start,
+        'ci_sysd': ci_sysd_start,
+        'ci_sysd_t': ci_sysd_start_timestamp,
+        'ci_start': ci_start
+    }
+
+    outfh.write(status_map[status_code].format(**kwargs))
+    return status_code
+
+
 def analyze_blame(name, args):
     """Report a list of records sorted by largest time delta.
 
@@ -119,7 +203,7 @@ def analyze_dump(name, args):
 
 def _get_events(infile):
     rawdata = None
-    events, rawdata = show.load_events(infile, None)
+    events, rawdata = show.load_events_infile(infile)
     if not events:
         events, _ = dump.dump_events(rawdata=rawdata)
     return events
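The new `analyze_boot` renders its report by picking one of three message templates keyed on the status code returned by `show.dist_check_timestamp()` and filling it with `str.format(**kwargs)`. A minimal standalone sketch of that dispatch pattern (the codes and template text here are simplified stand-ins, not cloud-init's exact strings):

```python
# Illustrative dispatch: one message template per status code,
# filled via str.format. Extra keys in kwargs are simply unused,
# which lets every template share one kwargs dict.
SUCCESS_CODE = 'successful'
FAIL_CODE = 'failure'

SUCCESS_MSG = '-- Most Recent Boot Record --\n' \
              ' Kernel Started at: {k_s_t}\n' \
              ' Kernel time to boot (seconds): {k_r}\n'
FAILURE_MSG = 'Boot timestamps could not be determined.\n'

status_map = {
    SUCCESS_CODE: SUCCESS_MSG,
    FAIL_CODE: FAILURE_MSG,
}


def render_report(status_code, **kwargs):
    """Select the template for status_code and substitute its fields."""
    return status_map[status_code].format(**kwargs)
```

Because `format()` only consumes the named fields present in the chosen template, the failure path needs no timestamp arguments at all.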
diff --git a/cloudinit/analyze/show.py b/cloudinit/analyze/show.py
index 3e778b8..511b808 100644
--- a/cloudinit/analyze/show.py
+++ b/cloudinit/analyze/show.py
@@ -8,8 +8,11 @@ import base64
 import datetime
 import json
 import os
+import time
+import sys
 
 from cloudinit import util
+from cloudinit.distros import uses_systemd
 
 # An event:
 '''
@@ -49,6 +52,10 @@ format_key = {
 
 formatting_help = " ".join(["{0}: {1}".format(k.replace('%', '%%'), v)
                             for k, v in format_key.items()])
+SUCCESS_CODE = 'successful'
+FAIL_CODE = 'failure'
+CONTAINER_CODE = 'container'
+TIMESTAMP_UNKNOWN = (FAIL_CODE, -1, -1, -1)
 
 
 def format_record(msg, event):
@@ -125,9 +132,175 @@ def total_time_record(total_time):
     return 'Total Time: %3.5f seconds\n' % total_time
 
 
+class SystemctlReader(object):
+    '''
+    Class for dealing with all systemctl subp calls in a consistent manner.
+    '''
+    def __init__(self, property, parameter=None):
+        self.epoch = None
+        self.args = ['/bin/systemctl', 'show']
+        if parameter:
+            self.args.append(parameter)
+        self.args.extend(['-p', property])
+        # Don't want the init of our object to break. Instead of throwing
+        # an exception, set an error code that gets checked when data is
+        # requested from the object
+        self.failure = self.subp()
+
+    def subp(self):
+        '''
+        Make a subp call based on set args and handle errors by setting
+        failure code
+
+        :return: whether the subp call failed or not
+        '''
+        try:
+            value, err = util.subp(self.args, capture=True)
+            if err:
+                return err
+            self.epoch = value
+            return None
+        except Exception as systemctl_fail:
+            return systemctl_fail
+
+    def parse_epoch_as_float(self):
+        '''
+        If subp call succeeded, return the timestamp from subp as a float.
+
+        :return: timestamp as a float
+        '''
+        # subp has 2 ways to fail: it either fails and throws an exception,
+        # or returns an error code. Raise an exception here in order to make
+        # sure both scenarios throw exceptions
+        if self.failure:
+            raise RuntimeError('Subprocess call to systemctl has failed, '
+                               'returning error code ({})'
+                               .format(self.failure))
+        # Output from systemctl show has the format Property=Value.
+        # For example, UserspaceMonotonic=1929304
+        timestamp = self.epoch.split('=')[1]
+        # Timestamps reported by systemctl are in microseconds, converting
+        return float(timestamp) / 1000000
+
+
+def dist_check_timestamp():
+    '''
+    Determine which init system a particular linux distro is using.
+    Each init system (systemd, upstart, etc) has a different way of
+    providing timestamps.
+
+    :return: timestamps of kernelboot, kernelendboot, and cloud-initstart
+    or TIMESTAMP_UNKNOWN if the timestamps cannot be retrieved.
+    '''
+
+    if uses_systemd():
+        return gather_timestamps_using_systemd()
+
+    # Use dmesg to get timestamps if the distro does not have systemd
+    if util.is_FreeBSD() or 'gentoo' in \
+            util.system_info()['system'].lower():
+        return gather_timestamps_using_dmesg()
+
+    # this distro doesn't fit anything that is supported by cloud-init. just
+    # return error codes
+    return TIMESTAMP_UNKNOWN
+
+
+def gather_timestamps_using_dmesg():
+    '''
+    Gather timestamps that corresponds to kernel begin initialization,
+    kernel finish initialization using dmesg as opposed to systemctl
+
+    :return: the two timestamps plus a dummy timestamp to keep consistency
+    with gather_timestamps_using_systemd
+    '''
+    try:
+        data, _ = util.subp(['dmesg'], capture=True)
+        split_entries = data[0].splitlines()
+        for i in split_entries:
+            if i.decode('UTF-8').find('user') != -1:
+                splitup = i.decode('UTF-8').split()
+                stripped = splitup[1].strip(']')
+
+                # kernel timestamp from dmesg is equal to 0,
+                # with the userspace timestamp relative to it.
+                user_space_timestamp = float(stripped)
+                kernel_start = float(time.time()) - float(util.uptime())
+                kernel_end = kernel_start + user_space_timestamp
+
+                # systemd wont start cloud-init in this case,
+                # so we cannot get that timestamp
+                return SUCCESS_CODE, kernel_start, kernel_end, \
+                    kernel_end
+
+    except Exception:
+        pass
+    return TIMESTAMP_UNKNOWN
+
+
+def gather_timestamps_using_systemd():
+    '''
+    Gather timestamps that corresponds to kernel begin initialization,
+    kernel finish initialization. and cloud-init systemd unit activation
+
+    :return: the three timestamps
+    '''
+    kernel_start = float(time.time()) - float(util.uptime())
+    try:
+        delta_k_end = SystemctlReader('UserspaceTimestampMonotonic')\
+            .parse_epoch_as_float()
+        delta_ci_s = SystemctlReader('InactiveExitTimestampMonotonic',
+                                     'cloud-init-local').parse_epoch_as_float()
+        base_time = kernel_start
+        status = SUCCESS_CODE
+        # lxc based containers do not set their monotonic zero point to be when
+        # the container starts, instead keep using host boot as zero point
+        # time.CLOCK_MONOTONIC_RAW is only available in python 3.3
+        if util.is_container():
+            # clock.monotonic also uses host boot as zero point
+            if sys.version_info >= (3, 3):
+                base_time = float(time.time()) - float(time.monotonic())
+                # TODO: lxcfs automatically truncates /proc/uptime to seconds
+                # in containers when https://github.com/lxc/lxcfs/issues/292
+                # is fixed, util.uptime() should be used instead of stat on
+                try:
+                    file_stat = os.stat('/proc/1/cmdline')
+                    kernel_start = file_stat.st_atime
+                except OSError as err:
+                    raise RuntimeError('Could not determine container boot '
+                                       'time from /proc/1/cmdline. ({})'
+                                       .format(err))
+                status = CONTAINER_CODE
+            else:
+                status = FAIL_CODE
+        kernel_end = base_time + delta_k_end
+        cloudinit_sysd = base_time + delta_ci_s
+
+    except Exception as e:
+        # Except ALL exceptions as Systemctl reader can throw many different
+        # errors, but any failure in systemctl means that timestamps cannot be
+        # obtained
+        print(e)
+        return TIMESTAMP_UNKNOWN
+    return status, kernel_start, kernel_end, cloudinit_sysd
+
+
 def generate_records(events, blame_sort=False,
                      print_format="(%n) %d seconds in %I%D",
                      dump_files=False, log_datafiles=False):
+    '''
+    Take in raw events and create parent-child dependencies between events
+    in order to order events in chronological order.
+
+    :param events: JSONs from dump that represents events taken from logs
+    :param blame_sort: whether to sort by timestamp or by time taken.
+    :param print_format: formatting to represent event, time stamp,
+    and time taken by the event in one line
+    :param dump_files: whether to dump files into JSONs
+    :param log_datafiles: whether or not to log events generated
+
+    :return: boot records ordered chronologically
+    '''
 
     sorted_events = sorted(events, key=lambda x: x['timestamp'])
     records = []
@@ -189,19 +362,28 @@ def generate_records(events, blame_sort=False,
 
 
 def show_events(events, print_format):
+    '''
+    A passthrough method that makes it easier to call generate_records()
+
+    :param events: JSONs from dump that represents events taken from logs
+    :param print_format: formatting to represent event, time stamp,
+    and time taken by the event in one line
+
+    :return: boot records ordered chronologically
+    '''
     return generate_records(events, print_format=print_format)
 
 
-def load_events(infile, rawdata=None):
-    if rawdata:
-        data = rawdata.read()
-    else:
-        data = infile.read()
-
-    j = None
+def load_events_infile(infile):
+    '''
+    Takes in a log file, read it, and convert to json.
+
+    :param infile: The Log file to be read
+
+    :return: json version of logfile, raw file
+    '''
+    data = infile.read()
     try:
-        j = json.loads(data)
+        return json.loads(data), data
     except ValueError:
-        pass
-
-    return j, data
+        return None, data
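The `SystemctlReader` added above ultimately reduces `systemctl show [unit] -p Property` output to a float number of seconds. That parsing step can be shown in isolation; the sketch below mirrors the diff's `parse_epoch_as_float` logic, but as a free function (the property names in the examples are just illustrations):

```python
def parse_systemctl_epoch(output):
    """Convert one line of `systemctl show ... -p Property` output to seconds.

    systemctl prints properties as 'Property=Value'; monotonic timestamp
    properties such as UserspaceTimestampMonotonic carry a value in
    microseconds, so the parsed value is divided by 1,000,000.
    Raises IndexError when the output has no '=' and ValueError when the
    value is not numeric, matching the failure modes exercised by the
    new unit tests.
    """
    value = output.split('=')[1]
    return float(value) / 1000000
```

For example, `'UserspaceTimestampMonotonic=1929304'` parses to roughly 1.93 seconds, while input like `'100'` (no `=`) raises IndexError, which is why the surrounding class wraps all of this in error handling.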
diff --git a/cloudinit/analyze/tests/test_boot.py b/cloudinit/analyze/tests/test_boot.py
new file mode 100644
index 0000000..706e2cc
--- /dev/null
+++ b/cloudinit/analyze/tests/test_boot.py
@@ -0,0 +1,170 @@
+import os
+from cloudinit.analyze.__main__ import (analyze_boot, get_parser)
+from cloudinit.tests.helpers import CiTestCase, mock
+from cloudinit.analyze.show import dist_check_timestamp, SystemctlReader, \
+    FAIL_CODE, CONTAINER_CODE
+
+err_code = (FAIL_CODE, -1, -1, -1)
+
+
+class TestDistroChecker(CiTestCase):
+
+    @mock.patch('cloudinit.util.system_info', return_value={'dist': ('', '',
+                                                                     ''),
+                                                            'system': ''})
+    @mock.patch('platform.linux_distribution', return_value=('', '', ''))
+    @mock.patch('cloudinit.util.is_FreeBSD', return_value=False)
+    def test_blank_distro(self, m_sys_info, m_linux_distribution, m_free_bsd):
+        self.assertEqual(err_code, dist_check_timestamp())
+
+    @mock.patch('cloudinit.util.system_info', return_value={'dist': ('', '',
+                                                                     '')})
+    @mock.patch('platform.linux_distribution', return_value=('', '', ''))
+    @mock.patch('cloudinit.util.is_FreeBSD', return_value=True)
+    def test_freebsd_gentoo_cant_find(self, m_sys_info,
+                                      m_linux_distribution, m_is_FreeBSD):
+        self.assertEqual(err_code, dist_check_timestamp())
+
+    @mock.patch('cloudinit.util.subp', return_value=(0, 1))
+    def test_subp_fails(self, m_subp):
+        self.assertEqual(err_code, dist_check_timestamp())
+
+
+class TestSystemCtlReader(CiTestCase):
+
+    def test_systemctl_invalid_property(self):
+        reader = SystemctlReader('dummyProperty')
+        with self.assertRaises(RuntimeError):
+            reader.parse_epoch_as_float()
+
+    def test_systemctl_invalid_parameter(self):
+        reader = SystemctlReader('dummyProperty', 'dummyParameter')
+        with self.assertRaises(RuntimeError):
+            reader.parse_epoch_as_float()
+
+    @mock.patch('cloudinit.util.subp', return_value=('U=1000000', None))
+    def test_systemctl_works_correctly_threshold(self, m_subp):
+        reader = SystemctlReader('dummyProperty', 'dummyParameter')
+        self.assertEqual(1.0, reader.parse_epoch_as_float())
+        thresh = 1.0 - reader.parse_epoch_as_float()
+        self.assertTrue(thresh < 1e-6)
+        self.assertTrue(thresh > (-1 * 1e-6))
+
+    @mock.patch('cloudinit.util.subp', return_value=('U=0', None))
+    def test_systemctl_succeed_zero(self, m_subp):
+        reader = SystemctlReader('dummyProperty', 'dummyParameter')
+        self.assertEqual(0.0, reader.parse_epoch_as_float())
+
+    @mock.patch('cloudinit.util.subp', return_value=('U=1', None))
+    def test_systemctl_succeed_distinct(self, m_subp):
+        reader = SystemctlReader('dummyProperty', 'dummyParameter')
+        val1 = reader.parse_epoch_as_float()
+        m_subp.return_value = ('U=2', None)
+        reader2 = SystemctlReader('dummyProperty', 'dummyParameter')
+        val2 = reader2.parse_epoch_as_float()
+        self.assertNotEqual(val1, val2)
+
+    @mock.patch('cloudinit.util.subp', return_value=('100', None))
+    def test_systemctl_epoch_not_splittable(self, m_subp):
+        reader = SystemctlReader('dummyProperty', 'dummyParameter')
+        with self.assertRaises(IndexError):
+            reader.parse_epoch_as_float()
+
+    @mock.patch('cloudinit.util.subp', return_value=('U=foobar', None))
+    def test_systemctl_cannot_convert_epoch_to_float(self, m_subp):
+        reader = SystemctlReader('dummyProperty', 'dummyParameter')
+        with self.assertRaises(ValueError):
+            reader.parse_epoch_as_float()
+
+
+class TestAnalyzeBoot(CiTestCase):
+
+    def set_up_dummy_file_ci(self, path, log_path):
+        infh = open(path, 'w+')
+        infh.write('2019-07-08 17:40:49,601 - util.py[DEBUG]: Cloud-init v. '
+                   '19.1-1-gbaa47854-0ubuntu1~18.04.1 running \'init-local\' '
+                   'at Mon, 08 Jul 2019 17:40:49 +0000. Up 18.84 seconds.')
+        infh.close()
+        outfh = open(log_path, 'w+')
+        outfh.close()
+
+    def set_up_dummy_file(self, path, log_path):
+        infh = open(path, 'w+')
+        infh.write('dummy data')
+        infh.close()
+        outfh = open(log_path, 'w+')
+        outfh.close()
+
+    def remove_dummy_file(self, path, log_path):
+        if os.path.isfile(path):
+            os.remove(path)
+        if os.path.isfile(log_path):
+            os.remove(log_path)
+
+    @mock.patch('cloudinit.analyze.show.dist_check_timestamp',
+                return_value=err_code)
+    def test_boot_invalid_distro(self, m_dist_check_timestamp):
+
+        path = os.path.dirname(os.path.abspath(__file__))
+        log_path = path + '/boot-test.log'
+        path += '/dummy.log'
+        self.set_up_dummy_file(path, log_path)
+
+        parser = get_parser()
+        args = parser.parse_args(args=['boot', '-i', path, '-o',
+                                       log_path])
+        name_default = ''
+        analyze_boot(name_default, args)
+        # now args have been tested, go into outfile and make sure error
+        # message is in the outfile
+        outfh = open(args.outfile, 'r')
+        data = outfh.read()
+        err_string = 'Your Linux distro or container does not support this ' \
+                     'functionality.\nYou must be running a Kernel ' \
+                     'Telemetry supported distro.\nPlease check ' \
+                     'https://cloudinit.readthedocs.io/en/latest/topics' \
+                     '/analyze.html for more information on supported ' \
+                     'distros.\n'
+
+        self.remove_dummy_file(path, log_path)
+        self.assertEqual(err_string, data)
+
+    @mock.patch("cloudinit.util.is_container", return_value=True)
+    @mock.patch('cloudinit.util.subp', return_value=('U=1000000', None))
+    def test_container_no_ci_log_line(self, m_is_container, m_subp):
+        path = os.path.dirname(os.path.abspath(__file__))
+        log_path = path + '/boot-test.log'
+        path += '/dummy.log'
+        self.set_up_dummy_file(path, log_path)
+
+        parser = get_parser()
+        args = parser.parse_args(args=['boot', '-i', path, '-o',
+                                       log_path])
+        name_default = ''
+
+        finish_code = analyze_boot(name_default, args)
+
+        self.remove_dummy_file(path, log_path)
+        self.assertEqual(FAIL_CODE, finish_code)
+
+    @mock.patch("cloudinit.util.is_container", return_value=True)
534 | 111 | self.set_up_dummy_file(path, log_path) | ||
535 | 112 | |||
536 | 113 | parser = get_parser() | ||
537 | 114 | args = parser.parse_args(args=['boot', '-i', path, '-o', | ||
538 | 115 | log_path]) | ||
539 | 116 | name_default = '' | ||
540 | 117 | analyze_boot(name_default, args) | ||
541 | 118 | # now args have been tested, go into outfile and make sure error | ||
542 | 119 | # message is in the outfile | ||
543 | 120 | outfh = open(args.outfile, 'r') | ||
544 | 121 | data = outfh.read() | ||
545 | 122 | err_string = 'Your Linux distro or container does not support this ' \ | ||
546 | 123 | 'functionality.\nYou must be running a Kernel ' \ | ||
547 | 124 | 'Telemetry supported distro.\nPlease check ' \ | ||
548 | 125 | 'https://cloudinit.readthedocs.io/en/latest/topics' \ | ||
549 | 126 | '/analyze.html for more information on supported ' \ | ||
550 | 127 | 'distros.\n' | ||
551 | 128 | |||
552 | 129 | self.remove_dummy_file(path, log_path) | ||
553 | 130 | self.assertEqual(err_string, data) | ||
554 | 131 | |||
555 | 132 | @mock.patch("cloudinit.util.is_container", return_value=True) | ||
556 | 133 | @mock.patch('cloudinit.util.subp', return_value=('U=1000000', None)) | ||
557 | 134 | def test_container_no_ci_log_line(self, m_is_container, m_subp): | ||
558 | 135 | path = os.path.dirname(os.path.abspath(__file__)) | ||
559 | 136 | log_path = path + '/boot-test.log' | ||
560 | 137 | path += '/dummy.log' | ||
561 | 138 | self.set_up_dummy_file(path, log_path) | ||
562 | 139 | |||
563 | 140 | parser = get_parser() | ||
564 | 141 | args = parser.parse_args(args=['boot', '-i', path, '-o', | ||
565 | 142 | log_path]) | ||
566 | 143 | name_default = '' | ||
567 | 144 | |||
568 | 145 | finish_code = analyze_boot(name_default, args) | ||
569 | 146 | |||
570 | 147 | self.remove_dummy_file(path, log_path) | ||
571 | 148 | self.assertEqual(FAIL_CODE, finish_code) | ||
572 | 149 | |||
573 | 150 | @mock.patch("cloudinit.util.is_container", return_value=True) | ||
574 | 151 | @mock.patch('cloudinit.util.subp', return_value=('U=1000000', None)) | ||
575 | 152 | @mock.patch('cloudinit.analyze.__main__._get_events', return_value=[{ | ||
576 | 153 | 'name': 'init-local', 'description': 'starting search', 'timestamp': | ||
577 | 154 | 100000}]) | ||
578 | 155 | @mock.patch('cloudinit.analyze.show.dist_check_timestamp', | ||
579 | 156 | return_value=(CONTAINER_CODE, 1, 1, 1)) | ||
580 | 157 | def test_container_ci_log_line(self, m_is_container, m_subp, m_get, m_g): | ||
581 | 158 | path = os.path.dirname(os.path.abspath(__file__)) | ||
582 | 159 | log_path = path + '/boot-test.log' | ||
583 | 160 | path += '/dummy.log' | ||
584 | 161 | self.set_up_dummy_file_ci(path, log_path) | ||
585 | 162 | |||
586 | 163 | parser = get_parser() | ||
587 | 164 | args = parser.parse_args(args=['boot', '-i', path, '-o', | ||
588 | 165 | log_path]) | ||
589 | 166 | name_default = '' | ||
590 | 167 | finish_code = analyze_boot(name_default, args) | ||
591 | 168 | |||
592 | 169 | self.remove_dummy_file(path, log_path) | ||
593 | 170 | self.assertEqual(CONTAINER_CODE, finish_code) | ||
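The two failure-mode tests above pin down the contract of `SystemctlReader.parse_epoch_as_float`: output with no second `=`-delimited field raises `IndexError`, and a non-numeric field raises `ValueError`. A minimal standalone sketch of that parsing contract (a hypothetical function, not the real `SystemctlReader` implementation):

```python
def parse_epoch_as_float(raw):
    """Parse systemctl 'KEY=microseconds' output into a float.

    '100' has no '=' so indexing the second field raises IndexError;
    'U=foobar' splits fine but float() raises ValueError.
    """
    value = raw.split('=')[1]
    return float(value)
```

This mirrors why the tests mock `cloudinit.util.subp` to return `('100', None)` and `('U=foobar', None)` respectively.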
diff --git a/cloudinit/apport.py b/cloudinit/apport.py
index 22cb7fd..003ff1f 100644
--- a/cloudinit/apport.py
+++ b/cloudinit/apport.py
@@ -23,6 +23,7 @@ KNOWN_CLOUD_NAMES = [
     'CloudStack',
     'DigitalOcean',
     'GCE - Google Compute Engine',
+    'Exoscale',
     'Hetzner Cloud',
     'IBM - (aka SoftLayer or BlueMix)',
     'LXD',
diff --git a/cloudinit/config/cc_apt_configure.py b/cloudinit/config/cc_apt_configure.py
index 919d199..f01e2aa 100644
--- a/cloudinit/config/cc_apt_configure.py
+++ b/cloudinit/config/cc_apt_configure.py
@@ -332,6 +332,8 @@ def apply_apt(cfg, cloud, target):


 def debconf_set_selections(selections, target=None):
+    if not selections.endswith(b'\n'):
+        selections += b'\n'
     util.subp(['debconf-set-selections'], data=selections, target=target,
               capture=True)

@@ -374,7 +376,7 @@ def apply_debconf_selections(cfg, target=None):

     selections = '\n'.join(
         [selsets[key] for key in sorted(selsets.keys())])
-    debconf_set_selections(selections.encode() + b"\n", target=target)
+    debconf_set_selections(selections.encode(), target=target)

     # get a complete list of packages listed in input
     pkgs_cfgd = set()
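The hunk above moves newline termination from the call site into `debconf_set_selections` itself, so the payload handed to `debconf-set-selections` ends with exactly one `b'\n'` whether or not the caller supplied it. The guard in isolation (hypothetical helper name):

```python
def ensure_trailing_newline(selections):
    """Terminate a debconf selections payload with exactly one newline.

    debconf-set-selections reads line-oriented input; appending
    unconditionally (the old call-site behaviour) could double the
    newline when the caller already terminated the data.
    """
    if not selections.endswith(b'\n'):
        selections += b'\n'
    return selections
```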
diff --git a/cloudinit/config/cc_lxd.py b/cloudinit/config/cc_lxd.py
index 71d13ed..d983077 100644
--- a/cloudinit/config/cc_lxd.py
+++ b/cloudinit/config/cc_lxd.py
@@ -152,7 +152,7 @@ def handle(name, cfg, cloud, log, args):

     if cmd_attach:
         log.debug("Setting up default lxd bridge: %s" %
-                  " ".join(cmd_create))
+                  " ".join(cmd_attach))
         _lxc(cmd_attach)

     elif bridge_cfg:
diff --git a/cloudinit/config/cc_set_passwords.py b/cloudinit/config/cc_set_passwords.py
index 4585e4d..cf9b5ab 100755
--- a/cloudinit/config/cc_set_passwords.py
+++ b/cloudinit/config/cc_set_passwords.py
@@ -9,27 +9,40 @@
 """
 Set Passwords
 -------------
-**Summary:** Set user passwords
+**Summary:** Set user passwords and enable/disable SSH password authentication

-Set system passwords and enable or disable ssh password authentication.
-The ``chpasswd`` config key accepts a dictionary containing a single one of two
-keys, either ``expire`` or ``list``. If ``expire`` is specified and is set to
-``false``, then the ``password`` global config key is used as the password for
-all user accounts. If the ``expire`` key is specified and is set to ``true``
-then user passwords will be expired, preventing the default system passwords
-from being used.
-
-If the ``list`` key is provided, a list of
-``username:password`` pairs can be specified. The usernames specified
-must already exist on the system, or have been created using the
-``cc_users_groups`` module. A password can be randomly generated using
-``username:RANDOM`` or ``username:R``. A hashed password can be specified
-using ``username:$6$salt$hash``. Password ssh authentication can be
-enabled, disabled, or left to system defaults using ``ssh_pwauth``.
+This module consumes three top-level config keys: ``ssh_pwauth``, ``chpasswd``
+and ``password``.
+
+The ``ssh_pwauth`` config key determines whether or not sshd will be configured
+to accept password authentication. True values will enable password auth,
+false values will disable password auth, and the literal string ``unchanged``
+will leave it unchanged. Setting no value will also leave the current setting
+on-disk unchanged.
+
+The ``chpasswd`` config key accepts a dictionary containing either or both of
+``expire`` and ``list``.
+
+If the ``list`` key is provided, it should contain a list of
+``username:password`` pairs. This can be either a YAML list (of strings), or a
+multi-line string with one pair per line. Each user will have the
+corresponding password set. A password can be randomly generated by specifying
+``RANDOM`` or ``R`` as a user's password. A hashed password, created by a tool
+like ``mkpasswd``, can be specified; a regex
+(``r'\\$(1|2a|2y|5|6)(\\$.+){2}'``) is used to determine if a password value
+should be treated as a hash.

 .. note::
-   if using ``expire: true`` then a ssh authkey should be specified or it may
-   not be possible to login to the system
+   The users specified must already exist on the system. Users will have been
+   created by the ``cc_users_groups`` module at this point.
+
+By default, all users on the system will have their passwords expired (meaning
+that they will have to be reset the next time the user logs in). To disable
+this behaviour, set ``expire`` under ``chpasswd`` to a false value.
+
+If a ``list`` of user/password pairs is not specified under ``chpasswd``, then
+the value of the ``password`` config key will be used to set the default user's
+password.

 **Internal name:** ``cc_set_passwords``

@@ -160,6 +173,8 @@ def handle(_name, cfg, cloud, log, args):
     hashed_users = []
     randlist = []
     users = []
+    # N.B. This regex is included in the documentation (i.e. the module
+    # docstring), so any changes to it should be reflected there.
     prog = re.compile(r'\$(1|2a|2y|5|6)(\$.+){2}')
     for line in plist:
         u, p = line.split(':', 1)
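The regex now quoted in the docstring, `r'\$(1|2a|2y|5|6)(\$.+){2}'`, is how `handle` decides whether a `username:password` value is a crypt-style hash (`$1$` MD5, `$2a$`/`$2y$` bcrypt, `$5$` SHA-256, `$6$` SHA-512) rather than a plain-text or `RANDOM` password. A quick demonstration (the `is_hashed` wrapper is illustrative, not part of the module):

```python
import re

# Same pattern as cc_set_passwords: crypt-style '$<id>$<salt>$<hash>'
HASH_RE = re.compile(r'\$(1|2a|2y|5|6)(\$.+){2}')


def is_hashed(password):
    """Return True if the value looks like a pre-hashed crypt string."""
    return bool(HASH_RE.match(password))
```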
diff --git a/cloudinit/config/cc_ssh.py b/cloudinit/config/cc_ssh.py
index f8f7cb3..fdd8f4d 100755
--- a/cloudinit/config/cc_ssh.py
+++ b/cloudinit/config/cc_ssh.py
@@ -91,6 +91,9 @@ public keys.
     ssh_authorized_keys:
       - ssh-rsa AAAAB3NzaC1yc2EAAAABIwAAAGEA3FSyQwBI6Z+nCSjUU ...
      - ssh-rsa AAAAB3NzaC1yc2EAAAABIwAAAQEA3I7VUf2l5gSn5uavROsc5HRDpZ ...
+    ssh_publish_hostkeys:
+      enabled: <true/false> (Defaults to true)
+      blacklist: <list of key types> (Defaults to [dsa])
 """

 import glob
@@ -104,6 +107,10 @@ from cloudinit import util

 GENERATE_KEY_NAMES = ['rsa', 'dsa', 'ecdsa', 'ed25519']
 KEY_FILE_TPL = '/etc/ssh/ssh_host_%s_key'
+PUBLISH_HOST_KEYS = True
+# Don't publish the dsa hostkey by default since OpenSSH recommends not using
+# it.
+HOST_KEY_PUBLISH_BLACKLIST = ['dsa']

 CONFIG_KEY_TO_FILE = {}
 PRIV_TO_PUB = {}
@@ -176,6 +183,23 @@ def handle(_name, cfg, cloud, log, _args):
             util.logexc(log, "Failed generating key type %s to "
                         "file %s", keytype, keyfile)

+    if "ssh_publish_hostkeys" in cfg:
+        host_key_blacklist = util.get_cfg_option_list(
+            cfg["ssh_publish_hostkeys"], "blacklist",
+            HOST_KEY_PUBLISH_BLACKLIST)
+        publish_hostkeys = util.get_cfg_option_bool(
+            cfg["ssh_publish_hostkeys"], "enabled", PUBLISH_HOST_KEYS)
+    else:
+        host_key_blacklist = HOST_KEY_PUBLISH_BLACKLIST
+        publish_hostkeys = PUBLISH_HOST_KEYS
+
+    if publish_hostkeys:
+        hostkeys = get_public_host_keys(blacklist=host_key_blacklist)
+        try:
+            cloud.datasource.publish_host_keys(hostkeys)
+        except Exception:
+            util.logexc(log, "Publishing host keys failed!")
+
     try:
         (users, _groups) = ug_util.normalize_users_groups(cfg, cloud.distro)
         (user, _user_config) = ug_util.extract_default(users)
@@ -209,4 +233,35 @@ def apply_credentials(keys, user, disable_root, disable_root_opts):

     ssh_util.setup_user_keys(keys, 'root', options=key_prefix)

+
+def get_public_host_keys(blacklist=None):
+    """Read host keys from /etc/ssh/*.pub files and return them as a list.
+
+    @param blacklist: List of key types to ignore. e.g. ['dsa', 'rsa']
+    @returns: List of keys, each formatted as a two-element tuple.
+        e.g. [('ssh-rsa', 'AAAAB3Nz...'), ('ssh-ed25519', 'AAAAC3Nx...')]
+    """
+    public_key_file_tmpl = '%s.pub' % (KEY_FILE_TPL,)
+    key_list = []
+    blacklist_files = []
+    if blacklist:
+        # Convert blacklist to filenames:
+        # 'dsa' -> '/etc/ssh/ssh_host_dsa_key.pub'
+        blacklist_files = [public_key_file_tmpl % (key_type,)
+                           for key_type in blacklist]
+    # Get list of public key files and filter out blacklisted files.
+    file_list = [hostfile for hostfile
+                 in glob.glob(public_key_file_tmpl % ('*',))
+                 if hostfile not in blacklist_files]
+
+    # Read host key files, retrieve first two fields as a tuple and
+    # append that tuple to key_list.
+    for file_name in file_list:
+        file_contents = util.load_file(file_name)
+        key_data = file_contents.split()
+        if key_data and len(key_data) > 1:
+            key_list.append(tuple(key_data[:2]))
+    return key_list
+
+
 # vi: ts=4 expandtab
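The new `get_public_host_keys` expands `KEY_FILE_TPL` into a `.pub` glob, drops blacklisted key types by filename, and keeps only the first two whitespace-separated fields (type and base64 data) of each file. The same logic can be sketched against a scratch directory (the directory and file contents here are fixtures, not the real `/etc/ssh` paths):

```python
import glob
import os
import tempfile

# Basename form of cloud-init's KEY_FILE_TPL, for use in a scratch dir.
KEY_FILE_TPL = 'ssh_host_%s_key'


def public_host_keys(ssh_dir, blacklist=None):
    """Return (key-type, key-data) tuples for non-blacklisted host keys."""
    tmpl = os.path.join(ssh_dir, KEY_FILE_TPL + '.pub')
    # Convert key types to filenames, e.g. 'dsa' -> '.../ssh_host_dsa_key.pub'
    skip = {tmpl % (key_type,) for key_type in (blacklist or [])}
    keys = []
    for path in sorted(glob.glob(tmpl % ('*',))):
        if path in skip:
            continue
        fields = open(path).read().split()
        if len(fields) > 1:
            keys.append(tuple(fields[:2]))  # drop trailing comment field
    return keys
```

Dropping the third field means host-key comments never leak to the datasource; only the material needed to populate `known_hosts` entries is published.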
diff --git a/cloudinit/config/cc_ubuntu_drivers.py b/cloudinit/config/cc_ubuntu_drivers.py
index 91feb60..297451d 100644
--- a/cloudinit/config/cc_ubuntu_drivers.py
+++ b/cloudinit/config/cc_ubuntu_drivers.py
@@ -2,12 +2,14 @@

 """Ubuntu Drivers: Interact with third party drivers in Ubuntu."""

+import os
 from textwrap import dedent

 from cloudinit.config.schema import (
     get_schema_doc, validate_cloudconfig_schema)
 from cloudinit import log as logging
 from cloudinit.settings import PER_INSTANCE
+from cloudinit import temp_utils
 from cloudinit import type_utils
 from cloudinit import util

@@ -64,6 +66,33 @@ OLD_UBUNTU_DRIVERS_STDERR_NEEDLE = (
 __doc__ = get_schema_doc(schema)  # Supplement python help()


+# Use a debconf template to configure a global debconf variable
+# (linux/nvidia/latelink) setting this to "true" allows the
+# 'linux-restricted-modules' deb to accept the NVIDIA EULA and the package
+# will automatically link the drivers to the running kernel.
+
+# EOL_XENIAL: can then drop this script and use python3-debconf which is only
+# available in Bionic and later. Can't use python3-debconf currently as it
+# isn't in Xenial and doesn't yet support X_LOADTEMPLATEFILE debconf command.
+
+NVIDIA_DEBCONF_CONTENT = """\
+Template: linux/nvidia/latelink
+Type: boolean
+Default: true
+Description: Late-link NVIDIA kernel modules?
+ Enable this to link the NVIDIA kernel modules in cloud-init and
+ make them available for use.
+"""
+
+NVIDIA_DRIVER_LATELINK_DEBCONF_SCRIPT = """\
+#!/bin/sh
+# Allow cloud-init to trigger EULA acceptance via registering a debconf
+# template to set linux/nvidia/latelink true
+. /usr/share/debconf/confmodule
+db_x_loadtemplatefile "$1" cloud-init
+"""
+
+
 def install_drivers(cfg, pkg_install_func):
     if not isinstance(cfg, dict):
         raise TypeError(
@@ -89,9 +118,28 @@ def install_drivers(cfg, pkg_install_func):
     if version_cfg:
         driver_arg += ':{}'.format(version_cfg)

-    LOG.debug("Installing NVIDIA drivers (%s=%s, version=%s)",
+    LOG.debug("Installing and activating NVIDIA drivers (%s=%s, version=%s)",
               cfgpath, nv_acc, version_cfg if version_cfg else 'latest')

+    # Register and set debconf selection linux/nvidia/latelink = true
+    tdir = temp_utils.mkdtemp(needs_exe=True)
+    debconf_file = os.path.join(tdir, 'nvidia.template')
+    debconf_script = os.path.join(tdir, 'nvidia-debconf.sh')
+    try:
+        util.write_file(debconf_file, NVIDIA_DEBCONF_CONTENT)
+        util.write_file(
+            debconf_script,
+            util.encode_text(NVIDIA_DRIVER_LATELINK_DEBCONF_SCRIPT),
+            mode=0o755)
+        util.subp([debconf_script, debconf_file])
+    except Exception as e:
+        util.logexc(
+            LOG, "Failed to register NVIDIA debconf template: %s", str(e))
+        raise
+    finally:
+        if os.path.isdir(tdir):
+            util.del_dir(tdir)
+
     try:
         util.subp(['ubuntu-drivers', 'install', '--gpgpu', driver_arg])
     except util.ProcessExecutionError as exc:
diff --git a/cloudinit/config/tests/test_ssh.py b/cloudinit/config/tests/test_ssh.py
index c8a4271..e778984 100644
--- a/cloudinit/config/tests/test_ssh.py
+++ b/cloudinit/config/tests/test_ssh.py
@@ -1,5 +1,6 @@
 # This file is part of cloud-init. See LICENSE file for license information.

+import os.path

 from cloudinit.config import cc_ssh
 from cloudinit import ssh_util
@@ -12,6 +13,25 @@ MODPATH = "cloudinit.config.cc_ssh."
 class TestHandleSsh(CiTestCase):
     """Test cc_ssh handling of ssh config."""

+    def _publish_hostkey_test_setup(self):
+        self.test_hostkeys = {
+            'dsa': ('ssh-dss', 'AAAAB3NzaC1kc3MAAACB'),
+            'ecdsa': ('ecdsa-sha2-nistp256', 'AAAAE2VjZ'),
+            'ed25519': ('ssh-ed25519', 'AAAAC3NzaC1lZDI'),
+            'rsa': ('ssh-rsa', 'AAAAB3NzaC1yc2EAAA'),
+        }
+        self.test_hostkey_files = []
+        hostkey_tmpdir = self.tmp_dir()
+        for key_type in ['dsa', 'ecdsa', 'ed25519', 'rsa']:
+            key_data = self.test_hostkeys[key_type]
+            filename = 'ssh_host_%s_key.pub' % key_type
+            filepath = os.path.join(hostkey_tmpdir, filename)
+            self.test_hostkey_files.append(filepath)
+            with open(filepath, 'w') as f:
+                f.write(' '.join(key_data))
+
+        cc_ssh.KEY_FILE_TPL = os.path.join(hostkey_tmpdir, 'ssh_host_%s_key')
+
     def test_apply_credentials_with_user(self, m_setup_keys):
         """Apply keys for the given user and root."""
         keys = ["key1"]
@@ -64,6 +84,7 @@ class TestHandleSsh(CiTestCase):
         # Mock os.path.exits to True to short-circuit the key writing logic
         m_path_exists.return_value = True
         m_nug.return_value = ([], {})
+        cc_ssh.PUBLISH_HOST_KEYS = False
         cloud = self.tmp_cloud(
             distro='ubuntu', metadata={'public-keys': keys})
         cc_ssh.handle("name", cfg, cloud, None, None)
@@ -149,3 +170,148 @@ class TestHandleSsh(CiTestCase):
         self.assertEqual([mock.call(set(keys), user),
                           mock.call(set(keys), "root", options="")],
                          m_setup_keys.call_args_list)
+
+    @mock.patch(MODPATH + "glob.glob")
+    @mock.patch(MODPATH + "ug_util.normalize_users_groups")
+    @mock.patch(MODPATH + "os.path.exists")
+    def test_handle_publish_hostkeys_default(
+            self, m_path_exists, m_nug, m_glob, m_setup_keys):
+        """Test handle with various configs for ssh_publish_hostkeys."""
+        self._publish_hostkey_test_setup()
+        cc_ssh.PUBLISH_HOST_KEYS = True
+        keys = ["key1"]
+        user = "clouduser"
+        # Return no matching keys for first glob, test keys for second.
+        m_glob.side_effect = iter([
+            [],
+            self.test_hostkey_files,
+        ])
+        # Mock os.path.exits to True to short-circuit the key writing logic
+        m_path_exists.return_value = True
+        m_nug.return_value = ({user: {"default": user}}, {})
+        cloud = self.tmp_cloud(
+            distro='ubuntu', metadata={'public-keys': keys})
+        cloud.datasource.publish_host_keys = mock.Mock()
+
+        cfg = {}
+        expected_call = [self.test_hostkeys[key_type] for key_type
+                         in ['ecdsa', 'ed25519', 'rsa']]
+        cc_ssh.handle("name", cfg, cloud, None, None)
+        self.assertEqual([mock.call(expected_call)],
+                         cloud.datasource.publish_host_keys.call_args_list)
+
+    @mock.patch(MODPATH + "glob.glob")
+    @mock.patch(MODPATH + "ug_util.normalize_users_groups")
+    @mock.patch(MODPATH + "os.path.exists")
+    def test_handle_publish_hostkeys_config_enable(
+            self, m_path_exists, m_nug, m_glob, m_setup_keys):
+        """Test handle with various configs for ssh_publish_hostkeys."""
+        self._publish_hostkey_test_setup()
+        cc_ssh.PUBLISH_HOST_KEYS = False
+        keys = ["key1"]
+        user = "clouduser"
+        # Return no matching keys for first glob, test keys for second.
+        m_glob.side_effect = iter([
+            [],
+            self.test_hostkey_files,
+        ])
+        # Mock os.path.exits to True to short-circuit the key writing logic
+        m_path_exists.return_value = True
+        m_nug.return_value = ({user: {"default": user}}, {})
+        cloud = self.tmp_cloud(
+            distro='ubuntu', metadata={'public-keys': keys})
+        cloud.datasource.publish_host_keys = mock.Mock()
+
+        cfg = {'ssh_publish_hostkeys': {'enabled': True}}
+        expected_call = [self.test_hostkeys[key_type] for key_type
+                         in ['ecdsa', 'ed25519', 'rsa']]
+        cc_ssh.handle("name", cfg, cloud, None, None)
+        self.assertEqual([mock.call(expected_call)],
+                         cloud.datasource.publish_host_keys.call_args_list)
+
+    @mock.patch(MODPATH + "glob.glob")
+    @mock.patch(MODPATH + "ug_util.normalize_users_groups")
+    @mock.patch(MODPATH + "os.path.exists")
+    def test_handle_publish_hostkeys_config_disable(
+            self, m_path_exists, m_nug, m_glob, m_setup_keys):
+        """Test handle with various configs for ssh_publish_hostkeys."""
+        self._publish_hostkey_test_setup()
+        cc_ssh.PUBLISH_HOST_KEYS = True
+        keys = ["key1"]
+        user = "clouduser"
+        # Return no matching keys for first glob, test keys for second.
+        m_glob.side_effect = iter([
+            [],
+            self.test_hostkey_files,
+        ])
+        # Mock os.path.exits to True to short-circuit the key writing logic
+        m_path_exists.return_value = True
+        m_nug.return_value = ({user: {"default": user}}, {})
+        cloud = self.tmp_cloud(
+            distro='ubuntu', metadata={'public-keys': keys})
+        cloud.datasource.publish_host_keys = mock.Mock()
+
+        cfg = {'ssh_publish_hostkeys': {'enabled': False}}
+        cc_ssh.handle("name", cfg, cloud, None, None)
+        self.assertFalse(cloud.datasource.publish_host_keys.call_args_list)
+        cloud.datasource.publish_host_keys.assert_not_called()
+
+    @mock.patch(MODPATH + "glob.glob")
+    @mock.patch(MODPATH + "ug_util.normalize_users_groups")
+    @mock.patch(MODPATH + "os.path.exists")
+    def test_handle_publish_hostkeys_config_blacklist(
+            self, m_path_exists, m_nug, m_glob, m_setup_keys):
+        """Test handle with various configs for ssh_publish_hostkeys."""
+        self._publish_hostkey_test_setup()
+        cc_ssh.PUBLISH_HOST_KEYS = True
+        keys = ["key1"]
+        user = "clouduser"
+        # Return no matching keys for first glob, test keys for second.
+        m_glob.side_effect = iter([
+            [],
+            self.test_hostkey_files,
+        ])
+        # Mock os.path.exits to True to short-circuit the key writing logic
+        m_path_exists.return_value = True
+        m_nug.return_value = ({user: {"default": user}}, {})
+        cloud = self.tmp_cloud(
+            distro='ubuntu', metadata={'public-keys': keys})
+        cloud.datasource.publish_host_keys = mock.Mock()
+
+        cfg = {'ssh_publish_hostkeys': {'enabled': True,
+                                        'blacklist': ['dsa', 'rsa']}}
+        expected_call = [self.test_hostkeys[key_type] for key_type
+                         in ['ecdsa', 'ed25519']]
+        cc_ssh.handle("name", cfg, cloud, None, None)
+        self.assertEqual([mock.call(expected_call)],
+                         cloud.datasource.publish_host_keys.call_args_list)
+
+    @mock.patch(MODPATH + "glob.glob")
+    @mock.patch(MODPATH + "ug_util.normalize_users_groups")
+    @mock.patch(MODPATH + "os.path.exists")
+    def test_handle_publish_hostkeys_empty_blacklist(
+            self, m_path_exists, m_nug, m_glob, m_setup_keys):
+        """Test handle with various configs for ssh_publish_hostkeys."""
+        self._publish_hostkey_test_setup()
+        cc_ssh.PUBLISH_HOST_KEYS = True
+        keys = ["key1"]
+        user = "clouduser"
+        # Return no matching keys for first glob, test keys for second.
+        m_glob.side_effect = iter([
+            [],
+            self.test_hostkey_files,
+        ])
+        # Mock os.path.exits to True to short-circuit the key writing logic
+        m_path_exists.return_value = True
+        m_nug.return_value = ({user: {"default": user}}, {})
+        cloud = self.tmp_cloud(
+            distro='ubuntu', metadata={'public-keys': keys})
+        cloud.datasource.publish_host_keys = mock.Mock()
+
+        cfg = {'ssh_publish_hostkeys': {'enabled': True,
+                                        'blacklist': []}}
+        expected_call = [self.test_hostkeys[key_type] for key_type
+                         in ['dsa', 'ecdsa', 'ed25519', 'rsa']]
+        cc_ssh.handle("name", cfg, cloud, None, None)
+        self.assertEqual([mock.call(expected_call)],
+                         cloud.datasource.publish_host_keys.call_args_list)
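Each new test seeds `m_glob.side_effect` with an iterator so that consecutive `glob.glob` calls return different results: nothing for the first (private-key) lookup, the fixture `.pub` files for the second. The `unittest.mock` pattern in isolation:

```python
from unittest import mock

# side_effect given an iterable makes the mock return the next item on
# each successive call, instead of the same return_value every time.
fake_glob = mock.Mock(side_effect=iter([
    [],                              # first call: no private-key matches
    ['/tmp/ssh_host_rsa_key.pub'],   # second call: the public keys
]))
```

A further call past the end of the iterator raises `StopIteration`, which makes an unexpected extra `glob.glob` call fail loudly in a test.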
diff --git a/cloudinit/config/tests/test_ubuntu_drivers.py b/cloudinit/config/tests/test_ubuntu_drivers.py
index efba4ce..4695269 100644
--- a/cloudinit/config/tests/test_ubuntu_drivers.py
+++ b/cloudinit/config/tests/test_ubuntu_drivers.py
@@ -1,6 +1,7 @@
 # This file is part of cloud-init. See LICENSE file for license information.
 
 import copy
+import os
 
 from cloudinit.tests.helpers import CiTestCase, skipUnlessJsonSchema, mock
 from cloudinit.config.schema import (
@@ -9,11 +10,27 @@ from cloudinit.config import cc_ubuntu_drivers as drivers
 from cloudinit.util import ProcessExecutionError
 
 MPATH = "cloudinit.config.cc_ubuntu_drivers."
+M_TMP_PATH = MPATH + "temp_utils.mkdtemp"
 OLD_UBUNTU_DRIVERS_ERROR_STDERR = (
     "ubuntu-drivers: error: argument <command>: invalid choice: 'install' "
     "(choose from 'list', 'autoinstall', 'devices', 'debug')\n")
 
 
+class AnyTempScriptAndDebconfFile(object):
+
+    def __init__(self, tmp_dir, debconf_file):
+        self.tmp_dir = tmp_dir
+        self.debconf_file = debconf_file
+
+    def __eq__(self, cmd):
+        if not len(cmd) == 2:
+            return False
+        script, debconf_file = cmd
+        if bool(script.startswith(self.tmp_dir) and script.endswith('.sh')):
+            return debconf_file == self.debconf_file
+        return False
+
+
 class TestUbuntuDrivers(CiTestCase):
     cfg_accepted = {'drivers': {'nvidia': {'license-accepted': True}}}
     install_gpgpu = ['ubuntu-drivers', 'install', '--gpgpu', 'nvidia']
@@ -28,16 +45,23 @@ class TestUbuntuDrivers(CiTestCase):
         {'drivers': {'nvidia': {'license-accepted': "TRUE"}}},
         schema=drivers.schema, strict=True)
 
+    @mock.patch(M_TMP_PATH)
     @mock.patch(MPATH + "util.subp", return_value=('', ''))
     @mock.patch(MPATH + "util.which", return_value=False)
-    def _assert_happy_path_taken(self, config, m_which, m_subp):
+    def _assert_happy_path_taken(
+            self, config, m_which, m_subp, m_tmp):
         """Positive path test through handle. Package should be installed."""
+        tdir = self.tmp_dir()
+        debconf_file = os.path.join(tdir, 'nvidia.template')
+        m_tmp.return_value = tdir
         myCloud = mock.MagicMock()
         drivers.handle('ubuntu_drivers', config, myCloud, None, None)
         self.assertEqual([mock.call(['ubuntu-drivers-common'])],
                          myCloud.distro.install_packages.call_args_list)
-        self.assertEqual([mock.call(self.install_gpgpu)],
-                         m_subp.call_args_list)
+        self.assertEqual(
+            [mock.call(AnyTempScriptAndDebconfFile(tdir, debconf_file)),
+             mock.call(self.install_gpgpu)],
+            m_subp.call_args_list)
 
     def test_handle_does_package_install(self):
         self._assert_happy_path_taken(self.cfg_accepted)
@@ -48,19 +72,33 @@ class TestUbuntuDrivers(CiTestCase):
         new_config['drivers']['nvidia']['license-accepted'] = true_value
         self._assert_happy_path_taken(new_config)
 
-    @mock.patch(MPATH + "util.subp", side_effect=ProcessExecutionError(
-        stdout='No drivers found for installation.\n', exit_code=1))
+    @mock.patch(M_TMP_PATH)
+    @mock.patch(MPATH + "util.subp")
     @mock.patch(MPATH + "util.which", return_value=False)
-    def test_handle_raises_error_if_no_drivers_found(self, m_which, m_subp):
+    def test_handle_raises_error_if_no_drivers_found(
+            self, m_which, m_subp, m_tmp):
         """If ubuntu-drivers doesn't install any drivers, raise an error."""
+        tdir = self.tmp_dir()
+        debconf_file = os.path.join(tdir, 'nvidia.template')
+        m_tmp.return_value = tdir
         myCloud = mock.MagicMock()
+
+        def fake_subp(cmd):
+            if cmd[0].startswith(tdir):
+                return
+            raise ProcessExecutionError(
+                stdout='No drivers found for installation.\n', exit_code=1)
+        m_subp.side_effect = fake_subp
+
         with self.assertRaises(Exception):
             drivers.handle(
                 'ubuntu_drivers', self.cfg_accepted, myCloud, None, None)
         self.assertEqual([mock.call(['ubuntu-drivers-common'])],
                          myCloud.distro.install_packages.call_args_list)
-        self.assertEqual([mock.call(self.install_gpgpu)],
-                         m_subp.call_args_list)
+        self.assertEqual(
+            [mock.call(AnyTempScriptAndDebconfFile(tdir, debconf_file)),
+             mock.call(self.install_gpgpu)],
+            m_subp.call_args_list)
         self.assertIn('ubuntu-drivers found no drivers for installation',
                       self.logs.getvalue())
 
@@ -108,18 +146,25 @@ class TestUbuntuDrivers(CiTestCase):
             myLog.debug.call_args_list[0][0][0])
         self.assertEqual(0, m_install_drivers.call_count)
 
+    @mock.patch(M_TMP_PATH)
     @mock.patch(MPATH + "util.subp", return_value=('', ''))
     @mock.patch(MPATH + "util.which", return_value=True)
-    def test_install_drivers_no_install_if_present(self, m_which, m_subp):
+    def test_install_drivers_no_install_if_present(
+            self, m_which, m_subp, m_tmp):
         """If 'ubuntu-drivers' is present, no package install should occur."""
+        tdir = self.tmp_dir()
+        debconf_file = os.path.join(tdir, 'nvidia.template')
+        m_tmp.return_value = tdir
         pkg_install = mock.MagicMock()
         drivers.install_drivers(self.cfg_accepted['drivers'],
                                 pkg_install_func=pkg_install)
         self.assertEqual(0, pkg_install.call_count)
         self.assertEqual([mock.call('ubuntu-drivers')],
                          m_which.call_args_list)
-        self.assertEqual([mock.call(self.install_gpgpu)],
-                         m_subp.call_args_list)
+        self.assertEqual(
+            [mock.call(AnyTempScriptAndDebconfFile(tdir, debconf_file)),
+             mock.call(self.install_gpgpu)],
+            m_subp.call_args_list)
 
     def test_install_drivers_rejects_invalid_config(self):
         """install_drivers should raise TypeError if not given a config dict"""
@@ -128,20 +173,33 @@ class TestUbuntuDrivers(CiTestCase):
         drivers.install_drivers("mystring", pkg_install_func=pkg_install)
         self.assertEqual(0, pkg_install.call_count)
 
-    @mock.patch(MPATH + "util.subp", side_effect=ProcessExecutionError(
-        stderr=OLD_UBUNTU_DRIVERS_ERROR_STDERR, exit_code=2))
+    @mock.patch(M_TMP_PATH)
+    @mock.patch(MPATH + "util.subp")
     @mock.patch(MPATH + "util.which", return_value=False)
     def test_install_drivers_handles_old_ubuntu_drivers_gracefully(
-            self, m_which, m_subp):
+            self, m_which, m_subp, m_tmp):
         """Older ubuntu-drivers versions should emit message and raise error"""
+        tdir = self.tmp_dir()
+        debconf_file = os.path.join(tdir, 'nvidia.template')
+        m_tmp.return_value = tdir
         myCloud = mock.MagicMock()
+
+        def fake_subp(cmd):
+            if cmd[0].startswith(tdir):
+                return
+            raise ProcessExecutionError(
+                stderr=OLD_UBUNTU_DRIVERS_ERROR_STDERR, exit_code=2)
+        m_subp.side_effect = fake_subp
+
         with self.assertRaises(Exception):
             drivers.handle(
                 'ubuntu_drivers', self.cfg_accepted, myCloud, None, None)
         self.assertEqual([mock.call(['ubuntu-drivers-common'])],
                          myCloud.distro.install_packages.call_args_list)
-        self.assertEqual([mock.call(self.install_gpgpu)],
-                         m_subp.call_args_list)
+        self.assertEqual(
+            [mock.call(AnyTempScriptAndDebconfFile(tdir, debconf_file)),
+             mock.call(self.install_gpgpu)],
+            m_subp.call_args_list)
         self.assertIn('WARNING: the available version of ubuntu-drivers is'
                       ' too old to perform requested driver installation',
                       self.logs.getvalue())
@@ -153,16 +211,21 @@ class TestUbuntuDriversWithVersion(TestUbuntuDrivers):
         'drivers': {'nvidia': {'license-accepted': True, 'version': '123'}}}
     install_gpgpu = ['ubuntu-drivers', 'install', '--gpgpu', 'nvidia:123']
 
+    @mock.patch(M_TMP_PATH)
     @mock.patch(MPATH + "util.subp", return_value=('', ''))
     @mock.patch(MPATH + "util.which", return_value=False)
-    def test_version_none_uses_latest(self, m_which, m_subp):
+    def test_version_none_uses_latest(self, m_which, m_subp, m_tmp):
+        tdir = self.tmp_dir()
+        debconf_file = os.path.join(tdir, 'nvidia.template')
+        m_tmp.return_value = tdir
         myCloud = mock.MagicMock()
         version_none_cfg = {
             'drivers': {'nvidia': {'license-accepted': True, 'version': None}}}
         drivers.handle(
             'ubuntu_drivers', version_none_cfg, myCloud, None, None)
         self.assertEqual(
-            [mock.call(['ubuntu-drivers', 'install', '--gpgpu', 'nvidia'])],
+            [mock.call(AnyTempScriptAndDebconfFile(tdir, debconf_file)),
+             mock.call(['ubuntu-drivers', 'install', '--gpgpu', 'nvidia'])],
             m_subp.call_args_list)
 
     def test_specifying_a_version_doesnt_override_license_acceptance(self):
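Note: the `AnyTempScriptAndDebconfFile` helper in the diff above relies on Python's equality protocol — `mock` compares expected and actual call arguments with `==`, so any object defining a suitable `__eq__` can stand in for a value the test cannot predict (here, a randomly named temp script). A minimal standalone sketch of that matcher pattern; the class name and paths below are illustrative, not part of the proposed diff:

```python
from unittest import mock


class AnyTempScript:
    """Equality helper: matches any argv [script, conf] where the script
    lives under tmp_dir and ends in '.sh' (names here are illustrative)."""

    def __init__(self, tmp_dir, conf_file):
        self.tmp_dir = tmp_dir
        self.conf_file = conf_file

    def __eq__(self, cmd):
        # mock invokes this when comparing expected vs. actual call args.
        if not (isinstance(cmd, (list, tuple)) and len(cmd) == 2):
            return False
        script, conf_file = cmd
        return (script.startswith(self.tmp_dir)
                and script.endswith('.sh')
                and conf_file == self.conf_file)


# The matcher slots into an expected call list even though the script
# path is random (e.g. produced by mkdtemp) at test time.
runner = mock.Mock()
runner(['/tmp/tdir123/setup_a1b2.sh', '/tmp/tdir123/nvidia.template'])
expected = [mock.call(
    AnyTempScript('/tmp/tdir123', '/tmp/tdir123/nvidia.template'))]
assert expected == runner.call_args_list
```

The same idea underlies `mock.ANY`; a custom `__eq__` class just narrows "anything" down to "anything of this shape".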
diff --git a/cloudinit/distros/__init__.py b/cloudinit/distros/__init__.py
index 20c994d..00bdee3 100644
--- a/cloudinit/distros/__init__.py
+++ b/cloudinit/distros/__init__.py
@@ -396,16 +396,16 @@ class Distro(object):
         else:
             create_groups = True
 
-        adduser_cmd = ['useradd', name]
-        log_adduser_cmd = ['useradd', name]
+        useradd_cmd = ['useradd', name]
+        log_useradd_cmd = ['useradd', name]
         if util.system_is_snappy():
-            adduser_cmd.append('--extrausers')
-            log_adduser_cmd.append('--extrausers')
+            useradd_cmd.append('--extrausers')
+            log_useradd_cmd.append('--extrausers')
 
         # Since we are creating users, we want to carefully validate the
         # inputs. If something goes wrong, we can end up with a system
         # that nobody can login to.
-        adduser_opts = {
+        useradd_opts = {
             "gecos": '--comment',
             "homedir": '--home',
             "primary_group": '--gid',
@@ -418,7 +418,7 @@ class Distro(object):
             "selinux_user": '--selinux-user',
         }
 
-        adduser_flags = {
+        useradd_flags = {
             "no_user_group": '--no-user-group',
             "system": '--system',
             "no_log_init": '--no-log-init',
@@ -453,32 +453,32 @@ class Distro(object):
         # Check the values and create the command
         for key, val in sorted(kwargs.items()):
 
-            if key in adduser_opts and val and isinstance(val, str):
-                adduser_cmd.extend([adduser_opts[key], val])
+            if key in useradd_opts and val and isinstance(val, str):
+                useradd_cmd.extend([useradd_opts[key], val])
 
                 # Redact certain fields from the logs
                 if key in redact_opts:
-                    log_adduser_cmd.extend([adduser_opts[key], 'REDACTED'])
+                    log_useradd_cmd.extend([useradd_opts[key], 'REDACTED'])
                 else:
-                    log_adduser_cmd.extend([adduser_opts[key], val])
+                    log_useradd_cmd.extend([useradd_opts[key], val])
 
-            elif key in adduser_flags and val:
-                adduser_cmd.append(adduser_flags[key])
-                log_adduser_cmd.append(adduser_flags[key])
+            elif key in useradd_flags and val:
+                useradd_cmd.append(useradd_flags[key])
+                log_useradd_cmd.append(useradd_flags[key])
 
         # Don't create the home directory if directed so or if the user is a
         # system user
         if kwargs.get('no_create_home') or kwargs.get('system'):
-            adduser_cmd.append('-M')
-            log_adduser_cmd.append('-M')
+            useradd_cmd.append('-M')
+            log_useradd_cmd.append('-M')
         else:
-            adduser_cmd.append('-m')
-            log_adduser_cmd.append('-m')
+            useradd_cmd.append('-m')
+            log_useradd_cmd.append('-m')
 
         # Run the command
         LOG.debug("Adding user %s", name)
         try:
-            util.subp(adduser_cmd, logstring=log_adduser_cmd)
+            util.subp(useradd_cmd, logstring=log_useradd_cmd)
         except Exception as e:
             util.logexc(LOG, "Failed to create user %s", name)
             raise e
@@ -490,15 +490,15 @@ class Distro(object):
 
         snapuser = kwargs.get('snapuser')
         known = kwargs.get('known', False)
-        adduser_cmd = ["snap", "create-user", "--sudoer", "--json"]
+        create_user_cmd = ["snap", "create-user", "--sudoer", "--json"]
         if known:
-            adduser_cmd.append("--known")
-        adduser_cmd.append(snapuser)
+            create_user_cmd.append("--known")
+        create_user_cmd.append(snapuser)
 
         # Run the command
         LOG.debug("Adding snap user %s", name)
         try:
-            (out, err) = util.subp(adduser_cmd, logstring=adduser_cmd,
-                                   capture=True)
+            (out, err) = util.subp(create_user_cmd, logstring=create_user_cmd,
+                                   capture=True)
         LOG.debug("snap create-user returned: %s:%s", out, err)
         jobj = util.load_json(out)
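Aside: the `logstring=` argument passed to `util.subp` in the hunk above is what keeps sensitive values (e.g. password hashes) out of the logs while the real argv still executes — the code builds two parallel command lists and redacts secrets only in the loggable copy. A standalone sketch of that pattern; the helper name and option table here are illustrative, not cloud-init's actual API:

```python
def build_useradd(name, **kwargs):
    """Return (argv, loggable_argv) for a useradd-style command; the
    loggable copy masks sensitive values. Option table is illustrative."""
    opts = {'gecos': '--comment', 'homedir': '--home',
            'passwd': '--password'}
    redact_opts = {'passwd'}  # never write password hashes to the log
    cmd, log_cmd = ['useradd', name], ['useradd', name]
    for key, val in sorted(kwargs.items()):
        if key in opts and val and isinstance(val, str):
            cmd.extend([opts[key], val])
            # Mirror the real argv, substituting a placeholder for secrets.
            log_cmd.extend([opts[key],
                            'REDACTED' if key in redact_opts else val])
    return cmd, log_cmd


cmd, log_cmd = build_useradd('bob', gecos='Bob', passwd='$6$secret')
# cmd carries the real hash; log_cmd is the safe copy to hand to a
# logger (or, in cloud-init's case, to subp's logstring parameter).
```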
diff --git a/cloudinit/distros/arch.py b/cloudinit/distros/arch.py
index b814c8b..9f89c5f 100644
--- a/cloudinit/distros/arch.py
+++ b/cloudinit/distros/arch.py
@@ -12,6 +12,8 @@ from cloudinit import util
 from cloudinit.distros import net_util
 from cloudinit.distros.parsers.hostname import HostnameConf
 
+from cloudinit.net.renderers import RendererNotFoundError
+
 from cloudinit.settings import PER_INSTANCE
 
 import os
@@ -24,6 +26,11 @@ class Distro(distros.Distro):
     network_conf_dir = "/etc/netctl"
     resolve_conf_fn = "/etc/resolv.conf"
     init_cmd = ['systemctl']  # init scripts
+    renderer_configs = {
+        "netplan": {"netplan_path": "/etc/netplan/50-cloud-init.yaml",
+                    "netplan_header": "# generated by cloud-init\n",
+                    "postcmds": True}
+    }
 
     def __init__(self, name, cfg, paths):
         distros.Distro.__init__(self, name, cfg, paths)
@@ -50,6 +57,13 @@ class Distro(distros.Distro):
         self.update_package_sources()
         self.package_command('', pkgs=pkglist)
 
+    def _write_network_config(self, netconfig):
+        try:
+            return self._supported_write_network_config(netconfig)
+        except RendererNotFoundError:
+            # Fall back to old _write_network
+            raise NotImplementedError
+
     def _write_network(self, settings):
         entries = net_util.translate_network(settings)
         LOG.debug("Translated ubuntu style network settings %s into %s",
diff --git a/cloudinit/distros/debian.py b/cloudinit/distros/debian.py
index d517fb8..0ad93ff 100644
--- a/cloudinit/distros/debian.py
+++ b/cloudinit/distros/debian.py
@@ -36,14 +36,14 @@ ENI_HEADER = """# This file is generated from information provided by
 # network: {config: disabled}
 """
 
-NETWORK_CONF_FN = "/etc/network/interfaces.d/50-cloud-init.cfg"
+NETWORK_CONF_FN = "/etc/network/interfaces.d/50-cloud-init"
 LOCALE_CONF_FN = "/etc/default/locale"
 
 
 class Distro(distros.Distro):
     hostname_conf_fn = "/etc/hostname"
     network_conf_fn = {
-        "eni": "/etc/network/interfaces.d/50-cloud-init.cfg",
+        "eni": "/etc/network/interfaces.d/50-cloud-init",
         "netplan": "/etc/netplan/50-cloud-init.yaml"
     }
     renderer_configs = {
diff --git a/cloudinit/distros/freebsd.py b/cloudinit/distros/freebsd.py
index ff22d56..f7825fd 100644
--- a/cloudinit/distros/freebsd.py
+++ b/cloudinit/distros/freebsd.py
@@ -185,10 +185,10 @@ class Distro(distros.Distro):
             LOG.info("User %s already exists, skipping.", name)
             return False
 
-        adduser_cmd = ['pw', 'useradd', '-n', name]
-        log_adduser_cmd = ['pw', 'useradd', '-n', name]
+        pw_useradd_cmd = ['pw', 'useradd', '-n', name]
+        log_pw_useradd_cmd = ['pw', 'useradd', '-n', name]
 
-        adduser_opts = {
+        pw_useradd_opts = {
             "homedir": '-d',
             "gecos": '-c',
             "primary_group": '-g',
@@ -196,34 +196,34 @@ class Distro(distros.Distro):
             "shell": '-s',
             "inactive": '-E',
         }
-        adduser_flags = {
+        pw_useradd_flags = {
             "no_user_group": '--no-user-group',
             "system": '--system',
             "no_log_init": '--no-log-init',
         }
 
         for key, val in kwargs.items():
-            if (key in adduser_opts and val and
+            if (key in pw_useradd_opts and val and
                isinstance(val, six.string_types)):
-                adduser_cmd.extend([adduser_opts[key], val])
+                pw_useradd_cmd.extend([pw_useradd_opts[key], val])
 
-            elif key in adduser_flags and val:
-                adduser_cmd.append(adduser_flags[key])
-                log_adduser_cmd.append(adduser_flags[key])
+            elif key in pw_useradd_flags and val:
+                pw_useradd_cmd.append(pw_useradd_flags[key])
+                log_pw_useradd_cmd.append(pw_useradd_flags[key])
 
         if 'no_create_home' in kwargs or 'system' in kwargs:
-            adduser_cmd.append('-d/nonexistent')
-            log_adduser_cmd.append('-d/nonexistent')
+            pw_useradd_cmd.append('-d/nonexistent')
+            log_pw_useradd_cmd.append('-d/nonexistent')
         else:
-            adduser_cmd.append('-d/usr/home/%s' % name)
-            adduser_cmd.append('-m')
-            log_adduser_cmd.append('-d/usr/home/%s' % name)
-            log_adduser_cmd.append('-m')
+            pw_useradd_cmd.append('-d/usr/home/%s' % name)
+            pw_useradd_cmd.append('-m')
+            log_pw_useradd_cmd.append('-d/usr/home/%s' % name)
+            log_pw_useradd_cmd.append('-m')
 
         # Run the command
         LOG.info("Adding user %s", name)
         try:
-            util.subp(adduser_cmd, logstring=log_adduser_cmd)
+            util.subp(pw_useradd_cmd, logstring=log_pw_useradd_cmd)
         except Exception as e:
             util.logexc(LOG, "Failed to create user %s", name)
             raise e
diff --git a/cloudinit/distros/opensuse.py b/cloudinit/distros/opensuse.py
index 1bfe047..e41e2f7 100644
--- a/cloudinit/distros/opensuse.py
+++ b/cloudinit/distros/opensuse.py
@@ -38,6 +38,8 @@ class Distro(distros.Distro):
         'sysconfig': {
             'control': 'etc/sysconfig/network/config',
             'iface_templates': '%(base)s/network/ifcfg-%(name)s',
+            'netrules_path': (
+                'etc/udev/rules.d/85-persistent-net-cloud-init.rules'),
             'route_templates': {
                 'ipv4': '%(base)s/network/ifroute-%(name)s',
                 'ipv6': '%(base)s/network/ifroute-%(name)s',
diff --git a/cloudinit/distros/parsers/sys_conf.py b/cloudinit/distros/parsers/sys_conf.py
index c27b5d5..44df17d 100644
--- a/cloudinit/distros/parsers/sys_conf.py
+++ b/cloudinit/distros/parsers/sys_conf.py
@@ -43,6 +43,13 @@ def _contains_shell_variable(text):
 
 
 class SysConf(configobj.ConfigObj):
+    """A configobj.ConfigObj subclass specialised for sysconfig files.
+
+    :param contents:
+        The sysconfig file to parse, in a format accepted by
+        ``configobj.ConfigObj.__init__`` (i.e. "a filename, file like object,
+        or list of lines").
+    """
     def __init__(self, contents):
         configobj.ConfigObj.__init__(self, contents,
                                      interpolation=False,
diff --git a/cloudinit/distros/ubuntu.py b/cloudinit/distros/ubuntu.py
index 6815410..e5fcbc5 100644
--- a/cloudinit/distros/ubuntu.py
+++ b/cloudinit/distros/ubuntu.py
@@ -21,6 +21,21 @@ LOG = logging.getLogger(__name__)
 
 class Distro(debian.Distro):
 
+    def __init__(self, name, cfg, paths):
+        super(Distro, self).__init__(name, cfg, paths)
+        # Ubuntu specific network cfg locations
+        self.network_conf_fn = {
+            "eni": "/etc/network/interfaces.d/50-cloud-init.cfg",
+            "netplan": "/etc/netplan/50-cloud-init.yaml"
+        }
+        self.renderer_configs = {
+            "eni": {"eni_path": self.network_conf_fn["eni"],
+                    "eni_header": debian.ENI_HEADER},
+            "netplan": {"netplan_path": self.network_conf_fn["netplan"],
+                        "netplan_header": debian.ENI_HEADER,
+                        "postcmds": True}
+        }
+
     @property
     def preferred_ntp_clients(self):
         """The preferred ntp client is dependent on the version."""
diff --git a/cloudinit/net/__init__.py b/cloudinit/net/__init__.py
index 3642fb1..ea707c0 100644
--- a/cloudinit/net/__init__.py
+++ b/cloudinit/net/__init__.py
@@ -9,6 +9,7 @@ import errno
 import logging
 import os
 import re
+from functools import partial
 
 from cloudinit.net.network_state import mask_to_net_prefix
 from cloudinit import util
@@ -264,46 +265,29 @@ def find_fallback_nic(blacklist_drivers=None):
 
 
 def generate_fallback_config(blacklist_drivers=None, config_driver=None):
-    """Determine which attached net dev is most likely to have a connection and
-    generate network state to run dhcp on that interface"""
-
+    """Generate network cfg v2 for dhcp on the NIC most likely connected."""
     if not config_driver:
         config_driver = False
 
     target_name = find_fallback_nic(blacklist_drivers=blacklist_drivers)
-    if target_name:
-        target_mac = read_sys_net_safe(target_name, 'address')
-        nconf = {'config': [], 'version': 1}
-        cfg = {'type': 'physical', 'name': target_name,
-               'mac_address': target_mac, 'subnets': [{'type': 'dhcp'}]}
-        # inject the device driver name, dev_id into config if enabled and
-        # device has a valid device driver value
-        if config_driver:
-            driver = device_driver(target_name)
-            if driver:
-                cfg['params'] = {
-                    'driver': driver,
-                    'device_id': device_devid(target_name),
-                }
-        nconf['config'].append(cfg)
-        return nconf
-    else:
+    if not target_name:
         # can't read any interfaces addresses (or there are none); give up
         return None
+    target_mac = read_sys_net_safe(target_name, 'address')
+    cfg = {'dhcp4': True, 'set-name': target_name,
+           'match': {'macaddress': target_mac.lower()}}
+    if config_driver:
+        driver = device_driver(target_name)
+        if driver:
+            cfg['match']['driver'] = driver
+    nconf = {'ethernets': {target_name: cfg}, 'version': 2}
+    return nconf
 
 
-def apply_network_config_names(netcfg, strict_present=True, strict_busy=True):
-    """read the network config and rename devices accordingly.
-    if strict_present is false, then do not raise exception if no devices
-    match.  if strict_busy is false, then do not raise exception if the
-    device cannot be renamed because it is currently configured.
-
-    renames are only attempted for interfaces of type 'physical'.  It is
-    expected that the network system will create other devices with the
-    correct name in place."""
+def extract_physdevs(netcfg):
 
     def _version_1(netcfg):
-        renames = []
+        physdevs = []
         for ent in netcfg.get('config', {}):
             if ent.get('type') != 'physical':
                 continue
@@ -317,11 +301,11 @@ def apply_network_config_names(netcfg, strict_present=True, strict_busy=True):
             driver = device_driver(name)
             if not device_id:
                 device_id = device_devid(name)
-            renames.append([mac, name, driver, device_id])
-        return renames
+            physdevs.append([mac, name, driver, device_id])
+        return physdevs
 
     def _version_2(netcfg):
-        renames = []
+        physdevs = []
         for ent in netcfg.get('ethernets', {}).values():
             # only rename if configured to do so
             name = ent.get('set-name')
@@ -337,16 +321,69 @@ def apply_network_config_names(netcfg, strict_present=True, strict_busy=True):
             driver = device_driver(name)
             if not device_id:
                 device_id = device_devid(name)
-            renames.append([mac, name, driver, device_id])
-        return renames
+            physdevs.append([mac, name, driver, device_id])
+        return physdevs
+
+    version = netcfg.get('version')
+    if version == 1:
+        return _version_1(netcfg)
+    elif version == 2:
+        return _version_2(netcfg)
+
+    raise RuntimeError('Unknown network config version: %s' % version)
+
+
+def wait_for_physdevs(netcfg, strict=True):
+    physdevs = extract_physdevs(netcfg)
+
+    # set of expected iface names and mac addrs
+    expected_ifaces = dict([(iface[0], iface[1]) for iface in physdevs])
+    expected_macs = set(expected_ifaces.keys())
+
+    # set of current macs
+    present_macs = get_interfaces_by_mac().keys()
+
+    # compare the set of expected mac address values to
+    # the current macs present; we only check MAC as cloud-init
+    # has not yet renamed interfaces and the netcfg may include
+    # such renames.
+    for _ in range(0, 5):
+        if expected_macs.issubset(present_macs):
+            LOG.debug('net: all expected physical devices present')
+            return
 
-    if netcfg.get('version') == 1:
-        return _rename_interfaces(_version_1(netcfg))
-    elif netcfg.get('version') == 2:
-        return _rename_interfaces(_version_2(netcfg))
+        missing = expected_macs.difference(present_macs)
+        LOG.debug('net: waiting for expected net devices: %s', missing)
+        for mac in missing:
+            # trigger a settle, unless this interface exists
+            syspath = sys_dev_path(expected_ifaces[mac])
+            settle = partial(util.udevadm_settle, exists=syspath)
+            msg = 'Waiting for udev events to settle or %s exists' % syspath
+            util.log_time(LOG.debug, msg, func=settle)
 
-    raise RuntimeError('Failed to apply network config names. Found bad'
-                       ' network config version: %s' % netcfg.get('version'))
+        # update present_macs after settles
+        present_macs = get_interfaces_by_mac().keys()
+
+    msg = 'Not all expected physical devices present: %s' % missing
+    LOG.warning(msg)
+    if strict:
+        raise RuntimeError(msg)
+
+
+def apply_network_config_names(netcfg, strict_present=True, strict_busy=True):
+    """read the network config and rename devices accordingly.
+    if strict_present is false, then do not raise exception if no devices
+    match.  if strict_busy is false, then do not raise exception if the
+    device cannot be renamed because it is currently configured.
+
+    renames are only attempted for interfaces of type 'physical'.  It is
+    expected that the network system will create other devices with the
+    correct name in place."""
+
+    try:
+        _rename_interfaces(extract_physdevs(netcfg))
+    except RuntimeError as e:
+        raise RuntimeError('Failed to apply network config names: %s' % e)
 
 
 def interface_has_own_mac(ifname, strict=False):
@@ -622,6 +659,8 @@ def get_interfaces():
             continue
         if is_vlan(name):
             continue
+        if is_bond(name):
+            continue
         mac = get_interface_mac(name)
         # some devices may not have a mac (tun0)
         if not mac:
@@ -677,7 +716,7 @@ class EphemeralIPv4Network(object):
     """
 
     def __init__(self, interface, ip, prefix_or_mask, broadcast, router=None,
-                 connectivity_url=None):
+                 connectivity_url=None, static_routes=None):
         """Setup context manager and validate call signature.
 
         @param interface: Name of the network interface to bring up.
@@ -688,6 +727,7 @@ class EphemeralIPv4Network(object):
         @param router: Optionally the default gateway IP.
         @param connectivity_url: Optionally, a URL to verify if a usable
             connection already exists.
+        @param static_routes: Optionally a list of static routes from DHCP
         """
         if not all([interface, ip, prefix_or_mask, broadcast]):
             raise ValueError(
@@ -704,6 +744,7 @@ class EphemeralIPv4Network(object):
         self.ip = ip
         self.broadcast = broadcast
         self.router = router
+        self.static_routes = static_routes
         self.cleanup_cmds = []  # List of commands to run to cleanup state.
 
     def __enter__(self):
@@ -716,7 +757,21 @@ class EphemeralIPv4Network(object):
             return
 
         self._bringup_device()
-        if self.router:
+
+        # rfc3442 requires us to ignore the router config *if* classless static
+        # routes are provided.
+        #
+        # https://tools.ietf.org/html/rfc3442
+        #
+        # If the DHCP server returns both a Classless Static Routes option and
+        # a Router option, the DHCP client MUST ignore the Router option.
+        #
+        # Similarly, if the DHCP server returns both a Classless Static Routes
+        # option and a Static Routes option, the DHCP client MUST ignore the
+        # Static Routes option.
+        if self.static_routes:
+            self._bringup_static_routes()
+        elif self.router:
             self._bringup_router()
 
     def __exit__(self, excp_type, excp_value, excp_traceback):
@@ -760,6 +815,20 @@ class EphemeralIPv4Network(object):
             ['ip', '-family', 'inet', 'addr', 'del', cidr, 'dev',
              self.interface])
 
+    def _bringup_static_routes(self):
+        # static_routes = [("169.254.169.254/32", "130.56.248.255"),
+        #                  ("0.0.0.0/0", "130.56.240.1")]
+        for net_address, gateway in self.static_routes:
+            via_arg = []
+            if gateway != "0.0.0.0/0":
+                via_arg = ['via', gateway]
+            util.subp(
+                ['ip', '-4', 'route', 'add', net_address] + via_arg +
+                ['dev', self.interface], capture=True)
+            self.cleanup_cmds.insert(
+                0, ['ip', '-4', 'route', 'del', net_address] + via_arg +
+                ['dev', self.interface])
+
     def _bringup_router(self):
         """Perform the ip commands to fully setup the router if needed."""
         # Check if a default route exists and exit if it does
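The `_bringup_static_routes` method above issues one `ip -4 route add` per parsed route and queues the matching `del` at the front of the cleanup list, so teardown runs in reverse order. A minimal standalone sketch of just that command construction (the helper name, route list, and interface are illustrative, not upstream code):

```python
def build_route_cmds(static_routes, interface):
    """Build the add/del command lists _bringup_static_routes would run."""
    add_cmds, cleanup_cmds = [], []
    for net_address, gateway in static_routes:
        # mirror the upstream check: no 'via' for the 0.0.0.0/0 sentinel
        via_arg = [] if gateway == "0.0.0.0/0" else ['via', gateway]
        add_cmds.append(
            ['ip', '-4', 'route', 'add', net_address] + via_arg +
            ['dev', interface])
        # insert at the front so cleanup undoes routes in reverse order
        cleanup_cmds.insert(
            0, ['ip', '-4', 'route', 'del', net_address] + via_arg +
            ['dev', interface])
    return add_cmds, cleanup_cmds

adds, dels = build_route_cmds(
    [("169.254.169.254/32", "130.56.248.255"),
     ("0.0.0.0/0", "130.56.240.1")], "eth0")
```

The context manager itself only records `cleanup_cmds`; the adds run via `util.subp` as each route is brought up.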
diff --git a/cloudinit/net/cmdline.py b/cloudinit/net/cmdline.py
index f89a0f7..556a10f 100755
--- a/cloudinit/net/cmdline.py
+++ b/cloudinit/net/cmdline.py
@@ -177,21 +177,13 @@ def _is_initramfs_netconfig(files, cmdline):
     return False
 
 
-def read_kernel_cmdline_config(files=None, mac_addrs=None, cmdline=None):
+def read_initramfs_config(files=None, mac_addrs=None, cmdline=None):
     if cmdline is None:
         cmdline = util.get_cmdline()
 
     if files is None:
         files = _get_klibc_net_cfg_files()
 
-    if 'network-config=' in cmdline:
-        data64 = None
-        for tok in cmdline.split():
-            if tok.startswith("network-config="):
-                data64 = tok.split("=", 1)[1]
-        if data64:
-            return util.load_yaml(_b64dgz(data64))
-
     if not _is_initramfs_netconfig(files, cmdline):
         return None
 
@@ -204,4 +196,19 @@ def read_kernel_cmdline_config(files=None, mac_addrs=None, cmdline=None):
 
     return config_from_klibc_net_cfg(files=files, mac_addrs=mac_addrs)
 
+
+def read_kernel_cmdline_config(cmdline=None):
+    if cmdline is None:
+        cmdline = util.get_cmdline()
+
+    if 'network-config=' in cmdline:
+        data64 = None
+        for tok in cmdline.split():
+            if tok.startswith("network-config="):
+                data64 = tok.split("=", 1)[1]
+        if data64:
+            return util.load_yaml(_b64dgz(data64))
+
+    return None
+
 # vi: ts=4 expandtab
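After this split, `read_kernel_cmdline_config` handles only the `network-config=` token, whose value cloud-init's `_b64dgz` helper treats as base64-encoded (and possibly gzip-compressed) YAML. A rough standalone sketch of that decode path, assuming the base64-then-maybe-gunzip behavior; the function name and cmdline strings below are illustrative, not upstream code:

```python
import base64
import gzip

def decode_network_config_token(cmdline):
    """Extract and decode a network-config=<base64> kernel cmdline token."""
    data64 = None
    for tok in cmdline.split():
        if tok.startswith("network-config="):
            data64 = tok.split("=", 1)[1]
    if not data64:
        return None
    blob = base64.b64decode(data64)
    if blob[:2] == b"\x1f\x8b":  # gzip magic bytes: payload was compressed
        blob = gzip.decompress(blob)
    return blob.decode("utf-8")  # YAML text; cloud-init then yaml-loads it

payload = base64.b64encode(b"version: 2\nethernets: {}\n").decode()
cfg = decode_network_config_token("ro quiet network-config=" + payload)
```

Splitting the token on the first `=` only preserves any base64 padding characters in the value.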
diff --git a/cloudinit/net/dhcp.py b/cloudinit/net/dhcp.py
index c98a97c..1737991 100644
--- a/cloudinit/net/dhcp.py
+++ b/cloudinit/net/dhcp.py
@@ -92,10 +92,14 @@ class EphemeralDHCPv4(object):
         nmap = {'interface': 'interface', 'ip': 'fixed-address',
                 'prefix_or_mask': 'subnet-mask',
                 'broadcast': 'broadcast-address',
+                'static_routes': 'rfc3442-classless-static-routes',
                 'router': 'routers'}
         kwargs = dict([(k, self.lease.get(v)) for k, v in nmap.items()])
         if not kwargs['broadcast']:
             kwargs['broadcast'] = bcip(kwargs['prefix_or_mask'], kwargs['ip'])
+        if kwargs['static_routes']:
+            kwargs['static_routes'] = (
+                parse_static_routes(kwargs['static_routes']))
         if self.connectivity_url:
             kwargs['connectivity_url'] = self.connectivity_url
         ephipv4 = EphemeralIPv4Network(**kwargs)
@@ -272,4 +276,90 @@ def networkd_get_option_from_leases(keyname, leases_d=None):
             return data[keyname]
     return None
 
+
+def parse_static_routes(rfc3442):
+    """parse rfc3442 format and return a list containing tuple of strings.
+
+    The tuple is composed of the network_address (including net length) and
+    gateway for a parsed static route.
+
+    @param rfc3442: string in rfc3442 format
+    @returns: list of tuple(str, str) for all valid parsed routes until the
+              first parsing error.
+
+    E.g.
+    sr = parse_static_routes("32,169,254,169,254,130,56,248,255,0,130,56,240,1")
+    sr = [
+        ("169.254.169.254/32", "130.56.248.255"), ("0.0.0.0/0", "130.56.240.1")
+    ]
+
+    Python version of isc-dhclient's hooks:
+       /etc/dhcp/dhclient-exit-hooks.d/rfc3442-classless-routes
+    """
+    # raw strings from dhcp lease may end in semi-colon
+    rfc3442 = rfc3442.rstrip(";")
+    tokens = rfc3442.split(',')
+    static_routes = []
+
+    def _trunc_error(cidr, required, remain):
+        msg = ("RFC3442 string malformed. Current route has CIDR of %s "
+               "and requires %s significant octets, but only %s remain. "
+               "Verify DHCP rfc3442-classless-static-routes value: %s"
+               % (cidr, required, remain, rfc3442))
+        LOG.error(msg)
+
+    current_idx = 0
+    for idx, tok in enumerate(tokens):
+        if idx < current_idx:
+            continue
+        net_length = int(tok)
+        if net_length in range(25, 33):
+            req_toks = 9
+            if len(tokens[idx:]) < req_toks:
+                _trunc_error(net_length, req_toks, len(tokens[idx:]))
+                return static_routes
+            net_address = ".".join(tokens[idx+1:idx+5])
+            gateway = ".".join(tokens[idx+5:idx+req_toks])
+            current_idx = idx + req_toks
+        elif net_length in range(17, 25):
+            req_toks = 8
+            if len(tokens[idx:]) < req_toks:
+                _trunc_error(net_length, req_toks, len(tokens[idx:]))
+                return static_routes
+            net_address = ".".join(tokens[idx+1:idx+4] + ["0"])
+            gateway = ".".join(tokens[idx+4:idx+req_toks])
+            current_idx = idx + req_toks
+        elif net_length in range(9, 17):
+            req_toks = 7
+            if len(tokens[idx:]) < req_toks:
+                _trunc_error(net_length, req_toks, len(tokens[idx:]))
+                return static_routes
+            net_address = ".".join(tokens[idx+1:idx+3] + ["0", "0"])
+            gateway = ".".join(tokens[idx+3:idx+req_toks])
+            current_idx = idx + req_toks
+        elif net_length in range(1, 9):
+            req_toks = 6
+            if len(tokens[idx:]) < req_toks:
+                _trunc_error(net_length, req_toks, len(tokens[idx:]))
+                return static_routes
+            net_address = ".".join(tokens[idx+1:idx+2] + ["0", "0", "0"])
+            gateway = ".".join(tokens[idx+2:idx+req_toks])
+            current_idx = idx + req_toks
+        elif net_length == 0:
+            req_toks = 5
+            if len(tokens[idx:]) < req_toks:
+                _trunc_error(net_length, req_toks, len(tokens[idx:]))
+                return static_routes
+            net_address = "0.0.0.0"
+            gateway = ".".join(tokens[idx+1:idx+req_toks])
+            current_idx = idx + req_toks
+        else:
+            LOG.error('Parsed invalid net length "%s". Verify DHCP '
+                      'rfc3442-classless-static-routes value.', net_length)
+            return static_routes
+
+        static_routes.append(("%s/%s" % (net_address, net_length), gateway))
+
+    return static_routes
+
 # vi: ts=4 expandtab
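The per-prefix-width branches in `parse_static_routes` all follow one rule from RFC 3442: a route is encoded as a prefix width, then ceil(width/8) significant destination octets, then four gateway octets. A condensed sketch of the same decoding under that rule (this is not the upstream implementation; it stops silently at the first malformed route instead of logging):

```python
def parse_rfc3442(value):
    """Decode an rfc3442-classless-static-routes option string."""
    # Each route: prefix width, ceil(width/8) destination octets, 4 gateway
    # octets; raw lease strings may carry a trailing semi-colon.
    value = value.rstrip(";")
    if not value:
        return []
    tokens = [int(t) for t in value.split(",")]
    routes = []
    i = 0
    while i < len(tokens):
        width = tokens[i]
        if not 0 <= width <= 32:
            return routes  # invalid prefix width: keep routes parsed so far
        n = (width + 7) // 8  # number of significant destination octets
        if i + 1 + n + 4 > len(tokens):
            return routes  # truncated encoding: keep routes parsed so far
        dest = tokens[i + 1:i + 1 + n] + [0] * (4 - n)  # zero-pad to 4 octets
        gateway = tokens[i + 1 + n:i + 1 + n + 4]
        routes.append((
            "%s/%d" % (".".join(str(o) for o in dest), width),
            ".".join(str(o) for o in gateway)))
        i += 1 + n + 4
    return routes
```

The five explicit branches in the diff are this one formula unrolled per octet count (9, 8, 7, 6, and 5 required tokens).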
diff --git a/cloudinit/net/network_state.py b/cloudinit/net/network_state.py
index 3702130..c0c415d 100644
--- a/cloudinit/net/network_state.py
+++ b/cloudinit/net/network_state.py
@@ -596,6 +596,7 @@ class NetworkStateInterpreter(object):
         eno1:
           match:
             macaddress: 00:11:22:33:44:55
+            driver: hv_netsvc
           wakeonlan: true
           dhcp4: true
           dhcp6: false
@@ -631,15 +632,18 @@ class NetworkStateInterpreter(object):
                 'type': 'physical',
                 'name': cfg.get('set-name', eth),
             }
-            mac_address = cfg.get('match', {}).get('macaddress', None)
+            match = cfg.get('match', {})
+            mac_address = match.get('macaddress', None)
             if not mac_address:
                 LOG.debug('NetworkState Version2: missing "macaddress" info '
                           'in config entry: %s: %s', eth, str(cfg))
-            phy_cmd.update({'mac_address': mac_address})
-
+            phy_cmd['mac_address'] = mac_address
+            driver = match.get('driver', None)
+            if driver:
+                phy_cmd['params'] = {'driver': driver}
             for key in ['mtu', 'match', 'wakeonlan']:
                 if key in cfg:
-                    phy_cmd.update({key: cfg.get(key)})
+                    phy_cmd[key] = cfg[key]
 
             subnets = self._v2_to_v1_ipcfg(cfg)
             if len(subnets) > 0:
@@ -673,6 +677,8 @@ class NetworkStateInterpreter(object):
                 'vlan_id': cfg.get('id'),
                 'vlan_link': cfg.get('link'),
             }
+            if 'mtu' in cfg:
+                vlan_cmd['mtu'] = cfg['mtu']
             subnets = self._v2_to_v1_ipcfg(cfg)
             if len(subnets) > 0:
                 vlan_cmd.update({'subnets': subnets})
@@ -722,6 +728,8 @@ class NetworkStateInterpreter(object):
                 'params': dict((v2key_to_v1[k], v) for k, v in
                                item_params.get('parameters', {}).items())
             }
+            if 'mtu' in item_cfg:
+                v1_cmd['mtu'] = item_cfg['mtu']
            subnets = self._v2_to_v1_ipcfg(item_cfg)
             if len(subnets) > 0:
                 v1_cmd.update({'subnets': subnets})
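The change above carries a netplan (v2) `match: driver:` value into the v1 physical command as `params: {driver: ...}`, alongside the existing mac/mtu handling. A toy sketch of just that mapping, following the field names in the diff (the standalone function itself is illustrative, not upstream code):

```python
def v2_ethernet_to_v1_physical(eth, cfg):
    """Map one netplan (v2) ethernet entry to a v1 'physical' command."""
    match = cfg.get('match', {})
    phy_cmd = {
        'type': 'physical',
        'name': cfg.get('set-name', eth),
        'mac_address': match.get('macaddress', None),
    }
    driver = match.get('driver', None)
    if driver:
        # v2 match.driver becomes v1 params.driver
        phy_cmd['params'] = {'driver': driver}
    for key in ['mtu', 'match', 'wakeonlan']:
        if key in cfg:
            phy_cmd[key] = cfg[key]
    return phy_cmd

cmd = v2_ethernet_to_v1_physical(
    'eno1', {'match': {'macaddress': '00:11:22:33:44:55',
                       'driver': 'hv_netsvc'}, 'dhcp4': True})
```

This is what lets Azure-style configs (driver `hv_netsvc` in the updated docstring example) round-trip driver info through the v2 parser.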
diff --git a/cloudinit/net/sysconfig.py b/cloudinit/net/sysconfig.py
index a47da0a..be5dede 100644
--- a/cloudinit/net/sysconfig.py
+++ b/cloudinit/net/sysconfig.py
@@ -284,6 +284,18 @@ class Renderer(renderer.Renderer):
         ('bond_mode', "mode=%s"),
         ('bond_xmit_hash_policy', "xmit_hash_policy=%s"),
         ('bond_miimon', "miimon=%s"),
+        ('bond_min_links', "min_links=%s"),
+        ('bond_arp_interval', "arp_interval=%s"),
+        ('bond_arp_ip_target', "arp_ip_target=%s"),
+        ('bond_arp_validate', "arp_validate=%s"),
+        ('bond_ad_select', "ad_select=%s"),
+        ('bond_num_grat_arp', "num_grat_arp=%s"),
+        ('bond_downdelay', "downdelay=%s"),
+        ('bond_updelay', "updelay=%s"),
+        ('bond_lacp_rate', "lacp_rate=%s"),
+        ('bond_fail_over_mac', "fail_over_mac=%s"),
+        ('bond_primary', "primary=%s"),
+        ('bond_primary_reselect', "primary_reselect=%s"),
     ])
 
     bridge_opts_keys = tuple([
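Each `('bond_*', "opt=%s")` pair added above is a config-key/template pair the sysconfig renderer folds into the bond's option string. A minimal sketch of how such pairs could compose into one space-joined string (a simplification of the renderer, with made-up config values):

```python
# a subset of the key/template pairs from the diff, in declaration order
bond_params_keys = (
    ('bond_mode', "mode=%s"),
    ('bond_miimon', "miimon=%s"),
    ('bond_primary', "primary=%s"),
    ('bond_primary_reselect', "primary_reselect=%s"),
)

def render_bonding_opts(iface_cfg):
    """Join templated bond options in declaration order, skipping absent keys."""
    opts = [tmpl % iface_cfg[key]
            for key, tmpl in bond_params_keys if key in iface_cfg]
    return " ".join(opts)

opts = render_bonding_opts(
    {'bond_mode': 'active-backup', 'bond_miimon': 100,
     'bond_primary': 'ens3'})
```

Extending the tuple is all the diff needs to do: any newly listed `bond_*` key present in the interface config now contributes its `opt=value` fragment.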
2033 | diff --git a/cloudinit/net/tests/test_dhcp.py b/cloudinit/net/tests/test_dhcp.py | |||
2034 | index 5139024..91f503c 100644 | |||
2035 | --- a/cloudinit/net/tests/test_dhcp.py | |||
2036 | +++ b/cloudinit/net/tests/test_dhcp.py | |||
2037 | @@ -8,7 +8,8 @@ from textwrap import dedent | |||
2038 | 8 | import cloudinit.net as net | 8 | import cloudinit.net as net |
2039 | 9 | from cloudinit.net.dhcp import ( | 9 | from cloudinit.net.dhcp import ( |
2040 | 10 | InvalidDHCPLeaseFileError, maybe_perform_dhcp_discovery, | 10 | InvalidDHCPLeaseFileError, maybe_perform_dhcp_discovery, |
2042 | 11 | parse_dhcp_lease_file, dhcp_discovery, networkd_load_leases) | 11 | parse_dhcp_lease_file, dhcp_discovery, networkd_load_leases, |
2043 | 12 | parse_static_routes) | ||
2044 | 12 | from cloudinit.util import ensure_file, write_file | 13 | from cloudinit.util import ensure_file, write_file |
2045 | 13 | from cloudinit.tests.helpers import ( | 14 | from cloudinit.tests.helpers import ( |
2046 | 14 | CiTestCase, HttprettyTestCase, mock, populate_dir, wrap_and_call) | 15 | CiTestCase, HttprettyTestCase, mock, populate_dir, wrap_and_call) |
2047 | @@ -64,6 +65,123 @@ class TestParseDHCPLeasesFile(CiTestCase): | |||
2048 | 64 | self.assertItemsEqual(expected, parse_dhcp_lease_file(lease_file)) | 65 | self.assertItemsEqual(expected, parse_dhcp_lease_file(lease_file)) |
2049 | 65 | 66 | ||
2050 | 66 | 67 | ||
2051 | 68 | class TestDHCPRFC3442(CiTestCase): | ||
2052 | 69 | |||
2053 | 70 | def test_parse_lease_finds_rfc3442_classless_static_routes(self): | ||
2054 | 71 | """parse_dhcp_lease_file returns rfc3442-classless-static-routes.""" | ||
2055 | 72 | lease_file = self.tmp_path('leases') | ||
2056 | 73 | content = dedent(""" | ||
2057 | 74 | lease { | ||
2058 | 75 | interface "wlp3s0"; | ||
2059 | 76 | fixed-address 192.168.2.74; | ||
2060 | 77 | option subnet-mask 255.255.255.0; | ||
2061 | 78 | option routers 192.168.2.1; | ||
2062 | 79 | option rfc3442-classless-static-routes 0,130,56,240,1; | ||
2063 | 80 | renew 4 2017/07/27 18:02:30; | ||
2064 | 81 | expire 5 2017/07/28 07:08:15; | ||
2065 | 82 | } | ||
2066 | 83 | """) | ||
2067 | 84 | expected = [ | ||
2068 | 85 | {'interface': 'wlp3s0', 'fixed-address': '192.168.2.74', | ||
2069 | 86 | 'subnet-mask': '255.255.255.0', 'routers': '192.168.2.1', | ||
2070 | 87 | 'rfc3442-classless-static-routes': '0,130,56,240,1', | ||
2071 | 88 | 'renew': '4 2017/07/27 18:02:30', | ||
2072 | 89 | 'expire': '5 2017/07/28 07:08:15'}] | ||
2073 | 90 | write_file(lease_file, content) | ||
2074 | 91 | self.assertItemsEqual(expected, parse_dhcp_lease_file(lease_file)) | ||
2075 | 92 | |||
2076 | 93 | @mock.patch('cloudinit.net.dhcp.EphemeralIPv4Network') | ||
2077 | 94 | @mock.patch('cloudinit.net.dhcp.maybe_perform_dhcp_discovery') | ||
2078 | 95 | def test_obtain_lease_parses_static_routes(self, m_maybe, m_ipv4): | ||
2079 | 96 | """EphemeralDHPCv4 parses rfc3442 routes for EphemeralIPv4Network""" | ||
2080 | 97 | lease = [ | ||
2081 | 98 | {'interface': 'wlp3s0', 'fixed-address': '192.168.2.74', | ||
2082 | 99 | 'subnet-mask': '255.255.255.0', 'routers': '192.168.2.1', | ||
2083 | 100 | 'rfc3442-classless-static-routes': '0,130,56,240,1', | ||
2084 | 101 | 'renew': '4 2017/07/27 18:02:30', | ||
2085 | 102 | 'expire': '5 2017/07/28 07:08:15'}] | ||
2086 | 103 | m_maybe.return_value = lease | ||
2087 | 104 | eph = net.dhcp.EphemeralDHCPv4() | ||
2088 | 105 | eph.obtain_lease() | ||
2089 | 106 | expected_kwargs = { | ||
2090 | 107 | 'interface': 'wlp3s0', | ||
2091 | 108 | 'ip': '192.168.2.74', | ||
2092 | 109 | 'prefix_or_mask': '255.255.255.0', | ||
2093 | 110 | 'broadcast': '192.168.2.255', | ||
2094 | 111 | 'static_routes': [('0.0.0.0/0', '130.56.240.1')], | ||
2095 | 112 | 'router': '192.168.2.1'} | ||
2096 | 113 | m_ipv4.assert_called_with(**expected_kwargs) | ||
2097 | 114 | |||
2098 | 115 | |||
2099 | 116 | class TestDHCPParseStaticRoutes(CiTestCase): | ||
2100 | 117 | |||
2101 | 118 | with_logs = True | ||
2102 | 119 | |||
2103 | 120 | def parse_static_routes_empty_string(self): | ||
2104 | 121 | self.assertEqual([], parse_static_routes("")) | ||
2105 | 122 | |||
2106 | 123 | def test_parse_static_routes_invalid_input_returns_empty_list(self): | ||
2107 | 124 | rfc3442 = "32,169,254,169,254,130,56,248" | ||
2108 | 125 | self.assertEqual([], parse_static_routes(rfc3442)) | ||
2109 | 126 | |||
2110 | 127 | def test_parse_static_routes_bogus_width_returns_empty_list(self): | ||
2111 | 128 | rfc3442 = "33,169,254,169,254,130,56,248" | ||
2112 | 129 | self.assertEqual([], parse_static_routes(rfc3442)) | ||
2113 | 130 | |||
2114 | 131 | def test_parse_static_routes_single_ip(self): | ||
2115 | 132 | rfc3442 = "32,169,254,169,254,130,56,248,255" | ||
2116 | 133 | self.assertEqual([('169.254.169.254/32', '130.56.248.255')], | ||
2117 | 134 | parse_static_routes(rfc3442)) | ||
2118 | 135 | |||
2119 | 136 | def test_parse_static_routes_single_ip_handles_trailing_semicolon(self): | ||
2120 | 137 | rfc3442 = "32,169,254,169,254,130,56,248,255;" | ||
2121 | 138 | self.assertEqual([('169.254.169.254/32', '130.56.248.255')], | ||
2122 | 139 | parse_static_routes(rfc3442)) | ||
2123 | 140 | |||
2124 | 141 | def test_parse_static_routes_default_route(self): | ||
2125 | 142 | rfc3442 = "0,130,56,240,1" | ||
2126 | 143 | self.assertEqual([('0.0.0.0/0', '130.56.240.1')], | ||
2127 | 144 | parse_static_routes(rfc3442)) | ||
2128 | 145 | |||
2129 | 146 | def test_parse_static_routes_class_c_b_a(self): | ||
2130 | 147 | class_c = "24,192,168,74,192,168,0,4" | ||
2131 | 148 | class_b = "16,172,16,172,16,0,4" | ||
2132 | 149 | class_a = "8,10,10,0,0,4" | ||
2133 | 150 | rfc3442 = ",".join([class_c, class_b, class_a]) | ||
2134 | 151 | self.assertEqual(sorted([ | ||
2135 | 152 | ("192.168.74.0/24", "192.168.0.4"), | ||
2136 | 153 | ("172.16.0.0/16", "172.16.0.4"), | ||
2137 | 154 | ("10.0.0.0/8", "10.0.0.4") | ||
2138 | 155 | ]), sorted(parse_static_routes(rfc3442))) | ||
2139 | 156 | |||
2140 | 157 | def test_parse_static_routes_logs_error_truncated(self): | ||
2141 | 158 | bad_rfc3442 = { | ||
2142 | 159 | "class_c": "24,169,254,169,10", | ||
2143 | 160 | "class_b": "16,172,16,10", | ||
2144 | 161 | "class_a": "8,10,10", | ||
2145 | 162 | "gateway": "0,0", | ||
2146 | 163 | "netlen": "33,0", | ||
2147 | 164 | } | ||
2148 | 165 | for rfc3442 in bad_rfc3442.values(): | ||
2149 | 166 | self.assertEqual([], parse_static_routes(rfc3442)) | ||
2150 | 167 | |||
2151 | 168 | logs = self.logs.getvalue() | ||
2152 | 169 | self.assertEqual(len(bad_rfc3442.keys()), len(logs.splitlines())) | ||
2153 | 170 | |||
2154 | 171 | def test_parse_static_routes_returns_valid_routes_until_parse_err(self): | ||
2155 | 172 | class_c = "24,192,168,74,192,168,0,4" | ||
2156 | 173 | class_b = "16,172,16,172,16,0,4" | ||
2157 | 174 | class_a_error = "8,10,10,0,0" | ||
2158 | 175 | rfc3442 = ",".join([class_c, class_b, class_a_error]) | ||
2159 | 176 | self.assertEqual(sorted([ | ||
2160 | 177 | ("192.168.74.0/24", "192.168.0.4"), | ||
2161 | 178 | ("172.16.0.0/16", "172.16.0.4"), | ||
2162 | 179 | ]), sorted(parse_static_routes(rfc3442))) | ||
2163 | 180 | |||
2164 | 181 | logs = self.logs.getvalue() | ||
2165 | 182 | self.assertIn(rfc3442, logs.splitlines()[0]) | ||
2166 | 183 | |||
2167 | 184 | |||
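The `TestDHCPParseStaticRoutes` cases above pin down how `parse_static_routes` handles the comma-separated RFC 3442 option format: invalid widths and truncated entries yield an empty list, a trailing semicolon is tolerated, and parsing stops at the first bad entry while keeping earlier routes. A minimal sketch of that behavior (the function name and structure here are illustrative, not the cloud-init implementation):

```python
import logging

LOG = logging.getLogger(__name__)


def parse_static_routes_sketch(rfc3442):
    """Parse '<netlen>,<dest octets...>,<gateway octets...>' tokens into
    a list of ('cidr', 'gateway') tuples, stopping at the first bad entry."""
    tokens = [tok for tok in rfc3442.strip(';').split(',') if tok]
    routes = []
    idx = 0
    while idx < len(tokens):
        net_length = int(tokens[idx])
        if not 0 <= net_length <= 32:
            LOG.error("invalid network length %s in: %s", net_length, rfc3442)
            return []
        # RFC 3442 encodes only the significant destination octets
        req_toks = (net_length + 7) // 8
        if idx + 1 + req_toks + 4 > len(tokens):
            LOG.error("truncated route entry in: %s", rfc3442)
            break  # keep any routes already parsed
        dest = tokens[idx + 1:idx + 1 + req_toks] + ['0'] * (4 - req_toks)
        gateway = tokens[idx + 1 + req_toks:idx + 5 + req_toks]
        routes.append(('%s/%s' % ('.'.join(dest), net_length),
                       '.'.join(gateway)))
        idx += 1 + req_toks + 4
    return routes
```

Note the default-route encoding: a width of 0 carries no destination octets at all, so `"0,130,56,240,1"` is just the width followed by the four gateway octets.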
2168 | 67 | class TestDHCPDiscoveryClean(CiTestCase): | 185 | class TestDHCPDiscoveryClean(CiTestCase): |
2169 | 68 | with_logs = True | 186 | with_logs = True |
2170 | 69 | 187 | ||
2171 | diff --git a/cloudinit/net/tests/test_init.py b/cloudinit/net/tests/test_init.py | |||
2172 | index 6d2affe..d2e38f0 100644 | |||
2173 | --- a/cloudinit/net/tests/test_init.py | |||
2174 | +++ b/cloudinit/net/tests/test_init.py | |||
2175 | @@ -212,9 +212,9 @@ class TestGenerateFallbackConfig(CiTestCase): | |||
2176 | 212 | mac = 'aa:bb:cc:aa:bb:cc' | 212 | mac = 'aa:bb:cc:aa:bb:cc' |
2177 | 213 | write_file(os.path.join(self.sysdir, 'eth1', 'address'), mac) | 213 | write_file(os.path.join(self.sysdir, 'eth1', 'address'), mac) |
2178 | 214 | expected = { | 214 | expected = { |
2182 | 215 | 'config': [{'type': 'physical', 'mac_address': mac, | 215 | 'ethernets': {'eth1': {'match': {'macaddress': mac}, |
2183 | 216 | 'name': 'eth1', 'subnets': [{'type': 'dhcp'}]}], | 216 | 'dhcp4': True, 'set-name': 'eth1'}}, |
2184 | 217 | 'version': 1} | 217 | 'version': 2} |
2185 | 218 | self.assertEqual(expected, net.generate_fallback_config()) | 218 | self.assertEqual(expected, net.generate_fallback_config()) |
2186 | 219 | 219 | ||
2187 | 220 | def test_generate_fallback_finds_dormant_eth_with_mac(self): | 220 | def test_generate_fallback_finds_dormant_eth_with_mac(self): |
2188 | @@ -223,9 +223,9 @@ class TestGenerateFallbackConfig(CiTestCase): | |||
2189 | 223 | mac = 'aa:bb:cc:aa:bb:cc' | 223 | mac = 'aa:bb:cc:aa:bb:cc' |
2190 | 224 | write_file(os.path.join(self.sysdir, 'eth0', 'address'), mac) | 224 | write_file(os.path.join(self.sysdir, 'eth0', 'address'), mac) |
2191 | 225 | expected = { | 225 | expected = { |
2195 | 226 | 'config': [{'type': 'physical', 'mac_address': mac, | 226 | 'ethernets': {'eth0': {'match': {'macaddress': mac}, 'dhcp4': True, |
2196 | 227 | 'name': 'eth0', 'subnets': [{'type': 'dhcp'}]}], | 227 | 'set-name': 'eth0'}}, |
2197 | 228 | 'version': 1} | 228 | 'version': 2} |
2198 | 229 | self.assertEqual(expected, net.generate_fallback_config()) | 229 | self.assertEqual(expected, net.generate_fallback_config()) |
2199 | 230 | 230 | ||
2200 | 231 | def test_generate_fallback_finds_eth_by_operstate(self): | 231 | def test_generate_fallback_finds_eth_by_operstate(self): |
2201 | @@ -233,9 +233,10 @@ class TestGenerateFallbackConfig(CiTestCase): | |||
2202 | 233 | mac = 'aa:bb:cc:aa:bb:cc' | 233 | mac = 'aa:bb:cc:aa:bb:cc' |
2203 | 234 | write_file(os.path.join(self.sysdir, 'eth0', 'address'), mac) | 234 | write_file(os.path.join(self.sysdir, 'eth0', 'address'), mac) |
2204 | 235 | expected = { | 235 | expected = { |
2208 | 236 | 'config': [{'type': 'physical', 'mac_address': mac, | 236 | 'ethernets': { |
2209 | 237 | 'name': 'eth0', 'subnets': [{'type': 'dhcp'}]}], | 237 | 'eth0': {'dhcp4': True, 'match': {'macaddress': mac}, |
2210 | 238 | 'version': 1} | 238 | 'set-name': 'eth0'}}, |
2211 | 239 | 'version': 2} | ||
2212 | 239 | valid_operstates = ['dormant', 'down', 'lowerlayerdown', 'unknown'] | 240 | valid_operstates = ['dormant', 'down', 'lowerlayerdown', 'unknown'] |
2213 | 240 | for state in valid_operstates: | 241 | for state in valid_operstates: |
2214 | 241 | write_file(os.path.join(self.sysdir, 'eth0', 'operstate'), state) | 242 | write_file(os.path.join(self.sysdir, 'eth0', 'operstate'), state) |
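The updated expectations in these hunks show the fallback config moving from a version-1 list of physical entries to a version-2 (netplan-style) `ethernets` mapping keyed by name, with `match`/`set-name`/`dhcp4`. A hypothetical converter makes the shape change explicit (the real logic lives in `cloudinit.net.generate_fallback_config`; this helper is illustrative only):

```python
def fallback_v2_from_v1(v1_cfg):
    """Map a v1 physical/dhcp fallback config to the netplan v2 shape."""
    ethernets = {}
    for dev in v1_cfg['config']:
        name = dev['name']
        ethernets[name] = {
            'match': {'macaddress': dev['mac_address']},
            'set-name': name,
            # v1 expresses dhcp as a subnet entry; v2 as a boolean flag
            'dhcp4': any(s.get('type') == 'dhcp' for s in dev['subnets']),
        }
    return {'version': 2, 'ethernets': ethernets}
```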
2215 | @@ -549,6 +550,45 @@ class TestEphemeralIPV4Network(CiTestCase): | |||
2216 | 549 | self.assertEqual(expected_setup_calls, m_subp.call_args_list) | 550 | self.assertEqual(expected_setup_calls, m_subp.call_args_list) |
2217 | 550 | m_subp.assert_has_calls(expected_teardown_calls) | 551 | m_subp.assert_has_calls(expected_teardown_calls) |
2218 | 551 | 552 | ||
2219 | 553 | def test_ephemeral_ipv4_network_with_rfc3442_static_routes(self, m_subp): | ||
2220 | 554 | params = { | ||
2221 | 555 | 'interface': 'eth0', 'ip': '192.168.2.2', | ||
2222 | 556 | 'prefix_or_mask': '255.255.255.0', 'broadcast': '192.168.2.255', | ||
2223 | 557 | 'static_routes': [('169.254.169.254/32', '192.168.2.1'), | ||
2224 | 558 | ('0.0.0.0/0', '192.168.2.1')], | ||
2225 | 559 | 'router': '192.168.2.1'} | ||
2226 | 560 | expected_setup_calls = [ | ||
2227 | 561 | mock.call( | ||
2228 | 562 | ['ip', '-family', 'inet', 'addr', 'add', '192.168.2.2/24', | ||
2229 | 563 | 'broadcast', '192.168.2.255', 'dev', 'eth0'], | ||
2230 | 564 | capture=True, update_env={'LANG': 'C'}), | ||
2231 | 565 | mock.call( | ||
2232 | 566 | ['ip', '-family', 'inet', 'link', 'set', 'dev', 'eth0', 'up'], | ||
2233 | 567 | capture=True), | ||
2234 | 568 | mock.call( | ||
2235 | 569 | ['ip', '-4', 'route', 'add', '169.254.169.254/32', | ||
2236 | 570 | 'via', '192.168.2.1', 'dev', 'eth0'], capture=True), | ||
2237 | 571 | mock.call( | ||
2238 | 572 | ['ip', '-4', 'route', 'add', '0.0.0.0/0', | ||
2239 | 573 | 'via', '192.168.2.1', 'dev', 'eth0'], capture=True)] | ||
2240 | 574 | expected_teardown_calls = [ | ||
2241 | 575 | mock.call( | ||
2242 | 576 | ['ip', '-4', 'route', 'del', '0.0.0.0/0', | ||
2243 | 577 | 'via', '192.168.2.1', 'dev', 'eth0'], capture=True), | ||
2244 | 578 | mock.call( | ||
2245 | 579 | ['ip', '-4', 'route', 'del', '169.254.169.254/32', | ||
2246 | 580 | 'via', '192.168.2.1', 'dev', 'eth0'], capture=True), | ||
2247 | 581 | mock.call( | ||
2248 | 582 | ['ip', '-family', 'inet', 'link', 'set', 'dev', | ||
2249 | 583 | 'eth0', 'down'], capture=True), | ||
2250 | 584 | mock.call( | ||
2251 | 585 | ['ip', '-family', 'inet', 'addr', 'del', | ||
2252 | 586 | '192.168.2.2/24', 'dev', 'eth0'], capture=True) | ||
2253 | 587 | ] | ||
2254 | 588 | with net.EphemeralIPv4Network(**params): | ||
2255 | 589 | self.assertEqual(expected_setup_calls, m_subp.call_args_list) | ||
2256 | 590 | m_subp.assert_has_calls(expected_setup_calls + expected_teardown_calls) | ||
2257 | 591 | |||
2258 | 552 | 592 | ||
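The setup and teardown call lists asserted above follow a simple pattern: each parsed static route becomes an `ip -4 route add` on entry, and the same routes are deleted in reverse order on exit. A small sketch of that argv construction (names mirror the test; the context-manager plumbing itself is omitted):

```python
def route_commands(static_routes, interface, action):
    """Build `ip -4 route <add|del>` argv lists for a list of
    ('cidr', 'gateway') tuples; teardown reverses the setup order."""
    routes = static_routes if action == 'add' else list(reversed(static_routes))
    return [['ip', '-4', 'route', action, net, 'via', gw, 'dev', interface]
            for net, gw in routes]
```

Reversing on teardown matters when one route (for example the link-local metadata route) is a dependency of another, such as the default route through the same gateway.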
2259 | 553 | class TestApplyNetworkCfgNames(CiTestCase): | 593 | class TestApplyNetworkCfgNames(CiTestCase): |
2260 | 554 | V1_CONFIG = textwrap.dedent("""\ | 594 | V1_CONFIG = textwrap.dedent("""\ |
2261 | @@ -669,3 +709,216 @@ class TestHasURLConnectivity(HttprettyTestCase): | |||
2262 | 669 | httpretty.register_uri(httpretty.GET, self.url, body={}, status=404) | 709 | httpretty.register_uri(httpretty.GET, self.url, body={}, status=404) |
2263 | 670 | self.assertFalse( | 710 | self.assertFalse( |
2264 | 671 | net.has_url_connectivity(self.url), 'Expected False on url fail') | 711 | net.has_url_connectivity(self.url), 'Expected False on url fail') |
2265 | 712 | |||
2266 | 713 | |||
2267 | 714 | def _mk_v1_phys(mac, name, driver, device_id): | ||
2268 | 715 | v1_cfg = {'type': 'physical', 'name': name, 'mac_address': mac} | ||
2269 | 716 | params = {} | ||
2270 | 717 | if driver: | ||
2271 | 718 | params.update({'driver': driver}) | ||
2272 | 719 | if device_id: | ||
2273 | 720 | params.update({'device_id': device_id}) | ||
2274 | 721 | |||
2275 | 722 | if params: | ||
2276 | 723 | v1_cfg.update({'params': params}) | ||
2277 | 724 | |||
2278 | 725 | return v1_cfg | ||
2279 | 726 | |||
2280 | 727 | |||
2281 | 728 | def _mk_v2_phys(mac, name, driver=None, device_id=None): | ||
2282 | 729 | v2_cfg = {'set-name': name, 'match': {'macaddress': mac}} | ||
2283 | 730 | if driver: | ||
2284 | 731 | v2_cfg['match'].update({'driver': driver}) | ||
2285 | 732 | if device_id: | ||
2286 | 733 | v2_cfg['match'].update({'device_id': device_id}) | ||
2287 | 734 | |||
2288 | 735 | return v2_cfg | ||
2289 | 736 | |||
2290 | 737 | |||
2291 | 738 | class TestExtractPhysdevs(CiTestCase): | ||
2292 | 739 | |||
2293 | 740 | def setUp(self): | ||
2294 | 741 | super(TestExtractPhysdevs, self).setUp() | ||
2295 | 742 | self.add_patch('cloudinit.net.device_driver', 'm_driver') | ||
2296 | 743 | self.add_patch('cloudinit.net.device_devid', 'm_devid') | ||
2297 | 744 | |||
2298 | 745 | def test_extract_physdevs_looks_up_driver_v1(self): | ||
2299 | 746 | driver = 'virtio' | ||
2300 | 747 | self.m_driver.return_value = driver | ||
2301 | 748 | physdevs = [ | ||
2302 | 749 | ['aa:bb:cc:dd:ee:ff', 'eth0', None, '0x1000'], | ||
2303 | 750 | ] | ||
2304 | 751 | netcfg = { | ||
2305 | 752 | 'version': 1, | ||
2306 | 753 | 'config': [_mk_v1_phys(*args) for args in physdevs], | ||
2307 | 754 | } | ||
2308 | 755 | # insert the driver value for verification | ||
2309 | 756 | physdevs[0][2] = driver | ||
2310 | 757 | self.assertEqual(sorted(physdevs), | ||
2311 | 758 | sorted(net.extract_physdevs(netcfg))) | ||
2312 | 759 | self.m_driver.assert_called_with('eth0') | ||
2313 | 760 | |||
2314 | 761 | def test_extract_physdevs_looks_up_driver_v2(self): | ||
2315 | 762 | driver = 'virtio' | ||
2316 | 763 | self.m_driver.return_value = driver | ||
2317 | 764 | physdevs = [ | ||
2318 | 765 | ['aa:bb:cc:dd:ee:ff', 'eth0', None, '0x1000'], | ||
2319 | 766 | ] | ||
2320 | 767 | netcfg = { | ||
2321 | 768 | 'version': 2, | ||
2322 | 769 | 'ethernets': {args[1]: _mk_v2_phys(*args) for args in physdevs}, | ||
2323 | 770 | } | ||
2324 | 771 | # insert the driver value for verification | ||
2325 | 772 | physdevs[0][2] = driver | ||
2326 | 773 | self.assertEqual(sorted(physdevs), | ||
2327 | 774 | sorted(net.extract_physdevs(netcfg))) | ||
2328 | 775 | self.m_driver.assert_called_with('eth0') | ||
2329 | 776 | |||
2330 | 777 | def test_extract_physdevs_looks_up_devid_v1(self): | ||
2331 | 778 | devid = '0x1000' | ||
2332 | 779 | self.m_devid.return_value = devid | ||
2333 | 780 | physdevs = [ | ||
2334 | 781 | ['aa:bb:cc:dd:ee:ff', 'eth0', 'virtio', None], | ||
2335 | 782 | ] | ||
2336 | 783 | netcfg = { | ||
2337 | 784 | 'version': 1, | ||
2338 | 785 | 'config': [_mk_v1_phys(*args) for args in physdevs], | ||
2339 | 786 | } | ||
2340 | 787 | # insert the devid value for verification | ||
2341 | 788 | physdevs[0][3] = devid | ||
2342 | 789 | self.assertEqual(sorted(physdevs), | ||
2343 | 790 | sorted(net.extract_physdevs(netcfg))) | ||
2344 | 791 | self.m_devid.assert_called_with('eth0') | ||
2345 | 792 | |||
2346 | 793 | def test_extract_physdevs_looks_up_devid_v2(self): | ||
2347 | 794 | devid = '0x1000' | ||
2348 | 795 | self.m_devid.return_value = devid | ||
2349 | 796 | physdevs = [ | ||
2350 | 797 | ['aa:bb:cc:dd:ee:ff', 'eth0', 'virtio', None], | ||
2351 | 798 | ] | ||
2352 | 799 | netcfg = { | ||
2353 | 800 | 'version': 2, | ||
2354 | 801 | 'ethernets': {args[1]: _mk_v2_phys(*args) for args in physdevs}, | ||
2355 | 802 | } | ||
2356 | 803 | # insert the devid value for verification | ||
2357 | 804 | physdevs[0][3] = devid | ||
2358 | 805 | self.assertEqual(sorted(physdevs), | ||
2359 | 806 | sorted(net.extract_physdevs(netcfg))) | ||
2360 | 807 | self.m_devid.assert_called_with('eth0') | ||
2361 | 808 | |||
2362 | 809 | def test_get_v1_type_physical(self): | ||
2363 | 810 | physdevs = [ | ||
2364 | 811 | ['aa:bb:cc:dd:ee:ff', 'eth0', 'virtio', '0x1000'], | ||
2365 | 812 | ['00:11:22:33:44:55', 'ens3', 'e1000', '0x1643'], | ||
2366 | 813 | ['09:87:65:43:21:10', 'ens0p1', 'mlx4_core', '0:0:1000'], | ||
2367 | 814 | ] | ||
2368 | 815 | netcfg = { | ||
2369 | 816 | 'version': 1, | ||
2370 | 817 | 'config': [_mk_v1_phys(*args) for args in physdevs], | ||
2371 | 818 | } | ||
2372 | 819 | self.assertEqual(sorted(physdevs), | ||
2373 | 820 | sorted(net.extract_physdevs(netcfg))) | ||
2374 | 821 | |||
2375 | 822 | def test_get_v2_type_physical(self): | ||
2376 | 823 | physdevs = [ | ||
2377 | 824 | ['aa:bb:cc:dd:ee:ff', 'eth0', 'virtio', '0x1000'], | ||
2378 | 825 | ['00:11:22:33:44:55', 'ens3', 'e1000', '0x1643'], | ||
2379 | 826 | ['09:87:65:43:21:10', 'ens0p1', 'mlx4_core', '0:0:1000'], | ||
2380 | 827 | ] | ||
2381 | 828 | netcfg = { | ||
2382 | 829 | 'version': 2, | ||
2383 | 830 | 'ethernets': {args[1]: _mk_v2_phys(*args) for args in physdevs}, | ||
2384 | 831 | } | ||
2385 | 832 | self.assertEqual(sorted(physdevs), | ||
2386 | 833 | sorted(net.extract_physdevs(netcfg))) | ||
2387 | 834 | |||
2388 | 835 | def test_get_v2_type_physical_skips_if_no_set_name(self): | ||
2389 | 836 | netcfg = { | ||
2390 | 837 | 'version': 2, | ||
2391 | 838 | 'ethernets': { | ||
2392 | 839 | 'ens3': { | ||
2393 | 840 | 'match': {'macaddress': '00:11:22:33:44:55'}, | ||
2394 | 841 | } | ||
2395 | 842 | } | ||
2396 | 843 | } | ||
2397 | 844 | self.assertEqual([], net.extract_physdevs(netcfg)) | ||
2398 | 845 | |||
2399 | 846 | def test_runtime_error_on_unknown_netcfg_version(self): | ||
2400 | 847 | with self.assertRaises(RuntimeError): | ||
2401 | 848 | net.extract_physdevs({'version': 3, 'awesome_config': []}) | ||
2402 | 849 | |||
2403 | 850 | |||
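`TestExtractPhysdevs` exercises three behaviors: v1 configs list physical entries under `config`, v2 configs key them under `ethernets` (skipping stanzas without `set-name`), and any other version raises `RuntimeError`. A hedged sketch of that dispatch, with the `device_driver`/`device_devid` lookups the tests mock left out:

```python
def extract_physdevs_sketch(netcfg):
    """Return [mac, name, driver, device_id] lists from a network config.
    Illustrative only; the real function also falls back to sysfs lookups."""
    if netcfg.get('version') == 1:
        return [[dev['mac_address'], dev['name'],
                 dev.get('params', {}).get('driver'),
                 dev.get('params', {}).get('device_id')]
                for dev in netcfg['config'] if dev['type'] == 'physical']
    if netcfg.get('version') == 2:
        physdevs = []
        for name, cfg in netcfg['ethernets'].items():
            if not cfg.get('set-name'):
                continue  # only rename-style stanzas identify a physical dev
            match = cfg.get('match', {})
            physdevs.append([match.get('macaddress'), name,
                             match.get('driver'), match.get('device_id')])
        return physdevs
    raise RuntimeError(
        'Unknown network config version: %s' % netcfg.get('version'))
```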
2404 | 851 | class TestWaitForPhysdevs(CiTestCase): | ||
2405 | 852 | |||
2406 | 853 | with_logs = True | ||
2407 | 854 | |||
2408 | 855 | def setUp(self): | ||
2409 | 856 | super(TestWaitForPhysdevs, self).setUp() | ||
2410 | 857 | self.add_patch('cloudinit.net.get_interfaces_by_mac', | ||
2411 | 858 | 'm_get_iface_mac') | ||
2412 | 859 | self.add_patch('cloudinit.util.udevadm_settle', 'm_udev_settle') | ||
2413 | 860 | |||
2414 | 861 | def test_wait_for_physdevs_skips_settle_if_all_present(self): | ||
2415 | 862 | physdevs = [ | ||
2416 | 863 | ['aa:bb:cc:dd:ee:ff', 'eth0', 'virtio', '0x1000'], | ||
2417 | 864 | ['00:11:22:33:44:55', 'ens3', 'e1000', '0x1643'], | ||
2418 | 865 | ] | ||
2419 | 866 | netcfg = { | ||
2420 | 867 | 'version': 2, | ||
2421 | 868 | 'ethernets': {args[1]: _mk_v2_phys(*args) | ||
2422 | 869 | for args in physdevs}, | ||
2423 | 870 | } | ||
2424 | 871 | self.m_get_iface_mac.side_effect = iter([ | ||
2425 | 872 | {'aa:bb:cc:dd:ee:ff': 'eth0', | ||
2426 | 873 | '00:11:22:33:44:55': 'ens3'}, | ||
2427 | 874 | ]) | ||
2428 | 875 | net.wait_for_physdevs(netcfg) | ||
2429 | 876 | self.assertEqual(0, self.m_udev_settle.call_count) | ||
2430 | 877 | |||
2431 | 878 | def test_wait_for_physdevs_calls_udev_settle_on_missing(self): | ||
2432 | 879 | physdevs = [ | ||
2433 | 880 | ['aa:bb:cc:dd:ee:ff', 'eth0', 'virtio', '0x1000'], | ||
2434 | 881 | ['00:11:22:33:44:55', 'ens3', 'e1000', '0x1643'], | ||
2435 | 882 | ] | ||
2436 | 883 | netcfg = { | ||
2437 | 884 | 'version': 2, | ||
2438 | 885 | 'ethernets': {args[1]: _mk_v2_phys(*args) | ||
2439 | 886 | for args in physdevs}, | ||
2440 | 887 | } | ||
2441 | 888 | self.m_get_iface_mac.side_effect = iter([ | ||
2442 | 889 | {'aa:bb:cc:dd:ee:ff': 'eth0'}, # first call ens3 is missing | ||
2443 | 890 | {'aa:bb:cc:dd:ee:ff': 'eth0', | ||
2444 | 891 | '00:11:22:33:44:55': 'ens3'}, # second call has both | ||
2445 | 892 | ]) | ||
2446 | 893 | net.wait_for_physdevs(netcfg) | ||
2447 | 894 | self.m_udev_settle.assert_called_with(exists=net.sys_dev_path('ens3')) | ||
2448 | 895 | |||
2449 | 896 | def test_wait_for_physdevs_raise_runtime_error_if_missing_and_strict(self): | ||
2450 | 897 | physdevs = [ | ||
2451 | 898 | ['aa:bb:cc:dd:ee:ff', 'eth0', 'virtio', '0x1000'], | ||
2452 | 899 | ['00:11:22:33:44:55', 'ens3', 'e1000', '0x1643'], | ||
2453 | 900 | ] | ||
2454 | 901 | netcfg = { | ||
2455 | 902 | 'version': 2, | ||
2456 | 903 | 'ethernets': {args[1]: _mk_v2_phys(*args) | ||
2457 | 904 | for args in physdevs}, | ||
2458 | 905 | } | ||
2459 | 906 | self.m_get_iface_mac.return_value = {} | ||
2460 | 907 | with self.assertRaises(RuntimeError): | ||
2461 | 908 | net.wait_for_physdevs(netcfg) | ||
2462 | 909 | |||
2463 | 910 | self.assertEqual(5 * len(physdevs), self.m_udev_settle.call_count) | ||
2464 | 911 | |||
2465 | 912 | def test_wait_for_physdevs_no_raise_if_not_strict(self): | ||
2466 | 913 | physdevs = [ | ||
2467 | 914 | ['aa:bb:cc:dd:ee:ff', 'eth0', 'virtio', '0x1000'], | ||
2468 | 915 | ['00:11:22:33:44:55', 'ens3', 'e1000', '0x1643'], | ||
2469 | 916 | ] | ||
2470 | 917 | netcfg = { | ||
2471 | 918 | 'version': 2, | ||
2472 | 919 | 'ethernets': {args[1]: _mk_v2_phys(*args) | ||
2473 | 920 | for args in physdevs}, | ||
2474 | 921 | } | ||
2475 | 922 | self.m_get_iface_mac.return_value = {} | ||
2476 | 923 | net.wait_for_physdevs(netcfg, strict=False) | ||
2477 | 924 | self.assertEqual(5 * len(physdevs), self.m_udev_settle.call_count) | ||
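The `TestWaitForPhysdevs` cases above encode the retry contract: if every expected MAC is already present, no `udevadm settle` happens; otherwise each missing device triggers a settle per attempt, up to five attempts, and a still-missing device either raises (strict) or is tolerated. A rough model of that loop, with the sysfs and udev helpers injected as stand-ins:

```python
def wait_for_physdevs_sketch(expected_macs, get_interfaces_by_mac,
                             udevadm_settle, strict=True, tries=5):
    """Wait for expected NICs to appear, settling udev per missing device.
    Returns the number of settle calls made (for illustration)."""
    settle_calls = 0
    for _attempt in range(tries):
        present = set(get_interfaces_by_mac())
        missing = expected_macs - present
        if not missing:
            return settle_calls
        for _mac in missing:
            udevadm_settle()  # give udev a chance to create the device
            settle_calls += 1
    if strict:
        raise RuntimeError('Not all expected physical devices present')
    return settle_calls
```

With two devices that never appear, five attempts times two missing devices gives the `5 * len(physdevs)` settle count the tests assert.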
2478 | diff --git a/cloudinit/settings.py b/cloudinit/settings.py | |||
2479 | index b1ebaad..2060d81 100644 | |||
2480 | --- a/cloudinit/settings.py | |||
2481 | +++ b/cloudinit/settings.py | |||
2482 | @@ -39,6 +39,7 @@ CFG_BUILTIN = { | |||
2483 | 39 | 'Hetzner', | 39 | 'Hetzner', |
2484 | 40 | 'IBMCloud', | 40 | 'IBMCloud', |
2485 | 41 | 'Oracle', | 41 | 'Oracle', |
2486 | 42 | 'Exoscale', | ||
2487 | 42 | # At the end to act as a 'catch' when none of the above work... | 43 | # At the end to act as a 'catch' when none of the above work... |
2488 | 43 | 'None', | 44 | 'None', |
2489 | 44 | ], | 45 | ], |
2490 | diff --git a/cloudinit/sources/DataSourceAzure.py b/cloudinit/sources/DataSourceAzure.py | |||
2491 | index b7440c1..4984fa8 100755 | |||
2492 | --- a/cloudinit/sources/DataSourceAzure.py | |||
2493 | +++ b/cloudinit/sources/DataSourceAzure.py | |||
2494 | @@ -26,9 +26,14 @@ from cloudinit.url_helper import UrlError, readurl, retry_on_url_exc | |||
2495 | 26 | from cloudinit import util | 26 | from cloudinit import util |
2496 | 27 | from cloudinit.reporting import events | 27 | from cloudinit.reporting import events |
2497 | 28 | 28 | ||
2501 | 29 | from cloudinit.sources.helpers.azure import (azure_ds_reporter, | 29 | from cloudinit.sources.helpers.azure import ( |
2502 | 30 | azure_ds_telemetry_reporter, | 30 | azure_ds_reporter, |
2503 | 31 | get_metadata_from_fabric) | 31 | azure_ds_telemetry_reporter, |
2504 | 32 | get_metadata_from_fabric, | ||
2505 | 33 | get_boot_telemetry, | ||
2506 | 34 | get_system_info, | ||
2507 | 35 | report_diagnostic_event, | ||
2508 | 36 | EphemeralDHCPv4WithReporting) | ||
2509 | 32 | 37 | ||
2510 | 33 | LOG = logging.getLogger(__name__) | 38 | LOG = logging.getLogger(__name__) |
2511 | 34 | 39 | ||
2512 | @@ -354,7 +359,7 @@ class DataSourceAzure(sources.DataSource): | |||
2513 | 354 | bname = str(pk['fingerprint'] + ".crt") | 359 | bname = str(pk['fingerprint'] + ".crt") |
2514 | 355 | fp_files += [os.path.join(ddir, bname)] | 360 | fp_files += [os.path.join(ddir, bname)] |
2515 | 356 | LOG.debug("ssh authentication: " | 361 | LOG.debug("ssh authentication: " |
2517 | 357 | "using fingerprint from fabirc") | 362 | "using fingerprint from fabric") |
2518 | 358 | 363 | ||
2519 | 359 | with events.ReportEventStack( | 364 | with events.ReportEventStack( |
2520 | 360 | name="waiting-for-ssh-public-key", | 365 | name="waiting-for-ssh-public-key", |
2521 | @@ -419,12 +424,17 @@ class DataSourceAzure(sources.DataSource): | |||
2522 | 419 | ret = load_azure_ds_dir(cdev) | 424 | ret = load_azure_ds_dir(cdev) |
2523 | 420 | 425 | ||
2524 | 421 | except NonAzureDataSource: | 426 | except NonAzureDataSource: |
2525 | 427 | report_diagnostic_event( | ||
2526 | 428 | "Did not find Azure data source in %s" % cdev) | ||
2527 | 422 | continue | 429 | continue |
2528 | 423 | except BrokenAzureDataSource as exc: | 430 | except BrokenAzureDataSource as exc: |
2529 | 424 | msg = 'BrokenAzureDataSource: %s' % exc | 431 | msg = 'BrokenAzureDataSource: %s' % exc |
2530 | 432 | report_diagnostic_event(msg) | ||
2531 | 425 | raise sources.InvalidMetaDataException(msg) | 433 | raise sources.InvalidMetaDataException(msg) |
2532 | 426 | except util.MountFailedError: | 434 | except util.MountFailedError: |
2534 | 427 | LOG.warning("%s was not mountable", cdev) | 435 | msg = '%s was not mountable' % cdev |
2535 | 436 | report_diagnostic_event(msg) | ||
2536 | 437 | LOG.warning(msg) | ||
2537 | 428 | continue | 438 | continue |
2538 | 429 | 439 | ||
2539 | 430 | perform_reprovision = reprovision or self._should_reprovision(ret) | 440 | perform_reprovision = reprovision or self._should_reprovision(ret) |
2540 | @@ -432,6 +442,7 @@ class DataSourceAzure(sources.DataSource): | |||
2541 | 432 | if util.is_FreeBSD(): | 442 | if util.is_FreeBSD(): |
2542 | 433 | msg = "Free BSD is not supported for PPS VMs" | 443 | msg = "Free BSD is not supported for PPS VMs" |
2543 | 434 | LOG.error(msg) | 444 | LOG.error(msg) |
2544 | 445 | report_diagnostic_event(msg) | ||
2545 | 435 | raise sources.InvalidMetaDataException(msg) | 446 | raise sources.InvalidMetaDataException(msg) |
2546 | 436 | ret = self._reprovision() | 447 | ret = self._reprovision() |
2547 | 437 | imds_md = get_metadata_from_imds( | 448 | imds_md = get_metadata_from_imds( |
2548 | @@ -450,7 +461,9 @@ class DataSourceAzure(sources.DataSource): | |||
2549 | 450 | break | 461 | break |
2550 | 451 | 462 | ||
2551 | 452 | if not found: | 463 | if not found: |
2553 | 453 | raise sources.InvalidMetaDataException('No Azure metadata found') | 464 | msg = 'No Azure metadata found' |
2554 | 465 | report_diagnostic_event(msg) | ||
2555 | 466 | raise sources.InvalidMetaDataException(msg) | ||
2556 | 454 | 467 | ||
2557 | 455 | if found == ddir: | 468 | if found == ddir: |
2558 | 456 | LOG.debug("using files cached in %s", ddir) | 469 | LOG.debug("using files cached in %s", ddir) |
2559 | @@ -469,9 +482,14 @@ class DataSourceAzure(sources.DataSource): | |||
2560 | 469 | self._report_ready(lease=self._ephemeral_dhcp_ctx.lease) | 482 | self._report_ready(lease=self._ephemeral_dhcp_ctx.lease) |
2561 | 470 | self._ephemeral_dhcp_ctx.clean_network() # Teardown ephemeral | 483 | self._ephemeral_dhcp_ctx.clean_network() # Teardown ephemeral |
2562 | 471 | else: | 484 | else: |
2566 | 472 | with EphemeralDHCPv4() as lease: | 485 | try: |
2567 | 473 | self._report_ready(lease=lease) | 486 | with EphemeralDHCPv4WithReporting( |
2568 | 474 | 487 | azure_ds_reporter) as lease: | |
2569 | 488 | self._report_ready(lease=lease) | ||
2570 | 489 | except Exception as e: | ||
2571 | 490 | report_diagnostic_event( | ||
2572 | 491 | "exception while reporting ready: %s" % e) | ||
2573 | 492 | raise | ||
2574 | 475 | return crawled_data | 493 | return crawled_data |
2575 | 476 | 494 | ||
2576 | 477 | def _is_platform_viable(self): | 495 | def _is_platform_viable(self): |
2577 | @@ -493,6 +511,16 @@ class DataSourceAzure(sources.DataSource): | |||
2578 | 493 | if not self._is_platform_viable(): | 511 | if not self._is_platform_viable(): |
2579 | 494 | return False | 512 | return False |
2580 | 495 | try: | 513 | try: |
2581 | 514 | get_boot_telemetry() | ||
2582 | 515 | except Exception as e: | ||
2583 | 516 | LOG.warning("Failed to get boot telemetry: %s", e) | ||
2584 | 517 | |||
2585 | 518 | try: | ||
2586 | 519 | get_system_info() | ||
2587 | 520 | except Exception as e: | ||
2588 | 521 | LOG.warning("Failed to get system information: %s", e) | ||
2589 | 522 | |||
2590 | 523 | try: | ||
2591 | 496 | crawled_data = util.log_time( | 524 | crawled_data = util.log_time( |
2592 | 497 | logfunc=LOG.debug, msg='Crawl of metadata service', | 525 | logfunc=LOG.debug, msg='Crawl of metadata service', |
2593 | 498 | func=self.crawl_metadata) | 526 | func=self.crawl_metadata) |
2594 | @@ -551,27 +579,55 @@ class DataSourceAzure(sources.DataSource): | |||
2595 | 551 | headers = {"Metadata": "true"} | 579 | headers = {"Metadata": "true"} |
2596 | 552 | nl_sock = None | 580 | nl_sock = None |
2597 | 553 | report_ready = bool(not os.path.isfile(REPORTED_READY_MARKER_FILE)) | 581 | report_ready = bool(not os.path.isfile(REPORTED_READY_MARKER_FILE)) |
2598 | 582 | self.imds_logging_threshold = 1 | ||
2599 | 583 | self.imds_poll_counter = 1 | ||
2600 | 584 | dhcp_attempts = 0 | ||
2601 | 585 | vnet_switched = False | ||
2602 | 586 | return_val = None | ||
2603 | 554 | 587 | ||
2604 | 555 | def exc_cb(msg, exception): | 588 | def exc_cb(msg, exception): |
2605 | 556 | if isinstance(exception, UrlError) and exception.code == 404: | 589 | if isinstance(exception, UrlError) and exception.code == 404: |
2606 | 590 | if self.imds_poll_counter == self.imds_logging_threshold: | ||
2607 | 591 | # Reducing the logging frequency as we are polling IMDS | ||
2608 | 592 | self.imds_logging_threshold *= 2 | ||
2609 | 593 | LOG.debug("Call to IMDS with arguments %s failed " | ||
2610 | 594 | "with status code %s after %s retries", | ||
2611 | 595 | msg, exception.code, self.imds_poll_counter) | ||
2612 | 596 | LOG.debug("Backing off logging threshold for the same " | ||
2613 | 597 | "exception to %d", self.imds_logging_threshold) | ||
2614 | 598 | self.imds_poll_counter += 1 | ||
2615 | 557 | return True | 599 | return True |
2616 | 600 | |||
2617 | 558 | # If we get an exception while trying to call IMDS, we | 601 | # If we get an exception while trying to call IMDS, we |
2618 | 559 | # call DHCP and setup the ephemeral network to acquire the new IP. | 602 | # call DHCP and setup the ephemeral network to acquire the new IP. |
2619 | 603 | LOG.debug("Call to IMDS with arguments %s failed with " | ||
2620 | 604 | "status code %s", msg, exception.code) | ||
2621 | 605 | report_diagnostic_event("polling IMDS failed with exception %s" | ||
2622 | 606 | % exception.code) | ||
2623 | 560 | return False | 607 | return False |
2624 | 561 | 608 | ||
2625 | 562 | LOG.debug("Wait for vnetswitch to happen") | 609 | LOG.debug("Wait for vnetswitch to happen") |
2626 | 563 | while True: | 610 | while True: |
2627 | 564 | try: | 611 | try: |
2631 | 565 | # Save our EphemeralDHCPv4 context so we avoid repeated dhcp | 612 | # Save our EphemeralDHCPv4 context to avoid repeated dhcp |
2632 | 566 | self._ephemeral_dhcp_ctx = EphemeralDHCPv4() | 613 | with events.ReportEventStack( |
2633 | 567 | lease = self._ephemeral_dhcp_ctx.obtain_lease() | 614 | name="obtain-dhcp-lease", |
2634 | 615 | description="obtain dhcp lease", | ||
2635 | 616 | parent=azure_ds_reporter): | ||
2636 | 617 | self._ephemeral_dhcp_ctx = EphemeralDHCPv4() | ||
2637 | 618 | lease = self._ephemeral_dhcp_ctx.obtain_lease() | ||
2638 | 619 | |||
2639 | 620 | if vnet_switched: | ||
2640 | 621 | dhcp_attempts += 1 | ||
2641 | 568 | if report_ready: | 622 | if report_ready: |
2642 | 569 | try: | 623 | try: |
2643 | 570 | nl_sock = netlink.create_bound_netlink_socket() | 624 | nl_sock = netlink.create_bound_netlink_socket() |
2644 | 571 | except netlink.NetlinkCreateSocketError as e: | 625 | except netlink.NetlinkCreateSocketError as e: |
2645 | 626 | report_diagnostic_event(e) | ||
2646 | 572 | LOG.warning(e) | 627 | LOG.warning(e) |
2647 | 573 | self._ephemeral_dhcp_ctx.clean_network() | 628 | self._ephemeral_dhcp_ctx.clean_network() |
2649 | 574 | return | 629 | break |
2650 | 630 | |||
2651 | 575 | path = REPORTED_READY_MARKER_FILE | 631 | path = REPORTED_READY_MARKER_FILE |
2652 | 576 | LOG.info( | 632 | LOG.info( |
2653 | 577 | "Creating a marker file to report ready: %s", path) | 633 | "Creating a marker file to report ready: %s", path) |
2654 | @@ -579,17 +635,33 @@ class DataSourceAzure(sources.DataSource): | |||
2655 | 579 | pid=os.getpid(), time=time())) | 635 | pid=os.getpid(), time=time())) |
2656 | 580 | self._report_ready(lease=lease) | 636 | self._report_ready(lease=lease) |
2657 | 581 | report_ready = False | 637 | report_ready = False |
2664 | 582 | try: | 638 | |
2665 | 583 | netlink.wait_for_media_disconnect_connect( | 639 | with events.ReportEventStack( |
2666 | 584 | nl_sock, lease['interface']) | 640 | name="wait-for-media-disconnect-connect", |
2667 | 585 | except AssertionError as error: | 641 | description="wait for vnet switch", |
2668 | 586 | LOG.error(error) | 642 | parent=azure_ds_reporter): |
2669 | 587 | return | 643 | try: |
2670 | 644 | netlink.wait_for_media_disconnect_connect( | ||
2671 | 645 | nl_sock, lease['interface']) | ||
2672 | 646 | except AssertionError as error: | ||
2673 | 647 | report_diagnostic_event(error) | ||
2674 | 648 | LOG.error(error) | ||
2675 | 649 | break | ||
2676 | 650 | |||
2677 | 651 | vnet_switched = True | ||
2678 | 588 | self._ephemeral_dhcp_ctx.clean_network() | 652 | self._ephemeral_dhcp_ctx.clean_network() |
2679 | 589 | else: | 653 | else: |
2683 | 590 | return readurl(url, timeout=IMDS_TIMEOUT_IN_SECONDS, | 654 | with events.ReportEventStack( |
2684 | 591 | headers=headers, exception_cb=exc_cb, | 655 | name="get-reprovision-data-from-imds", |
2685 | 592 | infinite=True, log_req_resp=False).contents | 656 | description="get reprovision data from imds", |
2686 | 657 | parent=azure_ds_reporter): | ||
2687 | 658 | return_val = readurl(url, | ||
2688 | 659 | timeout=IMDS_TIMEOUT_IN_SECONDS, | ||
2689 | 660 | headers=headers, | ||
2690 | 661 | exception_cb=exc_cb, | ||
2691 | 662 | infinite=True, | ||
2692 | 663 | log_req_resp=False).contents | ||
2693 | 664 | break | ||
2694 | 593 | except UrlError: | 665 | except UrlError: |
2695 | 594 | # Teardown our EphemeralDHCPv4 context on failure as we retry | 666 | # Teardown our EphemeralDHCPv4 context on failure as we retry |
2696 | 595 | self._ephemeral_dhcp_ctx.clean_network() | 667 | self._ephemeral_dhcp_ctx.clean_network() |
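The `imds_logging_threshold`/`imds_poll_counter` pair added to `exc_cb` above throttles logging while polling IMDS: 404s are expected during reprovisioning, so the counter is logged only when it reaches a threshold that doubles each time, keeping log volume logarithmic in the number of polls. A minimal model of that scheme (class name is illustrative):

```python
class ImdsLogThrottle(object):
    """Log-on-powers-of-two backoff, as used while polling IMDS for 404s."""

    def __init__(self):
        self.threshold = 1
        self.counter = 1

    def should_log(self):
        """Return True (and double the threshold) on polls 1, 2, 4, 8, ..."""
        log_now = self.counter == self.threshold
        if log_now:
            self.threshold *= 2
        self.counter += 1
        return log_now
```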
2697 | @@ -598,6 +670,14 @@ class DataSourceAzure(sources.DataSource): | |||
2698 | 598 | if nl_sock: | 670 | if nl_sock: |
2699 | 599 | nl_sock.close() | 671 | nl_sock.close() |
2700 | 600 | 672 | ||
2701 | 673 | if vnet_switched: | ||
2702 | 674 | report_diagnostic_event("attempted dhcp %d times after reuse" % | ||
2703 | 675 | dhcp_attempts) | ||
2704 | 676 | report_diagnostic_event("polled imds %d times after reuse" % | ||
2705 | 677 | self.imds_poll_counter) | ||
2706 | 678 | |||
2707 | 679 | return return_val | ||
2708 | 680 | |||
2709 | 601 | @azure_ds_telemetry_reporter | 681 | @azure_ds_telemetry_reporter |
2710 | 602 | def _report_ready(self, lease): | 682 | def _report_ready(self, lease): |
2711 | 603 | """Tells the fabric provisioning has completed """ | 683 | """Tells the fabric provisioning has completed """ |
2712 | @@ -666,9 +746,12 @@ class DataSourceAzure(sources.DataSource): | |||
2713 | 666 | self.ds_cfg['agent_command']) | 746 | self.ds_cfg['agent_command']) |
2714 | 667 | try: | 747 | try: |
2715 | 668 | fabric_data = metadata_func() | 748 | fabric_data = metadata_func() |
2717 | 669 | except Exception: | 749 | except Exception as e: |
2718 | 750 | report_diagnostic_event( | ||
2719 | 751 | "Error communicating with Azure fabric; You may experience " | ||
2720 | 752 | "connectivity issues: %s" % e) | ||
2721 | 670 | LOG.warning( | 753 | LOG.warning( |
2723 | 671 | "Error communicating with Azure fabric; You may experience." | 754 | "Error communicating with Azure fabric; You may experience " |
2724 | 672 | "connectivity issues.", exc_info=True) | 755 | "connectivity issues.", exc_info=True) |
2725 | 673 | return False | 756 | return False |
2726 | 674 | 757 | ||
2727 | @@ -684,6 +767,11 @@ class DataSourceAzure(sources.DataSource): | |||
2728 | 684 | return | 767 | return |
2729 | 685 | 768 | ||
2730 | 686 | @property | 769 | @property |
2731 | 770 | def availability_zone(self): | ||
2732 | 771 | return self.metadata.get( | ||
2733 | 772 | 'imds', {}).get('compute', {}).get('platformFaultDomain') | ||
2734 | 773 | |||
2735 | 774 | @property | ||
2736 | 687 | def network_config(self): | 775 | def network_config(self): |
2737 | 688 | """Generate a network config like net.generate_fallback_network() with | 776 | """Generate a network config like net.generate_fallback_network() with |
2738 | 689 | the following exceptions. | 777 | the following exceptions. |
2739 | @@ -701,6 +789,10 @@ class DataSourceAzure(sources.DataSource): | |||
2740 | 701 | self._network_config = parse_network_config(nc_src) | 789 | self._network_config = parse_network_config(nc_src) |
2741 | 702 | return self._network_config | 790 | return self._network_config |
2742 | 703 | 791 | ||
2743 | 792 | @property | ||
2744 | 793 | def region(self): | ||
2745 | 794 | return self.metadata.get('imds', {}).get('compute', {}).get('location') | ||
2746 | 795 | |||
2747 | 704 | 796 | ||
2748 | 705 | def _partitions_on_device(devpath, maxnum=16): | 797 | def _partitions_on_device(devpath, maxnum=16): |
2749 | 706 | # return a list of tuples (ptnum, path) for each part on devpath | 798 | # return a list of tuples (ptnum, path) for each part on devpath |
2750 | @@ -1018,7 +1110,9 @@ def read_azure_ovf(contents): | |||
2751 | 1018 | try: | 1110 | try: |
2752 | 1019 | dom = minidom.parseString(contents) | 1111 | dom = minidom.parseString(contents) |
2753 | 1020 | except Exception as e: | 1112 | except Exception as e: |
2755 | 1021 | raise BrokenAzureDataSource("Invalid ovf-env.xml: %s" % e) | 1113 | error_str = "Invalid ovf-env.xml: %s" % e |
2756 | 1114 | report_diagnostic_event(error_str) | ||
2757 | 1115 | raise BrokenAzureDataSource(error_str) | ||
2758 | 1022 | 1116 | ||
2759 | 1023 | results = find_child(dom.documentElement, | 1117 | results = find_child(dom.documentElement, |
2760 | 1024 | lambda n: n.localName == "ProvisioningSection") | 1118 | lambda n: n.localName == "ProvisioningSection") |
2761 | @@ -1232,7 +1326,7 @@ def parse_network_config(imds_metadata): | |||
2762 | 1232 | privateIpv4 = addr4['privateIpAddress'] | 1326 | privateIpv4 = addr4['privateIpAddress'] |
2763 | 1233 | if privateIpv4: | 1327 | if privateIpv4: |
2764 | 1234 | if dev_config.get('dhcp4', False): | 1328 | if dev_config.get('dhcp4', False): |
2766 | 1235 | # Append static address config for nic > 1 | 1329 | # Append static address config for ip > 1 |
2767 | 1236 | netPrefix = intf['ipv4']['subnet'][0].get( | 1330 | netPrefix = intf['ipv4']['subnet'][0].get( |
2768 | 1237 | 'prefix', '24') | 1331 | 'prefix', '24') |
2769 | 1238 | if not dev_config.get('addresses'): | 1332 | if not dev_config.get('addresses'): |
2770 | @@ -1242,6 +1336,11 @@ def parse_network_config(imds_metadata): | |||
2771 | 1242 | ip=privateIpv4, prefix=netPrefix)) | 1336 | ip=privateIpv4, prefix=netPrefix)) |
2772 | 1243 | else: | 1337 | else: |
2773 | 1244 | dev_config['dhcp4'] = True | 1338 | dev_config['dhcp4'] = True |
2774 | 1339 | # non-primary interfaces should have a higher | ||
2775 | 1340 | # route-metric (cost) so default routes prefer | ||
2776 | 1341 | # primary nic due to lower route-metric value | ||
2777 | 1342 | dev_config['dhcp4-overrides'] = { | ||
2778 | 1343 | 'route-metric': (idx + 1) * 100} | ||
2779 | 1245 | for addr6 in intf['ipv6']['ipAddress']: | 1344 | for addr6 in intf['ipv6']['ipAddress']: |
2780 | 1246 | privateIpv6 = addr6['privateIpAddress'] | 1345 | privateIpv6 = addr6['privateIpAddress'] |
2781 | 1247 | if privateIpv6: | 1346 | if privateIpv6: |
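The `dhcp4-overrides` hunk above assigns each DHCP-configured NIC a route metric of `(idx + 1) * 100`, so the primary NIC (index 0) ends up with the lowest-cost default route. A minimal stand-alone sketch of that scheme (the helper name is mine, not cloud-init's):

```python
# Sketch of the route-metric scheme from the hunk above: non-primary
# NICs get progressively higher metrics so the primary NIC's default
# route is preferred. The helper name is illustrative.
def dhcp4_overrides(nic_index):
    # index 0 (primary) -> 100, index 1 -> 200, index 2 -> 300, ...
    return {'route-metric': (nic_index + 1) * 100}

metrics = [dhcp4_overrides(idx)['route-metric'] for idx in range(3)]
print(metrics)  # -> [100, 200, 300]
```

With netplan as the renderer, these overrides land under each interface's `dhcp4-overrides:` key, which is how the metric reaches the kernel routing table.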
2782 | @@ -1285,8 +1384,13 @@ def get_metadata_from_imds(fallback_nic, retries): | |||
2783 | 1285 | if net.is_up(fallback_nic): | 1384 | if net.is_up(fallback_nic): |
2784 | 1286 | return util.log_time(**kwargs) | 1385 | return util.log_time(**kwargs) |
2785 | 1287 | else: | 1386 | else: |
2788 | 1288 | with EphemeralDHCPv4(fallback_nic): | 1387 | try: |
2789 | 1289 | return util.log_time(**kwargs) | 1388 | with EphemeralDHCPv4WithReporting( |
2790 | 1389 | azure_ds_reporter, fallback_nic): | ||
2791 | 1390 | return util.log_time(**kwargs) | ||
2792 | 1391 | except Exception as e: | ||
2793 | 1392 | report_diagnostic_event("exception while getting metadata: %s" % e) | ||
2794 | 1393 | raise | ||
2795 | 1290 | 1394 | ||
2796 | 1291 | 1395 | ||
2797 | 1292 | @azure_ds_telemetry_reporter | 1396 | @azure_ds_telemetry_reporter |
2798 | @@ -1299,11 +1403,14 @@ def _get_metadata_from_imds(retries): | |||
2799 | 1299 | url, timeout=IMDS_TIMEOUT_IN_SECONDS, headers=headers, | 1403 | url, timeout=IMDS_TIMEOUT_IN_SECONDS, headers=headers, |
2800 | 1300 | retries=retries, exception_cb=retry_on_url_exc) | 1404 | retries=retries, exception_cb=retry_on_url_exc) |
2801 | 1301 | except Exception as e: | 1405 | except Exception as e: |
2803 | 1302 | LOG.debug('Ignoring IMDS instance metadata: %s', e) | 1406 | msg = 'Ignoring IMDS instance metadata: %s' % e |
2804 | 1407 | report_diagnostic_event(msg) | ||
2805 | 1408 | LOG.debug(msg) | ||
2806 | 1303 | return {} | 1409 | return {} |
2807 | 1304 | try: | 1410 | try: |
2808 | 1305 | return util.load_json(str(response)) | 1411 | return util.load_json(str(response)) |
2810 | 1306 | except json.decoder.JSONDecodeError: | 1412 | except json.decoder.JSONDecodeError as e: |
2811 | 1413 | report_diagnostic_event('non-json imds response: %s' % e) | ||
2812 | 1307 | LOG.warning( | 1414 | LOG.warning( |
2813 | 1308 | 'Ignoring non-json IMDS instance metadata: %s', str(response)) | 1415 | 'Ignoring non-json IMDS instance metadata: %s', str(response)) |
2814 | 1309 | return {} | 1416 | return {} |
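`_get_metadata_from_imds` above degrades to an empty dict on either a fetch failure or a non-JSON body, so callers can proceed without IMDS data. The JSON-handling half in isolation (function name is mine, not cloud-init's):

```python
import json

def parse_imds_response(text):
    # Mirrors the hunk above: invalid JSON is reported and swallowed,
    # and an empty dict is returned instead of raising to the caller.
    try:
        return json.loads(text)
    except json.decoder.JSONDecodeError:
        return {}

print(parse_imds_response('{"compute": {"location": "westus2"}}'))
print(parse_imds_response('not json'))  # -> {}
```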
2815 | @@ -1356,8 +1463,10 @@ def _is_platform_viable(seed_dir): | |||
2816 | 1356 | asset_tag = util.read_dmi_data('chassis-asset-tag') | 1463 | asset_tag = util.read_dmi_data('chassis-asset-tag') |
2817 | 1357 | if asset_tag == AZURE_CHASSIS_ASSET_TAG: | 1464 | if asset_tag == AZURE_CHASSIS_ASSET_TAG: |
2818 | 1358 | return True | 1465 | return True |
2821 | 1359 | LOG.debug("Non-Azure DMI asset tag '%s' discovered.", asset_tag) | 1466 | msg = "Non-Azure DMI asset tag '%s' discovered." % asset_tag |
2822 | 1360 | evt.description = "Non-Azure DMI asset tag '%s' discovered.", asset_tag | 1467 | LOG.debug(msg) |
2823 | 1468 | evt.description = msg | ||
2824 | 1469 | report_diagnostic_event(msg) | ||
2825 | 1361 | if os.path.exists(os.path.join(seed_dir, 'ovf-env.xml')): | 1470 | if os.path.exists(os.path.join(seed_dir, 'ovf-env.xml')): |
2826 | 1362 | return True | 1471 | return True |
2827 | 1363 | return False | 1472 | return False |
2828 | diff --git a/cloudinit/sources/DataSourceCloudSigma.py b/cloudinit/sources/DataSourceCloudSigma.py | |||
2829 | index 2955d3f..df88f67 100644 | |||
2830 | --- a/cloudinit/sources/DataSourceCloudSigma.py | |||
2831 | +++ b/cloudinit/sources/DataSourceCloudSigma.py | |||
2832 | @@ -42,12 +42,8 @@ class DataSourceCloudSigma(sources.DataSource): | |||
2833 | 42 | if not sys_product_name: | 42 | if not sys_product_name: |
2834 | 43 | LOG.debug("system-product-name not available in dmi data") | 43 | LOG.debug("system-product-name not available in dmi data") |
2835 | 44 | return False | 44 | return False |
2842 | 45 | else: | 45 | LOG.debug("detected hypervisor as %s", sys_product_name) |
2843 | 46 | LOG.debug("detected hypervisor as %s", sys_product_name) | 46 | return 'cloudsigma' in sys_product_name.lower() |
2838 | 47 | return 'cloudsigma' in sys_product_name.lower() | ||
2839 | 48 | |||
2840 | 49 | LOG.warning("failed to query dmi data for system product name") | ||
2841 | 50 | return False | ||
2844 | 51 | 47 | ||
2845 | 52 | def _get_data(self): | 48 | def _get_data(self): |
2846 | 53 | """ | 49 | """ |
2847 | diff --git a/cloudinit/sources/DataSourceExoscale.py b/cloudinit/sources/DataSourceExoscale.py | |||
2848 | 54 | new file mode 100644 | 50 | new file mode 100644 |
2849 | index 0000000..52e7f6f | |||
2850 | --- /dev/null | |||
2851 | +++ b/cloudinit/sources/DataSourceExoscale.py | |||
2852 | @@ -0,0 +1,258 @@ | |||
2853 | 1 | # Author: Mathieu Corbin <mathieu.corbin@exoscale.com> | ||
2854 | 2 | # Author: Christopher Glass <christopher.glass@exoscale.com> | ||
2855 | 3 | # | ||
2856 | 4 | # This file is part of cloud-init. See LICENSE file for license information. | ||
2857 | 5 | |||
2858 | 6 | from cloudinit import ec2_utils as ec2 | ||
2859 | 7 | from cloudinit import log as logging | ||
2860 | 8 | from cloudinit import sources | ||
2861 | 9 | from cloudinit import url_helper | ||
2862 | 10 | from cloudinit import util | ||
2863 | 11 | |||
2864 | 12 | LOG = logging.getLogger(__name__) | ||
2865 | 13 | |||
2866 | 14 | METADATA_URL = "http://169.254.169.254" | ||
2867 | 15 | API_VERSION = "1.0" | ||
2868 | 16 | PASSWORD_SERVER_PORT = 8080 | ||
2869 | 17 | |||
2870 | 18 | URL_TIMEOUT = 10 | ||
2871 | 19 | URL_RETRIES = 6 | ||
2872 | 20 | |||
2873 | 21 | EXOSCALE_DMI_NAME = "Exoscale" | ||
2874 | 22 | |||
2875 | 23 | BUILTIN_DS_CONFIG = { | ||
2876 | 24 | # We run the set password config module on every boot in order to enable | ||
2877 | 25 | # resetting the instance's password via the exoscale console (and a | ||
2878 | 26 | # subsequent instance reboot). | ||
2879 | 27 | 'cloud_config_modules': [["set-passwords", "always"]] | ||
2880 | 28 | } | ||
2881 | 29 | |||
2882 | 30 | |||
2883 | 31 | class DataSourceExoscale(sources.DataSource): | ||
2884 | 32 | |||
2885 | 33 | dsname = 'Exoscale' | ||
2886 | 34 | |||
2887 | 35 | def __init__(self, sys_cfg, distro, paths): | ||
2888 | 36 | super(DataSourceExoscale, self).__init__(sys_cfg, distro, paths) | ||
2889 | 37 | LOG.debug("Initializing the Exoscale datasource") | ||
2890 | 38 | |||
2891 | 39 | self.metadata_url = self.ds_cfg.get('metadata_url', METADATA_URL) | ||
2892 | 40 | self.api_version = self.ds_cfg.get('api_version', API_VERSION) | ||
2893 | 41 | self.password_server_port = int( | ||
2894 | 42 | self.ds_cfg.get('password_server_port', PASSWORD_SERVER_PORT)) | ||
2895 | 43 | self.url_timeout = self.ds_cfg.get('timeout', URL_TIMEOUT) | ||
2896 | 44 | self.url_retries = self.ds_cfg.get('retries', URL_RETRIES) | ||
2897 | 45 | |||
2898 | 46 | self.extra_config = BUILTIN_DS_CONFIG | ||
2899 | 47 | |||
2900 | 48 | def wait_for_metadata_service(self): | ||
2901 | 49 | """Wait for the metadata service to be reachable.""" | ||
2902 | 50 | |||
2903 | 51 | metadata_url = "{}/{}/meta-data/instance-id".format( | ||
2904 | 52 | self.metadata_url, self.api_version) | ||
2905 | 53 | |||
2906 | 54 | url = url_helper.wait_for_url( | ||
2907 | 55 | urls=[metadata_url], | ||
2908 | 56 | max_wait=self.url_max_wait, | ||
2909 | 57 | timeout=self.url_timeout, | ||
2910 | 58 | status_cb=LOG.critical) | ||
2911 | 59 | |||
2912 | 60 | return bool(url) | ||
2913 | 61 | |||
2914 | 62 | def crawl_metadata(self): | ||
2915 | 63 | """ | ||
2916 | 64 | Crawl the metadata service when available. | ||
2917 | 65 | |||
2918 | 66 | @returns: Dictionary of crawled metadata content. | ||
2919 | 67 | """ | ||
2920 | 68 | metadata_ready = util.log_time( | ||
2921 | 69 | logfunc=LOG.info, | ||
2922 | 70 | msg='waiting for the metadata service', | ||
2923 | 71 | func=self.wait_for_metadata_service) | ||
2924 | 72 | |||
2925 | 73 | if not metadata_ready: | ||
2926 | 74 | return {} | ||
2927 | 75 | |||
2928 | 76 | return read_metadata(self.metadata_url, self.api_version, | ||
2929 | 77 | self.password_server_port, self.url_timeout, | ||
2930 | 78 | self.url_retries) | ||
2931 | 79 | |||
2932 | 80 | def _get_data(self): | ||
2933 | 81 | """Fetch the user data, the metadata and the VM password | ||
2934 | 82 | from the metadata service. | ||
2935 | 83 | |||
2936 | 84 | Please refer to the datasource documentation for details on how the | ||
2937 | 85 | metadata server and password server are crawled. | ||
2938 | 86 | """ | ||
2939 | 87 | if not self._is_platform_viable(): | ||
2940 | 88 | return False | ||
2941 | 89 | |||
2942 | 90 | data = util.log_time( | ||
2943 | 91 | logfunc=LOG.debug, | ||
2944 | 92 | msg='Crawl of metadata service', | ||
2945 | 93 | func=self.crawl_metadata) | ||
2946 | 94 | |||
2947 | 95 | if not data: | ||
2948 | 96 | return False | ||
2949 | 97 | |||
2950 | 98 | self.userdata_raw = data['user-data'] | ||
2951 | 99 | self.metadata = data['meta-data'] | ||
2952 | 100 | password = data.get('password') | ||
2953 | 101 | |||
2954 | 102 | password_config = {} | ||
2955 | 103 | if password: | ||
2956 | 104 | # Since we have a password, let's make sure we are allowed to use | ||
2957 | 105 | # it by allowing ssh_pwauth. | ||
2958 | 106 | # The password module's default behavior is to leave the | ||
2959 | 107 | # configuration as-is in this regard, so that means it will either | ||
2960 | 108 | # leave the password always disabled if no password is ever set, or | ||
2961 | 109 | # leave the password login enabled if we set it once. | ||
2962 | 110 | password_config = { | ||
2963 | 111 | 'ssh_pwauth': True, | ||
2964 | 112 | 'password': password, | ||
2965 | 113 | 'chpasswd': { | ||
2966 | 114 | 'expire': False, | ||
2967 | 115 | }, | ||
2968 | 116 | } | ||
2969 | 117 | |||
2970 | 118 | # builtin extra_config overrides password_config | ||
2971 | 119 | self.extra_config = util.mergemanydict( | ||
2972 | 120 | [self.extra_config, password_config]) | ||
2973 | 121 | |||
2974 | 122 | return True | ||
2975 | 123 | |||
2976 | 124 | def get_config_obj(self): | ||
2977 | 125 | return self.extra_config | ||
2978 | 126 | |||
2979 | 127 | def _is_platform_viable(self): | ||
2980 | 128 | return util.read_dmi_data('system-product-name').startswith( | ||
2981 | 129 | EXOSCALE_DMI_NAME) | ||
2982 | 130 | |||
2983 | 131 | |||
2984 | 132 | # Used to match classes to dependencies | ||
2985 | 133 | datasources = [ | ||
2986 | 134 | (DataSourceExoscale, (sources.DEP_FILESYSTEM, sources.DEP_NETWORK)), | ||
2987 | 135 | ] | ||
2988 | 136 | |||
2989 | 137 | |||
2990 | 138 | # Return a list of data sources that match this set of dependencies | ||
2991 | 139 | def get_datasource_list(depends): | ||
2992 | 140 | return sources.list_from_depends(depends, datasources) | ||
2993 | 141 | |||
2994 | 142 | |||
2995 | 143 | def get_password(metadata_url=METADATA_URL, | ||
2996 | 144 | api_version=API_VERSION, | ||
2997 | 145 | password_server_port=PASSWORD_SERVER_PORT, | ||
2998 | 146 | url_timeout=URL_TIMEOUT, | ||
2999 | 147 | url_retries=URL_RETRIES): | ||
3000 | 148 | """Obtain the VM's password if set. | ||
3001 | 149 | |||
3002 | 150 | Once fetched the password is marked saved. Future calls to this method may | ||
3003 | 151 | return empty string or 'saved_password'.""" | ||
3004 | 152 | password_url = "{}:{}/{}/".format(metadata_url, password_server_port, | ||
3005 | 153 | api_version) | ||
3006 | 154 | response = url_helper.read_file_or_url( | ||
3007 | 155 | password_url, | ||
3008 | 156 | ssl_details=None, | ||
3009 | 157 | headers={"DomU_Request": "send_my_password"}, | ||
3010 | 158 | timeout=url_timeout, | ||
3011 | 159 | retries=url_retries) | ||
3012 | 160 | password = response.contents.decode('utf-8') | ||
3013 | 161 | # the password is empty or already saved | ||
3014 | 162 | # Note: the original metadata server would answer an additional | ||
3015 | 163 | # 'bad_request' status, but the Exoscale implementation does not. | ||
3016 | 164 | if password in ['', 'saved_password']: | ||
3017 | 165 | return None | ||
3018 | 166 | # save the password | ||
3019 | 167 | url_helper.read_file_or_url( | ||
3020 | 168 | password_url, | ||
3021 | 169 | ssl_details=None, | ||
3022 | 170 | headers={"DomU_Request": "saved_password"}, | ||
3023 | 171 | timeout=url_timeout, | ||
3024 | 172 | retries=url_retries) | ||
3025 | 173 | return password | ||
3026 | 174 | |||
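`get_password` above treats two sentinel response bodies as "no password pending". That branch can be exercised without a network (the helper name is mine, not part of the datasource):

```python
def interpret_password_response(contents):
    # '' means no password was ever set for this instance;
    # 'saved_password' means it was already fetched and acknowledged
    # on a previous boot (see the docstring in the diff above).
    if contents in ('', 'saved_password'):
        return None
    return contents
```

When a real password does come back, the datasource immediately issues a second request with the `DomU_Request: saved_password` header, which is why later boots see the `saved_password` sentinel instead.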
3027 | 175 | |||
3028 | 176 | def read_metadata(metadata_url=METADATA_URL, | ||
3029 | 177 | api_version=API_VERSION, | ||
3030 | 178 | password_server_port=PASSWORD_SERVER_PORT, | ||
3031 | 179 | url_timeout=URL_TIMEOUT, | ||
3032 | 180 | url_retries=URL_RETRIES): | ||
3033 | 181 | """Query the metadata server and return the retrieved data.""" | ||
3034 | 182 | crawled_metadata = {} | ||
3035 | 183 | crawled_metadata['_metadata_api_version'] = api_version | ||
3036 | 184 | try: | ||
3037 | 185 | crawled_metadata['user-data'] = ec2.get_instance_userdata( | ||
3038 | 186 | api_version, | ||
3039 | 187 | metadata_url, | ||
3040 | 188 | timeout=url_timeout, | ||
3041 | 189 | retries=url_retries) | ||
3042 | 190 | crawled_metadata['meta-data'] = ec2.get_instance_metadata( | ||
3043 | 191 | api_version, | ||
3044 | 192 | metadata_url, | ||
3045 | 193 | timeout=url_timeout, | ||
3046 | 194 | retries=url_retries) | ||
3047 | 195 | except Exception as e: | ||
3048 | 196 | util.logexc(LOG, "failed reading from metadata url %s (%s)", | ||
3049 | 197 | metadata_url, e) | ||
3050 | 198 | return {} | ||
3051 | 199 | |||
3052 | 200 | try: | ||
3053 | 201 | crawled_metadata['password'] = get_password( | ||
3054 | 202 | api_version=api_version, | ||
3055 | 203 | metadata_url=metadata_url, | ||
3056 | 204 | password_server_port=password_server_port, | ||
3057 | 205 | url_retries=url_retries, | ||
3058 | 206 | url_timeout=url_timeout) | ||
3059 | 207 | except Exception as e: | ||
3060 | 208 | util.logexc(LOG, "failed to read from password server url %s:%s (%s)", | ||
3061 | 209 | metadata_url, password_server_port, e) | ||
3062 | 210 | |||
3063 | 211 | return crawled_metadata | ||
3064 | 212 | |||
3065 | 213 | |||
3066 | 214 | if __name__ == "__main__": | ||
3067 | 215 | import argparse | ||
3068 | 216 | |||
3069 | 217 | parser = argparse.ArgumentParser(description='Query Exoscale Metadata') | ||
3070 | 218 | parser.add_argument( | ||
3071 | 219 | "--endpoint", | ||
3072 | 220 | metavar="URL", | ||
3073 | 221 | help="The url of the metadata service.", | ||
3074 | 222 | default=METADATA_URL) | ||
3075 | 223 | parser.add_argument( | ||
3076 | 224 | "--version", | ||
3077 | 225 | metavar="VERSION", | ||
3078 | 226 | help="The version of the metadata endpoint to query.", | ||
3079 | 227 | default=API_VERSION) | ||
3080 | 228 | parser.add_argument( | ||
3081 | 229 | "--retries", | ||
3082 | 230 | metavar="NUM", | ||
3083 | 231 | type=int, | ||
3084 | 232 | help="The number of retries querying the endpoint.", | ||
3085 | 233 | default=URL_RETRIES) | ||
3086 | 234 | parser.add_argument( | ||
3087 | 235 | "--timeout", | ||
3088 | 236 | metavar="NUM", | ||
3089 | 237 | type=int, | ||
3090 | 238 | help="The time in seconds to wait before timing out.", | ||
3091 | 239 | default=URL_TIMEOUT) | ||
3092 | 240 | parser.add_argument( | ||
3093 | 241 | "--password-port", | ||
3094 | 242 | metavar="PORT", | ||
3095 | 243 | type=int, | ||
3096 | 244 | help="The port on which the password endpoint listens", | ||
3097 | 245 | default=PASSWORD_SERVER_PORT) | ||
3098 | 246 | |||
3099 | 247 | args = parser.parse_args() | ||
3100 | 248 | |||
3101 | 249 | data = read_metadata( | ||
3102 | 250 | metadata_url=args.endpoint, | ||
3103 | 251 | api_version=args.version, | ||
3104 | 252 | password_server_port=args.password_port, | ||
3105 | 253 | url_timeout=args.timeout, | ||
3106 | 254 | url_retries=args.retries) | ||
3107 | 255 | |||
3108 | 256 | print(util.json_dumps(data)) | ||
3109 | 257 | |||
3110 | 258 | # vi: ts=4 expandtab | ||
3111 | diff --git a/cloudinit/sources/DataSourceGCE.py b/cloudinit/sources/DataSourceGCE.py | |||
3112 | index d816262..6cbfbba 100644 | |||
3113 | --- a/cloudinit/sources/DataSourceGCE.py | |||
3114 | +++ b/cloudinit/sources/DataSourceGCE.py | |||
3115 | @@ -18,10 +18,13 @@ LOG = logging.getLogger(__name__) | |||
3116 | 18 | MD_V1_URL = 'http://metadata.google.internal/computeMetadata/v1/' | 18 | MD_V1_URL = 'http://metadata.google.internal/computeMetadata/v1/' |
3117 | 19 | BUILTIN_DS_CONFIG = {'metadata_url': MD_V1_URL} | 19 | BUILTIN_DS_CONFIG = {'metadata_url': MD_V1_URL} |
3118 | 20 | REQUIRED_FIELDS = ('instance-id', 'availability-zone', 'local-hostname') | 20 | REQUIRED_FIELDS = ('instance-id', 'availability-zone', 'local-hostname') |
3119 | 21 | GUEST_ATTRIBUTES_URL = ('http://metadata.google.internal/computeMetadata/' | ||
3120 | 22 | 'v1/instance/guest-attributes') | ||
3121 | 23 | HOSTKEY_NAMESPACE = 'hostkeys' | ||
3122 | 24 | HEADERS = {'Metadata-Flavor': 'Google'} | ||
3123 | 21 | 25 | ||
3124 | 22 | 26 | ||
3125 | 23 | class GoogleMetadataFetcher(object): | 27 | class GoogleMetadataFetcher(object): |
3126 | 24 | headers = {'Metadata-Flavor': 'Google'} | ||
3127 | 25 | 28 | ||
3128 | 26 | def __init__(self, metadata_address): | 29 | def __init__(self, metadata_address): |
3129 | 27 | self.metadata_address = metadata_address | 30 | self.metadata_address = metadata_address |
3130 | @@ -32,7 +35,7 @@ class GoogleMetadataFetcher(object): | |||
3131 | 32 | url = self.metadata_address + path | 35 | url = self.metadata_address + path |
3132 | 33 | if is_recursive: | 36 | if is_recursive: |
3133 | 34 | url += '/?recursive=True' | 37 | url += '/?recursive=True' |
3135 | 35 | resp = url_helper.readurl(url=url, headers=self.headers) | 38 | resp = url_helper.readurl(url=url, headers=HEADERS) |
3136 | 36 | except url_helper.UrlError as exc: | 39 | except url_helper.UrlError as exc: |
3137 | 37 | msg = "url %s raised exception %s" | 40 | msg = "url %s raised exception %s" |
3138 | 38 | LOG.debug(msg, path, exc) | 41 | LOG.debug(msg, path, exc) |
3139 | @@ -90,6 +93,10 @@ class DataSourceGCE(sources.DataSource): | |||
3140 | 90 | public_keys_data = self.metadata['public-keys-data'] | 93 | public_keys_data = self.metadata['public-keys-data'] |
3141 | 91 | return _parse_public_keys(public_keys_data, self.default_user) | 94 | return _parse_public_keys(public_keys_data, self.default_user) |
3142 | 92 | 95 | ||
3143 | 96 | def publish_host_keys(self, hostkeys): | ||
3144 | 97 | for key in hostkeys: | ||
3145 | 98 | _write_host_key_to_guest_attributes(*key) | ||
3146 | 99 | |||
3147 | 93 | def get_hostname(self, fqdn=False, resolve_ip=False, metadata_only=False): | 100 | def get_hostname(self, fqdn=False, resolve_ip=False, metadata_only=False): |
3148 | 94 | # GCE has long FQDNs and has asked for short hostnames. | 101 |
3149 | 95 | return self.metadata['local-hostname'].split('.')[0] | 102 | return self.metadata['local-hostname'].split('.')[0] |
3150 | @@ -103,6 +110,17 @@ class DataSourceGCE(sources.DataSource): | |||
3151 | 103 | return self.availability_zone.rsplit('-', 1)[0] | 110 | return self.availability_zone.rsplit('-', 1)[0] |
3152 | 104 | 111 | ||
3153 | 105 | 112 | ||
3154 | 113 | def _write_host_key_to_guest_attributes(key_type, key_value): | ||
3155 | 114 | url = '%s/%s/%s' % (GUEST_ATTRIBUTES_URL, HOSTKEY_NAMESPACE, key_type) | ||
3156 | 115 | key_value = key_value.encode('utf-8') | ||
3157 | 116 | resp = url_helper.readurl(url=url, data=key_value, headers=HEADERS, | ||
3158 | 117 | request_method='PUT', check_status=False) | ||
3159 | 118 | if resp.ok(): | ||
3160 | 119 | LOG.debug('Wrote %s host key to guest attributes.', key_type) | ||
3161 | 120 | else: | ||
3162 | 121 | LOG.debug('Unable to write %s host key to guest attributes.', key_type) | ||
3163 | 122 | |||
3164 | 123 | |||
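The PUT target in `_write_host_key_to_guest_attributes` is assembled from constants introduced earlier in this diff; reproducing just the URL construction (constant values copied from the hunks above, helper name mine):

```python
# URL construction as in _write_host_key_to_guest_attributes above:
# one guest-attribute entry per host-key type, under the 'hostkeys'
# namespace. Constants are copied from the diff; the helper is mine.
GUEST_ATTRIBUTES_URL = ('http://metadata.google.internal/computeMetadata/'
                        'v1/instance/guest-attributes')
HOSTKEY_NAMESPACE = 'hostkeys'

def hostkey_url(key_type):
    return '%s/%s/%s' % (GUEST_ATTRIBUTES_URL, HOSTKEY_NAMESPACE, key_type)

print(hostkey_url('ssh-rsa'))
```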
3165 | 106 | def _has_expired(public_key): | 124 | def _has_expired(public_key): |
3166 | 107 | # Check whether an SSH key is expired. Public key input is a single SSH | 125 | # Check whether an SSH key is expired. Public key input is a single SSH |
3167 | 108 | # public key in the GCE specific key format documented here: | 126 | # public key in the GCE specific key format documented here: |
3168 | diff --git a/cloudinit/sources/DataSourceHetzner.py b/cloudinit/sources/DataSourceHetzner.py | |||
3169 | index 5c75b65..5029833 100644 | |||
3170 | --- a/cloudinit/sources/DataSourceHetzner.py | |||
3171 | +++ b/cloudinit/sources/DataSourceHetzner.py | |||
3172 | @@ -28,6 +28,9 @@ MD_WAIT_RETRY = 2 | |||
3173 | 28 | 28 | ||
3174 | 29 | 29 | ||
3175 | 30 | class DataSourceHetzner(sources.DataSource): | 30 | class DataSourceHetzner(sources.DataSource): |
3176 | 31 | |||
3177 | 32 | dsname = 'Hetzner' | ||
3178 | 33 | |||
3179 | 31 | def __init__(self, sys_cfg, distro, paths): | 34 | def __init__(self, sys_cfg, distro, paths): |
3180 | 32 | sources.DataSource.__init__(self, sys_cfg, distro, paths) | 35 | sources.DataSource.__init__(self, sys_cfg, distro, paths) |
3181 | 33 | self.distro = distro | 36 | self.distro = distro |
3182 | diff --git a/cloudinit/sources/DataSourceOVF.py b/cloudinit/sources/DataSourceOVF.py | |||
3183 | index 70e7a5c..dd941d2 100644 | |||
3184 | --- a/cloudinit/sources/DataSourceOVF.py | |||
3185 | +++ b/cloudinit/sources/DataSourceOVF.py | |||
3186 | @@ -148,6 +148,9 @@ class DataSourceOVF(sources.DataSource): | |||
3187 | 148 | product_marker, os.path.join(self.paths.cloud_dir, 'data')) | 148 | product_marker, os.path.join(self.paths.cloud_dir, 'data')) |
3188 | 149 | special_customization = product_marker and not hasmarkerfile | 149 | special_customization = product_marker and not hasmarkerfile |
3189 | 150 | customscript = self._vmware_cust_conf.custom_script_name | 150 | customscript = self._vmware_cust_conf.custom_script_name |
3190 | 151 | ccScriptsDir = os.path.join( | ||
3191 | 152 | self.paths.get_cpath("scripts"), | ||
3192 | 153 | "per-instance") | ||
3193 | 151 | except Exception as e: | 154 | except Exception as e: |
3194 | 152 | _raise_error_status( | 155 | _raise_error_status( |
3195 | 153 | "Error parsing the customization Config File", | 156 | "Error parsing the customization Config File", |
3196 | @@ -201,7 +204,9 @@ class DataSourceOVF(sources.DataSource): | |||
3197 | 201 | 204 | ||
3198 | 202 | if customscript: | 205 | if customscript: |
3199 | 203 | try: | 206 | try: |
3201 | 204 | postcust = PostCustomScript(customscript, imcdirpath) | 207 | postcust = PostCustomScript(customscript, |
3202 | 208 | imcdirpath, | ||
3203 | 209 | ccScriptsDir) | ||
3204 | 205 | postcust.execute() | 210 | postcust.execute() |
3205 | 206 | except Exception as e: | 211 | except Exception as e: |
3206 | 207 | _raise_error_status( | 212 | _raise_error_status( |
3207 | diff --git a/cloudinit/sources/DataSourceOracle.py b/cloudinit/sources/DataSourceOracle.py | |||
3208 | index 70b9c58..6e73f56 100644 | |||
3209 | --- a/cloudinit/sources/DataSourceOracle.py | |||
3210 | +++ b/cloudinit/sources/DataSourceOracle.py | |||
3211 | @@ -16,7 +16,7 @@ Notes: | |||
3212 | 16 | """ | 16 | """ |
3213 | 17 | 17 | ||
3214 | 18 | from cloudinit.url_helper import combine_url, readurl, UrlError | 18 | from cloudinit.url_helper import combine_url, readurl, UrlError |
3216 | 19 | from cloudinit.net import dhcp | 19 | from cloudinit.net import dhcp, get_interfaces_by_mac |
3217 | 20 | from cloudinit import net | 20 | from cloudinit import net |
3218 | 21 | from cloudinit import sources | 21 | from cloudinit import sources |
3219 | 22 | from cloudinit import util | 22 | from cloudinit import util |
3220 | @@ -28,8 +28,80 @@ import re | |||
3221 | 28 | 28 | ||
3222 | 29 | LOG = logging.getLogger(__name__) | 29 | LOG = logging.getLogger(__name__) |
3223 | 30 | 30 | ||
3224 | 31 | BUILTIN_DS_CONFIG = { | ||
3225 | 32 | # Don't use IMDS to configure secondary NICs by default | ||
3226 | 33 | 'configure_secondary_nics': False, | ||
3227 | 34 | } | ||
3228 | 31 | CHASSIS_ASSET_TAG = "OracleCloud.com" | 35 | CHASSIS_ASSET_TAG = "OracleCloud.com" |
3229 | 32 | METADATA_ENDPOINT = "http://169.254.169.254/openstack/" | 36 | METADATA_ENDPOINT = "http://169.254.169.254/openstack/" |
3230 | 37 | VNIC_METADATA_URL = 'http://169.254.169.254/opc/v1/vnics/' | ||
3231 | 38 | # https://docs.cloud.oracle.com/iaas/Content/Network/Troubleshoot/connectionhang.htm#Overview, | ||
3232 | 39 | # indicates that an MTU of 9000 is used within OCI | ||
3233 | 40 | MTU = 9000 | ||
3234 | 41 | |||
3235 | 42 | |||
3236 | 43 | def _add_network_config_from_opc_imds(network_config): | ||
3237 | 44 | """ | ||
3238 | 45 | Fetch data from Oracle's IMDS, generate secondary NIC config, merge it. | ||
3239 | 46 | |||
3240 | 47 | The primary NIC configuration should not be modified based on the IMDS | ||
3241 | 48 | values, as it should continue to be configured for DHCP. As such, this | ||
3242 | 49 | takes an existing network_config dict which is expected to have the primary | ||
3243 | 50 | NIC configuration already present. It will mutate the given dict to | ||
3244 | 51 | include the secondary VNICs. | ||
3245 | 52 | |||
3246 | 53 | :param network_config: | ||
3247 | 54 | A v1 network config dict with the primary NIC already configured. This | ||
3248 | 55 | dict will be mutated. | ||
3249 | 56 | |||
3250 | 57 | :raises: | ||
3251 | 58 | Exceptions are not handled within this function. Likely exceptions are | ||
3252 | 59 | those raised by url_helper.readurl (if communicating with the IMDS | ||
3253 | 60 | fails), ValueError/JSONDecodeError (if the IMDS returns invalid JSON), | ||
3254 | 61 | and KeyError/IndexError (if the IMDS returns valid JSON with unexpected | ||
3255 | 62 | contents). | ||
3256 | 63 | """ | ||
3257 | 64 | resp = readurl(VNIC_METADATA_URL) | ||
3258 | 65 | vnics = json.loads(str(resp)) | ||
3259 | 66 | |||
3260 | 67 | if 'nicIndex' in vnics[0]: | ||
3261 | 68 | # TODO: Once configure_secondary_nics defaults to True, lower the level | ||
3262 | 69 | # of this log message. (Currently, if we're running this code at all, | ||
3263 | 70 | # someone has explicitly opted-in to secondary VNIC configuration, so | ||
3264 | 71 | # we should warn them that it didn't happen. Once it's default, this | ||
3265 | 72 | # would be emitted on every Bare Metal Machine launch, which means INFO | ||
3266 | 73 | # or DEBUG would be more appropriate.) | ||
3267 | 74 | LOG.warning( | ||
3268 | 75 | 'VNIC metadata indicates this is a bare metal machine; skipping' | ||
3269 | 76 | ' secondary VNIC configuration.' | ||
3270 | 77 | ) | ||
3271 | 78 | return | ||
3272 | 79 | |||
3273 | 80 | interfaces_by_mac = get_interfaces_by_mac() | ||
3274 | 81 | |||
3275 | 82 | for vnic_dict in vnics[1:]: | ||
3276 | 83 | # We skip the first entry in the response because the primary interface | ||
3277 | 84 | # is already configured by iSCSI boot; applying configuration from the | ||
3278 | 85 | # IMDS is not required. | ||
3279 | 86 | mac_address = vnic_dict['macAddr'].lower() | ||
3280 | 87 | if mac_address not in interfaces_by_mac: | ||
3281 | 88 | LOG.debug('Interface with MAC %s not found; skipping', mac_address) | ||
3282 | 89 | continue | ||
3283 | 90 | name = interfaces_by_mac[mac_address] | ||
3284 | 91 | subnet = { | ||
3285 | 92 | 'type': 'static', | ||
3286 | 93 | 'address': vnic_dict['privateIp'], | ||
3287 | 94 | 'netmask': vnic_dict['subnetCidrBlock'].split('/')[1], | ||
3288 | 95 | 'gateway': vnic_dict['virtualRouterIp'], | ||
3289 | 96 | 'control': 'manual', | ||
3290 | 97 | } | ||
3291 | 98 | network_config['config'].append({ | ||
3292 | 99 | 'name': name, | ||
3293 | 100 | 'type': 'physical', | ||
3294 | 101 | 'mac_address': mac_address, | ||
3295 | 102 | 'mtu': MTU, | ||
3296 | 103 | 'subnets': [subnet], | ||
3297 | 104 | }) | ||
3298 | 33 | 105 | ||
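The per-VNIC entry built in `_add_network_config_from_opc_imds` above can be exercised stand-alone. The sample vnic dict below is illustrative, not real IMDS output; note that, as written in the hunk, `netmask` receives the prefix-length string taken from `subnetCidrBlock` rather than a dotted netmask:

```python
# Rebuild of the secondary-VNIC config entry from the hunk above.
# The vnic values are illustrative sample data, and the interface
# name would normally come from get_interfaces_by_mac().
MTU = 9000  # per OCI docs cited in the diff

vnic = {
    'macAddr': '02:00:17:AA:BB:CC',
    'privateIp': '10.0.0.4',
    'subnetCidrBlock': '10.0.0.0/24',
    'virtualRouterIp': '10.0.0.1',
}

subnet = {
    'type': 'static',
    'address': vnic['privateIp'],
    # prefix-length string ('24'), split straight off the CIDR
    'netmask': vnic['subnetCidrBlock'].split('/')[1],
    'gateway': vnic['virtualRouterIp'],
    'control': 'manual',
}

entry = {
    'name': 'ens3',  # illustrative; looked up by MAC in the real code
    'type': 'physical',
    'mac_address': vnic['macAddr'].lower(),
    'mtu': MTU,
    'subnets': [subnet],
}
```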
3299 | 34 | 106 | ||
3300 | 35 | class DataSourceOracle(sources.DataSource): | 107 | class DataSourceOracle(sources.DataSource): |
3301 | @@ -37,8 +109,22 @@ class DataSourceOracle(sources.DataSource): | |||
3302 | 37 | dsname = 'Oracle' | 109 | dsname = 'Oracle' |
3303 | 38 | system_uuid = None | 110 | system_uuid = None |
3304 | 39 | vendordata_pure = None | 111 | vendordata_pure = None |
3305 | 112 | network_config_sources = ( | ||
3306 | 113 | sources.NetworkConfigSource.cmdline, | ||
3307 | 114 | sources.NetworkConfigSource.ds, | ||
3308 | 115 | sources.NetworkConfigSource.initramfs, | ||
3309 | 116 | sources.NetworkConfigSource.system_cfg, | ||
3310 | 117 | ) | ||
3311 | 118 | |||
3312 | 40 | _network_config = sources.UNSET | 119 | _network_config = sources.UNSET |
3313 | 41 | 120 | ||
3314 | 121 | def __init__(self, sys_cfg, *args, **kwargs): | ||
3315 | 122 | super(DataSourceOracle, self).__init__(sys_cfg, *args, **kwargs) | ||
3316 | 123 | |||
3317 | 124 | self.ds_cfg = util.mergemanydict([ | ||
3318 | 125 | util.get_cfg_by_path(sys_cfg, ['datasource', self.dsname], {}), | ||
3319 | 126 | BUILTIN_DS_CONFIG]) | ||
3320 | 127 | |||
3321 | 42 | def _is_platform_viable(self): | 128 | def _is_platform_viable(self): |
3322 | 43 | """Check platform environment to report if this datasource may run.""" | 129 | """Check platform environment to report if this datasource may run.""" |
3323 | 44 | return _is_platform_viable() | 130 | return _is_platform_viable() |
3324 | @@ -48,7 +134,7 @@ class DataSourceOracle(sources.DataSource): | |||
3325 | 48 | return False | 134 | return False |
3326 | 49 | 135 | ||
3327 | 50 | # network may be configured if iscsi root. If that is the case | 136 | # network may be configured if iscsi root. If that is the case |
3329 | 51 | # then read_kernel_cmdline_config will return non-None. | 137 | # then read_initramfs_config will return non-None. |
3330 | 52 | if _is_iscsi_root(): | 138 | if _is_iscsi_root(): |
3331 | 53 | data = self.crawl_metadata() | 139 | data = self.crawl_metadata() |
3332 | 54 | else: | 140 | else: |
3333 | @@ -118,11 +204,17 @@ class DataSourceOracle(sources.DataSource): | |||
3334 | 118 | We nonetheless return cmdline provided config if present | 204 | We nonetheless return cmdline provided config if present |
3336 | 119 | and fall back to generated fallback config.""" | 205 | and fall back to generated fallback config.""" |
3336 | 120 | if self._network_config == sources.UNSET: | 206 | if self._network_config == sources.UNSET: |
3341 | 121 | cmdline_cfg = cmdline.read_kernel_cmdline_config() | 207 | self._network_config = cmdline.read_initramfs_config() |
3342 | 122 | if cmdline_cfg: | 208 | if not self._network_config: |
3339 | 123 | self._network_config = cmdline_cfg | ||
3340 | 124 | else: | ||
3343 | 125 | self._network_config = self.distro.generate_fallback_config() | 209 | self._network_config = self.distro.generate_fallback_config() |
3344 | 210 | if self.ds_cfg.get('configure_secondary_nics'): | ||
3345 | 211 | try: | ||
3346 | 212 | # Mutate self._network_config to include secondary VNICs | ||
3347 | 213 | _add_network_config_from_opc_imds(self._network_config) | ||
3348 | 214 | except Exception: | ||
3349 | 215 | util.logexc( | ||
3350 | 216 | LOG, | ||
3351 | 217 | "Failed to fetch secondary network configuration!") | ||
3352 | 126 | return self._network_config | 218 | return self._network_config |
3353 | 127 | 219 | ||
3354 | 128 | 220 | ||
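The patched `network_config` flow (initramfs config if present, else distro fallback, then best-effort secondary NICs) can be reproduced in isolation. The callables below are stand-ins for the real cloud-init helpers, not their actual signatures:

```python
UNSET = object()  # stand-in for sources.UNSET

def network_config(state, read_initramfs, fallback, add_secondary,
                   want_secondary):
    # Compute once and cache in state['cfg'] (cf. self._network_config).
    if state['cfg'] is UNSET:
        state['cfg'] = read_initramfs()
        if not state['cfg']:
            state['cfg'] = fallback()
        if want_secondary:
            try:
                add_secondary(state['cfg'])  # mutates the config in place
            except Exception:
                pass  # the real code logs the failure via util.logexc

    return state['cfg']

state = {'cfg': UNSET}
cfg = network_config(state, lambda: None,
                     lambda: {'version': 1, 'config': []},
                     lambda c: c['config'].append({'type': 'physical'}),
                     want_secondary=True)
print(len(cfg['config']))  # → 1
```

A second call returns the cached config without re-reading any source, which is why a failed secondary-NIC fetch is only attempted once per boot.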
3355 | @@ -137,7 +229,7 @@ def _is_platform_viable(): | |||
3356 | 137 | 229 | ||
3357 | 138 | 230 | ||
3358 | 139 | def _is_iscsi_root(): | 231 | def _is_iscsi_root(): |
3360 | 140 | return bool(cmdline.read_kernel_cmdline_config()) | 232 | return bool(cmdline.read_initramfs_config()) |
3361 | 141 | 233 | ||
3362 | 142 | 234 | ||
3363 | 143 | def _load_index(content): | 235 | def _load_index(content): |
3364 | diff --git a/cloudinit/sources/__init__.py b/cloudinit/sources/__init__.py | |||
3365 | index e6966b3..a319322 100644 | |||
3366 | --- a/cloudinit/sources/__init__.py | |||
3367 | +++ b/cloudinit/sources/__init__.py | |||
3368 | @@ -66,6 +66,13 @@ CLOUD_ID_REGION_PREFIX_MAP = { | |||
3369 | 66 | 'china': ('azure-china', lambda c: c == 'azure'), # only change azure | 66 | 'china': ('azure-china', lambda c: c == 'azure'), # only change azure |
3370 | 67 | } | 67 | } |
3371 | 68 | 68 | ||
3372 | 69 | # NetworkConfigSource represents the canonical list of network config sources | ||
3373 | 70 | # that cloud-init knows about. (Python 2.7 lacks PEP 435, so use a singleton | ||
3374 | 71 | # namedtuple as an enum; see https://stackoverflow.com/a/6971002) | ||
3375 | 72 | _NETCFG_SOURCE_NAMES = ('cmdline', 'ds', 'system_cfg', 'fallback', 'initramfs') | ||
3376 | 73 | NetworkConfigSource = namedtuple('NetworkConfigSource', | ||
3377 | 74 | _NETCFG_SOURCE_NAMES)(*_NETCFG_SOURCE_NAMES) | ||
3378 | 75 | |||
3379 | 69 | 76 | ||
3380 | 70 | class DataSourceNotFoundException(Exception): | 77 | class DataSourceNotFoundException(Exception): |
3381 | 71 | pass | 78 | pass |
3382 | @@ -153,6 +160,16 @@ class DataSource(object): | |||
3383 | 153 | # Track the discovered fallback nic for use in configuration generation. | 160 | # Track the discovered fallback nic for use in configuration generation. |
3384 | 154 | _fallback_interface = None | 161 | _fallback_interface = None |
3385 | 155 | 162 | ||
3386 | 163 | # The network configuration sources that should be considered for this data | ||
3387 | 164 | # source. (The first source in this list that provides network | ||
3388 | 165 | # configuration will be used without considering any that follow.) This | ||
3389 | 166 | # should always be a subset of the members of NetworkConfigSource with no | ||
3390 | 167 | # duplicate entries. | ||
3391 | 168 | network_config_sources = (NetworkConfigSource.cmdline, | ||
3392 | 169 | NetworkConfigSource.initramfs, | ||
3393 | 170 | NetworkConfigSource.system_cfg, | ||
3394 | 171 | NetworkConfigSource.ds) | ||
3395 | 172 | |||
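The first-match precedence this attribute encodes can be sketched as follows; `find_network_config` and the `providers` mapping are hypothetical illustrations, not cloud-init functions:

```python
def find_network_config(order, providers):
    # Walk sources in priority order; the first one yielding config wins
    # and anything later in the tuple is never consulted.
    for src in order:
        cfg = providers.get(src, lambda: None)()
        if cfg:
            return src, cfg
    return None, None

order = ('cmdline', 'initramfs', 'system_cfg', 'ds')
providers = {'system_cfg': lambda: {'version': 1, 'config': []},
             'ds': lambda: {'version': 2, 'config': []}}
src, _cfg = find_network_config(order, providers)
print(src)  # → system_cfg
```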
3396 | 156 | # read_url_params | 173 | # read_url_params |
3397 | 157 | url_max_wait = -1 # max_wait < 0 means do not wait | 174 | url_max_wait = -1 # max_wait < 0 means do not wait |
3398 | 158 | url_timeout = 10 # timeout for each metadata url read attempt | 175 | url_timeout = 10 # timeout for each metadata url read attempt |
3399 | @@ -474,6 +491,16 @@ class DataSource(object): | |||
3400 | 474 | def get_public_ssh_keys(self): | 491 | def get_public_ssh_keys(self): |
3401 | 475 | return normalize_pubkey_data(self.metadata.get('public-keys')) | 492 | return normalize_pubkey_data(self.metadata.get('public-keys')) |
3402 | 476 | 493 | ||
3403 | 494 | def publish_host_keys(self, hostkeys): | ||
3404 | 495 | """Publish the public SSH host keys (found in /etc/ssh/*.pub). | ||
3405 | 496 | |||
3406 | 497 | @param hostkeys: List of host key tuples (key_type, key_value), | ||
3407 | 498 | where key_type is the first field in the public key file | ||
3408 | 499 | (e.g. 'ssh-rsa') and key_value is the key itself | ||
3409 | 500 | (e.g. 'AAAAB3NzaC1y...'). | ||
3410 | 501 | """ | ||
3411 | 502 | pass | ||
3412 | 503 | |||
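A datasource overriding `publish_host_keys` receives tuples in the shape the docstring describes. The parsing helper below is illustrative of how `/etc/ssh/*.pub` lines map onto those tuples, and is not part of cloud-init:

```python
def parse_hostkey(pub_line):
    # An /etc/ssh/*.pub line looks like: 'ssh-ed25519 AAAAC3Nz... root@host'
    fields = pub_line.strip().split()
    return (fields[0], fields[1])  # (key_type, key_value); comment dropped

print(parse_hostkey('ssh-ed25519 AAAAC3NzaC1lZDI1NTE5 root@host'))
# → ('ssh-ed25519', 'AAAAC3NzaC1lZDI1NTE5')
```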
3413 | 477 | def _remap_device(self, short_name): | 504 | def _remap_device(self, short_name): |
3414 | 478 | # LP: #611137 | 505 | # LP: #611137 |
3415 | 479 | # the metadata service may believe that devices are named 'sda' | 506 | # the metadata service may believe that devices are named 'sda' |
3416 | diff --git a/cloudinit/sources/helpers/azure.py b/cloudinit/sources/helpers/azure.py | |||
3417 | index 82c4c8c..f1fba17 100755 | |||
3418 | --- a/cloudinit/sources/helpers/azure.py | |||
3419 | +++ b/cloudinit/sources/helpers/azure.py | |||
3420 | @@ -16,7 +16,11 @@ from xml.etree import ElementTree | |||
3421 | 16 | 16 | ||
3422 | 17 | from cloudinit import url_helper | 17 | from cloudinit import url_helper |
3423 | 18 | from cloudinit import util | 18 | from cloudinit import util |
3424 | 19 | from cloudinit import version | ||
3425 | 20 | from cloudinit import distros | ||
3426 | 19 | from cloudinit.reporting import events | 21 | from cloudinit.reporting import events |
3427 | 22 | from cloudinit.net.dhcp import EphemeralDHCPv4 | ||
3428 | 23 | from datetime import datetime | ||
3429 | 20 | 24 | ||
3430 | 21 | LOG = logging.getLogger(__name__) | 25 | LOG = logging.getLogger(__name__) |
3431 | 22 | 26 | ||
3432 | @@ -24,6 +28,10 @@ LOG = logging.getLogger(__name__) | |||
3433 | 24 | # value is applied if the endpoint can't be found within a lease file | 28 | # value is applied if the endpoint can't be found within a lease file |
3434 | 25 | DEFAULT_WIRESERVER_ENDPOINT = "a8:3f:81:10" | 29 | DEFAULT_WIRESERVER_ENDPOINT = "a8:3f:81:10" |
3435 | 26 | 30 | ||
3436 | 31 | BOOT_EVENT_TYPE = 'boot-telemetry' | ||
3437 | 32 | SYSTEMINFO_EVENT_TYPE = 'system-info' | ||
3438 | 33 | DIAGNOSTIC_EVENT_TYPE = 'diagnostic' | ||
3439 | 34 | |||
3440 | 27 | azure_ds_reporter = events.ReportEventStack( | 35 | azure_ds_reporter = events.ReportEventStack( |
3441 | 28 | name="azure-ds", | 36 | name="azure-ds", |
3442 | 29 | description="initialize reporter for azure ds", | 37 | description="initialize reporter for azure ds", |
3443 | @@ -40,6 +48,105 @@ def azure_ds_telemetry_reporter(func): | |||
3444 | 40 | return impl | 48 | return impl |
3445 | 41 | 49 | ||
3446 | 42 | 50 | ||
3447 | 51 | @azure_ds_telemetry_reporter | ||
3448 | 52 | def get_boot_telemetry(): | ||
3449 | 53 | """Report timestamps related to kernel initialization and systemd | ||
3450 | 54 | activation of cloud-init""" | ||
3451 | 55 | if not distros.uses_systemd(): | ||
3452 | 56 | raise RuntimeError( | ||
3453 | 57 | "distro not using systemd, skipping boot telemetry") | ||
3454 | 58 | |||
3455 | 59 | LOG.debug("Collecting boot telemetry") | ||
3456 | 60 | try: | ||
3457 | 61 | kernel_start = float(time.time()) - float(util.uptime()) | ||
3458 | 62 | except ValueError: | ||
3459 | 63 | raise RuntimeError("Failed to determine kernel start timestamp") | ||
3460 | 64 | |||
3461 | 65 | try: | ||
3462 | 66 | out, _ = util.subp(['/bin/systemctl', | ||
3463 | 67 | 'show', '-p', | ||
3464 | 68 | 'UserspaceTimestampMonotonic'], | ||
3465 | 69 | capture=True) | ||
3466 | 70 | tsm = None | ||
3467 | 71 | if out and '=' in out: | ||
3468 | 72 | tsm = out.split("=")[1] | ||
3469 | 73 | |||
3470 | 74 | if not tsm: | ||
3471 | 75 | raise RuntimeError("Failed to parse " | ||
3472 | 76 | "UserspaceTimestampMonotonic from systemd") | ||
3473 | 77 | |||
3474 | 78 | user_start = kernel_start + (float(tsm) / 1000000) | ||
3475 | 79 | except util.ProcessExecutionError as e: | ||
3476 | 80 | raise RuntimeError("Failed to get UserspaceTimestampMonotonic: %s" | ||
3477 | 81 | % e) | ||
3478 | 82 | except ValueError as e: | ||
3479 | 83 | raise RuntimeError("Failed to parse " | ||
3480 | 84 | "UserspaceTimestampMonotonic from systemd: %s" | ||
3481 | 85 | % e) | ||
3482 | 86 | |||
3483 | 87 | try: | ||
3484 | 88 | out, _ = util.subp(['/bin/systemctl', 'show', | ||
3485 | 89 | 'cloud-init-local', '-p', | ||
3486 | 90 | 'InactiveExitTimestampMonotonic'], | ||
3487 | 91 | capture=True) | ||
3488 | 92 | tsm = None | ||
3489 | 93 | if out and '=' in out: | ||
3490 | 94 | tsm = out.split("=")[1] | ||
3491 | 95 | if not tsm: | ||
3492 | 96 | raise RuntimeError("Failed to parse " | ||
3493 | 97 | "InactiveExitTimestampMonotonic from systemd") | ||
3494 | 98 | |||
3495 | 99 | cloudinit_activation = kernel_start + (float(tsm) / 1000000) | ||
3496 | 100 | except util.ProcessExecutionError as e: | ||
3497 | 101 | raise RuntimeError("Failed to get InactiveExitTimestampMonotonic: %s" | ||
3498 | 102 | % e) | ||
3499 | 103 | except ValueError as e: | ||
3500 | 104 | raise RuntimeError("Failed to parse " | ||
3501 | 105 | "InactiveExitTimestampMonotonic from systemd: %s" | ||
3502 | 106 | % e) | ||
3503 | 107 | |||
3504 | 108 | evt = events.ReportingEvent( | ||
3505 | 109 | BOOT_EVENT_TYPE, 'boot-telemetry', | ||
3506 | 110 | "kernel_start=%s user_start=%s cloudinit_activation=%s" % | ||
3507 | 111 | (datetime.utcfromtimestamp(kernel_start).isoformat() + 'Z', | ||
3508 | 112 | datetime.utcfromtimestamp(user_start).isoformat() + 'Z', | ||
3509 | 113 | datetime.utcfromtimestamp(cloudinit_activation).isoformat() + 'Z'), | ||
3510 | 114 | events.DEFAULT_EVENT_ORIGIN) | ||
3511 | 115 | events.report_event(evt) | ||
3512 | 116 | |||
3513 | 117 | # return the event for unit testing purpose | ||
3514 | 118 | return evt | ||
3515 | 119 | |||
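The timestamp arithmetic above is worth spelling out: systemd's `*TimestampMonotonic` properties are microseconds since kernel start, so wall-clock instants are recovered by adding them to `kernel_start`. The uptime and counter values below are illustrative, not real readings:

```python
import time

uptime_s = 42.5                  # e.g. float(util.uptime())
kernel_start = time.time() - uptime_s

tsm_us = 3_500_000               # e.g. UserspaceTimestampMonotonic
user_start = kernel_start + tsm_us / 1_000_000

# Userspace began 3.5s after the kernel, regardless of wall-clock epoch.
print(round(user_start - kernel_start, 6))  # → 3.5
```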
3516 | 120 | |||
3517 | 121 | @azure_ds_telemetry_reporter | ||
3518 | 122 | def get_system_info(): | ||
3519 | 123 | """Collect and report system information""" | ||
3520 | 124 | info = util.system_info() | ||
3521 | 125 | evt = events.ReportingEvent( | ||
3522 | 126 | SYSTEMINFO_EVENT_TYPE, 'system information', | ||
3523 | 127 | "cloudinit_version=%s, kernel_version=%s, variant=%s, " | ||
3524 | 128 | "distro_name=%s, distro_version=%s, flavor=%s, " | ||
3525 | 129 | "python_version=%s" % | ||
3526 | 130 | (version.version_string(), info['release'], info['variant'], | ||
3527 | 131 | info['dist'][0], info['dist'][1], info['dist'][2], | ||
3528 | 132 | info['python']), events.DEFAULT_EVENT_ORIGIN) | ||
3529 | 133 | events.report_event(evt) | ||
3530 | 134 | |||
3531 | 135 | # return the event for unit testing purpose | ||
3532 | 136 | return evt | ||
3533 | 137 | |||
3534 | 138 | |||
3535 | 139 | def report_diagnostic_event(msg): | ||
3536 | 140 | """Report a diagnostic event""" | ||
3537 | 141 | evt = events.ReportingEvent( | ||
3538 | 142 | DIAGNOSTIC_EVENT_TYPE, 'diagnostic message', | ||
3539 | 143 | msg, events.DEFAULT_EVENT_ORIGIN) | ||
3540 | 144 | events.report_event(evt) | ||
3541 | 145 | |||
3542 | 146 | # return the event for unit testing purpose | ||
3543 | 147 | return evt | ||
3544 | 148 | |||
3545 | 149 | |||
3546 | 43 | @contextmanager | 150 | @contextmanager |
3547 | 44 | def cd(newdir): | 151 | def cd(newdir): |
3548 | 45 | prevdir = os.getcwd() | 152 | prevdir = os.getcwd() |
3549 | @@ -360,16 +467,19 @@ class WALinuxAgentShim(object): | |||
3550 | 360 | value = dhcp245 | 467 | value = dhcp245 |
3551 | 361 | LOG.debug("Using Azure Endpoint from dhcp options") | 468 | LOG.debug("Using Azure Endpoint from dhcp options") |
3552 | 362 | if value is None: | 469 | if value is None: |
3553 | 470 | report_diagnostic_event("No Azure endpoint from dhcp options") | ||
3554 | 363 | LOG.debug('Finding Azure endpoint from networkd...') | 471 | LOG.debug('Finding Azure endpoint from networkd...') |
3555 | 364 | value = WALinuxAgentShim._networkd_get_value_from_leases() | 472 | value = WALinuxAgentShim._networkd_get_value_from_leases() |
3556 | 365 | if value is None: | 473 | if value is None: |
3557 | 366 | # Option-245 stored in /run/cloud-init/dhclient.hooks/<ifc>.json | 474 | # Option-245 stored in /run/cloud-init/dhclient.hooks/<ifc>.json |
3558 | 367 | # a dhclient exit hook that calls cloud-init-dhclient-hook | 475 | # a dhclient exit hook that calls cloud-init-dhclient-hook |
3559 | 476 | report_diagnostic_event("No Azure endpoint from networkd") | ||
3560 | 368 | LOG.debug('Finding Azure endpoint from hook json...') | 477 | LOG.debug('Finding Azure endpoint from hook json...') |
3561 | 369 | dhcp_options = WALinuxAgentShim._load_dhclient_json() | 478 | dhcp_options = WALinuxAgentShim._load_dhclient_json() |
3562 | 370 | value = WALinuxAgentShim._get_value_from_dhcpoptions(dhcp_options) | 479 | value = WALinuxAgentShim._get_value_from_dhcpoptions(dhcp_options) |
3563 | 371 | if value is None: | 480 | if value is None: |
3564 | 372 | # Fallback and check the leases file if unsuccessful | 481 | # Fallback and check the leases file if unsuccessful |
3565 | 482 | report_diagnostic_event("No Azure endpoint from dhclient logs") | ||
3566 | 373 | LOG.debug("Unable to find endpoint in dhclient logs. " | 483 | LOG.debug("Unable to find endpoint in dhclient logs. " |
3567 | 374 | " Falling back to check lease files") | 484 | " Falling back to check lease files") |
3568 | 375 | if fallback_lease_file is None: | 485 | if fallback_lease_file is None: |
3569 | @@ -381,11 +491,15 @@ class WALinuxAgentShim(object): | |||
3570 | 381 | value = WALinuxAgentShim._get_value_from_leases_file( | 491 | value = WALinuxAgentShim._get_value_from_leases_file( |
3571 | 382 | fallback_lease_file) | 492 | fallback_lease_file) |
3572 | 383 | if value is None: | 493 | if value is None: |
3574 | 384 | LOG.warning("No lease found; using default endpoint") | 494 | msg = "No lease found; using default endpoint" |
3575 | 495 | report_diagnostic_event(msg) | ||
3576 | 496 | LOG.warning(msg) | ||
3577 | 385 | value = DEFAULT_WIRESERVER_ENDPOINT | 497 | value = DEFAULT_WIRESERVER_ENDPOINT |
3578 | 386 | 498 | ||
3579 | 387 | endpoint_ip_address = WALinuxAgentShim.get_ip_from_lease_value(value) | 499 | endpoint_ip_address = WALinuxAgentShim.get_ip_from_lease_value(value) |
3581 | 388 | LOG.debug('Azure endpoint found at %s', endpoint_ip_address) | 500 | msg = 'Azure endpoint found at %s' % endpoint_ip_address |
3582 | 501 | report_diagnostic_event(msg) | ||
3583 | 502 | LOG.debug(msg) | ||
3584 | 389 | return endpoint_ip_address | 503 | return endpoint_ip_address |
3585 | 390 | 504 | ||
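`DEFAULT_WIRESERVER_ENDPOINT` above is the wireserver address encoded as colon-separated hex octets. A minimal decode looks like this; the real `get_ip_from_lease_value` also handles other lease encodings:

```python
def hex_to_ip(lease_value):
    # 'a8:3f:81:10' -> '168.63.129.16' (the Azure wireserver address)
    return '.'.join(str(int(octet, 16)) for octet in lease_value.split(':'))

print(hex_to_ip('a8:3f:81:10'))  # → 168.63.129.16
```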
3586 | 391 | @azure_ds_telemetry_reporter | 505 | @azure_ds_telemetry_reporter |
3587 | @@ -399,16 +513,19 @@ class WALinuxAgentShim(object): | |||
3588 | 399 | try: | 513 | try: |
3589 | 400 | response = http_client.get( | 514 | response = http_client.get( |
3590 | 401 | 'http://{0}/machine/?comp=goalstate'.format(self.endpoint)) | 515 | 'http://{0}/machine/?comp=goalstate'.format(self.endpoint)) |
3592 | 402 | except Exception: | 516 | except Exception as e: |
3593 | 403 | if attempts < 10: | 517 | if attempts < 10: |
3594 | 404 | time.sleep(attempts + 1) | 518 | time.sleep(attempts + 1) |
3595 | 405 | else: | 519 | else: |
3596 | 520 | report_diagnostic_event( | ||
3597 | 521 | "failed to register with Azure: %s" % e) | ||
3598 | 406 | raise | 522 | raise |
3599 | 407 | else: | 523 | else: |
3600 | 408 | break | 524 | break |
3601 | 409 | attempts += 1 | 525 | attempts += 1 |
3602 | 410 | LOG.debug('Successfully fetched GoalState XML.') | 526 | LOG.debug('Successfully fetched GoalState XML.') |
3603 | 411 | goal_state = GoalState(response.contents, http_client) | 527 | goal_state = GoalState(response.contents, http_client) |
3604 | 528 | report_diagnostic_event("container_id %s" % goal_state.container_id) | ||
3605 | 412 | ssh_keys = [] | 529 | ssh_keys = [] |
3606 | 413 | if goal_state.certificates_xml is not None and pubkey_info is not None: | 530 | if goal_state.certificates_xml is not None and pubkey_info is not None: |
3607 | 414 | LOG.debug('Certificate XML found; parsing out public keys.') | 531 | LOG.debug('Certificate XML found; parsing out public keys.') |
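The retry loop around the goalstate GET follows a simple shape: linearly growing sleeps, give up after ten attempts. A generic sketch of that shape, where `fetch` and the injected `sleep` are stand-ins rather than the real HTTP client:

```python
import time

def fetch_with_retries(fetch, max_attempts=10, sleep=time.sleep):
    attempts = 0
    while True:
        try:
            return fetch()
        except Exception:
            if attempts >= max_attempts:
                raise  # the real code reports a diagnostic event first
            sleep(attempts + 1)  # wait a little longer each round
        attempts += 1

calls = []
def flaky():
    calls.append(1)
    if len(calls) < 3:
        raise ValueError('transient')
    return 'goalstate'

print(fetch_with_retries(flaky, sleep=lambda s: None))  # → goalstate
print(len(calls))                                       # → 3
```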
3608 | @@ -449,11 +566,20 @@ class WALinuxAgentShim(object): | |||
3609 | 449 | container_id=goal_state.container_id, | 566 | container_id=goal_state.container_id, |
3610 | 450 | instance_id=goal_state.instance_id, | 567 | instance_id=goal_state.instance_id, |
3611 | 451 | ) | 568 | ) |
3617 | 452 | http_client.post( | 569 | # Host will collect kvps when cloud-init reports ready. |
3618 | 453 | "http://{0}/machine?comp=health".format(self.endpoint), | 570 | # Some kvps might still be in the queue. We yield the scheduler |
3619 | 454 | data=document, | 571 | # to make sure we process all kvps up to this point. |
3620 | 455 | extra_headers={'Content-Type': 'text/xml; charset=utf-8'}, | 572 | time.sleep(0) |
3621 | 456 | ) | 573 | try: |
3622 | 574 | http_client.post( | ||
3623 | 575 | "http://{0}/machine?comp=health".format(self.endpoint), | ||
3624 | 576 | data=document, | ||
3625 | 577 | extra_headers={'Content-Type': 'text/xml; charset=utf-8'}, | ||
3626 | 578 | ) | ||
3627 | 579 | except Exception as e: | ||
3628 | 580 | report_diagnostic_event("exception while reporting ready: %s" % e) | ||
3629 | 581 | raise | ||
3630 | 582 | |||
3631 | 457 | LOG.info('Reported ready to Azure fabric.') | 583 | LOG.info('Reported ready to Azure fabric.') |
3632 | 458 | 584 | ||
3633 | 459 | 585 | ||
3634 | @@ -467,4 +593,22 @@ def get_metadata_from_fabric(fallback_lease_file=None, dhcp_opts=None, | |||
3635 | 467 | finally: | 593 | finally: |
3636 | 468 | shim.clean_up() | 594 | shim.clean_up() |
3637 | 469 | 595 | ||
3638 | 596 | |||
3639 | 597 | class EphemeralDHCPv4WithReporting(object): | ||
3640 | 598 | def __init__(self, reporter, nic=None): | ||
3641 | 599 | self.reporter = reporter | ||
3642 | 600 | self.ephemeralDHCPv4 = EphemeralDHCPv4(iface=nic) | ||
3643 | 601 | |||
3644 | 602 | def __enter__(self): | ||
3645 | 603 | with events.ReportEventStack( | ||
3646 | 604 | name="obtain-dhcp-lease", | ||
3647 | 605 | description="obtain dhcp lease", | ||
3648 | 606 | parent=self.reporter): | ||
3649 | 607 | return self.ephemeralDHCPv4.__enter__() | ||
3650 | 608 | |||
3651 | 609 | def __exit__(self, excp_type, excp_value, excp_traceback): | ||
3652 | 610 | self.ephemeralDHCPv4.__exit__( | ||
3653 | 611 | excp_type, excp_value, excp_traceback) | ||
3654 | 612 | |||
3655 | 613 | |||
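`EphemeralDHCPv4WithReporting` is an instance of the delegating context-manager pattern: wrap an inner context manager and add bookkeeping around enter/exit. A generic sketch of the pattern (the `Reporting` and `Lease` classes here are hypothetical; note this sketch forwards the inner `__exit__` return value, whereas the class above discards it):

```python
class Reporting(object):
    def __init__(self, inner, log):
        self.inner, self.log = inner, log

    def __enter__(self):
        self.log.append('enter')          # e.g. open a ReportEventStack
        return self.inner.__enter__()     # hand back the inner resource

    def __exit__(self, exc_type, exc_value, tb):
        self.log.append('exit')
        return self.inner.__exit__(exc_type, exc_value, tb)

class Lease(object):
    def __enter__(self):
        return {'ip': '10.0.0.2'}
    def __exit__(self, *args):
        return False

log = []
with Reporting(Lease(), log) as lease:
    print(lease['ip'])   # → 10.0.0.2
print(log)               # → ['enter', 'exit']
```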
3656 | 470 | # vi: ts=4 expandtab | 614 | # vi: ts=4 expandtab |
3657 | diff --git a/cloudinit/sources/helpers/vmware/imc/config_custom_script.py b/cloudinit/sources/helpers/vmware/imc/config_custom_script.py | |||
3658 | index a7d4ad9..9f14770 100644 | |||
3659 | --- a/cloudinit/sources/helpers/vmware/imc/config_custom_script.py | |||
3660 | +++ b/cloudinit/sources/helpers/vmware/imc/config_custom_script.py | |||
3661 | @@ -1,5 +1,5 @@ | |||
3662 | 1 | # Copyright (C) 2017 Canonical Ltd. | 1 | # Copyright (C) 2017 Canonical Ltd. |
3664 | 2 | # Copyright (C) 2017 VMware Inc. | 2 | # Copyright (C) 2017-2019 VMware Inc. |
3665 | 3 | # | 3 | # |
3666 | 4 | # Author: Maitreyee Saikia <msaikia@vmware.com> | 4 | # Author: Maitreyee Saikia <msaikia@vmware.com> |
3667 | 5 | # | 5 | # |
3668 | @@ -8,7 +8,6 @@ | |||
3669 | 8 | import logging | 8 | import logging |
3670 | 9 | import os | 9 | import os |
3671 | 10 | import stat | 10 | import stat |
3672 | 11 | from textwrap import dedent | ||
3673 | 12 | 11 | ||
3674 | 13 | from cloudinit import util | 12 | from cloudinit import util |
3675 | 14 | 13 | ||
3676 | @@ -20,12 +19,15 @@ class CustomScriptNotFound(Exception): | |||
3677 | 20 | 19 | ||
3678 | 21 | 20 | ||
3679 | 22 | class CustomScriptConstant(object): | 21 | class CustomScriptConstant(object): |
3686 | 23 | RC_LOCAL = "/etc/rc.local" | 22 | CUSTOM_TMP_DIR = "/root/.customization" |
3687 | 24 | POST_CUST_TMP_DIR = "/root/.customization" | 23 | |
3688 | 25 | POST_CUST_RUN_SCRIPT_NAME = "post-customize-guest.sh" | 24 | # The user defined custom script |
3689 | 26 | POST_CUST_RUN_SCRIPT = os.path.join(POST_CUST_TMP_DIR, | 25 | CUSTOM_SCRIPT_NAME = "customize.sh" |
3690 | 27 | POST_CUST_RUN_SCRIPT_NAME) | 26 | CUSTOM_SCRIPT = os.path.join(CUSTOM_TMP_DIR, |
3691 | 28 | POST_REBOOT_PENDING_MARKER = "/.guest-customization-post-reboot-pending" | 27 | CUSTOM_SCRIPT_NAME) |
3692 | 28 | POST_CUSTOM_PENDING_MARKER = "/.guest-customization-post-reboot-pending" | ||
3693 | 29 | # The cc_scripts_per_instance script to launch custom script | ||
3694 | 30 | POST_CUSTOM_SCRIPT_NAME = "post-customize-guest.sh" | ||
3695 | 29 | 31 | ||
3696 | 30 | 32 | ||
3697 | 31 | class RunCustomScript(object): | 33 | class RunCustomScript(object): |
3698 | @@ -39,10 +41,19 @@ class RunCustomScript(object): | |||
3699 | 39 | raise CustomScriptNotFound("Script %s not found!! " | 41 | raise CustomScriptNotFound("Script %s not found!! " |
3700 | 40 | "Cannot execute custom script!" | 42 | "Cannot execute custom script!" |
3701 | 41 | % self.scriptpath) | 43 | % self.scriptpath) |
3702 | 44 | |||
3703 | 45 | util.ensure_dir(CustomScriptConstant.CUSTOM_TMP_DIR) | ||
3704 | 46 | |||
3705 | 47 | LOG.debug("Copying custom script to %s", | ||
3706 | 48 | CustomScriptConstant.CUSTOM_SCRIPT) | ||
3707 | 49 | util.copy(self.scriptpath, CustomScriptConstant.CUSTOM_SCRIPT) | ||
3708 | 50 | |||
3709 | 42 | # Strip any CR characters from the decoded script | 51 | # Strip any CR characters from the decoded script |
3713 | 43 | util.load_file(self.scriptpath).replace("\r", "") | 52 | content = util.load_file( |
3714 | 44 | st = os.stat(self.scriptpath) | 53 | CustomScriptConstant.CUSTOM_SCRIPT).replace("\r", "") |
3715 | 45 | os.chmod(self.scriptpath, st.st_mode | stat.S_IEXEC) | 54 | util.write_file(CustomScriptConstant.CUSTOM_SCRIPT, |
3716 | 55 | content, | ||
3717 | 56 | mode=0o544) | ||
3718 | 46 | 57 | ||
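The new `prepare_script()` behavior (copy the uploaded script, strip DOS CR characters, re-write it with read/execute permissions) can be reproduced in isolation. Paths here are temp-dir stand-ins, and `prepare` is an illustrative helper, not the cloud-init method:

```python
import os
import tempfile

def prepare(src_path, dest_path):
    # Strip CRs and write the script back mode 0o544, mirroring the
    # util.load_file/util.write_file calls in the diff.
    with open(src_path) as f:
        content = f.read().replace('\r', '')
    with open(dest_path, 'w') as f:
        f.write(content)
    os.chmod(dest_path, 0o544)

workdir = tempfile.mkdtemp()
src = os.path.join(workdir, 'uploaded.sh')
dest = os.path.join(workdir, 'customize.sh')
with open(src, 'w') as f:
    f.write('#!/bin/sh\r\necho precustomization\r\n')
prepare(src, dest)
print('\r' in open(dest).read())           # → False
print(oct(os.stat(dest).st_mode & 0o777))  # → 0o544
```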
3719 | 47 | 58 | ||
3720 | 48 | class PreCustomScript(RunCustomScript): | 59 | class PreCustomScript(RunCustomScript): |
3721 | @@ -50,104 +61,34 @@ class PreCustomScript(RunCustomScript): | |||
3722 | 50 | """Executing custom script with precustomization argument.""" | 61 | """Executing custom script with precustomization argument.""" |
3723 | 51 | LOG.debug("Executing pre-customization script") | 62 | LOG.debug("Executing pre-customization script") |
3724 | 52 | self.prepare_script() | 63 | self.prepare_script() |
3726 | 53 | util.subp(["/bin/sh", self.scriptpath, "precustomization"]) | 64 | util.subp([CustomScriptConstant.CUSTOM_SCRIPT, "precustomization"]) |
3727 | 54 | 65 | ||
3728 | 55 | 66 | ||
3729 | 56 | class PostCustomScript(RunCustomScript): | 67 | class PostCustomScript(RunCustomScript): |
3731 | 57 | def __init__(self, scriptname, directory): | 68 | def __init__(self, scriptname, directory, ccScriptsDir): |
3732 | 58 | super(PostCustomScript, self).__init__(scriptname, directory) | 69 | super(PostCustomScript, self).__init__(scriptname, directory) |
3796 | 59 | # Determine when to run custom script. When postreboot is True, | 70 | self.ccScriptsDir = ccScriptsDir |
3797 | 60 | # the user uploaded script will run as part of rc.local after | 71 | self.ccScriptPath = os.path.join( |
3798 | 61 | # the machine reboots. This is determined by presence of rclocal. | 72 | ccScriptsDir, |
3799 | 62 | # When postreboot is False, script will run as part of cloud-init. | 73 | CustomScriptConstant.POST_CUSTOM_SCRIPT_NAME) |
3737 | 63 | self.postreboot = False | ||
3738 | 64 | |||
3739 | 65 | def _install_post_reboot_agent(self, rclocal): | ||
3740 | 66 | """ | ||
3741 | 67 | Install post-reboot agent for running custom script after reboot. | ||
3742 | 68 | As part of this process, we are editing the rclocal file to run a | ||
3743 | 69 | VMware script, which in turn is responsible for handling the user | ||
3744 | 70 | script. | ||
3745 | 71 | @param: path to rc local. | ||
3746 | 72 | """ | ||
3747 | 73 | LOG.debug("Installing post-reboot customization from %s to %s", | ||
3748 | 74 | self.directory, rclocal) | ||
3749 | 75 | if not self.has_previous_agent(rclocal): | ||
3750 | 76 | LOG.info("Adding post-reboot customization agent to rc.local") | ||
3751 | 77 | new_content = dedent(""" | ||
3752 | 78 | # Run post-reboot guest customization | ||
3753 | 79 | /bin/sh %s | ||
3754 | 80 | exit 0 | ||
3755 | 81 | """) % CustomScriptConstant.POST_CUST_RUN_SCRIPT | ||
3756 | 82 | existing_rclocal = util.load_file(rclocal).replace('exit 0\n', '') | ||
3757 | 83 | st = os.stat(rclocal) | ||
3758 | 84 | # "x" flag should be set | ||
3759 | 85 | mode = st.st_mode | stat.S_IEXEC | ||
3760 | 86 | util.write_file(rclocal, existing_rclocal + new_content, mode) | ||
3761 | 87 | |||
3762 | 88 | else: | ||
3763 | 89 | # We don't need to update the rclocal file every time a customization | ||
3764 | 90 | # is requested. It just needs to be done for the first time. | ||
3765 | 91 | LOG.info("Post-reboot guest customization agent is already " | ||
3766 | 92 | "registered in rc.local") | ||
3767 | 93 | LOG.debug("Installing post-reboot customization agent finished: %s", | ||
3768 | 94 | self.postreboot) | ||
3769 | 95 | |||
3770 | 96 | def has_previous_agent(self, rclocal): | ||
3771 | 97 | searchstring = "# Run post-reboot guest customization" | ||
3772 | 98 | if searchstring in open(rclocal).read(): | ||
3773 | 99 | return True | ||
3774 | 100 | return False | ||
3775 | 101 | |||
3776 | 102 | def find_rc_local(self): | ||
3777 | 103 | """ | ||
3778 | 104 | Determine if rc local is present. | ||
3779 | 105 | """ | ||
3780 | 106 | rclocal = "" | ||
3781 | 107 | if os.path.exists(CustomScriptConstant.RC_LOCAL): | ||
3782 | 108 | LOG.debug("rc.local detected.") | ||
3783 | 109 | # resolving in case of symlink | ||
3784 | 110 | rclocal = os.path.realpath(CustomScriptConstant.RC_LOCAL) | ||
3785 | 111 | LOG.debug("rc.local resolved to %s", rclocal) | ||
3786 | 112 | else: | ||
3787 | 113 | LOG.warning("Can't find rc.local, post-customization " | ||
3788 | 114 | "will be run before reboot") | ||
3789 | 115 | return rclocal | ||
3790 | 116 | |||
3791 | 117 | def install_agent(self): | ||
3792 | 118 | rclocal = self.find_rc_local() | ||
3793 | 119 | if rclocal: | ||
3794 | 120 | self._install_post_reboot_agent(rclocal) | ||
3795 | 121 | self.postreboot = True | ||
3800 | 122 | 74 | ||
3801 | 123 | def execute(self): | 75 | def execute(self): |
3802 | 124 | """ | 76 | """ |
3805 | 125 | This method executes post-customization script before or after reboot | 77 | This method copies the post-customize run script to the |
3806 | 126 | based on the presence of rc local. | 78 | cc_scripts_per_instance directory so that the module |
3807 | 79 | runs the post-customization script. | ||
3808 | 127 | """ | 80 | """ |
3809 | 128 | self.prepare_script() | 81 | self.prepare_script() |
3833 | 129 | self.install_agent() | 82 | |
3834 | 130 | if not self.postreboot: | 83 | LOG.debug("Copying post customize run script to %s", |
3835 | 131 | LOG.warning("Executing post-customization script inline") | 84 | self.ccScriptPath) |
3836 | 132 | util.subp(["/bin/sh", self.scriptpath, "postcustomization"]) | 85 | util.copy( |
3837 | 133 | else: | 86 | os.path.join(self.directory, |
3838 | 134 | LOG.debug("Scheduling custom script to run post reboot") | 87 | CustomScriptConstant.POST_CUSTOM_SCRIPT_NAME), |
3839 | 135 | if not os.path.isdir(CustomScriptConstant.POST_CUST_TMP_DIR): | 88 | self.ccScriptPath) |
3840 | 136 | os.mkdir(CustomScriptConstant.POST_CUST_TMP_DIR) | 89 | st = os.stat(self.ccScriptPath) |
3841 | 137 | # Script "post-customize-guest.sh" and user uploaded script are | 90 | os.chmod(self.ccScriptPath, st.st_mode | stat.S_IEXEC) |
3842 | 138 | # present in the same directory and need to be copied to a temp | 91 | LOG.info("Creating post customization pending marker") |
3843 | 139 | # directory to be executed post reboot. User uploaded script is | 92 | util.ensure_file(CustomScriptConstant.POST_CUSTOM_PENDING_MARKER) |
3821 | 140 | # saved as customize.sh in the temp directory. | ||
3822 | 141 | # post-customize-guest.sh executes customize.sh after reboot. | ||
3823 | 142 | LOG.debug("Copying post-customization script") | ||
3824 | 143 | util.copy(self.scriptpath, | ||
3825 | 144 | CustomScriptConstant.POST_CUST_TMP_DIR + "/customize.sh") | ||
3826 | 145 | LOG.debug("Copying script to run post-customization script") | ||
3827 | 146 | util.copy( | ||
3828 | 147 | os.path.join(self.directory, | ||
3829 | 148 | CustomScriptConstant.POST_CUST_RUN_SCRIPT_NAME), | ||
3830 | 149 | CustomScriptConstant.POST_CUST_RUN_SCRIPT) | ||
3831 | 150 | LOG.info("Creating post-reboot pending marker") | ||
3832 | 151 | util.ensure_file(CustomScriptConstant.POST_REBOOT_PENDING_MARKER) | ||
3844 | 152 | 93 | ||
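Installing the run script uses the classic "add the execute bit, keep every other bit" idiom (`st.st_mode | stat.S_IEXEC`). A small self-contained example of that idiom, using a temp path rather than the real scripts directory:

```python
import os
import stat
import tempfile

path = os.path.join(tempfile.mkdtemp(), 'post-customize-guest.sh')
with open(path, 'w') as f:
    f.write('#!/bin/sh\n')

st = os.stat(path)
os.chmod(path, st.st_mode | stat.S_IEXEC)  # set u+x, preserve other bits

print(bool(os.stat(path).st_mode & stat.S_IEXEC))  # → True
```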
3845 | 153 | # vi: ts=4 expandtab | 94 | # vi: ts=4 expandtab |
3846 | diff --git a/cloudinit/sources/tests/test_oracle.py b/cloudinit/sources/tests/test_oracle.py | |||
3847 | index 97d6294..3ddf7df 100644 | |||
3848 | --- a/cloudinit/sources/tests/test_oracle.py | |||
3849 | +++ b/cloudinit/sources/tests/test_oracle.py | |||
3850 | @@ -1,7 +1,7 @@ | |||
3851 | 1 | # This file is part of cloud-init. See LICENSE file for license information. | 1 | # This file is part of cloud-init. See LICENSE file for license information. |
3852 | 2 | 2 | ||
3853 | 3 | from cloudinit.sources import DataSourceOracle as oracle | 3 | from cloudinit.sources import DataSourceOracle as oracle |
3855 | 4 | from cloudinit.sources import BrokenMetadata | 4 | from cloudinit.sources import BrokenMetadata, NetworkConfigSource |
3856 | 5 | from cloudinit import helpers | 5 | from cloudinit import helpers |
3857 | 6 | 6 | ||
3858 | 7 | from cloudinit.tests import helpers as test_helpers | 7 | from cloudinit.tests import helpers as test_helpers |
@@ -18,10 +18,52 @@ import uuid
 DS_PATH = "cloudinit.sources.DataSourceOracle"
 MD_VER = "2013-10-17"
 
+# `curl -L http://169.254.169.254/opc/v1/vnics/` on an Oracle Bare Metal
+# Machine with a secondary VNIC attached (vnicId truncated for Python line
+# length)
+OPC_BM_SECONDARY_VNIC_RESPONSE = """\
+[ {
+  "vnicId" : "ocid1.vnic.oc1.phx.abyhqljtyvcucqkhdqmgjszebxe4hrb!!TRUNCATED||",
+  "privateIp" : "10.0.0.8",
+  "vlanTag" : 0,
+  "macAddr" : "90:e2:ba:d4:f1:68",
+  "virtualRouterIp" : "10.0.0.1",
+  "subnetCidrBlock" : "10.0.0.0/24",
+  "nicIndex" : 0
+}, {
+  "vnicId" : "ocid1.vnic.oc1.phx.abyhqljtfmkxjdy2sqidndiwrsg63zf!!TRUNCATED||",
+  "privateIp" : "10.0.4.5",
+  "vlanTag" : 1,
+  "macAddr" : "02:00:17:05:CF:51",
+  "virtualRouterIp" : "10.0.4.1",
+  "subnetCidrBlock" : "10.0.4.0/24",
+  "nicIndex" : 0
+} ]"""
+
+# `curl -L http://169.254.169.254/opc/v1/vnics/` on an Oracle Virtual Machine
+# with a secondary VNIC attached
+OPC_VM_SECONDARY_VNIC_RESPONSE = """\
+[ {
+  "vnicId" : "ocid1.vnic.oc1.phx.abyhqljtch72z5pd76cc2636qeqh7z_truncated",
+  "privateIp" : "10.0.0.230",
+  "vlanTag" : 1039,
+  "macAddr" : "02:00:17:05:D1:DB",
+  "virtualRouterIp" : "10.0.0.1",
+  "subnetCidrBlock" : "10.0.0.0/24"
+}, {
+  "vnicId" : "ocid1.vnic.oc1.phx.abyhqljt4iew3gwmvrwrhhf3bp5drj_truncated",
+  "privateIp" : "10.0.0.231",
+  "vlanTag" : 1041,
+  "macAddr" : "00:00:17:02:2B:B1",
+  "virtualRouterIp" : "10.0.0.1",
+  "subnetCidrBlock" : "10.0.0.0/24"
+} ]"""
+
 
 class TestDataSourceOracle(test_helpers.CiTestCase):
     """Test datasource DataSourceOracle."""
 
+    with_logs = True
+
     ds_class = oracle.DataSourceOracle
 
     my_uuid = str(uuid.uuid4())
@@ -79,6 +121,16 @@ class TestDataSourceOracle(test_helpers.CiTestCase):
         self.assertEqual(
             'metadata (http://169.254.169.254/openstack/)', ds.subplatform)
 
+    def test_sys_cfg_can_enable_configure_secondary_nics(self):
+        # Confirm that behaviour is toggled by sys_cfg
+        ds, _mocks = self._get_ds()
+        self.assertFalse(ds.ds_cfg['configure_secondary_nics'])
+
+        sys_cfg = {
+            'datasource': {'Oracle': {'configure_secondary_nics': True}}}
+        ds, _mocks = self._get_ds(sys_cfg=sys_cfg)
+        self.assertTrue(ds.ds_cfg['configure_secondary_nics'])
+
     @mock.patch(DS_PATH + "._is_iscsi_root", return_value=True)
     def test_without_userdata(self, m_is_iscsi_root):
         """If no user-data is provided, it should not be in return dict."""
@@ -133,9 +185,12 @@ class TestDataSourceOracle(test_helpers.CiTestCase):
         self.assertEqual(self.my_md['uuid'], ds.get_instance_id())
         self.assertEqual(my_userdata, ds.userdata_raw)
 
-    @mock.patch(DS_PATH + ".cmdline.read_kernel_cmdline_config")
+    @mock.patch(DS_PATH + "._add_network_config_from_opc_imds",
+                side_effect=lambda network_config: network_config)
+    @mock.patch(DS_PATH + ".cmdline.read_initramfs_config")
     @mock.patch(DS_PATH + "._is_iscsi_root", return_value=True)
-    def test_network_cmdline(self, m_is_iscsi_root, m_cmdline_config):
+    def test_network_cmdline(self, m_is_iscsi_root, m_initramfs_config,
+                             _m_add_network_config_from_opc_imds):
         """network_config should read kernel cmdline."""
         distro = mock.MagicMock()
         ds, _ = self._get_ds(distro=distro, patches={
@@ -145,15 +200,18 @@ class TestDataSourceOracle(test_helpers.CiTestCase):
                     MD_VER: {'system_uuid': self.my_uuid,
                              'meta_data': self.my_md}}}})
         ncfg = {'version': 1, 'config': [{'a': 'b'}]}
-        m_cmdline_config.return_value = ncfg
+        m_initramfs_config.return_value = ncfg
         self.assertTrue(ds._get_data())
         self.assertEqual(ncfg, ds.network_config)
-        m_cmdline_config.assert_called_once_with()
+        self.assertEqual([mock.call()], m_initramfs_config.call_args_list)
         self.assertFalse(distro.generate_fallback_config.called)
 
-    @mock.patch(DS_PATH + ".cmdline.read_kernel_cmdline_config")
+    @mock.patch(DS_PATH + "._add_network_config_from_opc_imds",
+                side_effect=lambda network_config: network_config)
+    @mock.patch(DS_PATH + ".cmdline.read_initramfs_config")
     @mock.patch(DS_PATH + "._is_iscsi_root", return_value=True)
-    def test_network_fallback(self, m_is_iscsi_root, m_cmdline_config):
+    def test_network_fallback(self, m_is_iscsi_root, m_initramfs_config,
+                              _m_add_network_config_from_opc_imds):
         """test that fallback network is generated if no kernel cmdline."""
         distro = mock.MagicMock()
         ds, _ = self._get_ds(distro=distro, patches={
@@ -163,18 +221,95 @@ class TestDataSourceOracle(test_helpers.CiTestCase):
                     MD_VER: {'system_uuid': self.my_uuid,
                              'meta_data': self.my_md}}}})
         ncfg = {'version': 1, 'config': [{'a': 'b'}]}
-        m_cmdline_config.return_value = None
+        m_initramfs_config.return_value = None
         self.assertTrue(ds._get_data())
         ncfg = {'version': 1, 'config': [{'distro1': 'value'}]}
         distro.generate_fallback_config.return_value = ncfg
         self.assertEqual(ncfg, ds.network_config)
-        m_cmdline_config.assert_called_once_with()
+        self.assertEqual([mock.call()], m_initramfs_config.call_args_list)
         distro.generate_fallback_config.assert_called_once_with()
-        self.assertEqual(1, m_cmdline_config.call_count)
 
         # test that the result got cached, and the methods not re-called.
         self.assertEqual(ncfg, ds.network_config)
-        self.assertEqual(1, m_cmdline_config.call_count)
+        self.assertEqual(1, m_initramfs_config.call_count)
+
+    @mock.patch(DS_PATH + "._add_network_config_from_opc_imds")
+    @mock.patch(DS_PATH + ".cmdline.read_initramfs_config",
+                return_value={'some': 'config'})
+    @mock.patch(DS_PATH + "._is_iscsi_root", return_value=True)
+    def test_secondary_nics_added_to_network_config_if_enabled(
+            self, _m_is_iscsi_root, _m_initramfs_config,
+            m_add_network_config_from_opc_imds):
+
+        needle = object()
+
+        def network_config_side_effect(network_config):
+            network_config['secondary_added'] = needle
+
+        m_add_network_config_from_opc_imds.side_effect = (
+            network_config_side_effect)
+
+        distro = mock.MagicMock()
+        ds, _ = self._get_ds(distro=distro, patches={
+            '_is_platform_viable': {'return_value': True},
+            'crawl_metadata': {
+                'return_value': {
+                    MD_VER: {'system_uuid': self.my_uuid,
+                             'meta_data': self.my_md}}}})
+        ds.ds_cfg['configure_secondary_nics'] = True
+        self.assertEqual(needle, ds.network_config['secondary_added'])
+
+    @mock.patch(DS_PATH + "._add_network_config_from_opc_imds")
+    @mock.patch(DS_PATH + ".cmdline.read_initramfs_config",
+                return_value={'some': 'config'})
+    @mock.patch(DS_PATH + "._is_iscsi_root", return_value=True)
+    def test_secondary_nics_not_added_to_network_config_by_default(
+            self, _m_is_iscsi_root, _m_initramfs_config,
+            m_add_network_config_from_opc_imds):
+
+        def network_config_side_effect(network_config):
+            network_config['secondary_added'] = True
+
+        m_add_network_config_from_opc_imds.side_effect = (
+            network_config_side_effect)
+
+        distro = mock.MagicMock()
+        ds, _ = self._get_ds(distro=distro, patches={
+            '_is_platform_viable': {'return_value': True},
+            'crawl_metadata': {
+                'return_value': {
+                    MD_VER: {'system_uuid': self.my_uuid,
+                             'meta_data': self.my_md}}}})
+        self.assertNotIn('secondary_added', ds.network_config)
+
+    @mock.patch(DS_PATH + "._add_network_config_from_opc_imds")
+    @mock.patch(DS_PATH + ".cmdline.read_initramfs_config")
+    @mock.patch(DS_PATH + "._is_iscsi_root", return_value=True)
+    def test_secondary_nic_failure_isnt_blocking(
+            self, _m_is_iscsi_root, m_initramfs_config,
+            m_add_network_config_from_opc_imds):
+
+        m_add_network_config_from_opc_imds.side_effect = Exception()
+
+        distro = mock.MagicMock()
+        ds, _ = self._get_ds(distro=distro, patches={
+            '_is_platform_viable': {'return_value': True},
+            'crawl_metadata': {
+                'return_value': {
+                    MD_VER: {'system_uuid': self.my_uuid,
+                             'meta_data': self.my_md}}}})
+        ds.ds_cfg['configure_secondary_nics'] = True
+        self.assertEqual(ds.network_config, m_initramfs_config.return_value)
+        self.assertIn('Failed to fetch secondary network configuration',
+                      self.logs.getvalue())
+
+    def test_ds_network_cfg_preferred_over_initramfs(self):
+        """Ensure that DS net config is preferred over initramfs config"""
+        network_config_sources = oracle.DataSourceOracle.network_config_sources
+        self.assertLess(
+            network_config_sources.index(NetworkConfigSource.ds),
+            network_config_sources.index(NetworkConfigSource.initramfs)
+        )
 
 
 @mock.patch(DS_PATH + "._read_system_uuid", return_value=str(uuid.uuid4()))
@@ -336,4 +471,86 @@ class TestLoadIndex(test_helpers.CiTestCase):
             oracle._load_index("\n".join(["meta_data.json", "user_data"])))
 
 
+class TestNetworkConfigFromOpcImds(test_helpers.CiTestCase):
+
+    with_logs = True
+
+    def setUp(self):
+        super(TestNetworkConfigFromOpcImds, self).setUp()
+        self.add_patch(DS_PATH + '.readurl', 'm_readurl')
+        self.add_patch(DS_PATH + '.get_interfaces_by_mac',
+                       'm_get_interfaces_by_mac')
+
+    def test_failure_to_readurl(self):
+        # readurl failures should just bubble out to the caller
+        self.m_readurl.side_effect = Exception('oh no')
+        with self.assertRaises(Exception) as excinfo:
+            oracle._add_network_config_from_opc_imds({})
+        self.assertEqual(str(excinfo.exception), 'oh no')
+
+    def test_empty_response(self):
+        # empty response error should just bubble out to the caller
+        self.m_readurl.return_value = ''
+        with self.assertRaises(Exception):
+            oracle._add_network_config_from_opc_imds([])
+
+    def test_invalid_json(self):
+        # invalid JSON error should just bubble out to the caller
+        self.m_readurl.return_value = '{'
+        with self.assertRaises(Exception):
+            oracle._add_network_config_from_opc_imds([])
+
+    def test_no_secondary_nics_does_not_mutate_input(self):
+        self.m_readurl.return_value = json.dumps([{}])
+        # We test this by passing in a non-dict to ensure that no dict
+        # operations are used; failure would be seen as exceptions
+        oracle._add_network_config_from_opc_imds(object())
+
+    def test_bare_metal_machine_skipped(self):
+        # nicIndex in the first entry indicates a bare metal machine
+        self.m_readurl.return_value = OPC_BM_SECONDARY_VNIC_RESPONSE
+        # We test this by passing in a non-dict to ensure that no dict
+        # operations are used
+        self.assertFalse(oracle._add_network_config_from_opc_imds(object()))
+        self.assertIn('bare metal machine', self.logs.getvalue())
+
+    def test_missing_mac_skipped(self):
+        self.m_readurl.return_value = OPC_VM_SECONDARY_VNIC_RESPONSE
+        self.m_get_interfaces_by_mac.return_value = {}
+
+        network_config = {'version': 1, 'config': [{'primary': 'nic'}]}
+        oracle._add_network_config_from_opc_imds(network_config)
+
+        self.assertEqual(1, len(network_config['config']))
+        self.assertIn(
+            'Interface with MAC 00:00:17:02:2b:b1 not found; skipping',
+            self.logs.getvalue())
+
+    def test_secondary_nic(self):
+        self.m_readurl.return_value = OPC_VM_SECONDARY_VNIC_RESPONSE
+        mac_addr, nic_name = '00:00:17:02:2b:b1', 'ens3'
+        self.m_get_interfaces_by_mac.return_value = {
+            mac_addr: nic_name,
+        }
+
+        network_config = {'version': 1, 'config': [{'primary': 'nic'}]}
+        oracle._add_network_config_from_opc_imds(network_config)
+
+        # The input is mutated
+        self.assertEqual(2, len(network_config['config']))
+
+        secondary_nic_cfg = network_config['config'][1]
+        self.assertEqual(nic_name, secondary_nic_cfg['name'])
+        self.assertEqual('physical', secondary_nic_cfg['type'])
+        self.assertEqual(mac_addr, secondary_nic_cfg['mac_address'])
+        self.assertEqual(9000, secondary_nic_cfg['mtu'])
+
+        self.assertEqual(1, len(secondary_nic_cfg['subnets']))
+        subnet_cfg = secondary_nic_cfg['subnets'][0]
+        # These values are hard-coded in OPC_VM_SECONDARY_VNIC_RESPONSE
+        self.assertEqual('10.0.0.231', subnet_cfg['address'])
+        self.assertEqual('24', subnet_cfg['netmask'])
+        self.assertEqual('10.0.0.1', subnet_cfg['gateway'])
+        self.assertEqual('manual', subnet_cfg['control'])
+
 # vi: ts=4 expandtab
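The tests above pin down how a VNIC entry returned by the OPC IMDS is translated into a network-config v1 "physical" entry. A simplified sketch of that translation for a single VNIC; the real logic lives in `_add_network_config_from_opc_imds` in DataSourceOracle.py, so the helper below is illustrative, not the actual implementation:

```python
def vnic_to_physical(vnic, interfaces_by_mac):
    """Map one IMDS VNIC dict to a network-config v1 'physical' entry.

    vnic: one entry from the /opc/v1/vnics/ JSON (see the fixtures above).
    interfaces_by_mac: lowercase MAC -> local interface name.
    Returns None when the MAC is not present on the system (the real
    code logs and skips such VNICs).
    """
    mac = vnic['macAddr'].lower()
    if mac not in interfaces_by_mac:
        return None
    # The netmask comes from the subnet's CIDR prefix length.
    _net, prefix = vnic['subnetCidrBlock'].split('/')
    return {
        'name': interfaces_by_mac[mac],
        'type': 'physical',
        'mac_address': mac,
        'mtu': 9000,
        'subnets': [{
            'type': 'static',
            'address': vnic['privateIp'],
            'netmask': prefix,
            'gateway': vnic['virtualRouterIp'],
            'control': 'manual',
        }],
    }
```

This mirrors what `test_secondary_nic` asserts: name resolved via MAC, MTU 9000, and a single manual static subnet derived from `privateIp`, `subnetCidrBlock`, and `virtualRouterIp`.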
diff --git a/cloudinit/stages.py b/cloudinit/stages.py
index da7d349..5012988 100644
--- a/cloudinit/stages.py
+++ b/cloudinit/stages.py
@@ -24,6 +24,7 @@ from cloudinit.handlers.shell_script import ShellScriptPartHandler
 from cloudinit.handlers.upstart_job import UpstartJobPartHandler
 
 from cloudinit.event import EventType
+from cloudinit.sources import NetworkConfigSource
 
 from cloudinit import cloud
 from cloudinit import config
@@ -630,32 +631,54 @@ class Init(object):
         if os.path.exists(disable_file):
             return (None, disable_file)
 
-        cmdline_cfg = ('cmdline', cmdline.read_kernel_cmdline_config())
-        dscfg = ('ds', None)
+        available_cfgs = {
+            NetworkConfigSource.cmdline: cmdline.read_kernel_cmdline_config(),
+            NetworkConfigSource.initramfs: cmdline.read_initramfs_config(),
+            NetworkConfigSource.ds: None,
+            NetworkConfigSource.system_cfg: self.cfg.get('network'),
+        }
+
         if self.datasource and hasattr(self.datasource, 'network_config'):
-            dscfg = ('ds', self.datasource.network_config)
-        sys_cfg = ('system_cfg', self.cfg.get('network'))
+            available_cfgs[NetworkConfigSource.ds] = (
+                self.datasource.network_config)
 
-        for loc, ncfg in (cmdline_cfg, sys_cfg, dscfg):
+        if self.datasource:
+            order = self.datasource.network_config_sources
+        else:
+            order = sources.DataSource.network_config_sources
+        for cfg_source in order:
+            if not hasattr(NetworkConfigSource, cfg_source):
+                LOG.warning('data source specifies an invalid network'
+                            ' cfg_source: %s', cfg_source)
+                continue
+            if cfg_source not in available_cfgs:
+                LOG.warning('data source specifies an unavailable network'
+                            ' cfg_source: %s', cfg_source)
+                continue
+            ncfg = available_cfgs[cfg_source]
             if net.is_disabled_cfg(ncfg):
-                LOG.debug("network config disabled by %s", loc)
-                return (None, loc)
+                LOG.debug("network config disabled by %s", cfg_source)
+                return (None, cfg_source)
             if ncfg:
-                return (ncfg, loc)
-        return (self.distro.generate_fallback_config(), "fallback")
+                return (ncfg, cfg_source)
+        return (self.distro.generate_fallback_config(),
+                NetworkConfigSource.fallback)
 
-    def apply_network_config(self, bring_up):
-        netcfg, src = self._find_networking_config()
-        if netcfg is None:
-            LOG.info("network config is disabled by %s", src)
-            return
 
+    def _apply_netcfg_names(self, netcfg):
         try:
             LOG.debug("applying net config names for %s", netcfg)
             self.distro.apply_network_config_names(netcfg)
         except Exception as e:
             LOG.warning("Failed to rename devices: %s", e)
 
+    def apply_network_config(self, bring_up):
+        # get a network config
+        netcfg, src = self._find_networking_config()
+        if netcfg is None:
+            LOG.info("network config is disabled by %s", src)
+            return
+
+        # request an update if needed/available
         if self.datasource is not NULL_DATA_SOURCE:
             if not self.is_new_instance():
                 if not self.datasource.update_metadata([EventType.BOOT]):
@@ -663,8 +686,20 @@ class Init(object):
                         "No network config applied. Neither a new instance"
                         " nor datasource network update on '%s' event",
                         EventType.BOOT)
+                    # nothing new, but ensure proper names
+                    self._apply_netcfg_names(netcfg)
                     return
+                else:
+                    # refresh netcfg after update
+                    netcfg, src = self._find_networking_config()
+
+        # ensure all physical devices in config are present
+        net.wait_for_physdevs(netcfg)
+
+        # apply renames from config
+        self._apply_netcfg_names(netcfg)
 
+        # rendering config
         LOG.info("Applying network configuration from %s bringup=%s: %s",
                  src, bring_up, netcfg)
         try:
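The stages.py rework above replaces a hard-coded `(cmdline, sys_cfg, dscfg)` tuple with a priority list the datasource itself can reorder. A toy model of that selection loop, using plain strings in place of the `NetworkConfigSource` enum (all names below are illustrative):

```python
# Candidate source names, lowest index = highest priority by default.
CMDLINE, INITRAMFS, DS, SYSTEM_CFG, FALLBACK = (
    'cmdline', 'initramfs', 'ds', 'system_cfg', 'fallback')


def find_network_config(available, order, fallback_cfg):
    """Return (config, source) for the first usable source in `order`.

    available: mapping of source name -> config dict or None.
    A source can explicitly disable networking ({'config': 'disabled'}),
    which short-circuits the search, as net.is_disabled_cfg does upstream.
    """
    for source in order:
        if source not in available:
            continue  # the real code logs a warning for unknown sources
        cfg = available[source]
        if cfg and cfg.get('config') == 'disabled':
            return (None, source)
        if cfg:
            return (cfg, source)
    # No source produced a config: fall back to generated config.
    return (fallback_cfg, FALLBACK)
```

This is why the Oracle datasource can declare that `ds` outranks `initramfs`: the order is data, not control flow, so `test__find_networking_config_uses_datasrc_order` only has to swap the list.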
diff --git a/cloudinit/tests/helpers.py b/cloudinit/tests/helpers.py
index f41180f..23fddd0 100644
--- a/cloudinit/tests/helpers.py
+++ b/cloudinit/tests/helpers.py
@@ -198,7 +198,8 @@ class CiTestCase(TestCase):
                 prefix="ci-%s." % self.__class__.__name__)
         else:
             tmpd = tempfile.mkdtemp(dir=dir)
-        self.addCleanup(functools.partial(shutil.rmtree, tmpd))
+        self.addCleanup(
+            functools.partial(shutil.rmtree, tmpd, ignore_errors=True))
         return tmpd
 
     def tmp_path(self, path, dir=None):
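The one-line helpers.py change above makes registered cleanups tolerant of directories a test has already removed. A small sketch of the same pattern outside of unittest (the `make_tmpdir` helper is illustrative, not cloud-init's API):

```python
import functools
import os
import shutil
import tempfile


def make_tmpdir(cleanups):
    """Create a temp dir and register its removal in `cleanups`.

    With ignore_errors=True, running the cleanup after the test has
    already deleted the directory is a no-op instead of an error,
    which is exactly what the CiTestCase.tmp_dir change buys.
    """
    tmpd = tempfile.mkdtemp()
    cleanups.append(
        functools.partial(shutil.rmtree, tmpd, ignore_errors=True))
    return tmpd
```

`functools.partial` freezes the path at registration time, mirroring how `addCleanup` defers the call until teardown.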
diff --git a/cloudinit/tests/test_stages.py b/cloudinit/tests/test_stages.py
index 94b6b25..d5c9c0e 100644
--- a/cloudinit/tests/test_stages.py
+++ b/cloudinit/tests/test_stages.py
@@ -6,6 +6,7 @@ import os
 
 from cloudinit import stages
 from cloudinit import sources
+from cloudinit.sources import NetworkConfigSource
 
 from cloudinit.event import EventType
 from cloudinit.util import write_file
@@ -37,6 +38,7 @@ class FakeDataSource(sources.DataSource):
 
 class TestInit(CiTestCase):
     with_logs = True
+    allowed_subp = False
 
     def setUp(self):
         super(TestInit, self).setUp()
@@ -57,84 +59,189 @@ class TestInit(CiTestCase):
             (None, disable_file),
             self.init._find_networking_config())
 
+    @mock.patch('cloudinit.stages.cmdline.read_initramfs_config')
     @mock.patch('cloudinit.stages.cmdline.read_kernel_cmdline_config')
-    def test_wb__find_networking_config_disabled_by_kernel(self, m_cmdline):
+    def test_wb__find_networking_config_disabled_by_kernel(
+            self, m_cmdline, m_initramfs):
         """find_networking_config returns when disabled by kernel cmdline."""
         m_cmdline.return_value = {'config': 'disabled'}
+        m_initramfs.return_value = {'config': ['fake_initrd']}
         self.assertEqual(
-            (None, 'cmdline'),
+            (None, NetworkConfigSource.cmdline),
             self.init._find_networking_config())
         self.assertEqual('DEBUG: network config disabled by cmdline\n',
                          self.logs.getvalue())
 
+    @mock.patch('cloudinit.stages.cmdline.read_initramfs_config')
     @mock.patch('cloudinit.stages.cmdline.read_kernel_cmdline_config')
-    def test_wb__find_networking_config_disabled_by_datasrc(self, m_cmdline):
+    def test_wb__find_networking_config_disabled_by_initrd(
+            self, m_cmdline, m_initramfs):
+        """find_networking_config returns when disabled by kernel cmdline."""
+        m_cmdline.return_value = {}
+        m_initramfs.return_value = {'config': 'disabled'}
+        self.assertEqual(
+            (None, NetworkConfigSource.initramfs),
+            self.init._find_networking_config())
+        self.assertEqual('DEBUG: network config disabled by initramfs\n',
+                         self.logs.getvalue())
+
+    @mock.patch('cloudinit.stages.cmdline.read_initramfs_config')
+    @mock.patch('cloudinit.stages.cmdline.read_kernel_cmdline_config')
+    def test_wb__find_networking_config_disabled_by_datasrc(
+            self, m_cmdline, m_initramfs):
         """find_networking_config returns when disabled by datasource cfg."""
         m_cmdline.return_value = {}  # Kernel doesn't disable networking
+        m_initramfs.return_value = {}  # initramfs doesn't disable networking
         self.init._cfg = {'system_info': {'paths': {'cloud_dir': self.tmpdir}},
                           'network': {}}  # system config doesn't disable
 
         self.init.datasource = FakeDataSource(
             network_config={'config': 'disabled'})
         self.assertEqual(
-            (None, 'ds'),
+            (None, NetworkConfigSource.ds),
             self.init._find_networking_config())
         self.assertEqual('DEBUG: network config disabled by ds\n',
                          self.logs.getvalue())
 
+    @mock.patch('cloudinit.stages.cmdline.read_initramfs_config')
     @mock.patch('cloudinit.stages.cmdline.read_kernel_cmdline_config')
-    def test_wb__find_networking_config_disabled_by_sysconfig(self, m_cmdline):
+    def test_wb__find_networking_config_disabled_by_sysconfig(
+            self, m_cmdline, m_initramfs):
         """find_networking_config returns when disabled by system config."""
         m_cmdline.return_value = {}  # Kernel doesn't disable networking
+        m_initramfs.return_value = {}  # initramfs doesn't disable networking
         self.init._cfg = {'system_info': {'paths': {'cloud_dir': self.tmpdir}},
                           'network': {'config': 'disabled'}}
         self.assertEqual(
-            (None, 'system_cfg'),
+            (None, NetworkConfigSource.system_cfg),
             self.init._find_networking_config())
         self.assertEqual('DEBUG: network config disabled by system_cfg\n',
                          self.logs.getvalue())
 
+    @mock.patch('cloudinit.stages.cmdline.read_initramfs_config')
+    @mock.patch('cloudinit.stages.cmdline.read_kernel_cmdline_config')
+    def test__find_networking_config_uses_datasrc_order(
+            self, m_cmdline, m_initramfs):
+        """find_networking_config should check sources in DS defined order"""
+        # cmdline and initramfs, which would normally be preferred over other
+        # sources, disable networking; in this case, though, the DS moves them
+        # later so its own config is preferred
+        m_cmdline.return_value = {'config': 'disabled'}
+        m_initramfs.return_value = {'config': 'disabled'}
+
+        ds_net_cfg = {'config': {'needle': True}}
+        self.init.datasource = FakeDataSource(network_config=ds_net_cfg)
+        self.init.datasource.network_config_sources = [
+            NetworkConfigSource.ds, NetworkConfigSource.system_cfg,
+            NetworkConfigSource.cmdline, NetworkConfigSource.initramfs]
+
+        self.assertEqual(
+            (ds_net_cfg, NetworkConfigSource.ds),
+            self.init._find_networking_config())
+
+    @mock.patch('cloudinit.stages.cmdline.read_initramfs_config')
+    @mock.patch('cloudinit.stages.cmdline.read_kernel_cmdline_config')
+    def test__find_networking_config_warns_if_datasrc_uses_invalid_src(
+            self, m_cmdline, m_initramfs):
+        """find_networking_config should check sources in DS defined order"""
+        ds_net_cfg = {'config': {'needle': True}}
+        self.init.datasource = FakeDataSource(network_config=ds_net_cfg)
+        self.init.datasource.network_config_sources = [
+            'invalid_src', NetworkConfigSource.ds]
+
+        self.assertEqual(
+            (ds_net_cfg, NetworkConfigSource.ds),
+            self.init._find_networking_config())
+        self.assertIn('WARNING: data source specifies an invalid network'
+                      ' cfg_source: invalid_src',
+                      self.logs.getvalue())
+
+    @mock.patch('cloudinit.stages.cmdline.read_initramfs_config')
     @mock.patch('cloudinit.stages.cmdline.read_kernel_cmdline_config')
4401 | 98 | def test_wb__find_networking_config_returns_kernel(self, m_cmdline): | 161 | def test__find_networking_config_warns_if_datasrc_uses_unavailable_src( |
4402 | 162 | self, m_cmdline, m_initramfs): | ||
4403 | 163 | """find_networking_config should check sources in DS defined order""" | ||
4404 | 164 | ds_net_cfg = {'config': {'needle': True}} | ||
4405 | 165 | self.init.datasource = FakeDataSource(network_config=ds_net_cfg) | ||
4406 | 166 | self.init.datasource.network_config_sources = [ | ||
4407 | 167 | NetworkConfigSource.fallback, NetworkConfigSource.ds] | ||
4408 | 168 | |||
4409 | 169 | self.assertEqual( | ||
4410 | 170 | (ds_net_cfg, NetworkConfigSource.ds), | ||
4411 | 171 | self.init._find_networking_config()) | ||
4412 | 172 | self.assertIn('WARNING: data source specifies an unavailable network' | ||
4413 | 173 | ' cfg_source: fallback', | ||
4414 | 174 | self.logs.getvalue()) | ||
4415 | 175 | |||
4416 | 176 | @mock.patch('cloudinit.stages.cmdline.read_initramfs_config') | ||
4417 | 177 | @mock.patch('cloudinit.stages.cmdline.read_kernel_cmdline_config') | ||
4418 | 178 | def test_wb__find_networking_config_returns_kernel( | ||
4419 | 179 | self, m_cmdline, m_initramfs): | ||
4420 | 99 | """find_networking_config returns kernel cmdline config if present.""" | 180 | """find_networking_config returns kernel cmdline config if present.""" |
4421 | 100 | expected_cfg = {'config': ['fakekernel']} | 181 | expected_cfg = {'config': ['fakekernel']} |
4422 | 101 | m_cmdline.return_value = expected_cfg | 182 | m_cmdline.return_value = expected_cfg |
4423 | 183 | m_initramfs.return_value = {'config': ['fake_initrd']} | ||
4424 | 102 | self.init._cfg = {'system_info': {'paths': {'cloud_dir': self.tmpdir}}, | 184 | self.init._cfg = {'system_info': {'paths': {'cloud_dir': self.tmpdir}}, |
4425 | 103 | 'network': {'config': ['fakesys_config']}} | 185 | 'network': {'config': ['fakesys_config']}} |
4426 | 104 | self.init.datasource = FakeDataSource( | 186 | self.init.datasource = FakeDataSource( |
4427 | 105 | network_config={'config': ['fakedatasource']}) | 187 | network_config={'config': ['fakedatasource']}) |
4428 | 106 | self.assertEqual( | 188 | self.assertEqual( |
4430 | 107 | (expected_cfg, 'cmdline'), | 189 | (expected_cfg, NetworkConfigSource.cmdline), |
4431 | 108 | self.init._find_networking_config()) | 190 | self.init._find_networking_config()) |
4432 | 109 | 191 | ||
4433 | 192 | @mock.patch('cloudinit.stages.cmdline.read_initramfs_config') | ||
4434 | 110 | @mock.patch('cloudinit.stages.cmdline.read_kernel_cmdline_config') | 193 | @mock.patch('cloudinit.stages.cmdline.read_kernel_cmdline_config') |
4436 | 111 | def test_wb__find_networking_config_returns_system_cfg(self, m_cmdline): | 194 | def test_wb__find_networking_config_returns_initramfs( |
4437 | 195 | self, m_cmdline, m_initramfs): | ||
4438 | 196 | """find_networking_config returns kernel cmdline config if present.""" | ||
4439 | 197 | expected_cfg = {'config': ['fake_initrd']} | ||
4440 | 198 | m_cmdline.return_value = {} | ||
4441 | 199 | m_initramfs.return_value = expected_cfg | ||
4442 | 200 | self.init._cfg = {'system_info': {'paths': {'cloud_dir': self.tmpdir}}, | ||
4443 | 201 | 'network': {'config': ['fakesys_config']}} | ||
4444 | 202 | self.init.datasource = FakeDataSource( | ||
4445 | 203 | network_config={'config': ['fakedatasource']}) | ||
4446 | 204 | self.assertEqual( | ||
4447 | 205 | (expected_cfg, NetworkConfigSource.initramfs), | ||
4448 | 206 | self.init._find_networking_config()) | ||
4449 | 207 | |||
4450 | 208 | @mock.patch('cloudinit.stages.cmdline.read_initramfs_config') | ||
4451 | 209 | @mock.patch('cloudinit.stages.cmdline.read_kernel_cmdline_config') | ||
4452 | 210 | def test_wb__find_networking_config_returns_system_cfg( | ||
4453 | 211 | self, m_cmdline, m_initramfs): | ||
4454 | 112 | """find_networking_config returns system config when present.""" | 212 | """find_networking_config returns system config when present.""" |
4455 | 113 | m_cmdline.return_value = {} # No kernel network config | 213 | m_cmdline.return_value = {} # No kernel network config |
4456 | 214 | m_initramfs.return_value = {} # no initramfs network config | ||
4457 | 114 | expected_cfg = {'config': ['fakesys_config']} | 215 | expected_cfg = {'config': ['fakesys_config']} |
4458 | 115 | self.init._cfg = {'system_info': {'paths': {'cloud_dir': self.tmpdir}}, | 216 | self.init._cfg = {'system_info': {'paths': {'cloud_dir': self.tmpdir}}, |
4459 | 116 | 'network': expected_cfg} | 217 | 'network': expected_cfg} |
4460 | 117 | self.init.datasource = FakeDataSource( | 218 | self.init.datasource = FakeDataSource( |
4461 | 118 | network_config={'config': ['fakedatasource']}) | 219 | network_config={'config': ['fakedatasource']}) |
4462 | 119 | self.assertEqual( | 220 | self.assertEqual( |
4464 | 120 | (expected_cfg, 'system_cfg'), | 221 | (expected_cfg, NetworkConfigSource.system_cfg), |
4465 | 121 | self.init._find_networking_config()) | 222 | self.init._find_networking_config()) |
4466 | 122 | 223 | ||
4467 | 224 | @mock.patch('cloudinit.stages.cmdline.read_initramfs_config') | ||
4468 | 123 | @mock.patch('cloudinit.stages.cmdline.read_kernel_cmdline_config') | 225 | @mock.patch('cloudinit.stages.cmdline.read_kernel_cmdline_config') |
4470 | 124 | def test_wb__find_networking_config_returns_datasrc_cfg(self, m_cmdline): | 226 | def test_wb__find_networking_config_returns_datasrc_cfg( |
4471 | 227 | self, m_cmdline, m_initramfs): | ||
4472 | 125 | """find_networking_config returns datasource net config if present.""" | 228 | """find_networking_config returns datasource net config if present.""" |
4473 | 126 | m_cmdline.return_value = {} # No kernel network config | 229 | m_cmdline.return_value = {} # No kernel network config |
4474 | 230 | m_initramfs.return_value = {} # no initramfs network config | ||
4475 | 127 | # No system config for network in setUp | 231 | # No system config for network in setUp |
4476 | 128 | expected_cfg = {'config': ['fakedatasource']} | 232 | expected_cfg = {'config': ['fakedatasource']} |
4477 | 129 | self.init.datasource = FakeDataSource(network_config=expected_cfg) | 233 | self.init.datasource = FakeDataSource(network_config=expected_cfg) |
4478 | 130 | self.assertEqual( | 234 | self.assertEqual( |
4480 | 131 | (expected_cfg, 'ds'), | 235 | (expected_cfg, NetworkConfigSource.ds), |
4481 | 132 | self.init._find_networking_config()) | 236 | self.init._find_networking_config()) |
4482 | 133 | 237 | ||
4483 | 238 | @mock.patch('cloudinit.stages.cmdline.read_initramfs_config') | ||
4484 | 134 | @mock.patch('cloudinit.stages.cmdline.read_kernel_cmdline_config') | 239 | @mock.patch('cloudinit.stages.cmdline.read_kernel_cmdline_config') |
4486 | 135 | def test_wb__find_networking_config_returns_fallback(self, m_cmdline): | 240 | def test_wb__find_networking_config_returns_fallback( |
4487 | 241 | self, m_cmdline, m_initramfs): | ||
4488 | 136 | """find_networking_config returns fallback config if not defined.""" | 242 | """find_networking_config returns fallback config if not defined.""" |
4489 | 137 | m_cmdline.return_value = {} # Kernel doesn't disable networking | 243 | m_cmdline.return_value = {} # Kernel doesn't disable networking |
4490 | 244 | m_initramfs.return_value = {} # no initramfs network config | ||
4491 | 138 | # Neither datasource nor system_info disable or provide network | 245 | # Neither datasource nor system_info disable or provide network |
4492 | 139 | 246 | ||
4493 | 140 | fake_cfg = {'config': [{'type': 'physical', 'name': 'eth9'}], | 247 | fake_cfg = {'config': [{'type': 'physical', 'name': 'eth9'}], |
4494 | @@ -147,7 +254,7 @@ class TestInit(CiTestCase): | |||
4495 | 147 | distro = self.init.distro | 254 | distro = self.init.distro |
4496 | 148 | distro.generate_fallback_config = fake_generate_fallback | 255 | distro.generate_fallback_config = fake_generate_fallback |
4497 | 149 | self.assertEqual( | 256 | self.assertEqual( |
4499 | 150 | (fake_cfg, 'fallback'), | 257 | (fake_cfg, NetworkConfigSource.fallback), |
4500 | 151 | self.init._find_networking_config()) | 258 | self.init._find_networking_config()) |
4501 | 152 | self.assertNotIn('network config disabled', self.logs.getvalue()) | 259 | self.assertNotIn('network config disabled', self.logs.getvalue()) |
4502 | 153 | 260 | ||
4503 | @@ -166,8 +273,9 @@ class TestInit(CiTestCase): | |||
4504 | 166 | 'INFO: network config is disabled by %s' % disable_file, | 273 | 'INFO: network config is disabled by %s' % disable_file, |
4505 | 167 | self.logs.getvalue()) | 274 | self.logs.getvalue()) |
4506 | 168 | 275 | ||
4507 | 276 | @mock.patch('cloudinit.net.get_interfaces_by_mac') | ||
4508 | 169 | @mock.patch('cloudinit.distros.ubuntu.Distro') | 277 | @mock.patch('cloudinit.distros.ubuntu.Distro') |
4510 | 170 | def test_apply_network_on_new_instance(self, m_ubuntu): | 278 | def test_apply_network_on_new_instance(self, m_ubuntu, m_macs): |
4511 | 171 | """Call distro apply_network_config methods on is_new_instance.""" | 279 | """Call distro apply_network_config methods on is_new_instance.""" |
4512 | 172 | net_cfg = { | 280 | net_cfg = { |
4513 | 173 | 'version': 1, 'config': [ | 281 | 'version': 1, 'config': [ |
4514 | @@ -175,7 +283,9 @@ class TestInit(CiTestCase): | |||
4515 | 175 | 'name': 'eth9', 'mac_address': '42:42:42:42:42:42'}]} | 283 | 'name': 'eth9', 'mac_address': '42:42:42:42:42:42'}]} |
4516 | 176 | 284 | ||
4517 | 177 | def fake_network_config(): | 285 | def fake_network_config(): |
4519 | 178 | return net_cfg, 'fallback' | 286 | return net_cfg, NetworkConfigSource.fallback |
4520 | 287 | |||
4521 | 288 | m_macs.return_value = {'42:42:42:42:42:42': 'eth9'} | ||
4522 | 179 | 289 | ||
4523 | 180 | self.init._find_networking_config = fake_network_config | 290 | self.init._find_networking_config = fake_network_config |
4524 | 181 | self.init.apply_network_config(True) | 291 | self.init.apply_network_config(True) |
4525 | @@ -195,7 +305,7 @@ class TestInit(CiTestCase): | |||
4526 | 195 | 'name': 'eth9', 'mac_address': '42:42:42:42:42:42'}]} | 305 | 'name': 'eth9', 'mac_address': '42:42:42:42:42:42'}]} |
4527 | 196 | 306 | ||
4528 | 197 | def fake_network_config(): | 307 | def fake_network_config(): |
4530 | 198 | return net_cfg, 'fallback' | 308 | return net_cfg, NetworkConfigSource.fallback |
4531 | 199 | 309 | ||
4532 | 200 | self.init._find_networking_config = fake_network_config | 310 | self.init._find_networking_config = fake_network_config |
4533 | 201 | self.init.apply_network_config(True) | 311 | self.init.apply_network_config(True) |
4534 | @@ -206,8 +316,9 @@ class TestInit(CiTestCase): | |||
4535 | 206 | " nor datasource network update on '%s' event" % EventType.BOOT, | 316 | " nor datasource network update on '%s' event" % EventType.BOOT, |
4536 | 207 | self.logs.getvalue()) | 317 | self.logs.getvalue()) |
4537 | 208 | 318 | ||
4538 | 319 | @mock.patch('cloudinit.net.get_interfaces_by_mac') | ||
4539 | 209 | @mock.patch('cloudinit.distros.ubuntu.Distro') | 320 | @mock.patch('cloudinit.distros.ubuntu.Distro') |
4541 | 210 | def test_apply_network_on_datasource_allowed_event(self, m_ubuntu): | 321 | def test_apply_network_on_datasource_allowed_event(self, m_ubuntu, m_macs): |
4542 | 211 | """Apply network if datasource.update_metadata permits BOOT event.""" | 322 | """Apply network if datasource.update_metadata permits BOOT event.""" |
4543 | 212 | old_instance_id = os.path.join( | 323 | old_instance_id = os.path.join( |
4544 | 213 | self.init.paths.get_cpath('data'), 'instance-id') | 324 | self.init.paths.get_cpath('data'), 'instance-id') |
4545 | @@ -218,7 +329,9 @@ class TestInit(CiTestCase): | |||
4546 | 218 | 'name': 'eth9', 'mac_address': '42:42:42:42:42:42'}]} | 329 | 'name': 'eth9', 'mac_address': '42:42:42:42:42:42'}]} |
4547 | 219 | 330 | ||
4548 | 220 | def fake_network_config(): | 331 | def fake_network_config(): |
4550 | 221 | return net_cfg, 'fallback' | 332 | return net_cfg, NetworkConfigSource.fallback |
4551 | 333 | |||
4552 | 334 | m_macs.return_value = {'42:42:42:42:42:42': 'eth9'} | ||
4553 | 222 | 335 | ||
4554 | 223 | self.init._find_networking_config = fake_network_config | 336 | self.init._find_networking_config = fake_network_config |
4555 | 224 | self.init.datasource = FakeDataSource(paths=self.init.paths) | 337 | self.init.datasource = FakeDataSource(paths=self.init.paths) |
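The tests above pin down the new `_find_networking_config` contract: a data source may publish its own `network_config_sources` ordering, invalid or unavailable entries are skipped with a warning, and the first source that yields a config wins, with fallback last. A minimal standalone sketch of that precedence loop (the enum and `find_networking_config` here are illustrative stand-ins, not the cloud-init internals):

```python
from enum import Enum


class NetworkConfigSource(Enum):
    """Simplified stand-in for cloud-init's network config source labels."""
    cmdline = 'cmdline'
    ds = 'ds'
    system_cfg = 'system_cfg'
    initramfs = 'initramfs'
    fallback = 'fallback'


def find_networking_config(order, available_cfgs):
    """Return (config, source) from the first source in `order` that
    provides a non-empty config; skip unknown entries with a warning."""
    for src in order:
        if not isinstance(src, NetworkConfigSource):
            print('WARNING: invalid network cfg_source: %s' % src)
            continue
        cfg = available_cfgs.get(src)
        if cfg:
            return cfg, src
    # Nothing found: callers would generate a fallback config here.
    return None, NetworkConfigSource.fallback


# The DS puts itself ahead of cmdline/initramfs, so its config wins even
# though cmdline carries a (disabling) config of its own:
order = [NetworkConfigSource.ds, NetworkConfigSource.cmdline,
         NetworkConfigSource.initramfs]
cfgs = {NetworkConfigSource.cmdline: {'config': 'disabled'},
        NetworkConfigSource.ds: {'config': {'needle': True}}}
cfg, src = find_networking_config(order, cfgs)
print(src)  # NetworkConfigSource.ds
```

This mirrors why `test__find_networking_config_uses_datasrc_order` above expects the DS config even when cmdline and initramfs both say "disabled".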
diff --git a/cloudinit/url_helper.py b/cloudinit/url_helper.py
index 0af0d9e..44ee61d 100644
--- a/cloudinit/url_helper.py
+++ b/cloudinit/url_helper.py
@@ -199,18 +199,19 @@ def _get_ssl_args(url, ssl_details):
 def readurl(url, data=None, timeout=None, retries=0, sec_between=1,
             headers=None, headers_cb=None, ssl_details=None,
             check_status=True, allow_redirects=True, exception_cb=None,
-            session=None, infinite=False, log_req_resp=True):
+            session=None, infinite=False, log_req_resp=True,
+            request_method=None):
     url = _cleanurl(url)
     req_args = {
         'url': url,
     }
     req_args.update(_get_ssl_args(url, ssl_details))
     req_args['allow_redirects'] = allow_redirects
-    req_args['method'] = 'GET'
+    if not request_method:
+        request_method = 'POST' if data else 'GET'
+    req_args['method'] = request_method
     if timeout is not None:
         req_args['timeout'] = max(float(timeout), 0)
-    if data:
-        req_args['method'] = 'POST'
     # It doesn't seem like config
     # was added in older library versions (or newer ones either), thus we
     # need to manually do the retries if it wasn't...
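The `readurl` hunk replaces the hard-coded method selection (GET, silently flipped to POST whenever `data` is supplied) with an overridable `request_method` parameter while preserving the old default. The defaulting rule, isolated in a hypothetical helper so it is easy to see:

```python
def resolve_request_method(request_method=None, data=None):
    """Mirror readurl's method defaulting: an explicit request_method
    wins; otherwise infer POST when a payload is present, else GET."""
    if not request_method:
        request_method = 'POST' if data else 'GET'
    return request_method


print(resolve_request_method())                     # GET
print(resolve_request_method(data=b'x=1'))          # POST
print(resolve_request_method('PUT', data=b'x=1'))   # PUT
```

The practical effect: callers can now send a request body with a method other than POST (or force GET despite a body), which the old `if data:` branch made impossible.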
diff --git a/cloudinit/version.py b/cloudinit/version.py
index ddcd436..b04b11f 100644
--- a/cloudinit/version.py
+++ b/cloudinit/version.py
@@ -4,7 +4,7 @@
 #
 # This file is part of cloud-init. See LICENSE file for license information.

-__VERSION__ = "19.1"
+__VERSION__ = "19.2"
 _PACKAGED_VERSION = '@@PACKAGED_VERSION@@'

 FEATURES = [
diff --git a/debian/changelog b/debian/changelog
index 032f711..8ae019f 100644
--- a/debian/changelog
+++ b/debian/changelog
@@ -1,9 +1,66 @@
-cloud-init (19.1-1-gbaa47854-0ubuntu1~18.04.2) UNRELEASED; urgency=medium
+cloud-init (19.2-21-ge6383719-0ubuntu1~18.04.1) bionic; urgency=medium

   * refresh patches:
     + debian/patches/ubuntu-advantage-revert-tip.patch
-
- -- Chad Smith <chad.smith@canonical.com>  Tue, 04 Jun 2019 15:01:41 -0600
+  * refresh patches:
+    + debian/patches/ubuntu-advantage-revert-tip.patch
+  * debian/cloud-init.templates: enable Exoscale cloud.
+  * New upstream snapshot. (LP: #1841099)
+    - ubuntu-drivers: call db_x_loadtemplatefile to accept NVIDIA EULA
+    - Add missing #cloud-config comment on first example in documentation.
+      [Florian Müller]
+    - ubuntu-drivers: emit latelink=true debconf to accept nvidia eula
+    - DataSourceOracle: prefer DS network config over initramfs
+    - format.rst: add text/jinja2 to list of content types (+ cleanups)
+    - Add GitHub pull request template to point people at hacking doc
+    - cloudinit/distros/parsers/sys_conf: add docstring to SysConf
+    - pyflakes: remove unused variable [Joshua Powers]
+    - Azure: Record boot timestamps, system information, and diagnostic events
+      [Anh Vo]
+    - DataSourceOracle: configure secondary NICs on Virtual Machines
+    - distros: fix confusing variable names
+    - azure/net: generate_fallback_nic emits network v2 config instead of v1
+    - Add support for publishing host keys to GCE guest attributes
+      [Rick Wright]
+    - New data source for the Exoscale.com cloud platform [Chris Glass]
+    - doc: remove intersphinx extension
+    - cc_set_passwords: rewrite documentation
+    - net/cmdline: split interfaces_by_mac and init network config
+      determination
+    - stages: allow data sources to override network config source order
+    - cloud_tests: updates and fixes
+    - Fix bug rendering MTU on bond or vlan when input was netplan.
+      [Scott Moser]
+    - net: update net sequence, include wait on netdevs, opensuse netrules path
+    - Release 19.2
+    - net: add rfc3442 (classless static routes) to EphemeralDHCP
+    - templates/ntp.conf.debian.tmpl: fix missing newline for pools
+    - Support netplan renderer in Arch Linux [Conrad Hoffmann]
+    - Fix typo in publicly viewable documentation. [David Medberry]
+    - Add a cdrom size checker for OVF ds to ds-identify [Pengpeng Sun]
+    - VMWare: Trigger the post customization script via cc_scripts module.
+      [Xiaofeng Wang]
+    - Cloud-init analyze module: Added ability to analyze boot events.
+      [Sam Gilson]
+    - Update debian eni network configuration location, retain Ubuntu setting
+      [Janos Lenart]
+    - net: skip bond interfaces in get_interfaces [Stanislav Makar]
+    - Fix a couple of issues raised by a coverity scan
+    - Add missing dsname for Hetzner Cloud datasource [Markus Schade]
+    - doc: indicate that netplan is default in Ubuntu now
+    - azure: add region and AZ properties from imds compute location metadata
+    - sysconfig: support more bonding options [Penghui Liao]
+    - cloud-init-generator: use libexec path to ds-identify on redhat systems
+    - tools/build-on-freebsd: update to python3 [Gonéri Le Bouder]
+    - Allow identification of OpenStack by Asset Tag [Mark T. Voelker]
+    - Fix spelling error making 'an Ubuntu' consistent. [Brian Murray]
+    - run-container: centos: comment out the repo mirrorlist [Paride Legovini]
+    - netplan: update netplan key mappings for gratuitous-arp
+    - freebsd: fix the name of cloudcfg VARIANT [Gonéri Le Bouder]
+    - freebsd: ability to grow root file system [Gonéri Le Bouder]
+    - freebsd: NoCloud data source support [Gonéri Le Bouder]
+
+ -- Chad Smith <chad.smith@canonical.com>  Thu, 22 Aug 2019 12:56:36 -0600

 cloud-init (19.1-1-gbaa47854-0ubuntu1~18.04.1) bionic; urgency=medium

diff --git a/debian/cloud-init.templates b/debian/cloud-init.templates
index ef3c3a7..8d37ee5 100644
--- a/debian/cloud-init.templates
+++ b/debian/cloud-init.templates
@@ -1,8 +1,8 @@
 Template: cloud-init/datasources
 Type: multiselect
-Default: NoCloud, ConfigDrive, OpenNebula, DigitalOcean, Azure, AltCloud, OVF, MAAS, GCE, OpenStack, CloudSigma, SmartOS, Bigstep, Scaleway, AliYun, Ec2, CloudStack, Hetzner, IBMCloud, None
-Choices-C: NoCloud, ConfigDrive, OpenNebula, DigitalOcean, Azure, AltCloud, OVF, MAAS, GCE, OpenStack, CloudSigma, SmartOS, Bigstep, Scaleway, AliYun, Ec2, CloudStack, Hetzner, IBMCloud, None
-Choices: NoCloud: Reads info from /var/lib/cloud/seed only, ConfigDrive: Reads data from Openstack Config Drive, OpenNebula: read from OpenNebula context disk, DigitalOcean: reads data from Droplet datasource, Azure: read from MS Azure cdrom. Requires walinux-agent, AltCloud: config disks for RHEVm and vSphere, OVF: Reads data from OVF Transports, MAAS: Reads data from Ubuntu MAAS, GCE: google compute metadata service, OpenStack: native openstack metadata service, CloudSigma: metadata over serial for cloudsigma.com, SmartOS: Read from SmartOS metadata service, Bigstep: Bigstep metadata service, Scaleway: Scaleway metadata service, AliYun: Alibaba metadata service, Ec2: reads data from EC2 Metadata service, CloudStack: Read from CloudStack metadata service, Hetzner: Hetzner Cloud, IBMCloud: IBM Cloud. Previously softlayer or bluemix., None: Failsafe datasource
+Default: NoCloud, ConfigDrive, OpenNebula, DigitalOcean, Azure, AltCloud, OVF, MAAS, GCE, OpenStack, CloudSigma, SmartOS, Bigstep, Scaleway, AliYun, Ec2, CloudStack, Hetzner, IBMCloud, Exoscale, None
+Choices-C: NoCloud, ConfigDrive, OpenNebula, DigitalOcean, Azure, AltCloud, OVF, MAAS, GCE, OpenStack, CloudSigma, SmartOS, Bigstep, Scaleway, AliYun, Ec2, CloudStack, Hetzner, IBMCloud, Exoscale, None
+Choices: NoCloud: Reads info from /var/lib/cloud/seed only, ConfigDrive: Reads data from Openstack Config Drive, OpenNebula: read from OpenNebula context disk, DigitalOcean: reads data from Droplet datasource, Azure: read from MS Azure cdrom. Requires walinux-agent, AltCloud: config disks for RHEVm and vSphere, OVF: Reads data from OVF Transports, MAAS: Reads data from Ubuntu MAAS, GCE: google compute metadata service, OpenStack: native openstack metadata service, CloudSigma: metadata over serial for cloudsigma.com, SmartOS: Read from SmartOS metadata service, Bigstep: Bigstep metadata service, Scaleway: Scaleway metadata service, AliYun: Alibaba metadata service, Ec2: reads data from EC2 Metadata service, CloudStack: Read from CloudStack metadata service, Hetzner: Hetzner Cloud, IBMCloud: IBM Cloud. Previously softlayer or bluemix., Exoscale: Exoscale metadata service, None: Failsafe datasource
 Description: Which data sources should be searched?
  Cloud-init supports searching different "Data Sources" for information
  that it uses to configure a cloud instance.
diff --git a/debian/patches/ubuntu-advantage-revert-tip.patch b/debian/patches/ubuntu-advantage-revert-tip.patch
index b956067..6d8b888 100644
--- a/debian/patches/ubuntu-advantage-revert-tip.patch
+++ b/debian/patches/ubuntu-advantage-revert-tip.patch
@@ -9,10 +9,8 @@ Forwarded: not-needed
 Last-Update: 2019-05-10
 ---
 This patch header follows DEP-3: http://dep.debian.net/deps/dep3/
-Index: cloud-init/cloudinit/config/cc_ubuntu_advantage.py
-===================================================================
---- cloud-init.orig/cloudinit/config/cc_ubuntu_advantage.py
-+++ cloud-init/cloudinit/config/cc_ubuntu_advantage.py
+--- a/cloudinit/config/cc_ubuntu_advantage.py
++++ b/cloudinit/config/cc_ubuntu_advantage.py
 @@ -1,143 +1,150 @@
 +# Copyright (C) 2018 Canonical Ltd.
 +#
@@ -294,10 +292,8 @@ Index: cloud-init/cloudinit/config/cc_ubuntu_advantage.py
 +    run_commands(cfgin.get('commands', []))

  # vi: ts=4 expandtab
-Index: cloud-init/cloudinit/config/tests/test_ubuntu_advantage.py
-===================================================================
---- cloud-init.orig/cloudinit/config/tests/test_ubuntu_advantage.py
-+++ cloud-init/cloudinit/config/tests/test_ubuntu_advantage.py
+--- a/cloudinit/config/tests/test_ubuntu_advantage.py
++++ b/cloudinit/config/tests/test_ubuntu_advantage.py
 @@ -1,7 +1,10 @@
  # This file is part of cloud-init. See LICENSE file for license information.

diff --git a/doc/examples/cloud-config-datasources.txt b/doc/examples/cloud-config-datasources.txt
index 2651c02..52a2476 100644
--- a/doc/examples/cloud-config-datasources.txt
+++ b/doc/examples/cloud-config-datasources.txt
@@ -38,7 +38,7 @@ datasource:
   # these are optional, but allow you to basically provide a datasource
   # right here
   user-data: |
-     # This is the user-data verbatum
+     # This is the user-data verbatim
   meta-data:
     instance-id: i-87018aed
     local-hostname: myhost.internal
diff --git a/doc/examples/cloud-config-user-groups.txt b/doc/examples/cloud-config-user-groups.txt
index 6a363b7..f588bfb 100644
--- a/doc/examples/cloud-config-user-groups.txt
+++ b/doc/examples/cloud-config-user-groups.txt
@@ -1,3 +1,4 @@
+#cloud-config
 # Add groups to the system
 # The following example adds the ubuntu group with members 'root' and 'sys'
 # and the empty group cloud-users.
diff --git a/doc/rtd/conf.py b/doc/rtd/conf.py
index 50eb05c..4174477 100644
--- a/doc/rtd/conf.py
+++ b/doc/rtd/conf.py
@@ -27,16 +27,11 @@ project = 'Cloud-Init'
 # Add any Sphinx extension module names here, as strings. They can be
 # extensions coming with Sphinx (named 'sphinx.ext.*') or your custom ones.
 extensions = [
-    'sphinx.ext.intersphinx',
     'sphinx.ext.autodoc',
     'sphinx.ext.autosectionlabel',
     'sphinx.ext.viewcode',
 ]

-intersphinx_mapping = {
-    'sphinx': ('http://sphinx.pocoo.org', None)
-}
-
 # The suffix of source filenames.
 source_suffix = '.rst'

4760 | diff --git a/doc/rtd/topics/analyze.rst b/doc/rtd/topics/analyze.rst | |||
4761 | 43 | new file mode 100644 | 38 | new file mode 100644 |
4762 | index 0000000..5cf38bd | |||
4763 | --- /dev/null | |||
4764 | +++ b/doc/rtd/topics/analyze.rst | |||
4765 | @@ -0,0 +1,84 @@ | |||
4766 | 1 | ************************* | ||
4767 | 2 | Cloud-init Analyze Module | ||
4768 | 3 | ************************* | ||
4769 | 4 | |||
4770 | 5 | Overview | ||
4771 | 6 | ======== | ||
4772 | 7 | The analyze module was added to cloud-init in order to help analyze cloud-init boot time | ||
4773 | 8 | performance. It is loosely based on systemd-analyze where there are 4 main actions: | ||
4774 | 9 | show, blame, dump, and boot. | ||
4775 | 10 | |||
4776 | 11 | The 'show' action is similar to 'systemd-analyze critical-chain' which prints a list of units, the | ||
4777 | 12 | time they started and how long they took. For cloud-init, we have four stages, and within each stage | ||
4778 | 13 | a number of modules may run depending on configuration. ‘cloudinit-analyze show’ will, for each | ||
4779 | 14 | boot, print this information and a summary total time, per boot. | ||
4780 | 15 | |||
4781 | 16 | The 'blame' action matches 'systemd-analyze blame' where it prints, in descending order, | ||
4782 | 17 | the units that took the longest to run. This output is highly useful for examining where cloud-init | ||
4783 | 18 | is spending its time during execution. | ||
4784 | 19 | |||
4785 | 20 | The 'dump' action simply dumps the cloud-init logs that the analyze module operates on, | ||
4786 | 21 | returning a list of dictionaries that can be consumed by other reporting tools. | ||
4787 | 22 | |||
4788 | 23 | The 'boot' action prints out kernel related timestamps that are not included in any of the | ||
4789 | 24 | cloud-init logs. There are three different timestamps that are presented to the user: | ||
4790 | 25 | kernel start, kernel finish boot, and cloud-init start. This was added to give additional | ||
4791 | 26 | visibility into the parts of the boot process that cloud-init does not control, to aid in debugging | ||
4792 | 27 | performance issues related to cloud-init startup and in tracking regressions. | ||
4793 | 28 | |||
4794 | 29 | Usage | ||
4795 | 30 | ===== | ||
4796 | 31 | Using each of the printing formats is as easy as running one of the following bash commands: | ||
4797 | 32 | |||
4798 | 33 | .. code-block:: shell-session | ||
4799 | 34 | |||
4800 | 35 | cloud-init analyze show | ||
4801 | 36 | cloud-init analyze blame | ||
4802 | 37 | cloud-init analyze dump | ||
4803 | 38 | cloud-init analyze boot | ||
4804 | 39 | |||
4805 | 40 | Cloud-init analyze boot Timestamp Gathering | ||
4806 | 41 | =========================================== | ||
4807 | 42 | The following boot related timestamps are gathered on demand when cloud-init analyze boot runs: | ||
4808 | 43 | - Kernel Startup, which is inferred from system uptime | ||
4809 | 44 | - Kernel Finishes Initialization, which is inferred from the systemd UserspaceTimestampMonotonic property | ||
4810 | 45 | - Cloud-init activation, which is inferred from the InactiveExitTimestamp property of the | ||
4811 | 46 | cloud-init-local systemd unit. | ||
4812 | 47 | |||
4813 | 48 | In order to gather the necessary timestamps using systemd, running the commands | ||
4814 | 49 | |||
4815 | 50 | .. code-block:: shell-session | ||
4816 | 51 | |||
4817 | 52 | systemctl show -p UserspaceTimestampMonotonic | ||
4818 | 53 | systemctl show cloud-init-local -p InactiveExitTimestampMonotonic | ||
4819 | 54 | |||
4820 | 55 | will gather the UserspaceTimestamp and InactiveExitTimestamp. | ||
4821 | 56 | The UserspaceTimestamp tracks when the init system starts, which is used as an indicator of kernel | ||
4822 | 57 | finishing initialization. The InactiveExitTimestamp tracks when a particular systemd unit transitions | ||
4823 | 58 | from the Inactive to the Active state, which can be used to mark the beginning of systemd's activation | ||
4824 | 59 | of cloud-init. | ||
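Assuming systemd is present, the arithmetic behind these timestamps can be sketched as follows. The microsecond values below are illustrative samples, not real output:

```shell
# Illustrative sketch: both properties report monotonic timestamps in
# microseconds, so the delay between the init system starting and cloud-init
# activating is a simple subtraction.
userspace_us=2016488   # sample value from: systemctl show -p UserspaceTimestampMonotonic
cloudinit_us=4376044   # sample value from: systemctl show cloud-init-local -p InactiveExitTimestampMonotonic
delta_s=$(( (cloudinit_us - userspace_us) / 1000000 ))
echo "cloud-init activated ${delta_s}s after userspace start"
```

Integer division truncates to whole seconds here, which matches the coarse granularity that is useful for boot analysis.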
4825 | 60 | |||
4826 | 61 | Currently this only works for distros that use systemd as their init system. We will be expanding | ||
4827 | 62 | support for other distros in the future and this document will be updated accordingly. | ||
4828 | 63 | |||
4829 | 64 | If systemd is not present on the system, dmesg is used to attempt to find an event that logs the | ||
4830 | 65 | beginning of the init system. However, with this method only the first two timestamps can be found; | ||
4831 | 66 | dmesg does not monitor userspace processes, so no cloud-init start timestamp is emitted as it is | ||
4832 | 67 | when using systemd. | ||
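The dmesg-based estimate relies on the timestamp prefix that the kernel attaches to each log line (seconds since kernel start). A minimal sketch of extracting it is below; the message text shown is only an example, as the exact wording varies across kernel versions:

```shell
# Hypothetical sketch: parse the "[ seconds.micros ]" prefix from a
# dmesg-style line to estimate when the init system was launched.
# The message text is illustrative; real kernels word this differently
# across versions.
line='[    2.016488] Run /sbin/init as init process'
ts=$(printf '%s' "$line" | sed -n 's/^\[ *\([0-9.]*\)\].*/\1/p')
echo "init started ${ts}s after kernel start"
```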
4833 | 68 | |||
4834 | 69 | List of Cloud-init analyze boot supported distros | ||
4835 | 70 | ================================================= | ||
4836 | 71 | - Arch | ||
4837 | 72 | - CentOS | ||
4838 | 73 | - Debian | ||
4839 | 74 | - Fedora | ||
4840 | 75 | - openSUSE | ||
4841 | 76 | - Red Hat Enterprise Linux | ||
4842 | 77 | - Ubuntu | ||
4843 | 78 | - SUSE Linux Enterprise Server | ||
4844 | 79 | - CoreOS | ||
4845 | 80 | |||
4846 | 81 | List of Cloud-init analyze boot unsupported distros | ||
4847 | 82 | =================================================== | ||
4848 | 83 | - FreeBSD | ||
4849 | 84 | - Gentoo | ||
4850 | 0 | \ No newline at end of file | 85 | \ No newline at end of file |
4851 | diff --git a/doc/rtd/topics/capabilities.rst b/doc/rtd/topics/capabilities.rst | |||
4852 | index 0d8b894..6d85a99 100644 | |||
4853 | --- a/doc/rtd/topics/capabilities.rst | |||
4854 | +++ b/doc/rtd/topics/capabilities.rst | |||
4855 | @@ -217,6 +217,7 @@ Get detailed reports of where cloud-init spends most of its time. See | |||
4856 | 217 | * **dump** Machine-readable JSON dump of all cloud-init tracked events. | 217 | * **dump** Machine-readable JSON dump of all cloud-init tracked events. |
4857 | 218 | * **show** show time-ordered report of the cost of operations during each | 218 | * **show** show time-ordered report of the cost of operations during each |
4858 | 219 | boot stage. | 219 | boot stage. |
4859 | 220 | * **boot** show timestamps for kernel start, kernel finish of initialization, and cloud-init start. | ||
4860 | 220 | 221 | ||
4861 | 221 | .. _cli_devel: | 222 | .. _cli_devel: |
4862 | 222 | 223 | ||
4863 | diff --git a/doc/rtd/topics/datasources.rst b/doc/rtd/topics/datasources.rst | |||
4864 | index 648c606..2148cd5 100644 | |||
4865 | --- a/doc/rtd/topics/datasources.rst | |||
4866 | +++ b/doc/rtd/topics/datasources.rst | |||
4867 | @@ -155,6 +155,7 @@ Follow for more information. | |||
4868 | 155 | datasources/configdrive.rst | 155 | datasources/configdrive.rst |
4869 | 156 | datasources/digitalocean.rst | 156 | datasources/digitalocean.rst |
4870 | 157 | datasources/ec2.rst | 157 | datasources/ec2.rst |
4871 | 158 | datasources/exoscale.rst | ||
4872 | 158 | datasources/maas.rst | 159 | datasources/maas.rst |
4873 | 159 | datasources/nocloud.rst | 160 | datasources/nocloud.rst |
4874 | 160 | datasources/opennebula.rst | 161 | datasources/opennebula.rst |
4875 | diff --git a/doc/rtd/topics/datasources/exoscale.rst b/doc/rtd/topics/datasources/exoscale.rst | |||
4876 | 161 | new file mode 100644 | 162 | new file mode 100644 |
4877 | index 0000000..27aec9c | |||
4878 | --- /dev/null | |||
4879 | +++ b/doc/rtd/topics/datasources/exoscale.rst | |||
4880 | @@ -0,0 +1,68 @@ | |||
4881 | 1 | .. _datasource_exoscale: | ||
4882 | 2 | |||
4883 | 3 | Exoscale | ||
4884 | 4 | ======== | ||
4885 | 5 | |||
4886 | 6 | This datasource supports reading from the metadata server used on the | ||
4887 | 7 | `Exoscale platform <https://exoscale.com>`_. | ||
4888 | 8 | |||
4889 | 9 | Use of the Exoscale datasource is recommended to benefit from new features of | ||
4890 | 10 | the Exoscale platform. | ||
4891 | 11 | |||
4892 | 12 | The datasource relies on the availability of a compatible metadata server | ||
4893 | 13 | (``http://169.254.169.254`` is used by default) and its companion password | ||
4894 | 14 | server, reachable at the same address (by default on port 8080). | ||
4895 | 15 | |||
4896 | 16 | Crawling of metadata | ||
4897 | 17 | -------------------- | ||
4898 | 18 | |||
4899 | 19 | The metadata service and password server are crawled slightly differently: | ||
4900 | 20 | |||
4901 | 21 | * The "metadata service" is crawled every boot. | ||
4902 | 22 | * The password server is also crawled every boot (the Exoscale datasource | ||
4903 | 23 | forces the password module to run with "frequency always"). | ||
4904 | 24 | |||
4905 | 25 | In the password server case, the following rules apply in order to enable the | ||
4906 | 26 | "restore instance password" functionality: | ||
4907 | 27 | |||
4908 | 28 | * If a password is returned by the password server, it is then marked "saved" | ||
4909 | 29 | by the cloud-init datasource. Subsequent boots will skip setting the password | ||
4910 | 30 | (the password server will return "saved_password"). | ||
4911 | 31 | * When the instance password is reset (via the Exoscale UI), the password | ||
4912 | 32 | server will return the non-empty password at next boot, therefore causing | ||
4913 | 33 | cloud-init to reset the instance's password. | ||
4914 | 34 | |||
4915 | 35 | Configuration | ||
4916 | 36 | ------------- | ||
4917 | 37 | |||
4918 | 38 | Users of this datasource are discouraged from changing the default settings | ||
4919 | 39 | unless instructed to by Exoscale support. | ||
4920 | 40 | |||
4921 | 41 | The following settings are available and can be set for the datasource in system | ||
4922 | 42 | configuration (in ``/etc/cloud/cloud.cfg.d/``): | ||
4925 | 45 | |||
4926 | 46 | * **metadata_url**: The URL for the metadata service (defaults to | ||
4927 | 47 | ``http://169.254.169.254``) | ||
4928 | 48 | * **api_version**: The API version path on which to query the instance metadata | ||
4929 | 49 | (defaults to ``1.0``) | ||
4930 | 50 | * **password_server_port**: The port (on the metadata server) on which the | ||
4931 | 51 | password server listens (defaults to ``8080``). | ||
4932 | 52 | * **timeout**: The timeout value provided to urlopen for each individual HTTP | ||
4933 | 53 | request (defaults to ``10``) | ||
4934 | 54 | * **retries**: The number of retries that should be done for an HTTP request | ||
4935 | 55 | (defaults to ``6``) | ||
4936 | 56 | |||
4937 | 57 | |||
4938 | 58 | An example configuration with the default values is provided below: | ||
4939 | 59 | |||
4940 | 60 | .. sourcecode:: yaml | ||
4941 | 61 | |||
4942 | 62 | datasource: | ||
4943 | 63 | Exoscale: | ||
4944 | 64 | metadata_url: "http://169.254.169.254" | ||
4945 | 65 | api_version: "1.0" | ||
4946 | 66 | password_server_port: 8080 | ||
4947 | 67 | timeout: 10 | ||
4948 | 68 | retries: 6 | ||
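How these settings combine into the URL the datasource queries can be sketched as follows. Note the trailing ``meta-data`` path segment is an assumption based on the CloudStack-style API that Exoscale exposes, not something stated above; verify against the datasource code:

```shell
# Sketch only: compose the metadata endpoint from the documented settings.
# The "meta-data" path segment is assumed (CloudStack-style layout).
metadata_url="http://169.254.169.254"
api_version="1.0"
endpoint="${metadata_url}/${api_version}/meta-data"
echo "$endpoint"
```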
4949 | diff --git a/doc/rtd/topics/datasources/oracle.rst b/doc/rtd/topics/datasources/oracle.rst | |||
4950 | index f2383ce..98c4657 100644 | |||
4951 | --- a/doc/rtd/topics/datasources/oracle.rst | |||
4952 | +++ b/doc/rtd/topics/datasources/oracle.rst | |||
4953 | @@ -8,7 +8,7 @@ This datasource reads metadata, vendor-data and user-data from | |||
4954 | 8 | 8 | ||
4955 | 9 | Oracle Platform | 9 | Oracle Platform |
4956 | 10 | --------------- | 10 | --------------- |
4958 | 11 | OCI provides bare metal and virtual machines. In both cases, | 11 | OCI provides bare metal and virtual machines. In both cases, |
4959 | 12 | the platform identifies itself via DMI data in the chassis asset tag | 12 | the platform identifies itself via DMI data in the chassis asset tag |
4960 | 13 | with the string 'OracleCloud.com'. | 13 | with the string 'OracleCloud.com'. |
4961 | 14 | 14 | ||
4962 | @@ -22,5 +22,28 @@ Cloud-init has a specific datasource for Oracle in order to: | |||
4963 | 22 | implementation. | 22 | implementation. |
4964 | 23 | 23 | ||
4965 | 24 | 24 | ||
4966 | 25 | Configuration | ||
4967 | 26 | ------------- | ||
4968 | 27 | |||
4969 | 28 | The following configuration can be set for the datasource in system | ||
4970 | 29 | configuration (in ``/etc/cloud/cloud.cfg`` or ``/etc/cloud/cloud.cfg.d/``). | ||
4971 | 30 | |||
4972 | 31 | The settings that may be configured are: | ||
4973 | 32 | |||
4974 | 33 | * **configure_secondary_nics**: A boolean, defaulting to False. If set | ||
4975 | 34 | to True on an OCI Virtual Machine, cloud-init will fetch networking | ||
4976 | 35 | metadata from Oracle's IMDS and use it to configure the non-primary | ||
4977 | 36 | network interface controllers in the system. If set to True on an | ||
4978 | 37 | OCI Bare Metal Machine, it will have no effect (though this may | ||
4979 | 38 | change in the future). | ||
4980 | 39 | |||
4981 | 40 | An example configuration with the default values is provided below: | ||
4982 | 41 | |||
4983 | 42 | .. sourcecode:: yaml | ||
4984 | 43 | |||
4985 | 44 | datasource: | ||
4986 | 45 | Oracle: | ||
4987 | 46 | configure_secondary_nics: false | ||
4988 | 47 | |||
4989 | 25 | .. _Oracle Compute Infrastructure: https://cloud.oracle.com/ | 48 | .. _Oracle Compute Infrastructure: https://cloud.oracle.com/ |
4990 | 26 | .. vi: textwidth=78 | 49 | .. vi: textwidth=78 |
4991 | diff --git a/doc/rtd/topics/debugging.rst b/doc/rtd/topics/debugging.rst | |||
4992 | index 51363ea..e13d915 100644 | |||
4993 | --- a/doc/rtd/topics/debugging.rst | |||
4994 | +++ b/doc/rtd/topics/debugging.rst | |||
4995 | @@ -68,6 +68,19 @@ subcommands default to reading /var/log/cloud-init.log. | |||
4996 | 68 | 00.00100s (modules-final/config-rightscale_userdata) | 68 | 00.00100s (modules-final/config-rightscale_userdata) |
4997 | 69 | ... | 69 | ... |
4998 | 70 | 70 | ||
4999 | 71 | * ``analyze boot`` Make subprocess calls to gather relevant pre-cloud-init | ||
5000 | 72 | timestamps, such as the kernel start, kernel finish boot, and cloud-init start. |
PASSED: Continuous integration, rev:5463fec28e79740fff2382c504f94756a6eda6e2
https://jenkins.ubuntu.com/server/job/cloud-init-ci/1071/
Executed test runs:
SUCCESS: Checkout
SUCCESS: Unit & Style Tests
SUCCESS: Ubuntu LTS: Build
SUCCESS: Ubuntu LTS: Integration
IN_PROGRESS: Declarative: Post Actions
Click here to trigger a rebuild:
https://jenkins.ubuntu.com/server/job/cloud-init-ci/1071//rebuild