Merge ~chad.smith/cloud-init:ubuntu/bionic into cloud-init:ubuntu/bionic
Proposed by: Chad Smith
Status: Merged
Merged at revision: 4a1aaa18595fa663e1a38dd3ac8f73231ec69a7f
Proposed branch: ~chad.smith/cloud-init:ubuntu/bionic
Merge into: cloud-init:ubuntu/bionic
Diff against target: 7789 lines (+3838/-768), 98 files modified
ChangeLog (+54/-0) HACKING.rst (+2/-2) bash_completion/cloud-init (+4/-1) cloudinit/cmd/cloud_id.py (+90/-0) cloudinit/cmd/devel/logs.py (+23/-8) cloudinit/cmd/devel/net_convert.py (+10/-5) cloudinit/cmd/devel/render.py (+24/-11) cloudinit/cmd/devel/tests/test_logs.py (+37/-6) cloudinit/cmd/devel/tests/test_render.py (+44/-1) cloudinit/cmd/main.py (+4/-16) cloudinit/cmd/query.py (+24/-12) cloudinit/cmd/tests/test_cloud_id.py (+127/-0) cloudinit/cmd/tests/test_query.py (+71/-5) cloudinit/config/cc_disk_setup.py (+1/-1) cloudinit/config/cc_lxd.py (+1/-1) cloudinit/config/cc_resizefs.py (+7/-0) cloudinit/config/cc_set_passwords.py (+1/-1) cloudinit/config/cc_write_files.py (+6/-1) cloudinit/config/tests/test_set_passwords.py (+40/-0) cloudinit/dhclient_hook.py (+72/-38) cloudinit/handlers/jinja_template.py (+9/-1) cloudinit/net/__init__.py (+38/-4) cloudinit/net/dhcp.py (+76/-25) cloudinit/net/eni.py (+15/-14) cloudinit/net/netplan.py (+3/-3) cloudinit/net/sysconfig.py (+61/-5) cloudinit/net/tests/test_dhcp.py (+47/-4) cloudinit/net/tests/test_init.py (+51/-1) cloudinit/sources/DataSourceAliYun.py (+5/-15) cloudinit/sources/DataSourceAltCloud.py (+22/-11) cloudinit/sources/DataSourceAzure.py (+82/-31) cloudinit/sources/DataSourceBigstep.py (+4/-0) cloudinit/sources/DataSourceCloudSigma.py (+5/-1) cloudinit/sources/DataSourceConfigDrive.py (+12/-0) cloudinit/sources/DataSourceEc2.py (+59/-56) cloudinit/sources/DataSourceIBMCloud.py (+4/-0) cloudinit/sources/DataSourceMAAS.py (+4/-0) cloudinit/sources/DataSourceNoCloud.py (+52/-1) cloudinit/sources/DataSourceNone.py (+4/-0) cloudinit/sources/DataSourceOVF.py (+36/-26) cloudinit/sources/DataSourceOpenNebula.py (+9/-1) cloudinit/sources/DataSourceOracle.py (+4/-0) cloudinit/sources/DataSourceScaleway.py (+10/-1) cloudinit/sources/DataSourceSmartOS.py (+3/-0) cloudinit/sources/__init__.py (+104/-21) cloudinit/sources/helpers/netlink.py (+250/-0) cloudinit/sources/helpers/tests/test_netlink.py (+373/-0) 
cloudinit/sources/helpers/vmware/imc/config_nic.py (+2/-3) cloudinit/sources/tests/test_init.py (+83/-3) cloudinit/sources/tests/test_oracle.py (+8/-0) cloudinit/temp_utils.py (+2/-2) cloudinit/tests/test_dhclient_hook.py (+105/-0) cloudinit/tests/test_temp_utils.py (+17/-1) cloudinit/tests/test_url_helper.py (+24/-1) cloudinit/tests/test_util.py (+82/-17) cloudinit/url_helper.py (+25/-6) cloudinit/util.py (+25/-3) cloudinit/version.py (+1/-1) config/cloud.cfg.tmpl (+11/-1) debian/changelog (+75/-0) doc/rtd/topics/datasources.rst (+60/-1) doc/rtd/topics/datasources/azure.rst (+65/-38) doc/rtd/topics/instancedata.rst (+137/-46) doc/rtd/topics/network-config-format-v1.rst (+1/-1) packages/redhat/cloud-init.spec.in (+1/-0) packages/suse/cloud-init.spec.in (+1/-0) setup.py (+2/-1) systemd/cloud-init.service.tmpl (+1/-2) templates/sources.list.ubuntu.tmpl (+17/-17) tests/cloud_tests/releases.yaml (+16/-0) tests/cloud_tests/testcases/base.py (+15/-3) tests/cloud_tests/testcases/modules/apt_configure_primary.py (+9/-5) tests/cloud_tests/testcases/modules/apt_configure_primary.yaml (+0/-7) tests/unittests/test_builtin_handlers.py (+25/-0) tests/unittests/test_cli.py (+8/-8) tests/unittests/test_datasource/test_aliyun.py (+4/-0) tests/unittests/test_datasource/test_altcloud.py (+67/-51) tests/unittests/test_datasource/test_azure.py (+262/-79) tests/unittests/test_datasource/test_cloudsigma.py (+6/-0) tests/unittests/test_datasource/test_configdrive.py (+3/-0) tests/unittests/test_datasource/test_ec2.py (+37/-23) tests/unittests/test_datasource/test_ibmcloud.py (+39/-1) tests/unittests/test_datasource/test_nocloud.py (+98/-41) tests/unittests/test_datasource/test_opennebula.py (+4/-0) tests/unittests/test_datasource/test_ovf.py (+119/-39) tests/unittests/test_datasource/test_scaleway.py (+72/-4) tests/unittests/test_datasource/test_smartos.py (+7/-0) tests/unittests/test_ds_identify.py (+16/-1) tests/unittests/test_handler/test_handler_lxd.py (+1/-1) 
tests/unittests/test_handler/test_handler_resizefs.py (+42/-10) tests/unittests/test_handler/test_handler_write_files.py (+12/-0) tests/unittests/test_net.py (+137/-6) tests/unittests/test_util.py (+6/-0) tests/unittests/test_vmware_config_file.py (+52/-6) tools/ds-identify (+32/-6) tools/run-container (+1/-0) tox.ini (+2/-2) udev/66-azure-ephemeral.rules (+17/-1)
Related bugs: (none listed)
Reviewer | Review Type | Date Requested | Status
---|---|---|---
Server Team CI bot | continuous-integration | | Approve
cloud-init Commiters | | | Pending
Review via email: mp+362281@code.launchpad.net
Commit message
sync new upstream snapshot for release into bionic via SRU
Description of the change
Revision history for this message
Server Team CI bot (server-team-bot) wrote:
review: Approve (continuous-integration)
Preview Diff
1 | diff --git a/ChangeLog b/ChangeLog |
2 | index 9c043b0..8fa6fdd 100644 |
3 | --- a/ChangeLog |
4 | +++ b/ChangeLog |
5 | @@ -1,3 +1,57 @@ |
6 | +18.5: |
7 | + - tests: add Disco release [Joshua Powers] |
8 | + - net: render 'metric' values in per-subnet routes (LP: #1805871) |
9 | + - write_files: add support for appending to files. [James Baxter] |
10 | + - config: On ubuntu select cloud archive mirrors for armel, armhf, arm64. |
11 | + (LP: #1805854) |
12 | + - dhclient-hook: cleanups, tests and fix a bug on 'down' event. |
13 | + - NoCloud: Allow top level 'network' key in network-config. (LP: #1798117) |
14 | + - ovf: Fix ovf network config generation gateway/routes (LP: #1806103) |
15 | + - azure: detect vnet migration via netlink media change event |
16 | + [Tamilmani Manoharan] |
17 | + - Azure: fix copy/paste error in error handling when reading azure ovf. |
18 | + [Adam DePue] |
19 | + - tests: fix incorrect order of mocks in test_handle_zfs_root. |
20 | + - doc: Change dns_nameserver property to dns_nameservers. [Tomer Cohen] |
21 | + - OVF: identify label iso9660 filesystems with label 'OVF ENV'. |
22 | + - logs: collect-logs ignore instance-data-sensitive.json on non-root user |
23 | + (LP: #1805201) |
24 | + - net: Ephemeral*Network: add connectivity check via URL |
25 | + - azure: _poll_imds only retry on 404. Fail on Timeout (LP: #1803598) |
26 | + - resizefs: Prefix discovered devpath with '/dev/' when path does not |
27 | + exist [Igor Galić] |
28 | + - azure: retry imds polling on requests.Timeout (LP: #1800223) |
29 | + - azure: Accept variation in error msg from mount for ntfs volumes |
30 | + [Jason Zions] (LP: #1799338) |
31 | + - azure: fix regression introduced when persisting ephemeral dhcp lease |
32 | + [asakkurr] |
33 | + - azure: add udev rules to create cloud-init Gen2 disk name symlinks |
34 | + (LP: #1797480) |
35 | + - tests: ec2 mock missing httpretty user-data and instance-identity routes |
36 | + - azure: remove /etc/netplan/90-hotplug-azure.yaml when net from IMDS |
37 | + - azure: report ready to fabric after reprovision and reduce logging |
38 | + [asakkurr] (LP: #1799594) |
39 | + - query: better error when missing read permission on instance-data |
40 | + - instance-data: fallback to instance-data.json if sensitive is absent. |
41 | + (LP: #1798189) |
42 | + - docs: remove colon from network v1 config example. [Tomer Cohen] |
43 | + - Add cloud-id binary to packages for SUSE [Jason Zions] |
44 | + - systemd: On SUSE ensure cloud-init.service runs before wicked |
45 | + [Robert Schweikert] (LP: #1799709) |
46 | + - update detection of openSUSE variants [Robert Schweikert] |
47 | + - azure: Add apply_network_config option to disable network from IMDS |
48 | + (LP: #1798424) |
49 | + - Correct spelling in an error message (udevadm). [Katie McLaughlin] |
50 | + - tests: meta_data key changed to meta-data in ec2 instance-data.json |
51 | + (LP: #1797231) |
52 | + - tests: fix kvm integration test to assert flexible config-disk path |
53 | + (LP: #1797199) |
54 | + - tools: Add cloud-id command line utility |
55 | + - instance-data: Add standard keys platform and subplatform. Refactor ec2. |
56 | + - net: ignore nics that have "zero" mac address. (LP: #1796917) |
57 | + - tests: fix apt_configure_primary to be more flexible |
58 | + - Ubuntu: update sources.list to comment out deb-src entries. (LP: #74747) |
59 | + |
60 | 18.4: |
61 | - add rtd example docs about new standardized keys |
62 | - use ds._crawled_metadata instance attribute if set when writing |
63 | diff --git a/HACKING.rst b/HACKING.rst |
64 | index 3bb555c..fcdfa4f 100644 |
65 | --- a/HACKING.rst |
66 | +++ b/HACKING.rst |
67 | @@ -11,10 +11,10 @@ Do these things once |
68 | |
69 | * To contribute, you must sign the Canonical `contributor license agreement`_ |
70 | |
71 | - If you have already signed it as an individual, your Launchpad user will be listed in the `contributor-agreement-canonical`_ group. Unfortunately there is no easy way to check if an organization or company you are doing work for has signed. If you are unsure or have questions, email `Scott Moser <mailto:scott.moser@canonical.com>`_ or ping smoser in ``#cloud-init`` channel via freenode. |
72 | + If you have already signed it as an individual, your Launchpad user will be listed in the `contributor-agreement-canonical`_ group. Unfortunately there is no easy way to check if an organization or company you are doing work for has signed. If you are unsure or have questions, email `Josh Powers <mailto:josh.powers@canonical.com>`_ or ping powersj in ``#cloud-init`` channel via freenode. |
73 | |
74 | When prompted for 'Project contact' or 'Canonical Project Manager' enter |
75 | - 'Scott Moser'. |
76 | + 'Josh Powers'. |
77 | |
78 | * Configure git with your email and name for commit messages. |
79 | |
80 | diff --git a/bash_completion/cloud-init b/bash_completion/cloud-init |
81 | index 8c25032..a9577e9 100644 |
82 | --- a/bash_completion/cloud-init |
83 | +++ b/bash_completion/cloud-init |
84 | @@ -30,7 +30,10 @@ _cloudinit_complete() |
85 | devel) |
86 | COMPREPLY=($(compgen -W "--help schema net-convert" -- $cur_word)) |
87 | ;; |
88 | - dhclient-hook|features) |
89 | + dhclient-hook) |
90 | + COMPREPLY=($(compgen -W "--help up down" -- $cur_word)) |
91 | + ;; |
92 | + features) |
93 | COMPREPLY=($(compgen -W "--help" -- $cur_word)) |
94 | ;; |
95 | init) |
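The completion hunk above hinges on bash's `compgen -W`, which filters a word list by the prefix the user has typed so far. A standalone sketch of that pattern (the `cur_word` value is hypothetical, not from the diff):

```shell
#!/usr/bin/env bash
# compgen -W filters the candidate words by the current prefix,
# exactly as the new dhclient-hook completion branch does.
cur_word="d"
COMPREPLY=($(compgen -W "--help up down" -- "$cur_word"))
echo "${COMPREPLY[@]}"  # prints "down"
```

With an empty prefix, all three candidates are offered, which is what the user sees after `cloud-init dhclient-hook <TAB>`.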
96 | diff --git a/cloudinit/cmd/cloud_id.py b/cloudinit/cmd/cloud_id.py |
97 | new file mode 100755 |
98 | index 0000000..9760892 |
99 | --- /dev/null |
100 | +++ b/cloudinit/cmd/cloud_id.py |
101 | @@ -0,0 +1,90 @@ |
102 | +# This file is part of cloud-init. See LICENSE file for license information. |
103 | + |
104 | +"""Commandline utility to list the canonical cloud-id for an instance.""" |
105 | + |
106 | +import argparse |
107 | +import json |
108 | +import sys |
109 | + |
110 | +from cloudinit.sources import ( |
111 | + INSTANCE_JSON_FILE, METADATA_UNKNOWN, canonical_cloud_id) |
112 | + |
113 | +DEFAULT_INSTANCE_JSON = '/run/cloud-init/%s' % INSTANCE_JSON_FILE |
114 | + |
115 | +NAME = 'cloud-id' |
116 | + |
117 | + |
118 | +def get_parser(parser=None): |
119 | + """Build or extend an arg parser for the cloud-id utility. |
120 | + |
121 | + @param parser: Optional existing ArgumentParser instance representing the |
122 | + query subcommand which will be extended to support the args of |
123 | + this utility. |
124 | + |
125 | + @returns: ArgumentParser with proper argument configuration. |
126 | + """ |
127 | + if not parser: |
128 | + parser = argparse.ArgumentParser( |
129 | + prog=NAME, |
130 | + description='Report the canonical cloud-id for this instance') |
131 | + parser.add_argument( |
132 | + '-j', '--json', action='store_true', default=False, |
133 | + help='Report all standardized cloud-id information as json.') |
134 | + parser.add_argument( |
135 | + '-l', '--long', action='store_true', default=False, |
136 | + help='Report extended cloud-id information as tab-delimited string.') |
137 | + parser.add_argument( |
138 | + '-i', '--instance-data', type=str, default=DEFAULT_INSTANCE_JSON, |
139 | + help=('Path to instance-data.json file. Default is %s' % |
140 | + DEFAULT_INSTANCE_JSON)) |
141 | + return parser |
142 | + |
143 | + |
144 | +def error(msg): |
145 | + sys.stderr.write('ERROR: %s\n' % msg) |
146 | + return 1 |
147 | + |
148 | + |
149 | +def handle_args(name, args): |
150 | + """Handle calls to 'cloud-id' cli. |
151 | + |
152 | + Print the canonical cloud-id on which the instance is running. |
153 | + |
154 | + @return: 0 on success, 1 otherwise. |
155 | + """ |
156 | + try: |
157 | + instance_data = json.load(open(args.instance_data)) |
158 | + except IOError: |
159 | + return error( |
160 | + "File not found '%s'. Provide a path to instance data json file" |
161 | + ' using --instance-data' % args.instance_data) |
162 | + except ValueError as e: |
163 | + return error( |
164 | + "File '%s' is not valid json. %s" % (args.instance_data, e)) |
165 | + v1 = instance_data.get('v1', {}) |
166 | + cloud_id = canonical_cloud_id( |
167 | + v1.get('cloud_name', METADATA_UNKNOWN), |
168 | + v1.get('region', METADATA_UNKNOWN), |
169 | + v1.get('platform', METADATA_UNKNOWN)) |
170 | + if args.json: |
171 | + v1['cloud_id'] = cloud_id |
172 | + response = json.dumps( # Pretty, sorted json |
173 | + v1, indent=1, sort_keys=True, separators=(',', ': ')) |
174 | + elif args.long: |
175 | + response = '%s\t%s' % (cloud_id, v1.get('region', METADATA_UNKNOWN)) |
176 | + else: |
177 | + response = cloud_id |
178 | + sys.stdout.write('%s\n' % response) |
179 | + return 0 |
180 | + |
181 | + |
182 | +def main(): |
183 | + """Tool to query specific instance-data values.""" |
184 | + parser = get_parser() |
185 | + sys.exit(handle_args(NAME, parser.parse_args())) |
186 | + |
187 | + |
188 | +if __name__ == '__main__': |
189 | + main() |
190 | + |
191 | +# vi: ts=4 expandtab |
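The core of the new `cloud-id` utility is plain dict access over the `v1` key of instance-data.json, with a sentinel fallback for missing fields. A minimal sketch of that lookup (the sample JSON and the `'unknown'` sentinel value are illustrative assumptions, not taken verbatim from the module):

```python
import json

METADATA_UNKNOWN = 'unknown'  # assumed sentinel; mirrors the fallback used above

# hypothetical /run/cloud-init/instance-data.json content, for illustration only
raw = '{"v1": {"cloud_name": "aws", "region": "us-east-2", "platform": "ec2"}}'

v1 = json.loads(raw).get('v1', {})
cloud_name = v1.get('cloud_name', METADATA_UNKNOWN)
region = v1.get('region', METADATA_UNKNOWN)
# mirrors the tab-delimited shape produced by the --long flag
print('%s\t%s' % (cloud_name, region))
```

Fields absent from `v1` simply fall back to the sentinel, so the command degrades gracefully on sparse instance-data.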
192 | diff --git a/cloudinit/cmd/devel/logs.py b/cloudinit/cmd/devel/logs.py |
193 | index df72520..4c086b5 100644 |
194 | --- a/cloudinit/cmd/devel/logs.py |
195 | +++ b/cloudinit/cmd/devel/logs.py |
196 | @@ -5,14 +5,16 @@ |
197 | """Define 'collect-logs' utility and handler to include in cloud-init cmd.""" |
198 | |
199 | import argparse |
200 | -from cloudinit.util import ( |
201 | - ProcessExecutionError, chdir, copy, ensure_dir, subp, write_file) |
202 | -from cloudinit.temp_utils import tempdir |
203 | from datetime import datetime |
204 | import os |
205 | import shutil |
206 | import sys |
207 | |
208 | +from cloudinit.sources import INSTANCE_JSON_SENSITIVE_FILE |
209 | +from cloudinit.temp_utils import tempdir |
210 | +from cloudinit.util import ( |
211 | + ProcessExecutionError, chdir, copy, ensure_dir, subp, write_file) |
212 | + |
213 | |
214 | CLOUDINIT_LOGS = ['/var/log/cloud-init.log', '/var/log/cloud-init-output.log'] |
215 | CLOUDINIT_RUN_DIR = '/run/cloud-init' |
216 | @@ -46,6 +48,13 @@ def get_parser(parser=None): |
217 | return parser |
218 | |
219 | |
220 | +def _copytree_ignore_sensitive_files(curdir, files): |
221 | + """Return a list of files to ignore if we are non-root""" |
222 | + if os.getuid() == 0: |
223 | + return () |
224 | + return (INSTANCE_JSON_SENSITIVE_FILE,) # Ignore root-permissioned files |
225 | + |
226 | + |
227 | def _write_command_output_to_file(cmd, filename, msg, verbosity): |
228 | """Helper which runs a command and writes output or error to filename.""" |
229 | try: |
230 | @@ -78,6 +87,11 @@ def collect_logs(tarfile, include_userdata, verbosity=0): |
231 | @param tarfile: The path of the tar-gzipped file to create. |
232 | @param include_userdata: Boolean, true means include user-data. |
233 | """ |
234 | + if include_userdata and os.getuid() != 0: |
235 | + sys.stderr.write( |
236 | + "To include userdata, root user is required." |
237 | + " Try sudo cloud-init collect-logs\n") |
238 | + return 1 |
239 | tarfile = os.path.abspath(tarfile) |
240 | date = datetime.utcnow().date().strftime('%Y-%m-%d') |
241 | log_dir = 'cloud-init-logs-{0}'.format(date) |
242 | @@ -110,7 +124,8 @@ def collect_logs(tarfile, include_userdata, verbosity=0): |
243 | ensure_dir(run_dir) |
244 | if os.path.exists(CLOUDINIT_RUN_DIR): |
245 | shutil.copytree(CLOUDINIT_RUN_DIR, |
246 | - os.path.join(run_dir, 'cloud-init')) |
247 | + os.path.join(run_dir, 'cloud-init'), |
248 | + ignore=_copytree_ignore_sensitive_files) |
249 | _debug("collected dir %s\n" % CLOUDINIT_RUN_DIR, 1, verbosity) |
250 | else: |
251 | _debug("directory '%s' did not exist\n" % CLOUDINIT_RUN_DIR, 1, |
252 | @@ -118,21 +133,21 @@ def collect_logs(tarfile, include_userdata, verbosity=0): |
253 | with chdir(tmp_dir): |
254 | subp(['tar', 'czvf', tarfile, log_dir.replace(tmp_dir + '/', '')]) |
255 | sys.stderr.write("Wrote %s\n" % tarfile) |
256 | + return 0 |
257 | |
258 | |
259 | def handle_collect_logs_args(name, args): |
260 | """Handle calls to 'cloud-init collect-logs' as a subcommand.""" |
261 | - collect_logs(args.tarfile, args.userdata, args.verbosity) |
262 | + return collect_logs(args.tarfile, args.userdata, args.verbosity) |
263 | |
264 | |
265 | def main(): |
266 | """Tool to collect and tar all cloud-init related logs.""" |
267 | parser = get_parser() |
268 | - handle_collect_logs_args('collect-logs', parser.parse_args()) |
269 | - return 0 |
270 | + return handle_collect_logs_args('collect-logs', parser.parse_args()) |
271 | |
272 | |
273 | if __name__ == '__main__': |
274 | - main() |
275 | + sys.exit(main()) |
276 | |
277 | # vi: ts=4 expandtab |
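The `ignore=` hook passed to `shutil.copytree` above is called once per directory with that directory's file list, and whatever names it returns are skipped. A self-contained sketch of the mechanism (the directory layout here is hypothetical; the sensitive filename matches the one the diff protects):

```python
import os
import shutil
import tempfile

SENSITIVE = 'instance-data-sensitive.json'

def ignore_sensitive(curdir, files):
    # copytree calls this for every directory it copies;
    # returning a filename excludes it from the copy.
    return (SENSITIVE,) if SENSITIVE in files else ()

src = tempfile.mkdtemp()
with open(os.path.join(src, 'results.json'), 'w') as f:
    f.write('ok')
with open(os.path.join(src, SENSITIVE), 'w') as f:
    f.write('secret')

dst = os.path.join(tempfile.mkdtemp(), 'copy')
shutil.copytree(src, dst, ignore=ignore_sensitive)

print(sorted(os.listdir(dst)))  # the sensitive file is not copied
```

The real helper keys the decision off `os.getuid()` instead, returning the sensitive name only for non-root callers.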
278 | diff --git a/cloudinit/cmd/devel/net_convert.py b/cloudinit/cmd/devel/net_convert.py |
279 | index a0f58a0..1ad7e0b 100755 |
280 | --- a/cloudinit/cmd/devel/net_convert.py |
281 | +++ b/cloudinit/cmd/devel/net_convert.py |
282 | @@ -9,6 +9,7 @@ import yaml |
283 | |
284 | from cloudinit.sources.helpers import openstack |
285 | from cloudinit.sources import DataSourceAzure as azure |
286 | +from cloudinit.sources import DataSourceOVF as ovf |
287 | |
288 | from cloudinit import distros |
289 | from cloudinit.net import eni, netplan, network_state, sysconfig |
290 | @@ -31,7 +32,7 @@ def get_parser(parser=None): |
291 | metavar="PATH", required=True) |
292 | parser.add_argument("-k", "--kind", |
293 | choices=['eni', 'network_data.json', 'yaml', |
294 | - 'azure-imds'], |
295 | + 'azure-imds', 'vmware-imc'], |
296 | required=True) |
297 | parser.add_argument("-d", "--directory", |
298 | metavar="PATH", |
299 | @@ -76,7 +77,6 @@ def handle_args(name, args): |
300 | net_data = args.network_data.read() |
301 | if args.kind == "eni": |
302 | pre_ns = eni.convert_eni_data(net_data) |
303 | - ns = network_state.parse_net_config_data(pre_ns) |
304 | elif args.kind == "yaml": |
305 | pre_ns = yaml.load(net_data) |
306 | if 'network' in pre_ns: |
307 | @@ -85,15 +85,16 @@ def handle_args(name, args): |
308 | sys.stderr.write('\n'.join( |
309 | ["Input YAML", |
310 | yaml.dump(pre_ns, default_flow_style=False, indent=4), ""])) |
311 | - ns = network_state.parse_net_config_data(pre_ns) |
312 | elif args.kind == 'network_data.json': |
313 | pre_ns = openstack.convert_net_json( |
314 | json.loads(net_data), known_macs=known_macs) |
315 | - ns = network_state.parse_net_config_data(pre_ns) |
316 | elif args.kind == 'azure-imds': |
317 | pre_ns = azure.parse_network_config(json.loads(net_data)) |
318 | - ns = network_state.parse_net_config_data(pre_ns) |
319 | + elif args.kind == 'vmware-imc': |
320 | + config = ovf.Config(ovf.ConfigFile(args.network_data.name)) |
321 | + pre_ns = ovf.get_network_config_from_conf(config, False) |
322 | |
323 | + ns = network_state.parse_net_config_data(pre_ns) |
324 | if not ns: |
325 | raise RuntimeError("No valid network_state object created from" |
326 | "input data") |
327 | @@ -111,6 +112,10 @@ def handle_args(name, args): |
328 | elif args.output_kind == "netplan": |
329 | r_cls = netplan.Renderer |
330 | config = distro.renderer_configs.get('netplan') |
331 | + # don't run netplan generate/apply |
332 | + config['postcmds'] = False |
333 | + # trim leading slash |
334 | + config['netplan_path'] = config['netplan_path'][1:] |
335 | else: |
336 | r_cls = sysconfig.Renderer |
337 | config = distro.renderer_configs.get('sysconfig') |
338 | diff --git a/cloudinit/cmd/devel/render.py b/cloudinit/cmd/devel/render.py |
339 | index 2ba6b68..1bc2240 100755 |
340 | --- a/cloudinit/cmd/devel/render.py |
341 | +++ b/cloudinit/cmd/devel/render.py |
342 | @@ -8,11 +8,10 @@ import sys |
343 | |
344 | from cloudinit.handlers.jinja_template import render_jinja_payload_from_file |
345 | from cloudinit import log |
346 | -from cloudinit.sources import INSTANCE_JSON_FILE |
347 | +from cloudinit.sources import INSTANCE_JSON_FILE, INSTANCE_JSON_SENSITIVE_FILE |
348 | from . import addLogHandlerCLI, read_cfg_paths |
349 | |
350 | NAME = 'render' |
351 | -DEFAULT_INSTANCE_DATA = '/run/cloud-init/instance-data.json' |
352 | |
353 | LOG = log.getLogger(NAME) |
354 | |
355 | @@ -47,12 +46,22 @@ def handle_args(name, args): |
356 | @return 0 on success, 1 on failure. |
357 | """ |
358 | addLogHandlerCLI(LOG, log.DEBUG if args.debug else log.WARNING) |
359 | - if not args.instance_data: |
360 | - paths = read_cfg_paths() |
361 | - instance_data_fn = os.path.join( |
362 | - paths.run_dir, INSTANCE_JSON_FILE) |
363 | - else: |
364 | + if args.instance_data: |
365 | instance_data_fn = args.instance_data |
366 | + else: |
367 | + paths = read_cfg_paths() |
368 | + uid = os.getuid() |
369 | + redacted_data_fn = os.path.join(paths.run_dir, INSTANCE_JSON_FILE) |
370 | + if uid == 0: |
371 | + instance_data_fn = os.path.join( |
372 | + paths.run_dir, INSTANCE_JSON_SENSITIVE_FILE) |
373 | + if not os.path.exists(instance_data_fn): |
374 | + LOG.warning( |
375 | + 'Missing root-readable %s. Using redacted %s instead.', |
376 | + instance_data_fn, redacted_data_fn) |
377 | + instance_data_fn = redacted_data_fn |
378 | + else: |
379 | + instance_data_fn = redacted_data_fn |
380 | if not os.path.exists(instance_data_fn): |
381 | LOG.error('Missing instance-data.json file: %s', instance_data_fn) |
382 | return 1 |
383 | @@ -62,10 +71,14 @@ def handle_args(name, args): |
384 | except IOError: |
385 | LOG.error('Missing user-data file: %s', args.user_data) |
386 | return 1 |
387 | - rendered_payload = render_jinja_payload_from_file( |
388 | - payload=user_data, payload_fn=args.user_data, |
389 | - instance_data_file=instance_data_fn, |
390 | - debug=True if args.debug else False) |
391 | + try: |
392 | + rendered_payload = render_jinja_payload_from_file( |
393 | + payload=user_data, payload_fn=args.user_data, |
394 | + instance_data_file=instance_data_fn, |
395 | + debug=True if args.debug else False) |
396 | + except RuntimeError as e: |
397 | + LOG.error('Cannot render from instance data: %s', str(e)) |
398 | + return 1 |
399 | if not rendered_payload: |
400 | LOG.error('Unable to render user-data file: %s', args.user_data) |
401 | return 1 |
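The file-selection branch added to `render.handle_args` reduces to: root prefers the sensitive file but falls back to the redacted one when it is missing, while non-root always gets the redacted file. A sketch of that decision in isolation (the helper name and the stubbed `exists` callable are hypothetical):

```python
import os

def pick_instance_data(run_dir, uid, exists=os.path.exists):
    # mirrors the uid-based branch added above
    redacted = os.path.join(run_dir, 'instance-data.json')
    sensitive = os.path.join(run_dir, 'instance-data-sensitive.json')
    if uid != 0:
        return redacted
    return sensitive if exists(sensitive) else redacted

# exercise with a stubbed exists() so the sketch needs no real files
print(pick_instance_data('/run/cloud-init', 100, exists=lambda p: True))
print(pick_instance_data('/run/cloud-init', 0, exists=lambda p: True))
print(pick_instance_data('/run/cloud-init', 0, exists=lambda p: False))
```

The fallback case is the one the new `test_handle_args_root_fallback_from_sensitive_instance_data` test covers, including the warning it logs.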
402 | diff --git a/cloudinit/cmd/devel/tests/test_logs.py b/cloudinit/cmd/devel/tests/test_logs.py |
403 | index 98b4756..4951797 100644 |
404 | --- a/cloudinit/cmd/devel/tests/test_logs.py |
405 | +++ b/cloudinit/cmd/devel/tests/test_logs.py |
406 | @@ -1,13 +1,17 @@ |
407 | # This file is part of cloud-init. See LICENSE file for license information. |
408 | |
409 | -from cloudinit.cmd.devel import logs |
410 | -from cloudinit.util import ensure_dir, load_file, subp, write_file |
411 | -from cloudinit.tests.helpers import FilesystemMockingTestCase, wrap_and_call |
412 | from datetime import datetime |
413 | -import mock |
414 | import os |
415 | +from six import StringIO |
416 | + |
417 | +from cloudinit.cmd.devel import logs |
418 | +from cloudinit.sources import INSTANCE_JSON_SENSITIVE_FILE |
419 | +from cloudinit.tests.helpers import ( |
420 | + FilesystemMockingTestCase, mock, wrap_and_call) |
421 | +from cloudinit.util import ensure_dir, load_file, subp, write_file |
422 | |
423 | |
424 | +@mock.patch('cloudinit.cmd.devel.logs.os.getuid') |
425 | class TestCollectLogs(FilesystemMockingTestCase): |
426 | |
427 | def setUp(self): |
428 | @@ -15,14 +19,29 @@ class TestCollectLogs(FilesystemMockingTestCase): |
429 | self.new_root = self.tmp_dir() |
430 | self.run_dir = self.tmp_path('run', self.new_root) |
431 | |
432 | - def test_collect_logs_creates_tarfile(self): |
433 | + def test_collect_logs_with_userdata_requires_root_user(self, m_getuid): |
434 | + """collect-logs errors when non-root user collects userdata .""" |
435 | + m_getuid.return_value = 100 # non-root |
436 | + output_tarfile = self.tmp_path('logs.tgz') |
437 | + with mock.patch('sys.stderr', new_callable=StringIO) as m_stderr: |
438 | + self.assertEqual( |
439 | + 1, logs.collect_logs(output_tarfile, include_userdata=True)) |
440 | + self.assertEqual( |
441 | + 'To include userdata, root user is required.' |
442 | + ' Try sudo cloud-init collect-logs\n', |
443 | + m_stderr.getvalue()) |
444 | + |
445 | + def test_collect_logs_creates_tarfile(self, m_getuid): |
446 | """collect-logs creates a tarfile with all related cloud-init info.""" |
447 | + m_getuid.return_value = 100 |
448 | log1 = self.tmp_path('cloud-init.log', self.new_root) |
449 | write_file(log1, 'cloud-init-log') |
450 | log2 = self.tmp_path('cloud-init-output.log', self.new_root) |
451 | write_file(log2, 'cloud-init-output-log') |
452 | ensure_dir(self.run_dir) |
453 | write_file(self.tmp_path('results.json', self.run_dir), 'results') |
454 | + write_file(self.tmp_path(INSTANCE_JSON_SENSITIVE_FILE, self.run_dir), |
455 | + 'sensitive') |
456 | output_tarfile = self.tmp_path('logs.tgz') |
457 | |
458 | date = datetime.utcnow().date().strftime('%Y-%m-%d') |
459 | @@ -59,6 +78,11 @@ class TestCollectLogs(FilesystemMockingTestCase): |
460 | # unpack the tarfile and check file contents |
461 | subp(['tar', 'zxvf', output_tarfile, '-C', self.new_root]) |
462 | out_logdir = self.tmp_path(date_logdir, self.new_root) |
463 | + self.assertFalse( |
464 | + os.path.exists( |
465 | + os.path.join(out_logdir, 'run', 'cloud-init', |
466 | + INSTANCE_JSON_SENSITIVE_FILE)), |
467 | + 'Unexpected file found: %s' % INSTANCE_JSON_SENSITIVE_FILE) |
468 | self.assertEqual( |
469 | '0.7fake\n', |
470 | load_file(os.path.join(out_logdir, 'dpkg-version'))) |
471 | @@ -82,8 +106,9 @@ class TestCollectLogs(FilesystemMockingTestCase): |
472 | os.path.join(out_logdir, 'run', 'cloud-init', 'results.json'))) |
473 | fake_stderr.write.assert_any_call('Wrote %s\n' % output_tarfile) |
474 | |
475 | - def test_collect_logs_includes_optional_userdata(self): |
476 | + def test_collect_logs_includes_optional_userdata(self, m_getuid): |
477 | """collect-logs include userdata when --include-userdata is set.""" |
478 | + m_getuid.return_value = 0 |
479 | log1 = self.tmp_path('cloud-init.log', self.new_root) |
480 | write_file(log1, 'cloud-init-log') |
481 | log2 = self.tmp_path('cloud-init-output.log', self.new_root) |
482 | @@ -92,6 +117,8 @@ class TestCollectLogs(FilesystemMockingTestCase): |
483 | write_file(userdata, 'user-data') |
484 | ensure_dir(self.run_dir) |
485 | write_file(self.tmp_path('results.json', self.run_dir), 'results') |
486 | + write_file(self.tmp_path(INSTANCE_JSON_SENSITIVE_FILE, self.run_dir), |
487 | + 'sensitive') |
488 | output_tarfile = self.tmp_path('logs.tgz') |
489 | |
490 | date = datetime.utcnow().date().strftime('%Y-%m-%d') |
491 | @@ -132,4 +159,8 @@ class TestCollectLogs(FilesystemMockingTestCase): |
492 | self.assertEqual( |
493 | 'user-data', |
494 | load_file(os.path.join(out_logdir, 'user-data.txt'))) |
495 | + self.assertEqual( |
496 | + 'sensitive', |
497 | + load_file(os.path.join(out_logdir, 'run', 'cloud-init', |
498 | + INSTANCE_JSON_SENSITIVE_FILE))) |
499 | fake_stderr.write.assert_any_call('Wrote %s\n' % output_tarfile) |
500 | diff --git a/cloudinit/cmd/devel/tests/test_render.py b/cloudinit/cmd/devel/tests/test_render.py |
501 | index fc5d2c0..988bba0 100644 |
502 | --- a/cloudinit/cmd/devel/tests/test_render.py |
503 | +++ b/cloudinit/cmd/devel/tests/test_render.py |
504 | @@ -6,7 +6,7 @@ import os |
505 | from collections import namedtuple |
506 | from cloudinit.cmd.devel import render |
507 | from cloudinit.helpers import Paths |
508 | -from cloudinit.sources import INSTANCE_JSON_FILE |
509 | +from cloudinit.sources import INSTANCE_JSON_FILE, INSTANCE_JSON_SENSITIVE_FILE |
510 | from cloudinit.tests.helpers import CiTestCase, mock, skipUnlessJinja |
511 | from cloudinit.util import ensure_dir, write_file |
512 | |
513 | @@ -63,6 +63,49 @@ class TestRender(CiTestCase): |
514 | 'Missing instance-data.json file: %s' % json_file, |
515 | self.logs.getvalue()) |
516 | |
517 | + def test_handle_args_root_fallback_from_sensitive_instance_data(self): |
518 | + """When root user defaults to sensitive.json.""" |
519 | + user_data = self.tmp_path('user-data', dir=self.tmp) |
520 | + run_dir = self.tmp_path('run_dir', dir=self.tmp) |
521 | + ensure_dir(run_dir) |
522 | + paths = Paths({'run_dir': run_dir}) |
523 | + self.add_patch('cloudinit.cmd.devel.render.read_cfg_paths', 'm_paths') |
524 | + self.m_paths.return_value = paths |
525 | + args = self.args( |
526 | + user_data=user_data, instance_data=None, debug=False) |
527 | + with mock.patch('sys.stderr', new_callable=StringIO): |
528 | + with mock.patch('os.getuid') as m_getuid: |
529 | + m_getuid.return_value = 0 |
530 | + self.assertEqual(1, render.handle_args('anyname', args)) |
531 | + json_file = os.path.join(run_dir, INSTANCE_JSON_FILE) |
532 | + json_sensitive = os.path.join(run_dir, INSTANCE_JSON_SENSITIVE_FILE) |
533 | + self.assertIn( |
534 | + 'WARNING: Missing root-readable %s. Using redacted %s' % ( |
535 | + json_sensitive, json_file), self.logs.getvalue()) |
536 | + self.assertIn( |
537 | + 'ERROR: Missing instance-data.json file: %s' % json_file, |
538 | + self.logs.getvalue()) |
539 | + |
540 | + def test_handle_args_root_uses_sensitive_instance_data(self): |
541 | + """When root user, and no instance-data arg, use sensitive.json.""" |
542 | + user_data = self.tmp_path('user-data', dir=self.tmp) |
543 | + write_file(user_data, '##template: jinja\nrendering: {{ my_var }}') |
544 | + run_dir = self.tmp_path('run_dir', dir=self.tmp) |
545 | + ensure_dir(run_dir) |
546 | + json_sensitive = os.path.join(run_dir, INSTANCE_JSON_SENSITIVE_FILE) |
547 | + write_file(json_sensitive, '{"my-var": "jinja worked"}') |
548 | + paths = Paths({'run_dir': run_dir}) |
549 | + self.add_patch('cloudinit.cmd.devel.render.read_cfg_paths', 'm_paths') |
550 | + self.m_paths.return_value = paths |
551 | + args = self.args( |
552 | + user_data=user_data, instance_data=None, debug=False) |
553 | + with mock.patch('sys.stderr', new_callable=StringIO): |
554 | + with mock.patch('sys.stdout', new_callable=StringIO) as m_stdout: |
555 | + with mock.patch('os.getuid') as m_getuid: |
556 | + m_getuid.return_value = 0 |
557 | + self.assertEqual(0, render.handle_args('anyname', args)) |
558 | + self.assertIn('rendering: jinja worked', m_stdout.getvalue()) |
559 | + |
560 | @skipUnlessJinja() |
561 | def test_handle_args_renders_instance_data_vars_in_template(self): |
562 | """If user_data file is a jinja template render instance-data vars.""" |
563 | diff --git a/cloudinit/cmd/main.py b/cloudinit/cmd/main.py |
564 | index 5a43702..933c019 100644 |
565 | --- a/cloudinit/cmd/main.py |
566 | +++ b/cloudinit/cmd/main.py |
567 | @@ -41,7 +41,7 @@ from cloudinit.settings import (PER_INSTANCE, PER_ALWAYS, PER_ONCE, |
568 | from cloudinit import atomic_helper |
569 | |
570 | from cloudinit.config import cc_set_hostname |
571 | -from cloudinit.dhclient_hook import LogDhclient |
572 | +from cloudinit import dhclient_hook |
573 | |
574 | |
575 | # Welcome message template |
576 | @@ -586,12 +586,6 @@ def main_single(name, args): |
577 | return 0 |
578 | |
579 | |
580 | -def dhclient_hook(name, args): |
581 | - record = LogDhclient(args) |
582 | - record.check_hooks_dir() |
583 | - record.record() |
584 | - |
585 | - |
586 | def status_wrapper(name, args, data_d=None, link_d=None): |
587 | if data_d is None: |
588 | data_d = os.path.normpath("/var/lib/cloud/data") |
589 | @@ -795,15 +789,9 @@ def main(sysv_args=None): |
590 | 'query', |
591 | help='Query standardized instance metadata from the command line.') |
592 | |
593 | - parser_dhclient = subparsers.add_parser('dhclient-hook', |
594 | - help=('run the dhclient hook' |
595 | - 'to record network info')) |
596 | - parser_dhclient.add_argument("net_action", |
597 | - help=('action taken on the interface')) |
598 | - parser_dhclient.add_argument("net_interface", |
599 | - help=('the network interface being acted' |
600 | - ' upon')) |
601 | - parser_dhclient.set_defaults(action=('dhclient_hook', dhclient_hook)) |
602 | + parser_dhclient = subparsers.add_parser( |
603 | + dhclient_hook.NAME, help=dhclient_hook.__doc__) |
604 | + dhclient_hook.get_parser(parser_dhclient) |
605 | |
606 | parser_features = subparsers.add_parser('features', |
607 | help=('list defined features')) |
608 | diff --git a/cloudinit/cmd/query.py b/cloudinit/cmd/query.py |
609 | index 7d2d4fe..1d888b9 100644 |
610 | --- a/cloudinit/cmd/query.py |
611 | +++ b/cloudinit/cmd/query.py |
612 | @@ -3,6 +3,7 @@ |
613 | """Query standardized instance metadata from the command line.""" |
614 | |
615 | import argparse |
616 | +from errno import EACCES |
617 | import os |
618 | import six |
619 | import sys |
620 | @@ -79,27 +80,38 @@ def handle_args(name, args): |
621 | uid = os.getuid() |
622 | if not all([args.instance_data, args.user_data, args.vendor_data]): |
623 | paths = read_cfg_paths() |
624 | - if not args.instance_data: |
625 | + if args.instance_data: |
626 | + instance_data_fn = args.instance_data |
627 | + else: |
628 | + redacted_data_fn = os.path.join(paths.run_dir, INSTANCE_JSON_FILE) |
629 | if uid == 0: |
630 | - default_json_fn = INSTANCE_JSON_SENSITIVE_FILE |
631 | + sensitive_data_fn = os.path.join( |
632 | + paths.run_dir, INSTANCE_JSON_SENSITIVE_FILE) |
633 | + if os.path.exists(sensitive_data_fn): |
634 | + instance_data_fn = sensitive_data_fn |
635 | + else: |
636 | + LOG.warning( |
637 | + 'Missing root-readable %s. Using redacted %s instead.', |
638 | + sensitive_data_fn, redacted_data_fn) |
639 | + instance_data_fn = redacted_data_fn |
640 | else: |
641 | - default_json_fn = INSTANCE_JSON_FILE # World readable |
642 | - instance_data_fn = os.path.join(paths.run_dir, default_json_fn) |
643 | + instance_data_fn = redacted_data_fn |
644 | + if args.user_data: |
645 | + user_data_fn = args.user_data |
646 | else: |
647 | - instance_data_fn = args.instance_data |
648 | - if not args.user_data: |
649 | user_data_fn = os.path.join(paths.instance_link, 'user-data.txt') |
650 | + if args.vendor_data: |
651 | + vendor_data_fn = args.vendor_data |
652 | else: |
653 | - user_data_fn = args.user_data |
654 | - if not args.vendor_data: |
655 | vendor_data_fn = os.path.join(paths.instance_link, 'vendor-data.txt') |
656 | - else: |
657 | - vendor_data_fn = args.vendor_data |
658 | |
659 | try: |
660 | instance_json = util.load_file(instance_data_fn) |
661 | - except IOError: |
662 | - LOG.error('Missing instance-data.json file: %s', instance_data_fn) |
663 | + except (IOError, OSError) as e: |
664 | + if e.errno == EACCES: |
665 | + LOG.error("No read permission on '%s'. Try sudo", instance_data_fn) |
666 | + else: |
667 | + LOG.error('Missing instance-data file: %s', instance_data_fn) |
668 | return 1 |
669 | |
670 | instance_data = util.load_json(instance_json) |
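The query.py hunk above reworks how the instance-data file is chosen: an explicit `--instance-data` argument always wins; otherwise root prefers the sensitive file when it exists and warns before falling back to the redacted world-readable one; non-root always gets the redacted file. A minimal sketch of that selection order (the helper name `pick_instance_data_fn` is illustrative, not part of cloud-init):

```python
import os

INSTANCE_JSON_FILE = 'instance-data.json'
INSTANCE_JSON_SENSITIVE_FILE = 'instance-data-sensitive.json'


def pick_instance_data_fn(run_dir, uid, cli_path=None):
    """Mirror the fallback order from the diff: CLI path wins; root
    prefers the sensitive file when present; everyone else (and root
    without a sensitive file) gets the redacted file."""
    if cli_path:
        return cli_path
    redacted = os.path.join(run_dir, INSTANCE_JSON_FILE)
    if uid == 0:
        sensitive = os.path.join(run_dir, INSTANCE_JSON_SENSITIVE_FILE)
        if os.path.exists(sensitive):
            return sensitive
    return redacted
```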
671 | diff --git a/cloudinit/cmd/tests/test_cloud_id.py b/cloudinit/cmd/tests/test_cloud_id.py |
672 | new file mode 100644 |
673 | index 0000000..7373817 |
674 | --- /dev/null |
675 | +++ b/cloudinit/cmd/tests/test_cloud_id.py |
676 | @@ -0,0 +1,127 @@ |
677 | +# This file is part of cloud-init. See LICENSE file for license information. |
678 | + |
679 | +"""Tests for cloud-id command line utility.""" |
680 | + |
681 | +from cloudinit import util |
682 | +from collections import namedtuple |
683 | +from six import StringIO |
684 | + |
685 | +from cloudinit.cmd import cloud_id |
686 | + |
687 | +from cloudinit.tests.helpers import CiTestCase, mock |
688 | + |
689 | + |
690 | +class TestCloudId(CiTestCase): |
691 | + |
692 | + args = namedtuple('cloudidargs', ('instance_data json long')) |
693 | + |
694 | + def setUp(self): |
695 | + super(TestCloudId, self).setUp() |
696 | + self.tmp = self.tmp_dir() |
697 | + self.instance_data = self.tmp_path('instance-data.json', dir=self.tmp) |
698 | + |
699 | + def test_cloud_id_arg_parser_defaults(self): |
700 | + """Validate the argument defaults when not provided by the end-user.""" |
701 | + cmd = ['cloud-id'] |
702 | + with mock.patch('sys.argv', cmd): |
703 | + args = cloud_id.get_parser().parse_args() |
704 | + self.assertEqual( |
705 | + '/run/cloud-init/instance-data.json', |
706 | + args.instance_data) |
707 | + self.assertEqual(False, args.long) |
708 | + self.assertEqual(False, args.json) |
709 | + |
710 | + def test_cloud_id_arg_parse_overrides(self): |
711 | + """Override argument defaults by specifying values for each param.""" |
712 | + util.write_file(self.instance_data, '{}') |
713 | + cmd = ['cloud-id', '--instance-data', self.instance_data, '--long', |
714 | + '--json'] |
715 | + with mock.patch('sys.argv', cmd): |
716 | + args = cloud_id.get_parser().parse_args() |
717 | + self.assertEqual(self.instance_data, args.instance_data) |
718 | + self.assertEqual(True, args.long) |
719 | + self.assertEqual(True, args.json) |
720 | + |
721 | + def test_cloud_id_missing_instance_data_json(self): |
722 | + """Exit error when the provided instance-data.json does not exist.""" |
723 | + cmd = ['cloud-id', '--instance-data', self.instance_data] |
724 | + with mock.patch('sys.argv', cmd): |
725 | + with mock.patch('sys.stderr', new_callable=StringIO) as m_stderr: |
726 | + with self.assertRaises(SystemExit) as context_manager: |
727 | + cloud_id.main() |
728 | + self.assertEqual(1, context_manager.exception.code) |
729 | + self.assertIn( |
730 | + "ERROR: File not found '%s'" % self.instance_data, |
731 | + m_stderr.getvalue()) |
732 | + |
733 | + def test_cloud_id_non_json_instance_data(self): |
734 | + """Exit error when the provided instance-data.json is not json.""" |
735 | + cmd = ['cloud-id', '--instance-data', self.instance_data] |
736 | + util.write_file(self.instance_data, '{') |
737 | + with mock.patch('sys.argv', cmd): |
738 | + with mock.patch('sys.stderr', new_callable=StringIO) as m_stderr: |
739 | + with self.assertRaises(SystemExit) as context_manager: |
740 | + cloud_id.main() |
741 | + self.assertEqual(1, context_manager.exception.code) |
742 | + self.assertIn( |
743 | + "ERROR: File '%s' is not valid json." % self.instance_data, |
744 | + m_stderr.getvalue()) |
745 | + |
746 | + def test_cloud_id_from_cloud_name_in_instance_data(self): |
747 | + """Report canonical cloud-id from cloud_name in instance-data.""" |
748 | + util.write_file( |
749 | + self.instance_data, |
750 | + '{"v1": {"cloud_name": "mycloud", "region": "somereg"}}') |
751 | + cmd = ['cloud-id', '--instance-data', self.instance_data] |
752 | + with mock.patch('sys.argv', cmd): |
753 | + with mock.patch('sys.stdout', new_callable=StringIO) as m_stdout: |
754 | + with self.assertRaises(SystemExit) as context_manager: |
755 | + cloud_id.main() |
756 | + self.assertEqual(0, context_manager.exception.code) |
757 | + self.assertEqual("mycloud\n", m_stdout.getvalue()) |
758 | + |
759 | + def test_cloud_id_long_name_from_instance_data(self): |
760 | + """Report long cloud-id format from cloud_name and region.""" |
761 | + util.write_file( |
762 | + self.instance_data, |
763 | + '{"v1": {"cloud_name": "mycloud", "region": "somereg"}}') |
764 | + cmd = ['cloud-id', '--instance-data', self.instance_data, '--long'] |
765 | + with mock.patch('sys.argv', cmd): |
766 | + with mock.patch('sys.stdout', new_callable=StringIO) as m_stdout: |
767 | + with self.assertRaises(SystemExit) as context_manager: |
768 | + cloud_id.main() |
769 | + self.assertEqual(0, context_manager.exception.code) |
770 | + self.assertEqual("mycloud\tsomereg\n", m_stdout.getvalue()) |
771 | + |
772 | + def test_cloud_id_lookup_from_instance_data_region(self): |
773 | + """Report discovered canonical cloud_id when region lookup matches.""" |
774 | + util.write_file( |
775 | + self.instance_data, |
776 | + '{"v1": {"cloud_name": "aws", "region": "cn-north-1",' |
777 | + ' "platform": "ec2"}}') |
778 | + cmd = ['cloud-id', '--instance-data', self.instance_data, '--long'] |
779 | + with mock.patch('sys.argv', cmd): |
780 | + with mock.patch('sys.stdout', new_callable=StringIO) as m_stdout: |
781 | + with self.assertRaises(SystemExit) as context_manager: |
782 | + cloud_id.main() |
783 | + self.assertEqual(0, context_manager.exception.code) |
784 | + self.assertEqual("aws-china\tcn-north-1\n", m_stdout.getvalue()) |
785 | + |
786 | + def test_cloud_id_lookup_json_instance_data_adds_cloud_id_to_json(self): |
787 | + """Report v1 instance-data content with cloud_id when --json set.""" |
788 | + util.write_file( |
789 | + self.instance_data, |
790 | + '{"v1": {"cloud_name": "unknown", "region": "dfw",' |
791 | + ' "platform": "openstack", "public_ssh_keys": []}}') |
792 | + expected = util.json_dumps({ |
793 | + 'cloud_id': 'openstack', 'cloud_name': 'unknown', |
794 | + 'platform': 'openstack', 'public_ssh_keys': [], 'region': 'dfw'}) |
795 | + cmd = ['cloud-id', '--instance-data', self.instance_data, '--json'] |
796 | + with mock.patch('sys.argv', cmd): |
797 | + with mock.patch('sys.stdout', new_callable=StringIO) as m_stdout: |
798 | + with self.assertRaises(SystemExit) as context_manager: |
799 | + cloud_id.main() |
800 | + self.assertEqual(0, context_manager.exception.code) |
801 | + self.assertEqual(expected + '\n', m_stdout.getvalue()) |
802 | + |
803 | +# vi: ts=4 expandtab |
804 | diff --git a/cloudinit/cmd/tests/test_query.py b/cloudinit/cmd/tests/test_query.py |
805 | index fb87c6a..28738b1 100644 |
806 | --- a/cloudinit/cmd/tests/test_query.py |
807 | +++ b/cloudinit/cmd/tests/test_query.py |
808 | @@ -1,5 +1,6 @@ |
809 | # This file is part of cloud-init. See LICENSE file for license information. |
810 | |
811 | +import errno |
812 | from six import StringIO |
813 | from textwrap import dedent |
814 | import os |
815 | @@ -7,7 +8,8 @@ import os |
816 | from collections import namedtuple |
817 | from cloudinit.cmd import query |
818 | from cloudinit.helpers import Paths |
819 | -from cloudinit.sources import REDACT_SENSITIVE_VALUE, INSTANCE_JSON_FILE |
820 | +from cloudinit.sources import ( |
821 | + REDACT_SENSITIVE_VALUE, INSTANCE_JSON_FILE, INSTANCE_JSON_SENSITIVE_FILE) |
822 | from cloudinit.tests.helpers import CiTestCase, mock |
823 | from cloudinit.util import ensure_dir, write_file |
824 | |
825 | @@ -50,10 +52,28 @@ class TestQuery(CiTestCase): |
826 | with mock.patch('sys.stderr', new_callable=StringIO) as m_stderr: |
827 | self.assertEqual(1, query.handle_args('anyname', args)) |
828 | self.assertIn( |
829 | - 'ERROR: Missing instance-data.json file: %s' % absent_fn, |
830 | + 'ERROR: Missing instance-data file: %s' % absent_fn, |
831 | self.logs.getvalue()) |
832 | self.assertIn( |
833 | - 'ERROR: Missing instance-data.json file: %s' % absent_fn, |
834 | + 'ERROR: Missing instance-data file: %s' % absent_fn, |
835 | + m_stderr.getvalue()) |
836 | + |
837 | + def test_handle_args_error_when_no_read_permission_instance_data(self): |
838 | + """When instance_data file is unreadable, log an error.""" |
839 | + noread_fn = self.tmp_path('unreadable', dir=self.tmp) |
840 | + write_file(noread_fn, 'thou shall not pass') |
841 | + args = self.args( |
842 | + debug=False, dump_all=True, format=None, instance_data=noread_fn, |
843 | + list_keys=False, user_data='ud', vendor_data='vd', varname=None) |
844 | + with mock.patch('sys.stderr', new_callable=StringIO) as m_stderr: |
845 | + with mock.patch('cloudinit.cmd.query.util.load_file') as m_load: |
846 | + m_load.side_effect = OSError(errno.EACCES, 'Not allowed') |
847 | + self.assertEqual(1, query.handle_args('anyname', args)) |
848 | + self.assertIn( |
849 | + "ERROR: No read permission on '%s'. Try sudo" % noread_fn, |
850 | + self.logs.getvalue()) |
851 | + self.assertIn( |
852 | + "ERROR: No read permission on '%s'. Try sudo" % noread_fn, |
853 | m_stderr.getvalue()) |
854 | |
855 | def test_handle_args_defaults_instance_data(self): |
856 | @@ -70,12 +90,58 @@ class TestQuery(CiTestCase): |
857 | self.assertEqual(1, query.handle_args('anyname', args)) |
858 | json_file = os.path.join(run_dir, INSTANCE_JSON_FILE) |
859 | self.assertIn( |
860 | - 'ERROR: Missing instance-data.json file: %s' % json_file, |
861 | + 'ERROR: Missing instance-data file: %s' % json_file, |
862 | self.logs.getvalue()) |
863 | self.assertIn( |
864 | - 'ERROR: Missing instance-data.json file: %s' % json_file, |
865 | + 'ERROR: Missing instance-data file: %s' % json_file, |
866 | m_stderr.getvalue()) |
867 | |
868 | + def test_handle_args_root_fallsback_to_instance_data(self): |
869 | + """When no instance_data argument, root falls back to redacted json.""" |
870 | + args = self.args( |
871 | + debug=False, dump_all=True, format=None, instance_data=None, |
872 | + list_keys=False, user_data=None, vendor_data=None, varname=None) |
873 | + run_dir = self.tmp_path('run_dir', dir=self.tmp) |
874 | + ensure_dir(run_dir) |
875 | + paths = Paths({'run_dir': run_dir}) |
876 | + self.add_patch('cloudinit.cmd.query.read_cfg_paths', 'm_paths') |
877 | + self.m_paths.return_value = paths |
878 | + with mock.patch('sys.stderr', new_callable=StringIO) as m_stderr: |
879 | + with mock.patch('os.getuid') as m_getuid: |
880 | + m_getuid.return_value = 0 |
881 | + self.assertEqual(1, query.handle_args('anyname', args)) |
882 | + json_file = os.path.join(run_dir, INSTANCE_JSON_FILE) |
883 | + sensitive_file = os.path.join(run_dir, INSTANCE_JSON_SENSITIVE_FILE) |
884 | + self.assertIn( |
885 | + 'WARNING: Missing root-readable %s. Using redacted %s instead.' % ( |
886 | + sensitive_file, json_file), |
887 | + m_stderr.getvalue()) |
888 | + |
889 | + def test_handle_args_root_uses_instance_sensitive_data(self): |
890 | + """When no instance_data argument, root uses sensitive json.""" |
891 | + user_data = self.tmp_path('user-data', dir=self.tmp) |
892 | + vendor_data = self.tmp_path('vendor-data', dir=self.tmp) |
893 | + write_file(user_data, 'ud') |
894 | + write_file(vendor_data, 'vd') |
895 | + run_dir = self.tmp_path('run_dir', dir=self.tmp) |
896 | + sensitive_file = os.path.join(run_dir, INSTANCE_JSON_SENSITIVE_FILE) |
897 | + write_file(sensitive_file, '{"my-var": "it worked"}') |
898 | + ensure_dir(run_dir) |
899 | + paths = Paths({'run_dir': run_dir}) |
900 | + self.add_patch('cloudinit.cmd.query.read_cfg_paths', 'm_paths') |
901 | + self.m_paths.return_value = paths |
902 | + args = self.args( |
903 | + debug=False, dump_all=True, format=None, instance_data=None, |
904 | + list_keys=False, user_data=user_data, vendor_data=vendor_data, |
905 | + varname=None) |
906 | + with mock.patch('sys.stdout', new_callable=StringIO) as m_stdout: |
907 | + with mock.patch('os.getuid') as m_getuid: |
908 | + m_getuid.return_value = 0 |
909 | + self.assertEqual(0, query.handle_args('anyname', args)) |
910 | + self.assertEqual( |
911 | + '{\n "my_var": "it worked",\n "userdata": "ud",\n ' |
912 | + '"vendordata": "vd"\n}\n', m_stdout.getvalue()) |
913 | + |
914 | def test_handle_args_dumps_all_instance_data(self): |
915 | """When --all is specified query will dump all instance data vars.""" |
916 | write_file(self.instance_data, '{"my-var": "it worked"}') |
917 | diff --git a/cloudinit/config/cc_disk_setup.py b/cloudinit/config/cc_disk_setup.py |
918 | index 943089e..29e192e 100644 |
919 | --- a/cloudinit/config/cc_disk_setup.py |
920 | +++ b/cloudinit/config/cc_disk_setup.py |
921 | @@ -743,7 +743,7 @@ def assert_and_settle_device(device): |
922 | util.udevadm_settle() |
923 | if not os.path.exists(device): |
924 | raise RuntimeError("Device %s did not exist and was not created " |
925 | - "with a udevamd settle." % device) |
926 | + "with a udevadm settle." % device) |
927 | |
928 | # Whether or not the device existed above, it is possible that udev |
929 | # events that would populate udev database (for reading by lsdname) have |
930 | diff --git a/cloudinit/config/cc_lxd.py b/cloudinit/config/cc_lxd.py |
931 | index 24a8ebe..71d13ed 100644 |
932 | --- a/cloudinit/config/cc_lxd.py |
933 | +++ b/cloudinit/config/cc_lxd.py |
934 | @@ -89,7 +89,7 @@ def handle(name, cfg, cloud, log, args): |
935 | packages.append('lxd') |
936 | |
937 | if init_cfg.get("storage_backend") == "zfs" and not util.which('zfs'): |
938 | - packages.append('zfs') |
939 | + packages.append('zfsutils-linux') |
940 | |
941 | if len(packages): |
942 | try: |
943 | diff --git a/cloudinit/config/cc_resizefs.py b/cloudinit/config/cc_resizefs.py |
944 | index 2edddd0..076b9d5 100644 |
945 | --- a/cloudinit/config/cc_resizefs.py |
946 | +++ b/cloudinit/config/cc_resizefs.py |
947 | @@ -197,6 +197,13 @@ def maybe_get_writable_device_path(devpath, info, log): |
948 | if devpath.startswith('gpt/'): |
949 | log.debug('We have a gpt label - just go ahead') |
950 | return devpath |
951 | + # Alternatively, our device could simply be a name as returned by gpart, |
952 | + # such as da0p3 |
953 | + if not devpath.startswith('/dev/') and not os.path.exists(devpath): |
954 | + fulldevpath = '/dev/' + devpath.lstrip('/') |
955 | + log.debug("'%s' doesn't appear to be a valid device path. Trying '%s'", |
956 | + devpath, fulldevpath) |
957 | + devpath = fulldevpath |
958 | |
959 | try: |
960 | statret = os.stat(devpath) |
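The cc_resizefs hunk above accepts bare device names as returned by gpart on FreeBSD, such as `da0p3`: when the given path is not under `/dev/` and does not exist as-is, `/dev/` is prepended before the stat. A standalone sketch of that normalization (the helper name `normalize_devpath` and the injectable `exists` parameter are illustrative, not from the source):

```python
import os


def normalize_devpath(devpath, exists=os.path.exists):
    """Mimic the diff's fallback: bare names such as 'da0p3' that are
    neither /dev paths nor existing files get '/dev/' prepended."""
    if devpath.startswith('gpt/'):
        return devpath  # gpt labels are used as-is, per the earlier branch
    if not devpath.startswith('/dev/') and not exists(devpath):
        return '/dev/' + devpath.lstrip('/')
    return devpath
```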
961 | diff --git a/cloudinit/config/cc_set_passwords.py b/cloudinit/config/cc_set_passwords.py |
962 | index 5ef9737..4585e4d 100755 |
963 | --- a/cloudinit/config/cc_set_passwords.py |
964 | +++ b/cloudinit/config/cc_set_passwords.py |
965 | @@ -160,7 +160,7 @@ def handle(_name, cfg, cloud, log, args): |
966 | hashed_users = [] |
967 | randlist = [] |
968 | users = [] |
969 | - prog = re.compile(r'\$[1,2a,2y,5,6](\$.+){2}') |
970 | + prog = re.compile(r'\$(1|2a|2y|5|6)(\$.+){2}') |
971 | for line in plist: |
972 | u, p = line.split(':', 1) |
973 | if prog.match(p) is not None and ":" not in p: |
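The cc_set_passwords fix above replaces a character class with an alternation group. The old pattern `\$[1,2a,2y,5,6](\$.+){2}` matched any *single* character from the set `{1, ',', 2, a, y, 5, 6}`, so the two-character scheme identifiers `2a` and `2y` were never matched as units and a stray comma matched too. A quick demonstration of the corrected pattern recognizing crypt-style hashes:

```python
import re

# Corrected pattern from the diff: $1$ (md5), $2a$/$2y$ (bcrypt),
# $5$ (sha256), $6$ (sha512), each followed by two '$...' fields.
prog = re.compile(r'\$(1|2a|2y|5|6)(\$.+){2}')


def is_hashed(pwd):
    """True when pwd looks like a supported crypt-style hash."""
    return prog.match(pwd) is not None
```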
974 | diff --git a/cloudinit/config/cc_write_files.py b/cloudinit/config/cc_write_files.py |
975 | index 31d1db6..0b6546e 100644 |
976 | --- a/cloudinit/config/cc_write_files.py |
977 | +++ b/cloudinit/config/cc_write_files.py |
978 | @@ -49,6 +49,10 @@ binary gzip data can be specified and will be decoded before being written. |
979 | ... |
980 | path: /bin/arch |
981 | permissions: '0555' |
982 | + - content: | |
983 | + 15 * * * * root ship_logs |
984 | + path: /etc/crontab |
985 | + append: true |
986 | """ |
987 | |
988 | import base64 |
989 | @@ -113,7 +117,8 @@ def write_files(name, files): |
990 | contents = extract_contents(f_info.get('content', ''), extractions) |
991 | (u, g) = util.extract_usergroup(f_info.get('owner', DEFAULT_OWNER)) |
992 | perms = decode_perms(f_info.get('permissions'), DEFAULT_PERMS) |
993 | - util.write_file(path, contents, mode=perms) |
994 | + omode = 'ab' if util.get_cfg_option_bool(f_info, 'append') else 'wb' |
995 | + util.write_file(path, contents, omode=omode, mode=perms) |
996 | util.chownbyname(path, u, g) |
997 | |
998 | |
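The cc_write_files change above adds an `append` key to each file entry: when true, the target is opened in `'ab'` mode instead of `'wb'`, so the crontab example accumulates lines rather than overwriting the file. A standalone sketch of that mode selection (plain `open` here instead of cloud-init's `util.write_file`; the name `write_entry` is illustrative):

```python
def write_entry(path, content, append=False):
    """Pick the open mode the way the diff does: append -> 'ab', else 'wb'."""
    omode = 'ab' if append else 'wb'
    with open(path, omode) as f:
        f.write(content.encode() if isinstance(content, str) else content)
```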
999 | diff --git a/cloudinit/config/tests/test_set_passwords.py b/cloudinit/config/tests/test_set_passwords.py |
1000 | index b051ec8..a2ea5ec 100644 |
1001 | --- a/cloudinit/config/tests/test_set_passwords.py |
1002 | +++ b/cloudinit/config/tests/test_set_passwords.py |
1003 | @@ -68,4 +68,44 @@ class TestHandleSshPwauth(CiTestCase): |
1004 | m_update.assert_called_with({optname: optval}) |
1005 | m_subp.assert_not_called() |
1006 | |
1007 | + |
1008 | +class TestSetPasswordsHandle(CiTestCase): |
1009 | + """Test cc_set_passwords.handle""" |
1010 | + |
1011 | + with_logs = True |
1012 | + |
1013 | + def test_handle_on_empty_config(self): |
1014 | + """handle logs that no password has changed when config is empty.""" |
1015 | + cloud = self.tmp_cloud(distro='ubuntu') |
1016 | + setpass.handle( |
1017 | + 'IGNORED', cfg={}, cloud=cloud, log=self.logger, args=[]) |
1018 | + self.assertEqual( |
1019 | + "DEBUG: Leaving ssh config 'PasswordAuthentication' unchanged. " |
1020 | + 'ssh_pwauth=None\n', |
1021 | + self.logs.getvalue()) |
1022 | + |
1023 | + @mock.patch(MODPATH + "util.subp") |
1024 | + def test_handle_on_chpasswd_list_parses_common_hashes(self, m_subp): |
1025 | + """handle parses command password hashes.""" |
1026 | + cloud = self.tmp_cloud(distro='ubuntu') |
1027 | + valid_hashed_pwds = [ |
1028 | + 'root:$2y$10$8BQjxjVByHA/Ee.O1bCXtO8S7Y5WojbXWqnqYpUW.BrPx/' |
1029 | + 'Dlew1Va', |
1030 | + 'ubuntu:$6$5hOurLPO$naywm3Ce0UlmZg9gG2Fl9acWCVEoakMMC7dR52q' |
1031 | + 'SDexZbrN9z8yHxhUM2b.sxpguSwOlbOQSW/HpXazGGx3oo1'] |
1032 | + cfg = {'chpasswd': {'list': valid_hashed_pwds}} |
1033 | + with mock.patch(MODPATH + 'util.subp') as m_subp: |
1034 | + setpass.handle( |
1035 | + 'IGNORED', cfg=cfg, cloud=cloud, log=self.logger, args=[]) |
1036 | + self.assertIn( |
1037 | + 'DEBUG: Handling input for chpasswd as list.', |
1038 | + self.logs.getvalue()) |
1039 | + self.assertIn( |
1040 | + "DEBUG: Setting hashed password for ['root', 'ubuntu']", |
1041 | + self.logs.getvalue()) |
1042 | + self.assertEqual( |
1043 | + [mock.call(['chpasswd', '-e'], |
1044 | + '\n'.join(valid_hashed_pwds) + '\n')], |
1045 | + m_subp.call_args_list) |
1046 | + |
1047 | # vi: ts=4 expandtab |
1048 | diff --git a/cloudinit/dhclient_hook.py b/cloudinit/dhclient_hook.py |
1049 | index 7f02d7f..72b51b6 100644 |
1050 | --- a/cloudinit/dhclient_hook.py |
1051 | +++ b/cloudinit/dhclient_hook.py |
1052 | @@ -1,5 +1,8 @@ |
1053 | # This file is part of cloud-init. See LICENSE file for license information. |
1054 | |
1055 | +"""Run the dhclient hook to record network info.""" |
1056 | + |
1057 | +import argparse |
1058 | import os |
1059 | |
1060 | from cloudinit import atomic_helper |
1061 | @@ -8,44 +11,75 @@ from cloudinit import stages |
1062 | |
1063 | LOG = logging.getLogger(__name__) |
1064 | |
1065 | +NAME = "dhclient-hook" |
1066 | +UP = "up" |
1067 | +DOWN = "down" |
1068 | +EVENTS = (UP, DOWN) |
1069 | + |
1070 | + |
1071 | +def _get_hooks_dir(): |
1072 | + i = stages.Init() |
1073 | + return os.path.join(i.paths.get_runpath(), 'dhclient.hooks') |
1074 | + |
1075 | + |
1076 | +def _filter_env_vals(info): |
1077 | + """Given info (os.environ), return a dictionary with |
1078 | + lower case keys for each entry starting with DHCP4_ or new_.""" |
1079 | + new_info = {} |
1080 | + for k, v in info.items(): |
1081 | + if k.startswith("DHCP4_") or k.startswith("new_"): |
1082 | + key = (k.replace('DHCP4_', '').replace('new_', '')).lower() |
1083 | + new_info[key] = v |
1084 | + return new_info |
1085 | + |
1086 | + |
1087 | +def run_hook(interface, event, data_d=None, env=None): |
1088 | + if event not in EVENTS: |
1089 | + raise ValueError("Unexpected event '%s'. Expected one of: %s" % |
1090 | + (event, EVENTS)) |
1091 | + if data_d is None: |
1092 | + data_d = _get_hooks_dir() |
1093 | + if env is None: |
1094 | + env = os.environ |
1095 | + hook_file = os.path.join(data_d, interface + ".json") |
1096 | + |
1097 | + if event == UP: |
1098 | + if not os.path.exists(data_d): |
1099 | + os.makedirs(data_d) |
1100 | + atomic_helper.write_json(hook_file, _filter_env_vals(env)) |
1101 | + LOG.debug("Wrote dhclient options in %s", hook_file) |
1102 | + elif event == DOWN: |
1103 | + if os.path.exists(hook_file): |
1104 | + os.remove(hook_file) |
1105 | + LOG.debug("Removed dhclient options file %s", hook_file) |
1106 | + |
1107 | + |
1108 | +def get_parser(parser=None): |
1109 | + if parser is None: |
1110 | + parser = argparse.ArgumentParser(prog=NAME, description=__doc__) |
1111 | + parser.add_argument( |
1112 | + "event", help='event taken on the interface', choices=EVENTS) |
1113 | + parser.add_argument( |
1114 | + "interface", help='the network interface being acted upon') |
1115 | + # cloud-init main uses 'action' |
1116 | + parser.set_defaults(action=(NAME, handle_args)) |
1117 | + return parser |
1118 | + |
1119 | + |
1120 | +def handle_args(name, args, data_d=None): |
1121 | + """Handle the Namespace args. |
1122 | + Takes 'name' as passed by cloud-init main; not used here.""" |
1123 | + return run_hook(interface=args.interface, event=args.event, data_d=data_d) |
1124 | + |
1125 | + |
1126 | +if __name__ == '__main__': |
1127 | + import sys |
1128 | + parser = get_parser() |
1129 | + args = parser.parse_args(args=sys.argv[1:]) |
1130 | + return_value = handle_args( |
1131 | + NAME, args, data_d=os.environ.get('_CI_DHCP_HOOK_DATA_D')) |
1132 | + if return_value: |
1133 | + sys.exit(return_value) |
1134 | |
1135 | -class LogDhclient(object): |
1136 | - |
1137 | - def __init__(self, cli_args): |
1138 | - self.hooks_dir = self._get_hooks_dir() |
1139 | - self.net_interface = cli_args.net_interface |
1140 | - self.net_action = cli_args.net_action |
1141 | - self.hook_file = os.path.join(self.hooks_dir, |
1142 | - self.net_interface + ".json") |
1143 | - |
1144 | - @staticmethod |
1145 | - def _get_hooks_dir(): |
1146 | - i = stages.Init() |
1147 | - return os.path.join(i.paths.get_runpath(), 'dhclient.hooks') |
1148 | - |
1149 | - def check_hooks_dir(self): |
1150 | - if not os.path.exists(self.hooks_dir): |
1151 | - os.makedirs(self.hooks_dir) |
1152 | - else: |
1153 | - # If the action is down and the json file exists, we need to |
1154 | - # delete the file |
1155 | - if self.net_action is 'down' and os.path.exists(self.hook_file): |
1156 | - os.remove(self.hook_file) |
1157 | - |
1158 | - @staticmethod |
1159 | - def get_vals(info): |
1160 | - new_info = {} |
1161 | - for k, v in info.items(): |
1162 | - if k.startswith("DHCP4_") or k.startswith("new_"): |
1163 | - key = (k.replace('DHCP4_', '').replace('new_', '')).lower() |
1164 | - new_info[key] = v |
1165 | - return new_info |
1166 | - |
1167 | - def record(self): |
1168 | - envs = os.environ |
1169 | - if self.hook_file is None: |
1170 | - return |
1171 | - atomic_helper.write_json(self.hook_file, self.get_vals(envs)) |
1172 | - LOG.debug("Wrote dhclient options in %s", self.hook_file) |
1173 | |
1174 | # vi: ts=4 expandtab |
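The dhclient_hook rewrite above replaces the `LogDhclient` class with module-level functions. The heart of `_filter_env_vals` is to keep only the variables dhclient exports (prefixed `DHCP4_` or `new_`), strip the prefix, and lower-case the key. That transformation can be exercised on its own (reproduced here outside cloud-init for illustration):

```python
def filter_env_vals(info):
    """Return a dict with lower-cased keys for each entry of info
    starting with DHCP4_ or new_, with that prefix stripped
    (mirrors _filter_env_vals in the diff)."""
    new_info = {}
    for k, v in info.items():
        if k.startswith('DHCP4_') or k.startswith('new_'):
            new_info[k.replace('DHCP4_', '').replace('new_', '').lower()] = v
    return new_info
```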
1175 | diff --git a/cloudinit/handlers/jinja_template.py b/cloudinit/handlers/jinja_template.py |
1176 | index 3fa4097..ce3accf 100644 |
1177 | --- a/cloudinit/handlers/jinja_template.py |
1178 | +++ b/cloudinit/handlers/jinja_template.py |
1179 | @@ -1,5 +1,6 @@ |
1180 | # This file is part of cloud-init. See LICENSE file for license information. |
1181 | |
1182 | +from errno import EACCES |
1183 | import os |
1184 | import re |
1185 | |
1186 | @@ -76,7 +77,14 @@ def render_jinja_payload_from_file( |
1187 | raise RuntimeError( |
1188 | 'Cannot render jinja template vars. Instance data not yet' |
1189 | ' present at %s' % instance_data_file) |
1190 | - instance_data = load_json(load_file(instance_data_file)) |
1191 | + try: |
1192 | + instance_data = load_json(load_file(instance_data_file)) |
1193 | + except (IOError, OSError) as e: |
1194 | + if e.errno == EACCES: |
1195 | + raise RuntimeError( |
1196 | + 'Cannot render jinja template vars. No read permission on' |
1197 | + " '%s'. Try sudo" % instance_data_file) |
1198 | + |
1199 | rendered_payload = render_jinja_payload( |
1200 | payload, payload_fn, instance_data, debug) |
1201 | if not rendered_payload: |
1202 | diff --git a/cloudinit/net/__init__.py b/cloudinit/net/__init__.py |
1203 | index f83d368..3642fb1 100644 |
1204 | --- a/cloudinit/net/__init__.py |
1205 | +++ b/cloudinit/net/__init__.py |
1206 | @@ -12,6 +12,7 @@ import re |
1207 | |
1208 | from cloudinit.net.network_state import mask_to_net_prefix |
1209 | from cloudinit import util |
1210 | +from cloudinit.url_helper import UrlError, readurl |
1211 | |
1212 | LOG = logging.getLogger(__name__) |
1213 | SYS_CLASS_NET = "/sys/class/net/" |
1214 | @@ -612,7 +613,8 @@ def get_interfaces(): |
1215 | Bridges and any devices that have a 'stolen' mac are excluded.""" |
1216 | ret = [] |
1217 | devs = get_devicelist() |
1218 | - empty_mac = '00:00:00:00:00:00' |
1219 | + # 16 somewhat arbitrarily chosen. Normally a mac is 6 '00:' tokens. |
1220 | + zero_mac = ':'.join(('00',) * 16) |
1221 | for name in devs: |
1222 | if not interface_has_own_mac(name): |
1223 | continue |
1224 | @@ -624,7 +626,8 @@ def get_interfaces(): |
1225 | # some devices may not have a mac (tun0) |
1226 | if not mac: |
1227 | continue |
1228 | - if mac == empty_mac and name != 'lo': |
1229 | + # skip nics that have no mac (00:00....) |
1230 | + if name != 'lo' and mac == zero_mac[:len(mac)]: |
1231 | continue |
1232 | ret.append((name, mac, device_driver(name), device_devid(name))) |
1233 | return ret |
1234 | @@ -645,16 +648,36 @@ def get_ib_hwaddrs_by_interface(): |
1235 | return ret |
1236 | |
1237 | |
1238 | +def has_url_connectivity(url): |
1239 | + """Return true when the instance has access to the provided URL |
1240 | + |
1241 | + Logs a warning if url is not the expected format. |
1242 | + """ |
1243 | + if not any([url.startswith('http://'), url.startswith('https://')]): |
1244 | + LOG.warning( |
1245 | + "Ignoring connectivity check. Expected URL beginning with http*://" |
1246 | + " received '%s'", url) |
1247 | + return False |
1248 | + try: |
1249 | + readurl(url, timeout=5) |
1250 | + except UrlError: |
1251 | + return False |
1252 | + return True |
1253 | + |
1254 | + |
1255 | class EphemeralIPv4Network(object): |
1256 | """Context manager which sets up temporary static network configuration. |
1257 | |
1258 | - No operations are performed if the provided interface is already connected. |
1259 | + No operations are performed if the provided interface already has the |
1260 | + specified configuration. |
1261 | + This can be verified with the connectivity_url. |
1262 | If unconnected, bring up the interface with valid ip, prefix and broadcast. |
1263 | If router is provided setup a default route for that interface. Upon |
1264 | context exit, clean up the interface leaving no configuration behind. |
1265 | """ |
1266 | |
1267 | - def __init__(self, interface, ip, prefix_or_mask, broadcast, router=None): |
1268 | + def __init__(self, interface, ip, prefix_or_mask, broadcast, router=None, |
1269 | + connectivity_url=None): |
1270 | """Setup context manager and validate call signature. |
1271 | |
1272 | @param interface: Name of the network interface to bring up. |
1273 | @@ -663,6 +686,8 @@ class EphemeralIPv4Network(object): |
1274 | prefix. |
1275 | @param broadcast: Broadcast address for the IPv4 network. |
1276 | @param router: Optionally the default gateway IP. |
1277 | + @param connectivity_url: Optionally, a URL to verify if a usable |
1278 | + connection already exists. |
1279 | """ |
1280 | if not all([interface, ip, prefix_or_mask, broadcast]): |
1281 | raise ValueError( |
1282 | @@ -673,6 +698,8 @@ class EphemeralIPv4Network(object): |
1283 | except ValueError as e: |
1284 | raise ValueError( |
1285 | 'Cannot setup network: {0}'.format(e)) |
1286 | + |
1287 | + self.connectivity_url = connectivity_url |
1288 | self.interface = interface |
1289 | self.ip = ip |
1290 | self.broadcast = broadcast |
1291 | @@ -681,6 +708,13 @@ class EphemeralIPv4Network(object): |
1292 | |
1293 | def __enter__(self): |
1294 | """Perform ephemeral network setup if interface is not connected.""" |
1295 | + if self.connectivity_url: |
1296 | + if has_url_connectivity(self.connectivity_url): |
1297 | + LOG.debug( |
1298 | + 'Skip ephemeral network setup, instance has connectivity' |
1299 | + ' to %s', self.connectivity_url) |
1300 | + return |
1301 | + |
1302 | self._bringup_device() |
1303 | if self.router: |
1304 | self._bringup_router() |
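The connectivity_url gate introduced above (has_url_connectivity plus the early return in __enter__) boils down to: probe a URL with a short timeout and skip ephemeral setup when the probe already succeeds. A minimal standalone sketch of the probe, using stdlib urllib in place of cloud-init's readurl/UrlError:

```python
import urllib.error
import urllib.request


def has_url_connectivity(url, timeout=5):
    """Return True if an HTTP GET of url succeeds within timeout seconds."""
    try:
        urllib.request.urlopen(url, timeout=timeout)
    except (urllib.error.URLError, OSError):
        # Covers refused connections, timeouts and non-2xx responses
        # (HTTPError subclasses URLError), matching the UrlError handling.
        return False
    return True
```

As in the tests further down this diff, a 404 response counts as "no connectivity", since the error path is taken.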
1305 | diff --git a/cloudinit/net/dhcp.py b/cloudinit/net/dhcp.py |
1306 | index 12cf509..c98a97c 100644 |
1307 | --- a/cloudinit/net/dhcp.py |
1308 | +++ b/cloudinit/net/dhcp.py |
1309 | @@ -9,9 +9,11 @@ import logging |
1310 | import os |
1311 | import re |
1312 | import signal |
1313 | +import time |
1314 | |
1315 | from cloudinit.net import ( |
1316 | - EphemeralIPv4Network, find_fallback_nic, get_devicelist) |
1317 | + EphemeralIPv4Network, find_fallback_nic, get_devicelist, |
1318 | + has_url_connectivity) |
1319 | from cloudinit.net.network_state import mask_and_ipv4_to_bcast_addr as bcip |
1320 | from cloudinit import temp_utils |
1321 | from cloudinit import util |
1322 | @@ -37,37 +39,69 @@ class NoDHCPLeaseError(Exception): |
1323 | |
1324 | |
1325 | class EphemeralDHCPv4(object): |
1326 | - def __init__(self, iface=None): |
1327 | + def __init__(self, iface=None, connectivity_url=None): |
1328 | self.iface = iface |
1329 | self._ephipv4 = None |
1330 | + self.lease = None |
1331 | + self.connectivity_url = connectivity_url |
1332 | |
1333 | def __enter__(self): |
1334 | + """Setup sandboxed dhcp context, unless connectivity_url can already be |
1335 | + reached.""" |
1336 | + if self.connectivity_url: |
1337 | + if has_url_connectivity(self.connectivity_url): |
1338 | + LOG.debug( |
1339 | + 'Skip ephemeral DHCP setup, instance has connectivity' |
1340 | + ' to %s', self.connectivity_url) |
1341 | + return |
1342 | + return self.obtain_lease() |
1343 | + |
1344 | + def __exit__(self, excp_type, excp_value, excp_traceback): |
1345 | + """Teardown sandboxed dhcp context.""" |
1346 | + self.clean_network() |
1347 | + |
1348 | + def clean_network(self): |
1349 | + """Exit _ephipv4 context to teardown of ip configuration performed.""" |
1350 | + if self.lease: |
1351 | + self.lease = None |
1352 | + if not self._ephipv4: |
1353 | + return |
1354 | + self._ephipv4.__exit__(None, None, None) |
1355 | + |
1356 | + def obtain_lease(self): |
1357 | + """Perform dhcp discovery in a sandboxed environment if possible. |
1358 | + |
1359 | + @return: A dict of dhcp options from the most recent lease obtained |
1360 | + by dhclient discovery, if it was run. |
1361 | + |
1362 | + |
1363 | + @raises: NoDHCPLeaseError if no leases could be obtained. |
1364 | + """ |
1365 | + if self.lease: |
1366 | + return self.lease |
1367 | try: |
1368 | leases = maybe_perform_dhcp_discovery(self.iface) |
1369 | except InvalidDHCPLeaseFileError: |
1370 | raise NoDHCPLeaseError() |
1371 | if not leases: |
1372 | raise NoDHCPLeaseError() |
1373 | - lease = leases[-1] |
1374 | + self.lease = leases[-1] |
1375 | LOG.debug("Received dhcp lease on %s for %s/%s", |
1376 | - lease['interface'], lease['fixed-address'], |
1377 | - lease['subnet-mask']) |
1378 | + self.lease['interface'], self.lease['fixed-address'], |
1379 | + self.lease['subnet-mask']) |
1380 | nmap = {'interface': 'interface', 'ip': 'fixed-address', |
1381 | 'prefix_or_mask': 'subnet-mask', |
1382 | 'broadcast': 'broadcast-address', |
1383 | 'router': 'routers'} |
1384 | - kwargs = dict([(k, lease.get(v)) for k, v in nmap.items()]) |
1385 | + kwargs = dict([(k, self.lease.get(v)) for k, v in nmap.items()]) |
1386 | if not kwargs['broadcast']: |
1387 | kwargs['broadcast'] = bcip(kwargs['prefix_or_mask'], kwargs['ip']) |
1388 | + if self.connectivity_url: |
1389 | + kwargs['connectivity_url'] = self.connectivity_url |
1390 | ephipv4 = EphemeralIPv4Network(**kwargs) |
1391 | ephipv4.__enter__() |
1392 | self._ephipv4 = ephipv4 |
1393 | - return lease |
1394 | - |
1395 | - def __exit__(self, excp_type, excp_value, excp_traceback): |
1396 | - if not self._ephipv4: |
1397 | - return |
1398 | - self._ephipv4.__exit__(excp_type, excp_value, excp_traceback) |
1399 | + return self.lease |
1400 | |
1401 | |
1402 | def maybe_perform_dhcp_discovery(nic=None): |
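The reshaped EphemeralDHCPv4 above splits lease acquisition (obtain_lease, made idempotent by caching self.lease) from teardown (clean_network), so a caller such as the Azure datasource can hold the context open across reprovisioning and tear it down explicitly. The shape of that refactor, with a hypothetical injected acquire function standing in for maybe_perform_dhcp_discovery:

```python
class EphemeralResource:
    """Sketch of the obtain/clean split used by EphemeralDHCPv4 above.

    obtain_lease is idempotent (a cached lease short-circuits repeat
    calls), so the object works both as a context manager and through
    explicit obtain_lease()/clean_network() calls.
    """

    def __init__(self, acquire_fn):
        self._acquire_fn = acquire_fn  # hypothetical injected acquirer
        self.lease = None

    def __enter__(self):
        return self.obtain_lease()

    def __exit__(self, exc_type, exc_value, exc_tb):
        self.clean_network()

    def obtain_lease(self):
        if self.lease is None:
            self.lease = self._acquire_fn()
        return self.lease

    def clean_network(self):
        self.lease = None
```

Repeated obtain_lease calls inside the with-block reuse the cached lease instead of re-running discovery.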
1403 | @@ -94,7 +128,9 @@ def maybe_perform_dhcp_discovery(nic=None): |
1404 | if not dhclient_path: |
1405 | LOG.debug('Skip dhclient configuration: No dhclient command found.') |
1406 | return [] |
1407 | - with temp_utils.tempdir(prefix='cloud-init-dhcp-', needs_exe=True) as tdir: |
1408 | + with temp_utils.tempdir(rmtree_ignore_errors=True, |
1409 | + prefix='cloud-init-dhcp-', |
1410 | + needs_exe=True) as tdir: |
1411 | # Use /var/tmp because /run/cloud-init/tmp is mounted noexec |
1412 | return dhcp_discovery(dhclient_path, nic, tdir) |
1413 | |
1414 | @@ -162,24 +198,39 @@ def dhcp_discovery(dhclient_cmd_path, interface, cleandir): |
1415 | '-pf', pid_file, interface, '-sf', '/bin/true'] |
1416 | util.subp(cmd, capture=True) |
1417 | |
1418 | - # dhclient doesn't write a pid file until after it forks when it gets a |
1419 | - # proper lease response. Since cleandir is a temp directory that gets |
1420 | - # removed, we need to wait for that pidfile creation before the |
1421 | - # cleandir is removed, otherwise we get FileNotFound errors. |
1422 | + # Wait for pid file and lease file to appear, and for the process |
1423 | + # named by the pid file to daemonize (have pid 1 as its parent). If we |
1424 | + # try to read the lease file before daemonization happens, we might try |
1425 | + # to read it before the dhclient has actually written it. We also have |
1426 | + # to wait until the dhclient has become a daemon so we can be sure to |
1427 | + # kill the correct process, thus freeing cleandir to be deleted back |
1428 | + # up the callstack. |
1429 | missing = util.wait_for_files( |
1430 | [pid_file, lease_file], maxwait=5, naplen=0.01) |
1431 | if missing: |
1432 | LOG.warning("dhclient did not produce expected files: %s", |
1433 | ', '.join(os.path.basename(f) for f in missing)) |
1434 | return [] |
1435 | - pid_content = util.load_file(pid_file).strip() |
1436 | - try: |
1437 | - pid = int(pid_content) |
1438 | - except ValueError: |
1439 | - LOG.debug( |
1440 | - "pid file contains non-integer content '%s'", pid_content) |
1441 | - else: |
1442 | - os.kill(pid, signal.SIGKILL) |
1443 | + |
1444 | + ppid = 'unknown' |
1445 | + for _ in range(0, 1000): |
1446 | + pid_content = util.load_file(pid_file).strip() |
1447 | + try: |
1448 | + pid = int(pid_content) |
1449 | + except ValueError: |
1450 | + pass |
1451 | + else: |
1452 | + ppid = util.get_proc_ppid(pid) |
1453 | + if ppid == 1: |
1454 | + LOG.debug('killing dhclient with pid=%s', pid) |
1455 | + os.kill(pid, signal.SIGKILL) |
1456 | + return parse_dhcp_lease_file(lease_file) |
1457 | + time.sleep(0.01) |
1458 | + |
1459 | + LOG.error( |
1460 | + 'dhclient(pid=%s, parentpid=%s) failed to daemonize after %s seconds', |
1461 | + pid_content, ppid, 0.01 * 1000 |
1462 | + ) |
1463 | return parse_dhcp_lease_file(lease_file) |
1464 | |
1465 | |
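The daemonize wait above is a bounded poll: up to 1000 iterations with 0.01-second naps, roughly the 10 seconds quoted in the error message. The general pattern, as a sketch (wait_for is a hypothetical helper, not cloud-init's util.wait_for_files):

```python
import time


def wait_for(predicate, maxwait=10.0, naplen=0.01):
    """Poll predicate() until truthy or until maxwait seconds elapse.

    Returns True on success, False on timeout -- bounded retries with
    short naps rather than one long blocking wait.
    """
    deadline = time.monotonic() + maxwait
    while time.monotonic() < deadline:
        if predicate():
            return True
        time.sleep(naplen)
    return False
```

In the diff the predicate is "the pid file parses and util.get_proc_ppid(pid) == 1", i.e. dhclient has reparented to init and is safe to kill.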
1466 | diff --git a/cloudinit/net/eni.py b/cloudinit/net/eni.py |
1467 | index c6f631a..6423632 100644 |
1468 | --- a/cloudinit/net/eni.py |
1469 | +++ b/cloudinit/net/eni.py |
1470 | @@ -371,22 +371,23 @@ class Renderer(renderer.Renderer): |
1471 | 'gateway': 'gw', |
1472 | 'metric': 'metric', |
1473 | } |
1474 | + |
1475 | + default_gw = '' |
1476 | if route['network'] == '0.0.0.0' and route['netmask'] == '0.0.0.0': |
1477 | - default_gw = " default gw %s" % route['gateway'] |
1478 | - content.append(up + default_gw + or_true) |
1479 | - content.append(down + default_gw + or_true) |
1480 | + default_gw = ' default' |
1481 | elif route['network'] == '::' and route['prefix'] == 0: |
1482 | - # ipv6! |
1483 | - default_gw = " -A inet6 default gw %s" % route['gateway'] |
1484 | - content.append(up + default_gw + or_true) |
1485 | - content.append(down + default_gw + or_true) |
1486 | - else: |
1487 | - route_line = "" |
1488 | - for k in ['network', 'netmask', 'gateway', 'metric']: |
1489 | - if k in route: |
1490 | - route_line += " %s %s" % (mapping[k], route[k]) |
1491 | - content.append(up + route_line + or_true) |
1492 | - content.append(down + route_line + or_true) |
1493 | + default_gw = ' -A inet6 default' |
1494 | + |
1495 | + route_line = '' |
1496 | + for k in ['network', 'netmask', 'gateway', 'metric']: |
1497 | + if default_gw and k in ['network', 'netmask']: |
1498 | + continue |
1499 | + if k == 'gateway': |
1500 | + route_line += '%s %s %s' % (default_gw, mapping[k], route[k]) |
1501 | + elif k in route: |
1502 | + route_line += ' %s %s' % (mapping[k], route[k]) |
1503 | + content.append(up + route_line + or_true) |
1504 | + content.append(down + route_line + or_true) |
1505 | return content |
1506 | |
1507 | def _render_iface(self, iface, render_hwaddress=False): |
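The collapsed branch logic above now builds one route line for all three cases (IPv4 default, IPv6 default, specific network). A standalone rendition of the builder (render_route_line is a hypothetical name; the `post-up route add` / `|| true` framing follows the surrounding eni renderer) makes the emitted strings concrete:

```python
def render_route_line(route):
    """Reproduce the eni route-line builder from the hunk above."""
    mapping = {'network': '-net', 'netmask': 'netmask',
               'gateway': 'gw', 'metric': 'metric'}
    default_gw = ''
    if route.get('network') == '0.0.0.0' and route.get('netmask') == '0.0.0.0':
        default_gw = ' default'
    elif route.get('network') == '::' and route.get('prefix') == 0:
        default_gw = ' -A inet6 default'
    line = ''
    for k in ['network', 'netmask', 'gateway', 'metric']:
        if default_gw and k in ['network', 'netmask']:
            continue  # 'default' already names the destination
        if k == 'gateway':
            line += '%s %s %s' % (default_gw, mapping[k], route[k])
        elif k in route:
            line += ' %s %s' % (mapping[k], route[k])
    return 'post-up route add' + line + ' || true'
```

Note the builder assumes a 'gateway' key is always present, as the refactored code does.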
1508 | diff --git a/cloudinit/net/netplan.py b/cloudinit/net/netplan.py |
1509 | index bc1087f..21517fd 100644 |
1510 | --- a/cloudinit/net/netplan.py |
1511 | +++ b/cloudinit/net/netplan.py |
1512 | @@ -114,13 +114,13 @@ def _extract_addresses(config, entry, ifname): |
1513 | for route in subnet.get('routes', []): |
1514 | to_net = "%s/%s" % (route.get('network'), |
1515 | route.get('prefix')) |
1516 | - route = { |
1517 | + new_route = { |
1518 | 'via': route.get('gateway'), |
1519 | 'to': to_net, |
1520 | } |
1521 | if 'metric' in route: |
1522 | - route.update({'metric': route.get('metric', 100)}) |
1523 | - routes.append(route) |
1524 | + new_route.update({'metric': route.get('metric', 100)}) |
1525 | + routes.append(new_route) |
1526 | |
1527 | addresses.append(addr) |
1528 | |
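The netplan change is a loop-variable shadowing fix: the old code rebound route to the freshly built dict, so the following `'metric' in route` test could never see the source route's metric and silently dropped it. The fixed shape, as a self-contained sketch:

```python
def extract_routes(subnet_routes):
    """Keep the source dict and the rendered dict distinct (the fix above)."""
    routes = []
    for route in subnet_routes:
        new_route = {
            'via': route.get('gateway'),
            'to': '%s/%s' % (route.get('network'), route.get('prefix')),
        }
        if 'metric' in route:  # inspects the *source* route, not new_route
            new_route['metric'] = route.get('metric', 100)
        routes.append(new_route)
    return routes
```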
1529 | diff --git a/cloudinit/net/sysconfig.py b/cloudinit/net/sysconfig.py |
1530 | index 9c16d3a..fd8e501 100644 |
1531 | --- a/cloudinit/net/sysconfig.py |
1532 | +++ b/cloudinit/net/sysconfig.py |
1533 | @@ -10,11 +10,14 @@ from cloudinit.distros.parsers import resolv_conf |
1534 | from cloudinit import log as logging |
1535 | from cloudinit import util |
1536 | |
1537 | +from configobj import ConfigObj |
1538 | + |
1539 | from . import renderer |
1540 | from .network_state import ( |
1541 | is_ipv6_addr, net_prefix_to_ipv4_mask, subnet_is_ipv6) |
1542 | |
1543 | LOG = logging.getLogger(__name__) |
1544 | +NM_CFG_FILE = "/etc/NetworkManager/NetworkManager.conf" |
1545 | |
1546 | |
1547 | def _make_header(sep='#'): |
1548 | @@ -46,6 +49,24 @@ def _quote_value(value): |
1549 | return value |
1550 | |
1551 | |
1552 | +def enable_ifcfg_rh(path): |
1553 | + """Add ifcfg-rh to NetworkManager.cfg plugins if main section is present""" |
1554 | + config = ConfigObj(path) |
1555 | + if 'main' in config: |
1556 | + if 'plugins' in config['main']: |
1557 | + if 'ifcfg-rh' in config['main']['plugins']: |
1558 | + return |
1559 | + else: |
1560 | + config['main']['plugins'] = [] |
1561 | + |
1562 | + if isinstance(config['main']['plugins'], list): |
1563 | + config['main']['plugins'].append('ifcfg-rh') |
1564 | + else: |
1565 | + config['main']['plugins'] = [config['main']['plugins'], 'ifcfg-rh'] |
1566 | + config.write() |
1567 | + LOG.debug('Enabled ifcfg-rh NetworkManager plugins') |
1568 | + |
1569 | + |
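enable_ifcfg_rh relies on ConfigObj presenting the plugins value as either a bare string or a list. The normalization it performs can be modelled as a pure function (ConfigObj is a third-party dependency, so this sketch only captures the value handling, not the file I/O):

```python
def add_plugin(plugins, name='ifcfg-rh'):
    """Return a plugins value with name appended exactly once.

    Models ConfigObj's behavior in enable_ifcfg_rh above: a missing key
    becomes a one-element list, a single string becomes a two-element
    list, and an existing entry is left untouched (idempotent).
    """
    if plugins is None:
        return [name]
    if isinstance(plugins, list):
        return plugins if name in plugins else plugins + [name]
    # ConfigObj yields a bare string for e.g. 'plugins=keyfile'
    return plugins if plugins == name else [plugins, name]
```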
1570 | class ConfigMap(object): |
1571 | """Sysconfig like dictionary object.""" |
1572 | |
1573 | @@ -156,13 +177,23 @@ class Route(ConfigMap): |
1574 | _quote_value(gateway_value))) |
1575 | buf.write("%s=%s\n" % ('NETMASK' + str(reindex), |
1576 | _quote_value(netmask_value))) |
1577 | + metric_key = 'METRIC' + index |
1578 | + if metric_key in self._conf: |
1579 | + metric_value = str(self._conf['METRIC' + index]) |
1580 | + buf.write("%s=%s\n" % ('METRIC' + str(reindex), |
1581 | + _quote_value(metric_value))) |
1582 | elif proto == "ipv6" and self.is_ipv6_route(address_value): |
1583 | netmask_value = str(self._conf['NETMASK' + index]) |
1584 | gateway_value = str(self._conf['GATEWAY' + index]) |
1585 | - buf.write("%s/%s via %s dev %s\n" % (address_value, |
1586 | - netmask_value, |
1587 | - gateway_value, |
1588 | - self._route_name)) |
1589 | + metric_value = ( |
1590 | + 'metric ' + str(self._conf['METRIC' + index]) |
1591 | + if 'METRIC' + index in self._conf else '') |
1592 | + buf.write( |
1593 | + "%s/%s via %s %s dev %s\n" % (address_value, |
1594 | + netmask_value, |
1595 | + gateway_value, |
1596 | + metric_value, |
1597 | + self._route_name)) |
1598 | |
1599 | return buf.getvalue() |
1600 | |
1601 | @@ -370,6 +401,9 @@ class Renderer(renderer.Renderer): |
1602 | else: |
1603 | iface_cfg['GATEWAY'] = subnet['gateway'] |
1604 | |
1605 | + if 'metric' in subnet: |
1606 | + iface_cfg['METRIC'] = subnet['metric'] |
1607 | + |
1608 | if 'dns_search' in subnet: |
1609 | iface_cfg['DOMAIN'] = ' '.join(subnet['dns_search']) |
1610 | |
1611 | @@ -414,15 +448,19 @@ class Renderer(renderer.Renderer): |
1612 | else: |
1613 | iface_cfg['GATEWAY'] = route['gateway'] |
1614 | route_cfg.has_set_default_ipv4 = True |
1615 | + if 'metric' in route: |
1616 | + iface_cfg['METRIC'] = route['metric'] |
1617 | |
1618 | else: |
1619 | gw_key = 'GATEWAY%s' % route_cfg.last_idx |
1620 | nm_key = 'NETMASK%s' % route_cfg.last_idx |
1621 | addr_key = 'ADDRESS%s' % route_cfg.last_idx |
1622 | + metric_key = 'METRIC%s' % route_cfg.last_idx |
1623 | route_cfg.last_idx += 1 |
1624 | # add default routes only to ifcfg files, not |
1625 | # to route-* or route6-* |
1626 | for (old_key, new_key) in [('gateway', gw_key), |
1627 | + ('metric', metric_key), |
1628 | ('netmask', nm_key), |
1629 | ('network', addr_key)]: |
1630 | if old_key in route: |
1631 | @@ -519,6 +557,8 @@ class Renderer(renderer.Renderer): |
1632 | content.add_nameserver(nameserver) |
1633 | for searchdomain in network_state.dns_searchdomains: |
1634 | content.add_search_domain(searchdomain) |
1635 | + if not str(content): |
1636 | + return None |
1637 | header = _make_header(';') |
1638 | content_str = str(content) |
1639 | if not content_str.startswith(header): |
1640 | @@ -628,7 +668,8 @@ class Renderer(renderer.Renderer): |
1641 | dns_path = util.target_path(target, self.dns_path) |
1642 | resolv_content = self._render_dns(network_state, |
1643 | existing_dns_path=dns_path) |
1644 | - util.write_file(dns_path, resolv_content, file_mode) |
1645 | + if resolv_content: |
1646 | + util.write_file(dns_path, resolv_content, file_mode) |
1647 | if self.networkmanager_conf_path: |
1648 | nm_conf_path = util.target_path(target, |
1649 | self.networkmanager_conf_path) |
1650 | @@ -640,6 +681,8 @@ class Renderer(renderer.Renderer): |
1651 | netrules_content = self._render_persistent_net(network_state) |
1652 | netrules_path = util.target_path(target, self.netrules_path) |
1653 | util.write_file(netrules_path, netrules_content, file_mode) |
1654 | + if available_nm(target=target): |
1655 | + enable_ifcfg_rh(util.target_path(target, path=NM_CFG_FILE)) |
1656 | |
1657 | sysconfig_path = util.target_path(target, templates.get('control')) |
1658 | # Distros configuring /etc/sysconfig/network as a file e.g. Centos |
1659 | @@ -654,6 +697,13 @@ class Renderer(renderer.Renderer): |
1660 | |
1661 | |
1662 | def available(target=None): |
1663 | + sysconfig = available_sysconfig(target=target) |
1664 | + nm = available_nm(target=target) |
1665 | + |
1666 | + return any([nm, sysconfig]) |
1667 | + |
1668 | + |
1669 | +def available_sysconfig(target=None): |
1670 | expected = ['ifup', 'ifdown'] |
1671 | search = ['/sbin', '/usr/sbin'] |
1672 | for p in expected: |
1673 | @@ -669,4 +719,10 @@ def available(target=None): |
1674 | return True |
1675 | |
1676 | |
1677 | +def available_nm(target=None): |
1678 | + if not os.path.isfile(util.target_path(target, path=NM_CFG_FILE)): |
1679 | + return False |
1680 | + return True |
1681 | + |
1682 | + |
1683 | # vi: ts=4 expandtab |
1684 | diff --git a/cloudinit/net/tests/test_dhcp.py b/cloudinit/net/tests/test_dhcp.py |
1685 | index db25b6f..79e8842 100644 |
1686 | --- a/cloudinit/net/tests/test_dhcp.py |
1687 | +++ b/cloudinit/net/tests/test_dhcp.py |
1688 | @@ -1,15 +1,17 @@ |
1689 | # This file is part of cloud-init. See LICENSE file for license information. |
1690 | |
1691 | +import httpretty |
1692 | import os |
1693 | import signal |
1694 | from textwrap import dedent |
1695 | |
1696 | +import cloudinit.net as net |
1697 | from cloudinit.net.dhcp import ( |
1698 | InvalidDHCPLeaseFileError, maybe_perform_dhcp_discovery, |
1699 | parse_dhcp_lease_file, dhcp_discovery, networkd_load_leases) |
1700 | from cloudinit.util import ensure_file, write_file |
1701 | from cloudinit.tests.helpers import ( |
1702 | - CiTestCase, mock, populate_dir, wrap_and_call) |
1703 | + CiTestCase, HttprettyTestCase, mock, populate_dir, wrap_and_call) |
1704 | |
1705 | |
1706 | class TestParseDHCPLeasesFile(CiTestCase): |
1707 | @@ -143,16 +145,20 @@ class TestDHCPDiscoveryClean(CiTestCase): |
1708 | 'subnet-mask': '255.255.255.0', 'routers': '192.168.2.1'}], |
1709 | dhcp_discovery(dhclient_script, 'eth9', tmpdir)) |
1710 | self.assertIn( |
1711 | - "pid file contains non-integer content ''", self.logs.getvalue()) |
1712 | + "dhclient(pid=, parentpid=unknown) failed " |
1713 | + "to daemonize after 10.0 seconds", |
1714 | + self.logs.getvalue()) |
1715 | m_kill.assert_not_called() |
1716 | |
1717 | + @mock.patch('cloudinit.net.dhcp.util.get_proc_ppid') |
1718 | @mock.patch('cloudinit.net.dhcp.os.kill') |
1719 | @mock.patch('cloudinit.net.dhcp.util.wait_for_files') |
1720 | @mock.patch('cloudinit.net.dhcp.util.subp') |
1721 | def test_dhcp_discovery_run_in_sandbox_waits_on_lease_and_pid(self, |
1722 | m_subp, |
1723 | m_wait, |
1724 | - m_kill): |
1725 | + m_kill, |
1726 | + m_getppid): |
1727 | """dhcp_discovery waits for the presence of pidfile and dhcp.leases.""" |
1728 | tmpdir = self.tmp_dir() |
1729 | dhclient_script = os.path.join(tmpdir, 'dhclient.orig') |
1730 | @@ -162,6 +168,7 @@ class TestDHCPDiscoveryClean(CiTestCase): |
1731 | pidfile = self.tmp_path('dhclient.pid', tmpdir) |
1732 | leasefile = self.tmp_path('dhcp.leases', tmpdir) |
1733 | m_wait.return_value = [pidfile] # Return the missing pidfile wait for |
1734 | + m_getppid.return_value = 1 # Indicate that dhclient has daemonized |
1735 | self.assertEqual([], dhcp_discovery(dhclient_script, 'eth9', tmpdir)) |
1736 | self.assertEqual( |
1737 | mock.call([pidfile, leasefile], maxwait=5, naplen=0.01), |
1738 | @@ -171,9 +178,10 @@ class TestDHCPDiscoveryClean(CiTestCase): |
1739 | self.logs.getvalue()) |
1740 | m_kill.assert_not_called() |
1741 | |
1742 | + @mock.patch('cloudinit.net.dhcp.util.get_proc_ppid') |
1743 | @mock.patch('cloudinit.net.dhcp.os.kill') |
1744 | @mock.patch('cloudinit.net.dhcp.util.subp') |
1745 | - def test_dhcp_discovery_run_in_sandbox(self, m_subp, m_kill): |
1746 | + def test_dhcp_discovery_run_in_sandbox(self, m_subp, m_kill, m_getppid): |
1747 | """dhcp_discovery brings up the interface and runs dhclient. |
1748 | |
1749 | It also returns the parsed dhcp.leases file generated in the sandbox. |
1750 | @@ -195,6 +203,7 @@ class TestDHCPDiscoveryClean(CiTestCase): |
1751 | pid_file = os.path.join(tmpdir, 'dhclient.pid') |
1752 | my_pid = 1 |
1753 | write_file(pid_file, "%d\n" % my_pid) |
1754 | + m_getppid.return_value = 1 # Indicate that dhclient has daemonized |
1755 | |
1756 | self.assertItemsEqual( |
1757 | [{'interface': 'eth9', 'fixed-address': '192.168.2.74', |
1758 | @@ -321,3 +330,37 @@ class TestSystemdParseLeases(CiTestCase): |
1759 | '9': self.lxd_lease}) |
1760 | self.assertEqual({'1': self.azure_parsed, '9': self.lxd_parsed}, |
1761 | networkd_load_leases(self.lease_d)) |
1762 | + |
1763 | + |
1764 | +class TestEphemeralDhcpNoNetworkSetup(HttprettyTestCase): |
1765 | + |
1766 | + @mock.patch('cloudinit.net.dhcp.maybe_perform_dhcp_discovery') |
1767 | + def test_ephemeral_dhcp_no_network_if_url_connectivity(self, m_dhcp): |
1768 | + """No EphemeralDhcp4 network setup when connectivity_url succeeds.""" |
1769 | + url = 'http://example.org/index.html' |
1770 | + |
1771 | + httpretty.register_uri(httpretty.GET, url) |
1772 | + with net.dhcp.EphemeralDHCPv4(connectivity_url=url) as lease: |
1773 | + self.assertIsNone(lease) |
1774 | + # Ensure that no dhcp discovery was attempted: |
1775 | + m_dhcp.assert_not_called() |
1776 | + |
1777 | + @mock.patch('cloudinit.net.dhcp.util.subp') |
1778 | + @mock.patch('cloudinit.net.dhcp.maybe_perform_dhcp_discovery') |
1779 | + def test_ephemeral_dhcp_setup_network_if_url_connectivity( |
1780 | + self, m_dhcp, m_subp): |
1781 | + """No EphemeralDhcp4 network setup when connectivity_url succeeds.""" |
1782 | + url = 'http://example.org/index.html' |
1783 | + fake_lease = { |
1784 | + 'interface': 'eth9', 'fixed-address': '192.168.2.2', |
1785 | + 'subnet-mask': '255.255.0.0'} |
1786 | + m_dhcp.return_value = [fake_lease] |
1787 | + m_subp.return_value = ('', '') |
1788 | + |
1789 | + httpretty.register_uri(httpretty.GET, url, body={}, status=404) |
1790 | + with net.dhcp.EphemeralDHCPv4(connectivity_url=url) as lease: |
1791 | + self.assertEqual(fake_lease, lease) |
1792 | + # Ensure that dhcp discovery occurs |
1793 | + self.assertEqual(1, m_dhcp.call_count) |
1794 | + |
1795 | +# vi: ts=4 expandtab |
1796 | diff --git a/cloudinit/net/tests/test_init.py b/cloudinit/net/tests/test_init.py |
1797 | index 58e0a59..f55c31e 100644 |
1798 | --- a/cloudinit/net/tests/test_init.py |
1799 | +++ b/cloudinit/net/tests/test_init.py |
1800 | @@ -2,14 +2,16 @@ |
1801 | |
1802 | import copy |
1803 | import errno |
1804 | +import httpretty |
1805 | import mock |
1806 | import os |
1807 | +import requests |
1808 | import textwrap |
1809 | import yaml |
1810 | |
1811 | import cloudinit.net as net |
1812 | from cloudinit.util import ensure_file, write_file, ProcessExecutionError |
1813 | -from cloudinit.tests.helpers import CiTestCase |
1814 | +from cloudinit.tests.helpers import CiTestCase, HttprettyTestCase |
1815 | |
1816 | |
1817 | class TestSysDevPath(CiTestCase): |
1818 | @@ -458,6 +460,22 @@ class TestEphemeralIPV4Network(CiTestCase): |
1819 | self.assertEqual(expected_setup_calls, m_subp.call_args_list) |
1820 | m_subp.assert_has_calls(expected_teardown_calls) |
1821 | |
1822 | + @mock.patch('cloudinit.net.readurl') |
1823 | + def test_ephemeral_ipv4_no_network_if_url_connectivity( |
1824 | + self, m_readurl, m_subp): |
1825 | + """No network setup is performed if we can successfully connect to |
1826 | + connectivity_url.""" |
1827 | + params = { |
1828 | + 'interface': 'eth0', 'ip': '192.168.2.2', |
1829 | + 'prefix_or_mask': '255.255.255.0', 'broadcast': '192.168.2.255', |
1830 | + 'connectivity_url': 'http://example.org/index.html'} |
1831 | + |
1832 | + with net.EphemeralIPv4Network(**params): |
1833 | + self.assertEqual([mock.call('http://example.org/index.html', |
1834 | + timeout=5)], m_readurl.call_args_list) |
1835 | + # Ensure that no ephemeral network commands ran: |
1836 | + m_subp.assert_not_called() |
1837 | + |
1838 | def test_ephemeral_ipv4_network_noop_when_configured(self, m_subp): |
1839 | """EphemeralIPv4Network handles exception when address is setup. |
1840 | |
1841 | @@ -619,3 +637,35 @@ class TestApplyNetworkCfgNames(CiTestCase): |
1842 | def test_apply_v2_renames_raises_runtime_error_on_unknown_version(self): |
1843 | with self.assertRaises(RuntimeError): |
1844 | net.apply_network_config_names(yaml.load("version: 3")) |
1845 | + |
1846 | + |
1847 | +class TestHasURLConnectivity(HttprettyTestCase): |
1848 | + |
1849 | + def setUp(self): |
1850 | + super(TestHasURLConnectivity, self).setUp() |
1851 | + self.url = 'http://fake/' |
1852 | + self.kwargs = {'allow_redirects': True, 'timeout': 5.0} |
1853 | + |
1854 | + @mock.patch('cloudinit.net.readurl') |
1855 | + def test_url_timeout_on_connectivity_check(self, m_readurl): |
1856 | + """A timeout of 5 seconds is provided when reading a url.""" |
1857 | + self.assertTrue( |
1858 | + net.has_url_connectivity(self.url), 'Expected True on url connect') |
1859 | + |
1860 | + def test_true_on_url_connectivity_success(self): |
1861 | + httpretty.register_uri(httpretty.GET, self.url) |
1862 | + self.assertTrue( |
1863 | + net.has_url_connectivity(self.url), 'Expected True on url connect') |
1864 | + |
1865 | + @mock.patch('requests.Session.request') |
1866 | + def test_false_on_url_connectivity_timeout(self, m_request): |
1867 | + """A timeout raised accessing the url will return False.""" |
1868 | + m_request.side_effect = requests.Timeout('Fake Connection Timeout') |
1869 | + self.assertFalse( |
1870 | + net.has_url_connectivity(self.url), |
1871 | + 'Expected False on url timeout') |
1872 | + |
1873 | + def test_false_on_url_connectivity_failure(self): |
1874 | + httpretty.register_uri(httpretty.GET, self.url, body={}, status=404) |
1875 | + self.assertFalse( |
1876 | + net.has_url_connectivity(self.url), 'Expected False on url fail') |
1877 | diff --git a/cloudinit/sources/DataSourceAliYun.py b/cloudinit/sources/DataSourceAliYun.py |
1878 | index 858e082..45cc9f0 100644 |
1879 | --- a/cloudinit/sources/DataSourceAliYun.py |
1880 | +++ b/cloudinit/sources/DataSourceAliYun.py |
1881 | @@ -1,7 +1,5 @@ |
1882 | # This file is part of cloud-init. See LICENSE file for license information. |
1883 | |
1884 | -import os |
1885 | - |
1886 | from cloudinit import sources |
1887 | from cloudinit.sources import DataSourceEc2 as EC2 |
1888 | from cloudinit import util |
1889 | @@ -18,25 +16,17 @@ class DataSourceAliYun(EC2.DataSourceEc2): |
1890 | min_metadata_version = '2016-01-01' |
1891 | extended_metadata_versions = [] |
1892 | |
1893 | - def __init__(self, sys_cfg, distro, paths): |
1894 | - super(DataSourceAliYun, self).__init__(sys_cfg, distro, paths) |
1895 | - self.seed_dir = os.path.join(paths.seed_dir, "AliYun") |
1896 | - |
1897 | def get_hostname(self, fqdn=False, resolve_ip=False, metadata_only=False): |
1898 | return self.metadata.get('hostname', 'localhost.localdomain') |
1899 | |
1900 | def get_public_ssh_keys(self): |
1901 | return parse_public_keys(self.metadata.get('public-keys', {})) |
1902 | |
1903 | - @property |
1904 | - def cloud_platform(self): |
1905 | - if self._cloud_platform is None: |
1906 | - if _is_aliyun(): |
1907 | - self._cloud_platform = EC2.Platforms.ALIYUN |
1908 | - else: |
1909 | - self._cloud_platform = EC2.Platforms.NO_EC2_METADATA |
1910 | - |
1911 | - return self._cloud_platform |
1912 | + def _get_cloud_name(self): |
1913 | + if _is_aliyun(): |
1914 | + return EC2.CloudNames.ALIYUN |
1915 | + else: |
1916 | + return EC2.CloudNames.NO_EC2_METADATA |
1917 | |
1918 | |
1919 | def _is_aliyun(): |
1920 | diff --git a/cloudinit/sources/DataSourceAltCloud.py b/cloudinit/sources/DataSourceAltCloud.py |
1921 | index 8cd312d..5270fda 100644 |
1922 | --- a/cloudinit/sources/DataSourceAltCloud.py |
1923 | +++ b/cloudinit/sources/DataSourceAltCloud.py |
1924 | @@ -89,7 +89,9 @@ class DataSourceAltCloud(sources.DataSource): |
1925 | ''' |
1926 | Description: |
1927 | Get the type for the cloud back end this instance is running on |
1928 | - by examining the string returned by reading the dmi data. |
1929 | + by examining the string returned by reading either: |
1930 | + CLOUD_INFO_FILE or |
1931 | + the dmi data. |
1932 | |
1933 | Input: |
1934 | None |
1935 | @@ -99,7 +101,14 @@ class DataSourceAltCloud(sources.DataSource): |
1936 | 'RHEV', 'VSPHERE' or 'UNKNOWN' |
1937 | |
1938 | ''' |
1939 | - |
1940 | + if os.path.exists(CLOUD_INFO_FILE): |
1941 | + try: |
1942 | + cloud_type = util.load_file(CLOUD_INFO_FILE).strip().upper() |
1943 | + except IOError: |
1944 | + util.logexc(LOG, 'Unable to access cloud info file at %s.', |
1945 | + CLOUD_INFO_FILE) |
1946 | + return 'UNKNOWN' |
1947 | + return cloud_type |
1948 | system_name = util.read_dmi_data("system-product-name") |
1949 | if not system_name: |
1950 | return 'UNKNOWN' |
1951 | @@ -134,15 +143,7 @@ class DataSourceAltCloud(sources.DataSource): |
1952 | |
1953 | LOG.debug('Invoked get_data()') |
1954 | |
1955 | - if os.path.exists(CLOUD_INFO_FILE): |
1956 | - try: |
1957 | - cloud_type = util.load_file(CLOUD_INFO_FILE).strip().upper() |
1958 | - except IOError: |
1959 | - util.logexc(LOG, 'Unable to access cloud info file at %s.', |
1960 | - CLOUD_INFO_FILE) |
1961 | - return False |
1962 | - else: |
1963 | - cloud_type = self.get_cloud_type() |
1964 | + cloud_type = self.get_cloud_type() |
1965 | |
1966 | LOG.debug('cloud_type: %s', str(cloud_type)) |
1967 | |
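With the CLOUD_INFO_FILE read folded into get_cloud_type, detection now has a single lookup order: override file first, DMI second. A standalone sketch (detect_cloud_type is a hypothetical name, the DMI reader is injected, and the RHEV/VSPHERE string matching is an assumption about code outside this hunk):

```python
import os


def detect_cloud_type(info_file, read_dmi):
    """Override file wins; otherwise classify from the DMI product name."""
    if os.path.exists(info_file):
        try:
            with open(info_file) as f:
                return f.read().strip().upper()
        except IOError:
            # Mirrors the diff: an unreadable override file is UNKNOWN,
            # not a hard failure.
            return 'UNKNOWN'
    system_name = (read_dmi() or '').upper()
    if 'RHEV' in system_name:
        return 'RHEV'
    if 'VMWARE' in system_name:
        return 'VSPHERE'
    return 'UNKNOWN'
```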
1968 | @@ -161,6 +162,15 @@ class DataSourceAltCloud(sources.DataSource): |
1969 | util.logexc(LOG, 'Failed accessing user data.') |
1970 | return False |
1971 | |
1972 | + def _get_subplatform(self): |
1973 | + """Return the subplatform metadata details.""" |
1974 | + cloud_type = self.get_cloud_type() |
1975 | + if not hasattr(self, 'source'): |
1976 | + self.source = sources.METADATA_UNKNOWN |
1977 | + if cloud_type == 'RHEV': |
1978 | + self.source = '/dev/fd0' |
1979 | + return '%s (%s)' % (cloud_type.lower(), self.source) |
1980 | + |
1981 | def user_data_rhevm(self): |
1982 | ''' |
1983 | RHEVM specific userdata read |
1984 | @@ -232,6 +242,7 @@ class DataSourceAltCloud(sources.DataSource): |
1985 | try: |
1986 | return_str = util.mount_cb(cdrom_dev, read_user_data_callback) |
1987 | if return_str: |
1988 | + self.source = cdrom_dev |
1989 | break |
1990 | except OSError as err: |
1991 | if err.errno != errno.ENOENT: |
1992 | diff --git a/cloudinit/sources/DataSourceAzure.py b/cloudinit/sources/DataSourceAzure.py |
1993 | index 783445e..a4f998b 100644 |
1994 | --- a/cloudinit/sources/DataSourceAzure.py |
1995 | +++ b/cloudinit/sources/DataSourceAzure.py |
1996 | @@ -22,7 +22,8 @@ from cloudinit.event import EventType |
1997 | from cloudinit.net.dhcp import EphemeralDHCPv4 |
1998 | from cloudinit import sources |
1999 | from cloudinit.sources.helpers.azure import get_metadata_from_fabric |
2000 | -from cloudinit.url_helper import readurl, UrlError |
2001 | +from cloudinit.sources.helpers import netlink |
2002 | +from cloudinit.url_helper import UrlError, readurl, retry_on_url_exc |
2003 | from cloudinit import util |
2004 | |
2005 | LOG = logging.getLogger(__name__) |
2006 | @@ -57,7 +58,7 @@ IMDS_URL = "http://169.254.169.254/metadata/" |
2007 | # List of static scripts and network config artifacts created by |
2008 | # stock ubuntu supported images. |
2009 | UBUNTU_EXTENDED_NETWORK_SCRIPTS = [ |
2010 | - '/etc/netplan/90-azure-hotplug.yaml', |
2011 | + '/etc/netplan/90-hotplug-azure.yaml', |
2012 | '/usr/local/sbin/ephemeral_eth.sh', |
2013 | '/etc/udev/rules.d/10-net-device-added.rules', |
2014 | '/run/network/interfaces.ephemeral.d', |
2015 | @@ -207,7 +208,9 @@ BUILTIN_DS_CONFIG = { |
2016 | }, |
2017 | 'disk_aliases': {'ephemeral0': RESOURCE_DISK_PATH}, |
2018 | 'dhclient_lease_file': LEASE_FILE, |
2019 | + 'apply_network_config': True, # Use IMDS published network configuration |
2020 | } |
2021 | +# RELEASE_BLOCKER: Xenial and earlier apply_network_config default is False |
2022 | |
2023 | BUILTIN_CLOUD_CONFIG = { |
2024 | 'disk_setup': { |
2025 | @@ -278,6 +281,7 @@ class DataSourceAzure(sources.DataSource): |
2026 | self._network_config = None |
2027 | # Regenerate network config new_instance boot and every boot |
2028 | self.update_events['network'].add(EventType.BOOT) |
2029 | + self._ephemeral_dhcp_ctx = None |
2030 | |
2031 | def __str__(self): |
2032 | root = sources.DataSource.__str__(self) |
2033 | @@ -351,6 +355,14 @@ class DataSourceAzure(sources.DataSource): |
2034 | metadata['public-keys'] = key_value or pubkeys_from_crt_files(fp_files) |
2035 | return metadata |
2036 | |
2037 | + def _get_subplatform(self): |
2038 | + """Return the subplatform metadata source details.""" |
2039 | + if self.seed.startswith('/dev'): |
2040 | + subplatform_type = 'config-disk' |
2041 | + else: |
2042 | + subplatform_type = 'seed-dir' |
2043 | + return '%s (%s)' % (subplatform_type, self.seed) |
2044 | + |
2045 | def crawl_metadata(self): |
2046 | """Walk all instance metadata sources returning a dict on success. |
2047 | |
2048 | @@ -396,10 +408,15 @@ class DataSourceAzure(sources.DataSource): |
2049 | LOG.warning("%s was not mountable", cdev) |
2050 | continue |
2051 | |
2052 | - if reprovision or self._should_reprovision(ret): |
2053 | + perform_reprovision = reprovision or self._should_reprovision(ret) |
2054 | + if perform_reprovision: |
2055 | + if util.is_FreeBSD(): |
2056 | + msg = "FreeBSD is not supported for PPS VMs" |
2057 | + LOG.error(msg) |
2058 | + raise sources.InvalidMetaDataException(msg) |
2059 | ret = self._reprovision() |
2060 | imds_md = get_metadata_from_imds( |
2061 | - self.fallback_interface, retries=3) |
2062 | + self.fallback_interface, retries=10) |
2063 | (md, userdata_raw, cfg, files) = ret |
2064 | self.seed = cdev |
2065 | crawled_data.update({ |
2066 | @@ -424,6 +441,18 @@ class DataSourceAzure(sources.DataSource): |
2067 | crawled_data['metadata']['random_seed'] = seed |
2068 | crawled_data['metadata']['instance-id'] = util.read_dmi_data( |
2069 | 'system-uuid') |
2070 | + |
2071 | + if perform_reprovision: |
2072 | + LOG.info("Reporting ready to Azure after getting ReprovisionData") |
2073 | + use_cached_ephemeral = (net.is_up(self.fallback_interface) and |
2074 | + getattr(self, '_ephemeral_dhcp_ctx', None)) |
2075 | + if use_cached_ephemeral: |
2076 | + self._report_ready(lease=self._ephemeral_dhcp_ctx.lease) |
2077 | + self._ephemeral_dhcp_ctx.clean_network() # Teardown ephemeral |
2078 | + else: |
2079 | + with EphemeralDHCPv4() as lease: |
2080 | + self._report_ready(lease=lease) |
2081 | + |
2082 | return crawled_data |
2083 | |
2084 | def _is_platform_viable(self): |
2085 | @@ -450,7 +479,8 @@ class DataSourceAzure(sources.DataSource): |
2086 | except sources.InvalidMetaDataException as e: |
2087 | LOG.warning('Could not crawl Azure metadata: %s', e) |
2088 | return False |
2089 | - if self.distro and self.distro.name == 'ubuntu': |
2090 | + if (self.distro and self.distro.name == 'ubuntu' and |
2091 | + self.ds_cfg.get('apply_network_config')): |
2092 | maybe_remove_ubuntu_network_config_scripts() |
2093 | |
2094 | # Process crawled data and augment with various config defaults |
2095 | @@ -498,8 +528,8 @@ class DataSourceAzure(sources.DataSource): |
2096 | response. Then return the returned JSON object.""" |
2097 | url = IMDS_URL + "reprovisiondata?api-version=2017-04-02" |
2098 | headers = {"Metadata": "true"} |
2099 | + nl_sock = None |
2100 | report_ready = bool(not os.path.isfile(REPORTED_READY_MARKER_FILE)) |
2101 | - LOG.debug("Start polling IMDS") |
2102 | |
2103 | def exc_cb(msg, exception): |
2104 | if isinstance(exception, UrlError) and exception.code == 404: |
2105 | @@ -508,25 +538,47 @@ class DataSourceAzure(sources.DataSource): |
2106 | # call DHCP and setup the ephemeral network to acquire the new IP. |
2107 | return False |
2108 | |
2109 | + LOG.debug("Wait for vnetswitch to happen") |
2110 | while True: |
2111 | try: |
2112 | - with EphemeralDHCPv4() as lease: |
2113 | - if report_ready: |
2114 | - path = REPORTED_READY_MARKER_FILE |
2115 | - LOG.info( |
2116 | - "Creating a marker file to report ready: %s", path) |
2117 | - util.write_file(path, "{pid}: {time}\n".format( |
2118 | - pid=os.getpid(), time=time())) |
2119 | - self._report_ready(lease=lease) |
2120 | - report_ready = False |
2121 | + # Save our EphemeralDHCPv4 context so we avoid repeated dhcp |
2122 | + self._ephemeral_dhcp_ctx = EphemeralDHCPv4() |
2123 | + lease = self._ephemeral_dhcp_ctx.obtain_lease() |
2124 | + if report_ready: |
2125 | + try: |
2126 | + nl_sock = netlink.create_bound_netlink_socket() |
2127 | + except netlink.NetlinkCreateSocketError as e: |
2128 | + LOG.warning(e) |
2129 | + self._ephemeral_dhcp_ctx.clean_network() |
2130 | + return |
2131 | + path = REPORTED_READY_MARKER_FILE |
2132 | + LOG.info( |
2133 | + "Creating a marker file to report ready: %s", path) |
2134 | + util.write_file(path, "{pid}: {time}\n".format( |
2135 | + pid=os.getpid(), time=time())) |
2136 | + self._report_ready(lease=lease) |
2137 | + report_ready = False |
2138 | + try: |
2139 | + netlink.wait_for_media_disconnect_connect( |
2140 | + nl_sock, lease['interface']) |
2141 | + except AssertionError as error: |
2142 | + LOG.error(error) |
2143 | + return |
2144 | + self._ephemeral_dhcp_ctx.clean_network() |
2145 | + else: |
2146 | return readurl(url, timeout=1, headers=headers, |
2147 | - exception_cb=exc_cb, infinite=True).contents |
2148 | + exception_cb=exc_cb, infinite=True, |
2149 | + log_req_resp=False).contents |
2150 | except UrlError: |
2151 | + # Teardown our EphemeralDHCPv4 context on failure as we retry |
2152 | + self._ephemeral_dhcp_ctx.clean_network() |
2153 | pass |
2154 | + finally: |
2155 | + if nl_sock: |
2156 | + nl_sock.close() |
2157 | |
2158 | def _report_ready(self, lease): |
2159 | - """Tells the fabric provisioning has completed |
2160 | - before we go into our polling loop.""" |
2161 | + """Tells the fabric provisioning has completed """ |
2162 | try: |
2163 | get_metadata_from_fabric(None, lease['unknown-245']) |
2164 | except Exception: |
2165 | @@ -611,7 +663,11 @@ class DataSourceAzure(sources.DataSource): |
2166 | the blacklisted devices. |
2167 | """ |
2168 | if not self._network_config: |
2169 | - self._network_config = parse_network_config(self._metadata_imds) |
2170 | + if self.ds_cfg.get('apply_network_config'): |
2171 | + nc_src = self._metadata_imds |
2172 | + else: |
2173 | + nc_src = None |
2174 | + self._network_config = parse_network_config(nc_src) |
2175 | return self._network_config |
2176 | |
2177 | |
2178 | @@ -692,7 +748,7 @@ def can_dev_be_reformatted(devpath, preserve_ntfs): |
2179 | file_count = util.mount_cb(cand_path, count_files, mtype="ntfs", |
2180 | update_env_for_mount={'LANG': 'C'}) |
2181 | except util.MountFailedError as e: |
2182 | - if "mount: unknown filesystem type 'ntfs'" in str(e): |
2183 | + if "unknown filesystem type 'ntfs'" in str(e): |
2184 | return True, (bmsg + ' but this system cannot mount NTFS,' |
2185 | ' assuming there are no important files.' |
2186 | ' Formatting allowed.') |
2187 | @@ -920,12 +976,12 @@ def read_azure_ovf(contents): |
2188 | lambda n: |
2189 | n.localName == "LinuxProvisioningConfigurationSet") |
2190 | |
2191 | - if len(results) == 0: |
2192 | + if len(lpcs_nodes) == 0: |
2193 | raise NonAzureDataSource("No LinuxProvisioningConfigurationSet") |
2194 | - if len(results) > 1: |
2195 | + if len(lpcs_nodes) > 1: |
2196 | raise BrokenAzureDataSource("found '%d' %ss" % |
2197 | - ("LinuxProvisioningConfigurationSet", |
2198 | - len(results))) |
2199 | + (len(lpcs_nodes), |
2200 | + "LinuxProvisioningConfigurationSet")) |
2201 | lpcs = lpcs_nodes[0] |
2202 | |
2203 | if not lpcs.hasChildNodes(): |
2204 | @@ -1154,17 +1210,12 @@ def get_metadata_from_imds(fallback_nic, retries): |
2205 | |
2206 | def _get_metadata_from_imds(retries): |
2207 | |
2208 | - def retry_on_url_error(msg, exception): |
2209 | - if isinstance(exception, UrlError) and exception.code == 404: |
2210 | - return True # Continue retries |
2211 | - return False # Stop retries on all other exceptions |
2212 | - |
2213 | url = IMDS_URL + "instance?api-version=2017-12-01" |
2214 | headers = {"Metadata": "true"} |
2215 | try: |
2216 | response = readurl( |
2217 | url, timeout=1, headers=headers, retries=retries, |
2218 | - exception_cb=retry_on_url_error) |
2219 | + exception_cb=retry_on_url_exc) |
2220 | except Exception as e: |
2221 | LOG.debug('Ignoring IMDS instance metadata: %s', e) |
2222 | return {} |
2223 | @@ -1187,7 +1238,7 @@ def maybe_remove_ubuntu_network_config_scripts(paths=None): |
2224 | additional interfaces which get attached by a customer at some point |
2225 | after initial boot. Since the Azure datasource can now regenerate |
2226 | network configuration as metadata reports these new devices, we no longer |
2227 | - want the udev rules or netplan's 90-azure-hotplug.yaml to configure |
2228 | + want the udev rules or netplan's 90-hotplug-azure.yaml to configure |
2229 | networking on eth1 or greater as it might collide with cloud-init's |
2230 | configuration. |
2231 | |
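Reviewer note on the Azure hunks above: the IMDS polling loop treats a 404 as "reprovision data not published yet, keep polling" and any other failure as a signal to tear down the ephemeral DHCP context and retry. A minimal sketch of that retry predicate, with a stand-in `UrlError` class (the real one lives in `cloudinit.url_helper`, and the real `retry_on_url_exc` may cover additional cases such as timeouts):

```python
class UrlError(Exception):
    """Stand-in for cloudinit.url_helper.UrlError (simplified for illustration)."""
    def __init__(self, cause, code=None):
        super().__init__(str(cause))
        self.code = code


def retry_on_url_exc(msg, exception):
    """Retry only when the metadata service answers 404 (data not ready yet)."""
    if isinstance(exception, UrlError) and exception.code == 404:
        return True   # keep polling IMDS
    return False      # any other failure: stop, tear down DHCP, and retry
```

This is why the hunk replaces the local `retry_on_url_error` closure in `_get_metadata_from_imds` with the shared helper: the predicate is identical in both call sites.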
2232 | diff --git a/cloudinit/sources/DataSourceBigstep.py b/cloudinit/sources/DataSourceBigstep.py |
2233 | index 699a85b..52fff20 100644 |
2234 | --- a/cloudinit/sources/DataSourceBigstep.py |
2235 | +++ b/cloudinit/sources/DataSourceBigstep.py |
2236 | @@ -36,6 +36,10 @@ class DataSourceBigstep(sources.DataSource): |
2237 | self.userdata_raw = decoded["userdata_raw"] |
2238 | return True |
2239 | |
2240 | + def _get_subplatform(self): |
2241 | + """Return the subplatform metadata source details.""" |
2242 | + return 'metadata (%s)' % get_url_from_file() |
2243 | + |
2244 | |
2245 | def get_url_from_file(): |
2246 | try: |
2247 | diff --git a/cloudinit/sources/DataSourceCloudSigma.py b/cloudinit/sources/DataSourceCloudSigma.py |
2248 | index c816f34..2955d3f 100644 |
2249 | --- a/cloudinit/sources/DataSourceCloudSigma.py |
2250 | +++ b/cloudinit/sources/DataSourceCloudSigma.py |
2251 | @@ -7,7 +7,7 @@ |
2252 | from base64 import b64decode |
2253 | import re |
2254 | |
2255 | -from cloudinit.cs_utils import Cepko |
2256 | +from cloudinit.cs_utils import Cepko, SERIAL_PORT |
2257 | |
2258 | from cloudinit import log as logging |
2259 | from cloudinit import sources |
2260 | @@ -84,6 +84,10 @@ class DataSourceCloudSigma(sources.DataSource): |
2261 | |
2262 | return True |
2263 | |
2264 | + def _get_subplatform(self): |
2265 | + """Return the subplatform metadata source details.""" |
2266 | + return 'cepko (%s)' % SERIAL_PORT |
2267 | + |
2268 | def get_hostname(self, fqdn=False, resolve_ip=False, metadata_only=False): |
2269 | """ |
2270 | Cleans up and uses the server's name if the latter is set. Otherwise |
2271 | diff --git a/cloudinit/sources/DataSourceConfigDrive.py b/cloudinit/sources/DataSourceConfigDrive.py |
2272 | index 664dc4b..564e3eb 100644 |
2273 | --- a/cloudinit/sources/DataSourceConfigDrive.py |
2274 | +++ b/cloudinit/sources/DataSourceConfigDrive.py |
2275 | @@ -160,6 +160,18 @@ class DataSourceConfigDrive(openstack.SourceMixin, sources.DataSource): |
2276 | LOG.debug("no network configuration available") |
2277 | return self._network_config |
2278 | |
2279 | + @property |
2280 | + def platform(self): |
2281 | + return 'openstack' |
2282 | + |
2283 | + def _get_subplatform(self): |
2284 | + """Return the subplatform metadata source details.""" |
2285 | + if self.seed_dir in self.source: |
2286 | + subplatform_type = 'seed-dir' |
2287 | + elif self.source.startswith('/dev'): |
2288 | + subplatform_type = 'config-disk' |
2289 | + return '%s (%s)' % (subplatform_type, self.source) |
2290 | + |
2291 | |
2292 | def read_config_drive(source_dir): |
2293 | reader = openstack.ConfigDriveReader(source_dir) |
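On the `_get_subplatform` added to DataSourceConfigDrive above: the classification is purely path-based. A sketch of the same logic as a free function (the `seed_dir` default and the `'unknown'` fallback are assumptions for illustration; the diff as written leaves `subplatform_type` unbound when neither branch matches):

```python
def subplatform_for_source(source, seed_dir='/var/lib/cloud/seed/config_drive'):
    """Classify a config-drive source path as _get_subplatform does (sketch)."""
    if seed_dir in source:
        kind = 'seed-dir'       # metadata came from a local seed directory
    elif source.startswith('/dev'):
        kind = 'config-disk'    # metadata came from an attached disk
    else:
        kind = 'unknown'        # fallback the diff does not handle
    return '%s (%s)' % (kind, source)
```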
2294 | diff --git a/cloudinit/sources/DataSourceEc2.py b/cloudinit/sources/DataSourceEc2.py |
2295 | index 968ab3f..9ccf2cd 100644 |
2296 | --- a/cloudinit/sources/DataSourceEc2.py |
2297 | +++ b/cloudinit/sources/DataSourceEc2.py |
2298 | @@ -28,18 +28,16 @@ STRICT_ID_PATH = ("datasource", "Ec2", "strict_id") |
2299 | STRICT_ID_DEFAULT = "warn" |
2300 | |
2301 | |
2302 | -class Platforms(object): |
2303 | - # TODO Rename and move to cloudinit.cloud.CloudNames |
2304 | - ALIYUN = "AliYun" |
2305 | - AWS = "AWS" |
2306 | - BRIGHTBOX = "Brightbox" |
2307 | - SEEDED = "Seeded" |
2308 | +class CloudNames(object): |
2309 | + ALIYUN = "aliyun" |
2310 | + AWS = "aws" |
2311 | + BRIGHTBOX = "brightbox" |
2312 | # UNKNOWN indicates no positive id. If strict_id is 'warn' or 'false', |
2313 | # then an attempt at the Ec2 Metadata service will be made. |
2314 | - UNKNOWN = "Unknown" |
2315 | + UNKNOWN = "unknown" |
2316 | # NO_EC2_METADATA indicates this platform does not have a Ec2 metadata |
2317 | # service available. No attempt at the Ec2 Metadata service will be made. |
2318 | - NO_EC2_METADATA = "No-EC2-Metadata" |
2319 | + NO_EC2_METADATA = "no-ec2-metadata" |
2320 | |
2321 | |
2322 | class DataSourceEc2(sources.DataSource): |
2323 | @@ -61,8 +59,6 @@ class DataSourceEc2(sources.DataSource): |
2324 | url_max_wait = 120 |
2325 | url_timeout = 50 |
2326 | |
2327 | - _cloud_platform = None |
2328 | - |
2329 | _network_config = sources.UNSET # Used to cache calculated network cfg v1 |
2330 | |
2331 | # Whether we want to get network configuration from the metadata service. |
2332 | @@ -71,30 +67,21 @@ class DataSourceEc2(sources.DataSource): |
2333 | def __init__(self, sys_cfg, distro, paths): |
2334 | super(DataSourceEc2, self).__init__(sys_cfg, distro, paths) |
2335 | self.metadata_address = None |
2336 | - self.seed_dir = os.path.join(paths.seed_dir, "ec2") |
2337 | |
2338 | def _get_cloud_name(self): |
2339 | """Return the cloud name as identified during _get_data.""" |
2340 | - return self.cloud_platform |
2341 | + return identify_platform() |
2342 | |
2343 | def _get_data(self): |
2344 | - seed_ret = {} |
2345 | - if util.read_optional_seed(seed_ret, base=(self.seed_dir + "/")): |
2346 | - self.userdata_raw = seed_ret['user-data'] |
2347 | - self.metadata = seed_ret['meta-data'] |
2348 | - LOG.debug("Using seeded ec2 data from %s", self.seed_dir) |
2349 | - self._cloud_platform = Platforms.SEEDED |
2350 | - return True |
2351 | - |
2352 | strict_mode, _sleep = read_strict_mode( |
2353 | util.get_cfg_by_path(self.sys_cfg, STRICT_ID_PATH, |
2354 | STRICT_ID_DEFAULT), ("warn", None)) |
2355 | |
2356 | - LOG.debug("strict_mode: %s, cloud_platform=%s", |
2357 | - strict_mode, self.cloud_platform) |
2358 | - if strict_mode == "true" and self.cloud_platform == Platforms.UNKNOWN: |
2359 | + LOG.debug("strict_mode: %s, cloud_name=%s cloud_platform=%s", |
2360 | + strict_mode, self.cloud_name, self.platform) |
2361 | + if strict_mode == "true" and self.cloud_name == CloudNames.UNKNOWN: |
2362 | return False |
2363 | - elif self.cloud_platform == Platforms.NO_EC2_METADATA: |
2364 | + elif self.cloud_name == CloudNames.NO_EC2_METADATA: |
2365 | return False |
2366 | |
2367 | if self.perform_dhcp_setup: # Setup networking in init-local stage. |
2368 | @@ -103,13 +90,22 @@ class DataSourceEc2(sources.DataSource): |
2369 | return False |
2370 | try: |
2371 | with EphemeralDHCPv4(self.fallback_interface): |
2372 | - return util.log_time( |
2373 | + self._crawled_metadata = util.log_time( |
2374 | logfunc=LOG.debug, msg='Crawl of metadata service', |
2375 | - func=self._crawl_metadata) |
2376 | + func=self.crawl_metadata) |
2377 | except NoDHCPLeaseError: |
2378 | return False |
2379 | else: |
2380 | - return self._crawl_metadata() |
2381 | + self._crawled_metadata = util.log_time( |
2382 | + logfunc=LOG.debug, msg='Crawl of metadata service', |
2383 | + func=self.crawl_metadata) |
2384 | + if not self._crawled_metadata: |
2385 | + return False |
2386 | + self.metadata = self._crawled_metadata.get('meta-data', None) |
2387 | + self.userdata_raw = self._crawled_metadata.get('user-data', None) |
2388 | + self.identity = self._crawled_metadata.get( |
2389 | + 'dynamic', {}).get('instance-identity', {}).get('document', {}) |
2390 | + return True |
2391 | |
2392 | @property |
2393 | def launch_index(self): |
2394 | @@ -117,6 +113,15 @@ class DataSourceEc2(sources.DataSource): |
2395 | return None |
2396 | return self.metadata.get('ami-launch-index') |
2397 | |
2398 | + @property |
2399 | + def platform(self): |
2400 | + # Handle upgrade path of pickled ds |
2401 | + if not hasattr(self, '_platform_type'): |
2402 | + self._platform_type = DataSourceEc2.dsname.lower() |
2403 | + if not self._platform_type: |
2404 | + self._platform_type = DataSourceEc2.dsname.lower() |
2405 | + return self._platform_type |
2406 | + |
2407 | def get_metadata_api_version(self): |
2408 | """Get the best supported api version from the metadata service. |
2409 | |
2410 | @@ -144,7 +149,7 @@ class DataSourceEc2(sources.DataSource): |
2411 | return self.min_metadata_version |
2412 | |
2413 | def get_instance_id(self): |
2414 | - if self.cloud_platform == Platforms.AWS: |
2415 | + if self.cloud_name == CloudNames.AWS: |
2416 | # Prefer the ID from the instance identity document, but fall back |
2417 | if not getattr(self, 'identity', None): |
2418 | # If re-using cached datasource, it's get_data run didn't |
2419 | @@ -254,7 +259,7 @@ class DataSourceEc2(sources.DataSource): |
2420 | @property |
2421 | def availability_zone(self): |
2422 | try: |
2423 | - if self.cloud_platform == Platforms.AWS: |
2424 | + if self.cloud_name == CloudNames.AWS: |
2425 | return self.identity.get( |
2426 | 'availabilityZone', |
2427 | self.metadata['placement']['availability-zone']) |
2428 | @@ -265,7 +270,7 @@ class DataSourceEc2(sources.DataSource): |
2429 | |
2430 | @property |
2431 | def region(self): |
2432 | - if self.cloud_platform == Platforms.AWS: |
2433 | + if self.cloud_name == CloudNames.AWS: |
2434 | region = self.identity.get('region') |
2435 | # Fallback to trimming the availability zone if region is missing |
2436 | if self.availability_zone and not region: |
2437 | @@ -277,16 +282,10 @@ class DataSourceEc2(sources.DataSource): |
2438 | return az[:-1] |
2439 | return None |
2440 | |
2441 | - @property |
2442 | - def cloud_platform(self): # TODO rename cloud_name |
2443 | - if self._cloud_platform is None: |
2444 | - self._cloud_platform = identify_platform() |
2445 | - return self._cloud_platform |
2446 | - |
2447 | def activate(self, cfg, is_new_instance): |
2448 | if not is_new_instance: |
2449 | return |
2450 | - if self.cloud_platform == Platforms.UNKNOWN: |
2451 | + if self.cloud_name == CloudNames.UNKNOWN: |
2452 | warn_if_necessary( |
2453 | util.get_cfg_by_path(cfg, STRICT_ID_PATH, STRICT_ID_DEFAULT), |
2454 | cfg) |
2455 | @@ -306,13 +305,13 @@ class DataSourceEc2(sources.DataSource): |
2456 | result = None |
2457 | no_network_metadata_on_aws = bool( |
2458 | 'network' not in self.metadata and |
2459 | - self.cloud_platform == Platforms.AWS) |
2460 | + self.cloud_name == CloudNames.AWS) |
2461 | if no_network_metadata_on_aws: |
2462 | LOG.debug("Metadata 'network' not present:" |
2463 | " Refreshing stale metadata from prior to upgrade.") |
2464 | util.log_time( |
2465 | logfunc=LOG.debug, msg='Re-crawl of metadata service', |
2466 | - func=self._crawl_metadata) |
2467 | + func=self.get_data) |
2468 | |
2469 | # Limit network configuration to only the primary/fallback nic |
2470 | iface = self.fallback_interface |
2471 | @@ -340,28 +339,32 @@ class DataSourceEc2(sources.DataSource): |
2472 | return super(DataSourceEc2, self).fallback_interface |
2473 | return self._fallback_interface |
2474 | |
2475 | - def _crawl_metadata(self): |
2476 | + def crawl_metadata(self): |
2477 | """Crawl metadata service when available. |
2478 | |
2479 | - @returns: True on success, False otherwise. |
2480 | + @returns: Dictionary of crawled metadata content containing the keys: |
2481 | + meta-data, user-data and dynamic. |
2482 | """ |
2483 | if not self.wait_for_metadata_service(): |
2484 | - return False |
2485 | + return {} |
2486 | api_version = self.get_metadata_api_version() |
2487 | + crawled_metadata = {} |
2488 | try: |
2489 | - self.userdata_raw = ec2.get_instance_userdata( |
2490 | + crawled_metadata['user-data'] = ec2.get_instance_userdata( |
2491 | api_version, self.metadata_address) |
2492 | - self.metadata = ec2.get_instance_metadata( |
2493 | + crawled_metadata['meta-data'] = ec2.get_instance_metadata( |
2494 | api_version, self.metadata_address) |
2495 | - if self.cloud_platform == Platforms.AWS: |
2496 | - self.identity = ec2.get_instance_identity( |
2497 | - api_version, self.metadata_address).get('document', {}) |
2498 | + if self.cloud_name == CloudNames.AWS: |
2499 | + identity = ec2.get_instance_identity( |
2500 | + api_version, self.metadata_address) |
2501 | + crawled_metadata['dynamic'] = {'instance-identity': identity} |
2502 | except Exception: |
2503 | util.logexc( |
2504 | LOG, "Failed reading from metadata address %s", |
2505 | self.metadata_address) |
2506 | - return False |
2507 | - return True |
2508 | + return {} |
2509 | + crawled_metadata['_metadata_api_version'] = api_version |
2510 | + return crawled_metadata |
2511 | |
2512 | |
2513 | class DataSourceEc2Local(DataSourceEc2): |
2514 | @@ -375,10 +378,10 @@ class DataSourceEc2Local(DataSourceEc2): |
2515 | perform_dhcp_setup = True # Use dhcp before querying metadata |
2516 | |
2517 | def get_data(self): |
2518 | - supported_platforms = (Platforms.AWS,) |
2519 | - if self.cloud_platform not in supported_platforms: |
2520 | + supported_platforms = (CloudNames.AWS,) |
2521 | + if self.cloud_name not in supported_platforms: |
2522 | LOG.debug("Local Ec2 mode only supported on %s, not %s", |
2523 | - supported_platforms, self.cloud_platform) |
2524 | + supported_platforms, self.cloud_name) |
2525 | return False |
2526 | return super(DataSourceEc2Local, self).get_data() |
2527 | |
2528 | @@ -439,20 +442,20 @@ def identify_aws(data): |
2529 | if (data['uuid'].startswith('ec2') and |
2530 | (data['uuid_source'] == 'hypervisor' or |
2531 | data['uuid'] == data['serial'])): |
2532 | - return Platforms.AWS |
2533 | + return CloudNames.AWS |
2534 | |
2535 | return None |
2536 | |
2537 | |
2538 | def identify_brightbox(data): |
2539 | if data['serial'].endswith('brightbox.com'): |
2540 | - return Platforms.BRIGHTBOX |
2541 | + return CloudNames.BRIGHTBOX |
2542 | |
2543 | |
2544 | def identify_platform(): |
2545 | - # identify the platform and return an entry in Platforms. |
2546 | + # identify the platform and return an entry in CloudNames. |
2547 | data = _collect_platform_data() |
2548 | - checks = (identify_aws, identify_brightbox, lambda x: Platforms.UNKNOWN) |
2549 | + checks = (identify_aws, identify_brightbox, lambda x: CloudNames.UNKNOWN) |
2550 | for checker in checks: |
2551 | try: |
2552 | result = checker(data) |
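For context on the Ec2 refactor above: `_crawl_metadata` (which set attributes and returned a bool) becomes `crawl_metadata`, which returns a dict keyed by `meta-data`, `user-data` and `dynamic`, and `_get_data` unpacks it. A sketch of that unpacking step, mirroring the new `_get_data` body (names simplified, not the actual method):

```python
def unpack_crawled_metadata(crawled):
    """Mirror how _get_data consumes crawl_metadata()'s dict (illustrative)."""
    if not crawled:
        return None  # crawl failed; _get_data returns False in this case
    return {
        'metadata': crawled.get('meta-data'),
        'userdata_raw': crawled.get('user-data'),
        # AWS-only: instance-identity document nested under 'dynamic'
        'identity': crawled.get('dynamic', {}).get(
            'instance-identity', {}).get('document', {}),
    }
```

Keeping the raw dict in `self._crawled_metadata` is what lets `cloud-init query` expose the full crawled content later.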
2553 | diff --git a/cloudinit/sources/DataSourceIBMCloud.py b/cloudinit/sources/DataSourceIBMCloud.py |
2554 | index a535814..21e6ae6 100644 |
2555 | --- a/cloudinit/sources/DataSourceIBMCloud.py |
2556 | +++ b/cloudinit/sources/DataSourceIBMCloud.py |
2557 | @@ -157,6 +157,10 @@ class DataSourceIBMCloud(sources.DataSource): |
2558 | |
2559 | return True |
2560 | |
2561 | + def _get_subplatform(self): |
2562 | + """Return the subplatform metadata source details.""" |
2563 | + return '%s (%s)' % (self.platform, self.source) |
2564 | + |
2565 | def check_instance_id(self, sys_cfg): |
2566 | """quickly (local check only) if self.instance_id is still valid |
2567 | |
2568 | diff --git a/cloudinit/sources/DataSourceMAAS.py b/cloudinit/sources/DataSourceMAAS.py |
2569 | index bcb3854..61aa6d7 100644 |
2570 | --- a/cloudinit/sources/DataSourceMAAS.py |
2571 | +++ b/cloudinit/sources/DataSourceMAAS.py |
2572 | @@ -109,6 +109,10 @@ class DataSourceMAAS(sources.DataSource): |
2573 | LOG.warning("Invalid content in vendor-data: %s", e) |
2574 | self.vendordata_raw = None |
2575 | |
2576 | + def _get_subplatform(self): |
2577 | + """Return the subplatform metadata source details.""" |
2578 | + return 'seed-dir (%s)' % self.base_url |
2579 | + |
2580 | def wait_for_metadata_service(self, url): |
2581 | mcfg = self.ds_cfg |
2582 | max_wait = 120 |
2583 | diff --git a/cloudinit/sources/DataSourceNoCloud.py b/cloudinit/sources/DataSourceNoCloud.py |
2584 | index 2daea59..6860f0c 100644 |
2585 | --- a/cloudinit/sources/DataSourceNoCloud.py |
2586 | +++ b/cloudinit/sources/DataSourceNoCloud.py |
2587 | @@ -186,6 +186,27 @@ class DataSourceNoCloud(sources.DataSource): |
2588 | self._network_eni = mydata['meta-data'].get('network-interfaces') |
2589 | return True |
2590 | |
2591 | + @property |
2592 | + def platform_type(self): |
2593 | + # Handle upgrade path of pickled ds |
2594 | + if not hasattr(self, '_platform_type'): |
2595 | + self._platform_type = None |
2596 | + if not self._platform_type: |
2597 | + self._platform_type = 'lxd' if util.is_lxd() else 'nocloud' |
2598 | + return self._platform_type |
2599 | + |
2600 | + def _get_cloud_name(self): |
2601 | + """Return unknown when 'cloud-name' key is absent from metadata.""" |
2602 | + return sources.METADATA_UNKNOWN |
2603 | + |
2604 | + def _get_subplatform(self): |
2605 | + """Return the subplatform metadata source details.""" |
2606 | + if self.seed.startswith('/dev'): |
2607 | + subplatform_type = 'config-disk' |
2608 | + else: |
2609 | + subplatform_type = 'seed-dir' |
2610 | + return '%s (%s)' % (subplatform_type, self.seed) |
2611 | + |
2612 | def check_instance_id(self, sys_cfg): |
2613 | # quickly (local check only) if self.instance_id is still valid |
2614 | # we check kernel command line or files. |
2615 | @@ -290,6 +311,35 @@ def parse_cmdline_data(ds_id, fill, cmdline=None): |
2616 | return True |
2617 | |
2618 | |
2619 | +def _maybe_remove_top_network(cfg): |
2620 | + """If network-config contains top level 'network' key, then remove it. |
2621 | + |
2622 | + Some providers of network configuration may provide a top level |
2623 | + 'network' key (LP: #1798117) even though it is not necessary. |
2624 | + |
2625 | + Be friendly and remove it if it really seems so. |
2626 | + |
2627 | + Return the original value if no change or the updated value if changed.""" |
2628 | + nullval = object() |
2629 | + network_val = cfg.get('network', nullval) |
2630 | + if network_val is nullval: |
2631 | + return cfg |
2632 | + bmsg = 'Top level network key in network-config %s: %s' |
2633 | + if not isinstance(network_val, dict): |
2634 | + LOG.debug(bmsg, "was not a dict", cfg) |
2635 | + return cfg |
2636 | + if len(list(cfg.keys())) != 1: |
2637 | + LOG.debug(bmsg, "had multiple top level keys", cfg) |
2638 | + return cfg |
2639 | + if network_val.get('config') == "disabled": |
2640 | + LOG.debug(bmsg, "was config/disabled", cfg) |
2641 | + elif not all(('config' in network_val, 'version' in network_val)): |
2642 | + LOG.debug(bmsg, "but missing 'config' or 'version'", cfg) |
2643 | + return cfg |
2644 | + LOG.debug(bmsg, "fixed by removing shifting network.", cfg) |
2645 | + return network_val |
2646 | + |
2647 | + |
2648 | def _merge_new_seed(cur, seeded): |
2649 | ret = cur.copy() |
2650 | |
2651 | @@ -299,7 +349,8 @@ def _merge_new_seed(cur, seeded): |
2652 | ret['meta-data'] = util.mergemanydict([cur['meta-data'], newmd]) |
2653 | |
2654 | if seeded.get('network-config'): |
2655 | - ret['network-config'] = util.load_yaml(seeded['network-config']) |
2656 | + ret['network-config'] = _maybe_remove_top_network( |
2657 | + util.load_yaml(seeded.get('network-config'))) |
2658 | |
2659 | if 'user-data' in seeded: |
2660 | ret['user-data'] = seeded['user-data'] |
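The `_maybe_remove_top_network` helper added to DataSourceNoCloud above (LP: #1798117) unwraps a redundant top-level `network:` key only when it is safe to do so. A condensed, behavior-equivalent sketch (logging dropped for brevity):

```python
def maybe_remove_top_network(cfg):
    """Unwrap a redundant top-level 'network' key, as the diff's helper does."""
    network_val = cfg.get('network')
    # Only unwrap a lone 'network' dict; anything else is returned untouched.
    if not isinstance(network_val, dict) or len(cfg) != 1:
        return cfg
    if network_val.get('config') == 'disabled':
        return network_val
    if 'config' in network_val and 'version' in network_val:
        return network_val
    # Looks wrapped but lacks 'config' or 'version': leave it alone.
    return cfg
```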
2661 | diff --git a/cloudinit/sources/DataSourceNone.py b/cloudinit/sources/DataSourceNone.py |
2662 | index e63a7e3..e625080 100644 |
2663 | --- a/cloudinit/sources/DataSourceNone.py |
2664 | +++ b/cloudinit/sources/DataSourceNone.py |
2665 | @@ -28,6 +28,10 @@ class DataSourceNone(sources.DataSource): |
2666 | self.metadata = self.ds_cfg['metadata'] |
2667 | return True |
2668 | |
2669 | + def _get_subplatform(self): |
2670 | + """Return the subplatform metadata source details.""" |
2671 | + return 'config' |
2672 | + |
2673 | def get_instance_id(self): |
2674 | return 'iid-datasource-none' |
2675 | |
2676 | diff --git a/cloudinit/sources/DataSourceOVF.py b/cloudinit/sources/DataSourceOVF.py |
2677 | index 178ccb0..3a3fcdf 100644 |
2678 | --- a/cloudinit/sources/DataSourceOVF.py |
2679 | +++ b/cloudinit/sources/DataSourceOVF.py |
2680 | @@ -232,11 +232,11 @@ class DataSourceOVF(sources.DataSource): |
2681 | GuestCustErrorEnum.GUESTCUST_ERROR_SUCCESS) |
2682 | |
2683 | else: |
2684 | - np = {'iso': transport_iso9660, |
2685 | - 'vmware-guestd': transport_vmware_guestd, } |
2686 | + np = [('com.vmware.guestInfo', transport_vmware_guestinfo), |
2687 | + ('iso', transport_iso9660)] |
2688 | name = None |
2689 | - for (name, transfunc) in np.items(): |
2690 | - (contents, _dev, _fname) = transfunc() |
2691 | + for name, transfunc in np: |
2692 | + contents = transfunc() |
2693 | if contents: |
2694 | break |
2695 | if contents: |
2696 | @@ -275,6 +275,12 @@ class DataSourceOVF(sources.DataSource): |
2697 | self.cfg = cfg |
2698 | return True |
2699 | |
2700 | + def _get_subplatform(self): |
2701 | + system_type = util.read_dmi_data("system-product-name").lower() |
2702 | + if system_type == 'vmware': |
2703 | + return 'vmware (%s)' % self.seed |
2704 | + return 'ovf (%s)' % self.seed |
2705 | + |
2706 | def get_public_ssh_keys(self): |
2707 | if 'public-keys' not in self.metadata: |
2708 | return [] |
2709 | @@ -458,8 +464,8 @@ def maybe_cdrom_device(devname): |
2710 | return cdmatch.match(devname) is not None |
2711 | |
2712 | |
2713 | -# Transport functions take no input and return |
2714 | -# a 3 tuple of content, path, filename |
2715 | +# Transport functions are called with no arguments and return |
2716 | +# either None (indicating not present) or string content of an ovf-env.xml |
2717 | def transport_iso9660(require_iso=True): |
2718 | |
2719 | # Go through mounts to see if it was already mounted |
2720 | @@ -471,9 +477,9 @@ def transport_iso9660(require_iso=True): |
2721 | if not maybe_cdrom_device(dev): |
2722 | continue |
2723 | mp = info['mountpoint'] |
2724 | - (fname, contents) = get_ovf_env(mp) |
2725 | + (_fname, contents) = get_ovf_env(mp) |
2726 | if contents is not False: |
2727 | - return (contents, dev, fname) |
2728 | + return contents |
2729 | |
2730 | if require_iso: |
2731 | mtype = "iso9660" |
2732 | @@ -486,29 +492,33 @@ def transport_iso9660(require_iso=True): |
2733 | if maybe_cdrom_device(dev)] |
2734 | for dev in devs: |
2735 | try: |
2736 | - (fname, contents) = util.mount_cb(dev, get_ovf_env, mtype=mtype) |
2737 | + (_fname, contents) = util.mount_cb(dev, get_ovf_env, mtype=mtype) |
2738 | except util.MountFailedError: |
2739 | LOG.debug("%s not mountable as iso9660", dev) |
2740 | continue |
2741 | |
2742 | if contents is not False: |
2743 | - return (contents, dev, fname) |
2744 | - |
2745 | - return (False, None, None) |
2746 | - |
2747 | - |
2748 | -def transport_vmware_guestd(): |
2749 | - # http://blogs.vmware.com/vapp/2009/07/ \ |
2750 | - # selfconfiguration-and-the-ovf-environment.html |
2751 | - # try: |
2752 | - # cmd = ['vmware-guestd', '--cmd', 'info-get guestinfo.ovfEnv'] |
2753 | - # (out, err) = subp(cmd) |
2754 | - # return(out, 'guestinfo.ovfEnv', 'vmware-guestd') |
2755 | - # except: |
2756 | - # # would need to error check here and see why this failed |
2757 | - # # to know if log/error should be raised |
2758 | - # return(False, None, None) |
2759 | - return (False, None, None) |
2760 | + return contents |
2761 | + |
2762 | + return None |
2763 | + |
2764 | + |
2765 | +def transport_vmware_guestinfo(): |
2766 | + rpctool = "vmware-rpctool" |
2767 | + not_found = None |
2768 | + if not util.which(rpctool): |
2769 | + return not_found |
2770 | + cmd = [rpctool, "info-get guestinfo.ovfEnv"] |
2771 | + try: |
2772 | + out, _err = util.subp(cmd) |
2773 | + if out: |
2774 | + return out |
2775 | + LOG.debug("cmd %s exited 0 with empty stdout: %s", cmd, out) |
2776 | + except util.ProcessExecutionError as e: |
2777 | + if e.exit_code != 1: |
2778 | + LOG.warning("%s exited with code %d", rpctool, e.exit_code) |
2779 | + LOG.debug(e) |
2780 | + return not_found |
2781 | |
2782 | |
2783 | def find_child(node, filter_func): |
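On the OVF transport rework above: transport functions now return either `None` (not present) or the string content of an ovf-env.xml, instead of the old 3-tuple, and the datasource walks an ordered list of `(name, callable)` pairs taking the first hit. A sketch of that selection loop (illustrative, not the actual method body):

```python
def first_transport_content(transports):
    """Walk (name, callable) transport pairs in order, as the rewritten
    DataSourceOVF does, returning the first non-empty ovf-env.xml payload."""
    for name, transfunc in transports:
        contents = transfunc()
        if contents:
            return name, contents
    return None, None
```

With the new ordering, `com.vmware.guestInfo` (via `vmware-rpctool`) is tried before the iso9660 CD-ROM transport.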
2784 | diff --git a/cloudinit/sources/DataSourceOpenNebula.py b/cloudinit/sources/DataSourceOpenNebula.py |
2785 | index 77ccd12..6e1d04b 100644 |
2786 | --- a/cloudinit/sources/DataSourceOpenNebula.py |
2787 | +++ b/cloudinit/sources/DataSourceOpenNebula.py |
2788 | @@ -95,6 +95,14 @@ class DataSourceOpenNebula(sources.DataSource): |
2789 | self.userdata_raw = results.get('userdata') |
2790 | return True |
2791 | |
2792 | + def _get_subplatform(self): |
2793 | + """Return the subplatform metadata source details.""" |
2794 | + if self.seed_dir in self.seed: |
2795 | + subplatform_type = 'seed-dir' |
2796 | + else: |
2797 | + subplatform_type = 'config-disk' |
2798 | + return '%s (%s)' % (subplatform_type, self.seed) |
2799 | + |
2800 | @property |
2801 | def network_config(self): |
2802 | if self.network is not None: |
2803 | @@ -329,7 +337,7 @@ def parse_shell_config(content, keylist=None, bash=None, asuser=None, |
2804 | (output, _error) = util.subp(cmd, data=bcmd) |
2805 | |
2806 | # exclude vars in bash that change on their own or that we used |
2807 | - excluded = ("RANDOM", "LINENO", "SECONDS", "_", "__v") |
2808 | + excluded = ("EPOCHREALTIME", "RANDOM", "LINENO", "SECONDS", "_", "__v") |
2809 | preset = {} |
2810 | ret = {} |
2811 | target = None |
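`parse_shell_config` works by diffing the shell's variables before and after sourcing the OpenNebula context, so any variable bash mutates on its own must be excluded; the hunk above adds `EPOCHREALTIME`, which bash 5 updates continuously. A minimal sketch of that diffing idea (simplified to plain dicts, not the real implementation):

```python
# Variables bash changes on its own; a before/after comparison must
# skip them or they show up as spurious context values.
EXCLUDED = ("EPOCHREALTIME", "RANDOM", "LINENO", "SECONDS", "_", "__v")


def diff_shell_vars(before, after):
    """Return variables added or changed by the sourced script,
    ignoring bash's self-changing variables."""
    changed = {}
    for key, val in after.items():
        if key in EXCLUDED:
            continue
        if before.get(key) != val:
            changed[key] = val
    return changed
```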
2812 | diff --git a/cloudinit/sources/DataSourceOracle.py b/cloudinit/sources/DataSourceOracle.py |
2813 | index fab39af..70b9c58 100644 |
2814 | --- a/cloudinit/sources/DataSourceOracle.py |
2815 | +++ b/cloudinit/sources/DataSourceOracle.py |
2816 | @@ -91,6 +91,10 @@ class DataSourceOracle(sources.DataSource): |
2817 | def crawl_metadata(self): |
2818 | return read_metadata() |
2819 | |
2820 | + def _get_subplatform(self): |
2821 | + """Return the subplatform metadata source details.""" |
2822 | + return 'metadata (%s)' % METADATA_ENDPOINT |
2823 | + |
2824 | def check_instance_id(self, sys_cfg): |
2825 | """quickly check (local only) if self.instance_id is still valid |
2826 | |
2827 | diff --git a/cloudinit/sources/DataSourceScaleway.py b/cloudinit/sources/DataSourceScaleway.py |
2828 | index 9dc4ab2..b573b38 100644 |
2829 | --- a/cloudinit/sources/DataSourceScaleway.py |
2830 | +++ b/cloudinit/sources/DataSourceScaleway.py |
2831 | @@ -253,7 +253,16 @@ class DataSourceScaleway(sources.DataSource): |
2832 | return self.metadata['id'] |
2833 | |
2834 | def get_public_ssh_keys(self): |
2835 | - return [key['key'] for key in self.metadata['ssh_public_keys']] |
2836 | + ssh_keys = [key['key'] for key in self.metadata['ssh_public_keys']] |
2837 | + |
2838 | + akeypre = "AUTHORIZED_KEY=" |
2839 | + plen = len(akeypre) |
2840 | + for tag in self.metadata.get('tags', []): |
2841 | + if not tag.startswith(akeypre): |
2842 | + continue |
2843 | + ssh_keys.append(tag[plen:].replace("_", " ")) |
2844 | + |
2845 | + return ssh_keys |
2846 | |
2847 | def get_hostname(self, fqdn=False, resolve_ip=False, metadata_only=False): |
2848 | return self.metadata['hostname'] |
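The Scaleway hunk also accepts SSH keys delivered as instance tags of the form `AUTHORIZED_KEY=<key>`, where underscores stand in for spaces (tag values cannot contain them). A standalone sketch of the intended extraction, taking the portion after the prefix:

```python
AKEY_PREFIX = "AUTHORIZED_KEY="


def keys_from_tags(tags):
    """Extract ssh keys from AUTHORIZED_KEY=... tags; underscores in
    the tag value encode spaces."""
    keys = []
    for tag in tags:
        if not tag.startswith(AKEY_PREFIX):
            continue
        # Take the value after the prefix and restore the spaces.
        keys.append(tag[len(AKEY_PREFIX):].replace("_", " "))
    return keys
```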
2849 | diff --git a/cloudinit/sources/DataSourceSmartOS.py b/cloudinit/sources/DataSourceSmartOS.py |
2850 | index 593ac91..32b57cd 100644 |
2851 | --- a/cloudinit/sources/DataSourceSmartOS.py |
2852 | +++ b/cloudinit/sources/DataSourceSmartOS.py |
2853 | @@ -303,6 +303,9 @@ class DataSourceSmartOS(sources.DataSource): |
2854 | self._set_provisioned() |
2855 | return True |
2856 | |
2857 | + def _get_subplatform(self): |
2858 | + return 'serial (%s)' % SERIAL_DEVICE |
2859 | + |
2860 | def device_name_to_device(self, name): |
2861 | return self.ds_cfg['disk_aliases'].get(name) |
2862 | |
2863 | diff --git a/cloudinit/sources/__init__.py b/cloudinit/sources/__init__.py |
2864 | index 5ac9882..e6966b3 100644 |
2865 | --- a/cloudinit/sources/__init__.py |
2866 | +++ b/cloudinit/sources/__init__.py |
2867 | @@ -54,9 +54,18 @@ REDACT_SENSITIVE_VALUE = 'redacted for non-root user' |
2868 | METADATA_CLOUD_NAME_KEY = 'cloud-name' |
2869 | |
2870 | UNSET = "_unset" |
2871 | +METADATA_UNKNOWN = 'unknown' |
2872 | |
2873 | LOG = logging.getLogger(__name__) |
2874 | |
2875 | +# CLOUD_ID_REGION_PREFIX_MAP format is: |
2876 | +# <region-match-prefix>: (<new-cloud-id>: <test_allowed_cloud_callable>) |
2877 | +CLOUD_ID_REGION_PREFIX_MAP = { |
2878 | + 'cn-': ('aws-china', lambda c: c == 'aws'), # only change aws regions |
2879 | + 'us-gov-': ('aws-gov', lambda c: c == 'aws'), # only change aws regions |
2880 | + 'china': ('azure-china', lambda c: c == 'azure'), # only change azure |
2881 | +} |
2882 | + |
2883 | |
2884 | class DataSourceNotFoundException(Exception): |
2885 | pass |
2886 | @@ -133,6 +142,14 @@ class DataSource(object): |
2887 | # Cached cloud_name as determined by _get_cloud_name |
2888 | _cloud_name = None |
2889 | |
2890 | + # Cached cloud platform api type: e.g. ec2, openstack, kvm, lxd, azure etc. |
2891 | + _platform_type = None |
2892 | + |
2893 | + # More details about the cloud platform: |
2894 | + # - metadata (http://169.254.169.254/) |
2895 | + # - seed-dir (<dirname>) |
2896 | + _subplatform = None |
2897 | + |
2898 | # Track the discovered fallback nic for use in configuration generation. |
2899 | _fallback_interface = None |
2900 | |
2901 | @@ -192,21 +209,24 @@ class DataSource(object): |
2902 | local_hostname = self.get_hostname() |
2903 | instance_id = self.get_instance_id() |
2904 | availability_zone = self.availability_zone |
2905 | - cloud_name = self.cloud_name |
2906 | - # When adding new standard keys prefer underscore-delimited instead |
2907 | - # of hyphen-delimted to support simple variable references in jinja |
2908 | - # templates. |
2909 | + # In the event of upgrade from existing cloudinit, pickled datasource |
2910 | + # will not contain these new class attributes. So we need to recrawl |
2911 | + # metadata to discover that content. |
2912 | return { |
2913 | 'v1': { |
2914 | + '_beta_keys': ['subplatform'], |
2915 | 'availability-zone': availability_zone, |
2916 | 'availability_zone': availability_zone, |
2917 | - 'cloud-name': cloud_name, |
2918 | - 'cloud_name': cloud_name, |
2919 | + 'cloud-name': self.cloud_name, |
2920 | + 'cloud_name': self.cloud_name, |
2921 | + 'platform': self.platform_type, |
2922 | + 'public_ssh_keys': self.get_public_ssh_keys(), |
2923 | 'instance-id': instance_id, |
2924 | 'instance_id': instance_id, |
2925 | 'local-hostname': local_hostname, |
2926 | 'local_hostname': local_hostname, |
2927 | - 'region': self.region}} |
2928 | + 'region': self.region, |
2929 | + 'subplatform': self.subplatform}} |
2930 | |
2931 | def clear_cached_attrs(self, attr_defaults=()): |
2932 | """Reset any cached metadata attributes to datasource defaults. |
2933 | @@ -247,19 +267,27 @@ class DataSource(object): |
2934 | |
2935 | @return True on successful write, False otherwise. |
2936 | """ |
2937 | - instance_data = { |
2938 | - 'ds': {'_doc': EXPERIMENTAL_TEXT, |
2939 | - 'meta_data': self.metadata}} |
2940 | - if hasattr(self, 'network_json'): |
2941 | - network_json = getattr(self, 'network_json') |
2942 | - if network_json != UNSET: |
2943 | - instance_data['ds']['network_json'] = network_json |
2944 | - if hasattr(self, 'ec2_metadata'): |
2945 | - ec2_metadata = getattr(self, 'ec2_metadata') |
2946 | - if ec2_metadata != UNSET: |
2947 | - instance_data['ds']['ec2_metadata'] = ec2_metadata |
2948 | + if hasattr(self, '_crawled_metadata'): |
2949 | + # Any datasource with _crawled_metadata will best represent |
2950 | + # most recent, 'raw' metadata |
2951 | + crawled_metadata = copy.deepcopy( |
2952 | + getattr(self, '_crawled_metadata')) |
2953 | + crawled_metadata.pop('user-data', None) |
2954 | + crawled_metadata.pop('vendor-data', None) |
2955 | + instance_data = {'ds': crawled_metadata} |
2956 | + else: |
2957 | + instance_data = {'ds': {'meta_data': self.metadata}} |
2958 | + if hasattr(self, 'network_json'): |
2959 | + network_json = getattr(self, 'network_json') |
2960 | + if network_json != UNSET: |
2961 | + instance_data['ds']['network_json'] = network_json |
2962 | + if hasattr(self, 'ec2_metadata'): |
2963 | + ec2_metadata = getattr(self, 'ec2_metadata') |
2964 | + if ec2_metadata != UNSET: |
2965 | + instance_data['ds']['ec2_metadata'] = ec2_metadata |
2966 | instance_data.update( |
2967 | self._get_standardized_metadata()) |
2968 | + instance_data['ds']['_doc'] = EXPERIMENTAL_TEXT |
2969 | try: |
2970 | # Process content base64encoding unserializable values |
2971 | content = util.json_dumps(instance_data) |
2972 | @@ -347,6 +375,40 @@ class DataSource(object): |
2973 | return self._fallback_interface |
2974 | |
2975 | @property |
2976 | + def platform_type(self): |
2977 | + if not hasattr(self, '_platform_type'): |
2978 | + # Handle upgrade path where pickled datasource has no _platform. |
2979 | + self._platform_type = self.dsname.lower() |
2980 | + if not self._platform_type: |
2981 | + self._platform_type = self.dsname.lower() |
2982 | + return self._platform_type |
2983 | + |
2984 | + @property |
2985 | + def subplatform(self): |
2986 | + """Return a string representing subplatform details for the datasource. |
2987 | + |
2988 | + This should be guidance for where the metadata is sourced. |
2989 | + Examples of this on different clouds: |
2990 | + ec2: metadata (http://169.254.169.254) |
2991 | + openstack: configdrive (/dev/path) |
2992 | + openstack: metadata (http://169.254.169.254) |
2993 | + nocloud: seed-dir (/seed/dir/path) |
2994 | + lxd: nocloud (/seed/dir/path) |
2995 | + """ |
2996 | + if not hasattr(self, '_subplatform'): |
2997 | + # Handle upgrade path where pickled datasource has no _platform. |
2998 | + self._subplatform = self._get_subplatform() |
2999 | + if not self._subplatform: |
3000 | + self._subplatform = self._get_subplatform() |
3001 | + return self._subplatform |
3002 | + |
3003 | + def _get_subplatform(self): |
3004 | + """Subclasses should implement to return a "slug (detail)" string.""" |
3005 | + if hasattr(self, 'metadata_address'): |
3006 | + return 'metadata (%s)' % getattr(self, 'metadata_address') |
3007 | + return METADATA_UNKNOWN |
3008 | + |
3009 | + @property |
3010 | def cloud_name(self): |
3011 | """Return lowercase cloud name as determined by the datasource. |
3012 | |
3013 | @@ -359,9 +421,11 @@ class DataSource(object): |
3014 | cloud_name = self.metadata.get(METADATA_CLOUD_NAME_KEY) |
3015 | if isinstance(cloud_name, six.string_types): |
3016 | self._cloud_name = cloud_name.lower() |
3017 | - LOG.debug( |
3018 | - 'Ignoring metadata provided key %s: non-string type %s', |
3019 | - METADATA_CLOUD_NAME_KEY, type(cloud_name)) |
3020 | + else: |
3021 | + self._cloud_name = self._get_cloud_name().lower() |
3022 | + LOG.debug( |
3023 | + 'Ignoring metadata provided key %s: non-string type %s', |
3024 | + METADATA_CLOUD_NAME_KEY, type(cloud_name)) |
3025 | else: |
3026 | self._cloud_name = self._get_cloud_name().lower() |
3027 | return self._cloud_name |
3028 | @@ -714,6 +778,25 @@ def instance_id_matches_system_uuid(instance_id, field='system-uuid'): |
3029 | return instance_id.lower() == dmi_value.lower() |
3030 | |
3031 | |
3032 | +def canonical_cloud_id(cloud_name, region, platform): |
3033 | + """Lookup the canonical cloud-id for a given cloud_name and region.""" |
3034 | + if not cloud_name: |
3035 | + cloud_name = METADATA_UNKNOWN |
3036 | + if not region: |
3037 | + region = METADATA_UNKNOWN |
3038 | + if region == METADATA_UNKNOWN: |
3039 | + if cloud_name != METADATA_UNKNOWN: |
3040 | + return cloud_name |
3041 | + return platform |
3042 | + for prefix, cloud_id_test in CLOUD_ID_REGION_PREFIX_MAP.items(): |
3043 | + (cloud_id, valid_cloud) = cloud_id_test |
3044 | + if region.startswith(prefix) and valid_cloud(cloud_name): |
3045 | + return cloud_id |
3046 | + if cloud_name != METADATA_UNKNOWN: |
3047 | + return cloud_name |
3048 | + return platform |
3049 | + |
3050 | + |
3051 | def convert_vendordata(data, recurse=True): |
3052 | """data: a loaded object (strings, arrays, dicts). |
3053 | return something suitable for cloudinit vendordata_raw. |
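The new `canonical_cloud_id` maps a `(cloud_name, region, platform)` triple onto a stable id, special-casing partitioned regions via `CLOUD_ID_REGION_PREFIX_MAP` and falling back to the platform when the cloud name is unknown. A self-contained sketch mirroring that logic:

```python
UNKNOWN = 'unknown'

# <region-match-prefix>: (<canonical-id>, <predicate on cloud_name>)
REGION_PREFIX_MAP = {
    'cn-': ('aws-china', lambda c: c == 'aws'),
    'us-gov-': ('aws-gov', lambda c: c == 'aws'),
    'china': ('azure-china', lambda c: c == 'azure'),
}


def canonical_cloud_id(cloud_name, region, platform):
    """Resolve the canonical cloud-id for a cloud_name and region."""
    cloud_name = cloud_name or UNKNOWN
    region = region or UNKNOWN
    if region == UNKNOWN:
        return cloud_name if cloud_name != UNKNOWN else platform
    for prefix, (cloud_id, valid_cloud) in REGION_PREFIX_MAP.items():
        # Prefix match alone is not enough: 'cn-' only reclassifies aws.
        if region.startswith(prefix) and valid_cloud(cloud_name):
            return cloud_id
    return cloud_name if cloud_name != UNKNOWN else platform
```

This is what the new `cloud-id` command surfaces, e.g. turning an EC2 datasource in `cn-north-1` into `aws-china`.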
3054 | diff --git a/cloudinit/sources/helpers/netlink.py b/cloudinit/sources/helpers/netlink.py |
3055 | new file mode 100644 |
3056 | index 0000000..d377ae3 |
3057 | --- /dev/null |
3058 | +++ b/cloudinit/sources/helpers/netlink.py |
3059 | @@ -0,0 +1,250 @@ |
3060 | +# Author: Tamilmani Manoharan <tamanoha@microsoft.com> |
3061 | +# |
3062 | +# This file is part of cloud-init. See LICENSE file for license information. |
3063 | + |
3064 | +from cloudinit import log as logging |
3065 | +from cloudinit import util |
3066 | +from collections import namedtuple |
3067 | + |
3068 | +import os |
3069 | +import select |
3070 | +import socket |
3071 | +import struct |
3072 | + |
3073 | +LOG = logging.getLogger(__name__) |
3074 | + |
3075 | +# http://man7.org/linux/man-pages/man7/netlink.7.html |
3076 | +RTMGRP_LINK = 1 |
3077 | +NLMSG_NOOP = 1 |
3078 | +NLMSG_ERROR = 2 |
3079 | +NLMSG_DONE = 3 |
3080 | +RTM_NEWLINK = 16 |
3081 | +RTM_DELLINK = 17 |
3082 | +RTM_GETLINK = 18 |
3083 | +RTM_SETLINK = 19 |
3084 | +MAX_SIZE = 65535 |
3085 | +RTA_DATA_OFFSET = 32 |
3086 | +MSG_TYPE_OFFSET = 16 |
3087 | +SELECT_TIMEOUT = 60 |
3088 | + |
3089 | +NLMSGHDR_FMT = "IHHII" |
3090 | +IFINFOMSG_FMT = "BHiII" |
3091 | +NLMSGHDR_SIZE = struct.calcsize(NLMSGHDR_FMT) |
3092 | +IFINFOMSG_SIZE = struct.calcsize(IFINFOMSG_FMT) |
3093 | +RTATTR_START_OFFSET = NLMSGHDR_SIZE + IFINFOMSG_SIZE |
3094 | +RTA_DATA_START_OFFSET = 4 |
3095 | +PAD_ALIGNMENT = 4 |
3096 | + |
3097 | +IFLA_IFNAME = 3 |
3098 | +IFLA_OPERSTATE = 16 |
3099 | + |
3100 | +# https://www.kernel.org/doc/Documentation/networking/operstates.txt |
3101 | +OPER_UNKNOWN = 0 |
3102 | +OPER_NOTPRESENT = 1 |
3103 | +OPER_DOWN = 2 |
3104 | +OPER_LOWERLAYERDOWN = 3 |
3105 | +OPER_TESTING = 4 |
3106 | +OPER_DORMANT = 5 |
3107 | +OPER_UP = 6 |
3108 | + |
3109 | +RTAAttr = namedtuple('RTAAttr', ['length', 'rta_type', 'data']) |
3110 | +InterfaceOperstate = namedtuple('InterfaceOperstate', ['ifname', 'operstate']) |
3111 | +NetlinkHeader = namedtuple('NetlinkHeader', ['length', 'type', 'flags', 'seq', |
3112 | + 'pid']) |
3113 | + |
3114 | + |
3115 | +class NetlinkCreateSocketError(RuntimeError): |
3116 | + '''Raised if netlink socket fails during create or bind.''' |
3117 | + pass |
3118 | + |
3119 | + |
3120 | +def create_bound_netlink_socket(): |
3121 | + '''Creates a netlink socket and binds it on a netlink group to catch |
3122 | + interface down/up events. The socket is bound only on RTMGRP_LINK (which only |
3123 | + includes RTM_NEWLINK/RTM_DELLINK/RTM_GETLINK events). The socket is set to |
3124 | + non-blocking mode since we're only receiving messages. |
3125 | + |
3126 | + :returns: netlink socket in non-blocking mode |
3127 | + :raises: NetlinkCreateSocketError |
3128 | + ''' |
3129 | + try: |
3130 | + netlink_socket = socket.socket(socket.AF_NETLINK, |
3131 | + socket.SOCK_RAW, |
3132 | + socket.NETLINK_ROUTE) |
3133 | + netlink_socket.bind((os.getpid(), RTMGRP_LINK)) |
3134 | + netlink_socket.setblocking(0) |
3135 | + except socket.error as e: |
3136 | + msg = "Exception during netlink socket create: %s" % e |
3137 | + raise NetlinkCreateSocketError(msg) |
3138 | + LOG.debug("Created netlink socket") |
3139 | + return netlink_socket |
3140 | + |
3141 | + |
3142 | +def get_netlink_msg_header(data): |
3143 | + '''Gets netlink message type and length |
3144 | + |
3145 | + :param: data read from netlink socket |
3146 | + :returns: netlink message type |
3147 | + :raises: AssertionError if data is None or data is not >= NLMSGHDR_SIZE |
3148 | + struct nlmsghdr { |
3149 | + __u32 nlmsg_len; /* Length of message including header */ |
3150 | + __u16 nlmsg_type; /* Type of message content */ |
3151 | + __u16 nlmsg_flags; /* Additional flags */ |
3152 | + __u32 nlmsg_seq; /* Sequence number */ |
3153 | + __u32 nlmsg_pid; /* Sender port ID */ |
3154 | + }; |
3155 | + ''' |
3156 | + assert (data is not None), ("data is none") |
3157 | + assert (len(data) >= NLMSGHDR_SIZE), ( |
3158 | + "data is smaller than netlink message header") |
3159 | + msg_len, msg_type, flags, seq, pid = struct.unpack(NLMSGHDR_FMT, |
3160 | + data[:MSG_TYPE_OFFSET]) |
3161 | + LOG.debug("Got netlink msg of type %d", msg_type) |
3162 | + return NetlinkHeader(msg_len, msg_type, flags, seq, pid) |
3163 | + |
3164 | + |
3165 | +def read_netlink_socket(netlink_socket, timeout=None): |
3166 | + '''Select and read from the netlink socket if ready. |
3167 | + |
3168 | + :param: netlink_socket: specify which socket object to read from |
3169 | + :param: timeout: specify a timeout value (integer) to wait while reading, |
3170 | + if none, it will block indefinitely until socket ready for read |
3171 | + :returns: string of data read (max length = <MAX_SIZE>) from socket, |
3172 | + if no data read, returns None |
3173 | + :raises: AssertionError if netlink_socket is None |
3174 | + ''' |
3175 | + assert (netlink_socket is not None), ("netlink socket is none") |
3176 | + read_set, _, _ = select.select([netlink_socket], [], [], timeout) |
3177 | + # In case of timeout, read_set doesn't contain the netlink socket; |
3178 | + # just return from this function. |
3179 | + if netlink_socket not in read_set: |
3180 | + return None |
3181 | + LOG.debug("netlink socket ready for read") |
3182 | + data = netlink_socket.recv(MAX_SIZE) |
3183 | + if data is None: |
3184 | + LOG.error("Reading from Netlink socket returned no data") |
3185 | + return data |
3186 | + |
3187 | + |
3188 | +def unpack_rta_attr(data, offset): |
3189 | + '''Unpack a single rta attribute. |
3190 | + |
3191 | + :param: data: string of data read from netlink socket |
3192 | + :param: offset: starting offset of RTA Attribute |
3193 | + :return: RTAAttr object with length, type and data. On error, return None. |
3194 | + :raises: AssertionError if data is None or offset is not integer. |
3195 | + ''' |
3196 | + assert (data is not None), ("data is none") |
3197 | + assert (type(offset) == int), ("offset is not integer") |
3198 | + assert (offset >= RTATTR_START_OFFSET), ( |
3199 | + "rta offset is less than expected length") |
3200 | + length = rta_type = 0 |
3201 | + attr_data = None |
3202 | + try: |
3203 | + length = struct.unpack_from("H", data, offset=offset)[0] |
3204 | + rta_type = struct.unpack_from("H", data, offset=offset+2)[0] |
3205 | + except struct.error: |
3206 | + return None # Should mean our offset is >= remaining data |
3207 | + |
3208 | + # Unpack just the attribute's data. Offset by 4 to skip length/type header |
3209 | + attr_data = data[offset+RTA_DATA_START_OFFSET:offset+length] |
3210 | + return RTAAttr(length, rta_type, attr_data) |
3211 | + |
3212 | + |
3213 | +def read_rta_oper_state(data): |
3214 | + '''Reads Interface name and operational state from RTA Data. |
3215 | + |
3216 | + :param: data: string of data read from netlink socket |
3217 | + :returns: InterfaceOperstate object containing if_name and oper_state. |
3218 | + None if data does not contain valid IFLA_OPERSTATE and |
3219 | + IFLA_IFNAME messages. |
3220 | + :raises: AssertionError if data is None or length of data is |
3221 | + smaller than RTATTR_START_OFFSET. |
3222 | + ''' |
3223 | + assert (data is not None), ("data is none") |
3224 | + assert (len(data) > RTATTR_START_OFFSET), ( |
3225 | + "length of data is smaller than RTATTR_START_OFFSET") |
3226 | + ifname = operstate = None |
3227 | + offset = RTATTR_START_OFFSET |
3228 | + while offset <= len(data): |
3229 | + attr = unpack_rta_attr(data, offset) |
3230 | + if not attr or attr.length == 0: |
3231 | + break |
3232 | + # Each attribute is 4-byte aligned. Determine pad length. |
3233 | + padlen = (PAD_ALIGNMENT - |
3234 | + (attr.length % PAD_ALIGNMENT)) % PAD_ALIGNMENT |
3235 | + offset += attr.length + padlen |
3236 | + |
3237 | + if attr.rta_type == IFLA_OPERSTATE: |
3238 | + operstate = ord(attr.data) |
3239 | + elif attr.rta_type == IFLA_IFNAME: |
3240 | + interface_name = util.decode_binary(attr.data, 'utf-8') |
3241 | + ifname = interface_name.strip('\0') |
3242 | + if not ifname or operstate is None: |
3243 | + return None |
3244 | + LOG.debug("rta attrs: ifname %s operstate %d", ifname, operstate) |
3245 | + return InterfaceOperstate(ifname, operstate) |
3246 | + |
3247 | + |
3248 | +def wait_for_media_disconnect_connect(netlink_socket, ifname): |
3249 | + '''Block until media disconnect and connect has happened on an interface. |
3250 | + Listens on the netlink socket to receive netlink events and, when the |
3251 | + carrier changes from 0 to 1, considers the event to have happened |
3252 | + and returns from this function. |
3253 | + |
3254 | + :param: netlink_socket: netlink_socket to receive events |
3255 | + :param: ifname: Interface name to look out for in netlink events |
3256 | + :raises: AssertionError if netlink_socket is None or ifname is None. |
3257 | + ''' |
3258 | + assert (netlink_socket is not None), ("netlink socket is none") |
3259 | + assert (ifname is not None), ("interface name is none") |
3260 | + assert (len(ifname) > 0), ("interface name cannot be empty") |
3261 | + carrier = OPER_UP |
3262 | + prevCarrier = OPER_UP |
3263 | + data = bytes() |
3264 | + LOG.debug("Wait for media disconnect and reconnect to happen") |
3265 | + while True: |
3266 | + recv_data = read_netlink_socket(netlink_socket, SELECT_TIMEOUT) |
3267 | + if recv_data is None: |
3268 | + continue |
3269 | + LOG.debug('read %d bytes from socket', len(recv_data)) |
3270 | + data += recv_data |
3271 | + LOG.debug('Length of data after concat %d', len(data)) |
3272 | + offset = 0 |
3273 | + datalen = len(data) |
3274 | + while offset < datalen: |
3275 | + nl_msg = data[offset:] |
3276 | + if len(nl_msg) < NLMSGHDR_SIZE: |
3277 | + LOG.debug("Data is smaller than netlink header") |
3278 | + break |
3279 | + nlheader = get_netlink_msg_header(nl_msg) |
3280 | + if len(nl_msg) < nlheader.length: |
3281 | + LOG.debug("Partial data. Smaller than netlink message") |
3282 | + break |
3283 | + padlen = (nlheader.length+PAD_ALIGNMENT-1) & ~(PAD_ALIGNMENT-1) |
3284 | + offset = offset + padlen |
3285 | + LOG.debug('offset to next netlink message: %d', offset) |
3286 | + # Ignore any messages not new link or del link |
3287 | + if nlheader.type not in [RTM_NEWLINK, RTM_DELLINK]: |
3288 | + continue |
3289 | + interface_state = read_rta_oper_state(nl_msg) |
3290 | + if interface_state is None: |
3291 | + LOG.debug('Failed to read rta attributes: %s', interface_state) |
3292 | + continue |
3293 | + if interface_state.ifname != ifname: |
3294 | + LOG.debug( |
3295 | + "Ignored netlink event on interface %s. Waiting for %s.", |
3296 | + interface_state.ifname, ifname) |
3297 | + continue |
3298 | + if interface_state.operstate not in [OPER_UP, OPER_DOWN]: |
3299 | + continue |
3300 | + prevCarrier = carrier |
3301 | + carrier = interface_state.operstate |
3302 | + # check for carrier down, up sequence |
3303 | + isVnetSwitch = (prevCarrier == OPER_DOWN) and (carrier == OPER_UP) |
3304 | + if isVnetSwitch: |
3305 | + LOG.debug("Media switch happened on %s.", ifname) |
3306 | + return |
3307 | + data = data[offset:] |
3308 | + |
3309 | +# vi: ts=4 expandtab |
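The helper above parses fixed-size `struct nlmsghdr` headers (format `"IHHII"`, 16 bytes) and then walks RTA attributes, each padded to a 4-byte boundary. A small sketch of the packing and the alignment arithmetic the module relies on:

```python
import struct

NLMSGHDR_FMT = "IHHII"  # nlmsg_len, nlmsg_type, nlmsg_flags, seq, pid
NLMSGHDR_SIZE = struct.calcsize(NLMSGHDR_FMT)  # 16 bytes
PAD_ALIGNMENT = 4


def parse_nlmsghdr(data):
    """Unpack the fixed-size netlink message header."""
    return struct.unpack(NLMSGHDR_FMT, data[:NLMSGHDR_SIZE])


def aligned(length):
    """Round an attribute length up to the 4-byte RTA alignment."""
    return (length + PAD_ALIGNMENT - 1) & ~(PAD_ALIGNMENT - 1)


# Pack a fake RTM_NEWLINK (type 16) header and read it back.
raw = struct.pack(NLMSGHDR_FMT, 48, 16, 0, 1, 1234)
```

The same rounding is what `wait_for_media_disconnect_connect` uses to step `offset` from one netlink message to the next in a concatenated read buffer.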
3310 | diff --git a/cloudinit/sources/helpers/tests/test_netlink.py b/cloudinit/sources/helpers/tests/test_netlink.py |
3311 | new file mode 100644 |
3312 | index 0000000..c2898a1 |
3313 | --- /dev/null |
3314 | +++ b/cloudinit/sources/helpers/tests/test_netlink.py |
3315 | @@ -0,0 +1,373 @@ |
3316 | +# Author: Tamilmani Manoharan <tamanoha@microsoft.com> |
3317 | +# |
3318 | +# This file is part of cloud-init. See LICENSE file for license information. |
3319 | + |
3320 | +from cloudinit.tests.helpers import CiTestCase, mock |
3321 | +import socket |
3322 | +import struct |
3323 | +import codecs |
3324 | +from cloudinit.sources.helpers.netlink import ( |
3325 | + NetlinkCreateSocketError, create_bound_netlink_socket, read_netlink_socket, |
3326 | + read_rta_oper_state, unpack_rta_attr, wait_for_media_disconnect_connect, |
3327 | + OPER_DOWN, OPER_UP, OPER_DORMANT, OPER_LOWERLAYERDOWN, OPER_NOTPRESENT, |
3328 | + OPER_TESTING, OPER_UNKNOWN, RTATTR_START_OFFSET, RTM_NEWLINK, RTM_SETLINK, |
3329 | + RTM_GETLINK, MAX_SIZE) |
3330 | + |
3331 | + |
3332 | +def int_to_bytes(i): |
3333 | + '''convert integer to binary: eg: 1 to \x01''' |
3334 | + hex_value = '{0:x}'.format(i) |
3335 | + hex_value = '0' * (len(hex_value) % 2) + hex_value |
3336 | + return codecs.decode(hex_value, 'hex_codec') |
3337 | + |
3338 | + |
3339 | +class TestCreateBoundNetlinkSocket(CiTestCase): |
3340 | + |
3341 | + @mock.patch('cloudinit.sources.helpers.netlink.socket.socket') |
3342 | + def test_socket_error_on_create(self, m_socket): |
3343 | + '''create_bound_netlink_socket catches socket creation exception''' |
3344 | + |
3345 | + """NetlinkCreateSocketError is raised when socket creation errors.""" |
3346 | + m_socket.side_effect = socket.error("Fake socket failure") |
3347 | + with self.assertRaises(NetlinkCreateSocketError) as ctx_mgr: |
3348 | + create_bound_netlink_socket() |
3349 | + self.assertEqual( |
3350 | + 'Exception during netlink socket create: Fake socket failure', |
3351 | + str(ctx_mgr.exception)) |
3352 | + |
3353 | + |
3354 | +class TestReadNetlinkSocket(CiTestCase): |
3355 | + |
3356 | + @mock.patch('cloudinit.sources.helpers.netlink.socket.socket') |
3357 | + @mock.patch('cloudinit.sources.helpers.netlink.select.select') |
3358 | + def test_read_netlink_socket(self, m_select, m_socket): |
3359 | + '''read_netlink_socket able to receive data''' |
3360 | + data = 'netlinktest' |
3361 | + m_select.return_value = [m_socket], None, None |
3362 | + m_socket.recv.return_value = data |
3363 | + recv_data = read_netlink_socket(m_socket, 2) |
3364 | + m_select.assert_called_with([m_socket], [], [], 2) |
3365 | + m_socket.recv.assert_called_with(MAX_SIZE) |
3366 | + self.assertIsNotNone(recv_data) |
3367 | + self.assertEqual(recv_data, data) |
3368 | + |
3369 | + @mock.patch('cloudinit.sources.helpers.netlink.socket.socket') |
3370 | + @mock.patch('cloudinit.sources.helpers.netlink.select.select') |
3371 | + def test_netlink_read_timeout(self, m_select, m_socket): |
3372 | + '''read_netlink_socket should timeout if nothing to read''' |
3373 | + m_select.return_value = [], None, None |
3374 | + data = read_netlink_socket(m_socket, 1) |
3375 | + m_select.assert_called_with([m_socket], [], [], 1) |
3376 | + self.assertEqual(m_socket.recv.call_count, 0) |
3377 | + self.assertIsNone(data) |
3378 | + |
3379 | + def test_read_invalid_socket(self): |
3380 | + '''read_netlink_socket raises assert error if socket is invalid''' |
3381 | + socket = None |
3382 | + with self.assertRaises(AssertionError) as context: |
3383 | + read_netlink_socket(socket, 1) |
3384 | + self.assertTrue('netlink socket is none' in str(context.exception)) |
3385 | + |
3386 | + |
3387 | +class TestParseNetlinkMessage(CiTestCase): |
3388 | + |
3389 | + def test_read_rta_oper_state(self): |
3390 | + '''read_rta_oper_state could parse netlink message and extract data''' |
3391 | + ifname = "eth0" |
3392 | + bytes = ifname.encode("utf-8") |
3393 | + buf = bytearray(48) |
3394 | + struct.pack_into("HH4sHHc", buf, RTATTR_START_OFFSET, 8, 3, bytes, 5, |
3395 | + 16, int_to_bytes(OPER_DOWN)) |
3396 | + interface_state = read_rta_oper_state(buf) |
3397 | + self.assertEqual(interface_state.ifname, ifname) |
3398 | + self.assertEqual(interface_state.operstate, OPER_DOWN) |
3399 | + |
3400 | + def test_read_none_data(self): |
3401 | + '''read_rta_oper_state raises assert error if data is none''' |
3402 | + data = None |
3403 | + with self.assertRaises(AssertionError) as context: |
3404 | + read_rta_oper_state(data) |
3405 | + self.assertTrue('data is none', str(context.exception)) |
3406 | + |
3407 | + def test_read_invalid_rta_operstate_none(self): |
3408 | + '''read_rta_oper_state returns none if operstate is none''' |
3409 | + ifname = "eth0" |
3410 | + buf = bytearray(40) |
3411 | + bytes = ifname.encode("utf-8") |
3412 | + struct.pack_into("HH4s", buf, RTATTR_START_OFFSET, 8, 3, bytes) |
3413 | + interface_state = read_rta_oper_state(buf) |
3414 | + self.assertIsNone(interface_state) |
3415 | + |
3416 | + def test_read_invalid_rta_ifname_none(self): |
3417 | + '''read_rta_oper_state returns none if ifname is none''' |
3418 | + buf = bytearray(40) |
3419 | + struct.pack_into("HHc", buf, RTATTR_START_OFFSET, 5, 16, |
3420 | + int_to_bytes(OPER_DOWN)) |
3421 | + interface_state = read_rta_oper_state(buf) |
3422 | + self.assertIsNone(interface_state) |
3423 | + |
3424 | + def test_read_invalid_data_len(self): |
3425 | + '''raise assert error if data size is smaller than required size''' |
3426 | + buf = bytearray(32) |
3427 | + with self.assertRaises(AssertionError) as context: |
3428 | + read_rta_oper_state(buf) |
3429 | + self.assertTrue('length of data is smaller than RTATTR_START_OFFSET' in |
3430 | + str(context.exception)) |
3431 | + |
3432 | + def test_unpack_rta_attr_none_data(self): |
3433 | + '''unpack_rta_attr raises assert error if data is none''' |
3434 | + data = None |
3435 | + with self.assertRaises(AssertionError) as context: |
3436 | + unpack_rta_attr(data, RTATTR_START_OFFSET) |
3437 | + self.assertTrue('data is none' in str(context.exception)) |
3438 | + |
3439 | + def test_unpack_rta_attr_invalid_offset(self): |
3440 | + '''unpack_rta_attr raises assert error if offset is invalid''' |
3441 | + data = bytearray(48) |
3442 | + with self.assertRaises(AssertionError) as context: |
3443 | + unpack_rta_attr(data, "offset") |
3444 | + self.assertTrue('offset is not integer' in str(context.exception)) |
3445 | + with self.assertRaises(AssertionError) as context: |
3446 | + unpack_rta_attr(data, 31) |
3447 | + self.assertTrue('rta offset is less than expected length' in |
3448 | + str(context.exception)) |
3449 | + |
3450 | + |
3451 | +@mock.patch('cloudinit.sources.helpers.netlink.socket.socket') |
3452 | +@mock.patch('cloudinit.sources.helpers.netlink.read_netlink_socket') |
3453 | +class TestWaitForMediaDisconnectConnect(CiTestCase): |
3454 | + with_logs = True |
3455 | + |
3456 | + def _media_switch_data(self, ifname, msg_type, operstate): |
3457 | + '''construct netlink data with specified fields''' |
3458 | + if ifname and operstate is not None: |
3459 | + data = bytearray(48) |
3460 | + bytes = ifname.encode("utf-8") |
3461 | + struct.pack_into("HH4sHHc", data, RTATTR_START_OFFSET, 8, 3, |
3462 | + bytes, 5, 16, int_to_bytes(operstate)) |
3463 | + elif ifname: |
3464 | + data = bytearray(40) |
3465 | + bytes = ifname.encode("utf-8") |
3466 | + struct.pack_into("HH4s", data, RTATTR_START_OFFSET, 8, 3, bytes) |
3467 | + elif operstate: |
3468 | + data = bytearray(40) |
3469 | + struct.pack_into("HHc", data, RTATTR_START_OFFSET, 5, 16, |
3470 | + int_to_bytes(operstate)) |
3471 | + struct.pack_into("=LHHLL", data, 0, len(data), msg_type, 0, 0, 0) |
3472 | + return data |
3473 | + |
3474 | + def test_media_down_up_scenario(self, m_read_netlink_socket, |
3475 | + m_socket): |
3476 | + '''Test for media down up sequence for required interface name''' |
3477 | + ifname = "eth0" |
3478 | + # construct data for Oper State down |
3479 | + data_op_down = self._media_switch_data(ifname, RTM_NEWLINK, OPER_DOWN) |
3480 | + # construct data for Oper State up |
3481 | + data_op_up = self._media_switch_data(ifname, RTM_NEWLINK, OPER_UP) |
3482 | + m_read_netlink_socket.side_effect = [data_op_down, data_op_up] |
3483 | + wait_for_media_disconnect_connect(m_socket, ifname) |
3484 | + self.assertEqual(m_read_netlink_socket.call_count, 2) |
3485 | + |
3486 | + def test_wait_for_media_switch_diff_interface(self, m_read_netlink_socket, |
3487 | + m_socket): |
3488 | + '''wait_for_media_disconnect_connect ignores unexpected interfaces. |
3489 | + |
3490 | + The first two messages are for other interfaces and last two are for |
3491 | + expected interface. So the function exit only after receiving last |
3492 | + 2 messages and therefore the call count for m_read_netlink_socket |
3493 | + has to be 4 |
3494 | + ''' |
3495 | + other_ifname = "eth1" |
3496 | + expected_ifname = "eth0" |
3497 | + data_op_down_eth1 = self._media_switch_data( |
3498 | + other_ifname, RTM_NEWLINK, OPER_DOWN) |
3499 | + data_op_up_eth1 = self._media_switch_data( |
3500 | + other_ifname, RTM_NEWLINK, OPER_UP) |
3501 | + data_op_down_eth0 = self._media_switch_data( |
3502 | + expected_ifname, RTM_NEWLINK, OPER_DOWN) |
3503 | + data_op_up_eth0 = self._media_switch_data( |
3504 | + expected_ifname, RTM_NEWLINK, OPER_UP) |
3505 | + m_read_netlink_socket.side_effect = [data_op_down_eth1, |
3506 | + data_op_up_eth1, |
3507 | + data_op_down_eth0, |
3508 | + data_op_up_eth0] |
3509 | + wait_for_media_disconnect_connect(m_socket, expected_ifname) |
3510 | + self.assertIn('Ignored netlink event on interface %s' % other_ifname, |
3511 | + self.logs.getvalue()) |
3512 | + self.assertEqual(m_read_netlink_socket.call_count, 4) |
3513 | + |
3514 | + def test_invalid_msgtype_getlink(self, m_read_netlink_socket, m_socket): |
3515 | + '''wait_for_media_disconnect_connect ignores GETLINK events. |
3516 | + |
3517 | +        The first two messages are oper down and up with RTM_GETLINK type, |
3518 | +        which the netlink module will ignore. The last two messages are |
3519 | +        RTM_NEWLINK with oper state down and up, so the call count for |
3520 | +        m_read_netlink_socket must be 4, ignoring the first two |
3521 | +        RTM_GETLINK messages. |
3522 | + ''' |
3523 | + ifname = "eth0" |
3524 | + data_getlink_down = self._media_switch_data( |
3525 | + ifname, RTM_GETLINK, OPER_DOWN) |
3526 | + data_getlink_up = self._media_switch_data( |
3527 | + ifname, RTM_GETLINK, OPER_UP) |
3528 | + data_newlink_down = self._media_switch_data( |
3529 | + ifname, RTM_NEWLINK, OPER_DOWN) |
3530 | + data_newlink_up = self._media_switch_data( |
3531 | + ifname, RTM_NEWLINK, OPER_UP) |
3532 | + m_read_netlink_socket.side_effect = [data_getlink_down, |
3533 | + data_getlink_up, |
3534 | + data_newlink_down, |
3535 | + data_newlink_up] |
3536 | + wait_for_media_disconnect_connect(m_socket, ifname) |
3537 | + self.assertEqual(m_read_netlink_socket.call_count, 4) |
3538 | + |
3539 | + def test_invalid_msgtype_setlink(self, m_read_netlink_socket, m_socket): |
3540 | + '''wait_for_media_disconnect_connect ignores SETLINK events. |
3541 | + |
3542 | +        The first two messages are oper down and up with RTM_SETLINK type, |
3543 | +        which the function will ignore. The 3rd and 4th messages are |
3544 | +        RTM_NEWLINK with oper state down and up. The function should exit |
3545 | +        after the 4th message, having seen the down->up sequence, so the |
3546 | +        call count for m_read_netlink_socket must be 4, leaving the two |
3547 | +        extra RTM_NEWLINK messages unread. |
3548 | + ''' |
3549 | + ifname = "eth0" |
3550 | + data_setlink_down = self._media_switch_data( |
3551 | + ifname, RTM_SETLINK, OPER_DOWN) |
3552 | + data_setlink_up = self._media_switch_data( |
3553 | + ifname, RTM_SETLINK, OPER_UP) |
3554 | + data_newlink_down = self._media_switch_data( |
3555 | + ifname, RTM_NEWLINK, OPER_DOWN) |
3556 | + data_newlink_up = self._media_switch_data( |
3557 | + ifname, RTM_NEWLINK, OPER_UP) |
3558 | + m_read_netlink_socket.side_effect = [data_setlink_down, |
3559 | + data_setlink_up, |
3560 | + data_newlink_down, |
3561 | + data_newlink_up, |
3562 | + data_newlink_down, |
3563 | + data_newlink_up] |
3564 | + wait_for_media_disconnect_connect(m_socket, ifname) |
3565 | + self.assertEqual(m_read_netlink_socket.call_count, 4) |
3566 | + |
3567 | + def test_netlink_invalid_switch_scenario(self, m_read_netlink_socket, |
3568 | + m_socket): |
3569 | +        '''returns only after receiving an UP event following a DOWN event''' |
3570 | + ifname = "eth0" |
3571 | + data_op_down = self._media_switch_data(ifname, RTM_NEWLINK, OPER_DOWN) |
3572 | + data_op_up = self._media_switch_data(ifname, RTM_NEWLINK, OPER_UP) |
3573 | + data_op_dormant = self._media_switch_data(ifname, RTM_NEWLINK, |
3574 | + OPER_DORMANT) |
3575 | + data_op_notpresent = self._media_switch_data(ifname, RTM_NEWLINK, |
3576 | + OPER_NOTPRESENT) |
3577 | + data_op_lowerdown = self._media_switch_data(ifname, RTM_NEWLINK, |
3578 | + OPER_LOWERLAYERDOWN) |
3579 | + data_op_testing = self._media_switch_data(ifname, RTM_NEWLINK, |
3580 | + OPER_TESTING) |
3581 | + data_op_unknown = self._media_switch_data(ifname, RTM_NEWLINK, |
3582 | + OPER_UNKNOWN) |
3583 | + m_read_netlink_socket.side_effect = [data_op_up, data_op_up, |
3584 | + data_op_dormant, data_op_up, |
3585 | + data_op_notpresent, data_op_up, |
3586 | + data_op_lowerdown, data_op_up, |
3587 | + data_op_testing, data_op_up, |
3588 | + data_op_unknown, data_op_up, |
3589 | + data_op_down, data_op_up] |
3590 | + wait_for_media_disconnect_connect(m_socket, ifname) |
3591 | + self.assertEqual(m_read_netlink_socket.call_count, 14) |
3592 | + |
3593 | + def test_netlink_valid_inbetween_transitions(self, m_read_netlink_socket, |
3594 | + m_socket): |
3595 | + '''wait_for_media_disconnect_connect handles in between transitions''' |
3596 | + ifname = "eth0" |
3597 | + data_op_down = self._media_switch_data(ifname, RTM_NEWLINK, OPER_DOWN) |
3598 | + data_op_up = self._media_switch_data(ifname, RTM_NEWLINK, OPER_UP) |
3599 | + data_op_dormant = self._media_switch_data(ifname, RTM_NEWLINK, |
3600 | + OPER_DORMANT) |
3601 | + data_op_unknown = self._media_switch_data(ifname, RTM_NEWLINK, |
3602 | + OPER_UNKNOWN) |
3603 | + m_read_netlink_socket.side_effect = [data_op_down, data_op_dormant, |
3604 | + data_op_unknown, data_op_up] |
3605 | + wait_for_media_disconnect_connect(m_socket, ifname) |
3606 | + self.assertEqual(m_read_netlink_socket.call_count, 4) |
3607 | + |
3608 | + def test_netlink_invalid_operstate(self, m_read_netlink_socket, m_socket): |
3609 | + '''wait_for_media_disconnect_connect should handle invalid operstates. |
3610 | + |
3611 | +        The function should not fail, and should still return, even if it |
3612 | +        receives invalid operstates; it should always wait for down->up. |
3613 | + ''' |
3614 | + ifname = "eth0" |
3615 | + data_op_down = self._media_switch_data(ifname, RTM_NEWLINK, OPER_DOWN) |
3616 | + data_op_up = self._media_switch_data(ifname, RTM_NEWLINK, OPER_UP) |
3617 | + data_op_invalid = self._media_switch_data(ifname, RTM_NEWLINK, 7) |
3618 | + m_read_netlink_socket.side_effect = [data_op_invalid, data_op_up, |
3619 | + data_op_down, data_op_invalid, |
3620 | + data_op_up] |
3621 | + wait_for_media_disconnect_connect(m_socket, ifname) |
3622 | + self.assertEqual(m_read_netlink_socket.call_count, 5) |
3623 | + |
3624 | + def test_wait_invalid_socket(self, m_read_netlink_socket, m_socket): |
3625 | +        '''wait_for_media_disconnect_connect handles a None netlink socket.''' |
3626 | + socket = None |
3627 | + ifname = "eth0" |
3628 | + with self.assertRaises(AssertionError) as context: |
3629 | + wait_for_media_disconnect_connect(socket, ifname) |
3630 | + self.assertTrue('netlink socket is none' in str(context.exception)) |
3631 | + |
3632 | + def test_wait_invalid_ifname(self, m_read_netlink_socket, m_socket): |
3633 | +        '''wait_for_media_disconnect_connect handles a None interface name''' |
3634 | + ifname = None |
3635 | + with self.assertRaises(AssertionError) as context: |
3636 | + wait_for_media_disconnect_connect(m_socket, ifname) |
3637 | + self.assertTrue('interface name is none' in str(context.exception)) |
3638 | + ifname = "" |
3639 | + with self.assertRaises(AssertionError) as context: |
3640 | + wait_for_media_disconnect_connect(m_socket, ifname) |
3641 | + self.assertTrue('interface name cannot be empty' in |
3642 | + str(context.exception)) |
3643 | + |
3644 | + def test_wait_invalid_rta_attr(self, m_read_netlink_socket, m_socket): |
3645 | +        '''wait_for_media_disconnect_connect handles invalid rta data''' |
3646 | + ifname = "eth0" |
3647 | + data_invalid1 = self._media_switch_data(None, RTM_NEWLINK, OPER_DOWN) |
3648 | + data_invalid2 = self._media_switch_data(ifname, RTM_NEWLINK, None) |
3649 | + data_op_down = self._media_switch_data(ifname, RTM_NEWLINK, OPER_DOWN) |
3650 | + data_op_up = self._media_switch_data(ifname, RTM_NEWLINK, OPER_UP) |
3651 | + m_read_netlink_socket.side_effect = [data_invalid1, data_invalid2, |
3652 | + data_op_down, data_op_up] |
3653 | + wait_for_media_disconnect_connect(m_socket, ifname) |
3654 | + self.assertEqual(m_read_netlink_socket.call_count, 4) |
3655 | + |
3656 | + def test_read_multiple_netlink_msgs(self, m_read_netlink_socket, m_socket): |
3657 | +        '''Read multiple messages in a single receive call''' |
3658 | + ifname = "eth0" |
3659 | + bytes = ifname.encode("utf-8") |
3660 | + data = bytearray(96) |
3661 | + struct.pack_into("=LHHLL", data, 0, 48, RTM_NEWLINK, 0, 0, 0) |
3662 | + struct.pack_into("HH4sHHc", data, RTATTR_START_OFFSET, 8, 3, |
3663 | + bytes, 5, 16, int_to_bytes(OPER_DOWN)) |
3664 | + struct.pack_into("=LHHLL", data, 48, 48, RTM_NEWLINK, 0, 0, 0) |
3665 | + struct.pack_into("HH4sHHc", data, 48 + RTATTR_START_OFFSET, 8, |
3666 | + 3, bytes, 5, 16, int_to_bytes(OPER_UP)) |
3667 | + m_read_netlink_socket.return_value = data |
3668 | + wait_for_media_disconnect_connect(m_socket, ifname) |
3669 | + self.assertEqual(m_read_netlink_socket.call_count, 1) |
3670 | + |
3671 | + def test_read_partial_netlink_msgs(self, m_read_netlink_socket, m_socket): |
3672 | +        '''Read partial messages split across receive calls''' |
3673 | + ifname = "eth0" |
3674 | + bytes = ifname.encode("utf-8") |
3675 | + data1 = bytearray(112) |
3676 | + data2 = bytearray(32) |
3677 | + struct.pack_into("=LHHLL", data1, 0, 48, RTM_NEWLINK, 0, 0, 0) |
3678 | + struct.pack_into("HH4sHHc", data1, RTATTR_START_OFFSET, 8, 3, |
3679 | + bytes, 5, 16, int_to_bytes(OPER_DOWN)) |
3680 | + struct.pack_into("=LHHLL", data1, 48, 48, RTM_NEWLINK, 0, 0, 0) |
3681 | + struct.pack_into("HH4sHHc", data1, 80, 8, 3, bytes, 5, 16, |
3682 | + int_to_bytes(OPER_DOWN)) |
3683 | + struct.pack_into("=LHHLL", data1, 96, 48, RTM_NEWLINK, 0, 0, 0) |
3684 | + struct.pack_into("HH4sHHc", data2, 16, 8, 3, bytes, 5, 16, |
3685 | + int_to_bytes(OPER_UP)) |
3686 | + m_read_netlink_socket.side_effect = [data1, data2] |
3687 | + wait_for_media_disconnect_connect(m_socket, ifname) |
3688 | + self.assertEqual(m_read_netlink_socket.call_count, 2) |
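The `struct.pack_into` calls in these tests encode a 16-byte nlmsghdr followed by rtattr entries for the interface name and operstate. A minimal round-trip sketch of that wire layout (constant values are assumptions from rtnetlink(7) and the helper under test; `RTATTR_START_OFFSET` reflects the 16-byte ifinfomsg that follows the header, and `B` is used here for the one-byte operstate where the tests pack a `c` char):

```python
import struct

# Constants mirroring the tests; values assumed from rtnetlink(7):
RTM_NEWLINK = 16          # "link added/changed" message type
RTATTR_START_OFFSET = 32  # 16-byte nlmsghdr + 16-byte ifinfomsg
IFLA_IFNAME = 3           # rtattr carrying the interface name
IFLA_OPERSTATE = 16       # rtattr carrying the operational state
OPER_DOWN = 2
OPER_UP = 6


def build_link_event(ifname, operstate, msg_type=RTM_NEWLINK):
    """Pack a minimal link message the way _media_switch_data does."""
    data = bytearray(48)
    # nlmsghdr: total length, type, flags, sequence number, pid
    struct.pack_into("=LHHLL", data, 0, len(data), msg_type, 0, 0, 0)
    # IFLA_IFNAME rtattr (rta_len=8: 4-byte header + 4-byte name),
    # then IFLA_OPERSTATE rtattr (rta_len=5: 4-byte header + 1 byte)
    struct.pack_into("HH4sHHB", data, RTATTR_START_OFFSET,
                     8, IFLA_IFNAME, ifname.encode("utf-8"),
                     5, IFLA_OPERSTATE, operstate)
    return data


def parse_link_event(data):
    """Unpack the fields the netlink helper inspects."""
    _length, msg_type = struct.unpack_from("=LH", data, 0)
    _rta_len, _rta_type, name = struct.unpack_from(
        "HH4s", data, RTATTR_START_OFFSET)
    operstate = data[RTATTR_START_OFFSET + 12]  # past both rtattr headers
    return msg_type, name.rstrip(b"\x00").decode(), operstate
```

The interface name is padded to 4 bytes here for simplicity; real messages size the IFLA_IFNAME rtattr to the actual name length plus NUL.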
3689 | diff --git a/cloudinit/sources/helpers/vmware/imc/config_nic.py b/cloudinit/sources/helpers/vmware/imc/config_nic.py |
3690 | index e1890e2..77cbf3b 100644 |
3691 | --- a/cloudinit/sources/helpers/vmware/imc/config_nic.py |
3692 | +++ b/cloudinit/sources/helpers/vmware/imc/config_nic.py |
3693 | @@ -165,9 +165,8 @@ class NicConfigurator(object): |
3694 | |
3695 | # Add routes if there is no primary nic |
3696 | if not self._primaryNic and v4.gateways: |
3697 | - route_list.extend(self.gen_ipv4_route(nic, |
3698 | - v4.gateways, |
3699 | - v4.netmask)) |
3700 | + subnet.update( |
3701 | + {'routes': self.gen_ipv4_route(nic, v4.gateways, v4.netmask)}) |
3702 | |
3703 | return ([subnet], route_list) |
3704 | |
3705 | diff --git a/cloudinit/sources/tests/test_init.py b/cloudinit/sources/tests/test_init.py |
3706 | index 8082019..6378e98 100644 |
3707 | --- a/cloudinit/sources/tests/test_init.py |
3708 | +++ b/cloudinit/sources/tests/test_init.py |
3709 | @@ -11,7 +11,8 @@ from cloudinit.helpers import Paths |
3710 | from cloudinit import importer |
3711 | from cloudinit.sources import ( |
3712 | EXPERIMENTAL_TEXT, INSTANCE_JSON_FILE, INSTANCE_JSON_SENSITIVE_FILE, |
3713 | - REDACT_SENSITIVE_VALUE, UNSET, DataSource, redact_sensitive_keys) |
3714 | + METADATA_UNKNOWN, REDACT_SENSITIVE_VALUE, UNSET, DataSource, |
3715 | + canonical_cloud_id, redact_sensitive_keys) |
3716 | from cloudinit.tests.helpers import CiTestCase, skipIf, mock |
3717 | from cloudinit.user_data import UserDataProcessor |
3718 | from cloudinit import util |
3719 | @@ -295,6 +296,7 @@ class TestDataSource(CiTestCase): |
3720 | 'base64_encoded_keys': [], |
3721 | 'sensitive_keys': [], |
3722 | 'v1': { |
3723 | + '_beta_keys': ['subplatform'], |
3724 | 'availability-zone': 'myaz', |
3725 | 'availability_zone': 'myaz', |
3726 | 'cloud-name': 'subclasscloudname', |
3727 | @@ -303,7 +305,10 @@ class TestDataSource(CiTestCase): |
3728 | 'instance_id': 'iid-datasource', |
3729 | 'local-hostname': 'test-subclass-hostname', |
3730 | 'local_hostname': 'test-subclass-hostname', |
3731 | - 'region': 'myregion'}, |
3732 | + 'platform': 'mytestsubclass', |
3733 | + 'public_ssh_keys': [], |
3734 | + 'region': 'myregion', |
3735 | + 'subplatform': 'unknown'}, |
3736 | 'ds': { |
3737 | '_doc': EXPERIMENTAL_TEXT, |
3738 | 'meta_data': {'availability_zone': 'myaz', |
3739 | @@ -339,6 +344,7 @@ class TestDataSource(CiTestCase): |
3740 | 'base64_encoded_keys': [], |
3741 | 'sensitive_keys': ['ds/meta_data/some/security-credentials'], |
3742 | 'v1': { |
3743 | + '_beta_keys': ['subplatform'], |
3744 | 'availability-zone': 'myaz', |
3745 | 'availability_zone': 'myaz', |
3746 | 'cloud-name': 'subclasscloudname', |
3747 | @@ -347,7 +353,10 @@ class TestDataSource(CiTestCase): |
3748 | 'instance_id': 'iid-datasource', |
3749 | 'local-hostname': 'test-subclass-hostname', |
3750 | 'local_hostname': 'test-subclass-hostname', |
3751 | - 'region': 'myregion'}, |
3752 | + 'platform': 'mytestsubclass', |
3753 | + 'public_ssh_keys': [], |
3754 | + 'region': 'myregion', |
3755 | + 'subplatform': 'unknown'}, |
3756 | 'ds': { |
3757 | '_doc': EXPERIMENTAL_TEXT, |
3758 | 'meta_data': { |
3759 | @@ -599,4 +608,75 @@ class TestRedactSensitiveData(CiTestCase): |
3760 | redact_sensitive_keys(md)) |
3761 | |
3762 | |
3763 | +class TestCanonicalCloudID(CiTestCase): |
3764 | + |
3765 | + def test_cloud_id_returns_platform_on_unknowns(self): |
3766 | + """When region and cloud_name are unknown, return platform.""" |
3767 | + self.assertEqual( |
3768 | + 'platform', |
3769 | + canonical_cloud_id(cloud_name=METADATA_UNKNOWN, |
3770 | + region=METADATA_UNKNOWN, |
3771 | + platform='platform')) |
3772 | + |
3773 | + def test_cloud_id_returns_platform_on_none(self): |
3774 | + """When region and cloud_name are unknown, return platform.""" |
3775 | + self.assertEqual( |
3776 | + 'platform', |
3777 | + canonical_cloud_id(cloud_name=None, |
3778 | + region=None, |
3779 | + platform='platform')) |
3780 | + |
3781 | + def test_cloud_id_returns_cloud_name_on_unknown_region(self): |
3782 | + """When region is unknown, return cloud_name.""" |
3783 | + for region in (None, METADATA_UNKNOWN): |
3784 | + self.assertEqual( |
3785 | + 'cloudname', |
3786 | + canonical_cloud_id(cloud_name='cloudname', |
3787 | + region=region, |
3788 | + platform='platform')) |
3789 | + |
3790 | + def test_cloud_id_returns_platform_on_unknown_cloud_name(self): |
3791 | +        """When region is set but cloud_name is unknown, return platform.""" |
3792 | + self.assertEqual( |
3793 | + 'platform', |
3794 | + canonical_cloud_id(cloud_name=METADATA_UNKNOWN, |
3795 | + region='region', |
3796 | + platform='platform')) |
3797 | + |
3798 | + def test_cloud_id_aws_based_on_region_and_cloud_name(self): |
3799 | + """When cloud_name is aws, return proper cloud-id based on region.""" |
3800 | + self.assertEqual( |
3801 | + 'aws-china', |
3802 | + canonical_cloud_id(cloud_name='aws', |
3803 | + region='cn-north-1', |
3804 | + platform='platform')) |
3805 | + self.assertEqual( |
3806 | + 'aws', |
3807 | + canonical_cloud_id(cloud_name='aws', |
3808 | + region='us-east-1', |
3809 | + platform='platform')) |
3810 | + self.assertEqual( |
3811 | + 'aws-gov', |
3812 | + canonical_cloud_id(cloud_name='aws', |
3813 | + region='us-gov-1', |
3814 | + platform='platform')) |
3815 | +        self.assertEqual(  # Overridden non-aws cloud_name is returned |
3816 | + '!aws', |
3817 | + canonical_cloud_id(cloud_name='!aws', |
3818 | + region='us-gov-1', |
3819 | + platform='platform')) |
3820 | + |
3821 | + def test_cloud_id_azure_based_on_region_and_cloud_name(self): |
3822 | + """Report cloud-id when cloud_name is azure and region is in china.""" |
3823 | + self.assertEqual( |
3824 | + 'azure-china', |
3825 | + canonical_cloud_id(cloud_name='azure', |
3826 | + region='chinaeast', |
3827 | + platform='platform')) |
3828 | + self.assertEqual( |
3829 | + 'azure', |
3830 | + canonical_cloud_id(cloud_name='azure', |
3831 | + region='!chinaeast', |
3832 | + platform='platform')) |
3833 | + |
3834 | # vi: ts=4 expandtab |
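The TestCanonicalCloudID cases above pin down a precedence order: cloud_name beats platform, a known region refines aws and azure names. A sketch consistent with those assertions (the real `canonical_cloud_id` in `cloudinit/sources` may use different region tables; the prefixes below are assumptions inferred from the test data):

```python
METADATA_UNKNOWN = 'unknown'  # assumed sentinel, as used by the tests


def canonical_cloud_id(cloud_name, region, platform):
    """Sketch of the cloud-id precedence the tests above exercise."""
    if cloud_name in (None, METADATA_UNKNOWN):
        cloud_name = platform   # unknown cloud_name falls back to platform
    if region in (None, METADATA_UNKNOWN):
        return cloud_name       # no region: cloud_name (or platform) wins
    if cloud_name == 'aws':
        if region.startswith('cn-'):
            return 'aws-china'
        if region.startswith('us-gov-'):
            return 'aws-gov'
    elif cloud_name == 'azure':
        if region.startswith('china'):
            return 'azure-china'
    return cloud_name
```

Note the last test case: a non-aws cloud_name (`'!aws'`) is returned untouched even in an aws-gov region, so the region refinement only applies when cloud_name matches exactly.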
3835 | diff --git a/cloudinit/sources/tests/test_oracle.py b/cloudinit/sources/tests/test_oracle.py |
3836 | index 7599126..97d6294 100644 |
3837 | --- a/cloudinit/sources/tests/test_oracle.py |
3838 | +++ b/cloudinit/sources/tests/test_oracle.py |
3839 | @@ -71,6 +71,14 @@ class TestDataSourceOracle(test_helpers.CiTestCase): |
3840 | self.assertFalse(ds._get_data()) |
3841 | mocks._is_platform_viable.assert_called_once_with() |
3842 | |
3843 | + def test_platform_info(self): |
3844 | + """Return platform-related information for Oracle Datasource.""" |
3845 | + ds, _mocks = self._get_ds() |
3846 | + self.assertEqual('oracle', ds.cloud_name) |
3847 | + self.assertEqual('oracle', ds.platform_type) |
3848 | + self.assertEqual( |
3849 | + 'metadata (http://169.254.169.254/openstack/)', ds.subplatform) |
3850 | + |
3851 | @mock.patch(DS_PATH + "._is_iscsi_root", return_value=True) |
3852 | def test_without_userdata(self, m_is_iscsi_root): |
3853 | """If no user-data is provided, it should not be in return dict.""" |
3854 | diff --git a/cloudinit/temp_utils.py b/cloudinit/temp_utils.py |
3855 | index c98a1b5..346276e 100644 |
3856 | --- a/cloudinit/temp_utils.py |
3857 | +++ b/cloudinit/temp_utils.py |
3858 | @@ -81,7 +81,7 @@ def ExtendedTemporaryFile(**kwargs): |
3859 | |
3860 | |
3861 | @contextlib.contextmanager |
3862 | -def tempdir(**kwargs): |
3863 | +def tempdir(rmtree_ignore_errors=False, **kwargs): |
3864 | # This seems like it was only added in python 3.2 |
3865 | # Make it since its useful... |
3866 | # See: http://bugs.python.org/file12970/tempdir.patch |
3867 | @@ -89,7 +89,7 @@ def tempdir(**kwargs): |
3868 | try: |
3869 | yield tdir |
3870 | finally: |
3871 | - shutil.rmtree(tdir) |
3872 | + shutil.rmtree(tdir, ignore_errors=rmtree_ignore_errors) |
3873 | |
3874 | |
3875 | def mkdtemp(**kwargs): |
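The temp_utils change above threads a `rmtree_ignore_errors` flag through to `shutil.rmtree`. A self-contained re-creation of the patched context manager:

```python
import contextlib
import shutil
import tempfile


@contextlib.contextmanager
def tempdir(rmtree_ignore_errors=False, **kwargs):
    """Self-cleaning temporary directory; cleanup can be told to
    tolerate the directory having been removed already."""
    tdir = tempfile.mkdtemp(**kwargs)
    try:
        yield tdir
    finally:
        # ignore_errors=True swallows the OSError raised when tdir
        # no longer exists (e.g. the body already deleted it)
        shutil.rmtree(tdir, ignore_errors=rmtree_ignore_errors)
```

With the default (`False`), a body that removes the directory itself still causes `shutil.rmtree` to raise; passing `rmtree_ignore_errors=True` is what the new dhcp code path relies on.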
3876 | diff --git a/cloudinit/tests/test_dhclient_hook.py b/cloudinit/tests/test_dhclient_hook.py |
3877 | new file mode 100644 |
3878 | index 0000000..7aab8dd |
3879 | --- /dev/null |
3880 | +++ b/cloudinit/tests/test_dhclient_hook.py |
3881 | @@ -0,0 +1,105 @@ |
3882 | +# This file is part of cloud-init. See LICENSE file for license information. |
3883 | + |
3884 | +"""Tests for cloudinit.dhclient_hook.""" |
3885 | + |
3886 | +from cloudinit import dhclient_hook as dhc |
3887 | +from cloudinit.tests.helpers import CiTestCase, dir2dict, populate_dir |
3888 | + |
3889 | +import argparse |
3890 | +import json |
3891 | +import mock |
3892 | +import os |
3893 | + |
3894 | + |
3895 | +class TestDhclientHook(CiTestCase): |
3896 | + |
3897 | + ex_env = { |
3898 | + 'interface': 'eth0', |
3899 | + 'new_dhcp_lease_time': '3600', |
3900 | + 'new_host_name': 'x1', |
3901 | + 'new_ip_address': '10.145.210.163', |
3902 | + 'new_subnet_mask': '255.255.255.0', |
3903 | + 'old_host_name': 'x1', |
3904 | + 'PATH': '/usr/sbin:/usr/bin:/sbin:/bin', |
3905 | + 'pid': '614', |
3906 | + 'reason': 'BOUND', |
3907 | + } |
3908 | + |
3909 | + # some older versions of dhclient put the same content, |
3910 | + # but in upper case with DHCP4_ instead of new_ |
3911 | + ex_env_dhcp4 = { |
3912 | + 'REASON': 'BOUND', |
3913 | + 'DHCP4_dhcp_lease_time': '3600', |
3914 | + 'DHCP4_host_name': 'x1', |
3915 | + 'DHCP4_ip_address': '10.145.210.163', |
3916 | + 'DHCP4_subnet_mask': '255.255.255.0', |
3917 | + 'INTERFACE': 'eth0', |
3918 | + 'PATH': '/usr/sbin:/usr/bin:/sbin:/bin', |
3919 | + 'pid': '614', |
3920 | + } |
3921 | + |
3922 | + expected = { |
3923 | + 'dhcp_lease_time': '3600', |
3924 | + 'host_name': 'x1', |
3925 | + 'ip_address': '10.145.210.163', |
3926 | + 'subnet_mask': '255.255.255.0'} |
3927 | + |
3928 | + def setUp(self): |
3929 | + super(TestDhclientHook, self).setUp() |
3930 | + self.tmp = self.tmp_dir() |
3931 | + |
3932 | + def test_handle_args(self): |
3933 | + """quick test of call to handle_args.""" |
3934 | + nic = 'eth0' |
3935 | + args = argparse.Namespace(event=dhc.UP, interface=nic) |
3936 | + with mock.patch.dict("os.environ", clear=True, values=self.ex_env): |
3937 | + dhc.handle_args(dhc.NAME, args, data_d=self.tmp) |
3938 | + found = dir2dict(self.tmp + os.path.sep) |
3939 | + self.assertEqual([nic + ".json"], list(found.keys())) |
3940 | + self.assertEqual(self.expected, json.loads(found[nic + ".json"])) |
3941 | + |
3942 | + def test_run_hook_up_creates_dir(self): |
3943 | + """If dir does not exist, run_hook should create it.""" |
3944 | + subd = self.tmp_path("subdir", self.tmp) |
3945 | + nic = 'eth1' |
3946 | + dhc.run_hook(nic, 'up', data_d=subd, env=self.ex_env) |
3947 | + self.assertEqual( |
3948 | + set([nic + ".json"]), set(dir2dict(subd + os.path.sep))) |
3949 | + |
3950 | + def test_run_hook_up(self): |
3951 | + """Test expected use of run_hook_up.""" |
3952 | + nic = 'eth0' |
3953 | + dhc.run_hook(nic, 'up', data_d=self.tmp, env=self.ex_env) |
3954 | + found = dir2dict(self.tmp + os.path.sep) |
3955 | + self.assertEqual([nic + ".json"], list(found.keys())) |
3956 | + self.assertEqual(self.expected, json.loads(found[nic + ".json"])) |
3957 | + |
3958 | + def test_run_hook_up_dhcp4_prefix(self): |
3959 | + """Test run_hook filters correctly with older DHCP4_ data.""" |
3960 | + nic = 'eth0' |
3961 | + dhc.run_hook(nic, 'up', data_d=self.tmp, env=self.ex_env_dhcp4) |
3962 | + found = dir2dict(self.tmp + os.path.sep) |
3963 | + self.assertEqual([nic + ".json"], list(found.keys())) |
3964 | + self.assertEqual(self.expected, json.loads(found[nic + ".json"])) |
3965 | + |
3966 | + def test_run_hook_down_deletes(self): |
3967 | + """down should delete the created json file.""" |
3968 | + nic = 'eth1' |
3969 | + populate_dir( |
3970 | + self.tmp, {nic + ".json": "{'abcd'}", 'myfile.txt': 'text'}) |
3971 | + dhc.run_hook(nic, 'down', data_d=self.tmp, env={'old_host_name': 'x1'}) |
3972 | + self.assertEqual( |
3973 | + set(['myfile.txt']), |
3974 | + set(dir2dict(self.tmp + os.path.sep))) |
3975 | + |
3976 | + def test_get_parser(self): |
3977 | + """Smoke test creation of get_parser.""" |
3978 | + # cloud-init main uses 'action'. |
3979 | + event, interface = (dhc.UP, 'mynic0') |
3980 | + self.assertEqual( |
3981 | + argparse.Namespace(event=event, interface=interface, |
3982 | + action=(dhc.NAME, dhc.handle_args)), |
3983 | + dhc.get_parser().parse_args([event, interface])) |
3984 | + |
3985 | + |
3986 | +# vi: ts=4 expandtab |
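The DHCP4_-prefix test above relies on the hook normalizing both naming schemes into the same keys. A sketch of that filtering step (`filter_dhcp_env` is a hypothetical name; the real `run_hook` additionally writes or removes the per-interface JSON file on up/down):

```python
def filter_dhcp_env(env):
    """Keep only values carrying a 'new_' or 'DHCP4_' prefix, with the
    prefix stripped and keys lower-cased, matching the 'expected' dict
    in the tests above."""
    result = {}
    for key, value in env.items():
        for prefix in ('new_', 'DHCP4_'):
            if key.startswith(prefix):
                result[key[len(prefix):].lower()] = value
    return result
```

Keys like `old_host_name`, `PATH`, `pid`, and `reason` carry neither prefix, which is why they never reach the JSON file.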
3987 | diff --git a/cloudinit/tests/test_temp_utils.py b/cloudinit/tests/test_temp_utils.py |
3988 | index ffbb92c..4a52ef8 100644 |
3989 | --- a/cloudinit/tests/test_temp_utils.py |
3990 | +++ b/cloudinit/tests/test_temp_utils.py |
3991 | @@ -2,8 +2,9 @@ |
3992 | |
3993 | """Tests for cloudinit.temp_utils""" |
3994 | |
3995 | -from cloudinit.temp_utils import mkdtemp, mkstemp |
3996 | +from cloudinit.temp_utils import mkdtemp, mkstemp, tempdir |
3997 | from cloudinit.tests.helpers import CiTestCase, wrap_and_call |
3998 | +import os |
3999 | |
4000 | |
4001 | class TestTempUtils(CiTestCase): |
4002 | @@ -98,4 +99,19 @@ class TestTempUtils(CiTestCase): |
4003 | self.assertEqual('/fake/return/path', retval) |
4004 | self.assertEqual([{'dir': '/run/cloud-init/tmp'}], calls) |
4005 | |
4006 | + def test_tempdir_error_suppression(self): |
4007 | + """test tempdir suppresses errors during directory removal.""" |
4008 | + |
4009 | + with self.assertRaises(OSError): |
4010 | + with tempdir(prefix='cloud-init-dhcp-') as tdir: |
4011 | + os.rmdir(tdir) |
4012 | + # As a result, the directory is already gone, |
4013 | + # so shutil.rmtree should raise OSError |
4014 | + |
4015 | + with tempdir(rmtree_ignore_errors=True, |
4016 | + prefix='cloud-init-dhcp-') as tdir: |
4017 | + os.rmdir(tdir) |
4018 | + # Since the directory is already gone, shutil.rmtree would raise |
4019 | + # OSError, but we suppress that |
4020 | + |
4021 | # vi: ts=4 expandtab |
4022 | diff --git a/cloudinit/tests/test_url_helper.py b/cloudinit/tests/test_url_helper.py |
4023 | index 113249d..aa9f3ec 100644 |
4024 | --- a/cloudinit/tests/test_url_helper.py |
4025 | +++ b/cloudinit/tests/test_url_helper.py |
4026 | @@ -1,10 +1,12 @@ |
4027 | # This file is part of cloud-init. See LICENSE file for license information. |
4028 | |
4029 | -from cloudinit.url_helper import oauth_headers, read_file_or_url |
4030 | +from cloudinit.url_helper import ( |
4031 | + NOT_FOUND, UrlError, oauth_headers, read_file_or_url, retry_on_url_exc) |
4032 | from cloudinit.tests.helpers import CiTestCase, mock, skipIf |
4033 | from cloudinit import util |
4034 | |
4035 | import httpretty |
4036 | +import requests |
4037 | |
4038 | |
4039 | try: |
4040 | @@ -64,3 +66,24 @@ class TestReadFileOrUrl(CiTestCase): |
4041 | result = read_file_or_url(url) |
4042 | self.assertEqual(result.contents, data) |
4043 | self.assertEqual(str(result), data.decode('utf-8')) |
4044 | + |
4045 | + |
4046 | +class TestRetryOnUrlExc(CiTestCase): |
4047 | + |
4048 | + def test_do_not_retry_non_urlerror(self): |
4049 | + """When exception is not UrlError return False.""" |
4050 | +        myerror = IOError('something unexpected') |
4051 | + self.assertFalse(retry_on_url_exc(msg='', exc=myerror)) |
4052 | + |
4053 | + def test_perform_retries_on_not_found(self): |
4054 | + """When exception is UrlError with a 404 status code return True.""" |
4055 | + myerror = UrlError(cause=RuntimeError( |
4056 | + 'something was not found'), code=NOT_FOUND) |
4057 | + self.assertTrue(retry_on_url_exc(msg='', exc=myerror)) |
4058 | + |
4059 | + def test_perform_retries_on_timeout(self): |
4060 | +        """When exception is a requests.Timeout return True.""" |
4061 | + myerror = UrlError(cause=requests.Timeout('something timed out')) |
4062 | + self.assertTrue(retry_on_url_exc(msg='', exc=myerror)) |
4063 | + |
4064 | +# vi: ts=4 expandtab |
4065 | diff --git a/cloudinit/tests/test_util.py b/cloudinit/tests/test_util.py |
4066 | index edb0c18..e3d2dba 100644 |
4067 | --- a/cloudinit/tests/test_util.py |
4068 | +++ b/cloudinit/tests/test_util.py |
4069 | @@ -18,25 +18,51 @@ MOUNT_INFO = [ |
4070 | ] |
4071 | |
4072 | OS_RELEASE_SLES = dedent("""\ |
4073 | - NAME="SLES"\n |
4074 | - VERSION="12-SP3"\n |
4075 | - VERSION_ID="12.3"\n |
4076 | - PRETTY_NAME="SUSE Linux Enterprise Server 12 SP3"\n |
4077 | - ID="sles"\nANSI_COLOR="0;32"\n |
4078 | - CPE_NAME="cpe:/o:suse:sles:12:sp3"\n |
4079 | + NAME="SLES" |
4080 | + VERSION="12-SP3" |
4081 | + VERSION_ID="12.3" |
4082 | + PRETTY_NAME="SUSE Linux Enterprise Server 12 SP3" |
4083 | + ID="sles" |
4084 | + ANSI_COLOR="0;32" |
4085 | + CPE_NAME="cpe:/o:suse:sles:12:sp3" |
4086 | """) |
4087 | |
4088 | OS_RELEASE_OPENSUSE = dedent("""\ |
4089 | -NAME="openSUSE Leap" |
4090 | -VERSION="42.3" |
4091 | -ID=opensuse |
4092 | -ID_LIKE="suse" |
4093 | -VERSION_ID="42.3" |
4094 | -PRETTY_NAME="openSUSE Leap 42.3" |
4095 | -ANSI_COLOR="0;32" |
4096 | -CPE_NAME="cpe:/o:opensuse:leap:42.3" |
4097 | -BUG_REPORT_URL="https://bugs.opensuse.org" |
4098 | -HOME_URL="https://www.opensuse.org/" |
4099 | + NAME="openSUSE Leap" |
4100 | + VERSION="42.3" |
4101 | + ID=opensuse |
4102 | + ID_LIKE="suse" |
4103 | + VERSION_ID="42.3" |
4104 | + PRETTY_NAME="openSUSE Leap 42.3" |
4105 | + ANSI_COLOR="0;32" |
4106 | + CPE_NAME="cpe:/o:opensuse:leap:42.3" |
4107 | + BUG_REPORT_URL="https://bugs.opensuse.org" |
4108 | + HOME_URL="https://www.opensuse.org/" |
4109 | +""") |
4110 | + |
4111 | +OS_RELEASE_OPENSUSE_L15 = dedent("""\ |
4112 | + NAME="openSUSE Leap" |
4113 | + VERSION="15.0" |
4114 | + ID="opensuse-leap" |
4115 | + ID_LIKE="suse opensuse" |
4116 | + VERSION_ID="15.0" |
4117 | + PRETTY_NAME="openSUSE Leap 15.0" |
4118 | + ANSI_COLOR="0;32" |
4119 | + CPE_NAME="cpe:/o:opensuse:leap:15.0" |
4120 | + BUG_REPORT_URL="https://bugs.opensuse.org" |
4121 | + HOME_URL="https://www.opensuse.org/" |
4122 | +""") |
4123 | + |
4124 | +OS_RELEASE_OPENSUSE_TW = dedent("""\ |
4125 | + NAME="openSUSE Tumbleweed" |
4126 | + ID="opensuse-tumbleweed" |
4127 | + ID_LIKE="opensuse suse" |
4128 | + VERSION_ID="20180920" |
4129 | + PRETTY_NAME="openSUSE Tumbleweed" |
4130 | + ANSI_COLOR="0;32" |
4131 | + CPE_NAME="cpe:/o:opensuse:tumbleweed:20180920" |
4132 | + BUG_REPORT_URL="https://bugs.opensuse.org" |
4133 | + HOME_URL="https://www.opensuse.org/" |
4134 | """) |
4135 | |
4136 | OS_RELEASE_CENTOS = dedent("""\ |
4137 | @@ -447,12 +473,35 @@ class TestGetLinuxDistro(CiTestCase): |
4138 | |
4139 | @mock.patch('cloudinit.util.load_file') |
4140 | def test_get_linux_opensuse(self, m_os_release, m_path_exists): |
4141 | - """Verify we get the correct name and machine arch on OpenSUSE.""" |
4142 | + """Verify we get the correct name and machine arch on openSUSE |
4143 | + prior to openSUSE Leap 15. |
4144 | + """ |
4145 | m_os_release.return_value = OS_RELEASE_OPENSUSE |
4146 | m_path_exists.side_effect = TestGetLinuxDistro.os_release_exists |
4147 | dist = util.get_linux_distro() |
4148 | self.assertEqual(('opensuse', '42.3', platform.machine()), dist) |
4149 | |
4150 | + @mock.patch('cloudinit.util.load_file') |
4151 | + def test_get_linux_opensuse_l15(self, m_os_release, m_path_exists): |
4152 | + """Verify we get the correct name and machine arch on openSUSE |
4153 | + for openSUSE Leap 15.0 and later. |
4154 | + """ |
4155 | + m_os_release.return_value = OS_RELEASE_OPENSUSE_L15 |
4156 | + m_path_exists.side_effect = TestGetLinuxDistro.os_release_exists |
4157 | + dist = util.get_linux_distro() |
4158 | + self.assertEqual(('opensuse-leap', '15.0', platform.machine()), dist) |
4159 | + |
4160 | + @mock.patch('cloudinit.util.load_file') |
4161 | + def test_get_linux_opensuse_tw(self, m_os_release, m_path_exists): |
4162 | + """Verify we get the correct name and machine arch on openSUSE |
4163 | + for openSUSE Tumbleweed |
4164 | + """ |
4165 | + m_os_release.return_value = OS_RELEASE_OPENSUSE_TW |
4166 | + m_path_exists.side_effect = TestGetLinuxDistro.os_release_exists |
4167 | + dist = util.get_linux_distro() |
4168 | + self.assertEqual( |
4169 | + ('opensuse-tumbleweed', '20180920', platform.machine()), dist) |
4170 | + |
4171 | @mock.patch('platform.dist') |
4172 | def test_get_linux_distro_no_data(self, m_platform_dist, m_path_exists): |
4173 | """Verify we get no information if os-release does not exist""" |
4174 | @@ -478,4 +527,20 @@ class TestGetLinuxDistro(CiTestCase): |
4175 | dist = util.get_linux_distro() |
4176 | self.assertEqual(('foo', '1.1', 'aarch64'), dist) |
4177 | |
4178 | + |
4179 | +@mock.patch('os.path.exists') |
4180 | +class TestIsLXD(CiTestCase): |
4181 | + |
4182 | + def test_is_lxd_true_on_sock_device(self, m_exists): |
4183 | + """When lxd's /dev/lxd/sock exists, is_lxd returns true.""" |
4184 | + m_exists.return_value = True |
4185 | + self.assertTrue(util.is_lxd()) |
4186 | + m_exists.assert_called_once_with('/dev/lxd/sock') |
4187 | + |
4188 | + def test_is_lxd_false_when_sock_device_absent(self, m_exists): |
4189 | + """When lxd's /dev/lxd/sock is absent, is_lxd returns false.""" |
4190 | + m_exists.return_value = False |
4191 | + self.assertFalse(util.is_lxd()) |
4192 | + m_exists.assert_called_once_with('/dev/lxd/sock') |
4193 | + |
4194 | # vi: ts=4 expandtab |
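The openSUSE fixtures above all follow the os-release `KEY=VALUE` format that `get_linux_distro` parses, returning `(ID, VERSION_ID, machine-arch)`. A minimal parse consistent with the expected tuples (`linux_distro_from` is an illustrative helper, not the real cloud-init API):

```python
import platform


def parse_os_release(text):
    """Parse os-release KEY=VALUE lines; values may be double-quoted."""
    info = {}
    for line in text.splitlines():
        line = line.strip()
        if line and '=' in line:
            key, _, value = line.partition('=')
            info[key] = value.strip('"')
    return info


def linux_distro_from(text):
    """Return the (id, version_id, arch) tuple shape the tests expect."""
    info = parse_os_release(text)
    return (info.get('ID', ''), info.get('VERSION_ID', ''),
            platform.machine())
```

This is why Leap 15 reports `opensuse-leap` while 42.3 reports `opensuse`: the distro changed its own `ID=` value, not the parser.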
4195 | diff --git a/cloudinit/url_helper.py b/cloudinit/url_helper.py |
4196 | index 8067979..396d69a 100644 |
4197 | --- a/cloudinit/url_helper.py |
4198 | +++ b/cloudinit/url_helper.py |
4199 | @@ -199,7 +199,7 @@ def _get_ssl_args(url, ssl_details): |
4200 | def readurl(url, data=None, timeout=None, retries=0, sec_between=1, |
4201 | headers=None, headers_cb=None, ssl_details=None, |
4202 | check_status=True, allow_redirects=True, exception_cb=None, |
4203 | - session=None, infinite=False): |
4204 | + session=None, infinite=False, log_req_resp=True): |
4205 | url = _cleanurl(url) |
4206 | req_args = { |
4207 | 'url': url, |
4208 | @@ -256,9 +256,11 @@ def readurl(url, data=None, timeout=None, retries=0, sec_between=1, |
4209 | continue |
4210 | filtered_req_args[k] = v |
4211 | try: |
4212 | - LOG.debug("[%s/%s] open '%s' with %s configuration", i, |
4213 | - "infinite" if infinite else manual_tries, url, |
4214 | - filtered_req_args) |
4215 | + |
4216 | + if log_req_resp: |
4217 | + LOG.debug("[%s/%s] open '%s' with %s configuration", i, |
4218 | + "infinite" if infinite else manual_tries, url, |
4219 | + filtered_req_args) |
4220 | |
4221 | if session is None: |
4222 | session = requests.Session() |
4223 | @@ -294,8 +296,11 @@ def readurl(url, data=None, timeout=None, retries=0, sec_between=1, |
4224 | break |
4225 | if (infinite and sec_between > 0) or \ |
4226 | (i + 1 < manual_tries and sec_between > 0): |
4227 | - LOG.debug("Please wait %s seconds while we wait to try again", |
4228 | - sec_between) |
4229 | + |
4230 | + if log_req_resp: |
4231 | + LOG.debug( |
4232 | + "Please wait %s seconds while we wait to try again", |
4233 | + sec_between) |
4234 | time.sleep(sec_between) |
4235 | if excps: |
4236 | raise excps[-1] |
4237 | @@ -549,4 +554,18 @@ def oauth_headers(url, consumer_key, token_key, token_secret, consumer_secret, |
4238 | _uri, signed_headers, _body = client.sign(url) |
4239 | return signed_headers |
4240 | |
4241 | + |
4242 | +def retry_on_url_exc(msg, exc): |
4243 | + """readurl exception_cb that will retry on NOT_FOUND and Timeout. |
4244 | + |
4245 | + Returns False to raise the exception from readurl, True to retry. |
4246 | + """ |
4247 | + if not isinstance(exc, UrlError): |
4248 | + return False |
4249 | + if exc.code == NOT_FOUND: |
4250 | + return True |
4251 | + if exc.cause and isinstance(exc.cause, requests.Timeout): |
4252 | + return True |
4253 | + return False |
4254 | + |
4255 | # vi: ts=4 expandtab |
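The new ``retry_on_url_exc`` helper above is meant to be passed as ``readurl``'s ``exception_cb``. A standalone sketch of its decision logic is below; ``UrlError`` and ``Timeout`` are minimal stand-ins for ``cloudinit.url_helper.UrlError`` and ``requests.Timeout`` so the behavior can be exercised without cloud-init or requests installed:

```python
# Standalone sketch of the retry_on_url_exc callback added to
# cloudinit/url_helper.py in this diff. UrlError and Timeout are
# stand-ins so the retry decision can be seen in isolation.

NOT_FOUND = 404


class Timeout(Exception):
    """Stand-in for requests.Timeout."""


class UrlError(IOError):
    """Stand-in for cloudinit.url_helper.UrlError."""
    def __init__(self, cause, code=None):
        super().__init__(str(cause))
        self.cause = cause
        self.code = code


def retry_on_url_exc(msg, exc):
    """readurl exception_cb: return True to retry, False to raise."""
    if not isinstance(exc, UrlError):
        return False
    if exc.code == NOT_FOUND:
        return True
    if exc.cause and isinstance(exc.cause, Timeout):
        return True
    return False


# A 404 and a timeout are retried; any other exception is raised.
print(retry_on_url_exc('msg', UrlError(Exception('nope'), code=NOT_FOUND)))  # True
print(retry_on_url_exc('msg', UrlError(Timeout('t'))))                       # True
print(retry_on_url_exc('msg', ValueError('boom')))                          # False
```

This is the callback shape the Azure ``_poll_imds`` changes in this diff rely on: retry only while the IMDS endpoint returns 404 or times out, and surface every other failure immediately.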
4256 | diff --git a/cloudinit/util.py b/cloudinit/util.py |
4257 | index 5068096..a8a232b 100644 |
4258 | --- a/cloudinit/util.py |
4259 | +++ b/cloudinit/util.py |
4260 | @@ -615,8 +615,8 @@ def get_linux_distro(): |
4261 | distro_name = os_release.get('ID', '') |
4262 | distro_version = os_release.get('VERSION_ID', '') |
4263 | if 'sles' in distro_name or 'suse' in distro_name: |
4264 | - # RELEASE_BLOCKER: We will drop this sles ivergent behavior in |
4265 | - # before 18.4 so that get_linux_distro returns a named tuple |
4266 | + # RELEASE_BLOCKER: We will drop this sles divergent behavior in |
4267 | + # the future so that get_linux_distro returns a named tuple |
4268 | # which will include both version codename and architecture |
4269 | # on all distributions. |
4270 | flavor = platform.machine() |
4271 | @@ -668,7 +668,8 @@ def system_info(): |
4272 | var = 'ubuntu' |
4273 | elif linux_dist == 'redhat': |
4274 | var = 'rhel' |
4275 | - elif linux_dist in ('opensuse', 'sles'): |
4276 | + elif linux_dist in ( |
4277 | + 'opensuse', 'opensuse-tumbleweed', 'opensuse-leap', 'sles'): |
4278 | var = 'suse' |
4279 | else: |
4280 | var = 'linux' |
4281 | @@ -2171,6 +2172,11 @@ def is_container(): |
4282 | return False |
4283 | |
4284 | |
4285 | +def is_lxd(): |
4286 | + """Check to see if we are running in a lxd container.""" |
4287 | + return os.path.exists('/dev/lxd/sock') |
4288 | + |
4289 | + |
4290 | def get_proc_env(pid, encoding='utf-8', errors='replace'): |
4291 | """ |
4292 | Return the environment in a dict that a given process id was started with. |
4293 | @@ -2870,4 +2876,20 @@ def udevadm_settle(exists=None, timeout=None): |
4294 | return subp(settle_cmd) |
4295 | |
4296 | |
4297 | +def get_proc_ppid(pid): |
4298 | + """ |
4299 | + Return the parent pid of a process. |
4300 | + """ |
4301 | + ppid = 0 |
4302 | + try: |
4303 | + contents = load_file("/proc/%s/stat" % pid, quiet=True) |
4304 | + except IOError as e: |
4305 | + LOG.warning('Failed to load /proc/%s/stat. %s', pid, e) |
4306 | + if contents: |
4307 | + parts = contents.split(" ", 4) |
4308 | + # man proc says |
4309 | + # ppid %d (4) The PID of the parent. |
4310 | + ppid = int(parts[3]) |
4311 | + return ppid |
4312 | + |
4313 | # vi: ts=4 expandtab |
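The new ``get_proc_ppid`` helper relies on the ``/proc/<pid>/stat`` field layout from proc(5). A minimal sketch of just the parsing step, with a hard-coded illustrative sample line instead of a live ``/proc`` read:

```python
# Sketch of the /proc/<pid>/stat parsing done by the new get_proc_ppid
# helper in cloudinit/util.py. The sample line is illustrative.
def parse_ppid(stat_contents):
    # man proc(5): fields are pid (1), comm (2), state (3), ppid (4).
    parts = stat_contents.split(" ", 4)
    return int(parts[3])


sample = "180 (python) S 166 180 178 34821 180 4194304 0"
print(parse_ppid(sample))  # 166
```

One caveat this sketch shares with the helper in the diff: splitting on spaces would misparse a comm field that itself contains a space (e.g. ``(tmux: server)``), since comm is an unescaped parenthesized string.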
4314 | diff --git a/cloudinit/version.py b/cloudinit/version.py |
4315 | index 844a02e..a2c5d43 100644 |
4316 | --- a/cloudinit/version.py |
4317 | +++ b/cloudinit/version.py |
4318 | @@ -4,7 +4,7 @@ |
4319 | # |
4320 | # This file is part of cloud-init. See LICENSE file for license information. |
4321 | |
4322 | -__VERSION__ = "18.4" |
4323 | +__VERSION__ = "18.5" |
4324 | _PACKAGED_VERSION = '@@PACKAGED_VERSION@@' |
4325 | |
4326 | FEATURES = [ |
4327 | diff --git a/config/cloud.cfg.tmpl b/config/cloud.cfg.tmpl |
4328 | index 1fef133..7513176 100644 |
4329 | --- a/config/cloud.cfg.tmpl |
4330 | +++ b/config/cloud.cfg.tmpl |
4331 | @@ -167,7 +167,17 @@ system_info: |
4332 | - http://%(availability_zone)s.clouds.archive.ubuntu.com/ubuntu/ |
4333 | - http://%(region)s.clouds.archive.ubuntu.com/ubuntu/ |
4334 | security: [] |
4335 | - - arches: [armhf, armel, default] |
4336 | + - arches: [arm64, armel, armhf] |
4337 | + failsafe: |
4338 | + primary: http://ports.ubuntu.com/ubuntu-ports |
4339 | + security: http://ports.ubuntu.com/ubuntu-ports |
4340 | + search: |
4341 | + primary: |
4342 | + - http://%(ec2_region)s.ec2.ports.ubuntu.com/ubuntu-ports/ |
4343 | + - http://%(availability_zone)s.clouds.ports.ubuntu.com/ubuntu-ports/ |
4344 | + - http://%(region)s.clouds.ports.ubuntu.com/ubuntu-ports/ |
4345 | + security: [] |
4346 | + - arches: [default] |
4347 | failsafe: |
4348 | primary: http://ports.ubuntu.com/ubuntu-ports |
4349 | security: http://ports.ubuntu.com/ubuntu-ports |
4350 | diff --git a/debian/changelog b/debian/changelog |
4351 | index 2bb9520..e611ee7 100644 |
4352 | --- a/debian/changelog |
4353 | +++ b/debian/changelog |
4354 | @@ -1,3 +1,78 @@ |
4355 | +cloud-init (18.5-17-gd1a2fe73-0ubuntu1~18.04.1) bionic; urgency=medium |
4356 | + |
4357 | + * New upstream snapshot. (LP: #1813346) |
4358 | + - opennebula: exclude EPOCHREALTIME as known bash env variable with a delta |
4359 | + - tox: fix disco httpretty dependencies for py37 |
4360 | + - run-container: uncomment baseurl in yum.repos.d/*.repo when using a |
4361 | + proxy [Paride Legovini] |
4362 | + - lxd: install zfs-linux instead of zfs meta package [Johnson Shi] |
4363 | + - net/sysconfig: do not write a resolv.conf file with only the header. |
4364 | + [Robert Schweikert] |
4365 | + - net: Make sysconfig renderer compatible with Network Manager. |
4366 | + [Eduardo Otubo] |
4367 | + - cc_set_passwords: Fix regex when parsing hashed passwords |
4368 | + [Marlin Cremers] |
4369 | + - net: Wait for dhclient to daemonize before reading lease file |
4370 | + [Jason Zions] |
4371 | + - [Azure] Increase retries when talking to Wireserver during metadata walk |
4372 | + [Jason Zions] |
4373 | + - Add documentation on adding a datasource. |
4374 | + - doc: clean up some datasource documentation. |
4375 | + - ds-identify: fix wrong variable name in ovf_vmware_transport_guestinfo. |
4376 | + - Scaleway: Support ssh keys provided inside an instance tag. [PORTE Loïc] |
4377 | + - OVF: simplify expected return values of transport functions. |
4378 | + - Vmware: Add support for the com.vmware.guestInfo OVF transport. |
4379 | + - HACKING.rst: change contact info to Josh Powers |
4380 | + - Update to pylint 2.2.2. |
4381 | + - Release 18.5 |
4382 | + - tests: add Disco release [Joshua Powers] |
4383 | + - net: render 'metric' values in per-subnet routes |
4384 | + - write_files: add support for appending to files. [James Baxter] |
4385 | + - config: On ubuntu select cloud archive mirrors for armel, armhf, arm64. |
4386 | + - dhclient-hook: cleanups, tests and fix a bug on 'down' event. |
4387 | + - NoCloud: Allow top level 'network' key in network-config. |
4388 | + - ovf: Fix ovf network config generation gateway/routes |
4389 | + - azure: detect vnet migration via netlink media change event |
4390 | + [Tamilmani Manoharan] |
4391 | + - Azure: fix copy/paste error in error handling when reading azure ovf. |
4392 | + [Adam DePue] |
4393 | + - tests: fix incorrect order of mocks in test_handle_zfs_root. |
4394 | + - doc: Change dns_nameserver property to dns_nameservers. [Tomer Cohen] |
4395 | + - OVF: identify label iso9660 filesystems with label 'OVF ENV'. |
4396 | + - logs: collect-logs ignore instance-data-sensitive.json on non-root user |
4397 | + - net: Ephemeral*Network: add connectivity check via URL |
4398 | + - azure: _poll_imds only retry on 404. Fail on Timeout |
4399 | + - resizefs: Prefix discovered devpath with '/dev/' when path does not |
4400 | + exist [Igor Galić] |
4401 | + - azure: retry imds polling on requests.Timeout |
4402 | + - azure: Accept variation in error msg from mount for ntfs volumes |
4403 | + [Jason Zions] |
4404 | + - azure: fix regression introduced when persisting ephemeral dhcp lease |
4405 | + [Aswin Rajamannar] |
4406 | + - azure: add udev rules to create cloud-init Gen2 disk name symlinks |
4407 | + - tests: ec2 mock missing httpretty user-data and instance-identity routes |
4408 | + - azure: remove /etc/netplan/90-hotplug-azure.yaml when net from IMDS |
4409 | + - azure: report ready to fabric after reprovision and reduce logging |
4410 | + [Aswin Rajamannar] |
4411 | + - query: better error when missing read permission on instance-data |
4412 | + - instance-data: fallback to instance-data.json if sensitive is absent. |
4413 | + - docs: remove colon from network v1 config example. [Tomer Cohen] |
4414 | + - Add cloud-id binary to packages for SUSE [Jason Zions] |
4415 | + - systemd: On SUSE ensure cloud-init.service runs before wicked |
4416 | + [Robert Schweikert] |
4417 | + - update detection of openSUSE variants [Robert Schweikert] |
4418 | + - azure: Add apply_network_config option to disable network from IMDS |
4419 | + - Correct spelling in an error message (udevadm). [Katie McLaughlin] |
4420 | + - tests: meta_data key changed to meta-data in ec2 instance-data.json |
4421 | + - tests: fix kvm integration test to assert flexible config-disk path |
4422 | + - tools: Add cloud-id command line utility |
4423 | + - instance-data: Add standard keys platform and subplatform. Refactor ec2. |
4424 | + - net: ignore nics that have "zero" mac address. |
4425 | + - tests: fix apt_configure_primary to be more flexible |
4426 | + - Ubuntu: update sources.list to comment out deb-src entries. |
4427 | + |
4428 | + -- Chad Smith <chad.smith@canonical.com> Sat, 26 Jan 2019 08:42:04 -0700 |
4429 | + |
4430 | cloud-init (18.4-0ubuntu1~18.04.1) bionic-proposed; urgency=medium |
4431 | |
4432 | * drop the following cherry-picks now included: |
4433 | diff --git a/doc/rtd/topics/datasources.rst b/doc/rtd/topics/datasources.rst |
4434 | index e34f145..648c606 100644 |
4435 | --- a/doc/rtd/topics/datasources.rst |
4436 | +++ b/doc/rtd/topics/datasources.rst |
4437 | @@ -18,7 +18,7 @@ single way to access the different cloud systems methods to provide this data |
4438 | through the typical usage of subclasses. |
4439 | |
4440 | Any metadata processed by cloud-init's datasources is persisted as |
4441 | -``/run/cloud0-init/instance-data.json``. Cloud-init provides tooling |
4442 | +``/run/cloud-init/instance-data.json``. Cloud-init provides tooling |
4443 | to quickly introspect some of that data. See :ref:`instance_metadata` for |
4444 | more information. |
4445 | |
4446 | @@ -80,6 +80,65 @@ The current interface that a datasource object must provide is the following: |
4447 | def get_package_mirror_info(self) |
4448 | |
4449 | |
4450 | +Adding a new Datasource |
4451 | +----------------------- |
4452 | +The datasource objects have a few touch points with cloud-init. If you |
4453 | +are interested in adding a new datasource for your cloud platform you'll |
4454 | +need to take care of the following items: |
4455 | + |
4456 | +* **Identify a mechanism for positive identification of the platform**: |
4457 | + It is good practice for a cloud platform to positively identify itself |
4458 | + to the guest. This allows the guest to make educated decisions based |
4459 | + on the platform on which it is running. On the x86 and arm64 architectures, |
4460 | + many clouds identify themselves through DMI data. For example, |
4461 | + Oracle's public cloud provides the string 'OracleCloud.com' in the |
4462 | + DMI chassis-asset field. |
4463 | + |
4464 | + cloud-init enabled images produce a log file with details about the |
4465 | + platform. Reading through this log in ``/run/cloud-init/ds-identify.log`` |
4466 | + may provide the information needed to uniquely identify the platform. |
4467 | + If the log is not present, you can generate it by running from source |
4468 | + ``./tools/ds-identify`` or the installed location |
4469 | + ``/usr/lib/cloud-init/ds-identify``. |
4470 | + |
4471 | + The mechanism used to identify the platform will be required for the |
4472 | + ds-identify and datasource module sections below. |
4473 | + |
4474 | +* **Add datasource module ``cloudinit/sources/DataSource<CloudPlatform>.py``**: |
4475 | + It is suggested that you start by copying one of the simpler datasources |
4476 | + such as DataSourceHetzner. |
4477 | + |
4478 | +* **Add tests for datasource module**: |
4479 | + Add a new file with some tests for the module to |
4480 | + ``cloudinit/sources/tests/test_<yourplatform>.py``. For example see |
4481 | + ``cloudinit/sources/tests/test_oracle.py`` |
4482 | + |
4483 | +* **Update ds-identify**: In systemd systems, ds-identify is used to detect |
4484 | + which datasource should be enabled or if cloud-init should run at all. |
4485 | + You'll need to make changes to ``tools/ds-identify``. |
4486 | + |
4487 | +* **Add tests for ds-identify**: Add relevant tests in a new class to |
4488 | + ``tests/unittests/test_ds_identify.py``. You can use ``TestOracle`` as an |
4489 | + example. |
4490 | + |
4491 | +* **Add your datasource name to the builtin list of datasources:** Add |
4492 | + your datasource module name to the end of the ``datasource_list`` |
4493 | + entry in ``cloudinit/settings.py``. |
4494 | + |
4495 | +* **Add your cloud platform to apport collection prompts:** Update the |
4496 | + list of cloud platforms in ``cloudinit/apport.py``. This list will be |
4497 | + provided to the user who invokes ``ubuntu-bug cloud-init``. |
4498 | + |
4499 | +* **Enable datasource by default in ubuntu packaging branches:** |
4500 | + Ubuntu packaging branches contain a template file |
4501 | + ``debian/cloud-init.templates`` that ultimately sets the default |
4502 | + datasource_list when installed via package. This file needs updating when |
4503 | + the commit gets into a package. |
4504 | + |
4505 | +* **Add documentation for your datasource**: You should add a new |
4506 | + file in ``doc/datasources/<cloudplatform>.rst`` |
4507 | + |
4508 | + |
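The checklist above can be sketched as a skeleton datasource module. ``DataSource`` here is a minimal stand-in for ``cloudinit.sources.DataSource`` so the shape runs on its own; ``MyCloud``, the metadata values, and the detection logic are all hypothetical, not a real platform:

```python
# Hypothetical skeleton of a new datasource module,
# cloudinit/sources/DataSourceMyCloud.py. The base class is a stand-in
# for cloudinit.sources.DataSource; all names/values are illustrative.

class DataSource:
    def __init__(self, sys_cfg=None, distro=None, paths=None):
        self.metadata = {}
        self.userdata_raw = None


class DataSourceMyCloud(DataSource):
    dsname = 'MyCloud'

    def _get_data(self):
        # 1. Positively identify the platform (real code might compare a
        #    DMI field such as /sys/class/dmi/id/chassis_asset_tag).
        if not self._is_platform_viable():
            return False
        # 2. Crawl platform metadata and populate required attributes.
        self.metadata = {'instance-id': 'i-mycloud-example',
                         'local-hostname': 'mycloud-host'}
        self.userdata_raw = b'#cloud-config\n{}'
        return True

    def _is_platform_viable(self):
        # Hypothetical: always viable for this illustration.
        return True


ds = DataSourceMyCloud()
ds._get_data()
print(ds.metadata['instance-id'])  # i-mycloud-example
```

A matching check would then be added to ``tools/ds-identify`` and the module name appended to ``datasource_list`` in ``cloudinit/settings.py``, per the steps above.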
4509 | Datasource Documentation |
4510 | ======================== |
4511 | The following is a list of the implemented datasources. |
4512 | diff --git a/doc/rtd/topics/datasources/azure.rst b/doc/rtd/topics/datasources/azure.rst |
4513 | index 559011e..720a475 100644 |
4514 | --- a/doc/rtd/topics/datasources/azure.rst |
4515 | +++ b/doc/rtd/topics/datasources/azure.rst |
4516 | @@ -23,18 +23,18 @@ information in json format to /run/cloud-init/dhclient.hook/<interface>.json. |
4517 | In order for cloud-init to leverage this method to find the endpoint, the |
4518 | cloud.cfg file must contain: |
4519 | |
4520 | -datasource: |
4521 | - Azure: |
4522 | - set_hostname: False |
4523 | - agent_command: __builtin__ |
4524 | +.. sourcecode:: yaml |
4525 | + |
4526 | + datasource: |
4527 | + Azure: |
4528 | + set_hostname: False |
4529 | + agent_command: __builtin__ |
4530 | |
4531 | If those files are not available, the fallback is to check the leases file |
4532 | for the endpoint server (again option 245). |
4533 | |
4534 | You can define the path to the lease file with the 'dhclient_lease_file' |
4535 | -configuration. The default value is /var/lib/dhcp/dhclient.eth0.leases. |
4536 | - |
4537 | - dhclient_lease_file: /var/lib/dhcp/dhclient.eth0.leases |
4538 | +configuration. |
4539 | |
4540 | walinuxagent |
4541 | ------------ |
4542 | @@ -57,6 +57,64 @@ in order to use waagent.conf with cloud-init, the following settings are recomme |
4543 | ResourceDisk.MountPoint=/mnt |
4544 | |
4545 | |
4546 | +Configuration |
4547 | +------------- |
4548 | +The following configuration can be set for the datasource in system |
4549 | +configuration (in ``/etc/cloud/cloud.cfg`` or ``/etc/cloud/cloud.cfg.d/``). |
4550 | + |
4551 | +The settings that may be configured are: |
4552 | + |
4553 | + * **agent_command**: Either __builtin__ (default) or a command to run to get |
4554 | + metadata. If __builtin__, get metadata from walinuxagent. Otherwise run the |
4555 | + provided command to obtain metadata. |
4556 | + * **apply_network_config**: Boolean set to True to use network configuration |
4557 | + described by Azure's IMDS endpoint instead of fallback network config of |
4558 | + dhcp on eth0. Default is True. For Ubuntu 16.04 or earlier, default is False. |
4559 | + * **data_dir**: Path used to read metadata files and write crawled data. |
4560 | + * **dhclient_lease_file**: The fallback lease file to source when looking for |
4561 | + custom DHCP option 245 from Azure fabric. |
4562 | + * **disk_aliases**: A dictionary defining which device paths should be |
4563 | + interpreted as ephemeral images. See cc_disk_setup module for more info. |
4564 | + * **hostname_bounce**: A dictionary describing Azure hostname bounce behavior |
4565 | + metadata changes. The '``hostname_bounce: command``' entry can be either |
4566 | + the literal string 'builtin' or a command to execute. The command will be |
4567 | + invoked after the hostname is set, and will have the 'interface' in its |
4568 | + environment. If ``set_hostname`` is not true, then ``hostname_bounce`` |
4569 | + will be ignored. An example might be: |
4570 | + |
4571 | + ``command: ["sh", "-c", "killall dhclient; dhclient $interface"]`` |
4572 | + |
4573 | + * **hostname_bounce**: A dictionary describing Azure hostname bounce behavior |
4574 | + to react to metadata changes. Azure will throttle ifup/down in some cases |
4575 | + after metadata has been updated to inform the dhcp server about updated hostnames. |
4576 | + * **set_hostname**: Boolean set to True when we want Azure to set the hostname |
4577 | + based on metadata. |
4578 | + |
4579 | +Configuration for the datasource can also be read from a |
4580 | +``dscfg`` entry in the ``LinuxProvisioningConfigurationSet``. Content in |
4581 | +dscfg node is expected to be base64 encoded yaml content, and it will be |
4582 | +merged into the 'datasource: Azure' entry. |
4583 | + |
4584 | +An example configuration with the default values is provided below: |
4585 | + |
4586 | +.. sourcecode:: yaml |
4587 | + |
4588 | + datasource: |
4589 | + Azure: |
4590 | + agent_command: __builtin__ |
4591 | + apply_network_config: true |
4592 | + data_dir: /var/lib/waagent |
4593 | + dhclient_lease_file: /var/lib/dhcp/dhclient.eth0.leases |
4594 | + disk_aliases: |
4595 | + ephemeral0: /dev/disk/cloud/azure_resource |
4596 | + hostname_bounce: |
4597 | + interface: eth0 |
4598 | + command: builtin |
4599 | + policy: true |
4600 | + hostname_command: hostname |
4601 | + set_hostname: true |
4602 | + |
4603 | + |
4604 | Userdata |
4605 | -------- |
4606 | Userdata is provided to cloud-init inside the ovf-env.xml file. Cloud-init |
4607 | @@ -97,37 +155,6 @@ Example: |
4608 | </LinuxProvisioningConfigurationSet> |
4609 | </wa:ProvisioningSection> |
4610 | |
4611 | -Configuration |
4612 | -------------- |
4613 | -Configuration for the datasource can be read from the system config's or set |
4614 | -via the `dscfg` entry in the `LinuxProvisioningConfigurationSet`. Content in |
4615 | -dscfg node is expected to be base64 encoded yaml content, and it will be |
4616 | -merged into the 'datasource: Azure' entry. |
4617 | - |
4618 | -The '``hostname_bounce: command``' entry can be either the literal string |
4619 | -'builtin' or a command to execute. The command will be invoked after the |
4620 | -hostname is set, and will have the 'interface' in its environment. If |
4621 | -``set_hostname`` is not true, then ``hostname_bounce`` will be ignored. |
4622 | - |
4623 | -An example might be: |
4624 | - command: ["sh", "-c", "killall dhclient; dhclient $interface"] |
4625 | - |
4626 | -.. code:: yaml |
4627 | - |
4628 | - datasource: |
4629 | - agent_command |
4630 | - Azure: |
4631 | - agent_command: [service, walinuxagent, start] |
4632 | - set_hostname: True |
4633 | - hostname_bounce: |
4634 | - # the name of the interface to bounce |
4635 | - interface: eth0 |
4636 | - # policy can be 'on', 'off' or 'force' |
4637 | - policy: on |
4638 | - # the method 'bounce' command. |
4639 | - command: "builtin" |
4640 | - hostname_command: "hostname" |
4641 | - |
4642 | hostname |
4643 | -------- |
4644 | When the user launches an instance, they provide a hostname for that instance. |
4645 | diff --git a/doc/rtd/topics/instancedata.rst b/doc/rtd/topics/instancedata.rst |
4646 | index 634e180..5d2dc94 100644 |
4647 | --- a/doc/rtd/topics/instancedata.rst |
4648 | +++ b/doc/rtd/topics/instancedata.rst |
4649 | @@ -90,24 +90,46 @@ There are three basic top-level keys: |
4650 | |
4651 | The standardized keys present: |
4652 | |
4653 | -+----------------------+-----------------------------------------------+---------------------------+ |
4654 | -| Key path | Description | Examples | |
4655 | -+======================+===============================================+===========================+ |
4656 | -| v1.cloud_name | The name of the cloud provided by metadata | aws, openstack, azure, | |
4657 | -| | key 'cloud-name' or the cloud-init datasource | configdrive, nocloud, | |
4658 | -| | name which was discovered. | ovf, etc. | |
4659 | -+----------------------+-----------------------------------------------+---------------------------+ |
4660 | -| v1.instance_id | Unique instance_id allocated by the cloud | i-<somehash> | |
4661 | -+----------------------+-----------------------------------------------+---------------------------+ |
4662 | -| v1.local_hostname | The internal or local hostname of the system | ip-10-41-41-70, | |
4663 | -| | | <user-provided-hostname> | |
4664 | -+----------------------+-----------------------------------------------+---------------------------+ |
4665 | -| v1.region | The physical region/datacenter in which the | us-east-2 | |
4666 | -| | instance is deployed | | |
4667 | -+----------------------+-----------------------------------------------+---------------------------+ |
4668 | -| v1.availability_zone | The physical availability zone in which the | us-east-2b, nova, null | |
4669 | -| | instance is deployed | | |
4670 | -+----------------------+-----------------------------------------------+---------------------------+ |
4671 | ++----------------------+-----------------------------------------------+-----------------------------------+ |
4672 | +| Key path | Description | Examples | |
4673 | ++======================+===============================================+===================================+ |
4674 | +| v1._beta_keys | List of standardized keys still in 'beta'. | [subplatform] | |
4675 | +| | The format, intent or presence of these keys | | |
4676 | +| | can change. Do not consider them | | |
4677 | +| | production-ready. | | |
4678 | ++----------------------+-----------------------------------------------+-----------------------------------+ |
4679 | +| v1.cloud_name | Where possible this will indicate the 'name' | aws, openstack, azure, | |
4680 | +| | of the cloud this system is running on. This | configdrive, nocloud, | |
4681 | +| | is specifically different than the 'platform' | ovf, etc. | |
4682 | +| | below. As an example, the name of Amazon Web | | |
4683 | +| | Services is 'aws' while the platform is 'ec2'.| | |
4684 | +| | | | |
4685 | +| | If no specific name is determinable or | | |
4686 | +| | provided in meta-data, then this field may | | |
4687 | +| | contain the same content as 'platform'. | | |
4688 | ++----------------------+-----------------------------------------------+-----------------------------------+ |
4689 | +| v1.instance_id | Unique instance_id allocated by the cloud | i-<somehash> | |
4690 | ++----------------------+-----------------------------------------------+-----------------------------------+ |
4691 | +| v1.local_hostname | The internal or local hostname of the system | ip-10-41-41-70, | |
4692 | +| | | <user-provided-hostname> | |
4693 | ++----------------------+-----------------------------------------------+-----------------------------------+ |
4694 | +| v1.platform | An attempt to identify the cloud platform | ec2, openstack, lxd, gce | |
4695 | +| | instance that the system is running on. | nocloud, ovf | |
4696 | ++----------------------+-----------------------------------------------+-----------------------------------+ |
4697 | +| v1.subplatform | Additional platform details describing the | metadata (http://169.254.169.254),| |
4698 | +| | specific source or type of metadata used. | seed-dir (/path/to/seed-dir/), | |
4699 | +| | The format of subplatform will be: | config-disk (/dev/cd0), | |
4700 | +| | <subplatform_type> (<url_file_or_dev_path>) | configdrive (/dev/sr0) | |
4701 | ++----------------------+-----------------------------------------------+-----------------------------------+ |
4702 | +| v1.public_ssh_keys | A list of ssh keys provided to the instance | ['ssh-rsa AA...', ...] | |
4703 | +| | by the datasource metadata. | | |
4704 | ++----------------------+-----------------------------------------------+-----------------------------------+ |
4705 | +| v1.region | The physical region/datacenter in which the | us-east-2 | |
4706 | +| | instance is deployed | | |
4707 | ++----------------------+-----------------------------------------------+-----------------------------------+ |
4708 | +| v1.availability_zone | The physical availability zone in which the | us-east-2b, nova, null | |
4709 | +| | instance is deployed | | |
4710 | ++----------------------+-----------------------------------------------+-----------------------------------+ |
4711 | |
4712 | |
4713 | Below is an example of ``/run/cloud-init/instance_data.json`` on an EC2 |
4714 | @@ -117,10 +139,75 @@ instance: |
4715 | |
4716 | { |
4717 | "base64_encoded_keys": [], |
4718 | - "sensitive_keys": [], |
4719 | "ds": { |
4720 | - "meta_data": { |
4721 | - "ami-id": "ami-014e1416b628b0cbf", |
4722 | + "_doc": "EXPERIMENTAL: The structure and format of content scoped under the 'ds' key may change in subsequent releases of cloud-init.", |
4723 | + "_metadata_api_version": "2016-09-02", |
4724 | + "dynamic": { |
4725 | + "instance-identity": { |
4726 | + "document": { |
4727 | + "accountId": "437526006925", |
4728 | + "architecture": "x86_64", |
4729 | + "availabilityZone": "us-east-2b", |
4730 | + "billingProducts": null, |
4731 | + "devpayProductCodes": null, |
4732 | + "imageId": "ami-079638aae7046bdd2", |
4733 | + "instanceId": "i-075f088c72ad3271c", |
4734 | + "instanceType": "t2.micro", |
4735 | + "kernelId": null, |
4736 | + "marketplaceProductCodes": null, |
4737 | + "pendingTime": "2018-10-05T20:10:43Z", |
4738 | + "privateIp": "10.41.41.95", |
4739 | + "ramdiskId": null, |
4740 | + "region": "us-east-2", |
4741 | + "version": "2017-09-30" |
4742 | + }, |
4743 | + "pkcs7": [ |
4744 | + "MIAGCSqGSIb3DQEHAqCAMIACAQExCzAJBgUrDgMCGgUAMIAGCSqGSIb3DQEHAaCAJIAEggHbewog", |
4745 | + "ICJkZXZwYXlQcm9kdWN0Q29kZXMiIDogbnVsbCwKICAibWFya2V0cGxhY2VQcm9kdWN0Q29kZXMi", |
4746 | + "IDogbnVsbCwKICAicHJpdmF0ZUlwIiA6ICIxMC40MS40MS45NSIsCiAgInZlcnNpb24iIDogIjIw", |
4747 | + "MTctMDktMzAiLAogICJpbnN0YW5jZUlkIiA6ICJpLTA3NWYwODhjNzJhZDMyNzFjIiwKICAiYmls", |
4748 | + "bGluZ1Byb2R1Y3RzIiA6IG51bGwsCiAgImluc3RhbmNlVHlwZSIgOiAidDIubWljcm8iLAogICJh", |
4749 | + "Y2NvdW50SWQiIDogIjQzNzUyNjAwNjkyNSIsCiAgImF2YWlsYWJpbGl0eVpvbmUiIDogInVzLWVh", |
4750 | + "c3QtMmIiLAogICJrZXJuZWxJZCIgOiBudWxsLAogICJyYW1kaXNrSWQiIDogbnVsbCwKICAiYXJj", |
4751 | + "aGl0ZWN0dXJlIiA6ICJ4ODZfNjQiLAogICJpbWFnZUlkIiA6ICJhbWktMDc5NjM4YWFlNzA0NmJk", |
4752 | + "ZDIiLAogICJwZW5kaW5nVGltZSIgOiAiMjAxOC0xMC0wNVQyMDoxMDo0M1oiLAogICJyZWdpb24i", |
4753 | + "IDogInVzLWVhc3QtMiIKfQAAAAAAADGCARcwggETAgEBMGkwXDELMAkGA1UEBhMCVVMxGTAXBgNV", |
4754 | + "BAgTEFdhc2hpbmd0b24gU3RhdGUxEDAOBgNVBAcTB1NlYXR0bGUxIDAeBgNVBAoTF0FtYXpvbiBX", |
4755 | + "ZWIgU2VydmljZXMgTExDAgkAlrpI2eVeGmcwCQYFKw4DAhoFAKBdMBgGCSqGSIb3DQEJAzELBgkq", |
4756 | + "hkiG9w0BBwEwHAYJKoZIhvcNAQkFMQ8XDTE4MTAwNTIwMTA0OFowIwYJKoZIhvcNAQkEMRYEFK0k", |
4757 | + "Tz6n1A8/zU1AzFj0riNQORw2MAkGByqGSM44BAMELjAsAhRNrr174y98grPBVXUforN/6wZp8AIU", |
4758 | + "JLZBkrB2GJA8A4WJ1okq++jSrBIAAAAAAAA=" |
4759 | + ], |
4760 | + "rsa2048": [ |
4761 | + "MIAGCSqGSIb3DQEHAqCAMIACAQExDzANBglghkgBZQMEAgEFADCABgkqhkiG9w0BBwGggCSABIIB", |
4762 | + "23sKICAiZGV2cGF5UHJvZHVjdENvZGVzIiA6IG51bGwsCiAgIm1hcmtldHBsYWNlUHJvZHVjdENv", |
4763 | + "ZGVzIiA6IG51bGwsCiAgInByaXZhdGVJcCIgOiAiMTAuNDEuNDEuOTUiLAogICJ2ZXJzaW9uIiA6", |
4764 | + "ICIyMDE3LTA5LTMwIiwKICAiaW5zdGFuY2VJZCIgOiAiaS0wNzVmMDg4YzcyYWQzMjcxYyIsCiAg", |
4765 | + "ImJpbGxpbmdQcm9kdWN0cyIgOiBudWxsLAogICJpbnN0YW5jZVR5cGUiIDogInQyLm1pY3JvIiwK", |
4766 | + "ICAiYWNjb3VudElkIiA6ICI0Mzc1MjYwMDY5MjUiLAogICJhdmFpbGFiaWxpdHlab25lIiA6ICJ1", |
4767 | + "cy1lYXN0LTJiIiwKICAia2VybmVsSWQiIDogbnVsbCwKICAicmFtZGlza0lkIiA6IG51bGwsCiAg", |
4768 | + "ImFyY2hpdGVjdHVyZSIgOiAieDg2XzY0IiwKICAiaW1hZ2VJZCIgOiAiYW1pLTA3OTYzOGFhZTcw", |
4769 | + "NDZiZGQyIiwKICAicGVuZGluZ1RpbWUiIDogIjIwMTgtMTAtMDVUMjA6MTA6NDNaIiwKICAicmVn", |
4770 | + "aW9uIiA6ICJ1cy1lYXN0LTIiCn0AAAAAAAAxggH/MIIB+wIBATBpMFwxCzAJBgNVBAYTAlVTMRkw", |
4771 | + "FwYDVQQIExBXYXNoaW5ndG9uIFN0YXRlMRAwDgYDVQQHEwdTZWF0dGxlMSAwHgYDVQQKExdBbWF6", |
4772 | + "b24gV2ViIFNlcnZpY2VzIExMQwIJAM07oeX4xevdMA0GCWCGSAFlAwQCAQUAoGkwGAYJKoZIhvcN", |
4773 | + "AQkDMQsGCSqGSIb3DQEHATAcBgkqhkiG9w0BCQUxDxcNMTgxMDA1MjAxMDQ4WjAvBgkqhkiG9w0B", |
4774 | + "CQQxIgQgkYz0pZk3zJKBi4KP4egeOKJl/UYwu5UdE7id74pmPwMwDQYJKoZIhvcNAQEBBQAEggEA", |
4775 | + "dC3uIGGNul1OC1mJKSH3XoBWsYH20J/xhIdftYBoXHGf2BSFsrs9ZscXd2rKAKea4pSPOZEYMXgz", |
4776 | + "lPuT7W0WU89N3ZKviy/ReMSRjmI/jJmsY1lea6mlgcsJXreBXFMYucZvyeWGHdnCjamoKWXkmZlM", |
4777 | + "mSB1gshWy8Y7DzoKviYPQZi5aI54XK2Upt4kGme1tH1NI2Cq+hM4K+adxTbNhS3uzvWaWzMklUuU", |
4778 | + "QHX2GMmjAVRVc8vnA8IAsBCJJp+gFgYzi09IK+cwNgCFFPADoG6jbMHHf4sLB3MUGpiA+G9JlCnM", |
4779 | + "fmkjI2pNRB8spc0k4UG4egqLrqCz67WuK38tjwAAAAAAAA==" |
4780 | + ], |
4781 | + "signature": [ |
4782 | + "Tsw6h+V3WnxrNVSXBYIOs1V4j95YR1mLPPH45XnhX0/Ei3waJqf7/7EEKGYP1Cr4PTYEULtZ7Mvf", |
4783 | + "+xJpM50Ivs2bdF7o0c4vnplRWe3f06NI9pv50dr110j/wNzP4MZ1pLhJCqubQOaaBTF3LFutgRrt", |
4784 | + "r4B0mN3p7EcqD8G+ll0=" |
4785 | + ] |
4786 | + } |
4787 | + }, |
4788 | + "meta-data": { |
4789 | + "ami-id": "ami-079638aae7046bdd2", |
4790 | "ami-launch-index": "0", |
4791 | "ami-manifest-path": "(unknown)", |
4792 | "block-device-mapping": { |
4793 | @@ -129,31 +216,31 @@ instance: |
4794 | "ephemeral1": "sdc", |
4795 | "root": "/dev/sda1" |
4796 | }, |
4797 | - "hostname": "ip-10-41-41-70.us-east-2.compute.internal", |
4798 | + "hostname": "ip-10-41-41-95.us-east-2.compute.internal", |
4799 | "instance-action": "none", |
4800 | - "instance-id": "i-04fa31cfc55aa7976", |
4801 | + "instance-id": "i-075f088c72ad3271c", |
4802 | "instance-type": "t2.micro", |
4803 | - "local-hostname": "ip-10-41-41-70.us-east-2.compute.internal", |
4804 | - "local-ipv4": "10.41.41.70", |
4805 | - "mac": "06:b6:92:dd:9d:24", |
4806 | + "local-hostname": "ip-10-41-41-95.us-east-2.compute.internal", |
4807 | + "local-ipv4": "10.41.41.95", |
4808 | + "mac": "06:74:8f:39:cd:a6", |
4809 | "metrics": { |
4810 | "vhostmd": "<?xml version=\"1.0\" encoding=\"UTF-8\"?>" |
4811 | }, |
4812 | "network": { |
4813 | "interfaces": { |
4814 | "macs": { |
4815 | - "06:b6:92:dd:9d:24": { |
4816 | + "06:74:8f:39:cd:a6": { |
4817 | "device-number": "0", |
4818 | - "interface-id": "eni-08c0c9fdb99b6e6f4", |
4819 | + "interface-id": "eni-052058bbd7831eaae", |
4820 | "ipv4-associations": { |
4821 | - "18.224.22.43": "10.41.41.70" |
4822 | + "18.218.221.122": "10.41.41.95" |
4823 | }, |
4824 | - "local-hostname": "ip-10-41-41-70.us-east-2.compute.internal", |
4825 | - "local-ipv4s": "10.41.41.70", |
4826 | - "mac": "06:b6:92:dd:9d:24", |
4827 | + "local-hostname": "ip-10-41-41-95.us-east-2.compute.internal", |
4828 | + "local-ipv4s": "10.41.41.95", |
4829 | + "mac": "06:74:8f:39:cd:a6", |
4830 | "owner-id": "437526006925", |
4831 | - "public-hostname": "ec2-18-224-22-43.us-east-2.compute.amazonaws.com", |
4832 | - "public-ipv4s": "18.224.22.43", |
4833 | + "public-hostname": "ec2-18-218-221-122.us-east-2.compute.amazonaws.com", |
4834 | + "public-ipv4s": "18.218.221.122", |
4835 | "security-group-ids": "sg-828247e9", |
4836 | "security-groups": "Cloud-init integration test secgroup", |
4837 | "subnet-id": "subnet-282f3053", |
4838 | @@ -171,16 +258,14 @@ instance: |
4839 | "availability-zone": "us-east-2b" |
4840 | }, |
4841 | "profile": "default-hvm", |
4842 | - "public-hostname": "ec2-18-224-22-43.us-east-2.compute.amazonaws.com", |
4843 | - "public-ipv4": "18.224.22.43", |
4844 | + "public-hostname": "ec2-18-218-221-122.us-east-2.compute.amazonaws.com", |
4845 | + "public-ipv4": "18.218.221.122", |
4846 | "public-keys": { |
4847 | "cloud-init-integration": [ |
4848 | - "ssh-rsa |
4849 | - AAAAB3NzaC1yc2EAAAADAQABAAABAQDSL7uWGj8cgWyIOaspgKdVy0cKJ+UTjfv7jBOjG2H/GN8bJVXy72XAvnhM0dUM+CCs8FOf0YlPX+Frvz2hKInrmRhZVwRSL129PasD12MlI3l44u6IwS1o/W86Q+tkQYEljtqDOo0a+cOsaZkvUNzUyEXUwz/lmYa6G4hMKZH4NBj7nbAAF96wsMCoyNwbWryBnDYUr6wMbjRR1J9Pw7Xh7WRC73wy4Va2YuOgbD3V/5ZrFPLbWZW/7TFXVrql04QVbyei4aiFR5n//GvoqwQDNe58LmbzX/xvxyKJYdny2zXmdAhMxbrpFQsfpkJ9E/H5w0yOdSvnWbUoG5xNGoOB |
4850 | - cloud-init-integration" |
4851 | + "ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABAQDSL7uWGj8cgWyIOaspgKdVy0cKJ+UTjfv7jBOjG2H/GN8bJVXy72XAvnhM0dUM+CCs8FOf0YlPX+Frvz2hKInrmRhZVwRSL129PasD12MlI3l44u6IwS1o/W86Q+tkQYEljtqDOo0a+cOsaZkvUNzUyEXUwz/lmYa6G4hMKZH4NBj7nbAAF96wsMCoyNwbWryBnDYUr6wMbjRR1J9Pw7Xh7WRC73wy4Va2YuOgbD3V/5ZrFPLbWZW/7TFXVrql04QVbyei4aiFR5n//GvoqwQDNe58LmbzX/xvxyKJYdny2zXmdAhMxbrpFQsfpkJ9E/H5w0yOdSvnWbUoG5xNGoOB cloud-init-integration" |
4852 | ] |
4853 | }, |
4854 | - "reservation-id": "r-06ab75e9346f54333", |
4855 | + "reservation-id": "r-0594a20e31f6cfe46", |
4856 | "security-groups": "Cloud-init integration test secgroup", |
4857 | "services": { |
4858 | "domain": "amazonaws.com", |
4859 | @@ -188,16 +273,22 @@ instance: |
4860 | } |
4861 | } |
4862 | }, |
4863 | + "sensitive_keys": [], |
4864 | "v1": { |
4865 | + "_beta_keys": [ |
4866 | + "subplatform" |
4867 | + ], |
4868 | "availability-zone": "us-east-2b", |
4869 | "availability_zone": "us-east-2b", |
4870 | - "cloud-name": "aws", |
4871 | "cloud_name": "aws", |
4872 | - "instance-id": "i-04fa31cfc55aa7976", |
4873 | - "instance_id": "i-04fa31cfc55aa7976", |
4874 | - "local-hostname": "ip-10-41-41-70", |
4875 | - "local_hostname": "ip-10-41-41-70", |
4876 | - "region": "us-east-2" |
4877 | + "instance_id": "i-075f088c72ad3271c", |
4878 | + "local_hostname": "ip-10-41-41-95", |
4879 | + "platform": "ec2", |
4880 | + "public_ssh_keys": [ |
4881 | + "ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABAQDSL7uWGj8cgWyIOaspgKdVy0cKJ+UTjfv7jBOjG2H/GN8bJVXy72XAvnhM0dUM+CCs8FOf0YlPX+Frvz2hKInrmRhZVwRSL129PasD12MlI3l44u6IwS1o/W86Q+tkQYEljtqDOo0a+cOsaZkvUNzUyEXUwz/lmYa6G4hMKZH4NBj7nbAAF96wsMCoyNwbWryBnDYUr6wMbjRR1J9Pw7Xh7WRC73wy4Va2YuOgbD3V/5ZrFPLbWZW/7TFXVrql04QVbyei4aiFR5n//GvoqwQDNe58LmbzX/xvxyKJYdny2zXmdAhMxbrpFQsfpkJ9E/H5w0yOdSvnWbUoG5xNGoOB cloud-init-integration" |
4882 | + ], |
4883 | + "region": "us-east-2", |
4884 | + "subplatform": "metadata (http://169.254.169.254)" |
4885 | } |
4886 | } |
4887 | |
4888 | diff --git a/doc/rtd/topics/network-config-format-v1.rst b/doc/rtd/topics/network-config-format-v1.rst |
4889 | index 3b0148c..9723d68 100644 |
4890 | --- a/doc/rtd/topics/network-config-format-v1.rst |
4891 | +++ b/doc/rtd/topics/network-config-format-v1.rst |
4892 | @@ -384,7 +384,7 @@ Valid keys for ``subnets`` include the following: |
4893 | - ``address``: IPv4 or IPv6 address. It may include CIDR netmask notation. |
4894 | - ``netmask``: IPv4 subnet mask in dotted format or CIDR notation. |
4895 | - ``gateway``: IPv4 address of the default gateway for this subnet. |
4896 | -- ``dns_nameserver``: Specify a list of IPv4 dns server IPs to end up in |
4897 | +- ``dns_nameservers``: Specify a list of IPv4 dns server IPs to end up in |
4898 | resolv.conf. |
4899 | - ``dns_search``: Specify a list of search paths to be included in |
4900 | resolv.conf. |
4901 | diff --git a/packages/redhat/cloud-init.spec.in b/packages/redhat/cloud-init.spec.in |
4902 | index a3a6d1e..6b2022b 100644 |
4903 | --- a/packages/redhat/cloud-init.spec.in |
4904 | +++ b/packages/redhat/cloud-init.spec.in |
4905 | @@ -191,6 +191,7 @@ fi |
4906 | |
4907 | # Program binaries |
4908 | %{_bindir}/cloud-init* |
4909 | +%{_bindir}/cloud-id* |
4910 | |
4911 | # Docs |
4912 | %doc LICENSE ChangeLog TODO.rst requirements.txt |
4913 | diff --git a/packages/suse/cloud-init.spec.in b/packages/suse/cloud-init.spec.in |
4914 | index e781d74..26894b3 100644 |
4915 | --- a/packages/suse/cloud-init.spec.in |
4916 | +++ b/packages/suse/cloud-init.spec.in |
4917 | @@ -93,6 +93,7 @@ version_pys=$(cd "%{buildroot}" && find . -name version.py -type f) |
4918 | |
4919 | # Program binaries |
4920 | %{_bindir}/cloud-init* |
4921 | +%{_bindir}/cloud-id* |
4922 | |
4923 | # systemd files |
4924 | /usr/lib/systemd/system-generators/* |
4925 | diff --git a/setup.py b/setup.py |
4926 | index 5ed8eae..ea37efc 100755 |
4927 | --- a/setup.py |
4928 | +++ b/setup.py |
4929 | @@ -282,7 +282,8 @@ setuptools.setup( |
4930 | cmdclass=cmdclass, |
4931 | entry_points={ |
4932 | 'console_scripts': [ |
4933 | - 'cloud-init = cloudinit.cmd.main:main' |
4934 | + 'cloud-init = cloudinit.cmd.main:main', |
4935 | + 'cloud-id = cloudinit.cmd.cloud_id:main' |
4936 | ], |
4937 | } |
4938 | ) |
4939 | diff --git a/systemd/cloud-init.service.tmpl b/systemd/cloud-init.service.tmpl |
4940 | index b92e8ab..5cb0037 100644 |
4941 | --- a/systemd/cloud-init.service.tmpl |
4942 | +++ b/systemd/cloud-init.service.tmpl |
4943 | @@ -14,8 +14,7 @@ After=networking.service |
4944 | After=network.service |
4945 | {% endif %} |
4946 | {% if variant in ["suse"] %} |
4947 | -Requires=wicked.service |
4948 | -After=wicked.service |
4949 | +Before=wicked.service |
4950 | # setting hostname via hostnamectl depends on dbus, which otherwise |
4951 | # would not be guaranteed at this point. |
4952 | After=dbus.service |
4953 | diff --git a/templates/sources.list.ubuntu.tmpl b/templates/sources.list.ubuntu.tmpl |
4954 | index d879972..edb92f1 100644 |
4955 | --- a/templates/sources.list.ubuntu.tmpl |
4956 | +++ b/templates/sources.list.ubuntu.tmpl |
4957 | @@ -10,30 +10,30 @@ |
4958 | # See http://help.ubuntu.com/community/UpgradeNotes for how to upgrade to |
4959 | # newer versions of the distribution. |
4960 | deb {{mirror}} {{codename}} main restricted |
4961 | -deb-src {{mirror}} {{codename}} main restricted |
4962 | +# deb-src {{mirror}} {{codename}} main restricted |
4963 | |
4964 | ## Major bug fix updates produced after the final release of the |
4965 | ## distribution. |
4966 | deb {{mirror}} {{codename}}-updates main restricted |
4967 | -deb-src {{mirror}} {{codename}}-updates main restricted |
4968 | +# deb-src {{mirror}} {{codename}}-updates main restricted |
4969 | |
4970 | ## N.B. software from this repository is ENTIRELY UNSUPPORTED by the Ubuntu |
4971 | ## team. Also, please note that software in universe WILL NOT receive any |
4972 | ## review or updates from the Ubuntu security team. |
4973 | deb {{mirror}} {{codename}} universe |
4974 | -deb-src {{mirror}} {{codename}} universe |
4975 | +# deb-src {{mirror}} {{codename}} universe |
4976 | deb {{mirror}} {{codename}}-updates universe |
4977 | -deb-src {{mirror}} {{codename}}-updates universe |
4978 | +# deb-src {{mirror}} {{codename}}-updates universe |
4979 | |
4980 | -## N.B. software from this repository is ENTIRELY UNSUPPORTED by the Ubuntu |
4981 | -## team, and may not be under a free licence. Please satisfy yourself as to |
4982 | -## your rights to use the software. Also, please note that software in |
4983 | +## N.B. software from this repository is ENTIRELY UNSUPPORTED by the Ubuntu |
4984 | +## team, and may not be under a free licence. Please satisfy yourself as to |
4985 | +## your rights to use the software. Also, please note that software in |
4986 | ## multiverse WILL NOT receive any review or updates from the Ubuntu |
4987 | ## security team. |
4988 | deb {{mirror}} {{codename}} multiverse |
4989 | -deb-src {{mirror}} {{codename}} multiverse |
4990 | +# deb-src {{mirror}} {{codename}} multiverse |
4991 | deb {{mirror}} {{codename}}-updates multiverse |
4992 | -deb-src {{mirror}} {{codename}}-updates multiverse |
4993 | +# deb-src {{mirror}} {{codename}}-updates multiverse |
4994 | |
4995 | ## N.B. software from this repository may not have been tested as |
4996 | ## extensively as that contained in the main release, although it includes |
4997 | @@ -41,14 +41,7 @@ deb-src {{mirror}} {{codename}}-updates multiverse |
4998 | ## Also, please note that software in backports WILL NOT receive any review |
4999 | ## or updates from the Ubuntu security team. |
5000 | deb {{mirror}} {{codename}}-backports main restricted universe multiverse |
The diff has been truncated for viewing.
PASSED: Continuous integration, rev:4a1aaa18595fa663e1a38dd3ac8f73231ec69a7f
https://jenkins.ubuntu.com/server/job/cloud-init-ci/543/
Executed test runs:
SUCCESS: Checkout
SUCCESS: Unit & Style Tests
SUCCESS: Ubuntu LTS: Build
SUCCESS: Ubuntu LTS: Integration
IN_PROGRESS: Declarative: Post Actions
Click here to trigger a rebuild:
https://jenkins.ubuntu.com/server/job/cloud-init-ci/543/rebuild