Merge ~chad.smith/cloud-init:ubuntu/cosmic into cloud-init:ubuntu/cosmic

Proposed by Chad Smith
Status: Merged
Merged at revision: 6db1a138541f807d103bcee3080d830c71dfab2e
Proposed branch: ~chad.smith/cloud-init:ubuntu/cosmic
Merge into: cloud-init:ubuntu/cosmic
Diff against target: 5357 lines (+2757/-473)
69 files modified
ChangeLog (+54/-0)
HACKING.rst (+2/-2)
bash_completion/cloud-init (+4/-1)
cloudinit/cmd/devel/logs.py (+23/-8)
cloudinit/cmd/devel/net_convert.py (+10/-5)
cloudinit/cmd/devel/render.py (+24/-11)
cloudinit/cmd/devel/tests/test_logs.py (+37/-6)
cloudinit/cmd/devel/tests/test_render.py (+44/-1)
cloudinit/cmd/main.py (+4/-16)
cloudinit/cmd/query.py (+24/-12)
cloudinit/cmd/tests/test_query.py (+71/-5)
cloudinit/config/cc_disk_setup.py (+1/-1)
cloudinit/config/cc_lxd.py (+1/-1)
cloudinit/config/cc_resizefs.py (+7/-0)
cloudinit/config/cc_set_passwords.py (+1/-1)
cloudinit/config/cc_write_files.py (+6/-1)
cloudinit/config/tests/test_set_passwords.py (+40/-0)
cloudinit/dhclient_hook.py (+72/-38)
cloudinit/handlers/jinja_template.py (+9/-1)
cloudinit/net/__init__.py (+34/-2)
cloudinit/net/dhcp.py (+76/-25)
cloudinit/net/eni.py (+15/-14)
cloudinit/net/netplan.py (+3/-3)
cloudinit/net/sysconfig.py (+61/-5)
cloudinit/net/tests/test_dhcp.py (+47/-4)
cloudinit/net/tests/test_init.py (+51/-1)
cloudinit/sources/DataSourceAzure.py (+74/-31)
cloudinit/sources/DataSourceNoCloud.py (+31/-1)
cloudinit/sources/DataSourceOVF.py (+30/-26)
cloudinit/sources/DataSourceOpenNebula.py (+1/-1)
cloudinit/sources/DataSourceScaleway.py (+10/-1)
cloudinit/sources/helpers/netlink.py (+250/-0)
cloudinit/sources/helpers/tests/test_netlink.py (+373/-0)
cloudinit/sources/helpers/vmware/imc/config_nic.py (+2/-3)
cloudinit/temp_utils.py (+2/-2)
cloudinit/tests/test_dhclient_hook.py (+105/-0)
cloudinit/tests/test_temp_utils.py (+17/-1)
cloudinit/tests/test_url_helper.py (+24/-1)
cloudinit/tests/test_util.py (+66/-17)
cloudinit/url_helper.py (+25/-6)
cloudinit/util.py (+20/-3)
cloudinit/version.py (+1/-1)
config/cloud.cfg.tmpl (+11/-1)
debian/changelog (+70/-0)
doc/rtd/topics/datasources.rst (+60/-1)
doc/rtd/topics/datasources/azure.rst (+65/-38)
doc/rtd/topics/network-config-format-v1.rst (+1/-1)
packages/redhat/cloud-init.spec.in (+1/-0)
packages/suse/cloud-init.spec.in (+1/-0)
systemd/cloud-init.service.tmpl (+1/-2)
tests/cloud_tests/releases.yaml (+16/-0)
tests/unittests/test_builtin_handlers.py (+25/-0)
tests/unittests/test_cli.py (+8/-8)
tests/unittests/test_datasource/test_azure.py (+201/-34)
tests/unittests/test_datasource/test_ec2.py (+24/-16)
tests/unittests/test_datasource/test_nocloud.py (+66/-34)
tests/unittests/test_datasource/test_ovf.py (+79/-43)
tests/unittests/test_datasource/test_scaleway.py (+72/-4)
tests/unittests/test_ds_identify.py (+16/-1)
tests/unittests/test_handler/test_handler_lxd.py (+1/-1)
tests/unittests/test_handler/test_handler_resizefs.py (+42/-10)
tests/unittests/test_handler/test_handler_write_files.py (+12/-0)
tests/unittests/test_net.py (+123/-6)
tests/unittests/test_util.py (+6/-0)
tests/unittests/test_vmware_config_file.py (+52/-6)
tools/ds-identify (+32/-6)
tools/run-container (+1/-0)
tox.ini (+2/-2)
udev/66-azure-ephemeral.rules (+17/-1)
Reviewer                Review Type             Date Requested   Status
Server Team CI bot      continuous-integration                   Approve
cloud-init Commiters                                             Pending
Review via email: mp+362280@code.launchpad.net

Commit message

sync new-upstream snapshot for release into cosmic via SRU

Revision history for this message
Server Team CI bot (server-team-bot) wrote :

PASSED: Continuous integration, rev:6db1a138541f807d103bcee3080d830c71dfab2e
https://jenkins.ubuntu.com/server/job/cloud-init-ci/544/
Executed test runs:
    SUCCESS: Checkout
    SUCCESS: Unit & Style Tests
    SUCCESS: Ubuntu LTS: Build
    SUCCESS: Ubuntu LTS: Integration
    IN_PROGRESS: Declarative: Post Actions

Click here to trigger a rebuild:
https://jenkins.ubuntu.com/server/job/cloud-init-ci/544/rebuild

review: Approve (continuous-integration)

Preview Diff

1diff --git a/ChangeLog b/ChangeLog
2index 9c043b0..8fa6fdd 100644
3--- a/ChangeLog
4+++ b/ChangeLog
5@@ -1,3 +1,57 @@
6+18.5:
7+ - tests: add Disco release [Joshua Powers]
8+ - net: render 'metric' values in per-subnet routes (LP: #1805871)
9+ - write_files: add support for appending to files. [James Baxter]
10+ - config: On ubuntu select cloud archive mirrors for armel, armhf, arm64.
11+ (LP: #1805854)
12+ - dhclient-hook: cleanups, tests and fix a bug on 'down' event.
13+ - NoCloud: Allow top level 'network' key in network-config. (LP: #1798117)
14+ - ovf: Fix ovf network config generation gateway/routes (LP: #1806103)
15+ - azure: detect vnet migration via netlink media change event
16+ [Tamilmani Manoharan]
17+ - Azure: fix copy/paste error in error handling when reading azure ovf.
18+ [Adam DePue]
19+ - tests: fix incorrect order of mocks in test_handle_zfs_root.
20+ - doc: Change dns_nameserver property to dns_nameservers. [Tomer Cohen]
21+ - OVF: identify label iso9660 filesystems with label 'OVF ENV'.
22+ - logs: collect-logs ignore instance-data-sensitive.json on non-root user
23+ (LP: #1805201)
24+ - net: Ephemeral*Network: add connectivity check via URL
25+ - azure: _poll_imds only retry on 404. Fail on Timeout (LP: #1803598)
26+ - resizefs: Prefix discovered devpath with '/dev/' when path does not
27+ exist [Igor Galić]
28+ - azure: retry imds polling on requests.Timeout (LP: #1800223)
29+ - azure: Accept variation in error msg from mount for ntfs volumes
30+ [Jason Zions] (LP: #1799338)
31+ - azure: fix regression introduced when persisting ephemeral dhcp lease
32+ [asakkurr]
33+ - azure: add udev rules to create cloud-init Gen2 disk name symlinks
34+ (LP: #1797480)
35+ - tests: ec2 mock missing httpretty user-data and instance-identity routes
36+ - azure: remove /etc/netplan/90-hotplug-azure.yaml when net from IMDS
37+ - azure: report ready to fabric after reprovision and reduce logging
38+ [asakkurr] (LP: #1799594)
39+ - query: better error when missing read permission on instance-data
40+ - instance-data: fallback to instance-data.json if sensitive is absent.
41+ (LP: #1798189)
42+ - docs: remove colon from network v1 config example. [Tomer Cohen]
43+ - Add cloud-id binary to packages for SUSE [Jason Zions]
44+ - systemd: On SUSE ensure cloud-init.service runs before wicked
45+ [Robert Schweikert] (LP: #1799709)
46+ - update detection of openSUSE variants [Robert Schweikert]
47+ - azure: Add apply_network_config option to disable network from IMDS
48+ (LP: #1798424)
49+ - Correct spelling in an error message (udevadm). [Katie McLaughlin]
50+ - tests: meta_data key changed to meta-data in ec2 instance-data.json
51+ (LP: #1797231)
52+ - tests: fix kvm integration test to assert flexible config-disk path
53+ (LP: #1797199)
54+ - tools: Add cloud-id command line utility
55+ - instance-data: Add standard keys platform and subplatform. Refactor ec2.
56+ - net: ignore nics that have "zero" mac address. (LP: #1796917)
57+ - tests: fix apt_configure_primary to be more flexible
58+ - Ubuntu: update sources.list to comment out deb-src entries. (LP: #74747)
59+
60 18.4:
61 - add rtd example docs about new standardized keys
62 - use ds._crawled_metadata instance attribute if set when writing
63diff --git a/HACKING.rst b/HACKING.rst
64index 3bb555c..fcdfa4f 100644
65--- a/HACKING.rst
66+++ b/HACKING.rst
67@@ -11,10 +11,10 @@ Do these things once
68
69 * To contribute, you must sign the Canonical `contributor license agreement`_
70
71- If you have already signed it as an individual, your Launchpad user will be listed in the `contributor-agreement-canonical`_ group. Unfortunately there is no easy way to check if an organization or company you are doing work for has signed. If you are unsure or have questions, email `Scott Moser <mailto:scott.moser@canonical.com>`_ or ping smoser in ``#cloud-init`` channel via freenode.
72+ If you have already signed it as an individual, your Launchpad user will be listed in the `contributor-agreement-canonical`_ group. Unfortunately there is no easy way to check if an organization or company you are doing work for has signed. If you are unsure or have questions, email `Josh Powers <mailto:josh.powers@canonical.com>`_ or ping powersj in ``#cloud-init`` channel via freenode.
73
74 When prompted for 'Project contact' or 'Canonical Project Manager' enter
75- 'Scott Moser'.
76+ 'Josh Powers'.
77
78 * Configure git with your email and name for commit messages.
79
80diff --git a/bash_completion/cloud-init b/bash_completion/cloud-init
81index 8c25032..a9577e9 100644
82--- a/bash_completion/cloud-init
83+++ b/bash_completion/cloud-init
84@@ -30,7 +30,10 @@ _cloudinit_complete()
85 devel)
86 COMPREPLY=($(compgen -W "--help schema net-convert" -- $cur_word))
87 ;;
88- dhclient-hook|features)
89+ dhclient-hook)
90+ COMPREPLY=($(compgen -W "--help up down" -- $cur_word))
91+ ;;
92+ features)
93 COMPREPLY=($(compgen -W "--help" -- $cur_word))
94 ;;
95 init)
96diff --git a/cloudinit/cmd/devel/logs.py b/cloudinit/cmd/devel/logs.py
97index df72520..4c086b5 100644
98--- a/cloudinit/cmd/devel/logs.py
99+++ b/cloudinit/cmd/devel/logs.py
100@@ -5,14 +5,16 @@
101 """Define 'collect-logs' utility and handler to include in cloud-init cmd."""
102
103 import argparse
104-from cloudinit.util import (
105- ProcessExecutionError, chdir, copy, ensure_dir, subp, write_file)
106-from cloudinit.temp_utils import tempdir
107 from datetime import datetime
108 import os
109 import shutil
110 import sys
111
112+from cloudinit.sources import INSTANCE_JSON_SENSITIVE_FILE
113+from cloudinit.temp_utils import tempdir
114+from cloudinit.util import (
115+ ProcessExecutionError, chdir, copy, ensure_dir, subp, write_file)
116+
117
118 CLOUDINIT_LOGS = ['/var/log/cloud-init.log', '/var/log/cloud-init-output.log']
119 CLOUDINIT_RUN_DIR = '/run/cloud-init'
120@@ -46,6 +48,13 @@ def get_parser(parser=None):
121 return parser
122
123
124+def _copytree_ignore_sensitive_files(curdir, files):
125+ """Return a list of files to ignore if we are non-root"""
126+ if os.getuid() == 0:
127+ return ()
128+ return (INSTANCE_JSON_SENSITIVE_FILE,) # Ignore root-permissioned files
129+
130+
131 def _write_command_output_to_file(cmd, filename, msg, verbosity):
132 """Helper which runs a command and writes output or error to filename."""
133 try:
134@@ -78,6 +87,11 @@ def collect_logs(tarfile, include_userdata, verbosity=0):
135 @param tarfile: The path of the tar-gzipped file to create.
136 @param include_userdata: Boolean, true means include user-data.
137 """
138+ if include_userdata and os.getuid() != 0:
139+ sys.stderr.write(
140+ "To include userdata, root user is required."
141+ " Try sudo cloud-init collect-logs\n")
142+ return 1
143 tarfile = os.path.abspath(tarfile)
144 date = datetime.utcnow().date().strftime('%Y-%m-%d')
145 log_dir = 'cloud-init-logs-{0}'.format(date)
146@@ -110,7 +124,8 @@ def collect_logs(tarfile, include_userdata, verbosity=0):
147 ensure_dir(run_dir)
148 if os.path.exists(CLOUDINIT_RUN_DIR):
149 shutil.copytree(CLOUDINIT_RUN_DIR,
150- os.path.join(run_dir, 'cloud-init'))
151+ os.path.join(run_dir, 'cloud-init'),
152+ ignore=_copytree_ignore_sensitive_files)
153 _debug("collected dir %s\n" % CLOUDINIT_RUN_DIR, 1, verbosity)
154 else:
155 _debug("directory '%s' did not exist\n" % CLOUDINIT_RUN_DIR, 1,
156@@ -118,21 +133,21 @@ def collect_logs(tarfile, include_userdata, verbosity=0):
157 with chdir(tmp_dir):
158 subp(['tar', 'czvf', tarfile, log_dir.replace(tmp_dir + '/', '')])
159 sys.stderr.write("Wrote %s\n" % tarfile)
160+ return 0
161
162
163 def handle_collect_logs_args(name, args):
164 """Handle calls to 'cloud-init collect-logs' as a subcommand."""
165- collect_logs(args.tarfile, args.userdata, args.verbosity)
166+ return collect_logs(args.tarfile, args.userdata, args.verbosity)
167
168
169 def main():
170 """Tool to collect and tar all cloud-init related logs."""
171 parser = get_parser()
172- handle_collect_logs_args('collect-logs', parser.parse_args())
173- return 0
174+ return handle_collect_logs_args('collect-logs', parser.parse_args())
175
176
177 if __name__ == '__main__':
178- main()
179+ sys.exit(main())
180
181 # vi: ts=4 expandtab
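The collect-logs change above passes an `ignore=` callback to `shutil.copytree` so that a non-root run silently skips the root-only sensitive JSON. A minimal standalone sketch of that pattern follows; the uid is passed explicitly here for illustration (the real hook calls `os.getuid()` itself), and the file name mirrors cloud-init's `INSTANCE_JSON_SENSITIVE_FILE` constant:

```python
import os
import shutil
import tempfile

# Name mirrors cloud-init's INSTANCE_JSON_SENSITIVE_FILE constant.
SENSITIVE_FILE = 'instance-data-sensitive.json'

def sensitive_ignorer(uid):
    """Build a copytree ignore= callback that skips root-only files
    for non-root callers."""
    def _ignore(curdir, files):
        if uid == 0:
            return ()  # root copies everything
        return tuple(f for f in files if f == SENSITIVE_FILE)
    return _ignore

src = tempfile.mkdtemp()
dst = os.path.join(tempfile.mkdtemp(), 'copy')
for name in (SENSITIVE_FILE, 'results.json'):
    with open(os.path.join(src, name), 'w') as f:
        f.write('data')

# With a non-root uid, the sensitive file is left behind.
shutil.copytree(src, dst, ignore=sensitive_ignorer(uid=1000))
copied = sorted(os.listdir(dst))
```

`copytree` hands the callback each directory plus its entries and excludes whatever names the callback returns, which is why the diff's helper can decide per-invocation based on the effective uid.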
182diff --git a/cloudinit/cmd/devel/net_convert.py b/cloudinit/cmd/devel/net_convert.py
183index a0f58a0..1ad7e0b 100755
184--- a/cloudinit/cmd/devel/net_convert.py
185+++ b/cloudinit/cmd/devel/net_convert.py
186@@ -9,6 +9,7 @@ import yaml
187
188 from cloudinit.sources.helpers import openstack
189 from cloudinit.sources import DataSourceAzure as azure
190+from cloudinit.sources import DataSourceOVF as ovf
191
192 from cloudinit import distros
193 from cloudinit.net import eni, netplan, network_state, sysconfig
194@@ -31,7 +32,7 @@ def get_parser(parser=None):
195 metavar="PATH", required=True)
196 parser.add_argument("-k", "--kind",
197 choices=['eni', 'network_data.json', 'yaml',
198- 'azure-imds'],
199+ 'azure-imds', 'vmware-imc'],
200 required=True)
201 parser.add_argument("-d", "--directory",
202 metavar="PATH",
203@@ -76,7 +77,6 @@ def handle_args(name, args):
204 net_data = args.network_data.read()
205 if args.kind == "eni":
206 pre_ns = eni.convert_eni_data(net_data)
207- ns = network_state.parse_net_config_data(pre_ns)
208 elif args.kind == "yaml":
209 pre_ns = yaml.load(net_data)
210 if 'network' in pre_ns:
211@@ -85,15 +85,16 @@ def handle_args(name, args):
212 sys.stderr.write('\n'.join(
213 ["Input YAML",
214 yaml.dump(pre_ns, default_flow_style=False, indent=4), ""]))
215- ns = network_state.parse_net_config_data(pre_ns)
216 elif args.kind == 'network_data.json':
217 pre_ns = openstack.convert_net_json(
218 json.loads(net_data), known_macs=known_macs)
219- ns = network_state.parse_net_config_data(pre_ns)
220 elif args.kind == 'azure-imds':
221 pre_ns = azure.parse_network_config(json.loads(net_data))
222- ns = network_state.parse_net_config_data(pre_ns)
223+ elif args.kind == 'vmware-imc':
224+ config = ovf.Config(ovf.ConfigFile(args.network_data.name))
225+ pre_ns = ovf.get_network_config_from_conf(config, False)
226
227+ ns = network_state.parse_net_config_data(pre_ns)
228 if not ns:
229 raise RuntimeError("No valid network_state object created from"
230 "input data")
231@@ -111,6 +112,10 @@ def handle_args(name, args):
232 elif args.output_kind == "netplan":
233 r_cls = netplan.Renderer
234 config = distro.renderer_configs.get('netplan')
235+ # don't run netplan generate/apply
236+ config['postcmds'] = False
237+ # trim leading slash
238+ config['netplan_path'] = config['netplan_path'][1:]
239 else:
240 r_cls = sysconfig.Renderer
241 config = distro.renderer_configs.get('sysconfig')
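The yaml branch above accepts a config either bare or nested under a top-level `network` key — the same allowance the NoCloud change in this snapshot adds. A minimal sketch of that unwrap step, using plain dicts to stay dependency-free:

```python
def unwrap_network(cfg):
    """Return the network config whether or not it is nested under a
    top-level 'network' key, as the yaml branch above does."""
    if isinstance(cfg, dict) and 'network' in cfg:
        return cfg['network']
    return cfg

bare = {'version': 2, 'ethernets': {}}
nested = {'network': {'version': 2, 'ethernets': {}}}
```

Both shapes normalize to the same dict, so the rest of the pipeline only ever sees the inner config.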
242diff --git a/cloudinit/cmd/devel/render.py b/cloudinit/cmd/devel/render.py
243index 2ba6b68..1bc2240 100755
244--- a/cloudinit/cmd/devel/render.py
245+++ b/cloudinit/cmd/devel/render.py
246@@ -8,11 +8,10 @@ import sys
247
248 from cloudinit.handlers.jinja_template import render_jinja_payload_from_file
249 from cloudinit import log
250-from cloudinit.sources import INSTANCE_JSON_FILE
251+from cloudinit.sources import INSTANCE_JSON_FILE, INSTANCE_JSON_SENSITIVE_FILE
252 from . import addLogHandlerCLI, read_cfg_paths
253
254 NAME = 'render'
255-DEFAULT_INSTANCE_DATA = '/run/cloud-init/instance-data.json'
256
257 LOG = log.getLogger(NAME)
258
259@@ -47,12 +46,22 @@ def handle_args(name, args):
260 @return 0 on success, 1 on failure.
261 """
262 addLogHandlerCLI(LOG, log.DEBUG if args.debug else log.WARNING)
263- if not args.instance_data:
264- paths = read_cfg_paths()
265- instance_data_fn = os.path.join(
266- paths.run_dir, INSTANCE_JSON_FILE)
267- else:
268+ if args.instance_data:
269 instance_data_fn = args.instance_data
270+ else:
271+ paths = read_cfg_paths()
272+ uid = os.getuid()
273+ redacted_data_fn = os.path.join(paths.run_dir, INSTANCE_JSON_FILE)
274+ if uid == 0:
275+ instance_data_fn = os.path.join(
276+ paths.run_dir, INSTANCE_JSON_SENSITIVE_FILE)
277+ if not os.path.exists(instance_data_fn):
278+ LOG.warning(
279+ 'Missing root-readable %s. Using redacted %s instead.',
280+ instance_data_fn, redacted_data_fn)
281+ instance_data_fn = redacted_data_fn
282+ else:
283+ instance_data_fn = redacted_data_fn
284 if not os.path.exists(instance_data_fn):
285 LOG.error('Missing instance-data.json file: %s', instance_data_fn)
286 return 1
287@@ -62,10 +71,14 @@ def handle_args(name, args):
288 except IOError:
289 LOG.error('Missing user-data file: %s', args.user_data)
290 return 1
291- rendered_payload = render_jinja_payload_from_file(
292- payload=user_data, payload_fn=args.user_data,
293- instance_data_file=instance_data_fn,
294- debug=True if args.debug else False)
295+ try:
296+ rendered_payload = render_jinja_payload_from_file(
297+ payload=user_data, payload_fn=args.user_data,
298+ instance_data_file=instance_data_fn,
299+ debug=True if args.debug else False)
300+ except RuntimeError as e:
301+ LOG.error('Cannot render from instance data: %s', str(e))
302+ return 1
303 if not rendered_payload:
304 LOG.error('Unable to render user-data file: %s', args.user_data)
305 return 1
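The render change above selects which instance-data file to read based on the caller's uid. A sketch of that decision, with file names mirroring cloud-init's `INSTANCE_JSON_FILE` and `INSTANCE_JSON_SENSITIVE_FILE` constants and an injectable `exists` check for illustration:

```python
import os

def pick_instance_data_path(run_dir, uid, exists=os.path.exists):
    """Mirror the selection above: root prefers the sensitive JSON but
    falls back to the redacted file when it is absent; non-root users
    always get the world-readable redacted file."""
    redacted = os.path.join(run_dir, 'instance-data.json')
    sensitive = os.path.join(run_dir, 'instance-data-sensitive.json')
    if uid != 0:
        return redacted
    return sensitive if exists(sensitive) else redacted
```

The diff additionally logs a warning on the root fallback path before using the redacted file; the path choice itself is what this sketch captures.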
306diff --git a/cloudinit/cmd/devel/tests/test_logs.py b/cloudinit/cmd/devel/tests/test_logs.py
307index 98b4756..4951797 100644
308--- a/cloudinit/cmd/devel/tests/test_logs.py
309+++ b/cloudinit/cmd/devel/tests/test_logs.py
310@@ -1,13 +1,17 @@
311 # This file is part of cloud-init. See LICENSE file for license information.
312
313-from cloudinit.cmd.devel import logs
314-from cloudinit.util import ensure_dir, load_file, subp, write_file
315-from cloudinit.tests.helpers import FilesystemMockingTestCase, wrap_and_call
316 from datetime import datetime
317-import mock
318 import os
319+from six import StringIO
320+
321+from cloudinit.cmd.devel import logs
322+from cloudinit.sources import INSTANCE_JSON_SENSITIVE_FILE
323+from cloudinit.tests.helpers import (
324+ FilesystemMockingTestCase, mock, wrap_and_call)
325+from cloudinit.util import ensure_dir, load_file, subp, write_file
326
327
328+@mock.patch('cloudinit.cmd.devel.logs.os.getuid')
329 class TestCollectLogs(FilesystemMockingTestCase):
330
331 def setUp(self):
332@@ -15,14 +19,29 @@ class TestCollectLogs(FilesystemMockingTestCase):
333 self.new_root = self.tmp_dir()
334 self.run_dir = self.tmp_path('run', self.new_root)
335
336- def test_collect_logs_creates_tarfile(self):
337+ def test_collect_logs_with_userdata_requires_root_user(self, m_getuid):
338+ """collect-logs errors when non-root user collects userdata ."""
339+ m_getuid.return_value = 100 # non-root
340+ output_tarfile = self.tmp_path('logs.tgz')
341+ with mock.patch('sys.stderr', new_callable=StringIO) as m_stderr:
342+ self.assertEqual(
343+ 1, logs.collect_logs(output_tarfile, include_userdata=True))
344+ self.assertEqual(
345+ 'To include userdata, root user is required.'
346+ ' Try sudo cloud-init collect-logs\n',
347+ m_stderr.getvalue())
348+
349+ def test_collect_logs_creates_tarfile(self, m_getuid):
350 """collect-logs creates a tarfile with all related cloud-init info."""
351+ m_getuid.return_value = 100
352 log1 = self.tmp_path('cloud-init.log', self.new_root)
353 write_file(log1, 'cloud-init-log')
354 log2 = self.tmp_path('cloud-init-output.log', self.new_root)
355 write_file(log2, 'cloud-init-output-log')
356 ensure_dir(self.run_dir)
357 write_file(self.tmp_path('results.json', self.run_dir), 'results')
358+ write_file(self.tmp_path(INSTANCE_JSON_SENSITIVE_FILE, self.run_dir),
359+ 'sensitive')
360 output_tarfile = self.tmp_path('logs.tgz')
361
362 date = datetime.utcnow().date().strftime('%Y-%m-%d')
363@@ -59,6 +78,11 @@ class TestCollectLogs(FilesystemMockingTestCase):
364 # unpack the tarfile and check file contents
365 subp(['tar', 'zxvf', output_tarfile, '-C', self.new_root])
366 out_logdir = self.tmp_path(date_logdir, self.new_root)
367+ self.assertFalse(
368+ os.path.exists(
369+ os.path.join(out_logdir, 'run', 'cloud-init',
370+ INSTANCE_JSON_SENSITIVE_FILE)),
371+ 'Unexpected file found: %s' % INSTANCE_JSON_SENSITIVE_FILE)
372 self.assertEqual(
373 '0.7fake\n',
374 load_file(os.path.join(out_logdir, 'dpkg-version')))
375@@ -82,8 +106,9 @@ class TestCollectLogs(FilesystemMockingTestCase):
376 os.path.join(out_logdir, 'run', 'cloud-init', 'results.json')))
377 fake_stderr.write.assert_any_call('Wrote %s\n' % output_tarfile)
378
379- def test_collect_logs_includes_optional_userdata(self):
380+ def test_collect_logs_includes_optional_userdata(self, m_getuid):
381 """collect-logs include userdata when --include-userdata is set."""
382+ m_getuid.return_value = 0
383 log1 = self.tmp_path('cloud-init.log', self.new_root)
384 write_file(log1, 'cloud-init-log')
385 log2 = self.tmp_path('cloud-init-output.log', self.new_root)
386@@ -92,6 +117,8 @@ class TestCollectLogs(FilesystemMockingTestCase):
387 write_file(userdata, 'user-data')
388 ensure_dir(self.run_dir)
389 write_file(self.tmp_path('results.json', self.run_dir), 'results')
390+ write_file(self.tmp_path(INSTANCE_JSON_SENSITIVE_FILE, self.run_dir),
391+ 'sensitive')
392 output_tarfile = self.tmp_path('logs.tgz')
393
394 date = datetime.utcnow().date().strftime('%Y-%m-%d')
395@@ -132,4 +159,8 @@ class TestCollectLogs(FilesystemMockingTestCase):
396 self.assertEqual(
397 'user-data',
398 load_file(os.path.join(out_logdir, 'user-data.txt')))
399+ self.assertEqual(
400+ 'sensitive',
401+ load_file(os.path.join(out_logdir, 'run', 'cloud-init',
402+ INSTANCE_JSON_SENSITIVE_FILE)))
403 fake_stderr.write.assert_any_call('Wrote %s\n' % output_tarfile)
404diff --git a/cloudinit/cmd/devel/tests/test_render.py b/cloudinit/cmd/devel/tests/test_render.py
405index fc5d2c0..988bba0 100644
406--- a/cloudinit/cmd/devel/tests/test_render.py
407+++ b/cloudinit/cmd/devel/tests/test_render.py
408@@ -6,7 +6,7 @@ import os
409 from collections import namedtuple
410 from cloudinit.cmd.devel import render
411 from cloudinit.helpers import Paths
412-from cloudinit.sources import INSTANCE_JSON_FILE
413+from cloudinit.sources import INSTANCE_JSON_FILE, INSTANCE_JSON_SENSITIVE_FILE
414 from cloudinit.tests.helpers import CiTestCase, mock, skipUnlessJinja
415 from cloudinit.util import ensure_dir, write_file
416
417@@ -63,6 +63,49 @@ class TestRender(CiTestCase):
418 'Missing instance-data.json file: %s' % json_file,
419 self.logs.getvalue())
420
421+ def test_handle_args_root_fallback_from_sensitive_instance_data(self):
422+ """When root user defaults to sensitive.json."""
423+ user_data = self.tmp_path('user-data', dir=self.tmp)
424+ run_dir = self.tmp_path('run_dir', dir=self.tmp)
425+ ensure_dir(run_dir)
426+ paths = Paths({'run_dir': run_dir})
427+ self.add_patch('cloudinit.cmd.devel.render.read_cfg_paths', 'm_paths')
428+ self.m_paths.return_value = paths
429+ args = self.args(
430+ user_data=user_data, instance_data=None, debug=False)
431+ with mock.patch('sys.stderr', new_callable=StringIO):
432+ with mock.patch('os.getuid') as m_getuid:
433+ m_getuid.return_value = 0
434+ self.assertEqual(1, render.handle_args('anyname', args))
435+ json_file = os.path.join(run_dir, INSTANCE_JSON_FILE)
436+ json_sensitive = os.path.join(run_dir, INSTANCE_JSON_SENSITIVE_FILE)
437+ self.assertIn(
438+ 'WARNING: Missing root-readable %s. Using redacted %s' % (
439+ json_sensitive, json_file), self.logs.getvalue())
440+ self.assertIn(
441+ 'ERROR: Missing instance-data.json file: %s' % json_file,
442+ self.logs.getvalue())
443+
444+ def test_handle_args_root_uses_sensitive_instance_data(self):
445+ """When root user, and no instance-data arg, use sensitive.json."""
446+ user_data = self.tmp_path('user-data', dir=self.tmp)
447+ write_file(user_data, '##template: jinja\nrendering: {{ my_var }}')
448+ run_dir = self.tmp_path('run_dir', dir=self.tmp)
449+ ensure_dir(run_dir)
450+ json_sensitive = os.path.join(run_dir, INSTANCE_JSON_SENSITIVE_FILE)
451+ write_file(json_sensitive, '{"my-var": "jinja worked"}')
452+ paths = Paths({'run_dir': run_dir})
453+ self.add_patch('cloudinit.cmd.devel.render.read_cfg_paths', 'm_paths')
454+ self.m_paths.return_value = paths
455+ args = self.args(
456+ user_data=user_data, instance_data=None, debug=False)
457+ with mock.patch('sys.stderr', new_callable=StringIO):
458+ with mock.patch('sys.stdout', new_callable=StringIO) as m_stdout:
459+ with mock.patch('os.getuid') as m_getuid:
460+ m_getuid.return_value = 0
461+ self.assertEqual(0, render.handle_args('anyname', args))
462+ self.assertIn('rendering: jinja worked', m_stdout.getvalue())
463+
464 @skipUnlessJinja()
465 def test_handle_args_renders_instance_data_vars_in_template(self):
466 """If user_data file is a jinja template render instance-data vars."""
467diff --git a/cloudinit/cmd/main.py b/cloudinit/cmd/main.py
468index 5a43702..933c019 100644
469--- a/cloudinit/cmd/main.py
470+++ b/cloudinit/cmd/main.py
471@@ -41,7 +41,7 @@ from cloudinit.settings import (PER_INSTANCE, PER_ALWAYS, PER_ONCE,
472 from cloudinit import atomic_helper
473
474 from cloudinit.config import cc_set_hostname
475-from cloudinit.dhclient_hook import LogDhclient
476+from cloudinit import dhclient_hook
477
478
479 # Welcome message template
480@@ -586,12 +586,6 @@ def main_single(name, args):
481 return 0
482
483
484-def dhclient_hook(name, args):
485- record = LogDhclient(args)
486- record.check_hooks_dir()
487- record.record()
488-
489-
490 def status_wrapper(name, args, data_d=None, link_d=None):
491 if data_d is None:
492 data_d = os.path.normpath("/var/lib/cloud/data")
493@@ -795,15 +789,9 @@ def main(sysv_args=None):
494 'query',
495 help='Query standardized instance metadata from the command line.')
496
497- parser_dhclient = subparsers.add_parser('dhclient-hook',
498- help=('run the dhclient hook'
499- 'to record network info'))
500- parser_dhclient.add_argument("net_action",
501- help=('action taken on the interface'))
502- parser_dhclient.add_argument("net_interface",
503- help=('the network interface being acted'
504- ' upon'))
505- parser_dhclient.set_defaults(action=('dhclient_hook', dhclient_hook))
506+ parser_dhclient = subparsers.add_parser(
507+ dhclient_hook.NAME, help=dhclient_hook.__doc__)
508+ dhclient_hook.get_parser(parser_dhclient)
509
510 parser_features = subparsers.add_parser('features',
511 help=('list defined features'))
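The main.py change above replaces an inline subparser definition with delegation to the reworked `dhclient_hook` module, which now owns its `NAME` and `get_parser()`. A sketch of that module-style subcommand pattern, with illustrative names rather than cloud-init's actual ones:

```python
import argparse

NAME = 'example-hook'  # stands in for dhclient_hook.NAME

def get_parser(parser=None):
    """Build (or extend) the subcommand's parser, the way the reworked
    dhclient_hook module exposes its own get_parser()."""
    if parser is None:
        parser = argparse.ArgumentParser(prog=NAME)
    parser.add_argument('event', choices=['up', 'down'])
    parser.add_argument('interface')
    return parser

# The top-level CLI wires the subcommand in without knowing its arguments.
main_parser = argparse.ArgumentParser(prog='cli')
subparsers = main_parser.add_subparsers(dest='subcommand')
get_parser(subparsers.add_parser(NAME, help='run the example hook'))
args = main_parser.parse_args([NAME, 'up', 'eth0'])
```

Keeping the argument definitions next to the handler means the main entry point shrinks to two lines per subcommand, which is the effect visible in the hunk above.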
512diff --git a/cloudinit/cmd/query.py b/cloudinit/cmd/query.py
513index 7d2d4fe..1d888b9 100644
514--- a/cloudinit/cmd/query.py
515+++ b/cloudinit/cmd/query.py
516@@ -3,6 +3,7 @@
517 """Query standardized instance metadata from the command line."""
518
519 import argparse
520+from errno import EACCES
521 import os
522 import six
523 import sys
524@@ -79,27 +80,38 @@ def handle_args(name, args):
525 uid = os.getuid()
526 if not all([args.instance_data, args.user_data, args.vendor_data]):
527 paths = read_cfg_paths()
528- if not args.instance_data:
529+ if args.instance_data:
530+ instance_data_fn = args.instance_data
531+ else:
532+ redacted_data_fn = os.path.join(paths.run_dir, INSTANCE_JSON_FILE)
533 if uid == 0:
534- default_json_fn = INSTANCE_JSON_SENSITIVE_FILE
535+ sensitive_data_fn = os.path.join(
536+ paths.run_dir, INSTANCE_JSON_SENSITIVE_FILE)
537+ if os.path.exists(sensitive_data_fn):
538+ instance_data_fn = sensitive_data_fn
539+ else:
540+ LOG.warning(
541+ 'Missing root-readable %s. Using redacted %s instead.',
542+ sensitive_data_fn, redacted_data_fn)
543+ instance_data_fn = redacted_data_fn
544 else:
545- default_json_fn = INSTANCE_JSON_FILE # World readable
546- instance_data_fn = os.path.join(paths.run_dir, default_json_fn)
547+ instance_data_fn = redacted_data_fn
548+ if args.user_data:
549+ user_data_fn = args.user_data
550 else:
551- instance_data_fn = args.instance_data
552- if not args.user_data:
553 user_data_fn = os.path.join(paths.instance_link, 'user-data.txt')
554+ if args.vendor_data:
555+ vendor_data_fn = args.vendor_data
556 else:
557- user_data_fn = args.user_data
558- if not args.vendor_data:
559 vendor_data_fn = os.path.join(paths.instance_link, 'vendor-data.txt')
560- else:
561- vendor_data_fn = args.vendor_data
562
563 try:
564 instance_json = util.load_file(instance_data_fn)
565- except IOError:
566- LOG.error('Missing instance-data.json file: %s', instance_data_fn)
567+ except (IOError, OSError) as e:
568+ if e.errno == EACCES:
569+ LOG.error("No read permission on '%s'. Try sudo", instance_data_fn)
570+ else:
571+ LOG.error('Missing instance-data file: %s', instance_data_fn)
572 return 1
573
574 instance_data = util.load_json(instance_json)
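The query change above distinguishes a permission failure from a missing file when reading instance data, so non-root users get a "try sudo" hint instead of a misleading "missing file" error. A sketch of that errno-based mapping, with the message wording copied from the diff:

```python
from errno import EACCES

def describe_read_failure(exc, path):
    """Map a failed instance-data read to the two user-facing messages
    used above: permission problems suggest sudo, anything else is
    reported as a missing file."""
    if exc.errno == EACCES:
        return "No read permission on '%s'. Try sudo" % path
    return 'Missing instance-data file: %s' % path

denied = describe_read_failure(OSError(EACCES, 'denied'), '/run/x.json')
```

Catching both `IOError` and `OSError` (as the diff does) matters on Python 2, where the two were still distinct types; on Python 3 `IOError` is an alias of `OSError`.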
575diff --git a/cloudinit/cmd/tests/test_query.py b/cloudinit/cmd/tests/test_query.py
576index fb87c6a..28738b1 100644
577--- a/cloudinit/cmd/tests/test_query.py
578+++ b/cloudinit/cmd/tests/test_query.py
579@@ -1,5 +1,6 @@
580 # This file is part of cloud-init. See LICENSE file for license information.
581
582+import errno
583 from six import StringIO
584 from textwrap import dedent
585 import os
586@@ -7,7 +8,8 @@ import os
587 from collections import namedtuple
588 from cloudinit.cmd import query
589 from cloudinit.helpers import Paths
590-from cloudinit.sources import REDACT_SENSITIVE_VALUE, INSTANCE_JSON_FILE
591+from cloudinit.sources import (
592+ REDACT_SENSITIVE_VALUE, INSTANCE_JSON_FILE, INSTANCE_JSON_SENSITIVE_FILE)
593 from cloudinit.tests.helpers import CiTestCase, mock
594 from cloudinit.util import ensure_dir, write_file
595
596@@ -50,10 +52,28 @@ class TestQuery(CiTestCase):
597 with mock.patch('sys.stderr', new_callable=StringIO) as m_stderr:
598 self.assertEqual(1, query.handle_args('anyname', args))
599 self.assertIn(
600- 'ERROR: Missing instance-data.json file: %s' % absent_fn,
601+ 'ERROR: Missing instance-data file: %s' % absent_fn,
602 self.logs.getvalue())
603 self.assertIn(
604- 'ERROR: Missing instance-data.json file: %s' % absent_fn,
605+ 'ERROR: Missing instance-data file: %s' % absent_fn,
606+ m_stderr.getvalue())
607+
608+ def test_handle_args_error_when_no_read_permission_instance_data(self):
609+ """When instance_data file is unreadable, log an error."""
610+ noread_fn = self.tmp_path('unreadable', dir=self.tmp)
611+ write_file(noread_fn, 'thou shall not pass')
612+ args = self.args(
613+ debug=False, dump_all=True, format=None, instance_data=noread_fn,
614+ list_keys=False, user_data='ud', vendor_data='vd', varname=None)
615+ with mock.patch('sys.stderr', new_callable=StringIO) as m_stderr:
616+ with mock.patch('cloudinit.cmd.query.util.load_file') as m_load:
617+ m_load.side_effect = OSError(errno.EACCES, 'Not allowed')
618+ self.assertEqual(1, query.handle_args('anyname', args))
619+ self.assertIn(
620+ "ERROR: No read permission on '%s'. Try sudo" % noread_fn,
621+ self.logs.getvalue())
622+ self.assertIn(
623+ "ERROR: No read permission on '%s'. Try sudo" % noread_fn,
624 m_stderr.getvalue())
625
626 def test_handle_args_defaults_instance_data(self):
627@@ -70,12 +90,58 @@ class TestQuery(CiTestCase):
628 self.assertEqual(1, query.handle_args('anyname', args))
629 json_file = os.path.join(run_dir, INSTANCE_JSON_FILE)
630 self.assertIn(
631- 'ERROR: Missing instance-data.json file: %s' % json_file,
632+ 'ERROR: Missing instance-data file: %s' % json_file,
633 self.logs.getvalue())
634 self.assertIn(
635- 'ERROR: Missing instance-data.json file: %s' % json_file,
636+ 'ERROR: Missing instance-data file: %s' % json_file,
637 m_stderr.getvalue())
638
639+ def test_handle_args_root_fallsback_to_instance_data(self):
640+ """When no instance_data argument, root falls back to redacted json."""
641+ args = self.args(
642+ debug=False, dump_all=True, format=None, instance_data=None,
643+ list_keys=False, user_data=None, vendor_data=None, varname=None)
644+ run_dir = self.tmp_path('run_dir', dir=self.tmp)
645+ ensure_dir(run_dir)
646+ paths = Paths({'run_dir': run_dir})
647+ self.add_patch('cloudinit.cmd.query.read_cfg_paths', 'm_paths')
648+ self.m_paths.return_value = paths
649+ with mock.patch('sys.stderr', new_callable=StringIO) as m_stderr:
650+ with mock.patch('os.getuid') as m_getuid:
651+ m_getuid.return_value = 0
652+ self.assertEqual(1, query.handle_args('anyname', args))
653+ json_file = os.path.join(run_dir, INSTANCE_JSON_FILE)
654+ sensitive_file = os.path.join(run_dir, INSTANCE_JSON_SENSITIVE_FILE)
655+ self.assertIn(
656+ 'WARNING: Missing root-readable %s. Using redacted %s instead.' % (
657+ sensitive_file, json_file),
658+ m_stderr.getvalue())
659+
660+ def test_handle_args_root_uses_instance_sensitive_data(self):
661+ """When no instance_data argument, root uses sensitive json."""
662+ user_data = self.tmp_path('user-data', dir=self.tmp)
663+ vendor_data = self.tmp_path('vendor-data', dir=self.tmp)
664+ write_file(user_data, 'ud')
665+ write_file(vendor_data, 'vd')
666+ run_dir = self.tmp_path('run_dir', dir=self.tmp)
667+ sensitive_file = os.path.join(run_dir, INSTANCE_JSON_SENSITIVE_FILE)
668+ write_file(sensitive_file, '{"my-var": "it worked"}')
669+ ensure_dir(run_dir)
670+ paths = Paths({'run_dir': run_dir})
671+ self.add_patch('cloudinit.cmd.query.read_cfg_paths', 'm_paths')
672+ self.m_paths.return_value = paths
673+ args = self.args(
674+ debug=False, dump_all=True, format=None, instance_data=None,
675+ list_keys=False, user_data=vendor_data, vendor_data=vendor_data,
676+ varname=None)
677+ with mock.patch('sys.stdout', new_callable=StringIO) as m_stdout:
678+ with mock.patch('os.getuid') as m_getuid:
679+ m_getuid.return_value = 0
680+ self.assertEqual(0, query.handle_args('anyname', args))
681+ self.assertEqual(
682+ '{\n "my_var": "it worked",\n "userdata": "vd",\n '
683+ '"vendordata": "vd"\n}\n', m_stdout.getvalue())
684+
685 def test_handle_args_dumps_all_instance_data(self):
686 """When --all is specified query will dump all instance data vars."""
687 write_file(self.instance_data, '{"my-var": "it worked"}')
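The root fallback exercised by the two new query tests above boils down to a small path-selection rule: root prefers the sensitive (unredacted) instance-data json and falls back to the redacted copy when it is missing. A standalone sketch of that rule (the helper name is illustrative, not cloud-init's API; the file names mirror cloud-init's INSTANCE_JSON constants):

```python
import os
import tempfile

def choose_instance_data(run_dir, uid):
    # Root prefers the sensitive (unredacted) json, falling back to the
    # redacted copy when it is missing; non-root always reads the redacted
    # file. File names mirror cloud-init's constants but are illustrative.
    sensitive = os.path.join(run_dir, 'instance-data-sensitive.json')
    redacted = os.path.join(run_dir, 'instance-data.json')
    if uid == 0 and os.path.exists(sensitive):
        return sensitive
    return redacted

run_dir = tempfile.mkdtemp()
print(os.path.basename(choose_instance_data(run_dir, 0)))
# instance-data.json  (sensitive file missing: root falls back, with warning)
open(os.path.join(run_dir, 'instance-data-sensitive.json'), 'w').close()
print(os.path.basename(choose_instance_data(run_dir, 0)))
# instance-data-sensitive.json
```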
688diff --git a/cloudinit/config/cc_disk_setup.py b/cloudinit/config/cc_disk_setup.py
689index 943089e..29e192e 100644
690--- a/cloudinit/config/cc_disk_setup.py
691+++ b/cloudinit/config/cc_disk_setup.py
692@@ -743,7 +743,7 @@ def assert_and_settle_device(device):
693 util.udevadm_settle()
694 if not os.path.exists(device):
695 raise RuntimeError("Device %s did not exist and was not created "
696- "with a udevamd settle." % device)
697+ "with a udevadm settle." % device)
698
699 # Whether or not the device existed above, it is possible that udev
700 # events that would populate udev database (for reading by lsdname) have
701diff --git a/cloudinit/config/cc_lxd.py b/cloudinit/config/cc_lxd.py
702index 24a8ebe..71d13ed 100644
703--- a/cloudinit/config/cc_lxd.py
704+++ b/cloudinit/config/cc_lxd.py
705@@ -89,7 +89,7 @@ def handle(name, cfg, cloud, log, args):
706 packages.append('lxd')
707
708 if init_cfg.get("storage_backend") == "zfs" and not util.which('zfs'):
709- packages.append('zfs')
710+ packages.append('zfsutils-linux')
711
712 if len(packages):
713 try:
714diff --git a/cloudinit/config/cc_resizefs.py b/cloudinit/config/cc_resizefs.py
715index 2edddd0..076b9d5 100644
716--- a/cloudinit/config/cc_resizefs.py
717+++ b/cloudinit/config/cc_resizefs.py
718@@ -197,6 +197,13 @@ def maybe_get_writable_device_path(devpath, info, log):
719 if devpath.startswith('gpt/'):
720 log.debug('We have a gpt label - just go ahead')
721 return devpath
722+ # Alternatively, our device could simply be a name as returned by gpart,
723+ # such as da0p3
724+ if not devpath.startswith('/dev/') and not os.path.exists(devpath):
725+ fulldevpath = '/dev/' + devpath.lstrip('/')
726+ log.debug("'%s' doesn't appear to be a valid device path. Trying '%s'",
727+ devpath, fulldevpath)
728+ devpath = fulldevpath
729
730 try:
731 statret = os.stat(devpath)
732diff --git a/cloudinit/config/cc_set_passwords.py b/cloudinit/config/cc_set_passwords.py
733index 5ef9737..4585e4d 100755
734--- a/cloudinit/config/cc_set_passwords.py
735+++ b/cloudinit/config/cc_set_passwords.py
736@@ -160,7 +160,7 @@ def handle(_name, cfg, cloud, log, args):
737 hashed_users = []
738 randlist = []
739 users = []
740- prog = re.compile(r'\$[1,2a,2y,5,6](\$.+){2}')
741+ prog = re.compile(r'\$(1|2a|2y|5|6)(\$.+){2}')
742 for line in plist:
743 u, p = line.split(':', 1)
744 if prog.match(p) is not None and ":" not in p:
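The cc_set_passwords regex change above fixes a subtle character-class bug: `[1,2a,2y,5,6]` matches any single character from the set {1, ',', 2, a, y, 5, 6}, not the multi-character crypt scheme ids `2a`/`2y`, so a literal comma slipped through. A small demonstration (hash values are illustrative placeholders):

```python
import re

# Old pattern: '[1,2a,2y,5,6]' is a character class, so it matches any
# single character from {1, ',', 2, a, y, 5, 6} rather than treating
# '2a'/'2y' as whole scheme ids; a literal ',' is wrongly accepted.
old = re.compile(r'\$[1,2a,2y,5,6](\$.+){2}')
# Fixed pattern: explicit alternation over the supported crypt scheme ids.
new = re.compile(r'\$(1|2a|2y|5|6)(\$.+){2}')

sha512_pwd = '$6$somesalt$somedigest'  # crypt-style hash (values illustrative)
comma_junk = '$,$salt$digest'          # ',' is not a valid scheme id

print(bool(new.match(sha512_pwd)))   # True
print(bool(old.match(comma_junk)))   # True - the old class matched ','
print(bool(new.match(comma_junk)))   # False
```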
745diff --git a/cloudinit/config/cc_write_files.py b/cloudinit/config/cc_write_files.py
746index 31d1db6..0b6546e 100644
747--- a/cloudinit/config/cc_write_files.py
748+++ b/cloudinit/config/cc_write_files.py
749@@ -49,6 +49,10 @@ binary gzip data can be specified and will be decoded before being written.
750 ...
751 path: /bin/arch
752 permissions: '0555'
753+ - content: |
754+ 15 * * * * root ship_logs
755+ path: /etc/crontab
756+ append: true
757 """
758
759 import base64
760@@ -113,7 +117,8 @@ def write_files(name, files):
761 contents = extract_contents(f_info.get('content', ''), extractions)
762 (u, g) = util.extract_usergroup(f_info.get('owner', DEFAULT_OWNER))
763 perms = decode_perms(f_info.get('permissions'), DEFAULT_PERMS)
764- util.write_file(path, contents, mode=perms)
765+ omode = 'ab' if util.get_cfg_option_bool(f_info, 'append') else 'wb'
766+ util.write_file(path, contents, omode=omode, mode=perms)
767 util.chownbyname(path, u, g)
768
769
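The cc_write_files hunk above switches the open mode on a per-file `append` flag. A minimal standalone sketch of that choice (the helper name and the use of plain `open` are illustrative; cloud-init goes through `util.write_file`):

```python
import os
import tempfile

def write_entry(path, content, append=False):
    # 'ab' preserves existing content, 'wb' truncates - the same choice the
    # diff makes via util.get_cfg_option_bool(f_info, 'append').
    omode = 'ab' if append else 'wb'
    with open(path, omode) as f:
        f.write(content.encode('utf-8'))

path = os.path.join(tempfile.mkdtemp(), 'crontab')
write_entry(path, '15 * * * * root ship_logs\n')
write_entry(path, '30 * * * * root rotate_logs\n', append=True)
with open(path) as f:
    print(f.read())  # both lines survive because the second write appended
```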
770diff --git a/cloudinit/config/tests/test_set_passwords.py b/cloudinit/config/tests/test_set_passwords.py
771index b051ec8..a2ea5ec 100644
772--- a/cloudinit/config/tests/test_set_passwords.py
773+++ b/cloudinit/config/tests/test_set_passwords.py
774@@ -68,4 +68,44 @@ class TestHandleSshPwauth(CiTestCase):
775 m_update.assert_called_with({optname: optval})
776 m_subp.assert_not_called()
777
778+
779+class TestSetPasswordsHandle(CiTestCase):
780+ """Test cc_set_passwords.handle"""
781+
782+ with_logs = True
783+
784+ def test_handle_on_empty_config(self):
785+ """handle logs that no password has changed when config is empty."""
786+ cloud = self.tmp_cloud(distro='ubuntu')
787+ setpass.handle(
788+ 'IGNORED', cfg={}, cloud=cloud, log=self.logger, args=[])
789+ self.assertEqual(
790+ "DEBUG: Leaving ssh config 'PasswordAuthentication' unchanged. "
791+ 'ssh_pwauth=None\n',
792+ self.logs.getvalue())
793+
794+ @mock.patch(MODPATH + "util.subp")
795+ def test_handle_on_chpasswd_list_parses_common_hashes(self, m_subp):
796+ """handle parses common password hashes."""
797+ cloud = self.tmp_cloud(distro='ubuntu')
798+ valid_hashed_pwds = [
799+ 'root:$2y$10$8BQjxjVByHA/Ee.O1bCXtO8S7Y5WojbXWqnqYpUW.BrPx/'
800+ 'Dlew1Va',
801+ 'ubuntu:$6$5hOurLPO$naywm3Ce0UlmZg9gG2Fl9acWCVEoakMMC7dR52q'
802+ 'SDexZbrN9z8yHxhUM2b.sxpguSwOlbOQSW/HpXazGGx3oo1']
803+ cfg = {'chpasswd': {'list': valid_hashed_pwds}}
804+ with mock.patch(MODPATH + 'util.subp') as m_subp:
805+ setpass.handle(
806+ 'IGNORED', cfg=cfg, cloud=cloud, log=self.logger, args=[])
807+ self.assertIn(
808+ 'DEBUG: Handling input for chpasswd as list.',
809+ self.logs.getvalue())
810+ self.assertIn(
811+ "DEBUG: Setting hashed password for ['root', 'ubuntu']",
812+ self.logs.getvalue())
813+ self.assertEqual(
814+ [mock.call(['chpasswd', '-e'],
815+ '\n'.join(valid_hashed_pwds) + '\n')],
816+ m_subp.call_args_list)
817+
818 # vi: ts=4 expandtab
819diff --git a/cloudinit/dhclient_hook.py b/cloudinit/dhclient_hook.py
820index 7f02d7f..72b51b6 100644
821--- a/cloudinit/dhclient_hook.py
822+++ b/cloudinit/dhclient_hook.py
823@@ -1,5 +1,8 @@
824 # This file is part of cloud-init. See LICENSE file for license information.
825
826+"""Run the dhclient hook to record network info."""
827+
828+import argparse
829 import os
830
831 from cloudinit import atomic_helper
832@@ -8,44 +11,75 @@ from cloudinit import stages
833
834 LOG = logging.getLogger(__name__)
835
836+NAME = "dhclient-hook"
837+UP = "up"
838+DOWN = "down"
839+EVENTS = (UP, DOWN)
840+
841+
842+def _get_hooks_dir():
843+ i = stages.Init()
844+ return os.path.join(i.paths.get_runpath(), 'dhclient.hooks')
845+
846+
847+def _filter_env_vals(info):
848+ """Given info (os.environ), return a dictionary with
849+ lower case keys for each entry starting with DHCP4_ or new_."""
850+ new_info = {}
851+ for k, v in info.items():
852+ if k.startswith("DHCP4_") or k.startswith("new_"):
853+ key = (k.replace('DHCP4_', '').replace('new_', '')).lower()
854+ new_info[key] = v
855+ return new_info
856+
857+
858+def run_hook(interface, event, data_d=None, env=None):
859+ if event not in EVENTS:
860+ raise ValueError("Unexpected event '%s'. Expected one of: %s" %
861+ (event, EVENTS))
862+ if data_d is None:
863+ data_d = _get_hooks_dir()
864+ if env is None:
865+ env = os.environ
866+ hook_file = os.path.join(data_d, interface + ".json")
867+
868+ if event == UP:
869+ if not os.path.exists(data_d):
870+ os.makedirs(data_d)
871+ atomic_helper.write_json(hook_file, _filter_env_vals(env))
872+ LOG.debug("Wrote dhclient options in %s", hook_file)
873+ elif event == DOWN:
874+ if os.path.exists(hook_file):
875+ os.remove(hook_file)
876+ LOG.debug("Removed dhclient options file %s", hook_file)
877+
878+
879+def get_parser(parser=None):
880+ if parser is None:
881+ parser = argparse.ArgumentParser(prog=NAME, description=__doc__)
882+ parser.add_argument(
883+ "event", help='event taken on the interface', choices=EVENTS)
884+ parser.add_argument(
885+ "interface", help='the network interface being acted upon')
886+ # cloud-init main uses 'action'
887+ parser.set_defaults(action=(NAME, handle_args))
888+ return parser
889+
890+
891+def handle_args(name, args, data_d=None):
892+ """Handle the Namespace args.
893+ Takes 'name' as passed by cloud-init main; it is not used here."""
894+ return run_hook(interface=args.interface, event=args.event, data_d=data_d)
895+
896+
897+if __name__ == '__main__':
898+ import sys
899+ parser = get_parser()
900+ args = parser.parse_args(args=sys.argv[1:])
901+ return_value = handle_args(
902+ NAME, args, data_d=os.environ.get('_CI_DHCP_HOOK_DATA_D'))
903+ if return_value:
904+ sys.exit(return_value)
905
906-class LogDhclient(object):
907-
908- def __init__(self, cli_args):
909- self.hooks_dir = self._get_hooks_dir()
910- self.net_interface = cli_args.net_interface
911- self.net_action = cli_args.net_action
912- self.hook_file = os.path.join(self.hooks_dir,
913- self.net_interface + ".json")
914-
915- @staticmethod
916- def _get_hooks_dir():
917- i = stages.Init()
918- return os.path.join(i.paths.get_runpath(), 'dhclient.hooks')
919-
920- def check_hooks_dir(self):
921- if not os.path.exists(self.hooks_dir):
922- os.makedirs(self.hooks_dir)
923- else:
924- # If the action is down and the json file exists, we need to
925- # delete the file
926- if self.net_action is 'down' and os.path.exists(self.hook_file):
927- os.remove(self.hook_file)
928-
929- @staticmethod
930- def get_vals(info):
931- new_info = {}
932- for k, v in info.items():
933- if k.startswith("DHCP4_") or k.startswith("new_"):
934- key = (k.replace('DHCP4_', '').replace('new_', '')).lower()
935- new_info[key] = v
936- return new_info
937-
938- def record(self):
939- envs = os.environ
940- if self.hook_file is None:
941- return
942- atomic_helper.write_json(self.hook_file, self.get_vals(envs))
943- LOG.debug("Wrote dhclient options in %s", self.hook_file)
944
945 # vi: ts=4 expandtab
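The rewritten dhclient hook keeps only the environment variables of interest. Its filtering step can be exercised on its own; the logic below restates `_filter_env_vals` so the snippet is self-contained:

```python
def filter_env_vals(info):
    # Keep DHCP4_* / new_* entries, strip the prefix and lowercase the key,
    # matching the _filter_env_vals helper introduced in the diff above.
    new_info = {}
    for k, v in info.items():
        if k.startswith("DHCP4_") or k.startswith("new_"):
            key = k.replace('DHCP4_', '').replace('new_', '').lower()
            new_info[key] = v
    return new_info

env = {
    'new_ip_address': '192.168.2.74',
    'DHCP4_SUBNET_MASK': '255.255.255.0',
    'PATH': '/usr/bin',  # unrelated variable: filtered out
}
print(filter_env_vals(env))
# {'ip_address': '192.168.2.74', 'subnet_mask': '255.255.255.0'}
```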
946diff --git a/cloudinit/handlers/jinja_template.py b/cloudinit/handlers/jinja_template.py
947index 3fa4097..ce3accf 100644
948--- a/cloudinit/handlers/jinja_template.py
949+++ b/cloudinit/handlers/jinja_template.py
950@@ -1,5 +1,6 @@
951 # This file is part of cloud-init. See LICENSE file for license information.
952
953+from errno import EACCES
954 import os
955 import re
956
957@@ -76,7 +77,14 @@ def render_jinja_payload_from_file(
958 raise RuntimeError(
959 'Cannot render jinja template vars. Instance data not yet'
960 ' present at %s' % instance_data_file)
961- instance_data = load_json(load_file(instance_data_file))
962+ try:
963+ instance_data = load_json(load_file(instance_data_file))
964+ except (IOError, OSError) as e:
965+ if e.errno == EACCES:
966+ raise RuntimeError(
967+ 'Cannot render jinja template vars. No read permission on'
968+ " '%s'. Try sudo" % instance_data_file)
969+
970 rendered_payload = render_jinja_payload(
971 payload, payload_fn, instance_data, debug)
972 if not rendered_payload:
973diff --git a/cloudinit/net/__init__.py b/cloudinit/net/__init__.py
974index ad98a59..3642fb1 100644
975--- a/cloudinit/net/__init__.py
976+++ b/cloudinit/net/__init__.py
977@@ -12,6 +12,7 @@ import re
978
979 from cloudinit.net.network_state import mask_to_net_prefix
980 from cloudinit import util
981+from cloudinit.url_helper import UrlError, readurl
982
983 LOG = logging.getLogger(__name__)
984 SYS_CLASS_NET = "/sys/class/net/"
985@@ -647,16 +648,36 @@ def get_ib_hwaddrs_by_interface():
986 return ret
987
988
989+def has_url_connectivity(url):
990+ """Return true when the instance has access to the provided URL
991+
992+ Logs a warning if url is not the expected format.
993+ """
994+ if not any([url.startswith('http://'), url.startswith('https://')]):
995+ LOG.warning(
996+ "Ignoring connectivity check. Expected URL beginning with http*://"
997+ " received '%s'", url)
998+ return False
999+ try:
1000+ readurl(url, timeout=5)
1001+ except UrlError:
1002+ return False
1003+ return True
1004+
1005+
1006 class EphemeralIPv4Network(object):
1007 """Context manager which sets up temporary static network configuration.
1008
1009- No operations are performed if the provided interface is already connected.
1010+ No operations are performed if the provided interface already has the
1011+ specified configuration.
1012+ This can be verified with the connectivity_url.
1013 If unconnected, bring up the interface with valid ip, prefix and broadcast.
1014 If router is provided setup a default route for that interface. Upon
1015 context exit, clean up the interface leaving no configuration behind.
1016 """
1017
1018- def __init__(self, interface, ip, prefix_or_mask, broadcast, router=None):
1019+ def __init__(self, interface, ip, prefix_or_mask, broadcast, router=None,
1020+ connectivity_url=None):
1021 """Setup context manager and validate call signature.
1022
1023 @param interface: Name of the network interface to bring up.
1024@@ -665,6 +686,8 @@ class EphemeralIPv4Network(object):
1025 prefix.
1026 @param broadcast: Broadcast address for the IPv4 network.
1027 @param router: Optionally the default gateway IP.
1028+ @param connectivity_url: Optionally, a URL to verify if a usable
1029+ connection already exists.
1030 """
1031 if not all([interface, ip, prefix_or_mask, broadcast]):
1032 raise ValueError(
1033@@ -675,6 +698,8 @@ class EphemeralIPv4Network(object):
1034 except ValueError as e:
1035 raise ValueError(
1036 'Cannot setup network: {0}'.format(e))
1037+
1038+ self.connectivity_url = connectivity_url
1039 self.interface = interface
1040 self.ip = ip
1041 self.broadcast = broadcast
1042@@ -683,6 +708,13 @@ class EphemeralIPv4Network(object):
1043
1044 def __enter__(self):
1045 """Perform ephemeral network setup if interface is not connected."""
1046+ if self.connectivity_url:
1047+ if has_url_connectivity(self.connectivity_url):
1048+ LOG.debug(
1049+ 'Skip ephemeral network setup, instance has connectivity'
1050+ ' to %s', self.connectivity_url)
1051+ return
1052+
1053 self._bringup_device()
1054 if self.router:
1055 self._bringup_router()
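`has_url_connectivity` gates the whole ephemeral setup: if the URL is already reachable, `__enter__` returns without touching the interface. A stdlib-only analogue of the check (cloud-init uses its own `readurl`/`UrlError`; `urllib` stands in for them here):

```python
import urllib.request
import urllib.error

def has_url_connectivity(url, timeout=5):
    # Mirror the diff: only http(s) URLs are attempted; anything else is
    # rejected up front (the real code also logs a warning).
    if not (url.startswith('http://') or url.startswith('https://')):
        return False
    try:
        urllib.request.urlopen(url, timeout=timeout)
    except (urllib.error.URLError, OSError):
        return False
    return True

print(has_url_connectivity('ftp://example.org'))  # False: unsupported scheme
```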
1056diff --git a/cloudinit/net/dhcp.py b/cloudinit/net/dhcp.py
1057index 12cf509..c98a97c 100644
1058--- a/cloudinit/net/dhcp.py
1059+++ b/cloudinit/net/dhcp.py
1060@@ -9,9 +9,11 @@ import logging
1061 import os
1062 import re
1063 import signal
1064+import time
1065
1066 from cloudinit.net import (
1067- EphemeralIPv4Network, find_fallback_nic, get_devicelist)
1068+ EphemeralIPv4Network, find_fallback_nic, get_devicelist,
1069+ has_url_connectivity)
1070 from cloudinit.net.network_state import mask_and_ipv4_to_bcast_addr as bcip
1071 from cloudinit import temp_utils
1072 from cloudinit import util
1073@@ -37,37 +39,69 @@ class NoDHCPLeaseError(Exception):
1074
1075
1076 class EphemeralDHCPv4(object):
1077- def __init__(self, iface=None):
1078+ def __init__(self, iface=None, connectivity_url=None):
1079 self.iface = iface
1080 self._ephipv4 = None
1081+ self.lease = None
1082+ self.connectivity_url = connectivity_url
1083
1084 def __enter__(self):
1085+ """Setup sandboxed dhcp context, unless connectivity_url can already be
1086+ reached."""
1087+ if self.connectivity_url:
1088+ if has_url_connectivity(self.connectivity_url):
1089+ LOG.debug(
1090+ 'Skip ephemeral DHCP setup, instance has connectivity'
1091+ ' to %s', self.connectivity_url)
1092+ return
1093+ return self.obtain_lease()
1094+
1095+ def __exit__(self, excp_type, excp_value, excp_traceback):
1096+ """Teardown sandboxed dhcp context."""
1097+ self.clean_network()
1098+
1099+ def clean_network(self):
1100+ """Exit the _ephipv4 context to tear down any ip configuration performed."""
1101+ if self.lease:
1102+ self.lease = None
1103+ if not self._ephipv4:
1104+ return
1105+ self._ephipv4.__exit__(None, None, None)
1106+
1107+ def obtain_lease(self):
1108+ """Perform dhcp discovery in a sandboxed environment if possible.
1109+
1110+ @return: A dict representing dhcp options on the most recent lease
1111+ obtained from the dhclient discovery if run, otherwise an error
1112+ is raised.
1113+
1114+ @raises: NoDHCPLeaseError if no leases could be obtained.
1115+ """
1116+ if self.lease:
1117+ return self.lease
1118 try:
1119 leases = maybe_perform_dhcp_discovery(self.iface)
1120 except InvalidDHCPLeaseFileError:
1121 raise NoDHCPLeaseError()
1122 if not leases:
1123 raise NoDHCPLeaseError()
1124- lease = leases[-1]
1125+ self.lease = leases[-1]
1126 LOG.debug("Received dhcp lease on %s for %s/%s",
1127- lease['interface'], lease['fixed-address'],
1128- lease['subnet-mask'])
1129+ self.lease['interface'], self.lease['fixed-address'],
1130+ self.lease['subnet-mask'])
1131 nmap = {'interface': 'interface', 'ip': 'fixed-address',
1132 'prefix_or_mask': 'subnet-mask',
1133 'broadcast': 'broadcast-address',
1134 'router': 'routers'}
1135- kwargs = dict([(k, lease.get(v)) for k, v in nmap.items()])
1136+ kwargs = dict([(k, self.lease.get(v)) for k, v in nmap.items()])
1137 if not kwargs['broadcast']:
1138 kwargs['broadcast'] = bcip(kwargs['prefix_or_mask'], kwargs['ip'])
1139+ if self.connectivity_url:
1140+ kwargs['connectivity_url'] = self.connectivity_url
1141 ephipv4 = EphemeralIPv4Network(**kwargs)
1142 ephipv4.__enter__()
1143 self._ephipv4 = ephipv4
1144- return lease
1145-
1146- def __exit__(self, excp_type, excp_value, excp_traceback):
1147- if not self._ephipv4:
1148- return
1149- self._ephipv4.__exit__(excp_type, excp_value, excp_traceback)
1150+ return self.lease
1151
1152
1153 def maybe_perform_dhcp_discovery(nic=None):
1154@@ -94,7 +128,9 @@ def maybe_perform_dhcp_discovery(nic=None):
1155 if not dhclient_path:
1156 LOG.debug('Skip dhclient configuration: No dhclient command found.')
1157 return []
1158- with temp_utils.tempdir(prefix='cloud-init-dhcp-', needs_exe=True) as tdir:
1159+ with temp_utils.tempdir(rmtree_ignore_errors=True,
1160+ prefix='cloud-init-dhcp-',
1161+ needs_exe=True) as tdir:
1162 # Use /var/tmp because /run/cloud-init/tmp is mounted noexec
1163 return dhcp_discovery(dhclient_path, nic, tdir)
1164
1165@@ -162,24 +198,39 @@ def dhcp_discovery(dhclient_cmd_path, interface, cleandir):
1166 '-pf', pid_file, interface, '-sf', '/bin/true']
1167 util.subp(cmd, capture=True)
1168
1169- # dhclient doesn't write a pid file until after it forks when it gets a
1170- # proper lease response. Since cleandir is a temp directory that gets
1171- # removed, we need to wait for that pidfile creation before the
1172- # cleandir is removed, otherwise we get FileNotFound errors.
1173+ # Wait for pid file and lease file to appear, and for the process
1174+ # named by the pid file to daemonize (have pid 1 as its parent). If we
1175+ # try to read the lease file before daemonization happens, we might try
1176+ # to read it before the dhclient has actually written it. We also have
1177+ # to wait until the dhclient has become a daemon so we can be sure to
1178+ # kill the correct process, thus freeing cleandir to be deleted back
1179+ # up the callstack.
1180 missing = util.wait_for_files(
1181 [pid_file, lease_file], maxwait=5, naplen=0.01)
1182 if missing:
1183 LOG.warning("dhclient did not produce expected files: %s",
1184 ', '.join(os.path.basename(f) for f in missing))
1185 return []
1186- pid_content = util.load_file(pid_file).strip()
1187- try:
1188- pid = int(pid_content)
1189- except ValueError:
1190- LOG.debug(
1191- "pid file contains non-integer content '%s'", pid_content)
1192- else:
1193- os.kill(pid, signal.SIGKILL)
1194+
1195+ ppid = 'unknown'
1196+ for _ in range(0, 1000):
1197+ pid_content = util.load_file(pid_file).strip()
1198+ try:
1199+ pid = int(pid_content)
1200+ except ValueError:
1201+ pass
1202+ else:
1203+ ppid = util.get_proc_ppid(pid)
1204+ if ppid == 1:
1205+ LOG.debug('killing dhclient with pid=%s', pid)
1206+ os.kill(pid, signal.SIGKILL)
1207+ return parse_dhcp_lease_file(lease_file)
1208+ time.sleep(0.01)
1209+
1210+ LOG.error(
1211+ 'dhclient(pid=%s, parentpid=%s) failed to daemonize after %s seconds',
1212+ pid_content, ppid, 0.01 * 1000
1213+ )
1214 return parse_dhcp_lease_file(lease_file)
1215
1216
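The replacement for the single pid-file read is a bounded poll: up to 1000 iterations at 0.01s (the 10.0 seconds quoted in the error message) waiting for the dhclient child to re-parent to init before it is killed. The wait pattern in isolation, where `read_pid`/`get_ppid` stand in for the pid-file read and `util.get_proc_ppid`:

```python
import time

def wait_for_daemonize(read_pid, get_ppid, attempts=1000, naplen=0.01):
    # Once the process's parent pid is 1 it has daemonized and is the
    # right process to signal; until then, keep napping.
    for _ in range(attempts):
        pid = read_pid()
        if pid is not None and get_ppid(pid) == 1:
            return pid
        time.sleep(naplen)
    return None  # never daemonized within attempts * naplen seconds

# Simulated child that re-parents to init on the third poll.
ppids = iter([4242, 4242, 1])
pid = wait_for_daemonize(lambda: 99, lambda p: next(ppids), naplen=0)
print(pid)  # 99
```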
1217diff --git a/cloudinit/net/eni.py b/cloudinit/net/eni.py
1218index c6f631a..6423632 100644
1219--- a/cloudinit/net/eni.py
1220+++ b/cloudinit/net/eni.py
1221@@ -371,22 +371,23 @@ class Renderer(renderer.Renderer):
1222 'gateway': 'gw',
1223 'metric': 'metric',
1224 }
1225+
1226+ default_gw = ''
1227 if route['network'] == '0.0.0.0' and route['netmask'] == '0.0.0.0':
1228- default_gw = " default gw %s" % route['gateway']
1229- content.append(up + default_gw + or_true)
1230- content.append(down + default_gw + or_true)
1231+ default_gw = ' default'
1232 elif route['network'] == '::' and route['prefix'] == 0:
1233- # ipv6!
1234- default_gw = " -A inet6 default gw %s" % route['gateway']
1235- content.append(up + default_gw + or_true)
1236- content.append(down + default_gw + or_true)
1237- else:
1238- route_line = ""
1239- for k in ['network', 'netmask', 'gateway', 'metric']:
1240- if k in route:
1241- route_line += " %s %s" % (mapping[k], route[k])
1242- content.append(up + route_line + or_true)
1243- content.append(down + route_line + or_true)
1244+ default_gw = ' -A inet6 default'
1245+
1246+ route_line = ''
1247+ for k in ['network', 'netmask', 'gateway', 'metric']:
1248+ if default_gw and k in ['network', 'netmask']:
1249+ continue
1250+ if k == 'gateway':
1251+ route_line += '%s %s %s' % (default_gw, mapping[k], route[k])
1252+ elif k in route:
1253+ route_line += ' %s %s' % (mapping[k], route[k])
1254+ content.append(up + route_line + or_true)
1255+ content.append(down + route_line + or_true)
1256 return content
1257
1258 def _render_iface(self, iface, render_hwaddress=False):
1259diff --git a/cloudinit/net/netplan.py b/cloudinit/net/netplan.py
1260index bc1087f..21517fd 100644
1261--- a/cloudinit/net/netplan.py
1262+++ b/cloudinit/net/netplan.py
1263@@ -114,13 +114,13 @@ def _extract_addresses(config, entry, ifname):
1264 for route in subnet.get('routes', []):
1265 to_net = "%s/%s" % (route.get('network'),
1266 route.get('prefix'))
1267- route = {
1268+ new_route = {
1269 'via': route.get('gateway'),
1270 'to': to_net,
1271 }
1272 if 'metric' in route:
1273- route.update({'metric': route.get('metric', 100)})
1274- routes.append(route)
1275+ new_route.update({'metric': route.get('metric', 100)})
1276+ routes.append(new_route)
1277
1278 addresses.append(addr)
1279
1280diff --git a/cloudinit/net/sysconfig.py b/cloudinit/net/sysconfig.py
1281index 9c16d3a..fd8e501 100644
1282--- a/cloudinit/net/sysconfig.py
1283+++ b/cloudinit/net/sysconfig.py
1284@@ -10,11 +10,14 @@ from cloudinit.distros.parsers import resolv_conf
1285 from cloudinit import log as logging
1286 from cloudinit import util
1287
1288+from configobj import ConfigObj
1289+
1290 from . import renderer
1291 from .network_state import (
1292 is_ipv6_addr, net_prefix_to_ipv4_mask, subnet_is_ipv6)
1293
1294 LOG = logging.getLogger(__name__)
1295+NM_CFG_FILE = "/etc/NetworkManager/NetworkManager.conf"
1296
1297
1298 def _make_header(sep='#'):
1299@@ -46,6 +49,24 @@ def _quote_value(value):
1300 return value
1301
1302
1303+def enable_ifcfg_rh(path):
304+ """Add ifcfg-rh to NetworkManager.conf plugins if main section is present"""
1305+ config = ConfigObj(path)
1306+ if 'main' in config:
1307+ if 'plugins' in config['main']:
1308+ if 'ifcfg-rh' in config['main']['plugins']:
1309+ return
1310+ else:
1311+ config['main']['plugins'] = []
1312+
1313+ if isinstance(config['main']['plugins'], list):
1314+ config['main']['plugins'].append('ifcfg-rh')
1315+ else:
1316+ config['main']['plugins'] = [config['main']['plugins'], 'ifcfg-rh']
1317+ config.write()
1318+ LOG.debug('Enabled ifcfg-rh NetworkManager plugins')
1319+
1320+
1321 class ConfigMap(object):
1322 """Sysconfig like dictionary object."""
1323
1324@@ -156,13 +177,23 @@ class Route(ConfigMap):
1325 _quote_value(gateway_value)))
1326 buf.write("%s=%s\n" % ('NETMASK' + str(reindex),
1327 _quote_value(netmask_value)))
1328+ metric_key = 'METRIC' + index
1329+ if metric_key in self._conf:
1330+ metric_value = str(self._conf['METRIC' + index])
1331+ buf.write("%s=%s\n" % ('METRIC' + str(reindex),
1332+ _quote_value(metric_value)))
1333 elif proto == "ipv6" and self.is_ipv6_route(address_value):
1334 netmask_value = str(self._conf['NETMASK' + index])
1335 gateway_value = str(self._conf['GATEWAY' + index])
1336- buf.write("%s/%s via %s dev %s\n" % (address_value,
1337- netmask_value,
1338- gateway_value,
1339- self._route_name))
1340+ metric_value = (
1341+ 'metric ' + str(self._conf['METRIC' + index])
1342+ if 'METRIC' + index in self._conf else '')
1343+ buf.write(
1344+ "%s/%s via %s %s dev %s\n" % (address_value,
1345+ netmask_value,
1346+ gateway_value,
1347+ metric_value,
1348+ self._route_name))
1349
1350 return buf.getvalue()
1351
1352@@ -370,6 +401,9 @@ class Renderer(renderer.Renderer):
1353 else:
1354 iface_cfg['GATEWAY'] = subnet['gateway']
1355
1356+ if 'metric' in subnet:
1357+ iface_cfg['METRIC'] = subnet['metric']
1358+
1359 if 'dns_search' in subnet:
1360 iface_cfg['DOMAIN'] = ' '.join(subnet['dns_search'])
1361
1362@@ -414,15 +448,19 @@ class Renderer(renderer.Renderer):
1363 else:
1364 iface_cfg['GATEWAY'] = route['gateway']
1365 route_cfg.has_set_default_ipv4 = True
1366+ if 'metric' in route:
1367+ iface_cfg['METRIC'] = route['metric']
1368
1369 else:
1370 gw_key = 'GATEWAY%s' % route_cfg.last_idx
1371 nm_key = 'NETMASK%s' % route_cfg.last_idx
1372 addr_key = 'ADDRESS%s' % route_cfg.last_idx
1373+ metric_key = 'METRIC%s' % route_cfg.last_idx
1374 route_cfg.last_idx += 1
1375 # add default routes only to ifcfg files, not
1376 # to route-* or route6-*
1377 for (old_key, new_key) in [('gateway', gw_key),
1378+ ('metric', metric_key),
1379 ('netmask', nm_key),
1380 ('network', addr_key)]:
1381 if old_key in route:
1382@@ -519,6 +557,8 @@ class Renderer(renderer.Renderer):
1383 content.add_nameserver(nameserver)
1384 for searchdomain in network_state.dns_searchdomains:
1385 content.add_search_domain(searchdomain)
1386+ if not str(content):
1387+ return None
1388 header = _make_header(';')
1389 content_str = str(content)
1390 if not content_str.startswith(header):
1391@@ -628,7 +668,8 @@ class Renderer(renderer.Renderer):
1392 dns_path = util.target_path(target, self.dns_path)
1393 resolv_content = self._render_dns(network_state,
1394 existing_dns_path=dns_path)
1395- util.write_file(dns_path, resolv_content, file_mode)
1396+ if resolv_content:
1397+ util.write_file(dns_path, resolv_content, file_mode)
1398 if self.networkmanager_conf_path:
1399 nm_conf_path = util.target_path(target,
1400 self.networkmanager_conf_path)
1401@@ -640,6 +681,8 @@ class Renderer(renderer.Renderer):
1402 netrules_content = self._render_persistent_net(network_state)
1403 netrules_path = util.target_path(target, self.netrules_path)
1404 util.write_file(netrules_path, netrules_content, file_mode)
1405+ if available_nm(target=target):
1406+ enable_ifcfg_rh(util.target_path(target, path=NM_CFG_FILE))
1407
1408 sysconfig_path = util.target_path(target, templates.get('control'))
1409 # Distros configuring /etc/sysconfig/network as a file e.g. Centos
1410@@ -654,6 +697,13 @@ class Renderer(renderer.Renderer):
1411
1412
1413 def available(target=None):
1414+ sysconfig = available_sysconfig(target=target)
1415+ nm = available_nm(target=target)
1416+
1417+ return any([nm, sysconfig])
1418+
1419+
1420+def available_sysconfig(target=None):
1421 expected = ['ifup', 'ifdown']
1422 search = ['/sbin', '/usr/sbin']
1423 for p in expected:
1424@@ -669,4 +719,10 @@ def available(target=None):
1425 return True
1426
1427
1428+def available_nm(target=None):
1429+ if not os.path.isfile(util.target_path(target, path=NM_CFG_FILE)):
1430+ return False
1431+ return True
1432+
1433+
1434 # vi: ts=4 expandtab
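`enable_ifcfg_rh` edits NetworkManager.conf with configobj. A stdlib sketch of the same idempotent "append a plugin only if [main] exists" logic, using `configparser` instead so the example runs without configobj; treat the exact serialization as illustrative:

```python
import configparser
import io

def enable_ifcfg_rh(text):
    # Append 'ifcfg-rh' to the [main] plugins list if a [main] section
    # exists and the plugin is not already listed; otherwise return the
    # content unchanged, as the diff's function does for the real file.
    cfg = configparser.ConfigParser()
    cfg.read_string(text)
    if 'main' not in cfg:
        return text
    plugins = [p for p in cfg['main'].get('plugins', '').split(',') if p]
    if 'ifcfg-rh' in plugins:
        return text
    plugins.append('ifcfg-rh')
    cfg['main']['plugins'] = ','.join(plugins)
    out = io.StringIO()
    cfg.write(out)
    return out.getvalue()

updated = enable_ifcfg_rh('[main]\nplugins=keyfile\n')
print(updated)  # [main] section now lists keyfile,ifcfg-rh
```

Running it twice returns the same content, mirroring the early return the diff takes when 'ifcfg-rh' is already present.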
1435diff --git a/cloudinit/net/tests/test_dhcp.py b/cloudinit/net/tests/test_dhcp.py
1436index db25b6f..79e8842 100644
1437--- a/cloudinit/net/tests/test_dhcp.py
1438+++ b/cloudinit/net/tests/test_dhcp.py
1439@@ -1,15 +1,17 @@
1440 # This file is part of cloud-init. See LICENSE file for license information.
1441
1442+import httpretty
1443 import os
1444 import signal
1445 from textwrap import dedent
1446
1447+import cloudinit.net as net
1448 from cloudinit.net.dhcp import (
1449 InvalidDHCPLeaseFileError, maybe_perform_dhcp_discovery,
1450 parse_dhcp_lease_file, dhcp_discovery, networkd_load_leases)
1451 from cloudinit.util import ensure_file, write_file
1452 from cloudinit.tests.helpers import (
1453- CiTestCase, mock, populate_dir, wrap_and_call)
1454+ CiTestCase, HttprettyTestCase, mock, populate_dir, wrap_and_call)
1455
1456
1457 class TestParseDHCPLeasesFile(CiTestCase):
1458@@ -143,16 +145,20 @@ class TestDHCPDiscoveryClean(CiTestCase):
1459 'subnet-mask': '255.255.255.0', 'routers': '192.168.2.1'}],
1460 dhcp_discovery(dhclient_script, 'eth9', tmpdir))
1461 self.assertIn(
1462- "pid file contains non-integer content ''", self.logs.getvalue())
1463+ "dhclient(pid=, parentpid=unknown) failed "
1464+ "to daemonize after 10.0 seconds",
1465+ self.logs.getvalue())
1466 m_kill.assert_not_called()
1467
1468+ @mock.patch('cloudinit.net.dhcp.util.get_proc_ppid')
1469 @mock.patch('cloudinit.net.dhcp.os.kill')
1470 @mock.patch('cloudinit.net.dhcp.util.wait_for_files')
1471 @mock.patch('cloudinit.net.dhcp.util.subp')
1472 def test_dhcp_discovery_run_in_sandbox_waits_on_lease_and_pid(self,
1473 m_subp,
1474 m_wait,
1475- m_kill):
1476+ m_kill,
1477+ m_getppid):
1478 """dhcp_discovery waits for the presence of pidfile and dhcp.leases."""
1479 tmpdir = self.tmp_dir()
1480 dhclient_script = os.path.join(tmpdir, 'dhclient.orig')
1481@@ -162,6 +168,7 @@ class TestDHCPDiscoveryClean(CiTestCase):
1482 pidfile = self.tmp_path('dhclient.pid', tmpdir)
1483 leasefile = self.tmp_path('dhcp.leases', tmpdir)
1484 m_wait.return_value = [pidfile] # Return the missing pidfile wait for
1485+ m_getppid.return_value = 1 # Indicate that dhclient has daemonized
1486 self.assertEqual([], dhcp_discovery(dhclient_script, 'eth9', tmpdir))
1487 self.assertEqual(
1488 mock.call([pidfile, leasefile], maxwait=5, naplen=0.01),
1489@@ -171,9 +178,10 @@ class TestDHCPDiscoveryClean(CiTestCase):
1490 self.logs.getvalue())
1491 m_kill.assert_not_called()
1492
1493+ @mock.patch('cloudinit.net.dhcp.util.get_proc_ppid')
1494 @mock.patch('cloudinit.net.dhcp.os.kill')
1495 @mock.patch('cloudinit.net.dhcp.util.subp')
1496- def test_dhcp_discovery_run_in_sandbox(self, m_subp, m_kill):
1497+ def test_dhcp_discovery_run_in_sandbox(self, m_subp, m_kill, m_getppid):
1498 """dhcp_discovery brings up the interface and runs dhclient.
1499
1500 It also returns the parsed dhcp.leases file generated in the sandbox.
1501@@ -195,6 +203,7 @@ class TestDHCPDiscoveryClean(CiTestCase):
1502 pid_file = os.path.join(tmpdir, 'dhclient.pid')
1503 my_pid = 1
1504 write_file(pid_file, "%d\n" % my_pid)
1505+ m_getppid.return_value = 1 # Indicate that dhclient has daemonized
1506
1507 self.assertItemsEqual(
1508 [{'interface': 'eth9', 'fixed-address': '192.168.2.74',
1509@@ -321,3 +330,37 @@ class TestSystemdParseLeases(CiTestCase):
1510 '9': self.lxd_lease})
1511 self.assertEqual({'1': self.azure_parsed, '9': self.lxd_parsed},
1512 networkd_load_leases(self.lease_d))
1513+
1514+
1515+class TestEphemeralDhcpNoNetworkSetup(HttprettyTestCase):
1516+
1517+ @mock.patch('cloudinit.net.dhcp.maybe_perform_dhcp_discovery')
1518+ def test_ephemeral_dhcp_no_network_if_url_connectivity(self, m_dhcp):
1519+ """No EphemeralDhcp4 network setup when connectivity_url succeeds."""
1520+ url = 'http://example.org/index.html'
1521+
1522+ httpretty.register_uri(httpretty.GET, url)
1523+ with net.dhcp.EphemeralDHCPv4(connectivity_url=url) as lease:
1524+ self.assertIsNone(lease)
1525+ # Ensure that no dhcp discovery is performed:
1526+ m_dhcp.assert_not_called()
1527+
1528+ @mock.patch('cloudinit.net.dhcp.util.subp')
1529+ @mock.patch('cloudinit.net.dhcp.maybe_perform_dhcp_discovery')
1530+ def test_ephemeral_dhcp_setup_network_if_url_connectivity(
1531+ self, m_dhcp, m_subp):
1532+ """EphemeralDhcp4 sets up network when connectivity_url fails."""
1533+ url = 'http://example.org/index.html'
1534+ fake_lease = {
1535+ 'interface': 'eth9', 'fixed-address': '192.168.2.2',
1536+ 'subnet-mask': '255.255.0.0'}
1537+ m_dhcp.return_value = [fake_lease]
1538+ m_subp.return_value = ('', '')
1539+
1540+ httpretty.register_uri(httpretty.GET, url, body={}, status=404)
1541+ with net.dhcp.EphemeralDHCPv4(connectivity_url=url) as lease:
1542+ self.assertEqual(fake_lease, lease)
1543+ # Ensure that dhcp discovery occurs
1544+ self.assertEqual(1, m_dhcp.call_count)
1545+
1546+# vi: ts=4 expandtab
1547diff --git a/cloudinit/net/tests/test_init.py b/cloudinit/net/tests/test_init.py
1548index 58e0a59..f55c31e 100644
1549--- a/cloudinit/net/tests/test_init.py
1550+++ b/cloudinit/net/tests/test_init.py
1551@@ -2,14 +2,16 @@
1552
1553 import copy
1554 import errno
1555+import httpretty
1556 import mock
1557 import os
1558+import requests
1559 import textwrap
1560 import yaml
1561
1562 import cloudinit.net as net
1563 from cloudinit.util import ensure_file, write_file, ProcessExecutionError
1564-from cloudinit.tests.helpers import CiTestCase
1565+from cloudinit.tests.helpers import CiTestCase, HttprettyTestCase
1566
1567
1568 class TestSysDevPath(CiTestCase):
1569@@ -458,6 +460,22 @@ class TestEphemeralIPV4Network(CiTestCase):
1570 self.assertEqual(expected_setup_calls, m_subp.call_args_list)
1571 m_subp.assert_has_calls(expected_teardown_calls)
1572
1573+ @mock.patch('cloudinit.net.readurl')
1574+ def test_ephemeral_ipv4_no_network_if_url_connectivity(
1575+ self, m_readurl, m_subp):
1576+ """No network setup is performed if we can successfully connect to
1577+ connectivity_url."""
1578+ params = {
1579+ 'interface': 'eth0', 'ip': '192.168.2.2',
1580+ 'prefix_or_mask': '255.255.255.0', 'broadcast': '192.168.2.255',
1581+ 'connectivity_url': 'http://example.org/index.html'}
1582+
1583+ with net.EphemeralIPv4Network(**params):
1584+ self.assertEqual([mock.call('http://example.org/index.html',
1585+ timeout=5)], m_readurl.call_args_list)
1586+ # Ensure that no teardown happens:
1587+ m_subp.assert_not_called()
1588+
1589 def test_ephemeral_ipv4_network_noop_when_configured(self, m_subp):
1590 """EphemeralIPv4Network handles exception when address is setup.
1591
1592@@ -619,3 +637,35 @@ class TestApplyNetworkCfgNames(CiTestCase):
1593 def test_apply_v2_renames_raises_runtime_error_on_unknown_version(self):
1594 with self.assertRaises(RuntimeError):
1595 net.apply_network_config_names(yaml.load("version: 3"))
1596+
1597+
1598+class TestHasURLConnectivity(HttprettyTestCase):
1599+
1600+ def setUp(self):
1601+ super(TestHasURLConnectivity, self).setUp()
1602+ self.url = 'http://fake/'
1603+ self.kwargs = {'allow_redirects': True, 'timeout': 5.0}
1604+
1605+ @mock.patch('cloudinit.net.readurl')
1606+ def test_url_timeout_on_connectivity_check(self, m_readurl):
1607+ """A timeout of 5 seconds is provided when reading a url."""
1608+ self.assertTrue(
1609+ net.has_url_connectivity(self.url), 'Expected True on url connect')
1610+
1611+ def test_true_on_url_connectivity_success(self):
1612+ httpretty.register_uri(httpretty.GET, self.url)
1613+ self.assertTrue(
1614+ net.has_url_connectivity(self.url), 'Expected True on url connect')
1615+
1616+ @mock.patch('requests.Session.request')
1617+ def test_false_on_url_connectivity_timeout(self, m_request):
1618+ """A timeout raised accessing the url will return False."""
1619+ m_request.side_effect = requests.Timeout('Fake Connection Timeout')
1620+ self.assertFalse(
1621+ net.has_url_connectivity(self.url),
1622+ 'Expected False on url timeout')
1623+
1624+ def test_false_on_url_connectivity_failure(self):
1625+ httpretty.register_uri(httpretty.GET, self.url, body={}, status=404)
1626+ self.assertFalse(
1627+ net.has_url_connectivity(self.url), 'Expected False on url fail')
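For context on the new `has_url_connectivity` helper these tests exercise, here is a minimal standalone sketch of the probe. It is an illustration only, not the merged implementation: it uses stdlib `urllib` in place of cloud-init's `readurl` wrapper, and the default timeout mirrors the value asserted in the tests above.

```python
import urllib.error
import urllib.request


def has_url_connectivity(url, timeout=5):
    """Return True if an HTTP GET of url succeeds within timeout seconds.

    Any URL or socket error (timeout, refused connection, HTTP 4xx/5xx)
    is treated as "no connectivity" and returns False.
    """
    try:
        urllib.request.urlopen(url, timeout=timeout)
    except (urllib.error.URLError, OSError):
        return False
    return True
```

Note that an HTTP 404 raises `HTTPError` (a `URLError` subclass) from `urlopen`, so a reachable-but-failing URL returns False, matching `test_false_on_url_connectivity_failure` above.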
1628diff --git a/cloudinit/sources/DataSourceAzure.py b/cloudinit/sources/DataSourceAzure.py
1629index 39391d0..a4f998b 100644
1630--- a/cloudinit/sources/DataSourceAzure.py
1631+++ b/cloudinit/sources/DataSourceAzure.py
1632@@ -22,7 +22,8 @@ from cloudinit.event import EventType
1633 from cloudinit.net.dhcp import EphemeralDHCPv4
1634 from cloudinit import sources
1635 from cloudinit.sources.helpers.azure import get_metadata_from_fabric
1636-from cloudinit.url_helper import readurl, UrlError
1637+from cloudinit.sources.helpers import netlink
1638+from cloudinit.url_helper import UrlError, readurl, retry_on_url_exc
1639 from cloudinit import util
1640
1641 LOG = logging.getLogger(__name__)
1642@@ -57,7 +58,7 @@ IMDS_URL = "http://169.254.169.254/metadata/"
1643 # List of static scripts and network config artifacts created by
1644 # stock ubuntu supported images.
1645 UBUNTU_EXTENDED_NETWORK_SCRIPTS = [
1646- '/etc/netplan/90-azure-hotplug.yaml',
1647+ '/etc/netplan/90-hotplug-azure.yaml',
1648 '/usr/local/sbin/ephemeral_eth.sh',
1649 '/etc/udev/rules.d/10-net-device-added.rules',
1650 '/run/network/interfaces.ephemeral.d',
1651@@ -207,7 +208,9 @@ BUILTIN_DS_CONFIG = {
1652 },
1653 'disk_aliases': {'ephemeral0': RESOURCE_DISK_PATH},
1654 'dhclient_lease_file': LEASE_FILE,
1655+ 'apply_network_config': True, # Use IMDS published network configuration
1656 }
1657+# RELEASE_BLOCKER: Xenial and earlier apply_network_config default is False
1658
1659 BUILTIN_CLOUD_CONFIG = {
1660 'disk_setup': {
1661@@ -278,6 +281,7 @@ class DataSourceAzure(sources.DataSource):
1662 self._network_config = None
1663 # Regenerate network config new_instance boot and every boot
1664 self.update_events['network'].add(EventType.BOOT)
1665+ self._ephemeral_dhcp_ctx = None
1666
1667 def __str__(self):
1668 root = sources.DataSource.__str__(self)
1669@@ -404,10 +408,15 @@ class DataSourceAzure(sources.DataSource):
1670 LOG.warning("%s was not mountable", cdev)
1671 continue
1672
1673- if reprovision or self._should_reprovision(ret):
1674+ perform_reprovision = reprovision or self._should_reprovision(ret)
1675+ if perform_reprovision:
1676+ if util.is_FreeBSD():
1677+ msg = "FreeBSD is not supported for PPS VMs"
1678+ LOG.error(msg)
1679+ raise sources.InvalidMetaDataException(msg)
1680 ret = self._reprovision()
1681 imds_md = get_metadata_from_imds(
1682- self.fallback_interface, retries=3)
1683+ self.fallback_interface, retries=10)
1684 (md, userdata_raw, cfg, files) = ret
1685 self.seed = cdev
1686 crawled_data.update({
1687@@ -432,6 +441,18 @@ class DataSourceAzure(sources.DataSource):
1688 crawled_data['metadata']['random_seed'] = seed
1689 crawled_data['metadata']['instance-id'] = util.read_dmi_data(
1690 'system-uuid')
1691+
1692+ if perform_reprovision:
1693+ LOG.info("Reporting ready to Azure after getting ReprovisionData")
1694+ use_cached_ephemeral = (net.is_up(self.fallback_interface) and
1695+ getattr(self, '_ephemeral_dhcp_ctx', None))
1696+ if use_cached_ephemeral:
1697+ self._report_ready(lease=self._ephemeral_dhcp_ctx.lease)
1698+ self._ephemeral_dhcp_ctx.clean_network() # Teardown ephemeral
1699+ else:
1700+ with EphemeralDHCPv4() as lease:
1701+ self._report_ready(lease=lease)
1702+
1703 return crawled_data
1704
1705 def _is_platform_viable(self):
1706@@ -458,7 +479,8 @@ class DataSourceAzure(sources.DataSource):
1707 except sources.InvalidMetaDataException as e:
1708 LOG.warning('Could not crawl Azure metadata: %s', e)
1709 return False
1710- if self.distro and self.distro.name == 'ubuntu':
1711+ if (self.distro and self.distro.name == 'ubuntu' and
1712+ self.ds_cfg.get('apply_network_config')):
1713 maybe_remove_ubuntu_network_config_scripts()
1714
1715 # Process crawled data and augment with various config defaults
1716@@ -506,8 +528,8 @@ class DataSourceAzure(sources.DataSource):
1717 response. Then return the returned JSON object."""
1718 url = IMDS_URL + "reprovisiondata?api-version=2017-04-02"
1719 headers = {"Metadata": "true"}
1720+ nl_sock = None
1721 report_ready = bool(not os.path.isfile(REPORTED_READY_MARKER_FILE))
1722- LOG.debug("Start polling IMDS")
1723
1724 def exc_cb(msg, exception):
1725 if isinstance(exception, UrlError) and exception.code == 404:
1726@@ -516,25 +538,47 @@ class DataSourceAzure(sources.DataSource):
1727 # call DHCP and setup the ephemeral network to acquire the new IP.
1728 return False
1729
1730+ LOG.debug("Wait for vnetswitch to happen")
1731 while True:
1732 try:
1733- with EphemeralDHCPv4() as lease:
1734- if report_ready:
1735- path = REPORTED_READY_MARKER_FILE
1736- LOG.info(
1737- "Creating a marker file to report ready: %s", path)
1738- util.write_file(path, "{pid}: {time}\n".format(
1739- pid=os.getpid(), time=time()))
1740- self._report_ready(lease=lease)
1741- report_ready = False
1742+ # Save our EphemeralDHCPv4 context so we avoid repeated dhcp
1743+ self._ephemeral_dhcp_ctx = EphemeralDHCPv4()
1744+ lease = self._ephemeral_dhcp_ctx.obtain_lease()
1745+ if report_ready:
1746+ try:
1747+ nl_sock = netlink.create_bound_netlink_socket()
1748+ except netlink.NetlinkCreateSocketError as e:
1749+ LOG.warning(e)
1750+ self._ephemeral_dhcp_ctx.clean_network()
1751+ return
1752+ path = REPORTED_READY_MARKER_FILE
1753+ LOG.info(
1754+ "Creating a marker file to report ready: %s", path)
1755+ util.write_file(path, "{pid}: {time}\n".format(
1756+ pid=os.getpid(), time=time()))
1757+ self._report_ready(lease=lease)
1758+ report_ready = False
1759+ try:
1760+ netlink.wait_for_media_disconnect_connect(
1761+ nl_sock, lease['interface'])
1762+ except AssertionError as error:
1763+ LOG.error(error)
1764+ return
1765+ self._ephemeral_dhcp_ctx.clean_network()
1766+ else:
1767 return readurl(url, timeout=1, headers=headers,
1768- exception_cb=exc_cb, infinite=True).contents
1769+ exception_cb=exc_cb, infinite=True,
1770+ log_req_resp=False).contents
1771 except UrlError:
1772+ # Teardown our EphemeralDHCPv4 context on failure as we retry
1773+ self._ephemeral_dhcp_ctx.clean_network()
1774 pass
1775+ finally:
1776+ if nl_sock:
1777+ nl_sock.close()
1778
1779 def _report_ready(self, lease):
1780- """Tells the fabric provisioning has completed
1781- before we go into our polling loop."""
1782+ """Tell the fabric that provisioning has completed."""
1783 try:
1784 get_metadata_from_fabric(None, lease['unknown-245'])
1785 except Exception:
1786@@ -619,7 +663,11 @@ class DataSourceAzure(sources.DataSource):
1787 the blacklisted devices.
1788 """
1789 if not self._network_config:
1790- self._network_config = parse_network_config(self._metadata_imds)
1791+ if self.ds_cfg.get('apply_network_config'):
1792+ nc_src = self._metadata_imds
1793+ else:
1794+ nc_src = None
1795+ self._network_config = parse_network_config(nc_src)
1796 return self._network_config
1797
1798
1799@@ -700,7 +748,7 @@ def can_dev_be_reformatted(devpath, preserve_ntfs):
1800 file_count = util.mount_cb(cand_path, count_files, mtype="ntfs",
1801 update_env_for_mount={'LANG': 'C'})
1802 except util.MountFailedError as e:
1803- if "mount: unknown filesystem type 'ntfs'" in str(e):
1804+ if "unknown filesystem type 'ntfs'" in str(e):
1805 return True, (bmsg + ' but this system cannot mount NTFS,'
1806 ' assuming there are no important files.'
1807 ' Formatting allowed.')
1808@@ -928,12 +976,12 @@ def read_azure_ovf(contents):
1809 lambda n:
1810 n.localName == "LinuxProvisioningConfigurationSet")
1811
1812- if len(results) == 0:
1813+ if len(lpcs_nodes) == 0:
1814 raise NonAzureDataSource("No LinuxProvisioningConfigurationSet")
1815- if len(results) > 1:
1816+ if len(lpcs_nodes) > 1:
1817 raise BrokenAzureDataSource("found '%d' %ss" %
1818- ("LinuxProvisioningConfigurationSet",
1819- len(results)))
1820+ (len(lpcs_nodes),
1821+ "LinuxProvisioningConfigurationSet"))
1822 lpcs = lpcs_nodes[0]
1823
1824 if not lpcs.hasChildNodes():
1825@@ -1162,17 +1210,12 @@ def get_metadata_from_imds(fallback_nic, retries):
1826
1827 def _get_metadata_from_imds(retries):
1828
1829- def retry_on_url_error(msg, exception):
1830- if isinstance(exception, UrlError) and exception.code == 404:
1831- return True # Continue retries
1832- return False # Stop retries on all other exceptions
1833-
1834 url = IMDS_URL + "instance?api-version=2017-12-01"
1835 headers = {"Metadata": "true"}
1836 try:
1837 response = readurl(
1838 url, timeout=1, headers=headers, retries=retries,
1839- exception_cb=retry_on_url_error)
1840+ exception_cb=retry_on_url_exc)
1841 except Exception as e:
1842 LOG.debug('Ignoring IMDS instance metadata: %s', e)
1843 return {}
1844@@ -1195,7 +1238,7 @@ def maybe_remove_ubuntu_network_config_scripts(paths=None):
1845 additional interfaces which get attached by a customer at some point
1846 after initial boot. Since the Azure datasource can now regenerate
1847 network configuration as metadata reports these new devices, we no longer
1848- want the udev rules or netplan's 90-azure-hotplug.yaml to configure
1849+ want the udev rules or netplan's 90-hotplug-azure.yaml to configure
1850 networking on eth1 or greater as it might collide with cloud-init's
1851 configuration.
1852
1853diff --git a/cloudinit/sources/DataSourceNoCloud.py b/cloudinit/sources/DataSourceNoCloud.py
1854index 9010f06..6860f0c 100644
1855--- a/cloudinit/sources/DataSourceNoCloud.py
1856+++ b/cloudinit/sources/DataSourceNoCloud.py
1857@@ -311,6 +311,35 @@ def parse_cmdline_data(ds_id, fill, cmdline=None):
1858 return True
1859
1860
1861+def _maybe_remove_top_network(cfg):
1862+ """If network-config contains top level 'network' key, then remove it.
1863+
1864+ Some providers of network configuration may provide a top level
1865+ 'network' key (LP: #1798117) even though it is not necessary.
1866+
1867+ Be friendly and remove it if it really seems so.
1868+
1869+ Return the original value if no change or the updated value if changed."""
1870+ nullval = object()
1871+ network_val = cfg.get('network', nullval)
1872+ if network_val is nullval:
1873+ return cfg
1874+ bmsg = 'Top level network key in network-config %s: %s'
1875+ if not isinstance(network_val, dict):
1876+ LOG.debug(bmsg, "was not a dict", cfg)
1877+ return cfg
1878+ if len(list(cfg.keys())) != 1:
1879+ LOG.debug(bmsg, "had multiple top level keys", cfg)
1880+ return cfg
1881+ if network_val.get('config') == "disabled":
1882+ LOG.debug(bmsg, "was config/disabled", cfg)
1883+ elif not all(('config' in network_val, 'version' in network_val)):
1884+ LOG.debug(bmsg, "but missing 'config' or 'version'", cfg)
1885+ return cfg
1886+ LOG.debug(bmsg, "fixed by removing the top level 'network' key", cfg)
1887+ return network_val
1888+
1889+
1890 def _merge_new_seed(cur, seeded):
1891 ret = cur.copy()
1892
1893@@ -320,7 +349,8 @@ def _merge_new_seed(cur, seeded):
1894 ret['meta-data'] = util.mergemanydict([cur['meta-data'], newmd])
1895
1896 if seeded.get('network-config'):
1897- ret['network-config'] = util.load_yaml(seeded['network-config'])
1898+ ret['network-config'] = _maybe_remove_top_network(
1899+ util.load_yaml(seeded.get('network-config')))
1900
1901 if 'user-data' in seeded:
1902 ret['user-data'] = seeded['user-data']
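The unwrapping rule implemented by `_maybe_remove_top_network` above can be summarized in a simplified standalone sketch (logging omitted; branch order follows the diff):

```python
def maybe_remove_top_network(cfg):
    """Drop a redundant top level 'network' key (LP: #1798117).

    Unwrap only when 'network' is the sole top level key and its value
    is either {'config': 'disabled'} or carries both 'config' and
    'version'; otherwise return cfg unchanged.
    """
    network = cfg.get('network')
    if not isinstance(network, dict):
        return cfg  # absent or not a dict: nothing to unwrap
    if len(cfg) != 1:
        return cfg  # other top level keys present: leave as-is
    if network.get('config') == 'disabled':
        return network  # a disabled config is unwrapped too
    if 'config' not in network or 'version' not in network:
        return cfg  # not a recognizable network config
    return network
```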
1903diff --git a/cloudinit/sources/DataSourceOVF.py b/cloudinit/sources/DataSourceOVF.py
1904index 045291e..3a3fcdf 100644
1905--- a/cloudinit/sources/DataSourceOVF.py
1906+++ b/cloudinit/sources/DataSourceOVF.py
1907@@ -232,11 +232,11 @@ class DataSourceOVF(sources.DataSource):
1908 GuestCustErrorEnum.GUESTCUST_ERROR_SUCCESS)
1909
1910 else:
1911- np = {'iso': transport_iso9660,
1912- 'vmware-guestd': transport_vmware_guestd, }
1913+ np = [('com.vmware.guestInfo', transport_vmware_guestinfo),
1914+ ('iso', transport_iso9660)]
1915 name = None
1916- for (name, transfunc) in np.items():
1917- (contents, _dev, _fname) = transfunc()
1918+ for name, transfunc in np:
1919+ contents = transfunc()
1920 if contents:
1921 break
1922 if contents:
1923@@ -464,8 +464,8 @@ def maybe_cdrom_device(devname):
1924 return cdmatch.match(devname) is not None
1925
1926
1927-# Transport functions take no input and return
1928-# a 3 tuple of content, path, filename
1929+# Transport functions are called with no arguments and return
1930+# either None (indicating not present) or string content of an ovf-env.xml
1931 def transport_iso9660(require_iso=True):
1932
1933 # Go through mounts to see if it was already mounted
1934@@ -477,9 +477,9 @@ def transport_iso9660(require_iso=True):
1935 if not maybe_cdrom_device(dev):
1936 continue
1937 mp = info['mountpoint']
1938- (fname, contents) = get_ovf_env(mp)
1939+ (_fname, contents) = get_ovf_env(mp)
1940 if contents is not False:
1941- return (contents, dev, fname)
1942+ return contents
1943
1944 if require_iso:
1945 mtype = "iso9660"
1946@@ -492,29 +492,33 @@ def transport_iso9660(require_iso=True):
1947 if maybe_cdrom_device(dev)]
1948 for dev in devs:
1949 try:
1950- (fname, contents) = util.mount_cb(dev, get_ovf_env, mtype=mtype)
1951+ (_fname, contents) = util.mount_cb(dev, get_ovf_env, mtype=mtype)
1952 except util.MountFailedError:
1953 LOG.debug("%s not mountable as iso9660", dev)
1954 continue
1955
1956 if contents is not False:
1957- return (contents, dev, fname)
1958-
1959- return (False, None, None)
1960-
1961-
1962-def transport_vmware_guestd():
1963- # http://blogs.vmware.com/vapp/2009/07/ \
1964- # selfconfiguration-and-the-ovf-environment.html
1965- # try:
1966- # cmd = ['vmware-guestd', '--cmd', 'info-get guestinfo.ovfEnv']
1967- # (out, err) = subp(cmd)
1968- # return(out, 'guestinfo.ovfEnv', 'vmware-guestd')
1969- # except:
1970- # # would need to error check here and see why this failed
1971- # # to know if log/error should be raised
1972- # return(False, None, None)
1973- return (False, None, None)
1974+ return contents
1975+
1976+ return None
1977+
1978+
1979+def transport_vmware_guestinfo():
1980+ rpctool = "vmware-rpctool"
1981+ not_found = None
1982+ if not util.which(rpctool):
1983+ return not_found
1984+ cmd = [rpctool, "info-get guestinfo.ovfEnv"]
1985+ try:
1986+ out, _err = util.subp(cmd)
1987+ if out:
1988+ return out
1989+ LOG.debug("cmd %s exited 0 with empty stdout: %s", cmd, out)
1990+ except util.ProcessExecutionError as e:
1991+ if e.exit_code != 1:
1992+ LOG.warning("%s exited with code %d", rpctool, e.exit_code)
1993+ LOG.debug(e)
1994+ return not_found
1995
1996
1997 def find_child(node, filter_func):
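The revised transport contract (no arguments in, `None` or the `ovf-env.xml` string out) can be sketched outside cloud-init with stdlib equivalents; `shutil.which` and `subprocess` stand in for `util.which` and `util.subp` here, so this is an approximation of the diff's `transport_vmware_guestinfo`, not the merged code:

```python
import shutil
import subprocess


def transport_vmware_guestinfo():
    """Return the ovf-env.xml contents from guestinfo, or None if absent."""
    if not shutil.which("vmware-rpctool"):
        return None  # tool not installed: transport unavailable
    try:
        out = subprocess.check_output(
            ["vmware-rpctool", "info-get guestinfo.ovfEnv"],
            universal_newlines=True)
    except subprocess.CalledProcessError:
        return None  # a nonzero exit means the guestinfo key is unset
    return out or None  # empty stdout also means not present
```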
1998diff --git a/cloudinit/sources/DataSourceOpenNebula.py b/cloudinit/sources/DataSourceOpenNebula.py
1999index e62e972..6e1d04b 100644
2000--- a/cloudinit/sources/DataSourceOpenNebula.py
2001+++ b/cloudinit/sources/DataSourceOpenNebula.py
2002@@ -337,7 +337,7 @@ def parse_shell_config(content, keylist=None, bash=None, asuser=None,
2003 (output, _error) = util.subp(cmd, data=bcmd)
2004
2005 # exclude vars in bash that change on their own or that we used
2006- excluded = ("RANDOM", "LINENO", "SECONDS", "_", "__v")
2007+ excluded = ("EPOCHREALTIME", "RANDOM", "LINENO", "SECONDS", "_", "__v")
2008 preset = {}
2009 ret = {}
2010 target = None
2011diff --git a/cloudinit/sources/DataSourceScaleway.py b/cloudinit/sources/DataSourceScaleway.py
2012index 9dc4ab2..b573b38 100644
2013--- a/cloudinit/sources/DataSourceScaleway.py
2014+++ b/cloudinit/sources/DataSourceScaleway.py
2015@@ -253,7 +253,16 @@ class DataSourceScaleway(sources.DataSource):
2016 return self.metadata['id']
2017
2018 def get_public_ssh_keys(self):
2019- return [key['key'] for key in self.metadata['ssh_public_keys']]
2020+ ssh_keys = [key['key'] for key in self.metadata['ssh_public_keys']]
2021+
2022+ akeypre = "AUTHORIZED_KEY="
2023+ plen = len(akeypre)
2024+ for tag in self.metadata.get('tags', []):
2025+ if not tag.startswith(akeypre):
2026+ continue
2027+ ssh_keys.append(tag[plen:].replace("_", " "))
2028+
2029+ return ssh_keys
2030
2031 def get_hostname(self, fqdn=False, resolve_ip=False, metadata_only=False):
2032 return self.metadata['hostname']
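The tag convention the Scaleway change relies on (an `AUTHORIZED_KEY=` prefix, with spaces in the key encoded as underscores) can be illustrated with a small standalone sketch; `keys_from_tags` is a hypothetical helper name, not part of the datasource:

```python
def keys_from_tags(tags):
    """Extract ssh public keys from AUTHORIZED_KEY=... server tags.

    Tag values cannot contain spaces, so spaces in the key material
    are encoded as underscores.
    """
    prefix = "AUTHORIZED_KEY="
    return [tag[len(prefix):].replace("_", " ")
            for tag in tags if tag.startswith(prefix)]
```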
2033diff --git a/cloudinit/sources/helpers/netlink.py b/cloudinit/sources/helpers/netlink.py
2034new file mode 100644
2035index 0000000..d377ae3
2036--- /dev/null
2037+++ b/cloudinit/sources/helpers/netlink.py
2038@@ -0,0 +1,250 @@
2039+# Author: Tamilmani Manoharan <tamanoha@microsoft.com>
2040+#
2041+# This file is part of cloud-init. See LICENSE file for license information.
2042+
2043+from cloudinit import log as logging
2044+from cloudinit import util
2045+from collections import namedtuple
2046+
2047+import os
2048+import select
2049+import socket
2050+import struct
2051+
2052+LOG = logging.getLogger(__name__)
2053+
2054+# http://man7.org/linux/man-pages/man7/netlink.7.html
2055+RTMGRP_LINK = 1
2056+NLMSG_NOOP = 1
2057+NLMSG_ERROR = 2
2058+NLMSG_DONE = 3
2059+RTM_NEWLINK = 16
2060+RTM_DELLINK = 17
2061+RTM_GETLINK = 18
2062+RTM_SETLINK = 19
2063+MAX_SIZE = 65535
2064+RTA_DATA_OFFSET = 32
2065+MSG_TYPE_OFFSET = 16
2066+SELECT_TIMEOUT = 60
2067+
2068+NLMSGHDR_FMT = "IHHII"
2069+IFINFOMSG_FMT = "BHiII"
2070+NLMSGHDR_SIZE = struct.calcsize(NLMSGHDR_FMT)
2071+IFINFOMSG_SIZE = struct.calcsize(IFINFOMSG_FMT)
2072+RTATTR_START_OFFSET = NLMSGHDR_SIZE + IFINFOMSG_SIZE
2073+RTA_DATA_START_OFFSET = 4
2074+PAD_ALIGNMENT = 4
2075+
2076+IFLA_IFNAME = 3
2077+IFLA_OPERSTATE = 16
2078+
2079+# https://www.kernel.org/doc/Documentation/networking/operstates.txt
2080+OPER_UNKNOWN = 0
2081+OPER_NOTPRESENT = 1
2082+OPER_DOWN = 2
2083+OPER_LOWERLAYERDOWN = 3
2084+OPER_TESTING = 4
2085+OPER_DORMANT = 5
2086+OPER_UP = 6
2087+
2088+RTAAttr = namedtuple('RTAAttr', ['length', 'rta_type', 'data'])
2089+InterfaceOperstate = namedtuple('InterfaceOperstate', ['ifname', 'operstate'])
2090+NetlinkHeader = namedtuple('NetlinkHeader', ['length', 'type', 'flags', 'seq',
2091+ 'pid'])
2092+
2093+
2094+class NetlinkCreateSocketError(RuntimeError):
2095+ '''Raised if netlink socket fails during create or bind.'''
2096+ pass
2097+
2098+
2099+def create_bound_netlink_socket():
2100+ '''Creates a netlink socket bound to the RTMGRP_LINK group to catch
2101+ interface down/up events (only RTM_NEWLINK/RTM_DELLINK/RTM_GETLINK
2102+ events are delivered on that group). The socket is set to
2103+ non-blocking mode since we're only receiving messages.
2104+
2105+ :returns: netlink socket in non-blocking mode
2106+ :raises: NetlinkCreateSocketError
2107+ '''
2108+ try:
2109+ netlink_socket = socket.socket(socket.AF_NETLINK,
2110+ socket.SOCK_RAW,
2111+ socket.NETLINK_ROUTE)
2112+ netlink_socket.bind((os.getpid(), RTMGRP_LINK))
2113+ netlink_socket.setblocking(0)
2114+ except socket.error as e:
2115+ msg = "Exception during netlink socket create: %s" % e
2116+ raise NetlinkCreateSocketError(msg)
2117+ LOG.debug("Created netlink socket")
2118+ return netlink_socket
2119+
2120+
2121+def get_netlink_msg_header(data):
2122+ '''Gets netlink message type and length
2123+
2124+ :param: data read from netlink socket
2125+ :returns: netlink message type
2126+ :raises: AssertionError if data is None or data is not >= NLMSGHDR_SIZE
2127+ struct nlmsghdr {
2128+ __u32 nlmsg_len; /* Length of message including header */
2129+ __u16 nlmsg_type; /* Type of message content */
2130+ __u16 nlmsg_flags; /* Additional flags */
2131+ __u32 nlmsg_seq; /* Sequence number */
2132+ __u32 nlmsg_pid; /* Sender port ID */
2133+ };
2134+ '''
2135+ assert (data is not None), ("data is none")
2136+ assert (len(data) >= NLMSGHDR_SIZE), (
2137+ "data is smaller than netlink message header")
2138+ msg_len, msg_type, flags, seq, pid = struct.unpack(NLMSGHDR_FMT,
2139+ data[:NLMSGHDR_SIZE])
2140+ LOG.debug("Got netlink msg of type %d", msg_type)
2141+ return NetlinkHeader(msg_len, msg_type, flags, seq, pid)
2142+
2143+
2144+def read_netlink_socket(netlink_socket, timeout=None):
2145+ '''Select and read from the netlink socket if ready.
2146+
2147+ :param: netlink_socket: specify which socket object to read from
2148+ :param: timeout: specify a timeout value (integer) to wait while reading,
2149+ if none, it will block indefinitely until socket ready for read
2150+ :returns: string of data read (max length = <MAX_SIZE>) from socket,
2151+ if no data read, returns None
2152+ :raises: AssertionError if netlink_socket is None
2153+ '''
2154+ assert (netlink_socket is not None), ("netlink socket is none")
2155+ read_set, _, _ = select.select([netlink_socket], [], [], timeout)
2156+ # In case of a timeout, read_set won't contain the netlink socket;
2157+ # just return from this function.
2158+ if netlink_socket not in read_set:
2159+ return None
2160+ LOG.debug("netlink socket ready for read")
2161+ data = netlink_socket.recv(MAX_SIZE)
2162+ if data is None:
2163+ LOG.error("Reading from Netlink socket returned no data")
2164+ return data
2165+
2166+
2167+def unpack_rta_attr(data, offset):
2168+ '''Unpack a single rta attribute.
2169+
2170+ :param: data: string of data read from netlink socket
2171+ :param: offset: starting offset of RTA Attribute
2172+ :return: RTAAttr object with length, type and data. On error, return None.
2173+ :raises: AssertionError if data is None or offset is not integer.
2174+ '''
2175+ assert (data is not None), ("data is none")
2176+ assert (type(offset) == int), ("offset is not integer")
2177+ assert (offset >= RTATTR_START_OFFSET), (
2178+ "rta offset is less than expected length")
2179+ length = rta_type = 0
2180+ attr_data = None
2181+ try:
2182+ length = struct.unpack_from("H", data, offset=offset)[0]
2183+ rta_type = struct.unpack_from("H", data, offset=offset+2)[0]
2184+ except struct.error:
2185+ return None # Should mean our offset is >= remaining data
2186+
2187+ # Unpack just the attribute's data. Offset by 4 to skip length/type header
2188+ attr_data = data[offset+RTA_DATA_START_OFFSET:offset+length]
2189+ return RTAAttr(length, rta_type, attr_data)
2190+
2191+
2192+def read_rta_oper_state(data):
2193+ '''Reads Interface name and operational state from RTA Data.
2194+
2195+ :param: data: string of data read from netlink socket
2196+ :returns: InterfaceOperstate object containing if_name and oper_state.
2197+ None if data does not contain valid IFLA_OPERSTATE and
2198+ IFLA_IFNAME messages.
2199+ :raises: AssertionError if data is None or length of data is
2200+ smaller than RTATTR_START_OFFSET.
2201+ '''
2202+ assert (data is not None), ("data is none")
2203+ assert (len(data) > RTATTR_START_OFFSET), (
2204+ "length of data is smaller than RTATTR_START_OFFSET")
2205+ ifname = operstate = None
2206+ offset = RTATTR_START_OFFSET
2207+ while offset <= len(data):
2208+ attr = unpack_rta_attr(data, offset)
2209+ if not attr or attr.length == 0:
2210+ break
2211+ # Each attribute is 4-byte aligned. Determine pad length.
2212+ padlen = (PAD_ALIGNMENT -
2213+ (attr.length % PAD_ALIGNMENT)) % PAD_ALIGNMENT
2214+ offset += attr.length + padlen
2215+
2216+ if attr.rta_type == IFLA_OPERSTATE:
2217+ operstate = ord(attr.data)
2218+ elif attr.rta_type == IFLA_IFNAME:
2219+ interface_name = util.decode_binary(attr.data, 'utf-8')
2220+ ifname = interface_name.strip('\0')
2221+ if not ifname or operstate is None:
2222+ return None
2223+ LOG.debug("rta attrs: ifname %s operstate %d", ifname, operstate)
2224+ return InterfaceOperstate(ifname, operstate)
2225+
2226+
2227+def wait_for_media_disconnect_connect(netlink_socket, ifname):
2228+ '''Block until a media disconnect then reconnect happens on ifname.
2229+ Listens for link events on the netlink socket and returns once the
2230+ carrier on the interface transitions from down (OPER_DOWN) to up
2231+ (OPER_UP).
2232+
2233+ :param: netlink_socket: netlink_socket to receive events
2234+ :param: ifname: Interface name to lookout for netlink events
2235+ :raises: AssertionError if netlink_socket is None or ifname is None.
2236+ '''
2237+ assert (netlink_socket is not None), ("netlink socket is none")
2238+ assert (ifname is not None), ("interface name is none")
2239+ assert (len(ifname) > 0), ("interface name cannot be empty")
2240+ carrier = OPER_UP
2241+ prevCarrier = OPER_UP
2242+ data = bytes()
2243+ LOG.debug("Wait for media disconnect and reconnect to happen")
2244+ while True:
2245+ recv_data = read_netlink_socket(netlink_socket, SELECT_TIMEOUT)
2246+ if recv_data is None:
2247+ continue
2248+ LOG.debug('read %d bytes from socket', len(recv_data))
2249+ data += recv_data
2250+ LOG.debug('Length of data after concat %d', len(data))
2251+ offset = 0
2252+ datalen = len(data)
2253+ while offset < datalen:
2254+ nl_msg = data[offset:]
2255+ if len(nl_msg) < NLMSGHDR_SIZE:
2256+ LOG.debug("Data is smaller than netlink header")
2257+ break
2258+ nlheader = get_netlink_msg_header(nl_msg)
2259+ if len(nl_msg) < nlheader.length:
2260+ LOG.debug("Partial data. Smaller than netlink message")
2261+ break
2262+ padlen = (nlheader.length+PAD_ALIGNMENT-1) & ~(PAD_ALIGNMENT-1)
2263+ offset = offset + padlen
2264+ LOG.debug('offset to next netlink message: %d', offset)
2265+ # Ignore any messages not new link or del link
2266+ if nlheader.type not in [RTM_NEWLINK, RTM_DELLINK]:
2267+ continue
2268+ interface_state = read_rta_oper_state(nl_msg)
2269+ if interface_state is None:
2270+ LOG.debug('Failed to read rta attributes: %s', interface_state)
2271+ continue
2272+ if interface_state.ifname != ifname:
2273+ LOG.debug(
2274+ "Ignored netlink event on interface %s. Waiting for %s.",
2275+ interface_state.ifname, ifname)
2276+ continue
2277+ if interface_state.operstate not in [OPER_UP, OPER_DOWN]:
2278+ continue
2279+ prevCarrier = carrier
2280+ carrier = interface_state.operstate
2281+ # check for carrier down, up sequence
2282+ isVnetSwitch = (prevCarrier == OPER_DOWN) and (carrier == OPER_UP)
2283+ if isVnetSwitch:
2284+ LOG.debug("Media switch happened on %s.", ifname)
2285+ return
2286+ data = data[offset:]
2287+
2288+# vi: ts=4 expandtab
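The `struct nlmsghdr` layout and the 4-byte alignment arithmetic used throughout the helper can be sanity-checked in isolation. This uses the same `"IHHII"` format string and native byte order as the module above; `nlmsg_align` restates the rounding expression from the read loop:

```python
import struct

# nlmsg_len, nlmsg_type, nlmsg_flags, nlmsg_seq, nlmsg_pid
NLMSGHDR_FMT = "IHHII"
NLMSGHDR_SIZE = struct.calcsize(NLMSGHDR_FMT)  # 16 bytes on common platforms
RTM_NEWLINK = 16
PAD_ALIGNMENT = 4


def nlmsg_align(length):
    """Round length up to the next 4-byte boundary, as the read loop does."""
    return (length + PAD_ALIGNMENT - 1) & ~(PAD_ALIGNMENT - 1)


# Pack a header claiming a 21-byte message; a reader must skip 24 bytes
# (21 rounded up to the alignment) to land on the next message in the buffer.
hdr = struct.pack(NLMSGHDR_FMT, 21, RTM_NEWLINK, 0, 1, 0)
msg_len, msg_type, _flags, _seq, _pid = struct.unpack(NLMSGHDR_FMT, hdr)
```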
2289diff --git a/cloudinit/sources/helpers/tests/test_netlink.py b/cloudinit/sources/helpers/tests/test_netlink.py
2290new file mode 100644
2291index 0000000..c2898a1
2292--- /dev/null
2293+++ b/cloudinit/sources/helpers/tests/test_netlink.py
2294@@ -0,0 +1,373 @@
2295+# Author: Tamilmani Manoharan <tamanoha@microsoft.com>
2296+#
2297+# This file is part of cloud-init. See LICENSE file for license information.
2298+
2299+from cloudinit.tests.helpers import CiTestCase, mock
2300+import socket
2301+import struct
2302+import codecs
2303+from cloudinit.sources.helpers.netlink import (
2304+ NetlinkCreateSocketError, create_bound_netlink_socket, read_netlink_socket,
2305+ read_rta_oper_state, unpack_rta_attr, wait_for_media_disconnect_connect,
2306+ OPER_DOWN, OPER_UP, OPER_DORMANT, OPER_LOWERLAYERDOWN, OPER_NOTPRESENT,
2307+ OPER_TESTING, OPER_UNKNOWN, RTATTR_START_OFFSET, RTM_NEWLINK, RTM_SETLINK,
2308+ RTM_GETLINK, MAX_SIZE)
2309+
2310+
2311+def int_to_bytes(i):
2312+ r'''Convert an integer to bytes, e.g. 1 -> b'\x01'.'''
2313+ hex_value = '{0:x}'.format(i)
2314+ hex_value = '0' * (len(hex_value) % 2) + hex_value
2315+ return codecs.decode(hex_value, 'hex_codec')
2316+
2317+
2318+class TestCreateBoundNetlinkSocket(CiTestCase):
2319+
2320+ @mock.patch('cloudinit.sources.helpers.netlink.socket.socket')
2321+ def test_socket_error_on_create(self, m_socket):
 2322+ '''create_bound_netlink_socket raises NetlinkCreateSocketError
 2323+ when socket creation fails.'''
 2324+
2325+ m_socket.side_effect = socket.error("Fake socket failure")
2326+ with self.assertRaises(NetlinkCreateSocketError) as ctx_mgr:
2327+ create_bound_netlink_socket()
2328+ self.assertEqual(
2329+ 'Exception during netlink socket create: Fake socket failure',
2330+ str(ctx_mgr.exception))
2331+
2332+
2333+class TestReadNetlinkSocket(CiTestCase):
2334+
2335+ @mock.patch('cloudinit.sources.helpers.netlink.socket.socket')
2336+ @mock.patch('cloudinit.sources.helpers.netlink.select.select')
2337+ def test_read_netlink_socket(self, m_select, m_socket):
2338+ '''read_netlink_socket able to receive data'''
2339+ data = 'netlinktest'
2340+ m_select.return_value = [m_socket], None, None
2341+ m_socket.recv.return_value = data
2342+ recv_data = read_netlink_socket(m_socket, 2)
2343+ m_select.assert_called_with([m_socket], [], [], 2)
2344+ m_socket.recv.assert_called_with(MAX_SIZE)
2345+ self.assertIsNotNone(recv_data)
2346+ self.assertEqual(recv_data, data)
2347+
2348+ @mock.patch('cloudinit.sources.helpers.netlink.socket.socket')
2349+ @mock.patch('cloudinit.sources.helpers.netlink.select.select')
2350+ def test_netlink_read_timeout(self, m_select, m_socket):
2351+ '''read_netlink_socket should timeout if nothing to read'''
2352+ m_select.return_value = [], None, None
2353+ data = read_netlink_socket(m_socket, 1)
2354+ m_select.assert_called_with([m_socket], [], [], 1)
2355+ self.assertEqual(m_socket.recv.call_count, 0)
2356+ self.assertIsNone(data)
2357+
2358+ def test_read_invalid_socket(self):
2359+ '''read_netlink_socket raises assert error if socket is invalid'''
2360+ socket = None
2361+ with self.assertRaises(AssertionError) as context:
2362+ read_netlink_socket(socket, 1)
2363+ self.assertTrue('netlink socket is none' in str(context.exception))
2364+
2365+
2366+class TestParseNetlinkMessage(CiTestCase):
2367+
2368+ def test_read_rta_oper_state(self):
 2369+ '''read_rta_oper_state parses a netlink message and extracts data'''
2370+ ifname = "eth0"
2371+ bytes = ifname.encode("utf-8")
2372+ buf = bytearray(48)
2373+ struct.pack_into("HH4sHHc", buf, RTATTR_START_OFFSET, 8, 3, bytes, 5,
2374+ 16, int_to_bytes(OPER_DOWN))
2375+ interface_state = read_rta_oper_state(buf)
2376+ self.assertEqual(interface_state.ifname, ifname)
2377+ self.assertEqual(interface_state.operstate, OPER_DOWN)
2378+
2379+ def test_read_none_data(self):
2380+ '''read_rta_oper_state raises assert error if data is none'''
2381+ data = None
2382+ with self.assertRaises(AssertionError) as context:
2383+ read_rta_oper_state(data)
 2384+ self.assertIn('data is none', str(context.exception))
2385+
2386+ def test_read_invalid_rta_operstate_none(self):
2387+ '''read_rta_oper_state returns none if operstate is none'''
2388+ ifname = "eth0"
2389+ buf = bytearray(40)
2390+ bytes = ifname.encode("utf-8")
2391+ struct.pack_into("HH4s", buf, RTATTR_START_OFFSET, 8, 3, bytes)
2392+ interface_state = read_rta_oper_state(buf)
2393+ self.assertIsNone(interface_state)
2394+
2395+ def test_read_invalid_rta_ifname_none(self):
2396+ '''read_rta_oper_state returns none if ifname is none'''
2397+ buf = bytearray(40)
2398+ struct.pack_into("HHc", buf, RTATTR_START_OFFSET, 5, 16,
2399+ int_to_bytes(OPER_DOWN))
2400+ interface_state = read_rta_oper_state(buf)
2401+ self.assertIsNone(interface_state)
2402+
2403+ def test_read_invalid_data_len(self):
2404+ '''raise assert error if data size is smaller than required size'''
2405+ buf = bytearray(32)
2406+ with self.assertRaises(AssertionError) as context:
2407+ read_rta_oper_state(buf)
2408+ self.assertTrue('length of data is smaller than RTATTR_START_OFFSET' in
2409+ str(context.exception))
2410+
2411+ def test_unpack_rta_attr_none_data(self):
2412+ '''unpack_rta_attr raises assert error if data is none'''
2413+ data = None
2414+ with self.assertRaises(AssertionError) as context:
2415+ unpack_rta_attr(data, RTATTR_START_OFFSET)
2416+ self.assertTrue('data is none' in str(context.exception))
2417+
2418+ def test_unpack_rta_attr_invalid_offset(self):
2419+ '''unpack_rta_attr raises assert error if offset is invalid'''
2420+ data = bytearray(48)
2421+ with self.assertRaises(AssertionError) as context:
2422+ unpack_rta_attr(data, "offset")
2423+ self.assertTrue('offset is not integer' in str(context.exception))
2424+ with self.assertRaises(AssertionError) as context:
2425+ unpack_rta_attr(data, 31)
2426+ self.assertTrue('rta offset is less than expected length' in
2427+ str(context.exception))
2428+
2429+
2430+@mock.patch('cloudinit.sources.helpers.netlink.socket.socket')
2431+@mock.patch('cloudinit.sources.helpers.netlink.read_netlink_socket')
2432+class TestWaitForMediaDisconnectConnect(CiTestCase):
2433+ with_logs = True
2434+
2435+ def _media_switch_data(self, ifname, msg_type, operstate):
2436+ '''construct netlink data with specified fields'''
2437+ if ifname and operstate is not None:
2438+ data = bytearray(48)
2439+ bytes = ifname.encode("utf-8")
2440+ struct.pack_into("HH4sHHc", data, RTATTR_START_OFFSET, 8, 3,
2441+ bytes, 5, 16, int_to_bytes(operstate))
2442+ elif ifname:
2443+ data = bytearray(40)
2444+ bytes = ifname.encode("utf-8")
2445+ struct.pack_into("HH4s", data, RTATTR_START_OFFSET, 8, 3, bytes)
2446+ elif operstate:
2447+ data = bytearray(40)
2448+ struct.pack_into("HHc", data, RTATTR_START_OFFSET, 5, 16,
2449+ int_to_bytes(operstate))
2450+ struct.pack_into("=LHHLL", data, 0, len(data), msg_type, 0, 0, 0)
2451+ return data
2452+
2453+ def test_media_down_up_scenario(self, m_read_netlink_socket,
2454+ m_socket):
2455+ '''Test for media down up sequence for required interface name'''
2456+ ifname = "eth0"
2457+ # construct data for Oper State down
2458+ data_op_down = self._media_switch_data(ifname, RTM_NEWLINK, OPER_DOWN)
2459+ # construct data for Oper State up
2460+ data_op_up = self._media_switch_data(ifname, RTM_NEWLINK, OPER_UP)
2461+ m_read_netlink_socket.side_effect = [data_op_down, data_op_up]
2462+ wait_for_media_disconnect_connect(m_socket, ifname)
2463+ self.assertEqual(m_read_netlink_socket.call_count, 2)
2464+
2465+ def test_wait_for_media_switch_diff_interface(self, m_read_netlink_socket,
2466+ m_socket):
2467+ '''wait_for_media_disconnect_connect ignores unexpected interfaces.
2468+
 2469+ The first two messages are for other interfaces and the last two are
 2470+ for the expected interface, so the function exits only after
 2471+ receiving the last two messages; the call count for
 2472+ m_read_netlink_socket must therefore be 4.
2473+ '''
2474+ other_ifname = "eth1"
2475+ expected_ifname = "eth0"
2476+ data_op_down_eth1 = self._media_switch_data(
2477+ other_ifname, RTM_NEWLINK, OPER_DOWN)
2478+ data_op_up_eth1 = self._media_switch_data(
2479+ other_ifname, RTM_NEWLINK, OPER_UP)
2480+ data_op_down_eth0 = self._media_switch_data(
2481+ expected_ifname, RTM_NEWLINK, OPER_DOWN)
2482+ data_op_up_eth0 = self._media_switch_data(
2483+ expected_ifname, RTM_NEWLINK, OPER_UP)
2484+ m_read_netlink_socket.side_effect = [data_op_down_eth1,
2485+ data_op_up_eth1,
2486+ data_op_down_eth0,
2487+ data_op_up_eth0]
2488+ wait_for_media_disconnect_connect(m_socket, expected_ifname)
2489+ self.assertIn('Ignored netlink event on interface %s' % other_ifname,
2490+ self.logs.getvalue())
2491+ self.assertEqual(m_read_netlink_socket.call_count, 4)
2492+
2493+ def test_invalid_msgtype_getlink(self, m_read_netlink_socket, m_socket):
2494+ '''wait_for_media_disconnect_connect ignores GETLINK events.
2495+
 2496+ The first two messages are oper down and up with type RTM_GETLINK,
 2497+ which the netlink module ignores. The last two messages are
 2498+ RTM_NEWLINK with oper state down and up, so the call count for
 2499+ m_read_netlink_socket must be 4, counting the two ignored
 2500+ RTM_GETLINK messages.
2501+ '''
2502+ ifname = "eth0"
2503+ data_getlink_down = self._media_switch_data(
2504+ ifname, RTM_GETLINK, OPER_DOWN)
2505+ data_getlink_up = self._media_switch_data(
2506+ ifname, RTM_GETLINK, OPER_UP)
2507+ data_newlink_down = self._media_switch_data(
2508+ ifname, RTM_NEWLINK, OPER_DOWN)
2509+ data_newlink_up = self._media_switch_data(
2510+ ifname, RTM_NEWLINK, OPER_UP)
2511+ m_read_netlink_socket.side_effect = [data_getlink_down,
2512+ data_getlink_up,
2513+ data_newlink_down,
2514+ data_newlink_up]
2515+ wait_for_media_disconnect_connect(m_socket, ifname)
2516+ self.assertEqual(m_read_netlink_socket.call_count, 4)
2517+
2518+ def test_invalid_msgtype_setlink(self, m_read_netlink_socket, m_socket):
2519+ '''wait_for_media_disconnect_connect ignores SETLINK events.
2520+
 2521+ The first two messages are oper down and up with type RTM_SETLINK,
 2522+ which it will ignore. The 3rd and 4th messages are RTM_NEWLINK with
 2523+ down and up states; the function should exit after the 4th message
 2524+ once it sees the down->up sequence. The call count for
 2525+ m_read_netlink_socket must be 4: the 2 RTM_SETLINK messages are
 2526+ ignored and the trailing 2 RTM_NEWLINK messages are never read.
2527+ '''
2528+ ifname = "eth0"
2529+ data_setlink_down = self._media_switch_data(
2530+ ifname, RTM_SETLINK, OPER_DOWN)
2531+ data_setlink_up = self._media_switch_data(
2532+ ifname, RTM_SETLINK, OPER_UP)
2533+ data_newlink_down = self._media_switch_data(
2534+ ifname, RTM_NEWLINK, OPER_DOWN)
2535+ data_newlink_up = self._media_switch_data(
2536+ ifname, RTM_NEWLINK, OPER_UP)
2537+ m_read_netlink_socket.side_effect = [data_setlink_down,
2538+ data_setlink_up,
2539+ data_newlink_down,
2540+ data_newlink_up,
2541+ data_newlink_down,
2542+ data_newlink_up]
2543+ wait_for_media_disconnect_connect(m_socket, ifname)
2544+ self.assertEqual(m_read_netlink_socket.call_count, 4)
2545+
2546+ def test_netlink_invalid_switch_scenario(self, m_read_netlink_socket,
2547+ m_socket):
2548+ '''returns only if it receives UP event after a DOWN event'''
2549+ ifname = "eth0"
2550+ data_op_down = self._media_switch_data(ifname, RTM_NEWLINK, OPER_DOWN)
2551+ data_op_up = self._media_switch_data(ifname, RTM_NEWLINK, OPER_UP)
2552+ data_op_dormant = self._media_switch_data(ifname, RTM_NEWLINK,
2553+ OPER_DORMANT)
2554+ data_op_notpresent = self._media_switch_data(ifname, RTM_NEWLINK,
2555+ OPER_NOTPRESENT)
2556+ data_op_lowerdown = self._media_switch_data(ifname, RTM_NEWLINK,
2557+ OPER_LOWERLAYERDOWN)
2558+ data_op_testing = self._media_switch_data(ifname, RTM_NEWLINK,
2559+ OPER_TESTING)
2560+ data_op_unknown = self._media_switch_data(ifname, RTM_NEWLINK,
2561+ OPER_UNKNOWN)
2562+ m_read_netlink_socket.side_effect = [data_op_up, data_op_up,
2563+ data_op_dormant, data_op_up,
2564+ data_op_notpresent, data_op_up,
2565+ data_op_lowerdown, data_op_up,
2566+ data_op_testing, data_op_up,
2567+ data_op_unknown, data_op_up,
2568+ data_op_down, data_op_up]
2569+ wait_for_media_disconnect_connect(m_socket, ifname)
2570+ self.assertEqual(m_read_netlink_socket.call_count, 14)
2571+
2572+ def test_netlink_valid_inbetween_transitions(self, m_read_netlink_socket,
2573+ m_socket):
2574+ '''wait_for_media_disconnect_connect handles in between transitions'''
2575+ ifname = "eth0"
2576+ data_op_down = self._media_switch_data(ifname, RTM_NEWLINK, OPER_DOWN)
2577+ data_op_up = self._media_switch_data(ifname, RTM_NEWLINK, OPER_UP)
2578+ data_op_dormant = self._media_switch_data(ifname, RTM_NEWLINK,
2579+ OPER_DORMANT)
2580+ data_op_unknown = self._media_switch_data(ifname, RTM_NEWLINK,
2581+ OPER_UNKNOWN)
2582+ m_read_netlink_socket.side_effect = [data_op_down, data_op_dormant,
2583+ data_op_unknown, data_op_up]
2584+ wait_for_media_disconnect_connect(m_socket, ifname)
2585+ self.assertEqual(m_read_netlink_socket.call_count, 4)
2586+
2587+ def test_netlink_invalid_operstate(self, m_read_netlink_socket, m_socket):
2588+ '''wait_for_media_disconnect_connect should handle invalid operstates.
2589+
 2590+ The function should not fail or return early when it receives
 2591+ invalid operstates; it should always wait for the down-up sequence.
2592+ '''
2593+ ifname = "eth0"
2594+ data_op_down = self._media_switch_data(ifname, RTM_NEWLINK, OPER_DOWN)
2595+ data_op_up = self._media_switch_data(ifname, RTM_NEWLINK, OPER_UP)
2596+ data_op_invalid = self._media_switch_data(ifname, RTM_NEWLINK, 7)
2597+ m_read_netlink_socket.side_effect = [data_op_invalid, data_op_up,
2598+ data_op_down, data_op_invalid,
2599+ data_op_up]
2600+ wait_for_media_disconnect_connect(m_socket, ifname)
2601+ self.assertEqual(m_read_netlink_socket.call_count, 5)
2602+
2603+ def test_wait_invalid_socket(self, m_read_netlink_socket, m_socket):
 2604+ '''wait_for_media_disconnect_connect handles a None netlink socket.'''
2605+ socket = None
2606+ ifname = "eth0"
2607+ with self.assertRaises(AssertionError) as context:
2608+ wait_for_media_disconnect_connect(socket, ifname)
2609+ self.assertTrue('netlink socket is none' in str(context.exception))
2610+
2611+ def test_wait_invalid_ifname(self, m_read_netlink_socket, m_socket):
 2612+ '''wait_for_media_disconnect_connect handles a None interface name'''
2613+ ifname = None
2614+ with self.assertRaises(AssertionError) as context:
2615+ wait_for_media_disconnect_connect(m_socket, ifname)
2616+ self.assertTrue('interface name is none' in str(context.exception))
2617+ ifname = ""
2618+ with self.assertRaises(AssertionError) as context:
2619+ wait_for_media_disconnect_connect(m_socket, ifname)
2620+ self.assertTrue('interface name cannot be empty' in
2621+ str(context.exception))
2622+
2623+ def test_wait_invalid_rta_attr(self, m_read_netlink_socket, m_socket):
 2624+ '''wait_for_media_disconnect_connect handles invalid rta data'''
2625+ ifname = "eth0"
2626+ data_invalid1 = self._media_switch_data(None, RTM_NEWLINK, OPER_DOWN)
2627+ data_invalid2 = self._media_switch_data(ifname, RTM_NEWLINK, None)
2628+ data_op_down = self._media_switch_data(ifname, RTM_NEWLINK, OPER_DOWN)
2629+ data_op_up = self._media_switch_data(ifname, RTM_NEWLINK, OPER_UP)
2630+ m_read_netlink_socket.side_effect = [data_invalid1, data_invalid2,
2631+ data_op_down, data_op_up]
2632+ wait_for_media_disconnect_connect(m_socket, ifname)
2633+ self.assertEqual(m_read_netlink_socket.call_count, 4)
2634+
2635+ def test_read_multiple_netlink_msgs(self, m_read_netlink_socket, m_socket):
2636+ '''Read multiple messages in single receive call'''
2637+ ifname = "eth0"
2638+ bytes = ifname.encode("utf-8")
2639+ data = bytearray(96)
2640+ struct.pack_into("=LHHLL", data, 0, 48, RTM_NEWLINK, 0, 0, 0)
2641+ struct.pack_into("HH4sHHc", data, RTATTR_START_OFFSET, 8, 3,
2642+ bytes, 5, 16, int_to_bytes(OPER_DOWN))
2643+ struct.pack_into("=LHHLL", data, 48, 48, RTM_NEWLINK, 0, 0, 0)
2644+ struct.pack_into("HH4sHHc", data, 48 + RTATTR_START_OFFSET, 8,
2645+ 3, bytes, 5, 16, int_to_bytes(OPER_UP))
2646+ m_read_netlink_socket.return_value = data
2647+ wait_for_media_disconnect_connect(m_socket, ifname)
2648+ self.assertEqual(m_read_netlink_socket.call_count, 1)
2649+
2650+ def test_read_partial_netlink_msgs(self, m_read_netlink_socket, m_socket):
2651+ '''Read partial messages in receive call'''
2652+ ifname = "eth0"
2653+ bytes = ifname.encode("utf-8")
2654+ data1 = bytearray(112)
2655+ data2 = bytearray(32)
2656+ struct.pack_into("=LHHLL", data1, 0, 48, RTM_NEWLINK, 0, 0, 0)
2657+ struct.pack_into("HH4sHHc", data1, RTATTR_START_OFFSET, 8, 3,
2658+ bytes, 5, 16, int_to_bytes(OPER_DOWN))
2659+ struct.pack_into("=LHHLL", data1, 48, 48, RTM_NEWLINK, 0, 0, 0)
2660+ struct.pack_into("HH4sHHc", data1, 80, 8, 3, bytes, 5, 16,
2661+ int_to_bytes(OPER_DOWN))
2662+ struct.pack_into("=LHHLL", data1, 96, 48, RTM_NEWLINK, 0, 0, 0)
2663+ struct.pack_into("HH4sHHc", data2, 16, 8, 3, bytes, 5, 16,
2664+ int_to_bytes(OPER_UP))
2665+ m_read_netlink_socket.side_effect = [data1, data2]
2666+ wait_for_media_disconnect_connect(m_socket, ifname)
2667+ self.assertEqual(m_read_netlink_socket.call_count, 2)
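The tests above build raw netlink buffers by hand with struct.pack_into. A short sketch of the wire layout they assume — a 16-byte nlmsghdr plus a 16-byte ifinfomsg, so rtattr data starts at offset 32 (RTATTR_START_OFFSET) — showing that the header and first attribute round-trip cleanly:

```python
import struct

RTM_NEWLINK = 16
# Assumption: 16-byte nlmsghdr + 16-byte ifinfomsg precede the rtattrs,
# which is why the tests pack attributes at offset 32.
RTATTR_START_OFFSET = 32

buf = bytearray(48)
# Netlink header: "=LHHLL" packs length, type, flags, seq, pid (16 bytes).
struct.pack_into("=LHHLL", buf, 0, len(buf), RTM_NEWLINK, 0, 0, 0)
# Two rtattrs follow, as in _media_switch_data: IFLA_IFNAME (type 3)
# carrying the interface name, then IFLA_OPERSTATE (type 16).
ifname = "eth0".encode("utf-8")
struct.pack_into("HH4sHHc", buf, RTATTR_START_OFFSET,
                 8, 3, ifname, 5, 16, b'\x02')

# Reading the header back recovers the message length and type.
msg_len, msg_type = struct.unpack_from("=LH", buf, 0)
```

Unpacking "HH4s" at offset 32 similarly recovers the (length, type, value) triple of the first attribute, which is what unpack_rta_attr does when walking the buffer.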
2668diff --git a/cloudinit/sources/helpers/vmware/imc/config_nic.py b/cloudinit/sources/helpers/vmware/imc/config_nic.py
2669index e1890e2..77cbf3b 100644
2670--- a/cloudinit/sources/helpers/vmware/imc/config_nic.py
2671+++ b/cloudinit/sources/helpers/vmware/imc/config_nic.py
2672@@ -165,9 +165,8 @@ class NicConfigurator(object):
2673
2674 # Add routes if there is no primary nic
2675 if not self._primaryNic and v4.gateways:
2676- route_list.extend(self.gen_ipv4_route(nic,
2677- v4.gateways,
2678- v4.netmask))
2679+ subnet.update(
2680+ {'routes': self.gen_ipv4_route(nic, v4.gateways, v4.netmask)})
2681
2682 return ([subnet], route_list)
2683
2684diff --git a/cloudinit/temp_utils.py b/cloudinit/temp_utils.py
2685index c98a1b5..346276e 100644
2686--- a/cloudinit/temp_utils.py
2687+++ b/cloudinit/temp_utils.py
2688@@ -81,7 +81,7 @@ def ExtendedTemporaryFile(**kwargs):
2689
2690
2691 @contextlib.contextmanager
2692-def tempdir(**kwargs):
2693+def tempdir(rmtree_ignore_errors=False, **kwargs):
2694 # This seems like it was only added in python 3.2
2695 # Make it since its useful...
2696 # See: http://bugs.python.org/file12970/tempdir.patch
2697@@ -89,7 +89,7 @@ def tempdir(**kwargs):
2698 try:
2699 yield tdir
2700 finally:
2701- shutil.rmtree(tdir)
2702+ shutil.rmtree(tdir, ignore_errors=rmtree_ignore_errors)
2703
2704
2705 def mkdtemp(**kwargs):
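The hunk above threads a new rmtree_ignore_errors flag through tempdir. A self-contained sketch of the same context-manager pattern (note that mkdtemp here is stdlib tempfile's, standing in for cloud-init's own wrapper):

```python
import contextlib
import os
import shutil
import tempfile


@contextlib.contextmanager
def tempdir(rmtree_ignore_errors=False, **kwargs):
    """Yield a temporary directory, removing it on exit.

    With rmtree_ignore_errors=True, cleanup failures (e.g. the directory
    was already deleted inside the with-block) are suppressed.
    """
    tdir = tempfile.mkdtemp(**kwargs)
    try:
        yield tdir
    finally:
        shutil.rmtree(tdir, ignore_errors=rmtree_ignore_errors)


with tempdir(rmtree_ignore_errors=True, prefix='demo-') as tdir:
    os.rmdir(tdir)  # directory already gone; cleanup error is suppressed
```

Without the flag, the same os.rmdir inside the block makes shutil.rmtree raise OSError on exit, which is exactly the pair of behaviors test_tempdir_error_suppression exercises below.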
2706diff --git a/cloudinit/tests/test_dhclient_hook.py b/cloudinit/tests/test_dhclient_hook.py
2707new file mode 100644
2708index 0000000..7aab8dd
2709--- /dev/null
2710+++ b/cloudinit/tests/test_dhclient_hook.py
2711@@ -0,0 +1,105 @@
2712+# This file is part of cloud-init. See LICENSE file for license information.
2713+
2714+"""Tests for cloudinit.dhclient_hook."""
2715+
2716+from cloudinit import dhclient_hook as dhc
2717+from cloudinit.tests.helpers import CiTestCase, dir2dict, populate_dir
2718+
2719+import argparse
2720+import json
2721+import mock
2722+import os
2723+
2724+
2725+class TestDhclientHook(CiTestCase):
2726+
2727+ ex_env = {
2728+ 'interface': 'eth0',
2729+ 'new_dhcp_lease_time': '3600',
2730+ 'new_host_name': 'x1',
2731+ 'new_ip_address': '10.145.210.163',
2732+ 'new_subnet_mask': '255.255.255.0',
2733+ 'old_host_name': 'x1',
2734+ 'PATH': '/usr/sbin:/usr/bin:/sbin:/bin',
2735+ 'pid': '614',
2736+ 'reason': 'BOUND',
2737+ }
2738+
2739+ # some older versions of dhclient put the same content,
2740+ # but in upper case with DHCP4_ instead of new_
2741+ ex_env_dhcp4 = {
2742+ 'REASON': 'BOUND',
2743+ 'DHCP4_dhcp_lease_time': '3600',
2744+ 'DHCP4_host_name': 'x1',
2745+ 'DHCP4_ip_address': '10.145.210.163',
2746+ 'DHCP4_subnet_mask': '255.255.255.0',
2747+ 'INTERFACE': 'eth0',
2748+ 'PATH': '/usr/sbin:/usr/bin:/sbin:/bin',
2749+ 'pid': '614',
2750+ }
2751+
2752+ expected = {
2753+ 'dhcp_lease_time': '3600',
2754+ 'host_name': 'x1',
2755+ 'ip_address': '10.145.210.163',
2756+ 'subnet_mask': '255.255.255.0'}
2757+
2758+ def setUp(self):
2759+ super(TestDhclientHook, self).setUp()
2760+ self.tmp = self.tmp_dir()
2761+
2762+ def test_handle_args(self):
2763+ """quick test of call to handle_args."""
2764+ nic = 'eth0'
2765+ args = argparse.Namespace(event=dhc.UP, interface=nic)
2766+ with mock.patch.dict("os.environ", clear=True, values=self.ex_env):
2767+ dhc.handle_args(dhc.NAME, args, data_d=self.tmp)
2768+ found = dir2dict(self.tmp + os.path.sep)
2769+ self.assertEqual([nic + ".json"], list(found.keys()))
2770+ self.assertEqual(self.expected, json.loads(found[nic + ".json"]))
2771+
2772+ def test_run_hook_up_creates_dir(self):
2773+ """If dir does not exist, run_hook should create it."""
2774+ subd = self.tmp_path("subdir", self.tmp)
2775+ nic = 'eth1'
2776+ dhc.run_hook(nic, 'up', data_d=subd, env=self.ex_env)
2777+ self.assertEqual(
2778+ set([nic + ".json"]), set(dir2dict(subd + os.path.sep)))
2779+
2780+ def test_run_hook_up(self):
2781+ """Test expected use of run_hook_up."""
2782+ nic = 'eth0'
2783+ dhc.run_hook(nic, 'up', data_d=self.tmp, env=self.ex_env)
2784+ found = dir2dict(self.tmp + os.path.sep)
2785+ self.assertEqual([nic + ".json"], list(found.keys()))
2786+ self.assertEqual(self.expected, json.loads(found[nic + ".json"]))
2787+
2788+ def test_run_hook_up_dhcp4_prefix(self):
2789+ """Test run_hook filters correctly with older DHCP4_ data."""
2790+ nic = 'eth0'
2791+ dhc.run_hook(nic, 'up', data_d=self.tmp, env=self.ex_env_dhcp4)
2792+ found = dir2dict(self.tmp + os.path.sep)
2793+ self.assertEqual([nic + ".json"], list(found.keys()))
2794+ self.assertEqual(self.expected, json.loads(found[nic + ".json"]))
2795+
2796+ def test_run_hook_down_deletes(self):
2797+ """down should delete the created json file."""
2798+ nic = 'eth1'
2799+ populate_dir(
2800+ self.tmp, {nic + ".json": "{'abcd'}", 'myfile.txt': 'text'})
2801+ dhc.run_hook(nic, 'down', data_d=self.tmp, env={'old_host_name': 'x1'})
2802+ self.assertEqual(
2803+ set(['myfile.txt']),
2804+ set(dir2dict(self.tmp + os.path.sep)))
2805+
2806+ def test_get_parser(self):
2807+ """Smoke test creation of get_parser."""
2808+ # cloud-init main uses 'action'.
2809+ event, interface = (dhc.UP, 'mynic0')
2810+ self.assertEqual(
2811+ argparse.Namespace(event=event, interface=interface,
2812+ action=(dhc.NAME, dhc.handle_args)),
2813+ dhc.get_parser().parse_args([event, interface]))
2814+
2815+
2816+# vi: ts=4 expandtab
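The expected dict above implies that run_hook normalizes dhclient's environment by keeping only the new_*-prefixed keys (or DHCP4_* on older dhclients) and stripping the prefix. A hypothetical sketch of that filtering, consistent with ex_env, ex_env_dhcp4, and expected — the real logic lives in cloudinit.dhclient_hook:

```python
def filter_dhcp_env(env):
    """Keep only new_*/DHCP4_* keys from a dhclient env, sans prefix.

    Sketch of the normalization the tests above imply; keys such as
    old_host_name, PATH, pid and reason are dropped.
    """
    result = {}
    for key, value in env.items():
        for prefix in ('new_', 'DHCP4_'):
            if key.startswith(prefix):
                result[key[len(prefix):].lower()] = value
    return result
```

Applied to ex_env, this yields exactly the four keys in expected: dhcp_lease_time, host_name, ip_address and subnet_mask.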
2817diff --git a/cloudinit/tests/test_temp_utils.py b/cloudinit/tests/test_temp_utils.py
2818index ffbb92c..4a52ef8 100644
2819--- a/cloudinit/tests/test_temp_utils.py
2820+++ b/cloudinit/tests/test_temp_utils.py
2821@@ -2,8 +2,9 @@
2822
2823 """Tests for cloudinit.temp_utils"""
2824
2825-from cloudinit.temp_utils import mkdtemp, mkstemp
2826+from cloudinit.temp_utils import mkdtemp, mkstemp, tempdir
2827 from cloudinit.tests.helpers import CiTestCase, wrap_and_call
2828+import os
2829
2830
2831 class TestTempUtils(CiTestCase):
2832@@ -98,4 +99,19 @@ class TestTempUtils(CiTestCase):
2833 self.assertEqual('/fake/return/path', retval)
2834 self.assertEqual([{'dir': '/run/cloud-init/tmp'}], calls)
2835
2836+ def test_tempdir_error_suppression(self):
2837+ """test tempdir suppresses errors during directory removal."""
2838+
2839+ with self.assertRaises(OSError):
2840+ with tempdir(prefix='cloud-init-dhcp-') as tdir:
2841+ os.rmdir(tdir)
2842+ # As a result, the directory is already gone,
2843+ # so shutil.rmtree should raise OSError
2844+
2845+ with tempdir(rmtree_ignore_errors=True,
2846+ prefix='cloud-init-dhcp-') as tdir:
2847+ os.rmdir(tdir)
2848+ # Since the directory is already gone, shutil.rmtree would raise
2849+ # OSError, but we suppress that
2850+
2851 # vi: ts=4 expandtab
2852diff --git a/cloudinit/tests/test_url_helper.py b/cloudinit/tests/test_url_helper.py
2853index 113249d..aa9f3ec 100644
2854--- a/cloudinit/tests/test_url_helper.py
2855+++ b/cloudinit/tests/test_url_helper.py
2856@@ -1,10 +1,12 @@
2857 # This file is part of cloud-init. See LICENSE file for license information.
2858
2859-from cloudinit.url_helper import oauth_headers, read_file_or_url
2860+from cloudinit.url_helper import (
2861+ NOT_FOUND, UrlError, oauth_headers, read_file_or_url, retry_on_url_exc)
2862 from cloudinit.tests.helpers import CiTestCase, mock, skipIf
2863 from cloudinit import util
2864
2865 import httpretty
2866+import requests
2867
2868
2869 try:
2870@@ -64,3 +66,24 @@ class TestReadFileOrUrl(CiTestCase):
2871 result = read_file_or_url(url)
2872 self.assertEqual(result.contents, data)
2873 self.assertEqual(str(result), data.decode('utf-8'))
2874+
2875+
2876+class TestRetryOnUrlExc(CiTestCase):
2877+
2878+ def test_do_not_retry_non_urlerror(self):
2879+ """When exception is not UrlError return False."""
 2880+ myerror = IOError('something unexpected')
2881+ self.assertFalse(retry_on_url_exc(msg='', exc=myerror))
2882+
2883+ def test_perform_retries_on_not_found(self):
2884+ """When exception is UrlError with a 404 status code return True."""
2885+ myerror = UrlError(cause=RuntimeError(
2886+ 'something was not found'), code=NOT_FOUND)
2887+ self.assertTrue(retry_on_url_exc(msg='', exc=myerror))
2888+
2889+ def test_perform_retries_on_timeout(self):
 2890+ '''When exception is a requests.Timeout return True.'''
2891+ myerror = UrlError(cause=requests.Timeout('something timed out'))
2892+ self.assertTrue(retry_on_url_exc(msg='', exc=myerror))
2893+
2894+# vi: ts=4 expandtab
2895diff --git a/cloudinit/tests/test_util.py b/cloudinit/tests/test_util.py
2896index 749a384..e3d2dba 100644
2897--- a/cloudinit/tests/test_util.py
2898+++ b/cloudinit/tests/test_util.py
2899@@ -18,25 +18,51 @@ MOUNT_INFO = [
2900 ]
2901
2902 OS_RELEASE_SLES = dedent("""\
2903- NAME="SLES"\n
2904- VERSION="12-SP3"\n
2905- VERSION_ID="12.3"\n
2906- PRETTY_NAME="SUSE Linux Enterprise Server 12 SP3"\n
2907- ID="sles"\nANSI_COLOR="0;32"\n
2908- CPE_NAME="cpe:/o:suse:sles:12:sp3"\n
2909+ NAME="SLES"
2910+ VERSION="12-SP3"
2911+ VERSION_ID="12.3"
2912+ PRETTY_NAME="SUSE Linux Enterprise Server 12 SP3"
2913+ ID="sles"
2914+ ANSI_COLOR="0;32"
2915+ CPE_NAME="cpe:/o:suse:sles:12:sp3"
2916 """)
2917
2918 OS_RELEASE_OPENSUSE = dedent("""\
2919-NAME="openSUSE Leap"
2920-VERSION="42.3"
2921-ID=opensuse
2922-ID_LIKE="suse"
2923-VERSION_ID="42.3"
2924-PRETTY_NAME="openSUSE Leap 42.3"
2925-ANSI_COLOR="0;32"
2926-CPE_NAME="cpe:/o:opensuse:leap:42.3"
2927-BUG_REPORT_URL="https://bugs.opensuse.org"
2928-HOME_URL="https://www.opensuse.org/"
2929+ NAME="openSUSE Leap"
2930+ VERSION="42.3"
2931+ ID=opensuse
2932+ ID_LIKE="suse"
2933+ VERSION_ID="42.3"
2934+ PRETTY_NAME="openSUSE Leap 42.3"
2935+ ANSI_COLOR="0;32"
2936+ CPE_NAME="cpe:/o:opensuse:leap:42.3"
2937+ BUG_REPORT_URL="https://bugs.opensuse.org"
2938+ HOME_URL="https://www.opensuse.org/"
2939+""")
2940+
2941+OS_RELEASE_OPENSUSE_L15 = dedent("""\
2942+ NAME="openSUSE Leap"
2943+ VERSION="15.0"
2944+ ID="opensuse-leap"
2945+ ID_LIKE="suse opensuse"
2946+ VERSION_ID="15.0"
2947+ PRETTY_NAME="openSUSE Leap 15.0"
2948+ ANSI_COLOR="0;32"
2949+ CPE_NAME="cpe:/o:opensuse:leap:15.0"
2950+ BUG_REPORT_URL="https://bugs.opensuse.org"
2951+ HOME_URL="https://www.opensuse.org/"
2952+""")
2953+
2954+OS_RELEASE_OPENSUSE_TW = dedent("""\
2955+ NAME="openSUSE Tumbleweed"
2956+ ID="opensuse-tumbleweed"
2957+ ID_LIKE="opensuse suse"
2958+ VERSION_ID="20180920"
2959+ PRETTY_NAME="openSUSE Tumbleweed"
2960+ ANSI_COLOR="0;32"
2961+ CPE_NAME="cpe:/o:opensuse:tumbleweed:20180920"
2962+ BUG_REPORT_URL="https://bugs.opensuse.org"
2963+ HOME_URL="https://www.opensuse.org/"
2964 """)
2965
2966 OS_RELEASE_CENTOS = dedent("""\
2967@@ -447,12 +473,35 @@ class TestGetLinuxDistro(CiTestCase):
2968
2969 @mock.patch('cloudinit.util.load_file')
2970 def test_get_linux_opensuse(self, m_os_release, m_path_exists):
2971- """Verify we get the correct name and machine arch on OpenSUSE."""
2972+ """Verify we get the correct name and machine arch on openSUSE
2973+ prior to openSUSE Leap 15.
2974+ """
2975 m_os_release.return_value = OS_RELEASE_OPENSUSE
2976 m_path_exists.side_effect = TestGetLinuxDistro.os_release_exists
2977 dist = util.get_linux_distro()
2978 self.assertEqual(('opensuse', '42.3', platform.machine()), dist)
2979
2980+ @mock.patch('cloudinit.util.load_file')
2981+ def test_get_linux_opensuse_l15(self, m_os_release, m_path_exists):
2982+ """Verify we get the correct name and machine arch on openSUSE
2983+ for openSUSE Leap 15.0 and later.
2984+ """
2985+ m_os_release.return_value = OS_RELEASE_OPENSUSE_L15
2986+ m_path_exists.side_effect = TestGetLinuxDistro.os_release_exists
2987+ dist = util.get_linux_distro()
2988+ self.assertEqual(('opensuse-leap', '15.0', platform.machine()), dist)
2989+
2990+ @mock.patch('cloudinit.util.load_file')
2991+ def test_get_linux_opensuse_tw(self, m_os_release, m_path_exists):
2992+ """Verify we get the correct name and machine arch on openSUSE
2993+ for openSUSE Tumbleweed
2994+ """
2995+ m_os_release.return_value = OS_RELEASE_OPENSUSE_TW
2996+ m_path_exists.side_effect = TestGetLinuxDistro.os_release_exists
2997+ dist = util.get_linux_distro()
2998+ self.assertEqual(
2999+ ('opensuse-tumbleweed', '20180920', platform.machine()), dist)
3000+
3001 @mock.patch('platform.dist')
3002 def test_get_linux_distro_no_data(self, m_platform_dist, m_path_exists):
3003 """Verify we get no information if os-release does not exist"""
3004diff --git a/cloudinit/url_helper.py b/cloudinit/url_helper.py
3005index 8067979..396d69a 100644
3006--- a/cloudinit/url_helper.py
3007+++ b/cloudinit/url_helper.py
3008@@ -199,7 +199,7 @@ def _get_ssl_args(url, ssl_details):
3009 def readurl(url, data=None, timeout=None, retries=0, sec_between=1,
3010 headers=None, headers_cb=None, ssl_details=None,
3011 check_status=True, allow_redirects=True, exception_cb=None,
3012- session=None, infinite=False):
3013+ session=None, infinite=False, log_req_resp=True):
3014 url = _cleanurl(url)
3015 req_args = {
3016 'url': url,
3017@@ -256,9 +256,11 @@ def readurl(url, data=None, timeout=None, retries=0, sec_between=1,
3018 continue
3019 filtered_req_args[k] = v
3020 try:
3021- LOG.debug("[%s/%s] open '%s' with %s configuration", i,
3022- "infinite" if infinite else manual_tries, url,
3023- filtered_req_args)
3024+
3025+ if log_req_resp:
3026+ LOG.debug("[%s/%s] open '%s' with %s configuration", i,
3027+ "infinite" if infinite else manual_tries, url,
3028+ filtered_req_args)
3029
3030 if session is None:
3031 session = requests.Session()
3032@@ -294,8 +296,11 @@ def readurl(url, data=None, timeout=None, retries=0, sec_between=1,
3033 break
3034 if (infinite and sec_between > 0) or \
3035 (i + 1 < manual_tries and sec_between > 0):
3036- LOG.debug("Please wait %s seconds while we wait to try again",
3037- sec_between)
3038+
3039+ if log_req_resp:
3040+ LOG.debug(
3041+ "Please wait %s seconds while we wait to try again",
3042+ sec_between)
3043 time.sleep(sec_between)
3044 if excps:
3045 raise excps[-1]
3046@@ -549,4 +554,18 @@ def oauth_headers(url, consumer_key, token_key, token_secret, consumer_secret,
3047 _uri, signed_headers, _body = client.sign(url)
3048 return signed_headers
3049
3050+
3051+def retry_on_url_exc(msg, exc):
3052+ """readurl exception_cb that will retry on NOT_FOUND and Timeout.
3053+
3054+ Returns False to raise the exception from readurl, True to retry.
3055+ """
3056+ if not isinstance(exc, UrlError):
3057+ return False
3058+ if exc.code == NOT_FOUND:
3059+ return True
3060+ if exc.cause and isinstance(exc.cause, requests.Timeout):
3061+ return True
3062+ return False
3063+
3064 # vi: ts=4 expandtab
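retry_on_url_exc is designed to be passed as readurl's exception_cb, so the request loop retries on 404s and timeouts but raises everything else. A standalone sketch of the predicate, with stand-in UrlError/Timeout classes (the real ones live in cloudinit.url_helper and requests):

```python
NOT_FOUND = 404


class Timeout(Exception):
    """Stand-in for requests.Timeout in this sketch."""


class UrlError(IOError):
    """Simplified stand-in for cloudinit.url_helper.UrlError."""
    def __init__(self, cause, code=None):
        super(UrlError, self).__init__(str(cause))
        self.cause = cause
        self.code = code


def retry_on_url_exc(msg, exc):
    """readurl exception_cb: True to retry, False to raise."""
    if not isinstance(exc, UrlError):
        return False
    if exc.code == NOT_FOUND:
        return True
    return bool(exc.cause) and isinstance(exc.cause, Timeout)
```

A caller would wire it up along the lines of readurl(url, exception_cb=retry_on_url_exc, retries=5), matching how the diff's DataSourceAzure changes consume it.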
3065diff --git a/cloudinit/util.py b/cloudinit/util.py
3066index c67d6be..a8a232b 100644
3067--- a/cloudinit/util.py
3068+++ b/cloudinit/util.py
3069@@ -615,8 +615,8 @@ def get_linux_distro():
3070 distro_name = os_release.get('ID', '')
3071 distro_version = os_release.get('VERSION_ID', '')
3072 if 'sles' in distro_name or 'suse' in distro_name:
3073- # RELEASE_BLOCKER: We will drop this sles ivergent behavior in
3074- # before 18.4 so that get_linux_distro returns a named tuple
3075+ # RELEASE_BLOCKER: We will drop this sles divergent behavior in
3076+ # the future so that get_linux_distro returns a named tuple
3077 # which will include both version codename and architecture
3078 # on all distributions.
3079 flavor = platform.machine()
3080@@ -668,7 +668,8 @@ def system_info():
3081 var = 'ubuntu'
3082 elif linux_dist == 'redhat':
3083 var = 'rhel'
3084- elif linux_dist in ('opensuse', 'sles'):
3085+ elif linux_dist in (
3086+ 'opensuse', 'opensuse-tumbleweed', 'opensuse-leap', 'sles'):
3087 var = 'suse'
3088 else:
3089 var = 'linux'
3090@@ -2875,4 +2876,20 @@ def udevadm_settle(exists=None, timeout=None):
3091 return subp(settle_cmd)
3092
3093
3094+def get_proc_ppid(pid):
3095+ """
3096+ Return the parent pid of a process.
3097+ """
 3098+ ppid, contents = 0, ''
3099+ try:
3100+ contents = load_file("/proc/%s/stat" % pid, quiet=True)
3101+ except IOError as e:
3102+ LOG.warning('Failed to load /proc/%s/stat. %s', pid, e)
3103+ if contents:
3104+ parts = contents.split(" ", 4)
3105+ # man proc says
3106+ # ppid %d (4) The PID of the parent.
3107+ ppid = int(parts[3])
3108+ return ppid
3109+
3110 # vi: ts=4 expandtab
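The parsing behind the new `util.get_proc_ppid()` above can be sketched in isolation. Per proc(5), the fourth space-separated field of `/proc/<pid>/stat` is the parent PID; like the helper above, this naive split assumes the comm field contains no spaces:

```python
# Sketch of the /proc/<pid>/stat parsing used by util.get_proc_ppid().
def parse_ppid(stat_contents):
    """Return the ppid field from the contents of /proc/<pid>/stat."""
    parts = stat_contents.split(" ", 4)
    # man proc says: ppid %d (4) The PID of the parent.
    return int(parts[3])


# Illustrative stat line: pid (comm) state ppid ...
print(parse_ppid("1234 (dhclient) S 1 1234 1234 0 -1 4194560"))  # 1
```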
3111diff --git a/cloudinit/version.py b/cloudinit/version.py
3112index 844a02e..a2c5d43 100644
3113--- a/cloudinit/version.py
3114+++ b/cloudinit/version.py
3115@@ -4,7 +4,7 @@
3116 #
3117 # This file is part of cloud-init. See LICENSE file for license information.
3118
3119-__VERSION__ = "18.4"
3120+__VERSION__ = "18.5"
3121 _PACKAGED_VERSION = '@@PACKAGED_VERSION@@'
3122
3123 FEATURES = [
3124diff --git a/config/cloud.cfg.tmpl b/config/cloud.cfg.tmpl
3125index 1fef133..7513176 100644
3126--- a/config/cloud.cfg.tmpl
3127+++ b/config/cloud.cfg.tmpl
3128@@ -167,7 +167,17 @@ system_info:
3129 - http://%(availability_zone)s.clouds.archive.ubuntu.com/ubuntu/
3130 - http://%(region)s.clouds.archive.ubuntu.com/ubuntu/
3131 security: []
3132- - arches: [armhf, armel, default]
3133+ - arches: [arm64, armel, armhf]
3134+ failsafe:
3135+ primary: http://ports.ubuntu.com/ubuntu-ports
3136+ security: http://ports.ubuntu.com/ubuntu-ports
3137+ search:
3138+ primary:
3139+ - http://%(ec2_region)s.ec2.ports.ubuntu.com/ubuntu-ports/
3140+ - http://%(availability_zone)s.clouds.ports.ubuntu.com/ubuntu-ports/
3141+ - http://%(region)s.clouds.ports.ubuntu.com/ubuntu-ports/
3142+ security: []
3143+ - arches: [default]
3144 failsafe:
3145 primary: http://ports.ubuntu.com/ubuntu-ports
3146 security: http://ports.ubuntu.com/ubuntu-ports
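The mirror `search` URLs in the cloud.cfg template above are %-style Python format templates that cloud-init expands with instance placement data. A minimal sketch of that substitution (the region value is illustrative):

```python
# Hedged sketch: expand mirror search templates the way cloud-init's
# %-substitution does. The region value below is illustrative only.
templates = [
    "http://%(ec2_region)s.ec2.ports.ubuntu.com/ubuntu-ports/",
    "http://%(region)s.clouds.ports.ubuntu.com/ubuntu-ports/",
]
subst = {'ec2_region': 'us-east-1', 'region': 'us-east-1'}
for tmpl in templates:
    print(tmpl % subst)
```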
3147diff --git a/debian/changelog b/debian/changelog
3148index 117fd16..f5bb1fa 100644
3149--- a/debian/changelog
3150+++ b/debian/changelog
3151@@ -1,3 +1,73 @@
3152+cloud-init (18.5-17-gd1a2fe73-0ubuntu1~18.10.1) cosmic; urgency=medium
3153+
3154+ * New upstream snapshot. (LP: #1813346)
3155+ - opennebula: exclude EPOCHREALTIME as known bash env variable with a
3156+ delta
3157+ - tox: fix disco httpretty dependencies for py37
3158+ - run-container: uncomment baseurl in yum.repos.d/*.repo when using a
3159+ proxy [Paride Legovini]
3160+ - lxd: install zfs-linux instead of zfs meta package
3161+ [Johnson Shi]
3162+ - net/sysconfig: do not write a resolv.conf file with only the header.
3163+ [Robert Schweikert]
3164+ - net: Make sysconfig renderer compatible with Network Manager.
3165+ [Eduardo Otubo]
3166+ - cc_set_passwords: Fix regex when parsing hashed passwords
3167+ [Marlin Cremers]
3168+ - net: Wait for dhclient to daemonize before reading lease file
3169+ [Jason Zions]
3170+ - [Azure] Increase retries when talking to Wireserver during metadata walk
3171+ [Jason Zions]
3172+ - Add documentation on adding a datasource.
3173+ - doc: clean up some datasource documentation.
3174+ - ds-identify: fix wrong variable name in ovf_vmware_transport_guestinfo.
3175+ - Scaleway: Support ssh keys provided inside an instance tag. [PORTE Loïc]
3176+ - OVF: simplify expected return values of transport functions.
3177+ - Vmware: Add support for the com.vmware.guestInfo OVF transport.
3178+ - HACKING.rst: change contact info to Josh Powers
3179+ - Update to pylint 2.2.2.
3180+ - Release 18.5
3181+ - tests: add Disco release [Joshua Powers]
3182+ - net: render 'metric' values in per-subnet routes
3183+ - write_files: add support for appending to files. [James Baxter]
3184+ - config: On ubuntu select cloud archive mirrors for armel, armhf, arm64.
3185+ - dhclient-hook: cleanups, tests and fix a bug on 'down' event.
3186+ - NoCloud: Allow top level 'network' key in network-config.
3187+ - ovf: Fix ovf network config generation gateway/routes
3188+ - azure: detect vnet migration via netlink media change event
3189+ [Tamilmani Manoharan]
3190+ - Azure: fix copy/paste error in error handling when reading azure ovf.
3191+ [Adam DePue]
3192+ - tests: fix incorrect order of mocks in test_handle_zfs_root.
3193+ - doc: Change dns_nameserver property to dns_nameservers. [Tomer Cohen]
3194+ - OVF: identify iso9660 filesystems with label 'OVF ENV'.
3195+ - logs: collect-logs ignore instance-data-sensitive.json on non-root user
3196+ - net: Ephemeral*Network: add connectivity check via URL
3197+ - azure: _poll_imds only retry on 404. Fail on Timeout
3198+ - resizefs: Prefix discovered devpath with '/dev/' when path does not
3199+ exist [Igor Galić]
3200+ - azure: retry imds polling on requests.Timeout
3201+ - azure: Accept variation in error msg from mount for ntfs volumes
3202+ [Jason Zions]
3203+ - azure: fix regression introduced when persisting ephemeral dhcp lease
3204+ [Aswin Rajamannar]
3205+ - azure: add udev rules to create cloud-init Gen2 disk name symlinks
3206+ - tests: ec2 mock missing httpretty user-data and instance-identity routes
3207+ - azure: remove /etc/netplan/90-hotplug-azure.yaml when net from IMDS
3208+ - azure: report ready to fabric after reprovision and reduce logging
3209+ [Aswin Rajamannar]
3210+ - query: better error when missing read permission on instance-data
3211+ - instance-data: fallback to instance-data.json if sensitive is absent.
3212+ - docs: remove colon from network v1 config example. [Tomer Cohen]
3213+ - Add cloud-id binary to packages for SUSE [Jason Zions]
3214+ - systemd: On SUSE ensure cloud-init.service runs before wicked
3215+ [Robert Schweikert]
3216+ - update detection of openSUSE variants [Robert Schweikert]
3217+ - azure: Add apply_network_config option to disable network from IMDS
3218+ - Correct spelling in an error message (udevadm). [Katie McLaughlin]
3219+
3220+ -- Chad Smith <chad.smith@canonical.com> Sat, 26 Jan 2019 13:57:43 -0700
3221+
3222 cloud-init (18.4-7-g4652b196-0ubuntu1) cosmic; urgency=medium
3223
3224 * New upstream snapshot.
3225diff --git a/doc/rtd/topics/datasources.rst b/doc/rtd/topics/datasources.rst
3226index e34f145..648c606 100644
3227--- a/doc/rtd/topics/datasources.rst
3228+++ b/doc/rtd/topics/datasources.rst
3229@@ -18,7 +18,7 @@ single way to access the different cloud systems methods to provide this data
3230 through the typical usage of subclasses.
3231
3232 Any metadata processed by cloud-init's datasources is persisted as
3233-``/run/cloud0-init/instance-data.json``. Cloud-init provides tooling
3234+``/run/cloud-init/instance-data.json``. Cloud-init provides tooling
3235 to quickly introspect some of that data. See :ref:`instance_metadata` for
3236 more information.
3237
3238@@ -80,6 +80,65 @@ The current interface that a datasource object must provide is the following:
3239 def get_package_mirror_info(self)
3240
3241
3242+Adding a new Datasource
3243+-----------------------
3244+The datasource objects have a few touch points with cloud-init. If you
3245+are interested in adding a new datasource for your cloud platform you'll
3246+need to take care of the following items:
3247+
3248+* **Identify a mechanism for positive identification of the platform**:
3249+ It is good practice for a cloud platform to positively identify itself
3250+ to the guest. This allows the guest to make educated decisions based
3251+ on the platform on which it is running. On the x86 and arm64 architectures,
3252+ many clouds identify themselves through DMI data. For example,
3253+ Oracle's public cloud provides the string 'OracleCloud.com' in the
3254+ DMI chassis-asset field.
3255+
3256+ cloud-init enabled images produce a log file with details about the
3257+ platform. Reading through this log in ``/run/cloud-init/ds-identify.log``
3258+ may provide the information needed to uniquely identify the platform.
3259+ If the log is not present, you can generate it by running from source
3260+ ``./tools/ds-identify`` or the installed location
3261+ ``/usr/lib/cloud-init/ds-identify``.
3262+
3263+ The mechanism used to identify the platform will be required for the
3264+ ds-identify and datasource module sections below.
3265+
3266+* **Add datasource module ``cloudinit/sources/DataSource<CloudPlatform>.py``**:
3267+ It is suggested that you start by copying one of the simpler datasources
3268+ such as DataSourceHetzner.
3269+
3270+* **Add tests for datasource module**:
3271+ Add a new file with some tests for the module to
3272+ ``cloudinit/sources/tests/test_<yourplatform>.py``. For example, see
3273+ ``cloudinit/sources/tests/test_oracle.py``.
3274+
3275+* **Update ds-identify**: In systemd systems, ds-identify is used to detect
3276+ which datasource should be enabled or if cloud-init should run at all.
3277+ You'll need to make changes to ``tools/ds-identify``.
3278+
3279+* **Add tests for ds-identify**: Add relevant tests in a new class to
3280+ ``tests/unittests/test_ds_identify.py``. You can use ``TestOracle`` as an
3281+ example.
3282+
3283+* **Add your datasource name to the builtin list of datasources:** Add
3284+ your datasource module name to the end of the ``datasource_list``
3285+ entry in ``cloudinit/settings.py``.
3286+
3287+* **Add your cloud platform to apport collection prompts:** Update the
3288+ list of cloud platforms in ``cloudinit/apport.py``. This list will be
3289+ provided to the user who invokes ``ubuntu-bug cloud-init``.
3290+
3291+* **Enable datasource by default in ubuntu packaging branches:**
3292+ Ubuntu packaging branches contain a template file
3293+ ``debian/cloud-init.templates`` that ultimately sets the default
3294+ datasource_list when installed via package. This file needs updating when
3295+ the commit gets into a package.
3296+
3297+* **Add documentation for your datasource**: You should add a new
3298+ file in ``doc/rtd/topics/datasources/<cloudplatform>.rst``.
3299+
3300+
3301 Datasource Documentation
3302 ========================
3303 The following is a list of the implemented datasources.
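The checklist above can be made concrete with a bare-bones module sketch. This is illustrative only: `DataSource` below is a trivial stand-in for `cloudinit.sources.DataSource` (the real base class provides caching, paths, and distro plumbing), and `DataSourceMyCloud` and its metadata values are hypothetical names:

```python
# Illustrative-only skeleton of a new datasource module.
class DataSource:
    """Trivial stand-in for cloudinit.sources.DataSource."""
    def __init__(self, sys_cfg, distro, paths):
        self.metadata = {}
        self.userdata_raw = None


class DataSourceMyCloud(DataSource):
    dsname = 'MyCloud'  # hypothetical; also added to settings.py datasource_list

    def _get_data(self):
        # A real datasource would first positively identify the platform
        # (e.g. via a DMI field), then crawl its metadata service.
        self.metadata = {'instance-id': 'i-abc123'}
        self.userdata_raw = '#cloud-config\n'
        return True


ds = DataSourceMyCloud({}, None, None)
print(ds._get_data())  # True
```

Starting from a simple in-tree example such as DataSourceHetzner, as the text suggests, gives you the real base-class wiring for free.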
3304diff --git a/doc/rtd/topics/datasources/azure.rst b/doc/rtd/topics/datasources/azure.rst
3305index 559011e..720a475 100644
3306--- a/doc/rtd/topics/datasources/azure.rst
3307+++ b/doc/rtd/topics/datasources/azure.rst
3308@@ -23,18 +23,18 @@ information in json format to /run/cloud-init/dhclient.hook/<interface>.json.
3309 In order for cloud-init to leverage this method to find the endpoint, the
3310 cloud.cfg file must contain:
3311
3312-datasource:
3313- Azure:
3314- set_hostname: False
3315- agent_command: __builtin__
3316+.. sourcecode:: yaml
3317+
3318+ datasource:
3319+ Azure:
3320+ set_hostname: False
3321+ agent_command: __builtin__
3322
3323 If those files are not available, the fallback is to check the leases file
3324 for the endpoint server (again option 245).
3325
3326 You can define the path to the lease file with the 'dhclient_lease_file'
3327-configuration. The default value is /var/lib/dhcp/dhclient.eth0.leases.
3328-
3329- dhclient_lease_file: /var/lib/dhcp/dhclient.eth0.leases
3330+configuration.
3331
3332 walinuxagent
3333 ------------
3334@@ -57,6 +57,64 @@ in order to use waagent.conf with cloud-init, the following settings are recomme
3335 ResourceDisk.MountPoint=/mnt
3336
3337
3338+Configuration
3339+-------------
3340+The following configuration can be set for the datasource in system
3341+configuration (in ``/etc/cloud/cloud.cfg`` or ``/etc/cloud/cloud.cfg.d/``).
3342+
3343+The settings that may be configured are:
3344+
3345+ * **agent_command**: Either __builtin__ (default) or a command to run to get
3346+ metadata. If __builtin__, get metadata from walinuxagent. Otherwise run the
3347+ provided command to obtain metadata.
3348+ * **apply_network_config**: Boolean set to True to use network configuration
3349+ described by Azure's IMDS endpoint instead of fallback network config of
3350+ dhcp on eth0. Default is True. For Ubuntu 16.04 or earlier, default is False.
3351+ * **data_dir**: Path used to read metadata files and write crawled data.
3352+ * **dhclient_lease_file**: The fallback lease file to source when looking for
3353+ custom DHCP option 245 from Azure fabric.
3354+ * **disk_aliases**: A dictionary defining which device paths should be
3355+ interpreted as ephemeral images. See cc_disk_setup module for more info.
3356+ * **hostname_bounce**: A dictionary controlling Azure hostname bounce
3357+ behavior in reaction to metadata changes. Azure will throttle ifup/down in
3358+ some cases after metadata has been updated to inform the dhcp server about
3359+ updated hostnames. The '``hostname_bounce: command``' entry can be either
3360+ the literal string 'builtin' or a command to execute. The command will be
3361+ invoked after the hostname is set, and will have the 'interface' in its
3362+ environment. If ``set_hostname`` is not true, then ``hostname_bounce``
3363+ will be ignored. An example might be:
3364+
3365+ ``command: ["sh", "-c", "killall dhclient; dhclient $interface"]``
3368+ * **set_hostname**: Boolean set to True when we want Azure to set the hostname
3369+ based on metadata.
3370+
3371+Configuration for the datasource can also be read from a
3372+``dscfg`` entry in the ``LinuxProvisioningConfigurationSet``. Content in
3373+dscfg node is expected to be base64 encoded yaml content, and it will be
3374+merged into the 'datasource: Azure' entry.
3375+
3376+An example configuration with the default values is provided below:
3377+
3378+.. sourcecode:: yaml
3379+
3380+ datasource:
3381+ Azure:
3382+ agent_command: __builtin__
3383+ apply_network_config: true
3384+ data_dir: /var/lib/waagent
3385+ dhclient_lease_file: /var/lib/dhcp/dhclient.eth0.leases
3386+ disk_aliases:
3387+ ephemeral0: /dev/disk/cloud/azure_resource
3388+ hostname_bounce:
3389+ interface: eth0
3390+ command: builtin
3391+ policy: true
3392+ hostname_command: hostname
3393+ set_hostname: true
3394+
3395+
3396 Userdata
3397 --------
3398 Userdata is provided to cloud-init inside the ovf-env.xml file. Cloud-init
3399@@ -97,37 +155,6 @@ Example:
3400 </LinuxProvisioningConfigurationSet>
3401 </wa:ProvisioningSection>
3402
3403-Configuration
3404--------------
3405-Configuration for the datasource can be read from the system config's or set
3406-via the `dscfg` entry in the `LinuxProvisioningConfigurationSet`. Content in
3407-dscfg node is expected to be base64 encoded yaml content, and it will be
3408-merged into the 'datasource: Azure' entry.
3409-
3410-The '``hostname_bounce: command``' entry can be either the literal string
3411-'builtin' or a command to execute. The command will be invoked after the
3412-hostname is set, and will have the 'interface' in its environment. If
3413-``set_hostname`` is not true, then ``hostname_bounce`` will be ignored.
3414-
3415-An example might be:
3416- command: ["sh", "-c", "killall dhclient; dhclient $interface"]
3417-
3418-.. code:: yaml
3419-
3420- datasource:
3421- agent_command
3422- Azure:
3423- agent_command: [service, walinuxagent, start]
3424- set_hostname: True
3425- hostname_bounce:
3426- # the name of the interface to bounce
3427- interface: eth0
3428- # policy can be 'on', 'off' or 'force'
3429- policy: on
3430- # the method 'bounce' command.
3431- command: "builtin"
3432- hostname_command: "hostname"
3433-
3434 hostname
3435 --------
3436 When the user launches an instance, they provide a hostname for that instance.
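As the Configuration section above notes, the `dscfg` node in `LinuxProvisioningConfigurationSet` carries base64-encoded YAML that gets merged into the `datasource: Azure` entry. A minimal round-trip sketch using only the stdlib (the YAML payload is illustrative):

```python
# Hedged sketch: base64-encode a datasource override the way a dscfg
# node would carry it. The YAML payload below is illustrative only.
import base64

yaml_cfg = "datasource:\n  Azure:\n    apply_network_config: false\n"
encoded = base64.b64encode(yaml_cfg.encode('utf-8')).decode('ascii')
decoded = base64.b64decode(encoded).decode('utf-8')
print(decoded == yaml_cfg)  # True
```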
3437diff --git a/doc/rtd/topics/network-config-format-v1.rst b/doc/rtd/topics/network-config-format-v1.rst
3438index 3b0148c..9723d68 100644
3439--- a/doc/rtd/topics/network-config-format-v1.rst
3440+++ b/doc/rtd/topics/network-config-format-v1.rst
3441@@ -384,7 +384,7 @@ Valid keys for ``subnets`` include the following:
3442 - ``address``: IPv4 or IPv6 address. It may include CIDR netmask notation.
3443 - ``netmask``: IPv4 subnet mask in dotted format or CIDR notation.
3444 - ``gateway``: IPv4 address of the default gateway for this subnet.
3445-- ``dns_nameserver``: Specify a list of IPv4 dns server IPs to end up in
3446+- ``dns_nameservers``: Specify a list of IPv4 dns server IPs to end up in
3447 resolv.conf.
3448 - ``dns_search``: Specify a list of search paths to be included in
3449 resolv.conf.
3450diff --git a/packages/redhat/cloud-init.spec.in b/packages/redhat/cloud-init.spec.in
3451index a3a6d1e..6b2022b 100644
3452--- a/packages/redhat/cloud-init.spec.in
3453+++ b/packages/redhat/cloud-init.spec.in
3454@@ -191,6 +191,7 @@ fi
3455
3456 # Program binaries
3457 %{_bindir}/cloud-init*
3458+%{_bindir}/cloud-id*
3459
3460 # Docs
3461 %doc LICENSE ChangeLog TODO.rst requirements.txt
3462diff --git a/packages/suse/cloud-init.spec.in b/packages/suse/cloud-init.spec.in
3463index e781d74..26894b3 100644
3464--- a/packages/suse/cloud-init.spec.in
3465+++ b/packages/suse/cloud-init.spec.in
3466@@ -93,6 +93,7 @@ version_pys=$(cd "%{buildroot}" && find . -name version.py -type f)
3467
3468 # Program binaries
3469 %{_bindir}/cloud-init*
3470+%{_bindir}/cloud-id*
3471
3472 # systemd files
3473 /usr/lib/systemd/system-generators/*
3474diff --git a/systemd/cloud-init.service.tmpl b/systemd/cloud-init.service.tmpl
3475index b92e8ab..5cb0037 100644
3476--- a/systemd/cloud-init.service.tmpl
3477+++ b/systemd/cloud-init.service.tmpl
3478@@ -14,8 +14,7 @@ After=networking.service
3479 After=network.service
3480 {% endif %}
3481 {% if variant in ["suse"] %}
3482-Requires=wicked.service
3483-After=wicked.service
3484+Before=wicked.service
3485 # setting hostname via hostnamectl depends on dbus, which otherwise
3486 # would not be guaranteed at this point.
3487 After=dbus.service
3488diff --git a/tests/cloud_tests/releases.yaml b/tests/cloud_tests/releases.yaml
3489index defae02..ec5da72 100644
3490--- a/tests/cloud_tests/releases.yaml
3491+++ b/tests/cloud_tests/releases.yaml
3492@@ -129,6 +129,22 @@ features:
3493
3494 releases:
3495 # UBUNTU =================================================================
3496+ disco:
3497+ # EOL: Jan 2020
3498+ default:
3499+ enabled: true
3500+ release: disco
3501+ version: 19.04
3502+ os: ubuntu
3503+ feature_groups:
3504+ - base
3505+ - debian_base
3506+ - ubuntu_specific
3507+ lxd:
3508+ sstreams_server: https://cloud-images.ubuntu.com/daily
3509+ alias: disco
3510+ setup_overrides: null
3511+ override_templates: false
3512 cosmic:
3513 # EOL: Jul 2019
3514 default:
3515diff --git a/tests/unittests/test_builtin_handlers.py b/tests/unittests/test_builtin_handlers.py
3516index abe820e..b92ffc7 100644
3517--- a/tests/unittests/test_builtin_handlers.py
3518+++ b/tests/unittests/test_builtin_handlers.py
3519@@ -3,6 +3,7 @@
3520 """Tests of the built-in user data handlers."""
3521
3522 import copy
3523+import errno
3524 import os
3525 import shutil
3526 import tempfile
3527@@ -202,6 +203,30 @@ class TestJinjaTemplatePartHandler(CiTestCase):
3528 os.path.exists(script_file),
3529 'Unexpected file created %s' % script_file)
3530
3531+ def test_jinja_template_handle_errors_on_unreadable_instance_data(self):
3532+ """If instance-data is unreadable, raise an error from handle_part."""
3533+ script_handler = ShellScriptPartHandler(self.paths)
3534+ instance_json = os.path.join(self.run_dir, 'instance-data.json')
3535+ util.write_file(instance_json, util.json_dumps({}))
3536+ h = JinjaTemplatePartHandler(
3537+ self.paths, sub_handlers=[script_handler])
3538+ with mock.patch(self.mpath + 'load_file') as m_load:
3539+ with self.assertRaises(RuntimeError) as context_manager:
3540+ m_load.side_effect = OSError(errno.EACCES, 'Not allowed')
3541+ h.handle_part(
3542+ data='data', ctype="!" + handlers.CONTENT_START,
3543+ filename='part01',
3544+ payload='## template: jinja \n#!/bin/bash\necho himom',
3545+ frequency='freq', headers='headers')
3546+ script_file = os.path.join(script_handler.script_dir, 'part01')
3547+ self.assertEqual(
3548+ 'Cannot render jinja template vars. No read permission on'
3549+ " '{rdir}/instance-data.json'. Try sudo".format(rdir=self.run_dir),
3550+ str(context_manager.exception))
3551+ self.assertFalse(
3552+ os.path.exists(script_file),
3553+ 'Unexpected file created %s' % script_file)
3554+
3555 @skipUnlessJinja()
3556 def test_jinja_template_handle_renders_jinja_content(self):
3557 """When present, render jinja variables from instance-data.json."""
3558diff --git a/tests/unittests/test_cli.py b/tests/unittests/test_cli.py
3559index 199d69b..d283f13 100644
3560--- a/tests/unittests/test_cli.py
3561+++ b/tests/unittests/test_cli.py
3562@@ -246,18 +246,18 @@ class TestCLI(test_helpers.FilesystemMockingTestCase):
3563 self.assertEqual('cc_ntp', parseargs.name)
3564 self.assertFalse(parseargs.report)
3565
3566- @mock.patch('cloudinit.cmd.main.dhclient_hook')
3567- def test_dhclient_hook_subcommand(self, m_dhclient_hook):
3568+ @mock.patch('cloudinit.cmd.main.dhclient_hook.handle_args')
3569+ def test_dhclient_hook_subcommand(self, m_handle_args):
3570 """The subcommand 'dhclient-hook' calls dhclient_hook with args."""
3571- self._call_main(['cloud-init', 'dhclient-hook', 'net_action', 'eth0'])
3572- (name, parseargs) = m_dhclient_hook.call_args_list[0][0]
3573- self.assertEqual('dhclient_hook', name)
3574+ self._call_main(['cloud-init', 'dhclient-hook', 'up', 'eth0'])
3575+ (name, parseargs) = m_handle_args.call_args_list[0][0]
3576+ self.assertEqual('dhclient-hook', name)
3577 self.assertEqual('dhclient-hook', parseargs.subcommand)
3578- self.assertEqual('dhclient_hook', parseargs.action[0])
3579+ self.assertEqual('dhclient-hook', parseargs.action[0])
3580 self.assertFalse(parseargs.debug)
3581 self.assertFalse(parseargs.force)
3582- self.assertEqual('net_action', parseargs.net_action)
3583- self.assertEqual('eth0', parseargs.net_interface)
3584+ self.assertEqual('up', parseargs.event)
3585+ self.assertEqual('eth0', parseargs.interface)
3586
3587 @mock.patch('cloudinit.cmd.main.main_features')
3588 def test_features_hook_subcommand(self, m_features):
3589diff --git a/tests/unittests/test_datasource/test_azure.py b/tests/unittests/test_datasource/test_azure.py
3590index 0f4b7bf..417d86a 100644
3591--- a/tests/unittests/test_datasource/test_azure.py
3592+++ b/tests/unittests/test_datasource/test_azure.py
3593@@ -17,6 +17,7 @@ import crypt
3594 import httpretty
3595 import json
3596 import os
3597+import requests
3598 import stat
3599 import xml.etree.ElementTree as ET
3600 import yaml
3601@@ -184,6 +185,35 @@ class TestGetMetadataFromIMDS(HttprettyTestCase):
3602 "Crawl of Azure Instance Metadata Service (IMDS) took", # log_time
3603 self.logs.getvalue())
3604
3605+ @mock.patch('requests.Session.request')
3606+ @mock.patch('cloudinit.url_helper.time.sleep')
3607+ @mock.patch(MOCKPATH + 'net.is_up')
3608+ def test_get_metadata_from_imds_retries_on_timeout(
3609+ self, m_net_is_up, m_sleep, m_request):
3610+ """Retry IMDS network metadata on timeout errors."""
3611+
3612+ self.attempt = 0
3613+ m_request.side_effect = requests.Timeout('Fake Connection Timeout')
3614+
3615+ def retry_callback(request, uri, headers):
3616+ self.attempt += 1
3617+ raise requests.Timeout('Fake connection timeout')
3618+
3619+ httpretty.register_uri(
3620+ httpretty.GET,
3621+ dsaz.IMDS_URL + 'instance?api-version=2017-12-01',
3622+ body=retry_callback)
3623+
3624+ m_net_is_up.return_value = True # skips dhcp
3625+
3626+ self.assertEqual({}, dsaz.get_metadata_from_imds('eth9', retries=3))
3627+
3628+ m_net_is_up.assert_called_with('eth9')
3629+ self.assertEqual([mock.call(1)]*3, m_sleep.call_args_list)
3630+ self.assertIn(
3631+ "Crawl of Azure Instance Metadata Service (IMDS) took", # log_time
3632+ self.logs.getvalue())
3633+
3634
3635 class TestAzureDataSource(CiTestCase):
3636
3637@@ -256,7 +286,8 @@ scbus-1 on xpt0 bus 0
3638 ])
3639 return dsaz
3640
3641- def _get_ds(self, data, agent_command=None, distro=None):
3642+ def _get_ds(self, data, agent_command=None, distro=None,
3643+ apply_network=None):
3644
3645 def dsdevs():
3646 return data.get('dsdevs', [])
3647@@ -312,6 +343,8 @@ scbus-1 on xpt0 bus 0
3648 data.get('sys_cfg', {}), distro=distro, paths=self.paths)
3649 if agent_command is not None:
3650 dsrc.ds_cfg['agent_command'] = agent_command
3651+ if apply_network is not None:
3652+ dsrc.ds_cfg['apply_network_config'] = apply_network
3653
3654 return dsrc
3655
3656@@ -434,14 +467,26 @@ fdescfs /dev/fd fdescfs rw 0 0
3657
3658 def test_get_data_on_ubuntu_will_remove_network_scripts(self):
3659 """get_data will remove ubuntu net scripts on Ubuntu distro."""
3660+ sys_cfg = {'datasource': {'Azure': {'apply_network_config': True}}}
3661 odata = {'HostName': "myhost", 'UserName': "myuser"}
3662 data = {'ovfcontent': construct_valid_ovf_env(data=odata),
3663- 'sys_cfg': {}}
3664+ 'sys_cfg': sys_cfg}
3665
3666 dsrc = self._get_ds(data, distro='ubuntu')
3667 dsrc.get_data()
3668 self.m_remove_ubuntu_network_scripts.assert_called_once_with()
3669
3670+ def test_get_data_on_ubuntu_will_not_remove_network_scripts_disabled(self):
3671+ """When apply_network_config false, do not remove scripts on Ubuntu."""
3672+ sys_cfg = {'datasource': {'Azure': {'apply_network_config': False}}}
3673+ odata = {'HostName': "myhost", 'UserName': "myuser"}
3674+ data = {'ovfcontent': construct_valid_ovf_env(data=odata),
3675+ 'sys_cfg': sys_cfg}
3676+
3677+ dsrc = self._get_ds(data, distro='ubuntu')
3678+ dsrc.get_data()
3679+ self.m_remove_ubuntu_network_scripts.assert_not_called()
3680+
3681 def test_crawl_metadata_returns_structured_data_and_caches_nothing(self):
3682 """Return all structured metadata and cache no class attributes."""
3683 yaml_cfg = "{agent_command: my_command}\n"
3684@@ -498,6 +543,61 @@ fdescfs /dev/fd fdescfs rw 0 0
3685 dsrc.crawl_metadata()
3686 self.assertEqual(str(cm.exception), error_msg)
3687
3688+ @mock.patch('cloudinit.sources.DataSourceAzure.EphemeralDHCPv4')
3689+ @mock.patch('cloudinit.sources.DataSourceAzure.util.write_file')
3690+ @mock.patch(
3691+ 'cloudinit.sources.DataSourceAzure.DataSourceAzure._report_ready')
3692+ @mock.patch('cloudinit.sources.DataSourceAzure.DataSourceAzure._poll_imds')
3693+ def test_crawl_metadata_on_reprovision_reports_ready(
3694+ self, poll_imds_func,
3695+ report_ready_func,
3696+ m_write, m_dhcp):
3697+ """If reprovisioning, report ready at the end"""
3698+ ovfenv = construct_valid_ovf_env(
3699+ platform_settings={"PreprovisionedVm": "True"})
3700+
3701+ data = {'ovfcontent': ovfenv,
3702+ 'sys_cfg': {}}
3703+ dsrc = self._get_ds(data)
3704+ poll_imds_func.return_value = ovfenv
3705+ dsrc.crawl_metadata()
3706+ self.assertEqual(1, report_ready_func.call_count)
3707+
3708+ @mock.patch('cloudinit.sources.DataSourceAzure.util.write_file')
3709+ @mock.patch('cloudinit.sources.helpers.netlink.'
3710+ 'wait_for_media_disconnect_connect')
3711+ @mock.patch(
3712+ 'cloudinit.sources.DataSourceAzure.DataSourceAzure._report_ready')
3713+ @mock.patch('cloudinit.net.dhcp.EphemeralIPv4Network')
3714+ @mock.patch('cloudinit.net.dhcp.maybe_perform_dhcp_discovery')
3715+ @mock.patch('cloudinit.sources.DataSourceAzure.readurl')
3716+ def test_crawl_metadata_on_reprovision_reports_ready_using_lease(
3717+ self, m_readurl, m_dhcp,
3718+ m_net, report_ready_func,
3719+ m_media_switch, m_write):
3720+ """If reprovisioning, report ready using the obtained lease"""
3721+ ovfenv = construct_valid_ovf_env(
3722+ platform_settings={"PreprovisionedVm": "True"})
3723+
3724+ data = {'ovfcontent': ovfenv,
3725+ 'sys_cfg': {}}
3726+ dsrc = self._get_ds(data)
3727+
3728+ lease = {
3729+ 'interface': 'eth9', 'fixed-address': '192.168.2.9',
3730+ 'routers': '192.168.2.1', 'subnet-mask': '255.255.255.0',
3731+ 'unknown-245': '624c3620'}
3732+ m_dhcp.return_value = [lease]
3733+ m_media_switch.return_value = None
3734+
3735+ reprovision_ovfenv = construct_valid_ovf_env()
3736+ m_readurl.return_value = url_helper.StringResponse(
3737+ reprovision_ovfenv.encode('utf-8'))
3738+
3739+ dsrc.crawl_metadata()
3740+ self.assertEqual(2, report_ready_func.call_count)
3741+ report_ready_func.assert_called_with(lease=lease)
3742+
3743 def test_waagent_d_has_0700_perms(self):
3744 # we expect /var/lib/waagent to be created 0700
3745 dsrc = self._get_ds({'ovfcontent': construct_valid_ovf_env()})
3746@@ -523,8 +623,10 @@ fdescfs /dev/fd fdescfs rw 0 0
3747
3748 def test_network_config_set_from_imds(self):
3749 """Datasource.network_config returns IMDS network data."""
3750+ sys_cfg = {'datasource': {'Azure': {'apply_network_config': True}}}
3751 odata = {}
3752- data = {'ovfcontent': construct_valid_ovf_env(data=odata)}
3753+ data = {'ovfcontent': construct_valid_ovf_env(data=odata),
3754+ 'sys_cfg': sys_cfg}
3755 expected_network_config = {
3756 'ethernets': {
3757 'eth0': {'set-name': 'eth0',
3758@@ -803,9 +905,10 @@ fdescfs /dev/fd fdescfs rw 0 0
3759 @mock.patch('cloudinit.net.generate_fallback_config')
3760 def test_imds_network_config(self, mock_fallback):
3761 """Network config is generated from IMDS network data when present."""
3762+ sys_cfg = {'datasource': {'Azure': {'apply_network_config': True}}}
3763 odata = {'HostName': "myhost", 'UserName': "myuser"}
3764 data = {'ovfcontent': construct_valid_ovf_env(data=odata),
3765- 'sys_cfg': {}}
3766+ 'sys_cfg': sys_cfg}
3767
3768 dsrc = self._get_ds(data)
3769 ret = dsrc.get_data()
3770@@ -825,6 +928,36 @@ fdescfs /dev/fd fdescfs rw 0 0
3771 @mock.patch('cloudinit.net.get_devicelist')
3772 @mock.patch('cloudinit.net.device_driver')
3773 @mock.patch('cloudinit.net.generate_fallback_config')
3774+ def test_imds_network_ignored_when_apply_network_config_false(
3775+ self, mock_fallback, mock_dd, mock_devlist, mock_get_mac):
3776+ """When apply_network_config is False, use fallback instead of IMDS."""
3777+ sys_cfg = {'datasource': {'Azure': {'apply_network_config': False}}}
3778+ odata = {'HostName': "myhost", 'UserName': "myuser"}
3779+ data = {'ovfcontent': construct_valid_ovf_env(data=odata),
3780+ 'sys_cfg': sys_cfg}
3781+ fallback_config = {
3782+ 'version': 1,
3783+ 'config': [{
3784+ 'type': 'physical', 'name': 'eth0',
3785+ 'mac_address': '00:11:22:33:44:55',
3786+ 'params': {'driver': 'hv_netsvc'},
3787+ 'subnets': [{'type': 'dhcp'}],
3788+ }]
3789+ }
3790+ mock_fallback.return_value = fallback_config
3791+
3792+ mock_devlist.return_value = ['eth0']
3793+ mock_dd.return_value = ['hv_netsvc']
3794+ mock_get_mac.return_value = '00:11:22:33:44:55'
3795+
3796+ dsrc = self._get_ds(data)
3797+ self.assertTrue(dsrc.get_data())
3798+ self.assertEqual(dsrc.network_config, fallback_config)
3799+
3800+ @mock.patch('cloudinit.net.get_interface_mac')
3801+ @mock.patch('cloudinit.net.get_devicelist')
3802+ @mock.patch('cloudinit.net.device_driver')
3803+ @mock.patch('cloudinit.net.generate_fallback_config')
3804 def test_fallback_network_config(self, mock_fallback, mock_dd,
3805 mock_devlist, mock_get_mac):
3806 """On absent IMDS network data, generate network fallback config."""
3807@@ -1411,21 +1544,20 @@ class TestCanDevBeReformatted(CiTestCase):
3808 '/dev/sda1': {'num': 1, 'fs': 'ntfs', 'files': []}
3809 }}})
3810
3811- err = ("Unexpected error while running command.\n",
3812- "Command: ['mount', '-o', 'ro,sync', '-t', 'auto', ",
3813- "'/dev/sda1', '/fake-tmp/dir']\n"
3814- "Exit code: 32\n"
3815- "Reason: -\n"
3816- "Stdout: -\n"
3817- "Stderr: mount: unknown filesystem type 'ntfs'")
3818- self.m_mount_cb.side_effect = MountFailedError(
3819- 'Failed mounting %s to %s due to: %s' %
3820- ('/dev/sda', '/fake-tmp/dir', err))
3821-
3822- value, msg = dsaz.can_dev_be_reformatted('/dev/sda',
3823- preserve_ntfs=False)
3824- self.assertTrue(value)
3825- self.assertIn('cannot mount NTFS, assuming', msg)
3826+ error_msgs = [
3827+ "Stderr: mount: unknown filesystem type 'ntfs'", # RHEL
3828+ "Stderr: mount: /dev/sdb1: unknown filesystem type 'ntfs'" # SLES
3829+ ]
3830+
3831+ for err_msg in error_msgs:
3832+ self.m_mount_cb.side_effect = MountFailedError(
3833+ "Failed mounting %s to %s due to: \nUnexpected.\n%s" %
3834+ ('/dev/sda', '/fake-tmp/dir', err_msg))
3835+
3836+ value, msg = dsaz.can_dev_be_reformatted('/dev/sda',
3837+ preserve_ntfs=False)
3838+ self.assertTrue(value)
3839+ self.assertIn('cannot mount NTFS, assuming', msg)
3840
3841 def test_never_destroy_ntfs_config_false(self):
3842 """Normally formattable situation with never_destroy_ntfs set."""
3843@@ -1547,6 +1679,8 @@ class TestPreprovisioningShouldReprovision(CiTestCase):
3844
3845 @mock.patch('cloudinit.net.dhcp.EphemeralIPv4Network')
3846 @mock.patch('cloudinit.net.dhcp.maybe_perform_dhcp_discovery')
3847+@mock.patch('cloudinit.sources.helpers.netlink.'
3848+ 'wait_for_media_disconnect_connect')
3849 @mock.patch('requests.Session.request')
3850 @mock.patch(MOCKPATH + 'DataSourceAzure._report_ready')
3851 class TestPreprovisioningPollIMDS(CiTestCase):
3852@@ -1558,25 +1692,49 @@ class TestPreprovisioningPollIMDS(CiTestCase):
3853 self.paths = helpers.Paths({'cloud_dir': self.tmp})
3854 dsaz.BUILTIN_DS_CONFIG['data_dir'] = self.waagent_d
3855
3856- @mock.patch(MOCKPATH + 'util.write_file')
3857- def test_poll_imds_calls_report_ready(self, write_f, report_ready_func,
3858- fake_resp, m_dhcp, m_net):
3859- """The poll_imds will call report_ready after creating marker file."""
3860- report_marker = self.tmp_path('report_marker', self.tmp)
3861+ @mock.patch(MOCKPATH + 'EphemeralDHCPv4')
3862+ def test_poll_imds_re_dhcp_on_timeout(self, m_dhcpv4, report_ready_func,
3863+ fake_resp, m_media_switch, m_dhcp,
3864+ m_net):
3865+ """The poll_imds will retry DHCP on IMDS timeout."""
3866+ report_file = self.tmp_path('report_marker', self.tmp)
3867 lease = {
3868 'interface': 'eth9', 'fixed-address': '192.168.2.9',
3869 'routers': '192.168.2.1', 'subnet-mask': '255.255.255.0',
3870 'unknown-245': '624c3620'}
3871 m_dhcp.return_value = [lease]
3872+ m_media_switch.return_value = None
3873+ dhcp_ctx = mock.MagicMock(lease=lease)
3874+ dhcp_ctx.obtain_lease.return_value = lease
3875+ m_dhcpv4.return_value = dhcp_ctx
3876+
3877+ self.tries = 0
3878+
3879+ def fake_timeout_once(**kwargs):
3880+ self.tries += 1
3881+ if self.tries == 1:
3882+ raise requests.Timeout('Fake connection timeout')
3883+ elif self.tries == 2:
3884+ response = requests.Response()
3885+ response.status_code = 404
3886+ raise requests.exceptions.HTTPError(
3887+ "fake 404", response=response)
3888+            # The third try should succeed and stop further retries or re-dhcp
3889+ return mock.MagicMock(status_code=200, text="good", content="good")
3890+
3891+ fake_resp.side_effect = fake_timeout_once
3892+
3893 dsa = dsaz.DataSourceAzure({}, distro=None, paths=self.paths)
3894- mock_path = (MOCKPATH + 'REPORTED_READY_MARKER_FILE')
3895- with mock.patch(mock_path, report_marker):
3896+ with mock.patch(MOCKPATH + 'REPORTED_READY_MARKER_FILE', report_file):
3897 dsa._poll_imds()
3898 self.assertEqual(report_ready_func.call_count, 1)
3899 report_ready_func.assert_called_with(lease=lease)
3900+ self.assertEqual(3, m_dhcpv4.call_count, 'Expected 3 DHCP calls')
3901+ self.assertEqual(3, self.tries, 'Expected 3 total reads from IMDS')
3902
3903- def test_poll_imds_report_ready_false(self, report_ready_func,
3904- fake_resp, m_dhcp, m_net):
3905+ def test_poll_imds_report_ready_false(self,
3906+ report_ready_func, fake_resp,
3907+ m_media_switch, m_dhcp, m_net):
3908 """The poll_imds should not call reporting ready
3909 when flag is false"""
3910 report_file = self.tmp_path('report_marker', self.tmp)
3911@@ -1585,6 +1743,7 @@ class TestPreprovisioningPollIMDS(CiTestCase):
3912 'interface': 'eth9', 'fixed-address': '192.168.2.9',
3913 'routers': '192.168.2.1', 'subnet-mask': '255.255.255.0',
3914 'unknown-245': '624c3620'}]
3915+ m_media_switch.return_value = None
3916 dsa = dsaz.DataSourceAzure({}, distro=None, paths=self.paths)
3917 with mock.patch(MOCKPATH + 'REPORTED_READY_MARKER_FILE', report_file):
3918 dsa._poll_imds()
3919@@ -1594,6 +1753,8 @@ class TestPreprovisioningPollIMDS(CiTestCase):
3920 @mock.patch(MOCKPATH + 'util.subp')
3921 @mock.patch(MOCKPATH + 'util.write_file')
3922 @mock.patch(MOCKPATH + 'util.is_FreeBSD')
3923+@mock.patch('cloudinit.sources.helpers.netlink.'
3924+ 'wait_for_media_disconnect_connect')
3925 @mock.patch('cloudinit.net.dhcp.EphemeralIPv4Network')
3926 @mock.patch('cloudinit.net.dhcp.maybe_perform_dhcp_discovery')
3927 @mock.patch('requests.Session.request')
3928@@ -1606,10 +1767,13 @@ class TestAzureDataSourcePreprovisioning(CiTestCase):
3929 self.paths = helpers.Paths({'cloud_dir': tmp})
3930 dsaz.BUILTIN_DS_CONFIG['data_dir'] = self.waagent_d
3931
3932- def test_poll_imds_returns_ovf_env(self, fake_resp, m_dhcp, m_net,
3933+ def test_poll_imds_returns_ovf_env(self, fake_resp,
3934+ m_dhcp, m_net,
3935+ m_media_switch,
3936 m_is_bsd, write_f, subp):
3937 """The _poll_imds method should return the ovf_env.xml."""
3938 m_is_bsd.return_value = False
3939+ m_media_switch.return_value = None
3940 m_dhcp.return_value = [{
3941 'interface': 'eth9', 'fixed-address': '192.168.2.9',
3942 'routers': '192.168.2.1', 'subnet-mask': '255.255.255.0'}]
3943@@ -1627,16 +1791,19 @@ class TestAzureDataSourcePreprovisioning(CiTestCase):
3944 'Cloud-Init/%s' % vs()
3945 }, method='GET', timeout=1,
3946 url=full_url)])
3947- self.assertEqual(m_dhcp.call_count, 1)
3948+ self.assertEqual(m_dhcp.call_count, 2)
3949 m_net.assert_any_call(
3950 broadcast='192.168.2.255', interface='eth9', ip='192.168.2.9',
3951 prefix_or_mask='255.255.255.0', router='192.168.2.1')
3952- self.assertEqual(m_net.call_count, 1)
3953+ self.assertEqual(m_net.call_count, 2)
3954
3955- def test__reprovision_calls__poll_imds(self, fake_resp, m_dhcp, m_net,
3956+ def test__reprovision_calls__poll_imds(self, fake_resp,
3957+ m_dhcp, m_net,
3958+ m_media_switch,
3959 m_is_bsd, write_f, subp):
3960 """The _reprovision method should call poll IMDS."""
3961 m_is_bsd.return_value = False
3962+ m_media_switch.return_value = None
3963 m_dhcp.return_value = [{
3964 'interface': 'eth9', 'fixed-address': '192.168.2.9',
3965 'routers': '192.168.2.1', 'subnet-mask': '255.255.255.0',
3966@@ -1660,11 +1827,11 @@ class TestAzureDataSourcePreprovisioning(CiTestCase):
3967 'User-Agent':
3968 'Cloud-Init/%s' % vs()},
3969 method='GET', timeout=1, url=full_url)])
3970- self.assertEqual(m_dhcp.call_count, 1)
3971+ self.assertEqual(m_dhcp.call_count, 2)
3972 m_net.assert_any_call(
3973 broadcast='192.168.2.255', interface='eth9', ip='192.168.2.9',
3974 prefix_or_mask='255.255.255.0', router='192.168.2.1')
3975- self.assertEqual(m_net.call_count, 1)
3976+ self.assertEqual(m_net.call_count, 2)
3977
3978
3979 class TestRemoveUbuntuNetworkConfigScripts(CiTestCase):
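The `fake_timeout_once` side effect above simulates one timeout, then an HTTP 404, then success, and the test asserts one DHCP attempt per IMDS read. A minimal, self-contained sketch of that retry-with-side-effect pattern (the names here are illustrative, not cloud-init's API):

```python
def fetch_with_retry(request, attempts=3):
    """Call request() up to `attempts` times, retrying on ValueError
    (standing in for requests.Timeout/HTTPError in the test above)."""
    last_err = None
    for _ in range(attempts):
        try:
            return request()
        except ValueError as err:
            last_err = err
    raise last_err

calls = {'n': 0}

def flaky():
    # Fail on the first two calls, succeed on the third, mirroring
    # the fake_timeout_once helper in the Azure IMDS test.
    calls['n'] += 1
    if calls['n'] < 3:
        raise ValueError('transient failure %d' % calls['n'])
    return 'good'

result = fetch_with_retry(flaky)
```

The production code re-runs DHCP around each failed read, which is why the test expects three `EphemeralDHCPv4` calls for three IMDS reads.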
3980diff --git a/tests/unittests/test_datasource/test_ec2.py b/tests/unittests/test_datasource/test_ec2.py
3981index 9f81255..1a5956d 100644
3982--- a/tests/unittests/test_datasource/test_ec2.py
3983+++ b/tests/unittests/test_datasource/test_ec2.py
3984@@ -211,9 +211,9 @@ class TestEc2(test_helpers.HttprettyTestCase):
3985 self.metadata_addr = self.datasource.metadata_urls[0]
3986 self.tmp = self.tmp_dir()
3987
3988- def data_url(self, version):
3989+ def data_url(self, version, data_item='meta-data'):
3990 """Return a metadata url based on the version provided."""
3991- return '/'.join([self.metadata_addr, version, 'meta-data', ''])
3992+ return '/'.join([self.metadata_addr, version, data_item])
3993
3994 def _patch_add_cleanup(self, mpath, *args, **kwargs):
3995 p = mock.patch(mpath, *args, **kwargs)
3996@@ -238,10 +238,18 @@ class TestEc2(test_helpers.HttprettyTestCase):
3997 all_versions = (
3998 [ds.min_metadata_version] + ds.extended_metadata_versions)
3999 for version in all_versions:
4000- metadata_url = self.data_url(version)
4001+ metadata_url = self.data_url(version) + '/'
4002 if version == md_version:
4003 # Register all metadata for desired version
4004- register_mock_metaserver(metadata_url, md)
4005+ register_mock_metaserver(
4006+ metadata_url, md.get('md', DEFAULT_METADATA))
4007+ userdata_url = self.data_url(
4008+ version, data_item='user-data')
4009+ register_mock_metaserver(userdata_url, md.get('ud', ''))
4010+ identity_url = self.data_url(
4011+ version, data_item='dynamic/instance-identity')
4012+ register_mock_metaserver(
4013+ identity_url, md.get('id', DYNAMIC_METADATA))
4014 else:
4015 instance_id_url = metadata_url + 'instance-id'
4016 if version == ds.min_metadata_version:
4017@@ -261,7 +269,7 @@ class TestEc2(test_helpers.HttprettyTestCase):
4018 ds = self._setup_ds(
4019 platform_data=self.valid_platform_data,
4020 sys_cfg={'datasource': {'Ec2': {'strict_id': True}}},
4021- md=DEFAULT_METADATA)
4022+ md={'md': DEFAULT_METADATA})
4023 find_fallback_path = (
4024 'cloudinit.sources.DataSourceEc2.net.find_fallback_nic')
4025 with mock.patch(find_fallback_path) as m_find_fallback:
4026@@ -293,7 +301,7 @@ class TestEc2(test_helpers.HttprettyTestCase):
4027 ds = self._setup_ds(
4028 platform_data=self.valid_platform_data,
4029 sys_cfg={'datasource': {'Ec2': {'strict_id': True}}},
4030- md=DEFAULT_METADATA)
4031+ md={'md': DEFAULT_METADATA})
4032 find_fallback_path = (
4033 'cloudinit.sources.DataSourceEc2.net.find_fallback_nic')
4034 with mock.patch(find_fallback_path) as m_find_fallback:
4035@@ -322,7 +330,7 @@ class TestEc2(test_helpers.HttprettyTestCase):
4036 ds = self._setup_ds(
4037 platform_data=self.valid_platform_data,
4038 sys_cfg={'datasource': {'Ec2': {'strict_id': True}}},
4039- md=DEFAULT_METADATA)
4040+ md={'md': DEFAULT_METADATA})
4041 ds._network_config = {'cached': 'data'}
4042 self.assertEqual({'cached': 'data'}, ds.network_config)
4043
4044@@ -338,7 +346,7 @@ class TestEc2(test_helpers.HttprettyTestCase):
4045 ds = self._setup_ds(
4046 platform_data=self.valid_platform_data,
4047 sys_cfg={'datasource': {'Ec2': {'strict_id': True}}},
4048- md=old_metadata)
4049+ md={'md': old_metadata})
4050 self.assertTrue(ds.get_data())
4051 # Provide new revision of metadata that contains network data
4052 register_mock_metaserver(
4053@@ -372,7 +380,7 @@ class TestEc2(test_helpers.HttprettyTestCase):
4054 ds = self._setup_ds(
4055 platform_data=self.valid_platform_data,
4056 sys_cfg={'datasource': {'Ec2': {'strict_id': False}}},
4057- md=DEFAULT_METADATA)
4058+ md={'md': DEFAULT_METADATA})
4059 # Mock 404s on all versions except latest
4060 all_versions = (
4061 [ds.min_metadata_version] + ds.extended_metadata_versions)
4062@@ -399,7 +407,7 @@ class TestEc2(test_helpers.HttprettyTestCase):
4063 ds = self._setup_ds(
4064 platform_data=self.valid_platform_data,
4065 sys_cfg={'datasource': {'Ec2': {'strict_id': True}}},
4066- md=DEFAULT_METADATA)
4067+ md={'md': DEFAULT_METADATA})
4068 ret = ds.get_data()
4069 self.assertTrue(ret)
4070 self.assertEqual(0, m_dhcp.call_count)
4071@@ -412,7 +420,7 @@ class TestEc2(test_helpers.HttprettyTestCase):
4072 ds = self._setup_ds(
4073 platform_data=self.valid_platform_data,
4074 sys_cfg={'datasource': {'Ec2': {'strict_id': False}}},
4075- md=DEFAULT_METADATA)
4076+ md={'md': DEFAULT_METADATA})
4077 ret = ds.get_data()
4078 self.assertTrue(ret)
4079
4080@@ -422,7 +430,7 @@ class TestEc2(test_helpers.HttprettyTestCase):
4081 ds = self._setup_ds(
4082 platform_data={'uuid': uuid, 'uuid_source': 'dmi', 'serial': ''},
4083 sys_cfg={'datasource': {'Ec2': {'strict_id': True}}},
4084- md=DEFAULT_METADATA)
4085+ md={'md': DEFAULT_METADATA})
4086 ret = ds.get_data()
4087 self.assertFalse(ret)
4088
4089@@ -432,7 +440,7 @@ class TestEc2(test_helpers.HttprettyTestCase):
4090 ds = self._setup_ds(
4091 platform_data={'uuid': uuid, 'uuid_source': 'dmi', 'serial': ''},
4092 sys_cfg={'datasource': {'Ec2': {'strict_id': False}}},
4093- md=DEFAULT_METADATA)
4094+ md={'md': DEFAULT_METADATA})
4095 ret = ds.get_data()
4096 self.assertTrue(ret)
4097
4098@@ -442,7 +450,7 @@ class TestEc2(test_helpers.HttprettyTestCase):
4099 ds = self._setup_ds(
4100 platform_data=self.valid_platform_data,
4101 sys_cfg={'datasource': {'Ec2': {'strict_id': False}}},
4102- md=DEFAULT_METADATA)
4103+ md={'md': DEFAULT_METADATA})
4104 platform_attrs = [
4105 attr for attr in ec2.CloudNames.__dict__.keys()
4106 if not attr.startswith('__')]
4107@@ -469,7 +477,7 @@ class TestEc2(test_helpers.HttprettyTestCase):
4108 ds = self._setup_ds(
4109 platform_data=self.valid_platform_data,
4110 sys_cfg={'datasource': {'Ec2': {'strict_id': False}}},
4111- md=DEFAULT_METADATA)
4112+ md={'md': DEFAULT_METADATA})
4113 ret = ds.get_data()
4114 self.assertFalse(ret)
4115 self.assertIn(
4116@@ -499,7 +507,7 @@ class TestEc2(test_helpers.HttprettyTestCase):
4117 ds = self._setup_ds(
4118 platform_data=self.valid_platform_data,
4119 sys_cfg={'datasource': {'Ec2': {'strict_id': False}}},
4120- md=DEFAULT_METADATA)
4121+ md={'md': DEFAULT_METADATA})
4122
4123 ret = ds.get_data()
4124 self.assertTrue(ret)
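The EC2 changes above restructure the test fixture so a single `md` dict carries per-item payloads (`'md'`, `'ud'`, `'id'`) registered under distinct metadata URLs, with defaults filled in when a key is absent. A small sketch of that dispatch, under the assumption that these helper names mirror (but are not) the test module's own:

```python
DEFAULT_METADATA = {'instance-id': 'i-abc123'}   # stand-in defaults
DYNAMIC_METADATA = {'document': '{}'}

def data_url(base, version, data_item='meta-data'):
    """Mirror the reworked data_url helper: join the metadata address,
    API version and the requested item."""
    return '/'.join([base, version, data_item])

def register_urls(base, version, md):
    """Return a {url: payload} map for each data item, falling back to
    defaults the way the _setup_ds changes above do."""
    return {
        data_url(base, version) + '/': md.get('md', DEFAULT_METADATA),
        data_url(base, version, 'user-data'): md.get('ud', ''),
        data_url(base, version, 'dynamic/instance-identity'):
            md.get('id', DYNAMIC_METADATA),
    }

urls = register_urls('http://169.254.169.254', '2016-09-02',
                     {'md': {'hostname': 'ec2.host'}})
```

This is why every existing call site changes from `md=DEFAULT_METADATA` to `md={'md': DEFAULT_METADATA}`: the fixture now distinguishes metadata from user-data and instance-identity payloads.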
4125diff --git a/tests/unittests/test_datasource/test_nocloud.py b/tests/unittests/test_datasource/test_nocloud.py
4126index b6468b6..3429272 100644
4127--- a/tests/unittests/test_datasource/test_nocloud.py
4128+++ b/tests/unittests/test_datasource/test_nocloud.py
4129@@ -1,7 +1,10 @@
4130 # This file is part of cloud-init. See LICENSE file for license information.
4131
4132 from cloudinit import helpers
4133-from cloudinit.sources import DataSourceNoCloud
4134+from cloudinit.sources.DataSourceNoCloud import (
4135+ DataSourceNoCloud as dsNoCloud,
4136+ _maybe_remove_top_network,
4137+ parse_cmdline_data)
4138 from cloudinit import util
4139 from cloudinit.tests.helpers import CiTestCase, populate_dir, mock, ExitStack
4140
4141@@ -40,9 +43,7 @@ class TestNoCloudDataSource(CiTestCase):
4142 'datasource': {'NoCloud': {'fs_label': None}}
4143 }
4144
4145- ds = DataSourceNoCloud.DataSourceNoCloud
4146-
4147- dsrc = ds(sys_cfg=sys_cfg, distro=None, paths=self.paths)
4148+ dsrc = dsNoCloud(sys_cfg=sys_cfg, distro=None, paths=self.paths)
4149 ret = dsrc.get_data()
4150 self.assertEqual(dsrc.userdata_raw, ud)
4151 self.assertEqual(dsrc.metadata, md)
4152@@ -63,9 +64,7 @@ class TestNoCloudDataSource(CiTestCase):
4153 'datasource': {'NoCloud': {'fs_label': None}}
4154 }
4155
4156- ds = DataSourceNoCloud.DataSourceNoCloud
4157-
4158- dsrc = ds(sys_cfg=sys_cfg, distro=None, paths=self.paths)
4159+ dsrc = dsNoCloud(sys_cfg=sys_cfg, distro=None, paths=self.paths)
4160 self.assertTrue(dsrc.get_data())
4161 self.assertEqual(dsrc.platform_type, 'nocloud')
4162 self.assertEqual(
4163@@ -73,8 +72,6 @@ class TestNoCloudDataSource(CiTestCase):
4164
4165 def test_fs_label(self, m_is_lxd):
4166 # find_devs_with should not be called ff fs_label is None
4167- ds = DataSourceNoCloud.DataSourceNoCloud
4168-
4169 class PsuedoException(Exception):
4170 pass
4171
4172@@ -84,12 +81,12 @@ class TestNoCloudDataSource(CiTestCase):
4173
4174 # by default, NoCloud should search for filesystems by label
4175 sys_cfg = {'datasource': {'NoCloud': {}}}
4176- dsrc = ds(sys_cfg=sys_cfg, distro=None, paths=self.paths)
4177+ dsrc = dsNoCloud(sys_cfg=sys_cfg, distro=None, paths=self.paths)
4178 self.assertRaises(PsuedoException, dsrc.get_data)
4179
4180 # but disabling searching should just end up with None found
4181 sys_cfg = {'datasource': {'NoCloud': {'fs_label': None}}}
4182- dsrc = ds(sys_cfg=sys_cfg, distro=None, paths=self.paths)
4183+ dsrc = dsNoCloud(sys_cfg=sys_cfg, distro=None, paths=self.paths)
4184 ret = dsrc.get_data()
4185 self.assertFalse(ret)
4186
4187@@ -97,13 +94,10 @@ class TestNoCloudDataSource(CiTestCase):
4188 # no source should be found if no cmdline, config, and fs_label=None
4189 sys_cfg = {'datasource': {'NoCloud': {'fs_label': None}}}
4190
4191- ds = DataSourceNoCloud.DataSourceNoCloud
4192- dsrc = ds(sys_cfg=sys_cfg, distro=None, paths=self.paths)
4193+ dsrc = dsNoCloud(sys_cfg=sys_cfg, distro=None, paths=self.paths)
4194 self.assertFalse(dsrc.get_data())
4195
4196 def test_seed_in_config(self, m_is_lxd):
4197- ds = DataSourceNoCloud.DataSourceNoCloud
4198-
4199 data = {
4200 'fs_label': None,
4201 'meta-data': yaml.safe_dump({'instance-id': 'IID'}),
4202@@ -111,7 +105,7 @@ class TestNoCloudDataSource(CiTestCase):
4203 }
4204
4205 sys_cfg = {'datasource': {'NoCloud': data}}
4206- dsrc = ds(sys_cfg=sys_cfg, distro=None, paths=self.paths)
4207+ dsrc = dsNoCloud(sys_cfg=sys_cfg, distro=None, paths=self.paths)
4208 ret = dsrc.get_data()
4209 self.assertEqual(dsrc.userdata_raw, b"USER_DATA_RAW")
4210 self.assertEqual(dsrc.metadata.get('instance-id'), 'IID')
4211@@ -130,9 +124,7 @@ class TestNoCloudDataSource(CiTestCase):
4212 'datasource': {'NoCloud': {'fs_label': None}}
4213 }
4214
4215- ds = DataSourceNoCloud.DataSourceNoCloud
4216-
4217- dsrc = ds(sys_cfg=sys_cfg, distro=None, paths=self.paths)
4218+ dsrc = dsNoCloud(sys_cfg=sys_cfg, distro=None, paths=self.paths)
4219 ret = dsrc.get_data()
4220 self.assertEqual(dsrc.userdata_raw, ud)
4221 self.assertEqual(dsrc.metadata, md)
4222@@ -145,9 +137,7 @@ class TestNoCloudDataSource(CiTestCase):
4223
4224 sys_cfg = {'datasource': {'NoCloud': {'fs_label': None}}}
4225
4226- ds = DataSourceNoCloud.DataSourceNoCloud
4227-
4228- dsrc = ds(sys_cfg=sys_cfg, distro=None, paths=self.paths)
4229+ dsrc = dsNoCloud(sys_cfg=sys_cfg, distro=None, paths=self.paths)
4230 ret = dsrc.get_data()
4231 self.assertEqual(dsrc.userdata_raw, b"ud")
4232 self.assertFalse(dsrc.vendordata)
4233@@ -174,9 +164,7 @@ class TestNoCloudDataSource(CiTestCase):
4234
4235 sys_cfg = {'datasource': {'NoCloud': {'fs_label': None}}}
4236
4237- ds = DataSourceNoCloud.DataSourceNoCloud
4238-
4239- dsrc = ds(sys_cfg=sys_cfg, distro=None, paths=self.paths)
4240+ dsrc = dsNoCloud(sys_cfg=sys_cfg, distro=None, paths=self.paths)
4241 ret = dsrc.get_data()
4242 self.assertTrue(ret)
4243 # very simple check just for the strings above
4244@@ -195,9 +183,23 @@ class TestNoCloudDataSource(CiTestCase):
4245
4246 sys_cfg = {'datasource': {'NoCloud': {'fs_label': None}}}
4247
4248- ds = DataSourceNoCloud.DataSourceNoCloud
4249+ dsrc = dsNoCloud(sys_cfg=sys_cfg, distro=None, paths=self.paths)
4250+ ret = dsrc.get_data()
4251+ self.assertTrue(ret)
4252+ self.assertEqual(netconf, dsrc.network_config)
4253+
4254+ def test_metadata_network_config_with_toplevel_network(self, m_is_lxd):
4255+ """network-config may have 'network' top level key."""
4256+    def test_metadata_network_config_with_toplevel_network(self, m_is_lxd):
4257+        """network-config may have a 'network' top-level key."""
4257+ populate_dir(
4258+ os.path.join(self.paths.seed_dir, "nocloud"),
4259+ {'user-data': b"ud",
4260+ 'meta-data': "instance-id: IID\n",
4261+ 'network-config': yaml.dump({'network': netconf}) + "\n"})
4262+
4263+ sys_cfg = {'datasource': {'NoCloud': {'fs_label': None}}}
4264
4265- dsrc = ds(sys_cfg=sys_cfg, distro=None, paths=self.paths)
4266+ dsrc = dsNoCloud(sys_cfg=sys_cfg, distro=None, paths=self.paths)
4267 ret = dsrc.get_data()
4268 self.assertTrue(ret)
4269 self.assertEqual(netconf, dsrc.network_config)
4270@@ -228,9 +230,7 @@ class TestNoCloudDataSource(CiTestCase):
4271
4272 sys_cfg = {'datasource': {'NoCloud': {'fs_label': None}}}
4273
4274- ds = DataSourceNoCloud.DataSourceNoCloud
4275-
4276- dsrc = ds(sys_cfg=sys_cfg, distro=None, paths=self.paths)
4277+ dsrc = dsNoCloud(sys_cfg=sys_cfg, distro=None, paths=self.paths)
4278 ret = dsrc.get_data()
4279 self.assertTrue(ret)
4280 self.assertEqual(netconf, dsrc.network_config)
4281@@ -258,8 +258,7 @@ class TestParseCommandLineData(CiTestCase):
4282 for (fmt, expected) in pairs:
4283 fill = {}
4284 cmdline = fmt % {'ds_id': ds_id}
4285- ret = DataSourceNoCloud.parse_cmdline_data(ds_id=ds_id, fill=fill,
4286- cmdline=cmdline)
4287+ ret = parse_cmdline_data(ds_id=ds_id, fill=fill, cmdline=cmdline)
4288 self.assertEqual(expected, fill)
4289 self.assertTrue(ret)
4290
4291@@ -276,10 +275,43 @@ class TestParseCommandLineData(CiTestCase):
4292
4293 for cmdline in cmdlines:
4294 fill = {}
4295- ret = DataSourceNoCloud.parse_cmdline_data(ds_id=ds_id, fill=fill,
4296- cmdline=cmdline)
4297+ ret = parse_cmdline_data(ds_id=ds_id, fill=fill, cmdline=cmdline)
4298 self.assertEqual(fill, {})
4299 self.assertFalse(ret)
4300
4301
4302+class TestMaybeRemoveToplevelNetwork(CiTestCase):
4303+ """test _maybe_remove_top_network function."""
4304+ basecfg = [{'type': 'physical', 'name': 'interface0',
4305+ 'subnets': [{'type': 'dhcp'}]}]
4306+
4307+ def test_should_remove_safely(self):
4308+ mcfg = {'config': self.basecfg, 'version': 1}
4309+ self.assertEqual(mcfg, _maybe_remove_top_network({'network': mcfg}))
4310+
4311+ def test_no_remove_if_other_keys(self):
4312+        """should not shift if other keys are present at top level."""
4313+ mcfg = {'network': {'config': self.basecfg, 'version': 1},
4314+ 'unknown_keyname': 'keyval'}
4315+ self.assertEqual(mcfg, _maybe_remove_top_network(mcfg))
4316+
4317+ def test_no_remove_if_non_dict(self):
4318+ """should not shift if not a dict."""
4319+ mcfg = {'network': '"content here'}
4320+ self.assertEqual(mcfg, _maybe_remove_top_network(mcfg))
4321+
4322+ def test_no_remove_if_missing_config_or_version(self):
4323+ """should not shift unless network entry has config and version."""
4324+ mcfg = {'network': {'config': self.basecfg}}
4325+ self.assertEqual(mcfg, _maybe_remove_top_network(mcfg))
4326+
4327+ mcfg = {'network': {'version': 1}}
4328+ self.assertEqual(mcfg, _maybe_remove_top_network(mcfg))
4329+
4330+ def test_remove_with_config_disabled(self):
4331+ """network/config=disabled should be shifted."""
4332+ mcfg = {'config': 'disabled'}
4333+ self.assertEqual(mcfg, _maybe_remove_top_network({'network': mcfg}))
4334+
4335+
4336 # vi: ts=4 expandtab
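The `TestMaybeRemoveToplevelNetwork` cases above pin down when a wrapping `network:` key is unwrapped. An illustrative re-implementation of the behavior those cases exercise (this is a sketch of the contract, not cloud-init's `_maybe_remove_top_network` source):

```python
def maybe_remove_top_network(cfg):
    """Return cfg['network'] when cfg is a dict whose only key is
    'network' and the nested value looks like network config: it is
    'config: disabled', or carries both 'config' and 'version'.
    Otherwise return cfg unchanged."""
    if not isinstance(cfg, dict) or list(cfg.keys()) != ['network']:
        return cfg  # other top-level keys (or non-dict): do not shift
    nested = cfg['network']
    if not isinstance(nested, dict):
        return cfg  # e.g. a string payload: do not shift
    if nested.get('config') == 'disabled':
        return nested
    if 'config' in nested and 'version' in nested:
        return nested
    return cfg
```

Each branch corresponds to one test above: safe removal, other top-level keys, a non-dict value, missing `config`/`version`, and the `config: disabled` special case.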
4337diff --git a/tests/unittests/test_datasource/test_ovf.py b/tests/unittests/test_datasource/test_ovf.py
4338index a226c03..349d54c 100644
4339--- a/tests/unittests/test_datasource/test_ovf.py
4340+++ b/tests/unittests/test_datasource/test_ovf.py
4341@@ -17,6 +17,10 @@ from cloudinit.sources import DataSourceOVF as dsovf
4342 from cloudinit.sources.helpers.vmware.imc.config_custom_script import (
4343 CustomScriptNotFound)
4344
4345+MPATH = 'cloudinit.sources.DataSourceOVF.'
4346+
4347+NOT_FOUND = None
4348+
4349 OVF_ENV_CONTENT = """<?xml version="1.0" encoding="UTF-8"?>
4350 <Environment xmlns="http://schemas.dmtf.org/ovf/environment/1"
4351 xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
4352@@ -125,8 +129,8 @@ class TestDatasourceOVF(CiTestCase):
4353 retcode = wrap_and_call(
4354 'cloudinit.sources.DataSourceOVF',
4355 {'util.read_dmi_data': None,
4356- 'transport_iso9660': (False, None, None),
4357- 'transport_vmware_guestd': (False, None, None)},
4358+ 'transport_iso9660': NOT_FOUND,
4359+ 'transport_vmware_guestinfo': NOT_FOUND},
4360 ds.get_data)
4361 self.assertFalse(retcode, 'Expected False return from ds.get_data')
4362 self.assertIn(
4363@@ -141,8 +145,8 @@ class TestDatasourceOVF(CiTestCase):
4364 retcode = wrap_and_call(
4365 'cloudinit.sources.DataSourceOVF',
4366 {'util.read_dmi_data': 'vmware',
4367- 'transport_iso9660': (False, None, None),
4368- 'transport_vmware_guestd': (False, None, None)},
4369+ 'transport_iso9660': NOT_FOUND,
4370+ 'transport_vmware_guestinfo': NOT_FOUND},
4371 ds.get_data)
4372 self.assertFalse(retcode, 'Expected False return from ds.get_data')
4373 self.assertIn(
4374@@ -189,12 +193,11 @@ class TestDatasourceOVF(CiTestCase):
4375
4376 self.assertEqual('ovf', ds.cloud_name)
4377 self.assertEqual('ovf', ds.platform_type)
4378- MPATH = 'cloudinit.sources.DataSourceOVF.'
4379 with mock.patch(MPATH + 'util.read_dmi_data', return_value='!VMware'):
4380- with mock.patch(MPATH + 'transport_vmware_guestd') as m_guestd:
4381+ with mock.patch(MPATH + 'transport_vmware_guestinfo') as m_guestd:
4382 with mock.patch(MPATH + 'transport_iso9660') as m_iso9660:
4383- m_iso9660.return_value = (None, 'ignored', 'ignored')
4384- m_guestd.return_value = (None, 'ignored', 'ignored')
4385+ m_iso9660.return_value = NOT_FOUND
4386+ m_guestd.return_value = NOT_FOUND
4387 self.assertTrue(ds.get_data())
4388 self.assertEqual(
4389 'ovf (%s/seed/ovf-env.xml)' % self.tdir,
4390@@ -211,12 +214,11 @@ class TestDatasourceOVF(CiTestCase):
4391
4392 self.assertEqual('ovf', ds.cloud_name)
4393 self.assertEqual('ovf', ds.platform_type)
4394- MPATH = 'cloudinit.sources.DataSourceOVF.'
4395 with mock.patch(MPATH + 'util.read_dmi_data', return_value='VMWare'):
4396- with mock.patch(MPATH + 'transport_vmware_guestd') as m_guestd:
4397+ with mock.patch(MPATH + 'transport_vmware_guestinfo') as m_guestd:
4398 with mock.patch(MPATH + 'transport_iso9660') as m_iso9660:
4399- m_iso9660.return_value = (None, 'ignored', 'ignored')
4400- m_guestd.return_value = (None, 'ignored', 'ignored')
4401+ m_iso9660.return_value = NOT_FOUND
4402+ m_guestd.return_value = NOT_FOUND
4403 self.assertTrue(ds.get_data())
4404 self.assertEqual(
4405 'vmware (%s/seed/ovf-env.xml)' % self.tdir,
4406@@ -246,10 +248,7 @@ class TestTransportIso9660(CiTestCase):
4407 }
4408 self.m_mounts.return_value = mounts
4409
4410- (contents, fullp, fname) = dsovf.transport_iso9660()
4411- self.assertEqual("mycontent", contents)
4412- self.assertEqual("/dev/sr9", fullp)
4413- self.assertEqual("myfile", fname)
4414+ self.assertEqual("mycontent", dsovf.transport_iso9660())
4415
4416 def test_find_already_mounted_skips_non_iso9660(self):
4417 """Check we call get_ovf_env ignoring non iso9660"""
4418@@ -272,10 +271,7 @@ class TestTransportIso9660(CiTestCase):
4419 self.m_mounts.return_value = (
4420 OrderedDict(sorted(mounts.items(), key=lambda t: t[0])))
4421
4422- (contents, fullp, fname) = dsovf.transport_iso9660()
4423- self.assertEqual("mycontent", contents)
4424- self.assertEqual("/dev/xvdc", fullp)
4425- self.assertEqual("myfile", fname)
4426+ self.assertEqual("mycontent", dsovf.transport_iso9660())
4427
4428 def test_find_already_mounted_matches_kname(self):
4429 """Check we dont regex match on basename of the device"""
4430@@ -289,10 +285,7 @@ class TestTransportIso9660(CiTestCase):
4431 # we're skipping an entry which fails to match.
4432 self.m_mounts.return_value = mounts
4433
4434- (contents, fullp, fname) = dsovf.transport_iso9660()
4435- self.assertEqual(False, contents)
4436- self.assertIsNone(fullp)
4437- self.assertIsNone(fname)
4438+ self.assertEqual(NOT_FOUND, dsovf.transport_iso9660())
4439
4440 def test_mount_cb_called_on_blkdevs_with_iso9660(self):
4441 """Check we call mount_cb on blockdevs with iso9660 only"""
4442@@ -300,13 +293,9 @@ class TestTransportIso9660(CiTestCase):
4443 self.m_find_devs_with.return_value = ['/dev/sr0']
4444 self.m_mount_cb.return_value = ("myfile", "mycontent")
4445
4446- (contents, fullp, fname) = dsovf.transport_iso9660()
4447-
4448+ self.assertEqual("mycontent", dsovf.transport_iso9660())
4449 self.m_mount_cb.assert_called_with(
4450 "/dev/sr0", dsovf.get_ovf_env, mtype="iso9660")
4451- self.assertEqual("mycontent", contents)
4452- self.assertEqual("/dev/sr0", fullp)
4453- self.assertEqual("myfile", fname)
4454
4455 def test_mount_cb_called_on_blkdevs_with_iso9660_check_regex(self):
4456 """Check we call mount_cb on blockdevs with iso9660 and match regex"""
4457@@ -315,25 +304,17 @@ class TestTransportIso9660(CiTestCase):
4458 '/dev/abc', '/dev/my-cdrom', '/dev/sr0']
4459 self.m_mount_cb.return_value = ("myfile", "mycontent")
4460
4461- (contents, fullp, fname) = dsovf.transport_iso9660()
4462-
4463+ self.assertEqual("mycontent", dsovf.transport_iso9660())
4464 self.m_mount_cb.assert_called_with(
4465 "/dev/sr0", dsovf.get_ovf_env, mtype="iso9660")
4466- self.assertEqual("mycontent", contents)
4467- self.assertEqual("/dev/sr0", fullp)
4468- self.assertEqual("myfile", fname)
4469
4470 def test_mount_cb_not_called_no_matches(self):
4471 """Check we don't call mount_cb if nothing matches"""
4472 self.m_mounts.return_value = {}
4473 self.m_find_devs_with.return_value = ['/dev/vg/myovf']
4474
4475- (contents, fullp, fname) = dsovf.transport_iso9660()
4476-
4477+ self.assertEqual(NOT_FOUND, dsovf.transport_iso9660())
4478 self.assertEqual(0, self.m_mount_cb.call_count)
4479- self.assertEqual(False, contents)
4480- self.assertIsNone(fullp)
4481- self.assertIsNone(fname)
4482
4483 def test_mount_cb_called_require_iso_false(self):
4484 """Check we call mount_cb on blockdevs with require_iso=False"""
4485@@ -341,13 +322,11 @@ class TestTransportIso9660(CiTestCase):
4486 self.m_find_devs_with.return_value = ['/dev/xvdz']
4487 self.m_mount_cb.return_value = ("myfile", "mycontent")
4488
4489- (contents, fullp, fname) = dsovf.transport_iso9660(require_iso=False)
4490+ self.assertEqual(
4491+ "mycontent", dsovf.transport_iso9660(require_iso=False))
4492
4493 self.m_mount_cb.assert_called_with(
4494 "/dev/xvdz", dsovf.get_ovf_env, mtype=None)
4495- self.assertEqual("mycontent", contents)
4496- self.assertEqual("/dev/xvdz", fullp)
4497- self.assertEqual("myfile", fname)
4498
4499 def test_maybe_cdrom_device_none(self):
4500 """Test maybe_cdrom_device returns False for none/empty input"""
4501@@ -384,5 +363,62 @@ class TestTransportIso9660(CiTestCase):
4502 self.assertTrue(dsovf.maybe_cdrom_device('/dev/xvda1'))
4503 self.assertTrue(dsovf.maybe_cdrom_device('xvdza1'))
4504
4505+
4506+@mock.patch(MPATH + "util.which")
4507+@mock.patch(MPATH + "util.subp")
4508+class TestTransportVmwareGuestinfo(CiTestCase):
4509+ """Test the com.vmware.guestInfo transport implemented in
4510+ transport_vmware_guestinfo."""
4511+
4512+ rpctool = 'vmware-rpctool'
4513+ with_logs = True
4514+ rpctool_path = '/not/important/vmware-rpctool'
4515+
4516+ def test_without_vmware_rpctool_returns_notfound(self, m_subp, m_which):
4517+ m_which.return_value = None
4518+ self.assertEqual(NOT_FOUND, dsovf.transport_vmware_guestinfo())
4519+ self.assertEqual(0, m_subp.call_count,
4520+ "subp should not be called if no rpctool in path.")
4521+
4522+ def test_notfound_on_exit_code_1(self, m_subp, m_which):
4523+        """If vmware-rpctool exits 1, it must return not found."""
4524+ m_which.return_value = self.rpctool_path
4525+ m_subp.side_effect = util.ProcessExecutionError(
4526+ stdout="", stderr="No value found", exit_code=1, cmd=["unused"])
4527+ self.assertEqual(NOT_FOUND, dsovf.transport_vmware_guestinfo())
4528+ self.assertEqual(1, m_subp.call_count)
4529+ self.assertNotIn("WARNING", self.logs.getvalue(),
4530+ "exit code of 1 by rpctool should not cause warning.")
4531+
4532+ def test_notfound_if_no_content_but_exit_zero(self, m_subp, m_which):
4533+        """If vmware-rpctool exits 0 with no stdout, treat as normal not-found.
4534+
4535+        This isn't actually a case I've seen. Normally on "not found",
4536+ rpctool would exit 1 with 'No value found' on stderr. But cover
4537+ the case where it exited 0 and just wrote nothing to stdout.
4538+ """
4539+ m_which.return_value = self.rpctool_path
4540+ m_subp.return_value = ('', '')
4541+ self.assertEqual(NOT_FOUND, dsovf.transport_vmware_guestinfo())
4542+ self.assertEqual(1, m_subp.call_count)
4543+
4544+ def test_notfound_and_warns_on_unexpected_exit_code(self, m_subp, m_which):
4545+        """If vmware-rpctool exits other than 0 or 1, log a warning."""
4546+ m_which.return_value = self.rpctool_path
4547+ m_subp.side_effect = util.ProcessExecutionError(
4548+ stdout=None, stderr="No value found", exit_code=2, cmd=["unused"])
4549+ self.assertEqual(NOT_FOUND, dsovf.transport_vmware_guestinfo())
4550+ self.assertEqual(1, m_subp.call_count)
4551+ self.assertIn("WARNING", self.logs.getvalue(),
4552+ "exit code of 2 by rpctool should log WARNING.")
4553+
4554+ def test_found_when_guestinfo_present(self, m_subp, m_which):
4555+        """When there is ovf info, the transport should return it."""
4556+ m_which.return_value = self.rpctool_path
4557+ content = fill_properties({})
4558+ m_subp.return_value = (content, '')
4559+ self.assertEqual(content, dsovf.transport_vmware_guestinfo())
4560+ self.assertEqual(1, m_subp.call_count)
4561+
4562 #
4563 # vi: ts=4 expandtab
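The `TestTransportVmwareGuestinfo` cases above encode a content-or-None contract: the transport returns the ovf-env text when found, `None` when the tool is absent, exits 1, or emits nothing, and logs a warning only on unexpected exit codes. A hedged sketch of that contract (the command arguments and helper name here are assumptions for illustration, not cloud-init's implementation, which uses its own `util.subp`):

```python
import logging
import shutil
import subprocess

LOG = logging.getLogger(__name__)

def read_guestinfo_ovfenv(rpctool='vmware-rpctool'):
    """Return guestinfo ovf-env content as a string, or None when
    not found.  Exit code 1 means "no value" and stays quiet; any
    other non-zero exit logs a warning."""
    path = shutil.which(rpctool)
    if path is None:
        return None  # no rpctool on PATH: nothing to query
    proc = subprocess.run(
        [path, 'info-get', 'guestinfo.ovfEnv'],
        capture_output=True, text=True)
    if proc.returncode == 1:
        return None  # normal not-found, no warning
    if proc.returncode != 0:
        LOG.warning('%s exited %d: %s', rpctool,
                    proc.returncode, proc.stderr)
        return None
    return proc.stdout if proc.stdout else None
```

The `NOT_FOUND = None` constant introduced at the top of the OVF test module makes this contract explicit: the older API returned a `(contents, device, filename)` tuple, and these tests lock in the simpler return value.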
4564diff --git a/tests/unittests/test_datasource/test_scaleway.py b/tests/unittests/test_datasource/test_scaleway.py
4565index c2bc7a0..f96bf0a 100644
4566--- a/tests/unittests/test_datasource/test_scaleway.py
4567+++ b/tests/unittests/test_datasource/test_scaleway.py
4568@@ -49,6 +49,9 @@ class MetadataResponses(object):
4569 FAKE_METADATA = {
4570 'id': '00000000-0000-0000-0000-000000000000',
4571 'hostname': 'scaleway.host',
4572+ 'tags': [
4573+ "AUTHORIZED_KEY=ssh-rsa_AAAAB3NzaC1yc2EAAAADAQABDDDDD",
4574+ ],
4575 'ssh_public_keys': [{
4576 'key': 'ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABA',
4577 'fingerprint': '2048 06:ae:... login (RSA)'
4578@@ -204,10 +207,11 @@ class TestDataSourceScaleway(HttprettyTestCase):
4579
4580 self.assertEqual(self.datasource.get_instance_id(),
4581 MetadataResponses.FAKE_METADATA['id'])
4582- self.assertEqual(self.datasource.get_public_ssh_keys(), [
4583- elem['key'] for elem in
4584- MetadataResponses.FAKE_METADATA['ssh_public_keys']
4585- ])
4586+        self.assertEqual(
4587+            sorted(self.datasource.get_public_ssh_keys()),
4588+            sorted([u'ssh-rsa AAAAB3NzaC1yc2EAAAADAQABCCCCC',
4589+                    u'ssh-rsa AAAAB3NzaC1yc2EAAAADAQABDDDDD',
4590+                    u'ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABA']))
4591 self.assertEqual(self.datasource.get_hostname(),
4592 MetadataResponses.FAKE_METADATA['hostname'])
4593 self.assertEqual(self.datasource.get_userdata_raw(),
4594@@ -218,6 +222,70 @@ class TestDataSourceScaleway(HttprettyTestCase):
4595 self.assertIsNone(self.datasource.region)
4596 self.assertEqual(sleep.call_count, 0)
4597
4598+ def test_ssh_keys_empty(self):
4599+ """
4600+        get_public_ssh_keys() should return an empty list if no ssh keys
4601+        are available
4602+ """
4603+ self.datasource.metadata['tags'] = []
4604+ self.datasource.metadata['ssh_public_keys'] = []
4605+ self.assertEqual(self.datasource.get_public_ssh_keys(), [])
4606+
4607+ def test_ssh_keys_only_tags(self):
4608+ """
4609+        get_public_ssh_keys() should return a list of keys available in tags
4610+ """
4611+ self.datasource.metadata['tags'] = [
4612+ "AUTHORIZED_KEY=ssh-rsa_AAAAB3NzaC1yc2EAAAADAQABDDDDD",
4613+ "AUTHORIZED_KEY=ssh-rsa_AAAAB3NzaC1yc2EAAAADAQABCCCCC",
4614+ ]
4615+ self.datasource.metadata['ssh_public_keys'] = []
4616+        self.assertEqual(
4617+            sorted(self.datasource.get_public_ssh_keys()),
4618+            sorted([u'ssh-rsa AAAAB3NzaC1yc2EAAAADAQABDDDDD',
4619+                    u'ssh-rsa AAAAB3NzaC1yc2EAAAADAQABCCCCC']))
4620+
4621+ def test_ssh_keys_only_conf(self):
4622+ """
4623+        get_public_ssh_keys() should return a list of keys available in the
4624+        ssh_public_keys field
4625+ """
4626+ self.datasource.metadata['tags'] = []
4627+ self.datasource.metadata['ssh_public_keys'] = [{
4628+ 'key': 'ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABA',
4629+ 'fingerprint': '2048 06:ae:... login (RSA)'
4630+ }, {
4631+ 'key': 'ssh-rsa AAAAB3NzaC1yc2EAAAADAQABCCCCC',
4632+ 'fingerprint': '2048 06:ff:... login2 (RSA)'
4633+ }]
4634+        # only the two keys present in ssh_public_keys are expected
4635+        self.assertEqual(
4636+            sorted(self.datasource.get_public_ssh_keys()),
4637+            sorted([u'ssh-rsa AAAAB3NzaC1yc2EAAAADAQABCCCCC',
4638+                    u'ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABA']))
4639+
4640+ def test_ssh_keys_both(self):
4641+ """
4642+ get_public_ssh_keys() should return a merge of keys available
4643+ in ssh_public_keys and tags
4644+ """
4645+ self.datasource.metadata['tags'] = [
4646+ "AUTHORIZED_KEY=ssh-rsa_AAAAB3NzaC1yc2EAAAADAQABDDDDD",
4647+ ]
4648+
4649+ self.datasource.metadata['ssh_public_keys'] = [{
4650+ 'key': 'ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABA',
4651+ 'fingerprint': '2048 06:ae:... login (RSA)'
4652+ }, {
4653+ 'key': 'ssh-rsa AAAAB3NzaC1yc2EAAAADAQABCCCCC',
4654+ 'fingerprint': '2048 06:ff:... login2 (RSA)'
4655+ }]
4656+        self.assertEqual(
4657+            sorted(self.datasource.get_public_ssh_keys()),
4658+            sorted([u'ssh-rsa AAAAB3NzaC1yc2EAAAADAQABCCCCC',
4659+                    u'ssh-rsa AAAAB3NzaC1yc2EAAAADAQABDDDDD',
4660+                    u'ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABA']))
4661+
4662 @mock.patch('cloudinit.sources.DataSourceScaleway.EphemeralDHCPv4')
4663 @mock.patch('cloudinit.sources.DataSourceScaleway.SourceAddressAdapter',
4664 get_source_address_adapter)
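A note on the key-list assertions above: `list.sort()` sorts in place and returns `None`, so comparing the return values of `.sort()` compares `None == None` and can never fail, silently masking broken expectations; `sorted()` returns a new list and is safe to compare. A quick demonstration:

```python
# list.sort() sorts in place and returns None; sorted() returns a new
# list. An assertEqual(a.sort(), b.sort()) is therefore vacuous.
a = ['ddd', 'ccc']
b = ['zzz']
vacuous = (a.sort() == b.sort())  # True: both sides are None
meaningful = (sorted(['ddd', 'ccc']) == sorted(['ccc', 'ddd']))  # True
differs = (sorted(['zzz']) == sorted(['ccc', 'ddd']))  # False, as it should be
```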
4665diff --git a/tests/unittests/test_ds_identify.py b/tests/unittests/test_ds_identify.py
4666index 46778e9..756b4fb 100644
4667--- a/tests/unittests/test_ds_identify.py
4668+++ b/tests/unittests/test_ds_identify.py
4669@@ -138,6 +138,9 @@ class DsIdentifyBase(CiTestCase):
4670 {'name': 'detect_virt', 'RET': 'none', 'ret': 1},
4671 {'name': 'uname', 'out': UNAME_MYSYS},
4672 {'name': 'blkid', 'out': BLKID_EFI_ROOT},
4673+ {'name': 'ovf_vmware_transport_guestinfo',
4674+ 'out': 'No value found', 'ret': 1},
4675+
4676 ]
4677
4678 written = [d['name'] for d in mocks]
4679@@ -475,6 +478,10 @@ class TestDsIdentify(DsIdentifyBase):
4680 """OVF is identified when iso9660 cdrom path contains ovf schema."""
4681 self._test_ds_found('OVF')
4682
4683+ def test_ovf_on_vmware_guestinfo_found(self):
4684+ """OVF guest info is found on vmware."""
4685+ self._test_ds_found('OVF-guestinfo')
4686+
4687 def test_ovf_on_vmware_iso_found_when_vmware_customization(self):
4688 """OVF is identified when vmware customization is enabled."""
4689 self._test_ds_found('OVF-vmware-customization')
4690@@ -499,7 +506,7 @@ class TestDsIdentify(DsIdentifyBase):
4691
4692 # Add recognized labels
4693 valid_ovf_labels = ['ovf-transport', 'OVF-TRANSPORT',
4694- "OVFENV", "ovfenv"]
4695+ "OVFENV", "ovfenv", "OVF ENV", "ovf env"]
4696 for valid_ovf_label in valid_ovf_labels:
4697 ovf_cdrom_by_label['mocks'][0]['out'] = blkid_out([
4698 {'DEVNAME': 'sda1', 'TYPE': 'ext4', 'LABEL': 'rootfs'},
4699@@ -773,6 +780,14 @@ VALID_CFG = {
4700 'dev/sr0': 'pretend ovf iso has ' + OVF_MATCH_STRING + '\n',
4701 }
4702 },
4703+ 'OVF-guestinfo': {
4704+ 'ds': 'OVF',
4705+ 'mocks': [
4706+ {'name': 'ovf_vmware_transport_guestinfo', 'ret': 0,
4707+ 'out': '<?xml version="1.0" encoding="UTF-8"?>\n<Environment'},
4708+ MOCK_VIRT_IS_VMWARE,
4709+ ],
4710+ },
4711 'ConfigDrive': {
4712 'ds': 'ConfigDrive',
4713 'mocks': [
4714diff --git a/tests/unittests/test_handler/test_handler_lxd.py b/tests/unittests/test_handler/test_handler_lxd.py
4715index 2478ebc..b63db61 100644
4716--- a/tests/unittests/test_handler/test_handler_lxd.py
4717+++ b/tests/unittests/test_handler/test_handler_lxd.py
4718@@ -62,7 +62,7 @@ class TestLxd(t_help.CiTestCase):
4719 cc_lxd.handle('cc_lxd', self.lxd_cfg, cc, self.logger, [])
4720 self.assertFalse(m_maybe_clean.called)
4721 install_pkg = cc.distro.install_packages.call_args_list[0][0][0]
4722- self.assertEqual(sorted(install_pkg), ['lxd', 'zfs'])
4723+ self.assertEqual(sorted(install_pkg), ['lxd', 'zfsutils-linux'])
4724
4725 @mock.patch("cloudinit.config.cc_lxd.maybe_cleanup_default")
4726 @mock.patch("cloudinit.config.cc_lxd.util")
4727diff --git a/tests/unittests/test_handler/test_handler_resizefs.py b/tests/unittests/test_handler/test_handler_resizefs.py
4728index feca56c..3518784 100644
4729--- a/tests/unittests/test_handler/test_handler_resizefs.py
4730+++ b/tests/unittests/test_handler/test_handler_resizefs.py
4731@@ -151,9 +151,9 @@ class TestResizefs(CiTestCase):
4732 _resize_ufs(mount_point, devpth))
4733
4734 @mock.patch('cloudinit.util.is_container', return_value=False)
4735- @mock.patch('cloudinit.util.get_mount_info')
4736- @mock.patch('cloudinit.util.get_device_info_from_zpool')
4737 @mock.patch('cloudinit.util.parse_mount')
4738+ @mock.patch('cloudinit.util.get_device_info_from_zpool')
4739+ @mock.patch('cloudinit.util.get_mount_info')
4740 def test_handle_zfs_root(self, mount_info, zpool_info, parse_mount,
4741 is_container):
4742 devpth = 'vmzroot/ROOT/freebsd'
4743@@ -173,6 +173,38 @@ class TestResizefs(CiTestCase):
4744
4745 self.assertEqual(('zpool', 'online', '-e', 'vmzroot', disk), ret)
4746
4747+ @mock.patch('cloudinit.util.is_container', return_value=False)
4748+ @mock.patch('cloudinit.util.get_mount_info')
4749+ @mock.patch('cloudinit.util.get_device_info_from_zpool')
4750+ @mock.patch('cloudinit.util.parse_mount')
4751+ def test_handle_modern_zfsroot(self, mount_info, zpool_info, parse_mount,
4752+ is_container):
4753+ devpth = 'zroot/ROOT/default'
4754+ disk = 'da0p3'
4755+ fs_type = 'zfs'
4756+ mount_point = '/'
4757+
4758+ mount_info.return_value = (devpth, fs_type, mount_point)
4759+ zpool_info.return_value = disk
4760+ parse_mount.return_value = (devpth, fs_type, mount_point)
4761+
4762+ cfg = {'resize_rootfs': True}
4763+
4764+ def fake_stat(devpath):
4765+ if devpath == disk:
4766+ raise OSError("not here")
4767+ FakeStat = namedtuple(
4768+ 'FakeStat', ['st_mode', 'st_size', 'st_mtime']) # minimal stat
4769+            return FakeStat(25008, 0, 1)  # fake block device (S_IFBLK)
4770+
4771+ with mock.patch('cloudinit.config.cc_resizefs.do_resize') as dresize:
4772+ with mock.patch('cloudinit.config.cc_resizefs.os.stat') as m_stat:
4773+ m_stat.side_effect = fake_stat
4774+ handle('cc_resizefs', cfg, _cloud=None, log=LOG, args=[])
4775+
4776+ self.assertEqual(('zpool', 'online', '-e', 'zroot', '/dev/' + disk),
4777+ dresize.call_args[0][0])
4778+
4779
4780 class TestRootDevFromCmdline(CiTestCase):
4781
4782@@ -246,39 +278,39 @@ class TestMaybeGetDevicePathAsWritableBlock(CiTestCase):
4783
4784 def test_maybe_get_writable_device_path_does_not_exist(self):
4785 """When devpath does not exist, a warning is logged."""
4786- info = 'dev=/I/dont/exist mnt_point=/ path=/dev/none'
4787+ info = 'dev=/dev/I/dont/exist mnt_point=/ path=/dev/none'
4788 devpath = wrap_and_call(
4789 'cloudinit.config.cc_resizefs.util',
4790 {'is_container': {'return_value': False}},
4791- maybe_get_writable_device_path, '/I/dont/exist', info, LOG)
4792+ maybe_get_writable_device_path, '/dev/I/dont/exist', info, LOG)
4793 self.assertIsNone(devpath)
4794 self.assertIn(
4795- "WARNING: Device '/I/dont/exist' did not exist."
4796+ "WARNING: Device '/dev/I/dont/exist' did not exist."
4797 ' cannot resize: %s' % info,
4798 self.logs.getvalue())
4799
4800 def test_maybe_get_writable_device_path_does_not_exist_in_container(self):
4801 """When devpath does not exist in a container, log a debug message."""
4802- info = 'dev=/I/dont/exist mnt_point=/ path=/dev/none'
4803+ info = 'dev=/dev/I/dont/exist mnt_point=/ path=/dev/none'
4804 devpath = wrap_and_call(
4805 'cloudinit.config.cc_resizefs.util',
4806 {'is_container': {'return_value': True}},
4807- maybe_get_writable_device_path, '/I/dont/exist', info, LOG)
4808+ maybe_get_writable_device_path, '/dev/I/dont/exist', info, LOG)
4809 self.assertIsNone(devpath)
4810 self.assertIn(
4811- "DEBUG: Device '/I/dont/exist' did not exist in container."
4812+ "DEBUG: Device '/dev/I/dont/exist' did not exist in container."
4813 ' cannot resize: %s' % info,
4814 self.logs.getvalue())
4815
4816 def test_maybe_get_writable_device_path_raises_oserror(self):
4817         """When an unexpected OSError is raised by os.stat it is reraised."""
4818- info = 'dev=/I/dont/exist mnt_point=/ path=/dev/none'
4819+ info = 'dev=/dev/I/dont/exist mnt_point=/ path=/dev/none'
4820 with self.assertRaises(OSError) as context_manager:
4821 wrap_and_call(
4822 'cloudinit.config.cc_resizefs',
4823 {'util.is_container': {'return_value': True},
4824 'os.stat': {'side_effect': OSError('Something unexpected')}},
4825- maybe_get_writable_device_path, '/I/dont/exist', info, LOG)
4826+ maybe_get_writable_device_path, '/dev/I/dont/exist', info, LOG)
4827 self.assertEqual(
4828 'Something unexpected', str(context_manager.exception))
4829
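The magic `st_mode` of 25008 in the zfsroot test's FakeStat is not arbitrary: it is `0o60660`, i.e. `S_IFBLK` plus `rw-rw----` permissions, so `os.stat()` appears to report a writable block device node. This can be checked with the `stat` module:

```python
import stat

# 25008 decimal == 0o60660 == S_IFBLK | 0o660: a block device node with
# rw-rw---- permissions, which is what the FakeStat above needs
# os.stat() to report for the resize path to proceed.
ST_MODE = 25008
assert ST_MODE == 0o60660
assert stat.S_ISBLK(ST_MODE)      # it is a block device...
assert not stat.S_ISCHR(ST_MODE)  # ...not a character device
perms = stat.S_IMODE(ST_MODE)     # permission bits only: 0o660
```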
4830diff --git a/tests/unittests/test_handler/test_handler_write_files.py b/tests/unittests/test_handler/test_handler_write_files.py
4831index 7fa8fd2..bc8756c 100644
4832--- a/tests/unittests/test_handler/test_handler_write_files.py
4833+++ b/tests/unittests/test_handler/test_handler_write_files.py
4834@@ -52,6 +52,18 @@ class TestWriteFiles(FilesystemMockingTestCase):
4835 "test_simple", [{"content": expected, "path": filename}])
4836 self.assertEqual(util.load_file(filename), expected)
4837
4838+ def test_append(self):
4839+ self.patchUtils(self.tmp)
4840+ existing = "hello "
4841+ added = "world\n"
4842+ expected = existing + added
4843+ filename = "/tmp/append.file"
4844+ util.write_file(filename, existing)
4845+ write_files(
4846+ "test_append",
4847+ [{"content": added, "path": filename, "append": "true"}])
4848+ self.assertEqual(util.load_file(filename), expected)
4849+
4850 def test_yaml_binary(self):
4851 self.patchUtils(self.tmp)
4852 data = util.load_yaml(YAML_TEXT)
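`test_append` exercises write_files honoring a truthy `"append"` flag, which boils down to choosing the file open mode. A minimal standalone sketch of that choice (hypothetical helper, not cloud-init's actual `util.write_file`):

```python
import os
import tempfile


def write_file(path, content, append=False):
    # "a" appends to an existing file; "w" truncates it first.
    with open(path, "a" if append else "w") as f:
        f.write(content)


path = os.path.join(tempfile.mkdtemp(), "append.file")
write_file(path, "hello ")                # creates the file
write_file(path, "world\n", append=True)  # appends instead of truncating
with open(path) as f:
    result = f.read()  # "hello world\n"
```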
4853diff --git a/tests/unittests/test_net.py b/tests/unittests/test_net.py
4854index 8e38373..5313d2d 100644
4855--- a/tests/unittests/test_net.py
4856+++ b/tests/unittests/test_net.py
4857@@ -22,6 +22,7 @@ import os
4858 import textwrap
4859 import yaml
4860
4861+
4862 DHCP_CONTENT_1 = """
4863 DEVICE='eth0'
4864 PROTO='dhcp'
4865@@ -488,8 +489,8 @@ NETWORK_CONFIGS = {
4866 address 192.168.21.3/24
4867 dns-nameservers 8.8.8.8 8.8.4.4
4868 dns-search barley.maas sach.maas
4869- post-up route add default gw 65.61.151.37 || true
4870- pre-down route del default gw 65.61.151.37 || true
4871+ post-up route add default gw 65.61.151.37 metric 10000 || true
4872+ pre-down route del default gw 65.61.151.37 metric 10000 || true
4873 """).rstrip(' '),
4874 'expected_netplan': textwrap.dedent("""
4875 network:
4876@@ -513,7 +514,8 @@ NETWORK_CONFIGS = {
4877 - barley.maas
4878 - sach.maas
4879 routes:
4880- - to: 0.0.0.0/0
4881+ - metric: 10000
4882+ to: 0.0.0.0/0
4883 via: 65.61.151.37
4884 set-name: eth99
4885 """).rstrip(' '),
4886@@ -537,6 +539,7 @@ NETWORK_CONFIGS = {
4887 HWADDR=c0:d6:9f:2c:e8:80
4888 IPADDR=192.168.21.3
4889 NETMASK=255.255.255.0
4890+ METRIC=10000
4891 NM_CONTROLLED=no
4892 ONBOOT=yes
4893 TYPE=Ethernet
4894@@ -561,7 +564,7 @@ NETWORK_CONFIGS = {
4895 - gateway: 65.61.151.37
4896 netmask: 0.0.0.0
4897 network: 0.0.0.0
4898- metric: 2
4899+ metric: 10000
4900 - type: physical
4901 name: eth1
4902 mac_address: "cf:d6:af:48:e8:80"
4903@@ -1161,6 +1164,13 @@ pre-down route del -net 10.0.0.0 netmask 255.0.0.0 gw 11.0.0.1 metric 3 || true
4904 - gateway: 192.168.0.3
4905 netmask: 255.255.255.0
4906 network: 10.1.3.0
4907+ - gateway: 2001:67c:1562:1
4908+ network: 2001:67c:1
4909+ netmask: ffff:ffff:0
4910+ - gateway: 3001:67c:1562:1
4911+ network: 3001:67c:1
4912+ netmask: ffff:ffff:0
4913+ metric: 10000
4914 - type: static
4915 address: 192.168.1.2/24
4916 - type: static
4917@@ -1197,6 +1207,11 @@ pre-down route del -net 10.0.0.0 netmask 255.0.0.0 gw 11.0.0.1 metric 3 || true
4918 routes:
4919 - to: 10.1.3.0/24
4920 via: 192.168.0.3
4921+ - to: 2001:67c:1/32
4922+ via: 2001:67c:1562:1
4923+ - metric: 10000
4924+ to: 3001:67c:1/32
4925+ via: 3001:67c:1562:1
4926 """),
4927 'yaml-v2': textwrap.dedent("""
4928 version: 2
4929@@ -1228,6 +1243,11 @@ pre-down route del -net 10.0.0.0 netmask 255.0.0.0 gw 11.0.0.1 metric 3 || true
4930 routes:
4931 - to: 10.1.3.0/24
4932 via: 192.168.0.3
4933+ - to: 2001:67c:1562:8007::1/64
4934+ via: 2001:67c:1562:8007::aac:40b2
4935+ - metric: 10000
4936+ to: 3001:67c:1562:8007::1/64
4937+ via: 3001:67c:1562:8007::aac:40b2
4938 """),
4939 'expected_netplan-v2': textwrap.dedent("""
4940 network:
4941@@ -1249,6 +1269,11 @@ pre-down route del -net 10.0.0.0 netmask 255.0.0.0 gw 11.0.0.1 metric 3 || true
4942 routes:
4943 - to: 10.1.3.0/24
4944 via: 192.168.0.3
4945+ - to: 2001:67c:1562:8007::1/64
4946+ via: 2001:67c:1562:8007::aac:40b2
4947+ - metric: 10000
4948+ to: 3001:67c:1562:8007::1/64
4949+ via: 3001:67c:1562:8007::aac:40b2
4950 ethernets:
4951 eth0:
4952 match:
4953@@ -1349,6 +1374,10 @@ pre-down route del -net 10.0.0.0 netmask 255.0.0.0 gw 11.0.0.1 metric 3 || true
4954 USERCTL=no
4955 """),
4956 'route6-bond0': textwrap.dedent("""\
4957+ # Created by cloud-init on instance boot automatically, do not edit.
4958+ #
4959+ 2001:67c:1/ffff:ffff:0 via 2001:67c:1562:1 dev bond0
4960+ 3001:67c:1/ffff:ffff:0 via 3001:67c:1562:1 metric 10000 dev bond0
4961 """),
4962 'route-bond0': textwrap.dedent("""\
4963 ADDRESS0=10.1.3.0
4964@@ -1852,6 +1881,7 @@ class TestRhelSysConfigRendering(CiTestCase):
4965
4966 with_logs = True
4967
4968+ nm_cfg_file = "/etc/NetworkManager/NetworkManager.conf"
4969 scripts_dir = '/etc/sysconfig/network-scripts'
4970 header = ('# Created by cloud-init on instance boot automatically, '
4971 'do not edit.\n#\n')
4972@@ -1879,14 +1909,24 @@ class TestRhelSysConfigRendering(CiTestCase):
4973 return dir2dict(dir)
4974
4975 def _compare_files_to_expected(self, expected, found):
4976+
4977+ def _try_load(f):
4978+ ''' Attempt to load shell content, otherwise return as-is '''
4979+ try:
4980+ return util.load_shell_content(f)
4981+ except ValueError:
4982+ pass
4983+ # route6- * files aren't shell content, but iproute2 params
4984+ return f
4985+
4986 orig_maxdiff = self.maxDiff
4987 expected_d = dict(
4988- (os.path.join(self.scripts_dir, k), util.load_shell_content(v))
4989+ (os.path.join(self.scripts_dir, k), _try_load(v))
4990 for k, v in expected.items())
4991
4992 # only compare the files in scripts_dir
4993 scripts_found = dict(
4994- (k, util.load_shell_content(v)) for k, v in found.items()
4995+ (k, _try_load(v)) for k, v in found.items()
4996 if k.startswith(self.scripts_dir))
4997 try:
4998 self.maxDiff = None
4999@@ -2058,6 +2098,10 @@ TYPE=Ethernet
5000 USERCTL=no
The diff has been truncated for viewing.

Subscribers

People subscribed via source and target branches