Merge ~chad.smith/cloud-init:ubuntu/bionic into cloud-init:ubuntu/bionic

Proposed by Chad Smith
Status: Merged
Merged at revision: ef4aa258cb508c9be6343fcbefa0735066f4ad21
Proposed branch: ~chad.smith/cloud-init:ubuntu/bionic
Merge into: cloud-init:ubuntu/bionic
Diff against target: 6964 lines (+3990/-570)
81 files modified
.github/pull_request_template.md (+9/-0)
ChangeLog (+36/-0)
cloudinit/analyze/__main__.py (+86/-2)
cloudinit/analyze/show.py (+192/-10)
cloudinit/analyze/tests/test_boot.py (+170/-0)
cloudinit/apport.py (+1/-0)
cloudinit/config/cc_apt_configure.py (+3/-1)
cloudinit/config/cc_lxd.py (+1/-1)
cloudinit/config/cc_set_passwords.py (+34/-19)
cloudinit/config/cc_ssh.py (+55/-0)
cloudinit/config/cc_ubuntu_drivers.py (+49/-1)
cloudinit/config/tests/test_ssh.py (+166/-0)
cloudinit/config/tests/test_ubuntu_drivers.py (+81/-18)
cloudinit/distros/__init__.py (+22/-22)
cloudinit/distros/arch.py (+14/-0)
cloudinit/distros/debian.py (+2/-2)
cloudinit/distros/freebsd.py (+16/-16)
cloudinit/distros/opensuse.py (+2/-0)
cloudinit/distros/parsers/sys_conf.py (+7/-0)
cloudinit/distros/ubuntu.py (+15/-0)
cloudinit/net/__init__.py (+112/-43)
cloudinit/net/cmdline.py (+16/-9)
cloudinit/net/dhcp.py (+90/-0)
cloudinit/net/network_state.py (+12/-4)
cloudinit/net/sysconfig.py (+12/-0)
cloudinit/net/tests/test_dhcp.py (+119/-1)
cloudinit/net/tests/test_init.py (+262/-9)
cloudinit/settings.py (+1/-0)
cloudinit/sources/DataSourceAzure.py (+141/-32)
cloudinit/sources/DataSourceCloudSigma.py (+2/-6)
cloudinit/sources/DataSourceExoscale.py (+258/-0)
cloudinit/sources/DataSourceGCE.py (+20/-2)
cloudinit/sources/DataSourceHetzner.py (+3/-0)
cloudinit/sources/DataSourceOVF.py (+6/-1)
cloudinit/sources/DataSourceOracle.py (+99/-7)
cloudinit/sources/__init__.py (+27/-0)
cloudinit/sources/helpers/azure.py (+152/-8)
cloudinit/sources/helpers/vmware/imc/config_custom_script.py (+42/-101)
cloudinit/sources/tests/test_oracle.py (+228/-11)
cloudinit/stages.py (+50/-15)
cloudinit/tests/helpers.py (+2/-1)
cloudinit/tests/test_stages.py (+132/-19)
cloudinit/url_helper.py (+5/-4)
cloudinit/version.py (+1/-1)
debian/changelog (+60/-3)
debian/cloud-init.templates (+3/-3)
debian/patches/ubuntu-advantage-revert-tip.patch (+4/-8)
doc/examples/cloud-config-datasources.txt (+1/-1)
doc/examples/cloud-config-user-groups.txt (+1/-0)
doc/rtd/conf.py (+0/-5)
doc/rtd/topics/analyze.rst (+84/-0)
doc/rtd/topics/capabilities.rst (+1/-0)
doc/rtd/topics/datasources.rst (+1/-0)
doc/rtd/topics/datasources/exoscale.rst (+68/-0)
doc/rtd/topics/datasources/oracle.rst (+24/-1)
doc/rtd/topics/debugging.rst (+13/-0)
doc/rtd/topics/format.rst (+13/-12)
doc/rtd/topics/network-config-format-v2.rst (+1/-1)
doc/rtd/topics/network-config.rst (+5/-4)
integration-requirements.txt (+2/-1)
systemd/cloud-init-generator.tmpl (+6/-1)
templates/ntp.conf.debian.tmpl (+2/-1)
tests/cloud_tests/platforms.yaml (+1/-0)
tests/cloud_tests/platforms/nocloudkvm/instance.py (+9/-4)
tests/cloud_tests/platforms/platforms.py (+1/-1)
tests/cloud_tests/setup_image.py (+2/-1)
tests/unittests/test_datasource/test_azure.py (+112/-15)
tests/unittests/test_datasource/test_common.py (+13/-0)
tests/unittests/test_datasource/test_ec2.py (+2/-1)
tests/unittests/test_datasource/test_exoscale.py (+203/-0)
tests/unittests/test_datasource/test_gce.py (+18/-0)
tests/unittests/test_distros/test_netconfig.py (+86/-0)
tests/unittests/test_ds_identify.py (+25/-0)
tests/unittests/test_handler/test_handler_apt_source_v3.py (+11/-0)
tests/unittests/test_handler/test_handler_ntp.py (+15/-10)
tests/unittests/test_net.py (+197/-23)
tests/unittests/test_reporting_hyperv.py (+65/-0)
tests/unittests/test_vmware/test_custom_script.py (+63/-53)
tools/build-on-freebsd (+40/-33)
tools/ds-identify (+32/-14)
tools/xkvm (+53/-8)
Reviewer              Review Type              Status
Ryan Harper                                    Approve
Server Team CI bot    continuous-integration   Approve
Review via email: mp+371686@code.launchpad.net

Commit message

Upstream snapshot for SRU into bionic
Also enables Exoscale in debian/cloud-init.templates

Revision history for this message
Server Team CI bot (server-team-bot) wrote :

PASSED: Continuous integration, rev:5463fec28e79740fff2382c504f94756a6eda6e2
https://jenkins.ubuntu.com/server/job/cloud-init-ci/1071/
Executed test runs:
    SUCCESS: Checkout
    SUCCESS: Unit & Style Tests
    SUCCESS: Ubuntu LTS: Build
    SUCCESS: Ubuntu LTS: Integration
    IN_PROGRESS: Declarative: Post Actions

Click here to trigger a rebuild:
https://jenkins.ubuntu.com/server/job/cloud-init-ci/1071//rebuild

review: Approve (continuous-integration)
Server Team CI bot (server-team-bot) wrote :

PASSED: Continuous integration, rev:5b00861163e6cdee18805b11796b60ee76e8c60b
https://jenkins.ubuntu.com/server/job/cloud-init-ci/1073/
Executed test runs:
    SUCCESS: Checkout
    SUCCESS: Unit & Style Tests
    SUCCESS: Ubuntu LTS: Build
    SUCCESS: Ubuntu LTS: Integration
    IN_PROGRESS: Declarative: Post Actions

Click here to trigger a rebuild:
https://jenkins.ubuntu.com/server/job/cloud-init-ci/1073//rebuild

review: Approve (continuous-integration)
Server Team CI bot (server-team-bot) wrote :

PASSED: Continuous integration, rev:ef4aa258cb508c9be6343fcbefa0735066f4ad21
https://jenkins.ubuntu.com/server/job/cloud-init-ci/1076/
Executed test runs:
    SUCCESS: Checkout
    SUCCESS: Unit & Style Tests
    SUCCESS: Ubuntu LTS: Build
    SUCCESS: Ubuntu LTS: Integration
    IN_PROGRESS: Declarative: Post Actions

Click here to trigger a rebuild:
https://jenkins.ubuntu.com/server/job/cloud-init-ci/1076//rebuild

review: Approve (continuous-integration)
Ryan Harper (raharper) wrote :

Looks good. Thanks.

review: Approve

Preview Diff

1diff --git a/.github/pull_request_template.md b/.github/pull_request_template.md
2new file mode 100644
3index 0000000..170a71e
4--- /dev/null
5+++ b/.github/pull_request_template.md
6@@ -0,0 +1,9 @@
7+***This GitHub repo is only a mirror. Do not submit pull requests
8+here!***
9+
10+Thank you for taking the time to write and submit a change to
11+cloud-init! Please follow [our hacking
12+guide](https://cloudinit.readthedocs.io/en/latest/topics/hacking.html)
13+to submit your change to cloud-init's [Launchpad git
14+repository](https://code.launchpad.net/cloud-init/), where cloud-init
15+development happens.
16diff --git a/ChangeLog b/ChangeLog
17index bf48fd4..a98f8c2 100644
18--- a/ChangeLog
19+++ b/ChangeLog
20@@ -1,3 +1,39 @@
21+19.2:
22+ - net: add rfc3442 (classless static routes) to EphemeralDHCP
23+ (LP: #1821102)
24+ - templates/ntp.conf.debian.tmpl: fix missing newline for pools
25+ (LP: #1836598)
26+ - Support netplan renderer in Arch Linux [Conrad Hoffmann]
27+ - Fix typo in publicly viewable documentation. [David Medberry]
28+ - Add a cdrom size checker for OVF ds to ds-identify
29+ [Pengpeng Sun] (LP: #1806701)
30+ - VMWare: Trigger the post customization script via cc_scripts module.
31+ [Xiaofeng Wang] (LP: #1833192)
32+ - Cloud-init analyze module: Added ability to analyze boot events.
33+ [Sam Gilson]
34+ - Update debian eni network configuration location, retain Ubuntu setting
35+ [Janos Lenart]
36+ - net: skip bond interfaces in get_interfaces
37+ [Stanislav Makar] (LP: #1812857)
38+ - Fix a couple of issues raised by a coverity scan
39+ - Add missing dsname for Hetzner Cloud datasource [Markus Schade]
40+ - doc: indicate that netplan is default in Ubuntu now
41+ - azure: add region and AZ properties from imds compute location metadata
42+ - sysconfig: support more bonding options [Penghui Liao]
43+ - cloud-init-generator: use libexec path to ds-identify on redhat systems
44+ (LP: #1833264)
45+ - tools/build-on-freebsd: update to python3 [Gonéri Le Bouder]
46+ - Allow identification of OpenStack by Asset Tag
47+ [Mark T. Voelker] (LP: #1669875)
48+ - Fix spelling error making 'an Ubuntu' consistent. [Brian Murray]
49+ - run-container: centos: comment out the repo mirrorlist [Paride Legovini]
50+ - netplan: update netplan key mappings for gratuitous-arp (LP: #1827238)
51+ - freebsd: fix the name of cloudcfg VARIANT [Gonéri Le Bouder]
52+ - freebsd: ability to grow root file system [Gonéri Le Bouder]
53+ - freebsd: NoCloud data source support [Gonéri Le Bouder] (LP: #1645824)
54+ - Azure: Return static fallback address as if failed to find endpoint
55+ [Jason Zions (MSFT)]
56+
57 19.1:
58 - freebsd: add chpasswd pkg in the image [Gonéri Le Bouder]
59 - tests: add Eoan release [Paride Legovini]
60diff --git a/cloudinit/analyze/__main__.py b/cloudinit/analyze/__main__.py
61index f861365..99e5c20 100644
62--- a/cloudinit/analyze/__main__.py
63+++ b/cloudinit/analyze/__main__.py
64@@ -7,7 +7,7 @@ import re
65 import sys
66
67 from cloudinit.util import json_dumps
68-
69+from datetime import datetime
70 from . import dump
71 from . import show
72
73@@ -52,9 +52,93 @@ def get_parser(parser=None):
74 dest='outfile', default='-',
75 help='specify where to write output. ')
76 parser_dump.set_defaults(action=('dump', analyze_dump))
77+ parser_boot = subparsers.add_parser(
78+ 'boot', help='Print list of boot times for kernel and cloud-init')
79+ parser_boot.add_argument('-i', '--infile', action='store',
80+ dest='infile', default='/var/log/cloud-init.log',
81+ help='specify where to read input. ')
82+ parser_boot.add_argument('-o', '--outfile', action='store',
83+ dest='outfile', default='-',
84+ help='specify where to write output.')
85+ parser_boot.set_defaults(action=('boot', analyze_boot))
86 return parser
87
88
89+def analyze_boot(name, args):
90+ """Report a list of how long different boot operations took.
91+
92+ For Example:
93+ -- Most Recent Boot Record --
94+ Kernel Started at: <time>
95+ Kernel ended boot at: <time>
96+ Kernel time to boot (seconds): <time>
97+ Cloud-init activated by systemd at: <time>
98+ Time between Kernel end boot and Cloud-init activation (seconds):<time>
99+ Cloud-init start: <time>
100+ """
101+ infh, outfh = configure_io(args)
102+ kernel_info = show.dist_check_timestamp()
103+ status_code, kernel_start, kernel_end, ci_sysd_start = \
104+ kernel_info
105+ kernel_start_timestamp = datetime.utcfromtimestamp(kernel_start)
106+ kernel_end_timestamp = datetime.utcfromtimestamp(kernel_end)
107+ ci_sysd_start_timestamp = datetime.utcfromtimestamp(ci_sysd_start)
108+ try:
109+ last_init_local = \
110+ [e for e in _get_events(infh) if e['name'] == 'init-local' and
111+ 'starting search' in e['description']][-1]
112+ ci_start = datetime.utcfromtimestamp(last_init_local['timestamp'])
113+ except IndexError:
114+ ci_start = 'Could not find init-local log-line in cloud-init.log'
115+ status_code = show.FAIL_CODE
116+
117+ FAILURE_MSG = 'Your Linux distro or container does not support this ' \
118+ 'functionality.\n' \
119+ 'You must be running a Kernel Telemetry supported ' \
120+ 'distro.\nPlease check ' \
121+ 'https://cloudinit.readthedocs.io/en/latest' \
122+ '/topics/analyze.html for more ' \
123+ 'information on supported distros.\n'
124+
125+ SUCCESS_MSG = '-- Most Recent Boot Record --\n' \
126+ ' Kernel Started at: {k_s_t}\n' \
127+ ' Kernel ended boot at: {k_e_t}\n' \
128+ ' Kernel time to boot (seconds): {k_r}\n' \
129+ ' Cloud-init activated by systemd at: {ci_sysd_t}\n' \
130+ ' Time between Kernel end boot and Cloud-init ' \
131+ 'activation (seconds): {bt_r}\n' \
132+ ' Cloud-init start: {ci_start}\n'
133+
134+ CONTAINER_MSG = '-- Most Recent Container Boot Record --\n' \
135+ ' Container started at: {k_s_t}\n' \
136+ ' Cloud-init activated by systemd at: {ci_sysd_t}\n' \
137+ ' Cloud-init start: {ci_start}\n' \
138+
139+ status_map = {
140+ show.FAIL_CODE: FAILURE_MSG,
141+ show.CONTAINER_CODE: CONTAINER_MSG,
142+ show.SUCCESS_CODE: SUCCESS_MSG
143+ }
144+
145+ kernel_runtime = kernel_end - kernel_start
146+ between_process_runtime = ci_sysd_start - kernel_end
147+
148+ kwargs = {
149+ 'k_s_t': kernel_start_timestamp,
150+ 'k_e_t': kernel_end_timestamp,
151+ 'k_r': kernel_runtime,
152+ 'bt_r': between_process_runtime,
153+ 'k_e': kernel_end,
154+ 'k_s': kernel_start,
155+ 'ci_sysd': ci_sysd_start,
156+ 'ci_sysd_t': ci_sysd_start_timestamp,
157+ 'ci_start': ci_start
158+ }
159+
160+ outfh.write(status_map[status_code].format(**kwargs))
161+ return status_code
162+
163+
164 def analyze_blame(name, args):
165 """Report a list of records sorted by largest time delta.
166
167@@ -119,7 +203,7 @@ def analyze_dump(name, args):
168
169 def _get_events(infile):
170 rawdata = None
171- events, rawdata = show.load_events(infile, None)
172+ events, rawdata = show.load_events_infile(infile)
173 if not events:
174 events, _ = dump.dump_events(rawdata=rawdata)
175 return events
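For readers following the new `analyze boot` code above: the report it prints boils down to converting epoch timestamps and taking two differences. A standalone sketch of those calculations (the timestamp values here are made up, not from a real boot, and this is not cloud-init's own code):

```python
from datetime import datetime, timezone

# Hypothetical epoch timestamps (seconds) standing in for what
# analyze_boot receives from show.dist_check_timestamp().
kernel_start = 1562607600.0
kernel_end = kernel_start + 4.2    # kernel finished boot 4.2s later
ci_sysd_start = kernel_end + 1.3   # systemd activated cloud-init 1.3s after that

# The diff uses datetime.utcfromtimestamp(); the timezone-aware
# equivalent is used here for the human-readable timestamps.
kernel_start_timestamp = datetime.fromtimestamp(kernel_start, tz=timezone.utc)

# The two durations reported in the success message.
kernel_runtime = kernel_end - kernel_start
between_process_runtime = ci_sysd_start - kernel_end

print('Kernel started at:', kernel_start_timestamp)
print('Kernel time to boot (seconds):', kernel_runtime)
print('Kernel end to cloud-init activation (seconds):', between_process_runtime)
```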
176diff --git a/cloudinit/analyze/show.py b/cloudinit/analyze/show.py
177index 3e778b8..511b808 100644
178--- a/cloudinit/analyze/show.py
179+++ b/cloudinit/analyze/show.py
180@@ -8,8 +8,11 @@ import base64
181 import datetime
182 import json
183 import os
184+import time
185+import sys
186
187 from cloudinit import util
188+from cloudinit.distros import uses_systemd
189
190 # An event:
191 '''
192@@ -49,6 +52,10 @@ format_key = {
193
194 formatting_help = " ".join(["{0}: {1}".format(k.replace('%', '%%'), v)
195 for k, v in format_key.items()])
196+SUCCESS_CODE = 'successful'
197+FAIL_CODE = 'failure'
198+CONTAINER_CODE = 'container'
199+TIMESTAMP_UNKNOWN = (FAIL_CODE, -1, -1, -1)
200
201
202 def format_record(msg, event):
203@@ -125,9 +132,175 @@ def total_time_record(total_time):
204 return 'Total Time: %3.5f seconds\n' % total_time
205
206
207+class SystemctlReader(object):
208+ '''
209+ Class for dealing with all systemctl subp calls in a consistent manner.
210+ '''
211+ def __init__(self, property, parameter=None):
212+ self.epoch = None
213+ self.args = ['/bin/systemctl', 'show']
214+ if parameter:
215+ self.args.append(parameter)
216+ self.args.extend(['-p', property])
217+ # Don't want the init of our object to break. Instead of throwing
218+ # an exception, set an error code that gets checked when data is
219+ # requested from the object
220+ self.failure = self.subp()
221+
222+ def subp(self):
223+ '''
224+ Make a subp call based on set args and handle errors by setting
225+ failure code
226+
227+ :return: whether the subp call failed or not
228+ '''
229+ try:
230+ value, err = util.subp(self.args, capture=True)
231+ if err:
232+ return err
233+ self.epoch = value
234+ return None
235+ except Exception as systemctl_fail:
236+ return systemctl_fail
237+
238+ def parse_epoch_as_float(self):
239+ '''
240+ If subp call succeeded, return the timestamp from subp as a float.
241+
242+ :return: timestamp as a float
243+ '''
244+ # subp has 2 ways to fail: it either fails and throws an exception,
245+ # or returns an error code. Raise an exception here in order to make
246+ # sure both scenarios throw exceptions
247+ if self.failure:
248+ raise RuntimeError('Subprocess call to systemctl has failed, '
249+ 'returning error code ({})'
250+ .format(self.failure))
251+ # Output from systemctl show has the format Property=Value.
252+ # For example, UserspaceMonotonic=1929304
253+ timestamp = self.epoch.split('=')[1]
254+ # Timestamps reported by systemctl are in microseconds, converting
255+ return float(timestamp) / 1000000
256+
257+
258+def dist_check_timestamp():
259+ '''
260+ Determine which init system a particular linux distro is using.
261+ Each init system (systemd, upstart, etc) has a different way of
262+ providing timestamps.
263+
264+ :return: timestamps of kernelboot, kernelendboot, and cloud-initstart
265+ or TIMESTAMP_UNKNOWN if the timestamps cannot be retrieved.
266+ '''
267+
268+ if uses_systemd():
269+ return gather_timestamps_using_systemd()
270+
271+ # Use dmesg to get timestamps if the distro does not have systemd
272+ if util.is_FreeBSD() or 'gentoo' in \
273+ util.system_info()['system'].lower():
274+ return gather_timestamps_using_dmesg()
275+
276+ # this distro doesn't fit anything that is supported by cloud-init. just
277+ # return error codes
278+ return TIMESTAMP_UNKNOWN
279+
280+
281+def gather_timestamps_using_dmesg():
282+ '''
283+ Gather timestamps that corresponds to kernel begin initialization,
284+ kernel finish initialization using dmesg as opposed to systemctl
285+
286+ :return: the two timestamps plus a dummy timestamp to keep consistency
287+ with gather_timestamps_using_systemd
288+ '''
289+ try:
290+ data, _ = util.subp(['dmesg'], capture=True)
291+ split_entries = data[0].splitlines()
292+ for i in split_entries:
293+ if i.decode('UTF-8').find('user') != -1:
294+ splitup = i.decode('UTF-8').split()
295+ stripped = splitup[1].strip(']')
296+
297+ # kernel timestamp from dmesg is equal to 0,
298+ # with the userspace timestamp relative to it.
299+ user_space_timestamp = float(stripped)
300+ kernel_start = float(time.time()) - float(util.uptime())
301+ kernel_end = kernel_start + user_space_timestamp
302+
303+ # systemd wont start cloud-init in this case,
304+ # so we cannot get that timestamp
305+ return SUCCESS_CODE, kernel_start, kernel_end, \
306+ kernel_end
307+
308+ except Exception:
309+ pass
310+ return TIMESTAMP_UNKNOWN
311+
312+
313+def gather_timestamps_using_systemd():
314+ '''
315+ Gather timestamps that corresponds to kernel begin initialization,
316+ kernel finish initialization. and cloud-init systemd unit activation
317+
318+ :return: the three timestamps
319+ '''
320+ kernel_start = float(time.time()) - float(util.uptime())
321+ try:
322+ delta_k_end = SystemctlReader('UserspaceTimestampMonotonic')\
323+ .parse_epoch_as_float()
324+ delta_ci_s = SystemctlReader('InactiveExitTimestampMonotonic',
325+ 'cloud-init-local').parse_epoch_as_float()
326+ base_time = kernel_start
327+ status = SUCCESS_CODE
328+ # lxc based containers do not set their monotonic zero point to be when
329+ # the container starts, instead keep using host boot as zero point
330+ # time.CLOCK_MONOTONIC_RAW is only available in python 3.3
331+ if util.is_container():
332+ # clock.monotonic also uses host boot as zero point
333+ if sys.version_info >= (3, 3):
334+ base_time = float(time.time()) - float(time.monotonic())
335+ # TODO: lxcfs automatically truncates /proc/uptime to seconds
336+ # in containers when https://github.com/lxc/lxcfs/issues/292
337+ # is fixed, util.uptime() should be used instead of stat on
338+ try:
339+ file_stat = os.stat('/proc/1/cmdline')
340+ kernel_start = file_stat.st_atime
341+ except OSError as err:
342+ raise RuntimeError('Could not determine container boot '
343+ 'time from /proc/1/cmdline. ({})'
344+ .format(err))
345+ status = CONTAINER_CODE
346+ else:
347+ status = FAIL_CODE
348+ kernel_end = base_time + delta_k_end
349+ cloudinit_sysd = base_time + delta_ci_s
350+
351+ except Exception as e:
352+ # Except ALL exceptions as Systemctl reader can throw many different
353+ # errors, but any failure in systemctl means that timestamps cannot be
354+ # obtained
355+ print(e)
356+ return TIMESTAMP_UNKNOWN
357+ return status, kernel_start, kernel_end, cloudinit_sysd
358+
359+
360 def generate_records(events, blame_sort=False,
361 print_format="(%n) %d seconds in %I%D",
362 dump_files=False, log_datafiles=False):
363+ '''
364+ Take in raw events and create parent-child dependencies between events
365+ in order to order events in chronological order.
366+
367+ :param events: JSONs from dump that represents events taken from logs
368+ :param blame_sort: whether to sort by timestamp or by time taken.
369+ :param print_format: formatting to represent event, time stamp,
370+ and time taken by the event in one line
371+ :param dump_files: whether to dump files into JSONs
372+ :param log_datafiles: whether or not to log events generated
373+
374+ :return: boot records ordered chronologically
375+ '''
376
377 sorted_events = sorted(events, key=lambda x: x['timestamp'])
378 records = []
379@@ -189,19 +362,28 @@ def generate_records(events, blame_sort=False,
380
381
382 def show_events(events, print_format):
383+ '''
384+ A passthrough method that makes it easier to call generate_records()
385+
386+ :param events: JSONs from dump that represents events taken from logs
387+ :param print_format: formatting to represent event, time stamp,
388+ and time taken by the event in one line
389+
390+ :return: boot records ordered chronologically
391+ '''
392 return generate_records(events, print_format=print_format)
393
394
395-def load_events(infile, rawdata=None):
396- if rawdata:
397- data = rawdata.read()
398- else:
399- data = infile.read()
400+def load_events_infile(infile):
401+ '''
402+ Takes in a log file, read it, and convert to json.
403+
404+ :param infile: The Log file to be read
405
406- j = None
407+ :return: json version of logfile, raw file
408+ '''
409+ data = infile.read()
410 try:
411- j = json.loads(data)
412+ return json.loads(data), data
413 except ValueError:
414- pass
415-
416- return j, data
417+ return None, data
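The `SystemctlReader` added above relies on the `Property=Value` shape of `systemctl show` output and on the timestamps being in microseconds. A minimal sketch of that parsing step, isolated from the subprocess handling (the sample value is invented for illustration):

```python
def parse_epoch_as_float(systemctl_output):
    """Turn 'Property=Value' output from `systemctl show -p <property>`
    into seconds; systemd reports these timestamps in microseconds."""
    timestamp_us = systemctl_output.split('=')[1]
    return float(timestamp_us) / 1000000

# e.g. output of: systemctl show -p UserspaceTimestampMonotonic
seconds = parse_epoch_as_float('UserspaceTimestampMonotonic=1929304')
print(seconds)
```

As in the real code, a value without an `=` raises `IndexError` and a non-numeric value raises `ValueError`, which is exactly what the new unit tests in test_boot.py below exercise.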
418diff --git a/cloudinit/analyze/tests/test_boot.py b/cloudinit/analyze/tests/test_boot.py
419new file mode 100644
420index 0000000..706e2cc
421--- /dev/null
422+++ b/cloudinit/analyze/tests/test_boot.py
423@@ -0,0 +1,170 @@
424+import os
425+from cloudinit.analyze.__main__ import (analyze_boot, get_parser)
426+from cloudinit.tests.helpers import CiTestCase, mock
427+from cloudinit.analyze.show import dist_check_timestamp, SystemctlReader, \
428+ FAIL_CODE, CONTAINER_CODE
429+
430+err_code = (FAIL_CODE, -1, -1, -1)
431+
432+
433+class TestDistroChecker(CiTestCase):
434+
435+ @mock.patch('cloudinit.util.system_info', return_value={'dist': ('', '',
436+ ''),
437+ 'system': ''})
438+ @mock.patch('platform.linux_distribution', return_value=('', '', ''))
439+ @mock.patch('cloudinit.util.is_FreeBSD', return_value=False)
440+ def test_blank_distro(self, m_sys_info, m_linux_distribution, m_free_bsd):
441+ self.assertEqual(err_code, dist_check_timestamp())
442+
443+ @mock.patch('cloudinit.util.system_info', return_value={'dist': ('', '',
444+ '')})
445+ @mock.patch('platform.linux_distribution', return_value=('', '', ''))
446+ @mock.patch('cloudinit.util.is_FreeBSD', return_value=True)
447+ def test_freebsd_gentoo_cant_find(self, m_sys_info,
448+ m_linux_distribution, m_is_FreeBSD):
449+ self.assertEqual(err_code, dist_check_timestamp())
450+
451+ @mock.patch('cloudinit.util.subp', return_value=(0, 1))
452+ def test_subp_fails(self, m_subp):
453+ self.assertEqual(err_code, dist_check_timestamp())
454+
455+
456+class TestSystemCtlReader(CiTestCase):
457+
458+ def test_systemctl_invalid_property(self):
459+ reader = SystemctlReader('dummyProperty')
460+ with self.assertRaises(RuntimeError):
461+ reader.parse_epoch_as_float()
462+
463+ def test_systemctl_invalid_parameter(self):
464+ reader = SystemctlReader('dummyProperty', 'dummyParameter')
465+ with self.assertRaises(RuntimeError):
466+ reader.parse_epoch_as_float()
467+
468+ @mock.patch('cloudinit.util.subp', return_value=('U=1000000', None))
469+ def test_systemctl_works_correctly_threshold(self, m_subp):
470+ reader = SystemctlReader('dummyProperty', 'dummyParameter')
471+ self.assertEqual(1.0, reader.parse_epoch_as_float())
472+ thresh = 1.0 - reader.parse_epoch_as_float()
473+ self.assertTrue(thresh < 1e-6)
474+ self.assertTrue(thresh > (-1 * 1e-6))
475+
476+ @mock.patch('cloudinit.util.subp', return_value=('U=0', None))
477+ def test_systemctl_succeed_zero(self, m_subp):
478+ reader = SystemctlReader('dummyProperty', 'dummyParameter')
479+ self.assertEqual(0.0, reader.parse_epoch_as_float())
480+
481+ @mock.patch('cloudinit.util.subp', return_value=('U=1', None))
482+ def test_systemctl_succeed_distinct(self, m_subp):
483+ reader = SystemctlReader('dummyProperty', 'dummyParameter')
484+ val1 = reader.parse_epoch_as_float()
485+ m_subp.return_value = ('U=2', None)
486+ reader2 = SystemctlReader('dummyProperty', 'dummyParameter')
487+ val2 = reader2.parse_epoch_as_float()
488+ self.assertNotEqual(val1, val2)
489+
490+ @mock.patch('cloudinit.util.subp', return_value=('100', None))
491+ def test_systemctl_epoch_not_splittable(self, m_subp):
492+ reader = SystemctlReader('dummyProperty', 'dummyParameter')
493+ with self.assertRaises(IndexError):
494+ reader.parse_epoch_as_float()
495+
496+ @mock.patch('cloudinit.util.subp', return_value=('U=foobar', None))
497+ def test_systemctl_cannot_convert_epoch_to_float(self, m_subp):
498+ reader = SystemctlReader('dummyProperty', 'dummyParameter')
499+ with self.assertRaises(ValueError):
500+ reader.parse_epoch_as_float()
501+
502+
503+class TestAnalyzeBoot(CiTestCase):
504+
505+ def set_up_dummy_file_ci(self, path, log_path):
506+ infh = open(path, 'w+')
507+ infh.write('2019-07-08 17:40:49,601 - util.py[DEBUG]: Cloud-init v. '
508+ '19.1-1-gbaa47854-0ubuntu1~18.04.1 running \'init-local\' '
509+ 'at Mon, 08 Jul 2019 17:40:49 +0000. Up 18.84 seconds.')
510+ infh.close()
511+ outfh = open(log_path, 'w+')
512+ outfh.close()
513+
514+ def set_up_dummy_file(self, path, log_path):
515+ infh = open(path, 'w+')
516+ infh.write('dummy data')
517+ infh.close()
518+ outfh = open(log_path, 'w+')
519+ outfh.close()
520+
521+ def remove_dummy_file(self, path, log_path):
522+ if os.path.isfile(path):
523+ os.remove(path)
524+ if os.path.isfile(log_path):
525+ os.remove(log_path)
526+
527+ @mock.patch('cloudinit.analyze.show.dist_check_timestamp',
528+ return_value=err_code)
529+ def test_boot_invalid_distro(self, m_dist_check_timestamp):
530+
531+ path = os.path.dirname(os.path.abspath(__file__))
532+ log_path = path + '/boot-test.log'
533+ path += '/dummy.log'
534+ self.set_up_dummy_file(path, log_path)
535+
536+ parser = get_parser()
537+ args = parser.parse_args(args=['boot', '-i', path, '-o',
538+ log_path])
539+ name_default = ''
540+ analyze_boot(name_default, args)
541+ # now args have been tested, go into outfile and make sure error
542+ # message is in the outfile
543+ outfh = open(args.outfile, 'r')
544+ data = outfh.read()
545+ err_string = 'Your Linux distro or container does not support this ' \
546+ 'functionality.\nYou must be running a Kernel ' \
547+ 'Telemetry supported distro.\nPlease check ' \
548+ 'https://cloudinit.readthedocs.io/en/latest/topics' \
549+ '/analyze.html for more information on supported ' \
550+ 'distros.\n'
551+
552+ self.remove_dummy_file(path, log_path)
553+ self.assertEqual(err_string, data)
554+
555+ @mock.patch("cloudinit.util.is_container", return_value=True)
556+ @mock.patch('cloudinit.util.subp', return_value=('U=1000000', None))
557+ def test_container_no_ci_log_line(self, m_is_container, m_subp):
558+ path = os.path.dirname(os.path.abspath(__file__))
559+ log_path = path + '/boot-test.log'
560+ path += '/dummy.log'
561+ self.set_up_dummy_file(path, log_path)
562+
563+ parser = get_parser()
564+ args = parser.parse_args(args=['boot', '-i', path, '-o',
565+ log_path])
566+ name_default = ''
567+
568+ finish_code = analyze_boot(name_default, args)
569+
570+ self.remove_dummy_file(path, log_path)
571+ self.assertEqual(FAIL_CODE, finish_code)
572+
573+ @mock.patch("cloudinit.util.is_container", return_value=True)
574+ @mock.patch('cloudinit.util.subp', return_value=('U=1000000', None))
575+ @mock.patch('cloudinit.analyze.__main__._get_events', return_value=[{
576+ 'name': 'init-local', 'description': 'starting search', 'timestamp':
577+ 100000}])
578+ @mock.patch('cloudinit.analyze.show.dist_check_timestamp',
579+ return_value=(CONTAINER_CODE, 1, 1, 1))
580+ def test_container_ci_log_line(self, m_is_container, m_subp, m_get, m_g):
581+ path = os.path.dirname(os.path.abspath(__file__))
582+ log_path = path + '/boot-test.log'
583+ path += '/dummy.log'
584+ self.set_up_dummy_file_ci(path, log_path)
585+
586+ parser = get_parser()
587+ args = parser.parse_args(args=['boot', '-i', path, '-o',
588+ log_path])
589+ name_default = ''
590+ finish_code = analyze_boot(name_default, args)
591+
592+ self.remove_dummy_file(path, log_path)
593+ self.assertEqual(CONTAINER_CODE, finish_code)
594diff --git a/cloudinit/apport.py b/cloudinit/apport.py
595index 22cb7fd..003ff1f 100644
596--- a/cloudinit/apport.py
597+++ b/cloudinit/apport.py
598@@ -23,6 +23,7 @@ KNOWN_CLOUD_NAMES = [
599 'CloudStack',
600 'DigitalOcean',
601 'GCE - Google Compute Engine',
602+ 'Exoscale',
603 'Hetzner Cloud',
604 'IBM - (aka SoftLayer or BlueMix)',
605 'LXD',
606diff --git a/cloudinit/config/cc_apt_configure.py b/cloudinit/config/cc_apt_configure.py
607index 919d199..f01e2aa 100644
608--- a/cloudinit/config/cc_apt_configure.py
609+++ b/cloudinit/config/cc_apt_configure.py
610@@ -332,6 +332,8 @@ def apply_apt(cfg, cloud, target):
611
612
613 def debconf_set_selections(selections, target=None):
614+ if not selections.endswith(b'\n'):
615+ selections += b'\n'
616 util.subp(['debconf-set-selections'], data=selections, target=target,
617 capture=True)
618
619@@ -374,7 +376,7 @@ def apply_debconf_selections(cfg, target=None):
620
621 selections = '\n'.join(
622 [selsets[key] for key in sorted(selsets.keys())])
623- debconf_set_selections(selections.encode() + b"\n", target=target)
624+ debconf_set_selections(selections.encode(), target=target)
625
626 # get a complete list of packages listed in input
627 pkgs_cfgd = set()
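The cc_apt_configure change above moves newline-termination into `debconf_set_selections` itself, since `debconf-set-selections` expects its stdin to end with a newline. The guard, sketched standalone (the sample selection line is illustrative only):

```python
def ensure_trailing_newline(selections: bytes) -> bytes:
    """Mirror the guard added to debconf_set_selections: append a
    newline only if the payload does not already end with one."""
    if not selections.endswith(b'\n'):
        selections += b'\n'
    return selections

payload = ensure_trailing_newline(b'cloud-init cloud-init/datasources multiselect Exoscale')
print(payload)
```

Doing this inside the callee means callers no longer need to remember the `+ b"\n"`, which is why the call site in `apply_debconf_selections` drops it.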
628diff --git a/cloudinit/config/cc_lxd.py b/cloudinit/config/cc_lxd.py
629index 71d13ed..d983077 100644
630--- a/cloudinit/config/cc_lxd.py
631+++ b/cloudinit/config/cc_lxd.py
632@@ -152,7 +152,7 @@ def handle(name, cfg, cloud, log, args):
633
634 if cmd_attach:
635 log.debug("Setting up default lxd bridge: %s" %
636- " ".join(cmd_create))
637+ " ".join(cmd_attach))
638 _lxc(cmd_attach)
639
640 elif bridge_cfg:
641diff --git a/cloudinit/config/cc_set_passwords.py b/cloudinit/config/cc_set_passwords.py
642index 4585e4d..cf9b5ab 100755
643--- a/cloudinit/config/cc_set_passwords.py
644+++ b/cloudinit/config/cc_set_passwords.py
645@@ -9,27 +9,40 @@
646 """
647 Set Passwords
648 -------------
649-**Summary:** Set user passwords
650-
651-Set system passwords and enable or disable ssh password authentication.
652-The ``chpasswd`` config key accepts a dictionary containing a single one of two
653-keys, either ``expire`` or ``list``. If ``expire`` is specified and is set to
654-``false``, then the ``password`` global config key is used as the password for
655-all user accounts. If the ``expire`` key is specified and is set to ``true``
656-then user passwords will be expired, preventing the default system passwords
657-from being used.
658-
659-If the ``list`` key is provided, a list of
660-``username:password`` pairs can be specified. The usernames specified
661-must already exist on the system, or have been created using the
662-``cc_users_groups`` module. A password can be randomly generated using
663-``username:RANDOM`` or ``username:R``. A hashed password can be specified
664-using ``username:$6$salt$hash``. Password ssh authentication can be
665-enabled, disabled, or left to system defaults using ``ssh_pwauth``.
666+**Summary:** Set user passwords and enable/disable SSH password authentication
667+
668+This module consumes three top-level config keys: ``ssh_pwauth``, ``chpasswd``
669+and ``password``.
670+
671+The ``ssh_pwauth`` config key determines whether or not sshd will be configured
672+to accept password authentication. True values will enable password auth,
673+false values will disable password auth, and the literal string ``unchanged``
674+will leave it unchanged. Setting no value will also leave the current setting
675+on-disk unchanged.
676+
677+The ``chpasswd`` config key accepts a dictionary containing either or both of
678+``expire`` and ``list``.
679+
680+If the ``list`` key is provided, it should contain a list of
681+``username:password`` pairs. This can be either a YAML list (of strings), or a
682+multi-line string with one pair per line. Each user will have the
683+corresponding password set. A password can be randomly generated by specifying
684+``RANDOM`` or ``R`` as a user's password. A hashed password, created by a tool
685+like ``mkpasswd``, can be specified; a regex
686+(``r'\\$(1|2a|2y|5|6)(\\$.+){2}'``) is used to determine if a password value
687+should be treated as a hash.
688
689 .. note::
690- if using ``expire: true`` then a ssh authkey should be specified or it may
691- not be possible to login to the system
692+ The users specified must already exist on the system. Users will have been
693+ created by the ``cc_users_groups`` module at this point.
694+
695+By default, all users on the system will have their passwords expired (meaning
696+that they will have to be reset the next time the user logs in). To disable
697+this behaviour, set ``expire`` under ``chpasswd`` to a false value.
698+
699+If a ``list`` of user/password pairs is not specified under ``chpasswd``, then
700+the value of the ``password`` config key will be used to set the default user's
701+password.
702
703 **Internal name:** ``cc_set_passwords``
704
705@@ -160,6 +173,8 @@ def handle(_name, cfg, cloud, log, args):
706 hashed_users = []
707 randlist = []
708 users = []
709+ # N.B. This regex is included in the documentation (i.e. the module
710+ # docstring), so any changes to it should be reflected there.
711 prog = re.compile(r'\$(1|2a|2y|5|6)(\$.+){2}')
712 for line in plist:
713 u, p = line.split(':', 1)
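[Reviewer aside, not part of the diff: the hash-detection behaviour the new docstring documents can be sketched in isolation. `classify` and `HASH_RE` are illustrative names, not cloud-init API; the regex is the one used by the module.]

```python
import re

# Same pattern the module uses to decide whether a password value is a
# crypt-style hash ($<id>$<salt>$<hash>).
HASH_RE = re.compile(r'\$(1|2a|2y|5|6)(\$.+){2}')

def classify(entry):
    """Split a 'username:password' pair and label the password kind."""
    user, pw = entry.split(':', 1)
    if pw in ('RANDOM', 'R'):
        kind = 'random'
    elif HASH_RE.match(pw):
        kind = 'hashed'
    else:
        kind = 'plaintext'
    return user, kind

pairs = ['alice:RANDOM', 'bob:$6$saltsalt$somehashvalue', 'carol:secret']
print([classify(p) for p in pairs])
# -> [('alice', 'random'), ('bob', 'hashed'), ('carol', 'plaintext')]
```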
714diff --git a/cloudinit/config/cc_ssh.py b/cloudinit/config/cc_ssh.py
715index f8f7cb3..fdd8f4d 100755
716--- a/cloudinit/config/cc_ssh.py
717+++ b/cloudinit/config/cc_ssh.py
718@@ -91,6 +91,9 @@ public keys.
719 ssh_authorized_keys:
720 - ssh-rsa AAAAB3NzaC1yc2EAAAABIwAAAGEA3FSyQwBI6Z+nCSjUU ...
721 - ssh-rsa AAAAB3NzaC1yc2EAAAABIwAAAQEA3I7VUf2l5gSn5uavROsc5HRDpZ ...
722+ ssh_publish_hostkeys:
723+ enabled: <true/false> (Defaults to true)
724+ blacklist: <list of key types> (Defaults to [dsa])
725 """
726
727 import glob
728@@ -104,6 +107,10 @@ from cloudinit import util
729
730 GENERATE_KEY_NAMES = ['rsa', 'dsa', 'ecdsa', 'ed25519']
731 KEY_FILE_TPL = '/etc/ssh/ssh_host_%s_key'
732+PUBLISH_HOST_KEYS = True
733+# Don't publish the dsa hostkey by default since OpenSSH recommends not using
734+# it.
735+HOST_KEY_PUBLISH_BLACKLIST = ['dsa']
736
737 CONFIG_KEY_TO_FILE = {}
738 PRIV_TO_PUB = {}
739@@ -176,6 +183,23 @@ def handle(_name, cfg, cloud, log, _args):
740 util.logexc(log, "Failed generating key type %s to "
741 "file %s", keytype, keyfile)
742
743+ if "ssh_publish_hostkeys" in cfg:
744+ host_key_blacklist = util.get_cfg_option_list(
745+ cfg["ssh_publish_hostkeys"], "blacklist",
746+ HOST_KEY_PUBLISH_BLACKLIST)
747+ publish_hostkeys = util.get_cfg_option_bool(
748+ cfg["ssh_publish_hostkeys"], "enabled", PUBLISH_HOST_KEYS)
749+ else:
750+ host_key_blacklist = HOST_KEY_PUBLISH_BLACKLIST
751+ publish_hostkeys = PUBLISH_HOST_KEYS
752+
753+ if publish_hostkeys:
754+ hostkeys = get_public_host_keys(blacklist=host_key_blacklist)
755+ try:
756+ cloud.datasource.publish_host_keys(hostkeys)
757+ except Exception:
758+ util.logexc(log, "Publishing host keys failed!")
759+
760 try:
761 (users, _groups) = ug_util.normalize_users_groups(cfg, cloud.distro)
762 (user, _user_config) = ug_util.extract_default(users)
763@@ -209,4 +233,35 @@ def apply_credentials(keys, user, disable_root, disable_root_opts):
764
765 ssh_util.setup_user_keys(keys, 'root', options=key_prefix)
766
767+
768+def get_public_host_keys(blacklist=None):
769+ """Read host keys from /etc/ssh/*.pub files and return them as a list.
770+
771+ @param blacklist: List of key types to ignore. e.g. ['dsa', 'rsa']
772+ @returns: List of keys, each formatted as a two-element tuple.
773+ e.g. [('ssh-rsa', 'AAAAB3Nz...'), ('ssh-ed25519', 'AAAAC3Nx...')]
774+ """
775+ public_key_file_tmpl = '%s.pub' % (KEY_FILE_TPL,)
776+ key_list = []
777+ blacklist_files = []
778+ if blacklist:
779+ # Convert blacklist to filenames:
780+ # 'dsa' -> '/etc/ssh/ssh_host_dsa_key.pub'
781+ blacklist_files = [public_key_file_tmpl % (key_type,)
782+ for key_type in blacklist]
783+ # Get list of public key files and filter out blacklisted files.
784+ file_list = [hostfile for hostfile
785+ in glob.glob(public_key_file_tmpl % ('*',))
786+ if hostfile not in blacklist_files]
787+
788+ # Read host key files, retrieve first two fields as a tuple and
789+ # append that tuple to key_list.
790+ for file_name in file_list:
791+ file_contents = util.load_file(file_name)
792+ key_data = file_contents.split()
793+ if key_data and len(key_data) > 1:
794+ key_list.append(tuple(key_data[:2]))
795+ return key_list
796+
797+
798 # vi: ts=4 expandtab
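[Reviewer aside, not part of the diff: the new `get_public_host_keys` helper's collection logic can be exercised standalone. `read_public_host_keys` below is a hypothetical re-implementation that reads from an arbitrary directory instead of `/etc/ssh`, mirroring the template/blacklist handling above.]

```python
import glob
import os
import tempfile

def read_public_host_keys(key_dir, blacklist=None):
    """Sketch: read ssh_host_*_key.pub files, skip blacklisted key types,
    and return (keytype, keydata) tuples."""
    tmpl = os.path.join(key_dir, 'ssh_host_%s_key.pub')
    # Convert blacklisted key types to the filenames they would produce.
    skip = {tmpl % key_type for key_type in (blacklist or [])}
    keys = []
    for path in sorted(glob.glob(tmpl % '*')):
        if path in skip:
            continue
        fields = open(path).read().split()
        if len(fields) >= 2:
            keys.append(tuple(fields[:2]))
    return keys

d = tempfile.mkdtemp()
for ktype, data in [('rsa', 'AAAAB3Nz'), ('dsa', 'AAAAB3Nk'),
                    ('ed25519', 'AAAAC3Nz')]:
    with open(os.path.join(d, 'ssh_host_%s_key.pub' % ktype), 'w') as f:
        f.write('ssh-%s %s host' % (ktype, data))
print(read_public_host_keys(d, blacklist=['dsa']))
# -> [('ssh-ed25519', 'AAAAC3Nz'), ('ssh-rsa', 'AAAAB3Nz')]
```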
799diff --git a/cloudinit/config/cc_ubuntu_drivers.py b/cloudinit/config/cc_ubuntu_drivers.py
800index 91feb60..297451d 100644
801--- a/cloudinit/config/cc_ubuntu_drivers.py
802+++ b/cloudinit/config/cc_ubuntu_drivers.py
803@@ -2,12 +2,14 @@
804
805 """Ubuntu Drivers: Interact with third party drivers in Ubuntu."""
806
807+import os
808 from textwrap import dedent
809
810 from cloudinit.config.schema import (
811 get_schema_doc, validate_cloudconfig_schema)
812 from cloudinit import log as logging
813 from cloudinit.settings import PER_INSTANCE
814+from cloudinit import temp_utils
815 from cloudinit import type_utils
816 from cloudinit import util
817
818@@ -64,6 +66,33 @@ OLD_UBUNTU_DRIVERS_STDERR_NEEDLE = (
819 __doc__ = get_schema_doc(schema) # Supplement python help()
820
821
822+# Use a debconf template to configure a global debconf variable
823+# (linux/nvidia/latelink) setting this to "true" allows the
824+# 'linux-restricted-modules' deb to accept the NVIDIA EULA and the package
825+# will automatically link the drivers to the running kernel.
826+
827+# EOL_XENIAL: can then drop this script and use python3-debconf which is only
828+# available in Bionic and later. Can't use python3-debconf currently as it
829+# isn't in Xenial and doesn't yet support X_LOADTEMPLATEFILE debconf command.
830+
831+NVIDIA_DEBCONF_CONTENT = """\
832+Template: linux/nvidia/latelink
833+Type: boolean
834+Default: true
835+Description: Late-link NVIDIA kernel modules?
836+ Enable this to link the NVIDIA kernel modules in cloud-init and
837+ make them available for use.
838+"""
839+
840+NVIDIA_DRIVER_LATELINK_DEBCONF_SCRIPT = """\
841+#!/bin/sh
842+# Allow cloud-init to trigger EULA acceptance via registering a debconf
843+# template to set linux/nvidia/latelink true
844+. /usr/share/debconf/confmodule
845+db_x_loadtemplatefile "$1" cloud-init
846+"""
847+
848+
849 def install_drivers(cfg, pkg_install_func):
850 if not isinstance(cfg, dict):
851 raise TypeError(
852@@ -89,9 +118,28 @@ def install_drivers(cfg, pkg_install_func):
853 if version_cfg:
854 driver_arg += ':{}'.format(version_cfg)
855
856- LOG.debug("Installing NVIDIA drivers (%s=%s, version=%s)",
857+ LOG.debug("Installing and activating NVIDIA drivers (%s=%s, version=%s)",
858 cfgpath, nv_acc, version_cfg if version_cfg else 'latest')
859
860+ # Register and set debconf selection linux/nvidia/latelink = true
861+ tdir = temp_utils.mkdtemp(needs_exe=True)
862+ debconf_file = os.path.join(tdir, 'nvidia.template')
863+ debconf_script = os.path.join(tdir, 'nvidia-debconf.sh')
864+ try:
865+ util.write_file(debconf_file, NVIDIA_DEBCONF_CONTENT)
866+ util.write_file(
867+ debconf_script,
868+ util.encode_text(NVIDIA_DRIVER_LATELINK_DEBCONF_SCRIPT),
869+ mode=0o755)
870+ util.subp([debconf_script, debconf_file])
871+ except Exception as e:
872+ util.logexc(
873+ LOG, "Failed to register NVIDIA debconf template: %s", str(e))
874+ raise
875+ finally:
876+ if os.path.isdir(tdir):
877+ util.del_dir(tdir)
878+
879 try:
880 util.subp(['ubuntu-drivers', 'install', '--gpgpu', driver_arg])
881 except util.ProcessExecutionError as exc:
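[Reviewer aside, not part of the diff: the debconf registration added above boils down to writing a template and a tiny confmodule script into a temp dir, running the script with the template path, then cleaning up in a `finally`. A hedged file-handling sketch (no real debconf is invoked; `stage_debconf_files` and the shortened template are illustrative):]

```python
import os
import shutil
import tempfile

TEMPLATE = """Template: linux/nvidia/latelink
Type: boolean
Default: true
"""
SCRIPT = """#!/bin/sh
. /usr/share/debconf/confmodule
db_x_loadtemplatefile "$1" cloud-init
"""

def stage_debconf_files():
    """Write the template and loader script; return the temp dir and both
    paths so the caller can run the script and then delete everything."""
    tdir = tempfile.mkdtemp()
    tmpl_path = os.path.join(tdir, 'nvidia.template')
    script_path = os.path.join(tdir, 'nvidia-debconf.sh')
    with open(tmpl_path, 'w') as f:
        f.write(TEMPLATE)
    with open(script_path, 'w') as f:
        f.write(SCRIPT)
    os.chmod(script_path, 0o755)  # like mode=0o755 in the diff
    return tdir, script_path, tmpl_path

tdir, script, tmpl = stage_debconf_files()
# In cloud-init this is where util.subp([script, tmpl]) would run.
shutil.rmtree(tdir)  # mirrors the finally: util.del_dir(tdir) cleanup
```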
882diff --git a/cloudinit/config/tests/test_ssh.py b/cloudinit/config/tests/test_ssh.py
883index c8a4271..e778984 100644
884--- a/cloudinit/config/tests/test_ssh.py
885+++ b/cloudinit/config/tests/test_ssh.py
886@@ -1,5 +1,6 @@
887 # This file is part of cloud-init. See LICENSE file for license information.
888
889+import os.path
890
891 from cloudinit.config import cc_ssh
892 from cloudinit import ssh_util
893@@ -12,6 +13,25 @@ MODPATH = "cloudinit.config.cc_ssh."
894 class TestHandleSsh(CiTestCase):
895 """Test cc_ssh handling of ssh config."""
896
897+ def _publish_hostkey_test_setup(self):
898+ self.test_hostkeys = {
899+ 'dsa': ('ssh-dss', 'AAAAB3NzaC1kc3MAAACB'),
900+ 'ecdsa': ('ecdsa-sha2-nistp256', 'AAAAE2VjZ'),
901+ 'ed25519': ('ssh-ed25519', 'AAAAC3NzaC1lZDI'),
902+ 'rsa': ('ssh-rsa', 'AAAAB3NzaC1yc2EAAA'),
903+ }
904+ self.test_hostkey_files = []
905+ hostkey_tmpdir = self.tmp_dir()
906+ for key_type in ['dsa', 'ecdsa', 'ed25519', 'rsa']:
907+ key_data = self.test_hostkeys[key_type]
908+ filename = 'ssh_host_%s_key.pub' % key_type
909+ filepath = os.path.join(hostkey_tmpdir, filename)
910+ self.test_hostkey_files.append(filepath)
911+ with open(filepath, 'w') as f:
912+ f.write(' '.join(key_data))
913+
914+ cc_ssh.KEY_FILE_TPL = os.path.join(hostkey_tmpdir, 'ssh_host_%s_key')
915+
916 def test_apply_credentials_with_user(self, m_setup_keys):
917 """Apply keys for the given user and root."""
918 keys = ["key1"]
919@@ -64,6 +84,7 @@ class TestHandleSsh(CiTestCase):
920         # Mock os.path.exists to True to short-circuit the key writing logic
921 m_path_exists.return_value = True
922 m_nug.return_value = ([], {})
923+ cc_ssh.PUBLISH_HOST_KEYS = False
924 cloud = self.tmp_cloud(
925 distro='ubuntu', metadata={'public-keys': keys})
926 cc_ssh.handle("name", cfg, cloud, None, None)
927@@ -149,3 +170,148 @@ class TestHandleSsh(CiTestCase):
928 self.assertEqual([mock.call(set(keys), user),
929 mock.call(set(keys), "root", options="")],
930 m_setup_keys.call_args_list)
931+
932+ @mock.patch(MODPATH + "glob.glob")
933+ @mock.patch(MODPATH + "ug_util.normalize_users_groups")
934+ @mock.patch(MODPATH + "os.path.exists")
935+ def test_handle_publish_hostkeys_default(
936+ self, m_path_exists, m_nug, m_glob, m_setup_keys):
937+ """Test handle with various configs for ssh_publish_hostkeys."""
938+ self._publish_hostkey_test_setup()
939+ cc_ssh.PUBLISH_HOST_KEYS = True
940+ keys = ["key1"]
941+ user = "clouduser"
942+ # Return no matching keys for first glob, test keys for second.
943+ m_glob.side_effect = iter([
944+ [],
945+ self.test_hostkey_files,
946+ ])
947+        # Mock os.path.exists to True to short-circuit the key writing logic
948+ m_path_exists.return_value = True
949+ m_nug.return_value = ({user: {"default": user}}, {})
950+ cloud = self.tmp_cloud(
951+ distro='ubuntu', metadata={'public-keys': keys})
952+ cloud.datasource.publish_host_keys = mock.Mock()
953+
954+ cfg = {}
955+ expected_call = [self.test_hostkeys[key_type] for key_type
956+ in ['ecdsa', 'ed25519', 'rsa']]
957+ cc_ssh.handle("name", cfg, cloud, None, None)
958+ self.assertEqual([mock.call(expected_call)],
959+ cloud.datasource.publish_host_keys.call_args_list)
960+
961+ @mock.patch(MODPATH + "glob.glob")
962+ @mock.patch(MODPATH + "ug_util.normalize_users_groups")
963+ @mock.patch(MODPATH + "os.path.exists")
964+ def test_handle_publish_hostkeys_config_enable(
965+ self, m_path_exists, m_nug, m_glob, m_setup_keys):
966+ """Test handle with various configs for ssh_publish_hostkeys."""
967+ self._publish_hostkey_test_setup()
968+ cc_ssh.PUBLISH_HOST_KEYS = False
969+ keys = ["key1"]
970+ user = "clouduser"
971+ # Return no matching keys for first glob, test keys for second.
972+ m_glob.side_effect = iter([
973+ [],
974+ self.test_hostkey_files,
975+ ])
976+        # Mock os.path.exists to True to short-circuit the key writing logic
977+ m_path_exists.return_value = True
978+ m_nug.return_value = ({user: {"default": user}}, {})
979+ cloud = self.tmp_cloud(
980+ distro='ubuntu', metadata={'public-keys': keys})
981+ cloud.datasource.publish_host_keys = mock.Mock()
982+
983+ cfg = {'ssh_publish_hostkeys': {'enabled': True}}
984+ expected_call = [self.test_hostkeys[key_type] for key_type
985+ in ['ecdsa', 'ed25519', 'rsa']]
986+ cc_ssh.handle("name", cfg, cloud, None, None)
987+ self.assertEqual([mock.call(expected_call)],
988+ cloud.datasource.publish_host_keys.call_args_list)
989+
990+ @mock.patch(MODPATH + "glob.glob")
991+ @mock.patch(MODPATH + "ug_util.normalize_users_groups")
992+ @mock.patch(MODPATH + "os.path.exists")
993+ def test_handle_publish_hostkeys_config_disable(
994+ self, m_path_exists, m_nug, m_glob, m_setup_keys):
995+ """Test handle with various configs for ssh_publish_hostkeys."""
996+ self._publish_hostkey_test_setup()
997+ cc_ssh.PUBLISH_HOST_KEYS = True
998+ keys = ["key1"]
999+ user = "clouduser"
1000+ # Return no matching keys for first glob, test keys for second.
1001+ m_glob.side_effect = iter([
1002+ [],
1003+ self.test_hostkey_files,
1004+ ])
1005+        # Mock os.path.exists to True to short-circuit the key writing logic
1006+ m_path_exists.return_value = True
1007+ m_nug.return_value = ({user: {"default": user}}, {})
1008+ cloud = self.tmp_cloud(
1009+ distro='ubuntu', metadata={'public-keys': keys})
1010+ cloud.datasource.publish_host_keys = mock.Mock()
1011+
1012+ cfg = {'ssh_publish_hostkeys': {'enabled': False}}
1013+ cc_ssh.handle("name", cfg, cloud, None, None)
1014+ self.assertFalse(cloud.datasource.publish_host_keys.call_args_list)
1015+ cloud.datasource.publish_host_keys.assert_not_called()
1016+
1017+ @mock.patch(MODPATH + "glob.glob")
1018+ @mock.patch(MODPATH + "ug_util.normalize_users_groups")
1019+ @mock.patch(MODPATH + "os.path.exists")
1020+ def test_handle_publish_hostkeys_config_blacklist(
1021+ self, m_path_exists, m_nug, m_glob, m_setup_keys):
1022+ """Test handle with various configs for ssh_publish_hostkeys."""
1023+ self._publish_hostkey_test_setup()
1024+ cc_ssh.PUBLISH_HOST_KEYS = True
1025+ keys = ["key1"]
1026+ user = "clouduser"
1027+ # Return no matching keys for first glob, test keys for second.
1028+ m_glob.side_effect = iter([
1029+ [],
1030+ self.test_hostkey_files,
1031+ ])
1032+        # Mock os.path.exists to True to short-circuit the key writing logic
1033+ m_path_exists.return_value = True
1034+ m_nug.return_value = ({user: {"default": user}}, {})
1035+ cloud = self.tmp_cloud(
1036+ distro='ubuntu', metadata={'public-keys': keys})
1037+ cloud.datasource.publish_host_keys = mock.Mock()
1038+
1039+ cfg = {'ssh_publish_hostkeys': {'enabled': True,
1040+ 'blacklist': ['dsa', 'rsa']}}
1041+ expected_call = [self.test_hostkeys[key_type] for key_type
1042+ in ['ecdsa', 'ed25519']]
1043+ cc_ssh.handle("name", cfg, cloud, None, None)
1044+ self.assertEqual([mock.call(expected_call)],
1045+ cloud.datasource.publish_host_keys.call_args_list)
1046+
1047+ @mock.patch(MODPATH + "glob.glob")
1048+ @mock.patch(MODPATH + "ug_util.normalize_users_groups")
1049+ @mock.patch(MODPATH + "os.path.exists")
1050+ def test_handle_publish_hostkeys_empty_blacklist(
1051+ self, m_path_exists, m_nug, m_glob, m_setup_keys):
1052+ """Test handle with various configs for ssh_publish_hostkeys."""
1053+ self._publish_hostkey_test_setup()
1054+ cc_ssh.PUBLISH_HOST_KEYS = True
1055+ keys = ["key1"]
1056+ user = "clouduser"
1057+ # Return no matching keys for first glob, test keys for second.
1058+ m_glob.side_effect = iter([
1059+ [],
1060+ self.test_hostkey_files,
1061+ ])
1062+        # Mock os.path.exists to True to short-circuit the key writing logic
1063+ m_path_exists.return_value = True
1064+ m_nug.return_value = ({user: {"default": user}}, {})
1065+ cloud = self.tmp_cloud(
1066+ distro='ubuntu', metadata={'public-keys': keys})
1067+ cloud.datasource.publish_host_keys = mock.Mock()
1068+
1069+ cfg = {'ssh_publish_hostkeys': {'enabled': True,
1070+ 'blacklist': []}}
1071+ expected_call = [self.test_hostkeys[key_type] for key_type
1072+ in ['dsa', 'ecdsa', 'ed25519', 'rsa']]
1073+ cc_ssh.handle("name", cfg, cloud, None, None)
1074+ self.assertEqual([mock.call(expected_call)],
1075+ cloud.datasource.publish_host_keys.call_args_list)
1076diff --git a/cloudinit/config/tests/test_ubuntu_drivers.py b/cloudinit/config/tests/test_ubuntu_drivers.py
1077index efba4ce..4695269 100644
1078--- a/cloudinit/config/tests/test_ubuntu_drivers.py
1079+++ b/cloudinit/config/tests/test_ubuntu_drivers.py
1080@@ -1,6 +1,7 @@
1081 # This file is part of cloud-init. See LICENSE file for license information.
1082
1083 import copy
1084+import os
1085
1086 from cloudinit.tests.helpers import CiTestCase, skipUnlessJsonSchema, mock
1087 from cloudinit.config.schema import (
1088@@ -9,11 +10,27 @@ from cloudinit.config import cc_ubuntu_drivers as drivers
1089 from cloudinit.util import ProcessExecutionError
1090
1091 MPATH = "cloudinit.config.cc_ubuntu_drivers."
1092+M_TMP_PATH = MPATH + "temp_utils.mkdtemp"
1093 OLD_UBUNTU_DRIVERS_ERROR_STDERR = (
1094 "ubuntu-drivers: error: argument <command>: invalid choice: 'install' "
1095 "(choose from 'list', 'autoinstall', 'devices', 'debug')\n")
1096
1097
1098+class AnyTempScriptAndDebconfFile(object):
1099+
1100+ def __init__(self, tmp_dir, debconf_file):
1101+ self.tmp_dir = tmp_dir
1102+ self.debconf_file = debconf_file
1103+
1104+ def __eq__(self, cmd):
1105+ if not len(cmd) == 2:
1106+ return False
1107+ script, debconf_file = cmd
1108+ if bool(script.startswith(self.tmp_dir) and script.endswith('.sh')):
1109+ return debconf_file == self.debconf_file
1110+ return False
1111+
1112+
1113 class TestUbuntuDrivers(CiTestCase):
1114 cfg_accepted = {'drivers': {'nvidia': {'license-accepted': True}}}
1115 install_gpgpu = ['ubuntu-drivers', 'install', '--gpgpu', 'nvidia']
1116@@ -28,16 +45,23 @@ class TestUbuntuDrivers(CiTestCase):
1117 {'drivers': {'nvidia': {'license-accepted': "TRUE"}}},
1118 schema=drivers.schema, strict=True)
1119
1120+ @mock.patch(M_TMP_PATH)
1121 @mock.patch(MPATH + "util.subp", return_value=('', ''))
1122 @mock.patch(MPATH + "util.which", return_value=False)
1123- def _assert_happy_path_taken(self, config, m_which, m_subp):
1124+ def _assert_happy_path_taken(
1125+ self, config, m_which, m_subp, m_tmp):
1126 """Positive path test through handle. Package should be installed."""
1127+ tdir = self.tmp_dir()
1128+ debconf_file = os.path.join(tdir, 'nvidia.template')
1129+ m_tmp.return_value = tdir
1130 myCloud = mock.MagicMock()
1131 drivers.handle('ubuntu_drivers', config, myCloud, None, None)
1132 self.assertEqual([mock.call(['ubuntu-drivers-common'])],
1133 myCloud.distro.install_packages.call_args_list)
1134- self.assertEqual([mock.call(self.install_gpgpu)],
1135- m_subp.call_args_list)
1136+ self.assertEqual(
1137+ [mock.call(AnyTempScriptAndDebconfFile(tdir, debconf_file)),
1138+ mock.call(self.install_gpgpu)],
1139+ m_subp.call_args_list)
1140
1141 def test_handle_does_package_install(self):
1142 self._assert_happy_path_taken(self.cfg_accepted)
1143@@ -48,19 +72,33 @@ class TestUbuntuDrivers(CiTestCase):
1144 new_config['drivers']['nvidia']['license-accepted'] = true_value
1145 self._assert_happy_path_taken(new_config)
1146
1147- @mock.patch(MPATH + "util.subp", side_effect=ProcessExecutionError(
1148- stdout='No drivers found for installation.\n', exit_code=1))
1149+ @mock.patch(M_TMP_PATH)
1150+ @mock.patch(MPATH + "util.subp")
1151 @mock.patch(MPATH + "util.which", return_value=False)
1152- def test_handle_raises_error_if_no_drivers_found(self, m_which, m_subp):
1153+ def test_handle_raises_error_if_no_drivers_found(
1154+ self, m_which, m_subp, m_tmp):
1155 """If ubuntu-drivers doesn't install any drivers, raise an error."""
1156+ tdir = self.tmp_dir()
1157+ debconf_file = os.path.join(tdir, 'nvidia.template')
1158+ m_tmp.return_value = tdir
1159 myCloud = mock.MagicMock()
1160+
1161+ def fake_subp(cmd):
1162+ if cmd[0].startswith(tdir):
1163+ return
1164+ raise ProcessExecutionError(
1165+ stdout='No drivers found for installation.\n', exit_code=1)
1166+ m_subp.side_effect = fake_subp
1167+
1168 with self.assertRaises(Exception):
1169 drivers.handle(
1170 'ubuntu_drivers', self.cfg_accepted, myCloud, None, None)
1171 self.assertEqual([mock.call(['ubuntu-drivers-common'])],
1172 myCloud.distro.install_packages.call_args_list)
1173- self.assertEqual([mock.call(self.install_gpgpu)],
1174- m_subp.call_args_list)
1175+ self.assertEqual(
1176+ [mock.call(AnyTempScriptAndDebconfFile(tdir, debconf_file)),
1177+ mock.call(self.install_gpgpu)],
1178+ m_subp.call_args_list)
1179 self.assertIn('ubuntu-drivers found no drivers for installation',
1180 self.logs.getvalue())
1181
1182@@ -108,18 +146,25 @@ class TestUbuntuDrivers(CiTestCase):
1183 myLog.debug.call_args_list[0][0][0])
1184 self.assertEqual(0, m_install_drivers.call_count)
1185
1186+ @mock.patch(M_TMP_PATH)
1187 @mock.patch(MPATH + "util.subp", return_value=('', ''))
1188 @mock.patch(MPATH + "util.which", return_value=True)
1189- def test_install_drivers_no_install_if_present(self, m_which, m_subp):
1190+ def test_install_drivers_no_install_if_present(
1191+ self, m_which, m_subp, m_tmp):
1192 """If 'ubuntu-drivers' is present, no package install should occur."""
1193+ tdir = self.tmp_dir()
1194+ debconf_file = os.path.join(tdir, 'nvidia.template')
1195+ m_tmp.return_value = tdir
1196 pkg_install = mock.MagicMock()
1197 drivers.install_drivers(self.cfg_accepted['drivers'],
1198 pkg_install_func=pkg_install)
1199 self.assertEqual(0, pkg_install.call_count)
1200 self.assertEqual([mock.call('ubuntu-drivers')],
1201 m_which.call_args_list)
1202- self.assertEqual([mock.call(self.install_gpgpu)],
1203- m_subp.call_args_list)
1204+ self.assertEqual(
1205+ [mock.call(AnyTempScriptAndDebconfFile(tdir, debconf_file)),
1206+ mock.call(self.install_gpgpu)],
1207+ m_subp.call_args_list)
1208
1209 def test_install_drivers_rejects_invalid_config(self):
1210 """install_drivers should raise TypeError if not given a config dict"""
1211@@ -128,20 +173,33 @@ class TestUbuntuDrivers(CiTestCase):
1212 drivers.install_drivers("mystring", pkg_install_func=pkg_install)
1213 self.assertEqual(0, pkg_install.call_count)
1214
1215- @mock.patch(MPATH + "util.subp", side_effect=ProcessExecutionError(
1216- stderr=OLD_UBUNTU_DRIVERS_ERROR_STDERR, exit_code=2))
1217+ @mock.patch(M_TMP_PATH)
1218+ @mock.patch(MPATH + "util.subp")
1219 @mock.patch(MPATH + "util.which", return_value=False)
1220 def test_install_drivers_handles_old_ubuntu_drivers_gracefully(
1221- self, m_which, m_subp):
1222+ self, m_which, m_subp, m_tmp):
1223 """Older ubuntu-drivers versions should emit message and raise error"""
1224+ tdir = self.tmp_dir()
1225+ debconf_file = os.path.join(tdir, 'nvidia.template')
1226+ m_tmp.return_value = tdir
1227 myCloud = mock.MagicMock()
1228+
1229+ def fake_subp(cmd):
1230+ if cmd[0].startswith(tdir):
1231+ return
1232+ raise ProcessExecutionError(
1233+ stderr=OLD_UBUNTU_DRIVERS_ERROR_STDERR, exit_code=2)
1234+ m_subp.side_effect = fake_subp
1235+
1236 with self.assertRaises(Exception):
1237 drivers.handle(
1238 'ubuntu_drivers', self.cfg_accepted, myCloud, None, None)
1239 self.assertEqual([mock.call(['ubuntu-drivers-common'])],
1240 myCloud.distro.install_packages.call_args_list)
1241- self.assertEqual([mock.call(self.install_gpgpu)],
1242- m_subp.call_args_list)
1243+ self.assertEqual(
1244+ [mock.call(AnyTempScriptAndDebconfFile(tdir, debconf_file)),
1245+ mock.call(self.install_gpgpu)],
1246+ m_subp.call_args_list)
1247 self.assertIn('WARNING: the available version of ubuntu-drivers is'
1248 ' too old to perform requested driver installation',
1249 self.logs.getvalue())
1250@@ -153,16 +211,21 @@ class TestUbuntuDriversWithVersion(TestUbuntuDrivers):
1251 'drivers': {'nvidia': {'license-accepted': True, 'version': '123'}}}
1252 install_gpgpu = ['ubuntu-drivers', 'install', '--gpgpu', 'nvidia:123']
1253
1254+ @mock.patch(M_TMP_PATH)
1255 @mock.patch(MPATH + "util.subp", return_value=('', ''))
1256 @mock.patch(MPATH + "util.which", return_value=False)
1257- def test_version_none_uses_latest(self, m_which, m_subp):
1258+ def test_version_none_uses_latest(self, m_which, m_subp, m_tmp):
1259+ tdir = self.tmp_dir()
1260+ debconf_file = os.path.join(tdir, 'nvidia.template')
1261+ m_tmp.return_value = tdir
1262 myCloud = mock.MagicMock()
1263 version_none_cfg = {
1264 'drivers': {'nvidia': {'license-accepted': True, 'version': None}}}
1265 drivers.handle(
1266 'ubuntu_drivers', version_none_cfg, myCloud, None, None)
1267 self.assertEqual(
1268- [mock.call(['ubuntu-drivers', 'install', '--gpgpu', 'nvidia'])],
1269+ [mock.call(AnyTempScriptAndDebconfFile(tdir, debconf_file)),
1270+ mock.call(['ubuntu-drivers', 'install', '--gpgpu', 'nvidia'])],
1271 m_subp.call_args_list)
1272
1273 def test_specifying_a_version_doesnt_override_license_acceptance(self):
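[Reviewer aside, not part of the diff: `AnyTempScriptAndDebconfFile` above uses a handy unittest.mock pattern; any object with a suitable `__eq__` can stand in for a "fuzzy" expected argument in `call_args_list` comparisons. A minimal standalone sketch of the same idea (`AnyPrefixed` is an illustrative name):]

```python
from unittest import mock

class AnyPrefixed(object):
    """Matches any string starting with the given prefix."""
    def __init__(self, prefix):
        self.prefix = prefix

    def __eq__(self, other):
        return isinstance(other, str) and other.startswith(self.prefix)

m = mock.Mock()
m('/tmp/abc123/script.sh', 'arg')
# Comparing calls runs AnyPrefixed.__eq__ against the recorded argument.
assert m.call_args_list == [mock.call(AnyPrefixed('/tmp/'), 'arg')]
```

The same trick underpins `mock.ANY`; a custom matcher just narrows "anything" down to "anything that looks right".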
1274diff --git a/cloudinit/distros/__init__.py b/cloudinit/distros/__init__.py
1275index 20c994d..00bdee3 100644
1276--- a/cloudinit/distros/__init__.py
1277+++ b/cloudinit/distros/__init__.py
1278@@ -396,16 +396,16 @@ class Distro(object):
1279 else:
1280 create_groups = True
1281
1282- adduser_cmd = ['useradd', name]
1283- log_adduser_cmd = ['useradd', name]
1284+ useradd_cmd = ['useradd', name]
1285+ log_useradd_cmd = ['useradd', name]
1286 if util.system_is_snappy():
1287- adduser_cmd.append('--extrausers')
1288- log_adduser_cmd.append('--extrausers')
1289+ useradd_cmd.append('--extrausers')
1290+ log_useradd_cmd.append('--extrausers')
1291
1292 # Since we are creating users, we want to carefully validate the
1293 # inputs. If something goes wrong, we can end up with a system
1294 # that nobody can login to.
1295- adduser_opts = {
1296+ useradd_opts = {
1297 "gecos": '--comment',
1298 "homedir": '--home',
1299 "primary_group": '--gid',
1300@@ -418,7 +418,7 @@ class Distro(object):
1301 "selinux_user": '--selinux-user',
1302 }
1303
1304- adduser_flags = {
1305+ useradd_flags = {
1306 "no_user_group": '--no-user-group',
1307 "system": '--system',
1308 "no_log_init": '--no-log-init',
1309@@ -453,32 +453,32 @@ class Distro(object):
1310 # Check the values and create the command
1311 for key, val in sorted(kwargs.items()):
1312
1313- if key in adduser_opts and val and isinstance(val, str):
1314- adduser_cmd.extend([adduser_opts[key], val])
1315+ if key in useradd_opts and val and isinstance(val, str):
1316+ useradd_cmd.extend([useradd_opts[key], val])
1317
1318 # Redact certain fields from the logs
1319 if key in redact_opts:
1320- log_adduser_cmd.extend([adduser_opts[key], 'REDACTED'])
1321+ log_useradd_cmd.extend([useradd_opts[key], 'REDACTED'])
1322 else:
1323- log_adduser_cmd.extend([adduser_opts[key], val])
1324+ log_useradd_cmd.extend([useradd_opts[key], val])
1325
1326- elif key in adduser_flags and val:
1327- adduser_cmd.append(adduser_flags[key])
1328- log_adduser_cmd.append(adduser_flags[key])
1329+ elif key in useradd_flags and val:
1330+ useradd_cmd.append(useradd_flags[key])
1331+ log_useradd_cmd.append(useradd_flags[key])
1332
1333 # Don't create the home directory if directed so or if the user is a
1334 # system user
1335 if kwargs.get('no_create_home') or kwargs.get('system'):
1336- adduser_cmd.append('-M')
1337- log_adduser_cmd.append('-M')
1338+ useradd_cmd.append('-M')
1339+ log_useradd_cmd.append('-M')
1340 else:
1341- adduser_cmd.append('-m')
1342- log_adduser_cmd.append('-m')
1343+ useradd_cmd.append('-m')
1344+ log_useradd_cmd.append('-m')
1345
1346 # Run the command
1347 LOG.debug("Adding user %s", name)
1348 try:
1349- util.subp(adduser_cmd, logstring=log_adduser_cmd)
1350+ util.subp(useradd_cmd, logstring=log_useradd_cmd)
1351 except Exception as e:
1352 util.logexc(LOG, "Failed to create user %s", name)
1353 raise e
1354@@ -490,15 +490,15 @@ class Distro(object):
1355
1356 snapuser = kwargs.get('snapuser')
1357 known = kwargs.get('known', False)
1358- adduser_cmd = ["snap", "create-user", "--sudoer", "--json"]
1359+ create_user_cmd = ["snap", "create-user", "--sudoer", "--json"]
1360 if known:
1361- adduser_cmd.append("--known")
1362- adduser_cmd.append(snapuser)
1363+ create_user_cmd.append("--known")
1364+ create_user_cmd.append(snapuser)
1365
1366 # Run the command
1367 LOG.debug("Adding snap user %s", name)
1368 try:
1369- (out, err) = util.subp(adduser_cmd, logstring=adduser_cmd,
1370+ (out, err) = util.subp(create_user_cmd, logstring=create_user_cmd,
1371 capture=True)
1372 LOG.debug("snap create-user returned: %s:%s", out, err)
1373 jobj = util.load_json(out)
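[Reviewer aside, not part of the diff: the rename above (adduser_cmd to useradd_cmd) doesn't change behaviour, but the surrounding idiom is worth seeing in isolation: kwargs are translated to CLI arguments via an options dict and a flags dict, with sensitive values swapped for 'REDACTED' in the copy that goes to the logs. `build_useradd_cmd` is a simplified, hypothetical sketch, not the Distro method:]

```python
def build_useradd_cmd(name, redact_opts=('passwd',), **kwargs):
    """Build the real command and a log-safe copy side by side."""
    opts = {'gecos': '--comment', 'homedir': '--home',
            'passwd': '--password'}
    flags = {'system': '--system', 'no_log_init': '--no-log-init'}
    cmd = ['useradd', name]
    log_cmd = ['useradd', name]
    for key, val in sorted(kwargs.items()):
        if key in opts and val and isinstance(val, str):
            cmd.extend([opts[key], val])
            # Redact secrets in the copy destined for the logs.
            log_cmd.extend([opts[key],
                            'REDACTED' if key in redact_opts else val])
        elif key in flags and val:
            cmd.append(flags[key])
            log_cmd.append(flags[key])
    return cmd, log_cmd

cmd, log_cmd = build_useradd_cmd('bob', passwd='$6$x$y', gecos='Bob')
print(log_cmd)
# -> ['useradd', 'bob', '--comment', 'Bob', '--password', 'REDACTED']
```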
1374diff --git a/cloudinit/distros/arch.py b/cloudinit/distros/arch.py
1375index b814c8b..9f89c5f 100644
1376--- a/cloudinit/distros/arch.py
1377+++ b/cloudinit/distros/arch.py
1378@@ -12,6 +12,8 @@ from cloudinit import util
1379 from cloudinit.distros import net_util
1380 from cloudinit.distros.parsers.hostname import HostnameConf
1381
1382+from cloudinit.net.renderers import RendererNotFoundError
1383+
1384 from cloudinit.settings import PER_INSTANCE
1385
1386 import os
1387@@ -24,6 +26,11 @@ class Distro(distros.Distro):
1388 network_conf_dir = "/etc/netctl"
1389 resolve_conf_fn = "/etc/resolv.conf"
1390 init_cmd = ['systemctl'] # init scripts
1391+ renderer_configs = {
1392+ "netplan": {"netplan_path": "/etc/netplan/50-cloud-init.yaml",
1393+ "netplan_header": "# generated by cloud-init\n",
1394+ "postcmds": True}
1395+ }
1396
1397 def __init__(self, name, cfg, paths):
1398 distros.Distro.__init__(self, name, cfg, paths)
1399@@ -50,6 +57,13 @@ class Distro(distros.Distro):
1400 self.update_package_sources()
1401 self.package_command('', pkgs=pkglist)
1402
1403+ def _write_network_config(self, netconfig):
1404+ try:
1405+ return self._supported_write_network_config(netconfig)
1406+ except RendererNotFoundError:
1407+ # Fall back to old _write_network
1408+ raise NotImplementedError
1409+
1410 def _write_network(self, settings):
1411 entries = net_util.translate_network(settings)
1412 LOG.debug("Translated ubuntu style network settings %s into %s",
1413diff --git a/cloudinit/distros/debian.py b/cloudinit/distros/debian.py
1414index d517fb8..0ad93ff 100644
1415--- a/cloudinit/distros/debian.py
1416+++ b/cloudinit/distros/debian.py
1417@@ -36,14 +36,14 @@ ENI_HEADER = """# This file is generated from information provided by
1418 # network: {config: disabled}
1419 """
1420
1421-NETWORK_CONF_FN = "/etc/network/interfaces.d/50-cloud-init.cfg"
1422+NETWORK_CONF_FN = "/etc/network/interfaces.d/50-cloud-init"
1423 LOCALE_CONF_FN = "/etc/default/locale"
1424
1425
1426 class Distro(distros.Distro):
1427 hostname_conf_fn = "/etc/hostname"
1428 network_conf_fn = {
1429- "eni": "/etc/network/interfaces.d/50-cloud-init.cfg",
1430+ "eni": "/etc/network/interfaces.d/50-cloud-init",
1431 "netplan": "/etc/netplan/50-cloud-init.yaml"
1432 }
1433 renderer_configs = {
1434diff --git a/cloudinit/distros/freebsd.py b/cloudinit/distros/freebsd.py
1435index ff22d56..f7825fd 100644
1436--- a/cloudinit/distros/freebsd.py
1437+++ b/cloudinit/distros/freebsd.py
1438@@ -185,10 +185,10 @@ class Distro(distros.Distro):
1439 LOG.info("User %s already exists, skipping.", name)
1440 return False
1441
1442- adduser_cmd = ['pw', 'useradd', '-n', name]
1443- log_adduser_cmd = ['pw', 'useradd', '-n', name]
1444+ pw_useradd_cmd = ['pw', 'useradd', '-n', name]
1445+ log_pw_useradd_cmd = ['pw', 'useradd', '-n', name]
1446
1447- adduser_opts = {
1448+ pw_useradd_opts = {
1449 "homedir": '-d',
1450 "gecos": '-c',
1451 "primary_group": '-g',
1452@@ -196,34 +196,34 @@ class Distro(distros.Distro):
1453 "shell": '-s',
1454 "inactive": '-E',
1455 }
1456- adduser_flags = {
1457+ pw_useradd_flags = {
1458 "no_user_group": '--no-user-group',
1459 "system": '--system',
1460 "no_log_init": '--no-log-init',
1461 }
1462
1463 for key, val in kwargs.items():
1464- if (key in adduser_opts and val and
1465+ if (key in pw_useradd_opts and val and
1466 isinstance(val, six.string_types)):
1467- adduser_cmd.extend([adduser_opts[key], val])
1468+ pw_useradd_cmd.extend([pw_useradd_opts[key], val])
1469
1470- elif key in adduser_flags and val:
1471- adduser_cmd.append(adduser_flags[key])
1472- log_adduser_cmd.append(adduser_flags[key])
1473+ elif key in pw_useradd_flags and val:
1474+ pw_useradd_cmd.append(pw_useradd_flags[key])
1475+ log_pw_useradd_cmd.append(pw_useradd_flags[key])
1476
1477 if 'no_create_home' in kwargs or 'system' in kwargs:
1478- adduser_cmd.append('-d/nonexistent')
1479- log_adduser_cmd.append('-d/nonexistent')
1480+ pw_useradd_cmd.append('-d/nonexistent')
1481+ log_pw_useradd_cmd.append('-d/nonexistent')
1482 else:
1483- adduser_cmd.append('-d/usr/home/%s' % name)
1484- adduser_cmd.append('-m')
1485- log_adduser_cmd.append('-d/usr/home/%s' % name)
1486- log_adduser_cmd.append('-m')
1487+ pw_useradd_cmd.append('-d/usr/home/%s' % name)
1488+ pw_useradd_cmd.append('-m')
1489+ log_pw_useradd_cmd.append('-d/usr/home/%s' % name)
1490+ log_pw_useradd_cmd.append('-m')
1491
1492 # Run the command
1493 LOG.info("Adding user %s", name)
1494 try:
1495- util.subp(adduser_cmd, logstring=log_adduser_cmd)
1496+ util.subp(pw_useradd_cmd, logstring=log_pw_useradd_cmd)
1497 except Exception as e:
1498 util.logexc(LOG, "Failed to create user %s", name)
1499 raise e
1500diff --git a/cloudinit/distros/opensuse.py b/cloudinit/distros/opensuse.py
1501index 1bfe047..e41e2f7 100644
1502--- a/cloudinit/distros/opensuse.py
1503+++ b/cloudinit/distros/opensuse.py
1504@@ -38,6 +38,8 @@ class Distro(distros.Distro):
1505 'sysconfig': {
1506 'control': 'etc/sysconfig/network/config',
1507 'iface_templates': '%(base)s/network/ifcfg-%(name)s',
1508+ 'netrules_path': (
1509+ 'etc/udev/rules.d/85-persistent-net-cloud-init.rules'),
1510 'route_templates': {
1511 'ipv4': '%(base)s/network/ifroute-%(name)s',
1512 'ipv6': '%(base)s/network/ifroute-%(name)s',
1513diff --git a/cloudinit/distros/parsers/sys_conf.py b/cloudinit/distros/parsers/sys_conf.py
1514index c27b5d5..44df17d 100644
1515--- a/cloudinit/distros/parsers/sys_conf.py
1516+++ b/cloudinit/distros/parsers/sys_conf.py
1517@@ -43,6 +43,13 @@ def _contains_shell_variable(text):
1518
1519
1520 class SysConf(configobj.ConfigObj):
1521+ """A configobj.ConfigObj subclass specialised for sysconfig files.
1522+
1523+ :param contents:
1524+ The sysconfig file to parse, in a format accepted by
1525+ ``configobj.ConfigObj.__init__`` (i.e. "a filename, file like object,
1526+ or list of lines").
1527+ """
1528 def __init__(self, contents):
1529 configobj.ConfigObj.__init__(self, contents,
1530 interpolation=False,
1531diff --git a/cloudinit/distros/ubuntu.py b/cloudinit/distros/ubuntu.py
1532index 6815410..e5fcbc5 100644
1533--- a/cloudinit/distros/ubuntu.py
1534+++ b/cloudinit/distros/ubuntu.py
1535@@ -21,6 +21,21 @@ LOG = logging.getLogger(__name__)
1536
1537 class Distro(debian.Distro):
1538
1539+ def __init__(self, name, cfg, paths):
1540+ super(Distro, self).__init__(name, cfg, paths)
1541+ # Ubuntu specific network cfg locations
1542+ self.network_conf_fn = {
1543+ "eni": "/etc/network/interfaces.d/50-cloud-init.cfg",
1544+ "netplan": "/etc/netplan/50-cloud-init.yaml"
1545+ }
1546+ self.renderer_configs = {
1547+ "eni": {"eni_path": self.network_conf_fn["eni"],
1548+ "eni_header": debian.ENI_HEADER},
1549+ "netplan": {"netplan_path": self.network_conf_fn["netplan"],
1550+ "netplan_header": debian.ENI_HEADER,
1551+ "postcmds": True}
1552+ }
1553+
1554 @property
1555 def preferred_ntp_clients(self):
1556 """The preferred ntp client is dependent on the version."""
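Reviewer note on the ubuntu.py change above: moving `network_conf_fn`/`renderer_configs` into `__init__` makes them per-instance attributes, so the Ubuntu subclass can keep the legacy `50-cloud-init.cfg` ENI path while the Debian parent switches to `50-cloud-init`, and neither mutates shared class-level state. A minimal sketch of that pattern (hypothetical `Base`/`Child` names, not cloud-init's classes):

```python
class Base:
    # class-level dict: shared by every subclass that does not override it
    conf = {"eni": "/etc/network/interfaces.d/50-cloud-init"}

class Child(Base):
    def __init__(self):
        # per-instance copy; diverging here cannot leak back into Base.conf
        self.conf = {"eni": "/etc/network/interfaces.d/50-cloud-init.cfg"}

b, c = Base(), Child()
assert c.conf["eni"].endswith(".cfg")
assert not b.conf["eni"].endswith(".cfg")
```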
1557diff --git a/cloudinit/net/__init__.py b/cloudinit/net/__init__.py
1558index 3642fb1..ea707c0 100644
1559--- a/cloudinit/net/__init__.py
1560+++ b/cloudinit/net/__init__.py
1561@@ -9,6 +9,7 @@ import errno
1562 import logging
1563 import os
1564 import re
1565+from functools import partial
1566
1567 from cloudinit.net.network_state import mask_to_net_prefix
1568 from cloudinit import util
1569@@ -264,46 +265,29 @@ def find_fallback_nic(blacklist_drivers=None):
1570
1571
1572 def generate_fallback_config(blacklist_drivers=None, config_driver=None):
1573- """Determine which attached net dev is most likely to have a connection and
1574- generate network state to run dhcp on that interface"""
1575-
1576+ """Generate network cfg v2 for dhcp on the NIC most likely connected."""
1577 if not config_driver:
1578 config_driver = False
1579
1580 target_name = find_fallback_nic(blacklist_drivers=blacklist_drivers)
1581- if target_name:
1582- target_mac = read_sys_net_safe(target_name, 'address')
1583- nconf = {'config': [], 'version': 1}
1584- cfg = {'type': 'physical', 'name': target_name,
1585- 'mac_address': target_mac, 'subnets': [{'type': 'dhcp'}]}
1586- # inject the device driver name, dev_id into config if enabled and
1587- # device has a valid device driver value
1588- if config_driver:
1589- driver = device_driver(target_name)
1590- if driver:
1591- cfg['params'] = {
1592- 'driver': driver,
1593- 'device_id': device_devid(target_name),
1594- }
1595- nconf['config'].append(cfg)
1596- return nconf
1597- else:
1598+ if not target_name:
1599 # can't read any interfaces addresses (or there are none); give up
1600 return None
1601+ target_mac = read_sys_net_safe(target_name, 'address')
1602+ cfg = {'dhcp4': True, 'set-name': target_name,
1603+ 'match': {'macaddress': target_mac.lower()}}
1604+ if config_driver:
1605+ driver = device_driver(target_name)
1606+ if driver:
1607+ cfg['match']['driver'] = driver
1608+ nconf = {'ethernets': {target_name: cfg}, 'version': 2}
1609+ return nconf
1610
1611
1612-def apply_network_config_names(netcfg, strict_present=True, strict_busy=True):
1613- """read the network config and rename devices accordingly.
1614- if strict_present is false, then do not raise exception if no devices
1615- match. if strict_busy is false, then do not raise exception if the
1616- device cannot be renamed because it is currently configured.
1617-
1618- renames are only attempted for interfaces of type 'physical'. It is
1619- expected that the network system will create other devices with the
1620- correct name in place."""
1621+def extract_physdevs(netcfg):
1622
1623 def _version_1(netcfg):
1624- renames = []
1625+ physdevs = []
1626 for ent in netcfg.get('config', {}):
1627 if ent.get('type') != 'physical':
1628 continue
1629@@ -317,11 +301,11 @@ def apply_network_config_names(netcfg, strict_present=True, strict_busy=True):
1630 driver = device_driver(name)
1631 if not device_id:
1632 device_id = device_devid(name)
1633- renames.append([mac, name, driver, device_id])
1634- return renames
1635+ physdevs.append([mac, name, driver, device_id])
1636+ return physdevs
1637
1638 def _version_2(netcfg):
1639- renames = []
1640+ physdevs = []
1641 for ent in netcfg.get('ethernets', {}).values():
1642 # only rename if configured to do so
1643 name = ent.get('set-name')
1644@@ -337,16 +321,69 @@ def apply_network_config_names(netcfg, strict_present=True, strict_busy=True):
1645 driver = device_driver(name)
1646 if not device_id:
1647 device_id = device_devid(name)
1648- renames.append([mac, name, driver, device_id])
1649- return renames
1650+ physdevs.append([mac, name, driver, device_id])
1651+ return physdevs
1652+
1653+ version = netcfg.get('version')
1654+ if version == 1:
1655+ return _version_1(netcfg)
1656+ elif version == 2:
1657+ return _version_2(netcfg)
1658+
1659+ raise RuntimeError('Unknown network config version: %s' % version)
1660+
1661+
1662+def wait_for_physdevs(netcfg, strict=True):
1663+ physdevs = extract_physdevs(netcfg)
1664+
1665+ # set of expected iface names and mac addrs
1666+ expected_ifaces = dict([(iface[0], iface[1]) for iface in physdevs])
1667+ expected_macs = set(expected_ifaces.keys())
1668+
1669+ # set of current macs
1670+ present_macs = get_interfaces_by_mac().keys()
1671+
1672+ # compare the set of expected mac address values to
1673+ # the current macs present; we only check MAC as cloud-init
1674+ # has not yet renamed interfaces and the netcfg may include
1675+ # such renames.
1676+ for _ in range(0, 5):
1677+ if expected_macs.issubset(present_macs):
1678+ LOG.debug('net: all expected physical devices present')
1679+ return
1680
1681- if netcfg.get('version') == 1:
1682- return _rename_interfaces(_version_1(netcfg))
1683- elif netcfg.get('version') == 2:
1684- return _rename_interfaces(_version_2(netcfg))
1685+ missing = expected_macs.difference(present_macs)
1686+ LOG.debug('net: waiting for expected net devices: %s', missing)
1687+ for mac in missing:
1688+ # trigger a settle, unless this interface exists
1689+ syspath = sys_dev_path(expected_ifaces[mac])
1690+ settle = partial(util.udevadm_settle, exists=syspath)
1691+ msg = 'Waiting for udev events to settle or %s exists' % syspath
1692+ util.log_time(LOG.debug, msg, func=settle)
1693
1694- raise RuntimeError('Failed to apply network config names. Found bad'
1695- ' network config version: %s' % netcfg.get('version'))
1696+ # update present_macs after settles
1697+ present_macs = get_interfaces_by_mac().keys()
1698+
1699+ msg = 'Not all expected physical devices present: %s' % missing
1700+ LOG.warning(msg)
1701+ if strict:
1702+ raise RuntimeError(msg)
1703+
1704+
1705+def apply_network_config_names(netcfg, strict_present=True, strict_busy=True):
1706+ """read the network config and rename devices accordingly.
1707+ if strict_present is false, then do not raise exception if no devices
1708+ match. if strict_busy is false, then do not raise exception if the
1709+ device cannot be renamed because it is currently configured.
1710+
1711+ renames are only attempted for interfaces of type 'physical'. It is
1712+ expected that the network system will create other devices with the
1713+ correct name in place."""
1714+
1715+ try:
1716+ _rename_interfaces(extract_physdevs(netcfg))
1717+ except RuntimeError as e:
1718+ raise RuntimeError('Failed to apply network config names: %s' % e)
1719
1720
1721 def interface_has_own_mac(ifname, strict=False):
1722@@ -622,6 +659,8 @@ def get_interfaces():
1723 continue
1724 if is_vlan(name):
1725 continue
1726+ if is_bond(name):
1727+ continue
1728 mac = get_interface_mac(name)
1729 # some devices may not have a mac (tun0)
1730 if not mac:
1731@@ -677,7 +716,7 @@ class EphemeralIPv4Network(object):
1732 """
1733
1734 def __init__(self, interface, ip, prefix_or_mask, broadcast, router=None,
1735- connectivity_url=None):
1736+ connectivity_url=None, static_routes=None):
1737 """Setup context manager and validate call signature.
1738
1739 @param interface: Name of the network interface to bring up.
1740@@ -688,6 +727,7 @@ class EphemeralIPv4Network(object):
1741 @param router: Optionally the default gateway IP.
1742 @param connectivity_url: Optionally, a URL to verify if a usable
1743 connection already exists.
1744+ @param static_routes: Optionally a list of static routes from DHCP
1745 """
1746 if not all([interface, ip, prefix_or_mask, broadcast]):
1747 raise ValueError(
1748@@ -704,6 +744,7 @@ class EphemeralIPv4Network(object):
1749 self.ip = ip
1750 self.broadcast = broadcast
1751 self.router = router
1752+ self.static_routes = static_routes
1753 self.cleanup_cmds = [] # List of commands to run to cleanup state.
1754
1755 def __enter__(self):
1756@@ -716,7 +757,21 @@ class EphemeralIPv4Network(object):
1757 return
1758
1759 self._bringup_device()
1760- if self.router:
1761+
1762+ # rfc3442 requires us to ignore the router config *if* classless static
1763+ # routes are provided.
1764+ #
1765+ # https://tools.ietf.org/html/rfc3442
1766+ #
1767+ # If the DHCP server returns both a Classless Static Routes option and
1768+ # a Router option, the DHCP client MUST ignore the Router option.
1769+ #
1770+ # Similarly, if the DHCP server returns both a Classless Static Routes
1771+ # option and a Static Routes option, the DHCP client MUST ignore the
1772+ # Static Routes option.
1773+ if self.static_routes:
1774+ self._bringup_static_routes()
1775+ elif self.router:
1776 self._bringup_router()
1777
1778 def __exit__(self, excp_type, excp_value, excp_traceback):
1779@@ -760,6 +815,20 @@ class EphemeralIPv4Network(object):
1780 ['ip', '-family', 'inet', 'addr', 'del', cidr, 'dev',
1781 self.interface])
1782
1783+ def _bringup_static_routes(self):
1784+ # static_routes = [("169.254.169.254/32", "130.56.248.255"),
1785+ # ("0.0.0.0/0", "130.56.240.1")]
1786+ for net_address, gateway in self.static_routes:
1787+ via_arg = []
1788+ if gateway != "0.0.0.0/0":
1789+ via_arg = ['via', gateway]
1790+ util.subp(
1791+ ['ip', '-4', 'route', 'add', net_address] + via_arg +
1792+ ['dev', self.interface], capture=True)
1793+ self.cleanup_cmds.insert(
1794+ 0, ['ip', '-4', 'route', 'del', net_address] + via_arg +
1795+ ['dev', self.interface])
1796+
1797 def _bringup_router(self):
1798 """Perform the ip commands to fully setup the router if needed."""
1799 # Check if a default route exists and exit if it does
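Reviewer note: `_bringup_static_routes` above turns each `(network, gateway)` pair into an `ip -4 route add` call and prepends the matching `del` to `cleanup_cmds`, so teardown removes routes in reverse order of addition. A standalone sketch of that command construction (no subprocess execution; `eth0` is an assumed interface name, and the `"0.0.0.0/0"` gateway comparison mirrors the diff as written):

```python
def build_route_cmds(static_routes, interface):
    """Return (setup, teardown) ip-route argv lists; teardown is reversed."""
    setup, teardown = [], []
    for net_address, gateway in static_routes:
        via = ['via', gateway] if gateway != "0.0.0.0/0" else []
        setup.append(['ip', '-4', 'route', 'add', net_address] + via +
                     ['dev', interface])
        # insert at the front so routes are deleted in reverse order
        teardown.insert(0, ['ip', '-4', 'route', 'del', net_address] + via +
                        ['dev', interface])
    return setup, teardown

setup, teardown = build_route_cmds(
    [("169.254.169.254/32", "10.0.0.1"), ("0.0.0.0/0", "10.0.0.1")], "eth0")
```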
1800diff --git a/cloudinit/net/cmdline.py b/cloudinit/net/cmdline.py
1801index f89a0f7..556a10f 100755
1802--- a/cloudinit/net/cmdline.py
1803+++ b/cloudinit/net/cmdline.py
1804@@ -177,21 +177,13 @@ def _is_initramfs_netconfig(files, cmdline):
1805 return False
1806
1807
1808-def read_kernel_cmdline_config(files=None, mac_addrs=None, cmdline=None):
1809+def read_initramfs_config(files=None, mac_addrs=None, cmdline=None):
1810 if cmdline is None:
1811 cmdline = util.get_cmdline()
1812
1813 if files is None:
1814 files = _get_klibc_net_cfg_files()
1815
1816- if 'network-config=' in cmdline:
1817- data64 = None
1818- for tok in cmdline.split():
1819- if tok.startswith("network-config="):
1820- data64 = tok.split("=", 1)[1]
1821- if data64:
1822- return util.load_yaml(_b64dgz(data64))
1823-
1824 if not _is_initramfs_netconfig(files, cmdline):
1825 return None
1826
1827@@ -204,4 +196,19 @@ def read_kernel_cmdline_config(files=None, mac_addrs=None, cmdline=None):
1828
1829 return config_from_klibc_net_cfg(files=files, mac_addrs=mac_addrs)
1830
1831+
1832+def read_kernel_cmdline_config(cmdline=None):
1833+ if cmdline is None:
1834+ cmdline = util.get_cmdline()
1835+
1836+ if 'network-config=' in cmdline:
1837+ data64 = None
1838+ for tok in cmdline.split():
1839+ if tok.startswith("network-config="):
1840+ data64 = tok.split("=", 1)[1]
1841+ if data64:
1842+ return util.load_yaml(_b64dgz(data64))
1843+
1844+ return None
1845+
1846 # vi: ts=4 expandtab
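Reviewer note: after this split, `read_kernel_cmdline_config` only handles the `network-config=<base64>` kernel parameter, whose payload `_b64dgz` base64-decodes and transparently gunzips if compressed. A self-contained sketch of that decode path (the helper name `decode_b64_gz` is illustrative, not cloud-init's API):

```python
import base64
import gzip
import io

def decode_b64_gz(data64):
    """Base64-decode; gunzip only if the payload carries the gzip magic."""
    blob = base64.b64decode(data64)
    if blob[:2] == b'\x1f\x8b':  # gzip magic number
        blob = gzip.GzipFile(fileobj=io.BytesIO(blob)).read()
    return blob.decode('utf-8')

payload = base64.b64encode(
    gzip.compress(b"version: 2\nethernets: {}\n")).decode()
decoded = decode_b64_gz(payload)  # → "version: 2\nethernets: {}\n"
```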
1847diff --git a/cloudinit/net/dhcp.py b/cloudinit/net/dhcp.py
1848index c98a97c..1737991 100644
1849--- a/cloudinit/net/dhcp.py
1850+++ b/cloudinit/net/dhcp.py
1851@@ -92,10 +92,14 @@ class EphemeralDHCPv4(object):
1852 nmap = {'interface': 'interface', 'ip': 'fixed-address',
1853 'prefix_or_mask': 'subnet-mask',
1854 'broadcast': 'broadcast-address',
1855+ 'static_routes': 'rfc3442-classless-static-routes',
1856 'router': 'routers'}
1857 kwargs = dict([(k, self.lease.get(v)) for k, v in nmap.items()])
1858 if not kwargs['broadcast']:
1859 kwargs['broadcast'] = bcip(kwargs['prefix_or_mask'], kwargs['ip'])
1860+ if kwargs['static_routes']:
1861+ kwargs['static_routes'] = (
1862+ parse_static_routes(kwargs['static_routes']))
1863 if self.connectivity_url:
1864 kwargs['connectivity_url'] = self.connectivity_url
1865 ephipv4 = EphemeralIPv4Network(**kwargs)
1866@@ -272,4 +276,90 @@ def networkd_get_option_from_leases(keyname, leases_d=None):
1867 return data[keyname]
1868 return None
1869
1870+
1871+def parse_static_routes(rfc3442):
1872+ """ parse rfc3442 format and return a list containing tuple of strings.
1873+
1874+ The tuple is composed of the network_address (including net length) and
1875+ gateway for a parsed static route.
1876+
1877+ @param rfc3442: string in rfc3442 format
1878+ @returns: list of tuple(str, str) for all valid parsed routes until the
1879+ first parsing error.
1880+
1881+ E.g.
1882+    sr = parse_static_routes("32,169,254,169,254,130,56,248,255,0,130,56,240,1")
1883+ sr = [
1884+ ("169.254.169.254/32", "130.56.248.255"), ("0.0.0.0/0", "130.56.240.1")
1885+ ]
1886+
1887+ Python version of isc-dhclient's hooks:
1888+ /etc/dhcp/dhclient-exit-hooks.d/rfc3442-classless-routes
1889+ """
1890+ # raw strings from dhcp lease may end in semi-colon
1891+ rfc3442 = rfc3442.rstrip(";")
1892+ tokens = rfc3442.split(',')
1893+ static_routes = []
1894+
1895+ def _trunc_error(cidr, required, remain):
1896+ msg = ("RFC3442 string malformed. Current route has CIDR of %s "
1897+ "and requires %s significant octets, but only %s remain. "
1898+ "Verify DHCP rfc3442-classless-static-routes value: %s"
1899+ % (cidr, required, remain, rfc3442))
1900+ LOG.error(msg)
1901+
1902+ current_idx = 0
1903+ for idx, tok in enumerate(tokens):
1904+ if idx < current_idx:
1905+ continue
1906+ net_length = int(tok)
1907+ if net_length in range(25, 33):
1908+ req_toks = 9
1909+ if len(tokens[idx:]) < req_toks:
1910+ _trunc_error(net_length, req_toks, len(tokens[idx:]))
1911+ return static_routes
1912+ net_address = ".".join(tokens[idx+1:idx+5])
1913+ gateway = ".".join(tokens[idx+5:idx+req_toks])
1914+ current_idx = idx + req_toks
1915+ elif net_length in range(17, 25):
1916+ req_toks = 8
1917+ if len(tokens[idx:]) < req_toks:
1918+ _trunc_error(net_length, req_toks, len(tokens[idx:]))
1919+ return static_routes
1920+ net_address = ".".join(tokens[idx+1:idx+4] + ["0"])
1921+ gateway = ".".join(tokens[idx+4:idx+req_toks])
1922+ current_idx = idx + req_toks
1923+ elif net_length in range(9, 17):
1924+ req_toks = 7
1925+ if len(tokens[idx:]) < req_toks:
1926+ _trunc_error(net_length, req_toks, len(tokens[idx:]))
1927+ return static_routes
1928+ net_address = ".".join(tokens[idx+1:idx+3] + ["0", "0"])
1929+ gateway = ".".join(tokens[idx+3:idx+req_toks])
1930+ current_idx = idx + req_toks
1931+ elif net_length in range(1, 9):
1932+ req_toks = 6
1933+ if len(tokens[idx:]) < req_toks:
1934+ _trunc_error(net_length, req_toks, len(tokens[idx:]))
1935+ return static_routes
1936+ net_address = ".".join(tokens[idx+1:idx+2] + ["0", "0", "0"])
1937+ gateway = ".".join(tokens[idx+2:idx+req_toks])
1938+ current_idx = idx + req_toks
1939+ elif net_length == 0:
1940+ req_toks = 5
1941+ if len(tokens[idx:]) < req_toks:
1942+ _trunc_error(net_length, req_toks, len(tokens[idx:]))
1943+ return static_routes
1944+ net_address = "0.0.0.0"
1945+ gateway = ".".join(tokens[idx+1:idx+req_toks])
1946+ current_idx = idx + req_toks
1947+ else:
1948+ LOG.error('Parsed invalid net length "%s". Verify DHCP '
1949+ 'rfc3442-classless-static-routes value.', net_length)
1950+ return static_routes
1951+
1952+ static_routes.append(("%s/%s" % (net_address, net_length), gateway))
1953+
1954+ return static_routes
1955+
1956 # vi: ts=4 expandtab
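Reviewer note: the per-branch `req_toks` values in `parse_static_routes` above follow rfc3442's compact encoding: a route with prefix width w carries ceil(w/8) significant destination octets, so each entry needs 1 (width) + ceil(w/8) + 4 (gateway) tokens. A quick check of that arithmetic:

```python
import math

def tokens_required(width):
    """1 width token + ceil(width/8) destination octets + 4 gateway octets."""
    return 1 + math.ceil(width / 8) + 4

# matches the req_toks constants in each branch of parse_static_routes
assert tokens_required(0) == 5    # default route: no destination octets
assert tokens_required(8) == 6
assert tokens_required(16) == 7
assert tokens_required(24) == 8
assert tokens_required(32) == 9
```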
1957diff --git a/cloudinit/net/network_state.py b/cloudinit/net/network_state.py
1958index 3702130..c0c415d 100644
1959--- a/cloudinit/net/network_state.py
1960+++ b/cloudinit/net/network_state.py
1961@@ -596,6 +596,7 @@ class NetworkStateInterpreter(object):
1962 eno1:
1963 match:
1964 macaddress: 00:11:22:33:44:55
1965+ driver: hv_netsvc
1966 wakeonlan: true
1967 dhcp4: true
1968 dhcp6: false
1969@@ -631,15 +632,18 @@ class NetworkStateInterpreter(object):
1970 'type': 'physical',
1971 'name': cfg.get('set-name', eth),
1972 }
1973- mac_address = cfg.get('match', {}).get('macaddress', None)
1974+ match = cfg.get('match', {})
1975+ mac_address = match.get('macaddress', None)
1976 if not mac_address:
1977 LOG.debug('NetworkState Version2: missing "macaddress" info '
1978 'in config entry: %s: %s', eth, str(cfg))
1979- phy_cmd.update({'mac_address': mac_address})
1980-
1981+ phy_cmd['mac_address'] = mac_address
1982+ driver = match.get('driver', None)
1983+ if driver:
1984+ phy_cmd['params'] = {'driver': driver}
1985 for key in ['mtu', 'match', 'wakeonlan']:
1986 if key in cfg:
1987- phy_cmd.update({key: cfg.get(key)})
1988+ phy_cmd[key] = cfg[key]
1989
1990 subnets = self._v2_to_v1_ipcfg(cfg)
1991 if len(subnets) > 0:
1992@@ -673,6 +677,8 @@ class NetworkStateInterpreter(object):
1993 'vlan_id': cfg.get('id'),
1994 'vlan_link': cfg.get('link'),
1995 }
1996+ if 'mtu' in cfg:
1997+ vlan_cmd['mtu'] = cfg['mtu']
1998 subnets = self._v2_to_v1_ipcfg(cfg)
1999 if len(subnets) > 0:
2000 vlan_cmd.update({'subnets': subnets})
2001@@ -722,6 +728,8 @@ class NetworkStateInterpreter(object):
2002 'params': dict((v2key_to_v1[k], v) for k, v in
2003 item_params.get('parameters', {}).items())
2004 }
2005+ if 'mtu' in item_cfg:
2006+ v1_cmd['mtu'] = item_cfg['mtu']
2007 subnets = self._v2_to_v1_ipcfg(item_cfg)
2008 if len(subnets) > 0:
2009 v1_cmd.update({'subnets': subnets})
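Reviewer note: the network_state.py hunks above extend the v2→v1 translation to carry `match.driver` into v1 `params` and to propagate `mtu`. A data-only sketch of that mapping (a standalone helper, not the `NetworkStateInterpreter` API):

```python
def v2_eth_to_v1(name, cfg):
    """Translate a netplan (v2) ethernet entry into a v1 'physical' command."""
    match = cfg.get('match', {})
    phy = {'type': 'physical',
           'name': cfg.get('set-name', name),
           'mac_address': match.get('macaddress')}
    if match.get('driver'):
        phy['params'] = {'driver': match['driver']}  # new in this diff
    if 'mtu' in cfg:
        phy['mtu'] = cfg['mtu']
    return phy

cmd = v2_eth_to_v1('eno1', {
    'match': {'macaddress': '00:11:22:33:44:55', 'driver': 'hv_netsvc'},
    'mtu': 1450})
```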
2010diff --git a/cloudinit/net/sysconfig.py b/cloudinit/net/sysconfig.py
2011index a47da0a..be5dede 100644
2012--- a/cloudinit/net/sysconfig.py
2013+++ b/cloudinit/net/sysconfig.py
2014@@ -284,6 +284,18 @@ class Renderer(renderer.Renderer):
2015 ('bond_mode', "mode=%s"),
2016 ('bond_xmit_hash_policy', "xmit_hash_policy=%s"),
2017 ('bond_miimon', "miimon=%s"),
2018+ ('bond_min_links', "min_links=%s"),
2019+ ('bond_arp_interval', "arp_interval=%s"),
2020+ ('bond_arp_ip_target', "arp_ip_target=%s"),
2021+ ('bond_arp_validate', "arp_validate=%s"),
2022+ ('bond_ad_select', "ad_select=%s"),
2023+ ('bond_num_grat_arp', "num_grat_arp=%s"),
2024+ ('bond_downdelay', "downdelay=%s"),
2025+ ('bond_updelay', "updelay=%s"),
2026+ ('bond_lacp_rate', "lacp_rate=%s"),
2027+ ('bond_fail_over_mac', "fail_over_mac=%s"),
2028+ ('bond_primary', "primary=%s"),
2029+ ('bond_primary_reselect', "primary_reselect=%s"),
2030 ])
2031
2032 bridge_opts_keys = tuple([
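Reviewer note: the new `bond_*` keys above render into a single space-separated `BONDING_OPTS` value, in the order the tuple declares them. A reduced sketch of that rendering (three representative keys only):

```python
# ordered (config_key, template) pairs, mirroring bond_params above
bond_params = (('bond_mode', 'mode=%s'),
               ('bond_miimon', 'miimon=%s'),
               ('bond_lacp_rate', 'lacp_rate=%s'))

def render_bonding_opts(iface_cfg):
    """Join configured bond options in declaration order."""
    return " ".join(tpl % iface_cfg[key]
                    for key, tpl in bond_params if key in iface_cfg)

opts = render_bonding_opts(
    {'bond_mode': '802.3ad', 'bond_miimon': 100, 'bond_lacp_rate': 'fast'})
# → "mode=802.3ad miimon=100 lacp_rate=fast"
```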
2033diff --git a/cloudinit/net/tests/test_dhcp.py b/cloudinit/net/tests/test_dhcp.py
2034index 5139024..91f503c 100644
2035--- a/cloudinit/net/tests/test_dhcp.py
2036+++ b/cloudinit/net/tests/test_dhcp.py
2037@@ -8,7 +8,8 @@ from textwrap import dedent
2038 import cloudinit.net as net
2039 from cloudinit.net.dhcp import (
2040 InvalidDHCPLeaseFileError, maybe_perform_dhcp_discovery,
2041- parse_dhcp_lease_file, dhcp_discovery, networkd_load_leases)
2042+ parse_dhcp_lease_file, dhcp_discovery, networkd_load_leases,
2043+ parse_static_routes)
2044 from cloudinit.util import ensure_file, write_file
2045 from cloudinit.tests.helpers import (
2046 CiTestCase, HttprettyTestCase, mock, populate_dir, wrap_and_call)
2047@@ -64,6 +65,123 @@ class TestParseDHCPLeasesFile(CiTestCase):
2048 self.assertItemsEqual(expected, parse_dhcp_lease_file(lease_file))
2049
2050
2051+class TestDHCPRFC3442(CiTestCase):
2052+
2053+ def test_parse_lease_finds_rfc3442_classless_static_routes(self):
2054+ """parse_dhcp_lease_file returns rfc3442-classless-static-routes."""
2055+ lease_file = self.tmp_path('leases')
2056+ content = dedent("""
2057+ lease {
2058+ interface "wlp3s0";
2059+ fixed-address 192.168.2.74;
2060+ option subnet-mask 255.255.255.0;
2061+ option routers 192.168.2.1;
2062+ option rfc3442-classless-static-routes 0,130,56,240,1;
2063+ renew 4 2017/07/27 18:02:30;
2064+ expire 5 2017/07/28 07:08:15;
2065+ }
2066+ """)
2067+ expected = [
2068+ {'interface': 'wlp3s0', 'fixed-address': '192.168.2.74',
2069+ 'subnet-mask': '255.255.255.0', 'routers': '192.168.2.1',
2070+ 'rfc3442-classless-static-routes': '0,130,56,240,1',
2071+ 'renew': '4 2017/07/27 18:02:30',
2072+ 'expire': '5 2017/07/28 07:08:15'}]
2073+ write_file(lease_file, content)
2074+ self.assertItemsEqual(expected, parse_dhcp_lease_file(lease_file))
2075+
2076+ @mock.patch('cloudinit.net.dhcp.EphemeralIPv4Network')
2077+ @mock.patch('cloudinit.net.dhcp.maybe_perform_dhcp_discovery')
2078+ def test_obtain_lease_parses_static_routes(self, m_maybe, m_ipv4):
2079+        """EphemeralDHCPv4 parses rfc3442 routes for EphemeralIPv4Network"""
2080+ lease = [
2081+ {'interface': 'wlp3s0', 'fixed-address': '192.168.2.74',
2082+ 'subnet-mask': '255.255.255.0', 'routers': '192.168.2.1',
2083+ 'rfc3442-classless-static-routes': '0,130,56,240,1',
2084+ 'renew': '4 2017/07/27 18:02:30',
2085+ 'expire': '5 2017/07/28 07:08:15'}]
2086+ m_maybe.return_value = lease
2087+ eph = net.dhcp.EphemeralDHCPv4()
2088+ eph.obtain_lease()
2089+ expected_kwargs = {
2090+ 'interface': 'wlp3s0',
2091+ 'ip': '192.168.2.74',
2092+ 'prefix_or_mask': '255.255.255.0',
2093+ 'broadcast': '192.168.2.255',
2094+ 'static_routes': [('0.0.0.0/0', '130.56.240.1')],
2095+ 'router': '192.168.2.1'}
2096+ m_ipv4.assert_called_with(**expected_kwargs)
2097+
2098+
2099+class TestDHCPParseStaticRoutes(CiTestCase):
2100+
2101+ with_logs = True
2102+
2103+    def test_parse_static_routes_empty_string(self):
2104+ self.assertEqual([], parse_static_routes(""))
2105+
2106+ def test_parse_static_routes_invalid_input_returns_empty_list(self):
2107+ rfc3442 = "32,169,254,169,254,130,56,248"
2108+ self.assertEqual([], parse_static_routes(rfc3442))
2109+
2110+ def test_parse_static_routes_bogus_width_returns_empty_list(self):
2111+ rfc3442 = "33,169,254,169,254,130,56,248"
2112+ self.assertEqual([], parse_static_routes(rfc3442))
2113+
2114+ def test_parse_static_routes_single_ip(self):
2115+ rfc3442 = "32,169,254,169,254,130,56,248,255"
2116+ self.assertEqual([('169.254.169.254/32', '130.56.248.255')],
2117+ parse_static_routes(rfc3442))
2118+
2119+ def test_parse_static_routes_single_ip_handles_trailing_semicolon(self):
2120+ rfc3442 = "32,169,254,169,254,130,56,248,255;"
2121+ self.assertEqual([('169.254.169.254/32', '130.56.248.255')],
2122+ parse_static_routes(rfc3442))
2123+
2124+ def test_parse_static_routes_default_route(self):
2125+ rfc3442 = "0,130,56,240,1"
2126+ self.assertEqual([('0.0.0.0/0', '130.56.240.1')],
2127+ parse_static_routes(rfc3442))
2128+
2129+ def test_parse_static_routes_class_c_b_a(self):
2130+ class_c = "24,192,168,74,192,168,0,4"
2131+ class_b = "16,172,16,172,16,0,4"
2132+ class_a = "8,10,10,0,0,4"
2133+ rfc3442 = ",".join([class_c, class_b, class_a])
2134+ self.assertEqual(sorted([
2135+ ("192.168.74.0/24", "192.168.0.4"),
2136+ ("172.16.0.0/16", "172.16.0.4"),
2137+ ("10.0.0.0/8", "10.0.0.4")
2138+ ]), sorted(parse_static_routes(rfc3442)))
2139+
2140+ def test_parse_static_routes_logs_error_truncated(self):
2141+ bad_rfc3442 = {
2142+ "class_c": "24,169,254,169,10",
2143+ "class_b": "16,172,16,10",
2144+ "class_a": "8,10,10",
2145+ "gateway": "0,0",
2146+ "netlen": "33,0",
2147+ }
2148+ for rfc3442 in bad_rfc3442.values():
2149+ self.assertEqual([], parse_static_routes(rfc3442))
2150+
2151+ logs = self.logs.getvalue()
2152+ self.assertEqual(len(bad_rfc3442.keys()), len(logs.splitlines()))
2153+
2154+ def test_parse_static_routes_returns_valid_routes_until_parse_err(self):
2155+ class_c = "24,192,168,74,192,168,0,4"
2156+ class_b = "16,172,16,172,16,0,4"
2157+ class_a_error = "8,10,10,0,0"
2158+ rfc3442 = ",".join([class_c, class_b, class_a_error])
2159+ self.assertEqual(sorted([
2160+ ("192.168.74.0/24", "192.168.0.4"),
2161+ ("172.16.0.0/16", "172.16.0.4"),
2162+ ]), sorted(parse_static_routes(rfc3442)))
2163+
2164+ logs = self.logs.getvalue()
2165+ self.assertIn(rfc3442, logs.splitlines()[0])
2166+
2167+
2168 class TestDHCPDiscoveryClean(CiTestCase):
2169 with_logs = True
2170
2171diff --git a/cloudinit/net/tests/test_init.py b/cloudinit/net/tests/test_init.py
2172index 6d2affe..d2e38f0 100644
2173--- a/cloudinit/net/tests/test_init.py
2174+++ b/cloudinit/net/tests/test_init.py
2175@@ -212,9 +212,9 @@ class TestGenerateFallbackConfig(CiTestCase):
2176 mac = 'aa:bb:cc:aa:bb:cc'
2177 write_file(os.path.join(self.sysdir, 'eth1', 'address'), mac)
2178 expected = {
2179- 'config': [{'type': 'physical', 'mac_address': mac,
2180- 'name': 'eth1', 'subnets': [{'type': 'dhcp'}]}],
2181- 'version': 1}
2182+ 'ethernets': {'eth1': {'match': {'macaddress': mac},
2183+ 'dhcp4': True, 'set-name': 'eth1'}},
2184+ 'version': 2}
2185 self.assertEqual(expected, net.generate_fallback_config())
2186
2187 def test_generate_fallback_finds_dormant_eth_with_mac(self):
2188@@ -223,9 +223,9 @@ class TestGenerateFallbackConfig(CiTestCase):
2189 mac = 'aa:bb:cc:aa:bb:cc'
2190 write_file(os.path.join(self.sysdir, 'eth0', 'address'), mac)
2191 expected = {
2192- 'config': [{'type': 'physical', 'mac_address': mac,
2193- 'name': 'eth0', 'subnets': [{'type': 'dhcp'}]}],
2194- 'version': 1}
2195+ 'ethernets': {'eth0': {'match': {'macaddress': mac}, 'dhcp4': True,
2196+ 'set-name': 'eth0'}},
2197+ 'version': 2}
2198 self.assertEqual(expected, net.generate_fallback_config())
2199
2200 def test_generate_fallback_finds_eth_by_operstate(self):
2201@@ -233,9 +233,10 @@ class TestGenerateFallbackConfig(CiTestCase):
2202 mac = 'aa:bb:cc:aa:bb:cc'
2203 write_file(os.path.join(self.sysdir, 'eth0', 'address'), mac)
2204 expected = {
2205- 'config': [{'type': 'physical', 'mac_address': mac,
2206- 'name': 'eth0', 'subnets': [{'type': 'dhcp'}]}],
2207- 'version': 1}
2208+ 'ethernets': {
2209+ 'eth0': {'dhcp4': True, 'match': {'macaddress': mac},
2210+ 'set-name': 'eth0'}},
2211+ 'version': 2}
2212 valid_operstates = ['dormant', 'down', 'lowerlayerdown', 'unknown']
2213 for state in valid_operstates:
2214 write_file(os.path.join(self.sysdir, 'eth0', 'operstate'), state)
2215@@ -549,6 +550,45 @@ class TestEphemeralIPV4Network(CiTestCase):
2216 self.assertEqual(expected_setup_calls, m_subp.call_args_list)
2217 m_subp.assert_has_calls(expected_teardown_calls)
2218
2219+ def test_ephemeral_ipv4_network_with_rfc3442_static_routes(self, m_subp):
2220+ params = {
2221+ 'interface': 'eth0', 'ip': '192.168.2.2',
2222+ 'prefix_or_mask': '255.255.255.0', 'broadcast': '192.168.2.255',
2223+ 'static_routes': [('169.254.169.254/32', '192.168.2.1'),
2224+ ('0.0.0.0/0', '192.168.2.1')],
2225+ 'router': '192.168.2.1'}
2226+ expected_setup_calls = [
2227+ mock.call(
2228+ ['ip', '-family', 'inet', 'addr', 'add', '192.168.2.2/24',
2229+ 'broadcast', '192.168.2.255', 'dev', 'eth0'],
2230+ capture=True, update_env={'LANG': 'C'}),
2231+ mock.call(
2232+ ['ip', '-family', 'inet', 'link', 'set', 'dev', 'eth0', 'up'],
2233+ capture=True),
2234+ mock.call(
2235+ ['ip', '-4', 'route', 'add', '169.254.169.254/32',
2236+ 'via', '192.168.2.1', 'dev', 'eth0'], capture=True),
2237+ mock.call(
2238+ ['ip', '-4', 'route', 'add', '0.0.0.0/0',
2239+ 'via', '192.168.2.1', 'dev', 'eth0'], capture=True)]
2240+ expected_teardown_calls = [
2241+ mock.call(
2242+ ['ip', '-4', 'route', 'del', '0.0.0.0/0',
2243+ 'via', '192.168.2.1', 'dev', 'eth0'], capture=True),
2244+ mock.call(
2245+ ['ip', '-4', 'route', 'del', '169.254.169.254/32',
2246+ 'via', '192.168.2.1', 'dev', 'eth0'], capture=True),
2247+ mock.call(
2248+ ['ip', '-family', 'inet', 'link', 'set', 'dev',
2249+ 'eth0', 'down'], capture=True),
2250+ mock.call(
2251+ ['ip', '-family', 'inet', 'addr', 'del',
2252+ '192.168.2.2/24', 'dev', 'eth0'], capture=True)
2253+ ]
2254+ with net.EphemeralIPv4Network(**params):
2255+ self.assertEqual(expected_setup_calls, m_subp.call_args_list)
2256+ m_subp.assert_has_calls(expected_setup_calls + expected_teardown_calls)
2257+
2258
2259 class TestApplyNetworkCfgNames(CiTestCase):
2260 V1_CONFIG = textwrap.dedent("""\
2261@@ -669,3 +709,216 @@ class TestHasURLConnectivity(HttprettyTestCase):
2262 httpretty.register_uri(httpretty.GET, self.url, body={}, status=404)
2263 self.assertFalse(
2264 net.has_url_connectivity(self.url), 'Expected False on url fail')
2265+
2266+
2267+def _mk_v1_phys(mac, name, driver, device_id):
2268+ v1_cfg = {'type': 'physical', 'name': name, 'mac_address': mac}
2269+ params = {}
2270+ if driver:
2271+ params.update({'driver': driver})
2272+ if device_id:
2273+ params.update({'device_id': device_id})
2274+
2275+ if params:
2276+ v1_cfg.update({'params': params})
2277+
2278+ return v1_cfg
2279+
2280+
2281+def _mk_v2_phys(mac, name, driver=None, device_id=None):
2282+ v2_cfg = {'set-name': name, 'match': {'macaddress': mac}}
2283+ if driver:
2284+ v2_cfg['match'].update({'driver': driver})
2285+ if device_id:
2286+ v2_cfg['match'].update({'device_id': device_id})
2287+
2288+ return v2_cfg
2289+
2290+
2291+class TestExtractPhysdevs(CiTestCase):
2292+
2293+ def setUp(self):
2294+ super(TestExtractPhysdevs, self).setUp()
2295+ self.add_patch('cloudinit.net.device_driver', 'm_driver')
2296+ self.add_patch('cloudinit.net.device_devid', 'm_devid')
2297+
2298+ def test_extract_physdevs_looks_up_driver_v1(self):
2299+ driver = 'virtio'
2300+ self.m_driver.return_value = driver
2301+ physdevs = [
2302+ ['aa:bb:cc:dd:ee:ff', 'eth0', None, '0x1000'],
2303+ ]
2304+ netcfg = {
2305+ 'version': 1,
2306+ 'config': [_mk_v1_phys(*args) for args in physdevs],
2307+ }
2308+ # insert the driver value for verification
2309+ physdevs[0][2] = driver
2310+ self.assertEqual(sorted(physdevs),
2311+ sorted(net.extract_physdevs(netcfg)))
2312+ self.m_driver.assert_called_with('eth0')
2313+
2314+ def test_extract_physdevs_looks_up_driver_v2(self):
2315+ driver = 'virtio'
2316+ self.m_driver.return_value = driver
2317+ physdevs = [
2318+ ['aa:bb:cc:dd:ee:ff', 'eth0', None, '0x1000'],
2319+ ]
2320+ netcfg = {
2321+ 'version': 2,
2322+ 'ethernets': {args[1]: _mk_v2_phys(*args) for args in physdevs},
2323+ }
2324+ # insert the driver value for verification
2325+ physdevs[0][2] = driver
2326+ self.assertEqual(sorted(physdevs),
2327+ sorted(net.extract_physdevs(netcfg)))
2328+ self.m_driver.assert_called_with('eth0')
2329+
2330+ def test_extract_physdevs_looks_up_devid_v1(self):
2331+ devid = '0x1000'
2332+ self.m_devid.return_value = devid
2333+ physdevs = [
2334+ ['aa:bb:cc:dd:ee:ff', 'eth0', 'virtio', None],
2335+ ]
2336+ netcfg = {
2337+ 'version': 1,
2338+ 'config': [_mk_v1_phys(*args) for args in physdevs],
2339+ }
2340+ # insert the devid value for verification
2341+ physdevs[0][3] = devid
2342+ self.assertEqual(sorted(physdevs),
2343+ sorted(net.extract_physdevs(netcfg)))
2344+ self.m_devid.assert_called_with('eth0')
2345+
2346+ def test_extract_physdevs_looks_up_devid_v2(self):
2347+ devid = '0x1000'
2348+ self.m_devid.return_value = devid
2349+ physdevs = [
2350+ ['aa:bb:cc:dd:ee:ff', 'eth0', 'virtio', None],
2351+ ]
2352+ netcfg = {
2353+ 'version': 2,
2354+ 'ethernets': {args[1]: _mk_v2_phys(*args) for args in physdevs},
2355+ }
2356+ # insert the devid value for verification
2357+ physdevs[0][3] = devid
2358+ self.assertEqual(sorted(physdevs),
2359+ sorted(net.extract_physdevs(netcfg)))
2360+ self.m_devid.assert_called_with('eth0')
2361+
2362+ def test_get_v1_type_physical(self):
2363+ physdevs = [
2364+ ['aa:bb:cc:dd:ee:ff', 'eth0', 'virtio', '0x1000'],
2365+ ['00:11:22:33:44:55', 'ens3', 'e1000', '0x1643'],
2366+ ['09:87:65:43:21:10', 'ens0p1', 'mlx4_core', '0:0:1000'],
2367+ ]
2368+ netcfg = {
2369+ 'version': 1,
2370+ 'config': [_mk_v1_phys(*args) for args in physdevs],
2371+ }
2372+ self.assertEqual(sorted(physdevs),
2373+ sorted(net.extract_physdevs(netcfg)))
2374+
2375+ def test_get_v2_type_physical(self):
2376+ physdevs = [
2377+ ['aa:bb:cc:dd:ee:ff', 'eth0', 'virtio', '0x1000'],
2378+ ['00:11:22:33:44:55', 'ens3', 'e1000', '0x1643'],
2379+ ['09:87:65:43:21:10', 'ens0p1', 'mlx4_core', '0:0:1000'],
2380+ ]
2381+ netcfg = {
2382+ 'version': 2,
2383+ 'ethernets': {args[1]: _mk_v2_phys(*args) for args in physdevs},
2384+ }
2385+ self.assertEqual(sorted(physdevs),
2386+ sorted(net.extract_physdevs(netcfg)))
2387+
2388+ def test_get_v2_type_physical_skips_if_no_set_name(self):
2389+ netcfg = {
2390+ 'version': 2,
2391+ 'ethernets': {
2392+ 'ens3': {
2393+ 'match': {'macaddress': '00:11:22:33:44:55'},
2394+ }
2395+ }
2396+ }
2397+ self.assertEqual([], net.extract_physdevs(netcfg))
2398+
2399+ def test_runtime_error_on_unknown_netcfg_version(self):
2400+ with self.assertRaises(RuntimeError):
2401+ net.extract_physdevs({'version': 3, 'awesome_config': []})
2402+
2403+
2404+class TestWaitForPhysdevs(CiTestCase):
2405+
2406+ with_logs = True
2407+
2408+ def setUp(self):
2409+ super(TestWaitForPhysdevs, self).setUp()
2410+ self.add_patch('cloudinit.net.get_interfaces_by_mac',
2411+ 'm_get_iface_mac')
2412+ self.add_patch('cloudinit.util.udevadm_settle', 'm_udev_settle')
2413+
2414+ def test_wait_for_physdevs_skips_settle_if_all_present(self):
2415+ physdevs = [
2416+ ['aa:bb:cc:dd:ee:ff', 'eth0', 'virtio', '0x1000'],
2417+ ['00:11:22:33:44:55', 'ens3', 'e1000', '0x1643'],
2418+ ]
2419+ netcfg = {
2420+ 'version': 2,
2421+ 'ethernets': {args[1]: _mk_v2_phys(*args)
2422+ for args in physdevs},
2423+ }
2424+ self.m_get_iface_mac.side_effect = iter([
2425+ {'aa:bb:cc:dd:ee:ff': 'eth0',
2426+ '00:11:22:33:44:55': 'ens3'},
2427+ ])
2428+ net.wait_for_physdevs(netcfg)
2429+ self.assertEqual(0, self.m_udev_settle.call_count)
2430+
2431+ def test_wait_for_physdevs_calls_udev_settle_on_missing(self):
2432+ physdevs = [
2433+ ['aa:bb:cc:dd:ee:ff', 'eth0', 'virtio', '0x1000'],
2434+ ['00:11:22:33:44:55', 'ens3', 'e1000', '0x1643'],
2435+ ]
2436+ netcfg = {
2437+ 'version': 2,
2438+ 'ethernets': {args[1]: _mk_v2_phys(*args)
2439+ for args in physdevs},
2440+ }
2441+ self.m_get_iface_mac.side_effect = iter([
2442+ {'aa:bb:cc:dd:ee:ff': 'eth0'}, # first call ens3 is missing
2443+ {'aa:bb:cc:dd:ee:ff': 'eth0',
2444+ '00:11:22:33:44:55': 'ens3'}, # second call has both
2445+ ])
2446+ net.wait_for_physdevs(netcfg)
2447+ self.m_udev_settle.assert_called_with(exists=net.sys_dev_path('ens3'))
2448+
2449+ def test_wait_for_physdevs_raise_runtime_error_if_missing_and_strict(self):
2450+ physdevs = [
2451+ ['aa:bb:cc:dd:ee:ff', 'eth0', 'virtio', '0x1000'],
2452+ ['00:11:22:33:44:55', 'ens3', 'e1000', '0x1643'],
2453+ ]
2454+ netcfg = {
2455+ 'version': 2,
2456+ 'ethernets': {args[1]: _mk_v2_phys(*args)
2457+ for args in physdevs},
2458+ }
2459+ self.m_get_iface_mac.return_value = {}
2460+ with self.assertRaises(RuntimeError):
2461+ net.wait_for_physdevs(netcfg)
2462+
2463+ self.assertEqual(5 * len(physdevs), self.m_udev_settle.call_count)
2464+
2465+ def test_wait_for_physdevs_no_raise_if_not_strict(self):
2466+ physdevs = [
2467+ ['aa:bb:cc:dd:ee:ff', 'eth0', 'virtio', '0x1000'],
2468+ ['00:11:22:33:44:55', 'ens3', 'e1000', '0x1643'],
2469+ ]
2470+ netcfg = {
2471+ 'version': 2,
2472+ 'ethernets': {args[1]: _mk_v2_phys(*args)
2473+ for args in physdevs},
2474+ }
2475+ self.m_get_iface_mac.return_value = {}
2476+ net.wait_for_physdevs(netcfg, strict=False)
2477+ self.assertEqual(5 * len(physdevs), self.m_udev_settle.call_count)
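The TestWaitForPhysdevs cases above pin down the retry contract: skip udevadm settle entirely when every expected MAC is already present, settle once per check while devices are missing, stop after five settles per expected device, and raise only in strict mode. A minimal stand-alone sketch of that loop follows; the helper names (`get_interfaces_by_mac`, `udevadm_settle`) are stand-ins for the cloudinit functions the tests patch, not the real implementation.

```python
def wait_for_physdevs(expected_macs, get_interfaces_by_mac,
                      udevadm_settle, max_settles=5, strict=True):
    """Wait until every MAC in expected_macs shows up.

    Mirrors the behavior the tests above assert: zero settle calls when
    all devices are present, up to max_settles settles per expected
    device otherwise, and RuntimeError only when strict.
    """
    missing = set(expected_macs)
    for _ in range(max_settles * len(expected_macs)):
        # mac -> interface-name mapping of currently present devices
        present = get_interfaces_by_mac()
        missing = set(expected_macs) - set(present)
        if not missing:
            return
        udevadm_settle()
    if strict and missing:
        raise RuntimeError(
            'Not all expected physical devices present: %s'
            % sorted(missing))
```

With all devices present on the first check the settle helper is never invoked, matching `test_wait_for_physdevs_skips_settle_if_all_present`.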
2478diff --git a/cloudinit/settings.py b/cloudinit/settings.py
2479index b1ebaad..2060d81 100644
2480--- a/cloudinit/settings.py
2481+++ b/cloudinit/settings.py
2482@@ -39,6 +39,7 @@ CFG_BUILTIN = {
2483 'Hetzner',
2484 'IBMCloud',
2485 'Oracle',
2486+ 'Exoscale',
2487 # At the end to act as a 'catch' when none of the above work...
2488 'None',
2489 ],
2490diff --git a/cloudinit/sources/DataSourceAzure.py b/cloudinit/sources/DataSourceAzure.py
2491index b7440c1..4984fa8 100755
2492--- a/cloudinit/sources/DataSourceAzure.py
2493+++ b/cloudinit/sources/DataSourceAzure.py
2494@@ -26,9 +26,14 @@ from cloudinit.url_helper import UrlError, readurl, retry_on_url_exc
2495 from cloudinit import util
2496 from cloudinit.reporting import events
2497
2498-from cloudinit.sources.helpers.azure import (azure_ds_reporter,
2499- azure_ds_telemetry_reporter,
2500- get_metadata_from_fabric)
2501+from cloudinit.sources.helpers.azure import (
2502+ azure_ds_reporter,
2503+ azure_ds_telemetry_reporter,
2504+ get_metadata_from_fabric,
2505+ get_boot_telemetry,
2506+ get_system_info,
2507+ report_diagnostic_event,
2508+ EphemeralDHCPv4WithReporting)
2509
2510 LOG = logging.getLogger(__name__)
2511
2512@@ -354,7 +359,7 @@ class DataSourceAzure(sources.DataSource):
2513 bname = str(pk['fingerprint'] + ".crt")
2514 fp_files += [os.path.join(ddir, bname)]
2515 LOG.debug("ssh authentication: "
2516- "using fingerprint from fabirc")
2517+ "using fingerprint from fabric")
2518
2519 with events.ReportEventStack(
2520 name="waiting-for-ssh-public-key",
2521@@ -419,12 +424,17 @@ class DataSourceAzure(sources.DataSource):
2522 ret = load_azure_ds_dir(cdev)
2523
2524 except NonAzureDataSource:
2525+ report_diagnostic_event(
2526+ "Did not find Azure data source in %s" % cdev)
2527 continue
2528 except BrokenAzureDataSource as exc:
2529 msg = 'BrokenAzureDataSource: %s' % exc
2530+ report_diagnostic_event(msg)
2531 raise sources.InvalidMetaDataException(msg)
2532 except util.MountFailedError:
2533- LOG.warning("%s was not mountable", cdev)
2534+ msg = '%s was not mountable' % cdev
2535+ report_diagnostic_event(msg)
2536+ LOG.warning(msg)
2537 continue
2538
2539 perform_reprovision = reprovision or self._should_reprovision(ret)
2540@@ -432,6 +442,7 @@ class DataSourceAzure(sources.DataSource):
2541 if util.is_FreeBSD():
2542 msg = "Free BSD is not supported for PPS VMs"
2543 LOG.error(msg)
2544+ report_diagnostic_event(msg)
2545 raise sources.InvalidMetaDataException(msg)
2546 ret = self._reprovision()
2547 imds_md = get_metadata_from_imds(
2548@@ -450,7 +461,9 @@ class DataSourceAzure(sources.DataSource):
2549 break
2550
2551 if not found:
2552- raise sources.InvalidMetaDataException('No Azure metadata found')
2553+ msg = 'No Azure metadata found'
2554+ report_diagnostic_event(msg)
2555+ raise sources.InvalidMetaDataException(msg)
2556
2557 if found == ddir:
2558 LOG.debug("using files cached in %s", ddir)
2559@@ -469,9 +482,14 @@ class DataSourceAzure(sources.DataSource):
2560 self._report_ready(lease=self._ephemeral_dhcp_ctx.lease)
2561 self._ephemeral_dhcp_ctx.clean_network() # Teardown ephemeral
2562 else:
2563- with EphemeralDHCPv4() as lease:
2564- self._report_ready(lease=lease)
2565-
2566+ try:
2567+ with EphemeralDHCPv4WithReporting(
2568+ azure_ds_reporter) as lease:
2569+ self._report_ready(lease=lease)
2570+ except Exception as e:
2571+ report_diagnostic_event(
2572+ "exception while reporting ready: %s" % e)
2573+ raise
2574 return crawled_data
2575
2576 def _is_platform_viable(self):
2577@@ -493,6 +511,16 @@ class DataSourceAzure(sources.DataSource):
2578 if not self._is_platform_viable():
2579 return False
2580 try:
2581+ get_boot_telemetry()
2582+ except Exception as e:
2583+ LOG.warning("Failed to get boot telemetry: %s", e)
2584+
2585+ try:
2586+ get_system_info()
2587+ except Exception as e:
2588+ LOG.warning("Failed to get system information: %s", e)
2589+
2590+ try:
2591 crawled_data = util.log_time(
2592 logfunc=LOG.debug, msg='Crawl of metadata service',
2593 func=self.crawl_metadata)
2594@@ -551,27 +579,55 @@ class DataSourceAzure(sources.DataSource):
2595 headers = {"Metadata": "true"}
2596 nl_sock = None
2597 report_ready = bool(not os.path.isfile(REPORTED_READY_MARKER_FILE))
2598+ self.imds_logging_threshold = 1
2599+ self.imds_poll_counter = 1
2600+ dhcp_attempts = 0
2601+ vnet_switched = False
2602+ return_val = None
2603
2604 def exc_cb(msg, exception):
2605 if isinstance(exception, UrlError) and exception.code == 404:
2606+ if self.imds_poll_counter == self.imds_logging_threshold:
2607+ # Reducing the logging frequency as we are polling IMDS
2608+ self.imds_logging_threshold *= 2
2609+ LOG.debug("Call to IMDS with arguments %s failed "
2610+ "with status code %s after %s retries",
2611+ msg, exception.code, self.imds_poll_counter)
2612+ LOG.debug("Backing off logging threshold for the same "
2613+ "exception to %d", self.imds_logging_threshold)
2614+ self.imds_poll_counter += 1
2615 return True
2616+
2617 # If we get an exception while trying to call IMDS, we
2618 # call DHCP and setup the ephemeral network to acquire the new IP.
2619+ LOG.debug("Call to IMDS with arguments %s failed with "
2620+ "status code %s", msg, exception.code)
2621+ report_diagnostic_event("polling IMDS failed with exception %s"
2622+ % exception.code)
2623 return False
2624
2625 LOG.debug("Wait for vnetswitch to happen")
2626 while True:
2627 try:
2628- # Save our EphemeralDHCPv4 context so we avoid repeated dhcp
2629- self._ephemeral_dhcp_ctx = EphemeralDHCPv4()
2630- lease = self._ephemeral_dhcp_ctx.obtain_lease()
2631+ # Save our EphemeralDHCPv4 context to avoid repeated dhcp
2632+ with events.ReportEventStack(
2633+ name="obtain-dhcp-lease",
2634+ description="obtain dhcp lease",
2635+ parent=azure_ds_reporter):
2636+ self._ephemeral_dhcp_ctx = EphemeralDHCPv4()
2637+ lease = self._ephemeral_dhcp_ctx.obtain_lease()
2638+
2639+ if vnet_switched:
2640+ dhcp_attempts += 1
2641 if report_ready:
2642 try:
2643 nl_sock = netlink.create_bound_netlink_socket()
2644 except netlink.NetlinkCreateSocketError as e:
2645+ report_diagnostic_event(e)
2646 LOG.warning(e)
2647 self._ephemeral_dhcp_ctx.clean_network()
2648- return
2649+ break
2650+
2651 path = REPORTED_READY_MARKER_FILE
2652 LOG.info(
2653 "Creating a marker file to report ready: %s", path)
2654@@ -579,17 +635,33 @@ class DataSourceAzure(sources.DataSource):
2655 pid=os.getpid(), time=time()))
2656 self._report_ready(lease=lease)
2657 report_ready = False
2658- try:
2659- netlink.wait_for_media_disconnect_connect(
2660- nl_sock, lease['interface'])
2661- except AssertionError as error:
2662- LOG.error(error)
2663- return
2664+
2665+ with events.ReportEventStack(
2666+ name="wait-for-media-disconnect-connect",
2667+ description="wait for vnet switch",
2668+ parent=azure_ds_reporter):
2669+ try:
2670+ netlink.wait_for_media_disconnect_connect(
2671+ nl_sock, lease['interface'])
2672+ except AssertionError as error:
2673+ report_diagnostic_event(error)
2674+ LOG.error(error)
2675+ break
2676+
2677+ vnet_switched = True
2678 self._ephemeral_dhcp_ctx.clean_network()
2679 else:
2680- return readurl(url, timeout=IMDS_TIMEOUT_IN_SECONDS,
2681- headers=headers, exception_cb=exc_cb,
2682- infinite=True, log_req_resp=False).contents
2683+ with events.ReportEventStack(
2684+ name="get-reprovision-data-from-imds",
2685+ description="get reprovision data from imds",
2686+ parent=azure_ds_reporter):
2687+ return_val = readurl(url,
2688+ timeout=IMDS_TIMEOUT_IN_SECONDS,
2689+ headers=headers,
2690+ exception_cb=exc_cb,
2691+ infinite=True,
2692+ log_req_resp=False).contents
2693+ break
2694 except UrlError:
2695 # Teardown our EphemeralDHCPv4 context on failure as we retry
2696 self._ephemeral_dhcp_ctx.clean_network()
2697@@ -598,6 +670,14 @@ class DataSourceAzure(sources.DataSource):
2698 if nl_sock:
2699 nl_sock.close()
2700
2701+ if vnet_switched:
2702+ report_diagnostic_event("attempted dhcp %d times after reuse" %
2703+ dhcp_attempts)
2704+ report_diagnostic_event("polled imds %d times after reuse" %
2705+ self.imds_poll_counter)
2706+
2707+ return return_val
2708+
2709 @azure_ds_telemetry_reporter
2710 def _report_ready(self, lease):
2711 """Tells the fabric provisioning has completed """
2712@@ -666,9 +746,12 @@ class DataSourceAzure(sources.DataSource):
2713 self.ds_cfg['agent_command'])
2714 try:
2715 fabric_data = metadata_func()
2716- except Exception:
2717+ except Exception as e:
2718+ report_diagnostic_event(
2719+ "Error communicating with Azure fabric; You may experience "
2720+ "connectivity issues: %s" % e)
2721 LOG.warning(
2722- "Error communicating with Azure fabric; You may experience."
2723+ "Error communicating with Azure fabric; You may experience "
2724 "connectivity issues.", exc_info=True)
2725 return False
2726
2727@@ -684,6 +767,11 @@ class DataSourceAzure(sources.DataSource):
2728 return
2729
2730 @property
2731+ def availability_zone(self):
2732+ return self.metadata.get(
2733+ 'imds', {}).get('compute', {}).get('platformFaultDomain')
2734+
2735+ @property
2736 def network_config(self):
2737 """Generate a network config like net.generate_fallback_network() with
2738 the following exceptions.
2739@@ -701,6 +789,10 @@ class DataSourceAzure(sources.DataSource):
2740 self._network_config = parse_network_config(nc_src)
2741 return self._network_config
2742
2743+ @property
2744+ def region(self):
2745+ return self.metadata.get('imds', {}).get('compute', {}).get('location')
2746+
2747
2748 def _partitions_on_device(devpath, maxnum=16):
2749 # return a list of tuples (ptnum, path) for each part on devpath
2750@@ -1018,7 +1110,9 @@ def read_azure_ovf(contents):
2751 try:
2752 dom = minidom.parseString(contents)
2753 except Exception as e:
2754- raise BrokenAzureDataSource("Invalid ovf-env.xml: %s" % e)
2755+ error_str = "Invalid ovf-env.xml: %s" % e
2756+ report_diagnostic_event(error_str)
2757+ raise BrokenAzureDataSource(error_str)
2758
2759 results = find_child(dom.documentElement,
2760 lambda n: n.localName == "ProvisioningSection")
2761@@ -1232,7 +1326,7 @@ def parse_network_config(imds_metadata):
2762 privateIpv4 = addr4['privateIpAddress']
2763 if privateIpv4:
2764 if dev_config.get('dhcp4', False):
2765- # Append static address config for nic > 1
2766+ # Append static address config for ip > 1
2767 netPrefix = intf['ipv4']['subnet'][0].get(
2768 'prefix', '24')
2769 if not dev_config.get('addresses'):
2770@@ -1242,6 +1336,11 @@ def parse_network_config(imds_metadata):
2771 ip=privateIpv4, prefix=netPrefix))
2772 else:
2773 dev_config['dhcp4'] = True
2774+ # non-primary interfaces should have a higher
2775+ # route-metric (cost) so default routes prefer
2776+ # primary nic due to lower route-metric value
2777+ dev_config['dhcp4-overrides'] = {
2778+ 'route-metric': (idx + 1) * 100}
2779 for addr6 in intf['ipv6']['ipAddress']:
2780 privateIpv6 = addr6['privateIpAddress']
2781 if privateIpv6:
2782@@ -1285,8 +1384,13 @@ def get_metadata_from_imds(fallback_nic, retries):
2783 if net.is_up(fallback_nic):
2784 return util.log_time(**kwargs)
2785 else:
2786- with EphemeralDHCPv4(fallback_nic):
2787- return util.log_time(**kwargs)
2788+ try:
2789+ with EphemeralDHCPv4WithReporting(
2790+ azure_ds_reporter, fallback_nic):
2791+ return util.log_time(**kwargs)
2792+ except Exception as e:
2793+ report_diagnostic_event("exception while getting metadata: %s" % e)
2794+ raise
2795
2796
2797 @azure_ds_telemetry_reporter
2798@@ -1299,11 +1403,14 @@ def _get_metadata_from_imds(retries):
2799 url, timeout=IMDS_TIMEOUT_IN_SECONDS, headers=headers,
2800 retries=retries, exception_cb=retry_on_url_exc)
2801 except Exception as e:
2802- LOG.debug('Ignoring IMDS instance metadata: %s', e)
2803+ msg = 'Ignoring IMDS instance metadata: %s' % e
2804+ report_diagnostic_event(msg)
2805+ LOG.debug(msg)
2806 return {}
2807 try:
2808 return util.load_json(str(response))
2809- except json.decoder.JSONDecodeError:
2810+ except json.decoder.JSONDecodeError as e:
2811+ report_diagnostic_event('non-json imds response: %s' % e)
2812 LOG.warning(
2813 'Ignoring non-json IMDS instance metadata: %s', str(response))
2814 return {}
2815@@ -1356,8 +1463,10 @@ def _is_platform_viable(seed_dir):
2816 asset_tag = util.read_dmi_data('chassis-asset-tag')
2817 if asset_tag == AZURE_CHASSIS_ASSET_TAG:
2818 return True
2819- LOG.debug("Non-Azure DMI asset tag '%s' discovered.", asset_tag)
2820- evt.description = "Non-Azure DMI asset tag '%s' discovered.", asset_tag
2821+ msg = "Non-Azure DMI asset tag '%s' discovered." % asset_tag
2822+ LOG.debug(msg)
2823+ evt.description = msg
2824+ report_diagnostic_event(msg)
2825 if os.path.exists(os.path.join(seed_dir, 'ovf-env.xml')):
2826 return True
2827 return False
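The `imds_logging_threshold` / `imds_poll_counter` pair added to the Azure `exc_cb` above throttles the 404 log spam while polling IMDS: it logs on polls 1, 2, 4, 8, ... by doubling the threshold each time it fires, so an indefinite polling loop produces only logarithmically many log lines. A self-contained sketch of just that throttling scheme (the class and method names here are illustrative, not part of the diff):

```python
class ThrottledLogger(object):
    """Decide whether to log on each poll, doubling the gap each time."""

    def __init__(self):
        self.threshold = 1  # next poll number that should be logged
        self.counter = 1    # current poll number

    def should_log(self):
        """Return True on polls 1, 2, 4, 8, ...; back off the threshold."""
        log_now = self.counter == self.threshold
        if log_now:
            self.threshold *= 2
        self.counter += 1
        return log_now
```

Over sixteen polls this yields exactly five log events, versus sixteen with unconditional logging.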
2828diff --git a/cloudinit/sources/DataSourceCloudSigma.py b/cloudinit/sources/DataSourceCloudSigma.py
2829index 2955d3f..df88f67 100644
2830--- a/cloudinit/sources/DataSourceCloudSigma.py
2831+++ b/cloudinit/sources/DataSourceCloudSigma.py
2832@@ -42,12 +42,8 @@ class DataSourceCloudSigma(sources.DataSource):
2833 if not sys_product_name:
2834 LOG.debug("system-product-name not available in dmi data")
2835 return False
2836- else:
2837- LOG.debug("detected hypervisor as %s", sys_product_name)
2838- return 'cloudsigma' in sys_product_name.lower()
2839-
2840- LOG.warning("failed to query dmi data for system product name")
2841- return False
2842+ LOG.debug("detected hypervisor as %s", sys_product_name)
2843+ return 'cloudsigma' in sys_product_name.lower()
2844
2845 def _get_data(self):
2846 """
2847diff --git a/cloudinit/sources/DataSourceExoscale.py b/cloudinit/sources/DataSourceExoscale.py
2848new file mode 100644
2849index 0000000..52e7f6f
2850--- /dev/null
2851+++ b/cloudinit/sources/DataSourceExoscale.py
2852@@ -0,0 +1,258 @@
2853+# Author: Mathieu Corbin <mathieu.corbin@exoscale.com>
2854+# Author: Christopher Glass <christopher.glass@exoscale.com>
2855+#
2856+# This file is part of cloud-init. See LICENSE file for license information.
2857+
2858+from cloudinit import ec2_utils as ec2
2859+from cloudinit import log as logging
2860+from cloudinit import sources
2861+from cloudinit import url_helper
2862+from cloudinit import util
2863+
2864+LOG = logging.getLogger(__name__)
2865+
2866+METADATA_URL = "http://169.254.169.254"
2867+API_VERSION = "1.0"
2868+PASSWORD_SERVER_PORT = 8080
2869+
2870+URL_TIMEOUT = 10
2871+URL_RETRIES = 6
2872+
2873+EXOSCALE_DMI_NAME = "Exoscale"
2874+
2875+BUILTIN_DS_CONFIG = {
2876+ # We run the set password config module on every boot in order to enable
2877+ # resetting the instance's password via the exoscale console (and a
2878+ # subsequent instance reboot).
2879+ 'cloud_config_modules': [["set-passwords", "always"]]
2880+}
2881+
2882+
2883+class DataSourceExoscale(sources.DataSource):
2884+
2885+ dsname = 'Exoscale'
2886+
2887+ def __init__(self, sys_cfg, distro, paths):
2888+ super(DataSourceExoscale, self).__init__(sys_cfg, distro, paths)
2889+ LOG.debug("Initializing the Exoscale datasource")
2890+
2891+ self.metadata_url = self.ds_cfg.get('metadata_url', METADATA_URL)
2892+ self.api_version = self.ds_cfg.get('api_version', API_VERSION)
2893+ self.password_server_port = int(
2894+ self.ds_cfg.get('password_server_port', PASSWORD_SERVER_PORT))
2895+ self.url_timeout = self.ds_cfg.get('timeout', URL_TIMEOUT)
2896+ self.url_retries = self.ds_cfg.get('retries', URL_RETRIES)
2897+
2898+ self.extra_config = BUILTIN_DS_CONFIG
2899+
2900+ def wait_for_metadata_service(self):
2901+ """Wait for the metadata service to be reachable."""
2902+
2903+ metadata_url = "{}/{}/meta-data/instance-id".format(
2904+ self.metadata_url, self.api_version)
2905+
2906+ url = url_helper.wait_for_url(
2907+ urls=[metadata_url],
2908+ max_wait=self.url_max_wait,
2909+ timeout=self.url_timeout,
2910+ status_cb=LOG.critical)
2911+
2912+ return bool(url)
2913+
2914+ def crawl_metadata(self):
2915+ """
2916+ Crawl the metadata service when available.
2917+
2918+ @returns: Dictionary of crawled metadata content.
2919+ """
2920+ metadata_ready = util.log_time(
2921+ logfunc=LOG.info,
2922+ msg='waiting for the metadata service',
2923+ func=self.wait_for_metadata_service)
2924+
2925+ if not metadata_ready:
2926+ return {}
2927+
2928+ return read_metadata(self.metadata_url, self.api_version,
2929+ self.password_server_port, self.url_timeout,
2930+ self.url_retries)
2931+
2932+ def _get_data(self):
2933+ """Fetch the user data, the metadata and the VM password
2934+ from the metadata service.
2935+
2936+ Please refer to the datasource documentation for details on how the
2937+ metadata server and password server are crawled.
2938+ """
2939+ if not self._is_platform_viable():
2940+ return False
2941+
2942+ data = util.log_time(
2943+ logfunc=LOG.debug,
2944+ msg='Crawl of metadata service',
2945+ func=self.crawl_metadata)
2946+
2947+ if not data:
2948+ return False
2949+
2950+ self.userdata_raw = data['user-data']
2951+ self.metadata = data['meta-data']
2952+ password = data.get('password')
2953+
2954+ password_config = {}
2955+ if password:
2956+ # Since we have a password, let's make sure we are allowed to use
2957+ # it by allowing ssh_pwauth.
2958+ # The password module's default behavior is to leave the
2959+ # configuration as-is in this regard, so that means it will either
2960+ # leave the password always disabled if no password is ever set, or
2961+ # leave the password login enabled if we set it once.
2962+ password_config = {
2963+ 'ssh_pwauth': True,
2964+ 'password': password,
2965+ 'chpasswd': {
2966+ 'expire': False,
2967+ },
2968+ }
2969+
2970+ # builtin extra_config overrides password_config
2971+ self.extra_config = util.mergemanydict(
2972+ [self.extra_config, password_config])
2973+
2974+ return True
2975+
2976+ def get_config_obj(self):
2977+ return self.extra_config
2978+
2979+ def _is_platform_viable(self):
2980+ return util.read_dmi_data('system-product-name').startswith(
2981+ EXOSCALE_DMI_NAME)
2982+
2983+
2984+# Used to match classes to dependencies
2985+datasources = [
2986+ (DataSourceExoscale, (sources.DEP_FILESYSTEM, sources.DEP_NETWORK)),
2987+]
2988+
2989+
2990+# Return a list of data sources that match this set of dependencies
2991+def get_datasource_list(depends):
2992+ return sources.list_from_depends(depends, datasources)
2993+
2994+
2995+def get_password(metadata_url=METADATA_URL,
2996+ api_version=API_VERSION,
2997+ password_server_port=PASSWORD_SERVER_PORT,
2998+ url_timeout=URL_TIMEOUT,
2999+ url_retries=URL_RETRIES):
3000+ """Obtain the VM's password if set.
3001+
3002+ Once fetched, the password is marked saved. Future calls to this method may
3003+ return empty string or 'saved_password'."""
3004+ password_url = "{}:{}/{}/".format(metadata_url, password_server_port,
3005+ api_version)
3006+ response = url_helper.read_file_or_url(
3007+ password_url,
3008+ ssl_details=None,
3009+ headers={"DomU_Request": "send_my_password"},
3010+ timeout=url_timeout,
3011+ retries=url_retries)
3012+ password = response.contents.decode('utf-8')
3013+ # the password is empty or already saved
3014+ # Note: the original metadata server would answer an additional
3015+ # 'bad_request' status, but the Exoscale implementation does not.
3016+ if password in ['', 'saved_password']:
3017+ return None
3018+ # save the password
3019+ url_helper.read_file_or_url(
3020+ password_url,
3021+ ssl_details=None,
3022+ headers={"DomU_Request": "saved_password"},
3023+ timeout=url_timeout,
3024+ retries=url_retries)
3025+ return password
3026+
3027+
3028+def read_metadata(metadata_url=METADATA_URL,
3029+ api_version=API_VERSION,
3030+ password_server_port=PASSWORD_SERVER_PORT,
3031+ url_timeout=URL_TIMEOUT,
3032+ url_retries=URL_RETRIES):
3033+ """Query the metadata server and return the retrieved data."""
3034+ crawled_metadata = {}
3035+ crawled_metadata['_metadata_api_version'] = api_version
3036+ try:
3037+ crawled_metadata['user-data'] = ec2.get_instance_userdata(
3038+ api_version,
3039+ metadata_url,
3040+ timeout=url_timeout,
3041+ retries=url_retries)
3042+ crawled_metadata['meta-data'] = ec2.get_instance_metadata(
3043+ api_version,
3044+ metadata_url,
3045+ timeout=url_timeout,
3046+ retries=url_retries)
3047+ except Exception as e:
3048+ util.logexc(LOG, "failed reading from metadata url %s (%s)",
3049+ metadata_url, e)
3050+ return {}
3051+
3052+ try:
3053+ crawled_metadata['password'] = get_password(
3054+ api_version=api_version,
3055+ metadata_url=metadata_url,
3056+ password_server_port=password_server_port,
3057+ url_retries=url_retries,
3058+ url_timeout=url_timeout)
3059+ except Exception as e:
3060+ util.logexc(LOG, "failed to read from password server url %s:%s (%s)",
3061+ metadata_url, password_server_port, e)
3062+
3063+ return crawled_metadata
3064+
3065+
3066+if __name__ == "__main__":
3067+ import argparse
3068+
3069+ parser = argparse.ArgumentParser(description='Query Exoscale Metadata')
3070+ parser.add_argument(
3071+ "--endpoint",
3072+ metavar="URL",
3073+ help="The url of the metadata service.",
3074+ default=METADATA_URL)
3075+ parser.add_argument(
3076+ "--version",
3077+ metavar="VERSION",
3078+ help="The version of the metadata endpoint to query.",
3079+ default=API_VERSION)
3080+ parser.add_argument(
3081+ "--retries",
3082+ metavar="NUM",
3083+ type=int,
3084+ help="The number of retries querying the endpoint.",
3085+ default=URL_RETRIES)
3086+ parser.add_argument(
3087+ "--timeout",
3088+ metavar="NUM",
3089+ type=int,
3090+ help="The time in seconds to wait before timing out.",
3091+ default=URL_TIMEOUT)
3092+ parser.add_argument(
3093+ "--password-port",
3094+ metavar="PORT",
3095+ type=int,
3096+ help="The port on which the password endpoint listens",
3097+ default=PASSWORD_SERVER_PORT)
3098+
3099+ args = parser.parse_args()
3100+
3101+ data = read_metadata(
3102+ metadata_url=args.endpoint,
3103+ api_version=args.version,
3104+ password_server_port=args.password_port,
3105+ url_timeout=args.timeout,
3106+ url_retries=args.retries)
3107+
3108+ print(util.json_dumps(data))
3109+
3110+# vi: ts=4 expandtab
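In `_get_data` above, `util.mergemanydict([self.extra_config, password_config])` relies on earlier dicts taking precedence, so the builtin `extra_config` keys win over `password_config` (per the "builtin extra_config overrides password_config" comment). A tiny stand-in showing that first-wins semantics; this is a shallow sketch of the precedence order only, not a reimplementation of cloud-init's deep merge.

```python
def merge_first_wins(dicts):
    """Shallow first-wins merge: keys from earlier dicts take
    precedence, mimicking the precedence the Exoscale code relies on."""
    merged = {}
    for d in dicts:
        for key, value in d.items():
            # setdefault only stores the key if no earlier dict set it
            merged.setdefault(key, value)
    return merged
```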
3111diff --git a/cloudinit/sources/DataSourceGCE.py b/cloudinit/sources/DataSourceGCE.py
3112index d816262..6cbfbba 100644
3113--- a/cloudinit/sources/DataSourceGCE.py
3114+++ b/cloudinit/sources/DataSourceGCE.py
3115@@ -18,10 +18,13 @@ LOG = logging.getLogger(__name__)
3116 MD_V1_URL = 'http://metadata.google.internal/computeMetadata/v1/'
3117 BUILTIN_DS_CONFIG = {'metadata_url': MD_V1_URL}
3118 REQUIRED_FIELDS = ('instance-id', 'availability-zone', 'local-hostname')
3119+GUEST_ATTRIBUTES_URL = ('http://metadata.google.internal/computeMetadata/'
3120+ 'v1/instance/guest-attributes')
3121+HOSTKEY_NAMESPACE = 'hostkeys'
3122+HEADERS = {'Metadata-Flavor': 'Google'}
3123
3124
3125 class GoogleMetadataFetcher(object):
3126- headers = {'Metadata-Flavor': 'Google'}
3127
3128 def __init__(self, metadata_address):
3129 self.metadata_address = metadata_address
3130@@ -32,7 +35,7 @@ class GoogleMetadataFetcher(object):
3131 url = self.metadata_address + path
3132 if is_recursive:
3133 url += '/?recursive=True'
3134- resp = url_helper.readurl(url=url, headers=self.headers)
3135+ resp = url_helper.readurl(url=url, headers=HEADERS)
3136 except url_helper.UrlError as exc:
3137 msg = "url %s raised exception %s"
3138 LOG.debug(msg, path, exc)
3139@@ -90,6 +93,10 @@ class DataSourceGCE(sources.DataSource):
3140 public_keys_data = self.metadata['public-keys-data']
3141 return _parse_public_keys(public_keys_data, self.default_user)
3142
3143+ def publish_host_keys(self, hostkeys):
3144+ for key in hostkeys:
3145+ _write_host_key_to_guest_attributes(*key)
3146+
3147 def get_hostname(self, fqdn=False, resolve_ip=False, metadata_only=False):
3148 # GCE has long FDQN's and has asked for short hostnames.
3149 return self.metadata['local-hostname'].split('.')[0]
3150@@ -103,6 +110,17 @@ class DataSourceGCE(sources.DataSource):
3151 return self.availability_zone.rsplit('-', 1)[0]
3152
3153
3154+def _write_host_key_to_guest_attributes(key_type, key_value):
3155+ url = '%s/%s/%s' % (GUEST_ATTRIBUTES_URL, HOSTKEY_NAMESPACE, key_type)
3156+ key_value = key_value.encode('utf-8')
3157+ resp = url_helper.readurl(url=url, data=key_value, headers=HEADERS,
3158+ request_method='PUT', check_status=False)
3159+ if resp.ok():
3160+ LOG.debug('Wrote %s host key to guest attributes.', key_type)
3161+ else:
3162+ LOG.debug('Unable to write %s host key to guest attributes.', key_type)
3163+
3164+
3165 def _has_expired(public_key):
3166 # Check whether an SSH key is expired. Public key input is a single SSH
3167 # public key in the GCE specific key format documented here:
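The new GCE host-key publishing path PUTs each key to a per-type guest-attributes URL. A small sketch of the URL construction and the `(key_type, key_value)` unpacking that `publish_host_keys` performs before handing off to `url_helper.readurl` (the constants are copied from the diff; the helper name is illustrative):

```python
GUEST_ATTRIBUTES_URL = ('http://metadata.google.internal/computeMetadata/'
                        'v1/instance/guest-attributes')
HOSTKEY_NAMESPACE = 'hostkeys'


def hostkey_put_targets(hostkeys):
    """Map (key_type, key_value) pairs to the PUT url and utf-8 body
    that _write_host_key_to_guest_attributes would send."""
    targets = []
    for key_type, key_value in hostkeys:
        url = '%s/%s/%s' % (GUEST_ATTRIBUTES_URL, HOSTKEY_NAMESPACE,
                            key_type)
        targets.append((url, key_value.encode('utf-8')))
    return targets
```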
3168diff --git a/cloudinit/sources/DataSourceHetzner.py b/cloudinit/sources/DataSourceHetzner.py
3169index 5c75b65..5029833 100644
3170--- a/cloudinit/sources/DataSourceHetzner.py
3171+++ b/cloudinit/sources/DataSourceHetzner.py
3172@@ -28,6 +28,9 @@ MD_WAIT_RETRY = 2
3173
3174
3175 class DataSourceHetzner(sources.DataSource):
3176+
3177+ dsname = 'Hetzner'
3178+
3179 def __init__(self, sys_cfg, distro, paths):
3180 sources.DataSource.__init__(self, sys_cfg, distro, paths)
3181 self.distro = distro
3182diff --git a/cloudinit/sources/DataSourceOVF.py b/cloudinit/sources/DataSourceOVF.py
3183index 70e7a5c..dd941d2 100644
3184--- a/cloudinit/sources/DataSourceOVF.py
3185+++ b/cloudinit/sources/DataSourceOVF.py
3186@@ -148,6 +148,9 @@ class DataSourceOVF(sources.DataSource):
3187 product_marker, os.path.join(self.paths.cloud_dir, 'data'))
3188 special_customization = product_marker and not hasmarkerfile
3189 customscript = self._vmware_cust_conf.custom_script_name
3190+ ccScriptsDir = os.path.join(
3191+ self.paths.get_cpath("scripts"),
3192+ "per-instance")
3193 except Exception as e:
3194 _raise_error_status(
3195 "Error parsing the customization Config File",
3196@@ -201,7 +204,9 @@ class DataSourceOVF(sources.DataSource):
3197
3198 if customscript:
3199 try:
3200- postcust = PostCustomScript(customscript, imcdirpath)
3201+ postcust = PostCustomScript(customscript,
3202+ imcdirpath,
3203+ ccScriptsDir)
3204 postcust.execute()
3205 except Exception as e:
3206 _raise_error_status(
3207diff --git a/cloudinit/sources/DataSourceOracle.py b/cloudinit/sources/DataSourceOracle.py
3208index 70b9c58..6e73f56 100644
3209--- a/cloudinit/sources/DataSourceOracle.py
3210+++ b/cloudinit/sources/DataSourceOracle.py
3211@@ -16,7 +16,7 @@ Notes:
3212 """
3213
3214 from cloudinit.url_helper import combine_url, readurl, UrlError
3215-from cloudinit.net import dhcp
3216+from cloudinit.net import dhcp, get_interfaces_by_mac
3217 from cloudinit import net
3218 from cloudinit import sources
3219 from cloudinit import util
3220@@ -28,8 +28,80 @@ import re
3221
3222 LOG = logging.getLogger(__name__)
3223
3224+BUILTIN_DS_CONFIG = {
3225+ # Don't use IMDS to configure secondary NICs by default
3226+ 'configure_secondary_nics': False,
3227+}
3228 CHASSIS_ASSET_TAG = "OracleCloud.com"
3229 METADATA_ENDPOINT = "http://169.254.169.254/openstack/"
3230+VNIC_METADATA_URL = 'http://169.254.169.254/opc/v1/vnics/'
3231+# https://docs.cloud.oracle.com/iaas/Content/Network/Troubleshoot/connectionhang.htm#Overview
3232+# indicates that an MTU of 9000 is used within OCI
3233+MTU = 9000
3234+
3235+
3236+def _add_network_config_from_opc_imds(network_config):
3237+ """
3238+ Fetch data from Oracle's IMDS, generate secondary NIC config, merge it.
3239+
3240+ The primary NIC configuration should not be modified based on the IMDS
3241+ values, as it should continue to be configured for DHCP. As such, this
3242+ takes an existing network_config dict which is expected to have the primary
3243+ NIC configuration already present. It will mutate the given dict to
3244+ include the secondary VNICs.
3245+
3246+ :param network_config:
3247+ A v1 network config dict with the primary NIC already configured. This
3248+ dict will be mutated.
3249+
3250+ :raises:
3251+ Exceptions are not handled within this function. Likely exceptions are
3252+ those raised by url_helper.readurl (if communicating with the IMDS
3253+ fails), ValueError/JSONDecodeError (if the IMDS returns invalid JSON),
3254+ and KeyError/IndexError (if the IMDS returns valid JSON with unexpected
3255+ contents).
3256+ """
3257+ resp = readurl(VNIC_METADATA_URL)
3258+ vnics = json.loads(str(resp))
3259+
3260+ if 'nicIndex' in vnics[0]:
3261+ # TODO: Once configure_secondary_nics defaults to True, lower the level
3262+ # of this log message. (Currently, if we're running this code at all,
3263+ # someone has explicitly opted in to secondary VNIC configuration, so
3264+ # we should warn them that it didn't happen. Once it's the default, this
3265+ # would be emitted on every Bare Metal Machine launch, which means INFO
3266+ # or DEBUG would be more appropriate.)
3267+ LOG.warning(
3268+ 'VNIC metadata indicates this is a bare metal machine; skipping'
3269+ ' secondary VNIC configuration.'
3270+ )
3271+ return
3272+
3273+ interfaces_by_mac = get_interfaces_by_mac()
3274+
3275+ for vnic_dict in vnics[1:]:
3276+ # We skip the first entry in the response because the primary interface
3277+ # is already configured by iSCSI boot; applying configuration from the
3278+ # IMDS is not required.
3279+ mac_address = vnic_dict['macAddr'].lower()
3280+ if mac_address not in interfaces_by_mac:
3281+ LOG.debug('Interface with MAC %s not found; skipping', mac_address)
3282+ continue
3283+ name = interfaces_by_mac[mac_address]
3284+ subnet = {
3285+ 'type': 'static',
3286+ 'address': vnic_dict['privateIp'],
3287+ 'netmask': vnic_dict['subnetCidrBlock'].split('/')[1],
3288+ 'gateway': vnic_dict['virtualRouterIp'],
3289+ 'control': 'manual',
3290+ }
3291+ network_config['config'].append({
3292+ 'name': name,
3293+ 'type': 'physical',
3294+ 'mac_address': mac_address,
3295+ 'mtu': MTU,
3296+ 'subnets': [subnet],
3297+ })
3298
3299
3300 class DataSourceOracle(sources.DataSource):
3301@@ -37,8 +109,22 @@ class DataSourceOracle(sources.DataSource):
3302 dsname = 'Oracle'
3303 system_uuid = None
3304 vendordata_pure = None
3305+ network_config_sources = (
3306+ sources.NetworkConfigSource.cmdline,
3307+ sources.NetworkConfigSource.ds,
3308+ sources.NetworkConfigSource.initramfs,
3309+ sources.NetworkConfigSource.system_cfg,
3310+ )
3311+
3312 _network_config = sources.UNSET
3313
3314+ def __init__(self, sys_cfg, *args, **kwargs):
3315+ super(DataSourceOracle, self).__init__(sys_cfg, *args, **kwargs)
3316+
3317+ self.ds_cfg = util.mergemanydict([
3318+ util.get_cfg_by_path(sys_cfg, ['datasource', self.dsname], {}),
3319+ BUILTIN_DS_CONFIG])
3320+
3321 def _is_platform_viable(self):
3322 """Check platform environment to report if this datasource may run."""
3323 return _is_platform_viable()
3324@@ -48,7 +134,7 @@ class DataSourceOracle(sources.DataSource):
3325 return False
3326
3327 # network may be configured if iscsi root. If that is the case
3328- # then read_kernel_cmdline_config will return non-None.
3329+ # then read_initramfs_config will return non-None.
3330 if _is_iscsi_root():
3331 data = self.crawl_metadata()
3332 else:
3333@@ -118,11 +204,17 @@ class DataSourceOracle(sources.DataSource):
3334 We nonetheless return cmdline provided config if present
3335 and fallback to generate fallback."""
3336 if self._network_config == sources.UNSET:
3337- cmdline_cfg = cmdline.read_kernel_cmdline_config()
3338- if cmdline_cfg:
3339- self._network_config = cmdline_cfg
3340- else:
3341+ self._network_config = cmdline.read_initramfs_config()
3342+ if not self._network_config:
3343 self._network_config = self.distro.generate_fallback_config()
3344+ if self.ds_cfg.get('configure_secondary_nics'):
3345+ try:
3346+ # Mutate self._network_config to include secondary VNICs
3347+ _add_network_config_from_opc_imds(self._network_config)
3348+ except Exception:
3349+ util.logexc(
3350+ LOG,
3351+ "Failed to fetch secondary network configuration!")
3352 return self._network_config
3353
3354
3355@@ -137,7 +229,7 @@ def _is_platform_viable():
3356
3357
3358 def _is_iscsi_root():
3359- return bool(cmdline.read_kernel_cmdline_config())
3360+ return bool(cmdline.read_initramfs_config())
3361
3362
3363 def _load_index(content):
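[Reviewer note] The secondary-VNIC logic in `_add_network_config_from_opc_imds` above boils down to mapping each IMDS VNIC entry onto a v1 `physical` config entry keyed by MAC. A standalone sketch of that per-VNIC transformation (the helper name and interface name are hypothetical; no cloud-init imports):

```python
import json

MTU = 9000  # per the OCI troubleshooting doc referenced in the diff

def vnic_to_config_entry(vnic_dict, name):
    """Build one v1 network-config entry for a secondary VNIC.

    Note: as in the datasource, the CIDR prefix length (not a dotted
    netmask) is stored under the 'netmask' key of the static subnet.
    """
    subnet = {
        'type': 'static',
        'address': vnic_dict['privateIp'],
        'netmask': vnic_dict['subnetCidrBlock'].split('/')[1],
        'gateway': vnic_dict['virtualRouterIp'],
        'control': 'manual',
    }
    return {
        'name': name,
        'type': 'physical',
        'mac_address': vnic_dict['macAddr'].lower(),
        'mtu': MTU,
        'subnets': [subnet],
    }

# Hypothetical IMDS entry, shaped like the test fixtures further down:
sample = json.loads("""{
  "privateIp": "10.0.4.5",
  "macAddr": "02:00:17:05:CF:51",
  "virtualRouterIp": "10.0.4.1",
  "subnetCidrBlock": "10.0.4.0/24"
}""")
entry = vnic_to_config_entry(sample, 'ens4')
```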
3364diff --git a/cloudinit/sources/__init__.py b/cloudinit/sources/__init__.py
3365index e6966b3..a319322 100644
3366--- a/cloudinit/sources/__init__.py
3367+++ b/cloudinit/sources/__init__.py
3368@@ -66,6 +66,13 @@ CLOUD_ID_REGION_PREFIX_MAP = {
3369 'china': ('azure-china', lambda c: c == 'azure'), # only change azure
3370 }
3371
3372+# NetworkConfigSource represents the canonical list of network config sources
3373+# that cloud-init knows about. (Python 2.7 lacks PEP 435, so use a singleton
3374+# namedtuple as an enum; see https://stackoverflow.com/a/6971002)
3375+_NETCFG_SOURCE_NAMES = ('cmdline', 'ds', 'system_cfg', 'fallback', 'initramfs')
3376+NetworkConfigSource = namedtuple('NetworkConfigSource',
3377+ _NETCFG_SOURCE_NAMES)(*_NETCFG_SOURCE_NAMES)
3378+
3379
3380 class DataSourceNotFoundException(Exception):
3381 pass
3382@@ -153,6 +160,16 @@ class DataSource(object):
3383 # Track the discovered fallback nic for use in configuration generation.
3384 _fallback_interface = None
3385
3386+ # The network configuration sources that should be considered for this data
3387+ # source. (The first source in this list that provides network
3388+ # configuration will be used without considering any that follow.) This
3389+ # should always be a subset of the members of NetworkConfigSource with no
3390+ # duplicate entries.
3391+ network_config_sources = (NetworkConfigSource.cmdline,
3392+ NetworkConfigSource.initramfs,
3393+ NetworkConfigSource.system_cfg,
3394+ NetworkConfigSource.ds)
3395+
3396 # read_url_params
3397 url_max_wait = -1 # max_wait < 0 means do not wait
3398 url_timeout = 10 # timeout for each metadata url read attempt
3399@@ -474,6 +491,16 @@ class DataSource(object):
3400 def get_public_ssh_keys(self):
3401 return normalize_pubkey_data(self.metadata.get('public-keys'))
3402
3403+ def publish_host_keys(self, hostkeys):
3404+ """Publish the public SSH host keys (found in /etc/ssh/*.pub).
3405+
3406+ @param hostkeys: List of host key tuples (key_type, key_value),
3407+ where key_type is the first field in the public key file
3408+ (e.g. 'ssh-rsa') and key_value is the key itself
3409+ (e.g. 'AAAAB3NzaC1y...').
3410+ """
3411+ pass
3412+
3413 def _remap_device(self, short_name):
3414 # LP: #611137
3415 # the metadata service may believe that devices are named 'sda'
3416diff --git a/cloudinit/sources/helpers/azure.py b/cloudinit/sources/helpers/azure.py
3417index 82c4c8c..f1fba17 100755
3418--- a/cloudinit/sources/helpers/azure.py
3419+++ b/cloudinit/sources/helpers/azure.py
3420@@ -16,7 +16,11 @@ from xml.etree import ElementTree
3421
3422 from cloudinit import url_helper
3423 from cloudinit import util
3424+from cloudinit import version
3425+from cloudinit import distros
3426 from cloudinit.reporting import events
3427+from cloudinit.net.dhcp import EphemeralDHCPv4
3428+from datetime import datetime
3429
3430 LOG = logging.getLogger(__name__)
3431
3432@@ -24,6 +28,10 @@ LOG = logging.getLogger(__name__)
3433 # value is applied if the endpoint can't be found within a lease file
3434 DEFAULT_WIRESERVER_ENDPOINT = "a8:3f:81:10"
3435
3436+BOOT_EVENT_TYPE = 'boot-telemetry'
3437+SYSTEMINFO_EVENT_TYPE = 'system-info'
3438+DIAGNOSTIC_EVENT_TYPE = 'diagnostic'
3439+
3440 azure_ds_reporter = events.ReportEventStack(
3441 name="azure-ds",
3442 description="initialize reporter for azure ds",
3443@@ -40,6 +48,105 @@ def azure_ds_telemetry_reporter(func):
3444 return impl
3445
3446
3447+@azure_ds_telemetry_reporter
3448+def get_boot_telemetry():
3449+ """Report timestamps related to kernel initialization and systemd
3450+ activation of cloud-init"""
3451+ if not distros.uses_systemd():
3452+ raise RuntimeError(
3453+ "distro not using systemd, skipping boot telemetry")
3454+
3455+ LOG.debug("Collecting boot telemetry")
3456+ try:
3457+ kernel_start = float(time.time()) - float(util.uptime())
3458+ except ValueError:
3459+ raise RuntimeError("Failed to determine kernel start timestamp")
3460+
3461+ try:
3462+ out, _ = util.subp(['/bin/systemctl',
3463+ 'show', '-p',
3464+ 'UserspaceTimestampMonotonic'],
3465+ capture=True)
3466+ tsm = None
3467+ if out and '=' in out:
3468+ tsm = out.split("=")[1]
3469+
3470+ if not tsm:
3471+ raise RuntimeError("Failed to parse "
3472+ "UserspaceTimestampMonotonic from systemd")
3473+
3474+ user_start = kernel_start + (float(tsm) / 1000000)
3475+ except util.ProcessExecutionError as e:
3476+ raise RuntimeError("Failed to get UserspaceTimestampMonotonic: %s"
3477+ % e)
3478+ except ValueError as e:
3479+ raise RuntimeError("Failed to parse "
3480+ "UserspaceTimestampMonotonic from systemd: %s"
3481+ % e)
3482+
3483+ try:
3484+ out, _ = util.subp(['/bin/systemctl', 'show',
3485+ 'cloud-init-local', '-p',
3486+ 'InactiveExitTimestampMonotonic'],
3487+ capture=True)
3488+ tsm = None
3489+ if out and '=' in out:
3490+ tsm = out.split("=")[1]
3491+ if not tsm:
3492+ raise RuntimeError("Failed to parse "
3493+ "InactiveExitTimestampMonotonic from systemd")
3494+
3495+ cloudinit_activation = kernel_start + (float(tsm) / 1000000)
3496+ except util.ProcessExecutionError as e:
3497+ raise RuntimeError("Failed to get InactiveExitTimestampMonotonic: %s"
3498+ % e)
3499+ except ValueError as e:
3500+ raise RuntimeError("Failed to parse "
3501+ "InactiveExitTimestampMonotonic from systemd: %s"
3502+ % e)
3503+
3504+ evt = events.ReportingEvent(
3505+ BOOT_EVENT_TYPE, 'boot-telemetry',
3506+ "kernel_start=%s user_start=%s cloudinit_activation=%s" %
3507+ (datetime.utcfromtimestamp(kernel_start).isoformat() + 'Z',
3508+ datetime.utcfromtimestamp(user_start).isoformat() + 'Z',
3509+ datetime.utcfromtimestamp(cloudinit_activation).isoformat() + 'Z'),
3510+ events.DEFAULT_EVENT_ORIGIN)
3511+ events.report_event(evt)
3512+
3513+ # return the event for unit testing purposes
3514+ return evt
3515+
3516+
3517+@azure_ds_telemetry_reporter
3518+def get_system_info():
3519+ """Collect and report system information"""
3520+ info = util.system_info()
3521+ evt = events.ReportingEvent(
3522+ SYSTEMINFO_EVENT_TYPE, 'system information',
3523+ "cloudinit_version=%s, kernel_version=%s, variant=%s, "
3524+ "distro_name=%s, distro_version=%s, flavor=%s, "
3525+ "python_version=%s" %
3526+ (version.version_string(), info['release'], info['variant'],
3527+ info['dist'][0], info['dist'][1], info['dist'][2],
3528+ info['python']), events.DEFAULT_EVENT_ORIGIN)
3529+ events.report_event(evt)
3530+
3531+ # return the event for unit testing purposes
3532+ return evt
3533+
3534+
3535+def report_diagnostic_event(msg):
3536+ """Report a diagnostic event"""
3537+ evt = events.ReportingEvent(
3538+ DIAGNOSTIC_EVENT_TYPE, 'diagnostic message',
3539+ msg, events.DEFAULT_EVENT_ORIGIN)
3540+ events.report_event(evt)
3541+
3542+ # return the event for unit testing purposes
3543+ return evt
3544+
3545+
3546 @contextmanager
3547 def cd(newdir):
3548 prevdir = os.getcwd()
3549@@ -360,16 +467,19 @@ class WALinuxAgentShim(object):
3550 value = dhcp245
3551 LOG.debug("Using Azure Endpoint from dhcp options")
3552 if value is None:
3553+ report_diagnostic_event("No Azure endpoint from dhcp options")
3554 LOG.debug('Finding Azure endpoint from networkd...')
3555 value = WALinuxAgentShim._networkd_get_value_from_leases()
3556 if value is None:
3557 # Option-245 stored in /run/cloud-init/dhclient.hooks/<ifc>.json
3558 # a dhclient exit hook that calls cloud-init-dhclient-hook
3559+ report_diagnostic_event("No Azure endpoint from networkd")
3560 LOG.debug('Finding Azure endpoint from hook json...')
3561 dhcp_options = WALinuxAgentShim._load_dhclient_json()
3562 value = WALinuxAgentShim._get_value_from_dhcpoptions(dhcp_options)
3563 if value is None:
3564 # Fallback and check the leases file if unsuccessful
3565+ report_diagnostic_event("No Azure endpoint from dhclient logs")
3566 LOG.debug("Unable to find endpoint in dhclient logs. "
3567 " Falling back to check lease files")
3568 if fallback_lease_file is None:
3569@@ -381,11 +491,15 @@ class WALinuxAgentShim(object):
3570 value = WALinuxAgentShim._get_value_from_leases_file(
3571 fallback_lease_file)
3572 if value is None:
3573- LOG.warning("No lease found; using default endpoint")
3574+ msg = "No lease found; using default endpoint"
3575+ report_diagnostic_event(msg)
3576+ LOG.warning(msg)
3577 value = DEFAULT_WIRESERVER_ENDPOINT
3578
3579 endpoint_ip_address = WALinuxAgentShim.get_ip_from_lease_value(value)
3580- LOG.debug('Azure endpoint found at %s', endpoint_ip_address)
3581+ msg = 'Azure endpoint found at %s' % endpoint_ip_address
3582+ report_diagnostic_event(msg)
3583+ LOG.debug(msg)
3584 return endpoint_ip_address
3585
3586 @azure_ds_telemetry_reporter
3587@@ -399,16 +513,19 @@ class WALinuxAgentShim(object):
3588 try:
3589 response = http_client.get(
3590 'http://{0}/machine/?comp=goalstate'.format(self.endpoint))
3591- except Exception:
3592+ except Exception as e:
3593 if attempts < 10:
3594 time.sleep(attempts + 1)
3595 else:
3596+ report_diagnostic_event(
3597+ "failed to register with Azure: %s" % e)
3598 raise
3599 else:
3600 break
3601 attempts += 1
3602 LOG.debug('Successfully fetched GoalState XML.')
3603 goal_state = GoalState(response.contents, http_client)
3604+ report_diagnostic_event("container_id %s" % goal_state.container_id)
3605 ssh_keys = []
3606 if goal_state.certificates_xml is not None and pubkey_info is not None:
3607 LOG.debug('Certificate XML found; parsing out public keys.')
3608@@ -449,11 +566,20 @@ class WALinuxAgentShim(object):
3609 container_id=goal_state.container_id,
3610 instance_id=goal_state.instance_id,
3611 )
3612- http_client.post(
3613- "http://{0}/machine?comp=health".format(self.endpoint),
3614- data=document,
3615- extra_headers={'Content-Type': 'text/xml; charset=utf-8'},
3616- )
3617+ # Host will collect kvps when cloud-init reports ready.
3618+ # some kvps might still be in the queue. We yield the scheduler
3619+ # to make sure we process all kvps up till this point.
3620+ time.sleep(0)
3621+ try:
3622+ http_client.post(
3623+ "http://{0}/machine?comp=health".format(self.endpoint),
3624+ data=document,
3625+ extra_headers={'Content-Type': 'text/xml; charset=utf-8'},
3626+ )
3627+ except Exception as e:
3628+ report_diagnostic_event("exception while reporting ready: %s" % e)
3629+ raise
3630+
3631 LOG.info('Reported ready to Azure fabric.')
3632
3633
3634@@ -467,4 +593,22 @@ def get_metadata_from_fabric(fallback_lease_file=None, dhcp_opts=None,
3635 finally:
3636 shim.clean_up()
3637
3638+
3639+class EphemeralDHCPv4WithReporting(object):
3640+ def __init__(self, reporter, nic=None):
3641+ self.reporter = reporter
3642+ self.ephemeralDHCPv4 = EphemeralDHCPv4(iface=nic)
3643+
3644+ def __enter__(self):
3645+ with events.ReportEventStack(
3646+ name="obtain-dhcp-lease",
3647+ description="obtain dhcp lease",
3648+ parent=self.reporter):
3649+ return self.ephemeralDHCPv4.__enter__()
3650+
3651+ def __exit__(self, excp_type, excp_value, excp_traceback):
3652+ self.ephemeralDHCPv4.__exit__(
3653+ excp_type, excp_value, excp_traceback)
3654+
3655+
3656 # vi: ts=4 expandtab
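[Reviewer note] `get_boot_telemetry` above derives wall-clock timestamps by anchoring systemd's monotonic offsets (microseconds since kernel start) to `kernel_start = time.time() - uptime`. A hedged standalone sketch of just that arithmetic; the systemctl output is faked here rather than queried:

```python
import time
from datetime import datetime

def parse_monotonic(systemctl_output):
    """Extract the microsecond value from 'Key=12345' style output."""
    _, _, value = systemctl_output.partition('=')
    if not value:
        raise RuntimeError('could not parse timestamp from systemd')
    return float(value)

def to_wallclock(kernel_start, monotonic_usec):
    # Monotonic timestamps count microseconds since kernel start.
    return kernel_start + monotonic_usec / 1000000

# Hypothetical values instead of real systemctl calls:
kernel_start = time.time() - 120.0     # pretend we booted 2 min ago
tsm = parse_monotonic('UserspaceTimestampMonotonic=2500000')
user_start = to_wallclock(kernel_start, tsm)
stamp = datetime.utcfromtimestamp(user_start).isoformat() + 'Z'
```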
3657diff --git a/cloudinit/sources/helpers/vmware/imc/config_custom_script.py b/cloudinit/sources/helpers/vmware/imc/config_custom_script.py
3658index a7d4ad9..9f14770 100644
3659--- a/cloudinit/sources/helpers/vmware/imc/config_custom_script.py
3660+++ b/cloudinit/sources/helpers/vmware/imc/config_custom_script.py
3661@@ -1,5 +1,5 @@
3662 # Copyright (C) 2017 Canonical Ltd.
3663-# Copyright (C) 2017 VMware Inc.
3664+# Copyright (C) 2017-2019 VMware Inc.
3665 #
3666 # Author: Maitreyee Saikia <msaikia@vmware.com>
3667 #
3668@@ -8,7 +8,6 @@
3669 import logging
3670 import os
3671 import stat
3672-from textwrap import dedent
3673
3674 from cloudinit import util
3675
3676@@ -20,12 +19,15 @@ class CustomScriptNotFound(Exception):
3677
3678
3679 class CustomScriptConstant(object):
3680- RC_LOCAL = "/etc/rc.local"
3681- POST_CUST_TMP_DIR = "/root/.customization"
3682- POST_CUST_RUN_SCRIPT_NAME = "post-customize-guest.sh"
3683- POST_CUST_RUN_SCRIPT = os.path.join(POST_CUST_TMP_DIR,
3684- POST_CUST_RUN_SCRIPT_NAME)
3685- POST_REBOOT_PENDING_MARKER = "/.guest-customization-post-reboot-pending"
3686+ CUSTOM_TMP_DIR = "/root/.customization"
3687+
3688+ # The user defined custom script
3689+ CUSTOM_SCRIPT_NAME = "customize.sh"
3690+ CUSTOM_SCRIPT = os.path.join(CUSTOM_TMP_DIR,
3691+ CUSTOM_SCRIPT_NAME)
3692+ POST_CUSTOM_PENDING_MARKER = "/.guest-customization-post-reboot-pending"
3693+ # The cc_scripts_per_instance script to launch the custom script
3694+ POST_CUSTOM_SCRIPT_NAME = "post-customize-guest.sh"
3695
3696
3697 class RunCustomScript(object):
3698@@ -39,10 +41,19 @@ class RunCustomScript(object):
3699 raise CustomScriptNotFound("Script %s not found!! "
3700 "Cannot execute custom script!"
3701 % self.scriptpath)
3702+
3703+ util.ensure_dir(CustomScriptConstant.CUSTOM_TMP_DIR)
3704+
3705+ LOG.debug("Copying custom script to %s",
3706+ CustomScriptConstant.CUSTOM_SCRIPT)
3707+ util.copy(self.scriptpath, CustomScriptConstant.CUSTOM_SCRIPT)
3708+
3709 # Strip any CR characters from the decoded script
3710- util.load_file(self.scriptpath).replace("\r", "")
3711- st = os.stat(self.scriptpath)
3712- os.chmod(self.scriptpath, st.st_mode | stat.S_IEXEC)
3713+ content = util.load_file(
3714+ CustomScriptConstant.CUSTOM_SCRIPT).replace("\r", "")
3715+ util.write_file(CustomScriptConstant.CUSTOM_SCRIPT,
3716+ content,
3717+ mode=0o544)
3718
3719
3720 class PreCustomScript(RunCustomScript):
3721@@ -50,104 +61,34 @@ class PreCustomScript(RunCustomScript):
3722 """Executing custom script with precustomization argument."""
3723 LOG.debug("Executing pre-customization script")
3724 self.prepare_script()
3725- util.subp(["/bin/sh", self.scriptpath, "precustomization"])
3726+ util.subp([CustomScriptConstant.CUSTOM_SCRIPT, "precustomization"])
3727
3728
3729 class PostCustomScript(RunCustomScript):
3730- def __init__(self, scriptname, directory):
3731+ def __init__(self, scriptname, directory, ccScriptsDir):
3732 super(PostCustomScript, self).__init__(scriptname, directory)
3733- # Determine when to run custom script. When postreboot is True,
3734- # the user uploaded script will run as part of rc.local after
3735- # the machine reboots. This is determined by presence of rclocal.
3736- # When postreboot is False, script will run as part of cloud-init.
3737- self.postreboot = False
3738-
3739- def _install_post_reboot_agent(self, rclocal):
3740- """
3741- Install post-reboot agent for running custom script after reboot.
3742- As part of this process, we are editing the rclocal file to run a
3743- VMware script, which in turn is resposible for handling the user
3744- script.
3745- @param: path to rc local.
3746- """
3747- LOG.debug("Installing post-reboot customization from %s to %s",
3748- self.directory, rclocal)
3749- if not self.has_previous_agent(rclocal):
3750- LOG.info("Adding post-reboot customization agent to rc.local")
3751- new_content = dedent("""
3752- # Run post-reboot guest customization
3753- /bin/sh %s
3754- exit 0
3755- """) % CustomScriptConstant.POST_CUST_RUN_SCRIPT
3756- existing_rclocal = util.load_file(rclocal).replace('exit 0\n', '')
3757- st = os.stat(rclocal)
3758- # "x" flag should be set
3759- mode = st.st_mode | stat.S_IEXEC
3760- util.write_file(rclocal, existing_rclocal + new_content, mode)
3761-
3762- else:
3763- # We don't need to update rclocal file everytime a customization
3764- # is requested. It just needs to be done for the first time.
3765- LOG.info("Post-reboot guest customization agent is already "
3766- "registered in rc.local")
3767- LOG.debug("Installing post-reboot customization agent finished: %s",
3768- self.postreboot)
3769-
3770- def has_previous_agent(self, rclocal):
3771- searchstring = "# Run post-reboot guest customization"
3772- if searchstring in open(rclocal).read():
3773- return True
3774- return False
3775-
3776- def find_rc_local(self):
3777- """
3778- Determine if rc local is present.
3779- """
3780- rclocal = ""
3781- if os.path.exists(CustomScriptConstant.RC_LOCAL):
3782- LOG.debug("rc.local detected.")
3783- # resolving in case of symlink
3784- rclocal = os.path.realpath(CustomScriptConstant.RC_LOCAL)
3785- LOG.debug("rc.local resolved to %s", rclocal)
3786- else:
3787- LOG.warning("Can't find rc.local, post-customization "
3788- "will be run before reboot")
3789- return rclocal
3790-
3791- def install_agent(self):
3792- rclocal = self.find_rc_local()
3793- if rclocal:
3794- self._install_post_reboot_agent(rclocal)
3795- self.postreboot = True
3796+ self.ccScriptsDir = ccScriptsDir
3797+ self.ccScriptPath = os.path.join(
3798+ ccScriptsDir,
3799+ CustomScriptConstant.POST_CUSTOM_SCRIPT_NAME)
3800
3801 def execute(self):
3802 """
3803- This method executes post-customization script before or after reboot
3804- based on the presence of rc local.
3805+ This method copies the post-customization run script to the
3806+ cc_scripts_per_instance directory, from which that cloud-init
3807+ module runs the custom script.
3808 """
3809 self.prepare_script()
3810- self.install_agent()
3811- if not self.postreboot:
3812- LOG.warning("Executing post-customization script inline")
3813- util.subp(["/bin/sh", self.scriptpath, "postcustomization"])
3814- else:
3815- LOG.debug("Scheduling custom script to run post reboot")
3816- if not os.path.isdir(CustomScriptConstant.POST_CUST_TMP_DIR):
3817- os.mkdir(CustomScriptConstant.POST_CUST_TMP_DIR)
3818- # Script "post-customize-guest.sh" and user uploaded script are
3819- # are present in the same directory and needs to copied to a temp
3820- # directory to be executed post reboot. User uploaded script is
3821- # saved as customize.sh in the temp directory.
3822- # post-customize-guest.sh excutes customize.sh after reboot.
3823- LOG.debug("Copying post-customization script")
3824- util.copy(self.scriptpath,
3825- CustomScriptConstant.POST_CUST_TMP_DIR + "/customize.sh")
3826- LOG.debug("Copying script to run post-customization script")
3827- util.copy(
3828- os.path.join(self.directory,
3829- CustomScriptConstant.POST_CUST_RUN_SCRIPT_NAME),
3830- CustomScriptConstant.POST_CUST_RUN_SCRIPT)
3831- LOG.info("Creating post-reboot pending marker")
3832- util.ensure_file(CustomScriptConstant.POST_REBOOT_PENDING_MARKER)
3833+
3834+ LOG.debug("Copying post customize run script to %s",
3835+ self.ccScriptPath)
3836+ util.copy(
3837+ os.path.join(self.directory,
3838+ CustomScriptConstant.POST_CUSTOM_SCRIPT_NAME),
3839+ self.ccScriptPath)
3840+ st = os.stat(self.ccScriptPath)
3841+ os.chmod(self.ccScriptPath, st.st_mode | stat.S_IEXEC)
3842+ LOG.info("Creating post customization pending marker")
3843+ util.ensure_file(CustomScriptConstant.POST_CUSTOM_PENDING_MARKER)
3844
3845 # vi: ts=4 expandtab
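[Reviewer note] The rewritten `prepare_script`/`execute` paths above lean on two small idioms: stripping CR characters from the uploaded script and adding the execute bit via `os.stat`/`os.chmod` without clobbering the other mode bits. A standalone sketch of both on a throwaway file (helper names are illustrative):

```python
import os
import stat
import tempfile

def make_executable(path):
    """Add the owner execute bit, preserving existing mode bits,
    mirroring the os.stat/os.chmod idiom in the diff above."""
    st = os.stat(path)
    os.chmod(path, st.st_mode | stat.S_IEXEC)

def strip_carriage_returns(path):
    """Rewrite a script in place with CR characters removed."""
    with open(path) as f:
        content = f.read().replace('\r', '')
    with open(path, 'w') as f:
        f.write(content)

# Demonstration on a temporary file:
fd, path = tempfile.mkstemp()
os.close(fd)
with open(path, 'w') as f:
    f.write('#!/bin/sh\r\necho hello\r\n')
strip_carriage_returns(path)
make_executable(path)
executable = bool(os.stat(path).st_mode & stat.S_IEXEC)
content = open(path).read()
os.unlink(path)
```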
3846diff --git a/cloudinit/sources/tests/test_oracle.py b/cloudinit/sources/tests/test_oracle.py
3847index 97d6294..3ddf7df 100644
3848--- a/cloudinit/sources/tests/test_oracle.py
3849+++ b/cloudinit/sources/tests/test_oracle.py
3850@@ -1,7 +1,7 @@
3851 # This file is part of cloud-init. See LICENSE file for license information.
3852
3853 from cloudinit.sources import DataSourceOracle as oracle
3854-from cloudinit.sources import BrokenMetadata
3855+from cloudinit.sources import BrokenMetadata, NetworkConfigSource
3856 from cloudinit import helpers
3857
3858 from cloudinit.tests import helpers as test_helpers
3859@@ -18,10 +18,52 @@ import uuid
3860 DS_PATH = "cloudinit.sources.DataSourceOracle"
3861 MD_VER = "2013-10-17"
3862
3863+# `curl -L http://169.254.169.254/opc/v1/vnics/` on an Oracle Bare Metal Machine
3864+# with a secondary VNIC attached (vnicId truncated for Python line length)
3865+OPC_BM_SECONDARY_VNIC_RESPONSE = """\
3866+[ {
3867+ "vnicId" : "ocid1.vnic.oc1.phx.abyhqljtyvcucqkhdqmgjszebxe4hrb!!TRUNCATED||",
3868+ "privateIp" : "10.0.0.8",
3869+ "vlanTag" : 0,
3870+ "macAddr" : "90:e2:ba:d4:f1:68",
3871+ "virtualRouterIp" : "10.0.0.1",
3872+ "subnetCidrBlock" : "10.0.0.0/24",
3873+ "nicIndex" : 0
3874+}, {
3875+ "vnicId" : "ocid1.vnic.oc1.phx.abyhqljtfmkxjdy2sqidndiwrsg63zf!!TRUNCATED||",
3876+ "privateIp" : "10.0.4.5",
3877+ "vlanTag" : 1,
3878+ "macAddr" : "02:00:17:05:CF:51",
3879+ "virtualRouterIp" : "10.0.4.1",
3880+ "subnetCidrBlock" : "10.0.4.0/24",
3881+ "nicIndex" : 0
3882+} ]"""
3883+
3884+# `curl -L http://169.254.169.254/opc/v1/vnics/` on an Oracle Virtual Machine
3885+# with a secondary VNIC attached
3886+OPC_VM_SECONDARY_VNIC_RESPONSE = """\
3887+[ {
3888+ "vnicId" : "ocid1.vnic.oc1.phx.abyhqljtch72z5pd76cc2636qeqh7z_truncated",
3889+ "privateIp" : "10.0.0.230",
3890+ "vlanTag" : 1039,
3891+ "macAddr" : "02:00:17:05:D1:DB",
3892+ "virtualRouterIp" : "10.0.0.1",
3893+ "subnetCidrBlock" : "10.0.0.0/24"
3894+}, {
3895+ "vnicId" : "ocid1.vnic.oc1.phx.abyhqljt4iew3gwmvrwrhhf3bp5drj_truncated",
3896+ "privateIp" : "10.0.0.231",
3897+ "vlanTag" : 1041,
3898+ "macAddr" : "00:00:17:02:2B:B1",
3899+ "virtualRouterIp" : "10.0.0.1",
3900+ "subnetCidrBlock" : "10.0.0.0/24"
3901+} ]"""
3902+
3903
3904 class TestDataSourceOracle(test_helpers.CiTestCase):
3905 """Test datasource DataSourceOracle."""
3906
3907+ with_logs = True
3908+
3909 ds_class = oracle.DataSourceOracle
3910
3911 my_uuid = str(uuid.uuid4())
3912@@ -79,6 +121,16 @@ class TestDataSourceOracle(test_helpers.CiTestCase):
3913 self.assertEqual(
3914 'metadata (http://169.254.169.254/openstack/)', ds.subplatform)
3915
3916+ def test_sys_cfg_can_enable_configure_secondary_nics(self):
3917+ # Confirm that behaviour is toggled by sys_cfg
3918+ ds, _mocks = self._get_ds()
3919+ self.assertFalse(ds.ds_cfg['configure_secondary_nics'])
3920+
3921+ sys_cfg = {
3922+ 'datasource': {'Oracle': {'configure_secondary_nics': True}}}
3923+ ds, _mocks = self._get_ds(sys_cfg=sys_cfg)
3924+ self.assertTrue(ds.ds_cfg['configure_secondary_nics'])
3925+
3926 @mock.patch(DS_PATH + "._is_iscsi_root", return_value=True)
3927 def test_without_userdata(self, m_is_iscsi_root):
3928 """If no user-data is provided, it should not be in return dict."""
3929@@ -133,9 +185,12 @@ class TestDataSourceOracle(test_helpers.CiTestCase):
3930 self.assertEqual(self.my_md['uuid'], ds.get_instance_id())
3931 self.assertEqual(my_userdata, ds.userdata_raw)
3932
3933- @mock.patch(DS_PATH + ".cmdline.read_kernel_cmdline_config")
3934+ @mock.patch(DS_PATH + "._add_network_config_from_opc_imds",
3935+ side_effect=lambda network_config: network_config)
3936+ @mock.patch(DS_PATH + ".cmdline.read_initramfs_config")
3937 @mock.patch(DS_PATH + "._is_iscsi_root", return_value=True)
3938- def test_network_cmdline(self, m_is_iscsi_root, m_cmdline_config):
3939+ def test_network_cmdline(self, m_is_iscsi_root, m_initramfs_config,
3940+ _m_add_network_config_from_opc_imds):
3941 """network_config should read kernel cmdline."""
3942 distro = mock.MagicMock()
3943 ds, _ = self._get_ds(distro=distro, patches={
3944@@ -145,15 +200,18 @@ class TestDataSourceOracle(test_helpers.CiTestCase):
3945 MD_VER: {'system_uuid': self.my_uuid,
3946 'meta_data': self.my_md}}}})
3947 ncfg = {'version': 1, 'config': [{'a': 'b'}]}
3948- m_cmdline_config.return_value = ncfg
3949+ m_initramfs_config.return_value = ncfg
3950 self.assertTrue(ds._get_data())
3951 self.assertEqual(ncfg, ds.network_config)
3952- m_cmdline_config.assert_called_once_with()
3953+ self.assertEqual([mock.call()], m_initramfs_config.call_args_list)
3954 self.assertFalse(distro.generate_fallback_config.called)
3955
3956- @mock.patch(DS_PATH + ".cmdline.read_kernel_cmdline_config")
3957+ @mock.patch(DS_PATH + "._add_network_config_from_opc_imds",
3958+ side_effect=lambda network_config: network_config)
3959+ @mock.patch(DS_PATH + ".cmdline.read_initramfs_config")
3960 @mock.patch(DS_PATH + "._is_iscsi_root", return_value=True)
3961- def test_network_fallback(self, m_is_iscsi_root, m_cmdline_config):
3962+ def test_network_fallback(self, m_is_iscsi_root, m_initramfs_config,
3963+ _m_add_network_config_from_opc_imds):
3964 """test that fallback network is generated if no kernel cmdline."""
3965 distro = mock.MagicMock()
3966 ds, _ = self._get_ds(distro=distro, patches={
3967@@ -163,18 +221,95 @@ class TestDataSourceOracle(test_helpers.CiTestCase):
3968 MD_VER: {'system_uuid': self.my_uuid,
3969 'meta_data': self.my_md}}}})
3970 ncfg = {'version': 1, 'config': [{'a': 'b'}]}
3971- m_cmdline_config.return_value = None
3972+ m_initramfs_config.return_value = None
3973 self.assertTrue(ds._get_data())
3974 ncfg = {'version': 1, 'config': [{'distro1': 'value'}]}
3975 distro.generate_fallback_config.return_value = ncfg
3976 self.assertEqual(ncfg, ds.network_config)
3977- m_cmdline_config.assert_called_once_with()
3978+ self.assertEqual([mock.call()], m_initramfs_config.call_args_list)
3979 distro.generate_fallback_config.assert_called_once_with()
3980- self.assertEqual(1, m_cmdline_config.call_count)
3981
3982 # test that the result got cached, and the methods not re-called.
3983 self.assertEqual(ncfg, ds.network_config)
3984- self.assertEqual(1, m_cmdline_config.call_count)
3985+ self.assertEqual(1, m_initramfs_config.call_count)
3986+
3987+ @mock.patch(DS_PATH + "._add_network_config_from_opc_imds")
3988+ @mock.patch(DS_PATH + ".cmdline.read_initramfs_config",
3989+ return_value={'some': 'config'})
3990+ @mock.patch(DS_PATH + "._is_iscsi_root", return_value=True)
3991+ def test_secondary_nics_added_to_network_config_if_enabled(
3992+ self, _m_is_iscsi_root, _m_initramfs_config,
3993+ m_add_network_config_from_opc_imds):
3994+
3995+ needle = object()
3996+
3997+ def network_config_side_effect(network_config):
3998+ network_config['secondary_added'] = needle
3999+
4000+ m_add_network_config_from_opc_imds.side_effect = (
4001+ network_config_side_effect)
4002+
4003+ distro = mock.MagicMock()
4004+ ds, _ = self._get_ds(distro=distro, patches={
4005+ '_is_platform_viable': {'return_value': True},
4006+ 'crawl_metadata': {
4007+ 'return_value': {
4008+ MD_VER: {'system_uuid': self.my_uuid,
4009+ 'meta_data': self.my_md}}}})
4010+ ds.ds_cfg['configure_secondary_nics'] = True
4011+ self.assertEqual(needle, ds.network_config['secondary_added'])
4012+
4013+ @mock.patch(DS_PATH + "._add_network_config_from_opc_imds")
4014+ @mock.patch(DS_PATH + ".cmdline.read_initramfs_config",
4015+ return_value={'some': 'config'})
4016+ @mock.patch(DS_PATH + "._is_iscsi_root", return_value=True)
4017+ def test_secondary_nics_not_added_to_network_config_by_default(
4018+ self, _m_is_iscsi_root, _m_initramfs_config,
4019+ m_add_network_config_from_opc_imds):
4020+
4021+ def network_config_side_effect(network_config):
4022+ network_config['secondary_added'] = True
4023+
4024+ m_add_network_config_from_opc_imds.side_effect = (
4025+ network_config_side_effect)
4026+
4027+ distro = mock.MagicMock()
4028+ ds, _ = self._get_ds(distro=distro, patches={
4029+ '_is_platform_viable': {'return_value': True},
4030+ 'crawl_metadata': {
4031+ 'return_value': {
4032+ MD_VER: {'system_uuid': self.my_uuid,
4033+ 'meta_data': self.my_md}}}})
4034+ self.assertNotIn('secondary_added', ds.network_config)
4035+
4036+ @mock.patch(DS_PATH + "._add_network_config_from_opc_imds")
4037+ @mock.patch(DS_PATH + ".cmdline.read_initramfs_config")
4038+ @mock.patch(DS_PATH + "._is_iscsi_root", return_value=True)
4039+ def test_secondary_nic_failure_isnt_blocking(
4040+ self, _m_is_iscsi_root, m_initramfs_config,
4041+ m_add_network_config_from_opc_imds):
4042+
4043+ m_add_network_config_from_opc_imds.side_effect = Exception()
4044+
4045+ distro = mock.MagicMock()
4046+ ds, _ = self._get_ds(distro=distro, patches={
4047+ '_is_platform_viable': {'return_value': True},
4048+ 'crawl_metadata': {
4049+ 'return_value': {
4050+ MD_VER: {'system_uuid': self.my_uuid,
4051+ 'meta_data': self.my_md}}}})
4052+ ds.ds_cfg['configure_secondary_nics'] = True
4053+ self.assertEqual(ds.network_config, m_initramfs_config.return_value)
4054+ self.assertIn('Failed to fetch secondary network configuration',
4055+ self.logs.getvalue())
4056+
4057+ def test_ds_network_cfg_preferred_over_initramfs(self):
4058+ """Ensure that DS net config is preferred over initramfs config"""
4059+ network_config_sources = oracle.DataSourceOracle.network_config_sources
4060+ self.assertLess(
4061+ network_config_sources.index(NetworkConfigSource.ds),
4062+ network_config_sources.index(NetworkConfigSource.initramfs)
4063+ )
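The `test_secondary_nic_failure_isnt_blocking` case above pins down the key design choice: secondary-NIC enrichment must never prevent boot, so a fetch failure is logged and the initramfs-derived config is returned untouched. A minimal sketch of that guard (the `fetch_secondary` stand-in and function names here are illustrative, not the datasource's real helpers):

```python
import logging

LOG = logging.getLogger(__name__)


def fetch_secondary(network_config):
    # Stand-in for the IMDS call; here it always fails.
    raise Exception('IMDS unreachable')


def build_network_config(base_config, configure_secondary_nics=False):
    """Enrich base_config with secondary NICs, but never block on failure."""
    if configure_secondary_nics:
        try:
            fetch_secondary(base_config)
        except Exception:
            # Matches the message asserted in the test above.
            LOG.exception('Failed to fetch secondary network configuration')
    return base_config


cfg = build_network_config({'version': 1, 'config': []},
                           configure_secondary_nics=True)
# cfg is the unmodified base config despite the fetch failure.
```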
4064
4065
4066 @mock.patch(DS_PATH + "._read_system_uuid", return_value=str(uuid.uuid4()))
4067@@ -336,4 +471,86 @@ class TestLoadIndex(test_helpers.CiTestCase):
4068 oracle._load_index("\n".join(["meta_data.json", "user_data"])))
4069
4070
4071+class TestNetworkConfigFromOpcImds(test_helpers.CiTestCase):
4072+
4073+ with_logs = True
4074+
4075+ def setUp(self):
4076+ super(TestNetworkConfigFromOpcImds, self).setUp()
4077+ self.add_patch(DS_PATH + '.readurl', 'm_readurl')
4078+ self.add_patch(DS_PATH + '.get_interfaces_by_mac',
4079+ 'm_get_interfaces_by_mac')
4080+
4081+ def test_failure_to_readurl(self):
4082+ # readurl failures should just bubble out to the caller
4083+ self.m_readurl.side_effect = Exception('oh no')
4084+ with self.assertRaises(Exception) as excinfo:
4085+ oracle._add_network_config_from_opc_imds({})
4086+ self.assertEqual(str(excinfo.exception), 'oh no')
4087+
4088+ def test_empty_response(self):
4089+ # empty response error should just bubble out to the caller
4090+ self.m_readurl.return_value = ''
4091+ with self.assertRaises(Exception):
4092+ oracle._add_network_config_from_opc_imds([])
4093+
4094+ def test_invalid_json(self):
4095+ # invalid JSON error should just bubble out to the caller
4096+ self.m_readurl.return_value = '{'
4097+ with self.assertRaises(Exception):
4098+ oracle._add_network_config_from_opc_imds([])
4099+
4100+ def test_no_secondary_nics_does_not_mutate_input(self):
4101+ self.m_readurl.return_value = json.dumps([{}])
4102+ # We test this by passing in a non-dict to ensure that no dict
4103+ # operations are used; failure would be seen as exceptions
4104+ oracle._add_network_config_from_opc_imds(object())
4105+
4106+ def test_bare_metal_machine_skipped(self):
4107+ # nicIndex in the first entry indicates a bare metal machine
4108+ self.m_readurl.return_value = OPC_BM_SECONDARY_VNIC_RESPONSE
4109+ # We test this by passing in a non-dict to ensure that no dict
4110+ # operations are used
4111+ self.assertFalse(oracle._add_network_config_from_opc_imds(object()))
4112+ self.assertIn('bare metal machine', self.logs.getvalue())
4113+
4114+ def test_missing_mac_skipped(self):
4115+ self.m_readurl.return_value = OPC_VM_SECONDARY_VNIC_RESPONSE
4116+ self.m_get_interfaces_by_mac.return_value = {}
4117+
4118+ network_config = {'version': 1, 'config': [{'primary': 'nic'}]}
4119+ oracle._add_network_config_from_opc_imds(network_config)
4120+
4121+ self.assertEqual(1, len(network_config['config']))
4122+ self.assertIn(
4123+ 'Interface with MAC 00:00:17:02:2b:b1 not found; skipping',
4124+ self.logs.getvalue())
4125+
4126+ def test_secondary_nic(self):
4127+ self.m_readurl.return_value = OPC_VM_SECONDARY_VNIC_RESPONSE
4128+ mac_addr, nic_name = '00:00:17:02:2b:b1', 'ens3'
4129+ self.m_get_interfaces_by_mac.return_value = {
4130+ mac_addr: nic_name,
4131+ }
4132+
4133+ network_config = {'version': 1, 'config': [{'primary': 'nic'}]}
4134+ oracle._add_network_config_from_opc_imds(network_config)
4135+
4136+ # The input is mutated
4137+ self.assertEqual(2, len(network_config['config']))
4138+
4139+ secondary_nic_cfg = network_config['config'][1]
4140+ self.assertEqual(nic_name, secondary_nic_cfg['name'])
4141+ self.assertEqual('physical', secondary_nic_cfg['type'])
4142+ self.assertEqual(mac_addr, secondary_nic_cfg['mac_address'])
4143+ self.assertEqual(9000, secondary_nic_cfg['mtu'])
4144+
4145+ self.assertEqual(1, len(secondary_nic_cfg['subnets']))
4146+ subnet_cfg = secondary_nic_cfg['subnets'][0]
4147+ # These values are hard-coded in OPC_VM_SECONDARY_VNIC_RESPONSE
4148+ self.assertEqual('10.0.0.231', subnet_cfg['address'])
4149+ self.assertEqual('24', subnet_cfg['netmask'])
4150+ self.assertEqual('10.0.0.1', subnet_cfg['gateway'])
4151+ self.assertEqual('manual', subnet_cfg['control'])
4152+
4153 # vi: ts=4 expandtab
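Taken together, the assertions in `test_secondary_nic` and `test_missing_mac_skipped` describe the v1 entry built for each secondary VNIC. A sketch of that translation follows; the output keys mirror the test assertions, while the input `vnic` dict keys are illustrative stand-ins (the real IMDS field names are not shown in this diff):

```python
def secondary_nic_to_v1(vnic, interfaces_by_mac):
    """Build a v1 'physical' entry for one secondary VNIC, or None.

    Returns None when no local interface matches the VNIC's MAC,
    mirroring the 'Interface with MAC ... not found; skipping' case.
    """
    name = interfaces_by_mac.get(vnic['mac'])
    if name is None:
        return None
    return {
        'name': name,
        'type': 'physical',
        'mac_address': vnic['mac'],
        'mtu': 9000,  # the tests pin the MTU at 9000
        'subnets': [{
            'address': vnic['ip'],
            'netmask': vnic['netmask'],
            'gateway': vnic['gateway'],
            # 'manual' keeps the secondary NIC from being brought up
            # automatically by the renderer.
            'control': 'manual',
        }],
    }


cfg = secondary_nic_to_v1(
    {'mac': '00:00:17:02:2b:b1', 'ip': '10.0.0.231',
     'netmask': '24', 'gateway': '10.0.0.1'},
    {'00:00:17:02:2b:b1': 'ens3'})
```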
4154diff --git a/cloudinit/stages.py b/cloudinit/stages.py
4155index da7d349..5012988 100644
4156--- a/cloudinit/stages.py
4157+++ b/cloudinit/stages.py
4158@@ -24,6 +24,7 @@ from cloudinit.handlers.shell_script import ShellScriptPartHandler
4159 from cloudinit.handlers.upstart_job import UpstartJobPartHandler
4160
4161 from cloudinit.event import EventType
4162+from cloudinit.sources import NetworkConfigSource
4163
4164 from cloudinit import cloud
4165 from cloudinit import config
4166@@ -630,32 +631,54 @@ class Init(object):
4167 if os.path.exists(disable_file):
4168 return (None, disable_file)
4169
4170- cmdline_cfg = ('cmdline', cmdline.read_kernel_cmdline_config())
4171- dscfg = ('ds', None)
4172+ available_cfgs = {
4173+ NetworkConfigSource.cmdline: cmdline.read_kernel_cmdline_config(),
4174+ NetworkConfigSource.initramfs: cmdline.read_initramfs_config(),
4175+ NetworkConfigSource.ds: None,
4176+ NetworkConfigSource.system_cfg: self.cfg.get('network'),
4177+ }
4178+
4179 if self.datasource and hasattr(self.datasource, 'network_config'):
4180- dscfg = ('ds', self.datasource.network_config)
4181- sys_cfg = ('system_cfg', self.cfg.get('network'))
4182+ available_cfgs[NetworkConfigSource.ds] = (
4183+ self.datasource.network_config)
4184
4185- for loc, ncfg in (cmdline_cfg, sys_cfg, dscfg):
4186+ if self.datasource:
4187+ order = self.datasource.network_config_sources
4188+ else:
4189+ order = sources.DataSource.network_config_sources
4190+ for cfg_source in order:
4191+ if not hasattr(NetworkConfigSource, cfg_source):
4192+ LOG.warning('data source specifies an invalid network'
4193+ ' cfg_source: %s', cfg_source)
4194+ continue
4195+ if cfg_source not in available_cfgs:
4196+ LOG.warning('data source specifies an unavailable network'
4197+ ' cfg_source: %s', cfg_source)
4198+ continue
4199+ ncfg = available_cfgs[cfg_source]
4200 if net.is_disabled_cfg(ncfg):
4201- LOG.debug("network config disabled by %s", loc)
4202- return (None, loc)
4203+ LOG.debug("network config disabled by %s", cfg_source)
4204+ return (None, cfg_source)
4205 if ncfg:
4206- return (ncfg, loc)
4207- return (self.distro.generate_fallback_config(), "fallback")
4208-
4209- def apply_network_config(self, bring_up):
4210- netcfg, src = self._find_networking_config()
4211- if netcfg is None:
4212- LOG.info("network config is disabled by %s", src)
4213- return
4214+ return (ncfg, cfg_source)
4215+ return (self.distro.generate_fallback_config(),
4216+ NetworkConfigSource.fallback)
4217
4218+ def _apply_netcfg_names(self, netcfg):
4219 try:
4220 LOG.debug("applying net config names for %s", netcfg)
4221 self.distro.apply_network_config_names(netcfg)
4222 except Exception as e:
4223 LOG.warning("Failed to rename devices: %s", e)
4224
4225+ def apply_network_config(self, bring_up):
4226+ # get a network config
4227+ netcfg, src = self._find_networking_config()
4228+ if netcfg is None:
4229+ LOG.info("network config is disabled by %s", src)
4230+ return
4231+
4232+ # request an update if needed/available
4233 if self.datasource is not NULL_DATA_SOURCE:
4234 if not self.is_new_instance():
4235 if not self.datasource.update_metadata([EventType.BOOT]):
4236@@ -663,8 +686,20 @@ class Init(object):
4237 "No network config applied. Neither a new instance"
4238 " nor datasource network update on '%s' event",
4239 EventType.BOOT)
4240+ # nothing new, but ensure proper names
4241+ self._apply_netcfg_names(netcfg)
4242 return
4243+ else:
4244+ # refresh netcfg after update
4245+ netcfg, src = self._find_networking_config()
4246+
4247+ # ensure all physical devices in config are present
4248+ net.wait_for_physdevs(netcfg)
4249+
4250+ # apply renames from config
4251+ self._apply_netcfg_names(netcfg)
4252
4253+ # rendering config
4254 LOG.info("Applying network configuration from %s bringup=%s: %s",
4255 src, bring_up, netcfg)
4256 try:
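The stages.py hunk above replaces a fixed cmdline/system/ds precedence with an order supplied by the datasource. A standalone sketch of that loop (source names follow the diff; the configs and the string `'fallback'` sentinel here are stand-ins for `NetworkConfigSource` members):

```python
def is_disabled_cfg(cfg):
    # Mirrors net.is_disabled_cfg: a config of {'config': 'disabled'}
    # explicitly turns networking off.
    return bool(cfg) and cfg.get('config') == 'disabled'


def find_networking_config(order, available_cfgs, fallback):
    """Return (config, source) honoring a datasource-defined order."""
    for cfg_source in order:
        if cfg_source not in available_cfgs:
            continue  # stages.py logs a warning for unknown sources
        ncfg = available_cfgs[cfg_source]
        if is_disabled_cfg(ncfg):
            return (None, cfg_source)  # an explicit disable wins
        if ncfg:
            return (ncfg, cfg_source)
    return (fallback, 'fallback')


available = {
    'cmdline': {},                        # kernel cmdline: nothing
    'initramfs': {'config': ['initrd']},  # initramfs provides config
    'ds': {'config': ['datasource']},
}
# The default order prefers initramfs config over the datasource...
res_default = find_networking_config(
    ['cmdline', 'initramfs', 'ds'], available, {'config': 'fb'})
# ...but a datasource (as DataSourceOracle does above) can list
# itself first and have its own config preferred.
res_ds_first = find_networking_config(
    ['ds', 'initramfs', 'cmdline'], available, {'config': 'fb'})
```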
4257diff --git a/cloudinit/tests/helpers.py b/cloudinit/tests/helpers.py
4258index f41180f..23fddd0 100644
4259--- a/cloudinit/tests/helpers.py
4260+++ b/cloudinit/tests/helpers.py
4261@@ -198,7 +198,8 @@ class CiTestCase(TestCase):
4262 prefix="ci-%s." % self.__class__.__name__)
4263 else:
4264 tmpd = tempfile.mkdtemp(dir=dir)
4265- self.addCleanup(functools.partial(shutil.rmtree, tmpd))
4266+ self.addCleanup(
4267+ functools.partial(shutil.rmtree, tmpd, ignore_errors=True))
4268 return tmpd
4269
4270 def tmp_path(self, path, dir=None):
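The helpers.py change above registers the tmpdir cleanup with `ignore_errors=True`, so a test that removes its own tmpdir no longer fails during teardown. A quick demonstration of that behavior:

```python
import functools
import os
import shutil
import tempfile

tmpd = tempfile.mkdtemp()
cleanup = functools.partial(shutil.rmtree, tmpd, ignore_errors=True)

shutil.rmtree(tmpd)  # the test itself removes the directory first
cleanup()            # previously raised FileNotFoundError; now a no-op
removed = not os.path.exists(tmpd)
```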
4271diff --git a/cloudinit/tests/test_stages.py b/cloudinit/tests/test_stages.py
4272index 94b6b25..d5c9c0e 100644
4273--- a/cloudinit/tests/test_stages.py
4274+++ b/cloudinit/tests/test_stages.py
4275@@ -6,6 +6,7 @@ import os
4276
4277 from cloudinit import stages
4278 from cloudinit import sources
4279+from cloudinit.sources import NetworkConfigSource
4280
4281 from cloudinit.event import EventType
4282 from cloudinit.util import write_file
4283@@ -37,6 +38,7 @@ class FakeDataSource(sources.DataSource):
4284
4285 class TestInit(CiTestCase):
4286 with_logs = True
4287+ allowed_subp = False
4288
4289 def setUp(self):
4290 super(TestInit, self).setUp()
4291@@ -57,84 +59,189 @@ class TestInit(CiTestCase):
4292 (None, disable_file),
4293 self.init._find_networking_config())
4294
4295+ @mock.patch('cloudinit.stages.cmdline.read_initramfs_config')
4296 @mock.patch('cloudinit.stages.cmdline.read_kernel_cmdline_config')
4297- def test_wb__find_networking_config_disabled_by_kernel(self, m_cmdline):
4298+ def test_wb__find_networking_config_disabled_by_kernel(
4299+ self, m_cmdline, m_initramfs):
4300 """find_networking_config returns when disabled by kernel cmdline."""
4301 m_cmdline.return_value = {'config': 'disabled'}
4302+ m_initramfs.return_value = {'config': ['fake_initrd']}
4303 self.assertEqual(
4304- (None, 'cmdline'),
4305+ (None, NetworkConfigSource.cmdline),
4306 self.init._find_networking_config())
4307 self.assertEqual('DEBUG: network config disabled by cmdline\n',
4308 self.logs.getvalue())
4309
4310+ @mock.patch('cloudinit.stages.cmdline.read_initramfs_config')
4311 @mock.patch('cloudinit.stages.cmdline.read_kernel_cmdline_config')
4312- def test_wb__find_networking_config_disabled_by_datasrc(self, m_cmdline):
4313+ def test_wb__find_networking_config_disabled_by_initrd(
4314+ self, m_cmdline, m_initramfs):
 4315+        """find_networking_config returns when disabled by initramfs."""
4316+ m_cmdline.return_value = {}
4317+ m_initramfs.return_value = {'config': 'disabled'}
4318+ self.assertEqual(
4319+ (None, NetworkConfigSource.initramfs),
4320+ self.init._find_networking_config())
4321+ self.assertEqual('DEBUG: network config disabled by initramfs\n',
4322+ self.logs.getvalue())
4323+
4324+ @mock.patch('cloudinit.stages.cmdline.read_initramfs_config')
4325+ @mock.patch('cloudinit.stages.cmdline.read_kernel_cmdline_config')
4326+ def test_wb__find_networking_config_disabled_by_datasrc(
4327+ self, m_cmdline, m_initramfs):
4328 """find_networking_config returns when disabled by datasource cfg."""
4329 m_cmdline.return_value = {} # Kernel doesn't disable networking
4330+ m_initramfs.return_value = {} # initramfs doesn't disable networking
4331 self.init._cfg = {'system_info': {'paths': {'cloud_dir': self.tmpdir}},
4332 'network': {}} # system config doesn't disable
4333
4334 self.init.datasource = FakeDataSource(
4335 network_config={'config': 'disabled'})
4336 self.assertEqual(
4337- (None, 'ds'),
4338+ (None, NetworkConfigSource.ds),
4339 self.init._find_networking_config())
4340 self.assertEqual('DEBUG: network config disabled by ds\n',
4341 self.logs.getvalue())
4342
4343+ @mock.patch('cloudinit.stages.cmdline.read_initramfs_config')
4344 @mock.patch('cloudinit.stages.cmdline.read_kernel_cmdline_config')
4345- def test_wb__find_networking_config_disabled_by_sysconfig(self, m_cmdline):
4346+ def test_wb__find_networking_config_disabled_by_sysconfig(
4347+ self, m_cmdline, m_initramfs):
4348 """find_networking_config returns when disabled by system config."""
4349 m_cmdline.return_value = {} # Kernel doesn't disable networking
4350+ m_initramfs.return_value = {} # initramfs doesn't disable networking
4351 self.init._cfg = {'system_info': {'paths': {'cloud_dir': self.tmpdir}},
4352 'network': {'config': 'disabled'}}
4353 self.assertEqual(
4354- (None, 'system_cfg'),
4355+ (None, NetworkConfigSource.system_cfg),
4356 self.init._find_networking_config())
4357 self.assertEqual('DEBUG: network config disabled by system_cfg\n',
4358 self.logs.getvalue())
4359
4360+ @mock.patch('cloudinit.stages.cmdline.read_initramfs_config')
4361+ @mock.patch('cloudinit.stages.cmdline.read_kernel_cmdline_config')
4362+ def test__find_networking_config_uses_datasrc_order(
4363+ self, m_cmdline, m_initramfs):
4364+ """find_networking_config should check sources in DS defined order"""
4365+ # cmdline and initramfs, which would normally be preferred over other
4366+ # sources, disable networking; in this case, though, the DS moves them
4367+ # later so its own config is preferred
4368+ m_cmdline.return_value = {'config': 'disabled'}
4369+ m_initramfs.return_value = {'config': 'disabled'}
4370+
4371+ ds_net_cfg = {'config': {'needle': True}}
4372+ self.init.datasource = FakeDataSource(network_config=ds_net_cfg)
4373+ self.init.datasource.network_config_sources = [
4374+ NetworkConfigSource.ds, NetworkConfigSource.system_cfg,
4375+ NetworkConfigSource.cmdline, NetworkConfigSource.initramfs]
4376+
4377+ self.assertEqual(
4378+ (ds_net_cfg, NetworkConfigSource.ds),
4379+ self.init._find_networking_config())
4380+
4381+ @mock.patch('cloudinit.stages.cmdline.read_initramfs_config')
4382+ @mock.patch('cloudinit.stages.cmdline.read_kernel_cmdline_config')
4383+ def test__find_networking_config_warns_if_datasrc_uses_invalid_src(
4384+ self, m_cmdline, m_initramfs):
 4385+        """find_networking_config warns when the DS names an invalid source."""

4386+ ds_net_cfg = {'config': {'needle': True}}
4387+ self.init.datasource = FakeDataSource(network_config=ds_net_cfg)
4388+ self.init.datasource.network_config_sources = [
4389+ 'invalid_src', NetworkConfigSource.ds]
4390+
4391+ self.assertEqual(
4392+ (ds_net_cfg, NetworkConfigSource.ds),
4393+ self.init._find_networking_config())
4394+ self.assertIn('WARNING: data source specifies an invalid network'
4395+ ' cfg_source: invalid_src',
4396+ self.logs.getvalue())
4397+
4398+ @mock.patch('cloudinit.stages.cmdline.read_initramfs_config')
4399 @mock.patch('cloudinit.stages.cmdline.read_kernel_cmdline_config')
4400- def test_wb__find_networking_config_returns_kernel(self, m_cmdline):
4401+ def test__find_networking_config_warns_if_datasrc_uses_unavailable_src(
4402+ self, m_cmdline, m_initramfs):
 4403+        """find_networking_config warns when the DS names an unavailable source."""
4404+ ds_net_cfg = {'config': {'needle': True}}
4405+ self.init.datasource = FakeDataSource(network_config=ds_net_cfg)
4406+ self.init.datasource.network_config_sources = [
4407+ NetworkConfigSource.fallback, NetworkConfigSource.ds]
4408+
4409+ self.assertEqual(
4410+ (ds_net_cfg, NetworkConfigSource.ds),
4411+ self.init._find_networking_config())
4412+ self.assertIn('WARNING: data source specifies an unavailable network'
4413+ ' cfg_source: fallback',
4414+ self.logs.getvalue())
4415+
4416+ @mock.patch('cloudinit.stages.cmdline.read_initramfs_config')
4417+ @mock.patch('cloudinit.stages.cmdline.read_kernel_cmdline_config')
4418+ def test_wb__find_networking_config_returns_kernel(
4419+ self, m_cmdline, m_initramfs):
4420 """find_networking_config returns kernel cmdline config if present."""
4421 expected_cfg = {'config': ['fakekernel']}
4422 m_cmdline.return_value = expected_cfg
4423+ m_initramfs.return_value = {'config': ['fake_initrd']}
4424 self.init._cfg = {'system_info': {'paths': {'cloud_dir': self.tmpdir}},
4425 'network': {'config': ['fakesys_config']}}
4426 self.init.datasource = FakeDataSource(
4427 network_config={'config': ['fakedatasource']})
4428 self.assertEqual(
4429- (expected_cfg, 'cmdline'),
4430+ (expected_cfg, NetworkConfigSource.cmdline),
4431 self.init._find_networking_config())
4432
4433+ @mock.patch('cloudinit.stages.cmdline.read_initramfs_config')
4434 @mock.patch('cloudinit.stages.cmdline.read_kernel_cmdline_config')
4435- def test_wb__find_networking_config_returns_system_cfg(self, m_cmdline):
4436+ def test_wb__find_networking_config_returns_initramfs(
4437+ self, m_cmdline, m_initramfs):
 4438+        """find_networking_config returns initramfs config if present."""
4439+ expected_cfg = {'config': ['fake_initrd']}
4440+ m_cmdline.return_value = {}
4441+ m_initramfs.return_value = expected_cfg
4442+ self.init._cfg = {'system_info': {'paths': {'cloud_dir': self.tmpdir}},
4443+ 'network': {'config': ['fakesys_config']}}
4444+ self.init.datasource = FakeDataSource(
4445+ network_config={'config': ['fakedatasource']})
4446+ self.assertEqual(
4447+ (expected_cfg, NetworkConfigSource.initramfs),
4448+ self.init._find_networking_config())
4449+
4450+ @mock.patch('cloudinit.stages.cmdline.read_initramfs_config')
4451+ @mock.patch('cloudinit.stages.cmdline.read_kernel_cmdline_config')
4452+ def test_wb__find_networking_config_returns_system_cfg(
4453+ self, m_cmdline, m_initramfs):
4454 """find_networking_config returns system config when present."""
4455 m_cmdline.return_value = {} # No kernel network config
4456+ m_initramfs.return_value = {} # no initramfs network config
4457 expected_cfg = {'config': ['fakesys_config']}
4458 self.init._cfg = {'system_info': {'paths': {'cloud_dir': self.tmpdir}},
4459 'network': expected_cfg}
4460 self.init.datasource = FakeDataSource(
4461 network_config={'config': ['fakedatasource']})
4462 self.assertEqual(
4463- (expected_cfg, 'system_cfg'),
4464+ (expected_cfg, NetworkConfigSource.system_cfg),
4465 self.init._find_networking_config())
4466
4467+ @mock.patch('cloudinit.stages.cmdline.read_initramfs_config')
4468 @mock.patch('cloudinit.stages.cmdline.read_kernel_cmdline_config')
4469- def test_wb__find_networking_config_returns_datasrc_cfg(self, m_cmdline):
4470+ def test_wb__find_networking_config_returns_datasrc_cfg(
4471+ self, m_cmdline, m_initramfs):
4472 """find_networking_config returns datasource net config if present."""
4473 m_cmdline.return_value = {} # No kernel network config
4474+ m_initramfs.return_value = {} # no initramfs network config
4475 # No system config for network in setUp
4476 expected_cfg = {'config': ['fakedatasource']}
4477 self.init.datasource = FakeDataSource(network_config=expected_cfg)
4478 self.assertEqual(
4479- (expected_cfg, 'ds'),
4480+ (expected_cfg, NetworkConfigSource.ds),
4481 self.init._find_networking_config())
4482
4483+ @mock.patch('cloudinit.stages.cmdline.read_initramfs_config')
4484 @mock.patch('cloudinit.stages.cmdline.read_kernel_cmdline_config')
4485- def test_wb__find_networking_config_returns_fallback(self, m_cmdline):
4486+ def test_wb__find_networking_config_returns_fallback(
4487+ self, m_cmdline, m_initramfs):
4488 """find_networking_config returns fallback config if not defined."""
4489 m_cmdline.return_value = {} # Kernel doesn't disable networking
4490+ m_initramfs.return_value = {} # no initramfs network config
4491 # Neither datasource nor system_info disable or provide network
4492
4493 fake_cfg = {'config': [{'type': 'physical', 'name': 'eth9'}],
4494@@ -147,7 +254,7 @@ class TestInit(CiTestCase):
4495 distro = self.init.distro
4496 distro.generate_fallback_config = fake_generate_fallback
4497 self.assertEqual(
4498- (fake_cfg, 'fallback'),
4499+ (fake_cfg, NetworkConfigSource.fallback),
4500 self.init._find_networking_config())
4501 self.assertNotIn('network config disabled', self.logs.getvalue())
4502
4503@@ -166,8 +273,9 @@ class TestInit(CiTestCase):
4504 'INFO: network config is disabled by %s' % disable_file,
4505 self.logs.getvalue())
4506
4507+ @mock.patch('cloudinit.net.get_interfaces_by_mac')
4508 @mock.patch('cloudinit.distros.ubuntu.Distro')
4509- def test_apply_network_on_new_instance(self, m_ubuntu):
4510+ def test_apply_network_on_new_instance(self, m_ubuntu, m_macs):
4511 """Call distro apply_network_config methods on is_new_instance."""
4512 net_cfg = {
4513 'version': 1, 'config': [
4514@@ -175,7 +283,9 @@ class TestInit(CiTestCase):
4515 'name': 'eth9', 'mac_address': '42:42:42:42:42:42'}]}
4516
4517 def fake_network_config():
4518- return net_cfg, 'fallback'
4519+ return net_cfg, NetworkConfigSource.fallback
4520+
4521+ m_macs.return_value = {'42:42:42:42:42:42': 'eth9'}
4522
4523 self.init._find_networking_config = fake_network_config
4524 self.init.apply_network_config(True)
4525@@ -195,7 +305,7 @@ class TestInit(CiTestCase):
4526 'name': 'eth9', 'mac_address': '42:42:42:42:42:42'}]}
4527
4528 def fake_network_config():
4529- return net_cfg, 'fallback'
4530+ return net_cfg, NetworkConfigSource.fallback
4531
4532 self.init._find_networking_config = fake_network_config
4533 self.init.apply_network_config(True)
4534@@ -206,8 +316,9 @@ class TestInit(CiTestCase):
4535 " nor datasource network update on '%s' event" % EventType.BOOT,
4536 self.logs.getvalue())
4537
4538+ @mock.patch('cloudinit.net.get_interfaces_by_mac')
4539 @mock.patch('cloudinit.distros.ubuntu.Distro')
4540- def test_apply_network_on_datasource_allowed_event(self, m_ubuntu):
4541+ def test_apply_network_on_datasource_allowed_event(self, m_ubuntu, m_macs):
4542 """Apply network if datasource.update_metadata permits BOOT event."""
4543 old_instance_id = os.path.join(
4544 self.init.paths.get_cpath('data'), 'instance-id')
4545@@ -218,7 +329,9 @@ class TestInit(CiTestCase):
4546 'name': 'eth9', 'mac_address': '42:42:42:42:42:42'}]}
4547
4548 def fake_network_config():
4549- return net_cfg, 'fallback'
4550+ return net_cfg, NetworkConfigSource.fallback
4551+
4552+ m_macs.return_value = {'42:42:42:42:42:42': 'eth9'}
4553
4554 self.init._find_networking_config = fake_network_config
4555 self.init.datasource = FakeDataSource(paths=self.init.paths)
4556diff --git a/cloudinit/url_helper.py b/cloudinit/url_helper.py
4557index 0af0d9e..44ee61d 100644
4558--- a/cloudinit/url_helper.py
4559+++ b/cloudinit/url_helper.py
4560@@ -199,18 +199,19 @@ def _get_ssl_args(url, ssl_details):
4561 def readurl(url, data=None, timeout=None, retries=0, sec_between=1,
4562 headers=None, headers_cb=None, ssl_details=None,
4563 check_status=True, allow_redirects=True, exception_cb=None,
4564- session=None, infinite=False, log_req_resp=True):
4565+ session=None, infinite=False, log_req_resp=True,
4566+ request_method=None):
4567 url = _cleanurl(url)
4568 req_args = {
4569 'url': url,
4570 }
4571 req_args.update(_get_ssl_args(url, ssl_details))
4572 req_args['allow_redirects'] = allow_redirects
4573- req_args['method'] = 'GET'
4574+ if not request_method:
4575+ request_method = 'POST' if data else 'GET'
4576+ req_args['method'] = request_method
4577 if timeout is not None:
4578 req_args['timeout'] = max(float(timeout), 0)
4579- if data:
4580- req_args['method'] = 'POST'
4581 # It doesn't seem like config
4582 # was added in older library versions (or newer ones either), thus we
4583 # need to manually do the retries if it wasn't...
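The url_helper.py hunk above lets callers pass an explicit `request_method` to `readurl()`; previously the verb was hard-coded to GET and silently flipped to POST whenever a body was supplied. The selection logic, extracted into a standalone sketch:

```python
def pick_method(data=None, request_method=None):
    """Mirror readurl()'s verb selection after the change above."""
    if not request_method:
        # Old implicit behavior is preserved as the default:
        # a request body implies POST, otherwise GET.
        request_method = 'POST' if data else 'GET'
    return request_method


assert pick_method() == 'GET'
assert pick_method(data=b'payload') == 'POST'
# New: callers can now force a verb regardless of the body.
assert pick_method(data=b'payload', request_method='PUT') == 'PUT'
```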
4584diff --git a/cloudinit/version.py b/cloudinit/version.py
4585index ddcd436..b04b11f 100644
4586--- a/cloudinit/version.py
4587+++ b/cloudinit/version.py
4588@@ -4,7 +4,7 @@
4589 #
4590 # This file is part of cloud-init. See LICENSE file for license information.
4591
4592-__VERSION__ = "19.1"
4593+__VERSION__ = "19.2"
4594 _PACKAGED_VERSION = '@@PACKAGED_VERSION@@'
4595
4596 FEATURES = [
4597diff --git a/debian/changelog b/debian/changelog
4598index 032f711..8ae019f 100644
4599--- a/debian/changelog
4600+++ b/debian/changelog
4601@@ -1,9 +1,66 @@
4602-cloud-init (19.1-1-gbaa47854-0ubuntu1~18.04.2) UNRELEASED; urgency=medium
4603+cloud-init (19.2-21-ge6383719-0ubuntu1~18.04.1) bionic; urgency=medium
4604
4605 * refresh patches:
4606 + debian/patches/ubuntu-advantage-revert-tip.patch
4607-
4608- -- Chad Smith <chad.smith@canonical.com> Tue, 04 Jun 2019 15:01:41 -0600
4609+ * refresh patches:
4610+ + debian/patches/ubuntu-advantage-revert-tip.patch
4611+ * debian/cloud-init.templates: enable Exoscale cloud.
4612+ * New upstream snapshot. (LP: #1841099)
4613+ - ubuntu-drivers: call db_x_loadtemplatefile to accept NVIDIA EULA
4614+ - Add missing #cloud-config comment on first example in documentation.
4615+ [Florian Müller]
4616+ - ubuntu-drivers: emit latelink=true debconf to accept nvidia eula
4617+ - DataSourceOracle: prefer DS network config over initramfs
4618+ - format.rst: add text/jinja2 to list of content types (+ cleanups)
4619+ - Add GitHub pull request template to point people at hacking doc
4620+ - cloudinit/distros/parsers/sys_conf: add docstring to SysConf
4621+ - pyflakes: remove unused variable [Joshua Powers]
4622+ - Azure: Record boot timestamps, system information, and diagnostic events
4623+ [Anh Vo]
4624+ - DataSourceOracle: configure secondary NICs on Virtual Machines
4625+ - distros: fix confusing variable names
4626+ - azure/net: generate_fallback_nic emits network v2 config instead of v1
4627+ - Add support for publishing host keys to GCE guest attributes
4628+ [Rick Wright]
4629+ - New data source for the Exoscale.com cloud platform [Chris Glass]
4630+ - doc: remove intersphinx extension
4631+ - cc_set_passwords: rewrite documentation
4632+ - net/cmdline: split interfaces_by_mac and init network config
4633+ determination
4634+ - stages: allow data sources to override network config source order
4635+ - cloud_tests: updates and fixes
4636+ - Fix bug rendering MTU on bond or vlan when input was netplan.
4637+ [Scott Moser]
4638+ - net: update net sequence, include wait on netdevs, opensuse netrules path
4639+ - Release 19.2
4640+ - net: add rfc3442 (classless static routes) to EphemeralDHCP
4641+ - templates/ntp.conf.debian.tmpl: fix missing newline for pools
4642+ - Support netplan renderer in Arch Linux [Conrad Hoffmann]
4643+ - Fix typo in publicly viewable documentation. [David Medberry]
4644+ - Add a cdrom size checker for OVF ds to ds-identify [Pengpeng Sun]
4645+ - VMWare: Trigger the post customization script via cc_scripts module.
4646+ [Xiaofeng Wang]
4647+ - Cloud-init analyze module: Added ability to analyze boot events.
4648+ [Sam Gilson]
4649+ - Update debian eni network configuration location, retain Ubuntu setting
4650+ [Janos Lenart]
4651+ - net: skip bond interfaces in get_interfaces [Stanislav Makar]
4652+ - Fix a couple of issues raised by a coverity scan
4653+ - Add missing dsname for Hetzner Cloud datasource [Markus Schade]
4654+ - doc: indicate that netplan is default in Ubuntu now
4655+ - azure: add region and AZ properties from imds compute location metadata
4656+ - sysconfig: support more bonding options [Penghui Liao]
4657+ - cloud-init-generator: use libexec path to ds-identify on redhat systems
4658+ - tools/build-on-freebsd: update to python3 [Gonéri Le Bouder]
4659+ - Allow identification of OpenStack by Asset Tag [Mark T. Voelker]
4660+ - Fix spelling error making 'an Ubuntu' consistent. [Brian Murray]
4661+ - run-container: centos: comment out the repo mirrorlist [Paride Legovini]
4662+ - netplan: update netplan key mappings for gratuitous-arp
4663+ - freebsd: fix the name of cloudcfg VARIANT [Gonéri Le Bouder]
4664+ - freebsd: ability to grow root file system [Gonéri Le Bouder]
4665+ - freebsd: NoCloud data source support [Gonéri Le Bouder]
4666+
4667+ -- Chad Smith <chad.smith@canonical.com> Thu, 22 Aug 2019 12:56:36 -0600
4668
4669 cloud-init (19.1-1-gbaa47854-0ubuntu1~18.04.1) bionic; urgency=medium
4670
4671diff --git a/debian/cloud-init.templates b/debian/cloud-init.templates
4672index ef3c3a7..8d37ee5 100644
4673--- a/debian/cloud-init.templates
4674+++ b/debian/cloud-init.templates
4675@@ -1,8 +1,8 @@
4676 Template: cloud-init/datasources
4677 Type: multiselect
4678-Default: NoCloud, ConfigDrive, OpenNebula, DigitalOcean, Azure, AltCloud, OVF, MAAS, GCE, OpenStack, CloudSigma, SmartOS, Bigstep, Scaleway, AliYun, Ec2, CloudStack, Hetzner, IBMCloud, None
4679-Choices-C: NoCloud, ConfigDrive, OpenNebula, DigitalOcean, Azure, AltCloud, OVF, MAAS, GCE, OpenStack, CloudSigma, SmartOS, Bigstep, Scaleway, AliYun, Ec2, CloudStack, Hetzner, IBMCloud, None
4680-Choices: NoCloud: Reads info from /var/lib/cloud/seed only, ConfigDrive: Reads data from Openstack Config Drive, OpenNebula: read from OpenNebula context disk, DigitalOcean: reads data from Droplet datasource, Azure: read from MS Azure cdrom. Requires walinux-agent, AltCloud: config disks for RHEVm and vSphere, OVF: Reads data from OVF Transports, MAAS: Reads data from Ubuntu MAAS, GCE: google compute metadata service, OpenStack: native openstack metadata service, CloudSigma: metadata over serial for cloudsigma.com, SmartOS: Read from SmartOS metadata service, Bigstep: Bigstep metadata service, Scaleway: Scaleway metadata service, AliYun: Alibaba metadata service, Ec2: reads data from EC2 Metadata service, CloudStack: Read from CloudStack metadata service, Hetzner: Hetzner Cloud, IBMCloud: IBM Cloud. Previously softlayer or bluemix., None: Failsafe datasource
4681+Default: NoCloud, ConfigDrive, OpenNebula, DigitalOcean, Azure, AltCloud, OVF, MAAS, GCE, OpenStack, CloudSigma, SmartOS, Bigstep, Scaleway, AliYun, Ec2, CloudStack, Hetzner, IBMCloud, Exoscale, None
4682+Choices-C: NoCloud, ConfigDrive, OpenNebula, DigitalOcean, Azure, AltCloud, OVF, MAAS, GCE, OpenStack, CloudSigma, SmartOS, Bigstep, Scaleway, AliYun, Ec2, CloudStack, Hetzner, IBMCloud, Exoscale, None
4683+Choices: NoCloud: Reads info from /var/lib/cloud/seed only, ConfigDrive: Reads data from Openstack Config Drive, OpenNebula: read from OpenNebula context disk, DigitalOcean: reads data from Droplet datasource, Azure: read from MS Azure cdrom. Requires walinux-agent, AltCloud: config disks for RHEVm and vSphere, OVF: Reads data from OVF Transports, MAAS: Reads data from Ubuntu MAAS, GCE: google compute metadata service, OpenStack: native openstack metadata service, CloudSigma: metadata over serial for cloudsigma.com, SmartOS: Read from SmartOS metadata service, Bigstep: Bigstep metadata service, Scaleway: Scaleway metadata service, AliYun: Alibaba metadata service, Ec2: reads data from EC2 Metadata service, CloudStack: Read from CloudStack metadata service, Hetzner: Hetzner Cloud, IBMCloud: IBM Cloud. Previously softlayer or bluemix., Exoscale: Exoscale metadata service, None: Failsafe datasource
4684 Description: Which data sources should be searched?
4685 Cloud-init supports searching different "Data Sources" for information
4686 that it uses to configure a cloud instance.
4687diff --git a/debian/patches/ubuntu-advantage-revert-tip.patch b/debian/patches/ubuntu-advantage-revert-tip.patch
4688index b956067..6d8b888 100644
4689--- a/debian/patches/ubuntu-advantage-revert-tip.patch
4690+++ b/debian/patches/ubuntu-advantage-revert-tip.patch
4691@@ -9,10 +9,8 @@ Forwarded: not-needed
4692 Last-Update: 2019-05-10
4693 ---
4694 This patch header follows DEP-3: http://dep.debian.net/deps/dep3/
4695-Index: cloud-init/cloudinit/config/cc_ubuntu_advantage.py
4696-===================================================================
4697---- cloud-init.orig/cloudinit/config/cc_ubuntu_advantage.py
4698-+++ cloud-init/cloudinit/config/cc_ubuntu_advantage.py
4699+--- a/cloudinit/config/cc_ubuntu_advantage.py
4700++++ b/cloudinit/config/cc_ubuntu_advantage.py
4701 @@ -1,143 +1,150 @@
4702 +# Copyright (C) 2018 Canonical Ltd.
4703 +#
4704@@ -294,10 +292,8 @@ Index: cloud-init/cloudinit/config/cc_ubuntu_advantage.py
4705 + run_commands(cfgin.get('commands', []))
4706
4707 # vi: ts=4 expandtab
4708-Index: cloud-init/cloudinit/config/tests/test_ubuntu_advantage.py
4709-===================================================================
4710---- cloud-init.orig/cloudinit/config/tests/test_ubuntu_advantage.py
4711-+++ cloud-init/cloudinit/config/tests/test_ubuntu_advantage.py
4712+--- a/cloudinit/config/tests/test_ubuntu_advantage.py
4713++++ b/cloudinit/config/tests/test_ubuntu_advantage.py
4714 @@ -1,7 +1,10 @@
4715 # This file is part of cloud-init. See LICENSE file for license information.
4716
4717diff --git a/doc/examples/cloud-config-datasources.txt b/doc/examples/cloud-config-datasources.txt
4718index 2651c02..52a2476 100644
4719--- a/doc/examples/cloud-config-datasources.txt
4720+++ b/doc/examples/cloud-config-datasources.txt
4721@@ -38,7 +38,7 @@ datasource:
4722 # these are optional, but allow you to basically provide a datasource
4723 # right here
4724 user-data: |
4725- # This is the user-data verbatum
4726+ # This is the user-data verbatim
4727 meta-data:
4728 instance-id: i-87018aed
4729 local-hostname: myhost.internal
4730diff --git a/doc/examples/cloud-config-user-groups.txt b/doc/examples/cloud-config-user-groups.txt
4731index 6a363b7..f588bfb 100644
4732--- a/doc/examples/cloud-config-user-groups.txt
4733+++ b/doc/examples/cloud-config-user-groups.txt
4734@@ -1,3 +1,4 @@
4735+#cloud-config
4736 # Add groups to the system
4737 # The following example adds the ubuntu group with members 'root' and 'sys'
4738 # and the empty group cloud-users.
4739diff --git a/doc/rtd/conf.py b/doc/rtd/conf.py
4740index 50eb05c..4174477 100644
4741--- a/doc/rtd/conf.py
4742+++ b/doc/rtd/conf.py
4743@@ -27,16 +27,11 @@ project = 'Cloud-Init'
4744 # Add any Sphinx extension module names here, as strings. They can be
4745 # extensions coming with Sphinx (named 'sphinx.ext.*') or your custom ones.
4746 extensions = [
4747- 'sphinx.ext.intersphinx',
4748 'sphinx.ext.autodoc',
4749 'sphinx.ext.autosectionlabel',
4750 'sphinx.ext.viewcode',
4751 ]
4752
4753-intersphinx_mapping = {
4754- 'sphinx': ('http://sphinx.pocoo.org', None)
4755-}
4756-
4757 # The suffix of source filenames.
4758 source_suffix = '.rst'
4759
4760diff --git a/doc/rtd/topics/analyze.rst b/doc/rtd/topics/analyze.rst
4761new file mode 100644
4762index 0000000..5cf38bd
4763--- /dev/null
4764+++ b/doc/rtd/topics/analyze.rst
4765@@ -0,0 +1,84 @@
4766+*************************
4767+Cloud-init Analyze Module
4768+*************************
4769+
4770+Overview
4771+========
4772+The analyze module was added to cloud-init to help analyze cloud-init boot time
4773+performance. It is loosely based on systemd-analyze and provides four main actions:
4774+show, blame, dump, and boot.
4775+
4776+The 'show' action is similar to 'systemd-analyze critical-chain' which prints a list of units, the
4777+time they started and how long they took. For cloud-init, we have four stages, and within each stage
4778+a number of modules may run depending on configuration. 'cloud-init analyze show' prints, for each
4779+boot, this information along with the total time of the boot.
4780+
4781+The 'blame' action matches 'systemd-analyze blame' where it prints, in descending order,
4782+the units that took the longest to run. This output is highly useful for examining where cloud-init
4783+is spending its time during execution.
4784+
4785+The 'dump' action simply parses the cloud-init logs that the analyze module operates
4786+on and returns a list of dictionaries that can be consumed for other reporting needs.
4787+
4788+The 'boot' action prints out kernel related timestamps that are not included in any of the
4789+cloud-init logs. There are three different timestamps that are presented to the user:
4790+kernel start, kernel finish boot, and cloud-init start. This was added for additional
4791+clarity into the parts of the boot process that cloud-init does not control, to aid in debugging
4792+performance issues related to cloud-init startup and in tracking regressions.
4793+
4794+Usage
4795+=====
4796+Using each of the printing formats is as easy as running one of the following bash commands:
4797+
4798+.. code-block:: shell-session
4799+
4800+ cloud-init analyze show
4801+ cloud-init analyze blame
4802+ cloud-init analyze dump
4803+ cloud-init analyze boot
4804+
4805+Cloud-init analyze boot Timestamp Gathering
4806+===========================================
4807+The following boot-related timestamps are gathered on demand when cloud-init analyze boot runs:
4808+- Kernel Startup, which is inferred from system uptime
4809+- Kernel Finishes Initialization, which is inferred from the systemd UserspaceTimestampMonotonic property
4810+- Cloud-init activation, which is inferred from the InactiveExitTimestampMonotonic property of the
4811+  cloud-init-local systemd unit.
4812+
4813+In order to gather the necessary timestamps using systemd, running the commands
4814+
4815+.. code-block:: shell-session
4816+
4817+ systemctl show -p UserspaceTimestampMonotonic
4818+ systemctl show cloud-init-local -p InactiveExitTimestampMonotonic
4819+
4820+will gather the UserspaceTimestamp and InactiveExitTimestamp.
4821+The UserspaceTimestamp tracks when the init system starts, which is used as an indicator of kernel
4822+finishing initialization. The InactiveExitTimestamp tracks when a particular systemd unit transitions
4823+from the Inactive to Active state, which can be used to mark the beginning of systemd's activation
4824+of cloud-init.
4825+
4826+Currently this only works for distros that use systemd as the init process. We will be expanding
4827+support for other distros in the future and this document will be updated accordingly.
4828+
4829+If systemd is not present on the system, dmesg is used to attempt to find an event that logs the
4830+beginning of the init system. However, with this method only the first two timestamps can be found;
4831+dmesg does not monitor userspace processes, so unlike with systemd, no cloud-init start timestamp
4832+is reported.
4833+
4834+List of Cloud-init analyze boot supported distros
4835+=================================================
4836+- Arch
4837+- CentOS
4838+- Debian
4839+- Fedora
4840+- OpenSuSE
4841+- Red Hat Enterprise Linux
4842+- Ubuntu
4843+- SUSE Linux Enterprise Server
4844+- CoreOS
4845+
4846+List of Cloud-init analyze boot unsupported distros
4847+===================================================
4848+- FreeBSD
4849+- Gentoo
4850\ No newline at end of file
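The timestamp arithmetic the analyze.rst text above describes can be illustrated with a small sketch. The microsecond values here are made-up samples standing in for real output of the two systemctl commands shown in the document:

```shell
# Hypothetical sample values, in microseconds on the monotonic clock
# (kernel start is time zero), standing in for the output of:
#   systemctl show -p UserspaceTimestampMonotonic
#   systemctl show cloud-init-local -p InactiveExitTimestampMonotonic
userspace_usec=2318526
ci_start_usec=6813964

# Convert to seconds since kernel start, as an analyze-style report would.
kernel_finish=$(awk -v t="$userspace_usec" 'BEGIN { printf "%.3f", t / 1000000 }')
ci_start=$(awk -v t="$ci_start_usec" 'BEGIN { printf "%.3f", t / 1000000 }')

echo "kernel finished init at ${kernel_finish}s; cloud-init started at ${ci_start}s"
```

On a systemd machine the two variables would be populated from the real property values instead of these samples.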
4851diff --git a/doc/rtd/topics/capabilities.rst b/doc/rtd/topics/capabilities.rst
4852index 0d8b894..6d85a99 100644
4853--- a/doc/rtd/topics/capabilities.rst
4854+++ b/doc/rtd/topics/capabilities.rst
4855@@ -217,6 +217,7 @@ Get detailed reports of where cloud-init spends most of its time. See
4856 * **dump** Machine-readable JSON dump of all cloud-init tracked events.
4857 * **show** show time-ordered report of the cost of operations during each
4858 boot stage.
4859+* **boot** show timestamps for kernel start, kernel finish boot, and cloud-init start.
4860
4861 .. _cli_devel:
4862
4863diff --git a/doc/rtd/topics/datasources.rst b/doc/rtd/topics/datasources.rst
4864index 648c606..2148cd5 100644
4865--- a/doc/rtd/topics/datasources.rst
4866+++ b/doc/rtd/topics/datasources.rst
4867@@ -155,6 +155,7 @@ Follow for more information.
4868 datasources/configdrive.rst
4869 datasources/digitalocean.rst
4870 datasources/ec2.rst
4871+ datasources/exoscale.rst
4872 datasources/maas.rst
4873 datasources/nocloud.rst
4874 datasources/opennebula.rst
4875diff --git a/doc/rtd/topics/datasources/exoscale.rst b/doc/rtd/topics/datasources/exoscale.rst
4876new file mode 100644
4877index 0000000..27aec9c
4878--- /dev/null
4879+++ b/doc/rtd/topics/datasources/exoscale.rst
4880@@ -0,0 +1,68 @@
4881+.. _datasource_exoscale:
4882+
4883+Exoscale
4884+========
4885+
4886+This datasource supports reading from the metadata server used on the
4887+`Exoscale platform <https://exoscale.com>`_.
4888+
4889+Use of the Exoscale datasource is recommended to benefit from new features of
4890+the Exoscale platform.
4891+
4892+The datasource relies on the availability of a compatible metadata server
4893+(``http://169.254.169.254`` is used by default) and its companion password
4894+server, reachable at the same address (by default on port 8080).
4895+
4896+Crawling of metadata
4897+--------------------
4898+
4899+The metadata service and password server are crawled slightly differently:
4900+
4901+ * The "metadata service" is crawled every boot.
4902+ * The password server is also crawled every boot (the Exoscale datasource
4903+ forces the password module to run with "frequency always").
4904+
4905+In the password server case, the following rules apply in order to enable the
4906+"restore instance password" functionality:
4907+
4908+ * If a password is returned by the password server, it is then marked "saved"
4909+ by the cloud-init datasource. Subsequent boots will skip setting the password
4910+ (the password server will return "saved_password").
4911+ * When the instance password is reset (via the Exoscale UI), the password
4912+ server will return the non-empty password at next boot, therefore causing
4913+ cloud-init to reset the instance's password.
4914+
4915+Configuration
4916+-------------
4917+
4918+Users of this datasource are discouraged from changing the default settings
4919+unless instructed to by Exoscale support.
4920+
4921+The following settings can be configured for the datasource in system
4922+configuration (in `/etc/cloud/cloud.cfg.d/`).
4923+
4924+The settings available are:
4925+
4926+ * **metadata_url**: The URL for the metadata service (defaults to
4927+ ``http://169.254.169.254``)
4928+ * **api_version**: The API version path on which to query the instance metadata
4929+ (defaults to ``1.0``)
4930+ * **password_server_port**: The port (on the metadata server) on which the
4931+ password server listens (defaults to ``8080``).
4932+ * **timeout**: The timeout value provided to urlopen for each individual HTTP
4933+   request (defaults to ``10``).
4934+ * **retries**: The number of retries that should be done for an HTTP request
4935+   (defaults to ``6``).
4936+
4937+
4938+An example configuration with the default values is provided below:
4939+
4940+.. sourcecode:: yaml
4941+
4942+ datasource:
4943+ Exoscale:
4944+ metadata_url: "http://169.254.169.254"
4945+ api_version: "1.0"
4946+ password_server_port: 8080
4947+ timeout: 10
4948+ retries: 6
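The "restore instance password" rules described above amount to a small state check on the password server's response. This is a minimal sketch, not the actual datasource code (which lives in cloudinit/sources/DataSourceExoscale.py and also acknowledges a set password back to the server); the response strings mirror the CloudStack-style protocol Exoscale uses, and "skip"/"set" are stand-ins for the real behavior:

```shell
# Sketch of the password-server decision: only a fresh, non-sentinel
# response causes the instance password to be (re)set.
handle_password_response() {
  case "$1" in
    ""|saved_password|bad_request)
      echo "skip"   # nothing new: password already saved, or request invalid
      ;;
    *)
      echo "set"    # fresh password returned: set it, then mark it saved
      ;;
  esac
}

handle_password_response "saved_password"   # a subsequent boot
handle_password_response "s3cret-pw"        # first boot after a password reset
```

Because the datasource forces the password module to run with "frequency always", this check happens on every boot, which is what makes the reset-via-UI flow work.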
4949diff --git a/doc/rtd/topics/datasources/oracle.rst b/doc/rtd/topics/datasources/oracle.rst
4950index f2383ce..98c4657 100644
4951--- a/doc/rtd/topics/datasources/oracle.rst
4952+++ b/doc/rtd/topics/datasources/oracle.rst
4953@@ -8,7 +8,7 @@ This datasource reads metadata, vendor-data and user-data from
4954
4955 Oracle Platform
4956 ---------------
4957-OCI provides bare metal and virtual machines. In both cases,
4958+OCI provides bare metal and virtual machines. In both cases,
4959 the platform identifies itself via DMI data in the chassis asset tag
4960 with the string 'OracleCloud.com'.
4961
4962@@ -22,5 +22,28 @@ Cloud-init has a specific datasource for Oracle in order to:
4963 implementation.
4964
4965
4966+Configuration
4967+-------------
4968+
4969+The following configuration can be set for the datasource in system
4970+configuration (in ``/etc/cloud/cloud.cfg`` or ``/etc/cloud/cloud.cfg.d/``).
4971+
4972+The settings that may be configured are:
4973+
4974+* **configure_secondary_nics**: A boolean, defaulting to False. If set
4975+ to True on an OCI Virtual Machine, cloud-init will fetch networking
4976+ metadata from Oracle's IMDS and use it to configure the non-primary
4977+ network interface controllers in the system. If set to True on an
4978+ OCI Bare Metal Machine, it will have no effect (though this may
4979+ change in the future).
4980+
4981+An example configuration with the default values is provided below:
4982+
4983+.. sourcecode:: yaml
4984+
4985+ datasource:
4986+ Oracle:
4987+ configure_secondary_nics: false
4988+
4989 .. _Oracle Compute Infrastructure: https://cloud.oracle.com/
4990 .. vi: textwidth=78
4991diff --git a/doc/rtd/topics/debugging.rst b/doc/rtd/topics/debugging.rst
4992index 51363ea..e13d915 100644
4993--- a/doc/rtd/topics/debugging.rst
4994+++ b/doc/rtd/topics/debugging.rst
4995@@ -68,6 +68,19 @@ subcommands default to reading /var/log/cloud-init.log.
4996 00.00100s (modules-final/config-rightscale_userdata)
4997 ...
4998
4999+* ``analyze boot`` Query systemd (or dmesg) for relevant pre-cloud-init
5000+  timestamps, such as the kernel start, kernel finish boot, and cloud-init start.
The diff has been truncated for viewing.
