Merge ~chad.smith/cloud-init:ubuntu/disco into cloud-init:ubuntu/disco

Proposed by Chad Smith
Status: Merged
Merged at revision: 3f2ba00663c817ff44bb9ac91d7c2aaa9f331f9c
Proposed branch: ~chad.smith/cloud-init:ubuntu/disco
Merge into: cloud-init:ubuntu/disco
Diff against target: 7381 lines (+4178/-619)
91 files modified
.github/pull_request_template.md (+9/-0)
ChangeLog (+36/-0)
cloudinit/analyze/__main__.py (+86/-2)
cloudinit/analyze/show.py (+192/-10)
cloudinit/analyze/tests/test_boot.py (+170/-0)
cloudinit/apport.py (+1/-0)
cloudinit/config/cc_apt_configure.py (+3/-1)
cloudinit/config/cc_growpart.py (+2/-1)
cloudinit/config/cc_lxd.py (+1/-1)
cloudinit/config/cc_resizefs.py (+3/-3)
cloudinit/config/cc_set_passwords.py (+34/-19)
cloudinit/config/cc_ssh.py (+55/-0)
cloudinit/config/cc_ubuntu_advantage.py (+1/-1)
cloudinit/config/cc_ubuntu_drivers.py (+49/-1)
cloudinit/config/tests/test_ssh.py (+166/-0)
cloudinit/config/tests/test_ubuntu_drivers.py (+81/-18)
cloudinit/distros/__init__.py (+22/-22)
cloudinit/distros/arch.py (+14/-0)
cloudinit/distros/debian.py (+2/-2)
cloudinit/distros/freebsd.py (+16/-16)
cloudinit/distros/opensuse.py (+2/-0)
cloudinit/distros/parsers/sys_conf.py (+7/-0)
cloudinit/distros/ubuntu.py (+15/-0)
cloudinit/net/__init__.py (+112/-43)
cloudinit/net/cmdline.py (+16/-9)
cloudinit/net/dhcp.py (+90/-0)
cloudinit/net/network_state.py (+20/-4)
cloudinit/net/sysconfig.py (+12/-0)
cloudinit/net/tests/test_dhcp.py (+119/-1)
cloudinit/net/tests/test_init.py (+262/-9)
cloudinit/settings.py (+1/-0)
cloudinit/sources/DataSourceAzure.py (+141/-32)
cloudinit/sources/DataSourceCloudSigma.py (+2/-6)
cloudinit/sources/DataSourceExoscale.py (+258/-0)
cloudinit/sources/DataSourceGCE.py (+20/-2)
cloudinit/sources/DataSourceHetzner.py (+3/-0)
cloudinit/sources/DataSourceNoCloud.py (+23/-17)
cloudinit/sources/DataSourceOVF.py (+6/-1)
cloudinit/sources/DataSourceOracle.py (+99/-7)
cloudinit/sources/__init__.py (+27/-0)
cloudinit/sources/helpers/azure.py (+152/-8)
cloudinit/sources/helpers/vmware/imc/config_custom_script.py (+42/-101)
cloudinit/sources/tests/test_oracle.py (+228/-11)
cloudinit/stages.py (+50/-15)
cloudinit/tests/helpers.py (+2/-1)
cloudinit/tests/test_stages.py (+132/-19)
cloudinit/url_helper.py (+5/-4)
cloudinit/util.py (+13/-9)
cloudinit/version.py (+1/-1)
config/cloud.cfg.tmpl (+2/-2)
debian/changelog (+60/-0)
debian/cloud-init.templates (+3/-3)
doc/examples/cloud-config-datasources.txt (+1/-1)
doc/examples/cloud-config-user-groups.txt (+1/-0)
doc/rtd/conf.py (+0/-5)
doc/rtd/topics/analyze.rst (+84/-0)
doc/rtd/topics/capabilities.rst (+1/-0)
doc/rtd/topics/datasources.rst (+1/-0)
doc/rtd/topics/datasources/exoscale.rst (+68/-0)
doc/rtd/topics/datasources/oracle.rst (+24/-1)
doc/rtd/topics/debugging.rst (+13/-0)
doc/rtd/topics/format.rst (+13/-12)
doc/rtd/topics/network-config-format-v2.rst (+1/-1)
doc/rtd/topics/network-config.rst (+5/-4)
integration-requirements.txt (+2/-1)
systemd/cloud-init-generator.tmpl (+6/-1)
templates/ntp.conf.debian.tmpl (+2/-1)
tests/cloud_tests/platforms.yaml (+1/-0)
tests/cloud_tests/platforms/nocloudkvm/instance.py (+9/-4)
tests/cloud_tests/platforms/platforms.py (+1/-1)
tests/cloud_tests/setup_image.py (+2/-1)
tests/unittests/test_datasource/test_azure.py (+112/-39)
tests/unittests/test_datasource/test_common.py (+13/-0)
tests/unittests/test_datasource/test_ec2.py (+2/-1)
tests/unittests/test_datasource/test_exoscale.py (+203/-0)
tests/unittests/test_datasource/test_gce.py (+18/-0)
tests/unittests/test_datasource/test_nocloud.py (+18/-0)
tests/unittests/test_distros/test_freebsd.py (+45/-0)
tests/unittests/test_distros/test_netconfig.py (+86/-0)
tests/unittests/test_ds_identify.py (+45/-0)
tests/unittests/test_handler/test_handler_apt_source_v3.py (+11/-0)
tests/unittests/test_handler/test_handler_ntp.py (+15/-10)
tests/unittests/test_handler/test_handler_resizefs.py (+1/-1)
tests/unittests/test_net.py (+243/-23)
tests/unittests/test_reporting_hyperv.py (+65/-0)
tests/unittests/test_vmware/test_custom_script.py (+63/-53)
tools/build-on-freebsd (+40/-33)
tools/ds-identify (+40/-14)
tools/render-cloudcfg (+1/-1)
tools/run-container (+1/-1)
tools/xkvm (+53/-8)
Reviewer             Review Type              Date Requested  Status
Ryan Harper                                                   Approve
Server Team CI bot   continuous-integration                   Approve
Review via email: mp+371687@code.launchpad.net

Commit message

Upstream snapshot for SRU
Also enables Exoscale in debian/cloud-init.templates.

Revision history for this message
Server Team CI bot (server-team-bot) wrote :

PASSED: Continuous integration, rev:4e98b3884a51d16564437f8068fa95b1ff782e23
https://jenkins.ubuntu.com/server/job/cloud-init-ci/1070/
Executed test runs:
    SUCCESS: Checkout
    SUCCESS: Unit & Style Tests
    SUCCESS: Ubuntu LTS: Build
    SUCCESS: Ubuntu LTS: Integration
    IN_PROGRESS: Declarative: Post Actions

Click here to trigger a rebuild:
https://jenkins.ubuntu.com/server/job/cloud-init-ci/1070//rebuild

review: Approve (continuous-integration)
Revision history for this message
Server Team CI bot (server-team-bot) wrote :

PASSED: Continuous integration, rev:d8cb9fd75b21e133556493fd8c0e831bcf5cec9e
https://jenkins.ubuntu.com/server/job/cloud-init-ci/1075/
Executed test runs:
    SUCCESS: Checkout
    SUCCESS: Unit & Style Tests
    SUCCESS: Ubuntu LTS: Build
    SUCCESS: Ubuntu LTS: Integration
    IN_PROGRESS: Declarative: Post Actions

Click here to trigger a rebuild:
https://jenkins.ubuntu.com/server/job/cloud-init-ci/1075//rebuild

review: Approve (continuous-integration)
Revision history for this message
Server Team CI bot (server-team-bot) wrote :

PASSED: Continuous integration, rev:3f2ba00663c817ff44bb9ac91d7c2aaa9f331f9c
https://jenkins.ubuntu.com/server/job/cloud-init-ci/1078/
Executed test runs:
    SUCCESS: Checkout
    SUCCESS: Unit & Style Tests
    SUCCESS: Ubuntu LTS: Build
    SUCCESS: Ubuntu LTS: Integration
    IN_PROGRESS: Declarative: Post Actions

Click here to trigger a rebuild:
https://jenkins.ubuntu.com/server/job/cloud-init-ci/1078//rebuild

review: Approve (continuous-integration)
Revision history for this message
Ryan Harper (raharper) wrote :

Thanks!

review: Approve

Preview Diff

1diff --git a/.github/pull_request_template.md b/.github/pull_request_template.md
2new file mode 100644
3index 0000000..170a71e
4--- /dev/null
5+++ b/.github/pull_request_template.md
6@@ -0,0 +1,9 @@
7+***This GitHub repo is only a mirror. Do not submit pull requests
8+here!***
9+
10+Thank you for taking the time to write and submit a change to
11+cloud-init! Please follow [our hacking
12+guide](https://cloudinit.readthedocs.io/en/latest/topics/hacking.html)
13+to submit your change to cloud-init's [Launchpad git
14+repository](https://code.launchpad.net/cloud-init/), where cloud-init
15+development happens.
16diff --git a/ChangeLog b/ChangeLog
17index bf48fd4..a98f8c2 100644
18--- a/ChangeLog
19+++ b/ChangeLog
20@@ -1,3 +1,39 @@
21+19.2:
22+ - net: add rfc3442 (classless static routes) to EphemeralDHCP
23+ (LP: #1821102)
24+ - templates/ntp.conf.debian.tmpl: fix missing newline for pools
25+ (LP: #1836598)
26+ - Support netplan renderer in Arch Linux [Conrad Hoffmann]
27+ - Fix typo in publicly viewable documentation. [David Medberry]
28+ - Add a cdrom size checker for OVF ds to ds-identify
29+ [Pengpeng Sun] (LP: #1806701)
30+ - VMWare: Trigger the post customization script via cc_scripts module.
31+ [Xiaofeng Wang] (LP: #1833192)
32+ - Cloud-init analyze module: Added ability to analyze boot events.
33+ [Sam Gilson]
34+ - Update debian eni network configuration location, retain Ubuntu setting
35+ [Janos Lenart]
36+ - net: skip bond interfaces in get_interfaces
37+ [Stanislav Makar] (LP: #1812857)
38+ - Fix a couple of issues raised by a coverity scan
39+ - Add missing dsname for Hetzner Cloud datasource [Markus Schade]
40+ - doc: indicate that netplan is default in Ubuntu now
41+ - azure: add region and AZ properties from imds compute location metadata
42+ - sysconfig: support more bonding options [Penghui Liao]
43+ - cloud-init-generator: use libexec path to ds-identify on redhat systems
44+ (LP: #1833264)
45+ - tools/build-on-freebsd: update to python3 [Gonéri Le Bouder]
46+ - Allow identification of OpenStack by Asset Tag
47+ [Mark T. Voelker] (LP: #1669875)
48+ - Fix spelling error making 'an Ubuntu' consistent. [Brian Murray]
49+ - run-container: centos: comment out the repo mirrorlist [Paride Legovini]
50+ - netplan: update netplan key mappings for gratuitous-arp (LP: #1827238)
51+ - freebsd: fix the name of cloudcfg VARIANT [Gonéri Le Bouder]
52+ - freebsd: ability to grow root file system [Gonéri Le Bouder]
53+ - freebsd: NoCloud data source support [Gonéri Le Bouder] (LP: #1645824)
54+ - Azure: Return static fallback address as if failed to find endpoint
55+ [Jason Zions (MSFT)]
56+
57 19.1:
58 - freebsd: add chpasswd pkg in the image [Gonéri Le Bouder]
59 - tests: add Eoan release [Paride Legovini]
60diff --git a/cloudinit/analyze/__main__.py b/cloudinit/analyze/__main__.py
61index f861365..99e5c20 100644
62--- a/cloudinit/analyze/__main__.py
63+++ b/cloudinit/analyze/__main__.py
64@@ -7,7 +7,7 @@ import re
65 import sys
66
67 from cloudinit.util import json_dumps
68-
69+from datetime import datetime
70 from . import dump
71 from . import show
72
73@@ -52,9 +52,93 @@ def get_parser(parser=None):
74 dest='outfile', default='-',
75 help='specify where to write output. ')
76 parser_dump.set_defaults(action=('dump', analyze_dump))
77+ parser_boot = subparsers.add_parser(
78+ 'boot', help='Print list of boot times for kernel and cloud-init')
79+ parser_boot.add_argument('-i', '--infile', action='store',
80+ dest='infile', default='/var/log/cloud-init.log',
81+ help='specify where to read input. ')
82+ parser_boot.add_argument('-o', '--outfile', action='store',
83+ dest='outfile', default='-',
84+ help='specify where to write output.')
85+ parser_boot.set_defaults(action=('boot', analyze_boot))
86 return parser
87
88
89+def analyze_boot(name, args):
90+ """Report a list of how long different boot operations took.
91+
92+ For Example:
93+ -- Most Recent Boot Record --
94+ Kernel Started at: <time>
95+ Kernel ended boot at: <time>
96+ Kernel time to boot (seconds): <time>
97+ Cloud-init activated by systemd at: <time>
98+ Time between Kernel end boot and Cloud-init activation (seconds):<time>
99+ Cloud-init start: <time>
100+ """
101+ infh, outfh = configure_io(args)
102+ kernel_info = show.dist_check_timestamp()
103+ status_code, kernel_start, kernel_end, ci_sysd_start = \
104+ kernel_info
105+ kernel_start_timestamp = datetime.utcfromtimestamp(kernel_start)
106+ kernel_end_timestamp = datetime.utcfromtimestamp(kernel_end)
107+ ci_sysd_start_timestamp = datetime.utcfromtimestamp(ci_sysd_start)
108+ try:
109+ last_init_local = \
110+ [e for e in _get_events(infh) if e['name'] == 'init-local' and
111+ 'starting search' in e['description']][-1]
112+ ci_start = datetime.utcfromtimestamp(last_init_local['timestamp'])
113+ except IndexError:
114+ ci_start = 'Could not find init-local log-line in cloud-init.log'
115+ status_code = show.FAIL_CODE
116+
117+ FAILURE_MSG = 'Your Linux distro or container does not support this ' \
118+ 'functionality.\n' \
119+ 'You must be running a Kernel Telemetry supported ' \
120+ 'distro.\nPlease check ' \
121+ 'https://cloudinit.readthedocs.io/en/latest' \
122+ '/topics/analyze.html for more ' \
123+ 'information on supported distros.\n'
124+
125+ SUCCESS_MSG = '-- Most Recent Boot Record --\n' \
126+ ' Kernel Started at: {k_s_t}\n' \
127+ ' Kernel ended boot at: {k_e_t}\n' \
128+ ' Kernel time to boot (seconds): {k_r}\n' \
129+ ' Cloud-init activated by systemd at: {ci_sysd_t}\n' \
130+ ' Time between Kernel end boot and Cloud-init ' \
131+ 'activation (seconds): {bt_r}\n' \
132+ ' Cloud-init start: {ci_start}\n'
133+
134+ CONTAINER_MSG = '-- Most Recent Container Boot Record --\n' \
135+ ' Container started at: {k_s_t}\n' \
136+ ' Cloud-init activated by systemd at: {ci_sysd_t}\n' \
137+ ' Cloud-init start: {ci_start}\n' \
138+
139+ status_map = {
140+ show.FAIL_CODE: FAILURE_MSG,
141+ show.CONTAINER_CODE: CONTAINER_MSG,
142+ show.SUCCESS_CODE: SUCCESS_MSG
143+ }
144+
145+ kernel_runtime = kernel_end - kernel_start
146+ between_process_runtime = ci_sysd_start - kernel_end
147+
148+ kwargs = {
149+ 'k_s_t': kernel_start_timestamp,
150+ 'k_e_t': kernel_end_timestamp,
151+ 'k_r': kernel_runtime,
152+ 'bt_r': between_process_runtime,
153+ 'k_e': kernel_end,
154+ 'k_s': kernel_start,
155+ 'ci_sysd': ci_sysd_start,
156+ 'ci_sysd_t': ci_sysd_start_timestamp,
157+ 'ci_start': ci_start
158+ }
159+
160+ outfh.write(status_map[status_code].format(**kwargs))
161+ return status_code
162+
163+
164 def analyze_blame(name, args):
165 """Report a list of records sorted by largest time delta.
166
167@@ -119,7 +203,7 @@ def analyze_dump(name, args):
168
169 def _get_events(infile):
170 rawdata = None
171- events, rawdata = show.load_events(infile, None)
172+ events, rawdata = show.load_events_infile(infile)
173 if not events:
174 events, _ = dump.dump_events(rawdata=rawdata)
175 return events
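The analyze_boot hunk above derives its figures by plain subtraction: kernel runtime is kernel_end - kernel_start, the systemd handoff gap is ci_sysd_start - kernel_end, and kernel_start itself is approximated as wall-clock time minus uptime. A minimal, self-contained sketch of that arithmetic (boot_deltas is a hypothetical helper; plain floats stand in for util.uptime()):

```python
import time


def boot_deltas(kernel_start, kernel_end, ci_sysd_start):
    """Mirror the subtractions done in analyze_boot: kernel boot time
    and the gap between kernel handoff and cloud-init activation."""
    return {
        'kernel_runtime': kernel_end - kernel_start,
        'between_process_runtime': ci_sysd_start - kernel_end,
    }


# kernel_start is approximated as "now minus uptime", as in the diff
kernel_start = time.time() - 18.84  # pretend the box booted ~18.84s ago
deltas = boot_deltas(kernel_start, kernel_start + 3.5, kernel_start + 5.0)
print(deltas['kernel_runtime'])            # ~3.5 seconds in the kernel
print(deltas['between_process_runtime'])   # ~1.5 seconds until cloud-init
```

The real code then maps the three raw timestamps through datetime.utcfromtimestamp for display.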
176diff --git a/cloudinit/analyze/show.py b/cloudinit/analyze/show.py
177index 3e778b8..511b808 100644
178--- a/cloudinit/analyze/show.py
179+++ b/cloudinit/analyze/show.py
180@@ -8,8 +8,11 @@ import base64
181 import datetime
182 import json
183 import os
184+import time
185+import sys
186
187 from cloudinit import util
188+from cloudinit.distros import uses_systemd
189
190 # An event:
191 '''
192@@ -49,6 +52,10 @@ format_key = {
193
194 formatting_help = " ".join(["{0}: {1}".format(k.replace('%', '%%'), v)
195 for k, v in format_key.items()])
196+SUCCESS_CODE = 'successful'
197+FAIL_CODE = 'failure'
198+CONTAINER_CODE = 'container'
199+TIMESTAMP_UNKNOWN = (FAIL_CODE, -1, -1, -1)
200
201
202 def format_record(msg, event):
203@@ -125,9 +132,175 @@ def total_time_record(total_time):
204 return 'Total Time: %3.5f seconds\n' % total_time
205
206
207+class SystemctlReader(object):
208+ '''
209+ Class for dealing with all systemctl subp calls in a consistent manner.
210+ '''
211+ def __init__(self, property, parameter=None):
212+ self.epoch = None
213+ self.args = ['/bin/systemctl', 'show']
214+ if parameter:
215+ self.args.append(parameter)
216+ self.args.extend(['-p', property])
217+ # Don't want the init of our object to break. Instead of throwing
218+ # an exception, set an error code that gets checked when data is
219+ # requested from the object
220+ self.failure = self.subp()
221+
222+ def subp(self):
223+ '''
224+ Make a subp call based on set args and handle errors by setting
225+ failure code
226+
227+ :return: whether the subp call failed or not
228+ '''
229+ try:
230+ value, err = util.subp(self.args, capture=True)
231+ if err:
232+ return err
233+ self.epoch = value
234+ return None
235+ except Exception as systemctl_fail:
236+ return systemctl_fail
237+
238+ def parse_epoch_as_float(self):
239+ '''
240+ If subp call succeeded, return the timestamp from subp as a float.
241+
242+ :return: timestamp as a float
243+ '''
244+ # subp has 2 ways to fail: it either fails and throws an exception,
245+ # or returns an error code. Raise an exception here in order to make
246+ # sure both scenarios throw exceptions
247+ if self.failure:
248+ raise RuntimeError('Subprocess call to systemctl has failed, '
249+ 'returning error code ({})'
250+ .format(self.failure))
251+ # Output from systemctl show has the format Property=Value.
252+ # For example, UserspaceMonotonic=1929304
253+ timestamp = self.epoch.split('=')[1]
254+ # Timestamps reported by systemctl are in microseconds, converting
255+ return float(timestamp) / 1000000
256+
257+
258+def dist_check_timestamp():
259+ '''
260+ Determine which init system a particular linux distro is using.
261+ Each init system (systemd, upstart, etc) has a different way of
262+ providing timestamps.
263+
264+ :return: timestamps of kernelboot, kernelendboot, and cloud-initstart
265+ or TIMESTAMP_UNKNOWN if the timestamps cannot be retrieved.
266+ '''
267+
268+ if uses_systemd():
269+ return gather_timestamps_using_systemd()
270+
271+ # Use dmesg to get timestamps if the distro does not have systemd
272+ if util.is_FreeBSD() or 'gentoo' in \
273+ util.system_info()['system'].lower():
274+ return gather_timestamps_using_dmesg()
275+
276+ # this distro doesn't fit anything that is supported by cloud-init. just
277+ # return error codes
278+ return TIMESTAMP_UNKNOWN
279+
280+
281+def gather_timestamps_using_dmesg():
282+ '''
283+ Gather timestamps that corresponds to kernel begin initialization,
284+ kernel finish initialization using dmesg as opposed to systemctl
285+
286+ :return: the two timestamps plus a dummy timestamp to keep consistency
287+ with gather_timestamps_using_systemd
288+ '''
289+ try:
290+ data, _ = util.subp(['dmesg'], capture=True)
291+ split_entries = data[0].splitlines()
292+ for i in split_entries:
293+ if i.decode('UTF-8').find('user') != -1:
294+ splitup = i.decode('UTF-8').split()
295+ stripped = splitup[1].strip(']')
296+
297+ # kernel timestamp from dmesg is equal to 0,
298+ # with the userspace timestamp relative to it.
299+ user_space_timestamp = float(stripped)
300+ kernel_start = float(time.time()) - float(util.uptime())
301+ kernel_end = kernel_start + user_space_timestamp
302+
303+ # systemd wont start cloud-init in this case,
304+ # so we cannot get that timestamp
305+ return SUCCESS_CODE, kernel_start, kernel_end, \
306+ kernel_end
307+
308+ except Exception:
309+ pass
310+ return TIMESTAMP_UNKNOWN
311+
312+
313+def gather_timestamps_using_systemd():
314+ '''
315+ Gather timestamps that corresponds to kernel begin initialization,
316+ kernel finish initialization. and cloud-init systemd unit activation
317+
318+ :return: the three timestamps
319+ '''
320+ kernel_start = float(time.time()) - float(util.uptime())
321+ try:
322+ delta_k_end = SystemctlReader('UserspaceTimestampMonotonic')\
323+ .parse_epoch_as_float()
324+ delta_ci_s = SystemctlReader('InactiveExitTimestampMonotonic',
325+ 'cloud-init-local').parse_epoch_as_float()
326+ base_time = kernel_start
327+ status = SUCCESS_CODE
328+ # lxc based containers do not set their monotonic zero point to be when
329+ # the container starts, instead keep using host boot as zero point
330+ # time.CLOCK_MONOTONIC_RAW is only available in python 3.3
331+ if util.is_container():
332+ # clock.monotonic also uses host boot as zero point
333+ if sys.version_info >= (3, 3):
334+ base_time = float(time.time()) - float(time.monotonic())
335+ # TODO: lxcfs automatically truncates /proc/uptime to seconds
336+ # in containers when https://github.com/lxc/lxcfs/issues/292
337+ # is fixed, util.uptime() should be used instead of stat on
338+ try:
339+ file_stat = os.stat('/proc/1/cmdline')
340+ kernel_start = file_stat.st_atime
341+ except OSError as err:
342+ raise RuntimeError('Could not determine container boot '
343+ 'time from /proc/1/cmdline. ({})'
344+ .format(err))
345+ status = CONTAINER_CODE
346+ else:
347+ status = FAIL_CODE
348+ kernel_end = base_time + delta_k_end
349+ cloudinit_sysd = base_time + delta_ci_s
350+
351+ except Exception as e:
352+ # Except ALL exceptions as Systemctl reader can throw many different
353+ # errors, but any failure in systemctl means that timestamps cannot be
354+ # obtained
355+ print(e)
356+ return TIMESTAMP_UNKNOWN
357+ return status, kernel_start, kernel_end, cloudinit_sysd
358+
359+
360 def generate_records(events, blame_sort=False,
361 print_format="(%n) %d seconds in %I%D",
362 dump_files=False, log_datafiles=False):
363+ '''
364+ Take in raw events and create parent-child dependencies between events
365+ in order to order events in chronological order.
366+
367+ :param events: JSONs from dump that represents events taken from logs
368+ :param blame_sort: whether to sort by timestamp or by time taken.
369+ :param print_format: formatting to represent event, time stamp,
370+ and time taken by the event in one line
371+ :param dump_files: whether to dump files into JSONs
372+ :param log_datafiles: whether or not to log events generated
373+
374+ :return: boot records ordered chronologically
375+ '''
376
377 sorted_events = sorted(events, key=lambda x: x['timestamp'])
378 records = []
379@@ -189,19 +362,28 @@ def generate_records(events, blame_sort=False,
380
381
382 def show_events(events, print_format):
383+ '''
384+ A passthrough method that makes it easier to call generate_records()
385+
386+ :param events: JSONs from dump that represents events taken from logs
387+ :param print_format: formatting to represent event, time stamp,
388+ and time taken by the event in one line
389+
390+ :return: boot records ordered chronologically
391+ '''
392 return generate_records(events, print_format=print_format)
393
394
395-def load_events(infile, rawdata=None):
396- if rawdata:
397- data = rawdata.read()
398- else:
399- data = infile.read()
400+def load_events_infile(infile):
401+ '''
402+ Takes in a log file, read it, and convert to json.
403+
404+ :param infile: The Log file to be read
405
406- j = None
407+ :return: json version of logfile, raw file
408+ '''
409+ data = infile.read()
410 try:
411- j = json.loads(data)
412+ return json.loads(data), data
413 except ValueError:
414- pass
415-
416- return j, data
417+ return None, data
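The SystemctlReader added to show.py above leans on one parsing fact: `systemctl show -p <Property>` prints a single Property=Value line whose value is a monotonic timestamp in microseconds. A standalone sketch of that conversion (parse_epoch_as_float here is a hypothetical free function, not the class method itself):

```python
def parse_epoch_as_float(systemctl_output):
    """Convert `systemctl show -p <Property>` output of the form
    Property=Value, where Value is a monotonic timestamp in
    microseconds, into seconds as a float."""
    timestamp = systemctl_output.split('=')[1]
    # systemctl reports microseconds; convert to seconds
    return float(timestamp) / 1000000


print(parse_epoch_as_float('UserspaceTimestampMonotonic=1929304'))  # -> 1.929304
```

A malformed line (no '=' or a non-numeric value) raises IndexError or ValueError, which is exactly what the new unit tests below exercise.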
418diff --git a/cloudinit/analyze/tests/test_boot.py b/cloudinit/analyze/tests/test_boot.py
419new file mode 100644
420index 0000000..706e2cc
421--- /dev/null
422+++ b/cloudinit/analyze/tests/test_boot.py
423@@ -0,0 +1,170 @@
424+import os
425+from cloudinit.analyze.__main__ import (analyze_boot, get_parser)
426+from cloudinit.tests.helpers import CiTestCase, mock
427+from cloudinit.analyze.show import dist_check_timestamp, SystemctlReader, \
428+ FAIL_CODE, CONTAINER_CODE
429+
430+err_code = (FAIL_CODE, -1, -1, -1)
431+
432+
433+class TestDistroChecker(CiTestCase):
434+
435+ @mock.patch('cloudinit.util.system_info', return_value={'dist': ('', '',
436+ ''),
437+ 'system': ''})
438+ @mock.patch('platform.linux_distribution', return_value=('', '', ''))
439+ @mock.patch('cloudinit.util.is_FreeBSD', return_value=False)
440+ def test_blank_distro(self, m_sys_info, m_linux_distribution, m_free_bsd):
441+ self.assertEqual(err_code, dist_check_timestamp())
442+
443+ @mock.patch('cloudinit.util.system_info', return_value={'dist': ('', '',
444+ '')})
445+ @mock.patch('platform.linux_distribution', return_value=('', '', ''))
446+ @mock.patch('cloudinit.util.is_FreeBSD', return_value=True)
447+ def test_freebsd_gentoo_cant_find(self, m_sys_info,
448+ m_linux_distribution, m_is_FreeBSD):
449+ self.assertEqual(err_code, dist_check_timestamp())
450+
451+ @mock.patch('cloudinit.util.subp', return_value=(0, 1))
452+ def test_subp_fails(self, m_subp):
453+ self.assertEqual(err_code, dist_check_timestamp())
454+
455+
456+class TestSystemCtlReader(CiTestCase):
457+
458+ def test_systemctl_invalid_property(self):
459+ reader = SystemctlReader('dummyProperty')
460+ with self.assertRaises(RuntimeError):
461+ reader.parse_epoch_as_float()
462+
463+ def test_systemctl_invalid_parameter(self):
464+ reader = SystemctlReader('dummyProperty', 'dummyParameter')
465+ with self.assertRaises(RuntimeError):
466+ reader.parse_epoch_as_float()
467+
468+ @mock.patch('cloudinit.util.subp', return_value=('U=1000000', None))
469+ def test_systemctl_works_correctly_threshold(self, m_subp):
470+ reader = SystemctlReader('dummyProperty', 'dummyParameter')
471+ self.assertEqual(1.0, reader.parse_epoch_as_float())
472+ thresh = 1.0 - reader.parse_epoch_as_float()
473+ self.assertTrue(thresh < 1e-6)
474+ self.assertTrue(thresh > (-1 * 1e-6))
475+
476+ @mock.patch('cloudinit.util.subp', return_value=('U=0', None))
477+ def test_systemctl_succeed_zero(self, m_subp):
478+ reader = SystemctlReader('dummyProperty', 'dummyParameter')
479+ self.assertEqual(0.0, reader.parse_epoch_as_float())
480+
481+ @mock.patch('cloudinit.util.subp', return_value=('U=1', None))
482+ def test_systemctl_succeed_distinct(self, m_subp):
483+ reader = SystemctlReader('dummyProperty', 'dummyParameter')
484+ val1 = reader.parse_epoch_as_float()
485+ m_subp.return_value = ('U=2', None)
486+ reader2 = SystemctlReader('dummyProperty', 'dummyParameter')
487+ val2 = reader2.parse_epoch_as_float()
488+ self.assertNotEqual(val1, val2)
489+
490+ @mock.patch('cloudinit.util.subp', return_value=('100', None))
491+ def test_systemctl_epoch_not_splittable(self, m_subp):
492+ reader = SystemctlReader('dummyProperty', 'dummyParameter')
493+ with self.assertRaises(IndexError):
494+ reader.parse_epoch_as_float()
495+
496+ @mock.patch('cloudinit.util.subp', return_value=('U=foobar', None))
497+ def test_systemctl_cannot_convert_epoch_to_float(self, m_subp):
498+ reader = SystemctlReader('dummyProperty', 'dummyParameter')
499+ with self.assertRaises(ValueError):
500+ reader.parse_epoch_as_float()
501+
502+
503+class TestAnalyzeBoot(CiTestCase):
504+
505+ def set_up_dummy_file_ci(self, path, log_path):
506+ infh = open(path, 'w+')
507+ infh.write('2019-07-08 17:40:49,601 - util.py[DEBUG]: Cloud-init v. '
508+ '19.1-1-gbaa47854-0ubuntu1~18.04.1 running \'init-local\' '
509+ 'at Mon, 08 Jul 2019 17:40:49 +0000. Up 18.84 seconds.')
510+ infh.close()
511+ outfh = open(log_path, 'w+')
512+ outfh.close()
513+
514+ def set_up_dummy_file(self, path, log_path):
515+ infh = open(path, 'w+')
516+ infh.write('dummy data')
517+ infh.close()
518+ outfh = open(log_path, 'w+')
519+ outfh.close()
520+
521+ def remove_dummy_file(self, path, log_path):
522+ if os.path.isfile(path):
523+ os.remove(path)
524+ if os.path.isfile(log_path):
525+ os.remove(log_path)
526+
527+ @mock.patch('cloudinit.analyze.show.dist_check_timestamp',
528+ return_value=err_code)
529+ def test_boot_invalid_distro(self, m_dist_check_timestamp):
530+
531+ path = os.path.dirname(os.path.abspath(__file__))
532+ log_path = path + '/boot-test.log'
533+ path += '/dummy.log'
534+ self.set_up_dummy_file(path, log_path)
535+
536+ parser = get_parser()
537+ args = parser.parse_args(args=['boot', '-i', path, '-o',
538+ log_path])
539+ name_default = ''
540+ analyze_boot(name_default, args)
541+ # now args have been tested, go into outfile and make sure error
542+ # message is in the outfile
543+ outfh = open(args.outfile, 'r')
544+ data = outfh.read()
545+ err_string = 'Your Linux distro or container does not support this ' \
546+ 'functionality.\nYou must be running a Kernel ' \
547+ 'Telemetry supported distro.\nPlease check ' \
548+ 'https://cloudinit.readthedocs.io/en/latest/topics' \
549+ '/analyze.html for more information on supported ' \
550+ 'distros.\n'
551+
552+ self.remove_dummy_file(path, log_path)
553+ self.assertEqual(err_string, data)
554+
555+ @mock.patch("cloudinit.util.is_container", return_value=True)
556+ @mock.patch('cloudinit.util.subp', return_value=('U=1000000', None))
557+ def test_container_no_ci_log_line(self, m_is_container, m_subp):
558+ path = os.path.dirname(os.path.abspath(__file__))
559+ log_path = path + '/boot-test.log'
560+ path += '/dummy.log'
561+ self.set_up_dummy_file(path, log_path)
562+
563+ parser = get_parser()
564+ args = parser.parse_args(args=['boot', '-i', path, '-o',
565+ log_path])
566+ name_default = ''
567+
568+ finish_code = analyze_boot(name_default, args)
569+
570+ self.remove_dummy_file(path, log_path)
571+ self.assertEqual(FAIL_CODE, finish_code)
572+
573+ @mock.patch("cloudinit.util.is_container", return_value=True)
574+ @mock.patch('cloudinit.util.subp', return_value=('U=1000000', None))
575+ @mock.patch('cloudinit.analyze.__main__._get_events', return_value=[{
576+ 'name': 'init-local', 'description': 'starting search', 'timestamp':
577+ 100000}])
578+ @mock.patch('cloudinit.analyze.show.dist_check_timestamp',
579+ return_value=(CONTAINER_CODE, 1, 1, 1))
580+ def test_container_ci_log_line(self, m_is_container, m_subp, m_get, m_g):
581+ path = os.path.dirname(os.path.abspath(__file__))
582+ log_path = path + '/boot-test.log'
583+ path += '/dummy.log'
584+ self.set_up_dummy_file_ci(path, log_path)
585+
586+ parser = get_parser()
587+ args = parser.parse_args(args=['boot', '-i', path, '-o',
588+ log_path])
589+ name_default = ''
590+ finish_code = analyze_boot(name_default, args)
591+
592+ self.remove_dummy_file(path, log_path)
593+ self.assertEqual(CONTAINER_CODE, finish_code)
594diff --git a/cloudinit/apport.py b/cloudinit/apport.py
595index 22cb7fd..003ff1f 100644
596--- a/cloudinit/apport.py
597+++ b/cloudinit/apport.py
598@@ -23,6 +23,7 @@ KNOWN_CLOUD_NAMES = [
599 'CloudStack',
600 'DigitalOcean',
601 'GCE - Google Compute Engine',
602+ 'Exoscale',
603 'Hetzner Cloud',
604 'IBM - (aka SoftLayer or BlueMix)',
605 'LXD',
606diff --git a/cloudinit/config/cc_apt_configure.py b/cloudinit/config/cc_apt_configure.py
607index 919d199..f01e2aa 100644
608--- a/cloudinit/config/cc_apt_configure.py
609+++ b/cloudinit/config/cc_apt_configure.py
610@@ -332,6 +332,8 @@ def apply_apt(cfg, cloud, target):
611
612
613 def debconf_set_selections(selections, target=None):
614+ if not selections.endswith(b'\n'):
615+ selections += b'\n'
616 util.subp(['debconf-set-selections'], data=selections, target=target,
617 capture=True)
618
619@@ -374,7 +376,7 @@ def apply_debconf_selections(cfg, target=None):
620
621 selections = '\n'.join(
622 [selsets[key] for key in sorted(selsets.keys())])
623- debconf_set_selections(selections.encode() + b"\n", target=target)
624+ debconf_set_selections(selections.encode(), target=target)
625
626 # get a complete list of packages listed in input
627 pkgs_cfgd = set()
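The cc_apt_configure change above moves the trailing-newline handling into debconf_set_selections itself, since debconf-set-selections expects newline-terminated input and the old call site appended b'\n' unconditionally. A sketch of the guard in isolation (ensure_trailing_newline is a hypothetical helper name):

```python
def ensure_trailing_newline(selections):
    """Append b'\\n' only when it is missing, as the patched
    debconf_set_selections now does before invoking
    debconf-set-selections."""
    if not selections.endswith(b'\n'):
        selections += b'\n'
    return selections


print(ensure_trailing_newline(b'cloud-init cloud-init/datasources multiselect Exoscale'))
print(ensure_trailing_newline(b'already terminated\n'))  # newline not doubled
```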
628diff --git a/cloudinit/config/cc_growpart.py b/cloudinit/config/cc_growpart.py
629index bafca9d..564f376 100644
630--- a/cloudinit/config/cc_growpart.py
631+++ b/cloudinit/config/cc_growpart.py
632@@ -215,7 +215,8 @@ def device_part_info(devpath):
633 # FreeBSD doesn't know of sysfs so just get everything we need from
634 # the device, like /dev/vtbd0p2.
635 if util.is_FreeBSD():
636- m = re.search('^(/dev/.+)p([0-9])$', devpath)
637+ freebsd_part = "/dev/" + util.find_freebsd_part(devpath)
638+ m = re.search('^(/dev/.+)p([0-9])$', freebsd_part)
639 return (m.group(1), m.group(2))
640
641 if not os.path.exists(syspath):
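The cc_growpart change above resolves the device via util.find_freebsd_part() before applying the existing regex, which splits a FreeBSD partition device name into disk and partition number. The regex step itself works like this (standalone sketch; the find_freebsd_part lookup is omitted):

```python
import re


def split_freebsd_partition(devpath):
    """Split a FreeBSD partition device such as /dev/vtbd0p2 into
    (disk, partition number) using the same regex as the patched
    device_part_info."""
    m = re.search('^(/dev/.+)p([0-9])$', devpath)
    if m is None:
        raise ValueError('not a FreeBSD partition device: %s' % devpath)
    return (m.group(1), m.group(2))


print(split_freebsd_partition('/dev/vtbd0p2'))  # -> ('/dev/vtbd0', '2')
```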
642diff --git a/cloudinit/config/cc_lxd.py b/cloudinit/config/cc_lxd.py
643index 71d13ed..d983077 100644
644--- a/cloudinit/config/cc_lxd.py
645+++ b/cloudinit/config/cc_lxd.py
646@@ -152,7 +152,7 @@ def handle(name, cfg, cloud, log, args):
647
648 if cmd_attach:
649 log.debug("Setting up default lxd bridge: %s" %
650- " ".join(cmd_create))
651+ " ".join(cmd_attach))
652 _lxc(cmd_attach)
653
654 elif bridge_cfg:
655diff --git a/cloudinit/config/cc_resizefs.py b/cloudinit/config/cc_resizefs.py
656index 076b9d5..afd2e06 100644
657--- a/cloudinit/config/cc_resizefs.py
658+++ b/cloudinit/config/cc_resizefs.py
659@@ -81,7 +81,7 @@ def _resize_xfs(mount_point, devpth):
660
661
662 def _resize_ufs(mount_point, devpth):
663- return ('growfs', '-y', devpth)
664+ return ('growfs', '-y', mount_point)
665
666
667 def _resize_zfs(mount_point, devpth):
668@@ -101,7 +101,7 @@ def _can_skip_resize_ufs(mount_point, devpth):
669 """
670 # dumpfs -m /
671 # newfs command for / (/dev/label/rootfs)
672- newfs -O 2 -U -a 4 -b 32768 -d 32768 -e 4096 -f 4096 -g 16384
673+ newfs -L rootf -O 2 -U -a 4 -b 32768 -d 32768 -e 4096 -f 4096 -g 16384
674 -h 64 -i 8192 -j -k 6408 -m 8 -o time -s 58719232 /dev/label/rootf
675 """
676 cur_fs_sz = None
677@@ -110,7 +110,7 @@ def _can_skip_resize_ufs(mount_point, devpth):
678 for line in dumpfs_res.splitlines():
679 if not line.startswith('#'):
680 newfs_cmd = shlex.split(line)
681- opt_value = 'O:Ua:s:b:d:e:f:g:h:i:jk:m:o:'
682+ opt_value = 'O:Ua:s:b:d:e:f:g:h:i:jk:m:o:L:'
683 optlist, _args = getopt.getopt(newfs_cmd[1:], opt_value)
684 for o, a in optlist:
685 if o == "-s":
686diff --git a/cloudinit/config/cc_set_passwords.py b/cloudinit/config/cc_set_passwords.py
687index 4585e4d..cf9b5ab 100755
688--- a/cloudinit/config/cc_set_passwords.py
689+++ b/cloudinit/config/cc_set_passwords.py
690@@ -9,27 +9,40 @@
691 """
692 Set Passwords
693 -------------
694-**Summary:** Set user passwords
695-
696-Set system passwords and enable or disable ssh password authentication.
697-The ``chpasswd`` config key accepts a dictionary containing a single one of two
698-keys, either ``expire`` or ``list``. If ``expire`` is specified and is set to
699-``false``, then the ``password`` global config key is used as the password for
700-all user accounts. If the ``expire`` key is specified and is set to ``true``
701-then user passwords will be expired, preventing the default system passwords
702-from being used.
703-
704-If the ``list`` key is provided, a list of
705-``username:password`` pairs can be specified. The usernames specified
706-must already exist on the system, or have been created using the
707-``cc_users_groups`` module. A password can be randomly generated using
708-``username:RANDOM`` or ``username:R``. A hashed password can be specified
709-using ``username:$6$salt$hash``. Password ssh authentication can be
710-enabled, disabled, or left to system defaults using ``ssh_pwauth``.
711+**Summary:** Set user passwords and enable/disable SSH password authentication
712+
713+This module consumes three top-level config keys: ``ssh_pwauth``, ``chpasswd``
714+and ``password``.
715+
716+The ``ssh_pwauth`` config key determines whether or not sshd will be configured
717+to accept password authentication. True values will enable password auth,
718+false values will disable password auth, and the literal string ``unchanged``
719+will leave it unchanged. Setting no value will also leave the current setting
720+on-disk unchanged.
721+
722+The ``chpasswd`` config key accepts a dictionary containing either or both of
723+``expire`` and ``list``.
724+
725+If the ``list`` key is provided, it should contain a list of
726+``username:password`` pairs. This can be either a YAML list (of strings), or a
727+multi-line string with one pair per line. Each user will have the
728+corresponding password set. A password can be randomly generated by specifying
729+``RANDOM`` or ``R`` as a user's password. A hashed password, created by a tool
730+like ``mkpasswd``, can be specified; a regex
731+(``r'\\$(1|2a|2y|5|6)(\\$.+){2}'``) is used to determine if a password value
732+should be treated as a hash.
733
734 .. note::
735- if using ``expire: true`` then a ssh authkey should be specified or it may
736- not be possible to login to the system
737+ The users specified must already exist on the system. Users will have been
738+ created by the ``cc_users_groups`` module at this point.
739+
740+By default, all users on the system will have their passwords expired (meaning
741+that they will have to be reset the next time the user logs in). To disable
742+this behaviour, set ``expire`` under ``chpasswd`` to a false value.
743+
744+If a ``list`` of user/password pairs is not specified under ``chpasswd``, then
745+the value of the ``password`` config key will be used to set the default user's
746+password.
747
748 **Internal name:** ``cc_set_passwords``
749
750@@ -160,6 +173,8 @@ def handle(_name, cfg, cloud, log, args):
751 hashed_users = []
752 randlist = []
753 users = []
754+ # N.B. This regex is included in the documentation (i.e. the module
755+ # docstring), so any changes to it should be reflected there.
756 prog = re.compile(r'\$(1|2a|2y|5|6)(\$.+){2}')
757 for line in plist:
758 u, p = line.split(':', 1)
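Reviewer note: the hash-detection regex that the cc_set_passwords changes above document (and annotate with the N.B. comment) can be exercised in isolation. A minimal sketch; the helper name `is_hashed` is illustrative and not part of the diff:

```python
import re

# Same pattern cc_set_passwords uses to decide whether a chpasswd list
# entry's password is already a crypt-style hash ($id$salt$hash) rather
# than plaintext to be hashed.
prog = re.compile(r'\$(1|2a|2y|5|6)(\$.+){2}')


def is_hashed(password):
    """Return True if the value looks like a crypt hash."""
    return prog.match(password) is not None


print(is_hashed('$6$saltsalt$somehashvalue'))  # True (SHA-512 crypt)
print(is_hashed('hunter2'))                    # False (plaintext)
print(is_hashed('RANDOM'))                     # False (random-password keyword)
```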
759diff --git a/cloudinit/config/cc_ssh.py b/cloudinit/config/cc_ssh.py
760index f8f7cb3..fdd8f4d 100755
761--- a/cloudinit/config/cc_ssh.py
762+++ b/cloudinit/config/cc_ssh.py
763@@ -91,6 +91,9 @@ public keys.
764 ssh_authorized_keys:
765 - ssh-rsa AAAAB3NzaC1yc2EAAAABIwAAAGEA3FSyQwBI6Z+nCSjUU ...
766 - ssh-rsa AAAAB3NzaC1yc2EAAAABIwAAAQEA3I7VUf2l5gSn5uavROsc5HRDpZ ...
767+ ssh_publish_hostkeys:
768+ enabled: <true/false> (Defaults to true)
769+ blacklist: <list of key types> (Defaults to [dsa])
770 """
771
772 import glob
773@@ -104,6 +107,10 @@ from cloudinit import util
774
775 GENERATE_KEY_NAMES = ['rsa', 'dsa', 'ecdsa', 'ed25519']
776 KEY_FILE_TPL = '/etc/ssh/ssh_host_%s_key'
777+PUBLISH_HOST_KEYS = True
778+# Don't publish the dsa hostkey by default since OpenSSH recommends not using
779+# it.
780+HOST_KEY_PUBLISH_BLACKLIST = ['dsa']
781
782 CONFIG_KEY_TO_FILE = {}
783 PRIV_TO_PUB = {}
784@@ -176,6 +183,23 @@ def handle(_name, cfg, cloud, log, _args):
785 util.logexc(log, "Failed generating key type %s to "
786 "file %s", keytype, keyfile)
787
788+ if "ssh_publish_hostkeys" in cfg:
789+ host_key_blacklist = util.get_cfg_option_list(
790+ cfg["ssh_publish_hostkeys"], "blacklist",
791+ HOST_KEY_PUBLISH_BLACKLIST)
792+ publish_hostkeys = util.get_cfg_option_bool(
793+ cfg["ssh_publish_hostkeys"], "enabled", PUBLISH_HOST_KEYS)
794+ else:
795+ host_key_blacklist = HOST_KEY_PUBLISH_BLACKLIST
796+ publish_hostkeys = PUBLISH_HOST_KEYS
797+
798+ if publish_hostkeys:
799+ hostkeys = get_public_host_keys(blacklist=host_key_blacklist)
800+ try:
801+ cloud.datasource.publish_host_keys(hostkeys)
802+ except Exception:
803+ util.logexc(log, "Publishing host keys failed!")
804+
805 try:
806 (users, _groups) = ug_util.normalize_users_groups(cfg, cloud.distro)
807 (user, _user_config) = ug_util.extract_default(users)
808@@ -209,4 +233,35 @@ def apply_credentials(keys, user, disable_root, disable_root_opts):
809
810 ssh_util.setup_user_keys(keys, 'root', options=key_prefix)
811
812+
813+def get_public_host_keys(blacklist=None):
814+ """Read host keys from /etc/ssh/*.pub files and return them as a list.
815+
816+ @param blacklist: List of key types to ignore. e.g. ['dsa', 'rsa']
817+ @returns: List of keys, each formatted as a two-element tuple.
818+ e.g. [('ssh-rsa', 'AAAAB3Nz...'), ('ssh-ed25519', 'AAAAC3Nx...')]
819+ """
820+ public_key_file_tmpl = '%s.pub' % (KEY_FILE_TPL,)
821+ key_list = []
822+ blacklist_files = []
823+ if blacklist:
824+ # Convert blacklist to filenames:
825+ # 'dsa' -> '/etc/ssh/ssh_host_dsa_key.pub'
826+ blacklist_files = [public_key_file_tmpl % (key_type,)
827+ for key_type in blacklist]
828+ # Get list of public key files and filter out blacklisted files.
829+ file_list = [hostfile for hostfile
830+ in glob.glob(public_key_file_tmpl % ('*',))
831+ if hostfile not in blacklist_files]
832+
833+ # Read host key files, retrieve first two fields as a tuple and
834+ # append that tuple to key_list.
835+ for file_name in file_list:
836+ file_contents = util.load_file(file_name)
837+ key_data = file_contents.split()
838+ if key_data and len(key_data) > 1:
839+ key_list.append(tuple(key_data[:2]))
840+ return key_list
841+
842+
843 # vi: ts=4 expandtab
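Reviewer note: the new `get_public_host_keys()` reads each `/etc/ssh/ssh_host_*_key.pub` file, which holds `<type> <base64> [comment]`, and publishes only the first two fields. A minimal sketch of that parsing step, with `parse_host_key` as an illustrative stand-in name:

```python
# Mirrors the field extraction in get_public_host_keys(): split the file
# contents on whitespace and keep the (key-type, key-data) pair, dropping
# any trailing comment. Returns None for empty/malformed files.
def parse_host_key(file_contents):
    key_data = file_contents.split()
    if key_data and len(key_data) > 1:
        return tuple(key_data[:2])
    return None


print(parse_host_key('ssh-ed25519 AAAAC3NzaC1lZDI root@host'))
# ('ssh-ed25519', 'AAAAC3NzaC1lZDI')
print(parse_host_key(''))  # None
```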
844diff --git a/cloudinit/config/cc_ubuntu_advantage.py b/cloudinit/config/cc_ubuntu_advantage.py
845index f488123..f846e9a 100644
846--- a/cloudinit/config/cc_ubuntu_advantage.py
847+++ b/cloudinit/config/cc_ubuntu_advantage.py
848@@ -36,7 +36,7 @@ schema = {
849 """),
850 'distros': distros,
851 'examples': [dedent("""\
852- # Attach the machine to a Ubuntu Advantage support contract with a
853+ # Attach the machine to an Ubuntu Advantage support contract with a
854 # UA contract token obtained from %s.
855 ubuntu_advantage:
856 token: <ua_contract_token>
857diff --git a/cloudinit/config/cc_ubuntu_drivers.py b/cloudinit/config/cc_ubuntu_drivers.py
858index 91feb60..297451d 100644
859--- a/cloudinit/config/cc_ubuntu_drivers.py
860+++ b/cloudinit/config/cc_ubuntu_drivers.py
861@@ -2,12 +2,14 @@
862
863 """Ubuntu Drivers: Interact with third party drivers in Ubuntu."""
864
865+import os
866 from textwrap import dedent
867
868 from cloudinit.config.schema import (
869 get_schema_doc, validate_cloudconfig_schema)
870 from cloudinit import log as logging
871 from cloudinit.settings import PER_INSTANCE
872+from cloudinit import temp_utils
873 from cloudinit import type_utils
874 from cloudinit import util
875
876@@ -64,6 +66,33 @@ OLD_UBUNTU_DRIVERS_STDERR_NEEDLE = (
877 __doc__ = get_schema_doc(schema) # Supplement python help()
878
879
880+# Use a debconf template to configure a global debconf variable
881+# (linux/nvidia/latelink) setting this to "true" allows the
882+# 'linux-restricted-modules' deb to accept the NVIDIA EULA and the package
883+# will automatically link the drivers to the running kernel.
884+
885+# EOL_XENIAL: can then drop this script and use python3-debconf which is only
886+# available in Bionic and later. Can't use python3-debconf currently as it
887+# isn't in Xenial and doesn't yet support X_LOADTEMPLATEFILE debconf command.
888+
889+NVIDIA_DEBCONF_CONTENT = """\
890+Template: linux/nvidia/latelink
891+Type: boolean
892+Default: true
893+Description: Late-link NVIDIA kernel modules?
894+ Enable this to link the NVIDIA kernel modules in cloud-init and
895+ make them available for use.
896+"""
897+
898+NVIDIA_DRIVER_LATELINK_DEBCONF_SCRIPT = """\
899+#!/bin/sh
900+# Allow cloud-init to trigger EULA acceptance via registering a debconf
901+# template to set linux/nvidia/latelink true
902+. /usr/share/debconf/confmodule
903+db_x_loadtemplatefile "$1" cloud-init
904+"""
905+
906+
907 def install_drivers(cfg, pkg_install_func):
908 if not isinstance(cfg, dict):
909 raise TypeError(
910@@ -89,9 +118,28 @@ def install_drivers(cfg, pkg_install_func):
911 if version_cfg:
912 driver_arg += ':{}'.format(version_cfg)
913
914- LOG.debug("Installing NVIDIA drivers (%s=%s, version=%s)",
915+ LOG.debug("Installing and activating NVIDIA drivers (%s=%s, version=%s)",
916 cfgpath, nv_acc, version_cfg if version_cfg else 'latest')
917
918+ # Register and set debconf selection linux/nvidia/latelink = true
919+ tdir = temp_utils.mkdtemp(needs_exe=True)
920+ debconf_file = os.path.join(tdir, 'nvidia.template')
921+ debconf_script = os.path.join(tdir, 'nvidia-debconf.sh')
922+ try:
923+ util.write_file(debconf_file, NVIDIA_DEBCONF_CONTENT)
924+ util.write_file(
925+ debconf_script,
926+ util.encode_text(NVIDIA_DRIVER_LATELINK_DEBCONF_SCRIPT),
927+ mode=0o755)
928+ util.subp([debconf_script, debconf_file])
929+ except Exception as e:
930+ util.logexc(
931+ LOG, "Failed to register NVIDIA debconf template: %s", str(e))
932+ raise
933+ finally:
934+ if os.path.isdir(tdir):
935+ util.del_dir(tdir)
936+
937 try:
938 util.subp(['ubuntu-drivers', 'install', '--gpgpu', driver_arg])
939 except util.ProcessExecutionError as exc:
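Reviewer note: the debconf-registration block added to `install_drivers` follows a write/run/cleanup shape in which the temp directory is removed in `finally:` whether or not the script succeeds. A stdlib-only sketch of that shape (using `tempfile`/`shutil` in place of cloud-init's `temp_utils`/`util` helpers, and skipping the actual `util.subp` call):

```python
import os
import shutil
import tempfile

# Write the debconf template into a fresh temp dir, do the work, then
# remove the dir in finally: so neither success nor an exception leaks
# files - the same guarantee the new install_drivers code provides.
template = 'Template: linux/nvidia/latelink\nType: boolean\nDefault: true\n'
tdir = tempfile.mkdtemp()
debconf_file = os.path.join(tdir, 'nvidia.template')
try:
    with open(debconf_file, 'w') as f:
        f.write(template)
    # Real module: util.subp([debconf_script, debconf_file]) runs here.
finally:
    if os.path.isdir(tdir):
        shutil.rmtree(tdir)

print(os.path.isdir(tdir))  # False
```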
940diff --git a/cloudinit/config/tests/test_ssh.py b/cloudinit/config/tests/test_ssh.py
941index c8a4271..e778984 100644
942--- a/cloudinit/config/tests/test_ssh.py
943+++ b/cloudinit/config/tests/test_ssh.py
944@@ -1,5 +1,6 @@
945 # This file is part of cloud-init. See LICENSE file for license information.
946
947+import os.path
948
949 from cloudinit.config import cc_ssh
950 from cloudinit import ssh_util
951@@ -12,6 +13,25 @@ MODPATH = "cloudinit.config.cc_ssh."
952 class TestHandleSsh(CiTestCase):
953 """Test cc_ssh handling of ssh config."""
954
955+ def _publish_hostkey_test_setup(self):
956+ self.test_hostkeys = {
957+ 'dsa': ('ssh-dss', 'AAAAB3NzaC1kc3MAAACB'),
958+ 'ecdsa': ('ecdsa-sha2-nistp256', 'AAAAE2VjZ'),
959+ 'ed25519': ('ssh-ed25519', 'AAAAC3NzaC1lZDI'),
960+ 'rsa': ('ssh-rsa', 'AAAAB3NzaC1yc2EAAA'),
961+ }
962+ self.test_hostkey_files = []
963+ hostkey_tmpdir = self.tmp_dir()
964+ for key_type in ['dsa', 'ecdsa', 'ed25519', 'rsa']:
965+ key_data = self.test_hostkeys[key_type]
966+ filename = 'ssh_host_%s_key.pub' % key_type
967+ filepath = os.path.join(hostkey_tmpdir, filename)
968+ self.test_hostkey_files.append(filepath)
969+ with open(filepath, 'w') as f:
970+ f.write(' '.join(key_data))
971+
972+ cc_ssh.KEY_FILE_TPL = os.path.join(hostkey_tmpdir, 'ssh_host_%s_key')
973+
974 def test_apply_credentials_with_user(self, m_setup_keys):
975 """Apply keys for the given user and root."""
976 keys = ["key1"]
977@@ -64,6 +84,7 @@ class TestHandleSsh(CiTestCase):
978 # Mock os.path.exists to True to short-circuit the key writing logic
979 m_path_exists.return_value = True
980 m_nug.return_value = ([], {})
981+ cc_ssh.PUBLISH_HOST_KEYS = False
982 cloud = self.tmp_cloud(
983 distro='ubuntu', metadata={'public-keys': keys})
984 cc_ssh.handle("name", cfg, cloud, None, None)
985@@ -149,3 +170,148 @@ class TestHandleSsh(CiTestCase):
986 self.assertEqual([mock.call(set(keys), user),
987 mock.call(set(keys), "root", options="")],
988 m_setup_keys.call_args_list)
989+
990+ @mock.patch(MODPATH + "glob.glob")
991+ @mock.patch(MODPATH + "ug_util.normalize_users_groups")
992+ @mock.patch(MODPATH + "os.path.exists")
993+ def test_handle_publish_hostkeys_default(
994+ self, m_path_exists, m_nug, m_glob, m_setup_keys):
995+ """Test handle with various configs for ssh_publish_hostkeys."""
996+ self._publish_hostkey_test_setup()
997+ cc_ssh.PUBLISH_HOST_KEYS = True
998+ keys = ["key1"]
999+ user = "clouduser"
1000+ # Return no matching keys for first glob, test keys for second.
1001+ m_glob.side_effect = iter([
1002+ [],
1003+ self.test_hostkey_files,
1004+ ])
1005+ # Mock os.path.exists to True to short-circuit the key writing logic
1006+ m_path_exists.return_value = True
1007+ m_nug.return_value = ({user: {"default": user}}, {})
1008+ cloud = self.tmp_cloud(
1009+ distro='ubuntu', metadata={'public-keys': keys})
1010+ cloud.datasource.publish_host_keys = mock.Mock()
1011+
1012+ cfg = {}
1013+ expected_call = [self.test_hostkeys[key_type] for key_type
1014+ in ['ecdsa', 'ed25519', 'rsa']]
1015+ cc_ssh.handle("name", cfg, cloud, None, None)
1016+ self.assertEqual([mock.call(expected_call)],
1017+ cloud.datasource.publish_host_keys.call_args_list)
1018+
1019+ @mock.patch(MODPATH + "glob.glob")
1020+ @mock.patch(MODPATH + "ug_util.normalize_users_groups")
1021+ @mock.patch(MODPATH + "os.path.exists")
1022+ def test_handle_publish_hostkeys_config_enable(
1023+ self, m_path_exists, m_nug, m_glob, m_setup_keys):
1024+ """Test handle with various configs for ssh_publish_hostkeys."""
1025+ self._publish_hostkey_test_setup()
1026+ cc_ssh.PUBLISH_HOST_KEYS = False
1027+ keys = ["key1"]
1028+ user = "clouduser"
1029+ # Return no matching keys for first glob, test keys for second.
1030+ m_glob.side_effect = iter([
1031+ [],
1032+ self.test_hostkey_files,
1033+ ])
1034+ # Mock os.path.exists to True to short-circuit the key writing logic
1035+ m_path_exists.return_value = True
1036+ m_nug.return_value = ({user: {"default": user}}, {})
1037+ cloud = self.tmp_cloud(
1038+ distro='ubuntu', metadata={'public-keys': keys})
1039+ cloud.datasource.publish_host_keys = mock.Mock()
1040+
1041+ cfg = {'ssh_publish_hostkeys': {'enabled': True}}
1042+ expected_call = [self.test_hostkeys[key_type] for key_type
1043+ in ['ecdsa', 'ed25519', 'rsa']]
1044+ cc_ssh.handle("name", cfg, cloud, None, None)
1045+ self.assertEqual([mock.call(expected_call)],
1046+ cloud.datasource.publish_host_keys.call_args_list)
1047+
1048+ @mock.patch(MODPATH + "glob.glob")
1049+ @mock.patch(MODPATH + "ug_util.normalize_users_groups")
1050+ @mock.patch(MODPATH + "os.path.exists")
1051+ def test_handle_publish_hostkeys_config_disable(
1052+ self, m_path_exists, m_nug, m_glob, m_setup_keys):
1053+ """Test handle with various configs for ssh_publish_hostkeys."""
1054+ self._publish_hostkey_test_setup()
1055+ cc_ssh.PUBLISH_HOST_KEYS = True
1056+ keys = ["key1"]
1057+ user = "clouduser"
1058+ # Return no matching keys for first glob, test keys for second.
1059+ m_glob.side_effect = iter([
1060+ [],
1061+ self.test_hostkey_files,
1062+ ])
1063+ # Mock os.path.exists to True to short-circuit the key writing logic
1064+ m_path_exists.return_value = True
1065+ m_nug.return_value = ({user: {"default": user}}, {})
1066+ cloud = self.tmp_cloud(
1067+ distro='ubuntu', metadata={'public-keys': keys})
1068+ cloud.datasource.publish_host_keys = mock.Mock()
1069+
1070+ cfg = {'ssh_publish_hostkeys': {'enabled': False}}
1071+ cc_ssh.handle("name", cfg, cloud, None, None)
1072+ self.assertFalse(cloud.datasource.publish_host_keys.call_args_list)
1073+ cloud.datasource.publish_host_keys.assert_not_called()
1074+
1075+ @mock.patch(MODPATH + "glob.glob")
1076+ @mock.patch(MODPATH + "ug_util.normalize_users_groups")
1077+ @mock.patch(MODPATH + "os.path.exists")
1078+ def test_handle_publish_hostkeys_config_blacklist(
1079+ self, m_path_exists, m_nug, m_glob, m_setup_keys):
1080+ """Test handle with various configs for ssh_publish_hostkeys."""
1081+ self._publish_hostkey_test_setup()
1082+ cc_ssh.PUBLISH_HOST_KEYS = True
1083+ keys = ["key1"]
1084+ user = "clouduser"
1085+ # Return no matching keys for first glob, test keys for second.
1086+ m_glob.side_effect = iter([
1087+ [],
1088+ self.test_hostkey_files,
1089+ ])
1090+ # Mock os.path.exists to True to short-circuit the key writing logic
1091+ m_path_exists.return_value = True
1092+ m_nug.return_value = ({user: {"default": user}}, {})
1093+ cloud = self.tmp_cloud(
1094+ distro='ubuntu', metadata={'public-keys': keys})
1095+ cloud.datasource.publish_host_keys = mock.Mock()
1096+
1097+ cfg = {'ssh_publish_hostkeys': {'enabled': True,
1098+ 'blacklist': ['dsa', 'rsa']}}
1099+ expected_call = [self.test_hostkeys[key_type] for key_type
1100+ in ['ecdsa', 'ed25519']]
1101+ cc_ssh.handle("name", cfg, cloud, None, None)
1102+ self.assertEqual([mock.call(expected_call)],
1103+ cloud.datasource.publish_host_keys.call_args_list)
1104+
1105+ @mock.patch(MODPATH + "glob.glob")
1106+ @mock.patch(MODPATH + "ug_util.normalize_users_groups")
1107+ @mock.patch(MODPATH + "os.path.exists")
1108+ def test_handle_publish_hostkeys_empty_blacklist(
1109+ self, m_path_exists, m_nug, m_glob, m_setup_keys):
1110+ """Test handle with various configs for ssh_publish_hostkeys."""
1111+ self._publish_hostkey_test_setup()
1112+ cc_ssh.PUBLISH_HOST_KEYS = True
1113+ keys = ["key1"]
1114+ user = "clouduser"
1115+ # Return no matching keys for first glob, test keys for second.
1116+ m_glob.side_effect = iter([
1117+ [],
1118+ self.test_hostkey_files,
1119+ ])
1120+ # Mock os.path.exists to True to short-circuit the key writing logic
1121+ m_path_exists.return_value = True
1122+ m_nug.return_value = ({user: {"default": user}}, {})
1123+ cloud = self.tmp_cloud(
1124+ distro='ubuntu', metadata={'public-keys': keys})
1125+ cloud.datasource.publish_host_keys = mock.Mock()
1126+
1127+ cfg = {'ssh_publish_hostkeys': {'enabled': True,
1128+ 'blacklist': []}}
1129+ expected_call = [self.test_hostkeys[key_type] for key_type
1130+ in ['dsa', 'ecdsa', 'ed25519', 'rsa']]
1131+ cc_ssh.handle("name", cfg, cloud, None, None)
1132+ self.assertEqual([mock.call(expected_call)],
1133+ cloud.datasource.publish_host_keys.call_args_list)
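Reviewer note: the blacklist tests above depend on how `get_public_host_keys()` turns key types into filenames before filtering the glob results. The conversion, sketched with the module's real `KEY_FILE_TPL` value:

```python
# 'dsa' -> '/etc/ssh/ssh_host_dsa_key.pub', as in cc_ssh.py.
KEY_FILE_TPL = '/etc/ssh/ssh_host_%s_key'
public_key_file_tmpl = '%s.pub' % (KEY_FILE_TPL,)

blacklist_files = [public_key_file_tmpl % (key_type,)
                   for key_type in ['dsa', 'rsa']]
print(blacklist_files)
# ['/etc/ssh/ssh_host_dsa_key.pub', '/etc/ssh/ssh_host_rsa_key.pub']
```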
1134diff --git a/cloudinit/config/tests/test_ubuntu_drivers.py b/cloudinit/config/tests/test_ubuntu_drivers.py
1135index efba4ce..4695269 100644
1136--- a/cloudinit/config/tests/test_ubuntu_drivers.py
1137+++ b/cloudinit/config/tests/test_ubuntu_drivers.py
1138@@ -1,6 +1,7 @@
1139 # This file is part of cloud-init. See LICENSE file for license information.
1140
1141 import copy
1142+import os
1143
1144 from cloudinit.tests.helpers import CiTestCase, skipUnlessJsonSchema, mock
1145 from cloudinit.config.schema import (
1146@@ -9,11 +10,27 @@ from cloudinit.config import cc_ubuntu_drivers as drivers
1147 from cloudinit.util import ProcessExecutionError
1148
1149 MPATH = "cloudinit.config.cc_ubuntu_drivers."
1150+M_TMP_PATH = MPATH + "temp_utils.mkdtemp"
1151 OLD_UBUNTU_DRIVERS_ERROR_STDERR = (
1152 "ubuntu-drivers: error: argument <command>: invalid choice: 'install' "
1153 "(choose from 'list', 'autoinstall', 'devices', 'debug')\n")
1154
1155
1156+class AnyTempScriptAndDebconfFile(object):
1157+
1158+ def __init__(self, tmp_dir, debconf_file):
1159+ self.tmp_dir = tmp_dir
1160+ self.debconf_file = debconf_file
1161+
1162+ def __eq__(self, cmd):
1163+ if not len(cmd) == 2:
1164+ return False
1165+ script, debconf_file = cmd
1166+ if bool(script.startswith(self.tmp_dir) and script.endswith('.sh')):
1167+ return debconf_file == self.debconf_file
1168+ return False
1169+
1170+
1171 class TestUbuntuDrivers(CiTestCase):
1172 cfg_accepted = {'drivers': {'nvidia': {'license-accepted': True}}}
1173 install_gpgpu = ['ubuntu-drivers', 'install', '--gpgpu', 'nvidia']
1174@@ -28,16 +45,23 @@ class TestUbuntuDrivers(CiTestCase):
1175 {'drivers': {'nvidia': {'license-accepted': "TRUE"}}},
1176 schema=drivers.schema, strict=True)
1177
1178+ @mock.patch(M_TMP_PATH)
1179 @mock.patch(MPATH + "util.subp", return_value=('', ''))
1180 @mock.patch(MPATH + "util.which", return_value=False)
1181- def _assert_happy_path_taken(self, config, m_which, m_subp):
1182+ def _assert_happy_path_taken(
1183+ self, config, m_which, m_subp, m_tmp):
1184 """Positive path test through handle. Package should be installed."""
1185+ tdir = self.tmp_dir()
1186+ debconf_file = os.path.join(tdir, 'nvidia.template')
1187+ m_tmp.return_value = tdir
1188 myCloud = mock.MagicMock()
1189 drivers.handle('ubuntu_drivers', config, myCloud, None, None)
1190 self.assertEqual([mock.call(['ubuntu-drivers-common'])],
1191 myCloud.distro.install_packages.call_args_list)
1192- self.assertEqual([mock.call(self.install_gpgpu)],
1193- m_subp.call_args_list)
1194+ self.assertEqual(
1195+ [mock.call(AnyTempScriptAndDebconfFile(tdir, debconf_file)),
1196+ mock.call(self.install_gpgpu)],
1197+ m_subp.call_args_list)
1198
1199 def test_handle_does_package_install(self):
1200 self._assert_happy_path_taken(self.cfg_accepted)
1201@@ -48,19 +72,33 @@ class TestUbuntuDrivers(CiTestCase):
1202 new_config['drivers']['nvidia']['license-accepted'] = true_value
1203 self._assert_happy_path_taken(new_config)
1204
1205- @mock.patch(MPATH + "util.subp", side_effect=ProcessExecutionError(
1206- stdout='No drivers found for installation.\n', exit_code=1))
1207+ @mock.patch(M_TMP_PATH)
1208+ @mock.patch(MPATH + "util.subp")
1209 @mock.patch(MPATH + "util.which", return_value=False)
1210- def test_handle_raises_error_if_no_drivers_found(self, m_which, m_subp):
1211+ def test_handle_raises_error_if_no_drivers_found(
1212+ self, m_which, m_subp, m_tmp):
1213 """If ubuntu-drivers doesn't install any drivers, raise an error."""
1214+ tdir = self.tmp_dir()
1215+ debconf_file = os.path.join(tdir, 'nvidia.template')
1216+ m_tmp.return_value = tdir
1217 myCloud = mock.MagicMock()
1218+
1219+ def fake_subp(cmd):
1220+ if cmd[0].startswith(tdir):
1221+ return
1222+ raise ProcessExecutionError(
1223+ stdout='No drivers found for installation.\n', exit_code=1)
1224+ m_subp.side_effect = fake_subp
1225+
1226 with self.assertRaises(Exception):
1227 drivers.handle(
1228 'ubuntu_drivers', self.cfg_accepted, myCloud, None, None)
1229 self.assertEqual([mock.call(['ubuntu-drivers-common'])],
1230 myCloud.distro.install_packages.call_args_list)
1231- self.assertEqual([mock.call(self.install_gpgpu)],
1232- m_subp.call_args_list)
1233+ self.assertEqual(
1234+ [mock.call(AnyTempScriptAndDebconfFile(tdir, debconf_file)),
1235+ mock.call(self.install_gpgpu)],
1236+ m_subp.call_args_list)
1237 self.assertIn('ubuntu-drivers found no drivers for installation',
1238 self.logs.getvalue())
1239
1240@@ -108,18 +146,25 @@ class TestUbuntuDrivers(CiTestCase):
1241 myLog.debug.call_args_list[0][0][0])
1242 self.assertEqual(0, m_install_drivers.call_count)
1243
1244+ @mock.patch(M_TMP_PATH)
1245 @mock.patch(MPATH + "util.subp", return_value=('', ''))
1246 @mock.patch(MPATH + "util.which", return_value=True)
1247- def test_install_drivers_no_install_if_present(self, m_which, m_subp):
1248+ def test_install_drivers_no_install_if_present(
1249+ self, m_which, m_subp, m_tmp):
1250 """If 'ubuntu-drivers' is present, no package install should occur."""
1251+ tdir = self.tmp_dir()
1252+ debconf_file = os.path.join(tdir, 'nvidia.template')
1253+ m_tmp.return_value = tdir
1254 pkg_install = mock.MagicMock()
1255 drivers.install_drivers(self.cfg_accepted['drivers'],
1256 pkg_install_func=pkg_install)
1257 self.assertEqual(0, pkg_install.call_count)
1258 self.assertEqual([mock.call('ubuntu-drivers')],
1259 m_which.call_args_list)
1260- self.assertEqual([mock.call(self.install_gpgpu)],
1261- m_subp.call_args_list)
1262+ self.assertEqual(
1263+ [mock.call(AnyTempScriptAndDebconfFile(tdir, debconf_file)),
1264+ mock.call(self.install_gpgpu)],
1265+ m_subp.call_args_list)
1266
1267 def test_install_drivers_rejects_invalid_config(self):
1268 """install_drivers should raise TypeError if not given a config dict"""
1269@@ -128,20 +173,33 @@ class TestUbuntuDrivers(CiTestCase):
1270 drivers.install_drivers("mystring", pkg_install_func=pkg_install)
1271 self.assertEqual(0, pkg_install.call_count)
1272
1273- @mock.patch(MPATH + "util.subp", side_effect=ProcessExecutionError(
1274- stderr=OLD_UBUNTU_DRIVERS_ERROR_STDERR, exit_code=2))
1275+ @mock.patch(M_TMP_PATH)
1276+ @mock.patch(MPATH + "util.subp")
1277 @mock.patch(MPATH + "util.which", return_value=False)
1278 def test_install_drivers_handles_old_ubuntu_drivers_gracefully(
1279- self, m_which, m_subp):
1280+ self, m_which, m_subp, m_tmp):
1281 """Older ubuntu-drivers versions should emit message and raise error"""
1282+ tdir = self.tmp_dir()
1283+ debconf_file = os.path.join(tdir, 'nvidia.template')
1284+ m_tmp.return_value = tdir
1285 myCloud = mock.MagicMock()
1286+
1287+ def fake_subp(cmd):
1288+ if cmd[0].startswith(tdir):
1289+ return
1290+ raise ProcessExecutionError(
1291+ stderr=OLD_UBUNTU_DRIVERS_ERROR_STDERR, exit_code=2)
1292+ m_subp.side_effect = fake_subp
1293+
1294 with self.assertRaises(Exception):
1295 drivers.handle(
1296 'ubuntu_drivers', self.cfg_accepted, myCloud, None, None)
1297 self.assertEqual([mock.call(['ubuntu-drivers-common'])],
1298 myCloud.distro.install_packages.call_args_list)
1299- self.assertEqual([mock.call(self.install_gpgpu)],
1300- m_subp.call_args_list)
1301+ self.assertEqual(
1302+ [mock.call(AnyTempScriptAndDebconfFile(tdir, debconf_file)),
1303+ mock.call(self.install_gpgpu)],
1304+ m_subp.call_args_list)
1305 self.assertIn('WARNING: the available version of ubuntu-drivers is'
1306 ' too old to perform requested driver installation',
1307 self.logs.getvalue())
1308@@ -153,16 +211,21 @@ class TestUbuntuDriversWithVersion(TestUbuntuDrivers):
1309 'drivers': {'nvidia': {'license-accepted': True, 'version': '123'}}}
1310 install_gpgpu = ['ubuntu-drivers', 'install', '--gpgpu', 'nvidia:123']
1311
1312+ @mock.patch(M_TMP_PATH)
1313 @mock.patch(MPATH + "util.subp", return_value=('', ''))
1314 @mock.patch(MPATH + "util.which", return_value=False)
1315- def test_version_none_uses_latest(self, m_which, m_subp):
1316+ def test_version_none_uses_latest(self, m_which, m_subp, m_tmp):
1317+ tdir = self.tmp_dir()
1318+ debconf_file = os.path.join(tdir, 'nvidia.template')
1319+ m_tmp.return_value = tdir
1320 myCloud = mock.MagicMock()
1321 version_none_cfg = {
1322 'drivers': {'nvidia': {'license-accepted': True, 'version': None}}}
1323 drivers.handle(
1324 'ubuntu_drivers', version_none_cfg, myCloud, None, None)
1325 self.assertEqual(
1326- [mock.call(['ubuntu-drivers', 'install', '--gpgpu', 'nvidia'])],
1327+ [mock.call(AnyTempScriptAndDebconfFile(tdir, debconf_file)),
1328+ mock.call(['ubuntu-drivers', 'install', '--gpgpu', 'nvidia'])],
1329 m_subp.call_args_list)
1330
1331 def test_specifying_a_version_doesnt_override_license_acceptance(self):
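Reviewer note: the `AnyTempScriptAndDebconfFile` matcher above works because `mock.call` equality falls back to `__eq__` on the expected argument, so a custom object can match "any `.sh` script under this temp dir plus this exact debconf file". A condensed, self-contained version of the same idea (paths are illustrative):

```python
class AnyTempScriptAndDebconfFile(object):
    """Equality matcher for the [script, debconf_file] subp argument."""

    def __init__(self, tmp_dir, debconf_file):
        self.tmp_dir = tmp_dir
        self.debconf_file = debconf_file

    def __eq__(self, cmd):
        if len(cmd) != 2:
            return False
        script, debconf_file = cmd
        # Accept any generated .sh path inside tmp_dir.
        return (script.startswith(self.tmp_dir) and script.endswith('.sh')
                and debconf_file == self.debconf_file)


matcher = AnyTempScriptAndDebconfFile('/tmp/x', '/tmp/x/nvidia.template')
print(matcher == ['/tmp/x/nvidia-debconf.sh', '/tmp/x/nvidia.template'])  # True
print(matcher == ['/bin/sh', '/tmp/x/nvidia.template'])  # False
```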
1332diff --git a/cloudinit/distros/__init__.py b/cloudinit/distros/__init__.py
1333index 20c994d..00bdee3 100644
1334--- a/cloudinit/distros/__init__.py
1335+++ b/cloudinit/distros/__init__.py
1336@@ -396,16 +396,16 @@ class Distro(object):
1337 else:
1338 create_groups = True
1339
1340- adduser_cmd = ['useradd', name]
1341- log_adduser_cmd = ['useradd', name]
1342+ useradd_cmd = ['useradd', name]
1343+ log_useradd_cmd = ['useradd', name]
1344 if util.system_is_snappy():
1345- adduser_cmd.append('--extrausers')
1346- log_adduser_cmd.append('--extrausers')
1347+ useradd_cmd.append('--extrausers')
1348+ log_useradd_cmd.append('--extrausers')
1349
1350 # Since we are creating users, we want to carefully validate the
1351 # inputs. If something goes wrong, we can end up with a system
1352 # that nobody can login to.
1353- adduser_opts = {
1354+ useradd_opts = {
1355 "gecos": '--comment',
1356 "homedir": '--home',
1357 "primary_group": '--gid',
1358@@ -418,7 +418,7 @@ class Distro(object):
1359 "selinux_user": '--selinux-user',
1360 }
1361
1362- adduser_flags = {
1363+ useradd_flags = {
1364 "no_user_group": '--no-user-group',
1365 "system": '--system',
1366 "no_log_init": '--no-log-init',
1367@@ -453,32 +453,32 @@ class Distro(object):
1368 # Check the values and create the command
1369 for key, val in sorted(kwargs.items()):
1370
1371- if key in adduser_opts and val and isinstance(val, str):
1372- adduser_cmd.extend([adduser_opts[key], val])
1373+ if key in useradd_opts and val and isinstance(val, str):
1374+ useradd_cmd.extend([useradd_opts[key], val])
1375
1376 # Redact certain fields from the logs
1377 if key in redact_opts:
1378- log_adduser_cmd.extend([adduser_opts[key], 'REDACTED'])
1379+ log_useradd_cmd.extend([useradd_opts[key], 'REDACTED'])
1380 else:
1381- log_adduser_cmd.extend([adduser_opts[key], val])
1382+ log_useradd_cmd.extend([useradd_opts[key], val])
1383
1384- elif key in adduser_flags and val:
1385- adduser_cmd.append(adduser_flags[key])
1386- log_adduser_cmd.append(adduser_flags[key])
1387+ elif key in useradd_flags and val:
1388+ useradd_cmd.append(useradd_flags[key])
1389+ log_useradd_cmd.append(useradd_flags[key])
1390
1391 # Don't create the home directory if directed so or if the user is a
1392 # system user
1393 if kwargs.get('no_create_home') or kwargs.get('system'):
1394- adduser_cmd.append('-M')
1395- log_adduser_cmd.append('-M')
1396+ useradd_cmd.append('-M')
1397+ log_useradd_cmd.append('-M')
1398 else:
1399- adduser_cmd.append('-m')
1400- log_adduser_cmd.append('-m')
1401+ useradd_cmd.append('-m')
1402+ log_useradd_cmd.append('-m')
1403
1404 # Run the command
1405 LOG.debug("Adding user %s", name)
1406 try:
1407- util.subp(adduser_cmd, logstring=log_adduser_cmd)
1408+ util.subp(useradd_cmd, logstring=log_useradd_cmd)
1409 except Exception as e:
1410 util.logexc(LOG, "Failed to create user %s", name)
1411 raise e
1412@@ -490,15 +490,15 @@ class Distro(object):
1413
1414 snapuser = kwargs.get('snapuser')
1415 known = kwargs.get('known', False)
1416- adduser_cmd = ["snap", "create-user", "--sudoer", "--json"]
1417+ create_user_cmd = ["snap", "create-user", "--sudoer", "--json"]
1418 if known:
1419- adduser_cmd.append("--known")
1420- adduser_cmd.append(snapuser)
1421+ create_user_cmd.append("--known")
1422+ create_user_cmd.append(snapuser)
1423
1424 # Run the command
1425 LOG.debug("Adding snap user %s", name)
1426 try:
1427- (out, err) = util.subp(adduser_cmd, logstring=adduser_cmd,
1428+ (out, err) = util.subp(create_user_cmd, logstring=create_user_cmd,
1429 capture=True)
1430 LOG.debug("snap create-user returned: %s:%s", out, err)
1431 jobj = util.load_json(out)
1432diff --git a/cloudinit/distros/arch.py b/cloudinit/distros/arch.py
1433index b814c8b..9f89c5f 100644
1434--- a/cloudinit/distros/arch.py
1435+++ b/cloudinit/distros/arch.py
1436@@ -12,6 +12,8 @@ from cloudinit import util
1437 from cloudinit.distros import net_util
1438 from cloudinit.distros.parsers.hostname import HostnameConf
1439
1440+from cloudinit.net.renderers import RendererNotFoundError
1441+
1442 from cloudinit.settings import PER_INSTANCE
1443
1444 import os
1445@@ -24,6 +26,11 @@ class Distro(distros.Distro):
1446 network_conf_dir = "/etc/netctl"
1447 resolve_conf_fn = "/etc/resolv.conf"
1448 init_cmd = ['systemctl'] # init scripts
1449+ renderer_configs = {
1450+ "netplan": {"netplan_path": "/etc/netplan/50-cloud-init.yaml",
1451+ "netplan_header": "# generated by cloud-init\n",
1452+ "postcmds": True}
1453+ }
1454
1455 def __init__(self, name, cfg, paths):
1456 distros.Distro.__init__(self, name, cfg, paths)
1457@@ -50,6 +57,13 @@ class Distro(distros.Distro):
1458 self.update_package_sources()
1459 self.package_command('', pkgs=pkglist)
1460
1461+ def _write_network_config(self, netconfig):
1462+ try:
1463+ return self._supported_write_network_config(netconfig)
1464+ except RendererNotFoundError:
1465+ # Fall back to old _write_network
1466+ raise NotImplementedError
1467+
1468 def _write_network(self, settings):
1469 entries = net_util.translate_network(settings)
1470 LOG.debug("Translated ubuntu style network settings %s into %s",
1471diff --git a/cloudinit/distros/debian.py b/cloudinit/distros/debian.py
1472index d517fb8..0ad93ff 100644
1473--- a/cloudinit/distros/debian.py
1474+++ b/cloudinit/distros/debian.py
1475@@ -36,14 +36,14 @@ ENI_HEADER = """# This file is generated from information provided by
1476 # network: {config: disabled}
1477 """
1478
1479-NETWORK_CONF_FN = "/etc/network/interfaces.d/50-cloud-init.cfg"
1480+NETWORK_CONF_FN = "/etc/network/interfaces.d/50-cloud-init"
1481 LOCALE_CONF_FN = "/etc/default/locale"
1482
1483
1484 class Distro(distros.Distro):
1485 hostname_conf_fn = "/etc/hostname"
1486 network_conf_fn = {
1487- "eni": "/etc/network/interfaces.d/50-cloud-init.cfg",
1488+ "eni": "/etc/network/interfaces.d/50-cloud-init",
1489 "netplan": "/etc/netplan/50-cloud-init.yaml"
1490 }
1491 renderer_configs = {
1492diff --git a/cloudinit/distros/freebsd.py b/cloudinit/distros/freebsd.py
1493index ff22d56..f7825fd 100644
1494--- a/cloudinit/distros/freebsd.py
1495+++ b/cloudinit/distros/freebsd.py
1496@@ -185,10 +185,10 @@ class Distro(distros.Distro):
1497 LOG.info("User %s already exists, skipping.", name)
1498 return False
1499
1500- adduser_cmd = ['pw', 'useradd', '-n', name]
1501- log_adduser_cmd = ['pw', 'useradd', '-n', name]
1502+ pw_useradd_cmd = ['pw', 'useradd', '-n', name]
1503+ log_pw_useradd_cmd = ['pw', 'useradd', '-n', name]
1504
1505- adduser_opts = {
1506+ pw_useradd_opts = {
1507 "homedir": '-d',
1508 "gecos": '-c',
1509 "primary_group": '-g',
1510@@ -196,34 +196,34 @@ class Distro(distros.Distro):
1511 "shell": '-s',
1512 "inactive": '-E',
1513 }
1514- adduser_flags = {
1515+ pw_useradd_flags = {
1516 "no_user_group": '--no-user-group',
1517 "system": '--system',
1518 "no_log_init": '--no-log-init',
1519 }
1520
1521 for key, val in kwargs.items():
1522- if (key in adduser_opts and val and
1523+ if (key in pw_useradd_opts and val and
1524 isinstance(val, six.string_types)):
1525- adduser_cmd.extend([adduser_opts[key], val])
1526+ pw_useradd_cmd.extend([pw_useradd_opts[key], val])
1527
1528- elif key in adduser_flags and val:
1529- adduser_cmd.append(adduser_flags[key])
1530- log_adduser_cmd.append(adduser_flags[key])
1531+ elif key in pw_useradd_flags and val:
1532+ pw_useradd_cmd.append(pw_useradd_flags[key])
1533+ log_pw_useradd_cmd.append(pw_useradd_flags[key])
1534
1535 if 'no_create_home' in kwargs or 'system' in kwargs:
1536- adduser_cmd.append('-d/nonexistent')
1537- log_adduser_cmd.append('-d/nonexistent')
1538+ pw_useradd_cmd.append('-d/nonexistent')
1539+ log_pw_useradd_cmd.append('-d/nonexistent')
1540 else:
1541- adduser_cmd.append('-d/usr/home/%s' % name)
1542- adduser_cmd.append('-m')
1543- log_adduser_cmd.append('-d/usr/home/%s' % name)
1544- log_adduser_cmd.append('-m')
1545+ pw_useradd_cmd.append('-d/usr/home/%s' % name)
1546+ pw_useradd_cmd.append('-m')
1547+ log_pw_useradd_cmd.append('-d/usr/home/%s' % name)
1548+ log_pw_useradd_cmd.append('-m')
1549
1550 # Run the command
1551 LOG.info("Adding user %s", name)
1552 try:
1553- util.subp(adduser_cmd, logstring=log_adduser_cmd)
1554+ util.subp(pw_useradd_cmd, logstring=log_pw_useradd_cmd)
1555 except Exception as e:
1556 util.logexc(LOG, "Failed to create user %s", name)
1557 raise e
1558diff --git a/cloudinit/distros/opensuse.py b/cloudinit/distros/opensuse.py
1559index 1bfe047..e41e2f7 100644
1560--- a/cloudinit/distros/opensuse.py
1561+++ b/cloudinit/distros/opensuse.py
1562@@ -38,6 +38,8 @@ class Distro(distros.Distro):
1563 'sysconfig': {
1564 'control': 'etc/sysconfig/network/config',
1565 'iface_templates': '%(base)s/network/ifcfg-%(name)s',
1566+ 'netrules_path': (
1567+ 'etc/udev/rules.d/85-persistent-net-cloud-init.rules'),
1568 'route_templates': {
1569 'ipv4': '%(base)s/network/ifroute-%(name)s',
1570 'ipv6': '%(base)s/network/ifroute-%(name)s',
1571diff --git a/cloudinit/distros/parsers/sys_conf.py b/cloudinit/distros/parsers/sys_conf.py
1572index c27b5d5..44df17d 100644
1573--- a/cloudinit/distros/parsers/sys_conf.py
1574+++ b/cloudinit/distros/parsers/sys_conf.py
1575@@ -43,6 +43,13 @@ def _contains_shell_variable(text):
1576
1577
1578 class SysConf(configobj.ConfigObj):
1579+ """A configobj.ConfigObj subclass specialised for sysconfig files.
1580+
1581+ :param contents:
1582+ The sysconfig file to parse, in a format accepted by
1583+ ``configobj.ConfigObj.__init__`` (i.e. "a filename, file like object,
1584+ or list of lines").
1585+ """
1586 def __init__(self, contents):
1587 configobj.ConfigObj.__init__(self, contents,
1588 interpolation=False,
1589diff --git a/cloudinit/distros/ubuntu.py b/cloudinit/distros/ubuntu.py
1590index 6815410..e5fcbc5 100644
1591--- a/cloudinit/distros/ubuntu.py
1592+++ b/cloudinit/distros/ubuntu.py
1593@@ -21,6 +21,21 @@ LOG = logging.getLogger(__name__)
1594
1595 class Distro(debian.Distro):
1596
1597+ def __init__(self, name, cfg, paths):
1598+ super(Distro, self).__init__(name, cfg, paths)
1599+ # Ubuntu specific network cfg locations
1600+ self.network_conf_fn = {
1601+ "eni": "/etc/network/interfaces.d/50-cloud-init.cfg",
1602+ "netplan": "/etc/netplan/50-cloud-init.yaml"
1603+ }
1604+ self.renderer_configs = {
1605+ "eni": {"eni_path": self.network_conf_fn["eni"],
1606+ "eni_header": debian.ENI_HEADER},
1607+ "netplan": {"netplan_path": self.network_conf_fn["netplan"],
1608+ "netplan_header": debian.ENI_HEADER,
1609+ "postcmds": True}
1610+ }
1611+
1612 @property
1613 def preferred_ntp_clients(self):
1614 """The preferred ntp client is dependent on the version."""
1615diff --git a/cloudinit/net/__init__.py b/cloudinit/net/__init__.py
1616index 3642fb1..ea707c0 100644
1617--- a/cloudinit/net/__init__.py
1618+++ b/cloudinit/net/__init__.py
1619@@ -9,6 +9,7 @@ import errno
1620 import logging
1621 import os
1622 import re
1623+from functools import partial
1624
1625 from cloudinit.net.network_state import mask_to_net_prefix
1626 from cloudinit import util
1627@@ -264,46 +265,29 @@ def find_fallback_nic(blacklist_drivers=None):
1628
1629
1630 def generate_fallback_config(blacklist_drivers=None, config_driver=None):
1631- """Determine which attached net dev is most likely to have a connection and
1632- generate network state to run dhcp on that interface"""
1633-
1634+ """Generate network cfg v2 for dhcp on the NIC most likely connected."""
1635 if not config_driver:
1636 config_driver = False
1637
1638 target_name = find_fallback_nic(blacklist_drivers=blacklist_drivers)
1639- if target_name:
1640- target_mac = read_sys_net_safe(target_name, 'address')
1641- nconf = {'config': [], 'version': 1}
1642- cfg = {'type': 'physical', 'name': target_name,
1643- 'mac_address': target_mac, 'subnets': [{'type': 'dhcp'}]}
1644- # inject the device driver name, dev_id into config if enabled and
1645- # device has a valid device driver value
1646- if config_driver:
1647- driver = device_driver(target_name)
1648- if driver:
1649- cfg['params'] = {
1650- 'driver': driver,
1651- 'device_id': device_devid(target_name),
1652- }
1653- nconf['config'].append(cfg)
1654- return nconf
1655- else:
1656+ if not target_name:
1657 # can't read any interfaces addresses (or there are none); give up
1658 return None
1659+ target_mac = read_sys_net_safe(target_name, 'address')
1660+ cfg = {'dhcp4': True, 'set-name': target_name,
1661+ 'match': {'macaddress': target_mac.lower()}}
1662+ if config_driver:
1663+ driver = device_driver(target_name)
1664+ if driver:
1665+ cfg['match']['driver'] = driver
1666+ nconf = {'ethernets': {target_name: cfg}, 'version': 2}
1667+ return nconf
1668
1669
1670-def apply_network_config_names(netcfg, strict_present=True, strict_busy=True):
1671- """read the network config and rename devices accordingly.
1672- if strict_present is false, then do not raise exception if no devices
1673- match. if strict_busy is false, then do not raise exception if the
1674- device cannot be renamed because it is currently configured.
1675-
1676- renames are only attempted for interfaces of type 'physical'. It is
1677- expected that the network system will create other devices with the
1678- correct name in place."""
1679+def extract_physdevs(netcfg):
1680
1681 def _version_1(netcfg):
1682- renames = []
1683+ physdevs = []
1684 for ent in netcfg.get('config', {}):
1685 if ent.get('type') != 'physical':
1686 continue
1687@@ -317,11 +301,11 @@ def apply_network_config_names(netcfg, strict_present=True, strict_busy=True):
1688 driver = device_driver(name)
1689 if not device_id:
1690 device_id = device_devid(name)
1691- renames.append([mac, name, driver, device_id])
1692- return renames
1693+ physdevs.append([mac, name, driver, device_id])
1694+ return physdevs
1695
1696 def _version_2(netcfg):
1697- renames = []
1698+ physdevs = []
1699 for ent in netcfg.get('ethernets', {}).values():
1700 # only rename if configured to do so
1701 name = ent.get('set-name')
1702@@ -337,16 +321,69 @@ def apply_network_config_names(netcfg, strict_present=True, strict_busy=True):
1703 driver = device_driver(name)
1704 if not device_id:
1705 device_id = device_devid(name)
1706- renames.append([mac, name, driver, device_id])
1707- return renames
1708+ physdevs.append([mac, name, driver, device_id])
1709+ return physdevs
1710+
1711+ version = netcfg.get('version')
1712+ if version == 1:
1713+ return _version_1(netcfg)
1714+ elif version == 2:
1715+ return _version_2(netcfg)
1716+
1717+ raise RuntimeError('Unknown network config version: %s' % version)
1718+
1719+
1720+def wait_for_physdevs(netcfg, strict=True):
1721+ physdevs = extract_physdevs(netcfg)
1722+
1723+ # set of expected iface names and mac addrs
1724+ expected_ifaces = dict([(iface[0], iface[1]) for iface in physdevs])
1725+ expected_macs = set(expected_ifaces.keys())
1726+
1727+ # set of current macs
1728+ present_macs = get_interfaces_by_mac().keys()
1729+
1730+ # compare the set of expected mac address values to
1731+ # the current macs present; we only check MAC as cloud-init
1732+ # has not yet renamed interfaces and the netcfg may include
1733+ # such renames.
1734+ for _ in range(0, 5):
1735+ if expected_macs.issubset(present_macs):
1736+ LOG.debug('net: all expected physical devices present')
1737+ return
1738
1739- if netcfg.get('version') == 1:
1740- return _rename_interfaces(_version_1(netcfg))
1741- elif netcfg.get('version') == 2:
1742- return _rename_interfaces(_version_2(netcfg))
1743+ missing = expected_macs.difference(present_macs)
1744+ LOG.debug('net: waiting for expected net devices: %s', missing)
1745+ for mac in missing:
1746+ # trigger a settle, unless this interface exists
1747+ syspath = sys_dev_path(expected_ifaces[mac])
1748+ settle = partial(util.udevadm_settle, exists=syspath)
1749+ msg = 'Waiting for udev events to settle or %s exists' % syspath
1750+ util.log_time(LOG.debug, msg, func=settle)
1751
1752- raise RuntimeError('Failed to apply network config names. Found bad'
1753- ' network config version: %s' % netcfg.get('version'))
1754+ # update present_macs after settles
1755+ present_macs = get_interfaces_by_mac().keys()
1756+
1757+ msg = 'Not all expected physical devices present: %s' % missing
1758+ LOG.warning(msg)
1759+ if strict:
1760+ raise RuntimeError(msg)
1761+
1762+
1763+def apply_network_config_names(netcfg, strict_present=True, strict_busy=True):
1764+ """read the network config and rename devices accordingly.
1765+ if strict_present is false, then do not raise exception if no devices
1766+ match. if strict_busy is false, then do not raise exception if the
1767+ device cannot be renamed because it is currently configured.
1768+
1769+ renames are only attempted for interfaces of type 'physical'. It is
1770+ expected that the network system will create other devices with the
1771+ correct name in place."""
1772+
1773+ try:
1774+ _rename_interfaces(extract_physdevs(netcfg))
1775+ except RuntimeError as e:
1776+ raise RuntimeError('Failed to apply network config names: %s' % e)
1777
1778
1779 def interface_has_own_mac(ifname, strict=False):
1780@@ -622,6 +659,8 @@ def get_interfaces():
1781 continue
1782 if is_vlan(name):
1783 continue
1784+ if is_bond(name):
1785+ continue
1786 mac = get_interface_mac(name)
1787 # some devices may not have a mac (tun0)
1788 if not mac:
1789@@ -677,7 +716,7 @@ class EphemeralIPv4Network(object):
1790 """
1791
1792 def __init__(self, interface, ip, prefix_or_mask, broadcast, router=None,
1793- connectivity_url=None):
1794+ connectivity_url=None, static_routes=None):
1795 """Setup context manager and validate call signature.
1796
1797 @param interface: Name of the network interface to bring up.
1798@@ -688,6 +727,7 @@ class EphemeralIPv4Network(object):
1799 @param router: Optionally the default gateway IP.
1800 @param connectivity_url: Optionally, a URL to verify if a usable
1801 connection already exists.
1802+ @param static_routes: Optionally a list of static routes from DHCP
1803 """
1804 if not all([interface, ip, prefix_or_mask, broadcast]):
1805 raise ValueError(
1806@@ -704,6 +744,7 @@ class EphemeralIPv4Network(object):
1807 self.ip = ip
1808 self.broadcast = broadcast
1809 self.router = router
1810+ self.static_routes = static_routes
1811 self.cleanup_cmds = [] # List of commands to run to cleanup state.
1812
1813 def __enter__(self):
1814@@ -716,7 +757,21 @@ class EphemeralIPv4Network(object):
1815 return
1816
1817 self._bringup_device()
1818- if self.router:
1819+
1820+ # rfc3442 requires us to ignore the router config *if* classless static
1821+ # routes are provided.
1822+ #
1823+ # https://tools.ietf.org/html/rfc3442
1824+ #
1825+ # If the DHCP server returns both a Classless Static Routes option and
1826+ # a Router option, the DHCP client MUST ignore the Router option.
1827+ #
1828+ # Similarly, if the DHCP server returns both a Classless Static Routes
1829+ # option and a Static Routes option, the DHCP client MUST ignore the
1830+ # Static Routes option.
1831+ if self.static_routes:
1832+ self._bringup_static_routes()
1833+ elif self.router:
1834 self._bringup_router()
1835
1836 def __exit__(self, excp_type, excp_value, excp_traceback):
1837@@ -760,6 +815,20 @@ class EphemeralIPv4Network(object):
1838 ['ip', '-family', 'inet', 'addr', 'del', cidr, 'dev',
1839 self.interface])
1840
1841+ def _bringup_static_routes(self):
1842+ # static_routes = [("169.254.169.254/32", "130.56.248.255"),
1843+ # ("0.0.0.0/0", "130.56.240.1")]
1844+ for net_address, gateway in self.static_routes:
1845+ via_arg = []
1846+ if gateway != "0.0.0.0":
1847+ via_arg = ['via', gateway]
1848+ util.subp(
1849+ ['ip', '-4', 'route', 'add', net_address] + via_arg +
1850+ ['dev', self.interface], capture=True)
1851+ self.cleanup_cmds.insert(
1852+ 0, ['ip', '-4', 'route', 'del', net_address] + via_arg +
1853+ ['dev', self.interface])
1854+
1855 def _bringup_router(self):
1856 """Perform the ip commands to fully setup the router if needed."""
1857 # Check if a default route exists and exit if it does
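The RFC3442 precedence added above (classless static routes, when present, suppress the plain router option) can be illustrated with a small standalone sketch of the command planning; `plan_route_cmds` is a hypothetical helper for illustration, not part of cloud-init:

```python
def plan_route_cmds(interface, static_routes=None, router=None):
    """Plan the 'ip route add' calls an ephemeral network setup would issue.

    Per RFC3442, a DHCP client MUST ignore the Router option when the
    server also returns Classless Static Routes.
    """
    cmds = []
    if static_routes:
        for net_address, gateway in static_routes:
            # a 0.0.0.0 gateway means a link-scope (interface) route
            via = [] if gateway == "0.0.0.0" else ["via", gateway]
            cmds.append(["ip", "-4", "route", "add", net_address]
                        + via + ["dev", interface])
    elif router:
        cmds.append(["ip", "-4", "route", "add", "default",
                     "via", router, "dev", interface])
    return cmds


# Both supplied: only the static routes are programmed.
print(plan_route_cmds(
    "eth0",
    static_routes=[("169.254.169.254/32", "192.168.2.1"),
                   ("0.0.0.0/0", "192.168.2.1")],
    router="192.168.2.1"))
```

Teardown in the real class simply replays these commands with `del` in reverse order, which is why each `add` has a matching cleanup entry inserted at the front of `cleanup_cmds`.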
1858diff --git a/cloudinit/net/cmdline.py b/cloudinit/net/cmdline.py
1859index f89a0f7..556a10f 100755
1860--- a/cloudinit/net/cmdline.py
1861+++ b/cloudinit/net/cmdline.py
1862@@ -177,21 +177,13 @@ def _is_initramfs_netconfig(files, cmdline):
1863 return False
1864
1865
1866-def read_kernel_cmdline_config(files=None, mac_addrs=None, cmdline=None):
1867+def read_initramfs_config(files=None, mac_addrs=None, cmdline=None):
1868 if cmdline is None:
1869 cmdline = util.get_cmdline()
1870
1871 if files is None:
1872 files = _get_klibc_net_cfg_files()
1873
1874- if 'network-config=' in cmdline:
1875- data64 = None
1876- for tok in cmdline.split():
1877- if tok.startswith("network-config="):
1878- data64 = tok.split("=", 1)[1]
1879- if data64:
1880- return util.load_yaml(_b64dgz(data64))
1881-
1882 if not _is_initramfs_netconfig(files, cmdline):
1883 return None
1884
1885@@ -204,4 +196,19 @@ def read_kernel_cmdline_config(files=None, mac_addrs=None, cmdline=None):
1886
1887 return config_from_klibc_net_cfg(files=files, mac_addrs=mac_addrs)
1888
1889+
1890+def read_kernel_cmdline_config(cmdline=None):
1891+ if cmdline is None:
1892+ cmdline = util.get_cmdline()
1893+
1894+ if 'network-config=' in cmdline:
1895+ data64 = None
1896+ for tok in cmdline.split():
1897+ if tok.startswith("network-config="):
1898+ data64 = tok.split("=", 1)[1]
1899+ if data64:
1900+ return util.load_yaml(_b64dgz(data64))
1901+
1902+ return None
1903+
1904 # vi: ts=4 expandtab
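The `network-config=` handling split out above takes a base64 blob that may additionally be gzip-compressed (that is what the `_b64dgz` helper undoes before the YAML load). A standalone sketch of the token decode, assuming the same token format; the function name is illustrative:

```python
import base64
import gzip


def decode_network_config_token(cmdline):
    """Return the decoded network-config= payload from a kernel cmdline,
    or None. Mirrors read_kernel_cmdline_config, with the base64(+gzip)
    decode inlined instead of _b64dgz, and without the YAML parse."""
    for tok in cmdline.split():
        if tok.startswith("network-config="):
            blob = base64.b64decode(tok.split("=", 1)[1])
            if blob[:2] == b"\x1f\x8b":  # gzip magic number
                blob = gzip.decompress(blob)
            return blob.decode()
    return None


payload = base64.b64encode(b"version: 2\nethernets: {}\n").decode()
print(decode_network_config_token("ro network-config=%s quiet" % payload))
```

Splitting this out of `read_initramfs_config` lets callers distinguish an explicit cmdline override from config discovered in klibc initramfs files.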
1905diff --git a/cloudinit/net/dhcp.py b/cloudinit/net/dhcp.py
1906index c98a97c..1737991 100644
1907--- a/cloudinit/net/dhcp.py
1908+++ b/cloudinit/net/dhcp.py
1909@@ -92,10 +92,14 @@ class EphemeralDHCPv4(object):
1910 nmap = {'interface': 'interface', 'ip': 'fixed-address',
1911 'prefix_or_mask': 'subnet-mask',
1912 'broadcast': 'broadcast-address',
1913+ 'static_routes': 'rfc3442-classless-static-routes',
1914 'router': 'routers'}
1915 kwargs = dict([(k, self.lease.get(v)) for k, v in nmap.items()])
1916 if not kwargs['broadcast']:
1917 kwargs['broadcast'] = bcip(kwargs['prefix_or_mask'], kwargs['ip'])
1918+ if kwargs['static_routes']:
1919+ kwargs['static_routes'] = (
1920+ parse_static_routes(kwargs['static_routes']))
1921 if self.connectivity_url:
1922 kwargs['connectivity_url'] = self.connectivity_url
1923 ephipv4 = EphemeralIPv4Network(**kwargs)
1924@@ -272,4 +276,90 @@ def networkd_get_option_from_leases(keyname, leases_d=None):
1925 return data[keyname]
1926 return None
1927
1928+
1929+def parse_static_routes(rfc3442):
1930+ """ parse rfc3442 format and return a list containing tuple of strings.
1931+
1932+ The tuple is composed of the network_address (including net length) and
1933+ gateway for a parsed static route.
1934+
1935+ @param rfc3442: string in rfc3442 format
1936+ @returns: list of tuple(str, str) for all valid parsed routes until the
1937+ first parsing error.
1938+
1939+ E.g.
1940+ sr = parse_static_routes("32,169,254,169,254,130,56,248,255,0,130,56,240,1")
1941+ sr = [
1942+ ("169.254.169.254/32", "130.56.248.255"), ("0.0.0.0/0", "130.56.240.1")
1943+ ]
1944+
1945+ Python version of isc-dhclient's hooks:
1946+ /etc/dhcp/dhclient-exit-hooks.d/rfc3442-classless-routes
1947+ """
1948+ # raw strings from dhcp lease may end in semi-colon
1949+ rfc3442 = rfc3442.rstrip(";")
1950+ tokens = rfc3442.split(',')
1951+ static_routes = []
1952+
1953+ def _trunc_error(cidr, required, remain):
1954+ msg = ("RFC3442 string malformed. Current route has CIDR of %s "
1955+ "and requires %s significant octets, but only %s remain. "
1956+ "Verify DHCP rfc3442-classless-static-routes value: %s"
1957+ % (cidr, required, remain, rfc3442))
1958+ LOG.error(msg)
1959+
1960+ current_idx = 0
1961+ for idx, tok in enumerate(tokens):
1962+ if idx < current_idx:
1963+ continue
1964+ net_length = int(tok)
1965+ if net_length in range(25, 33):
1966+ req_toks = 9
1967+ if len(tokens[idx:]) < req_toks:
1968+ _trunc_error(net_length, req_toks, len(tokens[idx:]))
1969+ return static_routes
1970+ net_address = ".".join(tokens[idx+1:idx+5])
1971+ gateway = ".".join(tokens[idx+5:idx+req_toks])
1972+ current_idx = idx + req_toks
1973+ elif net_length in range(17, 25):
1974+ req_toks = 8
1975+ if len(tokens[idx:]) < req_toks:
1976+ _trunc_error(net_length, req_toks, len(tokens[idx:]))
1977+ return static_routes
1978+ net_address = ".".join(tokens[idx+1:idx+4] + ["0"])
1979+ gateway = ".".join(tokens[idx+4:idx+req_toks])
1980+ current_idx = idx + req_toks
1981+ elif net_length in range(9, 17):
1982+ req_toks = 7
1983+ if len(tokens[idx:]) < req_toks:
1984+ _trunc_error(net_length, req_toks, len(tokens[idx:]))
1985+ return static_routes
1986+ net_address = ".".join(tokens[idx+1:idx+3] + ["0", "0"])
1987+ gateway = ".".join(tokens[idx+3:idx+req_toks])
1988+ current_idx = idx + req_toks
1989+ elif net_length in range(1, 9):
1990+ req_toks = 6
1991+ if len(tokens[idx:]) < req_toks:
1992+ _trunc_error(net_length, req_toks, len(tokens[idx:]))
1993+ return static_routes
1994+ net_address = ".".join(tokens[idx+1:idx+2] + ["0", "0", "0"])
1995+ gateway = ".".join(tokens[idx+2:idx+req_toks])
1996+ current_idx = idx + req_toks
1997+ elif net_length == 0:
1998+ req_toks = 5
1999+ if len(tokens[idx:]) < req_toks:
2000+ _trunc_error(net_length, req_toks, len(tokens[idx:]))
2001+ return static_routes
2002+ net_address = "0.0.0.0"
2003+ gateway = ".".join(tokens[idx+1:idx+req_toks])
2004+ current_idx = idx + req_toks
2005+ else:
2006+ LOG.error('Parsed invalid net length "%s". Verify DHCP '
2007+ 'rfc3442-classless-static-routes value.', net_length)
2008+ return static_routes
2009+
2010+ static_routes.append(("%s/%s" % (net_address, net_length), gateway))
2011+
2012+ return static_routes
2013+
2014 # vi: ts=4 expandtab
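The octet arithmetic in `parse_static_routes` follows the RFC3442 encoding: a prefix width, then only ceil(width/8) significant destination octets, then four gateway octets. A compact standalone sketch of the same decode (not the cloud-init implementation, which also logs a detailed error on each truncated or bogus route):

```python
def decode_rfc3442(value):
    """Decode an rfc3442-classless-static-routes lease value into
    (destination/width, gateway) string tuples."""
    tokens = value.rstrip(";").split(",")  # lease values may end in ';'
    routes = []
    i = 0
    while i < len(tokens):
        width = int(tokens[i])
        if not 0 <= width <= 32:
            break  # bogus prefix width; keep what parsed so far
        n = (width + 7) // 8  # significant destination octets
        if len(tokens) - i < 1 + n + 4:
            break  # truncated route; keep what parsed so far
        dest = tokens[i + 1:i + 1 + n] + ["0"] * (4 - n)
        gw = tokens[i + 1 + n:i + 5 + n]
        routes.append(("%s/%d" % (".".join(dest), width), ".".join(gw)))
        i += 1 + n + 4
    return routes


print(decode_rfc3442("32,169,254,169,254,130,56,240,1" + ",0,130,56,240,1"))
```

This is why a /24 route consumes eight comma-separated tokens while a default route (width 0) consumes only five.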
2015diff --git a/cloudinit/net/network_state.py b/cloudinit/net/network_state.py
2016index 4d19f56..c0c415d 100644
2017--- a/cloudinit/net/network_state.py
2018+++ b/cloudinit/net/network_state.py
2019@@ -596,6 +596,7 @@ class NetworkStateInterpreter(object):
2020 eno1:
2021 match:
2022 macaddress: 00:11:22:33:44:55
2023+ driver: hv_netsvc
2024 wakeonlan: true
2025 dhcp4: true
2026 dhcp6: false
2027@@ -631,15 +632,18 @@ class NetworkStateInterpreter(object):
2028 'type': 'physical',
2029 'name': cfg.get('set-name', eth),
2030 }
2031- mac_address = cfg.get('match', {}).get('macaddress', None)
2032+ match = cfg.get('match', {})
2033+ mac_address = match.get('macaddress', None)
2034 if not mac_address:
2035 LOG.debug('NetworkState Version2: missing "macaddress" info '
2036 'in config entry: %s: %s', eth, str(cfg))
2037- phy_cmd.update({'mac_address': mac_address})
2038-
2039+ phy_cmd['mac_address'] = mac_address
2040+ driver = match.get('driver', None)
2041+ if driver:
2042+ phy_cmd['params'] = {'driver': driver}
2043 for key in ['mtu', 'match', 'wakeonlan']:
2044 if key in cfg:
2045- phy_cmd.update({key: cfg.get(key)})
2046+ phy_cmd[key] = cfg[key]
2047
2048 subnets = self._v2_to_v1_ipcfg(cfg)
2049 if len(subnets) > 0:
2050@@ -673,6 +677,8 @@ class NetworkStateInterpreter(object):
2051 'vlan_id': cfg.get('id'),
2052 'vlan_link': cfg.get('link'),
2053 }
2054+ if 'mtu' in cfg:
2055+ vlan_cmd['mtu'] = cfg['mtu']
2056 subnets = self._v2_to_v1_ipcfg(cfg)
2057 if len(subnets) > 0:
2058 vlan_cmd.update({'subnets': subnets})
2059@@ -707,6 +713,14 @@ class NetworkStateInterpreter(object):
2060 item_params = dict((key, value) for (key, value) in
2061 item_cfg.items() if key not in
2062 NETWORK_V2_KEY_FILTER)
2063+ # we accept the fixed spelling, but write the old for compatibility
2064+ # Xenial does not have an updated netplan which supports the
2065+ # correct spelling. LP: #1756701
2066+ params = item_params.get('parameters', {})
2067+ grat_value = params.pop('gratuitous-arp', None)
2068+ if grat_value:
2069+ params['gratuitious-arp'] = grat_value
2070+
2071 v1_cmd = {
2072 'type': cmd_type,
2073 'name': item_name,
2074@@ -714,6 +728,8 @@ class NetworkStateInterpreter(object):
2075 'params': dict((v2key_to_v1[k], v) for k, v in
2076 item_params.get('parameters', {}).items())
2077 }
2078+ if 'mtu' in item_cfg:
2079+ v1_cmd['mtu'] = item_cfg['mtu']
2080 subnets = self._v2_to_v1_ipcfg(item_cfg)
2081 if len(subnets) > 0:
2082 v1_cmd.update({'subnets': subnets})
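The network_state hunk above threads a v2 `match: {driver: ...}` through to the v1 `params` dict. A minimal sketch of that translation step, assuming only the keys shown in the diff (the function name is illustrative, not the interpreter's):

```python
def v2_ethernet_to_v1_physical(name, cfg):
    """Translate one network-config v2 'ethernets' entry into a v1
    'physical' command, carrying match.driver into params."""
    match = cfg.get("match", {})
    phy = {
        "type": "physical",
        "name": cfg.get("set-name", name),
        "mac_address": match.get("macaddress"),
    }
    driver = match.get("driver")
    if driver:
        phy["params"] = {"driver": driver}
    for key in ("mtu", "match", "wakeonlan"):
        if key in cfg:
            phy[key] = cfg[key]
    return phy


print(v2_ethernet_to_v1_physical(
    "eno1", {"match": {"macaddress": "00:11:22:33:44:55",
                       "driver": "hv_netsvc"}, "dhcp4": True}))
```

Carrying the driver through matters on Azure, where netvsc interfaces are matched by driver rather than by (shared) MAC address.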
2083diff --git a/cloudinit/net/sysconfig.py b/cloudinit/net/sysconfig.py
2084index a47da0a..be5dede 100644
2085--- a/cloudinit/net/sysconfig.py
2086+++ b/cloudinit/net/sysconfig.py
2087@@ -284,6 +284,18 @@ class Renderer(renderer.Renderer):
2088 ('bond_mode', "mode=%s"),
2089 ('bond_xmit_hash_policy', "xmit_hash_policy=%s"),
2090 ('bond_miimon', "miimon=%s"),
2091+ ('bond_min_links', "min_links=%s"),
2092+ ('bond_arp_interval', "arp_interval=%s"),
2093+ ('bond_arp_ip_target', "arp_ip_target=%s"),
2094+ ('bond_arp_validate', "arp_validate=%s"),
2095+ ('bond_ad_select', "ad_select=%s"),
2096+ ('bond_num_grat_arp', "num_grat_arp=%s"),
2097+ ('bond_downdelay', "downdelay=%s"),
2098+ ('bond_updelay', "updelay=%s"),
2099+ ('bond_lacp_rate', "lacp_rate=%s"),
2100+ ('bond_fail_over_mac', "fail_over_mac=%s"),
2101+ ('bond_primary', "primary=%s"),
2102+ ('bond_primary_reselect', "primary_reselect=%s"),
2103 ])
2104
2105 bridge_opts_keys = tuple([
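The extended key table above feeds the sysconfig renderer's BONDING_OPTS output: each v1 bond parameter present in the interface config is formatted through its template and the results are space-joined. A hedged sketch of that mechanism, with a trimmed table (names here are illustrative, not the renderer's internals):

```python
BOND_TPL_OPTS = (
    ("bond_mode", "mode=%s"),
    ("bond_miimon", "miimon=%s"),
    ("bond_lacp_rate", "lacp_rate=%s"),
    ("bond_primary", "primary=%s"),
)


def render_bonding_opts(iface_cfg):
    """Format the bond parameters present in iface_cfg as the value
    of a sysconfig BONDING_OPTS entry."""
    return " ".join(tpl % iface_cfg[key]
                    for key, tpl in BOND_TPL_OPTS if key in iface_cfg)


print(render_bonding_opts(
    {"bond_mode": "802.3ad", "bond_miimon": 100, "bond_lacp_rate": "fast"}))
# mode=802.3ad miimon=100 lacp_rate=fast
```

Because absent keys are simply skipped, widening the table (min_links, arp_*, updelay, and so on) adds support for new parameters without changing the rendering logic.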
2106diff --git a/cloudinit/net/tests/test_dhcp.py b/cloudinit/net/tests/test_dhcp.py
2107index 5139024..91f503c 100644
2108--- a/cloudinit/net/tests/test_dhcp.py
2109+++ b/cloudinit/net/tests/test_dhcp.py
2110@@ -8,7 +8,8 @@ from textwrap import dedent
2111 import cloudinit.net as net
2112 from cloudinit.net.dhcp import (
2113 InvalidDHCPLeaseFileError, maybe_perform_dhcp_discovery,
2114- parse_dhcp_lease_file, dhcp_discovery, networkd_load_leases)
2115+ parse_dhcp_lease_file, dhcp_discovery, networkd_load_leases,
2116+ parse_static_routes)
2117 from cloudinit.util import ensure_file, write_file
2118 from cloudinit.tests.helpers import (
2119 CiTestCase, HttprettyTestCase, mock, populate_dir, wrap_and_call)
2120@@ -64,6 +65,123 @@ class TestParseDHCPLeasesFile(CiTestCase):
2121 self.assertItemsEqual(expected, parse_dhcp_lease_file(lease_file))
2122
2123
2124+class TestDHCPRFC3442(CiTestCase):
2125+
2126+ def test_parse_lease_finds_rfc3442_classless_static_routes(self):
2127+ """parse_dhcp_lease_file returns rfc3442-classless-static-routes."""
2128+ lease_file = self.tmp_path('leases')
2129+ content = dedent("""
2130+ lease {
2131+ interface "wlp3s0";
2132+ fixed-address 192.168.2.74;
2133+ option subnet-mask 255.255.255.0;
2134+ option routers 192.168.2.1;
2135+ option rfc3442-classless-static-routes 0,130,56,240,1;
2136+ renew 4 2017/07/27 18:02:30;
2137+ expire 5 2017/07/28 07:08:15;
2138+ }
2139+ """)
2140+ expected = [
2141+ {'interface': 'wlp3s0', 'fixed-address': '192.168.2.74',
2142+ 'subnet-mask': '255.255.255.0', 'routers': '192.168.2.1',
2143+ 'rfc3442-classless-static-routes': '0,130,56,240,1',
2144+ 'renew': '4 2017/07/27 18:02:30',
2145+ 'expire': '5 2017/07/28 07:08:15'}]
2146+ write_file(lease_file, content)
2147+ self.assertItemsEqual(expected, parse_dhcp_lease_file(lease_file))
2148+
2149+ @mock.patch('cloudinit.net.dhcp.EphemeralIPv4Network')
2150+ @mock.patch('cloudinit.net.dhcp.maybe_perform_dhcp_discovery')
2151+ def test_obtain_lease_parses_static_routes(self, m_maybe, m_ipv4):
2152+ """EphemeralDHPCv4 parses rfc3442 routes for EphemeralIPv4Network"""
2153+ lease = [
2154+ {'interface': 'wlp3s0', 'fixed-address': '192.168.2.74',
2155+ 'subnet-mask': '255.255.255.0', 'routers': '192.168.2.1',
2156+ 'rfc3442-classless-static-routes': '0,130,56,240,1',
2157+ 'renew': '4 2017/07/27 18:02:30',
2158+ 'expire': '5 2017/07/28 07:08:15'}]
2159+ m_maybe.return_value = lease
2160+ eph = net.dhcp.EphemeralDHCPv4()
2161+ eph.obtain_lease()
2162+ expected_kwargs = {
2163+ 'interface': 'wlp3s0',
2164+ 'ip': '192.168.2.74',
2165+ 'prefix_or_mask': '255.255.255.0',
2166+ 'broadcast': '192.168.2.255',
2167+ 'static_routes': [('0.0.0.0/0', '130.56.240.1')],
2168+ 'router': '192.168.2.1'}
2169+ m_ipv4.assert_called_with(**expected_kwargs)
2170+
2171+
2172+class TestDHCPParseStaticRoutes(CiTestCase):
2173+
2174+ with_logs = True
2175+
2176+ def test_parse_static_routes_empty_string(self):
2177+ self.assertEqual([], parse_static_routes(""))
2178+
2179+ def test_parse_static_routes_invalid_input_returns_empty_list(self):
2180+ rfc3442 = "32,169,254,169,254,130,56,248"
2181+ self.assertEqual([], parse_static_routes(rfc3442))
2182+
2183+ def test_parse_static_routes_bogus_width_returns_empty_list(self):
2184+ rfc3442 = "33,169,254,169,254,130,56,248"
2185+ self.assertEqual([], parse_static_routes(rfc3442))
2186+
2187+ def test_parse_static_routes_single_ip(self):
2188+ rfc3442 = "32,169,254,169,254,130,56,248,255"
2189+ self.assertEqual([('169.254.169.254/32', '130.56.248.255')],
2190+ parse_static_routes(rfc3442))
2191+
2192+ def test_parse_static_routes_single_ip_handles_trailing_semicolon(self):
2193+ rfc3442 = "32,169,254,169,254,130,56,248,255;"
2194+ self.assertEqual([('169.254.169.254/32', '130.56.248.255')],
2195+ parse_static_routes(rfc3442))
2196+
2197+ def test_parse_static_routes_default_route(self):
2198+ rfc3442 = "0,130,56,240,1"
2199+ self.assertEqual([('0.0.0.0/0', '130.56.240.1')],
2200+ parse_static_routes(rfc3442))
2201+
2202+ def test_parse_static_routes_class_c_b_a(self):
2203+ class_c = "24,192,168,74,192,168,0,4"
2204+ class_b = "16,172,16,172,16,0,4"
2205+ class_a = "8,10,10,0,0,4"
2206+ rfc3442 = ",".join([class_c, class_b, class_a])
2207+ self.assertEqual(sorted([
2208+ ("192.168.74.0/24", "192.168.0.4"),
2209+ ("172.16.0.0/16", "172.16.0.4"),
2210+ ("10.0.0.0/8", "10.0.0.4")
2211+ ]), sorted(parse_static_routes(rfc3442)))
2212+
2213+ def test_parse_static_routes_logs_error_truncated(self):
2214+ bad_rfc3442 = {
2215+ "class_c": "24,169,254,169,10",
2216+ "class_b": "16,172,16,10",
2217+ "class_a": "8,10,10",
2218+ "gateway": "0,0",
2219+ "netlen": "33,0",
2220+ }
2221+ for rfc3442 in bad_rfc3442.values():
2222+ self.assertEqual([], parse_static_routes(rfc3442))
2223+
2224+ logs = self.logs.getvalue()
2225+ self.assertEqual(len(bad_rfc3442.keys()), len(logs.splitlines()))
2226+
2227+ def test_parse_static_routes_returns_valid_routes_until_parse_err(self):
2228+ class_c = "24,192,168,74,192,168,0,4"
2229+ class_b = "16,172,16,172,16,0,4"
2230+ class_a_error = "8,10,10,0,0"
2231+ rfc3442 = ",".join([class_c, class_b, class_a_error])
2232+ self.assertEqual(sorted([
2233+ ("192.168.74.0/24", "192.168.0.4"),
2234+ ("172.16.0.0/16", "172.16.0.4"),
2235+ ]), sorted(parse_static_routes(rfc3442)))
2236+
2237+ logs = self.logs.getvalue()
2238+ self.assertIn(rfc3442, logs.splitlines()[0])
2239+
2240+
2241 class TestDHCPDiscoveryClean(CiTestCase):
2242 with_logs = True
2243
2244diff --git a/cloudinit/net/tests/test_init.py b/cloudinit/net/tests/test_init.py
2245index 6d2affe..d2e38f0 100644
2246--- a/cloudinit/net/tests/test_init.py
2247+++ b/cloudinit/net/tests/test_init.py
2248@@ -212,9 +212,9 @@ class TestGenerateFallbackConfig(CiTestCase):
2249 mac = 'aa:bb:cc:aa:bb:cc'
2250 write_file(os.path.join(self.sysdir, 'eth1', 'address'), mac)
2251 expected = {
2252- 'config': [{'type': 'physical', 'mac_address': mac,
2253- 'name': 'eth1', 'subnets': [{'type': 'dhcp'}]}],
2254- 'version': 1}
2255+ 'ethernets': {'eth1': {'match': {'macaddress': mac},
2256+ 'dhcp4': True, 'set-name': 'eth1'}},
2257+ 'version': 2}
2258 self.assertEqual(expected, net.generate_fallback_config())
2259
2260 def test_generate_fallback_finds_dormant_eth_with_mac(self):
2261@@ -223,9 +223,9 @@ class TestGenerateFallbackConfig(CiTestCase):
2262 mac = 'aa:bb:cc:aa:bb:cc'
2263 write_file(os.path.join(self.sysdir, 'eth0', 'address'), mac)
2264 expected = {
2265- 'config': [{'type': 'physical', 'mac_address': mac,
2266- 'name': 'eth0', 'subnets': [{'type': 'dhcp'}]}],
2267- 'version': 1}
2268+ 'ethernets': {'eth0': {'match': {'macaddress': mac}, 'dhcp4': True,
2269+ 'set-name': 'eth0'}},
2270+ 'version': 2}
2271 self.assertEqual(expected, net.generate_fallback_config())
2272
2273 def test_generate_fallback_finds_eth_by_operstate(self):
2274@@ -233,9 +233,10 @@ class TestGenerateFallbackConfig(CiTestCase):
2275 mac = 'aa:bb:cc:aa:bb:cc'
2276 write_file(os.path.join(self.sysdir, 'eth0', 'address'), mac)
2277 expected = {
2278- 'config': [{'type': 'physical', 'mac_address': mac,
2279- 'name': 'eth0', 'subnets': [{'type': 'dhcp'}]}],
2280- 'version': 1}
2281+ 'ethernets': {
2282+ 'eth0': {'dhcp4': True, 'match': {'macaddress': mac},
2283+ 'set-name': 'eth0'}},
2284+ 'version': 2}
2285 valid_operstates = ['dormant', 'down', 'lowerlayerdown', 'unknown']
2286 for state in valid_operstates:
2287 write_file(os.path.join(self.sysdir, 'eth0', 'operstate'), state)
2288@@ -549,6 +550,45 @@ class TestEphemeralIPV4Network(CiTestCase):
2289 self.assertEqual(expected_setup_calls, m_subp.call_args_list)
2290 m_subp.assert_has_calls(expected_teardown_calls)
2291
2292+ def test_ephemeral_ipv4_network_with_rfc3442_static_routes(self, m_subp):
2293+ params = {
2294+ 'interface': 'eth0', 'ip': '192.168.2.2',
2295+ 'prefix_or_mask': '255.255.255.0', 'broadcast': '192.168.2.255',
2296+ 'static_routes': [('169.254.169.254/32', '192.168.2.1'),
2297+ ('0.0.0.0/0', '192.168.2.1')],
2298+ 'router': '192.168.2.1'}
2299+ expected_setup_calls = [
2300+ mock.call(
2301+ ['ip', '-family', 'inet', 'addr', 'add', '192.168.2.2/24',
2302+ 'broadcast', '192.168.2.255', 'dev', 'eth0'],
2303+ capture=True, update_env={'LANG': 'C'}),
2304+ mock.call(
2305+ ['ip', '-family', 'inet', 'link', 'set', 'dev', 'eth0', 'up'],
2306+ capture=True),
2307+ mock.call(
2308+ ['ip', '-4', 'route', 'add', '169.254.169.254/32',
2309+ 'via', '192.168.2.1', 'dev', 'eth0'], capture=True),
2310+ mock.call(
2311+ ['ip', '-4', 'route', 'add', '0.0.0.0/0',
2312+ 'via', '192.168.2.1', 'dev', 'eth0'], capture=True)]
2313+ expected_teardown_calls = [
2314+ mock.call(
2315+ ['ip', '-4', 'route', 'del', '0.0.0.0/0',
2316+ 'via', '192.168.2.1', 'dev', 'eth0'], capture=True),
2317+ mock.call(
2318+ ['ip', '-4', 'route', 'del', '169.254.169.254/32',
2319+ 'via', '192.168.2.1', 'dev', 'eth0'], capture=True),
2320+ mock.call(
2321+ ['ip', '-family', 'inet', 'link', 'set', 'dev',
2322+ 'eth0', 'down'], capture=True),
2323+ mock.call(
2324+ ['ip', '-family', 'inet', 'addr', 'del',
2325+ '192.168.2.2/24', 'dev', 'eth0'], capture=True)
2326+ ]
2327+ with net.EphemeralIPv4Network(**params):
2328+ self.assertEqual(expected_setup_calls, m_subp.call_args_list)
2329+ m_subp.assert_has_calls(expected_setup_calls + expected_teardown_calls)
2330+
2331
2332 class TestApplyNetworkCfgNames(CiTestCase):
2333 V1_CONFIG = textwrap.dedent("""\
2334@@ -669,3 +709,216 @@ class TestHasURLConnectivity(HttprettyTestCase):
2335 httpretty.register_uri(httpretty.GET, self.url, body={}, status=404)
2336 self.assertFalse(
2337 net.has_url_connectivity(self.url), 'Expected False on url fail')
2338+
2339+
2340+def _mk_v1_phys(mac, name, driver, device_id):
2341+ v1_cfg = {'type': 'physical', 'name': name, 'mac_address': mac}
2342+ params = {}
2343+ if driver:
2344+ params.update({'driver': driver})
2345+ if device_id:
2346+ params.update({'device_id': device_id})
2347+
2348+ if params:
2349+ v1_cfg.update({'params': params})
2350+
2351+ return v1_cfg
2352+
2353+
2354+def _mk_v2_phys(mac, name, driver=None, device_id=None):
2355+ v2_cfg = {'set-name': name, 'match': {'macaddress': mac}}
2356+ if driver:
2357+ v2_cfg['match'].update({'driver': driver})
2358+ if device_id:
2359+ v2_cfg['match'].update({'device_id': device_id})
2360+
2361+ return v2_cfg
2362+
2363+
2364+class TestExtractPhysdevs(CiTestCase):
2365+
2366+ def setUp(self):
2367+ super(TestExtractPhysdevs, self).setUp()
2368+ self.add_patch('cloudinit.net.device_driver', 'm_driver')
2369+ self.add_patch('cloudinit.net.device_devid', 'm_devid')
2370+
2371+ def test_extract_physdevs_looks_up_driver_v1(self):
2372+ driver = 'virtio'
2373+ self.m_driver.return_value = driver
2374+ physdevs = [
2375+ ['aa:bb:cc:dd:ee:ff', 'eth0', None, '0x1000'],
2376+ ]
2377+ netcfg = {
2378+ 'version': 1,
2379+ 'config': [_mk_v1_phys(*args) for args in physdevs],
2380+ }
2381+ # insert the driver value for verification
2382+ physdevs[0][2] = driver
2383+ self.assertEqual(sorted(physdevs),
2384+ sorted(net.extract_physdevs(netcfg)))
2385+ self.m_driver.assert_called_with('eth0')
2386+
2387+ def test_extract_physdevs_looks_up_driver_v2(self):
2388+ driver = 'virtio'
2389+ self.m_driver.return_value = driver
2390+ physdevs = [
2391+ ['aa:bb:cc:dd:ee:ff', 'eth0', None, '0x1000'],
2392+ ]
2393+ netcfg = {
2394+ 'version': 2,
2395+ 'ethernets': {args[1]: _mk_v2_phys(*args) for args in physdevs},
2396+ }
2397+ # insert the driver value for verification
2398+ physdevs[0][2] = driver
2399+ self.assertEqual(sorted(physdevs),
2400+ sorted(net.extract_physdevs(netcfg)))
2401+ self.m_driver.assert_called_with('eth0')
2402+
2403+ def test_extract_physdevs_looks_up_devid_v1(self):
2404+ devid = '0x1000'
2405+ self.m_devid.return_value = devid
2406+ physdevs = [
2407+ ['aa:bb:cc:dd:ee:ff', 'eth0', 'virtio', None],
2408+ ]
2409+ netcfg = {
2410+ 'version': 1,
2411+ 'config': [_mk_v1_phys(*args) for args in physdevs],
2412+ }
2413+        # insert the devid value for verification
2414+ physdevs[0][3] = devid
2415+ self.assertEqual(sorted(physdevs),
2416+ sorted(net.extract_physdevs(netcfg)))
2417+ self.m_devid.assert_called_with('eth0')
2418+
2419+ def test_extract_physdevs_looks_up_devid_v2(self):
2420+ devid = '0x1000'
2421+ self.m_devid.return_value = devid
2422+ physdevs = [
2423+ ['aa:bb:cc:dd:ee:ff', 'eth0', 'virtio', None],
2424+ ]
2425+ netcfg = {
2426+ 'version': 2,
2427+ 'ethernets': {args[1]: _mk_v2_phys(*args) for args in physdevs},
2428+ }
2429+        # insert the devid value for verification
2430+ physdevs[0][3] = devid
2431+ self.assertEqual(sorted(physdevs),
2432+ sorted(net.extract_physdevs(netcfg)))
2433+ self.m_devid.assert_called_with('eth0')
2434+
2435+ def test_get_v1_type_physical(self):
2436+ physdevs = [
2437+ ['aa:bb:cc:dd:ee:ff', 'eth0', 'virtio', '0x1000'],
2438+ ['00:11:22:33:44:55', 'ens3', 'e1000', '0x1643'],
2439+ ['09:87:65:43:21:10', 'ens0p1', 'mlx4_core', '0:0:1000'],
2440+ ]
2441+ netcfg = {
2442+ 'version': 1,
2443+ 'config': [_mk_v1_phys(*args) for args in physdevs],
2444+ }
2445+ self.assertEqual(sorted(physdevs),
2446+ sorted(net.extract_physdevs(netcfg)))
2447+
2448+ def test_get_v2_type_physical(self):
2449+ physdevs = [
2450+ ['aa:bb:cc:dd:ee:ff', 'eth0', 'virtio', '0x1000'],
2451+ ['00:11:22:33:44:55', 'ens3', 'e1000', '0x1643'],
2452+ ['09:87:65:43:21:10', 'ens0p1', 'mlx4_core', '0:0:1000'],
2453+ ]
2454+ netcfg = {
2455+ 'version': 2,
2456+ 'ethernets': {args[1]: _mk_v2_phys(*args) for args in physdevs},
2457+ }
2458+ self.assertEqual(sorted(physdevs),
2459+ sorted(net.extract_physdevs(netcfg)))
2460+
2461+ def test_get_v2_type_physical_skips_if_no_set_name(self):
2462+ netcfg = {
2463+ 'version': 2,
2464+ 'ethernets': {
2465+ 'ens3': {
2466+ 'match': {'macaddress': '00:11:22:33:44:55'},
2467+ }
2468+ }
2469+ }
2470+ self.assertEqual([], net.extract_physdevs(netcfg))
2471+
2472+ def test_runtime_error_on_unknown_netcfg_version(self):
2473+ with self.assertRaises(RuntimeError):
2474+ net.extract_physdevs({'version': 3, 'awesome_config': []})
2475+
2476+
2477+class TestWaitForPhysdevs(CiTestCase):
2478+
2479+ with_logs = True
2480+
2481+ def setUp(self):
2482+ super(TestWaitForPhysdevs, self).setUp()
2483+ self.add_patch('cloudinit.net.get_interfaces_by_mac',
2484+ 'm_get_iface_mac')
2485+ self.add_patch('cloudinit.util.udevadm_settle', 'm_udev_settle')
2486+
2487+ def test_wait_for_physdevs_skips_settle_if_all_present(self):
2488+ physdevs = [
2489+ ['aa:bb:cc:dd:ee:ff', 'eth0', 'virtio', '0x1000'],
2490+ ['00:11:22:33:44:55', 'ens3', 'e1000', '0x1643'],
2491+ ]
2492+ netcfg = {
2493+ 'version': 2,
2494+ 'ethernets': {args[1]: _mk_v2_phys(*args)
2495+ for args in physdevs},
2496+ }
2497+ self.m_get_iface_mac.side_effect = iter([
2498+ {'aa:bb:cc:dd:ee:ff': 'eth0',
2499+ '00:11:22:33:44:55': 'ens3'},
2500+ ])
2501+ net.wait_for_physdevs(netcfg)
2502+ self.assertEqual(0, self.m_udev_settle.call_count)
2503+
2504+ def test_wait_for_physdevs_calls_udev_settle_on_missing(self):
2505+ physdevs = [
2506+ ['aa:bb:cc:dd:ee:ff', 'eth0', 'virtio', '0x1000'],
2507+ ['00:11:22:33:44:55', 'ens3', 'e1000', '0x1643'],
2508+ ]
2509+ netcfg = {
2510+ 'version': 2,
2511+ 'ethernets': {args[1]: _mk_v2_phys(*args)
2512+ for args in physdevs},
2513+ }
2514+ self.m_get_iface_mac.side_effect = iter([
2515+ {'aa:bb:cc:dd:ee:ff': 'eth0'}, # first call ens3 is missing
2516+ {'aa:bb:cc:dd:ee:ff': 'eth0',
2517+ '00:11:22:33:44:55': 'ens3'}, # second call has both
2518+ ])
2519+ net.wait_for_physdevs(netcfg)
2520+ self.m_udev_settle.assert_called_with(exists=net.sys_dev_path('ens3'))
2521+
2522+ def test_wait_for_physdevs_raise_runtime_error_if_missing_and_strict(self):
2523+ physdevs = [
2524+ ['aa:bb:cc:dd:ee:ff', 'eth0', 'virtio', '0x1000'],
2525+ ['00:11:22:33:44:55', 'ens3', 'e1000', '0x1643'],
2526+ ]
2527+ netcfg = {
2528+ 'version': 2,
2529+ 'ethernets': {args[1]: _mk_v2_phys(*args)
2530+ for args in physdevs},
2531+ }
2532+ self.m_get_iface_mac.return_value = {}
2533+ with self.assertRaises(RuntimeError):
2534+ net.wait_for_physdevs(netcfg)
2535+
2536+ self.assertEqual(5 * len(physdevs), self.m_udev_settle.call_count)
2537+
2538+ def test_wait_for_physdevs_no_raise_if_not_strict(self):
2539+ physdevs = [
2540+ ['aa:bb:cc:dd:ee:ff', 'eth0', 'virtio', '0x1000'],
2541+ ['00:11:22:33:44:55', 'ens3', 'e1000', '0x1643'],
2542+ ]
2543+ netcfg = {
2544+ 'version': 2,
2545+ 'ethernets': {args[1]: _mk_v2_phys(*args)
2546+ for args in physdevs},
2547+ }
2548+ self.m_get_iface_mac.return_value = {}
2549+ net.wait_for_physdevs(netcfg, strict=False)
2550+ self.assertEqual(5 * len(physdevs), self.m_udev_settle.call_count)
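Editor's note: the `test_ephemeral_ipv4_network_with_rfc3442_static_routes` case above asserts a symmetric pattern — routes added on entry are deleted in reverse order on exit. A minimal standalone sketch of that pattern (class and parameter names are illustrative, not cloud-init's actual `EphemeralIPv4Network` implementation; `run` stands in for `util.subp`):

```python
class EphemeralRoutes:
    """Sketch: add routes on __enter__, delete them in reverse on __exit__.

    routes is a list of (destination, gateway) tuples, as in the test
    expectations above; run is any callable taking an argv list.
    """

    def __init__(self, interface, routes, run):
        self.interface = interface
        self.routes = routes
        self.run = run

    def __enter__(self):
        for dest, gw in self.routes:
            self.run(['ip', '-4', 'route', 'add', dest,
                      'via', gw, 'dev', self.interface])
        return self

    def __exit__(self, exc_type, exc, tb):
        # Tear down in reverse order: the last route added (e.g. the
        # default route) is removed first, mirroring the expected
        # teardown calls in the test above.
        for dest, gw in reversed(self.routes):
            self.run(['ip', '-4', 'route', 'del', dest,
                      'via', gw, 'dev', self.interface])
```

Reversing on teardown keeps the more-specific host route (such as the 169.254.169.254/32 metadata route) in place until the broader default route is gone.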
2551diff --git a/cloudinit/settings.py b/cloudinit/settings.py
2552index b1ebaad..2060d81 100644
2553--- a/cloudinit/settings.py
2554+++ b/cloudinit/settings.py
2555@@ -39,6 +39,7 @@ CFG_BUILTIN = {
2556 'Hetzner',
2557 'IBMCloud',
2558 'Oracle',
2559+ 'Exoscale',
2560 # At the end to act as a 'catch' when none of the above work...
2561 'None',
2562 ],
2563diff --git a/cloudinit/sources/DataSourceAzure.py b/cloudinit/sources/DataSourceAzure.py
2564index b7440c1..4984fa8 100755
2565--- a/cloudinit/sources/DataSourceAzure.py
2566+++ b/cloudinit/sources/DataSourceAzure.py
2567@@ -26,9 +26,14 @@ from cloudinit.url_helper import UrlError, readurl, retry_on_url_exc
2568 from cloudinit import util
2569 from cloudinit.reporting import events
2570
2571-from cloudinit.sources.helpers.azure import (azure_ds_reporter,
2572- azure_ds_telemetry_reporter,
2573- get_metadata_from_fabric)
2574+from cloudinit.sources.helpers.azure import (
2575+ azure_ds_reporter,
2576+ azure_ds_telemetry_reporter,
2577+ get_metadata_from_fabric,
2578+ get_boot_telemetry,
2579+ get_system_info,
2580+ report_diagnostic_event,
2581+ EphemeralDHCPv4WithReporting)
2582
2583 LOG = logging.getLogger(__name__)
2584
2585@@ -354,7 +359,7 @@ class DataSourceAzure(sources.DataSource):
2586 bname = str(pk['fingerprint'] + ".crt")
2587 fp_files += [os.path.join(ddir, bname)]
2588 LOG.debug("ssh authentication: "
2589- "using fingerprint from fabirc")
2590+ "using fingerprint from fabric")
2591
2592 with events.ReportEventStack(
2593 name="waiting-for-ssh-public-key",
2594@@ -419,12 +424,17 @@ class DataSourceAzure(sources.DataSource):
2595 ret = load_azure_ds_dir(cdev)
2596
2597 except NonAzureDataSource:
2598+ report_diagnostic_event(
2599+ "Did not find Azure data source in %s" % cdev)
2600 continue
2601 except BrokenAzureDataSource as exc:
2602 msg = 'BrokenAzureDataSource: %s' % exc
2603+ report_diagnostic_event(msg)
2604 raise sources.InvalidMetaDataException(msg)
2605 except util.MountFailedError:
2606- LOG.warning("%s was not mountable", cdev)
2607+ msg = '%s was not mountable' % cdev
2608+ report_diagnostic_event(msg)
2609+ LOG.warning(msg)
2610 continue
2611
2612 perform_reprovision = reprovision or self._should_reprovision(ret)
2613@@ -432,6 +442,7 @@ class DataSourceAzure(sources.DataSource):
2614 if util.is_FreeBSD():
2615 msg = "Free BSD is not supported for PPS VMs"
2616 LOG.error(msg)
2617+ report_diagnostic_event(msg)
2618 raise sources.InvalidMetaDataException(msg)
2619 ret = self._reprovision()
2620 imds_md = get_metadata_from_imds(
2621@@ -450,7 +461,9 @@ class DataSourceAzure(sources.DataSource):
2622 break
2623
2624 if not found:
2625- raise sources.InvalidMetaDataException('No Azure metadata found')
2626+ msg = 'No Azure metadata found'
2627+ report_diagnostic_event(msg)
2628+ raise sources.InvalidMetaDataException(msg)
2629
2630 if found == ddir:
2631 LOG.debug("using files cached in %s", ddir)
2632@@ -469,9 +482,14 @@ class DataSourceAzure(sources.DataSource):
2633 self._report_ready(lease=self._ephemeral_dhcp_ctx.lease)
2634 self._ephemeral_dhcp_ctx.clean_network() # Teardown ephemeral
2635 else:
2636- with EphemeralDHCPv4() as lease:
2637- self._report_ready(lease=lease)
2638-
2639+ try:
2640+ with EphemeralDHCPv4WithReporting(
2641+ azure_ds_reporter) as lease:
2642+ self._report_ready(lease=lease)
2643+ except Exception as e:
2644+ report_diagnostic_event(
2645+ "exception while reporting ready: %s" % e)
2646+ raise
2647 return crawled_data
2648
2649 def _is_platform_viable(self):
2650@@ -493,6 +511,16 @@ class DataSourceAzure(sources.DataSource):
2651 if not self._is_platform_viable():
2652 return False
2653 try:
2654+ get_boot_telemetry()
2655+ except Exception as e:
2656+ LOG.warning("Failed to get boot telemetry: %s", e)
2657+
2658+ try:
2659+ get_system_info()
2660+ except Exception as e:
2661+ LOG.warning("Failed to get system information: %s", e)
2662+
2663+ try:
2664 crawled_data = util.log_time(
2665 logfunc=LOG.debug, msg='Crawl of metadata service',
2666 func=self.crawl_metadata)
2667@@ -551,27 +579,55 @@ class DataSourceAzure(sources.DataSource):
2668 headers = {"Metadata": "true"}
2669 nl_sock = None
2670 report_ready = bool(not os.path.isfile(REPORTED_READY_MARKER_FILE))
2671+ self.imds_logging_threshold = 1
2672+ self.imds_poll_counter = 1
2673+ dhcp_attempts = 0
2674+ vnet_switched = False
2675+ return_val = None
2676
2677 def exc_cb(msg, exception):
2678 if isinstance(exception, UrlError) and exception.code == 404:
2679+ if self.imds_poll_counter == self.imds_logging_threshold:
2680+ # Reducing the logging frequency as we are polling IMDS
2681+ self.imds_logging_threshold *= 2
2682+ LOG.debug("Call to IMDS with arguments %s failed "
2683+ "with status code %s after %s retries",
2684+ msg, exception.code, self.imds_poll_counter)
2685+ LOG.debug("Backing off logging threshold for the same "
2686+ "exception to %d", self.imds_logging_threshold)
2687+ self.imds_poll_counter += 1
2688 return True
2689+
2690 # If we get an exception while trying to call IMDS, we
2691 # call DHCP and setup the ephemeral network to acquire the new IP.
2692+ LOG.debug("Call to IMDS with arguments %s failed with "
2693+ "status code %s", msg, exception.code)
2694+ report_diagnostic_event("polling IMDS failed with exception %s"
2695+ % exception.code)
2696 return False
2697
2698 LOG.debug("Wait for vnetswitch to happen")
2699 while True:
2700 try:
2701- # Save our EphemeralDHCPv4 context so we avoid repeated dhcp
2702- self._ephemeral_dhcp_ctx = EphemeralDHCPv4()
2703- lease = self._ephemeral_dhcp_ctx.obtain_lease()
2704+ # Save our EphemeralDHCPv4 context to avoid repeated dhcp
2705+ with events.ReportEventStack(
2706+ name="obtain-dhcp-lease",
2707+ description="obtain dhcp lease",
2708+ parent=azure_ds_reporter):
2709+ self._ephemeral_dhcp_ctx = EphemeralDHCPv4()
2710+ lease = self._ephemeral_dhcp_ctx.obtain_lease()
2711+
2712+ if vnet_switched:
2713+ dhcp_attempts += 1
2714 if report_ready:
2715 try:
2716 nl_sock = netlink.create_bound_netlink_socket()
2717 except netlink.NetlinkCreateSocketError as e:
2718+ report_diagnostic_event(e)
2719 LOG.warning(e)
2720 self._ephemeral_dhcp_ctx.clean_network()
2721- return
2722+ break
2723+
2724 path = REPORTED_READY_MARKER_FILE
2725 LOG.info(
2726 "Creating a marker file to report ready: %s", path)
2727@@ -579,17 +635,33 @@ class DataSourceAzure(sources.DataSource):
2728 pid=os.getpid(), time=time()))
2729 self._report_ready(lease=lease)
2730 report_ready = False
2731- try:
2732- netlink.wait_for_media_disconnect_connect(
2733- nl_sock, lease['interface'])
2734- except AssertionError as error:
2735- LOG.error(error)
2736- return
2737+
2738+ with events.ReportEventStack(
2739+ name="wait-for-media-disconnect-connect",
2740+ description="wait for vnet switch",
2741+ parent=azure_ds_reporter):
2742+ try:
2743+ netlink.wait_for_media_disconnect_connect(
2744+ nl_sock, lease['interface'])
2745+ except AssertionError as error:
2746+ report_diagnostic_event(error)
2747+ LOG.error(error)
2748+ break
2749+
2750+ vnet_switched = True
2751 self._ephemeral_dhcp_ctx.clean_network()
2752 else:
2753- return readurl(url, timeout=IMDS_TIMEOUT_IN_SECONDS,
2754- headers=headers, exception_cb=exc_cb,
2755- infinite=True, log_req_resp=False).contents
2756+ with events.ReportEventStack(
2757+ name="get-reprovision-data-from-imds",
2758+ description="get reprovision data from imds",
2759+ parent=azure_ds_reporter):
2760+ return_val = readurl(url,
2761+ timeout=IMDS_TIMEOUT_IN_SECONDS,
2762+ headers=headers,
2763+ exception_cb=exc_cb,
2764+ infinite=True,
2765+ log_req_resp=False).contents
2766+ break
2767 except UrlError:
2768 # Teardown our EphemeralDHCPv4 context on failure as we retry
2769 self._ephemeral_dhcp_ctx.clean_network()
2770@@ -598,6 +670,14 @@ class DataSourceAzure(sources.DataSource):
2771 if nl_sock:
2772 nl_sock.close()
2773
2774+ if vnet_switched:
2775+ report_diagnostic_event("attempted dhcp %d times after reuse" %
2776+ dhcp_attempts)
2777+ report_diagnostic_event("polled imds %d times after reuse" %
2778+ self.imds_poll_counter)
2779+
2780+ return return_val
2781+
2782 @azure_ds_telemetry_reporter
2783 def _report_ready(self, lease):
2784 """Tells the fabric provisioning has completed """
2785@@ -666,9 +746,12 @@ class DataSourceAzure(sources.DataSource):
2786 self.ds_cfg['agent_command'])
2787 try:
2788 fabric_data = metadata_func()
2789- except Exception:
2790+ except Exception as e:
2791+ report_diagnostic_event(
2792+ "Error communicating with Azure fabric; You may experience "
2793+ "connectivity issues: %s" % e)
2794 LOG.warning(
2795- "Error communicating with Azure fabric; You may experience."
2796+ "Error communicating with Azure fabric; You may experience "
2797 "connectivity issues.", exc_info=True)
2798 return False
2799
2800@@ -684,6 +767,11 @@ class DataSourceAzure(sources.DataSource):
2801 return
2802
2803 @property
2804+ def availability_zone(self):
2805+ return self.metadata.get(
2806+ 'imds', {}).get('compute', {}).get('platformFaultDomain')
2807+
2808+ @property
2809 def network_config(self):
2810 """Generate a network config like net.generate_fallback_network() with
2811 the following exceptions.
2812@@ -701,6 +789,10 @@ class DataSourceAzure(sources.DataSource):
2813 self._network_config = parse_network_config(nc_src)
2814 return self._network_config
2815
2816+ @property
2817+ def region(self):
2818+ return self.metadata.get('imds', {}).get('compute', {}).get('location')
2819+
2820
2821 def _partitions_on_device(devpath, maxnum=16):
2822 # return a list of tuples (ptnum, path) for each part on devpath
2823@@ -1018,7 +1110,9 @@ def read_azure_ovf(contents):
2824 try:
2825 dom = minidom.parseString(contents)
2826 except Exception as e:
2827- raise BrokenAzureDataSource("Invalid ovf-env.xml: %s" % e)
2828+ error_str = "Invalid ovf-env.xml: %s" % e
2829+ report_diagnostic_event(error_str)
2830+ raise BrokenAzureDataSource(error_str)
2831
2832 results = find_child(dom.documentElement,
2833 lambda n: n.localName == "ProvisioningSection")
2834@@ -1232,7 +1326,7 @@ def parse_network_config(imds_metadata):
2835 privateIpv4 = addr4['privateIpAddress']
2836 if privateIpv4:
2837 if dev_config.get('dhcp4', False):
2838- # Append static address config for nic > 1
2839+ # Append static address config for ip > 1
2840 netPrefix = intf['ipv4']['subnet'][0].get(
2841 'prefix', '24')
2842 if not dev_config.get('addresses'):
2843@@ -1242,6 +1336,11 @@ def parse_network_config(imds_metadata):
2844 ip=privateIpv4, prefix=netPrefix))
2845 else:
2846 dev_config['dhcp4'] = True
2847+ # non-primary interfaces should have a higher
2848+ # route-metric (cost) so default routes prefer
2849+ # primary nic due to lower route-metric value
2850+ dev_config['dhcp4-overrides'] = {
2851+ 'route-metric': (idx + 1) * 100}
2852 for addr6 in intf['ipv6']['ipAddress']:
2853 privateIpv6 = addr6['privateIpAddress']
2854 if privateIpv6:
2855@@ -1285,8 +1384,13 @@ def get_metadata_from_imds(fallback_nic, retries):
2856 if net.is_up(fallback_nic):
2857 return util.log_time(**kwargs)
2858 else:
2859- with EphemeralDHCPv4(fallback_nic):
2860- return util.log_time(**kwargs)
2861+ try:
2862+ with EphemeralDHCPv4WithReporting(
2863+ azure_ds_reporter, fallback_nic):
2864+ return util.log_time(**kwargs)
2865+ except Exception as e:
2866+ report_diagnostic_event("exception while getting metadata: %s" % e)
2867+ raise
2868
2869
2870 @azure_ds_telemetry_reporter
2871@@ -1299,11 +1403,14 @@ def _get_metadata_from_imds(retries):
2872 url, timeout=IMDS_TIMEOUT_IN_SECONDS, headers=headers,
2873 retries=retries, exception_cb=retry_on_url_exc)
2874 except Exception as e:
2875- LOG.debug('Ignoring IMDS instance metadata: %s', e)
2876+ msg = 'Ignoring IMDS instance metadata: %s' % e
2877+ report_diagnostic_event(msg)
2878+ LOG.debug(msg)
2879 return {}
2880 try:
2881 return util.load_json(str(response))
2882- except json.decoder.JSONDecodeError:
2883+ except json.decoder.JSONDecodeError as e:
2884+        report_diagnostic_event('non-json imds response: %s' % e)
2885 LOG.warning(
2886 'Ignoring non-json IMDS instance metadata: %s', str(response))
2887 return {}
2888@@ -1356,8 +1463,10 @@ def _is_platform_viable(seed_dir):
2889 asset_tag = util.read_dmi_data('chassis-asset-tag')
2890 if asset_tag == AZURE_CHASSIS_ASSET_TAG:
2891 return True
2892- LOG.debug("Non-Azure DMI asset tag '%s' discovered.", asset_tag)
2893- evt.description = "Non-Azure DMI asset tag '%s' discovered.", asset_tag
2894+ msg = "Non-Azure DMI asset tag '%s' discovered." % asset_tag
2895+ LOG.debug(msg)
2896+ evt.description = msg
2897+ report_diagnostic_event(msg)
2898 if os.path.exists(os.path.join(seed_dir, 'ovf-env.xml')):
2899 return True
2900 return False
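Editor's note: the `exc_cb` change in the Azure diff above throttles IMDS 404 logging by doubling a threshold each time it is reached, so only the 1st, 2nd, 4th, 8th, ... failures are logged. A standalone sketch of that backoff pattern (names are illustrative, not the datasource's API):

```python
class BackoffLogger:
    """Sketch: log only the 1st, 2nd, 4th, 8th, ... repeated failures."""

    def __init__(self, log):
        self.log = log          # any callable taking one message string
        self.threshold = 1      # next occurrence count worth logging
        self.counter = 1        # occurrences seen so far

    def note_failure(self, detail):
        if self.counter == self.threshold:
            # Double the threshold so log volume grows only
            # logarithmically with the number of polls.
            self.threshold *= 2
            self.log('poll failed (%s) after %d attempts'
                     % (detail, self.counter))
        self.counter += 1
```

With this scheme an instance polling IMDS thousands of times during a long pre-provisioning wait emits only a handful of log lines instead of one per attempt.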
2901diff --git a/cloudinit/sources/DataSourceCloudSigma.py b/cloudinit/sources/DataSourceCloudSigma.py
2902index 2955d3f..df88f67 100644
2903--- a/cloudinit/sources/DataSourceCloudSigma.py
2904+++ b/cloudinit/sources/DataSourceCloudSigma.py
2905@@ -42,12 +42,8 @@ class DataSourceCloudSigma(sources.DataSource):
2906 if not sys_product_name:
2907 LOG.debug("system-product-name not available in dmi data")
2908 return False
2909- else:
2910- LOG.debug("detected hypervisor as %s", sys_product_name)
2911- return 'cloudsigma' in sys_product_name.lower()
2912-
2913- LOG.warning("failed to query dmi data for system product name")
2914- return False
2915+ LOG.debug("detected hypervisor as %s", sys_product_name)
2916+ return 'cloudsigma' in sys_product_name.lower()
2917
2918 def _get_data(self):
2919 """
2920diff --git a/cloudinit/sources/DataSourceExoscale.py b/cloudinit/sources/DataSourceExoscale.py
2921new file mode 100644
2922index 0000000..52e7f6f
2923--- /dev/null
2924+++ b/cloudinit/sources/DataSourceExoscale.py
2925@@ -0,0 +1,258 @@
2926+# Author: Mathieu Corbin <mathieu.corbin@exoscale.com>
2927+# Author: Christopher Glass <christopher.glass@exoscale.com>
2928+#
2929+# This file is part of cloud-init. See LICENSE file for license information.
2930+
2931+from cloudinit import ec2_utils as ec2
2932+from cloudinit import log as logging
2933+from cloudinit import sources
2934+from cloudinit import url_helper
2935+from cloudinit import util
2936+
2937+LOG = logging.getLogger(__name__)
2938+
2939+METADATA_URL = "http://169.254.169.254"
2940+API_VERSION = "1.0"
2941+PASSWORD_SERVER_PORT = 8080
2942+
2943+URL_TIMEOUT = 10
2944+URL_RETRIES = 6
2945+
2946+EXOSCALE_DMI_NAME = "Exoscale"
2947+
2948+BUILTIN_DS_CONFIG = {
2949+ # We run the set password config module on every boot in order to enable
2950+ # resetting the instance's password via the exoscale console (and a
2951+ # subsequent instance reboot).
2952+ 'cloud_config_modules': [["set-passwords", "always"]]
2953+}
2954+
2955+
2956+class DataSourceExoscale(sources.DataSource):
2957+
2958+ dsname = 'Exoscale'
2959+
2960+ def __init__(self, sys_cfg, distro, paths):
2961+ super(DataSourceExoscale, self).__init__(sys_cfg, distro, paths)
2962+ LOG.debug("Initializing the Exoscale datasource")
2963+
2964+ self.metadata_url = self.ds_cfg.get('metadata_url', METADATA_URL)
2965+ self.api_version = self.ds_cfg.get('api_version', API_VERSION)
2966+ self.password_server_port = int(
2967+ self.ds_cfg.get('password_server_port', PASSWORD_SERVER_PORT))
2968+ self.url_timeout = self.ds_cfg.get('timeout', URL_TIMEOUT)
2969+ self.url_retries = self.ds_cfg.get('retries', URL_RETRIES)
2970+
2971+ self.extra_config = BUILTIN_DS_CONFIG
2972+
2973+ def wait_for_metadata_service(self):
2974+ """Wait for the metadata service to be reachable."""
2975+
2976+ metadata_url = "{}/{}/meta-data/instance-id".format(
2977+ self.metadata_url, self.api_version)
2978+
2979+ url = url_helper.wait_for_url(
2980+ urls=[metadata_url],
2981+ max_wait=self.url_max_wait,
2982+ timeout=self.url_timeout,
2983+ status_cb=LOG.critical)
2984+
2985+ return bool(url)
2986+
2987+ def crawl_metadata(self):
2988+ """
2989+ Crawl the metadata service when available.
2990+
2991+ @returns: Dictionary of crawled metadata content.
2992+ """
2993+ metadata_ready = util.log_time(
2994+ logfunc=LOG.info,
2995+ msg='waiting for the metadata service',
2996+ func=self.wait_for_metadata_service)
2997+
2998+ if not metadata_ready:
2999+ return {}
3000+
3001+ return read_metadata(self.metadata_url, self.api_version,
3002+ self.password_server_port, self.url_timeout,
3003+ self.url_retries)
3004+
3005+ def _get_data(self):
3006+ """Fetch the user data, the metadata and the VM password
3007+ from the metadata service.
3008+
3009+ Please refer to the datasource documentation for details on how the
3010+ metadata server and password server are crawled.
3011+ """
3012+ if not self._is_platform_viable():
3013+ return False
3014+
3015+ data = util.log_time(
3016+ logfunc=LOG.debug,
3017+ msg='Crawl of metadata service',
3018+ func=self.crawl_metadata)
3019+
3020+ if not data:
3021+ return False
3022+
3023+ self.userdata_raw = data['user-data']
3024+ self.metadata = data['meta-data']
3025+ password = data.get('password')
3026+
3027+ password_config = {}
3028+ if password:
3029+ # Since we have a password, let's make sure we are allowed to use
3030+ # it by allowing ssh_pwauth.
3031+ # The password module's default behavior is to leave the
3032+ # configuration as-is in this regard, so that means it will either
3033+ # leave the password always disabled if no password is ever set, or
3034+ # leave the password login enabled if we set it once.
3035+ password_config = {
3036+ 'ssh_pwauth': True,
3037+ 'password': password,
3038+ 'chpasswd': {
3039+ 'expire': False,
3040+ },
3041+ }
3042+
3043+ # builtin extra_config overrides password_config
3044+ self.extra_config = util.mergemanydict(
3045+ [self.extra_config, password_config])
3046+
3047+ return True
3048+
3049+ def get_config_obj(self):
3050+ return self.extra_config
3051+
3052+ def _is_platform_viable(self):
3053+ return util.read_dmi_data('system-product-name').startswith(
3054+ EXOSCALE_DMI_NAME)
3055+
3056+
3057+# Used to match classes to dependencies
3058+datasources = [
3059+ (DataSourceExoscale, (sources.DEP_FILESYSTEM, sources.DEP_NETWORK)),
3060+]
3061+
3062+
3063+# Return a list of data sources that match this set of dependencies
3064+def get_datasource_list(depends):
3065+ return sources.list_from_depends(depends, datasources)
3066+
3067+
3068+def get_password(metadata_url=METADATA_URL,
3069+ api_version=API_VERSION,
3070+ password_server_port=PASSWORD_SERVER_PORT,
3071+ url_timeout=URL_TIMEOUT,
3072+ url_retries=URL_RETRIES):
3073+ """Obtain the VM's password if set.
3074+
3075+ Once fetched the password is marked saved. Future calls to this method may
3076+ return empty string or 'saved_password'."""
3077+ password_url = "{}:{}/{}/".format(metadata_url, password_server_port,
3078+ api_version)
3079+ response = url_helper.read_file_or_url(
3080+ password_url,
3081+ ssl_details=None,
3082+ headers={"DomU_Request": "send_my_password"},
3083+ timeout=url_timeout,
3084+ retries=url_retries)
3085+ password = response.contents.decode('utf-8')
3086+ # the password is empty or already saved
3087+ # Note: the original metadata server would answer an additional
3088+ # 'bad_request' status, but the Exoscale implementation does not.
3089+ if password in ['', 'saved_password']:
3090+ return None
3091+ # save the password
3092+ url_helper.read_file_or_url(
3093+ password_url,
3094+ ssl_details=None,
3095+ headers={"DomU_Request": "saved_password"},
3096+ timeout=url_timeout,
3097+ retries=url_retries)
3098+ return password
3099+
3100+
3101+def read_metadata(metadata_url=METADATA_URL,
3102+ api_version=API_VERSION,
3103+ password_server_port=PASSWORD_SERVER_PORT,
3104+ url_timeout=URL_TIMEOUT,
3105+ url_retries=URL_RETRIES):
3106+ """Query the metadata server and return the retrieved data."""
3107+ crawled_metadata = {}
3108+ crawled_metadata['_metadata_api_version'] = api_version
3109+ try:
3110+ crawled_metadata['user-data'] = ec2.get_instance_userdata(
3111+ api_version,
3112+ metadata_url,
3113+ timeout=url_timeout,
3114+ retries=url_retries)
3115+ crawled_metadata['meta-data'] = ec2.get_instance_metadata(
3116+ api_version,
3117+ metadata_url,
3118+ timeout=url_timeout,
3119+ retries=url_retries)
3120+ except Exception as e:
3121+ util.logexc(LOG, "failed reading from metadata url %s (%s)",
3122+ metadata_url, e)
3123+ return {}
3124+
3125+ try:
3126+ crawled_metadata['password'] = get_password(
3127+ api_version=api_version,
3128+ metadata_url=metadata_url,
3129+ password_server_port=password_server_port,
3130+ url_retries=url_retries,
3131+ url_timeout=url_timeout)
3132+ except Exception as e:
3133+ util.logexc(LOG, "failed to read from password server url %s:%s (%s)",
3134+ metadata_url, password_server_port, e)
3135+
3136+ return crawled_metadata
3137+
3138+
3139+if __name__ == "__main__":
3140+ import argparse
3141+
3142+ parser = argparse.ArgumentParser(description='Query Exoscale Metadata')
3143+ parser.add_argument(
3144+ "--endpoint",
3145+ metavar="URL",
3146+ help="The url of the metadata service.",
3147+ default=METADATA_URL)
3148+ parser.add_argument(
3149+ "--version",
3150+ metavar="VERSION",
3151+ help="The version of the metadata endpoint to query.",
3152+ default=API_VERSION)
3153+ parser.add_argument(
3154+ "--retries",
3155+ metavar="NUM",
3156+ type=int,
3157+ help="The number of retries querying the endpoint.",
3158+ default=URL_RETRIES)
3159+ parser.add_argument(
3160+ "--timeout",
3161+ metavar="NUM",
3162+ type=int,
3163+ help="The time in seconds to wait before timing out.",
3164+ default=URL_TIMEOUT)
3165+ parser.add_argument(
3166+ "--password-port",
3167+ metavar="PORT",
3168+ type=int,
3169+ help="The port on which the password endpoint listens",
3170+ default=PASSWORD_SERVER_PORT)
3171+
3172+ args = parser.parse_args()
3173+
3174+ data = read_metadata(
3175+ metadata_url=args.endpoint,
3176+ api_version=args.version,
3177+ password_server_port=args.password_port,
3178+ url_timeout=args.timeout,
3179+ url_retries=args.retries)
3180+
3181+ print(util.json_dumps(data))
3182+
3183+# vi: ts=4 expandtab
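Editor's note: the Exoscale password retrieval above is a two-request exchange: a GET with a `DomU_Request: send_my_password` header returns the password, and a follow-up request with `saved_password` marks it consumed so it is not handed out again. A minimal sketch of that protocol against an abstract `fetch` callable (hypothetical helper, for illustration only):

```python
def fetch_vm_password(fetch):
    """Sketch of the two-step password-server exchange.

    fetch(headers) -> response body string; it stands in for the
    url_helper.read_file_or_url calls in get_password() above.
    Returns the password, or None when it is empty or was already
    consumed (the '' / 'saved_password' sentinels).
    """
    password = fetch({'DomU_Request': 'send_my_password'})
    if password in ('', 'saved_password'):
        return None
    # Acknowledge receipt so the server will not serve the
    # password again on subsequent boots.
    fetch({'DomU_Request': 'saved_password'})
    return password
```

Note the acknowledgement only happens when a real password was returned; the sentinel responses short-circuit without a second request.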
3184diff --git a/cloudinit/sources/DataSourceGCE.py b/cloudinit/sources/DataSourceGCE.py
3185index d816262..6cbfbba 100644
3186--- a/cloudinit/sources/DataSourceGCE.py
3187+++ b/cloudinit/sources/DataSourceGCE.py
3188@@ -18,10 +18,13 @@ LOG = logging.getLogger(__name__)
3189 MD_V1_URL = 'http://metadata.google.internal/computeMetadata/v1/'
3190 BUILTIN_DS_CONFIG = {'metadata_url': MD_V1_URL}
3191 REQUIRED_FIELDS = ('instance-id', 'availability-zone', 'local-hostname')
3192+GUEST_ATTRIBUTES_URL = ('http://metadata.google.internal/computeMetadata/'
3193+ 'v1/instance/guest-attributes')
3194+HOSTKEY_NAMESPACE = 'hostkeys'
3195+HEADERS = {'Metadata-Flavor': 'Google'}
3196
3197
3198 class GoogleMetadataFetcher(object):
3199- headers = {'Metadata-Flavor': 'Google'}
3200
3201 def __init__(self, metadata_address):
3202 self.metadata_address = metadata_address
3203@@ -32,7 +35,7 @@ class GoogleMetadataFetcher(object):
3204 url = self.metadata_address + path
3205 if is_recursive:
3206 url += '/?recursive=True'
3207- resp = url_helper.readurl(url=url, headers=self.headers)
3208+ resp = url_helper.readurl(url=url, headers=HEADERS)
3209 except url_helper.UrlError as exc:
3210 msg = "url %s raised exception %s"
3211 LOG.debug(msg, path, exc)
3212@@ -90,6 +93,10 @@ class DataSourceGCE(sources.DataSource):
3213 public_keys_data = self.metadata['public-keys-data']
3214 return _parse_public_keys(public_keys_data, self.default_user)
3215
3216+ def publish_host_keys(self, hostkeys):
3217+ for key in hostkeys:
3218+ _write_host_key_to_guest_attributes(*key)
3219+
3220 def get_hostname(self, fqdn=False, resolve_ip=False, metadata_only=False):
3221         # GCE has long FQDNs and has asked for short hostnames.
3222 return self.metadata['local-hostname'].split('.')[0]
3223@@ -103,6 +110,17 @@ class DataSourceGCE(sources.DataSource):
3224 return self.availability_zone.rsplit('-', 1)[0]
3225
3226
3227+def _write_host_key_to_guest_attributes(key_type, key_value):
3228+ url = '%s/%s/%s' % (GUEST_ATTRIBUTES_URL, HOSTKEY_NAMESPACE, key_type)
3229+ key_value = key_value.encode('utf-8')
3230+ resp = url_helper.readurl(url=url, data=key_value, headers=HEADERS,
3231+ request_method='PUT', check_status=False)
3232+ if resp.ok():
3233+ LOG.debug('Wrote %s host key to guest attributes.', key_type)
3234+ else:
3235+ LOG.debug('Unable to write %s host key to guest attributes.', key_type)
3236+
3237+
3238 def _has_expired(public_key):
3239 # Check whether an SSH key is expired. Public key input is a single SSH
3240 # public key in the GCE specific key format documented here:
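The host-key publishing added above PUTs each key to the GCE guest-attributes endpoint. A minimal standalone sketch of how that request is assembled (the `build_host_key_request` helper is hypothetical and only builds the request; the real code sends it via cloud-init's `url_helper.readurl` with `request_method='PUT'` and `check_status=False`):

```python
# Constants mirror the diff; build_host_key_request is a hypothetical
# helper that assembles the PUT request without sending it.
GUEST_ATTRIBUTES_URL = ('http://metadata.google.internal/computeMetadata/'
                        'v1/instance/guest-attributes')
HOSTKEY_NAMESPACE = 'hostkeys'
HEADERS = {'Metadata-Flavor': 'Google'}


def build_host_key_request(key_type, key_value):
    """Return (url, body, headers) for publishing one SSH host key."""
    url = '%s/%s/%s' % (GUEST_ATTRIBUTES_URL, HOSTKEY_NAMESPACE, key_type)
    return url, key_value.encode('utf-8'), HEADERS


url, body, headers = build_host_key_request('ssh-rsa', 'AAAAB3NzaC1y')
# url ends with '/hostkeys/ssh-rsa'; body is the UTF-8 encoded key value
```

`publish_host_keys` then simply loops over the `(key_type, key_value)` tuples and issues one such PUT per key.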
3241diff --git a/cloudinit/sources/DataSourceHetzner.py b/cloudinit/sources/DataSourceHetzner.py
3242index 5c75b65..5029833 100644
3243--- a/cloudinit/sources/DataSourceHetzner.py
3244+++ b/cloudinit/sources/DataSourceHetzner.py
3245@@ -28,6 +28,9 @@ MD_WAIT_RETRY = 2
3246
3247
3248 class DataSourceHetzner(sources.DataSource):
3249+
3250+ dsname = 'Hetzner'
3251+
3252 def __init__(self, sys_cfg, distro, paths):
3253 sources.DataSource.__init__(self, sys_cfg, distro, paths)
3254 self.distro = distro
3255diff --git a/cloudinit/sources/DataSourceNoCloud.py b/cloudinit/sources/DataSourceNoCloud.py
3256index fcf5d58..8a9e5dd 100644
3257--- a/cloudinit/sources/DataSourceNoCloud.py
3258+++ b/cloudinit/sources/DataSourceNoCloud.py
3259@@ -35,6 +35,26 @@ class DataSourceNoCloud(sources.DataSource):
3260 root = sources.DataSource.__str__(self)
3261 return "%s [seed=%s][dsmode=%s]" % (root, self.seed, self.dsmode)
3262
3263+ def _get_devices(self, label):
3264+ if util.is_FreeBSD():
3265+ devlist = [
3266+ p for p in ['/dev/msdosfs/' + label, '/dev/iso9660/' + label]
3267+ if os.path.exists(p)]
3268+ else:
3269+ # Query optical drive to get it in blkid cache for 2.6 kernels
3270+ util.find_devs_with(path="/dev/sr0")
3271+ util.find_devs_with(path="/dev/sr1")
3272+
3273+ fslist = util.find_devs_with("TYPE=vfat")
3274+ fslist.extend(util.find_devs_with("TYPE=iso9660"))
3275+
3276+ label_list = util.find_devs_with("LABEL=%s" % label.upper())
3277+ label_list.extend(util.find_devs_with("LABEL=%s" % label.lower()))
3278+
3279+ devlist = list(set(fslist) & set(label_list))
3280+ devlist.sort(reverse=True)
3281+ return devlist
3282+
3283 def _get_data(self):
3284 defaults = {
3285 "instance-id": "nocloud",
3286@@ -99,20 +119,7 @@ class DataSourceNoCloud(sources.DataSource):
3287
3288 label = self.ds_cfg.get('fs_label', "cidata")
3289 if label is not None:
3290- # Query optical drive to get it in blkid cache for 2.6 kernels
3291- util.find_devs_with(path="/dev/sr0")
3292- util.find_devs_with(path="/dev/sr1")
3293-
3294- fslist = util.find_devs_with("TYPE=vfat")
3295- fslist.extend(util.find_devs_with("TYPE=iso9660"))
3296-
3297- label_list = util.find_devs_with("LABEL=%s" % label.upper())
3298- label_list.extend(util.find_devs_with("LABEL=%s" % label.lower()))
3299-
3300- devlist = list(set(fslist) & set(label_list))
3301- devlist.sort(reverse=True)
3302-
3303- for dev in devlist:
3304+ for dev in self._get_devices(label):
3305 try:
3306 LOG.debug("Attempting to use data from %s", dev)
3307
3308@@ -120,9 +127,8 @@ class DataSourceNoCloud(sources.DataSource):
3309 seeded = util.mount_cb(dev, _pp2d_callback,
3310 pp2d_kwargs)
3311 except ValueError:
3312- if dev in label_list:
3313- LOG.warning("device %s with label=%s not a"
3314- "valid seed.", dev, label)
3315+                LOG.warning("device %s with label=%s not a "
3316+                            "valid seed.", dev, label)
3317 continue
3318
3319 mydata = _merge_new_seed(mydata, seeded)
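The refactored `_get_devices` keeps the original Linux selection logic: a device qualifies only if it matches both a supported filesystem type and the seed label (upper- or lower-case), and candidates are tried in reverse-sorted order. A self-contained sketch of that filtering step, with hypothetical `blkid`-style device lists:

```python
def select_seed_devices(fslist, label_list):
    """Intersect filesystem-type matches with label matches and
    return them highest-sorting first (mirrors _get_devices)."""
    devlist = list(set(fslist) & set(label_list))
    devlist.sort(reverse=True)
    return devlist


# Hypothetical results: three vfat/iso9660 devices, two of which
# also carry the 'cidata'/'CIDATA' label.
fslist = ['/dev/sr0', '/dev/vdb', '/dev/sr1']
label_list = ['/dev/vdb', '/dev/sr0']
assert select_seed_devices(fslist, label_list) == ['/dev/vdb', '/dev/sr0']
```

On FreeBSD the refactor bypasses this intersection entirely and probes `/dev/msdosfs/<label>` and `/dev/iso9660/<label>` directly.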
3320diff --git a/cloudinit/sources/DataSourceOVF.py b/cloudinit/sources/DataSourceOVF.py
3321index 70e7a5c..dd941d2 100644
3322--- a/cloudinit/sources/DataSourceOVF.py
3323+++ b/cloudinit/sources/DataSourceOVF.py
3324@@ -148,6 +148,9 @@ class DataSourceOVF(sources.DataSource):
3325 product_marker, os.path.join(self.paths.cloud_dir, 'data'))
3326 special_customization = product_marker and not hasmarkerfile
3327 customscript = self._vmware_cust_conf.custom_script_name
3328+ ccScriptsDir = os.path.join(
3329+ self.paths.get_cpath("scripts"),
3330+ "per-instance")
3331 except Exception as e:
3332 _raise_error_status(
3333 "Error parsing the customization Config File",
3334@@ -201,7 +204,9 @@ class DataSourceOVF(sources.DataSource):
3335
3336 if customscript:
3337 try:
3338- postcust = PostCustomScript(customscript, imcdirpath)
3339+ postcust = PostCustomScript(customscript,
3340+ imcdirpath,
3341+ ccScriptsDir)
3342 postcust.execute()
3343 except Exception as e:
3344 _raise_error_status(
3345diff --git a/cloudinit/sources/DataSourceOracle.py b/cloudinit/sources/DataSourceOracle.py
3346index 70b9c58..6e73f56 100644
3347--- a/cloudinit/sources/DataSourceOracle.py
3348+++ b/cloudinit/sources/DataSourceOracle.py
3349@@ -16,7 +16,7 @@ Notes:
3350 """
3351
3352 from cloudinit.url_helper import combine_url, readurl, UrlError
3353-from cloudinit.net import dhcp
3354+from cloudinit.net import dhcp, get_interfaces_by_mac
3355 from cloudinit import net
3356 from cloudinit import sources
3357 from cloudinit import util
3358@@ -28,8 +28,80 @@ import re
3359
3360 LOG = logging.getLogger(__name__)
3361
3362+BUILTIN_DS_CONFIG = {
3363+ # Don't use IMDS to configure secondary NICs by default
3364+ 'configure_secondary_nics': False,
3365+}
3366 CHASSIS_ASSET_TAG = "OracleCloud.com"
3367 METADATA_ENDPOINT = "http://169.254.169.254/openstack/"
3368+VNIC_METADATA_URL = 'http://169.254.169.254/opc/v1/vnics/'
3369+# https://docs.cloud.oracle.com/iaas/Content/Network/Troubleshoot/connectionhang.htm#Overview,
3370+# indicates that an MTU of 9000 is used within OCI
3371+MTU = 9000
3372+
3373+
3374+def _add_network_config_from_opc_imds(network_config):
3375+ """
3376+ Fetch data from Oracle's IMDS, generate secondary NIC config, merge it.
3377+
3378+ The primary NIC configuration should not be modified based on the IMDS
3379+ values, as it should continue to be configured for DHCP. As such, this
3380+ takes an existing network_config dict which is expected to have the primary
3381+ NIC configuration already present. It will mutate the given dict to
3382+ include the secondary VNICs.
3383+
3384+ :param network_config:
3385+ A v1 network config dict with the primary NIC already configured. This
3386+ dict will be mutated.
3387+
3388+ :raises:
3389+ Exceptions are not handled within this function. Likely exceptions are
3390+ those raised by url_helper.readurl (if communicating with the IMDS
3391+ fails), ValueError/JSONDecodeError (if the IMDS returns invalid JSON),
3392+ and KeyError/IndexError (if the IMDS returns valid JSON with unexpected
3393+ contents).
3394+ """
3395+ resp = readurl(VNIC_METADATA_URL)
3396+ vnics = json.loads(str(resp))
3397+
3398+ if 'nicIndex' in vnics[0]:
3399+ # TODO: Once configure_secondary_nics defaults to True, lower the level
3400+ # of this log message. (Currently, if we're running this code at all,
3401+ # someone has explicitly opted-in to secondary VNIC configuration, so
3402+ # we should warn them that it didn't happen. Once it's default, this
3403+ # would be emitted on every Bare Metal Machine launch, which means INFO
3404+ # or DEBUG would be more appropriate.)
3405+ LOG.warning(
3406+ 'VNIC metadata indicates this is a bare metal machine; skipping'
3407+ ' secondary VNIC configuration.'
3408+ )
3409+ return
3410+
3411+ interfaces_by_mac = get_interfaces_by_mac()
3412+
3413+ for vnic_dict in vnics[1:]:
3414+ # We skip the first entry in the response because the primary interface
3415+ # is already configured by iSCSI boot; applying configuration from the
3416+ # IMDS is not required.
3417+ mac_address = vnic_dict['macAddr'].lower()
3418+ if mac_address not in interfaces_by_mac:
3419+ LOG.debug('Interface with MAC %s not found; skipping', mac_address)
3420+ continue
3421+ name = interfaces_by_mac[mac_address]
3422+ subnet = {
3423+ 'type': 'static',
3424+ 'address': vnic_dict['privateIp'],
3425+ 'netmask': vnic_dict['subnetCidrBlock'].split('/')[1],
3426+ 'gateway': vnic_dict['virtualRouterIp'],
3427+ 'control': 'manual',
3428+ }
3429+ network_config['config'].append({
3430+ 'name': name,
3431+ 'type': 'physical',
3432+ 'mac_address': mac_address,
3433+ 'mtu': MTU,
3434+ 'subnets': [subnet],
3435+ })
3436
3437
3438 class DataSourceOracle(sources.DataSource):
3439@@ -37,8 +109,22 @@ class DataSourceOracle(sources.DataSource):
3440 dsname = 'Oracle'
3441 system_uuid = None
3442 vendordata_pure = None
3443+ network_config_sources = (
3444+ sources.NetworkConfigSource.cmdline,
3445+ sources.NetworkConfigSource.ds,
3446+ sources.NetworkConfigSource.initramfs,
3447+ sources.NetworkConfigSource.system_cfg,
3448+ )
3449+
3450 _network_config = sources.UNSET
3451
3452+ def __init__(self, sys_cfg, *args, **kwargs):
3453+ super(DataSourceOracle, self).__init__(sys_cfg, *args, **kwargs)
3454+
3455+ self.ds_cfg = util.mergemanydict([
3456+ util.get_cfg_by_path(sys_cfg, ['datasource', self.dsname], {}),
3457+ BUILTIN_DS_CONFIG])
3458+
3459 def _is_platform_viable(self):
3460 """Check platform environment to report if this datasource may run."""
3461 return _is_platform_viable()
3462@@ -48,7 +134,7 @@ class DataSourceOracle(sources.DataSource):
3463 return False
3464
3465 # network may be configured if iscsi root. If that is the case
3466- # then read_kernel_cmdline_config will return non-None.
3467+ # then read_initramfs_config will return non-None.
3468 if _is_iscsi_root():
3469 data = self.crawl_metadata()
3470 else:
3471@@ -118,11 +204,17 @@ class DataSourceOracle(sources.DataSource):
3472 We nonetheless return cmdline provided config if present
3473 and fallback to generate fallback."""
3474 if self._network_config == sources.UNSET:
3475- cmdline_cfg = cmdline.read_kernel_cmdline_config()
3476- if cmdline_cfg:
3477- self._network_config = cmdline_cfg
3478- else:
3479+ self._network_config = cmdline.read_initramfs_config()
3480+ if not self._network_config:
3481 self._network_config = self.distro.generate_fallback_config()
3482+ if self.ds_cfg.get('configure_secondary_nics'):
3483+ try:
3484+ # Mutate self._network_config to include secondary VNICs
3485+ _add_network_config_from_opc_imds(self._network_config)
3486+ except Exception:
3487+ util.logexc(
3488+ LOG,
3489+ "Failed to fetch secondary network configuration!")
3490 return self._network_config
3491
3492
3493@@ -137,7 +229,7 @@ def _is_platform_viable():
3494
3495
3496 def _is_iscsi_root():
3497- return bool(cmdline.read_kernel_cmdline_config())
3498+ return bool(cmdline.read_initramfs_config())
3499
3500
3501 def _load_index(content):
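The per-VNIC mapping performed by `_add_network_config_from_opc_imds` can be sketched in isolation; the interface name below is a placeholder, and the JSON mirrors the shape of the `/opc/v1/vnics/` response shown in the tests:

```python
import json

MTU = 9000  # OCI uses a 9000-byte MTU (see the doc link in the diff)


def vnic_to_physical(vnic, name):
    """Map one IMDS VNIC dict to a v1 'physical' network-config entry."""
    return {
        'name': name,
        'type': 'physical',
        'mac_address': vnic['macAddr'].lower(),
        'mtu': MTU,
        'subnets': [{
            'type': 'static',
            'address': vnic['privateIp'],
            # the prefix length after '/' serves as the netmask value
            'netmask': vnic['subnetCidrBlock'].split('/')[1],
            'gateway': vnic['virtualRouterIp'],
            'control': 'manual',
        }],
    }


vnic = json.loads(
    '{"macAddr": "02:00:17:05:CF:51", "privateIp": "10.0.4.5",'
    ' "subnetCidrBlock": "10.0.4.0/24", "virtualRouterIp": "10.0.4.1"}')
entry = vnic_to_physical(vnic, 'ens4')  # 'ens4' is a hypothetical name
```

The real function skips the first VNIC (already configured via iSCSI boot) and any MAC not present in `get_interfaces_by_mac()` before appending entries like this to `network_config['config']`.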
3502diff --git a/cloudinit/sources/__init__.py b/cloudinit/sources/__init__.py
3503index e6966b3..a319322 100644
3504--- a/cloudinit/sources/__init__.py
3505+++ b/cloudinit/sources/__init__.py
3506@@ -66,6 +66,13 @@ CLOUD_ID_REGION_PREFIX_MAP = {
3507 'china': ('azure-china', lambda c: c == 'azure'), # only change azure
3508 }
3509
3510+# NetworkConfigSource represents the canonical list of network config sources
3511+# that cloud-init knows about. (Python 2.7 lacks PEP 435, so use a singleton
3512+# namedtuple as an enum; see https://stackoverflow.com/a/6971002)
3513+_NETCFG_SOURCE_NAMES = ('cmdline', 'ds', 'system_cfg', 'fallback', 'initramfs')
3514+NetworkConfigSource = namedtuple('NetworkConfigSource',
3515+ _NETCFG_SOURCE_NAMES)(*_NETCFG_SOURCE_NAMES)
3516+
3517
3518 class DataSourceNotFoundException(Exception):
3519 pass
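The namedtuple-as-enum pattern introduced above (used because Python 2.7 lacks PEP 435's `enum.Enum`) can be sketched standalone; the names mirror the diff:

```python
from collections import namedtuple

# A singleton namedtuple stands in for an enum on Python 2.7:
# instantiating the type with its own field names makes each member
# an attribute whose value is simply its name string.
_NETCFG_SOURCE_NAMES = ('cmdline', 'ds', 'system_cfg', 'fallback',
                        'initramfs')
NetworkConfigSource = namedtuple(
    'NetworkConfigSource', _NETCFG_SOURCE_NAMES)(*_NETCFG_SOURCE_NAMES)

# Members read like enum attributes and compare as plain strings,
# and tuple membership gives a cheap validity check.
assert NetworkConfigSource.initramfs == 'initramfs'
assert 'system_cfg' in NetworkConfigSource
```

A data source then declares an ordered preference tuple such as `(NetworkConfigSource.cmdline, NetworkConfigSource.initramfs, ...)`, and the first source that yields network configuration wins.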
3520@@ -153,6 +160,16 @@ class DataSource(object):
3521 # Track the discovered fallback nic for use in configuration generation.
3522 _fallback_interface = None
3523
3524+ # The network configuration sources that should be considered for this data
3525+ # source. (The first source in this list that provides network
3526+ # configuration will be used without considering any that follow.) This
3527+ # should always be a subset of the members of NetworkConfigSource with no
3528+ # duplicate entries.
3529+ network_config_sources = (NetworkConfigSource.cmdline,
3530+ NetworkConfigSource.initramfs,
3531+ NetworkConfigSource.system_cfg,
3532+ NetworkConfigSource.ds)
3533+
3534 # read_url_params
3535 url_max_wait = -1 # max_wait < 0 means do not wait
3536 url_timeout = 10 # timeout for each metadata url read attempt
3537@@ -474,6 +491,16 @@ class DataSource(object):
3538 def get_public_ssh_keys(self):
3539 return normalize_pubkey_data(self.metadata.get('public-keys'))
3540
3541+ def publish_host_keys(self, hostkeys):
3542+ """Publish the public SSH host keys (found in /etc/ssh/*.pub).
3543+
3544+ @param hostkeys: List of host key tuples (key_type, key_value),
3545+ where key_type is the first field in the public key file
3546+ (e.g. 'ssh-rsa') and key_value is the key itself
3547+ (e.g. 'AAAAB3NzaC1y...').
3548+ """
3549+ pass
3550+
3551 def _remap_device(self, short_name):
3552 # LP: #611137
3553 # the metadata service may believe that devices are named 'sda'
3554diff --git a/cloudinit/sources/helpers/azure.py b/cloudinit/sources/helpers/azure.py
3555index 82c4c8c..f1fba17 100755
3556--- a/cloudinit/sources/helpers/azure.py
3557+++ b/cloudinit/sources/helpers/azure.py
3558@@ -16,7 +16,11 @@ from xml.etree import ElementTree
3559
3560 from cloudinit import url_helper
3561 from cloudinit import util
3562+from cloudinit import version
3563+from cloudinit import distros
3564 from cloudinit.reporting import events
3565+from cloudinit.net.dhcp import EphemeralDHCPv4
3566+from datetime import datetime
3567
3568 LOG = logging.getLogger(__name__)
3569
3570@@ -24,6 +28,10 @@ LOG = logging.getLogger(__name__)
3571 # value is applied if the endpoint can't be found within a lease file
3572 DEFAULT_WIRESERVER_ENDPOINT = "a8:3f:81:10"
3573
3574+BOOT_EVENT_TYPE = 'boot-telemetry'
3575+SYSTEMINFO_EVENT_TYPE = 'system-info'
3576+DIAGNOSTIC_EVENT_TYPE = 'diagnostic'
3577+
3578 azure_ds_reporter = events.ReportEventStack(
3579 name="azure-ds",
3580 description="initialize reporter for azure ds",
3581@@ -40,6 +48,105 @@ def azure_ds_telemetry_reporter(func):
3582 return impl
3583
3584
3585+@azure_ds_telemetry_reporter
3586+def get_boot_telemetry():
3587+ """Report timestamps related to kernel initialization and systemd
3588+ activation of cloud-init"""
3589+ if not distros.uses_systemd():
3590+ raise RuntimeError(
3591+ "distro not using systemd, skipping boot telemetry")
3592+
3593+ LOG.debug("Collecting boot telemetry")
3594+ try:
3595+ kernel_start = float(time.time()) - float(util.uptime())
3596+ except ValueError:
3597+ raise RuntimeError("Failed to determine kernel start timestamp")
3598+
3599+ try:
3600+ out, _ = util.subp(['/bin/systemctl',
3601+ 'show', '-p',
3602+ 'UserspaceTimestampMonotonic'],
3603+ capture=True)
3604+ tsm = None
3605+ if out and '=' in out:
3606+ tsm = out.split("=")[1]
3607+
3608+ if not tsm:
3609+ raise RuntimeError("Failed to parse "
3610+ "UserspaceTimestampMonotonic from systemd")
3611+
3612+ user_start = kernel_start + (float(tsm) / 1000000)
3613+ except util.ProcessExecutionError as e:
3614+ raise RuntimeError("Failed to get UserspaceTimestampMonotonic: %s"
3615+ % e)
3616+ except ValueError as e:
3617+ raise RuntimeError("Failed to parse "
3618+ "UserspaceTimestampMonotonic from systemd: %s"
3619+ % e)
3620+
3621+ try:
3622+ out, _ = util.subp(['/bin/systemctl', 'show',
3623+ 'cloud-init-local', '-p',
3624+ 'InactiveExitTimestampMonotonic'],
3625+ capture=True)
3626+ tsm = None
3627+ if out and '=' in out:
3628+ tsm = out.split("=")[1]
3629+ if not tsm:
3630+ raise RuntimeError("Failed to parse "
3631+ "InactiveExitTimestampMonotonic from systemd")
3632+
3633+ cloudinit_activation = kernel_start + (float(tsm) / 1000000)
3634+ except util.ProcessExecutionError as e:
3635+ raise RuntimeError("Failed to get InactiveExitTimestampMonotonic: %s"
3636+ % e)
3637+ except ValueError as e:
3638+ raise RuntimeError("Failed to parse "
3639+ "InactiveExitTimestampMonotonic from systemd: %s"
3640+ % e)
3641+
3642+ evt = events.ReportingEvent(
3643+ BOOT_EVENT_TYPE, 'boot-telemetry',
3644+ "kernel_start=%s user_start=%s cloudinit_activation=%s" %
3645+ (datetime.utcfromtimestamp(kernel_start).isoformat() + 'Z',
3646+ datetime.utcfromtimestamp(user_start).isoformat() + 'Z',
3647+ datetime.utcfromtimestamp(cloudinit_activation).isoformat() + 'Z'),
3648+ events.DEFAULT_EVENT_ORIGIN)
3649+ events.report_event(evt)
3650+
3651+    # return the event for unit testing purposes
3652+ return evt
3653+
3654+
3655+@azure_ds_telemetry_reporter
3656+def get_system_info():
3657+ """Collect and report system information"""
3658+ info = util.system_info()
3659+ evt = events.ReportingEvent(
3660+ SYSTEMINFO_EVENT_TYPE, 'system information',
3661+ "cloudinit_version=%s, kernel_version=%s, variant=%s, "
3662+ "distro_name=%s, distro_version=%s, flavor=%s, "
3663+ "python_version=%s" %
3664+ (version.version_string(), info['release'], info['variant'],
3665+ info['dist'][0], info['dist'][1], info['dist'][2],
3666+ info['python']), events.DEFAULT_EVENT_ORIGIN)
3667+ events.report_event(evt)
3668+
3669+    # return the event for unit testing purposes
3670+ return evt
3671+
3672+
3673+def report_diagnostic_event(msg):
3674+    """Report a diagnostic event"""
3675+    evt = events.ReportingEvent(
3676+        DIAGNOSTIC_EVENT_TYPE, 'diagnostic message',
3677+        msg, events.DEFAULT_EVENT_ORIGIN)
3678+ events.report_event(evt)
3679+
3680+    # return the event for unit testing purposes
3681+ return evt
3682+
3683+
3684 @contextmanager
3685 def cd(newdir):
3686 prevdir = os.getcwd()
3687@@ -360,16 +467,19 @@ class WALinuxAgentShim(object):
3688 value = dhcp245
3689 LOG.debug("Using Azure Endpoint from dhcp options")
3690 if value is None:
3691+ report_diagnostic_event("No Azure endpoint from dhcp options")
3692 LOG.debug('Finding Azure endpoint from networkd...')
3693 value = WALinuxAgentShim._networkd_get_value_from_leases()
3694 if value is None:
3695 # Option-245 stored in /run/cloud-init/dhclient.hooks/<ifc>.json
3696 # a dhclient exit hook that calls cloud-init-dhclient-hook
3697+ report_diagnostic_event("No Azure endpoint from networkd")
3698 LOG.debug('Finding Azure endpoint from hook json...')
3699 dhcp_options = WALinuxAgentShim._load_dhclient_json()
3700 value = WALinuxAgentShim._get_value_from_dhcpoptions(dhcp_options)
3701 if value is None:
3702 # Fallback and check the leases file if unsuccessful
3703+ report_diagnostic_event("No Azure endpoint from dhclient logs")
3704 LOG.debug("Unable to find endpoint in dhclient logs. "
3705 " Falling back to check lease files")
3706 if fallback_lease_file is None:
3707@@ -381,11 +491,15 @@ class WALinuxAgentShim(object):
3708 value = WALinuxAgentShim._get_value_from_leases_file(
3709 fallback_lease_file)
3710 if value is None:
3711- LOG.warning("No lease found; using default endpoint")
3712+ msg = "No lease found; using default endpoint"
3713+ report_diagnostic_event(msg)
3714+ LOG.warning(msg)
3715 value = DEFAULT_WIRESERVER_ENDPOINT
3716
3717 endpoint_ip_address = WALinuxAgentShim.get_ip_from_lease_value(value)
3718- LOG.debug('Azure endpoint found at %s', endpoint_ip_address)
3719+ msg = 'Azure endpoint found at %s' % endpoint_ip_address
3720+ report_diagnostic_event(msg)
3721+ LOG.debug(msg)
3722 return endpoint_ip_address
3723
3724 @azure_ds_telemetry_reporter
3725@@ -399,16 +513,19 @@ class WALinuxAgentShim(object):
3726 try:
3727 response = http_client.get(
3728 'http://{0}/machine/?comp=goalstate'.format(self.endpoint))
3729- except Exception:
3730+ except Exception as e:
3731 if attempts < 10:
3732 time.sleep(attempts + 1)
3733 else:
3734+ report_diagnostic_event(
3735+ "failed to register with Azure: %s" % e)
3736 raise
3737 else:
3738 break
3739 attempts += 1
3740 LOG.debug('Successfully fetched GoalState XML.')
3741 goal_state = GoalState(response.contents, http_client)
3742+ report_diagnostic_event("container_id %s" % goal_state.container_id)
3743 ssh_keys = []
3744 if goal_state.certificates_xml is not None and pubkey_info is not None:
3745 LOG.debug('Certificate XML found; parsing out public keys.')
3746@@ -449,11 +566,20 @@ class WALinuxAgentShim(object):
3747 container_id=goal_state.container_id,
3748 instance_id=goal_state.instance_id,
3749 )
3750- http_client.post(
3751- "http://{0}/machine?comp=health".format(self.endpoint),
3752- data=document,
3753- extra_headers={'Content-Type': 'text/xml; charset=utf-8'},
3754- )
3755+ # Host will collect kvps when cloud-init reports ready.
3756+        # Some kvps might still be in the queue. We yield the scheduler
3757+        # to make sure we process all kvps up to this point.
3758+ time.sleep(0)
3759+ try:
3760+ http_client.post(
3761+ "http://{0}/machine?comp=health".format(self.endpoint),
3762+ data=document,
3763+ extra_headers={'Content-Type': 'text/xml; charset=utf-8'},
3764+ )
3765+ except Exception as e:
3766+ report_diagnostic_event("exception while reporting ready: %s" % e)
3767+ raise
3768+
3769 LOG.info('Reported ready to Azure fabric.')
3770
3771
3772@@ -467,4 +593,22 @@ def get_metadata_from_fabric(fallback_lease_file=None, dhcp_opts=None,
3773 finally:
3774 shim.clean_up()
3775
3776+
3777+class EphemeralDHCPv4WithReporting(object):
3778+ def __init__(self, reporter, nic=None):
3779+ self.reporter = reporter
3780+ self.ephemeralDHCPv4 = EphemeralDHCPv4(iface=nic)
3781+
3782+ def __enter__(self):
3783+ with events.ReportEventStack(
3784+ name="obtain-dhcp-lease",
3785+ description="obtain dhcp lease",
3786+ parent=self.reporter):
3787+ return self.ephemeralDHCPv4.__enter__()
3788+
3789+ def __exit__(self, excp_type, excp_value, excp_traceback):
3790+ self.ephemeralDHCPv4.__exit__(
3791+ excp_type, excp_value, excp_traceback)
3792+
3793+
3794 # vi: ts=4 expandtab
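The new `get_boot_telemetry` helper converts systemd's monotonic microsecond timestamps into wall-clock times by offsetting them from the kernel start time. A small sketch of that arithmetic (the parser name and the sample values are illustrative, not from the diff):

```python
def parse_monotonic_usec(systemctl_output):
    """Extract the numeric value from `systemctl show -p` output,
    e.g. 'UserspaceTimestampMonotonic=2378683'."""
    if not systemctl_output or '=' not in systemctl_output:
        raise RuntimeError('unexpected systemctl output')
    return float(systemctl_output.split('=')[1])


# kernel_start would really be time.time() - util.uptime(); a fixed
# sample value keeps the sketch deterministic.
kernel_start = 1560000000.0
tsm = parse_monotonic_usec('UserspaceTimestampMonotonic=2378683\n')
user_start = kernel_start + tsm / 1000000  # monotonic usec -> seconds
```

The same conversion is applied to `InactiveExitTimestampMonotonic` of `cloud-init-local` to compute `cloudinit_activation` before all three timestamps are reported as one telemetry event.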
3795diff --git a/cloudinit/sources/helpers/vmware/imc/config_custom_script.py b/cloudinit/sources/helpers/vmware/imc/config_custom_script.py
3796index a7d4ad9..9f14770 100644
3797--- a/cloudinit/sources/helpers/vmware/imc/config_custom_script.py
3798+++ b/cloudinit/sources/helpers/vmware/imc/config_custom_script.py
3799@@ -1,5 +1,5 @@
3800 # Copyright (C) 2017 Canonical Ltd.
3801-# Copyright (C) 2017 VMware Inc.
3802+# Copyright (C) 2017-2019 VMware Inc.
3803 #
3804 # Author: Maitreyee Saikia <msaikia@vmware.com>
3805 #
3806@@ -8,7 +8,6 @@
3807 import logging
3808 import os
3809 import stat
3810-from textwrap import dedent
3811
3812 from cloudinit import util
3813
3814@@ -20,12 +19,15 @@ class CustomScriptNotFound(Exception):
3815
3816
3817 class CustomScriptConstant(object):
3818- RC_LOCAL = "/etc/rc.local"
3819- POST_CUST_TMP_DIR = "/root/.customization"
3820- POST_CUST_RUN_SCRIPT_NAME = "post-customize-guest.sh"
3821- POST_CUST_RUN_SCRIPT = os.path.join(POST_CUST_TMP_DIR,
3822- POST_CUST_RUN_SCRIPT_NAME)
3823- POST_REBOOT_PENDING_MARKER = "/.guest-customization-post-reboot-pending"
3824+ CUSTOM_TMP_DIR = "/root/.customization"
3825+
3826+ # The user defined custom script
3827+ CUSTOM_SCRIPT_NAME = "customize.sh"
3828+ CUSTOM_SCRIPT = os.path.join(CUSTOM_TMP_DIR,
3829+ CUSTOM_SCRIPT_NAME)
3830+ POST_CUSTOM_PENDING_MARKER = "/.guest-customization-post-reboot-pending"
3831+ # The cc_scripts_per_instance script to launch custom script
3832+ POST_CUSTOM_SCRIPT_NAME = "post-customize-guest.sh"
3833
3834
3835 class RunCustomScript(object):
3836@@ -39,10 +41,19 @@ class RunCustomScript(object):
3837 raise CustomScriptNotFound("Script %s not found!! "
3838 "Cannot execute custom script!"
3839 % self.scriptpath)
3840+
3841+ util.ensure_dir(CustomScriptConstant.CUSTOM_TMP_DIR)
3842+
3843+ LOG.debug("Copying custom script to %s",
3844+ CustomScriptConstant.CUSTOM_SCRIPT)
3845+ util.copy(self.scriptpath, CustomScriptConstant.CUSTOM_SCRIPT)
3846+
3847 # Strip any CR characters from the decoded script
3848- util.load_file(self.scriptpath).replace("\r", "")
3849- st = os.stat(self.scriptpath)
3850- os.chmod(self.scriptpath, st.st_mode | stat.S_IEXEC)
3851+ content = util.load_file(
3852+ CustomScriptConstant.CUSTOM_SCRIPT).replace("\r", "")
3853+ util.write_file(CustomScriptConstant.CUSTOM_SCRIPT,
3854+ content,
3855+ mode=0o544)
3856
3857
3858 class PreCustomScript(RunCustomScript):
3859@@ -50,104 +61,34 @@ class PreCustomScript(RunCustomScript):
3860 """Executing custom script with precustomization argument."""
3861 LOG.debug("Executing pre-customization script")
3862 self.prepare_script()
3863- util.subp(["/bin/sh", self.scriptpath, "precustomization"])
3864+ util.subp([CustomScriptConstant.CUSTOM_SCRIPT, "precustomization"])
3865
3866
3867 class PostCustomScript(RunCustomScript):
3868- def __init__(self, scriptname, directory):
3869+ def __init__(self, scriptname, directory, ccScriptsDir):
3870 super(PostCustomScript, self).__init__(scriptname, directory)
3871- # Determine when to run custom script. When postreboot is True,
3872- # the user uploaded script will run as part of rc.local after
3873- # the machine reboots. This is determined by presence of rclocal.
3874- # When postreboot is False, script will run as part of cloud-init.
3875- self.postreboot = False
3876-
3877- def _install_post_reboot_agent(self, rclocal):
3878- """
3879- Install post-reboot agent for running custom script after reboot.
3880- As part of this process, we are editing the rclocal file to run a
3881- VMware script, which in turn is resposible for handling the user
3882- script.
3883- @param: path to rc local.
3884- """
3885- LOG.debug("Installing post-reboot customization from %s to %s",
3886- self.directory, rclocal)
3887- if not self.has_previous_agent(rclocal):
3888- LOG.info("Adding post-reboot customization agent to rc.local")
3889- new_content = dedent("""
3890- # Run post-reboot guest customization
3891- /bin/sh %s
3892- exit 0
3893- """) % CustomScriptConstant.POST_CUST_RUN_SCRIPT
3894- existing_rclocal = util.load_file(rclocal).replace('exit 0\n', '')
3895- st = os.stat(rclocal)
3896- # "x" flag should be set
3897- mode = st.st_mode | stat.S_IEXEC
3898- util.write_file(rclocal, existing_rclocal + new_content, mode)
3899-
3900- else:
3901- # We don't need to update rclocal file everytime a customization
3902- # is requested. It just needs to be done for the first time.
3903- LOG.info("Post-reboot guest customization agent is already "
3904- "registered in rc.local")
3905- LOG.debug("Installing post-reboot customization agent finished: %s",
3906- self.postreboot)
3907-
3908- def has_previous_agent(self, rclocal):
3909- searchstring = "# Run post-reboot guest customization"
3910- if searchstring in open(rclocal).read():
3911- return True
3912- return False
3913-
3914- def find_rc_local(self):
3915- """
3916- Determine if rc local is present.
3917- """
3918- rclocal = ""
3919- if os.path.exists(CustomScriptConstant.RC_LOCAL):
3920- LOG.debug("rc.local detected.")
3921- # resolving in case of symlink
3922- rclocal = os.path.realpath(CustomScriptConstant.RC_LOCAL)
3923- LOG.debug("rc.local resolved to %s", rclocal)
3924- else:
3925- LOG.warning("Can't find rc.local, post-customization "
3926- "will be run before reboot")
3927- return rclocal
3928-
3929- def install_agent(self):
3930- rclocal = self.find_rc_local()
3931- if rclocal:
3932- self._install_post_reboot_agent(rclocal)
3933- self.postreboot = True
3934+ self.ccScriptsDir = ccScriptsDir
3935+ self.ccScriptPath = os.path.join(
3936+ ccScriptsDir,
3937+ CustomScriptConstant.POST_CUSTOM_SCRIPT_NAME)
3938
3939 def execute(self):
3940 """
3941- This method executes post-customization script before or after reboot
3942- based on the presence of rc local.
3943+        This method copies the post-customization run script to the
3944+        cc_scripts_per_instance directory, which then runs the
3945+        post-customization script.
3946 """
3947 self.prepare_script()
3948- self.install_agent()
3949- if not self.postreboot:
3950- LOG.warning("Executing post-customization script inline")
3951- util.subp(["/bin/sh", self.scriptpath, "postcustomization"])
3952- else:
3953- LOG.debug("Scheduling custom script to run post reboot")
3954- if not os.path.isdir(CustomScriptConstant.POST_CUST_TMP_DIR):
3955- os.mkdir(CustomScriptConstant.POST_CUST_TMP_DIR)
3956- # Script "post-customize-guest.sh" and user uploaded script are
3957- # are present in the same directory and needs to copied to a temp
3958- # directory to be executed post reboot. User uploaded script is
3959- # saved as customize.sh in the temp directory.
3960- # post-customize-guest.sh excutes customize.sh after reboot.
3961- LOG.debug("Copying post-customization script")
3962- util.copy(self.scriptpath,
3963- CustomScriptConstant.POST_CUST_TMP_DIR + "/customize.sh")
3964- LOG.debug("Copying script to run post-customization script")
3965- util.copy(
3966- os.path.join(self.directory,
3967- CustomScriptConstant.POST_CUST_RUN_SCRIPT_NAME),
3968- CustomScriptConstant.POST_CUST_RUN_SCRIPT)
3969- LOG.info("Creating post-reboot pending marker")
3970- util.ensure_file(CustomScriptConstant.POST_REBOOT_PENDING_MARKER)
3971+
3972+ LOG.debug("Copying post customize run script to %s",
3973+ self.ccScriptPath)
3974+ util.copy(
3975+ os.path.join(self.directory,
3976+ CustomScriptConstant.POST_CUSTOM_SCRIPT_NAME),
3977+ self.ccScriptPath)
3978+ st = os.stat(self.ccScriptPath)
3979+ os.chmod(self.ccScriptPath, st.st_mode | stat.S_IEXEC)
3980+ LOG.info("Creating post customization pending marker")
3981+ util.ensure_file(CustomScriptConstant.POST_CUSTOM_PENDING_MARKER)
3982
3983 # vi: ts=4 expandtab
3984diff --git a/cloudinit/sources/tests/test_oracle.py b/cloudinit/sources/tests/test_oracle.py
3985index 97d6294..3ddf7df 100644
3986--- a/cloudinit/sources/tests/test_oracle.py
3987+++ b/cloudinit/sources/tests/test_oracle.py
3988@@ -1,7 +1,7 @@
3989 # This file is part of cloud-init. See LICENSE file for license information.
3990
3991 from cloudinit.sources import DataSourceOracle as oracle
3992-from cloudinit.sources import BrokenMetadata
3993+from cloudinit.sources import BrokenMetadata, NetworkConfigSource
3994 from cloudinit import helpers
3995
3996 from cloudinit.tests import helpers as test_helpers
3997@@ -18,10 +18,52 @@ import uuid
3998 DS_PATH = "cloudinit.sources.DataSourceOracle"
3999 MD_VER = "2013-10-17"
4000
4001+# `curl -L http://169.254.169.254/opc/v1/vnics/` on an Oracle Bare Metal Machine
4002+# with a secondary VNIC attached (vnicId truncated for Python line length)
4003+OPC_BM_SECONDARY_VNIC_RESPONSE = """\
4004+[ {
4005+ "vnicId" : "ocid1.vnic.oc1.phx.abyhqljtyvcucqkhdqmgjszebxe4hrb!!TRUNCATED||",
4006+ "privateIp" : "10.0.0.8",
4007+ "vlanTag" : 0,
4008+ "macAddr" : "90:e2:ba:d4:f1:68",
4009+ "virtualRouterIp" : "10.0.0.1",
4010+ "subnetCidrBlock" : "10.0.0.0/24",
4011+ "nicIndex" : 0
4012+}, {
4013+ "vnicId" : "ocid1.vnic.oc1.phx.abyhqljtfmkxjdy2sqidndiwrsg63zf!!TRUNCATED||",
4014+ "privateIp" : "10.0.4.5",
4015+ "vlanTag" : 1,
4016+ "macAddr" : "02:00:17:05:CF:51",
4017+ "virtualRouterIp" : "10.0.4.1",
4018+ "subnetCidrBlock" : "10.0.4.0/24",
4019+ "nicIndex" : 0
4020+} ]"""
4021+
4022+# `curl -L http://169.254.169.254/opc/v1/vnics/` on an Oracle Virtual Machine
4023+# with a secondary VNIC attached
4024+OPC_VM_SECONDARY_VNIC_RESPONSE = """\
4025+[ {
4026+ "vnicId" : "ocid1.vnic.oc1.phx.abyhqljtch72z5pd76cc2636qeqh7z_truncated",
4027+ "privateIp" : "10.0.0.230",
4028+ "vlanTag" : 1039,
4029+ "macAddr" : "02:00:17:05:D1:DB",
4030+ "virtualRouterIp" : "10.0.0.1",
4031+ "subnetCidrBlock" : "10.0.0.0/24"
4032+}, {
4033+ "vnicId" : "ocid1.vnic.oc1.phx.abyhqljt4iew3gwmvrwrhhf3bp5drj_truncated",
4034+ "privateIp" : "10.0.0.231",
4035+ "vlanTag" : 1041,
4036+ "macAddr" : "00:00:17:02:2B:B1",
4037+ "virtualRouterIp" : "10.0.0.1",
4038+ "subnetCidrBlock" : "10.0.0.0/24"
4039+} ]"""
4040+
4041
4042 class TestDataSourceOracle(test_helpers.CiTestCase):
4043 """Test datasource DataSourceOracle."""
4044
4045+ with_logs = True
4046+
4047 ds_class = oracle.DataSourceOracle
4048
4049 my_uuid = str(uuid.uuid4())
4050@@ -79,6 +121,16 @@ class TestDataSourceOracle(test_helpers.CiTestCase):
4051 self.assertEqual(
4052 'metadata (http://169.254.169.254/openstack/)', ds.subplatform)
4053
4054+ def test_sys_cfg_can_enable_configure_secondary_nics(self):
4055+ # Confirm that behaviour is toggled by sys_cfg
4056+ ds, _mocks = self._get_ds()
4057+ self.assertFalse(ds.ds_cfg['configure_secondary_nics'])
4058+
4059+ sys_cfg = {
4060+ 'datasource': {'Oracle': {'configure_secondary_nics': True}}}
4061+ ds, _mocks = self._get_ds(sys_cfg=sys_cfg)
4062+ self.assertTrue(ds.ds_cfg['configure_secondary_nics'])
4063+
4064 @mock.patch(DS_PATH + "._is_iscsi_root", return_value=True)
4065 def test_without_userdata(self, m_is_iscsi_root):
4066 """If no user-data is provided, it should not be in return dict."""
4067@@ -133,9 +185,12 @@ class TestDataSourceOracle(test_helpers.CiTestCase):
4068 self.assertEqual(self.my_md['uuid'], ds.get_instance_id())
4069 self.assertEqual(my_userdata, ds.userdata_raw)
4070
4071- @mock.patch(DS_PATH + ".cmdline.read_kernel_cmdline_config")
4072+ @mock.patch(DS_PATH + "._add_network_config_from_opc_imds",
4073+ side_effect=lambda network_config: network_config)
4074+ @mock.patch(DS_PATH + ".cmdline.read_initramfs_config")
4075 @mock.patch(DS_PATH + "._is_iscsi_root", return_value=True)
4076- def test_network_cmdline(self, m_is_iscsi_root, m_cmdline_config):
4077+ def test_network_cmdline(self, m_is_iscsi_root, m_initramfs_config,
4078+ _m_add_network_config_from_opc_imds):
4079 """network_config should read kernel cmdline."""
4080 distro = mock.MagicMock()
4081 ds, _ = self._get_ds(distro=distro, patches={
4082@@ -145,15 +200,18 @@ class TestDataSourceOracle(test_helpers.CiTestCase):
4083 MD_VER: {'system_uuid': self.my_uuid,
4084 'meta_data': self.my_md}}}})
4085 ncfg = {'version': 1, 'config': [{'a': 'b'}]}
4086- m_cmdline_config.return_value = ncfg
4087+ m_initramfs_config.return_value = ncfg
4088 self.assertTrue(ds._get_data())
4089 self.assertEqual(ncfg, ds.network_config)
4090- m_cmdline_config.assert_called_once_with()
4091+ self.assertEqual([mock.call()], m_initramfs_config.call_args_list)
4092 self.assertFalse(distro.generate_fallback_config.called)
4093
4094- @mock.patch(DS_PATH + ".cmdline.read_kernel_cmdline_config")
4095+ @mock.patch(DS_PATH + "._add_network_config_from_opc_imds",
4096+ side_effect=lambda network_config: network_config)
4097+ @mock.patch(DS_PATH + ".cmdline.read_initramfs_config")
4098 @mock.patch(DS_PATH + "._is_iscsi_root", return_value=True)
4099- def test_network_fallback(self, m_is_iscsi_root, m_cmdline_config):
4100+ def test_network_fallback(self, m_is_iscsi_root, m_initramfs_config,
4101+ _m_add_network_config_from_opc_imds):
4102 """test that fallback network is generated if no kernel cmdline."""
4103 distro = mock.MagicMock()
4104 ds, _ = self._get_ds(distro=distro, patches={
4105@@ -163,18 +221,95 @@ class TestDataSourceOracle(test_helpers.CiTestCase):
4106 MD_VER: {'system_uuid': self.my_uuid,
4107 'meta_data': self.my_md}}}})
4108 ncfg = {'version': 1, 'config': [{'a': 'b'}]}
4109- m_cmdline_config.return_value = None
4110+ m_initramfs_config.return_value = None
4111 self.assertTrue(ds._get_data())
4112 ncfg = {'version': 1, 'config': [{'distro1': 'value'}]}
4113 distro.generate_fallback_config.return_value = ncfg
4114 self.assertEqual(ncfg, ds.network_config)
4115- m_cmdline_config.assert_called_once_with()
4116+ self.assertEqual([mock.call()], m_initramfs_config.call_args_list)
4117 distro.generate_fallback_config.assert_called_once_with()
4118- self.assertEqual(1, m_cmdline_config.call_count)
4119
4120 # test that the result got cached, and the methods not re-called.
4121 self.assertEqual(ncfg, ds.network_config)
4122- self.assertEqual(1, m_cmdline_config.call_count)
4123+ self.assertEqual(1, m_initramfs_config.call_count)
4124+
4125+ @mock.patch(DS_PATH + "._add_network_config_from_opc_imds")
4126+ @mock.patch(DS_PATH + ".cmdline.read_initramfs_config",
4127+ return_value={'some': 'config'})
4128+ @mock.patch(DS_PATH + "._is_iscsi_root", return_value=True)
4129+ def test_secondary_nics_added_to_network_config_if_enabled(
4130+ self, _m_is_iscsi_root, _m_initramfs_config,
4131+ m_add_network_config_from_opc_imds):
4132+
4133+ needle = object()
4134+
4135+ def network_config_side_effect(network_config):
4136+ network_config['secondary_added'] = needle
4137+
4138+ m_add_network_config_from_opc_imds.side_effect = (
4139+ network_config_side_effect)
4140+
4141+ distro = mock.MagicMock()
4142+ ds, _ = self._get_ds(distro=distro, patches={
4143+ '_is_platform_viable': {'return_value': True},
4144+ 'crawl_metadata': {
4145+ 'return_value': {
4146+ MD_VER: {'system_uuid': self.my_uuid,
4147+ 'meta_data': self.my_md}}}})
4148+ ds.ds_cfg['configure_secondary_nics'] = True
4149+ self.assertEqual(needle, ds.network_config['secondary_added'])
4150+
4151+ @mock.patch(DS_PATH + "._add_network_config_from_opc_imds")
4152+ @mock.patch(DS_PATH + ".cmdline.read_initramfs_config",
4153+ return_value={'some': 'config'})
4154+ @mock.patch(DS_PATH + "._is_iscsi_root", return_value=True)
4155+ def test_secondary_nics_not_added_to_network_config_by_default(
4156+ self, _m_is_iscsi_root, _m_initramfs_config,
4157+ m_add_network_config_from_opc_imds):
4158+
4159+ def network_config_side_effect(network_config):
4160+ network_config['secondary_added'] = True
4161+
4162+ m_add_network_config_from_opc_imds.side_effect = (
4163+ network_config_side_effect)
4164+
4165+ distro = mock.MagicMock()
4166+ ds, _ = self._get_ds(distro=distro, patches={
4167+ '_is_platform_viable': {'return_value': True},
4168+ 'crawl_metadata': {
4169+ 'return_value': {
4170+ MD_VER: {'system_uuid': self.my_uuid,
4171+ 'meta_data': self.my_md}}}})
4172+ self.assertNotIn('secondary_added', ds.network_config)
4173+
4174+ @mock.patch(DS_PATH + "._add_network_config_from_opc_imds")
4175+ @mock.patch(DS_PATH + ".cmdline.read_initramfs_config")
4176+ @mock.patch(DS_PATH + "._is_iscsi_root", return_value=True)
4177+ def test_secondary_nic_failure_isnt_blocking(
4178+ self, _m_is_iscsi_root, m_initramfs_config,
4179+ m_add_network_config_from_opc_imds):
4180+
4181+ m_add_network_config_from_opc_imds.side_effect = Exception()
4182+
4183+ distro = mock.MagicMock()
4184+ ds, _ = self._get_ds(distro=distro, patches={
4185+ '_is_platform_viable': {'return_value': True},
4186+ 'crawl_metadata': {
4187+ 'return_value': {
4188+ MD_VER: {'system_uuid': self.my_uuid,
4189+ 'meta_data': self.my_md}}}})
4190+ ds.ds_cfg['configure_secondary_nics'] = True
4191+ self.assertEqual(ds.network_config, m_initramfs_config.return_value)
4192+ self.assertIn('Failed to fetch secondary network configuration',
4193+ self.logs.getvalue())
4194+
4195+ def test_ds_network_cfg_preferred_over_initramfs(self):
4196+ """Ensure that DS net config is preferred over initramfs config"""
4197+ network_config_sources = oracle.DataSourceOracle.network_config_sources
4198+ self.assertLess(
4199+ network_config_sources.index(NetworkConfigSource.ds),
4200+ network_config_sources.index(NetworkConfigSource.initramfs)
4201+ )
4202
4203
4204 @mock.patch(DS_PATH + "._read_system_uuid", return_value=str(uuid.uuid4()))
4205@@ -336,4 +471,86 @@ class TestLoadIndex(test_helpers.CiTestCase):
4206 oracle._load_index("\n".join(["meta_data.json", "user_data"])))
4207
4208
4209+class TestNetworkConfigFromOpcImds(test_helpers.CiTestCase):
4210+
4211+ with_logs = True
4212+
4213+ def setUp(self):
4214+ super(TestNetworkConfigFromOpcImds, self).setUp()
4215+ self.add_patch(DS_PATH + '.readurl', 'm_readurl')
4216+ self.add_patch(DS_PATH + '.get_interfaces_by_mac',
4217+ 'm_get_interfaces_by_mac')
4218+
4219+ def test_failure_to_readurl(self):
4220+ # readurl failures should just bubble out to the caller
4221+ self.m_readurl.side_effect = Exception('oh no')
4222+ with self.assertRaises(Exception) as excinfo:
4223+ oracle._add_network_config_from_opc_imds({})
4224+ self.assertEqual(str(excinfo.exception), 'oh no')
4225+
4226+ def test_empty_response(self):
4227+ # empty response error should just bubble out to the caller
4228+ self.m_readurl.return_value = ''
4229+ with self.assertRaises(Exception):
4230+ oracle._add_network_config_from_opc_imds([])
4231+
4232+ def test_invalid_json(self):
4233+ # invalid JSON error should just bubble out to the caller
4234+ self.m_readurl.return_value = '{'
4235+ with self.assertRaises(Exception):
4236+ oracle._add_network_config_from_opc_imds([])
4237+
4238+ def test_no_secondary_nics_does_not_mutate_input(self):
4239+ self.m_readurl.return_value = json.dumps([{}])
4240+ # We test this by passing in a non-dict to ensure that no dict
4241+ # operations are used; failure would be seen as exceptions
4242+ oracle._add_network_config_from_opc_imds(object())
4243+
4244+ def test_bare_metal_machine_skipped(self):
4245+ # nicIndex in the first entry indicates a bare metal machine
4246+ self.m_readurl.return_value = OPC_BM_SECONDARY_VNIC_RESPONSE
4247+ # We test this by passing in a non-dict to ensure that no dict
4248+ # operations are used
4249+ self.assertFalse(oracle._add_network_config_from_opc_imds(object()))
4250+ self.assertIn('bare metal machine', self.logs.getvalue())
4251+
4252+ def test_missing_mac_skipped(self):
4253+ self.m_readurl.return_value = OPC_VM_SECONDARY_VNIC_RESPONSE
4254+ self.m_get_interfaces_by_mac.return_value = {}
4255+
4256+ network_config = {'version': 1, 'config': [{'primary': 'nic'}]}
4257+ oracle._add_network_config_from_opc_imds(network_config)
4258+
4259+ self.assertEqual(1, len(network_config['config']))
4260+ self.assertIn(
4261+ 'Interface with MAC 00:00:17:02:2b:b1 not found; skipping',
4262+ self.logs.getvalue())
4263+
4264+ def test_secondary_nic(self):
4265+ self.m_readurl.return_value = OPC_VM_SECONDARY_VNIC_RESPONSE
4266+ mac_addr, nic_name = '00:00:17:02:2b:b1', 'ens3'
4267+ self.m_get_interfaces_by_mac.return_value = {
4268+ mac_addr: nic_name,
4269+ }
4270+
4271+ network_config = {'version': 1, 'config': [{'primary': 'nic'}]}
4272+ oracle._add_network_config_from_opc_imds(network_config)
4273+
4274+ # The input is mutated
4275+ self.assertEqual(2, len(network_config['config']))
4276+
4277+ secondary_nic_cfg = network_config['config'][1]
4278+ self.assertEqual(nic_name, secondary_nic_cfg['name'])
4279+ self.assertEqual('physical', secondary_nic_cfg['type'])
4280+ self.assertEqual(mac_addr, secondary_nic_cfg['mac_address'])
4281+ self.assertEqual(9000, secondary_nic_cfg['mtu'])
4282+
4283+ self.assertEqual(1, len(secondary_nic_cfg['subnets']))
4284+ subnet_cfg = secondary_nic_cfg['subnets'][0]
4285+ # These values are hard-coded in OPC_VM_SECONDARY_VNIC_RESPONSE
4286+ self.assertEqual('10.0.0.231', subnet_cfg['address'])
4287+ self.assertEqual('24', subnet_cfg['netmask'])
4288+ self.assertEqual('10.0.0.1', subnet_cfg['gateway'])
4289+ self.assertEqual('manual', subnet_cfg['control'])
4290+
4291 # vi: ts=4 expandtab
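The `test_secondary_nic` assertions above pin down how one IMDS VNIC record is turned into a v1 network-config entry: a lowercased MAC, a hard-coded 9000 MTU, a netmask taken from the CIDR prefix, and `control: manual`. A minimal sketch of that mapping, with illustrative names (this is not cloud-init's actual helper):

```python
# Hypothetical sketch of the VNIC -> network-config mapping the tests assert.
def vnic_to_nic_config(vnic, nic_name):
    """Build a v1 network-config entry from one OPC IMDS VNIC dict."""
    # e.g. "10.0.0.0/24" -> prefix "24", used as the netmask value
    netmask = vnic['subnetCidrBlock'].split('/')[1]
    return {
        'name': nic_name,
        'type': 'physical',
        'mac_address': vnic['macAddr'].lower(),  # IMDS reports uppercase MACs
        'mtu': 9000,  # the tests assert a hard-coded jumbo-frame MTU
        'subnets': [{
            'address': vnic['privateIp'],
            'netmask': netmask,
            'gateway': vnic['virtualRouterIp'],
            'control': 'manual',  # secondary NICs are not auto-configured
        }],
    }

# Values copied from the second entry of OPC_VM_SECONDARY_VNIC_RESPONSE:
vnic = {
    'macAddr': '00:00:17:02:2B:B1',
    'privateIp': '10.0.0.231',
    'subnetCidrBlock': '10.0.0.0/24',
    'virtualRouterIp': '10.0.0.1',
}
cfg = vnic_to_nic_config(vnic, 'ens3')
```

The lowercasing matters: `get_interfaces_by_mac` keys are lowercase, which is why `test_missing_mac_skipped` looks up `00:00:17:02:2b:b1` even though the IMDS response carries `00:00:17:02:2B:B1`.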
4292diff --git a/cloudinit/stages.py b/cloudinit/stages.py
4293index da7d349..5012988 100644
4294--- a/cloudinit/stages.py
4295+++ b/cloudinit/stages.py
4296@@ -24,6 +24,7 @@ from cloudinit.handlers.shell_script import ShellScriptPartHandler
4297 from cloudinit.handlers.upstart_job import UpstartJobPartHandler
4298
4299 from cloudinit.event import EventType
4300+from cloudinit.sources import NetworkConfigSource
4301
4302 from cloudinit import cloud
4303 from cloudinit import config
4304@@ -630,32 +631,54 @@ class Init(object):
4305 if os.path.exists(disable_file):
4306 return (None, disable_file)
4307
4308- cmdline_cfg = ('cmdline', cmdline.read_kernel_cmdline_config())
4309- dscfg = ('ds', None)
4310+ available_cfgs = {
4311+ NetworkConfigSource.cmdline: cmdline.read_kernel_cmdline_config(),
4312+ NetworkConfigSource.initramfs: cmdline.read_initramfs_config(),
4313+ NetworkConfigSource.ds: None,
4314+ NetworkConfigSource.system_cfg: self.cfg.get('network'),
4315+ }
4316+
4317 if self.datasource and hasattr(self.datasource, 'network_config'):
4318- dscfg = ('ds', self.datasource.network_config)
4319- sys_cfg = ('system_cfg', self.cfg.get('network'))
4320+ available_cfgs[NetworkConfigSource.ds] = (
4321+ self.datasource.network_config)
4322
4323- for loc, ncfg in (cmdline_cfg, sys_cfg, dscfg):
4324+ if self.datasource:
4325+ order = self.datasource.network_config_sources
4326+ else:
4327+ order = sources.DataSource.network_config_sources
4328+ for cfg_source in order:
4329+ if not hasattr(NetworkConfigSource, cfg_source):
4330+ LOG.warning('data source specifies an invalid network'
4331+ ' cfg_source: %s', cfg_source)
4332+ continue
4333+ if cfg_source not in available_cfgs:
4334+ LOG.warning('data source specifies an unavailable network'
4335+ ' cfg_source: %s', cfg_source)
4336+ continue
4337+ ncfg = available_cfgs[cfg_source]
4338 if net.is_disabled_cfg(ncfg):
4339- LOG.debug("network config disabled by %s", loc)
4340- return (None, loc)
4341+ LOG.debug("network config disabled by %s", cfg_source)
4342+ return (None, cfg_source)
4343 if ncfg:
4344- return (ncfg, loc)
4345- return (self.distro.generate_fallback_config(), "fallback")
4346-
4347- def apply_network_config(self, bring_up):
4348- netcfg, src = self._find_networking_config()
4349- if netcfg is None:
4350- LOG.info("network config is disabled by %s", src)
4351- return
4352+ return (ncfg, cfg_source)
4353+ return (self.distro.generate_fallback_config(),
4354+ NetworkConfigSource.fallback)
4355
4356+ def _apply_netcfg_names(self, netcfg):
4357 try:
4358 LOG.debug("applying net config names for %s", netcfg)
4359 self.distro.apply_network_config_names(netcfg)
4360 except Exception as e:
4361 LOG.warning("Failed to rename devices: %s", e)
4362
4363+ def apply_network_config(self, bring_up):
4364+ # get a network config
4365+ netcfg, src = self._find_networking_config()
4366+ if netcfg is None:
4367+ LOG.info("network config is disabled by %s", src)
4368+ return
4369+
4370+ # request an update if needed/available
4371 if self.datasource is not NULL_DATA_SOURCE:
4372 if not self.is_new_instance():
4373 if not self.datasource.update_metadata([EventType.BOOT]):
4374@@ -663,8 +686,20 @@ class Init(object):
4375 "No network config applied. Neither a new instance"
4376 " nor datasource network update on '%s' event",
4377 EventType.BOOT)
4378+ # nothing new, but ensure proper names
4379+ self._apply_netcfg_names(netcfg)
4380 return
4381+ else:
4382+ # refresh netcfg after update
4383+ netcfg, src = self._find_networking_config()
4384+
4385+ # ensure all physical devices in config are present
4386+ net.wait_for_physdevs(netcfg)
4387+
4388+ # apply renames from config
4389+ self._apply_netcfg_names(netcfg)
4390
4391+ # rendering config
4392 LOG.info("Applying network configuration from %s bringup=%s: %s",
4393 src, bring_up, netcfg)
4394 try:
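The reworked `_find_networking_config` above walks the datasource-defined source order, returning the first source that either disables networking or provides a config, and falling back only when none do. A minimal standalone sketch of that selection loop (names are illustrative, not cloud-init's API):

```python
# Sketch of the ordered-source lookup added to _find_networking_config.
def find_net_config(order, available_cfgs, fallback_cfg):
    """Return (config, source): first source that disables or supplies one."""
    for source in order:
        ncfg = available_cfgs.get(source)
        # stand-in for net.is_disabled_cfg(ncfg)
        if isinstance(ncfg, dict) and ncfg.get('config') == 'disabled':
            return (None, source)  # explicitly disabled by this source
        if ncfg:
            return (ncfg, source)
    return (fallback_cfg, 'fallback')

# A datasource (like Oracle above) that prefers its own config over initramfs:
order = ['ds', 'system_cfg', 'cmdline', 'initramfs']
available = {
    'initramfs': {'config': 'disabled'},
    'ds': {'config': [{'type': 'physical', 'name': 'eth0'}]},
}
cfg, src = find_net_config(order, available, {})
```

Because the loop honors `order`, moving `ds` ahead of `initramfs` is all the Oracle datasource needs to win, which is exactly what `test_ds_network_cfg_preferred_over_initramfs` verifies via index comparison.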
4395diff --git a/cloudinit/tests/helpers.py b/cloudinit/tests/helpers.py
4396index f41180f..23fddd0 100644
4397--- a/cloudinit/tests/helpers.py
4398+++ b/cloudinit/tests/helpers.py
4399@@ -198,7 +198,8 @@ class CiTestCase(TestCase):
4400 prefix="ci-%s." % self.__class__.__name__)
4401 else:
4402 tmpd = tempfile.mkdtemp(dir=dir)
4403- self.addCleanup(functools.partial(shutil.rmtree, tmpd))
4404+ self.addCleanup(
4405+ functools.partial(shutil.rmtree, tmpd, ignore_errors=True))
4406 return tmpd
4407
4408 def tmp_path(self, path, dir=None):
4409diff --git a/cloudinit/tests/test_stages.py b/cloudinit/tests/test_stages.py
4410index 94b6b25..d5c9c0e 100644
4411--- a/cloudinit/tests/test_stages.py
4412+++ b/cloudinit/tests/test_stages.py
4413@@ -6,6 +6,7 @@ import os
4414
4415 from cloudinit import stages
4416 from cloudinit import sources
4417+from cloudinit.sources import NetworkConfigSource
4418
4419 from cloudinit.event import EventType
4420 from cloudinit.util import write_file
4421@@ -37,6 +38,7 @@ class FakeDataSource(sources.DataSource):
4422
4423 class TestInit(CiTestCase):
4424 with_logs = True
4425+ allowed_subp = False
4426
4427 def setUp(self):
4428 super(TestInit, self).setUp()
4429@@ -57,84 +59,189 @@ class TestInit(CiTestCase):
4430 (None, disable_file),
4431 self.init._find_networking_config())
4432
4433+ @mock.patch('cloudinit.stages.cmdline.read_initramfs_config')
4434 @mock.patch('cloudinit.stages.cmdline.read_kernel_cmdline_config')
4435- def test_wb__find_networking_config_disabled_by_kernel(self, m_cmdline):
4436+ def test_wb__find_networking_config_disabled_by_kernel(
4437+ self, m_cmdline, m_initramfs):
4438 """find_networking_config returns when disabled by kernel cmdline."""
4439 m_cmdline.return_value = {'config': 'disabled'}
4440+ m_initramfs.return_value = {'config': ['fake_initrd']}
4441 self.assertEqual(
4442- (None, 'cmdline'),
4443+ (None, NetworkConfigSource.cmdline),
4444 self.init._find_networking_config())
4445 self.assertEqual('DEBUG: network config disabled by cmdline\n',
4446 self.logs.getvalue())
4447
4448+ @mock.patch('cloudinit.stages.cmdline.read_initramfs_config')
4449 @mock.patch('cloudinit.stages.cmdline.read_kernel_cmdline_config')
4450- def test_wb__find_networking_config_disabled_by_datasrc(self, m_cmdline):
4451+ def test_wb__find_networking_config_disabled_by_initrd(
4452+ self, m_cmdline, m_initramfs):
4453+ """find_networking_config returns when disabled by initramfs config."""
4454+ m_cmdline.return_value = {}
4455+ m_initramfs.return_value = {'config': 'disabled'}
4456+ self.assertEqual(
4457+ (None, NetworkConfigSource.initramfs),
4458+ self.init._find_networking_config())
4459+ self.assertEqual('DEBUG: network config disabled by initramfs\n',
4460+ self.logs.getvalue())
4461+
4462+ @mock.patch('cloudinit.stages.cmdline.read_initramfs_config')
4463+ @mock.patch('cloudinit.stages.cmdline.read_kernel_cmdline_config')
4464+ def test_wb__find_networking_config_disabled_by_datasrc(
4465+ self, m_cmdline, m_initramfs):
4466 """find_networking_config returns when disabled by datasource cfg."""
4467 m_cmdline.return_value = {} # Kernel doesn't disable networking
4468+ m_initramfs.return_value = {} # initramfs doesn't disable networking
4469 self.init._cfg = {'system_info': {'paths': {'cloud_dir': self.tmpdir}},
4470 'network': {}} # system config doesn't disable
4471
4472 self.init.datasource = FakeDataSource(
4473 network_config={'config': 'disabled'})
4474 self.assertEqual(
4475- (None, 'ds'),
4476+ (None, NetworkConfigSource.ds),
4477 self.init._find_networking_config())
4478 self.assertEqual('DEBUG: network config disabled by ds\n',
4479 self.logs.getvalue())
4480
4481+ @mock.patch('cloudinit.stages.cmdline.read_initramfs_config')
4482 @mock.patch('cloudinit.stages.cmdline.read_kernel_cmdline_config')
4483- def test_wb__find_networking_config_disabled_by_sysconfig(self, m_cmdline):
4484+ def test_wb__find_networking_config_disabled_by_sysconfig(
4485+ self, m_cmdline, m_initramfs):
4486 """find_networking_config returns when disabled by system config."""
4487 m_cmdline.return_value = {} # Kernel doesn't disable networking
4488+ m_initramfs.return_value = {} # initramfs doesn't disable networking
4489 self.init._cfg = {'system_info': {'paths': {'cloud_dir': self.tmpdir}},
4490 'network': {'config': 'disabled'}}
4491 self.assertEqual(
4492- (None, 'system_cfg'),
4493+ (None, NetworkConfigSource.system_cfg),
4494 self.init._find_networking_config())
4495 self.assertEqual('DEBUG: network config disabled by system_cfg\n',
4496 self.logs.getvalue())
4497
4498+ @mock.patch('cloudinit.stages.cmdline.read_initramfs_config')
4499+ @mock.patch('cloudinit.stages.cmdline.read_kernel_cmdline_config')
4500+ def test__find_networking_config_uses_datasrc_order(
4501+ self, m_cmdline, m_initramfs):
4502+ """find_networking_config should check sources in DS defined order"""
4503+ # cmdline and initramfs, which would normally be preferred over other
4504+ # sources, disable networking; in this case, though, the DS moves them
4505+ # later so its own config is preferred
4506+ m_cmdline.return_value = {'config': 'disabled'}
4507+ m_initramfs.return_value = {'config': 'disabled'}
4508+
4509+ ds_net_cfg = {'config': {'needle': True}}
4510+ self.init.datasource = FakeDataSource(network_config=ds_net_cfg)
4511+ self.init.datasource.network_config_sources = [
4512+ NetworkConfigSource.ds, NetworkConfigSource.system_cfg,
4513+ NetworkConfigSource.cmdline, NetworkConfigSource.initramfs]
4514+
4515+ self.assertEqual(
4516+ (ds_net_cfg, NetworkConfigSource.ds),
4517+ self.init._find_networking_config())
4518+
4519+ @mock.patch('cloudinit.stages.cmdline.read_initramfs_config')
4520+ @mock.patch('cloudinit.stages.cmdline.read_kernel_cmdline_config')
4521+ def test__find_networking_config_warns_if_datasrc_uses_invalid_src(
4522+ self, m_cmdline, m_initramfs):
4523+ """find_networking_config warns when DS names an invalid cfg source"""
4524+ ds_net_cfg = {'config': {'needle': True}}
4525+ self.init.datasource = FakeDataSource(network_config=ds_net_cfg)
4526+ self.init.datasource.network_config_sources = [
4527+ 'invalid_src', NetworkConfigSource.ds]
4528+
4529+ self.assertEqual(
4530+ (ds_net_cfg, NetworkConfigSource.ds),
4531+ self.init._find_networking_config())
4532+ self.assertIn('WARNING: data source specifies an invalid network'
4533+ ' cfg_source: invalid_src',
4534+ self.logs.getvalue())
4535+
4536+ @mock.patch('cloudinit.stages.cmdline.read_initramfs_config')
4537 @mock.patch('cloudinit.stages.cmdline.read_kernel_cmdline_config')
4538- def test_wb__find_networking_config_returns_kernel(self, m_cmdline):
4539+ def test__find_networking_config_warns_if_datasrc_uses_unavailable_src(
4540+ self, m_cmdline, m_initramfs):
4541+ """find_networking_config warns when DS names an unavailable cfg source"""
4542+ ds_net_cfg = {'config': {'needle': True}}
4543+ self.init.datasource = FakeDataSource(network_config=ds_net_cfg)
4544+ self.init.datasource.network_config_sources = [
4545+ NetworkConfigSource.fallback, NetworkConfigSource.ds]
4546+
4547+ self.assertEqual(
4548+ (ds_net_cfg, NetworkConfigSource.ds),
4549+ self.init._find_networking_config())
4550+ self.assertIn('WARNING: data source specifies an unavailable network'
4551+ ' cfg_source: fallback',
4552+ self.logs.getvalue())
4553+
4554+ @mock.patch('cloudinit.stages.cmdline.read_initramfs_config')
4555+ @mock.patch('cloudinit.stages.cmdline.read_kernel_cmdline_config')
4556+ def test_wb__find_networking_config_returns_kernel(
4557+ self, m_cmdline, m_initramfs):
4558 """find_networking_config returns kernel cmdline config if present."""
4559 expected_cfg = {'config': ['fakekernel']}
4560 m_cmdline.return_value = expected_cfg
4561+ m_initramfs.return_value = {'config': ['fake_initrd']}
4562 self.init._cfg = {'system_info': {'paths': {'cloud_dir': self.tmpdir}},
4563 'network': {'config': ['fakesys_config']}}
4564 self.init.datasource = FakeDataSource(
4565 network_config={'config': ['fakedatasource']})
4566 self.assertEqual(
4567- (expected_cfg, 'cmdline'),
4568+ (expected_cfg, NetworkConfigSource.cmdline),
4569 self.init._find_networking_config())
4570
4571+ @mock.patch('cloudinit.stages.cmdline.read_initramfs_config')
4572 @mock.patch('cloudinit.stages.cmdline.read_kernel_cmdline_config')
4573- def test_wb__find_networking_config_returns_system_cfg(self, m_cmdline):
4574+ def test_wb__find_networking_config_returns_initramfs(
4575+ self, m_cmdline, m_initramfs):
4576+ """find_networking_config returns initramfs config if present."""
4577+ expected_cfg = {'config': ['fake_initrd']}
4578+ m_cmdline.return_value = {}
4579+ m_initramfs.return_value = expected_cfg
4580+ self.init._cfg = {'system_info': {'paths': {'cloud_dir': self.tmpdir}},
4581+ 'network': {'config': ['fakesys_config']}}
4582+ self.init.datasource = FakeDataSource(
4583+ network_config={'config': ['fakedatasource']})
4584+ self.assertEqual(
4585+ (expected_cfg, NetworkConfigSource.initramfs),
4586+ self.init._find_networking_config())
4587+
4588+ @mock.patch('cloudinit.stages.cmdline.read_initramfs_config')
4589+ @mock.patch('cloudinit.stages.cmdline.read_kernel_cmdline_config')
4590+ def test_wb__find_networking_config_returns_system_cfg(
4591+ self, m_cmdline, m_initramfs):
4592 """find_networking_config returns system config when present."""
4593 m_cmdline.return_value = {} # No kernel network config
4594+ m_initramfs.return_value = {} # no initramfs network config
4595 expected_cfg = {'config': ['fakesys_config']}
4596 self.init._cfg = {'system_info': {'paths': {'cloud_dir': self.tmpdir}},
4597 'network': expected_cfg}
4598 self.init.datasource = FakeDataSource(
4599 network_config={'config': ['fakedatasource']})
4600 self.assertEqual(
4601- (expected_cfg, 'system_cfg'),
4602+ (expected_cfg, NetworkConfigSource.system_cfg),
4603 self.init._find_networking_config())
4604
4605+ @mock.patch('cloudinit.stages.cmdline.read_initramfs_config')
4606 @mock.patch('cloudinit.stages.cmdline.read_kernel_cmdline_config')
4607- def test_wb__find_networking_config_returns_datasrc_cfg(self, m_cmdline):
4608+ def test_wb__find_networking_config_returns_datasrc_cfg(
4609+ self, m_cmdline, m_initramfs):
4610 """find_networking_config returns datasource net config if present."""
4611 m_cmdline.return_value = {} # No kernel network config
4612+ m_initramfs.return_value = {} # no initramfs network config
4613 # No system config for network in setUp
4614 expected_cfg = {'config': ['fakedatasource']}
4615 self.init.datasource = FakeDataSource(network_config=expected_cfg)
4616 self.assertEqual(
4617- (expected_cfg, 'ds'),
4618+ (expected_cfg, NetworkConfigSource.ds),
4619 self.init._find_networking_config())
4620
4621+ @mock.patch('cloudinit.stages.cmdline.read_initramfs_config')
4622 @mock.patch('cloudinit.stages.cmdline.read_kernel_cmdline_config')
4623- def test_wb__find_networking_config_returns_fallback(self, m_cmdline):
4624+ def test_wb__find_networking_config_returns_fallback(
4625+ self, m_cmdline, m_initramfs):
4626 """find_networking_config returns fallback config if not defined."""
4627 m_cmdline.return_value = {} # Kernel doesn't disable networking
4628+ m_initramfs.return_value = {} # no initramfs network config
4629 # Neither datasource nor system_info disable or provide network
4630
4631 fake_cfg = {'config': [{'type': 'physical', 'name': 'eth9'}],
4632@@ -147,7 +254,7 @@ class TestInit(CiTestCase):
4633 distro = self.init.distro
4634 distro.generate_fallback_config = fake_generate_fallback
4635 self.assertEqual(
4636- (fake_cfg, 'fallback'),
4637+ (fake_cfg, NetworkConfigSource.fallback),
4638 self.init._find_networking_config())
4639 self.assertNotIn('network config disabled', self.logs.getvalue())
4640
4641@@ -166,8 +273,9 @@ class TestInit(CiTestCase):
4642 'INFO: network config is disabled by %s' % disable_file,
4643 self.logs.getvalue())
4644
4645+ @mock.patch('cloudinit.net.get_interfaces_by_mac')
4646 @mock.patch('cloudinit.distros.ubuntu.Distro')
4647- def test_apply_network_on_new_instance(self, m_ubuntu):
4648+ def test_apply_network_on_new_instance(self, m_ubuntu, m_macs):
4649 """Call distro apply_network_config methods on is_new_instance."""
4650 net_cfg = {
4651 'version': 1, 'config': [
4652@@ -175,7 +283,9 @@ class TestInit(CiTestCase):
4653 'name': 'eth9', 'mac_address': '42:42:42:42:42:42'}]}
4654
4655 def fake_network_config():
4656- return net_cfg, 'fallback'
4657+ return net_cfg, NetworkConfigSource.fallback
4658+
4659+ m_macs.return_value = {'42:42:42:42:42:42': 'eth9'}
4660
4661 self.init._find_networking_config = fake_network_config
4662 self.init.apply_network_config(True)
4663@@ -195,7 +305,7 @@ class TestInit(CiTestCase):
4664 'name': 'eth9', 'mac_address': '42:42:42:42:42:42'}]}
4665
4666 def fake_network_config():
4667- return net_cfg, 'fallback'
4668+ return net_cfg, NetworkConfigSource.fallback
4669
4670 self.init._find_networking_config = fake_network_config
4671 self.init.apply_network_config(True)
4672@@ -206,8 +316,9 @@ class TestInit(CiTestCase):
4673 " nor datasource network update on '%s' event" % EventType.BOOT,
4674 self.logs.getvalue())
4675
4676+ @mock.patch('cloudinit.net.get_interfaces_by_mac')
4677 @mock.patch('cloudinit.distros.ubuntu.Distro')
4678- def test_apply_network_on_datasource_allowed_event(self, m_ubuntu):
4679+ def test_apply_network_on_datasource_allowed_event(self, m_ubuntu, m_macs):
4680 """Apply network if datasource.update_metadata permits BOOT event."""
4681 old_instance_id = os.path.join(
4682 self.init.paths.get_cpath('data'), 'instance-id')
4683@@ -218,7 +329,9 @@ class TestInit(CiTestCase):
4684 'name': 'eth9', 'mac_address': '42:42:42:42:42:42'}]}
4685
4686 def fake_network_config():
4687- return net_cfg, 'fallback'
4688+ return net_cfg, NetworkConfigSource.fallback
4689+
4690+ m_macs.return_value = {'42:42:42:42:42:42': 'eth9'}
4691
4692 self.init._find_networking_config = fake_network_config
4693 self.init.datasource = FakeDataSource(paths=self.init.paths)
4694diff --git a/cloudinit/url_helper.py b/cloudinit/url_helper.py
4695index 0af0d9e..44ee61d 100644
4696--- a/cloudinit/url_helper.py
4697+++ b/cloudinit/url_helper.py
4698@@ -199,18 +199,19 @@ def _get_ssl_args(url, ssl_details):
4699 def readurl(url, data=None, timeout=None, retries=0, sec_between=1,
4700 headers=None, headers_cb=None, ssl_details=None,
4701 check_status=True, allow_redirects=True, exception_cb=None,
4702- session=None, infinite=False, log_req_resp=True):
4703+ session=None, infinite=False, log_req_resp=True,
4704+ request_method=None):
4705 url = _cleanurl(url)
4706 req_args = {
4707 'url': url,
4708 }
4709 req_args.update(_get_ssl_args(url, ssl_details))
4710 req_args['allow_redirects'] = allow_redirects
4711- req_args['method'] = 'GET'
4712+ if not request_method:
4713+ request_method = 'POST' if data else 'GET'
4714+ req_args['method'] = request_method
4715 if timeout is not None:
4716 req_args['timeout'] = max(float(timeout), 0)
4717- if data:
4718- req_args['method'] = 'POST'
4719 # It doesn't seem like config
4720 # was added in older library versions (or newer ones either), thus we
4721 # need to manually do the retries if it wasn't...
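The `readurl` change above replaces the implicit GET/POST switch with an overridable `request_method`: an explicit method always wins, and only when none is given does the presence of a body imply POST. The selection logic in isolation:

```python
# Sketch of readurl's new method selection: explicit request_method wins;
# otherwise POST is inferred when a request body is present.
def choose_method(data=None, request_method=None):
    if not request_method:
        request_method = 'POST' if data else 'GET'
    return request_method
```

This preserves the old behavior for all existing callers while letting new ones (for example, metadata endpoints that require PUT) pass `request_method` explicitly.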
4722diff --git a/cloudinit/util.py b/cloudinit/util.py
4723index ea4199c..aa23b3f 100644
4724--- a/cloudinit/util.py
4725+++ b/cloudinit/util.py
4726@@ -2337,17 +2337,21 @@ def parse_mtab(path):
4727 return None
4728
4729
4730-def find_freebsd_part(label_part):
4731- if label_part.startswith("/dev/label/"):
4732- target_label = label_part[5:]
4733- (label_part, _err) = subp(['glabel', 'status', '-s'])
4734- for labels in label_part.split("\n"):
4735+def find_freebsd_part(fs):
4736+ splitted = fs.split('/')
4737+ if len(splitted) == 3:
4738+ return splitted[2]
4739+ elif splitted[2] in ['label', 'gpt', 'ufs']:
4740+ target_label = fs[5:]
4741+ (part, _err) = subp(['glabel', 'status', '-s'])
4742+ for labels in part.split("\n"):
4743 items = labels.split()
4744- if len(items) > 0 and items[0].startswith(target_label):
4745- label_part = items[2]
4746+ if len(items) > 0 and items[0] == target_label:
4747+ part = items[2]
4748 break
4749- label_part = str(label_part)
4750- return label_part
4751+ return str(part)
4752+ else:
4753+ LOG.warning("Unexpected input in find_freebsd_part: %s", fs)
4754
4755
4756 def get_path_dev_freebsd(path, mnt_list):
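The rewritten `find_freebsd_part` above classifies the device path by its shape: a plain three-part path like `/dev/ada0p2` yields the device name directly, while `label`/`gpt`/`ufs` paths are resolved through `glabel status -s` output. A rough sketch of just the classification step (the return shape here is invented for illustration; the real code shells out to `glabel` for the second case):

```python
# Illustrative sketch of the path classification in find_freebsd_part.
def classify_freebsd_fs(fs):
    parts = fs.split('/')              # '/dev/ada0p2' -> ['', 'dev', 'ada0p2']
    if len(parts) == 3:
        return ('device', parts[2])    # bare device node
    if parts[2] in ('label', 'gpt', 'ufs'):
        # real code runs `glabel status -s` and matches fs[5:] ("label/...")
        # against the first column of its output
        return ('labeled', fs[5:])
    return ('unknown', None)           # real code logs a warning here
```

Comparing `items[0] == target_label` (instead of the old `startswith`) also fixes a subtle bug where a label that is a prefix of another (e.g. `rootfs` vs `rootfs2`) could match the wrong `glabel` row.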
4757diff --git a/cloudinit/version.py b/cloudinit/version.py
4758index ddcd436..b04b11f 100644
4759--- a/cloudinit/version.py
4760+++ b/cloudinit/version.py
4761@@ -4,7 +4,7 @@
4762 #
4763 # This file is part of cloud-init. See LICENSE file for license information.
4764
4765-__VERSION__ = "19.1"
4766+__VERSION__ = "19.2"
4767 _PACKAGED_VERSION = '@@PACKAGED_VERSION@@'
4768
4769 FEATURES = [
4770diff --git a/config/cloud.cfg.tmpl b/config/cloud.cfg.tmpl
4771index 25db43e..684c747 100644
4772--- a/config/cloud.cfg.tmpl
4773+++ b/config/cloud.cfg.tmpl
4774@@ -32,8 +32,8 @@ preserve_hostname: false
4775
4776 {% if variant in ["freebsd"] %}
4777 # This should not be required, but leave it in place until the real cause of
4778-# not beeing able to find -any- datasources is resolved.
4779-datasource_list: ['ConfigDrive', 'Azure', 'OpenStack', 'Ec2']
4780+# not finding -any- datasources is resolved.
4781+datasource_list: ['NoCloud', 'ConfigDrive', 'Azure', 'OpenStack', 'Ec2']
4782 {% endif %}
4783 # Example datasource config
4784 # datasource:
4785diff --git a/debian/changelog b/debian/changelog
4786index 2b9cfe8..0ac84a4 100644
4787--- a/debian/changelog
4788+++ b/debian/changelog
4789@@ -1,3 +1,63 @@
4790+cloud-init (19.2-21-ge6383719-0ubuntu1~19.04.1) disco; urgency=medium
4791+
4792+ * debian/cloud-init.templates: enable Exoscale cloud.
4793+ * New upstream snapshot. (LP: #1841099)
4794+ - ubuntu-drivers: call db_x_loadtemplatefile to accept NVIDIA EULA
4795+ - Add missing #cloud-config comment on first example in documentation.
4796+ [Florian Müller]
4797+ - ubuntu-drivers: emit latelink=true debconf to accept nvidia eula
4798+ - DataSourceOracle: prefer DS network config over initramfs
4799+ - format.rst: add text/jinja2 to list of content types (+ cleanups)
4800+ - Add GitHub pull request template to point people at hacking doc
4801+ - cloudinit/distros/parsers/sys_conf: add docstring to SysConf
4802+ - pyflakes: remove unused variable [Joshua Powers]
4803+ - Azure: Record boot timestamps, system information, and diagnostic events
4804+ [Anh Vo]
4805+ - DataSourceOracle: configure secondary NICs on Virtual Machines
4806+ - distros: fix confusing variable names
4807+ - azure/net: generate_fallback_nic emits network v2 config instead of v1
4808+ - Add support for publishing host keys to GCE guest attributes
4809+ [Rick Wright]
4810+ - New data source for the Exoscale.com cloud platform [Chris Glass]
4811+ - doc: remove intersphinx extension
4812+ - cc_set_passwords: rewrite documentation
4813+ - net/cmdline: split interfaces_by_mac and init network config
4814+ determination
4815+ - stages: allow data sources to override network config source order
4816+ - cloud_tests: updates and fixes
4817+ - Fix bug rendering MTU on bond or vlan when input was netplan.
4818+ [Scott Moser]
4819+ - net: update net sequence, include wait on netdevs, opensuse netrules path
4820+ - Release 19.2
4821+ - net: add rfc3442 (classless static routes) to EphemeralDHCP
4822+ - templates/ntp.conf.debian.tmpl: fix missing newline for pools
4823+ - Support netplan renderer in Arch Linux [Conrad Hoffmann]
4824+ - Fix typo in publicly viewable documentation. [David Medberry]
4825+ - Add a cdrom size checker for OVF ds to ds-identify [Pengpeng Sun]
4826+ - VMWare: Trigger the post customization script via cc_scripts module.
4827+ [Xiaofeng Wang]
4828+ - Cloud-init analyze module: Added ability to analyze boot events.
4829+ [Sam Gilson]
4830+ - Update debian eni network configuration location, retain Ubuntu setting
4831+ [Janos Lenart]
4832+ - net: skip bond interfaces in get_interfaces [Stanislav Makar]
4833+ - Fix a couple of issues raised by a coverity scan
4834+ - Add missing dsname for Hetzner Cloud datasource [Markus Schade]
4835+ - doc: indicate that netplan is default in Ubuntu now
4836+ - azure: add region and AZ properties from imds compute location metadata
4837+ - sysconfig: support more bonding options [Penghui Liao]
4838+ - cloud-init-generator: use libexec path to ds-identify on redhat systems
4839+ - tools/build-on-freebsd: update to python3 [Gonéri Le Bouder]
4840+ - Allow identification of OpenStack by Asset Tag [Mark T. Voelker]
4841+ - Fix spelling error making 'an Ubuntu' consistent. [Brian Murray]
4842+ - run-container: centos: comment out the repo mirrorlist [Paride Legovini]
4843+ - netplan: update netplan key mappings for gratuitous-arp
4844+ - freebsd: fix the name of cloudcfg VARIANT [Gonéri Le Bouder]
4845+ - freebsd: ability to grow root file system [Gonéri Le Bouder]
4846+ - freebsd: NoCloud data source support [Gonéri Le Bouder]
4847+
4848+ -- Chad Smith <chad.smith@canonical.com> Thu, 22 Aug 2019 12:58:06 -0600
4849+
4850 cloud-init (19.1-1-gbaa47854-0ubuntu1~19.04.1) disco; urgency=medium
4851
4852 * New upstream snapshot. (LP: #1828637)
4853diff --git a/debian/cloud-init.templates b/debian/cloud-init.templates
4854index ece53a0..aa14d1b 100644
4855--- a/debian/cloud-init.templates
4856+++ b/debian/cloud-init.templates
4857@@ -1,8 +1,8 @@
4858 Template: cloud-init/datasources
4859 Type: multiselect
4860-Default: NoCloud, ConfigDrive, OpenNebula, DigitalOcean, Azure, AltCloud, OVF, MAAS, GCE, OpenStack, CloudSigma, SmartOS, Bigstep, Scaleway, AliYun, Ec2, CloudStack, Hetzner, IBMCloud, Oracle, None
4861-Choices-C: NoCloud, ConfigDrive, OpenNebula, DigitalOcean, Azure, AltCloud, OVF, MAAS, GCE, OpenStack, CloudSigma, SmartOS, Bigstep, Scaleway, AliYun, Ec2, CloudStack, Hetzner, IBMCloud, Oracle, None
4862-Choices: NoCloud: Reads info from /var/lib/cloud/seed only, ConfigDrive: Reads data from Openstack Config Drive, OpenNebula: read from OpenNebula context disk, DigitalOcean: reads data from Droplet datasource, Azure: read from MS Azure cdrom. Requires walinux-agent, AltCloud: config disks for RHEVm and vSphere, OVF: Reads data from OVF Transports, MAAS: Reads data from Ubuntu MAAS, GCE: google compute metadata service, OpenStack: native openstack metadata service, CloudSigma: metadata over serial for cloudsigma.com, SmartOS: Read from SmartOS metadata service, Bigstep: Bigstep metadata service, Scaleway: Scaleway metadata service, AliYun: Alibaba metadata service, Ec2: reads data from EC2 Metadata service, CloudStack: Read from CloudStack metadata service, Hetzner: Hetzner Cloud, IBMCloud: IBM Cloud. Previously softlayer or bluemix., Oracle: Oracle Compute Infrastructure, None: Failsafe datasource
4863+Default: NoCloud, ConfigDrive, OpenNebula, DigitalOcean, Azure, AltCloud, OVF, MAAS, GCE, OpenStack, CloudSigma, SmartOS, Bigstep, Scaleway, AliYun, Ec2, CloudStack, Hetzner, IBMCloud, Oracle, Exoscale, None
4864+Choices-C: NoCloud, ConfigDrive, OpenNebula, DigitalOcean, Azure, AltCloud, OVF, MAAS, GCE, OpenStack, CloudSigma, SmartOS, Bigstep, Scaleway, AliYun, Ec2, CloudStack, Hetzner, IBMCloud, Oracle, Exoscale, None
4865+Choices: NoCloud: Reads info from /var/lib/cloud/seed only, ConfigDrive: Reads data from Openstack Config Drive, OpenNebula: read from OpenNebula context disk, DigitalOcean: reads data from Droplet datasource, Azure: read from MS Azure cdrom. Requires walinux-agent, AltCloud: config disks for RHEVm and vSphere, OVF: Reads data from OVF Transports, MAAS: Reads data from Ubuntu MAAS, GCE: google compute metadata service, OpenStack: native openstack metadata service, CloudSigma: metadata over serial for cloudsigma.com, SmartOS: Read from SmartOS metadata service, Bigstep: Bigstep metadata service, Scaleway: Scaleway metadata service, AliYun: Alibaba metadata service, Ec2: reads data from EC2 Metadata service, CloudStack: Read from CloudStack metadata service, Hetzner: Hetzner Cloud, IBMCloud: IBM Cloud. Previously softlayer or bluemix., Oracle: Oracle Compute Infrastructure, Exoscale: Exoscale metadata service, None: Failsafe datasource
4866 Description: Which data sources should be searched?
4867 Cloud-init supports searching different "Data Sources" for information
4868 that it uses to configure a cloud instance.
4869diff --git a/doc/examples/cloud-config-datasources.txt b/doc/examples/cloud-config-datasources.txt
4870index 2651c02..52a2476 100644
4871--- a/doc/examples/cloud-config-datasources.txt
4872+++ b/doc/examples/cloud-config-datasources.txt
4873@@ -38,7 +38,7 @@ datasource:
4874 # these are optional, but allow you to basically provide a datasource
4875 # right here
4876 user-data: |
4877- # This is the user-data verbatum
4878+ # This is the user-data verbatim
4879 meta-data:
4880 instance-id: i-87018aed
4881 local-hostname: myhost.internal
4882diff --git a/doc/examples/cloud-config-user-groups.txt b/doc/examples/cloud-config-user-groups.txt
4883index 6a363b7..f588bfb 100644
4884--- a/doc/examples/cloud-config-user-groups.txt
4885+++ b/doc/examples/cloud-config-user-groups.txt
4886@@ -1,3 +1,4 @@
4887+#cloud-config
4888 # Add groups to the system
4889 # The following example adds the ubuntu group with members 'root' and 'sys'
4890 # and the empty group cloud-users.
4891diff --git a/doc/rtd/conf.py b/doc/rtd/conf.py
4892index 50eb05c..4174477 100644
4893--- a/doc/rtd/conf.py
4894+++ b/doc/rtd/conf.py
4895@@ -27,16 +27,11 @@ project = 'Cloud-Init'
4896 # Add any Sphinx extension module names here, as strings. They can be
4897 # extensions coming with Sphinx (named 'sphinx.ext.*') or your custom ones.
4898 extensions = [
4899- 'sphinx.ext.intersphinx',
4900 'sphinx.ext.autodoc',
4901 'sphinx.ext.autosectionlabel',
4902 'sphinx.ext.viewcode',
4903 ]
4904
4905-intersphinx_mapping = {
4906- 'sphinx': ('http://sphinx.pocoo.org', None)
4907-}
4908-
4909 # The suffix of source filenames.
4910 source_suffix = '.rst'
4911
4912diff --git a/doc/rtd/topics/analyze.rst b/doc/rtd/topics/analyze.rst
4913new file mode 100644
4914index 0000000..5cf38bd
4915--- /dev/null
4916+++ b/doc/rtd/topics/analyze.rst
4917@@ -0,0 +1,84 @@
4918+*************************
4919+Cloud-init Analyze Module
4920+*************************
4921+
4922+Overview
4923+========
4924+The analyze module was added to cloud-init to help analyze cloud-init boot time
4925+performance. It is loosely based on systemd-analyze and provides four main actions:
4926+show, blame, dump, and boot.
4927+
4928+The 'show' action is similar to 'systemd-analyze critical-chain', which prints a list of units, the
4929+time they started, and how long they took. For cloud-init, we have four stages, and within each stage
4930+a number of modules may run depending on configuration. 'cloud-init analyze show' will, for each
4931+boot, print this information along with a total time summary.
4932+
4933+The 'blame' action mirrors 'systemd-analyze blame': it prints, in descending order,
4934+the units that took the longest to run. This output is highly useful for examining where cloud-init
4935+is spending its time during execution.
4936+
4937+The 'dump' action simply dumps the cloud-init logs that the analyze module is performing
4938+the analysis on and returns a list of dictionaries that can be consumed for other reporting needs.
4939+
4940+The 'boot' action prints out kernel-related timestamps that are not included in any of the
4941+cloud-init logs. Three timestamps are presented to the user:
4942+kernel start, kernel finish boot, and cloud-init start. This action gives additional
4943+clarity into the parts of the boot process that cloud-init does not control, to aid in debugging
4944+performance issues related to cloud-init startup and in tracking regressions.
4945+
4946+Usage
4947+=====
4948+Using each of the printing formats is as easy as running one of the following shell commands:
4949+
4950+.. code-block:: shell-session
4951+
4952+ cloud-init analyze show
4953+ cloud-init analyze blame
4954+ cloud-init analyze dump
4955+ cloud-init analyze boot
4956+
4957+Cloud-init analyze boot Timestamp Gathering
4958+===========================================
4959+The following boot-related timestamps are gathered on demand when cloud-init analyze boot runs:
4960+- Kernel Startup, which is inferred from system uptime
4961+- Kernel Finishes Initialization, which is inferred from the systemd UserspaceTimestampMonotonic property
4962+- Cloud-init activation, which is inferred from the InactiveExitTimestampMonotonic property of the
4963+cloud-init-local systemd unit.
4964+
4965+To gather the necessary timestamps using systemd, run the commands
4966+
4967+.. code-block:: shell-session
4968+
4969+ systemctl show -p UserspaceTimestampMonotonic
4970+ systemctl show cloud-init-local -p InactiveExitTimestampMonotonic
4971+
4972+These commands report the UserspaceTimestamp and InactiveExitTimestamp.
4973+The UserspaceTimestamp tracks when the init system starts, which is used as an indicator of kernel
4974+finishing initialization. The InactiveExitTimestamp tracks when a particular systemd unit transitions
4975+from the Inactive to Active state, which can be used to mark the beginning of systemd's activation
4976+of cloud-init.
4977+
4978+Currently this only works for distros that use systemd as the init system. We will be expanding
4979+support for other distros in the future, and this document will be updated accordingly.
4980+
4981+If systemd is not present on the system, dmesg is used to attempt to find an event that logs the
4982+beginning of the init system. However, with this method only the first two timestamps can be found;
4983+dmesg does not monitor userspace processes, so no cloud-init start timestamp is emitted as it is
4984+when using systemd.
4985+
4986+List of Cloud-init analyze boot supported distros
4987+=================================================
4988+- Arch
4989+- CentOS
4990+- Debian
4991+- Fedora
4992+- OpenSuSE
4993+- Red Hat Enterprise Linux
4994+- Ubuntu
4995+- SUSE Linux Enterprise Server
4996+- CoreOS
4997+
4998+List of Cloud-init analyze boot unsupported distros
4999+===================================================
5000+- FreeBSD
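The systemd properties described in the analyze documentation above arrive as `Key=Value` lines from `systemctl show`. A minimal, hypothetical parser sketch is given below; the microseconds-to-seconds conversion assumes systemd's usual unit for the *TimestampMonotonic properties:

```python
def parse_systemctl_show(output):
    """Parse `systemctl show -p ...` output ("Key=Value" per line)
    into a dict, converting *TimestampMonotonic values from
    microseconds (systemd's unit) into seconds."""
    props = {}
    for line in output.splitlines():
        if '=' not in line:
            continue
        key, _, value = line.partition('=')
        if key.endswith('TimestampMonotonic'):
            props[key] = int(value) / 1e6  # microseconds -> seconds
        else:
            props[key] = value
    return props
```

With output such as `UserspaceTimestampMonotonic=2300000`, the parser reports that the init system started 2.3 seconds after kernel start, which is the kind of value the 'boot' action presents.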
The diff has been truncated for viewing.
