Merge ~chad.smith/cloud-init:ubuntu/xenial into cloud-init:ubuntu/xenial

Proposed by Chad Smith
Status: Merged
Merged at revision: 8e78ce6bc8847fde7f1488c37245946925fc3f67
Proposed branch: ~chad.smith/cloud-init:ubuntu/xenial
Merge into: cloud-init:ubuntu/xenial
Diff against target: 6992 lines (+3994/-572)
83 files modified
.github/pull_request_template.md (+9/-0)
ChangeLog (+36/-0)
cloudinit/analyze/__main__.py (+86/-2)
cloudinit/analyze/show.py (+192/-10)
cloudinit/analyze/tests/test_boot.py (+170/-0)
cloudinit/apport.py (+1/-0)
cloudinit/config/cc_apt_configure.py (+3/-1)
cloudinit/config/cc_lxd.py (+1/-1)
cloudinit/config/cc_set_passwords.py (+34/-19)
cloudinit/config/cc_ssh.py (+55/-0)
cloudinit/config/cc_ubuntu_drivers.py (+49/-1)
cloudinit/config/tests/test_ssh.py (+166/-0)
cloudinit/config/tests/test_ubuntu_drivers.py (+81/-18)
cloudinit/distros/__init__.py (+22/-22)
cloudinit/distros/arch.py (+14/-0)
cloudinit/distros/debian.py (+2/-2)
cloudinit/distros/freebsd.py (+16/-16)
cloudinit/distros/opensuse.py (+2/-0)
cloudinit/distros/parsers/sys_conf.py (+7/-0)
cloudinit/distros/ubuntu.py (+15/-0)
cloudinit/net/__init__.py (+112/-43)
cloudinit/net/cmdline.py (+16/-9)
cloudinit/net/dhcp.py (+90/-0)
cloudinit/net/network_state.py (+12/-4)
cloudinit/net/sysconfig.py (+12/-0)
cloudinit/net/tests/test_dhcp.py (+119/-1)
cloudinit/net/tests/test_init.py (+262/-9)
cloudinit/settings.py (+1/-0)
cloudinit/sources/DataSourceAzure.py (+141/-32)
cloudinit/sources/DataSourceCloudSigma.py (+2/-6)
cloudinit/sources/DataSourceExoscale.py (+258/-0)
cloudinit/sources/DataSourceGCE.py (+20/-2)
cloudinit/sources/DataSourceHetzner.py (+3/-0)
cloudinit/sources/DataSourceOVF.py (+6/-1)
cloudinit/sources/DataSourceOracle.py (+99/-7)
cloudinit/sources/__init__.py (+27/-0)
cloudinit/sources/helpers/azure.py (+152/-8)
cloudinit/sources/helpers/vmware/imc/config_custom_script.py (+42/-101)
cloudinit/sources/tests/test_oracle.py (+228/-11)
cloudinit/stages.py (+50/-15)
cloudinit/tests/helpers.py (+2/-1)
cloudinit/tests/test_stages.py (+132/-19)
cloudinit/url_helper.py (+5/-4)
cloudinit/version.py (+1/-1)
debian/changelog (+62/-3)
debian/cloud-init.templates (+3/-3)
debian/patches/azure-apply-network-config-false.patch (+1/-1)
debian/patches/azure-use-walinux-agent.patch (+1/-1)
debian/patches/ubuntu-advantage-revert-tip.patch (+4/-8)
doc/examples/cloud-config-datasources.txt (+1/-1)
doc/examples/cloud-config-user-groups.txt (+1/-0)
doc/rtd/conf.py (+0/-5)
doc/rtd/topics/analyze.rst (+84/-0)
doc/rtd/topics/capabilities.rst (+1/-0)
doc/rtd/topics/datasources.rst (+1/-0)
doc/rtd/topics/datasources/exoscale.rst (+68/-0)
doc/rtd/topics/datasources/oracle.rst (+24/-1)
doc/rtd/topics/debugging.rst (+13/-0)
doc/rtd/topics/format.rst (+13/-12)
doc/rtd/topics/network-config-format-v2.rst (+1/-1)
doc/rtd/topics/network-config.rst (+5/-4)
integration-requirements.txt (+2/-1)
systemd/cloud-init-generator.tmpl (+6/-1)
templates/ntp.conf.debian.tmpl (+2/-1)
tests/cloud_tests/platforms.yaml (+1/-0)
tests/cloud_tests/platforms/nocloudkvm/instance.py (+9/-4)
tests/cloud_tests/platforms/platforms.py (+1/-1)
tests/cloud_tests/setup_image.py (+2/-1)
tests/unittests/test_datasource/test_azure.py (+112/-15)
tests/unittests/test_datasource/test_common.py (+13/-0)
tests/unittests/test_datasource/test_ec2.py (+2/-1)
tests/unittests/test_datasource/test_exoscale.py (+203/-0)
tests/unittests/test_datasource/test_gce.py (+18/-0)
tests/unittests/test_distros/test_netconfig.py (+86/-0)
tests/unittests/test_ds_identify.py (+25/-0)
tests/unittests/test_handler/test_handler_apt_source_v3.py (+11/-0)
tests/unittests/test_handler/test_handler_ntp.py (+15/-10)
tests/unittests/test_net.py (+197/-23)
tests/unittests/test_reporting_hyperv.py (+65/-0)
tests/unittests/test_vmware/test_custom_script.py (+63/-53)
tools/build-on-freebsd (+40/-33)
tools/ds-identify (+32/-14)
tools/xkvm (+53/-8)
Reviewer                Review Type               Date Requested    Status
Ryan Harper                                                         Approve
Server Team CI bot      continuous-integration                      Approve
Review via email: mp+371685@code.launchpad.net

Commit message

Upstream snapshot for SRU into Xenial

also enables Exoscale in debian/cloud-init.templates

Revision history for this message
Server Team CI bot (server-team-bot) wrote :

PASSED: Continuous integration, rev:c5eb692d62024ca13e79085f6672612de1bec7bc
https://jenkins.ubuntu.com/server/job/cloud-init-ci/1072/
Executed test runs:
    SUCCESS: Checkout
    SUCCESS: Unit & Style Tests
    SUCCESS: Ubuntu LTS: Build
    SUCCESS: Ubuntu LTS: Integration
    IN_PROGRESS: Declarative: Post Actions

Click here to trigger a rebuild:
https://jenkins.ubuntu.com/server/job/cloud-init-ci/1072//rebuild

review: Approve (continuous-integration)
Revision history for this message
Server Team CI bot (server-team-bot) wrote :

PASSED: Continuous integration, rev:c8452b91f44eb54a145e9e59d552b1ce9ff6e5f5
https://jenkins.ubuntu.com/server/job/cloud-init-ci/1074/
Executed test runs:
    SUCCESS: Checkout
    SUCCESS: Unit & Style Tests
    SUCCESS: Ubuntu LTS: Build
    SUCCESS: Ubuntu LTS: Integration
    IN_PROGRESS: Declarative: Post Actions

Click here to trigger a rebuild:
https://jenkins.ubuntu.com/server/job/cloud-init-ci/1074//rebuild

review: Approve (continuous-integration)
Revision history for this message
Ryan Harper (raharper) wrote :

I see Aliyun mentioned in the debian/changelog ...

review: Needs Fixing
Revision history for this message
Server Team CI bot (server-team-bot) wrote :

PASSED: Continuous integration, rev:8e78ce6bc8847fde7f1488c37245946925fc3f67
https://jenkins.ubuntu.com/server/job/cloud-init-ci/1077/
Executed test runs:
    SUCCESS: Checkout
    SUCCESS: Unit & Style Tests
    SUCCESS: Ubuntu LTS: Build
    SUCCESS: Ubuntu LTS: Integration
    IN_PROGRESS: Declarative: Post Actions

Click here to trigger a rebuild:
https://jenkins.ubuntu.com/server/job/cloud-init-ci/1077//rebuild

review: Approve (continuous-integration)
Revision history for this message
Ryan Harper (raharper) wrote :

In your cloud-init.template change, the Choices: entry should be

Exoscale: Read from Exoscale metadata service

and you have:

Exoscale: Exoscale metadata service

review: Needs Fixing
Revision history for this message
Ryan Harper (raharper) wrote :

After discussion in #cloud-init: the human-readable string varies a bit from datasource to datasource. No need to fix it now; in the future, adding a datasource to the template will be done via a script, and at that point we'll standardize (or store the strings we want to use on the datasources themselves) and render that.

review: Approve

Preview Diff

diff --git a/.github/pull_request_template.md b/.github/pull_request_template.md
new file mode 100644
index 0000000..170a71e
--- /dev/null
+++ b/.github/pull_request_template.md
@@ -0,0 +1,9 @@
+***This GitHub repo is only a mirror. Do not submit pull requests
+here!***
+
+Thank you for taking the time to write and submit a change to
+cloud-init! Please follow [our hacking
+guide](https://cloudinit.readthedocs.io/en/latest/topics/hacking.html)
+to submit your change to cloud-init's [Launchpad git
+repository](https://code.launchpad.net/cloud-init/), where cloud-init
+development happens.
diff --git a/ChangeLog b/ChangeLog
index bf48fd4..a98f8c2 100644
--- a/ChangeLog
+++ b/ChangeLog
@@ -1,3 +1,39 @@
+19.2:
+ - net: add rfc3442 (classless static routes) to EphemeralDHCP
+   (LP: #1821102)
+ - templates/ntp.conf.debian.tmpl: fix missing newline for pools
+   (LP: #1836598)
+ - Support netplan renderer in Arch Linux [Conrad Hoffmann]
+ - Fix typo in publicly viewable documentation. [David Medberry]
+ - Add a cdrom size checker for OVF ds to ds-identify
+   [Pengpeng Sun] (LP: #1806701)
+ - VMWare: Trigger the post customization script via cc_scripts module.
+   [Xiaofeng Wang] (LP: #1833192)
+ - Cloud-init analyze module: Added ability to analyze boot events.
+   [Sam Gilson]
+ - Update debian eni network configuration location, retain Ubuntu setting
+   [Janos Lenart]
+ - net: skip bond interfaces in get_interfaces
+   [Stanislav Makar] (LP: #1812857)
+ - Fix a couple of issues raised by a coverity scan
+ - Add missing dsname for Hetzner Cloud datasource [Markus Schade]
+ - doc: indicate that netplan is default in Ubuntu now
+ - azure: add region and AZ properties from imds compute location metadata
+ - sysconfig: support more bonding options [Penghui Liao]
+ - cloud-init-generator: use libexec path to ds-identify on redhat systems
+   (LP: #1833264)
+ - tools/build-on-freebsd: update to python3 [Gonéri Le Bouder]
+ - Allow identification of OpenStack by Asset Tag
+   [Mark T. Voelker] (LP: #1669875)
+ - Fix spelling error making 'an Ubuntu' consistent. [Brian Murray]
+ - run-container: centos: comment out the repo mirrorlist [Paride Legovini]
+ - netplan: update netplan key mappings for gratuitous-arp (LP: #1827238)
+ - freebsd: fix the name of cloudcfg VARIANT [Gonéri Le Bouder]
+ - freebsd: ability to grow root file system [Gonéri Le Bouder]
+ - freebsd: NoCloud data source support [Gonéri Le Bouder] (LP: #1645824)
+ - Azure: Return static fallback address as if failed to find endpoint
+   [Jason Zions (MSFT)]
+
 19.1:
  - freebsd: add chpasswd pkg in the image [Gonéri Le Bouder]
  - tests: add Eoan release [Paride Legovini]
diff --git a/cloudinit/analyze/__main__.py b/cloudinit/analyze/__main__.py
index f861365..99e5c20 100644
--- a/cloudinit/analyze/__main__.py
+++ b/cloudinit/analyze/__main__.py
@@ -7,7 +7,7 @@ import re
 import sys
 
 from cloudinit.util import json_dumps
-
+from datetime import datetime
 from . import dump
 from . import show
 
@@ -52,9 +52,93 @@ def get_parser(parser=None):
                              dest='outfile', default='-',
                              help='specify where to write output. ')
     parser_dump.set_defaults(action=('dump', analyze_dump))
+    parser_boot = subparsers.add_parser(
+        'boot', help='Print list of boot times for kernel and cloud-init')
+    parser_boot.add_argument('-i', '--infile', action='store',
+                             dest='infile', default='/var/log/cloud-init.log',
+                             help='specify where to read input. ')
+    parser_boot.add_argument('-o', '--outfile', action='store',
+                             dest='outfile', default='-',
+                             help='specify where to write output.')
+    parser_boot.set_defaults(action=('boot', analyze_boot))
     return parser
 
 
+def analyze_boot(name, args):
+    """Report a list of how long different boot operations took.
+
+    For Example:
+    -- Most Recent Boot Record --
+        Kernel Started at: <time>
+        Kernel ended boot at: <time>
+        Kernel time to boot (seconds): <time>
+        Cloud-init activated by systemd at: <time>
+        Time between Kernel end boot and Cloud-init activation (seconds):<time>
+        Cloud-init start: <time>
+    """
+    infh, outfh = configure_io(args)
+    kernel_info = show.dist_check_timestamp()
+    status_code, kernel_start, kernel_end, ci_sysd_start = \
+        kernel_info
+    kernel_start_timestamp = datetime.utcfromtimestamp(kernel_start)
+    kernel_end_timestamp = datetime.utcfromtimestamp(kernel_end)
+    ci_sysd_start_timestamp = datetime.utcfromtimestamp(ci_sysd_start)
+    try:
+        last_init_local = \
+            [e for e in _get_events(infh) if e['name'] == 'init-local' and
+                'starting search' in e['description']][-1]
+        ci_start = datetime.utcfromtimestamp(last_init_local['timestamp'])
+    except IndexError:
+        ci_start = 'Could not find init-local log-line in cloud-init.log'
+        status_code = show.FAIL_CODE
+
+    FAILURE_MSG = 'Your Linux distro or container does not support this ' \
+                  'functionality.\n' \
+                  'You must be running a Kernel Telemetry supported ' \
+                  'distro.\nPlease check ' \
+                  'https://cloudinit.readthedocs.io/en/latest' \
+                  '/topics/analyze.html for more ' \
+                  'information on supported distros.\n'
+
+    SUCCESS_MSG = '-- Most Recent Boot Record --\n' \
+                  '    Kernel Started at: {k_s_t}\n' \
+                  '    Kernel ended boot at: {k_e_t}\n' \
+                  '    Kernel time to boot (seconds): {k_r}\n' \
+                  '    Cloud-init activated by systemd at: {ci_sysd_t}\n' \
+                  '    Time between Kernel end boot and Cloud-init ' \
+                  'activation (seconds): {bt_r}\n' \
+                  '    Cloud-init start: {ci_start}\n'
+
+    CONTAINER_MSG = '-- Most Recent Container Boot Record --\n' \
+                    '    Container started at: {k_s_t}\n' \
+                    '    Cloud-init activated by systemd at: {ci_sysd_t}\n' \
+                    '    Cloud-init start: {ci_start}\n' \
+
+    status_map = {
+        show.FAIL_CODE: FAILURE_MSG,
+        show.CONTAINER_CODE: CONTAINER_MSG,
+        show.SUCCESS_CODE: SUCCESS_MSG
+    }
+
+    kernel_runtime = kernel_end - kernel_start
+    between_process_runtime = ci_sysd_start - kernel_end
+
+    kwargs = {
+        'k_s_t': kernel_start_timestamp,
+        'k_e_t': kernel_end_timestamp,
+        'k_r': kernel_runtime,
+        'bt_r': between_process_runtime,
+        'k_e': kernel_end,
+        'k_s': kernel_start,
+        'ci_sysd': ci_sysd_start,
+        'ci_sysd_t': ci_sysd_start_timestamp,
+        'ci_start': ci_start
+    }
+
+    outfh.write(status_map[status_code].format(**kwargs))
+    return status_code
+
+
 def analyze_blame(name, args):
     """Report a list of records sorted by largest time delta.
 
@@ -119,7 +203,7 @@ def analyze_dump(name, args):
 
 def _get_events(infile):
     rawdata = None
-    events, rawdata = show.load_events(infile, None)
+    events, rawdata = show.load_events_infile(infile)
     if not events:
         events, _ = dump.dump_events(rawdata=rawdata)
     return events
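The new `analyze boot` helpers anchor everything to the kernel start time, computed as wall-clock "now" minus system uptime; systemd's monotonic offsets are then added to that base. A minimal standalone sketch of the arithmetic, assuming a Linux host (it reads `/proc/uptime` directly instead of going through `cloudinit.util.uptime`; the function names are illustrative, not cloud-init API):

```python
import time


def read_uptime_seconds(path="/proc/uptime"):
    # The first field of /proc/uptime is seconds since boot (Linux-specific).
    with open(path) as f:
        return float(f.read().split()[0])


def kernel_start_epoch():
    # Same arithmetic as the analyze helpers: wall-clock "now" minus
    # uptime yields the epoch timestamp at which the kernel started.
    return time.time() - read_uptime_seconds()
```

Monotonic deltas reported by systemd (microseconds since that zero point) can then be added to this base to place kernel-end and cloud-init activation on the same epoch timeline.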
diff --git a/cloudinit/analyze/show.py b/cloudinit/analyze/show.py
index 3e778b8..511b808 100644
--- a/cloudinit/analyze/show.py
+++ b/cloudinit/analyze/show.py
@@ -8,8 +8,11 @@ import base64
 import datetime
 import json
 import os
+import time
+import sys
 
 from cloudinit import util
+from cloudinit.distros import uses_systemd
 
 # An event:
 '''
@@ -49,6 +52,10 @@ format_key = {
 
 formatting_help = " ".join(["{0}: {1}".format(k.replace('%', '%%'), v)
                             for k, v in format_key.items()])
+SUCCESS_CODE = 'successful'
+FAIL_CODE = 'failure'
+CONTAINER_CODE = 'container'
+TIMESTAMP_UNKNOWN = (FAIL_CODE, -1, -1, -1)
 
 
 def format_record(msg, event):
@@ -125,9 +132,175 @@ def total_time_record(total_time):
     return 'Total Time: %3.5f seconds\n' % total_time
 
 
+class SystemctlReader(object):
+    '''
+    Class for dealing with all systemctl subp calls in a consistent manner.
+    '''
+    def __init__(self, property, parameter=None):
+        self.epoch = None
+        self.args = ['/bin/systemctl', 'show']
+        if parameter:
+            self.args.append(parameter)
+        self.args.extend(['-p', property])
+        # Don't want the init of our object to break. Instead of throwing
+        # an exception, set an error code that gets checked when data is
+        # requested from the object
+        self.failure = self.subp()
+
+    def subp(self):
+        '''
+        Make a subp call based on set args and handle errors by setting
+        failure code
+
+        :return: whether the subp call failed or not
+        '''
+        try:
+            value, err = util.subp(self.args, capture=True)
+            if err:
+                return err
+            self.epoch = value
+            return None
+        except Exception as systemctl_fail:
+            return systemctl_fail
+
+    def parse_epoch_as_float(self):
+        '''
+        If subp call succeeded, return the timestamp from subp as a float.
+
+        :return: timestamp as a float
+        '''
+        # subp has 2 ways to fail: it either fails and throws an exception,
+        # or returns an error code. Raise an exception here in order to make
+        # sure both scenarios throw exceptions
+        if self.failure:
+            raise RuntimeError('Subprocess call to systemctl has failed, '
+                               'returning error code ({})'
+                               .format(self.failure))
+        # Output from systemctl show has the format Property=Value.
+        # For example, UserspaceMonotonic=1929304
+        timestamp = self.epoch.split('=')[1]
+        # Timestamps reported by systemctl are in microseconds, converting
+        return float(timestamp) / 1000000
+
+
+def dist_check_timestamp():
+    '''
+    Determine which init system a particular linux distro is using.
+    Each init system (systemd, upstart, etc) has a different way of
+    providing timestamps.
+
+    :return: timestamps of kernelboot, kernelendboot, and cloud-initstart
+    or TIMESTAMP_UNKNOWN if the timestamps cannot be retrieved.
+    '''
+
+    if uses_systemd():
+        return gather_timestamps_using_systemd()
+
+    # Use dmesg to get timestamps if the distro does not have systemd
+    if util.is_FreeBSD() or 'gentoo' in \
+            util.system_info()['system'].lower():
+        return gather_timestamps_using_dmesg()
+
+    # this distro doesn't fit anything that is supported by cloud-init. just
+    # return error codes
+    return TIMESTAMP_UNKNOWN
+
+
+def gather_timestamps_using_dmesg():
+    '''
+    Gather timestamps that corresponds to kernel begin initialization,
+    kernel finish initialization using dmesg as opposed to systemctl
+
+    :return: the two timestamps plus a dummy timestamp to keep consistency
+    with gather_timestamps_using_systemd
+    '''
+    try:
+        data, _ = util.subp(['dmesg'], capture=True)
+        split_entries = data[0].splitlines()
+        for i in split_entries:
+            if i.decode('UTF-8').find('user') != -1:
+                splitup = i.decode('UTF-8').split()
+                stripped = splitup[1].strip(']')
+
+                # kernel timestamp from dmesg is equal to 0,
+                # with the userspace timestamp relative to it.
+                user_space_timestamp = float(stripped)
+                kernel_start = float(time.time()) - float(util.uptime())
+                kernel_end = kernel_start + user_space_timestamp
+
+                # systemd wont start cloud-init in this case,
+                # so we cannot get that timestamp
+                return SUCCESS_CODE, kernel_start, kernel_end, \
+                    kernel_end
+
+    except Exception:
+        pass
+    return TIMESTAMP_UNKNOWN
+
+
+def gather_timestamps_using_systemd():
+    '''
+    Gather timestamps that corresponds to kernel begin initialization,
+    kernel finish initialization. and cloud-init systemd unit activation
+
+    :return: the three timestamps
+    '''
+    kernel_start = float(time.time()) - float(util.uptime())
+    try:
+        delta_k_end = SystemctlReader('UserspaceTimestampMonotonic')\
+            .parse_epoch_as_float()
+        delta_ci_s = SystemctlReader('InactiveExitTimestampMonotonic',
+                                     'cloud-init-local').parse_epoch_as_float()
+        base_time = kernel_start
+        status = SUCCESS_CODE
+        # lxc based containers do not set their monotonic zero point to be when
+        # the container starts, instead keep using host boot as zero point
+        # time.CLOCK_MONOTONIC_RAW is only available in python 3.3
+        if util.is_container():
+            # clock.monotonic also uses host boot as zero point
+            if sys.version_info >= (3, 3):
+                base_time = float(time.time()) - float(time.monotonic())
+                # TODO: lxcfs automatically truncates /proc/uptime to seconds
+                # in containers when https://github.com/lxc/lxcfs/issues/292
+                # is fixed, util.uptime() should be used instead of stat on
+            try:
+                file_stat = os.stat('/proc/1/cmdline')
+                kernel_start = file_stat.st_atime
+            except OSError as err:
+                raise RuntimeError('Could not determine container boot '
+                                   'time from /proc/1/cmdline. ({})'
+                                   .format(err))
+            status = CONTAINER_CODE
+        else:
+            status = FAIL_CODE
+        kernel_end = base_time + delta_k_end
+        cloudinit_sysd = base_time + delta_ci_s
+
+    except Exception as e:
+        # Except ALL exceptions as Systemctl reader can throw many different
+        # errors, but any failure in systemctl means that timestamps cannot be
+        # obtained
+        print(e)
+        return TIMESTAMP_UNKNOWN
+    return status, kernel_start, kernel_end, cloudinit_sysd
+
+
 def generate_records(events, blame_sort=False,
                      print_format="(%n) %d seconds in %I%D",
                      dump_files=False, log_datafiles=False):
+    '''
+    Take in raw events and create parent-child dependencies between events
+    in order to order events in chronological order.
+
+    :param events: JSONs from dump that represents events taken from logs
+    :param blame_sort: whether to sort by timestamp or by time taken.
+    :param print_format: formatting to represent event, time stamp,
+    and time taken by the event in one line
+    :param dump_files: whether to dump files into JSONs
+    :param log_datafiles: whether or not to log events generated
+
+    :return: boot records ordered chronologically
+    '''
 
     sorted_events = sorted(events, key=lambda x: x['timestamp'])
     records = []
@@ -189,19 +362,28 @@ def generate_records(events, blame_sort=False,
 
 
 def show_events(events, print_format):
+    '''
+    A passthrough method that makes it easier to call generate_records()
+
+    :param events: JSONs from dump that represents events taken from logs
+    :param print_format: formatting to represent event, time stamp,
+    and time taken by the event in one line
+
+    :return: boot records ordered chronologically
+    '''
     return generate_records(events, print_format=print_format)
 
 
-def load_events(infile, rawdata=None):
-    if rawdata:
-        data = rawdata.read()
-    else:
-        data = infile.read()
+def load_events_infile(infile):
+    '''
+    Takes in a log file, read it, and convert to json.
+
+    :param infile: The Log file to be read
 
-    j = None
+    :return: json version of logfile, raw file
+    '''
+    data = infile.read()
     try:
-        j = json.loads(data)
+        return json.loads(data), data
     except ValueError:
-        pass
-
-    return j, data
+        return None, data
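SystemctlReader's parsing step is small enough to show in isolation: `systemctl show -p <Property>` prints `Property=Value`, with monotonic timestamp values in microseconds, which the reader splits on `=` and scales to seconds. A minimal sketch of just that conversion (the function name is illustrative, not the cloud-init API):

```python
def systemctl_epoch_to_seconds(output):
    # systemctl show prints lines like
    # "UserspaceTimestampMonotonic=1929304"; the value is microseconds
    # since the monotonic zero point, so scale it down to seconds.
    value = output.strip().split('=')[1]
    return float(value) / 1000000
```

A malformed value raises `IndexError` (no `=`) or `ValueError` (non-numeric), which is exactly what the unit tests below exercise.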
diff --git a/cloudinit/analyze/tests/test_boot.py b/cloudinit/analyze/tests/test_boot.py
new file mode 100644
index 0000000..706e2cc
--- /dev/null
+++ b/cloudinit/analyze/tests/test_boot.py
@@ -0,0 +1,170 @@
+import os
+from cloudinit.analyze.__main__ import (analyze_boot, get_parser)
+from cloudinit.tests.helpers import CiTestCase, mock
+from cloudinit.analyze.show import dist_check_timestamp, SystemctlReader, \
+    FAIL_CODE, CONTAINER_CODE
+
+err_code = (FAIL_CODE, -1, -1, -1)
+
+
+class TestDistroChecker(CiTestCase):
+
+    @mock.patch('cloudinit.util.system_info', return_value={'dist': ('', '',
+                                                                     ''),
+                                                            'system': ''})
+    @mock.patch('platform.linux_distribution', return_value=('', '', ''))
+    @mock.patch('cloudinit.util.is_FreeBSD', return_value=False)
+    def test_blank_distro(self, m_sys_info, m_linux_distribution, m_free_bsd):
+        self.assertEqual(err_code, dist_check_timestamp())
+
+    @mock.patch('cloudinit.util.system_info', return_value={'dist': ('', '',
+                                                                     '')})
+    @mock.patch('platform.linux_distribution', return_value=('', '', ''))
+    @mock.patch('cloudinit.util.is_FreeBSD', return_value=True)
+    def test_freebsd_gentoo_cant_find(self, m_sys_info,
+                                      m_linux_distribution, m_is_FreeBSD):
+        self.assertEqual(err_code, dist_check_timestamp())
+
+    @mock.patch('cloudinit.util.subp', return_value=(0, 1))
+    def test_subp_fails(self, m_subp):
+        self.assertEqual(err_code, dist_check_timestamp())
+
+
+class TestSystemCtlReader(CiTestCase):
+
+    def test_systemctl_invalid_property(self):
+        reader = SystemctlReader('dummyProperty')
+        with self.assertRaises(RuntimeError):
+            reader.parse_epoch_as_float()
+
+    def test_systemctl_invalid_parameter(self):
+        reader = SystemctlReader('dummyProperty', 'dummyParameter')
+        with self.assertRaises(RuntimeError):
+            reader.parse_epoch_as_float()
+
+    @mock.patch('cloudinit.util.subp', return_value=('U=1000000', None))
+    def test_systemctl_works_correctly_threshold(self, m_subp):
+        reader = SystemctlReader('dummyProperty', 'dummyParameter')
+        self.assertEqual(1.0, reader.parse_epoch_as_float())
+        thresh = 1.0 - reader.parse_epoch_as_float()
+        self.assertTrue(thresh < 1e-6)
+        self.assertTrue(thresh > (-1 * 1e-6))
+
+    @mock.patch('cloudinit.util.subp', return_value=('U=0', None))
+    def test_systemctl_succeed_zero(self, m_subp):
+        reader = SystemctlReader('dummyProperty', 'dummyParameter')
+        self.assertEqual(0.0, reader.parse_epoch_as_float())
+
+    @mock.patch('cloudinit.util.subp', return_value=('U=1', None))
+    def test_systemctl_succeed_distinct(self, m_subp):
+        reader = SystemctlReader('dummyProperty', 'dummyParameter')
+        val1 = reader.parse_epoch_as_float()
+        m_subp.return_value = ('U=2', None)
+        reader2 = SystemctlReader('dummyProperty', 'dummyParameter')
+        val2 = reader2.parse_epoch_as_float()
+        self.assertNotEqual(val1, val2)
+
+    @mock.patch('cloudinit.util.subp', return_value=('100', None))
+    def test_systemctl_epoch_not_splittable(self, m_subp):
+        reader = SystemctlReader('dummyProperty', 'dummyParameter')
+        with self.assertRaises(IndexError):
+            reader.parse_epoch_as_float()
+
+    @mock.patch('cloudinit.util.subp', return_value=('U=foobar', None))
+    def test_systemctl_cannot_convert_epoch_to_float(self, m_subp):
+        reader = SystemctlReader('dummyProperty', 'dummyParameter')
+        with self.assertRaises(ValueError):
+            reader.parse_epoch_as_float()
+
+
+class TestAnalyzeBoot(CiTestCase):
+
+    def set_up_dummy_file_ci(self, path, log_path):
+        infh = open(path, 'w+')
+        infh.write('2019-07-08 17:40:49,601 - util.py[DEBUG]: Cloud-init v. '
+                   '19.1-1-gbaa47854-0ubuntu1~18.04.1 running \'init-local\' '
+                   'at Mon, 08 Jul 2019 17:40:49 +0000. Up 18.84 seconds.')
+        infh.close()
+        outfh = open(log_path, 'w+')
+        outfh.close()
+
+    def set_up_dummy_file(self, path, log_path):
+        infh = open(path, 'w+')
+        infh.write('dummy data')
+        infh.close()
+        outfh = open(log_path, 'w+')
+        outfh.close()
+
+    def remove_dummy_file(self, path, log_path):
+        if os.path.isfile(path):
+            os.remove(path)
+        if os.path.isfile(log_path):
+            os.remove(log_path)
+
+    @mock.patch('cloudinit.analyze.show.dist_check_timestamp',
+                return_value=err_code)
+    def test_boot_invalid_distro(self, m_dist_check_timestamp):
+
+        path = os.path.dirname(os.path.abspath(__file__))
+        log_path = path + '/boot-test.log'
+        path += '/dummy.log'
+        self.set_up_dummy_file(path, log_path)
+
+        parser = get_parser()
+        args = parser.parse_args(args=['boot', '-i', path, '-o',
+                                       log_path])
+        name_default = ''
+        analyze_boot(name_default, args)
+        # now args have been tested, go into outfile and make sure error
+        # message is in the outfile
+        outfh = open(args.outfile, 'r')
+        data = outfh.read()
+        err_string = 'Your Linux distro or container does not support this ' \
+                     'functionality.\nYou must be running a Kernel ' \
+                     'Telemetry supported distro.\nPlease check ' \
+                     'https://cloudinit.readthedocs.io/en/latest/topics' \
+                     '/analyze.html for more information on supported ' \
+                     'distros.\n'
+
+        self.remove_dummy_file(path, log_path)
+        self.assertEqual(err_string, data)
+
+    @mock.patch("cloudinit.util.is_container", return_value=True)
+    @mock.patch('cloudinit.util.subp', return_value=('U=1000000', None))
+    def test_container_no_ci_log_line(self, m_is_container, m_subp):
+        path = os.path.dirname(os.path.abspath(__file__))
+        log_path = path + '/boot-test.log'
+        path += '/dummy.log'
+        self.set_up_dummy_file(path, log_path)
+
+        parser = get_parser()
+        args = parser.parse_args(args=['boot', '-i', path, '-o',
+                                       log_path])
+        name_default = ''
+
+        finish_code = analyze_boot(name_default, args)
+
+        self.remove_dummy_file(path, log_path)
+        self.assertEqual(FAIL_CODE, finish_code)
+
+    @mock.patch("cloudinit.util.is_container", return_value=True)
+    @mock.patch('cloudinit.util.subp', return_value=('U=1000000', None))
+    @mock.patch('cloudinit.analyze.__main__._get_events', return_value=[{
+        'name': 'init-local', 'description': 'starting search', 'timestamp':
+        100000}])
+    @mock.patch('cloudinit.analyze.show.dist_check_timestamp',
+                return_value=(CONTAINER_CODE, 1, 1, 1))
+    def test_container_ci_log_line(self, m_is_container, m_subp, m_get, m_g):
+        path = os.path.dirname(os.path.abspath(__file__))
+        log_path = path + '/boot-test.log'
+        path += '/dummy.log'
+        self.set_up_dummy_file_ci(path, log_path)
+
+        parser = get_parser()
+        args = parser.parse_args(args=['boot', '-i', path, '-o',
+                                       log_path])
+        name_default = ''
+        finish_code = analyze_boot(name_default, args)
+
+        self.remove_dummy_file(path, log_path)
+        self.assertEqual(CONTAINER_CODE, finish_code)
diff --git a/cloudinit/apport.py b/cloudinit/apport.py
index 22cb7fd..003ff1f 100644
--- a/cloudinit/apport.py
+++ b/cloudinit/apport.py
@@ -23,6 +23,7 @@ KNOWN_CLOUD_NAMES = [
     'CloudStack',
     'DigitalOcean',
     'GCE - Google Compute Engine',
+    'Exoscale',
     'Hetzner Cloud',
     'IBM - (aka SoftLayer or BlueMix)',
     'LXD',
diff --git a/cloudinit/config/cc_apt_configure.py b/cloudinit/config/cc_apt_configure.py
index 919d199..f01e2aa 100644
--- a/cloudinit/config/cc_apt_configure.py
+++ b/cloudinit/config/cc_apt_configure.py
@@ -332,6 +332,8 @@ def apply_apt(cfg, cloud, target):
 
 
 def debconf_set_selections(selections, target=None):
+    if not selections.endswith(b'\n'):
+        selections += b'\n'
     util.subp(['debconf-set-selections'], data=selections, target=target,
               capture=True)
 
@@ -374,7 +376,7 @@ def apply_debconf_selections(cfg, target=None):
 
     selections = '\n'.join(
        [selsets[key] for key in sorted(selsets.keys())])
-    debconf_set_selections(selections.encode() + b"\n", target=target)
+    debconf_set_selections(selections.encode(), target=target)
 
     # get a complete list of packages listed in input
     pkgs_cfgd = set()
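The cc_apt_configure hunk above moves the trailing-newline guard from the caller into `debconf_set_selections` itself, so any payload handed to `debconf-set-selections` is newline-terminated regardless of call site. The guard in isolation (a sketch; the standalone function name is illustrative):

```python
def ensure_newline_terminated(selections):
    # Append a final newline only when the byte payload lacks one, so
    # the last "package question type value" line is always terminated
    # and no caller has to remember to append it.
    if not selections.endswith(b'\n'):
        selections += b'\n'
    return selections
```

Centralizing the guard also makes the operation idempotent: already-terminated input passes through unchanged.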
diff --git a/cloudinit/config/cc_lxd.py b/cloudinit/config/cc_lxd.py
index 71d13ed..d983077 100644
--- a/cloudinit/config/cc_lxd.py
+++ b/cloudinit/config/cc_lxd.py
@@ -152,7 +152,7 @@ def handle(name, cfg, cloud, log, args):
 
     if cmd_attach:
         log.debug("Setting up default lxd bridge: %s" %
-                  " ".join(cmd_create))
+                  " ".join(cmd_attach))
         _lxc(cmd_attach)
 
     elif bridge_cfg:
diff --git a/cloudinit/config/cc_set_passwords.py b/cloudinit/config/cc_set_passwords.py
index 4585e4d..cf9b5ab 100755
--- a/cloudinit/config/cc_set_passwords.py
+++ b/cloudinit/config/cc_set_passwords.py
@@ -9,27 +9,40 @@
 """
 Set Passwords
 -------------
-**Summary:** Set user passwords
+**Summary:** Set user passwords and enable/disable SSH password authentication
 
-Set system passwords and enable or disable ssh password authentication.
-The ``chpasswd`` config key accepts a dictionary containing a single one of two
-keys, either ``expire`` or ``list``. If ``expire`` is specified and is set to
-``false``, then the ``password`` global config key is used as the password for
-all user accounts. If the ``expire`` key is specified and is set to ``true``
-then user passwords will be expired, preventing the default system passwords
-from being used.
-
-If the ``list`` key is provided, a list of
-``username:password`` pairs can be specified. The usernames specified
-must already exist on the system, or have been created using the
-``cc_users_groups`` module. A password can be randomly generated using
-``username:RANDOM`` or ``username:R``. A hashed password can be specified
-using ``username:$6$salt$hash``. Password ssh authentication can be
-enabled, disabled, or left to system defaults using ``ssh_pwauth``.
+This module consumes three top-level config keys: ``ssh_pwauth``, ``chpasswd``
+and ``password``.
+
+The ``ssh_pwauth`` config key determines whether or not sshd will be configured
+to accept password authentication.  True values will enable password auth,
+false values will disable password auth, and the literal string ``unchanged``
+will leave it unchanged.  Setting no value will also leave the current setting
+on-disk unchanged.
+
+The ``chpasswd`` config key accepts a dictionary containing either or both of
+``expire`` and ``list``.
+
+If the ``list`` key is provided, it should contain a list of
+``username:password`` pairs.  This can be either a YAML list (of strings), or a
+multi-line string with one pair per line.  Each user will have the
+corresponding password set.  A password can be randomly generated by specifying
+``RANDOM`` or ``R`` as a user's password.  A hashed password, created by a tool
+like ``mkpasswd``, can be specified; a regex
+(``r'\\$(1|2a|2y|5|6)(\\$.+){2}'``) is used to determine if a password value
+should be treated as a hash.
 
 .. note::
-    if using ``expire: true`` then a ssh authkey should be specified or it may
-    not be possible to login to the system
+    The users specified must already exist on the system.  Users will have
+    been created by the ``cc_users_groups`` module at this point.
+
+By default, all users on the system will have their passwords expired (meaning
+that they will have to be reset the next time the user logs in).  To disable
+this behaviour, set ``expire`` under ``chpasswd`` to a false value.
+
+If a ``list`` of user/password pairs is not specified under ``chpasswd``, then
+the value of the ``password`` config key will be used to set the default user's
+password.
 
 **Internal name:** ``cc_set_passwords``
 
@@ -160,6 +173,8 @@ def handle(_name, cfg, cloud, log, args):
         hashed_users = []
         randlist = []
         users = []
+        # N.B. This regex is included in the documentation (i.e. the module
+        # docstring), so any changes to it should be reflected there.
         prog = re.compile(r'\$(1|2a|2y|5|6)(\$.+){2}')
         for line in plist:
             u, p = line.split(':', 1)
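The cross-referenced regex above decides whether a configured password is a crypt(3)-style hash or a plain value. A quick standalone sketch of that classification (`is_hashed` is an illustrative wrapper, not a cloud-init function):

```python
import re

# Same pattern as cc_set_passwords: matches crypt(3)-style hashes for
# MD5 ($1$), bcrypt ($2a$/$2y$), SHA-256 ($5$) and SHA-512 ($6$).
prog = re.compile(r'\$(1|2a|2y|5|6)(\$.+){2}')

def is_hashed(password):
    """Return True when the value looks like a pre-hashed password."""
    return bool(prog.match(password))

print(is_hashed('$6$salt$hash'))  # SHA-512-style hash -> True
print(is_hashed('RANDOM'))        # plain marker value -> False
```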
diff --git a/cloudinit/config/cc_ssh.py b/cloudinit/config/cc_ssh.py
index f8f7cb3..fdd8f4d 100755
--- a/cloudinit/config/cc_ssh.py
+++ b/cloudinit/config/cc_ssh.py
@@ -91,6 +91,9 @@ public keys.
     ssh_authorized_keys:
         - ssh-rsa AAAAB3NzaC1yc2EAAAABIwAAAGEA3FSyQwBI6Z+nCSjUU ...
         - ssh-rsa AAAAB3NzaC1yc2EAAAABIwAAAQEA3I7VUf2l5gSn5uavROsc5HRDpZ ...
+    ssh_publish_hostkeys:
+        enabled: <true/false> (Defaults to true)
+        blacklist: <list of key types> (Defaults to [dsa])
 """
 
 import glob
@@ -104,6 +107,10 @@ from cloudinit import util
 
 GENERATE_KEY_NAMES = ['rsa', 'dsa', 'ecdsa', 'ed25519']
 KEY_FILE_TPL = '/etc/ssh/ssh_host_%s_key'
+PUBLISH_HOST_KEYS = True
+# Don't publish the dsa hostkey by default since OpenSSH recommends not using
+# it.
+HOST_KEY_PUBLISH_BLACKLIST = ['dsa']
 
 CONFIG_KEY_TO_FILE = {}
 PRIV_TO_PUB = {}
@@ -176,6 +183,23 @@ def handle(_name, cfg, cloud, log, _args):
             util.logexc(log, "Failed generating key type %s to "
                         "file %s", keytype, keyfile)
 
+    if "ssh_publish_hostkeys" in cfg:
+        host_key_blacklist = util.get_cfg_option_list(
+            cfg["ssh_publish_hostkeys"], "blacklist",
+            HOST_KEY_PUBLISH_BLACKLIST)
+        publish_hostkeys = util.get_cfg_option_bool(
+            cfg["ssh_publish_hostkeys"], "enabled", PUBLISH_HOST_KEYS)
+    else:
+        host_key_blacklist = HOST_KEY_PUBLISH_BLACKLIST
+        publish_hostkeys = PUBLISH_HOST_KEYS
+
+    if publish_hostkeys:
+        hostkeys = get_public_host_keys(blacklist=host_key_blacklist)
+        try:
+            cloud.datasource.publish_host_keys(hostkeys)
+        except Exception:
+            util.logexc(log, "Publishing host keys failed!")
+
     try:
         (users, _groups) = ug_util.normalize_users_groups(cfg, cloud.distro)
         (user, _user_config) = ug_util.extract_default(users)
@@ -209,4 +233,35 @@ def apply_credentials(keys, user, disable_root, disable_root_opts):
 
     ssh_util.setup_user_keys(keys, 'root', options=key_prefix)
 
+
+def get_public_host_keys(blacklist=None):
+    """Read host keys from /etc/ssh/*.pub files and return them as a list.
+
+    @param blacklist: List of key types to ignore. e.g. ['dsa', 'rsa']
+    @returns: List of keys, each formatted as a two-element tuple.
+        e.g. [('ssh-rsa', 'AAAAB3Nz...'), ('ssh-ed25519', 'AAAAC3Nx...')]
+    """
+    public_key_file_tmpl = '%s.pub' % (KEY_FILE_TPL,)
+    key_list = []
+    blacklist_files = []
+    if blacklist:
+        # Convert blacklist to filenames:
+        # 'dsa' -> '/etc/ssh/ssh_host_dsa_key.pub'
+        blacklist_files = [public_key_file_tmpl % (key_type,)
+                           for key_type in blacklist]
+    # Get list of public key files and filter out blacklisted files.
+    file_list = [hostfile for hostfile
+                 in glob.glob(public_key_file_tmpl % ('*',))
+                 if hostfile not in blacklist_files]
+
+    # Read host key files, retrieve first two fields as a tuple and
+    # append that tuple to key_list.
+    for file_name in file_list:
+        file_contents = util.load_file(file_name)
+        key_data = file_contents.split()
+        if key_data and len(key_data) > 1:
+            key_list.append(tuple(key_data[:2]))
+    return key_list
+
+
 # vi: ts=4 expandtab
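The new `get_public_host_keys` glob-and-filter logic can be exercised standalone against a temp directory instead of `/etc/ssh` (the key material below is dummy data, and this sketch inlines the filtering rather than calling the cloud-init function):

```python
import glob
import os
import shutil
import tempfile

# Stand-in for /etc/ssh with one rsa and one dsa public key file.
tmp = tempfile.mkdtemp()
key_file_tpl = os.path.join(tmp, 'ssh_host_%s_key')
pub_tpl = '%s.pub' % (key_file_tpl,)

for ktype, data in [('rsa', 'ssh-rsa AAAAB3Nz... root@host'),
                    ('dsa', 'ssh-dss AAAAB3Nz... root@host')]:
    with open(pub_tpl % (ktype,), 'w') as f:
        f.write(data)

# 'dsa' -> '<tmp>/ssh_host_dsa_key.pub', mirroring the blacklist conversion.
blacklist_files = [pub_tpl % (ktype,) for ktype in ['dsa']]
keys = []
for path in glob.glob(pub_tpl % ('*',)):
    if path in blacklist_files:
        continue  # blacklisted key type, skipped (dsa by default)
    with open(path) as f:
        fields = f.read().split()
    if len(fields) > 1:
        keys.append(tuple(fields[:2]))  # (key type, base64 key data)

shutil.rmtree(tmp)
print(keys)  # only the non-blacklisted rsa key remains
```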
diff --git a/cloudinit/config/cc_ubuntu_drivers.py b/cloudinit/config/cc_ubuntu_drivers.py
index 91feb60..297451d 100644
--- a/cloudinit/config/cc_ubuntu_drivers.py
+++ b/cloudinit/config/cc_ubuntu_drivers.py
@@ -2,12 +2,14 @@
 
 """Ubuntu Drivers: Interact with third party drivers in Ubuntu."""
 
+import os
 from textwrap import dedent
 
 from cloudinit.config.schema import (
     get_schema_doc, validate_cloudconfig_schema)
 from cloudinit import log as logging
 from cloudinit.settings import PER_INSTANCE
+from cloudinit import temp_utils
 from cloudinit import type_utils
 from cloudinit import util
 
@@ -64,6 +66,33 @@ OLD_UBUNTU_DRIVERS_STDERR_NEEDLE = (
 __doc__ = get_schema_doc(schema)  # Supplement python help()
 
 
+# Use a debconf template to configure a global debconf variable
+# (linux/nvidia/latelink) setting this to "true" allows the
+# 'linux-restricted-modules' deb to accept the NVIDIA EULA and the package
+# will automatically link the drivers to the running kernel.
+
+# EOL_XENIAL: can then drop this script and use python3-debconf which is only
+# available in Bionic and later. Can't use python3-debconf currently as it
+# isn't in Xenial and doesn't yet support X_LOADTEMPLATEFILE debconf command.
+
+NVIDIA_DEBCONF_CONTENT = """\
+Template: linux/nvidia/latelink
+Type: boolean
+Default: true
+Description: Late-link NVIDIA kernel modules?
+ Enable this to link the NVIDIA kernel modules in cloud-init and
+ make them available for use.
+"""
+
+NVIDIA_DRIVER_LATELINK_DEBCONF_SCRIPT = """\
+#!/bin/sh
+# Allow cloud-init to trigger EULA acceptance via registering a debconf
+# template to set linux/nvidia/latelink true
+. /usr/share/debconf/confmodule
+db_x_loadtemplatefile "$1" cloud-init
+"""
+
+
 def install_drivers(cfg, pkg_install_func):
     if not isinstance(cfg, dict):
         raise TypeError(
@@ -89,9 +118,28 @@ def install_drivers(cfg, pkg_install_func):
     if version_cfg:
         driver_arg += ':{}'.format(version_cfg)
 
-    LOG.debug("Installing NVIDIA drivers (%s=%s, version=%s)",
+    LOG.debug("Installing and activating NVIDIA drivers (%s=%s, version=%s)",
               cfgpath, nv_acc, version_cfg if version_cfg else 'latest')
 
+    # Register and set debconf selection linux/nvidia/latelink = true
+    tdir = temp_utils.mkdtemp(needs_exe=True)
+    debconf_file = os.path.join(tdir, 'nvidia.template')
+    debconf_script = os.path.join(tdir, 'nvidia-debconf.sh')
+    try:
+        util.write_file(debconf_file, NVIDIA_DEBCONF_CONTENT)
+        util.write_file(
+            debconf_script,
+            util.encode_text(NVIDIA_DRIVER_LATELINK_DEBCONF_SCRIPT),
+            mode=0o755)
+        util.subp([debconf_script, debconf_file])
+    except Exception as e:
+        util.logexc(
+            LOG, "Failed to register NVIDIA debconf template: %s", str(e))
+        raise
+    finally:
+        if os.path.isdir(tdir):
+            util.del_dir(tdir)
+
     try:
         util.subp(['ubuntu-drivers', 'install', '--gpgpu', driver_arg])
     except util.ProcessExecutionError as exc:
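The write-run-cleanup flow added to `install_drivers` above can be sketched without debconf itself: write the template and an executable helper script into a temp dir, invoke the script (stubbed out here since `debconf-set-selections` machinery isn't available in a sketch), and remove the directory in a `finally` block regardless of outcome:

```python
import os
import shutil
import tempfile

TEMPLATE = """\
Template: linux/nvidia/latelink
Type: boolean
Default: true
"""

# Mirror of the tempdir lifecycle in install_drivers(); the actual
# util.subp([debconf_script, debconf_file]) call is stubbed out.
tdir = tempfile.mkdtemp()
debconf_file = os.path.join(tdir, 'nvidia.template')
debconf_script = os.path.join(tdir, 'nvidia-debconf.sh')
try:
    with open(debconf_file, 'w') as f:
        f.write(TEMPLATE)
    with open(debconf_script, 'w') as f:
        f.write('#!/bin/sh\n. /usr/share/debconf/confmodule\n')
    os.chmod(debconf_script, 0o755)  # like write_file(..., mode=0o755)
    # util.subp([debconf_script, debconf_file]) would run here
    registered = os.path.isfile(debconf_file) and os.access(
        debconf_script, os.X_OK)
finally:
    shutil.rmtree(tdir)  # mirrors util.del_dir(tdir) in the finally block

print(registered, os.path.isdir(tdir))  # files were staged, dir is gone
```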
diff --git a/cloudinit/config/tests/test_ssh.py b/cloudinit/config/tests/test_ssh.py
index c8a4271..e778984 100644
--- a/cloudinit/config/tests/test_ssh.py
+++ b/cloudinit/config/tests/test_ssh.py
@@ -1,5 +1,6 @@
 # This file is part of cloud-init. See LICENSE file for license information.
 
+import os.path
 
 from cloudinit.config import cc_ssh
 from cloudinit import ssh_util
@@ -12,6 +13,25 @@ MODPATH = "cloudinit.config.cc_ssh."
 class TestHandleSsh(CiTestCase):
     """Test cc_ssh handling of ssh config."""
 
+    def _publish_hostkey_test_setup(self):
+        self.test_hostkeys = {
+            'dsa': ('ssh-dss', 'AAAAB3NzaC1kc3MAAACB'),
+            'ecdsa': ('ecdsa-sha2-nistp256', 'AAAAE2VjZ'),
+            'ed25519': ('ssh-ed25519', 'AAAAC3NzaC1lZDI'),
+            'rsa': ('ssh-rsa', 'AAAAB3NzaC1yc2EAAA'),
+        }
+        self.test_hostkey_files = []
+        hostkey_tmpdir = self.tmp_dir()
+        for key_type in ['dsa', 'ecdsa', 'ed25519', 'rsa']:
+            key_data = self.test_hostkeys[key_type]
+            filename = 'ssh_host_%s_key.pub' % key_type
+            filepath = os.path.join(hostkey_tmpdir, filename)
+            self.test_hostkey_files.append(filepath)
+            with open(filepath, 'w') as f:
+                f.write(' '.join(key_data))
+
+        cc_ssh.KEY_FILE_TPL = os.path.join(hostkey_tmpdir, 'ssh_host_%s_key')
+
     def test_apply_credentials_with_user(self, m_setup_keys):
         """Apply keys for the given user and root."""
         keys = ["key1"]
@@ -64,6 +84,7 @@ class TestHandleSsh(CiTestCase):
         # Mock os.path.exits to True to short-circuit the key writing logic
         m_path_exists.return_value = True
         m_nug.return_value = ([], {})
+        cc_ssh.PUBLISH_HOST_KEYS = False
         cloud = self.tmp_cloud(
             distro='ubuntu', metadata={'public-keys': keys})
         cc_ssh.handle("name", cfg, cloud, None, None)
@@ -149,3 +170,148 @@ class TestHandleSsh(CiTestCase):
         self.assertEqual([mock.call(set(keys), user),
                           mock.call(set(keys), "root", options="")],
                          m_setup_keys.call_args_list)
+
+    @mock.patch(MODPATH + "glob.glob")
+    @mock.patch(MODPATH + "ug_util.normalize_users_groups")
+    @mock.patch(MODPATH + "os.path.exists")
+    def test_handle_publish_hostkeys_default(
+            self, m_path_exists, m_nug, m_glob, m_setup_keys):
+        """Test handle with various configs for ssh_publish_hostkeys."""
+        self._publish_hostkey_test_setup()
+        cc_ssh.PUBLISH_HOST_KEYS = True
+        keys = ["key1"]
+        user = "clouduser"
+        # Return no matching keys for first glob, test keys for second.
+        m_glob.side_effect = iter([
+            [],
+            self.test_hostkey_files,
+        ])
+        # Mock os.path.exits to True to short-circuit the key writing logic
+        m_path_exists.return_value = True
+        m_nug.return_value = ({user: {"default": user}}, {})
+        cloud = self.tmp_cloud(
+            distro='ubuntu', metadata={'public-keys': keys})
+        cloud.datasource.publish_host_keys = mock.Mock()
+
+        cfg = {}
+        expected_call = [self.test_hostkeys[key_type] for key_type
+                         in ['ecdsa', 'ed25519', 'rsa']]
+        cc_ssh.handle("name", cfg, cloud, None, None)
+        self.assertEqual([mock.call(expected_call)],
+                         cloud.datasource.publish_host_keys.call_args_list)
+
+    @mock.patch(MODPATH + "glob.glob")
+    @mock.patch(MODPATH + "ug_util.normalize_users_groups")
+    @mock.patch(MODPATH + "os.path.exists")
+    def test_handle_publish_hostkeys_config_enable(
+            self, m_path_exists, m_nug, m_glob, m_setup_keys):
+        """Test handle with various configs for ssh_publish_hostkeys."""
+        self._publish_hostkey_test_setup()
+        cc_ssh.PUBLISH_HOST_KEYS = False
+        keys = ["key1"]
+        user = "clouduser"
+        # Return no matching keys for first glob, test keys for second.
+        m_glob.side_effect = iter([
+            [],
+            self.test_hostkey_files,
+        ])
+        # Mock os.path.exits to True to short-circuit the key writing logic
+        m_path_exists.return_value = True
+        m_nug.return_value = ({user: {"default": user}}, {})
+        cloud = self.tmp_cloud(
+            distro='ubuntu', metadata={'public-keys': keys})
+        cloud.datasource.publish_host_keys = mock.Mock()
+
+        cfg = {'ssh_publish_hostkeys': {'enabled': True}}
+        expected_call = [self.test_hostkeys[key_type] for key_type
+                         in ['ecdsa', 'ed25519', 'rsa']]
+        cc_ssh.handle("name", cfg, cloud, None, None)
+        self.assertEqual([mock.call(expected_call)],
+                         cloud.datasource.publish_host_keys.call_args_list)
+
+    @mock.patch(MODPATH + "glob.glob")
+    @mock.patch(MODPATH + "ug_util.normalize_users_groups")
+    @mock.patch(MODPATH + "os.path.exists")
+    def test_handle_publish_hostkeys_config_disable(
+            self, m_path_exists, m_nug, m_glob, m_setup_keys):
+        """Test handle with various configs for ssh_publish_hostkeys."""
+        self._publish_hostkey_test_setup()
+        cc_ssh.PUBLISH_HOST_KEYS = True
+        keys = ["key1"]
+        user = "clouduser"
+        # Return no matching keys for first glob, test keys for second.
+        m_glob.side_effect = iter([
+            [],
+            self.test_hostkey_files,
+        ])
+        # Mock os.path.exits to True to short-circuit the key writing logic
+        m_path_exists.return_value = True
+        m_nug.return_value = ({user: {"default": user}}, {})
+        cloud = self.tmp_cloud(
+            distro='ubuntu', metadata={'public-keys': keys})
+        cloud.datasource.publish_host_keys = mock.Mock()
+
+        cfg = {'ssh_publish_hostkeys': {'enabled': False}}
+        cc_ssh.handle("name", cfg, cloud, None, None)
+        self.assertFalse(cloud.datasource.publish_host_keys.call_args_list)
+        cloud.datasource.publish_host_keys.assert_not_called()
+
+    @mock.patch(MODPATH + "glob.glob")
+    @mock.patch(MODPATH + "ug_util.normalize_users_groups")
+    @mock.patch(MODPATH + "os.path.exists")
+    def test_handle_publish_hostkeys_config_blacklist(
+            self, m_path_exists, m_nug, m_glob, m_setup_keys):
+        """Test handle with various configs for ssh_publish_hostkeys."""
+        self._publish_hostkey_test_setup()
+        cc_ssh.PUBLISH_HOST_KEYS = True
+        keys = ["key1"]
+        user = "clouduser"
+        # Return no matching keys for first glob, test keys for second.
+        m_glob.side_effect = iter([
+            [],
+            self.test_hostkey_files,
+        ])
+        # Mock os.path.exits to True to short-circuit the key writing logic
+        m_path_exists.return_value = True
+        m_nug.return_value = ({user: {"default": user}}, {})
+        cloud = self.tmp_cloud(
+            distro='ubuntu', metadata={'public-keys': keys})
+        cloud.datasource.publish_host_keys = mock.Mock()
+
+        cfg = {'ssh_publish_hostkeys': {'enabled': True,
+                                        'blacklist': ['dsa', 'rsa']}}
+        expected_call = [self.test_hostkeys[key_type] for key_type
+                         in ['ecdsa', 'ed25519']]
+        cc_ssh.handle("name", cfg, cloud, None, None)
+        self.assertEqual([mock.call(expected_call)],
+                         cloud.datasource.publish_host_keys.call_args_list)
+
+    @mock.patch(MODPATH + "glob.glob")
+    @mock.patch(MODPATH + "ug_util.normalize_users_groups")
+    @mock.patch(MODPATH + "os.path.exists")
+    def test_handle_publish_hostkeys_empty_blacklist(
+            self, m_path_exists, m_nug, m_glob, m_setup_keys):
+        """Test handle with various configs for ssh_publish_hostkeys."""
+        self._publish_hostkey_test_setup()
+        cc_ssh.PUBLISH_HOST_KEYS = True
+        keys = ["key1"]
+        user = "clouduser"
+        # Return no matching keys for first glob, test keys for second.
+        m_glob.side_effect = iter([
+            [],
+            self.test_hostkey_files,
+        ])
+        # Mock os.path.exits to True to short-circuit the key writing logic
+        m_path_exists.return_value = True
+        m_nug.return_value = ({user: {"default": user}}, {})
+        cloud = self.tmp_cloud(
+            distro='ubuntu', metadata={'public-keys': keys})
+        cloud.datasource.publish_host_keys = mock.Mock()
+
+        cfg = {'ssh_publish_hostkeys': {'enabled': True,
+                                        'blacklist': []}}
+        expected_call = [self.test_hostkeys[key_type] for key_type
+                         in ['dsa', 'ecdsa', 'ed25519', 'rsa']]
+        cc_ssh.handle("name", cfg, cloud, None, None)
+        self.assertEqual([mock.call(expected_call)],
+                         cloud.datasource.publish_host_keys.call_args_list)
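Each test above seeds the mocked `glob.glob` with `side_effect = iter([...])`, so successive calls return successive values: the first lookup finds no existing key files, the second returns the prepared `.pub` files. A minimal sketch of that `unittest.mock` behavior (paths are illustrative):

```python
from unittest import mock

# side_effect set to an iterable yields one return value per call, in order.
m_glob = mock.Mock()
m_glob.side_effect = iter([
    [],                             # first call: no matches
    ['/tmp/ssh_host_rsa_key.pub'],  # second call: the prepared key files
])

first = m_glob('/etc/ssh/ssh_host_*_key')
second = m_glob('/etc/ssh/ssh_host_*_key.pub')
print(first, second)
```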
diff --git a/cloudinit/config/tests/test_ubuntu_drivers.py b/cloudinit/config/tests/test_ubuntu_drivers.py
index efba4ce..4695269 100644
--- a/cloudinit/config/tests/test_ubuntu_drivers.py
+++ b/cloudinit/config/tests/test_ubuntu_drivers.py
@@ -1,6 +1,7 @@
 # This file is part of cloud-init. See LICENSE file for license information.
 
 import copy
+import os
 
 from cloudinit.tests.helpers import CiTestCase, skipUnlessJsonSchema, mock
 from cloudinit.config.schema import (
@@ -9,11 +10,27 @@ from cloudinit.config import cc_ubuntu_drivers as drivers
 from cloudinit.util import ProcessExecutionError
 
 MPATH = "cloudinit.config.cc_ubuntu_drivers."
+M_TMP_PATH = MPATH + "temp_utils.mkdtemp"
 OLD_UBUNTU_DRIVERS_ERROR_STDERR = (
     "ubuntu-drivers: error: argument <command>: invalid choice: 'install' "
     "(choose from 'list', 'autoinstall', 'devices', 'debug')\n")
 
 
+class AnyTempScriptAndDebconfFile(object):
+
+    def __init__(self, tmp_dir, debconf_file):
+        self.tmp_dir = tmp_dir
+        self.debconf_file = debconf_file
+
+    def __eq__(self, cmd):
+        if not len(cmd) == 2:
+            return False
+        script, debconf_file = cmd
+        if bool(script.startswith(self.tmp_dir) and script.endswith('.sh')):
+            return debconf_file == self.debconf_file
+        return False
+
+
 class TestUbuntuDrivers(CiTestCase):
     cfg_accepted = {'drivers': {'nvidia': {'license-accepted': True}}}
     install_gpgpu = ['ubuntu-drivers', 'install', '--gpgpu', 'nvidia']
@@ -28,16 +45,23 @@ class TestUbuntuDrivers(CiTestCase):
             {'drivers': {'nvidia': {'license-accepted': "TRUE"}}},
             schema=drivers.schema, strict=True)
 
+    @mock.patch(M_TMP_PATH)
     @mock.patch(MPATH + "util.subp", return_value=('', ''))
     @mock.patch(MPATH + "util.which", return_value=False)
-    def _assert_happy_path_taken(self, config, m_which, m_subp):
+    def _assert_happy_path_taken(
+            self, config, m_which, m_subp, m_tmp):
         """Positive path test through handle. Package should be installed."""
+        tdir = self.tmp_dir()
+        debconf_file = os.path.join(tdir, 'nvidia.template')
+        m_tmp.return_value = tdir
         myCloud = mock.MagicMock()
         drivers.handle('ubuntu_drivers', config, myCloud, None, None)
         self.assertEqual([mock.call(['ubuntu-drivers-common'])],
                          myCloud.distro.install_packages.call_args_list)
-        self.assertEqual([mock.call(self.install_gpgpu)],
-                         m_subp.call_args_list)
+        self.assertEqual(
+            [mock.call(AnyTempScriptAndDebconfFile(tdir, debconf_file)),
+             mock.call(self.install_gpgpu)],
+            m_subp.call_args_list)
 
     def test_handle_does_package_install(self):
         self._assert_happy_path_taken(self.cfg_accepted)
@@ -48,19 +72,33 @@ class TestUbuntuDrivers(CiTestCase):
             new_config['drivers']['nvidia']['license-accepted'] = true_value
             self._assert_happy_path_taken(new_config)
 
-    @mock.patch(MPATH + "util.subp", side_effect=ProcessExecutionError(
-        stdout='No drivers found for installation.\n', exit_code=1))
+    @mock.patch(M_TMP_PATH)
+    @mock.patch(MPATH + "util.subp")
     @mock.patch(MPATH + "util.which", return_value=False)
-    def test_handle_raises_error_if_no_drivers_found(self, m_which, m_subp):
+    def test_handle_raises_error_if_no_drivers_found(
+            self, m_which, m_subp, m_tmp):
         """If ubuntu-drivers doesn't install any drivers, raise an error."""
+        tdir = self.tmp_dir()
+        debconf_file = os.path.join(tdir, 'nvidia.template')
+        m_tmp.return_value = tdir
         myCloud = mock.MagicMock()
+
+        def fake_subp(cmd):
+            if cmd[0].startswith(tdir):
+                return
+            raise ProcessExecutionError(
+                stdout='No drivers found for installation.\n', exit_code=1)
+        m_subp.side_effect = fake_subp
+
         with self.assertRaises(Exception):
             drivers.handle(
                 'ubuntu_drivers', self.cfg_accepted, myCloud, None, None)
         self.assertEqual([mock.call(['ubuntu-drivers-common'])],
                          myCloud.distro.install_packages.call_args_list)
-        self.assertEqual([mock.call(self.install_gpgpu)],
-                         m_subp.call_args_list)
+        self.assertEqual(
+            [mock.call(AnyTempScriptAndDebconfFile(tdir, debconf_file)),
+             mock.call(self.install_gpgpu)],
+            m_subp.call_args_list)
         self.assertIn('ubuntu-drivers found no drivers for installation',
                       self.logs.getvalue())
 
@@ -108,18 +146,25 @@ class TestUbuntuDrivers(CiTestCase):
                          myLog.debug.call_args_list[0][0][0])
         self.assertEqual(0, m_install_drivers.call_count)
 
+    @mock.patch(M_TMP_PATH)
     @mock.patch(MPATH + "util.subp", return_value=('', ''))
     @mock.patch(MPATH + "util.which", return_value=True)
-    def test_install_drivers_no_install_if_present(self, m_which, m_subp):
+    def test_install_drivers_no_install_if_present(
+            self, m_which, m_subp, m_tmp):
         """If 'ubuntu-drivers' is present, no package install should occur."""
+        tdir = self.tmp_dir()
+        debconf_file = os.path.join(tdir, 'nvidia.template')
+        m_tmp.return_value = tdir
         pkg_install = mock.MagicMock()
         drivers.install_drivers(self.cfg_accepted['drivers'],
                                 pkg_install_func=pkg_install)
         self.assertEqual(0, pkg_install.call_count)
         self.assertEqual([mock.call('ubuntu-drivers')],
                          m_which.call_args_list)
-        self.assertEqual([mock.call(self.install_gpgpu)],
-                         m_subp.call_args_list)
+        self.assertEqual(
+            [mock.call(AnyTempScriptAndDebconfFile(tdir, debconf_file)),
+             mock.call(self.install_gpgpu)],
+            m_subp.call_args_list)
 
     def test_install_drivers_rejects_invalid_config(self):
         """install_drivers should raise TypeError if not given a config dict"""
@@ -128,20 +173,33 @@ class TestUbuntuDrivers(CiTestCase):
             drivers.install_drivers("mystring", pkg_install_func=pkg_install)
         self.assertEqual(0, pkg_install.call_count)
 
-    @mock.patch(MPATH + "util.subp", side_effect=ProcessExecutionError(
-        stderr=OLD_UBUNTU_DRIVERS_ERROR_STDERR, exit_code=2))
+    @mock.patch(M_TMP_PATH)
+    @mock.patch(MPATH + "util.subp")
     @mock.patch(MPATH + "util.which", return_value=False)
     def test_install_drivers_handles_old_ubuntu_drivers_gracefully(
-            self, m_which, m_subp):
+            self, m_which, m_subp, m_tmp):
         """Older ubuntu-drivers versions should emit message and raise error"""
+        tdir = self.tmp_dir()
+        debconf_file = os.path.join(tdir, 'nvidia.template')
+        m_tmp.return_value = tdir
         myCloud = mock.MagicMock()
+
+        def fake_subp(cmd):
+            if cmd[0].startswith(tdir):
+                return
+            raise ProcessExecutionError(
+                stderr=OLD_UBUNTU_DRIVERS_ERROR_STDERR, exit_code=2)
+        m_subp.side_effect = fake_subp
+
         with self.assertRaises(Exception):
             drivers.handle(
                 'ubuntu_drivers', self.cfg_accepted, myCloud, None, None)
         self.assertEqual([mock.call(['ubuntu-drivers-common'])],
                          myCloud.distro.install_packages.call_args_list)
-        self.assertEqual([mock.call(self.install_gpgpu)],
-                         m_subp.call_args_list)
+        self.assertEqual(
+            [mock.call(AnyTempScriptAndDebconfFile(tdir, debconf_file)),
+             mock.call(self.install_gpgpu)],
+            m_subp.call_args_list)
         self.assertIn('WARNING: the available version of ubuntu-drivers is'
                       ' too old to perform requested driver installation',
                       self.logs.getvalue())
@@ -153,16 +211,21 @@ class TestUbuntuDriversWithVersion(TestUbuntuDrivers):
         'drivers': {'nvidia': {'license-accepted': True, 'version': '123'}}}
     install_gpgpu = ['ubuntu-drivers', 'install', '--gpgpu', 'nvidia:123']
 
+    @mock.patch(M_TMP_PATH)
     @mock.patch(MPATH + "util.subp", return_value=('', ''))
     @mock.patch(MPATH + "util.which", return_value=False)
-    def test_version_none_uses_latest(self, m_which, m_subp):
+    def test_version_none_uses_latest(self, m_which, m_subp, m_tmp):
+        tdir = self.tmp_dir()
+        debconf_file = os.path.join(tdir, 'nvidia.template')
+        m_tmp.return_value = tdir
         myCloud = mock.MagicMock()
         version_none_cfg = {
             'drivers': {'nvidia': {'license-accepted': True, 'version': None}}}
         drivers.handle(
             'ubuntu_drivers', version_none_cfg, myCloud, None, None)
         self.assertEqual(
-            [mock.call(['ubuntu-drivers', 'install', '--gpgpu', 'nvidia'])],
+            [mock.call(AnyTempScriptAndDebconfFile(tdir, debconf_file)),
+             mock.call(['ubuntu-drivers', 'install', '--gpgpu', 'nvidia'])],
             m_subp.call_args_list)
 
     def test_specifying_a_version_doesnt_override_license_acceptance(self):
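`AnyTempScriptAndDebconfFile` works because `mock.call` equality compares arguments, and a custom `__eq__` lets one side match unpredictable values such as random temp paths. A small sketch of the same pattern (`StartsWith` is an illustrative matcher, not from cloud-init):

```python
from unittest import mock

class StartsWith(object):
    """Prefix matcher in the spirit of AnyTempScriptAndDebconfFile:
    compares equal to any string with the given prefix, so assertEqual
    can match mock.call args against unpredictable temp-dir paths."""

    def __init__(self, prefix):
        self.prefix = prefix

    def __eq__(self, other):
        return isinstance(other, str) and other.startswith(self.prefix)

m = mock.Mock()
m('/tmp/tmpabc123/nvidia-debconf.sh')  # arg has a random tmpdir component

# The matcher equals the recorded argument regardless of the random suffix.
matched = [mock.call(StartsWith('/tmp/tmp'))] == m.call_args_list
print(matched)
```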
diff --git a/cloudinit/distros/__init__.py b/cloudinit/distros/__init__.py
index 20c994d..00bdee3 100644
--- a/cloudinit/distros/__init__.py
+++ b/cloudinit/distros/__init__.py
@@ -396,16 +396,16 @@ class Distro(object):
         else:
             create_groups = True
 
-        adduser_cmd = ['useradd', name]
-        log_adduser_cmd = ['useradd', name]
+        useradd_cmd = ['useradd', name]
+        log_useradd_cmd = ['useradd', name]
         if util.system_is_snappy():
-            adduser_cmd.append('--extrausers')
-            log_adduser_cmd.append('--extrausers')
+            useradd_cmd.append('--extrausers')
+            log_useradd_cmd.append('--extrausers')
 
         # Since we are creating users, we want to carefully validate the
         # inputs. If something goes wrong, we can end up with a system
         # that nobody can login to.
-        adduser_opts = {
+        useradd_opts = {
             "gecos": '--comment',
             "homedir": '--home',
             "primary_group": '--gid',
@@ -418,7 +418,7 @@ class Distro(object):
             "selinux_user": '--selinux-user',
         }
 
-        adduser_flags = {
+        useradd_flags = {
             "no_user_group": '--no-user-group',
             "system": '--system',
             "no_log_init": '--no-log-init',
@@ -453,32 +453,32 @@ class Distro(object):
         # Check the values and create the command
         for key, val in sorted(kwargs.items()):
 
-            if key in adduser_opts and val and isinstance(val, str):
-                adduser_cmd.extend([adduser_opts[key], val])
+            if key in useradd_opts and val and isinstance(val, str):
+                useradd_cmd.extend([useradd_opts[key], val])
 
                 # Redact certain fields from the logs
                 if key in redact_opts:
-                    log_adduser_cmd.extend([adduser_opts[key], 'REDACTED'])
+                    log_useradd_cmd.extend([useradd_opts[key], 'REDACTED'])
                 else:
-                    log_adduser_cmd.extend([adduser_opts[key], val])
+                    log_useradd_cmd.extend([useradd_opts[key], val])
 
-            elif key in adduser_flags and val:
-                adduser_cmd.append(adduser_flags[key])
-                log_adduser_cmd.append(adduser_flags[key])
+            elif key in useradd_flags and val:
+                useradd_cmd.append(useradd_flags[key])
+                log_useradd_cmd.append(useradd_flags[key])
 
         # Don't create the home directory if directed so or if the user is a
         # system user
         if kwargs.get('no_create_home') or kwargs.get('system'):
-            adduser_cmd.append('-M')
-            log_adduser_cmd.append('-M')
+            useradd_cmd.append('-M')
+            log_useradd_cmd.append('-M')
         else:
-            adduser_cmd.append('-m')
-            log_adduser_cmd.append('-m')
+            useradd_cmd.append('-m')
+            log_useradd_cmd.append('-m')
 
         # Run the command
         LOG.debug("Adding user %s", name)
         try:
-            util.subp(adduser_cmd, logstring=log_adduser_cmd)
+            util.subp(useradd_cmd, logstring=log_useradd_cmd)
         except Exception as e:
             util.logexc(LOG, "Failed to create user %s", name)
             raise e
@@ -490,15 +490,15 @@ class Distro(object):
 
         snapuser = kwargs.get('snapuser')
         known = kwargs.get('known', False)
-        adduser_cmd = ["snap", "create-user", "--sudoer", "--json"]
+        create_user_cmd = ["snap", "create-user", "--sudoer", "--json"]
         if known:
-            adduser_cmd.append("--known")
-        adduser_cmd.append(snapuser)
+            create_user_cmd.append("--known")
+        create_user_cmd.append(snapuser)
 
         # Run the command
         LOG.debug("Adding snap user %s", name)
         try:
-            (out, err) = util.subp(adduser_cmd, logstring=adduser_cmd,
+            (out, err) = util.subp(create_user_cmd, logstring=create_user_cmd,
                                    capture=True)
             LOG.debug("snap create-user returned: %s:%s", out, err)
             jobj = util.load_json(out)
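For reviewers: the `adduser_cmd` → `useradd_cmd` rename above preserves the existing redaction pattern, where a parallel command list is built for logging with sensitive values replaced. A minimal standalone sketch of that pattern (the option names follow the diff; the `user_cfg` values and username are illustrative):

```python
# Build the real useradd command and a parallel command safe to log,
# substituting 'REDACTED' for sensitive option values (e.g. the hashed
# password), mirroring the loop in Distro.add_user above.
useradd_opts = {"gecos": '--comment', "passwd": '--password',
                "shell": '--shell'}
redact_opts = ['passwd']

user_cfg = {'gecos': 'Jane Doe', 'passwd': '$6$salt$hash',
            'shell': '/bin/bash'}

useradd_cmd = ['useradd', 'jane']
log_useradd_cmd = ['useradd', 'jane']
for key, val in sorted(user_cfg.items()):
    if key in useradd_opts and val and isinstance(val, str):
        useradd_cmd.extend([useradd_opts[key], val])
        log_useradd_cmd.extend(
            [useradd_opts[key], 'REDACTED' if key in redact_opts else val])

print(log_useradd_cmd)
```

`util.subp`'s `logstring` argument then logs `log_useradd_cmd` while executing `useradd_cmd`, so the hash never reaches the logs.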
diff --git a/cloudinit/distros/arch.py b/cloudinit/distros/arch.py
index b814c8b..9f89c5f 100644
--- a/cloudinit/distros/arch.py
+++ b/cloudinit/distros/arch.py
@@ -12,6 +12,8 @@ from cloudinit import util
 from cloudinit.distros import net_util
 from cloudinit.distros.parsers.hostname import HostnameConf
 
+from cloudinit.net.renderers import RendererNotFoundError
+
 from cloudinit.settings import PER_INSTANCE
 
 import os
@@ -24,6 +26,11 @@ class Distro(distros.Distro):
     network_conf_dir = "/etc/netctl"
     resolve_conf_fn = "/etc/resolv.conf"
     init_cmd = ['systemctl']  # init scripts
+    renderer_configs = {
+        "netplan": {"netplan_path": "/etc/netplan/50-cloud-init.yaml",
+                    "netplan_header": "# generated by cloud-init\n",
+                    "postcmds": True}
+    }
 
     def __init__(self, name, cfg, paths):
         distros.Distro.__init__(self, name, cfg, paths)
@@ -50,6 +57,13 @@ class Distro(distros.Distro):
         self.update_package_sources()
         self.package_command('', pkgs=pkglist)
 
+    def _write_network_config(self, netconfig):
+        try:
+            return self._supported_write_network_config(netconfig)
+        except RendererNotFoundError:
+            # Fall back to old _write_network
+            raise NotImplementedError
+
     def _write_network(self, settings):
         entries = net_util.translate_network(settings)
         LOG.debug("Translated ubuntu style network settings %s into %s",
diff --git a/cloudinit/distros/debian.py b/cloudinit/distros/debian.py
index d517fb8..0ad93ff 100644
--- a/cloudinit/distros/debian.py
+++ b/cloudinit/distros/debian.py
@@ -36,14 +36,14 @@ ENI_HEADER = """# This file is generated from information provided by
 # network: {config: disabled}
 """
 
-NETWORK_CONF_FN = "/etc/network/interfaces.d/50-cloud-init.cfg"
+NETWORK_CONF_FN = "/etc/network/interfaces.d/50-cloud-init"
 LOCALE_CONF_FN = "/etc/default/locale"
 
 
 class Distro(distros.Distro):
     hostname_conf_fn = "/etc/hostname"
     network_conf_fn = {
-        "eni": "/etc/network/interfaces.d/50-cloud-init.cfg",
+        "eni": "/etc/network/interfaces.d/50-cloud-init",
         "netplan": "/etc/netplan/50-cloud-init.yaml"
     }
     renderer_configs = {
diff --git a/cloudinit/distros/freebsd.py b/cloudinit/distros/freebsd.py
index ff22d56..f7825fd 100644
--- a/cloudinit/distros/freebsd.py
+++ b/cloudinit/distros/freebsd.py
@@ -185,10 +185,10 @@ class Distro(distros.Distro):
             LOG.info("User %s already exists, skipping.", name)
             return False
 
-        adduser_cmd = ['pw', 'useradd', '-n', name]
-        log_adduser_cmd = ['pw', 'useradd', '-n', name]
+        pw_useradd_cmd = ['pw', 'useradd', '-n', name]
+        log_pw_useradd_cmd = ['pw', 'useradd', '-n', name]
 
-        adduser_opts = {
+        pw_useradd_opts = {
             "homedir": '-d',
             "gecos": '-c',
             "primary_group": '-g',
@@ -196,34 +196,34 @@ class Distro(distros.Distro):
             "shell": '-s',
             "inactive": '-E',
         }
-        adduser_flags = {
+        pw_useradd_flags = {
             "no_user_group": '--no-user-group',
             "system": '--system',
             "no_log_init": '--no-log-init',
         }
 
         for key, val in kwargs.items():
-            if (key in adduser_opts and val and
+            if (key in pw_useradd_opts and val and
                     isinstance(val, six.string_types)):
-                adduser_cmd.extend([adduser_opts[key], val])
+                pw_useradd_cmd.extend([pw_useradd_opts[key], val])
 
-            elif key in adduser_flags and val:
-                adduser_cmd.append(adduser_flags[key])
-                log_adduser_cmd.append(adduser_flags[key])
+            elif key in pw_useradd_flags and val:
+                pw_useradd_cmd.append(pw_useradd_flags[key])
+                log_pw_useradd_cmd.append(pw_useradd_flags[key])
 
         if 'no_create_home' in kwargs or 'system' in kwargs:
-            adduser_cmd.append('-d/nonexistent')
-            log_adduser_cmd.append('-d/nonexistent')
+            pw_useradd_cmd.append('-d/nonexistent')
+            log_pw_useradd_cmd.append('-d/nonexistent')
         else:
-            adduser_cmd.append('-d/usr/home/%s' % name)
-            adduser_cmd.append('-m')
-            log_adduser_cmd.append('-d/usr/home/%s' % name)
-            log_adduser_cmd.append('-m')
+            pw_useradd_cmd.append('-d/usr/home/%s' % name)
+            pw_useradd_cmd.append('-m')
+            log_pw_useradd_cmd.append('-d/usr/home/%s' % name)
+            log_pw_useradd_cmd.append('-m')
 
         # Run the command
         LOG.info("Adding user %s", name)
         try:
-            util.subp(adduser_cmd, logstring=log_adduser_cmd)
+            util.subp(pw_useradd_cmd, logstring=log_pw_useradd_cmd)
         except Exception as e:
             util.logexc(LOG, "Failed to create user %s", name)
             raise e
diff --git a/cloudinit/distros/opensuse.py b/cloudinit/distros/opensuse.py
index 1bfe047..e41e2f7 100644
--- a/cloudinit/distros/opensuse.py
+++ b/cloudinit/distros/opensuse.py
@@ -38,6 +38,8 @@ class Distro(distros.Distro):
         'sysconfig': {
             'control': 'etc/sysconfig/network/config',
             'iface_templates': '%(base)s/network/ifcfg-%(name)s',
+            'netrules_path': (
+                'etc/udev/rules.d/85-persistent-net-cloud-init.rules'),
             'route_templates': {
                 'ipv4': '%(base)s/network/ifroute-%(name)s',
                 'ipv6': '%(base)s/network/ifroute-%(name)s',
diff --git a/cloudinit/distros/parsers/sys_conf.py b/cloudinit/distros/parsers/sys_conf.py
index c27b5d5..44df17d 100644
--- a/cloudinit/distros/parsers/sys_conf.py
+++ b/cloudinit/distros/parsers/sys_conf.py
@@ -43,6 +43,13 @@ def _contains_shell_variable(text):
 
 
 class SysConf(configobj.ConfigObj):
+    """A configobj.ConfigObj subclass specialised for sysconfig files.
+
+    :param contents:
+        The sysconfig file to parse, in a format accepted by
+        ``configobj.ConfigObj.__init__`` (i.e. "a filename, file like object,
+        or list of lines").
+    """
     def __init__(self, contents):
         configobj.ConfigObj.__init__(self, contents,
                                      interpolation=False,
diff --git a/cloudinit/distros/ubuntu.py b/cloudinit/distros/ubuntu.py
index 6815410..e5fcbc5 100644
--- a/cloudinit/distros/ubuntu.py
+++ b/cloudinit/distros/ubuntu.py
@@ -21,6 +21,21 @@ LOG = logging.getLogger(__name__)
 
 class Distro(debian.Distro):
 
+    def __init__(self, name, cfg, paths):
+        super(Distro, self).__init__(name, cfg, paths)
+        # Ubuntu specific network cfg locations
+        self.network_conf_fn = {
+            "eni": "/etc/network/interfaces.d/50-cloud-init.cfg",
+            "netplan": "/etc/netplan/50-cloud-init.yaml"
+        }
+        self.renderer_configs = {
+            "eni": {"eni_path": self.network_conf_fn["eni"],
+                    "eni_header": debian.ENI_HEADER},
+            "netplan": {"netplan_path": self.network_conf_fn["netplan"],
+                        "netplan_header": debian.ENI_HEADER,
+                        "postcmds": True}
+        }
+
     @property
     def preferred_ntp_clients(self):
         """The preferred ntp client is dependent on the version."""
diff --git a/cloudinit/net/__init__.py b/cloudinit/net/__init__.py
index 3642fb1..ea707c0 100644
--- a/cloudinit/net/__init__.py
+++ b/cloudinit/net/__init__.py
@@ -9,6 +9,7 @@ import errno
 import logging
 import os
 import re
+from functools import partial
 
 from cloudinit.net.network_state import mask_to_net_prefix
 from cloudinit import util
@@ -264,46 +265,29 @@ def find_fallback_nic(blacklist_drivers=None):
 
 
 def generate_fallback_config(blacklist_drivers=None, config_driver=None):
-    """Determine which attached net dev is most likely to have a connection and
-    generate network state to run dhcp on that interface"""
-
+    """Generate network cfg v2 for dhcp on the NIC most likely connected."""
     if not config_driver:
         config_driver = False
 
     target_name = find_fallback_nic(blacklist_drivers=blacklist_drivers)
-    if target_name:
-        target_mac = read_sys_net_safe(target_name, 'address')
-        nconf = {'config': [], 'version': 1}
-        cfg = {'type': 'physical', 'name': target_name,
-               'mac_address': target_mac, 'subnets': [{'type': 'dhcp'}]}
-        # inject the device driver name, dev_id into config if enabled and
-        # device has a valid device driver value
-        if config_driver:
-            driver = device_driver(target_name)
-            if driver:
-                cfg['params'] = {
-                    'driver': driver,
-                    'device_id': device_devid(target_name),
-                }
-        nconf['config'].append(cfg)
-        return nconf
-    else:
+    if not target_name:
         # can't read any interfaces addresses (or there are none); give up
         return None
+    target_mac = read_sys_net_safe(target_name, 'address')
+    cfg = {'dhcp4': True, 'set-name': target_name,
+           'match': {'macaddress': target_mac.lower()}}
+    if config_driver:
+        driver = device_driver(target_name)
+        if driver:
+            cfg['match']['driver'] = driver
+    nconf = {'ethernets': {target_name: cfg}, 'version': 2}
+    return nconf
 
 
-def apply_network_config_names(netcfg, strict_present=True, strict_busy=True):
-    """read the network config and rename devices accordingly.
-    if strict_present is false, then do not raise exception if no devices
-    match. if strict_busy is false, then do not raise exception if the
-    device cannot be renamed because it is currently configured.
-
-    renames are only attempted for interfaces of type 'physical'. It is
-    expected that the network system will create other devices with the
-    correct name in place."""
+def extract_physdevs(netcfg):
 
     def _version_1(netcfg):
-        renames = []
+        physdevs = []
         for ent in netcfg.get('config', {}):
             if ent.get('type') != 'physical':
                 continue
@@ -317,11 +301,11 @@ def apply_network_config_names(netcfg, strict_present=True, strict_busy=True):
             driver = device_driver(name)
             if not device_id:
                 device_id = device_devid(name)
-            renames.append([mac, name, driver, device_id])
-        return renames
+            physdevs.append([mac, name, driver, device_id])
+        return physdevs
 
     def _version_2(netcfg):
-        renames = []
+        physdevs = []
         for ent in netcfg.get('ethernets', {}).values():
             # only rename if configured to do so
             name = ent.get('set-name')
@@ -337,16 +321,69 @@ def apply_network_config_names(netcfg, strict_present=True, strict_busy=True):
             driver = device_driver(name)
             if not device_id:
                 device_id = device_devid(name)
-            renames.append([mac, name, driver, device_id])
-        return renames
+            physdevs.append([mac, name, driver, device_id])
+        return physdevs
+
+    version = netcfg.get('version')
+    if version == 1:
+        return _version_1(netcfg)
+    elif version == 2:
+        return _version_2(netcfg)
+
+    raise RuntimeError('Unknown network config version: %s' % version)
+
+
+def wait_for_physdevs(netcfg, strict=True):
+    physdevs = extract_physdevs(netcfg)
+
+    # set of expected iface names and mac addrs
+    expected_ifaces = dict([(iface[0], iface[1]) for iface in physdevs])
+    expected_macs = set(expected_ifaces.keys())
+
+    # set of current macs
+    present_macs = get_interfaces_by_mac().keys()
+
+    # compare the set of expected mac address values to
+    # the current macs present; we only check MAC as cloud-init
+    # has not yet renamed interfaces and the netcfg may include
+    # such renames.
+    for _ in range(0, 5):
+        if expected_macs.issubset(present_macs):
+            LOG.debug('net: all expected physical devices present')
+            return
 
-    if netcfg.get('version') == 1:
-        return _rename_interfaces(_version_1(netcfg))
-    elif netcfg.get('version') == 2:
-        return _rename_interfaces(_version_2(netcfg))
+        missing = expected_macs.difference(present_macs)
+        LOG.debug('net: waiting for expected net devices: %s', missing)
+        for mac in missing:
+            # trigger a settle, unless this interface exists
+            syspath = sys_dev_path(expected_ifaces[mac])
+            settle = partial(util.udevadm_settle, exists=syspath)
+            msg = 'Waiting for udev events to settle or %s exists' % syspath
+            util.log_time(LOG.debug, msg, func=settle)
 
-    raise RuntimeError('Failed to apply network config names. Found bad'
-                       ' network config version: %s' % netcfg.get('version'))
+        # update present_macs after settles
+        present_macs = get_interfaces_by_mac().keys()
+
+    msg = 'Not all expected physical devices present: %s' % missing
+    LOG.warning(msg)
+    if strict:
+        raise RuntimeError(msg)
+
+
+def apply_network_config_names(netcfg, strict_present=True, strict_busy=True):
+    """read the network config and rename devices accordingly.
+    if strict_present is false, then do not raise exception if no devices
+    match. if strict_busy is false, then do not raise exception if the
+    device cannot be renamed because it is currently configured.
+
+    renames are only attempted for interfaces of type 'physical'. It is
+    expected that the network system will create other devices with the
+    correct name in place."""
+
+    try:
+        _rename_interfaces(extract_physdevs(netcfg))
+    except RuntimeError as e:
+        raise RuntimeError('Failed to apply network config names: %s' % e)
 
 
 def interface_has_own_mac(ifname, strict=False):
@@ -622,6 +659,8 @@ def get_interfaces():
             continue
         if is_vlan(name):
             continue
+        if is_bond(name):
+            continue
         mac = get_interface_mac(name)
         # some devices may not have a mac (tun0)
         if not mac:
@@ -677,7 +716,7 @@ class EphemeralIPv4Network(object):
     """
 
     def __init__(self, interface, ip, prefix_or_mask, broadcast, router=None,
-                 connectivity_url=None):
+                 connectivity_url=None, static_routes=None):
         """Setup context manager and validate call signature.
 
         @param interface: Name of the network interface to bring up.
@@ -688,6 +727,7 @@ class EphemeralIPv4Network(object):
         @param router: Optionally the default gateway IP.
         @param connectivity_url: Optionally, a URL to verify if a usable
             connection already exists.
+        @param static_routes: Optionally a list of static routes from DHCP
         """
         if not all([interface, ip, prefix_or_mask, broadcast]):
             raise ValueError(
@@ -704,6 +744,7 @@ class EphemeralIPv4Network(object):
         self.ip = ip
         self.broadcast = broadcast
         self.router = router
+        self.static_routes = static_routes
         self.cleanup_cmds = []  # List of commands to run to cleanup state.
 
     def __enter__(self):
@@ -716,7 +757,21 @@ class EphemeralIPv4Network(object):
             return
 
         self._bringup_device()
-        if self.router:
+
+        # rfc3442 requires us to ignore the router config *if* classless static
+        # routes are provided.
+        #
+        # https://tools.ietf.org/html/rfc3442
+        #
+        # If the DHCP server returns both a Classless Static Routes option and
+        # a Router option, the DHCP client MUST ignore the Router option.
+        #
+        # Similarly, if the DHCP server returns both a Classless Static Routes
+        # option and a Static Routes option, the DHCP client MUST ignore the
+        # Static Routes option.
+        if self.static_routes:
+            self._bringup_static_routes()
+        elif self.router:
             self._bringup_router()
 
     def __exit__(self, excp_type, excp_value, excp_traceback):
@@ -760,6 +815,20 @@ class EphemeralIPv4Network(object):
             ['ip', '-family', 'inet', 'addr', 'del', cidr, 'dev',
              self.interface])
 
+    def _bringup_static_routes(self):
+        # static_routes = [("169.254.169.254/32", "130.56.248.255"),
+        #                  ("0.0.0.0/0", "130.56.240.1")]
+        for net_address, gateway in self.static_routes:
+            via_arg = []
+            if gateway != "0.0.0.0/0":
+                via_arg = ['via', gateway]
+            util.subp(
+                ['ip', '-4', 'route', 'add', net_address] + via_arg +
+                ['dev', self.interface], capture=True)
+            self.cleanup_cmds.insert(
+                0, ['ip', '-4', 'route', 'del', net_address] + via_arg +
+                ['dev', self.interface])
+
     def _bringup_router(self):
         """Perform the ip commands to fully setup the router if needed."""
         # Check if a default route exists and exit if it does
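For reviewers: `generate_fallback_config` above switches from network config v1 to the netplan v2 `ethernets` schema. A sketch of the dict it now emits, using an illustrative NIC name and MAC (the real function reads both from `/sys/class/net` via `find_fallback_nic` and `read_sys_net_safe`):

```python
# Illustrative values; the real code discovers these from sysfs.
target_name = 'eth0'
target_mac = '00:16:3E:AA:BB:CC'

# v2 schema: match by (lower-cased) MAC, rename to target_name, dhcp4 on.
cfg = {'dhcp4': True, 'set-name': target_name,
       'match': {'macaddress': target_mac.lower()}}
nconf = {'ethernets': {target_name: cfg}, 'version': 2}

print(nconf['ethernets']['eth0']['match']['macaddress'])  # 00:16:3e:aa:bb:cc
```

Note the v1 `params`/`device_id` driver injection is replaced by a plain `match: {driver: ...}` key, which `NetworkStateInterpreter` converts back to v1 `params` (see the network_state.py hunk below in this same diff).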
diff --git a/cloudinit/net/cmdline.py b/cloudinit/net/cmdline.py
index f89a0f7..556a10f 100755
--- a/cloudinit/net/cmdline.py
+++ b/cloudinit/net/cmdline.py
@@ -177,21 +177,13 @@ def _is_initramfs_netconfig(files, cmdline):
         return False
 
 
-def read_kernel_cmdline_config(files=None, mac_addrs=None, cmdline=None):
+def read_initramfs_config(files=None, mac_addrs=None, cmdline=None):
     if cmdline is None:
         cmdline = util.get_cmdline()
 
     if files is None:
         files = _get_klibc_net_cfg_files()
 
-    if 'network-config=' in cmdline:
-        data64 = None
-        for tok in cmdline.split():
-            if tok.startswith("network-config="):
-                data64 = tok.split("=", 1)[1]
-        if data64:
-            return util.load_yaml(_b64dgz(data64))
-
     if not _is_initramfs_netconfig(files, cmdline):
         return None
 
197189
@@ -204,4 +196,19 @@ def read_kernel_cmdline_config(files=None, mac_addrs=None, cmdline=None):
204196
205 return config_from_klibc_net_cfg(files=files, mac_addrs=mac_addrs)197 return config_from_klibc_net_cfg(files=files, mac_addrs=mac_addrs)
206198
199
200def read_kernel_cmdline_config(cmdline=None):
201 if cmdline is None:
202 cmdline = util.get_cmdline()
203
204 if 'network-config=' in cmdline:
205 data64 = None
206 for tok in cmdline.split():
207 if tok.startswith("network-config="):
208 data64 = tok.split("=", 1)[1]
209 if data64:
210 return util.load_yaml(_b64dgz(data64))
211
212 return None
213
207# vi: ts=4 expandtab214# vi: ts=4 expandtab
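For context on the split above: `network-config=` carries gzip-compressed, base64-encoded YAML on the kernel command line, which the `_b64dgz` helper reverses. A self-contained round-trip sketch using only the stdlib (the cmdline string here is illustrative):

```python
import base64
import gzip

# Encode a network config the way the kernel cmdline carries it:
# gzip, then base64, prefixed with 'network-config='.
net_yaml = b"version: 2\nethernets:\n  eth0:\n    dhcp4: true\n"
token = 'network-config=' + base64.b64encode(gzip.compress(net_yaml)).decode()
cmdline = 'ro root=/dev/sda1 ' + token

# Decode it the way read_kernel_cmdline_config does: find the token,
# split once on '=' (so base64 '=' padding survives), then un-b64/un-gzip.
data64 = None
for tok in cmdline.split():
    if tok.startswith('network-config='):
        data64 = tok.split('=', 1)[1]

decoded = gzip.decompress(base64.b64decode(data64))
print(decoded == net_yaml)  # True
```

The refactor means initramfs-rendered config (klibc files) and cmdline-embedded config are now read by two separate functions, so callers can rank them as distinct config sources.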
diff --git a/cloudinit/net/dhcp.py b/cloudinit/net/dhcp.py
index c98a97c..1737991 100644
--- a/cloudinit/net/dhcp.py
+++ b/cloudinit/net/dhcp.py
@@ -92,10 +92,14 @@ class EphemeralDHCPv4(object):
         nmap = {'interface': 'interface', 'ip': 'fixed-address',
                 'prefix_or_mask': 'subnet-mask',
                 'broadcast': 'broadcast-address',
+                'static_routes': 'rfc3442-classless-static-routes',
                 'router': 'routers'}
         kwargs = dict([(k, self.lease.get(v)) for k, v in nmap.items()])
         if not kwargs['broadcast']:
             kwargs['broadcast'] = bcip(kwargs['prefix_or_mask'], kwargs['ip'])
+        if kwargs['static_routes']:
+            kwargs['static_routes'] = (
+                parse_static_routes(kwargs['static_routes']))
         if self.connectivity_url:
             kwargs['connectivity_url'] = self.connectivity_url
         ephipv4 = EphemeralIPv4Network(**kwargs)
@@ -272,4 +276,90 @@ def networkd_get_option_from_leases(keyname, leases_d=None):
         return data[keyname]
     return None
 
+
+def parse_static_routes(rfc3442):
+    """ parse rfc3442 format and return a list containing tuple of strings.
+
+    The tuple is composed of the network_address (including net length) and
+    gateway for a parsed static route.
+
+    @param rfc3442: string in rfc3442 format
+    @returns: list of tuple(str, str) for all valid parsed routes until the
+              first parsing error.
+
+    E.g.
+    sr = parse_static_routes(
+        "32,169,254,169,254,130,56,248,255,0,130,56,240,1")
+    sr = [
+        ("169.254.169.254/32", "130.56.248.255"), ("0.0.0.0/0", "130.56.240.1")
+    ]
+
+    Python version of isc-dhclient's hooks:
+       /etc/dhcp/dhclient-exit-hooks.d/rfc3442-classless-routes
+    """
+    # raw strings from dhcp lease may end in semi-colon
+    rfc3442 = rfc3442.rstrip(";")
+    tokens = rfc3442.split(',')
+    static_routes = []
+
+    def _trunc_error(cidr, required, remain):
+        msg = ("RFC3442 string malformed. Current route has CIDR of %s "
+               "and requires %s significant octets, but only %s remain. "
+               "Verify DHCP rfc3442-classless-static-routes value: %s"
+               % (cidr, required, remain, rfc3442))
+        LOG.error(msg)
+
+    current_idx = 0
+    for idx, tok in enumerate(tokens):
+        if idx < current_idx:
+            continue
+        net_length = int(tok)
+        if net_length in range(25, 33):
+            req_toks = 9
+            if len(tokens[idx:]) < req_toks:
+                _trunc_error(net_length, req_toks, len(tokens[idx:]))
+                return static_routes
+            net_address = ".".join(tokens[idx+1:idx+5])
+            gateway = ".".join(tokens[idx+5:idx+req_toks])
+            current_idx = idx + req_toks
+        elif net_length in range(17, 25):
+            req_toks = 8
+            if len(tokens[idx:]) < req_toks:
+                _trunc_error(net_length, req_toks, len(tokens[idx:]))
+                return static_routes
+            net_address = ".".join(tokens[idx+1:idx+4] + ["0"])
+            gateway = ".".join(tokens[idx+4:idx+req_toks])
+            current_idx = idx + req_toks
+        elif net_length in range(9, 17):
+            req_toks = 7
+            if len(tokens[idx:]) < req_toks:
+                _trunc_error(net_length, req_toks, len(tokens[idx:]))
+                return static_routes
+            net_address = ".".join(tokens[idx+1:idx+3] + ["0", "0"])
+            gateway = ".".join(tokens[idx+3:idx+req_toks])
+            current_idx = idx + req_toks
+        elif net_length in range(1, 9):
+            req_toks = 6
+            if len(tokens[idx:]) < req_toks:
+                _trunc_error(net_length, req_toks, len(tokens[idx:]))
+                return static_routes
+            net_address = ".".join(tokens[idx+1:idx+2] + ["0", "0", "0"])
+            gateway = ".".join(tokens[idx+2:idx+req_toks])
+            current_idx = idx + req_toks
+        elif net_length == 0:
+            req_toks = 5
+            if len(tokens[idx:]) < req_toks:
+                _trunc_error(net_length, req_toks, len(tokens[idx:]))
+                return static_routes
+            net_address = "0.0.0.0"
+            gateway = ".".join(tokens[idx+1:idx+req_toks])
+            current_idx = idx + req_toks
+        else:
+            LOG.error('Parsed invalid net length "%s". Verify DHCP '
+                      'rfc3442-classless-static-routes value.', net_length)
+            return static_routes
+
+        static_routes.append(("%s/%s" % (net_address, net_length), gateway))
+
+    return static_routes
+
 # vi: ts=4 expandtab
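For reviewers: the per-prefix-length branches in `parse_static_routes` above all implement the same RFC 3442 rule. Each route is one octet of prefix length, then only the significant destination octets (ceil(prefix/8) of them), then four gateway octets. A condensed, standalone sketch of that decoding (not the code under review, just the rule it encodes; it omits the truncation-error logging):

```python
def decode_rfc3442(value):
    """Decode an rfc3442-classless-static-routes lease string into
    (destination_cidr, gateway) tuples, assuming well-formed input."""
    toks = [int(t) for t in value.rstrip(';').split(',')]
    routes, i = [], 0
    while i < len(toks):
        plen = toks[i]
        n = (plen + 7) // 8  # number of significant destination octets
        dest = toks[i + 1:i + 1 + n] + [0] * (4 - n)  # zero-pad to 4 octets
        gw = toks[i + 1 + n:i + 5 + n]
        routes.append(("%s/%d" % (".".join(map(str, dest)), plen),
                       ".".join(map(str, gw))))
        i += 1 + n + 4
    return routes

print(decode_rfc3442("32,169,254,169,254,130,56,248,255,0,130,56,240,1"))
# [('169.254.169.254/32', '130.56.248.255'), ('0.0.0.0/0', '130.56.240.1')]
```

This matches the docstring example, where a /32 consumes 9 tokens and the trailing /0 default route consumes 5.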
diff --git a/cloudinit/net/network_state.py b/cloudinit/net/network_state.py
index 3702130..c0c415d 100644
--- a/cloudinit/net/network_state.py
+++ b/cloudinit/net/network_state.py
@@ -596,6 +596,7 @@ class NetworkStateInterpreter(object):
           eno1:
             match:
               macaddress: 00:11:22:33:44:55
+              driver: hv_netsvc
             wakeonlan: true
             dhcp4: true
             dhcp6: false
@@ -631,15 +632,18 @@ class NetworkStateInterpreter(object):
                 'type': 'physical',
                 'name': cfg.get('set-name', eth),
             }
-            mac_address = cfg.get('match', {}).get('macaddress', None)
+            match = cfg.get('match', {})
+            mac_address = match.get('macaddress', None)
             if not mac_address:
                 LOG.debug('NetworkState Version2: missing "macaddress" info '
                           'in config entry: %s: %s', eth, str(cfg))
-            phy_cmd.update({'mac_address': mac_address})
-
+            phy_cmd['mac_address'] = mac_address
+            driver = match.get('driver', None)
+            if driver:
+                phy_cmd['params'] = {'driver': driver}
             for key in ['mtu', 'match', 'wakeonlan']:
                 if key in cfg:
-                    phy_cmd.update({key: cfg.get(key)})
+                    phy_cmd[key] = cfg[key]
 
             subnets = self._v2_to_v1_ipcfg(cfg)
             if len(subnets) > 0:
@@ -673,6 +677,8 @@ class NetworkStateInterpreter(object):
                 'vlan_id': cfg.get('id'),
                 'vlan_link': cfg.get('link'),
             }
+            if 'mtu' in cfg:
+                vlan_cmd['mtu'] = cfg['mtu']
             subnets = self._v2_to_v1_ipcfg(cfg)
             if len(subnets) > 0:
                 vlan_cmd.update({'subnets': subnets})
@@ -722,6 +728,8 @@ class NetworkStateInterpreter(object):
722 'params': dict((v2key_to_v1[k], v) for k, v in728 'params': dict((v2key_to_v1[k], v) for k, v in
723 item_params.get('parameters', {}).items())729 item_params.get('parameters', {}).items())
724 }730 }
731 if 'mtu' in item_cfg:
732 v1_cmd['mtu'] = item_cfg['mtu']
725 subnets = self._v2_to_v1_ipcfg(item_cfg)733 subnets = self._v2_to_v1_ipcfg(item_cfg)
726 if len(subnets) > 0:734 if len(subnets) > 0:
727 v1_cmd.update({'subnets': subnets})735 v1_cmd.update({'subnets': subnets})
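Reviewer note: the v2→v1 "physical" translation patched above now carries the matched driver through to `params`. A standalone paraphrase of that logic, for illustration only (the real code lives inside `NetworkStateInterpreter` and operates on `self`; this free function is an assumption for the sketch):

```python
def v2_to_v1_physical(name, cfg):
    # Translate one netplan (v2) ethernet stanza into a v1 'physical'
    # command dict, copying 'driver' from the match section into params,
    # mirroring the patched _v2_common handling above.
    match = cfg.get('match', {})
    phy_cmd = {
        'type': 'physical',
        'name': cfg.get('set-name', name),
        'mac_address': match.get('macaddress'),
    }
    driver = match.get('driver')
    if driver:
        phy_cmd['params'] = {'driver': driver}
    # passthrough keys copied verbatim when present
    for key in ['mtu', 'match', 'wakeonlan']:
        if key in cfg:
            phy_cmd[key] = cfg[key]
    return phy_cmd
```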
diff --git a/cloudinit/net/sysconfig.py b/cloudinit/net/sysconfig.py
index a47da0a..be5dede 100644
--- a/cloudinit/net/sysconfig.py
+++ b/cloudinit/net/sysconfig.py
@@ -284,6 +284,18 @@ class Renderer(renderer.Renderer):
         ('bond_mode', "mode=%s"),
         ('bond_xmit_hash_policy', "xmit_hash_policy=%s"),
         ('bond_miimon', "miimon=%s"),
+        ('bond_min_links', "min_links=%s"),
+        ('bond_arp_interval', "arp_interval=%s"),
+        ('bond_arp_ip_target', "arp_ip_target=%s"),
+        ('bond_arp_validate', "arp_validate=%s"),
+        ('bond_ad_select', "ad_select=%s"),
+        ('bond_num_grat_arp', "num_grat_arp=%s"),
+        ('bond_downdelay', "downdelay=%s"),
+        ('bond_updelay', "updelay=%s"),
+        ('bond_lacp_rate', "lacp_rate=%s"),
+        ('bond_fail_over_mac', "fail_over_mac=%s"),
+        ('bond_primary', "primary=%s"),
+        ('bond_primary_reselect', "primary_reselect=%s"),
     ])
 
     bridge_opts_keys = tuple([
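Reviewer note: the new `bond_*` keys above extend the table the sysconfig renderer uses to build a `BONDING_OPTS` string. A minimal sketch of that rendering pattern (the function name and signature here are illustrative, not the renderer's actual API):

```python
def render_bonding_opts(iface_cfg, bond_params):
    # Join (config_key, "name=%s") pairs into a single space-separated
    # BONDING_OPTS value, skipping keys absent from the interface config.
    pairs = []
    for key, fmt in bond_params:
        value = iface_cfg.get(key)
        if value is not None:
            pairs.append(fmt % value)
    return ' '.join(pairs)
```

With the expanded key table, options like `min_links` and `lacp_rate` now survive into the rendered ifcfg file instead of being silently dropped.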
diff --git a/cloudinit/net/tests/test_dhcp.py b/cloudinit/net/tests/test_dhcp.py
index 5139024..91f503c 100644
--- a/cloudinit/net/tests/test_dhcp.py
+++ b/cloudinit/net/tests/test_dhcp.py
@@ -8,7 +8,8 @@ from textwrap import dedent
 import cloudinit.net as net
 from cloudinit.net.dhcp import (
     InvalidDHCPLeaseFileError, maybe_perform_dhcp_discovery,
-    parse_dhcp_lease_file, dhcp_discovery, networkd_load_leases)
+    parse_dhcp_lease_file, dhcp_discovery, networkd_load_leases,
+    parse_static_routes)
 from cloudinit.util import ensure_file, write_file
 from cloudinit.tests.helpers import (
     CiTestCase, HttprettyTestCase, mock, populate_dir, wrap_and_call)
@@ -64,6 +65,123 @@ class TestParseDHCPLeasesFile(CiTestCase):
         self.assertItemsEqual(expected, parse_dhcp_lease_file(lease_file))
 
 
+class TestDHCPRFC3442(CiTestCase):
+
+    def test_parse_lease_finds_rfc3442_classless_static_routes(self):
+        """parse_dhcp_lease_file returns rfc3442-classless-static-routes."""
+        lease_file = self.tmp_path('leases')
+        content = dedent("""
+            lease {
+              interface "wlp3s0";
+              fixed-address 192.168.2.74;
+              option subnet-mask 255.255.255.0;
+              option routers 192.168.2.1;
+              option rfc3442-classless-static-routes 0,130,56,240,1;
+              renew 4 2017/07/27 18:02:30;
+              expire 5 2017/07/28 07:08:15;
+            }
+        """)
+        expected = [
+            {'interface': 'wlp3s0', 'fixed-address': '192.168.2.74',
+             'subnet-mask': '255.255.255.0', 'routers': '192.168.2.1',
+             'rfc3442-classless-static-routes': '0,130,56,240,1',
+             'renew': '4 2017/07/27 18:02:30',
+             'expire': '5 2017/07/28 07:08:15'}]
+        write_file(lease_file, content)
+        self.assertItemsEqual(expected, parse_dhcp_lease_file(lease_file))
+
+    @mock.patch('cloudinit.net.dhcp.EphemeralIPv4Network')
+    @mock.patch('cloudinit.net.dhcp.maybe_perform_dhcp_discovery')
+    def test_obtain_lease_parses_static_routes(self, m_maybe, m_ipv4):
+        """EphemeralDHPCv4 parses rfc3442 routes for EphemeralIPv4Network"""
+        lease = [
+            {'interface': 'wlp3s0', 'fixed-address': '192.168.2.74',
+             'subnet-mask': '255.255.255.0', 'routers': '192.168.2.1',
+             'rfc3442-classless-static-routes': '0,130,56,240,1',
+             'renew': '4 2017/07/27 18:02:30',
+             'expire': '5 2017/07/28 07:08:15'}]
+        m_maybe.return_value = lease
+        eph = net.dhcp.EphemeralDHCPv4()
+        eph.obtain_lease()
+        expected_kwargs = {
+            'interface': 'wlp3s0',
+            'ip': '192.168.2.74',
+            'prefix_or_mask': '255.255.255.0',
+            'broadcast': '192.168.2.255',
+            'static_routes': [('0.0.0.0/0', '130.56.240.1')],
+            'router': '192.168.2.1'}
+        m_ipv4.assert_called_with(**expected_kwargs)
+
+
+class TestDHCPParseStaticRoutes(CiTestCase):
+
+    with_logs = True
+
+    def parse_static_routes_empty_string(self):
+        self.assertEqual([], parse_static_routes(""))
+
+    def test_parse_static_routes_invalid_input_returns_empty_list(self):
+        rfc3442 = "32,169,254,169,254,130,56,248"
+        self.assertEqual([], parse_static_routes(rfc3442))
+
+    def test_parse_static_routes_bogus_width_returns_empty_list(self):
+        rfc3442 = "33,169,254,169,254,130,56,248"
+        self.assertEqual([], parse_static_routes(rfc3442))
+
+    def test_parse_static_routes_single_ip(self):
+        rfc3442 = "32,169,254,169,254,130,56,248,255"
+        self.assertEqual([('169.254.169.254/32', '130.56.248.255')],
+                         parse_static_routes(rfc3442))
+
+    def test_parse_static_routes_single_ip_handles_trailing_semicolon(self):
+        rfc3442 = "32,169,254,169,254,130,56,248,255;"
+        self.assertEqual([('169.254.169.254/32', '130.56.248.255')],
+                         parse_static_routes(rfc3442))
+
+    def test_parse_static_routes_default_route(self):
+        rfc3442 = "0,130,56,240,1"
+        self.assertEqual([('0.0.0.0/0', '130.56.240.1')],
+                         parse_static_routes(rfc3442))
+
+    def test_parse_static_routes_class_c_b_a(self):
+        class_c = "24,192,168,74,192,168,0,4"
+        class_b = "16,172,16,172,16,0,4"
+        class_a = "8,10,10,0,0,4"
+        rfc3442 = ",".join([class_c, class_b, class_a])
+        self.assertEqual(sorted([
+            ("192.168.74.0/24", "192.168.0.4"),
+            ("172.16.0.0/16", "172.16.0.4"),
+            ("10.0.0.0/8", "10.0.0.4")
+        ]), sorted(parse_static_routes(rfc3442)))
+
+    def test_parse_static_routes_logs_error_truncated(self):
+        bad_rfc3442 = {
+            "class_c": "24,169,254,169,10",
+            "class_b": "16,172,16,10",
+            "class_a": "8,10,10",
+            "gateway": "0,0",
+            "netlen": "33,0",
+        }
+        for rfc3442 in bad_rfc3442.values():
+            self.assertEqual([], parse_static_routes(rfc3442))
+
+        logs = self.logs.getvalue()
+        self.assertEqual(len(bad_rfc3442.keys()), len(logs.splitlines()))
+
+    def test_parse_static_routes_returns_valid_routes_until_parse_err(self):
+        class_c = "24,192,168,74,192,168,0,4"
+        class_b = "16,172,16,172,16,0,4"
+        class_a_error = "8,10,10,0,0"
+        rfc3442 = ",".join([class_c, class_b, class_a_error])
+        self.assertEqual(sorted([
+            ("192.168.74.0/24", "192.168.0.4"),
+            ("172.16.0.0/16", "172.16.0.4"),
+        ]), sorted(parse_static_routes(rfc3442)))
+
+        logs = self.logs.getvalue()
+        self.assertIn(rfc3442, logs.splitlines()[0])
+
+
 class TestDHCPDiscoveryClean(CiTestCase):
     with_logs = True
 
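Reviewer note: for anyone skimming the `TestDHCPParseStaticRoutes` cases above, this is a sketch of the RFC 3442 decoding they exercise, consistent with the test expectations. It is an assumption, not the implementation under review: the real `parse_static_routes` in `cloudinit/net/dhcp.py` also logs each malformed entry, which is omitted here.

```python
def parse_static_routes(rfc3442):
    # Decode an RFC 3442 classless-static-routes option string into
    # (destination, gateway) tuples, e.g.
    # "0,130,56,240,1" -> [("0.0.0.0/0", "130.56.240.1")].
    # A bogus prefix width or truncated entry stops parsing; routes
    # decoded so far are returned.
    routes = []
    tokens = [t for t in rfc3442.strip().rstrip(';').split(',') if t]
    idx = 0
    while idx < len(tokens):
        width = int(tokens[idx])
        if width > 32:
            break                      # invalid prefix length
        octets = (width + 7) // 8      # significant destination octets
        need = 1 + octets + 4          # width + destination + gateway
        if idx + need > len(tokens):
            break                      # truncated entry
        dest = tokens[idx + 1:idx + 1 + octets] + ['0'] * (4 - octets)
        gateway = tokens[idx + 1 + octets:idx + need]
        routes.append(('%s/%d' % ('.'.join(dest), width), '.'.join(gateway)))
        idx += need
    return routes
```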
diff --git a/cloudinit/net/tests/test_init.py b/cloudinit/net/tests/test_init.py
index 6d2affe..d2e38f0 100644
--- a/cloudinit/net/tests/test_init.py
+++ b/cloudinit/net/tests/test_init.py
@@ -212,9 +212,9 @@ class TestGenerateFallbackConfig(CiTestCase):
         mac = 'aa:bb:cc:aa:bb:cc'
         write_file(os.path.join(self.sysdir, 'eth1', 'address'), mac)
         expected = {
-            'config': [{'type': 'physical', 'mac_address': mac,
-                        'name': 'eth1', 'subnets': [{'type': 'dhcp'}]}],
-            'version': 1}
+            'ethernets': {'eth1': {'match': {'macaddress': mac},
+                                   'dhcp4': True, 'set-name': 'eth1'}},
+            'version': 2}
         self.assertEqual(expected, net.generate_fallback_config())
 
     def test_generate_fallback_finds_dormant_eth_with_mac(self):
@@ -223,9 +223,9 @@ class TestGenerateFallbackConfig(CiTestCase):
         mac = 'aa:bb:cc:aa:bb:cc'
         write_file(os.path.join(self.sysdir, 'eth0', 'address'), mac)
         expected = {
-            'config': [{'type': 'physical', 'mac_address': mac,
-                        'name': 'eth0', 'subnets': [{'type': 'dhcp'}]}],
-            'version': 1}
+            'ethernets': {'eth0': {'match': {'macaddress': mac}, 'dhcp4': True,
+                                   'set-name': 'eth0'}},
+            'version': 2}
         self.assertEqual(expected, net.generate_fallback_config())
 
     def test_generate_fallback_finds_eth_by_operstate(self):
@@ -233,9 +233,10 @@ class TestGenerateFallbackConfig(CiTestCase):
         mac = 'aa:bb:cc:aa:bb:cc'
         write_file(os.path.join(self.sysdir, 'eth0', 'address'), mac)
         expected = {
-            'config': [{'type': 'physical', 'mac_address': mac,
-                        'name': 'eth0', 'subnets': [{'type': 'dhcp'}]}],
-            'version': 1}
+            'ethernets': {
+                'eth0': {'dhcp4': True, 'match': {'macaddress': mac},
+                         'set-name': 'eth0'}},
+            'version': 2}
         valid_operstates = ['dormant', 'down', 'lowerlayerdown', 'unknown']
         for state in valid_operstates:
             write_file(os.path.join(self.sysdir, 'eth0', 'operstate'), state)
@@ -549,6 +550,45 @@ class TestEphemeralIPV4Network(CiTestCase):
         self.assertEqual(expected_setup_calls, m_subp.call_args_list)
         m_subp.assert_has_calls(expected_teardown_calls)
 
+    def test_ephemeral_ipv4_network_with_rfc3442_static_routes(self, m_subp):
+        params = {
+            'interface': 'eth0', 'ip': '192.168.2.2',
+            'prefix_or_mask': '255.255.255.0', 'broadcast': '192.168.2.255',
+            'static_routes': [('169.254.169.254/32', '192.168.2.1'),
+                              ('0.0.0.0/0', '192.168.2.1')],
+            'router': '192.168.2.1'}
+        expected_setup_calls = [
+            mock.call(
+                ['ip', '-family', 'inet', 'addr', 'add', '192.168.2.2/24',
+                 'broadcast', '192.168.2.255', 'dev', 'eth0'],
+                capture=True, update_env={'LANG': 'C'}),
+            mock.call(
+                ['ip', '-family', 'inet', 'link', 'set', 'dev', 'eth0', 'up'],
+                capture=True),
+            mock.call(
+                ['ip', '-4', 'route', 'add', '169.254.169.254/32',
+                 'via', '192.168.2.1', 'dev', 'eth0'], capture=True),
+            mock.call(
+                ['ip', '-4', 'route', 'add', '0.0.0.0/0',
+                 'via', '192.168.2.1', 'dev', 'eth0'], capture=True)]
+        expected_teardown_calls = [
+            mock.call(
+                ['ip', '-4', 'route', 'del', '0.0.0.0/0',
+                 'via', '192.168.2.1', 'dev', 'eth0'], capture=True),
+            mock.call(
+                ['ip', '-4', 'route', 'del', '169.254.169.254/32',
+                 'via', '192.168.2.1', 'dev', 'eth0'], capture=True),
+            mock.call(
+                ['ip', '-family', 'inet', 'link', 'set', 'dev',
+                 'eth0', 'down'], capture=True),
+            mock.call(
+                ['ip', '-family', 'inet', 'addr', 'del',
+                 '192.168.2.2/24', 'dev', 'eth0'], capture=True)
+        ]
+        with net.EphemeralIPv4Network(**params):
+            self.assertEqual(expected_setup_calls, m_subp.call_args_list)
+        m_subp.assert_has_calls(expected_setup_calls + expected_teardown_calls)
+
 
 class TestApplyNetworkCfgNames(CiTestCase):
     V1_CONFIG = textwrap.dedent("""\
@@ -669,3 +709,216 @@ class TestHasURLConnectivity(HttprettyTestCase):
         httpretty.register_uri(httpretty.GET, self.url, body={}, status=404)
         self.assertFalse(
             net.has_url_connectivity(self.url), 'Expected False on url fail')
+
+
+def _mk_v1_phys(mac, name, driver, device_id):
+    v1_cfg = {'type': 'physical', 'name': name, 'mac_address': mac}
+    params = {}
+    if driver:
+        params.update({'driver': driver})
+    if device_id:
+        params.update({'device_id': device_id})
+
+    if params:
+        v1_cfg.update({'params': params})
+
+    return v1_cfg
+
+
+def _mk_v2_phys(mac, name, driver=None, device_id=None):
+    v2_cfg = {'set-name': name, 'match': {'macaddress': mac}}
+    if driver:
+        v2_cfg['match'].update({'driver': driver})
+    if device_id:
+        v2_cfg['match'].update({'device_id': device_id})
+
+    return v2_cfg
+
+
+class TestExtractPhysdevs(CiTestCase):
+
+    def setUp(self):
+        super(TestExtractPhysdevs, self).setUp()
+        self.add_patch('cloudinit.net.device_driver', 'm_driver')
+        self.add_patch('cloudinit.net.device_devid', 'm_devid')
+
+    def test_extract_physdevs_looks_up_driver_v1(self):
+        driver = 'virtio'
+        self.m_driver.return_value = driver
+        physdevs = [
+            ['aa:bb:cc:dd:ee:ff', 'eth0', None, '0x1000'],
+        ]
+        netcfg = {
+            'version': 1,
+            'config': [_mk_v1_phys(*args) for args in physdevs],
+        }
+        # insert the driver value for verification
+        physdevs[0][2] = driver
+        self.assertEqual(sorted(physdevs),
+                         sorted(net.extract_physdevs(netcfg)))
+        self.m_driver.assert_called_with('eth0')
+
+    def test_extract_physdevs_looks_up_driver_v2(self):
+        driver = 'virtio'
+        self.m_driver.return_value = driver
+        physdevs = [
+            ['aa:bb:cc:dd:ee:ff', 'eth0', None, '0x1000'],
+        ]
+        netcfg = {
+            'version': 2,
+            'ethernets': {args[1]: _mk_v2_phys(*args) for args in physdevs},
+        }
+        # insert the driver value for verification
+        physdevs[0][2] = driver
+        self.assertEqual(sorted(physdevs),
+                         sorted(net.extract_physdevs(netcfg)))
+        self.m_driver.assert_called_with('eth0')
+
+    def test_extract_physdevs_looks_up_devid_v1(self):
+        devid = '0x1000'
+        self.m_devid.return_value = devid
+        physdevs = [
+            ['aa:bb:cc:dd:ee:ff', 'eth0', 'virtio', None],
+        ]
+        netcfg = {
+            'version': 1,
+            'config': [_mk_v1_phys(*args) for args in physdevs],
+        }
+        # insert the driver value for verification
+        physdevs[0][3] = devid
+        self.assertEqual(sorted(physdevs),
+                         sorted(net.extract_physdevs(netcfg)))
+        self.m_devid.assert_called_with('eth0')
+
+    def test_extract_physdevs_looks_up_devid_v2(self):
+        devid = '0x1000'
+        self.m_devid.return_value = devid
+        physdevs = [
+            ['aa:bb:cc:dd:ee:ff', 'eth0', 'virtio', None],
+        ]
+        netcfg = {
+            'version': 2,
+            'ethernets': {args[1]: _mk_v2_phys(*args) for args in physdevs},
+        }
+        # insert the driver value for verification
+        physdevs[0][3] = devid
+        self.assertEqual(sorted(physdevs),
+                         sorted(net.extract_physdevs(netcfg)))
+        self.m_devid.assert_called_with('eth0')
+
+    def test_get_v1_type_physical(self):
+        physdevs = [
+            ['aa:bb:cc:dd:ee:ff', 'eth0', 'virtio', '0x1000'],
+            ['00:11:22:33:44:55', 'ens3', 'e1000', '0x1643'],
+            ['09:87:65:43:21:10', 'ens0p1', 'mlx4_core', '0:0:1000'],
+        ]
+        netcfg = {
+            'version': 1,
+            'config': [_mk_v1_phys(*args) for args in physdevs],
+        }
+        self.assertEqual(sorted(physdevs),
+                         sorted(net.extract_physdevs(netcfg)))
+
+    def test_get_v2_type_physical(self):
+        physdevs = [
+            ['aa:bb:cc:dd:ee:ff', 'eth0', 'virtio', '0x1000'],
+            ['00:11:22:33:44:55', 'ens3', 'e1000', '0x1643'],
+            ['09:87:65:43:21:10', 'ens0p1', 'mlx4_core', '0:0:1000'],
+        ]
+        netcfg = {
+            'version': 2,
+            'ethernets': {args[1]: _mk_v2_phys(*args) for args in physdevs},
+        }
+        self.assertEqual(sorted(physdevs),
+                         sorted(net.extract_physdevs(netcfg)))
+
+    def test_get_v2_type_physical_skips_if_no_set_name(self):
+        netcfg = {
+            'version': 2,
+            'ethernets': {
+                'ens3': {
+                    'match': {'macaddress': '00:11:22:33:44:55'},
+                }
+            }
+        }
+        self.assertEqual([], net.extract_physdevs(netcfg))
+
+    def test_runtime_error_on_unknown_netcfg_version(self):
+        with self.assertRaises(RuntimeError):
+            net.extract_physdevs({'version': 3, 'awesome_config': []})
+
+
+class TestWaitForPhysdevs(CiTestCase):
+
+    with_logs = True
+
+    def setUp(self):
+        super(TestWaitForPhysdevs, self).setUp()
+        self.add_patch('cloudinit.net.get_interfaces_by_mac',
+                       'm_get_iface_mac')
+        self.add_patch('cloudinit.util.udevadm_settle', 'm_udev_settle')
+
+    def test_wait_for_physdevs_skips_settle_if_all_present(self):
+        physdevs = [
+            ['aa:bb:cc:dd:ee:ff', 'eth0', 'virtio', '0x1000'],
+            ['00:11:22:33:44:55', 'ens3', 'e1000', '0x1643'],
+        ]
+        netcfg = {
+            'version': 2,
+            'ethernets': {args[1]: _mk_v2_phys(*args)
+                          for args in physdevs},
+        }
+        self.m_get_iface_mac.side_effect = iter([
+            {'aa:bb:cc:dd:ee:ff': 'eth0',
+             '00:11:22:33:44:55': 'ens3'},
+        ])
+        net.wait_for_physdevs(netcfg)
+        self.assertEqual(0, self.m_udev_settle.call_count)
+
+    def test_wait_for_physdevs_calls_udev_settle_on_missing(self):
+        physdevs = [
+            ['aa:bb:cc:dd:ee:ff', 'eth0', 'virtio', '0x1000'],
+            ['00:11:22:33:44:55', 'ens3', 'e1000', '0x1643'],
+        ]
+        netcfg = {
+            'version': 2,
+            'ethernets': {args[1]: _mk_v2_phys(*args)
+                          for args in physdevs},
+        }
+        self.m_get_iface_mac.side_effect = iter([
+            {'aa:bb:cc:dd:ee:ff': 'eth0'},  # first call ens3 is missing
+            {'aa:bb:cc:dd:ee:ff': 'eth0',
+             '00:11:22:33:44:55': 'ens3'},  # second call has both
+        ])
+        net.wait_for_physdevs(netcfg)
+        self.m_udev_settle.assert_called_with(exists=net.sys_dev_path('ens3'))
+
+    def test_wait_for_physdevs_raise_runtime_error_if_missing_and_strict(self):
+        physdevs = [
+            ['aa:bb:cc:dd:ee:ff', 'eth0', 'virtio', '0x1000'],
+            ['00:11:22:33:44:55', 'ens3', 'e1000', '0x1643'],
+        ]
+        netcfg = {
+            'version': 2,
+            'ethernets': {args[1]: _mk_v2_phys(*args)
+                          for args in physdevs},
+        }
+        self.m_get_iface_mac.return_value = {}
+        with self.assertRaises(RuntimeError):
+            net.wait_for_physdevs(netcfg)
+
+        self.assertEqual(5 * len(physdevs), self.m_udev_settle.call_count)
+
+    def test_wait_for_physdevs_no_raise_if_not_strict(self):
+        physdevs = [
+            ['aa:bb:cc:dd:ee:ff', 'eth0', 'virtio', '0x1000'],
+            ['00:11:22:33:44:55', 'ens3', 'e1000', '0x1643'],
+        ]
+        netcfg = {
+            'version': 2,
+            'ethernets': {args[1]: _mk_v2_phys(*args)
+                          for args in physdevs},
+        }
+        self.m_get_iface_mac.return_value = {}
+        net.wait_for_physdevs(netcfg, strict=False)
+        self.assertEqual(5 * len(physdevs), self.m_udev_settle.call_count)
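Reviewer note: the `TestWaitForPhysdevs` cases above pin down the retry contract (skip settle when all NICs are present; settle up to five times per missing device; raise only when strict). A dependency-injected sketch of that loop, consistent with those expectations but not the real signature — `net.wait_for_physdevs` takes a netcfg dict and calls `util.udevadm_settle` itself:

```python
def wait_for_physdevs(expected, present_fn, settle_fn,
                      strict=True, max_settles=5):
    # Wait until each expected NIC name shows up, nudging udev via
    # settle_fn up to max_settles times per device before giving up.
    # present_fn returns the set of currently visible interface names.
    for name in expected:
        for _ in range(max_settles):
            if name in present_fn():
                break              # device present; no settle needed
            settle_fn(name)
        else:
            if strict:
                raise RuntimeError('Not all expected devices found: %s'
                                   % name)
```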
diff --git a/cloudinit/settings.py b/cloudinit/settings.py
index b1ebaad..2060d81 100644
--- a/cloudinit/settings.py
+++ b/cloudinit/settings.py
@@ -39,6 +39,7 @@ CFG_BUILTIN = {
         'Hetzner',
         'IBMCloud',
         'Oracle',
+        'Exoscale',
         # At the end to act as a 'catch' when none of the above work...
         'None',
     ],
diff --git a/cloudinit/sources/DataSourceAzure.py b/cloudinit/sources/DataSourceAzure.py
index b7440c1..4984fa8 100755
--- a/cloudinit/sources/DataSourceAzure.py
+++ b/cloudinit/sources/DataSourceAzure.py
@@ -26,9 +26,14 @@ from cloudinit.url_helper import UrlError, readurl, retry_on_url_exc
26from cloudinit import util26from cloudinit import util
27from cloudinit.reporting import events27from cloudinit.reporting import events
2828
29from cloudinit.sources.helpers.azure import (azure_ds_reporter,29from cloudinit.sources.helpers.azure import (
30 azure_ds_telemetry_reporter,30 azure_ds_reporter,
31 get_metadata_from_fabric)31 azure_ds_telemetry_reporter,
32 get_metadata_from_fabric,
33 get_boot_telemetry,
34 get_system_info,
35 report_diagnostic_event,
36 EphemeralDHCPv4WithReporting)
3237
33LOG = logging.getLogger(__name__)38LOG = logging.getLogger(__name__)
3439
@@ -354,7 +359,7 @@ class DataSourceAzure(sources.DataSource):
354 bname = str(pk['fingerprint'] + ".crt")359 bname = str(pk['fingerprint'] + ".crt")
355 fp_files += [os.path.join(ddir, bname)]360 fp_files += [os.path.join(ddir, bname)]
356 LOG.debug("ssh authentication: "361 LOG.debug("ssh authentication: "
357 "using fingerprint from fabirc")362 "using fingerprint from fabric")
358363
359 with events.ReportEventStack(364 with events.ReportEventStack(
360 name="waiting-for-ssh-public-key",365 name="waiting-for-ssh-public-key",
@@ -419,12 +424,17 @@ class DataSourceAzure(sources.DataSource):
419 ret = load_azure_ds_dir(cdev)424 ret = load_azure_ds_dir(cdev)
420425
421 except NonAzureDataSource:426 except NonAzureDataSource:
427 report_diagnostic_event(
428 "Did not find Azure data source in %s" % cdev)
422 continue429 continue
423 except BrokenAzureDataSource as exc:430 except BrokenAzureDataSource as exc:
424 msg = 'BrokenAzureDataSource: %s' % exc431 msg = 'BrokenAzureDataSource: %s' % exc
432 report_diagnostic_event(msg)
425 raise sources.InvalidMetaDataException(msg)433 raise sources.InvalidMetaDataException(msg)
426 except util.MountFailedError:434 except util.MountFailedError:
427 LOG.warning("%s was not mountable", cdev)435 msg = '%s was not mountable' % cdev
436 report_diagnostic_event(msg)
437 LOG.warning(msg)
428 continue438 continue
429439
430 perform_reprovision = reprovision or self._should_reprovision(ret)440 perform_reprovision = reprovision or self._should_reprovision(ret)
@@ -432,6 +442,7 @@ class DataSourceAzure(sources.DataSource):
432 if util.is_FreeBSD():442 if util.is_FreeBSD():
433 msg = "Free BSD is not supported for PPS VMs"443 msg = "Free BSD is not supported for PPS VMs"
434 LOG.error(msg)444 LOG.error(msg)
445 report_diagnostic_event(msg)
435 raise sources.InvalidMetaDataException(msg)446 raise sources.InvalidMetaDataException(msg)
436 ret = self._reprovision()447 ret = self._reprovision()
437 imds_md = get_metadata_from_imds(448 imds_md = get_metadata_from_imds(
@@ -450,7 +461,9 @@ class DataSourceAzure(sources.DataSource):
450 break461 break
451462
452 if not found:463 if not found:
453 raise sources.InvalidMetaDataException('No Azure metadata found')464 msg = 'No Azure metadata found'
465 report_diagnostic_event(msg)
466 raise sources.InvalidMetaDataException(msg)
454467
455 if found == ddir:468 if found == ddir:
456 LOG.debug("using files cached in %s", ddir)469 LOG.debug("using files cached in %s", ddir)
@@ -469,9 +482,14 @@ class DataSourceAzure(sources.DataSource):
469 self._report_ready(lease=self._ephemeral_dhcp_ctx.lease)482 self._report_ready(lease=self._ephemeral_dhcp_ctx.lease)
470 self._ephemeral_dhcp_ctx.clean_network() # Teardown ephemeral483 self._ephemeral_dhcp_ctx.clean_network() # Teardown ephemeral
471 else:484 else:
472 with EphemeralDHCPv4() as lease:485 try:
473 self._report_ready(lease=lease)486 with EphemeralDHCPv4WithReporting(
474487 azure_ds_reporter) as lease:
488 self._report_ready(lease=lease)
489 except Exception as e:
490 report_diagnostic_event(
491 "exception while reporting ready: %s" % e)
492 raise
475 return crawled_data493 return crawled_data
476494
477 def _is_platform_viable(self):495 def _is_platform_viable(self):
@@ -493,6 +511,16 @@ class DataSourceAzure(sources.DataSource):
493 if not self._is_platform_viable():511 if not self._is_platform_viable():
494 return False512 return False
495 try:513 try:
514 get_boot_telemetry()
515 except Exception as e:
516 LOG.warning("Failed to get boot telemetry: %s", e)
517
518 try:
519 get_system_info()
520 except Exception as e:
521 LOG.warning("Failed to get system information: %s", e)
522
523 try:
496 crawled_data = util.log_time(524 crawled_data = util.log_time(
497 logfunc=LOG.debug, msg='Crawl of metadata service',525 logfunc=LOG.debug, msg='Crawl of metadata service',
498 func=self.crawl_metadata)526 func=self.crawl_metadata)
@@ -551,27 +579,55 @@ class DataSourceAzure(sources.DataSource):
551 headers = {"Metadata": "true"}579 headers = {"Metadata": "true"}
552 nl_sock = None580 nl_sock = None
553 report_ready = bool(not os.path.isfile(REPORTED_READY_MARKER_FILE))581 report_ready = bool(not os.path.isfile(REPORTED_READY_MARKER_FILE))
582 self.imds_logging_threshold = 1
583 self.imds_poll_counter = 1
584 dhcp_attempts = 0
585 vnet_switched = False
586 return_val = None
554587
555 def exc_cb(msg, exception):588 def exc_cb(msg, exception):
556 if isinstance(exception, UrlError) and exception.code == 404:589 if isinstance(exception, UrlError) and exception.code == 404:
590 if self.imds_poll_counter == self.imds_logging_threshold:
591 # Reducing the logging frequency as we are polling IMDS
592 self.imds_logging_threshold *= 2
593 LOG.debug("Call to IMDS with arguments %s failed "
594 "with status code %s after %s retries",
595 msg, exception.code, self.imds_poll_counter)
596 LOG.debug("Backing off logging threshold for the same "
597 "exception to %d", self.imds_logging_threshold)
598 self.imds_poll_counter += 1
557 return True599 return True
600
558 # If we get an exception while trying to call IMDS, we601 # If we get an exception while trying to call IMDS, we
559 # call DHCP and setup the ephemeral network to acquire the new IP.602 # call DHCP and setup the ephemeral network to acquire the new IP.
603 LOG.debug("Call to IMDS with arguments %s failed with "
604 "status code %s", msg, exception.code)
605 report_diagnostic_event("polling IMDS failed with exception %s"
606 % exception.code)
560 return False607 return False
561608
562 LOG.debug("Wait for vnetswitch to happen")609 LOG.debug("Wait for vnetswitch to happen")
563 while True:610 while True:
564 try:611 try:
565 # Save our EphemeralDHCPv4 context so we avoid repeated dhcp612 # Save our EphemeralDHCPv4 context to avoid repeated dhcp
566 self._ephemeral_dhcp_ctx = EphemeralDHCPv4()613 with events.ReportEventStack(
567 lease = self._ephemeral_dhcp_ctx.obtain_lease()614 name="obtain-dhcp-lease",
615 description="obtain dhcp lease",
616 parent=azure_ds_reporter):
617 self._ephemeral_dhcp_ctx = EphemeralDHCPv4()
618 lease = self._ephemeral_dhcp_ctx.obtain_lease()
619
620 if vnet_switched:
621 dhcp_attempts += 1
568 if report_ready:622 if report_ready:
569 try:623 try:
570 nl_sock = netlink.create_bound_netlink_socket()624 nl_sock = netlink.create_bound_netlink_socket()
571 except netlink.NetlinkCreateSocketError as e:625 except netlink.NetlinkCreateSocketError as e:
626 report_diagnostic_event(e)
572 LOG.warning(e)627 LOG.warning(e)
573 self._ephemeral_dhcp_ctx.clean_network()628 self._ephemeral_dhcp_ctx.clean_network()
574 return629 break
630
575 path = REPORTED_READY_MARKER_FILE631 path = REPORTED_READY_MARKER_FILE
576 LOG.info(632 LOG.info(
577 "Creating a marker file to report ready: %s", path)633 "Creating a marker file to report ready: %s", path)
@@ -579,17 +635,33 @@ class DataSourceAzure(sources.DataSource):
579 pid=os.getpid(), time=time()))635 pid=os.getpid(), time=time()))
580 self._report_ready(lease=lease)636 self._report_ready(lease=lease)
581 report_ready = False637 report_ready = False
582 try:638
583 netlink.wait_for_media_disconnect_connect(639 with events.ReportEventStack(
584 nl_sock, lease['interface'])640 name="wait-for-media-disconnect-connect",
585 except AssertionError as error:641 description="wait for vnet switch",
586 LOG.error(error)642 parent=azure_ds_reporter):
587 return643 try:
644 netlink.wait_for_media_disconnect_connect(
645 nl_sock, lease['interface'])
646 except AssertionError as error:
647 report_diagnostic_event(error)
648 LOG.error(error)
649 break
650
651 vnet_switched = True
588 self._ephemeral_dhcp_ctx.clean_network()652 self._ephemeral_dhcp_ctx.clean_network()
589 else:653 else:
590 return readurl(url, timeout=IMDS_TIMEOUT_IN_SECONDS,654 with events.ReportEventStack(
591 headers=headers, exception_cb=exc_cb,655 name="get-reprovision-data-from-imds",
592 infinite=True, log_req_resp=False).contents656 description="get reprovision data from imds",
657 parent=azure_ds_reporter):
658 return_val = readurl(url,
659 timeout=IMDS_TIMEOUT_IN_SECONDS,
660 headers=headers,
661 exception_cb=exc_cb,
662 infinite=True,
663 log_req_resp=False).contents
664 break
593 except UrlError:665 except UrlError:
594 # Teardown our EphemeralDHCPv4 context on failure as we retry666 # Teardown our EphemeralDHCPv4 context on failure as we retry
595 self._ephemeral_dhcp_ctx.clean_network()667 self._ephemeral_dhcp_ctx.clean_network()
@@ -598,6 +670,14 @@ class DataSourceAzure(sources.DataSource):
598 if nl_sock:670 if nl_sock:
599 nl_sock.close()671 nl_sock.close()
600672
673 if vnet_switched:
674 report_diagnostic_event("attempted dhcp %d times after reuse" %
675 dhcp_attempts)
676 report_diagnostic_event("polled imds %d times after reuse" %
677 self.imds_poll_counter)
678
679 return return_val
680
601 @azure_ds_telemetry_reporter681 @azure_ds_telemetry_reporter
602 def _report_ready(self, lease):682 def _report_ready(self, lease):
603 """Tells the fabric provisioning has completed """683 """Tells the fabric provisioning has completed """
@@ -666,9 +746,12 @@ class DataSourceAzure(sources.DataSource):
666 self.ds_cfg['agent_command'])746 self.ds_cfg['agent_command'])
667 try:747 try:
668 fabric_data = metadata_func()748 fabric_data = metadata_func()
669 except Exception:749 except Exception as e:
750 report_diagnostic_event(
751 "Error communicating with Azure fabric; You may experience "
752 "connectivity issues: %s" % e)
670 LOG.warning(753 LOG.warning(
671 "Error communicating with Azure fabric; You may experience."754 "Error communicating with Azure fabric; You may experience "
672 "connectivity issues.", exc_info=True)755 "connectivity issues.", exc_info=True)
673 return False756 return False
674757
@@ -684,6 +767,11 @@ class DataSourceAzure(sources.DataSource):
684 return767 return
685768
686 @property769 @property
770 def availability_zone(self):
771 return self.metadata.get(
772 'imds', {}).get('compute', {}).get('platformFaultDomain')
773
774 @property
687 def network_config(self):775 def network_config(self):
688 """Generate a network config like net.generate_fallback_network() with776 """Generate a network config like net.generate_fallback_network() with
689 the following exceptions.777 the following exceptions.
@@ -701,6 +789,10 @@ class DataSourceAzure(sources.DataSource):
701 self._network_config = parse_network_config(nc_src)789 self._network_config = parse_network_config(nc_src)
702 return self._network_config790 return self._network_config
703791
792 @property
793 def region(self):
794 return self.metadata.get('imds', {}).get('compute', {}).get('location')
795
704796
705def _partitions_on_device(devpath, maxnum=16):797def _partitions_on_device(devpath, maxnum=16):
706 # return a list of tuples (ptnum, path) for each part on devpath798 # return a list of tuples (ptnum, path) for each part on devpath
@@ -1018,7 +1110,9 @@ def read_azure_ovf(contents):
1018 try:1110 try:
1019 dom = minidom.parseString(contents)1111 dom = minidom.parseString(contents)
1020 except Exception as e:1112 except Exception as e:
1021 raise BrokenAzureDataSource("Invalid ovf-env.xml: %s" % e)1113 error_str = "Invalid ovf-env.xml: %s" % e
1114 report_diagnostic_event(error_str)
1115 raise BrokenAzureDataSource(error_str)
10221116
1023 results = find_child(dom.documentElement,1117 results = find_child(dom.documentElement,
1024 lambda n: n.localName == "ProvisioningSection")1118 lambda n: n.localName == "ProvisioningSection")
@@ -1232,7 +1326,7 @@ def parse_network_config(imds_metadata):
1232 privateIpv4 = addr4['privateIpAddress']1326 privateIpv4 = addr4['privateIpAddress']
1233 if privateIpv4:1327 if privateIpv4:
1234 if dev_config.get('dhcp4', False):1328 if dev_config.get('dhcp4', False):
1235 # Append static address config for nic > 11329 # Append static address config for ip > 1
1236 netPrefix = intf['ipv4']['subnet'][0].get(1330 netPrefix = intf['ipv4']['subnet'][0].get(
1237 'prefix', '24')1331 'prefix', '24')
1238 if not dev_config.get('addresses'):1332 if not dev_config.get('addresses'):
@@ -1242,6 +1336,11 @@ def parse_network_config(imds_metadata):
                                 ip=privateIpv4, prefix=netPrefix))
                     else:
                         dev_config['dhcp4'] = True
+                        # non-primary interfaces should have a higher
+                        # route-metric (cost) so default routes prefer
+                        # primary nic due to lower route-metric value
+                        dev_config['dhcp4-overrides'] = {
+                            'route-metric': (idx + 1) * 100}
             for addr6 in intf['ipv6']['ipAddress']:
                 privateIpv6 = addr6['privateIpAddress']
                 if privateIpv6:
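The route-metric scheme added in this hunk assigns each NIC a metric of `(idx + 1) * 100`, so the primary NIC's default route always has the lowest cost and wins. A minimal stand-alone illustration (the helper name and `ethN` naming are ours, not cloud-init's):

```python
def dhcp4_overrides_for_nics(nic_count):
    """Return per-NIC netplan-style dhcp4-overrides mirroring the
    (idx + 1) * 100 route-metric scheme used in the diff above."""
    return {
        'eth%d' % idx: {'dhcp4': True,
                        'dhcp4-overrides': {'route-metric': (idx + 1) * 100}}
        for idx in range(nic_count)
    }

configs = dhcp4_overrides_for_nics(3)
# eth0 gets metric 100, eth1 200, eth2 300; the lowest metric wins,
# so default traffic prefers the primary NIC.
```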
@@ -1285,8 +1384,13 @@ def get_metadata_from_imds(fallback_nic, retries):
     if net.is_up(fallback_nic):
         return util.log_time(**kwargs)
     else:
-        with EphemeralDHCPv4(fallback_nic):
-            return util.log_time(**kwargs)
+        try:
+            with EphemeralDHCPv4WithReporting(
+                    azure_ds_reporter, fallback_nic):
+                return util.log_time(**kwargs)
+        except Exception as e:
+            report_diagnostic_event("exception while getting metadata: %s" % e)
+            raise
 
 
 @azure_ds_telemetry_reporter
@@ -1299,11 +1403,14 @@ def _get_metadata_from_imds(retries):
             url, timeout=IMDS_TIMEOUT_IN_SECONDS, headers=headers,
             retries=retries, exception_cb=retry_on_url_exc)
     except Exception as e:
-        LOG.debug('Ignoring IMDS instance metadata: %s', e)
+        msg = 'Ignoring IMDS instance metadata: %s' % e
+        report_diagnostic_event(msg)
+        LOG.debug(msg)
         return {}
     try:
         return util.load_json(str(response))
-    except json.decoder.JSONDecodeError:
+    except json.decoder.JSONDecodeError as e:
+        report_diagnostic_event('non-json imds response: %s' % e)
         LOG.warning(
             'Ignoring non-json IMDS instance metadata: %s', str(response))
     return {}
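The parse-or-ignore pattern above (strict JSON parse, diagnostic event on failure, empty-dict fallback) can be exercised in isolation; `parse_imds_response` and its `report` callback are our stand-ins for the helpers in the diff:

```python
import json

def parse_imds_response(raw, report=lambda msg: None):
    """Parse an IMDS body as JSON; report and return {} on failure,
    mirroring the fallback behavior of _get_metadata_from_imds."""
    try:
        return json.loads(raw)
    except ValueError as e:  # json.JSONDecodeError subclasses ValueError
        report('non-json imds response: %s' % e)
        return {}

assert parse_imds_response('{"compute": {"location": "eastus"}}') == \
    {'compute': {'location': 'eastus'}}
assert parse_imds_response('<html>boom</html>') == {}
```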
@@ -1356,8 +1463,10 @@ def _is_platform_viable(seed_dir):
         asset_tag = util.read_dmi_data('chassis-asset-tag')
         if asset_tag == AZURE_CHASSIS_ASSET_TAG:
             return True
-        LOG.debug("Non-Azure DMI asset tag '%s' discovered.", asset_tag)
-        evt.description = "Non-Azure DMI asset tag '%s' discovered.", asset_tag
+        msg = "Non-Azure DMI asset tag '%s' discovered." % asset_tag
+        LOG.debug(msg)
+        evt.description = msg
+        report_diagnostic_event(msg)
         if os.path.exists(os.path.join(seed_dir, 'ovf-env.xml')):
             return True
         return False
diff --git a/cloudinit/sources/DataSourceCloudSigma.py b/cloudinit/sources/DataSourceCloudSigma.py
index 2955d3f..df88f67 100644
--- a/cloudinit/sources/DataSourceCloudSigma.py
+++ b/cloudinit/sources/DataSourceCloudSigma.py
@@ -42,12 +42,8 @@ class DataSourceCloudSigma(sources.DataSource):
@@ -42,12 +42,8 @@ class DataSourceCloudSigma(sources.DataSource):
         if not sys_product_name:
             LOG.debug("system-product-name not available in dmi data")
             return False
-        else:
-            LOG.debug("detected hypervisor as %s", sys_product_name)
-            return 'cloudsigma' in sys_product_name.lower()
-
-        LOG.warning("failed to query dmi data for system product name")
-        return False
+        LOG.debug("detected hypervisor as %s", sys_product_name)
+        return 'cloudsigma' in sys_product_name.lower()
 
     def _get_data(self):
         """
diff --git a/cloudinit/sources/DataSourceExoscale.py b/cloudinit/sources/DataSourceExoscale.py
new file mode 100644
index 0000000..52e7f6f
--- /dev/null
+++ b/cloudinit/sources/DataSourceExoscale.py
@@ -0,0 +1,258 @@
+# Author: Mathieu Corbin <mathieu.corbin@exoscale.com>
+# Author: Christopher Glass <christopher.glass@exoscale.com>
+#
+# This file is part of cloud-init. See LICENSE file for license information.
+
+from cloudinit import ec2_utils as ec2
+from cloudinit import log as logging
+from cloudinit import sources
+from cloudinit import url_helper
+from cloudinit import util
+
+LOG = logging.getLogger(__name__)
+
+METADATA_URL = "http://169.254.169.254"
+API_VERSION = "1.0"
+PASSWORD_SERVER_PORT = 8080
+
+URL_TIMEOUT = 10
+URL_RETRIES = 6
+
+EXOSCALE_DMI_NAME = "Exoscale"
+
+BUILTIN_DS_CONFIG = {
+    # We run the set password config module on every boot in order to enable
+    # resetting the instance's password via the exoscale console (and a
+    # subsequent instance reboot).
+    'cloud_config_modules': [["set-passwords", "always"]]
+}
+
+
+class DataSourceExoscale(sources.DataSource):
+
+    dsname = 'Exoscale'
+
+    def __init__(self, sys_cfg, distro, paths):
+        super(DataSourceExoscale, self).__init__(sys_cfg, distro, paths)
+        LOG.debug("Initializing the Exoscale datasource")
+
+        self.metadata_url = self.ds_cfg.get('metadata_url', METADATA_URL)
+        self.api_version = self.ds_cfg.get('api_version', API_VERSION)
+        self.password_server_port = int(
+            self.ds_cfg.get('password_server_port', PASSWORD_SERVER_PORT))
+        self.url_timeout = self.ds_cfg.get('timeout', URL_TIMEOUT)
+        self.url_retries = self.ds_cfg.get('retries', URL_RETRIES)
+
+        self.extra_config = BUILTIN_DS_CONFIG
+
+    def wait_for_metadata_service(self):
+        """Wait for the metadata service to be reachable."""
+
+        metadata_url = "{}/{}/meta-data/instance-id".format(
+            self.metadata_url, self.api_version)
+
+        url = url_helper.wait_for_url(
+            urls=[metadata_url],
+            max_wait=self.url_max_wait,
+            timeout=self.url_timeout,
+            status_cb=LOG.critical)
+
+        return bool(url)
+
+    def crawl_metadata(self):
+        """
+        Crawl the metadata service when available.
+
+        @returns: Dictionary of crawled metadata content.
+        """
+        metadata_ready = util.log_time(
+            logfunc=LOG.info,
+            msg='waiting for the metadata service',
+            func=self.wait_for_metadata_service)
+
+        if not metadata_ready:
+            return {}
+
+        return read_metadata(self.metadata_url, self.api_version,
+                             self.password_server_port, self.url_timeout,
+                             self.url_retries)
+
+    def _get_data(self):
+        """Fetch the user data, the metadata and the VM password
+        from the metadata service.
+
+        Please refer to the datasource documentation for details on how the
+        metadata server and password server are crawled.
+        """
+        if not self._is_platform_viable():
+            return False
+
+        data = util.log_time(
+            logfunc=LOG.debug,
+            msg='Crawl of metadata service',
+            func=self.crawl_metadata)
+
+        if not data:
+            return False
+
+        self.userdata_raw = data['user-data']
+        self.metadata = data['meta-data']
+        password = data.get('password')
+
+        password_config = {}
+        if password:
+            # Since we have a password, let's make sure we are allowed to use
+            # it by allowing ssh_pwauth.
+            # The password module's default behavior is to leave the
+            # configuration as-is in this regard, so that means it will either
+            # leave the password always disabled if no password is ever set, or
+            # leave the password login enabled if we set it once.
+            password_config = {
+                'ssh_pwauth': True,
+                'password': password,
+                'chpasswd': {
+                    'expire': False,
+                },
+            }
+
+        # builtin extra_config overrides password_config
+        self.extra_config = util.mergemanydict(
+            [self.extra_config, password_config])
+
+        return True
+
+    def get_config_obj(self):
+        return self.extra_config
+
+    def _is_platform_viable(self):
+        return util.read_dmi_data('system-product-name').startswith(
+            EXOSCALE_DMI_NAME)
+
+
+# Used to match classes to dependencies
+datasources = [
+    (DataSourceExoscale, (sources.DEP_FILESYSTEM, sources.DEP_NETWORK)),
+]
+
+
+# Return a list of data sources that match this set of dependencies
+def get_datasource_list(depends):
+    return sources.list_from_depends(depends, datasources)
+
+
+def get_password(metadata_url=METADATA_URL,
+                 api_version=API_VERSION,
+                 password_server_port=PASSWORD_SERVER_PORT,
+                 url_timeout=URL_TIMEOUT,
+                 url_retries=URL_RETRIES):
+    """Obtain the VM's password if set.
+
+    Once fetched the password is marked saved. Future calls to this method may
+    return empty string or 'saved_password'."""
+    password_url = "{}:{}/{}/".format(metadata_url, password_server_port,
+                                      api_version)
+    response = url_helper.read_file_or_url(
+        password_url,
+        ssl_details=None,
+        headers={"DomU_Request": "send_my_password"},
+        timeout=url_timeout,
+        retries=url_retries)
+    password = response.contents.decode('utf-8')
+    # the password is empty or already saved
+    # Note: the original metadata server would answer an additional
+    # 'bad_request' status, but the Exoscale implementation does not.
+    if password in ['', 'saved_password']:
+        return None
+    # save the password
+    url_helper.read_file_or_url(
+        password_url,
+        ssl_details=None,
+        headers={"DomU_Request": "saved_password"},
+        timeout=url_timeout,
+        retries=url_retries)
+    return password
+
+
+def read_metadata(metadata_url=METADATA_URL,
+                  api_version=API_VERSION,
+                  password_server_port=PASSWORD_SERVER_PORT,
+                  url_timeout=URL_TIMEOUT,
+                  url_retries=URL_RETRIES):
+    """Query the metadata server and return the retrieved data."""
+    crawled_metadata = {}
+    crawled_metadata['_metadata_api_version'] = api_version
+    try:
+        crawled_metadata['user-data'] = ec2.get_instance_userdata(
+            api_version,
+            metadata_url,
+            timeout=url_timeout,
+            retries=url_retries)
+        crawled_metadata['meta-data'] = ec2.get_instance_metadata(
+            api_version,
+            metadata_url,
+            timeout=url_timeout,
+            retries=url_retries)
+    except Exception as e:
+        util.logexc(LOG, "failed reading from metadata url %s (%s)",
+                    metadata_url, e)
+        return {}
+
+    try:
+        crawled_metadata['password'] = get_password(
+            api_version=api_version,
+            metadata_url=metadata_url,
+            password_server_port=password_server_port,
+            url_retries=url_retries,
+            url_timeout=url_timeout)
+    except Exception as e:
+        util.logexc(LOG, "failed to read from password server url %s:%s (%s)",
+                    metadata_url, password_server_port, e)
+
+    return crawled_metadata
+
+
+if __name__ == "__main__":
+    import argparse
+
+    parser = argparse.ArgumentParser(description='Query Exoscale Metadata')
+    parser.add_argument(
+        "--endpoint",
+        metavar="URL",
+        help="The url of the metadata service.",
+        default=METADATA_URL)
+    parser.add_argument(
+        "--version",
+        metavar="VERSION",
+        help="The version of the metadata endpoint to query.",
+        default=API_VERSION)
+    parser.add_argument(
+        "--retries",
+        metavar="NUM",
+        type=int,
+        help="The number of retries querying the endpoint.",
+        default=URL_RETRIES)
+    parser.add_argument(
+        "--timeout",
+        metavar="NUM",
+        type=int,
+        help="The time in seconds to wait before timing out.",
+        default=URL_TIMEOUT)
+    parser.add_argument(
+        "--password-port",
+        metavar="PORT",
+        type=int,
+        help="The port on which the password endpoint listens",
+        default=PASSWORD_SERVER_PORT)
+
+    args = parser.parse_args()
+
+    data = read_metadata(
+        metadata_url=args.endpoint,
+        api_version=args.version,
+        password_server_port=args.password_port,
+        url_timeout=args.timeout,
+        url_retries=args.retries)
+
+    print(util.json_dumps(data))
+
+# vi: ts=4 expandtab
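In `_get_data` above, the comment "builtin extra_config overrides password_config" relies on `util.mergemanydict` giving precedence to earlier dicts in the list. A shallow stand-in (our own helper, not the cloud-init implementation, which also merges recursively) makes that precedence concrete:

```python
def merge_first_wins(dicts):
    """Stand-in for cloudinit.util.mergemanydict: keys from earlier
    dicts take precedence over later ones (shallow, for illustration)."""
    merged = {}
    for d in dicts:
        for key, value in d.items():
            merged.setdefault(key, value)
    return merged

extra = {'cloud_config_modules': [["set-passwords", "always"]]}
password_cfg = {'ssh_pwauth': True, 'password': 's3cret',
                'chpasswd': {'expire': False}}
cfg = merge_first_wins([extra, password_cfg])
# Both the always-run module list and the password settings survive;
# on a key collision, the builtin extra_config would win.
```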
diff --git a/cloudinit/sources/DataSourceGCE.py b/cloudinit/sources/DataSourceGCE.py
index d816262..6cbfbba 100644
--- a/cloudinit/sources/DataSourceGCE.py
+++ b/cloudinit/sources/DataSourceGCE.py
@@ -18,10 +18,13 @@ LOG = logging.getLogger(__name__)
 MD_V1_URL = 'http://metadata.google.internal/computeMetadata/v1/'
 BUILTIN_DS_CONFIG = {'metadata_url': MD_V1_URL}
 REQUIRED_FIELDS = ('instance-id', 'availability-zone', 'local-hostname')
+GUEST_ATTRIBUTES_URL = ('http://metadata.google.internal/computeMetadata/'
+                        'v1/instance/guest-attributes')
+HOSTKEY_NAMESPACE = 'hostkeys'
+HEADERS = {'Metadata-Flavor': 'Google'}
 
 
 class GoogleMetadataFetcher(object):
-    headers = {'Metadata-Flavor': 'Google'}
 
     def __init__(self, metadata_address):
         self.metadata_address = metadata_address
@@ -32,7 +35,7 @@ class GoogleMetadataFetcher(object):
             url = self.metadata_address + path
             if is_recursive:
                 url += '/?recursive=True'
-            resp = url_helper.readurl(url=url, headers=self.headers)
+            resp = url_helper.readurl(url=url, headers=HEADERS)
         except url_helper.UrlError as exc:
             msg = "url %s raised exception %s"
             LOG.debug(msg, path, exc)
@@ -90,6 +93,10 @@ class DataSourceGCE(sources.DataSource):
         public_keys_data = self.metadata['public-keys-data']
         return _parse_public_keys(public_keys_data, self.default_user)
 
+    def publish_host_keys(self, hostkeys):
+        for key in hostkeys:
+            _write_host_key_to_guest_attributes(*key)
+
     def get_hostname(self, fqdn=False, resolve_ip=False, metadata_only=False):
         # GCE has long FDQN's and has asked for short hostnames.
         return self.metadata['local-hostname'].split('.')[0]
@@ -103,6 +110,17 @@ class DataSourceGCE(sources.DataSource):
         return self.availability_zone.rsplit('-', 1)[0]
 
 
+def _write_host_key_to_guest_attributes(key_type, key_value):
+    url = '%s/%s/%s' % (GUEST_ATTRIBUTES_URL, HOSTKEY_NAMESPACE, key_type)
+    key_value = key_value.encode('utf-8')
+    resp = url_helper.readurl(url=url, data=key_value, headers=HEADERS,
+                              request_method='PUT', check_status=False)
+    if resp.ok():
+        LOG.debug('Wrote %s host key to guest attributes.', key_type)
+    else:
+        LOG.debug('Unable to write %s host key to guest attributes.', key_type)
+
+
 def _has_expired(public_key):
     # Check whether an SSH key is expired. Public key input is a single SSH
    # public key in the GCE specific key format documented here:
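The guest-attributes write above is just a PUT of the raw key material to a per-key-type URL. The request construction can be checked without any network access; `guest_attribute_request` is our name for the URL/body step, not a cloud-init function:

```python
# Constants mirroring the diff above.
GUEST_ATTRIBUTES_URL = ('http://metadata.google.internal/computeMetadata/'
                        'v1/instance/guest-attributes')
HOSTKEY_NAMESPACE = 'hostkeys'

def guest_attribute_request(key_type, key_value):
    """Build the (url, body) pair that _write_host_key_to_guest_attributes
    would PUT for one (key_type, key_value) host key tuple."""
    url = '%s/%s/%s' % (GUEST_ATTRIBUTES_URL, HOSTKEY_NAMESPACE, key_type)
    return url, key_value.encode('utf-8')

url, body = guest_attribute_request('ssh-rsa', 'AAAAB3NzaC1y...')
# url ends with '/guest-attributes/hostkeys/ssh-rsa'; body is utf-8 bytes.
```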
diff --git a/cloudinit/sources/DataSourceHetzner.py b/cloudinit/sources/DataSourceHetzner.py
index 5c75b65..5029833 100644
--- a/cloudinit/sources/DataSourceHetzner.py
+++ b/cloudinit/sources/DataSourceHetzner.py
@@ -28,6 +28,9 @@ MD_WAIT_RETRY = 2
 
 
 class DataSourceHetzner(sources.DataSource):
+
+    dsname = 'Hetzner'
+
     def __init__(self, sys_cfg, distro, paths):
         sources.DataSource.__init__(self, sys_cfg, distro, paths)
         self.distro = distro
diff --git a/cloudinit/sources/DataSourceOVF.py b/cloudinit/sources/DataSourceOVF.py
index 70e7a5c..dd941d2 100644
--- a/cloudinit/sources/DataSourceOVF.py
+++ b/cloudinit/sources/DataSourceOVF.py
@@ -148,6 +148,9 @@ class DataSourceOVF(sources.DataSource):
                     product_marker, os.path.join(self.paths.cloud_dir, 'data'))
                 special_customization = product_marker and not hasmarkerfile
                 customscript = self._vmware_cust_conf.custom_script_name
+                ccScriptsDir = os.path.join(
+                    self.paths.get_cpath("scripts"),
+                    "per-instance")
             except Exception as e:
                 _raise_error_status(
                     "Error parsing the customization Config File",
@@ -201,7 +204,9 @@ class DataSourceOVF(sources.DataSource):
 
         if customscript:
             try:
-                postcust = PostCustomScript(customscript, imcdirpath)
+                postcust = PostCustomScript(customscript,
+                                            imcdirpath,
+                                            ccScriptsDir)
                 postcust.execute()
             except Exception as e:
                 _raise_error_status(
diff --git a/cloudinit/sources/DataSourceOracle.py b/cloudinit/sources/DataSourceOracle.py
index 70b9c58..6e73f56 100644
--- a/cloudinit/sources/DataSourceOracle.py
+++ b/cloudinit/sources/DataSourceOracle.py
@@ -16,7 +16,7 @@ Notes:
 """
 
 from cloudinit.url_helper import combine_url, readurl, UrlError
-from cloudinit.net import dhcp
+from cloudinit.net import dhcp, get_interfaces_by_mac
 from cloudinit import net
 from cloudinit import sources
 from cloudinit import util
@@ -28,8 +28,80 @@ import re
 
 LOG = logging.getLogger(__name__)
 
+BUILTIN_DS_CONFIG = {
+    # Don't use IMDS to configure secondary NICs by default
+    'configure_secondary_nics': False,
+}
 CHASSIS_ASSET_TAG = "OracleCloud.com"
 METADATA_ENDPOINT = "http://169.254.169.254/openstack/"
+VNIC_METADATA_URL = 'http://169.254.169.254/opc/v1/vnics/'
+# https://docs.cloud.oracle.com/iaas/Content/Network/Troubleshoot/connectionhang.htm#Overview,
+# indicates that an MTU of 9000 is used within OCI
+MTU = 9000
+
+
+def _add_network_config_from_opc_imds(network_config):
+    """
+    Fetch data from Oracle's IMDS, generate secondary NIC config, merge it.
+
+    The primary NIC configuration should not be modified based on the IMDS
+    values, as it should continue to be configured for DHCP. As such, this
+    takes an existing network_config dict which is expected to have the primary
+    NIC configuration already present. It will mutate the given dict to
+    include the secondary VNICs.
+
+    :param network_config:
+        A v1 network config dict with the primary NIC already configured. This
+        dict will be mutated.
+
+    :raises:
+        Exceptions are not handled within this function. Likely exceptions are
+        those raised by url_helper.readurl (if communicating with the IMDS
+        fails), ValueError/JSONDecodeError (if the IMDS returns invalid JSON),
+        and KeyError/IndexError (if the IMDS returns valid JSON with unexpected
+        contents).
+    """
+    resp = readurl(VNIC_METADATA_URL)
+    vnics = json.loads(str(resp))
+
+    if 'nicIndex' in vnics[0]:
+        # TODO: Once configure_secondary_nics defaults to True, lower the level
+        # of this log message. (Currently, if we're running this code at all,
+        # someone has explicitly opted-in to secondary VNIC configuration, so
+        # we should warn them that it didn't happen. Once it's default, this
+        # would be emitted on every Bare Metal Machine launch, which means INFO
+        # or DEBUG would be more appropriate.)
+        LOG.warning(
+            'VNIC metadata indicates this is a bare metal machine; skipping'
+            ' secondary VNIC configuration.'
+        )
+        return
+
+    interfaces_by_mac = get_interfaces_by_mac()
+
+    for vnic_dict in vnics[1:]:
+        # We skip the first entry in the response because the primary interface
+        # is already configured by iSCSI boot; applying configuration from the
+        # IMDS is not required.
+        mac_address = vnic_dict['macAddr'].lower()
+        if mac_address not in interfaces_by_mac:
+            LOG.debug('Interface with MAC %s not found; skipping', mac_address)
+            continue
+        name = interfaces_by_mac[mac_address]
+        subnet = {
+            'type': 'static',
+            'address': vnic_dict['privateIp'],
+            'netmask': vnic_dict['subnetCidrBlock'].split('/')[1],
+            'gateway': vnic_dict['virtualRouterIp'],
+            'control': 'manual',
+        }
+        network_config['config'].append({
+            'name': name,
+            'type': 'physical',
+            'mac_address': mac_address,
+            'mtu': MTU,
+            'subnets': [subnet],
+        })
 
 
 class DataSourceOracle(sources.DataSource):
@@ -37,8 +109,22 @@ class DataSourceOracle(sources.DataSource):
     dsname = 'Oracle'
     system_uuid = None
     vendordata_pure = None
+    network_config_sources = (
+        sources.NetworkConfigSource.cmdline,
+        sources.NetworkConfigSource.ds,
+        sources.NetworkConfigSource.initramfs,
+        sources.NetworkConfigSource.system_cfg,
+    )
+
     _network_config = sources.UNSET
 
+    def __init__(self, sys_cfg, *args, **kwargs):
+        super(DataSourceOracle, self).__init__(sys_cfg, *args, **kwargs)
+
+        self.ds_cfg = util.mergemanydict([
+            util.get_cfg_by_path(sys_cfg, ['datasource', self.dsname], {}),
+            BUILTIN_DS_CONFIG])
+
     def _is_platform_viable(self):
         """Check platform environment to report if this datasource may run."""
         return _is_platform_viable()
@@ -48,7 +134,7 @@ class DataSourceOracle(sources.DataSource):
             return False
 
         # network may be configured if iscsi root. If that is the case
-        # then read_kernel_cmdline_config will return non-None.
+        # then read_initramfs_config will return non-None.
        if _is_iscsi_root():
            data = self.crawl_metadata()
        else:
@@ -118,11 +204,17 @@ class DataSourceOracle(sources.DataSource):
         We nonetheless return cmdline provided config if present
         and fallback to generate fallback."""
         if self._network_config == sources.UNSET:
-            cmdline_cfg = cmdline.read_kernel_cmdline_config()
-            if cmdline_cfg:
-                self._network_config = cmdline_cfg
-            else:
+            self._network_config = cmdline.read_initramfs_config()
+            if not self._network_config:
                 self._network_config = self.distro.generate_fallback_config()
+            if self.ds_cfg.get('configure_secondary_nics'):
+                try:
+                    # Mutate self._network_config to include secondary VNICs
+                    _add_network_config_from_opc_imds(self._network_config)
+                except Exception:
+                    util.logexc(
+                        LOG,
+                        "Failed to fetch secondary network configuration!")
         return self._network_config
 
 
@@ -137,7 +229,7 @@ def _is_platform_viable():
 
 
 def _is_iscsi_root():
-    return bool(cmdline.read_kernel_cmdline_config())
+    return bool(cmdline.read_initramfs_config())
 
 
 def _load_index(content):
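The per-VNIC translation in `_add_network_config_from_opc_imds` is pure data munging and can be exercised on its own; `vnic_to_nic_config` is our name for the loop body, not a cloud-init function, and the sample VNIC record is made up:

```python
MTU = 9000  # OCI-internal MTU, per the comment in the diff above

def vnic_to_nic_config(vnic_dict, name):
    """Translate one IMDS VNIC record into a v1 'physical' NIC entry,
    mirroring the loop body of _add_network_config_from_opc_imds."""
    return {
        'name': name,
        'type': 'physical',
        'mac_address': vnic_dict['macAddr'].lower(),
        'mtu': MTU,
        'subnets': [{
            'type': 'static',
            'address': vnic_dict['privateIp'],
            'netmask': vnic_dict['subnetCidrBlock'].split('/')[1],
            'gateway': vnic_dict['virtualRouterIp'],
            'control': 'manual',
        }],
    }

nic = vnic_to_nic_config({
    'macAddr': '02:00:17:05:D1:DB',
    'privateIp': '10.0.0.231',
    'subnetCidrBlock': '10.0.0.0/24',
    'virtualRouterIp': '10.0.0.1',
}, name='ens3')
# Note: as in the diff, 'netmask' carries the prefix length ('24').
```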
diff --git a/cloudinit/sources/__init__.py b/cloudinit/sources/__init__.py
index e6966b3..a319322 100644
--- a/cloudinit/sources/__init__.py
+++ b/cloudinit/sources/__init__.py
@@ -66,6 +66,13 @@ CLOUD_ID_REGION_PREFIX_MAP = {
     'china': ('azure-china', lambda c: c == 'azure'),  # only change azure
 }
 
+# NetworkConfigSource represents the canonical list of network config sources
+# that cloud-init knows about. (Python 2.7 lacks PEP 435, so use a singleton
+# namedtuple as an enum; see https://stackoverflow.com/a/6971002)
+_NETCFG_SOURCE_NAMES = ('cmdline', 'ds', 'system_cfg', 'fallback', 'initramfs')
+NetworkConfigSource = namedtuple('NetworkConfigSource',
+                                 _NETCFG_SOURCE_NAMES)(*_NETCFG_SOURCE_NAMES)
+
 
 class DataSourceNotFoundException(Exception):
     pass
@@ -153,6 +160,16 @@ class DataSource(object):
     # Track the discovered fallback nic for use in configuration generation.
     _fallback_interface = None
 
+    # The network configuration sources that should be considered for this data
+    # source. (The first source in this list that provides network
+    # configuration will be used without considering any that follow.) This
+    # should always be a subset of the members of NetworkConfigSource with no
+    # duplicate entries.
+    network_config_sources = (NetworkConfigSource.cmdline,
+                              NetworkConfigSource.initramfs,
+                              NetworkConfigSource.system_cfg,
+                              NetworkConfigSource.ds)
+
     # read_url_params
     url_max_wait = -1  # max_wait < 0 means do not wait
     url_timeout = 10  # timeout for each metadata url read attempt
@@ -474,6 +491,16 @@ class DataSource(object):
     def get_public_ssh_keys(self):
         return normalize_pubkey_data(self.metadata.get('public-keys'))
 
+    def publish_host_keys(self, hostkeys):
+        """Publish the public SSH host keys (found in /etc/ssh/*.pub).
+
+        @param hostkeys: List of host key tuples (key_type, key_value),
+            where key_type is the first field in the public key file
+            (e.g. 'ssh-rsa') and key_value is the key itself
+            (e.g. 'AAAAB3NzaC1y...').
+        """
+        pass
+
     def _remap_device(self, short_name):
         # LP: #611137
         # the metadata service may believe that devices are named 'sda'
diff --git a/cloudinit/sources/helpers/azure.py b/cloudinit/sources/helpers/azure.py
index 82c4c8c..f1fba17 100755
--- a/cloudinit/sources/helpers/azure.py
+++ b/cloudinit/sources/helpers/azure.py
@@ -16,7 +16,11 @@ from xml.etree import ElementTree
 
 from cloudinit import url_helper
 from cloudinit import util
+from cloudinit import version
+from cloudinit import distros
 from cloudinit.reporting import events
+from cloudinit.net.dhcp import EphemeralDHCPv4
+from datetime import datetime
 
 LOG = logging.getLogger(__name__)
 
@@ -24,6 +28,10 @@ LOG = logging.getLogger(__name__)
 # value is applied if the endpoint can't be found within a lease file
 DEFAULT_WIRESERVER_ENDPOINT = "a8:3f:81:10"
 
+BOOT_EVENT_TYPE = 'boot-telemetry'
+SYSTEMINFO_EVENT_TYPE = 'system-info'
+DIAGNOSTIC_EVENT_TYPE = 'diagnostic'
+
 azure_ds_reporter = events.ReportEventStack(
     name="azure-ds",
     description="initialize reporter for azure ds",
@@ -40,6 +48,105 @@ def azure_ds_telemetry_reporter(func):
     return impl
 
 
+@azure_ds_telemetry_reporter
+def get_boot_telemetry():
+    """Report timestamps related to kernel initialization and systemd
+    activation of cloud-init"""
+    if not distros.uses_systemd():
+        raise RuntimeError(
+            "distro not using systemd, skipping boot telemetry")
+
+    LOG.debug("Collecting boot telemetry")
+    try:
+        kernel_start = float(time.time()) - float(util.uptime())
+    except ValueError:
+        raise RuntimeError("Failed to determine kernel start timestamp")
+
+    try:
+        out, _ = util.subp(['/bin/systemctl',
+                            'show', '-p',
+                            'UserspaceTimestampMonotonic'],
+                           capture=True)
+        tsm = None
+        if out and '=' in out:
+            tsm = out.split("=")[1]
+
+        if not tsm:
+            raise RuntimeError("Failed to parse "
+                               "UserspaceTimestampMonotonic from systemd")
+
+        user_start = kernel_start + (float(tsm) / 1000000)
+    except util.ProcessExecutionError as e:
+        raise RuntimeError("Failed to get UserspaceTimestampMonotonic: %s"
+                           % e)
+    except ValueError as e:
+        raise RuntimeError("Failed to parse "
+                           "UserspaceTimestampMonotonic from systemd: %s"
+                           % e)
+
+    try:
+        out, _ = util.subp(['/bin/systemctl', 'show',
+                            'cloud-init-local', '-p',
+                            'InactiveExitTimestampMonotonic'],
+                           capture=True)
+        tsm = None
+        if out and '=' in out:
+            tsm = out.split("=")[1]
+        if not tsm:
+            raise RuntimeError("Failed to parse "
+                               "InactiveExitTimestampMonotonic from systemd")
+
+        cloudinit_activation = kernel_start + (float(tsm) / 1000000)
+    except util.ProcessExecutionError as e:
+        raise RuntimeError("Failed to get InactiveExitTimestampMonotonic: %s"
+                           % e)
+    except ValueError as e:
+        raise RuntimeError("Failed to parse "
+                           "InactiveExitTimestampMonotonic from systemd: %s"
+                           % e)
+
+    evt = events.ReportingEvent(
+        BOOT_EVENT_TYPE, 'boot-telemetry',
+        "kernel_start=%s user_start=%s cloudinit_activation=%s" %
+        (datetime.utcfromtimestamp(kernel_start).isoformat() + 'Z',
+         datetime.utcfromtimestamp(user_start).isoformat() + 'Z',
+         datetime.utcfromtimestamp(cloudinit_activation).isoformat() + 'Z'),
+        events.DEFAULT_EVENT_ORIGIN)
+    events.report_event(evt)
+
+    # return the event for unit testing purpose
+    return evt
+
+
+@azure_ds_telemetry_reporter
+def get_system_info():
+    """Collect and report system information"""
+    info = util.system_info()
+    evt = events.ReportingEvent(
+        SYSTEMINFO_EVENT_TYPE, 'system information',
+        "cloudinit_version=%s, kernel_version=%s, variant=%s, "
+        "distro_name=%s, distro_version=%s, flavor=%s, "
+        "python_version=%s" %
+        (version.version_string(), info['release'], info['variant'],
+         info['dist'][0], info['dist'][1], info['dist'][2],
+         info['python']), events.DEFAULT_EVENT_ORIGIN)
+    events.report_event(evt)
+
+    # return the event for unit testing purpose
+    return evt
+
+
+def report_diagnostic_event(str):
+    """Report a diagnostic event"""
+    evt = events.ReportingEvent(
+        DIAGNOSTIC_EVENT_TYPE, 'diagnostic message',
+        str, events.DEFAULT_EVENT_ORIGIN)
+    events.report_event(evt)
+
+    # return the event for unit testing purpose
+    return evt
+
+
 @contextmanager
 def cd(newdir):
     prevdir = os.getcwd()
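The timestamp arithmetic in `get_boot_telemetry` reduces to: kernel start is wall-clock "now" minus uptime, and each systemd monotonic value (microseconds since kernel start) is scaled to seconds and added back. A self-contained sketch with made-up numbers (the helper is ours, not cloud-init's):

```python
def boot_timestamps(now, uptime, userspace_monotonic_us,
                    cloudinit_monotonic_us):
    """Mirror get_boot_telemetry's arithmetic: systemd's monotonic
    timestamps are microseconds elapsed since kernel start."""
    kernel_start = now - uptime
    user_start = kernel_start + userspace_monotonic_us / 1000000
    cloudinit_activation = kernel_start + cloudinit_monotonic_us / 1000000
    return kernel_start, user_start, cloudinit_activation

# Example: booted 100s ago; userspace up after 5s; cloud-init-local at 12s.
k, u, c = boot_timestamps(now=1_000_000.0, uptime=100.0,
                          userspace_monotonic_us=5_000_000,
                          cloudinit_monotonic_us=12_000_000)
# k == 999900.0, u == 999905.0, c == 999912.0
```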
@@ -360,16 +467,19 @@ class WALinuxAgentShim(object):
             value = dhcp245
             LOG.debug("Using Azure Endpoint from dhcp options")
         if value is None:
+            report_diagnostic_event("No Azure endpoint from dhcp options")
             LOG.debug('Finding Azure endpoint from networkd...')
             value = WALinuxAgentShim._networkd_get_value_from_leases()
         if value is None:
             # Option-245 stored in /run/cloud-init/dhclient.hooks/<ifc>.json
             # a dhclient exit hook that calls cloud-init-dhclient-hook
+            report_diagnostic_event("No Azure endpoint from networkd")
             LOG.debug('Finding Azure endpoint from hook json...')
             dhcp_options = WALinuxAgentShim._load_dhclient_json()
             value = WALinuxAgentShim._get_value_from_dhcpoptions(dhcp_options)
         if value is None:
             # Fallback and check the leases file if unsuccessful
+            report_diagnostic_event("No Azure endpoint from dhclient logs")
             LOG.debug("Unable to find endpoint in dhclient logs. "
                       " Falling back to check lease files")
             if fallback_lease_file is None:
@@ -381,11 +491,15 @@ class WALinuxAgentShim(object):
381 value = WALinuxAgentShim._get_value_from_leases_file(491 value = WALinuxAgentShim._get_value_from_leases_file(
382 fallback_lease_file)492 fallback_lease_file)
383 if value is None:493 if value is None:
384 LOG.warning("No lease found; using default endpoint")494 msg = "No lease found; using default endpoint"
495 report_diagnostic_event(msg)
496 LOG.warning(msg)
385 value = DEFAULT_WIRESERVER_ENDPOINT497 value = DEFAULT_WIRESERVER_ENDPOINT
386498
387 endpoint_ip_address = WALinuxAgentShim.get_ip_from_lease_value(value)499 endpoint_ip_address = WALinuxAgentShim.get_ip_from_lease_value(value)
388 LOG.debug('Azure endpoint found at %s', endpoint_ip_address)500 msg = 'Azure endpoint found at %s' % endpoint_ip_address
501 report_diagnostic_event(msg)
502 LOG.debug(msg)
389 return endpoint_ip_address503 return endpoint_ip_address
390504
391 @azure_ds_telemetry_reporter505 @azure_ds_telemetry_reporter
@@ -399,16 +513,19 @@ class WALinuxAgentShim(object):
399 try:513 try:
400 response = http_client.get(514 response = http_client.get(
401 'http://{0}/machine/?comp=goalstate'.format(self.endpoint))515 'http://{0}/machine/?comp=goalstate'.format(self.endpoint))
402 except Exception:516 except Exception as e:
403 if attempts < 10:517 if attempts < 10:
404 time.sleep(attempts + 1)518 time.sleep(attempts + 1)
405 else:519 else:
520 report_diagnostic_event(
521 "failed to register with Azure: %s" % e)
406 raise522 raise
407 else:523 else:
408 break524 break
409 attempts += 1525 attempts += 1
410 LOG.debug('Successfully fetched GoalState XML.')526 LOG.debug('Successfully fetched GoalState XML.')
411 goal_state = GoalState(response.contents, http_client)527 goal_state = GoalState(response.contents, http_client)
528 report_diagnostic_event("container_id %s" % goal_state.container_id)
412 ssh_keys = []529 ssh_keys = []
413 if goal_state.certificates_xml is not None and pubkey_info is not None:530 if goal_state.certificates_xml is not None and pubkey_info is not None:
414 LOG.debug('Certificate XML found; parsing out public keys.')531 LOG.debug('Certificate XML found; parsing out public keys.')
@@ -449,11 +566,20 @@ class WALinuxAgentShim(object):
449 container_id=goal_state.container_id,566 container_id=goal_state.container_id,
450 instance_id=goal_state.instance_id,567 instance_id=goal_state.instance_id,
451 )568 )
452 http_client.post(569 # Host will collect kvps when cloud-init reports ready.
453 "http://{0}/machine?comp=health".format(self.endpoint),570 # some kvps might still be in the queue. We yield the scheduler
454 data=document,571 # to make sure we process all kvps up till this point.
455 extra_headers={'Content-Type': 'text/xml; charset=utf-8'},572 time.sleep(0)
456 )573 try:
574 http_client.post(
575 "http://{0}/machine?comp=health".format(self.endpoint),
576 data=document,
577 extra_headers={'Content-Type': 'text/xml; charset=utf-8'},
578 )
579 except Exception as e:
580 report_diagnostic_event("exception while reporting ready: %s" % e)
581 raise
582
457 LOG.info('Reported ready to Azure fabric.')583 LOG.info('Reported ready to Azure fabric.')
458584
459585
@@ -467,4 +593,22 @@ def get_metadata_from_fabric(fallback_lease_file=None, dhcp_opts=None,
467 finally:593 finally:
468 shim.clean_up()594 shim.clean_up()
469595
596
597class EphemeralDHCPv4WithReporting(object):
598 def __init__(self, reporter, nic=None):
599 self.reporter = reporter
600 self.ephemeralDHCPv4 = EphemeralDHCPv4(iface=nic)
601
602 def __enter__(self):
603 with events.ReportEventStack(
604 name="obtain-dhcp-lease",
605 description="obtain dhcp lease",
606 parent=self.reporter):
607 return self.ephemeralDHCPv4.__enter__()
608
609 def __exit__(self, excp_type, excp_value, excp_traceback):
610 self.ephemeralDHCPv4.__exit__(
611 excp_type, excp_value, excp_traceback)
612
613
470# vi: ts=4 expandtab614# vi: ts=4 expandtab
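The `EphemeralDHCPv4WithReporting` class added above follows a delegating context-manager pattern: it reports an event around the wrapped manager's `__enter__` and forwards `__exit__` for cleanup. A minimal standalone sketch of that pattern (not cloud-init's implementation; `ReportingWrapper`, `FakeLease`, and the list-based event sink are illustrative stand-ins):

```python
class ReportingWrapper(object):
    """Wrap another context manager, recording an event on entry."""

    def __init__(self, log, wrapped):
        self.log = log          # hypothetical event sink (a plain list here)
        self.wrapped = wrapped  # the context manager being wrapped

    def __enter__(self):
        # Record the event, then delegate to the wrapped manager.
        self.log.append('obtain-dhcp-lease')
        return self.wrapped.__enter__()

    def __exit__(self, exc_type, exc_value, exc_tb):
        # Cleanup is fully delegated, exceptions and all.
        return self.wrapped.__exit__(exc_type, exc_value, exc_tb)


class FakeLease(object):
    """Stand-in for an ephemeral DHCP lease context manager."""

    def __enter__(self):
        return {'interface': 'eth0'}

    def __exit__(self, *args):
        return False


log = []
with ReportingWrapper(log, FakeLease()) as lease:
    assert lease['interface'] == 'eth0'
assert log == ['obtain-dhcp-lease']
```

Note the simplification: in the diff, the reporting scope is an `events.ReportEventStack` around `__enter__` only, so the event closes before the lease is torn down.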
diff --git a/cloudinit/sources/helpers/vmware/imc/config_custom_script.py b/cloudinit/sources/helpers/vmware/imc/config_custom_script.py
index a7d4ad9..9f14770 100644
--- a/cloudinit/sources/helpers/vmware/imc/config_custom_script.py
+++ b/cloudinit/sources/helpers/vmware/imc/config_custom_script.py
@@ -1,5 +1,5 @@
 # Copyright (C) 2017 Canonical Ltd.
-# Copyright (C) 2017 VMware Inc.
+# Copyright (C) 2017-2019 VMware Inc.
 #
 # Author: Maitreyee Saikia <msaikia@vmware.com>
 #
@@ -8,7 +8,6 @@
 import logging
 import os
 import stat
-from textwrap import dedent

 from cloudinit import util

@@ -20,12 +19,15 @@ class CustomScriptNotFound(Exception):


 class CustomScriptConstant(object):
-    RC_LOCAL = "/etc/rc.local"
-    POST_CUST_TMP_DIR = "/root/.customization"
-    POST_CUST_RUN_SCRIPT_NAME = "post-customize-guest.sh"
-    POST_CUST_RUN_SCRIPT = os.path.join(POST_CUST_TMP_DIR,
-                                        POST_CUST_RUN_SCRIPT_NAME)
-    POST_REBOOT_PENDING_MARKER = "/.guest-customization-post-reboot-pending"
+    CUSTOM_TMP_DIR = "/root/.customization"
+
+    # The user defined custom script
+    CUSTOM_SCRIPT_NAME = "customize.sh"
+    CUSTOM_SCRIPT = os.path.join(CUSTOM_TMP_DIR,
+                                 CUSTOM_SCRIPT_NAME)
+    POST_CUSTOM_PENDING_MARKER = "/.guest-customization-post-reboot-pending"
+    # The cc_scripts_per_instance script to launch custom script
+    POST_CUSTOM_SCRIPT_NAME = "post-customize-guest.sh"


 class RunCustomScript(object):
@@ -39,10 +41,19 @@ class RunCustomScript(object):
             raise CustomScriptNotFound("Script %s not found!! "
                                        "Cannot execute custom script!"
                                        % self.scriptpath)
+
+        util.ensure_dir(CustomScriptConstant.CUSTOM_TMP_DIR)
+
+        LOG.debug("Copying custom script to %s",
+                  CustomScriptConstant.CUSTOM_SCRIPT)
+        util.copy(self.scriptpath, CustomScriptConstant.CUSTOM_SCRIPT)
+
         # Strip any CR characters from the decoded script
-        util.load_file(self.scriptpath).replace("\r", "")
-        st = os.stat(self.scriptpath)
-        os.chmod(self.scriptpath, st.st_mode | stat.S_IEXEC)
+        content = util.load_file(
+            CustomScriptConstant.CUSTOM_SCRIPT).replace("\r", "")
+        util.write_file(CustomScriptConstant.CUSTOM_SCRIPT,
+                        content,
+                        mode=0o544)


 class PreCustomScript(RunCustomScript):
@@ -50,104 +61,34 @@ class PreCustomScript(RunCustomScript):
         """Executing custom script with precustomization argument."""
         LOG.debug("Executing pre-customization script")
         self.prepare_script()
-        util.subp(["/bin/sh", self.scriptpath, "precustomization"])
+        util.subp([CustomScriptConstant.CUSTOM_SCRIPT, "precustomization"])


 class PostCustomScript(RunCustomScript):
-    def __init__(self, scriptname, directory):
+    def __init__(self, scriptname, directory, ccScriptsDir):
         super(PostCustomScript, self).__init__(scriptname, directory)
-        # Determine when to run custom script. When postreboot is True,
-        # the user uploaded script will run as part of rc.local after
-        # the machine reboots. This is determined by presence of rclocal.
-        # When postreboot is False, script will run as part of cloud-init.
-        self.postreboot = False
-
-    def _install_post_reboot_agent(self, rclocal):
-        """
-        Install post-reboot agent for running custom script after reboot.
-        As part of this process, we are editing the rclocal file to run a
-        VMware script, which in turn is resposible for handling the user
-        script.
-        @param: path to rc local.
-        """
-        LOG.debug("Installing post-reboot customization from %s to %s",
-                  self.directory, rclocal)
-        if not self.has_previous_agent(rclocal):
-            LOG.info("Adding post-reboot customization agent to rc.local")
-            new_content = dedent("""
-                # Run post-reboot guest customization
-                /bin/sh %s
-                exit 0
-                """) % CustomScriptConstant.POST_CUST_RUN_SCRIPT
-            existing_rclocal = util.load_file(rclocal).replace('exit 0\n', '')
-            st = os.stat(rclocal)
-            # "x" flag should be set
-            mode = st.st_mode | stat.S_IEXEC
-            util.write_file(rclocal, existing_rclocal + new_content, mode)
-
-        else:
-            # We don't need to update rclocal file everytime a customization
-            # is requested. It just needs to be done for the first time.
-            LOG.info("Post-reboot guest customization agent is already "
-                     "registered in rc.local")
-        LOG.debug("Installing post-reboot customization agent finished: %s",
-                  self.postreboot)
-
-    def has_previous_agent(self, rclocal):
-        searchstring = "# Run post-reboot guest customization"
-        if searchstring in open(rclocal).read():
-            return True
-        return False
-
-    def find_rc_local(self):
-        """
-        Determine if rc local is present.
-        """
-        rclocal = ""
-        if os.path.exists(CustomScriptConstant.RC_LOCAL):
-            LOG.debug("rc.local detected.")
-            # resolving in case of symlink
-            rclocal = os.path.realpath(CustomScriptConstant.RC_LOCAL)
-            LOG.debug("rc.local resolved to %s", rclocal)
-        else:
-            LOG.warning("Can't find rc.local, post-customization "
-                        "will be run before reboot")
-        return rclocal
-
-    def install_agent(self):
-        rclocal = self.find_rc_local()
-        if rclocal:
-            self._install_post_reboot_agent(rclocal)
-            self.postreboot = True
+        self.ccScriptsDir = ccScriptsDir
+        self.ccScriptPath = os.path.join(
+            ccScriptsDir,
+            CustomScriptConstant.POST_CUSTOM_SCRIPT_NAME)

     def execute(self):
         """
-        This method executes post-customization script before or after reboot
-        based on the presence of rc local.
+        This method copy the post customize run script to
+        cc_scripts_per_instance directory and let this
+        module to run post custom script.
         """
         self.prepare_script()
-        self.install_agent()
-        if not self.postreboot:
-            LOG.warning("Executing post-customization script inline")
-            util.subp(["/bin/sh", self.scriptpath, "postcustomization"])
-        else:
-            LOG.debug("Scheduling custom script to run post reboot")
-            if not os.path.isdir(CustomScriptConstant.POST_CUST_TMP_DIR):
-                os.mkdir(CustomScriptConstant.POST_CUST_TMP_DIR)
-            # Script "post-customize-guest.sh" and user uploaded script are
-            # are present in the same directory and needs to copied to a temp
-            # directory to be executed post reboot. User uploaded script is
-            # saved as customize.sh in the temp directory.
-            # post-customize-guest.sh excutes customize.sh after reboot.
-            LOG.debug("Copying post-customization script")
-            util.copy(self.scriptpath,
-                      CustomScriptConstant.POST_CUST_TMP_DIR + "/customize.sh")
-            LOG.debug("Copying script to run post-customization script")
-            util.copy(
-                os.path.join(self.directory,
-                             CustomScriptConstant.POST_CUST_RUN_SCRIPT_NAME),
-                CustomScriptConstant.POST_CUST_RUN_SCRIPT)
-            LOG.info("Creating post-reboot pending marker")
-            util.ensure_file(CustomScriptConstant.POST_REBOOT_PENDING_MARKER)
+
+        LOG.debug("Copying post customize run script to %s",
+                  self.ccScriptPath)
+        util.copy(
+            os.path.join(self.directory,
+                         CustomScriptConstant.POST_CUSTOM_SCRIPT_NAME),
+            self.ccScriptPath)
+        st = os.stat(self.ccScriptPath)
+        os.chmod(self.ccScriptPath, st.st_mode | stat.S_IEXEC)
+        LOG.info("Creating post customization pending marker")
+        util.ensure_file(CustomScriptConstant.POST_CUSTOM_PENDING_MARKER)

 # vi: ts=4 expandtab
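The diff above replaces the rc.local agent with staging into `/root/.customization`: the new `prepare_script()` copies the user script into the staging directory, strips CR characters, and re-writes it executable with mode 0o544. A rough stdlib-only approximation of that flow (paths and helper names here are illustrative, not cloud-init's `util` helpers):

```python
import os
import shutil
import stat
import tempfile


def prepare_script(scriptpath, tmp_dir):
    """Stage a user script: copy, strip CRs, make executable."""
    os.makedirs(tmp_dir, exist_ok=True)       # cf. util.ensure_dir
    staged = os.path.join(tmp_dir, "customize.sh")
    shutil.copy(scriptpath, staged)           # cf. util.copy
    # Strip any CR characters from the decoded script.
    with open(staged) as f:
        content = f.read().replace("\r", "")
    with open(staged, "w") as f:
        f.write(content)
    os.chmod(staged, 0o544)                   # cf. util.write_file(mode=0o544)
    return staged


# Demonstration on a throwaway script with DOS line endings:
src_dir = tempfile.mkdtemp()
src = os.path.join(src_dir, "cust.sh")
with open(src, "w", newline="") as f:
    f.write("#!/bin/sh\r\necho done\r\n")
staged = prepare_script(src, os.path.join(src_dir, "staging"))
assert "\r" not in open(staged).read()
assert os.stat(staged).st_mode & stat.S_IXUSR
```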
diff --git a/cloudinit/sources/tests/test_oracle.py b/cloudinit/sources/tests/test_oracle.py
index 97d6294..3ddf7df 100644
--- a/cloudinit/sources/tests/test_oracle.py
+++ b/cloudinit/sources/tests/test_oracle.py
@@ -1,7 +1,7 @@
 # This file is part of cloud-init. See LICENSE file for license information.

 from cloudinit.sources import DataSourceOracle as oracle
-from cloudinit.sources import BrokenMetadata
+from cloudinit.sources import BrokenMetadata, NetworkConfigSource
 from cloudinit import helpers

 from cloudinit.tests import helpers as test_helpers
@@ -18,10 +18,52 @@ import uuid
 DS_PATH = "cloudinit.sources.DataSourceOracle"
 MD_VER = "2013-10-17"

+# `curl -L http://169.254.169.254/opc/v1/vnics/` on a Oracle Bare Metal Machine
+# with a secondary VNIC attached (vnicId truncated for Python line length)
+OPC_BM_SECONDARY_VNIC_RESPONSE = """\
+[ {
+  "vnicId" : "ocid1.vnic.oc1.phx.abyhqljtyvcucqkhdqmgjszebxe4hrb!!TRUNCATED||",
+  "privateIp" : "10.0.0.8",
+  "vlanTag" : 0,
+  "macAddr" : "90:e2:ba:d4:f1:68",
+  "virtualRouterIp" : "10.0.0.1",
+  "subnetCidrBlock" : "10.0.0.0/24",
+  "nicIndex" : 0
+}, {
+  "vnicId" : "ocid1.vnic.oc1.phx.abyhqljtfmkxjdy2sqidndiwrsg63zf!!TRUNCATED||",
+  "privateIp" : "10.0.4.5",
+  "vlanTag" : 1,
+  "macAddr" : "02:00:17:05:CF:51",
+  "virtualRouterIp" : "10.0.4.1",
+  "subnetCidrBlock" : "10.0.4.0/24",
+  "nicIndex" : 0
+} ]"""
+
+# `curl -L http://169.254.169.254/opc/v1/vnics/` on a Oracle Virtual Machine
+# with a secondary VNIC attached
+OPC_VM_SECONDARY_VNIC_RESPONSE = """\
+[ {
+  "vnicId" : "ocid1.vnic.oc1.phx.abyhqljtch72z5pd76cc2636qeqh7z_truncated",
+  "privateIp" : "10.0.0.230",
+  "vlanTag" : 1039,
+  "macAddr" : "02:00:17:05:D1:DB",
+  "virtualRouterIp" : "10.0.0.1",
+  "subnetCidrBlock" : "10.0.0.0/24"
+}, {
+  "vnicId" : "ocid1.vnic.oc1.phx.abyhqljt4iew3gwmvrwrhhf3bp5drj_truncated",
+  "privateIp" : "10.0.0.231",
+  "vlanTag" : 1041,
+  "macAddr" : "00:00:17:02:2B:B1",
+  "virtualRouterIp" : "10.0.0.1",
+  "subnetCidrBlock" : "10.0.0.0/24"
+} ]"""
+

 class TestDataSourceOracle(test_helpers.CiTestCase):
     """Test datasource DataSourceOracle."""

+    with_logs = True
+
     ds_class = oracle.DataSourceOracle

     my_uuid = str(uuid.uuid4())
@@ -79,6 +121,16 @@ class TestDataSourceOracle(test_helpers.CiTestCase):
         self.assertEqual(
             'metadata (http://169.254.169.254/openstack/)', ds.subplatform)

+    def test_sys_cfg_can_enable_configure_secondary_nics(self):
+        # Confirm that behaviour is toggled by sys_cfg
+        ds, _mocks = self._get_ds()
+        self.assertFalse(ds.ds_cfg['configure_secondary_nics'])
+
+        sys_cfg = {
+            'datasource': {'Oracle': {'configure_secondary_nics': True}}}
+        ds, _mocks = self._get_ds(sys_cfg=sys_cfg)
+        self.assertTrue(ds.ds_cfg['configure_secondary_nics'])
+
     @mock.patch(DS_PATH + "._is_iscsi_root", return_value=True)
     def test_without_userdata(self, m_is_iscsi_root):
         """If no user-data is provided, it should not be in return dict."""
@@ -133,9 +185,12 @@ class TestDataSourceOracle(test_helpers.CiTestCase):
         self.assertEqual(self.my_md['uuid'], ds.get_instance_id())
         self.assertEqual(my_userdata, ds.userdata_raw)

-    @mock.patch(DS_PATH + ".cmdline.read_kernel_cmdline_config")
+    @mock.patch(DS_PATH + "._add_network_config_from_opc_imds",
+                side_effect=lambda network_config: network_config)
+    @mock.patch(DS_PATH + ".cmdline.read_initramfs_config")
     @mock.patch(DS_PATH + "._is_iscsi_root", return_value=True)
-    def test_network_cmdline(self, m_is_iscsi_root, m_cmdline_config):
+    def test_network_cmdline(self, m_is_iscsi_root, m_initramfs_config,
+                             _m_add_network_config_from_opc_imds):
         """network_config should read kernel cmdline."""
         distro = mock.MagicMock()
         ds, _ = self._get_ds(distro=distro, patches={
@@ -145,15 +200,18 @@ class TestDataSourceOracle(test_helpers.CiTestCase):
             MD_VER: {'system_uuid': self.my_uuid,
                      'meta_data': self.my_md}}}})
         ncfg = {'version': 1, 'config': [{'a': 'b'}]}
-        m_cmdline_config.return_value = ncfg
+        m_initramfs_config.return_value = ncfg
         self.assertTrue(ds._get_data())
         self.assertEqual(ncfg, ds.network_config)
-        m_cmdline_config.assert_called_once_with()
+        self.assertEqual([mock.call()], m_initramfs_config.call_args_list)
         self.assertFalse(distro.generate_fallback_config.called)

-    @mock.patch(DS_PATH + ".cmdline.read_kernel_cmdline_config")
+    @mock.patch(DS_PATH + "._add_network_config_from_opc_imds",
+                side_effect=lambda network_config: network_config)
+    @mock.patch(DS_PATH + ".cmdline.read_initramfs_config")
     @mock.patch(DS_PATH + "._is_iscsi_root", return_value=True)
-    def test_network_fallback(self, m_is_iscsi_root, m_cmdline_config):
+    def test_network_fallback(self, m_is_iscsi_root, m_initramfs_config,
+                              _m_add_network_config_from_opc_imds):
         """test that fallback network is generated if no kernel cmdline."""
         distro = mock.MagicMock()
         ds, _ = self._get_ds(distro=distro, patches={
@@ -163,18 +221,95 @@ class TestDataSourceOracle(test_helpers.CiTestCase):
             MD_VER: {'system_uuid': self.my_uuid,
                      'meta_data': self.my_md}}}})
         ncfg = {'version': 1, 'config': [{'a': 'b'}]}
-        m_cmdline_config.return_value = None
+        m_initramfs_config.return_value = None
         self.assertTrue(ds._get_data())
         ncfg = {'version': 1, 'config': [{'distro1': 'value'}]}
         distro.generate_fallback_config.return_value = ncfg
         self.assertEqual(ncfg, ds.network_config)
-        m_cmdline_config.assert_called_once_with()
+        self.assertEqual([mock.call()], m_initramfs_config.call_args_list)
         distro.generate_fallback_config.assert_called_once_with()
-        self.assertEqual(1, m_cmdline_config.call_count)

         # test that the result got cached, and the methods not re-called.
         self.assertEqual(ncfg, ds.network_config)
-        self.assertEqual(1, m_cmdline_config.call_count)
+        self.assertEqual(1, m_initramfs_config.call_count)
+
+    @mock.patch(DS_PATH + "._add_network_config_from_opc_imds")
+    @mock.patch(DS_PATH + ".cmdline.read_initramfs_config",
+                return_value={'some': 'config'})
+    @mock.patch(DS_PATH + "._is_iscsi_root", return_value=True)
+    def test_secondary_nics_added_to_network_config_if_enabled(
+            self, _m_is_iscsi_root, _m_initramfs_config,
+            m_add_network_config_from_opc_imds):
+
+        needle = object()
+
+        def network_config_side_effect(network_config):
+            network_config['secondary_added'] = needle
+
+        m_add_network_config_from_opc_imds.side_effect = (
+            network_config_side_effect)
+
+        distro = mock.MagicMock()
+        ds, _ = self._get_ds(distro=distro, patches={
+            '_is_platform_viable': {'return_value': True},
+            'crawl_metadata': {
+                'return_value': {
+                    MD_VER: {'system_uuid': self.my_uuid,
+                             'meta_data': self.my_md}}}})
+        ds.ds_cfg['configure_secondary_nics'] = True
+        self.assertEqual(needle, ds.network_config['secondary_added'])
+
+    @mock.patch(DS_PATH + "._add_network_config_from_opc_imds")
+    @mock.patch(DS_PATH + ".cmdline.read_initramfs_config",
+                return_value={'some': 'config'})
+    @mock.patch(DS_PATH + "._is_iscsi_root", return_value=True)
+    def test_secondary_nics_not_added_to_network_config_by_default(
+            self, _m_is_iscsi_root, _m_initramfs_config,
+            m_add_network_config_from_opc_imds):
+
+        def network_config_side_effect(network_config):
+            network_config['secondary_added'] = True
+
+        m_add_network_config_from_opc_imds.side_effect = (
+            network_config_side_effect)
+
+        distro = mock.MagicMock()
+        ds, _ = self._get_ds(distro=distro, patches={
+            '_is_platform_viable': {'return_value': True},
+            'crawl_metadata': {
+                'return_value': {
+                    MD_VER: {'system_uuid': self.my_uuid,
+                             'meta_data': self.my_md}}}})
+        self.assertNotIn('secondary_added', ds.network_config)
+
+    @mock.patch(DS_PATH + "._add_network_config_from_opc_imds")
+    @mock.patch(DS_PATH + ".cmdline.read_initramfs_config")
+    @mock.patch(DS_PATH + "._is_iscsi_root", return_value=True)
+    def test_secondary_nic_failure_isnt_blocking(
+            self, _m_is_iscsi_root, m_initramfs_config,
+            m_add_network_config_from_opc_imds):
+
+        m_add_network_config_from_opc_imds.side_effect = Exception()
+
+        distro = mock.MagicMock()
+        ds, _ = self._get_ds(distro=distro, patches={
+            '_is_platform_viable': {'return_value': True},
+            'crawl_metadata': {
+                'return_value': {
+                    MD_VER: {'system_uuid': self.my_uuid,
+                             'meta_data': self.my_md}}}})
+        ds.ds_cfg['configure_secondary_nics'] = True
+        self.assertEqual(ds.network_config, m_initramfs_config.return_value)
+        self.assertIn('Failed to fetch secondary network configuration',
+                      self.logs.getvalue())
+
+    def test_ds_network_cfg_preferred_over_initramfs(self):
+        """Ensure that DS net config is preferred over initramfs config"""
+        network_config_sources = oracle.DataSourceOracle.network_config_sources
+        self.assertLess(
+            network_config_sources.index(NetworkConfigSource.ds),
+            network_config_sources.index(NetworkConfigSource.initramfs)
+        )


 @mock.patch(DS_PATH + "._read_system_uuid", return_value=str(uuid.uuid4()))
@@ -336,4 +471,86 @@ class TestLoadIndex(test_helpers.CiTestCase):
         oracle._load_index("\n".join(["meta_data.json", "user_data"])))


+class TestNetworkConfigFromOpcImds(test_helpers.CiTestCase):
+
+    with_logs = True
+
+    def setUp(self):
+        super(TestNetworkConfigFromOpcImds, self).setUp()
+        self.add_patch(DS_PATH + '.readurl', 'm_readurl')
+        self.add_patch(DS_PATH + '.get_interfaces_by_mac',
+                       'm_get_interfaces_by_mac')
+
+    def test_failure_to_readurl(self):
+        # readurl failures should just bubble out to the caller
+        self.m_readurl.side_effect = Exception('oh no')
+        with self.assertRaises(Exception) as excinfo:
+            oracle._add_network_config_from_opc_imds({})
+        self.assertEqual(str(excinfo.exception), 'oh no')
+
+    def test_empty_response(self):
+        # empty response error should just bubble out to the caller
+        self.m_readurl.return_value = ''
+        with self.assertRaises(Exception):
+            oracle._add_network_config_from_opc_imds([])
+
+    def test_invalid_json(self):
+        # invalid JSON error should just bubble out to the caller
+        self.m_readurl.return_value = '{'
+        with self.assertRaises(Exception):
+            oracle._add_network_config_from_opc_imds([])
+
+    def test_no_secondary_nics_does_not_mutate_input(self):
+        self.m_readurl.return_value = json.dumps([{}])
+        # We test this by passing in a non-dict to ensure that no dict
+        # operations are used; failure would be seen as exceptions
+        oracle._add_network_config_from_opc_imds(object())
+
+    def test_bare_metal_machine_skipped(self):
+        # nicIndex in the first entry indicates a bare metal machine
+        self.m_readurl.return_value = OPC_BM_SECONDARY_VNIC_RESPONSE
+        # We test this by passing in a non-dict to ensure that no dict
+        # operations are used
+        self.assertFalse(oracle._add_network_config_from_opc_imds(object()))
+        self.assertIn('bare metal machine', self.logs.getvalue())
+
+    def test_missing_mac_skipped(self):
+        self.m_readurl.return_value = OPC_VM_SECONDARY_VNIC_RESPONSE
+        self.m_get_interfaces_by_mac.return_value = {}
+
+        network_config = {'version': 1, 'config': [{'primary': 'nic'}]}
+        oracle._add_network_config_from_opc_imds(network_config)
+
+        self.assertEqual(1, len(network_config['config']))
+        self.assertIn(
+            'Interface with MAC 00:00:17:02:2b:b1 not found; skipping',
+            self.logs.getvalue())
+
+    def test_secondary_nic(self):
+        self.m_readurl.return_value = OPC_VM_SECONDARY_VNIC_RESPONSE
+        mac_addr, nic_name = '00:00:17:02:2b:b1', 'ens3'
+        self.m_get_interfaces_by_mac.return_value = {
+            mac_addr: nic_name,
+        }
+
+        network_config = {'version': 1, 'config': [{'primary': 'nic'}]}
+        oracle._add_network_config_from_opc_imds(network_config)
+
+        # The input is mutated
+        self.assertEqual(2, len(network_config['config']))
+
+        secondary_nic_cfg = network_config['config'][1]
+        self.assertEqual(nic_name, secondary_nic_cfg['name'])
+        self.assertEqual('physical', secondary_nic_cfg['type'])
+        self.assertEqual(mac_addr, secondary_nic_cfg['mac_address'])
+        self.assertEqual(9000, secondary_nic_cfg['mtu'])
+
+        self.assertEqual(1, len(secondary_nic_cfg['subnets']))
+        subnet_cfg = secondary_nic_cfg['subnets'][0]
+        # These values are hard-coded in OPC_VM_SECONDARY_VNIC_RESPONSE
+        self.assertEqual('10.0.0.231', subnet_cfg['address'])
+        self.assertEqual('24', subnet_cfg['netmask'])
+        self.assertEqual('10.0.0.1', subnet_cfg['gateway'])
+        self.assertEqual('manual', subnet_cfg['control'])
+
 # vi: ts=4 expandtab
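The tests above pin down the mapping from an OPC `/vnics/` JSON response to version-1 network-config entries: the primary VNIC (first entry) is skipped, interfaces are matched by lowercased MAC, and each secondary VNIC becomes a `physical` entry with MTU 9000 and a static subnet derived from `privateIp`, `subnetCidrBlock`, and `virtualRouterIp`. An illustrative reduction of that mapping (not the `DataSourceOracle._add_network_config_from_opc_imds` implementation; the helper name and sample data are invented for the example):

```python
import json

# Sample response in the shape of OPC_VM_SECONDARY_VNIC_RESPONSE above.
VNICS_JSON = json.dumps([
    {"privateIp": "10.0.0.230", "macAddr": "02:00:17:05:D1:DB",
     "virtualRouterIp": "10.0.0.1", "subnetCidrBlock": "10.0.0.0/24"},
    {"privateIp": "10.0.0.231", "macAddr": "00:00:17:02:2B:B1",
     "virtualRouterIp": "10.0.0.1", "subnetCidrBlock": "10.0.0.0/24"},
])


def secondary_nic_configs(vnics_json, interfaces_by_mac):
    """Build version-1 'physical' entries for secondary VNICs."""
    configs = []
    for vnic in json.loads(vnics_json)[1:]:   # skip the primary VNIC
        mac = vnic['macAddr'].lower()
        name = interfaces_by_mac.get(mac)
        if name is None:
            continue                          # interface not found; skip it
        configs.append({
            'name': name, 'type': 'physical', 'mac_address': mac,
            'mtu': 9000,
            'subnets': [{
                'type': 'static',
                'address': vnic['privateIp'],
                # netmask is the prefix length from the CIDR block
                'netmask': vnic['subnetCidrBlock'].split('/')[1],
                'gateway': vnic['virtualRouterIp'],
                'control': 'manual',
            }],
        })
    return configs


cfgs = secondary_nic_configs(VNICS_JSON, {'00:00:17:02:2b:b1': 'ens3'})
assert len(cfgs) == 1
assert cfgs[0]['name'] == 'ens3'
assert cfgs[0]['subnets'][0]['netmask'] == '24'
```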
diff --git a/cloudinit/stages.py b/cloudinit/stages.py
index da7d349..5012988 100644
--- a/cloudinit/stages.py
+++ b/cloudinit/stages.py
@@ -24,6 +24,7 @@ from cloudinit.handlers.shell_script import ShellScriptPartHandler
24from cloudinit.handlers.upstart_job import UpstartJobPartHandler24from cloudinit.handlers.upstart_job import UpstartJobPartHandler
2525
26from cloudinit.event import EventType26from cloudinit.event import EventType
27from cloudinit.sources import NetworkConfigSource
2728
28from cloudinit import cloud29from cloudinit import cloud
29from cloudinit import config30from cloudinit import config
@@ -630,32 +631,54 @@ class Init(object):
630 if os.path.exists(disable_file):631 if os.path.exists(disable_file):
631 return (None, disable_file)632 return (None, disable_file)
632633
633 cmdline_cfg = ('cmdline', cmdline.read_kernel_cmdline_config())634 available_cfgs = {
634 dscfg = ('ds', None)635 NetworkConfigSource.cmdline: cmdline.read_kernel_cmdline_config(),
636 NetworkConfigSource.initramfs: cmdline.read_initramfs_config(),
637 NetworkConfigSource.ds: None,
638 NetworkConfigSource.system_cfg: self.cfg.get('network'),
639 }
640
635 if self.datasource and hasattr(self.datasource, 'network_config'):641 if self.datasource and hasattr(self.datasource, 'network_config'):
636 dscfg = ('ds', self.datasource.network_config)642 available_cfgs[NetworkConfigSource.ds] = (
637 sys_cfg = ('system_cfg', self.cfg.get('network'))643 self.datasource.network_config)
638644
639 for loc, ncfg in (cmdline_cfg, sys_cfg, dscfg):645 if self.datasource:
646 order = self.datasource.network_config_sources
647 else:
648 order = sources.DataSource.network_config_sources
649 for cfg_source in order:
650 if not hasattr(NetworkConfigSource, cfg_source):
651 LOG.warning('data source specifies an invalid network'
652 ' cfg_source: %s', cfg_source)
653 continue
654 if cfg_source not in available_cfgs:
655 LOG.warning('data source specifies an unavailable network'
656 ' cfg_source: %s', cfg_source)
657 continue
658 ncfg = available_cfgs[cfg_source]
640 if net.is_disabled_cfg(ncfg):659 if net.is_disabled_cfg(ncfg):
641 LOG.debug("network config disabled by %s", loc)660 LOG.debug("network config disabled by %s", cfg_source)
642 return (None, loc)661 return (None, cfg_source)
643 if ncfg:662 if ncfg:
644 return (ncfg, loc)663 return (ncfg, cfg_source)
645 return (self.distro.generate_fallback_config(), "fallback")664 return (self.distro.generate_fallback_config(),
646665 NetworkConfigSource.fallback)
647 def apply_network_config(self, bring_up):
648 netcfg, src = self._find_networking_config()
649 if netcfg is None:
650 LOG.info("network config is disabled by %s", src)
651 return
652666
667 def _apply_netcfg_names(self, netcfg):
653 try:668 try:
654 LOG.debug("applying net config names for %s", netcfg)669 LOG.debug("applying net config names for %s", netcfg)
655 self.distro.apply_network_config_names(netcfg)670 self.distro.apply_network_config_names(netcfg)
656 except Exception as e:671 except Exception as e:
657 LOG.warning("Failed to rename devices: %s", e)672 LOG.warning("Failed to rename devices: %s", e)
658673
674 def apply_network_config(self, bring_up):
675 # get a network config
676 netcfg, src = self._find_networking_config()
677 if netcfg is None:
678 LOG.info("network config is disabled by %s", src)
679 return
680
681 # request an update if needed/available
659 if self.datasource is not NULL_DATA_SOURCE:682 if self.datasource is not NULL_DATA_SOURCE:
660 if not self.is_new_instance():683 if not self.is_new_instance():
661 if not self.datasource.update_metadata([EventType.BOOT]):684 if not self.datasource.update_metadata([EventType.BOOT]):
@@ -663,8 +686,20 @@ class Init(object):
                         "No network config applied. Neither a new instance"
                         " nor datasource network update on '%s' event",
                         EventType.BOOT)
+                    # nothing new, but ensure proper names
+                    self._apply_netcfg_names(netcfg)
                     return
+                else:
+                    # refresh netcfg after update
+                    netcfg, src = self._find_networking_config()
+
+        # ensure all physical devices in config are present
+        net.wait_for_physdevs(netcfg)
+
+        # apply renames from config
+        self._apply_netcfg_names(netcfg)
 
+        # rendering config
         LOG.info("Applying network configuration from %s bringup=%s: %s",
                  src, bring_up, netcfg)
         try:
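Read as plain Python, the reworked `apply_network_config` above orders its steps as: find a config, optionally refresh it after a metadata update, wait for physical devices, apply device names, then render. A minimal sketch of that control flow, with every collaborator stubbed out as a callable (the function and parameter names here are mine, not cloud-init's API; only the ordering of steps is taken from the diff):

```python
# Sketch of the control flow introduced above; all collaborators are stubs
# passed in as callables, so the ordering can be exercised in isolation.

def apply_network_config(find_config, update_metadata, wait_for_physdevs,
                         apply_names, render, is_new_instance):
    """Return a list describing which terminal step actually ran."""
    steps = []
    netcfg, src = find_config()
    if netcfg is None:
        steps.append('disabled by %s' % src)
        return steps
    if not is_new_instance():
        if not update_metadata():
            # nothing new, but still ensure proper device names
            apply_names(netcfg)
            steps.append('names-only')
            return steps
        # metadata changed: re-resolve the config before rendering
        netcfg, src = find_config()
    wait_for_physdevs(netcfg)
    apply_names(netcfg)
    render(netcfg)
    steps.append('rendered from %s' % src)
    return steps

ran = apply_network_config(
    find_config=lambda: ({'config': []}, 'fallback'),
    update_metadata=lambda: False,
    wait_for_physdevs=lambda cfg: None,
    apply_names=lambda cfg: None,
    render=lambda cfg: None,
    is_new_instance=lambda: False)
print(ran)  # ['names-only']: no metadata update, but names still applied
```

The notable behavioral change this models: even when no new config is available, device renames are still applied, and a successful metadata update forces the config to be re-resolved before rendering.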
diff --git a/cloudinit/tests/helpers.py b/cloudinit/tests/helpers.py
index f41180f..23fddd0 100644
--- a/cloudinit/tests/helpers.py
+++ b/cloudinit/tests/helpers.py
@@ -198,7 +198,8 @@ class CiTestCase(TestCase):
                 prefix="ci-%s." % self.__class__.__name__)
         else:
             tmpd = tempfile.mkdtemp(dir=dir)
-        self.addCleanup(functools.partial(shutil.rmtree, tmpd))
+        self.addCleanup(
+            functools.partial(shutil.rmtree, tmpd, ignore_errors=True))
         return tmpd
 
     def tmp_path(self, path, dir=None):
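The `ignore_errors=True` added above makes the registered cleanup tolerate a temp directory that a test already removed itself. A standalone illustration using only the stdlib (no cloud-init code):

```python
import os
import shutil
import tempfile

tmpd = tempfile.mkdtemp(prefix='ci-demo.')
shutil.rmtree(tmpd)  # a test may delete its own tmpdir before cleanup runs

# Without ignore_errors=True this second rmtree raises FileNotFoundError;
# with it, the cleanup-time call simply becomes a no-op.
shutil.rmtree(tmpd, ignore_errors=True)
assert not os.path.exists(tmpd)
```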
diff --git a/cloudinit/tests/test_stages.py b/cloudinit/tests/test_stages.py
index 94b6b25..d5c9c0e 100644
--- a/cloudinit/tests/test_stages.py
+++ b/cloudinit/tests/test_stages.py
@@ -6,6 +6,7 @@ import os
 
 from cloudinit import stages
 from cloudinit import sources
+from cloudinit.sources import NetworkConfigSource
 
 from cloudinit.event import EventType
 from cloudinit.util import write_file
@@ -37,6 +38,7 @@ class FakeDataSource(sources.DataSource):
 
 class TestInit(CiTestCase):
     with_logs = True
+    allowed_subp = False
 
     def setUp(self):
         super(TestInit, self).setUp()
@@ -57,84 +59,189 @@ class TestInit(CiTestCase):
             (None, disable_file),
             self.init._find_networking_config())
 
+    @mock.patch('cloudinit.stages.cmdline.read_initramfs_config')
     @mock.patch('cloudinit.stages.cmdline.read_kernel_cmdline_config')
-    def test_wb__find_networking_config_disabled_by_kernel(self, m_cmdline):
+    def test_wb__find_networking_config_disabled_by_kernel(
+            self, m_cmdline, m_initramfs):
         """find_networking_config returns when disabled by kernel cmdline."""
         m_cmdline.return_value = {'config': 'disabled'}
+        m_initramfs.return_value = {'config': ['fake_initrd']}
         self.assertEqual(
-            (None, 'cmdline'),
+            (None, NetworkConfigSource.cmdline),
             self.init._find_networking_config())
         self.assertEqual('DEBUG: network config disabled by cmdline\n',
                          self.logs.getvalue())
 
+    @mock.patch('cloudinit.stages.cmdline.read_initramfs_config')
     @mock.patch('cloudinit.stages.cmdline.read_kernel_cmdline_config')
-    def test_wb__find_networking_config_disabled_by_datasrc(self, m_cmdline):
+    def test_wb__find_networking_config_disabled_by_initrd(
+            self, m_cmdline, m_initramfs):
+        """find_networking_config returns when disabled by kernel cmdline."""
+        m_cmdline.return_value = {}
+        m_initramfs.return_value = {'config': 'disabled'}
+        self.assertEqual(
+            (None, NetworkConfigSource.initramfs),
+            self.init._find_networking_config())
+        self.assertEqual('DEBUG: network config disabled by initramfs\n',
+                         self.logs.getvalue())
+
+    @mock.patch('cloudinit.stages.cmdline.read_initramfs_config')
+    @mock.patch('cloudinit.stages.cmdline.read_kernel_cmdline_config')
+    def test_wb__find_networking_config_disabled_by_datasrc(
+            self, m_cmdline, m_initramfs):
         """find_networking_config returns when disabled by datasource cfg."""
         m_cmdline.return_value = {}  # Kernel doesn't disable networking
+        m_initramfs.return_value = {}  # initramfs doesn't disable networking
         self.init._cfg = {'system_info': {'paths': {'cloud_dir': self.tmpdir}},
                           'network': {}}  # system config doesn't disable
 
         self.init.datasource = FakeDataSource(
             network_config={'config': 'disabled'})
         self.assertEqual(
-            (None, 'ds'),
+            (None, NetworkConfigSource.ds),
             self.init._find_networking_config())
         self.assertEqual('DEBUG: network config disabled by ds\n',
                          self.logs.getvalue())
 
+    @mock.patch('cloudinit.stages.cmdline.read_initramfs_config')
     @mock.patch('cloudinit.stages.cmdline.read_kernel_cmdline_config')
-    def test_wb__find_networking_config_disabled_by_sysconfig(self, m_cmdline):
+    def test_wb__find_networking_config_disabled_by_sysconfig(
+            self, m_cmdline, m_initramfs):
         """find_networking_config returns when disabled by system config."""
         m_cmdline.return_value = {}  # Kernel doesn't disable networking
+        m_initramfs.return_value = {}  # initramfs doesn't disable networking
         self.init._cfg = {'system_info': {'paths': {'cloud_dir': self.tmpdir}},
                           'network': {'config': 'disabled'}}
         self.assertEqual(
-            (None, 'system_cfg'),
+            (None, NetworkConfigSource.system_cfg),
             self.init._find_networking_config())
         self.assertEqual('DEBUG: network config disabled by system_cfg\n',
                          self.logs.getvalue())
 
+    @mock.patch('cloudinit.stages.cmdline.read_initramfs_config')
+    @mock.patch('cloudinit.stages.cmdline.read_kernel_cmdline_config')
+    def test__find_networking_config_uses_datasrc_order(
+            self, m_cmdline, m_initramfs):
+        """find_networking_config should check sources in DS defined order"""
+        # cmdline and initramfs, which would normally be preferred over other
+        # sources, disable networking; in this case, though, the DS moves them
+        # later so its own config is preferred
+        m_cmdline.return_value = {'config': 'disabled'}
+        m_initramfs.return_value = {'config': 'disabled'}
+
+        ds_net_cfg = {'config': {'needle': True}}
+        self.init.datasource = FakeDataSource(network_config=ds_net_cfg)
+        self.init.datasource.network_config_sources = [
+            NetworkConfigSource.ds, NetworkConfigSource.system_cfg,
+            NetworkConfigSource.cmdline, NetworkConfigSource.initramfs]
+
+        self.assertEqual(
+            (ds_net_cfg, NetworkConfigSource.ds),
+            self.init._find_networking_config())
+
+    @mock.patch('cloudinit.stages.cmdline.read_initramfs_config')
+    @mock.patch('cloudinit.stages.cmdline.read_kernel_cmdline_config')
+    def test__find_networking_config_warns_if_datasrc_uses_invalid_src(
+            self, m_cmdline, m_initramfs):
+        """find_networking_config should check sources in DS defined order"""
+        ds_net_cfg = {'config': {'needle': True}}
+        self.init.datasource = FakeDataSource(network_config=ds_net_cfg)
+        self.init.datasource.network_config_sources = [
+            'invalid_src', NetworkConfigSource.ds]
+
+        self.assertEqual(
+            (ds_net_cfg, NetworkConfigSource.ds),
+            self.init._find_networking_config())
+        self.assertIn('WARNING: data source specifies an invalid network'
+                      ' cfg_source: invalid_src',
+                      self.logs.getvalue())
+
+    @mock.patch('cloudinit.stages.cmdline.read_initramfs_config')
     @mock.patch('cloudinit.stages.cmdline.read_kernel_cmdline_config')
-    def test_wb__find_networking_config_returns_kernel(self, m_cmdline):
+    def test__find_networking_config_warns_if_datasrc_uses_unavailable_src(
+            self, m_cmdline, m_initramfs):
+        """find_networking_config should check sources in DS defined order"""
+        ds_net_cfg = {'config': {'needle': True}}
+        self.init.datasource = FakeDataSource(network_config=ds_net_cfg)
+        self.init.datasource.network_config_sources = [
+            NetworkConfigSource.fallback, NetworkConfigSource.ds]
+
+        self.assertEqual(
+            (ds_net_cfg, NetworkConfigSource.ds),
+            self.init._find_networking_config())
+        self.assertIn('WARNING: data source specifies an unavailable network'
+                      ' cfg_source: fallback',
+                      self.logs.getvalue())
+
+    @mock.patch('cloudinit.stages.cmdline.read_initramfs_config')
+    @mock.patch('cloudinit.stages.cmdline.read_kernel_cmdline_config')
+    def test_wb__find_networking_config_returns_kernel(
+            self, m_cmdline, m_initramfs):
         """find_networking_config returns kernel cmdline config if present."""
         expected_cfg = {'config': ['fakekernel']}
         m_cmdline.return_value = expected_cfg
+        m_initramfs.return_value = {'config': ['fake_initrd']}
         self.init._cfg = {'system_info': {'paths': {'cloud_dir': self.tmpdir}},
                           'network': {'config': ['fakesys_config']}}
         self.init.datasource = FakeDataSource(
             network_config={'config': ['fakedatasource']})
         self.assertEqual(
-            (expected_cfg, 'cmdline'),
+            (expected_cfg, NetworkConfigSource.cmdline),
             self.init._find_networking_config())
 
+    @mock.patch('cloudinit.stages.cmdline.read_initramfs_config')
     @mock.patch('cloudinit.stages.cmdline.read_kernel_cmdline_config')
-    def test_wb__find_networking_config_returns_system_cfg(self, m_cmdline):
+    def test_wb__find_networking_config_returns_initramfs(
+            self, m_cmdline, m_initramfs):
+        """find_networking_config returns kernel cmdline config if present."""
+        expected_cfg = {'config': ['fake_initrd']}
+        m_cmdline.return_value = {}
+        m_initramfs.return_value = expected_cfg
+        self.init._cfg = {'system_info': {'paths': {'cloud_dir': self.tmpdir}},
+                          'network': {'config': ['fakesys_config']}}
+        self.init.datasource = FakeDataSource(
+            network_config={'config': ['fakedatasource']})
+        self.assertEqual(
+            (expected_cfg, NetworkConfigSource.initramfs),
+            self.init._find_networking_config())
+
+    @mock.patch('cloudinit.stages.cmdline.read_initramfs_config')
+    @mock.patch('cloudinit.stages.cmdline.read_kernel_cmdline_config')
+    def test_wb__find_networking_config_returns_system_cfg(
+            self, m_cmdline, m_initramfs):
         """find_networking_config returns system config when present."""
         m_cmdline.return_value = {}  # No kernel network config
+        m_initramfs.return_value = {}  # no initramfs network config
         expected_cfg = {'config': ['fakesys_config']}
         self.init._cfg = {'system_info': {'paths': {'cloud_dir': self.tmpdir}},
                           'network': expected_cfg}
         self.init.datasource = FakeDataSource(
             network_config={'config': ['fakedatasource']})
         self.assertEqual(
-            (expected_cfg, 'system_cfg'),
+            (expected_cfg, NetworkConfigSource.system_cfg),
             self.init._find_networking_config())
 
+    @mock.patch('cloudinit.stages.cmdline.read_initramfs_config')
     @mock.patch('cloudinit.stages.cmdline.read_kernel_cmdline_config')
-    def test_wb__find_networking_config_returns_datasrc_cfg(self, m_cmdline):
+    def test_wb__find_networking_config_returns_datasrc_cfg(
+            self, m_cmdline, m_initramfs):
         """find_networking_config returns datasource net config if present."""
         m_cmdline.return_value = {}  # No kernel network config
+        m_initramfs.return_value = {}  # no initramfs network config
         # No system config for network in setUp
         expected_cfg = {'config': ['fakedatasource']}
         self.init.datasource = FakeDataSource(network_config=expected_cfg)
         self.assertEqual(
-            (expected_cfg, 'ds'),
+            (expected_cfg, NetworkConfigSource.ds),
             self.init._find_networking_config())
 
+    @mock.patch('cloudinit.stages.cmdline.read_initramfs_config')
     @mock.patch('cloudinit.stages.cmdline.read_kernel_cmdline_config')
-    def test_wb__find_networking_config_returns_fallback(self, m_cmdline):
+    def test_wb__find_networking_config_returns_fallback(
+            self, m_cmdline, m_initramfs):
         """find_networking_config returns fallback config if not defined."""
         m_cmdline.return_value = {}  # Kernel doesn't disable networking
+        m_initramfs.return_value = {}  # no initramfs network config
         # Neither datasource nor system_info disable or provide network
 
         fake_cfg = {'config': [{'type': 'physical', 'name': 'eth9'}],
@@ -147,7 +254,7 @@ class TestInit(CiTestCase):
         distro = self.init.distro
         distro.generate_fallback_config = fake_generate_fallback
         self.assertEqual(
-            (fake_cfg, 'fallback'),
+            (fake_cfg, NetworkConfigSource.fallback),
             self.init._find_networking_config())
         self.assertNotIn('network config disabled', self.logs.getvalue())
 
@@ -166,8 +273,9 @@ class TestInit(CiTestCase):
             'INFO: network config is disabled by %s' % disable_file,
             self.logs.getvalue())
 
+    @mock.patch('cloudinit.net.get_interfaces_by_mac')
     @mock.patch('cloudinit.distros.ubuntu.Distro')
-    def test_apply_network_on_new_instance(self, m_ubuntu):
+    def test_apply_network_on_new_instance(self, m_ubuntu, m_macs):
         """Call distro apply_network_config methods on is_new_instance."""
         net_cfg = {
             'version': 1, 'config': [
@@ -175,7 +283,9 @@ class TestInit(CiTestCase):
                 'name': 'eth9', 'mac_address': '42:42:42:42:42:42'}]}
 
         def fake_network_config():
-            return net_cfg, 'fallback'
+            return net_cfg, NetworkConfigSource.fallback
+
+        m_macs.return_value = {'42:42:42:42:42:42': 'eth9'}
 
         self.init._find_networking_config = fake_network_config
         self.init.apply_network_config(True)
@@ -195,7 +305,7 @@ class TestInit(CiTestCase):
                 'name': 'eth9', 'mac_address': '42:42:42:42:42:42'}]}
 
         def fake_network_config():
-            return net_cfg, 'fallback'
+            return net_cfg, NetworkConfigSource.fallback
 
         self.init._find_networking_config = fake_network_config
         self.init.apply_network_config(True)
@@ -206,8 +316,9 @@ class TestInit(CiTestCase):
             " nor datasource network update on '%s' event" % EventType.BOOT,
             self.logs.getvalue())
 
+    @mock.patch('cloudinit.net.get_interfaces_by_mac')
     @mock.patch('cloudinit.distros.ubuntu.Distro')
-    def test_apply_network_on_datasource_allowed_event(self, m_ubuntu):
+    def test_apply_network_on_datasource_allowed_event(self, m_ubuntu, m_macs):
         """Apply network if datasource.update_metadata permits BOOT event."""
         old_instance_id = os.path.join(
             self.init.paths.get_cpath('data'), 'instance-id')
@@ -218,7 +329,9 @@ class TestInit(CiTestCase):
                 'name': 'eth9', 'mac_address': '42:42:42:42:42:42'}]}
 
         def fake_network_config():
-            return net_cfg, 'fallback'
+            return net_cfg, NetworkConfigSource.fallback
+
+        m_macs.return_value = {'42:42:42:42:42:42': 'eth9'}
 
         self.init._find_networking_config = fake_network_config
         self.init.datasource = FakeDataSource(paths=self.init.paths)
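Each new test above gains a second `@mock.patch` decorator, and the mock arguments arrive in a specific order: stacked patch decorators are applied bottom-up, so the decorator closest to the `def` supplies the first mock argument (`m_cmdline` before `m_initramfs`). A self-contained reminder of that rule, using stdlib `unittest.mock` and a purely illustrative stand-in class (`Src` and `probe` are my names, not cloud-init's):

```python
from unittest import mock


class Src:
    """Stand-in for the patched cmdline helpers; purely illustrative."""
    @staticmethod
    def read_cmdline():
        return 'real-cmdline'

    @staticmethod
    def read_initramfs():
        return 'real-initramfs'


# Stacked patches apply bottom-up: the decorator nearest `def` supplies the
# FIRST mock argument, which is why the tests above take
# (self, m_cmdline, m_initramfs) in that order.
@mock.patch.object(Src, 'read_initramfs')
@mock.patch.object(Src, 'read_cmdline')
def probe(m_cmdline, m_initramfs):
    m_cmdline.return_value = {}
    m_initramfs.return_value = {'config': 'disabled'}
    return Src.read_cmdline(), Src.read_initramfs()


print(probe())  # ({}, {'config': 'disabled'})
```

Swapping the two decorators without swapping the parameters would silently configure the wrong mock, which is the usual failure mode when a patch is added to an existing test.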
diff --git a/cloudinit/url_helper.py b/cloudinit/url_helper.py
index 0af0d9e..44ee61d 100644
--- a/cloudinit/url_helper.py
+++ b/cloudinit/url_helper.py
@@ -199,18 +199,19 @@ def _get_ssl_args(url, ssl_details):
 def readurl(url, data=None, timeout=None, retries=0, sec_between=1,
             headers=None, headers_cb=None, ssl_details=None,
             check_status=True, allow_redirects=True, exception_cb=None,
-            session=None, infinite=False, log_req_resp=True):
+            session=None, infinite=False, log_req_resp=True,
+            request_method=None):
     url = _cleanurl(url)
     req_args = {
         'url': url,
     }
     req_args.update(_get_ssl_args(url, ssl_details))
     req_args['allow_redirects'] = allow_redirects
-    req_args['method'] = 'GET'
+    if not request_method:
+        request_method = 'POST' if data else 'GET'
+    req_args['method'] = request_method
     if timeout is not None:
         req_args['timeout'] = max(float(timeout), 0)
-    if data:
-        req_args['method'] = 'POST'
     # It doesn't seem like config
     # was added in older library versions (or newer ones either), thus we
     # need to manually do the retries if it wasn't...
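The change above makes the HTTP verb injectable while preserving the old default (POST when a body is present, GET otherwise). The selection logic in isolation, as a hedged extraction rather than the full `readurl` (the helper name `choose_method` is mine):

```python
def choose_method(data=None, request_method=None):
    """Mirror the request-method selection from the readurl() diff above."""
    if not request_method:
        request_method = 'POST' if data else 'GET'
    return request_method


assert choose_method() == 'GET'                    # no body -> GET
assert choose_method(data=b'payload') == 'POST'    # body -> POST, as before
# an explicit override wins even when a body is present
assert choose_method(data=b'payload', request_method='PUT') == 'PUT'
```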
diff --git a/cloudinit/version.py b/cloudinit/version.py
index ddcd436..b04b11f 100644
--- a/cloudinit/version.py
+++ b/cloudinit/version.py
@@ -4,7 +4,7 @@
 #
 # This file is part of cloud-init. See LICENSE file for license information.
 
-__VERSION__ = "19.1"
+__VERSION__ = "19.2"
 _PACKAGED_VERSION = '@@PACKAGED_VERSION@@'
 
 FEATURES = [
diff --git a/debian/changelog b/debian/changelog
index e14ee1c..9a84824 100644
--- a/debian/changelog
+++ b/debian/changelog
@@ -1,9 +1,68 @@
-cloud-init (19.1-1-gbaa47854-0ubuntu1~16.04.2) UNRELEASED; urgency=medium
+cloud-init (19.2-21-ge6383719-0ubuntu1~16.04.1) xenial; urgency=medium
 
+  * debian/cloud-init.templates: enable Exoscale cloud.
   * refresh patches:
     + debian/patches/ubuntu-advantage-revert-tip.patch
-
- -- Chad Smith <chad.smith@canonical.com>  Tue, 04 Jun 2019 14:59:14 -0600
+  * refresh patches:
+    + debian/patches/azure-apply-network-config-false.patch
+    + debian/patches/azure-use-walinux-agent.patch
+    + debian/patches/ubuntu-advantage-revert-tip.patch
+  * New upstream snapshot. (LP: #1841099)
+    - ubuntu-drivers: call db_x_loadtemplatefile to accept NVIDIA EULA
+    - Add missing #cloud-config comment on first example in documentation.
+      [Florian Müller]
+    - ubuntu-drivers: emit latelink=true debconf to accept nvidia eula
+    - DataSourceOracle: prefer DS network config over initramfs
+    - format.rst: add text/jinja2 to list of content types (+ cleanups)
+    - Add GitHub pull request template to point people at hacking doc
+    - cloudinit/distros/parsers/sys_conf: add docstring to SysConf
+    - pyflakes: remove unused variable [Joshua Powers]
+    - Azure: Record boot timestamps, system information, and diagnostic events
+      [Anh Vo]
+    - DataSourceOracle: configure secondary NICs on Virtual Machines
+    - distros: fix confusing variable names
+    - azure/net: generate_fallback_nic emits network v2 config instead of v1
+    - Add support for publishing host keys to GCE guest attributes
+      [Rick Wright]
+    - New data source for the Exoscale.com cloud platform [Chris Glass]
+    - doc: remove intersphinx extension
+    - cc_set_passwords: rewrite documentation
+    - net/cmdline: split interfaces_by_mac and init network config
+      determination
+    - stages: allow data sources to override network config source order
+    - cloud_tests: updates and fixes
+    - Fix bug rendering MTU on bond or vlan when input was netplan.
+      [Scott Moser]
+    - net: update net sequence, include wait on netdevs, opensuse netrules path
+    - Release 19.2
+    - net: add rfc3442 (classless static routes) to EphemeralDHCP
+    - templates/ntp.conf.debian.tmpl: fix missing newline for pools
+    - Support netplan renderer in Arch Linux [Conrad Hoffmann]
+    - Fix typo in publicly viewable documentation. [David Medberry]
+    - Add a cdrom size checker for OVF ds to ds-identify [Pengpeng Sun]
+    - VMWare: Trigger the post customization script via cc_scripts module.
+      [Xiaofeng Wang]
+    - Cloud-init analyze module: Added ability to analyze boot events.
+      [Sam Gilson]
+    - Update debian eni network configuration location, retain Ubuntu setting
+      [Janos Lenart]
+    - net: skip bond interfaces in get_interfaces [Stanislav Makar]
+    - Fix a couple of issues raised by a coverity scan
+    - Add missing dsname for Hetzner Cloud datasource [Markus Schade]
+    - doc: indicate that netplan is default in Ubuntu now
+    - azure: add region and AZ properties from imds compute location metadata
+    - sysconfig: support more bonding options [Penghui Liao]
+    - cloud-init-generator: use libexec path to ds-identify on redhat systems
+    - tools/build-on-freebsd: update to python3 [Gonéri Le Bouder]
+    - Allow identification of OpenStack by Asset Tag [Mark T. Voelker]
+    - Fix spelling error making 'an Ubuntu' consistent. [Brian Murray]
+    - run-container: centos: comment out the repo mirrorlist [Paride Legovini]
+    - netplan: update netplan key mappings for gratuitous-arp
+    - freebsd: fix the name of cloudcfg VARIANT [Gonéri Le Bouder]
+    - freebsd: ability to grow root file system [Gonéri Le Bouder]
+    - freebsd: NoCloud data source support [Gonéri Le Bouder]
+
+ -- Chad Smith <chad.smith@canonical.com>  Thu, 22 Aug 2019 11:55:27 -0600
 
 cloud-init (19.1-1-gbaa47854-0ubuntu1~16.04.1) xenial; urgency=medium
 
diff --git a/debian/cloud-init.templates b/debian/cloud-init.templates
index 5ed37f7..64cafbf 100644
--- a/debian/cloud-init.templates
+++ b/debian/cloud-init.templates
@@ -1,8 +1,8 @@
 Template: cloud-init/datasources
 Type: multiselect
-Default: NoCloud, ConfigDrive, OpenNebula, DigitalOcean, Azure, AltCloud, OVF, MAAS, GCE, OpenStack, CloudSigma, SmartOS, Bigstep, Scaleway, AliYun, Ec2, CloudStack, None
-Choices-C: NoCloud, ConfigDrive, OpenNebula, DigitalOcean, Azure, AltCloud, OVF, MAAS, GCE, OpenStack, CloudSigma, SmartOS, Bigstep, Scaleway, AliYun, Ec2, CloudStack, None
-Choices: NoCloud: Reads info from /var/lib/cloud/seed only, ConfigDrive: Reads data from Openstack Config Drive, OpenNebula: read from OpenNebula context disk, DigitalOcean: reads data from Droplet datasource, Azure: read from MS Azure cdrom. Requires walinux-agent, AltCloud: config disks for RHEVm and vSphere, OVF: Reads data from OVF Transports, MAAS: Reads data from Ubuntu MAAS, GCE: google compute metadata service, OpenStack: native openstack metadata service, CloudSigma: metadata over serial for cloudsigma.com, SmartOS: Read from SmartOS metadata service, Bigstep: Bigstep metadata service, Scaleway: Scaleway metadata service, AliYun: Alibaba metadata service, Ec2: reads data from EC2 Metadata service, CloudStack: Read from CloudStack metadata service, None: Failsafe datasource
+Default: NoCloud, ConfigDrive, OpenNebula, DigitalOcean, Azure, AltCloud, OVF, MAAS, GCE, OpenStack, CloudSigma, SmartOS, Bigstep, Scaleway, AliYun, Ec2, CloudStack, Exoscale, None
+Choices-C: NoCloud, ConfigDrive, OpenNebula, DigitalOcean, Azure, AltCloud, OVF, MAAS, GCE, OpenStack, CloudSigma, SmartOS, Bigstep, Scaleway, AliYun, Ec2, CloudStack, Exoscale, None
+Choices: NoCloud: Reads info from /var/lib/cloud/seed only, ConfigDrive: Reads data from Openstack Config Drive, OpenNebula: read from OpenNebula context disk, DigitalOcean: reads data from Droplet datasource, Azure: read from MS Azure cdrom. Requires walinux-agent, AltCloud: config disks for RHEVm and vSphere, OVF: Reads data from OVF Transports, MAAS: Reads data from Ubuntu MAAS, GCE: google compute metadata service, OpenStack: native openstack metadata service, CloudSigma: metadata over serial for cloudsigma.com, SmartOS: Read from SmartOS metadata service, Bigstep: Bigstep metadata service, Scaleway: Scaleway metadata service, AliYun: Alibaba metadata service, Ec2: reads data from EC2 Metadata service, CloudStack: Read from CloudStack metadata service, Exoscale: Exoscale metadata service, None: Failsafe datasource
 Description: Which data sources should be searched?
  Cloud-init supports searching different "Data Sources" for information
  that it uses to configure a cloud instance.
diff --git a/debian/patches/azure-apply-network-config-false.patch b/debian/patches/azure-apply-network-config-false.patch
index f0c2fcf..d982c1d 100644
--- a/debian/patches/azure-apply-network-config-false.patch
+++ b/debian/patches/azure-apply-network-config-false.patch
@@ -10,7 +10,7 @@ Forwarded: not-needed
10Last-Update: 2018-10-1710Last-Update: 2018-10-17
11--- a/cloudinit/sources/DataSourceAzure.py11--- a/cloudinit/sources/DataSourceAzure.py
12+++ b/cloudinit/sources/DataSourceAzure.py12+++ b/cloudinit/sources/DataSourceAzure.py
-@@ -220,7 +220,7 @@ BUILTIN_DS_CONFIG = {
+@@ -225,7 +225,7 @@ BUILTIN_DS_CONFIG = {
      },
      'disk_aliases': {'ephemeral0': RESOURCE_DISK_PATH},
      'dhclient_lease_file': LEASE_FILE,
diff --git a/debian/patches/azure-use-walinux-agent.patch b/debian/patches/azure-use-walinux-agent.patch
index b4ad76c..3e9ddd9 100644
--- a/debian/patches/azure-use-walinux-agent.patch
+++ b/debian/patches/azure-use-walinux-agent.patch
@@ -6,7 +6,7 @@ Forwarded: not-needed
6Author: Scott Moser <smoser@ubuntu.com>6Author: Scott Moser <smoser@ubuntu.com>
7--- a/cloudinit/sources/DataSourceAzure.py7--- a/cloudinit/sources/DataSourceAzure.py
8+++ b/cloudinit/sources/DataSourceAzure.py8+++ b/cloudinit/sources/DataSourceAzure.py
-@@ -209,7 +209,7 @@ if util.is_FreeBSD():
+@@ -214,7 +214,7 @@ if util.is_FreeBSD():
      PLATFORM_ENTROPY_SOURCE = None
  
  BUILTIN_DS_CONFIG = {
diff --git a/debian/patches/ubuntu-advantage-revert-tip.patch b/debian/patches/ubuntu-advantage-revert-tip.patch
index 966d2f3..0730a06 100644
--- a/debian/patches/ubuntu-advantage-revert-tip.patch
+++ b/debian/patches/ubuntu-advantage-revert-tip.patch
@@ -9,10 +9,8 @@ Forwarded: not-needed
9Last-Update: 2019-05-109Last-Update: 2019-05-10
10---10---
11This patch header follows DEP-3: http://dep.debian.net/deps/dep3/11This patch header follows DEP-3: http://dep.debian.net/deps/dep3/
-Index: cloud-init/cloudinit/config/cc_ubuntu_advantage.py
-===================================================================
---- cloud-init.orig/cloudinit/config/cc_ubuntu_advantage.py
-+++ cloud-init/cloudinit/config/cc_ubuntu_advantage.py
+--- a/cloudinit/config/cc_ubuntu_advantage.py
++++ b/cloudinit/config/cc_ubuntu_advantage.py
 @@ -1,143 +1,150 @@
 +# Copyright (C) 2018 Canonical Ltd.
 +#
@@ -294,10 +292,8 @@ Index: cloud-init/cloudinit/config/cc_ubuntu_advantage.py
 +    run_commands(cfgin.get('commands', []))
  
  # vi: ts=4 expandtab
-Index: cloud-init/cloudinit/config/tests/test_ubuntu_advantage.py
-===================================================================
---- cloud-init.orig/cloudinit/config/tests/test_ubuntu_advantage.py
-+++ cloud-init/cloudinit/config/tests/test_ubuntu_advantage.py
+--- a/cloudinit/config/tests/test_ubuntu_advantage.py
++++ b/cloudinit/config/tests/test_ubuntu_advantage.py
 @@ -1,7 +1,10 @@
  # This file is part of cloud-init. See LICENSE file for license information.
  
diff --git a/doc/examples/cloud-config-datasources.txt b/doc/examples/cloud-config-datasources.txt
index 2651c02..52a2476 100644
--- a/doc/examples/cloud-config-datasources.txt
+++ b/doc/examples/cloud-config-datasources.txt
@@ -38,7 +38,7 @@ datasource:
    # these are optional, but allow you to basically provide a datasource
    # right here
    user-data: |
-      # This is the user-data verbatum
+      # This is the user-data verbatim
    meta-data:
       instance-id: i-87018aed
       local-hostname: myhost.internal
diff --git a/doc/examples/cloud-config-user-groups.txt b/doc/examples/cloud-config-user-groups.txt
index 6a363b7..f588bfb 100644
--- a/doc/examples/cloud-config-user-groups.txt
+++ b/doc/examples/cloud-config-user-groups.txt
@@ -1,3 +1,4 @@
+#cloud-config
 # Add groups to the system
 # The following example adds the ubuntu group with members 'root' and 'sys'
 # and the empty group cloud-users.
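The line added by this hunk matters operationally: cloud-init only hands a user-data file to the cloud-config handler when its very first line is exactly `#cloud-config`. A minimal sketch combining that header with the group definitions this example file describes:

```yaml
#cloud-config
# The first line must be exactly "#cloud-config" (no leading
# whitespace) or cloud-init will not treat this as cloud-config.
groups:
  - ubuntu: [root, sys]
  - cloud-users
```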
diff --git a/doc/rtd/conf.py b/doc/rtd/conf.py
index 50eb05c..4174477 100644
--- a/doc/rtd/conf.py
+++ b/doc/rtd/conf.py
@@ -27,16 +27,11 @@ project = 'Cloud-Init'
 # Add any Sphinx extension module names here, as strings. They can be
 # extensions coming with Sphinx (named 'sphinx.ext.*') or your custom ones.
 extensions = [
-    'sphinx.ext.intersphinx',
     'sphinx.ext.autodoc',
     'sphinx.ext.autosectionlabel',
     'sphinx.ext.viewcode',
 ]
 
-intersphinx_mapping = {
-    'sphinx': ('http://sphinx.pocoo.org', None)
-}
-
 # The suffix of source filenames.
 source_suffix = '.rst'
 
diff --git a/doc/rtd/topics/analyze.rst b/doc/rtd/topics/analyze.rst
new file mode 100644
index 0000000..5cf38bd
--- /dev/null
+++ b/doc/rtd/topics/analyze.rst
@@ -0,0 +1,84 @@
+*************************
+Cloud-init Analyze Module
+*************************
+
+Overview
+========
+The analyze module was added to cloud-init in order to help analyze cloud-init boot time
+performance. It is loosely based on systemd-analyze, and has 4 main actions:
+show, blame, dump, and boot.
+
+The 'show' action is similar to 'systemd-analyze critical-chain', which prints a list of units, the
+time they started, and how long they took. Cloud-init has four stages, and within each stage
+a number of modules may run depending on configuration. 'cloud-init analyze show' will, for each
+boot, print this information along with a summary total time per boot.
+
+The 'blame' action matches 'systemd-analyze blame', in that it prints, in descending order,
+the units that took the longest to run. This output is highly useful for examining where cloud-init
+is spending its time during execution.
+
+The 'dump' action simply dumps the cloud-init logs that the analyze module is performing
+the analysis on and returns a list of dictionaries that can be consumed for other reporting needs.
+
+The 'boot' action prints out kernel-related timestamps that are not included in any of the
+cloud-init logs. Three different timestamps are presented to the user:
+kernel start, kernel finish boot, and cloud-init start. This was added for additional
+clarity into the parts of the boot process that cloud-init does not control, to aid in debugging
+performance issues related to cloud-init startup and in tracking regressions.
+
+Usage
+=====
+Using each of the printing formats is as easy as running one of the following bash commands:
+
+.. code-block:: shell-session
+
+    cloud-init analyze show
+    cloud-init analyze blame
+    cloud-init analyze dump
+    cloud-init analyze boot
+
+Cloud-init analyze boot Timestamp Gathering
+===========================================
+The following boot-related timestamps are gathered on demand when cloud-init analyze boot runs:
+
+- Kernel startup, which is inferred from the system uptime
+- Kernel finishes initialization, which is inferred from the systemd
+  UserspaceTimestampMonotonic property
+- Cloud-init activation, which is inferred from the InactiveExitTimestamp property of the
+  cloud-init-local systemd unit
+
+In order to gather the necessary timestamps using systemd, running the commands
+
+.. code-block:: shell-session
+
+    systemctl show -p UserspaceTimestampMonotonic
+    systemctl show cloud-init-local -p InactiveExitTimestampMonotonic
+
+will gather the UserspaceTimestamp and InactiveExitTimestamp.
+The UserspaceTimestamp tracks when the init system starts, which is used as an indicator of the
+kernel finishing initialization. The InactiveExitTimestamp tracks when a particular systemd unit
+transitions from the Inactive to the Active state, which can be used to mark the beginning of
+systemd's activation of cloud-init.
+
+Currently this only works for distros that use systemd as the init system. We will be expanding
+support for other distros in the future and this document will be updated accordingly.
+
+If systemd is not present on the system, dmesg is used to attempt to find an event that logs the
+beginning of the init system. However, with this method only the first two timestamps can be
+found; dmesg does not monitor userspace processes, so no cloud-init start timestamp is emitted
+as it is when using systemd.
+
+List of Cloud-init analyze boot supported distros
+=================================================
+- Arch
+- CentOS
+- Debian
+- Fedora
+- OpenSuSE
+- Red Hat Enterprise Linux
+- Ubuntu
+- SUSE Linux Enterprise Server
+- CoreOS
+
+List of Cloud-init analyze boot unsupported distros
+===================================================
+- FreeBSD
+- Gentoo
\ No newline at end of file
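The systemctl-based timestamp gathering described in the analyze documentation above can be sketched roughly as follows. This is an illustrative helper only, not the code in cloudinit/analyze; the names `parse_monotonic` and `unit_timestamp` are hypothetical. The key detail is that systemd's `*TimestampMonotonic` properties are microsecond counts, so they must be scaled to seconds:

```python
import subprocess


def parse_monotonic(show_output):
    """Convert `Property=2607429`-style `systemctl show` output to seconds.

    systemd reports monotonic timestamps as integer microsecond counts.
    """
    _prop, _, value = show_output.strip().partition("=")
    return int(value) / 1e6


def unit_timestamp(prop, unit=None):
    """Hypothetical wrapper: run `systemctl show -p <prop> [unit]`."""
    cmd = ["systemctl", "show", "-p", prop]
    if unit:
        cmd.append(unit)
    return parse_monotonic(subprocess.check_output(cmd, text=True))


# On a systemd host one could then gather the two timestamps like so:
# kernel_end = unit_timestamp("UserspaceTimestampMonotonic")
# ci_start = unit_timestamp(
#     "InactiveExitTimestampMonotonic", "cloud-init-local")
```

The subtraction of these two values (cloud-init activation minus init-system start) would give the time systemd spent before activating cloud-init, which is the gap the 'boot' action reports on.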
diff --git a/doc/rtd/topics/capabilities.rst b/doc/rtd/topics/capabilities.rst
index 0d8b894..6d85a99 100644
--- a/doc/rtd/topics/capabilities.rst
+++ b/doc/rtd/topics/capabilities.rst
@@ -217,6 +217,7 @@ Get detailed reports of where cloud-init spends most of its time. See
 * **dump** Machine-readable JSON dump of all cloud-init tracked events.
 * **show** show time-ordered report of the cost of operations during each
   boot stage.
+* **boot** show timestamps from kernel initialization, kernel finish initialization, and cloud-init start.
 
 .. _cli_devel:
 
diff --git a/doc/rtd/topics/datasources.rst b/doc/rtd/topics/datasources.rst
index 648c606..2148cd5 100644
--- a/doc/rtd/topics/datasources.rst
+++ b/doc/rtd/topics/datasources.rst
@@ -155,6 +155,7 @@ Follow for more information.
    datasources/configdrive.rst
    datasources/digitalocean.rst
    datasources/ec2.rst
+   datasources/exoscale.rst
    datasources/maas.rst
    datasources/nocloud.rst
    datasources/opennebula.rst
diff --git a/doc/rtd/topics/datasources/exoscale.rst b/doc/rtd/topics/datasources/exoscale.rst
new file mode 100644
index 0000000..27aec9c
--- /dev/null
+++ b/doc/rtd/topics/datasources/exoscale.rst
@@ -0,0 +1,68 @@
1.. _datasource_exoscale:
2
3Exoscale
4========
5
6This datasource supports reading from the metadata server used on the
7`Exoscale platform <https://exoscale.com>`_.
8
9Use of the Exoscale datasource is recommended to benefit from new features of
10the Exoscale platform.
11
12The datasource relies on the availability of a compatible metadata server
13(``http://169.254.169.254`` is used by default) and its companion password
14server, reachable at the same address (by default on port 8080).
15
16Crawling of metadata
17--------------------
18
19The metadata service and password server are crawled slightly differently:
20
21 * The "metadata service" is crawled every boot.
22 * The password server is also crawled every boot (the Exoscale datasource
23 forces the password module to run with "frequency always").
24
25In the password server case, the following rules apply in order to enable the
26"restore instance password" functionality:
27
28 * If a password is returned by the password server, it is then marked "saved"
29 by the cloud-init datasource. Subsequent boots will skip setting the password
30 (the password server will return "saved_password").
31 * When the instance password is reset (via the Exoscale UI), the password
32 server will return the non-empty password at next boot, therefore causing
33 cloud-init to reset the instance's password.
34
35Configuration
36-------------
37
38Users of this datasource are discouraged from changing the default settings
39unless instructed to by Exoscale support.
40
41The following settings are available and can be set for the datasource in system
42configuration (in `/etc/cloud/cloud.cfg.d/`).
43
44The settings available are:
45
46 * **metadata_url**: The URL for the metadata service (defaults to
47 ``http://169.254.169.254``)
48 * **api_version**: The API version path on which to query the instance metadata
49 (defaults to ``1.0``)
50 * **password_server_port**: The port (on the metadata server) on which the
51 password server listens (defaults to ``8080``).
52 * **timeout**: the timeout value provided to urlopen for each individual http
53 request. (defaults to ``10``)
54 * **retries**: The number of retries that should be done for an http request
55 (defaults to ``6``)
56
57
58An example configuration with the default values is provided below:
59
60.. sourcecode:: yaml
61
62 datasource:
63 Exoscale:
64 metadata_url: "http://169.254.169.254"
65 api_version: "1.0"
66 password_server_port: 8080
67 timeout: 10
68 retries: 6
diff --git a/doc/rtd/topics/datasources/oracle.rst b/doc/rtd/topics/datasources/oracle.rst
index f2383ce..98c4657 100644
--- a/doc/rtd/topics/datasources/oracle.rst
+++ b/doc/rtd/topics/datasources/oracle.rst
@@ -8,7 +8,7 @@ This datasource reads metadata, vendor-data and user-data from
 
 Oracle Platform
 ---------------
-OCI provides bare metal and virtual machines. In both cases, 
+OCI provides bare metal and virtual machines. In both cases,
 the platform identifies itself via DMI data in the chassis asset tag
 with the string 'OracleCloud.com'.
 
@@ -22,5 +22,28 @@ Cloud-init has a specific datasource for Oracle in order to:
  implementation.
 
 
+Configuration
+-------------
+
+The following configuration can be set for the datasource in system
+configuration (in ``/etc/cloud/cloud.cfg`` or ``/etc/cloud/cloud.cfg.d/``).
+
+The settings that may be configured are:
The diff has been truncated for viewing.
