Merge ~chad.smith/cloud-init:ubuntu/bionic into cloud-init:ubuntu/bionic
Proposed by Chad Smith
Status: Merged
Merged at revision: 01cf9304e4d697cffff7db9db48e374b31cb50bd
Proposed branch: ~chad.smith/cloud-init:ubuntu/bionic
Merge into: cloud-init:ubuntu/bionic
Diff against target: 4639 lines (+2418/-613), 49 files modified
ChangeLog (+117/-0)
cloudinit/cmd/main.py (+5/-4)
cloudinit/config/cc_apt_configure.py (+1/-1)
cloudinit/config/cc_mounts.py (+11/-0)
cloudinit/config/cc_ubuntu_advantage.py (+116/-109)
cloudinit/config/cc_ubuntu_drivers.py (+112/-0)
cloudinit/config/tests/test_ubuntu_advantage.py (+191/-156)
cloudinit/config/tests/test_ubuntu_drivers.py (+174/-0)
cloudinit/net/eni.py (+11/-5)
cloudinit/net/network_state.py (+33/-8)
cloudinit/net/sysconfig.py (+29/-11)
cloudinit/net/tests/test_init.py (+1/-1)
cloudinit/reporting/handlers.py (+57/-60)
cloudinit/sources/DataSourceAzure.py (+179/-95)
cloudinit/sources/DataSourceCloudStack.py (+1/-1)
cloudinit/sources/DataSourceConfigDrive.py (+2/-5)
cloudinit/sources/DataSourceEc2.py (+7/-3)
cloudinit/sources/DataSourceNoCloud.py (+3/-1)
cloudinit/sources/helpers/azure.py (+42/-3)
cloudinit/util.py (+17/-13)
cloudinit/version.py (+1/-1)
config/cloud.cfg.tmpl (+3/-0)
debian/changelog (+48/-0)
debian/patches/series (+1/-0)
debian/patches/ubuntu-advantage-revert-tip.patch (+735/-0)
doc/rtd/topics/datasources/azure.rst (+35/-22)
doc/rtd/topics/datasources/nocloud.rst (+1/-1)
doc/rtd/topics/modules.rst (+1/-0)
packages/redhat/cloud-init.spec.in (+3/-1)
packages/suse/cloud-init.spec.in (+3/-1)
setup.py (+2/-1)
tests/cloud_tests/releases.yaml (+16/-0)
tests/cloud_tests/testcases/modules/apt_pipelining_disable.yaml (+1/-2)
tests/cloud_tests/testcases/modules/apt_pipelining_os.py (+3/-3)
tests/cloud_tests/testcases/modules/apt_pipelining_os.yaml (+4/-5)
tests/data/azure/non_unicode_random_string (+1/-0)
tests/unittests/test_datasource/test_azure.py (+32/-5)
tests/unittests/test_datasource/test_azure_helper.py (+7/-2)
tests/unittests/test_datasource/test_nocloud.py (+42/-0)
tests/unittests/test_distros/test_netconfig.py (+2/-0)
tests/unittests/test_ds_identify.py (+17/-0)
tests/unittests/test_handler/test_handler_mounts.py (+29/-1)
tests/unittests/test_handler/test_schema.py (+1/-0)
tests/unittests/test_net.py (+251/-18)
tests/unittests/test_reporting_hyperv.py (+49/-55)
tools/build-on-freebsd (+4/-5)
tools/ds-identify (+4/-3)
tools/read-version (+5/-2)
tox.ini (+8/-9)
Related bugs:
Reviewer | Review Type | Date Requested | Status
---|---|---|---
Ryan Harper | | | Approve
Server Team CI bot | continuous-integration | | Approve
Commit message
New upstream snapshot for SRU into bionic
The only operation out of the norm here is adding a quilt patch to revert changes to the ubuntu-advantage cloud-config module because ubuntu-
LP: #1828641
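For context, the rewritten cc_ubuntu_advantage module in this snapshot replaces the old `commands` list with a contract token and an optional list of services to enable (see the diff below). A minimal cloud-config sketch of the new schema, with a placeholder token:

```yaml
#cloud-config
# Attach this machine to an Ubuntu Advantage contract and enable
# selected services. <ua_contract_token> is a placeholder; obtain a
# real token from https://ubuntu.com/advantage.
ubuntu_advantage:
  token: <ua_contract_token>
  enable:
    - esm
    - livepatch
```

Note that the underscore-delimited `ubuntu_advantage` key is now expected; the hyphenated `ubuntu-advantage` key is accepted but logged as deprecated.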
Description of the change
Server Team CI bot (server-team-bot) wrote:
Review: Approve (continuous-integration)
Ryan Harper (raharper) wrote:
LGTM. Verified I get the same branch as what's proposed.
Review: Approve
Preview Diff
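Among the changes in the diff below is the fix for the missing 'modules-init' key in cloud-init's status dict (LP: #1815109): rather than populating null entries only when the dict is freshly created, every known mode is backfilled. A standalone sketch of the corrected logic (simplified; mode names taken from cloud-init, everything else condensed from the `status_wrapper` hunk in cloudinit/cmd/main.py):

```python
# Sketch of the status_wrapper fix: backfill a null status entry for
# every mode, whether the status dict was just created or was loaded
# from disk with some keys missing.
MODES = ('init', 'init-local', 'modules-init', 'modules-config',
         'modules-final')
NULL_STATUS = {'errors': [], 'start': None, 'finished': None}


def ensure_mode_status(status):
    """Return status with a null entry for every mode lacking one."""
    if status is None:
        status = {'v1': {}}
        status['v1']['datasource'] = None
    for mode in MODES:
        if mode not in status['v1']:
            status['v1'][mode] = NULL_STATUS.copy()
    return status
```

The key point of the fix is that the backfill loop now runs on every path, so a status.json written by an older cloud-init (without 'modules-init') no longer triggers a KeyError.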
1 | diff --git a/ChangeLog b/ChangeLog |
2 | index 8fa6fdd..bf48fd4 100644 |
3 | --- a/ChangeLog |
4 | +++ b/ChangeLog |
5 | @@ -1,3 +1,120 @@ |
6 | +19.1: |
7 | + - freebsd: add chpasswd pkg in the image [Gonéri Le Bouder] |
8 | + - tests: add Eoan release [Paride Legovini] |
9 | + - cc_mounts: check if mount -a on no-change fstab path |
10 | + [Jason Zions (MSFT)] (LP: #1825596) |
11 | + - replace remaining occurrences of LOG.warn [Daniel Watkins] |
12 | + - DataSourceAzure: Adjust timeout for polling IMDS [Anh Vo] |
13 | + - Azure: Changes to the Hyper-V KVP Reporter [Anh Vo] |
14 | + - git tests: no longer show warning about safe yaml. |
15 | + - tools/read-version: handle errors [Chad Miller] |
16 | + - net/sysconfig: only indicate available on known sysconfig distros |
17 | + (LP: #1819994) |
18 | + - packages: update rpm specs for new bash completion path |
19 | + [Daniel Watkins] (LP: #1825444) |
20 | + - test_azure: mock util.SeLinuxGuard where needed |
21 | + [Jason Zions (MSFT)] (LP: #1825253) |
22 | + - setup.py: install bash completion script in new location [Daniel Watkins] |
23 | + - mount_cb: do not pass sync and rw options to mount |
24 | + [Gonéri Le Bouder] (LP: #1645824) |
25 | + - cc_apt_configure: fix typo in apt documentation [Dominic Schlegel] |
26 | + - Revert "DataSource: move update_events from a class to an instance..." |
27 | + [Daniel Watkins] |
28 | + - Change DataSourceNoCloud to ignore file system label's case. |
29 | + [Risto Oikarinen] |
30 | + - cmd:main.py: Fix missing 'modules-init' key in modes dict |
31 | + [Antonio Romito] (LP: #1815109) |
32 | + - ubuntu_advantage: rewrite cloud-config module |
33 | + - Azure: Treat _unset network configuration as if it were absent |
34 | + [Jason Zions (MSFT)] (LP: #1823084) |
35 | + - DatasourceAzure: add additional logging for azure datasource [Anh Vo] |
36 | + - cloud_tests: fix apt_pipelining test-cases |
37 | + - Azure: Ensure platform random_seed is always serializable as JSON. |
38 | + [Jason Zions (MSFT)] |
39 | + - net/sysconfig: write out SUSE-compatible IPv6 config [Robert Schweikert] |
40 | + - tox: Update testenv for openSUSE Leap to 15.0 [Thomas Bechtold] |
41 | + - net: Fix ipv6 static routes when using eni renderer |
42 | + [Raphael Glon] (LP: #1818669) |
43 | + - Add ubuntu_drivers config module [Daniel Watkins] |
44 | + - doc: Refresh Azure walinuxagent docs [Daniel Watkins] |
45 | + - tox: bump pylint version to latest (2.3.1) [Daniel Watkins] |
46 | + - DataSource: move update_events from a class to an instance attribute |
47 | + [Daniel Watkins] (LP: #1819913) |
48 | + - net/sysconfig: Handle default route setup for dhcp configured NICs |
49 | + [Robert Schweikert] (LP: #1812117) |
50 | + - DataSourceEc2: update RELEASE_BLOCKER to be more accurate |
51 | + [Daniel Watkins] |
52 | + - cloud-init-per: POSIX sh does not support string subst, use sed |
53 | + (LP: #1819222) |
54 | + - Support locking user with usermod if passwd is not available. |
55 | + - Example for Microsoft Azure data disk added. [Anton Olifir] |
56 | + - clean: correctly determine the path for excluding seed directory |
57 | + [Daniel Watkins] (LP: #1818571) |
58 | + - helpers/openstack: Treat unknown link types as physical |
59 | + [Daniel Watkins] (LP: #1639263) |
60 | + - drop Python 2.6 support and our NIH version detection [Daniel Watkins] |
61 | + - tip-pylint: Fix assignment-from-return-none errors |
62 | + - net: append type:dhcp[46] only if dhcp[46] is True in v2 netconfig |
63 | + [Kurt Stieger] (LP: #1818032) |
64 | + - cc_apt_pipelining: stop disabling pipelining by default |
65 | + [Daniel Watkins] (LP: #1794982) |
66 | + - tests: fix some slow tests and some leaking state [Daniel Watkins] |
67 | + - util: don't determine string_types ourselves [Daniel Watkins] |
68 | + - cc_rsyslog: Escape possible nested set [Daniel Watkins] (LP: #1816967) |
69 | + - Enable encrypted_data_bag_secret support for Chef |
70 | + [Eric Williams] (LP: #1817082) |
71 | + - azure: Filter list of ssh keys pulled from fabric [Jason Zions (MSFT)] |
72 | + - doc: update merging doc with fixes and some additional details/examples |
73 | + - tests: integration test failure summary to use traceback if empty error |
74 | + - This is to fix https://bugs.launchpad.net/cloud-init/+bug/1812676 |
75 | + [Vitaly Kuznetsov] |
76 | + - EC2: Rewrite network config on AWS Classic instances every boot |
77 | + [Guilherme G. Piccoli] (LP: #1802073) |
78 | + - netinfo: Adjust ifconfig output parsing for FreeBSD ipv6 entries |
79 | + (LP: #1779672) |
80 | + - netplan: Don't render yaml aliases when dumping netplan (LP: #1815051) |
81 | + - add PyCharm IDE .idea/ path to .gitignore [Dominic Schlegel] |
82 | + - correct grammar issue in instance metadata documentation |
83 | + [Dominic Schlegel] (LP: #1802188) |
84 | + - clean: cloud-init clean should not trace when run from within cloud_dir |
85 | + (LP: #1795508) |
86 | + - Resolve flake8 comparison and pycodestyle over-ident issues |
87 | + [Paride Legovini] |
88 | + - opennebula: also exclude epochseconds from changed environment vars |
89 | + (LP: #1813641) |
90 | + - systemd: Render generator from template to account for system |
91 | + differences. [Robert Schweikert] |
92 | + - sysconfig: On SUSE, use STARTMODE instead of ONBOOT |
93 | + [Robert Schweikert] (LP: #1799540) |
94 | + - flake8: use ==/!= to compare str, bytes, and int literals |
95 | + [Paride Legovini] |
96 | + - opennebula: exclude EPOCHREALTIME as known bash env variable with a |
97 | + delta (LP: #1813383) |
98 | + - tox: fix disco httpretty dependencies for py37 (LP: #1813361) |
99 | + - run-container: uncomment baseurl in yum.repos.d/*.repo when using a |
100 | + proxy [Paride Legovini] |
101 | + - lxd: install zfs-linux instead of zfs meta package |
102 | + [Johnson Shi] (LP: #1799779) |
103 | + - net/sysconfig: do not write a resolv.conf file with only the header. |
104 | + [Robert Schweikert] |
105 | + - net: Make sysconfig renderer compatible with Network Manager. |
106 | + [Eduardo Otubo] |
107 | + - cc_set_passwords: Fix regex when parsing hashed passwords |
108 | + [Marlin Cremers] (LP: #1811446) |
109 | + - net: Wait for dhclient to daemonize before reading lease file |
110 | + [Jason Zions] (LP: #1794399) |
111 | + - [Azure] Increase retries when talking to Wireserver during metadata walk |
112 | + [Jason Zions] |
113 | + - Add documentation on adding a datasource. |
114 | + - doc: clean up some datasource documentation. |
115 | + - ds-identify: fix wrong variable name in ovf_vmware_transport_guestinfo. |
116 | + - Scaleway: Support ssh keys provided inside an instance tag. [PORTE Loïc] |
117 | + - OVF: simplify expected return values of transport functions. |
118 | + - Vmware: Add support for the com.vmware.guestInfo OVF transport. |
119 | + (LP: #1807466) |
120 | + - HACKING.rst: change contact info to Josh Powers |
121 | + - Update to pylint 2.2.2. |
122 | + |
123 | 18.5: |
124 | - tests: add Disco release [Joshua Powers] |
125 | - net: render 'metric' values in per-subnet routes (LP: #1805871) |
126 | diff --git a/cloudinit/cmd/main.py b/cloudinit/cmd/main.py |
127 | index 933c019..a5446da 100644 |
128 | --- a/cloudinit/cmd/main.py |
129 | +++ b/cloudinit/cmd/main.py |
130 | @@ -632,13 +632,14 @@ def status_wrapper(name, args, data_d=None, link_d=None): |
131 | 'start': None, |
132 | 'finished': None, |
133 | } |
134 | + |
135 | if status is None: |
136 | status = {'v1': {}} |
137 | - for m in modes: |
138 | - status['v1'][m] = nullstatus.copy() |
139 | status['v1']['datasource'] = None |
140 | - elif mode not in status['v1']: |
141 | - status['v1'][mode] = nullstatus.copy() |
142 | + |
143 | + for m in modes: |
144 | + if m not in status['v1']: |
145 | + status['v1'][m] = nullstatus.copy() |
146 | |
147 | v1 = status['v1'] |
148 | v1['stage'] = mode |
149 | diff --git a/cloudinit/config/cc_apt_configure.py b/cloudinit/config/cc_apt_configure.py |
150 | index e18944e..919d199 100644 |
151 | --- a/cloudinit/config/cc_apt_configure.py |
152 | +++ b/cloudinit/config/cc_apt_configure.py |
153 | @@ -127,7 +127,7 @@ to ``^[\\w-]+:\\w`` |
154 | |
155 | Source list entries can be specified as a dictionary under the ``sources`` |
156 | config key, with key in the dict representing a different source file. The key |
157 | -The key of each source entry will be used as an id that can be referenced in |
158 | +of each source entry will be used as an id that can be referenced in |
159 | other config entries, as well as the filename for the source's configuration |
160 | under ``/etc/apt/sources.list.d``. If the name does not end with ``.list``, |
161 | it will be appended. If there is no configuration for a key in ``sources``, no |
162 | diff --git a/cloudinit/config/cc_mounts.py b/cloudinit/config/cc_mounts.py |
163 | index 339baba..123ffb8 100644 |
164 | --- a/cloudinit/config/cc_mounts.py |
165 | +++ b/cloudinit/config/cc_mounts.py |
166 | @@ -439,6 +439,7 @@ def handle(_name, cfg, cloud, log, _args): |
167 | |
168 | cc_lines = [] |
169 | needswap = False |
170 | + need_mount_all = False |
171 | dirs = [] |
172 | for line in actlist: |
173 | # write 'comment' in the fs_mntops, entry, claiming this |
174 | @@ -449,11 +450,18 @@ def handle(_name, cfg, cloud, log, _args): |
175 | dirs.append(line[1]) |
176 | cc_lines.append('\t'.join(line)) |
177 | |
178 | + mount_points = [v['mountpoint'] for k, v in util.mounts().items() |
179 | + if 'mountpoint' in v] |
180 | for d in dirs: |
181 | try: |
182 | util.ensure_dir(d) |
183 | except Exception: |
184 | util.logexc(log, "Failed to make '%s' config-mount", d) |
185 | + # dirs is list of directories on which a volume should be mounted. |
186 | + # If any of them does not already show up in the list of current |
187 | + # mount points, we will definitely need to do mount -a. |
188 | + if not need_mount_all and d not in mount_points: |
189 | + need_mount_all = True |
190 | |
191 | sadds = [WS.sub(" ", n) for n in cc_lines] |
192 | sdrops = [WS.sub(" ", n) for n in fstab_removed] |
193 | @@ -473,6 +481,9 @@ def handle(_name, cfg, cloud, log, _args): |
194 | log.debug("No changes to /etc/fstab made.") |
195 | else: |
196 | log.debug("Changes to fstab: %s", sops) |
197 | + need_mount_all = True |
198 | + |
199 | + if need_mount_all: |
200 | activate_cmds.append(["mount", "-a"]) |
201 | if uses_systemd: |
202 | activate_cmds.append(["systemctl", "daemon-reload"]) |
203 | diff --git a/cloudinit/config/cc_ubuntu_advantage.py b/cloudinit/config/cc_ubuntu_advantage.py |
204 | index 5e082bd..f488123 100644 |
205 | --- a/cloudinit/config/cc_ubuntu_advantage.py |
206 | +++ b/cloudinit/config/cc_ubuntu_advantage.py |
207 | @@ -1,150 +1,143 @@ |
208 | -# Copyright (C) 2018 Canonical Ltd. |
209 | -# |
210 | # This file is part of cloud-init. See LICENSE file for license information. |
211 | |
212 | -"""Ubuntu advantage: manage ubuntu-advantage offerings from Canonical.""" |
213 | +"""ubuntu_advantage: Configure Ubuntu Advantage support services""" |
214 | |
215 | -import sys |
216 | from textwrap import dedent |
217 | |
218 | -from cloudinit import log as logging |
219 | +import six |
220 | + |
221 | from cloudinit.config.schema import ( |
222 | get_schema_doc, validate_cloudconfig_schema) |
223 | +from cloudinit import log as logging |
224 | from cloudinit.settings import PER_INSTANCE |
225 | -from cloudinit.subp import prepend_base_command |
226 | from cloudinit import util |
227 | |
228 | |
229 | -distros = ['ubuntu'] |
230 | -frequency = PER_INSTANCE |
231 | +UA_URL = 'https://ubuntu.com/advantage' |
232 | |
233 | -LOG = logging.getLogger(__name__) |
234 | +distros = ['ubuntu'] |
235 | |
236 | schema = { |
237 | 'id': 'cc_ubuntu_advantage', |
238 | 'name': 'Ubuntu Advantage', |
239 | - 'title': 'Install, configure and manage ubuntu-advantage offerings', |
240 | + 'title': 'Configure Ubuntu Advantage support services', |
241 | 'description': dedent("""\ |
242 | - This module provides configuration options to setup ubuntu-advantage |
243 | - subscriptions. |
244 | - |
245 | - .. note:: |
246 | - Both ``commands`` value can be either a dictionary or a list. If |
247 | - the configuration provided is a dictionary, the keys are only used |
248 | - to order the execution of the commands and the dictionary is |
249 | - merged with any vendor-data ubuntu-advantage configuration |
250 | - provided. If a ``commands`` is provided as a list, any vendor-data |
251 | - ubuntu-advantage ``commands`` are ignored. |
252 | - |
253 | - Ubuntu-advantage ``commands`` is a dictionary or list of |
254 | - ubuntu-advantage commands to run on the deployed machine. |
255 | - These commands can be used to enable or disable subscriptions to |
256 | - various ubuntu-advantage products. See 'man ubuntu-advantage' for more |
257 | - information on supported subcommands. |
258 | - |
259 | - .. note:: |
260 | - Each command item can be a string or list. If the item is a list, |
261 | - 'ubuntu-advantage' can be omitted and it will automatically be |
262 | - inserted as part of the command. |
263 | + Attach machine to an existing Ubuntu Advantage support contract and |
264 | + enable or disable support services such as Livepatch, ESM, |
265 | + FIPS and FIPS Updates. When attaching a machine to Ubuntu Advantage, |
266 | + one can also specify services to enable. When the 'enable' |
267 | + list is present, any named service will be enabled and all absent |
268 | + services will remain disabled. |
269 | + |
270 | + Note that when enabling FIPS or FIPS updates you will need to schedule |
271 | + a reboot to ensure the machine is running the FIPS-compliant kernel. |
272 | + See :ref:`Power State Change` for information on how to configure |
273 | + cloud-init to perform this reboot. |
274 | """), |
275 | 'distros': distros, |
276 | 'examples': [dedent("""\ |
277 | - # Enable Extended Security Maintenance using your service auth token |
278 | + # Attach the machine to a Ubuntu Advantage support contract with a |
279 | + # UA contract token obtained from %s. |
280 | + ubuntu_advantage: |
281 | + token: <ua_contract_token> |
282 | + """ % UA_URL), dedent("""\ |
283 | + # Attach the machine to an Ubuntu Advantage support contract enabling |
284 | + # only fips and esm services. Services will only be enabled if |
285 | + # the environment supports said service. Otherwise warnings will |
286 | + # be logged for incompatible services specified. |
287 | ubuntu-advantage: |
288 | - commands: |
289 | - 00: ubuntu-advantage enable-esm <token> |
290 | + token: <ua_contract_token> |
291 | + enable: |
292 | + - fips |
293 | + - esm |
294 | """), dedent("""\ |
295 | - # Enable livepatch by providing your livepatch token |
296 | + # Attach the machine to an Ubuntu Advantage support contract and enable |
297 | + # the FIPS service. Perform a reboot once cloud-init has |
298 | + # completed. |
299 | + power_state: |
300 | + mode: reboot |
301 | ubuntu-advantage: |
302 | - commands: |
303 | - 00: ubuntu-advantage enable-livepatch <livepatch-token> |
304 | - |
305 | - """), dedent("""\ |
306 | - # Convenience: the ubuntu-advantage command can be omitted when |
307 | - # specifying commands as a list and 'ubuntu-advantage' will |
308 | - # automatically be prepended. |
309 | - # The following commands are equivalent |
310 | - ubuntu-advantage: |
311 | - commands: |
312 | - 00: ['enable-livepatch', 'my-token'] |
313 | - 01: ['ubuntu-advantage', 'enable-livepatch', 'my-token'] |
314 | - 02: ubuntu-advantage enable-livepatch my-token |
315 | - 03: 'ubuntu-advantage enable-livepatch my-token' |
316 | - """)], |
317 | + token: <ua_contract_token> |
318 | + enable: |
319 | + - fips |
320 | + """)], |
321 | 'frequency': PER_INSTANCE, |
322 | 'type': 'object', |
323 | 'properties': { |
324 | - 'ubuntu-advantage': { |
325 | + 'ubuntu_advantage': { |
326 | 'type': 'object', |
327 | 'properties': { |
328 | - 'commands': { |
329 | - 'type': ['object', 'array'], # Array of strings or dict |
330 | - 'items': { |
331 | - 'oneOf': [ |
332 | - {'type': 'array', 'items': {'type': 'string'}}, |
333 | - {'type': 'string'}] |
334 | - }, |
335 | - 'additionalItems': False, # Reject non-string & non-list |
336 | - 'minItems': 1, |
337 | - 'minProperties': 1, |
338 | + 'enable': { |
339 | + 'type': 'array', |
340 | + 'items': {'type': 'string'}, |
341 | + }, |
342 | + 'token': { |
343 | + 'type': 'string', |
344 | + 'description': ( |
345 | + 'A contract token obtained from %s.' % UA_URL) |
346 | } |
347 | }, |
348 | - 'additionalProperties': False, # Reject keys not in schema |
349 | - 'required': ['commands'] |
350 | + 'required': ['token'], |
351 | + 'additionalProperties': False |
352 | } |
353 | } |
354 | } |
355 | |
356 | -# TODO schema for 'assertions' and 'commands' are too permissive at the moment. |
357 | -# Once python-jsonschema supports schema draft 6 add support for arbitrary |
358 | -# object keys with 'patternProperties' constraint to validate string values. |
359 | - |
360 | __doc__ = get_schema_doc(schema) # Supplement python help() |
361 | |
362 | -UA_CMD = "ubuntu-advantage" |
363 | - |
364 | - |
365 | -def run_commands(commands): |
366 | - """Run the commands provided in ubuntu-advantage:commands config. |
367 | +LOG = logging.getLogger(__name__) |
368 | |
369 | - Commands are run individually. Any errors are collected and reported |
370 | - after attempting all commands. |
371 | |
372 | - @param commands: A list or dict containing commands to run. Keys of a |
373 | - dict will be used to order the commands provided as dict values. |
374 | - """ |
375 | - if not commands: |
376 | - return |
377 | - LOG.debug('Running user-provided ubuntu-advantage commands') |
378 | - if isinstance(commands, dict): |
379 | - # Sort commands based on dictionary key |
380 | - commands = [v for _, v in sorted(commands.items())] |
381 | - elif not isinstance(commands, list): |
382 | - raise TypeError( |
383 | - 'commands parameter was not a list or dict: {commands}'.format( |
384 | - commands=commands)) |
385 | - |
386 | - fixed_ua_commands = prepend_base_command('ubuntu-advantage', commands) |
387 | - |
388 | - cmd_failures = [] |
389 | - for command in fixed_ua_commands: |
390 | - shell = isinstance(command, str) |
391 | - try: |
392 | - util.subp(command, shell=shell, status_cb=sys.stderr.write) |
393 | - except util.ProcessExecutionError as e: |
394 | - cmd_failures.append(str(e)) |
395 | - if cmd_failures: |
396 | - msg = ( |
397 | - 'Failures running ubuntu-advantage commands:\n' |
398 | - '{cmd_failures}'.format( |
399 | - cmd_failures=cmd_failures)) |
400 | +def configure_ua(token=None, enable=None): |
401 | + """Call ua commandline client to attach or enable services.""" |
402 | + error = None |
403 | + if not token: |
404 | + error = ('ubuntu_advantage: token must be provided') |
405 | + LOG.error(error) |
406 | + raise RuntimeError(error) |
407 | + |
408 | + if enable is None: |
409 | + enable = [] |
410 | + elif isinstance(enable, six.string_types): |
411 | + LOG.warning('ubuntu_advantage: enable should be a list, not' |
412 | + ' a string; treating as a single enable') |
413 | + enable = [enable] |
414 | + elif not isinstance(enable, list): |
415 | + LOG.warning('ubuntu_advantage: enable should be a list, not' |
416 | + ' a %s; skipping enabling services', |
417 | + type(enable).__name__) |
418 | + enable = [] |
419 | + |
420 | + attach_cmd = ['ua', 'attach', token] |
421 | + LOG.debug('Attaching to Ubuntu Advantage. %s', ' '.join(attach_cmd)) |
422 | + try: |
423 | + util.subp(attach_cmd) |
424 | + except util.ProcessExecutionError as e: |
425 | + msg = 'Failure attaching Ubuntu Advantage:\n{error}'.format( |
426 | + error=str(e)) |
427 | util.logexc(LOG, msg) |
428 | raise RuntimeError(msg) |
429 | + enable_errors = [] |
430 | + for service in enable: |
431 | + try: |
432 | + cmd = ['ua', 'enable', service] |
433 | + util.subp(cmd, capture=True) |
434 | + except util.ProcessExecutionError as e: |
435 | + enable_errors.append((service, e)) |
436 | + if enable_errors: |
437 | + for service, error in enable_errors: |
438 | + msg = 'Failure enabling "{service}":\n{error}'.format( |
439 | + service=service, error=str(error)) |
440 | + util.logexc(LOG, msg) |
441 | + raise RuntimeError( |
442 | + 'Failure enabling Ubuntu Advantage service(s): {}'.format( |
443 | + ', '.join('"{}"'.format(service) |
444 | + for service, _ in enable_errors))) |
445 | |
446 | |
447 | def maybe_install_ua_tools(cloud): |
448 | """Install ubuntu-advantage-tools if not present.""" |
449 | - if util.which('ubuntu-advantage'): |
450 | + if util.which('ua'): |
451 | return |
452 | try: |
453 | cloud.distro.update_package_sources() |
454 | @@ -159,14 +152,28 @@ def maybe_install_ua_tools(cloud): |
455 | |
456 | |
457 | def handle(name, cfg, cloud, log, args): |
458 | - cfgin = cfg.get('ubuntu-advantage') |
459 | - if cfgin is None: |
460 | - LOG.debug(("Skipping module named %s," |
461 | - " no 'ubuntu-advantage' key in configuration"), name) |
462 | + ua_section = None |
463 | + if 'ubuntu-advantage' in cfg: |
464 | + LOG.warning('Deprecated configuration key "ubuntu-advantage" provided.' |
465 | + ' Expected underscore delimited "ubuntu_advantage"; will' |
466 | + ' attempt to continue.') |
467 | + ua_section = cfg['ubuntu-advantage'] |
468 | + if 'ubuntu_advantage' in cfg: |
469 | + ua_section = cfg['ubuntu_advantage'] |
470 | + if ua_section is None: |
471 | + LOG.debug("Skipping module named %s," |
472 | + " no 'ubuntu_advantage' configuration found", name) |
473 | return |
474 | - |
475 | validate_cloudconfig_schema(cfg, schema) |
476 | + if 'commands' in ua_section: |
477 | + msg = ( |
478 | + 'Deprecated configuration "ubuntu-advantage: commands" provided.' |
479 | + ' Expected "token"') |
480 | + LOG.error(msg) |
481 | + raise RuntimeError(msg) |
482 | + |
483 | maybe_install_ua_tools(cloud) |
484 | - run_commands(cfgin.get('commands', [])) |
485 | + configure_ua(token=ua_section.get('token'), |
486 | + enable=ua_section.get('enable')) |
487 | |
488 | # vi: ts=4 expandtab |
489 | diff --git a/cloudinit/config/cc_ubuntu_drivers.py b/cloudinit/config/cc_ubuntu_drivers.py |
490 | new file mode 100644 |
491 | index 0000000..91feb60 |
492 | --- /dev/null |
493 | +++ b/cloudinit/config/cc_ubuntu_drivers.py |
494 | @@ -0,0 +1,112 @@ |
495 | +# This file is part of cloud-init. See LICENSE file for license information. |
496 | + |
497 | +"""Ubuntu Drivers: Interact with third party drivers in Ubuntu.""" |
498 | + |
499 | +from textwrap import dedent |
500 | + |
501 | +from cloudinit.config.schema import ( |
502 | + get_schema_doc, validate_cloudconfig_schema) |
503 | +from cloudinit import log as logging |
504 | +from cloudinit.settings import PER_INSTANCE |
505 | +from cloudinit import type_utils |
506 | +from cloudinit import util |
507 | + |
508 | +LOG = logging.getLogger(__name__) |
509 | + |
510 | +frequency = PER_INSTANCE |
511 | +distros = ['ubuntu'] |
512 | +schema = { |
513 | + 'id': 'cc_ubuntu_drivers', |
514 | + 'name': 'Ubuntu Drivers', |
515 | + 'title': 'Interact with third party drivers in Ubuntu.', |
516 | + 'description': dedent("""\ |
517 | + This module interacts with the 'ubuntu-drivers' command to install |
518 | + third party driver packages."""), |
519 | + 'distros': distros, |
520 | + 'examples': [dedent("""\ |
521 | + drivers: |
522 | + nvidia: |
523 | + license-accepted: true |
524 | + """)], |
525 | + 'frequency': frequency, |
526 | + 'type': 'object', |
527 | + 'properties': { |
528 | + 'drivers': { |
529 | + 'type': 'object', |
530 | + 'additionalProperties': False, |
531 | + 'properties': { |
532 | + 'nvidia': { |
533 | + 'type': 'object', |
534 | + 'additionalProperties': False, |
535 | + 'required': ['license-accepted'], |
536 | + 'properties': { |
537 | + 'license-accepted': { |
538 | + 'type': 'boolean', |
539 | + 'description': ("Do you accept the NVIDIA driver" |
540 | + " license?"), |
541 | + }, |
542 | + 'version': { |
543 | + 'type': 'string', |
544 | + 'description': ( |
545 | + 'The version of the driver to install (e.g.' |
546 | + ' "390", "410"). Defaults to the latest' |
547 | + ' version.'), |
548 | + }, |
549 | + }, |
550 | + }, |
551 | + }, |
552 | + }, |
553 | + }, |
554 | +} |
555 | +OLD_UBUNTU_DRIVERS_STDERR_NEEDLE = ( |
556 | + "ubuntu-drivers: error: argument <command>: invalid choice: 'install'") |
557 | + |
558 | +__doc__ = get_schema_doc(schema) # Supplement python help() |
559 | + |
560 | + |
561 | +def install_drivers(cfg, pkg_install_func): |
562 | + if not isinstance(cfg, dict): |
563 | + raise TypeError( |
564 | + "'drivers' config expected dict, found '%s': %s" % |
565 | + (type_utils.obj_name(cfg), cfg)) |
566 | + |
567 | + cfgpath = 'nvidia/license-accepted' |
568 | + # Call translate_bool to ensure that we treat string values like "yes" as |
569 | + # acceptance and _don't_ treat string values like "nah" as acceptance |
570 | + # because they're True-ish |
571 | + nv_acc = util.translate_bool(util.get_cfg_by_path(cfg, cfgpath)) |
572 | + if not nv_acc: |
573 | + LOG.debug("Not installing NVIDIA drivers. %s=%s", cfgpath, nv_acc) |
574 | + return |
575 | + |
576 | + if not util.which('ubuntu-drivers'): |
577 | + LOG.debug("'ubuntu-drivers' command not available. " |
578 | + "Installing ubuntu-drivers-common") |
579 | + pkg_install_func(['ubuntu-drivers-common']) |
580 | + |
581 | + driver_arg = 'nvidia' |
582 | + version_cfg = util.get_cfg_by_path(cfg, 'nvidia/version') |
583 | + if version_cfg: |
584 | + driver_arg += ':{}'.format(version_cfg) |
585 | + |
586 | + LOG.debug("Installing NVIDIA drivers (%s=%s, version=%s)", |
587 | + cfgpath, nv_acc, version_cfg if version_cfg else 'latest') |
588 | + |
589 | + try: |
590 | + util.subp(['ubuntu-drivers', 'install', '--gpgpu', driver_arg]) |
591 | + except util.ProcessExecutionError as exc: |
592 | + if OLD_UBUNTU_DRIVERS_STDERR_NEEDLE in exc.stderr: |
593 | + LOG.warning('the available version of ubuntu-drivers is' |
594 | + ' too old to perform requested driver installation') |
595 | + elif 'No drivers found for installation.' in exc.stdout: |
596 | + LOG.warning('ubuntu-drivers found no drivers for installation') |
597 | + raise |
598 | + |
599 | + |
600 | +def handle(name, cfg, cloud, log, _args): |
601 | + if "drivers" not in cfg: |
602 | + log.debug("Skipping module named %s, no 'drivers' key in config", name) |
603 | + return |
604 | + |
605 | + validate_cloudconfig_schema(cfg, schema) |
606 | + install_drivers(cfg['drivers'], cloud.distro.install_packages) |
607 | diff --git a/cloudinit/config/tests/test_ubuntu_advantage.py b/cloudinit/config/tests/test_ubuntu_advantage.py |
608 | index b7cf9be..8c4161e 100644 |
609 | --- a/cloudinit/config/tests/test_ubuntu_advantage.py |
610 | +++ b/cloudinit/config/tests/test_ubuntu_advantage.py |
611 | @@ -1,10 +1,7 @@ |
612 | # This file is part of cloud-init. See LICENSE file for license information. |
613 | |
614 | -import re |
615 | -from six import StringIO |
616 | - |
617 | from cloudinit.config.cc_ubuntu_advantage import ( |
618 | - handle, maybe_install_ua_tools, run_commands, schema) |
619 | + configure_ua, handle, maybe_install_ua_tools, schema) |
620 | from cloudinit.config.schema import validate_cloudconfig_schema |
621 | from cloudinit import util |
622 | from cloudinit.tests.helpers import ( |
623 | @@ -20,90 +17,120 @@ class FakeCloud(object): |
624 | self.distro = distro |
625 | |
626 | |
627 | -class TestRunCommands(CiTestCase): |
628 | +class TestConfigureUA(CiTestCase): |
629 | |
630 | with_logs = True |
631 | allowed_subp = [CiTestCase.SUBP_SHELL_TRUE] |
632 | |
633 | def setUp(self): |
634 | - super(TestRunCommands, self).setUp() |
635 | + super(TestConfigureUA, self).setUp() |
636 | self.tmp = self.tmp_dir() |
637 | |
638 | @mock.patch('%s.util.subp' % MPATH) |
639 | - def test_run_commands_on_empty_list(self, m_subp): |
640 | - """When provided with an empty list, run_commands does nothing.""" |
641 | - run_commands([]) |
642 | - self.assertEqual('', self.logs.getvalue()) |
643 | - m_subp.assert_not_called() |
644 | - |
645 | - def test_run_commands_on_non_list_or_dict(self): |
646 | - """When provided an invalid type, run_commands raises an error.""" |
647 | - with self.assertRaises(TypeError) as context_manager: |
648 | - run_commands(commands="I'm Not Valid") |
649 | + def test_configure_ua_attach_error(self, m_subp): |
650 | + """Errors from ua attach command are raised.""" |
651 | + m_subp.side_effect = util.ProcessExecutionError( |
652 | + 'Invalid token SomeToken') |
653 | + with self.assertRaises(RuntimeError) as context_manager: |
654 | + configure_ua(token='SomeToken') |
655 | self.assertEqual( |
656 | - "commands parameter was not a list or dict: I'm Not Valid", |
657 | + 'Failure attaching Ubuntu Advantage:\nUnexpected error while' |
658 | + ' running command.\nCommand: -\nExit code: -\nReason: -\n' |
659 | + 'Stdout: Invalid token SomeToken\nStderr: -', |
660 | str(context_manager.exception)) |
661 | |
662 | - def test_run_command_logs_commands_and_exit_codes_to_stderr(self): |
663 | - """All exit codes are logged to stderr.""" |
664 | - outfile = self.tmp_path('output.log', dir=self.tmp) |
665 | - |
666 | - cmd1 = 'echo "HI" >> %s' % outfile |
667 | - cmd2 = 'bogus command' |
668 | - cmd3 = 'echo "MOM" >> %s' % outfile |
669 | - commands = [cmd1, cmd2, cmd3] |
670 | - |
671 | - mock_path = '%s.sys.stderr' % MPATH |
672 | - with mock.patch(mock_path, new_callable=StringIO) as m_stderr: |
673 | - with self.assertRaises(RuntimeError) as context_manager: |
674 | - run_commands(commands=commands) |
675 | - |
676 | - self.assertIsNotNone( |
677 | - re.search(r'bogus: (command )?not found', |
678 | - str(context_manager.exception)), |
679 | - msg='Expected bogus command not found') |
680 | - expected_stderr_log = '\n'.join([ |
681 | - 'Begin run command: {cmd}'.format(cmd=cmd1), |
682 | - 'End run command: exit(0)', |
683 | - 'Begin run command: {cmd}'.format(cmd=cmd2), |
684 | - 'ERROR: End run command: exit(127)', |
685 | - 'Begin run command: {cmd}'.format(cmd=cmd3), |
686 | - 'End run command: exit(0)\n']) |
687 | - self.assertEqual(expected_stderr_log, m_stderr.getvalue()) |
688 | - |
689 | - def test_run_command_as_lists(self): |
690 | - """When commands are specified as a list, run them in order.""" |
691 | - outfile = self.tmp_path('output.log', dir=self.tmp) |
692 | - |
693 | - cmd1 = 'echo "HI" >> %s' % outfile |
694 | - cmd2 = 'echo "MOM" >> %s' % outfile |
695 | - commands = [cmd1, cmd2] |
696 | - with mock.patch('%s.sys.stderr' % MPATH, new_callable=StringIO): |
697 | - run_commands(commands=commands) |
698 | + @mock.patch('%s.util.subp' % MPATH) |
699 | + def test_configure_ua_attach_with_token(self, m_subp): |
700 | + """When token is provided, attach the machine to ua using the token.""" |
701 | + configure_ua(token='SomeToken') |
702 | + m_subp.assert_called_once_with(['ua', 'attach', 'SomeToken']) |
703 | + self.assertEqual( |
704 | + 'DEBUG: Attaching to Ubuntu Advantage. ua attach SomeToken\n', |
705 | + self.logs.getvalue()) |
706 | + |
707 | + @mock.patch('%s.util.subp' % MPATH) |
708 | + def test_configure_ua_attach_on_service_error(self, m_subp): |
709 | + """All services should be enabled, and any failures raised afterward.""" |
710 | |
711 | + def fake_subp(cmd, capture=None): |
712 | + fail_cmds = [['ua', 'enable', svc] for svc in ['esm', 'cc']] |
713 | + if cmd in fail_cmds and capture: |
714 | + svc = cmd[-1] |
715 | + raise util.ProcessExecutionError( |
716 | + 'Invalid {} credentials'.format(svc.upper())) |
717 | + |
718 | + m_subp.side_effect = fake_subp |
719 | + |
720 | + with self.assertRaises(RuntimeError) as context_manager: |
721 | + configure_ua(token='SomeToken', enable=['esm', 'cc', 'fips']) |
722 | + self.assertEqual( |
723 | + m_subp.call_args_list, |
724 | + [mock.call(['ua', 'attach', 'SomeToken']), |
725 | + mock.call(['ua', 'enable', 'esm'], capture=True), |
726 | + mock.call(['ua', 'enable', 'cc'], capture=True), |
727 | + mock.call(['ua', 'enable', 'fips'], capture=True)]) |
728 | self.assertIn( |
729 | - 'DEBUG: Running user-provided ubuntu-advantage commands', |
730 | + 'WARNING: Failure enabling "esm":\nUnexpected error' |
731 | + ' while running command.\nCommand: -\nExit code: -\nReason: -\n' |
732 | + 'Stdout: Invalid ESM credentials\nStderr: -\n', |
733 | self.logs.getvalue()) |
734 | - self.assertEqual('HI\nMOM\n', util.load_file(outfile)) |
735 | self.assertIn( |
736 | - 'WARNING: Non-ubuntu-advantage commands in ubuntu-advantage' |
737 | - ' config:', |
738 | + 'WARNING: Failure enabling "cc":\nUnexpected error' |
739 | + ' while running command.\nCommand: -\nExit code: -\nReason: -\n' |
740 | + 'Stdout: Invalid CC credentials\nStderr: -\n', |
741 | + self.logs.getvalue()) |
742 | + self.assertEqual( |
743 | + 'Failure enabling Ubuntu Advantage service(s): "esm", "cc"', |
744 | + str(context_manager.exception)) |
745 | + |
746 | + @mock.patch('%s.util.subp' % MPATH) |
747 | + def test_configure_ua_attach_with_empty_services(self, m_subp): |
748 | + """When services is an empty list, attach but do not enable any services.""" |
749 | + configure_ua(token='SomeToken', enable=[]) |
750 | + m_subp.assert_called_once_with(['ua', 'attach', 'SomeToken']) |
751 | + self.assertEqual( |
752 | + 'DEBUG: Attaching to Ubuntu Advantage. ua attach SomeToken\n', |
753 | self.logs.getvalue()) |
754 | |
755 | - def test_run_command_dict_sorted_as_command_script(self): |
756 | - """When commands are a dict, sort them and run.""" |
757 | - outfile = self.tmp_path('output.log', dir=self.tmp) |
758 | - cmd1 = 'echo "HI" >> %s' % outfile |
759 | - cmd2 = 'echo "MOM" >> %s' % outfile |
760 | - commands = {'02': cmd1, '01': cmd2} |
761 | - with mock.patch('%s.sys.stderr' % MPATH, new_callable=StringIO): |
762 | - run_commands(commands=commands) |
763 | + @mock.patch('%s.util.subp' % MPATH) |
764 | + def test_configure_ua_attach_with_specific_services(self, m_subp): |
765 | + """When services is a list, only enable the specified services.""" |
766 | + configure_ua(token='SomeToken', enable=['fips']) |
767 | + self.assertEqual( |
768 | + m_subp.call_args_list, |
769 | + [mock.call(['ua', 'attach', 'SomeToken']), |
770 | + mock.call(['ua', 'enable', 'fips'], capture=True)]) |
771 | + self.assertEqual( |
772 | + 'DEBUG: Attaching to Ubuntu Advantage. ua attach SomeToken\n', |
773 | + self.logs.getvalue()) |
774 | + |
775 | + @mock.patch('%s.maybe_install_ua_tools' % MPATH, mock.MagicMock()) |
776 | + @mock.patch('%s.util.subp' % MPATH) |
777 | + def test_configure_ua_attach_with_string_services(self, m_subp): |
778 | + """When services is a string, treat it as a singleton list and warn.""" |
779 | + configure_ua(token='SomeToken', enable='fips') |
780 | + self.assertEqual( |
781 | + m_subp.call_args_list, |
782 | + [mock.call(['ua', 'attach', 'SomeToken']), |
783 | + mock.call(['ua', 'enable', 'fips'], capture=True)]) |
784 | + self.assertEqual( |
785 | + 'WARNING: ubuntu_advantage: enable should be a list, not a' |
786 | + ' string; treating as a single enable\n' |
787 | + 'DEBUG: Attaching to Ubuntu Advantage. ua attach SomeToken\n', |
788 | + self.logs.getvalue()) |
789 | |
790 | - expected_messages = [ |
791 | - 'DEBUG: Running user-provided ubuntu-advantage commands'] |
792 | - for message in expected_messages: |
793 | - self.assertIn(message, self.logs.getvalue()) |
794 | - self.assertEqual('MOM\nHI\n', util.load_file(outfile)) |
795 | + @mock.patch('%s.util.subp' % MPATH) |
796 | + def test_configure_ua_attach_with_weird_services(self, m_subp): |
797 | + """When services is not a string or list, warn but still attach.""" |
798 | + configure_ua(token='SomeToken', enable={'deffo': 'wont work'}) |
799 | + self.assertEqual( |
800 | + m_subp.call_args_list, |
801 | + [mock.call(['ua', 'attach', 'SomeToken'])]) |
802 | + self.assertEqual( |
803 | + 'WARNING: ubuntu_advantage: enable should be a list, not a' |
804 | + ' dict; skipping enabling services\n' |
805 | + 'DEBUG: Attaching to Ubuntu Advantage. ua attach SomeToken\n', |
806 | + self.logs.getvalue()) |
807 | |
808 | |
809 | @skipUnlessJsonSchema() |
810 | @@ -112,90 +139,50 @@ class TestSchema(CiTestCase, SchemaTestCaseMixin): |
811 | with_logs = True |
812 | schema = schema |
813 | |
814 | - def test_schema_warns_on_ubuntu_advantage_not_as_dict(self): |
815 | - """If ubuntu-advantage configuration is not a dict, emit a warning.""" |
816 | - validate_cloudconfig_schema({'ubuntu-advantage': 'wrong type'}, schema) |
817 | + @mock.patch('%s.maybe_install_ua_tools' % MPATH) |
818 | + @mock.patch('%s.configure_ua' % MPATH) |
819 | + def test_schema_warns_on_ubuntu_advantage_not_dict(self, _cfg, _): |
820 | + """If ubuntu_advantage configuration is not a dict, emit a warning.""" |
821 | + validate_cloudconfig_schema({'ubuntu_advantage': 'wrong type'}, schema) |
822 | self.assertEqual( |
823 | - "WARNING: Invalid config:\nubuntu-advantage: 'wrong type' is not" |
824 | + "WARNING: Invalid config:\nubuntu_advantage: 'wrong type' is not" |
825 | " of type 'object'\n", |
826 | self.logs.getvalue()) |
827 | |
828 | - @mock.patch('%s.run_commands' % MPATH) |
829 | - def test_schema_disallows_unknown_keys(self, _): |
830 | - """Unknown keys in ubuntu-advantage configuration emit warnings.""" |
831 | + @mock.patch('%s.maybe_install_ua_tools' % MPATH) |
832 | + @mock.patch('%s.configure_ua' % MPATH) |
833 | + def test_schema_disallows_unknown_keys(self, _cfg, _): |
834 | + """Unknown keys in ubuntu_advantage configuration emit warnings.""" |
835 | validate_cloudconfig_schema( |
836 | - {'ubuntu-advantage': {'commands': ['ls'], 'invalid-key': ''}}, |
837 | + {'ubuntu_advantage': {'token': 'winner', 'invalid-key': ''}}, |
838 | schema) |
839 | self.assertIn( |
840 | - 'WARNING: Invalid config:\nubuntu-advantage: Additional properties' |
841 | + 'WARNING: Invalid config:\nubuntu_advantage: Additional properties' |
842 | " are not allowed ('invalid-key' was unexpected)", |
843 | self.logs.getvalue()) |
844 | |
845 | - def test_warn_schema_requires_commands(self): |
846 | - """Warn when ubuntu-advantage configuration lacks commands.""" |
847 | - validate_cloudconfig_schema( |
848 | - {'ubuntu-advantage': {}}, schema) |
849 | - self.assertEqual( |
850 | - "WARNING: Invalid config:\nubuntu-advantage: 'commands' is a" |
851 | - " required property\n", |
852 | - self.logs.getvalue()) |
853 | - |
854 | - @mock.patch('%s.run_commands' % MPATH) |
855 | - def test_warn_schema_commands_is_not_list_or_dict(self, _): |
856 | - """Warn when ubuntu-advantage:commands config is not a list or dict.""" |
857 | + @mock.patch('%s.maybe_install_ua_tools' % MPATH) |
858 | + @mock.patch('%s.configure_ua' % MPATH) |
859 | + def test_warn_schema_requires_token(self, _cfg, _): |
860 | + """Warn if ubuntu_advantage configuration lacks token.""" |
861 | validate_cloudconfig_schema( |
862 | - {'ubuntu-advantage': {'commands': 'broken'}}, schema) |
863 | + {'ubuntu_advantage': {'enable': ['esm']}}, schema) |
864 | self.assertEqual( |
865 | - "WARNING: Invalid config:\nubuntu-advantage.commands: 'broken' is" |
866 | - " not of type 'object', 'array'\n", |
867 | - self.logs.getvalue()) |
868 | + "WARNING: Invalid config:\nubuntu_advantage:" |
869 | + " 'token' is a required property\n", self.logs.getvalue()) |
870 | |
871 | - @mock.patch('%s.run_commands' % MPATH) |
872 | - def test_warn_schema_when_commands_is_empty(self, _): |
873 | - """Emit warnings when ubuntu-advantage:commands is empty.""" |
874 | - validate_cloudconfig_schema( |
875 | - {'ubuntu-advantage': {'commands': []}}, schema) |
876 | + @mock.patch('%s.maybe_install_ua_tools' % MPATH) |
877 | + @mock.patch('%s.configure_ua' % MPATH) |
878 | + def test_warn_schema_services_is_not_list_or_dict(self, _cfg, _): |
879 | + """Warn when ubuntu_advantage:enable config is not a list.""" |
880 | validate_cloudconfig_schema( |
881 | - {'ubuntu-advantage': {'commands': {}}}, schema) |
882 | + {'ubuntu_advantage': {'enable': 'needslist'}}, schema) |
883 | self.assertEqual( |
884 | - "WARNING: Invalid config:\nubuntu-advantage.commands: [] is too" |
885 | - " short\nWARNING: Invalid config:\nubuntu-advantage.commands: {}" |
886 | - " does not have enough properties\n", |
887 | + "WARNING: Invalid config:\nubuntu_advantage: 'token' is a" |
888 | + " required property\nubuntu_advantage.enable: 'needslist'" |
889 | + " is not of type 'array'\n", |
890 | self.logs.getvalue()) |
891 | |
892 | - @mock.patch('%s.run_commands' % MPATH) |
893 | - def test_schema_when_commands_are_list_or_dict(self, _): |
894 | - """No warnings when ubuntu-advantage:commands are a list or dict.""" |
895 | - validate_cloudconfig_schema( |
896 | - {'ubuntu-advantage': {'commands': ['valid']}}, schema) |
897 | - validate_cloudconfig_schema( |
898 | - {'ubuntu-advantage': {'commands': {'01': 'also valid'}}}, schema) |
899 | - self.assertEqual('', self.logs.getvalue()) |
900 | - |
901 | - def test_duplicates_are_fine_array_array(self): |
902 | - """Duplicated commands array/array entries are allowed.""" |
903 | - self.assertSchemaValid( |
904 | - {'commands': [["echo", "bye"], ["echo" "bye"]]}, |
905 | - "command entries can be duplicate.") |
906 | - |
907 | - def test_duplicates_are_fine_array_string(self): |
908 | - """Duplicated commands array/string entries are allowed.""" |
909 | - self.assertSchemaValid( |
910 | - {'commands': ["echo bye", "echo bye"]}, |
911 | - "command entries can be duplicate.") |
912 | - |
913 | - def test_duplicates_are_fine_dict_array(self): |
914 | - """Duplicated commands dict/array entries are allowed.""" |
915 | - self.assertSchemaValid( |
916 | - {'commands': {'00': ["echo", "bye"], '01': ["echo", "bye"]}}, |
917 | - "command entries can be duplicate.") |
918 | - |
919 | - def test_duplicates_are_fine_dict_string(self): |
920 | - """Duplicated commands dict/string entries are allowed.""" |
921 | - self.assertSchemaValid( |
922 | - {'commands': {'00': "echo bye", '01': "echo bye"}}, |
923 | - "command entries can be duplicate.") |
924 | - |
925 | |
926 | class TestHandle(CiTestCase): |
927 | |
928 | @@ -205,41 +192,89 @@ class TestHandle(CiTestCase): |
929 | super(TestHandle, self).setUp() |
930 | self.tmp = self.tmp_dir() |
931 | |
932 | - @mock.patch('%s.run_commands' % MPATH) |
933 | @mock.patch('%s.validate_cloudconfig_schema' % MPATH) |
934 | - def test_handle_no_config(self, m_schema, m_run): |
935 | + def test_handle_no_config(self, m_schema): |
936 | """When no ua-related configuration is provided, nothing happens.""" |
937 | cfg = {} |
938 | handle('ua-test', cfg=cfg, cloud=None, log=self.logger, args=None) |
939 | self.assertIn( |
940 | - "DEBUG: Skipping module named ua-test, no 'ubuntu-advantage' key" |
941 | - " in config", |
942 | + "DEBUG: Skipping module named ua-test, no 'ubuntu_advantage'" |
943 | + ' configuration found', |
944 | self.logs.getvalue()) |
945 | m_schema.assert_not_called() |
946 | - m_run.assert_not_called() |
947 | |
948 | + @mock.patch('%s.configure_ua' % MPATH) |
949 | @mock.patch('%s.maybe_install_ua_tools' % MPATH) |
950 | - def test_handle_tries_to_install_ubuntu_advantage_tools(self, m_install): |
951 | + def test_handle_tries_to_install_ubuntu_advantage_tools( |
952 | + self, m_install, m_cfg): |
953 | """If ubuntu_advantage is provided, try installing ua-tools package.""" |
954 | - cfg = {'ubuntu-advantage': {}} |
955 | + cfg = {'ubuntu_advantage': {'token': 'valid'}} |
956 | mycloud = FakeCloud(None) |
957 | handle('nomatter', cfg=cfg, cloud=mycloud, log=self.logger, args=None) |
958 | m_install.assert_called_once_with(mycloud) |
959 | |
960 | + @mock.patch('%s.configure_ua' % MPATH) |
961 | @mock.patch('%s.maybe_install_ua_tools' % MPATH) |
962 | - def test_handle_runs_commands_provided(self, m_install): |
963 | - """When commands are specified as a list, run them.""" |
964 | - outfile = self.tmp_path('output.log', dir=self.tmp) |
965 | + def test_handle_passes_credentials_and_services_to_configure_ua( |
966 | + self, m_install, m_configure_ua): |
967 | + """All ubuntu_advantage config keys are passed to configure_ua.""" |
968 | + cfg = {'ubuntu_advantage': {'token': 'token', 'enable': ['esm']}} |
969 | + handle('nomatter', cfg=cfg, cloud=None, log=self.logger, args=None) |
970 | + m_configure_ua.assert_called_once_with( |
971 | + token='token', enable=['esm']) |
972 | + |
973 | + @mock.patch('%s.maybe_install_ua_tools' % MPATH, mock.MagicMock()) |
974 | + @mock.patch('%s.configure_ua' % MPATH) |
975 | + def test_handle_warns_on_deprecated_ubuntu_advantage_key_w_config( |
976 | + self, m_configure_ua): |
977 | + """Warn when the ubuntu-advantage key is present with new config.""" |
978 | + cfg = {'ubuntu-advantage': {'token': 'token', 'enable': ['esm']}} |
979 | + handle('nomatter', cfg=cfg, cloud=None, log=self.logger, args=None) |
980 | + self.assertEqual( |
981 | + 'WARNING: Deprecated configuration key "ubuntu-advantage"' |
982 | + ' provided. Expected underscore delimited "ubuntu_advantage";' |
983 | + ' will attempt to continue.', |
984 | + self.logs.getvalue().splitlines()[0]) |
985 | + m_configure_ua.assert_called_once_with( |
986 | + token='token', enable=['esm']) |
987 | + |
988 | + def test_handle_error_on_deprecated_commands_key_dashed(self): |
989 | + """Error when commands is present in ubuntu-advantage key.""" |
990 | + cfg = {'ubuntu-advantage': {'commands': 'nogo'}} |
991 | + with self.assertRaises(RuntimeError) as context_manager: |
992 | + handle('nomatter', cfg=cfg, cloud=None, log=self.logger, args=None) |
993 | + self.assertEqual( |
994 | + 'Deprecated configuration "ubuntu-advantage: commands" provided.' |
995 | + ' Expected "token"', |
996 | + str(context_manager.exception)) |
997 | + |
998 | + def test_handle_error_on_deprecated_commands_key_underscored(self): |
999 | + """Error when commands is present in ubuntu_advantage key.""" |
1000 | + cfg = {'ubuntu_advantage': {'commands': 'nogo'}} |
1001 | + with self.assertRaises(RuntimeError) as context_manager: |
1002 | + handle('nomatter', cfg=cfg, cloud=None, log=self.logger, args=None) |
1003 | + self.assertEqual( |
1004 | + 'Deprecated configuration "ubuntu-advantage: commands" provided.' |
1005 | + ' Expected "token"', |
1006 | + str(context_manager.exception)) |
1007 | |
1008 | + @mock.patch('%s.maybe_install_ua_tools' % MPATH, mock.MagicMock()) |
1009 | + @mock.patch('%s.configure_ua' % MPATH) |
1010 | + def test_handle_prefers_new_style_config( |
1011 | + self, m_configure_ua): |
1012 | + """ubuntu_advantage should be preferred over ubuntu-advantage""" |
1013 | cfg = { |
1014 | - 'ubuntu-advantage': {'commands': ['echo "HI" >> %s' % outfile, |
1015 | - 'echo "MOM" >> %s' % outfile]}} |
1016 | - mock_path = '%s.sys.stderr' % MPATH |
1017 | - with self.allow_subp([CiTestCase.SUBP_SHELL_TRUE]): |
1018 | - with mock.patch(mock_path, new_callable=StringIO): |
1019 | - handle('nomatter', cfg=cfg, cloud=None, log=self.logger, |
1020 | - args=None) |
1021 | - self.assertEqual('HI\nMOM\n', util.load_file(outfile)) |
1022 | + 'ubuntu-advantage': {'token': 'nope', 'enable': ['wrong']}, |
1023 | + 'ubuntu_advantage': {'token': 'token', 'enable': ['esm']}, |
1024 | + } |
1025 | + handle('nomatter', cfg=cfg, cloud=None, log=self.logger, args=None) |
1026 | + self.assertEqual( |
1027 | + 'WARNING: Deprecated configuration key "ubuntu-advantage"' |
1028 | + ' provided. Expected underscore delimited "ubuntu_advantage";' |
1029 | + ' will attempt to continue.', |
1030 | + self.logs.getvalue().splitlines()[0]) |
1031 | + m_configure_ua.assert_called_once_with( |
1032 | + token='token', enable=['esm']) |
1033 | |
1034 | |
1035 | class TestMaybeInstallUATools(CiTestCase): |
1036 | @@ -253,7 +288,7 @@ class TestMaybeInstallUATools(CiTestCase): |
1037 | @mock.patch('%s.util.which' % MPATH) |
1038 | def test_maybe_install_ua_tools_noop_when_ua_tools_present(self, m_which): |
1039 | """Do nothing if ubuntu-advantage-tools already exists.""" |
1040 | - m_which.return_value = '/usr/bin/ubuntu-advantage' # already installed |
1041 | + m_which.return_value = '/usr/bin/ua' # already installed |
1042 | distro = mock.MagicMock() |
1043 | distro.update_package_sources.side_effect = RuntimeError( |
1044 | 'Some apt error') |
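For reference, the new-style cloud-config these tests exercise looks roughly like the sketch below. This is assembled from the keys the tests pass (`token`, `enable`); the token value is a placeholder, not a real contract token:

```yaml
#cloud-config
ubuntu_advantage:
  token: your-contract-token   # placeholder; 'token' is the required key
  enable:                      # optional list of services to enable after attach
    - esm
    - fips
```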
1045 | diff --git a/cloudinit/config/tests/test_ubuntu_drivers.py b/cloudinit/config/tests/test_ubuntu_drivers.py |
1046 | new file mode 100644 |
1047 | index 0000000..efba4ce |
1048 | --- /dev/null |
1049 | +++ b/cloudinit/config/tests/test_ubuntu_drivers.py |
1050 | @@ -0,0 +1,174 @@ |
1051 | +# This file is part of cloud-init. See LICENSE file for license information. |
1052 | + |
1053 | +import copy |
1054 | + |
1055 | +from cloudinit.tests.helpers import CiTestCase, skipUnlessJsonSchema, mock |
1056 | +from cloudinit.config.schema import ( |
1057 | + SchemaValidationError, validate_cloudconfig_schema) |
1058 | +from cloudinit.config import cc_ubuntu_drivers as drivers |
1059 | +from cloudinit.util import ProcessExecutionError |
1060 | + |
1061 | +MPATH = "cloudinit.config.cc_ubuntu_drivers." |
1062 | +OLD_UBUNTU_DRIVERS_ERROR_STDERR = ( |
1063 | + "ubuntu-drivers: error: argument <command>: invalid choice: 'install' " |
1064 | + "(choose from 'list', 'autoinstall', 'devices', 'debug')\n") |
1065 | + |
1066 | + |
1067 | +class TestUbuntuDrivers(CiTestCase): |
1068 | + cfg_accepted = {'drivers': {'nvidia': {'license-accepted': True}}} |
1069 | + install_gpgpu = ['ubuntu-drivers', 'install', '--gpgpu', 'nvidia'] |
1070 | + |
1071 | + with_logs = True |
1072 | + |
1073 | + @skipUnlessJsonSchema() |
1074 | + def test_schema_requires_boolean_for_license_accepted(self): |
1075 | + with self.assertRaisesRegex( |
1076 | + SchemaValidationError, ".*license-accepted.*TRUE.*boolean"): |
1077 | + validate_cloudconfig_schema( |
1078 | + {'drivers': {'nvidia': {'license-accepted': "TRUE"}}}, |
1079 | + schema=drivers.schema, strict=True) |
1080 | + |
1081 | + @mock.patch(MPATH + "util.subp", return_value=('', '')) |
1082 | + @mock.patch(MPATH + "util.which", return_value=False) |
1083 | + def _assert_happy_path_taken(self, config, m_which, m_subp): |
1084 | + """Positive path test through handle. Package should be installed.""" |
1085 | + myCloud = mock.MagicMock() |
1086 | + drivers.handle('ubuntu_drivers', config, myCloud, None, None) |
1087 | + self.assertEqual([mock.call(['ubuntu-drivers-common'])], |
1088 | + myCloud.distro.install_packages.call_args_list) |
1089 | + self.assertEqual([mock.call(self.install_gpgpu)], |
1090 | + m_subp.call_args_list) |
1091 | + |
1092 | + def test_handle_does_package_install(self): |
1093 | + self._assert_happy_path_taken(self.cfg_accepted) |
1094 | + |
1095 | + def test_trueish_strings_are_considered_approval(self): |
1096 | + for true_value in ['yes', 'true', 'on', '1']: |
1097 | + new_config = copy.deepcopy(self.cfg_accepted) |
1098 | + new_config['drivers']['nvidia']['license-accepted'] = true_value |
1099 | + self._assert_happy_path_taken(new_config) |
1100 | + |
1101 | + @mock.patch(MPATH + "util.subp", side_effect=ProcessExecutionError( |
1102 | + stdout='No drivers found for installation.\n', exit_code=1)) |
1103 | + @mock.patch(MPATH + "util.which", return_value=False) |
1104 | + def test_handle_raises_error_if_no_drivers_found(self, m_which, m_subp): |
1105 | + """If ubuntu-drivers doesn't install any drivers, raise an error.""" |
1106 | + myCloud = mock.MagicMock() |
1107 | + with self.assertRaises(Exception): |
1108 | + drivers.handle( |
1109 | + 'ubuntu_drivers', self.cfg_accepted, myCloud, None, None) |
1110 | + self.assertEqual([mock.call(['ubuntu-drivers-common'])], |
1111 | + myCloud.distro.install_packages.call_args_list) |
1112 | + self.assertEqual([mock.call(self.install_gpgpu)], |
1113 | + m_subp.call_args_list) |
1114 | + self.assertIn('ubuntu-drivers found no drivers for installation', |
1115 | + self.logs.getvalue()) |
1116 | + |
1117 | + @mock.patch(MPATH + "util.subp", return_value=('', '')) |
1118 | + @mock.patch(MPATH + "util.which", return_value=False) |
1119 | + def _assert_inert_with_config(self, config, m_which, m_subp): |
1120 | + """Helper to reduce repetition when testing negative cases""" |
1121 | + myCloud = mock.MagicMock() |
1122 | + drivers.handle('ubuntu_drivers', config, myCloud, None, None) |
1123 | + self.assertEqual(0, myCloud.distro.install_packages.call_count) |
1124 | + self.assertEqual(0, m_subp.call_count) |
1125 | + |
1126 | + def test_handle_inert_if_license_not_accepted(self): |
1127 | + """Ensure we don't do anything if the license is rejected.""" |
1128 | + self._assert_inert_with_config( |
1129 | + {'drivers': {'nvidia': {'license-accepted': False}}}) |
1130 | + |
1131 | + def test_handle_inert_if_garbage_in_license_field(self): |
1132 | + """Ensure we don't do anything if unknown text is in license field.""" |
1133 | + self._assert_inert_with_config( |
1134 | + {'drivers': {'nvidia': {'license-accepted': 'garbage'}}}) |
1135 | + |
1136 | + def test_handle_inert_if_no_license_key(self): |
1137 | + """Ensure we don't do anything if no license key.""" |
1138 | + self._assert_inert_with_config({'drivers': {'nvidia': {}}}) |
1139 | + |
1140 | + def test_handle_inert_if_no_nvidia_key(self): |
1141 | + """Ensure we don't do anything if other license accepted.""" |
1142 | + self._assert_inert_with_config( |
1143 | + {'drivers': {'acme': {'license-accepted': True}}}) |
1144 | + |
1145 | + def test_handle_inert_if_string_given(self): |
1146 | + """Ensure we don't do anything if string refusal given.""" |
1147 | + for false_value in ['no', 'false', 'off', '0']: |
1148 | + self._assert_inert_with_config( |
1149 | + {'drivers': {'nvidia': {'license-accepted': false_value}}}) |
1150 | + |
1151 | + @mock.patch(MPATH + "install_drivers") |
1152 | + def test_handle_no_drivers_does_nothing(self, m_install_drivers): |
1153 | + """If no 'drivers' key in the config, nothing should be done.""" |
1154 | + myCloud = mock.MagicMock() |
1155 | + myLog = mock.MagicMock() |
1156 | + drivers.handle('ubuntu_drivers', {'foo': 'bzr'}, myCloud, myLog, None) |
1157 | + self.assertIn('Skipping module named', |
1158 | + myLog.debug.call_args_list[0][0][0]) |
1159 | + self.assertEqual(0, m_install_drivers.call_count) |
1160 | + |
1161 | + @mock.patch(MPATH + "util.subp", return_value=('', '')) |
1162 | + @mock.patch(MPATH + "util.which", return_value=True) |
1163 | + def test_install_drivers_no_install_if_present(self, m_which, m_subp): |
1164 | + """If 'ubuntu-drivers' is present, no package install should occur.""" |
1165 | + pkg_install = mock.MagicMock() |
1166 | + drivers.install_drivers(self.cfg_accepted['drivers'], |
1167 | + pkg_install_func=pkg_install) |
1168 | + self.assertEqual(0, pkg_install.call_count) |
1169 | + self.assertEqual([mock.call('ubuntu-drivers')], |
1170 | + m_which.call_args_list) |
1171 | + self.assertEqual([mock.call(self.install_gpgpu)], |
1172 | + m_subp.call_args_list) |
1173 | + |
1174 | + def test_install_drivers_rejects_invalid_config(self): |
1175 | + """install_drivers should raise TypeError if not given a config dict""" |
1176 | + pkg_install = mock.MagicMock() |
1177 | + with self.assertRaisesRegex(TypeError, ".*expected dict.*"): |
1178 | + drivers.install_drivers("mystring", pkg_install_func=pkg_install) |
1179 | + self.assertEqual(0, pkg_install.call_count) |
1180 | + |
1181 | + @mock.patch(MPATH + "util.subp", side_effect=ProcessExecutionError( |
1182 | + stderr=OLD_UBUNTU_DRIVERS_ERROR_STDERR, exit_code=2)) |
1183 | + @mock.patch(MPATH + "util.which", return_value=False) |
1184 | + def test_install_drivers_handles_old_ubuntu_drivers_gracefully( |
1185 | + self, m_which, m_subp): |
1186 | + """Older ubuntu-drivers versions should emit a message and raise an error.""" |
1187 | + myCloud = mock.MagicMock() |
1188 | + with self.assertRaises(Exception): |
1189 | + drivers.handle( |
1190 | + 'ubuntu_drivers', self.cfg_accepted, myCloud, None, None) |
1191 | + self.assertEqual([mock.call(['ubuntu-drivers-common'])], |
1192 | + myCloud.distro.install_packages.call_args_list) |
1193 | + self.assertEqual([mock.call(self.install_gpgpu)], |
1194 | + m_subp.call_args_list) |
1195 | + self.assertIn('WARNING: the available version of ubuntu-drivers is' |
1196 | + ' too old to perform requested driver installation', |
1197 | + self.logs.getvalue()) |
1198 | + |
1199 | + |
1200 | +# Sub-class TestUbuntuDrivers to run the same test cases, but with a version |
1201 | +class TestUbuntuDriversWithVersion(TestUbuntuDrivers): |
1202 | + cfg_accepted = { |
1203 | + 'drivers': {'nvidia': {'license-accepted': True, 'version': '123'}}} |
1204 | + install_gpgpu = ['ubuntu-drivers', 'install', '--gpgpu', 'nvidia:123'] |
1205 | + |
1206 | + @mock.patch(MPATH + "util.subp", return_value=('', '')) |
1207 | + @mock.patch(MPATH + "util.which", return_value=False) |
1208 | + def test_version_none_uses_latest(self, m_which, m_subp): |
1209 | + myCloud = mock.MagicMock() |
1210 | + version_none_cfg = { |
1211 | + 'drivers': {'nvidia': {'license-accepted': True, 'version': None}}} |
1212 | + drivers.handle( |
1213 | + 'ubuntu_drivers', version_none_cfg, myCloud, None, None) |
1214 | + self.assertEqual( |
1215 | + [mock.call(['ubuntu-drivers', 'install', '--gpgpu', 'nvidia'])], |
1216 | + m_subp.call_args_list) |
1217 | + |
1218 | + def test_specifying_a_version_doesnt_override_license_acceptance(self): |
1219 | + self._assert_inert_with_config({ |
1220 | + 'drivers': {'nvidia': {'license-accepted': False, |
1221 | + 'version': '123'}} |
1222 | + }) |
1223 | + |
1224 | +# vi: ts=4 expandtab |
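The driver-install cases above correspond to cloud-config along these lines, a sketch built from the test fixtures (`cfg_accepted` and the versioned variant):

```yaml
#cloud-config
drivers:
  nvidia:
    license-accepted: true   # must be an explicit acceptance; anything else is inert
    version: "123"           # optional; omit (or set null) to install the latest driver
```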
1225 | diff --git a/cloudinit/net/eni.py b/cloudinit/net/eni.py |
1226 | index 6423632..b129bb6 100644 |
1227 | --- a/cloudinit/net/eni.py |
1228 | +++ b/cloudinit/net/eni.py |
1229 | @@ -366,8 +366,6 @@ class Renderer(renderer.Renderer): |
1230 | down = indent + "pre-down route del" |
1231 | or_true = " || true" |
1232 | mapping = { |
1233 | - 'network': '-net', |
1234 | - 'netmask': 'netmask', |
1235 | 'gateway': 'gw', |
1236 | 'metric': 'metric', |
1237 | } |
1238 | @@ -379,13 +377,21 @@ class Renderer(renderer.Renderer): |
1239 | default_gw = ' -A inet6 default' |
1240 | |
1241 | route_line = '' |
1242 | - for k in ['network', 'netmask', 'gateway', 'metric']: |
1243 | - if default_gw and k in ['network', 'netmask']: |
1244 | + for k in ['network', 'gateway', 'metric']: |
1245 | + if default_gw and k == 'network': |
1246 | continue |
1247 | if k == 'gateway': |
1248 | route_line += '%s %s %s' % (default_gw, mapping[k], route[k]) |
1249 | elif k in route: |
1250 | - route_line += ' %s %s' % (mapping[k], route[k]) |
1251 | + if k == 'network': |
1252 | + if ':' in route[k]: |
1253 | + route_line += ' -A inet6' |
1254 | + else: |
1255 | + route_line += ' -net' |
1256 | + if 'prefix' in route: |
1257 | + route_line += ' %s/%s' % (route[k], route['prefix']) |
1258 | + else: |
1259 | + route_line += ' %s %s' % (mapping[k], route[k]) |
1260 | content.append(up + route_line + or_true) |
1261 | content.append(down + route_line + or_true) |
1262 | return content |
1263 | diff --git a/cloudinit/net/network_state.py b/cloudinit/net/network_state.py |
1264 | index 539b76d..4d19f56 100644 |
1265 | --- a/cloudinit/net/network_state.py |
1266 | +++ b/cloudinit/net/network_state.py |
1267 | @@ -148,6 +148,7 @@ class NetworkState(object): |
1268 | self._network_state = copy.deepcopy(network_state) |
1269 | self._version = version |
1270 | self.use_ipv6 = network_state.get('use_ipv6', False) |
1271 | + self._has_default_route = None |
1272 | |
1273 | @property |
1274 | def config(self): |
1275 | @@ -157,14 +158,6 @@ class NetworkState(object): |
1276 | def version(self): |
1277 | return self._version |
1278 | |
1279 | - def iter_routes(self, filter_func=None): |
1280 | - for route in self._network_state.get('routes', []): |
1281 | - if filter_func is not None: |
1282 | - if filter_func(route): |
1283 | - yield route |
1284 | - else: |
1285 | - yield route |
1286 | - |
1287 | @property |
1288 | def dns_nameservers(self): |
1289 | try: |
1290 | @@ -179,6 +172,12 @@ class NetworkState(object): |
1291 | except KeyError: |
1292 | return [] |
1293 | |
1294 | + @property |
1295 | + def has_default_route(self): |
1296 | + if self._has_default_route is None: |
1297 | + self._has_default_route = self._maybe_has_default_route() |
1298 | + return self._has_default_route |
1299 | + |
1300 | def iter_interfaces(self, filter_func=None): |
1301 | ifaces = self._network_state.get('interfaces', {}) |
1302 | for iface in six.itervalues(ifaces): |
1303 | @@ -188,6 +187,32 @@ class NetworkState(object): |
1304 | if filter_func(iface): |
1305 | yield iface |
1306 | |
1307 | + def iter_routes(self, filter_func=None): |
1308 | + for route in self._network_state.get('routes', []): |
1309 | + if filter_func is not None: |
1310 | + if filter_func(route): |
1311 | + yield route |
1312 | + else: |
1313 | + yield route |
1314 | + |
1315 | + def _maybe_has_default_route(self): |
1316 | + for route in self.iter_routes(): |
1317 | + if self._is_default_route(route): |
1318 | + return True |
1319 | + for iface in self.iter_interfaces(): |
1320 | + for subnet in iface.get('subnets', []): |
1321 | + for route in subnet.get('routes', []): |
1322 | + if self._is_default_route(route): |
1323 | + return True |
1324 | + return False |
1325 | + |
1326 | + def _is_default_route(self, route): |
1327 | + default_nets = ('::', '0.0.0.0') |
1328 | + return ( |
1329 | + route.get('prefix') == 0 |
1330 | + and route.get('network') in default_nets |
1331 | + ) |
1332 | + |
1333 | |
1334 | @six.add_metaclass(CommandHandlerMeta) |
1335 | class NetworkStateInterpreter(object): |
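The new `has_default_route` property checks two places: the top-level route list and each subnet's route list, treating an all-zeros network with prefix 0 as a default route for either address family. A standalone sketch, using a plain dict in place of the parsed `NetworkState` object:

```python
def _is_default_route(route):
    # A default route targets the all-zeros network with prefix 0,
    # in either address family ('0.0.0.0' or '::').
    return route.get('prefix') == 0 and route.get('network') in ('::', '0.0.0.0')

def has_default_route(network_state):
    # network_state is a plain dict standing in for NetworkState:
    # top-level routes plus per-interface, per-subnet routes.
    for route in network_state.get('routes', []):
        if _is_default_route(route):
            return True
    for iface in network_state.get('interfaces', {}).values():
        for subnet in iface.get('subnets', []):
            for route in subnet.get('routes', []):
                if _is_default_route(route):
                    return True
    return False
```

The real property additionally caches the result in `_has_default_route`, since the network state is immutable after parsing.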
1336 | diff --git a/cloudinit/net/sysconfig.py b/cloudinit/net/sysconfig.py |
1337 | index 19b3e60..a47da0a 100644 |
1338 | --- a/cloudinit/net/sysconfig.py |
1339 | +++ b/cloudinit/net/sysconfig.py |
1340 | @@ -18,6 +18,8 @@ from .network_state import ( |
1341 | |
1342 | LOG = logging.getLogger(__name__) |
1343 | NM_CFG_FILE = "/etc/NetworkManager/NetworkManager.conf" |
1344 | +KNOWN_DISTROS = [ |
1345 | + 'opensuse', 'sles', 'suse', 'redhat', 'fedora', 'centos'] |
1346 | |
1347 | |
1348 | def _make_header(sep='#'): |
1349 | @@ -322,7 +324,7 @@ class Renderer(renderer.Renderer): |
1350 | iface_cfg[new_key] = old_value |
1351 | |
1352 | @classmethod |
1353 | - def _render_subnets(cls, iface_cfg, subnets): |
1354 | + def _render_subnets(cls, iface_cfg, subnets, has_default_route): |
1355 | # setting base values |
1356 | iface_cfg['BOOTPROTO'] = 'none' |
1357 | |
1358 | @@ -331,6 +333,7 @@ class Renderer(renderer.Renderer): |
1359 | mtu_key = 'MTU' |
1360 | subnet_type = subnet.get('type') |
1361 | if subnet_type == 'dhcp6': |
1362 | + # TODO need to set BOOTPROTO to dhcp6 on SUSE |
1363 | iface_cfg['IPV6INIT'] = True |
1364 | iface_cfg['DHCPV6C'] = True |
1365 | elif subnet_type in ['dhcp4', 'dhcp']: |
1366 | @@ -375,9 +378,9 @@ class Renderer(renderer.Renderer): |
1367 | ipv6_index = -1 |
1368 | for i, subnet in enumerate(subnets, start=len(iface_cfg.children)): |
1369 | subnet_type = subnet.get('type') |
1370 | - if subnet_type == 'dhcp6': |
1371 | - continue |
1372 | - elif subnet_type in ['dhcp4', 'dhcp']: |
1373 | + if subnet_type in ['dhcp', 'dhcp4', 'dhcp6']: |
1374 | + if has_default_route and iface_cfg['BOOTPROTO'] != 'none': |
1375 | + iface_cfg['DHCLIENT_SET_DEFAULT_ROUTE'] = False |
1376 | continue |
1377 | elif subnet_type == 'static': |
1378 | if subnet_is_ipv6(subnet): |
1379 | @@ -385,10 +388,13 @@ class Renderer(renderer.Renderer): |
1380 | ipv6_cidr = "%s/%s" % (subnet['address'], subnet['prefix']) |
1381 | if ipv6_index == 0: |
1382 | iface_cfg['IPV6ADDR'] = ipv6_cidr |
1383 | + iface_cfg['IPADDR6'] = ipv6_cidr |
1384 | elif ipv6_index == 1: |
1385 | iface_cfg['IPV6ADDR_SECONDARIES'] = ipv6_cidr |
1386 | + iface_cfg['IPADDR6_0'] = ipv6_cidr |
1387 | else: |
1388 | iface_cfg['IPV6ADDR_SECONDARIES'] += " " + ipv6_cidr |
1389 | + iface_cfg['IPADDR6_%d' % ipv6_index] = ipv6_cidr |
1390 | else: |
1391 | ipv4_index = ipv4_index + 1 |
1392 | suff = "" if ipv4_index == 0 else str(ipv4_index) |
1393 | @@ -443,6 +449,8 @@ class Renderer(renderer.Renderer): |
1394 | # TODO(harlowja): add validation that no other iface has |
1395 | # also provided the default route? |
1396 | iface_cfg['DEFROUTE'] = True |
1397 | + if iface_cfg['BOOTPROTO'] in ('dhcp', 'dhcp4', 'dhcp6'): |
1398 | + iface_cfg['DHCLIENT_SET_DEFAULT_ROUTE'] = True |
1399 | if 'gateway' in route: |
1400 | if is_ipv6 or is_ipv6_addr(route['gateway']): |
1401 | iface_cfg['IPV6_DEFAULTGW'] = route['gateway'] |
1402 | @@ -493,7 +501,9 @@ class Renderer(renderer.Renderer): |
1403 | iface_cfg = iface_contents[iface_name] |
1404 | route_cfg = iface_cfg.routes |
1405 | |
1406 | - cls._render_subnets(iface_cfg, iface_subnets) |
1407 | + cls._render_subnets( |
1408 | + iface_cfg, iface_subnets, network_state.has_default_route |
1409 | + ) |
1410 | cls._render_subnet_routes(iface_cfg, route_cfg, iface_subnets) |
1411 | |
1412 | @classmethod |
1413 | @@ -518,7 +528,9 @@ class Renderer(renderer.Renderer): |
1414 | |
1415 | iface_subnets = iface.get("subnets", []) |
1416 | route_cfg = iface_cfg.routes |
1417 | - cls._render_subnets(iface_cfg, iface_subnets) |
1418 | + cls._render_subnets( |
1419 | + iface_cfg, iface_subnets, network_state.has_default_route |
1420 | + ) |
1421 | cls._render_subnet_routes(iface_cfg, route_cfg, iface_subnets) |
1422 | |
1423 | # iter_interfaces on network-state is not sorted to produce |
1424 | @@ -547,7 +559,9 @@ class Renderer(renderer.Renderer): |
1425 | |
1426 | iface_subnets = iface.get("subnets", []) |
1427 | route_cfg = iface_cfg.routes |
1428 | - cls._render_subnets(iface_cfg, iface_subnets) |
1429 | + cls._render_subnets( |
1430 | + iface_cfg, iface_subnets, network_state.has_default_route |
1431 | + ) |
1432 | cls._render_subnet_routes(iface_cfg, route_cfg, iface_subnets) |
1433 | |
1434 | @staticmethod |
1435 | @@ -608,7 +622,9 @@ class Renderer(renderer.Renderer): |
1436 | |
1437 | iface_subnets = iface.get("subnets", []) |
1438 | route_cfg = iface_cfg.routes |
1439 | - cls._render_subnets(iface_cfg, iface_subnets) |
1440 | + cls._render_subnets( |
1441 | + iface_cfg, iface_subnets, network_state.has_default_route |
1442 | + ) |
1443 | cls._render_subnet_routes(iface_cfg, route_cfg, iface_subnets) |
1444 | |
1445 | @classmethod |
1446 | @@ -620,7 +636,9 @@ class Renderer(renderer.Renderer): |
1447 | iface_cfg.kind = 'infiniband' |
1448 | iface_subnets = iface.get("subnets", []) |
1449 | route_cfg = iface_cfg.routes |
1450 | - cls._render_subnets(iface_cfg, iface_subnets) |
1451 | + cls._render_subnets( |
1452 | + iface_cfg, iface_subnets, network_state.has_default_route |
1453 | + ) |
1454 | cls._render_subnet_routes(iface_cfg, route_cfg, iface_subnets) |
1455 | |
1456 | @classmethod |
1457 | @@ -701,8 +719,8 @@ class Renderer(renderer.Renderer): |
1458 | def available(target=None): |
1459 | sysconfig = available_sysconfig(target=target) |
1460 | nm = available_nm(target=target) |
1461 | - |
1462 | - return any([nm, sysconfig]) |
1463 | + return (util.get_linux_distro()[0] in KNOWN_DISTROS |
1464 | + and any([nm, sysconfig])) |
1465 | |
1466 | |
1467 | def available_sysconfig(target=None): |
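The key behavioral change in `_render_subnets` is that a DHCP subnet now emits `DHCLIENT_SET_DEFAULT_ROUTE=False` when the config already carries a static default route elsewhere, so dhclient does not install a competing one. A minimal sketch of that decision (function name and dict return are illustrative, not the renderer's API):

```python
def dhcp_route_flags(subnet_type, bootproto, has_default_route):
    """Illustrative: return extra sysconfig keys for a DHCP subnet."""
    flags = {}
    if subnet_type in ('dhcp', 'dhcp4', 'dhcp6'):
        # BOOTPROTO was set in the first rendering pass; 'none' means the
        # interface is statically configured despite the subnet type.
        if has_default_route and bootproto != 'none':
            # A static default route exists elsewhere in the network
            # config, so stop dhclient from installing its own.
            flags['DHCLIENT_SET_DEFAULT_ROUTE'] = False
    return flags
```

Note the complementary hunk later in the diff: an interface that explicitly owns the default route gets `DHCLIENT_SET_DEFAULT_ROUTE=True` re-enabled alongside `DEFROUTE=True`.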
1468 | diff --git a/cloudinit/net/tests/test_init.py b/cloudinit/net/tests/test_init.py |
1469 | index f55c31e..6d2affe 100644 |
1470 | --- a/cloudinit/net/tests/test_init.py |
1471 | +++ b/cloudinit/net/tests/test_init.py |
1472 | @@ -7,11 +7,11 @@ import mock |
1473 | import os |
1474 | import requests |
1475 | import textwrap |
1476 | -import yaml |
1477 | |
1478 | import cloudinit.net as net |
1479 | from cloudinit.util import ensure_file, write_file, ProcessExecutionError |
1480 | from cloudinit.tests.helpers import CiTestCase, HttprettyTestCase |
1481 | +from cloudinit import safeyaml as yaml |
1482 | |
1483 | |
1484 | class TestSysDevPath(CiTestCase): |
1485 | diff --git a/cloudinit/reporting/handlers.py b/cloudinit/reporting/handlers.py |
1486 | old mode 100644 |
1487 | new mode 100755 |
1488 | index 6d23558..10165ae |
1489 | --- a/cloudinit/reporting/handlers.py |
1490 | +++ b/cloudinit/reporting/handlers.py |
1491 | @@ -5,7 +5,6 @@ import fcntl |
1492 | import json |
1493 | import six |
1494 | import os |
1495 | -import re |
1496 | import struct |
1497 | import threading |
1498 | import time |
1499 | @@ -14,6 +13,7 @@ from cloudinit import log as logging |
1500 | from cloudinit.registry import DictRegistry |
1501 | from cloudinit import (url_helper, util) |
1502 | from datetime import datetime |
1503 | +from six.moves.queue import Empty as QueueEmptyError |
1504 | |
1505 | if six.PY2: |
1506 | from multiprocessing.queues import JoinableQueue as JQueue |
1507 | @@ -129,24 +129,50 @@ class HyperVKvpReportingHandler(ReportingHandler): |
1508 | DESC_IDX_KEY = 'msg_i' |
1509 | JSON_SEPARATORS = (',', ':') |
1510 | KVP_POOL_FILE_GUEST = '/var/lib/hyperv/.kvp_pool_1' |
1511 | + _already_truncated_pool_file = False |
1512 | |
1513 | def __init__(self, |
1514 | kvp_file_path=KVP_POOL_FILE_GUEST, |
1515 | event_types=None): |
1516 | super(HyperVKvpReportingHandler, self).__init__() |
1517 | self._kvp_file_path = kvp_file_path |
1518 | + HyperVKvpReportingHandler._truncate_guest_pool_file( |
1519 | + self._kvp_file_path) |
1520 | + |
1521 | self._event_types = event_types |
1522 | self.q = JQueue() |
1523 | - self.kvp_file = None |
1524 | self.incarnation_no = self._get_incarnation_no() |
1525 | self.event_key_prefix = u"{0}|{1}".format(self.EVENT_PREFIX, |
1526 | self.incarnation_no) |
1527 | - self._current_offset = 0 |
1528 | self.publish_thread = threading.Thread( |
1529 | target=self._publish_event_routine) |
1530 | self.publish_thread.daemon = True |
1531 | self.publish_thread.start() |
1532 | |
1533 | + @classmethod |
1534 | + def _truncate_guest_pool_file(cls, kvp_file): |
1535 | + """ |
1536 | + Truncate the pool file if it has not been truncated since boot. |
1537 | + This should be done exactly once for the file indicated by |
1538 | + KVP_POOL_FILE_GUEST constant above. This method takes a filename |
1539 | + so that we can use an arbitrary file during unit testing. |
1540 | + Since KVP is a best-effort telemetry channel we only attempt to |
1541 | + truncate the file once and only if the file has not been modified |
1542 | + since boot. Additional truncation can lead to loss of existing |
1543 | + KVPs. |
1544 | + """ |
1545 | + if cls._already_truncated_pool_file: |
1546 | + return |
1547 | + boot_time = time.time() - float(util.uptime()) |
1548 | + try: |
1549 | + if os.path.getmtime(kvp_file) < boot_time: |
1550 | + with open(kvp_file, "w"): |
1551 | + pass |
1552 | + except (OSError, IOError) as e: |
1553 | + LOG.warning("failed to truncate kvp pool file, %s", e) |
1554 | + finally: |
1555 | + cls._already_truncated_pool_file = True |
1556 | + |
1557 | def _get_incarnation_no(self): |
1558 | """ |
1559 | use the time passed as the incarnation number. |
1560 | @@ -162,20 +188,15 @@ class HyperVKvpReportingHandler(ReportingHandler): |
1561 | |
1562 | def _iterate_kvps(self, offset): |
1563 | """iterate the kvp file from the current offset.""" |
1564 | - try: |
1565 | - with open(self._kvp_file_path, 'rb+') as f: |
1566 | - self.kvp_file = f |
1567 | - fcntl.flock(f, fcntl.LOCK_EX) |
1568 | - f.seek(offset) |
1569 | + with open(self._kvp_file_path, 'rb') as f: |
1570 | + fcntl.flock(f, fcntl.LOCK_EX) |
1571 | + f.seek(offset) |
1572 | + record_data = f.read(self.HV_KVP_RECORD_SIZE) |
1573 | + while len(record_data) == self.HV_KVP_RECORD_SIZE: |
1574 | + kvp_item = self._decode_kvp_item(record_data) |
1575 | + yield kvp_item |
1576 | record_data = f.read(self.HV_KVP_RECORD_SIZE) |
1577 | - while len(record_data) == self.HV_KVP_RECORD_SIZE: |
1578 | - self._current_offset += self.HV_KVP_RECORD_SIZE |
1579 | - kvp_item = self._decode_kvp_item(record_data) |
1580 | - yield kvp_item |
1581 | - record_data = f.read(self.HV_KVP_RECORD_SIZE) |
1582 | - fcntl.flock(f, fcntl.LOCK_UN) |
1583 | - finally: |
1584 | - self.kvp_file = None |
1585 | + fcntl.flock(f, fcntl.LOCK_UN) |
1586 | |
1587 | def _event_key(self, event): |
1588 | """ |
1589 | @@ -207,23 +228,13 @@ class HyperVKvpReportingHandler(ReportingHandler): |
1590 | |
1591 | return {'key': k, 'value': v} |
1592 | |
1593 | - def _update_kvp_item(self, record_data): |
1594 | - if self.kvp_file is None: |
1595 | - raise ReportException( |
1596 | - "kvp file '{0}' not opened." |
1597 | - .format(self._kvp_file_path)) |
1598 | - self.kvp_file.seek(-self.HV_KVP_RECORD_SIZE, 1) |
1599 | - self.kvp_file.write(record_data) |
1600 | - |
1601 | def _append_kvp_item(self, record_data): |
1602 | - with open(self._kvp_file_path, 'rb+') as f: |
1603 | + with open(self._kvp_file_path, 'ab') as f: |
1604 | fcntl.flock(f, fcntl.LOCK_EX) |
1605 | - # seek to end of the file |
1606 | - f.seek(0, 2) |
1607 | - f.write(record_data) |
1608 | + for data in record_data: |
1609 | + f.write(data) |
1610 | f.flush() |
1611 | fcntl.flock(f, fcntl.LOCK_UN) |
1612 | - self._current_offset = f.tell() |
1613 | |
1614 | def _break_down(self, key, meta_data, description): |
1615 | del meta_data[self.MSG_KEY] |
1616 | @@ -279,40 +290,26 @@ class HyperVKvpReportingHandler(ReportingHandler): |
1617 | |
1618 | def _publish_event_routine(self): |
1619 | while True: |
1620 | + items_from_queue = 0 |
1621 | try: |
1622 | event = self.q.get(block=True) |
1623 | - need_append = True |
1624 | + items_from_queue += 1 |
1625 | + encoded_data = [] |
1626 | + while event is not None: |
1627 | + encoded_data += self._encode_event(event) |
1628 | + try: |
1629 | + # get all the rest of the events in the queue |
1630 | + event = self.q.get(block=False) |
1631 | + items_from_queue += 1 |
1632 | + except QueueEmptyError: |
1633 | + event = None |
1634 | try: |
1635 | - if not os.path.exists(self._kvp_file_path): |
1636 | - LOG.warning( |
1637 | - "skip writing events %s to %s. file not present.", |
1638 | - event.as_string(), |
1639 | - self._kvp_file_path) |
1640 | - encoded_event = self._encode_event(event) |
1641 | - # for each encoded_event |
1642 | - for encoded_data in (encoded_event): |
1643 | - for kvp in self._iterate_kvps(self._current_offset): |
1644 | - match = ( |
1645 | - re.match( |
1646 | - r"^{0}\|(\d+)\|.+" |
1647 | - .format(self.EVENT_PREFIX), |
1648 | - kvp['key'] |
1649 | - )) |
1650 | - if match: |
1651 | - match_groups = match.groups(0) |
1652 | - if int(match_groups[0]) < self.incarnation_no: |
1653 | - need_append = False |
1654 | - self._update_kvp_item(encoded_data) |
1655 | - continue |
1656 | - if need_append: |
1657 | - self._append_kvp_item(encoded_data) |
1658 | - except IOError as e: |
1659 | - LOG.warning( |
1660 | - "failed posting event to kvp: %s e:%s", |
1661 | - event.as_string(), e) |
1662 | + self._append_kvp_item(encoded_data) |
1663 | + except (OSError, IOError) as e: |
1664 | + LOG.warning("failed posting events to kvp, %s", e) |
1665 | finally: |
1666 | - self.q.task_done() |
1667 | - |
1668 | + for _ in range(items_from_queue): |
1669 | + self.q.task_done() |
1670 | # when main process exits, q.get() will throw EOFError |
1671 | # indicating we should exit this thread. |
1672 | except EOFError: |
1673 | @@ -322,7 +319,7 @@ class HyperVKvpReportingHandler(ReportingHandler): |
1674 | # if the kvp pool already contains a chunk of data, |
1675 | # so defer it to another thread. |
1676 | def publish_event(self, event): |
1677 | - if (not self._event_types or event.event_type in self._event_types): |
1678 | + if not self._event_types or event.event_type in self._event_types: |
1679 | self.q.put(event) |
1680 | |
1681 | def flush(self): |
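The rewritten `_publish_event_routine` batches KVP writes: it blocks for one event, then drains whatever else is already queued so all records land in a single append. A self-contained sketch of that drain loop (Python 3 `queue` in place of the six-compatible `JoinableQueue`):

```python
import queue

def drain_events(q):
    """Illustrative sketch of the queue-draining loop above: block for
    the first event, then opportunistically collect everything already
    queued so the records can be appended in one write."""
    batch = [q.get(block=True)]
    while True:
        try:
            batch.append(q.get(block=False))
        except queue.Empty:
            return batch
```

In the handler, `task_done()` is then called once per drained item (`items_from_queue`), keeping `flush()`'s `q.join()` accounting correct.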
1682 | diff --git a/cloudinit/sources/DataSourceAzure.py b/cloudinit/sources/DataSourceAzure.py |
1683 | old mode 100644 |
1684 | new mode 100755 |
1685 | index eccbee5..b7440c1 |
1686 | --- a/cloudinit/sources/DataSourceAzure.py |
1687 | +++ b/cloudinit/sources/DataSourceAzure.py |
1688 | @@ -21,10 +21,14 @@ from cloudinit import net |
1689 | from cloudinit.event import EventType |
1690 | from cloudinit.net.dhcp import EphemeralDHCPv4 |
1691 | from cloudinit import sources |
1692 | -from cloudinit.sources.helpers.azure import get_metadata_from_fabric |
1693 | from cloudinit.sources.helpers import netlink |
1694 | from cloudinit.url_helper import UrlError, readurl, retry_on_url_exc |
1695 | from cloudinit import util |
1696 | +from cloudinit.reporting import events |
1697 | + |
1698 | +from cloudinit.sources.helpers.azure import (azure_ds_reporter, |
1699 | + azure_ds_telemetry_reporter, |
1700 | + get_metadata_from_fabric) |
1701 | |
1702 | LOG = logging.getLogger(__name__) |
1703 | |
1704 | @@ -53,8 +57,14 @@ AZURE_CHASSIS_ASSET_TAG = '7783-7084-3265-9085-8269-3286-77' |
1705 | REPROVISION_MARKER_FILE = "/var/lib/cloud/data/poll_imds" |
1706 | REPORTED_READY_MARKER_FILE = "/var/lib/cloud/data/reported_ready" |
1707 | AGENT_SEED_DIR = '/var/lib/waagent' |
1708 | + |
1709 | +# In the event where the IMDS primary server is not |
1710 | +# available, it takes 1s to fallback to the secondary one |
1711 | +IMDS_TIMEOUT_IN_SECONDS = 2 |
1712 | IMDS_URL = "http://169.254.169.254/metadata/" |
1713 | |
1714 | +PLATFORM_ENTROPY_SOURCE = "/sys/firmware/acpi/tables/OEM0" |
1715 | + |
1716 | # List of static scripts and network config artifacts created by |
1717 | # stock ubuntu suported images. |
1718 | # stock ubuntu supported images. |
1719 | @@ -195,6 +205,8 @@ if util.is_FreeBSD(): |
1720 | RESOURCE_DISK_PATH = "/dev/" + res_disk |
1721 | else: |
1722 | LOG.debug("resource disk is None") |
1723 | + # TODO Find where platform entropy data is surfaced |
1724 | + PLATFORM_ENTROPY_SOURCE = None |
1725 | |
1726 | BUILTIN_DS_CONFIG = { |
1727 | 'agent_command': AGENT_START_BUILTIN, |
1728 | @@ -241,6 +253,7 @@ def set_hostname(hostname, hostname_command='hostname'): |
1729 | util.subp([hostname_command, hostname]) |
1730 | |
1731 | |
1732 | +@azure_ds_telemetry_reporter |
1733 | @contextlib.contextmanager |
1734 | def temporary_hostname(temp_hostname, cfg, hostname_command='hostname'): |
1735 | """ |
1736 | @@ -287,6 +300,7 @@ class DataSourceAzure(sources.DataSource): |
1737 | root = sources.DataSource.__str__(self) |
1738 | return "%s [seed=%s]" % (root, self.seed) |
1739 | |
1740 | + @azure_ds_telemetry_reporter |
1741 | def bounce_network_with_azure_hostname(self): |
1742 | # When using cloud-init to provision, we have to set the hostname from |
1743 | # the metadata and "bounce" the network to force DDNS to update via |
1744 | @@ -312,6 +326,7 @@ class DataSourceAzure(sources.DataSource): |
1745 | util.logexc(LOG, "handling set_hostname failed") |
1746 | return False |
1747 | |
1748 | + @azure_ds_telemetry_reporter |
1749 | def get_metadata_from_agent(self): |
1750 | temp_hostname = self.metadata.get('local-hostname') |
1751 | agent_cmd = self.ds_cfg['agent_command'] |
1752 | @@ -341,15 +356,18 @@ class DataSourceAzure(sources.DataSource): |
1753 | LOG.debug("ssh authentication: " |
1754 | "using fingerprint from fabirc") |
1755 | |
1756 | - # wait very long for public SSH keys to arrive |
1757 | - # https://bugs.launchpad.net/cloud-init/+bug/1717611 |
1758 | - missing = util.log_time(logfunc=LOG.debug, |
1759 | - msg="waiting for SSH public key files", |
1760 | - func=util.wait_for_files, |
1761 | - args=(fp_files, 900)) |
1762 | - |
1763 | - if len(missing): |
1764 | - LOG.warning("Did not find files, but going on: %s", missing) |
1765 | + with events.ReportEventStack( |
1766 | + name="waiting-for-ssh-public-key", |
1767 | + description="wait for agents to retrieve ssh keys", |
1768 | + parent=azure_ds_reporter): |
1769 | + # wait very long for public SSH keys to arrive |
1770 | + # https://bugs.launchpad.net/cloud-init/+bug/1717611 |
1771 | + missing = util.log_time(logfunc=LOG.debug, |
1772 | + msg="waiting for SSH public key files", |
1773 | + func=util.wait_for_files, |
1774 | + args=(fp_files, 900)) |
1775 | + if len(missing): |
1776 | + LOG.warning("Did not find files, but going on: %s", missing) |
1777 | |
1778 | metadata = {} |
1779 | metadata['public-keys'] = key_value or pubkeys_from_crt_files(fp_files) |
1780 | @@ -363,6 +381,7 @@ class DataSourceAzure(sources.DataSource): |
1781 | subplatform_type = 'seed-dir' |
1782 | return '%s (%s)' % (subplatform_type, self.seed) |
1783 | |
1784 | + @azure_ds_telemetry_reporter |
1785 | def crawl_metadata(self): |
1786 | """Walk all instance metadata sources returning a dict on success. |
1787 | |
1788 | @@ -393,7 +412,7 @@ class DataSourceAzure(sources.DataSource): |
1789 | elif cdev.startswith("/dev/"): |
1790 | if util.is_FreeBSD(): |
1791 | ret = util.mount_cb(cdev, load_azure_ds_dir, |
1792 | - mtype="udf", sync=False) |
1793 | + mtype="udf") |
1794 | else: |
1795 | ret = util.mount_cb(cdev, load_azure_ds_dir) |
1796 | else: |
1797 | @@ -464,6 +483,7 @@ class DataSourceAzure(sources.DataSource): |
1798 | super(DataSourceAzure, self).clear_cached_attrs(attr_defaults) |
1799 | self._metadata_imds = sources.UNSET |
1800 | |
1801 | + @azure_ds_telemetry_reporter |
1802 | def _get_data(self): |
1803 | """Crawl and process datasource metadata caching metadata as attrs. |
1804 | |
1805 | @@ -510,6 +530,7 @@ class DataSourceAzure(sources.DataSource): |
1806 | # quickly (local check only) if self.instance_id is still valid |
1807 | return sources.instance_id_matches_system_uuid(self.get_instance_id()) |
1808 | |
1809 | + @azure_ds_telemetry_reporter |
1810 | def setup(self, is_new_instance): |
1811 | if self._negotiated is False: |
1812 | LOG.debug("negotiating for %s (new_instance=%s)", |
1813 | @@ -566,9 +587,9 @@ class DataSourceAzure(sources.DataSource): |
1814 | return |
1815 | self._ephemeral_dhcp_ctx.clean_network() |
1816 | else: |
1817 | - return readurl(url, timeout=1, headers=headers, |
1818 | - exception_cb=exc_cb, infinite=True, |
1819 | - log_req_resp=False).contents |
1820 | + return readurl(url, timeout=IMDS_TIMEOUT_IN_SECONDS, |
1821 | + headers=headers, exception_cb=exc_cb, |
1822 | + infinite=True, log_req_resp=False).contents |
1823 | except UrlError: |
1824 | # Teardown our EphemeralDHCPv4 context on failure as we retry |
1825 | self._ephemeral_dhcp_ctx.clean_network() |
1826 | @@ -577,6 +598,7 @@ class DataSourceAzure(sources.DataSource): |
1827 | if nl_sock: |
1828 | nl_sock.close() |
1829 | |
1830 | + @azure_ds_telemetry_reporter |
1831 | def _report_ready(self, lease): |
1832 | """Tells the fabric provisioning has completed """ |
1833 | try: |
1834 | @@ -614,9 +636,14 @@ class DataSourceAzure(sources.DataSource): |
1835 | def _reprovision(self): |
1836 | """Initiate the reprovisioning workflow.""" |
1837 | contents = self._poll_imds() |
1838 | - md, ud, cfg = read_azure_ovf(contents) |
1839 | - return (md, ud, cfg, {'ovf-env.xml': contents}) |
1840 | - |
1841 | + with events.ReportEventStack( |
1842 | + name="reprovisioning-read-azure-ovf", |
1843 | + description="read azure ovf during reprovisioning", |
1844 | + parent=azure_ds_reporter): |
1845 | + md, ud, cfg = read_azure_ovf(contents) |
1846 | + return (md, ud, cfg, {'ovf-env.xml': contents}) |
1847 | + |
1848 | + @azure_ds_telemetry_reporter |
1849 | def _negotiate(self): |
1850 | """Negotiate with fabric and return data from it. |
1851 | |
1852 | @@ -649,6 +676,7 @@ class DataSourceAzure(sources.DataSource): |
1853 | util.del_file(REPROVISION_MARKER_FILE) |
1854 | return fabric_data |
1855 | |
1856 | + @azure_ds_telemetry_reporter |
1857 | def activate(self, cfg, is_new_instance): |
1858 | address_ephemeral_resize(is_new_instance=is_new_instance, |
1859 | preserve_ntfs=self.ds_cfg.get( |
1860 | @@ -665,7 +693,7 @@ class DataSourceAzure(sources.DataSource): |
1861 | 2. Generate a fallback network config that does not include any of |
1862 | the blacklisted devices. |
1863 | """ |
1864 | - if not self._network_config: |
1865 | + if not self._network_config or self._network_config == sources.UNSET: |
1866 | if self.ds_cfg.get('apply_network_config'): |
1867 | nc_src = self._metadata_imds |
1868 | else: |
1869 | @@ -687,12 +715,14 @@ def _partitions_on_device(devpath, maxnum=16): |
1870 | return [] |
1871 | |
1872 | |
1873 | +@azure_ds_telemetry_reporter |
1874 | def _has_ntfs_filesystem(devpath): |
1875 | ntfs_devices = util.find_devs_with("TYPE=ntfs", no_cache=True) |
1876 | LOG.debug('ntfs_devices found = %s', ntfs_devices) |
1877 | return os.path.realpath(devpath) in ntfs_devices |
1878 | |
1879 | |
1880 | +@azure_ds_telemetry_reporter |
1881 | def can_dev_be_reformatted(devpath, preserve_ntfs): |
1882 | """Determine if the ephemeral drive at devpath should be reformatted. |
1883 | |
1884 | @@ -741,43 +771,59 @@ def can_dev_be_reformatted(devpath, preserve_ntfs): |
1885 | (cand_part, cand_path, devpath)) |
1886 | return False, msg |
1887 | |
1888 | + @azure_ds_telemetry_reporter |
1889 | def count_files(mp): |
1890 | ignored = set(['dataloss_warning_readme.txt']) |
1891 | return len([f for f in os.listdir(mp) if f.lower() not in ignored]) |
1892 | |
1893 | bmsg = ('partition %s (%s) on device %s was ntfs formatted' % |
1894 | (cand_part, cand_path, devpath)) |
1895 | - try: |
1896 | - file_count = util.mount_cb(cand_path, count_files, mtype="ntfs", |
1897 | - update_env_for_mount={'LANG': 'C'}) |
1898 | - except util.MountFailedError as e: |
1899 | - if "unknown filesystem type 'ntfs'" in str(e): |
1900 | - return True, (bmsg + ' but this system cannot mount NTFS,' |
1901 | - ' assuming there are no important files.' |
1902 | - ' Formatting allowed.') |
1903 | - return False, bmsg + ' but mount of %s failed: %s' % (cand_part, e) |
1904 | - |
1905 | - if file_count != 0: |
1906 | - LOG.warning("it looks like you're using NTFS on the ephemeral disk, " |
1907 | - 'to ensure that filesystem does not get wiped, set ' |
1908 | - '%s.%s in config', '.'.join(DS_CFG_PATH), |
1909 | - DS_CFG_KEY_PRESERVE_NTFS) |
1910 | - return False, bmsg + ' but had %d files on it.' % file_count |
1911 | + |
1912 | + with events.ReportEventStack( |
1913 | + name="mount-ntfs-and-count", |
1914 | + description="mount-ntfs-and-count", |
1915 | + parent=azure_ds_reporter) as evt: |
1916 | + try: |
1917 | + file_count = util.mount_cb(cand_path, count_files, mtype="ntfs", |
1918 | + update_env_for_mount={'LANG': 'C'}) |
1919 | + except util.MountFailedError as e: |
1920 | + evt.description = "cannot mount ntfs" |
1921 | + if "unknown filesystem type 'ntfs'" in str(e): |
1922 | + return True, (bmsg + ' but this system cannot mount NTFS,' |
1923 | + ' assuming there are no important files.' |
1924 | + ' Formatting allowed.') |
1925 | + return False, bmsg + ' but mount of %s failed: %s' % (cand_part, e) |
1926 | + |
1927 | + if file_count != 0: |
1928 | + evt.description = "mounted and counted %d files" % file_count |
1929 | + LOG.warning("it looks like you're using NTFS on the ephemeral" |
1930 | + " disk, to ensure that filesystem does not get wiped," |
1931 | + " set %s.%s in config", '.'.join(DS_CFG_PATH), |
1932 | + DS_CFG_KEY_PRESERVE_NTFS) |
1933 | + return False, bmsg + ' but had %d files on it.' % file_count |
1934 | |
1935 | return True, bmsg + ' and had no important files. Safe for reformatting.' |
1936 | |
1937 | |
1938 | +@azure_ds_telemetry_reporter |
1939 | def address_ephemeral_resize(devpath=RESOURCE_DISK_PATH, maxwait=120, |
1940 | is_new_instance=False, preserve_ntfs=False): |
1941 | # wait for ephemeral disk to come up |
1942 | naplen = .2 |
1943 | - missing = util.wait_for_files([devpath], maxwait=maxwait, naplen=naplen, |
1944 | - log_pre="Azure ephemeral disk: ") |
1945 | - |
1946 | - if missing: |
1947 | - LOG.warning("ephemeral device '%s' did not appear after %d seconds.", |
1948 | - devpath, maxwait) |
1949 | - return |
1950 | + with events.ReportEventStack( |
1951 | + name="wait-for-ephemeral-disk", |
1952 | + description="wait for ephemeral disk", |
1953 | + parent=azure_ds_reporter): |
1954 | + missing = util.wait_for_files([devpath], |
1955 | + maxwait=maxwait, |
1956 | + naplen=naplen, |
1957 | + log_pre="Azure ephemeral disk: ") |
1958 | + |
1959 | + if missing: |
1960 | + LOG.warning("ephemeral device '%s' did" |
1961 | + " not appear after %d seconds.", |
1962 | + devpath, maxwait) |
1963 | + return |
1964 | |
1965 | result = False |
1966 | msg = None |
1967 | @@ -805,6 +851,7 @@ def address_ephemeral_resize(devpath=RESOURCE_DISK_PATH, maxwait=120, |
1968 | return |
1969 | |
1970 | |
1971 | +@azure_ds_telemetry_reporter |
1972 | def perform_hostname_bounce(hostname, cfg, prev_hostname): |
1973 | # set the hostname to 'hostname' if it is not already set to that. |
1974 | # then, if policy is not off, bounce the interface using command |
1975 | @@ -840,6 +887,7 @@ def perform_hostname_bounce(hostname, cfg, prev_hostname): |
1976 | return True |
1977 | |
1978 | |
1979 | +@azure_ds_telemetry_reporter |
1980 | def crtfile_to_pubkey(fname, data=None): |
1981 | pipeline = ('openssl x509 -noout -pubkey < "$0" |' |
1982 | 'ssh-keygen -i -m PKCS8 -f /dev/stdin') |
1983 | @@ -848,6 +896,7 @@ def crtfile_to_pubkey(fname, data=None): |
1984 | return out.rstrip() |
1985 | |
1986 | |
1987 | +@azure_ds_telemetry_reporter |
1988 | def pubkeys_from_crt_files(flist): |
1989 | pubkeys = [] |
1990 | errors = [] |
1991 | @@ -863,6 +912,7 @@ def pubkeys_from_crt_files(flist): |
1992 | return pubkeys |
1993 | |
1994 | |
1995 | +@azure_ds_telemetry_reporter |
1996 | def write_files(datadir, files, dirmode=None): |
1997 | |
1998 | def _redact_password(cnt, fname): |
1999 | @@ -890,6 +940,7 @@ def write_files(datadir, files, dirmode=None): |
2000 | util.write_file(filename=fname, content=content, mode=0o600) |
2001 | |
2002 | |
2003 | +@azure_ds_telemetry_reporter |
2004 | def invoke_agent(cmd): |
2005 | # this is a function itself to simplify patching it for test |
2006 | if cmd: |
2007 | @@ -909,6 +960,7 @@ def find_child(node, filter_func): |
2008 | return ret |
2009 | |
2010 | |
2011 | +@azure_ds_telemetry_reporter |
2012 | def load_azure_ovf_pubkeys(sshnode): |
2013 | # This parses a 'SSH' node formatted like below, and returns |
2014 | # an array of dicts. |
2015 | @@ -961,6 +1013,7 @@ def load_azure_ovf_pubkeys(sshnode): |
2016 | return found |
2017 | |
2018 | |
2019 | +@azure_ds_telemetry_reporter |
2020 | def read_azure_ovf(contents): |
2021 | try: |
2022 | dom = minidom.parseString(contents) |
2023 | @@ -1061,6 +1114,7 @@ def read_azure_ovf(contents): |
2024 | return (md, ud, cfg) |
2025 | |
2026 | |
2027 | +@azure_ds_telemetry_reporter |
2028 | def _extract_preprovisioned_vm_setting(dom): |
2029 | """Read the preprovision flag from the ovf. It should not |
2030 | exist unless true.""" |
2031 | @@ -1089,6 +1143,7 @@ def encrypt_pass(password, salt_id="$6$"): |
2032 | return crypt.crypt(password, salt_id + util.rand_str(strlen=16)) |
2033 | |
2034 | |
2035 | +@azure_ds_telemetry_reporter |
2036 | def _check_freebsd_cdrom(cdrom_dev): |
2037 | """Return boolean indicating path to cdrom device has content.""" |
2038 | try: |
2039 | @@ -1100,18 +1155,31 @@ def _check_freebsd_cdrom(cdrom_dev): |
2040 | return False |
2041 | |
2042 | |
2043 | -def _get_random_seed(): |
2044 | +@azure_ds_telemetry_reporter |
2045 | +def _get_random_seed(source=PLATFORM_ENTROPY_SOURCE): |
2046 | """Return content random seed file if available, otherwise, |
2047 | return None.""" |
2048 | # azure / hyper-v provides random data here |
2049 | - # TODO. find the seed on FreeBSD platform |
2050 | # now update ds_cfg to reflect contents pass in config |
2051 | - if util.is_FreeBSD(): |
2052 | + if source is None: |
2053 | return None |
2054 | - return util.load_file("/sys/firmware/acpi/tables/OEM0", |
2055 | - quiet=True, decode=False) |
2056 | + seed = util.load_file(source, quiet=True, decode=False) |
2057 | + |
2058 | + # The seed generally contains non-Unicode characters. load_file puts |
2059 | + # them into a str (in python 2) or bytes (in python 3). In python 2, |
2060 | + # bad octets in a str cause util.json_dumps() to throw an exception. In |
2061 | + # python 3, bytes is a non-serializable type, and the handler that |
2062 | + # load_file uses applies b64 encoding *again* to handle it. The simplest |
2063 | + # is to just b64encode the data and then decode it to a serializable |
2064 | + # string. Same number of bits of entropy, just with 25% more zeroes. |
2065 | + # There's no need to undo this base64-encoding when the random seed is |
2066 | + # actually used in cc_seed_random.py. |
2067 | + seed = base64.b64encode(seed).decode() |
2068 | |
2069 | + return seed |
2070 | |
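The comment in the hunk above can be demonstrated with a short standalone sketch (the helper name `serializable_seed` and the sample bytes are illustrative, not part of DataSourceAzure):

```python
import base64
import json


def serializable_seed(raw):
    """Make arbitrary entropy bytes safe for json.dumps().

    Raw seed bytes usually contain non-UTF-8 octets, so json.dumps()
    would fail on them directly; base64 keeps every bit of entropy
    while producing a plain ASCII str.
    """
    if raw is None:
        return None
    return base64.b64encode(raw).decode()


seed = serializable_seed(b"\xff\x00entropy\x9c")
# The b64 form round-trips losslessly and serializes cleanly.
print(json.dumps({"random_seed": seed}))
```

As the diff notes, consumers such as cc_seed_random never need to undo the encoding; the encoded string simply carries the same bits with ~33% size overhead.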
2071 | + |
2072 | +@azure_ds_telemetry_reporter |
2073 | def list_possible_azure_ds_devs(): |
2074 | devlist = [] |
2075 | if util.is_FreeBSD(): |
2076 | @@ -1126,6 +1194,7 @@ def list_possible_azure_ds_devs(): |
2077 | return devlist |
2078 | |
2079 | |
2080 | +@azure_ds_telemetry_reporter |
2081 | def load_azure_ds_dir(source_dir): |
2082 | ovf_file = os.path.join(source_dir, "ovf-env.xml") |
2083 | |
2084 | @@ -1148,47 +1217,54 @@ def parse_network_config(imds_metadata): |
2085 | @param: imds_metadata: Dict of content read from IMDS network service. |
2086 | @return: Dictionary containing network version 2 standard configuration. |
2087 | """ |
2088 | - if imds_metadata != sources.UNSET and imds_metadata: |
2089 | - netconfig = {'version': 2, 'ethernets': {}} |
2090 | - LOG.debug('Azure: generating network configuration from IMDS') |
2091 | - network_metadata = imds_metadata['network'] |
2092 | - for idx, intf in enumerate(network_metadata['interface']): |
2093 | - nicname = 'eth{idx}'.format(idx=idx) |
2094 | - dev_config = {} |
2095 | - for addr4 in intf['ipv4']['ipAddress']: |
2096 | - privateIpv4 = addr4['privateIpAddress'] |
2097 | - if privateIpv4: |
2098 | - if dev_config.get('dhcp4', False): |
2099 | - # Append static address config for nic > 1 |
2100 | - netPrefix = intf['ipv4']['subnet'][0].get( |
2101 | - 'prefix', '24') |
2102 | - if not dev_config.get('addresses'): |
2103 | - dev_config['addresses'] = [] |
2104 | - dev_config['addresses'].append( |
2105 | - '{ip}/{prefix}'.format( |
2106 | - ip=privateIpv4, prefix=netPrefix)) |
2107 | - else: |
2108 | - dev_config['dhcp4'] = True |
2109 | - for addr6 in intf['ipv6']['ipAddress']: |
2110 | - privateIpv6 = addr6['privateIpAddress'] |
2111 | - if privateIpv6: |
2112 | - dev_config['dhcp6'] = True |
2113 | - break |
2114 | - if dev_config: |
2115 | - mac = ':'.join(re.findall(r'..', intf['macAddress'])) |
2116 | - dev_config.update( |
2117 | - {'match': {'macaddress': mac.lower()}, |
2118 | - 'set-name': nicname}) |
2119 | - netconfig['ethernets'][nicname] = dev_config |
2120 | - else: |
2121 | - blacklist = ['mlx4_core'] |
2122 | - LOG.debug('Azure: generating fallback configuration') |
2123 | - # generate a network config, blacklist picking mlx4_core devs |
2124 | - netconfig = net.generate_fallback_config( |
2125 | - blacklist_drivers=blacklist, config_driver=True) |
2126 | - return netconfig |
2127 | + with events.ReportEventStack( |
2128 | + name="parse_network_config", |
2129 | + description="", |
2130 | + parent=azure_ds_reporter) as evt: |
2131 | + if imds_metadata != sources.UNSET and imds_metadata: |
2132 | + netconfig = {'version': 2, 'ethernets': {}} |
2133 | + LOG.debug('Azure: generating network configuration from IMDS') |
2134 | + network_metadata = imds_metadata['network'] |
2135 | + for idx, intf in enumerate(network_metadata['interface']): |
2136 | + nicname = 'eth{idx}'.format(idx=idx) |
2137 | + dev_config = {} |
2138 | + for addr4 in intf['ipv4']['ipAddress']: |
2139 | + privateIpv4 = addr4['privateIpAddress'] |
2140 | + if privateIpv4: |
2141 | + if dev_config.get('dhcp4', False): |
2142 | + # Append static address config for nic > 1 |
2143 | + netPrefix = intf['ipv4']['subnet'][0].get( |
2144 | + 'prefix', '24') |
2145 | + if not dev_config.get('addresses'): |
2146 | + dev_config['addresses'] = [] |
2147 | + dev_config['addresses'].append( |
2148 | + '{ip}/{prefix}'.format( |
2149 | + ip=privateIpv4, prefix=netPrefix)) |
2150 | + else: |
2151 | + dev_config['dhcp4'] = True |
2152 | + for addr6 in intf['ipv6']['ipAddress']: |
2153 | + privateIpv6 = addr6['privateIpAddress'] |
2154 | + if privateIpv6: |
2155 | + dev_config['dhcp6'] = True |
2156 | + break |
2157 | + if dev_config: |
2158 | + mac = ':'.join(re.findall(r'..', intf['macAddress'])) |
2159 | + dev_config.update( |
2160 | + {'match': {'macaddress': mac.lower()}, |
2161 | + 'set-name': nicname}) |
2162 | + netconfig['ethernets'][nicname] = dev_config |
2163 | + evt.description = "network config from imds" |
2164 | + else: |
2165 | + blacklist = ['mlx4_core'] |
2166 | + LOG.debug('Azure: generating fallback configuration') |
2167 | + # generate a network config, blacklist picking mlx4_core devs |
2168 | + netconfig = net.generate_fallback_config( |
2169 | + blacklist_drivers=blacklist, config_driver=True) |
2170 | + evt.description = "network config from fallback" |
2171 | + return netconfig |
2172 | |
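The translation wrapped in the event stack above can be sketched as a standalone function (assuming the same IMDS shape: a list of interfaces, each with `macAddress` and per-family `ipAddress` lists; `imds_to_netplan` and the sample metadata are hypothetical names for illustration):

```python
import re


def imds_to_netplan(network_metadata):
    """Minimal sketch of the IMDS -> netplan v2 translation."""
    ethernets = {}
    for idx, intf in enumerate(network_metadata['interface']):
        nicname = 'eth%d' % idx
        dev = {}
        for addr in intf['ipv4']['ipAddress']:
            if not addr.get('privateIpAddress'):
                continue
            if dev.get('dhcp4'):
                # Secondary addresses are rendered statically.
                prefix = intf['ipv4']['subnet'][0].get('prefix', '24')
                dev.setdefault('addresses', []).append(
                    '%s/%s' % (addr['privateIpAddress'], prefix))
            else:
                dev['dhcp4'] = True  # primary address uses DHCP
        if any(a.get('privateIpAddress')
               for a in intf['ipv6']['ipAddress']):
            dev['dhcp6'] = True
        if dev:
            # '000D3A047598' -> '00:0d:3a:04:75:98'
            mac = ':'.join(re.findall(r'..', intf['macAddress'])).lower()
            dev.update({'match': {'macaddress': mac},
                        'set-name': nicname})
            ethernets[nicname] = dev
    return {'version': 2, 'ethernets': ethernets}


md = {'interface': [{
    'macAddress': '000D3A047598',
    'ipv4': {'subnet': [{'prefix': '24'}],
             'ipAddress': [{'privateIpAddress': '10.0.0.4'},
                           {'privateIpAddress': '10.0.0.5'}]},
    'ipv6': {'ipAddress': []},
}]}
print(imds_to_netplan(md))
```

Note the design: only the first IPv4 address per NIC enables DHCP; every additional address is emitted as a static `addresses` entry, and the NIC is matched by lowercase MAC rather than by kernel name.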
2173 | |
2174 | +@azure_ds_telemetry_reporter |
2175 | def get_metadata_from_imds(fallback_nic, retries): |
2176 | """Query Azure's network metadata service, returning a dictionary. |
2177 | |
2178 | @@ -1213,14 +1289,15 @@ def get_metadata_from_imds(fallback_nic, retries): |
2179 | return util.log_time(**kwargs) |
2180 | |
2181 | |
2182 | +@azure_ds_telemetry_reporter |
2183 | def _get_metadata_from_imds(retries): |
2184 | |
2185 | url = IMDS_URL + "instance?api-version=2017-12-01" |
2186 | headers = {"Metadata": "true"} |
2187 | try: |
2188 | response = readurl( |
2189 | - url, timeout=1, headers=headers, retries=retries, |
2190 | - exception_cb=retry_on_url_exc) |
2191 | + url, timeout=IMDS_TIMEOUT_IN_SECONDS, headers=headers, |
2192 | + retries=retries, exception_cb=retry_on_url_exc) |
2193 | except Exception as e: |
2194 | LOG.debug('Ignoring IMDS instance metadata: %s', e) |
2195 | return {} |
2196 | @@ -1232,6 +1309,7 @@ def _get_metadata_from_imds(retries): |
2197 | return {} |
2198 | |
2199 | |
2200 | +@azure_ds_telemetry_reporter |
2201 | def maybe_remove_ubuntu_network_config_scripts(paths=None): |
2202 | """Remove Azure-specific ubuntu network config for non-primary nics. |
2203 | |
2204 | @@ -1269,14 +1347,20 @@ def maybe_remove_ubuntu_network_config_scripts(paths=None): |
2205 | |
2206 | |
2207 | def _is_platform_viable(seed_dir): |
2208 | - """Check platform environment to report if this datasource may run.""" |
2209 | - asset_tag = util.read_dmi_data('chassis-asset-tag') |
2210 | - if asset_tag == AZURE_CHASSIS_ASSET_TAG: |
2211 | - return True |
2212 | - LOG.debug("Non-Azure DMI asset tag '%s' discovered.", asset_tag) |
2213 | - if os.path.exists(os.path.join(seed_dir, 'ovf-env.xml')): |
2214 | - return True |
2215 | - return False |
2216 | + with events.ReportEventStack( |
2217 | + name="check-platform-viability", |
2218 | + description="found azure asset tag", |
2219 | + parent=azure_ds_reporter) as evt: |
2220 | + |
2221 | + """Check platform environment to report if this datasource may run.""" |
2222 | + asset_tag = util.read_dmi_data('chassis-asset-tag') |
2223 | + if asset_tag == AZURE_CHASSIS_ASSET_TAG: |
2224 | + return True |
2225 | + LOG.debug("Non-Azure DMI asset tag '%s' discovered.", asset_tag) |
2226 | + evt.description = "Non-Azure DMI asset tag '%s' discovered." % asset_tag |
2227 | + if os.path.exists(os.path.join(seed_dir, 'ovf-env.xml')): |
2228 | + return True |
2229 | + return False |
2230 | |
2231 | |
2232 | class BrokenAzureDataSource(Exception): |
2233 | diff --git a/cloudinit/sources/DataSourceCloudStack.py b/cloudinit/sources/DataSourceCloudStack.py |
2234 | index d4b758f..f185dc7 100644 |
2235 | --- a/cloudinit/sources/DataSourceCloudStack.py |
2236 | +++ b/cloudinit/sources/DataSourceCloudStack.py |
2237 | @@ -95,7 +95,7 @@ class DataSourceCloudStack(sources.DataSource): |
2238 | start_time = time.time() |
2239 | url = uhelp.wait_for_url( |
2240 | urls=urls, max_wait=url_params.max_wait_seconds, |
2241 | - timeout=url_params.timeout_seconds, status_cb=LOG.warn) |
2242 | + timeout=url_params.timeout_seconds, status_cb=LOG.warning) |
2243 | |
2244 | if url: |
2245 | LOG.debug("Using metadata source: '%s'", url) |
2246 | diff --git a/cloudinit/sources/DataSourceConfigDrive.py b/cloudinit/sources/DataSourceConfigDrive.py |
2247 | index 564e3eb..571d30d 100644 |
2248 | --- a/cloudinit/sources/DataSourceConfigDrive.py |
2249 | +++ b/cloudinit/sources/DataSourceConfigDrive.py |
2250 | @@ -72,15 +72,12 @@ class DataSourceConfigDrive(openstack.SourceMixin, sources.DataSource): |
2251 | dslist = self.sys_cfg.get('datasource_list') |
2252 | for dev in find_candidate_devs(dslist=dslist): |
2253 | try: |
2254 | - # Set mtype if freebsd and turn off sync |
2255 | - if dev.startswith("/dev/cd"): |
2256 | + if util.is_FreeBSD() and dev.startswith("/dev/cd"): |
2257 | mtype = "cd9660" |
2258 | - sync = False |
2259 | else: |
2260 | mtype = None |
2261 | - sync = True |
2262 | results = util.mount_cb(dev, read_config_drive, |
2263 | - mtype=mtype, sync=sync) |
2264 | + mtype=mtype) |
2265 | found = dev |
2266 | except openstack.NonReadable: |
2267 | pass |
2268 | diff --git a/cloudinit/sources/DataSourceEc2.py b/cloudinit/sources/DataSourceEc2.py |
2269 | index 4f2f6cc..5c017bf 100644 |
2270 | --- a/cloudinit/sources/DataSourceEc2.py |
2271 | +++ b/cloudinit/sources/DataSourceEc2.py |
2272 | @@ -208,7 +208,7 @@ class DataSourceEc2(sources.DataSource): |
2273 | start_time = time.time() |
2274 | url = uhelp.wait_for_url( |
2275 | urls=urls, max_wait=url_params.max_wait_seconds, |
2276 | - timeout=url_params.timeout_seconds, status_cb=LOG.warn) |
2277 | + timeout=url_params.timeout_seconds, status_cb=LOG.warning) |
2278 | |
2279 | if url: |
2280 | self.metadata_address = url2base[url] |
2281 | @@ -334,8 +334,12 @@ class DataSourceEc2(sources.DataSource): |
2282 | if isinstance(net_md, dict): |
2283 | result = convert_ec2_metadata_network_config( |
2284 | net_md, macs_to_nics=macs_to_nics, fallback_nic=iface) |
2285 | - # RELEASE_BLOCKER: Xenial debian/postinst needs to add |
2286 | - # EventType.BOOT on upgrade path for classic. |
2287 | + |
2288 | + # RELEASE_BLOCKER: xenial should drop the below if statement, |
2289 | + # because the issue being addressed doesn't exist pre-netplan. |
2290 | + # (This datasource doesn't implement check_instance_id() so the |
2291 | + # datasource object is recreated every boot; this means we don't |
2292 | + # need to modify update_events on cloud-init upgrade.) |
2293 | |
2294 | # Non-VPC (aka Classic) Ec2 instances need to rewrite the |
2295 | # network config file every boot due to MAC address change. |
2296 | diff --git a/cloudinit/sources/DataSourceNoCloud.py b/cloudinit/sources/DataSourceNoCloud.py |
2297 | index 6860f0c..fcf5d58 100644 |
2298 | --- a/cloudinit/sources/DataSourceNoCloud.py |
2299 | +++ b/cloudinit/sources/DataSourceNoCloud.py |
2300 | @@ -106,7 +106,9 @@ class DataSourceNoCloud(sources.DataSource): |
2301 | fslist = util.find_devs_with("TYPE=vfat") |
2302 | fslist.extend(util.find_devs_with("TYPE=iso9660")) |
2303 | |
2304 | - label_list = util.find_devs_with("LABEL=%s" % label) |
2305 | + label_list = util.find_devs_with("LABEL=%s" % label.upper()) |
2306 | + label_list.extend(util.find_devs_with("LABEL=%s" % label.lower())) |
2307 | + |
2308 | devlist = list(set(fslist) & set(label_list)) |
2309 | devlist.sort(reverse=True) |
2310 | |
2311 | diff --git a/cloudinit/sources/helpers/azure.py b/cloudinit/sources/helpers/azure.py |
2312 | old mode 100644 |
2313 | new mode 100755 |
2314 | index 2829dd2..82c4c8c |
2315 | --- a/cloudinit/sources/helpers/azure.py |
2316 | +++ b/cloudinit/sources/helpers/azure.py |
2317 | @@ -16,9 +16,29 @@ from xml.etree import ElementTree |
2318 | |
2319 | from cloudinit import url_helper |
2320 | from cloudinit import util |
2321 | +from cloudinit.reporting import events |
2322 | |
2323 | LOG = logging.getLogger(__name__) |
2324 | |
2325 | +# This endpoint matches the format as found in dhcp lease files, since this |
2326 | +# value is applied if the endpoint can't be found within a lease file |
2327 | +DEFAULT_WIRESERVER_ENDPOINT = "a8:3f:81:10" |
2328 | + |
2329 | +azure_ds_reporter = events.ReportEventStack( |
2330 | + name="azure-ds", |
2331 | + description="initialize reporter for azure ds", |
2332 | + reporting_enabled=True) |
2333 | + |
2334 | + |
2335 | +def azure_ds_telemetry_reporter(func): |
2336 | + def impl(*args, **kwargs): |
2337 | + with events.ReportEventStack( |
2338 | + name=func.__name__, |
2339 | + description=func.__name__, |
2340 | + parent=azure_ds_reporter): |
2341 | + return func(*args, **kwargs) |
2342 | + return impl |
2343 | + |
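The decorator defined above follows a standard wrap-and-report pattern. A self-contained sketch with the reporting backend abstracted away (here a plain callable `report`; the real module uses `cloudinit.reporting.events.ReportEventStack` as a context manager instead):

```python
import functools


def telemetry_reporter(report):
    """Report entry and exit of the wrapped function via 'report'."""
    def decorator(func):
        @functools.wraps(func)  # the original impl skips this, losing __name__
        def impl(*args, **kwargs):
            report(func.__name__, 'start')
            try:
                return func(*args, **kwargs)
            finally:
                # 'finally' mirrors the context-manager exit: the finish
                # event fires even if the wrapped function raises.
                report(func.__name__, 'finish')
        return impl
    return decorator


events = []


@telemetry_reporter(lambda name, phase: events.append((name, phase)))
def crawl_metadata():
    return 'md'


crawl_metadata()
print(events)  # [('crawl_metadata', 'start'), ('crawl_metadata', 'finish')]
```

One caveat of the diff's version: because it returns a bare `impl` without `functools.wraps`, stacking it under `@staticmethod` works, but introspection of the decorated function reports `impl` rather than the original name.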
2344 | |
2345 | @contextmanager |
2346 | def cd(newdir): |
2347 | @@ -119,6 +139,7 @@ class OpenSSLManager(object): |
2348 | def clean_up(self): |
2349 | util.del_dir(self.tmpdir) |
2350 | |
2351 | + @azure_ds_telemetry_reporter |
2352 | def generate_certificate(self): |
2353 | LOG.debug('Generating certificate for communication with fabric...') |
2354 | if self.certificate is not None: |
2355 | @@ -139,17 +160,20 @@ class OpenSSLManager(object): |
2356 | LOG.debug('New certificate generated.') |
2357 | |
2358 | @staticmethod |
2359 | + @azure_ds_telemetry_reporter |
2360 | def _run_x509_action(action, cert): |
2361 | cmd = ['openssl', 'x509', '-noout', action] |
2362 | result, _ = util.subp(cmd, data=cert) |
2363 | return result |
2364 | |
2365 | + @azure_ds_telemetry_reporter |
2366 | def _get_ssh_key_from_cert(self, certificate): |
2367 | pub_key = self._run_x509_action('-pubkey', certificate) |
2368 | keygen_cmd = ['ssh-keygen', '-i', '-m', 'PKCS8', '-f', '/dev/stdin'] |
2369 | ssh_key, _ = util.subp(keygen_cmd, data=pub_key) |
2370 | return ssh_key |
2371 | |
2372 | + @azure_ds_telemetry_reporter |
2373 | def _get_fingerprint_from_cert(self, certificate): |
2374 | """openssl x509 formats fingerprints as so: |
2375 | 'SHA1 Fingerprint=07:3E:19:D1:4D:1C:79:92:24:C6:A0:FD:8D:DA:\ |
2376 | @@ -163,6 +187,7 @@ class OpenSSLManager(object): |
2377 | octets = raw_fp[eq+1:-1].split(':') |
2378 | return ''.join(octets) |
2379 | |
2380 | + @azure_ds_telemetry_reporter |
2381 | def _decrypt_certs_from_xml(self, certificates_xml): |
2382 | """Decrypt the certificates XML document using our private key; |
2383 | return the list of certs and private keys contained in the doc. |
2384 | @@ -185,6 +210,7 @@ class OpenSSLManager(object): |
2385 | shell=True, data=b'\n'.join(lines)) |
2386 | return out |
2387 | |
2388 | + @azure_ds_telemetry_reporter |
2389 | def parse_certificates(self, certificates_xml): |
2390 | """Given the Certificates XML document, return a dictionary of |
2391 | fingerprints and associated SSH keys derived from the certs.""" |
2392 | @@ -265,14 +291,21 @@ class WALinuxAgentShim(object): |
2393 | return socket.inet_ntoa(packed_bytes) |
2394 | |
2395 | @staticmethod |
2396 | + @azure_ds_telemetry_reporter |
2397 | def _networkd_get_value_from_leases(leases_d=None): |
2398 | return dhcp.networkd_get_option_from_leases( |
2399 | 'OPTION_245', leases_d=leases_d) |
2400 | |
2401 | @staticmethod |
2402 | + @azure_ds_telemetry_reporter |
2403 | def _get_value_from_leases_file(fallback_lease_file): |
2404 | leases = [] |
2405 | - content = util.load_file(fallback_lease_file) |
2406 | + try: |
2407 | + content = util.load_file(fallback_lease_file) |
2408 | + except IOError as ex: |
2409 | + LOG.error("Failed to read %s: %s", fallback_lease_file, ex) |
2410 | + return None |
2411 | + |
2412 | LOG.debug("content is %s", content) |
2413 | option_name = _get_dhcp_endpoint_option_name() |
2414 | for line in content.splitlines(): |
2415 | @@ -287,6 +320,7 @@ class WALinuxAgentShim(object): |
2416 | return leases[-1] |
2417 | |
2418 | @staticmethod |
2419 | + @azure_ds_telemetry_reporter |
2420 | def _load_dhclient_json(): |
2421 | dhcp_options = {} |
2422 | hooks_dir = WALinuxAgentShim._get_hooks_dir() |
2423 | @@ -305,6 +339,7 @@ class WALinuxAgentShim(object): |
2424 | return dhcp_options |
2425 | |
2426 | @staticmethod |
2427 | + @azure_ds_telemetry_reporter |
2428 | def _get_value_from_dhcpoptions(dhcp_options): |
2429 | if dhcp_options is None: |
2430 | return None |
2431 | @@ -318,6 +353,7 @@ class WALinuxAgentShim(object): |
2432 | return _value |
2433 | |
2434 | @staticmethod |
2435 | + @azure_ds_telemetry_reporter |
2436 | def find_endpoint(fallback_lease_file=None, dhcp245=None): |
2437 | value = None |
2438 | if dhcp245 is not None: |
2439 | @@ -344,14 +380,15 @@ class WALinuxAgentShim(object): |
2440 | fallback_lease_file) |
2441 | value = WALinuxAgentShim._get_value_from_leases_file( |
2442 | fallback_lease_file) |
2443 | - |
2444 | if value is None: |
2445 | - raise ValueError('No endpoint found.') |
2446 | + LOG.warning("No lease found; using default endpoint") |
2447 | + value = DEFAULT_WIRESERVER_ENDPOINT |
2448 | |
2449 | endpoint_ip_address = WALinuxAgentShim.get_ip_from_lease_value(value) |
2450 | LOG.debug('Azure endpoint found at %s', endpoint_ip_address) |
2451 | return endpoint_ip_address |
2452 | |
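The fallback value `DEFAULT_WIRESERVER_ENDPOINT = "a8:3f:81:10"` introduced above is the colon-hex lease-file encoding of Azure's fixed wireserver address. A sketch of the decode step (the real `get_ip_from_lease_value` also handles the quoted-string form found in dhclient leases; this minimal version assumes the colon-hex form only):

```python
import socket


def lease_value_to_ip(value):
    """Decode a DHCP option-245 value of the form 'a8:3f:81:10'."""
    # Each colon-separated field is one hex octet of the IPv4 address.
    packed = bytes(int(octet, 16) for octet in value.split(':'))
    return socket.inet_ntoa(packed)


print(lease_value_to_ip("a8:3f:81:10"))  # 168.63.129.16
```

This is why the warning path above is safe: when no lease is found, the hard failure (`ValueError('No endpoint found.')`) is replaced by decoding this constant to 168.63.129.16, the well-known static wireserver endpoint.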
2453 | + @azure_ds_telemetry_reporter |
2454 | def register_with_azure_and_fetch_data(self, pubkey_info=None): |
2455 | if self.openssl_manager is None: |
2456 | self.openssl_manager = OpenSSLManager() |
2457 | @@ -404,6 +441,7 @@ class WALinuxAgentShim(object): |
2458 | |
2459 | return keys |
2460 | |
2461 | + @azure_ds_telemetry_reporter |
2462 | def _report_ready(self, goal_state, http_client): |
2463 | LOG.debug('Reporting ready to Azure fabric.') |
2464 | document = self.REPORT_READY_XML_TEMPLATE.format( |
2465 | @@ -419,6 +457,7 @@ class WALinuxAgentShim(object): |
2466 | LOG.info('Reported ready to Azure fabric.') |
2467 | |
2468 | |
2469 | +@azure_ds_telemetry_reporter |
2470 | def get_metadata_from_fabric(fallback_lease_file=None, dhcp_opts=None, |
2471 | pubkey_info=None): |
2472 | shim = WALinuxAgentShim(fallback_lease_file=fallback_lease_file, |
2473 | diff --git a/cloudinit/util.py b/cloudinit/util.py |
2474 | index a192091..ea4199c 100644 |
2475 | --- a/cloudinit/util.py |
2476 | +++ b/cloudinit/util.py |
2477 | @@ -703,6 +703,21 @@ def get_cfg_option_list(yobj, key, default=None): |
2478 | # get a cfg entry by its path array |
2479 | # for f['a']['b']: get_cfg_by_path(mycfg,('a','b')) |
2480 | def get_cfg_by_path(yobj, keyp, default=None): |
2481 | + """Return the value of the item at path C{keyp} in C{yobj}. |
2482 | + |
2483 | + example: |
2484 | + get_cfg_by_path({'a': {'b': {'num': 4}}}, 'a/b/num') == 4 |
2485 | + get_cfg_by_path({'a': {'b': {'num': 4}}}, 'c/d') == None |
2486 | + |
2487 | + @param yobj: A dictionary. |
2488 | + @param keyp: A path inside yobj. it can be a '/' delimited string, |
2489 | + or an iterable. |
2490 | + @param default: The default to return if the path does not exist. |
2491 | + @return: The value of the item at keyp, or C{default} if the path |
2492 | + is not found.""" |
2493 | + |
2494 | + if isinstance(keyp, six.string_types): |
2495 | + keyp = keyp.split("/") |
2496 | cur = yobj |
2497 | for tok in keyp: |
2498 | if tok not in cur: |
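The docstring's examples can be exercised with a self-contained version of the helper (this sketch assumes Python 3, so `str` stands in for the diff's `six.string_types`; it also guards against walking into a non-dict leaf):

```python
def get_cfg_by_path(yobj, keyp, default=None):
    """Return the value at path 'keyp' in dict 'yobj'.

    'keyp' may be a '/'-delimited string or any iterable of keys.
    """
    if isinstance(keyp, str):
        keyp = keyp.split('/')
    cur = yobj
    for tok in keyp:
        if not isinstance(cur, dict) or tok not in cur:
            return default
        cur = cur[tok]
    return cur


cfg = {'a': {'b': {'num': 4}}}
print(get_cfg_by_path(cfg, 'a/b/num'))      # 4
print(get_cfg_by_path(cfg, ('a', 'b')))     # {'num': 4}
print(get_cfg_by_path(cfg, 'c/d', 'none'))  # none
```

Accepting both string and iterable paths is what the docstring added in this hunk documents; the `isinstance` check at the top is the line that makes the '/'-string form work.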
2499 | @@ -1664,7 +1679,7 @@ def mounts(): |
2500 | return mounted |
2501 | |
2502 | |
2503 | -def mount_cb(device, callback, data=None, rw=False, mtype=None, sync=True, |
2504 | +def mount_cb(device, callback, data=None, mtype=None, |
2505 | update_env_for_mount=None): |
2506 | """ |
2507 | Mount the device, call method 'callback' passing the directory |
2508 | @@ -1711,18 +1726,7 @@ def mount_cb(device, callback, data=None, rw=False, mtype=None, sync=True, |
2509 | for mtype in mtypes: |
2510 | mountpoint = None |
2511 | try: |
2512 | - mountcmd = ['mount'] |
2513 | - mountopts = [] |
2514 | - if rw: |
2515 | - mountopts.append('rw') |
2516 | - else: |
2517 | - mountopts.append('ro') |
2518 | - if sync: |
2519 | - # This seems like the safe approach to do |
2520 | - # (ie where this is on by default) |
2521 | - mountopts.append("sync") |
2522 | - if mountopts: |
2523 | - mountcmd.extend(["-o", ",".join(mountopts)]) |
2524 | + mountcmd = ['mount', '-o', 'ro'] |
2525 | if mtype: |
2526 | mountcmd.extend(['-t', mtype]) |
2527 | mountcmd.append(device) |
2528 | diff --git a/cloudinit/version.py b/cloudinit/version.py |
2529 | index a2c5d43..ddcd436 100644 |
2530 | --- a/cloudinit/version.py |
2531 | +++ b/cloudinit/version.py |
2532 | @@ -4,7 +4,7 @@ |
2533 | # |
2534 | # This file is part of cloud-init. See LICENSE file for license information. |
2535 | |
2536 | -__VERSION__ = "18.5" |
2537 | +__VERSION__ = "19.1" |
2538 | _PACKAGED_VERSION = '@@PACKAGED_VERSION@@' |
2539 | |
2540 | FEATURES = [ |
2541 | diff --git a/config/cloud.cfg.tmpl b/config/cloud.cfg.tmpl |
2542 | index 7513176..25db43e 100644 |
2543 | --- a/config/cloud.cfg.tmpl |
2544 | +++ b/config/cloud.cfg.tmpl |
2545 | @@ -112,6 +112,9 @@ cloud_final_modules: |
2546 | - landscape |
2547 | - lxd |
2548 | {% endif %} |
2549 | +{% if variant in ["ubuntu", "unknown"] %} |
2550 | + - ubuntu-drivers |
2551 | +{% endif %} |
2552 | {% if variant not in ["freebsd"] %} |
2553 | - puppet |
2554 | - chef |
2555 | diff --git a/debian/changelog b/debian/changelog |
2556 | index e12179d..e22c09e 100644 |
2557 | --- a/debian/changelog |
2558 | +++ b/debian/changelog |
2559 | @@ -1,3 +1,51 @@ |
2560 | +cloud-init (19.1-1-gbaa47854-0ubuntu1~18.04.1) bionic; urgency=medium |
2561 | + |
2562 | + * debian/patches/ubuntu-advantage-revert-tip.patch |
2563 | + Revert ubuntu-advantage config module changes until ubuntu-advantage-tools |
2564 | + 19.1 publishes to Bionic (LP: #1828641) |
2565 | + * New upstream snapshot. (LP: #1828637) |
2566 | + - Azure: Return static fallback address as if failed to find endpoint |
2567 | + [Jason Zions (MSFT)] |
2568 | + - release 19.1 |
2569 | + - freebsd: add chpasswd pkg in the image [Gonéri Le Bouder] |
2570 | + - tests: add Eoan release [Paride Legovini] |
2571 | + - cc_mounts: check if mount -a on no-change fstab path [Jason Zions (MSFT)] |
2572 | + - replace remaining occurrences of LOG.warn |
2573 | + - DataSourceAzure: Adjust timeout for polling IMDS [Anh Vo] |
2574 | + - Azure: Changes to the Hyper-V KVP Reporter [Anh Vo] |
2575 | + - git tests: no longer show warning about safe yaml. [Scott Moser] |
2576 | + - tools/read-version: handle errors [Chad Miller] |
2577 | + - net/sysconfig: only indicate available on known sysconfig distros |
2578 | + - packages: update rpm specs for new bash completion path |
2579 | + - test_azure: mock util.SeLinuxGuard where needed [Jason Zions (MSFT)] |
2580 | + - setup.py: install bash completion script in new location |
2581 | + - mount_cb: do not pass sync and rw options to mount [Gonéri Le Bouder] |
2582 | + - cc_apt_configure: fix typo in apt documentation [Dominic Schlegel] |
2583 | + - Revert "DataSource: move update_events from a class to an instance..." |
2584 | + - Change DataSourceNoCloud to ignore file system label's case. |
2585 | + [Risto Oikarinen] |
2586 | + - cmd:main.py: Fix missing 'modules-init' key in modes dict |
2587 | + [Antonio Romito] |
2588 | + - ubuntu_advantage: rewrite cloud-config module |
2589 | + - Azure: Treat _unset network configuration as if it were absent |
2590 | + [Jason Zions (MSFT)] |
2591 | + - DatasourceAzure: add additional logging for azure datasource [Anh Vo] |
2592 | + - cloud_tests: fix apt_pipelining test-cases |
2593 | + - Azure: Ensure platform random_seed is always serializable as JSON. |
2594 | + [Jason Zions (MSFT)] |
2595 | + - net/sysconfig: write out SUSE-compatible IPv6 config [Robert Schweikert] |
2596 | + - tox: Update testenv for openSUSE Leap to 15.0 [Thomas Bechtold] |
2597 | + - net: Fix ipv6 static routes when using eni renderer [Raphael Glon] |
2598 | + - Add ubuntu_drivers config module |
2599 | + - doc: Refresh Azure walinuxagent docs |
2600 | + - tox: bump pylint version to latest (2.3.1) |
2601 | + - DataSource: move update_events from a class to an instance attribute |
2602 | + - net/sysconfig: Handle default route setup for dhcp configured NICs |
2603 | + [Robert Schweikert] |
2604 | + - DataSourceEc2: update RELEASE_BLOCKER to be more accurate |
2605 | + |
2606 | + -- Chad Smith <chad.smith@canonical.com> Fri, 10 May 2019 23:17:50 -0600 |
2607 | + |
2608 | cloud-init (18.5-45-g3554ffe8-0ubuntu1~18.04.1) bionic; urgency=medium |
2609 | |
2610 | * New upstream snapshot. (LP: #1819067) |
2611 | diff --git a/debian/patches/series b/debian/patches/series |
2612 | index 2ce72fb..72f0fe9 100644 |
2613 | --- a/debian/patches/series |
2614 | +++ b/debian/patches/series |
2615 | @@ -1 +1,2 @@ |
2616 | openstack-no-network-config.patch |
2617 | +ubuntu-advantage-revert-tip.patch |
2618 | diff --git a/debian/patches/ubuntu-advantage-revert-tip.patch b/debian/patches/ubuntu-advantage-revert-tip.patch |
2619 | new file mode 100644 |
2620 | index 0000000..58d7792 |
2621 | --- /dev/null |
2622 | +++ b/debian/patches/ubuntu-advantage-revert-tip.patch |
2623 | @@ -0,0 +1,735 @@ |
2624 | +Description: Revert upstream changes for ubuntu-advantage-tools v 19.1 |
2625 | + ubuntu-advantage-tools v. 19.1 or later is required for the new |
2626 | + cloud-config module because the two command lines are incompatible. |
2627 | + Bionic can drop this patch once ubuntu-advantage-tools has been SRU'd >= 19.1 |
2628 | +Author: Chad Smith <chad.smith@canonical.com> |
2629 | +Origin: backport |
2630 | +Bug: https://bugs.launchpad.net/cloud-init/+bug/1828641 |
2631 | +Forwarded: not-needed |
2632 | +Last-Update: 2019-05-10 |
2633 | +--- |
2634 | +This patch header follows DEP-3: http://dep.debian.net/deps/dep3/ |
2635 | +Index: cloud-init/cloudinit/config/cc_ubuntu_advantage.py |
2636 | +=================================================================== |
2637 | +--- cloud-init.orig/cloudinit/config/cc_ubuntu_advantage.py |
2638 | ++++ cloud-init/cloudinit/config/cc_ubuntu_advantage.py |
2639 | +@@ -1,143 +1,150 @@ |
2640 | ++# Copyright (C) 2018 Canonical Ltd. |
2641 | ++# |
2642 | + # This file is part of cloud-init. See LICENSE file for license information. |
2643 | + |
2644 | +-"""ubuntu_advantage: Configure Ubuntu Advantage support services""" |
2645 | ++"""Ubuntu advantage: manage ubuntu-advantage offerings from Canonical.""" |
2646 | + |
2647 | ++import sys |
2648 | + from textwrap import dedent |
2649 | + |
2650 | +-import six |
2651 | +- |
2652 | ++from cloudinit import log as logging |
2653 | + from cloudinit.config.schema import ( |
2654 | + get_schema_doc, validate_cloudconfig_schema) |
2655 | +-from cloudinit import log as logging |
2656 | + from cloudinit.settings import PER_INSTANCE |
2657 | ++from cloudinit.subp import prepend_base_command |
2658 | + from cloudinit import util |
2659 | + |
2660 | + |
2661 | +-UA_URL = 'https://ubuntu.com/advantage' |
2662 | +- |
2663 | + distros = ['ubuntu'] |
2664 | ++frequency = PER_INSTANCE |
2665 | ++ |
2666 | ++LOG = logging.getLogger(__name__) |
2667 | + |
2668 | + schema = { |
2669 | + 'id': 'cc_ubuntu_advantage', |
2670 | + 'name': 'Ubuntu Advantage', |
2671 | +- 'title': 'Configure Ubuntu Advantage support services', |
2672 | ++ 'title': 'Install, configure and manage ubuntu-advantage offerings', |
2673 | + 'description': dedent("""\ |
2674 | +- Attach machine to an existing Ubuntu Advantage support contract and |
2675 | +- enable or disable support services such as Livepatch, ESM, |
2676 | +- FIPS and FIPS Updates. When attaching a machine to Ubuntu Advantage, |
2677 | +- one can also specify services to enable. When the 'enable' |
2678 | +- list is present, any named service will be enabled and all absent |
2679 | +- services will remain disabled. |
2680 | +- |
2681 | +- Note that when enabling FIPS or FIPS updates you will need to schedule |
2682 | +- a reboot to ensure the machine is running the FIPS-compliant kernel. |
2683 | +- See :ref:`Power State Change` for information on how to configure |
2684 | +- cloud-init to perform this reboot. |
2685 | ++ This module provides configuration options to setup ubuntu-advantage |
2686 | ++ subscriptions. |
2687 | ++ |
2688 | ++ .. note:: |
2689 | ++ Both ``commands`` value can be either a dictionary or a list. If |
2690 | ++ the configuration provided is a dictionary, the keys are only used |
2691 | ++ to order the execution of the commands and the dictionary is |
2692 | ++ merged with any vendor-data ubuntu-advantage configuration |
2693 | ++ provided. If a ``commands`` is provided as a list, any vendor-data |
2694 | ++ ubuntu-advantage ``commands`` are ignored. |
2695 | ++ |
2696 | ++ Ubuntu-advantage ``commands`` is a dictionary or list of |
2697 | ++ ubuntu-advantage commands to run on the deployed machine. |
2698 | ++ These commands can be used to enable or disable subscriptions to |
2699 | ++ various ubuntu-advantage products. See 'man ubuntu-advantage' for more |
2700 | ++ information on supported subcommands. |
2701 | ++ |
2702 | ++ .. note:: |
2703 | ++ Each command item can be a string or list. If the item is a list, |
2704 | ++ 'ubuntu-advantage' can be omitted and it will automatically be |
2705 | ++ inserted as part of the command. |
2706 | + """), |
2707 | + 'distros': distros, |
2708 | + 'examples': [dedent("""\ |
2709 | +- # Attach the machine to a Ubuntu Advantage support contract with a |
2710 | +- # UA contract token obtained from %s. |
2711 | +- ubuntu_advantage: |
2712 | +- token: <ua_contract_token> |
2713 | +- """ % UA_URL), dedent("""\ |
2714 | +- # Attach the machine to an Ubuntu Advantage support contract enabling |
2715 | +- # only fips and esm services. Services will only be enabled if |
2716 | +- # the environment supports said service. Otherwise warnings will |
2717 | +- # be logged for incompatible services specified. |
2718 | ++ # Enable Extended Security Maintenance using your service auth token |
2719 | ++ ubuntu-advantage: |
2720 | ++ commands: |
2721 | ++ 00: ubuntu-advantage enable-esm <token> |
2722 | ++ """), dedent("""\ |
2723 | ++ # Enable livepatch by providing your livepatch token |
2724 | + ubuntu-advantage: |
2725 | +- token: <ua_contract_token> |
2726 | +- enable: |
2727 | +- - fips |
2728 | +- - esm |
2729 | ++ commands: |
2730 | ++ 00: ubuntu-advantage enable-livepatch <livepatch-token> |
2731 | ++ |
2732 | + """), dedent("""\ |
2733 | +- # Attach the machine to an Ubuntu Advantage support contract and enable |
2734 | +- # the FIPS service. Perform a reboot once cloud-init has |
2735 | +- # completed. |
2736 | +- power_state: |
2737 | +- mode: reboot |
2738 | ++ # Convenience: the ubuntu-advantage command can be omitted when |
2739 | ++ # specifying commands as a list and 'ubuntu-advantage' will |
2740 | ++ # automatically be prepended. |
2741 | ++ # The following commands are equivalent |
2742 | + ubuntu-advantage: |
2743 | +- token: <ua_contract_token> |
2744 | +- enable: |
2745 | +- - fips |
2746 | +- """)], |
2747 | ++ commands: |
2748 | ++ 00: ['enable-livepatch', 'my-token'] |
2749 | ++ 01: ['ubuntu-advantage', 'enable-livepatch', 'my-token'] |
2750 | ++ 02: ubuntu-advantage enable-livepatch my-token |
2751 | ++ 03: 'ubuntu-advantage enable-livepatch my-token' |
2752 | ++ """)], |
2753 | + 'frequency': PER_INSTANCE, |
2754 | + 'type': 'object', |
2755 | + 'properties': { |
2756 | +- 'ubuntu_advantage': { |
2757 | ++ 'ubuntu-advantage': { |
2758 | + 'type': 'object', |
2759 | + 'properties': { |
2760 | +- 'enable': { |
2761 | +- 'type': 'array', |
2762 | +- 'items': {'type': 'string'}, |
2763 | +- }, |
2764 | +- 'token': { |
2765 | +- 'type': 'string', |
2766 | +- 'description': ( |
2767 | +- 'A contract token obtained from %s.' % UA_URL) |
2768 | ++ 'commands': { |
2769 | ++ 'type': ['object', 'array'], # Array of strings or dict |
2770 | ++ 'items': { |
2771 | ++ 'oneOf': [ |
2772 | ++ {'type': 'array', 'items': {'type': 'string'}}, |
2773 | ++ {'type': 'string'}] |
2774 | ++ }, |
2775 | ++ 'additionalItems': False, # Reject non-string & non-list |
2776 | ++ 'minItems': 1, |
2777 | ++ 'minProperties': 1, |
2778 | + } |
2779 | + }, |
2780 | +- 'required': ['token'], |
2781 | +- 'additionalProperties': False |
2782 | ++ 'additionalProperties': False, # Reject keys not in schema |
2783 | ++ 'required': ['commands'] |
2784 | + } |
2785 | + } |
2786 | + } |
2787 | + |
2788 | ++# TODO schema for 'assertions' and 'commands' are too permissive at the moment. |
2789 | ++# Once python-jsonschema supports schema draft 6 add support for arbitrary |
2790 | ++# object keys with 'patternProperties' constraint to validate string values. |
2791 | ++ |
2792 | + __doc__ = get_schema_doc(schema) # Supplement python help() |
2793 | + |
2794 | +-LOG = logging.getLogger(__name__) |
2795 | ++UA_CMD = "ubuntu-advantage" |
2796 | + |
2797 | + |
2798 | +-def configure_ua(token=None, enable=None): |
2799 | +- """Call ua commandline client to attach or enable services.""" |
2800 | +- error = None |
2801 | +- if not token: |
2802 | +- error = ('ubuntu_advantage: token must be provided') |
2803 | +- LOG.error(error) |
2804 | +- raise RuntimeError(error) |
2805 | +- |
2806 | +- if enable is None: |
2807 | +- enable = [] |
2808 | +- elif isinstance(enable, six.string_types): |
2809 | +- LOG.warning('ubuntu_advantage: enable should be a list, not' |
2810 | +- ' a string; treating as a single enable') |
2811 | +- enable = [enable] |
2812 | +- elif not isinstance(enable, list): |
2813 | +- LOG.warning('ubuntu_advantage: enable should be a list, not' |
2814 | +- ' a %s; skipping enabling services', |
2815 | +- type(enable).__name__) |
2816 | +- enable = [] |
2817 | ++def run_commands(commands): |
2818 | ++ """Run the commands provided in ubuntu-advantage:commands config. |
2819 | + |
2820 | +- attach_cmd = ['ua', 'attach', token] |
2821 | +- LOG.debug('Attaching to Ubuntu Advantage. %s', ' '.join(attach_cmd)) |
2822 | +- try: |
2823 | +- util.subp(attach_cmd) |
2824 | +- except util.ProcessExecutionError as e: |
2825 | +- msg = 'Failure attaching Ubuntu Advantage:\n{error}'.format( |
2826 | +- error=str(e)) |
2827 | +- util.logexc(LOG, msg) |
2828 | +- raise RuntimeError(msg) |
2829 | +- enable_errors = [] |
2830 | +- for service in enable: |
2831 | ++ Commands are run individually. Any errors are collected and reported |
2832 | ++ after attempting all commands. |
2833 | ++ |
2834 | ++ @param commands: A list or dict containing commands to run. Keys of a |
2835 | ++ dict will be used to order the commands provided as dict values. |
2836 | ++ """ |
2837 | ++ if not commands: |
2838 | ++ return |
2839 | ++ LOG.debug('Running user-provided ubuntu-advantage commands') |
2840 | ++ if isinstance(commands, dict): |
2841 | ++ # Sort commands based on dictionary key |
2842 | ++ commands = [v for _, v in sorted(commands.items())] |
2843 | ++ elif not isinstance(commands, list): |
2844 | ++ raise TypeError( |
2845 | ++ 'commands parameter was not a list or dict: {commands}'.format( |
2846 | ++ commands=commands)) |
2847 | ++ |
2848 | ++ fixed_ua_commands = prepend_base_command('ubuntu-advantage', commands) |
2849 | ++ |
2850 | ++ cmd_failures = [] |
2851 | ++ for command in fixed_ua_commands: |
2852 | ++ shell = isinstance(command, str) |
2853 | + try: |
2854 | +- cmd = ['ua', 'enable', service] |
2855 | +- util.subp(cmd, capture=True) |
2856 | ++ util.subp(command, shell=shell, status_cb=sys.stderr.write) |
2857 | + except util.ProcessExecutionError as e: |
2858 | +- enable_errors.append((service, e)) |
2859 | +- if enable_errors: |
2860 | +- for service, error in enable_errors: |
2861 | +- msg = 'Failure enabling "{service}":\n{error}'.format( |
2862 | +- service=service, error=str(error)) |
2863 | +- util.logexc(LOG, msg) |
2864 | +- raise RuntimeError( |
2865 | +- 'Failure enabling Ubuntu Advantage service(s): {}'.format( |
2866 | +- ', '.join('"{}"'.format(service) |
2867 | +- for service, _ in enable_errors))) |
2868 | ++ cmd_failures.append(str(e)) |
2869 | ++ if cmd_failures: |
2870 | ++ msg = ( |
2871 | ++ 'Failures running ubuntu-advantage commands:\n' |
2872 | ++ '{cmd_failures}'.format( |
2873 | ++ cmd_failures=cmd_failures)) |
2874 | ++ util.logexc(LOG, msg) |
2875 | ++ raise RuntimeError(msg) |
2876 | + |
2877 | + |
2878 | + def maybe_install_ua_tools(cloud): |
2879 | + """Install ubuntu-advantage-tools if not present.""" |
2880 | +- if util.which('ua'): |
2881 | ++ if util.which('ubuntu-advantage'): |
2882 | + return |
2883 | + try: |
2884 | + cloud.distro.update_package_sources() |
2885 | +@@ -152,28 +159,14 @@ def maybe_install_ua_tools(cloud): |
2886 | + |
2887 | + |
2888 | + def handle(name, cfg, cloud, log, args): |
2889 | +- ua_section = None |
2890 | +- if 'ubuntu-advantage' in cfg: |
2891 | +- LOG.warning('Deprecated configuration key "ubuntu-advantage" provided.' |
2892 | +- ' Expected underscore delimited "ubuntu_advantage"; will' |
2893 | +- ' attempt to continue.') |
2894 | +- ua_section = cfg['ubuntu-advantage'] |
2895 | +- if 'ubuntu_advantage' in cfg: |
2896 | +- ua_section = cfg['ubuntu_advantage'] |
2897 | +- if ua_section is None: |
2898 | +- LOG.debug("Skipping module named %s," |
2899 | +- " no 'ubuntu_advantage' configuration found", name) |
2900 | ++ cfgin = cfg.get('ubuntu-advantage') |
2901 | ++ if cfgin is None: |
2902 | ++ LOG.debug(("Skipping module named %s," |
2903 | ++ " no 'ubuntu-advantage' key in configuration"), name) |
2904 | + return |
2905 | +- validate_cloudconfig_schema(cfg, schema) |
2906 | +- if 'commands' in ua_section: |
2907 | +- msg = ( |
2908 | +- 'Deprecated configuration "ubuntu-advantage: commands" provided.' |
2909 | +- ' Expected "token"') |
2910 | +- LOG.error(msg) |
2911 | +- raise RuntimeError(msg) |
2912 | + |
2913 | ++ validate_cloudconfig_schema(cfg, schema) |
2914 | + maybe_install_ua_tools(cloud) |
2915 | +- configure_ua(token=ua_section.get('token'), |
2916 | +- enable=ua_section.get('enable')) |
2917 | ++ run_commands(cfgin.get('commands', [])) |
2918 | + |
2919 | + # vi: ts=4 expandtab |
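The `run_commands` flow in this module (dict keys sorted to determine order, each command then run individually, failures collected rather than aborting early) can be sketched in isolation. The helper names below are illustrative only, not cloud-init's actual API:

```python
import subprocess


def normalize_commands(commands):
    """Order a dict of commands by key, or pass a list through unchanged.

    Mirrors the behaviour described in run_commands: dict keys such as
    '00', '01' only determine ordering; the values are the commands.
    """
    if isinstance(commands, dict):
        return [value for _, value in sorted(commands.items())]
    if isinstance(commands, list):
        return list(commands)
    raise TypeError(
        'commands parameter was not a list or dict: {}'.format(commands))


def run_all(commands):
    """Run each command, collecting failures instead of stopping early."""
    failures = []
    for command in normalize_commands(commands):
        # A plain string is run through the shell; a list is exec'd directly.
        shell = isinstance(command, str)
        result = subprocess.run(command, shell=shell)
        if result.returncode != 0:
            failures.append(command)
    if failures:
        raise RuntimeError('Failures running commands: {}'.format(failures))
```

With `{'01': 'echo first', '00': 'echo zeroth'}`, the `'00'` command runs before the `'01'` command, matching the dict-sorting test in this branch.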
2920 | +Index: cloud-init/cloudinit/config/tests/test_ubuntu_advantage.py |
2921 | +=================================================================== |
2922 | +--- cloud-init.orig/cloudinit/config/tests/test_ubuntu_advantage.py |
2923 | ++++ cloud-init/cloudinit/config/tests/test_ubuntu_advantage.py |
2924 | +@@ -1,7 +1,10 @@ |
2925 | + # This file is part of cloud-init. See LICENSE file for license information. |
2926 | + |
2927 | ++import re |
2928 | ++from six import StringIO |
2929 | ++ |
2930 | + from cloudinit.config.cc_ubuntu_advantage import ( |
2931 | +- configure_ua, handle, maybe_install_ua_tools, schema) |
2932 | ++ handle, maybe_install_ua_tools, run_commands, schema) |
2933 | + from cloudinit.config.schema import validate_cloudconfig_schema |
2934 | + from cloudinit import util |
2935 | + from cloudinit.tests.helpers import ( |
2936 | +@@ -17,120 +20,90 @@ class FakeCloud(object): |
2937 | + self.distro = distro |
2938 | + |
2939 | + |
2940 | +-class TestConfigureUA(CiTestCase): |
2941 | ++class TestRunCommands(CiTestCase): |
2942 | + |
2943 | + with_logs = True |
2944 | + allowed_subp = [CiTestCase.SUBP_SHELL_TRUE] |
2945 | + |
2946 | + def setUp(self): |
2947 | +- super(TestConfigureUA, self).setUp() |
2948 | ++ super(TestRunCommands, self).setUp() |
2949 | + self.tmp = self.tmp_dir() |
2950 | + |
2951 | + @mock.patch('%s.util.subp' % MPATH) |
2952 | +- def test_configure_ua_attach_error(self, m_subp): |
2953 | +- """Errors from ua attach command are raised.""" |
2954 | +- m_subp.side_effect = util.ProcessExecutionError( |
2955 | +- 'Invalid token SomeToken') |
2956 | +- with self.assertRaises(RuntimeError) as context_manager: |
2957 | +- configure_ua(token='SomeToken') |
2958 | ++ def test_run_commands_on_empty_list(self, m_subp): |
2959 | ++ """When provided with an empty list, run_commands does nothing.""" |
2960 | ++ run_commands([]) |
2961 | ++ self.assertEqual('', self.logs.getvalue()) |
2962 | ++ m_subp.assert_not_called() |
2963 | ++ |
2964 | ++ def test_run_commands_on_non_list_or_dict(self): |
2965 | ++ """When provided an invalid type, run_commands raises an error.""" |
2966 | ++ with self.assertRaises(TypeError) as context_manager: |
2967 | ++ run_commands(commands="I'm Not Valid") |
2968 | + self.assertEqual( |
2969 | +- 'Failure attaching Ubuntu Advantage:\nUnexpected error while' |
2970 | +- ' running command.\nCommand: -\nExit code: -\nReason: -\n' |
2971 | +- 'Stdout: Invalid token SomeToken\nStderr: -', |
2972 | ++ "commands parameter was not a list or dict: I'm Not Valid", |
2973 | + str(context_manager.exception)) |
2974 | + |
2975 | +- @mock.patch('%s.util.subp' % MPATH) |
2976 | +- def test_configure_ua_attach_with_token(self, m_subp): |
2977 | +- """When token is provided, attach the machine to ua using the token.""" |
2978 | +- configure_ua(token='SomeToken') |
2979 | +- m_subp.assert_called_once_with(['ua', 'attach', 'SomeToken']) |
2980 | +- self.assertEqual( |
2981 | +- 'DEBUG: Attaching to Ubuntu Advantage. ua attach SomeToken\n', |
2982 | +- self.logs.getvalue()) |
2983 | +- |
2984 | +- @mock.patch('%s.util.subp' % MPATH) |
2985 | +- def test_configure_ua_attach_on_service_error(self, m_subp): |
2986 | +- """all services should be enabled and then any failures raised""" |
2987 | +- |
2988 | +- def fake_subp(cmd, capture=None): |
2989 | +- fail_cmds = [['ua', 'enable', svc] for svc in ['esm', 'cc']] |
2990 | +- if cmd in fail_cmds and capture: |
2991 | +- svc = cmd[-1] |
2992 | +- raise util.ProcessExecutionError( |
2993 | +- 'Invalid {} credentials'.format(svc.upper())) |
2994 | ++ def test_run_command_logs_commands_and_exit_codes_to_stderr(self): |
2995 | ++ """All exit codes are logged to stderr.""" |
2996 | ++ outfile = self.tmp_path('output.log', dir=self.tmp) |
2997 | ++ |
2998 | ++ cmd1 = 'echo "HI" >> %s' % outfile |
2999 | ++ cmd2 = 'bogus command' |
3000 | ++ cmd3 = 'echo "MOM" >> %s' % outfile |
3001 | ++ commands = [cmd1, cmd2, cmd3] |
3002 | ++ |
3003 | ++ mock_path = '%s.sys.stderr' % MPATH |
3004 | ++ with mock.patch(mock_path, new_callable=StringIO) as m_stderr: |
3005 | ++ with self.assertRaises(RuntimeError) as context_manager: |
3006 | ++ run_commands(commands=commands) |
3007 | ++ |
3008 | ++ self.assertIsNotNone( |
3009 | ++ re.search(r'bogus: (command )?not found', |
3010 | ++ str(context_manager.exception)), |
3011 | ++ msg='Expected bogus command not found') |
3012 | ++ expected_stderr_log = '\n'.join([ |
3013 | ++ 'Begin run command: {cmd}'.format(cmd=cmd1), |
3014 | ++ 'End run command: exit(0)', |
3015 | ++ 'Begin run command: {cmd}'.format(cmd=cmd2), |
3016 | ++ 'ERROR: End run command: exit(127)', |
3017 | ++ 'Begin run command: {cmd}'.format(cmd=cmd3), |
3018 | ++ 'End run command: exit(0)\n']) |
3019 | ++ self.assertEqual(expected_stderr_log, m_stderr.getvalue()) |
3020 | ++ |
3021 | ++ def test_run_command_as_lists(self): |
3022 | ++ """When commands are specified as a list, run them in order.""" |
3023 | ++ outfile = self.tmp_path('output.log', dir=self.tmp) |
3024 | ++ |
3025 | ++ cmd1 = 'echo "HI" >> %s' % outfile |
3026 | ++ cmd2 = 'echo "MOM" >> %s' % outfile |
3027 | ++ commands = [cmd1, cmd2] |
3028 | ++ with mock.patch('%s.sys.stderr' % MPATH, new_callable=StringIO): |
3029 | ++ run_commands(commands=commands) |
3030 | + |
3031 | +- m_subp.side_effect = fake_subp |
3032 | +- |
3033 | +- with self.assertRaises(RuntimeError) as context_manager: |
3034 | +- configure_ua(token='SomeToken', enable=['esm', 'cc', 'fips']) |
3035 | +- self.assertEqual( |
3036 | +- m_subp.call_args_list, |
3037 | +- [mock.call(['ua', 'attach', 'SomeToken']), |
3038 | +- mock.call(['ua', 'enable', 'esm'], capture=True), |
3039 | +- mock.call(['ua', 'enable', 'cc'], capture=True), |
3040 | +- mock.call(['ua', 'enable', 'fips'], capture=True)]) |
3041 | + self.assertIn( |
3042 | +- 'WARNING: Failure enabling "esm":\nUnexpected error' |
3043 | +- ' while running command.\nCommand: -\nExit code: -\nReason: -\n' |
3044 | +- 'Stdout: Invalid ESM credentials\nStderr: -\n', |
3045 | ++ 'DEBUG: Running user-provided ubuntu-advantage commands', |
3046 | + self.logs.getvalue()) |
3047 | ++ self.assertEqual('HI\nMOM\n', util.load_file(outfile)) |
3048 | + self.assertIn( |
3049 | +- 'WARNING: Failure enabling "cc":\nUnexpected error' |
3050 | +- ' while running command.\nCommand: -\nExit code: -\nReason: -\n' |
3051 | +- 'Stdout: Invalid CC credentials\nStderr: -\n', |
3052 | +- self.logs.getvalue()) |
3053 | +- self.assertEqual( |
3054 | +- 'Failure enabling Ubuntu Advantage service(s): "esm", "cc"', |
3055 | +- str(context_manager.exception)) |
3056 | +- |
3057 | +- @mock.patch('%s.util.subp' % MPATH) |
3058 | +- def test_configure_ua_attach_with_empty_services(self, m_subp): |
3059 | +- """When services is an empty list, do not auto-enable attach.""" |
3060 | +- configure_ua(token='SomeToken', enable=[]) |
3061 | +- m_subp.assert_called_once_with(['ua', 'attach', 'SomeToken']) |
3062 | +- self.assertEqual( |
3063 | +- 'DEBUG: Attaching to Ubuntu Advantage. ua attach SomeToken\n', |
3064 | +- self.logs.getvalue()) |
3065 | +- |
3066 | +- @mock.patch('%s.util.subp' % MPATH) |
3067 | +- def test_configure_ua_attach_with_specific_services(self, m_subp): |
3068 | +- """When services a list, only enable specific services.""" |
3069 | +- configure_ua(token='SomeToken', enable=['fips']) |
3070 | +- self.assertEqual( |
3071 | +- m_subp.call_args_list, |
3072 | +- [mock.call(['ua', 'attach', 'SomeToken']), |
3073 | +- mock.call(['ua', 'enable', 'fips'], capture=True)]) |
3074 | +- self.assertEqual( |
3075 | +- 'DEBUG: Attaching to Ubuntu Advantage. ua attach SomeToken\n', |
3076 | +- self.logs.getvalue()) |
3077 | +- |
3078 | +- @mock.patch('%s.maybe_install_ua_tools' % MPATH, mock.MagicMock()) |
3079 | +- @mock.patch('%s.util.subp' % MPATH) |
3080 | +- def test_configure_ua_attach_with_string_services(self, m_subp): |
3081 | +- """When services a string, treat as singleton list and warn""" |
3082 | +- configure_ua(token='SomeToken', enable='fips') |
3083 | +- self.assertEqual( |
3084 | +- m_subp.call_args_list, |
3085 | +- [mock.call(['ua', 'attach', 'SomeToken']), |
3086 | +- mock.call(['ua', 'enable', 'fips'], capture=True)]) |
3087 | +- self.assertEqual( |
3088 | +- 'WARNING: ubuntu_advantage: enable should be a list, not a' |
3089 | +- ' string; treating as a single enable\n' |
3090 | +- 'DEBUG: Attaching to Ubuntu Advantage. ua attach SomeToken\n', |
3091 | ++ 'WARNING: Non-ubuntu-advantage commands in ubuntu-advantage' |
3092 | ++ ' config:', |
3093 | + self.logs.getvalue()) |
3094 | + |
3095 | +- @mock.patch('%s.util.subp' % MPATH) |
3096 | +- def test_configure_ua_attach_with_weird_services(self, m_subp): |
3097 | +- """When services not string or list, warn but still attach""" |
3098 | +- configure_ua(token='SomeToken', enable={'deffo': 'wont work'}) |
3099 | +- self.assertEqual( |
3100 | +- m_subp.call_args_list, |
3101 | +- [mock.call(['ua', 'attach', 'SomeToken'])]) |
3102 | +- self.assertEqual( |
3103 | +- 'WARNING: ubuntu_advantage: enable should be a list, not a' |
3104 | +- ' dict; skipping enabling services\n' |
3105 | +- 'DEBUG: Attaching to Ubuntu Advantage. ua attach SomeToken\n', |
3106 | +- self.logs.getvalue()) |
3107 | ++ def test_run_command_dict_sorted_as_command_script(self): |
3108 | ++ """When commands are a dict, sort them and run.""" |
3109 | ++ outfile = self.tmp_path('output.log', dir=self.tmp) |
3110 | ++ cmd1 = 'echo "HI" >> %s' % outfile |
3111 | ++ cmd2 = 'echo "MOM" >> %s' % outfile |
3112 | ++ commands = {'02': cmd1, '01': cmd2} |
3113 | ++ with mock.patch('%s.sys.stderr' % MPATH, new_callable=StringIO): |
3114 | ++ run_commands(commands=commands) |
3115 | ++ |
3116 | ++ expected_messages = [ |
3117 | ++ 'DEBUG: Running user-provided ubuntu-advantage commands'] |
3118 | ++ for message in expected_messages: |
3119 | ++ self.assertIn(message, self.logs.getvalue()) |
3120 | ++ self.assertEqual('MOM\nHI\n', util.load_file(outfile)) |
3121 | + |
3122 | + |
3123 | + @skipUnlessJsonSchema() |
3124 | +@@ -139,50 +112,90 @@ class TestSchema(CiTestCase, SchemaTestC |
3125 | + with_logs = True |
3126 | + schema = schema |
3127 | + |
3128 | +- @mock.patch('%s.maybe_install_ua_tools' % MPATH) |
3129 | +- @mock.patch('%s.configure_ua' % MPATH) |
3130 | +- def test_schema_warns_on_ubuntu_advantage_not_dict(self, _cfg, _): |
3131 | +- """If ubuntu_advantage configuration is not a dict, emit a warning.""" |
3132 | +- validate_cloudconfig_schema({'ubuntu_advantage': 'wrong type'}, schema) |
3133 | ++ def test_schema_warns_on_ubuntu_advantage_not_as_dict(self): |
3134 | ++ """If ubuntu-advantage configuration is not a dict, emit a warning.""" |
3135 | ++ validate_cloudconfig_schema({'ubuntu-advantage': 'wrong type'}, schema) |
3136 | + self.assertEqual( |
3137 | +- "WARNING: Invalid config:\nubuntu_advantage: 'wrong type' is not" |
3138 | ++ "WARNING: Invalid config:\nubuntu-advantage: 'wrong type' is not" |
3139 | + " of type 'object'\n", |
3140 | + self.logs.getvalue()) |
3141 | + |
3142 | +- @mock.patch('%s.maybe_install_ua_tools' % MPATH) |
3143 | +- @mock.patch('%s.configure_ua' % MPATH) |
3144 | +- def test_schema_disallows_unknown_keys(self, _cfg, _): |
3145 | +- """Unknown keys in ubuntu_advantage configuration emit warnings.""" |
3146 | ++ @mock.patch('%s.run_commands' % MPATH) |
3147 | ++ def test_schema_disallows_unknown_keys(self, _): |
3148 | ++ """Unknown keys in ubuntu-advantage configuration emit warnings.""" |
3149 | + validate_cloudconfig_schema( |
3150 | +- {'ubuntu_advantage': {'token': 'winner', 'invalid-key': ''}}, |
3151 | ++ {'ubuntu-advantage': {'commands': ['ls'], 'invalid-key': ''}}, |
3152 | + schema) |
3153 | + self.assertIn( |
3154 | +- 'WARNING: Invalid config:\nubuntu_advantage: Additional properties' |
3155 | ++ 'WARNING: Invalid config:\nubuntu-advantage: Additional properties' |
3156 | + " are not allowed ('invalid-key' was unexpected)", |
3157 | + self.logs.getvalue()) |
3158 | + |
3159 | +- @mock.patch('%s.maybe_install_ua_tools' % MPATH) |
3160 | +- @mock.patch('%s.configure_ua' % MPATH) |
3161 | +- def test_warn_schema_requires_token(self, _cfg, _): |
3162 | +- """Warn if ubuntu_advantage configuration lacks token.""" |
3163 | ++ def test_warn_schema_requires_commands(self): |
3164 | ++ """Warn when ubuntu-advantage configuration lacks commands.""" |
3165 | + validate_cloudconfig_schema( |
3166 | +- {'ubuntu_advantage': {'enable': ['esm']}}, schema) |
3167 | ++ {'ubuntu-advantage': {}}, schema) |
3168 | + self.assertEqual( |
3169 | +- "WARNING: Invalid config:\nubuntu_advantage:" |
3170 | +- " 'token' is a required property\n", self.logs.getvalue()) |
3171 | ++ "WARNING: Invalid config:\nubuntu-advantage: 'commands' is a" |
3172 | ++ " required property\n", |
3173 | ++ self.logs.getvalue()) |
3174 | + |
3175 | +- @mock.patch('%s.maybe_install_ua_tools' % MPATH) |
3176 | +- @mock.patch('%s.configure_ua' % MPATH) |
3177 | +- def test_warn_schema_services_is_not_list_or_dict(self, _cfg, _): |
3178 | +- """Warn when ubuntu_advantage:enable config is not a list.""" |
3179 | ++ @mock.patch('%s.run_commands' % MPATH) |
3180 | ++ def test_warn_schema_commands_is_not_list_or_dict(self, _): |
3181 | ++ """Warn when ubuntu-advantage:commands config is not a list or dict.""" |
3182 | + validate_cloudconfig_schema( |
3183 | +- {'ubuntu_advantage': {'enable': 'needslist'}}, schema) |
3184 | ++ {'ubuntu-advantage': {'commands': 'broken'}}, schema) |
3185 | + self.assertEqual( |
3186 | +- "WARNING: Invalid config:\nubuntu_advantage: 'token' is a" |
3187 | +- " required property\nubuntu_advantage.enable: 'needslist'" |
3188 | +- " is not of type 'array'\n", |
3189 | ++ "WARNING: Invalid config:\nubuntu-advantage.commands: 'broken' is" |
3190 | ++ " not of type 'object', 'array'\n", |
3191 | + self.logs.getvalue()) |
3192 | + |
3193 | ++ @mock.patch('%s.run_commands' % MPATH) |
3194 | ++ def test_warn_schema_when_commands_is_empty(self, _): |
3195 | ++ """Emit warnings when ubuntu-advantage:commands is empty.""" |
3196 | ++ validate_cloudconfig_schema( |
3197 | ++ {'ubuntu-advantage': {'commands': []}}, schema) |
3198 | ++ validate_cloudconfig_schema( |
3199 | ++ {'ubuntu-advantage': {'commands': {}}}, schema) |
3200 | ++ self.assertEqual( |
3201 | ++ "WARNING: Invalid config:\nubuntu-advantage.commands: [] is too" |
3202 | ++ " short\nWARNING: Invalid config:\nubuntu-advantage.commands: {}" |
3203 | ++ " does not have enough properties\n", |
3204 | ++ self.logs.getvalue()) |
3205 | ++ |
3206 | ++ @mock.patch('%s.run_commands' % MPATH) |
3207 | ++ def test_schema_when_commands_are_list_or_dict(self, _): |
3208 | ++ """No warnings when ubuntu-advantage:commands are a list or dict.""" |
3209 | ++ validate_cloudconfig_schema( |
3210 | ++ {'ubuntu-advantage': {'commands': ['valid']}}, schema) |
3211 | ++ validate_cloudconfig_schema( |
3212 | ++ {'ubuntu-advantage': {'commands': {'01': 'also valid'}}}, schema) |
3213 | ++ self.assertEqual('', self.logs.getvalue()) |
3214 | ++ |
3215 | ++ def test_duplicates_are_fine_array_array(self): |
3216 | ++ """Duplicated commands array/array entries are allowed.""" |
3217 | ++ self.assertSchemaValid( |
3218 | ++ {'commands': [["echo", "bye"], ["echo", "bye"]]}, |
3219 | ++ "command entries can be duplicate.") |
3220 | ++ |
3221 | ++ def test_duplicates_are_fine_array_string(self): |
3222 | ++ """Duplicated commands array/string entries are allowed.""" |
3223 | ++ self.assertSchemaValid( |
3224 | ++ {'commands': ["echo bye", "echo bye"]}, |
3225 | ++ "command entries can be duplicate.") |
3226 | ++ |
3227 | ++ def test_duplicates_are_fine_dict_array(self): |
3228 | ++ """Duplicated commands dict/array entries are allowed.""" |
3229 | ++ self.assertSchemaValid( |
3230 | ++ {'commands': {'00': ["echo", "bye"], '01': ["echo", "bye"]}}, |
3231 | ++ "command entries can be duplicate.") |
3232 | ++ |
3233 | ++ def test_duplicates_are_fine_dict_string(self): |
3234 | ++ """Duplicated commands dict/string entries are allowed.""" |
3235 | ++ self.assertSchemaValid( |
3236 | ++ {'commands': {'00': "echo bye", '01': "echo bye"}}, |
3237 | ++ "command entries can be duplicate.") |
3238 | ++ |
3239 | + |
3240 | + class TestHandle(CiTestCase): |
3241 | + |
3242 | +@@ -192,89 +205,41 @@ class TestHandle(CiTestCase): |
3243 | + super(TestHandle, self).setUp() |
3244 | + self.tmp = self.tmp_dir() |
3245 | + |
3246 | ++ @mock.patch('%s.run_commands' % MPATH) |
3247 | + @mock.patch('%s.validate_cloudconfig_schema' % MPATH) |
3248 | +- def test_handle_no_config(self, m_schema): |
3249 | ++ def test_handle_no_config(self, m_schema, m_run): |
3250 | + """When no ua-related configuration is provided, nothing happens.""" |
3251 | + cfg = {} |
3252 | + handle('ua-test', cfg=cfg, cloud=None, log=self.logger, args=None) |
3253 | + self.assertIn( |
3254 | +- "DEBUG: Skipping module named ua-test, no 'ubuntu_advantage'" |
3255 | +- ' configuration found', |
3256 | ++ "DEBUG: Skipping module named ua-test, no 'ubuntu-advantage' key" |
3257 | ++ " in config", |
3258 | + self.logs.getvalue()) |
3259 | + m_schema.assert_not_called() |
3260 | ++ m_run.assert_not_called() |
3261 | + |
3262 | +- @mock.patch('%s.configure_ua' % MPATH) |
3263 | + @mock.patch('%s.maybe_install_ua_tools' % MPATH) |
3264 | +- def test_handle_tries_to_install_ubuntu_advantage_tools( |
3265 | +- self, m_install, m_cfg): |
3266 | ++ def test_handle_tries_to_install_ubuntu_advantage_tools(self, m_install): |
3267 | + """If ubuntu_advantage is provided, try installing ua-tools package.""" |
3268 | +- cfg = {'ubuntu_advantage': {'token': 'valid'}} |
3269 | ++ cfg = {'ubuntu-advantage': {}} |
3270 | + mycloud = FakeCloud(None) |
3271 | + handle('nomatter', cfg=cfg, cloud=mycloud, log=self.logger, args=None) |
3272 | + m_install.assert_called_once_with(mycloud) |
3273 | + |
3274 | +- @mock.patch('%s.configure_ua' % MPATH) |
3275 | + @mock.patch('%s.maybe_install_ua_tools' % MPATH) |
3276 | +- def test_handle_passes_credentials_and_services_to_configure_ua( |
3277 | +- self, m_install, m_configure_ua): |
3278 | +- """All ubuntu_advantage config keys are passed to configure_ua.""" |
3279 | +- cfg = {'ubuntu_advantage': {'token': 'token', 'enable': ['esm']}} |
3280 | +- handle('nomatter', cfg=cfg, cloud=None, log=self.logger, args=None) |
3281 | +- m_configure_ua.assert_called_once_with( |
3282 | +- token='token', enable=['esm']) |
3283 | +- |
3284 | +- @mock.patch('%s.maybe_install_ua_tools' % MPATH, mock.MagicMock()) |
3285 | +- @mock.patch('%s.configure_ua' % MPATH) |
3286 | +- def test_handle_warns_on_deprecated_ubuntu_advantage_key_w_config( |
3287 | +- self, m_configure_ua): |
3288 | +- """Warning when ubuntu-advantage key is present with new config""" |
3289 | +- cfg = {'ubuntu-advantage': {'token': 'token', 'enable': ['esm']}} |
3290 | +- handle('nomatter', cfg=cfg, cloud=None, log=self.logger, args=None) |
3291 | +- self.assertEqual( |
3292 | +- 'WARNING: Deprecated configuration key "ubuntu-advantage"' |
3293 | +- ' provided. Expected underscore delimited "ubuntu_advantage";' |
3294 | +- ' will attempt to continue.', |
3295 | +- self.logs.getvalue().splitlines()[0]) |
3296 | +- m_configure_ua.assert_called_once_with( |
3297 | +- token='token', enable=['esm']) |
3298 | +- |
3299 | +- def test_handle_error_on_deprecated_commands_key_dashed(self): |
3300 | +- """Error when commands is present in ubuntu-advantage key.""" |
3301 | +- cfg = {'ubuntu-advantage': {'commands': 'nogo'}} |
3302 | +- with self.assertRaises(RuntimeError) as context_manager: |
3303 | +- handle('nomatter', cfg=cfg, cloud=None, log=self.logger, args=None) |
3304 | +- self.assertEqual( |
3305 | +- 'Deprecated configuration "ubuntu-advantage: commands" provided.' |
3306 | +- ' Expected "token"', |
3307 | +- str(context_manager.exception)) |
3308 | +- |
3309 | +- def test_handle_error_on_deprecated_commands_key_underscored(self): |
3310 | +- """Error when commands is present in ubuntu_advantage key.""" |
3311 | +- cfg = {'ubuntu_advantage': {'commands': 'nogo'}} |
3312 | +- with self.assertRaises(RuntimeError) as context_manager: |
3313 | +- handle('nomatter', cfg=cfg, cloud=None, log=self.logger, args=None) |
3314 | +- self.assertEqual( |
3315 | +- 'Deprecated configuration "ubuntu-advantage: commands" provided.' |
3316 | +- ' Expected "token"', |
3317 | +- str(context_manager.exception)) |
3318 | ++ def test_handle_runs_commands_provided(self, m_install): |
3319 | ++ """When commands are specified as a list, run them.""" |
3320 | ++ outfile = self.tmp_path('output.log', dir=self.tmp) |
3321 | + |
3322 | +- @mock.patch('%s.maybe_install_ua_tools' % MPATH, mock.MagicMock()) |
3323 | +- @mock.patch('%s.configure_ua' % MPATH) |
3324 | +- def test_handle_prefers_new_style_config( |
3325 | +- self, m_configure_ua): |
3326 | +- """ubuntu_advantage should be preferred over ubuntu-advantage""" |
3327 | + cfg = { |
3328 | +- 'ubuntu-advantage': {'token': 'nope', 'enable': ['wrong']}, |
3329 | +- 'ubuntu_advantage': {'token': 'token', 'enable': ['esm']}, |
3330 | +- } |
3331 | +- handle('nomatter', cfg=cfg, cloud=None, log=self.logger, args=None) |
3332 | +- self.assertEqual( |
3333 | +- 'WARNING: Deprecated configuration key "ubuntu-advantage"' |
3334 | +- ' provided. Expected underscore delimited "ubuntu_advantage";' |
3335 | +- ' will attempt to continue.', |
3336 | +- self.logs.getvalue().splitlines()[0]) |
3337 | +- m_configure_ua.assert_called_once_with( |
3338 | +- token='token', enable=['esm']) |
3339 | ++ 'ubuntu-advantage': {'commands': ['echo "HI" >> %s' % outfile, |
3340 | ++ 'echo "MOM" >> %s' % outfile]}} |
3341 | ++ mock_path = '%s.sys.stderr' % MPATH |
3342 | ++ with self.allow_subp([CiTestCase.SUBP_SHELL_TRUE]): |
3343 | ++ with mock.patch(mock_path, new_callable=StringIO): |
3344 | ++ handle('nomatter', cfg=cfg, cloud=None, log=self.logger, |
3345 | ++ args=None) |
3346 | ++ self.assertEqual('HI\nMOM\n', util.load_file(outfile)) |
3347 | + |
3348 | + |
3349 | + class TestMaybeInstallUATools(CiTestCase): |
3350 | +@@ -288,7 +253,7 @@ class TestMaybeInstallUATools(CiTestCase |
3351 | + @mock.patch('%s.util.which' % MPATH) |
3352 | + def test_maybe_install_ua_tools_noop_when_ua_tools_present(self, m_which): |
3353 | + """Do nothing if ubuntu-advantage-tools already exists.""" |
3354 | +- m_which.return_value = '/usr/bin/ua' # already installed |
3355 | ++ m_which.return_value = '/usr/bin/ubuntu-advantage' # already installed |
3356 | + distro = mock.MagicMock() |
3357 | + distro.update_package_sources.side_effect = RuntimeError( |
3358 | + 'Some apt error') |
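The tests above expect a "Non-ubuntu-advantage commands in ubuntu-advantage config" warning, which comes from cloud-init's `prepend_base_command` helper. A rough sketch of the behaviour those tests exercise (the real implementation in `cloudinit.util` may differ in details):

```python
import logging

LOG = logging.getLogger(__name__)


def prepend_base_command(base_command, commands):
    """Ensure list-form commands start with base_command; warn otherwise.

    String commands are run through a shell, so they are only checked
    and warned about, never rewritten.
    """
    warnings = []
    fixed = []
    for command in commands:
        if isinstance(command, list):
            if command[0] != base_command:
                warnings.append(' '.join(command))
                # Safe to rewrite: argv lists are exec'd directly.
                command = [base_command] + command
        elif isinstance(command, str):
            if not command.startswith(base_command + ' '):
                warnings.append(command)
        else:
            raise TypeError('Invalid command type: {}'.format(command))
        fixed.append(command)
    if warnings:
        LOG.warning('Non-%s commands in %s config:\n%s',
                    base_command, base_command, '\n'.join(warnings))
    return fixed
```

This is why the schema's `00` and `01` example commands are equivalent: the bare `['enable-livepatch', 'my-token']` list gets `ubuntu-advantage` prepended before execution.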
3359 | diff --git a/doc/rtd/topics/datasources/azure.rst b/doc/rtd/topics/datasources/azure.rst |
3360 | index 720a475..b41cddd 100644 |
3361 | --- a/doc/rtd/topics/datasources/azure.rst |
3362 | +++ b/doc/rtd/topics/datasources/azure.rst |
3363 | @@ -5,9 +5,30 @@ Azure |
3364 | |
3365 | This datasource finds metadata and user-data from the Azure cloud platform. |
3366 | |
3367 | -Azure Platform |
3368 | --------------- |
3369 | -The azure cloud-platform provides initial data to an instance via an attached |
3370 | +walinuxagent |
3371 | +------------ |
3372 | +walinuxagent has several functions within images. For cloud-init |
3373 | +specifically, the relevant functionality it performs is to register the |
3374 | +instance with the Azure cloud platform at boot so networking will be |
3375 | +permitted. For more information about the other functionality of |
3376 | +walinuxagent, see `Azure's documentation |
3377 | +<https://github.com/Azure/WALinuxAgent#introduction>`_. |
3378 | +(Note, however, that only one of walinuxagent's provisioning and cloud-init |
3379 | +should be used to perform instance customisation.) |
3380 | + |
3381 | +If you are configuring walinuxagent yourself, you will want to ensure that you |
3382 | +have `Provisioning.UseCloudInit |
3383 | +<https://github.com/Azure/WALinuxAgent#provisioningusecloudinit>`_ set to |
3384 | +``y``. |
3385 | + |
3386 | + |
3387 | +Builtin Agent |
3388 | +------------- |
3389 | +An alternative to using walinuxagent to register to the Azure cloud platform |
3390 | +is to use the ``__builtin__`` agent command. This section contains more |
3391 | +background on what that code path does, and how to enable it. |
3392 | + |
3393 | +The Azure cloud platform provides initial data to an instance via an attached |
3394 | CD formatted in UDF. That CD contains a 'ovf-env.xml' file that provides some |
3395 | information. Additional information is obtained via interaction with the |
3396 | "endpoint". |
3397 | @@ -36,25 +57,17 @@ for the endpoint server (again option 245). |
3398 | You can define the path to the lease file with the 'dhclient_lease_file' |
3399 | configuration. |
3400 | |
3401 | -walinuxagent |
3402 | ------------- |
3403 | -In order to operate correctly, cloud-init needs walinuxagent to provide much |
3404 | -of the interaction with azure. In addition to "provisioning" code, walinux |
3405 | -does the following on the agent is a long running daemon that handles the |
3406 | -following things: |
3407 | -- generate a x509 certificate and send that to the endpoint |
3408 | - |
3409 | -waagent.conf config |
3410 | -^^^^^^^^^^^^^^^^^^^ |
3411 | -in order to use waagent.conf with cloud-init, the following settings are recommended. Other values can be changed or set to the defaults. |
3412 | - |
3413 | - :: |
3414 | - |
3415 | - # disabling provisioning turns off all 'Provisioning.*' function |
3416 | - Provisioning.Enabled=n |
3417 | - # this is currently not handled by cloud-init, so let walinuxagent do it. |
3418 | - ResourceDisk.Format=y |
3419 | - ResourceDisk.MountPoint=/mnt |
3420 | + |
3421 | +IMDS |
3422 | +---- |
3423 | +Azure provides the `instance metadata service (IMDS) |
3424 | +<https://docs.microsoft.com/en-us/azure/virtual-machines/windows/instance-metadata-service>`_ |
3425 | +which is a REST service on ``169.254.169.254`` providing additional |
3426 | +configuration information to the instance. Cloud-init uses the IMDS for: |
3427 | + |
3428 | +- network configuration for the instance which is applied per boot |
3429 | +- a preprovisioning gate which blocks instance configuration until Azure fabric |
3430 | + is ready to provision |
3431 | |
3432 | |
3433 | Configuration |
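The IMDS endpoint documented above can be exercised by hand. This sketch only constructs the request, including the `Metadata: true` header Azure requires, without sending it; the `api-version` value shown is illustrative and newer versions exist:

```python
from urllib import request

IMDS_URL = ('http://169.254.169.254/metadata/instance'
            '?api-version=2017-08-01')


def build_imds_request():
    # IMDS rejects requests lacking the Metadata header, which guards
    # against accidental access through forwarding proxies.
    return request.Request(IMDS_URL, headers={'Metadata': 'true'})


req = build_imds_request()
print(req.full_url)
```

Passing such a request to `urllib.request.urlopen` from inside an Azure instance returns the instance's compute and network metadata as JSON.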
3434 | diff --git a/doc/rtd/topics/datasources/nocloud.rst b/doc/rtd/topics/datasources/nocloud.rst |
3435 | index 08578e8..1c5cf96 100644 |
3436 | --- a/doc/rtd/topics/datasources/nocloud.rst |
3437 | +++ b/doc/rtd/topics/datasources/nocloud.rst |
3438 | @@ -9,7 +9,7 @@ network at all). |
3439 | |
3440 | You can provide meta-data and user-data to a local vm boot via files on a |
3441 | `vfat`_ or `iso9660`_ filesystem. The filesystem volume label must be |
3442 | -``cidata``. |
3443 | +``cidata`` or ``CIDATA``. |
3444 | |
3445 | Alternatively, you can provide meta-data via kernel command line or SMBIOS |
3446 | "serial number" option. The data must be passed in the form of a string: |
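
The nocloud.rst change above admits either ``cidata`` or ``CIDATA`` as the seed volume label. A sketch of the resulting case-tolerant matching (a hypothetical helper for illustration, not cloud-init's actual code):

```python
def matches_cidata_label(label, configured="cidata"):
    """Return True if a filesystem label identifies a NoCloud seed.

    The volume label may be written in either case, so compare
    case-insensitively against the configured label.
    """
    if label is None:
        return False
    return label.lower() == configured.lower()


# Both spellings are accepted, regardless of the configured case:
for candidate in ("cidata", "CIDATA"):
    assert matches_cidata_label(candidate)
    assert matches_cidata_label(candidate, configured="CIDATA")
```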
3447 | diff --git a/doc/rtd/topics/modules.rst b/doc/rtd/topics/modules.rst |
3448 | index d9720f6..3dcdd3b 100644 |
3449 | --- a/doc/rtd/topics/modules.rst |
3450 | +++ b/doc/rtd/topics/modules.rst |
3451 | @@ -54,6 +54,7 @@ Modules |
3452 | .. automodule:: cloudinit.config.cc_ssh_import_id |
3453 | .. automodule:: cloudinit.config.cc_timezone |
3454 | .. automodule:: cloudinit.config.cc_ubuntu_advantage |
3455 | +.. automodule:: cloudinit.config.cc_ubuntu_drivers |
3456 | .. automodule:: cloudinit.config.cc_update_etc_hosts |
3457 | .. automodule:: cloudinit.config.cc_update_hostname |
3458 | .. automodule:: cloudinit.config.cc_users_groups |
3459 | diff --git a/packages/redhat/cloud-init.spec.in b/packages/redhat/cloud-init.spec.in |
3460 | index 6b2022b..057a578 100644 |
3461 | --- a/packages/redhat/cloud-init.spec.in |
3462 | +++ b/packages/redhat/cloud-init.spec.in |
3463 | @@ -205,7 +205,9 @@ fi |
3464 | %dir %{_sysconfdir}/cloud/templates |
3465 | %config(noreplace) %{_sysconfdir}/cloud/templates/* |
3466 | %config(noreplace) %{_sysconfdir}/rsyslog.d/21-cloudinit.conf |
3467 | -%{_sysconfdir}/bash_completion.d/cloud-init |
3468 | + |
3469 | +# Bash completion script |
3470 | +%{_datadir}/bash-completion/completions/cloud-init |
3471 | |
3472 | %{_libexecdir}/%{name} |
3473 | %dir %{_sharedstatedir}/cloud |
3474 | diff --git a/packages/suse/cloud-init.spec.in b/packages/suse/cloud-init.spec.in |
3475 | index 26894b3..004b875 100644 |
3476 | --- a/packages/suse/cloud-init.spec.in |
3477 | +++ b/packages/suse/cloud-init.spec.in |
3478 | @@ -120,7 +120,9 @@ version_pys=$(cd "%{buildroot}" && find . -name version.py -type f) |
3479 | %config(noreplace) %{_sysconfdir}/cloud/cloud.cfg.d/README |
3480 | %dir %{_sysconfdir}/cloud/templates |
3481 | %config(noreplace) %{_sysconfdir}/cloud/templates/* |
3482 | -%{_sysconfdir}/bash_completion.d/cloud-init |
3483 | + |
3484 | +# Bash completion script |
3485 | +%{_datadir}/bash-completion/completions/cloud-init |
3486 | |
3487 | %{_sysconfdir}/dhcp/dhclient-exit-hooks.d/hook-dhclient |
3488 | %{_sysconfdir}/NetworkManager/dispatcher.d/hook-network-manager |
3489 | diff --git a/setup.py b/setup.py |
3490 | index 186e215..fcaf26f 100755 |
3491 | --- a/setup.py |
3492 | +++ b/setup.py |
3493 | @@ -245,13 +245,14 @@ if not in_virtualenv(): |
3494 | INITSYS_ROOTS[k] = "/" + INITSYS_ROOTS[k] |
3495 | |
3496 | data_files = [ |
3497 | - (ETC + '/bash_completion.d', ['bash_completion/cloud-init']), |
3498 | (ETC + '/cloud', [render_tmpl("config/cloud.cfg.tmpl")]), |
3499 | (ETC + '/cloud/cloud.cfg.d', glob('config/cloud.cfg.d/*')), |
3500 | (ETC + '/cloud/templates', glob('templates/*')), |
3501 | (USR_LIB_EXEC + '/cloud-init', ['tools/ds-identify', |
3502 | 'tools/uncloud-init', |
3503 | 'tools/write-ssh-key-fingerprints']), |
3504 | + (USR + '/share/bash-completion/completions', |
3505 | + ['bash_completion/cloud-init']), |
3506 | (USR + '/share/doc/cloud-init', [f for f in glob('doc/*') if is_f(f)]), |
3507 | (USR + '/share/doc/cloud-init/examples', |
3508 | [f for f in glob('doc/examples/*') if is_f(f)]), |
3509 | diff --git a/tests/cloud_tests/releases.yaml b/tests/cloud_tests/releases.yaml |
3510 | index ec5da72..924ad95 100644 |
3511 | --- a/tests/cloud_tests/releases.yaml |
3512 | +++ b/tests/cloud_tests/releases.yaml |
3513 | @@ -129,6 +129,22 @@ features: |
3514 | |
3515 | releases: |
3516 | # UBUNTU ================================================================= |
3517 | + eoan: |
3518 | + # EOL: Jul 2020 |
3519 | + default: |
3520 | + enabled: true |
3521 | + release: eoan |
3522 | + version: 19.10 |
3523 | + os: ubuntu |
3524 | + feature_groups: |
3525 | + - base |
3526 | + - debian_base |
3527 | + - ubuntu_specific |
3528 | + lxd: |
3529 | + sstreams_server: https://cloud-images.ubuntu.com/daily |
3530 | + alias: eoan |
3531 | + setup_overrides: null |
3532 | + override_templates: false |
3533 | disco: |
3534 | # EOL: Jan 2020 |
3535 | default: |
3536 | diff --git a/tests/cloud_tests/testcases/modules/apt_pipelining_disable.yaml b/tests/cloud_tests/testcases/modules/apt_pipelining_disable.yaml |
3537 | index bd9b5d0..22a31dc 100644 |
3538 | --- a/tests/cloud_tests/testcases/modules/apt_pipelining_disable.yaml |
3539 | +++ b/tests/cloud_tests/testcases/modules/apt_pipelining_disable.yaml |
3540 | @@ -5,8 +5,7 @@ required_features: |
3541 | - apt |
3542 | cloud_config: | |
3543 | #cloud-config |
3544 | - apt: |
3545 | - apt_pipelining: false |
3546 | + apt_pipelining: false |
3547 | collect_scripts: |
3548 | 90cloud-init-pipelining: | |
3549 | #!/bin/bash |
3550 | diff --git a/tests/cloud_tests/testcases/modules/apt_pipelining_os.py b/tests/cloud_tests/testcases/modules/apt_pipelining_os.py |
3551 | index 740dc7c..2b940a6 100644 |
3552 | --- a/tests/cloud_tests/testcases/modules/apt_pipelining_os.py |
3553 | +++ b/tests/cloud_tests/testcases/modules/apt_pipelining_os.py |
3554 | @@ -8,8 +8,8 @@ class TestAptPipeliningOS(base.CloudTestCase): |
3555 | """Test apt-pipelining module.""" |
3556 | |
3557 | def test_os_pipelining(self): |
3558 | - """Test pipelining set to os.""" |
3559 | - out = self.get_data_file('90cloud-init-pipelining') |
3560 | - self.assertIn('Acquire::http::Pipeline-Depth "0";', out) |
3561 | +        """Test that the 'os' setting does not write an apt config file.""" |
3562 | + out = self.get_data_file('90cloud-init-pipelining_not_written') |
3563 | + self.assertEqual(0, int(out)) |
3564 | |
3565 | # vi: ts=4 expandtab |
3566 | diff --git a/tests/cloud_tests/testcases/modules/apt_pipelining_os.yaml b/tests/cloud_tests/testcases/modules/apt_pipelining_os.yaml |
3567 | index cbed3ba..86d5220 100644 |
3568 | --- a/tests/cloud_tests/testcases/modules/apt_pipelining_os.yaml |
3569 | +++ b/tests/cloud_tests/testcases/modules/apt_pipelining_os.yaml |
3570 | @@ -1,15 +1,14 @@ |
3571 | # |
3572 | -# Set apt pipelining value to OS |
3573 | +# Set apt pipelining value to OS, no conf written |
3574 | # |
3575 | required_features: |
3576 | - apt |
3577 | cloud_config: | |
3578 | #cloud-config |
3579 | - apt: |
3580 | - apt_pipelining: os |
3581 | + apt_pipelining: os |
3582 | collect_scripts: |
3583 | - 90cloud-init-pipelining: | |
3584 | + 90cloud-init-pipelining_not_written: | |
3585 | #!/bin/bash |
3586 | - cat /etc/apt/apt.conf.d/90cloud-init-pipelining |
3587 | + ls /etc/apt/apt.conf.d/90cloud-init-pipelining | wc -l |
3588 | |
3589 | # vi: ts=4 expandtab |
3590 | diff --git a/tests/data/azure/non_unicode_random_string b/tests/data/azure/non_unicode_random_string |
3591 | new file mode 100644 |
3592 | index 0000000..b9ecefb |
3593 | --- /dev/null |
3594 | +++ b/tests/data/azure/non_unicode_random_string |
3595 | @@ -0,0 +1 @@ |
3596 | +OEM0d\x00\x00\x00\x01\x80VRTUALMICROSFT\x02\x17\x00\x06MSFT\x97\x00\x00\x00C\xb4{V\xf4X%\x061x\x90\x1c\xfen\x86\xbf~\xf5\x8c\x94&\x88\xed\x84\xf9B\xbd\xd3\xf1\xdb\xee:\xd9\x0fc\x0e\x83(\xbd\xe3'\xfc\x85,\xdf\xf4\x13\x99N\xc5\xf3Y\x1e\xe3\x0b\xa4H\x08J\xb9\xdcdb$ |
3597 | \ No newline at end of file |
3598 | diff --git a/tests/unittests/test_datasource/test_azure.py b/tests/unittests/test_datasource/test_azure.py |
3599 | index 6b05b8f..427ab7e 100644 |
3600 | --- a/tests/unittests/test_datasource/test_azure.py |
3601 | +++ b/tests/unittests/test_datasource/test_azure.py |
3602 | @@ -7,11 +7,11 @@ from cloudinit.sources import ( |
3603 | UNSET, DataSourceAzure as dsaz, InvalidMetaDataException) |
3604 | from cloudinit.util import (b64e, decode_binary, load_file, write_file, |
3605 | find_freebsd_part, get_path_dev_freebsd, |
3606 | - MountFailedError) |
3607 | + MountFailedError, json_dumps, load_json) |
3608 | from cloudinit.version import version_string as vs |
3609 | from cloudinit.tests.helpers import ( |
3610 | HttprettyTestCase, CiTestCase, populate_dir, mock, wrap_and_call, |
3611 | - ExitStack) |
3612 | + ExitStack, resourceLocation) |
3613 | |
3614 | import crypt |
3615 | import httpretty |
3616 | @@ -163,7 +163,8 @@ class TestGetMetadataFromIMDS(HttprettyTestCase): |
3617 | |
3618 | m_readurl.assert_called_with( |
3619 | self.network_md_url, exception_cb=mock.ANY, |
3620 | - headers={'Metadata': 'true'}, retries=2, timeout=1) |
3621 | + headers={'Metadata': 'true'}, retries=2, |
3622 | + timeout=dsaz.IMDS_TIMEOUT_IN_SECONDS) |
3623 | |
3624 | @mock.patch('cloudinit.url_helper.time.sleep') |
3625 | @mock.patch(MOCKPATH + 'net.is_up') |
3626 | @@ -1375,12 +1376,15 @@ class TestCanDevBeReformatted(CiTestCase): |
3627 | self._domock(p + "util.mount_cb", 'm_mount_cb') |
3628 | self._domock(p + "os.path.realpath", 'm_realpath') |
3629 | self._domock(p + "os.path.exists", 'm_exists') |
3630 | + self._domock(p + "util.SeLinuxGuard", 'm_selguard') |
3631 | |
3632 | self.m_exists.side_effect = lambda p: p in bypath |
3633 | self.m_realpath.side_effect = realpath |
3634 | self.m_has_ntfs_filesystem.side_effect = has_ntfs_fs |
3635 | self.m_mount_cb.side_effect = mount_cb |
3636 | self.m_partitions_on_device.side_effect = partitions_on_device |
3637 | + self.m_selguard.__enter__ = mock.Mock(return_value=False) |
3638 | + self.m_selguard.__exit__ = mock.Mock() |
3639 | |
3640 | def test_three_partitions_is_false(self): |
3641 | """A disk with 3 partitions can not be formatted.""" |
3642 | @@ -1788,7 +1792,8 @@ class TestAzureDataSourcePreprovisioning(CiTestCase): |
3643 | headers={'Metadata': 'true', |
3644 | 'User-Agent': |
3645 | 'Cloud-Init/%s' % vs() |
3646 | - }, method='GET', timeout=1, |
3647 | + }, method='GET', |
3648 | + timeout=dsaz.IMDS_TIMEOUT_IN_SECONDS, |
3649 | url=full_url)]) |
3650 | self.assertEqual(m_dhcp.call_count, 2) |
3651 | m_net.assert_any_call( |
3652 | @@ -1825,7 +1830,9 @@ class TestAzureDataSourcePreprovisioning(CiTestCase): |
3653 | headers={'Metadata': 'true', |
3654 | 'User-Agent': |
3655 | 'Cloud-Init/%s' % vs()}, |
3656 | - method='GET', timeout=1, url=full_url)]) |
3657 | + method='GET', |
3658 | + timeout=dsaz.IMDS_TIMEOUT_IN_SECONDS, |
3659 | + url=full_url)]) |
3660 | self.assertEqual(m_dhcp.call_count, 2) |
3661 | m_net.assert_any_call( |
3662 | broadcast='192.168.2.255', interface='eth9', ip='192.168.2.9', |
3663 | @@ -1923,4 +1930,24 @@ class TestWBIsPlatformViable(CiTestCase): |
3664 | self.logs.getvalue()) |
3665 | |
3666 | |
3667 | +class TestRandomSeed(CiTestCase): |
3668 | + """Test proper handling of random_seed""" |
3669 | + |
3670 | + def test_non_ascii_seed_is_serializable(self): |
3671 | +    def test_non_ascii_seed_is_serializable(self): |
3672 | +        """Pass if a random string from the Azure infrastructure which |
3673 | +        contains at least one non-UTF-8 byte sequence can be converted |
3674 | +        to/from JSON without alteration and without throwing an exception. |
3674 | + """ |
3675 | + path = resourceLocation("azure/non_unicode_random_string") |
3676 | + result = dsaz._get_random_seed(path) |
3677 | + |
3678 | + obj = {'seed': result} |
3679 | + try: |
3680 | + serialized = json_dumps(obj) |
3681 | + deserialized = load_json(serialized) |
3682 | + except UnicodeDecodeError: |
3683 | + self.fail("Non-serializable random seed returned") |
3684 | + |
3685 | + self.assertEqual(deserialized['seed'], result) |
3686 | + |
3687 | # vi: ts=4 expandtab |
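
The new ``TestRandomSeed`` case above verifies that a seed containing non-UTF-8 bytes survives a JSON round trip. One common way to make arbitrary bytes JSON-safe — an illustration of the property being tested, not necessarily the exact approach cloud-init takes — is to decode with replacement of undecodable sequences:

```python
import json


def make_json_safe(raw: bytes) -> str:
    # Replace undecodable byte sequences so the result is valid
    # Unicode text, and therefore serializable by the json module.
    return raw.decode("utf-8", errors="replace")


# Bytes resembling the non-unicode test fixture above:
seed = make_json_safe(b"OEM0d\x00\x01\x80VRTUALMICROSFT\xb4{V\xf4")
roundtripped = json.loads(json.dumps({"seed": seed}))["seed"]
assert roundtripped == seed
```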
3688 | diff --git a/tests/unittests/test_datasource/test_azure_helper.py b/tests/unittests/test_datasource/test_azure_helper.py |
3689 | index 0255616..bd006ab 100644 |
3690 | --- a/tests/unittests/test_datasource/test_azure_helper.py |
3691 | +++ b/tests/unittests/test_datasource/test_azure_helper.py |
3692 | @@ -67,12 +67,17 @@ class TestFindEndpoint(CiTestCase): |
3693 | self.networkd_leases.return_value = None |
3694 | |
3695 | def test_missing_file(self): |
3696 | - self.assertRaises(ValueError, wa_shim.find_endpoint) |
3697 | + """wa_shim find_endpoint uses default endpoint if leasefile not found |
3698 | + """ |
3699 | + self.assertEqual(wa_shim.find_endpoint(), "168.63.129.16") |
3700 | |
3701 | def test_missing_special_azure_line(self): |
3702 | + """wa_shim find_endpoint uses default endpoint if leasefile is found |
3703 | + but does not contain DHCP Option 245 (whose value is the endpoint) |
3704 | + """ |
3705 | self.load_file.return_value = '' |
3706 | self.dhcp_options.return_value = {'eth0': {'key': 'value'}} |
3707 | - self.assertRaises(ValueError, wa_shim.find_endpoint) |
3708 | + self.assertEqual(wa_shim.find_endpoint(), "168.63.129.16") |
3709 | |
3710 | @staticmethod |
3711 | def _build_lease_content(encoded_address): |
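
The updated azure_helper tests above expect ``find_endpoint`` to fall back to a fixed address when no lease file, or no DHCP option 245, is available (168.63.129.16 is Azure's well-known wireserver address, per the test expectations). A sketch of that fallback logic — the regex and helper name are hypothetical simplifications of the shim's behavior:

```python
import re

DEFAULT_WIRESERVER_ENDPOINT = "168.63.129.16"


def find_endpoint(lease_text=None):
    """Extract the wireserver endpoint from dhclient lease content.

    Option 245 carries the endpoint as four colon-separated hex
    octets; fall back to the well-known address when it is absent.
    """
    if lease_text:
        match = re.search(
            r'option unknown-245 ([0-9a-f]{1,2}(?::[0-9a-f]{1,2}){3});',
            lease_text)
        if match:
            octets = match.group(1).split(":")
            return ".".join(str(int(o, 16)) for o in octets)
    return DEFAULT_WIRESERVER_ENDPOINT


assert find_endpoint(None) == "168.63.129.16"
assert find_endpoint("option unknown-245 a8:3f:81:10;") == "168.63.129.16"
```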
3712 | diff --git a/tests/unittests/test_datasource/test_nocloud.py b/tests/unittests/test_datasource/test_nocloud.py |
3713 | index 3429272..b785362 100644 |
3714 | --- a/tests/unittests/test_datasource/test_nocloud.py |
3715 | +++ b/tests/unittests/test_datasource/test_nocloud.py |
3716 | @@ -32,6 +32,36 @@ class TestNoCloudDataSource(CiTestCase): |
3717 | self.mocks.enter_context( |
3718 | mock.patch.object(util, 'read_dmi_data', return_value=None)) |
3719 | |
3720 | + def _test_fs_config_is_read(self, fs_label, fs_label_to_search): |
3721 | + vfat_device = 'device-1' |
3722 | + |
3723 | + def m_mount_cb(device, callback, mtype): |
3724 | + if (device == vfat_device): |
3725 | + return {'meta-data': yaml.dump({'instance-id': 'IID'})} |
3726 | + else: |
3727 | + return {} |
3728 | + |
3729 | + def m_find_devs_with(query='', path=''): |
3730 | + if 'TYPE=vfat' == query: |
3731 | + return [vfat_device] |
3732 | + elif 'LABEL={}'.format(fs_label) == query: |
3733 | + return [vfat_device] |
3734 | + else: |
3735 | + return [] |
3736 | + |
3737 | + self.mocks.enter_context( |
3738 | + mock.patch.object(util, 'find_devs_with', |
3739 | + side_effect=m_find_devs_with)) |
3740 | + self.mocks.enter_context( |
3741 | + mock.patch.object(util, 'mount_cb', |
3742 | + side_effect=m_mount_cb)) |
3743 | + sys_cfg = {'datasource': {'NoCloud': {'fs_label': fs_label_to_search}}} |
3744 | + dsrc = dsNoCloud(sys_cfg=sys_cfg, distro=None, paths=self.paths) |
3745 | + ret = dsrc.get_data() |
3746 | + |
3747 | + self.assertEqual(dsrc.metadata.get('instance-id'), 'IID') |
3748 | + self.assertTrue(ret) |
3749 | + |
3750 | def test_nocloud_seed_dir_on_lxd(self, m_is_lxd): |
3751 | md = {'instance-id': 'IID', 'dsmode': 'local'} |
3752 | ud = b"USER_DATA_HERE" |
3753 | @@ -90,6 +120,18 @@ class TestNoCloudDataSource(CiTestCase): |
3754 | ret = dsrc.get_data() |
3755 | self.assertFalse(ret) |
3756 | |
3757 | + def test_fs_config_lowercase_label(self, m_is_lxd): |
3758 | + self._test_fs_config_is_read('cidata', 'cidata') |
3759 | + |
3760 | + def test_fs_config_uppercase_label(self, m_is_lxd): |
3761 | + self._test_fs_config_is_read('CIDATA', 'cidata') |
3762 | + |
3763 | + def test_fs_config_lowercase_label_search_uppercase(self, m_is_lxd): |
3764 | + self._test_fs_config_is_read('cidata', 'CIDATA') |
3765 | + |
3766 | + def test_fs_config_uppercase_label_search_uppercase(self, m_is_lxd): |
3767 | + self._test_fs_config_is_read('CIDATA', 'CIDATA') |
3768 | + |
3769 | def test_no_datasource_expected(self, m_is_lxd): |
3770 | # no source should be found if no cmdline, config, and fs_label=None |
3771 | sys_cfg = {'datasource': {'NoCloud': {'fs_label': None}}} |
3772 | diff --git a/tests/unittests/test_distros/test_netconfig.py b/tests/unittests/test_distros/test_netconfig.py |
3773 | index e453040..c3c0c8c 100644 |
3774 | --- a/tests/unittests/test_distros/test_netconfig.py |
3775 | +++ b/tests/unittests/test_distros/test_netconfig.py |
3776 | @@ -496,6 +496,7 @@ class TestNetCfgDistroRedhat(TestNetCfgDistroBase): |
3777 | BOOTPROTO=none |
3778 | DEFROUTE=yes |
3779 | DEVICE=eth0 |
3780 | + IPADDR6=2607:f0d0:1002:0011::2/64 |
3781 | IPV6ADDR=2607:f0d0:1002:0011::2/64 |
3782 | IPV6INIT=yes |
3783 | IPV6_DEFAULTGW=2607:f0d0:1002:0011::1 |
3784 | @@ -588,6 +589,7 @@ class TestNetCfgDistroOpensuse(TestNetCfgDistroBase): |
3785 | BOOTPROTO=none |
3786 | DEFROUTE=yes |
3787 | DEVICE=eth0 |
3788 | + IPADDR6=2607:f0d0:1002:0011::2/64 |
3789 | IPV6ADDR=2607:f0d0:1002:0011::2/64 |
3790 | IPV6INIT=yes |
3791 | IPV6_DEFAULTGW=2607:f0d0:1002:0011::1 |
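
The expected ifcfg output above now carries the IPv6 address under both ``IPADDR6`` and ``IPV6ADDR``, since different sysconfig consumers read different keys (SUSE's wicked reads ``IPADDR6``, Red Hat initscripts read ``IPV6ADDR``). A sketch of emitting both, as a hypothetical renderer rather than cloud-init's actual sysconfig code:

```python
def render_ipv6_keys(address):
    """Render an IPv6 address under both sysconfig key spellings.

    A portable ifcfg fragment sets both IPADDR6 (SUSE) and
    IPV6ADDR (Red Hat) so either toolchain configures the address.
    """
    return {
        "IPADDR6": address,
        "IPV6ADDR": address,
        "IPV6INIT": "yes",
    }


cfg = render_ipv6_keys("2607:f0d0:1002:0011::2/64")
lines = ["%s=%s" % kv for kv in sorted(cfg.items())]
assert lines[0] == "IPADDR6=2607:f0d0:1002:0011::2/64"
```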
3792 | diff --git a/tests/unittests/test_ds_identify.py b/tests/unittests/test_ds_identify.py |
3793 | index d00c1b4..8c18aa1 100644 |
3794 | --- a/tests/unittests/test_ds_identify.py |
3795 | +++ b/tests/unittests/test_ds_identify.py |
3796 | @@ -520,6 +520,10 @@ class TestDsIdentify(DsIdentifyBase): |
3797 | """NoCloud is found with iso9660 filesystem on non-cdrom disk.""" |
3798 | self._test_ds_found('NoCloud') |
3799 | |
3800 | + def test_nocloud_upper(self): |
3801 | + """NoCloud is found with uppercase filesystem label.""" |
3802 | + self._test_ds_found('NoCloudUpper') |
3803 | + |
3804 | def test_nocloud_seed(self): |
3805 | """Nocloud seed directory.""" |
3806 | self._test_ds_found('NoCloud-seed') |
3807 | @@ -713,6 +717,19 @@ VALID_CFG = { |
3808 | 'dev/vdb': 'pretend iso content for cidata\n', |
3809 | } |
3810 | }, |
3811 | + 'NoCloudUpper': { |
3812 | + 'ds': 'NoCloud', |
3813 | + 'mocks': [ |
3814 | + MOCK_VIRT_IS_KVM, |
3815 | + {'name': 'blkid', 'ret': 0, |
3816 | + 'out': blkid_out( |
3817 | + BLKID_UEFI_UBUNTU + |
3818 | + [{'DEVNAME': 'vdb', 'TYPE': 'iso9660', 'LABEL': 'CIDATA'}])}, |
3819 | + ], |
3820 | + 'files': { |
3821 | + 'dev/vdb': 'pretend iso content for cidata\n', |
3822 | + } |
3823 | + }, |
3824 | 'NoCloud-seed': { |
3825 | 'ds': 'NoCloud', |
3826 | 'files': { |
3827 | diff --git a/tests/unittests/test_handler/test_handler_mounts.py b/tests/unittests/test_handler/test_handler_mounts.py |
3828 | index 8fea6c2..0fb160b 100644 |
3829 | --- a/tests/unittests/test_handler/test_handler_mounts.py |
3830 | +++ b/tests/unittests/test_handler/test_handler_mounts.py |
3831 | @@ -154,7 +154,15 @@ class TestFstabHandling(test_helpers.FilesystemMockingTestCase): |
3832 | return_value=True) |
3833 | |
3834 | self.add_patch('cloudinit.config.cc_mounts.util.subp', |
3835 | - 'mock_util_subp') |
3836 | + 'm_util_subp') |
3837 | + |
3838 | + self.add_patch('cloudinit.config.cc_mounts.util.mounts', |
3839 | + 'mock_util_mounts', |
3840 | + return_value={ |
3841 | + '/dev/sda1': {'fstype': 'ext4', |
3842 | + 'mountpoint': '/', |
3843 | + 'opts': 'rw,relatime,discard' |
3844 | + }}) |
3845 | |
3846 | self.mock_cloud = mock.Mock() |
3847 | self.mock_log = mock.Mock() |
3848 | @@ -230,4 +238,24 @@ class TestFstabHandling(test_helpers.FilesystemMockingTestCase): |
3849 | fstab_new_content = fd.read() |
3850 | self.assertEqual(fstab_expected_content, fstab_new_content) |
3851 | |
3852 | + def test_no_change_fstab_sets_needs_mount_all(self): |
3853 | +        '''Verify unchanged fstab entries still trigger a mount -a call.''' |
3854 | + fstab_original_content = ( |
3855 | + 'LABEL=cloudimg-rootfs / ext4 defaults 0 0\n' |
3856 | + 'LABEL=UEFI /boot/efi vfat defaults 0 0\n' |
3857 | + '/dev/vdb /mnt auto defaults,noexec,comment=cloudconfig 0 2\n' |
3858 | + ) |
3859 | + fstab_expected_content = fstab_original_content |
3860 | + cc = {'mounts': [ |
3861 | + ['/dev/vdb', '/mnt', 'auto', 'defaults,noexec']]} |
3862 | + with open(cc_mounts.FSTAB_PATH, 'w') as fd: |
3863 | + fd.write(fstab_original_content) |
3864 | + with open(cc_mounts.FSTAB_PATH, 'r') as fd: |
3865 | + fstab_new_content = fd.read() |
3866 | + self.assertEqual(fstab_expected_content, fstab_new_content) |
3867 | + cc_mounts.handle(None, cc, self.mock_cloud, self.mock_log, []) |
3868 | + self.m_util_subp.assert_has_calls([ |
3869 | + mock.call(['mount', '-a']), |
3870 | + mock.call(['systemctl', 'daemon-reload'])]) |
3871 | + |
3872 | # vi: ts=4 expandtab |
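
The new test above asserts that ``mount -a`` runs even when the rendered fstab is unchanged, because a configured mount may exist in fstab yet not be currently active. A sketch of that decision (hypothetical helper names; the real module inspects ``util.mounts()`` as patched in the test setup above):

```python
def needs_mount_all(configured_dirs, active_mounts):
    """Return True when any fstab-configured mountpoint is not
    currently mounted, i.e. 'mount -a' should run even though the
    fstab content itself did not change."""
    return any(d not in active_mounts for d in configured_dirs)


# '/' is mounted but '/mnt' is not, so mount -a is still needed:
active = {"/": {"fstype": "ext4", "opts": "rw,relatime,discard"}}
assert needs_mount_all(["/", "/mnt"], active) is True
assert needs_mount_all(["/"], active) is False
```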
3873 | diff --git a/tests/unittests/test_handler/test_schema.py b/tests/unittests/test_handler/test_schema.py |
3874 | index 1bad07f..e69a47a 100644 |
3875 | --- a/tests/unittests/test_handler/test_schema.py |
3876 | +++ b/tests/unittests/test_handler/test_schema.py |
3877 | @@ -28,6 +28,7 @@ class GetSchemaTest(CiTestCase): |
3878 | 'cc_runcmd', |
3879 | 'cc_snap', |
3880 | 'cc_ubuntu_advantage', |
3881 | + 'cc_ubuntu_drivers', |
3882 | 'cc_zypper_add_repo' |
3883 | ], |
3884 | [subschema['id'] for subschema in schema['allOf']]) |
3885 | diff --git a/tests/unittests/test_net.py b/tests/unittests/test_net.py |
3886 | index e3b9e02..e85e964 100644 |
3887 | --- a/tests/unittests/test_net.py |
3888 | +++ b/tests/unittests/test_net.py |
3889 | @@ -9,6 +9,7 @@ from cloudinit.net import ( |
3890 | from cloudinit.sources.helpers import openstack |
3891 | from cloudinit import temp_utils |
3892 | from cloudinit import util |
3893 | +from cloudinit import safeyaml as yaml |
3894 | |
3895 | from cloudinit.tests.helpers import ( |
3896 | CiTestCase, FilesystemMockingTestCase, dir2dict, mock, populate_dir) |
3897 | @@ -21,7 +22,7 @@ import json |
3898 | import os |
3899 | import re |
3900 | import textwrap |
3901 | -import yaml |
3902 | +from yaml.serializer import Serializer |
3903 | |
3904 | |
3905 | DHCP_CONTENT_1 = """ |
3906 | @@ -691,6 +692,9 @@ DEVICE=eth0 |
3907 | GATEWAY=172.19.3.254 |
3908 | HWADDR=fa:16:3e:ed:9a:59 |
3909 | IPADDR=172.19.1.34 |
3910 | +IPADDR6=2001:DB8::10/64 |
3911 | +IPADDR6_0=2001:DB9::10/64 |
3912 | +IPADDR6_2=2001:DB10::10/64 |
3913 | IPV6ADDR=2001:DB8::10/64 |
3914 | IPV6ADDR_SECONDARIES="2001:DB9::10/64 2001:DB10::10/64" |
3915 | IPV6INIT=yes |
3916 | @@ -729,6 +733,9 @@ DEVICE=eth0 |
3917 | GATEWAY=172.19.3.254 |
3918 | HWADDR=fa:16:3e:ed:9a:59 |
3919 | IPADDR=172.19.1.34 |
3920 | +IPADDR6=2001:DB8::10/64 |
3921 | +IPADDR6_0=2001:DB9::10/64 |
3922 | +IPADDR6_2=2001:DB10::10/64 |
3923 | IPV6ADDR=2001:DB8::10/64 |
3924 | IPV6ADDR_SECONDARIES="2001:DB9::10/64 2001:DB10::10/64" |
3925 | IPV6INIT=yes |
3926 | @@ -860,6 +867,7 @@ NETWORK_CONFIGS = { |
3927 | BOOTPROTO=dhcp |
3928 | DEFROUTE=yes |
3929 | DEVICE=eth99 |
3930 | + DHCLIENT_SET_DEFAULT_ROUTE=yes |
3931 | DNS1=8.8.8.8 |
3932 | DNS2=8.8.4.4 |
3933 | DOMAIN="barley.maas sach.maas" |
3934 | @@ -979,6 +987,7 @@ NETWORK_CONFIGS = { |
3935 | BOOTPROTO=none |
3936 | DEVICE=iface0 |
3937 | IPADDR=192.168.14.2 |
3938 | + IPADDR6=2001:1::1/64 |
3939 | IPV6ADDR=2001:1::1/64 |
3940 | IPV6INIT=yes |
3941 | NETMASK=255.255.255.0 |
3942 | @@ -1113,8 +1122,8 @@ iface eth0.101 inet static |
3943 | iface eth0.101 inet static |
3944 | address 192.168.2.10/24 |
3945 | |
3946 | -post-up route add -net 10.0.0.0 netmask 255.0.0.0 gw 11.0.0.1 metric 3 || true |
3947 | -pre-down route del -net 10.0.0.0 netmask 255.0.0.0 gw 11.0.0.1 metric 3 || true |
3948 | +post-up route add -net 10.0.0.0/8 gw 11.0.0.1 metric 3 || true |
3949 | +pre-down route del -net 10.0.0.0/8 gw 11.0.0.1 metric 3 || true |
3950 | """), |
3951 | 'expected_netplan': textwrap.dedent(""" |
3952 | network: |
3953 | @@ -1234,6 +1243,7 @@ pre-down route del -net 10.0.0.0 netmask 255.0.0.0 gw 11.0.0.1 metric 3 || true |
3954 | 'ifcfg-bond0.200': textwrap.dedent("""\ |
3955 | BOOTPROTO=dhcp |
3956 | DEVICE=bond0.200 |
3957 | + DHCLIENT_SET_DEFAULT_ROUTE=no |
3958 | NM_CONTROLLED=no |
3959 | ONBOOT=yes |
3960 | PHYSDEV=bond0 |
3961 | @@ -1247,6 +1257,7 @@ pre-down route del -net 10.0.0.0 netmask 255.0.0.0 gw 11.0.0.1 metric 3 || true |
3962 | DEFROUTE=yes |
3963 | DEVICE=br0 |
3964 | IPADDR=192.168.14.2 |
3965 | + IPADDR6=2001:1::1/64 |
3966 | IPV6ADDR=2001:1::1/64 |
3967 | IPV6INIT=yes |
3968 | IPV6_DEFAULTGW=2001:4800:78ff:1b::1 |
3969 | @@ -1333,6 +1344,7 @@ pre-down route del -net 10.0.0.0 netmask 255.0.0.0 gw 11.0.0.1 metric 3 || true |
3970 | 'ifcfg-eth5': textwrap.dedent("""\ |
3971 | BOOTPROTO=dhcp |
3972 | DEVICE=eth5 |
3973 | + DHCLIENT_SET_DEFAULT_ROUTE=no |
3974 | HWADDR=98:bb:9f:2c:e8:8a |
3975 | NM_CONTROLLED=no |
3976 | ONBOOT=no |
3977 | @@ -1505,17 +1517,18 @@ pre-down route del -net 10.0.0.0 netmask 255.0.0.0 gw 11.0.0.1 metric 3 || true |
3978 | - gateway: 192.168.0.3 |
3979 | netmask: 255.255.255.0 |
3980 | network: 10.1.3.0 |
3981 | - - gateway: 2001:67c:1562:1 |
3982 | - network: 2001:67c:1 |
3983 | - netmask: ffff:ffff:0 |
3984 | - - gateway: 3001:67c:1562:1 |
3985 | - network: 3001:67c:1 |
3986 | - netmask: ffff:ffff:0 |
3987 | - metric: 10000 |
3988 | - type: static |
3989 | address: 192.168.1.2/24 |
3990 | - type: static |
3991 | address: 2001:1::1/92 |
3992 | + routes: |
3993 | + - gateway: 2001:67c:1562:1 |
3994 | + network: 2001:67c:1 |
3995 | + netmask: ffff:ffff:0 |
3996 | + - gateway: 3001:67c:1562:1 |
3997 | + network: 3001:67c:1 |
3998 | + netmask: ffff:ffff:0 |
3999 | + metric: 10000 |
4000 | """), |
4001 | 'expected_netplan': textwrap.dedent(""" |
4002 | network: |
4003 | @@ -1554,6 +1567,51 @@ pre-down route del -net 10.0.0.0 netmask 255.0.0.0 gw 11.0.0.1 metric 3 || true |
4004 | to: 3001:67c:1/32 |
4005 | via: 3001:67c:1562:1 |
4006 | """), |
4007 | + 'expected_eni': textwrap.dedent("""\ |
4008 | +auto lo |
4009 | +iface lo inet loopback |
4010 | + |
4011 | +auto bond0s0 |
4012 | +iface bond0s0 inet manual |
4013 | + bond-master bond0 |
4014 | + bond-mode active-backup |
4015 | + bond-xmit-hash-policy layer3+4 |
4016 | + bond_miimon 100 |
4017 | + |
4018 | +auto bond0s1 |
4019 | +iface bond0s1 inet manual |
4020 | + bond-master bond0 |
4021 | + bond-mode active-backup |
4022 | + bond-xmit-hash-policy layer3+4 |
4023 | + bond_miimon 100 |
4024 | + |
4025 | +auto bond0 |
4026 | +iface bond0 inet static |
4027 | + address 192.168.0.2/24 |
4028 | + gateway 192.168.0.1 |
4029 | + bond-mode active-backup |
4030 | + bond-slaves none |
4031 | + bond-xmit-hash-policy layer3+4 |
4032 | + bond_miimon 100 |
4033 | + hwaddress aa:bb:cc:dd:e8:ff |
4034 | + mtu 9000 |
4035 | + post-up route add -net 10.1.3.0/24 gw 192.168.0.3 || true |
4036 | + pre-down route del -net 10.1.3.0/24 gw 192.168.0.3 || true |
4037 | + |
4038 | +# control-alias bond0 |
4039 | +iface bond0 inet static |
4040 | + address 192.168.1.2/24 |
4041 | + |
4042 | +# control-alias bond0 |
4043 | +iface bond0 inet6 static |
4044 | + address 2001:1::1/92 |
4045 | + post-up route add -A inet6 2001:67c:1/32 gw 2001:67c:1562:1 || true |
4046 | + pre-down route del -A inet6 2001:67c:1/32 gw 2001:67c:1562:1 || true |
4047 | + post-up route add -A inet6 3001:67c:1/32 gw 3001:67c:1562:1 metric 10000 \ |
4048 | +|| true |
4049 | + pre-down route del -A inet6 3001:67c:1/32 gw 3001:67c:1562:1 metric 10000 \ |
4050 | +|| true |
4051 | + """), |
4052 | 'yaml-v2': textwrap.dedent(""" |
4053 | version: 2 |
4054 | ethernets: |
4055 | @@ -1641,6 +1699,7 @@ pre-down route del -net 10.0.0.0 netmask 255.0.0.0 gw 11.0.0.1 metric 3 || true |
4056 | MACADDR=aa:bb:cc:dd:e8:ff |
4057 | IPADDR=192.168.0.2 |
4058 | IPADDR1=192.168.1.2 |
4059 | + IPADDR6=2001:1::1/92 |
4060 | IPV6ADDR=2001:1::1/92 |
4061 | IPV6INIT=yes |
4062 | MTU=9000 |
4063 | @@ -1696,6 +1755,7 @@ pre-down route del -net 10.0.0.0 netmask 255.0.0.0 gw 11.0.0.1 metric 3 || true |
4064 | MACADDR=aa:bb:cc:dd:e8:ff |
4065 | IPADDR=192.168.0.2 |
4066 | IPADDR1=192.168.1.2 |
4067 | + IPADDR6=2001:1::1/92 |
4068 | IPV6ADDR=2001:1::1/92 |
4069 | IPV6INIT=yes |
4070 | MTU=9000 |
4071 | @@ -1786,6 +1846,7 @@ pre-down route del -net 10.0.0.0 netmask 255.0.0.0 gw 11.0.0.1 metric 3 || true |
4072 | GATEWAY=192.168.1.1 |
4073 | IPADDR=192.168.2.2 |
4074 | IPADDR1=192.168.1.2 |
4075 | + IPADDR6=2001:1::bbbb/96 |
4076 | IPV6ADDR=2001:1::bbbb/96 |
4077 | IPV6INIT=yes |
4078 | IPV6_DEFAULTGW=2001:1::1 |
4079 | @@ -1847,6 +1908,7 @@ pre-down route del -net 10.0.0.0 netmask 255.0.0.0 gw 11.0.0.1 metric 3 || true |
4080 | BRIDGE=br0 |
4081 | DEVICE=eth0 |
4082 | HWADDR=52:54:00:12:34:00 |
4083 | + IPADDR6=2001:1::100/96 |
4084 | IPV6ADDR=2001:1::100/96 |
4085 | IPV6INIT=yes |
4086 | NM_CONTROLLED=no |
4087 | @@ -1860,6 +1922,7 @@ pre-down route del -net 10.0.0.0 netmask 255.0.0.0 gw 11.0.0.1 metric 3 || true |
4088 | BRIDGE=br0 |
4089 | DEVICE=eth1 |
4090 | HWADDR=52:54:00:12:34:01 |
4091 | + IPADDR6=2001:1::101/96 |
4092 | IPV6ADDR=2001:1::101/96 |
4093 | IPV6INIT=yes |
4094 | NM_CONTROLLED=no |
4095 | @@ -1988,6 +2051,23 @@ CONFIG_V1_SIMPLE_SUBNET = { |
4096 | 'type': 'static'}], |
4097 | 'type': 'physical'}]} |
4098 | |
4099 | +CONFIG_V1_MULTI_IFACE = { |
4100 | + 'version': 1, |
4101 | + 'config': [{'type': 'physical', |
4102 | + 'mtu': 1500, |
4103 | + 'subnets': [{'type': 'static', |
4104 | + 'netmask': '255.255.240.0', |
4105 | + 'routes': [{'netmask': '0.0.0.0', |
4106 | + 'network': '0.0.0.0', |
4107 | + 'gateway': '51.68.80.1'}], |
4108 | + 'address': '51.68.89.122', |
4109 | + 'ipv4': True}], |
4110 | + 'mac_address': 'fa:16:3e:25:b4:59', |
4111 | + 'name': 'eth0'}, |
4112 | + {'type': 'physical', |
4113 | + 'mtu': 9000, |
4114 | + 'subnets': [{'type': 'dhcp4'}], |
4115 | + 'mac_address': 'fa:16:3e:b1:ca:29', 'name': 'eth1'}]} |
4116 | |
4117 | DEFAULT_DEV_ATTRS = { |
4118 | 'eth1000': { |
4119 | @@ -2460,6 +2540,49 @@ USERCTL=no |
4120 | respath = '/etc/resolv.conf' |
4121 | self.assertNotIn(respath, found.keys()) |
4122 | |
4123 | + def test_network_config_v1_multi_iface_samples(self): |
4124 | + ns = network_state.parse_net_config_data(CONFIG_V1_MULTI_IFACE) |
4125 | + render_dir = self.tmp_path("render") |
4126 | + os.makedirs(render_dir) |
4127 | + renderer = self._get_renderer() |
4128 | + renderer.render_network_state(ns, target=render_dir) |
4129 | + found = dir2dict(render_dir) |
4130 | + nspath = '/etc/sysconfig/network-scripts/' |
4131 | + self.assertNotIn(nspath + 'ifcfg-lo', found.keys()) |
4132 | + expected_i1 = """\ |
4133 | +# Created by cloud-init on instance boot automatically, do not edit. |
4134 | +# |
4135 | +BOOTPROTO=none |
4136 | +DEFROUTE=yes |
4137 | +DEVICE=eth0 |
4138 | +GATEWAY=51.68.80.1 |
4139 | +HWADDR=fa:16:3e:25:b4:59 |
4140 | +IPADDR=51.68.89.122 |
4141 | +MTU=1500 |
4142 | +NETMASK=255.255.240.0 |
4143 | +NM_CONTROLLED=no |
4144 | +ONBOOT=yes |
4145 | +STARTMODE=auto |
4146 | +TYPE=Ethernet |
4147 | +USERCTL=no |
4148 | +""" |
4149 | + self.assertEqual(expected_i1, found[nspath + 'ifcfg-eth0']) |
4150 | + expected_i2 = """\ |
4151 | +# Created by cloud-init on instance boot automatically, do not edit. |
4152 | +# |
4153 | +BOOTPROTO=dhcp |
4154 | +DEVICE=eth1 |
4155 | +DHCLIENT_SET_DEFAULT_ROUTE=no |
4156 | +HWADDR=fa:16:3e:b1:ca:29 |
4157 | +MTU=9000 |
4158 | +NM_CONTROLLED=no |
4159 | +ONBOOT=yes |
4160 | +STARTMODE=auto |
4161 | +TYPE=Ethernet |
4162 | +USERCTL=no |
4163 | +""" |
4164 | + self.assertEqual(expected_i2, found[nspath + 'ifcfg-eth1']) |
4165 | + |
4166 | def test_config_with_explicit_loopback(self): |
4167 | ns = network_state.parse_net_config_data(CONFIG_V1_EXPLICIT_LOOPBACK) |
4168 | render_dir = self.tmp_path("render") |
4169 | @@ -2634,6 +2757,7 @@ USERCTL=no |
4170 | GATEWAY=192.168.42.1 |
4171 | HWADDR=52:54:00:ab:cd:ef |
4172 | IPADDR=192.168.42.100 |
4173 | + IPADDR6=2001:db8::100/32 |
4174 | IPV6ADDR=2001:db8::100/32 |
4175 | IPV6INIT=yes |
4176 | IPV6_DEFAULTGW=2001:db8::1 |
4177 | @@ -3146,9 +3270,12 @@ class TestNetplanPostcommands(CiTestCase): |
4178 | mock_netplan_generate.assert_called_with(run=True) |
4179 | mock_net_setup_link.assert_called_with(run=True) |
4180 | |
4181 | + @mock.patch('cloudinit.util.SeLinuxGuard') |
4182 | @mock.patch.object(netplan, "get_devicelist") |
4183 | @mock.patch('cloudinit.util.subp') |
4184 | - def test_netplan_postcmds(self, mock_subp, mock_devlist): |
4185 | + def test_netplan_postcmds(self, mock_subp, mock_devlist, mock_sel): |
4186 | + mock_sel.__enter__ = mock.Mock(return_value=False) |
4187 | + mock_sel.__exit__ = mock.Mock() |
4188 | mock_devlist.side_effect = [['lo']] |
4189 | tmp_dir = self.tmp_dir() |
4190 | ns = network_state.parse_net_config_data(self.mycfg, |
4191 | @@ -3449,7 +3576,7 @@ class TestNetplanRoundTrip(CiTestCase): |
4192 | # now look for any alias, avoid rendering them entirely |
4193 | # generate the first anchor string using the template |
4194 | # as of this writing, looks like "&id001" |
4195 | - anchor = r'&' + yaml.serializer.Serializer.ANCHOR_TEMPLATE % 1 |
4196 | + anchor = r'&' + Serializer.ANCHOR_TEMPLATE % 1 |
4197 | found_alias = re.search(anchor, content, re.MULTILINE) |
4198 | if found_alias: |
4199 | msg = "Error at: %s\nContent:\n%s" % (found_alias, content) |
4200 | @@ -3570,17 +3697,17 @@ class TestEniRoundTrip(CiTestCase): |
4201 | 'iface eth0 inet static', |
4202 | ' address 172.23.31.42/26', |
4203 | ' gateway 172.23.31.2', |
4204 | - ('post-up route add -net 10.0.0.0 netmask 255.240.0.0 gw ' |
4205 | + ('post-up route add -net 10.0.0.0/12 gw ' |
4206 | '172.23.31.1 metric 0 || true'), |
4207 | - ('pre-down route del -net 10.0.0.0 netmask 255.240.0.0 gw ' |
4208 | + ('pre-down route del -net 10.0.0.0/12 gw ' |
4209 | '172.23.31.1 metric 0 || true'), |
4210 | - ('post-up route add -net 192.168.2.0 netmask 255.255.0.0 gw ' |
4211 | + ('post-up route add -net 192.168.2.0/16 gw ' |
4212 | '172.23.31.1 metric 0 || true'), |
4213 | - ('pre-down route del -net 192.168.2.0 netmask 255.255.0.0 gw ' |
4214 | + ('pre-down route del -net 192.168.2.0/16 gw ' |
4215 | '172.23.31.1 metric 0 || true'), |
4216 | - ('post-up route add -net 10.0.200.0 netmask 255.255.0.0 gw ' |
4217 | + ('post-up route add -net 10.0.200.0/16 gw ' |
4218 | '172.23.31.1 metric 1 || true'), |
4219 | - ('pre-down route del -net 10.0.200.0 netmask 255.255.0.0 gw ' |
4220 | + ('pre-down route del -net 10.0.200.0/16 gw ' |
4221 | '172.23.31.1 metric 1 || true'), |
4222 | ] |
4223 | found = files['/etc/network/interfaces'].splitlines() |
4224 | @@ -3588,6 +3715,77 @@ class TestEniRoundTrip(CiTestCase): |
4225 | self.assertEqual( |
4226 | expected, [line for line in found if line]) |
4227 | |
4228 | + def test_ipv6_static_routes(self): |
4229 | + # as reported in bug 1818669 |
4230 | + conf = [ |
4231 | + {'name': 'eno3', 'type': 'physical', |
4232 | + 'subnets': [{ |
4233 | + 'address': 'fd00::12/64', |
4234 | + 'dns_nameservers': ['fd00:2::15'], |
4235 | + 'gateway': 'fd00::1', |
4236 | + 'ipv6': True, |
4237 | + 'type': 'static', |
4238 | + 'routes': [{'netmask': '32', |
4239 | + 'network': 'fd00:12::', |
4240 | + 'gateway': 'fd00::2'}, |
4241 | + {'network': 'fd00:14::', |
4242 | + 'gateway': 'fd00::3'}, |
4243 | + {'destination': 'fe00:14::/48', |
4244 | + 'gateway': 'fe00::4', |
4245 | + 'metric': 500}, |
4246 | + {'gateway': '192.168.23.1', |
4247 | + 'metric': 999, |
4248 | + 'netmask': 24, |
4249 | + 'network': '192.168.23.0'}, |
4250 | + {'destination': '10.23.23.0/24', |
4251 | + 'gateway': '10.23.23.2', |
4252 | + 'metric': 300}]}]}, |
4253 | + ] |
4254 | + |
4255 | + files = self._render_and_read( |
4256 | + network_config={'config': conf, 'version': 1}) |
4257 | + expected = [ |
4258 | + 'auto lo', |
4259 | + 'iface lo inet loopback', |
4260 | + 'auto eno3', |
4261 | + 'iface eno3 inet6 static', |
4262 | + ' address fd00::12/64', |
4263 | + ' dns-nameservers fd00:2::15', |
4264 | + ' gateway fd00::1', |
4265 | + (' post-up route add -A inet6 fd00:12::/32 gw ' |
4266 | + 'fd00::2 || true'), |
4267 | + (' pre-down route del -A inet6 fd00:12::/32 gw ' |
4268 | + 'fd00::2 || true'), |
4269 | + (' post-up route add -A inet6 fd00:14::/64 gw ' |
4270 | + 'fd00::3 || true'), |
4271 | + (' pre-down route del -A inet6 fd00:14::/64 gw ' |
4272 | + 'fd00::3 || true'), |
4273 | + (' post-up route add -A inet6 fe00:14::/48 gw ' |
4274 | + 'fe00::4 metric 500 || true'), |
4275 | + (' pre-down route del -A inet6 fe00:14::/48 gw ' |
4276 | + 'fe00::4 metric 500 || true'), |
4277 | + (' post-up route add -net 192.168.23.0/24 gw ' |
4278 | + '192.168.23.1 metric 999 || true'), |
4279 | + (' pre-down route del -net 192.168.23.0/24 gw ' |
4280 | + '192.168.23.1 metric 999 || true'), |
4281 | + (' post-up route add -net 10.23.23.0/24 gw ' |
4282 | + '10.23.23.2 metric 300 || true'), |
4283 | + (' pre-down route del -net 10.23.23.0/24 gw ' |
4284 | + '10.23.23.2 metric 300 || true'), |
4285 | + |
4286 | + ] |
4287 | + found = files['/etc/network/interfaces'].splitlines() |
4288 | + |
4289 | + self.assertEqual( |
4290 | + expected, [line for line in found if line]) |
4291 | + |
4292 | + def testsimple_render_bond(self): |
4293 | + entry = NETWORK_CONFIGS['bond'] |
4294 | + files = self._render_and_read(network_config=yaml.load(entry['yaml'])) |
4295 | + self.assertEqual( |
4296 | + entry['expected_eni'].splitlines(), |
4297 | + files['/etc/network/interfaces'].splitlines()) |
4298 | + |
4299 | |
4300 | class TestNetRenderers(CiTestCase): |
4301 | @mock.patch("cloudinit.net.renderers.sysconfig.available") |
4302 | @@ -3632,6 +3830,41 @@ class TestNetRenderers(CiTestCase): |
4303 | self.assertRaises(net.RendererNotFoundError, renderers.select, |
4304 | priority=['sysconfig', 'eni']) |
4305 | |
4306 | + @mock.patch("cloudinit.net.renderers.netplan.available") |
4307 | + @mock.patch("cloudinit.net.renderers.sysconfig.available_sysconfig") |
4308 | + @mock.patch("cloudinit.net.renderers.sysconfig.available_nm") |
4309 | + @mock.patch("cloudinit.net.renderers.eni.available") |
4310 | + @mock.patch("cloudinit.net.renderers.sysconfig.util.get_linux_distro") |
4311 | + def test_sysconfig_selected_on_sysconfig_enabled_distros(self, m_distro, |
4312 | + m_eni, m_sys_nm, |
4313 | + m_sys_scfg, |
4314 | + m_netplan): |
4315 | + """sysconfig only selected on specific distros (rhel/sles).""" |
4316 | + |
4317 | + # Ubuntu with Network-Manager installed |
4318 | + m_eni.return_value = False # no ifupdown (ifquery) |
4319 | + m_sys_scfg.return_value = False # no sysconfig/ifup/ifdown |
4320 | + m_sys_nm.return_value = True # network-manager is installed |
4321 | + m_netplan.return_value = True # netplan is installed |
4322 | + m_distro.return_value = ('ubuntu', None, None) |
4323 | + self.assertEqual('netplan', renderers.select(priority=None)[0]) |
4324 | + |
4325 | + # Centos with Network-Manager installed |
4326 | + m_eni.return_value = False # no ifupdown (ifquery) |
4327 | + m_sys_scfg.return_value = False # no sysconfig/ifup/ifdown |
4328 | + m_sys_nm.return_value = True # network-manager is installed |
4329 | + m_netplan.return_value = False # netplan is not installed |
4330 | + m_distro.return_value = ('centos', None, None) |
4331 | + self.assertEqual('sysconfig', renderers.select(priority=None)[0]) |
4332 | + |
4333 | + # OpenSuse with Network-Manager installed |
4334 | + m_eni.return_value = False # no ifupdown (ifquery) |
4335 | + m_sys_scfg.return_value = False # no sysconfig/ifup/ifdown |
4336 | + m_sys_nm.return_value = True # network-manager is installed |
4337 | + m_netplan.return_value = False # netplan is not installed |
4338 | + m_distro.return_value = ('opensuse', None, None) |
4339 | + self.assertEqual('sysconfig', renderers.select(priority=None)[0]) |
4340 | + |
4341 | |
4342 | class TestGetInterfaces(CiTestCase): |
4343 | _data = {'bonds': ['bond1'], |
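The updated expected route lines in the `TestEniRoundTrip` changes above replace `netmask 255.240.0.0`-style arguments with CIDR prefixes such as `/12`. A quick standalone sanity check of those conversions, using only the stdlib `ipaddress` module (the helper name is illustrative, not part of cloud-init):

```python
import ipaddress

def netmask_to_prefix(netmask):
    """Return the CIDR prefix length for a dotted-quad netmask."""
    # ip_network accepts "address/netmask" and normalizes it to a prefix.
    return ipaddress.ip_network('0.0.0.0/%s' % netmask).prefixlen

# The conversions asserted in the expected ENI route lines:
assert netmask_to_prefix('255.240.0.0') == 12   # 10.0.0.0/12
assert netmask_to_prefix('255.255.0.0') == 16   # 192.168.2.0/16
assert netmask_to_prefix('255.255.255.0') == 24 # e.g. 10.23.23.0/24
```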
4344 | diff --git a/tests/unittests/test_reporting_hyperv.py b/tests/unittests/test_reporting_hyperv.py |
4345 | old mode 100644 |
4346 | new mode 100755 |
4347 | index 2e64c6c..d01ed5b |
4348 | --- a/tests/unittests/test_reporting_hyperv.py |
4349 | +++ b/tests/unittests/test_reporting_hyperv.py |
4350 | @@ -1,10 +1,12 @@ |
4351 | # This file is part of cloud-init. See LICENSE file for license information. |
4352 | |
4353 | from cloudinit.reporting import events |
4354 | -from cloudinit.reporting import handlers |
4355 | +from cloudinit.reporting.handlers import HyperVKvpReportingHandler |
4356 | |
4357 | import json |
4358 | import os |
4359 | +import struct |
4360 | +import time |
4361 | |
4362 | from cloudinit import util |
4363 | from cloudinit.tests.helpers import CiTestCase |
4364 | @@ -13,7 +15,7 @@ from cloudinit.tests.helpers import CiTestCase |
4365 | class TestKvpEncoding(CiTestCase): |
4366 | def test_encode_decode(self): |
4367 | kvp = {'key': 'key1', 'value': 'value1'} |
4368 | - kvp_reporting = handlers.HyperVKvpReportingHandler() |
4369 | + kvp_reporting = HyperVKvpReportingHandler() |
4370 | data = kvp_reporting._encode_kvp_item(kvp['key'], kvp['value']) |
4371 | self.assertEqual(len(data), kvp_reporting.HV_KVP_RECORD_SIZE) |
4372 | decoded_kvp = kvp_reporting._decode_kvp_item(data) |
4373 | @@ -26,57 +28,9 @@ class TextKvpReporter(CiTestCase): |
4374 | self.tmp_file_path = self.tmp_path('kvp_pool_file') |
4375 | util.ensure_file(self.tmp_file_path) |
4376 | |
4377 | - def test_event_type_can_be_filtered(self): |
4378 | - reporter = handlers.HyperVKvpReportingHandler( |
4379 | - kvp_file_path=self.tmp_file_path, |
4380 | - event_types=['foo', 'bar']) |
4381 | - |
4382 | - reporter.publish_event( |
4383 | - events.ReportingEvent('foo', 'name', 'description')) |
4384 | - reporter.publish_event( |
4385 | - events.ReportingEvent('some_other', 'name', 'description3')) |
4386 | - reporter.q.join() |
4387 | - |
4388 | - kvps = list(reporter._iterate_kvps(0)) |
4389 | - self.assertEqual(1, len(kvps)) |
4390 | - |
4391 | - reporter.publish_event( |
4392 | - events.ReportingEvent('bar', 'name', 'description2')) |
4393 | - reporter.q.join() |
4394 | - kvps = list(reporter._iterate_kvps(0)) |
4395 | - self.assertEqual(2, len(kvps)) |
4396 | - |
4397 | - self.assertIn('foo', kvps[0]['key']) |
4398 | - self.assertIn('bar', kvps[1]['key']) |
4399 | - self.assertNotIn('some_other', kvps[0]['key']) |
4400 | - self.assertNotIn('some_other', kvps[1]['key']) |
4401 | - |
4402 | - def test_events_are_over_written(self): |
4403 | - reporter = handlers.HyperVKvpReportingHandler( |
4404 | - kvp_file_path=self.tmp_file_path) |
4405 | - |
4406 | - self.assertEqual(0, len(list(reporter._iterate_kvps(0)))) |
4407 | - |
4408 | - reporter.publish_event( |
4409 | - events.ReportingEvent('foo', 'name1', 'description')) |
4410 | - reporter.publish_event( |
4411 | - events.ReportingEvent('foo', 'name2', 'description')) |
4412 | - reporter.q.join() |
4413 | - self.assertEqual(2, len(list(reporter._iterate_kvps(0)))) |
4414 | - |
4415 | - reporter2 = handlers.HyperVKvpReportingHandler( |
4416 | - kvp_file_path=self.tmp_file_path) |
4417 | - reporter2.incarnation_no = reporter.incarnation_no + 1 |
4418 | - reporter2.publish_event( |
4419 | - events.ReportingEvent('foo', 'name3', 'description')) |
4420 | - reporter2.q.join() |
4421 | - |
4422 | - self.assertEqual(2, len(list(reporter2._iterate_kvps(0)))) |
4423 | - |
4424 | def test_events_with_higher_incarnation_not_over_written(self): |
4425 | - reporter = handlers.HyperVKvpReportingHandler( |
4426 | + reporter = HyperVKvpReportingHandler( |
4427 | kvp_file_path=self.tmp_file_path) |
4428 | - |
4429 | self.assertEqual(0, len(list(reporter._iterate_kvps(0)))) |
4430 | |
4431 | reporter.publish_event( |
4432 | @@ -86,7 +40,7 @@ class TextKvpReporter(CiTestCase): |
4433 | reporter.q.join() |
4434 | self.assertEqual(2, len(list(reporter._iterate_kvps(0)))) |
4435 | |
4436 | - reporter3 = handlers.HyperVKvpReportingHandler( |
4437 | + reporter3 = HyperVKvpReportingHandler( |
4438 | kvp_file_path=self.tmp_file_path) |
4439 | reporter3.incarnation_no = reporter.incarnation_no - 1 |
4440 | reporter3.publish_event( |
4441 | @@ -95,7 +49,7 @@ class TextKvpReporter(CiTestCase): |
4442 | self.assertEqual(3, len(list(reporter3._iterate_kvps(0)))) |
4443 | |
4444 | def test_finish_event_result_is_logged(self): |
4445 | - reporter = handlers.HyperVKvpReportingHandler( |
4446 | + reporter = HyperVKvpReportingHandler( |
4447 | kvp_file_path=self.tmp_file_path) |
4448 | reporter.publish_event( |
4449 | events.FinishReportingEvent('name2', 'description1', |
4450 | @@ -105,7 +59,7 @@ class TextKvpReporter(CiTestCase): |
4451 | |
4452 | def test_file_operation_issue(self): |
4453 | os.remove(self.tmp_file_path) |
4454 | - reporter = handlers.HyperVKvpReportingHandler( |
4455 | + reporter = HyperVKvpReportingHandler( |
4456 | kvp_file_path=self.tmp_file_path) |
4457 | reporter.publish_event( |
4458 | events.FinishReportingEvent('name2', 'description1', |
4459 | @@ -113,7 +67,7 @@ class TextKvpReporter(CiTestCase): |
4460 | reporter.q.join() |
4461 | |
4462 | def test_event_very_long(self): |
4463 | - reporter = handlers.HyperVKvpReportingHandler( |
4464 | + reporter = HyperVKvpReportingHandler( |
4465 | kvp_file_path=self.tmp_file_path) |
4466 | description = 'ab' * reporter.HV_KVP_EXCHANGE_MAX_VALUE_SIZE |
4467 | long_event = events.FinishReportingEvent( |
4468 | @@ -132,3 +86,43 @@ class TextKvpReporter(CiTestCase): |
4469 | self.assertEqual(msg_slice['msg_i'], i) |
4470 | full_description += msg_slice['msg'] |
4471 | self.assertEqual(description, full_description) |
4472 | + |
4473 | + def test_not_truncate_kvp_file_modified_after_boot(self): |
4474 | + with open(self.tmp_file_path, "wb+") as f: |
4475 | + kvp = {'key': 'key1', 'value': 'value1'} |
4476 | + data = (struct.pack("%ds%ds" % ( |
4477 | + HyperVKvpReportingHandler.HV_KVP_EXCHANGE_MAX_KEY_SIZE, |
4478 | + HyperVKvpReportingHandler.HV_KVP_EXCHANGE_MAX_VALUE_SIZE), |
4479 | + kvp['key'].encode('utf-8'), kvp['value'].encode('utf-8'))) |
4480 | + f.write(data) |
4481 | + cur_time = time.time() |
4482 | + os.utime(self.tmp_file_path, (cur_time, cur_time)) |
4483 | + |
4484 | + # reset this because the unit test framework |
4485 | + # has already polluted the class variable |
4486 | + HyperVKvpReportingHandler._already_truncated_pool_file = False |
4487 | + |
4488 | + reporter = HyperVKvpReportingHandler(kvp_file_path=self.tmp_file_path) |
4489 | + kvps = list(reporter._iterate_kvps(0)) |
4490 | + self.assertEqual(1, len(kvps)) |
4491 | + |
4492 | + def test_truncate_stale_kvp_file(self): |
4493 | + with open(self.tmp_file_path, "wb+") as f: |
4494 | + kvp = {'key': 'key1', 'value': 'value1'} |
4495 | + data = (struct.pack("%ds%ds" % ( |
4496 | + HyperVKvpReportingHandler.HV_KVP_EXCHANGE_MAX_KEY_SIZE, |
4497 | + HyperVKvpReportingHandler.HV_KVP_EXCHANGE_MAX_VALUE_SIZE), |
4498 | + kvp['key'].encode('utf-8'), kvp['value'].encode('utf-8'))) |
4499 | + f.write(data) |
4500 | + |
4501 | + # set the time ways back to make it look like |
4502 | + # we had an old kvp file |
4503 | + os.utime(self.tmp_file_path, (1000000, 1000000)) |
4504 | + |
4505 | + # reset this because the unit test framework |
4506 | + # has already polluted the class variable |
4507 | + HyperVKvpReportingHandler._already_truncated_pool_file = False |
4508 | + |
4509 | + reporter = HyperVKvpReportingHandler(kvp_file_path=self.tmp_file_path) |
4510 | + kvps = list(reporter._iterate_kvps(0)) |
4511 | + self.assertEqual(0, len(kvps)) |
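The two new KVP tests above hand-pack pool-file records with `struct.pack`. As an illustration of that fixed-size record layout (the 512/2048 field sizes mirror the `HV_KVP_EXCHANGE_MAX_KEY_SIZE`/`HV_KVP_EXCHANGE_MAX_VALUE_SIZE` constants used in the tests; treat the standalone helpers below as a sketch, not the handler's actual API):

```python
import struct

KEY_SIZE = 512     # HV_KVP_EXCHANGE_MAX_KEY_SIZE
VALUE_SIZE = 2048  # HV_KVP_EXCHANGE_MAX_VALUE_SIZE

def encode_kvp(key, value):
    # Each record is a NUL-padded key field followed by a NUL-padded value field.
    return struct.pack('%ds%ds' % (KEY_SIZE, VALUE_SIZE),
                       key.encode('utf-8'), value.encode('utf-8'))

def decode_kvp(record):
    key, value = struct.unpack('%ds%ds' % (KEY_SIZE, VALUE_SIZE), record)
    return (key.rstrip(b'\x00').decode('utf-8'),
            value.rstrip(b'\x00').decode('utf-8'))

rec = encode_kvp('key1', 'value1')
assert len(rec) == KEY_SIZE + VALUE_SIZE
assert decode_kvp(rec) == ('key1', 'value1')
```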
4512 | diff --git a/tools/build-on-freebsd b/tools/build-on-freebsd |
4513 | index d23fde2..dc3b974 100755 |
4514 | --- a/tools/build-on-freebsd |
4515 | +++ b/tools/build-on-freebsd |
4516 | @@ -9,6 +9,7 @@ fail() { echo "FAILED:" "$@" 1>&2; exit 1; } |
4517 | depschecked=/tmp/c-i.dependencieschecked |
4518 | pkgs=" |
4519 | bash |
4520 | + chpasswd |
4521 | dmidecode |
4522 | e2fsprogs |
4523 | py27-Jinja2 |
4524 | @@ -17,6 +18,7 @@ pkgs=" |
4525 | py27-configobj |
4526 | py27-jsonpatch |
4527 | py27-jsonpointer |
4528 | + py27-jsonschema |
4529 | py27-oauthlib |
4530 | py27-requests |
4531 | py27-serial |
4532 | @@ -28,12 +30,9 @@ pkgs=" |
4533 | [ -f "$depschecked" ] || pkg install ${pkgs} || fail "install packages" |
4534 | touch $depschecked |
4535 | |
4536 | -# Required but unavailable port/pkg: py27-jsonpatch py27-jsonpointer |
4537 | -# Luckily, the install step will take care of this by installing it from pypi... |
4538 | - |
4539 | # Build the code and install in /usr/local/: |
4540 | -python setup.py build |
4541 | -python setup.py install -O1 --skip-build --prefix /usr/local/ --init-system sysvinit_freebsd |
4542 | +python2.7 setup.py build |
4543 | +python2.7 setup.py install -O1 --skip-build --prefix /usr/local/ --init-system sysvinit_freebsd |
4544 | |
4545 | # Enable cloud-init in /etc/rc.conf: |
4546 | sed -i.bak -e "/cloudinit_enable=.*/d" /etc/rc.conf |
4547 | diff --git a/tools/ds-identify b/tools/ds-identify |
4548 | index b78b273..6518901 100755 |
4549 | --- a/tools/ds-identify |
4550 | +++ b/tools/ds-identify |
4551 | @@ -620,7 +620,7 @@ dscheck_MAAS() { |
4552 | } |
4553 | |
4554 | dscheck_NoCloud() { |
4555 | - local fslabel="cidata" d="" |
4556 | + local fslabel="cidata CIDATA" d="" |
4557 | case " ${DI_KERNEL_CMDLINE} " in |
4558 | *\ ds=nocloud*) return ${DS_FOUND};; |
4559 | esac |
4560 | @@ -632,9 +632,10 @@ dscheck_NoCloud() { |
4561 | check_seed_dir "$d" meta-data user-data && return ${DS_FOUND} |
4562 | check_writable_seed_dir "$d" meta-data user-data && return ${DS_FOUND} |
4563 | done |
4564 | - if has_fs_with_label "${fslabel}"; then |
4565 | + if has_fs_with_label $fslabel; then |
4566 | return ${DS_FOUND} |
4567 | fi |
4568 | + |
4569 | return ${DS_NOT_FOUND} |
4570 | } |
4571 | |
4572 | @@ -762,7 +763,7 @@ is_cdrom_ovf() { |
4573 | |
4574 | # explicitly skip known labels of other types. rd_rdfe is azure. |
4575 | case "$label" in |
4576 | - config-2|CONFIG-2|rd_rdfe_stable*|cidata) return 1;; |
4577 | + config-2|CONFIG-2|rd_rdfe_stable*|cidata|CIDATA) return 1;; |
4578 | esac |
4579 | |
4580 | local idstr="http://schemas.dmtf.org/ovf/environment/1" |
4581 | diff --git a/tools/read-version b/tools/read-version |
4582 | index e69c2ce..6dca659 100755 |
4583 | --- a/tools/read-version |
4584 | +++ b/tools/read-version |
4585 | @@ -71,9 +71,12 @@ if is_gitdir(_tdir) and which("git"): |
4586 | flags = ['--tags'] |
4587 | cmd = ['git', 'describe', '--abbrev=8', '--match=[0-9]*'] + flags |
4588 | |
4589 | - version = tiny_p(cmd).strip() |
4590 | + try: |
4591 | + version = tiny_p(cmd).strip() |
4592 | + except RuntimeError: |
4593 | + version = None |
4594 | |
4595 | - if not version.startswith(src_version): |
4596 | + if version is None or not version.startswith(src_version): |
4597 | sys.stderr.write("git describe version (%s) differs from " |
4598 | "cloudinit.version (%s)\n" % (version, src_version)) |
4599 | sys.stderr.write( |
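The `tools/read-version` fix above guards a `git describe` call that can raise when no matching tag is reachable (e.g. a shallow clone), falling back to the packaged version instead of crashing. A minimal sketch of the same fallback pattern using `subprocess` directly (`run_describe` is a hypothetical stand-in for the tool's `tiny_p` helper):

```python
import subprocess

def run_describe(cmd=('git', 'describe', '--abbrev=8',
                      '--match=[0-9]*', '--tags')):
    """Return the command's stripped stdout, or None if it fails."""
    try:
        out = subprocess.check_output(cmd, stderr=subprocess.DEVNULL)
        return out.decode('utf-8').strip()
    except (subprocess.CalledProcessError, FileNotFoundError):
        # Mirrors read-version: a failed describe yields None, and the
        # caller then falls back to cloudinit.version.
        return None

version = run_describe()
if version is None:
    print('git describe unavailable; falling back to packaged version')
```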
4600 | diff --git a/tox.ini b/tox.ini |
4601 | index d371720..1f01eb7 100644 |
4602 | --- a/tox.ini |
4603 | +++ b/tox.ini |
4604 | @@ -21,7 +21,7 @@ setenv = |
4605 | basepython = python3 |
4606 | deps = |
4607 | # requirements |
4608 | - pylint==2.2.2 |
4609 | + pylint==2.3.1 |
4610 | # test-requirements because unit tests are now present in cloudinit tree |
4611 | -r{toxinidir}/test-requirements.txt |
4612 | commands = {envpython} -m pylint {posargs:cloudinit tests tools} |
4613 | @@ -96,19 +96,18 @@ deps = |
4614 | six==1.9.0 |
4615 | -r{toxinidir}/test-requirements.txt |
4616 | |
4617 | -[testenv:opensusel42] |
4618 | +[testenv:opensusel150] |
4619 | basepython = python2.7 |
4620 | commands = nosetests {posargs:tests/unittests cloudinit} |
4621 | deps = |
4622 | # requirements |
4623 | - argparse==1.3.0 |
4624 | - jinja2==2.8 |
4625 | - PyYAML==3.11 |
4626 | - oauthlib==0.7.2 |
4627 | + jinja2==2.10 |
4628 | + PyYAML==3.12 |
4629 | + oauthlib==2.0.6 |
4630 | configobj==5.0.6 |
4631 | - requests==2.11.1 |
4632 | - jsonpatch==1.11 |
4633 | - six==1.9.0 |
4634 | + requests==2.18.4 |
4635 | + jsonpatch==1.16 |
4636 | + six==1.11.0 |
4637 | -r{toxinidir}/test-requirements.txt |
4638 | |
4639 | [testenv:tip-pycodestyle] |
PASSED: Continuous integration, rev:01cf9304e4d697cffff7db9db48e374b31cb50bd
https://jenkins.ubuntu.com/server/job/cloud-init-ci/722/
Executed test runs:
SUCCESS: Checkout
SUCCESS: Unit & Style Tests
SUCCESS: Ubuntu LTS: Build
SUCCESS: Ubuntu LTS: Integration
IN_PROGRESS: Declarative: Post Actions
Click here to trigger a rebuild:
https://jenkins.ubuntu.com/server/job/cloud-init-ci/722/rebuild