Merge ~chad.smith/cloud-init:ubuntu/xenial into cloud-init:ubuntu/xenial
Status: Merged
Merged at revision: 421a036c999c3c1ad12872e6509315472089d53d
Proposed branch: ~chad.smith/cloud-init:ubuntu/xenial
Merge into: cloud-init:ubuntu/xenial
Diff against target: 2193 lines (+1247/-204), 38 files modified
- ChangeLog (+117/-0)
- cloudinit/config/cc_apt_configure.py (+1/-1)
- cloudinit/config/cc_mounts.py (+11/-0)
- cloudinit/net/sysconfig.py (+4/-2)
- cloudinit/net/tests/test_init.py (+1/-1)
- cloudinit/reporting/handlers.py (+57/-60)
- cloudinit/sources/DataSourceAzure.py (+11/-6)
- cloudinit/sources/DataSourceCloudStack.py (+1/-1)
- cloudinit/sources/DataSourceConfigDrive.py (+2/-5)
- cloudinit/sources/DataSourceEc2.py (+1/-1)
- cloudinit/sources/DataSourceNoCloud.py (+3/-1)
- cloudinit/sources/DataSourceScaleway.py (+1/-2)
- cloudinit/sources/__init__.py (+3/-3)
- cloudinit/sources/helpers/azure.py (+11/-3)
- cloudinit/sources/tests/test_init.py (+0/-15)
- cloudinit/util.py (+2/-13)
- cloudinit/version.py (+1/-1)
- debian/changelog (+48/-2)
- debian/patches/azure-apply-network-config-false.patch (+1/-1)
- debian/patches/azure-use-walinux-agent.patch (+1/-1)
- debian/patches/series (+1/-0)
- debian/patches/ubuntu-advantage-revert-tip.patch (+735/-0)
- doc/rtd/topics/datasources/nocloud.rst (+1/-1)
- packages/redhat/cloud-init.spec.in (+3/-1)
- packages/suse/cloud-init.spec.in (+3/-1)
- setup.py (+2/-1)
- tests/cloud_tests/releases.yaml (+16/-0)
- tests/unittests/test_datasource/test_azure.py (+10/-3)
- tests/unittests/test_datasource/test_azure_helper.py (+7/-2)
- tests/unittests/test_datasource/test_nocloud.py (+42/-0)
- tests/unittests/test_datasource/test_scaleway.py (+0/-7)
- tests/unittests/test_ds_identify.py (+17/-0)
- tests/unittests/test_handler/test_handler_mounts.py (+29/-1)
- tests/unittests/test_net.py (+42/-3)
- tests/unittests/test_reporting_hyperv.py (+49/-55)
- tools/build-on-freebsd (+4/-5)
- tools/ds-identify (+4/-3)
- tools/read-version (+5/-2)
Related bugs: (none listed)
Reviewers:
- Ryan Harper: Approve
- Server Team CI bot (continuous-integration): Approve
Commit message
New upstream snapshot for SRU into xenial.

QUESTION: we changed the ubuntu-advantage cloud-config module in an incompatible way, on the expectation that the new ubuntu-advantage-tools (>= 19.1) will also be published to xenial. Until that happens, those module changes are reverted in this branch with debian/patches/ubuntu-advantage-revert-tip.patch.
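For context, the two versions of the module expect different cloud-config schemas, and the new module drives the new ubuntu-advantage-tools command line, which the patch header below notes is incompatible with the older tools on xenial. The following sketch contrasts the two formats; it is adapted from the module documentation visible in the preview diff, uses placeholder tokens, and is illustrative rather than an exact copy of either docstring.

    # Old module (restored on xenial by ubuntu-advantage-revert-tip.patch):
    # runs raw ubuntu-advantage subcommands.
    ubuntu-advantage:
      commands:
        00: ubuntu-advantage enable-esm <token>

    # New module (upstream 19.1): attaches the machine with a UA contract
    # token and enables named services.
    ubuntu_advantage:
      token: <ua_contract_token>
      enable:
        - esm

Because released xenial images only carry the older ubuntu-advantage-tools, the revert patch keeps the old schema working until the newer tools are SRU'd to xenial (LP: #1828641).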
Description of the change

Server Team CI bot (server-team-bot) wrote:
PASSED: Continuous integration, rev:421a036c999
Executed test runs:
SUCCESS: Checkout
SUCCESS: Unit & Style Tests
SUCCESS: Ubuntu LTS: Build
SUCCESS: Ubuntu LTS: Integration
IN_PROGRESS: Declarative: Post Actions

Ryan Harper (raharper) wrote:
LGTM. Verified I get the same branch as what's proposed.
Preview Diff
1 | diff --git a/ChangeLog b/ChangeLog | |||
2 | index 8fa6fdd..bf48fd4 100644 | |||
3 | --- a/ChangeLog | |||
4 | +++ b/ChangeLog | |||
5 | @@ -1,3 +1,120 @@ | |||
6 | 1 | 19.1: | ||
7 | 2 | - freebsd: add chpasswd pkg in the image [Gonéri Le Bouder] | ||
8 | 3 | - tests: add Eoan release [Paride Legovini] | ||
9 | 4 | - cc_mounts: check if mount -a on no-change fstab path | ||
10 | 5 | [Jason Zions (MSFT)] (LP: #1825596) | ||
11 | 6 | - replace remaining occurrences of LOG.warn [Daniel Watkins] | ||
12 | 7 | - DataSourceAzure: Adjust timeout for polling IMDS [Anh Vo] | ||
13 | 8 | - Azure: Changes to the Hyper-V KVP Reporter [Anh Vo] | ||
14 | 9 | - git tests: no longer show warning about safe yaml. | ||
15 | 10 | - tools/read-version: handle errors [Chad Miller] | ||
16 | 11 | - net/sysconfig: only indicate available on known sysconfig distros | ||
17 | 12 | (LP: #1819994) | ||
18 | 13 | - packages: update rpm specs for new bash completion path | ||
19 | 14 | [Daniel Watkins] (LP: #1825444) | ||
20 | 15 | - test_azure: mock util.SeLinuxGuard where needed | ||
21 | 16 | [Jason Zions (MSFT)] (LP: #1825253) | ||
22 | 17 | - setup.py: install bash completion script in new location [Daniel Watkins] | ||
23 | 18 | - mount_cb: do not pass sync and rw options to mount | ||
24 | 19 | [Gonéri Le Bouder] (LP: #1645824) | ||
25 | 20 | - cc_apt_configure: fix typo in apt documentation [Dominic Schlegel] | ||
26 | 21 | - Revert "DataSource: move update_events from a class to an instance..." | ||
27 | 22 | [Daniel Watkins] | ||
28 | 23 | - Change DataSourceNoCloud to ignore file system label's case. | ||
29 | 24 | [Risto Oikarinen] | ||
30 | 25 | - cmd:main.py: Fix missing 'modules-init' key in modes dict | ||
31 | 26 | [Antonio Romito] (LP: #1815109) | ||
32 | 27 | - ubuntu_advantage: rewrite cloud-config module | ||
33 | 28 | - Azure: Treat _unset network configuration as if it were absent | ||
34 | 29 | [Jason Zions (MSFT)] (LP: #1823084) | ||
35 | 30 | - DatasourceAzure: add additional logging for azure datasource [Anh Vo] | ||
36 | 31 | - cloud_tests: fix apt_pipelining test-cases | ||
37 | 32 | - Azure: Ensure platform random_seed is always serializable as JSON. | ||
38 | 33 | [Jason Zions (MSFT)] | ||
39 | 34 | - net/sysconfig: write out SUSE-compatible IPv6 config [Robert Schweikert] | ||
40 | 35 | - tox: Update testenv for openSUSE Leap to 15.0 [Thomas Bechtold] | ||
41 | 36 | - net: Fix ipv6 static routes when using eni renderer | ||
42 | 37 | [Raphael Glon] (LP: #1818669) | ||
43 | 38 | - Add ubuntu_drivers config module [Daniel Watkins] | ||
44 | 39 | - doc: Refresh Azure walinuxagent docs [Daniel Watkins] | ||
45 | 40 | - tox: bump pylint version to latest (2.3.1) [Daniel Watkins] | ||
46 | 41 | - DataSource: move update_events from a class to an instance attribute | ||
47 | 42 | [Daniel Watkins] (LP: #1819913) | ||
48 | 43 | - net/sysconfig: Handle default route setup for dhcp configured NICs | ||
49 | 44 | [Robert Schweikert] (LP: #1812117) | ||
50 | 45 | - DataSourceEc2: update RELEASE_BLOCKER to be more accurate | ||
51 | 46 | [Daniel Watkins] | ||
52 | 47 | - cloud-init-per: POSIX sh does not support string subst, use sed | ||
53 | 48 | (LP: #1819222) | ||
54 | 49 | - Support locking user with usermod if passwd is not available. | ||
55 | 50 | - Example for Microsoft Azure data disk added. [Anton Olifir] | ||
56 | 51 | - clean: correctly determine the path for excluding seed directory | ||
57 | 52 | [Daniel Watkins] (LP: #1818571) | ||
58 | 53 | - helpers/openstack: Treat unknown link types as physical | ||
59 | 54 | [Daniel Watkins] (LP: #1639263) | ||
60 | 55 | - drop Python 2.6 support and our NIH version detection [Daniel Watkins] | ||
61 | 56 | - tip-pylint: Fix assignment-from-return-none errors | ||
62 | 57 | - net: append type:dhcp[46] only if dhcp[46] is True in v2 netconfig | ||
63 | 58 | [Kurt Stieger] (LP: #1818032) | ||
64 | 59 | - cc_apt_pipelining: stop disabling pipelining by default | ||
65 | 60 | [Daniel Watkins] (LP: #1794982) | ||
66 | 61 | - tests: fix some slow tests and some leaking state [Daniel Watkins] | ||
67 | 62 | - util: don't determine string_types ourselves [Daniel Watkins] | ||
68 | 63 | - cc_rsyslog: Escape possible nested set [Daniel Watkins] (LP: #1816967) | ||
69 | 64 | - Enable encrypted_data_bag_secret support for Chef | ||
70 | 65 | [Eric Williams] (LP: #1817082) | ||
71 | 66 | - azure: Filter list of ssh keys pulled from fabric [Jason Zions (MSFT)] | ||
72 | 67 | - doc: update merging doc with fixes and some additional details/examples | ||
73 | 68 | - tests: integration test failure summary to use traceback if empty error | ||
74 | 69 | - This is to fix https://bugs.launchpad.net/cloud-init/+bug/1812676 | ||
75 | 70 | [Vitaly Kuznetsov] | ||
76 | 71 | - EC2: Rewrite network config on AWS Classic instances every boot | ||
77 | 72 | [Guilherme G. Piccoli] (LP: #1802073) | ||
78 | 73 | - netinfo: Adjust ifconfig output parsing for FreeBSD ipv6 entries | ||
79 | 74 | (LP: #1779672) | ||
80 | 75 | - netplan: Don't render yaml aliases when dumping netplan (LP: #1815051) | ||
81 | 76 | - add PyCharm IDE .idea/ path to .gitignore [Dominic Schlegel] | ||
82 | 77 | - correct grammar issue in instance metadata documentation | ||
83 | 78 | [Dominic Schlegel] (LP: #1802188) | ||
84 | 79 | - clean: cloud-init clean should not trace when run from within cloud_dir | ||
85 | 80 | (LP: #1795508) | ||
86 | 81 | - Resolve flake8 comparison and pycodestyle over-ident issues | ||
87 | 82 | [Paride Legovini] | ||
88 | 83 | - opennebula: also exclude epochseconds from changed environment vars | ||
89 | 84 | (LP: #1813641) | ||
90 | 85 | - systemd: Render generator from template to account for system | ||
91 | 86 | differences. [Robert Schweikert] | ||
92 | 87 | - sysconfig: On SUSE, use STARTMODE instead of ONBOOT | ||
93 | 88 | [Robert Schweikert] (LP: #1799540) | ||
94 | 89 | - flake8: use ==/!= to compare str, bytes, and int literals | ||
95 | 90 | [Paride Legovini] | ||
96 | 91 | - opennebula: exclude EPOCHREALTIME as known bash env variable with a | ||
97 | 92 | delta (LP: #1813383) | ||
98 | 93 | - tox: fix disco httpretty dependencies for py37 (LP: #1813361) | ||
99 | 94 | - run-container: uncomment baseurl in yum.repos.d/*.repo when using a | ||
100 | 95 | proxy [Paride Legovini] | ||
101 | 96 | - lxd: install zfs-linux instead of zfs meta package | ||
102 | 97 | [Johnson Shi] (LP: #1799779) | ||
103 | 98 | - net/sysconfig: do not write a resolv.conf file with only the header. | ||
104 | 99 | [Robert Schweikert] | ||
105 | 100 | - net: Make sysconfig renderer compatible with Network Manager. | ||
106 | 101 | [Eduardo Otubo] | ||
107 | 102 | - cc_set_passwords: Fix regex when parsing hashed passwords | ||
108 | 103 | [Marlin Cremers] (LP: #1811446) | ||
109 | 104 | - net: Wait for dhclient to daemonize before reading lease file | ||
110 | 105 | [Jason Zions] (LP: #1794399) | ||
111 | 106 | - [Azure] Increase retries when talking to Wireserver during metadata walk | ||
112 | 107 | [Jason Zions] | ||
113 | 108 | - Add documentation on adding a datasource. | ||
114 | 109 | - doc: clean up some datasource documentation. | ||
115 | 110 | - ds-identify: fix wrong variable name in ovf_vmware_transport_guestinfo. | ||
116 | 111 | - Scaleway: Support ssh keys provided inside an instance tag. [PORTE Loïc] | ||
117 | 112 | - OVF: simplify expected return values of transport functions. | ||
118 | 113 | - Vmware: Add support for the com.vmware.guestInfo OVF transport. | ||
119 | 114 | (LP: #1807466) | ||
120 | 115 | - HACKING.rst: change contact info to Josh Powers | ||
121 | 116 | - Update to pylint 2.2.2. | ||
122 | 117 | |||
123 | 1 | 18.5: | 118 | 18.5: |
124 | 2 | - tests: add Disco release [Joshua Powers] | 119 | - tests: add Disco release [Joshua Powers] |
125 | 3 | - net: render 'metric' values in per-subnet routes (LP: #1805871) | 120 | - net: render 'metric' values in per-subnet routes (LP: #1805871) |
126 | diff --git a/cloudinit/config/cc_apt_configure.py b/cloudinit/config/cc_apt_configure.py | |||
127 | index e18944e..919d199 100644 | |||
128 | --- a/cloudinit/config/cc_apt_configure.py | |||
129 | +++ b/cloudinit/config/cc_apt_configure.py | |||
130 | @@ -127,7 +127,7 @@ to ``^[\\w-]+:\\w`` | |||
131 | 127 | 127 | ||
132 | 128 | Source list entries can be specified as a dictionary under the ``sources`` | 128 | Source list entries can be specified as a dictionary under the ``sources`` |
133 | 129 | config key, with key in the dict representing a different source file. The key | 129 | config key, with key in the dict representing a different source file. The key |
135 | 130 | The key of each source entry will be used as an id that can be referenced in | 130 | of each source entry will be used as an id that can be referenced in |
136 | 131 | other config entries, as well as the filename for the source's configuration | 131 | other config entries, as well as the filename for the source's configuration |
137 | 132 | under ``/etc/apt/sources.list.d``. If the name does not end with ``.list``, | 132 | under ``/etc/apt/sources.list.d``. If the name does not end with ``.list``, |
138 | 133 | it will be appended. If there is no configuration for a key in ``sources``, no | 133 | it will be appended. If there is no configuration for a key in ``sources``, no |
139 | diff --git a/cloudinit/config/cc_mounts.py b/cloudinit/config/cc_mounts.py | |||
140 | index 339baba..123ffb8 100644 | |||
141 | --- a/cloudinit/config/cc_mounts.py | |||
142 | +++ b/cloudinit/config/cc_mounts.py | |||
143 | @@ -439,6 +439,7 @@ def handle(_name, cfg, cloud, log, _args): | |||
144 | 439 | 439 | ||
145 | 440 | cc_lines = [] | 440 | cc_lines = [] |
146 | 441 | needswap = False | 441 | needswap = False |
147 | 442 | need_mount_all = False | ||
148 | 442 | dirs = [] | 443 | dirs = [] |
149 | 443 | for line in actlist: | 444 | for line in actlist: |
150 | 444 | # write 'comment' in the fs_mntops, entry, claiming this | 445 | # write 'comment' in the fs_mntops, entry, claiming this |
151 | @@ -449,11 +450,18 @@ def handle(_name, cfg, cloud, log, _args): | |||
152 | 449 | dirs.append(line[1]) | 450 | dirs.append(line[1]) |
153 | 450 | cc_lines.append('\t'.join(line)) | 451 | cc_lines.append('\t'.join(line)) |
154 | 451 | 452 | ||
155 | 453 | mount_points = [v['mountpoint'] for k, v in util.mounts().items() | ||
156 | 454 | if 'mountpoint' in v] | ||
157 | 452 | for d in dirs: | 455 | for d in dirs: |
158 | 453 | try: | 456 | try: |
159 | 454 | util.ensure_dir(d) | 457 | util.ensure_dir(d) |
160 | 455 | except Exception: | 458 | except Exception: |
161 | 456 | util.logexc(log, "Failed to make '%s' config-mount", d) | 459 | util.logexc(log, "Failed to make '%s' config-mount", d) |
162 | 460 | # dirs is list of directories on which a volume should be mounted. | ||
163 | 461 | # If any of them does not already show up in the list of current | ||
164 | 462 | # mount points, we will definitely need to do mount -a. | ||
165 | 463 | if not need_mount_all and d not in mount_points: | ||
166 | 464 | need_mount_all = True | ||
167 | 457 | 465 | ||
168 | 458 | sadds = [WS.sub(" ", n) for n in cc_lines] | 466 | sadds = [WS.sub(" ", n) for n in cc_lines] |
169 | 459 | sdrops = [WS.sub(" ", n) for n in fstab_removed] | 467 | sdrops = [WS.sub(" ", n) for n in fstab_removed] |
170 | @@ -473,6 +481,9 @@ def handle(_name, cfg, cloud, log, _args): | |||
171 | 473 | log.debug("No changes to /etc/fstab made.") | 481 | log.debug("No changes to /etc/fstab made.") |
172 | 474 | else: | 482 | else: |
173 | 475 | log.debug("Changes to fstab: %s", sops) | 483 | log.debug("Changes to fstab: %s", sops) |
174 | 484 | need_mount_all = True | ||
175 | 485 | |||
176 | 486 | if need_mount_all: | ||
177 | 476 | activate_cmds.append(["mount", "-a"]) | 487 | activate_cmds.append(["mount", "-a"]) |
178 | 477 | if uses_systemd: | 488 | if uses_systemd: |
179 | 478 | activate_cmds.append(["systemctl", "daemon-reload"]) | 489 | activate_cmds.append(["systemctl", "daemon-reload"]) |
180 | diff --git a/cloudinit/net/sysconfig.py b/cloudinit/net/sysconfig.py | |||
181 | index 0998392..a47da0a 100644 | |||
182 | --- a/cloudinit/net/sysconfig.py | |||
183 | +++ b/cloudinit/net/sysconfig.py | |||
184 | @@ -18,6 +18,8 @@ from .network_state import ( | |||
185 | 18 | 18 | ||
186 | 19 | LOG = logging.getLogger(__name__) | 19 | LOG = logging.getLogger(__name__) |
187 | 20 | NM_CFG_FILE = "/etc/NetworkManager/NetworkManager.conf" | 20 | NM_CFG_FILE = "/etc/NetworkManager/NetworkManager.conf" |
188 | 21 | KNOWN_DISTROS = [ | ||
189 | 22 | 'opensuse', 'sles', 'suse', 'redhat', 'fedora', 'centos'] | ||
190 | 21 | 23 | ||
191 | 22 | 24 | ||
192 | 23 | def _make_header(sep='#'): | 25 | def _make_header(sep='#'): |
193 | @@ -717,8 +719,8 @@ class Renderer(renderer.Renderer): | |||
194 | 717 | def available(target=None): | 719 | def available(target=None): |
195 | 718 | sysconfig = available_sysconfig(target=target) | 720 | sysconfig = available_sysconfig(target=target) |
196 | 719 | nm = available_nm(target=target) | 721 | nm = available_nm(target=target) |
199 | 720 | 722 | return (util.get_linux_distro()[0] in KNOWN_DISTROS | |
200 | 721 | return any([nm, sysconfig]) | 723 | and any([nm, sysconfig])) |
201 | 722 | 724 | ||
202 | 723 | 725 | ||
203 | 724 | def available_sysconfig(target=None): | 726 | def available_sysconfig(target=None): |
204 | diff --git a/cloudinit/net/tests/test_init.py b/cloudinit/net/tests/test_init.py | |||
205 | index f55c31e..6d2affe 100644 | |||
206 | --- a/cloudinit/net/tests/test_init.py | |||
207 | +++ b/cloudinit/net/tests/test_init.py | |||
208 | @@ -7,11 +7,11 @@ import mock | |||
209 | 7 | import os | 7 | import os |
210 | 8 | import requests | 8 | import requests |
211 | 9 | import textwrap | 9 | import textwrap |
212 | 10 | import yaml | ||
213 | 11 | 10 | ||
214 | 12 | import cloudinit.net as net | 11 | import cloudinit.net as net |
215 | 13 | from cloudinit.util import ensure_file, write_file, ProcessExecutionError | 12 | from cloudinit.util import ensure_file, write_file, ProcessExecutionError |
216 | 14 | from cloudinit.tests.helpers import CiTestCase, HttprettyTestCase | 13 | from cloudinit.tests.helpers import CiTestCase, HttprettyTestCase |
217 | 14 | from cloudinit import safeyaml as yaml | ||
218 | 15 | 15 | ||
219 | 16 | 16 | ||
220 | 17 | class TestSysDevPath(CiTestCase): | 17 | class TestSysDevPath(CiTestCase): |
221 | diff --git a/cloudinit/reporting/handlers.py b/cloudinit/reporting/handlers.py | |||
222 | 18 | old mode 100644 | 18 | old mode 100644 |
223 | 19 | new mode 100755 | 19 | new mode 100755 |
224 | index 6d23558..10165ae | |||
225 | --- a/cloudinit/reporting/handlers.py | |||
226 | +++ b/cloudinit/reporting/handlers.py | |||
227 | @@ -5,7 +5,6 @@ import fcntl | |||
228 | 5 | import json | 5 | import json |
229 | 6 | import six | 6 | import six |
230 | 7 | import os | 7 | import os |
231 | 8 | import re | ||
232 | 9 | import struct | 8 | import struct |
233 | 10 | import threading | 9 | import threading |
234 | 11 | import time | 10 | import time |
235 | @@ -14,6 +13,7 @@ from cloudinit import log as logging | |||
236 | 14 | from cloudinit.registry import DictRegistry | 13 | from cloudinit.registry import DictRegistry |
237 | 15 | from cloudinit import (url_helper, util) | 14 | from cloudinit import (url_helper, util) |
238 | 16 | from datetime import datetime | 15 | from datetime import datetime |
239 | 16 | from six.moves.queue import Empty as QueueEmptyError | ||
240 | 17 | 17 | ||
241 | 18 | if six.PY2: | 18 | if six.PY2: |
242 | 19 | from multiprocessing.queues import JoinableQueue as JQueue | 19 | from multiprocessing.queues import JoinableQueue as JQueue |
243 | @@ -129,24 +129,50 @@ class HyperVKvpReportingHandler(ReportingHandler): | |||
244 | 129 | DESC_IDX_KEY = 'msg_i' | 129 | DESC_IDX_KEY = 'msg_i' |
245 | 130 | JSON_SEPARATORS = (',', ':') | 130 | JSON_SEPARATORS = (',', ':') |
246 | 131 | KVP_POOL_FILE_GUEST = '/var/lib/hyperv/.kvp_pool_1' | 131 | KVP_POOL_FILE_GUEST = '/var/lib/hyperv/.kvp_pool_1' |
247 | 132 | _already_truncated_pool_file = False | ||
248 | 132 | 133 | ||
249 | 133 | def __init__(self, | 134 | def __init__(self, |
250 | 134 | kvp_file_path=KVP_POOL_FILE_GUEST, | 135 | kvp_file_path=KVP_POOL_FILE_GUEST, |
251 | 135 | event_types=None): | 136 | event_types=None): |
252 | 136 | super(HyperVKvpReportingHandler, self).__init__() | 137 | super(HyperVKvpReportingHandler, self).__init__() |
253 | 137 | self._kvp_file_path = kvp_file_path | 138 | self._kvp_file_path = kvp_file_path |
254 | 139 | HyperVKvpReportingHandler._truncate_guest_pool_file( | ||
255 | 140 | self._kvp_file_path) | ||
256 | 141 | |||
257 | 138 | self._event_types = event_types | 142 | self._event_types = event_types |
258 | 139 | self.q = JQueue() | 143 | self.q = JQueue() |
259 | 140 | self.kvp_file = None | ||
260 | 141 | self.incarnation_no = self._get_incarnation_no() | 144 | self.incarnation_no = self._get_incarnation_no() |
261 | 142 | self.event_key_prefix = u"{0}|{1}".format(self.EVENT_PREFIX, | 145 | self.event_key_prefix = u"{0}|{1}".format(self.EVENT_PREFIX, |
262 | 143 | self.incarnation_no) | 146 | self.incarnation_no) |
263 | 144 | self._current_offset = 0 | ||
264 | 145 | self.publish_thread = threading.Thread( | 147 | self.publish_thread = threading.Thread( |
265 | 146 | target=self._publish_event_routine) | 148 | target=self._publish_event_routine) |
266 | 147 | self.publish_thread.daemon = True | 149 | self.publish_thread.daemon = True |
267 | 148 | self.publish_thread.start() | 150 | self.publish_thread.start() |
268 | 149 | 151 | ||
269 | 152 | @classmethod | ||
270 | 153 | def _truncate_guest_pool_file(cls, kvp_file): | ||
271 | 154 | """ | ||
272 | 155 | Truncate the pool file if it has not been truncated since boot. | ||
273 | 156 | This should be done exactly once for the file indicated by | ||
274 | 157 | KVP_POOL_FILE_GUEST constant above. This method takes a filename | ||
275 | 158 | so that we can use an arbitrary file during unit testing. | ||
276 | 159 | Since KVP is a best-effort telemetry channel we only attempt to | ||
277 | 160 | truncate the file once and only if the file has not been modified | ||
278 | 161 | since boot. Additional truncation can lead to loss of existing | ||
279 | 162 | KVPs. | ||
280 | 163 | """ | ||
281 | 164 | if cls._already_truncated_pool_file: | ||
282 | 165 | return | ||
283 | 166 | boot_time = time.time() - float(util.uptime()) | ||
284 | 167 | try: | ||
285 | 168 | if os.path.getmtime(kvp_file) < boot_time: | ||
286 | 169 | with open(kvp_file, "w"): | ||
287 | 170 | pass | ||
288 | 171 | except (OSError, IOError) as e: | ||
289 | 172 | LOG.warning("failed to truncate kvp pool file, %s", e) | ||
290 | 173 | finally: | ||
291 | 174 | cls._already_truncated_pool_file = True | ||
292 | 175 | |||
293 | 150 | def _get_incarnation_no(self): | 176 | def _get_incarnation_no(self): |
294 | 151 | """ | 177 | """ |
295 | 152 | use the time passed as the incarnation number. | 178 | use the time passed as the incarnation number. |
296 | @@ -162,20 +188,15 @@ class HyperVKvpReportingHandler(ReportingHandler): | |||
297 | 162 | 188 | ||
298 | 163 | def _iterate_kvps(self, offset): | 189 | def _iterate_kvps(self, offset): |
299 | 164 | """iterate the kvp file from the current offset.""" | 190 | """iterate the kvp file from the current offset.""" |
305 | 165 | try: | 191 | with open(self._kvp_file_path, 'rb') as f: |
306 | 166 | with open(self._kvp_file_path, 'rb+') as f: | 192 | fcntl.flock(f, fcntl.LOCK_EX) |
307 | 167 | self.kvp_file = f | 193 | f.seek(offset) |
308 | 168 | fcntl.flock(f, fcntl.LOCK_EX) | 194 | record_data = f.read(self.HV_KVP_RECORD_SIZE) |
309 | 169 | f.seek(offset) | 195 | while len(record_data) == self.HV_KVP_RECORD_SIZE: |
310 | 196 | kvp_item = self._decode_kvp_item(record_data) | ||
311 | 197 | yield kvp_item | ||
312 | 170 | record_data = f.read(self.HV_KVP_RECORD_SIZE) | 198 | record_data = f.read(self.HV_KVP_RECORD_SIZE) |
321 | 171 | while len(record_data) == self.HV_KVP_RECORD_SIZE: | 199 | fcntl.flock(f, fcntl.LOCK_UN) |
314 | 172 | self._current_offset += self.HV_KVP_RECORD_SIZE | ||
315 | 173 | kvp_item = self._decode_kvp_item(record_data) | ||
316 | 174 | yield kvp_item | ||
317 | 175 | record_data = f.read(self.HV_KVP_RECORD_SIZE) | ||
318 | 176 | fcntl.flock(f, fcntl.LOCK_UN) | ||
319 | 177 | finally: | ||
320 | 178 | self.kvp_file = None | ||
322 | 179 | 200 | ||
323 | 180 | def _event_key(self, event): | 201 | def _event_key(self, event): |
324 | 181 | """ | 202 | """ |
325 | @@ -207,23 +228,13 @@ class HyperVKvpReportingHandler(ReportingHandler): | |||
326 | 207 | 228 | ||
327 | 208 | return {'key': k, 'value': v} | 229 | return {'key': k, 'value': v} |
328 | 209 | 230 | ||
329 | 210 | def _update_kvp_item(self, record_data): | ||
330 | 211 | if self.kvp_file is None: | ||
331 | 212 | raise ReportException( | ||
332 | 213 | "kvp file '{0}' not opened." | ||
333 | 214 | .format(self._kvp_file_path)) | ||
334 | 215 | self.kvp_file.seek(-self.HV_KVP_RECORD_SIZE, 1) | ||
335 | 216 | self.kvp_file.write(record_data) | ||
336 | 217 | |||
337 | 218 | def _append_kvp_item(self, record_data): | 231 | def _append_kvp_item(self, record_data): |
339 | 219 | with open(self._kvp_file_path, 'rb+') as f: | 232 | with open(self._kvp_file_path, 'ab') as f: |
340 | 220 | fcntl.flock(f, fcntl.LOCK_EX) | 233 | fcntl.flock(f, fcntl.LOCK_EX) |
344 | 221 | # seek to end of the file | 234 | for data in record_data: |
345 | 222 | f.seek(0, 2) | 235 | f.write(data) |
343 | 223 | f.write(record_data) | ||
346 | 224 | f.flush() | 236 | f.flush() |
347 | 225 | fcntl.flock(f, fcntl.LOCK_UN) | 237 | fcntl.flock(f, fcntl.LOCK_UN) |
348 | 226 | self._current_offset = f.tell() | ||
349 | 227 | 238 | ||
350 | 228 | def _break_down(self, key, meta_data, description): | 239 | def _break_down(self, key, meta_data, description): |
351 | 229 | del meta_data[self.MSG_KEY] | 240 | del meta_data[self.MSG_KEY] |
352 | @@ -279,40 +290,26 @@ class HyperVKvpReportingHandler(ReportingHandler): | |||
353 | 279 | 290 | ||
354 | 280 | def _publish_event_routine(self): | 291 | def _publish_event_routine(self): |
355 | 281 | while True: | 292 | while True: |
356 | 293 | items_from_queue = 0 | ||
357 | 282 | try: | 294 | try: |
358 | 283 | event = self.q.get(block=True) | 295 | event = self.q.get(block=True) |
360 | 284 | need_append = True | 296 | items_from_queue += 1 |
361 | 297 | encoded_data = [] | ||
362 | 298 | while event is not None: | ||
363 | 299 | encoded_data += self._encode_event(event) | ||
364 | 300 | try: | ||
365 | 301 | # get all the rest of the events in the queue | ||
366 | 302 | event = self.q.get(block=False) | ||
367 | 303 | items_from_queue += 1 | ||
368 | 304 | except QueueEmptyError: | ||
369 | 305 | event = None | ||
370 | 285 | try: | 306 | try: |
398 | 286 | if not os.path.exists(self._kvp_file_path): | 307 | self._append_kvp_item(encoded_data) |
399 | 287 | LOG.warning( | 308 | except (OSError, IOError) as e: |
400 | 288 | "skip writing events %s to %s. file not present.", | 309 | LOG.warning("failed posting events to kvp, %s", e) |
374 | 289 | event.as_string(), | ||
375 | 290 | self._kvp_file_path) | ||
376 | 291 | encoded_event = self._encode_event(event) | ||
377 | 292 | # for each encoded_event | ||
378 | 293 | for encoded_data in (encoded_event): | ||
379 | 294 | for kvp in self._iterate_kvps(self._current_offset): | ||
380 | 295 | match = ( | ||
381 | 296 | re.match( | ||
382 | 297 | r"^{0}\|(\d+)\|.+" | ||
383 | 298 | .format(self.EVENT_PREFIX), | ||
384 | 299 | kvp['key'] | ||
385 | 300 | )) | ||
386 | 301 | if match: | ||
387 | 302 | match_groups = match.groups(0) | ||
388 | 303 | if int(match_groups[0]) < self.incarnation_no: | ||
389 | 304 | need_append = False | ||
390 | 305 | self._update_kvp_item(encoded_data) | ||
391 | 306 | continue | ||
392 | 307 | if need_append: | ||
393 | 308 | self._append_kvp_item(encoded_data) | ||
394 | 309 | except IOError as e: | ||
395 | 310 | LOG.warning( | ||
396 | 311 | "failed posting event to kvp: %s e:%s", | ||
397 | 312 | event.as_string(), e) | ||
401 | 313 | finally: | 310 | finally: |
404 | 314 | self.q.task_done() | 311 | for _ in range(items_from_queue): |
405 | 315 | 312 | self.q.task_done() | |
406 | 316 | # when main process exits, q.get() will through EOFError | 313 | # when main process exits, q.get() will through EOFError |
407 | 317 | # indicating we should exit this thread. | 314 | # indicating we should exit this thread. |
408 | 318 | except EOFError: | 315 | except EOFError: |
409 | @@ -322,7 +319,7 @@ class HyperVKvpReportingHandler(ReportingHandler): | |||
410 | 322 | # if the kvp pool already contains a chunk of data, | 319 | # if the kvp pool already contains a chunk of data, |
411 | 323 | # so defer it to another thread. | 320 | # so defer it to another thread. |
412 | 324 | def publish_event(self, event): | 321 | def publish_event(self, event): |
414 | 325 | if (not self._event_types or event.event_type in self._event_types): | 322 | if not self._event_types or event.event_type in self._event_types: |
415 | 326 | self.q.put(event) | 323 | self.q.put(event) |
416 | 327 | 324 | ||
417 | 328 | def flush(self): | 325 | def flush(self): |
418 | diff --git a/cloudinit/sources/DataSourceAzure.py b/cloudinit/sources/DataSourceAzure.py | |||
419 | index 76b1661..b7440c1 100755 | |||
420 | --- a/cloudinit/sources/DataSourceAzure.py | |||
421 | +++ b/cloudinit/sources/DataSourceAzure.py | |||
422 | @@ -57,7 +57,12 @@ AZURE_CHASSIS_ASSET_TAG = '7783-7084-3265-9085-8269-3286-77' | |||
423 | 57 | REPROVISION_MARKER_FILE = "/var/lib/cloud/data/poll_imds" | 57 | REPROVISION_MARKER_FILE = "/var/lib/cloud/data/poll_imds" |
424 | 58 | REPORTED_READY_MARKER_FILE = "/var/lib/cloud/data/reported_ready" | 58 | REPORTED_READY_MARKER_FILE = "/var/lib/cloud/data/reported_ready" |
425 | 59 | AGENT_SEED_DIR = '/var/lib/waagent' | 59 | AGENT_SEED_DIR = '/var/lib/waagent' |
426 | 60 | |||
427 | 61 | # In the event where the IMDS primary server is not | ||
428 | 62 | # available, it takes 1s to fallback to the secondary one | ||
429 | 63 | IMDS_TIMEOUT_IN_SECONDS = 2 | ||
430 | 60 | IMDS_URL = "http://169.254.169.254/metadata/" | 64 | IMDS_URL = "http://169.254.169.254/metadata/" |
431 | 65 | |||
432 | 61 | PLATFORM_ENTROPY_SOURCE = "/sys/firmware/acpi/tables/OEM0" | 66 | PLATFORM_ENTROPY_SOURCE = "/sys/firmware/acpi/tables/OEM0" |
433 | 62 | 67 | ||
434 | 63 | # List of static scripts and network config artifacts created by | 68 | # List of static scripts and network config artifacts created by |
435 | @@ -407,7 +412,7 @@ class DataSourceAzure(sources.DataSource): | |||
436 | 407 | elif cdev.startswith("/dev/"): | 412 | elif cdev.startswith("/dev/"): |
437 | 408 | if util.is_FreeBSD(): | 413 | if util.is_FreeBSD(): |
438 | 409 | ret = util.mount_cb(cdev, load_azure_ds_dir, | 414 | ret = util.mount_cb(cdev, load_azure_ds_dir, |
440 | 410 | mtype="udf", sync=False) | 415 | mtype="udf") |
441 | 411 | else: | 416 | else: |
442 | 412 | ret = util.mount_cb(cdev, load_azure_ds_dir) | 417 | ret = util.mount_cb(cdev, load_azure_ds_dir) |
443 | 413 | else: | 418 | else: |
444 | @@ -582,9 +587,9 @@ class DataSourceAzure(sources.DataSource): | |||
445 | 582 | return | 587 | return |
446 | 583 | self._ephemeral_dhcp_ctx.clean_network() | 588 | self._ephemeral_dhcp_ctx.clean_network() |
447 | 584 | else: | 589 | else: |
451 | 585 | return readurl(url, timeout=1, headers=headers, | 590 | return readurl(url, timeout=IMDS_TIMEOUT_IN_SECONDS, |
452 | 586 | exception_cb=exc_cb, infinite=True, | 591 | headers=headers, exception_cb=exc_cb, |
453 | 587 | log_req_resp=False).contents | 592 | infinite=True, log_req_resp=False).contents |
454 | 588 | except UrlError: | 593 | except UrlError: |
455 | 589 | # Teardown our EphemeralDHCPv4 context on failure as we retry | 594 | # Teardown our EphemeralDHCPv4 context on failure as we retry |
456 | 590 | self._ephemeral_dhcp_ctx.clean_network() | 595 | self._ephemeral_dhcp_ctx.clean_network() |
457 | @@ -1291,8 +1296,8 @@ def _get_metadata_from_imds(retries): | |||
458 | 1291 | headers = {"Metadata": "true"} | 1296 | headers = {"Metadata": "true"} |
459 | 1292 | try: | 1297 | try: |
460 | 1293 | response = readurl( | 1298 | response = readurl( |
463 | 1294 | url, timeout=1, headers=headers, retries=retries, | 1299 | url, timeout=IMDS_TIMEOUT_IN_SECONDS, headers=headers, |
464 | 1295 | exception_cb=retry_on_url_exc) | 1300 | retries=retries, exception_cb=retry_on_url_exc) |
465 | 1296 | except Exception as e: | 1301 | except Exception as e: |
466 | 1297 | LOG.debug('Ignoring IMDS instance metadata: %s', e) | 1302 | LOG.debug('Ignoring IMDS instance metadata: %s', e) |
467 | 1298 | return {} | 1303 | return {} |
468 | diff --git a/cloudinit/sources/DataSourceCloudStack.py b/cloudinit/sources/DataSourceCloudStack.py | |||
469 | index d4b758f..f185dc7 100644 | |||
470 | --- a/cloudinit/sources/DataSourceCloudStack.py | |||
471 | +++ b/cloudinit/sources/DataSourceCloudStack.py | |||
472 | @@ -95,7 +95,7 @@ class DataSourceCloudStack(sources.DataSource): | |||
473 | 95 | start_time = time.time() | 95 | start_time = time.time() |
474 | 96 | url = uhelp.wait_for_url( | 96 | url = uhelp.wait_for_url( |
475 | 97 | urls=urls, max_wait=url_params.max_wait_seconds, | 97 | urls=urls, max_wait=url_params.max_wait_seconds, |
477 | 98 | timeout=url_params.timeout_seconds, status_cb=LOG.warn) | 98 | timeout=url_params.timeout_seconds, status_cb=LOG.warning) |
478 | 99 | 99 | ||
479 | 100 | if url: | 100 | if url: |
480 | 101 | LOG.debug("Using metadata source: '%s'", url) | 101 | LOG.debug("Using metadata source: '%s'", url) |
481 | diff --git a/cloudinit/sources/DataSourceConfigDrive.py b/cloudinit/sources/DataSourceConfigDrive.py | |||
482 | index 564e3eb..571d30d 100644 | |||
483 | --- a/cloudinit/sources/DataSourceConfigDrive.py | |||
484 | +++ b/cloudinit/sources/DataSourceConfigDrive.py | |||
485 | @@ -72,15 +72,12 @@ class DataSourceConfigDrive(openstack.SourceMixin, sources.DataSource): | |||
486 | 72 | dslist = self.sys_cfg.get('datasource_list') | 72 | dslist = self.sys_cfg.get('datasource_list') |
487 | 73 | for dev in find_candidate_devs(dslist=dslist): | 73 | for dev in find_candidate_devs(dslist=dslist): |
488 | 74 | try: | 74 | try: |
491 | 75 | # Set mtype if freebsd and turn off sync | 75 | if util.is_FreeBSD() and dev.startswith("/dev/cd"): |
490 | 76 | if dev.startswith("/dev/cd"): | ||
492 | 77 | mtype = "cd9660" | 76 | mtype = "cd9660" |
493 | 78 | sync = False | ||
494 | 79 | else: | 77 | else: |
495 | 80 | mtype = None | 78 | mtype = None |
496 | 81 | sync = True | ||
497 | 82 | results = util.mount_cb(dev, read_config_drive, | 79 | results = util.mount_cb(dev, read_config_drive, |
499 | 83 | mtype=mtype, sync=sync) | 80 | mtype=mtype) |
500 | 84 | found = dev | 81 | found = dev |
501 | 85 | except openstack.NonReadable: | 82 | except openstack.NonReadable: |
502 | 86 | pass | 83 | pass |
503 | diff --git a/cloudinit/sources/DataSourceEc2.py b/cloudinit/sources/DataSourceEc2.py | |||
504 | index ac28f1d..5c017bf 100644 | |||
505 | --- a/cloudinit/sources/DataSourceEc2.py | |||
506 | +++ b/cloudinit/sources/DataSourceEc2.py | |||
507 | @@ -208,7 +208,7 @@ class DataSourceEc2(sources.DataSource): | |||
508 | 208 | start_time = time.time() | 208 | start_time = time.time() |
509 | 209 | url = uhelp.wait_for_url( | 209 | url = uhelp.wait_for_url( |
510 | 210 | urls=urls, max_wait=url_params.max_wait_seconds, | 210 | urls=urls, max_wait=url_params.max_wait_seconds, |
512 | 211 | timeout=url_params.timeout_seconds, status_cb=LOG.warn) | 211 | timeout=url_params.timeout_seconds, status_cb=LOG.warning) |
513 | 212 | 212 | ||
514 | 213 | if url: | 213 | if url: |
515 | 214 | self.metadata_address = url2base[url] | 214 | self.metadata_address = url2base[url] |
516 | diff --git a/cloudinit/sources/DataSourceNoCloud.py b/cloudinit/sources/DataSourceNoCloud.py | |||
517 | index 6860f0c..fcf5d58 100644 | |||
518 | --- a/cloudinit/sources/DataSourceNoCloud.py | |||
519 | +++ b/cloudinit/sources/DataSourceNoCloud.py | |||
520 | @@ -106,7 +106,9 @@ class DataSourceNoCloud(sources.DataSource): | |||
521 | 106 | fslist = util.find_devs_with("TYPE=vfat") | 106 | fslist = util.find_devs_with("TYPE=vfat") |
522 | 107 | fslist.extend(util.find_devs_with("TYPE=iso9660")) | 107 | fslist.extend(util.find_devs_with("TYPE=iso9660")) |
523 | 108 | 108 | ||
525 | 109 | label_list = util.find_devs_with("LABEL=%s" % label) | 109 | label_list = util.find_devs_with("LABEL=%s" % label.upper()) |
526 | 110 | label_list.extend(util.find_devs_with("LABEL=%s" % label.lower())) | ||
527 | 111 | |||
528 | 110 | devlist = list(set(fslist) & set(label_list)) | 112 | devlist = list(set(fslist) & set(label_list)) |
529 | 111 | devlist.sort(reverse=True) | 113 | devlist.sort(reverse=True) |
530 | 112 | 114 | ||
531 | diff --git a/cloudinit/sources/DataSourceScaleway.py b/cloudinit/sources/DataSourceScaleway.py | |||
532 | index 54bfc1f..b573b38 100644 | |||
533 | --- a/cloudinit/sources/DataSourceScaleway.py | |||
534 | +++ b/cloudinit/sources/DataSourceScaleway.py | |||
535 | @@ -171,11 +171,10 @@ def query_data_api(api_type, api_address, retries, timeout): | |||
536 | 171 | 171 | ||
537 | 172 | class DataSourceScaleway(sources.DataSource): | 172 | class DataSourceScaleway(sources.DataSource): |
538 | 173 | dsname = "Scaleway" | 173 | dsname = "Scaleway" |
539 | 174 | update_events = {'network': [EventType.BOOT_NEW_INSTANCE, EventType.BOOT]} | ||
540 | 174 | 175 | ||
541 | 175 | def __init__(self, sys_cfg, distro, paths): | 176 | def __init__(self, sys_cfg, distro, paths): |
542 | 176 | super(DataSourceScaleway, self).__init__(sys_cfg, distro, paths) | 177 | super(DataSourceScaleway, self).__init__(sys_cfg, distro, paths) |
543 | 177 | self.update_events = { | ||
544 | 178 | 'network': {EventType.BOOT_NEW_INSTANCE, EventType.BOOT}} | ||
545 | 179 | 178 | ||
546 | 180 | self.ds_cfg = util.mergemanydict([ | 179 | self.ds_cfg = util.mergemanydict([ |
547 | 181 | util.get_cfg_by_path(sys_cfg, ["datasource", "Scaleway"], {}), | 180 | util.get_cfg_by_path(sys_cfg, ["datasource", "Scaleway"], {}), |
548 | diff --git a/cloudinit/sources/__init__.py b/cloudinit/sources/__init__.py | |||
549 | index 1604932..e6966b3 100644 | |||
550 | --- a/cloudinit/sources/__init__.py | |||
551 | +++ b/cloudinit/sources/__init__.py | |||
552 | @@ -164,6 +164,9 @@ class DataSource(object): | |||
553 | 164 | # A datasource which supports writing network config on each system boot | 164 | # A datasource which supports writing network config on each system boot |
554 | 165 | # would call update_events['network'].add(EventType.BOOT). | 165 | # would call update_events['network'].add(EventType.BOOT). |
555 | 166 | 166 | ||
556 | 167 | # Default: generate network config on new instance id (first boot). | ||
557 | 168 | update_events = {'network': set([EventType.BOOT_NEW_INSTANCE])} | ||
558 | 169 | |||
559 | 167 | # N-tuple listing default values for any metadata-related class | 170 | # N-tuple listing default values for any metadata-related class |
560 | 168 | # attributes cached on an instance by a process_data runs. These attribute | 171 | # attributes cached on an instance by a process_data runs. These attribute |
561 | 169 | # values are reset via clear_cached_attrs during any update_metadata call. | 172 | # values are reset via clear_cached_attrs during any update_metadata call. |
562 | @@ -188,9 +191,6 @@ class DataSource(object): | |||
563 | 188 | self.vendordata = None | 191 | self.vendordata = None |
564 | 189 | self.vendordata_raw = None | 192 | self.vendordata_raw = None |
565 | 190 | 193 | ||
566 | 191 | # Default: generate network config on new instance id (first boot). | ||
567 | 192 | self.update_events = {'network': {EventType.BOOT_NEW_INSTANCE}} | ||
568 | 193 | |||
569 | 194 | self.ds_cfg = util.get_cfg_by_path( | 194 | self.ds_cfg = util.get_cfg_by_path( |
570 | 195 | self.sys_cfg, ("datasource", self.dsname), {}) | 195 | self.sys_cfg, ("datasource", self.dsname), {}) |
571 | 196 | if not self.ds_cfg: | 196 | if not self.ds_cfg: |
572 | diff --git a/cloudinit/sources/helpers/azure.py b/cloudinit/sources/helpers/azure.py | |||
573 | index d3af05e..82c4c8c 100755 | |||
574 | --- a/cloudinit/sources/helpers/azure.py | |||
575 | +++ b/cloudinit/sources/helpers/azure.py | |||
576 | @@ -20,6 +20,9 @@ from cloudinit.reporting import events | |||
577 | 20 | 20 | ||
578 | 21 | LOG = logging.getLogger(__name__) | 21 | LOG = logging.getLogger(__name__) |
579 | 22 | 22 | ||
580 | 23 | # This endpoint matches the format as found in dhcp lease files, since this | ||
581 | 24 | # value is applied if the endpoint can't be found within a lease file | ||
582 | 25 | DEFAULT_WIRESERVER_ENDPOINT = "a8:3f:81:10" | ||
583 | 23 | 26 | ||
584 | 24 | azure_ds_reporter = events.ReportEventStack( | 27 | azure_ds_reporter = events.ReportEventStack( |
585 | 25 | name="azure-ds", | 28 | name="azure-ds", |
586 | @@ -297,7 +300,12 @@ class WALinuxAgentShim(object): | |||
587 | 297 | @azure_ds_telemetry_reporter | 300 | @azure_ds_telemetry_reporter |
588 | 298 | def _get_value_from_leases_file(fallback_lease_file): | 301 | def _get_value_from_leases_file(fallback_lease_file): |
589 | 299 | leases = [] | 302 | leases = [] |
591 | 300 | content = util.load_file(fallback_lease_file) | 303 | try: |
592 | 304 | content = util.load_file(fallback_lease_file) | ||
593 | 305 | except IOError as ex: | ||
594 | 306 | LOG.error("Failed to read %s: %s", fallback_lease_file, ex) | ||
595 | 307 | return None | ||
596 | 308 | |||
597 | 301 | LOG.debug("content is %s", content) | 309 | LOG.debug("content is %s", content) |
598 | 302 | option_name = _get_dhcp_endpoint_option_name() | 310 | option_name = _get_dhcp_endpoint_option_name() |
599 | 303 | for line in content.splitlines(): | 311 | for line in content.splitlines(): |
600 | @@ -372,9 +380,9 @@ class WALinuxAgentShim(object): | |||
601 | 372 | fallback_lease_file) | 380 | fallback_lease_file) |
602 | 373 | value = WALinuxAgentShim._get_value_from_leases_file( | 381 | value = WALinuxAgentShim._get_value_from_leases_file( |
603 | 374 | fallback_lease_file) | 382 | fallback_lease_file) |
604 | 375 | |||
605 | 376 | if value is None: | 383 | if value is None: |
607 | 377 | raise ValueError('No endpoint found.') | 384 | LOG.warning("No lease found; using default endpoint") |
608 | 385 | value = DEFAULT_WIRESERVER_ENDPOINT | ||
609 | 378 | 386 | ||
610 | 379 | endpoint_ip_address = WALinuxAgentShim.get_ip_from_lease_value(value) | 387 | endpoint_ip_address = WALinuxAgentShim.get_ip_from_lease_value(value) |
611 | 380 | LOG.debug('Azure endpoint found at %s', endpoint_ip_address) | 388 | LOG.debug('Azure endpoint found at %s', endpoint_ip_address) |
612 | diff --git a/cloudinit/sources/tests/test_init.py b/cloudinit/sources/tests/test_init.py | |||
613 | index cb1912b..6378e98 100644 | |||
614 | --- a/cloudinit/sources/tests/test_init.py | |||
615 | +++ b/cloudinit/sources/tests/test_init.py | |||
616 | @@ -575,21 +575,6 @@ class TestDataSource(CiTestCase): | |||
617 | 575 | " events: New instance first boot", | 575 | " events: New instance first boot", |
618 | 576 | self.logs.getvalue()) | 576 | self.logs.getvalue()) |
619 | 577 | 577 | ||
620 | 578 | def test_data_sources_cant_mutate_update_events_for_others(self): | ||
621 | 579 | """update_events shouldn't be changed for other DSes (LP: #1819913)""" | ||
622 | 580 | |||
623 | 581 | class ModifyingDS(DataSource): | ||
624 | 582 | |||
625 | 583 | def __init__(self, sys_cfg, distro, paths): | ||
626 | 584 | # This mirrors what DataSourceAzure does which causes LP: | ||
627 | 585 | # #1819913 | ||
628 | 586 | DataSource.__init__(self, sys_cfg, distro, paths) | ||
629 | 587 | self.update_events['network'].add(EventType.BOOT) | ||
630 | 588 | |||
631 | 589 | before_update_events = copy.deepcopy(self.datasource.update_events) | ||
632 | 590 | ModifyingDS(self.sys_cfg, self.distro, self.paths) | ||
633 | 591 | self.assertEqual(before_update_events, self.datasource.update_events) | ||
634 | 592 | |||
635 | 593 | 578 | ||
636 | 594 | class TestRedactSensitiveData(CiTestCase): | 579 | class TestRedactSensitiveData(CiTestCase): |
637 | 595 | 580 | ||
638 | diff --git a/cloudinit/util.py b/cloudinit/util.py | |||
639 | index 385f231..ea4199c 100644 | |||
640 | --- a/cloudinit/util.py | |||
641 | +++ b/cloudinit/util.py | |||
642 | @@ -1679,7 +1679,7 @@ def mounts(): | |||
643 | 1679 | return mounted | 1679 | return mounted |
644 | 1680 | 1680 | ||
645 | 1681 | 1681 | ||
647 | 1682 | def mount_cb(device, callback, data=None, rw=False, mtype=None, sync=True, | 1682 | def mount_cb(device, callback, data=None, mtype=None, |
648 | 1683 | update_env_for_mount=None): | 1683 | update_env_for_mount=None): |
649 | 1684 | """ | 1684 | """ |
650 | 1685 | Mount the device, call method 'callback' passing the directory | 1685 | Mount the device, call method 'callback' passing the directory |
651 | @@ -1726,18 +1726,7 @@ def mount_cb(device, callback, data=None, rw=False, mtype=None, sync=True, | |||
652 | 1726 | for mtype in mtypes: | 1726 | for mtype in mtypes: |
653 | 1727 | mountpoint = None | 1727 | mountpoint = None |
654 | 1728 | try: | 1728 | try: |
667 | 1729 | mountcmd = ['mount'] | 1729 | mountcmd = ['mount', '-o', 'ro'] |
656 | 1730 | mountopts = [] | ||
657 | 1731 | if rw: | ||
658 | 1732 | mountopts.append('rw') | ||
659 | 1733 | else: | ||
660 | 1734 | mountopts.append('ro') | ||
661 | 1735 | if sync: | ||
662 | 1736 | # This seems like the safe approach to do | ||
663 | 1737 | # (ie where this is on by default) | ||
664 | 1738 | mountopts.append("sync") | ||
665 | 1739 | if mountopts: | ||
666 | 1740 | mountcmd.extend(["-o", ",".join(mountopts)]) | ||
668 | 1741 | if mtype: | 1730 | if mtype: |
669 | 1742 | mountcmd.extend(['-t', mtype]) | 1731 | mountcmd.extend(['-t', mtype]) |
670 | 1743 | mountcmd.append(device) | 1732 | mountcmd.append(device) |
671 | diff --git a/cloudinit/version.py b/cloudinit/version.py | |||
672 | index a2c5d43..ddcd436 100644 | |||
673 | --- a/cloudinit/version.py | |||
674 | +++ b/cloudinit/version.py | |||
675 | @@ -4,7 +4,7 @@ | |||
676 | 4 | # | 4 | # |
677 | 5 | # This file is part of cloud-init. See LICENSE file for license information. | 5 | # This file is part of cloud-init. See LICENSE file for license information. |
678 | 6 | 6 | ||
680 | 7 | __VERSION__ = "18.5" | 7 | __VERSION__ = "19.1" |
681 | 8 | _PACKAGED_VERSION = '@@PACKAGED_VERSION@@' | 8 | _PACKAGED_VERSION = '@@PACKAGED_VERSION@@' |
682 | 9 | 9 | ||
683 | 10 | FEATURES = [ | 10 | FEATURES = [ |
684 | diff --git a/debian/changelog b/debian/changelog | |||
685 | index d21167b..270b0f3 100644 | |||
686 | --- a/debian/changelog | |||
687 | +++ b/debian/changelog | |||
688 | @@ -1,11 +1,57 @@ | |||
690 | 1 | cloud-init (18.5-45-g3554ffe8-0ubuntu1~16.04.2) UNRELEASED; urgency=medium | 1 | cloud-init (19.1-1-gbaa47854-0ubuntu1~16.04.1) xenial; urgency=medium |
691 | 2 | 2 | ||
692 | 3 | * debian/patches/ubuntu-advantage-revert-tip.patch | ||
693 | 4 | Revert ubuntu-advantage config module changes until ubuntu-advantage-tools | ||
694 | 5 | 19.1 publishes to Xenial (LP: #1828641) | ||
695 | 3 | * refresh patches: | 6 | * refresh patches: |
696 | 4 | + debian/patches/azure-apply-network-config-false.patch | 7 | + debian/patches/azure-apply-network-config-false.patch |
697 | 5 | + debian/patches/azure-use-walinux-agent.patch | 8 | + debian/patches/azure-use-walinux-agent.patch |
698 | 6 | + debian/patches/ec2-classic-dont-reapply-networking.patch | 9 | + debian/patches/ec2-classic-dont-reapply-networking.patch |
699 | 10 | * refresh patches: | ||
700 | 11 | + debian/patches/azure-apply-network-config-false.patch | ||
701 | 12 | + debian/patches/azure-use-walinux-agent.patch | ||
702 | 13 | * New upstream snapshot. (LP: #1828637) | ||
703 | 14 | - Azure: Return static fallback address as if failed to find endpoint | ||
704 | 15 | [Jason Zions (MSFT)] | ||
705 | 16 | - release 19.1 | ||
706 | 17 | - freebsd: add chpasswd pkg in the image [Gonéri Le Bouder] | ||
707 | 18 | - tests: add Eoan release [Paride Legovini] | ||
708 | 19 | - cc_mounts: check if mount -a on no-change fstab path [Jason Zions (MSFT)] | ||
709 | 20 | - replace remaining occurrences of LOG.warn | ||
710 | 21 | - DataSourceAzure: Adjust timeout for polling IMDS [Anh Vo] | ||
711 | 22 | - Azure: Changes to the Hyper-V KVP Reporter [Anh Vo] | ||
712 | 23 | - git tests: no longer show warning about safe yaml. [Scott Moser] | ||
713 | 24 | - tools/read-version: handle errors [Chad Miller] | ||
714 | 25 | - net/sysconfig: only indicate available on known sysconfig distros | ||
715 | 26 | - packages: update rpm specs for new bash completion path | ||
716 | 27 | - test_azure: mock util.SeLinuxGuard where needed [Jason Zions (MSFT)] | ||
717 | 28 | - setup.py: install bash completion script in new location | ||
718 | 29 | - mount_cb: do not pass sync and rw options to mount [Gonéri Le Bouder] | ||
719 | 30 | - cc_apt_configure: fix typo in apt documentation [Dominic Schlegel] | ||
720 | 31 | - Revert "DataSource: move update_events from a class to an instance..." | ||
721 | 32 | - Change DataSourceNoCloud to ignore file system label's case. | ||
722 | 33 | [Risto Oikarinen] | ||
723 | 34 | - cmd:main.py: Fix missing 'modules-init' key in modes dict | ||
724 | 35 | [Antonio Romito] | ||
725 | 36 | - ubuntu_advantage: rewrite cloud-config module | ||
726 | 37 | - Azure: Treat _unset network configuration as if it were absent | ||
727 | 38 | [Jason Zions (MSFT)] | ||
728 | 39 | - DatasourceAzure: add additional logging for azure datasource [Anh Vo] | ||
729 | 40 | - cloud_tests: fix apt_pipelining test-cases | ||
730 | 41 | - Azure: Ensure platform random_seed is always serializable as JSON. | ||
731 | 42 | [Jason Zions (MSFT)] | ||
732 | 43 | - net/sysconfig: write out SUSE-compatible IPv6 config [Robert Schweikert] | ||
733 | 44 | - tox: Update testenv for openSUSE Leap to 15.0 [Thomas Bechtold] | ||
734 | 45 | - net: Fix ipv6 static routes when using eni renderer [Raphael Glon] | ||
735 | 46 | - Add ubuntu_drivers config module | ||
736 | 47 | - doc: Refresh Azure walinuxagent docs | ||
737 | 48 | - tox: bump pylint version to latest (2.3.1) | ||
738 | 49 | - DataSource: move update_events from a class to an instance attribute | ||
739 | 50 | - net/sysconfig: Handle default route setup for dhcp configured NICs | ||
740 | 51 | [Robert Schweikert] | ||
741 | 52 | - DataSourceEc2: update RELEASE_BLOCKER to be more accurate | ||
742 | 7 | 53 | ||
744 | 8 | -- Ryan Harper <ryan.harper@canonical.com> Tue, 09 Apr 2019 11:20:17 -0500 | 54 | -- Chad Smith <chad.smith@canonical.com> Fri, 10 May 2019 16:26:48 -0600 |
745 | 9 | 55 | ||
746 | 10 | cloud-init (18.5-45-g3554ffe8-0ubuntu1~16.04.1) xenial; urgency=medium | 56 | cloud-init (18.5-45-g3554ffe8-0ubuntu1~16.04.1) xenial; urgency=medium |
747 | 11 | 57 | ||
748 | diff --git a/debian/patches/azure-apply-network-config-false.patch b/debian/patches/azure-apply-network-config-false.patch | |||
749 | index e16ad64..f0c2fcf 100644 | |||
750 | --- a/debian/patches/azure-apply-network-config-false.patch | |||
751 | +++ b/debian/patches/azure-apply-network-config-false.patch | |||
752 | @@ -10,7 +10,7 @@ Forwarded: not-needed | |||
753 | 10 | Last-Update: 2018-10-17 | 10 | Last-Update: 2018-10-17 |
754 | 11 | --- a/cloudinit/sources/DataSourceAzure.py | 11 | --- a/cloudinit/sources/DataSourceAzure.py |
755 | 12 | +++ b/cloudinit/sources/DataSourceAzure.py | 12 | +++ b/cloudinit/sources/DataSourceAzure.py |
757 | 13 | @@ -215,7 +215,7 @@ BUILTIN_DS_CONFIG = { | 13 | @@ -220,7 +220,7 @@ BUILTIN_DS_CONFIG = { |
758 | 14 | }, | 14 | }, |
759 | 15 | 'disk_aliases': {'ephemeral0': RESOURCE_DISK_PATH}, | 15 | 'disk_aliases': {'ephemeral0': RESOURCE_DISK_PATH}, |
760 | 16 | 'dhclient_lease_file': LEASE_FILE, | 16 | 'dhclient_lease_file': LEASE_FILE, |
761 | diff --git a/debian/patches/azure-use-walinux-agent.patch b/debian/patches/azure-use-walinux-agent.patch | |||
762 | index 3f60dfd..b4ad76c 100644 | |||
763 | --- a/debian/patches/azure-use-walinux-agent.patch | |||
764 | +++ b/debian/patches/azure-use-walinux-agent.patch | |||
765 | @@ -6,7 +6,7 @@ Forwarded: not-needed | |||
766 | 6 | Author: Scott Moser <smoser@ubuntu.com> | 6 | Author: Scott Moser <smoser@ubuntu.com> |
767 | 7 | --- a/cloudinit/sources/DataSourceAzure.py | 7 | --- a/cloudinit/sources/DataSourceAzure.py |
768 | 8 | +++ b/cloudinit/sources/DataSourceAzure.py | 8 | +++ b/cloudinit/sources/DataSourceAzure.py |
770 | 9 | @@ -204,7 +204,7 @@ if util.is_FreeBSD(): | 9 | @@ -209,7 +209,7 @@ if util.is_FreeBSD(): |
771 | 10 | PLATFORM_ENTROPY_SOURCE = None | 10 | PLATFORM_ENTROPY_SOURCE = None |
772 | 11 | 11 | ||
773 | 12 | BUILTIN_DS_CONFIG = { | 12 | BUILTIN_DS_CONFIG = { |
774 | diff --git a/debian/patches/series b/debian/patches/series | |||
775 | index d37ae8a..5d6995e 100644 | |||
776 | --- a/debian/patches/series | |||
777 | +++ b/debian/patches/series | |||
778 | @@ -4,3 +4,4 @@ stable-release-no-jsonschema-dep.patch | |||
779 | 4 | openstack-no-network-config.patch | 4 | openstack-no-network-config.patch |
780 | 5 | azure-apply-network-config-false.patch | 5 | azure-apply-network-config-false.patch |
781 | 6 | ec2-classic-dont-reapply-networking.patch | 6 | ec2-classic-dont-reapply-networking.patch |
782 | 7 | ubuntu-advantage-revert-tip.patch | ||
783 | diff --git a/debian/patches/ubuntu-advantage-revert-tip.patch b/debian/patches/ubuntu-advantage-revert-tip.patch | |||
784 | 7 | new file mode 100644 | 8 | new file mode 100644 |
785 | index 0000000..08bdc81 | |||
786 | --- /dev/null | |||
787 | +++ b/debian/patches/ubuntu-advantage-revert-tip.patch | |||
788 | @@ -0,0 +1,735 @@ | |||
789 | 1 | Description: Revert upstream changes for ubuntu-advantage-tools v 19.1 | ||
790 | 2 | ubuntu-advantage-tools v. 19.1 or later is required for the newcw | ||
791 | 3 | cloud-config module becaues the two command lines are incompatible. | ||
792 | 4 | Xenial can drop this patch once ubuntu-advantage-tools has been SRU'd >= 19.1 | ||
793 | 5 | Author: Chad Smith <chad.smith@canonical.com> | ||
794 | 6 | Origin: backport | ||
795 | 7 | Bug: https://bugs.launchpad.net/cloud-init/+bug/1828641 | ||
796 | 8 | Forwarded: not-needed | ||
797 | 9 | Last-Update: 2019-05-10 | ||
798 | 10 | --- | ||
799 | 11 | This patch header follows DEP-3: http://dep.debian.net/deps/dep3/ | ||
800 | 12 | Index: cloud-init/cloudinit/config/cc_ubuntu_advantage.py | ||
801 | 13 | =================================================================== | ||
802 | 14 | --- cloud-init.orig/cloudinit/config/cc_ubuntu_advantage.py | ||
803 | 15 | +++ cloud-init/cloudinit/config/cc_ubuntu_advantage.py | ||
804 | 16 | @@ -1,143 +1,150 @@ | ||
805 | 17 | +# Copyright (C) 2018 Canonical Ltd. | ||
806 | 18 | +# | ||
807 | 19 | # This file is part of cloud-init. See LICENSE file for license information. | ||
808 | 20 | |||
809 | 21 | -"""ubuntu_advantage: Configure Ubuntu Advantage support services""" | ||
810 | 22 | +"""Ubuntu advantage: manage ubuntu-advantage offerings from Canonical.""" | ||
811 | 23 | |||
812 | 24 | +import sys | ||
813 | 25 | from textwrap import dedent | ||
814 | 26 | |||
815 | 27 | -import six | ||
816 | 28 | - | ||
817 | 29 | +from cloudinit import log as logging | ||
818 | 30 | from cloudinit.config.schema import ( | ||
819 | 31 | get_schema_doc, validate_cloudconfig_schema) | ||
820 | 32 | -from cloudinit import log as logging | ||
821 | 33 | from cloudinit.settings import PER_INSTANCE | ||
822 | 34 | +from cloudinit.subp import prepend_base_command | ||
823 | 35 | from cloudinit import util | ||
824 | 36 | |||
825 | 37 | |||
826 | 38 | -UA_URL = 'https://ubuntu.com/advantage' | ||
827 | 39 | - | ||
828 | 40 | distros = ['ubuntu'] | ||
829 | 41 | +frequency = PER_INSTANCE | ||
830 | 42 | + | ||
831 | 43 | +LOG = logging.getLogger(__name__) | ||
832 | 44 | |||
833 | 45 | schema = { | ||
834 | 46 | 'id': 'cc_ubuntu_advantage', | ||
835 | 47 | 'name': 'Ubuntu Advantage', | ||
836 | 48 | - 'title': 'Configure Ubuntu Advantage support services', | ||
837 | 49 | + 'title': 'Install, configure and manage ubuntu-advantage offerings', | ||
838 | 50 | 'description': dedent("""\ | ||
839 | 51 | - Attach machine to an existing Ubuntu Advantage support contract and | ||
840 | 52 | - enable or disable support services such as Livepatch, ESM, | ||
841 | 53 | - FIPS and FIPS Updates. When attaching a machine to Ubuntu Advantage, | ||
842 | 54 | - one can also specify services to enable. When the 'enable' | ||
843 | 55 | - list is present, any named service will be enabled and all absent | ||
844 | 56 | - services will remain disabled. | ||
845 | 57 | - | ||
846 | 58 | - Note that when enabling FIPS or FIPS updates you will need to schedule | ||
847 | 59 | - a reboot to ensure the machine is running the FIPS-compliant kernel. | ||
848 | 60 | - See :ref:`Power State Change` for information on how to configure | ||
849 | 61 | - cloud-init to perform this reboot. | ||
850 | 62 | + This module provides configuration options to setup ubuntu-advantage | ||
851 | 63 | + subscriptions. | ||
852 | 64 | + | ||
853 | 65 | + .. note:: | ||
854 | 66 | + Both ``commands`` value can be either a dictionary or a list. If | ||
855 | 67 | + the configuration provided is a dictionary, the keys are only used | ||
856 | 68 | + to order the execution of the commands and the dictionary is | ||
857 | 69 | + merged with any vendor-data ubuntu-advantage configuration | ||
858 | 70 | + provided. If a ``commands`` is provided as a list, any vendor-data | ||
859 | 71 | + ubuntu-advantage ``commands`` are ignored. | ||
860 | 72 | + | ||
861 | 73 | + Ubuntu-advantage ``commands`` is a dictionary or list of | ||
862 | 74 | + ubuntu-advantage commands to run on the deployed machine. | ||
863 | 75 | + These commands can be used to enable or disable subscriptions to | ||
864 | 76 | + various ubuntu-advantage products. See 'man ubuntu-advantage' for more | ||
865 | 77 | + information on supported subcommands. | ||
866 | 78 | + | ||
867 | 79 | + .. note:: | ||
868 | 80 | + Each command item can be a string or list. If the item is a list, | ||
869 | 81 | + 'ubuntu-advantage' can be omitted and it will automatically be | ||
870 | 82 | + inserted as part of the command. | ||
871 | 83 | """), | ||
872 | 84 | 'distros': distros, | ||
873 | 85 | 'examples': [dedent("""\ | ||
874 | 86 | - # Attach the machine to a Ubuntu Advantage support contract with a | ||
875 | 87 | - # UA contract token obtained from %s. | ||
876 | 88 | - ubuntu_advantage: | ||
877 | 89 | - token: <ua_contract_token> | ||
878 | 90 | - """ % UA_URL), dedent("""\ | ||
879 | 91 | - # Attach the machine to an Ubuntu Advantage support contract enabling | ||
880 | 92 | - # only fips and esm services. Services will only be enabled if | ||
881 | 93 | - # the environment supports said service. Otherwise warnings will | ||
882 | 94 | - # be logged for incompatible services specified. | ||
883 | 95 | + # Enable Extended Security Maintenance using your service auth token | ||
884 | 96 | + ubuntu-advantage: | ||
885 | 97 | + commands: | ||
886 | 98 | + 00: ubuntu-advantage enable-esm <token> | ||
887 | 99 | + """), dedent("""\ | ||
888 | 100 | + # Enable livepatch by providing your livepatch token | ||
889 | 101 | ubuntu-advantage: | ||
890 | 102 | - token: <ua_contract_token> | ||
891 | 103 | - enable: | ||
892 | 104 | - - fips | ||
893 | 105 | - - esm | ||
894 | 106 | + commands: | ||
895 | 107 | + 00: ubuntu-advantage enable-livepatch <livepatch-token> | ||
896 | 108 | + | ||
897 | 109 | """), dedent("""\ | ||
898 | 110 | - # Attach the machine to an Ubuntu Advantage support contract and enable | ||
899 | 111 | - # the FIPS service. Perform a reboot once cloud-init has | ||
900 | 112 | - # completed. | ||
901 | 113 | - power_state: | ||
902 | 114 | - mode: reboot | ||
903 | 115 | + # Convenience: the ubuntu-advantage command can be omitted when | ||
904 | 116 | + # specifying commands as a list and 'ubuntu-advantage' will | ||
905 | 117 | + # automatically be prepended. | ||
906 | 118 | + # The following commands are equivalent | ||
907 | 119 | ubuntu-advantage: | ||
908 | 120 | - token: <ua_contract_token> | ||
909 | 121 | - enable: | ||
910 | 122 | - - fips | ||
911 | 123 | - """)], | ||
912 | 124 | + commands: | ||
913 | 125 | + 00: ['enable-livepatch', 'my-token'] | ||
914 | 126 | + 01: ['ubuntu-advantage', 'enable-livepatch', 'my-token'] | ||
915 | 127 | + 02: ubuntu-advantage enable-livepatch my-token | ||
916 | 128 | + 03: 'ubuntu-advantage enable-livepatch my-token' | ||
917 | 129 | + """)], | ||
918 | 130 | 'frequency': PER_INSTANCE, | ||
919 | 131 | 'type': 'object', | ||
920 | 132 | 'properties': { | ||
921 | 133 | - 'ubuntu_advantage': { | ||
922 | 134 | + 'ubuntu-advantage': { | ||
923 | 135 | 'type': 'object', | ||
924 | 136 | 'properties': { | ||
925 | 137 | - 'enable': { | ||
926 | 138 | - 'type': 'array', | ||
927 | 139 | - 'items': {'type': 'string'}, | ||
928 | 140 | - }, | ||
929 | 141 | - 'token': { | ||
930 | 142 | - 'type': 'string', | ||
931 | 143 | - 'description': ( | ||
932 | 144 | - 'A contract token obtained from %s.' % UA_URL) | ||
933 | 145 | + 'commands': { | ||
934 | 146 | + 'type': ['object', 'array'], # Array of strings or dict | ||
935 | 147 | + 'items': { | ||
936 | 148 | + 'oneOf': [ | ||
937 | 149 | + {'type': 'array', 'items': {'type': 'string'}}, | ||
938 | 150 | + {'type': 'string'}] | ||
939 | 151 | + }, | ||
940 | 152 | + 'additionalItems': False, # Reject non-string & non-list | ||
941 | 153 | + 'minItems': 1, | ||
942 | 154 | + 'minProperties': 1, | ||
943 | 155 | } | ||
944 | 156 | }, | ||
945 | 157 | - 'required': ['token'], | ||
946 | 158 | - 'additionalProperties': False | ||
947 | 159 | + 'additionalProperties': False, # Reject keys not in schema | ||
948 | 160 | + 'required': ['commands'] | ||
949 | 161 | } | ||
950 | 162 | } | ||
951 | 163 | } | ||
952 | 164 | |||
953 | 165 | +# TODO schema for 'assertions' and 'commands' are too permissive at the moment. | ||
954 | 166 | +# Once python-jsonschema supports schema draft 6 add support for arbitrary | ||
955 | 167 | +# object keys with 'patternProperties' constraint to validate string values. | ||
956 | 168 | + | ||
957 | 169 | __doc__ = get_schema_doc(schema) # Supplement python help() | ||
958 | 170 | |||
959 | 171 | -LOG = logging.getLogger(__name__) | ||
960 | 172 | +UA_CMD = "ubuntu-advantage" | ||
961 | 173 | |||
962 | 174 | |||
963 | 175 | -def configure_ua(token=None, enable=None): | ||
964 | 176 | - """Call ua commandline client to attach or enable services.""" | ||
965 | 177 | - error = None | ||
966 | 178 | - if not token: | ||
967 | 179 | - error = ('ubuntu_advantage: token must be provided') | ||
968 | 180 | - LOG.error(error) | ||
969 | 181 | - raise RuntimeError(error) | ||
970 | 182 | - | ||
971 | 183 | - if enable is None: | ||
972 | 184 | - enable = [] | ||
973 | 185 | - elif isinstance(enable, six.string_types): | ||
974 | 186 | - LOG.warning('ubuntu_advantage: enable should be a list, not' | ||
975 | 187 | - ' a string; treating as a single enable') | ||
976 | 188 | - enable = [enable] | ||
977 | 189 | - elif not isinstance(enable, list): | ||
978 | 190 | - LOG.warning('ubuntu_advantage: enable should be a list, not' | ||
979 | 191 | - ' a %s; skipping enabling services', | ||
980 | 192 | - type(enable).__name__) | ||
981 | 193 | - enable = [] | ||
982 | 194 | +def run_commands(commands): | ||
983 | 195 | + """Run the commands provided in ubuntu-advantage:commands config. | ||
984 | 196 | |||
985 | 197 | - attach_cmd = ['ua', 'attach', token] | ||
986 | 198 | - LOG.debug('Attaching to Ubuntu Advantage. %s', ' '.join(attach_cmd)) | ||
987 | 199 | - try: | ||
988 | 200 | - util.subp(attach_cmd) | ||
989 | 201 | - except util.ProcessExecutionError as e: | ||
990 | 202 | - msg = 'Failure attaching Ubuntu Advantage:\n{error}'.format( | ||
991 | 203 | - error=str(e)) | ||
992 | 204 | - util.logexc(LOG, msg) | ||
993 | 205 | - raise RuntimeError(msg) | ||
994 | 206 | - enable_errors = [] | ||
995 | 207 | - for service in enable: | ||
996 | 208 | + Commands are run individually. Any errors are collected and reported | ||
997 | 209 | + after attempting all commands. | ||
998 | 210 | + | ||
999 | 211 | + @param commands: A list or dict containing commands to run. Keys of a | ||
1000 | 212 | + dict will be used to order the commands provided as dict values. | ||
1001 | 213 | + """ | ||
1002 | 214 | + if not commands: | ||
1003 | 215 | + return | ||
1004 | 216 | + LOG.debug('Running user-provided ubuntu-advantage commands') | ||
1005 | 217 | + if isinstance(commands, dict): | ||
1006 | 218 | + # Sort commands based on dictionary key | ||
1007 | 219 | + commands = [v for _, v in sorted(commands.items())] | ||
1008 | 220 | + elif not isinstance(commands, list): | ||
1009 | 221 | + raise TypeError( | ||
1010 | 222 | + 'commands parameter was not a list or dict: {commands}'.format( | ||
1011 | 223 | + commands=commands)) | ||
1012 | 224 | + | ||
1013 | 225 | + fixed_ua_commands = prepend_base_command('ubuntu-advantage', commands) | ||
1014 | 226 | + | ||
1015 | 227 | + cmd_failures = [] | ||
1016 | 228 | + for command in fixed_ua_commands: | ||
1017 | 229 | + shell = isinstance(command, str) | ||
1018 | 230 | try: | ||
1019 | 231 | - cmd = ['ua', 'enable', service] | ||
1020 | 232 | - util.subp(cmd, capture=True) | ||
1021 | 233 | + util.subp(command, shell=shell, status_cb=sys.stderr.write) | ||
1022 | 234 | except util.ProcessExecutionError as e: | ||
1023 | 235 | - enable_errors.append((service, e)) | ||
1024 | 236 | - if enable_errors: | ||
1025 | 237 | - for service, error in enable_errors: | ||
1026 | 238 | - msg = 'Failure enabling "{service}":\n{error}'.format( | ||
1027 | 239 | - service=service, error=str(error)) | ||
1028 | 240 | - util.logexc(LOG, msg) | ||
1029 | 241 | - raise RuntimeError( | ||
1030 | 242 | - 'Failure enabling Ubuntu Advantage service(s): {}'.format( | ||
1031 | 243 | - ', '.join('"{}"'.format(service) | ||
1032 | 244 | - for service, _ in enable_errors))) | ||
1033 | 245 | + cmd_failures.append(str(e)) | ||
1034 | 246 | + if cmd_failures: | ||
1035 | 247 | + msg = ( | ||
1036 | 248 | + 'Failures running ubuntu-advantage commands:\n' | ||
1037 | 249 | + '{cmd_failures}'.format( | ||
1038 | 250 | + cmd_failures=cmd_failures)) | ||
1039 | 251 | + util.logexc(LOG, msg) | ||
1040 | 252 | + raise RuntimeError(msg) | ||
1041 | 253 | |||
1042 | 254 | |||
1043 | 255 | def maybe_install_ua_tools(cloud): | ||
1044 | 256 | """Install ubuntu-advantage-tools if not present.""" | ||
1045 | 257 | - if util.which('ua'): | ||
1046 | 258 | + if util.which('ubuntu-advantage'): | ||
1047 | 259 | return | ||
1048 | 260 | try: | ||
1049 | 261 | cloud.distro.update_package_sources() | ||
1050 | 262 | @@ -152,28 +159,14 @@ def maybe_install_ua_tools(cloud): | ||
1051 | 263 | |||
1052 | 264 | |||
1053 | 265 | def handle(name, cfg, cloud, log, args): | ||
1054 | 266 | - ua_section = None | ||
1055 | 267 | - if 'ubuntu-advantage' in cfg: | ||
1056 | 268 | - LOG.warning('Deprecated configuration key "ubuntu-advantage" provided.' | ||
1057 | 269 | - ' Expected underscore delimited "ubuntu_advantage"; will' | ||
1058 | 270 | - ' attempt to continue.') | ||
1059 | 271 | - ua_section = cfg['ubuntu-advantage'] | ||
1060 | 272 | - if 'ubuntu_advantage' in cfg: | ||
1061 | 273 | - ua_section = cfg['ubuntu_advantage'] | ||
1062 | 274 | - if ua_section is None: | ||
1063 | 275 | - LOG.debug("Skipping module named %s," | ||
1064 | 276 | - " no 'ubuntu_advantage' configuration found", name) | ||
1065 | 277 | + cfgin = cfg.get('ubuntu-advantage') | ||
1066 | 278 | + if cfgin is None: | ||
1067 | 279 | + LOG.debug(("Skipping module named %s," | ||
1068 | 280 | + " no 'ubuntu-advantage' key in configuration"), name) | ||
1069 | 281 | return | ||
1070 | 282 | - validate_cloudconfig_schema(cfg, schema) | ||
1071 | 283 | - if 'commands' in ua_section: | ||
1072 | 284 | - msg = ( | ||
1073 | 285 | - 'Deprecated configuration "ubuntu-advantage: commands" provided.' | ||
1074 | 286 | - ' Expected "token"') | ||
1075 | 287 | - LOG.error(msg) | ||
1076 | 288 | - raise RuntimeError(msg) | ||
1077 | 289 | |||
1078 | 290 | + validate_cloudconfig_schema(cfg, schema) | ||
1079 | 291 | maybe_install_ua_tools(cloud) | ||
1080 | 292 | - configure_ua(token=ua_section.get('token'), | ||
1081 | 293 | - enable=ua_section.get('enable')) | ||
1082 | 294 | + run_commands(cfgin.get('commands', [])) | ||
1083 | 295 | |||
1084 | 296 | # vi: ts=4 expandtab | ||
1085 | 297 | Index: cloud-init/cloudinit/config/tests/test_ubuntu_advantage.py | ||
1086 | 298 | =================================================================== | ||
1087 | 299 | --- cloud-init.orig/cloudinit/config/tests/test_ubuntu_advantage.py | ||
1088 | 300 | +++ cloud-init/cloudinit/config/tests/test_ubuntu_advantage.py | ||
1089 | 301 | @@ -1,7 +1,10 @@ | ||
1090 | 302 | # This file is part of cloud-init. See LICENSE file for license information. | ||
1091 | 303 | |||
1092 | 304 | +import re | ||
1093 | 305 | +from six import StringIO | ||
1094 | 306 | + | ||
1095 | 307 | from cloudinit.config.cc_ubuntu_advantage import ( | ||
1096 | 308 | - configure_ua, handle, maybe_install_ua_tools, schema) | ||
1097 | 309 | + handle, maybe_install_ua_tools, run_commands, schema) | ||
1098 | 310 | from cloudinit.config.schema import validate_cloudconfig_schema | ||
1099 | 311 | from cloudinit import util | ||
1100 | 312 | from cloudinit.tests.helpers import ( | ||
1101 | 313 | @@ -17,120 +20,90 @@ class FakeCloud(object): | ||
1102 | 314 | self.distro = distro | ||
1103 | 315 | |||
1104 | 316 | |||
1105 | 317 | -class TestConfigureUA(CiTestCase): | ||
1106 | 318 | +class TestRunCommands(CiTestCase): | ||
1107 | 319 | |||
1108 | 320 | with_logs = True | ||
1109 | 321 | allowed_subp = [CiTestCase.SUBP_SHELL_TRUE] | ||
1110 | 322 | |||
1111 | 323 | def setUp(self): | ||
1112 | 324 | - super(TestConfigureUA, self).setUp() | ||
1113 | 325 | + super(TestRunCommands, self).setUp() | ||
1114 | 326 | self.tmp = self.tmp_dir() | ||
1115 | 327 | |||
1116 | 328 | @mock.patch('%s.util.subp' % MPATH) | ||
1117 | 329 | - def test_configure_ua_attach_error(self, m_subp): | ||
1118 | 330 | - """Errors from ua attach command are raised.""" | ||
1119 | 331 | - m_subp.side_effect = util.ProcessExecutionError( | ||
1120 | 332 | - 'Invalid token SomeToken') | ||
1121 | 333 | - with self.assertRaises(RuntimeError) as context_manager: | ||
1122 | 334 | - configure_ua(token='SomeToken') | ||
1123 | 335 | + def test_run_commands_on_empty_list(self, m_subp): | ||
1124 | 336 | + """When provided with an empty list, run_commands does nothing.""" | ||
1125 | 337 | + run_commands([]) | ||
1126 | 338 | + self.assertEqual('', self.logs.getvalue()) | ||
1127 | 339 | + m_subp.assert_not_called() | ||
1128 | 340 | + | ||
1129 | 341 | + def test_run_commands_on_non_list_or_dict(self): | ||
1130 | 342 | + """When provided an invalid type, run_commands raises an error.""" | ||
1131 | 343 | + with self.assertRaises(TypeError) as context_manager: | ||
1132 | 344 | + run_commands(commands="I'm Not Valid") | ||
1133 | 345 | self.assertEqual( | ||
1134 | 346 | - 'Failure attaching Ubuntu Advantage:\nUnexpected error while' | ||
1135 | 347 | - ' running command.\nCommand: -\nExit code: -\nReason: -\n' | ||
1136 | 348 | - 'Stdout: Invalid token SomeToken\nStderr: -', | ||
1137 | 349 | + "commands parameter was not a list or dict: I'm Not Valid", | ||
1138 | 350 | str(context_manager.exception)) | ||
1139 | 351 | |||
1140 | 352 | - @mock.patch('%s.util.subp' % MPATH) | ||
1141 | 353 | - def test_configure_ua_attach_with_token(self, m_subp): | ||
1142 | 354 | - """When token is provided, attach the machine to ua using the token.""" | ||
1143 | 355 | - configure_ua(token='SomeToken') | ||
1144 | 356 | - m_subp.assert_called_once_with(['ua', 'attach', 'SomeToken']) | ||
1145 | 357 | - self.assertEqual( | ||
1146 | 358 | - 'DEBUG: Attaching to Ubuntu Advantage. ua attach SomeToken\n', | ||
1147 | 359 | - self.logs.getvalue()) | ||
1148 | 360 | - | ||
1149 | 361 | - @mock.patch('%s.util.subp' % MPATH) | ||
1150 | 362 | - def test_configure_ua_attach_on_service_error(self, m_subp): | ||
1151 | 363 | - """all services should be enabled and then any failures raised""" | ||
1152 | 364 | - | ||
1153 | 365 | - def fake_subp(cmd, capture=None): | ||
1154 | 366 | - fail_cmds = [['ua', 'enable', svc] for svc in ['esm', 'cc']] | ||
1155 | 367 | - if cmd in fail_cmds and capture: | ||
1156 | 368 | - svc = cmd[-1] | ||
1157 | 369 | - raise util.ProcessExecutionError( | ||
1158 | 370 | - 'Invalid {} credentials'.format(svc.upper())) | ||
1159 | 371 | + def test_run_command_logs_commands_and_exit_codes_to_stderr(self): | ||
1160 | 372 | + """All exit codes are logged to stderr.""" | ||
1161 | 373 | + outfile = self.tmp_path('output.log', dir=self.tmp) | ||
1162 | 374 | + | ||
1163 | 375 | + cmd1 = 'echo "HI" >> %s' % outfile | ||
1164 | 376 | + cmd2 = 'bogus command' | ||
1165 | 377 | + cmd3 = 'echo "MOM" >> %s' % outfile | ||
1166 | 378 | + commands = [cmd1, cmd2, cmd3] | ||
1167 | 379 | + | ||
1168 | 380 | + mock_path = '%s.sys.stderr' % MPATH | ||
1169 | 381 | + with mock.patch(mock_path, new_callable=StringIO) as m_stderr: | ||
1170 | 382 | + with self.assertRaises(RuntimeError) as context_manager: | ||
1171 | 383 | + run_commands(commands=commands) | ||
1172 | 384 | + | ||
1173 | 385 | + self.assertIsNotNone( | ||
1174 | 386 | + re.search(r'bogus: (command )?not found', | ||
1175 | 387 | + str(context_manager.exception)), | ||
1176 | 388 | + msg='Expected bogus command not found') | ||
1177 | 389 | + expected_stderr_log = '\n'.join([ | ||
1178 | 390 | + 'Begin run command: {cmd}'.format(cmd=cmd1), | ||
1179 | 391 | + 'End run command: exit(0)', | ||
1180 | 392 | + 'Begin run command: {cmd}'.format(cmd=cmd2), | ||
1181 | 393 | + 'ERROR: End run command: exit(127)', | ||
1182 | 394 | + 'Begin run command: {cmd}'.format(cmd=cmd3), | ||
1183 | 395 | + 'End run command: exit(0)\n']) | ||
1184 | 396 | + self.assertEqual(expected_stderr_log, m_stderr.getvalue()) | ||
1185 | 397 | + | ||
1186 | 398 | + def test_run_command_as_lists(self): | ||
1187 | 399 | + """When commands are specified as a list, run them in order.""" | ||
1188 | 400 | + outfile = self.tmp_path('output.log', dir=self.tmp) | ||
1189 | 401 | + | ||
1190 | 402 | + cmd1 = 'echo "HI" >> %s' % outfile | ||
1191 | 403 | + cmd2 = 'echo "MOM" >> %s' % outfile | ||
1192 | 404 | + commands = [cmd1, cmd2] | ||
1193 | 405 | + with mock.patch('%s.sys.stderr' % MPATH, new_callable=StringIO): | ||
1194 | 406 | + run_commands(commands=commands) | ||
1195 | 407 | |||
1196 | 408 | - m_subp.side_effect = fake_subp | ||
1197 | 409 | - | ||
1198 | 410 | - with self.assertRaises(RuntimeError) as context_manager: | ||
1199 | 411 | - configure_ua(token='SomeToken', enable=['esm', 'cc', 'fips']) | ||
1200 | 412 | - self.assertEqual( | ||
1201 | 413 | - m_subp.call_args_list, | ||
1202 | 414 | - [mock.call(['ua', 'attach', 'SomeToken']), | ||
1203 | 415 | - mock.call(['ua', 'enable', 'esm'], capture=True), | ||
1204 | 416 | - mock.call(['ua', 'enable', 'cc'], capture=True), | ||
1205 | 417 | - mock.call(['ua', 'enable', 'fips'], capture=True)]) | ||
1206 | 418 | self.assertIn( | ||
1207 | 419 | - 'WARNING: Failure enabling "esm":\nUnexpected error' | ||
1208 | 420 | - ' while running command.\nCommand: -\nExit code: -\nReason: -\n' | ||
1209 | 421 | - 'Stdout: Invalid ESM credentials\nStderr: -\n', | ||
1210 | 422 | + 'DEBUG: Running user-provided ubuntu-advantage commands', | ||
1211 | 423 | self.logs.getvalue()) | ||
1212 | 424 | + self.assertEqual('HI\nMOM\n', util.load_file(outfile)) | ||
1213 | 425 | self.assertIn( | ||
1214 | 426 | - 'WARNING: Failure enabling "cc":\nUnexpected error' | ||
1215 | 427 | - ' while running command.\nCommand: -\nExit code: -\nReason: -\n' | ||
1216 | 428 | - 'Stdout: Invalid CC credentials\nStderr: -\n', | ||
1217 | 429 | - self.logs.getvalue()) | ||
1218 | 430 | - self.assertEqual( | ||
1219 | 431 | - 'Failure enabling Ubuntu Advantage service(s): "esm", "cc"', | ||
1220 | 432 | - str(context_manager.exception)) | ||
1221 | 433 | - | ||
1222 | 434 | - @mock.patch('%s.util.subp' % MPATH) | ||
1223 | 435 | - def test_configure_ua_attach_with_empty_services(self, m_subp): | ||
1224 | 436 | - """When services is an empty list, do not auto-enable attach.""" | ||
1225 | 437 | - configure_ua(token='SomeToken', enable=[]) | ||
1226 | 438 | - m_subp.assert_called_once_with(['ua', 'attach', 'SomeToken']) | ||
1227 | 439 | - self.assertEqual( | ||
1228 | 440 | - 'DEBUG: Attaching to Ubuntu Advantage. ua attach SomeToken\n', | ||
1229 | 441 | - self.logs.getvalue()) | ||
1230 | 442 | - | ||
1231 | 443 | - @mock.patch('%s.util.subp' % MPATH) | ||
1232 | 444 | - def test_configure_ua_attach_with_specific_services(self, m_subp): | ||
1233 | 445 | - """When services a list, only enable specific services.""" | ||
1234 | 446 | - configure_ua(token='SomeToken', enable=['fips']) | ||
1235 | 447 | - self.assertEqual( | ||
1236 | 448 | - m_subp.call_args_list, | ||
1237 | 449 | - [mock.call(['ua', 'attach', 'SomeToken']), | ||
1238 | 450 | - mock.call(['ua', 'enable', 'fips'], capture=True)]) | ||
1239 | 451 | - self.assertEqual( | ||
1240 | 452 | - 'DEBUG: Attaching to Ubuntu Advantage. ua attach SomeToken\n', | ||
1241 | 453 | - self.logs.getvalue()) | ||
1242 | 454 | - | ||
1243 | 455 | - @mock.patch('%s.maybe_install_ua_tools' % MPATH, mock.MagicMock()) | ||
1244 | 456 | - @mock.patch('%s.util.subp' % MPATH) | ||
1245 | 457 | - def test_configure_ua_attach_with_string_services(self, m_subp): | ||
1246 | 458 | - """When services a string, treat as singleton list and warn""" | ||
1247 | 459 | - configure_ua(token='SomeToken', enable='fips') | ||
1248 | 460 | - self.assertEqual( | ||
1249 | 461 | - m_subp.call_args_list, | ||
1250 | 462 | - [mock.call(['ua', 'attach', 'SomeToken']), | ||
1251 | 463 | - mock.call(['ua', 'enable', 'fips'], capture=True)]) | ||
1252 | 464 | - self.assertEqual( | ||
1253 | 465 | - 'WARNING: ubuntu_advantage: enable should be a list, not a' | ||
1254 | 466 | - ' string; treating as a single enable\n' | ||
1255 | 467 | - 'DEBUG: Attaching to Ubuntu Advantage. ua attach SomeToken\n', | ||
1256 | 468 | + 'WARNING: Non-ubuntu-advantage commands in ubuntu-advantage' | ||
1257 | 469 | + ' config:', | ||
1258 | 470 | self.logs.getvalue()) | ||
1259 | 471 | |||
1260 | 472 | - @mock.patch('%s.util.subp' % MPATH) | ||
1261 | 473 | - def test_configure_ua_attach_with_weird_services(self, m_subp): | ||
1262 | 474 | - """When services not string or list, warn but still attach""" | ||
1263 | 475 | - configure_ua(token='SomeToken', enable={'deffo': 'wont work'}) | ||
1264 | 476 | - self.assertEqual( | ||
1265 | 477 | - m_subp.call_args_list, | ||
1266 | 478 | - [mock.call(['ua', 'attach', 'SomeToken'])]) | ||
1267 | 479 | - self.assertEqual( | ||
1268 | 480 | - 'WARNING: ubuntu_advantage: enable should be a list, not a' | ||
1269 | 481 | - ' dict; skipping enabling services\n' | ||
1270 | 482 | - 'DEBUG: Attaching to Ubuntu Advantage. ua attach SomeToken\n', | ||
1271 | 483 | - self.logs.getvalue()) | ||
1272 | 484 | + def test_run_command_dict_sorted_as_command_script(self): | ||
1273 | 485 | + """When commands are a dict, sort them and run.""" | ||
1274 | 486 | + outfile = self.tmp_path('output.log', dir=self.tmp) | ||
1275 | 487 | + cmd1 = 'echo "HI" >> %s' % outfile | ||
1276 | 488 | + cmd2 = 'echo "MOM" >> %s' % outfile | ||
1277 | 489 | + commands = {'02': cmd1, '01': cmd2} | ||
1278 | 490 | + with mock.patch('%s.sys.stderr' % MPATH, new_callable=StringIO): | ||
1279 | 491 | + run_commands(commands=commands) | ||
1280 | 492 | + | ||
1281 | 493 | + expected_messages = [ | ||
1282 | 494 | + 'DEBUG: Running user-provided ubuntu-advantage commands'] | ||
1283 | 495 | + for message in expected_messages: | ||
1284 | 496 | + self.assertIn(message, self.logs.getvalue()) | ||
1285 | 497 | + self.assertEqual('MOM\nHI\n', util.load_file(outfile)) | ||
1286 | 498 | |||
1287 | 499 | |||
1288 | 500 | @skipUnlessJsonSchema() | ||
1289 | 501 | @@ -139,50 +112,90 @@ class TestSchema(CiTestCase, SchemaTestC | ||
1290 | 502 | with_logs = True | ||
1291 | 503 | schema = schema | ||
1292 | 504 | |||
1293 | 505 | - @mock.patch('%s.maybe_install_ua_tools' % MPATH) | ||
1294 | 506 | - @mock.patch('%s.configure_ua' % MPATH) | ||
1295 | 507 | - def test_schema_warns_on_ubuntu_advantage_not_dict(self, _cfg, _): | ||
1296 | 508 | - """If ubuntu_advantage configuration is not a dict, emit a warning.""" | ||
1297 | 509 | - validate_cloudconfig_schema({'ubuntu_advantage': 'wrong type'}, schema) | ||
1298 | 510 | + def test_schema_warns_on_ubuntu_advantage_not_as_dict(self): | ||
1299 | 511 | + """If ubuntu-advantage configuration is not a dict, emit a warning.""" | ||
1300 | 512 | + validate_cloudconfig_schema({'ubuntu-advantage': 'wrong type'}, schema) | ||
1301 | 513 | self.assertEqual( | ||
1302 | 514 | - "WARNING: Invalid config:\nubuntu_advantage: 'wrong type' is not" | ||
1303 | 515 | + "WARNING: Invalid config:\nubuntu-advantage: 'wrong type' is not" | ||
1304 | 516 | " of type 'object'\n", | ||
1305 | 517 | self.logs.getvalue()) | ||
1306 | 518 | |||
1307 | 519 | - @mock.patch('%s.maybe_install_ua_tools' % MPATH) | ||
1308 | 520 | - @mock.patch('%s.configure_ua' % MPATH) | ||
1309 | 521 | - def test_schema_disallows_unknown_keys(self, _cfg, _): | ||
1310 | 522 | - """Unknown keys in ubuntu_advantage configuration emit warnings.""" | ||
1311 | 523 | + @mock.patch('%s.run_commands' % MPATH) | ||
1312 | 524 | + def test_schema_disallows_unknown_keys(self, _): | ||
1313 | 525 | + """Unknown keys in ubuntu-advantage configuration emit warnings.""" | ||
1314 | 526 | validate_cloudconfig_schema( | ||
1315 | 527 | - {'ubuntu_advantage': {'token': 'winner', 'invalid-key': ''}}, | ||
1316 | 528 | + {'ubuntu-advantage': {'commands': ['ls'], 'invalid-key': ''}}, | ||
1317 | 529 | schema) | ||
1318 | 530 | self.assertIn( | ||
1319 | 531 | - 'WARNING: Invalid config:\nubuntu_advantage: Additional properties' | ||
1320 | 532 | + 'WARNING: Invalid config:\nubuntu-advantage: Additional properties' | ||
1321 | 533 | " are not allowed ('invalid-key' was unexpected)", | ||
1322 | 534 | self.logs.getvalue()) | ||
1323 | 535 | |||
1324 | 536 | - @mock.patch('%s.maybe_install_ua_tools' % MPATH) | ||
1325 | 537 | - @mock.patch('%s.configure_ua' % MPATH) | ||
1326 | 538 | - def test_warn_schema_requires_token(self, _cfg, _): | ||
1327 | 539 | - """Warn if ubuntu_advantage configuration lacks token.""" | ||
1328 | 540 | + def test_warn_schema_requires_commands(self): | ||
1329 | 541 | + """Warn when ubuntu-advantage configuration lacks commands.""" | ||
1330 | 542 | validate_cloudconfig_schema( | ||
1331 | 543 | - {'ubuntu_advantage': {'enable': ['esm']}}, schema) | ||
1332 | 544 | + {'ubuntu-advantage': {}}, schema) | ||
1333 | 545 | self.assertEqual( | ||
1334 | 546 | - "WARNING: Invalid config:\nubuntu_advantage:" | ||
1335 | 547 | - " 'token' is a required property\n", self.logs.getvalue()) | ||
1336 | 548 | + "WARNING: Invalid config:\nubuntu-advantage: 'commands' is a" | ||
1337 | 549 | + " required property\n", | ||
1338 | 550 | + self.logs.getvalue()) | ||
1339 | 551 | |||
1340 | 552 | - @mock.patch('%s.maybe_install_ua_tools' % MPATH) | ||
1341 | 553 | - @mock.patch('%s.configure_ua' % MPATH) | ||
1342 | 554 | - def test_warn_schema_services_is_not_list_or_dict(self, _cfg, _): | ||
1343 | 555 | - """Warn when ubuntu_advantage:enable config is not a list.""" | ||
1344 | 556 | + @mock.patch('%s.run_commands' % MPATH) | ||
1345 | 557 | + def test_warn_schema_commands_is_not_list_or_dict(self, _): | ||
1346 | 558 | + """Warn when ubuntu-advantage:commands config is not a list or dict.""" | ||
1347 | 559 | validate_cloudconfig_schema( | ||
1348 | 560 | - {'ubuntu_advantage': {'enable': 'needslist'}}, schema) | ||
1349 | 561 | + {'ubuntu-advantage': {'commands': 'broken'}}, schema) | ||
1350 | 562 | self.assertEqual( | ||
1351 | 563 | - "WARNING: Invalid config:\nubuntu_advantage: 'token' is a" | ||
1352 | 564 | - " required property\nubuntu_advantage.enable: 'needslist'" | ||
1353 | 565 | - " is not of type 'array'\n", | ||
1354 | 566 | + "WARNING: Invalid config:\nubuntu-advantage.commands: 'broken' is" | ||
1355 | 567 | + " not of type 'object', 'array'\n", | ||
1356 | 568 | self.logs.getvalue()) | ||
1357 | 569 | |||
1358 | 570 | + @mock.patch('%s.run_commands' % MPATH) | ||
1359 | 571 | + def test_warn_schema_when_commands_is_empty(self, _): | ||
1360 | 572 | + """Emit warnings when ubuntu-advantage:commands is empty.""" | ||
1361 | 573 | + validate_cloudconfig_schema( | ||
1362 | 574 | + {'ubuntu-advantage': {'commands': []}}, schema) | ||
1363 | 575 | + validate_cloudconfig_schema( | ||
1364 | 576 | + {'ubuntu-advantage': {'commands': {}}}, schema) | ||
1365 | 577 | + self.assertEqual( | ||
1366 | 578 | + "WARNING: Invalid config:\nubuntu-advantage.commands: [] is too" | ||
1367 | 579 | + " short\nWARNING: Invalid config:\nubuntu-advantage.commands: {}" | ||
1368 | 580 | + " does not have enough properties\n", | ||
1369 | 581 | + self.logs.getvalue()) | ||
1370 | 582 | + | ||
1371 | 583 | + @mock.patch('%s.run_commands' % MPATH) | ||
1372 | 584 | + def test_schema_when_commands_are_list_or_dict(self, _): | ||
1373 | 585 | + """No warnings when ubuntu-advantage:commands are a list or dict.""" | ||
1374 | 586 | + validate_cloudconfig_schema( | ||
1375 | 587 | + {'ubuntu-advantage': {'commands': ['valid']}}, schema) | ||
1376 | 588 | + validate_cloudconfig_schema( | ||
1377 | 589 | + {'ubuntu-advantage': {'commands': {'01': 'also valid'}}}, schema) | ||
1378 | 590 | + self.assertEqual('', self.logs.getvalue()) | ||
1379 | 591 | + | ||
1380 | 592 | + def test_duplicates_are_fine_array_array(self): | ||
1381 | 593 | + """Duplicated commands array/array entries are allowed.""" | ||
1382 | 594 | + self.assertSchemaValid( | ||
1383 | 595 | + {'commands': [["echo", "bye"], ["echo" "bye"]]}, | ||
1384 | 596 | + "command entries can be duplicate.") | ||
1385 | 597 | + | ||
1386 | 598 | + def test_duplicates_are_fine_array_string(self): | ||
1387 | 599 | + """Duplicated commands array/string entries are allowed.""" | ||
1388 | 600 | + self.assertSchemaValid( | ||
1389 | 601 | + {'commands': ["echo bye", "echo bye"]}, | ||
1390 | 602 | + "command entries can be duplicate.") | ||
1391 | 603 | + | ||
1392 | 604 | + def test_duplicates_are_fine_dict_array(self): | ||
1393 | 605 | + """Duplicated commands dict/array entries are allowed.""" | ||
1394 | 606 | + self.assertSchemaValid( | ||
1395 | 607 | + {'commands': {'00': ["echo", "bye"], '01': ["echo", "bye"]}}, | ||
1396 | 608 | + "command entries can be duplicate.") | ||
1397 | 609 | + | ||
1398 | 610 | + def test_duplicates_are_fine_dict_string(self): | ||
1399 | 611 | + """Duplicated commands dict/string entries are allowed.""" | ||
1400 | 612 | + self.assertSchemaValid( | ||
1401 | 613 | + {'commands': {'00': "echo bye", '01': "echo bye"}}, | ||
1402 | 614 | + "command entries can be duplicate.") | ||
1403 | 615 | + | ||
1404 | 616 | |||
1405 | 617 | class TestHandle(CiTestCase): | ||
1406 | 618 | |||
1407 | 619 | @@ -192,89 +205,41 @@ class TestHandle(CiTestCase): | ||
1408 | 620 | super(TestHandle, self).setUp() | ||
1409 | 621 | self.tmp = self.tmp_dir() | ||
1410 | 622 | |||
1411 | 623 | + @mock.patch('%s.run_commands' % MPATH) | ||
1412 | 624 | @mock.patch('%s.validate_cloudconfig_schema' % MPATH) | ||
1413 | 625 | - def test_handle_no_config(self, m_schema): | ||
1414 | 626 | + def test_handle_no_config(self, m_schema, m_run): | ||
1415 | 627 | """When no ua-related configuration is provided, nothing happens.""" | ||
1416 | 628 | cfg = {} | ||
1417 | 629 | handle('ua-test', cfg=cfg, cloud=None, log=self.logger, args=None) | ||
1418 | 630 | self.assertIn( | ||
1419 | 631 | - "DEBUG: Skipping module named ua-test, no 'ubuntu_advantage'" | ||
1420 | 632 | - ' configuration found', | ||
1421 | 633 | + "DEBUG: Skipping module named ua-test, no 'ubuntu-advantage' key" | ||
1422 | 634 | + " in config", | ||
1423 | 635 | self.logs.getvalue()) | ||
1424 | 636 | m_schema.assert_not_called() | ||
1425 | 637 | + m_run.assert_not_called() | ||
1426 | 638 | |||
1427 | 639 | - @mock.patch('%s.configure_ua' % MPATH) | ||
1428 | 640 | @mock.patch('%s.maybe_install_ua_tools' % MPATH) | ||
1429 | 641 | - def test_handle_tries_to_install_ubuntu_advantage_tools( | ||
1430 | 642 | - self, m_install, m_cfg): | ||
1431 | 643 | + def test_handle_tries_to_install_ubuntu_advantage_tools(self, m_install): | ||
1432 | 644 | """If ubuntu_advantage is provided, try installing ua-tools package.""" | ||
1433 | 645 | - cfg = {'ubuntu_advantage': {'token': 'valid'}} | ||
1434 | 646 | + cfg = {'ubuntu-advantage': {}} | ||
1435 | 647 | mycloud = FakeCloud(None) | ||
1436 | 648 | handle('nomatter', cfg=cfg, cloud=mycloud, log=self.logger, args=None) | ||
1437 | 649 | m_install.assert_called_once_with(mycloud) | ||
1438 | 650 | |||
1439 | 651 | - @mock.patch('%s.configure_ua' % MPATH) | ||
1440 | 652 | @mock.patch('%s.maybe_install_ua_tools' % MPATH) | ||
1441 | 653 | - def test_handle_passes_credentials_and_services_to_configure_ua( | ||
1442 | 654 | - self, m_install, m_configure_ua): | ||
1443 | 655 | - """All ubuntu_advantage config keys are passed to configure_ua.""" | ||
1444 | 656 | - cfg = {'ubuntu_advantage': {'token': 'token', 'enable': ['esm']}} | ||
1445 | 657 | - handle('nomatter', cfg=cfg, cloud=None, log=self.logger, args=None) | ||
1446 | 658 | - m_configure_ua.assert_called_once_with( | ||
1447 | 659 | - token='token', enable=['esm']) | ||
1448 | 660 | - | ||
1449 | 661 | - @mock.patch('%s.maybe_install_ua_tools' % MPATH, mock.MagicMock()) | ||
1450 | 662 | - @mock.patch('%s.configure_ua' % MPATH) | ||
1451 | 663 | - def test_handle_warns_on_deprecated_ubuntu_advantage_key_w_config( | ||
1452 | 664 | - self, m_configure_ua): | ||
1453 | 665 | - """Warning when ubuntu-advantage key is present with new config""" | ||
1454 | 666 | - cfg = {'ubuntu-advantage': {'token': 'token', 'enable': ['esm']}} | ||
1455 | 667 | - handle('nomatter', cfg=cfg, cloud=None, log=self.logger, args=None) | ||
1456 | 668 | - self.assertEqual( | ||
1457 | 669 | - 'WARNING: Deprecated configuration key "ubuntu-advantage"' | ||
1458 | 670 | - ' provided. Expected underscore delimited "ubuntu_advantage";' | ||
1459 | 671 | - ' will attempt to continue.', | ||
1460 | 672 | - self.logs.getvalue().splitlines()[0]) | ||
1461 | 673 | - m_configure_ua.assert_called_once_with( | ||
1462 | 674 | - token='token', enable=['esm']) | ||
1463 | 675 | - | ||
1464 | 676 | - def test_handle_error_on_deprecated_commands_key_dashed(self): | ||
1465 | 677 | - """Error when commands is present in ubuntu-advantage key.""" | ||
1466 | 678 | - cfg = {'ubuntu-advantage': {'commands': 'nogo'}} | ||
1467 | 679 | - with self.assertRaises(RuntimeError) as context_manager: | ||
1468 | 680 | - handle('nomatter', cfg=cfg, cloud=None, log=self.logger, args=None) | ||
1469 | 681 | - self.assertEqual( | ||
1470 | 682 | - 'Deprecated configuration "ubuntu-advantage: commands" provided.' | ||
1471 | 683 | - ' Expected "token"', | ||
1472 | 684 | - str(context_manager.exception)) | ||
1473 | 685 | - | ||
1474 | 686 | - def test_handle_error_on_deprecated_commands_key_underscored(self): | ||
1475 | 687 | - """Error when commands is present in ubuntu_advantage key.""" | ||
1476 | 688 | - cfg = {'ubuntu_advantage': {'commands': 'nogo'}} | ||
1477 | 689 | - with self.assertRaises(RuntimeError) as context_manager: | ||
1478 | 690 | - handle('nomatter', cfg=cfg, cloud=None, log=self.logger, args=None) | ||
1479 | 691 | - self.assertEqual( | ||
1480 | 692 | - 'Deprecated configuration "ubuntu-advantage: commands" provided.' | ||
1481 | 693 | - ' Expected "token"', | ||
1482 | 694 | - str(context_manager.exception)) | ||
1483 | 695 | + def test_handle_runs_commands_provided(self, m_install): | ||
1484 | 696 | + """When commands are specified as a list, run them.""" | ||
1485 | 697 | + outfile = self.tmp_path('output.log', dir=self.tmp) | ||
1486 | 698 | |||
1487 | 699 | - @mock.patch('%s.maybe_install_ua_tools' % MPATH, mock.MagicMock()) | ||
1488 | 700 | - @mock.patch('%s.configure_ua' % MPATH) | ||
1489 | 701 | - def test_handle_prefers_new_style_config( | ||
1490 | 702 | - self, m_configure_ua): | ||
1491 | 703 | - """ubuntu_advantage should be preferred over ubuntu-advantage""" | ||
1492 | 704 | cfg = { | ||
1493 | 705 | - 'ubuntu-advantage': {'token': 'nope', 'enable': ['wrong']}, | ||
1494 | 706 | - 'ubuntu_advantage': {'token': 'token', 'enable': ['esm']}, | ||
1495 | 707 | - } | ||
1496 | 708 | - handle('nomatter', cfg=cfg, cloud=None, log=self.logger, args=None) | ||
1497 | 709 | - self.assertEqual( | ||
1498 | 710 | - 'WARNING: Deprecated configuration key "ubuntu-advantage"' | ||
1499 | 711 | - ' provided. Expected underscore delimited "ubuntu_advantage";' | ||
1500 | 712 | - ' will attempt to continue.', | ||
1501 | 713 | - self.logs.getvalue().splitlines()[0]) | ||
1502 | 714 | - m_configure_ua.assert_called_once_with( | ||
1503 | 715 | - token='token', enable=['esm']) | ||
1504 | 716 | + 'ubuntu-advantage': {'commands': ['echo "HI" >> %s' % outfile, | ||
1505 | 717 | + 'echo "MOM" >> %s' % outfile]}} | ||
1506 | 718 | + mock_path = '%s.sys.stderr' % MPATH | ||
1507 | 719 | + with self.allow_subp([CiTestCase.SUBP_SHELL_TRUE]): | ||
1508 | 720 | + with mock.patch(mock_path, new_callable=StringIO): | ||
1509 | 721 | + handle('nomatter', cfg=cfg, cloud=None, log=self.logger, | ||
1510 | 722 | + args=None) | ||
1511 | 723 | + self.assertEqual('HI\nMOM\n', util.load_file(outfile)) | ||
1512 | 724 | |||
1513 | 725 | |||
1514 | 726 | class TestMaybeInstallUATools(CiTestCase): | ||
1515 | 727 | @@ -288,7 +253,7 @@ class TestMaybeInstallUATools(CiTestCase | ||
1516 | 728 | @mock.patch('%s.util.which' % MPATH) | ||
1517 | 729 | def test_maybe_install_ua_tools_noop_when_ua_tools_present(self, m_which): | ||
1518 | 730 | """Do nothing if ubuntu-advantage-tools already exists.""" | ||
1519 | 731 | - m_which.return_value = '/usr/bin/ua' # already installed | ||
1520 | 732 | + m_which.return_value = '/usr/bin/ubuntu-advantage' # already installed | ||
1521 | 733 | distro = mock.MagicMock() | ||
1522 | 734 | distro.update_package_sources.side_effect = RuntimeError( | ||
1523 | 735 | 'Some apt error') | ||
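For reviewers skimming the revert above: the restored module treats the keys of a dict-form 'commands' value purely as an ordering hint and prepends the 'ubuntu-advantage' base command to bare list entries before handing each one to util.subp. A minimal standalone sketch of that ordering and normalisation step (illustrative only, not code from the patch):

    # Dict keys only order the commands; values may be strings or lists.
    commands = {
        '01': ['enable-livepatch', 'my-token'],        # base command omitted
        '00': 'ubuntu-advantage enable-esm my-token',  # string, run via shell
    }

    # Same ordering expression as run_commands() in the patch above.
    ordered = [value for _, value in sorted(commands.items())]

    # Stand-in for prepend_base_command(): bare list entries gain the
    # 'ubuntu-advantage' prefix; string entries are left for the shell.
    fixed = [
        cmd if not isinstance(cmd, list) or cmd[:1] == ['ubuntu-advantage']
        else ['ubuntu-advantage'] + cmd
        for cmd in ordered
    ]

    print(fixed)
    # ['ubuntu-advantage enable-esm my-token',
    #  ['ubuntu-advantage', 'enable-livepatch', 'my-token']]

Each resulting entry is then executed individually, with failures collected and reported after all commands have been attempted.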
1524 | diff --git a/doc/rtd/topics/datasources/nocloud.rst b/doc/rtd/topics/datasources/nocloud.rst | |||
1525 | index 08578e8..1c5cf96 100644 | |||
1526 | --- a/doc/rtd/topics/datasources/nocloud.rst | |||
1527 | +++ b/doc/rtd/topics/datasources/nocloud.rst | |||
1528 | @@ -9,7 +9,7 @@ network at all). | |||
1529 | 9 | 9 | ||
1530 | 10 | You can provide meta-data and user-data to a local vm boot via files on a | 10 | You can provide meta-data and user-data to a local vm boot via files on a |
1531 | 11 | `vfat`_ or `iso9660`_ filesystem. The filesystem volume label must be | 11 | `vfat`_ or `iso9660`_ filesystem. The filesystem volume label must be |
1533 | 12 | ``cidata``. | 12 | ``cidata`` or ``CIDATA``. |
1534 | 13 | 13 | ||
1535 | 14 | Alternatively, you can provide meta-data via kernel command line or SMBIOS | 14 | Alternatively, you can provide meta-data via kernel command line or SMBIOS |
1536 | 15 | "serial number" option. The data must be passed in the form of a string: | 15 | "serial number" option. The data must be passed in the form of a string: |
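The doc change above reflects the new case-insensitive handling of the NoCloud volume label. A tiny illustrative sketch (hypothetical helper, not the datasource code) of the label variants that the NoCloud unit tests later in this diff exercise:

    def candidate_labels(fs_label):
        # Hypothetical: accept the configured label in either case, matching
        # the ``cidata`` / ``CIDATA`` wording in the doc change above.
        return sorted({fs_label.lower(), fs_label.upper()})

    assert candidate_labels('cidata') == ['CIDATA', 'cidata']
    assert candidate_labels('CIDATA') == ['CIDATA', 'cidata']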
1537 | diff --git a/packages/redhat/cloud-init.spec.in b/packages/redhat/cloud-init.spec.in | |||
1538 | index 6b2022b..057a578 100644 | |||
1539 | --- a/packages/redhat/cloud-init.spec.in | |||
1540 | +++ b/packages/redhat/cloud-init.spec.in | |||
1541 | @@ -205,7 +205,9 @@ fi | |||
1542 | 205 | %dir %{_sysconfdir}/cloud/templates | 205 | %dir %{_sysconfdir}/cloud/templates |
1543 | 206 | %config(noreplace) %{_sysconfdir}/cloud/templates/* | 206 | %config(noreplace) %{_sysconfdir}/cloud/templates/* |
1544 | 207 | %config(noreplace) %{_sysconfdir}/rsyslog.d/21-cloudinit.conf | 207 | %config(noreplace) %{_sysconfdir}/rsyslog.d/21-cloudinit.conf |
1546 | 208 | %{_sysconfdir}/bash_completion.d/cloud-init | 208 | |
1547 | 209 | # Bash completion script | ||
1548 | 210 | %{_datadir}/bash-completion/completions/cloud-init | ||
1549 | 209 | 211 | ||
1550 | 210 | %{_libexecdir}/%{name} | 212 | %{_libexecdir}/%{name} |
1551 | 211 | %dir %{_sharedstatedir}/cloud | 213 | %dir %{_sharedstatedir}/cloud |
1552 | diff --git a/packages/suse/cloud-init.spec.in b/packages/suse/cloud-init.spec.in | |||
1553 | index 26894b3..004b875 100644 | |||
1554 | --- a/packages/suse/cloud-init.spec.in | |||
1555 | +++ b/packages/suse/cloud-init.spec.in | |||
1556 | @@ -120,7 +120,9 @@ version_pys=$(cd "%{buildroot}" && find . -name version.py -type f) | |||
1557 | 120 | %config(noreplace) %{_sysconfdir}/cloud/cloud.cfg.d/README | 120 | %config(noreplace) %{_sysconfdir}/cloud/cloud.cfg.d/README |
1558 | 121 | %dir %{_sysconfdir}/cloud/templates | 121 | %dir %{_sysconfdir}/cloud/templates |
1559 | 122 | %config(noreplace) %{_sysconfdir}/cloud/templates/* | 122 | %config(noreplace) %{_sysconfdir}/cloud/templates/* |
1561 | 123 | %{_sysconfdir}/bash_completion.d/cloud-init | 123 | |
1562 | 124 | # Bash completion script | ||
1563 | 125 | %{_datadir}/bash-completion/completions/cloud-init | ||
1564 | 124 | 126 | ||
1565 | 125 | %{_sysconfdir}/dhcp/dhclient-exit-hooks.d/hook-dhclient | 127 | %{_sysconfdir}/dhcp/dhclient-exit-hooks.d/hook-dhclient |
1566 | 126 | %{_sysconfdir}/NetworkManager/dispatcher.d/hook-network-manager | 128 | %{_sysconfdir}/NetworkManager/dispatcher.d/hook-network-manager |
1567 | diff --git a/setup.py b/setup.py | |||
1568 | index 186e215..fcaf26f 100755 | |||
1569 | --- a/setup.py | |||
1570 | +++ b/setup.py | |||
1571 | @@ -245,13 +245,14 @@ if not in_virtualenv(): | |||
1572 | 245 | INITSYS_ROOTS[k] = "/" + INITSYS_ROOTS[k] | 245 | INITSYS_ROOTS[k] = "/" + INITSYS_ROOTS[k] |
1573 | 246 | 246 | ||
1574 | 247 | data_files = [ | 247 | data_files = [ |
1575 | 248 | (ETC + '/bash_completion.d', ['bash_completion/cloud-init']), | ||
1576 | 249 | (ETC + '/cloud', [render_tmpl("config/cloud.cfg.tmpl")]), | 248 | (ETC + '/cloud', [render_tmpl("config/cloud.cfg.tmpl")]), |
1577 | 250 | (ETC + '/cloud/cloud.cfg.d', glob('config/cloud.cfg.d/*')), | 249 | (ETC + '/cloud/cloud.cfg.d', glob('config/cloud.cfg.d/*')), |
1578 | 251 | (ETC + '/cloud/templates', glob('templates/*')), | 250 | (ETC + '/cloud/templates', glob('templates/*')), |
1579 | 252 | (USR_LIB_EXEC + '/cloud-init', ['tools/ds-identify', | 251 | (USR_LIB_EXEC + '/cloud-init', ['tools/ds-identify', |
1580 | 253 | 'tools/uncloud-init', | 252 | 'tools/uncloud-init', |
1581 | 254 | 'tools/write-ssh-key-fingerprints']), | 253 | 'tools/write-ssh-key-fingerprints']), |
1582 | 254 | (USR + '/share/bash-completion/completions', | ||
1583 | 255 | ['bash_completion/cloud-init']), | ||
1584 | 255 | (USR + '/share/doc/cloud-init', [f for f in glob('doc/*') if is_f(f)]), | 256 | (USR + '/share/doc/cloud-init', [f for f in glob('doc/*') if is_f(f)]), |
1585 | 256 | (USR + '/share/doc/cloud-init/examples', | 257 | (USR + '/share/doc/cloud-init/examples', |
1586 | 257 | [f for f in glob('doc/examples/*') if is_f(f)]), | 258 | [f for f in glob('doc/examples/*') if is_f(f)]), |
1587 | diff --git a/tests/cloud_tests/releases.yaml b/tests/cloud_tests/releases.yaml | |||
1588 | index ec5da72..924ad95 100644 | |||
1589 | --- a/tests/cloud_tests/releases.yaml | |||
1590 | +++ b/tests/cloud_tests/releases.yaml | |||
1591 | @@ -129,6 +129,22 @@ features: | |||
1592 | 129 | 129 | ||
1593 | 130 | releases: | 130 | releases: |
1594 | 131 | # UBUNTU ================================================================= | 131 | # UBUNTU ================================================================= |
1595 | 132 | eoan: | ||
1596 | 133 | # EOL: Jul 2020 | ||
1597 | 134 | default: | ||
1598 | 135 | enabled: true | ||
1599 | 136 | release: eoan | ||
1600 | 137 | version: 19.10 | ||
1601 | 138 | os: ubuntu | ||
1602 | 139 | feature_groups: | ||
1603 | 140 | - base | ||
1604 | 141 | - debian_base | ||
1605 | 142 | - ubuntu_specific | ||
1606 | 143 | lxd: | ||
1607 | 144 | sstreams_server: https://cloud-images.ubuntu.com/daily | ||
1608 | 145 | alias: eoan | ||
1609 | 146 | setup_overrides: null | ||
1610 | 147 | override_templates: false | ||
1611 | 132 | disco: | 148 | disco: |
1612 | 133 | # EOL: Jan 2020 | 149 | # EOL: Jan 2020 |
1613 | 134 | default: | 150 | default: |
1614 | diff --git a/tests/unittests/test_datasource/test_azure.py b/tests/unittests/test_datasource/test_azure.py | |||
1615 | index 53c56cd..427ab7e 100644 | |||
1616 | --- a/tests/unittests/test_datasource/test_azure.py | |||
1617 | +++ b/tests/unittests/test_datasource/test_azure.py | |||
1618 | @@ -163,7 +163,8 @@ class TestGetMetadataFromIMDS(HttprettyTestCase): | |||
1619 | 163 | 163 | ||
1620 | 164 | m_readurl.assert_called_with( | 164 | m_readurl.assert_called_with( |
1621 | 165 | self.network_md_url, exception_cb=mock.ANY, | 165 | self.network_md_url, exception_cb=mock.ANY, |
1623 | 166 | headers={'Metadata': 'true'}, retries=2, timeout=1) | 166 | headers={'Metadata': 'true'}, retries=2, |
1624 | 167 | timeout=dsaz.IMDS_TIMEOUT_IN_SECONDS) | ||
1625 | 167 | 168 | ||
1626 | 168 | @mock.patch('cloudinit.url_helper.time.sleep') | 169 | @mock.patch('cloudinit.url_helper.time.sleep') |
1627 | 169 | @mock.patch(MOCKPATH + 'net.is_up') | 170 | @mock.patch(MOCKPATH + 'net.is_up') |
1628 | @@ -1375,12 +1376,15 @@ class TestCanDevBeReformatted(CiTestCase): | |||
1629 | 1375 | self._domock(p + "util.mount_cb", 'm_mount_cb') | 1376 | self._domock(p + "util.mount_cb", 'm_mount_cb') |
1630 | 1376 | self._domock(p + "os.path.realpath", 'm_realpath') | 1377 | self._domock(p + "os.path.realpath", 'm_realpath') |
1631 | 1377 | self._domock(p + "os.path.exists", 'm_exists') | 1378 | self._domock(p + "os.path.exists", 'm_exists') |
1632 | 1379 | self._domock(p + "util.SeLinuxGuard", 'm_selguard') | ||
1633 | 1378 | 1380 | ||
1634 | 1379 | self.m_exists.side_effect = lambda p: p in bypath | 1381 | self.m_exists.side_effect = lambda p: p in bypath |
1635 | 1380 | self.m_realpath.side_effect = realpath | 1382 | self.m_realpath.side_effect = realpath |
1636 | 1381 | self.m_has_ntfs_filesystem.side_effect = has_ntfs_fs | 1383 | self.m_has_ntfs_filesystem.side_effect = has_ntfs_fs |
1637 | 1382 | self.m_mount_cb.side_effect = mount_cb | 1384 | self.m_mount_cb.side_effect = mount_cb |
1638 | 1383 | self.m_partitions_on_device.side_effect = partitions_on_device | 1385 | self.m_partitions_on_device.side_effect = partitions_on_device |
1639 | 1386 | self.m_selguard.__enter__ = mock.Mock(return_value=False) | ||
1640 | 1387 | self.m_selguard.__exit__ = mock.Mock() | ||
1641 | 1384 | 1388 | ||
1642 | 1385 | def test_three_partitions_is_false(self): | 1389 | def test_three_partitions_is_false(self): |
1643 | 1386 | """A disk with 3 partitions can not be formatted.""" | 1390 | """A disk with 3 partitions can not be formatted.""" |
1644 | @@ -1788,7 +1792,8 @@ class TestAzureDataSourcePreprovisioning(CiTestCase): | |||
1645 | 1788 | headers={'Metadata': 'true', | 1792 | headers={'Metadata': 'true', |
1646 | 1789 | 'User-Agent': | 1793 | 'User-Agent': |
1647 | 1790 | 'Cloud-Init/%s' % vs() | 1794 | 'Cloud-Init/%s' % vs() |
1649 | 1791 | }, method='GET', timeout=1, | 1795 | }, method='GET', |
1650 | 1796 | timeout=dsaz.IMDS_TIMEOUT_IN_SECONDS, | ||
1651 | 1792 | url=full_url)]) | 1797 | url=full_url)]) |
1652 | 1793 | self.assertEqual(m_dhcp.call_count, 2) | 1798 | self.assertEqual(m_dhcp.call_count, 2) |
1653 | 1794 | m_net.assert_any_call( | 1799 | m_net.assert_any_call( |
1654 | @@ -1825,7 +1830,9 @@ class TestAzureDataSourcePreprovisioning(CiTestCase): | |||
1655 | 1825 | headers={'Metadata': 'true', | 1830 | headers={'Metadata': 'true', |
1656 | 1826 | 'User-Agent': | 1831 | 'User-Agent': |
1657 | 1827 | 'Cloud-Init/%s' % vs()}, | 1832 | 'Cloud-Init/%s' % vs()}, |
1659 | 1828 | method='GET', timeout=1, url=full_url)]) | 1833 | method='GET', |
1660 | 1834 | timeout=dsaz.IMDS_TIMEOUT_IN_SECONDS, | ||
1661 | 1835 | url=full_url)]) | ||
1662 | 1829 | self.assertEqual(m_dhcp.call_count, 2) | 1836 | self.assertEqual(m_dhcp.call_count, 2) |
1663 | 1830 | m_net.assert_any_call( | 1837 | m_net.assert_any_call( |
1664 | 1831 | broadcast='192.168.2.255', interface='eth9', ip='192.168.2.9', | 1838 | broadcast='192.168.2.255', interface='eth9', ip='192.168.2.9', |
1665 | diff --git a/tests/unittests/test_datasource/test_azure_helper.py b/tests/unittests/test_datasource/test_azure_helper.py | |||
1666 | index 0255616..bd006ab 100644 | |||
1667 | --- a/tests/unittests/test_datasource/test_azure_helper.py | |||
1668 | +++ b/tests/unittests/test_datasource/test_azure_helper.py | |||
1669 | @@ -67,12 +67,17 @@ class TestFindEndpoint(CiTestCase): | |||
1670 | 67 | self.networkd_leases.return_value = None | 67 | self.networkd_leases.return_value = None |
1671 | 68 | 68 | ||
1672 | 69 | def test_missing_file(self): | 69 | def test_missing_file(self): |
1674 | 70 | self.assertRaises(ValueError, wa_shim.find_endpoint) | 70 | """wa_shim find_endpoint uses default endpoint if leasefile not found |
1675 | 71 | """ | ||
1676 | 72 | self.assertEqual(wa_shim.find_endpoint(), "168.63.129.16") | ||
1677 | 71 | 73 | ||
1678 | 72 | def test_missing_special_azure_line(self): | 74 | def test_missing_special_azure_line(self): |
1679 | 75 | """wa_shim find_endpoint uses default endpoint if leasefile is found | ||
1680 | 76 | but does not contain DHCP Option 245 (whose value is the endpoint) | ||
1681 | 77 | """ | ||
1682 | 73 | self.load_file.return_value = '' | 78 | self.load_file.return_value = '' |
1683 | 74 | self.dhcp_options.return_value = {'eth0': {'key': 'value'}} | 79 | self.dhcp_options.return_value = {'eth0': {'key': 'value'}} |
1685 | 75 | self.assertRaises(ValueError, wa_shim.find_endpoint) | 80 | self.assertEqual(wa_shim.find_endpoint(), "168.63.129.16") |
1686 | 76 | 81 | ||
1687 | 77 | @staticmethod | 82 | @staticmethod |
1688 | 78 | def _build_lease_content(encoded_address): | 83 | def _build_lease_content(encoded_address): |
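The two updated tests above pin down a behaviour change in the Azure helper (modified earlier in this diff): find_endpoint() no longer raises ValueError when no usable DHCP lease data is found and instead falls back to the well-known wireserver address. A rough standalone sketch of that fallback, assuming the helper's internals (not shown here) reduce to this:

    DEFAULT_WIRESERVER_ENDPOINT = "168.63.129.16"

    def find_endpoint_sketch(option_245_value=None):
        # option_245_value stands in for the endpoint recovered from DHCP
        # option 245 in the lease file; None means nothing was found.
        if option_245_value is None:
            # Previously this case raised ValueError; now the static
            # default is returned, as the tests above assert.
            return DEFAULT_WIRESERVER_ENDPOINT
        return option_245_value

    assert find_endpoint_sketch() == "168.63.129.16"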
1689 | diff --git a/tests/unittests/test_datasource/test_nocloud.py b/tests/unittests/test_datasource/test_nocloud.py | |||
1690 | index 3429272..b785362 100644 | |||
1691 | --- a/tests/unittests/test_datasource/test_nocloud.py | |||
1692 | +++ b/tests/unittests/test_datasource/test_nocloud.py | |||
1693 | @@ -32,6 +32,36 @@ class TestNoCloudDataSource(CiTestCase): | |||
1694 | 32 | self.mocks.enter_context( | 32 | self.mocks.enter_context( |
1695 | 33 | mock.patch.object(util, 'read_dmi_data', return_value=None)) | 33 | mock.patch.object(util, 'read_dmi_data', return_value=None)) |
1696 | 34 | 34 | ||
1697 | 35 | def _test_fs_config_is_read(self, fs_label, fs_label_to_search): | ||
1698 | 36 | vfat_device = 'device-1' | ||
1699 | 37 | |||
1700 | 38 | def m_mount_cb(device, callback, mtype): | ||
1701 | 39 | if (device == vfat_device): | ||
1702 | 40 | return {'meta-data': yaml.dump({'instance-id': 'IID'})} | ||
1703 | 41 | else: | ||
1704 | 42 | return {} | ||
1705 | 43 | |||
1706 | 44 | def m_find_devs_with(query='', path=''): | ||
1707 | 45 | if 'TYPE=vfat' == query: | ||
1708 | 46 | return [vfat_device] | ||
1709 | 47 | elif 'LABEL={}'.format(fs_label) == query: | ||
1710 | 48 | return [vfat_device] | ||
1711 | 49 | else: | ||
1712 | 50 | return [] | ||
1713 | 51 | |||
1714 | 52 | self.mocks.enter_context( | ||
1715 | 53 | mock.patch.object(util, 'find_devs_with', | ||
1716 | 54 | side_effect=m_find_devs_with)) | ||
1717 | 55 | self.mocks.enter_context( | ||
1718 | 56 | mock.patch.object(util, 'mount_cb', | ||
1719 | 57 | side_effect=m_mount_cb)) | ||
1720 | 58 | sys_cfg = {'datasource': {'NoCloud': {'fs_label': fs_label_to_search}}} | ||
1721 | 59 | dsrc = dsNoCloud(sys_cfg=sys_cfg, distro=None, paths=self.paths) | ||
1722 | 60 | ret = dsrc.get_data() | ||
1723 | 61 | |||
1724 | 62 | self.assertEqual(dsrc.metadata.get('instance-id'), 'IID') | ||
1725 | 63 | self.assertTrue(ret) | ||
1726 | 64 | |||
1727 | 35 | def test_nocloud_seed_dir_on_lxd(self, m_is_lxd): | 65 | def test_nocloud_seed_dir_on_lxd(self, m_is_lxd): |
1728 | 36 | md = {'instance-id': 'IID', 'dsmode': 'local'} | 66 | md = {'instance-id': 'IID', 'dsmode': 'local'} |
1729 | 37 | ud = b"USER_DATA_HERE" | 67 | ud = b"USER_DATA_HERE" |
1730 | @@ -90,6 +120,18 @@ class TestNoCloudDataSource(CiTestCase): | |||
1731 | 90 | ret = dsrc.get_data() | 120 | ret = dsrc.get_data() |
1732 | 91 | self.assertFalse(ret) | 121 | self.assertFalse(ret) |
1733 | 92 | 122 | ||
1734 | 123 | def test_fs_config_lowercase_label(self, m_is_lxd): | ||
1735 | 124 | self._test_fs_config_is_read('cidata', 'cidata') | ||
1736 | 125 | |||
1737 | 126 | def test_fs_config_uppercase_label(self, m_is_lxd): | ||
1738 | 127 | self._test_fs_config_is_read('CIDATA', 'cidata') | ||
1739 | 128 | |||
1740 | 129 | def test_fs_config_lowercase_label_search_uppercase(self, m_is_lxd): | ||
1741 | 130 | self._test_fs_config_is_read('cidata', 'CIDATA') | ||
1742 | 131 | |||
1743 | 132 | def test_fs_config_uppercase_label_search_uppercase(self, m_is_lxd): | ||
1744 | 133 | self._test_fs_config_is_read('CIDATA', 'CIDATA') | ||
1745 | 134 | |||
1746 | 93 | def test_no_datasource_expected(self, m_is_lxd): | 135 | def test_no_datasource_expected(self, m_is_lxd): |
1747 | 94 | # no source should be found if no cmdline, config, and fs_label=None | 136 | # no source should be found if no cmdline, config, and fs_label=None |
1748 | 95 | sys_cfg = {'datasource': {'NoCloud': {'fs_label': None}}} | 137 | sys_cfg = {'datasource': {'NoCloud': {'fs_label': None}}} |
1749 | diff --git a/tests/unittests/test_datasource/test_scaleway.py b/tests/unittests/test_datasource/test_scaleway.py | |||
1750 | index 3bfd752..f96bf0a 100644 | |||
1751 | --- a/tests/unittests/test_datasource/test_scaleway.py | |||
1752 | +++ b/tests/unittests/test_datasource/test_scaleway.py | |||
1753 | @@ -7,7 +7,6 @@ import requests | |||
1754 | 7 | 7 | ||
1755 | 8 | from cloudinit import helpers | 8 | from cloudinit import helpers |
1756 | 9 | from cloudinit import settings | 9 | from cloudinit import settings |
1757 | 10 | from cloudinit.event import EventType | ||
1758 | 11 | from cloudinit.sources import DataSourceScaleway | 10 | from cloudinit.sources import DataSourceScaleway |
1759 | 12 | 11 | ||
1760 | 13 | from cloudinit.tests.helpers import mock, HttprettyTestCase, CiTestCase | 12 | from cloudinit.tests.helpers import mock, HttprettyTestCase, CiTestCase |
1761 | @@ -404,9 +403,3 @@ class TestDataSourceScaleway(HttprettyTestCase): | |||
1762 | 404 | 403 | ||
1763 | 405 | netcfg = self.datasource.network_config | 404 | netcfg = self.datasource.network_config |
1764 | 406 | self.assertEqual(netcfg, '0xdeadbeef') | 405 | self.assertEqual(netcfg, '0xdeadbeef') |
1765 | 407 | |||
1766 | 408 | def test_update_events_is_correct(self): | ||
1767 | 409 | """ensure update_events contains correct data""" | ||
1768 | 410 | self.assertEqual( | ||
1769 | 411 | {'network': {EventType.BOOT_NEW_INSTANCE, EventType.BOOT}}, | ||
1770 | 412 | self.datasource.update_events) | ||
1771 | diff --git a/tests/unittests/test_ds_identify.py b/tests/unittests/test_ds_identify.py | |||
1772 | index d00c1b4..8c18aa1 100644 | |||
1773 | --- a/tests/unittests/test_ds_identify.py | |||
1774 | +++ b/tests/unittests/test_ds_identify.py | |||
1775 | @@ -520,6 +520,10 @@ class TestDsIdentify(DsIdentifyBase): | |||
1776 | 520 | """NoCloud is found with iso9660 filesystem on non-cdrom disk.""" | 520 | """NoCloud is found with iso9660 filesystem on non-cdrom disk.""" |
1777 | 521 | self._test_ds_found('NoCloud') | 521 | self._test_ds_found('NoCloud') |
1778 | 522 | 522 | ||
1779 | 523 | def test_nocloud_upper(self): | ||
1780 | 524 | """NoCloud is found with uppercase filesystem label.""" | ||
1781 | 525 | self._test_ds_found('NoCloudUpper') | ||
1782 | 526 | |||
1783 | 523 | def test_nocloud_seed(self): | 527 | def test_nocloud_seed(self): |
1784 | 524 | """Nocloud seed directory.""" | 528 | """Nocloud seed directory.""" |
1785 | 525 | self._test_ds_found('NoCloud-seed') | 529 | self._test_ds_found('NoCloud-seed') |
1786 | @@ -713,6 +717,19 @@ VALID_CFG = { | |||
1787 | 713 | 'dev/vdb': 'pretend iso content for cidata\n', | 717 | 'dev/vdb': 'pretend iso content for cidata\n', |
1788 | 714 | } | 718 | } |
1789 | 715 | }, | 719 | }, |
1790 | 720 | 'NoCloudUpper': { | ||
1791 | 721 | 'ds': 'NoCloud', | ||
1792 | 722 | 'mocks': [ | ||
1793 | 723 | MOCK_VIRT_IS_KVM, | ||
1794 | 724 | {'name': 'blkid', 'ret': 0, | ||
1795 | 725 | 'out': blkid_out( | ||
1796 | 726 | BLKID_UEFI_UBUNTU + | ||
1797 | 727 | [{'DEVNAME': 'vdb', 'TYPE': 'iso9660', 'LABEL': 'CIDATA'}])}, | ||
1798 | 728 | ], | ||
1799 | 729 | 'files': { | ||
1800 | 730 | 'dev/vdb': 'pretend iso content for cidata\n', | ||
1801 | 731 | } | ||
1802 | 732 | }, | ||
1803 | 716 | 'NoCloud-seed': { | 733 | 'NoCloud-seed': { |
1804 | 717 | 'ds': 'NoCloud', | 734 | 'ds': 'NoCloud', |
1805 | 718 | 'files': { | 735 | 'files': { |
1806 | diff --git a/tests/unittests/test_handler/test_handler_mounts.py b/tests/unittests/test_handler/test_handler_mounts.py | |||
1807 | index 8fea6c2..0fb160b 100644 | |||
1808 | --- a/tests/unittests/test_handler/test_handler_mounts.py | |||
1809 | +++ b/tests/unittests/test_handler/test_handler_mounts.py | |||
1810 | @@ -154,7 +154,15 @@ class TestFstabHandling(test_helpers.FilesystemMockingTestCase): | |||
1811 | 154 | return_value=True) | 154 | return_value=True) |
1812 | 155 | 155 | ||
1813 | 156 | self.add_patch('cloudinit.config.cc_mounts.util.subp', | 156 | self.add_patch('cloudinit.config.cc_mounts.util.subp', |
1815 | 157 | 'mock_util_subp') | 157 | 'm_util_subp') |
1816 | 158 | |||
1817 | 159 | self.add_patch('cloudinit.config.cc_mounts.util.mounts', | ||
1818 | 160 | 'mock_util_mounts', | ||
1819 | 161 | return_value={ | ||
1820 | 162 | '/dev/sda1': {'fstype': 'ext4', | ||
1821 | 163 | 'mountpoint': '/', | ||
1822 | 164 | 'opts': 'rw,relatime,discard' | ||
1823 | 165 | }}) | ||
1824 | 158 | 166 | ||
1825 | 159 | self.mock_cloud = mock.Mock() | 167 | self.mock_cloud = mock.Mock() |
1826 | 160 | self.mock_log = mock.Mock() | 168 | self.mock_log = mock.Mock() |
1827 | @@ -230,4 +238,24 @@ class TestFstabHandling(test_helpers.FilesystemMockingTestCase): | |||
1828 | 230 | fstab_new_content = fd.read() | 238 | fstab_new_content = fd.read() |
1829 | 231 | self.assertEqual(fstab_expected_content, fstab_new_content) | 239 | self.assertEqual(fstab_expected_content, fstab_new_content) |
1830 | 232 | 240 | ||
1831 | 241 | def test_no_change_fstab_sets_needs_mount_all(self): | ||
1832 | 242 | '''verify unchanged fstab entries are mounted if not call mount -a''' | ||
1833 | 243 | fstab_original_content = ( | ||
1834 | 244 | 'LABEL=cloudimg-rootfs / ext4 defaults 0 0\n' | ||
1835 | 245 | 'LABEL=UEFI /boot/efi vfat defaults 0 0\n' | ||
1836 | 246 | '/dev/vdb /mnt auto defaults,noexec,comment=cloudconfig 0 2\n' | ||
1837 | 247 | ) | ||
1838 | 248 | fstab_expected_content = fstab_original_content | ||
1839 | 249 | cc = {'mounts': [ | ||
1840 | 250 | ['/dev/vdb', '/mnt', 'auto', 'defaults,noexec']]} | ||
1841 | 251 | with open(cc_mounts.FSTAB_PATH, 'w') as fd: | ||
1842 | 252 | fd.write(fstab_original_content) | ||
1843 | 253 | with open(cc_mounts.FSTAB_PATH, 'r') as fd: | ||
1844 | 254 | fstab_new_content = fd.read() | ||
1845 | 255 | self.assertEqual(fstab_expected_content, fstab_new_content) | ||
1846 | 256 | cc_mounts.handle(None, cc, self.mock_cloud, self.mock_log, []) | ||
1847 | 257 | self.m_util_subp.assert_has_calls([ | ||
1848 | 258 | mock.call(['mount', '-a']), | ||
1849 | 259 | mock.call(['systemctl', 'daemon-reload'])]) | ||
1850 | 260 | |||
1851 | 233 | # vi: ts=4 expandtab | 261 | # vi: ts=4 expandtab |
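The test_no_change_fstab_sets_needs_mount_all case above captures the behaviour the cc_mounts change in this snapshot is meant to guarantee: an fstab that needs no edits must still result in a mount -a, followed by a systemd daemon-reload. A self-contained sketch of that assertion style, with a stand-in handler rather than the real cc_mounts.handle():

    from unittest import mock

    m_subp = mock.Mock()

    def handle_mounts(subp):
        # Stand-in for cc_mounts.handle(); the real module decides to run
        # these commands even when the rendered fstab is unchanged.
        subp(['mount', '-a'])
        subp(['systemctl', 'daemon-reload'])

    handle_mounts(m_subp)
    m_subp.assert_has_calls([
        mock.call(['mount', '-a']),
        mock.call(['systemctl', 'daemon-reload'])])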
1852 | diff --git a/tests/unittests/test_net.py b/tests/unittests/test_net.py | |||
1853 | index fd03deb..e85e964 100644 | |||
1854 | --- a/tests/unittests/test_net.py | |||
1855 | +++ b/tests/unittests/test_net.py | |||
1856 | @@ -9,6 +9,7 @@ from cloudinit.net import ( | |||
1857 | 9 | from cloudinit.sources.helpers import openstack | 9 | from cloudinit.sources.helpers import openstack |
1858 | 10 | from cloudinit import temp_utils | 10 | from cloudinit import temp_utils |
1859 | 11 | from cloudinit import util | 11 | from cloudinit import util |
1860 | 12 | from cloudinit import safeyaml as yaml | ||
1861 | 12 | 13 | ||
1862 | 13 | from cloudinit.tests.helpers import ( | 14 | from cloudinit.tests.helpers import ( |
1863 | 14 | CiTestCase, FilesystemMockingTestCase, dir2dict, mock, populate_dir) | 15 | CiTestCase, FilesystemMockingTestCase, dir2dict, mock, populate_dir) |
1864 | @@ -21,7 +22,7 @@ import json | |||
1865 | 21 | import os | 22 | import os |
1866 | 22 | import re | 23 | import re |
1867 | 23 | import textwrap | 24 | import textwrap |
1869 | 24 | import yaml | 25 | from yaml.serializer import Serializer |
1870 | 25 | 26 | ||
1871 | 26 | 27 | ||
1872 | 27 | DHCP_CONTENT_1 = """ | 28 | DHCP_CONTENT_1 = """ |
1873 | @@ -3269,9 +3270,12 @@ class TestNetplanPostcommands(CiTestCase): | |||
1874 | 3269 | mock_netplan_generate.assert_called_with(run=True) | 3270 | mock_netplan_generate.assert_called_with(run=True) |
1875 | 3270 | mock_net_setup_link.assert_called_with(run=True) | 3271 | mock_net_setup_link.assert_called_with(run=True) |
1876 | 3271 | 3272 | ||
1877 | 3273 | @mock.patch('cloudinit.util.SeLinuxGuard') | ||
1878 | 3272 | @mock.patch.object(netplan, "get_devicelist") | 3274 | @mock.patch.object(netplan, "get_devicelist") |
1879 | 3273 | @mock.patch('cloudinit.util.subp') | 3275 | @mock.patch('cloudinit.util.subp') |
1881 | 3274 | def test_netplan_postcmds(self, mock_subp, mock_devlist): | 3276 | def test_netplan_postcmds(self, mock_subp, mock_devlist, mock_sel): |
1882 | 3277 | mock_sel.__enter__ = mock.Mock(return_value=False) | ||
1883 | 3278 | mock_sel.__exit__ = mock.Mock() | ||
1884 | 3275 | mock_devlist.side_effect = [['lo']] | 3279 | mock_devlist.side_effect = [['lo']] |
1885 | 3276 | tmp_dir = self.tmp_dir() | 3280 | tmp_dir = self.tmp_dir() |
1886 | 3277 | ns = network_state.parse_net_config_data(self.mycfg, | 3281 | ns = network_state.parse_net_config_data(self.mycfg, |
1887 | @@ -3572,7 +3576,7 @@ class TestNetplanRoundTrip(CiTestCase): | |||
1888 | 3572 | # now look for any alias, avoid rendering them entirely | 3576 | # now look for any alias, avoid rendering them entirely |
1889 | 3573 | # generate the first anchor string using the template | 3577 | # generate the first anchor string using the template |
1890 | 3574 | # as of this writing, looks like "&id001" | 3578 | # as of this writing, looks like "&id001" |
1892 | 3575 | anchor = r'&' + yaml.serializer.Serializer.ANCHOR_TEMPLATE % 1 | 3579 | anchor = r'&' + Serializer.ANCHOR_TEMPLATE % 1 |
1893 | 3576 | found_alias = re.search(anchor, content, re.MULTILINE) | 3580 | found_alias = re.search(anchor, content, re.MULTILINE) |
1894 | 3577 | if found_alias: | 3581 | if found_alias: |
1895 | 3578 | msg = "Error at: %s\nContent:\n%s" % (found_alias, content) | 3582 | msg = "Error at: %s\nContent:\n%s" % (found_alias, content) |
1896 | @@ -3826,6 +3830,41 @@ class TestNetRenderers(CiTestCase): | |||
1897 | 3826 | self.assertRaises(net.RendererNotFoundError, renderers.select, | 3830 | self.assertRaises(net.RendererNotFoundError, renderers.select, |
1898 | 3827 | priority=['sysconfig', 'eni']) | 3831 | priority=['sysconfig', 'eni']) |
1899 | 3828 | 3832 | ||
1900 | 3833 | @mock.patch("cloudinit.net.renderers.netplan.available") | ||
1901 | 3834 | @mock.patch("cloudinit.net.renderers.sysconfig.available_sysconfig") | ||
1902 | 3835 | @mock.patch("cloudinit.net.renderers.sysconfig.available_nm") | ||
1903 | 3836 | @mock.patch("cloudinit.net.renderers.eni.available") | ||
1904 | 3837 | @mock.patch("cloudinit.net.renderers.sysconfig.util.get_linux_distro") | ||
1905 | 3838 | def test_sysconfig_selected_on_sysconfig_enabled_distros(self, m_distro, | ||
1906 | 3839 | m_eni, m_sys_nm, | ||
1907 | 3840 | m_sys_scfg, | ||
1908 | 3841 | m_netplan): | ||
1909 | 3842 | """sysconfig only selected on specific distros (rhel/sles).""" | ||
1910 | 3843 | |||
1911 | 3844 | # Ubuntu with Network-Manager installed | ||
1912 | 3845 | m_eni.return_value = False # no ifupdown (ifquery) | ||
1913 | 3846 | m_sys_scfg.return_value = False # no sysconfig/ifup/ifdown | ||
1914 | 3847 | m_sys_nm.return_value = True # network-manager is installed | ||
1915 | 3848 | m_netplan.return_value = True # netplan is installed | ||
1916 | 3849 | m_distro.return_value = ('ubuntu', None, None) | ||
1917 | 3850 | self.assertEqual('netplan', renderers.select(priority=None)[0]) | ||
1918 | 3851 | |||
1919 | 3852 | # Centos with Network-Manager installed | ||
1920 | 3853 | m_eni.return_value = False # no ifupdown (ifquery) | ||
1921 | 3854 | m_sys_scfg.return_value = False # no sysconfig/ifup/ifdown | ||
1922 | 3855 | m_sys_nm.return_value = True # network-manager is installed | ||
1923 | 3856 | m_netplan.return_value = False # netplan is not installed | ||
1924 | 3857 | m_distro.return_value = ('centos', None, None) | ||
1925 | 3858 | self.assertEqual('sysconfig', renderers.select(priority=None)[0]) | ||
1926 | 3859 | |||
1927 | 3860 | # OpenSuse with Network-Manager installed | ||
1928 | 3861 | m_eni.return_value = False # no ifupdown (ifquery) | ||
1929 | 3862 | m_sys_scfg.return_value = False # no sysconfig/ifup/ifdown | ||
1930 | 3863 | m_sys_nm.return_value = True # network-manager is installed | ||
1931 | 3864 | m_netplan.return_value = False # netplan is not installed | ||
1932 | 3865 | m_distro.return_value = ('opensuse', None, None) | ||
1933 | 3866 | self.assertEqual('sysconfig', renderers.select(priority=None)[0]) | ||
1934 | 3867 | |||
1935 | 3829 | 3868 | ||
1936 | 3830 | class TestGetInterfaces(CiTestCase): | 3869 | class TestGetInterfaces(CiTestCase): |
1937 | 3831 | _data = {'bonds': ['bond1'], | 3870 | _data = {'bonds': ['bond1'], |
1938 | diff --git a/tests/unittests/test_reporting_hyperv.py b/tests/unittests/test_reporting_hyperv.py | |||
1939 | 3832 | old mode 100644 | 3871 | old mode 100644 |
1940 | 3833 | new mode 100755 | 3872 | new mode 100755 |
1941 | index 2e64c6c..d01ed5b | |||
1942 | --- a/tests/unittests/test_reporting_hyperv.py | |||
1943 | +++ b/tests/unittests/test_reporting_hyperv.py | |||
1944 | @@ -1,10 +1,12 @@ | |||
1945 | 1 | # This file is part of cloud-init. See LICENSE file for license information. | 1 | # This file is part of cloud-init. See LICENSE file for license information. |
1946 | 2 | 2 | ||
1947 | 3 | from cloudinit.reporting import events | 3 | from cloudinit.reporting import events |
1949 | 4 | from cloudinit.reporting import handlers | 4 | from cloudinit.reporting.handlers import HyperVKvpReportingHandler |
1950 | 5 | 5 | ||
1951 | 6 | import json | 6 | import json |
1952 | 7 | import os | 7 | import os |
1953 | 8 | import struct | ||
1954 | 9 | import time | ||
1955 | 8 | 10 | ||
1956 | 9 | from cloudinit import util | 11 | from cloudinit import util |
1957 | 10 | from cloudinit.tests.helpers import CiTestCase | 12 | from cloudinit.tests.helpers import CiTestCase |
1958 | @@ -13,7 +15,7 @@ from cloudinit.tests.helpers import CiTestCase | |||
1959 | 13 | class TestKvpEncoding(CiTestCase): | 15 | class TestKvpEncoding(CiTestCase): |
1960 | 14 | def test_encode_decode(self): | 16 | def test_encode_decode(self): |
1961 | 15 | kvp = {'key': 'key1', 'value': 'value1'} | 17 | kvp = {'key': 'key1', 'value': 'value1'} |
1963 | 16 | kvp_reporting = handlers.HyperVKvpReportingHandler() | 18 | kvp_reporting = HyperVKvpReportingHandler() |
1964 | 17 | data = kvp_reporting._encode_kvp_item(kvp['key'], kvp['value']) | 19 | data = kvp_reporting._encode_kvp_item(kvp['key'], kvp['value']) |
1965 | 18 | self.assertEqual(len(data), kvp_reporting.HV_KVP_RECORD_SIZE) | 20 | self.assertEqual(len(data), kvp_reporting.HV_KVP_RECORD_SIZE) |
1966 | 19 | decoded_kvp = kvp_reporting._decode_kvp_item(data) | 21 | decoded_kvp = kvp_reporting._decode_kvp_item(data) |
1967 | @@ -26,57 +28,9 @@ class TextKvpReporter(CiTestCase): | |||
1968 | 26 | self.tmp_file_path = self.tmp_path('kvp_pool_file') | 28 | self.tmp_file_path = self.tmp_path('kvp_pool_file') |
1969 | 27 | util.ensure_file(self.tmp_file_path) | 29 | util.ensure_file(self.tmp_file_path) |
1970 | 28 | 30 | ||
1971 | 29 | def test_event_type_can_be_filtered(self): | ||
1972 | 30 | reporter = handlers.HyperVKvpReportingHandler( | ||
1973 | 31 | kvp_file_path=self.tmp_file_path, | ||
1974 | 32 | event_types=['foo', 'bar']) | ||
1975 | 33 | |||
1976 | 34 | reporter.publish_event( | ||
1977 | 35 | events.ReportingEvent('foo', 'name', 'description')) | ||
1978 | 36 | reporter.publish_event( | ||
1979 | 37 | events.ReportingEvent('some_other', 'name', 'description3')) | ||
1980 | 38 | reporter.q.join() | ||
1981 | 39 | |||
1982 | 40 | kvps = list(reporter._iterate_kvps(0)) | ||
1983 | 41 | self.assertEqual(1, len(kvps)) | ||
1984 | 42 | |||
1985 | 43 | reporter.publish_event( | ||
1986 | 44 | events.ReportingEvent('bar', 'name', 'description2')) | ||
1987 | 45 | reporter.q.join() | ||
1988 | 46 | kvps = list(reporter._iterate_kvps(0)) | ||
1989 | 47 | self.assertEqual(2, len(kvps)) | ||
1990 | 48 | |||
1991 | 49 | self.assertIn('foo', kvps[0]['key']) | ||
1992 | 50 | self.assertIn('bar', kvps[1]['key']) | ||
1993 | 51 | self.assertNotIn('some_other', kvps[0]['key']) | ||
1994 | 52 | self.assertNotIn('some_other', kvps[1]['key']) | ||
1995 | 53 | |||
1996 | 54 | def test_events_are_over_written(self): | ||
1997 | 55 | reporter = handlers.HyperVKvpReportingHandler( | ||
1998 | 56 | kvp_file_path=self.tmp_file_path) | ||
1999 | 57 | |||
2000 | 58 | self.assertEqual(0, len(list(reporter._iterate_kvps(0)))) | ||
2001 | 59 | |||
2002 | 60 | reporter.publish_event( | ||
2003 | 61 | events.ReportingEvent('foo', 'name1', 'description')) | ||
2004 | 62 | reporter.publish_event( | ||
2005 | 63 | events.ReportingEvent('foo', 'name2', 'description')) | ||
2006 | 64 | reporter.q.join() | ||
2007 | 65 | self.assertEqual(2, len(list(reporter._iterate_kvps(0)))) | ||
2008 | 66 | |||
2009 | 67 | reporter2 = handlers.HyperVKvpReportingHandler( | ||
2010 | 68 | kvp_file_path=self.tmp_file_path) | ||
2011 | 69 | reporter2.incarnation_no = reporter.incarnation_no + 1 | ||
2012 | 70 | reporter2.publish_event( | ||
2013 | 71 | events.ReportingEvent('foo', 'name3', 'description')) | ||
2014 | 72 | reporter2.q.join() | ||
2015 | 73 | |||
2016 | 74 | self.assertEqual(2, len(list(reporter2._iterate_kvps(0)))) | ||
2017 | 75 | |||
2018 | 76 | def test_events_with_higher_incarnation_not_over_written(self): | 31 | def test_events_with_higher_incarnation_not_over_written(self): |
2020 | 77 | reporter = handlers.HyperVKvpReportingHandler( | 32 | reporter = HyperVKvpReportingHandler( |
2021 | 78 | kvp_file_path=self.tmp_file_path) | 33 | kvp_file_path=self.tmp_file_path) |
2022 | 79 | |||
2023 | 80 | self.assertEqual(0, len(list(reporter._iterate_kvps(0)))) | 34 | self.assertEqual(0, len(list(reporter._iterate_kvps(0)))) |
2024 | 81 | 35 | ||
2025 | 82 | reporter.publish_event( | 36 | reporter.publish_event( |
2026 | @@ -86,7 +40,7 @@ class TextKvpReporter(CiTestCase): | |||
2027 | 86 | reporter.q.join() | 40 | reporter.q.join() |
2028 | 87 | self.assertEqual(2, len(list(reporter._iterate_kvps(0)))) | 41 | self.assertEqual(2, len(list(reporter._iterate_kvps(0)))) |
2029 | 88 | 42 | ||
2031 | 89 | reporter3 = handlers.HyperVKvpReportingHandler( | 43 | reporter3 = HyperVKvpReportingHandler( |
2032 | 90 | kvp_file_path=self.tmp_file_path) | 44 | kvp_file_path=self.tmp_file_path) |
2033 | 91 | reporter3.incarnation_no = reporter.incarnation_no - 1 | 45 | reporter3.incarnation_no = reporter.incarnation_no - 1 |
2034 | 92 | reporter3.publish_event( | 46 | reporter3.publish_event( |
2035 | @@ -95,7 +49,7 @@ class TextKvpReporter(CiTestCase): | |||
2036 | 95 | self.assertEqual(3, len(list(reporter3._iterate_kvps(0)))) | 49 | self.assertEqual(3, len(list(reporter3._iterate_kvps(0)))) |
2037 | 96 | 50 | ||
2038 | 97 | def test_finish_event_result_is_logged(self): | 51 | def test_finish_event_result_is_logged(self): |
2040 | 98 | reporter = handlers.HyperVKvpReportingHandler( | 52 | reporter = HyperVKvpReportingHandler( |
2041 | 99 | kvp_file_path=self.tmp_file_path) | 53 | kvp_file_path=self.tmp_file_path) |
2042 | 100 | reporter.publish_event( | 54 | reporter.publish_event( |
2043 | 101 | events.FinishReportingEvent('name2', 'description1', | 55 | events.FinishReportingEvent('name2', 'description1', |
2044 | @@ -105,7 +59,7 @@ class TextKvpReporter(CiTestCase): | |||
2045 | 105 | 59 | ||
2046 | 106 | def test_file_operation_issue(self): | 60 | def test_file_operation_issue(self): |
2047 | 107 | os.remove(self.tmp_file_path) | 61 | os.remove(self.tmp_file_path) |
2049 | 108 | reporter = handlers.HyperVKvpReportingHandler( | 62 | reporter = HyperVKvpReportingHandler( |
2050 | 109 | kvp_file_path=self.tmp_file_path) | 63 | kvp_file_path=self.tmp_file_path) |
2051 | 110 | reporter.publish_event( | 64 | reporter.publish_event( |
2052 | 111 | events.FinishReportingEvent('name2', 'description1', | 65 | events.FinishReportingEvent('name2', 'description1', |
2053 | @@ -113,7 +67,7 @@ class TextKvpReporter(CiTestCase): | |||
2054 | 113 | reporter.q.join() | 67 | reporter.q.join() |
2055 | 114 | 68 | ||
2056 | 115 | def test_event_very_long(self): | 69 | def test_event_very_long(self): |
2058 | 116 | reporter = handlers.HyperVKvpReportingHandler( | 70 | reporter = HyperVKvpReportingHandler( |
2059 | 117 | kvp_file_path=self.tmp_file_path) | 71 | kvp_file_path=self.tmp_file_path) |
2060 | 118 | description = 'ab' * reporter.HV_KVP_EXCHANGE_MAX_VALUE_SIZE | 72 | description = 'ab' * reporter.HV_KVP_EXCHANGE_MAX_VALUE_SIZE |
2061 | 119 | long_event = events.FinishReportingEvent( | 73 | long_event = events.FinishReportingEvent( |
2062 | @@ -132,3 +86,43 @@ class TextKvpReporter(CiTestCase): | |||
2063 | 132 | self.assertEqual(msg_slice['msg_i'], i) | 86 | self.assertEqual(msg_slice['msg_i'], i) |
2064 | 133 | full_description += msg_slice['msg'] | 87 | full_description += msg_slice['msg'] |
2065 | 134 | self.assertEqual(description, full_description) | 88 | self.assertEqual(description, full_description) |
2066 | 89 | |||
2067 | 90 | def test_not_truncate_kvp_file_modified_after_boot(self): | ||
2068 | 91 | with open(self.tmp_file_path, "wb+") as f: | ||
2069 | 92 | kvp = {'key': 'key1', 'value': 'value1'} | ||
2070 | 93 | data = (struct.pack("%ds%ds" % ( | ||
2071 | 94 | HyperVKvpReportingHandler.HV_KVP_EXCHANGE_MAX_KEY_SIZE, | ||
2072 | 95 | HyperVKvpReportingHandler.HV_KVP_EXCHANGE_MAX_VALUE_SIZE), | ||
2073 | 96 | kvp['key'].encode('utf-8'), kvp['value'].encode('utf-8'))) | ||
2074 | 97 | f.write(data) | ||
2075 | 98 | cur_time = time.time() | ||
2076 | 99 | os.utime(self.tmp_file_path, (cur_time, cur_time)) | ||
2077 | 100 | |||
2078 | 101 | # reset this because the unit test framework | ||
2079 | 102 | # has already polluted the class variable | ||
2080 | 103 | HyperVKvpReportingHandler._already_truncated_pool_file = False | ||
2081 | 104 | |||
2082 | 105 | reporter = HyperVKvpReportingHandler(kvp_file_path=self.tmp_file_path) | ||
2083 | 106 | kvps = list(reporter._iterate_kvps(0)) | ||
2084 | 107 | self.assertEqual(1, len(kvps)) | ||
2085 | 108 | |||
2086 | 109 | def test_truncate_stale_kvp_file(self): | ||
2087 | 110 | with open(self.tmp_file_path, "wb+") as f: | ||
2088 | 111 | kvp = {'key': 'key1', 'value': 'value1'} | ||
2089 | 112 | data = (struct.pack("%ds%ds" % ( | ||
2090 | 113 | HyperVKvpReportingHandler.HV_KVP_EXCHANGE_MAX_KEY_SIZE, | ||
2091 | 114 | HyperVKvpReportingHandler.HV_KVP_EXCHANGE_MAX_VALUE_SIZE), | ||
2092 | 115 | kvp['key'].encode('utf-8'), kvp['value'].encode('utf-8'))) | ||
2093 | 116 | f.write(data) | ||
2094 | 117 | |||
2095 | 118 | # set the time ways back to make it look like | ||
2096 | 119 | # we had an old kvp file | ||
2097 | 120 | os.utime(self.tmp_file_path, (1000000, 1000000)) | ||
2098 | 121 | |||
2099 | 122 | # reset this because the unit test framework | ||
2100 | 123 | # has already polluted the class variable | ||
2101 | 124 | HyperVKvpReportingHandler._already_truncated_pool_file = False | ||
2102 | 125 | |||
2103 | 126 | reporter = HyperVKvpReportingHandler(kvp_file_path=self.tmp_file_path) | ||
2104 | 127 | kvps = list(reporter._iterate_kvps(0)) | ||
2105 | 128 | self.assertEqual(0, len(kvps)) | ||
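Both new tests above write a KVP pool file by hand and then check the mtime-based truncation decision: a pool file touched after boot keeps its records, while a stale one is truncated before new events are appended. A small sketch of the fixed-width record packing they rely on (the 512-byte key / 2048-byte value sizes follow the Hyper-V KVP exchange format; the real constants live on HyperVKvpReportingHandler):

    import struct

    KEY_SIZE = 512      # HV_KVP_EXCHANGE_MAX_KEY_SIZE
    VALUE_SIZE = 2048   # HV_KVP_EXCHANGE_MAX_VALUE_SIZE

    def encode_kvp_item(key, value):
        """Pack one key/value pair into a NUL-padded KVP pool record."""
        return struct.pack("%ds%ds" % (KEY_SIZE, VALUE_SIZE),
                           key.encode('utf-8'), value.encode('utf-8'))

    record = encode_kvp_item('key1', 'value1')
    assert len(record) == KEY_SIZE + VALUE_SIZE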
2106 | diff --git a/tools/build-on-freebsd b/tools/build-on-freebsd | |||
2107 | index d23fde2..dc3b974 100755 | |||
2108 | --- a/tools/build-on-freebsd | |||
2109 | +++ b/tools/build-on-freebsd | |||
2110 | @@ -9,6 +9,7 @@ fail() { echo "FAILED:" "$@" 1>&2; exit 1; } | |||
2111 | 9 | depschecked=/tmp/c-i.dependencieschecked | 9 | depschecked=/tmp/c-i.dependencieschecked |
2112 | 10 | pkgs=" | 10 | pkgs=" |
2113 | 11 | bash | 11 | bash |
2114 | 12 | chpasswd | ||
2115 | 12 | dmidecode | 13 | dmidecode |
2116 | 13 | e2fsprogs | 14 | e2fsprogs |
2117 | 14 | py27-Jinja2 | 15 | py27-Jinja2 |
2118 | @@ -17,6 +18,7 @@ pkgs=" | |||
2119 | 17 | py27-configobj | 18 | py27-configobj |
2120 | 18 | py27-jsonpatch | 19 | py27-jsonpatch |
2121 | 19 | py27-jsonpointer | 20 | py27-jsonpointer |
2122 | 21 | py27-jsonschema | ||
2123 | 20 | py27-oauthlib | 22 | py27-oauthlib |
2124 | 21 | py27-requests | 23 | py27-requests |
2125 | 22 | py27-serial | 24 | py27-serial |
2126 | @@ -28,12 +30,9 @@ pkgs=" | |||
2127 | 28 | [ -f "$depschecked" ] || pkg install ${pkgs} || fail "install packages" | 30 | [ -f "$depschecked" ] || pkg install ${pkgs} || fail "install packages" |
2128 | 29 | touch $depschecked | 31 | touch $depschecked |
2129 | 30 | 32 | ||
2130 | 31 | # Required but unavailable port/pkg: py27-jsonpatch py27-jsonpointer | ||
2131 | 32 | # Luckily, the install step will take care of this by installing it from pypi... | ||
2132 | 33 | |||
2133 | 34 | # Build the code and install in /usr/local/: | 33 | # Build the code and install in /usr/local/: |
2136 | 35 | python setup.py build | 34 | python2.7 setup.py build |
2137 | 36 | python setup.py install -O1 --skip-build --prefix /usr/local/ --init-system sysvinit_freebsd | 35 | python2.7 setup.py install -O1 --skip-build --prefix /usr/local/ --init-system sysvinit_freebsd |
2138 | 37 | 36 | ||
2139 | 38 | # Enable cloud-init in /etc/rc.conf: | 37 | # Enable cloud-init in /etc/rc.conf: |
2140 | 39 | sed -i.bak -e "/cloudinit_enable=.*/d" /etc/rc.conf | 38 | sed -i.bak -e "/cloudinit_enable=.*/d" /etc/rc.conf |
2141 | diff --git a/tools/ds-identify b/tools/ds-identify | |||
2142 | index b78b273..6518901 100755 | |||
2143 | --- a/tools/ds-identify | |||
2144 | +++ b/tools/ds-identify | |||
2145 | @@ -620,7 +620,7 @@ dscheck_MAAS() { | |||
2146 | 620 | } | 620 | } |
2147 | 621 | 621 | ||
2148 | 622 | dscheck_NoCloud() { | 622 | dscheck_NoCloud() { |
2150 | 623 | local fslabel="cidata" d="" | 623 | local fslabel="cidata CIDATA" d="" |
2151 | 624 | case " ${DI_KERNEL_CMDLINE} " in | 624 | case " ${DI_KERNEL_CMDLINE} " in |
2152 | 625 | *\ ds=nocloud*) return ${DS_FOUND};; | 625 | *\ ds=nocloud*) return ${DS_FOUND};; |
2153 | 626 | esac | 626 | esac |
2154 | @@ -632,9 +632,10 @@ dscheck_NoCloud() { | |||
2155 | 632 | check_seed_dir "$d" meta-data user-data && return ${DS_FOUND} | 632 | check_seed_dir "$d" meta-data user-data && return ${DS_FOUND} |
2156 | 633 | check_writable_seed_dir "$d" meta-data user-data && return ${DS_FOUND} | 633 | check_writable_seed_dir "$d" meta-data user-data && return ${DS_FOUND} |
2157 | 634 | done | 634 | done |
2159 | 635 | if has_fs_with_label "${fslabel}"; then | 635 | if has_fs_with_label $fslabel; then |
2160 | 636 | return ${DS_FOUND} | 636 | return ${DS_FOUND} |
2161 | 637 | fi | 637 | fi |
2162 | 638 | |||
2163 | 638 | return ${DS_NOT_FOUND} | 639 | return ${DS_NOT_FOUND} |
2164 | 639 | } | 640 | } |
2165 | 640 | 641 | ||
2166 | @@ -762,7 +763,7 @@ is_cdrom_ovf() { | |||
2167 | 762 | 763 | ||
2168 | 763 | # explicitly skip known labels of other types. rd_rdfe is azure. | 764 | # explicitly skip known labels of other types. rd_rdfe is azure. |
2169 | 764 | case "$label" in | 765 | case "$label" in |
2171 | 765 | config-2|CONFIG-2|rd_rdfe_stable*|cidata) return 1;; | 766 | config-2|CONFIG-2|rd_rdfe_stable*|cidata|CIDATA) return 1;; |
2172 | 766 | esac | 767 | esac |
2173 | 767 | 768 | ||
2174 | 768 | local idstr="http://schemas.dmtf.org/ovf/environment/1" | 769 | local idstr="http://schemas.dmtf.org/ovf/environment/1" |
2175 | diff --git a/tools/read-version b/tools/read-version | |||
2176 | index e69c2ce..6dca659 100755 | |||
2177 | --- a/tools/read-version | |||
2178 | +++ b/tools/read-version | |||
2179 | @@ -71,9 +71,12 @@ if is_gitdir(_tdir) and which("git"): | |||
2180 | 71 | flags = ['--tags'] | 71 | flags = ['--tags'] |
2181 | 72 | cmd = ['git', 'describe', '--abbrev=8', '--match=[0-9]*'] + flags | 72 | cmd = ['git', 'describe', '--abbrev=8', '--match=[0-9]*'] + flags |
2182 | 73 | 73 | ||
2184 | 74 | version = tiny_p(cmd).strip() | 74 | try: |
2185 | 75 | version = tiny_p(cmd).strip() | ||
2186 | 76 | except RuntimeError: | ||
2187 | 77 | version = None | ||
2188 | 75 | 78 | ||
2190 | 76 | if not version.startswith(src_version): | 79 | if version is None or not version.startswith(src_version): |
2191 | 77 | sys.stderr.write("git describe version (%s) differs from " | 80 | sys.stderr.write("git describe version (%s) differs from " |
2192 | 78 | "cloudinit.version (%s)\n" % (version, src_version)) | 81 | "cloudinit.version (%s)\n" % (version, src_version)) |
2193 | 79 | sys.stderr.write( | 82 | sys.stderr.write( |
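The tools/read-version hunk above stops the tool from crashing when git describe fails (for example, in a checkout with no reachable tags); the failure now falls through to the existing version-mismatch report. A sketch of the same pattern, using subprocess in place of the tool's own tiny_p helper and an invented stand-in for the packaged version string:

    import subprocess

    def describe_version():
        cmd = ['git', 'describe', '--abbrev=8', '--match=[0-9]*', '--tags']
        try:
            return subprocess.check_output(cmd).decode('utf-8').strip()
        except (subprocess.CalledProcessError, OSError):
            return None  # git missing, not a git dir, or no matching tags

    version = describe_version()
    src_version = '19.1'  # stand-in for the packaged cloud-init version
    if version is None or not version.startswith(src_version):
        print("git describe version (%s) differs from cloudinit.version (%s)"
              % (version, src_version))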
PASSED: Continuous integration, rev:40bd980d11dc1da884ce6e55c60518c3c4ebe106
https://jenkins.ubuntu.com/server/job/cloud-init-ci/718/
Executed test runs:
SUCCESS: Checkout
SUCCESS: Unit & Style Tests
SUCCESS: Ubuntu LTS: Build
SUCCESS: Ubuntu LTS: Integration
IN_PROGRESS: Declarative: Post Actions
Click here to trigger a rebuild:
https://jenkins.ubuntu.com/server/job/cloud-init-ci/718/rebuild