Merge ~chad.smith/cloud-init:ubuntu/xenial into cloud-init:ubuntu/xenial
Status: Merged
Merged at revision: 421a036c999c3c1ad12872e6509315472089d53d
Proposed branch: ~chad.smith/cloud-init:ubuntu/xenial
Merge into: cloud-init:ubuntu/xenial
Diff against target: 2193 lines (+1247/-204), 38 files modified:
  ChangeLog (+117/-0)
  cloudinit/config/cc_apt_configure.py (+1/-1)
  cloudinit/config/cc_mounts.py (+11/-0)
  cloudinit/net/sysconfig.py (+4/-2)
  cloudinit/net/tests/test_init.py (+1/-1)
  cloudinit/reporting/handlers.py (+57/-60)
  cloudinit/sources/DataSourceAzure.py (+11/-6)
  cloudinit/sources/DataSourceCloudStack.py (+1/-1)
  cloudinit/sources/DataSourceConfigDrive.py (+2/-5)
  cloudinit/sources/DataSourceEc2.py (+1/-1)
  cloudinit/sources/DataSourceNoCloud.py (+3/-1)
  cloudinit/sources/DataSourceScaleway.py (+1/-2)
  cloudinit/sources/__init__.py (+3/-3)
  cloudinit/sources/helpers/azure.py (+11/-3)
  cloudinit/sources/tests/test_init.py (+0/-15)
  cloudinit/util.py (+2/-13)
  cloudinit/version.py (+1/-1)
  debian/changelog (+48/-2)
  debian/patches/azure-apply-network-config-false.patch (+1/-1)
  debian/patches/azure-use-walinux-agent.patch (+1/-1)
  debian/patches/series (+1/-0)
  debian/patches/ubuntu-advantage-revert-tip.patch (+735/-0)
  doc/rtd/topics/datasources/nocloud.rst (+1/-1)
  packages/redhat/cloud-init.spec.in (+3/-1)
  packages/suse/cloud-init.spec.in (+3/-1)
  setup.py (+2/-1)
  tests/cloud_tests/releases.yaml (+16/-0)
  tests/unittests/test_datasource/test_azure.py (+10/-3)
  tests/unittests/test_datasource/test_azure_helper.py (+7/-2)
  tests/unittests/test_datasource/test_nocloud.py (+42/-0)
  tests/unittests/test_datasource/test_scaleway.py (+0/-7)
  tests/unittests/test_ds_identify.py (+17/-0)
  tests/unittests/test_handler/test_handler_mounts.py (+29/-1)
  tests/unittests/test_net.py (+42/-3)
  tests/unittests/test_reporting_hyperv.py (+49/-55)
  tools/build-on-freebsd (+4/-5)
  tools/ds-identify (+4/-3)
  tools/read-version (+5/-2)
Related bugs: (none)
Reviewers:
  Ryan Harper: Approve
  Server Team CI bot (continuous-integration): Approve
Commit message
New upstream snapshot for SRU into xenial.
QUESTION: we changed the ubuntu-advantage cloud-config module in an incompatible way, on the expectation that a new ubuntu-advantage-tools (>= 19.1) will be SRU'd into xenial. That change is reverted in this branch with
debian/patches/ubuntu-advantage-revert-tip.patch.
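
For reference, a minimal sketch (Python with PyYAML, purely illustrative; the keys and example values are taken from the module schema shown in the diff below) contrasting the two incompatible cloud-config styles that motivate the revert patch:

import yaml

# New-style config expected by the rewritten cc_ubuntu_advantage module (19.1):
# attach with a contract token, optionally enabling named services.
new_style = yaml.safe_load("""
ubuntu_advantage:
  token: <ua_contract_token>
  enable:
    - esm
""")

# Old-style config that ubuntu-advantage-tools < 19.1 on xenial still expects:
# a 'commands' dict/list of raw ubuntu-advantage invocations.
old_style = yaml.safe_load("""
ubuntu-advantage:
  commands:
    00: ubuntu-advantage enable-esm <token>
""")

print(sorted(new_style), sorted(old_style))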
Description of the change


Server Team CI bot (server-team-bot) wrote:
PASSED: Continuous integration, rev:421a036c999
https:/
Executed test runs:
SUCCESS: Checkout
SUCCESS: Unit & Style Tests
SUCCESS: Ubuntu LTS: Build
SUCCESS: Ubuntu LTS: Integration
IN_PROGRESS: Declarative: Post Actions

Ryan Harper (raharper) wrote:
LGTM. Verified I get the same branch as what's proposed.
Preview Diff
1 | diff --git a/ChangeLog b/ChangeLog |
2 | index 8fa6fdd..bf48fd4 100644 |
3 | --- a/ChangeLog |
4 | +++ b/ChangeLog |
5 | @@ -1,3 +1,120 @@ |
6 | +19.1: |
7 | + - freebsd: add chpasswd pkg in the image [Gonéri Le Bouder] |
8 | + - tests: add Eoan release [Paride Legovini] |
9 | + - cc_mounts: check if mount -a on no-change fstab path |
10 | + [Jason Zions (MSFT)] (LP: #1825596) |
11 | + - replace remaining occurrences of LOG.warn [Daniel Watkins] |
12 | + - DataSourceAzure: Adjust timeout for polling IMDS [Anh Vo] |
13 | + - Azure: Changes to the Hyper-V KVP Reporter [Anh Vo] |
14 | + - git tests: no longer show warning about safe yaml. |
15 | + - tools/read-version: handle errors [Chad Miller] |
16 | + - net/sysconfig: only indicate available on known sysconfig distros |
17 | + (LP: #1819994) |
18 | + - packages: update rpm specs for new bash completion path |
19 | + [Daniel Watkins] (LP: #1825444) |
20 | + - test_azure: mock util.SeLinuxGuard where needed |
21 | + [Jason Zions (MSFT)] (LP: #1825253) |
22 | + - setup.py: install bash completion script in new location [Daniel Watkins] |
23 | + - mount_cb: do not pass sync and rw options to mount |
24 | + [Gonéri Le Bouder] (LP: #1645824) |
25 | + - cc_apt_configure: fix typo in apt documentation [Dominic Schlegel] |
26 | + - Revert "DataSource: move update_events from a class to an instance..." |
27 | + [Daniel Watkins] |
28 | + - Change DataSourceNoCloud to ignore file system label's case. |
29 | + [Risto Oikarinen] |
30 | + - cmd:main.py: Fix missing 'modules-init' key in modes dict |
31 | + [Antonio Romito] (LP: #1815109) |
32 | + - ubuntu_advantage: rewrite cloud-config module |
33 | + - Azure: Treat _unset network configuration as if it were absent |
34 | + [Jason Zions (MSFT)] (LP: #1823084) |
35 | + - DatasourceAzure: add additional logging for azure datasource [Anh Vo] |
36 | + - cloud_tests: fix apt_pipelining test-cases |
37 | + - Azure: Ensure platform random_seed is always serializable as JSON. |
38 | + [Jason Zions (MSFT)] |
39 | + - net/sysconfig: write out SUSE-compatible IPv6 config [Robert Schweikert] |
40 | + - tox: Update testenv for openSUSE Leap to 15.0 [Thomas Bechtold] |
41 | + - net: Fix ipv6 static routes when using eni renderer |
42 | + [Raphael Glon] (LP: #1818669) |
43 | + - Add ubuntu_drivers config module [Daniel Watkins] |
44 | + - doc: Refresh Azure walinuxagent docs [Daniel Watkins] |
45 | + - tox: bump pylint version to latest (2.3.1) [Daniel Watkins] |
46 | + - DataSource: move update_events from a class to an instance attribute |
47 | + [Daniel Watkins] (LP: #1819913) |
48 | + - net/sysconfig: Handle default route setup for dhcp configured NICs |
49 | + [Robert Schweikert] (LP: #1812117) |
50 | + - DataSourceEc2: update RELEASE_BLOCKER to be more accurate |
51 | + [Daniel Watkins] |
52 | + - cloud-init-per: POSIX sh does not support string subst, use sed |
53 | + (LP: #1819222) |
54 | + - Support locking user with usermod if passwd is not available. |
55 | + - Example for Microsoft Azure data disk added. [Anton Olifir] |
56 | + - clean: correctly determine the path for excluding seed directory |
57 | + [Daniel Watkins] (LP: #1818571) |
58 | + - helpers/openstack: Treat unknown link types as physical |
59 | + [Daniel Watkins] (LP: #1639263) |
60 | + - drop Python 2.6 support and our NIH version detection [Daniel Watkins] |
61 | + - tip-pylint: Fix assignment-from-return-none errors |
62 | + - net: append type:dhcp[46] only if dhcp[46] is True in v2 netconfig |
63 | + [Kurt Stieger] (LP: #1818032) |
64 | + - cc_apt_pipelining: stop disabling pipelining by default |
65 | + [Daniel Watkins] (LP: #1794982) |
66 | + - tests: fix some slow tests and some leaking state [Daniel Watkins] |
67 | + - util: don't determine string_types ourselves [Daniel Watkins] |
68 | + - cc_rsyslog: Escape possible nested set [Daniel Watkins] (LP: #1816967) |
69 | + - Enable encrypted_data_bag_secret support for Chef |
70 | + [Eric Williams] (LP: #1817082) |
71 | + - azure: Filter list of ssh keys pulled from fabric [Jason Zions (MSFT)] |
72 | + - doc: update merging doc with fixes and some additional details/examples |
73 | + - tests: integration test failure summary to use traceback if empty error |
74 | + - This is to fix https://bugs.launchpad.net/cloud-init/+bug/1812676 |
75 | + [Vitaly Kuznetsov] |
76 | + - EC2: Rewrite network config on AWS Classic instances every boot |
77 | + [Guilherme G. Piccoli] (LP: #1802073) |
78 | + - netinfo: Adjust ifconfig output parsing for FreeBSD ipv6 entries |
79 | + (LP: #1779672) |
80 | + - netplan: Don't render yaml aliases when dumping netplan (LP: #1815051) |
81 | + - add PyCharm IDE .idea/ path to .gitignore [Dominic Schlegel] |
82 | + - correct grammar issue in instance metadata documentation |
83 | + [Dominic Schlegel] (LP: #1802188) |
84 | + - clean: cloud-init clean should not trace when run from within cloud_dir |
85 | + (LP: #1795508) |
86 | + - Resolve flake8 comparison and pycodestyle over-ident issues |
87 | + [Paride Legovini] |
88 | + - opennebula: also exclude epochseconds from changed environment vars |
89 | + (LP: #1813641) |
90 | + - systemd: Render generator from template to account for system |
91 | + differences. [Robert Schweikert] |
92 | + - sysconfig: On SUSE, use STARTMODE instead of ONBOOT |
93 | + [Robert Schweikert] (LP: #1799540) |
94 | + - flake8: use ==/!= to compare str, bytes, and int literals |
95 | + [Paride Legovini] |
96 | + - opennebula: exclude EPOCHREALTIME as known bash env variable with a |
97 | + delta (LP: #1813383) |
98 | + - tox: fix disco httpretty dependencies for py37 (LP: #1813361) |
99 | + - run-container: uncomment baseurl in yum.repos.d/*.repo when using a |
100 | + proxy [Paride Legovini] |
101 | + - lxd: install zfs-linux instead of zfs meta package |
102 | + [Johnson Shi] (LP: #1799779) |
103 | + - net/sysconfig: do not write a resolv.conf file with only the header. |
104 | + [Robert Schweikert] |
105 | + - net: Make sysconfig renderer compatible with Network Manager. |
106 | + [Eduardo Otubo] |
107 | + - cc_set_passwords: Fix regex when parsing hashed passwords |
108 | + [Marlin Cremers] (LP: #1811446) |
109 | + - net: Wait for dhclient to daemonize before reading lease file |
110 | + [Jason Zions] (LP: #1794399) |
111 | + - [Azure] Increase retries when talking to Wireserver during metadata walk |
112 | + [Jason Zions] |
113 | + - Add documentation on adding a datasource. |
114 | + - doc: clean up some datasource documentation. |
115 | + - ds-identify: fix wrong variable name in ovf_vmware_transport_guestinfo. |
116 | + - Scaleway: Support ssh keys provided inside an instance tag. [PORTE Loïc] |
117 | + - OVF: simplify expected return values of transport functions. |
118 | + - Vmware: Add support for the com.vmware.guestInfo OVF transport. |
119 | + (LP: #1807466) |
120 | + - HACKING.rst: change contact info to Josh Powers |
121 | + - Update to pylint 2.2.2. |
122 | + |
123 | 18.5: |
124 | - tests: add Disco release [Joshua Powers] |
125 | - net: render 'metric' values in per-subnet routes (LP: #1805871) |
126 | diff --git a/cloudinit/config/cc_apt_configure.py b/cloudinit/config/cc_apt_configure.py |
127 | index e18944e..919d199 100644 |
128 | --- a/cloudinit/config/cc_apt_configure.py |
129 | +++ b/cloudinit/config/cc_apt_configure.py |
130 | @@ -127,7 +127,7 @@ to ``^[\\w-]+:\\w`` |
131 | |
132 | Source list entries can be specified as a dictionary under the ``sources`` |
133 | config key, with key in the dict representing a different source file. The key |
134 | -The key of each source entry will be used as an id that can be referenced in |
135 | +of each source entry will be used as an id that can be referenced in |
136 | other config entries, as well as the filename for the source's configuration |
137 | under ``/etc/apt/sources.list.d``. If the name does not end with ``.list``, |
138 | it will be appended. If there is no configuration for a key in ``sources``, no |
139 | diff --git a/cloudinit/config/cc_mounts.py b/cloudinit/config/cc_mounts.py |
140 | index 339baba..123ffb8 100644 |
141 | --- a/cloudinit/config/cc_mounts.py |
142 | +++ b/cloudinit/config/cc_mounts.py |
143 | @@ -439,6 +439,7 @@ def handle(_name, cfg, cloud, log, _args): |
144 | |
145 | cc_lines = [] |
146 | needswap = False |
147 | + need_mount_all = False |
148 | dirs = [] |
149 | for line in actlist: |
150 | # write 'comment' in the fs_mntops, entry, claiming this |
151 | @@ -449,11 +450,18 @@ def handle(_name, cfg, cloud, log, _args): |
152 | dirs.append(line[1]) |
153 | cc_lines.append('\t'.join(line)) |
154 | |
155 | + mount_points = [v['mountpoint'] for k, v in util.mounts().items() |
156 | + if 'mountpoint' in v] |
157 | for d in dirs: |
158 | try: |
159 | util.ensure_dir(d) |
160 | except Exception: |
161 | util.logexc(log, "Failed to make '%s' config-mount", d) |
162 | + # dirs is list of directories on which a volume should be mounted. |
163 | + # If any of them does not already show up in the list of current |
164 | + # mount points, we will definitely need to do mount -a. |
165 | + if not need_mount_all and d not in mount_points: |
166 | + need_mount_all = True |
167 | |
168 | sadds = [WS.sub(" ", n) for n in cc_lines] |
169 | sdrops = [WS.sub(" ", n) for n in fstab_removed] |
170 | @@ -473,6 +481,9 @@ def handle(_name, cfg, cloud, log, _args): |
171 | log.debug("No changes to /etc/fstab made.") |
172 | else: |
173 | log.debug("Changes to fstab: %s", sops) |
174 | + need_mount_all = True |
175 | + |
176 | + if need_mount_all: |
177 | activate_cmds.append(["mount", "-a"]) |
178 | if uses_systemd: |
179 | activate_cmds.append(["systemctl", "daemon-reload"]) |
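
As a quick illustration of the cc_mounts change above, here is a minimal, self-contained sketch of the new decision (the helper name and the plain-dict stand-in for util.mounts() are assumptions for illustration): "mount -a" is scheduled only when a configured mount point is not already mounted or when /etc/fstab actually changed.

def needs_mount_all(config_dirs, current_mounts, fstab_changed):
    """Return True when 'mount -a' should be added to the activate commands."""
    mount_points = [v['mountpoint'] for v in current_mounts.values()
                    if 'mountpoint' in v]
    # Any configured directory that is not yet a mount point forces mount -a.
    if any(d not in mount_points for d in config_dirs):
        return True
    # Any actual change written to /etc/fstab also forces mount -a.
    return fstab_changed

# Example: /mnt is configured but not currently mounted, so mount -a is needed.
print(needs_mount_all(['/mnt'], {'/dev/sda1': {'mountpoint': '/'}}, False))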
180 | diff --git a/cloudinit/net/sysconfig.py b/cloudinit/net/sysconfig.py |
181 | index 0998392..a47da0a 100644 |
182 | --- a/cloudinit/net/sysconfig.py |
183 | +++ b/cloudinit/net/sysconfig.py |
184 | @@ -18,6 +18,8 @@ from .network_state import ( |
185 | |
186 | LOG = logging.getLogger(__name__) |
187 | NM_CFG_FILE = "/etc/NetworkManager/NetworkManager.conf" |
188 | +KNOWN_DISTROS = [ |
189 | + 'opensuse', 'sles', 'suse', 'redhat', 'fedora', 'centos'] |
190 | |
191 | |
192 | def _make_header(sep='#'): |
193 | @@ -717,8 +719,8 @@ class Renderer(renderer.Renderer): |
194 | def available(target=None): |
195 | sysconfig = available_sysconfig(target=target) |
196 | nm = available_nm(target=target) |
197 | - |
198 | - return any([nm, sysconfig]) |
199 | + return (util.get_linux_distro()[0] in KNOWN_DISTROS |
200 | + and any([nm, sysconfig])) |
201 | |
202 | |
203 | def available_sysconfig(target=None): |
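
A minimal sketch of the availability gate introduced above: the sysconfig renderer now reports itself available only on known sysconfig-style distros, so a distro outside the list never selects it even when the NetworkManager or sysconfig config paths are usable. The boolean parameters stand in for available_nm()/available_sysconfig().

KNOWN_DISTROS = ['opensuse', 'sles', 'suse', 'redhat', 'fedora', 'centos']

def renderer_available(distro_name, nm_available, sysconfig_available):
    # Distro must be a known sysconfig user AND one of the config paths usable.
    return distro_name in KNOWN_DISTROS and any([nm_available, sysconfig_available])

print(renderer_available('ubuntu', True, True))   # False: not a sysconfig distro
print(renderer_available('centos', False, True))  # True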
204 | diff --git a/cloudinit/net/tests/test_init.py b/cloudinit/net/tests/test_init.py |
205 | index f55c31e..6d2affe 100644 |
206 | --- a/cloudinit/net/tests/test_init.py |
207 | +++ b/cloudinit/net/tests/test_init.py |
208 | @@ -7,11 +7,11 @@ import mock |
209 | import os |
210 | import requests |
211 | import textwrap |
212 | -import yaml |
213 | |
214 | import cloudinit.net as net |
215 | from cloudinit.util import ensure_file, write_file, ProcessExecutionError |
216 | from cloudinit.tests.helpers import CiTestCase, HttprettyTestCase |
217 | +from cloudinit import safeyaml as yaml |
218 | |
219 | |
220 | class TestSysDevPath(CiTestCase): |
221 | diff --git a/cloudinit/reporting/handlers.py b/cloudinit/reporting/handlers.py |
222 | old mode 100644 |
223 | new mode 100755 |
224 | index 6d23558..10165ae |
225 | --- a/cloudinit/reporting/handlers.py |
226 | +++ b/cloudinit/reporting/handlers.py |
227 | @@ -5,7 +5,6 @@ import fcntl |
228 | import json |
229 | import six |
230 | import os |
231 | -import re |
232 | import struct |
233 | import threading |
234 | import time |
235 | @@ -14,6 +13,7 @@ from cloudinit import log as logging |
236 | from cloudinit.registry import DictRegistry |
237 | from cloudinit import (url_helper, util) |
238 | from datetime import datetime |
239 | +from six.moves.queue import Empty as QueueEmptyError |
240 | |
241 | if six.PY2: |
242 | from multiprocessing.queues import JoinableQueue as JQueue |
243 | @@ -129,24 +129,50 @@ class HyperVKvpReportingHandler(ReportingHandler): |
244 | DESC_IDX_KEY = 'msg_i' |
245 | JSON_SEPARATORS = (',', ':') |
246 | KVP_POOL_FILE_GUEST = '/var/lib/hyperv/.kvp_pool_1' |
247 | + _already_truncated_pool_file = False |
248 | |
249 | def __init__(self, |
250 | kvp_file_path=KVP_POOL_FILE_GUEST, |
251 | event_types=None): |
252 | super(HyperVKvpReportingHandler, self).__init__() |
253 | self._kvp_file_path = kvp_file_path |
254 | + HyperVKvpReportingHandler._truncate_guest_pool_file( |
255 | + self._kvp_file_path) |
256 | + |
257 | self._event_types = event_types |
258 | self.q = JQueue() |
259 | - self.kvp_file = None |
260 | self.incarnation_no = self._get_incarnation_no() |
261 | self.event_key_prefix = u"{0}|{1}".format(self.EVENT_PREFIX, |
262 | self.incarnation_no) |
263 | - self._current_offset = 0 |
264 | self.publish_thread = threading.Thread( |
265 | target=self._publish_event_routine) |
266 | self.publish_thread.daemon = True |
267 | self.publish_thread.start() |
268 | |
269 | + @classmethod |
270 | + def _truncate_guest_pool_file(cls, kvp_file): |
271 | + """ |
272 | + Truncate the pool file if it has not been truncated since boot. |
273 | + This should be done exactly once for the file indicated by |
274 | + KVP_POOL_FILE_GUEST constant above. This method takes a filename |
275 | + so that we can use an arbitrary file during unit testing. |
276 | + Since KVP is a best-effort telemetry channel we only attempt to |
277 | + truncate the file once and only if the file has not been modified |
278 | + since boot. Additional truncation can lead to loss of existing |
279 | + KVPs. |
280 | + """ |
281 | + if cls._already_truncated_pool_file: |
282 | + return |
283 | + boot_time = time.time() - float(util.uptime()) |
284 | + try: |
285 | + if os.path.getmtime(kvp_file) < boot_time: |
286 | + with open(kvp_file, "w"): |
287 | + pass |
288 | + except (OSError, IOError) as e: |
289 | + LOG.warning("failed to truncate kvp pool file, %s", e) |
290 | + finally: |
291 | + cls._already_truncated_pool_file = True |
292 | + |
293 | def _get_incarnation_no(self): |
294 | """ |
295 | use the time passed as the incarnation number. |
296 | @@ -162,20 +188,15 @@ class HyperVKvpReportingHandler(ReportingHandler): |
297 | |
298 | def _iterate_kvps(self, offset): |
299 | """iterate the kvp file from the current offset.""" |
300 | - try: |
301 | - with open(self._kvp_file_path, 'rb+') as f: |
302 | - self.kvp_file = f |
303 | - fcntl.flock(f, fcntl.LOCK_EX) |
304 | - f.seek(offset) |
305 | + with open(self._kvp_file_path, 'rb') as f: |
306 | + fcntl.flock(f, fcntl.LOCK_EX) |
307 | + f.seek(offset) |
308 | + record_data = f.read(self.HV_KVP_RECORD_SIZE) |
309 | + while len(record_data) == self.HV_KVP_RECORD_SIZE: |
310 | + kvp_item = self._decode_kvp_item(record_data) |
311 | + yield kvp_item |
312 | record_data = f.read(self.HV_KVP_RECORD_SIZE) |
313 | - while len(record_data) == self.HV_KVP_RECORD_SIZE: |
314 | - self._current_offset += self.HV_KVP_RECORD_SIZE |
315 | - kvp_item = self._decode_kvp_item(record_data) |
316 | - yield kvp_item |
317 | - record_data = f.read(self.HV_KVP_RECORD_SIZE) |
318 | - fcntl.flock(f, fcntl.LOCK_UN) |
319 | - finally: |
320 | - self.kvp_file = None |
321 | + fcntl.flock(f, fcntl.LOCK_UN) |
322 | |
323 | def _event_key(self, event): |
324 | """ |
325 | @@ -207,23 +228,13 @@ class HyperVKvpReportingHandler(ReportingHandler): |
326 | |
327 | return {'key': k, 'value': v} |
328 | |
329 | - def _update_kvp_item(self, record_data): |
330 | - if self.kvp_file is None: |
331 | - raise ReportException( |
332 | - "kvp file '{0}' not opened." |
333 | - .format(self._kvp_file_path)) |
334 | - self.kvp_file.seek(-self.HV_KVP_RECORD_SIZE, 1) |
335 | - self.kvp_file.write(record_data) |
336 | - |
337 | def _append_kvp_item(self, record_data): |
338 | - with open(self._kvp_file_path, 'rb+') as f: |
339 | + with open(self._kvp_file_path, 'ab') as f: |
340 | fcntl.flock(f, fcntl.LOCK_EX) |
341 | - # seek to end of the file |
342 | - f.seek(0, 2) |
343 | - f.write(record_data) |
344 | + for data in record_data: |
345 | + f.write(data) |
346 | f.flush() |
347 | fcntl.flock(f, fcntl.LOCK_UN) |
348 | - self._current_offset = f.tell() |
349 | |
350 | def _break_down(self, key, meta_data, description): |
351 | del meta_data[self.MSG_KEY] |
352 | @@ -279,40 +290,26 @@ class HyperVKvpReportingHandler(ReportingHandler): |
353 | |
354 | def _publish_event_routine(self): |
355 | while True: |
356 | + items_from_queue = 0 |
357 | try: |
358 | event = self.q.get(block=True) |
359 | - need_append = True |
360 | + items_from_queue += 1 |
361 | + encoded_data = [] |
362 | + while event is not None: |
363 | + encoded_data += self._encode_event(event) |
364 | + try: |
365 | + # get all the rest of the events in the queue |
366 | + event = self.q.get(block=False) |
367 | + items_from_queue += 1 |
368 | + except QueueEmptyError: |
369 | + event = None |
370 | try: |
371 | - if not os.path.exists(self._kvp_file_path): |
372 | - LOG.warning( |
373 | - "skip writing events %s to %s. file not present.", |
374 | - event.as_string(), |
375 | - self._kvp_file_path) |
376 | - encoded_event = self._encode_event(event) |
377 | - # for each encoded_event |
378 | - for encoded_data in (encoded_event): |
379 | - for kvp in self._iterate_kvps(self._current_offset): |
380 | - match = ( |
381 | - re.match( |
382 | - r"^{0}\|(\d+)\|.+" |
383 | - .format(self.EVENT_PREFIX), |
384 | - kvp['key'] |
385 | - )) |
386 | - if match: |
387 | - match_groups = match.groups(0) |
388 | - if int(match_groups[0]) < self.incarnation_no: |
389 | - need_append = False |
390 | - self._update_kvp_item(encoded_data) |
391 | - continue |
392 | - if need_append: |
393 | - self._append_kvp_item(encoded_data) |
394 | - except IOError as e: |
395 | - LOG.warning( |
396 | - "failed posting event to kvp: %s e:%s", |
397 | - event.as_string(), e) |
398 | + self._append_kvp_item(encoded_data) |
399 | + except (OSError, IOError) as e: |
400 | + LOG.warning("failed posting events to kvp, %s", e) |
401 | finally: |
402 | - self.q.task_done() |
403 | - |
404 | + for _ in range(items_from_queue): |
405 | + self.q.task_done() |
406 | # when main process exits, q.get() will through EOFError |
407 | # indicating we should exit this thread. |
408 | except EOFError: |
409 | @@ -322,7 +319,7 @@ class HyperVKvpReportingHandler(ReportingHandler): |
410 | # if the kvp pool already contains a chunk of data, |
411 | # so defer it to another thread. |
412 | def publish_event(self, event): |
413 | - if (not self._event_types or event.event_type in self._event_types): |
414 | + if not self._event_types or event.event_type in self._event_types: |
415 | self.q.put(event) |
416 | |
417 | def flush(self): |
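
To summarize the main behavioural change in the Hyper-V KVP handler above, here is a minimal standalone sketch of the truncate-once-per-boot guard (the module-level flag and simplified uptime handling are illustrative assumptions; KVP is treated as best-effort, so errors are swallowed):

import os
import time

_already_truncated = False

def truncate_guest_pool_file(kvp_file, uptime_seconds):
    """Truncate kvp_file at most once, and only if it predates this boot."""
    global _already_truncated
    if _already_truncated:
        return
    boot_time = time.time() - uptime_seconds
    try:
        if os.path.getmtime(kvp_file) < boot_time:
            # File was last written before boot: safe to start fresh.
            open(kvp_file, "w").close()
    except OSError:
        pass  # best-effort telemetry; never fail boot for this
    finally:
        _already_truncated = True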
418 | diff --git a/cloudinit/sources/DataSourceAzure.py b/cloudinit/sources/DataSourceAzure.py |
419 | index 76b1661..b7440c1 100755 |
420 | --- a/cloudinit/sources/DataSourceAzure.py |
421 | +++ b/cloudinit/sources/DataSourceAzure.py |
422 | @@ -57,7 +57,12 @@ AZURE_CHASSIS_ASSET_TAG = '7783-7084-3265-9085-8269-3286-77' |
423 | REPROVISION_MARKER_FILE = "/var/lib/cloud/data/poll_imds" |
424 | REPORTED_READY_MARKER_FILE = "/var/lib/cloud/data/reported_ready" |
425 | AGENT_SEED_DIR = '/var/lib/waagent' |
426 | + |
427 | +# In the event where the IMDS primary server is not |
428 | +# available, it takes 1s to fallback to the secondary one |
429 | +IMDS_TIMEOUT_IN_SECONDS = 2 |
430 | IMDS_URL = "http://169.254.169.254/metadata/" |
431 | + |
432 | PLATFORM_ENTROPY_SOURCE = "/sys/firmware/acpi/tables/OEM0" |
433 | |
434 | # List of static scripts and network config artifacts created by |
435 | @@ -407,7 +412,7 @@ class DataSourceAzure(sources.DataSource): |
436 | elif cdev.startswith("/dev/"): |
437 | if util.is_FreeBSD(): |
438 | ret = util.mount_cb(cdev, load_azure_ds_dir, |
439 | - mtype="udf", sync=False) |
440 | + mtype="udf") |
441 | else: |
442 | ret = util.mount_cb(cdev, load_azure_ds_dir) |
443 | else: |
444 | @@ -582,9 +587,9 @@ class DataSourceAzure(sources.DataSource): |
445 | return |
446 | self._ephemeral_dhcp_ctx.clean_network() |
447 | else: |
448 | - return readurl(url, timeout=1, headers=headers, |
449 | - exception_cb=exc_cb, infinite=True, |
450 | - log_req_resp=False).contents |
451 | + return readurl(url, timeout=IMDS_TIMEOUT_IN_SECONDS, |
452 | + headers=headers, exception_cb=exc_cb, |
453 | + infinite=True, log_req_resp=False).contents |
454 | except UrlError: |
455 | # Teardown our EphemeralDHCPv4 context on failure as we retry |
456 | self._ephemeral_dhcp_ctx.clean_network() |
457 | @@ -1291,8 +1296,8 @@ def _get_metadata_from_imds(retries): |
458 | headers = {"Metadata": "true"} |
459 | try: |
460 | response = readurl( |
461 | - url, timeout=1, headers=headers, retries=retries, |
462 | - exception_cb=retry_on_url_exc) |
463 | + url, timeout=IMDS_TIMEOUT_IN_SECONDS, headers=headers, |
464 | + retries=retries, exception_cb=retry_on_url_exc) |
465 | except Exception as e: |
466 | LOG.debug('Ignoring IMDS instance metadata: %s', e) |
467 | return {} |
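
A minimal sketch of the IMDS timeout adjustment above, using requests directly rather than cloud-init's readurl as an illustration: each attempt now allows 2 seconds so the roughly 1 second fallback from the primary IMDS server to the secondary fits within a single request. The url argument is whatever metadata path the caller needs.

import requests

IMDS_TIMEOUT_IN_SECONDS = 2  # primary -> secondary fallback takes ~1s
IMDS_URL = "http://169.254.169.254/metadata/"

def fetch_imds(url):
    try:
        resp = requests.get(url, headers={"Metadata": "true"},
                            timeout=IMDS_TIMEOUT_IN_SECONDS)
        return resp.json()
    except requests.RequestException:
        # Mirrors the datasource: IMDS failures are logged and ignored.
        return {}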
468 | diff --git a/cloudinit/sources/DataSourceCloudStack.py b/cloudinit/sources/DataSourceCloudStack.py |
469 | index d4b758f..f185dc7 100644 |
470 | --- a/cloudinit/sources/DataSourceCloudStack.py |
471 | +++ b/cloudinit/sources/DataSourceCloudStack.py |
472 | @@ -95,7 +95,7 @@ class DataSourceCloudStack(sources.DataSource): |
473 | start_time = time.time() |
474 | url = uhelp.wait_for_url( |
475 | urls=urls, max_wait=url_params.max_wait_seconds, |
476 | - timeout=url_params.timeout_seconds, status_cb=LOG.warn) |
477 | + timeout=url_params.timeout_seconds, status_cb=LOG.warning) |
478 | |
479 | if url: |
480 | LOG.debug("Using metadata source: '%s'", url) |
481 | diff --git a/cloudinit/sources/DataSourceConfigDrive.py b/cloudinit/sources/DataSourceConfigDrive.py |
482 | index 564e3eb..571d30d 100644 |
483 | --- a/cloudinit/sources/DataSourceConfigDrive.py |
484 | +++ b/cloudinit/sources/DataSourceConfigDrive.py |
485 | @@ -72,15 +72,12 @@ class DataSourceConfigDrive(openstack.SourceMixin, sources.DataSource): |
486 | dslist = self.sys_cfg.get('datasource_list') |
487 | for dev in find_candidate_devs(dslist=dslist): |
488 | try: |
489 | - # Set mtype if freebsd and turn off sync |
490 | - if dev.startswith("/dev/cd"): |
491 | + if util.is_FreeBSD() and dev.startswith("/dev/cd"): |
492 | mtype = "cd9660" |
493 | - sync = False |
494 | else: |
495 | mtype = None |
496 | - sync = True |
497 | results = util.mount_cb(dev, read_config_drive, |
498 | - mtype=mtype, sync=sync) |
499 | + mtype=mtype) |
500 | found = dev |
501 | except openstack.NonReadable: |
502 | pass |
503 | diff --git a/cloudinit/sources/DataSourceEc2.py b/cloudinit/sources/DataSourceEc2.py |
504 | index ac28f1d..5c017bf 100644 |
505 | --- a/cloudinit/sources/DataSourceEc2.py |
506 | +++ b/cloudinit/sources/DataSourceEc2.py |
507 | @@ -208,7 +208,7 @@ class DataSourceEc2(sources.DataSource): |
508 | start_time = time.time() |
509 | url = uhelp.wait_for_url( |
510 | urls=urls, max_wait=url_params.max_wait_seconds, |
511 | - timeout=url_params.timeout_seconds, status_cb=LOG.warn) |
512 | + timeout=url_params.timeout_seconds, status_cb=LOG.warning) |
513 | |
514 | if url: |
515 | self.metadata_address = url2base[url] |
516 | diff --git a/cloudinit/sources/DataSourceNoCloud.py b/cloudinit/sources/DataSourceNoCloud.py |
517 | index 6860f0c..fcf5d58 100644 |
518 | --- a/cloudinit/sources/DataSourceNoCloud.py |
519 | +++ b/cloudinit/sources/DataSourceNoCloud.py |
520 | @@ -106,7 +106,9 @@ class DataSourceNoCloud(sources.DataSource): |
521 | fslist = util.find_devs_with("TYPE=vfat") |
522 | fslist.extend(util.find_devs_with("TYPE=iso9660")) |
523 | |
524 | - label_list = util.find_devs_with("LABEL=%s" % label) |
525 | + label_list = util.find_devs_with("LABEL=%s" % label.upper()) |
526 | + label_list.extend(util.find_devs_with("LABEL=%s" % label.lower())) |
527 | + |
528 | devlist = list(set(fslist) & set(label_list)) |
529 | devlist.sort(reverse=True) |
530 | |
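
A minimal sketch of the case-insensitive label lookup added above (the dict-based find_devs_with stand-in is an illustrative assumption; the real code queries block devices): candidates are collected for both the upper- and lower-cased label, so a "CIDATA" filesystem label matches even when the datasource asks for "cidata".

def find_devs_with(criteria, table):
    """Toy stand-in: return devices whose attribute list contains criteria."""
    return [dev for dev, attrs in table.items() if criteria in attrs]

def devs_for_label(label, table):
    devs = find_devs_with("LABEL=%s" % label.upper(), table)
    devs.extend(find_devs_with("LABEL=%s" % label.lower(), table))
    return sorted(set(devs), reverse=True)

table = {'/dev/sr0': ['TYPE=iso9660', 'LABEL=CIDATA']}
print(devs_for_label('cidata', table))  # ['/dev/sr0'] despite the lower-case query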
531 | diff --git a/cloudinit/sources/DataSourceScaleway.py b/cloudinit/sources/DataSourceScaleway.py |
532 | index 54bfc1f..b573b38 100644 |
533 | --- a/cloudinit/sources/DataSourceScaleway.py |
534 | +++ b/cloudinit/sources/DataSourceScaleway.py |
535 | @@ -171,11 +171,10 @@ def query_data_api(api_type, api_address, retries, timeout): |
536 | |
537 | class DataSourceScaleway(sources.DataSource): |
538 | dsname = "Scaleway" |
539 | + update_events = {'network': [EventType.BOOT_NEW_INSTANCE, EventType.BOOT]} |
540 | |
541 | def __init__(self, sys_cfg, distro, paths): |
542 | super(DataSourceScaleway, self).__init__(sys_cfg, distro, paths) |
543 | - self.update_events = { |
544 | - 'network': {EventType.BOOT_NEW_INSTANCE, EventType.BOOT}} |
545 | |
546 | self.ds_cfg = util.mergemanydict([ |
547 | util.get_cfg_by_path(sys_cfg, ["datasource", "Scaleway"], {}), |
548 | diff --git a/cloudinit/sources/__init__.py b/cloudinit/sources/__init__.py |
549 | index 1604932..e6966b3 100644 |
550 | --- a/cloudinit/sources/__init__.py |
551 | +++ b/cloudinit/sources/__init__.py |
552 | @@ -164,6 +164,9 @@ class DataSource(object): |
553 | # A datasource which supports writing network config on each system boot |
554 | # would call update_events['network'].add(EventType.BOOT). |
555 | |
556 | + # Default: generate network config on new instance id (first boot). |
557 | + update_events = {'network': set([EventType.BOOT_NEW_INSTANCE])} |
558 | + |
559 | # N-tuple listing default values for any metadata-related class |
560 | # attributes cached on an instance by a process_data runs. These attribute |
561 | # values are reset via clear_cached_attrs during any update_metadata call. |
562 | @@ -188,9 +191,6 @@ class DataSource(object): |
563 | self.vendordata = None |
564 | self.vendordata_raw = None |
565 | |
566 | - # Default: generate network config on new instance id (first boot). |
567 | - self.update_events = {'network': {EventType.BOOT_NEW_INSTANCE}} |
568 | - |
569 | self.ds_cfg = util.get_cfg_by_path( |
570 | self.sys_cfg, ("datasource", self.dsname), {}) |
571 | if not self.ds_cfg: |
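
To illustrate the revert above ("DataSource: move update_events from a class to an instance..."), a minimal sketch of the resulting pattern: the default lives on the class again, and a datasource that needs extra events (like Scaleway in this diff) overrides the attribute with its own set instead of mutating the shared default. The EventType enum here is a simplified stand-in.

from enum import Enum

class EventType(Enum):
    BOOT_NEW_INSTANCE = 'boot-new-instance'
    BOOT = 'boot'

class DataSource:
    # Default: regenerate network config only on first boot of a new instance.
    update_events = {'network': {EventType.BOOT_NEW_INSTANCE}}

class DataSourceScaleway(DataSource):
    # Override with a fresh set; do NOT .add() to the inherited default,
    # which would leak the extra event into every other datasource.
    update_events = {'network': {EventType.BOOT_NEW_INSTANCE, EventType.BOOT}}

print(DataSource.update_events['network'])
print(DataSourceScaleway.update_events['network'])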
572 | diff --git a/cloudinit/sources/helpers/azure.py b/cloudinit/sources/helpers/azure.py |
573 | index d3af05e..82c4c8c 100755 |
574 | --- a/cloudinit/sources/helpers/azure.py |
575 | +++ b/cloudinit/sources/helpers/azure.py |
576 | @@ -20,6 +20,9 @@ from cloudinit.reporting import events |
577 | |
578 | LOG = logging.getLogger(__name__) |
579 | |
580 | +# This endpoint matches the format as found in dhcp lease files, since this |
581 | +# value is applied if the endpoint can't be found within a lease file |
582 | +DEFAULT_WIRESERVER_ENDPOINT = "a8:3f:81:10" |
583 | |
584 | azure_ds_reporter = events.ReportEventStack( |
585 | name="azure-ds", |
586 | @@ -297,7 +300,12 @@ class WALinuxAgentShim(object): |
587 | @azure_ds_telemetry_reporter |
588 | def _get_value_from_leases_file(fallback_lease_file): |
589 | leases = [] |
590 | - content = util.load_file(fallback_lease_file) |
591 | + try: |
592 | + content = util.load_file(fallback_lease_file) |
593 | + except IOError as ex: |
594 | + LOG.error("Failed to read %s: %s", fallback_lease_file, ex) |
595 | + return None |
596 | + |
597 | LOG.debug("content is %s", content) |
598 | option_name = _get_dhcp_endpoint_option_name() |
599 | for line in content.splitlines(): |
600 | @@ -372,9 +380,9 @@ class WALinuxAgentShim(object): |
601 | fallback_lease_file) |
602 | value = WALinuxAgentShim._get_value_from_leases_file( |
603 | fallback_lease_file) |
604 | - |
605 | if value is None: |
606 | - raise ValueError('No endpoint found.') |
607 | + LOG.warning("No lease found; using default endpoint") |
608 | + value = DEFAULT_WIRESERVER_ENDPOINT |
609 | |
610 | endpoint_ip_address = WALinuxAgentShim.get_ip_from_lease_value(value) |
611 | LOG.debug('Azure endpoint found at %s', endpoint_ip_address) |
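
A minimal sketch of the fallback behaviour added in helpers/azure.py above (file handling simplified and the lease parsing reduced to a stub): a missing or unreadable lease file now yields the hard-coded DEFAULT_WIRESERVER_ENDPOINT instead of raising ValueError.

DEFAULT_WIRESERVER_ENDPOINT = "a8:3f:81:10"

def value_from_leases_file(path):
    try:
        with open(path) as f:
            content = f.read()
    except IOError:  # unreadable or missing lease file
        return None
    # Real code scans the lease content for the DHCP option carrying the
    # endpoint; a stub keeps this sketch self-contained.
    return content.strip() or None

def find_endpoint(lease_file):
    value = value_from_leases_file(lease_file)
    if value is None:
        # Static fallback instead of raising 'No endpoint found.'
        value = DEFAULT_WIRESERVER_ENDPOINT
    return value

print(find_endpoint('/nonexistent/lease'))  # a8:3f:81:10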
612 | diff --git a/cloudinit/sources/tests/test_init.py b/cloudinit/sources/tests/test_init.py |
613 | index cb1912b..6378e98 100644 |
614 | --- a/cloudinit/sources/tests/test_init.py |
615 | +++ b/cloudinit/sources/tests/test_init.py |
616 | @@ -575,21 +575,6 @@ class TestDataSource(CiTestCase): |
617 | " events: New instance first boot", |
618 | self.logs.getvalue()) |
619 | |
620 | - def test_data_sources_cant_mutate_update_events_for_others(self): |
621 | - """update_events shouldn't be changed for other DSes (LP: #1819913)""" |
622 | - |
623 | - class ModifyingDS(DataSource): |
624 | - |
625 | - def __init__(self, sys_cfg, distro, paths): |
626 | - # This mirrors what DataSourceAzure does which causes LP: |
627 | - # #1819913 |
628 | - DataSource.__init__(self, sys_cfg, distro, paths) |
629 | - self.update_events['network'].add(EventType.BOOT) |
630 | - |
631 | - before_update_events = copy.deepcopy(self.datasource.update_events) |
632 | - ModifyingDS(self.sys_cfg, self.distro, self.paths) |
633 | - self.assertEqual(before_update_events, self.datasource.update_events) |
634 | - |
635 | |
636 | class TestRedactSensitiveData(CiTestCase): |
637 | |
638 | diff --git a/cloudinit/util.py b/cloudinit/util.py |
639 | index 385f231..ea4199c 100644 |
640 | --- a/cloudinit/util.py |
641 | +++ b/cloudinit/util.py |
642 | @@ -1679,7 +1679,7 @@ def mounts(): |
643 | return mounted |
644 | |
645 | |
646 | -def mount_cb(device, callback, data=None, rw=False, mtype=None, sync=True, |
647 | +def mount_cb(device, callback, data=None, mtype=None, |
648 | update_env_for_mount=None): |
649 | """ |
650 | Mount the device, call method 'callback' passing the directory |
651 | @@ -1726,18 +1726,7 @@ def mount_cb(device, callback, data=None, rw=False, mtype=None, sync=True, |
652 | for mtype in mtypes: |
653 | mountpoint = None |
654 | try: |
655 | - mountcmd = ['mount'] |
656 | - mountopts = [] |
657 | - if rw: |
658 | - mountopts.append('rw') |
659 | - else: |
660 | - mountopts.append('ro') |
661 | - if sync: |
662 | - # This seems like the safe approach to do |
663 | - # (ie where this is on by default) |
664 | - mountopts.append("sync") |
665 | - if mountopts: |
666 | - mountcmd.extend(["-o", ",".join(mountopts)]) |
667 | + mountcmd = ['mount', '-o', 'ro'] |
668 | if mtype: |
669 | mountcmd.extend(['-t', mtype]) |
670 | mountcmd.append(device) |
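
A minimal sketch of the mount command mount_cb builds after the change above (the helper name is illustrative): the rw/sync knobs are gone, so every mount_cb mount is read-only, with only the filesystem type remaining optional.

def build_mount_cmd(device, mountpoint, mtype=None):
    cmd = ['mount', '-o', 'ro']  # always read-only now; no rw/sync options
    if mtype:
        cmd.extend(['-t', mtype])
    cmd.extend([device, mountpoint])
    return cmd

print(build_mount_cmd('/dev/sr0', '/mnt', mtype='udf'))
# ['mount', '-o', 'ro', '-t', 'udf', '/dev/sr0', '/mnt']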
671 | diff --git a/cloudinit/version.py b/cloudinit/version.py |
672 | index a2c5d43..ddcd436 100644 |
673 | --- a/cloudinit/version.py |
674 | +++ b/cloudinit/version.py |
675 | @@ -4,7 +4,7 @@ |
676 | # |
677 | # This file is part of cloud-init. See LICENSE file for license information. |
678 | |
679 | -__VERSION__ = "18.5" |
680 | +__VERSION__ = "19.1" |
681 | _PACKAGED_VERSION = '@@PACKAGED_VERSION@@' |
682 | |
683 | FEATURES = [ |
684 | diff --git a/debian/changelog b/debian/changelog |
685 | index d21167b..270b0f3 100644 |
686 | --- a/debian/changelog |
687 | +++ b/debian/changelog |
688 | @@ -1,11 +1,57 @@ |
689 | -cloud-init (18.5-45-g3554ffe8-0ubuntu1~16.04.2) UNRELEASED; urgency=medium |
690 | +cloud-init (19.1-1-gbaa47854-0ubuntu1~16.04.1) xenial; urgency=medium |
691 | |
692 | + * debian/patches/ubuntu-advantage-revert-tip.patch |
693 | + Revert ubuntu-advantage config module changes until ubuntu-advantage-tools |
694 | + 19.1 publishes to Xenial (LP: #1828641) |
695 | * refresh patches: |
696 | + debian/patches/azure-apply-network-config-false.patch |
697 | + debian/patches/azure-use-walinux-agent.patch |
698 | + debian/patches/ec2-classic-dont-reapply-networking.patch |
699 | + * refresh patches: |
700 | + + debian/patches/azure-apply-network-config-false.patch |
701 | + + debian/patches/azure-use-walinux-agent.patch |
702 | + * New upstream snapshot. (LP: #1828637) |
703 | + - Azure: Return static fallback address as if failed to find endpoint |
704 | + [Jason Zions (MSFT)] |
705 | + - release 19.1 |
706 | + - freebsd: add chpasswd pkg in the image [Gonéri Le Bouder] |
707 | + - tests: add Eoan release [Paride Legovini] |
708 | + - cc_mounts: check if mount -a on no-change fstab path [Jason Zions (MSFT)] |
709 | + - replace remaining occurrences of LOG.warn |
710 | + - DataSourceAzure: Adjust timeout for polling IMDS [Anh Vo] |
711 | + - Azure: Changes to the Hyper-V KVP Reporter [Anh Vo] |
712 | + - git tests: no longer show warning about safe yaml. [Scott Moser] |
713 | + - tools/read-version: handle errors [Chad Miller] |
714 | + - net/sysconfig: only indicate available on known sysconfig distros |
715 | + - packages: update rpm specs for new bash completion path |
716 | + - test_azure: mock util.SeLinuxGuard where needed [Jason Zions (MSFT)] |
717 | + - setup.py: install bash completion script in new location |
718 | + - mount_cb: do not pass sync and rw options to mount [Gonéri Le Bouder] |
719 | + - cc_apt_configure: fix typo in apt documentation [Dominic Schlegel] |
720 | + - Revert "DataSource: move update_events from a class to an instance..." |
721 | + - Change DataSourceNoCloud to ignore file system label's case. |
722 | + [Risto Oikarinen] |
723 | + - cmd:main.py: Fix missing 'modules-init' key in modes dict |
724 | + [Antonio Romito] |
725 | + - ubuntu_advantage: rewrite cloud-config module |
726 | + - Azure: Treat _unset network configuration as if it were absent |
727 | + [Jason Zions (MSFT)] |
728 | + - DatasourceAzure: add additional logging for azure datasource [Anh Vo] |
729 | + - cloud_tests: fix apt_pipelining test-cases |
730 | + - Azure: Ensure platform random_seed is always serializable as JSON. |
731 | + [Jason Zions (MSFT)] |
732 | + - net/sysconfig: write out SUSE-compatible IPv6 config [Robert Schweikert] |
733 | + - tox: Update testenv for openSUSE Leap to 15.0 [Thomas Bechtold] |
734 | + - net: Fix ipv6 static routes when using eni renderer [Raphael Glon] |
735 | + - Add ubuntu_drivers config module |
736 | + - doc: Refresh Azure walinuxagent docs |
737 | + - tox: bump pylint version to latest (2.3.1) |
738 | + - DataSource: move update_events from a class to an instance attribute |
739 | + - net/sysconfig: Handle default route setup for dhcp configured NICs |
740 | + [Robert Schweikert] |
741 | + - DataSourceEc2: update RELEASE_BLOCKER to be more accurate |
742 | |
743 | - -- Ryan Harper <ryan.harper@canonical.com> Tue, 09 Apr 2019 11:20:17 -0500 |
744 | + -- Chad Smith <chad.smith@canonical.com> Fri, 10 May 2019 16:26:48 -0600 |
745 | |
746 | cloud-init (18.5-45-g3554ffe8-0ubuntu1~16.04.1) xenial; urgency=medium |
747 | |
748 | diff --git a/debian/patches/azure-apply-network-config-false.patch b/debian/patches/azure-apply-network-config-false.patch |
749 | index e16ad64..f0c2fcf 100644 |
750 | --- a/debian/patches/azure-apply-network-config-false.patch |
751 | +++ b/debian/patches/azure-apply-network-config-false.patch |
752 | @@ -10,7 +10,7 @@ Forwarded: not-needed |
753 | Last-Update: 2018-10-17 |
754 | --- a/cloudinit/sources/DataSourceAzure.py |
755 | +++ b/cloudinit/sources/DataSourceAzure.py |
756 | -@@ -215,7 +215,7 @@ BUILTIN_DS_CONFIG = { |
757 | +@@ -220,7 +220,7 @@ BUILTIN_DS_CONFIG = { |
758 | }, |
759 | 'disk_aliases': {'ephemeral0': RESOURCE_DISK_PATH}, |
760 | 'dhclient_lease_file': LEASE_FILE, |
761 | diff --git a/debian/patches/azure-use-walinux-agent.patch b/debian/patches/azure-use-walinux-agent.patch |
762 | index 3f60dfd..b4ad76c 100644 |
763 | --- a/debian/patches/azure-use-walinux-agent.patch |
764 | +++ b/debian/patches/azure-use-walinux-agent.patch |
765 | @@ -6,7 +6,7 @@ Forwarded: not-needed |
766 | Author: Scott Moser <smoser@ubuntu.com> |
767 | --- a/cloudinit/sources/DataSourceAzure.py |
768 | +++ b/cloudinit/sources/DataSourceAzure.py |
769 | -@@ -204,7 +204,7 @@ if util.is_FreeBSD(): |
770 | +@@ -209,7 +209,7 @@ if util.is_FreeBSD(): |
771 | PLATFORM_ENTROPY_SOURCE = None |
772 | |
773 | BUILTIN_DS_CONFIG = { |
774 | diff --git a/debian/patches/series b/debian/patches/series |
775 | index d37ae8a..5d6995e 100644 |
776 | --- a/debian/patches/series |
777 | +++ b/debian/patches/series |
778 | @@ -4,3 +4,4 @@ stable-release-no-jsonschema-dep.patch |
779 | openstack-no-network-config.patch |
780 | azure-apply-network-config-false.patch |
781 | ec2-classic-dont-reapply-networking.patch |
782 | +ubuntu-advantage-revert-tip.patch |
783 | diff --git a/debian/patches/ubuntu-advantage-revert-tip.patch b/debian/patches/ubuntu-advantage-revert-tip.patch |
784 | new file mode 100644 |
785 | index 0000000..08bdc81 |
786 | --- /dev/null |
787 | +++ b/debian/patches/ubuntu-advantage-revert-tip.patch |
788 | @@ -0,0 +1,735 @@ |
789 | +Description: Revert upstream changes for ubuntu-advantage-tools v 19.1 |
790 | + ubuntu-advantage-tools v. 19.1 or later is required for the newcw |
791 | + cloud-config module becaues the two command lines are incompatible. |
792 | + Xenial can drop this patch once ubuntu-advantage-tools has been SRU'd >= 19.1 |
793 | +Author: Chad Smith <chad.smith@canonical.com> |
794 | +Origin: backport |
795 | +Bug: https://bugs.launchpad.net/cloud-init/+bug/1828641 |
796 | +Forwarded: not-needed |
797 | +Last-Update: 2019-05-10 |
798 | +--- |
799 | +This patch header follows DEP-3: http://dep.debian.net/deps/dep3/ |
800 | +Index: cloud-init/cloudinit/config/cc_ubuntu_advantage.py |
801 | +=================================================================== |
802 | +--- cloud-init.orig/cloudinit/config/cc_ubuntu_advantage.py |
803 | ++++ cloud-init/cloudinit/config/cc_ubuntu_advantage.py |
804 | +@@ -1,143 +1,150 @@ |
805 | ++# Copyright (C) 2018 Canonical Ltd. |
806 | ++# |
807 | + # This file is part of cloud-init. See LICENSE file for license information. |
808 | + |
809 | +-"""ubuntu_advantage: Configure Ubuntu Advantage support services""" |
810 | ++"""Ubuntu advantage: manage ubuntu-advantage offerings from Canonical.""" |
811 | + |
812 | ++import sys |
813 | + from textwrap import dedent |
814 | + |
815 | +-import six |
816 | +- |
817 | ++from cloudinit import log as logging |
818 | + from cloudinit.config.schema import ( |
819 | + get_schema_doc, validate_cloudconfig_schema) |
820 | +-from cloudinit import log as logging |
821 | + from cloudinit.settings import PER_INSTANCE |
822 | ++from cloudinit.subp import prepend_base_command |
823 | + from cloudinit import util |
824 | + |
825 | + |
826 | +-UA_URL = 'https://ubuntu.com/advantage' |
827 | +- |
828 | + distros = ['ubuntu'] |
829 | ++frequency = PER_INSTANCE |
830 | ++ |
831 | ++LOG = logging.getLogger(__name__) |
832 | + |
833 | + schema = { |
834 | + 'id': 'cc_ubuntu_advantage', |
835 | + 'name': 'Ubuntu Advantage', |
836 | +- 'title': 'Configure Ubuntu Advantage support services', |
837 | ++ 'title': 'Install, configure and manage ubuntu-advantage offerings', |
838 | + 'description': dedent("""\ |
839 | +- Attach machine to an existing Ubuntu Advantage support contract and |
840 | +- enable or disable support services such as Livepatch, ESM, |
841 | +- FIPS and FIPS Updates. When attaching a machine to Ubuntu Advantage, |
842 | +- one can also specify services to enable. When the 'enable' |
843 | +- list is present, any named service will be enabled and all absent |
844 | +- services will remain disabled. |
845 | +- |
846 | +- Note that when enabling FIPS or FIPS updates you will need to schedule |
847 | +- a reboot to ensure the machine is running the FIPS-compliant kernel. |
848 | +- See :ref:`Power State Change` for information on how to configure |
849 | +- cloud-init to perform this reboot. |
850 | ++ This module provides configuration options to setup ubuntu-advantage |
851 | ++ subscriptions. |
852 | ++ |
853 | ++ .. note:: |
854 | ++ Both ``commands`` value can be either a dictionary or a list. If |
855 | ++ the configuration provided is a dictionary, the keys are only used |
856 | ++ to order the execution of the commands and the dictionary is |
857 | ++ merged with any vendor-data ubuntu-advantage configuration |
858 | ++ provided. If a ``commands`` is provided as a list, any vendor-data |
859 | ++ ubuntu-advantage ``commands`` are ignored. |
860 | ++ |
861 | ++ Ubuntu-advantage ``commands`` is a dictionary or list of |
862 | ++ ubuntu-advantage commands to run on the deployed machine. |
863 | ++ These commands can be used to enable or disable subscriptions to |
864 | ++ various ubuntu-advantage products. See 'man ubuntu-advantage' for more |
865 | ++ information on supported subcommands. |
866 | ++ |
867 | ++ .. note:: |
868 | ++ Each command item can be a string or list. If the item is a list, |
869 | ++ 'ubuntu-advantage' can be omitted and it will automatically be |
870 | ++ inserted as part of the command. |
871 | + """), |
872 | + 'distros': distros, |
873 | + 'examples': [dedent("""\ |
874 | +- # Attach the machine to a Ubuntu Advantage support contract with a |
875 | +- # UA contract token obtained from %s. |
876 | +- ubuntu_advantage: |
877 | +- token: <ua_contract_token> |
878 | +- """ % UA_URL), dedent("""\ |
879 | +- # Attach the machine to an Ubuntu Advantage support contract enabling |
880 | +- # only fips and esm services. Services will only be enabled if |
881 | +- # the environment supports said service. Otherwise warnings will |
882 | +- # be logged for incompatible services specified. |
883 | ++ # Enable Extended Security Maintenance using your service auth token |
884 | ++ ubuntu-advantage: |
885 | ++ commands: |
886 | ++ 00: ubuntu-advantage enable-esm <token> |
887 | ++ """), dedent("""\ |
888 | ++ # Enable livepatch by providing your livepatch token |
889 | + ubuntu-advantage: |
890 | +- token: <ua_contract_token> |
891 | +- enable: |
892 | +- - fips |
893 | +- - esm |
894 | ++ commands: |
895 | ++ 00: ubuntu-advantage enable-livepatch <livepatch-token> |
896 | ++ |
897 | + """), dedent("""\ |
898 | +- # Attach the machine to an Ubuntu Advantage support contract and enable |
899 | +- # the FIPS service. Perform a reboot once cloud-init has |
900 | +- # completed. |
901 | +- power_state: |
902 | +- mode: reboot |
903 | ++ # Convenience: the ubuntu-advantage command can be omitted when |
904 | ++ # specifying commands as a list and 'ubuntu-advantage' will |
905 | ++ # automatically be prepended. |
906 | ++ # The following commands are equivalent |
907 | + ubuntu-advantage: |
908 | +- token: <ua_contract_token> |
909 | +- enable: |
910 | +- - fips |
911 | +- """)], |
912 | ++ commands: |
913 | ++ 00: ['enable-livepatch', 'my-token'] |
914 | ++ 01: ['ubuntu-advantage', 'enable-livepatch', 'my-token'] |
915 | ++ 02: ubuntu-advantage enable-livepatch my-token |
916 | ++ 03: 'ubuntu-advantage enable-livepatch my-token' |
917 | ++ """)], |
918 | + 'frequency': PER_INSTANCE, |
919 | + 'type': 'object', |
920 | + 'properties': { |
921 | +- 'ubuntu_advantage': { |
922 | ++ 'ubuntu-advantage': { |
923 | + 'type': 'object', |
924 | + 'properties': { |
925 | +- 'enable': { |
926 | +- 'type': 'array', |
927 | +- 'items': {'type': 'string'}, |
928 | +- }, |
929 | +- 'token': { |
930 | +- 'type': 'string', |
931 | +- 'description': ( |
932 | +- 'A contract token obtained from %s.' % UA_URL) |
933 | ++ 'commands': { |
934 | ++ 'type': ['object', 'array'], # Array of strings or dict |
935 | ++ 'items': { |
936 | ++ 'oneOf': [ |
937 | ++ {'type': 'array', 'items': {'type': 'string'}}, |
938 | ++ {'type': 'string'}] |
939 | ++ }, |
940 | ++ 'additionalItems': False, # Reject non-string & non-list |
941 | ++ 'minItems': 1, |
942 | ++ 'minProperties': 1, |
943 | + } |
944 | + }, |
945 | +- 'required': ['token'], |
946 | +- 'additionalProperties': False |
947 | ++ 'additionalProperties': False, # Reject keys not in schema |
948 | ++ 'required': ['commands'] |
949 | + } |
950 | + } |
951 | + } |
952 | + |
953 | ++# TODO schema for 'assertions' and 'commands' are too permissive at the moment. |
954 | ++# Once python-jsonschema supports schema draft 6 add support for arbitrary |
955 | ++# object keys with 'patternProperties' constraint to validate string values. |
956 | ++ |
957 | + __doc__ = get_schema_doc(schema) # Supplement python help() |
958 | + |
959 | +-LOG = logging.getLogger(__name__) |
960 | ++UA_CMD = "ubuntu-advantage" |
961 | + |
962 | + |
963 | +-def configure_ua(token=None, enable=None): |
964 | +- """Call ua commandline client to attach or enable services.""" |
965 | +- error = None |
966 | +- if not token: |
967 | +- error = ('ubuntu_advantage: token must be provided') |
968 | +- LOG.error(error) |
969 | +- raise RuntimeError(error) |
970 | +- |
971 | +- if enable is None: |
972 | +- enable = [] |
973 | +- elif isinstance(enable, six.string_types): |
974 | +- LOG.warning('ubuntu_advantage: enable should be a list, not' |
975 | +- ' a string; treating as a single enable') |
976 | +- enable = [enable] |
977 | +- elif not isinstance(enable, list): |
978 | +- LOG.warning('ubuntu_advantage: enable should be a list, not' |
979 | +- ' a %s; skipping enabling services', |
980 | +- type(enable).__name__) |
981 | +- enable = [] |
982 | ++def run_commands(commands): |
983 | ++ """Run the commands provided in ubuntu-advantage:commands config. |
984 | + |
985 | +- attach_cmd = ['ua', 'attach', token] |
986 | +- LOG.debug('Attaching to Ubuntu Advantage. %s', ' '.join(attach_cmd)) |
987 | +- try: |
988 | +- util.subp(attach_cmd) |
989 | +- except util.ProcessExecutionError as e: |
990 | +- msg = 'Failure attaching Ubuntu Advantage:\n{error}'.format( |
991 | +- error=str(e)) |
992 | +- util.logexc(LOG, msg) |
993 | +- raise RuntimeError(msg) |
994 | +- enable_errors = [] |
995 | +- for service in enable: |
996 | ++ Commands are run individually. Any errors are collected and reported |
997 | ++ after attempting all commands. |
998 | ++ |
999 | ++ @param commands: A list or dict containing commands to run. Keys of a |
1000 | ++ dict will be used to order the commands provided as dict values. |
1001 | ++ """ |
1002 | ++ if not commands: |
1003 | ++ return |
1004 | ++ LOG.debug('Running user-provided ubuntu-advantage commands') |
1005 | ++ if isinstance(commands, dict): |
1006 | ++ # Sort commands based on dictionary key |
1007 | ++ commands = [v for _, v in sorted(commands.items())] |
1008 | ++ elif not isinstance(commands, list): |
1009 | ++ raise TypeError( |
1010 | ++ 'commands parameter was not a list or dict: {commands}'.format( |
1011 | ++ commands=commands)) |
1012 | ++ |
1013 | ++ fixed_ua_commands = prepend_base_command('ubuntu-advantage', commands) |
1014 | ++ |
1015 | ++ cmd_failures = [] |
1016 | ++ for command in fixed_ua_commands: |
1017 | ++ shell = isinstance(command, str) |
1018 | + try: |
1019 | +- cmd = ['ua', 'enable', service] |
1020 | +- util.subp(cmd, capture=True) |
1021 | ++ util.subp(command, shell=shell, status_cb=sys.stderr.write) |
1022 | + except util.ProcessExecutionError as e: |
1023 | +- enable_errors.append((service, e)) |
1024 | +- if enable_errors: |
1025 | +- for service, error in enable_errors: |
1026 | +- msg = 'Failure enabling "{service}":\n{error}'.format( |
1027 | +- service=service, error=str(error)) |
1028 | +- util.logexc(LOG, msg) |
1029 | +- raise RuntimeError( |
1030 | +- 'Failure enabling Ubuntu Advantage service(s): {}'.format( |
1031 | +- ', '.join('"{}"'.format(service) |
1032 | +- for service, _ in enable_errors))) |
1033 | ++ cmd_failures.append(str(e)) |
1034 | ++ if cmd_failures: |
1035 | ++ msg = ( |
1036 | ++ 'Failures running ubuntu-advantage commands:\n' |
1037 | ++ '{cmd_failures}'.format( |
1038 | ++ cmd_failures=cmd_failures)) |
1039 | ++ util.logexc(LOG, msg) |
1040 | ++ raise RuntimeError(msg) |
1041 | + |
1042 | + |
1043 | + def maybe_install_ua_tools(cloud): |
1044 | + """Install ubuntu-advantage-tools if not present.""" |
1045 | +- if util.which('ua'): |
1046 | ++ if util.which('ubuntu-advantage'): |
1047 | + return |
1048 | + try: |
1049 | + cloud.distro.update_package_sources() |
1050 | +@@ -152,28 +159,14 @@ def maybe_install_ua_tools(cloud): |
1051 | + |
1052 | + |
1053 | + def handle(name, cfg, cloud, log, args): |
1054 | +- ua_section = None |
1055 | +- if 'ubuntu-advantage' in cfg: |
1056 | +- LOG.warning('Deprecated configuration key "ubuntu-advantage" provided.' |
1057 | +- ' Expected underscore delimited "ubuntu_advantage"; will' |
1058 | +- ' attempt to continue.') |
1059 | +- ua_section = cfg['ubuntu-advantage'] |
1060 | +- if 'ubuntu_advantage' in cfg: |
1061 | +- ua_section = cfg['ubuntu_advantage'] |
1062 | +- if ua_section is None: |
1063 | +- LOG.debug("Skipping module named %s," |
1064 | +- " no 'ubuntu_advantage' configuration found", name) |
1065 | ++ cfgin = cfg.get('ubuntu-advantage') |
1066 | ++ if cfgin is None: |
1067 | ++ LOG.debug(("Skipping module named %s," |
1068 | ++ " no 'ubuntu-advantage' key in configuration"), name) |
1069 | + return |
1070 | +- validate_cloudconfig_schema(cfg, schema) |
1071 | +- if 'commands' in ua_section: |
1072 | +- msg = ( |
1073 | +- 'Deprecated configuration "ubuntu-advantage: commands" provided.' |
1074 | +- ' Expected "token"') |
1075 | +- LOG.error(msg) |
1076 | +- raise RuntimeError(msg) |
1077 | + |
1078 | ++ validate_cloudconfig_schema(cfg, schema) |
1079 | + maybe_install_ua_tools(cloud) |
1080 | +- configure_ua(token=ua_section.get('token'), |
1081 | +- enable=ua_section.get('enable')) |
1082 | ++ run_commands(cfgin.get('commands', [])) |
1083 | + |
1084 | + # vi: ts=4 expandtab |
1085 | +Index: cloud-init/cloudinit/config/tests/test_ubuntu_advantage.py |
1086 | +=================================================================== |
1087 | +--- cloud-init.orig/cloudinit/config/tests/test_ubuntu_advantage.py |
1088 | ++++ cloud-init/cloudinit/config/tests/test_ubuntu_advantage.py |
1089 | +@@ -1,7 +1,10 @@ |
1090 | + # This file is part of cloud-init. See LICENSE file for license information. |
1091 | + |
1092 | ++import re |
1093 | ++from six import StringIO |
1094 | ++ |
1095 | + from cloudinit.config.cc_ubuntu_advantage import ( |
1096 | +- configure_ua, handle, maybe_install_ua_tools, schema) |
1097 | ++ handle, maybe_install_ua_tools, run_commands, schema) |
1098 | + from cloudinit.config.schema import validate_cloudconfig_schema |
1099 | + from cloudinit import util |
1100 | + from cloudinit.tests.helpers import ( |
1101 | +@@ -17,120 +20,90 @@ class FakeCloud(object): |
1102 | + self.distro = distro |
1103 | + |
1104 | + |
1105 | +-class TestConfigureUA(CiTestCase): |
1106 | ++class TestRunCommands(CiTestCase): |
1107 | + |
1108 | + with_logs = True |
1109 | + allowed_subp = [CiTestCase.SUBP_SHELL_TRUE] |
1110 | + |
1111 | + def setUp(self): |
1112 | +- super(TestConfigureUA, self).setUp() |
1113 | ++ super(TestRunCommands, self).setUp() |
1114 | + self.tmp = self.tmp_dir() |
1115 | + |
1116 | + @mock.patch('%s.util.subp' % MPATH) |
1117 | +- def test_configure_ua_attach_error(self, m_subp): |
1118 | +- """Errors from ua attach command are raised.""" |
1119 | +- m_subp.side_effect = util.ProcessExecutionError( |
1120 | +- 'Invalid token SomeToken') |
1121 | +- with self.assertRaises(RuntimeError) as context_manager: |
1122 | +- configure_ua(token='SomeToken') |
1123 | ++ def test_run_commands_on_empty_list(self, m_subp): |
1124 | ++ """When provided with an empty list, run_commands does nothing.""" |
1125 | ++ run_commands([]) |
1126 | ++ self.assertEqual('', self.logs.getvalue()) |
1127 | ++ m_subp.assert_not_called() |
1128 | ++ |
1129 | ++ def test_run_commands_on_non_list_or_dict(self): |
1130 | ++ """When provided an invalid type, run_commands raises an error.""" |
1131 | ++ with self.assertRaises(TypeError) as context_manager: |
1132 | ++ run_commands(commands="I'm Not Valid") |
1133 | + self.assertEqual( |
1134 | +- 'Failure attaching Ubuntu Advantage:\nUnexpected error while' |
1135 | +- ' running command.\nCommand: -\nExit code: -\nReason: -\n' |
1136 | +- 'Stdout: Invalid token SomeToken\nStderr: -', |
1137 | ++ "commands parameter was not a list or dict: I'm Not Valid", |
1138 | + str(context_manager.exception)) |
1139 | + |
1140 | +- @mock.patch('%s.util.subp' % MPATH) |
1141 | +- def test_configure_ua_attach_with_token(self, m_subp): |
1142 | +- """When token is provided, attach the machine to ua using the token.""" |
1143 | +- configure_ua(token='SomeToken') |
1144 | +- m_subp.assert_called_once_with(['ua', 'attach', 'SomeToken']) |
1145 | +- self.assertEqual( |
1146 | +- 'DEBUG: Attaching to Ubuntu Advantage. ua attach SomeToken\n', |
1147 | +- self.logs.getvalue()) |
1148 | +- |
1149 | +- @mock.patch('%s.util.subp' % MPATH) |
1150 | +- def test_configure_ua_attach_on_service_error(self, m_subp): |
1151 | +- """all services should be enabled and then any failures raised""" |
1152 | +- |
1153 | +- def fake_subp(cmd, capture=None): |
1154 | +- fail_cmds = [['ua', 'enable', svc] for svc in ['esm', 'cc']] |
1155 | +- if cmd in fail_cmds and capture: |
1156 | +- svc = cmd[-1] |
1157 | +- raise util.ProcessExecutionError( |
1158 | +- 'Invalid {} credentials'.format(svc.upper())) |
1159 | ++ def test_run_command_logs_commands_and_exit_codes_to_stderr(self): |
1160 | ++ """All exit codes are logged to stderr.""" |
1161 | ++ outfile = self.tmp_path('output.log', dir=self.tmp) |
1162 | ++ |
1163 | ++ cmd1 = 'echo "HI" >> %s' % outfile |
1164 | ++ cmd2 = 'bogus command' |
1165 | ++ cmd3 = 'echo "MOM" >> %s' % outfile |
1166 | ++ commands = [cmd1, cmd2, cmd3] |
1167 | ++ |
1168 | ++ mock_path = '%s.sys.stderr' % MPATH |
1169 | ++ with mock.patch(mock_path, new_callable=StringIO) as m_stderr: |
1170 | ++ with self.assertRaises(RuntimeError) as context_manager: |
1171 | ++ run_commands(commands=commands) |
1172 | ++ |
1173 | ++ self.assertIsNotNone( |
1174 | ++ re.search(r'bogus: (command )?not found', |
1175 | ++ str(context_manager.exception)), |
1176 | ++ msg='Expected bogus command not found') |
1177 | ++ expected_stderr_log = '\n'.join([ |
1178 | ++ 'Begin run command: {cmd}'.format(cmd=cmd1), |
1179 | ++ 'End run command: exit(0)', |
1180 | ++ 'Begin run command: {cmd}'.format(cmd=cmd2), |
1181 | ++ 'ERROR: End run command: exit(127)', |
1182 | ++ 'Begin run command: {cmd}'.format(cmd=cmd3), |
1183 | ++ 'End run command: exit(0)\n']) |
1184 | ++ self.assertEqual(expected_stderr_log, m_stderr.getvalue()) |
1185 | ++ |
1186 | ++ def test_run_command_as_lists(self): |
1187 | ++ """When commands are specified as a list, run them in order.""" |
1188 | ++ outfile = self.tmp_path('output.log', dir=self.tmp) |
1189 | ++ |
1190 | ++ cmd1 = 'echo "HI" >> %s' % outfile |
1191 | ++ cmd2 = 'echo "MOM" >> %s' % outfile |
1192 | ++ commands = [cmd1, cmd2] |
1193 | ++ with mock.patch('%s.sys.stderr' % MPATH, new_callable=StringIO): |
1194 | ++ run_commands(commands=commands) |
1195 | + |
1196 | +- m_subp.side_effect = fake_subp |
1197 | +- |
1198 | +- with self.assertRaises(RuntimeError) as context_manager: |
1199 | +- configure_ua(token='SomeToken', enable=['esm', 'cc', 'fips']) |
1200 | +- self.assertEqual( |
1201 | +- m_subp.call_args_list, |
1202 | +- [mock.call(['ua', 'attach', 'SomeToken']), |
1203 | +- mock.call(['ua', 'enable', 'esm'], capture=True), |
1204 | +- mock.call(['ua', 'enable', 'cc'], capture=True), |
1205 | +- mock.call(['ua', 'enable', 'fips'], capture=True)]) |
1206 | + self.assertIn( |
1207 | +- 'WARNING: Failure enabling "esm":\nUnexpected error' |
1208 | +- ' while running command.\nCommand: -\nExit code: -\nReason: -\n' |
1209 | +- 'Stdout: Invalid ESM credentials\nStderr: -\n', |
1210 | ++ 'DEBUG: Running user-provided ubuntu-advantage commands', |
1211 | + self.logs.getvalue()) |
1212 | ++ self.assertEqual('HI\nMOM\n', util.load_file(outfile)) |
1213 | + self.assertIn( |
1214 | +- 'WARNING: Failure enabling "cc":\nUnexpected error' |
1215 | +- ' while running command.\nCommand: -\nExit code: -\nReason: -\n' |
1216 | +- 'Stdout: Invalid CC credentials\nStderr: -\n', |
1217 | +- self.logs.getvalue()) |
1218 | +- self.assertEqual( |
1219 | +- 'Failure enabling Ubuntu Advantage service(s): "esm", "cc"', |
1220 | +- str(context_manager.exception)) |
1221 | +- |
1222 | +- @mock.patch('%s.util.subp' % MPATH) |
1223 | +- def test_configure_ua_attach_with_empty_services(self, m_subp): |
1224 | +- """When services is an empty list, do not auto-enable attach.""" |
1225 | +- configure_ua(token='SomeToken', enable=[]) |
1226 | +- m_subp.assert_called_once_with(['ua', 'attach', 'SomeToken']) |
1227 | +- self.assertEqual( |
1228 | +- 'DEBUG: Attaching to Ubuntu Advantage. ua attach SomeToken\n', |
1229 | +- self.logs.getvalue()) |
1230 | +- |
1231 | +- @mock.patch('%s.util.subp' % MPATH) |
1232 | +- def test_configure_ua_attach_with_specific_services(self, m_subp): |
1233 | +- """When services a list, only enable specific services.""" |
1234 | +- configure_ua(token='SomeToken', enable=['fips']) |
1235 | +- self.assertEqual( |
1236 | +- m_subp.call_args_list, |
1237 | +- [mock.call(['ua', 'attach', 'SomeToken']), |
1238 | +- mock.call(['ua', 'enable', 'fips'], capture=True)]) |
1239 | +- self.assertEqual( |
1240 | +- 'DEBUG: Attaching to Ubuntu Advantage. ua attach SomeToken\n', |
1241 | +- self.logs.getvalue()) |
1242 | +- |
1243 | +- @mock.patch('%s.maybe_install_ua_tools' % MPATH, mock.MagicMock()) |
1244 | +- @mock.patch('%s.util.subp' % MPATH) |
1245 | +- def test_configure_ua_attach_with_string_services(self, m_subp): |
1246 | +- """When services a string, treat as singleton list and warn""" |
1247 | +- configure_ua(token='SomeToken', enable='fips') |
1248 | +- self.assertEqual( |
1249 | +- m_subp.call_args_list, |
1250 | +- [mock.call(['ua', 'attach', 'SomeToken']), |
1251 | +- mock.call(['ua', 'enable', 'fips'], capture=True)]) |
1252 | +- self.assertEqual( |
1253 | +- 'WARNING: ubuntu_advantage: enable should be a list, not a' |
1254 | +- ' string; treating as a single enable\n' |
1255 | +- 'DEBUG: Attaching to Ubuntu Advantage. ua attach SomeToken\n', |
1256 | ++ 'WARNING: Non-ubuntu-advantage commands in ubuntu-advantage' |
1257 | ++ ' config:', |
1258 | + self.logs.getvalue()) |
1259 | + |
1260 | +- @mock.patch('%s.util.subp' % MPATH) |
1261 | +- def test_configure_ua_attach_with_weird_services(self, m_subp): |
1262 | +- """When services not string or list, warn but still attach""" |
1263 | +- configure_ua(token='SomeToken', enable={'deffo': 'wont work'}) |
1264 | +- self.assertEqual( |
1265 | +- m_subp.call_args_list, |
1266 | +- [mock.call(['ua', 'attach', 'SomeToken'])]) |
1267 | +- self.assertEqual( |
1268 | +- 'WARNING: ubuntu_advantage: enable should be a list, not a' |
1269 | +- ' dict; skipping enabling services\n' |
1270 | +- 'DEBUG: Attaching to Ubuntu Advantage. ua attach SomeToken\n', |
1271 | +- self.logs.getvalue()) |
1272 | ++ def test_run_command_dict_sorted_as_command_script(self): |
1273 | ++ """When commands are a dict, sort them and run.""" |
1274 | ++ outfile = self.tmp_path('output.log', dir=self.tmp) |
1275 | ++ cmd1 = 'echo "HI" >> %s' % outfile |
1276 | ++ cmd2 = 'echo "MOM" >> %s' % outfile |
1277 | ++ commands = {'02': cmd1, '01': cmd2} |
1278 | ++ with mock.patch('%s.sys.stderr' % MPATH, new_callable=StringIO): |
1279 | ++ run_commands(commands=commands) |
1280 | ++ |
1281 | ++ expected_messages = [ |
1282 | ++ 'DEBUG: Running user-provided ubuntu-advantage commands'] |
1283 | ++ for message in expected_messages: |
1284 | ++ self.assertIn(message, self.logs.getvalue()) |
1285 | ++ self.assertEqual('MOM\nHI\n', util.load_file(outfile)) |
1286 | + |
1287 | + |
1288 | + @skipUnlessJsonSchema() |
1289 | +@@ -139,50 +112,90 @@ class TestSchema(CiTestCase, SchemaTestC |
1290 | + with_logs = True |
1291 | + schema = schema |
1292 | + |
1293 | +- @mock.patch('%s.maybe_install_ua_tools' % MPATH) |
1294 | +- @mock.patch('%s.configure_ua' % MPATH) |
1295 | +- def test_schema_warns_on_ubuntu_advantage_not_dict(self, _cfg, _): |
1296 | +- """If ubuntu_advantage configuration is not a dict, emit a warning.""" |
1297 | +- validate_cloudconfig_schema({'ubuntu_advantage': 'wrong type'}, schema) |
1298 | ++ def test_schema_warns_on_ubuntu_advantage_not_as_dict(self): |
1299 | ++ """If ubuntu-advantage configuration is not a dict, emit a warning.""" |
1300 | ++ validate_cloudconfig_schema({'ubuntu-advantage': 'wrong type'}, schema) |
1301 | + self.assertEqual( |
1302 | +- "WARNING: Invalid config:\nubuntu_advantage: 'wrong type' is not" |
1303 | ++ "WARNING: Invalid config:\nubuntu-advantage: 'wrong type' is not" |
1304 | + " of type 'object'\n", |
1305 | + self.logs.getvalue()) |
1306 | + |
1307 | +- @mock.patch('%s.maybe_install_ua_tools' % MPATH) |
1308 | +- @mock.patch('%s.configure_ua' % MPATH) |
1309 | +- def test_schema_disallows_unknown_keys(self, _cfg, _): |
1310 | +- """Unknown keys in ubuntu_advantage configuration emit warnings.""" |
1311 | ++ @mock.patch('%s.run_commands' % MPATH) |
1312 | ++ def test_schema_disallows_unknown_keys(self, _): |
1313 | ++ """Unknown keys in ubuntu-advantage configuration emit warnings.""" |
1314 | + validate_cloudconfig_schema( |
1315 | +- {'ubuntu_advantage': {'token': 'winner', 'invalid-key': ''}}, |
1316 | ++ {'ubuntu-advantage': {'commands': ['ls'], 'invalid-key': ''}}, |
1317 | + schema) |
1318 | + self.assertIn( |
1319 | +- 'WARNING: Invalid config:\nubuntu_advantage: Additional properties' |
1320 | ++ 'WARNING: Invalid config:\nubuntu-advantage: Additional properties' |
1321 | + " are not allowed ('invalid-key' was unexpected)", |
1322 | + self.logs.getvalue()) |
1323 | + |
1324 | +- @mock.patch('%s.maybe_install_ua_tools' % MPATH) |
1325 | +- @mock.patch('%s.configure_ua' % MPATH) |
1326 | +- def test_warn_schema_requires_token(self, _cfg, _): |
1327 | +- """Warn if ubuntu_advantage configuration lacks token.""" |
1328 | ++ def test_warn_schema_requires_commands(self): |
1329 | ++ """Warn when ubuntu-advantage configuration lacks commands.""" |
1330 | + validate_cloudconfig_schema( |
1331 | +- {'ubuntu_advantage': {'enable': ['esm']}}, schema) |
1332 | ++ {'ubuntu-advantage': {}}, schema) |
1333 | + self.assertEqual( |
1334 | +- "WARNING: Invalid config:\nubuntu_advantage:" |
1335 | +- " 'token' is a required property\n", self.logs.getvalue()) |
1336 | ++ "WARNING: Invalid config:\nubuntu-advantage: 'commands' is a" |
1337 | ++ " required property\n", |
1338 | ++ self.logs.getvalue()) |
1339 | + |
1340 | +- @mock.patch('%s.maybe_install_ua_tools' % MPATH) |
1341 | +- @mock.patch('%s.configure_ua' % MPATH) |
1342 | +- def test_warn_schema_services_is_not_list_or_dict(self, _cfg, _): |
1343 | +- """Warn when ubuntu_advantage:enable config is not a list.""" |
1344 | ++ @mock.patch('%s.run_commands' % MPATH) |
1345 | ++ def test_warn_schema_commands_is_not_list_or_dict(self, _): |
1346 | ++ """Warn when ubuntu-advantage:commands config is not a list or dict.""" |
1347 | + validate_cloudconfig_schema( |
1348 | +- {'ubuntu_advantage': {'enable': 'needslist'}}, schema) |
1349 | ++ {'ubuntu-advantage': {'commands': 'broken'}}, schema) |
1350 | + self.assertEqual( |
1351 | +- "WARNING: Invalid config:\nubuntu_advantage: 'token' is a" |
1352 | +- " required property\nubuntu_advantage.enable: 'needslist'" |
1353 | +- " is not of type 'array'\n", |
1354 | ++ "WARNING: Invalid config:\nubuntu-advantage.commands: 'broken' is" |
1355 | ++ " not of type 'object', 'array'\n", |
1356 | + self.logs.getvalue()) |
1357 | + |
1358 | ++ @mock.patch('%s.run_commands' % MPATH) |
1359 | ++ def test_warn_schema_when_commands_is_empty(self, _): |
1360 | ++ """Emit warnings when ubuntu-advantage:commands is empty.""" |
1361 | ++ validate_cloudconfig_schema( |
1362 | ++ {'ubuntu-advantage': {'commands': []}}, schema) |
1363 | ++ validate_cloudconfig_schema( |
1364 | ++ {'ubuntu-advantage': {'commands': {}}}, schema) |
1365 | ++ self.assertEqual( |
1366 | ++ "WARNING: Invalid config:\nubuntu-advantage.commands: [] is too" |
1367 | ++ " short\nWARNING: Invalid config:\nubuntu-advantage.commands: {}" |
1368 | ++ " does not have enough properties\n", |
1369 | ++ self.logs.getvalue()) |
1370 | ++ |
1371 | ++ @mock.patch('%s.run_commands' % MPATH) |
1372 | ++ def test_schema_when_commands_are_list_or_dict(self, _): |
1373 | ++ """No warnings when ubuntu-advantage:commands are a list or dict.""" |
1374 | ++ validate_cloudconfig_schema( |
1375 | ++ {'ubuntu-advantage': {'commands': ['valid']}}, schema) |
1376 | ++ validate_cloudconfig_schema( |
1377 | ++ {'ubuntu-advantage': {'commands': {'01': 'also valid'}}}, schema) |
1378 | ++ self.assertEqual('', self.logs.getvalue()) |
1379 | ++ |
1380 | ++ def test_duplicates_are_fine_array_array(self): |
1381 | ++ """Duplicated commands array/array entries are allowed.""" |
1382 | ++ self.assertSchemaValid( |
1383 | ++ {'commands': [["echo", "bye"], ["echo" "bye"]]}, |
1384 | ++ "command entries can be duplicate.") |
1385 | ++ |
1386 | ++ def test_duplicates_are_fine_array_string(self): |
1387 | ++ """Duplicated commands array/string entries are allowed.""" |
1388 | ++ self.assertSchemaValid( |
1389 | ++ {'commands': ["echo bye", "echo bye"]}, |
1390 | ++ "command entries can be duplicate.") |
1391 | ++ |
1392 | ++ def test_duplicates_are_fine_dict_array(self): |
1393 | ++ """Duplicated commands dict/array entries are allowed.""" |
1394 | ++ self.assertSchemaValid( |
1395 | ++ {'commands': {'00': ["echo", "bye"], '01': ["echo", "bye"]}}, |
1396 | ++ "command entries can be duplicate.") |
1397 | ++ |
1398 | ++ def test_duplicates_are_fine_dict_string(self): |
1399 | ++ """Duplicated commands dict/string entries are allowed.""" |
1400 | ++ self.assertSchemaValid( |
1401 | ++ {'commands': {'00': "echo bye", '01': "echo bye"}}, |
1402 | ++ "command entries can be duplicate.") |
1403 | ++ |
1404 | + |
1405 | + class TestHandle(CiTestCase): |
1406 | + |
1407 | +@@ -192,89 +205,41 @@ class TestHandle(CiTestCase): |
1408 | + super(TestHandle, self).setUp() |
1409 | + self.tmp = self.tmp_dir() |
1410 | + |
1411 | ++ @mock.patch('%s.run_commands' % MPATH) |
1412 | + @mock.patch('%s.validate_cloudconfig_schema' % MPATH) |
1413 | +- def test_handle_no_config(self, m_schema): |
1414 | ++ def test_handle_no_config(self, m_schema, m_run): |
1415 | + """When no ua-related configuration is provided, nothing happens.""" |
1416 | + cfg = {} |
1417 | + handle('ua-test', cfg=cfg, cloud=None, log=self.logger, args=None) |
1418 | + self.assertIn( |
1419 | +- "DEBUG: Skipping module named ua-test, no 'ubuntu_advantage'" |
1420 | +- ' configuration found', |
1421 | ++ "DEBUG: Skipping module named ua-test, no 'ubuntu-advantage' key" |
1422 | ++ " in config", |
1423 | + self.logs.getvalue()) |
1424 | + m_schema.assert_not_called() |
1425 | ++ m_run.assert_not_called() |
1426 | + |
1427 | +- @mock.patch('%s.configure_ua' % MPATH) |
1428 | + @mock.patch('%s.maybe_install_ua_tools' % MPATH) |
1429 | +- def test_handle_tries_to_install_ubuntu_advantage_tools( |
1430 | +- self, m_install, m_cfg): |
1431 | ++ def test_handle_tries_to_install_ubuntu_advantage_tools(self, m_install): |
1432 | + """If ubuntu_advantage is provided, try installing ua-tools package.""" |
1433 | +- cfg = {'ubuntu_advantage': {'token': 'valid'}} |
1434 | ++ cfg = {'ubuntu-advantage': {}} |
1435 | + mycloud = FakeCloud(None) |
1436 | + handle('nomatter', cfg=cfg, cloud=mycloud, log=self.logger, args=None) |
1437 | + m_install.assert_called_once_with(mycloud) |
1438 | + |
1439 | +- @mock.patch('%s.configure_ua' % MPATH) |
1440 | + @mock.patch('%s.maybe_install_ua_tools' % MPATH) |
1441 | +- def test_handle_passes_credentials_and_services_to_configure_ua( |
1442 | +- self, m_install, m_configure_ua): |
1443 | +- """All ubuntu_advantage config keys are passed to configure_ua.""" |
1444 | +- cfg = {'ubuntu_advantage': {'token': 'token', 'enable': ['esm']}} |
1445 | +- handle('nomatter', cfg=cfg, cloud=None, log=self.logger, args=None) |
1446 | +- m_configure_ua.assert_called_once_with( |
1447 | +- token='token', enable=['esm']) |
1448 | +- |
1449 | +- @mock.patch('%s.maybe_install_ua_tools' % MPATH, mock.MagicMock()) |
1450 | +- @mock.patch('%s.configure_ua' % MPATH) |
1451 | +- def test_handle_warns_on_deprecated_ubuntu_advantage_key_w_config( |
1452 | +- self, m_configure_ua): |
1453 | +- """Warning when ubuntu-advantage key is present with new config""" |
1454 | +- cfg = {'ubuntu-advantage': {'token': 'token', 'enable': ['esm']}} |
1455 | +- handle('nomatter', cfg=cfg, cloud=None, log=self.logger, args=None) |
1456 | +- self.assertEqual( |
1457 | +- 'WARNING: Deprecated configuration key "ubuntu-advantage"' |
1458 | +- ' provided. Expected underscore delimited "ubuntu_advantage";' |
1459 | +- ' will attempt to continue.', |
1460 | +- self.logs.getvalue().splitlines()[0]) |
1461 | +- m_configure_ua.assert_called_once_with( |
1462 | +- token='token', enable=['esm']) |
1463 | +- |
1464 | +- def test_handle_error_on_deprecated_commands_key_dashed(self): |
1465 | +- """Error when commands is present in ubuntu-advantage key.""" |
1466 | +- cfg = {'ubuntu-advantage': {'commands': 'nogo'}} |
1467 | +- with self.assertRaises(RuntimeError) as context_manager: |
1468 | +- handle('nomatter', cfg=cfg, cloud=None, log=self.logger, args=None) |
1469 | +- self.assertEqual( |
1470 | +- 'Deprecated configuration "ubuntu-advantage: commands" provided.' |
1471 | +- ' Expected "token"', |
1472 | +- str(context_manager.exception)) |
1473 | +- |
1474 | +- def test_handle_error_on_deprecated_commands_key_underscored(self): |
1475 | +- """Error when commands is present in ubuntu_advantage key.""" |
1476 | +- cfg = {'ubuntu_advantage': {'commands': 'nogo'}} |
1477 | +- with self.assertRaises(RuntimeError) as context_manager: |
1478 | +- handle('nomatter', cfg=cfg, cloud=None, log=self.logger, args=None) |
1479 | +- self.assertEqual( |
1480 | +- 'Deprecated configuration "ubuntu-advantage: commands" provided.' |
1481 | +- ' Expected "token"', |
1482 | +- str(context_manager.exception)) |
1483 | ++ def test_handle_runs_commands_provided(self, m_install): |
1484 | ++ """When commands are specified as a list, run them.""" |
1485 | ++ outfile = self.tmp_path('output.log', dir=self.tmp) |
1486 | + |
1487 | +- @mock.patch('%s.maybe_install_ua_tools' % MPATH, mock.MagicMock()) |
1488 | +- @mock.patch('%s.configure_ua' % MPATH) |
1489 | +- def test_handle_prefers_new_style_config( |
1490 | +- self, m_configure_ua): |
1491 | +- """ubuntu_advantage should be preferred over ubuntu-advantage""" |
1492 | + cfg = { |
1493 | +- 'ubuntu-advantage': {'token': 'nope', 'enable': ['wrong']}, |
1494 | +- 'ubuntu_advantage': {'token': 'token', 'enable': ['esm']}, |
1495 | +- } |
1496 | +- handle('nomatter', cfg=cfg, cloud=None, log=self.logger, args=None) |
1497 | +- self.assertEqual( |
1498 | +- 'WARNING: Deprecated configuration key "ubuntu-advantage"' |
1499 | +- ' provided. Expected underscore delimited "ubuntu_advantage";' |
1500 | +- ' will attempt to continue.', |
1501 | +- self.logs.getvalue().splitlines()[0]) |
1502 | +- m_configure_ua.assert_called_once_with( |
1503 | +- token='token', enable=['esm']) |
1504 | ++ 'ubuntu-advantage': {'commands': ['echo "HI" >> %s' % outfile, |
1505 | ++ 'echo "MOM" >> %s' % outfile]}} |
1506 | ++ mock_path = '%s.sys.stderr' % MPATH |
1507 | ++ with self.allow_subp([CiTestCase.SUBP_SHELL_TRUE]): |
1508 | ++ with mock.patch(mock_path, new_callable=StringIO): |
1509 | ++ handle('nomatter', cfg=cfg, cloud=None, log=self.logger, |
1510 | ++ args=None) |
1511 | ++ self.assertEqual('HI\nMOM\n', util.load_file(outfile)) |
1512 | + |
1513 | + |
1514 | + class TestMaybeInstallUATools(CiTestCase): |
1515 | +@@ -288,7 +253,7 @@ class TestMaybeInstallUATools(CiTestCase |
1516 | + @mock.patch('%s.util.which' % MPATH) |
1517 | + def test_maybe_install_ua_tools_noop_when_ua_tools_present(self, m_which): |
1518 | + """Do nothing if ubuntu-advantage-tools already exists.""" |
1519 | +- m_which.return_value = '/usr/bin/ua' # already installed |
1520 | ++ m_which.return_value = '/usr/bin/ubuntu-advantage' # already installed |
1521 | + distro = mock.MagicMock() |
1522 | + distro.update_package_sources.side_effect = RuntimeError( |
1523 | + 'Some apt error') |
1524 | diff --git a/doc/rtd/topics/datasources/nocloud.rst b/doc/rtd/topics/datasources/nocloud.rst |
1525 | index 08578e8..1c5cf96 100644 |
1526 | --- a/doc/rtd/topics/datasources/nocloud.rst |
1527 | +++ b/doc/rtd/topics/datasources/nocloud.rst |
1528 | @@ -9,7 +9,7 @@ network at all). |
1529 | |
1530 | You can provide meta-data and user-data to a local vm boot via files on a |
1531 | `vfat`_ or `iso9660`_ filesystem. The filesystem volume label must be |
1532 | -``cidata``. |
1533 | +``cidata`` or ``CIDATA``. |
1534 | |
1535 | Alternatively, you can provide meta-data via kernel command line or SMBIOS |
1536 | "serial number" option. The data must be passed in the form of a string: |
1537 | diff --git a/packages/redhat/cloud-init.spec.in b/packages/redhat/cloud-init.spec.in |
1538 | index 6b2022b..057a578 100644 |
1539 | --- a/packages/redhat/cloud-init.spec.in |
1540 | +++ b/packages/redhat/cloud-init.spec.in |
1541 | @@ -205,7 +205,9 @@ fi |
1542 | %dir %{_sysconfdir}/cloud/templates |
1543 | %config(noreplace) %{_sysconfdir}/cloud/templates/* |
1544 | %config(noreplace) %{_sysconfdir}/rsyslog.d/21-cloudinit.conf |
1545 | -%{_sysconfdir}/bash_completion.d/cloud-init |
1546 | + |
1547 | +# Bash completion script |
1548 | +%{_datadir}/bash-completion/completions/cloud-init |
1549 | |
1550 | %{_libexecdir}/%{name} |
1551 | %dir %{_sharedstatedir}/cloud |
1552 | diff --git a/packages/suse/cloud-init.spec.in b/packages/suse/cloud-init.spec.in |
1553 | index 26894b3..004b875 100644 |
1554 | --- a/packages/suse/cloud-init.spec.in |
1555 | +++ b/packages/suse/cloud-init.spec.in |
1556 | @@ -120,7 +120,9 @@ version_pys=$(cd "%{buildroot}" && find . -name version.py -type f) |
1557 | %config(noreplace) %{_sysconfdir}/cloud/cloud.cfg.d/README |
1558 | %dir %{_sysconfdir}/cloud/templates |
1559 | %config(noreplace) %{_sysconfdir}/cloud/templates/* |
1560 | -%{_sysconfdir}/bash_completion.d/cloud-init |
1561 | + |
1562 | +# Bash completion script |
1563 | +%{_datadir}/bash-completion/completions/cloud-init |
1564 | |
1565 | %{_sysconfdir}/dhcp/dhclient-exit-hooks.d/hook-dhclient |
1566 | %{_sysconfdir}/NetworkManager/dispatcher.d/hook-network-manager |
1567 | diff --git a/setup.py b/setup.py |
1568 | index 186e215..fcaf26f 100755 |
1569 | --- a/setup.py |
1570 | +++ b/setup.py |
1571 | @@ -245,13 +245,14 @@ if not in_virtualenv(): |
1572 | INITSYS_ROOTS[k] = "/" + INITSYS_ROOTS[k] |
1573 | |
1574 | data_files = [ |
1575 | - (ETC + '/bash_completion.d', ['bash_completion/cloud-init']), |
1576 | (ETC + '/cloud', [render_tmpl("config/cloud.cfg.tmpl")]), |
1577 | (ETC + '/cloud/cloud.cfg.d', glob('config/cloud.cfg.d/*')), |
1578 | (ETC + '/cloud/templates', glob('templates/*')), |
1579 | (USR_LIB_EXEC + '/cloud-init', ['tools/ds-identify', |
1580 | 'tools/uncloud-init', |
1581 | 'tools/write-ssh-key-fingerprints']), |
1582 | + (USR + '/share/bash-completion/completions', |
1583 | + ['bash_completion/cloud-init']), |
1584 | (USR + '/share/doc/cloud-init', [f for f in glob('doc/*') if is_f(f)]), |
1585 | (USR + '/share/doc/cloud-init/examples', |
1586 | [f for f in glob('doc/examples/*') if is_f(f)]), |
1587 | diff --git a/tests/cloud_tests/releases.yaml b/tests/cloud_tests/releases.yaml |
1588 | index ec5da72..924ad95 100644 |
1589 | --- a/tests/cloud_tests/releases.yaml |
1590 | +++ b/tests/cloud_tests/releases.yaml |
1591 | @@ -129,6 +129,22 @@ features: |
1592 | |
1593 | releases: |
1594 | # UBUNTU ================================================================= |
1595 | + eoan: |
1596 | + # EOL: Jul 2020 |
1597 | + default: |
1598 | + enabled: true |
1599 | + release: eoan |
1600 | + version: 19.10 |
1601 | + os: ubuntu |
1602 | + feature_groups: |
1603 | + - base |
1604 | + - debian_base |
1605 | + - ubuntu_specific |
1606 | + lxd: |
1607 | + sstreams_server: https://cloud-images.ubuntu.com/daily |
1608 | + alias: eoan |
1609 | + setup_overrides: null |
1610 | + override_templates: false |
1611 | disco: |
1612 | # EOL: Jan 2020 |
1613 | default: |
1614 | diff --git a/tests/unittests/test_datasource/test_azure.py b/tests/unittests/test_datasource/test_azure.py |
1615 | index 53c56cd..427ab7e 100644 |
1616 | --- a/tests/unittests/test_datasource/test_azure.py |
1617 | +++ b/tests/unittests/test_datasource/test_azure.py |
1618 | @@ -163,7 +163,8 @@ class TestGetMetadataFromIMDS(HttprettyTestCase): |
1619 | |
1620 | m_readurl.assert_called_with( |
1621 | self.network_md_url, exception_cb=mock.ANY, |
1622 | - headers={'Metadata': 'true'}, retries=2, timeout=1) |
1623 | + headers={'Metadata': 'true'}, retries=2, |
1624 | + timeout=dsaz.IMDS_TIMEOUT_IN_SECONDS) |
1625 | |
1626 | @mock.patch('cloudinit.url_helper.time.sleep') |
1627 | @mock.patch(MOCKPATH + 'net.is_up') |
1628 | @@ -1375,12 +1376,15 @@ class TestCanDevBeReformatted(CiTestCase): |
1629 | self._domock(p + "util.mount_cb", 'm_mount_cb') |
1630 | self._domock(p + "os.path.realpath", 'm_realpath') |
1631 | self._domock(p + "os.path.exists", 'm_exists') |
1632 | + self._domock(p + "util.SeLinuxGuard", 'm_selguard') |
1633 | |
1634 | self.m_exists.side_effect = lambda p: p in bypath |
1635 | self.m_realpath.side_effect = realpath |
1636 | self.m_has_ntfs_filesystem.side_effect = has_ntfs_fs |
1637 | self.m_mount_cb.side_effect = mount_cb |
1638 | self.m_partitions_on_device.side_effect = partitions_on_device |
1639 | + self.m_selguard.__enter__ = mock.Mock(return_value=False) |
1640 | + self.m_selguard.__exit__ = mock.Mock() |
1641 | |
1642 | def test_three_partitions_is_false(self): |
1643 | """A disk with 3 partitions can not be formatted.""" |
1644 | @@ -1788,7 +1792,8 @@ class TestAzureDataSourcePreprovisioning(CiTestCase): |
1645 | headers={'Metadata': 'true', |
1646 | 'User-Agent': |
1647 | 'Cloud-Init/%s' % vs() |
1648 | - }, method='GET', timeout=1, |
1649 | + }, method='GET', |
1650 | + timeout=dsaz.IMDS_TIMEOUT_IN_SECONDS, |
1651 | url=full_url)]) |
1652 | self.assertEqual(m_dhcp.call_count, 2) |
1653 | m_net.assert_any_call( |
1654 | @@ -1825,7 +1830,9 @@ class TestAzureDataSourcePreprovisioning(CiTestCase): |
1655 | headers={'Metadata': 'true', |
1656 | 'User-Agent': |
1657 | 'Cloud-Init/%s' % vs()}, |
1658 | - method='GET', timeout=1, url=full_url)]) |
1659 | + method='GET', |
1660 | + timeout=dsaz.IMDS_TIMEOUT_IN_SECONDS, |
1661 | + url=full_url)]) |
1662 | self.assertEqual(m_dhcp.call_count, 2) |
1663 | m_net.assert_any_call( |
1664 | broadcast='192.168.2.255', interface='eth9', ip='192.168.2.9', |
1665 | diff --git a/tests/unittests/test_datasource/test_azure_helper.py b/tests/unittests/test_datasource/test_azure_helper.py |
1666 | index 0255616..bd006ab 100644 |
1667 | --- a/tests/unittests/test_datasource/test_azure_helper.py |
1668 | +++ b/tests/unittests/test_datasource/test_azure_helper.py |
1669 | @@ -67,12 +67,17 @@ class TestFindEndpoint(CiTestCase): |
1670 | self.networkd_leases.return_value = None |
1671 | |
1672 | def test_missing_file(self): |
1673 | - self.assertRaises(ValueError, wa_shim.find_endpoint) |
1674 | + """wa_shim find_endpoint uses default endpoint if leasefile not found |
1675 | + """ |
1676 | + self.assertEqual(wa_shim.find_endpoint(), "168.63.129.16") |
1677 | |
1678 | def test_missing_special_azure_line(self): |
1679 | + """wa_shim find_endpoint uses default endpoint if leasefile is found |
1680 | + but does not contain DHCP Option 245 (whose value is the endpoint) |
1681 | + """ |
1682 | self.load_file.return_value = '' |
1683 | self.dhcp_options.return_value = {'eth0': {'key': 'value'}} |
1684 | - self.assertRaises(ValueError, wa_shim.find_endpoint) |
1685 | + self.assertEqual(wa_shim.find_endpoint(), "168.63.129.16") |
1686 | |
1687 | @staticmethod |
1688 | def _build_lease_content(encoded_address): |
1689 | diff --git a/tests/unittests/test_datasource/test_nocloud.py b/tests/unittests/test_datasource/test_nocloud.py |
1690 | index 3429272..b785362 100644 |
1691 | --- a/tests/unittests/test_datasource/test_nocloud.py |
1692 | +++ b/tests/unittests/test_datasource/test_nocloud.py |
1693 | @@ -32,6 +32,36 @@ class TestNoCloudDataSource(CiTestCase): |
1694 | self.mocks.enter_context( |
1695 | mock.patch.object(util, 'read_dmi_data', return_value=None)) |
1696 | |
1697 | + def _test_fs_config_is_read(self, fs_label, fs_label_to_search): |
1698 | + vfat_device = 'device-1' |
1699 | + |
1700 | + def m_mount_cb(device, callback, mtype): |
1701 | + if (device == vfat_device): |
1702 | + return {'meta-data': yaml.dump({'instance-id': 'IID'})} |
1703 | + else: |
1704 | + return {} |
1705 | + |
1706 | + def m_find_devs_with(query='', path=''): |
1707 | + if 'TYPE=vfat' == query: |
1708 | + return [vfat_device] |
1709 | + elif 'LABEL={}'.format(fs_label) == query: |
1710 | + return [vfat_device] |
1711 | + else: |
1712 | + return [] |
1713 | + |
1714 | + self.mocks.enter_context( |
1715 | + mock.patch.object(util, 'find_devs_with', |
1716 | + side_effect=m_find_devs_with)) |
1717 | + self.mocks.enter_context( |
1718 | + mock.patch.object(util, 'mount_cb', |
1719 | + side_effect=m_mount_cb)) |
1720 | + sys_cfg = {'datasource': {'NoCloud': {'fs_label': fs_label_to_search}}} |
1721 | + dsrc = dsNoCloud(sys_cfg=sys_cfg, distro=None, paths=self.paths) |
1722 | + ret = dsrc.get_data() |
1723 | + |
1724 | + self.assertEqual(dsrc.metadata.get('instance-id'), 'IID') |
1725 | + self.assertTrue(ret) |
1726 | + |
1727 | def test_nocloud_seed_dir_on_lxd(self, m_is_lxd): |
1728 | md = {'instance-id': 'IID', 'dsmode': 'local'} |
1729 | ud = b"USER_DATA_HERE" |
1730 | @@ -90,6 +120,18 @@ class TestNoCloudDataSource(CiTestCase): |
1731 | ret = dsrc.get_data() |
1732 | self.assertFalse(ret) |
1733 | |
1734 | + def test_fs_config_lowercase_label(self, m_is_lxd): |
1735 | + self._test_fs_config_is_read('cidata', 'cidata') |
1736 | + |
1737 | + def test_fs_config_uppercase_label(self, m_is_lxd): |
1738 | + self._test_fs_config_is_read('CIDATA', 'cidata') |
1739 | + |
1740 | + def test_fs_config_lowercase_label_search_uppercase(self, m_is_lxd): |
1741 | + self._test_fs_config_is_read('cidata', 'CIDATA') |
1742 | + |
1743 | + def test_fs_config_uppercase_label_search_uppercase(self, m_is_lxd): |
1744 | + self._test_fs_config_is_read('CIDATA', 'CIDATA') |
1745 | + |
1746 | def test_no_datasource_expected(self, m_is_lxd): |
1747 | # no source should be found if no cmdline, config, and fs_label=None |
1748 | sys_cfg = {'datasource': {'NoCloud': {'fs_label': None}}} |
1749 | diff --git a/tests/unittests/test_datasource/test_scaleway.py b/tests/unittests/test_datasource/test_scaleway.py |
1750 | index 3bfd752..f96bf0a 100644 |
1751 | --- a/tests/unittests/test_datasource/test_scaleway.py |
1752 | +++ b/tests/unittests/test_datasource/test_scaleway.py |
1753 | @@ -7,7 +7,6 @@ import requests |
1754 | |
1755 | from cloudinit import helpers |
1756 | from cloudinit import settings |
1757 | -from cloudinit.event import EventType |
1758 | from cloudinit.sources import DataSourceScaleway |
1759 | |
1760 | from cloudinit.tests.helpers import mock, HttprettyTestCase, CiTestCase |
1761 | @@ -404,9 +403,3 @@ class TestDataSourceScaleway(HttprettyTestCase): |
1762 | |
1763 | netcfg = self.datasource.network_config |
1764 | self.assertEqual(netcfg, '0xdeadbeef') |
1765 | - |
1766 | - def test_update_events_is_correct(self): |
1767 | - """ensure update_events contains correct data""" |
1768 | - self.assertEqual( |
1769 | - {'network': {EventType.BOOT_NEW_INSTANCE, EventType.BOOT}}, |
1770 | - self.datasource.update_events) |
1771 | diff --git a/tests/unittests/test_ds_identify.py b/tests/unittests/test_ds_identify.py |
1772 | index d00c1b4..8c18aa1 100644 |
1773 | --- a/tests/unittests/test_ds_identify.py |
1774 | +++ b/tests/unittests/test_ds_identify.py |
1775 | @@ -520,6 +520,10 @@ class TestDsIdentify(DsIdentifyBase): |
1776 | """NoCloud is found with iso9660 filesystem on non-cdrom disk.""" |
1777 | self._test_ds_found('NoCloud') |
1778 | |
1779 | + def test_nocloud_upper(self): |
1780 | + """NoCloud is found with uppercase filesystem label.""" |
1781 | + self._test_ds_found('NoCloudUpper') |
1782 | + |
1783 | def test_nocloud_seed(self): |
1784 | """Nocloud seed directory.""" |
1785 | self._test_ds_found('NoCloud-seed') |
1786 | @@ -713,6 +717,19 @@ VALID_CFG = { |
1787 | 'dev/vdb': 'pretend iso content for cidata\n', |
1788 | } |
1789 | }, |
1790 | + 'NoCloudUpper': { |
1791 | + 'ds': 'NoCloud', |
1792 | + 'mocks': [ |
1793 | + MOCK_VIRT_IS_KVM, |
1794 | + {'name': 'blkid', 'ret': 0, |
1795 | + 'out': blkid_out( |
1796 | + BLKID_UEFI_UBUNTU + |
1797 | + [{'DEVNAME': 'vdb', 'TYPE': 'iso9660', 'LABEL': 'CIDATA'}])}, |
1798 | + ], |
1799 | + 'files': { |
1800 | + 'dev/vdb': 'pretend iso content for cidata\n', |
1801 | + } |
1802 | + }, |
1803 | 'NoCloud-seed': { |
1804 | 'ds': 'NoCloud', |
1805 | 'files': { |
1806 | diff --git a/tests/unittests/test_handler/test_handler_mounts.py b/tests/unittests/test_handler/test_handler_mounts.py |
1807 | index 8fea6c2..0fb160b 100644 |
1808 | --- a/tests/unittests/test_handler/test_handler_mounts.py |
1809 | +++ b/tests/unittests/test_handler/test_handler_mounts.py |
1810 | @@ -154,7 +154,15 @@ class TestFstabHandling(test_helpers.FilesystemMockingTestCase): |
1811 | return_value=True) |
1812 | |
1813 | self.add_patch('cloudinit.config.cc_mounts.util.subp', |
1814 | - 'mock_util_subp') |
1815 | + 'm_util_subp') |
1816 | + |
1817 | + self.add_patch('cloudinit.config.cc_mounts.util.mounts', |
1818 | + 'mock_util_mounts', |
1819 | + return_value={ |
1820 | + '/dev/sda1': {'fstype': 'ext4', |
1821 | + 'mountpoint': '/', |
1822 | + 'opts': 'rw,relatime,discard' |
1823 | + }}) |
1824 | |
1825 | self.mock_cloud = mock.Mock() |
1826 | self.mock_log = mock.Mock() |
1827 | @@ -230,4 +238,24 @@ class TestFstabHandling(test_helpers.FilesystemMockingTestCase): |
1828 | fstab_new_content = fd.read() |
1829 | self.assertEqual(fstab_expected_content, fstab_new_content) |
1830 | |
1831 | + def test_no_change_fstab_sets_needs_mount_all(self): |
1832 | + '''verify unchanged fstab entries are mounted if not call mount -a''' |
1833 | + fstab_original_content = ( |
1834 | + 'LABEL=cloudimg-rootfs / ext4 defaults 0 0\n' |
1835 | + 'LABEL=UEFI /boot/efi vfat defaults 0 0\n' |
1836 | + '/dev/vdb /mnt auto defaults,noexec,comment=cloudconfig 0 2\n' |
1837 | + ) |
1838 | + fstab_expected_content = fstab_original_content |
1839 | + cc = {'mounts': [ |
1840 | + ['/dev/vdb', '/mnt', 'auto', 'defaults,noexec']]} |
1841 | + with open(cc_mounts.FSTAB_PATH, 'w') as fd: |
1842 | + fd.write(fstab_original_content) |
1843 | + with open(cc_mounts.FSTAB_PATH, 'r') as fd: |
1844 | + fstab_new_content = fd.read() |
1845 | + self.assertEqual(fstab_expected_content, fstab_new_content) |
1846 | + cc_mounts.handle(None, cc, self.mock_cloud, self.mock_log, []) |
1847 | + self.m_util_subp.assert_has_calls([ |
1848 | + mock.call(['mount', '-a']), |
1849 | + mock.call(['systemctl', 'daemon-reload'])]) |
1850 | + |
1851 | # vi: ts=4 expandtab |
1852 | diff --git a/tests/unittests/test_net.py b/tests/unittests/test_net.py |
1853 | index fd03deb..e85e964 100644 |
1854 | --- a/tests/unittests/test_net.py |
1855 | +++ b/tests/unittests/test_net.py |
1856 | @@ -9,6 +9,7 @@ from cloudinit.net import ( |
1857 | from cloudinit.sources.helpers import openstack |
1858 | from cloudinit import temp_utils |
1859 | from cloudinit import util |
1860 | +from cloudinit import safeyaml as yaml |
1861 | |
1862 | from cloudinit.tests.helpers import ( |
1863 | CiTestCase, FilesystemMockingTestCase, dir2dict, mock, populate_dir) |
1864 | @@ -21,7 +22,7 @@ import json |
1865 | import os |
1866 | import re |
1867 | import textwrap |
1868 | -import yaml |
1869 | +from yaml.serializer import Serializer |
1870 | |
1871 | |
1872 | DHCP_CONTENT_1 = """ |
1873 | @@ -3269,9 +3270,12 @@ class TestNetplanPostcommands(CiTestCase): |
1874 | mock_netplan_generate.assert_called_with(run=True) |
1875 | mock_net_setup_link.assert_called_with(run=True) |
1876 | |
1877 | + @mock.patch('cloudinit.util.SeLinuxGuard') |
1878 | @mock.patch.object(netplan, "get_devicelist") |
1879 | @mock.patch('cloudinit.util.subp') |
1880 | - def test_netplan_postcmds(self, mock_subp, mock_devlist): |
1881 | + def test_netplan_postcmds(self, mock_subp, mock_devlist, mock_sel): |
1882 | + mock_sel.__enter__ = mock.Mock(return_value=False) |
1883 | + mock_sel.__exit__ = mock.Mock() |
1884 | mock_devlist.side_effect = [['lo']] |
1885 | tmp_dir = self.tmp_dir() |
1886 | ns = network_state.parse_net_config_data(self.mycfg, |
1887 | @@ -3572,7 +3576,7 @@ class TestNetplanRoundTrip(CiTestCase): |
1888 | # now look for any alias, avoid rendering them entirely |
1889 | # generate the first anchor string using the template |
1890 | # as of this writing, looks like "&id001" |
1891 | - anchor = r'&' + yaml.serializer.Serializer.ANCHOR_TEMPLATE % 1 |
1892 | + anchor = r'&' + Serializer.ANCHOR_TEMPLATE % 1 |
1893 | found_alias = re.search(anchor, content, re.MULTILINE) |
1894 | if found_alias: |
1895 | msg = "Error at: %s\nContent:\n%s" % (found_alias, content) |
1896 | @@ -3826,6 +3830,41 @@ class TestNetRenderers(CiTestCase): |
1897 | self.assertRaises(net.RendererNotFoundError, renderers.select, |
1898 | priority=['sysconfig', 'eni']) |
1899 | |
1900 | + @mock.patch("cloudinit.net.renderers.netplan.available") |
1901 | + @mock.patch("cloudinit.net.renderers.sysconfig.available_sysconfig") |
1902 | + @mock.patch("cloudinit.net.renderers.sysconfig.available_nm") |
1903 | + @mock.patch("cloudinit.net.renderers.eni.available") |
1904 | + @mock.patch("cloudinit.net.renderers.sysconfig.util.get_linux_distro") |
1905 | + def test_sysconfig_selected_on_sysconfig_enabled_distros(self, m_distro, |
1906 | + m_eni, m_sys_nm, |
1907 | + m_sys_scfg, |
1908 | + m_netplan): |
1909 | + """sysconfig only selected on specific distros (rhel/sles).""" |
1910 | + |
1911 | + # Ubuntu with Network-Manager installed |
1912 | + m_eni.return_value = False # no ifupdown (ifquery) |
1913 | + m_sys_scfg.return_value = False # no sysconfig/ifup/ifdown |
1914 | + m_sys_nm.return_value = True # network-manager is installed |
1915 | + m_netplan.return_value = True # netplan is installed |
1916 | + m_distro.return_value = ('ubuntu', None, None) |
1917 | + self.assertEqual('netplan', renderers.select(priority=None)[0]) |
1918 | + |
1919 | + # Centos with Network-Manager installed |
1920 | + m_eni.return_value = False # no ifupdown (ifquery) |
1921 | + m_sys_scfg.return_value = False # no sysconfig/ifup/ifdown |
1922 | + m_sys_nm.return_value = True # network-manager is installed |
1923 | + m_netplan.return_value = False # netplan is not installed |
1924 | + m_distro.return_value = ('centos', None, None) |
1925 | + self.assertEqual('sysconfig', renderers.select(priority=None)[0]) |
1926 | + |
1927 | + # OpenSuse with Network-Manager installed |
1928 | + m_eni.return_value = False # no ifupdown (ifquery) |
1929 | + m_sys_scfg.return_value = False # no sysconfig/ifup/ifdown |
1930 | + m_sys_nm.return_value = True # network-manager is installed |
1931 | + m_netplan.return_value = False # netplan is not installed |
1932 | + m_distro.return_value = ('opensuse', None, None) |
1933 | + self.assertEqual('sysconfig', renderers.select(priority=None)[0]) |
1934 | + |
1935 | |
1936 | class TestGetInterfaces(CiTestCase): |
1937 | _data = {'bonds': ['bond1'], |
1938 | diff --git a/tests/unittests/test_reporting_hyperv.py b/tests/unittests/test_reporting_hyperv.py |
1939 | old mode 100644 |
1940 | new mode 100755 |
1941 | index 2e64c6c..d01ed5b |
1942 | --- a/tests/unittests/test_reporting_hyperv.py |
1943 | +++ b/tests/unittests/test_reporting_hyperv.py |
1944 | @@ -1,10 +1,12 @@ |
1945 | # This file is part of cloud-init. See LICENSE file for license information. |
1946 | |
1947 | from cloudinit.reporting import events |
1948 | -from cloudinit.reporting import handlers |
1949 | +from cloudinit.reporting.handlers import HyperVKvpReportingHandler |
1950 | |
1951 | import json |
1952 | import os |
1953 | +import struct |
1954 | +import time |
1955 | |
1956 | from cloudinit import util |
1957 | from cloudinit.tests.helpers import CiTestCase |
1958 | @@ -13,7 +15,7 @@ from cloudinit.tests.helpers import CiTestCase |
1959 | class TestKvpEncoding(CiTestCase): |
1960 | def test_encode_decode(self): |
1961 | kvp = {'key': 'key1', 'value': 'value1'} |
1962 | - kvp_reporting = handlers.HyperVKvpReportingHandler() |
1963 | + kvp_reporting = HyperVKvpReportingHandler() |
1964 | data = kvp_reporting._encode_kvp_item(kvp['key'], kvp['value']) |
1965 | self.assertEqual(len(data), kvp_reporting.HV_KVP_RECORD_SIZE) |
1966 | decoded_kvp = kvp_reporting._decode_kvp_item(data) |
1967 | @@ -26,57 +28,9 @@ class TextKvpReporter(CiTestCase): |
1968 | self.tmp_file_path = self.tmp_path('kvp_pool_file') |
1969 | util.ensure_file(self.tmp_file_path) |
1970 | |
1971 | - def test_event_type_can_be_filtered(self): |
1972 | - reporter = handlers.HyperVKvpReportingHandler( |
1973 | - kvp_file_path=self.tmp_file_path, |
1974 | - event_types=['foo', 'bar']) |
1975 | - |
1976 | - reporter.publish_event( |
1977 | - events.ReportingEvent('foo', 'name', 'description')) |
1978 | - reporter.publish_event( |
1979 | - events.ReportingEvent('some_other', 'name', 'description3')) |
1980 | - reporter.q.join() |
1981 | - |
1982 | - kvps = list(reporter._iterate_kvps(0)) |
1983 | - self.assertEqual(1, len(kvps)) |
1984 | - |
1985 | - reporter.publish_event( |
1986 | - events.ReportingEvent('bar', 'name', 'description2')) |
1987 | - reporter.q.join() |
1988 | - kvps = list(reporter._iterate_kvps(0)) |
1989 | - self.assertEqual(2, len(kvps)) |
1990 | - |
1991 | - self.assertIn('foo', kvps[0]['key']) |
1992 | - self.assertIn('bar', kvps[1]['key']) |
1993 | - self.assertNotIn('some_other', kvps[0]['key']) |
1994 | - self.assertNotIn('some_other', kvps[1]['key']) |
1995 | - |
1996 | - def test_events_are_over_written(self): |
1997 | - reporter = handlers.HyperVKvpReportingHandler( |
1998 | - kvp_file_path=self.tmp_file_path) |
1999 | - |
2000 | - self.assertEqual(0, len(list(reporter._iterate_kvps(0)))) |
2001 | - |
2002 | - reporter.publish_event( |
2003 | - events.ReportingEvent('foo', 'name1', 'description')) |
2004 | - reporter.publish_event( |
2005 | - events.ReportingEvent('foo', 'name2', 'description')) |
2006 | - reporter.q.join() |
2007 | - self.assertEqual(2, len(list(reporter._iterate_kvps(0)))) |
2008 | - |
2009 | - reporter2 = handlers.HyperVKvpReportingHandler( |
2010 | - kvp_file_path=self.tmp_file_path) |
2011 | - reporter2.incarnation_no = reporter.incarnation_no + 1 |
2012 | - reporter2.publish_event( |
2013 | - events.ReportingEvent('foo', 'name3', 'description')) |
2014 | - reporter2.q.join() |
2015 | - |
2016 | - self.assertEqual(2, len(list(reporter2._iterate_kvps(0)))) |
2017 | - |
2018 | def test_events_with_higher_incarnation_not_over_written(self): |
2019 | - reporter = handlers.HyperVKvpReportingHandler( |
2020 | + reporter = HyperVKvpReportingHandler( |
2021 | kvp_file_path=self.tmp_file_path) |
2022 | - |
2023 | self.assertEqual(0, len(list(reporter._iterate_kvps(0)))) |
2024 | |
2025 | reporter.publish_event( |
2026 | @@ -86,7 +40,7 @@ class TextKvpReporter(CiTestCase): |
2027 | reporter.q.join() |
2028 | self.assertEqual(2, len(list(reporter._iterate_kvps(0)))) |
2029 | |
2030 | - reporter3 = handlers.HyperVKvpReportingHandler( |
2031 | + reporter3 = HyperVKvpReportingHandler( |
2032 | kvp_file_path=self.tmp_file_path) |
2033 | reporter3.incarnation_no = reporter.incarnation_no - 1 |
2034 | reporter3.publish_event( |
2035 | @@ -95,7 +49,7 @@ class TextKvpReporter(CiTestCase): |
2036 | self.assertEqual(3, len(list(reporter3._iterate_kvps(0)))) |
2037 | |
2038 | def test_finish_event_result_is_logged(self): |
2039 | - reporter = handlers.HyperVKvpReportingHandler( |
2040 | + reporter = HyperVKvpReportingHandler( |
2041 | kvp_file_path=self.tmp_file_path) |
2042 | reporter.publish_event( |
2043 | events.FinishReportingEvent('name2', 'description1', |
2044 | @@ -105,7 +59,7 @@ class TextKvpReporter(CiTestCase): |
2045 | |
2046 | def test_file_operation_issue(self): |
2047 | os.remove(self.tmp_file_path) |
2048 | - reporter = handlers.HyperVKvpReportingHandler( |
2049 | + reporter = HyperVKvpReportingHandler( |
2050 | kvp_file_path=self.tmp_file_path) |
2051 | reporter.publish_event( |
2052 | events.FinishReportingEvent('name2', 'description1', |
2053 | @@ -113,7 +67,7 @@ class TextKvpReporter(CiTestCase): |
2054 | reporter.q.join() |
2055 | |
2056 | def test_event_very_long(self): |
2057 | - reporter = handlers.HyperVKvpReportingHandler( |
2058 | + reporter = HyperVKvpReportingHandler( |
2059 | kvp_file_path=self.tmp_file_path) |
2060 | description = 'ab' * reporter.HV_KVP_EXCHANGE_MAX_VALUE_SIZE |
2061 | long_event = events.FinishReportingEvent( |
2062 | @@ -132,3 +86,43 @@ class TextKvpReporter(CiTestCase): |
2063 | self.assertEqual(msg_slice['msg_i'], i) |
2064 | full_description += msg_slice['msg'] |
2065 | self.assertEqual(description, full_description) |
2066 | + |
2067 | + def test_not_truncate_kvp_file_modified_after_boot(self): |
2068 | + with open(self.tmp_file_path, "wb+") as f: |
2069 | + kvp = {'key': 'key1', 'value': 'value1'} |
2070 | + data = (struct.pack("%ds%ds" % ( |
2071 | + HyperVKvpReportingHandler.HV_KVP_EXCHANGE_MAX_KEY_SIZE, |
2072 | + HyperVKvpReportingHandler.HV_KVP_EXCHANGE_MAX_VALUE_SIZE), |
2073 | + kvp['key'].encode('utf-8'), kvp['value'].encode('utf-8'))) |
2074 | + f.write(data) |
2075 | + cur_time = time.time() |
2076 | + os.utime(self.tmp_file_path, (cur_time, cur_time)) |
2077 | + |
2078 | + # reset this because the unit test framework |
2079 | + # has already polluted the class variable |
2080 | + HyperVKvpReportingHandler._already_truncated_pool_file = False |
2081 | + |
2082 | + reporter = HyperVKvpReportingHandler(kvp_file_path=self.tmp_file_path) |
2083 | + kvps = list(reporter._iterate_kvps(0)) |
2084 | + self.assertEqual(1, len(kvps)) |
2085 | + |
2086 | + def test_truncate_stale_kvp_file(self): |
2087 | + with open(self.tmp_file_path, "wb+") as f: |
2088 | + kvp = {'key': 'key1', 'value': 'value1'} |
2089 | + data = (struct.pack("%ds%ds" % ( |
2090 | + HyperVKvpReportingHandler.HV_KVP_EXCHANGE_MAX_KEY_SIZE, |
2091 | + HyperVKvpReportingHandler.HV_KVP_EXCHANGE_MAX_VALUE_SIZE), |
2092 | + kvp['key'].encode('utf-8'), kvp['value'].encode('utf-8'))) |
2093 | + f.write(data) |
2094 | + |
2095 | + # set the time ways back to make it look like |
2096 | + # we had an old kvp file |
2097 | + os.utime(self.tmp_file_path, (1000000, 1000000)) |
2098 | + |
2099 | + # reset this because the unit test framework |
2100 | + # has already polluted the class variable |
2101 | + HyperVKvpReportingHandler._already_truncated_pool_file = False |
2102 | + |
2103 | + reporter = HyperVKvpReportingHandler(kvp_file_path=self.tmp_file_path) |
2104 | + kvps = list(reporter._iterate_kvps(0)) |
2105 | + self.assertEqual(0, len(kvps)) |
2106 | diff --git a/tools/build-on-freebsd b/tools/build-on-freebsd |
2107 | index d23fde2..dc3b974 100755 |
2108 | --- a/tools/build-on-freebsd |
2109 | +++ b/tools/build-on-freebsd |
2110 | @@ -9,6 +9,7 @@ fail() { echo "FAILED:" "$@" 1>&2; exit 1; } |
2111 | depschecked=/tmp/c-i.dependencieschecked |
2112 | pkgs=" |
2113 | bash |
2114 | + chpasswd |
2115 | dmidecode |
2116 | e2fsprogs |
2117 | py27-Jinja2 |
2118 | @@ -17,6 +18,7 @@ pkgs=" |
2119 | py27-configobj |
2120 | py27-jsonpatch |
2121 | py27-jsonpointer |
2122 | + py27-jsonschema |
2123 | py27-oauthlib |
2124 | py27-requests |
2125 | py27-serial |
2126 | @@ -28,12 +30,9 @@ pkgs=" |
2127 | [ -f "$depschecked" ] || pkg install ${pkgs} || fail "install packages" |
2128 | touch $depschecked |
2129 | |
2130 | -# Required but unavailable port/pkg: py27-jsonpatch py27-jsonpointer |
2131 | -# Luckily, the install step will take care of this by installing it from pypi... |
2132 | - |
2133 | # Build the code and install in /usr/local/: |
2134 | -python setup.py build |
2135 | -python setup.py install -O1 --skip-build --prefix /usr/local/ --init-system sysvinit_freebsd |
2136 | +python2.7 setup.py build |
2137 | +python2.7 setup.py install -O1 --skip-build --prefix /usr/local/ --init-system sysvinit_freebsd |
2138 | |
2139 | # Enable cloud-init in /etc/rc.conf: |
2140 | sed -i.bak -e "/cloudinit_enable=.*/d" /etc/rc.conf |
2141 | diff --git a/tools/ds-identify b/tools/ds-identify |
2142 | index b78b273..6518901 100755 |
2143 | --- a/tools/ds-identify |
2144 | +++ b/tools/ds-identify |
2145 | @@ -620,7 +620,7 @@ dscheck_MAAS() { |
2146 | } |
2147 | |
2148 | dscheck_NoCloud() { |
2149 | - local fslabel="cidata" d="" |
2150 | + local fslabel="cidata CIDATA" d="" |
2151 | case " ${DI_KERNEL_CMDLINE} " in |
2152 | *\ ds=nocloud*) return ${DS_FOUND};; |
2153 | esac |
2154 | @@ -632,9 +632,10 @@ dscheck_NoCloud() { |
2155 | check_seed_dir "$d" meta-data user-data && return ${DS_FOUND} |
2156 | check_writable_seed_dir "$d" meta-data user-data && return ${DS_FOUND} |
2157 | done |
2158 | - if has_fs_with_label "${fslabel}"; then |
2159 | + if has_fs_with_label $fslabel; then |
2160 | return ${DS_FOUND} |
2161 | fi |
2162 | + |
2163 | return ${DS_NOT_FOUND} |
2164 | } |
2165 | |
2166 | @@ -762,7 +763,7 @@ is_cdrom_ovf() { |
2167 | |
2168 | # explicitly skip known labels of other types. rd_rdfe is azure. |
2169 | case "$label" in |
2170 | - config-2|CONFIG-2|rd_rdfe_stable*|cidata) return 1;; |
2171 | + config-2|CONFIG-2|rd_rdfe_stable*|cidata|CIDATA) return 1;; |
2172 | esac |
2173 | |
2174 | local idstr="http://schemas.dmtf.org/ovf/environment/1" |
2175 | diff --git a/tools/read-version b/tools/read-version |
2176 | index e69c2ce..6dca659 100755 |
2177 | --- a/tools/read-version |
2178 | +++ b/tools/read-version |
2179 | @@ -71,9 +71,12 @@ if is_gitdir(_tdir) and which("git"): |
2180 | flags = ['--tags'] |
2181 | cmd = ['git', 'describe', '--abbrev=8', '--match=[0-9]*'] + flags |
2182 | |
2183 | - version = tiny_p(cmd).strip() |
2184 | + try: |
2185 | + version = tiny_p(cmd).strip() |
2186 | + except RuntimeError: |
2187 | + version = None |
2188 | |
2189 | - if not version.startswith(src_version): |
2190 | + if version is None or not version.startswith(src_version): |
2191 | sys.stderr.write("git describe version (%s) differs from " |
2192 | "cloudinit.version (%s)\n" % (version, src_version)) |
2193 | sys.stderr.write( |
PASSED: Continuous integration, rev:40bd980d11dc1da884ce6e55c60518c3c4ebe106
https://jenkins.ubuntu.com/server/job/cloud-init-ci/718/
Executed test runs:
SUCCESS: Checkout
SUCCESS: Unit & Style Tests
SUCCESS: Ubuntu LTS: Build
SUCCESS: Ubuntu LTS: Integration
IN_PROGRESS: Declarative: Post Actions
Click here to trigger a rebuild:
https://jenkins.ubuntu.com/server/job/cloud-init-ci/718/rebuild