Merge ~chad.smith/cloud-init:ubuntu/xenial into cloud-init:ubuntu/xenial

Proposed by Chad Smith
Status: Merged
Merged at revision: 421a036c999c3c1ad12872e6509315472089d53d
Proposed branch: ~chad.smith/cloud-init:ubuntu/xenial
Merge into: cloud-init:ubuntu/xenial
Diff against target: 2193 lines (+1247/-204)
38 files modified
ChangeLog (+117/-0)
cloudinit/config/cc_apt_configure.py (+1/-1)
cloudinit/config/cc_mounts.py (+11/-0)
cloudinit/net/sysconfig.py (+4/-2)
cloudinit/net/tests/test_init.py (+1/-1)
cloudinit/reporting/handlers.py (+57/-60)
cloudinit/sources/DataSourceAzure.py (+11/-6)
cloudinit/sources/DataSourceCloudStack.py (+1/-1)
cloudinit/sources/DataSourceConfigDrive.py (+2/-5)
cloudinit/sources/DataSourceEc2.py (+1/-1)
cloudinit/sources/DataSourceNoCloud.py (+3/-1)
cloudinit/sources/DataSourceScaleway.py (+1/-2)
cloudinit/sources/__init__.py (+3/-3)
cloudinit/sources/helpers/azure.py (+11/-3)
cloudinit/sources/tests/test_init.py (+0/-15)
cloudinit/util.py (+2/-13)
cloudinit/version.py (+1/-1)
debian/changelog (+48/-2)
debian/patches/azure-apply-network-config-false.patch (+1/-1)
debian/patches/azure-use-walinux-agent.patch (+1/-1)
debian/patches/series (+1/-0)
debian/patches/ubuntu-advantage-revert-tip.patch (+735/-0)
doc/rtd/topics/datasources/nocloud.rst (+1/-1)
packages/redhat/cloud-init.spec.in (+3/-1)
packages/suse/cloud-init.spec.in (+3/-1)
setup.py (+2/-1)
tests/cloud_tests/releases.yaml (+16/-0)
tests/unittests/test_datasource/test_azure.py (+10/-3)
tests/unittests/test_datasource/test_azure_helper.py (+7/-2)
tests/unittests/test_datasource/test_nocloud.py (+42/-0)
tests/unittests/test_datasource/test_scaleway.py (+0/-7)
tests/unittests/test_ds_identify.py (+17/-0)
tests/unittests/test_handler/test_handler_mounts.py (+29/-1)
tests/unittests/test_net.py (+42/-3)
tests/unittests/test_reporting_hyperv.py (+49/-55)
tools/build-on-freebsd (+4/-5)
tools/ds-identify (+4/-3)
tools/read-version (+5/-2)
Reviewer             Review Type               Date Requested   Status
Ryan Harper                                                     Approve
Server Team CI bot   continuous-integration                     Approve
Review via email: mp+367297@code.launchpad.net

Commit message

New upstream snapshot for SRU into xenial.

QUESTION: we changed the ubuntu-advantage cloud-config module in an incompatible way, on the expectation that a new ubuntu-advantage-tools would be released before this cloud-init SRU completes. How do we want to resolve this? I've suggested reverting all upstream ubuntu-advantage changes with a reverse patch (just cc_ubuntu_advantage.py and its test file).

That revert is contained in this branch as
debian/patches/ubuntu-advantage-revert-tip.patch
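For context, a reverse patch of this kind can be generated by diffing the two files from the new snapshot back to their last released state and carrying the result in debian/patches. A minimal sketch, assuming the upstream 18.5 tag holds the pre-rewrite module (illustrative commands, not necessarily the exact ones used for this branch):

    # Produce a patch that, applied on top of the snapshot, restores the
    # 18.5 version of the module and its tests.
    git diff HEAD 18.5 -- \
        cloudinit/config/cc_ubuntu_advantage.py \
        cloudinit/config/tests/test_ubuntu_advantage.py \
        > debian/patches/ubuntu-advantage-revert-tip.patch
    # Verify the whole series, including the new patch, still applies cleanly.
    QUILT_PATCHES=debian/patches quilt push -a
    QUILT_PATCHES=debian/patches quilt pop -a

Per the patch header, Xenial can drop it again once ubuntu-advantage-tools >= 19.1 has been SRU'd.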

Revision history for this message
Server Team CI bot (server-team-bot) wrote :

PASSED: Continuous integration, rev:40bd980d11dc1da884ce6e55c60518c3c4ebe106
https://jenkins.ubuntu.com/server/job/cloud-init-ci/718/
Executed test runs:
    SUCCESS: Checkout
    SUCCESS: Unit & Style Tests
    SUCCESS: Ubuntu LTS: Build
    SUCCESS: Ubuntu LTS: Integration
    IN_PROGRESS: Declarative: Post Actions

Click here to trigger a rebuild:
https://jenkins.ubuntu.com/server/job/cloud-init-ci/718/rebuild

review: Approve (continuous-integration)
Revision history for this message
Server Team CI bot (server-team-bot) wrote :

PASSED: Continuous integration, rev:421a036c999c3c1ad12872e6509315472089d53d
https://jenkins.ubuntu.com/server/job/cloud-init-ci/719/
Executed test runs:
    SUCCESS: Checkout
    SUCCESS: Unit & Style Tests
    SUCCESS: Ubuntu LTS: Build
    SUCCESS: Ubuntu LTS: Integration
    IN_PROGRESS: Declarative: Post Actions

Click here to trigger a rebuild:
https://jenkins.ubuntu.com/server/job/cloud-init-ci/719/rebuild

review: Approve (continuous-integration)
Revision history for this message
Ryan Harper (raharper) wrote :

LGTM. Verified I get the same branch as what's proposed.

review: Approve

Preview Diff

diff --git a/ChangeLog b/ChangeLog
index 8fa6fdd..bf48fd4 100644
--- a/ChangeLog
+++ b/ChangeLog
@@ -1,3 +1,120 @@
119.1:
2 - freebsd: add chpasswd pkg in the image [Gonéri Le Bouder]
3 - tests: add Eoan release [Paride Legovini]
4 - cc_mounts: check if mount -a on no-change fstab path
5 [Jason Zions (MSFT)] (LP: #1825596)
6 - replace remaining occurrences of LOG.warn [Daniel Watkins]
7 - DataSourceAzure: Adjust timeout for polling IMDS [Anh Vo]
8 - Azure: Changes to the Hyper-V KVP Reporter [Anh Vo]
9 - git tests: no longer show warning about safe yaml.
10 - tools/read-version: handle errors [Chad Miller]
11 - net/sysconfig: only indicate available on known sysconfig distros
12 (LP: #1819994)
13 - packages: update rpm specs for new bash completion path
14 [Daniel Watkins] (LP: #1825444)
15 - test_azure: mock util.SeLinuxGuard where needed
16 [Jason Zions (MSFT)] (LP: #1825253)
17 - setup.py: install bash completion script in new location [Daniel Watkins]
18 - mount_cb: do not pass sync and rw options to mount
19 [Gonéri Le Bouder] (LP: #1645824)
20 - cc_apt_configure: fix typo in apt documentation [Dominic Schlegel]
21 - Revert "DataSource: move update_events from a class to an instance..."
22 [Daniel Watkins]
23 - Change DataSourceNoCloud to ignore file system label's case.
24 [Risto Oikarinen]
25 - cmd:main.py: Fix missing 'modules-init' key in modes dict
26 [Antonio Romito] (LP: #1815109)
27 - ubuntu_advantage: rewrite cloud-config module
28 - Azure: Treat _unset network configuration as if it were absent
29 [Jason Zions (MSFT)] (LP: #1823084)
30 - DatasourceAzure: add additional logging for azure datasource [Anh Vo]
31 - cloud_tests: fix apt_pipelining test-cases
32 - Azure: Ensure platform random_seed is always serializable as JSON.
33 [Jason Zions (MSFT)]
34 - net/sysconfig: write out SUSE-compatible IPv6 config [Robert Schweikert]
35 - tox: Update testenv for openSUSE Leap to 15.0 [Thomas Bechtold]
36 - net: Fix ipv6 static routes when using eni renderer
37 [Raphael Glon] (LP: #1818669)
38 - Add ubuntu_drivers config module [Daniel Watkins]
39 - doc: Refresh Azure walinuxagent docs [Daniel Watkins]
40 - tox: bump pylint version to latest (2.3.1) [Daniel Watkins]
41 - DataSource: move update_events from a class to an instance attribute
42 [Daniel Watkins] (LP: #1819913)
43 - net/sysconfig: Handle default route setup for dhcp configured NICs
44 [Robert Schweikert] (LP: #1812117)
45 - DataSourceEc2: update RELEASE_BLOCKER to be more accurate
46 [Daniel Watkins]
47 - cloud-init-per: POSIX sh does not support string subst, use sed
48 (LP: #1819222)
49 - Support locking user with usermod if passwd is not available.
50 - Example for Microsoft Azure data disk added. [Anton Olifir]
51 - clean: correctly determine the path for excluding seed directory
52 [Daniel Watkins] (LP: #1818571)
53 - helpers/openstack: Treat unknown link types as physical
54 [Daniel Watkins] (LP: #1639263)
55 - drop Python 2.6 support and our NIH version detection [Daniel Watkins]
56 - tip-pylint: Fix assignment-from-return-none errors
57 - net: append type:dhcp[46] only if dhcp[46] is True in v2 netconfig
58 [Kurt Stieger] (LP: #1818032)
59 - cc_apt_pipelining: stop disabling pipelining by default
60 [Daniel Watkins] (LP: #1794982)
61 - tests: fix some slow tests and some leaking state [Daniel Watkins]
62 - util: don't determine string_types ourselves [Daniel Watkins]
63 - cc_rsyslog: Escape possible nested set [Daniel Watkins] (LP: #1816967)
64 - Enable encrypted_data_bag_secret support for Chef
65 [Eric Williams] (LP: #1817082)
66 - azure: Filter list of ssh keys pulled from fabric [Jason Zions (MSFT)]
67 - doc: update merging doc with fixes and some additional details/examples
68 - tests: integration test failure summary to use traceback if empty error
69 - This is to fix https://bugs.launchpad.net/cloud-init/+bug/1812676
70 [Vitaly Kuznetsov]
71 - EC2: Rewrite network config on AWS Classic instances every boot
72 [Guilherme G. Piccoli] (LP: #1802073)
73 - netinfo: Adjust ifconfig output parsing for FreeBSD ipv6 entries
74 (LP: #1779672)
75 - netplan: Don't render yaml aliases when dumping netplan (LP: #1815051)
76 - add PyCharm IDE .idea/ path to .gitignore [Dominic Schlegel]
77 - correct grammar issue in instance metadata documentation
78 [Dominic Schlegel] (LP: #1802188)
79 - clean: cloud-init clean should not trace when run from within cloud_dir
80 (LP: #1795508)
81 - Resolve flake8 comparison and pycodestyle over-ident issues
82 [Paride Legovini]
83 - opennebula: also exclude epochseconds from changed environment vars
84 (LP: #1813641)
85 - systemd: Render generator from template to account for system
86 differences. [Robert Schweikert]
87 - sysconfig: On SUSE, use STARTMODE instead of ONBOOT
88 [Robert Schweikert] (LP: #1799540)
89 - flake8: use ==/!= to compare str, bytes, and int literals
90 [Paride Legovini]
91 - opennebula: exclude EPOCHREALTIME as known bash env variable with a
92 delta (LP: #1813383)
93 - tox: fix disco httpretty dependencies for py37 (LP: #1813361)
94 - run-container: uncomment baseurl in yum.repos.d/*.repo when using a
95 proxy [Paride Legovini]
96 - lxd: install zfs-linux instead of zfs meta package
97 [Johnson Shi] (LP: #1799779)
98 - net/sysconfig: do not write a resolv.conf file with only the header.
99 [Robert Schweikert]
100 - net: Make sysconfig renderer compatible with Network Manager.
101 [Eduardo Otubo]
102 - cc_set_passwords: Fix regex when parsing hashed passwords
103 [Marlin Cremers] (LP: #1811446)
104 - net: Wait for dhclient to daemonize before reading lease file
105 [Jason Zions] (LP: #1794399)
106 - [Azure] Increase retries when talking to Wireserver during metadata walk
107 [Jason Zions]
108 - Add documentation on adding a datasource.
109 - doc: clean up some datasource documentation.
110 - ds-identify: fix wrong variable name in ovf_vmware_transport_guestinfo.
111 - Scaleway: Support ssh keys provided inside an instance tag. [PORTE Loïc]
112 - OVF: simplify expected return values of transport functions.
113 - Vmware: Add support for the com.vmware.guestInfo OVF transport.
114 (LP: #1807466)
115 - HACKING.rst: change contact info to Josh Powers
116 - Update to pylint 2.2.2.
117
118.5:11818.5:
2 - tests: add Disco release [Joshua Powers]119 - tests: add Disco release [Joshua Powers]
3 - net: render 'metric' values in per-subnet routes (LP: #1805871)120 - net: render 'metric' values in per-subnet routes (LP: #1805871)
diff --git a/cloudinit/config/cc_apt_configure.py b/cloudinit/config/cc_apt_configure.py
index e18944e..919d199 100644
--- a/cloudinit/config/cc_apt_configure.py
+++ b/cloudinit/config/cc_apt_configure.py
@@ -127,7 +127,7 @@ to ``^[\\w-]+:\\w``
 
 Source list entries can be specified as a dictionary under the ``sources``
 config key, with key in the dict representing a different source file. The key
-The key of each source entry will be used as an id that can be referenced in
+of each source entry will be used as an id that can be referenced in
 other config entries, as well as the filename for the source's configuration
 under ``/etc/apt/sources.list.d``. If the name does not end with ``.list``,
 it will be appended. If there is no configuration for a key in ``sources``, no
diff --git a/cloudinit/config/cc_mounts.py b/cloudinit/config/cc_mounts.py
index 339baba..123ffb8 100644
--- a/cloudinit/config/cc_mounts.py
+++ b/cloudinit/config/cc_mounts.py
@@ -439,6 +439,7 @@ def handle(_name, cfg, cloud, log, _args):
 
     cc_lines = []
     needswap = False
+    need_mount_all = False
     dirs = []
     for line in actlist:
         # write 'comment' in the fs_mntops, entry, claiming this
@@ -449,11 +450,18 @@ def handle(_name, cfg, cloud, log, _args):
         dirs.append(line[1])
         cc_lines.append('\t'.join(line))
 
+    mount_points = [v['mountpoint'] for k, v in util.mounts().items()
+                    if 'mountpoint' in v]
     for d in dirs:
         try:
             util.ensure_dir(d)
         except Exception:
             util.logexc(log, "Failed to make '%s' config-mount", d)
+        # dirs is list of directories on which a volume should be mounted.
+        # If any of them does not already show up in the list of current
+        # mount points, we will definitely need to do mount -a.
+        if not need_mount_all and d not in mount_points:
+            need_mount_all = True
 
     sadds = [WS.sub(" ", n) for n in cc_lines]
     sdrops = [WS.sub(" ", n) for n in fstab_removed]
@@ -473,6 +481,9 @@ def handle(_name, cfg, cloud, log, _args):
         log.debug("No changes to /etc/fstab made.")
     else:
         log.debug("Changes to fstab: %s", sops)
+        need_mount_all = True
+
+    if need_mount_all:
         activate_cmds.append(["mount", "-a"])
         if uses_systemd:
             activate_cmds.append(["systemctl", "daemon-reload"])
diff --git a/cloudinit/net/sysconfig.py b/cloudinit/net/sysconfig.py
index 0998392..a47da0a 100644
--- a/cloudinit/net/sysconfig.py
+++ b/cloudinit/net/sysconfig.py
@@ -18,6 +18,8 @@ from .network_state import (
 
 LOG = logging.getLogger(__name__)
 NM_CFG_FILE = "/etc/NetworkManager/NetworkManager.conf"
+KNOWN_DISTROS = [
+    'opensuse', 'sles', 'suse', 'redhat', 'fedora', 'centos']
 
 
 def _make_header(sep='#'):
@@ -717,8 +719,8 @@ class Renderer(renderer.Renderer):
 def available(target=None):
     sysconfig = available_sysconfig(target=target)
     nm = available_nm(target=target)
-
-    return any([nm, sysconfig])
+    return (util.get_linux_distro()[0] in KNOWN_DISTROS
+            and any([nm, sysconfig]))
 
 
 def available_sysconfig(target=None):
diff --git a/cloudinit/net/tests/test_init.py b/cloudinit/net/tests/test_init.py
index f55c31e..6d2affe 100644
--- a/cloudinit/net/tests/test_init.py
+++ b/cloudinit/net/tests/test_init.py
@@ -7,11 +7,11 @@ import mock
 import os
 import requests
 import textwrap
-import yaml
 
 import cloudinit.net as net
 from cloudinit.util import ensure_file, write_file, ProcessExecutionError
 from cloudinit.tests.helpers import CiTestCase, HttprettyTestCase
+from cloudinit import safeyaml as yaml
 
 
 class TestSysDevPath(CiTestCase):
diff --git a/cloudinit/reporting/handlers.py b/cloudinit/reporting/handlers.py
old mode 100644
new mode 100755
index 6d23558..10165ae
--- a/cloudinit/reporting/handlers.py
+++ b/cloudinit/reporting/handlers.py
@@ -5,7 +5,6 @@ import fcntl
 import json
 import six
 import os
-import re
 import struct
 import threading
 import time
@@ -14,6 +13,7 @@ from cloudinit import log as logging
 from cloudinit.registry import DictRegistry
 from cloudinit import (url_helper, util)
 from datetime import datetime
+from six.moves.queue import Empty as QueueEmptyError
 
 if six.PY2:
     from multiprocessing.queues import JoinableQueue as JQueue
@@ -129,24 +129,50 @@ class HyperVKvpReportingHandler(ReportingHandler):
129 DESC_IDX_KEY = 'msg_i'129 DESC_IDX_KEY = 'msg_i'
130 JSON_SEPARATORS = (',', ':')130 JSON_SEPARATORS = (',', ':')
131 KVP_POOL_FILE_GUEST = '/var/lib/hyperv/.kvp_pool_1'131 KVP_POOL_FILE_GUEST = '/var/lib/hyperv/.kvp_pool_1'
132 _already_truncated_pool_file = False
132133
133 def __init__(self,134 def __init__(self,
134 kvp_file_path=KVP_POOL_FILE_GUEST,135 kvp_file_path=KVP_POOL_FILE_GUEST,
135 event_types=None):136 event_types=None):
136 super(HyperVKvpReportingHandler, self).__init__()137 super(HyperVKvpReportingHandler, self).__init__()
137 self._kvp_file_path = kvp_file_path138 self._kvp_file_path = kvp_file_path
139 HyperVKvpReportingHandler._truncate_guest_pool_file(
140 self._kvp_file_path)
141
138 self._event_types = event_types142 self._event_types = event_types
139 self.q = JQueue()143 self.q = JQueue()
140 self.kvp_file = None
141 self.incarnation_no = self._get_incarnation_no()144 self.incarnation_no = self._get_incarnation_no()
142 self.event_key_prefix = u"{0}|{1}".format(self.EVENT_PREFIX,145 self.event_key_prefix = u"{0}|{1}".format(self.EVENT_PREFIX,
143 self.incarnation_no)146 self.incarnation_no)
144 self._current_offset = 0
145 self.publish_thread = threading.Thread(147 self.publish_thread = threading.Thread(
146 target=self._publish_event_routine)148 target=self._publish_event_routine)
147 self.publish_thread.daemon = True149 self.publish_thread.daemon = True
148 self.publish_thread.start()150 self.publish_thread.start()
149151
152 @classmethod
153 def _truncate_guest_pool_file(cls, kvp_file):
154 """
155 Truncate the pool file if it has not been truncated since boot.
156 This should be done exactly once for the file indicated by
157 KVP_POOL_FILE_GUEST constant above. This method takes a filename
158 so that we can use an arbitrary file during unit testing.
159 Since KVP is a best-effort telemetry channel we only attempt to
160 truncate the file once and only if the file has not been modified
161 since boot. Additional truncation can lead to loss of existing
162 KVPs.
163 """
164 if cls._already_truncated_pool_file:
165 return
166 boot_time = time.time() - float(util.uptime())
167 try:
168 if os.path.getmtime(kvp_file) < boot_time:
169 with open(kvp_file, "w"):
170 pass
171 except (OSError, IOError) as e:
172 LOG.warning("failed to truncate kvp pool file, %s", e)
173 finally:
174 cls._already_truncated_pool_file = True
175
150 def _get_incarnation_no(self):176 def _get_incarnation_no(self):
151 """177 """
152 use the time passed as the incarnation number.178 use the time passed as the incarnation number.
@@ -162,20 +188,15 @@ class HyperVKvpReportingHandler(ReportingHandler):
162188
163 def _iterate_kvps(self, offset):189 def _iterate_kvps(self, offset):
164 """iterate the kvp file from the current offset."""190 """iterate the kvp file from the current offset."""
165 try:191 with open(self._kvp_file_path, 'rb') as f:
166 with open(self._kvp_file_path, 'rb+') as f:192 fcntl.flock(f, fcntl.LOCK_EX)
167 self.kvp_file = f193 f.seek(offset)
168 fcntl.flock(f, fcntl.LOCK_EX)194 record_data = f.read(self.HV_KVP_RECORD_SIZE)
169 f.seek(offset)195 while len(record_data) == self.HV_KVP_RECORD_SIZE:
196 kvp_item = self._decode_kvp_item(record_data)
197 yield kvp_item
170 record_data = f.read(self.HV_KVP_RECORD_SIZE)198 record_data = f.read(self.HV_KVP_RECORD_SIZE)
171 while len(record_data) == self.HV_KVP_RECORD_SIZE:199 fcntl.flock(f, fcntl.LOCK_UN)
172 self._current_offset += self.HV_KVP_RECORD_SIZE
173 kvp_item = self._decode_kvp_item(record_data)
174 yield kvp_item
175 record_data = f.read(self.HV_KVP_RECORD_SIZE)
176 fcntl.flock(f, fcntl.LOCK_UN)
177 finally:
178 self.kvp_file = None
179200
180 def _event_key(self, event):201 def _event_key(self, event):
181 """202 """
@@ -207,23 +228,13 @@ class HyperVKvpReportingHandler(ReportingHandler):
207228
208 return {'key': k, 'value': v}229 return {'key': k, 'value': v}
209230
210 def _update_kvp_item(self, record_data):
211 if self.kvp_file is None:
212 raise ReportException(
213 "kvp file '{0}' not opened."
214 .format(self._kvp_file_path))
215 self.kvp_file.seek(-self.HV_KVP_RECORD_SIZE, 1)
216 self.kvp_file.write(record_data)
217
218 def _append_kvp_item(self, record_data):231 def _append_kvp_item(self, record_data):
219 with open(self._kvp_file_path, 'rb+') as f:232 with open(self._kvp_file_path, 'ab') as f:
220 fcntl.flock(f, fcntl.LOCK_EX)233 fcntl.flock(f, fcntl.LOCK_EX)
221 # seek to end of the file234 for data in record_data:
222 f.seek(0, 2)235 f.write(data)
223 f.write(record_data)
224 f.flush()236 f.flush()
225 fcntl.flock(f, fcntl.LOCK_UN)237 fcntl.flock(f, fcntl.LOCK_UN)
226 self._current_offset = f.tell()
227238
228 def _break_down(self, key, meta_data, description):239 def _break_down(self, key, meta_data, description):
229 del meta_data[self.MSG_KEY]240 del meta_data[self.MSG_KEY]
@@ -279,40 +290,26 @@ class HyperVKvpReportingHandler(ReportingHandler):
279290
280 def _publish_event_routine(self):291 def _publish_event_routine(self):
281 while True:292 while True:
293 items_from_queue = 0
282 try:294 try:
283 event = self.q.get(block=True)295 event = self.q.get(block=True)
284 need_append = True296 items_from_queue += 1
297 encoded_data = []
298 while event is not None:
299 encoded_data += self._encode_event(event)
300 try:
301 # get all the rest of the events in the queue
302 event = self.q.get(block=False)
303 items_from_queue += 1
304 except QueueEmptyError:
305 event = None
285 try:306 try:
286 if not os.path.exists(self._kvp_file_path):307 self._append_kvp_item(encoded_data)
287 LOG.warning(308 except (OSError, IOError) as e:
288 "skip writing events %s to %s. file not present.",309 LOG.warning("failed posting events to kvp, %s", e)
289 event.as_string(),
290 self._kvp_file_path)
291 encoded_event = self._encode_event(event)
292 # for each encoded_event
293 for encoded_data in (encoded_event):
294 for kvp in self._iterate_kvps(self._current_offset):
295 match = (
296 re.match(
297 r"^{0}\|(\d+)\|.+"
298 .format(self.EVENT_PREFIX),
299 kvp['key']
300 ))
301 if match:
302 match_groups = match.groups(0)
303 if int(match_groups[0]) < self.incarnation_no:
304 need_append = False
305 self._update_kvp_item(encoded_data)
306 continue
307 if need_append:
308 self._append_kvp_item(encoded_data)
309 except IOError as e:
310 LOG.warning(
311 "failed posting event to kvp: %s e:%s",
312 event.as_string(), e)
313 finally:310 finally:
314 self.q.task_done()311 for _ in range(items_from_queue):
315312 self.q.task_done()
316 # when main process exits, q.get() will through EOFError313 # when main process exits, q.get() will through EOFError
317 # indicating we should exit this thread.314 # indicating we should exit this thread.
318 except EOFError:315 except EOFError:
@@ -322,7 +319,7 @@ class HyperVKvpReportingHandler(ReportingHandler):
322 # if the kvp pool already contains a chunk of data,319 # if the kvp pool already contains a chunk of data,
323 # so defer it to another thread.320 # so defer it to another thread.
324 def publish_event(self, event):321 def publish_event(self, event):
325 if (not self._event_types or event.event_type in self._event_types):322 if not self._event_types or event.event_type in self._event_types:
326 self.q.put(event)323 self.q.put(event)
327324
328 def flush(self):325 def flush(self):
diff --git a/cloudinit/sources/DataSourceAzure.py b/cloudinit/sources/DataSourceAzure.py
index 76b1661..b7440c1 100755
--- a/cloudinit/sources/DataSourceAzure.py
+++ b/cloudinit/sources/DataSourceAzure.py
@@ -57,7 +57,12 @@ AZURE_CHASSIS_ASSET_TAG = '7783-7084-3265-9085-8269-3286-77'
 REPROVISION_MARKER_FILE = "/var/lib/cloud/data/poll_imds"
 REPORTED_READY_MARKER_FILE = "/var/lib/cloud/data/reported_ready"
 AGENT_SEED_DIR = '/var/lib/waagent'
+
+# In the event where the IMDS primary server is not
+# available, it takes 1s to fallback to the secondary one
+IMDS_TIMEOUT_IN_SECONDS = 2
 IMDS_URL = "http://169.254.169.254/metadata/"
+
 PLATFORM_ENTROPY_SOURCE = "/sys/firmware/acpi/tables/OEM0"
 
 # List of static scripts and network config artifacts created by
@@ -407,7 +412,7 @@ class DataSourceAzure(sources.DataSource):
                 elif cdev.startswith("/dev/"):
                     if util.is_FreeBSD():
                         ret = util.mount_cb(cdev, load_azure_ds_dir,
-                                            mtype="udf", sync=False)
+                                            mtype="udf")
                     else:
                         ret = util.mount_cb(cdev, load_azure_ds_dir)
                 else:
@@ -582,9 +587,9 @@ class DataSourceAzure(sources.DataSource):
                         return
                     self._ephemeral_dhcp_ctx.clean_network()
                 else:
-                    return readurl(url, timeout=1, headers=headers,
-                                   exception_cb=exc_cb, infinite=True,
-                                   log_req_resp=False).contents
+                    return readurl(url, timeout=IMDS_TIMEOUT_IN_SECONDS,
+                                   headers=headers, exception_cb=exc_cb,
+                                   infinite=True, log_req_resp=False).contents
             except UrlError:
                 # Teardown our EphemeralDHCPv4 context on failure as we retry
                 self._ephemeral_dhcp_ctx.clean_network()
@@ -1291,8 +1296,8 @@ def _get_metadata_from_imds(retries):
     headers = {"Metadata": "true"}
     try:
         response = readurl(
-            url, timeout=1, headers=headers, retries=retries,
-            exception_cb=retry_on_url_exc)
+            url, timeout=IMDS_TIMEOUT_IN_SECONDS, headers=headers,
+            retries=retries, exception_cb=retry_on_url_exc)
     except Exception as e:
         LOG.debug('Ignoring IMDS instance metadata: %s', e)
         return {}
diff --git a/cloudinit/sources/DataSourceCloudStack.py b/cloudinit/sources/DataSourceCloudStack.py
index d4b758f..f185dc7 100644
--- a/cloudinit/sources/DataSourceCloudStack.py
+++ b/cloudinit/sources/DataSourceCloudStack.py
@@ -95,7 +95,7 @@ class DataSourceCloudStack(sources.DataSource):
         start_time = time.time()
         url = uhelp.wait_for_url(
             urls=urls, max_wait=url_params.max_wait_seconds,
-            timeout=url_params.timeout_seconds, status_cb=LOG.warn)
+            timeout=url_params.timeout_seconds, status_cb=LOG.warning)
 
         if url:
             LOG.debug("Using metadata source: '%s'", url)
diff --git a/cloudinit/sources/DataSourceConfigDrive.py b/cloudinit/sources/DataSourceConfigDrive.py
index 564e3eb..571d30d 100644
--- a/cloudinit/sources/DataSourceConfigDrive.py
+++ b/cloudinit/sources/DataSourceConfigDrive.py
@@ -72,15 +72,12 @@ class DataSourceConfigDrive(openstack.SourceMixin, sources.DataSource):
             dslist = self.sys_cfg.get('datasource_list')
             for dev in find_candidate_devs(dslist=dslist):
                 try:
-                    # Set mtype if freebsd and turn off sync
-                    if dev.startswith("/dev/cd"):
+                    if util.is_FreeBSD() and dev.startswith("/dev/cd"):
                         mtype = "cd9660"
-                        sync = False
                     else:
                         mtype = None
-                        sync = True
                     results = util.mount_cb(dev, read_config_drive,
-                                            mtype=mtype, sync=sync)
+                                            mtype=mtype)
                     found = dev
                 except openstack.NonReadable:
                     pass
diff --git a/cloudinit/sources/DataSourceEc2.py b/cloudinit/sources/DataSourceEc2.py
index ac28f1d..5c017bf 100644
--- a/cloudinit/sources/DataSourceEc2.py
+++ b/cloudinit/sources/DataSourceEc2.py
@@ -208,7 +208,7 @@ class DataSourceEc2(sources.DataSource):
         start_time = time.time()
         url = uhelp.wait_for_url(
             urls=urls, max_wait=url_params.max_wait_seconds,
-            timeout=url_params.timeout_seconds, status_cb=LOG.warn)
+            timeout=url_params.timeout_seconds, status_cb=LOG.warning)
 
         if url:
             self.metadata_address = url2base[url]
diff --git a/cloudinit/sources/DataSourceNoCloud.py b/cloudinit/sources/DataSourceNoCloud.py
index 6860f0c..fcf5d58 100644
--- a/cloudinit/sources/DataSourceNoCloud.py
+++ b/cloudinit/sources/DataSourceNoCloud.py
@@ -106,7 +106,9 @@ class DataSourceNoCloud(sources.DataSource):
             fslist = util.find_devs_with("TYPE=vfat")
             fslist.extend(util.find_devs_with("TYPE=iso9660"))
 
-            label_list = util.find_devs_with("LABEL=%s" % label)
+            label_list = util.find_devs_with("LABEL=%s" % label.upper())
+            label_list.extend(util.find_devs_with("LABEL=%s" % label.lower()))
+
             devlist = list(set(fslist) & set(label_list))
             devlist.sort(reverse=True)
 
diff --git a/cloudinit/sources/DataSourceScaleway.py b/cloudinit/sources/DataSourceScaleway.py
index 54bfc1f..b573b38 100644
--- a/cloudinit/sources/DataSourceScaleway.py
+++ b/cloudinit/sources/DataSourceScaleway.py
@@ -171,11 +171,10 @@ def query_data_api(api_type, api_address, retries, timeout):
 
 class DataSourceScaleway(sources.DataSource):
     dsname = "Scaleway"
+    update_events = {'network': [EventType.BOOT_NEW_INSTANCE, EventType.BOOT]}
 
     def __init__(self, sys_cfg, distro, paths):
         super(DataSourceScaleway, self).__init__(sys_cfg, distro, paths)
-        self.update_events = {
-            'network': {EventType.BOOT_NEW_INSTANCE, EventType.BOOT}}
 
         self.ds_cfg = util.mergemanydict([
             util.get_cfg_by_path(sys_cfg, ["datasource", "Scaleway"], {}),
diff --git a/cloudinit/sources/__init__.py b/cloudinit/sources/__init__.py
index 1604932..e6966b3 100644
--- a/cloudinit/sources/__init__.py
+++ b/cloudinit/sources/__init__.py
@@ -164,6 +164,9 @@ class DataSource(object):
     # A datasource which supports writing network config on each system boot
     # would call update_events['network'].add(EventType.BOOT).
 
+    # Default: generate network config on new instance id (first boot).
+    update_events = {'network': set([EventType.BOOT_NEW_INSTANCE])}
+
     # N-tuple listing default values for any metadata-related class
     # attributes cached on an instance by a process_data runs. These attribute
     # values are reset via clear_cached_attrs during any update_metadata call.
@@ -188,9 +191,6 @@ class DataSource(object):
         self.vendordata = None
         self.vendordata_raw = None
 
-        # Default: generate network config on new instance id (first boot).
-        self.update_events = {'network': {EventType.BOOT_NEW_INSTANCE}}
-
         self.ds_cfg = util.get_cfg_by_path(
             self.sys_cfg, ("datasource", self.dsname), {})
         if not self.ds_cfg:
diff --git a/cloudinit/sources/helpers/azure.py b/cloudinit/sources/helpers/azure.py
index d3af05e..82c4c8c 100755
--- a/cloudinit/sources/helpers/azure.py
+++ b/cloudinit/sources/helpers/azure.py
@@ -20,6 +20,9 @@ from cloudinit.reporting import events
 
 LOG = logging.getLogger(__name__)
 
+# This endpoint matches the format as found in dhcp lease files, since this
+# value is applied if the endpoint can't be found within a lease file
+DEFAULT_WIRESERVER_ENDPOINT = "a8:3f:81:10"
 
 azure_ds_reporter = events.ReportEventStack(
     name="azure-ds",
@@ -297,7 +300,12 @@ class WALinuxAgentShim(object):
     @azure_ds_telemetry_reporter
     def _get_value_from_leases_file(fallback_lease_file):
         leases = []
-        content = util.load_file(fallback_lease_file)
+        try:
+            content = util.load_file(fallback_lease_file)
+        except IOError as ex:
+            LOG.error("Failed to read %s: %s", fallback_lease_file, ex)
+            return None
+
         LOG.debug("content is %s", content)
         option_name = _get_dhcp_endpoint_option_name()
         for line in content.splitlines():
@@ -372,9 +380,9 @@ class WALinuxAgentShim(object):
                           fallback_lease_file)
             value = WALinuxAgentShim._get_value_from_leases_file(
                 fallback_lease_file)
-
         if value is None:
-            raise ValueError('No endpoint found.')
+            LOG.warning("No lease found; using default endpoint")
+            value = DEFAULT_WIRESERVER_ENDPOINT
 
         endpoint_ip_address = WALinuxAgentShim.get_ip_from_lease_value(value)
         LOG.debug('Azure endpoint found at %s', endpoint_ip_address)
diff --git a/cloudinit/sources/tests/test_init.py b/cloudinit/sources/tests/test_init.py
index cb1912b..6378e98 100644
--- a/cloudinit/sources/tests/test_init.py
+++ b/cloudinit/sources/tests/test_init.py
@@ -575,21 +575,6 @@ class TestDataSource(CiTestCase):
             " events: New instance first boot",
             self.logs.getvalue())
 
-    def test_data_sources_cant_mutate_update_events_for_others(self):
-        """update_events shouldn't be changed for other DSes (LP: #1819913)"""
-
-        class ModifyingDS(DataSource):
-
-            def __init__(self, sys_cfg, distro, paths):
-                # This mirrors what DataSourceAzure does which causes LP:
-                # #1819913
-                DataSource.__init__(self, sys_cfg, distro, paths)
-                self.update_events['network'].add(EventType.BOOT)
-
-        before_update_events = copy.deepcopy(self.datasource.update_events)
-        ModifyingDS(self.sys_cfg, self.distro, self.paths)
-        self.assertEqual(before_update_events, self.datasource.update_events)
-
 
 class TestRedactSensitiveData(CiTestCase):
 
diff --git a/cloudinit/util.py b/cloudinit/util.py
index 385f231..ea4199c 100644
--- a/cloudinit/util.py
+++ b/cloudinit/util.py
@@ -1679,7 +1679,7 @@ def mounts():
     return mounted
 
 
-def mount_cb(device, callback, data=None, rw=False, mtype=None, sync=True,
+def mount_cb(device, callback, data=None, mtype=None,
              update_env_for_mount=None):
     """
     Mount the device, call method 'callback' passing the directory
@@ -1726,18 +1726,7 @@ def mount_cb(device, callback, data=None, rw=False, mtype=None, sync=True,
     for mtype in mtypes:
         mountpoint = None
         try:
-            mountcmd = ['mount']
-            mountopts = []
-            if rw:
-                mountopts.append('rw')
-            else:
-                mountopts.append('ro')
-            if sync:
-                # This seems like the safe approach to do
-                # (ie where this is on by default)
-                mountopts.append("sync")
-            if mountopts:
-                mountcmd.extend(["-o", ",".join(mountopts)])
+            mountcmd = ['mount', '-o', 'ro']
             if mtype:
                 mountcmd.extend(['-t', mtype])
             mountcmd.append(device)
diff --git a/cloudinit/version.py b/cloudinit/version.py
index a2c5d43..ddcd436 100644
--- a/cloudinit/version.py
+++ b/cloudinit/version.py
@@ -4,7 +4,7 @@
 #
 # This file is part of cloud-init. See LICENSE file for license information.
 
-__VERSION__ = "18.5"
+__VERSION__ = "19.1"
 _PACKAGED_VERSION = '@@PACKAGED_VERSION@@'
 
 FEATURES = [
diff --git a/debian/changelog b/debian/changelog
index d21167b..270b0f3 100644
--- a/debian/changelog
+++ b/debian/changelog
@@ -1,11 +1,57 @@
1cloud-init (18.5-45-g3554ffe8-0ubuntu1~16.04.2) UNRELEASED; urgency=medium1cloud-init (19.1-1-gbaa47854-0ubuntu1~16.04.1) xenial; urgency=medium
22
3 * debian/patches/ubuntu-advantage-revert-tip.patch
4 Revert ubuntu-advantage config module changes until ubuntu-advantage-tools
5 19.1 publishes to Xenial (LP: #1828641)
3 * refresh patches:6 * refresh patches:
4 + debian/patches/azure-apply-network-config-false.patch7 + debian/patches/azure-apply-network-config-false.patch
5 + debian/patches/azure-use-walinux-agent.patch8 + debian/patches/azure-use-walinux-agent.patch
6 + debian/patches/ec2-classic-dont-reapply-networking.patch9 + debian/patches/ec2-classic-dont-reapply-networking.patch
10 * refresh patches:
11 + debian/patches/azure-apply-network-config-false.patch
12 + debian/patches/azure-use-walinux-agent.patch
13 * New upstream snapshot. (LP: #1828637)
14 - Azure: Return static fallback address as if failed to find endpoint
15 [Jason Zions (MSFT)]
16 - release 19.1
17 - freebsd: add chpasswd pkg in the image [Gonéri Le Bouder]
18 - tests: add Eoan release [Paride Legovini]
19 - cc_mounts: check if mount -a on no-change fstab path [Jason Zions (MSFT)]
20 - replace remaining occurrences of LOG.warn
21 - DataSourceAzure: Adjust timeout for polling IMDS [Anh Vo]
22 - Azure: Changes to the Hyper-V KVP Reporter [Anh Vo]
23 - git tests: no longer show warning about safe yaml. [Scott Moser]
24 - tools/read-version: handle errors [Chad Miller]
25 - net/sysconfig: only indicate available on known sysconfig distros
26 - packages: update rpm specs for new bash completion path
27 - test_azure: mock util.SeLinuxGuard where needed [Jason Zions (MSFT)]
28 - setup.py: install bash completion script in new location
29 - mount_cb: do not pass sync and rw options to mount [Gonéri Le Bouder]
30 - cc_apt_configure: fix typo in apt documentation [Dominic Schlegel]
31 - Revert "DataSource: move update_events from a class to an instance..."
32 - Change DataSourceNoCloud to ignore file system label's case.
33 [Risto Oikarinen]
34 - cmd:main.py: Fix missing 'modules-init' key in modes dict
35 [Antonio Romito]
36 - ubuntu_advantage: rewrite cloud-config module
37 - Azure: Treat _unset network configuration as if it were absent
38 [Jason Zions (MSFT)]
39 - DatasourceAzure: add additional logging for azure datasource [Anh Vo]
40 - cloud_tests: fix apt_pipelining test-cases
41 - Azure: Ensure platform random_seed is always serializable as JSON.
42 [Jason Zions (MSFT)]
43 - net/sysconfig: write out SUSE-compatible IPv6 config [Robert Schweikert]
44 - tox: Update testenv for openSUSE Leap to 15.0 [Thomas Bechtold]
45 - net: Fix ipv6 static routes when using eni renderer [Raphael Glon]
46 - Add ubuntu_drivers config module
47 - doc: Refresh Azure walinuxagent docs
48 - tox: bump pylint version to latest (2.3.1)
49 - DataSource: move update_events from a class to an instance attribute
50 - net/sysconfig: Handle default route setup for dhcp configured NICs
51 [Robert Schweikert]
52 - DataSourceEc2: update RELEASE_BLOCKER to be more accurate
753
8 -- Ryan Harper <ryan.harper@canonical.com> Tue, 09 Apr 2019 11:20:17 -050054 -- Chad Smith <chad.smith@canonical.com> Fri, 10 May 2019 16:26:48 -0600
955
10cloud-init (18.5-45-g3554ffe8-0ubuntu1~16.04.1) xenial; urgency=medium56cloud-init (18.5-45-g3554ffe8-0ubuntu1~16.04.1) xenial; urgency=medium
1157
diff --git a/debian/patches/azure-apply-network-config-false.patch b/debian/patches/azure-apply-network-config-false.patch
index e16ad64..f0c2fcf 100644
--- a/debian/patches/azure-apply-network-config-false.patch
+++ b/debian/patches/azure-apply-network-config-false.patch
@@ -10,7 +10,7 @@ Forwarded: not-needed
 Last-Update: 2018-10-17
 --- a/cloudinit/sources/DataSourceAzure.py
 +++ b/cloudinit/sources/DataSourceAzure.py
-@@ -215,7 +215,7 @@ BUILTIN_DS_CONFIG = {
+@@ -220,7 +220,7 @@ BUILTIN_DS_CONFIG = {
      },
      'disk_aliases': {'ephemeral0': RESOURCE_DISK_PATH},
      'dhclient_lease_file': LEASE_FILE,
diff --git a/debian/patches/azure-use-walinux-agent.patch b/debian/patches/azure-use-walinux-agent.patch
index 3f60dfd..b4ad76c 100644
--- a/debian/patches/azure-use-walinux-agent.patch
+++ b/debian/patches/azure-use-walinux-agent.patch
@@ -6,7 +6,7 @@ Forwarded: not-needed
 Author: Scott Moser <smoser@ubuntu.com>
 --- a/cloudinit/sources/DataSourceAzure.py
 +++ b/cloudinit/sources/DataSourceAzure.py
-@@ -204,7 +204,7 @@ if util.is_FreeBSD():
+@@ -209,7 +209,7 @@ if util.is_FreeBSD():
      PLATFORM_ENTROPY_SOURCE = None
  
  BUILTIN_DS_CONFIG = {
diff --git a/debian/patches/series b/debian/patches/series
index d37ae8a..5d6995e 100644
--- a/debian/patches/series
+++ b/debian/patches/series
@@ -4,3 +4,4 @@ stable-release-no-jsonschema-dep.patch
 openstack-no-network-config.patch
 azure-apply-network-config-false.patch
 ec2-classic-dont-reapply-networking.patch
+ubuntu-advantage-revert-tip.patch
diff --git a/debian/patches/ubuntu-advantage-revert-tip.patch b/debian/patches/ubuntu-advantage-revert-tip.patch
new file mode 100644
index 0000000..08bdc81
--- /dev/null
+++ b/debian/patches/ubuntu-advantage-revert-tip.patch
@@ -0,0 +1,735 @@
1Description: Revert upstream changes for ubuntu-advantage-tools v 19.1
2 ubuntu-advantage-tools v. 19.1 or later is required for the new
3 cloud-config module because the two command lines are incompatible.
4 Xenial can drop this patch once ubuntu-advantage-tools has been SRU'd >= 19.1
5Author: Chad Smith <chad.smith@canonical.com>
6Origin: backport
7Bug: https://bugs.launchpad.net/cloud-init/+bug/1828641
8Forwarded: not-needed
9Last-Update: 2019-05-10
10---
11This patch header follows DEP-3: http://dep.debian.net/deps/dep3/
12Index: cloud-init/cloudinit/config/cc_ubuntu_advantage.py
13===================================================================
14--- cloud-init.orig/cloudinit/config/cc_ubuntu_advantage.py
15+++ cloud-init/cloudinit/config/cc_ubuntu_advantage.py
16@@ -1,143 +1,150 @@
17+# Copyright (C) 2018 Canonical Ltd.
18+#
19 # This file is part of cloud-init. See LICENSE file for license information.
20
21-"""ubuntu_advantage: Configure Ubuntu Advantage support services"""
22+"""Ubuntu advantage: manage ubuntu-advantage offerings from Canonical."""
23
24+import sys
25 from textwrap import dedent
26
27-import six
28-
29+from cloudinit import log as logging
30 from cloudinit.config.schema import (
31 get_schema_doc, validate_cloudconfig_schema)
32-from cloudinit import log as logging
33 from cloudinit.settings import PER_INSTANCE
34+from cloudinit.subp import prepend_base_command
35 from cloudinit import util
36
37
38-UA_URL = 'https://ubuntu.com/advantage'
39-
40 distros = ['ubuntu']
41+frequency = PER_INSTANCE
42+
43+LOG = logging.getLogger(__name__)
44
45 schema = {
46 'id': 'cc_ubuntu_advantage',
47 'name': 'Ubuntu Advantage',
48- 'title': 'Configure Ubuntu Advantage support services',
49+ 'title': 'Install, configure and manage ubuntu-advantage offerings',
50 'description': dedent("""\
51- Attach machine to an existing Ubuntu Advantage support contract and
52- enable or disable support services such as Livepatch, ESM,
53- FIPS and FIPS Updates. When attaching a machine to Ubuntu Advantage,
54- one can also specify services to enable. When the 'enable'
55- list is present, any named service will be enabled and all absent
56- services will remain disabled.
57-
58- Note that when enabling FIPS or FIPS updates you will need to schedule
59- a reboot to ensure the machine is running the FIPS-compliant kernel.
60- See :ref:`Power State Change` for information on how to configure
61- cloud-init to perform this reboot.
62+ This module provides configuration options to setup ubuntu-advantage
63+ subscriptions.
64+
65+ .. note::
66+ Both ``commands`` value can be either a dictionary or a list. If
67+ the configuration provided is a dictionary, the keys are only used
68+ to order the execution of the commands and the dictionary is
69+ merged with any vendor-data ubuntu-advantage configuration
70+ provided. If a ``commands`` is provided as a list, any vendor-data
71+ ubuntu-advantage ``commands`` are ignored.
72+
73+ Ubuntu-advantage ``commands`` is a dictionary or list of
74+ ubuntu-advantage commands to run on the deployed machine.
75+ These commands can be used to enable or disable subscriptions to
76+ various ubuntu-advantage products. See 'man ubuntu-advantage' for more
77+ information on supported subcommands.
78+
79+ .. note::
80+ Each command item can be a string or list. If the item is a list,
81+ 'ubuntu-advantage' can be omitted and it will automatically be
82+ inserted as part of the command.
83 """),
84 'distros': distros,
85 'examples': [dedent("""\
86- # Attach the machine to a Ubuntu Advantage support contract with a
87- # UA contract token obtained from %s.
88- ubuntu_advantage:
89- token: <ua_contract_token>
90- """ % UA_URL), dedent("""\
91- # Attach the machine to an Ubuntu Advantage support contract enabling
92- # only fips and esm services. Services will only be enabled if
93- # the environment supports said service. Otherwise warnings will
94- # be logged for incompatible services specified.
95+ # Enable Extended Security Maintenance using your service auth token
96+ ubuntu-advantage:
97+ commands:
98+ 00: ubuntu-advantage enable-esm <token>
99+ """), dedent("""\
100+ # Enable livepatch by providing your livepatch token
101 ubuntu-advantage:
102- token: <ua_contract_token>
103- enable:
104- - fips
105- - esm
106+ commands:
107+ 00: ubuntu-advantage enable-livepatch <livepatch-token>
108+
109 """), dedent("""\
110- # Attach the machine to an Ubuntu Advantage support contract and enable
111- # the FIPS service. Perform a reboot once cloud-init has
112- # completed.
113- power_state:
114- mode: reboot
115+ # Convenience: the ubuntu-advantage command can be omitted when
116+ # specifying commands as a list and 'ubuntu-advantage' will
117+ # automatically be prepended.
118+ # The following commands are equivalent
119 ubuntu-advantage:
120- token: <ua_contract_token>
121- enable:
122- - fips
123- """)],
124+ commands:
125+ 00: ['enable-livepatch', 'my-token']
126+ 01: ['ubuntu-advantage', 'enable-livepatch', 'my-token']
127+ 02: ubuntu-advantage enable-livepatch my-token
128+ 03: 'ubuntu-advantage enable-livepatch my-token'
129+ """)],
130 'frequency': PER_INSTANCE,
131 'type': 'object',
132 'properties': {
133- 'ubuntu_advantage': {
134+ 'ubuntu-advantage': {
135 'type': 'object',
136 'properties': {
137- 'enable': {
138- 'type': 'array',
139- 'items': {'type': 'string'},
140- },
141- 'token': {
142- 'type': 'string',
143- 'description': (
144- 'A contract token obtained from %s.' % UA_URL)
145+ 'commands': {
146+ 'type': ['object', 'array'], # Array of strings or dict
147+ 'items': {
148+ 'oneOf': [
149+ {'type': 'array', 'items': {'type': 'string'}},
150+ {'type': 'string'}]
151+ },
152+ 'additionalItems': False, # Reject non-string & non-list
153+ 'minItems': 1,
154+ 'minProperties': 1,
155 }
156 },
157- 'required': ['token'],
158- 'additionalProperties': False
159+ 'additionalProperties': False, # Reject keys not in schema
160+ 'required': ['commands']
161 }
162 }
163 }
164
165+# TODO schema for 'assertions' and 'commands' are too permissive at the moment.
166+# Once python-jsonschema supports schema draft 6 add support for arbitrary
167+# object keys with 'patternProperties' constraint to validate string values.
168+
169 __doc__ = get_schema_doc(schema) # Supplement python help()
170
171-LOG = logging.getLogger(__name__)
172+UA_CMD = "ubuntu-advantage"
173
174
175-def configure_ua(token=None, enable=None):
176- """Call ua commandline client to attach or enable services."""
177- error = None
178- if not token:
179- error = ('ubuntu_advantage: token must be provided')
180- LOG.error(error)
181- raise RuntimeError(error)
182-
183- if enable is None:
184- enable = []
185- elif isinstance(enable, six.string_types):
186- LOG.warning('ubuntu_advantage: enable should be a list, not'
187- ' a string; treating as a single enable')
188- enable = [enable]
189- elif not isinstance(enable, list):
190- LOG.warning('ubuntu_advantage: enable should be a list, not'
191- ' a %s; skipping enabling services',
192- type(enable).__name__)
193- enable = []
194+def run_commands(commands):
195+ """Run the commands provided in ubuntu-advantage:commands config.
196
197- attach_cmd = ['ua', 'attach', token]
198- LOG.debug('Attaching to Ubuntu Advantage. %s', ' '.join(attach_cmd))
199- try:
200- util.subp(attach_cmd)
201- except util.ProcessExecutionError as e:
202- msg = 'Failure attaching Ubuntu Advantage:\n{error}'.format(
203- error=str(e))
204- util.logexc(LOG, msg)
205- raise RuntimeError(msg)
206- enable_errors = []
207- for service in enable:
208+ Commands are run individually. Any errors are collected and reported
209+ after attempting all commands.
210+
211+ @param commands: A list or dict containing commands to run. Keys of a
212+ dict will be used to order the commands provided as dict values.
213+ """
214+ if not commands:
215+ return
216+ LOG.debug('Running user-provided ubuntu-advantage commands')
217+ if isinstance(commands, dict):
218+ # Sort commands based on dictionary key
219+ commands = [v for _, v in sorted(commands.items())]
220+ elif not isinstance(commands, list):
221+ raise TypeError(
222+ 'commands parameter was not a list or dict: {commands}'.format(
223+ commands=commands))
224+
225+ fixed_ua_commands = prepend_base_command('ubuntu-advantage', commands)
226+
227+ cmd_failures = []
228+ for command in fixed_ua_commands:
229+ shell = isinstance(command, str)
230 try:
231- cmd = ['ua', 'enable', service]
232- util.subp(cmd, capture=True)
233+ util.subp(command, shell=shell, status_cb=sys.stderr.write)
234 except util.ProcessExecutionError as e:
235- enable_errors.append((service, e))
236- if enable_errors:
237- for service, error in enable_errors:
238- msg = 'Failure enabling "{service}":\n{error}'.format(
239- service=service, error=str(error))
240- util.logexc(LOG, msg)
241- raise RuntimeError(
242- 'Failure enabling Ubuntu Advantage service(s): {}'.format(
243- ', '.join('"{}"'.format(service)
244- for service, _ in enable_errors)))
245+ cmd_failures.append(str(e))
246+ if cmd_failures:
247+ msg = (
248+ 'Failures running ubuntu-advantage commands:\n'
249+ '{cmd_failures}'.format(
250+ cmd_failures=cmd_failures))
251+ util.logexc(LOG, msg)
252+ raise RuntimeError(msg)
253
254
255 def maybe_install_ua_tools(cloud):
256 """Install ubuntu-advantage-tools if not present."""
257- if util.which('ua'):
258+ if util.which('ubuntu-advantage'):
259 return
260 try:
261 cloud.distro.update_package_sources()
262@@ -152,28 +159,14 @@ def maybe_install_ua_tools(cloud):
263
264
265 def handle(name, cfg, cloud, log, args):
266- ua_section = None
267- if 'ubuntu-advantage' in cfg:
268- LOG.warning('Deprecated configuration key "ubuntu-advantage" provided.'
269- ' Expected underscore delimited "ubuntu_advantage"; will'
270- ' attempt to continue.')
271- ua_section = cfg['ubuntu-advantage']
272- if 'ubuntu_advantage' in cfg:
273- ua_section = cfg['ubuntu_advantage']
274- if ua_section is None:
275- LOG.debug("Skipping module named %s,"
276- " no 'ubuntu_advantage' configuration found", name)
277+ cfgin = cfg.get('ubuntu-advantage')
278+ if cfgin is None:
279+ LOG.debug(("Skipping module named %s,"
280+ " no 'ubuntu-advantage' key in configuration"), name)
281 return
282- validate_cloudconfig_schema(cfg, schema)
283- if 'commands' in ua_section:
284- msg = (
285- 'Deprecated configuration "ubuntu-advantage: commands" provided.'
286- ' Expected "token"')
287- LOG.error(msg)
288- raise RuntimeError(msg)
289
290+ validate_cloudconfig_schema(cfg, schema)
291 maybe_install_ua_tools(cloud)
292- configure_ua(token=ua_section.get('token'),
293- enable=ua_section.get('enable'))
294+ run_commands(cfgin.get('commands', []))
295
296 # vi: ts=4 expandtab
297Index: cloud-init/cloudinit/config/tests/test_ubuntu_advantage.py
298===================================================================
299--- cloud-init.orig/cloudinit/config/tests/test_ubuntu_advantage.py
300+++ cloud-init/cloudinit/config/tests/test_ubuntu_advantage.py
301@@ -1,7 +1,10 @@
302 # This file is part of cloud-init. See LICENSE file for license information.
303
304+import re
305+from six import StringIO
306+
307 from cloudinit.config.cc_ubuntu_advantage import (
308- configure_ua, handle, maybe_install_ua_tools, schema)
309+ handle, maybe_install_ua_tools, run_commands, schema)
310 from cloudinit.config.schema import validate_cloudconfig_schema
311 from cloudinit import util
312 from cloudinit.tests.helpers import (
313@@ -17,120 +20,90 @@ class FakeCloud(object):
314 self.distro = distro
315
316
317-class TestConfigureUA(CiTestCase):
318+class TestRunCommands(CiTestCase):
319
320 with_logs = True
321 allowed_subp = [CiTestCase.SUBP_SHELL_TRUE]
322
323 def setUp(self):
324- super(TestConfigureUA, self).setUp()
325+ super(TestRunCommands, self).setUp()
326 self.tmp = self.tmp_dir()
327
328 @mock.patch('%s.util.subp' % MPATH)
329- def test_configure_ua_attach_error(self, m_subp):
330- """Errors from ua attach command are raised."""
331- m_subp.side_effect = util.ProcessExecutionError(
332- 'Invalid token SomeToken')
333- with self.assertRaises(RuntimeError) as context_manager:
334- configure_ua(token='SomeToken')
335+ def test_run_commands_on_empty_list(self, m_subp):
336+ """When provided with an empty list, run_commands does nothing."""
337+ run_commands([])
338+ self.assertEqual('', self.logs.getvalue())
339+ m_subp.assert_not_called()
340+
341+ def test_run_commands_on_non_list_or_dict(self):
342+ """When provided an invalid type, run_commands raises an error."""
343+ with self.assertRaises(TypeError) as context_manager:
344+ run_commands(commands="I'm Not Valid")
345 self.assertEqual(
346- 'Failure attaching Ubuntu Advantage:\nUnexpected error while'
347- ' running command.\nCommand: -\nExit code: -\nReason: -\n'
348- 'Stdout: Invalid token SomeToken\nStderr: -',
349+ "commands parameter was not a list or dict: I'm Not Valid",
350 str(context_manager.exception))
351
352- @mock.patch('%s.util.subp' % MPATH)
353- def test_configure_ua_attach_with_token(self, m_subp):
354- """When token is provided, attach the machine to ua using the token."""
355- configure_ua(token='SomeToken')
356- m_subp.assert_called_once_with(['ua', 'attach', 'SomeToken'])
357- self.assertEqual(
358- 'DEBUG: Attaching to Ubuntu Advantage. ua attach SomeToken\n',
359- self.logs.getvalue())
360-
361- @mock.patch('%s.util.subp' % MPATH)
362- def test_configure_ua_attach_on_service_error(self, m_subp):
363- """all services should be enabled and then any failures raised"""
364-
365- def fake_subp(cmd, capture=None):
366- fail_cmds = [['ua', 'enable', svc] for svc in ['esm', 'cc']]
367- if cmd in fail_cmds and capture:
368- svc = cmd[-1]
369- raise util.ProcessExecutionError(
370- 'Invalid {} credentials'.format(svc.upper()))
371+ def test_run_command_logs_commands_and_exit_codes_to_stderr(self):
372+ """All exit codes are logged to stderr."""
373+ outfile = self.tmp_path('output.log', dir=self.tmp)
374+
375+ cmd1 = 'echo "HI" >> %s' % outfile
376+ cmd2 = 'bogus command'
377+ cmd3 = 'echo "MOM" >> %s' % outfile
378+ commands = [cmd1, cmd2, cmd3]
379+
380+ mock_path = '%s.sys.stderr' % MPATH
381+ with mock.patch(mock_path, new_callable=StringIO) as m_stderr:
382+ with self.assertRaises(RuntimeError) as context_manager:
383+ run_commands(commands=commands)
384+
385+ self.assertIsNotNone(
386+ re.search(r'bogus: (command )?not found',
387+ str(context_manager.exception)),
388+ msg='Expected bogus command not found')
389+ expected_stderr_log = '\n'.join([
390+ 'Begin run command: {cmd}'.format(cmd=cmd1),
391+ 'End run command: exit(0)',
392+ 'Begin run command: {cmd}'.format(cmd=cmd2),
393+ 'ERROR: End run command: exit(127)',
394+ 'Begin run command: {cmd}'.format(cmd=cmd3),
395+ 'End run command: exit(0)\n'])
396+ self.assertEqual(expected_stderr_log, m_stderr.getvalue())
397+
398+ def test_run_command_as_lists(self):
399+ """When commands are specified as a list, run them in order."""
400+ outfile = self.tmp_path('output.log', dir=self.tmp)
401+
402+ cmd1 = 'echo "HI" >> %s' % outfile
403+ cmd2 = 'echo "MOM" >> %s' % outfile
404+ commands = [cmd1, cmd2]
405+ with mock.patch('%s.sys.stderr' % MPATH, new_callable=StringIO):
406+ run_commands(commands=commands)
407
408- m_subp.side_effect = fake_subp
409-
410- with self.assertRaises(RuntimeError) as context_manager:
411- configure_ua(token='SomeToken', enable=['esm', 'cc', 'fips'])
412- self.assertEqual(
413- m_subp.call_args_list,
414- [mock.call(['ua', 'attach', 'SomeToken']),
415- mock.call(['ua', 'enable', 'esm'], capture=True),
416- mock.call(['ua', 'enable', 'cc'], capture=True),
417- mock.call(['ua', 'enable', 'fips'], capture=True)])
418 self.assertIn(
419- 'WARNING: Failure enabling "esm":\nUnexpected error'
420- ' while running command.\nCommand: -\nExit code: -\nReason: -\n'
421- 'Stdout: Invalid ESM credentials\nStderr: -\n',
422+ 'DEBUG: Running user-provided ubuntu-advantage commands',
423 self.logs.getvalue())
424+ self.assertEqual('HI\nMOM\n', util.load_file(outfile))
425 self.assertIn(
426- 'WARNING: Failure enabling "cc":\nUnexpected error'
427- ' while running command.\nCommand: -\nExit code: -\nReason: -\n'
428- 'Stdout: Invalid CC credentials\nStderr: -\n',
429- self.logs.getvalue())
430- self.assertEqual(
431- 'Failure enabling Ubuntu Advantage service(s): "esm", "cc"',
432- str(context_manager.exception))
433-
434- @mock.patch('%s.util.subp' % MPATH)
435- def test_configure_ua_attach_with_empty_services(self, m_subp):
436- """When services is an empty list, do not auto-enable attach."""
437- configure_ua(token='SomeToken', enable=[])
438- m_subp.assert_called_once_with(['ua', 'attach', 'SomeToken'])
439- self.assertEqual(
440- 'DEBUG: Attaching to Ubuntu Advantage. ua attach SomeToken\n',
441- self.logs.getvalue())
442-
443- @mock.patch('%s.util.subp' % MPATH)
444- def test_configure_ua_attach_with_specific_services(self, m_subp):
445- """When services a list, only enable specific services."""
446- configure_ua(token='SomeToken', enable=['fips'])
447- self.assertEqual(
448- m_subp.call_args_list,
449- [mock.call(['ua', 'attach', 'SomeToken']),
450- mock.call(['ua', 'enable', 'fips'], capture=True)])
451- self.assertEqual(
452- 'DEBUG: Attaching to Ubuntu Advantage. ua attach SomeToken\n',
453- self.logs.getvalue())
454-
455- @mock.patch('%s.maybe_install_ua_tools' % MPATH, mock.MagicMock())
456- @mock.patch('%s.util.subp' % MPATH)
457- def test_configure_ua_attach_with_string_services(self, m_subp):
458- """When services a string, treat as singleton list and warn"""
459- configure_ua(token='SomeToken', enable='fips')
460- self.assertEqual(
461- m_subp.call_args_list,
462- [mock.call(['ua', 'attach', 'SomeToken']),
463- mock.call(['ua', 'enable', 'fips'], capture=True)])
464- self.assertEqual(
465- 'WARNING: ubuntu_advantage: enable should be a list, not a'
466- ' string; treating as a single enable\n'
467- 'DEBUG: Attaching to Ubuntu Advantage. ua attach SomeToken\n',
468+ 'WARNING: Non-ubuntu-advantage commands in ubuntu-advantage'
469+ ' config:',
470 self.logs.getvalue())
471
472- @mock.patch('%s.util.subp' % MPATH)
473- def test_configure_ua_attach_with_weird_services(self, m_subp):
474- """When services not string or list, warn but still attach"""
475- configure_ua(token='SomeToken', enable={'deffo': 'wont work'})
476- self.assertEqual(
477- m_subp.call_args_list,
478- [mock.call(['ua', 'attach', 'SomeToken'])])
479- self.assertEqual(
480- 'WARNING: ubuntu_advantage: enable should be a list, not a'
481- ' dict; skipping enabling services\n'
482- 'DEBUG: Attaching to Ubuntu Advantage. ua attach SomeToken\n',
483- self.logs.getvalue())
484+ def test_run_command_dict_sorted_as_command_script(self):
485+ """When commands are a dict, sort them and run."""
486+ outfile = self.tmp_path('output.log', dir=self.tmp)
487+ cmd1 = 'echo "HI" >> %s' % outfile
488+ cmd2 = 'echo "MOM" >> %s' % outfile
489+ commands = {'02': cmd1, '01': cmd2}
490+ with mock.patch('%s.sys.stderr' % MPATH, new_callable=StringIO):
491+ run_commands(commands=commands)
492+
493+ expected_messages = [
494+ 'DEBUG: Running user-provided ubuntu-advantage commands']
495+ for message in expected_messages:
496+ self.assertIn(message, self.logs.getvalue())
497+ self.assertEqual('MOM\nHI\n', util.load_file(outfile))
498
499
500 @skipUnlessJsonSchema()
501@@ -139,50 +112,90 @@ class TestSchema(CiTestCase, SchemaTestC
502 with_logs = True
503 schema = schema
504
505- @mock.patch('%s.maybe_install_ua_tools' % MPATH)
506- @mock.patch('%s.configure_ua' % MPATH)
507- def test_schema_warns_on_ubuntu_advantage_not_dict(self, _cfg, _):
508- """If ubuntu_advantage configuration is not a dict, emit a warning."""
509- validate_cloudconfig_schema({'ubuntu_advantage': 'wrong type'}, schema)
510+ def test_schema_warns_on_ubuntu_advantage_not_as_dict(self):
511+ """If ubuntu-advantage configuration is not a dict, emit a warning."""
512+ validate_cloudconfig_schema({'ubuntu-advantage': 'wrong type'}, schema)
513 self.assertEqual(
514- "WARNING: Invalid config:\nubuntu_advantage: 'wrong type' is not"
515+ "WARNING: Invalid config:\nubuntu-advantage: 'wrong type' is not"
516 " of type 'object'\n",
517 self.logs.getvalue())
518
519- @mock.patch('%s.maybe_install_ua_tools' % MPATH)
520- @mock.patch('%s.configure_ua' % MPATH)
521- def test_schema_disallows_unknown_keys(self, _cfg, _):
522- """Unknown keys in ubuntu_advantage configuration emit warnings."""
523+ @mock.patch('%s.run_commands' % MPATH)
524+ def test_schema_disallows_unknown_keys(self, _):
525+ """Unknown keys in ubuntu-advantage configuration emit warnings."""
526 validate_cloudconfig_schema(
527- {'ubuntu_advantage': {'token': 'winner', 'invalid-key': ''}},
528+ {'ubuntu-advantage': {'commands': ['ls'], 'invalid-key': ''}},
529 schema)
530 self.assertIn(
531- 'WARNING: Invalid config:\nubuntu_advantage: Additional properties'
532+ 'WARNING: Invalid config:\nubuntu-advantage: Additional properties'
533 " are not allowed ('invalid-key' was unexpected)",
534 self.logs.getvalue())
535
536- @mock.patch('%s.maybe_install_ua_tools' % MPATH)
537- @mock.patch('%s.configure_ua' % MPATH)
538- def test_warn_schema_requires_token(self, _cfg, _):
539- """Warn if ubuntu_advantage configuration lacks token."""
540+ def test_warn_schema_requires_commands(self):
541+ """Warn when ubuntu-advantage configuration lacks commands."""
542 validate_cloudconfig_schema(
543- {'ubuntu_advantage': {'enable': ['esm']}}, schema)
544+ {'ubuntu-advantage': {}}, schema)
545 self.assertEqual(
546- "WARNING: Invalid config:\nubuntu_advantage:"
547- " 'token' is a required property\n", self.logs.getvalue())
548+ "WARNING: Invalid config:\nubuntu-advantage: 'commands' is a"
549+ " required property\n",
550+ self.logs.getvalue())
551
552- @mock.patch('%s.maybe_install_ua_tools' % MPATH)
553- @mock.patch('%s.configure_ua' % MPATH)
554- def test_warn_schema_services_is_not_list_or_dict(self, _cfg, _):
555- """Warn when ubuntu_advantage:enable config is not a list."""
556+ @mock.patch('%s.run_commands' % MPATH)
557+ def test_warn_schema_commands_is_not_list_or_dict(self, _):
558+ """Warn when ubuntu-advantage:commands config is not a list or dict."""
559 validate_cloudconfig_schema(
560- {'ubuntu_advantage': {'enable': 'needslist'}}, schema)
561+ {'ubuntu-advantage': {'commands': 'broken'}}, schema)
562 self.assertEqual(
563- "WARNING: Invalid config:\nubuntu_advantage: 'token' is a"
564- " required property\nubuntu_advantage.enable: 'needslist'"
565- " is not of type 'array'\n",
566+ "WARNING: Invalid config:\nubuntu-advantage.commands: 'broken' is"
567+ " not of type 'object', 'array'\n",
568 self.logs.getvalue())
569
570+ @mock.patch('%s.run_commands' % MPATH)
571+ def test_warn_schema_when_commands_is_empty(self, _):
572+ """Emit warnings when ubuntu-advantage:commands is empty."""
573+ validate_cloudconfig_schema(
574+ {'ubuntu-advantage': {'commands': []}}, schema)
575+ validate_cloudconfig_schema(
576+ {'ubuntu-advantage': {'commands': {}}}, schema)
577+ self.assertEqual(
578+ "WARNING: Invalid config:\nubuntu-advantage.commands: [] is too"
579+ " short\nWARNING: Invalid config:\nubuntu-advantage.commands: {}"
580+ " does not have enough properties\n",
581+ self.logs.getvalue())
582+
583+ @mock.patch('%s.run_commands' % MPATH)
584+ def test_schema_when_commands_are_list_or_dict(self, _):
585+ """No warnings when ubuntu-advantage:commands are a list or dict."""
586+ validate_cloudconfig_schema(
587+ {'ubuntu-advantage': {'commands': ['valid']}}, schema)
588+ validate_cloudconfig_schema(
589+ {'ubuntu-advantage': {'commands': {'01': 'also valid'}}}, schema)
590+ self.assertEqual('', self.logs.getvalue())
591+
592+ def test_duplicates_are_fine_array_array(self):
593+ """Duplicated commands array/array entries are allowed."""
594+ self.assertSchemaValid(
595+ {'commands': [["echo", "bye"], ["echo" "bye"]]},
596+ "command entries can be duplicate.")
597+
598+ def test_duplicates_are_fine_array_string(self):
599+ """Duplicated commands array/string entries are allowed."""
600+ self.assertSchemaValid(
601+ {'commands': ["echo bye", "echo bye"]},
602+ "command entries can be duplicate.")
603+
604+ def test_duplicates_are_fine_dict_array(self):
605+ """Duplicated commands dict/array entries are allowed."""
606+ self.assertSchemaValid(
607+ {'commands': {'00': ["echo", "bye"], '01': ["echo", "bye"]}},
608+ "command entries can be duplicate.")
609+
610+ def test_duplicates_are_fine_dict_string(self):
611+ """Duplicated commands dict/string entries are allowed."""
612+ self.assertSchemaValid(
613+ {'commands': {'00': "echo bye", '01': "echo bye"}},
614+ "command entries can be duplicate.")
615+
616
617 class TestHandle(CiTestCase):
618
619@@ -192,89 +205,41 @@ class TestHandle(CiTestCase):
620 super(TestHandle, self).setUp()
621 self.tmp = self.tmp_dir()
622
623+ @mock.patch('%s.run_commands' % MPATH)
624 @mock.patch('%s.validate_cloudconfig_schema' % MPATH)
625- def test_handle_no_config(self, m_schema):
626+ def test_handle_no_config(self, m_schema, m_run):
627 """When no ua-related configuration is provided, nothing happens."""
628 cfg = {}
629 handle('ua-test', cfg=cfg, cloud=None, log=self.logger, args=None)
630 self.assertIn(
631- "DEBUG: Skipping module named ua-test, no 'ubuntu_advantage'"
632- ' configuration found',
633+ "DEBUG: Skipping module named ua-test, no 'ubuntu-advantage' key"
634+ " in config",
635 self.logs.getvalue())
636 m_schema.assert_not_called()
637+ m_run.assert_not_called()
638
639- @mock.patch('%s.configure_ua' % MPATH)
640 @mock.patch('%s.maybe_install_ua_tools' % MPATH)
641- def test_handle_tries_to_install_ubuntu_advantage_tools(
642- self, m_install, m_cfg):
643+ def test_handle_tries_to_install_ubuntu_advantage_tools(self, m_install):
644 """If ubuntu_advantage is provided, try installing ua-tools package."""
645- cfg = {'ubuntu_advantage': {'token': 'valid'}}
646+ cfg = {'ubuntu-advantage': {}}
647 mycloud = FakeCloud(None)
648 handle('nomatter', cfg=cfg, cloud=mycloud, log=self.logger, args=None)
649 m_install.assert_called_once_with(mycloud)
650
651- @mock.patch('%s.configure_ua' % MPATH)
652 @mock.patch('%s.maybe_install_ua_tools' % MPATH)
653- def test_handle_passes_credentials_and_services_to_configure_ua(
654- self, m_install, m_configure_ua):
655- """All ubuntu_advantage config keys are passed to configure_ua."""
656- cfg = {'ubuntu_advantage': {'token': 'token', 'enable': ['esm']}}
657- handle('nomatter', cfg=cfg, cloud=None, log=self.logger, args=None)
658- m_configure_ua.assert_called_once_with(
659- token='token', enable=['esm'])
660-
661- @mock.patch('%s.maybe_install_ua_tools' % MPATH, mock.MagicMock())
662- @mock.patch('%s.configure_ua' % MPATH)
663- def test_handle_warns_on_deprecated_ubuntu_advantage_key_w_config(
664- self, m_configure_ua):
665- """Warning when ubuntu-advantage key is present with new config"""
666- cfg = {'ubuntu-advantage': {'token': 'token', 'enable': ['esm']}}
667- handle('nomatter', cfg=cfg, cloud=None, log=self.logger, args=None)
668- self.assertEqual(
669- 'WARNING: Deprecated configuration key "ubuntu-advantage"'
670- ' provided. Expected underscore delimited "ubuntu_advantage";'
671- ' will attempt to continue.',
672- self.logs.getvalue().splitlines()[0])
673- m_configure_ua.assert_called_once_with(
674- token='token', enable=['esm'])
675-
676- def test_handle_error_on_deprecated_commands_key_dashed(self):
677- """Error when commands is present in ubuntu-advantage key."""
678- cfg = {'ubuntu-advantage': {'commands': 'nogo'}}
679- with self.assertRaises(RuntimeError) as context_manager:
680- handle('nomatter', cfg=cfg, cloud=None, log=self.logger, args=None)
681- self.assertEqual(
682- 'Deprecated configuration "ubuntu-advantage: commands" provided.'
683- ' Expected "token"',
684- str(context_manager.exception))
685-
686- def test_handle_error_on_deprecated_commands_key_underscored(self):
687- """Error when commands is present in ubuntu_advantage key."""
688- cfg = {'ubuntu_advantage': {'commands': 'nogo'}}
689- with self.assertRaises(RuntimeError) as context_manager:
690- handle('nomatter', cfg=cfg, cloud=None, log=self.logger, args=None)
691- self.assertEqual(
692- 'Deprecated configuration "ubuntu-advantage: commands" provided.'
693- ' Expected "token"',
694- str(context_manager.exception))
695+ def test_handle_runs_commands_provided(self, m_install):
696+ """When commands are specified as a list, run them."""
697+ outfile = self.tmp_path('output.log', dir=self.tmp)
698
699- @mock.patch('%s.maybe_install_ua_tools' % MPATH, mock.MagicMock())
700- @mock.patch('%s.configure_ua' % MPATH)
701- def test_handle_prefers_new_style_config(
702- self, m_configure_ua):
703- """ubuntu_advantage should be preferred over ubuntu-advantage"""
704 cfg = {
705- 'ubuntu-advantage': {'token': 'nope', 'enable': ['wrong']},
706- 'ubuntu_advantage': {'token': 'token', 'enable': ['esm']},
707- }
708- handle('nomatter', cfg=cfg, cloud=None, log=self.logger, args=None)
709- self.assertEqual(
710- 'WARNING: Deprecated configuration key "ubuntu-advantage"'
711- ' provided. Expected underscore delimited "ubuntu_advantage";'
712- ' will attempt to continue.',
713- self.logs.getvalue().splitlines()[0])
714- m_configure_ua.assert_called_once_with(
715- token='token', enable=['esm'])
716+ 'ubuntu-advantage': {'commands': ['echo "HI" >> %s' % outfile,
717+ 'echo "MOM" >> %s' % outfile]}}
718+ mock_path = '%s.sys.stderr' % MPATH
719+ with self.allow_subp([CiTestCase.SUBP_SHELL_TRUE]):
720+ with mock.patch(mock_path, new_callable=StringIO):
721+ handle('nomatter', cfg=cfg, cloud=None, log=self.logger,
722+ args=None)
723+ self.assertEqual('HI\nMOM\n', util.load_file(outfile))
724
725
726 class TestMaybeInstallUATools(CiTestCase):
727@@ -288,7 +253,7 @@ class TestMaybeInstallUATools(CiTestCase
728 @mock.patch('%s.util.which' % MPATH)
729 def test_maybe_install_ua_tools_noop_when_ua_tools_present(self, m_which):
730 """Do nothing if ubuntu-advantage-tools already exists."""
731- m_which.return_value = '/usr/bin/ua' # already installed
732+ m_which.return_value = '/usr/bin/ubuntu-advantage' # already installed
733 distro = mock.MagicMock()
734 distro.update_package_sources.side_effect = RuntimeError(
735 'Some apt error')
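
The TestMaybeInstallUATools hunk directly above exercises the other helper the restored handle() relies on: a which-guarded package install. The sketch below is only an approximation of that behaviour; the package name and error handling are inferred from the tests, not quoted from the module:

    import logging
    from cloudinit import util

    LOG = logging.getLogger(__name__)

    def maybe_install_ua_tools_sketch(cloud):
        # No-op when the ubuntu-advantage binary is already on PATH, as
        # test_maybe_install_ua_tools_noop_when_ua_tools_present expects.
        if util.which('ubuntu-advantage'):
            return
        try:
            cloud.distro.update_package_sources()
            cloud.distro.install_packages(['ubuntu-advantage-tools'])
        except Exception:
            LOG.exception('Failed to install ubuntu-advantage-tools')
            raise
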
diff --git a/doc/rtd/topics/datasources/nocloud.rst b/doc/rtd/topics/datasources/nocloud.rst
index 08578e8..1c5cf96 100644
--- a/doc/rtd/topics/datasources/nocloud.rst
+++ b/doc/rtd/topics/datasources/nocloud.rst
@@ -9,7 +9,7 @@ network at all).
99
10You can provide meta-data and user-data to a local vm boot via files on a10You can provide meta-data and user-data to a local vm boot via files on a
11`vfat`_ or `iso9660`_ filesystem. The filesystem volume label must be11`vfat`_ or `iso9660`_ filesystem. The filesystem volume label must be
12``cidata``.12``cidata`` or ``CIDATA``.
1313
14Alternatively, you can provide meta-data via kernel command line or SMBIOS14Alternatively, you can provide meta-data via kernel command line or SMBIOS
15"serial number" option. The data must be passed in the form of a string:15"serial number" option. The data must be passed in the form of a string:
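
The documentation hunk above records the user-visible side of the NoCloud change: an uppercase CIDATA volume label is now accepted as well as the lowercase cidata. A small example of building such a seed image, assuming genisoimage is installed and that user-data and meta-data files already exist in the working directory:

    import subprocess

    # Build a NoCloud seed ISO whose volume label uses the newly accepted
    # uppercase spelling; '-volid cidata' works the same way.
    subprocess.check_call([
        'genisoimage', '-output', 'seed.iso',
        '-volid', 'CIDATA', '-joliet', '-rock',
        'user-data', 'meta-data'])
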
diff --git a/packages/redhat/cloud-init.spec.in b/packages/redhat/cloud-init.spec.in
index 6b2022b..057a578 100644
--- a/packages/redhat/cloud-init.spec.in
+++ b/packages/redhat/cloud-init.spec.in
@@ -205,7 +205,9 @@ fi
205%dir %{_sysconfdir}/cloud/templates205%dir %{_sysconfdir}/cloud/templates
206%config(noreplace) %{_sysconfdir}/cloud/templates/*206%config(noreplace) %{_sysconfdir}/cloud/templates/*
207%config(noreplace) %{_sysconfdir}/rsyslog.d/21-cloudinit.conf207%config(noreplace) %{_sysconfdir}/rsyslog.d/21-cloudinit.conf
208%{_sysconfdir}/bash_completion.d/cloud-init208
209# Bash completion script
210%{_datadir}/bash-completion/completions/cloud-init
209211
210%{_libexecdir}/%{name}212%{_libexecdir}/%{name}
211%dir %{_sharedstatedir}/cloud213%dir %{_sharedstatedir}/cloud
diff --git a/packages/suse/cloud-init.spec.in b/packages/suse/cloud-init.spec.in
index 26894b3..004b875 100644
--- a/packages/suse/cloud-init.spec.in
+++ b/packages/suse/cloud-init.spec.in
@@ -120,7 +120,9 @@ version_pys=$(cd "%{buildroot}" && find . -name version.py -type f)
120%config(noreplace) %{_sysconfdir}/cloud/cloud.cfg.d/README120%config(noreplace) %{_sysconfdir}/cloud/cloud.cfg.d/README
121%dir %{_sysconfdir}/cloud/templates121%dir %{_sysconfdir}/cloud/templates
122%config(noreplace) %{_sysconfdir}/cloud/templates/*122%config(noreplace) %{_sysconfdir}/cloud/templates/*
123%{_sysconfdir}/bash_completion.d/cloud-init123
124# Bash completion script
125%{_datadir}/bash-completion/completions/cloud-init
124126
125%{_sysconfdir}/dhcp/dhclient-exit-hooks.d/hook-dhclient127%{_sysconfdir}/dhcp/dhclient-exit-hooks.d/hook-dhclient
126%{_sysconfdir}/NetworkManager/dispatcher.d/hook-network-manager128%{_sysconfdir}/NetworkManager/dispatcher.d/hook-network-manager
diff --git a/setup.py b/setup.py
index 186e215..fcaf26f 100755
--- a/setup.py
+++ b/setup.py
@@ -245,13 +245,14 @@ if not in_virtualenv():
245 INITSYS_ROOTS[k] = "/" + INITSYS_ROOTS[k]245 INITSYS_ROOTS[k] = "/" + INITSYS_ROOTS[k]
246246
247data_files = [247data_files = [
248 (ETC + '/bash_completion.d', ['bash_completion/cloud-init']),
249 (ETC + '/cloud', [render_tmpl("config/cloud.cfg.tmpl")]),248 (ETC + '/cloud', [render_tmpl("config/cloud.cfg.tmpl")]),
250 (ETC + '/cloud/cloud.cfg.d', glob('config/cloud.cfg.d/*')),249 (ETC + '/cloud/cloud.cfg.d', glob('config/cloud.cfg.d/*')),
251 (ETC + '/cloud/templates', glob('templates/*')),250 (ETC + '/cloud/templates', glob('templates/*')),
252 (USR_LIB_EXEC + '/cloud-init', ['tools/ds-identify',251 (USR_LIB_EXEC + '/cloud-init', ['tools/ds-identify',
253 'tools/uncloud-init',252 'tools/uncloud-init',
254 'tools/write-ssh-key-fingerprints']),253 'tools/write-ssh-key-fingerprints']),
254 (USR + '/share/bash-completion/completions',
255 ['bash_completion/cloud-init']),
255 (USR + '/share/doc/cloud-init', [f for f in glob('doc/*') if is_f(f)]),256 (USR + '/share/doc/cloud-init', [f for f in glob('doc/*') if is_f(f)]),
256 (USR + '/share/doc/cloud-init/examples',257 (USR + '/share/doc/cloud-init/examples',
257 [f for f in glob('doc/examples/*') if is_f(f)]),258 [f for f in glob('doc/examples/*') if is_f(f)]),
diff --git a/tests/cloud_tests/releases.yaml b/tests/cloud_tests/releases.yaml
index ec5da72..924ad95 100644
--- a/tests/cloud_tests/releases.yaml
+++ b/tests/cloud_tests/releases.yaml
@@ -129,6 +129,22 @@ features:
129129
130releases:130releases:
131 # UBUNTU =================================================================131 # UBUNTU =================================================================
132 eoan:
133 # EOL: Jul 2020
134 default:
135 enabled: true
136 release: eoan
137 version: 19.10
138 os: ubuntu
139 feature_groups:
140 - base
141 - debian_base
142 - ubuntu_specific
143 lxd:
144 sstreams_server: https://cloud-images.ubuntu.com/daily
145 alias: eoan
146 setup_overrides: null
147 override_templates: false
132 disco:148 disco:
133 # EOL: Jan 2020149 # EOL: Jan 2020
134 default:150 default:
diff --git a/tests/unittests/test_datasource/test_azure.py b/tests/unittests/test_datasource/test_azure.py
index 53c56cd..427ab7e 100644
--- a/tests/unittests/test_datasource/test_azure.py
+++ b/tests/unittests/test_datasource/test_azure.py
@@ -163,7 +163,8 @@ class TestGetMetadataFromIMDS(HttprettyTestCase):
163163
164 m_readurl.assert_called_with(164 m_readurl.assert_called_with(
165 self.network_md_url, exception_cb=mock.ANY,165 self.network_md_url, exception_cb=mock.ANY,
166 headers={'Metadata': 'true'}, retries=2, timeout=1)166 headers={'Metadata': 'true'}, retries=2,
167 timeout=dsaz.IMDS_TIMEOUT_IN_SECONDS)
167168
168 @mock.patch('cloudinit.url_helper.time.sleep')169 @mock.patch('cloudinit.url_helper.time.sleep')
169 @mock.patch(MOCKPATH + 'net.is_up')170 @mock.patch(MOCKPATH + 'net.is_up')
@@ -1375,12 +1376,15 @@ class TestCanDevBeReformatted(CiTestCase):
1375 self._domock(p + "util.mount_cb", 'm_mount_cb')1376 self._domock(p + "util.mount_cb", 'm_mount_cb')
1376 self._domock(p + "os.path.realpath", 'm_realpath')1377 self._domock(p + "os.path.realpath", 'm_realpath')
1377 self._domock(p + "os.path.exists", 'm_exists')1378 self._domock(p + "os.path.exists", 'm_exists')
1379 self._domock(p + "util.SeLinuxGuard", 'm_selguard')
13781380
1379 self.m_exists.side_effect = lambda p: p in bypath1381 self.m_exists.side_effect = lambda p: p in bypath
1380 self.m_realpath.side_effect = realpath1382 self.m_realpath.side_effect = realpath
1381 self.m_has_ntfs_filesystem.side_effect = has_ntfs_fs1383 self.m_has_ntfs_filesystem.side_effect = has_ntfs_fs
1382 self.m_mount_cb.side_effect = mount_cb1384 self.m_mount_cb.side_effect = mount_cb
1383 self.m_partitions_on_device.side_effect = partitions_on_device1385 self.m_partitions_on_device.side_effect = partitions_on_device
1386 self.m_selguard.__enter__ = mock.Mock(return_value=False)
1387 self.m_selguard.__exit__ = mock.Mock()
13841388
1385 def test_three_partitions_is_false(self):1389 def test_three_partitions_is_false(self):
1386 """A disk with 3 partitions can not be formatted."""1390 """A disk with 3 partitions can not be formatted."""
@@ -1788,7 +1792,8 @@ class TestAzureDataSourcePreprovisioning(CiTestCase):
1788 headers={'Metadata': 'true',1792 headers={'Metadata': 'true',
1789 'User-Agent':1793 'User-Agent':
1790 'Cloud-Init/%s' % vs()1794 'Cloud-Init/%s' % vs()
1791 }, method='GET', timeout=1,1795 }, method='GET',
1796 timeout=dsaz.IMDS_TIMEOUT_IN_SECONDS,
1792 url=full_url)])1797 url=full_url)])
1793 self.assertEqual(m_dhcp.call_count, 2)1798 self.assertEqual(m_dhcp.call_count, 2)
1794 m_net.assert_any_call(1799 m_net.assert_any_call(
@@ -1825,7 +1830,9 @@ class TestAzureDataSourcePreprovisioning(CiTestCase):
1825 headers={'Metadata': 'true',1830 headers={'Metadata': 'true',
1826 'User-Agent':1831 'User-Agent':
1827 'Cloud-Init/%s' % vs()},1832 'Cloud-Init/%s' % vs()},
1828 method='GET', timeout=1, url=full_url)])1833 method='GET',
1834 timeout=dsaz.IMDS_TIMEOUT_IN_SECONDS,
1835 url=full_url)])
1829 self.assertEqual(m_dhcp.call_count, 2)1836 self.assertEqual(m_dhcp.call_count, 2)
1830 m_net.assert_any_call(1837 m_net.assert_any_call(
1831 broadcast='192.168.2.255', interface='eth9', ip='192.168.2.9',1838 broadcast='192.168.2.255', interface='eth9', ip='192.168.2.9',
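
Both hunks above swap the hard-coded timeout=1 for dsaz.IMDS_TIMEOUT_IN_SECONDS, a constant presumably introduced by the DataSourceAzure.py change earlier in this diff. A rough sketch of what the reworked assertions now expect the datasource to do; the constant's value here is a placeholder, not taken from the source:

    from cloudinit.url_helper import readurl

    IMDS_TIMEOUT_IN_SECONDS = 2  # placeholder value; the real one lives in DataSourceAzure.py

    def fetch_imds_sketch(url, exception_cb=None):
        # Every IMDS request now shares one named timeout instead of a
        # scattered literal, which is what the updated tests check.
        return readurl(url, headers={'Metadata': 'true'}, retries=2,
                       timeout=IMDS_TIMEOUT_IN_SECONDS, exception_cb=exception_cb)
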
diff --git a/tests/unittests/test_datasource/test_azure_helper.py b/tests/unittests/test_datasource/test_azure_helper.py
index 0255616..bd006ab 100644
--- a/tests/unittests/test_datasource/test_azure_helper.py
+++ b/tests/unittests/test_datasource/test_azure_helper.py
@@ -67,12 +67,17 @@ class TestFindEndpoint(CiTestCase):
67 self.networkd_leases.return_value = None67 self.networkd_leases.return_value = None
6868
69 def test_missing_file(self):69 def test_missing_file(self):
70 self.assertRaises(ValueError, wa_shim.find_endpoint)70 """wa_shim find_endpoint uses default endpoint if leasefile not found
71 """
72 self.assertEqual(wa_shim.find_endpoint(), "168.63.129.16")
7173
72 def test_missing_special_azure_line(self):74 def test_missing_special_azure_line(self):
75 """wa_shim find_endpoint uses default endpoint if leasefile is found
76 but does not contain DHCP Option 245 (whose value is the endpoint)
77 """
73 self.load_file.return_value = ''78 self.load_file.return_value = ''
74 self.dhcp_options.return_value = {'eth0': {'key': 'value'}}79 self.dhcp_options.return_value = {'eth0': {'key': 'value'}}
75 self.assertRaises(ValueError, wa_shim.find_endpoint)80 self.assertEqual(wa_shim.find_endpoint(), "168.63.129.16")
7681
77 @staticmethod82 @staticmethod
78 def _build_lease_content(encoded_address):83 def _build_lease_content(encoded_address):
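
The two reworked tests above capture the behaviour change in cloudinit/sources/helpers/azure.py: a missing lease file, or a lease without DHCP option 245, no longer raises ValueError; the shim simply falls back to the well-known wireserver address. In outline (the constant name is an assumption; the address comes straight from the assertions):

    DEFAULT_WIRESERVER_ENDPOINT = '168.63.129.16'  # assumed constant name

    def find_endpoint_sketch(option_245_value=None):
        # Prefer an endpoint decoded from DHCP option 245 when one was
        # found; otherwise return the fixed default instead of raising.
        if option_245_value is None:
            return DEFAULT_WIRESERVER_ENDPOINT
        return option_245_value
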
diff --git a/tests/unittests/test_datasource/test_nocloud.py b/tests/unittests/test_datasource/test_nocloud.py
index 3429272..b785362 100644
--- a/tests/unittests/test_datasource/test_nocloud.py
+++ b/tests/unittests/test_datasource/test_nocloud.py
@@ -32,6 +32,36 @@ class TestNoCloudDataSource(CiTestCase):
32 self.mocks.enter_context(32 self.mocks.enter_context(
33 mock.patch.object(util, 'read_dmi_data', return_value=None))33 mock.patch.object(util, 'read_dmi_data', return_value=None))
3434
35 def _test_fs_config_is_read(self, fs_label, fs_label_to_search):
36 vfat_device = 'device-1'
37
38 def m_mount_cb(device, callback, mtype):
39 if (device == vfat_device):
40 return {'meta-data': yaml.dump({'instance-id': 'IID'})}
41 else:
42 return {}
43
44 def m_find_devs_with(query='', path=''):
45 if 'TYPE=vfat' == query:
46 return [vfat_device]
47 elif 'LABEL={}'.format(fs_label) == query:
48 return [vfat_device]
49 else:
50 return []
51
52 self.mocks.enter_context(
53 mock.patch.object(util, 'find_devs_with',
54 side_effect=m_find_devs_with))
55 self.mocks.enter_context(
56 mock.patch.object(util, 'mount_cb',
57 side_effect=m_mount_cb))
58 sys_cfg = {'datasource': {'NoCloud': {'fs_label': fs_label_to_search}}}
59 dsrc = dsNoCloud(sys_cfg=sys_cfg, distro=None, paths=self.paths)
60 ret = dsrc.get_data()
61
62 self.assertEqual(dsrc.metadata.get('instance-id'), 'IID')
63 self.assertTrue(ret)
64
35 def test_nocloud_seed_dir_on_lxd(self, m_is_lxd):65 def test_nocloud_seed_dir_on_lxd(self, m_is_lxd):
36 md = {'instance-id': 'IID', 'dsmode': 'local'}66 md = {'instance-id': 'IID', 'dsmode': 'local'}
37 ud = b"USER_DATA_HERE"67 ud = b"USER_DATA_HERE"
@@ -90,6 +120,18 @@ class TestNoCloudDataSource(CiTestCase):
90 ret = dsrc.get_data()120 ret = dsrc.get_data()
91 self.assertFalse(ret)121 self.assertFalse(ret)
92122
123 def test_fs_config_lowercase_label(self, m_is_lxd):
124 self._test_fs_config_is_read('cidata', 'cidata')
125
126 def test_fs_config_uppercase_label(self, m_is_lxd):
127 self._test_fs_config_is_read('CIDATA', 'cidata')
128
129 def test_fs_config_lowercase_label_search_uppercase(self, m_is_lxd):
130 self._test_fs_config_is_read('cidata', 'CIDATA')
131
132 def test_fs_config_uppercase_label_search_uppercase(self, m_is_lxd):
133 self._test_fs_config_is_read('CIDATA', 'CIDATA')
134
93 def test_no_datasource_expected(self, m_is_lxd):135 def test_no_datasource_expected(self, m_is_lxd):
94 # no source should be found if no cmdline, config, and fs_label=None136 # no source should be found if no cmdline, config, and fs_label=None
95 sys_cfg = {'datasource': {'NoCloud': {'fs_label': None}}}137 sys_cfg = {'datasource': {'NoCloud': {'fs_label': None}}}
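
The new helper and the four test_fs_config_* cases above check that a device labelled either cidata or CIDATA is found no matter which spelling sys_cfg asks for. The DataSourceNoCloud.py change earlier in the diff presumably just queries both spellings, along these lines:

    from cloudinit import util

    def find_label_devs_sketch(fs_label='cidata'):
        # Query blkid for both spellings of the configured label, which is
        # what the mocked util.find_devs_with above is probing.
        devs = []
        for label in (fs_label.lower(), fs_label.upper()):
            devs.extend(util.find_devs_with("LABEL=%s" % label))
        return devs
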
diff --git a/tests/unittests/test_datasource/test_scaleway.py b/tests/unittests/test_datasource/test_scaleway.py
index 3bfd752..f96bf0a 100644
--- a/tests/unittests/test_datasource/test_scaleway.py
+++ b/tests/unittests/test_datasource/test_scaleway.py
@@ -7,7 +7,6 @@ import requests
77
8from cloudinit import helpers8from cloudinit import helpers
9from cloudinit import settings9from cloudinit import settings
10from cloudinit.event import EventType
11from cloudinit.sources import DataSourceScaleway10from cloudinit.sources import DataSourceScaleway
1211
13from cloudinit.tests.helpers import mock, HttprettyTestCase, CiTestCase12from cloudinit.tests.helpers import mock, HttprettyTestCase, CiTestCase
@@ -404,9 +403,3 @@ class TestDataSourceScaleway(HttprettyTestCase):
404403
405 netcfg = self.datasource.network_config404 netcfg = self.datasource.network_config
406 self.assertEqual(netcfg, '0xdeadbeef')405 self.assertEqual(netcfg, '0xdeadbeef')
407
408 def test_update_events_is_correct(self):
409 """ensure update_events contains correct data"""
410 self.assertEqual(
411 {'network': {EventType.BOOT_NEW_INSTANCE, EventType.BOOT}},
412 self.datasource.update_events)
diff --git a/tests/unittests/test_ds_identify.py b/tests/unittests/test_ds_identify.py
index d00c1b4..8c18aa1 100644
--- a/tests/unittests/test_ds_identify.py
+++ b/tests/unittests/test_ds_identify.py
@@ -520,6 +520,10 @@ class TestDsIdentify(DsIdentifyBase):
520 """NoCloud is found with iso9660 filesystem on non-cdrom disk."""520 """NoCloud is found with iso9660 filesystem on non-cdrom disk."""
521 self._test_ds_found('NoCloud')521 self._test_ds_found('NoCloud')
522522
523 def test_nocloud_upper(self):
524 """NoCloud is found with uppercase filesystem label."""
525 self._test_ds_found('NoCloudUpper')
526
523 def test_nocloud_seed(self):527 def test_nocloud_seed(self):
524 """Nocloud seed directory."""528 """Nocloud seed directory."""
525 self._test_ds_found('NoCloud-seed')529 self._test_ds_found('NoCloud-seed')
@@ -713,6 +717,19 @@ VALID_CFG = {
713 'dev/vdb': 'pretend iso content for cidata\n',717 'dev/vdb': 'pretend iso content for cidata\n',
714 }718 }
715 },719 },
720 'NoCloudUpper': {
721 'ds': 'NoCloud',
722 'mocks': [
723 MOCK_VIRT_IS_KVM,
724 {'name': 'blkid', 'ret': 0,
725 'out': blkid_out(
726 BLKID_UEFI_UBUNTU +
727 [{'DEVNAME': 'vdb', 'TYPE': 'iso9660', 'LABEL': 'CIDATA'}])},
728 ],
729 'files': {
730 'dev/vdb': 'pretend iso content for cidata\n',
731 }
732 },
716 'NoCloud-seed': {733 'NoCloud-seed': {
717 'ds': 'NoCloud',734 'ds': 'NoCloud',
718 'files': {735 'files': {
diff --git a/tests/unittests/test_handler/test_handler_mounts.py b/tests/unittests/test_handler/test_handler_mounts.py
index 8fea6c2..0fb160b 100644
--- a/tests/unittests/test_handler/test_handler_mounts.py
+++ b/tests/unittests/test_handler/test_handler_mounts.py
@@ -154,7 +154,15 @@ class TestFstabHandling(test_helpers.FilesystemMockingTestCase):
154 return_value=True)154 return_value=True)
155155
156 self.add_patch('cloudinit.config.cc_mounts.util.subp',156 self.add_patch('cloudinit.config.cc_mounts.util.subp',
157 'mock_util_subp')157 'm_util_subp')
158
159 self.add_patch('cloudinit.config.cc_mounts.util.mounts',
160 'mock_util_mounts',
161 return_value={
162 '/dev/sda1': {'fstype': 'ext4',
163 'mountpoint': '/',
164 'opts': 'rw,relatime,discard'
165 }})
158166
159 self.mock_cloud = mock.Mock()167 self.mock_cloud = mock.Mock()
160 self.mock_log = mock.Mock()168 self.mock_log = mock.Mock()
@@ -230,4 +238,24 @@ class TestFstabHandling(test_helpers.FilesystemMockingTestCase):
230 fstab_new_content = fd.read()238 fstab_new_content = fd.read()
231 self.assertEqual(fstab_expected_content, fstab_new_content)239 self.assertEqual(fstab_expected_content, fstab_new_content)
232240
241 def test_no_change_fstab_sets_needs_mount_all(self):
242         '''Verify that unchanged fstab entries still trigger a mount -a call.'''
243 fstab_original_content = (
244 'LABEL=cloudimg-rootfs / ext4 defaults 0 0\n'
245 'LABEL=UEFI /boot/efi vfat defaults 0 0\n'
246 '/dev/vdb /mnt auto defaults,noexec,comment=cloudconfig 0 2\n'
247 )
248 fstab_expected_content = fstab_original_content
249 cc = {'mounts': [
250 ['/dev/vdb', '/mnt', 'auto', 'defaults,noexec']]}
251 with open(cc_mounts.FSTAB_PATH, 'w') as fd:
252 fd.write(fstab_original_content)
253 with open(cc_mounts.FSTAB_PATH, 'r') as fd:
254 fstab_new_content = fd.read()
255 self.assertEqual(fstab_expected_content, fstab_new_content)
256 cc_mounts.handle(None, cc, self.mock_cloud, self.mock_log, [])
257 self.m_util_subp.assert_has_calls([
258 mock.call(['mount', '-a']),
259 mock.call(['systemctl', 'daemon-reload'])])
260
233# vi: ts=4 expandtab261# vi: ts=4 expandtab
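
The new test_no_change_fstab_sets_needs_mount_all case asserts that cc_mounts still runs the activation commands when the requested mount already has an identical fstab entry. The cc_mounts.py change (+11/-0 in this diff) presumably boils down to a tail of handle() roughly like this sketch:

    import logging
    from cloudinit import util

    LOG = logging.getLogger(__name__)

    def activate_mounts_sketch():
        # Even when /etc/fstab was left byte-for-byte unchanged, run
        # 'mount -a' (and, on systemd systems, a daemon-reload) so the
        # configured mounts are actually active on this boot.
        for cmd in (['mount', '-a'], ['systemctl', 'daemon-reload']):
            try:
                util.subp(cmd)
            except util.ProcessExecutionError:
                LOG.warning('Running %s failed', cmd)
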
diff --git a/tests/unittests/test_net.py b/tests/unittests/test_net.py
index fd03deb..e85e964 100644
--- a/tests/unittests/test_net.py
+++ b/tests/unittests/test_net.py
@@ -9,6 +9,7 @@ from cloudinit.net import (
9from cloudinit.sources.helpers import openstack9from cloudinit.sources.helpers import openstack
10from cloudinit import temp_utils10from cloudinit import temp_utils
11from cloudinit import util11from cloudinit import util
12from cloudinit import safeyaml as yaml
1213
13from cloudinit.tests.helpers import (14from cloudinit.tests.helpers import (
14 CiTestCase, FilesystemMockingTestCase, dir2dict, mock, populate_dir)15 CiTestCase, FilesystemMockingTestCase, dir2dict, mock, populate_dir)
@@ -21,7 +22,7 @@ import json
21import os22import os
22import re23import re
23import textwrap24import textwrap
24import yaml25from yaml.serializer import Serializer
2526
2627
27DHCP_CONTENT_1 = """28DHCP_CONTENT_1 = """
@@ -3269,9 +3270,12 @@ class TestNetplanPostcommands(CiTestCase):
3269 mock_netplan_generate.assert_called_with(run=True)3270 mock_netplan_generate.assert_called_with(run=True)
3270 mock_net_setup_link.assert_called_with(run=True)3271 mock_net_setup_link.assert_called_with(run=True)
32713272
3273 @mock.patch('cloudinit.util.SeLinuxGuard')
3272 @mock.patch.object(netplan, "get_devicelist")3274 @mock.patch.object(netplan, "get_devicelist")
3273 @mock.patch('cloudinit.util.subp')3275 @mock.patch('cloudinit.util.subp')
3274 def test_netplan_postcmds(self, mock_subp, mock_devlist):3276 def test_netplan_postcmds(self, mock_subp, mock_devlist, mock_sel):
3277 mock_sel.__enter__ = mock.Mock(return_value=False)
3278 mock_sel.__exit__ = mock.Mock()
3275 mock_devlist.side_effect = [['lo']]3279 mock_devlist.side_effect = [['lo']]
3276 tmp_dir = self.tmp_dir()3280 tmp_dir = self.tmp_dir()
3277 ns = network_state.parse_net_config_data(self.mycfg,3281 ns = network_state.parse_net_config_data(self.mycfg,
@@ -3572,7 +3576,7 @@ class TestNetplanRoundTrip(CiTestCase):
3572 # now look for any alias, avoid rendering them entirely3576 # now look for any alias, avoid rendering them entirely
3573 # generate the first anchor string using the template3577 # generate the first anchor string using the template
3574 # as of this writing, looks like "&id001"3578 # as of this writing, looks like "&id001"
3575 anchor = r'&' + yaml.serializer.Serializer.ANCHOR_TEMPLATE % 13579 anchor = r'&' + Serializer.ANCHOR_TEMPLATE % 1
3576 found_alias = re.search(anchor, content, re.MULTILINE)3580 found_alias = re.search(anchor, content, re.MULTILINE)
3577 if found_alias:3581 if found_alias:
3578 msg = "Error at: %s\nContent:\n%s" % (found_alias, content)3582 msg = "Error at: %s\nContent:\n%s" % (found_alias, content)
@@ -3826,6 +3830,41 @@ class TestNetRenderers(CiTestCase):
3826 self.assertRaises(net.RendererNotFoundError, renderers.select,3830 self.assertRaises(net.RendererNotFoundError, renderers.select,
3827 priority=['sysconfig', 'eni'])3831 priority=['sysconfig', 'eni'])
38283832
3833 @mock.patch("cloudinit.net.renderers.netplan.available")
3834 @mock.patch("cloudinit.net.renderers.sysconfig.available_sysconfig")
3835 @mock.patch("cloudinit.net.renderers.sysconfig.available_nm")
3836 @mock.patch("cloudinit.net.renderers.eni.available")
3837 @mock.patch("cloudinit.net.renderers.sysconfig.util.get_linux_distro")
3838 def test_sysconfig_selected_on_sysconfig_enabled_distros(self, m_distro,
3839 m_eni, m_sys_nm,
3840 m_sys_scfg,
3841 m_netplan):
3842 """sysconfig only selected on specific distros (rhel/sles)."""
3843
3844 # Ubuntu with Network-Manager installed
3845 m_eni.return_value = False # no ifupdown (ifquery)
3846 m_sys_scfg.return_value = False # no sysconfig/ifup/ifdown
3847 m_sys_nm.return_value = True # network-manager is installed
3848 m_netplan.return_value = True # netplan is installed
3849 m_distro.return_value = ('ubuntu', None, None)
3850 self.assertEqual('netplan', renderers.select(priority=None)[0])
3851
3852 # Centos with Network-Manager installed
3853 m_eni.return_value = False # no ifupdown (ifquery)
3854 m_sys_scfg.return_value = False # no sysconfig/ifup/ifdown
3855 m_sys_nm.return_value = True # network-manager is installed
3856 m_netplan.return_value = False # netplan is not installed
3857 m_distro.return_value = ('centos', None, None)
3858 self.assertEqual('sysconfig', renderers.select(priority=None)[0])
3859
3860 # OpenSuse with Network-Manager installed
3861 m_eni.return_value = False # no ifupdown (ifquery)
3862 m_sys_scfg.return_value = False # no sysconfig/ifup/ifdown
3863 m_sys_nm.return_value = True # network-manager is installed
3864 m_netplan.return_value = False # netplan is not installed
3865 m_distro.return_value = ('opensuse', None, None)
3866 self.assertEqual('sysconfig', renderers.select(priority=None)[0])
3867
38293868
3830class TestGetInterfaces(CiTestCase):3869class TestGetInterfaces(CiTestCase):
3831 _data = {'bonds': ['bond1'],3870 _data = {'bonds': ['bond1'],
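
The new renderer-selection test encodes the intended outcome per distro: netplan on Ubuntu when Network-Manager is installed, sysconfig on CentOS and OpenSUSE. renderers.select() essentially walks a priority list and returns the first renderer whose availability check passes, so the distro gate exercised here lives in sysconfig's available* helpers. A simplified sketch of that selection loop, with plain callables standing in for the renderer modules:

    def select_sketch(priority, available_checks):
        # available_checks maps renderer name -> callable returning bool,
        # standing in for each renderer module's available() function.
        for name in priority:
            if available_checks[name]():
                return name
        # the real code raises net.RendererNotFoundError here
        raise RuntimeError('No available network renderers found in %s' % (priority,))
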
diff --git a/tests/unittests/test_reporting_hyperv.py b/tests/unittests/test_reporting_hyperv.py
3832old mode 1006443871old mode 100644
3833new mode 1007553872new mode 100755
index 2e64c6c..d01ed5b
--- a/tests/unittests/test_reporting_hyperv.py
+++ b/tests/unittests/test_reporting_hyperv.py
@@ -1,10 +1,12 @@
1# This file is part of cloud-init. See LICENSE file for license information.1# This file is part of cloud-init. See LICENSE file for license information.
22
3from cloudinit.reporting import events3from cloudinit.reporting import events
4from cloudinit.reporting import handlers4from cloudinit.reporting.handlers import HyperVKvpReportingHandler
55
6import json6import json
7import os7import os
8import struct
9import time
810
9from cloudinit import util11from cloudinit import util
10from cloudinit.tests.helpers import CiTestCase12from cloudinit.tests.helpers import CiTestCase
@@ -13,7 +15,7 @@ from cloudinit.tests.helpers import CiTestCase
13class TestKvpEncoding(CiTestCase):15class TestKvpEncoding(CiTestCase):
14 def test_encode_decode(self):16 def test_encode_decode(self):
15 kvp = {'key': 'key1', 'value': 'value1'}17 kvp = {'key': 'key1', 'value': 'value1'}
16 kvp_reporting = handlers.HyperVKvpReportingHandler()18 kvp_reporting = HyperVKvpReportingHandler()
17 data = kvp_reporting._encode_kvp_item(kvp['key'], kvp['value'])19 data = kvp_reporting._encode_kvp_item(kvp['key'], kvp['value'])
18 self.assertEqual(len(data), kvp_reporting.HV_KVP_RECORD_SIZE)20 self.assertEqual(len(data), kvp_reporting.HV_KVP_RECORD_SIZE)
19 decoded_kvp = kvp_reporting._decode_kvp_item(data)21 decoded_kvp = kvp_reporting._decode_kvp_item(data)
@@ -26,57 +28,9 @@ class TextKvpReporter(CiTestCase):
26 self.tmp_file_path = self.tmp_path('kvp_pool_file')28 self.tmp_file_path = self.tmp_path('kvp_pool_file')
27 util.ensure_file(self.tmp_file_path)29 util.ensure_file(self.tmp_file_path)
2830
29 def test_event_type_can_be_filtered(self):
30 reporter = handlers.HyperVKvpReportingHandler(
31 kvp_file_path=self.tmp_file_path,
32 event_types=['foo', 'bar'])
33
34 reporter.publish_event(
35 events.ReportingEvent('foo', 'name', 'description'))
36 reporter.publish_event(
37 events.ReportingEvent('some_other', 'name', 'description3'))
38 reporter.q.join()
39
40 kvps = list(reporter._iterate_kvps(0))
41 self.assertEqual(1, len(kvps))
42
43 reporter.publish_event(
44 events.ReportingEvent('bar', 'name', 'description2'))
45 reporter.q.join()
46 kvps = list(reporter._iterate_kvps(0))
47 self.assertEqual(2, len(kvps))
48
49 self.assertIn('foo', kvps[0]['key'])
50 self.assertIn('bar', kvps[1]['key'])
51 self.assertNotIn('some_other', kvps[0]['key'])
52 self.assertNotIn('some_other', kvps[1]['key'])
53
54 def test_events_are_over_written(self):
55 reporter = handlers.HyperVKvpReportingHandler(
56 kvp_file_path=self.tmp_file_path)
57
58 self.assertEqual(0, len(list(reporter._iterate_kvps(0))))
59
60 reporter.publish_event(
61 events.ReportingEvent('foo', 'name1', 'description'))
62 reporter.publish_event(
63 events.ReportingEvent('foo', 'name2', 'description'))
64 reporter.q.join()
65 self.assertEqual(2, len(list(reporter._iterate_kvps(0))))
66
67 reporter2 = handlers.HyperVKvpReportingHandler(
68 kvp_file_path=self.tmp_file_path)
69 reporter2.incarnation_no = reporter.incarnation_no + 1
70 reporter2.publish_event(
71 events.ReportingEvent('foo', 'name3', 'description'))
72 reporter2.q.join()
73
74 self.assertEqual(2, len(list(reporter2._iterate_kvps(0))))
75
76 def test_events_with_higher_incarnation_not_over_written(self):31 def test_events_with_higher_incarnation_not_over_written(self):
77 reporter = handlers.HyperVKvpReportingHandler(32 reporter = HyperVKvpReportingHandler(
78 kvp_file_path=self.tmp_file_path)33 kvp_file_path=self.tmp_file_path)
79
80 self.assertEqual(0, len(list(reporter._iterate_kvps(0))))34 self.assertEqual(0, len(list(reporter._iterate_kvps(0))))
8135
82 reporter.publish_event(36 reporter.publish_event(
@@ -86,7 +40,7 @@ class TextKvpReporter(CiTestCase):
86 reporter.q.join()40 reporter.q.join()
87 self.assertEqual(2, len(list(reporter._iterate_kvps(0))))41 self.assertEqual(2, len(list(reporter._iterate_kvps(0))))
8842
89 reporter3 = handlers.HyperVKvpReportingHandler(43 reporter3 = HyperVKvpReportingHandler(
90 kvp_file_path=self.tmp_file_path)44 kvp_file_path=self.tmp_file_path)
91 reporter3.incarnation_no = reporter.incarnation_no - 145 reporter3.incarnation_no = reporter.incarnation_no - 1
92 reporter3.publish_event(46 reporter3.publish_event(
@@ -95,7 +49,7 @@ class TextKvpReporter(CiTestCase):
95 self.assertEqual(3, len(list(reporter3._iterate_kvps(0))))49 self.assertEqual(3, len(list(reporter3._iterate_kvps(0))))
9650
97 def test_finish_event_result_is_logged(self):51 def test_finish_event_result_is_logged(self):
98 reporter = handlers.HyperVKvpReportingHandler(52 reporter = HyperVKvpReportingHandler(
99 kvp_file_path=self.tmp_file_path)53 kvp_file_path=self.tmp_file_path)
100 reporter.publish_event(54 reporter.publish_event(
101 events.FinishReportingEvent('name2', 'description1',55 events.FinishReportingEvent('name2', 'description1',
@@ -105,7 +59,7 @@ class TextKvpReporter(CiTestCase):
10559
106 def test_file_operation_issue(self):60 def test_file_operation_issue(self):
107 os.remove(self.tmp_file_path)61 os.remove(self.tmp_file_path)
108 reporter = handlers.HyperVKvpReportingHandler(62 reporter = HyperVKvpReportingHandler(
109 kvp_file_path=self.tmp_file_path)63 kvp_file_path=self.tmp_file_path)
110 reporter.publish_event(64 reporter.publish_event(
111 events.FinishReportingEvent('name2', 'description1',65 events.FinishReportingEvent('name2', 'description1',
@@ -113,7 +67,7 @@ class TextKvpReporter(CiTestCase):
113 reporter.q.join()67 reporter.q.join()
11468
115 def test_event_very_long(self):69 def test_event_very_long(self):
116 reporter = handlers.HyperVKvpReportingHandler(70 reporter = HyperVKvpReportingHandler(
117 kvp_file_path=self.tmp_file_path)71 kvp_file_path=self.tmp_file_path)
118 description = 'ab' * reporter.HV_KVP_EXCHANGE_MAX_VALUE_SIZE72 description = 'ab' * reporter.HV_KVP_EXCHANGE_MAX_VALUE_SIZE
119 long_event = events.FinishReportingEvent(73 long_event = events.FinishReportingEvent(
@@ -132,3 +86,43 @@ class TextKvpReporter(CiTestCase):
132 self.assertEqual(msg_slice['msg_i'], i)86 self.assertEqual(msg_slice['msg_i'], i)
133 full_description += msg_slice['msg']87 full_description += msg_slice['msg']
134 self.assertEqual(description, full_description)88 self.assertEqual(description, full_description)
89
90 def test_not_truncate_kvp_file_modified_after_boot(self):
91 with open(self.tmp_file_path, "wb+") as f:
92 kvp = {'key': 'key1', 'value': 'value1'}
93 data = (struct.pack("%ds%ds" % (
94 HyperVKvpReportingHandler.HV_KVP_EXCHANGE_MAX_KEY_SIZE,
95 HyperVKvpReportingHandler.HV_KVP_EXCHANGE_MAX_VALUE_SIZE),
96 kvp['key'].encode('utf-8'), kvp['value'].encode('utf-8')))
97 f.write(data)
98 cur_time = time.time()
99 os.utime(self.tmp_file_path, (cur_time, cur_time))
100
101 # reset this because the unit test framework
102 # has already polluted the class variable
103 HyperVKvpReportingHandler._already_truncated_pool_file = False
104
105 reporter = HyperVKvpReportingHandler(kvp_file_path=self.tmp_file_path)
106 kvps = list(reporter._iterate_kvps(0))
107 self.assertEqual(1, len(kvps))
108
109 def test_truncate_stale_kvp_file(self):
110 with open(self.tmp_file_path, "wb+") as f:
111 kvp = {'key': 'key1', 'value': 'value1'}
112 data = (struct.pack("%ds%ds" % (
113 HyperVKvpReportingHandler.HV_KVP_EXCHANGE_MAX_KEY_SIZE,
114 HyperVKvpReportingHandler.HV_KVP_EXCHANGE_MAX_VALUE_SIZE),
115 kvp['key'].encode('utf-8'), kvp['value'].encode('utf-8')))
116 f.write(data)
117
118 # set the time ways back to make it look like
119 # we had an old kvp file
120 os.utime(self.tmp_file_path, (1000000, 1000000))
121
122 # reset this because the unit test framework
123 # has already polluted the class variable
124 HyperVKvpReportingHandler._already_truncated_pool_file = False
125
126 reporter = HyperVKvpReportingHandler(kvp_file_path=self.tmp_file_path)
127 kvps = list(reporter._iterate_kvps(0))
128 self.assertEqual(0, len(kvps))
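
The two new tests above pre-populate a KVP pool file, then move its mtime either after or far before the present, and expect the handler to keep or truncate the existing records accordingly (resetting the _already_truncated_pool_file class flag so the check runs again). A sketch of the staleness test those cases imply, assuming boot time is derived from /proc/uptime:

    import os
    import time

    def pool_file_is_stale_sketch(kvp_file_path):
        # A pool file last modified before the current boot is treated as
        # stale and truncated once per process; one touched after boot is
        # preserved, as test_not_truncate_kvp_file_modified_after_boot expects.
        with open('/proc/uptime') as fh:
            uptime = float(fh.read().split()[0])
        boot_time = time.time() - uptime
        return os.path.getmtime(kvp_file_path) < boot_time
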
diff --git a/tools/build-on-freebsd b/tools/build-on-freebsd
index d23fde2..dc3b974 100755
--- a/tools/build-on-freebsd
+++ b/tools/build-on-freebsd
@@ -9,6 +9,7 @@ fail() { echo "FAILED:" "$@" 1>&2; exit 1; }
9depschecked=/tmp/c-i.dependencieschecked9depschecked=/tmp/c-i.dependencieschecked
10pkgs="10pkgs="
11 bash11 bash
12 chpasswd
12 dmidecode13 dmidecode
13 e2fsprogs14 e2fsprogs
14 py27-Jinja215 py27-Jinja2
@@ -17,6 +18,7 @@ pkgs="
17 py27-configobj18 py27-configobj
18 py27-jsonpatch19 py27-jsonpatch
19 py27-jsonpointer20 py27-jsonpointer
21 py27-jsonschema
20 py27-oauthlib22 py27-oauthlib
21 py27-requests23 py27-requests
22 py27-serial24 py27-serial
@@ -28,12 +30,9 @@ pkgs="
28[ -f "$depschecked" ] || pkg install ${pkgs} || fail "install packages"30[ -f "$depschecked" ] || pkg install ${pkgs} || fail "install packages"
29touch $depschecked31touch $depschecked
3032
31# Required but unavailable port/pkg: py27-jsonpatch py27-jsonpointer
32# Luckily, the install step will take care of this by installing it from pypi...
33
34# Build the code and install in /usr/local/:33# Build the code and install in /usr/local/:
35python setup.py build34python2.7 setup.py build
36python setup.py install -O1 --skip-build --prefix /usr/local/ --init-system sysvinit_freebsd35python2.7 setup.py install -O1 --skip-build --prefix /usr/local/ --init-system sysvinit_freebsd
3736
38# Enable cloud-init in /etc/rc.conf:37# Enable cloud-init in /etc/rc.conf:
39sed -i.bak -e "/cloudinit_enable=.*/d" /etc/rc.conf38sed -i.bak -e "/cloudinit_enable=.*/d" /etc/rc.conf
diff --git a/tools/ds-identify b/tools/ds-identify
index b78b273..6518901 100755
--- a/tools/ds-identify
+++ b/tools/ds-identify
@@ -620,7 +620,7 @@ dscheck_MAAS() {
620}620}
621621
622dscheck_NoCloud() {622dscheck_NoCloud() {
623 local fslabel="cidata" d=""623 local fslabel="cidata CIDATA" d=""
624 case " ${DI_KERNEL_CMDLINE} " in624 case " ${DI_KERNEL_CMDLINE} " in
625 *\ ds=nocloud*) return ${DS_FOUND};;625 *\ ds=nocloud*) return ${DS_FOUND};;
626 esac626 esac
@@ -632,9 +632,10 @@ dscheck_NoCloud() {
632 check_seed_dir "$d" meta-data user-data && return ${DS_FOUND}632 check_seed_dir "$d" meta-data user-data && return ${DS_FOUND}
633 check_writable_seed_dir "$d" meta-data user-data && return ${DS_FOUND}633 check_writable_seed_dir "$d" meta-data user-data && return ${DS_FOUND}
634 done634 done
635 if has_fs_with_label "${fslabel}"; then635 if has_fs_with_label $fslabel; then
636 return ${DS_FOUND}636 return ${DS_FOUND}
637 fi637 fi
638
638 return ${DS_NOT_FOUND}639 return ${DS_NOT_FOUND}
639}640}
640641
@@ -762,7 +763,7 @@ is_cdrom_ovf() {
762763
763 # explicitly skip known labels of other types. rd_rdfe is azure.764 # explicitly skip known labels of other types. rd_rdfe is azure.
764 case "$label" in765 case "$label" in
765 config-2|CONFIG-2|rd_rdfe_stable*|cidata) return 1;;766 config-2|CONFIG-2|rd_rdfe_stable*|cidata|CIDATA) return 1;;
766 esac767 esac
767768
768 local idstr="http://schemas.dmtf.org/ovf/environment/1"769 local idstr="http://schemas.dmtf.org/ovf/environment/1"
diff --git a/tools/read-version b/tools/read-version
index e69c2ce..6dca659 100755
--- a/tools/read-version
+++ b/tools/read-version
@@ -71,9 +71,12 @@ if is_gitdir(_tdir) and which("git"):
71 flags = ['--tags']71 flags = ['--tags']
72 cmd = ['git', 'describe', '--abbrev=8', '--match=[0-9]*'] + flags72 cmd = ['git', 'describe', '--abbrev=8', '--match=[0-9]*'] + flags
7373
74 version = tiny_p(cmd).strip()74 try:
75 version = tiny_p(cmd).strip()
76 except RuntimeError:
77 version = None
7578
76 if not version.startswith(src_version):79 if version is None or not version.startswith(src_version):
77 sys.stderr.write("git describe version (%s) differs from "80 sys.stderr.write("git describe version (%s) differs from "
78 "cloudinit.version (%s)\n" % (version, src_version))81 "cloudinit.version (%s)\n" % (version, src_version))
79 sys.stderr.write(82 sys.stderr.write(
