Merge ~chad.smith/cloud-init:ubuntu/disco into cloud-init:ubuntu/disco
Proposed by: Chad Smith
Status: Merged
Merged at revision: 59835f9211e70101bde10c63c7e84cc6a74c930c
Proposed branch: ~chad.smith/cloud-init:ubuntu/disco
Merge into: cloud-init:ubuntu/disco
Diff against target: 1142 lines (+417/-168), 25 files modified

ChangeLog (+117/-0)
cloudinit/config/cc_apt_configure.py (+1/-1)
cloudinit/config/cc_mounts.py (+11/-0)
cloudinit/net/sysconfig.py (+4/-2)
cloudinit/net/tests/test_init.py (+1/-1)
cloudinit/reporting/handlers.py (+57/-60)
cloudinit/sources/DataSourceAzure.py (+11/-6)
cloudinit/sources/DataSourceCloudStack.py (+1/-1)
cloudinit/sources/DataSourceConfigDrive.py (+2/-5)
cloudinit/sources/DataSourceEc2.py (+1/-1)
cloudinit/sources/helpers/azure.py (+11/-3)
cloudinit/util.py (+2/-13)
cloudinit/version.py (+1/-1)
debian/changelog (+27/-0)
packages/redhat/cloud-init.spec.in (+3/-1)
packages/suse/cloud-init.spec.in (+3/-1)
setup.py (+2/-1)
tests/cloud_tests/releases.yaml (+16/-0)
tests/unittests/test_datasource/test_azure.py (+10/-3)
tests/unittests/test_datasource/test_azure_helper.py (+7/-2)
tests/unittests/test_handler/test_handler_mounts.py (+29/-1)
tests/unittests/test_net.py (+42/-3)
tests/unittests/test_reporting_hyperv.py (+49/-55)
tools/build-on-freebsd (+4/-5)
tools/read-version (+5/-2)
Reviewer: Server Team CI bot (continuous-integration): Approve
Reviewer: cloud-init Committers: Pending
Review via email: mp+367300@code.launchpad.net
Commit message
new upstream snapshot for release into disco
nothing special here for ubuntu advantage config module as ubuntu-
Description of the change
Revision history for this message
Server Team CI bot (server-team-bot) wrote:
review: Approve (continuous-integration)
Preview Diff
diff --git a/ChangeLog b/ChangeLog
index 8fa6fdd..bf48fd4 100644
--- a/ChangeLog
+++ b/ChangeLog
@@ -1,3 +1,120 @@
+19.1:
+ - freebsd: add chpasswd pkg in the image [Gonéri Le Bouder]
+ - tests: add Eoan release [Paride Legovini]
+ - cc_mounts: check if mount -a on no-change fstab path
+   [Jason Zions (MSFT)] (LP: #1825596)
+ - replace remaining occurrences of LOG.warn [Daniel Watkins]
+ - DataSourceAzure: Adjust timeout for polling IMDS [Anh Vo]
+ - Azure: Changes to the Hyper-V KVP Reporter [Anh Vo]
+ - git tests: no longer show warning about safe yaml.
+ - tools/read-version: handle errors [Chad Miller]
+ - net/sysconfig: only indicate available on known sysconfig distros
+   (LP: #1819994)
+ - packages: update rpm specs for new bash completion path
+   [Daniel Watkins] (LP: #1825444)
+ - test_azure: mock util.SeLinuxGuard where needed
+   [Jason Zions (MSFT)] (LP: #1825253)
+ - setup.py: install bash completion script in new location [Daniel Watkins]
+ - mount_cb: do not pass sync and rw options to mount
+   [Gonéri Le Bouder] (LP: #1645824)
+ - cc_apt_configure: fix typo in apt documentation [Dominic Schlegel]
+ - Revert "DataSource: move update_events from a class to an instance..."
+   [Daniel Watkins]
+ - Change DataSourceNoCloud to ignore file system label's case.
+   [Risto Oikarinen]
+ - cmd:main.py: Fix missing 'modules-init' key in modes dict
+   [Antonio Romito] (LP: #1815109)
+ - ubuntu_advantage: rewrite cloud-config module
+ - Azure: Treat _unset network configuration as if it were absent
+   [Jason Zions (MSFT)] (LP: #1823084)
+ - DatasourceAzure: add additional logging for azure datasource [Anh Vo]
+ - cloud_tests: fix apt_pipelining test-cases
+ - Azure: Ensure platform random_seed is always serializable as JSON.
+   [Jason Zions (MSFT)]
+ - net/sysconfig: write out SUSE-compatible IPv6 config [Robert Schweikert]
+ - tox: Update testenv for openSUSE Leap to 15.0 [Thomas Bechtold]
+ - net: Fix ipv6 static routes when using eni renderer
+   [Raphael Glon] (LP: #1818669)
+ - Add ubuntu_drivers config module [Daniel Watkins]
+ - doc: Refresh Azure walinuxagent docs [Daniel Watkins]
+ - tox: bump pylint version to latest (2.3.1) [Daniel Watkins]
+ - DataSource: move update_events from a class to an instance attribute
+   [Daniel Watkins] (LP: #1819913)
+ - net/sysconfig: Handle default route setup for dhcp configured NICs
+   [Robert Schweikert] (LP: #1812117)
+ - DataSourceEc2: update RELEASE_BLOCKER to be more accurate
+   [Daniel Watkins]
+ - cloud-init-per: POSIX sh does not support string subst, use sed
+   (LP: #1819222)
+ - Support locking user with usermod if passwd is not available.
+ - Example for Microsoft Azure data disk added. [Anton Olifir]
+ - clean: correctly determine the path for excluding seed directory
+   [Daniel Watkins] (LP: #1818571)
+ - helpers/openstack: Treat unknown link types as physical
+   [Daniel Watkins] (LP: #1639263)
+ - drop Python 2.6 support and our NIH version detection [Daniel Watkins]
+ - tip-pylint: Fix assignment-from-return-none errors
+ - net: append type:dhcp[46] only if dhcp[46] is True in v2 netconfig
+   [Kurt Stieger] (LP: #1818032)
+ - cc_apt_pipelining: stop disabling pipelining by default
+   [Daniel Watkins] (LP: #1794982)
+ - tests: fix some slow tests and some leaking state [Daniel Watkins]
+ - util: don't determine string_types ourselves [Daniel Watkins]
+ - cc_rsyslog: Escape possible nested set [Daniel Watkins] (LP: #1816967)
+ - Enable encrypted_data_bag_secret support for Chef
+   [Eric Williams] (LP: #1817082)
+ - azure: Filter list of ssh keys pulled from fabric [Jason Zions (MSFT)]
+ - doc: update merging doc with fixes and some additional details/examples
+ - tests: integration test failure summary to use traceback if empty error
+ - This is to fix https://bugs.launchpad.net/cloud-init/+bug/1812676
+   [Vitaly Kuznetsov]
+ - EC2: Rewrite network config on AWS Classic instances every boot
+   [Guilherme G. Piccoli] (LP: #1802073)
+ - netinfo: Adjust ifconfig output parsing for FreeBSD ipv6 entries
+   (LP: #1779672)
+ - netplan: Don't render yaml aliases when dumping netplan (LP: #1815051)
+ - add PyCharm IDE .idea/ path to .gitignore [Dominic Schlegel]
+ - correct grammar issue in instance metadata documentation
+   [Dominic Schlegel] (LP: #1802188)
+ - clean: cloud-init clean should not trace when run from within cloud_dir
+   (LP: #1795508)
+ - Resolve flake8 comparison and pycodestyle over-ident issues
+   [Paride Legovini]
+ - opennebula: also exclude epochseconds from changed environment vars
+   (LP: #1813641)
+ - systemd: Render generator from template to account for system
+   differences. [Robert Schweikert]
+ - sysconfig: On SUSE, use STARTMODE instead of ONBOOT
+   [Robert Schweikert] (LP: #1799540)
+ - flake8: use ==/!= to compare str, bytes, and int literals
+   [Paride Legovini]
+ - opennebula: exclude EPOCHREALTIME as known bash env variable with a
+   delta (LP: #1813383)
+ - tox: fix disco httpretty dependencies for py37 (LP: #1813361)
+ - run-container: uncomment baseurl in yum.repos.d/*.repo when using a
+   proxy [Paride Legovini]
+ - lxd: install zfs-linux instead of zfs meta package
+   [Johnson Shi] (LP: #1799779)
+ - net/sysconfig: do not write a resolv.conf file with only the header.
+   [Robert Schweikert]
+ - net: Make sysconfig renderer compatible with Network Manager.
+   [Eduardo Otubo]
+ - cc_set_passwords: Fix regex when parsing hashed passwords
+   [Marlin Cremers] (LP: #1811446)
+ - net: Wait for dhclient to daemonize before reading lease file
+   [Jason Zions] (LP: #1794399)
+ - [Azure] Increase retries when talking to Wireserver during metadata walk
+   [Jason Zions]
+ - Add documentation on adding a datasource.
+ - doc: clean up some datasource documentation.
+ - ds-identify: fix wrong variable name in ovf_vmware_transport_guestinfo.
+ - Scaleway: Support ssh keys provided inside an instance tag. [PORTE Loïc]
+ - OVF: simplify expected return values of transport functions.
+ - Vmware: Add support for the com.vmware.guestInfo OVF transport.
+   (LP: #1807466)
+ - HACKING.rst: change contact info to Josh Powers
+ - Update to pylint 2.2.2.
+
 18.5:
  - tests: add Disco release [Joshua Powers]
  - net: render 'metric' values in per-subnet routes (LP: #1805871)
diff --git a/cloudinit/config/cc_apt_configure.py b/cloudinit/config/cc_apt_configure.py
index e18944e..919d199 100644
--- a/cloudinit/config/cc_apt_configure.py
+++ b/cloudinit/config/cc_apt_configure.py
@@ -127,7 +127,7 @@ to ``^[\\w-]+:\\w``
 
 Source list entries can be specified as a dictionary under the ``sources``
 config key, with key in the dict representing a different source file. The key
-The key of each source entry will be used as an id that can be referenced in
+of each source entry will be used as an id that can be referenced in
 other config entries, as well as the filename for the source's configuration
 under ``/etc/apt/sources.list.d``. If the name does not end with ``.list``,
 it will be appended. If there is no configuration for a key in ``sources``, no
diff --git a/cloudinit/config/cc_mounts.py b/cloudinit/config/cc_mounts.py
index 339baba..123ffb8 100644
--- a/cloudinit/config/cc_mounts.py
+++ b/cloudinit/config/cc_mounts.py
@@ -439,6 +439,7 @@ def handle(_name, cfg, cloud, log, _args):
 
     cc_lines = []
     needswap = False
+    need_mount_all = False
     dirs = []
     for line in actlist:
         # write 'comment' in the fs_mntops, entry, claiming this
@@ -449,11 +450,18 @@ def handle(_name, cfg, cloud, log, _args):
             dirs.append(line[1])
         cc_lines.append('\t'.join(line))
 
+    mount_points = [v['mountpoint'] for k, v in util.mounts().items()
+                    if 'mountpoint' in v]
     for d in dirs:
         try:
             util.ensure_dir(d)
         except Exception:
             util.logexc(log, "Failed to make '%s' config-mount", d)
+        # dirs is list of directories on which a volume should be mounted.
+        # If any of them does not already show up in the list of current
+        # mount points, we will definitely need to do mount -a.
+        if not need_mount_all and d not in mount_points:
+            need_mount_all = True
 
     sadds = [WS.sub(" ", n) for n in cc_lines]
     sdrops = [WS.sub(" ", n) for n in fstab_removed]
@@ -473,6 +481,9 @@ def handle(_name, cfg, cloud, log, _args):
         log.debug("No changes to /etc/fstab made.")
     else:
         log.debug("Changes to fstab: %s", sops)
+        need_mount_all = True
+
+    if need_mount_all:
         activate_cmds.append(["mount", "-a"])
     if uses_systemd:
         activate_cmds.append(["systemctl", "daemon-reload"])
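The cc_mounts hunk above changes `mount -a` from unconditional-on-fstab-change to a two-part test: run it when fstab was rewritten, or when a configured mount directory is not already mounted. The decision rule can be sketched in isolation (a hypothetical helper, not cloud-init's actual API):

```python
def needs_mount_all(config_dirs, current_mount_points, fstab_changed):
    """Return True when 'mount -a' should run: either /etc/fstab was
    rewritten this boot, or some configured mount directory does not
    appear among the currently mounted paths."""
    if fstab_changed:
        return True
    return any(d not in current_mount_points for d in config_dirs)
```

This mirrors the diff's logic: the fstab-changed branch forces `need_mount_all = True`, and otherwise each configured directory is checked against `util.mounts()`.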
diff --git a/cloudinit/net/sysconfig.py b/cloudinit/net/sysconfig.py
index 0998392..a47da0a 100644
--- a/cloudinit/net/sysconfig.py
+++ b/cloudinit/net/sysconfig.py
@@ -18,6 +18,8 @@ from .network_state import (
 
 LOG = logging.getLogger(__name__)
 NM_CFG_FILE = "/etc/NetworkManager/NetworkManager.conf"
+KNOWN_DISTROS = [
+    'opensuse', 'sles', 'suse', 'redhat', 'fedora', 'centos']
 
 
 def _make_header(sep='#'):
@@ -717,8 +719,8 @@ class Renderer(renderer.Renderer):
 def available(target=None):
     sysconfig = available_sysconfig(target=target)
     nm = available_nm(target=target)
-
-    return any([nm, sysconfig])
+    return (util.get_linux_distro()[0] in KNOWN_DISTROS
+            and any([nm, sysconfig]))
 
 
 def available_sysconfig(target=None):
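The sysconfig change above (LP: #1819994) stops the renderer from claiming availability merely because sysconfig or NetworkManager paths exist; the distro must also be one known to use sysconfig-style config. A minimal sketch of the gating, with the distro check abstracted to a plain argument (illustrative, not the real `available()` signature):

```python
KNOWN_DISTROS = ['opensuse', 'sles', 'suse', 'redhat', 'fedora', 'centos']

def renderer_available(distro_name, sysconfig_present, nm_present):
    # Both conditions must hold: a known sysconfig distro, and at least
    # one of the sysconfig/NetworkManager signals being present.
    return distro_name in KNOWN_DISTROS and any([nm_present, sysconfig_present])
```

On Ubuntu, for example, stray sysconfig files no longer cause this renderer to be selected ahead of netplan.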
diff --git a/cloudinit/net/tests/test_init.py b/cloudinit/net/tests/test_init.py
index f55c31e..6d2affe 100644
--- a/cloudinit/net/tests/test_init.py
+++ b/cloudinit/net/tests/test_init.py
@@ -7,11 +7,11 @@ import mock
 import os
 import requests
 import textwrap
-import yaml
 
 import cloudinit.net as net
 from cloudinit.util import ensure_file, write_file, ProcessExecutionError
 from cloudinit.tests.helpers import CiTestCase, HttprettyTestCase
+from cloudinit import safeyaml as yaml
 
 
 class TestSysDevPath(CiTestCase):
diff --git a/cloudinit/reporting/handlers.py b/cloudinit/reporting/handlers.py
old mode 100644
new mode 100755
index 6d23558..10165ae
--- a/cloudinit/reporting/handlers.py
+++ b/cloudinit/reporting/handlers.py
@@ -5,7 +5,6 @@ import fcntl
 import json
 import six
 import os
-import re
 import struct
 import threading
 import time
@@ -14,6 +13,7 @@ from cloudinit import log as logging
 from cloudinit.registry import DictRegistry
 from cloudinit import (url_helper, util)
 from datetime import datetime
+from six.moves.queue import Empty as QueueEmptyError
 
 if six.PY2:
     from multiprocessing.queues import JoinableQueue as JQueue
@@ -129,24 +129,50 @@ class HyperVKvpReportingHandler(ReportingHandler):
     DESC_IDX_KEY = 'msg_i'
     JSON_SEPARATORS = (',', ':')
     KVP_POOL_FILE_GUEST = '/var/lib/hyperv/.kvp_pool_1'
+    _already_truncated_pool_file = False
 
     def __init__(self,
                  kvp_file_path=KVP_POOL_FILE_GUEST,
                  event_types=None):
         super(HyperVKvpReportingHandler, self).__init__()
         self._kvp_file_path = kvp_file_path
+        HyperVKvpReportingHandler._truncate_guest_pool_file(
+            self._kvp_file_path)
+
         self._event_types = event_types
         self.q = JQueue()
-        self.kvp_file = None
         self.incarnation_no = self._get_incarnation_no()
         self.event_key_prefix = u"{0}|{1}".format(self.EVENT_PREFIX,
                                                   self.incarnation_no)
-        self._current_offset = 0
         self.publish_thread = threading.Thread(
                 target=self._publish_event_routine)
         self.publish_thread.daemon = True
         self.publish_thread.start()
 
+    @classmethod
+    def _truncate_guest_pool_file(cls, kvp_file):
+        """
+        Truncate the pool file if it has not been truncated since boot.
+        This should be done exactly once for the file indicated by
+        KVP_POOL_FILE_GUEST constant above. This method takes a filename
+        so that we can use an arbitrary file during unit testing.
+        Since KVP is a best-effort telemetry channel we only attempt to
+        truncate the file once and only if the file has not been modified
+        since boot. Additional truncation can lead to loss of existing
+        KVPs.
+        """
+        if cls._already_truncated_pool_file:
+            return
+        boot_time = time.time() - float(util.uptime())
+        try:
+            if os.path.getmtime(kvp_file) < boot_time:
+                with open(kvp_file, "w"):
+                    pass
+        except (OSError, IOError) as e:
+            LOG.warning("failed to truncate kvp pool file, %s", e)
+        finally:
+            cls._already_truncated_pool_file = True
+
     def _get_incarnation_no(self):
         """
         use the time passed as the incarnation number.
@@ -162,20 +188,15 @@ class HyperVKvpReportingHandler(ReportingHandler):
 
     def _iterate_kvps(self, offset):
         """iterate the kvp file from the current offset."""
-        try:
-            with open(self._kvp_file_path, 'rb+') as f:
-                self.kvp_file = f
-                fcntl.flock(f, fcntl.LOCK_EX)
-                f.seek(offset)
-                record_data = f.read(self.HV_KVP_RECORD_SIZE)
-                while len(record_data) == self.HV_KVP_RECORD_SIZE:
-                    self._current_offset += self.HV_KVP_RECORD_SIZE
-                    kvp_item = self._decode_kvp_item(record_data)
-                    yield kvp_item
-                    record_data = f.read(self.HV_KVP_RECORD_SIZE)
-                fcntl.flock(f, fcntl.LOCK_UN)
-        finally:
-            self.kvp_file = None
+        with open(self._kvp_file_path, 'rb') as f:
+            fcntl.flock(f, fcntl.LOCK_EX)
+            f.seek(offset)
+            record_data = f.read(self.HV_KVP_RECORD_SIZE)
+            while len(record_data) == self.HV_KVP_RECORD_SIZE:
+                kvp_item = self._decode_kvp_item(record_data)
+                yield kvp_item
+                record_data = f.read(self.HV_KVP_RECORD_SIZE)
+            fcntl.flock(f, fcntl.LOCK_UN)
 
     def _event_key(self, event):
         """
@@ -207,23 +228,13 @@ class HyperVKvpReportingHandler(ReportingHandler):
 
         return {'key': k, 'value': v}
 
-    def _update_kvp_item(self, record_data):
-        if self.kvp_file is None:
-            raise ReportException(
-                "kvp file '{0}' not opened."
-                .format(self._kvp_file_path))
-        self.kvp_file.seek(-self.HV_KVP_RECORD_SIZE, 1)
-        self.kvp_file.write(record_data)
-
     def _append_kvp_item(self, record_data):
-        with open(self._kvp_file_path, 'rb+') as f:
+        with open(self._kvp_file_path, 'ab') as f:
             fcntl.flock(f, fcntl.LOCK_EX)
-            # seek to end of the file
-            f.seek(0, 2)
-            f.write(record_data)
+            for data in record_data:
+                f.write(data)
             f.flush()
             fcntl.flock(f, fcntl.LOCK_UN)
-            self._current_offset = f.tell()
 
     def _break_down(self, key, meta_data, description):
         del meta_data[self.MSG_KEY]
@@ -279,40 +290,26 @@ class HyperVKvpReportingHandler(ReportingHandler):
 
     def _publish_event_routine(self):
         while True:
+            items_from_queue = 0
             try:
                 event = self.q.get(block=True)
-                need_append = True
+                items_from_queue += 1
+                encoded_data = []
+                while event is not None:
+                    encoded_data += self._encode_event(event)
+                    try:
+                        # get all the rest of the events in the queue
+                        event = self.q.get(block=False)
+                        items_from_queue += 1
+                    except QueueEmptyError:
+                        event = None
                 try:
-                    if not os.path.exists(self._kvp_file_path):
-                        LOG.warning(
-                            "skip writing events %s to %s. file not present.",
-                            event.as_string(),
-                            self._kvp_file_path)
-                    encoded_event = self._encode_event(event)
-                    # for each encoded_event
-                    for encoded_data in (encoded_event):
-                        for kvp in self._iterate_kvps(self._current_offset):
-                            match = (
-                                re.match(
-                                    r"^{0}\|(\d+)\|.+"
-                                    .format(self.EVENT_PREFIX),
-                                    kvp['key']
-                                ))
-                            if match:
-                                match_groups = match.groups(0)
-                                if int(match_groups[0]) < self.incarnation_no:
-                                    need_append = False
-                                    self._update_kvp_item(encoded_data)
-                                    continue
-                    if need_append:
-                        self._append_kvp_item(encoded_data)
-                except IOError as e:
-                    LOG.warning(
-                        "failed posting event to kvp: %s e:%s",
-                        event.as_string(), e)
+                    self._append_kvp_item(encoded_data)
+                except (OSError, IOError) as e:
+                    LOG.warning("failed posting events to kvp, %s", e)
                 finally:
-                    self.q.task_done()
+                    for _ in range(items_from_queue):
+                        self.q.task_done()
 
             # when main process exits, q.get() will through EOFError
             # indicating we should exit this thread.
             except EOFError:
@@ -322,7 +319,7 @@ class HyperVKvpReportingHandler(ReportingHandler):
     # if the kvp pool already contains a chunk of data,
     # so defer it to another thread.
     def publish_event(self, event):
-        if (not self._event_types or event.event_type in self._event_types):
+        if not self._event_types or event.event_type in self._event_types:
             self.q.put(event)
 
     def flush(self):
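The new `_truncate_guest_pool_file` above truncates the KVP pool at most once per boot, and only when the file's mtime predates the boot (so a pool already written this boot is never wiped). The predicate can be sketched on its own, with the uptime and once-per-boot flag passed in explicitly (illustrative names, not cloud-init's API):

```python
import os
import time

def should_truncate(kvp_file, uptime_seconds, already_truncated):
    """Decide whether the KVP pool file should be truncated.
    Truncate only once per boot, and only if the file was last modified
    before this boot started (mtime older than now - uptime)."""
    if already_truncated:
        return False
    boot_time = time.time() - uptime_seconds
    try:
        # A missing or unreadable pool file is treated as nothing to do;
        # KVP is best-effort telemetry.
        return os.path.getmtime(kvp_file) < boot_time
    except OSError:
        return False
```

A file written after boot keeps its contents; a stale pool left over from the previous boot gets cleared exactly once.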
diff --git a/cloudinit/sources/DataSourceAzure.py b/cloudinit/sources/DataSourceAzure.py
index 76b1661..b7440c1 100755
--- a/cloudinit/sources/DataSourceAzure.py
+++ b/cloudinit/sources/DataSourceAzure.py
@@ -57,7 +57,12 @@ AZURE_CHASSIS_ASSET_TAG = '7783-7084-3265-9085-8269-3286-77'
 REPROVISION_MARKER_FILE = "/var/lib/cloud/data/poll_imds"
 REPORTED_READY_MARKER_FILE = "/var/lib/cloud/data/reported_ready"
 AGENT_SEED_DIR = '/var/lib/waagent'
+
+# In the event where the IMDS primary server is not
+# available, it takes 1s to fallback to the secondary one
+IMDS_TIMEOUT_IN_SECONDS = 2
 IMDS_URL = "http://169.254.169.254/metadata/"
+
 PLATFORM_ENTROPY_SOURCE = "/sys/firmware/acpi/tables/OEM0"
 
 # List of static scripts and network config artifacts created by
@@ -407,7 +412,7 @@ class DataSourceAzure(sources.DataSource):
         elif cdev.startswith("/dev/"):
             if util.is_FreeBSD():
                 ret = util.mount_cb(cdev, load_azure_ds_dir,
-                                    mtype="udf", sync=False)
+                                    mtype="udf")
             else:
                 ret = util.mount_cb(cdev, load_azure_ds_dir)
         else:
@@ -582,9 +587,9 @@ class DataSourceAzure(sources.DataSource):
                     return
                 self._ephemeral_dhcp_ctx.clean_network()
             else:
-                return readurl(url, timeout=1, headers=headers,
-                               exception_cb=exc_cb, infinite=True,
-                               log_req_resp=False).contents
+                return readurl(url, timeout=IMDS_TIMEOUT_IN_SECONDS,
+                               headers=headers, exception_cb=exc_cb,
+                               infinite=True, log_req_resp=False).contents
         except UrlError:
             # Teardown our EphemeralDHCPv4 context on failure as we retry
             self._ephemeral_dhcp_ctx.clean_network()
@@ -1291,8 +1296,8 @@ def _get_metadata_from_imds(retries):
     headers = {"Metadata": "true"}
     try:
         response = readurl(
-            url, timeout=1, headers=headers, retries=retries,
-            exception_cb=retry_on_url_exc)
+            url, timeout=IMDS_TIMEOUT_IN_SECONDS, headers=headers,
+            retries=retries, exception_cb=retry_on_url_exc)
     except Exception as e:
         LOG.debug('Ignoring IMDS instance metadata: %s', e)
         return {}
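The Azure change above raises the per-request IMDS timeout from 1s to 2s because, per the new comment, failover from the unavailable primary IMDS server to the secondary takes about 1s, so a 1s timeout could never see the fallback respond. The retry-with-timeout pattern can be sketched generically (hypothetical helper, not the cloud-init `readurl` API):

```python
def fetch_with_timeout(fetch, timeout_seconds=2, retries=3):
    """Call fetch(timeout_seconds) until it succeeds or retries run out.
    The 2s default leaves headroom for the ~1s primary-to-secondary
    IMDS failover; a 1s timeout would expire before the fallback answers."""
    last_exc = None
    for _ in range(retries + 1):
        try:
            return fetch(timeout_seconds)
        except IOError as e:
            last_exc = e
    raise last_exc
```

Each attempt gets the full timeout, so transient primary-server outages are absorbed by the retry loop instead of failing the metadata crawl.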
diff --git a/cloudinit/sources/DataSourceCloudStack.py b/cloudinit/sources/DataSourceCloudStack.py
index d4b758f..f185dc7 100644
--- a/cloudinit/sources/DataSourceCloudStack.py
+++ b/cloudinit/sources/DataSourceCloudStack.py
@@ -95,7 +95,7 @@ class DataSourceCloudStack(sources.DataSource):
         start_time = time.time()
         url = uhelp.wait_for_url(
             urls=urls, max_wait=url_params.max_wait_seconds,
-            timeout=url_params.timeout_seconds, status_cb=LOG.warn)
+            timeout=url_params.timeout_seconds, status_cb=LOG.warning)
 
         if url:
             LOG.debug("Using metadata source: '%s'", url)
diff --git a/cloudinit/sources/DataSourceConfigDrive.py b/cloudinit/sources/DataSourceConfigDrive.py
index 564e3eb..571d30d 100644
--- a/cloudinit/sources/DataSourceConfigDrive.py
+++ b/cloudinit/sources/DataSourceConfigDrive.py
@@ -72,15 +72,12 @@ class DataSourceConfigDrive(openstack.SourceMixin, sources.DataSource):
         dslist = self.sys_cfg.get('datasource_list')
         for dev in find_candidate_devs(dslist=dslist):
             try:
-                # Set mtype if freebsd and turn off sync
-                if dev.startswith("/dev/cd"):
+                if util.is_FreeBSD() and dev.startswith("/dev/cd"):
                     mtype = "cd9660"
-                    sync = False
                 else:
                     mtype = None
-                    sync = True
                 results = util.mount_cb(dev, read_config_drive,
-                                        mtype=mtype, sync=sync)
+                                        mtype=mtype)
                 found = dev
             except openstack.NonReadable:
                 pass
diff --git a/cloudinit/sources/DataSourceEc2.py b/cloudinit/sources/DataSourceEc2.py
index ac28f1d..5c017bf 100644
--- a/cloudinit/sources/DataSourceEc2.py
+++ b/cloudinit/sources/DataSourceEc2.py
@@ -208,7 +208,7 @@ class DataSourceEc2(sources.DataSource):
         start_time = time.time()
         url = uhelp.wait_for_url(
             urls=urls, max_wait=url_params.max_wait_seconds,
-            timeout=url_params.timeout_seconds, status_cb=LOG.warn)
+            timeout=url_params.timeout_seconds, status_cb=LOG.warning)

         if url:
             self.metadata_address = url2base[url]
diff --git a/cloudinit/sources/helpers/azure.py b/cloudinit/sources/helpers/azure.py
index d3af05e..82c4c8c 100755
--- a/cloudinit/sources/helpers/azure.py
+++ b/cloudinit/sources/helpers/azure.py
@@ -20,6 +20,9 @@ from cloudinit.reporting import events

 LOG = logging.getLogger(__name__)

+# This endpoint matches the format as found in dhcp lease files, since this
+# value is applied if the endpoint can't be found within a lease file
+DEFAULT_WIRESERVER_ENDPOINT = "a8:3f:81:10"

 azure_ds_reporter = events.ReportEventStack(
     name="azure-ds",
@@ -297,7 +300,12 @@ class WALinuxAgentShim(object):
     @azure_ds_telemetry_reporter
     def _get_value_from_leases_file(fallback_lease_file):
         leases = []
-        content = util.load_file(fallback_lease_file)
+        try:
+            content = util.load_file(fallback_lease_file)
+        except IOError as ex:
+            LOG.error("Failed to read %s: %s", fallback_lease_file, ex)
+            return None
+
         LOG.debug("content is %s", content)
         option_name = _get_dhcp_endpoint_option_name()
         for line in content.splitlines():
@@ -372,9 +380,9 @@ class WALinuxAgentShim(object):
                           fallback_lease_file)
                 value = WALinuxAgentShim._get_value_from_leases_file(
                     fallback_lease_file)
-
         if value is None:
-            raise ValueError('No endpoint found.')
+            LOG.warning("No lease found; using default endpoint")
+            value = DEFAULT_WIRESERVER_ENDPOINT

         endpoint_ip_address = WALinuxAgentShim.get_ip_from_lease_value(value)
         LOG.debug('Azure endpoint found at %s', endpoint_ip_address)
diff --git a/cloudinit/util.py b/cloudinit/util.py
index 385f231..ea4199c 100644
--- a/cloudinit/util.py
+++ b/cloudinit/util.py
@@ -1679,7 +1679,7 @@ def mounts():
     return mounted


-def mount_cb(device, callback, data=None, rw=False, mtype=None, sync=True,
+def mount_cb(device, callback, data=None, mtype=None,
              update_env_for_mount=None):
     """
     Mount the device, call method 'callback' passing the directory
@@ -1726,18 +1726,7 @@ def mount_cb(device, callback, data=None, rw=False, mtype=None, sync=True,
     for mtype in mtypes:
         mountpoint = None
         try:
-            mountcmd = ['mount']
-            mountopts = []
-            if rw:
-                mountopts.append('rw')
-            else:
-                mountopts.append('ro')
-            if sync:
-                # This seems like the safe approach to do
-                # (ie where this is on by default)
-                mountopts.append("sync")
-            if mountopts:
-                mountcmd.extend(["-o", ",".join(mountopts)])
+            mountcmd = ['mount', '-o', 'ro']
             if mtype:
                 mountcmd.extend(['-t', mtype])
             mountcmd.append(device)
diff --git a/cloudinit/version.py b/cloudinit/version.py
index a2c5d43..ddcd436 100644
--- a/cloudinit/version.py
+++ b/cloudinit/version.py
@@ -4,7 +4,7 @@
 #
 # This file is part of cloud-init. See LICENSE file for license information.

-__VERSION__ = "18.5"
+__VERSION__ = "19.1"
 _PACKAGED_VERSION = '@@PACKAGED_VERSION@@'

 FEATURES = [
diff --git a/debian/changelog b/debian/changelog
index 0630854..8379093 100644
--- a/debian/changelog
+++ b/debian/changelog
@@ -1,3 +1,30 @@
+cloud-init (19.1-1-gbaa47854-0ubuntu1~19.04.1) disco; urgency=medium
+
+  * New upstream snapshot.
+    - Azure: Return static fallback address as if failed to find endpoint
+      [Jason Zions (MSFT)]
+    - release 19.1 (LP: #1828479)
+    - freebsd: add chpasswd pkg in the image [Gonéri Le Bouder]
+    - tests: add Eoan release [Paride Legovini]
+    - cc_mounts: check if mount -a on no-change fstab path
+      [Jason Zions (MSFT)] (LP: #1825596)
+    - replace remaining occurrences of LOG.warn
+    - DataSourceAzure: Adjust timeout for polling IMDS [Anh Vo]
+    - Azure: Changes to the Hyper-V KVP Reporter [Anh Vo]
+    - git tests: no longer show warning about safe yaml. [Scott Moser]
+    - tools/read-version: handle errors [Chad Miller]
+    - net/sysconfig: only indicate available on known sysconfig distros
+      (LP: #1819994)
+    - packages: update rpm specs for new bash completion path (LP: #1825444)
+    - test_azure: mock util.SeLinuxGuard where needed
+      [Jason Zions (MSFT)] (LP: #1825253)
+    - setup.py: install bash completion script in new location
+    - mount_cb: do not pass sync and rw options to mount
+      [Gonéri Le Bouder] (LP: #1645824)
+    - cc_apt_configure: fix typo in apt documentation [Dominic Schlegel]
+
+ -- Chad Smith <chad.smith@canonical.com>  Fri, 10 May 2019 21:11:57 -0600
+
 cloud-init (18.5-62-g6322c2dd-0ubuntu1) disco; urgency=medium

   * New upstream snapshot.
diff --git a/packages/redhat/cloud-init.spec.in b/packages/redhat/cloud-init.spec.in
index 6b2022b..057a578 100644
--- a/packages/redhat/cloud-init.spec.in
+++ b/packages/redhat/cloud-init.spec.in
@@ -205,7 +205,9 @@ fi
 %dir %{_sysconfdir}/cloud/templates
 %config(noreplace) %{_sysconfdir}/cloud/templates/*
 %config(noreplace) %{_sysconfdir}/rsyslog.d/21-cloudinit.conf
-%{_sysconfdir}/bash_completion.d/cloud-init
+
+# Bash completion script
+%{_datadir}/bash-completion/completions/cloud-init

 %{_libexecdir}/%{name}
 %dir %{_sharedstatedir}/cloud
diff --git a/packages/suse/cloud-init.spec.in b/packages/suse/cloud-init.spec.in
index 26894b3..004b875 100644
--- a/packages/suse/cloud-init.spec.in
+++ b/packages/suse/cloud-init.spec.in
@@ -120,7 +120,9 @@ version_pys=$(cd "%{buildroot}" && find . -name version.py -type f)
 %config(noreplace) %{_sysconfdir}/cloud/cloud.cfg.d/README
 %dir %{_sysconfdir}/cloud/templates
 %config(noreplace) %{_sysconfdir}/cloud/templates/*
-%{_sysconfdir}/bash_completion.d/cloud-init
+
+# Bash completion script
+%{_datadir}/bash-completion/completions/cloud-init

 %{_sysconfdir}/dhcp/dhclient-exit-hooks.d/hook-dhclient
 %{_sysconfdir}/NetworkManager/dispatcher.d/hook-network-manager
diff --git a/setup.py b/setup.py
index 186e215..fcaf26f 100755
--- a/setup.py
+++ b/setup.py
@@ -245,13 +245,14 @@ if not in_virtualenv():
         INITSYS_ROOTS[k] = "/" + INITSYS_ROOTS[k]

 data_files = [
-    (ETC + '/bash_completion.d', ['bash_completion/cloud-init']),
     (ETC + '/cloud', [render_tmpl("config/cloud.cfg.tmpl")]),
     (ETC + '/cloud/cloud.cfg.d', glob('config/cloud.cfg.d/*')),
     (ETC + '/cloud/templates', glob('templates/*')),
     (USR_LIB_EXEC + '/cloud-init', ['tools/ds-identify',
                                     'tools/uncloud-init',
                                     'tools/write-ssh-key-fingerprints']),
+    (USR + '/share/bash-completion/completions',
+     ['bash_completion/cloud-init']),
     (USR + '/share/doc/cloud-init', [f for f in glob('doc/*') if is_f(f)]),
     (USR + '/share/doc/cloud-init/examples',
      [f for f in glob('doc/examples/*') if is_f(f)]),
diff --git a/tests/cloud_tests/releases.yaml b/tests/cloud_tests/releases.yaml
index ec5da72..924ad95 100644
--- a/tests/cloud_tests/releases.yaml
+++ b/tests/cloud_tests/releases.yaml
@@ -129,6 +129,22 @@ features:

 releases:
   # UBUNTU =================================================================
+  eoan:
+    # EOL: Jul 2020
+    default:
+      enabled: true
+      release: eoan
+      version: 19.10
+      os: ubuntu
+      feature_groups:
+        - base
+        - debian_base
+        - ubuntu_specific
+    lxd:
+      sstreams_server: https://cloud-images.ubuntu.com/daily
+      alias: eoan
+      setup_overrides: null
+      override_templates: false
   disco:
     # EOL: Jan 2020
     default:
diff --git a/tests/unittests/test_datasource/test_azure.py b/tests/unittests/test_datasource/test_azure.py
index 53c56cd..427ab7e 100644
--- a/tests/unittests/test_datasource/test_azure.py
+++ b/tests/unittests/test_datasource/test_azure.py
@@ -163,7 +163,8 @@ class TestGetMetadataFromIMDS(HttprettyTestCase):

         m_readurl.assert_called_with(
             self.network_md_url, exception_cb=mock.ANY,
-            headers={'Metadata': 'true'}, retries=2, timeout=1)
+            headers={'Metadata': 'true'}, retries=2,
+            timeout=dsaz.IMDS_TIMEOUT_IN_SECONDS)

     @mock.patch('cloudinit.url_helper.time.sleep')
     @mock.patch(MOCKPATH + 'net.is_up')
@@ -1375,12 +1376,15 @@ class TestCanDevBeReformatted(CiTestCase):
         self._domock(p + "util.mount_cb", 'm_mount_cb')
         self._domock(p + "os.path.realpath", 'm_realpath')
         self._domock(p + "os.path.exists", 'm_exists')
+        self._domock(p + "util.SeLinuxGuard", 'm_selguard')

         self.m_exists.side_effect = lambda p: p in bypath
         self.m_realpath.side_effect = realpath
         self.m_has_ntfs_filesystem.side_effect = has_ntfs_fs
         self.m_mount_cb.side_effect = mount_cb
         self.m_partitions_on_device.side_effect = partitions_on_device
+        self.m_selguard.__enter__ = mock.Mock(return_value=False)
+        self.m_selguard.__exit__ = mock.Mock()

     def test_three_partitions_is_false(self):
         """A disk with 3 partitions can not be formatted."""
@@ -1788,7 +1792,8 @@ class TestAzureDataSourcePreprovisioning(CiTestCase):
             headers={'Metadata': 'true',
                      'User-Agent':
                      'Cloud-Init/%s' % vs()
-                     }, method='GET', timeout=1,
+                     }, method='GET',
+                     timeout=dsaz.IMDS_TIMEOUT_IN_SECONDS,
                      url=full_url)])
         self.assertEqual(m_dhcp.call_count, 2)
         m_net.assert_any_call(
@@ -1825,7 +1830,9 @@ class TestAzureDataSourcePreprovisioning(CiTestCase):
             headers={'Metadata': 'true',
                      'User-Agent':
                      'Cloud-Init/%s' % vs()},
-            method='GET', timeout=1, url=full_url)])
+            method='GET',
+            timeout=dsaz.IMDS_TIMEOUT_IN_SECONDS,
+            url=full_url)])
         self.assertEqual(m_dhcp.call_count, 2)
         m_net.assert_any_call(
             broadcast='192.168.2.255', interface='eth9', ip='192.168.2.9',
diff --git a/tests/unittests/test_datasource/test_azure_helper.py b/tests/unittests/test_datasource/test_azure_helper.py
index 0255616..bd006ab 100644
--- a/tests/unittests/test_datasource/test_azure_helper.py
+++ b/tests/unittests/test_datasource/test_azure_helper.py
@@ -67,12 +67,17 @@ class TestFindEndpoint(CiTestCase):
         self.networkd_leases.return_value = None

     def test_missing_file(self):
-        self.assertRaises(ValueError, wa_shim.find_endpoint)
+        """wa_shim find_endpoint uses default endpoint if leasefile not found
+        """
+        self.assertEqual(wa_shim.find_endpoint(), "168.63.129.16")

     def test_missing_special_azure_line(self):
+        """wa_shim find_endpoint uses default endpoint if leasefile is found
+        but does not contain DHCP Option 245 (whose value is the endpoint)
+        """
         self.load_file.return_value = ''
         self.dhcp_options.return_value = {'eth0': {'key': 'value'}}
-        self.assertRaises(ValueError, wa_shim.find_endpoint)
+        self.assertEqual(wa_shim.find_endpoint(), "168.63.129.16")

     @staticmethod
     def _build_lease_content(encoded_address):
diff --git a/tests/unittests/test_handler/test_handler_mounts.py b/tests/unittests/test_handler/test_handler_mounts.py
index 8fea6c2..0fb160b 100644
--- a/tests/unittests/test_handler/test_handler_mounts.py
+++ b/tests/unittests/test_handler/test_handler_mounts.py
@@ -154,7 +154,15 @@ class TestFstabHandling(test_helpers.FilesystemMockingTestCase):
                        return_value=True)

         self.add_patch('cloudinit.config.cc_mounts.util.subp',
-                       'mock_util_subp')
+                       'm_util_subp')
+
+        self.add_patch('cloudinit.config.cc_mounts.util.mounts',
+                       'mock_util_mounts',
+                       return_value={
+                           '/dev/sda1': {'fstype': 'ext4',
+                                         'mountpoint': '/',
+                                         'opts': 'rw,relatime,discard'
+                                         }})

         self.mock_cloud = mock.Mock()
         self.mock_log = mock.Mock()
@@ -230,4 +238,24 @@ class TestFstabHandling(test_helpers.FilesystemMockingTestCase):
             fstab_new_content = fd.read()
         self.assertEqual(fstab_expected_content, fstab_new_content)

+    def test_no_change_fstab_sets_needs_mount_all(self):
+        '''verify unchanged fstab entries are mounted if not call mount -a'''
+        fstab_original_content = (
+            'LABEL=cloudimg-rootfs / ext4 defaults 0 0\n'
+            'LABEL=UEFI /boot/efi vfat defaults 0 0\n'
+            '/dev/vdb /mnt auto defaults,noexec,comment=cloudconfig 0 2\n'
+        )
+        fstab_expected_content = fstab_original_content
+        cc = {'mounts': [
+            ['/dev/vdb', '/mnt', 'auto', 'defaults,noexec']]}
+        with open(cc_mounts.FSTAB_PATH, 'w') as fd:
+            fd.write(fstab_original_content)
+        with open(cc_mounts.FSTAB_PATH, 'r') as fd:
+            fstab_new_content = fd.read()
+        self.assertEqual(fstab_expected_content, fstab_new_content)
+        cc_mounts.handle(None, cc, self.mock_cloud, self.mock_log, [])
+        self.m_util_subp.assert_has_calls([
+            mock.call(['mount', '-a']),
+            mock.call(['systemctl', 'daemon-reload'])])
+
 # vi: ts=4 expandtab
diff --git a/tests/unittests/test_net.py b/tests/unittests/test_net.py
index fd03deb..e85e964 100644
--- a/tests/unittests/test_net.py
+++ b/tests/unittests/test_net.py
@@ -9,6 +9,7 @@ from cloudinit.net import (
 from cloudinit.sources.helpers import openstack
 from cloudinit import temp_utils
 from cloudinit import util
+from cloudinit import safeyaml as yaml

 from cloudinit.tests.helpers import (
     CiTestCase, FilesystemMockingTestCase, dir2dict, mock, populate_dir)
@@ -21,7 +22,7 @@ import json
 import os
 import re
 import textwrap
-import yaml
+from yaml.serializer import Serializer


 DHCP_CONTENT_1 = """
@@ -3269,9 +3270,12 @@ class TestNetplanPostcommands(CiTestCase):
         mock_netplan_generate.assert_called_with(run=True)
         mock_net_setup_link.assert_called_with(run=True)

+    @mock.patch('cloudinit.util.SeLinuxGuard')
     @mock.patch.object(netplan, "get_devicelist")
     @mock.patch('cloudinit.util.subp')
-    def test_netplan_postcmds(self, mock_subp, mock_devlist):
+    def test_netplan_postcmds(self, mock_subp, mock_devlist, mock_sel):
+        mock_sel.__enter__ = mock.Mock(return_value=False)
+        mock_sel.__exit__ = mock.Mock()
         mock_devlist.side_effect = [['lo']]
         tmp_dir = self.tmp_dir()
         ns = network_state.parse_net_config_data(self.mycfg,
@@ -3572,7 +3576,7 @@ class TestNetplanRoundTrip(CiTestCase):
         # now look for any alias, avoid rendering them entirely
         # generate the first anchor string using the template
         # as of this writing, looks like "&id001"
-        anchor = r'&' + yaml.serializer.Serializer.ANCHOR_TEMPLATE % 1
+        anchor = r'&' + Serializer.ANCHOR_TEMPLATE % 1
         found_alias = re.search(anchor, content, re.MULTILINE)
         if found_alias:
             msg = "Error at: %s\nContent:\n%s" % (found_alias, content)
@@ -3826,6 +3830,41 @@ class TestNetRenderers(CiTestCase):
         self.assertRaises(net.RendererNotFoundError, renderers.select,
                           priority=['sysconfig', 'eni'])

+    @mock.patch("cloudinit.net.renderers.netplan.available")
+    @mock.patch("cloudinit.net.renderers.sysconfig.available_sysconfig")
+    @mock.patch("cloudinit.net.renderers.sysconfig.available_nm")
+    @mock.patch("cloudinit.net.renderers.eni.available")
+    @mock.patch("cloudinit.net.renderers.sysconfig.util.get_linux_distro")
+    def test_sysconfig_selected_on_sysconfig_enabled_distros(self, m_distro,
+                                                             m_eni, m_sys_nm,
+                                                             m_sys_scfg,
+                                                             m_netplan):
+        """sysconfig only selected on specific distros (rhel/sles)."""
+
+        # Ubuntu with Network-Manager installed
+        m_eni.return_value = False       # no ifupdown (ifquery)
+        m_sys_scfg.return_value = False  # no sysconfig/ifup/ifdown
+        m_sys_nm.return_value = True     # network-manager is installed
+        m_netplan.return_value = True    # netplan is installed
+        m_distro.return_value = ('ubuntu', None, None)
+        self.assertEqual('netplan', renderers.select(priority=None)[0])
+
+        # Centos with Network-Manager installed
+        m_eni.return_value = False       # no ifupdown (ifquery)
+        m_sys_scfg.return_value = False  # no sysconfig/ifup/ifdown
+        m_sys_nm.return_value = True     # network-manager is installed
+        m_netplan.return_value = False   # netplan is not installed
+        m_distro.return_value = ('centos', None, None)
+        self.assertEqual('sysconfig', renderers.select(priority=None)[0])
+
+        # OpenSuse with Network-Manager installed
+        m_eni.return_value = False       # no ifupdown (ifquery)
+        m_sys_scfg.return_value = False  # no sysconfig/ifup/ifdown
+        m_sys_nm.return_value = True     # network-manager is installed
+        m_netplan.return_value = False   # netplan is not installed
+        m_distro.return_value = ('opensuse', None, None)
+        self.assertEqual('sysconfig', renderers.select(priority=None)[0])
+

 class TestGetInterfaces(CiTestCase):
     _data = {'bonds': ['bond1'],
930 | 3 | from cloudinit.reporting import events | 3 | from cloudinit.reporting import events |
932 | 4 | from cloudinit.reporting import handlers | 4 | from cloudinit.reporting.handlers import HyperVKvpReportingHandler |
933 | 5 | 5 | ||
934 | 6 | import json | 6 | import json |
935 | 7 | import os | 7 | import os |
936 | 8 | import struct | ||
937 | 9 | import time | ||
938 | 8 | 10 | ||
939 | 9 | from cloudinit import util | 11 | from cloudinit import util |
940 | 10 | from cloudinit.tests.helpers import CiTestCase | 12 | from cloudinit.tests.helpers import CiTestCase |
@@ -13,7 +15,7 @@ from cloudinit.tests.helpers import CiTestCase
 class TestKvpEncoding(CiTestCase):
     def test_encode_decode(self):
         kvp = {'key': 'key1', 'value': 'value1'}
-        kvp_reporting = handlers.HyperVKvpReportingHandler()
+        kvp_reporting = HyperVKvpReportingHandler()
         data = kvp_reporting._encode_kvp_item(kvp['key'], kvp['value'])
         self.assertEqual(len(data), kvp_reporting.HV_KVP_RECORD_SIZE)
         decoded_kvp = kvp_reporting._decode_kvp_item(data)
@@ -26,57 +28,9 @@ class TextKvpReporter(CiTestCase):
         self.tmp_file_path = self.tmp_path('kvp_pool_file')
         util.ensure_file(self.tmp_file_path)
 
-    def test_event_type_can_be_filtered(self):
-        reporter = handlers.HyperVKvpReportingHandler(
-            kvp_file_path=self.tmp_file_path,
-            event_types=['foo', 'bar'])
-
-        reporter.publish_event(
-            events.ReportingEvent('foo', 'name', 'description'))
-        reporter.publish_event(
-            events.ReportingEvent('some_other', 'name', 'description3'))
-        reporter.q.join()
-
-        kvps = list(reporter._iterate_kvps(0))
-        self.assertEqual(1, len(kvps))
-
-        reporter.publish_event(
-            events.ReportingEvent('bar', 'name', 'description2'))
-        reporter.q.join()
-        kvps = list(reporter._iterate_kvps(0))
-        self.assertEqual(2, len(kvps))
-
-        self.assertIn('foo', kvps[0]['key'])
-        self.assertIn('bar', kvps[1]['key'])
-        self.assertNotIn('some_other', kvps[0]['key'])
-        self.assertNotIn('some_other', kvps[1]['key'])
-
-    def test_events_are_over_written(self):
-        reporter = handlers.HyperVKvpReportingHandler(
-            kvp_file_path=self.tmp_file_path)
-
-        self.assertEqual(0, len(list(reporter._iterate_kvps(0))))
-
-        reporter.publish_event(
-            events.ReportingEvent('foo', 'name1', 'description'))
-        reporter.publish_event(
-            events.ReportingEvent('foo', 'name2', 'description'))
-        reporter.q.join()
-        self.assertEqual(2, len(list(reporter._iterate_kvps(0))))
-
-        reporter2 = handlers.HyperVKvpReportingHandler(
-            kvp_file_path=self.tmp_file_path)
-        reporter2.incarnation_no = reporter.incarnation_no + 1
-        reporter2.publish_event(
-            events.ReportingEvent('foo', 'name3', 'description'))
-        reporter2.q.join()
-
-        self.assertEqual(2, len(list(reporter2._iterate_kvps(0))))
-
     def test_events_with_higher_incarnation_not_over_written(self):
-        reporter = handlers.HyperVKvpReportingHandler(
+        reporter = HyperVKvpReportingHandler(
             kvp_file_path=self.tmp_file_path)
-
         self.assertEqual(0, len(list(reporter._iterate_kvps(0))))
 
         reporter.publish_event(
@@ -86,7 +40,7 @@ class TextKvpReporter(CiTestCase):
         reporter.q.join()
         self.assertEqual(2, len(list(reporter._iterate_kvps(0))))
 
-        reporter3 = handlers.HyperVKvpReportingHandler(
+        reporter3 = HyperVKvpReportingHandler(
             kvp_file_path=self.tmp_file_path)
         reporter3.incarnation_no = reporter.incarnation_no - 1
         reporter3.publish_event(
@@ -95,7 +49,7 @@ class TextKvpReporter(CiTestCase):
         self.assertEqual(3, len(list(reporter3._iterate_kvps(0))))
 
     def test_finish_event_result_is_logged(self):
-        reporter = handlers.HyperVKvpReportingHandler(
+        reporter = HyperVKvpReportingHandler(
             kvp_file_path=self.tmp_file_path)
         reporter.publish_event(
             events.FinishReportingEvent('name2', 'description1',
@@ -105,7 +59,7 @@ class TextKvpReporter(CiTestCase):
 
     def test_file_operation_issue(self):
         os.remove(self.tmp_file_path)
-        reporter = handlers.HyperVKvpReportingHandler(
+        reporter = HyperVKvpReportingHandler(
             kvp_file_path=self.tmp_file_path)
         reporter.publish_event(
             events.FinishReportingEvent('name2', 'description1',
@@ -113,7 +67,7 @@ class TextKvpReporter(CiTestCase):
         reporter.q.join()
 
     def test_event_very_long(self):
-        reporter = handlers.HyperVKvpReportingHandler(
+        reporter = HyperVKvpReportingHandler(
             kvp_file_path=self.tmp_file_path)
         description = 'ab' * reporter.HV_KVP_EXCHANGE_MAX_VALUE_SIZE
         long_event = events.FinishReportingEvent(
@@ -132,3 +86,43 @@ class TextKvpReporter(CiTestCase):
             self.assertEqual(msg_slice['msg_i'], i)
             full_description += msg_slice['msg']
         self.assertEqual(description, full_description)
+
+    def test_not_truncate_kvp_file_modified_after_boot(self):
+        with open(self.tmp_file_path, "wb+") as f:
+            kvp = {'key': 'key1', 'value': 'value1'}
+            data = (struct.pack("%ds%ds" % (
+                HyperVKvpReportingHandler.HV_KVP_EXCHANGE_MAX_KEY_SIZE,
+                HyperVKvpReportingHandler.HV_KVP_EXCHANGE_MAX_VALUE_SIZE),
+                kvp['key'].encode('utf-8'), kvp['value'].encode('utf-8')))
+            f.write(data)
+        cur_time = time.time()
+        os.utime(self.tmp_file_path, (cur_time, cur_time))
+
+        # reset this because the unit test framework
+        # has already polluted the class variable
+        HyperVKvpReportingHandler._already_truncated_pool_file = False
+
+        reporter = HyperVKvpReportingHandler(kvp_file_path=self.tmp_file_path)
+        kvps = list(reporter._iterate_kvps(0))
+        self.assertEqual(1, len(kvps))
+
+    def test_truncate_stale_kvp_file(self):
+        with open(self.tmp_file_path, "wb+") as f:
+            kvp = {'key': 'key1', 'value': 'value1'}
+            data = (struct.pack("%ds%ds" % (
+                HyperVKvpReportingHandler.HV_KVP_EXCHANGE_MAX_KEY_SIZE,
+                HyperVKvpReportingHandler.HV_KVP_EXCHANGE_MAX_VALUE_SIZE),
+                kvp['key'].encode('utf-8'), kvp['value'].encode('utf-8')))
+            f.write(data)
+
+        # set the time ways back to make it look like
+        # we had an old kvp file
+        os.utime(self.tmp_file_path, (1000000, 1000000))
+
+        # reset this because the unit test framework
+        # has already polluted the class variable
+        HyperVKvpReportingHandler._already_truncated_pool_file = False
+
+        reporter = HyperVKvpReportingHandler(kvp_file_path=self.tmp_file_path)
+        kvps = list(reporter._iterate_kvps(0))
+        self.assertEqual(0, len(kvps))
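
The two new tests above hand-build a KVP record with struct.pack before handing the pool file to the handler. As a minimal, self-contained sketch of that fixed-size record layout (the 512-byte key / 2048-byte value sizes mirror the kernel's HV_KVP_EXCHANGE_MAX_* limits used by the handler; the helper names here are illustrative, not cloud-init's API):

```python
import struct

# Fixed sizes of one Hyper-V KVP pool record: a NUL-padded key field
# followed by a NUL-padded value field (values assumed to match the
# HV_KVP_EXCHANGE_MAX_* constants referenced in the diff above).
KEY_SIZE = 512
VALUE_SIZE = 2048
RECORD_FMT = "%ds%ds" % (KEY_SIZE, VALUE_SIZE)


def encode_kvp(key, value):
    """Pack a key/value pair into one fixed-size record.

    struct.pack's 's' format right-pads short strings with NUL bytes,
    so every record is exactly KEY_SIZE + VALUE_SIZE bytes long.
    """
    return struct.pack(RECORD_FMT, key.encode("utf-8"), value.encode("utf-8"))


def decode_kvp(record):
    """Unpack a record, stripping the NUL padding added by encode_kvp."""
    key, value = struct.unpack(RECORD_FMT, record)
    return {"key": key.rstrip(b"\x00").decode("utf-8"),
            "value": value.rstrip(b"\x00").decode("utf-8")}


record = encode_kvp("key1", "value1")
assert len(record) == KEY_SIZE + VALUE_SIZE
assert decode_kvp(record) == {"key": "key1", "value": "value1"}
```

Because every record has the same length, a reader can seek to record N at byte offset N * (KEY_SIZE + VALUE_SIZE), which is what makes iterating and truncating the pool file cheap.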
1089 | diff --git a/tools/build-on-freebsd b/tools/build-on-freebsd | |||
1090 | index d23fde2..dc3b974 100755 | |||
1091 | --- a/tools/build-on-freebsd | |||
1092 | +++ b/tools/build-on-freebsd | |||
1093 | @@ -9,6 +9,7 @@ fail() { echo "FAILED:" "$@" 1>&2; exit 1; } | |||
 depschecked=/tmp/c-i.dependencieschecked
 pkgs="
    bash
+   chpasswd
    dmidecode
    e2fsprogs
    py27-Jinja2
@@ -17,6 +18,7 @@ pkgs="
    py27-configobj
    py27-jsonpatch
    py27-jsonpointer
+   py27-jsonschema
    py27-oauthlib
    py27-requests
    py27-serial
@@ -28,12 +30,9 @@ pkgs="
 [ -f "$depschecked" ] || pkg install ${pkgs} || fail "install packages"
 touch $depschecked
 
-# Required but unavailable port/pkg: py27-jsonpatch py27-jsonpointer
-# Luckily, the install step will take care of this by installing it from pypi...
-
 # Build the code and install in /usr/local/:
-python setup.py build
-python setup.py install -O1 --skip-build --prefix /usr/local/ --init-system sysvinit_freebsd
+python2.7 setup.py build
+python2.7 setup.py install -O1 --skip-build --prefix /usr/local/ --init-system sysvinit_freebsd
 
 # Enable cloud-init in /etc/rc.conf:
 sed -i.bak -e "/cloudinit_enable=.*/d" /etc/rc.conf
1124 | diff --git a/tools/read-version b/tools/read-version | |||
1125 | index e69c2ce..6dca659 100755 | |||
1126 | --- a/tools/read-version | |||
1127 | +++ b/tools/read-version | |||
1128 | @@ -71,9 +71,12 @@ if is_gitdir(_tdir) and which("git"): | |||
     flags = ['--tags']
     cmd = ['git', 'describe', '--abbrev=8', '--match=[0-9]*'] + flags
 
-    version = tiny_p(cmd).strip()
+    try:
+        version = tiny_p(cmd).strip()
+    except RuntimeError:
+        version = None
 
-    if not version.startswith(src_version):
+    if version is None or not version.startswith(src_version):
         sys.stderr.write("git describe version (%s) differs from "
                          "cloudinit.version (%s)\n" % (version, src_version))
         sys.stderr.write(
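
The read-version change above can be illustrated in isolation: a failing `git describe` (for example in a shallow clone with no reachable tags) now yields None instead of crashing the script. The sketch below assumes a tiny_p-style helper that raises RuntimeError on a nonzero exit, as cloud-init's does; the describe_version wrapper is hypothetical and simplifies the real script by returning the in-tree version on mismatch rather than writing diagnostics to stderr:

```python
import subprocess


def tiny_p(cmd):
    """Stand-in for cloud-init's tiny_p helper: run cmd and return its
    stdout, raising RuntimeError on a nonzero exit status."""
    proc = subprocess.Popen(cmd, stdout=subprocess.PIPE,
                            stderr=subprocess.PIPE)
    out, _err = proc.communicate()
    if proc.returncode != 0:
        raise RuntimeError("command failed: %s" % cmd)
    return out.decode("utf-8")


def describe_version(src_version):
    """Prefer `git describe` output, falling back to src_version."""
    try:
        version = tiny_p(["git", "describe", "--abbrev=8",
                          "--match=[0-9]*", "--tags"]).strip()
    except (RuntimeError, OSError):
        # git failed or is not installed: no tag-derived version.
        version = None
    if version is None or not version.startswith(src_version):
        # Mirror the patched guard: never call .startswith on None.
        return src_version
    return version
```

The key point of the patch is the guard ordering: `version is None or not version.startswith(...)` short-circuits, so the None produced by the new try/except can never reach the string method.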
PASSED: Continuous integration, rev:59835f9211e70101bde10c63c7e84cc6a74c930c
https://jenkins.ubuntu.com/server/job/cloud-init-ci/720/
Executed test runs:
    SUCCESS: Checkout
    SUCCESS: Unit & Style Tests
    SUCCESS: Ubuntu LTS: Build
    SUCCESS: Ubuntu LTS: Integration
    IN_PROGRESS: Declarative: Post Actions
Click here to trigger a rebuild:
https://jenkins.ubuntu.com/server/job/cloud-init-ci/720/rebuild