Merge ~chad.smith/cloud-init:ubuntu/bionic into cloud-init:ubuntu/bionic
Status: Merged
Merged at revision: d17cef57054aa76f14c3f1d63cd16b4e702939eb
Proposed branch: ~chad.smith/cloud-init:ubuntu/bionic
Merge into: cloud-init:ubuntu/bionic
Diff against target: 1095 lines (+657/-91), 20 files modified
  cloudinit/config/cc_bootcmd.py (+7/-1)
  cloudinit/config/cc_runcmd.py (+5/-0)
  cloudinit/config/cc_write_files.py (+6/-1)
  cloudinit/event.py (+17/-0)
  cloudinit/gpg.py (+42/-10)
  cloudinit/sources/__init__.py (+77/-1)
  cloudinit/sources/tests/test_init.py (+82/-1)
  cloudinit/stages.py (+10/-4)
  cloudinit/tests/test_gpg.py (+54/-0)
  cloudinit/tests/test_stages.py (+231/-0)
  cloudinit/tests/test_util.py (+68/-1)
  cloudinit/util.py (+18/-10)
  debian/changelog (+17/-0)
  dev/null (+0/-49)
  doc/examples/cloud-config-run-cmds.txt (+4/-1)
  doc/examples/cloud-config.txt (+4/-1)
  doc/rtd/topics/format.rst (+1/-1)
  integration-requirements.txt (+1/-1)
  tests/unittests/test_datasource/test_azure_helper.py (+3/-1)
  tools/run-container (+10/-8)
Related bugs:
Reviewer | Review Type | Status
---|---|---
Server Team CI bot | continuous-integration | Approve
Scott Moser | | Pending

Review via email: mp+349212@code.launchpad.net
Commit message
new upstream snapshot to pull in SRU blocker bug LP: #1780481 into bionic for release.
Description of the change
Server Team CI bot (server-team-bot) wrote:
Scott Moser (smoser) wrote:
I think what we need to do here is new-upstream-
but that should result in a new debian/changelog entry.
It seems that you just removed the entry that got uploaded to -proposed.
I think it makes sense to have one like below..
The point of interest being that we mark bug 1780481 fixed.
I just ran 'new-upstream-
cloud-init (18.3-9-
* New upstream snapshot.
- docs: note in rtd about avoiding /tmp when writing files
- ubuntu,
(LP: #1780481)
- Fix boothook docs on environment variable name (INSTANCE_I ->
INSTANCE_ID) [Marc Tamsky]
- update_metadata: a datasource can support network re-config every boot
- tests: drop salt-minion integration test
- Retry on failed import of gpg receive keys.
- tools: Fix run-container when neither source or binary package requested.
- docs: Fix a small spelling error. [Oz N Tiram]
- tox: use simplestreams from git repository rather than bzr.
Chad Smith (chad.smith) wrote:
I mistakenly thought we consolidated all unreleased content, but since we've already published to -proposed, I guess this counts as 'released' and should be a different changelog entry.
Server Team CI bot (server-team-bot) wrote:
PASSED: Continuous integration, rev:fef8e403ba0
https:/
Executed test runs:
SUCCESS: Checkout
SUCCESS: Unit & Style Tests
SUCCESS: Ubuntu LTS: Build
SUCCESS: Ubuntu LTS: Integration
SUCCESS: MAAS Compatability Testing
IN_PROGRESS: Declarative: Post Actions
Click here to trigger a rebuild:
https:/
Server Team CI bot (server-team-bot) wrote:
PASSED: Continuous integration, rev:faacc49dce6
https:/
Executed test runs:
SUCCESS: Checkout
SUCCESS: Unit & Style Tests
SUCCESS: Ubuntu LTS: Build
SUCCESS: Ubuntu LTS: Integration
SUCCESS: MAAS Compatability Testing
IN_PROGRESS: Declarative: Post Actions
Click here to trigger a rebuild:
https:/
Server Team CI bot (server-team-bot) wrote:
PASSED: Continuous integration, rev:d17cef57054
https:/
Executed test runs:
SUCCESS: Checkout
SUCCESS: Unit & Style Tests
SUCCESS: Ubuntu LTS: Build
SUCCESS: Ubuntu LTS: Integration
SUCCESS: MAAS Compatability Testing
IN_PROGRESS: Declarative: Post Actions
Click here to trigger a rebuild:
https:/
Preview Diff
1 | diff --git a/cloudinit/config/cc_bootcmd.py b/cloudinit/config/cc_bootcmd.py |
2 | index db64f0a..6813f53 100644 |
3 | --- a/cloudinit/config/cc_bootcmd.py |
4 | +++ b/cloudinit/config/cc_bootcmd.py |
5 | @@ -42,7 +42,13 @@ schema = { |
6 | |
7 | .. note:: |
8 | bootcmd should only be used for things that could not be done later |
9 | - in the boot process."""), |
10 | + in the boot process. |
11 | + |
12 | + .. note:: |
13 | + |
14 | + when writing files, do not use /tmp dir as it races with |
15 | + systemd-tmpfiles-clean LP: #1707222. Use /run/somedir instead. |
16 | + """), |
17 | 'distros': distros, |
18 | 'examples': [dedent("""\ |
19 | bootcmd: |
20 | diff --git a/cloudinit/config/cc_runcmd.py b/cloudinit/config/cc_runcmd.py |
21 | index b6f6c80..1f75d6c 100644 |
22 | --- a/cloudinit/config/cc_runcmd.py |
23 | +++ b/cloudinit/config/cc_runcmd.py |
24 | @@ -42,6 +42,11 @@ schema = { |
25 | |
26 | all commands must be proper yaml, so you have to quote any characters |
27 | yaml would eat (':' can be problematic) |
28 | + |
29 | + .. note:: |
30 | + |
31 | + when writing files, do not use /tmp dir as it races with |
32 | + systemd-tmpfiles-clean LP: #1707222. Use /run/somedir instead. |
33 | """), |
34 | 'distros': distros, |
35 | 'examples': [dedent("""\ |
36 | diff --git a/cloudinit/config/cc_write_files.py b/cloudinit/config/cc_write_files.py |
37 | index 54ae3a6..31d1db6 100644 |
38 | --- a/cloudinit/config/cc_write_files.py |
39 | +++ b/cloudinit/config/cc_write_files.py |
40 | @@ -15,9 +15,14 @@ binary gzip data can be specified and will be decoded before being written. |
41 | |
42 | .. note:: |
43 | if multiline data is provided, care should be taken to ensure that it |
44 | - follows yaml formatting standargs. to specify binary data, use the yaml |
45 | + follows yaml formatting standards. to specify binary data, use the yaml |
46 | option ``!!binary`` |
47 | |
48 | +.. note:: |
49 | + Do not write files under /tmp during boot because of a race with |
50 | + systemd-tmpfiles-clean that can cause temp files to get cleaned during |
51 | + the early boot process. Use /run/somedir instead to avoid race LP:1707222. |
52 | + |
53 | **Internal name:** ``cc_write_files`` |
54 | |
55 | **Module frequency:** per instance |
56 | diff --git a/cloudinit/event.py b/cloudinit/event.py |
57 | new file mode 100644 |
58 | index 0000000..f7b311f |
59 | --- /dev/null |
60 | +++ b/cloudinit/event.py |
61 | @@ -0,0 +1,17 @@ |
62 | +# This file is part of cloud-init. See LICENSE file for license information. |
63 | + |
64 | +"""Classes and functions related to event handling.""" |
65 | + |
66 | + |
67 | +# Event types which can generate maintenance requests for cloud-init. |
68 | +class EventType(object): |
69 | + BOOT = "System boot" |
70 | + BOOT_NEW_INSTANCE = "New instance first boot" |
71 | + |
72 | + # TODO: Cloud-init will grow support for the follow event types: |
73 | + # UDEV |
74 | + # METADATA_CHANGE |
75 | + # USER_REQUEST |
76 | + |
77 | + |
78 | +# vi: ts=4 expandtab |
79 | diff --git a/cloudinit/gpg.py b/cloudinit/gpg.py |
80 | index d58d73e..7fe17a2 100644 |
81 | --- a/cloudinit/gpg.py |
82 | +++ b/cloudinit/gpg.py |
83 | @@ -10,6 +10,8 @@ |
84 | from cloudinit import log as logging |
85 | from cloudinit import util |
86 | |
87 | +import time |
88 | + |
89 | LOG = logging.getLogger(__name__) |
90 | |
91 | |
92 | @@ -25,16 +27,46 @@ def export_armour(key): |
93 | return armour |
94 | |
95 | |
96 | -def recv_key(key, keyserver): |
97 | - """Receive gpg key from the specified keyserver""" |
98 | - LOG.debug('Receive gpg key "%s"', key) |
99 | - try: |
100 | - util.subp(["gpg", "--keyserver", keyserver, "--recv", key], |
101 | - capture=True) |
102 | - except util.ProcessExecutionError as error: |
103 | - raise ValueError(('Failed to import key "%s" ' |
104 | - 'from server "%s" - error %s') % |
105 | - (key, keyserver, error)) |
106 | +def recv_key(key, keyserver, retries=(1, 1)): |
107 | + """Receive gpg key from the specified keyserver. |
108 | + |
109 | + Retries are done by default because keyservers can be unreliable. |
110 | + Additionally, there is no way to determine the difference between |
111 | + a non-existant key and a failure. In both cases gpg (at least 2.2.4) |
112 | + exits with status 2 and stderr: "keyserver receive failed: No data" |
113 | + It is assumed that a key provided to cloud-init exists on the keyserver |
114 | + so re-trying makes better sense than failing. |
115 | + |
116 | + @param key: a string key fingerprint (as passed to gpg --recv-keys). |
117 | + @param keyserver: the keyserver to request keys from. |
118 | + @param retries: an iterable of sleep lengths for retries. |
119 | + Use None to indicate no retries.""" |
120 | + LOG.debug("Importing key '%s' from keyserver '%s'", key, keyserver) |
121 | + cmd = ["gpg", "--keyserver=%s" % keyserver, "--recv-keys", key] |
122 | + if retries is None: |
123 | + retries = [] |
124 | + trynum = 0 |
125 | + error = None |
126 | + sleeps = iter(retries) |
127 | + while True: |
128 | + trynum += 1 |
129 | + try: |
130 | + util.subp(cmd, capture=True) |
131 | + LOG.debug("Imported key '%s' from keyserver '%s' on try %d", |
132 | + key, keyserver, trynum) |
133 | + return |
134 | + except util.ProcessExecutionError as e: |
135 | + error = e |
136 | + try: |
137 | + naplen = next(sleeps) |
138 | + LOG.debug( |
139 | + "Import failed with exit code %d, will try again in %ss", |
140 | + error.exit_code, naplen) |
141 | + time.sleep(naplen) |
142 | + except StopIteration: |
143 | + raise ValueError( |
144 | + ("Failed to import key '%s' from keyserver '%s' " |
145 | + "after %d tries: %s") % (key, keyserver, trynum, error)) |
146 | |
147 | |
148 | def delete_key(key): |
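The recv_key rewrite above retries by consuming an iterable of sleep lengths, one per retry, so the caller controls both the number of attempts and the backoff schedule. A minimal standalone sketch of that pattern follows; the function name and generic exception handling are illustrative, not cloud-init's API:

```python
import time


def call_with_retries(func, sleeps=(1, 1)):
    """Call func() until it succeeds, sleeping between failed attempts.

    sleeps is an iterable of sleep lengths, one per retry; an empty
    iterable means a single attempt. Raises ValueError once the final
    attempt has failed, mirroring the recv_key loop above.
    """
    naps = iter(sleeps)
    attempt = 0
    while True:
        attempt += 1
        try:
            return func()
        except Exception as error:
            try:
                naplen = next(naps)
            except StopIteration:
                raise ValueError(
                    "failed after %d tries: %s" % (attempt, error))
            time.sleep(naplen)
```

With sleeps=(1, 2, 4), as in the tests below, the call is attempted up to four times with growing pauses in between.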
149 | diff --git a/cloudinit/sources/__init__.py b/cloudinit/sources/__init__.py |
150 | index 90d7457..f424316 100644 |
151 | --- a/cloudinit/sources/__init__.py |
152 | +++ b/cloudinit/sources/__init__.py |
153 | @@ -19,6 +19,7 @@ from cloudinit.atomic_helper import write_json |
154 | from cloudinit import importer |
155 | from cloudinit import log as logging |
156 | from cloudinit import net |
157 | +from cloudinit.event import EventType |
158 | from cloudinit import type_utils |
159 | from cloudinit import user_data as ud |
160 | from cloudinit import util |
161 | @@ -102,6 +103,25 @@ class DataSource(object): |
162 | url_timeout = 10 # timeout for each metadata url read attempt |
163 | url_retries = 5 # number of times to retry url upon 404 |
164 | |
165 | + # The datasource defines a list of supported EventTypes during which |
166 | + # the datasource can react to changes in metadata and regenerate |
167 | + # network configuration on metadata changes. |
168 | + # A datasource which supports writing network config on each system boot |
169 | + # would set update_events = {'network': [EventType.BOOT]} |
170 | + |
171 | + # Default: generate network config on new instance id (first boot). |
172 | + update_events = {'network': [EventType.BOOT_NEW_INSTANCE]} |
173 | + |
174 | + # N-tuple listing default values for any metadata-related class |
175 | + # attributes cached on an instance by a process_data runs. These attribute |
176 | + # values are reset via clear_cached_attrs during any update_metadata call. |
177 | + cached_attr_defaults = ( |
178 | + ('ec2_metadata', UNSET), ('network_json', UNSET), |
179 | + ('metadata', {}), ('userdata', None), ('userdata_raw', None), |
180 | + ('vendordata', None), ('vendordata_raw', None)) |
181 | + |
182 | + _dirty_cache = False |
183 | + |
184 | def __init__(self, sys_cfg, distro, paths, ud_proc=None): |
185 | self.sys_cfg = sys_cfg |
186 | self.distro = distro |
187 | @@ -134,11 +154,31 @@ class DataSource(object): |
188 | 'region': self.region, |
189 | 'availability-zone': self.availability_zone}} |
190 | |
191 | + def clear_cached_attrs(self, attr_defaults=()): |
192 | + """Reset any cached metadata attributes to datasource defaults. |
193 | + |
194 | + @param attr_defaults: Optional tuple of (attr, value) pairs to |
195 | + set instead of cached_attr_defaults. |
196 | + """ |
197 | + if not self._dirty_cache: |
198 | + return |
199 | + if attr_defaults: |
200 | + attr_values = attr_defaults |
201 | + else: |
202 | + attr_values = self.cached_attr_defaults |
203 | + |
204 | + for attribute, value in attr_values: |
205 | + if hasattr(self, attribute): |
206 | + setattr(self, attribute, value) |
207 | + if not attr_defaults: |
208 | + self._dirty_cache = False |
209 | + |
210 | def get_data(self): |
211 | """Datasources implement _get_data to setup metadata and userdata_raw. |
212 | |
213 | Minimally, the datasource should return a boolean True on success. |
214 | """ |
215 | + self._dirty_cache = True |
216 | return_value = self._get_data() |
217 | json_file = os.path.join(self.paths.run_dir, INSTANCE_JSON_FILE) |
218 | if not return_value: |
219 | @@ -174,6 +214,7 @@ class DataSource(object): |
220 | return return_value |
221 | |
222 | def _get_data(self): |
223 | + """Walk metadata sources, process crawled data and save attributes.""" |
224 | raise NotImplementedError( |
225 | 'Subclasses of DataSource must implement _get_data which' |
226 | ' sets self.metadata, vendordata_raw and userdata_raw.') |
227 | @@ -416,6 +457,41 @@ class DataSource(object): |
228 | def get_package_mirror_info(self): |
229 | return self.distro.get_package_mirror_info(data_source=self) |
230 | |
231 | + def update_metadata(self, source_event_types): |
232 | + """Refresh cached metadata if the datasource supports this event. |
233 | + |
234 | + The datasource has a list of update_events which |
235 | + trigger refreshing all cached metadata as well as refreshing the |
236 | + network configuration. |
237 | + |
238 | + @param source_event_types: List of EventTypes which may trigger a |
239 | + metadata update. |
240 | + |
241 | + @return True if the datasource did successfully update cached metadata |
242 | + due to source_event_type. |
243 | + """ |
244 | + supported_events = {} |
245 | + for event in source_event_types: |
246 | + for update_scope, update_events in self.update_events.items(): |
247 | + if event in update_events: |
248 | + if not supported_events.get(update_scope): |
249 | + supported_events[update_scope] = [] |
250 | + supported_events[update_scope].append(event) |
251 | + for scope, matched_events in supported_events.items(): |
252 | + LOG.debug( |
253 | + "Update datasource metadata and %s config due to events: %s", |
254 | + scope, ', '.join(matched_events)) |
255 | + # Each datasource has a cached config property which needs clearing |
256 | + # Once cleared that config property will be regenerated from |
257 | + # current metadata. |
258 | + self.clear_cached_attrs((('_%s_config' % scope, UNSET),)) |
259 | + if supported_events: |
260 | + self.clear_cached_attrs() |
261 | + result = self.get_data() |
262 | + if result: |
263 | + return True |
264 | + return False |
265 | + |
266 | def check_instance_id(self, sys_cfg): |
267 | # quickly (local check only) if self.instance_id is still |
268 | return False |
269 | @@ -520,7 +596,7 @@ def find_source(sys_cfg, distro, paths, ds_deps, cfg_list, pkg_list, reporter): |
270 | with myrep: |
271 | LOG.debug("Seeing if we can get any data from %s", cls) |
272 | s = cls(sys_cfg, distro, paths) |
273 | - if s.get_data(): |
274 | + if s.update_metadata([EventType.BOOT_NEW_INSTANCE]): |
275 | myrep.message = "found %s data from %s" % (mode, name) |
276 | return (s, type_utils.obj_name(cls)) |
277 | except Exception: |
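The update_metadata hunk above first filters the incoming source events against each update scope's supported list, and only refreshes when something matched. That grouping step can be sketched on its own; plain strings stand in here for the EventType constants:

```python
def match_supported_events(update_events, source_event_types):
    """Group source events by the update scope that supports them.

    update_events maps a scope such as 'network' to the event names
    that may trigger a refresh of that scope; only source events that
    appear in some scope's list survive into the result.
    """
    supported = {}
    for event in source_event_types:
        for scope, events in update_events.items():
            if event in events:
                supported.setdefault(scope, []).append(event)
    return supported
```

With the default mapping of 'network' to first-boot-only events, a plain boot event matches nothing, so no cached attributes are cleared and get_data is never re-run.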
278 | diff --git a/cloudinit/sources/tests/test_init.py b/cloudinit/sources/tests/test_init.py |
279 | index d5bc98a..dcd221b 100644 |
280 | --- a/cloudinit/sources/tests/test_init.py |
281 | +++ b/cloudinit/sources/tests/test_init.py |
282 | @@ -5,10 +5,11 @@ import os |
283 | import six |
284 | import stat |
285 | |
286 | +from cloudinit.event import EventType |
287 | from cloudinit.helpers import Paths |
288 | from cloudinit import importer |
289 | from cloudinit.sources import ( |
290 | - INSTANCE_JSON_FILE, DataSource) |
291 | + INSTANCE_JSON_FILE, DataSource, UNSET) |
292 | from cloudinit.tests.helpers import CiTestCase, skipIf, mock |
293 | from cloudinit.user_data import UserDataProcessor |
294 | from cloudinit import util |
295 | @@ -381,3 +382,83 @@ class TestDataSource(CiTestCase): |
296 | get_args(grandchild.get_hostname), # pylint: disable=W1505 |
297 | '%s does not implement DataSource.get_hostname params' |
298 | % grandchild) |
299 | + |
300 | + def test_clear_cached_attrs_resets_cached_attr_class_attributes(self): |
301 | + """Class attributes listed in cached_attr_defaults are reset.""" |
302 | + count = 0 |
303 | + # Setup values for all cached class attributes |
304 | + for attr, value in self.datasource.cached_attr_defaults: |
305 | + setattr(self.datasource, attr, count) |
306 | + count += 1 |
307 | + self.datasource._dirty_cache = True |
308 | + self.datasource.clear_cached_attrs() |
309 | + for attr, value in self.datasource.cached_attr_defaults: |
310 | + self.assertEqual(value, getattr(self.datasource, attr)) |
311 | + |
312 | + def test_clear_cached_attrs_noops_on_clean_cache(self): |
313 | + """Class attributes listed in cached_attr_defaults are reset.""" |
314 | + count = 0 |
315 | + # Setup values for all cached class attributes |
316 | + for attr, _ in self.datasource.cached_attr_defaults: |
317 | + setattr(self.datasource, attr, count) |
318 | + count += 1 |
319 | + self.datasource._dirty_cache = False # Fake clean cache |
320 | + self.datasource.clear_cached_attrs() |
321 | + count = 0 |
322 | + for attr, _ in self.datasource.cached_attr_defaults: |
323 | + self.assertEqual(count, getattr(self.datasource, attr)) |
324 | + count += 1 |
325 | + |
326 | + def test_clear_cached_attrs_skips_non_attr_class_attributes(self): |
327 | + """Skip any cached_attr_defaults which aren't class attributes.""" |
328 | + self.datasource._dirty_cache = True |
329 | + self.datasource.clear_cached_attrs() |
330 | + for attr in ('ec2_metadata', 'network_json'): |
331 | + self.assertFalse(hasattr(self.datasource, attr)) |
332 | + |
333 | + def test_clear_cached_attrs_of_custom_attrs(self): |
334 | + """Custom attr_values can be passed to clear_cached_attrs.""" |
335 | + self.datasource._dirty_cache = True |
336 | + cached_attr_name = self.datasource.cached_attr_defaults[0][0] |
337 | + setattr(self.datasource, cached_attr_name, 'himom') |
338 | + self.datasource.myattr = 'orig' |
339 | + self.datasource.clear_cached_attrs( |
340 | + attr_defaults=(('myattr', 'updated'),)) |
341 | + self.assertEqual('himom', getattr(self.datasource, cached_attr_name)) |
342 | + self.assertEqual('updated', self.datasource.myattr) |
343 | + |
344 | + def test_update_metadata_only_acts_on_supported_update_events(self): |
345 | + """update_metadata won't get_data on unsupported update events.""" |
346 | + self.assertEqual( |
347 | + {'network': [EventType.BOOT_NEW_INSTANCE]}, |
348 | + self.datasource.update_events) |
349 | + |
350 | + def fake_get_data(): |
351 | + raise Exception('get_data should not be called') |
352 | + |
353 | + self.datasource.get_data = fake_get_data |
354 | + self.assertFalse( |
355 | + self.datasource.update_metadata( |
356 | + source_event_types=[EventType.BOOT])) |
357 | + |
358 | + def test_update_metadata_returns_true_on_supported_update_event(self): |
359 | + """update_metadata returns get_data response on supported events.""" |
360 | + |
361 | + def fake_get_data(): |
362 | + return True |
363 | + |
364 | + self.datasource.get_data = fake_get_data |
365 | + self.datasource._network_config = 'something' |
366 | + self.datasource._dirty_cache = True |
367 | + self.assertTrue( |
368 | + self.datasource.update_metadata( |
369 | + source_event_types=[ |
370 | + EventType.BOOT, EventType.BOOT_NEW_INSTANCE])) |
371 | + self.assertEqual(UNSET, self.datasource._network_config) |
372 | + self.assertIn( |
373 | + "DEBUG: Update datasource metadata and network config due to" |
374 | + " events: New instance first boot", |
375 | + self.logs.getvalue()) |
376 | + |
377 | + |
378 | +# vi: ts=4 expandtab |
379 | diff --git a/cloudinit/stages.py b/cloudinit/stages.py |
380 | index 286607b..c132b57 100644 |
381 | --- a/cloudinit/stages.py |
382 | +++ b/cloudinit/stages.py |
383 | @@ -22,6 +22,8 @@ from cloudinit.handlers import cloud_config as cc_part |
384 | from cloudinit.handlers import shell_script as ss_part |
385 | from cloudinit.handlers import upstart_job as up_part |
386 | |
387 | +from cloudinit.event import EventType |
388 | + |
389 | from cloudinit import cloud |
390 | from cloudinit import config |
391 | from cloudinit import distros |
392 | @@ -648,10 +650,14 @@ class Init(object): |
393 | except Exception as e: |
394 | LOG.warning("Failed to rename devices: %s", e) |
395 | |
396 | - if (self.datasource is not NULL_DATA_SOURCE and |
397 | - not self.is_new_instance()): |
398 | - LOG.debug("not a new instance. network config is not applied.") |
399 | - return |
400 | + if self.datasource is not NULL_DATA_SOURCE: |
401 | + if not self.is_new_instance(): |
402 | + if not self.datasource.update_metadata([EventType.BOOT]): |
403 | + LOG.debug( |
404 | + "No network config applied. Neither a new instance" |
405 | + " nor datasource network update on '%s' event", |
406 | + EventType.BOOT) |
407 | + return |
408 | |
409 | LOG.info("Applying network configuration from %s bringup=%s: %s", |
410 | src, bring_up, netcfg) |
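The stages.py change above means network config is no longer skipped unconditionally on later boots: the datasource is now consulted before bailing out. Reduced to its essentials, the decision looks like the sketch below (names illustrative, with the datasource call passed in as a plain callback):

```python
def should_apply_network_config(is_new_instance, update_metadata):
    """Return True when network config should be (re)applied this boot.

    New instances always apply. On subsequent boots the datasource's
    update_metadata callback is asked whether it refreshes metadata on
    the boot event, matching the Init.apply_network_config change above.
    """
    if is_new_instance:
        return True
    return bool(update_metadata(['boot']))
```

A datasource whose update_events excludes the boot event returns False here, reproducing the old "not a new instance" short-circuit.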
411 | diff --git a/cloudinit/tests/test_gpg.py b/cloudinit/tests/test_gpg.py |
412 | new file mode 100644 |
413 | index 0000000..0562b96 |
414 | --- /dev/null |
415 | +++ b/cloudinit/tests/test_gpg.py |
416 | @@ -0,0 +1,54 @@ |
417 | +# This file is part of cloud-init. See LICENSE file for license information. |
418 | +"""Test gpg module.""" |
419 | + |
420 | +from cloudinit import gpg |
421 | +from cloudinit import util |
422 | +from cloudinit.tests.helpers import CiTestCase |
423 | + |
424 | +import mock |
425 | + |
426 | + |
427 | +@mock.patch("cloudinit.gpg.time.sleep") |
428 | +@mock.patch("cloudinit.gpg.util.subp") |
429 | +class TestReceiveKeys(CiTestCase): |
430 | + """Test the recv_key method.""" |
431 | + |
432 | + def test_retries_on_subp_exc(self, m_subp, m_sleep): |
433 | + """retry should be done on gpg receive keys failure.""" |
434 | + retries = (1, 2, 4) |
435 | + my_exc = util.ProcessExecutionError( |
436 | + stdout='', stderr='', exit_code=2, cmd=['mycmd']) |
437 | + m_subp.side_effect = (my_exc, my_exc, ('', '')) |
438 | + gpg.recv_key("ABCD", "keyserver.example.com", retries=retries) |
439 | + self.assertEqual([mock.call(1), mock.call(2)], m_sleep.call_args_list) |
440 | + |
441 | + def test_raises_error_after_retries(self, m_subp, m_sleep): |
442 | + """If the final run fails, error should be raised.""" |
443 | + naplen = 1 |
444 | + keyid, keyserver = ("ABCD", "keyserver.example.com") |
445 | + m_subp.side_effect = util.ProcessExecutionError( |
446 | + stdout='', stderr='', exit_code=2, cmd=['mycmd']) |
447 | + with self.assertRaises(ValueError) as rcm: |
448 | + gpg.recv_key(keyid, keyserver, retries=(naplen,)) |
449 | + self.assertIn(keyid, str(rcm.exception)) |
450 | + self.assertIn(keyserver, str(rcm.exception)) |
451 | + m_sleep.assert_called_with(naplen) |
452 | + |
453 | + def test_no_retries_on_none(self, m_subp, m_sleep): |
454 | + """retry should not be done if retries is None.""" |
455 | + m_subp.side_effect = util.ProcessExecutionError( |
456 | + stdout='', stderr='', exit_code=2, cmd=['mycmd']) |
457 | + with self.assertRaises(ValueError): |
458 | + gpg.recv_key("ABCD", "keyserver.example.com", retries=None) |
459 | + m_sleep.assert_not_called() |
460 | + |
461 | + def test_expected_gpg_command(self, m_subp, m_sleep): |
462 | + """Verify gpg is called with expected args.""" |
463 | + key, keyserver = ("DEADBEEF", "keyserver.example.com") |
464 | + retries = (1, 2, 4) |
465 | + m_subp.return_value = ('', '') |
466 | + gpg.recv_key(key, keyserver, retries=retries) |
467 | + m_subp.assert_called_once_with( |
468 | + ['gpg', '--keyserver=%s' % keyserver, '--recv-keys', key], |
469 | + capture=True) |
470 | + m_sleep.assert_not_called() |
471 | diff --git a/cloudinit/tests/test_stages.py b/cloudinit/tests/test_stages.py |
472 | new file mode 100644 |
473 | index 0000000..94b6b25 |
474 | --- /dev/null |
475 | +++ b/cloudinit/tests/test_stages.py |
476 | @@ -0,0 +1,231 @@ |
477 | +# This file is part of cloud-init. See LICENSE file for license information. |
478 | + |
479 | +"""Tests related to cloudinit.stages module.""" |
480 | + |
481 | +import os |
482 | + |
483 | +from cloudinit import stages |
484 | +from cloudinit import sources |
485 | + |
486 | +from cloudinit.event import EventType |
487 | +from cloudinit.util import write_file |
488 | + |
489 | +from cloudinit.tests.helpers import CiTestCase, mock |
490 | + |
491 | +TEST_INSTANCE_ID = 'i-testing' |
492 | + |
493 | + |
494 | +class FakeDataSource(sources.DataSource): |
495 | + |
496 | + def __init__(self, paths=None, userdata=None, vendordata=None, |
497 | + network_config=''): |
498 | + super(FakeDataSource, self).__init__({}, None, paths=paths) |
499 | + self.metadata = {'instance-id': TEST_INSTANCE_ID} |
500 | + self.userdata_raw = userdata |
501 | + self.vendordata_raw = vendordata |
502 | + self._network_config = None |
503 | + if network_config: # Permit for None value to setup attribute |
504 | + self._network_config = network_config |
505 | + |
506 | + @property |
507 | + def network_config(self): |
508 | + return self._network_config |
509 | + |
510 | + def _get_data(self): |
511 | + return True |
512 | + |
513 | + |
514 | +class TestInit(CiTestCase): |
515 | + with_logs = True |
516 | + |
517 | + def setUp(self): |
518 | + super(TestInit, self).setUp() |
519 | + self.tmpdir = self.tmp_dir() |
520 | + self.init = stages.Init() |
521 | + # Setup fake Paths for Init to reference |
522 | + self.init._cfg = {'system_info': { |
523 | + 'distro': 'ubuntu', 'paths': {'cloud_dir': self.tmpdir, |
524 | + 'run_dir': self.tmpdir}}} |
525 | + self.init.datasource = FakeDataSource(paths=self.init.paths) |
526 | + |
527 | + def test_wb__find_networking_config_disabled(self): |
528 | + """find_networking_config returns no config when disabled.""" |
529 | + disable_file = os.path.join( |
530 | + self.init.paths.get_cpath('data'), 'upgraded-network') |
531 | + write_file(disable_file, '') |
532 | + self.assertEqual( |
533 | + (None, disable_file), |
534 | + self.init._find_networking_config()) |
535 | + |
536 | + @mock.patch('cloudinit.stages.cmdline.read_kernel_cmdline_config') |
537 | + def test_wb__find_networking_config_disabled_by_kernel(self, m_cmdline): |
538 | + """find_networking_config returns when disabled by kernel cmdline.""" |
539 | + m_cmdline.return_value = {'config': 'disabled'} |
540 | + self.assertEqual( |
541 | + (None, 'cmdline'), |
542 | + self.init._find_networking_config()) |
543 | + self.assertEqual('DEBUG: network config disabled by cmdline\n', |
544 | + self.logs.getvalue()) |
545 | + |
546 | + @mock.patch('cloudinit.stages.cmdline.read_kernel_cmdline_config') |
547 | + def test_wb__find_networking_config_disabled_by_datasrc(self, m_cmdline): |
548 | + """find_networking_config returns when disabled by datasource cfg.""" |
549 | + m_cmdline.return_value = {} # Kernel doesn't disable networking |
550 | + self.init._cfg = {'system_info': {'paths': {'cloud_dir': self.tmpdir}}, |
551 | + 'network': {}} # system config doesn't disable |
552 | + |
553 | + self.init.datasource = FakeDataSource( |
554 | + network_config={'config': 'disabled'}) |
555 | + self.assertEqual( |
556 | + (None, 'ds'), |
557 | + self.init._find_networking_config()) |
558 | + self.assertEqual('DEBUG: network config disabled by ds\n', |
559 | + self.logs.getvalue()) |
560 | + |
561 | + @mock.patch('cloudinit.stages.cmdline.read_kernel_cmdline_config') |
562 | + def test_wb__find_networking_config_disabled_by_sysconfig(self, m_cmdline): |
563 | + """find_networking_config returns when disabled by system config.""" |
564 | + m_cmdline.return_value = {} # Kernel doesn't disable networking |
565 | + self.init._cfg = {'system_info': {'paths': {'cloud_dir': self.tmpdir}}, |
566 | + 'network': {'config': 'disabled'}} |
567 | + self.assertEqual( |
568 | + (None, 'system_cfg'), |
569 | + self.init._find_networking_config()) |
570 | + self.assertEqual('DEBUG: network config disabled by system_cfg\n', |
571 | + self.logs.getvalue()) |
572 | + |
573 | + @mock.patch('cloudinit.stages.cmdline.read_kernel_cmdline_config') |
574 | + def test_wb__find_networking_config_returns_kernel(self, m_cmdline): |
575 | + """find_networking_config returns kernel cmdline config if present.""" |
576 | + expected_cfg = {'config': ['fakekernel']} |
577 | + m_cmdline.return_value = expected_cfg |
578 | + self.init._cfg = {'system_info': {'paths': {'cloud_dir': self.tmpdir}}, |
579 | + 'network': {'config': ['fakesys_config']}} |
580 | + self.init.datasource = FakeDataSource( |
581 | + network_config={'config': ['fakedatasource']}) |
582 | + self.assertEqual( |
583 | + (expected_cfg, 'cmdline'), |
584 | + self.init._find_networking_config()) |
585 | + |
586 | + @mock.patch('cloudinit.stages.cmdline.read_kernel_cmdline_config') |
587 | + def test_wb__find_networking_config_returns_system_cfg(self, m_cmdline): |
588 | + """find_networking_config returns system config when present.""" |
589 | + m_cmdline.return_value = {} # No kernel network config |
590 | + expected_cfg = {'config': ['fakesys_config']} |
591 | + self.init._cfg = {'system_info': {'paths': {'cloud_dir': self.tmpdir}}, |
592 | + 'network': expected_cfg} |
593 | + self.init.datasource = FakeDataSource( |
594 | + network_config={'config': ['fakedatasource']}) |
595 | + self.assertEqual( |
596 | + (expected_cfg, 'system_cfg'), |
597 | + self.init._find_networking_config()) |
598 | + |
599 | + @mock.patch('cloudinit.stages.cmdline.read_kernel_cmdline_config') |
600 | + def test_wb__find_networking_config_returns_datasrc_cfg(self, m_cmdline): |
601 | + """find_networking_config returns datasource net config if present.""" |
602 | + m_cmdline.return_value = {} # No kernel network config |
603 | + # No system config for network in setUp |
604 | + expected_cfg = {'config': ['fakedatasource']} |
605 | + self.init.datasource = FakeDataSource(network_config=expected_cfg) |
606 | + self.assertEqual( |
607 | + (expected_cfg, 'ds'), |
608 | + self.init._find_networking_config()) |
609 | + |
610 | + @mock.patch('cloudinit.stages.cmdline.read_kernel_cmdline_config') |
611 | + def test_wb__find_networking_config_returns_fallback(self, m_cmdline): |
612 | + """find_networking_config returns fallback config if not defined.""" |
613 | + m_cmdline.return_value = {} # Kernel doesn't disable networking |
614 | + # Neither datasource nor system_info disable or provide network |
615 | + |
616 | + fake_cfg = {'config': [{'type': 'physical', 'name': 'eth9'}], |
617 | + 'version': 1} |
618 | + |
619 | + def fake_generate_fallback(): |
620 | + return fake_cfg |
621 | + |
622 | + # Monkey patch distro which gets cached on self.init |
623 | + distro = self.init.distro |
624 | + distro.generate_fallback_config = fake_generate_fallback |
625 | + self.assertEqual( |
626 | + (fake_cfg, 'fallback'), |
627 | + self.init._find_networking_config()) |
628 | + self.assertNotIn('network config disabled', self.logs.getvalue()) |
629 | + |
630 | + def test_apply_network_config_disabled(self): |
631 | + """Log when network is disabled by upgraded-network.""" |
632 | + disable_file = os.path.join( |
633 | + self.init.paths.get_cpath('data'), 'upgraded-network') |
634 | + |
635 | + def fake_network_config(): |
636 | + return (None, disable_file) |
637 | + |
638 | + self.init._find_networking_config = fake_network_config |
639 | + |
640 | + self.init.apply_network_config(True) |
641 | + self.assertIn( |
642 | + 'INFO: network config is disabled by %s' % disable_file, |
643 | + self.logs.getvalue()) |
644 | + |
645 | + @mock.patch('cloudinit.distros.ubuntu.Distro') |
646 | + def test_apply_network_on_new_instance(self, m_ubuntu): |
647 | + """Call distro apply_network_config methods on is_new_instance.""" |
648 | + net_cfg = { |
649 | + 'version': 1, 'config': [ |
650 | + {'subnets': [{'type': 'dhcp'}], 'type': 'physical', |
651 | + 'name': 'eth9', 'mac_address': '42:42:42:42:42:42'}]} |
652 | + |
653 | + def fake_network_config(): |
654 | + return net_cfg, 'fallback' |
655 | + |
656 | + self.init._find_networking_config = fake_network_config |
657 | + self.init.apply_network_config(True) |
658 | + self.init.distro.apply_network_config_names.assert_called_with(net_cfg) |
659 | + self.init.distro.apply_network_config.assert_called_with( |
660 | + net_cfg, bring_up=True) |
661 | + |
662 | + @mock.patch('cloudinit.distros.ubuntu.Distro') |
663 | + def test_apply_network_on_same_instance_id(self, m_ubuntu): |
664 | + """Only call distro.apply_network_config_names on same instance id.""" |
665 | + old_instance_id = os.path.join( |
666 | + self.init.paths.get_cpath('data'), 'instance-id') |
667 | + write_file(old_instance_id, TEST_INSTANCE_ID) |
668 | + net_cfg = { |
669 | + 'version': 1, 'config': [ |
670 | + {'subnets': [{'type': 'dhcp'}], 'type': 'physical', |
671 | + 'name': 'eth9', 'mac_address': '42:42:42:42:42:42'}]} |
672 | + |
673 | + def fake_network_config(): |
674 | + return net_cfg, 'fallback' |
675 | + |
676 | + self.init._find_networking_config = fake_network_config |
677 | + self.init.apply_network_config(True) |
678 | + self.init.distro.apply_network_config_names.assert_called_with(net_cfg) |
679 | + self.init.distro.apply_network_config.assert_not_called() |
680 | + self.assertIn( |
681 | + 'No network config applied. Neither a new instance' |
682 | + " nor datasource network update on '%s' event" % EventType.BOOT, |
683 | + self.logs.getvalue()) |
684 | + |
685 | + @mock.patch('cloudinit.distros.ubuntu.Distro') |
686 | + def test_apply_network_on_datasource_allowed_event(self, m_ubuntu): |
687 | + """Apply network if datasource.update_metadata permits BOOT event.""" |
688 | + old_instance_id = os.path.join( |
689 | + self.init.paths.get_cpath('data'), 'instance-id') |
690 | + write_file(old_instance_id, TEST_INSTANCE_ID) |
691 | + net_cfg = { |
692 | + 'version': 1, 'config': [ |
693 | + {'subnets': [{'type': 'dhcp'}], 'type': 'physical', |
694 | + 'name': 'eth9', 'mac_address': '42:42:42:42:42:42'}]} |
695 | + |
696 | + def fake_network_config(): |
697 | + return net_cfg, 'fallback' |
698 | + |
699 | + self.init._find_networking_config = fake_network_config |
700 | + self.init.datasource = FakeDataSource(paths=self.init.paths) |
701 | + self.init.datasource.update_events = {'network': [EventType.BOOT]} |
702 | + self.init.apply_network_config(True) |
703 | + self.init.distro.apply_network_config_names.assert_called_with(net_cfg) |
704 | + self.init.distro.apply_network_config.assert_called_with( |
705 | + net_cfg, bring_up=True) |
706 | + |
707 | +# vi: ts=4 expandtab |
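The `update_events` tests above all exercise one gate: network config is applied on a new instance's first boot, or when the datasource has opted in to re-configuration for the current event. A minimal sketch of that decision follows; `should_apply_network` and its string constants are a hypothetical condensation for illustration, not cloud-init's actual API.

```python
# Hypothetical condensation of the boot-event gating tested above.
BOOT = 'System boot'
BOOT_NEW_INSTANCE = 'New instance first boot'


def should_apply_network(is_new_instance, update_events):
    """Return True when network config should be (re)applied this boot."""
    if is_new_instance:
        # First boot of a new instance always applies network config.
        return True
    # Otherwise only apply if the datasource opted in to per-boot
    # network updates, as FakeDataSource does in the tests above.
    return BOOT in update_events.get('network', ())
```

With no opt-in (the `test_apply_network_on_same_instance_id` case) the gate stays closed; with `{'network': [BOOT]}` (the `test_apply_network_on_datasource_allowed_event` case) it opens.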
708 | diff --git a/cloudinit/tests/test_util.py b/cloudinit/tests/test_util.py |
709 | index 17853fc..6a31e50 100644 |
710 | --- a/cloudinit/tests/test_util.py |
711 | +++ b/cloudinit/tests/test_util.py |
712 | @@ -26,8 +26,51 @@ OS_RELEASE_SLES = dedent("""\ |
713 | CPE_NAME="cpe:/o:suse:sles:12:sp3"\n |
714 | """) |
715 | |
716 | +OS_RELEASE_OPENSUSE = dedent("""\ |
717 | +NAME="openSUSE Leap" |
718 | +VERSION="42.3" |
719 | +ID=opensuse |
720 | +ID_LIKE="suse" |
721 | +VERSION_ID="42.3" |
722 | +PRETTY_NAME="openSUSE Leap 42.3" |
723 | +ANSI_COLOR="0;32" |
724 | +CPE_NAME="cpe:/o:opensuse:leap:42.3" |
725 | +BUG_REPORT_URL="https://bugs.opensuse.org" |
726 | +HOME_URL="https://www.opensuse.org/" |
727 | +""") |
728 | + |
729 | +OS_RELEASE_CENTOS = dedent("""\ |
730 | + NAME="CentOS Linux" |
731 | + VERSION="7 (Core)" |
732 | + ID="centos" |
733 | + ID_LIKE="rhel fedora" |
734 | + VERSION_ID="7" |
735 | + PRETTY_NAME="CentOS Linux 7 (Core)" |
736 | + ANSI_COLOR="0;31" |
737 | + CPE_NAME="cpe:/o:centos:centos:7" |
738 | + HOME_URL="https://www.centos.org/" |
739 | + BUG_REPORT_URL="https://bugs.centos.org/" |
740 | + |
741 | + CENTOS_MANTISBT_PROJECT="CentOS-7" |
742 | + CENTOS_MANTISBT_PROJECT_VERSION="7" |
743 | + REDHAT_SUPPORT_PRODUCT="centos" |
744 | + REDHAT_SUPPORT_PRODUCT_VERSION="7" |
745 | +""") |
746 | + |
747 | +OS_RELEASE_DEBIAN = dedent("""\ |
748 | + PRETTY_NAME="Debian GNU/Linux 9 (stretch)" |
749 | + NAME="Debian GNU/Linux" |
750 | + VERSION_ID="9" |
751 | + VERSION="9 (stretch)" |
752 | + ID=debian |
753 | + HOME_URL="https://www.debian.org/" |
754 | + SUPPORT_URL="https://www.debian.org/support" |
755 | + BUG_REPORT_URL="https://bugs.debian.org/" |
756 | +""") |
757 | + |
758 | OS_RELEASE_UBUNTU = dedent("""\ |
759 | NAME="Ubuntu"\n |
760 | + # comment test |
761 | VERSION="16.04.3 LTS (Xenial Xerus)"\n |
762 | ID=ubuntu\n |
763 | ID_LIKE=debian\n |
764 | @@ -310,7 +353,31 @@ class TestGetLinuxDistro(CiTestCase): |
765 | m_os_release.return_value = OS_RELEASE_UBUNTU |
766 | m_path_exists.side_effect = TestGetLinuxDistro.os_release_exists |
767 | dist = util.get_linux_distro() |
768 | - self.assertEqual(('ubuntu', '16.04', platform.machine()), dist) |
769 | + self.assertEqual(('ubuntu', '16.04', 'xenial'), dist) |
770 | + |
771 | + @mock.patch('cloudinit.util.load_file') |
772 | + def test_get_linux_centos(self, m_os_release, m_path_exists): |
773 | + """Verify we get the correct name and release name on CentOS.""" |
774 | + m_os_release.return_value = OS_RELEASE_CENTOS |
775 | + m_path_exists.side_effect = TestGetLinuxDistro.os_release_exists |
776 | + dist = util.get_linux_distro() |
777 | + self.assertEqual(('centos', '7', 'Core'), dist) |
778 | + |
779 | + @mock.patch('cloudinit.util.load_file') |
780 | + def test_get_linux_debian(self, m_os_release, m_path_exists): |
781 | + """Verify we get the correct name and release name on Debian.""" |
782 | + m_os_release.return_value = OS_RELEASE_DEBIAN |
783 | + m_path_exists.side_effect = TestGetLinuxDistro.os_release_exists |
784 | + dist = util.get_linux_distro() |
785 | + self.assertEqual(('debian', '9', 'stretch'), dist) |
786 | + |
787 | + @mock.patch('cloudinit.util.load_file') |
788 | + def test_get_linux_opensuse(self, m_os_release, m_path_exists): |
789 | + """Verify we get the correct name and machine arch on OpenSUSE.""" |
790 | + m_os_release.return_value = OS_RELEASE_OPENSUSE |
791 | + m_path_exists.side_effect = TestGetLinuxDistro.os_release_exists |
792 | + dist = util.get_linux_distro() |
793 | + self.assertEqual(('opensuse', '42.3', platform.machine()), dist) |
794 | |
795 | @mock.patch('platform.dist') |
796 | def test_get_linux_distro_no_data(self, m_platform_dist, m_path_exists): |
797 | diff --git a/cloudinit/util.py b/cloudinit/util.py |
798 | index 6da9511..d0b0e90 100644 |
799 | --- a/cloudinit/util.py |
800 | +++ b/cloudinit/util.py |
801 | @@ -579,16 +579,24 @@ def get_cfg_option_int(yobj, key, default=0): |
802 | def get_linux_distro(): |
803 | distro_name = '' |
804 | distro_version = '' |
805 | + flavor = '' |
806 | if os.path.exists('/etc/os-release'): |
807 | - os_release = load_file('/etc/os-release') |
808 | - for line in os_release.splitlines(): |
809 | - if line.strip().startswith('ID='): |
810 | - distro_name = line.split('=')[-1] |
811 | - distro_name = distro_name.replace('"', '') |
812 | - if line.strip().startswith('VERSION_ID='): |
813 | - # Lets hope for the best that distros stay consistent ;) |
814 | - distro_version = line.split('=')[-1] |
815 | - distro_version = distro_version.replace('"', '') |
816 | + os_release = load_shell_content(load_file('/etc/os-release')) |
817 | + distro_name = os_release.get('ID', '') |
818 | + distro_version = os_release.get('VERSION_ID', '') |
819 | + if 'sles' in distro_name or 'suse' in distro_name: |
820 | + # RELEASE_BLOCKER: We will drop this sles divergent behavior |
821 | + # before 18.4 so that get_linux_distro returns a named tuple |
822 | + # which will include both version codename and architecture |
823 | + # on all distributions. |
824 | + flavor = platform.machine() |
825 | + else: |
826 | + flavor = os_release.get('VERSION_CODENAME', '') |
827 | + if not flavor: |
828 | + match = re.match(r'[^ ]+ \((?P<codename>[^)]+)\)', |
829 | + os_release.get('VERSION')) |
830 | + if match: |
831 | + flavor = match.groupdict()['codename'] |
832 | else: |
833 | dist = ('', '', '') |
834 | try: |
835 | @@ -606,7 +614,7 @@ def get_linux_distro(): |
836 | 'expansion may have unexpected results') |
837 | return dist |
838 | |
839 | - return (distro_name, distro_version, platform.machine()) |
840 | + return (distro_name, distro_version, flavor) |
841 | |
842 | |
843 | def system_info(): |
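The codename logic added above prefers `VERSION_CODENAME` and falls back to parsing the parenthesized codename out of `VERSION`. A standalone sketch of just that extraction (cloud-init's real code runs this against the dict parsed from `/etc/os-release` by `load_shell_content`; the `extract_flavor` name here is made up):

```python
import re


def extract_flavor(os_release):
    """Prefer VERSION_CODENAME, else pull the codename out of VERSION.

    os_release is a dict of parsed /etc/os-release key/value pairs.
    """
    flavor = os_release.get('VERSION_CODENAME', '')
    if not flavor:
        # e.g. VERSION="9 (stretch)" or VERSION="7 (Core)"
        match = re.match(r'[^ ]+ \((?P<codename>[^)]+)\)',
                         os_release.get('VERSION', ''))
        if match:
            flavor = match.groupdict()['codename']
    return flavor


print(extract_flavor({'VERSION': '9 (stretch)'}))      # stretch
print(extract_flavor({'VERSION_CODENAME': 'xenial'}))  # xenial
```

Note the sketch defaults `VERSION` to `''`; the diff above passes `os_release.get('VERSION')` directly, which would raise `TypeError` on a file that defines neither key.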
844 | diff --git a/debian/changelog b/debian/changelog |
845 | index bc83f51..726e06f 100644 |
846 | --- a/debian/changelog |
847 | +++ b/debian/changelog |
848 | @@ -1,3 +1,20 @@ |
849 | +cloud-init (18.3-9-g2e62cb8a-0ubuntu1~18.04.1) bionic-proposed; urgency=medium |
850 | + |
851 | + * New upstream snapshot. |
852 | + - docs: note in rtd about avoiding /tmp when writing files |
853 | + - ubuntu,centos,debian: get_linux_distro to align with platform.dist |
854 | + (LP: #1780481) |
855 | + - Fix boothook docs on environment variable name (INSTANCE_I -> |
856 | + INSTANCE_ID) [Marc Tamsky] |
857 | + - update_metadata: a datasource can support network re-config every boot |
858 | + - tests: drop salt-minion integration test |
859 | + - Retry on failed import of gpg receive keys. |
860 | + - tools: Fix run-container when neither source or binary package requested. |
861 | + - docs: Fix a small spelling error. [Oz N Tiram] |
862 | + - tox: use simplestreams from git repository rather than bzr. |
863 | + |
864 | + -- Chad Smith <chad.smith@canonical.com> Mon, 09 Jul 2018 15:31:12 -0600 |
865 | + |
866 | cloud-init (18.3-0ubuntu1~18.04.1) bionic-proposed; urgency=medium |
867 | |
868 | * debian/rules: update version.version_string to contain packaged version. |
869 | diff --git a/doc/examples/cloud-config-run-cmds.txt b/doc/examples/cloud-config-run-cmds.txt |
870 | index 3bb0686..002398f 100644 |
871 | --- a/doc/examples/cloud-config-run-cmds.txt |
872 | +++ b/doc/examples/cloud-config-run-cmds.txt |
873 | @@ -18,5 +18,8 @@ runcmd: |
874 | - [ sh, -xc, "echo $(date) ': hello world!'" ] |
875 | - [ sh, -c, echo "=========hello world'=========" ] |
876 | - ls -l /root |
877 | - - [ wget, "http://slashdot.org", -O, /tmp/index.html ] |
878 | + # Note: Don't write files to /tmp from cloud-init; use /run/somedir instead. |
879 | + # Early boot environments can race systemd-tmpfiles-clean LP: #1707222. |
880 | + - mkdir /run/mydir |
881 | + - [ wget, "http://slashdot.org", -O, /run/mydir/index.html ] |
882 | |
883 | diff --git a/doc/examples/cloud-config.txt b/doc/examples/cloud-config.txt |
884 | index bd84c64..774f66b 100644 |
885 | --- a/doc/examples/cloud-config.txt |
886 | +++ b/doc/examples/cloud-config.txt |
887 | @@ -127,7 +127,10 @@ runcmd: |
888 | - [ sh, -xc, "echo $(date) ': hello world!'" ] |
889 | - [ sh, -c, echo "=========hello world'=========" ] |
890 | - ls -l /root |
891 | - - [ wget, "http://slashdot.org", -O, /tmp/index.html ] |
892 | + # Note: Don't write files to /tmp from cloud-init; use /run/somedir instead. |
893 | + # Early boot environments can race systemd-tmpfiles-clean LP: #1707222. |
894 | + - mkdir /run/mydir |
895 | + - [ wget, "http://slashdot.org", -O, /run/mydir/index.html ] |
896 | |
897 | |
898 | # boot commands |
899 | diff --git a/doc/rtd/topics/format.rst b/doc/rtd/topics/format.rst |
900 | index e25289a..1b0ff36 100644 |
901 | --- a/doc/rtd/topics/format.rst |
902 | +++ b/doc/rtd/topics/format.rst |
903 | @@ -121,7 +121,7 @@ Cloud Boothook |
904 | |
905 | This content is ``boothook`` data. It is stored in a file under ``/var/lib/cloud`` and then executed immediately. |
906 | This is the earliest ``hook`` available. Note, that there is no mechanism provided for running only once. The boothook must take care of this itself. |
907 | -It is provided with the instance id in the environment variable ``INSTANCE_I``. This could be made use of to provide a 'once-per-instance' type of functionality. |
908 | +It is provided with the instance id in the environment variable ``INSTANCE_ID``. This could be made use of to provide a 'once-per-instance' type of functionality. |
909 | |
910 | Begins with: ``#cloud-boothook`` or ``Content-Type: text/cloud-boothook`` when using a MIME archive. |
911 | |
912 | diff --git a/integration-requirements.txt b/integration-requirements.txt |
913 | index e5bb5b2..01baebd 100644 |
914 | --- a/integration-requirements.txt |
915 | +++ b/integration-requirements.txt |
916 | @@ -17,4 +17,4 @@ git+https://github.com/lxc/pylxd.git@4b8ab1802f9aee4eb29cf7b119dae0aa47150779 |
917 | |
918 | |
919 | # finds latest image information |
920 | -bzr+lp:simplestreams |
921 | +git+https://git.launchpad.net/simplestreams |
922 | diff --git a/tests/cloud_tests/testcases/modules/salt_minion.py b/tests/cloud_tests/testcases/modules/salt_minion.py |
923 | deleted file mode 100644 |
924 | index fc9688e..0000000 |
925 | --- a/tests/cloud_tests/testcases/modules/salt_minion.py |
926 | +++ /dev/null |
927 | @@ -1,38 +0,0 @@ |
928 | -# This file is part of cloud-init. See LICENSE file for license information. |
929 | - |
930 | -"""cloud-init Integration Test Verify Script.""" |
931 | -from tests.cloud_tests.testcases import base |
932 | - |
933 | - |
934 | -class Test(base.CloudTestCase): |
935 | - """Test salt minion module.""" |
936 | - |
937 | - def test_minon_master(self): |
938 | - """Test master value in config.""" |
939 | - out = self.get_data_file('minion') |
940 | - self.assertIn('master: salt.mydomain.com', out) |
941 | - |
942 | - def test_minion_pem(self): |
943 | - """Test private key.""" |
944 | - out = self.get_data_file('minion.pem') |
945 | - self.assertIn('------BEGIN PRIVATE KEY------', out) |
946 | - self.assertIn('<key data>', out) |
947 | - self.assertIn('------END PRIVATE KEY-------', out) |
948 | - |
949 | - def test_minion_pub(self): |
950 | - """Test public key.""" |
951 | - out = self.get_data_file('minion.pub') |
952 | - self.assertIn('------BEGIN PUBLIC KEY-------', out) |
953 | - self.assertIn('<key data>', out) |
954 | - self.assertIn('------END PUBLIC KEY-------', out) |
955 | - |
956 | - def test_grains(self): |
957 | - """Test master value in config.""" |
958 | - out = self.get_data_file('grains') |
959 | - self.assertIn('role: web', out) |
960 | - |
961 | - def test_minion_installed(self): |
962 | - """Test if the salt-minion package is installed""" |
963 | - self.assertPackageInstalled('salt-minion') |
964 | - |
965 | -# vi: ts=4 expandtab |
966 | diff --git a/tests/cloud_tests/testcases/modules/salt_minion.yaml b/tests/cloud_tests/testcases/modules/salt_minion.yaml |
967 | deleted file mode 100644 |
968 | index 9227147..0000000 |
969 | --- a/tests/cloud_tests/testcases/modules/salt_minion.yaml |
970 | +++ /dev/null |
971 | @@ -1,49 +0,0 @@ |
972 | -# |
973 | -# Create config for a salt minion |
974 | -# |
975 | -# 2016-11-17: Currently takes >60 seconds results in test failure |
976 | -# |
977 | -enabled: True |
978 | -cloud_config: | |
979 | - #cloud-config |
980 | - salt_minion: |
981 | - conf: |
982 | - master: salt.mydomain.com |
983 | - public_key: | |
984 | - ------BEGIN PUBLIC KEY------- |
985 | - <key data> |
986 | - ------END PUBLIC KEY------- |
987 | - private_key: | |
988 | - ------BEGIN PRIVATE KEY------ |
989 | - <key data> |
990 | - ------END PRIVATE KEY------- |
991 | - grains: |
992 | - role: web |
993 | -collect_scripts: |
994 | - minion: | |
995 | - #!/bin/bash |
996 | - cat /etc/salt/minion |
997 | - minion_id: | |
998 | - #!/bin/bash |
999 | - cat /etc/salt/minion_id |
1000 | - minion.pem: | |
1001 | - #!/bin/bash |
1002 | - PRIV_KEYFILE=/etc/salt/pki/minion/minion.pem |
1003 | - if [ ! -f $PRIV_KEYFILE ]; then |
1004 | - # Bionic and later automatically moves /etc/salt/pki/minion/* |
1005 | - PRIV_KEYFILE=/var/lib/salt/pki/minion/minion.pem |
1006 | - fi |
1007 | - cat $PRIV_KEYFILE |
1008 | - minion.pub: | |
1009 | - #!/bin/bash |
1010 | - PUB_KEYFILE=/etc/salt/pki/minion/minion.pub |
1011 | - if [ ! -f $PUB_KEYFILE ]; then |
1012 | - # Bionic and later automatically moves /etc/salt/pki/minion/* |
1013 | - PUB_KEYFILE=/var/lib/salt/pki/minion/minion.pub |
1014 | - fi |
1015 | - cat $PUB_KEYFILE |
1016 | - grains: | |
1017 | - #!/bin/bash |
1018 | - cat /etc/salt/grains |
1019 | - |
1020 | -# vi: ts=4 expandtab |
1021 | diff --git a/tests/unittests/test_datasource/test_azure_helper.py b/tests/unittests/test_datasource/test_azure_helper.py |
1022 | index af9d3e1..26b2b93 100644 |
1023 | --- a/tests/unittests/test_datasource/test_azure_helper.py |
1024 | +++ b/tests/unittests/test_datasource/test_azure_helper.py |
1025 | @@ -85,7 +85,9 @@ class TestFindEndpoint(CiTestCase): |
1026 | self.dhcp_options.return_value = {"eth0": {"unknown_245": "5:4:3:2"}} |
1027 | self.assertEqual('5.4.3.2', wa_shim.find_endpoint(None)) |
1028 | |
1029 | - def test_latest_lease_used(self): |
1030 | + @mock.patch('cloudinit.sources.helpers.azure.util.is_FreeBSD') |
1031 | + def test_latest_lease_used(self, m_is_freebsd): |
1032 | + m_is_freebsd.return_value = False # To avoid hitting load_file |
1033 | encoded_addresses = ['5:4:3:2', '4:3:2:1'] |
1034 | file_content = '\n'.join([self._build_lease_content(encoded_address) |
1035 | for encoded_address in encoded_addresses]) |
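The `is_FreeBSD` stub added above follows a common pattern: pin a platform probe so a unit test never reaches the code path that reads the real filesystem. A generic sketch of the same idea; `read_lease` is a made-up stand-in, not the azure helper's real code.

```python
from unittest import mock


def read_lease(is_freebsd):
    """Made-up stand-in for a platform-dependent lease lookup."""
    if is_freebsd():
        # On FreeBSD this branch would read a leases file from disk.
        with open('/var/db/dhclient.leases.hn0') as f:
            return f.read()
    return 'lease-from-network-manager'


def test_read_lease_avoids_disk():
    # Pin the probe to False so the test never touches /var/db.
    stub = mock.Mock(return_value=False)
    assert read_lease(stub) == 'lease-from-network-manager'
    stub.assert_called_once_with()


test_read_lease_avoids_disk()
```

In the diff above the same pinning is done declaratively with `@mock.patch(...)`, which passes the mock in as the test method's first extra argument.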
1036 | diff --git a/tools/run-container b/tools/run-container |
1037 | index 499e85b..6dedb75 100755 |
1038 | --- a/tools/run-container |
1039 | +++ b/tools/run-container |
1040 | @@ -418,7 +418,7 @@ main() { |
1041 | { bad_Usage; return; } |
1042 | |
1043 | local cur="" next="" |
1044 | - local package="" source_package="" unittest="" name="" |
1045 | + local package=false srcpackage=false unittest="" name="" |
1046 | local dirty=false pyexe="auto" artifact_d="." |
1047 | |
1048 | while [ $# -ne 0 ]; do |
1049 | @@ -430,8 +430,8 @@ main() { |
1050 | -k|--keep) KEEP=true;; |
1051 | -n|--name) name="$next"; shift;; |
1052 | --pyexe) pyexe=$next; shift;; |
1053 | - -p|--package) package=1;; |
1054 | - -s|--source-package) source_package=1;; |
1055 | + -p|--package) package=true;; |
1056 | + -s|--source-package) srcpackage=true;; |
1057 | -u|--unittest) unittest=1;; |
1058 | -v|--verbose) VERBOSITY=$((VERBOSITY+1));; |
1059 | --) shift; break;; |
1060 | @@ -529,8 +529,8 @@ main() { |
1061 | build_srcpkg="./packages/brpm $distflag --srpm" |
1062 | pkg_ext=".rpm";; |
1063 | esac |
1064 | - if [ -n "$source_package" ]; then |
1065 | - [ -n "$build_pkg" ] || { |
1066 | + if [ "$srcpackage" = "true" ]; then |
1067 | + [ -n "$build_srcpkg" ] || { |
1068 | error "Unknown package command for $OS_NAME" |
1069 | return 1 |
1070 | } |
1071 | @@ -542,19 +542,21 @@ main() { |
1072 | } |
1073 | fi |
1074 | |
1075 | - if [ -n "$package" ]; then |
1076 | - [ -n "$build_srcpkg" ] || { |
1077 | + if [ "$package" = "true" ]; then |
1078 | + [ -n "$build_pkg" ] || { |
1079 | error "Unknown build source command for $OS_NAME" |
1080 | return 1 |
1081 | } |
1082 | debug 1 "building binary package with $build_pkg." |
1083 | + # shellcheck disable=SC2086 |
1084 | inside_as_cd "$name" "$user" "$cdir" $pyexe $build_pkg || { |
1085 | errorrc "failed: $build_pkg"; |
1086 | errors[${#errors[@]}]="binary package" |
1087 | } |
1088 | fi |
1089 | |
1090 | - if [ -n "$artifact_d" ]; then |
1091 | + if [ -n "$artifact_d" ] && |
1092 | + [ "$package" = "true" -o "$srcpackage" = "true" ]; then |
1093 | local art="" |
1094 | artifact_d="${artifact_d%/}/" |
1095 | [ -d "${artifact_d}" ] || mkdir -p "$artifact_d" || { |
PASSED: Continuous integration, rev:de0488f976acb3430b01b03b95a052dfd343a632
https://jenkins.ubuntu.com/server/job/cloud-init-ci/148/
Executed test runs:
SUCCESS: Checkout
SUCCESS: Unit & Style Tests
SUCCESS: Ubuntu LTS: Build
SUCCESS: Ubuntu LTS: Integration
SUCCESS: MAAS Compatability Testing
IN_PROGRESS: Declarative: Post Actions
Click here to trigger a rebuild:
https://jenkins.ubuntu.com/server/job/cloud-init-ci/148/rebuild