Merge ~chad.smith/cloud-init:ubuntu/bionic into cloud-init:ubuntu/bionic
Proposed by: Chad Smith
Status: Merged
Merged at revision: 4a1aaa18595fa663e1a38dd3ac8f73231ec69a7f
Proposed branch: ~chad.smith/cloud-init:ubuntu/bionic
Merge into: cloud-init:ubuntu/bionic
Diff against target: 7789 lines (+3838/-768), 98 files modified:
ChangeLog (+54/-0) HACKING.rst (+2/-2) bash_completion/cloud-init (+4/-1) cloudinit/cmd/cloud_id.py (+90/-0) cloudinit/cmd/devel/logs.py (+23/-8) cloudinit/cmd/devel/net_convert.py (+10/-5) cloudinit/cmd/devel/render.py (+24/-11) cloudinit/cmd/devel/tests/test_logs.py (+37/-6) cloudinit/cmd/devel/tests/test_render.py (+44/-1) cloudinit/cmd/main.py (+4/-16) cloudinit/cmd/query.py (+24/-12) cloudinit/cmd/tests/test_cloud_id.py (+127/-0) cloudinit/cmd/tests/test_query.py (+71/-5) cloudinit/config/cc_disk_setup.py (+1/-1) cloudinit/config/cc_lxd.py (+1/-1) cloudinit/config/cc_resizefs.py (+7/-0) cloudinit/config/cc_set_passwords.py (+1/-1) cloudinit/config/cc_write_files.py (+6/-1) cloudinit/config/tests/test_set_passwords.py (+40/-0) cloudinit/dhclient_hook.py (+72/-38) cloudinit/handlers/jinja_template.py (+9/-1) cloudinit/net/__init__.py (+38/-4) cloudinit/net/dhcp.py (+76/-25) cloudinit/net/eni.py (+15/-14) cloudinit/net/netplan.py (+3/-3) cloudinit/net/sysconfig.py (+61/-5) cloudinit/net/tests/test_dhcp.py (+47/-4) cloudinit/net/tests/test_init.py (+51/-1) cloudinit/sources/DataSourceAliYun.py (+5/-15) cloudinit/sources/DataSourceAltCloud.py (+22/-11) cloudinit/sources/DataSourceAzure.py (+82/-31) cloudinit/sources/DataSourceBigstep.py (+4/-0) cloudinit/sources/DataSourceCloudSigma.py (+5/-1) cloudinit/sources/DataSourceConfigDrive.py (+12/-0) cloudinit/sources/DataSourceEc2.py (+59/-56) cloudinit/sources/DataSourceIBMCloud.py (+4/-0) cloudinit/sources/DataSourceMAAS.py (+4/-0) cloudinit/sources/DataSourceNoCloud.py (+52/-1) cloudinit/sources/DataSourceNone.py (+4/-0) cloudinit/sources/DataSourceOVF.py (+36/-26) cloudinit/sources/DataSourceOpenNebula.py (+9/-1) cloudinit/sources/DataSourceOracle.py (+4/-0) cloudinit/sources/DataSourceScaleway.py (+10/-1) cloudinit/sources/DataSourceSmartOS.py (+3/-0) cloudinit/sources/__init__.py (+104/-21) cloudinit/sources/helpers/netlink.py (+250/-0) cloudinit/sources/helpers/tests/test_netlink.py (+373/-0) 
cloudinit/sources/helpers/vmware/imc/config_nic.py (+2/-3) cloudinit/sources/tests/test_init.py (+83/-3) cloudinit/sources/tests/test_oracle.py (+8/-0) cloudinit/temp_utils.py (+2/-2) cloudinit/tests/test_dhclient_hook.py (+105/-0) cloudinit/tests/test_temp_utils.py (+17/-1) cloudinit/tests/test_url_helper.py (+24/-1) cloudinit/tests/test_util.py (+82/-17) cloudinit/url_helper.py (+25/-6) cloudinit/util.py (+25/-3) cloudinit/version.py (+1/-1) config/cloud.cfg.tmpl (+11/-1) debian/changelog (+75/-0) doc/rtd/topics/datasources.rst (+60/-1) doc/rtd/topics/datasources/azure.rst (+65/-38) doc/rtd/topics/instancedata.rst (+137/-46) doc/rtd/topics/network-config-format-v1.rst (+1/-1) packages/redhat/cloud-init.spec.in (+1/-0) packages/suse/cloud-init.spec.in (+1/-0) setup.py (+2/-1) systemd/cloud-init.service.tmpl (+1/-2) templates/sources.list.ubuntu.tmpl (+17/-17) tests/cloud_tests/releases.yaml (+16/-0) tests/cloud_tests/testcases/base.py (+15/-3) tests/cloud_tests/testcases/modules/apt_configure_primary.py (+9/-5) tests/cloud_tests/testcases/modules/apt_configure_primary.yaml (+0/-7) tests/unittests/test_builtin_handlers.py (+25/-0) tests/unittests/test_cli.py (+8/-8) tests/unittests/test_datasource/test_aliyun.py (+4/-0) tests/unittests/test_datasource/test_altcloud.py (+67/-51) tests/unittests/test_datasource/test_azure.py (+262/-79) tests/unittests/test_datasource/test_cloudsigma.py (+6/-0) tests/unittests/test_datasource/test_configdrive.py (+3/-0) tests/unittests/test_datasource/test_ec2.py (+37/-23) tests/unittests/test_datasource/test_ibmcloud.py (+39/-1) tests/unittests/test_datasource/test_nocloud.py (+98/-41) tests/unittests/test_datasource/test_opennebula.py (+4/-0) tests/unittests/test_datasource/test_ovf.py (+119/-39) tests/unittests/test_datasource/test_scaleway.py (+72/-4) tests/unittests/test_datasource/test_smartos.py (+7/-0) tests/unittests/test_ds_identify.py (+16/-1) tests/unittests/test_handler/test_handler_lxd.py (+1/-1) 
tests/unittests/test_handler/test_handler_resizefs.py (+42/-10) tests/unittests/test_handler/test_handler_write_files.py (+12/-0) tests/unittests/test_net.py (+137/-6) tests/unittests/test_util.py (+6/-0) tests/unittests/test_vmware_config_file.py (+52/-6) tools/ds-identify (+32/-6) tools/run-container (+1/-0) tox.ini (+2/-2) udev/66-azure-ephemeral.rules (+17/-1)
Related bugs: (none)
Reviewer | Review Type | Date Requested | Status
---|---|---|---
Server Team CI bot | continuous-integration | | Approve
cloud-init Committers | | | Pending

Review via email: mp+362281@code.launchpad.net
Commit message
sync new upstream snapshot for release into bionic via SRU
Description of the change
Revision history for this message
Server Team CI bot (server-team-bot) wrote:
review: Approve (continuous-integration)
Preview Diff
diff --git a/ChangeLog b/ChangeLog
index 9c043b0..8fa6fdd 100644
--- a/ChangeLog
+++ b/ChangeLog
@@ -1,3 +1,57 @@
+18.5:
+ - tests: add Disco release [Joshua Powers]
+ - net: render 'metric' values in per-subnet routes (LP: #1805871)
+ - write_files: add support for appending to files. [James Baxter]
+ - config: On ubuntu select cloud archive mirrors for armel, armhf, arm64.
+   (LP: #1805854)
+ - dhclient-hook: cleanups, tests and fix a bug on 'down' event.
+ - NoCloud: Allow top level 'network' key in network-config. (LP: #1798117)
+ - ovf: Fix ovf network config generation gateway/routes (LP: #1806103)
+ - azure: detect vnet migration via netlink media change event
+   [Tamilmani Manoharan]
+ - Azure: fix copy/paste error in error handling when reading azure ovf.
+   [Adam DePue]
+ - tests: fix incorrect order of mocks in test_handle_zfs_root.
+ - doc: Change dns_nameserver property to dns_nameservers. [Tomer Cohen]
+ - OVF: identify label iso9660 filesystems with label 'OVF ENV'.
+ - logs: collect-logs ignore instance-data-sensitive.json on non-root user
+   (LP: #1805201)
+ - net: Ephemeral*Network: add connectivity check via URL
+ - azure: _poll_imds only retry on 404. Fail on Timeout (LP: #1803598)
+ - resizefs: Prefix discovered devpath with '/dev/' when path does not
+   exist [Igor Galić]
+ - azure: retry imds polling on requests.Timeout (LP: #1800223)
+ - azure: Accept variation in error msg from mount for ntfs volumes
+   [Jason Zions] (LP: #1799338)
+ - azure: fix regression introduced when persisting ephemeral dhcp lease
+   [asakkurr]
+ - azure: add udev rules to create cloud-init Gen2 disk name symlinks
+   (LP: #1797480)
+ - tests: ec2 mock missing httpretty user-data and instance-identity routes
+ - azure: remove /etc/netplan/90-hotplug-azure.yaml when net from IMDS
+ - azure: report ready to fabric after reprovision and reduce logging
+   [asakkurr] (LP: #1799594)
+ - query: better error when missing read permission on instance-data
+ - instance-data: fallback to instance-data.json if sensitive is absent.
+   (LP: #1798189)
+ - docs: remove colon from network v1 config example. [Tomer Cohen]
+ - Add cloud-id binary to packages for SUSE [Jason Zions]
+ - systemd: On SUSE ensure cloud-init.service runs before wicked
+   [Robert Schweikert] (LP: #1799709)
+ - update detection of openSUSE variants [Robert Schweikert]
+ - azure: Add apply_network_config option to disable network from IMDS
+   (LP: #1798424)
+ - Correct spelling in an error message (udevadm). [Katie McLaughlin]
+ - tests: meta_data key changed to meta-data in ec2 instance-data.json
+   (LP: #1797231)
+ - tests: fix kvm integration test to assert flexible config-disk path
+   (LP: #1797199)
+ - tools: Add cloud-id command line utility
+ - instance-data: Add standard keys platform and subplatform. Refactor ec2.
+ - net: ignore nics that have "zero" mac address. (LP: #1796917)
+ - tests: fix apt_configure_primary to be more flexible
+ - Ubuntu: update sources.list to comment out deb-src entries. (LP: #74747)
+
 18.4:
  - add rtd example docs about new standardized keys
  - use ds._crawled_metadata instance attribute if set when writing
diff --git a/HACKING.rst b/HACKING.rst
index 3bb555c..fcdfa4f 100644
--- a/HACKING.rst
+++ b/HACKING.rst
@@ -11,10 +11,10 @@ Do these things once
 
 * To contribute, you must sign the Canonical `contributor license agreement`_
 
-  If you have already signed it as an individual, your Launchpad user will be listed in the `contributor-agreement-canonical`_ group. Unfortunately there is no easy way to check if an organization or company you are doing work for has signed. If you are unsure or have questions, email `Scott Moser <mailto:scott.moser@canonical.com>`_ or ping smoser in ``#cloud-init`` channel via freenode.
+  If you have already signed it as an individual, your Launchpad user will be listed in the `contributor-agreement-canonical`_ group. Unfortunately there is no easy way to check if an organization or company you are doing work for has signed. If you are unsure or have questions, email `Josh Powers <mailto:josh.powers@canonical.com>`_ or ping powersj in ``#cloud-init`` channel via freenode.
 
   When prompted for 'Project contact' or 'Canonical Project Manager' enter
-  'Scott Moser'.
+  'Josh Powers'.
 
 * Configure git with your email and name for commit messages.
 
diff --git a/bash_completion/cloud-init b/bash_completion/cloud-init
index 8c25032..a9577e9 100644
--- a/bash_completion/cloud-init
+++ b/bash_completion/cloud-init
@@ -30,7 +30,10 @@ _cloudinit_complete()
         devel)
             COMPREPLY=($(compgen -W "--help schema net-convert" -- $cur_word))
             ;;
-        dhclient-hook|features)
+        dhclient-hook)
+            COMPREPLY=($(compgen -W "--help up down" -- $cur_word))
+            ;;
+        features)
             COMPREPLY=($(compgen -W "--help" -- $cur_word))
             ;;
         init)
diff --git a/cloudinit/cmd/cloud_id.py b/cloudinit/cmd/cloud_id.py
new file mode 100755
index 0000000..9760892
--- /dev/null
+++ b/cloudinit/cmd/cloud_id.py
@@ -0,0 +1,90 @@
+# This file is part of cloud-init. See LICENSE file for license information.
+
+"""Commandline utility to list the canonical cloud-id for an instance."""
+
+import argparse
+import json
+import sys
+
+from cloudinit.sources import (
+    INSTANCE_JSON_FILE, METADATA_UNKNOWN, canonical_cloud_id)
+
+DEFAULT_INSTANCE_JSON = '/run/cloud-init/%s' % INSTANCE_JSON_FILE
+
+NAME = 'cloud-id'
+
+
+def get_parser(parser=None):
+    """Build or extend an arg parser for the cloud-id utility.
+
+    @param parser: Optional existing ArgumentParser instance representing the
+        query subcommand which will be extended to support the args of
+        this utility.
+
+    @returns: ArgumentParser with proper argument configuration.
+    """
+    if not parser:
+        parser = argparse.ArgumentParser(
+            prog=NAME,
+            description='Report the canonical cloud-id for this instance')
+    parser.add_argument(
+        '-j', '--json', action='store_true', default=False,
+        help='Report all standardized cloud-id information as json.')
+    parser.add_argument(
+        '-l', '--long', action='store_true', default=False,
+        help='Report extended cloud-id information as tab-delimited string.')
+    parser.add_argument(
+        '-i', '--instance-data', type=str, default=DEFAULT_INSTANCE_JSON,
+        help=('Path to instance-data.json file. Default is %s' %
+              DEFAULT_INSTANCE_JSON))
+    return parser
+
+
+def error(msg):
+    sys.stderr.write('ERROR: %s\n' % msg)
+    return 1
+
+
+def handle_args(name, args):
+    """Handle calls to 'cloud-id' cli.
+
+    Print the canonical cloud-id on which the instance is running.
+
+    @return: 0 on success, 1 otherwise.
+    """
+    try:
+        instance_data = json.load(open(args.instance_data))
+    except IOError:
+        return error(
+            "File not found '%s'. Provide a path to instance data json file"
+            ' using --instance-data' % args.instance_data)
+    except ValueError as e:
+        return error(
+            "File '%s' is not valid json. %s" % (args.instance_data, e))
+    v1 = instance_data.get('v1', {})
+    cloud_id = canonical_cloud_id(
+        v1.get('cloud_name', METADATA_UNKNOWN),
+        v1.get('region', METADATA_UNKNOWN),
+        v1.get('platform', METADATA_UNKNOWN))
+    if args.json:
+        v1['cloud_id'] = cloud_id
+        response = json.dumps(  # Pretty, sorted json
+            v1, indent=1, sort_keys=True, separators=(',', ': '))
+    elif args.long:
+        response = '%s\t%s' % (cloud_id, v1.get('region', METADATA_UNKNOWN))
+    else:
+        response = cloud_id
+    sys.stdout.write('%s\n' % response)
+    return 0
+
+
+def main():
+    """Tool to query specific instance-data values."""
+    parser = get_parser()
+    sys.exit(handle_args(NAME, parser.parse_args()))
+
+
+if __name__ == '__main__':
+    main()
+
+# vi: ts=4 expandtab
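The new cloud-id utility is mostly plumbing around one lookup: read instance-data.json, take the standardized `v1` keys, and fall back to `unknown` for anything missing. A minimal stand-alone sketch of that lookup (the helper name is hypothetical and the real `canonical_cloud_id()` additionally applies platform-specific mapping, omitted here):

```python
import json

METADATA_UNKNOWN = 'unknown'  # mirrors cloudinit.sources.METADATA_UNKNOWN


def cloud_name_and_region(instance_data_text):
    """Extract (cloud_name, region) the way cloud-id does before mapping.

    Hypothetical illustration helper; not part of cloud-init itself.
    Missing 'v1' keys degrade to 'unknown' rather than raising.
    """
    v1 = json.loads(instance_data_text).get('v1', {})
    return (v1.get('cloud_name', METADATA_UNKNOWN),
            v1.get('region', METADATA_UNKNOWN))
```

For example, a document containing `{"v1": {"cloud_name": "aws", "region": "us-east-1"}}` yields `('aws', 'us-east-1')`, while an empty `{}` yields `('unknown', 'unknown')`.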
diff --git a/cloudinit/cmd/devel/logs.py b/cloudinit/cmd/devel/logs.py
index df72520..4c086b5 100644
--- a/cloudinit/cmd/devel/logs.py
+++ b/cloudinit/cmd/devel/logs.py
@@ -5,14 +5,16 @@
 """Define 'collect-logs' utility and handler to include in cloud-init cmd."""
 
 import argparse
-from cloudinit.util import (
-    ProcessExecutionError, chdir, copy, ensure_dir, subp, write_file)
-from cloudinit.temp_utils import tempdir
 from datetime import datetime
 import os
 import shutil
 import sys
 
+from cloudinit.sources import INSTANCE_JSON_SENSITIVE_FILE
+from cloudinit.temp_utils import tempdir
+from cloudinit.util import (
+    ProcessExecutionError, chdir, copy, ensure_dir, subp, write_file)
+
 
 CLOUDINIT_LOGS = ['/var/log/cloud-init.log', '/var/log/cloud-init-output.log']
 CLOUDINIT_RUN_DIR = '/run/cloud-init'
@@ -46,6 +48,13 @@ def get_parser(parser=None):
     return parser
 
 
+def _copytree_ignore_sensitive_files(curdir, files):
+    """Return a list of files to ignore if we are non-root"""
+    if os.getuid() == 0:
+        return ()
+    return (INSTANCE_JSON_SENSITIVE_FILE,)  # Ignore root-permissioned files
+
+
 def _write_command_output_to_file(cmd, filename, msg, verbosity):
     """Helper which runs a command and writes output or error to filename."""
     try:
@@ -78,6 +87,11 @@ def collect_logs(tarfile, include_userdata, verbosity=0):
     @param tarfile: The path of the tar-gzipped file to create.
     @param include_userdata: Boolean, true means include user-data.
     """
+    if include_userdata and os.getuid() != 0:
+        sys.stderr.write(
+            "To include userdata, root user is required."
+            " Try sudo cloud-init collect-logs\n")
+        return 1
     tarfile = os.path.abspath(tarfile)
     date = datetime.utcnow().date().strftime('%Y-%m-%d')
     log_dir = 'cloud-init-logs-{0}'.format(date)
@@ -110,7 +124,8 @@ def collect_logs(tarfile, include_userdata, verbosity=0):
     ensure_dir(run_dir)
     if os.path.exists(CLOUDINIT_RUN_DIR):
         shutil.copytree(CLOUDINIT_RUN_DIR,
-                        os.path.join(run_dir, 'cloud-init'))
+                        os.path.join(run_dir, 'cloud-init'),
+                        ignore=_copytree_ignore_sensitive_files)
         _debug("collected dir %s\n" % CLOUDINIT_RUN_DIR, 1, verbosity)
     else:
         _debug("directory '%s' did not exist\n" % CLOUDINIT_RUN_DIR, 1,
@@ -118,21 +133,21 @@ def collect_logs(tarfile, include_userdata, verbosity=0):
     with chdir(tmp_dir):
         subp(['tar', 'czvf', tarfile, log_dir.replace(tmp_dir + '/', '')])
     sys.stderr.write("Wrote %s\n" % tarfile)
+    return 0
 
 
 def handle_collect_logs_args(name, args):
     """Handle calls to 'cloud-init collect-logs' as a subcommand."""
-    collect_logs(args.tarfile, args.userdata, args.verbosity)
+    return collect_logs(args.tarfile, args.userdata, args.verbosity)
 
 
 def main():
     """Tool to collect and tar all cloud-init related logs."""
     parser = get_parser()
-    handle_collect_logs_args('collect-logs', parser.parse_args())
-    return 0
+    return handle_collect_logs_args('collect-logs', parser.parse_args())
 
 
 if __name__ == '__main__':
-    main()
+    sys.exit(main())
 
 # vi: ts=4 expandtab
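The collect-logs change relies on the `ignore` hook of `shutil.copytree`: the callable is invoked once per directory with that directory's entries, and any names it returns are skipped. The same pattern can be sketched standalone (stdlib only; unlike `_copytree_ignore_sensitive_files`, this demo skips the file unconditionally rather than checking `os.getuid()`):

```python
import os
import shutil
import tempfile
from pathlib import Path

SENSITIVE = 'instance-data-sensitive.json'


def ignore_sensitive(curdir, files):
    # Called by shutil.copytree for each directory it visits; names
    # returned here are excluded from the copy.
    return (SENSITIVE,) if SENSITIVE in files else ()


src = tempfile.mkdtemp()
Path(src, 'results.json').write_text('results')
Path(src, SENSITIVE).write_text('secret')

dst = os.path.join(tempfile.mkdtemp(), 'cloud-init')
shutil.copytree(src, dst, ignore=ignore_sensitive)
# dst now contains results.json but not the sensitive file
```

Gating the returned tuple on `os.getuid()`, as the real helper does, lets root keep the sensitive file in the tarball while non-root collections silently drop it.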
diff --git a/cloudinit/cmd/devel/net_convert.py b/cloudinit/cmd/devel/net_convert.py
index a0f58a0..1ad7e0b 100755
--- a/cloudinit/cmd/devel/net_convert.py
+++ b/cloudinit/cmd/devel/net_convert.py
@@ -9,6 +9,7 @@ import yaml
 
 from cloudinit.sources.helpers import openstack
 from cloudinit.sources import DataSourceAzure as azure
+from cloudinit.sources import DataSourceOVF as ovf
 
 from cloudinit import distros
 from cloudinit.net import eni, netplan, network_state, sysconfig
@@ -31,7 +32,7 @@
                         metavar="PATH", required=True)
     parser.add_argument("-k", "--kind",
                         choices=['eni', 'network_data.json', 'yaml',
-                                 'azure-imds'],
+                                 'azure-imds', 'vmware-imc'],
                         required=True)
     parser.add_argument("-d", "--directory",
                         metavar="PATH",
@@ -76,7 +77,6 @@
     net_data = args.network_data.read()
     if args.kind == "eni":
         pre_ns = eni.convert_eni_data(net_data)
-        ns = network_state.parse_net_config_data(pre_ns)
     elif args.kind == "yaml":
         pre_ns = yaml.load(net_data)
         if 'network' in pre_ns:
@@ -85,15 +85,16 @@
         sys.stderr.write('\n'.join(
             ["Input YAML",
              yaml.dump(pre_ns, default_flow_style=False, indent=4), ""]))
-        ns = network_state.parse_net_config_data(pre_ns)
     elif args.kind == 'network_data.json':
         pre_ns = openstack.convert_net_json(
             json.loads(net_data), known_macs=known_macs)
-        ns = network_state.parse_net_config_data(pre_ns)
     elif args.kind == 'azure-imds':
         pre_ns = azure.parse_network_config(json.loads(net_data))
-        ns = network_state.parse_net_config_data(pre_ns)
+    elif args.kind == 'vmware-imc':
+        config = ovf.Config(ovf.ConfigFile(args.network_data.name))
+        pre_ns = ovf.get_network_config_from_conf(config, False)
 
+    ns = network_state.parse_net_config_data(pre_ns)
     if not ns:
         raise RuntimeError("No valid network_state object created from"
                            "input data")
@@ -111,6 +112,10 @@
     elif args.output_kind == "netplan":
         r_cls = netplan.Renderer
         config = distro.renderer_configs.get('netplan')
+        # don't run netplan generate/apply
+        config['postcmds'] = False
+        # trim leading slash
+        config['netplan_path'] = config['netplan_path'][1:]
     else:
         r_cls = sysconfig.Renderer
         config = distro.renderer_configs.get('sysconfig')
diff --git a/cloudinit/cmd/devel/render.py b/cloudinit/cmd/devel/render.py
index 2ba6b68..1bc2240 100755
--- a/cloudinit/cmd/devel/render.py
+++ b/cloudinit/cmd/devel/render.py
@@ -8,11 +8,10 @@ import sys
 
 from cloudinit.handlers.jinja_template import render_jinja_payload_from_file
 from cloudinit import log
-from cloudinit.sources import INSTANCE_JSON_FILE
+from cloudinit.sources import INSTANCE_JSON_FILE, INSTANCE_JSON_SENSITIVE_FILE
 from . import addLogHandlerCLI, read_cfg_paths
 
 NAME = 'render'
-DEFAULT_INSTANCE_DATA = '/run/cloud-init/instance-data.json'
 
 LOG = log.getLogger(NAME)
 
@@ -47,12 +46,22 @@
     @return 0 on success, 1 on failure.
     """
     addLogHandlerCLI(LOG, log.DEBUG if args.debug else log.WARNING)
-    if not args.instance_data:
-        paths = read_cfg_paths()
-        instance_data_fn = os.path.join(
-            paths.run_dir, INSTANCE_JSON_FILE)
-    else:
+    if args.instance_data:
         instance_data_fn = args.instance_data
+    else:
+        paths = read_cfg_paths()
+        uid = os.getuid()
+        redacted_data_fn = os.path.join(paths.run_dir, INSTANCE_JSON_FILE)
+        if uid == 0:
+            instance_data_fn = os.path.join(
+                paths.run_dir, INSTANCE_JSON_SENSITIVE_FILE)
+            if not os.path.exists(instance_data_fn):
+                LOG.warning(
+                    'Missing root-readable %s. Using redacted %s instead.',
+                    instance_data_fn, redacted_data_fn)
+                instance_data_fn = redacted_data_fn
+        else:
+            instance_data_fn = redacted_data_fn
     if not os.path.exists(instance_data_fn):
         LOG.error('Missing instance-data.json file: %s', instance_data_fn)
         return 1
@@ -62,10 +71,14 @@
     except IOError:
         LOG.error('Missing user-data file: %s', args.user_data)
         return 1
-    rendered_payload = render_jinja_payload_from_file(
-        payload=user_data, payload_fn=args.user_data,
-        instance_data_file=instance_data_fn,
-        debug=True if args.debug else False)
+    try:
+        rendered_payload = render_jinja_payload_from_file(
+            payload=user_data, payload_fn=args.user_data,
+            instance_data_file=instance_data_fn,
+            debug=True if args.debug else False)
+    except RuntimeError as e:
+        LOG.error('Cannot render from instance data: %s', str(e))
+        return 1
    if not rendered_payload:
         LOG.error('Unable to render user-data file: %s', args.user_data)
         return 1
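The render change boils down to a small selection rule: an explicit `--instance-data` path wins; otherwise root prefers the sensitive (unredacted) file, falling back to the redacted one if it is missing, and non-root users always get the redacted file. That rule alone can be sketched as (paths hard-coded for illustration; the real code builds them from `read_cfg_paths()` and logs a warning on the fallback):

```python
REDACTED = '/run/cloud-init/instance-data.json'
SENSITIVE = '/run/cloud-init/instance-data-sensitive.json'


def pick_instance_data(explicit_path, uid, sensitive_exists):
    """Mirror handle_args' choice of which instance-data file to render.

    explicit_path: value of --instance-data, or None.
    uid: effective uid of the caller.
    sensitive_exists: whether the root-readable sensitive file is present.
    """
    if explicit_path:
        return explicit_path
    if uid == 0 and sensitive_exists:
        return SENSITIVE
    return REDACTED  # non-root, or sensitive file missing
```

Keeping the fallback explicit means a root user on a host where the sensitive file was never written still gets a usable (redacted) render instead of an error.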
diff --git a/cloudinit/cmd/devel/tests/test_logs.py b/cloudinit/cmd/devel/tests/test_logs.py
index 98b4756..4951797 100644
--- a/cloudinit/cmd/devel/tests/test_logs.py
+++ b/cloudinit/cmd/devel/tests/test_logs.py
@@ -1,13 +1,17 @@
 # This file is part of cloud-init. See LICENSE file for license information.
 
-from cloudinit.cmd.devel import logs
-from cloudinit.util import ensure_dir, load_file, subp, write_file
-from cloudinit.tests.helpers import FilesystemMockingTestCase, wrap_and_call
 from datetime import datetime
-import mock
 import os
+from six import StringIO
+
+from cloudinit.cmd.devel import logs
+from cloudinit.sources import INSTANCE_JSON_SENSITIVE_FILE
+from cloudinit.tests.helpers import (
+    FilesystemMockingTestCase, mock, wrap_and_call)
+from cloudinit.util import ensure_dir, load_file, subp, write_file
 
 
+@mock.patch('cloudinit.cmd.devel.logs.os.getuid')
 class TestCollectLogs(FilesystemMockingTestCase):
 
     def setUp(self):
@@ -15,14 +19,29 @@ class TestCollectLogs(FilesystemMockingTestCase):
         self.new_root = self.tmp_dir()
         self.run_dir = self.tmp_path('run', self.new_root)
 
-    def test_collect_logs_creates_tarfile(self):
+    def test_collect_logs_with_userdata_requires_root_user(self, m_getuid):
+        """collect-logs errors when non-root user collects userdata."""
+        m_getuid.return_value = 100  # non-root
+        output_tarfile = self.tmp_path('logs.tgz')
+        with mock.patch('sys.stderr', new_callable=StringIO) as m_stderr:
+            self.assertEqual(
+                1, logs.collect_logs(output_tarfile, include_userdata=True))
+        self.assertEqual(
+            'To include userdata, root user is required.'
+            ' Try sudo cloud-init collect-logs\n',
+            m_stderr.getvalue())
+
+    def test_collect_logs_creates_tarfile(self, m_getuid):
         """collect-logs creates a tarfile with all related cloud-init info."""
+        m_getuid.return_value = 100
         log1 = self.tmp_path('cloud-init.log', self.new_root)
         write_file(log1, 'cloud-init-log')
         log2 = self.tmp_path('cloud-init-output.log', self.new_root)
         write_file(log2, 'cloud-init-output-log')
         ensure_dir(self.run_dir)
         write_file(self.tmp_path('results.json', self.run_dir), 'results')
+        write_file(self.tmp_path(INSTANCE_JSON_SENSITIVE_FILE, self.run_dir),
+                   'sensitive')
         output_tarfile = self.tmp_path('logs.tgz')
 
         date = datetime.utcnow().date().strftime('%Y-%m-%d')
@@ -59,6 +78,11 @@ class TestCollectLogs(FilesystemMockingTestCase):
         # unpack the tarfile and check file contents
461 | 60 | subp(['tar', 'zxvf', output_tarfile, '-C', self.new_root]) | 79 | subp(['tar', 'zxvf', output_tarfile, '-C', self.new_root]) |
462 | 61 | out_logdir = self.tmp_path(date_logdir, self.new_root) | 80 | out_logdir = self.tmp_path(date_logdir, self.new_root) |
463 | 81 | self.assertFalse( | ||
464 | 82 | os.path.exists( | ||
465 | 83 | os.path.join(out_logdir, 'run', 'cloud-init', | ||
466 | 84 | INSTANCE_JSON_SENSITIVE_FILE)), | ||
467 | 85 | 'Unexpected file found: %s' % INSTANCE_JSON_SENSITIVE_FILE) | ||
468 | 62 | self.assertEqual( | 86 | self.assertEqual( |
469 | 63 | '0.7fake\n', | 87 | '0.7fake\n', |
470 | 64 | load_file(os.path.join(out_logdir, 'dpkg-version'))) | 88 | load_file(os.path.join(out_logdir, 'dpkg-version'))) |
471 | @@ -82,8 +106,9 @@ class TestCollectLogs(FilesystemMockingTestCase): | |||
472 | 82 | os.path.join(out_logdir, 'run', 'cloud-init', 'results.json'))) | 106 | os.path.join(out_logdir, 'run', 'cloud-init', 'results.json'))) |
473 | 83 | fake_stderr.write.assert_any_call('Wrote %s\n' % output_tarfile) | 107 | fake_stderr.write.assert_any_call('Wrote %s\n' % output_tarfile) |
474 | 84 | 108 | ||
476 | 85 | def test_collect_logs_includes_optional_userdata(self): | 109 | def test_collect_logs_includes_optional_userdata(self, m_getuid): |
477 | 86 | """collect-logs include userdata when --include-userdata is set.""" | 110 | """collect-logs include userdata when --include-userdata is set.""" |
478 | 111 | m_getuid.return_value = 0 | ||
479 | 87 | log1 = self.tmp_path('cloud-init.log', self.new_root) | 112 | log1 = self.tmp_path('cloud-init.log', self.new_root) |
480 | 88 | write_file(log1, 'cloud-init-log') | 113 | write_file(log1, 'cloud-init-log') |
481 | 89 | log2 = self.tmp_path('cloud-init-output.log', self.new_root) | 114 | log2 = self.tmp_path('cloud-init-output.log', self.new_root) |
482 | @@ -92,6 +117,8 @@ class TestCollectLogs(FilesystemMockingTestCase): | |||
483 | 92 | write_file(userdata, 'user-data') | 117 | write_file(userdata, 'user-data') |
484 | 93 | ensure_dir(self.run_dir) | 118 | ensure_dir(self.run_dir) |
485 | 94 | write_file(self.tmp_path('results.json', self.run_dir), 'results') | 119 | write_file(self.tmp_path('results.json', self.run_dir), 'results') |
486 | 120 | write_file(self.tmp_path(INSTANCE_JSON_SENSITIVE_FILE, self.run_dir), | ||
487 | 121 | 'sensitive') | ||
488 | 95 | output_tarfile = self.tmp_path('logs.tgz') | 122 | output_tarfile = self.tmp_path('logs.tgz') |
489 | 96 | 123 | ||
490 | 97 | date = datetime.utcnow().date().strftime('%Y-%m-%d') | 124 | date = datetime.utcnow().date().strftime('%Y-%m-%d') |
491 | @@ -132,4 +159,8 @@ class TestCollectLogs(FilesystemMockingTestCase): | |||
492 | 132 | self.assertEqual( | 159 | self.assertEqual( |
493 | 133 | 'user-data', | 160 | 'user-data', |
494 | 134 | load_file(os.path.join(out_logdir, 'user-data.txt'))) | 161 | load_file(os.path.join(out_logdir, 'user-data.txt'))) |
495 | 162 | self.assertEqual( | ||
496 | 163 | 'sensitive', | ||
497 | 164 | load_file(os.path.join(out_logdir, 'run', 'cloud-init', | ||
498 | 165 | INSTANCE_JSON_SENSITIVE_FILE))) | ||
499 | 135 | fake_stderr.write.assert_any_call('Wrote %s\n' % output_tarfile) | 166 | fake_stderr.write.assert_any_call('Wrote %s\n' % output_tarfile) |
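The pattern introduced in the test_logs.py changes above is a class-level `@mock.patch` decorator: it applies the patch to every test method and passes the mock object in as an extra positional argument, which is why each test gained an `m_getuid` parameter. A minimal standalone sketch of the same idiom (using the stdlib `unittest.mock`; the diff itself uses the external six-era `mock` package, and the class name here is illustrative):

```python
import os
import unittest
from unittest import mock


# Decorating the class patches os.getuid for every test method, and each
# method receives the mock as an extra argument after self.
@mock.patch('os.getuid')
class TestGetuidPatched(unittest.TestCase):

    def test_as_root(self, m_getuid):
        m_getuid.return_value = 0
        self.assertEqual(0, os.getuid())

    def test_as_non_root(self, m_getuid):
        m_getuid.return_value = 100
        self.assertEqual(100, os.getuid())
```

This avoids repeating a `with mock.patch(...)` block in every test body when all tests in the class need the same patch.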
diff --git a/cloudinit/cmd/devel/tests/test_render.py b/cloudinit/cmd/devel/tests/test_render.py
index fc5d2c0..988bba0 100644
--- a/cloudinit/cmd/devel/tests/test_render.py
+++ b/cloudinit/cmd/devel/tests/test_render.py
@@ -6,7 +6,7 @@ import os
 from collections import namedtuple
 from cloudinit.cmd.devel import render
 from cloudinit.helpers import Paths
-from cloudinit.sources import INSTANCE_JSON_FILE
+from cloudinit.sources import INSTANCE_JSON_FILE, INSTANCE_JSON_SENSITIVE_FILE
 from cloudinit.tests.helpers import CiTestCase, mock, skipUnlessJinja
 from cloudinit.util import ensure_dir, write_file
 
@@ -63,6 +63,49 @@ class TestRender(CiTestCase):
             'Missing instance-data.json file: %s' % json_file,
             self.logs.getvalue())
 
+    def test_handle_args_root_fallback_from_sensitive_instance_data(self):
+        """When root user defaults to sensitive.json."""
+        user_data = self.tmp_path('user-data', dir=self.tmp)
+        run_dir = self.tmp_path('run_dir', dir=self.tmp)
+        ensure_dir(run_dir)
+        paths = Paths({'run_dir': run_dir})
+        self.add_patch('cloudinit.cmd.devel.render.read_cfg_paths', 'm_paths')
+        self.m_paths.return_value = paths
+        args = self.args(
+            user_data=user_data, instance_data=None, debug=False)
+        with mock.patch('sys.stderr', new_callable=StringIO):
+            with mock.patch('os.getuid') as m_getuid:
+                m_getuid.return_value = 0
+                self.assertEqual(1, render.handle_args('anyname', args))
+        json_file = os.path.join(run_dir, INSTANCE_JSON_FILE)
+        json_sensitive = os.path.join(run_dir, INSTANCE_JSON_SENSITIVE_FILE)
+        self.assertIn(
+            'WARNING: Missing root-readable %s. Using redacted %s' % (
+                json_sensitive, json_file), self.logs.getvalue())
+        self.assertIn(
+            'ERROR: Missing instance-data.json file: %s' % json_file,
+            self.logs.getvalue())
+
+    def test_handle_args_root_uses_sensitive_instance_data(self):
+        """When root user, and no instance-data arg, use sensitive.json."""
+        user_data = self.tmp_path('user-data', dir=self.tmp)
+        write_file(user_data, '##template: jinja\nrendering: {{ my_var }}')
+        run_dir = self.tmp_path('run_dir', dir=self.tmp)
+        ensure_dir(run_dir)
+        json_sensitive = os.path.join(run_dir, INSTANCE_JSON_SENSITIVE_FILE)
+        write_file(json_sensitive, '{"my-var": "jinja worked"}')
+        paths = Paths({'run_dir': run_dir})
+        self.add_patch('cloudinit.cmd.devel.render.read_cfg_paths', 'm_paths')
+        self.m_paths.return_value = paths
+        args = self.args(
+            user_data=user_data, instance_data=None, debug=False)
+        with mock.patch('sys.stderr', new_callable=StringIO):
+            with mock.patch('sys.stdout', new_callable=StringIO) as m_stdout:
+                with mock.patch('os.getuid') as m_getuid:
+                    m_getuid.return_value = 0
+                    self.assertEqual(0, render.handle_args('anyname', args))
+        self.assertIn('rendering: jinja worked', m_stdout.getvalue())
+
     @skipUnlessJinja()
     def test_handle_args_renders_instance_data_vars_in_template(self):
         """If user_data file is a jinja template render instance-data vars."""
diff --git a/cloudinit/cmd/main.py b/cloudinit/cmd/main.py
index 5a43702..933c019 100644
--- a/cloudinit/cmd/main.py
+++ b/cloudinit/cmd/main.py
@@ -41,7 +41,7 @@ from cloudinit.settings import (PER_INSTANCE, PER_ALWAYS, PER_ONCE,
 from cloudinit import atomic_helper
 
 from cloudinit.config import cc_set_hostname
-from cloudinit.dhclient_hook import LogDhclient
+from cloudinit import dhclient_hook
 
 
 # Welcome message template
@@ -586,12 +586,6 @@ def main_single(name, args):
     return 0
 
 
-def dhclient_hook(name, args):
-    record = LogDhclient(args)
-    record.check_hooks_dir()
-    record.record()
-
-
 def status_wrapper(name, args, data_d=None, link_d=None):
     if data_d is None:
         data_d = os.path.normpath("/var/lib/cloud/data")
@@ -795,15 +789,9 @@ def main(sysv_args=None):
         'query',
         help='Query standardized instance metadata from the command line.')
 
-    parser_dhclient = subparsers.add_parser('dhclient-hook',
-                                            help=('run the dhclient hook'
-                                                  'to record network info'))
-    parser_dhclient.add_argument("net_action",
-                                 help=('action taken on the interface'))
-    parser_dhclient.add_argument("net_interface",
-                                 help=('the network interface being acted'
-                                       ' upon'))
-    parser_dhclient.set_defaults(action=('dhclient_hook', dhclient_hook))
+    parser_dhclient = subparsers.add_parser(
+        dhclient_hook.NAME, help=dhclient_hook.__doc__)
+    dhclient_hook.get_parser(parser_dhclient)
 
    parser_features = subparsers.add_parser('features',
                                            help=('list defined features'))
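The main.py change above moves the dhclient-hook argument definitions out of `main()` and into the subcommand module itself, which exposes `NAME` and `get_parser()`. A hypothetical standalone mirror of that wiring pattern (the argument names and helper names here are illustrative, not cloud-init's exact ones):

```python
import argparse

# The subcommand module owns its name and its arguments.
NAME = 'dhclient-hook'


def get_parser(parser=None):
    """Build or extend an argument parser for the subcommand."""
    if parser is None:
        parser = argparse.ArgumentParser(prog=NAME)
    parser.add_argument('event', help='hook event, e.g. up or down')
    parser.add_argument('interface', help='the network interface acted upon')
    return parser


def main_parser():
    """The top-level CLI only registers the subcommand; it does not
    repeat the argument definitions."""
    parser = argparse.ArgumentParser(prog='cloud-init')
    subparsers = parser.add_subparsers(dest='subcommand')
    get_parser(subparsers.add_parser(NAME, help='run the dhclient hook'))
    return parser


ns = main_parser().parse_args(['dhclient-hook', 'up', 'eth0'])
assert (ns.event, ns.interface) == ('up', 'eth0')
```

This keeps each subcommand's interface in one place, so `cloud-init dhclient-hook ...` and any standalone entry point parse identically.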
diff --git a/cloudinit/cmd/query.py b/cloudinit/cmd/query.py
index 7d2d4fe..1d888b9 100644
--- a/cloudinit/cmd/query.py
+++ b/cloudinit/cmd/query.py
@@ -3,6 +3,7 @@
 """Query standardized instance metadata from the command line."""
 
 import argparse
+from errno import EACCES
 import os
 import six
 import sys
@@ -79,27 +80,38 @@ def handle_args(name, args):
     uid = os.getuid()
     if not all([args.instance_data, args.user_data, args.vendor_data]):
         paths = read_cfg_paths()
-        if not args.instance_data:
-            if uid == 0:
-                default_json_fn = INSTANCE_JSON_SENSITIVE_FILE
-            else:
-                default_json_fn = INSTANCE_JSON_FILE  # World readable
-            instance_data_fn = os.path.join(paths.run_dir, default_json_fn)
-        else:
-            instance_data_fn = args.instance_data
-        if not args.user_data:
-            user_data_fn = os.path.join(paths.instance_link, 'user-data.txt')
-        else:
-            user_data_fn = args.user_data
-        if not args.vendor_data:
-            vendor_data_fn = os.path.join(paths.instance_link, 'vendor-data.txt')
-        else:
-            vendor_data_fn = args.vendor_data
+        if args.instance_data:
+            instance_data_fn = args.instance_data
+        else:
+            redacted_data_fn = os.path.join(paths.run_dir, INSTANCE_JSON_FILE)
+            if uid == 0:
+                sensitive_data_fn = os.path.join(
+                    paths.run_dir, INSTANCE_JSON_SENSITIVE_FILE)
+                if os.path.exists(sensitive_data_fn):
+                    instance_data_fn = sensitive_data_fn
+                else:
+                    LOG.warning(
+                        'Missing root-readable %s. Using redacted %s instead.',
+                        sensitive_data_fn, redacted_data_fn)
+                    instance_data_fn = redacted_data_fn
+            else:
+                instance_data_fn = redacted_data_fn
+        if args.user_data:
+            user_data_fn = args.user_data
+        else:
+            user_data_fn = os.path.join(paths.instance_link, 'user-data.txt')
+        if args.vendor_data:
+            vendor_data_fn = args.vendor_data
+        else:
+            vendor_data_fn = os.path.join(paths.instance_link, 'vendor-data.txt')
 
     try:
         instance_json = util.load_file(instance_data_fn)
-    except IOError:
-        LOG.error('Missing instance-data.json file: %s', instance_data_fn)
+    except (IOError, OSError) as e:
+        if e.errno == EACCES:
+            LOG.error("No read permission on '%s'. Try sudo", instance_data_fn)
+        else:
+            LOG.error('Missing instance-data file: %s', instance_data_fn)
         return 1
 
     instance_data = util.load_json(instance_json)
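The query.py change above stops treating every read failure as "file missing" and instead inspects `errno` to distinguish a permission problem from an absent file. The branching can be sketched in isolation; `describe_read_error` is a hypothetical helper for illustration, not part of cloud-init (the real handler logs via `LOG.error` and returns 1):

```python
import errno


def describe_read_error(path):
    """Read path, or return an error message in the style of the handler above."""
    try:
        with open(path) as stream:
            return stream.read()
    except (IOError, OSError) as e:
        if e.errno == errno.EACCES:
            # Permission denied: the file exists but the caller lacks access.
            return "No read permission on '%s'. Try sudo" % path
        # Any other errno (typically ENOENT) is reported as a missing file.
        return 'Missing instance-data file: %s' % path


print(describe_read_error('/nonexistent/instance-data.json'))
# prints: Missing instance-data file: /nonexistent/instance-data.json
```

Catching both `IOError` and `OSError` matters for the Python 2/3 span this code supported: `IOError` is an alias of `OSError` on Python 3 but a distinct class on Python 2.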
diff --git a/cloudinit/cmd/tests/test_cloud_id.py b/cloudinit/cmd/tests/test_cloud_id.py
new file mode 100644
index 0000000..7373817
--- /dev/null
+++ b/cloudinit/cmd/tests/test_cloud_id.py
@@ -0,0 +1,127 @@
+# This file is part of cloud-init. See LICENSE file for license information.
+
+"""Tests for cloud-id command line utility."""
+
+from cloudinit import util
+from collections import namedtuple
+from six import StringIO
+
+from cloudinit.cmd import cloud_id
+
+from cloudinit.tests.helpers import CiTestCase, mock
+
+
+class TestCloudId(CiTestCase):
+
+    args = namedtuple('cloudidargs', ('instance_data json long'))
+
+    def setUp(self):
+        super(TestCloudId, self).setUp()
+        self.tmp = self.tmp_dir()
+        self.instance_data = self.tmp_path('instance-data.json', dir=self.tmp)
+
+    def test_cloud_id_arg_parser_defaults(self):
+        """Validate the argument defaults when not provided by the end-user."""
+        cmd = ['cloud-id']
+        with mock.patch('sys.argv', cmd):
+            args = cloud_id.get_parser().parse_args()
+        self.assertEqual(
+            '/run/cloud-init/instance-data.json',
+            args.instance_data)
+        self.assertEqual(False, args.long)
+        self.assertEqual(False, args.json)
+
+    def test_cloud_id_arg_parse_overrides(self):
+        """Override argument defaults by specifying values for each param."""
+        util.write_file(self.instance_data, '{}')
+        cmd = ['cloud-id', '--instance-data', self.instance_data, '--long',
+               '--json']
+        with mock.patch('sys.argv', cmd):
+            args = cloud_id.get_parser().parse_args()
+        self.assertEqual(self.instance_data, args.instance_data)
+        self.assertEqual(True, args.long)
+        self.assertEqual(True, args.json)
+
+    def test_cloud_id_missing_instance_data_json(self):
+        """Exit error when the provided instance-data.json does not exist."""
+        cmd = ['cloud-id', '--instance-data', self.instance_data]
+        with mock.patch('sys.argv', cmd):
+            with mock.patch('sys.stderr', new_callable=StringIO) as m_stderr:
+                with self.assertRaises(SystemExit) as context_manager:
+                    cloud_id.main()
+        self.assertEqual(1, context_manager.exception.code)
+        self.assertIn(
+            "ERROR: File not found '%s'" % self.instance_data,
+            m_stderr.getvalue())
+
+    def test_cloud_id_non_json_instance_data(self):
+        """Exit error when the provided instance-data.json is not json."""
+        cmd = ['cloud-id', '--instance-data', self.instance_data]
+        util.write_file(self.instance_data, '{')
+        with mock.patch('sys.argv', cmd):
+            with mock.patch('sys.stderr', new_callable=StringIO) as m_stderr:
+                with self.assertRaises(SystemExit) as context_manager:
+                    cloud_id.main()
+        self.assertEqual(1, context_manager.exception.code)
+        self.assertIn(
+            "ERROR: File '%s' is not valid json." % self.instance_data,
+            m_stderr.getvalue())
+
+    def test_cloud_id_from_cloud_name_in_instance_data(self):
+        """Report canonical cloud-id from cloud_name in instance-data."""
+        util.write_file(
+            self.instance_data,
+            '{"v1": {"cloud_name": "mycloud", "region": "somereg"}}')
+        cmd = ['cloud-id', '--instance-data', self.instance_data]
+        with mock.patch('sys.argv', cmd):
+            with mock.patch('sys.stdout', new_callable=StringIO) as m_stdout:
+                with self.assertRaises(SystemExit) as context_manager:
+                    cloud_id.main()
+        self.assertEqual(0, context_manager.exception.code)
+        self.assertEqual("mycloud\n", m_stdout.getvalue())
+
+    def test_cloud_id_long_name_from_instance_data(self):
+        """Report long cloud-id format from cloud_name and region."""
+        util.write_file(
+            self.instance_data,
+            '{"v1": {"cloud_name": "mycloud", "region": "somereg"}}')
+        cmd = ['cloud-id', '--instance-data', self.instance_data, '--long']
+        with mock.patch('sys.argv', cmd):
+            with mock.patch('sys.stdout', new_callable=StringIO) as m_stdout:
+                with self.assertRaises(SystemExit) as context_manager:
+                    cloud_id.main()
+        self.assertEqual(0, context_manager.exception.code)
+        self.assertEqual("mycloud\tsomereg\n", m_stdout.getvalue())
+
+    def test_cloud_id_lookup_from_instance_data_region(self):
+        """Report discovered canonical cloud_id when region lookup matches."""
+        util.write_file(
+            self.instance_data,
+            '{"v1": {"cloud_name": "aws", "region": "cn-north-1",'
+            ' "platform": "ec2"}}')
+        cmd = ['cloud-id', '--instance-data', self.instance_data, '--long']
+        with mock.patch('sys.argv', cmd):
+            with mock.patch('sys.stdout', new_callable=StringIO) as m_stdout:
+                with self.assertRaises(SystemExit) as context_manager:
+                    cloud_id.main()
+        self.assertEqual(0, context_manager.exception.code)
+        self.assertEqual("aws-china\tcn-north-1\n", m_stdout.getvalue())
+
+    def test_cloud_id_lookup_json_instance_data_adds_cloud_id_to_json(self):
+        """Report v1 instance-data content with cloud_id when --json set."""
+        util.write_file(
+            self.instance_data,
+            '{"v1": {"cloud_name": "unknown", "region": "dfw",'
+            ' "platform": "openstack", "public_ssh_keys": []}}')
+        expected = util.json_dumps({
+            'cloud_id': 'openstack', 'cloud_name': 'unknown',
+            'platform': 'openstack', 'public_ssh_keys': [], 'region': 'dfw'})
+        cmd = ['cloud-id', '--instance-data', self.instance_data, '--json']
+        with mock.patch('sys.argv', cmd):
+            with mock.patch('sys.stdout', new_callable=StringIO) as m_stdout:
+                with self.assertRaises(SystemExit) as context_manager:
+                    cloud_id.main()
+        self.assertEqual(0, context_manager.exception.code)
+        self.assertEqual(expected + '\n', m_stdout.getvalue())
+
+# vi: ts=4 expandtab
diff --git a/cloudinit/cmd/tests/test_query.py b/cloudinit/cmd/tests/test_query.py
index fb87c6a..28738b1 100644
--- a/cloudinit/cmd/tests/test_query.py
+++ b/cloudinit/cmd/tests/test_query.py
@@ -1,5 +1,6 @@
 # This file is part of cloud-init. See LICENSE file for license information.
 
+import errno
 from six import StringIO
 from textwrap import dedent
 import os
@@ -7,7 +8,8 @@ import os
 from collections import namedtuple
 from cloudinit.cmd import query
 from cloudinit.helpers import Paths
-from cloudinit.sources import REDACT_SENSITIVE_VALUE, INSTANCE_JSON_FILE
+from cloudinit.sources import (
+    REDACT_SENSITIVE_VALUE, INSTANCE_JSON_FILE, INSTANCE_JSON_SENSITIVE_FILE)
 from cloudinit.tests.helpers import CiTestCase, mock
 from cloudinit.util import ensure_dir, write_file
 
@@ -50,10 +52,28 @@ class TestQuery(CiTestCase):
         with mock.patch('sys.stderr', new_callable=StringIO) as m_stderr:
             self.assertEqual(1, query.handle_args('anyname', args))
         self.assertIn(
-            'ERROR: Missing instance-data.json file: %s' % absent_fn,
+            'ERROR: Missing instance-data file: %s' % absent_fn,
             self.logs.getvalue())
         self.assertIn(
-            'ERROR: Missing instance-data.json file: %s' % absent_fn,
+            'ERROR: Missing instance-data file: %s' % absent_fn,
+            m_stderr.getvalue())
+
+    def test_handle_args_error_when_no_read_permission_instance_data(self):
+        """When instance_data file is unreadable, log an error."""
+        noread_fn = self.tmp_path('unreadable', dir=self.tmp)
+        write_file(noread_fn, 'thou shall not pass')
+        args = self.args(
+            debug=False, dump_all=True, format=None, instance_data=noread_fn,
+            list_keys=False, user_data='ud', vendor_data='vd', varname=None)
+        with mock.patch('sys.stderr', new_callable=StringIO) as m_stderr:
+            with mock.patch('cloudinit.cmd.query.util.load_file') as m_load:
+                m_load.side_effect = OSError(errno.EACCES, 'Not allowed')
+                self.assertEqual(1, query.handle_args('anyname', args))
+        self.assertIn(
+            "ERROR: No read permission on '%s'. Try sudo" % noread_fn,
+            self.logs.getvalue())
+        self.assertIn(
+            "ERROR: No read permission on '%s'. Try sudo" % noread_fn,
             m_stderr.getvalue())
 
     def test_handle_args_defaults_instance_data(self):
@@ -70,12 +90,58 @@ class TestQuery(CiTestCase):
             self.assertEqual(1, query.handle_args('anyname', args))
         json_file = os.path.join(run_dir, INSTANCE_JSON_FILE)
         self.assertIn(
-            'ERROR: Missing instance-data.json file: %s' % json_file,
+            'ERROR: Missing instance-data file: %s' % json_file,
             self.logs.getvalue())
         self.assertIn(
-            'ERROR: Missing instance-data.json file: %s' % json_file,
+            'ERROR: Missing instance-data file: %s' % json_file,
             m_stderr.getvalue())
 
+    def test_handle_args_root_fallsback_to_instance_data(self):
+        """When no instance_data argument, root falls back to redacted json."""
+        args = self.args(
+            debug=False, dump_all=True, format=None, instance_data=None,
+            list_keys=False, user_data=None, vendor_data=None, varname=None)
+        run_dir = self.tmp_path('run_dir', dir=self.tmp)
+        ensure_dir(run_dir)
+        paths = Paths({'run_dir': run_dir})
+        self.add_patch('cloudinit.cmd.query.read_cfg_paths', 'm_paths')
+        self.m_paths.return_value = paths
+        with mock.patch('sys.stderr', new_callable=StringIO) as m_stderr:
+            with mock.patch('os.getuid') as m_getuid:
+                m_getuid.return_value = 0
+                self.assertEqual(1, query.handle_args('anyname', args))
+        json_file = os.path.join(run_dir, INSTANCE_JSON_FILE)
+        sensitive_file = os.path.join(run_dir, INSTANCE_JSON_SENSITIVE_FILE)
+        self.assertIn(
+            'WARNING: Missing root-readable %s. Using redacted %s instead.' % (
+                sensitive_file, json_file),
+            m_stderr.getvalue())
+
+    def test_handle_args_root_uses_instance_sensitive_data(self):
+        """When no instance_data argument, root uses sensitive json."""
+        user_data = self.tmp_path('user-data', dir=self.tmp)
+        vendor_data = self.tmp_path('vendor-data', dir=self.tmp)
+        write_file(user_data, 'ud')
+        write_file(vendor_data, 'vd')
+        run_dir = self.tmp_path('run_dir', dir=self.tmp)
+        sensitive_file = os.path.join(run_dir, INSTANCE_JSON_SENSITIVE_FILE)
+        write_file(sensitive_file, '{"my-var": "it worked"}')
+        ensure_dir(run_dir)
+        paths = Paths({'run_dir': run_dir})
+        self.add_patch('cloudinit.cmd.query.read_cfg_paths', 'm_paths')
+        self.m_paths.return_value = paths
+        args = self.args(
+            debug=False, dump_all=True, format=None, instance_data=None,
+            list_keys=False, user_data=vendor_data, vendor_data=vendor_data,
+            varname=None)
+        with mock.patch('sys.stdout', new_callable=StringIO) as m_stdout:
+            with mock.patch('os.getuid') as m_getuid:
+                m_getuid.return_value = 0
+                self.assertEqual(0, query.handle_args('anyname', args))
+        self.assertEqual(
+            '{\n "my_var": "it worked",\n "userdata": "vd",\n '
+            '"vendordata": "vd"\n}\n', m_stdout.getvalue())
+
     def test_handle_args_dumps_all_instance_data(self):
         """When --all is specified query will dump all instance data vars."""
         write_file(self.instance_data, '{"my-var": "it worked"}')
diff --git a/cloudinit/config/cc_disk_setup.py b/cloudinit/config/cc_disk_setup.py
index 943089e..29e192e 100644
--- a/cloudinit/config/cc_disk_setup.py
+++ b/cloudinit/config/cc_disk_setup.py
@@ -743,7 +743,7 @@ def assert_and_settle_device(device):
     util.udevadm_settle()
     if not os.path.exists(device):
         raise RuntimeError("Device %s did not exist and was not created "
-                           "with a udevamd settle." % device)
+                           "with a udevadm settle." % device)
 
     # Whether or not the device existed above, it is possible that udev
     # events that would populate udev database (for reading by lsdname) have
diff --git a/cloudinit/config/cc_lxd.py b/cloudinit/config/cc_lxd.py
index 24a8ebe..71d13ed 100644
--- a/cloudinit/config/cc_lxd.py
+++ b/cloudinit/config/cc_lxd.py
@@ -89,7 +89,7 @@ def handle(name, cfg, cloud, log, args):
         packages.append('lxd')
 
     if init_cfg.get("storage_backend") == "zfs" and not util.which('zfs'):
-        packages.append('zfs')
+        packages.append('zfsutils-linux')
 
     if len(packages):
         try:
diff --git a/cloudinit/config/cc_resizefs.py b/cloudinit/config/cc_resizefs.py
index 2edddd0..076b9d5 100644
--- a/cloudinit/config/cc_resizefs.py
+++ b/cloudinit/config/cc_resizefs.py
@@ -197,6 +197,13 @@ def maybe_get_writable_device_path(devpath, info, log):
     if devpath.startswith('gpt/'):
         log.debug('We have a gpt label - just go ahead')
         return devpath
+    # Alternatively, our device could simply be a name as returned by gpart,
+    # such as da0p3
+    if not devpath.startswith('/dev/') and not os.path.exists(devpath):
+        fulldevpath = '/dev/' + devpath.lstrip('/')
+        log.debug("'%s' doesn't appear to be a valid device path. Trying '%s'",
+                  devpath, fulldevpath)
+        devpath = fulldevpath

     try:
         statret = os.stat(devpath)
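The devpath fallback added above is easy to exercise on its own. This is a minimal sketch, not the real `maybe_get_writable_device_path` (which also logs and stats the path); `normalize_devpath` and its `exists` hook are names invented here for illustration:

```python
import os.path

def normalize_devpath(devpath, exists=os.path.exists):
    # Bare names such as 'da0p3' (as returned by gpart on FreeBSD) are
    # given a /dev/ prefix; already-valid paths pass through untouched.
    if not devpath.startswith('/dev/') and not exists(devpath):
        return '/dev/' + devpath.lstrip('/')
    return devpath

print(normalize_devpath('da0p3', exists=lambda p: False))    # /dev/da0p3
print(normalize_devpath('/dev/sda1', exists=lambda p: True))  # /dev/sda1
```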
diff --git a/cloudinit/config/cc_set_passwords.py b/cloudinit/config/cc_set_passwords.py
index 5ef9737..4585e4d 100755
--- a/cloudinit/config/cc_set_passwords.py
+++ b/cloudinit/config/cc_set_passwords.py
@@ -160,7 +160,7 @@ def handle(_name, cfg, cloud, log, args):
         hashed_users = []
         randlist = []
         users = []
-        prog = re.compile(r'\$[1,2a,2y,5,6](\$.+){2}')
+        prog = re.compile(r'\$(1|2a|2y|5|6)(\$.+){2}')
         for line in plist:
             u, p = line.split(':', 1)
             if prog.match(p) is not None and ":" not in p:
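The regex change above fixes a character-class/alternation mix-up: `[1,2a,2y,5,6]` matches any single one of the characters `1 , 2 a y 5 6`, not the multi-character crypt scheme ids. A quick standalone check (the hash strings below are made up, not real hashes):

```python
import re

# Old pattern: a character class, so multi-char scheme ids like '2y' fail
# and a bare comma is wrongly accepted as a "scheme".
old = re.compile(r'\$[1,2a,2y,5,6](\$.+){2}')
# New pattern: an alternation over the actual crypt scheme identifiers.
new = re.compile(r'\$(1|2a|2y|5|6)(\$.+){2}')

print(bool(old.match('$2y$10$fictionalsalt')))  # False: '2y' is two chars
print(bool(new.match('$2y$10$fictionalsalt')))  # True
print(bool(old.match('$,$a$b')))                # True: ',' is in the class
print(bool(new.match('$,$a$b')))                # False
```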
diff --git a/cloudinit/config/cc_write_files.py b/cloudinit/config/cc_write_files.py
index 31d1db6..0b6546e 100644
--- a/cloudinit/config/cc_write_files.py
+++ b/cloudinit/config/cc_write_files.py
@@ -49,6 +49,10 @@ binary gzip data can be specified and will be decoded before being written.
             ...
         path: /bin/arch
         permissions: '0555'
+    -   content: |
+            15 * * * * root ship_logs
+        path: /etc/crontab
+        append: true
 """

 import base64
@@ -113,7 +117,8 @@ def write_files(name, files):
         contents = extract_contents(f_info.get('content', ''), extractions)
         (u, g) = util.extract_usergroup(f_info.get('owner', DEFAULT_OWNER))
         perms = decode_perms(f_info.get('permissions'), DEFAULT_PERMS)
-        util.write_file(path, contents, mode=perms)
+        omode = 'ab' if util.get_cfg_option_bool(f_info, 'append') else 'wb'
+        util.write_file(path, contents, omode=omode, mode=perms)
         util.chownbyname(path, u, g)
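The new `append` handling amounts to choosing `'ab'` over `'wb'` as the open mode. A sketch of that decision with plain file I/O (`write_entry` is a stand-in invented here, not the real `util.write_file`, which also handles permissions and ownership):

```python
import os
import tempfile

def write_entry(path, content, append=False):
    # append: true keeps existing file content; the default truncates.
    omode = 'ab' if append else 'wb'
    with open(path, omode) as f:
        f.write(content)

tmpdir = tempfile.mkdtemp()
path = os.path.join(tmpdir, 'crontab')
write_entry(path, b'# existing line\n')
write_entry(path, b'15 * * * * root ship_logs\n', append=True)
with open(path, 'rb') as f:
    print(f.read())  # both lines survive the second write
```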
diff --git a/cloudinit/config/tests/test_set_passwords.py b/cloudinit/config/tests/test_set_passwords.py
index b051ec8..a2ea5ec 100644
--- a/cloudinit/config/tests/test_set_passwords.py
+++ b/cloudinit/config/tests/test_set_passwords.py
@@ -68,4 +68,44 @@ class TestHandleSshPwauth(CiTestCase):
         m_update.assert_called_with({optname: optval})
         m_subp.assert_not_called()

+
+class TestSetPasswordsHandle(CiTestCase):
+    """Test cc_set_passwords.handle"""
+
+    with_logs = True
+
+    def test_handle_on_empty_config(self):
+        """handle logs that no password has changed when config is empty."""
+        cloud = self.tmp_cloud(distro='ubuntu')
+        setpass.handle(
+            'IGNORED', cfg={}, cloud=cloud, log=self.logger, args=[])
+        self.assertEqual(
+            "DEBUG: Leaving ssh config 'PasswordAuthentication' unchanged. "
+            'ssh_pwauth=None\n',
+            self.logs.getvalue())
+
+    @mock.patch(MODPATH + "util.subp")
+    def test_handle_on_chpasswd_list_parses_common_hashes(self, m_subp):
+        """handle parses command password hashes."""
+        cloud = self.tmp_cloud(distro='ubuntu')
+        valid_hashed_pwds = [
+            'root:$2y$10$8BQjxjVByHA/Ee.O1bCXtO8S7Y5WojbXWqnqYpUW.BrPx/'
+            'Dlew1Va',
+            'ubuntu:$6$5hOurLPO$naywm3Ce0UlmZg9gG2Fl9acWCVEoakMMC7dR52q'
+            'SDexZbrN9z8yHxhUM2b.sxpguSwOlbOQSW/HpXazGGx3oo1']
+        cfg = {'chpasswd': {'list': valid_hashed_pwds}}
+        with mock.patch(MODPATH + 'util.subp') as m_subp:
+            setpass.handle(
+                'IGNORED', cfg=cfg, cloud=cloud, log=self.logger, args=[])
+        self.assertIn(
+            'DEBUG: Handling input for chpasswd as list.',
+            self.logs.getvalue())
+        self.assertIn(
+            "DEBUG: Setting hashed password for ['root', 'ubuntu']",
+            self.logs.getvalue())
+        self.assertEqual(
+            [mock.call(['chpasswd', '-e'],
+                       '\n'.join(valid_hashed_pwds) + '\n')],
+            m_subp.call_args_list)
+
 # vi: ts=4 expandtab
diff --git a/cloudinit/dhclient_hook.py b/cloudinit/dhclient_hook.py
index 7f02d7f..72b51b6 100644
--- a/cloudinit/dhclient_hook.py
+++ b/cloudinit/dhclient_hook.py
@@ -1,5 +1,8 @@
 # This file is part of cloud-init. See LICENSE file for license information.

+"""Run the dhclient hook to record network info."""
+
+import argparse
 import os

 from cloudinit import atomic_helper
@@ -8,44 +11,75 @@ from cloudinit import stages

 LOG = logging.getLogger(__name__)

+NAME = "dhclient-hook"
+UP = "up"
+DOWN = "down"
+EVENTS = (UP, DOWN)
+
+
+def _get_hooks_dir():
+    i = stages.Init()
+    return os.path.join(i.paths.get_runpath(), 'dhclient.hooks')
+
+
+def _filter_env_vals(info):
+    """Given info (os.environ), return a dictionary with
+    lower case keys for each entry starting with DHCP4_ or new_."""
+    new_info = {}
+    for k, v in info.items():
+        if k.startswith("DHCP4_") or k.startswith("new_"):
+            key = (k.replace('DHCP4_', '').replace('new_', '')).lower()
+            new_info[key] = v
+    return new_info
+
+
+def run_hook(interface, event, data_d=None, env=None):
+    if event not in EVENTS:
+        raise ValueError("Unexpected event '%s'. Expected one of: %s" %
+                         (event, EVENTS))
+    if data_d is None:
+        data_d = _get_hooks_dir()
+    if env is None:
+        env = os.environ
+    hook_file = os.path.join(data_d, interface + ".json")
+
+    if event == UP:
+        if not os.path.exists(data_d):
+            os.makedirs(data_d)
+        atomic_helper.write_json(hook_file, _filter_env_vals(env))
+        LOG.debug("Wrote dhclient options in %s", hook_file)
+    elif event == DOWN:
+        if os.path.exists(hook_file):
+            os.remove(hook_file)
+            LOG.debug("Removed dhclient options file %s", hook_file)
+
+
+def get_parser(parser=None):
+    if parser is None:
+        parser = argparse.ArgumentParser(prog=NAME, description=__doc__)
+    parser.add_argument(
+        "event", help='event taken on the interface', choices=EVENTS)
+    parser.add_argument(
+        "interface", help='the network interface being acted upon')
+    # cloud-init main uses 'action'
+    parser.set_defaults(action=(NAME, handle_args))
+    return parser
+
+
+def handle_args(name, args, data_d=None):
+    """Handle the Namespace args.
+    Takes 'name' as passed by cloud-init main. not used here."""
+    return run_hook(interface=args.interface, event=args.event, data_d=data_d)
+
+
+if __name__ == '__main__':
+    import sys
+    parser = get_parser()
+    args = parser.parse_args(args=sys.argv[1:])
+    return_value = handle_args(
+        NAME, args, data_d=os.environ.get('_CI_DHCP_HOOK_DATA_D'))
+    if return_value:
+        sys.exit(return_value)

-
-class LogDhclient(object):
-
-    def __init__(self, cli_args):
-        self.hooks_dir = self._get_hooks_dir()
-        self.net_interface = cli_args.net_interface
-        self.net_action = cli_args.net_action
-        self.hook_file = os.path.join(self.hooks_dir,
-                                      self.net_interface + ".json")
-
-    @staticmethod
-    def _get_hooks_dir():
-        i = stages.Init()
-        return os.path.join(i.paths.get_runpath(), 'dhclient.hooks')
-
-    def check_hooks_dir(self):
-        if not os.path.exists(self.hooks_dir):
-            os.makedirs(self.hooks_dir)
-        else:
-            # If the action is down and the json file exists, we need to
-            # delete the file
-            if self.net_action is 'down' and os.path.exists(self.hook_file):
-                os.remove(self.hook_file)
-
-    @staticmethod
-    def get_vals(info):
-        new_info = {}
-        for k, v in info.items():
-            if k.startswith("DHCP4_") or k.startswith("new_"):
-                key = (k.replace('DHCP4_', '').replace('new_', '')).lower()
-                new_info[key] = v
-        return new_info
-
-    def record(self):
-        envs = os.environ
-        if self.hook_file is None:
-            return
-        atomic_helper.write_json(self.hook_file, self.get_vals(envs))
-        LOG.debug("Wrote dhclient options in %s", self.hook_file)

 # vi: ts=4 expandtab
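The environment filtering that `run_hook` relies on can be demonstrated standalone (the function body matches the `_filter_env_vals` in the diff; the environment values below are made-up examples):

```python
def filter_env_vals(info):
    # Keep keys prefixed DHCP4_ or new_, strip the prefix, and lowercase;
    # all other environment variables are dropped.
    new_info = {}
    for k, v in info.items():
        if k.startswith("DHCP4_") or k.startswith("new_"):
            key = (k.replace('DHCP4_', '').replace('new_', '')).lower()
            new_info[key] = v
    return new_info

env = {
    'DHCP4_IP_ADDRESS': '192.168.1.5',    # example values, not real output
    'new_subnet_mask': '255.255.255.0',
    'PATH': '/usr/bin',                   # unrelated vars are ignored
}
print(filter_env_vals(env))
# → {'ip_address': '192.168.1.5', 'subnet_mask': '255.255.255.0'}
```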
diff --git a/cloudinit/handlers/jinja_template.py b/cloudinit/handlers/jinja_template.py
index 3fa4097..ce3accf 100644
--- a/cloudinit/handlers/jinja_template.py
+++ b/cloudinit/handlers/jinja_template.py
@@ -1,5 +1,6 @@
 # This file is part of cloud-init. See LICENSE file for license information.

+from errno import EACCES
 import os
 import re

@@ -76,7 +77,14 @@ def render_jinja_payload_from_file(
         raise RuntimeError(
             'Cannot render jinja template vars. Instance data not yet'
             ' present at %s' % instance_data_file)
-    instance_data = load_json(load_file(instance_data_file))
+    try:
+        instance_data = load_json(load_file(instance_data_file))
+    except (IOError, OSError) as e:
+        if e.errno == EACCES:
+            raise RuntimeError(
+                'Cannot render jinja template vars. No read permission on'
+                " '%s'. Try sudo" % instance_data_file)
+
     rendered_payload = render_jinja_payload(
         payload, payload_fn, instance_data, debug)
     if not rendered_payload:
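The errno check above translates only a permission failure into a friendlier error. A sketch of that pattern in isolation (`load_instance_data`, `denied` and `try_load` are names invented here; unlike the diffed code, this sketch re-raises non-EACCES errors for clarity):

```python
from errno import EACCES

def load_instance_data(read_file, path):
    try:
        return read_file(path)
    except (IOError, OSError) as e:
        if e.errno == EACCES:
            # Permission problem: tell the user how to fix it.
            raise RuntimeError(
                "Cannot render jinja template vars. No read permission on"
                " '%s'. Try sudo" % path)
        raise  # anything else propagates unchanged

def denied(path):
    raise OSError(EACCES, 'Permission denied')

def try_load(path):
    try:
        load_instance_data(denied, path)
    except RuntimeError as e:
        return str(e)

msg = try_load('/run/cloud-init/instance-data.json')
print(msg)
```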
diff --git a/cloudinit/net/__init__.py b/cloudinit/net/__init__.py
index f83d368..3642fb1 100644
--- a/cloudinit/net/__init__.py
+++ b/cloudinit/net/__init__.py
@@ -12,6 +12,7 @@ import re

 from cloudinit.net.network_state import mask_to_net_prefix
 from cloudinit import util
+from cloudinit.url_helper import UrlError, readurl

 LOG = logging.getLogger(__name__)
 SYS_CLASS_NET = "/sys/class/net/"
@@ -612,7 +613,8 @@ def get_interfaces():
     Bridges and any devices that have a 'stolen' mac are excluded."""
     ret = []
     devs = get_devicelist()
-    empty_mac = '00:00:00:00:00:00'
+    # 16 somewhat arbitrarily chosen. Normally a mac is 6 '00:' tokens.
+    zero_mac = ':'.join(('00',) * 16)
     for name in devs:
         if not interface_has_own_mac(name):
             continue
@@ -624,7 +626,8 @@ def get_interfaces():
         # some devices may not have a mac (tun0)
         if not mac:
             continue
-        if mac == empty_mac and name != 'lo':
+        # skip nics that have no mac (00:00....)
+        if name != 'lo' and mac == zero_mac[:len(mac)]:
             continue
         ret.append((name, mac, device_driver(name), device_devid(name)))
     return ret
@@ -645,16 +648,36 @@ def get_ib_hwaddrs_by_interface():
     return ret


+def has_url_connectivity(url):
+    """Return true when the instance has access to the provided URL
+
+    Logs a warning if url is not the expected format.
+    """
+    if not any([url.startswith('http://'), url.startswith('https://')]):
+        LOG.warning(
+            "Ignoring connectivity check. Expected URL beginning with http*://"
+            " received '%s'", url)
+        return False
+    try:
+        readurl(url, timeout=5)
+    except UrlError:
+        return False
+    return True
+
+
 class EphemeralIPv4Network(object):
     """Context manager which sets up temporary static network configuration.

-    No operations are performed if the provided interface is already connected.
+    No operations are performed if the provided interface already has the
+    specified configuration.
+    This can be verified with the connectivity_url.
     If unconnected, bring up the interface with valid ip, prefix and broadcast.
     If router is provided setup a default route for that interface. Upon
     context exit, clean up the interface leaving no configuration behind.
     """

-    def __init__(self, interface, ip, prefix_or_mask, broadcast, router=None):
+    def __init__(self, interface, ip, prefix_or_mask, broadcast, router=None,
+                 connectivity_url=None):
         """Setup context manager and validate call signature.

         @param interface: Name of the network interface to bring up.
@@ -663,6 +686,8 @@ class EphemeralIPv4Network(object):
             prefix.
         @param broadcast: Broadcast address for the IPv4 network.
         @param router: Optionally the default gateway IP.
+        @param connectivity_url: Optionally, a URL to verify if a usable
+            connection already exists.
         """
         if not all([interface, ip, prefix_or_mask, broadcast]):
             raise ValueError(
@@ -673,6 +698,8 @@ class EphemeralIPv4Network(object):
         except ValueError as e:
             raise ValueError(
                 'Cannot setup network: {0}'.format(e))
+
+        self.connectivity_url = connectivity_url
         self.interface = interface
         self.ip = ip
         self.broadcast = broadcast
@@ -681,6 +708,13 @@ class EphemeralIPv4Network(object):

     def __enter__(self):
         """Perform ephemeral network setup if interface is not connected."""
+        if self.connectivity_url:
+            if has_url_connectivity(self.connectivity_url):
+                LOG.debug(
+                    'Skip ephemeral network setup, instance has connectivity'
+                    ' to %s', self.connectivity_url)
+                return
+
         self._bringup_device()
         if self.router:
             self._bringup_router()
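The `zero_mac[:len(mac)]` comparison in `get_interfaces` works for hardware addresses of varying lengths: a long all-zero string is sliced down to the candidate's length before comparing. A quick illustration (`is_zero_mac` is a name invented here to isolate the expression):

```python
# Build a generous all-zero address; 16 octet tokens, per the diff's
# comment that 16 is "somewhat arbitrarily chosen".
zero_mac = ':'.join(('00',) * 16)

def is_zero_mac(mac):
    # Truncate the zero string to the candidate's length, so addresses
    # shorter or longer than the usual 6 octets are handled uniformly.
    return mac == zero_mac[:len(mac)]

print(is_zero_mac('00:00:00:00:00:00'))    # True: the standard 6-octet case
print(is_zero_mac('aa:bb:cc:dd:ee:ff'))    # False: a real-looking mac
print(is_zero_mac(':'.join(('00',) * 8)))  # True: longer all-zero forms too
```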
diff --git a/cloudinit/net/dhcp.py b/cloudinit/net/dhcp.py
index 12cf509..c98a97c 100644
--- a/cloudinit/net/dhcp.py
+++ b/cloudinit/net/dhcp.py
@@ -9,9 +9,11 @@ import logging
 import os
 import re
 import signal
+import time

 from cloudinit.net import (
-    EphemeralIPv4Network, find_fallback_nic, get_devicelist)
+    EphemeralIPv4Network, find_fallback_nic, get_devicelist,
+    has_url_connectivity)
 from cloudinit.net.network_state import mask_and_ipv4_to_bcast_addr as bcip
 from cloudinit import temp_utils
 from cloudinit import util
@@ -37,37 +39,69 @@ class NoDHCPLeaseError(Exception):


 class EphemeralDHCPv4(object):
-    def __init__(self, iface=None):
+    def __init__(self, iface=None, connectivity_url=None):
         self.iface = iface
         self._ephipv4 = None
+        self.lease = None
+        self.connectivity_url = connectivity_url

     def __enter__(self):
+        """Setup sandboxed dhcp context, unless connectivity_url can already be
+        reached."""
+        if self.connectivity_url:
+            if has_url_connectivity(self.connectivity_url):
+                LOG.debug(
+                    'Skip ephemeral DHCP setup, instance has connectivity'
+                    ' to %s', self.connectivity_url)
+                return
+        return self.obtain_lease()
+
+    def __exit__(self, excp_type, excp_value, excp_traceback):
+        """Teardown sandboxed dhcp context."""
+        self.clean_network()
+
+    def clean_network(self):
+        """Exit _ephipv4 context to teardown of ip configuration performed."""
+        if self.lease:
+            self.lease = None
+        if not self._ephipv4:
+            return
+        self._ephipv4.__exit__(None, None, None)
+
+    def obtain_lease(self):
+        """Perform dhcp discovery in a sandboxed environment if possible.
+
+        @return: A dict representing dhcp options on the most recent lease
+            obtained from the dhclient discovery if run, otherwise an error
+            is raised.
+
+        @raises: NoDHCPLeaseError if no leases could be obtained.
+        """
+        if self.lease:
+            return self.lease
         try:
             leases = maybe_perform_dhcp_discovery(self.iface)
         except InvalidDHCPLeaseFileError:
             raise NoDHCPLeaseError()
         if not leases:
             raise NoDHCPLeaseError()
-        lease = leases[-1]
+        self.lease = leases[-1]
         LOG.debug("Received dhcp lease on %s for %s/%s",
-                  lease['interface'], lease['fixed-address'],
-                  lease['subnet-mask'])
+                  self.lease['interface'], self.lease['fixed-address'],
+                  self.lease['subnet-mask'])
         nmap = {'interface': 'interface', 'ip': 'fixed-address',
                 'prefix_or_mask': 'subnet-mask',
                 'broadcast': 'broadcast-address',
                 'router': 'routers'}
-        kwargs = dict([(k, lease.get(v)) for k, v in nmap.items()])
+        kwargs = dict([(k, self.lease.get(v)) for k, v in nmap.items()])
         if not kwargs['broadcast']:
             kwargs['broadcast'] = bcip(kwargs['prefix_or_mask'], kwargs['ip'])
+        if self.connectivity_url:
+            kwargs['connectivity_url'] = self.connectivity_url
         ephipv4 = EphemeralIPv4Network(**kwargs)
         ephipv4.__enter__()
         self._ephipv4 = ephipv4
-        return lease
-
-    def __exit__(self, excp_type, excp_value, excp_traceback):
-        if not self._ephipv4:
-            return
-        self._ephipv4.__exit__(excp_type, excp_value, excp_traceback)
+        return self.lease


 def maybe_perform_dhcp_discovery(nic=None):
@@ -94,7 +128,9 @@ def maybe_perform_dhcp_discovery(nic=None):
     if not dhclient_path:
         LOG.debug('Skip dhclient configuration: No dhclient command found.')
         return []
-    with temp_utils.tempdir(prefix='cloud-init-dhcp-', needs_exe=True) as tdir:
+    with temp_utils.tempdir(rmtree_ignore_errors=True,
+                            prefix='cloud-init-dhcp-',
+                            needs_exe=True) as tdir:
         # Use /var/tmp because /run/cloud-init/tmp is mounted noexec
         return dhcp_discovery(dhclient_path, nic, tdir)

@@ -162,24 +198,39 @@ def dhcp_discovery(dhclient_cmd_path, interface, cleandir):
            '-pf', pid_file, interface, '-sf', '/bin/true']
     util.subp(cmd, capture=True)

-    # dhclient doesn't write a pid file until after it forks when it gets a
-    # proper lease response. Since cleandir is a temp directory that gets
-    # removed, we need to wait for that pidfile creation before the
-    # cleandir is removed, otherwise we get FileNotFound errors.
+    # Wait for pid file and lease file to appear, and for the process
+    # named by the pid file to daemonize (have pid 1 as its parent). If we
+    # try to read the lease file before daemonization happens, we might try
+    # to read it before the dhclient has actually written it. We also have
+    # to wait until the dhclient has become a daemon so we can be sure to
+    # kill the correct process, thus freeing cleandir to be deleted back
+    # up the callstack.
     missing = util.wait_for_files(
         [pid_file, lease_file], maxwait=5, naplen=0.01)
     if missing:
         LOG.warning("dhclient did not produce expected files: %s",
                     ', '.join(os.path.basename(f) for f in missing))
         return []
-    pid_content = util.load_file(pid_file).strip()
-    try:
-        pid = int(pid_content)
-    except ValueError:
-        LOG.debug(
-            "pid file contains non-integer content '%s'", pid_content)
-    else:
-        os.kill(pid, signal.SIGKILL)
+
+    ppid = 'unknown'
+    for _ in range(0, 1000):
+        pid_content = util.load_file(pid_file).strip()
+        try:
+            pid = int(pid_content)
+        except ValueError:
+            pass
+        else:
+            ppid = util.get_proc_ppid(pid)
+            if ppid == 1:
+                LOG.debug('killing dhclient with pid=%s', pid)
+                os.kill(pid, signal.SIGKILL)
+                return parse_dhcp_lease_file(lease_file)
+        time.sleep(0.01)
+
+    LOG.error(
+        'dhclient(pid=%s, parentpid=%s) failed to daemonize after %s seconds',
+        pid_content, ppid, 0.01 * 1000
+    )
    return parse_dhcp_lease_file(lease_file)
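The new polling loop in `dhcp_discovery` waits for dhclient to daemonize (re-parent to pid 1) before killing it. The shape of that loop, isolated with stubs (`wait_for_daemon`, `read_pid` and `get_ppid` are invented here; the real code reads the pid file and `/proc/<pid>/stat` via `util.get_proc_ppid`):

```python
import time

def wait_for_daemon(read_pid, get_ppid, max_tries=1000, naplen=0.01):
    for _ in range(max_tries):
        try:
            pid = int(read_pid())
        except ValueError:
            pass  # pid file exists but is incompletely written
        else:
            if get_ppid(pid) == 1:
                return pid  # daemonized: now safe to SIGKILL this pid
        time.sleep(naplen)
    return None  # never daemonized within max_tries polls

# Toy stubs: the pid file is empty on the first poll, then the process
# re-parents to init on the third poll.
state = {'n': 0}

def read_pid():
    state['n'] += 1
    return '' if state['n'] == 1 else '4242'

def get_ppid(pid):
    return 99 if state['n'] < 3 else 1

print(wait_for_daemon(read_pid, get_ppid, naplen=0))  # → 4242
```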
diff --git a/cloudinit/net/eni.py b/cloudinit/net/eni.py
index c6f631a..6423632 100644
--- a/cloudinit/net/eni.py
+++ b/cloudinit/net/eni.py
@@ -371,22 +371,23 @@ class Renderer(renderer.Renderer):
             'gateway': 'gw',
             'metric': 'metric',
         }
+
+        default_gw = ''
         if route['network'] == '0.0.0.0' and route['netmask'] == '0.0.0.0':
-            default_gw = " default gw %s" % route['gateway']
-            content.append(up + default_gw + or_true)
-            content.append(down + default_gw + or_true)
+            default_gw = ' default'
         elif route['network'] == '::' and route['prefix'] == 0:
-            # ipv6!
-            default_gw = " -A inet6 default gw %s" % route['gateway']
-            content.append(up + default_gw + or_true)
-            content.append(down + default_gw + or_true)
-        else:
-            route_line = ""
-            for k in ['network', 'netmask', 'gateway', 'metric']:
-                if k in route:
-                    route_line += " %s %s" % (mapping[k], route[k])
-            content.append(up + route_line + or_true)
-            content.append(down + route_line + or_true)
+            default_gw = ' -A inet6 default'
+
+        route_line = ''
+        for k in ['network', 'netmask', 'gateway', 'metric']:
+            if default_gw and k in ['network', 'netmask']:
+                continue
+            if k == 'gateway':
+                route_line += '%s %s %s' % (default_gw, mapping[k], route[k])
+            elif k in route:
+                route_line += ' %s %s' % (mapping[k], route[k])
+        content.append(up + route_line + or_true)
+        content.append(down + route_line + or_true)
         return content

     def _render_iface(self, iface, render_hwaddress=False):
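One effect of the eni rewrite above: default routes now flow through the same keyword loop as ordinary routes, so options like `metric` are no longer dropped for them. A standalone sketch of the fixed logic (`render_route` and the `post-up`/`pre-down` prefixes are simplifications assumed here, not the Renderer's exact strings):

```python
def render_route(route, up='post-up route add', down='pre-down route del',
                 or_true=' || true'):
    mapping = {'network': '-net', 'netmask': 'netmask',
               'gateway': 'gw', 'metric': 'metric'}
    default_gw = ''
    if route.get('network') == '0.0.0.0' and route.get('netmask') == '0.0.0.0':
        default_gw = ' default'
    elif route.get('network') == '::' and route.get('prefix') == 0:
        default_gw = ' -A inet6 default'
    route_line = ''
    for k in ['network', 'netmask', 'gateway', 'metric']:
        if default_gw and k in ['network', 'netmask']:
            continue  # 'default' replaces the -net/netmask pair
        if k == 'gateway':
            route_line += '%s %s %s' % (default_gw, mapping[k], route[k])
        elif k in route:
            route_line += ' %s %s' % (mapping[k], route[k])
    return [up + route_line + or_true, down + route_line + or_true]

lines = render_route({'network': '0.0.0.0', 'netmask': '0.0.0.0',
                      'gateway': '10.0.0.1', 'metric': 100})
print(lines[0])
# → post-up route add default gw 10.0.0.1 metric 100 || true
```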
diff --git a/cloudinit/net/netplan.py b/cloudinit/net/netplan.py
index bc1087f..21517fd 100644
--- a/cloudinit/net/netplan.py
+++ b/cloudinit/net/netplan.py
@@ -114,13 +114,13 @@ def _extract_addresses(config, entry, ifname):
             for route in subnet.get('routes', []):
                 to_net = "%s/%s" % (route.get('network'),
                                     route.get('prefix'))
-                route = {
+                new_route = {
                     'via': route.get('gateway'),
                     'to': to_net,
                 }
                 if 'metric' in route:
-                    route.update({'metric': route.get('metric', 100)})
-                routes.append(route)
+                    new_route.update({'metric': route.get('metric', 100)})
+                routes.append(new_route)
 
             addresses.append(addr)
 
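The netplan change above fixes a loop-variable shadowing bug: rebinding `route` to the freshly built dict meant the later `'metric' in route` check inspected a dict that only ever held `via` and `to`, so the metric was silently dropped. A standalone sketch of the corrected translation (`translate_routes` is a hypothetical helper, not the real `_extract_addresses`):

```python
# Sketch of the fixed route translation: keep the source `route` dict
# visible by binding the netplan-shaped dict to a new name.
def translate_routes(subnet):
    routes = []
    for route in subnet.get('routes', []):
        to_net = '%s/%s' % (route.get('network'), route.get('prefix'))
        new_route = {'via': route.get('gateway'), 'to': to_net}
        if 'metric' in route:  # now checks the ORIGINAL route dict
            new_route.update({'metric': route.get('metric', 100)})
        routes.append(new_route)
    return routes
```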
diff --git a/cloudinit/net/sysconfig.py b/cloudinit/net/sysconfig.py
index 9c16d3a..fd8e501 100644
--- a/cloudinit/net/sysconfig.py
+++ b/cloudinit/net/sysconfig.py
@@ -10,11 +10,14 @@ from cloudinit.distros.parsers import resolv_conf
 from cloudinit import log as logging
 from cloudinit import util
 
+from configobj import ConfigObj
+
 from . import renderer
 from .network_state import (
     is_ipv6_addr, net_prefix_to_ipv4_mask, subnet_is_ipv6)
 
 LOG = logging.getLogger(__name__)
+NM_CFG_FILE = "/etc/NetworkManager/NetworkManager.conf"
 
 
 def _make_header(sep='#'):
@@ -46,6 +49,24 @@ def _quote_value(value):
     return value
 
 
+def enable_ifcfg_rh(path):
+    """Add ifcfg-rh to NetworkManager.cfg plugins if main section is present"""
+    config = ConfigObj(path)
+    if 'main' in config:
+        if 'plugins' in config['main']:
+            if 'ifcfg-rh' in config['main']['plugins']:
+                return
+        else:
+            config['main']['plugins'] = []
+
+        if isinstance(config['main']['plugins'], list):
+            config['main']['plugins'].append('ifcfg-rh')
+        else:
+            config['main']['plugins'] = [config['main']['plugins'], 'ifcfg-rh']
+        config.write()
+        LOG.debug('Enabled ifcfg-rh NetworkManager plugins')
+
+
 class ConfigMap(object):
     """Sysconfig like dictionary object."""
 
@@ -156,13 +177,23 @@ class Route(ConfigMap):
                                            _quote_value(gateway_value)))
                     buf.write("%s=%s\n" % ('NETMASK' + str(reindex),
                                            _quote_value(netmask_value)))
+                    metric_key = 'METRIC' + index
+                    if metric_key in self._conf:
+                        metric_value = str(self._conf['METRIC' + index])
+                        buf.write("%s=%s\n" % ('METRIC' + str(reindex),
+                                               _quote_value(metric_value)))
                 elif proto == "ipv6" and self.is_ipv6_route(address_value):
                     netmask_value = str(self._conf['NETMASK' + index])
                     gateway_value = str(self._conf['GATEWAY' + index])
-                    buf.write("%s/%s via %s dev %s\n" % (address_value,
-                                                         netmask_value,
-                                                         gateway_value,
-                                                         self._route_name))
+                    metric_value = (
+                        'metric ' + str(self._conf['METRIC' + index])
+                        if 'METRIC' + index in self._conf else '')
+                    buf.write(
+                        "%s/%s via %s %s dev %s\n" % (address_value,
+                                                      netmask_value,
+                                                      gateway_value,
+                                                      metric_value,
+                                                      self._route_name))
 
         return buf.getvalue()
 
@@ -370,6 +401,9 @@ class Renderer(renderer.Renderer):
             else:
                 iface_cfg['GATEWAY'] = subnet['gateway']
 
+            if 'metric' in subnet:
+                iface_cfg['METRIC'] = subnet['metric']
+
             if 'dns_search' in subnet:
                 iface_cfg['DOMAIN'] = ' '.join(subnet['dns_search'])
 
@@ -414,15 +448,19 @@ class Renderer(renderer.Renderer):
                     else:
                         iface_cfg['GATEWAY'] = route['gateway']
                         route_cfg.has_set_default_ipv4 = True
+                    if 'metric' in route:
+                        iface_cfg['METRIC'] = route['metric']
 
                 else:
                     gw_key = 'GATEWAY%s' % route_cfg.last_idx
                     nm_key = 'NETMASK%s' % route_cfg.last_idx
                     addr_key = 'ADDRESS%s' % route_cfg.last_idx
+                    metric_key = 'METRIC%s' % route_cfg.last_idx
                     route_cfg.last_idx += 1
                     # add default routes only to ifcfg files, not
                     # to route-* or route6-*
                     for (old_key, new_key) in [('gateway', gw_key),
+                                               ('metric', metric_key),
                                                ('netmask', nm_key),
                                                ('network', addr_key)]:
                         if old_key in route:
@@ -519,6 +557,8 @@ class Renderer(renderer.Renderer):
             content.add_nameserver(nameserver)
         for searchdomain in network_state.dns_searchdomains:
             content.add_search_domain(searchdomain)
+        if not str(content):
+            return None
         header = _make_header(';')
         content_str = str(content)
         if not content_str.startswith(header):
@@ -628,7 +668,8 @@ class Renderer(renderer.Renderer):
             dns_path = util.target_path(target, self.dns_path)
             resolv_content = self._render_dns(network_state,
                                               existing_dns_path=dns_path)
-            util.write_file(dns_path, resolv_content, file_mode)
+            if resolv_content:
+                util.write_file(dns_path, resolv_content, file_mode)
         if self.networkmanager_conf_path:
             nm_conf_path = util.target_path(target,
                                             self.networkmanager_conf_path)
@@ -640,6 +681,8 @@ class Renderer(renderer.Renderer):
             netrules_content = self._render_persistent_net(network_state)
             netrules_path = util.target_path(target, self.netrules_path)
             util.write_file(netrules_path, netrules_content, file_mode)
+        if available_nm(target=target):
+            enable_ifcfg_rh(util.target_path(target, path=NM_CFG_FILE))
 
         sysconfig_path = util.target_path(target, templates.get('control'))
         # Distros configuring /etc/sysconfig/network as a file e.g. Centos
@@ -654,6 +697,13 @@ class Renderer(renderer.Renderer):
 
 
 def available(target=None):
+    sysconfig = available_sysconfig(target=target)
+    nm = available_nm(target=target)
+
+    return any([nm, sysconfig])
+
+
+def available_sysconfig(target=None):
     expected = ['ifup', 'ifdown']
     search = ['/sbin', '/usr/sbin']
     for p in expected:
@@ -669,4 +719,10 @@ def available(target=None):
     return True
 
 
+def available_nm(target=None):
+    if not os.path.isfile(util.target_path(target, path=NM_CFG_FILE)):
+        return False
+    return True
+
+
 # vi: ts=4 expandtab
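The `Route.to_string` hunk above writes a `METRICn` key next to each reindexed `ADDRESSn`/`GATEWAYn`/`NETMASKn` triplet in a sysconfig route file. A stdlib-only sketch of that keyed rendering (`render_ipv4_routes` and its flat `conf` dict are illustrative, not the real `Route` class):

```python
import io

# Sketch of per-route keyed rendering: walk each route index present in
# the config map and emit its keys in a fixed order, reindexed from 0,
# including METRIC only when it was set for that route.
def render_ipv4_routes(conf, reindex_base=0):
    buf = io.StringIO()
    indexes = sorted(
        {k[len('ADDRESS'):] for k in conf if k.startswith('ADDRESS')})
    for reindex, index in enumerate(indexes, reindex_base):
        for key in ('ADDRESS', 'GATEWAY', 'NETMASK', 'METRIC'):
            if key + index in conf:
                buf.write('%s%d=%s\n' % (key, reindex, conf[key + index]))
    return buf.getvalue()
```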
diff --git a/cloudinit/net/tests/test_dhcp.py b/cloudinit/net/tests/test_dhcp.py
index db25b6f..79e8842 100644
--- a/cloudinit/net/tests/test_dhcp.py
+++ b/cloudinit/net/tests/test_dhcp.py
@@ -1,15 +1,17 @@
 # This file is part of cloud-init. See LICENSE file for license information.
 
+import httpretty
 import os
 import signal
 from textwrap import dedent
 
+import cloudinit.net as net
 from cloudinit.net.dhcp import (
     InvalidDHCPLeaseFileError, maybe_perform_dhcp_discovery,
     parse_dhcp_lease_file, dhcp_discovery, networkd_load_leases)
 from cloudinit.util import ensure_file, write_file
 from cloudinit.tests.helpers import (
-    CiTestCase, mock, populate_dir, wrap_and_call)
+    CiTestCase, HttprettyTestCase, mock, populate_dir, wrap_and_call)
 
 
 class TestParseDHCPLeasesFile(CiTestCase):
@@ -143,16 +145,20 @@ class TestDHCPDiscoveryClean(CiTestCase):
             'subnet-mask': '255.255.255.0', 'routers': '192.168.2.1'}],
             dhcp_discovery(dhclient_script, 'eth9', tmpdir))
         self.assertIn(
-            "pid file contains non-integer content ''", self.logs.getvalue())
+            "dhclient(pid=, parentpid=unknown) failed "
+            "to daemonize after 10.0 seconds",
+            self.logs.getvalue())
         m_kill.assert_not_called()
 
+    @mock.patch('cloudinit.net.dhcp.util.get_proc_ppid')
     @mock.patch('cloudinit.net.dhcp.os.kill')
     @mock.patch('cloudinit.net.dhcp.util.wait_for_files')
     @mock.patch('cloudinit.net.dhcp.util.subp')
     def test_dhcp_discovery_run_in_sandbox_waits_on_lease_and_pid(self,
                                                                   m_subp,
                                                                   m_wait,
-                                                                  m_kill):
+                                                                  m_kill,
+                                                                  m_getppid):
         """dhcp_discovery waits for the presence of pidfile and dhcp.leases."""
         tmpdir = self.tmp_dir()
         dhclient_script = os.path.join(tmpdir, 'dhclient.orig')
@@ -162,6 +168,7 @@ class TestDHCPDiscoveryClean(CiTestCase):
         pidfile = self.tmp_path('dhclient.pid', tmpdir)
         leasefile = self.tmp_path('dhcp.leases', tmpdir)
         m_wait.return_value = [pidfile]  # Return the missing pidfile wait for
+        m_getppid.return_value = 1  # Indicate that dhclient has daemonized
         self.assertEqual([], dhcp_discovery(dhclient_script, 'eth9', tmpdir))
         self.assertEqual(
             mock.call([pidfile, leasefile], maxwait=5, naplen=0.01),
@@ -171,9 +178,10 @@ class TestDHCPDiscoveryClean(CiTestCase):
             self.logs.getvalue())
         m_kill.assert_not_called()
 
+    @mock.patch('cloudinit.net.dhcp.util.get_proc_ppid')
     @mock.patch('cloudinit.net.dhcp.os.kill')
     @mock.patch('cloudinit.net.dhcp.util.subp')
-    def test_dhcp_discovery_run_in_sandbox(self, m_subp, m_kill):
+    def test_dhcp_discovery_run_in_sandbox(self, m_subp, m_kill, m_getppid):
         """dhcp_discovery brings up the interface and runs dhclient.
 
         It also returns the parsed dhcp.leases file generated in the sandbox.
@@ -195,6 +203,7 @@ class TestDHCPDiscoveryClean(CiTestCase):
         pid_file = os.path.join(tmpdir, 'dhclient.pid')
         my_pid = 1
         write_file(pid_file, "%d\n" % my_pid)
+        m_getppid.return_value = 1  # Indicate that dhclient has daemonized
 
         self.assertItemsEqual(
             [{'interface': 'eth9', 'fixed-address': '192.168.2.74',
@@ -321,3 +330,37 @@ class TestSystemdParseLeases(CiTestCase):
                                           '9': self.lxd_lease})
         self.assertEqual({'1': self.azure_parsed, '9': self.lxd_parsed},
                          networkd_load_leases(self.lease_d))
+
+
+class TestEphemeralDhcpNoNetworkSetup(HttprettyTestCase):
+
+    @mock.patch('cloudinit.net.dhcp.maybe_perform_dhcp_discovery')
+    def test_ephemeral_dhcp_no_network_if_url_connectivity(self, m_dhcp):
+        """No EphemeralDhcp4 network setup when connectivity_url succeeds."""
+        url = 'http://example.org/index.html'
+
+        httpretty.register_uri(httpretty.GET, url)
+        with net.dhcp.EphemeralDHCPv4(connectivity_url=url) as lease:
+            self.assertIsNone(lease)
+        # Ensure that no teardown happens:
+        m_dhcp.assert_not_called()
+
+    @mock.patch('cloudinit.net.dhcp.util.subp')
+    @mock.patch('cloudinit.net.dhcp.maybe_perform_dhcp_discovery')
+    def test_ephemeral_dhcp_setup_network_if_url_connectivity(
+            self, m_dhcp, m_subp):
+        """No EphemeralDhcp4 network setup when connectivity_url succeeds."""
+        url = 'http://example.org/index.html'
+        fake_lease = {
+            'interface': 'eth9', 'fixed-address': '192.168.2.2',
+            'subnet-mask': '255.255.0.0'}
+        m_dhcp.return_value = [fake_lease]
+        m_subp.return_value = ('', '')
+
+        httpretty.register_uri(httpretty.GET, url, body={}, status=404)
+        with net.dhcp.EphemeralDHCPv4(connectivity_url=url) as lease:
+            self.assertEqual(fake_lease, lease)
+        # Ensure that dhcp discovery occurs
+        m_dhcp.called_once_with()
+
+# vi: ts=4 expandtab
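The new `EphemeralDHCPv4(connectivity_url=...)` tests above exercise a short-circuit: when the URL already answers, no ephemeral network is brought up at all. A rough standalone sketch of that control flow (urllib stands in for cloud-init's `readurl`; `maybe_obtain_lease` is a hypothetical name, not the real context manager):

```python
import urllib.request

# Sketch: report whether a URL is reachable within a timeout.
def has_url_connectivity(url, timeout=5):
    try:
        urllib.request.urlopen(url, timeout=timeout)
        return True
    except Exception:
        return False

# Sketch of the short-circuit: skip DHCP discovery entirely when the
# connectivity_url already succeeds (we are evidently on a network).
def maybe_obtain_lease(connectivity_url, discover):
    if connectivity_url and has_url_connectivity(connectivity_url):
        return None       # no ephemeral network needed, nothing to tear down
    return discover()     # otherwise perform DHCP discovery
```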
diff --git a/cloudinit/net/tests/test_init.py b/cloudinit/net/tests/test_init.py
index 58e0a59..f55c31e 100644
--- a/cloudinit/net/tests/test_init.py
+++ b/cloudinit/net/tests/test_init.py
@@ -2,14 +2,16 @@
 
 import copy
 import errno
+import httpretty
 import mock
 import os
+import requests
 import textwrap
 import yaml
 
 import cloudinit.net as net
 from cloudinit.util import ensure_file, write_file, ProcessExecutionError
-from cloudinit.tests.helpers import CiTestCase
+from cloudinit.tests.helpers import CiTestCase, HttprettyTestCase
 
 
 class TestSysDevPath(CiTestCase):
@@ -458,6 +460,22 @@ class TestEphemeralIPV4Network(CiTestCase):
         self.assertEqual(expected_setup_calls, m_subp.call_args_list)
         m_subp.assert_has_calls(expected_teardown_calls)
 
+    @mock.patch('cloudinit.net.readurl')
+    def test_ephemeral_ipv4_no_network_if_url_connectivity(
+            self, m_readurl, m_subp):
+        """No network setup is performed if we can successfully connect to
+        connectivity_url."""
+        params = {
+            'interface': 'eth0', 'ip': '192.168.2.2',
+            'prefix_or_mask': '255.255.255.0', 'broadcast': '192.168.2.255',
+            'connectivity_url': 'http://example.org/index.html'}
+
+        with net.EphemeralIPv4Network(**params):
+            self.assertEqual([mock.call('http://example.org/index.html',
+                                        timeout=5)], m_readurl.call_args_list)
+        # Ensure that no teardown happens:
+        m_subp.assert_has_calls([])
+
     def test_ephemeral_ipv4_network_noop_when_configured(self, m_subp):
         """EphemeralIPv4Network handles exception when address is setup.
 
@@ -619,3 +637,35 @@ class TestApplyNetworkCfgNames(CiTestCase):
     def test_apply_v2_renames_raises_runtime_error_on_unknown_version(self):
         with self.assertRaises(RuntimeError):
             net.apply_network_config_names(yaml.load("version: 3"))
+
+
+class TestHasURLConnectivity(HttprettyTestCase):
+
+    def setUp(self):
+        super(TestHasURLConnectivity, self).setUp()
+        self.url = 'http://fake/'
+        self.kwargs = {'allow_redirects': True, 'timeout': 5.0}
+
+    @mock.patch('cloudinit.net.readurl')
+    def test_url_timeout_on_connectivity_check(self, m_readurl):
+        """A timeout of 5 seconds is provided when reading a url."""
+        self.assertTrue(
+            net.has_url_connectivity(self.url), 'Expected True on url connect')
+
+    def test_true_on_url_connectivity_success(self):
+        httpretty.register_uri(httpretty.GET, self.url)
+        self.assertTrue(
+            net.has_url_connectivity(self.url), 'Expected True on url connect')
+
+    @mock.patch('requests.Session.request')
+    def test_true_on_url_connectivity_timeout(self, m_request):
+        """A timeout raised accessing the url will return False."""
+        m_request.side_effect = requests.Timeout('Fake Connection Timeout')
+        self.assertFalse(
+            net.has_url_connectivity(self.url),
+            'Expected False on url timeout')
+
+    def test_true_on_url_connectivity_failure(self):
+        httpretty.register_uri(httpretty.GET, self.url, body={}, status=404)
+        self.assertFalse(
+            net.has_url_connectivity(self.url), 'Expected False on url fail')
diff --git a/cloudinit/sources/DataSourceAliYun.py b/cloudinit/sources/DataSourceAliYun.py
index 858e082..45cc9f0 100644
--- a/cloudinit/sources/DataSourceAliYun.py
+++ b/cloudinit/sources/DataSourceAliYun.py
@@ -1,7 +1,5 @@
 # This file is part of cloud-init. See LICENSE file for license information.
 
-import os
-
 from cloudinit import sources
 from cloudinit.sources import DataSourceEc2 as EC2
 from cloudinit import util
@@ -18,25 +16,17 @@ class DataSourceAliYun(EC2.DataSourceEc2):
     min_metadata_version = '2016-01-01'
     extended_metadata_versions = []
 
-    def __init__(self, sys_cfg, distro, paths):
-        super(DataSourceAliYun, self).__init__(sys_cfg, distro, paths)
-        self.seed_dir = os.path.join(paths.seed_dir, "AliYun")
-
     def get_hostname(self, fqdn=False, resolve_ip=False, metadata_only=False):
         return self.metadata.get('hostname', 'localhost.localdomain')
 
     def get_public_ssh_keys(self):
         return parse_public_keys(self.metadata.get('public-keys', {}))
 
-    @property
-    def cloud_platform(self):
-        if self._cloud_platform is None:
-            if _is_aliyun():
-                self._cloud_platform = EC2.Platforms.ALIYUN
-            else:
-                self._cloud_platform = EC2.Platforms.NO_EC2_METADATA
-
-        return self._cloud_platform
+    def _get_cloud_name(self):
+        if _is_aliyun():
+            return EC2.CloudNames.ALIYUN
+        else:
+            return EC2.CloudNames.NO_EC2_METADATA
 
 
 def _is_aliyun():
diff --git a/cloudinit/sources/DataSourceAltCloud.py b/cloudinit/sources/DataSourceAltCloud.py
index 8cd312d..5270fda 100644
--- a/cloudinit/sources/DataSourceAltCloud.py
+++ b/cloudinit/sources/DataSourceAltCloud.py
@@ -89,7 +89,9 @@ class DataSourceAltCloud(sources.DataSource):
         '''
         Description:
             Get the type for the cloud back end this instance is running on
-            by examining the string returned by reading the dmi data.
+            by examining the string returned by reading either:
+                CLOUD_INFO_FILE or
+                the dmi data.
 
         Input:
             None
@@ -99,7 +101,14 @@ class DataSourceAltCloud(sources.DataSource):
             'RHEV', 'VSPHERE' or 'UNKNOWN'
 
         '''
-
+        if os.path.exists(CLOUD_INFO_FILE):
+            try:
+                cloud_type = util.load_file(CLOUD_INFO_FILE).strip().upper()
+            except IOError:
+                util.logexc(LOG, 'Unable to access cloud info file at %s.',
+                            CLOUD_INFO_FILE)
+                return 'UNKNOWN'
+            return cloud_type
         system_name = util.read_dmi_data("system-product-name")
         if not system_name:
             return 'UNKNOWN'
@@ -134,15 +143,7 @@ class DataSourceAltCloud(sources.DataSource):
 
         LOG.debug('Invoked get_data()')
 
-        if os.path.exists(CLOUD_INFO_FILE):
-            try:
-                cloud_type = util.load_file(CLOUD_INFO_FILE).strip().upper()
-            except IOError:
-                util.logexc(LOG, 'Unable to access cloud info file at %s.',
-                            CLOUD_INFO_FILE)
-                return False
-        else:
-            cloud_type = self.get_cloud_type()
+        cloud_type = self.get_cloud_type()
 
         LOG.debug('cloud_type: %s', str(cloud_type))
 
@@ -161,6 +162,15 @@ class DataSourceAltCloud(sources.DataSource):
             util.logexc(LOG, 'Failed accessing user data.')
             return False
 
+    def _get_subplatform(self):
+        """Return the subplatform metadata details."""
+        cloud_type = self.get_cloud_type()
+        if not hasattr(self, 'source'):
+            self.source = sources.METADATA_UNKNOWN
+        if cloud_type == 'RHEV':
+            self.source = '/dev/fd0'
+        return '%s (%s)' % (cloud_type.lower(), self.source)
+
     def user_data_rhevm(self):
         '''
         RHEVM specific userdata read
@@ -232,6 +242,7 @@ class DataSourceAltCloud(sources.DataSource):
         try:
             return_str = util.mount_cb(cdrom_dev, read_user_data_callback)
             if return_str:
+                self.source = cdrom_dev
                 break
         except OSError as err:
             if err.errno != errno.ENOENT:
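As the AltCloud hunks above show, `get_cloud_type()` now consults the cloud info file first and only falls back to DMI data, so `get_data()` no longer duplicates the file handling. A standalone sketch of that ordering (the file path and `read_dmi` callback are parameters for illustration only; the real method reads fixed module constants, and the DMI string matching here is simplified):

```python
import os

# Sketch of the reordered detection: info file first, DMI fallback.
def get_cloud_type(info_file, read_dmi):
    if os.path.exists(info_file):
        try:
            with open(info_file) as f:
                return f.read().strip().upper()
        except IOError:
            return 'UNKNOWN'  # file exists but is unreadable
    system_name = read_dmi('system-product-name')
    if not system_name:
        return 'UNKNOWN'
    if 'RHEV' in system_name.upper():
        return 'RHEV'
    if 'VMWARE' in system_name.upper():
        return 'VSPHERE'
    return 'UNKNOWN'
```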
diff --git a/cloudinit/sources/DataSourceAzure.py b/cloudinit/sources/DataSourceAzure.py
index 783445e..a4f998b 100644
--- a/cloudinit/sources/DataSourceAzure.py
+++ b/cloudinit/sources/DataSourceAzure.py
@@ -22,7 +22,8 @@ from cloudinit.event import EventType
 from cloudinit.net.dhcp import EphemeralDHCPv4
 from cloudinit import sources
 from cloudinit.sources.helpers.azure import get_metadata_from_fabric
-from cloudinit.url_helper import readurl, UrlError
+from cloudinit.sources.helpers import netlink
+from cloudinit.url_helper import UrlError, readurl, retry_on_url_exc
 from cloudinit import util
 
 LOG = logging.getLogger(__name__)
@@ -57,7 +58,7 @@ IMDS_URL = "http://169.254.169.254/metadata/"
 # List of static scripts and network config artifacts created by
 # stock ubuntu suported images.
 UBUNTU_EXTENDED_NETWORK_SCRIPTS = [
-    '/etc/netplan/90-azure-hotplug.yaml',
+    '/etc/netplan/90-hotplug-azure.yaml',
     '/usr/local/sbin/ephemeral_eth.sh',
     '/etc/udev/rules.d/10-net-device-added.rules',
     '/run/network/interfaces.ephemeral.d',
@@ -207,7 +208,9 @@ BUILTIN_DS_CONFIG = {
     },
     'disk_aliases': {'ephemeral0': RESOURCE_DISK_PATH},
     'dhclient_lease_file': LEASE_FILE,
+    'apply_network_config': True,  # Use IMDS published network configuration
 }
+# RELEASE_BLOCKER: Xenial and earlier apply_network_config default is False
 
 BUILTIN_CLOUD_CONFIG = {
     'disk_setup': {
@@ -278,6 +281,7 @@ class DataSourceAzure(sources.DataSource):
         self._network_config = None
         # Regenerate network config new_instance boot and every boot
         self.update_events['network'].add(EventType.BOOT)
+        self._ephemeral_dhcp_ctx = None
 
     def __str__(self):
         root = sources.DataSource.__str__(self)
@@ -351,6 +355,14 @@ class DataSourceAzure(sources.DataSource):
         metadata['public-keys'] = key_value or pubkeys_from_crt_files(fp_files)
         return metadata
 
+    def _get_subplatform(self):
+        """Return the subplatform metadata source details."""
+        if self.seed.startswith('/dev'):
+            subplatform_type = 'config-disk'
+        else:
+            subplatform_type = 'seed-dir'
+        return '%s (%s)' % (subplatform_type, self.seed)
+
     def crawl_metadata(self):
         """Walk all instance metadata sources returning a dict on success.
 
@@ -396,10 +408,15 @@ class DataSourceAzure(sources.DataSource):
                 LOG.warning("%s was not mountable", cdev)
                 continue
 
-            if reprovision or self._should_reprovision(ret):
+            perform_reprovision = reprovision or self._should_reprovision(ret)
+            if perform_reprovision:
+                if util.is_FreeBSD():
+                    msg = "Free BSD is not supported for PPS VMs"
+                    LOG.error(msg)
+                    raise sources.InvalidMetaDataException(msg)
                 ret = self._reprovision()
             imds_md = get_metadata_from_imds(
-                self.fallback_interface, retries=3)
+                self.fallback_interface, retries=10)
             (md, userdata_raw, cfg, files) = ret
             self.seed = cdev
             crawled_data.update({
@@ -424,6 +441,18 @@ class DataSourceAzure(sources.DataSource):
         crawled_data['metadata']['random_seed'] = seed
         crawled_data['metadata']['instance-id'] = util.read_dmi_data(
             'system-uuid')
+
+        if perform_reprovision:
+            LOG.info("Reporting ready to Azure after getting ReprovisionData")
+            use_cached_ephemeral = (net.is_up(self.fallback_interface) and
+                                    getattr(self, '_ephemeral_dhcp_ctx', None))
+            if use_cached_ephemeral:
+                self._report_ready(lease=self._ephemeral_dhcp_ctx.lease)
+                self._ephemeral_dhcp_ctx.clean_network()  # Teardown ephemeral
+            else:
+                with EphemeralDHCPv4() as lease:
+                    self._report_ready(lease=lease)
+
         return crawled_data
 
     def _is_platform_viable(self):
@@ -450,7 +479,8 @@ class DataSourceAzure(sources.DataSource):
         except sources.InvalidMetaDataException as e:
             LOG.warning('Could not crawl Azure metadata: %s', e)
             return False
-        if self.distro and self.distro.name == 'ubuntu':
+        if (self.distro and self.distro.name == 'ubuntu' and
+                self.ds_cfg.get('apply_network_config')):
             maybe_remove_ubuntu_network_config_scripts()
 
         # Process crawled data and augment with various config defaults
@@ -498,8 +528,8 @@ class DataSourceAzure(sources.DataSource):
         response. Then return the returned JSON object."""
         url = IMDS_URL + "reprovisiondata?api-version=2017-04-02"
         headers = {"Metadata": "true"}
+        nl_sock = None
         report_ready = bool(not os.path.isfile(REPORTED_READY_MARKER_FILE))
-        LOG.debug("Start polling IMDS")
 
         def exc_cb(msg, exception):
             if isinstance(exception, UrlError) and exception.code == 404:
@@ -508,25 +538,47 @@ class DataSourceAzure(sources.DataSource):
             # call DHCP and setup the ephemeral network to acquire the new IP.
             return False
 
+        LOG.debug("Wait for vnetswitch to happen")
         while True:
             try:
-                with EphemeralDHCPv4() as lease:
-                    if report_ready:
-                        path = REPORTED_READY_MARKER_FILE
-                        LOG.info(
-                            "Creating a marker file to report ready: %s", path)
-                        util.write_file(path, "{pid}: {time}\n".format(
-                            pid=os.getpid(), time=time()))
-                        self._report_ready(lease=lease)
-                        report_ready = False
-                    return readurl(url, timeout=1, headers=headers,
-                                   exception_cb=exc_cb, infinite=True).contents
+                # Save our EphemeralDHCPv4 context so we avoid repeated dhcp
+                self._ephemeral_dhcp_ctx = EphemeralDHCPv4()
+                lease = self._ephemeral_dhcp_ctx.obtain_lease()
+                if report_ready:
+                    try:
+                        nl_sock = netlink.create_bound_netlink_socket()
+                    except netlink.NetlinkCreateSocketError as e:
+                        LOG.warning(e)
+                        self._ephemeral_dhcp_ctx.clean_network()
+                        return
+                    path = REPORTED_READY_MARKER_FILE
+                    LOG.info(
+                        "Creating a marker file to report ready: %s", path)
+                    util.write_file(path, "{pid}: {time}\n".format(
+                        pid=os.getpid(), time=time()))
+                    self._report_ready(lease=lease)
+                    report_ready = False
+                    try:
+                        netlink.wait_for_media_disconnect_connect(
+                            nl_sock, lease['interface'])
+                    except AssertionError as error:
+                        LOG.error(error)
+                        return
+                    self._ephemeral_dhcp_ctx.clean_network()
+                else:
+                    return readurl(url, timeout=1, headers=headers,
+                                   exception_cb=exc_cb, infinite=True,
+                                   log_req_resp=False).contents
             except UrlError:
+                # Teardown our EphemeralDHCPv4 context on failure as we retry
+                self._ephemeral_dhcp_ctx.clean_network()
                 pass
+            finally:
+                if nl_sock:
+                    nl_sock.close()
 
     def _report_ready(self, lease):
-        """Tells the fabric provisioning has completed
-           before we go into our polling loop."""
+        """Tells the fabric provisioning has completed """
         try:
             get_metadata_from_fabric(None, lease['unknown-245'])
         except Exception:
@@ -611,7 +663,11 @@ class DataSourceAzure(sources.DataSource):
         the blacklisted devices.
         """
         if not self._network_config:
-            self._network_config = parse_network_config(self._metadata_imds)
+            if self.ds_cfg.get('apply_network_config'):
+                nc_src = self._metadata_imds
+            else:
+                nc_src = None
+            self._network_config = parse_network_config(nc_src)
         return self._network_config
 
 
@@ -692,7 +748,7 @@ def can_dev_be_reformatted(devpath, preserve_ntfs):
         file_count = util.mount_cb(cand_path, count_files, mtype="ntfs",
                                    update_env_for_mount={'LANG': 'C'})
     except util.MountFailedError as e:
-        if "mount: unknown filesystem type 'ntfs'" in str(e):
+        if "unknown filesystem type 'ntfs'" in str(e):
             return True, (bmsg + ' but this system cannot mount NTFS,'
                           ' assuming there are no important files.'
                           ' Formatting allowed.')
@@ -920,12 +976,12 @@ def read_azure_ovf(contents):
         lambda n:
         n.localName == "LinuxProvisioningConfigurationSet")
 
-    if len(results) == 0:
+    if len(lpcs_nodes) == 0:
         raise NonAzureDataSource("No LinuxProvisioningConfigurationSet")
-    if len(results) > 1:
+    if len(lpcs_nodes) > 1:
         raise BrokenAzureDataSource("found '%d' %ss" %
-                                    ("LinuxProvisioningConfigurationSet",
-                                     len(results)))
+                                    (len(lpcs_nodes),
+                                     "LinuxProvisioningConfigurationSet"))
     lpcs = lpcs_nodes[0]
 
     if not lpcs.hasChildNodes():
@@ -1154,17 +1210,12 @@ def get_metadata_from_imds(fallback_nic, retries):
 
 def _get_metadata_from_imds(retries):
 
-    def retry_on_url_error(msg, exception):
-        if isinstance(exception, UrlError) and exception.code == 404:
-            return True  # Continue retries
-        return False  # Stop retries on all other exceptions
-
     url = IMDS_URL + "instance?api-version=2017-12-01"
     headers = {"Metadata": "true"}
     try:
         response = readurl(
             url, timeout=1, headers=headers, retries=retries,
-            exception_cb=retry_on_url_error)
+            exception_cb=retry_on_url_exc)
     except Exception as e:
        LOG.debug('Ignoring IMDS instance metadata: %s', e)
         return {}
@@ -1187,7 +1238,7 @@ def maybe_remove_ubuntu_network_config_scripts(paths=None):
     additional interfaces which get attached by a customer at some point
     after initial boot. Since the Azure datasource can now regenerate
     network configuration as metadata reports these new devices, we no longer
-    want the udev rules or netplan's 90-azure-hotplug.yaml to configure
+    want the udev rules or netplan's 90-hotplug-azure.yaml to configure
     networking on eth1 or greater as it might collide with cloud-init's
     configuration.
 
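The Azure changes above replace the inline `retry_on_url_error` callback with the shared `url_helper.retry_on_url_exc`. The policy it encodes is simple: treat a 404 from IMDS as transient and keep retrying, and stop on any other failure. A minimal, self-contained sketch of that predicate-driven retry loop (the `read_with_retries` helper and its signature are illustrative, not cloud-init's actual `url_helper` implementation):

```python
class UrlError(Exception):
    """Minimal stand-in for cloudinit.url_helper.UrlError."""
    def __init__(self, cause, code=None):
        super().__init__(str(cause))
        self.code = code


def retry_on_url_exc(msg, exception):
    """Return True to continue retrying; only a 404 counts as transient here."""
    return isinstance(exception, UrlError) and exception.code == 404


def read_with_retries(fetch, retries, exception_cb):
    """Call fetch() up to retries + 1 times, consulting exception_cb on error."""
    last_exc = None
    for attempt in range(retries + 1):
        try:
            return fetch()
        except Exception as exc:
            last_exc = exc
            if not exception_cb('fetch attempt %d failed' % attempt, exc):
                raise  # non-retryable error: fail immediately
    raise last_exc  # retries exhausted
```

With a fake endpoint that 404s twice before answering, `read_with_retries(fetch, retries=3, exception_cb=retry_on_url_exc)` returns the eventual payload, while a 500 propagates on the first attempt.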
diff --git a/cloudinit/sources/DataSourceBigstep.py b/cloudinit/sources/DataSourceBigstep.py
index 699a85b..52fff20 100644
--- a/cloudinit/sources/DataSourceBigstep.py
+++ b/cloudinit/sources/DataSourceBigstep.py
@@ -36,6 +36,10 @@ class DataSourceBigstep(sources.DataSource):
         self.userdata_raw = decoded["userdata_raw"]
         return True
 
+    def _get_subplatform(self):
+        """Return the subplatform metadata source details."""
+        return 'metadata (%s)' % get_url_from_file()
+
 
 def get_url_from_file():
     try:
diff --git a/cloudinit/sources/DataSourceCloudSigma.py b/cloudinit/sources/DataSourceCloudSigma.py
index c816f34..2955d3f 100644
--- a/cloudinit/sources/DataSourceCloudSigma.py
+++ b/cloudinit/sources/DataSourceCloudSigma.py
@@ -7,7 +7,7 @@
 from base64 import b64decode
 import re
 
-from cloudinit.cs_utils import Cepko
+from cloudinit.cs_utils import Cepko, SERIAL_PORT
 
 from cloudinit import log as logging
 from cloudinit import sources
@@ -84,6 +84,10 @@ class DataSourceCloudSigma(sources.DataSource):
 
         return True
 
+    def _get_subplatform(self):
+        """Return the subplatform metadata source details."""
+        return 'cepko (%s)' % SERIAL_PORT
+
     def get_hostname(self, fqdn=False, resolve_ip=False, metadata_only=False):
         """
         Cleans up and uses the server's name if the latter is set. Otherwise
diff --git a/cloudinit/sources/DataSourceConfigDrive.py b/cloudinit/sources/DataSourceConfigDrive.py
index 664dc4b..564e3eb 100644
--- a/cloudinit/sources/DataSourceConfigDrive.py
+++ b/cloudinit/sources/DataSourceConfigDrive.py
@@ -160,6 +160,18 @@ class DataSourceConfigDrive(openstack.SourceMixin, sources.DataSource):
             LOG.debug("no network configuration available")
         return self._network_config
 
+    @property
+    def platform(self):
+        return 'openstack'
+
+    def _get_subplatform(self):
+        """Return the subplatform metadata source details."""
+        if self.seed_dir in self.source:
+            subplatform_type = 'seed-dir'
+        elif self.source.startswith('/dev'):
+            subplatform_type = 'config-disk'
+        return '%s (%s)' % (subplatform_type, self.source)
+
 
 def read_config_drive(source_dir):
     reader = openstack.ConfigDriveReader(source_dir)
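Several datasources in this diff gain a `_get_subplatform()` hook that renders as `'type (source)'` in cloud-init's standardized instance metadata. A toy model of the ConfigDrive variant (the `FakeConfigDrive` class and the `'unknown'` fallback are additions for this sketch, not part of the diff):

```python
class FakeConfigDrive:
    """Toy model of the _get_subplatform() pattern added in this diff."""
    seed_dir = '/var/lib/cloud/seed/config_drive'

    def __init__(self, source):
        self.source = source

    def _get_subplatform(self):
        # Seed-directory reads and block-device config disks render differently.
        if self.seed_dir in self.source:
            subplatform_type = 'seed-dir'
        elif self.source.startswith('/dev'):
            subplatform_type = 'config-disk'
        else:
            subplatform_type = 'unknown'  # fallback added for this sketch only
        return '%s (%s)' % (subplatform_type, self.source)
```

For example, a config drive mounted from `/dev/sr0` reports as `config-disk (/dev/sr0)`.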
diff --git a/cloudinit/sources/DataSourceEc2.py b/cloudinit/sources/DataSourceEc2.py
index 968ab3f..9ccf2cd 100644
--- a/cloudinit/sources/DataSourceEc2.py
+++ b/cloudinit/sources/DataSourceEc2.py
@@ -28,18 +28,16 @@ STRICT_ID_PATH = ("datasource", "Ec2", "strict_id")
 STRICT_ID_DEFAULT = "warn"
 
 
-class Platforms(object):
-    # TODO Rename and move to cloudinit.cloud.CloudNames
-    ALIYUN = "AliYun"
-    AWS = "AWS"
-    BRIGHTBOX = "Brightbox"
-    SEEDED = "Seeded"
+class CloudNames(object):
+    ALIYUN = "aliyun"
+    AWS = "aws"
+    BRIGHTBOX = "brightbox"
     # UNKNOWN indicates no positive id. If strict_id is 'warn' or 'false',
     # then an attempt at the Ec2 Metadata service will be made.
-    UNKNOWN = "Unknown"
+    UNKNOWN = "unknown"
     # NO_EC2_METADATA indicates this platform does not have a Ec2 metadata
     # service available. No attempt at the Ec2 Metadata service will be made.
-    NO_EC2_METADATA = "No-EC2-Metadata"
+    NO_EC2_METADATA = "no-ec2-metadata"
 
 
 class DataSourceEc2(sources.DataSource):
@@ -61,8 +59,6 @@ class DataSourceEc2(sources.DataSource):
     url_max_wait = 120
     url_timeout = 50
 
-    _cloud_platform = None
-
     _network_config = sources.UNSET  # Used to cache calculated network cfg v1
 
     # Whether we want to get network configuration from the metadata service.
@@ -71,30 +67,21 @@ class DataSourceEc2(sources.DataSource):
     def __init__(self, sys_cfg, distro, paths):
         super(DataSourceEc2, self).__init__(sys_cfg, distro, paths)
         self.metadata_address = None
-        self.seed_dir = os.path.join(paths.seed_dir, "ec2")
 
     def _get_cloud_name(self):
         """Return the cloud name as identified during _get_data."""
-        return self.cloud_platform
+        return identify_platform()
 
     def _get_data(self):
-        seed_ret = {}
-        if util.read_optional_seed(seed_ret, base=(self.seed_dir + "/")):
-            self.userdata_raw = seed_ret['user-data']
-            self.metadata = seed_ret['meta-data']
-            LOG.debug("Using seeded ec2 data from %s", self.seed_dir)
-            self._cloud_platform = Platforms.SEEDED
-            return True
-
         strict_mode, _sleep = read_strict_mode(
             util.get_cfg_by_path(self.sys_cfg, STRICT_ID_PATH,
                                  STRICT_ID_DEFAULT), ("warn", None))
 
-        LOG.debug("strict_mode: %s, cloud_platform=%s",
-                  strict_mode, self.cloud_platform)
-        if strict_mode == "true" and self.cloud_platform == Platforms.UNKNOWN:
+        LOG.debug("strict_mode: %s, cloud_name=%s cloud_platform=%s",
+                  strict_mode, self.cloud_name, self.platform)
+        if strict_mode == "true" and self.cloud_name == CloudNames.UNKNOWN:
             return False
-        elif self.cloud_platform == Platforms.NO_EC2_METADATA:
+        elif self.cloud_name == CloudNames.NO_EC2_METADATA:
             return False
 
         if self.perform_dhcp_setup:  # Setup networking in init-local stage.
@@ -103,13 +90,22 @@ class DataSourceEc2(sources.DataSource):
                 return False
             try:
                 with EphemeralDHCPv4(self.fallback_interface):
-                    return util.log_time(
+                    self._crawled_metadata = util.log_time(
                         logfunc=LOG.debug, msg='Crawl of metadata service',
-                        func=self._crawl_metadata)
+                        func=self.crawl_metadata)
             except NoDHCPLeaseError:
                 return False
         else:
-            return self._crawl_metadata()
+            self._crawled_metadata = util.log_time(
+                logfunc=LOG.debug, msg='Crawl of metadata service',
+                func=self.crawl_metadata)
+        if not self._crawled_metadata:
+            return False
+        self.metadata = self._crawled_metadata.get('meta-data', None)
+        self.userdata_raw = self._crawled_metadata.get('user-data', None)
+        self.identity = self._crawled_metadata.get(
+            'dynamic', {}).get('instance-identity', {}).get('document', {})
+        return True
 
     @property
     def launch_index(self):
@@ -117,6 +113,15 @@ class DataSourceEc2(sources.DataSource):
             return None
         return self.metadata.get('ami-launch-index')
 
+    @property
+    def platform(self):
+        # Handle upgrade path of pickled ds
+        if not hasattr(self, '_platform_type'):
+            self._platform_type = DataSourceEc2.dsname.lower()
+        if not self._platform_type:
+            self._platform_type = DataSourceEc2.dsname.lower()
+        return self._platform_type
+
     def get_metadata_api_version(self):
         """Get the best supported api version from the metadata service.
 
@@ -144,7 +149,7 @@ class DataSourceEc2(sources.DataSource):
         return self.min_metadata_version
 
     def get_instance_id(self):
-        if self.cloud_platform == Platforms.AWS:
+        if self.cloud_name == CloudNames.AWS:
             # Prefer the ID from the instance identity document, but fall back
             if not getattr(self, 'identity', None):
                 # If re-using cached datasource, it's get_data run didn't
@@ -254,7 +259,7 @@ class DataSourceEc2(sources.DataSource):
     @property
     def availability_zone(self):
         try:
-            if self.cloud_platform == Platforms.AWS:
+            if self.cloud_name == CloudNames.AWS:
                 return self.identity.get(
                     'availabilityZone',
                     self.metadata['placement']['availability-zone'])
@@ -265,7 +270,7 @@ class DataSourceEc2(sources.DataSource):
 
     @property
     def region(self):
-        if self.cloud_platform == Platforms.AWS:
+        if self.cloud_name == CloudNames.AWS:
             region = self.identity.get('region')
             # Fallback to trimming the availability zone if region is missing
             if self.availability_zone and not region:
@@ -277,16 +282,10 @@ class DataSourceEc2(sources.DataSource):
                 return az[:-1]
         return None
 
-    @property
-    def cloud_platform(self):  # TODO rename cloud_name
-        if self._cloud_platform is None:
-            self._cloud_platform = identify_platform()
-        return self._cloud_platform
-
     def activate(self, cfg, is_new_instance):
         if not is_new_instance:
             return
-        if self.cloud_platform == Platforms.UNKNOWN:
+        if self.cloud_name == CloudNames.UNKNOWN:
             warn_if_necessary(
                 util.get_cfg_by_path(cfg, STRICT_ID_PATH, STRICT_ID_DEFAULT),
                 cfg)
@@ -306,13 +305,13 @@ class DataSourceEc2(sources.DataSource):
         result = None
         no_network_metadata_on_aws = bool(
             'network' not in self.metadata and
-            self.cloud_platform == Platforms.AWS)
+            self.cloud_name == CloudNames.AWS)
         if no_network_metadata_on_aws:
             LOG.debug("Metadata 'network' not present:"
                       " Refreshing stale metadata from prior to upgrade.")
             util.log_time(
                 logfunc=LOG.debug, msg='Re-crawl of metadata service',
-                func=self._crawl_metadata)
+                func=self.get_data)
 
         # Limit network configuration to only the primary/fallback nic
         iface = self.fallback_interface
@@ -340,28 +339,32 @@ class DataSourceEc2(sources.DataSource):
                 return super(DataSourceEc2, self).fallback_interface
         return self._fallback_interface
 
-    def _crawl_metadata(self):
+    def crawl_metadata(self):
         """Crawl metadata service when available.
 
-        @returns: True on success, False otherwise.
+        @returns: Dictionary of crawled metadata content containing the keys:
+          meta-data, user-data and dynamic.
         """
         if not self.wait_for_metadata_service():
-            return False
+            return {}
         api_version = self.get_metadata_api_version()
+        crawled_metadata = {}
         try:
-            self.userdata_raw = ec2.get_instance_userdata(
-                api_version, self.metadata_address)
-            self.metadata = ec2.get_instance_metadata(
-                api_version, self.metadata_address)
-            if self.cloud_platform == Platforms.AWS:
-                self.identity = ec2.get_instance_identity(
-                    api_version, self.metadata_address).get('document', {})
+            crawled_metadata['user-data'] = ec2.get_instance_userdata(
+                api_version, self.metadata_address)
+            crawled_metadata['meta-data'] = ec2.get_instance_metadata(
+                api_version, self.metadata_address)
+            if self.cloud_name == CloudNames.AWS:
+                identity = ec2.get_instance_identity(
+                    api_version, self.metadata_address)
+                crawled_metadata['dynamic'] = {'instance-identity': identity}
         except Exception:
             util.logexc(
                 LOG, "Failed reading from metadata address %s",
                 self.metadata_address)
-            return False
-        return True
+            return {}
+        crawled_metadata['_metadata_api_version'] = api_version
+        return crawled_metadata
 
 
 class DataSourceEc2Local(DataSourceEc2):
@@ -375,10 +378,10 @@ class DataSourceEc2Local(DataSourceEc2):
     perform_dhcp_setup = True  # Use dhcp before querying metadata
 
     def get_data(self):
-        supported_platforms = (Platforms.AWS,)
-        if self.cloud_platform not in supported_platforms:
+        supported_platforms = (CloudNames.AWS,)
+        if self.cloud_name not in supported_platforms:
             LOG.debug("Local Ec2 mode only supported on %s, not %s",
-                      supported_platforms, self.cloud_platform)
+                      supported_platforms, self.cloud_name)
             return False
         return super(DataSourceEc2Local, self).get_data()
 
@@ -439,20 +442,20 @@ def identify_aws(data):
     if (data['uuid'].startswith('ec2') and
             (data['uuid_source'] == 'hypervisor' or
              data['uuid'] == data['serial'])):
-        return Platforms.AWS
+        return CloudNames.AWS
 
     return None
 
 
 def identify_brightbox(data):
     if data['serial'].endswith('brightbox.com'):
-        return Platforms.BRIGHTBOX
+        return CloudNames.BRIGHTBOX
 
 
 def identify_platform():
-    # identify the platform and return an entry in Platforms.
+    # identify the platform and return an entry in CloudNames.
     data = _collect_platform_data()
-    checks = (identify_aws, identify_brightbox, lambda x: Platforms.UNKNOWN)
+    checks = (identify_aws, identify_brightbox, lambda x: CloudNames.UNKNOWN)
2550 | 456 | for checker in checks: | 459 | for checker in checks: |
2551 | 457 | try: | 460 | try: |
2552 | 458 | result = checker(data) | 461 | result = checker(data) |
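The `identify_platform` chain above runs each checker in order and keeps the first truthy result, with a trailing lambda guaranteeing a fallback answer. A minimal standalone sketch of that pattern (the data dicts and simplified checks here are illustrative stand-ins, not the real DMI-backed logic):

```python
# First-truthy-result checker chain, as used by identify_platform above.
# The checks below are simplified stand-ins for the real DMI inspection.

def identify_aws(data):
    # The real check also compares uuid_source and serial; stand-in only.
    if data.get('uuid', '').startswith('ec2'):
        return 'aws'
    return None

def identify_brightbox(data):
    if data.get('serial', '').endswith('brightbox.com'):
        return 'brightbox'
    return None

def identify_platform(data):
    # The trailing lambda guarantees a non-None result.
    checks = (identify_aws, identify_brightbox, lambda d: 'unknown')
    for checker in checks:
        result = checker(data)
        if result:
            return result

print(identify_platform({'uuid': 'ec2deadbeef', 'serial': 'x'}))  # aws
```

Because the last entry always returns a value, callers never have to handle a missing identification separately.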
diff --git a/cloudinit/sources/DataSourceIBMCloud.py b/cloudinit/sources/DataSourceIBMCloud.py
index a535814..21e6ae6 100644
--- a/cloudinit/sources/DataSourceIBMCloud.py
+++ b/cloudinit/sources/DataSourceIBMCloud.py
@@ -157,6 +157,10 @@ class DataSourceIBMCloud(sources.DataSource):
 
         return True
 
+    def _get_subplatform(self):
+        """Return the subplatform metadata source details."""
+        return '%s (%s)' % (self.platform, self.source)
+
     def check_instance_id(self, sys_cfg):
         """quickly (local check only) if self.instance_id is still valid
 
diff --git a/cloudinit/sources/DataSourceMAAS.py b/cloudinit/sources/DataSourceMAAS.py
index bcb3854..61aa6d7 100644
--- a/cloudinit/sources/DataSourceMAAS.py
+++ b/cloudinit/sources/DataSourceMAAS.py
@@ -109,6 +109,10 @@ class DataSourceMAAS(sources.DataSource):
                 LOG.warning("Invalid content in vendor-data: %s", e)
                 self.vendordata_raw = None
 
+    def _get_subplatform(self):
+        """Return the subplatform metadata source details."""
+        return 'seed-dir (%s)' % self.base_url
+
     def wait_for_metadata_service(self, url):
         mcfg = self.ds_cfg
         max_wait = 120
diff --git a/cloudinit/sources/DataSourceNoCloud.py b/cloudinit/sources/DataSourceNoCloud.py
index 2daea59..6860f0c 100644
--- a/cloudinit/sources/DataSourceNoCloud.py
+++ b/cloudinit/sources/DataSourceNoCloud.py
@@ -186,6 +186,27 @@ class DataSourceNoCloud(sources.DataSource):
             self._network_eni = mydata['meta-data'].get('network-interfaces')
         return True
 
+    @property
+    def platform_type(self):
+        # Handle upgrade path of pickled ds
+        if not hasattr(self, '_platform_type'):
+            self._platform_type = None
+        if not self._platform_type:
+            self._platform_type = 'lxd' if util.is_lxd() else 'nocloud'
+        return self._platform_type
+
+    def _get_cloud_name(self):
+        """Return unknown when 'cloud-name' key is absent from metadata."""
+        return sources.METADATA_UNKNOWN
+
+    def _get_subplatform(self):
+        """Return the subplatform metadata source details."""
+        if self.seed.startswith('/dev'):
+            subplatform_type = 'config-disk'
+        else:
+            subplatform_type = 'seed-dir'
+        return '%s (%s)' % (subplatform_type, self.seed)
+
     def check_instance_id(self, sys_cfg):
         # quickly (local check only) if self.instance_id is still valid
         # we check kernel command line or files.
@@ -290,6 +311,35 @@ def parse_cmdline_data(ds_id, fill, cmdline=None):
     return True
 
 
+def _maybe_remove_top_network(cfg):
+    """If network-config contains top level 'network' key, then remove it.
+
+    Some providers of network configuration may provide a top level
+    'network' key (LP: #1798117) even though it is not necessary.
+
+    Be friendly and remove it if it really seems so.
+
+    Return the original value if no change or the updated value if changed."""
+    nullval = object()
+    network_val = cfg.get('network', nullval)
+    if network_val is nullval:
+        return cfg
+    bmsg = 'Top level network key in network-config %s: %s'
+    if not isinstance(network_val, dict):
+        LOG.debug(bmsg, "was not a dict", cfg)
+        return cfg
+    if len(list(cfg.keys())) != 1:
+        LOG.debug(bmsg, "had multiple top level keys", cfg)
+        return cfg
+    if network_val.get('config') == "disabled":
+        LOG.debug(bmsg, "was config/disabled", cfg)
+    elif not all(('config' in network_val, 'version' in network_val)):
+        LOG.debug(bmsg, "but missing 'config' or 'version'", cfg)
+        return cfg
+    LOG.debug(bmsg, "fixed by removing shifting network.", cfg)
+    return network_val
+
+
 def _merge_new_seed(cur, seeded):
     ret = cur.copy()
 
@@ -299,7 +349,8 @@ def _merge_new_seed(cur, seeded):
         ret['meta-data'] = util.mergemanydict([cur['meta-data'], newmd])
 
     if seeded.get('network-config'):
-        ret['network-config'] = util.load_yaml(seeded['network-config'])
+        ret['network-config'] = _maybe_remove_top_network(
+            util.load_yaml(seeded.get('network-config')))
 
     if 'user-data' in seeded:
         ret['user-data'] = seeded['user-data']
diff --git a/cloudinit/sources/DataSourceNone.py b/cloudinit/sources/DataSourceNone.py
index e63a7e3..e625080 100644
--- a/cloudinit/sources/DataSourceNone.py
+++ b/cloudinit/sources/DataSourceNone.py
@@ -28,6 +28,10 @@ class DataSourceNone(sources.DataSource):
             self.metadata = self.ds_cfg['metadata']
         return True
 
+    def _get_subplatform(self):
+        """Return the subplatform metadata source details."""
+        return 'config'
+
     def get_instance_id(self):
         return 'iid-datasource-none'
 
diff --git a/cloudinit/sources/DataSourceOVF.py b/cloudinit/sources/DataSourceOVF.py
index 178ccb0..3a3fcdf 100644
--- a/cloudinit/sources/DataSourceOVF.py
+++ b/cloudinit/sources/DataSourceOVF.py
@@ -232,11 +232,11 @@ class DataSourceOVF(sources.DataSource):
                     GuestCustErrorEnum.GUESTCUST_ERROR_SUCCESS)
 
         else:
-            np = {'iso': transport_iso9660,
-                  'vmware-guestd': transport_vmware_guestd, }
+            np = [('com.vmware.guestInfo', transport_vmware_guestinfo),
+                  ('iso', transport_iso9660)]
             name = None
-            for (name, transfunc) in np.items():
-                (contents, _dev, _fname) = transfunc()
+            for name, transfunc in np:
+                contents = transfunc()
                 if contents:
                     break
             if contents:
@@ -275,6 +275,12 @@ class DataSourceOVF(sources.DataSource):
         self.cfg = cfg
         return True
 
+    def _get_subplatform(self):
+        system_type = util.read_dmi_data("system-product-name").lower()
+        if system_type == 'vmware':
+            return 'vmware (%s)' % self.seed
+        return 'ovf (%s)' % self.seed
+
     def get_public_ssh_keys(self):
         if 'public-keys' not in self.metadata:
             return []
@@ -458,8 +464,8 @@ def maybe_cdrom_device(devname):
     return cdmatch.match(devname) is not None
 
 
-# Transport functions take no input and return
-# a 3 tuple of content, path, filename
+# Transport functions are called with no arguments and return
+# either None (indicating not present) or string content of an ovf-env.xml
 def transport_iso9660(require_iso=True):
 
     # Go through mounts to see if it was already mounted
@@ -471,9 +477,9 @@ def transport_iso9660(require_iso=True):
         if not maybe_cdrom_device(dev):
             continue
         mp = info['mountpoint']
-        (fname, contents) = get_ovf_env(mp)
+        (_fname, contents) = get_ovf_env(mp)
         if contents is not False:
-            return (contents, dev, fname)
+            return contents
 
     if require_iso:
         mtype = "iso9660"
@@ -486,29 +492,33 @@ def transport_iso9660(require_iso=True):
             if maybe_cdrom_device(dev)]
     for dev in devs:
         try:
-            (fname, contents) = util.mount_cb(dev, get_ovf_env, mtype=mtype)
+            (_fname, contents) = util.mount_cb(dev, get_ovf_env, mtype=mtype)
         except util.MountFailedError:
             LOG.debug("%s not mountable as iso9660", dev)
             continue
 
         if contents is not False:
-            return (contents, dev, fname)
+            return contents
 
-    return (False, None, None)
+    return None
 
 
-def transport_vmware_guestd():
-    # http://blogs.vmware.com/vapp/2009/07/ \
-    #    selfconfiguration-and-the-ovf-environment.html
-    # try:
-    #     cmd = ['vmware-guestd', '--cmd', 'info-get guestinfo.ovfEnv']
-    #     (out, err) = subp(cmd)
-    #     return(out, 'guestinfo.ovfEnv', 'vmware-guestd')
-    # except:
-    #     # would need to error check here and see why this failed
-    #     # to know if log/error should be raised
-    #     return(False, None, None)
-    return (False, None, None)
+def transport_vmware_guestinfo():
+    rpctool = "vmware-rpctool"
+    not_found = None
+    if not util.which(rpctool):
+        return not_found
+    cmd = [rpctool, "info-get guestinfo.ovfEnv"]
+    try:
+        out, _err = util.subp(cmd)
+        if out:
+            return out
+        LOG.debug("cmd %s exited 0 with empty stdout: %s", cmd, out)
+    except util.ProcessExecutionError as e:
+        if e.exit_code != 1:
+            LOG.warning("%s exited with code %d", rpctool, e.exit_code)
+        LOG.debug(e)
+    return not_found
 
 
 def find_child(node, filter_func):
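The OVF change above also tightens the transport contract: each transport takes no arguments and returns either None (not present) or the string content of an ovf-env.xml, and the caller walks an ordered list of (name, func) pairs until one yields content. A self-contained sketch of that probing protocol, using illustrative fake transports rather than the real ones (which shell out to vmware-rpctool or mount CD-ROM devices):

```python
def probe_transports(transports):
    """transports: ordered (name, func) pairs; func() -> str or None.
    Return (name, contents) for the first transport with content,
    or (None, None) when nothing is found."""
    for name, transfunc in transports:
        contents = transfunc()
        if contents:
            return (name, contents)
    return (None, None)

# Illustrative stand-ins for the real transport functions.
def fake_guestinfo():
    return None  # e.g. vmware-rpctool not installed

def fake_iso():
    return '<Environment/>'  # pretend ovf-env.xml content

name, contents = probe_transports(
    [('com.vmware.guestInfo', fake_guestinfo), ('iso', fake_iso)])
print(name)  # iso
```

Ordering the pairs in a list (instead of the old dict) makes the probe sequence deterministic: the guestinfo channel is always tried before mounting ISOs.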
diff --git a/cloudinit/sources/DataSourceOpenNebula.py b/cloudinit/sources/DataSourceOpenNebula.py
index 77ccd12..6e1d04b 100644
--- a/cloudinit/sources/DataSourceOpenNebula.py
+++ b/cloudinit/sources/DataSourceOpenNebula.py
@@ -95,6 +95,14 @@ class DataSourceOpenNebula(sources.DataSource):
             self.userdata_raw = results.get('userdata')
         return True
 
+    def _get_subplatform(self):
+        """Return the subplatform metadata source details."""
+        if self.seed_dir in self.seed:
+            subplatform_type = 'seed-dir'
+        else:
+            subplatform_type = 'config-disk'
+        return '%s (%s)' % (subplatform_type, self.seed)
+
     @property
     def network_config(self):
         if self.network is not None:
@@ -329,7 +337,7 @@ def parse_shell_config(content, keylist=None, bash=None, asuser=None,
     (output, _error) = util.subp(cmd, data=bcmd)
 
     # exclude vars in bash that change on their own or that we used
-    excluded = ("RANDOM", "LINENO", "SECONDS", "_", "__v")
+    excluded = ("EPOCHREALTIME", "RANDOM", "LINENO", "SECONDS", "_", "__v")
     preset = {}
     ret = {}
     target = None
diff --git a/cloudinit/sources/DataSourceOracle.py b/cloudinit/sources/DataSourceOracle.py
index fab39af..70b9c58 100644
--- a/cloudinit/sources/DataSourceOracle.py
+++ b/cloudinit/sources/DataSourceOracle.py
@@ -91,6 +91,10 @@ class DataSourceOracle(sources.DataSource):
     def crawl_metadata(self):
         return read_metadata()
 
+    def _get_subplatform(self):
+        """Return the subplatform metadata source details."""
+        return 'metadata (%s)' % METADATA_ENDPOINT
+
     def check_instance_id(self, sys_cfg):
         """quickly check (local only) if self.instance_id is still valid
 
diff --git a/cloudinit/sources/DataSourceScaleway.py b/cloudinit/sources/DataSourceScaleway.py
index 9dc4ab2..b573b38 100644
--- a/cloudinit/sources/DataSourceScaleway.py
+++ b/cloudinit/sources/DataSourceScaleway.py
@@ -253,7 +253,16 @@ class DataSourceScaleway(sources.DataSource):
         return self.metadata['id']
 
     def get_public_ssh_keys(self):
-        return [key['key'] for key in self.metadata['ssh_public_keys']]
+        ssh_keys = [key['key'] for key in self.metadata['ssh_public_keys']]
+
+        akeypre = "AUTHORIZED_KEY="
+        plen = len(akeypre)
+        for tag in self.metadata.get('tags', []):
+            if not tag.startswith(akeypre):
+                continue
+            ssh_keys.append(tag[:plen].replace("_", " "))
+
+        return ssh_keys
 
     def get_hostname(self, fqdn=False, resolve_ip=False, metadata_only=False):
         return self.metadata['hostname']
diff --git a/cloudinit/sources/DataSourceSmartOS.py b/cloudinit/sources/DataSourceSmartOS.py
index 593ac91..32b57cd 100644
--- a/cloudinit/sources/DataSourceSmartOS.py
+++ b/cloudinit/sources/DataSourceSmartOS.py
@@ -303,6 +303,9 @@ class DataSourceSmartOS(sources.DataSource):
         self._set_provisioned()
         return True
 
+    def _get_subplatform(self):
+        return 'serial (%s)' % SERIAL_DEVICE
+
     def device_name_to_device(self, name):
         return self.ds_cfg['disk_aliases'].get(name)
 
diff --git a/cloudinit/sources/__init__.py b/cloudinit/sources/__init__.py
index 5ac9882..e6966b3 100644
--- a/cloudinit/sources/__init__.py
+++ b/cloudinit/sources/__init__.py
@@ -54,9 +54,18 @@ REDACT_SENSITIVE_VALUE = 'redacted for non-root user'
 METADATA_CLOUD_NAME_KEY = 'cloud-name'
 
 UNSET = "_unset"
+METADATA_UNKNOWN = 'unknown'
 
 LOG = logging.getLogger(__name__)
 
+# CLOUD_ID_REGION_PREFIX_MAP format is:
+#  <region-match-prefix>: (<new-cloud-id>: <test_allowed_cloud_callable>)
+CLOUD_ID_REGION_PREFIX_MAP = {
+    'cn-': ('aws-china', lambda c: c == 'aws'),  # only change aws regions
+    'us-gov-': ('aws-gov', lambda c: c == 'aws'),  # only change aws regions
+    'china': ('azure-china', lambda c: c == 'azure'),  # only change azure
+}
+
 
 class DataSourceNotFoundException(Exception):
     pass
@@ -133,6 +142,14 @@ class DataSource(object):
     # Cached cloud_name as determined by _get_cloud_name
     _cloud_name = None
 
+    # Cached cloud platform api type: e.g. ec2, openstack, kvm, lxd, azure etc.
+    _platform_type = None
+
+    # More details about the cloud platform:
+    #  - metadata (http://169.254.169.254/)
+    #  - seed-dir (<dirname>)
+    _subplatform = None
+
     # Track the discovered fallback nic for use in configuration generation.
     _fallback_interface = None
 
@@ -192,21 +209,24 @@ class DataSource(object):
         local_hostname = self.get_hostname()
         instance_id = self.get_instance_id()
         availability_zone = self.availability_zone
-        cloud_name = self.cloud_name
-        # When adding new standard keys prefer underscore-delimited instead
-        # of hyphen-delimted to support simple variable references in jinja
-        # templates.
+        # In the event of upgrade from existing cloudinit, pickled datasource
+        # will not contain these new class attributes. So we need to recrawl
+        # metadata to discover that content.
         return {
             'v1': {
+                '_beta_keys': ['subplatform'],
                 'availability-zone': availability_zone,
                 'availability_zone': availability_zone,
-                'cloud-name': cloud_name,
-                'cloud_name': cloud_name,
+                'cloud-name': self.cloud_name,
+                'cloud_name': self.cloud_name,
+                'platform': self.platform_type,
+                'public_ssh_keys': self.get_public_ssh_keys(),
                 'instance-id': instance_id,
                 'instance_id': instance_id,
                 'local-hostname': local_hostname,
                 'local_hostname': local_hostname,
-                'region': self.region}}
+                'region': self.region,
+                'subplatform': self.subplatform}}
 
     def clear_cached_attrs(self, attr_defaults=()):
         """Reset any cached metadata attributes to datasource defaults.
@@ -247,19 +267,27 @@ class DataSource(object):
 
         @return True on successful write, False otherwise.
         """
-        instance_data = {
-            'ds': {'_doc': EXPERIMENTAL_TEXT,
-                   'meta_data': self.metadata}}
-        if hasattr(self, 'network_json'):
-            network_json = getattr(self, 'network_json')
-            if network_json != UNSET:
-                instance_data['ds']['network_json'] = network_json
-        if hasattr(self, 'ec2_metadata'):
-            ec2_metadata = getattr(self, 'ec2_metadata')
-            if ec2_metadata != UNSET:
-                instance_data['ds']['ec2_metadata'] = ec2_metadata
+        if hasattr(self, '_crawled_metadata'):
+            # Any datasource with _crawled_metadata will best represent
+            # most recent, 'raw' metadata
+            crawled_metadata = copy.deepcopy(
+                getattr(self, '_crawled_metadata'))
+            crawled_metadata.pop('user-data', None)
+            crawled_metadata.pop('vendor-data', None)
+            instance_data = {'ds': crawled_metadata}
+        else:
+            instance_data = {'ds': {'meta_data': self.metadata}}
+            if hasattr(self, 'network_json'):
+                network_json = getattr(self, 'network_json')
+                if network_json != UNSET:
+                    instance_data['ds']['network_json'] = network_json
+            if hasattr(self, 'ec2_metadata'):
+                ec2_metadata = getattr(self, 'ec2_metadata')
+                if ec2_metadata != UNSET:
+                    instance_data['ds']['ec2_metadata'] = ec2_metadata
         instance_data.update(
             self._get_standardized_metadata())
+        instance_data['ds']['_doc'] = EXPERIMENTAL_TEXT
         try:
             # Process content base64encoding unserializable values
             content = util.json_dumps(instance_data)
@@ -347,6 +375,40 @@ class DataSource(object):
         return self._fallback_interface
 
     @property
+    def platform_type(self):
+        if not hasattr(self, '_platform_type'):
+            # Handle upgrade path where pickled datasource has no _platform.
+            self._platform_type = self.dsname.lower()
+        if not self._platform_type:
+            self._platform_type = self.dsname.lower()
+        return self._platform_type
+
+    @property
+    def subplatform(self):
+        """Return a string representing subplatform details for the datasource.
+
+        This should be guidance for where the metadata is sourced.
+        Examples of this on different clouds:
+            ec2:       metadata (http://169.254.169.254)
+            openstack: configdrive (/dev/path)
+            openstack: metadata (http://169.254.169.254)
+            nocloud:   seed-dir (/seed/dir/path)
+            lxd:       nocloud (/seed/dir/path)
+        """
+        if not hasattr(self, '_subplatform'):
+            # Handle upgrade path where pickled datasource has no _platform.
+            self._subplatform = self._get_subplatform()
+        if not self._subplatform:
+            self._subplatform = self._get_subplatform()
+        return self._subplatform
+
+    def _get_subplatform(self):
+        """Subclasses should implement to return a "slug (detail)" string."""
+        if hasattr(self, 'metadata_address'):
+            return 'metadata (%s)' % getattr(self, 'metadata_address')
+        return METADATA_UNKNOWN
+
+    @property
     def cloud_name(self):
         """Return lowercase cloud name as determined by the datasource.
 
@@ -359,9 +421,11 @@ class DataSource(object):
             cloud_name = self.metadata.get(METADATA_CLOUD_NAME_KEY)
             if isinstance(cloud_name, six.string_types):
                 self._cloud_name = cloud_name.lower()
-            LOG.debug(
-                'Ignoring metadata provided key %s: non-string type %s',
-                METADATA_CLOUD_NAME_KEY, type(cloud_name))
+            else:
+                self._cloud_name = self._get_cloud_name().lower()
+                LOG.debug(
+                    'Ignoring metadata provided key %s: non-string type %s',
+                    METADATA_CLOUD_NAME_KEY, type(cloud_name))
         else:
             self._cloud_name = self._get_cloud_name().lower()
         return self._cloud_name
@@ -714,6 +778,25 @@ def instance_id_matches_system_uuid(instance_id, field='system-uuid'):
     return instance_id.lower() == dmi_value.lower()
 
 
+def canonical_cloud_id(cloud_name, region, platform):
+    """Lookup the canonical cloud-id for a given cloud_name and region."""
+    if not cloud_name:
+        cloud_name = METADATA_UNKNOWN
+    if not region:
+        region = METADATA_UNKNOWN
+    if region == METADATA_UNKNOWN:
+        if cloud_name != METADATA_UNKNOWN:
+            return cloud_name
+        return platform
+    for prefix, cloud_id_test in CLOUD_ID_REGION_PREFIX_MAP.items():
+        (cloud_id, valid_cloud) = cloud_id_test
+        if region.startswith(prefix) and valid_cloud(cloud_name):
+            return cloud_id
+    if cloud_name != METADATA_UNKNOWN:
+        return cloud_name
+    return platform
+
+
 def convert_vendordata(data, recurse=True):
     """data: a loaded object (strings, arrays, dicts).
     return something suitable for cloudinit vendordata_raw.
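The new `canonical_cloud_id` helper prefers a region-specific id, then the cloud name, then the raw platform. The snippet below condenses the function and its prefix map from the diff above into a runnable form:

```python
METADATA_UNKNOWN = 'unknown'

# Region-prefix map as added in the diff above.
CLOUD_ID_REGION_PREFIX_MAP = {
    'cn-': ('aws-china', lambda c: c == 'aws'),  # only change aws regions
    'us-gov-': ('aws-gov', lambda c: c == 'aws'),  # only change aws regions
    'china': ('azure-china', lambda c: c == 'azure'),  # only change azure
}

def canonical_cloud_id(cloud_name, region, platform):
    """Prefer region-specific ids, then cloud_name, then platform."""
    cloud_name = cloud_name or METADATA_UNKNOWN
    region = region or METADATA_UNKNOWN
    if region == METADATA_UNKNOWN:
        return cloud_name if cloud_name != METADATA_UNKNOWN else platform
    for prefix, (cloud_id, valid_cloud) in CLOUD_ID_REGION_PREFIX_MAP.items():
        if region.startswith(prefix) and valid_cloud(cloud_name):
            return cloud_id
    return cloud_name if cloud_name != METADATA_UNKNOWN else platform

print(canonical_cloud_id('aws', 'cn-north-1', 'ec2'))  # aws-china
print(canonical_cloud_id(None, None, 'ec2'))           # ec2
```

The per-entry callable guards against false positives, e.g. a non-Azure cloud whose region happens to start with "china" keeps its own id.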
3054 | diff --git a/cloudinit/sources/helpers/netlink.py b/cloudinit/sources/helpers/netlink.py | |||
3055 | 720 | new file mode 100644 | 803 | new file mode 100644 |
3056 | index 0000000..d377ae3 | |||
3057 | --- /dev/null | |||
3058 | +++ b/cloudinit/sources/helpers/netlink.py | |||
3059 | @@ -0,0 +1,250 @@ | |||
# Author: Tamilmani Manoharan <tamanoha@microsoft.com>
#
# This file is part of cloud-init. See LICENSE file for license information.

from cloudinit import log as logging
from cloudinit import util
from collections import namedtuple

import os
import select
import socket
import struct

LOG = logging.getLogger(__name__)

# http://man7.org/linux/man-pages/man7/netlink.7.html
RTMGRP_LINK = 1
NLMSG_NOOP = 1
NLMSG_ERROR = 2
NLMSG_DONE = 3
RTM_NEWLINK = 16
RTM_DELLINK = 17
RTM_GETLINK = 18
RTM_SETLINK = 19
MAX_SIZE = 65535
RTA_DATA_OFFSET = 32
MSG_TYPE_OFFSET = 16
SELECT_TIMEOUT = 60

NLMSGHDR_FMT = "IHHII"
IFINFOMSG_FMT = "BHiII"
NLMSGHDR_SIZE = struct.calcsize(NLMSGHDR_FMT)
IFINFOMSG_SIZE = struct.calcsize(IFINFOMSG_FMT)
RTATTR_START_OFFSET = NLMSGHDR_SIZE + IFINFOMSG_SIZE
RTA_DATA_START_OFFSET = 4
PAD_ALIGNMENT = 4

IFLA_IFNAME = 3
IFLA_OPERSTATE = 16

# https://www.kernel.org/doc/Documentation/networking/operstates.txt
OPER_UNKNOWN = 0
OPER_NOTPRESENT = 1
OPER_DOWN = 2
OPER_LOWERLAYERDOWN = 3
OPER_TESTING = 4
OPER_DORMANT = 5
OPER_UP = 6

RTAAttr = namedtuple('RTAAttr', ['length', 'rta_type', 'data'])
InterfaceOperstate = namedtuple('InterfaceOperstate', ['ifname', 'operstate'])
NetlinkHeader = namedtuple('NetlinkHeader', ['length', 'type', 'flags', 'seq',
                                             'pid'])


class NetlinkCreateSocketError(RuntimeError):
    '''Raised if netlink socket fails during create or bind.'''
    pass


def create_bound_netlink_socket():
    '''Create a netlink socket and bind it to the RTMGRP_LINK group to
    catch interface down/up events. The socket is bound only on RTMGRP_LINK
    (which only includes RTM_NEWLINK/RTM_DELLINK/RTM_GETLINK events) and is
    set to non-blocking mode since we are only receiving messages.

    :returns: netlink socket in non-blocking mode
    :raises: NetlinkCreateSocketError
    '''
    try:
        netlink_socket = socket.socket(socket.AF_NETLINK,
                                       socket.SOCK_RAW,
                                       socket.NETLINK_ROUTE)
        netlink_socket.bind((os.getpid(), RTMGRP_LINK))
        netlink_socket.setblocking(0)
    except socket.error as e:
        msg = "Exception during netlink socket create: %s" % e
        raise NetlinkCreateSocketError(msg)
    LOG.debug("Created netlink socket")
    return netlink_socket


def get_netlink_msg_header(data):
    '''Get the netlink message type and length.

    :param: data: data read from the netlink socket
    :returns: NetlinkHeader namedtuple
    :raises: AssertionError if data is None or smaller than NLMSGHDR_SIZE

    struct nlmsghdr {
        __u32 nlmsg_len;    /* Length of message including header */
        __u16 nlmsg_type;   /* Type of message content */
        __u16 nlmsg_flags;  /* Additional flags */
        __u32 nlmsg_seq;    /* Sequence number */
        __u32 nlmsg_pid;    /* Sender port ID */
    };
    '''
    assert (data is not None), ("data is none")
    assert (len(data) >= NLMSGHDR_SIZE), (
        "data is smaller than netlink message header")
    msg_len, msg_type, flags, seq, pid = struct.unpack(NLMSGHDR_FMT,
                                                       data[:MSG_TYPE_OFFSET])
    LOG.debug("Got netlink msg of type %d", msg_type)
    return NetlinkHeader(msg_len, msg_type, flags, seq, pid)


def read_netlink_socket(netlink_socket, timeout=None):
    '''Select and read from the netlink socket if ready.

    :param: netlink_socket: socket object to read from
    :param: timeout: seconds to wait while reading; if None, block
        indefinitely until the socket is ready for read
    :returns: string of data read (max length = <MAX_SIZE>) from socket,
        or None if no data was read
    :raises: AssertionError if netlink_socket is None
    '''
    assert (netlink_socket is not None), ("netlink socket is none")
    read_set, _, _ = select.select([netlink_socket], [], [], timeout)
    # In case of a timeout, read_set doesn't contain the netlink socket;
    # just return from this function.
    if netlink_socket not in read_set:
        return None
    LOG.debug("netlink socket ready for read")
    data = netlink_socket.recv(MAX_SIZE)
    if data is None:
        LOG.error("Reading from Netlink socket returned no data")
    return data


def unpack_rta_attr(data, offset):
    '''Unpack a single rta attribute.

    :param: data: string of data read from the netlink socket
    :param: offset: starting offset of the RTA attribute
    :return: RTAAttr object with length, type and data; None on error
    :raises: AssertionError if data is None or offset is not an integer
    '''
    assert (data is not None), ("data is none")
    assert (type(offset) == int), ("offset is not integer")
    assert (offset >= RTATTR_START_OFFSET), (
        "rta offset is less than expected length")
    length = rta_type = 0
    attr_data = None
    try:
        length = struct.unpack_from("H", data, offset=offset)[0]
        rta_type = struct.unpack_from("H", data, offset=offset + 2)[0]
    except struct.error:
        return None  # Should mean our offset is >= remaining data

    # Unpack just the attribute's data. Offset by 4 to skip length/type header
    attr_data = data[offset + RTA_DATA_START_OFFSET:offset + length]
    return RTAAttr(length, rta_type, attr_data)


def read_rta_oper_state(data):
    '''Read the interface name and operational state from RTA data.

    :param: data: string of data read from the netlink socket
    :returns: InterfaceOperstate object containing ifname and operstate;
        None if data does not contain valid IFLA_OPERSTATE and
        IFLA_IFNAME messages
    :raises: AssertionError if data is None or shorter than
        RTATTR_START_OFFSET
    '''
    assert (data is not None), ("data is none")
    assert (len(data) > RTATTR_START_OFFSET), (
        "length of data is smaller than RTATTR_START_OFFSET")
    ifname = operstate = None
    offset = RTATTR_START_OFFSET
    while offset <= len(data):
        attr = unpack_rta_attr(data, offset)
        if not attr or attr.length == 0:
            break
        # Each attribute is 4-byte aligned. Determine pad length.
        padlen = (PAD_ALIGNMENT -
                  (attr.length % PAD_ALIGNMENT)) % PAD_ALIGNMENT
        offset += attr.length + padlen

        if attr.rta_type == IFLA_OPERSTATE:
            operstate = ord(attr.data)
        elif attr.rta_type == IFLA_IFNAME:
            interface_name = util.decode_binary(attr.data, 'utf-8')
            ifname = interface_name.strip('\0')
    if not ifname or operstate is None:
        return None
    LOG.debug("rta attrs: ifname %s operstate %d", ifname, operstate)
    return InterfaceOperstate(ifname, operstate)


def wait_for_media_disconnect_connect(netlink_socket, ifname):
    '''Block until a media disconnect and reconnect has happened on an
    interface. Listens on the netlink socket and returns once the carrier
    state changes from down to up on the given interface.

    :param: netlink_socket: netlink socket on which to receive events
    :param: ifname: interface name to watch for netlink events
    :raises: AssertionError if netlink_socket is None or ifname is
        None/empty
    '''
    assert (netlink_socket is not None), ("netlink socket is none")
    assert (ifname is not None), ("interface name is none")
    assert (len(ifname) > 0), ("interface name cannot be empty")
    carrier = OPER_UP
    prevCarrier = OPER_UP
    data = bytes()
    LOG.debug("Wait for media disconnect and reconnect to happen")
    while True:
        recv_data = read_netlink_socket(netlink_socket, SELECT_TIMEOUT)
        if recv_data is None:
            continue
        LOG.debug('read %d bytes from socket', len(recv_data))
        data += recv_data
        LOG.debug('Length of data after concat %d', len(data))
        offset = 0
        datalen = len(data)
        while offset < datalen:
            nl_msg = data[offset:]
            if len(nl_msg) < NLMSGHDR_SIZE:
                LOG.debug("Data is smaller than netlink header")
                break
            nlheader = get_netlink_msg_header(nl_msg)
            if len(nl_msg) < nlheader.length:
                LOG.debug("Partial data. Smaller than netlink message")
                break
            padlen = (nlheader.length+PAD_ALIGNMENT-1) & ~(PAD_ALIGNMENT-1)
            offset = offset + padlen
            LOG.debug('offset to next netlink message: %d', offset)
            # Ignore any messages that are not new link or del link
            if nlheader.type not in [RTM_NEWLINK, RTM_DELLINK]:
                continue
            interface_state = read_rta_oper_state(nl_msg)
            if interface_state is None:
                LOG.debug('Failed to read rta attributes: %s', interface_state)
                continue
            if interface_state.ifname != ifname:
                LOG.debug(
                    "Ignored netlink event on interface %s. Waiting for %s.",
                    interface_state.ifname, ifname)
                continue
            if interface_state.operstate not in [OPER_UP, OPER_DOWN]:
                continue
            prevCarrier = carrier
            carrier = interface_state.operstate
            # check for carrier down, up sequence
            isVnetSwitch = (prevCarrier == OPER_DOWN) and (carrier == OPER_UP)
            if isVnetSwitch:
                LOG.debug("Media switch happened on %s.", ifname)
                return
        data = data[offset:]

# vi: ts=4 expandtab
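The offset constants above fall directly out of the struct layouts, and the attribute walk in read_rta_oper_state() relies on 4-byte alignment. A quick standalone sanity check of that arithmetic (the helper name `pad_to_alignment` is introduced here for illustration; the module inlines the expression):

```python
import struct

NLMSGHDR_FMT = "IHHII"    # struct nlmsghdr
IFINFOMSG_FMT = "BHiII"   # struct ifinfomsg
PAD_ALIGNMENT = 4

# Both headers are 16 bytes, so RTA attributes start at offset 32.
print(struct.calcsize(NLMSGHDR_FMT))   # 16
print(struct.calcsize(IFINFOMSG_FMT))  # 16


def pad_to_alignment(length):
    # Bytes of padding needed to reach the next 4-byte boundary,
    # mirroring the padlen expression in read_rta_oper_state().
    return (PAD_ALIGNMENT - (length % PAD_ALIGNMENT)) % PAD_ALIGNMENT


print([pad_to_alignment(n) for n in (4, 5, 6, 7, 8)])  # [0, 3, 2, 1, 0]
```

A length that is already a multiple of four needs no padding; anything else is rounded up, which is why the modulo is applied twice.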
diff --git a/cloudinit/sources/helpers/tests/test_netlink.py b/cloudinit/sources/helpers/tests/test_netlink.py
new file mode 100644
index 0000000..c2898a1
--- /dev/null
+++ b/cloudinit/sources/helpers/tests/test_netlink.py
@@ -0,0 +1,373 @@
3316 | 1 | # Author: Tamilmani Manoharan <tamanoha@microsoft.com> | ||
3317 | 2 | # | ||
3318 | 3 | # This file is part of cloud-init. See LICENSE file for license information. | ||
3319 | 4 | |||
3320 | 5 | from cloudinit.tests.helpers import CiTestCase, mock | ||
3321 | 6 | import socket | ||
3322 | 7 | import struct | ||
3323 | 8 | import codecs | ||
3324 | 9 | from cloudinit.sources.helpers.netlink import ( | ||
3325 | 10 | NetlinkCreateSocketError, create_bound_netlink_socket, read_netlink_socket, | ||
3326 | 11 | read_rta_oper_state, unpack_rta_attr, wait_for_media_disconnect_connect, | ||
3327 | 12 | OPER_DOWN, OPER_UP, OPER_DORMANT, OPER_LOWERLAYERDOWN, OPER_NOTPRESENT, | ||
3328 | 13 | OPER_TESTING, OPER_UNKNOWN, RTATTR_START_OFFSET, RTM_NEWLINK, RTM_SETLINK, | ||
3329 | 14 | RTM_GETLINK, MAX_SIZE) | ||
3330 | 15 | |||
3331 | 16 | |||
3332 | 17 | def int_to_bytes(i): | ||
3333 | 18 | '''convert integer to binary: eg: 1 to \x01''' | ||
3334 | 19 | hex_value = '{0:x}'.format(i) | ||
3335 | 20 | hex_value = '0' * (len(hex_value) % 2) + hex_value | ||
3336 | 21 | return codecs.decode(hex_value, 'hex_codec') | ||
3337 | 22 | |||
3338 | 23 | |||
3339 | 24 | class TestCreateBoundNetlinkSocket(CiTestCase): | ||
3340 | 25 | |||
3341 | 26 | @mock.patch('cloudinit.sources.helpers.netlink.socket.socket') | ||
3342 | 27 | def test_socket_error_on_create(self, m_socket): | ||
3343 | 28 | '''create_bound_netlink_socket catches socket creation exception''' | ||
3344 | 29 | |||
3345 | 30 | """NetlinkCreateSocketError is raised when socket creation errors.""" | ||
3346 | 31 | m_socket.side_effect = socket.error("Fake socket failure") | ||
3347 | 32 | with self.assertRaises(NetlinkCreateSocketError) as ctx_mgr: | ||
3348 | 33 | create_bound_netlink_socket() | ||
3349 | 34 | self.assertEqual( | ||
3350 | 35 | 'Exception during netlink socket create: Fake socket failure', | ||
3351 | 36 | str(ctx_mgr.exception)) | ||
3352 | 37 | |||
3353 | 38 | |||
3354 | 39 | class TestReadNetlinkSocket(CiTestCase): | ||
3355 | 40 | |||
3356 | 41 | @mock.patch('cloudinit.sources.helpers.netlink.socket.socket') | ||
3357 | 42 | @mock.patch('cloudinit.sources.helpers.netlink.select.select') | ||
3358 | 43 | def test_read_netlink_socket(self, m_select, m_socket): | ||
3359 | 44 | '''read_netlink_socket able to receive data''' | ||
3360 | 45 | data = 'netlinktest' | ||
3361 | 46 | m_select.return_value = [m_socket], None, None | ||
3362 | 47 | m_socket.recv.return_value = data | ||
3363 | 48 | recv_data = read_netlink_socket(m_socket, 2) | ||
3364 | 49 | m_select.assert_called_with([m_socket], [], [], 2) | ||
3365 | 50 | m_socket.recv.assert_called_with(MAX_SIZE) | ||
3366 | 51 | self.assertIsNotNone(recv_data) | ||
3367 | 52 | self.assertEqual(recv_data, data) | ||
3368 | 53 | |||
3369 | 54 | @mock.patch('cloudinit.sources.helpers.netlink.socket.socket') | ||
3370 | 55 | @mock.patch('cloudinit.sources.helpers.netlink.select.select') | ||
3371 | 56 | def test_netlink_read_timeout(self, m_select, m_socket): | ||
3372 | 57 | '''read_netlink_socket should timeout if nothing to read''' | ||
3373 | 58 | m_select.return_value = [], None, None | ||
3374 | 59 | data = read_netlink_socket(m_socket, 1) | ||
3375 | 60 | m_select.assert_called_with([m_socket], [], [], 1) | ||
3376 | 61 | self.assertEqual(m_socket.recv.call_count, 0) | ||
3377 | 62 | self.assertIsNone(data) | ||
3378 | 63 | |||
3379 | 64 | def test_read_invalid_socket(self): | ||
3380 | 65 | '''read_netlink_socket raises assert error if socket is invalid''' | ||
3381 | 66 | socket = None | ||
3382 | 67 | with self.assertRaises(AssertionError) as context: | ||
3383 | 68 | read_netlink_socket(socket, 1) | ||
3384 | 69 | self.assertTrue('netlink socket is none' in str(context.exception)) | ||
3385 | 70 | |||
3386 | 71 | |||
3387 | 72 | class TestParseNetlinkMessage(CiTestCase): | ||
3388 | 73 | |||
3389 | 74 | def test_read_rta_oper_state(self): | ||
3390 | 75 | '''read_rta_oper_state could parse netlink message and extract data''' | ||
3391 | 76 | ifname = "eth0" | ||
3392 | 77 | bytes = ifname.encode("utf-8") | ||
3393 | 78 | buf = bytearray(48) | ||
3394 | 79 | struct.pack_into("HH4sHHc", buf, RTATTR_START_OFFSET, 8, 3, bytes, 5, | ||
3395 | 80 | 16, int_to_bytes(OPER_DOWN)) | ||
3396 | 81 | interface_state = read_rta_oper_state(buf) | ||
3397 | 82 | self.assertEqual(interface_state.ifname, ifname) | ||
3398 | 83 | self.assertEqual(interface_state.operstate, OPER_DOWN) | ||
3399 | 84 | |||
3400 | 85 | def test_read_none_data(self): | ||
3401 | 86 | '''read_rta_oper_state raises assert error if data is none''' | ||
3402 | 87 | data = None | ||
3403 | 88 | with self.assertRaises(AssertionError) as context: | ||
3404 | 89 | read_rta_oper_state(data) | ||
3405 | 90 | self.assertTrue('data is none', str(context.exception)) | ||
3406 | 91 | |||
3407 | 92 | def test_read_invalid_rta_operstate_none(self): | ||
3408 | 93 | '''read_rta_oper_state returns none if operstate is none''' | ||
3409 | 94 | ifname = "eth0" | ||
3410 | 95 | buf = bytearray(40) | ||
3411 | 96 | bytes = ifname.encode("utf-8") | ||
3412 | 97 | struct.pack_into("HH4s", buf, RTATTR_START_OFFSET, 8, 3, bytes) | ||
3413 | 98 | interface_state = read_rta_oper_state(buf) | ||
3414 | 99 | self.assertIsNone(interface_state) | ||
3415 | 100 | |||
3416 | 101 | def test_read_invalid_rta_ifname_none(self): | ||
3417 | 102 | '''read_rta_oper_state returns none if ifname is none''' | ||
3418 | 103 | buf = bytearray(40) | ||
3419 | 104 | struct.pack_into("HHc", buf, RTATTR_START_OFFSET, 5, 16, | ||
3420 | 105 | int_to_bytes(OPER_DOWN)) | ||
3421 | 106 | interface_state = read_rta_oper_state(buf) | ||
3422 | 107 | self.assertIsNone(interface_state) | ||
3423 | 108 | |||
3424 | 109 | def test_read_invalid_data_len(self): | ||
3425 | 110 | '''raise assert error if data size is smaller than required size''' | ||
3426 | 111 | buf = bytearray(32) | ||
3427 | 112 | with self.assertRaises(AssertionError) as context: | ||
3428 | 113 | read_rta_oper_state(buf) | ||
3429 | 114 | self.assertTrue('length of data is smaller than RTATTR_START_OFFSET' in | ||
3430 | 115 | str(context.exception)) | ||
3431 | 116 | |||
3432 | 117 | def test_unpack_rta_attr_none_data(self): | ||
3433 | 118 | '''unpack_rta_attr raises assert error if data is none''' | ||
3434 | 119 | data = None | ||
3435 | 120 | with self.assertRaises(AssertionError) as context: | ||
3436 | 121 | unpack_rta_attr(data, RTATTR_START_OFFSET) | ||
3437 | 122 | self.assertTrue('data is none' in str(context.exception)) | ||
3438 | 123 | |||
3439 | 124 | def test_unpack_rta_attr_invalid_offset(self): | ||
3440 | 125 | '''unpack_rta_attr raises assert error if offset is invalid''' | ||
3441 | 126 | data = bytearray(48) | ||
3442 | 127 | with self.assertRaises(AssertionError) as context: | ||
3443 | 128 | unpack_rta_attr(data, "offset") | ||
3444 | 129 | self.assertTrue('offset is not integer' in str(context.exception)) | ||
3445 | 130 | with self.assertRaises(AssertionError) as context: | ||
3446 | 131 | unpack_rta_attr(data, 31) | ||
3447 | 132 | self.assertTrue('rta offset is less than expected length' in | ||
3448 | 133 | str(context.exception)) | ||
3449 | 134 | |||
3450 | 135 | |||
3451 | 136 | @mock.patch('cloudinit.sources.helpers.netlink.socket.socket') | ||
3452 | 137 | @mock.patch('cloudinit.sources.helpers.netlink.read_netlink_socket') | ||
3453 | 138 | class TestWaitForMediaDisconnectConnect(CiTestCase): | ||
3454 | 139 | with_logs = True | ||
3455 | 140 | |||
3456 | 141 | def _media_switch_data(self, ifname, msg_type, operstate): | ||
3457 | 142 | '''construct netlink data with specified fields''' | ||
3458 | 143 | if ifname and operstate is not None: | ||
3459 | 144 | data = bytearray(48) | ||
3460 | 145 | bytes = ifname.encode("utf-8") | ||
3461 | 146 | struct.pack_into("HH4sHHc", data, RTATTR_START_OFFSET, 8, 3, | ||
3462 | 147 | bytes, 5, 16, int_to_bytes(operstate)) | ||
3463 | 148 | elif ifname: | ||
3464 | 149 | data = bytearray(40) | ||
3465 | 150 | bytes = ifname.encode("utf-8") | ||
3466 | 151 | struct.pack_into("HH4s", data, RTATTR_START_OFFSET, 8, 3, bytes) | ||
3467 | 152 | elif operstate: | ||
3468 | 153 | data = bytearray(40) | ||
3469 | 154 | struct.pack_into("HHc", data, RTATTR_START_OFFSET, 5, 16, | ||
3470 | 155 | int_to_bytes(operstate)) | ||
3471 | 156 | struct.pack_into("=LHHLL", data, 0, len(data), msg_type, 0, 0, 0) | ||
3472 | 157 | return data | ||
3473 | 158 | |||
3474 | 159 | def test_media_down_up_scenario(self, m_read_netlink_socket, | ||
3475 | 160 | m_socket): | ||
3476 | 161 | '''Test for media down up sequence for required interface name''' | ||
3477 | 162 | ifname = "eth0" | ||
3478 | 163 | # construct data for Oper State down | ||
3479 | 164 | data_op_down = self._media_switch_data(ifname, RTM_NEWLINK, OPER_DOWN) | ||
3480 | 165 | # construct data for Oper State up | ||
3481 | 166 | data_op_up = self._media_switch_data(ifname, RTM_NEWLINK, OPER_UP) | ||
3482 | 167 | m_read_netlink_socket.side_effect = [data_op_down, data_op_up] | ||
3483 | 168 | wait_for_media_disconnect_connect(m_socket, ifname) | ||
3484 | 169 | self.assertEqual(m_read_netlink_socket.call_count, 2) | ||
3485 | 170 | |||
3486 | 171 | def test_wait_for_media_switch_diff_interface(self, m_read_netlink_socket, | ||
3487 | 172 | m_socket): | ||
3488 | 173 | '''wait_for_media_disconnect_connect ignores unexpected interfaces. | ||
3489 | 174 | |||
3490 | 175 | The first two messages are for other interfaces and last two are for | ||
3491 | 176 | expected interface. So the function exit only after receiving last | ||
3492 | 177 | 2 messages and therefore the call count for m_read_netlink_socket | ||
3493 | 178 | has to be 4 | ||
3494 | 179 | ''' | ||
3495 | 180 | other_ifname = "eth1" | ||
3496 | 181 | expected_ifname = "eth0" | ||
3497 | 182 | data_op_down_eth1 = self._media_switch_data( | ||
3498 | 183 | other_ifname, RTM_NEWLINK, OPER_DOWN) | ||
3499 | 184 | data_op_up_eth1 = self._media_switch_data( | ||
3500 | 185 | other_ifname, RTM_NEWLINK, OPER_UP) | ||
3501 | 186 | data_op_down_eth0 = self._media_switch_data( | ||
3502 | 187 | expected_ifname, RTM_NEWLINK, OPER_DOWN) | ||
3503 | 188 | data_op_up_eth0 = self._media_switch_data( | ||
3504 | 189 | expected_ifname, RTM_NEWLINK, OPER_UP) | ||
3505 | 190 | m_read_netlink_socket.side_effect = [data_op_down_eth1, | ||
3506 | 191 | data_op_up_eth1, | ||
3507 | 192 | data_op_down_eth0, | ||
3508 | 193 | data_op_up_eth0] | ||
3509 | 194 | wait_for_media_disconnect_connect(m_socket, expected_ifname) | ||
3510 | 195 | self.assertIn('Ignored netlink event on interface %s' % other_ifname, | ||
3511 | 196 | self.logs.getvalue()) | ||
3512 | 197 | self.assertEqual(m_read_netlink_socket.call_count, 4) | ||
3513 | 198 | |||
3514 | 199 | def test_invalid_msgtype_getlink(self, m_read_netlink_socket, m_socket): | ||
3515 | 200 | '''wait_for_media_disconnect_connect ignores GETLINK events. | ||
3516 | 201 | |||
3517 | 202 | The first two messages are for oper down and up for RTM_GETLINK type | ||
3518 | 203 | which netlink module will ignore. The last 2 messages are RTM_NEWLINK | ||
3519 | 204 | with oper state down and up messages. Therefore the call count for | ||
3520 | 205 | m_read_netlink_socket has to be 4 ignoring first 2 messages | ||
3521 | 206 | of RTM_GETLINK | ||
3522 | 207 | ''' | ||
3523 | 208 | ifname = "eth0" | ||
3524 | 209 | data_getlink_down = self._media_switch_data( | ||
3525 | 210 | ifname, RTM_GETLINK, OPER_DOWN) | ||
3526 | 211 | data_getlink_up = self._media_switch_data( | ||
3527 | 212 | ifname, RTM_GETLINK, OPER_UP) | ||
3528 | 213 | data_newlink_down = self._media_switch_data( | ||
3529 | 214 | ifname, RTM_NEWLINK, OPER_DOWN) | ||
3530 | 215 | data_newlink_up = self._media_switch_data( | ||
3531 | 216 | ifname, RTM_NEWLINK, OPER_UP) | ||
3532 | 217 | m_read_netlink_socket.side_effect = [data_getlink_down, | ||
3533 | 218 | data_getlink_up, | ||
3534 | 219 | data_newlink_down, | ||
3535 | 220 | data_newlink_up] | ||
3536 | 221 | wait_for_media_disconnect_connect(m_socket, ifname) | ||
3537 | 222 | self.assertEqual(m_read_netlink_socket.call_count, 4) | ||
3538 | 223 | |||
3539 | 224 | def test_invalid_msgtype_setlink(self, m_read_netlink_socket, m_socket): | ||
3540 | 225 | '''wait_for_media_disconnect_connect ignores SETLINK events. | ||
3541 | 226 | |||
3542 | 227 | The first two messages are for oper down and up for RTM_GETLINK type | ||
3543 | 228 | which it will ignore. 3rd and 4th messages are RTM_NEWLINK with down | ||
3544 | 229 | and up messages. This function should exit after 4th messages since it | ||
3545 | 230 | sees down->up scenario. So the call count for m_read_netlink_socket | ||
3546 | 231 | has to be 4 ignoring first 2 messages of RTM_GETLINK and | ||
3547 | 232 | last 2 messages of RTM_NEWLINK | ||
3548 | 233 | ''' | ||
3549 | 234 | ifname = "eth0" | ||
3550 | 235 | data_setlink_down = self._media_switch_data( | ||
3551 | 236 | ifname, RTM_SETLINK, OPER_DOWN) | ||
3552 | 237 | data_setlink_up = self._media_switch_data( | ||
3553 | 238 | ifname, RTM_SETLINK, OPER_UP) | ||
3554 | 239 | data_newlink_down = self._media_switch_data( | ||
3555 | 240 | ifname, RTM_NEWLINK, OPER_DOWN) | ||
3556 | 241 | data_newlink_up = self._media_switch_data( | ||
3557 | 242 | ifname, RTM_NEWLINK, OPER_UP) | ||
3558 | 243 | m_read_netlink_socket.side_effect = [data_setlink_down, | ||
3559 | 244 | data_setlink_up, | ||
3560 | 245 | data_newlink_down, | ||
3561 | 246 | data_newlink_up, | ||
3562 | 247 | data_newlink_down, | ||
3563 | 248 | data_newlink_up] | ||
3564 | 249 | wait_for_media_disconnect_connect(m_socket, ifname) | ||
3565 | 250 | self.assertEqual(m_read_netlink_socket.call_count, 4) | ||
3566 | 251 | |||
3567 | 252 | def test_netlink_invalid_switch_scenario(self, m_read_netlink_socket, | ||
3568 | 253 | m_socket): | ||
3569 | 254 | '''returns only if it receives UP event after a DOWN event''' | ||
3570 | 255 | ifname = "eth0" | ||
3571 | 256 | data_op_down = self._media_switch_data(ifname, RTM_NEWLINK, OPER_DOWN) | ||
3572 | 257 | data_op_up = self._media_switch_data(ifname, RTM_NEWLINK, OPER_UP) | ||
3573 | 258 | data_op_dormant = self._media_switch_data(ifname, RTM_NEWLINK, | ||
3574 | 259 | OPER_DORMANT) | ||
3575 | 260 | data_op_notpresent = self._media_switch_data(ifname, RTM_NEWLINK, | ||
3576 | 261 | OPER_NOTPRESENT) | ||
3577 | 262 | data_op_lowerdown = self._media_switch_data(ifname, RTM_NEWLINK, | ||
3578 | 263 | OPER_LOWERLAYERDOWN) | ||
3579 | 264 | data_op_testing = self._media_switch_data(ifname, RTM_NEWLINK, | ||
3580 | 265 | OPER_TESTING) | ||
3581 | 266 | data_op_unknown = self._media_switch_data(ifname, RTM_NEWLINK, | ||
3582 | 267 | OPER_UNKNOWN) | ||
3583 | 268 | m_read_netlink_socket.side_effect = [data_op_up, data_op_up, | ||
3584 | 269 | data_op_dormant, data_op_up, | ||
3585 | 270 | data_op_notpresent, data_op_up, | ||
3586 | 271 | data_op_lowerdown, data_op_up, | ||
3587 | 272 | data_op_testing, data_op_up, | ||
3588 | 273 | data_op_unknown, data_op_up, | ||
3589 | 274 | data_op_down, data_op_up] | ||
3590 | 275 | wait_for_media_disconnect_connect(m_socket, ifname) | ||
3591 | 276 | self.assertEqual(m_read_netlink_socket.call_count, 14) | ||
3592 | 277 | |||
3593 | 278 | def test_netlink_valid_inbetween_transitions(self, m_read_netlink_socket, | ||
3594 | 279 | m_socket): | ||
3595 | 280 | '''wait_for_media_disconnect_connect handles in between transitions''' | ||
3596 | 281 | ifname = "eth0" | ||
3597 | 282 | data_op_down = self._media_switch_data(ifname, RTM_NEWLINK, OPER_DOWN) | ||
3598 | 283 | data_op_up = self._media_switch_data(ifname, RTM_NEWLINK, OPER_UP) | ||
3599 | 284 | data_op_dormant = self._media_switch_data(ifname, RTM_NEWLINK, | ||
3600 | 285 | OPER_DORMANT) | ||
3601 | 286 | data_op_unknown = self._media_switch_data(ifname, RTM_NEWLINK, | ||
3602 | 287 | OPER_UNKNOWN) | ||
3603 | 288 | m_read_netlink_socket.side_effect = [data_op_down, data_op_dormant, | ||
3604 | 289 | data_op_unknown, data_op_up] | ||
3605 | 290 | wait_for_media_disconnect_connect(m_socket, ifname) | ||
3606 | 291 | self.assertEqual(m_read_netlink_socket.call_count, 4) | ||
3607 | 292 | |||
3608 | 293 | def test_netlink_invalid_operstate(self, m_read_netlink_socket, m_socket): | ||
3609 | 294 | '''wait_for_media_disconnect_connect should handle invalid operstates. | ||
3610 | 295 | |||
3611 | 296 | The function should not fail and return even if it receives invalid | ||
3612 | 297 | operstates. It always should wait for down up sequence. | ||
3613 | 298 | ''' | ||
3614 | 299 | ifname = "eth0" | ||
3615 | 300 | data_op_down = self._media_switch_data(ifname, RTM_NEWLINK, OPER_DOWN) | ||
3616 | 301 | data_op_up = self._media_switch_data(ifname, RTM_NEWLINK, OPER_UP) | ||
3617 | 302 | data_op_invalid = self._media_switch_data(ifname, RTM_NEWLINK, 7) | ||
3618 | 303 | m_read_netlink_socket.side_effect = [data_op_invalid, data_op_up, | ||
3619 | 304 | data_op_down, data_op_invalid, | ||
3620 | 305 | data_op_up] | ||
3621 | 306 | wait_for_media_disconnect_connect(m_socket, ifname) | ||
3622 | 307 | self.assertEqual(m_read_netlink_socket.call_count, 5) | ||
3623 | 308 | |||
3624 | 309 | def test_wait_invalid_socket(self, m_read_netlink_socket, m_socket): | ||
3625 | 310 | '''wait_for_media_disconnect_connect handle none netlink socket.''' | ||
3626 | 311 | socket = None | ||
3627 | 312 | ifname = "eth0" | ||
3628 | 313 | with self.assertRaises(AssertionError) as context: | ||
3629 | 314 | wait_for_media_disconnect_connect(socket, ifname) | ||
3630 | 315 | self.assertTrue('netlink socket is none' in str(context.exception)) | ||
3631 | 316 | |||
3632 | 317 | def test_wait_invalid_ifname(self, m_read_netlink_socket, m_socket): | ||
3633 | 318 | '''wait_for_media_disconnect_connect handle none interface name''' | ||
3634 | 319 | ifname = None | ||
3635 | 320 | with self.assertRaises(AssertionError) as context: | ||
3636 | 321 | wait_for_media_disconnect_connect(m_socket, ifname) | ||
3637 | 322 | self.assertTrue('interface name is none' in str(context.exception)) | ||
3638 | 323 | ifname = "" | ||
3639 | 324 | with self.assertRaises(AssertionError) as context: | ||
3640 | 325 | wait_for_media_disconnect_connect(m_socket, ifname) | ||
3641 | 326 | self.assertTrue('interface name cannot be empty' in | ||
3642 | 327 | str(context.exception)) | ||
3643 | 328 | |||
3644 | 329 | def test_wait_invalid_rta_attr(self, m_read_netlink_socket, m_socket): | ||
3645 | 330 | ''' wait_for_media_disconnect_connect handles invalid rta data''' | ||
3646 | 331 | ifname = "eth0" | ||
3647 | 332 | data_invalid1 = self._media_switch_data(None, RTM_NEWLINK, OPER_DOWN) | ||
3648 | 333 | data_invalid2 = self._media_switch_data(ifname, RTM_NEWLINK, None) | ||
3649 | 334 | data_op_down = self._media_switch_data(ifname, RTM_NEWLINK, OPER_DOWN) | ||
3650 | 335 | data_op_up = self._media_switch_data(ifname, RTM_NEWLINK, OPER_UP) | ||
3651 | 336 | m_read_netlink_socket.side_effect = [data_invalid1, data_invalid2, | ||
3652 | 337 | data_op_down, data_op_up] | ||
3653 | 338 | wait_for_media_disconnect_connect(m_socket, ifname) | ||
3654 | 339 | self.assertEqual(m_read_netlink_socket.call_count, 4) | ||
3655 | 340 | |||
3656 | 341 | def test_read_multiple_netlink_msgs(self, m_read_netlink_socket, m_socket): | ||
3657 | 342 | '''Read multiple messages in single receive call''' | ||
3658 | 343 | ifname = "eth0" | ||
3659 | 344 | bytes = ifname.encode("utf-8") | ||
3660 | 345 | data = bytearray(96) | ||
3661 | 346 | struct.pack_into("=LHHLL", data, 0, 48, RTM_NEWLINK, 0, 0, 0) | ||
3662 | 347 | struct.pack_into("HH4sHHc", data, RTATTR_START_OFFSET, 8, 3, | ||
3663 | 348 | bytes, 5, 16, int_to_bytes(OPER_DOWN)) | ||
3664 | 349 | struct.pack_into("=LHHLL", data, 48, 48, RTM_NEWLINK, 0, 0, 0) | ||
3665 | 350 | struct.pack_into("HH4sHHc", data, 48 + RTATTR_START_OFFSET, 8, | ||
3666 | 351 | 3, bytes, 5, 16, int_to_bytes(OPER_UP)) | ||
3667 | 352 | m_read_netlink_socket.return_value = data | ||
3668 | 353 | wait_for_media_disconnect_connect(m_socket, ifname) | ||
3669 | 354 | self.assertEqual(m_read_netlink_socket.call_count, 1) | ||
3670 | 355 | |||
3671 | 356 | def test_read_partial_netlink_msgs(self, m_read_netlink_socket, m_socket): | ||
3672 | 357 | '''Read partial messages across multiple receive calls''' | ||
3673 | 358 | ifname = "eth0" | ||
3674 | 359 | bytes = ifname.encode("utf-8") | ||
3675 | 360 | data1 = bytearray(112) | ||
3676 | 361 | data2 = bytearray(32) | ||
3677 | 362 | struct.pack_into("=LHHLL", data1, 0, 48, RTM_NEWLINK, 0, 0, 0) | ||
3678 | 363 | struct.pack_into("HH4sHHc", data1, RTATTR_START_OFFSET, 8, 3, | ||
3679 | 364 | bytes, 5, 16, int_to_bytes(OPER_DOWN)) | ||
3680 | 365 | struct.pack_into("=LHHLL", data1, 48, 48, RTM_NEWLINK, 0, 0, 0) | ||
3681 | 366 | struct.pack_into("HH4sHHc", data1, 80, 8, 3, bytes, 5, 16, | ||
3682 | 367 | int_to_bytes(OPER_DOWN)) | ||
3683 | 368 | struct.pack_into("=LHHLL", data1, 96, 48, RTM_NEWLINK, 0, 0, 0) | ||
3684 | 369 | struct.pack_into("HH4sHHc", data2, 16, 8, 3, bytes, 5, 16, | ||
3685 | 370 | int_to_bytes(OPER_UP)) | ||
3686 | 371 | m_read_netlink_socket.side_effect = [data1, data2] | ||
3687 | 372 | wait_for_media_disconnect_connect(m_socket, ifname) | ||
3688 | 373 | self.assertEqual(m_read_netlink_socket.call_count, 2) | ||
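The tests above hand-pack netlink RTM_NEWLINK messages with `struct`. A minimal, self-contained sketch of that packing (the constants below are assumptions mirroring the test module, not imports from it):

```python
import struct

# Constants mirroring the test module (assumed values, not imported):
RTM_NEWLINK = 16            # netlink message type for link events
OPER_DOWN, OPER_UP = 2, 6   # IF_OPER_* operational states
RTATTR_START_OFFSET = 32    # nlmsghdr (16 bytes) + ifinfomsg (16 bytes)


def pack_link_msg(ifname, operstate):
    """Pack a single 48-byte RTM_NEWLINK message the way the tests do."""
    data = bytearray(48)
    # nlmsghdr fields: length, type, flags, seq, pid
    struct.pack_into("=LHHLL", data, 0, 48, RTM_NEWLINK, 0, 0, 0)
    # Two rtattrs: IFLA_IFNAME (type 3) and IFLA_OPERSTATE (type 16)
    struct.pack_into("HH4sHHc", data, RTATTR_START_OFFSET, 8, 3,
                     ifname.encode("utf-8"), 5, 16, bytes([operstate]))
    return data


msg = pack_link_msg("eth0", OPER_DOWN)
# Unpack the header back out to confirm the layout round-trips.
length, mtype, _flags, _seq, _pid = struct.unpack_from("=LHHLL", msg, 0)
assert (length, mtype) == (48, RTM_NEWLINK)
```

Packing two such 48-byte messages into one buffer, or splitting one across two buffers, is exactly what the multiple/partial-read tests exercise.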
3689 | diff --git a/cloudinit/sources/helpers/vmware/imc/config_nic.py b/cloudinit/sources/helpers/vmware/imc/config_nic.py | |||
3690 | index e1890e2..77cbf3b 100644 | |||
3691 | --- a/cloudinit/sources/helpers/vmware/imc/config_nic.py | |||
3692 | +++ b/cloudinit/sources/helpers/vmware/imc/config_nic.py | |||
3693 | @@ -165,9 +165,8 @@ class NicConfigurator(object): | |||
3694 | 165 | 165 | ||
3695 | 166 | # Add routes if there is no primary nic | 166 | # Add routes if there is no primary nic |
3696 | 167 | if not self._primaryNic and v4.gateways: | 167 | if not self._primaryNic and v4.gateways: |
3700 | 168 | route_list.extend(self.gen_ipv4_route(nic, | 168 | subnet.update( |
3701 | 169 | v4.gateways, | 169 | {'routes': self.gen_ipv4_route(nic, v4.gateways, v4.netmask)}) |
3699 | 170 | v4.netmask)) | ||
3702 | 171 | 170 | ||
3703 | 172 | return ([subnet], route_list) | 171 | return ([subnet], route_list) |
3704 | 173 | 172 | ||
3705 | diff --git a/cloudinit/sources/tests/test_init.py b/cloudinit/sources/tests/test_init.py | |||
3706 | index 8082019..6378e98 100644 | |||
3707 | --- a/cloudinit/sources/tests/test_init.py | |||
3708 | +++ b/cloudinit/sources/tests/test_init.py | |||
3709 | @@ -11,7 +11,8 @@ from cloudinit.helpers import Paths | |||
3710 | 11 | from cloudinit import importer | 11 | from cloudinit import importer |
3711 | 12 | from cloudinit.sources import ( | 12 | from cloudinit.sources import ( |
3712 | 13 | EXPERIMENTAL_TEXT, INSTANCE_JSON_FILE, INSTANCE_JSON_SENSITIVE_FILE, | 13 | EXPERIMENTAL_TEXT, INSTANCE_JSON_FILE, INSTANCE_JSON_SENSITIVE_FILE, |
3714 | 14 | REDACT_SENSITIVE_VALUE, UNSET, DataSource, redact_sensitive_keys) | 14 | METADATA_UNKNOWN, REDACT_SENSITIVE_VALUE, UNSET, DataSource, |
3715 | 15 | canonical_cloud_id, redact_sensitive_keys) | ||
3716 | 15 | from cloudinit.tests.helpers import CiTestCase, skipIf, mock | 16 | from cloudinit.tests.helpers import CiTestCase, skipIf, mock |
3717 | 16 | from cloudinit.user_data import UserDataProcessor | 17 | from cloudinit.user_data import UserDataProcessor |
3718 | 17 | from cloudinit import util | 18 | from cloudinit import util |
3719 | @@ -295,6 +296,7 @@ class TestDataSource(CiTestCase): | |||
3720 | 295 | 'base64_encoded_keys': [], | 296 | 'base64_encoded_keys': [], |
3721 | 296 | 'sensitive_keys': [], | 297 | 'sensitive_keys': [], |
3722 | 297 | 'v1': { | 298 | 'v1': { |
3723 | 299 | '_beta_keys': ['subplatform'], | ||
3724 | 298 | 'availability-zone': 'myaz', | 300 | 'availability-zone': 'myaz', |
3725 | 299 | 'availability_zone': 'myaz', | 301 | 'availability_zone': 'myaz', |
3726 | 300 | 'cloud-name': 'subclasscloudname', | 302 | 'cloud-name': 'subclasscloudname', |
3727 | @@ -303,7 +305,10 @@ class TestDataSource(CiTestCase): | |||
3728 | 303 | 'instance_id': 'iid-datasource', | 305 | 'instance_id': 'iid-datasource', |
3729 | 304 | 'local-hostname': 'test-subclass-hostname', | 306 | 'local-hostname': 'test-subclass-hostname', |
3730 | 305 | 'local_hostname': 'test-subclass-hostname', | 307 | 'local_hostname': 'test-subclass-hostname', |
3732 | 306 | 'region': 'myregion'}, | 308 | 'platform': 'mytestsubclass', |
3733 | 309 | 'public_ssh_keys': [], | ||
3734 | 310 | 'region': 'myregion', | ||
3735 | 311 | 'subplatform': 'unknown'}, | ||
3736 | 307 | 'ds': { | 312 | 'ds': { |
3737 | 308 | '_doc': EXPERIMENTAL_TEXT, | 313 | '_doc': EXPERIMENTAL_TEXT, |
3738 | 309 | 'meta_data': {'availability_zone': 'myaz', | 314 | 'meta_data': {'availability_zone': 'myaz', |
3739 | @@ -339,6 +344,7 @@ class TestDataSource(CiTestCase): | |||
3740 | 339 | 'base64_encoded_keys': [], | 344 | 'base64_encoded_keys': [], |
3741 | 340 | 'sensitive_keys': ['ds/meta_data/some/security-credentials'], | 345 | 'sensitive_keys': ['ds/meta_data/some/security-credentials'], |
3742 | 341 | 'v1': { | 346 | 'v1': { |
3743 | 347 | '_beta_keys': ['subplatform'], | ||
3744 | 342 | 'availability-zone': 'myaz', | 348 | 'availability-zone': 'myaz', |
3745 | 343 | 'availability_zone': 'myaz', | 349 | 'availability_zone': 'myaz', |
3746 | 344 | 'cloud-name': 'subclasscloudname', | 350 | 'cloud-name': 'subclasscloudname', |
3747 | @@ -347,7 +353,10 @@ class TestDataSource(CiTestCase): | |||
3748 | 347 | 'instance_id': 'iid-datasource', | 353 | 'instance_id': 'iid-datasource', |
3749 | 348 | 'local-hostname': 'test-subclass-hostname', | 354 | 'local-hostname': 'test-subclass-hostname', |
3750 | 349 | 'local_hostname': 'test-subclass-hostname', | 355 | 'local_hostname': 'test-subclass-hostname', |
3752 | 350 | 'region': 'myregion'}, | 356 | 'platform': 'mytestsubclass', |
3753 | 357 | 'public_ssh_keys': [], | ||
3754 | 358 | 'region': 'myregion', | ||
3755 | 359 | 'subplatform': 'unknown'}, | ||
3756 | 351 | 'ds': { | 360 | 'ds': { |
3757 | 352 | '_doc': EXPERIMENTAL_TEXT, | 361 | '_doc': EXPERIMENTAL_TEXT, |
3758 | 353 | 'meta_data': { | 362 | 'meta_data': { |
3759 | @@ -599,4 +608,75 @@ class TestRedactSensitiveData(CiTestCase): | |||
3760 | 599 | redact_sensitive_keys(md)) | 608 | redact_sensitive_keys(md)) |
3761 | 600 | 609 | ||
3762 | 601 | 610 | ||
3763 | 611 | class TestCanonicalCloudID(CiTestCase): | ||
3764 | 612 | |||
3765 | 613 | def test_cloud_id_returns_platform_on_unknowns(self): | ||
3766 | 614 | """When region and cloud_name are unknown, return platform.""" | ||
3767 | 615 | self.assertEqual( | ||
3768 | 616 | 'platform', | ||
3769 | 617 | canonical_cloud_id(cloud_name=METADATA_UNKNOWN, | ||
3770 | 618 | region=METADATA_UNKNOWN, | ||
3771 | 619 | platform='platform')) | ||
3772 | 620 | |||
3773 | 621 | def test_cloud_id_returns_platform_on_none(self): | ||
3774 | 622 | """When region and cloud_name are None, return platform.""" | ||
3775 | 623 | self.assertEqual( | ||
3776 | 624 | 'platform', | ||
3777 | 625 | canonical_cloud_id(cloud_name=None, | ||
3778 | 626 | region=None, | ||
3779 | 627 | platform='platform')) | ||
3780 | 628 | |||
3781 | 629 | def test_cloud_id_returns_cloud_name_on_unknown_region(self): | ||
3782 | 630 | """When region is unknown, return cloud_name.""" | ||
3783 | 631 | for region in (None, METADATA_UNKNOWN): | ||
3784 | 632 | self.assertEqual( | ||
3785 | 633 | 'cloudname', | ||
3786 | 634 | canonical_cloud_id(cloud_name='cloudname', | ||
3787 | 635 | region=region, | ||
3788 | 636 | platform='platform')) | ||
3789 | 637 | |||
3790 | 638 | def test_cloud_id_returns_platform_on_unknown_cloud_name(self): | ||
3791 | 639 | """When region is set but cloud_name is unknown, return platform.""" | ||
3792 | 640 | self.assertEqual( | ||
3793 | 641 | 'platform', | ||
3794 | 642 | canonical_cloud_id(cloud_name=METADATA_UNKNOWN, | ||
3795 | 643 | region='region', | ||
3796 | 644 | platform='platform')) | ||
3797 | 645 | |||
3798 | 646 | def test_cloud_id_aws_based_on_region_and_cloud_name(self): | ||
3799 | 647 | """When cloud_name is aws, return proper cloud-id based on region.""" | ||
3800 | 648 | self.assertEqual( | ||
3801 | 649 | 'aws-china', | ||
3802 | 650 | canonical_cloud_id(cloud_name='aws', | ||
3803 | 651 | region='cn-north-1', | ||
3804 | 652 | platform='platform')) | ||
3805 | 653 | self.assertEqual( | ||
3806 | 654 | 'aws', | ||
3807 | 655 | canonical_cloud_id(cloud_name='aws', | ||
3808 | 656 | region='us-east-1', | ||
3809 | 657 | platform='platform')) | ||
3810 | 658 | self.assertEqual( | ||
3811 | 659 | 'aws-gov', | ||
3812 | 660 | canonical_cloud_id(cloud_name='aws', | ||
3813 | 661 | region='us-gov-1', | ||
3814 | 662 | platform='platform')) | ||
3815 | 663 | self.assertEqual( # Overridden non-aws cloud_name is returned | ||
3816 | 664 | '!aws', | ||
3817 | 665 | canonical_cloud_id(cloud_name='!aws', | ||
3818 | 666 | region='us-gov-1', | ||
3819 | 667 | platform='platform')) | ||
3820 | 668 | |||
3821 | 669 | def test_cloud_id_azure_based_on_region_and_cloud_name(self): | ||
3822 | 670 | """Report cloud-id when cloud_name is azure and region is in china.""" | ||
3823 | 671 | self.assertEqual( | ||
3824 | 672 | 'azure-china', | ||
3825 | 673 | canonical_cloud_id(cloud_name='azure', | ||
3826 | 674 | region='chinaeast', | ||
3827 | 675 | platform='platform')) | ||
3828 | 676 | self.assertEqual( | ||
3829 | 677 | 'azure', | ||
3830 | 678 | canonical_cloud_id(cloud_name='azure', | ||
3831 | 679 | region='!chinaeast', | ||
3832 | 680 | platform='platform')) | ||
3833 | 681 | |||
3834 | 602 | # vi: ts=4 expandtab | 682 | # vi: ts=4 expandtab |
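The `TestCanonicalCloudID` cases above pin down a precedence order. A minimal sketch of that logic, assuming the `METADATA_UNKNOWN` sentinel value (this is an illustration of the behavior the tests assert, not the actual `cloudinit.sources` implementation):

```python
METADATA_UNKNOWN = 'unknown'  # assumed sentinel, as in cloudinit.sources


def canonical_cloud_id(cloud_name, region, platform):
    """Sketch of the precedence the tests exercise: platform when
    cloud_name is unknown, cloud_name when region is unknown, and
    region-specific ids for aws and azure."""
    if cloud_name in (None, METADATA_UNKNOWN):
        return platform
    if region in (None, METADATA_UNKNOWN):
        return cloud_name
    if cloud_name == 'aws':
        if region.startswith('cn-'):
            return 'aws-china'
        if region.startswith('us-gov-'):
            return 'aws-gov'
    if cloud_name == 'azure' and region.startswith('china'):
        return 'azure-china'
    return cloud_name


assert canonical_cloud_id('aws', 'cn-north-1', 'platform') == 'aws-china'
assert canonical_cloud_id(None, None, 'platform') == 'platform'
assert canonical_cloud_id('!aws', 'us-gov-1', 'platform') == '!aws'
```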
3835 | diff --git a/cloudinit/sources/tests/test_oracle.py b/cloudinit/sources/tests/test_oracle.py | |||
3836 | index 7599126..97d6294 100644 | |||
3837 | --- a/cloudinit/sources/tests/test_oracle.py | |||
3838 | +++ b/cloudinit/sources/tests/test_oracle.py | |||
3839 | @@ -71,6 +71,14 @@ class TestDataSourceOracle(test_helpers.CiTestCase): | |||
3840 | 71 | self.assertFalse(ds._get_data()) | 71 | self.assertFalse(ds._get_data()) |
3841 | 72 | mocks._is_platform_viable.assert_called_once_with() | 72 | mocks._is_platform_viable.assert_called_once_with() |
3842 | 73 | 73 | ||
3843 | 74 | def test_platform_info(self): | ||
3844 | 75 | """Return platform-related information for Oracle Datasource.""" | ||
3845 | 76 | ds, _mocks = self._get_ds() | ||
3846 | 77 | self.assertEqual('oracle', ds.cloud_name) | ||
3847 | 78 | self.assertEqual('oracle', ds.platform_type) | ||
3848 | 79 | self.assertEqual( | ||
3849 | 80 | 'metadata (http://169.254.169.254/openstack/)', ds.subplatform) | ||
3850 | 81 | |||
3851 | 74 | @mock.patch(DS_PATH + "._is_iscsi_root", return_value=True) | 82 | @mock.patch(DS_PATH + "._is_iscsi_root", return_value=True) |
3852 | 75 | def test_without_userdata(self, m_is_iscsi_root): | 83 | def test_without_userdata(self, m_is_iscsi_root): |
3853 | 76 | """If no user-data is provided, it should not be in return dict.""" | 84 | """If no user-data is provided, it should not be in return dict.""" |
3854 | diff --git a/cloudinit/temp_utils.py b/cloudinit/temp_utils.py | |||
3855 | index c98a1b5..346276e 100644 | |||
3856 | --- a/cloudinit/temp_utils.py | |||
3857 | +++ b/cloudinit/temp_utils.py | |||
3858 | @@ -81,7 +81,7 @@ def ExtendedTemporaryFile(**kwargs): | |||
3859 | 81 | 81 | ||
3860 | 82 | 82 | ||
3861 | 83 | @contextlib.contextmanager | 83 | @contextlib.contextmanager |
3863 | 84 | def tempdir(**kwargs): | 84 | def tempdir(rmtree_ignore_errors=False, **kwargs): |
3864 | 85 | # This seems like it was only added in python 3.2 | 85 | # This seems like it was only added in python 3.2 |
3865 | 86 | # Make it since its useful... | 86 | # Make it since its useful... |
3866 | 87 | # See: http://bugs.python.org/file12970/tempdir.patch | 87 | # See: http://bugs.python.org/file12970/tempdir.patch |
3867 | @@ -89,7 +89,7 @@ def tempdir(**kwargs): | |||
3868 | 89 | try: | 89 | try: |
3869 | 90 | yield tdir | 90 | yield tdir |
3870 | 91 | finally: | 91 | finally: |
3872 | 92 | shutil.rmtree(tdir) | 92 | shutil.rmtree(tdir, ignore_errors=rmtree_ignore_errors) |
3873 | 93 | 93 | ||
3874 | 94 | 94 | ||
3875 | 95 | def mkdtemp(**kwargs): | 95 | def mkdtemp(**kwargs): |
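The `temp_utils.py` change above threads a `rmtree_ignore_errors` flag through to `shutil.rmtree`. A self-contained sketch of the patched context manager, plus the case it fixes (a caller removing the directory before the context exits):

```python
import contextlib
import os
import shutil
import tempfile


@contextlib.contextmanager
def tempdir(rmtree_ignore_errors=False, **kwargs):
    """Yield a temporary directory; optionally suppress cleanup errors
    (for example when the directory was already removed)."""
    tdir = tempfile.mkdtemp(**kwargs)
    try:
        yield tdir
    finally:
        shutil.rmtree(tdir, ignore_errors=rmtree_ignore_errors)


# With the flag set, removing the dir inside the block no longer raises
# at context exit; without it, shutil.rmtree would raise OSError.
with tempdir(rmtree_ignore_errors=True, prefix='example-') as d:
    os.rmdir(d)
```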
3876 | diff --git a/cloudinit/tests/test_dhclient_hook.py b/cloudinit/tests/test_dhclient_hook.py | |||
3877 | 96 | new file mode 100644 | 96 | new file mode 100644 |
3878 | index 0000000..7aab8dd | |||
3879 | --- /dev/null | |||
3880 | +++ b/cloudinit/tests/test_dhclient_hook.py | |||
3881 | @@ -0,0 +1,105 @@ | |||
3882 | 1 | # This file is part of cloud-init. See LICENSE file for license information. | ||
3883 | 2 | |||
3884 | 3 | """Tests for cloudinit.dhclient_hook.""" | ||
3885 | 4 | |||
3886 | 5 | from cloudinit import dhclient_hook as dhc | ||
3887 | 6 | from cloudinit.tests.helpers import CiTestCase, dir2dict, populate_dir | ||
3888 | 7 | |||
3889 | 8 | import argparse | ||
3890 | 9 | import json | ||
3891 | 10 | import mock | ||
3892 | 11 | import os | ||
3893 | 12 | |||
3894 | 13 | |||
3895 | 14 | class TestDhclientHook(CiTestCase): | ||
3896 | 15 | |||
3897 | 16 | ex_env = { | ||
3898 | 17 | 'interface': 'eth0', | ||
3899 | 18 | 'new_dhcp_lease_time': '3600', | ||
3900 | 19 | 'new_host_name': 'x1', | ||
3901 | 20 | 'new_ip_address': '10.145.210.163', | ||
3902 | 21 | 'new_subnet_mask': '255.255.255.0', | ||
3903 | 22 | 'old_host_name': 'x1', | ||
3904 | 23 | 'PATH': '/usr/sbin:/usr/bin:/sbin:/bin', | ||
3905 | 24 | 'pid': '614', | ||
3906 | 25 | 'reason': 'BOUND', | ||
3907 | 26 | } | ||
3908 | 27 | |||
3909 | 28 | # some older versions of dhclient put the same content, | ||
3910 | 29 | # but in upper case with DHCP4_ instead of new_ | ||
3911 | 30 | ex_env_dhcp4 = { | ||
3912 | 31 | 'REASON': 'BOUND', | ||
3913 | 32 | 'DHCP4_dhcp_lease_time': '3600', | ||
3914 | 33 | 'DHCP4_host_name': 'x1', | ||
3915 | 34 | 'DHCP4_ip_address': '10.145.210.163', | ||
3916 | 35 | 'DHCP4_subnet_mask': '255.255.255.0', | ||
3917 | 36 | 'INTERFACE': 'eth0', | ||
3918 | 37 | 'PATH': '/usr/sbin:/usr/bin:/sbin:/bin', | ||
3919 | 38 | 'pid': '614', | ||
3920 | 39 | } | ||
3921 | 40 | |||
3922 | 41 | expected = { | ||
3923 | 42 | 'dhcp_lease_time': '3600', | ||
3924 | 43 | 'host_name': 'x1', | ||
3925 | 44 | 'ip_address': '10.145.210.163', | ||
3926 | 45 | 'subnet_mask': '255.255.255.0'} | ||
3927 | 46 | |||
3928 | 47 | def setUp(self): | ||
3929 | 48 | super(TestDhclientHook, self).setUp() | ||
3930 | 49 | self.tmp = self.tmp_dir() | ||
3931 | 50 | |||
3932 | 51 | def test_handle_args(self): | ||
3933 | 52 | """quick test of call to handle_args.""" | ||
3934 | 53 | nic = 'eth0' | ||
3935 | 54 | args = argparse.Namespace(event=dhc.UP, interface=nic) | ||
3936 | 55 | with mock.patch.dict("os.environ", clear=True, values=self.ex_env): | ||
3937 | 56 | dhc.handle_args(dhc.NAME, args, data_d=self.tmp) | ||
3938 | 57 | found = dir2dict(self.tmp + os.path.sep) | ||
3939 | 58 | self.assertEqual([nic + ".json"], list(found.keys())) | ||
3940 | 59 | self.assertEqual(self.expected, json.loads(found[nic + ".json"])) | ||
3941 | 60 | |||
3942 | 61 | def test_run_hook_up_creates_dir(self): | ||
3943 | 62 | """If dir does not exist, run_hook should create it.""" | ||
3944 | 63 | subd = self.tmp_path("subdir", self.tmp) | ||
3945 | 64 | nic = 'eth1' | ||
3946 | 65 | dhc.run_hook(nic, 'up', data_d=subd, env=self.ex_env) | ||
3947 | 66 | self.assertEqual( | ||
3948 | 67 | set([nic + ".json"]), set(dir2dict(subd + os.path.sep))) | ||
3949 | 68 | |||
3950 | 69 | def test_run_hook_up(self): | ||
3951 | 70 | """Test expected use of run_hook_up.""" | ||
3952 | 71 | nic = 'eth0' | ||
3953 | 72 | dhc.run_hook(nic, 'up', data_d=self.tmp, env=self.ex_env) | ||
3954 | 73 | found = dir2dict(self.tmp + os.path.sep) | ||
3955 | 74 | self.assertEqual([nic + ".json"], list(found.keys())) | ||
3956 | 75 | self.assertEqual(self.expected, json.loads(found[nic + ".json"])) | ||
3957 | 76 | |||
3958 | 77 | def test_run_hook_up_dhcp4_prefix(self): | ||
3959 | 78 | """Test run_hook filters correctly with older DHCP4_ data.""" | ||
3960 | 79 | nic = 'eth0' | ||
3961 | 80 | dhc.run_hook(nic, 'up', data_d=self.tmp, env=self.ex_env_dhcp4) | ||
3962 | 81 | found = dir2dict(self.tmp + os.path.sep) | ||
3963 | 82 | self.assertEqual([nic + ".json"], list(found.keys())) | ||
3964 | 83 | self.assertEqual(self.expected, json.loads(found[nic + ".json"])) | ||
3965 | 84 | |||
3966 | 85 | def test_run_hook_down_deletes(self): | ||
3967 | 86 | """down should delete the created json file.""" | ||
3968 | 87 | nic = 'eth1' | ||
3969 | 88 | populate_dir( | ||
3970 | 89 | self.tmp, {nic + ".json": "{'abcd'}", 'myfile.txt': 'text'}) | ||
3971 | 90 | dhc.run_hook(nic, 'down', data_d=self.tmp, env={'old_host_name': 'x1'}) | ||
3972 | 91 | self.assertEqual( | ||
3973 | 92 | set(['myfile.txt']), | ||
3974 | 93 | set(dir2dict(self.tmp + os.path.sep))) | ||
3975 | 94 | |||
3976 | 95 | def test_get_parser(self): | ||
3977 | 96 | """Smoke test creation of get_parser.""" | ||
3978 | 97 | # cloud-init main uses 'action'. | ||
3979 | 98 | event, interface = (dhc.UP, 'mynic0') | ||
3980 | 99 | self.assertEqual( | ||
3981 | 100 | argparse.Namespace(event=event, interface=interface, | ||
3982 | 101 | action=(dhc.NAME, dhc.handle_args)), | ||
3983 | 102 | dhc.get_parser().parse_args([event, interface])) | ||
3984 | 103 | |||
3985 | 104 | |||
3986 | 105 | # vi: ts=4 expandtab | ||
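The dhclient-hook tests above feed both `new_`-prefixed environment keys and the older upper-case `DHCP4_` variants and expect the same filtered lease dict. A rough sketch of that key filtering (an illustration of the behavior the tests assert, not the actual `cloudinit.dhclient_hook` code):

```python
import json


def filter_lease_env(env):
    """Keep only 'new_'-prefixed keys, plus older 'DHCP4_' keys,
    stripping the prefix and lower-casing the result."""
    result = {}
    for key, val in env.items():
        if key.startswith('new_'):
            result[key[len('new_'):]] = val
        elif key.startswith('DHCP4_'):
            result[key[len('DHCP4_'):].lower()] = val
    return result


env = {'new_ip_address': '10.145.210.163', 'DHCP4_host_name': 'x1',
       'PATH': '/usr/bin', 'reason': 'BOUND', 'old_host_name': 'x1'}
print(json.dumps(filter_lease_env(env), sort_keys=True))
```

Keys without either prefix (`PATH`, `reason`, `pid`, `old_host_name`) are dropped, which is why the expected dict in the tests contains only the four lease fields.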
3987 | diff --git a/cloudinit/tests/test_temp_utils.py b/cloudinit/tests/test_temp_utils.py | |||
3988 | index ffbb92c..4a52ef8 100644 | |||
3989 | --- a/cloudinit/tests/test_temp_utils.py | |||
3990 | +++ b/cloudinit/tests/test_temp_utils.py | |||
3991 | @@ -2,8 +2,9 @@ | |||
3992 | 2 | 2 | ||
3993 | 3 | """Tests for cloudinit.temp_utils""" | 3 | """Tests for cloudinit.temp_utils""" |
3994 | 4 | 4 | ||
3996 | 5 | from cloudinit.temp_utils import mkdtemp, mkstemp | 5 | from cloudinit.temp_utils import mkdtemp, mkstemp, tempdir |
3997 | 6 | from cloudinit.tests.helpers import CiTestCase, wrap_and_call | 6 | from cloudinit.tests.helpers import CiTestCase, wrap_and_call |
3998 | 7 | import os | ||
3999 | 7 | 8 | ||
4000 | 8 | 9 | ||
4001 | 9 | class TestTempUtils(CiTestCase): | 10 | class TestTempUtils(CiTestCase): |
4002 | @@ -98,4 +99,19 @@ class TestTempUtils(CiTestCase): | |||
4003 | 98 | self.assertEqual('/fake/return/path', retval) | 99 | self.assertEqual('/fake/return/path', retval) |
4004 | 99 | self.assertEqual([{'dir': '/run/cloud-init/tmp'}], calls) | 100 | self.assertEqual([{'dir': '/run/cloud-init/tmp'}], calls) |
4005 | 100 | 101 | ||
4006 | 102 | def test_tempdir_error_suppression(self): | ||
4007 | 103 | """test tempdir suppresses errors during directory removal.""" | ||
4008 | 104 | |||
4009 | 105 | with self.assertRaises(OSError): | ||
4010 | 106 | with tempdir(prefix='cloud-init-dhcp-') as tdir: | ||
4011 | 107 | os.rmdir(tdir) | ||
4012 | 108 | # As a result, the directory is already gone, | ||
4013 | 109 | # so shutil.rmtree should raise OSError | ||
4014 | 110 | |||
4015 | 111 | with tempdir(rmtree_ignore_errors=True, | ||
4016 | 112 | prefix='cloud-init-dhcp-') as tdir: | ||
4017 | 113 | os.rmdir(tdir) | ||
4018 | 114 | # Since the directory is already gone, shutil.rmtree would raise | ||
4019 | 115 | # OSError, but we suppress that | ||
4020 | 116 | |||
4021 | 101 | # vi: ts=4 expandtab | 117 | # vi: ts=4 expandtab |
4022 | diff --git a/cloudinit/tests/test_url_helper.py b/cloudinit/tests/test_url_helper.py | |||
4023 | index 113249d..aa9f3ec 100644 | |||
4024 | --- a/cloudinit/tests/test_url_helper.py | |||
4025 | +++ b/cloudinit/tests/test_url_helper.py | |||
4026 | @@ -1,10 +1,12 @@ | |||
4027 | 1 | # This file is part of cloud-init. See LICENSE file for license information. | 1 | # This file is part of cloud-init. See LICENSE file for license information. |
4028 | 2 | 2 | ||
4030 | 3 | from cloudinit.url_helper import oauth_headers, read_file_or_url | 3 | from cloudinit.url_helper import ( |
4031 | 4 | NOT_FOUND, UrlError, oauth_headers, read_file_or_url, retry_on_url_exc) | ||
4032 | 4 | from cloudinit.tests.helpers import CiTestCase, mock, skipIf | 5 | from cloudinit.tests.helpers import CiTestCase, mock, skipIf |
4033 | 5 | from cloudinit import util | 6 | from cloudinit import util |
4034 | 6 | 7 | ||
4035 | 7 | import httpretty | 8 | import httpretty |
4036 | 9 | import requests | ||
4037 | 8 | 10 | ||
4038 | 9 | 11 | ||
4039 | 10 | try: | 12 | try: |
4040 | @@ -64,3 +66,24 @@ class TestReadFileOrUrl(CiTestCase): | |||
4041 | 64 | result = read_file_or_url(url) | 66 | result = read_file_or_url(url) |
4042 | 65 | self.assertEqual(result.contents, data) | 67 | self.assertEqual(result.contents, data) |
4043 | 66 | self.assertEqual(str(result), data.decode('utf-8')) | 68 | self.assertEqual(str(result), data.decode('utf-8')) |
4044 | 69 | |||
4045 | 70 | |||
4046 | 71 | class TestRetryOnUrlExc(CiTestCase): | ||
4047 | 72 | |||
4048 | 73 | def test_do_not_retry_non_urlerror(self): | ||
4049 | 74 | """When exception is not UrlError return False.""" | ||
4050 | 75 | myerror = IOError('something unexpected') | ||
4051 | 76 | self.assertFalse(retry_on_url_exc(msg='', exc=myerror)) | ||
4052 | 77 | |||
4053 | 78 | def test_perform_retries_on_not_found(self): | ||
4054 | 79 | """When exception is UrlError with a 404 status code return True.""" | ||
4055 | 80 | myerror = UrlError(cause=RuntimeError( | ||
4056 | 81 | 'something was not found'), code=NOT_FOUND) | ||
4057 | 82 | self.assertTrue(retry_on_url_exc(msg='', exc=myerror)) | ||
4058 | 83 | |||
4059 | 84 | def test_perform_retries_on_timeout(self): | ||
4060 | 85 | """When exception is a requests.Timeout return True.""" | ||
4061 | 86 | myerror = UrlError(cause=requests.Timeout('something timed out')) | ||
4062 | 87 | self.assertTrue(retry_on_url_exc(msg='', exc=myerror)) | ||
4063 | 88 | |||
4064 | 89 | # vi: ts=4 expandtab | ||
4065 | diff --git a/cloudinit/tests/test_util.py b/cloudinit/tests/test_util.py | |||
4066 | index edb0c18..e3d2dba 100644 | |||
4067 | --- a/cloudinit/tests/test_util.py | |||
4068 | +++ b/cloudinit/tests/test_util.py | |||
4069 | @@ -18,25 +18,51 @@ MOUNT_INFO = [ | |||
4070 | 18 | ] | 18 | ] |
4071 | 19 | 19 | ||
4072 | 20 | OS_RELEASE_SLES = dedent("""\ | 20 | OS_RELEASE_SLES = dedent("""\ |
4079 | 21 | NAME="SLES"\n | 21 | NAME="SLES" |
4080 | 22 | VERSION="12-SP3"\n | 22 | VERSION="12-SP3" |
4081 | 23 | VERSION_ID="12.3"\n | 23 | VERSION_ID="12.3" |
4082 | 24 | PRETTY_NAME="SUSE Linux Enterprise Server 12 SP3"\n | 24 | PRETTY_NAME="SUSE Linux Enterprise Server 12 SP3" |
4083 | 25 | ID="sles"\nANSI_COLOR="0;32"\n | 25 | ID="sles" |
4084 | 26 | CPE_NAME="cpe:/o:suse:sles:12:sp3"\n | 26 | ANSI_COLOR="0;32" |
4085 | 27 | CPE_NAME="cpe:/o:suse:sles:12:sp3" | ||
4086 | 27 | """) | 28 | """) |
4087 | 28 | 29 | ||
4088 | 29 | OS_RELEASE_OPENSUSE = dedent("""\ | 30 | OS_RELEASE_OPENSUSE = dedent("""\ |
4099 | 30 | NAME="openSUSE Leap" | 31 | NAME="openSUSE Leap" |
4100 | 31 | VERSION="42.3" | 32 | VERSION="42.3" |
4101 | 32 | ID=opensuse | 33 | ID=opensuse |
4102 | 33 | ID_LIKE="suse" | 34 | ID_LIKE="suse" |
4103 | 34 | VERSION_ID="42.3" | 35 | VERSION_ID="42.3" |
4104 | 35 | PRETTY_NAME="openSUSE Leap 42.3" | 36 | PRETTY_NAME="openSUSE Leap 42.3" |
4105 | 36 | ANSI_COLOR="0;32" | 37 | ANSI_COLOR="0;32" |
4106 | 37 | CPE_NAME="cpe:/o:opensuse:leap:42.3" | 38 | CPE_NAME="cpe:/o:opensuse:leap:42.3" |
4107 | 38 | BUG_REPORT_URL="https://bugs.opensuse.org" | 39 | BUG_REPORT_URL="https://bugs.opensuse.org" |
4108 | 39 | HOME_URL="https://www.opensuse.org/" | 40 | HOME_URL="https://www.opensuse.org/" |
4109 | 41 | """) | ||
4110 | 42 | |||
4111 | 43 | OS_RELEASE_OPENSUSE_L15 = dedent("""\ | ||
4112 | 44 | NAME="openSUSE Leap" | ||
4113 | 45 | VERSION="15.0" | ||
4114 | 46 | ID="opensuse-leap" | ||
4115 | 47 | ID_LIKE="suse opensuse" | ||
4116 | 48 | VERSION_ID="15.0" | ||
4117 | 49 | PRETTY_NAME="openSUSE Leap 15.0" | ||
4118 | 50 | ANSI_COLOR="0;32" | ||
4119 | 51 | CPE_NAME="cpe:/o:opensuse:leap:15.0" | ||
4120 | 52 | BUG_REPORT_URL="https://bugs.opensuse.org" | ||
4121 | 53 | HOME_URL="https://www.opensuse.org/" | ||
4122 | 54 | """) | ||
4123 | 55 | |||
4124 | 56 | OS_RELEASE_OPENSUSE_TW = dedent("""\ | ||
4125 | 57 | NAME="openSUSE Tumbleweed" | ||
4126 | 58 | ID="opensuse-tumbleweed" | ||
4127 | 59 | ID_LIKE="opensuse suse" | ||
4128 | 60 | VERSION_ID="20180920" | ||
4129 | 61 | PRETTY_NAME="openSUSE Tumbleweed" | ||
4130 | 62 | ANSI_COLOR="0;32" | ||
4131 | 63 | CPE_NAME="cpe:/o:opensuse:tumbleweed:20180920" | ||
4132 | 64 | BUG_REPORT_URL="https://bugs.opensuse.org" | ||
4133 | 65 | HOME_URL="https://www.opensuse.org/" | ||
4134 | 40 | """) | 66 | """) |
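The os-release fixtures above are consumed by `get_linux_distro`, which reads `ID` and `VERSION_ID` out of the file. A simplified sketch of that key/value parsing (an assumption about the shape of the parsing, not the actual `cloudinit.util` code):

```python
def parse_os_release(contents):
    """Parse os-release style KEY=value lines, stripping quotes."""
    data = {}
    for line in contents.splitlines():
        if '=' not in line:
            continue
        key, _, val = line.partition('=')
        data[key] = val.strip().strip('"')
    return data


osr = parse_os_release(
    'NAME="openSUSE Leap"\nID="opensuse-leap"\nVERSION_ID="15.0"\n')
assert (osr['ID'], osr['VERSION_ID']) == ('opensuse-leap', '15.0')
```

This is why the Leap 15 and Tumbleweed tests expect `opensuse-leap` and `opensuse-tumbleweed` rather than the bare `opensuse` of the 42.3 fixture: the distro name comes straight from `ID`.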
4135 | 41 | 67 | ||
4136 | 42 | OS_RELEASE_CENTOS = dedent("""\ | 68 | OS_RELEASE_CENTOS = dedent("""\ |
4137 | @@ -447,12 +473,35 @@ class TestGetLinuxDistro(CiTestCase): | |||
4138 | 447 | 473 | ||
4139 | 448 | @mock.patch('cloudinit.util.load_file') | 474 | @mock.patch('cloudinit.util.load_file') |
4140 | 449 | def test_get_linux_opensuse(self, m_os_release, m_path_exists): | 475 | def test_get_linux_opensuse(self, m_os_release, m_path_exists): |
4142 | 450 | """Verify we get the correct name and machine arch on OpenSUSE.""" | 476 | """Verify we get the correct name and machine arch on openSUSE |
4143 | 477 | prior to openSUSE Leap 15. | ||
4144 | 478 | """ | ||
4145 | 451 | m_os_release.return_value = OS_RELEASE_OPENSUSE | 479 | m_os_release.return_value = OS_RELEASE_OPENSUSE |
4146 | 452 | m_path_exists.side_effect = TestGetLinuxDistro.os_release_exists | 480 | m_path_exists.side_effect = TestGetLinuxDistro.os_release_exists |
4147 | 453 | dist = util.get_linux_distro() | 481 | dist = util.get_linux_distro() |
4148 | 454 | self.assertEqual(('opensuse', '42.3', platform.machine()), dist) | 482 | self.assertEqual(('opensuse', '42.3', platform.machine()), dist) |
4149 | 455 | 483 | ||
4150 | 484 | @mock.patch('cloudinit.util.load_file') | ||
4151 | 485 | def test_get_linux_opensuse_l15(self, m_os_release, m_path_exists): | ||
4152 | 486 | """Verify we get the correct name and machine arch on openSUSE | ||
4153 | 487 | for openSUSE Leap 15.0 and later. | ||
4154 | 488 | """ | ||
4155 | 489 | m_os_release.return_value = OS_RELEASE_OPENSUSE_L15 | ||
4156 | 490 | m_path_exists.side_effect = TestGetLinuxDistro.os_release_exists | ||
4157 | 491 | dist = util.get_linux_distro() | ||
4158 | 492 | self.assertEqual(('opensuse-leap', '15.0', platform.machine()), dist) | ||
4159 | 493 | |||
4160 | 494 | @mock.patch('cloudinit.util.load_file') | ||
4161 | 495 | def test_get_linux_opensuse_tw(self, m_os_release, m_path_exists): | ||
4162 | 496 | """Verify we get the correct name and machine arch on openSUSE | ||
4163 | 497 | for openSUSE Tumbleweed | ||
4164 | 498 | """ | ||
4165 | 499 | m_os_release.return_value = OS_RELEASE_OPENSUSE_TW | ||
4166 | 500 | m_path_exists.side_effect = TestGetLinuxDistro.os_release_exists | ||
4167 | 501 | dist = util.get_linux_distro() | ||
4168 | 502 | self.assertEqual( | ||
4169 | 503 | ('opensuse-tumbleweed', '20180920', platform.machine()), dist) | ||
4170 | 504 | |||
4171 | 456 | @mock.patch('platform.dist') | 505 | @mock.patch('platform.dist') |
4172 | 457 | def test_get_linux_distro_no_data(self, m_platform_dist, m_path_exists): | 506 | def test_get_linux_distro_no_data(self, m_platform_dist, m_path_exists): |
4173 | 458 | """Verify we get no information if os-release does not exist""" | 507 | """Verify we get no information if os-release does not exist""" |
4174 | @@ -478,4 +527,20 @@ class TestGetLinuxDistro(CiTestCase): | |||
4175 | 478 | dist = util.get_linux_distro() | 527 | dist = util.get_linux_distro() |
4176 | 479 | self.assertEqual(('foo', '1.1', 'aarch64'), dist) | 528 | self.assertEqual(('foo', '1.1', 'aarch64'), dist) |
4177 | 480 | 529 | ||
4178 | 530 | |||
4179 | 531 | @mock.patch('os.path.exists') | ||
4180 | 532 | class TestIsLXD(CiTestCase): | ||
4181 | 533 | |||
4182 | 534 | def test_is_lxd_true_on_sock_device(self, m_exists): | ||
4183 | 535 | """When lxd's /dev/lxd/sock exists, is_lxd returns true.""" | ||
4184 | 536 | m_exists.return_value = True | ||
4185 | 537 | self.assertTrue(util.is_lxd()) | ||
4186 | 538 | m_exists.assert_called_once_with('/dev/lxd/sock') | ||
4187 | 539 | |||
4188 | 540 | def test_is_lxd_false_when_sock_device_absent(self, m_exists): | ||
4189 | 541 | """When lxd's /dev/lxd/sock is absent, is_lxd returns false.""" | ||
4190 | 542 | m_exists.return_value = False | ||
4191 | 543 | self.assertFalse(util.is_lxd()) | ||
4192 | 544 | m_exists.assert_called_once_with('/dev/lxd/sock') | ||
4193 | 545 | |||
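The `TestIsLXD` cases above mock a single `os.path.exists` call, which suggests `is_lxd` is essentially a one-line socket check. A sketch under that assumption:

```python
import os

LXD_SOCKET = '/dev/lxd/sock'  # socket the LXD host exposes to its guests


def is_lxd():
    """Report whether we appear to be inside a LXD container, by
    checking for the guest-facing LXD socket."""
    return os.path.exists(LXD_SOCKET)


result = is_lxd()  # True inside a LXD container, False elsewhere
```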
4194 | 481 | # vi: ts=4 expandtab | 546 | # vi: ts=4 expandtab |
4195 | diff --git a/cloudinit/url_helper.py b/cloudinit/url_helper.py | |||
4196 | index 8067979..396d69a 100644 | |||
4197 | --- a/cloudinit/url_helper.py | |||
4198 | +++ b/cloudinit/url_helper.py | |||
4199 | @@ -199,7 +199,7 @@ def _get_ssl_args(url, ssl_details): | |||
4200 | 199 | def readurl(url, data=None, timeout=None, retries=0, sec_between=1, | 199 | def readurl(url, data=None, timeout=None, retries=0, sec_between=1, |
4201 | 200 | headers=None, headers_cb=None, ssl_details=None, | 200 | headers=None, headers_cb=None, ssl_details=None, |
4202 | 201 | check_status=True, allow_redirects=True, exception_cb=None, | 201 | check_status=True, allow_redirects=True, exception_cb=None, |
4204 | 202 | session=None, infinite=False): | 202 | session=None, infinite=False, log_req_resp=True): |
4205 | 203 | url = _cleanurl(url) | 203 | url = _cleanurl(url) |
4206 | 204 | req_args = { | 204 | req_args = { |
4207 | 205 | 'url': url, | 205 | 'url': url, |
4208 | @@ -256,9 +256,11 @@ def readurl(url, data=None, timeout=None, retries=0, sec_between=1, | |||
4209 | 256 | continue | 256 | continue |
4210 | 257 | filtered_req_args[k] = v | 257 | filtered_req_args[k] = v |
4211 | 258 | try: | 258 | try: |
4215 | 259 | LOG.debug("[%s/%s] open '%s' with %s configuration", i, | 259 | |
4216 | 260 | "infinite" if infinite else manual_tries, url, | 260 | if log_req_resp: |
4217 | 261 | filtered_req_args) | 261 | LOG.debug("[%s/%s] open '%s' with %s configuration", i, |
4218 | 262 | "infinite" if infinite else manual_tries, url, | ||
4219 | 263 | filtered_req_args) | ||
4220 | 262 | 264 | ||
4221 | 263 | if session is None: | 265 | if session is None: |
4222 | 264 | session = requests.Session() | 266 | session = requests.Session() |
4223 | @@ -294,8 +296,11 @@ def readurl(url, data=None, timeout=None, retries=0, sec_between=1, | |||
4224 | 294 | break | 296 | break |
4225 | 295 | if (infinite and sec_between > 0) or \ | 297 | if (infinite and sec_between > 0) or \ |
4226 | 296 | (i + 1 < manual_tries and sec_between > 0): | 298 | (i + 1 < manual_tries and sec_between > 0): |
4229 | 297 | LOG.debug("Please wait %s seconds while we wait to try again", | 299 | |
4230 | 298 | sec_between) | 300 | if log_req_resp: |
4231 | 301 | LOG.debug( | ||
4232 | 302 | "Please wait %s seconds while we wait to try again", | ||
4233 | 303 | sec_between) | ||
4234 | 299 | time.sleep(sec_between) | 304 | time.sleep(sec_between) |
4235 | 300 | if excps: | 305 | if excps: |
4236 | 301 | raise excps[-1] | 306 | raise excps[-1] |
4237 | @@ -549,4 +554,18 @@ def oauth_headers(url, consumer_key, token_key, token_secret, consumer_secret, | |||
4238 | 549 | _uri, signed_headers, _body = client.sign(url) | 554 | _uri, signed_headers, _body = client.sign(url) |
4239 | 550 | return signed_headers | 555 | return signed_headers |
4240 | 551 | 556 | ||
4241 | 557 | |||
4242 | 558 | def retry_on_url_exc(msg, exc): | ||
4243 | 559 | """readurl exception_cb that will retry on NOT_FOUND and Timeout. | ||
4244 | 560 | |||
4245 | 561 | Returns False to raise the exception from readurl, True to retry. | ||
4246 | 562 | """ | ||
4247 | 563 | if not isinstance(exc, UrlError): | ||
4248 | 564 | return False | ||
4249 | 565 | if exc.code == NOT_FOUND: | ||
4250 | 566 | return True | ||
4251 | 567 | if exc.cause and isinstance(exc.cause, requests.Timeout): | ||
4252 | 568 | return True | ||
4253 | 569 | return False | ||
4254 | 570 | |||
4255 | 552 | # vi: ts=4 expandtab | 571 | # vi: ts=4 expandtab |
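The new `retry_on_url_exc` helper is an `exception_cb` for `readurl`: it tells the retry loop to keep going on HTTP 404 or on a `requests.Timeout`, and to give up otherwise. A standalone sketch of the same decision logic follows; the `UrlError` class here is a minimal stand-in for `cloudinit.url_helper.UrlError` (the real class carries more attributes), and the `Timeout` fallback only exists so the sketch runs without `requests` installed.

```python
try:
    from requests import Timeout
except ImportError:  # stand-in so the sketch runs without requests
    class Timeout(IOError):
        pass

NOT_FOUND = 404


class UrlError(IOError):
    """Minimal stand-in for cloudinit.url_helper.UrlError."""

    def __init__(self, cause, code=None):
        super().__init__(str(cause))
        self.cause = cause  # the underlying exception, if any
        self.code = code    # HTTP status code, if any


def retry_on_url_exc(msg, exc):
    """readurl exception_cb: True means retry, False means raise."""
    if not isinstance(exc, UrlError):
        return False
    if exc.code == NOT_FOUND:
        return True
    if exc.cause and isinstance(exc.cause, Timeout):
        return True
    return False
```

In cloud-init itself the callback would be passed as `readurl(url, exception_cb=retry_on_url_exc, retries=...)`, so transient 404s and timeouts are retried while other failures surface immediately.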
4256 | diff --git a/cloudinit/util.py b/cloudinit/util.py | |||
4257 | index 5068096..a8a232b 100644 | |||
4258 | --- a/cloudinit/util.py | |||
4259 | +++ b/cloudinit/util.py | |||
4260 | @@ -615,8 +615,8 @@ def get_linux_distro(): | |||
4261 | 615 | distro_name = os_release.get('ID', '') | 615 | distro_name = os_release.get('ID', '') |
4262 | 616 | distro_version = os_release.get('VERSION_ID', '') | 616 | distro_version = os_release.get('VERSION_ID', '') |
4263 | 617 | if 'sles' in distro_name or 'suse' in distro_name: | 617 | if 'sles' in distro_name or 'suse' in distro_name: |
4266 | 618 | # RELEASE_BLOCKER: We will drop this sles ivergent behavior in | 618 | # RELEASE_BLOCKER: We will drop this sles divergent behavior in |
4267 | 619 | # before 18.4 so that get_linux_distro returns a named tuple | 619 | # the future so that get_linux_distro returns a named tuple |
4268 | 620 | # which will include both version codename and architecture | 620 | # which will include both version codename and architecture |
4269 | 621 | # on all distributions. | 621 | # on all distributions. |
4270 | 622 | flavor = platform.machine() | 622 | flavor = platform.machine() |
4271 | @@ -668,7 +668,8 @@ def system_info(): | |||
4272 | 668 | var = 'ubuntu' | 668 | var = 'ubuntu' |
4273 | 669 | elif linux_dist == 'redhat': | 669 | elif linux_dist == 'redhat': |
4274 | 670 | var = 'rhel' | 670 | var = 'rhel' |
4276 | 671 | elif linux_dist in ('opensuse', 'sles'): | 671 | elif linux_dist in ( |
4277 | 672 | 'opensuse', 'opensuse-tumbleweed', 'opensuse-leap', 'sles'): | ||
4278 | 672 | var = 'suse' | 673 | var = 'suse' |
4279 | 673 | else: | 674 | else: |
4280 | 674 | var = 'linux' | 675 | var = 'linux' |
4281 | @@ -2171,6 +2172,11 @@ def is_container(): | |||
4282 | 2171 | return False | 2172 | return False |
4283 | 2172 | 2173 | ||
4284 | 2173 | 2174 | ||
4285 | 2175 | def is_lxd(): | ||
4286 | 2176 | """Check to see if we are running in a lxd container.""" | ||
4287 | 2177 | return os.path.exists('/dev/lxd/sock') | ||
4288 | 2178 | |||
4289 | 2179 | |||
4290 | 2174 | def get_proc_env(pid, encoding='utf-8', errors='replace'): | 2180 | def get_proc_env(pid, encoding='utf-8', errors='replace'): |
4291 | 2175 | """ | 2181 | """ |
4292 | 2176 | Return the environment in a dict that a given process id was started with. | 2182 | Return the environment in a dict that a given process id was started with. |
4293 | @@ -2870,4 +2876,20 @@ def udevadm_settle(exists=None, timeout=None): | |||
4294 | 2870 | return subp(settle_cmd) | 2876 | return subp(settle_cmd) |
4295 | 2871 | 2877 | ||
4296 | 2872 | 2878 | ||
4297 | 2879 | def get_proc_ppid(pid): | ||
4298 | 2880 | """ | ||
4299 | 2881 | Return the parent pid of a process. | ||
4300 | 2882 | """ | ||
4301 | 2883 | ppid = 0 | ||
4302 | 2884 | try: | ||
4303 | 2885 | contents = load_file("/proc/%s/stat" % pid, quiet=True) | ||
4304 | 2886 | except IOError as e: | ||
4305 | 2887 | LOG.warning('Failed to load /proc/%s/stat. %s', pid, e) | ||
4306 | 2888 | if contents: | ||
4307 | 2889 | parts = contents.split(" ", 4) | ||
4308 | 2890 | # man proc says | ||
4309 | 2891 | # ppid %d (4) The PID of the parent. | ||
4310 | 2892 | ppid = int(parts[3]) | ||
4311 | 2893 | return ppid | ||
4312 | 2894 | |||
4313 | 2873 | # vi: ts=4 expandtab | 2895 | # vi: ts=4 expandtab |
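The new `util.get_proc_ppid` reads `/proc/<pid>/stat` and takes the fourth space-separated field, which `man 5 proc` documents as the parent PID. A small sketch of just the parsing step, written against the stat line contents so it can be exercised without a live `/proc` (the `parse_ppid` name is illustrative, not part of cloud-init):

```python
def parse_ppid(stat_contents):
    """Extract the parent PID from /proc/<pid>/stat contents.

    Per man 5 proc the leading fields are: pid, comm, state, ppid.
    This mirrors the split-on-space approach used by get_proc_ppid;
    note it assumes the comm field contains no spaces.
    """
    parts = stat_contents.split(" ", 4)
    return int(parts[3])
```

For example, `parse_ppid("1234 (bash) S 1233 1234 1234 0")` yields `1233`.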
4314 | diff --git a/cloudinit/version.py b/cloudinit/version.py | |||
4315 | index 844a02e..a2c5d43 100644 | |||
4316 | --- a/cloudinit/version.py | |||
4317 | +++ b/cloudinit/version.py | |||
4318 | @@ -4,7 +4,7 @@ | |||
4319 | 4 | # | 4 | # |
4320 | 5 | # This file is part of cloud-init. See LICENSE file for license information. | 5 | # This file is part of cloud-init. See LICENSE file for license information. |
4321 | 6 | 6 | ||
4323 | 7 | __VERSION__ = "18.4" | 7 | __VERSION__ = "18.5" |
4324 | 8 | _PACKAGED_VERSION = '@@PACKAGED_VERSION@@' | 8 | _PACKAGED_VERSION = '@@PACKAGED_VERSION@@' |
4325 | 9 | 9 | ||
4326 | 10 | FEATURES = [ | 10 | FEATURES = [ |
4327 | diff --git a/config/cloud.cfg.tmpl b/config/cloud.cfg.tmpl | |||
4328 | index 1fef133..7513176 100644 | |||
4329 | --- a/config/cloud.cfg.tmpl | |||
4330 | +++ b/config/cloud.cfg.tmpl | |||
4331 | @@ -167,7 +167,17 @@ system_info: | |||
4332 | 167 | - http://%(availability_zone)s.clouds.archive.ubuntu.com/ubuntu/ | 167 | - http://%(availability_zone)s.clouds.archive.ubuntu.com/ubuntu/ |
4333 | 168 | - http://%(region)s.clouds.archive.ubuntu.com/ubuntu/ | 168 | - http://%(region)s.clouds.archive.ubuntu.com/ubuntu/ |
4334 | 169 | security: [] | 169 | security: [] |
4336 | 170 | - arches: [armhf, armel, default] | 170 | - arches: [arm64, armel, armhf] |
4337 | 171 | failsafe: | ||
4338 | 172 | primary: http://ports.ubuntu.com/ubuntu-ports | ||
4339 | 173 | security: http://ports.ubuntu.com/ubuntu-ports | ||
4340 | 174 | search: | ||
4341 | 175 | primary: | ||
4342 | 176 | - http://%(ec2_region)s.ec2.ports.ubuntu.com/ubuntu-ports/ | ||
4343 | 177 | - http://%(availability_zone)s.clouds.ports.ubuntu.com/ubuntu-ports/ | ||
4344 | 178 | - http://%(region)s.clouds.ports.ubuntu.com/ubuntu-ports/ | ||
4345 | 179 | security: [] | ||
4346 | 180 | - arches: [default] | ||
4347 | 171 | failsafe: | 181 | failsafe: |
4348 | 172 | primary: http://ports.ubuntu.com/ubuntu-ports | 182 | primary: http://ports.ubuntu.com/ubuntu-ports |
4349 | 173 | security: http://ports.ubuntu.com/ubuntu-ports | 183 | security: http://ports.ubuntu.com/ubuntu-ports |
4350 | diff --git a/debian/changelog b/debian/changelog | |||
4351 | index 2bb9520..e611ee7 100644 | |||
4352 | --- a/debian/changelog | |||
4353 | +++ b/debian/changelog | |||
4354 | @@ -1,3 +1,78 @@ | |||
4355 | 1 | cloud-init (18.5-17-gd1a2fe73-0ubuntu1~18.04.1) bionic; urgency=medium | ||
4356 | 2 | |||
4357 | 3 | * New upstream snapshot. (LP: #1813346) | ||
4358 | 4 | - opennebula: exclude EPOCHREALTIME as known bash env variable with a delta | ||
4359 | 5 | - tox: fix disco httpretty dependencies for py37 | ||
4360 | 6 | - run-container: uncomment baseurl in yum.repos.d/*.repo when using a | ||
4361 | 7 | proxy [Paride Legovini] | ||
4362 | 8 | - lxd: install zfs-linux instead of zfs meta package [Johnson Shi] | ||
4363 | 9 | - net/sysconfig: do not write a resolv.conf file with only the header. | ||
4364 | 10 | [Robert Schweikert] | ||
4365 | 11 | - net: Make sysconfig renderer compatible with Network Manager. | ||
4366 | 12 | [Eduardo Otubo] | ||
4367 | 13 | - cc_set_passwords: Fix regex when parsing hashed passwords | ||
4368 | 14 | [Marlin Cremers] | ||
4369 | 15 | - net: Wait for dhclient to daemonize before reading lease file | ||
4370 | 16 | [Jason Zions] | ||
4371 | 17 | - [Azure] Increase retries when talking to Wireserver during metadata walk | ||
4372 | 18 | [Jason Zions] | ||
4373 | 19 | - Add documentation on adding a datasource. | ||
4374 | 20 | - doc: clean up some datasource documentation. | ||
4375 | 21 | - ds-identify: fix wrong variable name in ovf_vmware_transport_guestinfo. | ||
4376 | 22 | - Scaleway: Support ssh keys provided inside an instance tag. [PORTE Loïc] | ||
4377 | 23 | - OVF: simplify expected return values of transport functions. | ||
4378 | 24 | - Vmware: Add support for the com.vmware.guestInfo OVF transport. | ||
4379 | 25 | - HACKING.rst: change contact info to Josh Powers | ||
4380 | 26 | - Update to pylint 2.2.2. | ||
4381 | 27 | - Release 18.5 | ||
4382 | 28 | - tests: add Disco release [Joshua Powers] | ||
4383 | 29 | - net: render 'metric' values in per-subnet routes | ||
4384 | 30 | - write_files: add support for appending to files. [James Baxter] | ||
4385 | 31 | - config: On ubuntu select cloud archive mirrors for armel, armhf, arm64. | ||
4386 | 32 | - dhclient-hook: cleanups, tests and fix a bug on 'down' event. | ||
4387 | 33 | - NoCloud: Allow top level 'network' key in network-config. | ||
4388 | 34 | - ovf: Fix ovf network config generation gateway/routes | ||
4389 | 35 | - azure: detect vnet migration via netlink media change event | ||
4390 | 36 | [Tamilmani Manoharan] | ||
4391 | 37 | - Azure: fix copy/paste error in error handling when reading azure ovf. | ||
4392 | 38 | [Adam DePue] | ||
4393 | 39 | - tests: fix incorrect order of mocks in test_handle_zfs_root. | ||
4394 | 40 | - doc: Change dns_nameserver property to dns_nameservers. [Tomer Cohen] | ||
4395 | 41 | - OVF: identify label iso9660 filesystems with label 'OVF ENV'. | ||
4396 | 42 | - logs: collect-logs ignore instance-data-sensitive.json on non-root user | ||
4397 | 43 | - net: Ephemeral*Network: add connectivity check via URL | ||
4398 | 44 | - azure: _poll_imds only retry on 404. Fail on Timeout | ||
4399 | 45 | - resizefs: Prefix discovered devpath with '/dev/' when path does not | ||
4400 | 46 | exist [Igor Galić] | ||
4401 | 47 | - azure: retry imds polling on requests.Timeout | ||
4402 | 48 | - azure: Accept variation in error msg from mount for ntfs volumes | ||
4403 | 49 | [Jason Zions] | ||
4404 | 50 | - azure: fix regression introduced when persisting ephemeral dhcp lease | ||
4405 | 51 | [Aswin Rajamannar] | ||
4406 | 52 | - azure: add udev rules to create cloud-init Gen2 disk name symlinks | ||
4407 | 53 | - tests: ec2 mock missing httpretty user-data and instance-identity routes | ||
4408 | 54 | - azure: remove /etc/netplan/90-hotplug-azure.yaml when net from IMDS | ||
4409 | 55 | - azure: report ready to fabric after reprovision and reduce logging | ||
4410 | 56 | [Aswin Rajamannar] | ||
4411 | 57 | - query: better error when missing read permission on instance-data | ||
4412 | 58 | - instance-data: fallback to instance-data.json if sensitive is absent. | ||
4413 | 59 | - docs: remove colon from network v1 config example. [Tomer Cohen] | ||
4414 | 60 | - Add cloud-id binary to packages for SUSE [Jason Zions] | ||
4415 | 61 | - systemd: On SUSE ensure cloud-init.service runs before wicked | ||
4416 | 62 | [Robert Schweikert] | ||
4417 | 63 | - update detection of openSUSE variants [Robert Schweikert] | ||
4418 | 64 | - azure: Add apply_network_config option to disable network from IMDS | ||
4419 | 65 | - Correct spelling in an error message (udevadm). [Katie McLaughlin] | ||
4420 | 66 | - tests: meta_data key changed to meta-data in ec2 instance-data.json | ||
4421 | 67 | - tests: fix kvm integration test to assert flexible config-disk path | ||
4422 | 68 | - tools: Add cloud-id command line utility | ||
4423 | 69 | - instance-data: Add standard keys platform and subplatform. Refactor ec2. | ||
4424 | 70 | - net: ignore nics that have "zero" mac address. | ||
4425 | 71 | - tests: fix apt_configure_primary to be more flexible | ||
4426 | 72 | - Ubuntu: update sources.list to comment out deb-src entries. | ||
4427 | 73 | |||
4428 | 74 | -- Chad Smith <chad.smith@canonical.com> Sat, 26 Jan 2019 08:42:04 -0700 | ||
4429 | 75 | |||
4430 | 1 | cloud-init (18.4-0ubuntu1~18.04.1) bionic-proposed; urgency=medium | 76 | cloud-init (18.4-0ubuntu1~18.04.1) bionic-proposed; urgency=medium |
4431 | 2 | 77 | ||
4432 | 3 | * drop the following cherry-picks now included: | 78 | * drop the following cherry-picks now included: |
4433 | diff --git a/doc/rtd/topics/datasources.rst b/doc/rtd/topics/datasources.rst | |||
4434 | index e34f145..648c606 100644 | |||
4435 | --- a/doc/rtd/topics/datasources.rst | |||
4436 | +++ b/doc/rtd/topics/datasources.rst | |||
4437 | @@ -18,7 +18,7 @@ single way to access the different cloud systems methods to provide this data | |||
4438 | 18 | through the typical usage of subclasses. | 18 | through the typical usage of subclasses. |
4439 | 19 | 19 | ||
4440 | 20 | Any metadata processed by cloud-init's datasources is persisted as | 20 | Any metadata processed by cloud-init's datasources is persisted as |
4442 | 21 | ``/run/cloud0-init/instance-data.json``. Cloud-init provides tooling | 21 | ``/run/cloud-init/instance-data.json``. Cloud-init provides tooling |
4443 | 22 | to quickly introspect some of that data. See :ref:`instance_metadata` for | 22 | to quickly introspect some of that data. See :ref:`instance_metadata` for |
4444 | 23 | more information. | 23 | more information. |
4445 | 24 | 24 | ||
4446 | @@ -80,6 +80,65 @@ The current interface that a datasource object must provide is the following: | |||
4447 | 80 | def get_package_mirror_info(self) | 80 | def get_package_mirror_info(self) |
4448 | 81 | 81 | ||
4449 | 82 | 82 | ||
4450 | 83 | Adding a new Datasource | ||
4451 | 84 | ----------------------- | ||
4452 | 85 | The datasource objects have a few touch points with cloud-init. If you | ||
4453 | 86 | are interested in adding a new datasource for your cloud platform you'll | ||
4454 | 87 | need to take care of the following items: | ||
4455 | 88 | |||
4456 | 89 | * **Identify a mechanism for positive identification of the platform**: | ||
4457 | 90 | It is good practice for a cloud platform to positively identify itself | ||
4458 | 91 | to the guest. This allows the guest to make educated decisions based | ||
4459 | 92 | on the platform on which it is running. On the x86 and arm64 architectures, | ||
4460 | 93 | many clouds identify themselves through DMI data. For example, | ||
4461 | 94 | Oracle's public cloud provides the string 'OracleCloud.com' in the | ||
4462 | 95 | DMI chassis-asset field. | ||
4463 | 96 | |||
4464 | 97 | cloud-init enabled images produce a log file with details about the | ||
4465 | 98 | platform. Reading through this log in ``/run/cloud-init/ds-identify.log`` | ||
4466 | 99 | may provide the information needed to uniquely identify the platform. | ||
4467 | 100 | If the log is not present, you can generate it by running from source | ||
4468 | 101 | ``./tools/ds-identify`` or the installed location | ||
4469 | 102 | ``/usr/lib/cloud-init/ds-identify``. | ||
4470 | 103 | |||
4471 | 104 | The mechanism used to identify the platform will be required for the | ||
4472 | 105 | ds-identify and datasource module sections below. | ||
4473 | 106 | |||
4474 | 107 | * **Add datasource module ``cloudinit/sources/DataSource<CloudPlatform>.py``**: | ||
4475 | 108 | It is suggested that you start by copying one of the simpler datasources | ||
4476 | 109 | such as DataSourceHetzner. | ||
4477 | 110 | |||
4478 | 111 | * **Add tests for datasource module**: | ||
4479 | 112 | Add a new file with some tests for the module to | ||
4480 | 113 | ``cloudinit/sources/test_<yourplatform>.py``. For example see | ||
4481 | 114 | ``cloudinit/sources/tests/test_oracle.py`` | ||
4482 | 115 | |||
4483 | 116 | * **Update ds-identify**: In systemd systems, ds-identify is used to detect | ||
4484 | 117 | which datasource should be enabled or if cloud-init should run at all. | ||
4485 | 118 | You'll need to make changes to ``tools/ds-identify``. | ||
4486 | 119 | |||
4487 | 120 | * **Add tests for ds-identify**: Add relevant tests in a new class to | ||
4488 | 121 | ``tests/unittests/test_ds_identify.py``. You can use ``TestOracle`` as an | ||
4489 | 122 | example. | ||
4490 | 123 | |||
4491 | 124 | * **Add your datasource name to the builtin list of datasources:** Add | ||
4492 | 125 | your datasource module name to the end of the ``datasource_list`` | ||
4493 | 126 | entry in ``cloudinit/settings.py``. | ||
4494 | 127 | |||
4495 | 128 | * **Add your cloud platform to apport collection prompts:** Update the | ||
4496 | 129 | list of cloud platforms in ``cloudinit/apport.py``. This list will be | ||
4497 | 130 | provided to the user who invokes ``ubuntu-bug cloud-init``. | ||
4498 | 131 | |||
4499 | 132 | * **Enable datasource by default in ubuntu packaging branches:** | ||
4500 | 133 | Ubuntu packaging branches contain a template file | ||
4501 | 134 | ``debian/cloud-init.templates`` that ultimately sets the default | ||
4502 | 135 | datasource_list when installed via package. This file needs updating when | ||
4503 | 136 | the commit gets into a package. | ||
4504 | 137 | |||
4505 | 138 | * **Add documentation for your datasource**: You should add a new | ||
4506 | 139 | file in ``doc/datasources/<cloudplatform>.rst`` | ||
4507 | 140 | |||
4508 | 141 | |||
4509 | 83 | Datasource Documentation | 142 | Datasource Documentation |
4510 | 84 | ======================== | 143 | ======================== |
4511 | 85 | The following is a list of the implemented datasources. | 144 | The following is a list of the implemented datasources. |
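The checklist in the new "Adding a new Datasource" section above can be sketched as a minimal module skeleton. Everything in this sketch is illustrative: the stub `DataSource` class stands in for `cloudinit.sources.DataSource`, and `DataSourceMyCloud` / `_read_my_cloud_metadata` are hypothetical names, not part of cloud-init.

```python
class DataSource:
    """Stub standing in for cloudinit.sources.DataSource."""
    dsname = None

    def __init__(self, sys_cfg=None, distro=None, paths=None):
        self.metadata = {}
        self.userdata_raw = None


class DataSourceMyCloud(DataSource):
    # dsname is what gets listed in settings.py's datasource_list.
    dsname = 'MyCloud'  # hypothetical name

    def _get_data(self):
        # Positively identify the platform first (e.g. via DMI data),
        # then crawl metadata; return False when not on this platform.
        meta = _read_my_cloud_metadata()
        if meta is None:
            return False
        self.metadata = meta
        self.userdata_raw = meta.get('user-data')
        return True


def _read_my_cloud_metadata():
    # Hypothetical crawler; a real datasource would query its
    # metadata service or config drive here.
    return {'instance-id': 'i-0000', 'user-data': None}
```

A real implementation would subclass the actual base class and also carry the ds-identify, apport, and documentation updates described in the steps above.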
4512 | diff --git a/doc/rtd/topics/datasources/azure.rst b/doc/rtd/topics/datasources/azure.rst | |||
4513 | index 559011e..720a475 100644 | |||
4514 | --- a/doc/rtd/topics/datasources/azure.rst | |||
4515 | +++ b/doc/rtd/topics/datasources/azure.rst | |||
4516 | @@ -23,18 +23,18 @@ information in json format to /run/cloud-init/dhclient.hook/<interface>.json. | |||
4517 | 23 | In order for cloud-init to leverage this method to find the endpoint, the | 23 | In order for cloud-init to leverage this method to find the endpoint, the |
4518 | 24 | cloud.cfg file must contain: | 24 | cloud.cfg file must contain: |
4519 | 25 | 25 | ||
4524 | 26 | datasource: | 26 | .. sourcecode:: yaml |
4525 | 27 | Azure: | 27 | |
4526 | 28 | set_hostname: False | 28 | datasource: |
4527 | 29 | agent_command: __builtin__ | 29 | Azure: |
4528 | 30 | set_hostname: False | ||
4529 | 31 | agent_command: __builtin__ | ||
4530 | 30 | 32 | ||
4531 | 31 | If those files are not available, the fallback is to check the leases file | 33 | If those files are not available, the fallback is to check the leases file |
4532 | 32 | for the endpoint server (again option 245). | 34 | for the endpoint server (again option 245). |
4533 | 33 | 35 | ||
4534 | 34 | You can define the path to the lease file with the 'dhclient_lease_file' | 36 | You can define the path to the lease file with the 'dhclient_lease_file' |
4538 | 35 | configuration. The default value is /var/lib/dhcp/dhclient.eth0.leases. | 37 | configuration. |
4536 | 36 | |||
4537 | 37 | dhclient_lease_file: /var/lib/dhcp/dhclient.eth0.leases | ||
4539 | 38 | 38 | ||
4540 | 39 | walinuxagent | 39 | walinuxagent |
4541 | 40 | ------------ | 40 | ------------ |
4542 | @@ -57,6 +57,64 @@ in order to use waagent.conf with cloud-init, the following settings are recomme | |||
4543 | 57 | ResourceDisk.MountPoint=/mnt | 57 | ResourceDisk.MountPoint=/mnt |
4544 | 58 | 58 | ||
4545 | 59 | 59 | ||
4546 | 60 | Configuration | ||
4547 | 61 | ------------- | ||
4548 | 62 | The following configuration can be set for the datasource in system | ||
4549 | 63 | configuration (in ``/etc/cloud/cloud.cfg`` or ``/etc/cloud/cloud.cfg.d/``). | ||
4550 | 64 | |||
4551 | 65 | The settings that may be configured are: | ||
4552 | 66 | |||
4553 | 67 | * **agent_command**: Either __builtin__ (default) or a command to run to get | ||
4554 | 68 | metadata. If __builtin__, get metadata from walinuxagent. Otherwise run the | ||
4555 | 69 | provided command to obtain metadata. | ||
4556 | 70 | * **apply_network_config**: Boolean set to True to use network configuration | ||
4557 | 71 | described by Azure's IMDS endpoint instead of fallback network config of | ||
4558 | 72 | dhcp on eth0. Default is True. For Ubuntu 16.04 or earlier, default is False. | ||
4559 | 73 | * **data_dir**: Path used to read metadata files and write crawled data. | ||
4560 | 74 | * **dhclient_lease_file**: The fallback lease file to source when looking for | ||
4561 | 75 | custom DHCP option 245 from Azure fabric. | ||
4562 | 76 | * **disk_aliases**: A dictionary defining which device paths should be | ||
4563 | 77 | interpreted as ephemeral images. See cc_disk_setup module for more info. | ||
4564 | 78 | * **hostname_bounce**: A dictionary Azure hostname bounce behavior to react to | ||
4565 | 79 | metadata changes. The '``hostname_bounce: command``' entry can be either | ||
4566 | 80 | the literal string 'builtin' or a command to execute. The command will be | ||
4567 | 81 | invoked after the hostname is set, and will have the 'interface' in its | ||
4568 | 82 | environment. If ``set_hostname`` is not true, then ``hostname_bounce`` | ||
4569 | 83 | will be ignored. An example might be: | ||
4570 | 84 | |||
4571 | 85 | ``command: ["sh", "-c", "killall dhclient; dhclient $interface"]`` | ||
4572 | 86 | |||
4573 | 87 | * **hostname_bounce**: A dictionary Azure hostname bounce behavior to react to | ||
4574 | 88 | metadata changes. Azure will throttle ifup/down in some cases after metadata | ||
4575 | 89 | has been updated to inform dhcp server about updated hostnames. | ||
4576 | 90 | * **set_hostname**: Boolean set to True when we want Azure to set the hostname | ||
4577 | 91 | based on metadata. | ||
4578 | 92 | |||
4579 | 93 | Configuration for the datasource can also be read from a | ||
4580 | 94 | ``dscfg`` entry in the ``LinuxProvisioningConfigurationSet``. Content in | ||
4581 | 95 | dscfg node is expected to be base64 encoded yaml content, and it will be | ||
4582 | 96 | merged into the 'datasource: Azure' entry. | ||
4583 | 97 | |||
4584 | 98 | An example configuration with the default values is provided below: | ||
4585 | 99 | |||
4586 | 100 | .. sourcecode:: yaml | ||
4587 | 101 | |||
4588 | 102 | datasource: | ||
4589 | 103 | Azure: | ||
4590 | 104 | agent_command: __builtin__ | ||
4591 | 105 | apply_network_config: true | ||
4592 | 106 | data_dir: /var/lib/waagent | ||
4593 | 107 | dhclient_lease_file: /var/lib/dhcp/dhclient.eth0.leases | ||
4594 | 108 | disk_aliases: | ||
4595 | 109 | ephemeral0: /dev/disk/cloud/azure_resource | ||
4596 | 110 | hostname_bounce: | ||
4597 | 111 | interface: eth0 | ||
4598 | 112 | command: builtin | ||
4599 | 113 | policy: true | ||
4600 | 114 | hostname_command: hostname | ||
4601 | 115 | set_hostname: true | ||
4602 | 116 | |||
4603 | 117 | |||
4604 | 60 | Userdata | 118 | Userdata |
4605 | 61 | -------- | 119 | -------- |
4606 | 62 | Userdata is provided to cloud-init inside the ovf-env.xml file. Cloud-init | 120 | Userdata is provided to cloud-init inside the ovf-env.xml file. Cloud-init |
4607 | @@ -97,37 +155,6 @@ Example: | |||
4608 | 97 | </LinuxProvisioningConfigurationSet> | 155 | </LinuxProvisioningConfigurationSet> |
4609 | 98 | </wa:ProvisioningSection> | 156 | </wa:ProvisioningSection> |
4610 | 99 | 157 | ||
4611 | 100 | Configuration | ||
4612 | 101 | ------------- | ||
4613 | 102 | Configuration for the datasource can be read from the system config's or set | ||
4614 | 103 | via the `dscfg` entry in the `LinuxProvisioningConfigurationSet`. Content in | ||
4615 | 104 | dscfg node is expected to be base64 encoded yaml content, and it will be | ||
4616 | 105 | merged into the 'datasource: Azure' entry. | ||
4617 | 106 | |||
4618 | 107 | The '``hostname_bounce: command``' entry can be either the literal string | ||
4619 | 108 | 'builtin' or a command to execute. The command will be invoked after the | ||
4620 | 109 | hostname is set, and will have the 'interface' in its environment. If | ||
4621 | 110 | ``set_hostname`` is not true, then ``hostname_bounce`` will be ignored. | ||
4622 | 111 | |||
4623 | 112 | An example might be: | ||
4624 | 113 | command: ["sh", "-c", "killall dhclient; dhclient $interface"] | ||
4625 | 114 | |||
4626 | 115 | .. code:: yaml | ||
4627 | 116 | |||
4628 | 117 | datasource: | ||
4629 | 118 | agent_command | ||
4630 | 119 | Azure: | ||
4631 | 120 | agent_command: [service, walinuxagent, start] | ||
4632 | 121 | set_hostname: True | ||
4633 | 122 | hostname_bounce: | ||
4634 | 123 | # the name of the interface to bounce | ||
4635 | 124 | interface: eth0 | ||
4636 | 125 | # policy can be 'on', 'off' or 'force' | ||
4637 | 126 | policy: on | ||
4638 | 127 | # the method 'bounce' command. | ||
4639 | 128 | command: "builtin" | ||
4640 | 129 | hostname_command: "hostname" | ||
4641 | 130 | |||
4642 | 131 | hostname | 158 | hostname |
4643 | 132 | -------- | 159 | -------- |
4644 | 133 | When the user launches an instance, they provide a hostname for that instance. | 160 | When the user launches an instance, they provide a hostname for that instance. |
4645 | diff --git a/doc/rtd/topics/instancedata.rst b/doc/rtd/topics/instancedata.rst | |||
4646 | index 634e180..5d2dc94 100644 | |||
4647 | --- a/doc/rtd/topics/instancedata.rst | |||
4648 | +++ b/doc/rtd/topics/instancedata.rst | |||
4649 | @@ -90,24 +90,46 @@ There are three basic top-level keys: | |||
4650 | 90 | 90 | ||
4651 | 91 | The standardized keys present: | 91 | The standardized keys present: |
4652 | 92 | 92 | ||
4671 | 93 | +----------------------+-----------------------------------------------+---------------------------+ | 93 | +----------------------+-----------------------------------------------+-----------------------------------+ |
4672 | 94 | | Key path | Description | Examples | | 94 | | Key path | Description | Examples | |
4673 | 95 | +======================+===============================================+===========================+ | 95 | +======================+===============================================+===================================+ |
4674 | 96 | | v1.cloud_name | The name of the cloud provided by metadata | aws, openstack, azure, | | 96 | | v1._beta_keys | List of standardized keys still in 'beta'. | [subplatform] | |
4675 | 97 | | | key 'cloud-name' or the cloud-init datasource | configdrive, nocloud, | | 97 | | | The format, intent or presence of these keys | | |
4676 | 98 | | | name which was discovered. | ovf, etc. | | 98 | | | can change. Do not consider them | | |
4677 | 99 | +----------------------+-----------------------------------------------+---------------------------+ | 99 | | | production-ready. | | |
4678 | 100 | | v1.instance_id | Unique instance_id allocated by the cloud | i-<somehash> | | 100 | +----------------------+-----------------------------------------------+-----------------------------------+ |
4679 | 101 | +----------------------+-----------------------------------------------+---------------------------+ | 101 | | v1.cloud_name | Where possible this will indicate the 'name' | aws, openstack, azure, | |
4680 | 102 | | v1.local_hostname | The internal or local hostname of the system | ip-10-41-41-70, | | 102 | | | of the cloud this system is running on. This | configdrive, nocloud, | |
4681 | 103 | | | | <user-provided-hostname> | | 103 | | | is specifically different than the 'platform' | ovf, etc. | |
4682 | 104 | +----------------------+-----------------------------------------------+---------------------------+ | 104 | | | below. As an example, the name of Amazon Web | | |
4683 | 105 | | v1.region | The physical region/datacenter in which the | us-east-2 | | 105 | | | Services is 'aws' while the platform is 'ec2'.| | |
4684 | 106 | | | instance is deployed | | | 106 | | | | | |
4685 | 107 | +----------------------+-----------------------------------------------+---------------------------+ | 107 | | | If no specific name is determinable or | | |
4686 | 108 | | v1.availability_zone | The physical availability zone in which the | us-east-2b, nova, null | | 108 | | | provided in meta-data, then this field may | | |
4687 | 109 | | | instance is deployed | | | 109 | | | contain the same content as 'platform'. | | |
4688 | 110 | +----------------------+-----------------------------------------------+---------------------------+ | 110 | +----------------------+-----------------------------------------------+-----------------------------------+ |
4689 | 111 | | v1.instance_id | Unique instance_id allocated by the cloud | i-<somehash> | | ||
4690 | 112 | +----------------------+-----------------------------------------------+-----------------------------------+ | ||
4691 | 113 | | v1.local_hostname | The internal or local hostname of the system | ip-10-41-41-70, | | ||
4692 | 114 | | | | <user-provided-hostname> | | ||
4693 | 115 | +----------------------+-----------------------------------------------+-----------------------------------+ | ||
4694 | 116 | | v1.platform | An attempt to identify the cloud platform | ec2, openstack, lxd, gce | | ||
4695 | 117 | | | instance that the system is running on. | nocloud, ovf | | ||
4696 | 118 | +----------------------+-----------------------------------------------+-----------------------------------+ | ||
4697 | 119 | | v1.subplatform | Additional platform details describing the | metadata (http://168.254.169.254),| | ||
4698 | 120 | | | specific source or type of metadata used. | seed-dir (/path/to/seed-dir/), | | ||
4699 | 121 | | | The format of subplatform will be: | config-disk (/dev/cd0), | | ||
4700 | 122 | | | <subplatform_type> (<url_file_or_dev_path>) | configdrive (/dev/sr0) | | ||
4701 | 123 | +----------------------+-----------------------------------------------+-----------------------------------+ | ||
4702 | 124 | | v1.public_ssh_keys | A list of ssh keys provided to the instance | ['ssh-rsa AA...', ...] | | ||
4703 | 125 | | | by the datasource metadata. | | | ||
4704 | 126 | +----------------------+-----------------------------------------------+-----------------------------------+ | ||
4705 | 127 | | v1.region | The physical region/datacenter in which the | us-east-2 | | ||
4706 | 128 | | | instance is deployed | | | ||
4707 | 129 | +----------------------+-----------------------------------------------+-----------------------------------+ | ||
4708 | 130 | | v1.availability_zone | The physical availability zone in which the | us-east-2b, nova, null | | ||
4709 | 131 | | | instance is deployed | | | ||
++----------------------+-----------------------------------------------+-----------------------------------+
 
 
 Below is an example of ``/run/cloud-init/instance_data.json`` on an EC2
@@ -117,10 +139,75 @@ instance:
 
  {
   "base64_encoded_keys": [],
-  "sensitive_keys": [],
   "ds": {
-   "meta_data": {
-    "ami-id": "ami-014e1416b628b0cbf",
+   "_doc": "EXPERIMENTAL: The structure and format of content scoped under the 'ds' key may change in subsequent releases of cloud-init.",
+   "_metadata_api_version": "2016-09-02",
+   "dynamic": {
+    "instance-identity": {
+     "document": {
+      "accountId": "437526006925",
+      "architecture": "x86_64",
+      "availabilityZone": "us-east-2b",
+      "billingProducts": null,
+      "devpayProductCodes": null,
+      "imageId": "ami-079638aae7046bdd2",
+      "instanceId": "i-075f088c72ad3271c",
+      "instanceType": "t2.micro",
+      "kernelId": null,
+      "marketplaceProductCodes": null,
+      "pendingTime": "2018-10-05T20:10:43Z",
+      "privateIp": "10.41.41.95",
+      "ramdiskId": null,
+      "region": "us-east-2",
+      "version": "2017-09-30"
+     },
+     "pkcs7": [
+      "MIAGCSqGSIb3DQEHAqCAMIACAQExCzAJBgUrDgMCGgUAMIAGCSqGSIb3DQEHAaCAJIAEggHbewog",
+      "ICJkZXZwYXlQcm9kdWN0Q29kZXMiIDogbnVsbCwKICAibWFya2V0cGxhY2VQcm9kdWN0Q29kZXMi",
+      "IDogbnVsbCwKICAicHJpdmF0ZUlwIiA6ICIxMC40MS40MS45NSIsCiAgInZlcnNpb24iIDogIjIw",
+      "MTctMDktMzAiLAogICJpbnN0YW5jZUlkIiA6ICJpLTA3NWYwODhjNzJhZDMyNzFjIiwKICAiYmls",
+      "bGluZ1Byb2R1Y3RzIiA6IG51bGwsCiAgImluc3RhbmNlVHlwZSIgOiAidDIubWljcm8iLAogICJh",
+      "Y2NvdW50SWQiIDogIjQzNzUyNjAwNjkyNSIsCiAgImF2YWlsYWJpbGl0eVpvbmUiIDogInVzLWVh",
+      "c3QtMmIiLAogICJrZXJuZWxJZCIgOiBudWxsLAogICJyYW1kaXNrSWQiIDogbnVsbCwKICAiYXJj",
+      "aGl0ZWN0dXJlIiA6ICJ4ODZfNjQiLAogICJpbWFnZUlkIiA6ICJhbWktMDc5NjM4YWFlNzA0NmJk",
+      "ZDIiLAogICJwZW5kaW5nVGltZSIgOiAiMjAxOC0xMC0wNVQyMDoxMDo0M1oiLAogICJyZWdpb24i",
+      "IDogInVzLWVhc3QtMiIKfQAAAAAAADGCARcwggETAgEBMGkwXDELMAkGA1UEBhMCVVMxGTAXBgNV",
+      "BAgTEFdhc2hpbmd0b24gU3RhdGUxEDAOBgNVBAcTB1NlYXR0bGUxIDAeBgNVBAoTF0FtYXpvbiBX",
+      "ZWIgU2VydmljZXMgTExDAgkAlrpI2eVeGmcwCQYFKw4DAhoFAKBdMBgGCSqGSIb3DQEJAzELBgkq",
+      "hkiG9w0BBwEwHAYJKoZIhvcNAQkFMQ8XDTE4MTAwNTIwMTA0OFowIwYJKoZIhvcNAQkEMRYEFK0k",
+      "Tz6n1A8/zU1AzFj0riNQORw2MAkGByqGSM44BAMELjAsAhRNrr174y98grPBVXUforN/6wZp8AIU",
+      "JLZBkrB2GJA8A4WJ1okq++jSrBIAAAAAAAA="
+     ],
+     "rsa2048": [
+      "MIAGCSqGSIb3DQEHAqCAMIACAQExDzANBglghkgBZQMEAgEFADCABgkqhkiG9w0BBwGggCSABIIB",
+      "23sKICAiZGV2cGF5UHJvZHVjdENvZGVzIiA6IG51bGwsCiAgIm1hcmtldHBsYWNlUHJvZHVjdENv",
+      "ZGVzIiA6IG51bGwsCiAgInByaXZhdGVJcCIgOiAiMTAuNDEuNDEuOTUiLAogICJ2ZXJzaW9uIiA6",
+      "ICIyMDE3LTA5LTMwIiwKICAiaW5zdGFuY2VJZCIgOiAiaS0wNzVmMDg4YzcyYWQzMjcxYyIsCiAg",
+      "ImJpbGxpbmdQcm9kdWN0cyIgOiBudWxsLAogICJpbnN0YW5jZVR5cGUiIDogInQyLm1pY3JvIiwK",
+      "ICAiYWNjb3VudElkIiA6ICI0Mzc1MjYwMDY5MjUiLAogICJhdmFpbGFiaWxpdHlab25lIiA6ICJ1",
+      "cy1lYXN0LTJiIiwKICAia2VybmVsSWQiIDogbnVsbCwKICAicmFtZGlza0lkIiA6IG51bGwsCiAg",
+      "ImFyY2hpdGVjdHVyZSIgOiAieDg2XzY0IiwKICAiaW1hZ2VJZCIgOiAiYW1pLTA3OTYzOGFhZTcw",
+      "NDZiZGQyIiwKICAicGVuZGluZ1RpbWUiIDogIjIwMTgtMTAtMDVUMjA6MTA6NDNaIiwKICAicmVn",
+      "aW9uIiA6ICJ1cy1lYXN0LTIiCn0AAAAAAAAxggH/MIIB+wIBATBpMFwxCzAJBgNVBAYTAlVTMRkw",
+      "FwYDVQQIExBXYXNoaW5ndG9uIFN0YXRlMRAwDgYDVQQHEwdTZWF0dGxlMSAwHgYDVQQKExdBbWF6",
+      "b24gV2ViIFNlcnZpY2VzIExMQwIJAM07oeX4xevdMA0GCWCGSAFlAwQCAQUAoGkwGAYJKoZIhvcN",
+      "AQkDMQsGCSqGSIb3DQEHATAcBgkqhkiG9w0BCQUxDxcNMTgxMDA1MjAxMDQ4WjAvBgkqhkiG9w0B",
+      "CQQxIgQgkYz0pZk3zJKBi4KP4egeOKJl/UYwu5UdE7id74pmPwMwDQYJKoZIhvcNAQEBBQAEggEA",
+      "dC3uIGGNul1OC1mJKSH3XoBWsYH20J/xhIdftYBoXHGf2BSFsrs9ZscXd2rKAKea4pSPOZEYMXgz",
+      "lPuT7W0WU89N3ZKviy/ReMSRjmI/jJmsY1lea6mlgcsJXreBXFMYucZvyeWGHdnCjamoKWXkmZlM",
+      "mSB1gshWy8Y7DzoKviYPQZi5aI54XK2Upt4kGme1tH1NI2Cq+hM4K+adxTbNhS3uzvWaWzMklUuU",
+      "QHX2GMmjAVRVc8vnA8IAsBCJJp+gFgYzi09IK+cwNgCFFPADoG6jbMHHf4sLB3MUGpiA+G9JlCnM",
+      "fmkjI2pNRB8spc0k4UG4egqLrqCz67WuK38tjwAAAAAAAA=="
+     ],
+     "signature": [
+      "Tsw6h+V3WnxrNVSXBYIOs1V4j95YR1mLPPH45XnhX0/Ei3waJqf7/7EEKGYP1Cr4PTYEULtZ7Mvf",
+      "+xJpM50Ivs2bdF7o0c4vnplRWe3f06NI9pv50dr110j/wNzP4MZ1pLhJCqubQOaaBTF3LFutgRrt",
+      "r4B0mN3p7EcqD8G+ll0="
+     ]
+    }
+   },
+   "meta-data": {
+    "ami-id": "ami-079638aae7046bdd2",
     "ami-launch-index": "0",
     "ami-manifest-path": "(unknown)",
     "block-device-mapping": {
@@ -129,31 +216,31 @@ instance:
      "ephemeral1": "sdc",
      "root": "/dev/sda1"
     },
-    "hostname": "ip-10-41-41-70.us-east-2.compute.internal",
+    "hostname": "ip-10-41-41-95.us-east-2.compute.internal",
     "instance-action": "none",
-    "instance-id": "i-04fa31cfc55aa7976",
+    "instance-id": "i-075f088c72ad3271c",
     "instance-type": "t2.micro",
-    "local-hostname": "ip-10-41-41-70.us-east-2.compute.internal",
-    "local-ipv4": "10.41.41.70",
-    "mac": "06:b6:92:dd:9d:24",
+    "local-hostname": "ip-10-41-41-95.us-east-2.compute.internal",
+    "local-ipv4": "10.41.41.95",
+    "mac": "06:74:8f:39:cd:a6",
     "metrics": {
      "vhostmd": "<?xml version=\"1.0\" encoding=\"UTF-8\"?>"
     },
     "network": {
      "interfaces": {
       "macs": {
-       "06:b6:92:dd:9d:24": {
+       "06:74:8f:39:cd:a6": {
         "device-number": "0",
-        "interface-id": "eni-08c0c9fdb99b6e6f4",
+        "interface-id": "eni-052058bbd7831eaae",
         "ipv4-associations": {
-         "18.224.22.43": "10.41.41.70"
+         "18.218.221.122": "10.41.41.95"
         },
-        "local-hostname": "ip-10-41-41-70.us-east-2.compute.internal",
-        "local-ipv4s": "10.41.41.70",
-        "mac": "06:b6:92:dd:9d:24",
+        "local-hostname": "ip-10-41-41-95.us-east-2.compute.internal",
+        "local-ipv4s": "10.41.41.95",
+        "mac": "06:74:8f:39:cd:a6",
         "owner-id": "437526006925",
-        "public-hostname": "ec2-18-224-22-43.us-east-2.compute.amazonaws.com",
-        "public-ipv4s": "18.224.22.43",
+        "public-hostname": "ec2-18-218-221-122.us-east-2.compute.amazonaws.com",
+        "public-ipv4s": "18.218.221.122",
         "security-group-ids": "sg-828247e9",
         "security-groups": "Cloud-init integration test secgroup",
         "subnet-id": "subnet-282f3053",
@@ -171,16 +258,14 @@ instance:
      "availability-zone": "us-east-2b"
     },
     "profile": "default-hvm",
-    "public-hostname": "ec2-18-224-22-43.us-east-2.compute.amazonaws.com",
-    "public-ipv4": "18.224.22.43",
+    "public-hostname": "ec2-18-218-221-122.us-east-2.compute.amazonaws.com",
+    "public-ipv4": "18.218.221.122",
     "public-keys": {
      "cloud-init-integration": [
-      "ssh-rsa
-      AAAAB3NzaC1yc2EAAAADAQABAAABAQDSL7uWGj8cgWyIOaspgKdVy0cKJ+UTjfv7jBOjG2H/GN8bJVXy72XAvnhM0dUM+CCs8FOf0YlPX+Frvz2hKInrmRhZVwRSL129PasD12MlI3l44u6IwS1o/W86Q+tkQYEljtqDOo0a+cOsaZkvUNzUyEXUwz/lmYa6G4hMKZH4NBj7nbAAF96wsMCoyNwbWryBnDYUr6wMbjRR1J9Pw7Xh7WRC73wy4Va2YuOgbD3V/5ZrFPLbWZW/7TFXVrql04QVbyei4aiFR5n//GvoqwQDNe58LmbzX/xvxyKJYdny2zXmdAhMxbrpFQsfpkJ9E/H5w0yOdSvnWbUoG5xNGoOB
-      cloud-init-integration"
+      "ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABAQDSL7uWGj8cgWyIOaspgKdVy0cKJ+UTjfv7jBOjG2H/GN8bJVXy72XAvnhM0dUM+CCs8FOf0YlPX+Frvz2hKInrmRhZVwRSL129PasD12MlI3l44u6IwS1o/W86Q+tkQYEljtqDOo0a+cOsaZkvUNzUyEXUwz/lmYa6G4hMKZH4NBj7nbAAF96wsMCoyNwbWryBnDYUr6wMbjRR1J9Pw7Xh7WRC73wy4Va2YuOgbD3V/5ZrFPLbWZW/7TFXVrql04QVbyei4aiFR5n//GvoqwQDNe58LmbzX/xvxyKJYdny2zXmdAhMxbrpFQsfpkJ9E/H5w0yOdSvnWbUoG5xNGoOB cloud-init-integration"
      ]
     },
-    "reservation-id": "r-06ab75e9346f54333",
+    "reservation-id": "r-0594a20e31f6cfe46",
     "security-groups": "Cloud-init integration test secgroup",
     "services": {
      "domain": "amazonaws.com",
@@ -188,16 +273,22 @@ instance:
     }
    }
   },
+  "sensitive_keys": [],
   "v1": {
+   "_beta_keys": [
+    "subplatform"
+   ],
    "availability-zone": "us-east-2b",
    "availability_zone": "us-east-2b",
-   "cloud-name": "aws",
    "cloud_name": "aws",
-   "instance-id": "i-04fa31cfc55aa7976",
-   "instance_id": "i-04fa31cfc55aa7976",
-   "local-hostname": "ip-10-41-41-70",
-   "local_hostname": "ip-10-41-41-70",
-   "region": "us-east-2"
+   "instance_id": "i-075f088c72ad3271c",
+   "local_hostname": "ip-10-41-41-95",
+   "platform": "ec2",
+   "public_ssh_keys": [
+    "ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABAQDSL7uWGj8cgWyIOaspgKdVy0cKJ+UTjfv7jBOjG2H/GN8bJVXy72XAvnhM0dUM+CCs8FOf0YlPX+Frvz2hKInrmRhZVwRSL129PasD12MlI3l44u6IwS1o/W86Q+tkQYEljtqDOo0a+cOsaZkvUNzUyEXUwz/lmYa6G4hMKZH4NBj7nbAAF96wsMCoyNwbWryBnDYUr6wMbjRR1J9Pw7Xh7WRC73wy4Va2YuOgbD3V/5ZrFPLbWZW/7TFXVrql04QVbyei4aiFR5n//GvoqwQDNe58LmbzX/xvxyKJYdny2zXmdAhMxbrpFQsfpkJ9E/H5w0yOdSvnWbUoG5xNGoOB cloud-init-integration"
+   ],
+   "region": "us-east-2",
+   "subplatform": "metadata (http://169.254.169.254)"
   }
  }
 
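The standardized ``v1`` keys shown in the instance-data example above are what consumers such as the new ``cloud-id`` subcommand added by this branch are meant to read. A minimal sketch (illustrative only, not the branch's actual ``cloudinit.cmd.cloud_id`` code; the helper name is made up) of pulling ``cloud_name`` out of that JSON:

```python
# Sketch, not part of this branch: extract the standardized "v1" cloud name
# from instance_data.json. A trimmed sample is inlined so this is self-contained.
import json

sample = """
{
  "v1": {
    "cloud_name": "aws",
    "instance_id": "i-075f088c72ad3271c",
    "platform": "ec2",
    "region": "us-east-2",
    "subplatform": "metadata (http://169.254.169.254)"
  }
}
"""

def cloud_id_from_instance_data(raw_json):
    """Return the cloud name a cloud-id-style tool would report."""
    data = json.loads(raw_json)
    v1 = data.get("v1", {})
    # Fall back to the platform key when cloud_name is absent.
    return v1.get("cloud_name") or v1.get("platform", "unknown")

print(cloud_id_from_instance_data(sample))  # aws
```

On a live instance the same dict would come from reading ``/run/cloud-init/instance_data.json`` instead of the inlined sample.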
diff --git a/doc/rtd/topics/network-config-format-v1.rst b/doc/rtd/topics/network-config-format-v1.rst
index 3b0148c..9723d68 100644
--- a/doc/rtd/topics/network-config-format-v1.rst
+++ b/doc/rtd/topics/network-config-format-v1.rst
@@ -384,7 +384,7 @@ Valid keys for ``subnets`` include the following:
 - ``address``: IPv4 or IPv6 address. It may include CIDR netmask notation.
 - ``netmask``: IPv4 subnet mask in dotted format or CIDR notation.
 - ``gateway``: IPv4 address of the default gateway for this subnet.
-- ``dns_nameserver``: Specify a list of IPv4 dns server IPs to end up in
+- ``dns_nameservers``: Specify a list of IPv4 dns server IPs to end up in
   resolv.conf.
 - ``dns_search``: Specify a list of search paths to be included in
   resolv.conf.
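The doc fix above corrects the key name to ``dns_nameservers`` (plural). For illustration (addresses and interface name are made up), a v1 network config using the corrected key might look like:

```yaml
network:
  version: 1
  config:
    - type: physical
      name: eth0
      subnets:
        - type: static
          address: 192.168.1.10/24
          gateway: 192.168.1.1
          dns_nameservers:
            - 8.8.8.8
            - 8.8.4.4
          dns_search:
            - example.internal
```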
diff --git a/packages/redhat/cloud-init.spec.in b/packages/redhat/cloud-init.spec.in
index a3a6d1e..6b2022b 100644
--- a/packages/redhat/cloud-init.spec.in
+++ b/packages/redhat/cloud-init.spec.in
@@ -191,6 +191,7 @@ fi
 
 # Program binaries
 %{_bindir}/cloud-init*
+%{_bindir}/cloud-id*
 
 # Docs
 %doc LICENSE ChangeLog TODO.rst requirements.txt
diff --git a/packages/suse/cloud-init.spec.in b/packages/suse/cloud-init.spec.in
index e781d74..26894b3 100644
--- a/packages/suse/cloud-init.spec.in
+++ b/packages/suse/cloud-init.spec.in
@@ -93,6 +93,7 @@ version_pys=$(cd "%{buildroot}" && find . -name version.py -type f)
 
 # Program binaries
 %{_bindir}/cloud-init*
+%{_bindir}/cloud-id*
 
 # systemd files
 /usr/lib/systemd/system-generators/*
diff --git a/setup.py b/setup.py
index 5ed8eae..ea37efc 100755
--- a/setup.py
+++ b/setup.py
@@ -282,7 +282,8 @@ setuptools.setup(
     cmdclass=cmdclass,
     entry_points={
         'console_scripts': [
-            'cloud-init = cloudinit.cmd.main:main'
+            'cloud-init = cloudinit.cmd.main:main',
+            'cloud-id = cloudinit.cmd.cloud_id:main'
         ],
     }
 )
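The setup.py hunk above registers ``cloud-id`` as a console-script entry point; setuptools generates the wrapper binary that calls the named ``main``. A hypothetical sketch of what such an entry-point module could look like (names, flags, and the default path are illustrative assumptions, not the branch's actual ``cloudinit/cmd/cloud_id.py``):

```python
# Hypothetical entry-point module shaped like
# 'cloud-id = cloudinit.cmd.cloud_id:main'. Illustrative only.
import argparse
import json
import sys

def main(argv=None):
    parser = argparse.ArgumentParser(
        prog="cloud-id", description="Print the detected cloud name.")
    parser.add_argument(
        "--instance-data", default="/run/cloud-init/instance-data.json",
        help="Path to the instance-data JSON file (illustrative default).")
    args = parser.parse_args(argv)
    try:
        with open(args.instance_data) as stream:
            data = json.load(stream)
    except (IOError, OSError, ValueError) as e:
        sys.stderr.write("Error: %s\n" % e)
        return 1
    # Report the standardized v1 cloud name.
    print(data.get("v1", {}).get("cloud_name", "unknown"))
    return 0
```

A console script generated from this entry point simply runs ``sys.exit(main())``.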
diff --git a/systemd/cloud-init.service.tmpl b/systemd/cloud-init.service.tmpl
index b92e8ab..5cb0037 100644
--- a/systemd/cloud-init.service.tmpl
+++ b/systemd/cloud-init.service.tmpl
@@ -14,8 +14,7 @@ After=networking.service
 After=network.service
 {% endif %}
 {% if variant in ["suse"] %}
-Requires=wicked.service
-After=wicked.service
+Before=wicked.service
 # setting hostname via hostnamectl depends on dbus, which otherwise
 # would not be guaranteed at this point.
 After=dbus.service
diff --git a/templates/sources.list.ubuntu.tmpl b/templates/sources.list.ubuntu.tmpl
index d879972..edb92f1 100644
--- a/templates/sources.list.ubuntu.tmpl
+++ b/templates/sources.list.ubuntu.tmpl
@@ -10,30 +10,30 @@
 # See http://help.ubuntu.com/community/UpgradeNotes for how to upgrade to
 # newer versions of the distribution.
 deb {{mirror}} {{codename}} main restricted
-deb-src {{mirror}} {{codename}} main restricted
+# deb-src {{mirror}} {{codename}} main restricted
 
 ## Major bug fix updates produced after the final release of the
 ## distribution.
 deb {{mirror}} {{codename}}-updates main restricted
-deb-src {{mirror}} {{codename}}-updates main restricted
+# deb-src {{mirror}} {{codename}}-updates main restricted
 
 ## N.B. software from this repository is ENTIRELY UNSUPPORTED by the Ubuntu
 ## team. Also, please note that software in universe WILL NOT receive any
 ## review or updates from the Ubuntu security team.
 deb {{mirror}} {{codename}} universe
-deb-src {{mirror}} {{codename}} universe
+# deb-src {{mirror}} {{codename}} universe
 deb {{mirror}} {{codename}}-updates universe
-deb-src {{mirror}} {{codename}}-updates universe
+# deb-src {{mirror}} {{codename}}-updates universe
 
 ## N.B. software from this repository is ENTIRELY UNSUPPORTED by the Ubuntu
 ## team, and may not be under a free licence. Please satisfy yourself as to
 ## your rights to use the software. Also, please note that software in
 ## multiverse WILL NOT receive any review or updates from the Ubuntu
 ## security team.
 deb {{mirror}} {{codename}} multiverse
-deb-src {{mirror}} {{codename}} multiverse
+# deb-src {{mirror}} {{codename}} multiverse
 deb {{mirror}} {{codename}}-updates multiverse
-deb-src {{mirror}} {{codename}}-updates multiverse
+# deb-src {{mirror}} {{codename}}-updates multiverse
 
 ## N.B. software from this repository may not have been tested as
 ## extensively as that contained in the main release, although it includes
@@ -41,14 +41,7 @@ deb-src {{mirror}} {{codename}}-updates multiverse
 ## Also, please note that software in backports WILL NOT receive any review
 ## or updates from the Ubuntu security team.
 deb {{mirror}} {{codename}}-backports main restricted universe multiverse
The diff has been truncated for viewing.
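The sources.list template above is filled in by substituting ``{{mirror}}`` and ``{{codename}}``. Cloud-init renders these with its own templater; the sketch below mimics only the basic ``{{var}}`` substitution with a plain regex, purely for illustration:

```python
# Illustrative only: a minimal {{var}} substitution, not cloud-init's
# actual template renderer.
import re

def render(template, params):
    """Replace {{name}} placeholders with values from params."""
    return re.sub(
        r"\{\{\s*(\w+)\s*\}\}",
        lambda m: str(params[m.group(1)]),
        template)

line = "deb {{mirror}} {{codename}} main restricted"
print(render(line, {"mirror": "http://archive.ubuntu.com/ubuntu",
                    "codename": "bionic"}))
# deb http://archive.ubuntu.com/ubuntu bionic main restricted
```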
PASSED: Continuous integration, rev:4a1aaa18595fa663e1a38dd3ac8f73231ec69a7f
https://jenkins.ubuntu.com/server/job/cloud-init-ci/543/
Executed test runs:
SUCCESS: Checkout
SUCCESS: Unit & Style Tests
SUCCESS: Ubuntu LTS: Build
SUCCESS: Ubuntu LTS: Integration
IN_PROGRESS: Declarative: Post Actions
Click here to trigger a rebuild:
https://jenkins.ubuntu.com/server/job/cloud-init-ci/543/rebuild