Merge ~chad.smith/cloud-init:ubuntu/bionic into cloud-init:ubuntu/bionic

Proposed by Chad Smith
Status: Merged
Merged at revision: 4a1aaa18595fa663e1a38dd3ac8f73231ec69a7f
Proposed branch: ~chad.smith/cloud-init:ubuntu/bionic
Merge into: cloud-init:ubuntu/bionic
Diff against target: 7789 lines (+3838/-768)
98 files modified
ChangeLog (+54/-0)
HACKING.rst (+2/-2)
bash_completion/cloud-init (+4/-1)
cloudinit/cmd/cloud_id.py (+90/-0)
cloudinit/cmd/devel/logs.py (+23/-8)
cloudinit/cmd/devel/net_convert.py (+10/-5)
cloudinit/cmd/devel/render.py (+24/-11)
cloudinit/cmd/devel/tests/test_logs.py (+37/-6)
cloudinit/cmd/devel/tests/test_render.py (+44/-1)
cloudinit/cmd/main.py (+4/-16)
cloudinit/cmd/query.py (+24/-12)
cloudinit/cmd/tests/test_cloud_id.py (+127/-0)
cloudinit/cmd/tests/test_query.py (+71/-5)
cloudinit/config/cc_disk_setup.py (+1/-1)
cloudinit/config/cc_lxd.py (+1/-1)
cloudinit/config/cc_resizefs.py (+7/-0)
cloudinit/config/cc_set_passwords.py (+1/-1)
cloudinit/config/cc_write_files.py (+6/-1)
cloudinit/config/tests/test_set_passwords.py (+40/-0)
cloudinit/dhclient_hook.py (+72/-38)
cloudinit/handlers/jinja_template.py (+9/-1)
cloudinit/net/__init__.py (+38/-4)
cloudinit/net/dhcp.py (+76/-25)
cloudinit/net/eni.py (+15/-14)
cloudinit/net/netplan.py (+3/-3)
cloudinit/net/sysconfig.py (+61/-5)
cloudinit/net/tests/test_dhcp.py (+47/-4)
cloudinit/net/tests/test_init.py (+51/-1)
cloudinit/sources/DataSourceAliYun.py (+5/-15)
cloudinit/sources/DataSourceAltCloud.py (+22/-11)
cloudinit/sources/DataSourceAzure.py (+82/-31)
cloudinit/sources/DataSourceBigstep.py (+4/-0)
cloudinit/sources/DataSourceCloudSigma.py (+5/-1)
cloudinit/sources/DataSourceConfigDrive.py (+12/-0)
cloudinit/sources/DataSourceEc2.py (+59/-56)
cloudinit/sources/DataSourceIBMCloud.py (+4/-0)
cloudinit/sources/DataSourceMAAS.py (+4/-0)
cloudinit/sources/DataSourceNoCloud.py (+52/-1)
cloudinit/sources/DataSourceNone.py (+4/-0)
cloudinit/sources/DataSourceOVF.py (+36/-26)
cloudinit/sources/DataSourceOpenNebula.py (+9/-1)
cloudinit/sources/DataSourceOracle.py (+4/-0)
cloudinit/sources/DataSourceScaleway.py (+10/-1)
cloudinit/sources/DataSourceSmartOS.py (+3/-0)
cloudinit/sources/__init__.py (+104/-21)
cloudinit/sources/helpers/netlink.py (+250/-0)
cloudinit/sources/helpers/tests/test_netlink.py (+373/-0)
cloudinit/sources/helpers/vmware/imc/config_nic.py (+2/-3)
cloudinit/sources/tests/test_init.py (+83/-3)
cloudinit/sources/tests/test_oracle.py (+8/-0)
cloudinit/temp_utils.py (+2/-2)
cloudinit/tests/test_dhclient_hook.py (+105/-0)
cloudinit/tests/test_temp_utils.py (+17/-1)
cloudinit/tests/test_url_helper.py (+24/-1)
cloudinit/tests/test_util.py (+82/-17)
cloudinit/url_helper.py (+25/-6)
cloudinit/util.py (+25/-3)
cloudinit/version.py (+1/-1)
config/cloud.cfg.tmpl (+11/-1)
debian/changelog (+75/-0)
doc/rtd/topics/datasources.rst (+60/-1)
doc/rtd/topics/datasources/azure.rst (+65/-38)
doc/rtd/topics/instancedata.rst (+137/-46)
doc/rtd/topics/network-config-format-v1.rst (+1/-1)
packages/redhat/cloud-init.spec.in (+1/-0)
packages/suse/cloud-init.spec.in (+1/-0)
setup.py (+2/-1)
systemd/cloud-init.service.tmpl (+1/-2)
templates/sources.list.ubuntu.tmpl (+17/-17)
tests/cloud_tests/releases.yaml (+16/-0)
tests/cloud_tests/testcases/base.py (+15/-3)
tests/cloud_tests/testcases/modules/apt_configure_primary.py (+9/-5)
tests/cloud_tests/testcases/modules/apt_configure_primary.yaml (+0/-7)
tests/unittests/test_builtin_handlers.py (+25/-0)
tests/unittests/test_cli.py (+8/-8)
tests/unittests/test_datasource/test_aliyun.py (+4/-0)
tests/unittests/test_datasource/test_altcloud.py (+67/-51)
tests/unittests/test_datasource/test_azure.py (+262/-79)
tests/unittests/test_datasource/test_cloudsigma.py (+6/-0)
tests/unittests/test_datasource/test_configdrive.py (+3/-0)
tests/unittests/test_datasource/test_ec2.py (+37/-23)
tests/unittests/test_datasource/test_ibmcloud.py (+39/-1)
tests/unittests/test_datasource/test_nocloud.py (+98/-41)
tests/unittests/test_datasource/test_opennebula.py (+4/-0)
tests/unittests/test_datasource/test_ovf.py (+119/-39)
tests/unittests/test_datasource/test_scaleway.py (+72/-4)
tests/unittests/test_datasource/test_smartos.py (+7/-0)
tests/unittests/test_ds_identify.py (+16/-1)
tests/unittests/test_handler/test_handler_lxd.py (+1/-1)
tests/unittests/test_handler/test_handler_resizefs.py (+42/-10)
tests/unittests/test_handler/test_handler_write_files.py (+12/-0)
tests/unittests/test_net.py (+137/-6)
tests/unittests/test_util.py (+6/-0)
tests/unittests/test_vmware_config_file.py (+52/-6)
tools/ds-identify (+32/-6)
tools/run-container (+1/-0)
tox.ini (+2/-2)
udev/66-azure-ephemeral.rules (+17/-1)
Reviewer Review Type Date Requested Status
Server Team CI bot continuous-integration Approve
cloud-init Commiters Pending
Review via email: mp+362281@code.launchpad.net

Commit message

sync new upstream snapshot for release into bionic via SRU

Revision history for this message
Server Team CI bot (server-team-bot) wrote :

PASSED: Continuous integration, rev:4a1aaa18595fa663e1a38dd3ac8f73231ec69a7f
https://jenkins.ubuntu.com/server/job/cloud-init-ci/543/
Executed test runs:
    SUCCESS: Checkout
    SUCCESS: Unit & Style Tests
    SUCCESS: Ubuntu LTS: Build
    SUCCESS: Ubuntu LTS: Integration
    IN_PROGRESS: Declarative: Post Actions

Click here to trigger a rebuild:
https://jenkins.ubuntu.com/server/job/cloud-init-ci/543/rebuild

review: Approve (continuous-integration)

Preview Diff

diff --git a/ChangeLog b/ChangeLog
index 9c043b0..8fa6fdd 100644
--- a/ChangeLog
+++ b/ChangeLog
@@ -1,3 +1,57 @@
+18.5:
+ - tests: add Disco release [Joshua Powers]
+ - net: render 'metric' values in per-subnet routes (LP: #1805871)
+ - write_files: add support for appending to files. [James Baxter]
+ - config: On ubuntu select cloud archive mirrors for armel, armhf, arm64.
+   (LP: #1805854)
+ - dhclient-hook: cleanups, tests and fix a bug on 'down' event.
+ - NoCloud: Allow top level 'network' key in network-config. (LP: #1798117)
+ - ovf: Fix ovf network config generation gateway/routes (LP: #1806103)
+ - azure: detect vnet migration via netlink media change event
+   [Tamilmani Manoharan]
+ - Azure: fix copy/paste error in error handling when reading azure ovf.
+   [Adam DePue]
+ - tests: fix incorrect order of mocks in test_handle_zfs_root.
+ - doc: Change dns_nameserver property to dns_nameservers. [Tomer Cohen]
+ - OVF: identify label iso9660 filesystems with label 'OVF ENV'.
+ - logs: collect-logs ignore instance-data-sensitive.json on non-root user
+   (LP: #1805201)
+ - net: Ephemeral*Network: add connectivity check via URL
+ - azure: _poll_imds only retry on 404. Fail on Timeout (LP: #1803598)
+ - resizefs: Prefix discovered devpath with '/dev/' when path does not
+   exist [Igor Galić]
+ - azure: retry imds polling on requests.Timeout (LP: #1800223)
+ - azure: Accept variation in error msg from mount for ntfs volumes
+   [Jason Zions] (LP: #1799338)
+ - azure: fix regression introduced when persisting ephemeral dhcp lease
+   [asakkurr]
+ - azure: add udev rules to create cloud-init Gen2 disk name symlinks
+   (LP: #1797480)
+ - tests: ec2 mock missing httpretty user-data and instance-identity routes
+ - azure: remove /etc/netplan/90-hotplug-azure.yaml when net from IMDS
+ - azure: report ready to fabric after reprovision and reduce logging
+   [asakkurr] (LP: #1799594)
+ - query: better error when missing read permission on instance-data
+ - instance-data: fallback to instance-data.json if sensitive is absent.
+   (LP: #1798189)
+ - docs: remove colon from network v1 config example. [Tomer Cohen]
+ - Add cloud-id binary to packages for SUSE [Jason Zions]
+ - systemd: On SUSE ensure cloud-init.service runs before wicked
+   [Robert Schweikert] (LP: #1799709)
+ - update detection of openSUSE variants [Robert Schweikert]
+ - azure: Add apply_network_config option to disable network from IMDS
+   (LP: #1798424)
+ - Correct spelling in an error message (udevadm). [Katie McLaughlin]
+ - tests: meta_data key changed to meta-data in ec2 instance-data.json
+   (LP: #1797231)
+ - tests: fix kvm integration test to assert flexible config-disk path
+   (LP: #1797199)
+ - tools: Add cloud-id command line utility
+ - instance-data: Add standard keys platform and subplatform. Refactor ec2.
+ - net: ignore nics that have "zero" mac address. (LP: #1796917)
+ - tests: fix apt_configure_primary to be more flexible
+ - Ubuntu: update sources.list to comment out deb-src entries. (LP: #74747)
+
 18.4:
  - add rtd example docs about new standardized keys
  - use ds._crawled_metadata instance attribute if set when writing
diff --git a/HACKING.rst b/HACKING.rst
index 3bb555c..fcdfa4f 100644
--- a/HACKING.rst
+++ b/HACKING.rst
@@ -11,10 +11,10 @@ Do these things once
 
 * To contribute, you must sign the Canonical `contributor license agreement`_
 
-  If you have already signed it as an individual, your Launchpad user will be listed in the `contributor-agreement-canonical`_ group. Unfortunately there is no easy way to check if an organization or company you are doing work for has signed. If you are unsure or have questions, email `Scott Moser <mailto:scott.moser@canonical.com>`_ or ping smoser in ``#cloud-init`` channel via freenode.
+  If you have already signed it as an individual, your Launchpad user will be listed in the `contributor-agreement-canonical`_ group. Unfortunately there is no easy way to check if an organization or company you are doing work for has signed. If you are unsure or have questions, email `Josh Powers <mailto:josh.powers@canonical.com>`_ or ping powersj in ``#cloud-init`` channel via freenode.
 
   When prompted for 'Project contact' or 'Canonical Project Manager' enter
-  'Scott Moser'.
+  'Josh Powers'.
 
 * Configure git with your email and name for commit messages.
 
diff --git a/bash_completion/cloud-init b/bash_completion/cloud-init
index 8c25032..a9577e9 100644
--- a/bash_completion/cloud-init
+++ b/bash_completion/cloud-init
@@ -30,7 +30,10 @@ _cloudinit_complete()
         devel)
             COMPREPLY=($(compgen -W "--help schema net-convert" -- $cur_word))
             ;;
-        dhclient-hook|features)
+        dhclient-hook)
+            COMPREPLY=($(compgen -W "--help up down" -- $cur_word))
+            ;;
+        features)
             COMPREPLY=($(compgen -W "--help" -- $cur_word))
             ;;
         init)
diff --git a/cloudinit/cmd/cloud_id.py b/cloudinit/cmd/cloud_id.py
new file mode 100755
index 0000000..9760892
--- /dev/null
+++ b/cloudinit/cmd/cloud_id.py
@@ -0,0 +1,90 @@
+# This file is part of cloud-init. See LICENSE file for license information.
+
+"""Commandline utility to list the canonical cloud-id for an instance."""
+
+import argparse
+import json
+import sys
+
+from cloudinit.sources import (
+    INSTANCE_JSON_FILE, METADATA_UNKNOWN, canonical_cloud_id)
+
+DEFAULT_INSTANCE_JSON = '/run/cloud-init/%s' % INSTANCE_JSON_FILE
+
+NAME = 'cloud-id'
+
+
+def get_parser(parser=None):
+    """Build or extend an arg parser for the cloud-id utility.
+
+    @param parser: Optional existing ArgumentParser instance representing the
+        query subcommand which will be extended to support the args of
+        this utility.
+
+    @returns: ArgumentParser with proper argument configuration.
+    """
+    if not parser:
+        parser = argparse.ArgumentParser(
+            prog=NAME,
+            description='Report the canonical cloud-id for this instance')
+    parser.add_argument(
+        '-j', '--json', action='store_true', default=False,
+        help='Report all standardized cloud-id information as json.')
+    parser.add_argument(
+        '-l', '--long', action='store_true', default=False,
+        help='Report extended cloud-id information as tab-delimited string.')
+    parser.add_argument(
+        '-i', '--instance-data', type=str, default=DEFAULT_INSTANCE_JSON,
+        help=('Path to instance-data.json file. Default is %s' %
+              DEFAULT_INSTANCE_JSON))
+    return parser
+
+
+def error(msg):
+    sys.stderr.write('ERROR: %s\n' % msg)
+    return 1
+
+
+def handle_args(name, args):
+    """Handle calls to 'cloud-id' cli.
+
+    Print the canonical cloud-id on which the instance is running.
+
+    @return: 0 on success, 1 otherwise.
+    """
+    try:
+        instance_data = json.load(open(args.instance_data))
+    except IOError:
+        return error(
+            "File not found '%s'. Provide a path to instance data json file"
+            ' using --instance-data' % args.instance_data)
+    except ValueError as e:
+        return error(
+            "File '%s' is not valid json. %s" % (args.instance_data, e))
+    v1 = instance_data.get('v1', {})
+    cloud_id = canonical_cloud_id(
+        v1.get('cloud_name', METADATA_UNKNOWN),
+        v1.get('region', METADATA_UNKNOWN),
+        v1.get('platform', METADATA_UNKNOWN))
+    if args.json:
+        v1['cloud_id'] = cloud_id
+        response = json.dumps(  # Pretty, sorted json
+            v1, indent=1, sort_keys=True, separators=(',', ': '))
+    elif args.long:
+        response = '%s\t%s' % (cloud_id, v1.get('region', METADATA_UNKNOWN))
+    else:
+        response = cloud_id
+    sys.stdout.write('%s\n' % response)
+    return 0
+
+
+def main():
+    """Tool to query specific instance-data values."""
+    parser = get_parser()
+    sys.exit(handle_args(NAME, parser.parse_args()))
+
+
+if __name__ == '__main__':
+    main()
+
+# vi: ts=4 expandtab
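The new cloud-id utility above reads the standardized `v1` keys from instance-data.json and substitutes `METADATA_UNKNOWN` ('unknown') for any missing value. A minimal standalone sketch of that lookup, independent of cloud-init (the JSON payload here is an invented example, not real instance data):

```python
import json

METADATA_UNKNOWN = 'unknown'  # the fallback sentinel used by cloud-id


def summarize(instance_data_json):
    """Mimic cloud-id's v1 lookup: missing keys fall back to 'unknown'."""
    v1 = json.loads(instance_data_json).get('v1', {})
    return (v1.get('cloud_name', METADATA_UNKNOWN),
            v1.get('region', METADATA_UNKNOWN),
            v1.get('platform', METADATA_UNKNOWN))


# A payload without a 'platform' key: that field falls back to 'unknown'.
result = summarize('{"v1": {"cloud_name": "aws", "region": "us-east-2"}}')
print(result)  # -> ('aws', 'us-east-2', 'unknown')
```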
diff --git a/cloudinit/cmd/devel/logs.py b/cloudinit/cmd/devel/logs.py
index df72520..4c086b5 100644
--- a/cloudinit/cmd/devel/logs.py
+++ b/cloudinit/cmd/devel/logs.py
@@ -5,14 +5,16 @@
 """Define 'collect-logs' utility and handler to include in cloud-init cmd."""
 
 import argparse
-from cloudinit.util import (
-    ProcessExecutionError, chdir, copy, ensure_dir, subp, write_file)
-from cloudinit.temp_utils import tempdir
 from datetime import datetime
 import os
 import shutil
 import sys
 
+from cloudinit.sources import INSTANCE_JSON_SENSITIVE_FILE
+from cloudinit.temp_utils import tempdir
+from cloudinit.util import (
+    ProcessExecutionError, chdir, copy, ensure_dir, subp, write_file)
+
 
 CLOUDINIT_LOGS = ['/var/log/cloud-init.log', '/var/log/cloud-init-output.log']
 CLOUDINIT_RUN_DIR = '/run/cloud-init'
@@ -46,6 +48,13 @@ def get_parser(parser=None):
     return parser
 
 
+def _copytree_ignore_sensitive_files(curdir, files):
+    """Return a list of files to ignore if we are non-root"""
+    if os.getuid() == 0:
+        return ()
+    return (INSTANCE_JSON_SENSITIVE_FILE,)  # Ignore root-permissioned files
+
+
 def _write_command_output_to_file(cmd, filename, msg, verbosity):
     """Helper which runs a command and writes output or error to filename."""
     try:
@@ -78,6 +87,11 @@ def collect_logs(tarfile, include_userdata, verbosity=0):
     @param tarfile: The path of the tar-gzipped file to create.
     @param include_userdata: Boolean, true means include user-data.
     """
+    if include_userdata and os.getuid() != 0:
+        sys.stderr.write(
+            "To include userdata, root user is required."
+            " Try sudo cloud-init collect-logs\n")
+        return 1
     tarfile = os.path.abspath(tarfile)
     date = datetime.utcnow().date().strftime('%Y-%m-%d')
     log_dir = 'cloud-init-logs-{0}'.format(date)
@@ -110,7 +124,8 @@ def collect_logs(tarfile, include_userdata, verbosity=0):
         ensure_dir(run_dir)
         if os.path.exists(CLOUDINIT_RUN_DIR):
             shutil.copytree(CLOUDINIT_RUN_DIR,
-                            os.path.join(run_dir, 'cloud-init'))
+                            os.path.join(run_dir, 'cloud-init'),
+                            ignore=_copytree_ignore_sensitive_files)
             _debug("collected dir %s\n" % CLOUDINIT_RUN_DIR, 1, verbosity)
         else:
             _debug("directory '%s' did not exist\n" % CLOUDINIT_RUN_DIR, 1,
@@ -118,21 +133,21 @@ def collect_logs(tarfile, include_userdata, verbosity=0):
         with chdir(tmp_dir):
             subp(['tar', 'czvf', tarfile, log_dir.replace(tmp_dir + '/', '')])
     sys.stderr.write("Wrote %s\n" % tarfile)
+    return 0
 
 
 def handle_collect_logs_args(name, args):
     """Handle calls to 'cloud-init collect-logs' as a subcommand."""
-    collect_logs(args.tarfile, args.userdata, args.verbosity)
+    return collect_logs(args.tarfile, args.userdata, args.verbosity)
 
 
 def main():
     """Tool to collect and tar all cloud-init related logs."""
     parser = get_parser()
-    handle_collect_logs_args('collect-logs', parser.parse_args())
-    return 0
+    return handle_collect_logs_args('collect-logs', parser.parse_args())
 
 
 if __name__ == '__main__':
-    main()
+    sys.exit(main())
 
 # vi: ts=4 expandtab
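The collect-logs change above excludes the sensitive instance-data file by handing `shutil.copytree` an ignore callback. A standalone sketch of that pattern, with the uid passed in explicitly so the behavior is deterministic for demonstration (the filename constant mirrors the diff; the factory function is an illustrative helper, not cloud-init's API):

```python
import os
import shutil
import tempfile

SENSITIVE_FILE = 'instance-data-sensitive.json'  # name mirrored from the diff


def ignore_sensitive_for(uid):
    """Build a copytree ignore callback: non-root skips the sensitive file."""
    def _ignore(curdir, files):
        if uid == 0:
            return ()  # root copies everything
        return (SENSITIVE_FILE,)
    return _ignore


# Populate a source dir with a normal file and a sensitive file.
src = tempfile.mkdtemp()
for name in ('results.json', SENSITIVE_FILE):
    with open(os.path.join(src, name), 'w') as f:
        f.write('x')

# Copy as a non-root uid: the sensitive file is filtered out.
dst = os.path.join(tempfile.mkdtemp(), 'copy')
shutil.copytree(src, dst, ignore=ignore_sensitive_for(uid=100))
copied = sorted(os.listdir(dst))
print(copied)  # -> ['results.json']
```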
diff --git a/cloudinit/cmd/devel/net_convert.py b/cloudinit/cmd/devel/net_convert.py
index a0f58a0..1ad7e0b 100755
--- a/cloudinit/cmd/devel/net_convert.py
+++ b/cloudinit/cmd/devel/net_convert.py
@@ -9,6 +9,7 @@ import yaml
 
 from cloudinit.sources.helpers import openstack
 from cloudinit.sources import DataSourceAzure as azure
+from cloudinit.sources import DataSourceOVF as ovf
 
 from cloudinit import distros
 from cloudinit.net import eni, netplan, network_state, sysconfig
@@ -31,7 +32,7 @@ def get_parser(parser=None):
                         metavar="PATH", required=True)
     parser.add_argument("-k", "--kind",
                         choices=['eni', 'network_data.json', 'yaml',
-                                 'azure-imds'],
+                                 'azure-imds', 'vmware-imc'],
                         required=True)
     parser.add_argument("-d", "--directory",
                         metavar="PATH",
@@ -76,7 +77,6 @@ def handle_args(name, args):
     net_data = args.network_data.read()
     if args.kind == "eni":
         pre_ns = eni.convert_eni_data(net_data)
-        ns = network_state.parse_net_config_data(pre_ns)
     elif args.kind == "yaml":
         pre_ns = yaml.load(net_data)
         if 'network' in pre_ns:
@@ -85,15 +85,16 @@ def handle_args(name, args):
         sys.stderr.write('\n'.join(
             ["Input YAML",
              yaml.dump(pre_ns, default_flow_style=False, indent=4), ""]))
-        ns = network_state.parse_net_config_data(pre_ns)
     elif args.kind == 'network_data.json':
         pre_ns = openstack.convert_net_json(
             json.loads(net_data), known_macs=known_macs)
-        ns = network_state.parse_net_config_data(pre_ns)
     elif args.kind == 'azure-imds':
         pre_ns = azure.parse_network_config(json.loads(net_data))
-        ns = network_state.parse_net_config_data(pre_ns)
+    elif args.kind == 'vmware-imc':
+        config = ovf.Config(ovf.ConfigFile(args.network_data.name))
+        pre_ns = ovf.get_network_config_from_conf(config, False)
 
+    ns = network_state.parse_net_config_data(pre_ns)
     if not ns:
         raise RuntimeError("No valid network_state object created from"
                            "input data")
@@ -111,6 +112,10 @@ def handle_args(name, args):
     elif args.output_kind == "netplan":
         r_cls = netplan.Renderer
         config = distro.renderer_configs.get('netplan')
+        # don't run netplan generate/apply
+        config['postcmds'] = False
+        # trim leading slash
+        config['netplan_path'] = config['netplan_path'][1:]
     else:
         r_cls = sysconfig.Renderer
         config = distro.renderer_configs.get('sysconfig')
diff --git a/cloudinit/cmd/devel/render.py b/cloudinit/cmd/devel/render.py
index 2ba6b68..1bc2240 100755
--- a/cloudinit/cmd/devel/render.py
+++ b/cloudinit/cmd/devel/render.py
@@ -8,11 +8,10 @@ import sys
 
 from cloudinit.handlers.jinja_template import render_jinja_payload_from_file
 from cloudinit import log
-from cloudinit.sources import INSTANCE_JSON_FILE
+from cloudinit.sources import INSTANCE_JSON_FILE, INSTANCE_JSON_SENSITIVE_FILE
 from . import addLogHandlerCLI, read_cfg_paths
 
 NAME = 'render'
-DEFAULT_INSTANCE_DATA = '/run/cloud-init/instance-data.json'
 
 LOG = log.getLogger(NAME)
 
@@ -47,12 +46,22 @@ def handle_args(name, args):
     @return 0 on success, 1 on failure.
     """
     addLogHandlerCLI(LOG, log.DEBUG if args.debug else log.WARNING)
-    if not args.instance_data:
-        paths = read_cfg_paths()
-        instance_data_fn = os.path.join(
-            paths.run_dir, INSTANCE_JSON_FILE)
-    else:
+    if args.instance_data:
         instance_data_fn = args.instance_data
+    else:
+        paths = read_cfg_paths()
+        uid = os.getuid()
+        redacted_data_fn = os.path.join(paths.run_dir, INSTANCE_JSON_FILE)
+        if uid == 0:
+            instance_data_fn = os.path.join(
+                paths.run_dir, INSTANCE_JSON_SENSITIVE_FILE)
+            if not os.path.exists(instance_data_fn):
+                LOG.warning(
+                    'Missing root-readable %s. Using redacted %s instead.',
+                    instance_data_fn, redacted_data_fn)
+                instance_data_fn = redacted_data_fn
+        else:
+            instance_data_fn = redacted_data_fn
     if not os.path.exists(instance_data_fn):
         LOG.error('Missing instance-data.json file: %s', instance_data_fn)
         return 1
@@ -62,10 +71,14 @@ def handle_args(name, args):
     except IOError:
         LOG.error('Missing user-data file: %s', args.user_data)
         return 1
-    rendered_payload = render_jinja_payload_from_file(
-        payload=user_data, payload_fn=args.user_data,
-        instance_data_file=instance_data_fn,
-        debug=True if args.debug else False)
+    try:
+        rendered_payload = render_jinja_payload_from_file(
+            payload=user_data, payload_fn=args.user_data,
+            instance_data_file=instance_data_fn,
+            debug=True if args.debug else False)
+    except RuntimeError as e:
+        LOG.error('Cannot render from instance data: %s', str(e))
+        return 1
     if not rendered_payload:
         LOG.error('Unable to render user-data file: %s', args.user_data)
         return 1
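The render change above selects which instance-data file to read based on the caller's uid: root prefers the sensitive file and falls back to the redacted one with a warning; non-root always gets the redacted file. A minimal standalone sketch of that selection logic (the helper name and the injectable `exists` parameter are illustrative, not cloud-init's API):

```python
import os

INSTANCE_JSON_FILE = 'instance-data.json'
INSTANCE_JSON_SENSITIVE_FILE = 'instance-data-sensitive.json'


def pick_instance_data(run_dir, uid, exists=os.path.exists):
    """Choose the sensitive json for root when present, else the redacted one."""
    redacted = os.path.join(run_dir, INSTANCE_JSON_FILE)
    if uid != 0:
        return redacted  # non-root never reads the sensitive file
    sensitive = os.path.join(run_dir, INSTANCE_JSON_SENSITIVE_FILE)
    return sensitive if exists(sensitive) else redacted


# Non-root gets the redacted file; root falls back when sensitive is absent.
non_root = pick_instance_data('/run/cloud-init', uid=1000)
root_fallback = pick_instance_data(
    '/run/cloud-init', uid=0, exists=lambda p: False)
print(non_root)       # -> /run/cloud-init/instance-data.json
print(root_fallback)  # -> /run/cloud-init/instance-data.json
```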
diff --git a/cloudinit/cmd/devel/tests/test_logs.py b/cloudinit/cmd/devel/tests/test_logs.py
index 98b4756..4951797 100644
--- a/cloudinit/cmd/devel/tests/test_logs.py
+++ b/cloudinit/cmd/devel/tests/test_logs.py
@@ -1,13 +1,17 @@
 # This file is part of cloud-init. See LICENSE file for license information.
 
-from cloudinit.cmd.devel import logs
-from cloudinit.util import ensure_dir, load_file, subp, write_file
-from cloudinit.tests.helpers import FilesystemMockingTestCase, wrap_and_call
 from datetime import datetime
-import mock
 import os
+from six import StringIO
+
+from cloudinit.cmd.devel import logs
+from cloudinit.sources import INSTANCE_JSON_SENSITIVE_FILE
+from cloudinit.tests.helpers import (
+    FilesystemMockingTestCase, mock, wrap_and_call)
+from cloudinit.util import ensure_dir, load_file, subp, write_file
 
 
+@mock.patch('cloudinit.cmd.devel.logs.os.getuid')
 class TestCollectLogs(FilesystemMockingTestCase):
 
     def setUp(self):
@@ -15,14 +19,29 @@ class TestCollectLogs(FilesystemMockingTestCase):
         self.new_root = self.tmp_dir()
         self.run_dir = self.tmp_path('run', self.new_root)
 
-    def test_collect_logs_creates_tarfile(self):
+    def test_collect_logs_with_userdata_requires_root_user(self, m_getuid):
+        """collect-logs errors when non-root user collects userdata ."""
+        m_getuid.return_value = 100  # non-root
+        output_tarfile = self.tmp_path('logs.tgz')
+        with mock.patch('sys.stderr', new_callable=StringIO) as m_stderr:
+            self.assertEqual(
+                1, logs.collect_logs(output_tarfile, include_userdata=True))
+        self.assertEqual(
+            'To include userdata, root user is required.'
+            ' Try sudo cloud-init collect-logs\n',
+            m_stderr.getvalue())
+
+    def test_collect_logs_creates_tarfile(self, m_getuid):
         """collect-logs creates a tarfile with all related cloud-init info."""
+        m_getuid.return_value = 100
         log1 = self.tmp_path('cloud-init.log', self.new_root)
         write_file(log1, 'cloud-init-log')
         log2 = self.tmp_path('cloud-init-output.log', self.new_root)
         write_file(log2, 'cloud-init-output-log')
         ensure_dir(self.run_dir)
         write_file(self.tmp_path('results.json', self.run_dir), 'results')
+        write_file(self.tmp_path(INSTANCE_JSON_SENSITIVE_FILE, self.run_dir),
+                   'sensitive')
         output_tarfile = self.tmp_path('logs.tgz')
 
         date = datetime.utcnow().date().strftime('%Y-%m-%d')
@@ -59,6 +78,11 @@ class TestCollectLogs(FilesystemMockingTestCase):
         # unpack the tarfile and check file contents
         subp(['tar', 'zxvf', output_tarfile, '-C', self.new_root])
         out_logdir = self.tmp_path(date_logdir, self.new_root)
+        self.assertFalse(
+            os.path.exists(
+                os.path.join(out_logdir, 'run', 'cloud-init',
+                             INSTANCE_JSON_SENSITIVE_FILE)),
+            'Unexpected file found: %s' % INSTANCE_JSON_SENSITIVE_FILE)
         self.assertEqual(
             '0.7fake\n',
             load_file(os.path.join(out_logdir, 'dpkg-version')))
@@ -82,8 +106,9 @@ class TestCollectLogs(FilesystemMockingTestCase):
                 os.path.join(out_logdir, 'run', 'cloud-init', 'results.json')))
         fake_stderr.write.assert_any_call('Wrote %s\n' % output_tarfile)
 
-    def test_collect_logs_includes_optional_userdata(self):
+    def test_collect_logs_includes_optional_userdata(self, m_getuid):
         """collect-logs include userdata when --include-userdata is set."""
+        m_getuid.return_value = 0
         log1 = self.tmp_path('cloud-init.log', self.new_root)
         write_file(log1, 'cloud-init-log')
         log2 = self.tmp_path('cloud-init-output.log', self.new_root)
@@ -92,6 +117,8 @@ class TestCollectLogs(FilesystemMockingTestCase):
         write_file(userdata, 'user-data')
         ensure_dir(self.run_dir)
         write_file(self.tmp_path('results.json', self.run_dir), 'results')
+        write_file(self.tmp_path(INSTANCE_JSON_SENSITIVE_FILE, self.run_dir),
+                   'sensitive')
         output_tarfile = self.tmp_path('logs.tgz')
 
         date = datetime.utcnow().date().strftime('%Y-%m-%d')
@@ -132,4 +159,8 @@ class TestCollectLogs(FilesystemMockingTestCase):
         self.assertEqual(
             'user-data',
             load_file(os.path.join(out_logdir, 'user-data.txt')))
+        self.assertEqual(
+            'sensitive',
+            load_file(os.path.join(out_logdir, 'run', 'cloud-init',
+                                   INSTANCE_JSON_SENSITIVE_FILE)))
         fake_stderr.write.assert_any_call('Wrote %s\n' % output_tarfile)
diff --git a/cloudinit/cmd/devel/tests/test_render.py b/cloudinit/cmd/devel/tests/test_render.py
index fc5d2c0..988bba0 100644
--- a/cloudinit/cmd/devel/tests/test_render.py
+++ b/cloudinit/cmd/devel/tests/test_render.py
@@ -6,7 +6,7 @@ import os
 from collections import namedtuple
 from cloudinit.cmd.devel import render
 from cloudinit.helpers import Paths
-from cloudinit.sources import INSTANCE_JSON_FILE
+from cloudinit.sources import INSTANCE_JSON_FILE, INSTANCE_JSON_SENSITIVE_FILE
 from cloudinit.tests.helpers import CiTestCase, mock, skipUnlessJinja
 from cloudinit.util import ensure_dir, write_file
 
@@ -63,6 +63,49 @@ class TestRender(CiTestCase):
             'Missing instance-data.json file: %s' % json_file,
             self.logs.getvalue())
 
+    def test_handle_args_root_fallback_from_sensitive_instance_data(self):
+        """When root user defaults to sensitive.json."""
+        user_data = self.tmp_path('user-data', dir=self.tmp)
+        run_dir = self.tmp_path('run_dir', dir=self.tmp)
+        ensure_dir(run_dir)
+        paths = Paths({'run_dir': run_dir})
+        self.add_patch('cloudinit.cmd.devel.render.read_cfg_paths', 'm_paths')
+        self.m_paths.return_value = paths
+        args = self.args(
+            user_data=user_data, instance_data=None, debug=False)
+        with mock.patch('sys.stderr', new_callable=StringIO):
+            with mock.patch('os.getuid') as m_getuid:
+                m_getuid.return_value = 0
+                self.assertEqual(1, render.handle_args('anyname', args))
+        json_file = os.path.join(run_dir, INSTANCE_JSON_FILE)
+        json_sensitive = os.path.join(run_dir, INSTANCE_JSON_SENSITIVE_FILE)
+        self.assertIn(
+            'WARNING: Missing root-readable %s. Using redacted %s' % (
+                json_sensitive, json_file), self.logs.getvalue())
+        self.assertIn(
+            'ERROR: Missing instance-data.json file: %s' % json_file,
+            self.logs.getvalue())
+
+    def test_handle_args_root_uses_sensitive_instance_data(self):
+        """When root user, and no instance-data arg, use sensitive.json."""
+        user_data = self.tmp_path('user-data', dir=self.tmp)
+        write_file(user_data, '##template: jinja\nrendering: {{ my_var }}')
+        run_dir = self.tmp_path('run_dir', dir=self.tmp)
+        ensure_dir(run_dir)
+        json_sensitive = os.path.join(run_dir, INSTANCE_JSON_SENSITIVE_FILE)
+        write_file(json_sensitive, '{"my-var": "jinja worked"}')
+        paths = Paths({'run_dir': run_dir})
+        self.add_patch('cloudinit.cmd.devel.render.read_cfg_paths', 'm_paths')
+        self.m_paths.return_value = paths
+        args = self.args(
+            user_data=user_data, instance_data=None, debug=False)
+        with mock.patch('sys.stderr', new_callable=StringIO):
+            with mock.patch('sys.stdout', new_callable=StringIO) as m_stdout:
+                with mock.patch('os.getuid') as m_getuid:
+                    m_getuid.return_value = 0
+                    self.assertEqual(0, render.handle_args('anyname', args))
+        self.assertIn('rendering: jinja worked', m_stdout.getvalue())
108
66 @skipUnlessJinja()109 @skipUnlessJinja()
67 def test_handle_args_renders_instance_data_vars_in_template(self):110 def test_handle_args_renders_instance_data_vars_in_template(self):
68 """If user_data file is a jinja template render instance-data vars."""111 """If user_data file is a jinja template render instance-data vars."""
diff --git a/cloudinit/cmd/main.py b/cloudinit/cmd/main.py
index 5a43702..933c019 100644
--- a/cloudinit/cmd/main.py
+++ b/cloudinit/cmd/main.py
@@ -41,7 +41,7 @@ from cloudinit.settings import (PER_INSTANCE, PER_ALWAYS, PER_ONCE,
41from cloudinit import atomic_helper41from cloudinit import atomic_helper
4242
43from cloudinit.config import cc_set_hostname43from cloudinit.config import cc_set_hostname
44from cloudinit.dhclient_hook import LogDhclient44from cloudinit import dhclient_hook
4545
4646
47# Welcome message template47# Welcome message template
@@ -586,12 +586,6 @@ def main_single(name, args):
586 return 0586 return 0
587587
588588
589def dhclient_hook(name, args):
590 record = LogDhclient(args)
591 record.check_hooks_dir()
592 record.record()
593
594
595def status_wrapper(name, args, data_d=None, link_d=None):589def status_wrapper(name, args, data_d=None, link_d=None):
596 if data_d is None:590 if data_d is None:
597 data_d = os.path.normpath("/var/lib/cloud/data")591 data_d = os.path.normpath("/var/lib/cloud/data")
@@ -795,15 +789,9 @@ def main(sysv_args=None):
795 'query',789 'query',
796 help='Query standardized instance metadata from the command line.')790 help='Query standardized instance metadata from the command line.')
797791
798 parser_dhclient = subparsers.add_parser('dhclient-hook',792 parser_dhclient = subparsers.add_parser(
799 help=('run the dhclient hook'793 dhclient_hook.NAME, help=dhclient_hook.__doc__)
800 'to record network info'))794 dhclient_hook.get_parser(parser_dhclient)
801 parser_dhclient.add_argument("net_action",
802 help=('action taken on the interface'))
803 parser_dhclient.add_argument("net_interface",
804 help=('the network interface being acted'
805 ' upon'))
806 parser_dhclient.set_defaults(action=('dhclient_hook', dhclient_hook))
807795
808 parser_features = subparsers.add_parser('features',796 parser_features = subparsers.add_parser('features',
809 help=('list defined features'))797 help=('list defined features'))
diff --git a/cloudinit/cmd/query.py b/cloudinit/cmd/query.py
index 7d2d4fe..1d888b9 100644
--- a/cloudinit/cmd/query.py
+++ b/cloudinit/cmd/query.py
@@ -3,6 +3,7 @@
3"""Query standardized instance metadata from the command line."""3"""Query standardized instance metadata from the command line."""
44
5import argparse5import argparse
6from errno import EACCES
6import os7import os
7import six8import six
8import sys9import sys
@@ -79,27 +80,38 @@ def handle_args(name, args):
79 uid = os.getuid()80 uid = os.getuid()
80 if not all([args.instance_data, args.user_data, args.vendor_data]):81 if not all([args.instance_data, args.user_data, args.vendor_data]):
81 paths = read_cfg_paths()82 paths = read_cfg_paths()
82 if not args.instance_data:83 if args.instance_data:
84 instance_data_fn = args.instance_data
85 else:
86 redacted_data_fn = os.path.join(paths.run_dir, INSTANCE_JSON_FILE)
83 if uid == 0:87 if uid == 0:
84 default_json_fn = INSTANCE_JSON_SENSITIVE_FILE88 sensitive_data_fn = os.path.join(
89 paths.run_dir, INSTANCE_JSON_SENSITIVE_FILE)
90 if os.path.exists(sensitive_data_fn):
91 instance_data_fn = sensitive_data_fn
92 else:
93 LOG.warning(
94 'Missing root-readable %s. Using redacted %s instead.',
95 sensitive_data_fn, redacted_data_fn)
96 instance_data_fn = redacted_data_fn
85 else:97 else:
86 default_json_fn = INSTANCE_JSON_FILE # World readable98 instance_data_fn = redacted_data_fn
87 instance_data_fn = os.path.join(paths.run_dir, default_json_fn)99 if args.user_data:
100 user_data_fn = args.user_data
88 else:101 else:
89 instance_data_fn = args.instance_data
90 if not args.user_data:
91 user_data_fn = os.path.join(paths.instance_link, 'user-data.txt')102 user_data_fn = os.path.join(paths.instance_link, 'user-data.txt')
103 if args.vendor_data:
104 vendor_data_fn = args.vendor_data
92 else:105 else:
93 user_data_fn = args.user_data
94 if not args.vendor_data:
95 vendor_data_fn = os.path.join(paths.instance_link, 'vendor-data.txt')106 vendor_data_fn = os.path.join(paths.instance_link, 'vendor-data.txt')
96 else:
97 vendor_data_fn = args.vendor_data
98107
99 try:108 try:
100 instance_json = util.load_file(instance_data_fn)109 instance_json = util.load_file(instance_data_fn)
101 except IOError:110 except (IOError, OSError) as e:
102 LOG.error('Missing instance-data.json file: %s', instance_data_fn)111 if e.errno == EACCES:
112 LOG.error("No read permission on '%s'. Try sudo", instance_data_fn)
113 else:
114 LOG.error('Missing instance-data file: %s', instance_data_fn)
103 return 1115 return 1
104116
105 instance_data = util.load_json(instance_json)117 instance_data = util.load_json(instance_json)
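The new `except (IOError, OSError)` branch above distinguishes a permission failure from a missing file. A minimal standalone sketch of that error-dispatch pattern (function name hypothetical, messages copied from the diff):

```python
import errno


def describe_load_error(e, path):
    """Mirror the new branch in query.handle_args: EACCES gets a sudo hint."""
    if e.errno == errno.EACCES:
        return "No read permission on '%s'. Try sudo" % path
    return 'Missing instance-data file: %s' % path


assert describe_load_error(
    OSError(errno.EACCES, 'denied'), '/run/x.json'
) == "No read permission on '/run/x.json'. Try sudo"
assert describe_load_error(
    OSError(errno.ENOENT, 'missing'), '/run/x.json'
) == 'Missing instance-data file: /run/x.json'
```

Constructing `OSError` with two arguments sets `.errno`, which is what the real handler inspects after `util.load_file` fails.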
diff --git a/cloudinit/cmd/tests/test_cloud_id.py b/cloudinit/cmd/tests/test_cloud_id.py
new file mode 100644
index 0000000..7373817
--- /dev/null
+++ b/cloudinit/cmd/tests/test_cloud_id.py
@@ -0,0 +1,127 @@
1# This file is part of cloud-init. See LICENSE file for license information.
2
3"""Tests for cloud-id command line utility."""
4
5from cloudinit import util
6from collections import namedtuple
7from six import StringIO
8
9from cloudinit.cmd import cloud_id
10
11from cloudinit.tests.helpers import CiTestCase, mock
12
13
14class TestCloudId(CiTestCase):
15
16 args = namedtuple('cloudidargs', ('instance_data json long'))
17
18 def setUp(self):
19 super(TestCloudId, self).setUp()
20 self.tmp = self.tmp_dir()
21 self.instance_data = self.tmp_path('instance-data.json', dir=self.tmp)
22
23 def test_cloud_id_arg_parser_defaults(self):
24 """Validate the argument defaults when not provided by the end-user."""
25 cmd = ['cloud-id']
26 with mock.patch('sys.argv', cmd):
27 args = cloud_id.get_parser().parse_args()
28 self.assertEqual(
29 '/run/cloud-init/instance-data.json',
30 args.instance_data)
31 self.assertEqual(False, args.long)
32 self.assertEqual(False, args.json)
33
34 def test_cloud_id_arg_parse_overrides(self):
35 """Override argument defaults by specifying values for each param."""
36 util.write_file(self.instance_data, '{}')
37 cmd = ['cloud-id', '--instance-data', self.instance_data, '--long',
38 '--json']
39 with mock.patch('sys.argv', cmd):
40 args = cloud_id.get_parser().parse_args()
41 self.assertEqual(self.instance_data, args.instance_data)
42 self.assertEqual(True, args.long)
43 self.assertEqual(True, args.json)
44
45 def test_cloud_id_missing_instance_data_json(self):
46 """Exit error when the provided instance-data.json does not exist."""
47 cmd = ['cloud-id', '--instance-data', self.instance_data]
48 with mock.patch('sys.argv', cmd):
49 with mock.patch('sys.stderr', new_callable=StringIO) as m_stderr:
50 with self.assertRaises(SystemExit) as context_manager:
51 cloud_id.main()
52 self.assertEqual(1, context_manager.exception.code)
53 self.assertIn(
54 "ERROR: File not found '%s'" % self.instance_data,
55 m_stderr.getvalue())
56
57 def test_cloud_id_non_json_instance_data(self):
58 """Exit error when the provided instance-data.json is not json."""
59 cmd = ['cloud-id', '--instance-data', self.instance_data]
60 util.write_file(self.instance_data, '{')
61 with mock.patch('sys.argv', cmd):
62 with mock.patch('sys.stderr', new_callable=StringIO) as m_stderr:
63 with self.assertRaises(SystemExit) as context_manager:
64 cloud_id.main()
65 self.assertEqual(1, context_manager.exception.code)
66 self.assertIn(
67 "ERROR: File '%s' is not valid json." % self.instance_data,
68 m_stderr.getvalue())
69
70 def test_cloud_id_from_cloud_name_in_instance_data(self):
71 """Report canonical cloud-id from cloud_name in instance-data."""
72 util.write_file(
73 self.instance_data,
74 '{"v1": {"cloud_name": "mycloud", "region": "somereg"}}')
75 cmd = ['cloud-id', '--instance-data', self.instance_data]
76 with mock.patch('sys.argv', cmd):
77 with mock.patch('sys.stdout', new_callable=StringIO) as m_stdout:
78 with self.assertRaises(SystemExit) as context_manager:
79 cloud_id.main()
80 self.assertEqual(0, context_manager.exception.code)
81 self.assertEqual("mycloud\n", m_stdout.getvalue())
82
83 def test_cloud_id_long_name_from_instance_data(self):
84 """Report long cloud-id format from cloud_name and region."""
85 util.write_file(
86 self.instance_data,
87 '{"v1": {"cloud_name": "mycloud", "region": "somereg"}}')
88 cmd = ['cloud-id', '--instance-data', self.instance_data, '--long']
89 with mock.patch('sys.argv', cmd):
90 with mock.patch('sys.stdout', new_callable=StringIO) as m_stdout:
91 with self.assertRaises(SystemExit) as context_manager:
92 cloud_id.main()
93 self.assertEqual(0, context_manager.exception.code)
94 self.assertEqual("mycloud\tsomereg\n", m_stdout.getvalue())
95
96 def test_cloud_id_lookup_from_instance_data_region(self):
97 """Report discovered canonical cloud_id when region lookup matches."""
98 util.write_file(
99 self.instance_data,
100 '{"v1": {"cloud_name": "aws", "region": "cn-north-1",'
101 ' "platform": "ec2"}}')
102 cmd = ['cloud-id', '--instance-data', self.instance_data, '--long']
103 with mock.patch('sys.argv', cmd):
104 with mock.patch('sys.stdout', new_callable=StringIO) as m_stdout:
105 with self.assertRaises(SystemExit) as context_manager:
106 cloud_id.main()
107 self.assertEqual(0, context_manager.exception.code)
108 self.assertEqual("aws-china\tcn-north-1\n", m_stdout.getvalue())
109
110 def test_cloud_id_lookup_json_instance_data_adds_cloud_id_to_json(self):
111 """Report v1 instance-data content with cloud_id when --json set."""
112 util.write_file(
113 self.instance_data,
114 '{"v1": {"cloud_name": "unknown", "region": "dfw",'
115 ' "platform": "openstack", "public_ssh_keys": []}}')
116 expected = util.json_dumps({
117 'cloud_id': 'openstack', 'cloud_name': 'unknown',
118 'platform': 'openstack', 'public_ssh_keys': [], 'region': 'dfw'})
119 cmd = ['cloud-id', '--instance-data', self.instance_data, '--json']
120 with mock.patch('sys.argv', cmd):
121 with mock.patch('sys.stdout', new_callable=StringIO) as m_stdout:
122 with self.assertRaises(SystemExit) as context_manager:
123 cloud_id.main()
124 self.assertEqual(0, context_manager.exception.code)
125 self.assertEqual(expected + '\n', m_stdout.getvalue())
126
127# vi: ts=4 expandtab
diff --git a/cloudinit/cmd/tests/test_query.py b/cloudinit/cmd/tests/test_query.py
index fb87c6a..28738b1 100644
--- a/cloudinit/cmd/tests/test_query.py
+++ b/cloudinit/cmd/tests/test_query.py
@@ -1,5 +1,6 @@
1# This file is part of cloud-init. See LICENSE file for license information.1# This file is part of cloud-init. See LICENSE file for license information.
22
3import errno
3from six import StringIO4from six import StringIO
4from textwrap import dedent5from textwrap import dedent
5import os6import os
@@ -7,7 +8,8 @@ import os
7from collections import namedtuple8from collections import namedtuple
8from cloudinit.cmd import query9from cloudinit.cmd import query
9from cloudinit.helpers import Paths10from cloudinit.helpers import Paths
10from cloudinit.sources import REDACT_SENSITIVE_VALUE, INSTANCE_JSON_FILE11from cloudinit.sources import (
12 REDACT_SENSITIVE_VALUE, INSTANCE_JSON_FILE, INSTANCE_JSON_SENSITIVE_FILE)
11from cloudinit.tests.helpers import CiTestCase, mock13from cloudinit.tests.helpers import CiTestCase, mock
12from cloudinit.util import ensure_dir, write_file14from cloudinit.util import ensure_dir, write_file
1315
@@ -50,10 +52,28 @@ class TestQuery(CiTestCase):
50 with mock.patch('sys.stderr', new_callable=StringIO) as m_stderr:52 with mock.patch('sys.stderr', new_callable=StringIO) as m_stderr:
51 self.assertEqual(1, query.handle_args('anyname', args))53 self.assertEqual(1, query.handle_args('anyname', args))
52 self.assertIn(54 self.assertIn(
53 'ERROR: Missing instance-data.json file: %s' % absent_fn,55 'ERROR: Missing instance-data file: %s' % absent_fn,
54 self.logs.getvalue())56 self.logs.getvalue())
55 self.assertIn(57 self.assertIn(
56 'ERROR: Missing instance-data.json file: %s' % absent_fn,58 'ERROR: Missing instance-data file: %s' % absent_fn,
59 m_stderr.getvalue())
60
61 def test_handle_args_error_when_no_read_permission_instance_data(self):
62 """When instance_data file is unreadable, log an error."""
63 noread_fn = self.tmp_path('unreadable', dir=self.tmp)
64 write_file(noread_fn, 'thou shall not pass')
65 args = self.args(
66 debug=False, dump_all=True, format=None, instance_data=noread_fn,
67 list_keys=False, user_data='ud', vendor_data='vd', varname=None)
68 with mock.patch('sys.stderr', new_callable=StringIO) as m_stderr:
69 with mock.patch('cloudinit.cmd.query.util.load_file') as m_load:
70 m_load.side_effect = OSError(errno.EACCES, 'Not allowed')
71 self.assertEqual(1, query.handle_args('anyname', args))
72 self.assertIn(
73 "ERROR: No read permission on '%s'. Try sudo" % noread_fn,
74 self.logs.getvalue())
75 self.assertIn(
76 "ERROR: No read permission on '%s'. Try sudo" % noread_fn,
57 m_stderr.getvalue())77 m_stderr.getvalue())
5878
59 def test_handle_args_defaults_instance_data(self):79 def test_handle_args_defaults_instance_data(self):
@@ -70,12 +90,58 @@ class TestQuery(CiTestCase):
70 self.assertEqual(1, query.handle_args('anyname', args))90 self.assertEqual(1, query.handle_args('anyname', args))
71 json_file = os.path.join(run_dir, INSTANCE_JSON_FILE)91 json_file = os.path.join(run_dir, INSTANCE_JSON_FILE)
72 self.assertIn(92 self.assertIn(
73 'ERROR: Missing instance-data.json file: %s' % json_file,93 'ERROR: Missing instance-data file: %s' % json_file,
74 self.logs.getvalue())94 self.logs.getvalue())
75 self.assertIn(95 self.assertIn(
76 'ERROR: Missing instance-data.json file: %s' % json_file,96 'ERROR: Missing instance-data file: %s' % json_file,
77 m_stderr.getvalue())97 m_stderr.getvalue())
7898
99 def test_handle_args_root_fallsback_to_instance_data(self):
100 """When no instance_data argument, root falls back to redacted json."""
101 args = self.args(
102 debug=False, dump_all=True, format=None, instance_data=None,
103 list_keys=False, user_data=None, vendor_data=None, varname=None)
104 run_dir = self.tmp_path('run_dir', dir=self.tmp)
105 ensure_dir(run_dir)
106 paths = Paths({'run_dir': run_dir})
107 self.add_patch('cloudinit.cmd.query.read_cfg_paths', 'm_paths')
108 self.m_paths.return_value = paths
109 with mock.patch('sys.stderr', new_callable=StringIO) as m_stderr:
110 with mock.patch('os.getuid') as m_getuid:
111 m_getuid.return_value = 0
112 self.assertEqual(1, query.handle_args('anyname', args))
113 json_file = os.path.join(run_dir, INSTANCE_JSON_FILE)
114 sensitive_file = os.path.join(run_dir, INSTANCE_JSON_SENSITIVE_FILE)
115 self.assertIn(
116 'WARNING: Missing root-readable %s. Using redacted %s instead.' % (
117 sensitive_file, json_file),
118 m_stderr.getvalue())
119
120 def test_handle_args_root_uses_instance_sensitive_data(self):
 121 """When no instance_data argument, root uses sensitive json."""
122 user_data = self.tmp_path('user-data', dir=self.tmp)
123 vendor_data = self.tmp_path('vendor-data', dir=self.tmp)
124 write_file(user_data, 'ud')
125 write_file(vendor_data, 'vd')
126 run_dir = self.tmp_path('run_dir', dir=self.tmp)
127 sensitive_file = os.path.join(run_dir, INSTANCE_JSON_SENSITIVE_FILE)
128 write_file(sensitive_file, '{"my-var": "it worked"}')
129 ensure_dir(run_dir)
130 paths = Paths({'run_dir': run_dir})
131 self.add_patch('cloudinit.cmd.query.read_cfg_paths', 'm_paths')
132 self.m_paths.return_value = paths
133 args = self.args(
134 debug=False, dump_all=True, format=None, instance_data=None,
135 list_keys=False, user_data=vendor_data, vendor_data=vendor_data,
136 varname=None)
137 with mock.patch('sys.stdout', new_callable=StringIO) as m_stdout:
138 with mock.patch('os.getuid') as m_getuid:
139 m_getuid.return_value = 0
140 self.assertEqual(0, query.handle_args('anyname', args))
141 self.assertEqual(
142 '{\n "my_var": "it worked",\n "userdata": "vd",\n '
143 '"vendordata": "vd"\n}\n', m_stdout.getvalue())
144
79 def test_handle_args_dumps_all_instance_data(self):145 def test_handle_args_dumps_all_instance_data(self):
80 """When --all is specified query will dump all instance data vars."""146 """When --all is specified query will dump all instance data vars."""
81 write_file(self.instance_data, '{"my-var": "it worked"}')147 write_file(self.instance_data, '{"my-var": "it worked"}')
diff --git a/cloudinit/config/cc_disk_setup.py b/cloudinit/config/cc_disk_setup.py
index 943089e..29e192e 100644
--- a/cloudinit/config/cc_disk_setup.py
+++ b/cloudinit/config/cc_disk_setup.py
@@ -743,7 +743,7 @@ def assert_and_settle_device(device):
743 util.udevadm_settle()743 util.udevadm_settle()
744 if not os.path.exists(device):744 if not os.path.exists(device):
745 raise RuntimeError("Device %s did not exist and was not created "745 raise RuntimeError("Device %s did not exist and was not created "
746 "with a udevamd settle." % device)746 "with a udevadm settle." % device)
747747
748 # Whether or not the device existed above, it is possible that udev748 # Whether or not the device existed above, it is possible that udev
749 # events that would populate udev database (for reading by lsdname) have749 # events that would populate udev database (for reading by lsdname) have
diff --git a/cloudinit/config/cc_lxd.py b/cloudinit/config/cc_lxd.py
index 24a8ebe..71d13ed 100644
--- a/cloudinit/config/cc_lxd.py
+++ b/cloudinit/config/cc_lxd.py
@@ -89,7 +89,7 @@ def handle(name, cfg, cloud, log, args):
89 packages.append('lxd')89 packages.append('lxd')
9090
91 if init_cfg.get("storage_backend") == "zfs" and not util.which('zfs'):91 if init_cfg.get("storage_backend") == "zfs" and not util.which('zfs'):
92 packages.append('zfs')92 packages.append('zfsutils-linux')
9393
94 if len(packages):94 if len(packages):
95 try:95 try:
diff --git a/cloudinit/config/cc_resizefs.py b/cloudinit/config/cc_resizefs.py
index 2edddd0..076b9d5 100644
--- a/cloudinit/config/cc_resizefs.py
+++ b/cloudinit/config/cc_resizefs.py
@@ -197,6 +197,13 @@ def maybe_get_writable_device_path(devpath, info, log):
197 if devpath.startswith('gpt/'):197 if devpath.startswith('gpt/'):
198 log.debug('We have a gpt label - just go ahead')198 log.debug('We have a gpt label - just go ahead')
199 return devpath199 return devpath
200 # Alternatively, our device could simply be a name as returned by gpart,
201 # such as da0p3
202 if not devpath.startswith('/dev/') and not os.path.exists(devpath):
203 fulldevpath = '/dev/' + devpath.lstrip('/')
204 log.debug("'%s' doesn't appear to be a valid device path. Trying '%s'",
205 devpath, fulldevpath)
206 devpath = fulldevpath
200207
201 try:208 try:
202 statret = os.stat(devpath)209 statret = os.stat(devpath)
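The hunk above lets `cc_resizefs` accept bare device names as reported by gpart (e.g. `da0p3`) by prefixing `/dev/`. A standalone sketch of just that normalization (helper name hypothetical):

```python
import os.path


def maybe_prefix_dev(devpath):
    """Mirror the added logic: bare gpart-style names get a /dev/ prefix."""
    if not devpath.startswith('/dev/') and not os.path.exists(devpath):
        return '/dev/' + devpath.lstrip('/')
    return devpath


assert maybe_prefix_dev('da0p3') == '/dev/da0p3'
assert maybe_prefix_dev('/dev/sda1') == '/dev/sda1'
```

`lstrip('/')` means an input like `gpt/rootfs` (already handled earlier in the real function) would also normalize cleanly rather than producing a double slash.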
diff --git a/cloudinit/config/cc_set_passwords.py b/cloudinit/config/cc_set_passwords.py
index 5ef9737..4585e4d 100755
--- a/cloudinit/config/cc_set_passwords.py
+++ b/cloudinit/config/cc_set_passwords.py
@@ -160,7 +160,7 @@ def handle(_name, cfg, cloud, log, args):
160 hashed_users = []160 hashed_users = []
161 randlist = []161 randlist = []
162 users = []162 users = []
163 prog = re.compile(r'\$[1,2a,2y,5,6](\$.+){2}')163 prog = re.compile(r'\$(1|2a|2y|5|6)(\$.+){2}')
164 for line in plist:164 for line in plist:
165 u, p = line.split(':', 1)165 u, p = line.split(':', 1)
166 if prog.match(p) is not None and ":" not in p:166 if prog.match(p) is not None and ":" not in p:
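The regex change above fixes a subtle bug: `[1,2a,2y,5,6]` is a character class (any one of `1 , 2 a y 5 6`), so the old pattern could never match two-character scheme ids like bcrypt's `2y`. The alternation form matches the full id. A quick demonstration (hash values are illustrative, not real):

```python
import re

OLD = re.compile(r'\$[1,2a,2y,5,6](\$.+){2}')  # character class: single char
NEW = re.compile(r'\$(1|2a|2y|5|6)(\$.+){2}')  # alternation: full scheme id

bcrypt_hash = '$2y$10$abcdefghijklmnopqrstuv'
sha512_hash = '$6$salt$hashedvalue'

# The old pattern rejects valid bcrypt hashes ($2a/$2y)...
assert OLD.match(bcrypt_hash) is None
assert NEW.match(bcrypt_hash) is not None
# ...while both accept single-character ids such as $6 (sha-512).
assert OLD.match(sha512_hash) is not None
assert NEW.match(sha512_hash) is not None
```

This is exactly what the new `test_handle_on_chpasswd_list_parses_common_hashes` test below exercises with `$2y$` and `$6$` entries.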
diff --git a/cloudinit/config/cc_write_files.py b/cloudinit/config/cc_write_files.py
index 31d1db6..0b6546e 100644
--- a/cloudinit/config/cc_write_files.py
+++ b/cloudinit/config/cc_write_files.py
@@ -49,6 +49,10 @@ binary gzip data can be specified and will be decoded before being written.
49 ...49 ...
50 path: /bin/arch50 path: /bin/arch
51 permissions: '0555'51 permissions: '0555'
52 - content: |
53 15 * * * * root ship_logs
54 path: /etc/crontab
55 append: true
52"""56"""
5357
54import base6458import base64
@@ -113,7 +117,8 @@ def write_files(name, files):
113 contents = extract_contents(f_info.get('content', ''), extractions)117 contents = extract_contents(f_info.get('content', ''), extractions)
114 (u, g) = util.extract_usergroup(f_info.get('owner', DEFAULT_OWNER))118 (u, g) = util.extract_usergroup(f_info.get('owner', DEFAULT_OWNER))
115 perms = decode_perms(f_info.get('permissions'), DEFAULT_PERMS)119 perms = decode_perms(f_info.get('permissions'), DEFAULT_PERMS)
116 util.write_file(path, contents, mode=perms)120 omode = 'ab' if util.get_cfg_option_bool(f_info, 'append') else 'wb'
121 util.write_file(path, contents, omode=omode, mode=perms)
117 util.chownbyname(path, u, g)122 util.chownbyname(path, u, g)
118123
119124
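The new `append` option in `write_files` maps directly to the file open mode (`'ab'` vs the default truncating `'wb'`). A minimal sketch of that mode-selection behavior using plain file I/O in place of `util.write_file` (helper name hypothetical):

```python
import os
import tempfile


def write_entry(path, content, append=False):
    """Append uses 'ab'; the default keeps the old truncating 'wb' behavior."""
    mode = 'ab' if append else 'wb'
    with open(path, mode) as f:
        f.write(content.encode())


path = os.path.join(tempfile.mkdtemp(), 'crontab')
write_entry(path, '15 * * * * root ship_logs\n')
write_entry(path, '30 * * * * root rotate_logs\n', append=True)
with open(path) as f:
    lines = f.readlines()
assert lines == ['15 * * * * root ship_logs\n',
                 '30 * * * * root rotate_logs\n']
```

Without `append: true`, a second `write_files` entry for the same path silently replaces the file, which is why the docs example above targets `/etc/crontab`.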
diff --git a/cloudinit/config/tests/test_set_passwords.py b/cloudinit/config/tests/test_set_passwords.py
index b051ec8..a2ea5ec 100644
--- a/cloudinit/config/tests/test_set_passwords.py
+++ b/cloudinit/config/tests/test_set_passwords.py
@@ -68,4 +68,44 @@ class TestHandleSshPwauth(CiTestCase):
68 m_update.assert_called_with({optname: optval})68 m_update.assert_called_with({optname: optval})
69 m_subp.assert_not_called()69 m_subp.assert_not_called()
7070
71
72class TestSetPasswordsHandle(CiTestCase):
73 """Test cc_set_passwords.handle"""
74
75 with_logs = True
76
77 def test_handle_on_empty_config(self):
78 """handle logs that no password has changed when config is empty."""
79 cloud = self.tmp_cloud(distro='ubuntu')
80 setpass.handle(
81 'IGNORED', cfg={}, cloud=cloud, log=self.logger, args=[])
82 self.assertEqual(
83 "DEBUG: Leaving ssh config 'PasswordAuthentication' unchanged. "
84 'ssh_pwauth=None\n',
85 self.logs.getvalue())
86
87 @mock.patch(MODPATH + "util.subp")
88 def test_handle_on_chpasswd_list_parses_common_hashes(self, m_subp):
89 """handle parses command password hashes."""
90 cloud = self.tmp_cloud(distro='ubuntu')
91 valid_hashed_pwds = [
92 'root:$2y$10$8BQjxjVByHA/Ee.O1bCXtO8S7Y5WojbXWqnqYpUW.BrPx/'
93 'Dlew1Va',
94 'ubuntu:$6$5hOurLPO$naywm3Ce0UlmZg9gG2Fl9acWCVEoakMMC7dR52q'
95 'SDexZbrN9z8yHxhUM2b.sxpguSwOlbOQSW/HpXazGGx3oo1']
96 cfg = {'chpasswd': {'list': valid_hashed_pwds}}
97 with mock.patch(MODPATH + 'util.subp') as m_subp:
98 setpass.handle(
99 'IGNORED', cfg=cfg, cloud=cloud, log=self.logger, args=[])
100 self.assertIn(
101 'DEBUG: Handling input for chpasswd as list.',
102 self.logs.getvalue())
103 self.assertIn(
104 "DEBUG: Setting hashed password for ['root', 'ubuntu']",
105 self.logs.getvalue())
106 self.assertEqual(
107 [mock.call(['chpasswd', '-e'],
108 '\n'.join(valid_hashed_pwds) + '\n')],
109 m_subp.call_args_list)
110
71# vi: ts=4 expandtab111# vi: ts=4 expandtab
diff --git a/cloudinit/dhclient_hook.py b/cloudinit/dhclient_hook.py
index 7f02d7f..72b51b6 100644
--- a/cloudinit/dhclient_hook.py
+++ b/cloudinit/dhclient_hook.py
@@ -1,5 +1,8 @@
1# This file is part of cloud-init. See LICENSE file for license information.1# This file is part of cloud-init. See LICENSE file for license information.
22
3"""Run the dhclient hook to record network info."""
4
5import argparse
3import os6import os
47
5from cloudinit import atomic_helper8from cloudinit import atomic_helper
@@ -8,44 +11,75 @@ from cloudinit import stages
811
9LOG = logging.getLogger(__name__)12LOG = logging.getLogger(__name__)
1013
14NAME = "dhclient-hook"
15UP = "up"
16DOWN = "down"
17EVENTS = (UP, DOWN)
18
19
20def _get_hooks_dir():
21 i = stages.Init()
22 return os.path.join(i.paths.get_runpath(), 'dhclient.hooks')
23
24
25def _filter_env_vals(info):
26 """Given info (os.environ), return a dictionary with
27 lower case keys for each entry starting with DHCP4_ or new_."""
28 new_info = {}
29 for k, v in info.items():
30 if k.startswith("DHCP4_") or k.startswith("new_"):
31 key = (k.replace('DHCP4_', '').replace('new_', '')).lower()
32 new_info[key] = v
33 return new_info
34
35
36def run_hook(interface, event, data_d=None, env=None):
37 if event not in EVENTS:
38 raise ValueError("Unexpected event '%s'. Expected one of: %s" %
39 (event, EVENTS))
40 if data_d is None:
41 data_d = _get_hooks_dir()
42 if env is None:
43 env = os.environ
44 hook_file = os.path.join(data_d, interface + ".json")
45
46 if event == UP:
47 if not os.path.exists(data_d):
48 os.makedirs(data_d)
49 atomic_helper.write_json(hook_file, _filter_env_vals(env))
50 LOG.debug("Wrote dhclient options in %s", hook_file)
51 elif event == DOWN:
52 if os.path.exists(hook_file):
53 os.remove(hook_file)
54 LOG.debug("Removed dhclient options file %s", hook_file)
55
56
57def get_parser(parser=None):
58 if parser is None:
59 parser = argparse.ArgumentParser(prog=NAME, description=__doc__)
60 parser.add_argument(
61 "event", help='event taken on the interface', choices=EVENTS)
62 parser.add_argument(
63 "interface", help='the network interface being acted upon')
64 # cloud-init main uses 'action'
65 parser.set_defaults(action=(NAME, handle_args))
66 return parser
67
68
69def handle_args(name, args, data_d=None):
70 """Handle the Namespace args.
71 Takes 'name' as passed by cloud-init main. not used here."""
72 return run_hook(interface=args.interface, event=args.event, data_d=data_d)
73
74
75if __name__ == '__main__':
76 import sys
77 parser = get_parser()
78 args = parser.parse_args(args=sys.argv[1:])
79 return_value = handle_args(
80 NAME, args, data_d=os.environ.get('_CI_DHCP_HOOK_DATA_D'))
81 if return_value:
82 sys.exit(return_value)
1183
12class LogDhclient(object):
13
14 def __init__(self, cli_args):
15 self.hooks_dir = self._get_hooks_dir()
16 self.net_interface = cli_args.net_interface
17 self.net_action = cli_args.net_action
18 self.hook_file = os.path.join(self.hooks_dir,
19 self.net_interface + ".json")
20
21 @staticmethod
22 def _get_hooks_dir():
23 i = stages.Init()
24 return os.path.join(i.paths.get_runpath(), 'dhclient.hooks')
25
26 def check_hooks_dir(self):
27 if not os.path.exists(self.hooks_dir):
28 os.makedirs(self.hooks_dir)
29 else:
30 # If the action is down and the json file exists, we need to
31 # delete the file
32 if self.net_action is 'down' and os.path.exists(self.hook_file):
33 os.remove(self.hook_file)
34
35 @staticmethod
36 def get_vals(info):
37 new_info = {}
38 for k, v in info.items():
39 if k.startswith("DHCP4_") or k.startswith("new_"):
40 key = (k.replace('DHCP4_', '').replace('new_', '')).lower()
41 new_info[key] = v
42 return new_info
43
44 def record(self):
45 envs = os.environ
46 if self.hook_file is None:
47 return
48 atomic_helper.write_json(self.hook_file, self.get_vals(envs))
49 LOG.debug("Wrote dhclient options in %s", self.hook_file)
5084
51# vi: ts=4 expandtab85# vi: ts=4 expandtab
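The refactor above replaces the `LogDhclient` class with module-level functions; the core of it is `_filter_env_vals`, which picks the dhclient-provided environment variables out of `os.environ`. A standalone sketch (public name used here for illustration):

```python
def filter_env_vals(info):
    """Keep DHCP4_/new_ keys, strip the prefix, and lowercase the name."""
    new_info = {}
    for k, v in info.items():
        if k.startswith('DHCP4_') or k.startswith('new_'):
            new_info[k.replace('DHCP4_', '').replace('new_', '').lower()] = v
    return new_info


env = {'DHCP4_IP_ADDRESS': '10.0.0.5',
       'new_subnet_mask': '255.255.255.0',
       'PATH': '/usr/bin'}  # unrelated vars are dropped
assert filter_env_vals(env) == {'ip_address': '10.0.0.5',
                                'subnet_mask': '255.255.255.0'}
```

On an `up` event `run_hook` writes this filtered dict as JSON to `<run_dir>/dhclient.hooks/<interface>.json`; on `down` it removes the file.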
diff --git a/cloudinit/handlers/jinja_template.py b/cloudinit/handlers/jinja_template.py
index 3fa4097..ce3accf 100644
--- a/cloudinit/handlers/jinja_template.py
+++ b/cloudinit/handlers/jinja_template.py
@@ -1,5 +1,6 @@
1# This file is part of cloud-init. See LICENSE file for license information.1# This file is part of cloud-init. See LICENSE file for license information.
22
3from errno import EACCES
3import os4import os
4import re5import re
56
@@ -76,7 +77,14 @@ def render_jinja_payload_from_file(
76 raise RuntimeError(77 raise RuntimeError(
77 'Cannot render jinja template vars. Instance data not yet'78 'Cannot render jinja template vars. Instance data not yet'
78 ' present at %s' % instance_data_file)79 ' present at %s' % instance_data_file)
79 instance_data = load_json(load_file(instance_data_file))80 try:
81 instance_data = load_json(load_file(instance_data_file))
82 except (IOError, OSError) as e:
83 if e.errno == EACCES:
84 raise RuntimeError(
85 'Cannot render jinja template vars. No read permission on'
86 " '%s'. Try sudo" % instance_data_file)
87
80 rendered_payload = render_jinja_payload(88 rendered_payload = render_jinja_payload(
81 payload, payload_fn, instance_data, debug)89 payload, payload_fn, instance_data, debug)
82 if not rendered_payload:90 if not rendered_payload:
diff --git a/cloudinit/net/__init__.py b/cloudinit/net/__init__.py
index f83d368..3642fb1 100644
--- a/cloudinit/net/__init__.py
+++ b/cloudinit/net/__init__.py
@@ -12,6 +12,7 @@ import re
1212
13from cloudinit.net.network_state import mask_to_net_prefix13from cloudinit.net.network_state import mask_to_net_prefix
14from cloudinit import util14from cloudinit import util
15from cloudinit.url_helper import UrlError, readurl
1516
16LOG = logging.getLogger(__name__)17LOG = logging.getLogger(__name__)
17SYS_CLASS_NET = "/sys/class/net/"18SYS_CLASS_NET = "/sys/class/net/"
@@ -612,7 +613,8 @@ def get_interfaces():
612 Bridges and any devices that have a 'stolen' mac are excluded."""613 Bridges and any devices that have a 'stolen' mac are excluded."""
613 ret = []614 ret = []
614 devs = get_devicelist()615 devs = get_devicelist()
615 empty_mac = '00:00:00:00:00:00'616 # 16 somewhat arbitrarily chosen. Normally a mac is 6 '00:' tokens.
617 zero_mac = ':'.join(('00',) * 16)
616 for name in devs:618 for name in devs:
617 if not interface_has_own_mac(name):619 if not interface_has_own_mac(name):
618 continue620 continue
@@ -624,7 +626,8 @@ def get_interfaces():
624 # some devices may not have a mac (tun0)626 # some devices may not have a mac (tun0)
625 if not mac:627 if not mac:
626 continue628 continue
627 if mac == empty_mac and name != 'lo':629 # skip nics that have no mac (00:00....)
630 if name != 'lo' and mac == zero_mac[:len(mac)]:
628 continue631 continue
629 ret.append((name, mac, device_driver(name), device_devid(name)))632 ret.append((name, mac, device_driver(name), device_devid(name)))
630 return ret633 return ret
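The hunk above generalizes the all-zero MAC check with a prefix comparison so it also covers hardware addresses longer than the usual 6 octets. A quick standalone illustration of the trick (helper name hypothetical):

```python
# 16 '00' tokens, somewhat arbitrary; a normal ethernet mac is only 6.
zero_mac = ':'.join(('00',) * 16)


def is_zero_mac(mac):
    """Prefix comparison: matches any all-zero address up to 16 octets."""
    return mac == zero_mac[:len(mac)]


assert is_zero_mac('00:00:00:00:00:00')        # normal 6-octet mac
assert not is_zero_mac('00:11:22:33:44:55')
assert is_zero_mac(':'.join(('00',) * 8))      # longer zeroed hw address
```

Slicing `zero_mac` to `len(mac)` lands on a token boundary for any colon-separated octet string, so one constant handles every supported length.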
@@ -645,16 +648,36 @@ def get_ib_hwaddrs_by_interface():
     return ret
 
 
+def has_url_connectivity(url):
+    """Return true when the instance has access to the provided URL
+
+    Logs a warning if url is not the expected format.
+    """
+    if not any([url.startswith('http://'), url.startswith('https://')]):
+        LOG.warning(
+            "Ignoring connectivity check. Expected URL beginning with http*://"
+            " received '%s'", url)
+        return False
+    try:
+        readurl(url, timeout=5)
+    except UrlError:
+        return False
+    return True
+
+
 class EphemeralIPv4Network(object):
     """Context manager which sets up temporary static network configuration.
 
-    No operations are performed if the provided interface is already connected.
+    No operations are performed if the provided interface already has the
+    specified configuration.
+    This can be verified with the connectivity_url.
     If unconnected, bring up the interface with valid ip, prefix and broadcast.
     If router is provided setup a default route for that interface. Upon
     context exit, clean up the interface leaving no configuration behind.
     """
 
-    def __init__(self, interface, ip, prefix_or_mask, broadcast, router=None):
+    def __init__(self, interface, ip, prefix_or_mask, broadcast, router=None,
+                 connectivity_url=None):
         """Setup context manager and validate call signature.
 
         @param interface: Name of the network interface to bring up.
@@ -663,6 +686,8 @@ class EphemeralIPv4Network(object):
             prefix.
         @param broadcast: Broadcast address for the IPv4 network.
         @param router: Optionally the default gateway IP.
+        @param connectivity_url: Optionally, a URL to verify if a usable
+            connection already exists.
         """
         if not all([interface, ip, prefix_or_mask, broadcast]):
             raise ValueError(
@@ -673,6 +698,8 @@ class EphemeralIPv4Network(object):
         except ValueError as e:
             raise ValueError(
                 'Cannot setup network: {0}'.format(e))
+
+        self.connectivity_url = connectivity_url
         self.interface = interface
         self.ip = ip
         self.broadcast = broadcast
@@ -681,6 +708,13 @@ class EphemeralIPv4Network(object):
 
     def __enter__(self):
         """Perform ephemeral network setup if interface is not connected."""
+        if self.connectivity_url:
+            if has_url_connectivity(self.connectivity_url):
+                LOG.debug(
+                    'Skip ephemeral network setup, instance has connectivity'
+                    ' to %s', self.connectivity_url)
+                return
+
         self._bringup_device()
         if self.router:
             self._bringup_router()
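The `has_url_connectivity` helper added in this file uses cloud-init's own `readurl`/`UrlError`. As a rough, stdlib-only approximation of the same contract (reject non-http(s) URLs with a warning, treat any fetch error as "no connectivity"), one might sketch:

```python
import logging
from urllib import error, request

LOG = logging.getLogger(__name__)


def has_url_connectivity(url, timeout=5):
    """Return True when a GET of url succeeds within timeout seconds.

    Approximation of the helper above; cloud-init's real version goes
    through its url_helper module rather than urllib directly.
    """
    if not url.startswith(('http://', 'https://')):
        LOG.warning(
            "Ignoring connectivity check. Expected URL beginning with"
            " http*:// received '%s'", url)
        return False
    try:
        request.urlopen(url, timeout=timeout)
    except (error.URLError, OSError):
        # Any failure (DNS, refused, HTTP error, timeout) means no usable
        # connectivity for our purposes.
        return False
    return True
```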
diff --git a/cloudinit/net/dhcp.py b/cloudinit/net/dhcp.py
index 12cf509..c98a97c 100644
--- a/cloudinit/net/dhcp.py
+++ b/cloudinit/net/dhcp.py
@@ -9,9 +9,11 @@ import logging
 import os
 import re
 import signal
+import time
 
 from cloudinit.net import (
-    EphemeralIPv4Network, find_fallback_nic, get_devicelist)
+    EphemeralIPv4Network, find_fallback_nic, get_devicelist,
+    has_url_connectivity)
 from cloudinit.net.network_state import mask_and_ipv4_to_bcast_addr as bcip
 from cloudinit import temp_utils
 from cloudinit import util
@@ -37,37 +39,69 @@ class NoDHCPLeaseError(Exception):
 
 
 class EphemeralDHCPv4(object):
-    def __init__(self, iface=None):
+    def __init__(self, iface=None, connectivity_url=None):
         self.iface = iface
         self._ephipv4 = None
+        self.lease = None
+        self.connectivity_url = connectivity_url
 
     def __enter__(self):
+        """Setup sandboxed dhcp context, unless connectivity_url can already be
+        reached."""
+        if self.connectivity_url:
+            if has_url_connectivity(self.connectivity_url):
+                LOG.debug(
+                    'Skip ephemeral DHCP setup, instance has connectivity'
+                    ' to %s', self.connectivity_url)
+                return
+        return self.obtain_lease()
+
+    def __exit__(self, excp_type, excp_value, excp_traceback):
+        """Teardown sandboxed dhcp context."""
+        self.clean_network()
+
+    def clean_network(self):
+        """Exit _ephipv4 context to teardown of ip configuration performed."""
+        if self.lease:
+            self.lease = None
+        if not self._ephipv4:
+            return
+        self._ephipv4.__exit__(None, None, None)
+
+    def obtain_lease(self):
+        """Perform dhcp discovery in a sandboxed environment if possible.
+
+        @return: A dict representing dhcp options on the most recent lease
+            obtained from the dhclient discovery if run, otherwise an error
+            is raised.
+
+        @raises: NoDHCPLeaseError if no leases could be obtained.
+        """
+        if self.lease:
+            return self.lease
         try:
             leases = maybe_perform_dhcp_discovery(self.iface)
         except InvalidDHCPLeaseFileError:
             raise NoDHCPLeaseError()
         if not leases:
             raise NoDHCPLeaseError()
-        lease = leases[-1]
+        self.lease = leases[-1]
         LOG.debug("Received dhcp lease on %s for %s/%s",
-                  lease['interface'], lease['fixed-address'],
-                  lease['subnet-mask'])
+                  self.lease['interface'], self.lease['fixed-address'],
+                  self.lease['subnet-mask'])
         nmap = {'interface': 'interface', 'ip': 'fixed-address',
                 'prefix_or_mask': 'subnet-mask',
                 'broadcast': 'broadcast-address',
                 'router': 'routers'}
-        kwargs = dict([(k, lease.get(v)) for k, v in nmap.items()])
+        kwargs = dict([(k, self.lease.get(v)) for k, v in nmap.items()])
         if not kwargs['broadcast']:
             kwargs['broadcast'] = bcip(kwargs['prefix_or_mask'], kwargs['ip'])
+        if self.connectivity_url:
+            kwargs['connectivity_url'] = self.connectivity_url
         ephipv4 = EphemeralIPv4Network(**kwargs)
         ephipv4.__enter__()
         self._ephipv4 = ephipv4
-        return lease
-
-    def __exit__(self, excp_type, excp_value, excp_traceback):
-        if not self._ephipv4:
-            return
-        self._ephipv4.__exit__(excp_type, excp_value, excp_traceback)
+        return self.lease
 
 
 def maybe_perform_dhcp_discovery(nic=None):
@@ -94,7 +128,9 @@ def maybe_perform_dhcp_discovery(nic=None):
     if not dhclient_path:
         LOG.debug('Skip dhclient configuration: No dhclient command found.')
         return []
-    with temp_utils.tempdir(prefix='cloud-init-dhcp-', needs_exe=True) as tdir:
+    with temp_utils.tempdir(rmtree_ignore_errors=True,
+                            prefix='cloud-init-dhcp-',
+                            needs_exe=True) as tdir:
         # Use /var/tmp because /run/cloud-init/tmp is mounted noexec
         return dhcp_discovery(dhclient_path, nic, tdir)
 
@@ -162,24 +198,39 @@ def dhcp_discovery(dhclient_cmd_path, interface, cleandir):
            '-pf', pid_file, interface, '-sf', '/bin/true']
     util.subp(cmd, capture=True)
 
-    # dhclient doesn't write a pid file until after it forks when it gets a
-    # proper lease response. Since cleandir is a temp directory that gets
-    # removed, we need to wait for that pidfile creation before the
-    # cleandir is removed, otherwise we get FileNotFound errors.
+    # Wait for pid file and lease file to appear, and for the process
+    # named by the pid file to daemonize (have pid 1 as its parent). If we
+    # try to read the lease file before daemonization happens, we might try
+    # to read it before the dhclient has actually written it. We also have
+    # to wait until the dhclient has become a daemon so we can be sure to
+    # kill the correct process, thus freeing cleandir to be deleted back
+    # up the callstack.
     missing = util.wait_for_files(
         [pid_file, lease_file], maxwait=5, naplen=0.01)
     if missing:
         LOG.warning("dhclient did not produce expected files: %s",
                     ', '.join(os.path.basename(f) for f in missing))
         return []
-    pid_content = util.load_file(pid_file).strip()
-    try:
-        pid = int(pid_content)
-    except ValueError:
-        LOG.debug(
-            "pid file contains non-integer content '%s'", pid_content)
-    else:
-        os.kill(pid, signal.SIGKILL)
+
+    ppid = 'unknown'
+    for _ in range(0, 1000):
+        pid_content = util.load_file(pid_file).strip()
+        try:
+            pid = int(pid_content)
+        except ValueError:
+            pass
+        else:
+            ppid = util.get_proc_ppid(pid)
+            if ppid == 1:
+                LOG.debug('killing dhclient with pid=%s', pid)
+                os.kill(pid, signal.SIGKILL)
+                return parse_dhcp_lease_file(lease_file)
+        time.sleep(0.01)
+
+    LOG.error(
+        'dhclient(pid=%s, parentpid=%s) failed to daemonize after %s seconds',
+        pid_content, ppid, 0.01 * 1000
+    )
     return parse_dhcp_lease_file(lease_file)
 
 
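The polling loop above treats "parent pid is 1" as the signal that dhclient has daemonized, via `util.get_proc_ppid(pid)`. On Linux the parent pid can be read from `/proc/<pid>/stat`; a hedged sketch of how such a helper might parse it (the `parse_ppid` name is illustrative, not cloud-init's API):

```python
def parse_ppid(stat_content):
    """Return the parent pid from the content of /proc/<pid>/stat.

    The field layout is: pid (comm) state ppid ...  The comm field is
    parenthesized and may itself contain spaces or ')', so split on the
    LAST ')' instead of naively splitting on whitespace.
    """
    _, _, rest = stat_content.rpartition(')')
    fields = rest.split()
    return int(fields[1])  # fields[0] is state, fields[1] is ppid
```

`dhcp_discovery` polls this value until it becomes 1, meaning dhclient has forked and been reparented to init, at which point it is safe to kill and to let the sandbox directory be removed.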
diff --git a/cloudinit/net/eni.py b/cloudinit/net/eni.py
index c6f631a..6423632 100644
--- a/cloudinit/net/eni.py
+++ b/cloudinit/net/eni.py
@@ -371,22 +371,23 @@ class Renderer(renderer.Renderer):
             'gateway': 'gw',
             'metric': 'metric',
         }
+
+        default_gw = ''
         if route['network'] == '0.0.0.0' and route['netmask'] == '0.0.0.0':
-            default_gw = " default gw %s" % route['gateway']
-            content.append(up + default_gw + or_true)
-            content.append(down + default_gw + or_true)
+            default_gw = ' default'
         elif route['network'] == '::' and route['prefix'] == 0:
-            # ipv6!
-            default_gw = " -A inet6 default gw %s" % route['gateway']
-            content.append(up + default_gw + or_true)
-            content.append(down + default_gw + or_true)
-        else:
-            route_line = ""
-            for k in ['network', 'netmask', 'gateway', 'metric']:
-                if k in route:
-                    route_line += " %s %s" % (mapping[k], route[k])
-            content.append(up + route_line + or_true)
-            content.append(down + route_line + or_true)
+            default_gw = ' -A inet6 default'
+
+        route_line = ''
+        for k in ['network', 'netmask', 'gateway', 'metric']:
+            if default_gw and k in ['network', 'netmask']:
+                continue
+            if k == 'gateway':
+                route_line += '%s %s %s' % (default_gw, mapping[k], route[k])
+            elif k in route:
+                route_line += ' %s %s' % (mapping[k], route[k])
+        content.append(up + route_line + or_true)
+        content.append(down + route_line + or_true)
         return content
 
     def _render_iface(self, iface, render_hwaddress=False):
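The eni refactor above folds default and non-default routes into one line-building loop, so a metric can now be emitted for default routes too. A standalone sketch of the resulting logic (function name hypothetical, `mapping` copied from the renderer):

```python
def build_route_line(route):
    """Build the argument string for an ifupdown 'route add' command.

    Default routes (0.0.0.0/0 or ::/0) collapse network/netmask into the
    literal 'default' keyword, as the refactored renderer does.
    """
    mapping = {'network': '-net', 'netmask': 'netmask',
               'gateway': 'gw', 'metric': 'metric'}
    default_gw = ''
    if route.get('network') == '0.0.0.0' and route.get('netmask') == '0.0.0.0':
        default_gw = ' default'
    elif route.get('network') == '::' and route.get('prefix') == 0:
        default_gw = ' -A inet6 default'

    line = ''
    for k in ['network', 'netmask', 'gateway', 'metric']:
        if default_gw and k in ('network', 'netmask'):
            continue  # 'default' already names the destination
        if k == 'gateway' and k in route:
            line += '%s %s %s' % (default_gw, mapping[k], route[k])
        elif k in route:
            line += ' %s %s' % (mapping[k], route[k])
    return line
```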
diff --git a/cloudinit/net/netplan.py b/cloudinit/net/netplan.py
index bc1087f..21517fd 100644
--- a/cloudinit/net/netplan.py
+++ b/cloudinit/net/netplan.py
@@ -114,13 +114,13 @@ def _extract_addresses(config, entry, ifname):
             for route in subnet.get('routes', []):
                 to_net = "%s/%s" % (route.get('network'),
                                     route.get('prefix'))
-                route = {
+                new_route = {
                     'via': route.get('gateway'),
                     'to': to_net,
                 }
                 if 'metric' in route:
-                    route.update({'metric': route.get('metric', 100)})
-                routes.append(route)
+                    new_route.update({'metric': route.get('metric', 100)})
+                routes.append(new_route)
 
             addresses.append(addr)
 
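The netplan rename to `new_route` above fixes a classic loop-variable shadowing bug: once `route` was rebound to the freshly built dict, the later `'metric' in route` test could only ever see the new dict, which never has a `metric` key, so metrics were silently dropped. A minimal reproduction with hypothetical helper names:

```python
def buggy_routes(subnet_routes):
    routes = []
    for route in subnet_routes:
        to_net = '%s/%s' % (route.get('network'), route.get('prefix'))
        route = {'via': route.get('gateway'), 'to': to_net}  # shadows input!
        if 'metric' in route:  # always False from here on
            route.update({'metric': route.get('metric', 100)})
        routes.append(route)
    return routes


def fixed_routes(subnet_routes):
    routes = []
    for route in subnet_routes:
        to_net = '%s/%s' % (route.get('network'), route.get('prefix'))
        new_route = {'via': route.get('gateway'), 'to': to_net}
        if 'metric' in route:  # still inspects the original route dict
            new_route.update({'metric': route.get('metric', 100)})
        routes.append(new_route)
    return routes
```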
diff --git a/cloudinit/net/sysconfig.py b/cloudinit/net/sysconfig.py
index 9c16d3a..fd8e501 100644
--- a/cloudinit/net/sysconfig.py
+++ b/cloudinit/net/sysconfig.py
@@ -10,11 +10,14 @@ from cloudinit.distros.parsers import resolv_conf
 from cloudinit import log as logging
 from cloudinit import util
 
+from configobj import ConfigObj
+
 from . import renderer
 from .network_state import (
     is_ipv6_addr, net_prefix_to_ipv4_mask, subnet_is_ipv6)
 
 LOG = logging.getLogger(__name__)
+NM_CFG_FILE = "/etc/NetworkManager/NetworkManager.conf"
 
 
 def _make_header(sep='#'):
@@ -46,6 +49,24 @@ def _quote_value(value):
         return value
 
 
+def enable_ifcfg_rh(path):
+    """Add ifcfg-rh to NetworkManager.cfg plugins if main section is present"""
+    config = ConfigObj(path)
+    if 'main' in config:
+        if 'plugins' in config['main']:
+            if 'ifcfg-rh' in config['main']['plugins']:
+                return
+        else:
+            config['main']['plugins'] = []
+
+        if isinstance(config['main']['plugins'], list):
+            config['main']['plugins'].append('ifcfg-rh')
+        else:
+            config['main']['plugins'] = [config['main']['plugins'], 'ifcfg-rh']
+        config.write()
+        LOG.debug('Enabled ifcfg-rh NetworkManager plugins')
+
+
 class ConfigMap(object):
     """Sysconfig like dictionary object."""
 
@@ -156,13 +177,23 @@ class Route(ConfigMap):
                                            _quote_value(gateway_value)))
                     buf.write("%s=%s\n" % ('NETMASK' + str(reindex),
                                            _quote_value(netmask_value)))
+                    metric_key = 'METRIC' + index
+                    if metric_key in self._conf:
+                        metric_value = str(self._conf['METRIC' + index])
+                        buf.write("%s=%s\n" % ('METRIC' + str(reindex),
+                                               _quote_value(metric_value)))
                 elif proto == "ipv6" and self.is_ipv6_route(address_value):
                     netmask_value = str(self._conf['NETMASK' + index])
                     gateway_value = str(self._conf['GATEWAY' + index])
-                    buf.write("%s/%s via %s dev %s\n" % (address_value,
-                                                         netmask_value,
-                                                         gateway_value,
-                                                         self._route_name))
+                    metric_value = (
+                        'metric ' + str(self._conf['METRIC' + index])
+                        if 'METRIC' + index in self._conf else '')
+                    buf.write(
+                        "%s/%s via %s %s dev %s\n" % (address_value,
+                                                      netmask_value,
+                                                      gateway_value,
+                                                      metric_value,
+                                                      self._route_name))
 
         return buf.getvalue()
 
@@ -370,6 +401,9 @@ class Renderer(renderer.Renderer):
                 else:
                     iface_cfg['GATEWAY'] = subnet['gateway']
 
+                if 'metric' in subnet:
+                    iface_cfg['METRIC'] = subnet['metric']
+
                 if 'dns_search' in subnet:
                     iface_cfg['DOMAIN'] = ' '.join(subnet['dns_search'])
 
@@ -414,15 +448,19 @@ class Renderer(renderer.Renderer):
                     else:
                         iface_cfg['GATEWAY'] = route['gateway']
                     route_cfg.has_set_default_ipv4 = True
+                    if 'metric' in route:
+                        iface_cfg['METRIC'] = route['metric']
 
                 else:
                     gw_key = 'GATEWAY%s' % route_cfg.last_idx
                     nm_key = 'NETMASK%s' % route_cfg.last_idx
                     addr_key = 'ADDRESS%s' % route_cfg.last_idx
+                    metric_key = 'METRIC%s' % route_cfg.last_idx
                     route_cfg.last_idx += 1
                     # add default routes only to ifcfg files, not
                     # to route-* or route6-*
                     for (old_key, new_key) in [('gateway', gw_key),
+                                               ('metric', metric_key),
                                                ('netmask', nm_key),
                                                ('network', addr_key)]:
                         if old_key in route:
@@ -519,6 +557,8 @@ class Renderer(renderer.Renderer):
             content.add_nameserver(nameserver)
         for searchdomain in network_state.dns_searchdomains:
             content.add_search_domain(searchdomain)
+        if not str(content):
+            return None
         header = _make_header(';')
         content_str = str(content)
         if not content_str.startswith(header):
@@ -628,7 +668,8 @@ class Renderer(renderer.Renderer):
             dns_path = util.target_path(target, self.dns_path)
             resolv_content = self._render_dns(network_state,
                                               existing_dns_path=dns_path)
-            util.write_file(dns_path, resolv_content, file_mode)
+            if resolv_content:
+                util.write_file(dns_path, resolv_content, file_mode)
         if self.networkmanager_conf_path:
             nm_conf_path = util.target_path(target,
                                             self.networkmanager_conf_path)
@@ -640,6 +681,8 @@ class Renderer(renderer.Renderer):
             netrules_content = self._render_persistent_net(network_state)
             netrules_path = util.target_path(target, self.netrules_path)
             util.write_file(netrules_path, netrules_content, file_mode)
+        if available_nm(target=target):
+            enable_ifcfg_rh(util.target_path(target, path=NM_CFG_FILE))
 
         sysconfig_path = util.target_path(target, templates.get('control'))
         # Distros configuring /etc/sysconfig/network as a file e.g. Centos
@@ -654,6 +697,13 @@ class Renderer(renderer.Renderer):
 
 
 def available(target=None):
+    sysconfig = available_sysconfig(target=target)
+    nm = available_nm(target=target)
+
+    return any([nm, sysconfig])
+
+
+def available_sysconfig(target=None):
     expected = ['ifup', 'ifdown']
     search = ['/sbin', '/usr/sbin']
     for p in expected:
@@ -669,4 +719,10 @@ def available(target=None):
             return True
 
 
+def available_nm(target=None):
+    if not os.path.isfile(util.target_path(target, path=NM_CFG_FILE)):
+        return False
+    return True
+
+
 # vi: ts=4 expandtab
diff --git a/cloudinit/net/tests/test_dhcp.py b/cloudinit/net/tests/test_dhcp.py
index db25b6f..79e8842 100644
--- a/cloudinit/net/tests/test_dhcp.py
+++ b/cloudinit/net/tests/test_dhcp.py
@@ -1,15 +1,17 @@
 # This file is part of cloud-init. See LICENSE file for license information.
 
+import httpretty
 import os
 import signal
 from textwrap import dedent
 
+import cloudinit.net as net
 from cloudinit.net.dhcp import (
     InvalidDHCPLeaseFileError, maybe_perform_dhcp_discovery,
     parse_dhcp_lease_file, dhcp_discovery, networkd_load_leases)
 from cloudinit.util import ensure_file, write_file
 from cloudinit.tests.helpers import (
-    CiTestCase, mock, populate_dir, wrap_and_call)
+    CiTestCase, HttprettyTestCase, mock, populate_dir, wrap_and_call)
 
 
 class TestParseDHCPLeasesFile(CiTestCase):
@@ -143,16 +145,20 @@ class TestDHCPDiscoveryClean(CiTestCase):
              'subnet-mask': '255.255.255.0', 'routers': '192.168.2.1'}],
             dhcp_discovery(dhclient_script, 'eth9', tmpdir))
         self.assertIn(
-            "pid file contains non-integer content ''", self.logs.getvalue())
+            "dhclient(pid=, parentpid=unknown) failed "
+            "to daemonize after 10.0 seconds",
+            self.logs.getvalue())
         m_kill.assert_not_called()
 
+    @mock.patch('cloudinit.net.dhcp.util.get_proc_ppid')
     @mock.patch('cloudinit.net.dhcp.os.kill')
     @mock.patch('cloudinit.net.dhcp.util.wait_for_files')
     @mock.patch('cloudinit.net.dhcp.util.subp')
     def test_dhcp_discovery_run_in_sandbox_waits_on_lease_and_pid(self,
                                                                   m_subp,
                                                                   m_wait,
-                                                                  m_kill):
+                                                                  m_kill,
+                                                                  m_getppid):
         """dhcp_discovery waits for the presence of pidfile and dhcp.leases."""
         tmpdir = self.tmp_dir()
         dhclient_script = os.path.join(tmpdir, 'dhclient.orig')
@@ -162,6 +168,7 @@ class TestDHCPDiscoveryClean(CiTestCase):
         pidfile = self.tmp_path('dhclient.pid', tmpdir)
         leasefile = self.tmp_path('dhcp.leases', tmpdir)
         m_wait.return_value = [pidfile]  # Return the missing pidfile wait for
+        m_getppid.return_value = 1  # Indicate that dhclient has daemonized
         self.assertEqual([], dhcp_discovery(dhclient_script, 'eth9', tmpdir))
         self.assertEqual(
             mock.call([pidfile, leasefile], maxwait=5, naplen=0.01),
@@ -171,9 +178,10 @@ class TestDHCPDiscoveryClean(CiTestCase):
             self.logs.getvalue())
         m_kill.assert_not_called()
 
+    @mock.patch('cloudinit.net.dhcp.util.get_proc_ppid')
     @mock.patch('cloudinit.net.dhcp.os.kill')
     @mock.patch('cloudinit.net.dhcp.util.subp')
-    def test_dhcp_discovery_run_in_sandbox(self, m_subp, m_kill):
+    def test_dhcp_discovery_run_in_sandbox(self, m_subp, m_kill, m_getppid):
         """dhcp_discovery brings up the interface and runs dhclient.
 
         It also returns the parsed dhcp.leases file generated in the sandbox.
@@ -195,6 +203,7 @@ class TestDHCPDiscoveryClean(CiTestCase):
         pid_file = os.path.join(tmpdir, 'dhclient.pid')
         my_pid = 1
         write_file(pid_file, "%d\n" % my_pid)
+        m_getppid.return_value = 1  # Indicate that dhclient has daemonized
 
         self.assertItemsEqual(
             [{'interface': 'eth9', 'fixed-address': '192.168.2.74',
@@ -321,3 +330,37 @@ class TestSystemdParseLeases(CiTestCase):
                 '9': self.lxd_lease})
         self.assertEqual({'1': self.azure_parsed, '9': self.lxd_parsed},
                          networkd_load_leases(self.lease_d))
+
+
+class TestEphemeralDhcpNoNetworkSetup(HttprettyTestCase):
+
+    @mock.patch('cloudinit.net.dhcp.maybe_perform_dhcp_discovery')
+    def test_ephemeral_dhcp_no_network_if_url_connectivity(self, m_dhcp):
+        """No EphemeralDhcp4 network setup when connectivity_url succeeds."""
+        url = 'http://example.org/index.html'
+
+        httpretty.register_uri(httpretty.GET, url)
+        with net.dhcp.EphemeralDHCPv4(connectivity_url=url) as lease:
+            self.assertIsNone(lease)
+        # Ensure that no teardown happens:
+        m_dhcp.assert_not_called()
+
+    @mock.patch('cloudinit.net.dhcp.util.subp')
+    @mock.patch('cloudinit.net.dhcp.maybe_perform_dhcp_discovery')
+    def test_ephemeral_dhcp_setup_network_if_url_connectivity(
+            self, m_dhcp, m_subp):
+        """No EphemeralDhcp4 network setup when connectivity_url succeeds."""
+        url = 'http://example.org/index.html'
+        fake_lease = {
+            'interface': 'eth9', 'fixed-address': '192.168.2.2',
+            'subnet-mask': '255.255.0.0'}
+        m_dhcp.return_value = [fake_lease]
+        m_subp.return_value = ('', '')
+
+        httpretty.register_uri(httpretty.GET, url, body={}, status=404)
+        with net.dhcp.EphemeralDHCPv4(connectivity_url=url) as lease:
+            self.assertEqual(fake_lease, lease)
+        # Ensure that dhcp discovery occurs
+        m_dhcp.called_once_with()
+
+# vi: ts=4 expandtab
diff --git a/cloudinit/net/tests/test_init.py b/cloudinit/net/tests/test_init.py
index 58e0a59..f55c31e 100644
--- a/cloudinit/net/tests/test_init.py
+++ b/cloudinit/net/tests/test_init.py
@@ -2,14 +2,16 @@
 
 import copy
 import errno
+import httpretty
 import mock
 import os
+import requests
 import textwrap
 import yaml
 
 import cloudinit.net as net
 from cloudinit.util import ensure_file, write_file, ProcessExecutionError
-from cloudinit.tests.helpers import CiTestCase
+from cloudinit.tests.helpers import CiTestCase, HttprettyTestCase
 
 
 class TestSysDevPath(CiTestCase):
@@ -458,6 +460,22 @@ class TestEphemeralIPV4Network(CiTestCase):
         self.assertEqual(expected_setup_calls, m_subp.call_args_list)
         m_subp.assert_has_calls(expected_teardown_calls)
 
+    @mock.patch('cloudinit.net.readurl')
+    def test_ephemeral_ipv4_no_network_if_url_connectivity(
+            self, m_readurl, m_subp):
+        """No network setup is performed if we can successfully connect to
+        connectivity_url."""
+        params = {
+            'interface': 'eth0', 'ip': '192.168.2.2',
+            'prefix_or_mask': '255.255.255.0', 'broadcast': '192.168.2.255',
+            'connectivity_url': 'http://example.org/index.html'}
+
+        with net.EphemeralIPv4Network(**params):
+            self.assertEqual([mock.call('http://example.org/index.html',
+                                        timeout=5)], m_readurl.call_args_list)
+        # Ensure that no teardown happens:
+        m_subp.assert_has_calls([])
+
     def test_ephemeral_ipv4_network_noop_when_configured(self, m_subp):
         """EphemeralIPv4Network handles exception when address is setup.
 
@@ -619,3 +637,35 @@ class TestApplyNetworkCfgNames(CiTestCase):
     def test_apply_v2_renames_raises_runtime_error_on_unknown_version(self):
         with self.assertRaises(RuntimeError):
             net.apply_network_config_names(yaml.load("version: 3"))
+
+
+class TestHasURLConnectivity(HttprettyTestCase):
+
+    def setUp(self):
+        super(TestHasURLConnectivity, self).setUp()
+        self.url = 'http://fake/'
+        self.kwargs = {'allow_redirects': True, 'timeout': 5.0}
+
+    @mock.patch('cloudinit.net.readurl')
+    def test_url_timeout_on_connectivity_check(self, m_readurl):
+        """A timeout of 5 seconds is provided when reading a url."""
+        self.assertTrue(
+            net.has_url_connectivity(self.url), 'Expected True on url connect')
+
+    def test_true_on_url_connectivity_success(self):
+        httpretty.register_uri(httpretty.GET, self.url)
+        self.assertTrue(
+            net.has_url_connectivity(self.url), 'Expected True on url connect')
+
+    @mock.patch('requests.Session.request')
+    def test_true_on_url_connectivity_timeout(self, m_request):
+        """A timeout raised accessing the url will return False."""
+        m_request.side_effect = requests.Timeout('Fake Connection Timeout')
+        self.assertFalse(
+            net.has_url_connectivity(self.url),
+            'Expected False on url timeout')
+
+    def test_true_on_url_connectivity_failure(self):
+        httpretty.register_uri(httpretty.GET, self.url, body={}, status=404)
+        self.assertFalse(
+            net.has_url_connectivity(self.url), 'Expected False on url fail')
diff --git a/cloudinit/sources/DataSourceAliYun.py b/cloudinit/sources/DataSourceAliYun.py
index 858e082..45cc9f0 100644
--- a/cloudinit/sources/DataSourceAliYun.py
+++ b/cloudinit/sources/DataSourceAliYun.py
@@ -1,7 +1,5 @@
 # This file is part of cloud-init. See LICENSE file for license information.
 
-import os
-
 from cloudinit import sources
 from cloudinit.sources import DataSourceEc2 as EC2
 from cloudinit import util
@@ -18,25 +16,17 @@ class DataSourceAliYun(EC2.DataSourceEc2):
     min_metadata_version = '2016-01-01'
     extended_metadata_versions = []
 
-    def __init__(self, sys_cfg, distro, paths):
-        super(DataSourceAliYun, self).__init__(sys_cfg, distro, paths)
-        self.seed_dir = os.path.join(paths.seed_dir, "AliYun")
-
     def get_hostname(self, fqdn=False, resolve_ip=False, metadata_only=False):
         return self.metadata.get('hostname', 'localhost.localdomain')
 
     def get_public_ssh_keys(self):
         return parse_public_keys(self.metadata.get('public-keys', {}))
 
-    @property
-    def cloud_platform(self):
-        if self._cloud_platform is None:
-            if _is_aliyun():
-                self._cloud_platform = EC2.Platforms.ALIYUN
-            else:
-                self._cloud_platform = EC2.Platforms.NO_EC2_METADATA
-
-        return self._cloud_platform
+    def _get_cloud_name(self):
+        if _is_aliyun():
+            return EC2.CloudNames.ALIYUN
+        else:
+            return EC2.CloudNames.NO_EC2_METADATA
 
 
 def _is_aliyun():
diff --git a/cloudinit/sources/DataSourceAltCloud.py b/cloudinit/sources/DataSourceAltCloud.py
index 8cd312d..5270fda 100644
--- a/cloudinit/sources/DataSourceAltCloud.py
+++ b/cloudinit/sources/DataSourceAltCloud.py
@@ -89,7 +89,9 @@ class DataSourceAltCloud(sources.DataSource):
        '''
        Description:
            Get the type for the cloud back end this instance is running on
-           by examining the string returned by reading the dmi data.
+           by examining the string returned by reading either:
+               CLOUD_INFO_FILE or
+               the dmi data.
 
        Input:
            None
@@ -99,7 +101,14 @@ class DataSourceAltCloud(sources.DataSource):
            'RHEV', 'VSPHERE' or 'UNKNOWN'
 
        '''
-
+        if os.path.exists(CLOUD_INFO_FILE):
+            try:
+                cloud_type = util.load_file(CLOUD_INFO_FILE).strip().upper()
+            except IOError:
+                util.logexc(LOG, 'Unable to access cloud info file at %s.',
+                            CLOUD_INFO_FILE)
+                return 'UNKNOWN'
+            return cloud_type
         system_name = util.read_dmi_data("system-product-name")
         if not system_name:
             return 'UNKNOWN'
@@ -134,15 +143,7 @@ class DataSourceAltCloud(sources.DataSource):
 
         LOG.debug('Invoked get_data()')
 
-        if os.path.exists(CLOUD_INFO_FILE):
-            try:
-                cloud_type = util.load_file(CLOUD_INFO_FILE).strip().upper()
-            except IOError:
-                util.logexc(LOG, 'Unable to access cloud info file at %s.',
-                            CLOUD_INFO_FILE)
-                return False
-            else:
-                cloud_type = self.get_cloud_type()
+        cloud_type = self.get_cloud_type()
 
         LOG.debug('cloud_type: %s', str(cloud_type))
 
@@ -161,6 +162,15 @@ class DataSourceAltCloud(sources.DataSource):
             util.logexc(LOG, 'Failed accessing user data.')
             return False
 
+    def _get_subplatform(self):
+        """Return the subplatform metadata details."""
+        cloud_type = self.get_cloud_type()
+        if not hasattr(self, 'source'):
+            self.source = sources.METADATA_UNKNOWN
+        if cloud_type == 'RHEV':
+            self.source = '/dev/fd0'
+        return '%s (%s)' % (cloud_type.lower(), self.source)
+
     def user_data_rhevm(self):
        '''
        RHEVM specific userdata read
@@ -232,6 +242,7 @@ class DataSourceAltCloud(sources.DataSource):
             try:
                 return_str = util.mount_cb(cdrom_dev, read_user_data_callback)
                 if return_str:
+                    self.source = cdrom_dev
                     break
             except OSError as err:
                 if err.errno != errno.ENOENT:
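The `_get_subplatform` additions in this branch all follow one convention: classify where the metadata came from, then report it as `<type> (<source>)`. A minimal standalone sketch of that convention (the function name and the `METADATA_UNKNOWN` stand-in are illustrative, not cloud-init API):

```python
METADATA_UNKNOWN = 'unknown'  # sentinel mirroring sources.METADATA_UNKNOWN


def describe_subplatform(cloud_type, source=None):
    # Report the metadata origin as "<cloud type> (<source path>)",
    # falling back to an unknown sentinel when no device was recorded.
    if source is None:
        source = METADATA_UNKNOWN
    if cloud_type == 'RHEV':
        # Per the AltCloud change above, RHEV userdata arrives via floppy.
        source = '/dev/fd0'
    return '%s (%s)' % (cloud_type.lower(), source)
```

This string ends up in the `subplatform` key of instance metadata, so keeping every datasource on the same format matters.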
diff --git a/cloudinit/sources/DataSourceAzure.py b/cloudinit/sources/DataSourceAzure.py
index 783445e..a4f998b 100644
--- a/cloudinit/sources/DataSourceAzure.py
+++ b/cloudinit/sources/DataSourceAzure.py
@@ -22,7 +22,8 @@ from cloudinit.event import EventType
 from cloudinit.net.dhcp import EphemeralDHCPv4
 from cloudinit import sources
 from cloudinit.sources.helpers.azure import get_metadata_from_fabric
-from cloudinit.url_helper import readurl, UrlError
+from cloudinit.sources.helpers import netlink
+from cloudinit.url_helper import UrlError, readurl, retry_on_url_exc
 from cloudinit import util
 
 LOG = logging.getLogger(__name__)
@@ -57,7 +58,7 @@ IMDS_URL = "http://169.254.169.254/metadata/"
 # List of static scripts and network config artifacts created by
 # stock ubuntu suported images.
 UBUNTU_EXTENDED_NETWORK_SCRIPTS = [
-    '/etc/netplan/90-azure-hotplug.yaml',
+    '/etc/netplan/90-hotplug-azure.yaml',
     '/usr/local/sbin/ephemeral_eth.sh',
     '/etc/udev/rules.d/10-net-device-added.rules',
     '/run/network/interfaces.ephemeral.d',
@@ -207,7 +208,9 @@ BUILTIN_DS_CONFIG = {
     },
     'disk_aliases': {'ephemeral0': RESOURCE_DISK_PATH},
     'dhclient_lease_file': LEASE_FILE,
+    'apply_network_config': True,  # Use IMDS published network configuration
 }
+# RELEASE_BLOCKER: Xenial and earlier apply_network_config default is False
 
 BUILTIN_CLOUD_CONFIG = {
     'disk_setup': {
@@ -278,6 +281,7 @@ class DataSourceAzure(sources.DataSource):
         self._network_config = None
         # Regenerate network config new_instance boot and every boot
         self.update_events['network'].add(EventType.BOOT)
+        self._ephemeral_dhcp_ctx = None
 
     def __str__(self):
         root = sources.DataSource.__str__(self)
@@ -351,6 +355,14 @@ class DataSourceAzure(sources.DataSource):
         metadata['public-keys'] = key_value or pubkeys_from_crt_files(fp_files)
         return metadata
 
+    def _get_subplatform(self):
+        """Return the subplatform metadata source details."""
+        if self.seed.startswith('/dev'):
+            subplatform_type = 'config-disk'
+        else:
+            subplatform_type = 'seed-dir'
+        return '%s (%s)' % (subplatform_type, self.seed)
+
     def crawl_metadata(self):
         """Walk all instance metadata sources returning a dict on success.
 
@@ -396,10 +408,15 @@ class DataSourceAzure(sources.DataSource):
                 LOG.warning("%s was not mountable", cdev)
                 continue
 
-            if reprovision or self._should_reprovision(ret):
+            perform_reprovision = reprovision or self._should_reprovision(ret)
+            if perform_reprovision:
+                if util.is_FreeBSD():
+                    msg = "Free BSD is not supported for PPS VMs"
+                    LOG.error(msg)
+                    raise sources.InvalidMetaDataException(msg)
                 ret = self._reprovision()
             imds_md = get_metadata_from_imds(
-                self.fallback_interface, retries=3)
+                self.fallback_interface, retries=10)
             (md, userdata_raw, cfg, files) = ret
             self.seed = cdev
             crawled_data.update({
@@ -424,6 +441,18 @@ class DataSourceAzure(sources.DataSource):
             crawled_data['metadata']['random_seed'] = seed
         crawled_data['metadata']['instance-id'] = util.read_dmi_data(
             'system-uuid')
+
+        if perform_reprovision:
+            LOG.info("Reporting ready to Azure after getting ReprovisionData")
+            use_cached_ephemeral = (net.is_up(self.fallback_interface) and
+                                    getattr(self, '_ephemeral_dhcp_ctx', None))
+            if use_cached_ephemeral:
+                self._report_ready(lease=self._ephemeral_dhcp_ctx.lease)
+                self._ephemeral_dhcp_ctx.clean_network()  # Teardown ephemeral
+            else:
+                with EphemeralDHCPv4() as lease:
+                    self._report_ready(lease=lease)
+
         return crawled_data
 
     def _is_platform_viable(self):
@@ -450,7 +479,8 @@ class DataSourceAzure(sources.DataSource):
         except sources.InvalidMetaDataException as e:
             LOG.warning('Could not crawl Azure metadata: %s', e)
             return False
-        if self.distro and self.distro.name == 'ubuntu':
+        if (self.distro and self.distro.name == 'ubuntu' and
+                self.ds_cfg.get('apply_network_config')):
             maybe_remove_ubuntu_network_config_scripts()
 
         # Process crawled data and augment with various config defaults
@@ -498,8 +528,8 @@ class DataSourceAzure(sources.DataSource):
         response. Then return the returned JSON object."""
         url = IMDS_URL + "reprovisiondata?api-version=2017-04-02"
         headers = {"Metadata": "true"}
+        nl_sock = None
         report_ready = bool(not os.path.isfile(REPORTED_READY_MARKER_FILE))
-        LOG.debug("Start polling IMDS")
 
         def exc_cb(msg, exception):
             if isinstance(exception, UrlError) and exception.code == 404:
@@ -508,25 +538,47 @@ class DataSourceAzure(sources.DataSource):
             # call DHCP and setup the ephemeral network to acquire the new IP.
             return False
 
+        LOG.debug("Wait for vnetswitch to happen")
         while True:
             try:
-                with EphemeralDHCPv4() as lease:
-                    if report_ready:
-                        path = REPORTED_READY_MARKER_FILE
-                        LOG.info(
-                            "Creating a marker file to report ready: %s", path)
-                        util.write_file(path, "{pid}: {time}\n".format(
-                            pid=os.getpid(), time=time()))
-                        self._report_ready(lease=lease)
-                        report_ready = False
-                    return readurl(url, timeout=1, headers=headers,
-                                   exception_cb=exc_cb, infinite=True).contents
+                # Save our EphemeralDHCPv4 context so we avoid repeated dhcp
+                self._ephemeral_dhcp_ctx = EphemeralDHCPv4()
+                lease = self._ephemeral_dhcp_ctx.obtain_lease()
+                if report_ready:
+                    try:
+                        nl_sock = netlink.create_bound_netlink_socket()
+                    except netlink.NetlinkCreateSocketError as e:
+                        LOG.warning(e)
+                        self._ephemeral_dhcp_ctx.clean_network()
+                        return
+                    path = REPORTED_READY_MARKER_FILE
+                    LOG.info(
+                        "Creating a marker file to report ready: %s", path)
+                    util.write_file(path, "{pid}: {time}\n".format(
+                        pid=os.getpid(), time=time()))
+                    self._report_ready(lease=lease)
+                    report_ready = False
+                    try:
+                        netlink.wait_for_media_disconnect_connect(
+                            nl_sock, lease['interface'])
+                    except AssertionError as error:
+                        LOG.error(error)
+                        return
+                    self._ephemeral_dhcp_ctx.clean_network()
+                else:
+                    return readurl(url, timeout=1, headers=headers,
+                                   exception_cb=exc_cb, infinite=True,
+                                   log_req_resp=False).contents
             except UrlError:
+                # Teardown our EphemeralDHCPv4 context on failure as we retry
+                self._ephemeral_dhcp_ctx.clean_network()
                 pass
+            finally:
+                if nl_sock:
+                    nl_sock.close()
 
     def _report_ready(self, lease):
-        """Tells the fabric provisioning has completed
-        before we go into our polling loop."""
+        """Tells the fabric provisioning has completed """
         try:
             get_metadata_from_fabric(None, lease['unknown-245'])
         except Exception:
@@ -611,7 +663,11 @@ class DataSourceAzure(sources.DataSource):
         the blacklisted devices.
         """
         if not self._network_config:
-            self._network_config = parse_network_config(self._metadata_imds)
+            if self.ds_cfg.get('apply_network_config'):
+                nc_src = self._metadata_imds
+            else:
+                nc_src = None
+            self._network_config = parse_network_config(nc_src)
         return self._network_config
 
 
@@ -692,7 +748,7 @@ def can_dev_be_reformatted(devpath, preserve_ntfs):
         file_count = util.mount_cb(cand_path, count_files, mtype="ntfs",
                                    update_env_for_mount={'LANG': 'C'})
     except util.MountFailedError as e:
-        if "mount: unknown filesystem type 'ntfs'" in str(e):
+        if "unknown filesystem type 'ntfs'" in str(e):
             return True, (bmsg + ' but this system cannot mount NTFS,'
                           ' assuming there are no important files.'
                           ' Formatting allowed.')
@@ -920,12 +976,12 @@ def read_azure_ovf(contents):
         lambda n:
         n.localName == "LinuxProvisioningConfigurationSet")
 
-    if len(results) == 0:
+    if len(lpcs_nodes) == 0:
         raise NonAzureDataSource("No LinuxProvisioningConfigurationSet")
-    if len(results) > 1:
+    if len(lpcs_nodes) > 1:
         raise BrokenAzureDataSource("found '%d' %ss" %
-                                    ("LinuxProvisioningConfigurationSet",
-                                     len(results)))
+                                    (len(lpcs_nodes),
+                                     "LinuxProvisioningConfigurationSet"))
     lpcs = lpcs_nodes[0]
 
     if not lpcs.hasChildNodes():
@@ -1154,17 +1210,12 @@ def get_metadata_from_imds(fallback_nic, retries):
 
 def _get_metadata_from_imds(retries):
 
-    def retry_on_url_error(msg, exception):
-        if isinstance(exception, UrlError) and exception.code == 404:
-            return True  # Continue retries
-        return False  # Stop retries on all other exceptions
-
     url = IMDS_URL + "instance?api-version=2017-12-01"
     headers = {"Metadata": "true"}
     try:
         response = readurl(
             url, timeout=1, headers=headers, retries=retries,
-            exception_cb=retry_on_url_error)
+            exception_cb=retry_on_url_exc)
     except Exception as e:
         LOG.debug('Ignoring IMDS instance metadata: %s', e)
         return {}
@@ -1187,7 +1238,7 @@ def maybe_remove_ubuntu_network_config_scripts(paths=None):
     additional interfaces which get attached by a customer at some point
     after initial boot. Since the Azure datasource can now regenerate
     network configuration as metadata reports these new devices, we no longer
-    want the udev rules or netplan's 90-azure-hotplug.yaml to configure
+    want the udev rules or netplan's 90-hotplug-azure.yaml to configure
     networking on eth1 or greater as it might collide with cloud-init's
     configuration.
 
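The Azure change above replaces the local `retry_on_url_error` closure with the shared `retry_on_url_exc` helper from `cloudinit.url_helper`, centralizing the "keep retrying only on 404" policy. A self-contained sketch of that callback style (the `UrlError` class here is a simplified stand-in, and the upstream helper may treat additional cases such as timeouts as retryable):

```python
class UrlError(IOError):
    # Simplified stand-in for cloudinit.url_helper.UrlError.
    def __init__(self, cause, code=None):
        super(UrlError, self).__init__(str(cause))
        self.code = code


def retry_on_url_exc(msg, exception):
    """exception_cb for readurl: return True to continue retrying.

    Retry while the endpoint answers 404 (resource not yet published);
    stop on any other error so callers fail fast.
    """
    if isinstance(exception, UrlError) and exception.code == 404:
        return True
    return False
```

`readurl(url, retries=10, exception_cb=retry_on_url_exc)` would then poll IMDS until the metadata appears or a non-404 error ends the loop.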
diff --git a/cloudinit/sources/DataSourceBigstep.py b/cloudinit/sources/DataSourceBigstep.py
index 699a85b..52fff20 100644
--- a/cloudinit/sources/DataSourceBigstep.py
+++ b/cloudinit/sources/DataSourceBigstep.py
@@ -36,6 +36,10 @@ class DataSourceBigstep(sources.DataSource):
         self.userdata_raw = decoded["userdata_raw"]
         return True
 
+    def _get_subplatform(self):
+        """Return the subplatform metadata source details."""
+        return 'metadata (%s)' % get_url_from_file()
+
 
 def get_url_from_file():
     try:
diff --git a/cloudinit/sources/DataSourceCloudSigma.py b/cloudinit/sources/DataSourceCloudSigma.py
index c816f34..2955d3f 100644
--- a/cloudinit/sources/DataSourceCloudSigma.py
+++ b/cloudinit/sources/DataSourceCloudSigma.py
@@ -7,7 +7,7 @@
 from base64 import b64decode
 import re
 
-from cloudinit.cs_utils import Cepko
+from cloudinit.cs_utils import Cepko, SERIAL_PORT
 
 from cloudinit import log as logging
 from cloudinit import sources
@@ -84,6 +84,10 @@ class DataSourceCloudSigma(sources.DataSource):
 
         return True
 
+    def _get_subplatform(self):
+        """Return the subplatform metadata source details."""
+        return 'cepko (%s)' % SERIAL_PORT
+
     def get_hostname(self, fqdn=False, resolve_ip=False, metadata_only=False):
         """
         Cleans up and uses the server's name if the latter is set. Otherwise
diff --git a/cloudinit/sources/DataSourceConfigDrive.py b/cloudinit/sources/DataSourceConfigDrive.py
index 664dc4b..564e3eb 100644
--- a/cloudinit/sources/DataSourceConfigDrive.py
+++ b/cloudinit/sources/DataSourceConfigDrive.py
@@ -160,6 +160,18 @@ class DataSourceConfigDrive(openstack.SourceMixin, sources.DataSource):
         LOG.debug("no network configuration available")
         return self._network_config
 
+    @property
+    def platform(self):
+        return 'openstack'
+
+    def _get_subplatform(self):
+        """Return the subplatform metadata source details."""
+        if self.seed_dir in self.source:
+            subplatform_type = 'seed-dir'
+        elif self.source.startswith('/dev'):
+            subplatform_type = 'config-disk'
+        return '%s (%s)' % (subplatform_type, self.source)
+
 
 def read_config_drive(source_dir):
     reader = openstack.ConfigDriveReader(source_dir)
diff --git a/cloudinit/sources/DataSourceEc2.py b/cloudinit/sources/DataSourceEc2.py
index 968ab3f..9ccf2cd 100644
--- a/cloudinit/sources/DataSourceEc2.py
+++ b/cloudinit/sources/DataSourceEc2.py
@@ -28,18 +28,16 @@ STRICT_ID_PATH = ("datasource", "Ec2", "strict_id")
 STRICT_ID_DEFAULT = "warn"
 
 
-class Platforms(object):
-    # TODO Rename and move to cloudinit.cloud.CloudNames
-    ALIYUN = "AliYun"
-    AWS = "AWS"
-    BRIGHTBOX = "Brightbox"
-    SEEDED = "Seeded"
+class CloudNames(object):
+    ALIYUN = "aliyun"
+    AWS = "aws"
+    BRIGHTBOX = "brightbox"
     # UNKNOWN indicates no positive id. If strict_id is 'warn' or 'false',
     # then an attempt at the Ec2 Metadata service will be made.
-    UNKNOWN = "Unknown"
+    UNKNOWN = "unknown"
     # NO_EC2_METADATA indicates this platform does not have a Ec2 metadata
     # service available. No attempt at the Ec2 Metadata service will be made.
-    NO_EC2_METADATA = "No-EC2-Metadata"
+    NO_EC2_METADATA = "no-ec2-metadata"
 
 
 class DataSourceEc2(sources.DataSource):
@@ -61,8 +59,6 @@ class DataSourceEc2(sources.DataSource):
     url_max_wait = 120
     url_timeout = 50
 
-    _cloud_platform = None
-
     _network_config = sources.UNSET  # Used to cache calculated network cfg v1
 
     # Whether we want to get network configuration from the metadata service.
@@ -71,30 +67,21 @@ class DataSourceEc2(sources.DataSource):
     def __init__(self, sys_cfg, distro, paths):
         super(DataSourceEc2, self).__init__(sys_cfg, distro, paths)
         self.metadata_address = None
-        self.seed_dir = os.path.join(paths.seed_dir, "ec2")
 
     def _get_cloud_name(self):
         """Return the cloud name as identified during _get_data."""
-        return self.cloud_platform
+        return identify_platform()
 
     def _get_data(self):
-        seed_ret = {}
-        if util.read_optional_seed(seed_ret, base=(self.seed_dir + "/")):
-            self.userdata_raw = seed_ret['user-data']
-            self.metadata = seed_ret['meta-data']
-            LOG.debug("Using seeded ec2 data from %s", self.seed_dir)
-            self._cloud_platform = Platforms.SEEDED
-            return True
-
         strict_mode, _sleep = read_strict_mode(
             util.get_cfg_by_path(self.sys_cfg, STRICT_ID_PATH,
                                  STRICT_ID_DEFAULT), ("warn", None))
 
-        LOG.debug("strict_mode: %s, cloud_platform=%s",
-                  strict_mode, self.cloud_platform)
-        if strict_mode == "true" and self.cloud_platform == Platforms.UNKNOWN:
+        LOG.debug("strict_mode: %s, cloud_name=%s cloud_platform=%s",
+                  strict_mode, self.cloud_name, self.platform)
+        if strict_mode == "true" and self.cloud_name == CloudNames.UNKNOWN:
             return False
-        elif self.cloud_platform == Platforms.NO_EC2_METADATA:
+        elif self.cloud_name == CloudNames.NO_EC2_METADATA:
             return False
 
         if self.perform_dhcp_setup:  # Setup networking in init-local stage.
@@ -103,13 +90,22 @@ class DataSourceEc2(sources.DataSource):
             return False
         try:
             with EphemeralDHCPv4(self.fallback_interface):
-                return util.log_time(
+                self._crawled_metadata = util.log_time(
                     logfunc=LOG.debug, msg='Crawl of metadata service',
-                    func=self._crawl_metadata)
+                    func=self.crawl_metadata)
         except NoDHCPLeaseError:
             return False
         else:
-            return self._crawl_metadata()
+            self._crawled_metadata = util.log_time(
+                logfunc=LOG.debug, msg='Crawl of metadata service',
+                func=self.crawl_metadata)
+        if not self._crawled_metadata:
+            return False
+        self.metadata = self._crawled_metadata.get('meta-data', None)
+        self.userdata_raw = self._crawled_metadata.get('user-data', None)
+        self.identity = self._crawled_metadata.get(
+            'dynamic', {}).get('instance-identity', {}).get('document', {})
+        return True
 
     @property
     def launch_index(self):
@@ -117,6 +113,15 @@ class DataSourceEc2(sources.DataSource):
             return None
         return self.metadata.get('ami-launch-index')
 
+    @property
+    def platform(self):
+        # Handle upgrade path of pickled ds
+        if not hasattr(self, '_platform_type'):
+            self._platform_type = DataSourceEc2.dsname.lower()
+        if not self._platform_type:
+            self._platform_type = DataSourceEc2.dsname.lower()
+        return self._platform_type
+
     def get_metadata_api_version(self):
         """Get the best supported api version from the metadata service.
 
@@ -144,7 +149,7 @@ class DataSourceEc2(sources.DataSource):
         return self.min_metadata_version
 
     def get_instance_id(self):
-        if self.cloud_platform == Platforms.AWS:
+        if self.cloud_name == CloudNames.AWS:
             # Prefer the ID from the instance identity document, but fall back
             if not getattr(self, 'identity', None):
                 # If re-using cached datasource, it's get_data run didn't
@@ -254,7 +259,7 @@ class DataSourceEc2(sources.DataSource):
     @property
     def availability_zone(self):
         try:
-            if self.cloud_platform == Platforms.AWS:
+            if self.cloud_name == CloudNames.AWS:
                 return self.identity.get(
                     'availabilityZone',
                     self.metadata['placement']['availability-zone'])
@@ -265,7 +270,7 @@ class DataSourceEc2(sources.DataSource):
 
     @property
     def region(self):
-        if self.cloud_platform == Platforms.AWS:
+        if self.cloud_name == CloudNames.AWS:
             region = self.identity.get('region')
             # Fallback to trimming the availability zone if region is missing
             if self.availability_zone and not region:
@@ -277,16 +282,10 @@ class DataSourceEc2(sources.DataSource):
                 return az[:-1]
         return None
 
-    @property
-    def cloud_platform(self):  # TODO rename cloud_name
-        if self._cloud_platform is None:
-            self._cloud_platform = identify_platform()
-        return self._cloud_platform
-
     def activate(self, cfg, is_new_instance):
         if not is_new_instance:
             return
-        if self.cloud_platform == Platforms.UNKNOWN:
+        if self.cloud_name == CloudNames.UNKNOWN:
             warn_if_necessary(
                 util.get_cfg_by_path(cfg, STRICT_ID_PATH, STRICT_ID_DEFAULT),
                 cfg)
@@ -306,13 +305,13 @@ class DataSourceEc2(sources.DataSource):
306 result = None305 result = None
307 no_network_metadata_on_aws = bool(306 no_network_metadata_on_aws = bool(
308 'network' not in self.metadata and307 'network' not in self.metadata and
309 self.cloud_platform == Platforms.AWS)308 self.cloud_name == CloudNames.AWS)
310 if no_network_metadata_on_aws:309 if no_network_metadata_on_aws:
311 LOG.debug("Metadata 'network' not present:"310 LOG.debug("Metadata 'network' not present:"
312 " Refreshing stale metadata from prior to upgrade.")311 " Refreshing stale metadata from prior to upgrade.")
313 util.log_time(312 util.log_time(
314 logfunc=LOG.debug, msg='Re-crawl of metadata service',313 logfunc=LOG.debug, msg='Re-crawl of metadata service',
315 func=self._crawl_metadata)314 func=self.get_data)
316315
317 # Limit network configuration to only the primary/fallback nic316 # Limit network configuration to only the primary/fallback nic
318 iface = self.fallback_interface317 iface = self.fallback_interface
@@ -340,28 +339,32 @@ class DataSourceEc2(sources.DataSource):
340 return super(DataSourceEc2, self).fallback_interface339 return super(DataSourceEc2, self).fallback_interface
341 return self._fallback_interface340 return self._fallback_interface
342341
343 def _crawl_metadata(self):342 def crawl_metadata(self):
344 """Crawl metadata service when available.343 """Crawl metadata service when available.
345344
346 @returns: True on success, False otherwise.345 @returns: Dictionary of crawled metadata content containing the keys:
346 meta-data, user-data and dynamic.
347 """347 """
348 if not self.wait_for_metadata_service():348 if not self.wait_for_metadata_service():
349 return False349 return {}
350 api_version = self.get_metadata_api_version()350 api_version = self.get_metadata_api_version()
351 crawled_metadata = {}
351 try:352 try:
352 self.userdata_raw = ec2.get_instance_userdata(353 crawled_metadata['user-data'] = ec2.get_instance_userdata(
353 api_version, self.metadata_address)354 api_version, self.metadata_address)
354 self.metadata = ec2.get_instance_metadata(355 crawled_metadata['meta-data'] = ec2.get_instance_metadata(
355 api_version, self.metadata_address)356 api_version, self.metadata_address)
356 if self.cloud_platform == Platforms.AWS:357 if self.cloud_name == CloudNames.AWS:
357 self.identity = ec2.get_instance_identity(358 identity = ec2.get_instance_identity(
358 api_version, self.metadata_address).get('document', {})359 api_version, self.metadata_address)
360 crawled_metadata['dynamic'] = {'instance-identity': identity}
359 except Exception:361 except Exception:
360 util.logexc(362 util.logexc(
361 LOG, "Failed reading from metadata address %s",363 LOG, "Failed reading from metadata address %s",
362 self.metadata_address)364 self.metadata_address)
363 return False365 return {}
364 return True366 crawled_metadata['_metadata_api_version'] = api_version
367 return crawled_metadata
365368
366369
367class DataSourceEc2Local(DataSourceEc2):370class DataSourceEc2Local(DataSourceEc2):
@@ -375,10 +378,10 @@ class DataSourceEc2Local(DataSourceEc2):
375 perform_dhcp_setup = True # Use dhcp before querying metadata378 perform_dhcp_setup = True # Use dhcp before querying metadata
376379
377 def get_data(self):380 def get_data(self):
378 supported_platforms = (Platforms.AWS,)381 supported_platforms = (CloudNames.AWS,)
379 if self.cloud_platform not in supported_platforms:382 if self.cloud_name not in supported_platforms:
380 LOG.debug("Local Ec2 mode only supported on %s, not %s",383 LOG.debug("Local Ec2 mode only supported on %s, not %s",
381 supported_platforms, self.cloud_platform)384 supported_platforms, self.cloud_name)
382 return False385 return False
383 return super(DataSourceEc2Local, self).get_data()386 return super(DataSourceEc2Local, self).get_data()
384387
@@ -439,20 +442,20 @@ def identify_aws(data):
     if (data['uuid'].startswith('ec2') and
             (data['uuid_source'] == 'hypervisor' or
              data['uuid'] == data['serial'])):
-        return Platforms.AWS
+        return CloudNames.AWS
 
     return None
 
 
 def identify_brightbox(data):
     if data['serial'].endswith('brightbox.com'):
-        return Platforms.BRIGHTBOX
+        return CloudNames.BRIGHTBOX
 
 
 def identify_platform():
-    # identify the platform and return an entry in Platforms.
+    # identify the platform and return an entry in CloudNames.
     data = _collect_platform_data()
-    checks = (identify_aws, identify_brightbox, lambda x: Platforms.UNKNOWN)
+    checks = (identify_aws, identify_brightbox, lambda x: CloudNames.UNKNOWN)
     for checker in checks:
         try:
             result = checker(data)
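The renamed `identify_platform` above walks a tuple of checker callables in order and ends with a constant-`UNKNOWN` fallback. A minimal standalone sketch of that pattern follows; it is simplified for illustration (the real `identify_aws` also checks `uuid_source` and `serial`, and the real data comes from `_collect_platform_data` reading DMI values):

```python
UNKNOWN = 'unknown'


def identify_aws(data):
    # Simplified: real check also compares uuid_source/serial.
    if data.get('uuid', '').startswith('ec2'):
        return 'aws'
    return None


def identify_brightbox(data):
    if data.get('serial', '').endswith('brightbox.com'):
        return 'brightbox'
    return None


def identify_platform(data):
    """Return the first truthy checker result; last checker never fails."""
    checks = (identify_aws, identify_brightbox, lambda d: UNKNOWN)
    for checker in checks:
        try:
            result = checker(data)
            if result:
                return result
        except Exception:
            pass  # a failing checker should not mask later ones
    return UNKNOWN


print(identify_platform({'uuid': 'ec2abcd', 'serial': 'x'}))  # aws
print(identify_platform({'uuid': 'x', 'serial': 'y'}))        # unknown
```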
diff --git a/cloudinit/sources/DataSourceIBMCloud.py b/cloudinit/sources/DataSourceIBMCloud.py
index a535814..21e6ae6 100644
--- a/cloudinit/sources/DataSourceIBMCloud.py
+++ b/cloudinit/sources/DataSourceIBMCloud.py
@@ -157,6 +157,10 @@ class DataSourceIBMCloud(sources.DataSource):
 
         return True
 
+    def _get_subplatform(self):
+        """Return the subplatform metadata source details."""
+        return '%s (%s)' % (self.platform, self.source)
+
     def check_instance_id(self, sys_cfg):
         """quickly (local check only) if self.instance_id is still valid
 
diff --git a/cloudinit/sources/DataSourceMAAS.py b/cloudinit/sources/DataSourceMAAS.py
index bcb3854..61aa6d7 100644
--- a/cloudinit/sources/DataSourceMAAS.py
+++ b/cloudinit/sources/DataSourceMAAS.py
@@ -109,6 +109,10 @@ class DataSourceMAAS(sources.DataSource):
         LOG.warning("Invalid content in vendor-data: %s", e)
         self.vendordata_raw = None
 
+    def _get_subplatform(self):
+        """Return the subplatform metadata source details."""
+        return 'seed-dir (%s)' % self.base_url
+
     def wait_for_metadata_service(self, url):
         mcfg = self.ds_cfg
         max_wait = 120
diff --git a/cloudinit/sources/DataSourceNoCloud.py b/cloudinit/sources/DataSourceNoCloud.py
index 2daea59..6860f0c 100644
--- a/cloudinit/sources/DataSourceNoCloud.py
+++ b/cloudinit/sources/DataSourceNoCloud.py
@@ -186,6 +186,27 @@ class DataSourceNoCloud(sources.DataSource):
         self._network_eni = mydata['meta-data'].get('network-interfaces')
         return True
 
+    @property
+    def platform_type(self):
+        # Handle upgrade path of pickled ds
+        if not hasattr(self, '_platform_type'):
+            self._platform_type = None
+        if not self._platform_type:
+            self._platform_type = 'lxd' if util.is_lxd() else 'nocloud'
+        return self._platform_type
+
+    def _get_cloud_name(self):
+        """Return unknown when 'cloud-name' key is absent from metadata."""
+        return sources.METADATA_UNKNOWN
+
+    def _get_subplatform(self):
+        """Return the subplatform metadata source details."""
+        if self.seed.startswith('/dev'):
+            subplatform_type = 'config-disk'
+        else:
+            subplatform_type = 'seed-dir'
+        return '%s (%s)' % (subplatform_type, self.seed)
+
     def check_instance_id(self, sys_cfg):
         # quickly (local check only) if self.instance_id is still valid
         # we check kernel command line or files.
@@ -290,6 +311,35 @@ def parse_cmdline_data(ds_id, fill, cmdline=None):
         return True
 
 
+def _maybe_remove_top_network(cfg):
+    """If network-config contains top level 'network' key, then remove it.
+
+    Some providers of network configuration may provide a top level
+    'network' key (LP: #1798117) even though it is not necessary.
+
+    Be friendly and remove it if it really seems so.
+
+    Return the original value if no change or the updated value if changed."""
+    nullval = object()
+    network_val = cfg.get('network', nullval)
+    if network_val is nullval:
+        return cfg
+    bmsg = 'Top level network key in network-config %s: %s'
+    if not isinstance(network_val, dict):
+        LOG.debug(bmsg, "was not a dict", cfg)
+        return cfg
+    if len(list(cfg.keys())) != 1:
+        LOG.debug(bmsg, "had multiple top level keys", cfg)
+        return cfg
+    if network_val.get('config') == "disabled":
+        LOG.debug(bmsg, "was config/disabled", cfg)
+    elif not all(('config' in network_val, 'version' in network_val)):
+        LOG.debug(bmsg, "but missing 'config' or 'version'", cfg)
+        return cfg
+    LOG.debug(bmsg, "fixed by removing shifting network.", cfg)
+    return network_val
+
+
 def _merge_new_seed(cur, seeded):
     ret = cur.copy()
 
@@ -299,7 +349,8 @@ def _merge_new_seed(cur, seeded):
     ret['meta-data'] = util.mergemanydict([cur['meta-data'], newmd])
 
     if seeded.get('network-config'):
-        ret['network-config'] = util.load_yaml(seeded['network-config'])
+        ret['network-config'] = _maybe_remove_top_network(
+            util.load_yaml(seeded.get('network-config')))
 
     if 'user-data' in seeded:
         ret['user-data'] = seeded['user-data']
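The `_maybe_remove_top_network` helper introduced above unwraps an unnecessary top-level `network` key only when it is the sole key and the inner dict looks like a real network-config. A minimal standalone sketch of the same unwrapping rules (logging omitted, names hypothetical):

```python
def maybe_remove_top_network(cfg):
    """Unwrap {'network': {...}} when 'network' is the only top-level key
    and the inner dict either is config/disabled or carries both 'config'
    and 'version'; otherwise return cfg unchanged."""
    network_val = cfg.get('network')
    if not isinstance(network_val, dict) or len(cfg) != 1:
        return cfg
    if network_val.get('config') == 'disabled':
        return network_val
    if 'config' in network_val and 'version' in network_val:
        return network_val
    return cfg


wrapped = {'network': {'version': 1, 'config': [{'type': 'physical'}]}}
print(maybe_remove_top_network(wrapped))  # the inner dict, unwrapped
```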
diff --git a/cloudinit/sources/DataSourceNone.py b/cloudinit/sources/DataSourceNone.py
index e63a7e3..e625080 100644
--- a/cloudinit/sources/DataSourceNone.py
+++ b/cloudinit/sources/DataSourceNone.py
@@ -28,6 +28,10 @@ class DataSourceNone(sources.DataSource):
         self.metadata = self.ds_cfg['metadata']
         return True
 
+    def _get_subplatform(self):
+        """Return the subplatform metadata source details."""
+        return 'config'
+
     def get_instance_id(self):
         return 'iid-datasource-none'
 
diff --git a/cloudinit/sources/DataSourceOVF.py b/cloudinit/sources/DataSourceOVF.py
index 178ccb0..3a3fcdf 100644
--- a/cloudinit/sources/DataSourceOVF.py
+++ b/cloudinit/sources/DataSourceOVF.py
@@ -232,11 +232,11 @@ class DataSourceOVF(sources.DataSource):
                 GuestCustErrorEnum.GUESTCUST_ERROR_SUCCESS)
 
         else:
-            np = {'iso': transport_iso9660,
-                  'vmware-guestd': transport_vmware_guestd, }
+            np = [('com.vmware.guestInfo', transport_vmware_guestinfo),
+                  ('iso', transport_iso9660)]
             name = None
-            for (name, transfunc) in np.items():
-                (contents, _dev, _fname) = transfunc()
+            for name, transfunc in np:
+                contents = transfunc()
                 if contents:
                     break
             if contents:
@@ -275,6 +275,12 @@ class DataSourceOVF(sources.DataSource):
         self.cfg = cfg
         return True
 
+    def _get_subplatform(self):
+        system_type = util.read_dmi_data("system-product-name").lower()
+        if system_type == 'vmware':
+            return 'vmware (%s)' % self.seed
+        return 'ovf (%s)' % self.seed
+
     def get_public_ssh_keys(self):
         if 'public-keys' not in self.metadata:
             return []
@@ -458,8 +464,8 @@ def maybe_cdrom_device(devname):
     return cdmatch.match(devname) is not None
 
 
-# Transport functions take no input and return
-# a 3 tuple of content, path, filename
+# Transport functions are called with no arguments and return
+# either None (indicating not present) or string content of an ovf-env.xml
 def transport_iso9660(require_iso=True):
 
     # Go through mounts to see if it was already mounted
@@ -471,9 +477,9 @@ def transport_iso9660(require_iso=True):
         if not maybe_cdrom_device(dev):
             continue
         mp = info['mountpoint']
-        (fname, contents) = get_ovf_env(mp)
+        (_fname, contents) = get_ovf_env(mp)
         if contents is not False:
-            return (contents, dev, fname)
+            return contents
 
     if require_iso:
         mtype = "iso9660"
@@ -486,29 +492,33 @@ def transport_iso9660(require_iso=True):
             if maybe_cdrom_device(dev)]
     for dev in devs:
         try:
-            (fname, contents) = util.mount_cb(dev, get_ovf_env, mtype=mtype)
+            (_fname, contents) = util.mount_cb(dev, get_ovf_env, mtype=mtype)
         except util.MountFailedError:
             LOG.debug("%s not mountable as iso9660", dev)
             continue
 
         if contents is not False:
-            return (contents, dev, fname)
+            return contents
 
-    return (False, None, None)
+    return None
 
 
-def transport_vmware_guestd():
-    # http://blogs.vmware.com/vapp/2009/07/ \
-    #    selfconfiguration-and-the-ovf-environment.html
-    # try:
-    #     cmd = ['vmware-guestd', '--cmd', 'info-get guestinfo.ovfEnv']
-    #     (out, err) = subp(cmd)
-    #     return(out, 'guestinfo.ovfEnv', 'vmware-guestd')
-    # except:
-    #     # would need to error check here and see why this failed
-    #     # to know if log/error should be raised
-    #     return(False, None, None)
-    return (False, None, None)
+def transport_vmware_guestinfo():
+    rpctool = "vmware-rpctool"
+    not_found = None
+    if not util.which(rpctool):
+        return not_found
+    cmd = [rpctool, "info-get guestinfo.ovfEnv"]
+    try:
+        out, _err = util.subp(cmd)
+        if out:
+            return out
+        LOG.debug("cmd %s exited 0 with empty stdout: %s", cmd, out)
+    except util.ProcessExecutionError as e:
+        if e.exit_code != 1:
+            LOG.warning("%s exited with code %d", rpctool, e.exit_code)
+        LOG.debug(e)
+        return not_found
 
 
 def find_child(node, filter_func):
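With this branch, every OVF transport follows one contract: called with no arguments, it returns either the string content of an ovf-env.xml or None, and the first transport that yields content wins. A standalone sketch of that probe loop (the stand-in transports below are hypothetical, for illustration only):

```python
def probe_transports(transports):
    """Return (name, contents) for the first transport yielding content,
    or (None, None) when none do. Each transport callable takes no
    arguments and returns string content or None."""
    for name, transfunc in transports:
        contents = transfunc()
        if contents:
            return name, contents
    return None, None


# Hypothetical stand-in transports, ordered like the datasource above.
transports = [
    ('com.vmware.guestInfo', lambda: None),  # not available on this host
    ('iso', lambda: '<?xml version="1.0"?><Environment/>'),
]
print(probe_transports(transports))
```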
diff --git a/cloudinit/sources/DataSourceOpenNebula.py b/cloudinit/sources/DataSourceOpenNebula.py
index 77ccd12..6e1d04b 100644
--- a/cloudinit/sources/DataSourceOpenNebula.py
+++ b/cloudinit/sources/DataSourceOpenNebula.py
@@ -95,6 +95,14 @@ class DataSourceOpenNebula(sources.DataSource):
         self.userdata_raw = results.get('userdata')
         return True
 
+    def _get_subplatform(self):
+        """Return the subplatform metadata source details."""
+        if self.seed_dir in self.seed:
+            subplatform_type = 'seed-dir'
+        else:
+            subplatform_type = 'config-disk'
+        return '%s (%s)' % (subplatform_type, self.seed)
+
     @property
     def network_config(self):
         if self.network is not None:
@@ -329,7 +337,7 @@ def parse_shell_config(content, keylist=None, bash=None, asuser=None,
     (output, _error) = util.subp(cmd, data=bcmd)
 
     # exclude vars in bash that change on their own or that we used
-    excluded = ("RANDOM", "LINENO", "SECONDS", "_", "__v")
+    excluded = ("EPOCHREALTIME", "RANDOM", "LINENO", "SECONDS", "_", "__v")
     preset = {}
     ret = {}
     target = None
diff --git a/cloudinit/sources/DataSourceOracle.py b/cloudinit/sources/DataSourceOracle.py
index fab39af..70b9c58 100644
--- a/cloudinit/sources/DataSourceOracle.py
+++ b/cloudinit/sources/DataSourceOracle.py
@@ -91,6 +91,10 @@ class DataSourceOracle(sources.DataSource):
     def crawl_metadata(self):
         return read_metadata()
 
+    def _get_subplatform(self):
+        """Return the subplatform metadata source details."""
+        return 'metadata (%s)' % METADATA_ENDPOINT
+
     def check_instance_id(self, sys_cfg):
         """quickly check (local only) if self.instance_id is still valid
 
diff --git a/cloudinit/sources/DataSourceScaleway.py b/cloudinit/sources/DataSourceScaleway.py
index 9dc4ab2..b573b38 100644
--- a/cloudinit/sources/DataSourceScaleway.py
+++ b/cloudinit/sources/DataSourceScaleway.py
@@ -253,7 +253,16 @@ class DataSourceScaleway(sources.DataSource):
         return self.metadata['id']
 
     def get_public_ssh_keys(self):
-        return [key['key'] for key in self.metadata['ssh_public_keys']]
+        ssh_keys = [key['key'] for key in self.metadata['ssh_public_keys']]
+
+        akeypre = "AUTHORIZED_KEY="
+        plen = len(akeypre)
+        for tag in self.metadata.get('tags', []):
+            if not tag.startswith(akeypre):
+                continue
+            ssh_keys.append(tag[plen:].replace("_", " "))
+
+        return ssh_keys
 
     def get_hostname(self, fqdn=False, resolve_ip=False, metadata_only=False):
         return self.metadata['hostname']
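The Scaleway change above also accepts SSH keys delivered as instance tags of the form `AUTHORIZED_KEY=<key-with-underscores>`, where underscores stand in for spaces. A standalone sketch of that decoding (the tag values below are hypothetical):

```python
def keys_from_tags(tags):
    """Collect SSH keys from AUTHORIZED_KEY=... tags, dropping the prefix
    and mapping underscores back to spaces."""
    prefix = "AUTHORIZED_KEY="
    keys = []
    for tag in tags:
        if tag.startswith(prefix):
            keys.append(tag[len(prefix):].replace("_", " "))
    return keys


tags = ["AUTHORIZED_KEY=ssh-rsa_AAAAB3_user@host", "unrelated-tag"]
print(keys_from_tags(tags))  # ['ssh-rsa AAAAB3 user@host']
```

Note the slice starts at `len(prefix)` so only the key material survives; a `tag[:plen]` slice would keep the prefix and discard the key.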
diff --git a/cloudinit/sources/DataSourceSmartOS.py b/cloudinit/sources/DataSourceSmartOS.py
index 593ac91..32b57cd 100644
--- a/cloudinit/sources/DataSourceSmartOS.py
+++ b/cloudinit/sources/DataSourceSmartOS.py
@@ -303,6 +303,9 @@ class DataSourceSmartOS(sources.DataSource):
         self._set_provisioned()
         return True
 
+    def _get_subplatform(self):
+        return 'serial (%s)' % SERIAL_DEVICE
+
     def device_name_to_device(self, name):
         return self.ds_cfg['disk_aliases'].get(name)
 
diff --git a/cloudinit/sources/__init__.py b/cloudinit/sources/__init__.py
index 5ac9882..e6966b3 100644
--- a/cloudinit/sources/__init__.py
+++ b/cloudinit/sources/__init__.py
@@ -54,9 +54,18 @@ REDACT_SENSITIVE_VALUE = 'redacted for non-root user'
 METADATA_CLOUD_NAME_KEY = 'cloud-name'
 
 UNSET = "_unset"
+METADATA_UNKNOWN = 'unknown'
 
 LOG = logging.getLogger(__name__)
 
+# CLOUD_ID_REGION_PREFIX_MAP format is:
+#  <region-match-prefix>: (<new-cloud-id>: <test_allowed_cloud_callable>)
+CLOUD_ID_REGION_PREFIX_MAP = {
+    'cn-': ('aws-china', lambda c: c == 'aws'),  # only change aws regions
+    'us-gov-': ('aws-gov', lambda c: c == 'aws'),  # only change aws regions
+    'china': ('azure-china', lambda c: c == 'azure'),  # only change azure
+}
+
 
 class DataSourceNotFoundException(Exception):
     pass
@@ -133,6 +142,14 @@ class DataSource(object):
     # Cached cloud_name as determined by _get_cloud_name
     _cloud_name = None
 
+    # Cached cloud platform api type: e.g. ec2, openstack, kvm, lxd, azure etc.
+    _platform_type = None
+
+    # More details about the cloud platform:
+    #  - metadata (http://169.254.169.254/)
+    #  - seed-dir (<dirname>)
+    _subplatform = None
+
     # Track the discovered fallback nic for use in configuration generation.
     _fallback_interface = None
 
@@ -192,21 +209,24 @@ class DataSource(object):
         local_hostname = self.get_hostname()
         instance_id = self.get_instance_id()
         availability_zone = self.availability_zone
-        cloud_name = self.cloud_name
-        # When adding new standard keys prefer underscore-delimited instead
-        # of hyphen-delimted to support simple variable references in jinja
-        # templates.
+        # In the event of upgrade from existing cloudinit, pickled datasource
+        # will not contain these new class attributes. So we need to recrawl
+        # metadata to discover that content.
         return {
             'v1': {
+                '_beta_keys': ['subplatform'],
                 'availability-zone': availability_zone,
                 'availability_zone': availability_zone,
-                'cloud-name': cloud_name,
-                'cloud_name': cloud_name,
+                'cloud-name': self.cloud_name,
+                'cloud_name': self.cloud_name,
+                'platform': self.platform_type,
+                'public_ssh_keys': self.get_public_ssh_keys(),
                 'instance-id': instance_id,
                 'instance_id': instance_id,
                 'local-hostname': local_hostname,
                 'local_hostname': local_hostname,
-                'region': self.region}}
+                'region': self.region,
+                'subplatform': self.subplatform}}
 
     def clear_cached_attrs(self, attr_defaults=()):
         """Reset any cached metadata attributes to datasource defaults.
@@ -247,19 +267,27 @@ class DataSource(object):
 
         @return True on successful write, False otherwise.
         """
-        instance_data = {
-            'ds': {'_doc': EXPERIMENTAL_TEXT,
-                   'meta_data': self.metadata}}
-        if hasattr(self, 'network_json'):
-            network_json = getattr(self, 'network_json')
-            if network_json != UNSET:
-                instance_data['ds']['network_json'] = network_json
-        if hasattr(self, 'ec2_metadata'):
-            ec2_metadata = getattr(self, 'ec2_metadata')
-            if ec2_metadata != UNSET:
-                instance_data['ds']['ec2_metadata'] = ec2_metadata
+        if hasattr(self, '_crawled_metadata'):
+            # Any datasource with _crawled_metadata will best represent
+            # most recent, 'raw' metadata
+            crawled_metadata = copy.deepcopy(
+                getattr(self, '_crawled_metadata'))
+            crawled_metadata.pop('user-data', None)
+            crawled_metadata.pop('vendor-data', None)
+            instance_data = {'ds': crawled_metadata}
+        else:
+            instance_data = {'ds': {'meta_data': self.metadata}}
+            if hasattr(self, 'network_json'):
+                network_json = getattr(self, 'network_json')
+                if network_json != UNSET:
+                    instance_data['ds']['network_json'] = network_json
+            if hasattr(self, 'ec2_metadata'):
+                ec2_metadata = getattr(self, 'ec2_metadata')
+                if ec2_metadata != UNSET:
+                    instance_data['ds']['ec2_metadata'] = ec2_metadata
         instance_data.update(
             self._get_standardized_metadata())
+        instance_data['ds']['_doc'] = EXPERIMENTAL_TEXT
         try:
             # Process content base64encoding unserializable values
             content = util.json_dumps(instance_data)
@@ -347,6 +375,40 @@ class DataSource(object):
         return self._fallback_interface
 
     @property
+    def platform_type(self):
+        if not hasattr(self, '_platform_type'):
+            # Handle upgrade path where pickled datasource has no _platform.
+            self._platform_type = self.dsname.lower()
+        if not self._platform_type:
+            self._platform_type = self.dsname.lower()
+        return self._platform_type
+
+    @property
+    def subplatform(self):
+        """Return a string representing subplatform details for the datasource.
+
+        This should be guidance for where the metadata is sourced.
+        Examples of this on different clouds:
+            ec2:       metadata (http://169.254.169.254)
+            openstack: configdrive (/dev/path)
+            openstack: metadata (http://169.254.169.254)
+            nocloud:   seed-dir (/seed/dir/path)
+            lxd:       nocloud (/seed/dir/path)
+        """
+        if not hasattr(self, '_subplatform'):
+            # Handle upgrade path where pickled datasource has no _platform.
+            self._subplatform = self._get_subplatform()
+        if not self._subplatform:
+            self._subplatform = self._get_subplatform()
+        return self._subplatform
+
+    def _get_subplatform(self):
+        """Subclasses should implement to return a "slug (detail)" string."""
+        if hasattr(self, 'metadata_address'):
+            return 'metadata (%s)' % getattr(self, 'metadata_address')
+        return METADATA_UNKNOWN
+
+    @property
     def cloud_name(self):
         """Return lowercase cloud name as determined by the datasource.
 
@@ -359,9 +421,11 @@ class DataSource(object):
             cloud_name = self.metadata.get(METADATA_CLOUD_NAME_KEY)
             if isinstance(cloud_name, six.string_types):
                 self._cloud_name = cloud_name.lower()
-            LOG.debug(
-                'Ignoring metadata provided key %s: non-string type %s',
-                METADATA_CLOUD_NAME_KEY, type(cloud_name))
+            else:
+                self._cloud_name = self._get_cloud_name().lower()
+                LOG.debug(
+                    'Ignoring metadata provided key %s: non-string type %s',
+                    METADATA_CLOUD_NAME_KEY, type(cloud_name))
         else:
             self._cloud_name = self._get_cloud_name().lower()
         return self._cloud_name
@@ -714,6 +778,25 @@ def instance_id_matches_system_uuid(instance_id, field='system-uuid'):
     return instance_id.lower() == dmi_value.lower()
 
 
+def canonical_cloud_id(cloud_name, region, platform):
+    """Lookup the canonical cloud-id for a given cloud_name and region."""
+    if not cloud_name:
+        cloud_name = METADATA_UNKNOWN
+    if not region:
+        region = METADATA_UNKNOWN
+    if region == METADATA_UNKNOWN:
+        if cloud_name != METADATA_UNKNOWN:
+            return cloud_name
+        return platform
+    for prefix, cloud_id_test in CLOUD_ID_REGION_PREFIX_MAP.items():
+        (cloud_id, valid_cloud) = cloud_id_test
+        if region.startswith(prefix) and valid_cloud(cloud_name):
+            return cloud_id
+    if cloud_name != METADATA_UNKNOWN:
+        return cloud_name
+    return platform
+
+
 def convert_vendordata(data, recurse=True):
     """data: a loaded object (strings, arrays, dicts).
     return something suitable for cloudinit vendordata_raw.
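The new `canonical_cloud_id` specializes a cloud-id by region prefix, so for example AWS instances in `cn-*` regions report `aws-china`. The lookup can be exercised standalone by reproducing the map and function from the hunk above:

```python
METADATA_UNKNOWN = 'unknown'
CLOUD_ID_REGION_PREFIX_MAP = {
    'cn-': ('aws-china', lambda c: c == 'aws'),
    'us-gov-': ('aws-gov', lambda c: c == 'aws'),
    'china': ('azure-china', lambda c: c == 'azure'),
}


def canonical_cloud_id(cloud_name, region, platform):
    """Prefer a region-specialized id, then cloud_name, then platform."""
    cloud_name = cloud_name or METADATA_UNKNOWN
    region = region or METADATA_UNKNOWN
    if region == METADATA_UNKNOWN:
        return cloud_name if cloud_name != METADATA_UNKNOWN else platform
    for prefix, (cloud_id, valid_cloud) in CLOUD_ID_REGION_PREFIX_MAP.items():
        if region.startswith(prefix) and valid_cloud(cloud_name):
            return cloud_id
    return cloud_name if cloud_name != METADATA_UNKNOWN else platform


print(canonical_cloud_id('aws', 'cn-north-1', 'ec2'))  # aws-china
print(canonical_cloud_id('aws', 'us-east-1', 'ec2'))   # aws
print(canonical_cloud_id(None, None, 'nocloud'))       # nocloud
```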
diff --git a/cloudinit/sources/helpers/netlink.py b/cloudinit/sources/helpers/netlink.py
new file mode 100644
index 0000000..d377ae3
--- /dev/null
+++ b/cloudinit/sources/helpers/netlink.py
@@ -0,0 +1,250 @@
+# Author: Tamilmani Manoharan <tamanoha@microsoft.com>
+#
+# This file is part of cloud-init. See LICENSE file for license information.
+
+from cloudinit import log as logging
+from cloudinit import util
+from collections import namedtuple
+
+import os
+import select
+import socket
+import struct
+
+LOG = logging.getLogger(__name__)
+
+# http://man7.org/linux/man-pages/man7/netlink.7.html
+RTMGRP_LINK = 1
+NLMSG_NOOP = 1
+NLMSG_ERROR = 2
+NLMSG_DONE = 3
+RTM_NEWLINK = 16
+RTM_DELLINK = 17
+RTM_GETLINK = 18
+RTM_SETLINK = 19
+MAX_SIZE = 65535
+RTA_DATA_OFFSET = 32
+MSG_TYPE_OFFSET = 16
+SELECT_TIMEOUT = 60
+
+NLMSGHDR_FMT = "IHHII"
+IFINFOMSG_FMT = "BHiII"
+NLMSGHDR_SIZE = struct.calcsize(NLMSGHDR_FMT)
+IFINFOMSG_SIZE = struct.calcsize(IFINFOMSG_FMT)
+RTATTR_START_OFFSET = NLMSGHDR_SIZE + IFINFOMSG_SIZE
+RTA_DATA_START_OFFSET = 4
+PAD_ALIGNMENT = 4
+
+IFLA_IFNAME = 3
+IFLA_OPERSTATE = 16
+
+# https://www.kernel.org/doc/Documentation/networking/operstates.txt
+OPER_UNKNOWN = 0
+OPER_NOTPRESENT = 1
+OPER_DOWN = 2
+OPER_LOWERLAYERDOWN = 3
+OPER_TESTING = 4
+OPER_DORMANT = 5
+OPER_UP = 6
+
+RTAAttr = namedtuple('RTAAttr', ['length', 'rta_type', 'data'])
+InterfaceOperstate = namedtuple('InterfaceOperstate', ['ifname', 'operstate'])
+NetlinkHeader = namedtuple('NetlinkHeader', ['length', 'type', 'flags', 'seq',
+                                             'pid'])
+
+
+class NetlinkCreateSocketError(RuntimeError):
+    '''Raised if netlink socket fails during create or bind.'''
+    pass
+
+
+def create_bound_netlink_socket():
+    '''Creates netlink socket and bind on netlink group to catch interface
+    down/up events. The socket will bound only on RTMGRP_LINK (which only
+    includes RTM_NEWLINK/RTM_DELLINK/RTM_GETLINK events). The socket is set to
+    non-blocking mode since we're only receiving messages.
+
+    :returns: netlink socket in non-blocking mode
+    :raises: NetlinkCreateSocketError
+    '''
+    try:
+        netlink_socket = socket.socket(socket.AF_NETLINK,
+                                       socket.SOCK_RAW,
+                                       socket.NETLINK_ROUTE)
+        netlink_socket.bind((os.getpid(), RTMGRP_LINK))
+        netlink_socket.setblocking(0)
+    except socket.error as e:
+        msg = "Exception during netlink socket create: %s" % e
+        raise NetlinkCreateSocketError(msg)
+    LOG.debug("Created netlink socket")
+    return netlink_socket
+
+
+def get_netlink_msg_header(data):
+    '''Gets netlink message type and length
+
+    :param: data read from netlink socket
+    :returns: netlink message type
+    :raises: AssertionError if data is None or data is not >= NLMSGHDR_SIZE
+    struct nlmsghdr {
+               __u32 nlmsg_len;    /* Length of message including header */
+               __u16 nlmsg_type;   /* Type of message content */
+               __u16 nlmsg_flags;  /* Additional flags */
+               __u32 nlmsg_seq;    /* Sequence number */
+               __u32 nlmsg_pid;    /* Sender port ID */
+    };
+    '''
+    assert (data is not None), ("data is none")
+    assert (len(data) >= NLMSGHDR_SIZE), (
+        "data is smaller than netlink message header")
+    msg_len, msg_type, flags, seq, pid = struct.unpack(NLMSGHDR_FMT,
+                                                       data[:MSG_TYPE_OFFSET])
+    LOG.debug("Got netlink msg of type %d", msg_type)
+    return NetlinkHeader(msg_len, msg_type, flags, seq, pid)
+
+
+def read_netlink_socket(netlink_socket, timeout=None):
+    '''Select and read from the netlink socket if ready.
+
+    :param: netlink_socket: specify which socket object to read from
+    :param: timeout: specify a timeout value (integer) to wait while reading,
+            if none, it will block indefinitely until socket ready for read
+    :returns: string of data read (max length = <MAX_SIZE>) from socket,
+              if no data read, returns None
+    :raises: AssertionError if netlink_socket is None
+    '''
+    assert (netlink_socket is not None), ("netlink socket is none")
+    read_set, _, _ = select.select([netlink_socket], [], [], timeout)
+    # Incase of timeout,read_set doesn't contain netlink socket.
+    # just return from this function
+    if netlink_socket not in read_set:
+        return None
+    LOG.debug("netlink socket ready for read")
+    data = netlink_socket.recv(MAX_SIZE)
+    if data is None:
+        LOG.error("Reading from Netlink socket returned no data")
+    return data
+
+
+def unpack_rta_attr(data, offset):
+    '''Unpack a single rta attribute.
+
+    :param: data: string of data read from netlink socket
+    :param: offset: starting offset of RTA Attribute
+    :return: RTAAttr object with length, type and data. On error, return None.
+    :raises: AssertionError if data is None or offset is not integer.
+    '''
+    assert (data is not None), ("data is none")
+    assert (type(offset) == int), ("offset is not integer")
+    assert (offset >= RTATTR_START_OFFSET), (
+        "rta offset is less than expected length")
+    length = rta_type = 0
+    attr_data = None
+    try:
+        length = struct.unpack_from("H", data, offset=offset)[0]
+        rta_type = struct.unpack_from("H", data, offset=offset+2)[0]
+    except struct.error:
+        return None  # Should mean our offset is >= remaining data
+
+    # Unpack just the attribute's data. Offset by 4 to skip length/type header
+    attr_data = data[offset+RTA_DATA_START_OFFSET:offset+length]
+    return RTAAttr(length, rta_type, attr_data)
+
+
+def read_rta_oper_state(data):
+    '''Reads Interface name and operational state from RTA Data.
+
+    :param: data: string of data read from netlink socket
+    :returns: InterfaceOperstate object containing if_name and oper_state.
+              None if data does not contain valid IFLA_OPERSTATE and
+              IFLA_IFNAME messages.
+    :raises: AssertionError if data is None or length of data is
+             smaller than RTATTR_START_OFFSET.
+    '''
+    assert (data is not None), ("data is none")
+    assert (len(data) > RTATTR_START_OFFSET), (
+        "length of data is smaller than RTATTR_START_OFFSET")
+    ifname = operstate = None
+    offset = RTATTR_START_OFFSET
+    while offset <= len(data):
+        attr = unpack_rta_attr(data, offset)
+        if not attr or attr.length == 0:
+            break
+        # Each attribute is 4-byte aligned. Determine pad length.
+        padlen = (PAD_ALIGNMENT -
+                  (attr.length % PAD_ALIGNMENT)) % PAD_ALIGNMENT
+        offset += attr.length + padlen
+
+        if attr.rta_type == IFLA_OPERSTATE:
+            operstate = ord(attr.data)
+        elif attr.rta_type == IFLA_IFNAME:
+            interface_name = util.decode_binary(attr.data, 'utf-8')
+            ifname = interface_name.strip('\0')
+    if not ifname or operstate is None:
+        return None
+    LOG.debug("rta attrs: ifname %s operstate %d", ifname, operstate)
+    return InterfaceOperstate(ifname, operstate)
+
+
+def wait_for_media_disconnect_connect(netlink_socket, ifname):
+    '''Block until media disconnect and connect has happened on an interface.
+    Listens on netlink socket to receive netlink events and when the carrier
+    changes from 0 to 1, it considers event has happened and
+    return from this function
+
+    :param: netlink_socket: netlink_socket to receive events
+    :param: ifname: Interface name to lookout for netlink events
+    :raises: AssertionError if netlink_socket is None or ifname is None.
+    '''
+    assert (netlink_socket is not None), ("netlink socket is none")
+    assert (ifname is not None), ("interface name is none")
+    assert (len(ifname) > 0), ("interface name cannot be empty")
+    carrier = OPER_UP
+    prevCarrier = OPER_UP
+    data = bytes()
+    LOG.debug("Wait for media disconnect and reconnect to happen")
+    while True:
+        recv_data = read_netlink_socket(netlink_socket, SELECT_TIMEOUT)
+        if recv_data is None:
+            continue
+        LOG.debug('read %d bytes from socket', len(recv_data))
+        data += recv_data
+        LOG.debug('Length of data after concat %d', len(data))
+        offset = 0
+        datalen = len(data)
+        while offset < datalen:
+            nl_msg = data[offset:]
+            if len(nl_msg) < NLMSGHDR_SIZE:
+                LOG.debug("Data is smaller than netlink header")
+                break
+            nlheader = get_netlink_msg_header(nl_msg)
+            if len(nl_msg) < nlheader.length:
+                LOG.debug("Partial data. Smaller than netlink message")
+                break
+            padlen = (nlheader.length+PAD_ALIGNMENT-1) & ~(PAD_ALIGNMENT-1)
+            offset = offset + padlen
+            LOG.debug('offset to next netlink message: %d', offset)
+            # Ignore any messages not new link or del link
+            if nlheader.type not in [RTM_NEWLINK, RTM_DELLINK]:
+                continue
+            interface_state = read_rta_oper_state(nl_msg)
+            if interface_state is None:
+                LOG.debug('Failed to read rta attributes: %s', interface_state)
+                continue
+            if interface_state.ifname != ifname:
+                LOG.debug(
+                    "Ignored netlink event on interface %s. Waiting for %s.",
+                    interface_state.ifname, ifname)
+                continue
+            if interface_state.operstate not in [OPER_UP, OPER_DOWN]:
+                continue
+            prevCarrier = carrier
+            carrier = interface_state.operstate
+            # check for carrier down, up sequence
+            isVnetSwitch = (prevCarrier == OPER_DOWN) and (carrier == OPER_UP)
+            if isVnetSwitch:
+                LOG.debug("Media switch happened on %s.", ifname)
+                return
+        data = data[offset:]
+
+# vi: ts=4 expandtab
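The new netlink helper relies on two details worth calling out: the `nlmsghdr` layout unpacked with `struct` format `"IHHII"` (16 bytes), and rounding each message length up to a 4-byte boundary via `(length + PAD_ALIGNMENT - 1) & ~(PAD_ALIGNMENT - 1)`. Both can be exercised standalone with a hand-packed fake header:

```python
import struct

NLMSGHDR_FMT = "IHHII"                          # len, type, flags, seq, pid
NLMSGHDR_SIZE = struct.calcsize(NLMSGHDR_FMT)   # 16 bytes
PAD_ALIGNMENT = 4


def aligned_length(length):
    """Round a netlink message length up to the next 4-byte boundary."""
    return (length + PAD_ALIGNMENT - 1) & ~(PAD_ALIGNMENT - 1)


# Fake header: length=18, type=16 (RTM_NEWLINK), flags=0, seq=1, pid=42,
# followed by padding bytes as a real kernel message would carry.
data = struct.pack(NLMSGHDR_FMT, 18, 16, 0, 1, 42) + b'\x00' * 4
msg_len, msg_type, flags, seq, pid = struct.unpack(NLMSGHDR_FMT,
                                                   data[:NLMSGHDR_SIZE])
print(msg_len, msg_type, aligned_length(msg_len))  # 18 16 20
```

This is why `wait_for_media_disconnect_connect` advances its buffer offset by the aligned length rather than `nlheader.length` itself: messages shorter than a multiple of 4 are padded on the wire.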
diff --git a/cloudinit/sources/helpers/tests/test_netlink.py b/cloudinit/sources/helpers/tests/test_netlink.py
new file mode 100644
index 0000000..c2898a1
--- /dev/null
+++ b/cloudinit/sources/helpers/tests/test_netlink.py
@@ -0,0 +1,373 @@
1# Author: Tamilmani Manoharan <tamanoha@microsoft.com>
2#
3# This file is part of cloud-init. See LICENSE file for license information.
4
5from cloudinit.tests.helpers import CiTestCase, mock
6import socket
7import struct
8import codecs
9from cloudinit.sources.helpers.netlink import (
10 NetlinkCreateSocketError, create_bound_netlink_socket, read_netlink_socket,
11 read_rta_oper_state, unpack_rta_attr, wait_for_media_disconnect_connect,
12 OPER_DOWN, OPER_UP, OPER_DORMANT, OPER_LOWERLAYERDOWN, OPER_NOTPRESENT,
13 OPER_TESTING, OPER_UNKNOWN, RTATTR_START_OFFSET, RTM_NEWLINK, RTM_SETLINK,
14 RTM_GETLINK, MAX_SIZE)
15
16
17def int_to_bytes(i):
18 '''convert integer to binary: eg: 1 to \x01'''
19 hex_value = '{0:x}'.format(i)
20 hex_value = '0' * (len(hex_value) % 2) + hex_value
21 return codecs.decode(hex_value, 'hex_codec')
22
23
24class TestCreateBoundNetlinkSocket(CiTestCase):
25
26 @mock.patch('cloudinit.sources.helpers.netlink.socket.socket')
27 def test_socket_error_on_create(self, m_socket):
28 '''create_bound_netlink_socket raises NetlinkCreateSocketError
29 when socket creation errors.'''
30
31 m_socket.side_effect = socket.error("Fake socket failure")
32 with self.assertRaises(NetlinkCreateSocketError) as ctx_mgr:
33 create_bound_netlink_socket()
34 self.assertEqual(
35 'Exception during netlink socket create: Fake socket failure',
36 str(ctx_mgr.exception))
37
38
39class TestReadNetlinkSocket(CiTestCase):
40
41 @mock.patch('cloudinit.sources.helpers.netlink.socket.socket')
42 @mock.patch('cloudinit.sources.helpers.netlink.select.select')
43 def test_read_netlink_socket(self, m_select, m_socket):
44 '''read_netlink_socket is able to receive data'''
45 data = 'netlinktest'
46 m_select.return_value = [m_socket], None, None
47 m_socket.recv.return_value = data
48 recv_data = read_netlink_socket(m_socket, 2)
49 m_select.assert_called_with([m_socket], [], [], 2)
50 m_socket.recv.assert_called_with(MAX_SIZE)
51 self.assertIsNotNone(recv_data)
52 self.assertEqual(recv_data, data)
53
54 @mock.patch('cloudinit.sources.helpers.netlink.socket.socket')
55 @mock.patch('cloudinit.sources.helpers.netlink.select.select')
56 def test_netlink_read_timeout(self, m_select, m_socket):
57 '''read_netlink_socket should timeout if nothing to read'''
58 m_select.return_value = [], None, None
59 data = read_netlink_socket(m_socket, 1)
60 m_select.assert_called_with([m_socket], [], [], 1)
61 self.assertEqual(m_socket.recv.call_count, 0)
62 self.assertIsNone(data)
63
64 def test_read_invalid_socket(self):
65 '''read_netlink_socket raises assert error if socket is invalid'''
66 socket = None
67 with self.assertRaises(AssertionError) as context:
68 read_netlink_socket(socket, 1)
69 self.assertTrue('netlink socket is none' in str(context.exception))
70
71
72class TestParseNetlinkMessage(CiTestCase):
73
74 def test_read_rta_oper_state(self):
75 '''read_rta_oper_state can parse a netlink message and extract data'''
76 ifname = "eth0"
77 bytes = ifname.encode("utf-8")
78 buf = bytearray(48)
79 struct.pack_into("HH4sHHc", buf, RTATTR_START_OFFSET, 8, 3, bytes, 5,
80 16, int_to_bytes(OPER_DOWN))
81 interface_state = read_rta_oper_state(buf)
82 self.assertEqual(interface_state.ifname, ifname)
83 self.assertEqual(interface_state.operstate, OPER_DOWN)
84
85 def test_read_none_data(self):
86 '''read_rta_oper_state raises assert error if data is none'''
87 data = None
88 with self.assertRaises(AssertionError) as context:
89 read_rta_oper_state(data)
90 self.assertTrue('data is none' in str(context.exception))
91
92 def test_read_invalid_rta_operstate_none(self):
93 '''read_rta_oper_state returns none if operstate is none'''
94 ifname = "eth0"
95 buf = bytearray(40)
96 bytes = ifname.encode("utf-8")
97 struct.pack_into("HH4s", buf, RTATTR_START_OFFSET, 8, 3, bytes)
98 interface_state = read_rta_oper_state(buf)
99 self.assertIsNone(interface_state)
100
101 def test_read_invalid_rta_ifname_none(self):
102 '''read_rta_oper_state returns none if ifname is none'''
103 buf = bytearray(40)
104 struct.pack_into("HHc", buf, RTATTR_START_OFFSET, 5, 16,
105 int_to_bytes(OPER_DOWN))
106 interface_state = read_rta_oper_state(buf)
107 self.assertIsNone(interface_state)
108
109 def test_read_invalid_data_len(self):
110 '''raise assert error if data size is smaller than required size'''
111 buf = bytearray(32)
112 with self.assertRaises(AssertionError) as context:
113 read_rta_oper_state(buf)
114 self.assertTrue('length of data is smaller than RTATTR_START_OFFSET' in
115 str(context.exception))
116
117 def test_unpack_rta_attr_none_data(self):
118 '''unpack_rta_attr raises assert error if data is none'''
119 data = None
120 with self.assertRaises(AssertionError) as context:
121 unpack_rta_attr(data, RTATTR_START_OFFSET)
122 self.assertTrue('data is none' in str(context.exception))
123
124 def test_unpack_rta_attr_invalid_offset(self):
125 '''unpack_rta_attr raises assert error if offset is invalid'''
126 data = bytearray(48)
127 with self.assertRaises(AssertionError) as context:
128 unpack_rta_attr(data, "offset")
129 self.assertTrue('offset is not integer' in str(context.exception))
130 with self.assertRaises(AssertionError) as context:
131 unpack_rta_attr(data, 31)
132 self.assertTrue('rta offset is less than expected length' in
133 str(context.exception))
134
135
136@mock.patch('cloudinit.sources.helpers.netlink.socket.socket')
137@mock.patch('cloudinit.sources.helpers.netlink.read_netlink_socket')
138class TestWaitForMediaDisconnectConnect(CiTestCase):
139 with_logs = True
140
141 def _media_switch_data(self, ifname, msg_type, operstate):
142 '''construct netlink data with specified fields'''
143 if ifname and operstate is not None:
144 data = bytearray(48)
145 bytes = ifname.encode("utf-8")
146 struct.pack_into("HH4sHHc", data, RTATTR_START_OFFSET, 8, 3,
147 bytes, 5, 16, int_to_bytes(operstate))
148 elif ifname:
149 data = bytearray(40)
150 bytes = ifname.encode("utf-8")
151 struct.pack_into("HH4s", data, RTATTR_START_OFFSET, 8, 3, bytes)
152 elif operstate:
153 data = bytearray(40)
154 struct.pack_into("HHc", data, RTATTR_START_OFFSET, 5, 16,
155 int_to_bytes(operstate))
156 struct.pack_into("=LHHLL", data, 0, len(data), msg_type, 0, 0, 0)
157 return data
158
159 def test_media_down_up_scenario(self, m_read_netlink_socket,
160 m_socket):
161 '''Test for media down up sequence for required interface name'''
162 ifname = "eth0"
163 # construct data for Oper State down
164 data_op_down = self._media_switch_data(ifname, RTM_NEWLINK, OPER_DOWN)
165 # construct data for Oper State up
166 data_op_up = self._media_switch_data(ifname, RTM_NEWLINK, OPER_UP)
167 m_read_netlink_socket.side_effect = [data_op_down, data_op_up]
168 wait_for_media_disconnect_connect(m_socket, ifname)
169 self.assertEqual(m_read_netlink_socket.call_count, 2)
170
171 def test_wait_for_media_switch_diff_interface(self, m_read_netlink_socket,
172 m_socket):
173 '''wait_for_media_disconnect_connect ignores unexpected interfaces.
174
175 The first two messages are for other interfaces and the last two are
176 for the expected interface, so the function exits only after
177 receiving the last 2 messages; therefore the call count for
178 m_read_netlink_socket has to be 4
179 '''
180 other_ifname = "eth1"
181 expected_ifname = "eth0"
182 data_op_down_eth1 = self._media_switch_data(
183 other_ifname, RTM_NEWLINK, OPER_DOWN)
184 data_op_up_eth1 = self._media_switch_data(
185 other_ifname, RTM_NEWLINK, OPER_UP)
186 data_op_down_eth0 = self._media_switch_data(
187 expected_ifname, RTM_NEWLINK, OPER_DOWN)
188 data_op_up_eth0 = self._media_switch_data(
189 expected_ifname, RTM_NEWLINK, OPER_UP)
190 m_read_netlink_socket.side_effect = [data_op_down_eth1,
191 data_op_up_eth1,
192 data_op_down_eth0,
193 data_op_up_eth0]
194 wait_for_media_disconnect_connect(m_socket, expected_ifname)
195 self.assertIn('Ignored netlink event on interface %s' % other_ifname,
196 self.logs.getvalue())
197 self.assertEqual(m_read_netlink_socket.call_count, 4)
198
199 def test_invalid_msgtype_getlink(self, m_read_netlink_socket, m_socket):
200 '''wait_for_media_disconnect_connect ignores GETLINK events.
201
202 The first two messages are oper down and up for the RTM_GETLINK type,
203 which the netlink module will ignore. The last 2 messages are
204 RTM_NEWLINK with oper state down and up. Therefore the call count for
205 m_read_netlink_socket has to be 4, counting the first 2 ignored
206 RTM_GETLINK messages
207 '''
208 ifname = "eth0"
209 data_getlink_down = self._media_switch_data(
210 ifname, RTM_GETLINK, OPER_DOWN)
211 data_getlink_up = self._media_switch_data(
212 ifname, RTM_GETLINK, OPER_UP)
213 data_newlink_down = self._media_switch_data(
214 ifname, RTM_NEWLINK, OPER_DOWN)
215 data_newlink_up = self._media_switch_data(
216 ifname, RTM_NEWLINK, OPER_UP)
217 m_read_netlink_socket.side_effect = [data_getlink_down,
218 data_getlink_up,
219 data_newlink_down,
220 data_newlink_up]
221 wait_for_media_disconnect_connect(m_socket, ifname)
222 self.assertEqual(m_read_netlink_socket.call_count, 4)
223
224 def test_invalid_msgtype_setlink(self, m_read_netlink_socket, m_socket):
225 '''wait_for_media_disconnect_connect ignores SETLINK events.
226
227 The first two messages are oper down and up for the RTM_SETLINK type,
228 which the function will ignore. The 3rd and 4th messages are
229 RTM_NEWLINK with oper down and up states. The function should exit
230 after the 4th message since it sees the down->up sequence, so the
231 call count for m_read_netlink_socket has to be 4, with the first 2
232 RTM_SETLINK messages ignored and the trailing 2 RTM_NEWLINK unread
233 '''
234 ifname = "eth0"
235 data_setlink_down = self._media_switch_data(
236 ifname, RTM_SETLINK, OPER_DOWN)
237 data_setlink_up = self._media_switch_data(
238 ifname, RTM_SETLINK, OPER_UP)
239 data_newlink_down = self._media_switch_data(
240 ifname, RTM_NEWLINK, OPER_DOWN)
241 data_newlink_up = self._media_switch_data(
242 ifname, RTM_NEWLINK, OPER_UP)
243 m_read_netlink_socket.side_effect = [data_setlink_down,
244 data_setlink_up,
245 data_newlink_down,
246 data_newlink_up,
247 data_newlink_down,
248 data_newlink_up]
249 wait_for_media_disconnect_connect(m_socket, ifname)
250 self.assertEqual(m_read_netlink_socket.call_count, 4)
251
252 def test_netlink_invalid_switch_scenario(self, m_read_netlink_socket,
253 m_socket):
254 '''returns only if it receives UP event after a DOWN event'''
255 ifname = "eth0"
256 data_op_down = self._media_switch_data(ifname, RTM_NEWLINK, OPER_DOWN)
257 data_op_up = self._media_switch_data(ifname, RTM_NEWLINK, OPER_UP)
258 data_op_dormant = self._media_switch_data(ifname, RTM_NEWLINK,
259 OPER_DORMANT)
260 data_op_notpresent = self._media_switch_data(ifname, RTM_NEWLINK,
261 OPER_NOTPRESENT)
262 data_op_lowerdown = self._media_switch_data(ifname, RTM_NEWLINK,
263 OPER_LOWERLAYERDOWN)
264 data_op_testing = self._media_switch_data(ifname, RTM_NEWLINK,
265 OPER_TESTING)
266 data_op_unknown = self._media_switch_data(ifname, RTM_NEWLINK,
267 OPER_UNKNOWN)
268 m_read_netlink_socket.side_effect = [data_op_up, data_op_up,
269 data_op_dormant, data_op_up,
270 data_op_notpresent, data_op_up,
271 data_op_lowerdown, data_op_up,
272 data_op_testing, data_op_up,
273 data_op_unknown, data_op_up,
274 data_op_down, data_op_up]
275 wait_for_media_disconnect_connect(m_socket, ifname)
276 self.assertEqual(m_read_netlink_socket.call_count, 14)
277
278 def test_netlink_valid_inbetween_transitions(self, m_read_netlink_socket,
279 m_socket):
280 '''wait_for_media_disconnect_connect handles in between transitions'''
281 ifname = "eth0"
282 data_op_down = self._media_switch_data(ifname, RTM_NEWLINK, OPER_DOWN)
283 data_op_up = self._media_switch_data(ifname, RTM_NEWLINK, OPER_UP)
284 data_op_dormant = self._media_switch_data(ifname, RTM_NEWLINK,
285 OPER_DORMANT)
286 data_op_unknown = self._media_switch_data(ifname, RTM_NEWLINK,
287 OPER_UNKNOWN)
288 m_read_netlink_socket.side_effect = [data_op_down, data_op_dormant,
289 data_op_unknown, data_op_up]
290 wait_for_media_disconnect_connect(m_socket, ifname)
291 self.assertEqual(m_read_netlink_socket.call_count, 4)
292
293 def test_netlink_invalid_operstate(self, m_read_netlink_socket, m_socket):
294 '''wait_for_media_disconnect_connect should handle invalid operstates.
295
296 The function should not fail and return even if it receives invalid
297 operstates. It always should wait for down up sequence.
298 '''
299 ifname = "eth0"
300 data_op_down = self._media_switch_data(ifname, RTM_NEWLINK, OPER_DOWN)
301 data_op_up = self._media_switch_data(ifname, RTM_NEWLINK, OPER_UP)
302 data_op_invalid = self._media_switch_data(ifname, RTM_NEWLINK, 7)
303 m_read_netlink_socket.side_effect = [data_op_invalid, data_op_up,
304 data_op_down, data_op_invalid,
305 data_op_up]
306 wait_for_media_disconnect_connect(m_socket, ifname)
307 self.assertEqual(m_read_netlink_socket.call_count, 5)
308
309 def test_wait_invalid_socket(self, m_read_netlink_socket, m_socket):
310 '''wait_for_media_disconnect_connect handles a None netlink socket.'''
311 socket = None
312 ifname = "eth0"
313 with self.assertRaises(AssertionError) as context:
314 wait_for_media_disconnect_connect(socket, ifname)
315 self.assertTrue('netlink socket is none' in str(context.exception))
316
317 def test_wait_invalid_ifname(self, m_read_netlink_socket, m_socket):
318 '''wait_for_media_disconnect_connect handles a None interface name'''
319 ifname = None
320 with self.assertRaises(AssertionError) as context:
321 wait_for_media_disconnect_connect(m_socket, ifname)
322 self.assertTrue('interface name is none' in str(context.exception))
323 ifname = ""
324 with self.assertRaises(AssertionError) as context:
325 wait_for_media_disconnect_connect(m_socket, ifname)
326 self.assertTrue('interface name cannot be empty' in
327 str(context.exception))
328
329 def test_wait_invalid_rta_attr(self, m_read_netlink_socket, m_socket):
330 '''wait_for_media_disconnect_connect handles invalid rta data'''
331 ifname = "eth0"
332 data_invalid1 = self._media_switch_data(None, RTM_NEWLINK, OPER_DOWN)
333 data_invalid2 = self._media_switch_data(ifname, RTM_NEWLINK, None)
334 data_op_down = self._media_switch_data(ifname, RTM_NEWLINK, OPER_DOWN)
335 data_op_up = self._media_switch_data(ifname, RTM_NEWLINK, OPER_UP)
336 m_read_netlink_socket.side_effect = [data_invalid1, data_invalid2,
337 data_op_down, data_op_up]
338 wait_for_media_disconnect_connect(m_socket, ifname)
339 self.assertEqual(m_read_netlink_socket.call_count, 4)
340
341 def test_read_multiple_netlink_msgs(self, m_read_netlink_socket, m_socket):
342 '''Read multiple messages in single receive call'''
343 ifname = "eth0"
344 bytes = ifname.encode("utf-8")
345 data = bytearray(96)
346 struct.pack_into("=LHHLL", data, 0, 48, RTM_NEWLINK, 0, 0, 0)
347 struct.pack_into("HH4sHHc", data, RTATTR_START_OFFSET, 8, 3,
348 bytes, 5, 16, int_to_bytes(OPER_DOWN))
349 struct.pack_into("=LHHLL", data, 48, 48, RTM_NEWLINK, 0, 0, 0)
350 struct.pack_into("HH4sHHc", data, 48 + RTATTR_START_OFFSET, 8,
351 3, bytes, 5, 16, int_to_bytes(OPER_UP))
352 m_read_netlink_socket.return_value = data
353 wait_for_media_disconnect_connect(m_socket, ifname)
354 self.assertEqual(m_read_netlink_socket.call_count, 1)
355
356 def test_read_partial_netlink_msgs(self, m_read_netlink_socket, m_socket):
357 '''Read partial messages in receive call'''
358 ifname = "eth0"
359 bytes = ifname.encode("utf-8")
360 data1 = bytearray(112)
361 data2 = bytearray(32)
362 struct.pack_into("=LHHLL", data1, 0, 48, RTM_NEWLINK, 0, 0, 0)
363 struct.pack_into("HH4sHHc", data1, RTATTR_START_OFFSET, 8, 3,
364 bytes, 5, 16, int_to_bytes(OPER_DOWN))
365 struct.pack_into("=LHHLL", data1, 48, 48, RTM_NEWLINK, 0, 0, 0)
366 struct.pack_into("HH4sHHc", data1, 80, 8, 3, bytes, 5, 16,
367 int_to_bytes(OPER_DOWN))
368 struct.pack_into("=LHHLL", data1, 96, 48, RTM_NEWLINK, 0, 0, 0)
369 struct.pack_into("HH4sHHc", data2, 16, 8, 3, bytes, 5, 16,
370 int_to_bytes(OPER_UP))
371 m_read_netlink_socket.side_effect = [data1, data2]
372 wait_for_media_disconnect_connect(m_socket, ifname)
373 self.assertEqual(m_read_netlink_socket.call_count, 2)
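The tests above fabricate raw netlink buffers with `struct.pack_into`: a 16-byte header (`=LHHLL`: length, type, flags, seq, pid) followed by rtattr data. A minimal self-contained illustration of the header round-trip (RTM_NEWLINK's value, 16, comes from `linux/rtnetlink.h`):

```python
import struct

NLMSGHDR_FMT = "=LHHLL"  # length, type, flags, seq, pid
RTM_NEWLINK = 16         # from linux/rtnetlink.h

# Build a 48-byte buffer whose header claims the whole buffer as a
# single RTM_NEWLINK message, much as _media_switch_data does above.
buf = bytearray(48)
struct.pack_into(NLMSGHDR_FMT, buf, 0, len(buf), RTM_NEWLINK, 0, 0, 0)

# Parsing the header back recovers the fields the helper dispatches on.
length, msg_type, flags, seq, pid = struct.unpack_from(NLMSGHDR_FMT, buf, 0)
```

The rtattr payload (interface name, oper state) is then packed at `RTATTR_START_OFFSET` past this header, which is why the test buffers are 40 to 48 bytes.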
diff --git a/cloudinit/sources/helpers/vmware/imc/config_nic.py b/cloudinit/sources/helpers/vmware/imc/config_nic.py
index e1890e2..77cbf3b 100644
--- a/cloudinit/sources/helpers/vmware/imc/config_nic.py
+++ b/cloudinit/sources/helpers/vmware/imc/config_nic.py
@@ -165,9 +165,8 @@ class NicConfigurator(object):
165165
166 # Add routes if there is no primary nic166 # Add routes if there is no primary nic
167 if not self._primaryNic and v4.gateways:167 if not self._primaryNic and v4.gateways:
168 route_list.extend(self.gen_ipv4_route(nic,168 subnet.update(
169 v4.gateways,169 {'routes': self.gen_ipv4_route(nic, v4.gateways, v4.netmask)})
170 v4.netmask))
171170
172 return ([subnet], route_list)171 return ([subnet], route_list)
173172
diff --git a/cloudinit/sources/tests/test_init.py b/cloudinit/sources/tests/test_init.py
index 8082019..6378e98 100644
--- a/cloudinit/sources/tests/test_init.py
+++ b/cloudinit/sources/tests/test_init.py
@@ -11,7 +11,8 @@ from cloudinit.helpers import Paths
11from cloudinit import importer11from cloudinit import importer
12from cloudinit.sources import (12from cloudinit.sources import (
13 EXPERIMENTAL_TEXT, INSTANCE_JSON_FILE, INSTANCE_JSON_SENSITIVE_FILE,13 EXPERIMENTAL_TEXT, INSTANCE_JSON_FILE, INSTANCE_JSON_SENSITIVE_FILE,
14 REDACT_SENSITIVE_VALUE, UNSET, DataSource, redact_sensitive_keys)14 METADATA_UNKNOWN, REDACT_SENSITIVE_VALUE, UNSET, DataSource,
15 canonical_cloud_id, redact_sensitive_keys)
15from cloudinit.tests.helpers import CiTestCase, skipIf, mock16from cloudinit.tests.helpers import CiTestCase, skipIf, mock
16from cloudinit.user_data import UserDataProcessor17from cloudinit.user_data import UserDataProcessor
17from cloudinit import util18from cloudinit import util
@@ -295,6 +296,7 @@ class TestDataSource(CiTestCase):
295 'base64_encoded_keys': [],296 'base64_encoded_keys': [],
296 'sensitive_keys': [],297 'sensitive_keys': [],
297 'v1': {298 'v1': {
299 '_beta_keys': ['subplatform'],
298 'availability-zone': 'myaz',300 'availability-zone': 'myaz',
299 'availability_zone': 'myaz',301 'availability_zone': 'myaz',
300 'cloud-name': 'subclasscloudname',302 'cloud-name': 'subclasscloudname',
@@ -303,7 +305,10 @@ class TestDataSource(CiTestCase):
303 'instance_id': 'iid-datasource',305 'instance_id': 'iid-datasource',
304 'local-hostname': 'test-subclass-hostname',306 'local-hostname': 'test-subclass-hostname',
305 'local_hostname': 'test-subclass-hostname',307 'local_hostname': 'test-subclass-hostname',
306 'region': 'myregion'},308 'platform': 'mytestsubclass',
309 'public_ssh_keys': [],
310 'region': 'myregion',
311 'subplatform': 'unknown'},
307 'ds': {312 'ds': {
308 '_doc': EXPERIMENTAL_TEXT,313 '_doc': EXPERIMENTAL_TEXT,
309 'meta_data': {'availability_zone': 'myaz',314 'meta_data': {'availability_zone': 'myaz',
@@ -339,6 +344,7 @@ class TestDataSource(CiTestCase):
339 'base64_encoded_keys': [],344 'base64_encoded_keys': [],
340 'sensitive_keys': ['ds/meta_data/some/security-credentials'],345 'sensitive_keys': ['ds/meta_data/some/security-credentials'],
341 'v1': {346 'v1': {
347 '_beta_keys': ['subplatform'],
342 'availability-zone': 'myaz',348 'availability-zone': 'myaz',
343 'availability_zone': 'myaz',349 'availability_zone': 'myaz',
344 'cloud-name': 'subclasscloudname',350 'cloud-name': 'subclasscloudname',
@@ -347,7 +353,10 @@ class TestDataSource(CiTestCase):
347 'instance_id': 'iid-datasource',353 'instance_id': 'iid-datasource',
348 'local-hostname': 'test-subclass-hostname',354 'local-hostname': 'test-subclass-hostname',
349 'local_hostname': 'test-subclass-hostname',355 'local_hostname': 'test-subclass-hostname',
350 'region': 'myregion'},356 'platform': 'mytestsubclass',
357 'public_ssh_keys': [],
358 'region': 'myregion',
359 'subplatform': 'unknown'},
351 'ds': {360 'ds': {
352 '_doc': EXPERIMENTAL_TEXT,361 '_doc': EXPERIMENTAL_TEXT,
353 'meta_data': {362 'meta_data': {
@@ -599,4 +608,75 @@ class TestRedactSensitiveData(CiTestCase):
599 redact_sensitive_keys(md))608 redact_sensitive_keys(md))
600609
601610
611class TestCanonicalCloudID(CiTestCase):
612
613 def test_cloud_id_returns_platform_on_unknowns(self):
614 """When region and cloud_name are unknown, return platform."""
615 self.assertEqual(
616 'platform',
617 canonical_cloud_id(cloud_name=METADATA_UNKNOWN,
618 region=METADATA_UNKNOWN,
619 platform='platform'))
620
621 def test_cloud_id_returns_platform_on_none(self):
622 """When region and cloud_name are unknown, return platform."""
623 self.assertEqual(
624 'platform',
625 canonical_cloud_id(cloud_name=None,
626 region=None,
627 platform='platform'))
628
629 def test_cloud_id_returns_cloud_name_on_unknown_region(self):
630 """When region is unknown, return cloud_name."""
631 for region in (None, METADATA_UNKNOWN):
632 self.assertEqual(
633 'cloudname',
634 canonical_cloud_id(cloud_name='cloudname',
635 region=region,
636 platform='platform'))
637
638 def test_cloud_id_returns_platform_on_unknown_cloud_name(self):
639 """When region is set but cloud_name is unknown return cloud_name."""
640 self.assertEqual(
641 'platform',
642 canonical_cloud_id(cloud_name=METADATA_UNKNOWN,
643 region='region',
644 platform='platform'))
645
646 def test_cloud_id_aws_based_on_region_and_cloud_name(self):
647 """When cloud_name is aws, return proper cloud-id based on region."""
648 self.assertEqual(
649 'aws-china',
650 canonical_cloud_id(cloud_name='aws',
651 region='cn-north-1',
652 platform='platform'))
653 self.assertEqual(
654 'aws',
655 canonical_cloud_id(cloud_name='aws',
656 region='us-east-1',
657 platform='platform'))
658 self.assertEqual(
659 'aws-gov',
660 canonical_cloud_id(cloud_name='aws',
661 region='us-gov-1',
662 platform='platform'))
663 self.assertEqual( # Overridden non-aws cloud_name is returned
664 '!aws',
665 canonical_cloud_id(cloud_name='!aws',
666 region='us-gov-1',
667 platform='platform'))
668
669 def test_cloud_id_azure_based_on_region_and_cloud_name(self):
670 """Report cloud-id when cloud_name is azure and region is in china."""
671 self.assertEqual(
672 'azure-china',
673 canonical_cloud_id(cloud_name='azure',
674 region='chinaeast',
675 platform='platform'))
676 self.assertEqual(
677 'azure',
678 canonical_cloud_id(cloud_name='azure',
679 region='!chinaeast',
680 platform='platform'))
681
602# vi: ts=4 expandtab682# vi: ts=4 expandtab
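Read together, the new `TestCanonicalCloudID` cases pin down a precedence: an unknown cloud_name falls back to platform, an unknown region falls back to cloud_name, and a handful of aws/azure regions map to suffixed ids. A sketch of that precedence as the assertions imply it (the function name and the exact region checks here are inferred from the tests, not copied from the implementation):

```python
METADATA_UNKNOWN = 'unknown'  # assumed to match cloudinit.sources

def canonical_cloud_id_sketch(cloud_name, region, platform):
    """Inferred precedence: platform < cloud_name < region-specific id."""
    if cloud_name in (None, METADATA_UNKNOWN):
        return platform                      # nothing better known
    if region in (None, METADATA_UNKNOWN):
        return cloud_name                    # no region to specialize on
    if cloud_name == 'aws':
        if region.startswith('cn-'):
            return 'aws-china'
        if region.startswith('us-gov-'):
            return 'aws-gov'
    if cloud_name == 'azure' and region.startswith('china'):
        return 'azure-china'
    return cloud_name                        # e.g. '!aws' stays '!aws'
```

Any cloud_name other than the special-cased 'aws'/'azure' is returned verbatim, matching the `'!aws'` and `'!chinaeast'` assertions above.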
diff --git a/cloudinit/sources/tests/test_oracle.py b/cloudinit/sources/tests/test_oracle.py
index 7599126..97d6294 100644
--- a/cloudinit/sources/tests/test_oracle.py
+++ b/cloudinit/sources/tests/test_oracle.py
@@ -71,6 +71,14 @@ class TestDataSourceOracle(test_helpers.CiTestCase):
71 self.assertFalse(ds._get_data())71 self.assertFalse(ds._get_data())
72 mocks._is_platform_viable.assert_called_once_with()72 mocks._is_platform_viable.assert_called_once_with()
7373
74 def test_platform_info(self):
75 """Return platform-related information for Oracle Datasource."""
76 ds, _mocks = self._get_ds()
77 self.assertEqual('oracle', ds.cloud_name)
78 self.assertEqual('oracle', ds.platform_type)
79 self.assertEqual(
80 'metadata (http://169.254.169.254/openstack/)', ds.subplatform)
81
74 @mock.patch(DS_PATH + "._is_iscsi_root", return_value=True)82 @mock.patch(DS_PATH + "._is_iscsi_root", return_value=True)
75 def test_without_userdata(self, m_is_iscsi_root):83 def test_without_userdata(self, m_is_iscsi_root):
76 """If no user-data is provided, it should not be in return dict."""84 """If no user-data is provided, it should not be in return dict."""
diff --git a/cloudinit/temp_utils.py b/cloudinit/temp_utils.py
index c98a1b5..346276e 100644
--- a/cloudinit/temp_utils.py
+++ b/cloudinit/temp_utils.py
@@ -81,7 +81,7 @@ def ExtendedTemporaryFile(**kwargs):
8181
8282
83@contextlib.contextmanager83@contextlib.contextmanager
84def tempdir(**kwargs):84def tempdir(rmtree_ignore_errors=False, **kwargs):
85 # This seems like it was only added in python 3.285 # This seems like it was only added in python 3.2
86 # Make it since its useful...86 # Make it since its useful...
87 # See: http://bugs.python.org/file12970/tempdir.patch87 # See: http://bugs.python.org/file12970/tempdir.patch
@@ -89,7 +89,7 @@ def tempdir(**kwargs):
89 try:89 try:
90 yield tdir90 yield tdir
91 finally:91 finally:
92 shutil.rmtree(tdir)92 shutil.rmtree(tdir, ignore_errors=rmtree_ignore_errors)
9393
9494
95def mkdtemp(**kwargs):95def mkdtemp(**kwargs):
diff --git a/cloudinit/tests/test_dhclient_hook.py b/cloudinit/tests/test_dhclient_hook.py
new file mode 100644
index 0000000..7aab8dd
--- /dev/null
+++ b/cloudinit/tests/test_dhclient_hook.py
@@ -0,0 +1,105 @@
1# This file is part of cloud-init. See LICENSE file for license information.
2
3"""Tests for cloudinit.dhclient_hook."""
4
5from cloudinit import dhclient_hook as dhc
6from cloudinit.tests.helpers import CiTestCase, dir2dict, populate_dir
7
8import argparse
9import json
10import mock
11import os
12
13
14class TestDhclientHook(CiTestCase):
15
16 ex_env = {
17 'interface': 'eth0',
18 'new_dhcp_lease_time': '3600',
19 'new_host_name': 'x1',
20 'new_ip_address': '10.145.210.163',
21 'new_subnet_mask': '255.255.255.0',
22 'old_host_name': 'x1',
23 'PATH': '/usr/sbin:/usr/bin:/sbin:/bin',
24 'pid': '614',
25 'reason': 'BOUND',
26 }
27
28 # some older versions of dhclient put the same content,
29 # but in upper case with DHCP4_ instead of new_
30 ex_env_dhcp4 = {
31 'REASON': 'BOUND',
32 'DHCP4_dhcp_lease_time': '3600',
33 'DHCP4_host_name': 'x1',
34 'DHCP4_ip_address': '10.145.210.163',
35 'DHCP4_subnet_mask': '255.255.255.0',
36 'INTERFACE': 'eth0',
37 'PATH': '/usr/sbin:/usr/bin:/sbin:/bin',
38 'pid': '614',
39 }
40
41 expected = {
42 'dhcp_lease_time': '3600',
43 'host_name': 'x1',
44 'ip_address': '10.145.210.163',
45 'subnet_mask': '255.255.255.0'}
46
47 def setUp(self):
48 super(TestDhclientHook, self).setUp()
49 self.tmp = self.tmp_dir()
50
51 def test_handle_args(self):
52 """quick test of call to handle_args."""
53 nic = 'eth0'
54 args = argparse.Namespace(event=dhc.UP, interface=nic)
55 with mock.patch.dict("os.environ", clear=True, values=self.ex_env):
56 dhc.handle_args(dhc.NAME, args, data_d=self.tmp)
57 found = dir2dict(self.tmp + os.path.sep)
58 self.assertEqual([nic + ".json"], list(found.keys()))
59 self.assertEqual(self.expected, json.loads(found[nic + ".json"]))
60
61 def test_run_hook_up_creates_dir(self):
62 """If dir does not exist, run_hook should create it."""
63 subd = self.tmp_path("subdir", self.tmp)
64 nic = 'eth1'
65 dhc.run_hook(nic, 'up', data_d=subd, env=self.ex_env)
66 self.assertEqual(
67 set([nic + ".json"]), set(dir2dict(subd + os.path.sep)))
68
69 def test_run_hook_up(self):
70 """Test expected use of run_hook_up."""
71 nic = 'eth0'
72 dhc.run_hook(nic, 'up', data_d=self.tmp, env=self.ex_env)
73 found = dir2dict(self.tmp + os.path.sep)
74 self.assertEqual([nic + ".json"], list(found.keys()))
75 self.assertEqual(self.expected, json.loads(found[nic + ".json"]))
76
77 def test_run_hook_up_dhcp4_prefix(self):
78 """Test run_hook filters correctly with older DHCP4_ data."""
79 nic = 'eth0'
80 dhc.run_hook(nic, 'up', data_d=self.tmp, env=self.ex_env_dhcp4)
81 found = dir2dict(self.tmp + os.path.sep)
82 self.assertEqual([nic + ".json"], list(found.keys()))
83 self.assertEqual(self.expected, json.loads(found[nic + ".json"]))
84
85 def test_run_hook_down_deletes(self):
86 """down should delete the created json file."""
87 nic = 'eth1'
88 populate_dir(
89 self.tmp, {nic + ".json": "{'abcd'}", 'myfile.txt': 'text'})
90 dhc.run_hook(nic, 'down', data_d=self.tmp, env={'old_host_name': 'x1'})
91 self.assertEqual(
92 set(['myfile.txt']),
93 set(dir2dict(self.tmp + os.path.sep)))
94
95 def test_get_parser(self):
96 """Smoke test creation of get_parser."""
97 # cloud-init main uses 'action'.
98 event, interface = (dhc.UP, 'mynic0')
99 self.assertEqual(
100 argparse.Namespace(event=event, interface=interface,
101 action=(dhc.NAME, dhc.handle_args)),
102 dhc.get_parser().parse_args([event, interface]))
103
104
105# vi: ts=4 expandtab
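The fixtures above encode the behavior under test: dhclient exports lease fields as `new_*` keys (or `DHCP4_*` on older clients), and the hook keeps only those, with the prefix stripped, before writing `<nic>.json`. A sketch of that filtering under those assumptions (the function name is illustrative, not the module's actual helper):

```python
def filter_lease_env(env):
    """Keep only new_*/DHCP4_* keys, stripped of their prefix."""
    lease = {}
    for prefix in ('new_', 'DHCP4_'):
        for key, value in env.items():
            if key.startswith(prefix):
                lease[key[len(prefix):]] = value
    return lease
```

Unprefixed keys such as `interface`, `pid`, `reason`, and `old_host_name` fall out, leaving the four lease fields the tests expect in the JSON file.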
diff --git a/cloudinit/tests/test_temp_utils.py b/cloudinit/tests/test_temp_utils.py
index ffbb92c..4a52ef8 100644
--- a/cloudinit/tests/test_temp_utils.py
+++ b/cloudinit/tests/test_temp_utils.py
@@ -2,8 +2,9 @@
22
3"""Tests for cloudinit.temp_utils"""3"""Tests for cloudinit.temp_utils"""
44
5from cloudinit.temp_utils import mkdtemp, mkstemp5from cloudinit.temp_utils import mkdtemp, mkstemp, tempdir
6from cloudinit.tests.helpers import CiTestCase, wrap_and_call6from cloudinit.tests.helpers import CiTestCase, wrap_and_call
7import os
78
89
9class TestTempUtils(CiTestCase):10class TestTempUtils(CiTestCase):
@@ -98,4 +99,19 @@ class TestTempUtils(CiTestCase):
98 self.assertEqual('/fake/return/path', retval)99 self.assertEqual('/fake/return/path', retval)
99 self.assertEqual([{'dir': '/run/cloud-init/tmp'}], calls)100 self.assertEqual([{'dir': '/run/cloud-init/tmp'}], calls)
100101
102 def test_tempdir_error_suppression(self):
103 """test tempdir suppresses errors during directory removal."""
104
105 with self.assertRaises(OSError):
106 with tempdir(prefix='cloud-init-dhcp-') as tdir:
107 os.rmdir(tdir)
108 # As a result, the directory is already gone,
109 # so shutil.rmtree should raise OSError
110
111 with tempdir(rmtree_ignore_errors=True,
112 prefix='cloud-init-dhcp-') as tdir:
113 os.rmdir(tdir)
114 # Since the directory is already gone, shutil.rmtree would raise
115 # OSError, but we suppress that
116
101# vi: ts=4 expandtab117# vi: ts=4 expandtab
diff --git a/cloudinit/tests/test_url_helper.py b/cloudinit/tests/test_url_helper.py
index 113249d..aa9f3ec 100644
--- a/cloudinit/tests/test_url_helper.py
+++ b/cloudinit/tests/test_url_helper.py
@@ -1,10 +1,12 @@
1# This file is part of cloud-init. See LICENSE file for license information.1# This file is part of cloud-init. See LICENSE file for license information.
22
3from cloudinit.url_helper import oauth_headers, read_file_or_url3from cloudinit.url_helper import (
4 NOT_FOUND, UrlError, oauth_headers, read_file_or_url, retry_on_url_exc)
4from cloudinit.tests.helpers import CiTestCase, mock, skipIf5from cloudinit.tests.helpers import CiTestCase, mock, skipIf
5from cloudinit import util6from cloudinit import util
67
7import httpretty8import httpretty
9import requests
810
911
10try:12try:
@@ -64,3 +66,24 @@ class TestReadFileOrUrl(CiTestCase):
64 result = read_file_or_url(url)66 result = read_file_or_url(url)
65 self.assertEqual(result.contents, data)67 self.assertEqual(result.contents, data)
66 self.assertEqual(str(result), data.decode('utf-8'))68 self.assertEqual(str(result), data.decode('utf-8'))
69
70
71class TestRetryOnUrlExc(CiTestCase):
72
73 def test_do_not_retry_non_urlerror(self):
74 """When exception is not UrlError return False."""
75 myerror = IOError('something unexpected')
76 self.assertFalse(retry_on_url_exc(msg='', exc=myerror))
77
78 def test_perform_retries_on_not_found(self):
79 """When exception is UrlError with a 404 status code return True."""
80 myerror = UrlError(cause=RuntimeError(
81 'something was not found'), code=NOT_FOUND)
82 self.assertTrue(retry_on_url_exc(msg='', exc=myerror))
83
84 def test_perform_retries_on_timeout(self):
85 """When exception is a requests.Timout return True."""
86 myerror = UrlError(cause=requests.Timeout('something timed out'))
87 self.assertTrue(retry_on_url_exc(msg='', exc=myerror))
88
89# vi: ts=4 expandtab
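The new `TestRetryOnUrlExc` cases describe `retry_on_url_exc`'s contract: retry only for a `UrlError` carrying a 404 code or a timeout cause. A dependency-free sketch of that predicate (the `UrlError` class here is a stand-in, and the built-in `TimeoutError` stands in for `requests.Timeout`):

```python
NOT_FOUND = 404

class UrlError(IOError):
    """Illustrative stand-in for cloudinit.url_helper.UrlError."""
    def __init__(self, cause, code=None):
        super(UrlError, self).__init__(str(cause))
        self.cause = cause
        self.code = code

def retry_on_url_exc(msg, exc):
    """Return True only for UrlErrors worth retrying: 404s or timeouts."""
    if not isinstance(exc, UrlError):
        return False                       # e.g. a bare IOError
    if exc.code == NOT_FOUND:
        return True                        # resource may appear later
    return isinstance(exc.cause, TimeoutError)
```

The predicate signature (`msg`, `exc`) matches the keyword arguments the tests pass.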
diff --git a/cloudinit/tests/test_util.py b/cloudinit/tests/test_util.py
index edb0c18..e3d2dba 100644
--- a/cloudinit/tests/test_util.py
+++ b/cloudinit/tests/test_util.py
@@ -18,25 +18,51 @@ MOUNT_INFO = [
 ]
 
 OS_RELEASE_SLES = dedent("""\
-    NAME="SLES"\n
-    VERSION="12-SP3"\n
-    VERSION_ID="12.3"\n
-    PRETTY_NAME="SUSE Linux Enterprise Server 12 SP3"\n
-    ID="sles"\nANSI_COLOR="0;32"\n
-    CPE_NAME="cpe:/o:suse:sles:12:sp3"\n
+    NAME="SLES"
+    VERSION="12-SP3"
+    VERSION_ID="12.3"
+    PRETTY_NAME="SUSE Linux Enterprise Server 12 SP3"
+    ID="sles"
+    ANSI_COLOR="0;32"
+    CPE_NAME="cpe:/o:suse:sles:12:sp3"
 """)
 
 OS_RELEASE_OPENSUSE = dedent("""\
-NAME="openSUSE Leap"
-VERSION="42.3"
-ID=opensuse
-ID_LIKE="suse"
-VERSION_ID="42.3"
-PRETTY_NAME="openSUSE Leap 42.3"
-ANSI_COLOR="0;32"
-CPE_NAME="cpe:/o:opensuse:leap:42.3"
-BUG_REPORT_URL="https://bugs.opensuse.org"
-HOME_URL="https://www.opensuse.org/"
+    NAME="openSUSE Leap"
+    VERSION="42.3"
+    ID=opensuse
+    ID_LIKE="suse"
+    VERSION_ID="42.3"
+    PRETTY_NAME="openSUSE Leap 42.3"
+    ANSI_COLOR="0;32"
+    CPE_NAME="cpe:/o:opensuse:leap:42.3"
+    BUG_REPORT_URL="https://bugs.opensuse.org"
+    HOME_URL="https://www.opensuse.org/"
+""")
+
+OS_RELEASE_OPENSUSE_L15 = dedent("""\
+    NAME="openSUSE Leap"
+    VERSION="15.0"
+    ID="opensuse-leap"
+    ID_LIKE="suse opensuse"
+    VERSION_ID="15.0"
+    PRETTY_NAME="openSUSE Leap 15.0"
+    ANSI_COLOR="0;32"
+    CPE_NAME="cpe:/o:opensuse:leap:15.0"
+    BUG_REPORT_URL="https://bugs.opensuse.org"
+    HOME_URL="https://www.opensuse.org/"
+""")
+
+OS_RELEASE_OPENSUSE_TW = dedent("""\
+    NAME="openSUSE Tumbleweed"
+    ID="opensuse-tumbleweed"
+    ID_LIKE="opensuse suse"
+    VERSION_ID="20180920"
+    PRETTY_NAME="openSUSE Tumbleweed"
+    ANSI_COLOR="0;32"
+    CPE_NAME="cpe:/o:opensuse:tumbleweed:20180920"
+    BUG_REPORT_URL="https://bugs.opensuse.org"
+    HOME_URL="https://www.opensuse.org/"
 """)
 
 OS_RELEASE_CENTOS = dedent("""\
@@ -447,12 +473,35 @@ class TestGetLinuxDistro(CiTestCase):
 
     @mock.patch('cloudinit.util.load_file')
     def test_get_linux_opensuse(self, m_os_release, m_path_exists):
-        """Verify we get the correct name and machine arch on OpenSUSE."""
+        """Verify we get the correct name and machine arch on openSUSE
+        prior to openSUSE Leap 15.
+        """
         m_os_release.return_value = OS_RELEASE_OPENSUSE
         m_path_exists.side_effect = TestGetLinuxDistro.os_release_exists
         dist = util.get_linux_distro()
         self.assertEqual(('opensuse', '42.3', platform.machine()), dist)
 
+    @mock.patch('cloudinit.util.load_file')
+    def test_get_linux_opensuse_l15(self, m_os_release, m_path_exists):
+        """Verify we get the correct name and machine arch on openSUSE
+        for openSUSE Leap 15.0 and later.
+        """
+        m_os_release.return_value = OS_RELEASE_OPENSUSE_L15
+        m_path_exists.side_effect = TestGetLinuxDistro.os_release_exists
+        dist = util.get_linux_distro()
+        self.assertEqual(('opensuse-leap', '15.0', platform.machine()), dist)
+
+    @mock.patch('cloudinit.util.load_file')
+    def test_get_linux_opensuse_tw(self, m_os_release, m_path_exists):
+        """Verify we get the correct name and machine arch on openSUSE
+        for openSUSE Tumbleweed.
+        """
+        m_os_release.return_value = OS_RELEASE_OPENSUSE_TW
+        m_path_exists.side_effect = TestGetLinuxDistro.os_release_exists
+        dist = util.get_linux_distro()
+        self.assertEqual(
+            ('opensuse-tumbleweed', '20180920', platform.machine()), dist)
+
     @mock.patch('platform.dist')
     def test_get_linux_distro_no_data(self, m_platform_dist, m_path_exists):
         """Verify we get no information if os-release does not exist"""
@@ -478,4 +527,20 @@ class TestGetLinuxDistro(CiTestCase):
         dist = util.get_linux_distro()
         self.assertEqual(('foo', '1.1', 'aarch64'), dist)
 
+
+@mock.patch('os.path.exists')
+class TestIsLXD(CiTestCase):
+
+    def test_is_lxd_true_on_sock_device(self, m_exists):
+        """When lxd's /dev/lxd/sock exists, is_lxd returns true."""
+        m_exists.return_value = True
+        self.assertTrue(util.is_lxd())
+        m_exists.assert_called_once_with('/dev/lxd/sock')
+
+    def test_is_lxd_false_when_sock_device_absent(self, m_exists):
+        """When lxd's /dev/lxd/sock is absent, is_lxd returns false."""
+        m_exists.return_value = False
+        self.assertFalse(util.is_lxd())
+        m_exists.assert_called_once_with('/dev/lxd/sock')
+
 # vi: ts=4 expandtab
diff --git a/cloudinit/url_helper.py b/cloudinit/url_helper.py
index 8067979..396d69a 100644
--- a/cloudinit/url_helper.py
+++ b/cloudinit/url_helper.py
@@ -199,7 +199,7 @@ def _get_ssl_args(url, ssl_details):
 def readurl(url, data=None, timeout=None, retries=0, sec_between=1,
             headers=None, headers_cb=None, ssl_details=None,
             check_status=True, allow_redirects=True, exception_cb=None,
-            session=None, infinite=False):
+            session=None, infinite=False, log_req_resp=True):
     url = _cleanurl(url)
     req_args = {
         'url': url,
@@ -256,9 +256,11 @@ def readurl(url, data=None, timeout=None, retries=0, sec_between=1,
                 continue
             filtered_req_args[k] = v
         try:
-            LOG.debug("[%s/%s] open '%s' with %s configuration", i,
-                      "infinite" if infinite else manual_tries, url,
-                      filtered_req_args)
+
+            if log_req_resp:
+                LOG.debug("[%s/%s] open '%s' with %s configuration", i,
+                          "infinite" if infinite else manual_tries, url,
+                          filtered_req_args)
 
             if session is None:
                 session = requests.Session()
@@ -294,8 +296,11 @@ def readurl(url, data=None, timeout=None, retries=0, sec_between=1,
             break
         if (infinite and sec_between > 0) or \
            (i + 1 < manual_tries and sec_between > 0):
-            LOG.debug("Please wait %s seconds while we wait to try again",
-                      sec_between)
+
+            if log_req_resp:
+                LOG.debug(
+                    "Please wait %s seconds while we wait to try again",
+                    sec_between)
             time.sleep(sec_between)
     if excps:
         raise excps[-1]
@@ -549,4 +554,18 @@ def oauth_headers(url, consumer_key, token_key, token_secret, consumer_secret,
     _uri, signed_headers, _body = client.sign(url)
     return signed_headers
 
+
+def retry_on_url_exc(msg, exc):
+    """readurl exception_cb that will retry on NOT_FOUND and Timeout.
+
+    Returns False to raise the exception from readurl, True to retry.
+    """
+    if not isinstance(exc, UrlError):
+        return False
+    if exc.code == NOT_FOUND:
+        return True
+    if exc.cause and isinstance(exc.cause, requests.Timeout):
+        return True
+    return False
+
 # vi: ts=4 expandtab
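The ``retry_on_url_exc`` helper added above is meant to be passed as ``readurl``'s ``exception_cb``. A minimal standalone sketch of its decision logic follows; ``UrlError`` and ``Timeout`` here are simplified stand-ins for ``cloudinit.url_helper.UrlError`` and ``requests.Timeout``, since neither package is imported:

```python
NOT_FOUND = 404


class UrlError(IOError):
    """Minimal stand-in for cloudinit.url_helper.UrlError."""
    def __init__(self, cause, code=None):
        super().__init__(str(cause))
        self.cause = cause
        self.code = code


class Timeout(Exception):
    """Stand-in for requests.Timeout."""


def retry_on_url_exc(msg, exc):
    """Return True when readurl should retry, False to re-raise."""
    if not isinstance(exc, UrlError):
        return False  # non-UrlError exceptions are fatal
    if exc.code == NOT_FOUND:
        return True   # 404: resource not ready yet, keep polling
    if exc.cause and isinstance(exc.cause, Timeout):
        return True   # network timeout: worth retrying
    return False


# Decision table mirroring the new unit tests in this branch:
assert retry_on_url_exc('', IOError('boom')) is False
assert retry_on_url_exc('', UrlError(RuntimeError('nf'), code=NOT_FOUND))
assert retry_on_url_exc('', UrlError(Timeout('slow')))
```

A caller would wire the real helper in as ``readurl(url, exception_cb=retry_on_url_exc, retries=5)``, pairing it with the new ``log_req_resp=False`` flag when polling quietly.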
diff --git a/cloudinit/util.py b/cloudinit/util.py
index 5068096..a8a232b 100644
--- a/cloudinit/util.py
+++ b/cloudinit/util.py
@@ -615,8 +615,8 @@ def get_linux_distro():
         distro_name = os_release.get('ID', '')
         distro_version = os_release.get('VERSION_ID', '')
         if 'sles' in distro_name or 'suse' in distro_name:
-            # RELEASE_BLOCKER: We will drop this sles ivergent behavior in
-            # before 18.4 so that get_linux_distro returns a named tuple
+            # RELEASE_BLOCKER: We will drop this sles divergent behavior in
+            # the future so that get_linux_distro returns a named tuple
             # which will include both version codename and architecture
             # on all distributions.
             flavor = platform.machine()
@@ -668,7 +668,8 @@ def system_info():
         var = 'ubuntu'
     elif linux_dist == 'redhat':
         var = 'rhel'
-    elif linux_dist in ('opensuse', 'sles'):
+    elif linux_dist in (
+            'opensuse', 'opensuse-tumbleweed', 'opensuse-leap', 'sles'):
         var = 'suse'
     else:
         var = 'linux'
@@ -2171,6 +2172,11 @@ def is_container():
     return False
 
 
+def is_lxd():
+    """Check to see if we are running in a lxd container."""
+    return os.path.exists('/dev/lxd/sock')
+
+
 def get_proc_env(pid, encoding='utf-8', errors='replace'):
     """
     Return the environment in a dict that a given process id was started with.
@@ -2870,4 +2876,20 @@ def udevadm_settle(exists=None, timeout=None):
     return subp(settle_cmd)
 
 
+def get_proc_ppid(pid):
+    """
+    Return the parent pid of a process.
+    """
+    ppid = 0
+    try:
+        contents = load_file("/proc/%s/stat" % pid, quiet=True)
+    except IOError as e:
+        LOG.warning('Failed to load /proc/%s/stat. %s', pid, e)
+    if contents:
+        parts = contents.split(" ", 4)
+        # man proc says
+        #  ppid %d     (4) The PID of the parent.
+        ppid = int(parts[3])
+    return ppid
+
 # vi: ts=4 expandtab
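The parsing step in ``get_proc_ppid`` above can be sketched in isolation. This is a hedged standalone reimplementation of the split, not the cloud-init function itself; note it inherits the same simplification as the diff, splitting on single spaces (which assumes the process name in field 2 contains no spaces):

```python
def parse_ppid_from_stat(stat_contents):
    """Extract the parent PID from the contents of /proc/<pid>/stat.

    Per man 5 proc, field 4 of the stat line is the parent PID, so a
    split on the first four spaces leaves the ppid at index 3.
    """
    parts = stat_contents.split(" ", 4)
    return int(parts[3])


# A representative stat line: pid 1234, comm (bash), state S, ppid 987.
assert parse_ppid_from_stat("1234 (bash) S 987 1234 1234 0 -1") == 987
```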
diff --git a/cloudinit/version.py b/cloudinit/version.py
index 844a02e..a2c5d43 100644
--- a/cloudinit/version.py
+++ b/cloudinit/version.py
@@ -4,7 +4,7 @@
 #
 # This file is part of cloud-init. See LICENSE file for license information.
 
-__VERSION__ = "18.4"
+__VERSION__ = "18.5"
 _PACKAGED_VERSION = '@@PACKAGED_VERSION@@'
 
 FEATURES = [
diff --git a/config/cloud.cfg.tmpl b/config/cloud.cfg.tmpl
index 1fef133..7513176 100644
--- a/config/cloud.cfg.tmpl
+++ b/config/cloud.cfg.tmpl
@@ -167,7 +167,17 @@ system_info:
           - http://%(availability_zone)s.clouds.archive.ubuntu.com/ubuntu/
           - http://%(region)s.clouds.archive.ubuntu.com/ubuntu/
         security: []
-     - arches: [armhf, armel, default]
+     - arches: [arm64, armel, armhf]
+       failsafe:
+         primary: http://ports.ubuntu.com/ubuntu-ports
+         security: http://ports.ubuntu.com/ubuntu-ports
+       search:
+         primary:
+           - http://%(ec2_region)s.ec2.ports.ubuntu.com/ubuntu-ports/
+           - http://%(availability_zone)s.clouds.ports.ubuntu.com/ubuntu-ports/
+           - http://%(region)s.clouds.ports.ubuntu.com/ubuntu-ports/
+         security: []
+     - arches: [default]
       failsafe:
         primary: http://ports.ubuntu.com/ubuntu-ports
         security: http://ports.ubuntu.com/ubuntu-ports
diff --git a/debian/changelog b/debian/changelog
index 2bb9520..e611ee7 100644
--- a/debian/changelog
+++ b/debian/changelog
@@ -1,3 +1,78 @@
+cloud-init (18.5-17-gd1a2fe73-0ubuntu1~18.04.1) bionic; urgency=medium
+
+  * New upstream snapshot. (LP: #1813346)
+    - opennebula: exclude EPOCHREALTIME as known bash env variable with a delta
+    - tox: fix disco httpretty dependencies for py37
+    - run-container: uncomment baseurl in yum.repos.d/*.repo when using a
+      proxy [Paride Legovini]
+    - lxd: install zfs-linux instead of zfs meta package [Johnson Shi]
+    - net/sysconfig: do not write a resolv.conf file with only the header.
+      [Robert Schweikert]
+    - net: Make sysconfig renderer compatible with Network Manager.
+      [Eduardo Otubo]
+    - cc_set_passwords: Fix regex when parsing hashed passwords
+      [Marlin Cremers]
+    - net: Wait for dhclient to daemonize before reading lease file
+      [Jason Zions]
+    - [Azure] Increase retries when talking to Wireserver during metadata walk
+      [Jason Zions]
+    - Add documentation on adding a datasource.
+    - doc: clean up some datasource documentation.
+    - ds-identify: fix wrong variable name in ovf_vmware_transport_guestinfo.
+    - Scaleway: Support ssh keys provided inside an instance tag. [PORTE Loïc]
+    - OVF: simplify expected return values of transport functions.
+    - Vmware: Add support for the com.vmware.guestInfo OVF transport.
+    - HACKING.rst: change contact info to Josh Powers
+    - Update to pylint 2.2.2.
+    - Release 18.5
+    - tests: add Disco release [Joshua Powers]
+    - net: render 'metric' values in per-subnet routes
+    - write_files: add support for appending to files. [James Baxter]
+    - config: On ubuntu select cloud archive mirrors for armel, armhf, arm64.
+    - dhclient-hook: cleanups, tests and fix a bug on 'down' event.
+    - NoCloud: Allow top level 'network' key in network-config.
+    - ovf: Fix ovf network config generation gateway/routes
+    - azure: detect vnet migration via netlink media change event
+      [Tamilmani Manoharan]
+    - Azure: fix copy/paste error in error handling when reading azure ovf.
+      [Adam DePue]
+    - tests: fix incorrect order of mocks in test_handle_zfs_root.
+    - doc: Change dns_nameserver property to dns_nameservers. [Tomer Cohen]
+    - OVF: identify label iso9660 filesystems with label 'OVF ENV'.
+    - logs: collect-logs ignore instance-data-sensitive.json on non-root user
+    - net: Ephemeral*Network: add connectivity check via URL
+    - azure: _poll_imds only retry on 404. Fail on Timeout
+    - resizefs: Prefix discovered devpath with '/dev/' when path does not
+      exist [Igor Galić]
+    - azure: retry imds polling on requests.Timeout
+    - azure: Accept variation in error msg from mount for ntfs volumes
+      [Jason Zions]
+    - azure: fix regression introduced when persisting ephemeral dhcp lease
+      [Aswin Rajamannar]
+    - azure: add udev rules to create cloud-init Gen2 disk name symlinks
+    - tests: ec2 mock missing httpretty user-data and instance-identity routes
+    - azure: remove /etc/netplan/90-hotplug-azure.yaml when net from IMDS
+    - azure: report ready to fabric after reprovision and reduce logging
+      [Aswin Rajamannar]
+    - query: better error when missing read permission on instance-data
+    - instance-data: fallback to instance-data.json if sensitive is absent.
+    - docs: remove colon from network v1 config example. [Tomer Cohen]
+    - Add cloud-id binary to packages for SUSE [Jason Zions]
+    - systemd: On SUSE ensure cloud-init.service runs before wicked
+      [Robert Schweikert]
+    - update detection of openSUSE variants [Robert Schweikert]
+    - azure: Add apply_network_config option to disable network from IMDS
+    - Correct spelling in an error message (udevadm). [Katie McLaughlin]
+    - tests: meta_data key changed to meta-data in ec2 instance-data.json
+    - tests: fix kvm integration test to assert flexible config-disk path
+    - tools: Add cloud-id command line utility
+    - instance-data: Add standard keys platform and subplatform. Refactor ec2.
+    - net: ignore nics that have "zero" mac address.
+    - tests: fix apt_configure_primary to be more flexible
+    - Ubuntu: update sources.list to comment out deb-src entries.
+
+ -- Chad Smith <chad.smith@canonical.com>  Sat, 26 Jan 2019 08:42:04 -0700
+
 cloud-init (18.4-0ubuntu1~18.04.1) bionic-proposed; urgency=medium
 
   * drop the following cherry-picks now included:
diff --git a/doc/rtd/topics/datasources.rst b/doc/rtd/topics/datasources.rst
index e34f145..648c606 100644
--- a/doc/rtd/topics/datasources.rst
+++ b/doc/rtd/topics/datasources.rst
@@ -18,7 +18,7 @@ single way to access the different cloud systems methods to provide this data
 through the typical usage of subclasses.
 
 Any metadata processed by cloud-init's datasources is persisted as
-``/run/cloud0-init/instance-data.json``. Cloud-init provides tooling
+``/run/cloud-init/instance-data.json``. Cloud-init provides tooling
 to quickly introspect some of that data. See :ref:`instance_metadata` for
 more information.
 
@@ -80,6 +80,65 @@ The current interface that a datasource object must provide is the following:
     def get_package_mirror_info(self)
 
 
+Adding a new Datasource
+-----------------------
+The datasource objects have a few touch points with cloud-init. If you
+are interested in adding a new datasource for your cloud platform you'll
+need to take care of the following items:
+
+* **Identify a mechanism for positive identification of the platform**:
+  It is good practice for a cloud platform to positively identify itself
+  to the guest. This allows the guest to make educated decisions based
+  on the platform on which it is running. On the x86 and arm64 architectures,
+  many clouds identify themselves through DMI data. For example,
+  Oracle's public cloud provides the string 'OracleCloud.com' in the
+  DMI chassis-asset field.
+
+  cloud-init enabled images produce a log file with details about the
+  platform. Reading through this log in ``/run/cloud-init/ds-identify.log``
+  may provide the information needed to uniquely identify the platform.
+  If the log is not present, you can generate it by running from source
+  ``./tools/ds-identify`` or the installed location
+  ``/usr/lib/cloud-init/ds-identify``.
+
+  The mechanism used to identify the platform will be required for the
+  ds-identify and datasource module sections below.
+
+* **Add datasource module ``cloudinit/sources/DataSource<CloudPlatform>.py``**:
+  It is suggested that you start by copying one of the simpler datasources
+  such as DataSourceHetzner.
+
+* **Add tests for datasource module**:
+  Add a new file with some tests for the module to
+  ``cloudinit/sources/tests/test_<yourplatform>.py``. For example, see
+  ``cloudinit/sources/tests/test_oracle.py``.
+
+* **Update ds-identify**: On systemd systems, ds-identify is used to detect
+  which datasource should be enabled, or if cloud-init should run at all.
+  You'll need to make changes to ``tools/ds-identify``.
+
+* **Add tests for ds-identify**: Add relevant tests in a new class to
+  ``tests/unittests/test_ds_identify.py``. You can use ``TestOracle`` as an
+  example.
+
+* **Add your datasource name to the builtin list of datasources:** Add
+  your datasource module name to the end of the ``datasource_list``
+  entry in ``cloudinit/settings.py``.
+
+* **Add your cloud platform to apport collection prompts:** Update the
+  list of cloud platforms in ``cloudinit/apport.py``. This list will be
+  provided to the user who invokes ``ubuntu-bug cloud-init``.
+
+* **Enable datasource by default in Ubuntu packaging branches:**
+  Ubuntu packaging branches contain a template file
+  ``debian/cloud-init.templates`` that ultimately sets the default
+  datasource_list when installed via package. This file needs updating when
+  the commit gets into a package.
+
+* **Add documentation for your datasource**: Add a new
+  file in ``doc/datasources/<cloudplatform>.rst``.
+
+
 Datasource Documentation
 ========================
 The following is a list of the implemented datasources.
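The checklist in the new "Adding a new Datasource" section can be sketched as a minimal datasource module. This is a hypothetical illustration, not code from the branch: ``DataSourceMyCloud`` and its metadata values are invented, and the ``DataSource`` base class is stubbed here so the sketch is self-contained (in cloud-init it comes from ``cloudinit.sources``):

```python
class DataSource:
    """Stand-in for cloudinit.sources.DataSource (illustration only)."""
    dsname = None

    def __init__(self, sys_cfg=None, distro=None, paths=None):
        self.metadata = {}
        self.userdata_raw = None


class DataSourceMyCloud(DataSource):
    # dsname is the name also appended to datasource_list in settings.py
    dsname = 'MyCloud'

    def _get_data(self):
        # Positively identify the platform before crawling metadata;
        # return False so cloud-init moves on when not on this cloud.
        if not self._is_platform_viable():
            return False
        self.metadata = {
            'instance-id': 'i-0123456789abcdef',
            'local-hostname': 'myhost',
        }
        self.userdata_raw = '#cloud-config\n{}'
        return True

    def _is_platform_viable(self):
        # Real code would check e.g. a DMI chassis-asset string.
        return True
```

A matching unit test would instantiate the class, assert ``_get_data()`` returns True, and verify the crawled metadata, mirroring ``cloudinit/sources/tests/test_oracle.py``.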
diff --git a/doc/rtd/topics/datasources/azure.rst b/doc/rtd/topics/datasources/azure.rst
index 559011e..720a475 100644
--- a/doc/rtd/topics/datasources/azure.rst
+++ b/doc/rtd/topics/datasources/azure.rst
@@ -23,18 +23,18 @@ information in json format to /run/cloud-init/dhclient.hook/<interface>.json.
 In order for cloud-init to leverage this method to find the endpoint, the
 cloud.cfg file must contain:
 
-datasource:
- Azure:
-  set_hostname: False
-  agent_command: __builtin__
+.. sourcecode:: yaml
+
+  datasource:
+    Azure:
+      set_hostname: False
+      agent_command: __builtin__
 
 If those files are not available, the fallback is to check the leases file
 for the endpoint server (again option 245).
 
 You can define the path to the lease file with the 'dhclient_lease_file'
-configuration. The default value is /var/lib/dhcp/dhclient.eth0.leases.
-
-  dhclient_lease_file: /var/lib/dhcp/dhclient.eth0.leases
+configuration.
 
 walinuxagent
 ------------
@@ -57,6 +57,64 @@ in order to use waagent.conf with cloud-init, the following settings are recomme
     ResourceDisk.MountPoint=/mnt
 
 
+Configuration
+-------------
+The following configuration can be set for the datasource in system
+configuration (in ``/etc/cloud/cloud.cfg`` or ``/etc/cloud/cloud.cfg.d/``).
+
+The settings that may be configured are:
+
+ * **agent_command**: Either __builtin__ (default) or a command to run to get
+   metadata. If __builtin__, get metadata from walinuxagent. Otherwise run the
+   provided command to obtain metadata.
+ * **apply_network_config**: Boolean set to True to use network configuration
+   described by Azure's IMDS endpoint instead of fallback network config of
+   dhcp on eth0. Default is True. For Ubuntu 16.04 or earlier, default is
+   False.
+ * **data_dir**: Path used to read metadata files and write crawled data.
+ * **dhclient_lease_file**: The fallback lease file to source when looking for
+   custom DHCP option 245 from Azure fabric.
+ * **disk_aliases**: A dictionary defining which device paths should be
+   interpreted as ephemeral images. See cc_disk_setup module for more info.
+ * **hostname_bounce**: A dictionary of hostname bounce behavior used to react
+   to metadata changes. Azure will throttle ifup/down in some cases after
+   metadata has been updated to inform the dhcp server about updated
+   hostnames. The '``hostname_bounce: command``' entry can be either the
+   literal string 'builtin' or a command to execute. The command will be
+   invoked after the hostname is set, and will have the 'interface' in its
+   environment. If ``set_hostname`` is not true, then ``hostname_bounce``
+   will be ignored. An example might be:
+
+   ``command: ["sh", "-c", "killall dhclient; dhclient $interface"]``
+
+ * **set_hostname**: Boolean set to True when we want Azure to set the
+   hostname based on metadata.
+
+Configuration for the datasource can also be read from a
+``dscfg`` entry in the ``LinuxProvisioningConfigurationSet``. Content in
+the dscfg node is expected to be base64 encoded yaml content, and it will be
+merged into the 'datasource: Azure' entry.
+
+An example configuration with the default values is provided below:
+
+.. sourcecode:: yaml
+
+  datasource:
+    Azure:
+      agent_command: __builtin__
+      apply_network_config: true
+      data_dir: /var/lib/waagent
+      dhclient_lease_file: /var/lib/dhcp/dhclient.eth0.leases
+      disk_aliases:
+        ephemeral0: /dev/disk/cloud/azure_resource
+      hostname_bounce:
+        interface: eth0
+        command: builtin
+        policy: true
+        hostname_command: hostname
+      set_hostname: true
+
+
 Userdata
 --------
 Userdata is provided to cloud-init inside the ovf-env.xml file. Cloud-init
@@ -97,37 +155,6 @@ Example:
     </LinuxProvisioningConfigurationSet>
   </wa:ProvisioningSection>
 
-Configuration
--------------
-Configuration for the datasource can be read from the system config's or set
-via the `dscfg` entry in the `LinuxProvisioningConfigurationSet`. Content in
-dscfg node is expected to be base64 encoded yaml content, and it will be
-merged into the 'datasource: Azure' entry.
-
-The '``hostname_bounce: command``' entry can be either the literal string
-'builtin' or a command to execute. The command will be invoked after the
-hostname is set, and will have the 'interface' in its environment. If
-``set_hostname`` is not true, then ``hostname_bounce`` will be ignored.
-
-An example might be:
-  command: ["sh", "-c", "killall dhclient; dhclient $interface"]
-
-.. code:: yaml
-
-    datasource:
-     agent_command
-     Azure:
-      agent_command: [service, walinuxagent, start]
-      set_hostname: True
-      hostname_bounce:
-       # the name of the interface to bounce
-       interface: eth0
-       # policy can be 'on', 'off' or 'force'
-       policy: on
-       # the method 'bounce' command.
-       command: "builtin"
-       hostname_command: "hostname"
-
 hostname
 --------
 When the user launches an instance, they provide a hostname for that instance.
diff --git a/doc/rtd/topics/instancedata.rst b/doc/rtd/topics/instancedata.rst
index 634e180..5d2dc94 100644
--- a/doc/rtd/topics/instancedata.rst
+++ b/doc/rtd/topics/instancedata.rst
@@ -90,24 +90,46 @@ There are three basic top-level keys:
 
 The standardized keys present:
 
-+----------------------+-----------------------------------------------+---------------------------+
-| Key path             | Description                                   | Examples                  |
-+======================+===============================================+===========================+
-| v1.cloud_name        | The name of the cloud provided by metadata    | aws, openstack, azure,    |
-|                      | key 'cloud-name' or the cloud-init datasource | configdrive, nocloud,     |
-|                      | name which was discovered.                    | ovf, etc.                 |
-+----------------------+-----------------------------------------------+---------------------------+
-| v1.instance_id       | Unique instance_id allocated by the cloud     | i-<somehash>              |
-+----------------------+-----------------------------------------------+---------------------------+
-| v1.local_hostname    | The internal or local hostname of the system  | ip-10-41-41-70,           |
-|                      |                                               | <user-provided-hostname>  |
-+----------------------+-----------------------------------------------+---------------------------+
-| v1.region            | The physical region/datacenter in which the   | us-east-2                 |
-|                      | instance is deployed                          |                           |
-+----------------------+-----------------------------------------------+---------------------------+
-| v1.availability_zone | The physical availability zone in which the   | us-east-2b, nova, null    |
-|                      | instance is deployed                          |                           |
-+----------------------+-----------------------------------------------+---------------------------+
++----------------------+-----------------------------------------------+-----------------------------------+
+| Key path             | Description                                   | Examples                          |
++======================+===============================================+===================================+
+| v1._beta_keys        | List of standardized keys still in 'beta'.    | [subplatform]                     |
+|                      | The format, intent or presence of these keys  |                                   |
+|                      | can change. Do not consider them              |                                   |
+|                      | production-ready.                             |                                   |
++----------------------+-----------------------------------------------+-----------------------------------+
+| v1.cloud_name        | Where possible this will indicate the 'name'  | aws, openstack, azure,            |
+|                      | of the cloud this system is running on. This  | configdrive, nocloud,             |
+|                      | is specifically different than the 'platform' | ovf, etc.                         |
+|                      | below. As an example, the name of Amazon Web  |                                   |
+|                      | Services is 'aws' while the platform is 'ec2'.|                                   |
+|                      |                                               |                                   |
+|                      | If no specific name is determinable or        |                                   |
+|                      | provided in meta-data, then this field may    |                                   |
+|                      | contain the same content as 'platform'.       |                                   |
++----------------------+-----------------------------------------------+-----------------------------------+
+| v1.instance_id       | Unique instance_id allocated by the cloud     | i-<somehash>                      |
++----------------------+-----------------------------------------------+-----------------------------------+
+| v1.local_hostname    | The internal or local hostname of the system  | ip-10-41-41-70,                   |
+|                      |                                               | <user-provided-hostname>          |
++----------------------+-----------------------------------------------+-----------------------------------+
+| v1.platform          | An attempt to identify the cloud platform     | ec2, openstack, lxd, gce          |
+|                      | instance that the system is running on.       | nocloud, ovf                      |
++----------------------+-----------------------------------------------+-----------------------------------+
+| v1.subplatform       | Additional platform details describing the    | metadata (http://168.254.169.254),|
+|                      | specific source or type of metadata used.     | seed-dir (/path/to/seed-dir/),    |
+|                      | The format of subplatform will be:            | config-disk (/dev/cd0),           |
+|                      | <subplatform_type> (<url_file_or_dev_path>)   | configdrive (/dev/sr0)            |
++----------------------+-----------------------------------------------+-----------------------------------+
+| v1.public_ssh_keys   | A list of ssh keys provided to the instance   | ['ssh-rsa AA...', ...]            |
+|                      | by the datasource metadata.                   |                                   |
++----------------------+-----------------------------------------------+-----------------------------------+
+| v1.region            | The physical region/datacenter in which the   | us-east-2                         |
+|                      | instance is deployed                          |                                   |
++----------------------+-----------------------------------------------+-----------------------------------+
+| v1.availability_zone | The physical availability zone in which the   | us-east-2b, nova, null            |
+|                      | instance is deployed                          |                                   |
++----------------------+-----------------------------------------------+-----------------------------------+
 
 
 Below is an example of ``/run/cloud-init/instance_data.json`` on an EC2
@@ -117,10 +139,75 @@ instance:
 
 {
  "base64_encoded_keys": [],
- "sensitive_keys": [],
  "ds": {
-  "meta_data": {
-   "ami-id": "ami-014e1416b628b0cbf",
+  "_doc": "EXPERIMENTAL: The structure and format of content scoped under the 'ds' key may change in subsequent releases of cloud-init.",
+  "_metadata_api_version": "2016-09-02",
+  "dynamic": {
+   "instance-identity": {
+    "document": {
+     "accountId": "437526006925",
+     "architecture": "x86_64",
+     "availabilityZone": "us-east-2b",
+     "billingProducts": null,
+     "devpayProductCodes": null,
+     "imageId": "ami-079638aae7046bdd2",
+     "instanceId": "i-075f088c72ad3271c",
+     "instanceType": "t2.micro",
+     "kernelId": null,
+     "marketplaceProductCodes": null,
+     "pendingTime": "2018-10-05T20:10:43Z",
+     "privateIp": "10.41.41.95",
+     "ramdiskId": null,
+     "region": "us-east-2",
+     "version": "2017-09-30"
+    },
+    "pkcs7": [
+     "MIAGCSqGSIb3DQEHAqCAMIACAQExCzAJBgUrDgMCGgUAMIAGCSqGSIb3DQEHAaCAJIAEggHbewog",
+     "ICJkZXZwYXlQcm9kdWN0Q29kZXMiIDogbnVsbCwKICAibWFya2V0cGxhY2VQcm9kdWN0Q29kZXMi",
+     "IDogbnVsbCwKICAicHJpdmF0ZUlwIiA6ICIxMC40MS40MS45NSIsCiAgInZlcnNpb24iIDogIjIw",
+     "MTctMDktMzAiLAogICJpbnN0YW5jZUlkIiA6ICJpLTA3NWYwODhjNzJhZDMyNzFjIiwKICAiYmls",
+     "bGluZ1Byb2R1Y3RzIiA6IG51bGwsCiAgImluc3RhbmNlVHlwZSIgOiAidDIubWljcm8iLAogICJh",
+     "Y2NvdW50SWQiIDogIjQzNzUyNjAwNjkyNSIsCiAgImF2YWlsYWJpbGl0eVpvbmUiIDogInVzLWVh",
+     "c3QtMmIiLAogICJrZXJuZWxJZCIgOiBudWxsLAogICJyYW1kaXNrSWQiIDogbnVsbCwKICAiYXJj",
+     "aGl0ZWN0dXJlIiA6ICJ4ODZfNjQiLAogICJpbWFnZUlkIiA6ICJhbWktMDc5NjM4YWFlNzA0NmJk",
+     "ZDIiLAogICJwZW5kaW5nVGltZSIgOiAiMjAxOC0xMC0wNVQyMDoxMDo0M1oiLAogICJyZWdpb24i",
+     "IDogInVzLWVhc3QtMiIKfQAAAAAAADGCARcwggETAgEBMGkwXDELMAkGA1UEBhMCVVMxGTAXBgNV",
+     "BAgTEFdhc2hpbmd0b24gU3RhdGUxEDAOBgNVBAcTB1NlYXR0bGUxIDAeBgNVBAoTF0FtYXpvbiBX",
+     "ZWIgU2VydmljZXMgTExDAgkAlrpI2eVeGmcwCQYFKw4DAhoFAKBdMBgGCSqGSIb3DQEJAzELBgkq",
+     "hkiG9w0BBwEwHAYJKoZIhvcNAQkFMQ8XDTE4MTAwNTIwMTA0OFowIwYJKoZIhvcNAQkEMRYEFK0k",
+     "Tz6n1A8/zU1AzFj0riNQORw2MAkGByqGSM44BAMELjAsAhRNrr174y98grPBVXUforN/6wZp8AIU",
+     "JLZBkrB2GJA8A4WJ1okq++jSrBIAAAAAAAA="
+    ],
+    "rsa2048": [
+     "MIAGCSqGSIb3DQEHAqCAMIACAQExDzANBglghkgBZQMEAgEFADCABgkqhkiG9w0BBwGggCSABIIB",
+     "23sKICAiZGV2cGF5UHJvZHVjdENvZGVzIiA6IG51bGwsCiAgIm1hcmtldHBsYWNlUHJvZHVjdENv",
+     "ZGVzIiA6IG51bGwsCiAgInByaXZhdGVJcCIgOiAiMTAuNDEuNDEuOTUiLAogICJ2ZXJzaW9uIiA6",
+     "ICIyMDE3LTA5LTMwIiwKICAiaW5zdGFuY2VJZCIgOiAiaS0wNzVmMDg4YzcyYWQzMjcxYyIsCiAg",
+     "ImJpbGxpbmdQcm9kdWN0cyIgOiBudWxsLAogICJpbnN0YW5jZVR5cGUiIDogInQyLm1pY3JvIiwK",
+     "ICAiYWNjb3VudElkIiA6ICI0Mzc1MjYwMDY5MjUiLAogICJhdmFpbGFiaWxpdHlab25lIiA6ICJ1",
+     "cy1lYXN0LTJiIiwKICAia2VybmVsSWQiIDogbnVsbCwKICAicmFtZGlza0lkIiA6IG51bGwsCiAg",
+     "ImFyY2hpdGVjdHVyZSIgOiAieDg2XzY0IiwKICAiaW1hZ2VJZCIgOiAiYW1pLTA3OTYzOGFhZTcw",
+     "NDZiZGQyIiwKICAicGVuZGluZ1RpbWUiIDogIjIwMTgtMTAtMDVUMjA6MTA6NDNaIiwKICAicmVn",
+     "aW9uIiA6ICJ1cy1lYXN0LTIiCn0AAAAAAAAxggH/MIIB+wIBATBpMFwxCzAJBgNVBAYTAlVTMRkw",
+     "FwYDVQQIExBXYXNoaW5ndG9uIFN0YXRlMRAwDgYDVQQHEwdTZWF0dGxlMSAwHgYDVQQKExdBbWF6",
+     "b24gV2ViIFNlcnZpY2VzIExMQwIJAM07oeX4xevdMA0GCWCGSAFlAwQCAQUAoGkwGAYJKoZIhvcN",
+     "AQkDMQsGCSqGSIb3DQEHATAcBgkqhkiG9w0BCQUxDxcNMTgxMDA1MjAxMDQ4WjAvBgkqhkiG9w0B",
+     "CQQxIgQgkYz0pZk3zJKBi4KP4egeOKJl/UYwu5UdE7id74pmPwMwDQYJKoZIhvcNAQEBBQAEggEA",
+     "dC3uIGGNul1OC1mJKSH3XoBWsYH20J/xhIdftYBoXHGf2BSFsrs9ZscXd2rKAKea4pSPOZEYMXgz",
197 "lPuT7W0WU89N3ZKviy/ReMSRjmI/jJmsY1lea6mlgcsJXreBXFMYucZvyeWGHdnCjamoKWXkmZlM",
198 "mSB1gshWy8Y7DzoKviYPQZi5aI54XK2Upt4kGme1tH1NI2Cq+hM4K+adxTbNhS3uzvWaWzMklUuU",
199 "QHX2GMmjAVRVc8vnA8IAsBCJJp+gFgYzi09IK+cwNgCFFPADoG6jbMHHf4sLB3MUGpiA+G9JlCnM",
200 "fmkjI2pNRB8spc0k4UG4egqLrqCz67WuK38tjwAAAAAAAA=="
201 ],
202 "signature": [
203 "Tsw6h+V3WnxrNVSXBYIOs1V4j95YR1mLPPH45XnhX0/Ei3waJqf7/7EEKGYP1Cr4PTYEULtZ7Mvf",
204 "+xJpM50Ivs2bdF7o0c4vnplRWe3f06NI9pv50dr110j/wNzP4MZ1pLhJCqubQOaaBTF3LFutgRrt",
205 "r4B0mN3p7EcqD8G+ll0="
206 ]
207 }
208 },
209 "meta-data": {
210 "ami-id": "ami-079638aae7046bdd2",
124 "ami-launch-index": "0",211 "ami-launch-index": "0",
125 "ami-manifest-path": "(unknown)",212 "ami-manifest-path": "(unknown)",
126 "block-device-mapping": {213 "block-device-mapping": {
@@ -129,31 +216,31 @@ instance:
129 "ephemeral1": "sdc",216 "ephemeral1": "sdc",
130 "root": "/dev/sda1"217 "root": "/dev/sda1"
131 },218 },
132 "hostname": "ip-10-41-41-70.us-east-2.compute.internal",219 "hostname": "ip-10-41-41-95.us-east-2.compute.internal",
133 "instance-action": "none",220 "instance-action": "none",
134 "instance-id": "i-04fa31cfc55aa7976",221 "instance-id": "i-075f088c72ad3271c",
135 "instance-type": "t2.micro",222 "instance-type": "t2.micro",
136 "local-hostname": "ip-10-41-41-70.us-east-2.compute.internal",223 "local-hostname": "ip-10-41-41-95.us-east-2.compute.internal",
137 "local-ipv4": "10.41.41.70",224 "local-ipv4": "10.41.41.95",
138 "mac": "06:b6:92:dd:9d:24",225 "mac": "06:74:8f:39:cd:a6",
139 "metrics": {226 "metrics": {
140 "vhostmd": "<?xml version=\"1.0\" encoding=\"UTF-8\"?>"227 "vhostmd": "<?xml version=\"1.0\" encoding=\"UTF-8\"?>"
141 },228 },
142 "network": {229 "network": {
143 "interfaces": {230 "interfaces": {
144 "macs": {231 "macs": {
145 "06:b6:92:dd:9d:24": {232 "06:74:8f:39:cd:a6": {
146 "device-number": "0",233 "device-number": "0",
147 "interface-id": "eni-08c0c9fdb99b6e6f4",234 "interface-id": "eni-052058bbd7831eaae",
148 "ipv4-associations": {235 "ipv4-associations": {
149 "18.224.22.43": "10.41.41.70"236 "18.218.221.122": "10.41.41.95"
150 },237 },
151 "local-hostname": "ip-10-41-41-70.us-east-2.compute.internal",238 "local-hostname": "ip-10-41-41-95.us-east-2.compute.internal",
152 "local-ipv4s": "10.41.41.70",239 "local-ipv4s": "10.41.41.95",
153 "mac": "06:b6:92:dd:9d:24",240 "mac": "06:74:8f:39:cd:a6",
154 "owner-id": "437526006925",241 "owner-id": "437526006925",
155 "public-hostname": "ec2-18-224-22-43.us-east-2.compute.amazonaws.com",242 "public-hostname": "ec2-18-218-221-122.us-east-2.compute.amazonaws.com",
156 "public-ipv4s": "18.224.22.43",243 "public-ipv4s": "18.218.221.122",
157 "security-group-ids": "sg-828247e9",244 "security-group-ids": "sg-828247e9",
158 "security-groups": "Cloud-init integration test secgroup",245 "security-groups": "Cloud-init integration test secgroup",
159 "subnet-id": "subnet-282f3053",246 "subnet-id": "subnet-282f3053",
@@ -171,16 +258,14 @@ instance:
171 "availability-zone": "us-east-2b"258 "availability-zone": "us-east-2b"
172 },259 },
173 "profile": "default-hvm",260 "profile": "default-hvm",
174 "public-hostname": "ec2-18-224-22-43.us-east-2.compute.amazonaws.com",261 "public-hostname": "ec2-18-218-221-122.us-east-2.compute.amazonaws.com",
175 "public-ipv4": "18.224.22.43",262 "public-ipv4": "18.218.221.122",
176 "public-keys": {263 "public-keys": {
177 "cloud-init-integration": [264 "cloud-init-integration": [
178 "ssh-rsa265 "ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABAQDSL7uWGj8cgWyIOaspgKdVy0cKJ+UTjfv7jBOjG2H/GN8bJVXy72XAvnhM0dUM+CCs8FOf0YlPX+Frvz2hKInrmRhZVwRSL129PasD12MlI3l44u6IwS1o/W86Q+tkQYEljtqDOo0a+cOsaZkvUNzUyEXUwz/lmYa6G4hMKZH4NBj7nbAAF96wsMCoyNwbWryBnDYUr6wMbjRR1J9Pw7Xh7WRC73wy4Va2YuOgbD3V/5ZrFPLbWZW/7TFXVrql04QVbyei4aiFR5n//GvoqwQDNe58LmbzX/xvxyKJYdny2zXmdAhMxbrpFQsfpkJ9E/H5w0yOdSvnWbUoG5xNGoOB cloud-init-integration"
179 AAAAB3NzaC1yc2EAAAADAQABAAABAQDSL7uWGj8cgWyIOaspgKdVy0cKJ+UTjfv7jBOjG2H/GN8bJVXy72XAvnhM0dUM+CCs8FOf0YlPX+Frvz2hKInrmRhZVwRSL129PasD12MlI3l44u6IwS1o/W86Q+tkQYEljtqDOo0a+cOsaZkvUNzUyEXUwz/lmYa6G4hMKZH4NBj7nbAAF96wsMCoyNwbWryBnDYUr6wMbjRR1J9Pw7Xh7WRC73wy4Va2YuOgbD3V/5ZrFPLbWZW/7TFXVrql04QVbyei4aiFR5n//GvoqwQDNe58LmbzX/xvxyKJYdny2zXmdAhMxbrpFQsfpkJ9E/H5w0yOdSvnWbUoG5xNGoOB
180 cloud-init-integration"
181 ]266 ]
182 },267 },
183 "reservation-id": "r-06ab75e9346f54333",268 "reservation-id": "r-0594a20e31f6cfe46",
184 "security-groups": "Cloud-init integration test secgroup",269 "security-groups": "Cloud-init integration test secgroup",
185 "services": {270 "services": {
186 "domain": "amazonaws.com",271 "domain": "amazonaws.com",
@@ -188,16 +273,22 @@ instance:
188 }273 }
189 }274 }
190 },275 },
276 "sensitive_keys": [],
191 "v1": {277 "v1": {
278 "_beta_keys": [
279 "subplatform"
280 ],
192 "availability-zone": "us-east-2b",281 "availability-zone": "us-east-2b",
193 "availability_zone": "us-east-2b",282 "availability_zone": "us-east-2b",
194 "cloud-name": "aws",
195 "cloud_name": "aws",283 "cloud_name": "aws",
196 "instance-id": "i-04fa31cfc55aa7976",284 "instance_id": "i-075f088c72ad3271c",
197 "instance_id": "i-04fa31cfc55aa7976",285 "local_hostname": "ip-10-41-41-95",
198 "local-hostname": "ip-10-41-41-70",286 "platform": "ec2",
199 "local_hostname": "ip-10-41-41-70",287 "public_ssh_keys": [
200 "region": "us-east-2"288 "ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABAQDSL7uWGj8cgWyIOaspgKdVy0cKJ+UTjfv7jBOjG2H/GN8bJVXy72XAvnhM0dUM+CCs8FOf0YlPX+Frvz2hKInrmRhZVwRSL129PasD12MlI3l44u6IwS1o/W86Q+tkQYEljtqDOo0a+cOsaZkvUNzUyEXUwz/lmYa6G4hMKZH4NBj7nbAAF96wsMCoyNwbWryBnDYUr6wMbjRR1J9Pw7Xh7WRC73wy4Va2YuOgbD3V/5ZrFPLbWZW/7TFXVrql04QVbyei4aiFR5n//GvoqwQDNe58LmbzX/xvxyKJYdny2zXmdAhMxbrpFQsfpkJ9E/H5w0yOdSvnWbUoG5xNGoOB cloud-init-integration"
289 ],
290 "region": "us-east-2",
291 "subplatform": "metadata (http://169.254.169.254)"
201 }292 }
202 }293 }
203294
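The standardized ``v1`` keys shown in the example above can be read straight out of the JSON. A minimal Python sketch, using a trimmed copy of the example data in place of the real ``/run/cloud-init/instance_data.json`` (on an instance you would ``json.load()`` that file instead):

```python
import json

# Trimmed stand-in for /run/cloud-init/instance_data.json, with values
# copied from the EC2 example above.
sample = json.loads("""
{
 "v1": {
  "cloud_name": "aws",
  "platform": "ec2",
  "region": "us-east-2",
  "availability_zone": "us-east-2b",
  "instance_id": "i-075f088c72ad3271c",
  "local_hostname": "ip-10-41-41-95",
  "subplatform": "metadata (http://169.254.169.254)"
 }
}
""")

# The "v1" namespace is the cross-cloud, standardized view of the metadata.
v1 = sample["v1"]
print(v1["region"], v1["availability_zone"])
```

The same values are what ``cloud-init query v1.region`` and the jinja template handler expose.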
diff --git a/doc/rtd/topics/network-config-format-v1.rst b/doc/rtd/topics/network-config-format-v1.rst
index 3b0148c..9723d68 100644
--- a/doc/rtd/topics/network-config-format-v1.rst
+++ b/doc/rtd/topics/network-config-format-v1.rst
@@ -384,7 +384,7 @@ Valid keys for ``subnets`` include the following:
 - ``address``: IPv4 or IPv6 address. It may include CIDR netmask notation.
 - ``netmask``: IPv4 subnet mask in dotted format or CIDR notation.
 - ``gateway``: IPv4 address of the default gateway for this subnet.
-- ``dns_nameserver``: Specify a list of IPv4 dns server IPs to end up in
+- ``dns_nameservers``: Specify a list of IPv4 dns server IPs to end up in
   resolv.conf.
 - ``dns_search``: Specify a list of search paths to be included in
   resolv.conf.
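For context on the key the hunk above corrects, a minimal network-config v1 fragment using the plural ``dns_nameservers`` (the interface name and addresses here are illustrative only):

```yaml
version: 1
config:
  - type: physical
    name: eth0
    subnets:
      - type: static
        address: 192.168.1.10/24
        gateway: 192.168.1.1
        dns_nameservers:
          - 8.8.8.8
          - 8.8.4.4
        dns_search:
          - example.com
```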
diff --git a/packages/redhat/cloud-init.spec.in b/packages/redhat/cloud-init.spec.in
index a3a6d1e..6b2022b 100644
--- a/packages/redhat/cloud-init.spec.in
+++ b/packages/redhat/cloud-init.spec.in
@@ -191,6 +191,7 @@ fi
 
 # Program binaries
 %{_bindir}/cloud-init*
+%{_bindir}/cloud-id*
 
 # Docs
 %doc LICENSE ChangeLog TODO.rst requirements.txt
diff --git a/packages/suse/cloud-init.spec.in b/packages/suse/cloud-init.spec.in
index e781d74..26894b3 100644
--- a/packages/suse/cloud-init.spec.in
+++ b/packages/suse/cloud-init.spec.in
@@ -93,6 +93,7 @@ version_pys=$(cd "%{buildroot}" && find . -name version.py -type f)
 
 # Program binaries
 %{_bindir}/cloud-init*
+%{_bindir}/cloud-id*
 
 # systemd files
 /usr/lib/systemd/system-generators/*
diff --git a/setup.py b/setup.py
index 5ed8eae..ea37efc 100755
--- a/setup.py
+++ b/setup.py
@@ -282,7 +282,8 @@ setuptools.setup(
     cmdclass=cmdclass,
     entry_points={
         'console_scripts': [
-            'cloud-init = cloudinit.cmd.main:main'
+            'cloud-init = cloudinit.cmd.main:main',
+            'cloud-id = cloudinit.cmd.cloud_id:main'
         ],
     }
 )
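The setup.py hunk above registers the new ``cloud-id`` command. A console_scripts entry such as ``'cloud-id = cloudinit.cmd.cloud_id:main'`` means: the text before ``=`` is the installed command name, and ``module:function`` after it is what the generated wrapper script imports and calls. A small sketch of that mapping (the parsing here is illustrative, not the setuptools implementation):

```python
def parse_console_script(spec):
    # Split 'name = package.module:function' into its three parts.
    name, target = (part.strip() for part in spec.split("=", 1))
    module, func = target.split(":", 1)
    return name, module, func

print(parse_console_script('cloud-id = cloudinit.cmd.cloud_id:main'))
# → ('cloud-id', 'cloudinit.cmd.cloud_id', 'main')
```

This is why both spec files above also have to ship ``%{_bindir}/cloud-id*``: installation generates a new binary alongside ``cloud-init``.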
diff --git a/systemd/cloud-init.service.tmpl b/systemd/cloud-init.service.tmpl
index b92e8ab..5cb0037 100644
--- a/systemd/cloud-init.service.tmpl
+++ b/systemd/cloud-init.service.tmpl
@@ -14,8 +14,7 @@ After=networking.service
 After=network.service
 {% endif %}
 {% if variant in ["suse"] %}
-Requires=wicked.service
-After=wicked.service
+Before=wicked.service
 # setting hostname via hostnamectl depends on dbus, which otherwise
 # would not be guaranteed at this point.
 After=dbus.service
diff --git a/templates/sources.list.ubuntu.tmpl b/templates/sources.list.ubuntu.tmpl
index d879972..edb92f1 100644
--- a/templates/sources.list.ubuntu.tmpl
+++ b/templates/sources.list.ubuntu.tmpl
@@ -10,30 +10,30 @@
 # See http://help.ubuntu.com/community/UpgradeNotes for how to upgrade to
 # newer versions of the distribution.
 deb {{mirror}} {{codename}} main restricted
-deb-src {{mirror}} {{codename}} main restricted
+# deb-src {{mirror}} {{codename}} main restricted
 
 ## Major bug fix updates produced after the final release of the
 ## distribution.
 deb {{mirror}} {{codename}}-updates main restricted
-deb-src {{mirror}} {{codename}}-updates main restricted
+# deb-src {{mirror}} {{codename}}-updates main restricted
 
 ## N.B. software from this repository is ENTIRELY UNSUPPORTED by the Ubuntu
 ## team. Also, please note that software in universe WILL NOT receive any
 ## review or updates from the Ubuntu security team.
 deb {{mirror}} {{codename}} universe
-deb-src {{mirror}} {{codename}} universe
+# deb-src {{mirror}} {{codename}} universe
 deb {{mirror}} {{codename}}-updates universe
-deb-src {{mirror}} {{codename}}-updates universe
+# deb-src {{mirror}} {{codename}}-updates universe
 
 ## N.B. software from this repository is ENTIRELY UNSUPPORTED by the Ubuntu
 ## team, and may not be under a free licence. Please satisfy yourself as to
 ## your rights to use the software. Also, please note that software in
 ## multiverse WILL NOT receive any review or updates from the Ubuntu
 ## security team.
 deb {{mirror}} {{codename}} multiverse
-deb-src {{mirror}} {{codename}} multiverse
+# deb-src {{mirror}} {{codename}} multiverse
 deb {{mirror}} {{codename}}-updates multiverse
-deb-src {{mirror}} {{codename}}-updates multiverse
+# deb-src {{mirror}} {{codename}}-updates multiverse
 
 ## N.B. software from this repository may not have been tested as
 ## extensively as that contained in the main release, although it includes
@@ -41,14 +41,7 @@ deb-src {{mirror}} {{codename}}-updates multiverse
 ## Also, please note that software in backports WILL NOT receive any review
 ## or updates from the Ubuntu security team.
 deb {{mirror}} {{codename}}-backports main restricted universe multiverse
The diff has been truncated for viewing.
