Merge ~chad.smith/cloud-init:cleanup/metadata-cloud-platform into cloud-init:master

Proposed by Chad Smith
Status: Merged
Approved by: Chad Smith
Approved revision: e3fb8a4f3a1438bf02189217005ca77b33c624b5
Merge reported by: Server Team CI bot
Merged at revision: not available
Proposed branch: ~chad.smith/cloud-init:cleanup/metadata-cloud-platform
Merge into: cloud-init:master
Diff against target: 2076 lines (+722/-274)
33 files modified
cloudinit/sources/DataSourceAliYun.py (+5/-15)
cloudinit/sources/DataSourceAltCloud.py (+22/-11)
cloudinit/sources/DataSourceAzure.py (+8/-0)
cloudinit/sources/DataSourceBigstep.py (+4/-0)
cloudinit/sources/DataSourceCloudSigma.py (+5/-1)
cloudinit/sources/DataSourceConfigDrive.py (+12/-0)
cloudinit/sources/DataSourceEc2.py (+59/-56)
cloudinit/sources/DataSourceIBMCloud.py (+4/-0)
cloudinit/sources/DataSourceMAAS.py (+4/-0)
cloudinit/sources/DataSourceNoCloud.py (+21/-0)
cloudinit/sources/DataSourceNone.py (+4/-0)
cloudinit/sources/DataSourceOVF.py (+6/-0)
cloudinit/sources/DataSourceOpenNebula.py (+8/-0)
cloudinit/sources/DataSourceOracle.py (+4/-0)
cloudinit/sources/DataSourceSmartOS.py (+3/-0)
cloudinit/sources/__init__.py (+77/-21)
cloudinit/sources/tests/test_init.py (+10/-2)
cloudinit/sources/tests/test_oracle.py (+8/-0)
cloudinit/tests/test_util.py (+16/-0)
cloudinit/util.py (+5/-0)
doc/rtd/topics/instancedata.rst (+137/-46)
tests/cloud_tests/testcases/base.py (+11/-2)
tests/unittests/test_datasource/test_aliyun.py (+4/-0)
tests/unittests/test_datasource/test_altcloud.py (+67/-51)
tests/unittests/test_datasource/test_azure.py (+63/-47)
tests/unittests/test_datasource/test_cloudsigma.py (+6/-0)
tests/unittests/test_datasource/test_configdrive.py (+3/-0)
tests/unittests/test_datasource/test_ec2.py (+13/-7)
tests/unittests/test_datasource/test_ibmcloud.py (+39/-1)
tests/unittests/test_datasource/test_nocloud.py (+35/-10)
tests/unittests/test_datasource/test_opennebula.py (+4/-0)
tests/unittests/test_datasource/test_ovf.py (+48/-4)
tests/unittests/test_datasource/test_smartos.py (+7/-0)
Reviewer Review Type Date Requested Status
Server Team CI bot continuous-integration Approve
cloud-init Commiters Pending
Review via email: mp+355999@code.launchpad.net

Commit message

instance-data: Add standard keys platform and subplatform. Refactor ec2.

Add the following instance-data.json standardized keys (a short read-back
example follows this list):
* v1._beta_keys: List of any v1 keys still in beta development,
  e.g. ['subplatform'].
* v1.public_ssh_keys: List of any cloud-provided ssh keys for the
  instance.
* v1.platform: String representing the cloud platform API supporting the
  datasource. For example: 'ec2' for the aws, aliyun and brightbox cloud
  names.
* v1.subplatform: String with more detail about the source of the
  metadata consumed, for example the metadata URI, config drive device
  path or seed directory.
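
For illustration, a minimal sketch of reading these keys back once
cloud-init has written /run/cloud-init/instance-data.json (the path and key
names come from the documentation update in this diff; the snippet itself is
not part of the branch):

    import json

    # Read the standardized instance-data written by cloud-init at boot.
    with open('/run/cloud-init/instance-data.json') as stream:
        instance_data = json.load(stream)

    v1 = instance_data['v1']
    print(v1['platform'])         # e.g. 'ec2' on aws, aliyun or brightbox
    print(v1['subplatform'])      # e.g. 'metadata (http://169.254.169.254)'
    print(v1['public_ssh_keys'])  # list of cloud-provided ssh keys
    print(v1['_beta_keys'])       # e.g. ['subplatform']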

To support the new platform and subplatform standardized instance-data,
DataSource and its subclasses grew platform and subplatform attributes.
The platform attribute defaults to the lowercase string datasource name at
self.dsname. This default is overridden in the NoCloud, Ec2 and ConfigDrive
datasources.

The subplatform attribute calls a _get_subplatform method which returns a
string containing a simple slug for the subplatform type (such as metadata,
seed-dir or config-drive) followed by the detailed URI, device or directory
path from which the datasource consumed its configuration.
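
As a sketch of that pattern (modelled on the DataSourceOracle hunk in the
preview diff; DataSourceExample and METADATA_ENDPOINT are hypothetical
illustration names, not part of the branch):

    from cloudinit import sources

    # Hypothetical endpoint used only for this illustration.
    METADATA_ENDPOINT = 'http://169.254.169.254/openstack/'


    class DataSourceExample(sources.DataSource):

        dsname = 'Example'  # v1.platform defaults to 'example' (lowercased dsname)

        def _get_subplatform(self):
            """Return '<slug> (<detail>)' describing where metadata was read."""
            return 'metadata (%s)' % METADATA_ENDPOINT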

As part of this work, the DataSourceEc2 methods _get_data and _crawl_metadata
have been refactored for a few reasons:
- crawl_metadata is now a read-only operation that persists no attributes on
  the datasource instance and returns a dictionary of consumed metadata.
- crawl_metadata now closely mirrors the raw structure of the ec2
  metadata consumed, so that end-users can leverage public ec2 metadata
  documentation where possible.
- crawl_metadata adds a '_metadata_api_version' key to the crawled
  ds.metadata to advertise which version of the EC2 API was consumed by
  cloud-init.
- _get_data now does all processing of the crawl_metadata result and saves
  the datasource instance attributes userdata_raw, metadata etc. (see the
  condensed sketch below).
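
Condensed from the DataSourceEc2 hunk in the preview diff (LOG and util are
the module's existing imports; the DHCP branch is elided), the new split
looks roughly like this:

    def _get_data(self):
        # crawl_metadata() is read-only: it persists nothing on the instance
        # and returns {} on failure.
        self._crawled_metadata = util.log_time(
            logfunc=LOG.debug, msg='Crawl of metadata service',
            func=self.crawl_metadata)
        if not self._crawled_metadata:
            return False
        # _get_data owns all attribute persistence on the datasource.
        self.metadata = self._crawled_metadata.get('meta-data', None)
        self.userdata_raw = self._crawled_metadata.get('user-data', None)
        self.identity = self._crawled_metadata.get(
            'dynamic', {}).get('instance-identity', {}).get('document', {})
        return True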

Additional drive-bys:
* unit test rework for test_altcloud and test_azure to simplify mocks
  and make use of existing util and test_helpers functions.

Description of the change

Some sample instance-data.json from lxd, EC2 and OpenStack:
- https://pastebin.ubuntu.com/p/52BvK7Yr3V/

More sample instance-data.json:
azure: http://paste.ubuntu.com/p/tTQrwJMPBt/
rackspace: http://paste.ubuntu.com/p/73bhkBTx83/

Revision history for this message
Server Team CI bot (server-team-bot) wrote :

FAILED: Continuous integration, rev:1dc0c9f7757e8c682308ef93e20054931f83ecf7
https://jenkins.ubuntu.com/server/job/cloud-init-ci/357/
Executed test runs:
    FAILED: Checkout

Click here to trigger a rebuild:
https://jenkins.ubuntu.com/server/job/cloud-init-ci/357/rebuild

review: Needs Fixing (continuous-integration)
Revision history for this message
Server Team CI bot (server-team-bot) wrote :

FAILED: Continuous integration, rev:f5914600a4e501acbe326597058e8de59af1193c
https://jenkins.ubuntu.com/server/job/cloud-init-ci/368/
Executed test runs:
    SUCCESS: Checkout
    FAILED: Unit & Style Tests

Click here to trigger a rebuild:
https://jenkins.ubuntu.com/server/job/cloud-init-ci/368/rebuild

review: Needs Fixing (continuous-integration)
Revision history for this message
Server Team CI bot (server-team-bot) wrote :

FAILED: Continuous integration, rev:ab2fe13ae1aa40edaf4d943dc826ad3063f74cd0
https://jenkins.ubuntu.com/server/job/cloud-init-ci/369/
Executed test runs:
    FAILED: Checkout

Click here to trigger a rebuild:
https://jenkins.ubuntu.com/server/job/cloud-init-ci/369/rebuild

review: Needs Fixing (continuous-integration)
Revision history for this message
Server Team CI bot (server-team-bot) wrote :

FAILED: Continuous integration, rev:58cdbe97d685c9d39f1cdb2d5e115cb616fcb95d
https://jenkins.ubuntu.com/server/job/cloud-init-ci/370/
Executed test runs:
    FAILED: Checkout

Click here to trigger a rebuild:
https://jenkins.ubuntu.com/server/job/cloud-init-ci/370/rebuild

review: Needs Fixing (continuous-integration)
Revision history for this message
Scott Moser (smoser) wrote :

document beta_keys somewhere?

Revision history for this message
Chad Smith (chad.smith) :
Revision history for this message
Server Team CI bot (server-team-bot) wrote :

FAILED: Continuous integration, rev:730bcf8f6033155383a8e732f26c48811c29b499
https://jenkins.ubuntu.com/server/job/cloud-init-ci/371/
Executed test runs:
    SUCCESS: Checkout
    SUCCESS: Unit & Style Tests
    SUCCESS: Ubuntu LTS: Build
    FAILED: Ubuntu LTS: Integration

Click here to trigger a rebuild:
https://jenkins.ubuntu.com/server/job/cloud-init-ci/371/rebuild

review: Needs Fixing (continuous-integration)
Revision history for this message
Server Team CI bot (server-team-bot) wrote :

FAILED: Continuous integration, rev:58b90360ae6cb9799b90cf73b85b5a501bd37eb6
https://jenkins.ubuntu.com/server/job/cloud-init-ci/372/
Executed test runs:
    SUCCESS: Checkout
    SUCCESS: Unit & Style Tests
    SUCCESS: Ubuntu LTS: Build
    FAILED: Ubuntu LTS: Integration

Click here to trigger a rebuild:
https://jenkins.ubuntu.com/server/job/cloud-init-ci/372/rebuild

review: Needs Fixing (continuous-integration)
Revision history for this message
Scott Moser (smoser) wrote :

commit message needs work at this point. At the very least, it starts with 'ec2' but is much larger than that now.

Revision history for this message
Server Team CI bot (server-team-bot) wrote :

FAILED: Continuous integration, rev:27b46c18c1384ce8b238c34cde0824340bc8fb40
https://jenkins.ubuntu.com/server/job/cloud-init-ci/374/
Executed test runs:
    SUCCESS: Checkout
    FAILED: Unit & Style Tests

Click here to trigger a rebuild:
https://jenkins.ubuntu.com/server/job/cloud-init-ci/374/rebuild

review: Needs Fixing (continuous-integration)
Revision history for this message
Server Team CI bot (server-team-bot) wrote :

PASSED: Continuous integration, rev:02a81e66ccecebb4a624a6ac6346968b77831124
https://jenkins.ubuntu.com/server/job/cloud-init-ci/375/
Executed test runs:
    SUCCESS: Checkout
    SUCCESS: Unit & Style Tests
    SUCCESS: Ubuntu LTS: Build
    SUCCESS: Ubuntu LTS: Integration
    IN_PROGRESS: Declarative: Post Actions

Click here to trigger a rebuild:
https://jenkins.ubuntu.com/server/job/cloud-init-ci/375/rebuild

review: Approve (continuous-integration)
Revision history for this message
Server Team CI bot (server-team-bot) wrote :

PASSED: Continuous integration, rev:94e3f120ce8f85a21c9ef7e256cfac440dc7578d
https://jenkins.ubuntu.com/server/job/cloud-init-ci/376/
Executed test runs:
    SUCCESS: Checkout
    SUCCESS: Unit & Style Tests
    SUCCESS: Ubuntu LTS: Build
    SUCCESS: Ubuntu LTS: Integration
    IN_PROGRESS: Declarative: Post Actions

Click here to trigger a rebuild:
https://jenkins.ubuntu.com/server/job/cloud-init-ci/376/rebuild

review: Approve (continuous-integration)
Revision history for this message
Scott Moser (smoser) wrote :

if manual_cache_clean is set, then we should be careful not to re-crawl anything on upgrade. This is most likely a case when there was a "config disk" in the generic sense, i.e. where a datasource provided a disk (cdrom or block device) and then later pulled it or the user formatted it.

Revision history for this message
Chad Smith (chad.smith) :
Revision history for this message
Server Team CI bot (server-team-bot) wrote :

PASSED: Continuous integration, rev:e3fb8a4f3a1438bf02189217005ca77b33c624b5
https://jenkins.ubuntu.com/server/job/cloud-init-ci/380/
Executed test runs:
    SUCCESS: Checkout
    SUCCESS: Unit & Style Tests
    SUCCESS: Ubuntu LTS: Build
    SUCCESS: Ubuntu LTS: Integration
    IN_PROGRESS: Declarative: Post Actions

Click here to trigger a rebuild:
https://jenkins.ubuntu.com/server/job/cloud-init-ci/380/rebuild

review: Approve (continuous-integration)
Revision history for this message
Server Team CI bot (server-team-bot) wrote :

Commit message lints:
 - Line #0 has 2 too many characters. Line starts with: "instance-data: Add standardized"...

review: Needs Fixing
Revision history for this message
Server Team CI bot (server-team-bot) :
review: Approve (continuous-integration)

Preview Diff

1diff --git a/cloudinit/sources/DataSourceAliYun.py b/cloudinit/sources/DataSourceAliYun.py
2index 858e082..45cc9f0 100644
3--- a/cloudinit/sources/DataSourceAliYun.py
4+++ b/cloudinit/sources/DataSourceAliYun.py
5@@ -1,7 +1,5 @@
6 # This file is part of cloud-init. See LICENSE file for license information.
7
8-import os
9-
10 from cloudinit import sources
11 from cloudinit.sources import DataSourceEc2 as EC2
12 from cloudinit import util
13@@ -18,25 +16,17 @@ class DataSourceAliYun(EC2.DataSourceEc2):
14 min_metadata_version = '2016-01-01'
15 extended_metadata_versions = []
16
17- def __init__(self, sys_cfg, distro, paths):
18- super(DataSourceAliYun, self).__init__(sys_cfg, distro, paths)
19- self.seed_dir = os.path.join(paths.seed_dir, "AliYun")
20-
21 def get_hostname(self, fqdn=False, resolve_ip=False, metadata_only=False):
22 return self.metadata.get('hostname', 'localhost.localdomain')
23
24 def get_public_ssh_keys(self):
25 return parse_public_keys(self.metadata.get('public-keys', {}))
26
27- @property
28- def cloud_platform(self):
29- if self._cloud_platform is None:
30- if _is_aliyun():
31- self._cloud_platform = EC2.Platforms.ALIYUN
32- else:
33- self._cloud_platform = EC2.Platforms.NO_EC2_METADATA
34-
35- return self._cloud_platform
36+ def _get_cloud_name(self):
37+ if _is_aliyun():
38+ return EC2.CloudNames.ALIYUN
39+ else:
40+ return EC2.CloudNames.NO_EC2_METADATA
41
42
43 def _is_aliyun():
44diff --git a/cloudinit/sources/DataSourceAltCloud.py b/cloudinit/sources/DataSourceAltCloud.py
45index 8cd312d..5270fda 100644
46--- a/cloudinit/sources/DataSourceAltCloud.py
47+++ b/cloudinit/sources/DataSourceAltCloud.py
48@@ -89,7 +89,9 @@ class DataSourceAltCloud(sources.DataSource):
49 '''
50 Description:
51 Get the type for the cloud back end this instance is running on
52- by examining the string returned by reading the dmi data.
53+ by examining the string returned by reading either:
54+ CLOUD_INFO_FILE or
55+ the dmi data.
56
57 Input:
58 None
59@@ -99,7 +101,14 @@ class DataSourceAltCloud(sources.DataSource):
60 'RHEV', 'VSPHERE' or 'UNKNOWN'
61
62 '''
63-
64+ if os.path.exists(CLOUD_INFO_FILE):
65+ try:
66+ cloud_type = util.load_file(CLOUD_INFO_FILE).strip().upper()
67+ except IOError:
68+ util.logexc(LOG, 'Unable to access cloud info file at %s.',
69+ CLOUD_INFO_FILE)
70+ return 'UNKNOWN'
71+ return cloud_type
72 system_name = util.read_dmi_data("system-product-name")
73 if not system_name:
74 return 'UNKNOWN'
75@@ -134,15 +143,7 @@ class DataSourceAltCloud(sources.DataSource):
76
77 LOG.debug('Invoked get_data()')
78
79- if os.path.exists(CLOUD_INFO_FILE):
80- try:
81- cloud_type = util.load_file(CLOUD_INFO_FILE).strip().upper()
82- except IOError:
83- util.logexc(LOG, 'Unable to access cloud info file at %s.',
84- CLOUD_INFO_FILE)
85- return False
86- else:
87- cloud_type = self.get_cloud_type()
88+ cloud_type = self.get_cloud_type()
89
90 LOG.debug('cloud_type: %s', str(cloud_type))
91
92@@ -161,6 +162,15 @@ class DataSourceAltCloud(sources.DataSource):
93 util.logexc(LOG, 'Failed accessing user data.')
94 return False
95
96+ def _get_subplatform(self):
97+ """Return the subplatform metadata details."""
98+ cloud_type = self.get_cloud_type()
99+ if not hasattr(self, 'source'):
100+ self.source = sources.METADATA_UNKNOWN
101+ if cloud_type == 'RHEV':
102+ self.source = '/dev/fd0'
103+ return '%s (%s)' % (cloud_type.lower(), self.source)
104+
105 def user_data_rhevm(self):
106 '''
107 RHEVM specific userdata read
108@@ -232,6 +242,7 @@ class DataSourceAltCloud(sources.DataSource):
109 try:
110 return_str = util.mount_cb(cdrom_dev, read_user_data_callback)
111 if return_str:
112+ self.source = cdrom_dev
113 break
114 except OSError as err:
115 if err.errno != errno.ENOENT:
116diff --git a/cloudinit/sources/DataSourceAzure.py b/cloudinit/sources/DataSourceAzure.py
117index 783445e..39391d0 100644
118--- a/cloudinit/sources/DataSourceAzure.py
119+++ b/cloudinit/sources/DataSourceAzure.py
120@@ -351,6 +351,14 @@ class DataSourceAzure(sources.DataSource):
121 metadata['public-keys'] = key_value or pubkeys_from_crt_files(fp_files)
122 return metadata
123
124+ def _get_subplatform(self):
125+ """Return the subplatform metadata source details."""
126+ if self.seed.startswith('/dev'):
127+ subplatform_type = 'config-disk'
128+ else:
129+ subplatform_type = 'seed-dir'
130+ return '%s (%s)' % (subplatform_type, self.seed)
131+
132 def crawl_metadata(self):
133 """Walk all instance metadata sources returning a dict on success.
134
135diff --git a/cloudinit/sources/DataSourceBigstep.py b/cloudinit/sources/DataSourceBigstep.py
136index 699a85b..52fff20 100644
137--- a/cloudinit/sources/DataSourceBigstep.py
138+++ b/cloudinit/sources/DataSourceBigstep.py
139@@ -36,6 +36,10 @@ class DataSourceBigstep(sources.DataSource):
140 self.userdata_raw = decoded["userdata_raw"]
141 return True
142
143+ def _get_subplatform(self):
144+ """Return the subplatform metadata source details."""
145+ return 'metadata (%s)' % get_url_from_file()
146+
147
148 def get_url_from_file():
149 try:
150diff --git a/cloudinit/sources/DataSourceCloudSigma.py b/cloudinit/sources/DataSourceCloudSigma.py
151index c816f34..2955d3f 100644
152--- a/cloudinit/sources/DataSourceCloudSigma.py
153+++ b/cloudinit/sources/DataSourceCloudSigma.py
154@@ -7,7 +7,7 @@
155 from base64 import b64decode
156 import re
157
158-from cloudinit.cs_utils import Cepko
159+from cloudinit.cs_utils import Cepko, SERIAL_PORT
160
161 from cloudinit import log as logging
162 from cloudinit import sources
163@@ -84,6 +84,10 @@ class DataSourceCloudSigma(sources.DataSource):
164
165 return True
166
167+ def _get_subplatform(self):
168+ """Return the subplatform metadata source details."""
169+ return 'cepko (%s)' % SERIAL_PORT
170+
171 def get_hostname(self, fqdn=False, resolve_ip=False, metadata_only=False):
172 """
173 Cleans up and uses the server's name if the latter is set. Otherwise
174diff --git a/cloudinit/sources/DataSourceConfigDrive.py b/cloudinit/sources/DataSourceConfigDrive.py
175index 664dc4b..564e3eb 100644
176--- a/cloudinit/sources/DataSourceConfigDrive.py
177+++ b/cloudinit/sources/DataSourceConfigDrive.py
178@@ -160,6 +160,18 @@ class DataSourceConfigDrive(openstack.SourceMixin, sources.DataSource):
179 LOG.debug("no network configuration available")
180 return self._network_config
181
182+ @property
183+ def platform(self):
184+ return 'openstack'
185+
186+ def _get_subplatform(self):
187+ """Return the subplatform metadata source details."""
188+ if self.seed_dir in self.source:
189+ subplatform_type = 'seed-dir'
190+ elif self.source.startswith('/dev'):
191+ subplatform_type = 'config-disk'
192+ return '%s (%s)' % (subplatform_type, self.source)
193+
194
195 def read_config_drive(source_dir):
196 reader = openstack.ConfigDriveReader(source_dir)
197diff --git a/cloudinit/sources/DataSourceEc2.py b/cloudinit/sources/DataSourceEc2.py
198index 968ab3f..9ccf2cd 100644
199--- a/cloudinit/sources/DataSourceEc2.py
200+++ b/cloudinit/sources/DataSourceEc2.py
201@@ -28,18 +28,16 @@ STRICT_ID_PATH = ("datasource", "Ec2", "strict_id")
202 STRICT_ID_DEFAULT = "warn"
203
204
205-class Platforms(object):
206- # TODO Rename and move to cloudinit.cloud.CloudNames
207- ALIYUN = "AliYun"
208- AWS = "AWS"
209- BRIGHTBOX = "Brightbox"
210- SEEDED = "Seeded"
211+class CloudNames(object):
212+ ALIYUN = "aliyun"
213+ AWS = "aws"
214+ BRIGHTBOX = "brightbox"
215 # UNKNOWN indicates no positive id. If strict_id is 'warn' or 'false',
216 # then an attempt at the Ec2 Metadata service will be made.
217- UNKNOWN = "Unknown"
218+ UNKNOWN = "unknown"
219 # NO_EC2_METADATA indicates this platform does not have a Ec2 metadata
220 # service available. No attempt at the Ec2 Metadata service will be made.
221- NO_EC2_METADATA = "No-EC2-Metadata"
222+ NO_EC2_METADATA = "no-ec2-metadata"
223
224
225 class DataSourceEc2(sources.DataSource):
226@@ -61,8 +59,6 @@ class DataSourceEc2(sources.DataSource):
227 url_max_wait = 120
228 url_timeout = 50
229
230- _cloud_platform = None
231-
232 _network_config = sources.UNSET # Used to cache calculated network cfg v1
233
234 # Whether we want to get network configuration from the metadata service.
235@@ -71,30 +67,21 @@ class DataSourceEc2(sources.DataSource):
236 def __init__(self, sys_cfg, distro, paths):
237 super(DataSourceEc2, self).__init__(sys_cfg, distro, paths)
238 self.metadata_address = None
239- self.seed_dir = os.path.join(paths.seed_dir, "ec2")
240
241 def _get_cloud_name(self):
242 """Return the cloud name as identified during _get_data."""
243- return self.cloud_platform
244+ return identify_platform()
245
246 def _get_data(self):
247- seed_ret = {}
248- if util.read_optional_seed(seed_ret, base=(self.seed_dir + "/")):
249- self.userdata_raw = seed_ret['user-data']
250- self.metadata = seed_ret['meta-data']
251- LOG.debug("Using seeded ec2 data from %s", self.seed_dir)
252- self._cloud_platform = Platforms.SEEDED
253- return True
254-
255 strict_mode, _sleep = read_strict_mode(
256 util.get_cfg_by_path(self.sys_cfg, STRICT_ID_PATH,
257 STRICT_ID_DEFAULT), ("warn", None))
258
259- LOG.debug("strict_mode: %s, cloud_platform=%s",
260- strict_mode, self.cloud_platform)
261- if strict_mode == "true" and self.cloud_platform == Platforms.UNKNOWN:
262+ LOG.debug("strict_mode: %s, cloud_name=%s cloud_platform=%s",
263+ strict_mode, self.cloud_name, self.platform)
264+ if strict_mode == "true" and self.cloud_name == CloudNames.UNKNOWN:
265 return False
266- elif self.cloud_platform == Platforms.NO_EC2_METADATA:
267+ elif self.cloud_name == CloudNames.NO_EC2_METADATA:
268 return False
269
270 if self.perform_dhcp_setup: # Setup networking in init-local stage.
271@@ -103,13 +90,22 @@ class DataSourceEc2(sources.DataSource):
272 return False
273 try:
274 with EphemeralDHCPv4(self.fallback_interface):
275- return util.log_time(
276+ self._crawled_metadata = util.log_time(
277 logfunc=LOG.debug, msg='Crawl of metadata service',
278- func=self._crawl_metadata)
279+ func=self.crawl_metadata)
280 except NoDHCPLeaseError:
281 return False
282 else:
283- return self._crawl_metadata()
284+ self._crawled_metadata = util.log_time(
285+ logfunc=LOG.debug, msg='Crawl of metadata service',
286+ func=self.crawl_metadata)
287+ if not self._crawled_metadata:
288+ return False
289+ self.metadata = self._crawled_metadata.get('meta-data', None)
290+ self.userdata_raw = self._crawled_metadata.get('user-data', None)
291+ self.identity = self._crawled_metadata.get(
292+ 'dynamic', {}).get('instance-identity', {}).get('document', {})
293+ return True
294
295 @property
296 def launch_index(self):
297@@ -117,6 +113,15 @@ class DataSourceEc2(sources.DataSource):
298 return None
299 return self.metadata.get('ami-launch-index')
300
301+ @property
302+ def platform(self):
303+ # Handle upgrade path of pickled ds
304+ if not hasattr(self, '_platform_type'):
305+ self._platform_type = DataSourceEc2.dsname.lower()
306+ if not self._platform_type:
307+ self._platform_type = DataSourceEc2.dsname.lower()
308+ return self._platform_type
309+
310 def get_metadata_api_version(self):
311 """Get the best supported api version from the metadata service.
312
313@@ -144,7 +149,7 @@ class DataSourceEc2(sources.DataSource):
314 return self.min_metadata_version
315
316 def get_instance_id(self):
317- if self.cloud_platform == Platforms.AWS:
318+ if self.cloud_name == CloudNames.AWS:
319 # Prefer the ID from the instance identity document, but fall back
320 if not getattr(self, 'identity', None):
321 # If re-using cached datasource, it's get_data run didn't
322@@ -254,7 +259,7 @@ class DataSourceEc2(sources.DataSource):
323 @property
324 def availability_zone(self):
325 try:
326- if self.cloud_platform == Platforms.AWS:
327+ if self.cloud_name == CloudNames.AWS:
328 return self.identity.get(
329 'availabilityZone',
330 self.metadata['placement']['availability-zone'])
331@@ -265,7 +270,7 @@ class DataSourceEc2(sources.DataSource):
332
333 @property
334 def region(self):
335- if self.cloud_platform == Platforms.AWS:
336+ if self.cloud_name == CloudNames.AWS:
337 region = self.identity.get('region')
338 # Fallback to trimming the availability zone if region is missing
339 if self.availability_zone and not region:
340@@ -277,16 +282,10 @@ class DataSourceEc2(sources.DataSource):
341 return az[:-1]
342 return None
343
344- @property
345- def cloud_platform(self): # TODO rename cloud_name
346- if self._cloud_platform is None:
347- self._cloud_platform = identify_platform()
348- return self._cloud_platform
349-
350 def activate(self, cfg, is_new_instance):
351 if not is_new_instance:
352 return
353- if self.cloud_platform == Platforms.UNKNOWN:
354+ if self.cloud_name == CloudNames.UNKNOWN:
355 warn_if_necessary(
356 util.get_cfg_by_path(cfg, STRICT_ID_PATH, STRICT_ID_DEFAULT),
357 cfg)
358@@ -306,13 +305,13 @@ class DataSourceEc2(sources.DataSource):
359 result = None
360 no_network_metadata_on_aws = bool(
361 'network' not in self.metadata and
362- self.cloud_platform == Platforms.AWS)
363+ self.cloud_name == CloudNames.AWS)
364 if no_network_metadata_on_aws:
365 LOG.debug("Metadata 'network' not present:"
366 " Refreshing stale metadata from prior to upgrade.")
367 util.log_time(
368 logfunc=LOG.debug, msg='Re-crawl of metadata service',
369- func=self._crawl_metadata)
370+ func=self.get_data)
371
372 # Limit network configuration to only the primary/fallback nic
373 iface = self.fallback_interface
374@@ -340,28 +339,32 @@ class DataSourceEc2(sources.DataSource):
375 return super(DataSourceEc2, self).fallback_interface
376 return self._fallback_interface
377
378- def _crawl_metadata(self):
379+ def crawl_metadata(self):
380 """Crawl metadata service when available.
381
382- @returns: True on success, False otherwise.
383+ @returns: Dictionary of crawled metadata content containing the keys:
384+ meta-data, user-data and dynamic.
385 """
386 if not self.wait_for_metadata_service():
387- return False
388+ return {}
389 api_version = self.get_metadata_api_version()
390+ crawled_metadata = {}
391 try:
392- self.userdata_raw = ec2.get_instance_userdata(
393+ crawled_metadata['user-data'] = ec2.get_instance_userdata(
394 api_version, self.metadata_address)
395- self.metadata = ec2.get_instance_metadata(
396+ crawled_metadata['meta-data'] = ec2.get_instance_metadata(
397 api_version, self.metadata_address)
398- if self.cloud_platform == Platforms.AWS:
399- self.identity = ec2.get_instance_identity(
400- api_version, self.metadata_address).get('document', {})
401+ if self.cloud_name == CloudNames.AWS:
402+ identity = ec2.get_instance_identity(
403+ api_version, self.metadata_address)
404+ crawled_metadata['dynamic'] = {'instance-identity': identity}
405 except Exception:
406 util.logexc(
407 LOG, "Failed reading from metadata address %s",
408 self.metadata_address)
409- return False
410- return True
411+ return {}
412+ crawled_metadata['_metadata_api_version'] = api_version
413+ return crawled_metadata
414
415
416 class DataSourceEc2Local(DataSourceEc2):
417@@ -375,10 +378,10 @@ class DataSourceEc2Local(DataSourceEc2):
418 perform_dhcp_setup = True # Use dhcp before querying metadata
419
420 def get_data(self):
421- supported_platforms = (Platforms.AWS,)
422- if self.cloud_platform not in supported_platforms:
423+ supported_platforms = (CloudNames.AWS,)
424+ if self.cloud_name not in supported_platforms:
425 LOG.debug("Local Ec2 mode only supported on %s, not %s",
426- supported_platforms, self.cloud_platform)
427+ supported_platforms, self.cloud_name)
428 return False
429 return super(DataSourceEc2Local, self).get_data()
430
431@@ -439,20 +442,20 @@ def identify_aws(data):
432 if (data['uuid'].startswith('ec2') and
433 (data['uuid_source'] == 'hypervisor' or
434 data['uuid'] == data['serial'])):
435- return Platforms.AWS
436+ return CloudNames.AWS
437
438 return None
439
440
441 def identify_brightbox(data):
442 if data['serial'].endswith('brightbox.com'):
443- return Platforms.BRIGHTBOX
444+ return CloudNames.BRIGHTBOX
445
446
447 def identify_platform():
448- # identify the platform and return an entry in Platforms.
449+ # identify the platform and return an entry in CloudNames.
450 data = _collect_platform_data()
451- checks = (identify_aws, identify_brightbox, lambda x: Platforms.UNKNOWN)
452+ checks = (identify_aws, identify_brightbox, lambda x: CloudNames.UNKNOWN)
453 for checker in checks:
454 try:
455 result = checker(data)
456diff --git a/cloudinit/sources/DataSourceIBMCloud.py b/cloudinit/sources/DataSourceIBMCloud.py
457index a535814..21e6ae6 100644
458--- a/cloudinit/sources/DataSourceIBMCloud.py
459+++ b/cloudinit/sources/DataSourceIBMCloud.py
460@@ -157,6 +157,10 @@ class DataSourceIBMCloud(sources.DataSource):
461
462 return True
463
464+ def _get_subplatform(self):
465+ """Return the subplatform metadata source details."""
466+ return '%s (%s)' % (self.platform, self.source)
467+
468 def check_instance_id(self, sys_cfg):
469 """quickly (local check only) if self.instance_id is still valid
470
471diff --git a/cloudinit/sources/DataSourceMAAS.py b/cloudinit/sources/DataSourceMAAS.py
472index bcb3854..61aa6d7 100644
473--- a/cloudinit/sources/DataSourceMAAS.py
474+++ b/cloudinit/sources/DataSourceMAAS.py
475@@ -109,6 +109,10 @@ class DataSourceMAAS(sources.DataSource):
476 LOG.warning("Invalid content in vendor-data: %s", e)
477 self.vendordata_raw = None
478
479+ def _get_subplatform(self):
480+ """Return the subplatform metadata source details."""
481+ return 'seed-dir (%s)' % self.base_url
482+
483 def wait_for_metadata_service(self, url):
484 mcfg = self.ds_cfg
485 max_wait = 120
486diff --git a/cloudinit/sources/DataSourceNoCloud.py b/cloudinit/sources/DataSourceNoCloud.py
487index 2daea59..9010f06 100644
488--- a/cloudinit/sources/DataSourceNoCloud.py
489+++ b/cloudinit/sources/DataSourceNoCloud.py
490@@ -186,6 +186,27 @@ class DataSourceNoCloud(sources.DataSource):
491 self._network_eni = mydata['meta-data'].get('network-interfaces')
492 return True
493
494+ @property
495+ def platform_type(self):
496+ # Handle upgrade path of pickled ds
497+ if not hasattr(self, '_platform_type'):
498+ self._platform_type = None
499+ if not self._platform_type:
500+ self._platform_type = 'lxd' if util.is_lxd() else 'nocloud'
501+ return self._platform_type
502+
503+ def _get_cloud_name(self):
504+ """Return unknown when 'cloud-name' key is absent from metadata."""
505+ return sources.METADATA_UNKNOWN
506+
507+ def _get_subplatform(self):
508+ """Return the subplatform metadata source details."""
509+ if self.seed.startswith('/dev'):
510+ subplatform_type = 'config-disk'
511+ else:
512+ subplatform_type = 'seed-dir'
513+ return '%s (%s)' % (subplatform_type, self.seed)
514+
515 def check_instance_id(self, sys_cfg):
516 # quickly (local check only) if self.instance_id is still valid
517 # we check kernel command line or files.
518diff --git a/cloudinit/sources/DataSourceNone.py b/cloudinit/sources/DataSourceNone.py
519index e63a7e3..e625080 100644
520--- a/cloudinit/sources/DataSourceNone.py
521+++ b/cloudinit/sources/DataSourceNone.py
522@@ -28,6 +28,10 @@ class DataSourceNone(sources.DataSource):
523 self.metadata = self.ds_cfg['metadata']
524 return True
525
526+ def _get_subplatform(self):
527+ """Return the subplatform metadata source details."""
528+ return 'config'
529+
530 def get_instance_id(self):
531 return 'iid-datasource-none'
532
533diff --git a/cloudinit/sources/DataSourceOVF.py b/cloudinit/sources/DataSourceOVF.py
534index 178ccb0..045291e 100644
535--- a/cloudinit/sources/DataSourceOVF.py
536+++ b/cloudinit/sources/DataSourceOVF.py
537@@ -275,6 +275,12 @@ class DataSourceOVF(sources.DataSource):
538 self.cfg = cfg
539 return True
540
541+ def _get_subplatform(self):
542+ system_type = util.read_dmi_data("system-product-name").lower()
543+ if system_type == 'vmware':
544+ return 'vmware (%s)' % self.seed
545+ return 'ovf (%s)' % self.seed
546+
547 def get_public_ssh_keys(self):
548 if 'public-keys' not in self.metadata:
549 return []
550diff --git a/cloudinit/sources/DataSourceOpenNebula.py b/cloudinit/sources/DataSourceOpenNebula.py
551index 77ccd12..e62e972 100644
552--- a/cloudinit/sources/DataSourceOpenNebula.py
553+++ b/cloudinit/sources/DataSourceOpenNebula.py
554@@ -95,6 +95,14 @@ class DataSourceOpenNebula(sources.DataSource):
555 self.userdata_raw = results.get('userdata')
556 return True
557
558+ def _get_subplatform(self):
559+ """Return the subplatform metadata source details."""
560+ if self.seed_dir in self.seed:
561+ subplatform_type = 'seed-dir'
562+ else:
563+ subplatform_type = 'config-disk'
564+ return '%s (%s)' % (subplatform_type, self.seed)
565+
566 @property
567 def network_config(self):
568 if self.network is not None:
569diff --git a/cloudinit/sources/DataSourceOracle.py b/cloudinit/sources/DataSourceOracle.py
570index fab39af..70b9c58 100644
571--- a/cloudinit/sources/DataSourceOracle.py
572+++ b/cloudinit/sources/DataSourceOracle.py
573@@ -91,6 +91,10 @@ class DataSourceOracle(sources.DataSource):
574 def crawl_metadata(self):
575 return read_metadata()
576
577+ def _get_subplatform(self):
578+ """Return the subplatform metadata source details."""
579+ return 'metadata (%s)' % METADATA_ENDPOINT
580+
581 def check_instance_id(self, sys_cfg):
582 """quickly check (local only) if self.instance_id is still valid
583
584diff --git a/cloudinit/sources/DataSourceSmartOS.py b/cloudinit/sources/DataSourceSmartOS.py
585index 593ac91..32b57cd 100644
586--- a/cloudinit/sources/DataSourceSmartOS.py
587+++ b/cloudinit/sources/DataSourceSmartOS.py
588@@ -303,6 +303,9 @@ class DataSourceSmartOS(sources.DataSource):
589 self._set_provisioned()
590 return True
591
592+ def _get_subplatform(self):
593+ return 'serial (%s)' % SERIAL_DEVICE
594+
595 def device_name_to_device(self, name):
596 return self.ds_cfg['disk_aliases'].get(name)
597
598diff --git a/cloudinit/sources/__init__.py b/cloudinit/sources/__init__.py
599index 5ac9882..9b90680 100644
600--- a/cloudinit/sources/__init__.py
601+++ b/cloudinit/sources/__init__.py
602@@ -54,6 +54,7 @@ REDACT_SENSITIVE_VALUE = 'redacted for non-root user'
603 METADATA_CLOUD_NAME_KEY = 'cloud-name'
604
605 UNSET = "_unset"
606+METADATA_UNKNOWN = 'unknown'
607
608 LOG = logging.getLogger(__name__)
609
610@@ -133,6 +134,14 @@ class DataSource(object):
611 # Cached cloud_name as determined by _get_cloud_name
612 _cloud_name = None
613
614+ # Cached cloud platform api type: e.g. ec2, openstack, kvm, lxd, azure etc.
615+ _platform_type = None
616+
617+ # More details about the cloud platform:
618+ # - metadata (http://169.254.169.254/)
619+ # - seed-dir (<dirname>)
620+ _subplatform = None
621+
622 # Track the discovered fallback nic for use in configuration generation.
623 _fallback_interface = None
624
625@@ -192,21 +201,24 @@ class DataSource(object):
626 local_hostname = self.get_hostname()
627 instance_id = self.get_instance_id()
628 availability_zone = self.availability_zone
629- cloud_name = self.cloud_name
630- # When adding new standard keys prefer underscore-delimited instead
631- # of hyphen-delimted to support simple variable references in jinja
632- # templates.
633+ # In the event of upgrade from existing cloudinit, pickled datasource
634+ # will not contain these new class attributes. So we need to recrawl
635+ # metadata to discover that content.
636 return {
637 'v1': {
638+ '_beta_keys': ['subplatform'],
639 'availability-zone': availability_zone,
640 'availability_zone': availability_zone,
641- 'cloud-name': cloud_name,
642- 'cloud_name': cloud_name,
643+ 'cloud-name': self.cloud_name,
644+ 'cloud_name': self.cloud_name,
645+ 'platform': self.platform_type,
646+ 'public_ssh_keys': self.get_public_ssh_keys(),
647 'instance-id': instance_id,
648 'instance_id': instance_id,
649 'local-hostname': local_hostname,
650 'local_hostname': local_hostname,
651- 'region': self.region}}
652+ 'region': self.region,
653+ 'subplatform': self.subplatform}}
654
655 def clear_cached_attrs(self, attr_defaults=()):
656 """Reset any cached metadata attributes to datasource defaults.
657@@ -247,19 +259,27 @@ class DataSource(object):
658
659 @return True on successful write, False otherwise.
660 """
661- instance_data = {
662- 'ds': {'_doc': EXPERIMENTAL_TEXT,
663- 'meta_data': self.metadata}}
664- if hasattr(self, 'network_json'):
665- network_json = getattr(self, 'network_json')
666- if network_json != UNSET:
667- instance_data['ds']['network_json'] = network_json
668- if hasattr(self, 'ec2_metadata'):
669- ec2_metadata = getattr(self, 'ec2_metadata')
670- if ec2_metadata != UNSET:
671- instance_data['ds']['ec2_metadata'] = ec2_metadata
672+ if hasattr(self, '_crawled_metadata'):
673+ # Any datasource with _crawled_metadata will best represent
674+ # most recent, 'raw' metadata
675+ crawled_metadata = copy.deepcopy(
676+ getattr(self, '_crawled_metadata'))
677+ crawled_metadata.pop('user-data', None)
678+ crawled_metadata.pop('vendor-data', None)
679+ instance_data = {'ds': crawled_metadata}
680+ else:
681+ instance_data = {'ds': {'meta_data': self.metadata}}
682+ if hasattr(self, 'network_json'):
683+ network_json = getattr(self, 'network_json')
684+ if network_json != UNSET:
685+ instance_data['ds']['network_json'] = network_json
686+ if hasattr(self, 'ec2_metadata'):
687+ ec2_metadata = getattr(self, 'ec2_metadata')
688+ if ec2_metadata != UNSET:
689+ instance_data['ds']['ec2_metadata'] = ec2_metadata
690 instance_data.update(
691 self._get_standardized_metadata())
692+ instance_data['ds']['_doc'] = EXPERIMENTAL_TEXT
693 try:
694 # Process content base64encoding unserializable values
695 content = util.json_dumps(instance_data)
696@@ -347,6 +367,40 @@ class DataSource(object):
697 return self._fallback_interface
698
699 @property
700+ def platform_type(self):
701+ if not hasattr(self, '_platform_type'):
702+ # Handle upgrade path where pickled datasource has no _platform.
703+ self._platform_type = self.dsname.lower()
704+ if not self._platform_type:
705+ self._platform_type = self.dsname.lower()
706+ return self._platform_type
707+
708+ @property
709+ def subplatform(self):
710+ """Return a string representing subplatform details for the datasource.
711+
712+ This should be guidance for where the metadata is sourced.
713+ Examples of this on different clouds:
714+ ec2: metadata (http://169.254.169.254)
715+ openstack: configdrive (/dev/path)
716+ openstack: metadata (http://169.254.169.254)
717+ nocloud: seed-dir (/seed/dir/path)
718+ lxd: nocloud (/seed/dir/path)
719+ """
720+ if not hasattr(self, '_subplatform'):
721+ # Handle upgrade path where pickled datasource has no _platform.
722+ self._subplatform = self._get_subplatform()
723+ if not self._subplatform:
724+ self._subplatform = self._get_subplatform()
725+ return self._subplatform
726+
727+ def _get_subplatform(self):
728+ """Subclasses should implement to return a "slug (detail)" string."""
729+ if hasattr(self, 'metadata_address'):
730+ return 'metadata (%s)' % getattr(self, 'metadata_address')
731+ return METADATA_UNKNOWN
732+
733+ @property
734 def cloud_name(self):
735 """Return lowercase cloud name as determined by the datasource.
736
737@@ -359,9 +413,11 @@ class DataSource(object):
738 cloud_name = self.metadata.get(METADATA_CLOUD_NAME_KEY)
739 if isinstance(cloud_name, six.string_types):
740 self._cloud_name = cloud_name.lower()
741- LOG.debug(
742- 'Ignoring metadata provided key %s: non-string type %s',
743- METADATA_CLOUD_NAME_KEY, type(cloud_name))
744+ else:
745+ self._cloud_name = self._get_cloud_name().lower()
746+ LOG.debug(
747+ 'Ignoring metadata provided key %s: non-string type %s',
748+ METADATA_CLOUD_NAME_KEY, type(cloud_name))
749 else:
750 self._cloud_name = self._get_cloud_name().lower()
751 return self._cloud_name
752diff --git a/cloudinit/sources/tests/test_init.py b/cloudinit/sources/tests/test_init.py
753index 8082019..391b343 100644
754--- a/cloudinit/sources/tests/test_init.py
755+++ b/cloudinit/sources/tests/test_init.py
756@@ -295,6 +295,7 @@ class TestDataSource(CiTestCase):
757 'base64_encoded_keys': [],
758 'sensitive_keys': [],
759 'v1': {
760+ '_beta_keys': ['subplatform'],
761 'availability-zone': 'myaz',
762 'availability_zone': 'myaz',
763 'cloud-name': 'subclasscloudname',
764@@ -303,7 +304,10 @@ class TestDataSource(CiTestCase):
765 'instance_id': 'iid-datasource',
766 'local-hostname': 'test-subclass-hostname',
767 'local_hostname': 'test-subclass-hostname',
768- 'region': 'myregion'},
769+ 'platform': 'mytestsubclass',
770+ 'public_ssh_keys': [],
771+ 'region': 'myregion',
772+ 'subplatform': 'unknown'},
773 'ds': {
774 '_doc': EXPERIMENTAL_TEXT,
775 'meta_data': {'availability_zone': 'myaz',
776@@ -339,6 +343,7 @@ class TestDataSource(CiTestCase):
777 'base64_encoded_keys': [],
778 'sensitive_keys': ['ds/meta_data/some/security-credentials'],
779 'v1': {
780+ '_beta_keys': ['subplatform'],
781 'availability-zone': 'myaz',
782 'availability_zone': 'myaz',
783 'cloud-name': 'subclasscloudname',
784@@ -347,7 +352,10 @@ class TestDataSource(CiTestCase):
785 'instance_id': 'iid-datasource',
786 'local-hostname': 'test-subclass-hostname',
787 'local_hostname': 'test-subclass-hostname',
788- 'region': 'myregion'},
789+ 'platform': 'mytestsubclass',
790+ 'public_ssh_keys': [],
791+ 'region': 'myregion',
792+ 'subplatform': 'unknown'},
793 'ds': {
794 '_doc': EXPERIMENTAL_TEXT,
795 'meta_data': {
796diff --git a/cloudinit/sources/tests/test_oracle.py b/cloudinit/sources/tests/test_oracle.py
797index 7599126..97d6294 100644
798--- a/cloudinit/sources/tests/test_oracle.py
799+++ b/cloudinit/sources/tests/test_oracle.py
800@@ -71,6 +71,14 @@ class TestDataSourceOracle(test_helpers.CiTestCase):
801 self.assertFalse(ds._get_data())
802 mocks._is_platform_viable.assert_called_once_with()
803
804+ def test_platform_info(self):
805+ """Return platform-related information for Oracle Datasource."""
806+ ds, _mocks = self._get_ds()
807+ self.assertEqual('oracle', ds.cloud_name)
808+ self.assertEqual('oracle', ds.platform_type)
809+ self.assertEqual(
810+ 'metadata (http://169.254.169.254/openstack/)', ds.subplatform)
811+
812 @mock.patch(DS_PATH + "._is_iscsi_root", return_value=True)
813 def test_without_userdata(self, m_is_iscsi_root):
814 """If no user-data is provided, it should not be in return dict."""
815diff --git a/cloudinit/tests/test_util.py b/cloudinit/tests/test_util.py
816index edb0c18..749a384 100644
817--- a/cloudinit/tests/test_util.py
818+++ b/cloudinit/tests/test_util.py
819@@ -478,4 +478,20 @@ class TestGetLinuxDistro(CiTestCase):
820 dist = util.get_linux_distro()
821 self.assertEqual(('foo', '1.1', 'aarch64'), dist)
822
823+
824+@mock.patch('os.path.exists')
825+class TestIsLXD(CiTestCase):
826+
827+ def test_is_lxd_true_on_sock_device(self, m_exists):
828+ """When lxd's /dev/lxd/sock exists, is_lxd returns true."""
829+ m_exists.return_value = True
830+ self.assertTrue(util.is_lxd())
831+ m_exists.assert_called_once_with('/dev/lxd/sock')
832+
833+ def test_is_lxd_false_when_sock_device_absent(self, m_exists):
834+ """When lxd's /dev/lxd/sock is absent, is_lxd returns false."""
835+ m_exists.return_value = False
836+ self.assertFalse(util.is_lxd())
837+ m_exists.assert_called_once_with('/dev/lxd/sock')
838+
839 # vi: ts=4 expandtab
840diff --git a/cloudinit/util.py b/cloudinit/util.py
841index 5068096..c67d6be 100644
842--- a/cloudinit/util.py
843+++ b/cloudinit/util.py
844@@ -2171,6 +2171,11 @@ def is_container():
845 return False
846
847
848+def is_lxd():
849+ """Check to see if we are running in a lxd container."""
850+ return os.path.exists('/dev/lxd/sock')
851+
852+
853 def get_proc_env(pid, encoding='utf-8', errors='replace'):
854 """
855 Return the environment in a dict that a given process id was started with.
856diff --git a/doc/rtd/topics/instancedata.rst b/doc/rtd/topics/instancedata.rst
857index 634e180..5d2dc94 100644
858--- a/doc/rtd/topics/instancedata.rst
859+++ b/doc/rtd/topics/instancedata.rst
860@@ -90,24 +90,46 @@ There are three basic top-level keys:
861
862 The standardized keys present:
863
864-+----------------------+-----------------------------------------------+---------------------------+
865-| Key path | Description | Examples |
866-+======================+===============================================+===========================+
867-| v1.cloud_name | The name of the cloud provided by metadata | aws, openstack, azure, |
868-| | key 'cloud-name' or the cloud-init datasource | configdrive, nocloud, |
869-| | name which was discovered. | ovf, etc. |
870-+----------------------+-----------------------------------------------+---------------------------+
871-| v1.instance_id | Unique instance_id allocated by the cloud | i-<somehash> |
872-+----------------------+-----------------------------------------------+---------------------------+
873-| v1.local_hostname | The internal or local hostname of the system | ip-10-41-41-70, |
874-| | | <user-provided-hostname> |
875-+----------------------+-----------------------------------------------+---------------------------+
876-| v1.region | The physical region/datacenter in which the | us-east-2 |
877-| | instance is deployed | |
878-+----------------------+-----------------------------------------------+---------------------------+
879-| v1.availability_zone | The physical availability zone in which the | us-east-2b, nova, null |
880-| | instance is deployed | |
881-+----------------------+-----------------------------------------------+---------------------------+
882++----------------------+-----------------------------------------------+-----------------------------------+
883+| Key path | Description | Examples |
884++======================+===============================================+===================================+
885+| v1._beta_keys | List of standardized keys still in 'beta'. | [subplatform] |
886+| | The format, intent or presence of these keys | |
887+| | can change. Do not consider them | |
888+| | production-ready. | |
889++----------------------+-----------------------------------------------+-----------------------------------+
890+| v1.cloud_name | Where possible this will indicate the 'name' | aws, openstack, azure, |
891+| | of the cloud this system is running on. This | configdrive, nocloud, |
892+| | is specifically different than the 'platform' | ovf, etc. |
893+| | below. As an example, the name of Amazon Web | |
894+| | Services is 'aws' while the platform is 'ec2'.| |
895+| | | |
896+| | If no specific name is determinable or | |
897+| | provided in meta-data, then this field may | |
898+| | contain the same content as 'platform'. | |
899++----------------------+-----------------------------------------------+-----------------------------------+
900+| v1.instance_id | Unique instance_id allocated by the cloud | i-<somehash> |
901++----------------------+-----------------------------------------------+-----------------------------------+
902+| v1.local_hostname | The internal or local hostname of the system | ip-10-41-41-70, |
903+| | | <user-provided-hostname> |
904++----------------------+-----------------------------------------------+-----------------------------------+
905+| v1.platform | An attempt to identify the cloud platform | ec2, openstack, lxd, gce |
906+| | instance that the system is running on. | nocloud, ovf |
907++----------------------+-----------------------------------------------+-----------------------------------+
908+| v1.subplatform | Additional platform details describing the | metadata (http://168.254.169.254),|
909+| | specific source or type of metadata used. | seed-dir (/path/to/seed-dir/), |
910+| | The format of subplatform will be: | config-disk (/dev/cd0), |
911+| | <subplatform_type> (<url_file_or_dev_path>) | configdrive (/dev/sr0) |
912++----------------------+-----------------------------------------------+-----------------------------------+
913+| v1.public_ssh_keys | A list of ssh keys provided to the instance | ['ssh-rsa AA...', ...] |
914+| | by the datasource metadata. | |
915++----------------------+-----------------------------------------------+-----------------------------------+
916+| v1.region | The physical region/datacenter in which the | us-east-2 |
917+| | instance is deployed | |
918++----------------------+-----------------------------------------------+-----------------------------------+
919+| v1.availability_zone | The physical availability zone in which the | us-east-2b, nova, null |
920+| | instance is deployed | |
921++----------------------+-----------------------------------------------+-----------------------------------+
922
923
924 Below is an example of ``/run/cloud-init/instance_data.json`` on an EC2
925@@ -117,10 +139,75 @@ instance:
926
927 {
928 "base64_encoded_keys": [],
929- "sensitive_keys": [],
930 "ds": {
931- "meta_data": {
932- "ami-id": "ami-014e1416b628b0cbf",
933+ "_doc": "EXPERIMENTAL: The structure and format of content scoped under the 'ds' key may change in subsequent releases of cloud-init.",
934+ "_metadata_api_version": "2016-09-02",
935+ "dynamic": {
936+ "instance-identity": {
937+ "document": {
938+ "accountId": "437526006925",
939+ "architecture": "x86_64",
940+ "availabilityZone": "us-east-2b",
941+ "billingProducts": null,
942+ "devpayProductCodes": null,
943+ "imageId": "ami-079638aae7046bdd2",
944+ "instanceId": "i-075f088c72ad3271c",
945+ "instanceType": "t2.micro",
946+ "kernelId": null,
947+ "marketplaceProductCodes": null,
948+ "pendingTime": "2018-10-05T20:10:43Z",
949+ "privateIp": "10.41.41.95",
950+ "ramdiskId": null,
951+ "region": "us-east-2",
952+ "version": "2017-09-30"
953+ },
954+ "pkcs7": [
955+ "MIAGCSqGSIb3DQEHAqCAMIACAQExCzAJBgUrDgMCGgUAMIAGCSqGSIb3DQEHAaCAJIAEggHbewog",
956+ "ICJkZXZwYXlQcm9kdWN0Q29kZXMiIDogbnVsbCwKICAibWFya2V0cGxhY2VQcm9kdWN0Q29kZXMi",
957+ "IDogbnVsbCwKICAicHJpdmF0ZUlwIiA6ICIxMC40MS40MS45NSIsCiAgInZlcnNpb24iIDogIjIw",
958+ "MTctMDktMzAiLAogICJpbnN0YW5jZUlkIiA6ICJpLTA3NWYwODhjNzJhZDMyNzFjIiwKICAiYmls",
959+ "bGluZ1Byb2R1Y3RzIiA6IG51bGwsCiAgImluc3RhbmNlVHlwZSIgOiAidDIubWljcm8iLAogICJh",
960+ "Y2NvdW50SWQiIDogIjQzNzUyNjAwNjkyNSIsCiAgImF2YWlsYWJpbGl0eVpvbmUiIDogInVzLWVh",
961+ "c3QtMmIiLAogICJrZXJuZWxJZCIgOiBudWxsLAogICJyYW1kaXNrSWQiIDogbnVsbCwKICAiYXJj",
962+ "aGl0ZWN0dXJlIiA6ICJ4ODZfNjQiLAogICJpbWFnZUlkIiA6ICJhbWktMDc5NjM4YWFlNzA0NmJk",
963+ "ZDIiLAogICJwZW5kaW5nVGltZSIgOiAiMjAxOC0xMC0wNVQyMDoxMDo0M1oiLAogICJyZWdpb24i",
964+ "IDogInVzLWVhc3QtMiIKfQAAAAAAADGCARcwggETAgEBMGkwXDELMAkGA1UEBhMCVVMxGTAXBgNV",
965+ "BAgTEFdhc2hpbmd0b24gU3RhdGUxEDAOBgNVBAcTB1NlYXR0bGUxIDAeBgNVBAoTF0FtYXpvbiBX",
966+ "ZWIgU2VydmljZXMgTExDAgkAlrpI2eVeGmcwCQYFKw4DAhoFAKBdMBgGCSqGSIb3DQEJAzELBgkq",
967+ "hkiG9w0BBwEwHAYJKoZIhvcNAQkFMQ8XDTE4MTAwNTIwMTA0OFowIwYJKoZIhvcNAQkEMRYEFK0k",
968+ "Tz6n1A8/zU1AzFj0riNQORw2MAkGByqGSM44BAMELjAsAhRNrr174y98grPBVXUforN/6wZp8AIU",
969+ "JLZBkrB2GJA8A4WJ1okq++jSrBIAAAAAAAA="
970+ ],
971+ "rsa2048": [
972+ "MIAGCSqGSIb3DQEHAqCAMIACAQExDzANBglghkgBZQMEAgEFADCABgkqhkiG9w0BBwGggCSABIIB",
973+ "23sKICAiZGV2cGF5UHJvZHVjdENvZGVzIiA6IG51bGwsCiAgIm1hcmtldHBsYWNlUHJvZHVjdENv",
974+ "ZGVzIiA6IG51bGwsCiAgInByaXZhdGVJcCIgOiAiMTAuNDEuNDEuOTUiLAogICJ2ZXJzaW9uIiA6",
975+ "ICIyMDE3LTA5LTMwIiwKICAiaW5zdGFuY2VJZCIgOiAiaS0wNzVmMDg4YzcyYWQzMjcxYyIsCiAg",
976+ "ImJpbGxpbmdQcm9kdWN0cyIgOiBudWxsLAogICJpbnN0YW5jZVR5cGUiIDogInQyLm1pY3JvIiwK",
977+ "ICAiYWNjb3VudElkIiA6ICI0Mzc1MjYwMDY5MjUiLAogICJhdmFpbGFiaWxpdHlab25lIiA6ICJ1",
978+ "cy1lYXN0LTJiIiwKICAia2VybmVsSWQiIDogbnVsbCwKICAicmFtZGlza0lkIiA6IG51bGwsCiAg",
979+ "ImFyY2hpdGVjdHVyZSIgOiAieDg2XzY0IiwKICAiaW1hZ2VJZCIgOiAiYW1pLTA3OTYzOGFhZTcw",
980+ "NDZiZGQyIiwKICAicGVuZGluZ1RpbWUiIDogIjIwMTgtMTAtMDVUMjA6MTA6NDNaIiwKICAicmVn",
981+ "aW9uIiA6ICJ1cy1lYXN0LTIiCn0AAAAAAAAxggH/MIIB+wIBATBpMFwxCzAJBgNVBAYTAlVTMRkw",
982+ "FwYDVQQIExBXYXNoaW5ndG9uIFN0YXRlMRAwDgYDVQQHEwdTZWF0dGxlMSAwHgYDVQQKExdBbWF6",
983+ "b24gV2ViIFNlcnZpY2VzIExMQwIJAM07oeX4xevdMA0GCWCGSAFlAwQCAQUAoGkwGAYJKoZIhvcN",
984+ "AQkDMQsGCSqGSIb3DQEHATAcBgkqhkiG9w0BCQUxDxcNMTgxMDA1MjAxMDQ4WjAvBgkqhkiG9w0B",
985+ "CQQxIgQgkYz0pZk3zJKBi4KP4egeOKJl/UYwu5UdE7id74pmPwMwDQYJKoZIhvcNAQEBBQAEggEA",
986+ "dC3uIGGNul1OC1mJKSH3XoBWsYH20J/xhIdftYBoXHGf2BSFsrs9ZscXd2rKAKea4pSPOZEYMXgz",
987+ "lPuT7W0WU89N3ZKviy/ReMSRjmI/jJmsY1lea6mlgcsJXreBXFMYucZvyeWGHdnCjamoKWXkmZlM",
988+ "mSB1gshWy8Y7DzoKviYPQZi5aI54XK2Upt4kGme1tH1NI2Cq+hM4K+adxTbNhS3uzvWaWzMklUuU",
989+ "QHX2GMmjAVRVc8vnA8IAsBCJJp+gFgYzi09IK+cwNgCFFPADoG6jbMHHf4sLB3MUGpiA+G9JlCnM",
990+ "fmkjI2pNRB8spc0k4UG4egqLrqCz67WuK38tjwAAAAAAAA=="
991+ ],
992+ "signature": [
993+ "Tsw6h+V3WnxrNVSXBYIOs1V4j95YR1mLPPH45XnhX0/Ei3waJqf7/7EEKGYP1Cr4PTYEULtZ7Mvf",
994+ "+xJpM50Ivs2bdF7o0c4vnplRWe3f06NI9pv50dr110j/wNzP4MZ1pLhJCqubQOaaBTF3LFutgRrt",
995+ "r4B0mN3p7EcqD8G+ll0="
996+ ]
997+ }
998+ },
999+ "meta-data": {
1000+ "ami-id": "ami-079638aae7046bdd2",
1001 "ami-launch-index": "0",
1002 "ami-manifest-path": "(unknown)",
1003 "block-device-mapping": {
1004@@ -129,31 +216,31 @@ instance:
1005 "ephemeral1": "sdc",
1006 "root": "/dev/sda1"
1007 },
1008- "hostname": "ip-10-41-41-70.us-east-2.compute.internal",
1009+ "hostname": "ip-10-41-41-95.us-east-2.compute.internal",
1010 "instance-action": "none",
1011- "instance-id": "i-04fa31cfc55aa7976",
1012+ "instance-id": "i-075f088c72ad3271c",
1013 "instance-type": "t2.micro",
1014- "local-hostname": "ip-10-41-41-70.us-east-2.compute.internal",
1015- "local-ipv4": "10.41.41.70",
1016- "mac": "06:b6:92:dd:9d:24",
1017+ "local-hostname": "ip-10-41-41-95.us-east-2.compute.internal",
1018+ "local-ipv4": "10.41.41.95",
1019+ "mac": "06:74:8f:39:cd:a6",
1020 "metrics": {
1021 "vhostmd": "<?xml version=\"1.0\" encoding=\"UTF-8\"?>"
1022 },
1023 "network": {
1024 "interfaces": {
1025 "macs": {
1026- "06:b6:92:dd:9d:24": {
1027+ "06:74:8f:39:cd:a6": {
1028 "device-number": "0",
1029- "interface-id": "eni-08c0c9fdb99b6e6f4",
1030+ "interface-id": "eni-052058bbd7831eaae",
1031 "ipv4-associations": {
1032- "18.224.22.43": "10.41.41.70"
1033+ "18.218.221.122": "10.41.41.95"
1034 },
1035- "local-hostname": "ip-10-41-41-70.us-east-2.compute.internal",
1036- "local-ipv4s": "10.41.41.70",
1037- "mac": "06:b6:92:dd:9d:24",
1038+ "local-hostname": "ip-10-41-41-95.us-east-2.compute.internal",
1039+ "local-ipv4s": "10.41.41.95",
1040+ "mac": "06:74:8f:39:cd:a6",
1041 "owner-id": "437526006925",
1042- "public-hostname": "ec2-18-224-22-43.us-east-2.compute.amazonaws.com",
1043- "public-ipv4s": "18.224.22.43",
1044+ "public-hostname": "ec2-18-218-221-122.us-east-2.compute.amazonaws.com",
1045+ "public-ipv4s": "18.218.221.122",
1046 "security-group-ids": "sg-828247e9",
1047 "security-groups": "Cloud-init integration test secgroup",
1048 "subnet-id": "subnet-282f3053",
1049@@ -171,16 +258,14 @@ instance:
1050 "availability-zone": "us-east-2b"
1051 },
1052 "profile": "default-hvm",
1053- "public-hostname": "ec2-18-224-22-43.us-east-2.compute.amazonaws.com",
1054- "public-ipv4": "18.224.22.43",
1055+ "public-hostname": "ec2-18-218-221-122.us-east-2.compute.amazonaws.com",
1056+ "public-ipv4": "18.218.221.122",
1057 "public-keys": {
1058 "cloud-init-integration": [
1059- "ssh-rsa
1060- AAAAB3NzaC1yc2EAAAADAQABAAABAQDSL7uWGj8cgWyIOaspgKdVy0cKJ+UTjfv7jBOjG2H/GN8bJVXy72XAvnhM0dUM+CCs8FOf0YlPX+Frvz2hKInrmRhZVwRSL129PasD12MlI3l44u6IwS1o/W86Q+tkQYEljtqDOo0a+cOsaZkvUNzUyEXUwz/lmYa6G4hMKZH4NBj7nbAAF96wsMCoyNwbWryBnDYUr6wMbjRR1J9Pw7Xh7WRC73wy4Va2YuOgbD3V/5ZrFPLbWZW/7TFXVrql04QVbyei4aiFR5n//GvoqwQDNe58LmbzX/xvxyKJYdny2zXmdAhMxbrpFQsfpkJ9E/H5w0yOdSvnWbUoG5xNGoOB
1061- cloud-init-integration"
1062+ "ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABAQDSL7uWGj8cgWyIOaspgKdVy0cKJ+UTjfv7jBOjG2H/GN8bJVXy72XAvnhM0dUM+CCs8FOf0YlPX+Frvz2hKInrmRhZVwRSL129PasD12MlI3l44u6IwS1o/W86Q+tkQYEljtqDOo0a+cOsaZkvUNzUyEXUwz/lmYa6G4hMKZH4NBj7nbAAF96wsMCoyNwbWryBnDYUr6wMbjRR1J9Pw7Xh7WRC73wy4Va2YuOgbD3V/5ZrFPLbWZW/7TFXVrql04QVbyei4aiFR5n//GvoqwQDNe58LmbzX/xvxyKJYdny2zXmdAhMxbrpFQsfpkJ9E/H5w0yOdSvnWbUoG5xNGoOB cloud-init-integration"
1063 ]
1064 },
1065- "reservation-id": "r-06ab75e9346f54333",
1066+ "reservation-id": "r-0594a20e31f6cfe46",
1067 "security-groups": "Cloud-init integration test secgroup",
1068 "services": {
1069 "domain": "amazonaws.com",
1070@@ -188,16 +273,22 @@ instance:
1071 }
1072 }
1073 },
1074+ "sensitive_keys": [],
1075 "v1": {
1076+ "_beta_keys": [
1077+ "subplatform"
1078+ ],
1079 "availability-zone": "us-east-2b",
1080 "availability_zone": "us-east-2b",
1081- "cloud-name": "aws",
1082 "cloud_name": "aws",
1083- "instance-id": "i-04fa31cfc55aa7976",
1084- "instance_id": "i-04fa31cfc55aa7976",
1085- "local-hostname": "ip-10-41-41-70",
1086- "local_hostname": "ip-10-41-41-70",
1087- "region": "us-east-2"
1088+ "instance_id": "i-075f088c72ad3271c",
1089+ "local_hostname": "ip-10-41-41-95",
1090+ "platform": "ec2",
1091+ "public_ssh_keys": [
1092+ "ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABAQDSL7uWGj8cgWyIOaspgKdVy0cKJ+UTjfv7jBOjG2H/GN8bJVXy72XAvnhM0dUM+CCs8FOf0YlPX+Frvz2hKInrmRhZVwRSL129PasD12MlI3l44u6IwS1o/W86Q+tkQYEljtqDOo0a+cOsaZkvUNzUyEXUwz/lmYa6G4hMKZH4NBj7nbAAF96wsMCoyNwbWryBnDYUr6wMbjRR1J9Pw7Xh7WRC73wy4Va2YuOgbD3V/5ZrFPLbWZW/7TFXVrql04QVbyei4aiFR5n//GvoqwQDNe58LmbzX/xvxyKJYdny2zXmdAhMxbrpFQsfpkJ9E/H5w0yOdSvnWbUoG5xNGoOB cloud-init-integration"
1093+ ],
1094+ "region": "us-east-2",
1095+ "subplatform": "metadata (http://169.254.169.254)"
1096 }
1097 }
1098
1099diff --git a/tests/cloud_tests/testcases/base.py b/tests/cloud_tests/testcases/base.py
1100index e18d601..16b268e 100644
1101--- a/tests/cloud_tests/testcases/base.py
1102+++ b/tests/cloud_tests/testcases/base.py
1103@@ -195,6 +195,9 @@ class CloudTestCase(unittest2.TestCase):
1104 self.assertIsNotNone(
1105 v1_data['availability_zone'], 'expected ec2 availability_zone')
1106 self.assertEqual('aws', v1_data['cloud_name'])
1107+ self.assertEqual('ec2', v1_data['platform'])
1108+ self.assertEqual(
1109+ 'metadata (http://169.254.169.254)', v1_data['subplatform'])
1110 self.assertIn('i-', v1_data['instance_id'])
1111 self.assertIn('ip-', v1_data['local_hostname'])
1112 self.assertIsNotNone(v1_data['region'], 'expected ec2 region')
1113@@ -220,7 +223,11 @@ class CloudTestCase(unittest2.TestCase):
1114 instance_data = json.loads(out)
1115 v1_data = instance_data.get('v1', {})
1116 self.assertItemsEqual([], sorted(instance_data['base64_encoded_keys']))
1117- self.assertEqual('nocloud', v1_data['cloud_name'])
1118+ self.assertEqual('unknown', v1_data['cloud_name'])
1119+ self.assertEqual('lxd', v1_data['platform'])
1120+ self.assertEqual(
1121+ 'seed-dir (/var/lib/cloud/seed/nocloud-net)',
1122+ v1_data['subplatform'])
1123 self.assertIsNone(
1124 v1_data['availability_zone'],
1125 'found unexpected lxd availability_zone %s' %
1126@@ -253,7 +260,9 @@ class CloudTestCase(unittest2.TestCase):
1127 instance_data = json.loads(out)
1128 v1_data = instance_data.get('v1', {})
1129 self.assertItemsEqual([], instance_data['base64_encoded_keys'])
1130- self.assertEqual('nocloud', v1_data['cloud_name'])
1131+ self.assertEqual('unknown', v1_data['cloud_name'])
1132+ self.assertEqual('nocloud', v1_data['platform'])
1133+ self.assertEqual('config-disk (/dev/vda)', v1_data['subplatform'])
1134 self.assertIsNone(
1135 v1_data['availability_zone'],
1136 'found unexpected kvm availability_zone %s' %
1137diff --git a/tests/unittests/test_datasource/test_aliyun.py b/tests/unittests/test_datasource/test_aliyun.py
1138index 1e77842..e9213ca 100644
1139--- a/tests/unittests/test_datasource/test_aliyun.py
1140+++ b/tests/unittests/test_datasource/test_aliyun.py
1141@@ -140,6 +140,10 @@ class TestAliYunDatasource(test_helpers.HttprettyTestCase):
1142 self._test_get_sshkey()
1143 self._test_get_iid()
1144 self._test_host_name()
1145+ self.assertEqual('aliyun', self.ds.cloud_name)
1146+ self.assertEqual('ec2', self.ds.platform)
1147+ self.assertEqual(
1148+ 'metadata (http://100.100.100.200)', self.ds.subplatform)
1149
1150 @mock.patch("cloudinit.sources.DataSourceAliYun._is_aliyun")
1151 def test_returns_false_when_not_on_aliyun(self, m_is_aliyun):
1152diff --git a/tests/unittests/test_datasource/test_altcloud.py b/tests/unittests/test_datasource/test_altcloud.py
1153index ff35904..3119bfa 100644
1154--- a/tests/unittests/test_datasource/test_altcloud.py
1155+++ b/tests/unittests/test_datasource/test_altcloud.py
1156@@ -10,7 +10,6 @@
1157 This test file exercises the code in sources DataSourceAltCloud.py
1158 '''
1159
1160-import mock
1161 import os
1162 import shutil
1163 import tempfile
1164@@ -18,32 +17,13 @@ import tempfile
1165 from cloudinit import helpers
1166 from cloudinit import util
1167
1168-from cloudinit.tests.helpers import CiTestCase
1169+from cloudinit.tests.helpers import CiTestCase, mock
1170
1171 import cloudinit.sources.DataSourceAltCloud as dsac
1172
1173 OS_UNAME_ORIG = getattr(os, 'uname')
1174
1175
1176-def _write_cloud_info_file(value):
1177- '''
1178- Populate the CLOUD_INFO_FILE which would be populated
1179- with a cloud backend identifier ImageFactory when building
1180- an image with ImageFactory.
1181- '''
1182- cifile = open(dsac.CLOUD_INFO_FILE, 'w')
1183- cifile.write(value)
1184- cifile.close()
1185- os.chmod(dsac.CLOUD_INFO_FILE, 0o664)
1186-
1187-
1188-def _remove_cloud_info_file():
1189- '''
1190- Remove the test CLOUD_INFO_FILE
1191- '''
1192- os.remove(dsac.CLOUD_INFO_FILE)
1193-
1194-
1195 def _write_user_data_files(mount_dir, value):
1196 '''
1197 Populate the deltacloud_user_data_file the user_data_file
1198@@ -98,13 +78,15 @@ def _dmi_data(expected):
1199
1200
1201 class TestGetCloudType(CiTestCase):
1202- '''
1203- Test to exercise method: DataSourceAltCloud.get_cloud_type()
1204- '''
1205+ '''Test to exercise method: DataSourceAltCloud.get_cloud_type()'''
1206+
1207+ with_logs = True
1208
1209 def setUp(self):
1210 '''Set up.'''
1211- self.paths = helpers.Paths({'cloud_dir': '/tmp'})
1212+ super(TestGetCloudType, self).setUp()
1213+ self.tmp = self.tmp_dir()
1214+ self.paths = helpers.Paths({'cloud_dir': self.tmp})
1215 self.dmi_data = util.read_dmi_data
1216 # We have a different code path for arm to deal with LP1243287
1217 # We have to switch arch to x86_64 to avoid test failure
1218@@ -115,6 +97,26 @@ class TestGetCloudType(CiTestCase):
1219 util.read_dmi_data = self.dmi_data
1220 force_arch()
1221
1222+ def test_cloud_info_file_ioerror(self):
1223+ """Return UNKNOWN when /etc/sysconfig/cloud-info exists but errors."""
1224+ self.assertEqual('/etc/sysconfig/cloud-info', dsac.CLOUD_INFO_FILE)
1225+ dsrc = dsac.DataSourceAltCloud({}, None, self.paths)
1226+ # Attempting to read the directory generates IOError
1227+ with mock.patch.object(dsac, 'CLOUD_INFO_FILE', self.tmp):
1228+ self.assertEqual('UNKNOWN', dsrc.get_cloud_type())
1229+ self.assertIn(
1230+ "[Errno 21] Is a directory: '%s'" % self.tmp,
1231+ self.logs.getvalue())
1232+
1233+ def test_cloud_info_file(self):
1234+ """Return uppercase stripped content from /etc/sysconfig/cloud-info."""
1235+ dsrc = dsac.DataSourceAltCloud({}, None, self.paths)
1236+ cloud_info = self.tmp_path('cloud-info', dir=self.tmp)
1237+ util.write_file(cloud_info, ' OverRiDdeN CloudType ')
1238+ # Attempting to read the directory generates IOError
1239+ with mock.patch.object(dsac, 'CLOUD_INFO_FILE', cloud_info):
1240+ self.assertEqual('OVERRIDDEN CLOUDTYPE', dsrc.get_cloud_type())
1241+
1242 def test_rhev(self):
1243 '''
1244 Test method get_cloud_type() for RHEVm systems.
1245@@ -153,60 +155,57 @@ class TestGetDataCloudInfoFile(CiTestCase):
1246 self.tmp = self.tmp_dir()
1247 self.paths = helpers.Paths(
1248 {'cloud_dir': self.tmp, 'run_dir': self.tmp})
1249- self.cloud_info_file = tempfile.mkstemp()[1]
1250- self.dmi_data = util.read_dmi_data
1251- dsac.CLOUD_INFO_FILE = self.cloud_info_file
1252-
1253- def tearDown(self):
1254- # Reset
1255-
1256- # Attempt to remove the temp file ignoring errors
1257- try:
1258- os.remove(self.cloud_info_file)
1259- except OSError:
1260- pass
1261-
1262- util.read_dmi_data = self.dmi_data
1263- dsac.CLOUD_INFO_FILE = '/etc/sysconfig/cloud-info'
1264+ self.cloud_info_file = self.tmp_path('cloud-info', dir=self.tmp)
1265
1266 def test_rhev(self):
1267 '''Success Test module get_data() forcing RHEV.'''
1268
1269- _write_cloud_info_file('RHEV')
1270+ util.write_file(self.cloud_info_file, 'RHEV')
1271 dsrc = dsac.DataSourceAltCloud({}, None, self.paths)
1272 dsrc.user_data_rhevm = lambda: True
1273- self.assertEqual(True, dsrc.get_data())
1274+ with mock.patch.object(dsac, 'CLOUD_INFO_FILE', self.cloud_info_file):
1275+ self.assertEqual(True, dsrc.get_data())
1276+ self.assertEqual('altcloud', dsrc.cloud_name)
1277+ self.assertEqual('altcloud', dsrc.platform_type)
1278+ self.assertEqual('rhev (/dev/fd0)', dsrc.subplatform)
1279
1280 def test_vsphere(self):
1281 '''Success Test module get_data() forcing VSPHERE.'''
1282
1283- _write_cloud_info_file('VSPHERE')
1284+ util.write_file(self.cloud_info_file, 'VSPHERE')
1285 dsrc = dsac.DataSourceAltCloud({}, None, self.paths)
1286 dsrc.user_data_vsphere = lambda: True
1287- self.assertEqual(True, dsrc.get_data())
1288+ with mock.patch.object(dsac, 'CLOUD_INFO_FILE', self.cloud_info_file):
1289+ self.assertEqual(True, dsrc.get_data())
1290+ self.assertEqual('altcloud', dsrc.cloud_name)
1291+ self.assertEqual('altcloud', dsrc.platform_type)
1292+ self.assertEqual('vsphere (unknown)', dsrc.subplatform)
1293
1294 def test_fail_rhev(self):
1295 '''Failure Test module get_data() forcing RHEV.'''
1296
1297- _write_cloud_info_file('RHEV')
1298+ util.write_file(self.cloud_info_file, 'RHEV')
1299 dsrc = dsac.DataSourceAltCloud({}, None, self.paths)
1300 dsrc.user_data_rhevm = lambda: False
1301- self.assertEqual(False, dsrc.get_data())
1302+ with mock.patch.object(dsac, 'CLOUD_INFO_FILE', self.cloud_info_file):
1303+ self.assertEqual(False, dsrc.get_data())
1304
1305 def test_fail_vsphere(self):
1306 '''Failure Test module get_data() forcing VSPHERE.'''
1307
1308- _write_cloud_info_file('VSPHERE')
1309+ util.write_file(self.cloud_info_file, 'VSPHERE')
1310 dsrc = dsac.DataSourceAltCloud({}, None, self.paths)
1311 dsrc.user_data_vsphere = lambda: False
1312- self.assertEqual(False, dsrc.get_data())
1313+ with mock.patch.object(dsac, 'CLOUD_INFO_FILE', self.cloud_info_file):
1314+ self.assertEqual(False, dsrc.get_data())
1315
1316 def test_unrecognized(self):
1317 '''Failure Test module get_data() forcing unrecognized.'''
1318
1319- _write_cloud_info_file('unrecognized')
1320+ util.write_file(self.cloud_info_file, 'unrecognized')
1321 dsrc = dsac.DataSourceAltCloud({}, None, self.paths)
1322- self.assertEqual(False, dsrc.get_data())
1323+ with mock.patch.object(dsac, 'CLOUD_INFO_FILE', self.cloud_info_file):
1324+ self.assertEqual(False, dsrc.get_data())
1325
1326
1327 class TestGetDataNoCloudInfoFile(CiTestCase):
1328@@ -322,7 +321,8 @@ class TestUserDataVsphere(CiTestCase):
1329 '''
1330 def setUp(self):
1331 '''Set up.'''
1332- self.paths = helpers.Paths({'cloud_dir': '/tmp'})
1333+ self.tmp = self.tmp_dir()
1334+ self.paths = helpers.Paths({'cloud_dir': self.tmp})
1335 self.mount_dir = tempfile.mkdtemp()
1336
1337 _write_user_data_files(self.mount_dir, 'test user data')
1338@@ -363,6 +363,22 @@ class TestUserDataVsphere(CiTestCase):
1339 self.assertEqual(1, m_find_devs_with.call_count)
1340 self.assertEqual(1, m_mount_cb.call_count)
1341
1342+ @mock.patch("cloudinit.sources.DataSourceAltCloud.util.find_devs_with")
1343+ @mock.patch("cloudinit.sources.DataSourceAltCloud.util.mount_cb")
1344+ def test_user_data_vsphere_success(self, m_mount_cb, m_find_devs_with):
1345+ """Test user_data_vsphere() where successful."""
1346+ m_find_devs_with.return_value = ["/dev/mock/cdrom"]
1347+ m_mount_cb.return_value = 'raw userdata from cdrom'
1348+ dsrc = dsac.DataSourceAltCloud({}, None, self.paths)
1349+ cloud_info = self.tmp_path('cloud-info', dir=self.tmp)
1350+ util.write_file(cloud_info, 'VSPHERE')
1351+ self.assertEqual(True, dsrc.user_data_vsphere())
1352+ m_find_devs_with.assert_called_once_with('LABEL=CDROM')
1353+ m_mount_cb.assert_called_once_with(
1354+ '/dev/mock/cdrom', dsac.read_user_data_callback)
1355+ with mock.patch.object(dsrc, 'get_cloud_type', return_value='VSPHERE'):
1356+ self.assertEqual('vsphere (/dev/mock/cdrom)', dsrc.subplatform)
1357+
1358
1359 class TestReadUserDataCallback(CiTestCase):
1360 '''
1361diff --git a/tests/unittests/test_datasource/test_azure.py b/tests/unittests/test_datasource/test_azure.py
1362index 4e428b7..0f4b7bf 100644
1363--- a/tests/unittests/test_datasource/test_azure.py
1364+++ b/tests/unittests/test_datasource/test_azure.py
1365@@ -110,6 +110,8 @@ NETWORK_METADATA = {
1366 }
1367 }
1368
1369+MOCKPATH = 'cloudinit.sources.DataSourceAzure.'
1370+
1371
1372 class TestGetMetadataFromIMDS(HttprettyTestCase):
1373
1374@@ -119,9 +121,9 @@ class TestGetMetadataFromIMDS(HttprettyTestCase):
1375 super(TestGetMetadataFromIMDS, self).setUp()
1376 self.network_md_url = dsaz.IMDS_URL + "instance?api-version=2017-12-01"
1377
1378- @mock.patch('cloudinit.sources.DataSourceAzure.readurl')
1379- @mock.patch('cloudinit.sources.DataSourceAzure.EphemeralDHCPv4')
1380- @mock.patch('cloudinit.sources.DataSourceAzure.net.is_up')
1381+ @mock.patch(MOCKPATH + 'readurl')
1382+ @mock.patch(MOCKPATH + 'EphemeralDHCPv4')
1383+ @mock.patch(MOCKPATH + 'net.is_up')
1384 def test_get_metadata_does_not_dhcp_if_network_is_up(
1385 self, m_net_is_up, m_dhcp, m_readurl):
1386 """Do not perform DHCP setup when nic is already up."""
1387@@ -138,9 +140,9 @@ class TestGetMetadataFromIMDS(HttprettyTestCase):
1388 "Crawl of Azure Instance Metadata Service (IMDS) took", # log_time
1389 self.logs.getvalue())
1390
1391- @mock.patch('cloudinit.sources.DataSourceAzure.readurl')
1392- @mock.patch('cloudinit.sources.DataSourceAzure.EphemeralDHCPv4')
1393- @mock.patch('cloudinit.sources.DataSourceAzure.net.is_up')
1394+ @mock.patch(MOCKPATH + 'readurl')
1395+ @mock.patch(MOCKPATH + 'EphemeralDHCPv4')
1396+ @mock.patch(MOCKPATH + 'net.is_up')
1397 def test_get_metadata_performs_dhcp_when_network_is_down(
1398 self, m_net_is_up, m_dhcp, m_readurl):
1399 """Perform DHCP setup when nic is not up."""
1400@@ -163,7 +165,7 @@ class TestGetMetadataFromIMDS(HttprettyTestCase):
1401 headers={'Metadata': 'true'}, retries=2, timeout=1)
1402
1403 @mock.patch('cloudinit.url_helper.time.sleep')
1404- @mock.patch('cloudinit.sources.DataSourceAzure.net.is_up')
1405+ @mock.patch(MOCKPATH + 'net.is_up')
1406 def test_get_metadata_from_imds_empty_when_no_imds_present(
1407 self, m_net_is_up, m_sleep):
1408 """Return empty dict when IMDS network metadata is absent."""
1409@@ -380,7 +382,7 @@ fdescfs /dev/fd fdescfs rw 0 0
1410 res = get_path_dev_freebsd('/etc', mnt_list)
1411 self.assertIsNotNone(res)
1412
1413- @mock.patch('cloudinit.sources.DataSourceAzure._is_platform_viable')
1414+ @mock.patch(MOCKPATH + '_is_platform_viable')
1415 def test_call_is_platform_viable_seed(self, m_is_platform_viable):
1416 """Check seed_dir using _is_platform_viable and return False."""
1417 # Return a non-matching asset tag value
1418@@ -401,6 +403,24 @@ fdescfs /dev/fd fdescfs rw 0 0
1419 self.assertEqual(dsrc.metadata['local-hostname'], odata['HostName'])
1420 self.assertTrue(os.path.isfile(
1421 os.path.join(self.waagent_d, 'ovf-env.xml')))
1422+ self.assertEqual('azure', dsrc.cloud_name)
1423+ self.assertEqual('azure', dsrc.platform_type)
1424+ self.assertEqual(
1425+ 'seed-dir (%s/seed/azure)' % self.tmp, dsrc.subplatform)
1426+
1427+ def test_basic_dev_file(self):
1428+ """When a device path is used, present that in subplatform."""
1429+ data = {'sys_cfg': {}, 'dsdevs': ['/dev/cd0']}
1430+ dsrc = self._get_ds(data)
1431+ with mock.patch(MOCKPATH + 'util.mount_cb') as m_mount_cb:
1432+ m_mount_cb.return_value = (
1433+ {'local-hostname': 'me'}, 'ud', {'cfg': ''}, {})
1434+ self.assertTrue(dsrc.get_data())
1435+ self.assertEqual(dsrc.userdata_raw, 'ud')
1436+ self.assertEqual(dsrc.metadata['local-hostname'], 'me')
1437+ self.assertEqual('azure', dsrc.cloud_name)
1438+ self.assertEqual('azure', dsrc.platform_type)
1439+ self.assertEqual('config-disk (/dev/cd0)', dsrc.subplatform)
1440
1441 def test_get_data_non_ubuntu_will_not_remove_network_scripts(self):
1442 """get_data on non-Ubuntu will not remove ubuntu net scripts."""
1443@@ -769,8 +789,8 @@ fdescfs /dev/fd fdescfs rw 0 0
1444 ds.get_data()
1445 self.assertEqual(self.instance_id, ds.metadata['instance-id'])
1446
1447- @mock.patch("cloudinit.sources.DataSourceAzure.util.is_FreeBSD")
1448- @mock.patch("cloudinit.sources.DataSourceAzure._check_freebsd_cdrom")
1449+ @mock.patch(MOCKPATH + 'util.is_FreeBSD')
1450+ @mock.patch(MOCKPATH + '_check_freebsd_cdrom')
1451 def test_list_possible_azure_ds_devs(self, m_check_fbsd_cdrom,
1452 m_is_FreeBSD):
1453 """On FreeBSD, possible devs should show /dev/cd0."""
1454@@ -885,17 +905,17 @@ fdescfs /dev/fd fdescfs rw 0 0
1455 expected_config['config'].append(blacklist_config)
1456 self.assertEqual(netconfig, expected_config)
1457
1458- @mock.patch("cloudinit.sources.DataSourceAzure.util.subp")
1459+ @mock.patch(MOCKPATH + 'util.subp')
1460 def test_get_hostname_with_no_args(self, subp):
1461 dsaz.get_hostname()
1462 subp.assert_called_once_with(("hostname",), capture=True)
1463
1464- @mock.patch("cloudinit.sources.DataSourceAzure.util.subp")
1465+ @mock.patch(MOCKPATH + 'util.subp')
1466 def test_get_hostname_with_string_arg(self, subp):
1467 dsaz.get_hostname(hostname_command="hostname")
1468 subp.assert_called_once_with(("hostname",), capture=True)
1469
1470- @mock.patch("cloudinit.sources.DataSourceAzure.util.subp")
1471+ @mock.patch(MOCKPATH + 'util.subp')
1472 def test_get_hostname_with_iterable_arg(self, subp):
1473 dsaz.get_hostname(hostname_command=("hostname",))
1474 subp.assert_called_once_with(("hostname",), capture=True)
1475@@ -949,7 +969,7 @@ class TestAzureBounce(CiTestCase):
1476 self.set_hostname = self.patches.enter_context(
1477 mock.patch.object(dsaz, 'set_hostname'))
1478 self.subp = self.patches.enter_context(
1479- mock.patch('cloudinit.sources.DataSourceAzure.util.subp'))
1480+ mock.patch(MOCKPATH + 'util.subp'))
1481 self.find_fallback_nic = self.patches.enter_context(
1482 mock.patch('cloudinit.net.find_fallback_nic', return_value='eth9'))
1483
1484@@ -989,7 +1009,7 @@ class TestAzureBounce(CiTestCase):
1485 ds.get_data()
1486 self.assertEqual(0, self.set_hostname.call_count)
1487
1488- @mock.patch('cloudinit.sources.DataSourceAzure.perform_hostname_bounce')
1489+ @mock.patch(MOCKPATH + 'perform_hostname_bounce')
1490 def test_disabled_bounce_does_not_perform_bounce(
1491 self, perform_hostname_bounce):
1492 cfg = {'hostname_bounce': {'policy': 'off'}}
1493@@ -1005,7 +1025,7 @@ class TestAzureBounce(CiTestCase):
1494 ds.get_data()
1495 self.assertEqual(0, self.set_hostname.call_count)
1496
1497- @mock.patch('cloudinit.sources.DataSourceAzure.perform_hostname_bounce')
1498+ @mock.patch(MOCKPATH + 'perform_hostname_bounce')
1499 def test_unchanged_hostname_does_not_perform_bounce(
1500 self, perform_hostname_bounce):
1501 host_name = 'unchanged-host-name'
1502@@ -1015,7 +1035,7 @@ class TestAzureBounce(CiTestCase):
1503 ds.get_data()
1504 self.assertEqual(0, perform_hostname_bounce.call_count)
1505
1506- @mock.patch('cloudinit.sources.DataSourceAzure.perform_hostname_bounce')
1507+ @mock.patch(MOCKPATH + 'perform_hostname_bounce')
1508 def test_force_performs_bounce_regardless(self, perform_hostname_bounce):
1509 host_name = 'unchanged-host-name'
1510 self.get_hostname.return_value = host_name
1511@@ -1032,7 +1052,7 @@ class TestAzureBounce(CiTestCase):
1512 cfg = {'hostname_bounce': {'policy': 'force'}}
1513 dsrc = self._get_ds(self.get_ovf_env_with_dscfg(host_name, cfg),
1514 agent_command=['not', '__builtin__'])
1515- patch_path = 'cloudinit.sources.DataSourceAzure.util.which'
1516+ patch_path = MOCKPATH + 'util.which'
1517 with mock.patch(patch_path) as m_which:
1518 m_which.return_value = None
1519 ret = self._get_and_setup(dsrc)
1520@@ -1053,7 +1073,7 @@ class TestAzureBounce(CiTestCase):
1521 self.assertEqual(expected_hostname,
1522 self.set_hostname.call_args_list[0][0][0])
1523
1524- @mock.patch('cloudinit.sources.DataSourceAzure.perform_hostname_bounce')
1525+ @mock.patch(MOCKPATH + 'perform_hostname_bounce')
1526 def test_different_hostnames_performs_bounce(
1527 self, perform_hostname_bounce):
1528 expected_hostname = 'azure-expected-host-name'
1529@@ -1076,7 +1096,7 @@ class TestAzureBounce(CiTestCase):
1530 self.assertEqual(initial_host_name,
1531 self.set_hostname.call_args_list[-1][0][0])
1532
1533- @mock.patch('cloudinit.sources.DataSourceAzure.perform_hostname_bounce')
1534+ @mock.patch(MOCKPATH + 'perform_hostname_bounce')
1535 def test_failure_in_bounce_still_resets_host_name(
1536 self, perform_hostname_bounce):
1537 perform_hostname_bounce.side_effect = Exception
1538@@ -1117,7 +1137,7 @@ class TestAzureBounce(CiTestCase):
1539 self.assertEqual(
1540 dsaz.BOUNCE_COMMAND_IFUP, bounce_args)
1541
1542- @mock.patch('cloudinit.sources.DataSourceAzure.perform_hostname_bounce')
1543+ @mock.patch(MOCKPATH + 'perform_hostname_bounce')
1544 def test_set_hostname_option_can_disable_bounce(
1545 self, perform_hostname_bounce):
1546 cfg = {'set_hostname': False, 'hostname_bounce': {'policy': 'force'}}
1547@@ -1218,12 +1238,12 @@ class TestCanDevBeReformatted(CiTestCase):
1548 def has_ntfs_fs(device):
1549 return bypath.get(device, {}).get('fs') == 'ntfs'
1550
1551- p = 'cloudinit.sources.DataSourceAzure'
1552- self._domock(p + "._partitions_on_device", 'm_partitions_on_device')
1553- self._domock(p + "._has_ntfs_filesystem", 'm_has_ntfs_filesystem')
1554- self._domock(p + ".util.mount_cb", 'm_mount_cb')
1555- self._domock(p + ".os.path.realpath", 'm_realpath')
1556- self._domock(p + ".os.path.exists", 'm_exists')
1557+ p = MOCKPATH
1558+ self._domock(p + "_partitions_on_device", 'm_partitions_on_device')
1559+ self._domock(p + "_has_ntfs_filesystem", 'm_has_ntfs_filesystem')
1560+ self._domock(p + "util.mount_cb", 'm_mount_cb')
1561+ self._domock(p + "os.path.realpath", 'm_realpath')
1562+ self._domock(p + "os.path.exists", 'm_exists')
1563
1564 self.m_exists.side_effect = lambda p: p in bypath
1565 self.m_realpath.side_effect = realpath
1566@@ -1488,7 +1508,7 @@ class TestPreprovisioningShouldReprovision(CiTestCase):
1567 self.paths = helpers.Paths({'cloud_dir': tmp})
1568 dsaz.BUILTIN_DS_CONFIG['data_dir'] = self.waagent_d
1569
1570- @mock.patch('cloudinit.sources.DataSourceAzure.util.write_file')
1571+ @mock.patch(MOCKPATH + 'util.write_file')
1572 def test__should_reprovision_with_true_cfg(self, isfile, write_f):
1573 """The _should_reprovision method should return true with config
1574 flag present."""
1575@@ -1512,7 +1532,7 @@ class TestPreprovisioningShouldReprovision(CiTestCase):
1576 dsa = dsaz.DataSourceAzure({}, distro=None, paths=self.paths)
1577 self.assertFalse(dsa._should_reprovision((None, None, {}, None)))
1578
1579- @mock.patch('cloudinit.sources.DataSourceAzure.DataSourceAzure._poll_imds')
1580+ @mock.patch(MOCKPATH + 'DataSourceAzure._poll_imds')
1581 def test_reprovision_calls__poll_imds(self, _poll_imds, isfile):
1582 """_reprovision will poll IMDS."""
1583 isfile.return_value = False
1584@@ -1528,8 +1548,7 @@ class TestPreprovisioningShouldReprovision(CiTestCase):
1585 @mock.patch('cloudinit.net.dhcp.EphemeralIPv4Network')
1586 @mock.patch('cloudinit.net.dhcp.maybe_perform_dhcp_discovery')
1587 @mock.patch('requests.Session.request')
1588-@mock.patch(
1589- 'cloudinit.sources.DataSourceAzure.DataSourceAzure._report_ready')
1590+@mock.patch(MOCKPATH + 'DataSourceAzure._report_ready')
1591 class TestPreprovisioningPollIMDS(CiTestCase):
1592
1593 def setUp(self):
1594@@ -1539,7 +1558,7 @@ class TestPreprovisioningPollIMDS(CiTestCase):
1595 self.paths = helpers.Paths({'cloud_dir': self.tmp})
1596 dsaz.BUILTIN_DS_CONFIG['data_dir'] = self.waagent_d
1597
1598- @mock.patch('cloudinit.sources.DataSourceAzure.util.write_file')
1599+ @mock.patch(MOCKPATH + 'util.write_file')
1600 def test_poll_imds_calls_report_ready(self, write_f, report_ready_func,
1601 fake_resp, m_dhcp, m_net):
1602 """The poll_imds will call report_ready after creating marker file."""
1603@@ -1550,8 +1569,7 @@ class TestPreprovisioningPollIMDS(CiTestCase):
1604 'unknown-245': '624c3620'}
1605 m_dhcp.return_value = [lease]
1606 dsa = dsaz.DataSourceAzure({}, distro=None, paths=self.paths)
1607- mock_path = (
1608- 'cloudinit.sources.DataSourceAzure.REPORTED_READY_MARKER_FILE')
1609+ mock_path = (MOCKPATH + 'REPORTED_READY_MARKER_FILE')
1610 with mock.patch(mock_path, report_marker):
1611 dsa._poll_imds()
1612 self.assertEqual(report_ready_func.call_count, 1)
1613@@ -1561,23 +1579,21 @@ class TestPreprovisioningPollIMDS(CiTestCase):
1614 fake_resp, m_dhcp, m_net):
1615 """The poll_imds should not call reporting ready
1616 when flag is false"""
1617- report_marker = self.tmp_path('report_marker', self.tmp)
1618- write_file(report_marker, content='dont run report_ready :)')
1619+ report_file = self.tmp_path('report_marker', self.tmp)
1620+ write_file(report_file, content='dont run report_ready :)')
1621 m_dhcp.return_value = [{
1622 'interface': 'eth9', 'fixed-address': '192.168.2.9',
1623 'routers': '192.168.2.1', 'subnet-mask': '255.255.255.0',
1624 'unknown-245': '624c3620'}]
1625 dsa = dsaz.DataSourceAzure({}, distro=None, paths=self.paths)
1626- mock_path = (
1627- 'cloudinit.sources.DataSourceAzure.REPORTED_READY_MARKER_FILE')
1628- with mock.patch(mock_path, report_marker):
1629+ with mock.patch(MOCKPATH + 'REPORTED_READY_MARKER_FILE', report_file):
1630 dsa._poll_imds()
1631 self.assertEqual(report_ready_func.call_count, 0)
1632
1633
1634-@mock.patch('cloudinit.sources.DataSourceAzure.util.subp')
1635-@mock.patch('cloudinit.sources.DataSourceAzure.util.write_file')
1636-@mock.patch('cloudinit.sources.DataSourceAzure.util.is_FreeBSD')
1637+@mock.patch(MOCKPATH + 'util.subp')
1638+@mock.patch(MOCKPATH + 'util.write_file')
1639+@mock.patch(MOCKPATH + 'util.is_FreeBSD')
1640 @mock.patch('cloudinit.net.dhcp.EphemeralIPv4Network')
1641 @mock.patch('cloudinit.net.dhcp.maybe_perform_dhcp_discovery')
1642 @mock.patch('requests.Session.request')
1643@@ -1688,7 +1704,7 @@ class TestRemoveUbuntuNetworkConfigScripts(CiTestCase):
1644 self.tmp_path('notfilehere', dir=self.tmp)])
1645 self.assertNotIn('/not/a', self.logs.getvalue()) # No delete logs
1646
1647- @mock.patch('cloudinit.sources.DataSourceAzure.os.path.exists')
1648+ @mock.patch(MOCKPATH + 'os.path.exists')
1649 def test_remove_network_scripts_default_removes_stock_scripts(self,
1650 m_exists):
1651 """Azure's stock ubuntu image scripts and artifacts are removed."""
1652@@ -1704,14 +1720,14 @@ class TestWBIsPlatformViable(CiTestCase):
1653 """White box tests for _is_platform_viable."""
1654 with_logs = True
1655
1656- @mock.patch('cloudinit.sources.DataSourceAzure.util.read_dmi_data')
1657+ @mock.patch(MOCKPATH + 'util.read_dmi_data')
1658 def test_true_on_non_azure_chassis(self, m_read_dmi_data):
1659 """Return True if DMI chassis-asset-tag is AZURE_CHASSIS_ASSET_TAG."""
1660 m_read_dmi_data.return_value = dsaz.AZURE_CHASSIS_ASSET_TAG
1661 self.assertTrue(dsaz._is_platform_viable('doesnotmatter'))
1662
1663- @mock.patch('cloudinit.sources.DataSourceAzure.os.path.exists')
1664- @mock.patch('cloudinit.sources.DataSourceAzure.util.read_dmi_data')
1665+ @mock.patch(MOCKPATH + 'os.path.exists')
1666+ @mock.patch(MOCKPATH + 'util.read_dmi_data')
1667 def test_true_on_azure_ovf_env_in_seed_dir(self, m_read_dmi_data, m_exist):
1668 """Return True if ovf-env.xml exists in known seed dirs."""
1669 # Non-matching Azure chassis-asset-tag
1670@@ -1729,7 +1745,7 @@ class TestWBIsPlatformViable(CiTestCase):
1671 and no devices have a label starting with prefix 'rd_rdfe_'.
1672 """
1673 self.assertFalse(wrap_and_call(
1674- 'cloudinit.sources.DataSourceAzure',
1675+ MOCKPATH,
1676 {'os.path.exists': False,
1677 # Non-matching Azure chassis-asset-tag
1678 'util.read_dmi_data': dsaz.AZURE_CHASSIS_ASSET_TAG + 'X',
1679diff --git a/tests/unittests/test_datasource/test_cloudsigma.py b/tests/unittests/test_datasource/test_cloudsigma.py
1680index 380ad1b..3bf52e6 100644
1681--- a/tests/unittests/test_datasource/test_cloudsigma.py
1682+++ b/tests/unittests/test_datasource/test_cloudsigma.py
1683@@ -68,6 +68,12 @@ class DataSourceCloudSigmaTest(test_helpers.CiTestCase):
1684 self.assertEqual(SERVER_CONTEXT['uuid'],
1685 self.datasource.get_instance_id())
1686
1687+ def test_platform(self):
1688+ """All platform-related attributes are set."""
1689+ self.assertEqual(self.datasource.cloud_name, 'cloudsigma')
1690+ self.assertEqual(self.datasource.platform_type, 'cloudsigma')
1691+ self.assertEqual(self.datasource.subplatform, 'cepko (/dev/ttyS1)')
1692+
1693 def test_metadata(self):
1694 self.assertEqual(self.datasource.metadata, SERVER_CONTEXT)
1695
1696diff --git a/tests/unittests/test_datasource/test_configdrive.py b/tests/unittests/test_datasource/test_configdrive.py
1697index 231619c..dcdabea 100644
1698--- a/tests/unittests/test_datasource/test_configdrive.py
1699+++ b/tests/unittests/test_datasource/test_configdrive.py
1700@@ -478,6 +478,9 @@ class TestConfigDriveDataSource(CiTestCase):
1701 myds = cfg_ds_from_dir(self.tmp, files=CFG_DRIVE_FILES_V2)
1702 self.assertEqual(myds.get_public_ssh_keys(),
1703 [OSTACK_META['public_keys']['mykey']])
1704+ self.assertEqual('configdrive', myds.cloud_name)
1705+ self.assertEqual('openstack', myds.platform)
1706+ self.assertEqual('seed-dir (%s/seed)' % self.tmp, myds.subplatform)
1707
1708
1709 class TestNetJson(CiTestCase):
1710diff --git a/tests/unittests/test_datasource/test_ec2.py b/tests/unittests/test_datasource/test_ec2.py
1711index 497e761..9f81255 100644
1712--- a/tests/unittests/test_datasource/test_ec2.py
1713+++ b/tests/unittests/test_datasource/test_ec2.py
1714@@ -351,7 +351,9 @@ class TestEc2(test_helpers.HttprettyTestCase):
1715 m_get_interface_mac.return_value = mac1
1716 nc = ds.network_config # Will re-crawl network metadata
1717 self.assertIsNotNone(nc)
1718- self.assertIn('Re-crawl of metadata service', self.logs.getvalue())
1719+ self.assertIn(
1720+ 'Refreshing stale metadata from prior to upgrade',
1721+ self.logs.getvalue())
1722 expected = {'version': 1, 'config': [
1723 {'mac_address': '06:17:04:d7:26:09',
1724 'name': 'eth9',
1725@@ -386,7 +388,7 @@ class TestEc2(test_helpers.HttprettyTestCase):
1726 register_mock_metaserver(
1727 '{0}/{1}/dynamic/'.format(ds.metadata_address, all_versions[-1]),
1728 DYNAMIC_METADATA)
1729- ds._cloud_platform = ec2.Platforms.AWS
1730+ ds._cloud_name = ec2.CloudNames.AWS
1731 # Setup cached metadata on the Datasource
1732 ds.metadata = DEFAULT_METADATA
1733 self.assertEqual('my-identity-id', ds.get_instance_id())
1734@@ -401,6 +403,9 @@ class TestEc2(test_helpers.HttprettyTestCase):
1735 ret = ds.get_data()
1736 self.assertTrue(ret)
1737 self.assertEqual(0, m_dhcp.call_count)
1738+ self.assertEqual('aws', ds.cloud_name)
1739+ self.assertEqual('ec2', ds.platform_type)
1740+ self.assertEqual('metadata (%s)' % ds.metadata_address, ds.subplatform)
1741
1742 def test_valid_platform_with_strict_false(self):
1743 """Valid platform data should return true with strict_id false."""
1744@@ -439,16 +444,17 @@ class TestEc2(test_helpers.HttprettyTestCase):
1745 sys_cfg={'datasource': {'Ec2': {'strict_id': False}}},
1746 md=DEFAULT_METADATA)
1747 platform_attrs = [
1748- attr for attr in ec2.Platforms.__dict__.keys()
1749+ attr for attr in ec2.CloudNames.__dict__.keys()
1750 if not attr.startswith('__')]
1751 for attr_name in platform_attrs:
1752- platform_name = getattr(ec2.Platforms, attr_name)
1753- if platform_name != 'AWS':
1754- ds._cloud_platform = platform_name
1755+ platform_name = getattr(ec2.CloudNames, attr_name)
1756+ if platform_name != 'aws':
1757+ ds._cloud_name = platform_name
1758 ret = ds.get_data()
1759+ self.assertEqual('ec2', ds.platform_type)
1760 self.assertFalse(ret)
1761 message = (
1762- "Local Ec2 mode only supported on ('AWS',),"
1763+ "Local Ec2 mode only supported on ('aws',),"
1764 ' not {0}'.format(platform_name))
1765 self.assertIn(message, self.logs.getvalue())
1766
1767diff --git a/tests/unittests/test_datasource/test_ibmcloud.py b/tests/unittests/test_datasource/test_ibmcloud.py
1768index e639ae4..0b54f58 100644
1769--- a/tests/unittests/test_datasource/test_ibmcloud.py
1770+++ b/tests/unittests/test_datasource/test_ibmcloud.py
1771@@ -1,14 +1,17 @@
1772 # This file is part of cloud-init. See LICENSE file for license information.
1773
1774+from cloudinit.helpers import Paths
1775 from cloudinit.sources import DataSourceIBMCloud as ibm
1776 from cloudinit.tests import helpers as test_helpers
1777+from cloudinit import util
1778
1779 import base64
1780 import copy
1781 import json
1782-import mock
1783 from textwrap import dedent
1784
1785+mock = test_helpers.mock
1786+
1787 D_PATH = "cloudinit.sources.DataSourceIBMCloud."
1788
1789
1790@@ -309,4 +312,39 @@ class TestIsIBMProvisioning(test_helpers.FilesystemMockingTestCase):
1791 self.assertIn("no reference file", self.logs.getvalue())
1792
1793
1794+class TestDataSourceIBMCloud(test_helpers.CiTestCase):
1795+
1796+ def setUp(self):
1797+ super(TestDataSourceIBMCloud, self).setUp()
1798+ self.tmp = self.tmp_dir()
1799+ self.cloud_dir = self.tmp_path('cloud', dir=self.tmp)
1800+ util.ensure_dir(self.cloud_dir)
1801+ paths = Paths({'run_dir': self.tmp, 'cloud_dir': self.cloud_dir})
1802+ self.ds = ibm.DataSourceIBMCloud(
1803+ sys_cfg={}, distro=None, paths=paths)
1804+
1805+ def test_get_data_false(self):
1806+ """When read_md returns None, get_data returns False."""
1807+ with mock.patch(D_PATH + 'read_md', return_value=None):
1808+ self.assertFalse(self.ds.get_data())
1809+
1810+ def test_get_data_processes_read_md(self):
1811+ """get_data processes and caches content returned by read_md."""
1812+ md = {
1813+ 'metadata': {}, 'networkdata': 'net', 'platform': 'plat',
1814+ 'source': 'src', 'system-uuid': 'uuid', 'userdata': 'ud',
1815+ 'vendordata': 'vd'}
1816+ with mock.patch(D_PATH + 'read_md', return_value=md):
1817+ self.assertTrue(self.ds.get_data())
1818+ self.assertEqual('src', self.ds.source)
1819+ self.assertEqual('plat', self.ds.platform)
1820+ self.assertEqual({}, self.ds.metadata)
1821+ self.assertEqual('ud', self.ds.userdata_raw)
1822+ self.assertEqual('net', self.ds.network_json)
1823+ self.assertEqual('vd', self.ds.vendordata_pure)
1824+ self.assertEqual('uuid', self.ds.system_uuid)
1825+ self.assertEqual('ibmcloud', self.ds.cloud_name)
1826+ self.assertEqual('ibmcloud', self.ds.platform_type)
1827+ self.assertEqual('plat (src)', self.ds.subplatform)
1828+
1829 # vi: ts=4 expandtab
1830diff --git a/tests/unittests/test_datasource/test_nocloud.py b/tests/unittests/test_datasource/test_nocloud.py
1831index 21931eb..b6468b6 100644
1832--- a/tests/unittests/test_datasource/test_nocloud.py
1833+++ b/tests/unittests/test_datasource/test_nocloud.py
1834@@ -10,6 +10,7 @@ import textwrap
1835 import yaml
1836
1837
1838+@mock.patch('cloudinit.sources.DataSourceNoCloud.util.is_lxd')
1839 class TestNoCloudDataSource(CiTestCase):
1840
1841 def setUp(self):
1842@@ -28,10 +29,11 @@ class TestNoCloudDataSource(CiTestCase):
1843 self.mocks.enter_context(
1844 mock.patch.object(util, 'read_dmi_data', return_value=None))
1845
1846- def test_nocloud_seed_dir(self):
1847+ def test_nocloud_seed_dir_on_lxd(self, m_is_lxd):
1848 md = {'instance-id': 'IID', 'dsmode': 'local'}
1849 ud = b"USER_DATA_HERE"
1850- populate_dir(os.path.join(self.paths.seed_dir, "nocloud"),
1851+ seed_dir = os.path.join(self.paths.seed_dir, "nocloud")
1852+ populate_dir(seed_dir,
1853 {'user-data': ud, 'meta-data': yaml.safe_dump(md)})
1854
1855 sys_cfg = {
1856@@ -44,9 +46,32 @@ class TestNoCloudDataSource(CiTestCase):
1857 ret = dsrc.get_data()
1858 self.assertEqual(dsrc.userdata_raw, ud)
1859 self.assertEqual(dsrc.metadata, md)
1860+ self.assertEqual(dsrc.platform_type, 'lxd')
1861+ self.assertEqual(
1862+ dsrc.subplatform, 'seed-dir (%s)' % seed_dir)
1863 self.assertTrue(ret)
1864
1865- def test_fs_label(self):
1866+ def test_nocloud_seed_dir_non_lxd_platform_is_nocloud(self, m_is_lxd):
1867+ """Non-lxd environments will list nocloud as the platform."""
1868+ m_is_lxd.return_value = False
1869+ md = {'instance-id': 'IID', 'dsmode': 'local'}
1870+ seed_dir = os.path.join(self.paths.seed_dir, "nocloud")
1871+ populate_dir(seed_dir,
1872+ {'user-data': '', 'meta-data': yaml.safe_dump(md)})
1873+
1874+ sys_cfg = {
1875+ 'datasource': {'NoCloud': {'fs_label': None}}
1876+ }
1877+
1878+ ds = DataSourceNoCloud.DataSourceNoCloud
1879+
1880+ dsrc = ds(sys_cfg=sys_cfg, distro=None, paths=self.paths)
1881+ self.assertTrue(dsrc.get_data())
1882+ self.assertEqual(dsrc.platform_type, 'nocloud')
1883+ self.assertEqual(
1884+ dsrc.subplatform, 'seed-dir (%s)' % seed_dir)
1885+
1886+ def test_fs_label(self, m_is_lxd):
1887 # find_devs_with should not be called ff fs_label is None
1888 ds = DataSourceNoCloud.DataSourceNoCloud
1889
1890@@ -68,7 +93,7 @@ class TestNoCloudDataSource(CiTestCase):
1891 ret = dsrc.get_data()
1892 self.assertFalse(ret)
1893
1894- def test_no_datasource_expected(self):
1895+ def test_no_datasource_expected(self, m_is_lxd):
1896 # no source should be found if no cmdline, config, and fs_label=None
1897 sys_cfg = {'datasource': {'NoCloud': {'fs_label': None}}}
1898
1899@@ -76,7 +101,7 @@ class TestNoCloudDataSource(CiTestCase):
1900 dsrc = ds(sys_cfg=sys_cfg, distro=None, paths=self.paths)
1901 self.assertFalse(dsrc.get_data())
1902
1903- def test_seed_in_config(self):
1904+ def test_seed_in_config(self, m_is_lxd):
1905 ds = DataSourceNoCloud.DataSourceNoCloud
1906
1907 data = {
1908@@ -92,7 +117,7 @@ class TestNoCloudDataSource(CiTestCase):
1909 self.assertEqual(dsrc.metadata.get('instance-id'), 'IID')
1910 self.assertTrue(ret)
1911
1912- def test_nocloud_seed_with_vendordata(self):
1913+ def test_nocloud_seed_with_vendordata(self, m_is_lxd):
1914 md = {'instance-id': 'IID', 'dsmode': 'local'}
1915 ud = b"USER_DATA_HERE"
1916 vd = b"THIS IS MY VENDOR_DATA"
1917@@ -114,7 +139,7 @@ class TestNoCloudDataSource(CiTestCase):
1918 self.assertEqual(dsrc.vendordata_raw, vd)
1919 self.assertTrue(ret)
1920
1921- def test_nocloud_no_vendordata(self):
1922+ def test_nocloud_no_vendordata(self, m_is_lxd):
1923 populate_dir(os.path.join(self.paths.seed_dir, "nocloud"),
1924 {'user-data': b"ud", 'meta-data': "instance-id: IID\n"})
1925
1926@@ -128,7 +153,7 @@ class TestNoCloudDataSource(CiTestCase):
1927 self.assertFalse(dsrc.vendordata)
1928 self.assertTrue(ret)
1929
1930- def test_metadata_network_interfaces(self):
1931+ def test_metadata_network_interfaces(self, m_is_lxd):
1932 gateway = "103.225.10.1"
1933 md = {
1934 'instance-id': 'i-abcd',
1935@@ -157,7 +182,7 @@ class TestNoCloudDataSource(CiTestCase):
1936 # very simple check just for the strings above
1937 self.assertIn(gateway, str(dsrc.network_config))
1938
1939- def test_metadata_network_config(self):
1940+ def test_metadata_network_config(self, m_is_lxd):
1941 # network-config needs to get into network_config
1942 netconf = {'version': 1,
1943 'config': [{'type': 'physical', 'name': 'interface0',
1944@@ -177,7 +202,7 @@ class TestNoCloudDataSource(CiTestCase):
1945 self.assertTrue(ret)
1946 self.assertEqual(netconf, dsrc.network_config)
1947
1948- def test_metadata_network_config_over_interfaces(self):
1949+ def test_metadata_network_config_over_interfaces(self, m_is_lxd):
1950 # network-config should override meta-data/network-interfaces
1951 gateway = "103.225.10.1"
1952 md = {
1953diff --git a/tests/unittests/test_datasource/test_opennebula.py b/tests/unittests/test_datasource/test_opennebula.py
1954index 6159101..bb399f6 100644
1955--- a/tests/unittests/test_datasource/test_opennebula.py
1956+++ b/tests/unittests/test_datasource/test_opennebula.py
1957@@ -123,6 +123,10 @@ class TestOpenNebulaDataSource(CiTestCase):
1958 self.assertTrue(ret)
1959 finally:
1960 util.find_devs_with = orig_find_devs_with
1961+ self.assertEqual('opennebula', dsrc.cloud_name)
1962+ self.assertEqual('opennebula', dsrc.platform_type)
1963+ self.assertEqual(
1964+ 'seed-dir (%s/seed/opennebula)' % self.tmp, dsrc.subplatform)
1965
1966 def test_seed_dir_non_contextdisk(self):
1967 self.assertRaises(ds.NonContextDiskDir, ds.read_context_disk_dir,
1968diff --git a/tests/unittests/test_datasource/test_ovf.py b/tests/unittests/test_datasource/test_ovf.py
1969index 9d52eb9..a226c03 100644
1970--- a/tests/unittests/test_datasource/test_ovf.py
1971+++ b/tests/unittests/test_datasource/test_ovf.py
1972@@ -11,7 +11,7 @@ from collections import OrderedDict
1973 from textwrap import dedent
1974
1975 from cloudinit import util
1976-from cloudinit.tests.helpers import CiTestCase, wrap_and_call
1977+from cloudinit.tests.helpers import CiTestCase, mock, wrap_and_call
1978 from cloudinit.helpers import Paths
1979 from cloudinit.sources import DataSourceOVF as dsovf
1980 from cloudinit.sources.helpers.vmware.imc.config_custom_script import (
1981@@ -120,7 +120,7 @@ class TestDatasourceOVF(CiTestCase):
1982
1983 def test_get_data_false_on_none_dmi_data(self):
1984 """When dmi for system-product-name is None, get_data returns False."""
1985- paths = Paths({'seed_dir': self.tdir})
1986+ paths = Paths({'cloud_dir': self.tdir})
1987 ds = self.datasource(sys_cfg={}, distro={}, paths=paths)
1988 retcode = wrap_and_call(
1989 'cloudinit.sources.DataSourceOVF',
1990@@ -134,7 +134,7 @@ class TestDatasourceOVF(CiTestCase):
1991
1992 def test_get_data_no_vmware_customization_disabled(self):
1993 """When vmware customization is disabled via sys_cfg log a message."""
1994- paths = Paths({'seed_dir': self.tdir})
1995+ paths = Paths({'cloud_dir': self.tdir})
1996 ds = self.datasource(
1997 sys_cfg={'disable_vmware_customization': True}, distro={},
1998 paths=paths)
1999@@ -153,7 +153,7 @@ class TestDatasourceOVF(CiTestCase):
2000 """When cloud-init workflow for vmware is enabled via sys_cfg log a
2001 message.
2002 """
2003- paths = Paths({'seed_dir': self.tdir})
2004+ paths = Paths({'cloud_dir': self.tdir})
2005 ds = self.datasource(
2006 sys_cfg={'disable_vmware_customization': False}, distro={},
2007 paths=paths)
2008@@ -178,6 +178,50 @@ class TestDatasourceOVF(CiTestCase):
2009 self.assertIn('Script %s not found!!' % customscript,
2010 str(context.exception))
2011
2012+ def test_get_data_non_vmware_seed_platform_info(self):
2013+ """Platform info properly reports when on non-vmware platforms."""
2014+ paths = Paths({'cloud_dir': self.tdir, 'run_dir': self.tdir})
2015+ # Write ovf-env.xml seed file
2016+ seed_dir = self.tmp_path('seed', dir=self.tdir)
2017+ ovf_env = self.tmp_path('ovf-env.xml', dir=seed_dir)
2018+ util.write_file(ovf_env, OVF_ENV_CONTENT)
2019+ ds = self.datasource(sys_cfg={}, distro={}, paths=paths)
2020+
2021+ self.assertEqual('ovf', ds.cloud_name)
2022+ self.assertEqual('ovf', ds.platform_type)
2023+ MPATH = 'cloudinit.sources.DataSourceOVF.'
2024+ with mock.patch(MPATH + 'util.read_dmi_data', return_value='!VMware'):
2025+ with mock.patch(MPATH + 'transport_vmware_guestd') as m_guestd:
2026+ with mock.patch(MPATH + 'transport_iso9660') as m_iso9660:
2027+ m_iso9660.return_value = (None, 'ignored', 'ignored')
2028+ m_guestd.return_value = (None, 'ignored', 'ignored')
2029+ self.assertTrue(ds.get_data())
2030+ self.assertEqual(
2031+ 'ovf (%s/seed/ovf-env.xml)' % self.tdir,
2032+ ds.subplatform)
2033+
2034+ def test_get_data_vmware_seed_platform_info(self):
2035+ """Platform info properly reports when on VMware platform."""
2036+ paths = Paths({'cloud_dir': self.tdir, 'run_dir': self.tdir})
2037+ # Write ovf-env.xml seed file
2038+ seed_dir = self.tmp_path('seed', dir=self.tdir)
2039+ ovf_env = self.tmp_path('ovf-env.xml', dir=seed_dir)
2040+ util.write_file(ovf_env, OVF_ENV_CONTENT)
2041+ ds = self.datasource(sys_cfg={}, distro={}, paths=paths)
2042+
2043+ self.assertEqual('ovf', ds.cloud_name)
2044+ self.assertEqual('ovf', ds.platform_type)
2045+ MPATH = 'cloudinit.sources.DataSourceOVF.'
2046+ with mock.patch(MPATH + 'util.read_dmi_data', return_value='VMWare'):
2047+ with mock.patch(MPATH + 'transport_vmware_guestd') as m_guestd:
2048+ with mock.patch(MPATH + 'transport_iso9660') as m_iso9660:
2049+ m_iso9660.return_value = (None, 'ignored', 'ignored')
2050+ m_guestd.return_value = (None, 'ignored', 'ignored')
2051+ self.assertTrue(ds.get_data())
2052+ self.assertEqual(
2053+ 'vmware (%s/seed/ovf-env.xml)' % self.tdir,
2054+ ds.subplatform)
2055+
2056
2057 class TestTransportIso9660(CiTestCase):
2058
2059diff --git a/tests/unittests/test_datasource/test_smartos.py b/tests/unittests/test_datasource/test_smartos.py
2060index 46d67b9..42ac697 100644
2061--- a/tests/unittests/test_datasource/test_smartos.py
2062+++ b/tests/unittests/test_datasource/test_smartos.py
2063@@ -426,6 +426,13 @@ class TestSmartOSDataSource(FilesystemMockingTestCase):
2064 self.assertEqual(MOCK_RETURNS['sdc:uuid'],
2065 dsrc.metadata['instance-id'])
2066
2067+ def test_platform_info(self):
2068+ """All platform-related attributes are properly set."""
2069+ dsrc = self._get_ds(mockdata=MOCK_RETURNS)
2070+ self.assertEqual('joyent', dsrc.cloud_name)
2071+ self.assertEqual('joyent', dsrc.platform_type)
2072+ self.assertEqual('serial (/dev/ttyS1)', dsrc.subplatform)
2073+
2074 def test_root_keys(self):
2075 dsrc = self._get_ds(mockdata=MOCK_RETURNS)
2076 ret = dsrc.get_data()
