Merge ~smoser/cloud-init:feature/oracle-datasource into cloud-init:master
- Git
- lp:~smoser/cloud-init
- feature/oracle-datasource
- Merge into master
Status: | Merged |
---|---|
Approved by: | Chad Smith |
Approved revision: | 63977d0865e81c5ee140057d018bf0ba7c3a6c3c |
Merge reported by: | Server Team CI bot |
Merged at revision: | not available |
Proposed branch: | ~smoser/cloud-init:feature/oracle-datasource |
Merge into: | cloud-init:master |
Diff against target: |
941 lines (+650/-23) 15 files modified
.pylintrc (+2/-1)
cloudinit/apport.py (+1/-0)
cloudinit/settings.py (+1/-0)
cloudinit/sources/DataSourceIBMCloud.py (+5/-8)
cloudinit/sources/DataSourceOpenStack.py (+8/-4)
cloudinit/sources/DataSourceOracle.py (+233/-0)
cloudinit/sources/__init__.py (+4/-0)
cloudinit/sources/helpers/openstack.py (+2/-4)
cloudinit/sources/tests/test_oracle.py (+331/-0)
doc/rtd/topics/datasources.rst (+1/-0)
doc/rtd/topics/datasources/oracle.rst (+26/-0)
tests/unittests/test_datasource/test_common.py (+2/-0)
tests/unittests/test_datasource/test_openstack.py (+8/-5)
tests/unittests/test_ds_identify.py (+19/-0)
tools/ds-identify (+7/-1) |
Related bugs: |
Reviewer | Review Type | Date Requested | Status |
---|---|---|---|
Server Team CI bot | continuous-integration | Approve | |
Chad Smith | Approve | ||
Robert C Jennings | Pending | ||
Review via email: mp+352921@code.launchpad.net |
Commit message
Add datasource Oracle Compute Infrastructure (OCI).
This adds an Oracle-specific datasource that functions on OCI.
It is a simplified version of the OpenStack metadata server
with support for vendor-data.
It does not support the OCI-C (classic) platform.
This also moves BrokenMetadata into the common 'sources' module,
as this was the third occurrence of that class.
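The consolidation can be sketched as follows. BrokenMetadata itself is copied from the new cloudinit/sources/__init__.py in the preview diff; parse_meta_data is a hypothetical caller, added only to illustrate the shared usage pattern:

```python
import json

# The consolidated exception, as it lands in cloudinit/sources/__init__.py
# (previously the IBMCloud datasource and the OpenStack helpers each
# defined their own private copy).
class BrokenMetadata(IOError):
    pass

# Hypothetical caller: datasources now raise the one shared class
# instead of a per-module duplicate.
def parse_meta_data(raw):
    try:
        return json.loads(raw)
    except ValueError as e:
        raise BrokenMetadata("Failed decoding meta_data.json: %s" % e)
```

With one shared class, callers such as DataSourceIBMCloud's read_md can catch sources.BrokenMetadata regardless of which datasource raised it.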
Description of the change
see commit message
Scott Moser (smoser) wrote : | # |
Server Team CI bot (server-team-bot) wrote : | # |
FAILED: Continuous integration, rev:cdd3007c930
https:/
Executed test runs:
SUCCESS: Checkout
FAILED: Unit & Style Tests
Click here to trigger a rebuild:
https:/
Server Team CI bot (server-team-bot) wrote : | # |
FAILED: Continuous integration, rev:bd1daf2b02a
https:/
Executed test runs:
SUCCESS: Checkout
FAILED: Unit & Style Tests
Click here to trigger a rebuild:
https:/
Server Team CI bot (server-team-bot) wrote : | # |
FAILED: Continuous integration, rev:1f7047ece0f
https:/
Executed test runs:
SUCCESS: Checkout
FAILED: Unit & Style Tests
Click here to trigger a rebuild:
https:/
Server Team CI bot (server-team-bot) wrote : | # |
FAILED: Continuous integration, rev:88ed915ba6a
https:/
Executed test runs:
SUCCESS: Checkout
FAILED: Unit & Style Tests
Click here to trigger a rebuild:
https:/
Server Team CI bot (server-team-bot) wrote : | # |
FAILED: Continuous integration, rev:0df68733f15
https:/
Executed test runs:
SUCCESS: Checkout
FAILED: Unit & Style Tests
Click here to trigger a rebuild:
https:/
Scott Moser (smoser) wrote : | # |
I do expect to add a few more tests of read_metadata.
Server Team CI bot (server-team-bot) wrote : | # |
PASSED: Continuous integration, rev:b0e84d5f2a8
https:/
Executed test runs:
SUCCESS: Checkout
SUCCESS: Unit & Style Tests
SUCCESS: Ubuntu LTS: Build
SUCCESS: Ubuntu LTS: Integration
SUCCESS: MAAS Compatibility Testing
IN_PROGRESS: Declarative: Post Actions
Click here to trigger a rebuild:
https:/
Server Team CI bot (server-team-bot) wrote : | # |
PASSED: Continuous integration, rev:4a18b49a3e3
https:/
Executed test runs:
SUCCESS: Checkout
SUCCESS: Unit & Style Tests
SUCCESS: Ubuntu LTS: Build
SUCCESS: Ubuntu LTS: Integration
SUCCESS: MAAS Compatibility Testing
IN_PROGRESS: Declarative: Post Actions
Click here to trigger a rebuild:
https:/
Server Team CI bot (server-team-bot) wrote : | # |
PASSED: Continuous integration, rev:06eb55e43fa
https:/
Executed test runs:
SUCCESS: Checkout
SUCCESS: Unit & Style Tests
SUCCESS: Ubuntu LTS: Build
SUCCESS: Ubuntu LTS: Integration
SUCCESS: MAAS Compatibility Testing
IN_PROGRESS: Declarative: Post Actions
Click here to trigger a rebuild:
https:/
Server Team CI bot (server-team-bot) wrote : | # |
PASSED: Continuous integration, rev:96fc90aa9f7
https:/
Executed test runs:
SUCCESS: Checkout
SUCCESS: Unit & Style Tests
SUCCESS: Ubuntu LTS: Build
SUCCESS: Ubuntu LTS: Integration
SUCCESS: MAAS Compatibility Testing
IN_PROGRESS: Declarative: Post Actions
Click here to trigger a rebuild:
https:/
Ryan Harper (raharper) wrote : | # |
Generally looks good to go. A couple of questions inline.
How do you see the upgrade from previous DS to the new DS is going to work?
Do we have a unittest that would cover the transition of one DS to another?
Server Team CI bot (server-team-bot) wrote : | # |
PASSED: Continuous integration, rev:cb896e9a6c6
https:/
Executed test runs:
SUCCESS: Checkout
SUCCESS: Unit & Style Tests
SUCCESS: Ubuntu LTS: Build
SUCCESS: Ubuntu LTS: Integration
SUCCESS: MAAS Compatibility Testing
IN_PROGRESS: Declarative: Post Actions
Click here to trigger a rebuild:
https:/
Scott Moser (smoser) : | # |
Server Team CI bot (server-team-bot) wrote : | # |
FAILED: Continuous integration, rev:cffbd7f4033
https:/
Executed test runs:
SUCCESS: Checkout
FAILED: Unit & Style Tests
Click here to trigger a rebuild:
https:/
Server Team CI bot (server-team-bot) wrote : | # |
PASSED: Continuous integration, rev:fe870c5340f
https:/
Executed test runs:
SUCCESS: Checkout
SUCCESS: Unit & Style Tests
SUCCESS: Ubuntu LTS: Build
SUCCESS: Ubuntu LTS: Integration
SUCCESS: MAAS Compatibility Testing
IN_PROGRESS: Declarative: Post Actions
Click here to trigger a rebuild:
https:/
Scott Moser (smoser) wrote : | # |
Addressed some comments; responses inline.
In response:
> How do you see the upgrade from previous DS to the new DS is going to work?
This is a useful thought, thank you for raising it.
As it is right now we have the following scenarios in which cloud-init will run on Oracle with the OpenStack metadata service:
1.) ds-identify disabled (xenial)
a. datasource_
b. default datasource_list defined in /etc/cloud/
c. no datasource_list defined anywhere.
2.) ds-identify enabled and datasource_list configured to have only one entry ('OpenStack').
The behavior is then defined as:
* 1.a, 1.b, 2: In these scenarios, nothing will automatically update the datasource list to include Oracle, and as such the OpenStack datasource will be considered and should claim itself as found (and hopefully non-new).
* 1.c: This would be where someone installed from source or commented out the lines in /etc/cloud/
> Do we have a unittest that would cover the transition of one DS to another?
No... and as described above, the goal is really not to transition.
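A minimal sketch of the guard this relies on, condensed from the DataSourceOpenStack.py change in the preview diff. The helper names mirror the real ones, but the functions here are simplified stand-ins (the real _is_platform_viable reads the DMI chassis-asset-tag via util.read_dmi_data, and the real detect_openstack also checks DMI product names and the init process environment):

```python
CHASSIS_ASSET_TAG = "OracleCloud.com"

def _is_platform_viable(asset_tag):
    # Stand-in: the real helper reads /sys DMI data instead of a parameter.
    return asset_tag == CHASSIS_ASSET_TAG

def detect_openstack(accept_oracle, asset_tag):
    # Condensed to the Oracle branch only.
    return accept_oracle and _is_platform_viable(asset_tag)

def openstack_claims(sys_cfg, asset_tag):
    # Mirrors DataSourceOpenStack._get_data: OpenStack only claims an
    # Oracle platform when 'Oracle' is absent from datasource_list.
    oracle_considered = 'Oracle' in sys_cfg.get('datasource_list', [])
    return detect_openstack(not oracle_considered, asset_tag)
```

So a system upgrading with only 'OpenStack' in its configured datasource_list keeps working unchanged, while a fresh install using the new builtin list lets the Oracle datasource claim the platform instead.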
Server Team CI bot (server-team-bot) wrote : | # |
FAILED: Continuous integration, rev:63e24ea8664
https:/
Executed test runs:
SUCCESS: Checkout
FAILED: Unit & Style Tests
Click here to trigger a rebuild:
https:/
Ryan Harper (raharper) wrote : | # |
Thanks for the response.
IIUC, the transition to the Oracle DS is really for new instances only, and the change is really in the name rather than something like the Oracle DS pointing to a different metadata URL or having a different format.
If I've got that correct, then the transition isn't a big deal, since either the Oracle DS or the OpenStack DS will function. The biggest win is that changes to the OpenStack datasource will no longer impact the Oracle DS directly.
I'm fine with the other responses inline as well.
Server Team CI bot (server-team-bot) wrote : | # |
FAILED: Continuous integration, rev:ad8ae1c2d57
https:/
Executed test runs:
SUCCESS: Checkout
FAILED: Unit & Style Tests
Click here to trigger a rebuild:
https:/
Server Team CI bot (server-team-bot) wrote : | # |
FAILED: Continuous integration, rev:4bfa3c88ee2
https:/
Executed test runs:
SUCCESS: Checkout
FAILED: Unit & Style Tests
Click here to trigger a rebuild:
https:/
Server Team CI bot (server-team-bot) wrote : | # |
FAILED: Continuous integration, rev:7c58c09c19f
https:/
Executed test runs:
SUCCESS: Checkout
FAILED: Unit & Style Tests
Click here to trigger a rebuild:
https:/
Chad Smith (chad.smith) wrote : | # |
looks good, just one major concern about ds detection ordering vs DataSourceOpenS
Please add datasource rtd doc.
Chad Smith (chad.smith) : | # |
Server Team CI bot (server-team-bot) wrote : | # |
FAILED: Continuous integration, rev:7194a14775c
https:/
Executed test runs:
SUCCESS: Checkout
FAILED: Unit & Style Tests
Click here to trigger a rebuild:
https:/
Server Team CI bot (server-team-bot) wrote : | # |
PASSED: Continuous integration, rev:8b7d02a43e7
https:/
Executed test runs:
SUCCESS: Checkout
SUCCESS: Unit & Style Tests
SUCCESS: Ubuntu LTS: Build
SUCCESS: Ubuntu LTS: Integration
IN_PROGRESS: Declarative: Post Actions
Click here to trigger a rebuild:
https:/
Server Team CI bot (server-team-bot) wrote : | # |
FAILED: Continuous integration, rev:63977d0865e
https:/
Executed test runs:
SUCCESS: Checkout
FAILED: Unit & Style Tests
Click here to trigger a rebuild:
https:/
Preview Diff
1 | diff --git a/.pylintrc b/.pylintrc |
2 | index 3bfa0c8..e376b48 100644 |
3 | --- a/.pylintrc |
4 | +++ b/.pylintrc |
5 | @@ -61,7 +61,8 @@ ignored-modules= |
6 | # List of class names for which member attributes should not be checked (useful |
7 | # for classes with dynamically set attributes). This supports the use of |
8 | # qualified names. |
9 | -ignored-classes=optparse.Values,thread._local |
10 | +# argparse.Namespace from https://github.com/PyCQA/pylint/issues/2413 |
11 | +ignored-classes=argparse.Namespace,optparse.Values,thread._local |
12 | |
13 | # List of members which are set dynamically and missed by pylint inference |
14 | # system, and so shouldn't trigger E1101 when accessed. Python regular |
15 | diff --git a/cloudinit/apport.py b/cloudinit/apport.py |
16 | index 130ff26..22cb7fd 100644 |
17 | --- a/cloudinit/apport.py |
18 | +++ b/cloudinit/apport.py |
19 | @@ -30,6 +30,7 @@ KNOWN_CLOUD_NAMES = [ |
20 | 'NoCloud', |
21 | 'OpenNebula', |
22 | 'OpenStack', |
23 | + 'Oracle', |
24 | 'OVF', |
25 | 'OpenTelekomCloud', |
26 | 'Scaleway', |
27 | diff --git a/cloudinit/settings.py b/cloudinit/settings.py |
28 | index dde5749..ea367cb 100644 |
29 | --- a/cloudinit/settings.py |
30 | +++ b/cloudinit/settings.py |
31 | @@ -38,6 +38,7 @@ CFG_BUILTIN = { |
32 | 'Scaleway', |
33 | 'Hetzner', |
34 | 'IBMCloud', |
35 | + 'Oracle', |
36 | # At the end to act as a 'catch' when none of the above work... |
37 | 'None', |
38 | ], |
39 | diff --git a/cloudinit/sources/DataSourceIBMCloud.py b/cloudinit/sources/DataSourceIBMCloud.py |
40 | index 01106ec..a535814 100644 |
41 | --- a/cloudinit/sources/DataSourceIBMCloud.py |
42 | +++ b/cloudinit/sources/DataSourceIBMCloud.py |
43 | @@ -295,7 +295,7 @@ def read_md(): |
44 | results = metadata_from_dir(path) |
45 | else: |
46 | results = util.mount_cb(path, metadata_from_dir) |
47 | - except BrokenMetadata as e: |
48 | + except sources.BrokenMetadata as e: |
49 | raise RuntimeError( |
50 | "Failed reading IBM config disk (platform=%s path=%s): %s" % |
51 | (platform, path, e)) |
52 | @@ -304,10 +304,6 @@ def read_md(): |
53 | return ret |
54 | |
55 | |
56 | -class BrokenMetadata(IOError): |
57 | - pass |
58 | - |
59 | - |
60 | def metadata_from_dir(source_dir): |
61 | """Walk source_dir extracting standardized metadata. |
62 | |
63 | @@ -352,12 +348,13 @@ def metadata_from_dir(source_dir): |
64 | try: |
65 | data = transl(raw) |
66 | except Exception as e: |
67 | - raise BrokenMetadata("Failed decoding %s: %s" % (path, e)) |
68 | + raise sources.BrokenMetadata( |
69 | + "Failed decoding %s: %s" % (path, e)) |
70 | |
71 | results[name] = data |
72 | |
73 | if results.get('metadata_raw') is None: |
74 | - raise BrokenMetadata( |
75 | + raise sources.BrokenMetadata( |
76 | "%s missing required file 'meta_data.json'" % source_dir) |
77 | |
78 | results['metadata'] = {} |
79 | @@ -368,7 +365,7 @@ def metadata_from_dir(source_dir): |
80 | try: |
81 | md['random_seed'] = base64.b64decode(md_raw['random_seed']) |
82 | except (ValueError, TypeError) as e: |
83 | - raise BrokenMetadata( |
84 | + raise sources.BrokenMetadata( |
85 | "Badly formatted metadata random_seed entry: %s" % e) |
86 | |
87 | renames = ( |
88 | diff --git a/cloudinit/sources/DataSourceOpenStack.py b/cloudinit/sources/DataSourceOpenStack.py |
89 | index b9ade90..4a01524 100644 |
90 | --- a/cloudinit/sources/DataSourceOpenStack.py |
91 | +++ b/cloudinit/sources/DataSourceOpenStack.py |
92 | @@ -13,6 +13,7 @@ from cloudinit import url_helper |
93 | from cloudinit import util |
94 | |
95 | from cloudinit.sources.helpers import openstack |
96 | +from cloudinit.sources import DataSourceOracle as oracle |
97 | |
98 | LOG = logging.getLogger(__name__) |
99 | |
100 | @@ -28,8 +29,7 @@ DMI_PRODUCT_NOVA = 'OpenStack Nova' |
101 | DMI_PRODUCT_COMPUTE = 'OpenStack Compute' |
102 | VALID_DMI_PRODUCT_NAMES = [DMI_PRODUCT_NOVA, DMI_PRODUCT_COMPUTE] |
103 | DMI_ASSET_TAG_OPENTELEKOM = 'OpenTelekomCloud' |
104 | -DMI_ASSET_TAG_ORACLE_CLOUD = 'OracleCloud.com' |
105 | -VALID_DMI_ASSET_TAGS = [DMI_ASSET_TAG_OPENTELEKOM, DMI_ASSET_TAG_ORACLE_CLOUD] |
106 | +VALID_DMI_ASSET_TAGS = [DMI_ASSET_TAG_OPENTELEKOM] |
107 | |
108 | |
109 | class DataSourceOpenStack(openstack.SourceMixin, sources.DataSource): |
110 | @@ -122,8 +122,10 @@ class DataSourceOpenStack(openstack.SourceMixin, sources.DataSource): |
111 | False when unable to contact metadata service or when metadata |
112 | format is invalid or disabled. |
113 | """ |
114 | - if not detect_openstack(): |
115 | + oracle_considered = 'Oracle' in self.sys_cfg.get('datasource_list') |
116 | + if not detect_openstack(accept_oracle=not oracle_considered): |
117 | return False |
118 | + |
119 | if self.perform_dhcp_setup: # Setup networking in init-local stage. |
120 | try: |
121 | with EphemeralDHCPv4(self.fallback_interface): |
122 | @@ -215,7 +217,7 @@ def read_metadata_service(base_url, ssl_details=None, |
123 | return reader.read_v2() |
124 | |
125 | |
126 | -def detect_openstack(): |
127 | +def detect_openstack(accept_oracle=False): |
128 | """Return True when a potential OpenStack platform is detected.""" |
129 | if not util.is_x86(): |
130 | return True # Non-Intel cpus don't properly report dmi product names |
131 | @@ -224,6 +226,8 @@ def detect_openstack(): |
132 | return True |
133 | elif util.read_dmi_data('chassis-asset-tag') in VALID_DMI_ASSET_TAGS: |
134 | return True |
135 | + elif accept_oracle and oracle._is_platform_viable(): |
136 | + return True |
137 | elif util.get_proc_env(1).get('product_name') == DMI_PRODUCT_NOVA: |
138 | return True |
139 | return False |
140 | diff --git a/cloudinit/sources/DataSourceOracle.py b/cloudinit/sources/DataSourceOracle.py |
141 | new file mode 100644 |
142 | index 0000000..fab39af |
143 | --- /dev/null |
144 | +++ b/cloudinit/sources/DataSourceOracle.py |
145 | @@ -0,0 +1,233 @@ |
146 | +# This file is part of cloud-init. See LICENSE file for license information. |
147 | +"""Datasource for Oracle (OCI/Oracle Cloud Infrastructure) |
148 | + |
149 | +OCI provides a OpenStack like metadata service which provides only |
150 | +'2013-10-17' and 'latest' versions.. |
151 | + |
152 | +Notes: |
153 | + * This datasource does not support the OCI-Classic. OCI-Classic |
154 | + provides an EC2 lookalike metadata service. |
155 | + * The uuid provided in DMI data is not the same as the meta-data provided |
156 | + instance-id, but has an equivalent lifespan. |
157 | + * We do need to support upgrade from an instance that cloud-init |
158 | + identified as OpenStack. |
159 | + * Both bare-metal and vms use iscsi root |
160 | + * Both bare-metal and vms provide chassis-asset-tag of OracleCloud.com |
161 | +""" |
162 | + |
163 | +from cloudinit.url_helper import combine_url, readurl, UrlError |
164 | +from cloudinit.net import dhcp |
165 | +from cloudinit import net |
166 | +from cloudinit import sources |
167 | +from cloudinit import util |
168 | +from cloudinit.net import cmdline |
169 | +from cloudinit import log as logging |
170 | + |
171 | +import json |
172 | +import re |
173 | + |
174 | +LOG = logging.getLogger(__name__) |
175 | + |
176 | +CHASSIS_ASSET_TAG = "OracleCloud.com" |
177 | +METADATA_ENDPOINT = "http://169.254.169.254/openstack/" |
178 | + |
179 | + |
180 | +class DataSourceOracle(sources.DataSource): |
181 | + |
182 | + dsname = 'Oracle' |
183 | + system_uuid = None |
184 | + vendordata_pure = None |
185 | + _network_config = sources.UNSET |
186 | + |
187 | + def _is_platform_viable(self): |
188 | + """Check platform environment to report if this datasource may run.""" |
189 | + return _is_platform_viable() |
190 | + |
191 | + def _get_data(self): |
192 | + if not self._is_platform_viable(): |
193 | + return False |
194 | + |
195 | + # network may be configured if iscsi root. If that is the case |
196 | + # then read_kernel_cmdline_config will return non-None. |
197 | + if _is_iscsi_root(): |
198 | + data = self.crawl_metadata() |
199 | + else: |
200 | + with dhcp.EphemeralDHCPv4(net.find_fallback_nic()): |
201 | + data = self.crawl_metadata() |
202 | + |
203 | + self._crawled_metadata = data |
204 | + vdata = data['2013-10-17'] |
205 | + |
206 | + self.userdata_raw = vdata.get('user_data') |
207 | + self.system_uuid = vdata['system_uuid'] |
208 | + |
209 | + vd = vdata.get('vendor_data') |
210 | + if vd: |
211 | + self.vendordata_pure = vd |
212 | + try: |
213 | + self.vendordata_raw = sources.convert_vendordata(vd) |
214 | + except ValueError as e: |
215 | + LOG.warning("Invalid content in vendor-data: %s", e) |
216 | + self.vendordata_raw = None |
217 | + |
218 | + mdcopies = ('public_keys',) |
219 | + md = dict([(k, vdata['meta_data'].get(k)) |
220 | + for k in mdcopies if k in vdata['meta_data']]) |
221 | + |
222 | + mdtrans = ( |
223 | + # oracle meta_data.json name, cloudinit.datasource.metadata name |
224 | + ('availability_zone', 'availability-zone'), |
225 | + ('hostname', 'local-hostname'), |
226 | + ('launch_index', 'launch-index'), |
227 | + ('uuid', 'instance-id'), |
228 | + ) |
229 | + for dsname, ciname in mdtrans: |
230 | + if dsname in vdata['meta_data']: |
231 | + md[ciname] = vdata['meta_data'][dsname] |
232 | + |
233 | + self.metadata = md |
234 | + return True |
235 | + |
236 | + def crawl_metadata(self): |
237 | + return read_metadata() |
238 | + |
239 | + def check_instance_id(self, sys_cfg): |
240 | + """quickly check (local only) if self.instance_id is still valid |
241 | + |
242 | + On Oracle, the dmi-provided system uuid differs from the instance-id |
243 | + but has the same life-span.""" |
244 | + return sources.instance_id_matches_system_uuid(self.system_uuid) |
245 | + |
246 | + def get_public_ssh_keys(self): |
247 | + return sources.normalize_pubkey_data(self.metadata.get('public_keys')) |
248 | + |
249 | + @property |
250 | + def network_config(self): |
251 | + """Network config is read from initramfs provided files |
252 | + If none is present, then we fall back to fallback configuration. |
253 | + |
254 | + One thing to note here is that this method is not currently |
255 | + considered at all if there is is kernel/initramfs provided |
256 | + data. In that case, stages considers that the cmdline data |
257 | + overrides datasource provided data and does not consult here. |
258 | + |
259 | + We nonetheless return cmdline provided config if present |
260 | + and fallback to generate fallback.""" |
261 | + if self._network_config == sources.UNSET: |
262 | + cmdline_cfg = cmdline.read_kernel_cmdline_config() |
263 | + if cmdline_cfg: |
264 | + self._network_config = cmdline_cfg |
265 | + else: |
266 | + self._network_config = self.distro.generate_fallback_config() |
267 | + return self._network_config |
268 | + |
269 | + |
270 | +def _read_system_uuid(): |
271 | + sys_uuid = util.read_dmi_data('system-uuid') |
272 | + return None if sys_uuid is None else sys_uuid.lower() |
273 | + |
274 | + |
275 | +def _is_platform_viable(): |
276 | + asset_tag = util.read_dmi_data('chassis-asset-tag') |
277 | + return asset_tag == CHASSIS_ASSET_TAG |
278 | + |
279 | + |
280 | +def _is_iscsi_root(): |
281 | + return bool(cmdline.read_kernel_cmdline_config()) |
282 | + |
283 | + |
284 | +def _load_index(content): |
285 | + """Return a list entries parsed from content. |
286 | + |
287 | + OpenStack's metadata service returns a newline delimited list |
288 | + of items. Oracle's implementation has html formatted list of links. |
289 | + The parser here just grabs targets from <a href="target"> |
290 | + and throws away "../". |
291 | + |
292 | + Oracle has accepted that to be buggy and may fix in the future |
293 | + to instead return a '\n' delimited plain text list. This function |
294 | + will continue to work if that change is made.""" |
295 | + if not content.lower().startswith("<html>"): |
296 | + return content.splitlines() |
297 | + items = re.findall( |
298 | + r'href="(?P<target>[^"]*)"', content, re.MULTILINE | re.IGNORECASE) |
299 | + return [i for i in items if not i.startswith(".")] |
300 | + |
301 | + |
302 | +def read_metadata(endpoint_base=METADATA_ENDPOINT, sys_uuid=None, |
303 | + version='2013-10-17'): |
304 | + """Read metadata, return a dictionary. |
305 | + |
306 | + Each path listed in the index will be represented in the dictionary. |
307 | + If the path ends in .json, then the content will be decoded and |
308 | + populated into the dictionary. |
309 | + |
310 | + The system uuid (/sys/class/dmi/id/product_uuid) is also populated. |
311 | + Example: given paths = ('user_data', 'meta_data.json') |
312 | + This would return: |
313 | + {version: {'user_data': b'blob', 'meta_data': json.loads(blob.decode()) |
314 | + 'system_uuid': '3b54f2e0-3ab2-458d-b770-af9926eee3b2'}} |
315 | + """ |
316 | + endpoint = combine_url(endpoint_base, version) + "/" |
317 | + if sys_uuid is None: |
318 | + sys_uuid = _read_system_uuid() |
319 | + if not sys_uuid: |
320 | + raise sources.BrokenMetadata("Failed to read system uuid.") |
321 | + |
322 | + try: |
323 | + resp = readurl(endpoint) |
324 | + if not resp.ok(): |
325 | + raise sources.BrokenMetadata( |
326 | + "Bad response from %s: %s" % (endpoint, resp.code)) |
327 | + except UrlError as e: |
328 | + raise sources.BrokenMetadata( |
329 | + "Failed to read index at %s: %s" % (endpoint, e)) |
330 | + |
331 | + entries = _load_index(resp.contents.decode('utf-8')) |
332 | + LOG.debug("index url %s contained: %s", endpoint, entries) |
333 | + |
334 | + # meta_data.json is required. |
335 | + mdj = 'meta_data.json' |
336 | + if mdj not in entries: |
337 | + raise sources.BrokenMetadata( |
338 | + "Required field '%s' missing in index at %s" % (mdj, endpoint)) |
339 | + |
340 | + ret = {'system_uuid': sys_uuid} |
341 | + for path in entries: |
342 | + response = readurl(combine_url(endpoint, path)) |
343 | + if path.endswith(".json"): |
344 | + ret[path.rpartition(".")[0]] = ( |
345 | + json.loads(response.contents.decode('utf-8'))) |
346 | + else: |
347 | + ret[path] = response.contents |
348 | + |
349 | + return {version: ret} |
350 | + |
351 | + |
352 | +# Used to match classes to dependencies |
353 | +datasources = [ |
354 | + (DataSourceOracle, (sources.DEP_FILESYSTEM,)), |
355 | +] |
356 | + |
357 | + |
358 | +# Return a list of data sources that match this set of dependencies |
359 | +def get_datasource_list(depends): |
360 | + return sources.list_from_depends(depends, datasources) |
361 | + |
362 | + |
363 | +if __name__ == "__main__": |
364 | + import argparse |
365 | + import os |
366 | + |
367 | + parser = argparse.ArgumentParser(description='Query Oracle Cloud Metadata') |
368 | + parser.add_argument("--endpoint", metavar="URL", |
369 | + help="The url of the metadata service.", |
370 | + default=METADATA_ENDPOINT) |
371 | + args = parser.parse_args() |
372 | + sys_uuid = "uuid-not-available-not-root" if os.geteuid() != 0 else None |
373 | + |
374 | + data = read_metadata(endpoint_base=args.endpoint, sys_uuid=sys_uuid) |
375 | + data['is_platform_viable'] = _is_platform_viable() |
376 | + print(util.json_dumps(data)) |
377 | + |
378 | +# vi: ts=4 expandtab |
379 | diff --git a/cloudinit/sources/__init__.py b/cloudinit/sources/__init__.py |
380 | index 06e613f..41fde9b 100644 |
381 | --- a/cloudinit/sources/__init__.py |
382 | +++ b/cloudinit/sources/__init__.py |
383 | @@ -671,6 +671,10 @@ def convert_vendordata(data, recurse=True): |
384 | raise ValueError("Unknown data type for vendordata: %s" % type(data)) |
385 | |
386 | |
387 | +class BrokenMetadata(IOError): |
388 | + pass |
389 | + |
390 | + |
391 | # 'depends' is a list of dependencies (DEP_FILESYSTEM) |
392 | # ds_list is a list of 2 item lists |
393 | # ds_list = [ |
394 | diff --git a/cloudinit/sources/helpers/openstack.py b/cloudinit/sources/helpers/openstack.py |
395 | index a4cf066..8f9c144 100644 |
396 | --- a/cloudinit/sources/helpers/openstack.py |
397 | +++ b/cloudinit/sources/helpers/openstack.py |
398 | @@ -21,6 +21,8 @@ from cloudinit import sources |
399 | from cloudinit import url_helper |
400 | from cloudinit import util |
401 | |
402 | +from cloudinit.sources import BrokenMetadata |
403 | + |
404 | # See https://docs.openstack.org/user-guide/cli-config-drive.html |
405 | |
406 | LOG = logging.getLogger(__name__) |
407 | @@ -68,10 +70,6 @@ class NonReadable(IOError): |
408 | pass |
409 | |
410 | |
411 | -class BrokenMetadata(IOError): |
412 | - pass |
413 | - |
414 | - |
415 | class SourceMixin(object): |
416 | def _ec2_name_to_device(self, name): |
417 | if not self.ec2_metadata: |
418 | diff --git a/cloudinit/sources/tests/test_oracle.py b/cloudinit/sources/tests/test_oracle.py |
419 | new file mode 100644 |
420 | index 0000000..7599126 |
421 | --- /dev/null |
422 | +++ b/cloudinit/sources/tests/test_oracle.py |
423 | @@ -0,0 +1,331 @@ |
424 | +# This file is part of cloud-init. See LICENSE file for license information. |
425 | + |
426 | +from cloudinit.sources import DataSourceOracle as oracle |
427 | +from cloudinit.sources import BrokenMetadata |
428 | +from cloudinit import helpers |
429 | + |
430 | +from cloudinit.tests import helpers as test_helpers |
431 | + |
432 | +from textwrap import dedent |
433 | +import argparse |
434 | +import httpretty |
435 | +import json |
436 | +import mock |
437 | +import os |
438 | +import six |
439 | +import uuid |
440 | + |
441 | +DS_PATH = "cloudinit.sources.DataSourceOracle" |
442 | +MD_VER = "2013-10-17" |
443 | + |
444 | + |
445 | +class TestDataSourceOracle(test_helpers.CiTestCase): |
446 | + """Test datasource DataSourceOracle.""" |
447 | + |
448 | + ds_class = oracle.DataSourceOracle |
449 | + |
450 | + my_uuid = str(uuid.uuid4()) |
451 | + my_md = {"uuid": "ocid1.instance.oc1.phx.abyhqlj", |
452 | + "name": "ci-vm1", "availability_zone": "phx-ad-3", |
453 | + "hostname": "ci-vm1hostname", |
454 | + "launch_index": 0, "files": [], |
455 | + "public_keys": {"0": "ssh-rsa AAAAB3N...== user@host"}, |
456 | + "meta": {}} |
457 | + |
458 | + def _patch_instance(self, inst, patches): |
459 | + """Patch an instance of a class 'inst'. |
460 | + for each name, kwargs in patches: |
461 | + inst.name = mock.Mock(**kwargs) |
462 | + returns a namespace object that has |
463 | + namespace.name = mock.Mock(**kwargs) |
464 | + Do not bother with cleanup as instance is assumed transient.""" |
465 | + mocks = argparse.Namespace() |
466 | + for name, kwargs in patches.items(): |
467 | + imock = mock.Mock(name=name, spec=getattr(inst, name), **kwargs) |
468 | + setattr(mocks, name, imock) |
469 | + setattr(inst, name, imock) |
470 | + return mocks |
471 | + |
472 | + def _get_ds(self, sys_cfg=None, distro=None, paths=None, ud_proc=None, |
473 | + patches=None): |
474 | + if sys_cfg is None: |
475 | + sys_cfg = {} |
476 | + if patches is None: |
477 | + patches = {} |
478 | + if paths is None: |
479 | + tmpd = self.tmp_dir() |
480 | + dirs = {'cloud_dir': self.tmp_path('cloud_dir', tmpd), |
481 | + 'run_dir': self.tmp_path('run_dir')} |
482 | + for d in dirs.values(): |
483 | + os.mkdir(d) |
484 | + paths = helpers.Paths(dirs) |
485 | + |
486 | + ds = self.ds_class(sys_cfg=sys_cfg, distro=distro, |
487 | + paths=paths, ud_proc=ud_proc) |
488 | + |
489 | + return ds, self._patch_instance(ds, patches) |
490 | + |
491 | + def test_platform_not_viable_returns_false(self): |
492 | + ds, mocks = self._get_ds( |
493 | + patches={'_is_platform_viable': {'return_value': False}}) |
494 | + self.assertFalse(ds._get_data()) |
495 | + mocks._is_platform_viable.assert_called_once_with() |
496 | + |
497 | + @mock.patch(DS_PATH + "._is_iscsi_root", return_value=True) |
498 | + def test_without_userdata(self, m_is_iscsi_root): |
499 | + """If no user-data is provided, it should not be in return dict.""" |
500 | + ds, mocks = self._get_ds(patches={ |
501 | + '_is_platform_viable': {'return_value': True}, |
502 | + 'crawl_metadata': { |
503 | + 'return_value': { |
504 | + MD_VER: {'system_uuid': self.my_uuid, |
505 | + 'meta_data': self.my_md}}}}) |
506 | + self.assertTrue(ds._get_data()) |
507 | + mocks._is_platform_viable.assert_called_once_with() |
508 | + mocks.crawl_metadata.assert_called_once_with() |
509 | + self.assertEqual(self.my_uuid, ds.system_uuid) |
510 | + self.assertEqual(self.my_md['availability_zone'], ds.availability_zone) |
511 | + self.assertIn(self.my_md["public_keys"]["0"], ds.get_public_ssh_keys()) |
512 | + self.assertEqual(self.my_md['uuid'], ds.get_instance_id()) |
513 | + self.assertIsNone(ds.userdata_raw) |
514 | + |
515 | + @mock.patch(DS_PATH + "._is_iscsi_root", return_value=True) |
516 | + def test_with_vendordata(self, m_is_iscsi_root): |
517 | + """Test with vendor data.""" |
518 | + vd = {'cloud-init': '#cloud-config\nkey: value'} |
519 | + ds, mocks = self._get_ds(patches={ |
520 | + '_is_platform_viable': {'return_value': True}, |
521 | + 'crawl_metadata': { |
522 | + 'return_value': { |
523 | + MD_VER: {'system_uuid': self.my_uuid, |
524 | + 'meta_data': self.my_md, |
525 | + 'vendor_data': vd}}}}) |
526 | + self.assertTrue(ds._get_data()) |
527 | + mocks._is_platform_viable.assert_called_once_with() |
528 | + mocks.crawl_metadata.assert_called_once_with() |
529 | + self.assertEqual(vd, ds.vendordata_pure) |
530 | + self.assertEqual(vd['cloud-init'], ds.vendordata_raw) |
531 | + |
532 | + @mock.patch(DS_PATH + "._is_iscsi_root", return_value=True) |
533 | + def test_with_userdata(self, m_is_iscsi_root): |
534 | + """Ensure user-data is populated if present and is binary.""" |
535 | + my_userdata = b'abcdefg' |
536 | + ds, mocks = self._get_ds(patches={ |
537 | + '_is_platform_viable': {'return_value': True}, |
538 | + 'crawl_metadata': { |
539 | + 'return_value': { |
540 | + MD_VER: {'system_uuid': self.my_uuid, |
541 | + 'meta_data': self.my_md, |
542 | + 'user_data': my_userdata}}}}) |
543 | + self.assertTrue(ds._get_data()) |
544 | + mocks._is_platform_viable.assert_called_once_with() |
545 | + mocks.crawl_metadata.assert_called_once_with() |
546 | + self.assertEqual(self.my_uuid, ds.system_uuid) |
547 | + self.assertIn(self.my_md["public_keys"]["0"], ds.get_public_ssh_keys()) |
548 | + self.assertEqual(self.my_md['uuid'], ds.get_instance_id()) |
549 | + self.assertEqual(my_userdata, ds.userdata_raw) |
550 | + |
551 | + @mock.patch(DS_PATH + ".cmdline.read_kernel_cmdline_config") |
552 | + @mock.patch(DS_PATH + "._is_iscsi_root", return_value=True) |
553 | + def test_network_cmdline(self, m_is_iscsi_root, m_cmdline_config): |
554 | + """network_config should read kernel cmdline.""" |
555 | + distro = mock.MagicMock() |
556 | + ds, _ = self._get_ds(distro=distro, patches={ |
557 | + '_is_platform_viable': {'return_value': True}, |
558 | + 'crawl_metadata': { |
559 | + 'return_value': { |
560 | + MD_VER: {'system_uuid': self.my_uuid, |
561 | + 'meta_data': self.my_md}}}}) |
562 | + ncfg = {'version': 1, 'config': [{'a': 'b'}]} |
563 | + m_cmdline_config.return_value = ncfg |
564 | + self.assertTrue(ds._get_data()) |
565 | + self.assertEqual(ncfg, ds.network_config) |
566 | + m_cmdline_config.assert_called_once_with() |
567 | + self.assertFalse(distro.generate_fallback_config.called) |
568 | + |
569 | + @mock.patch(DS_PATH + ".cmdline.read_kernel_cmdline_config") |
570 | + @mock.patch(DS_PATH + "._is_iscsi_root", return_value=True) |
571 | + def test_network_fallback(self, m_is_iscsi_root, m_cmdline_config): |
572 | +        """Test that fallback network is generated if no kernel cmdline.""" |
573 | + distro = mock.MagicMock() |
574 | + ds, _ = self._get_ds(distro=distro, patches={ |
575 | + '_is_platform_viable': {'return_value': True}, |
576 | + 'crawl_metadata': { |
577 | + 'return_value': { |
578 | + MD_VER: {'system_uuid': self.my_uuid, |
579 | + 'meta_data': self.my_md}}}}) |
580 | + ncfg = {'version': 1, 'config': [{'a': 'b'}]} |
581 | + m_cmdline_config.return_value = None |
582 | + self.assertTrue(ds._get_data()) |
583 | + ncfg = {'version': 1, 'config': [{'distro1': 'value'}]} |
584 | + distro.generate_fallback_config.return_value = ncfg |
585 | + self.assertEqual(ncfg, ds.network_config) |
586 | + m_cmdline_config.assert_called_once_with() |
587 | + distro.generate_fallback_config.assert_called_once_with() |
588 | + self.assertEqual(1, m_cmdline_config.call_count) |
589 | + |
590 | + # test that the result got cached, and the methods not re-called. |
591 | + self.assertEqual(ncfg, ds.network_config) |
592 | + self.assertEqual(1, m_cmdline_config.call_count) |
593 | + |
594 | + |
595 | +@mock.patch(DS_PATH + "._read_system_uuid", return_value=str(uuid.uuid4())) |
596 | +class TestReadMetaData(test_helpers.HttprettyTestCase): |
597 | +    """Test read_metadata, which interacts with the http metadata service.""" |
598 | + |
599 | + mdurl = oracle.METADATA_ENDPOINT |
600 | + my_md = {"uuid": "ocid1.instance.oc1.phx.abyhqlj", |
601 | + "name": "ci-vm1", "availability_zone": "phx-ad-3", |
602 | + "hostname": "ci-vm1hostname", |
603 | + "launch_index": 0, "files": [], |
604 | + "public_keys": {"0": "ssh-rsa AAAAB3N...== user@host"}, |
605 | + "meta": {}} |
606 | + |
607 | + def populate_md(self, data): |
608 | +        """Call httpretty.register_uri for each item in dict 'data', |
609 | +        including a valid index listing. Text values are converted to bytes.""" |
610 | + httpretty.register_uri( |
611 | + httpretty.GET, self.mdurl + MD_VER + "/", |
612 | + '\n'.join(data.keys()).encode('utf-8')) |
613 | + for k, v in data.items(): |
614 | + httpretty.register_uri( |
615 | + httpretty.GET, self.mdurl + MD_VER + "/" + k, |
616 | + v if not isinstance(v, six.text_type) else v.encode('utf-8')) |
617 | + |
618 | + def test_broken_no_sys_uuid(self, m_read_system_uuid): |
619 | +        """Datasource requires the ability to read a non-empty system_uuid.""" |
620 | + m_read_system_uuid.return_value = None |
621 | + self.assertRaises(BrokenMetadata, oracle.read_metadata) |
622 | + |
623 | + def test_broken_no_metadata_json(self, m_read_system_uuid): |
624 | + """Datasource requires meta_data.json.""" |
625 | + httpretty.register_uri( |
626 | + httpretty.GET, self.mdurl + MD_VER + "/", |
627 | + '\n'.join(['user_data']).encode('utf-8')) |
628 | + with self.assertRaises(BrokenMetadata) as cm: |
629 | + oracle.read_metadata() |
630 | + self.assertIn("Required field 'meta_data.json' missing", |
631 | + str(cm.exception)) |
632 | + |
633 | + def test_with_userdata(self, m_read_system_uuid): |
634 | + data = {'user_data': b'#!/bin/sh\necho hi world\n', |
635 | + 'meta_data.json': json.dumps(self.my_md)} |
636 | + self.populate_md(data) |
637 | + result = oracle.read_metadata()[MD_VER] |
638 | + self.assertEqual(data['user_data'], result['user_data']) |
639 | + self.assertEqual(self.my_md, result['meta_data']) |
640 | + |
641 | + def test_without_userdata(self, m_read_system_uuid): |
642 | + data = {'meta_data.json': json.dumps(self.my_md)} |
643 | + self.populate_md(data) |
644 | + result = oracle.read_metadata()[MD_VER] |
645 | + self.assertNotIn('user_data', result) |
646 | + self.assertEqual(self.my_md, result['meta_data']) |
647 | + |
648 | + def test_unknown_fields_included(self, m_read_system_uuid): |
649 | + """Unknown fields listed in index should be included. |
650 | + And those ending in .json should be decoded.""" |
651 | + some_data = {'key1': 'data1', 'subk1': {'subd1': 'subv'}} |
652 | + some_vendor_data = {'cloud-init': 'foo'} |
653 | + data = {'meta_data.json': json.dumps(self.my_md), |
654 | + 'some_data.json': json.dumps(some_data), |
655 | + 'vendor_data.json': json.dumps(some_vendor_data), |
656 | + 'other_blob': b'this is blob'} |
657 | + self.populate_md(data) |
658 | + result = oracle.read_metadata()[MD_VER] |
659 | + self.assertNotIn('user_data', result) |
660 | + self.assertEqual(self.my_md, result['meta_data']) |
661 | + self.assertEqual(some_data, result['some_data']) |
662 | + self.assertEqual(some_vendor_data, result['vendor_data']) |
663 | + self.assertEqual(data['other_blob'], result['other_blob']) |
664 | + |
665 | + |
666 | +class TestIsPlatformViable(test_helpers.CiTestCase): |
667 | + @mock.patch(DS_PATH + ".util.read_dmi_data", |
668 | + return_value=oracle.CHASSIS_ASSET_TAG) |
669 | + def test_expected_viable(self, m_read_dmi_data): |
670 | + """System with known chassis tag is viable.""" |
671 | + self.assertTrue(oracle._is_platform_viable()) |
672 | + m_read_dmi_data.assert_has_calls([mock.call('chassis-asset-tag')]) |
673 | + |
674 | + @mock.patch(DS_PATH + ".util.read_dmi_data", return_value=None) |
675 | + def test_expected_not_viable_dmi_data_none(self, m_read_dmi_data): |
676 | + """System without known chassis tag is not viable.""" |
677 | + self.assertFalse(oracle._is_platform_viable()) |
678 | + m_read_dmi_data.assert_has_calls([mock.call('chassis-asset-tag')]) |
679 | + |
680 | + @mock.patch(DS_PATH + ".util.read_dmi_data", return_value="LetsGoCubs") |
681 | + def test_expected_not_viable_other(self, m_read_dmi_data): |
682 | +        """System with unknown chassis tag is not viable.""" |
683 | + self.assertFalse(oracle._is_platform_viable()) |
684 | + m_read_dmi_data.assert_has_calls([mock.call('chassis-asset-tag')]) |
685 | + |
686 | + |
687 | +class TestLoadIndex(test_helpers.CiTestCase): |
688 | + """_load_index handles parsing of an index into a proper list. |
689 | +    The tests here guarantee correct parsing of the html form and the |
690 | +    plain newline-separated form. See the function docstring for details.""" |
691 | + |
692 | + _known_html_api_versions = dedent("""\ |
693 | + <html> |
694 | + <head><title>Index of /openstack/</title></head> |
695 | + <body bgcolor="white"> |
696 | + <h1>Index of /openstack/</h1><hr><pre><a href="../">../</a> |
697 | + <a href="2013-10-17/">2013-10-17/</a> 27-Jun-2018 12:22 - |
698 | + <a href="latest/">latest/</a> 27-Jun-2018 12:22 - |
699 | + </pre><hr></body> |
700 | + </html>""") |
701 | + |
702 | + _known_html_contents = dedent("""\ |
703 | + <html> |
704 | + <head><title>Index of /openstack/2013-10-17/</title></head> |
705 | + <body bgcolor="white"> |
706 | + <h1>Index of /openstack/2013-10-17/</h1><hr><pre><a href="../">../</a> |
707 | + <a href="meta_data.json">meta_data.json</a> 27-Jun-2018 12:22 679 |
708 | + <a href="user_data">user_data</a> 27-Jun-2018 12:22 146 |
709 | + </pre><hr></body> |
710 | + </html>""") |
711 | + |
712 | + def test_parse_html(self): |
713 | + """Test parsing of lower case html.""" |
714 | + self.assertEqual( |
715 | + ['2013-10-17/', 'latest/'], |
716 | + oracle._load_index(self._known_html_api_versions)) |
717 | + self.assertEqual( |
718 | + ['meta_data.json', 'user_data'], |
719 | + oracle._load_index(self._known_html_contents)) |
720 | + |
721 | + def test_parse_html_upper(self): |
722 | + """Test parsing of upper case html, although known content is lower.""" |
723 | + def _toupper(data): |
724 | + return data.replace("<a", "<A").replace("html>", "HTML>") |
725 | + |
726 | + self.assertEqual( |
727 | + ['2013-10-17/', 'latest/'], |
728 | + oracle._load_index(_toupper(self._known_html_api_versions))) |
729 | + self.assertEqual( |
730 | + ['meta_data.json', 'user_data'], |
731 | + oracle._load_index(_toupper(self._known_html_contents))) |
732 | + |
733 | + def test_parse_newline_list_with_endl(self): |
734 | + """Test parsing of newline separated list with ending newline.""" |
735 | + self.assertEqual( |
736 | + ['2013-10-17/', 'latest/'], |
737 | + oracle._load_index("\n".join(["2013-10-17/", "latest/", ""]))) |
738 | + self.assertEqual( |
739 | + ['meta_data.json', 'user_data'], |
740 | + oracle._load_index("\n".join(["meta_data.json", "user_data", ""]))) |
741 | + |
742 | + def test_parse_newline_list_without_endl(self): |
743 | + """Test parsing of newline separated list with no ending newline. |
744 | + |
745 | + Actual openstack implementation does not include trailing newline.""" |
746 | + self.assertEqual( |
747 | + ['2013-10-17/', 'latest/'], |
748 | + oracle._load_index("\n".join(["2013-10-17/", "latest/"]))) |
749 | + self.assertEqual( |
750 | + ['meta_data.json', 'user_data'], |
751 | + oracle._load_index("\n".join(["meta_data.json", "user_data"]))) |
752 | + |
753 | + |
754 | +# vi: ts=4 expandtab |
755 | diff --git a/doc/rtd/topics/datasources.rst b/doc/rtd/topics/datasources.rst |
756 | index 30e57d8..8303458 100644 |
757 | --- a/doc/rtd/topics/datasources.rst |
758 | +++ b/doc/rtd/topics/datasources.rst |
759 | @@ -189,6 +189,7 @@ Follow for more information. |
760 | datasources/nocloud.rst |
761 | datasources/opennebula.rst |
762 | datasources/openstack.rst |
763 | + datasources/oracle.rst |
764 | datasources/ovf.rst |
765 | datasources/smartos.rst |
766 | datasources/fallback.rst |
767 | diff --git a/doc/rtd/topics/datasources/oracle.rst b/doc/rtd/topics/datasources/oracle.rst |
768 | new file mode 100644 |
769 | index 0000000..f2383ce |
770 | --- /dev/null |
771 | +++ b/doc/rtd/topics/datasources/oracle.rst |
772 | @@ -0,0 +1,26 @@ |
773 | +.. _datasource_oracle: |
774 | + |
775 | +Oracle |
776 | +====== |
777 | + |
778 | +This datasource reads metadata, vendor-data and user-data from |
779 | +`Oracle Compute Infrastructure`_ (OCI). |
780 | + |
781 | +Oracle Platform |
782 | +--------------- |
783 | +OCI provides bare metal and virtual machines. In both cases, |
784 | +the platform identifies itself via DMI data in the chassis asset tag |
785 | +with the string 'OracleCloud.com'. |
786 | + |
787 | +Oracle's platform provides a metadata service that mimics the 2013-10-17 |
788 | +version of the OpenStack metadata service. Initially, support for Oracle |
789 | +was provided via the OpenStack datasource. |
790 | + |
791 | +Cloud-init has a specific datasource for Oracle in order to: |
792 | + a. allow and support future growth of the OCI platform. |
793 | + b. address small differences between OpenStack and Oracle metadata |
794 | + implementation. |
795 | + |
796 | + |
797 | +.. _Oracle Compute Infrastructure: https://cloud.oracle.com/ |
798 | +.. vi: textwidth=78 |
799 | diff --git a/tests/unittests/test_datasource/test_common.py b/tests/unittests/test_datasource/test_common.py |
800 | index 1a5a3db..6b01a4e 100644 |
801 | --- a/tests/unittests/test_datasource/test_common.py |
802 | +++ b/tests/unittests/test_datasource/test_common.py |
803 | @@ -20,6 +20,7 @@ from cloudinit.sources import ( |
804 | DataSourceNoCloud as NoCloud, |
805 | DataSourceOpenNebula as OpenNebula, |
806 | DataSourceOpenStack as OpenStack, |
807 | + DataSourceOracle as Oracle, |
808 | DataSourceOVF as OVF, |
809 | DataSourceScaleway as Scaleway, |
810 | DataSourceSmartOS as SmartOS, |
811 | @@ -37,6 +38,7 @@ DEFAULT_LOCAL = [ |
812 | IBMCloud.DataSourceIBMCloud, |
813 | NoCloud.DataSourceNoCloud, |
814 | OpenNebula.DataSourceOpenNebula, |
815 | + Oracle.DataSourceOracle, |
816 | OVF.DataSourceOVF, |
817 | SmartOS.DataSourceSmartOS, |
818 | Ec2.DataSourceEc2Local, |
819 | diff --git a/tests/unittests/test_datasource/test_openstack.py b/tests/unittests/test_datasource/test_openstack.py |
820 | index d862f4b..6e1e971 100644 |
821 | --- a/tests/unittests/test_datasource/test_openstack.py |
822 | +++ b/tests/unittests/test_datasource/test_openstack.py |
823 | @@ -16,7 +16,7 @@ from six import StringIO |
824 | |
825 | from cloudinit import helpers |
826 | from cloudinit import settings |
827 | -from cloudinit.sources import convert_vendordata, UNSET |
828 | +from cloudinit.sources import BrokenMetadata, convert_vendordata, UNSET |
829 | from cloudinit.sources import DataSourceOpenStack as ds |
830 | from cloudinit.sources.helpers import openstack |
831 | from cloudinit import util |
832 | @@ -186,7 +186,7 @@ class TestOpenStackDataSource(test_helpers.HttprettyTestCase): |
833 | if k.endswith('meta_data.json'): |
834 | os_files[k] = json.dumps(os_meta) |
835 | _register_uris(self.VERSION, {}, {}, os_files) |
836 | - self.assertRaises(openstack.BrokenMetadata, _read_metadata_service) |
837 | + self.assertRaises(BrokenMetadata, _read_metadata_service) |
838 | |
839 | def test_userdata_empty(self): |
840 | os_files = copy.deepcopy(OS_FILES) |
841 | @@ -217,7 +217,7 @@ class TestOpenStackDataSource(test_helpers.HttprettyTestCase): |
842 | if k.endswith('vendor_data.json'): |
843 | os_files[k] = '{' # some invalid json |
844 | _register_uris(self.VERSION, {}, {}, os_files) |
845 | - self.assertRaises(openstack.BrokenMetadata, _read_metadata_service) |
846 | + self.assertRaises(BrokenMetadata, _read_metadata_service) |
847 | |
848 | def test_metadata_invalid(self): |
849 | os_files = copy.deepcopy(OS_FILES) |
850 | @@ -225,7 +225,7 @@ class TestOpenStackDataSource(test_helpers.HttprettyTestCase): |
851 | if k.endswith('meta_data.json'): |
852 | os_files[k] = '{' # some invalid json |
853 | _register_uris(self.VERSION, {}, {}, os_files) |
854 | - self.assertRaises(openstack.BrokenMetadata, _read_metadata_service) |
855 | + self.assertRaises(BrokenMetadata, _read_metadata_service) |
856 | |
857 | @test_helpers.mock.patch('cloudinit.net.dhcp.maybe_perform_dhcp_discovery') |
858 | def test_datasource(self, m_dhcp): |
859 | @@ -525,8 +525,11 @@ class TestDetectOpenStack(test_helpers.CiTestCase): |
860 | |
861 | m_dmi.side_effect = fake_dmi_read |
862 | self.assertTrue( |
863 | - ds.detect_openstack(), |
864 | + ds.detect_openstack(accept_oracle=True), |
865 | 'Expected detect_openstack == True on OracleCloud.com') |
866 | + self.assertFalse( |
867 | + ds.detect_openstack(accept_oracle=False), |
868 | + 'Expected detect_openstack == False.') |
869 | |
870 | @test_helpers.mock.patch(MOCK_PATH + 'util.get_proc_env') |
871 | @test_helpers.mock.patch(MOCK_PATH + 'util.read_dmi_data') |
872 | diff --git a/tests/unittests/test_ds_identify.py b/tests/unittests/test_ds_identify.py |
873 | index 64d9f9f..e08e790 100644 |
874 | --- a/tests/unittests/test_ds_identify.py |
875 | +++ b/tests/unittests/test_ds_identify.py |
876 | @@ -12,6 +12,7 @@ from cloudinit.tests.helpers import ( |
877 | |
878 | from cloudinit.sources import DataSourceIBMCloud as ds_ibm |
879 | from cloudinit.sources import DataSourceSmartOS as ds_smartos |
880 | +from cloudinit.sources import DataSourceOracle as ds_oracle |
881 | |
882 | UNAME_MYSYS = ("Linux bart 4.4.0-62-generic #83-Ubuntu " |
883 | "SMP Wed Jan 18 14:10:15 UTC 2017 x86_64 GNU/Linux") |
884 | @@ -598,6 +599,18 @@ class TestIsIBMProvisioning(DsIdentifyBase): |
885 | self.assertIn("from current boot", ret.stderr) |
886 | |
887 | |
888 | +class TestOracle(DsIdentifyBase): |
889 | + def test_found_by_chassis(self): |
890 | + """Simple positive test of Oracle by chassis id.""" |
891 | + self._test_ds_found('Oracle') |
892 | + |
893 | + def test_not_found(self): |
894 | + """Simple negative test of Oracle.""" |
895 | + mycfg = copy.deepcopy(VALID_CFG['Oracle']) |
896 | + mycfg['files'][P_CHASSIS_ASSET_TAG] = "Not Oracle" |
897 | + self._check_via_dict(mycfg, rc=RC_NOT_FOUND) |
898 | + |
899 | + |
900 | def blkid_out(disks=None): |
901 | """Convert a list of disk dictionaries into blkid content.""" |
902 | if disks is None: |
903 | @@ -838,6 +851,12 @@ VALID_CFG = { |
904 | }, |
905 | ], |
906 | }, |
907 | + 'Oracle': { |
908 | + 'ds': 'Oracle', |
909 | + 'files': { |
910 | + P_CHASSIS_ASSET_TAG: ds_oracle.CHASSIS_ASSET_TAG + '\n', |
911 | + } |
912 | + }, |
913 | 'SmartOS-bhyve': { |
914 | 'ds': 'SmartOS', |
915 | 'mocks': [ |
916 | diff --git a/tools/ds-identify b/tools/ds-identify |
917 | index ce0477a..fcc6014 100755 |
918 | --- a/tools/ds-identify |
919 | +++ b/tools/ds-identify |
920 | @@ -116,7 +116,7 @@ DI_DSNAME="" |
921 | # be searched if there is no setting found in config. |
922 | DI_DSLIST_DEFAULT="MAAS ConfigDrive NoCloud AltCloud Azure Bigstep \ |
923 | CloudSigma CloudStack DigitalOcean AliYun Ec2 GCE OpenNebula OpenStack \ |
924 | -OVF SmartOS Scaleway Hetzner IBMCloud" |
925 | +OVF SmartOS Scaleway Hetzner IBMCloud Oracle" |
926 | DI_DSLIST="" |
927 | DI_MODE="" |
928 | DI_ON_FOUND="" |
929 | @@ -1036,6 +1036,12 @@ dscheck_Hetzner() { |
930 | return ${DS_NOT_FOUND} |
931 | } |
932 | |
933 | +dscheck_Oracle() { |
934 | + local asset_tag="OracleCloud.com" |
935 | + dmi_chassis_asset_tag_matches "${asset_tag}" && return ${DS_FOUND} |
936 | + return ${DS_NOT_FOUND} |
937 | +} |
938 | + |
939 | is_ibm_provisioning() { |
940 | local pcfg="${PATH_ROOT}/root/provisioningConfiguration.cfg" |
941 | local logf="${PATH_ROOT}/root/swinstall.log" |
I recognize this needs tests, and plan to add them.
Just really a stick in the mud at this point.
It did successfully reboot after modifying an image to remove
the hard-coded datasource list of 'OpenStack'.
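For reviewers skimming the tests above, two small behaviors are being exercised: the DMI chassis-asset-tag check (mirrored by `dscheck_Oracle` in ds-identify) and `_load_index`'s tolerance for both plain and html index listings. Below is a minimal standalone sketch of both, not the actual cloud-init implementation; the `read_file` injection point and the regex are illustrative assumptions, and the sysfs path is the conventional Linux location for DMI data:

```python
import re

CHASSIS_ASSET_TAG = 'OracleCloud.com'
DMI_SYSFS = '/sys/class/dmi/id/chassis_asset_tag'


def is_platform_viable(read_file=None):
    """Return True when the DMI chassis asset tag matches Oracle's.

    'read_file' is injectable for testing; the default reads the
    conventional sysfs DMI path and returns None on any failure.
    """
    if read_file is None:
        def read_file(path):
            try:
                with open(path) as fp:
                    return fp.read().strip()
            except OSError:
                return None
    return read_file(DMI_SYSFS) == CHASSIS_ASSET_TAG


def load_index(content):
    """Parse a metadata index into a list of names.

    Accepts either a plain newline-separated listing (with or
    without a trailing newline) or the html directory listing some
    metadata servers return, in upper- or lower-case markup.
    """
    if content.lstrip().lower().startswith('<html>'):
        # Keep the href target of each anchor, dropping the parent link.
        names = re.findall(r'<a href="([^"]+)"', content, re.IGNORECASE)
        return [name for name in names if name != '../']
    return [line for line in content.splitlines() if line]
```

Under these assumptions, `load_index("2013-10-17/\nlatest/\n")` yields `['2013-10-17/', 'latest/']`, matching the expectations in `TestLoadIndex`, and `is_platform_viable(read_file=lambda p: 'LetsGoCubs')` is False, as in `test_expected_not_viable_other`.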