Merge ~shaner/cloud-init:ubuntu/trusty into cloud-init:ubuntu/trusty
Proposed by: Shane Peters
Status: Superseded
Proposed branch: ~shaner/cloud-init:ubuntu/trusty
Merge into: cloud-init:ubuntu/trusty
Diff against target: 744 lines (+722/-0), 3 files modified:
- debian/changelog (+8/-0)
- debian/patches/lp-1781039-gce-datasource-update.patch (+713/-0)
- debian/patches/series (+1/-0)
Reviewer: Scott Moser (Needs Fixing)
Review via email: mp+353997@code.launchpad.net
This proposal supersedes a proposal from 2018-08-07.
Commit message
Fix SSH key functionality for GCE Datasource
Per the documentation at https://wiki.ubuntu.com/GoogleComputeEngineSSHKeys,
SSH keys for the 'cloudinit' and 'ubuntu' users should both be added to the
'ubuntu' user's authorized_keys file. This works in Xenial and higher,
but not in Trusty.
This patch updates the GCE Datasource to function like that of Xenial
and newer.
LP: #1781039
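For context, GCE publishes each key as '<user>:<ssh-key>', and the patched _parse_public_keys keeps only keys addressed to 'cloudinit' or to the default user. A minimal sketch of that selection (simplified from the patch; the real code also drops expired 'google-ssh' keys):

```python
def parse_gce_public_keys(public_keys_data, default_user='ubuntu'):
    """Simplified sketch of the patched _parse_public_keys: keep only
    keys addressed to 'cloudinit' or to the default user, with the
    '<user>:' prefix stripped. Lines without a colon are ignored."""
    keys = []
    for line in public_keys_data or []:
        parts = line.split(':', 1)
        if len(parts) != 2:
            continue
        user, key = parts
        if user in ('cloudinit', default_user):
            keys.append(key)
    return keys
```

This is what makes Trusty behave like Xenial: previously only the raw project sshKeys attribute was consumed, with no per-user filtering.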
Description of the change
Revision history for this message
Scott Moser (smoser) wrote (posted in a previous version of this proposal):
review:
Needs Information
Revision history for this message
Scott Moser (smoser) wrote:
some necessary fixes inline.
Please make sure you test with user-data and without user-data.
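For reference, the user-data path being tested reduces to this decode step, a simplified sketch of what the patched read_md does with the instance attributes blob (extract_user_data is a hypothetical helper name, not code from the patch):

```python
import json
from base64 import b64decode


def extract_user_data(instance_attrs_json):
    """Pull user-data out of a GCE instance-attributes JSON blob,
    honoring the optional 'user-data-encoding' key, as the patched
    read_md does. Returns bytes, or None when no user-data is set."""
    attrs = json.loads(instance_attrs_json or '{}')
    if 'user-data' not in attrs:
        return None
    # instance attributes arrive as JSON, so values are text strings.
    ud = attrs['user-data'].encode('utf-8')
    if attrs.get('user-data-encoding') == 'base64':
        ud = b64decode(ud)
    return ud
```

Testing both with and without user-data matters because the no-user-data case must return None cleanly rather than raise.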
Revision history for this message
Scott Moser (smoser):
review:
Needs Fixing
Revision history for this message
Scott Moser (smoser) wrote:
Unmerged commits
- 926c9c0... by Shane Peters: debian/patches/lp-1781039-gce-datasource-update.patch
- e53a5cd... by Shane Peters: debian/patches/lp-1781039-gce-datasource-update.patch
- e0e26ee... by Shane Peters: debian/patches/lp-1781039-gce-datasource-update.patch
Preview Diff
1 | diff --git a/debian/changelog b/debian/changelog |
2 | index 4f2c89e..63b6698 100644 |
3 | --- a/debian/changelog |
4 | +++ b/debian/changelog |
5 | @@ -1,3 +1,11 @@ |
6 | +cloud-init (0.7.5-0ubuntu1.23) trusty; urgency=medium |
7 | + |
8 | + * GCE |
9 | + - d/p/lp-1781039-gce-datasource-update.patch (LP: #1781039) |
10 | + Backport GCE datasource functionality from Xenial |
11 | + |
12 | + -- Shane Peters <shane.peters@canonical.com> Tue, 07 Aug 2018 11:57:23 -0400 |
13 | + |
14 | cloud-init (0.7.5-0ubuntu1.22) trusty; urgency=medium |
15 | |
16 | * debian/update-grub-legacy-ec2: |
17 | diff --git a/debian/patches/lp-1781039-gce-datasource-update.patch b/debian/patches/lp-1781039-gce-datasource-update.patch |
18 | new file mode 100644 |
19 | index 0000000..34c67d3 |
20 | --- /dev/null |
21 | +++ b/debian/patches/lp-1781039-gce-datasource-update.patch |
22 | @@ -0,0 +1,713 @@ |
23 | +Author: Shane Peters <shane.peters@canonical.com> |
24 | +Bug: https://bugs.launchpad.net/cloud-init/+bug/1781039 |
25 | +Description: Update GCE datasource |
26 | + Per documentation at https://wiki.ubuntu.com/GoogleComputeEngineSSHKeys |
27 | + ssh keys for cloudinit and ubuntu users should both be added to the |
28 | + 'ubuntu' users authorized_keys file. This works in Xenial and higher, |
29 | + but not in Trusty. |
30 | +--- /dev/null |
31 | ++++ b/cloudinit/distros/ug_util.py |
32 | +@@ -0,0 +1,289 @@ |
33 | ++# Copyright (C) 2012 Canonical Ltd. |
34 | ++# Copyright (C) 2012, 2013 Hewlett-Packard Development Company, L.P. |
35 | ++# Copyright (C) 2012 Yahoo! Inc. |
36 | ++# |
37 | ++# Author: Scott Moser <scott.moser@canonical.com> |
38 | ++# Author: Juerg Haefliger <juerg.haefliger@hp.com> |
39 | ++# Author: Joshua Harlow <harlowja@yahoo-inc.com> |
40 | ++# Author: Ben Howard <ben.howard@canonical.com> |
41 | ++# |
42 | ++# This file is part of cloud-init. See LICENSE file for license information. |
43 | ++ |
44 | ++import six |
45 | ++ |
46 | ++from cloudinit import log as logging |
47 | ++from cloudinit import type_utils |
48 | ++from cloudinit import util |
49 | ++ |
50 | ++LOG = logging.getLogger(__name__) |
51 | ++ |
52 | ++ |
53 | ++# Normalizes a input group configuration |
54 | ++# which can be a comma seperated list of |
55 | ++# group names, or a list of group names |
56 | ++# or a python dictionary of group names |
57 | ++# to a list of members of that group. |
58 | ++# |
59 | ++# The output is a dictionary of group |
60 | ++# names => members of that group which |
61 | ++# is the standard form used in the rest |
62 | ++# of cloud-init |
63 | ++def _normalize_groups(grp_cfg): |
64 | ++ if isinstance(grp_cfg, six.string_types): |
65 | ++ grp_cfg = grp_cfg.strip().split(",") |
66 | ++ if isinstance(grp_cfg, list): |
67 | ++ c_grp_cfg = {} |
68 | ++ for i in grp_cfg: |
69 | ++ if isinstance(i, dict): |
70 | ++ for k, v in i.items(): |
71 | ++ if k not in c_grp_cfg: |
72 | ++ if isinstance(v, list): |
73 | ++ c_grp_cfg[k] = list(v) |
74 | ++ elif isinstance(v, six.string_types): |
75 | ++ c_grp_cfg[k] = [v] |
76 | ++ else: |
77 | ++ raise TypeError("Bad group member type %s" % |
78 | ++ type_utils.obj_name(v)) |
79 | ++ else: |
80 | ++ if isinstance(v, list): |
81 | ++ c_grp_cfg[k].extend(v) |
82 | ++ elif isinstance(v, six.string_types): |
83 | ++ c_grp_cfg[k].append(v) |
84 | ++ else: |
85 | ++ raise TypeError("Bad group member type %s" % |
86 | ++ type_utils.obj_name(v)) |
87 | ++ elif isinstance(i, six.string_types): |
88 | ++ if i not in c_grp_cfg: |
89 | ++ c_grp_cfg[i] = [] |
90 | ++ else: |
91 | ++ raise TypeError("Unknown group name type %s" % |
92 | ++ type_utils.obj_name(i)) |
93 | ++ grp_cfg = c_grp_cfg |
94 | ++ groups = {} |
95 | ++ if isinstance(grp_cfg, dict): |
96 | ++ for (grp_name, grp_members) in grp_cfg.items(): |
97 | ++ groups[grp_name] = util.uniq_merge_sorted(grp_members) |
98 | ++ else: |
99 | ++ raise TypeError(("Group config must be list, dict " |
100 | ++ " or string types only and not %s") % |
101 | ++ type_utils.obj_name(grp_cfg)) |
102 | ++ return groups |
103 | ++ |
104 | ++ |
105 | ++# Normalizes a input group configuration |
106 | ++# which can be a comma seperated list of |
107 | ++# user names, or a list of string user names |
108 | ++# or a list of dictionaries with components |
109 | ++# that define the user config + 'name' (if |
110 | ++# a 'name' field does not exist then the |
111 | ++# default user is assumed to 'own' that |
112 | ++# configuration. |
113 | ++# |
114 | ++# The output is a dictionary of user |
115 | ++# names => user config which is the standard |
116 | ++# form used in the rest of cloud-init. Note |
117 | ++# the default user will have a special config |
118 | ++# entry 'default' which will be marked as true |
119 | ++# all other users will be marked as false. |
120 | ++def _normalize_users(u_cfg, def_user_cfg=None): |
121 | ++ if isinstance(u_cfg, dict): |
122 | ++ ad_ucfg = [] |
123 | ++ for (k, v) in u_cfg.items(): |
124 | ++ if isinstance(v, (bool, int, float) + six.string_types): |
125 | ++ if util.is_true(v): |
126 | ++ ad_ucfg.append(str(k)) |
127 | ++ elif isinstance(v, dict): |
128 | ++ v['name'] = k |
129 | ++ ad_ucfg.append(v) |
130 | ++ else: |
131 | ++ raise TypeError(("Unmappable user value type %s" |
132 | ++ " for key %s") % (type_utils.obj_name(v), k)) |
133 | ++ u_cfg = ad_ucfg |
134 | ++ elif isinstance(u_cfg, six.string_types): |
135 | ++ u_cfg = util.uniq_merge_sorted(u_cfg) |
136 | ++ |
137 | ++ users = {} |
138 | ++ for user_config in u_cfg: |
139 | ++ if isinstance(user_config, (list,) + six.string_types): |
140 | ++ for u in util.uniq_merge(user_config): |
141 | ++ if u and u not in users: |
142 | ++ users[u] = {} |
143 | ++ elif isinstance(user_config, dict): |
144 | ++ if 'name' in user_config: |
145 | ++ n = user_config.pop('name') |
146 | ++ prev_config = users.get(n) or {} |
147 | ++ users[n] = util.mergemanydict([prev_config, |
148 | ++ user_config]) |
149 | ++ else: |
150 | ++ # Assume the default user then |
151 | ++ prev_config = users.get('default') or {} |
152 | ++ users['default'] = util.mergemanydict([prev_config, |
153 | ++ user_config]) |
154 | ++ else: |
155 | ++ raise TypeError(("User config must be dictionary/list " |
156 | ++ " or string types only and not %s") % |
157 | ++ type_utils.obj_name(user_config)) |
158 | ++ |
159 | ++ # Ensure user options are in the right python friendly format |
160 | ++ if users: |
161 | ++ c_users = {} |
162 | ++ for (uname, uconfig) in users.items(): |
163 | ++ c_uconfig = {} |
164 | ++ for (k, v) in uconfig.items(): |
165 | ++ k = k.replace('-', '_').strip() |
166 | ++ if k: |
167 | ++ c_uconfig[k] = v |
168 | ++ c_users[uname] = c_uconfig |
169 | ++ users = c_users |
170 | ++ |
171 | ++ # Fixup the default user into the real |
172 | ++ # default user name and replace it... |
173 | ++ def_user = None |
174 | ++ if users and 'default' in users: |
175 | ++ def_config = users.pop('default') |
176 | ++ if def_user_cfg: |
177 | ++ # Pickup what the default 'real name' is |
178 | ++ # and any groups that are provided by the |
179 | ++ # default config |
180 | ++ def_user_cfg = def_user_cfg.copy() |
181 | ++ def_user = def_user_cfg.pop('name') |
182 | ++ def_groups = def_user_cfg.pop('groups', []) |
183 | ++ # Pickup any config + groups for that user name |
184 | ++ # that we may have previously extracted |
185 | ++ parsed_config = users.pop(def_user, {}) |
186 | ++ parsed_groups = parsed_config.get('groups', []) |
187 | ++ # Now merge our extracted groups with |
188 | ++ # anything the default config provided |
189 | ++ users_groups = util.uniq_merge_sorted(parsed_groups, def_groups) |
190 | ++ parsed_config['groups'] = ",".join(users_groups) |
191 | ++ # The real config for the default user is the |
192 | ++ # combination of the default user config provided |
193 | ++ # by the distro, the default user config provided |
194 | ++ # by the above merging for the user 'default' and |
195 | ++ # then the parsed config from the user's 'real name' |
196 | ++ # which does not have to be 'default' (but could be) |
197 | ++ users[def_user] = util.mergemanydict([def_user_cfg, |
198 | ++ def_config, |
199 | ++ parsed_config]) |
200 | ++ |
201 | ++ # Ensure that only the default user that we |
202 | ++ # found (if any) is actually marked as being |
203 | ++ # the default user |
204 | ++ if users: |
205 | ++ for (uname, uconfig) in users.items(): |
206 | ++ if def_user and uname == def_user: |
207 | ++ uconfig['default'] = True |
208 | ++ else: |
209 | ++ uconfig['default'] = False |
210 | ++ |
211 | ++ return users |
212 | ++ |
213 | ++ |
214 | ++# Normalizes a set of user/users and group |
215 | ++# dictionary configuration into a useable |
216 | ++# format that the rest of cloud-init can |
217 | ++# understand using the default user |
218 | ++# provided by the input distrobution (if any) |
219 | ++# to allow for mapping of the 'default' user. |
220 | ++# |
221 | ++# Output is a dictionary of group names -> [member] (list) |
222 | ++# and a dictionary of user names -> user configuration (dict) |
223 | ++# |
224 | ++# If 'user' exists it will override |
225 | ++# the 'users'[0] entry (if a list) otherwise it will |
226 | ++# just become an entry in the returned dictionary (no override) |
227 | ++def normalize_users_groups(cfg, distro): |
228 | ++ if not cfg: |
229 | ++ cfg = {} |
230 | ++ |
231 | ++ users = {} |
232 | ++ groups = {} |
233 | ++ if 'groups' in cfg: |
234 | ++ groups = _normalize_groups(cfg['groups']) |
235 | ++ |
236 | ++ # Handle the previous style of doing this where the first user |
237 | ++ # overrides the concept of the default user if provided in the user: XYZ |
238 | ++ # format. |
239 | ++ old_user = {} |
240 | ++ if 'user' in cfg and cfg['user']: |
241 | ++ old_user = cfg['user'] |
242 | ++ # Translate it into the format that is more useful |
243 | ++ # going forward |
244 | ++ if isinstance(old_user, six.string_types): |
245 | ++ old_user = { |
246 | ++ 'name': old_user, |
247 | ++ } |
248 | ++ if not isinstance(old_user, dict): |
249 | ++ LOG.warning(("Format for 'user' key must be a string or dictionary" |
250 | ++ " and not %s"), type_utils.obj_name(old_user)) |
251 | ++ old_user = {} |
252 | ++ |
253 | ++ # If no old user format, then assume the distro |
254 | ++ # provides what the 'default' user maps to, but notice |
255 | ++ # that if this is provided, we won't automatically inject |
256 | ++ # a 'default' user into the users list, while if a old user |
257 | ++ # format is provided we will. |
258 | ++ distro_user_config = {} |
259 | ++ try: |
260 | ++ distro_user_config = distro.get_default_user() |
261 | ++ except NotImplementedError: |
262 | ++ LOG.warning(("Distro has not implemented default user " |
263 | ++ "access. No distribution provided default user" |
264 | ++ " will be normalized.")) |
265 | ++ |
266 | ++ # Merge the old user (which may just be an empty dict when not |
267 | ++ # present with the distro provided default user configuration so |
268 | ++ # that the old user style picks up all the distribution specific |
269 | ++ # attributes (if any) |
270 | ++ default_user_config = util.mergemanydict([old_user, distro_user_config]) |
271 | ++ |
272 | ++ base_users = cfg.get('users', []) |
273 | ++ if not isinstance(base_users, (list, dict) + six.string_types): |
274 | ++ LOG.warning(("Format for 'users' key must be a comma separated string" |
275 | ++ " or a dictionary or a list and not %s"), |
276 | ++ type_utils.obj_name(base_users)) |
277 | ++ base_users = [] |
278 | ++ |
279 | ++ if old_user: |
280 | ++ # Ensure that when user: is provided that this user |
281 | ++ # always gets added (as the default user) |
282 | ++ if isinstance(base_users, list): |
283 | ++ # Just add it on at the end... |
284 | ++ base_users.append({'name': 'default'}) |
285 | ++ elif isinstance(base_users, dict): |
286 | ++ base_users['default'] = dict(base_users).get('default', True) |
287 | ++ elif isinstance(base_users, six.string_types): |
288 | ++ # Just append it on to be re-parsed later |
289 | ++ base_users += ",default" |
290 | ++ |
291 | ++ users = _normalize_users(base_users, default_user_config) |
292 | ++ return (users, groups) |
293 | ++ |
294 | ++ |
295 | ++# Given a user dictionary config it will |
296 | ++# extract the default user name and user config |
297 | ++# from that list and return that tuple or |
298 | ++# return (None, None) if no default user is |
299 | ++# found in the given input |
300 | ++def extract_default(users, default_name=None, default_config=None): |
301 | ++ if not users: |
302 | ++ users = {} |
303 | ++ |
304 | ++ def safe_find(entry): |
305 | ++ config = entry[1] |
306 | ++ if not config or 'default' not in config: |
307 | ++ return False |
308 | ++ else: |
309 | ++ return config['default'] |
310 | ++ |
311 | ++ tmp_users = users.items() |
312 | ++ tmp_users = dict(filter(safe_find, tmp_users)) |
313 | ++ if not tmp_users: |
314 | ++ return (default_name, default_config) |
315 | ++ else: |
316 | ++ name = list(tmp_users)[0] |
317 | ++ config = tmp_users[name] |
318 | ++ config.pop('default', None) |
319 | ++ return (name, config) |
320 | ++ |
321 | ++# vi: ts=4 expandtab |
322 | +--- a/cloudinit/sources/DataSourceGCE.py |
323 | ++++ b/cloudinit/sources/DataSourceGCE.py |
324 | +@@ -1,142 +1,99 @@ |
325 | +-# vi: ts=4 expandtab |
326 | +-# |
327 | +-# Author: Vaidas Jablonskis <jablonskis@gmail.com> |
328 | +-# |
329 | +-# This program is free software: you can redistribute it and/or modify |
330 | +-# it under the terms of the GNU General Public License version 3, as |
331 | +-# published by the Free Software Foundation. |
332 | ++# Author: Vaidas Jablonskis <jablonskis@gmail.com> |
333 | + # |
334 | +-# This program is distributed in the hope that it will be useful, |
335 | +-# but WITHOUT ANY WARRANTY; without even the implied warranty of |
336 | +-# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the |
337 | +-# GNU General Public License for more details. |
338 | +-# |
339 | +-# You should have received a copy of the GNU General Public License |
340 | +-# along with this program. If not, see <http://www.gnu.org/licenses/>. |
341 | ++# This file is part of cloud-init. See LICENSE file for license information. |
342 | + |
343 | ++import datetime |
344 | ++import json |
345 | + |
346 | + from base64 import b64decode |
347 | + |
348 | ++from cloudinit.distros import ug_util |
349 | + from cloudinit import log as logging |
350 | +-from cloudinit import util |
351 | + from cloudinit import sources |
352 | + from cloudinit import url_helper |
353 | ++from cloudinit import util |
354 | + |
355 | + LOG = logging.getLogger(__name__) |
356 | + |
357 | +-BUILTIN_DS_CONFIG = { |
358 | +- 'metadata_url': 'http://metadata.google.internal/computeMetadata/v1/' |
359 | +-} |
360 | ++MD_V1_URL = 'http://metadata.google.internal/computeMetadata/v1/' |
361 | ++BUILTIN_DS_CONFIG = {'metadata_url': MD_V1_URL} |
362 | + REQUIRED_FIELDS = ('instance-id', 'availability-zone', 'local-hostname') |
363 | + |
364 | + |
365 | ++class GoogleMetadataFetcher(object): |
366 | ++ headers = {'Metadata-Flavor': 'Google'} |
367 | ++ |
368 | ++ def __init__(self, metadata_address): |
369 | ++ self.metadata_address = metadata_address |
370 | ++ |
371 | ++ def get_value(self, path, is_text, is_recursive=False): |
372 | ++ value = None |
373 | ++ try: |
374 | ++ url = self.metadata_address + path |
375 | ++ if is_recursive: |
376 | ++ url += '/?recursive=True' |
377 | ++ resp = url_helper.readurl(url=url, headers=self.headers) |
378 | ++ except url_helper.UrlError as exc: |
379 | ++ msg = "url %s raised exception %s" |
380 | ++ LOG.debug(msg, path, exc) |
381 | ++ else: |
382 | ++ if resp.code == 200: |
383 | ++ if is_text: |
384 | ++ value = util.decode_binary(resp.contents) |
385 | ++ else: |
386 | ++ value = resp.contents.decode('utf-8') |
387 | ++ else: |
388 | ++ LOG.debug("url %s returned code %s", path, resp.code) |
389 | ++ return value |
390 | ++ |
391 | ++ |
392 | + class DataSourceGCE(sources.DataSource): |
393 | ++ |
394 | ++ dsname = 'GCE' |
395 | ++ |
396 | + def __init__(self, sys_cfg, distro, paths): |
397 | + sources.DataSource.__init__(self, sys_cfg, distro, paths) |
398 | ++ self.default_user = None |
399 | ++ if distro: |
400 | ++ (users, _groups) = ug_util.normalize_users_groups(sys_cfg, distro) |
401 | ++ (self.default_user, _user_config) = ug_util.extract_default(users) |
402 | + self.metadata = dict() |
403 | + self.ds_cfg = util.mergemanydict([ |
404 | + util.get_cfg_by_path(sys_cfg, ["datasource", "GCE"], {}), |
405 | + BUILTIN_DS_CONFIG]) |
406 | + self.metadata_address = self.ds_cfg['metadata_url'] |
407 | + |
408 | +- # GCE takes sshKeys attribute in the format of '<user>:<public_key>' |
409 | +- # so we have to trim each key to remove the username part |
410 | +- def _trim_key(self, public_key): |
411 | +- try: |
412 | +- index = public_key.index(':') |
413 | +- if index > 0: |
414 | +- return public_key[(index + 1):] |
415 | +- except: |
416 | +- return public_key |
417 | +- |
418 | + def get_data(self): |
419 | +- # GCE metadata server requires a custom header since v1 |
420 | +- headers = {'X-Google-Metadata-Request': True} |
421 | +- |
422 | +- # url_map: (our-key, path, required) |
423 | +- url_map = [ |
424 | +- ('instance-id', 'instance/id', True), |
425 | +- ('availability-zone', 'instance/zone', True), |
426 | +- ('local-hostname', 'instance/hostname', True), |
427 | +- ('public-keys', 'project/attributes/sshKeys', False), |
428 | +- ('user-data', 'instance/attributes/user-data', False), |
429 | +- ('user-data-encoding', 'instance/attributes/user-data-encoding', |
430 | +- False), |
431 | +- ] |
432 | +- |
433 | +- # if we cannot resolve the metadata server, then no point in trying |
434 | +- if not util.is_resolvable_url(self.metadata_address): |
435 | +- LOG.debug("%s is not resolvable", self.metadata_address) |
436 | +- return False |
437 | +- |
438 | +- # iterate over url_map keys to get metadata items |
439 | +- found = False |
440 | +- for (mkey, path, required) in url_map: |
441 | +- try: |
442 | +- resp = url_helper.readurl(url=self.metadata_address + path, |
443 | +- headers=headers) |
444 | +- if resp.code == 200: |
445 | +- found = True |
446 | +- self.metadata[mkey] = resp.contents |
447 | +- else: |
448 | +- if required: |
449 | +- msg = "required url %s returned code %s. not GCE" |
450 | +- if not found: |
451 | +- LOG.debug(msg, path, resp.code) |
452 | +- else: |
453 | +- LOG.warn(msg, path, resp.code) |
454 | +- return False |
455 | +- else: |
456 | +- self.metadata[mkey] = None |
457 | +- except url_helper.UrlError as e: |
458 | +- if required: |
459 | +- msg = "required url %s raised exception %s. not GCE" |
460 | +- if not found: |
461 | +- LOG.debug(msg, path, e) |
462 | +- else: |
463 | +- LOG.warn(msg, path, e) |
464 | +- return False |
465 | +- msg = "Failed to get %s metadata item: %s." |
466 | +- LOG.debug(msg, path, e) |
467 | +- |
468 | +- self.metadata[mkey] = None |
469 | +- |
470 | +- if self.metadata['public-keys']: |
471 | +- lines = self.metadata['public-keys'].splitlines() |
472 | +- self.metadata['public-keys'] = [self._trim_key(k) for k in lines] |
473 | +- |
474 | +- if self.metadata['availability-zone']: |
475 | +- self.metadata['availability-zone'] = self.metadata[ |
476 | +- 'availability-zone'].split('/')[-1] |
477 | +- |
478 | +- encoding = self.metadata.get('user-data-encoding') |
479 | +- if encoding: |
480 | +- if encoding == 'base64': |
481 | +- self.metadata['user-data'] = b64decode( |
482 | +- self.metadata['user-data']) |
483 | ++ ret = util.log_time( |
484 | ++ LOG.debug, 'Crawl of GCE metadata service', |
485 | ++ read_md, kwargs={'address': self.metadata_address}) |
486 | ++ |
487 | ++ if not ret['success']: |
488 | ++ if ret['platform_reports_gce']: |
489 | ++ LOG.warning(ret['reason']) |
490 | + else: |
491 | +- LOG.warn('unknown user-data-encoding: %s, ignoring', encoding) |
492 | +- |
493 | +- return found |
494 | ++ LOG.debug(ret['reason']) |
495 | ++ return False |
496 | ++ self.metadata = ret['meta-data'] |
497 | ++ self.userdata_raw = ret['user-data'] |
498 | ++ return True |
499 | + |
500 | + @property |
501 | + def launch_index(self): |
502 | +- # GCE does not provide lauch_index property |
503 | ++ # GCE does not provide lauch_index property. |
504 | + return None |
505 | + |
506 | + def get_instance_id(self): |
507 | + return self.metadata['instance-id'] |
508 | + |
509 | + def get_public_ssh_keys(self): |
510 | +- return self.metadata['public-keys'] |
511 | ++ public_keys_data = self.metadata['public-keys-data'] |
512 | ++ return _parse_public_keys(public_keys_data, self.default_user) |
513 | + |
514 | +- def get_hostname(self, fqdn=False, _resolve_ip=False): |
515 | +- # GCE has long FDQN's and has asked for short hostnames |
516 | ++ def get_hostname(self, fqdn=False, resolve_ip=False, metadata_only=False): |
517 | ++ # GCE has long FDQN's and has asked for short hostnames. |
518 | + return self.metadata['local-hostname'].split('.')[0] |
519 | + |
520 | +- def get_userdata_raw(self): |
521 | +- return self.metadata['user-data'] |
522 | +- |
523 | + @property |
524 | + def availability_zone(self): |
525 | + return self.metadata['availability-zone'] |
526 | +@@ -145,12 +102,187 @@ |
527 | + def region(self): |
528 | + return self.availability_zone.rsplit('-', 1)[0] |
529 | + |
530 | +-# Used to match classes to dependencies |
531 | ++ |
532 | ++def _has_expired(public_key): |
533 | ++ # Check whether an SSH key is expired. Public key input is a single SSH |
534 | ++ # public key in the GCE specific key format documented here: |
535 | ++ # https://cloud.google.com/compute/docs/instances/adding-removing-ssh-keys#sshkeyformat |
536 | ++ try: |
537 | ++ # Check for the Google-specific schema identifier. |
538 | ++ schema, json_str = public_key.split(None, 3)[2:] |
539 | ++ except (ValueError, AttributeError): |
540 | ++ return False |
541 | ++ |
542 | ++ # Do not expire keys if they do not have the expected schema identifier. |
543 | ++ if schema != 'google-ssh': |
544 | ++ return False |
545 | ++ |
546 | ++ try: |
547 | ++ json_obj = json.loads(json_str) |
548 | ++ except ValueError: |
549 | ++ return False |
550 | ++ |
551 | ++ # Do not expire keys if there is no expriation timestamp. |
552 | ++ if 'expireOn' not in json_obj: |
553 | ++ return False |
554 | ++ |
555 | ++ expire_str = json_obj['expireOn'] |
556 | ++ format_str = '%Y-%m-%dT%H:%M:%S+0000' |
557 | ++ try: |
558 | ++ expire_time = datetime.datetime.strptime(expire_str, format_str) |
559 | ++ except ValueError: |
560 | ++ return False |
561 | ++ |
562 | ++ # Expire the key if and only if we have exceeded the expiration timestamp. |
563 | ++ return datetime.datetime.utcnow() > expire_time |
564 | ++ |
565 | ++ |
566 | ++def _parse_public_keys(public_keys_data, default_user=None): |
567 | ++ # Parse the SSH key data for the default user account. Public keys input is |
568 | ++ # a list containing SSH public keys in the GCE specific key format |
569 | ++ # documented here: |
570 | ++ # https://cloud.google.com/compute/docs/instances/adding-removing-ssh-keys#sshkeyformat |
571 | ++ public_keys = [] |
572 | ++ if not public_keys_data: |
573 | ++ return public_keys |
574 | ++ for public_key in public_keys_data: |
575 | ++ if not public_key or not all(ord(c) < 128 for c in public_key): |
576 | ++ continue |
577 | ++ split_public_key = public_key.split(':', 1) |
578 | ++ if len(split_public_key) != 2: |
579 | ++ continue |
580 | ++ user, key = split_public_key |
581 | ++ if user in ('cloudinit', default_user) and not _has_expired(key): |
582 | ++ public_keys.append(key) |
583 | ++ return public_keys |
584 | ++ |
585 | ++ |
586 | ++def read_md(address=None, platform_check=True): |
587 | ++ |
588 | ++ if address is None: |
589 | ++ address = MD_V1_URL |
590 | ++ |
591 | ++ ret = {'meta-data': None, 'user-data': None, |
592 | ++ 'success': False, 'reason': None} |
593 | ++ ret['platform_reports_gce'] = platform_reports_gce() |
594 | ++ |
595 | ++ if platform_check and not ret['platform_reports_gce']: |
596 | ++ ret['reason'] = "Not running on GCE." |
597 | ++ return ret |
598 | ++ |
599 | ++ # If we cannot resolve the metadata server, then no point in trying. |
600 | ++ if not util.is_resolvable_url(address): |
601 | ++ LOG.debug("%s is not resolvable", address) |
602 | ++ ret['reason'] = 'address "%s" is not resolvable' % address |
603 | ++ return ret |
604 | ++ |
605 | ++ # url_map: (our-key, path, required, is_text, is_recursive) |
606 | ++ url_map = [ |
607 | ++ ('instance-id', ('instance/id',), True, True, False), |
608 | ++ ('availability-zone', ('instance/zone',), True, True, False), |
609 | ++ ('local-hostname', ('instance/hostname',), True, True, False), |
610 | ++ ('instance-data', ('instance/attributes',), False, False, True), |
611 | ++ ('project-data', ('project/attributes',), False, False, True), |
612 | ++ ] |
613 | ++ |
614 | ++ metadata_fetcher = GoogleMetadataFetcher(address) |
615 | ++ md = {} |
616 | ++ # Iterate over url_map keys to get metadata items. |
617 | ++ for (mkey, paths, required, is_text, is_recursive) in url_map: |
618 | ++ value = None |
619 | ++ for path in paths: |
620 | ++ new_value = metadata_fetcher.get_value(path, is_text, is_recursive) |
621 | ++ if new_value is not None: |
622 | ++ value = new_value |
623 | ++ if required and value is None: |
624 | ++ msg = "required key %s returned nothing. not GCE" |
625 | ++ ret['reason'] = msg % mkey |
626 | ++ return ret |
627 | ++ md[mkey] = value |
628 | ++ |
629 | ++ instance_data = json.loads(md['instance-data'] or '{}') |
630 | ++ project_data = json.loads(md['project-data'] or '{}') |
631 | ++ valid_keys = [instance_data.get('sshKeys'), instance_data.get('ssh-keys')] |
632 | ++ block_project = instance_data.get('block-project-ssh-keys', '').lower() |
633 | ++ if block_project != 'true' and not instance_data.get('sshKeys'): |
634 | ++ valid_keys.append(project_data.get('ssh-keys')) |
635 | ++ valid_keys.append(project_data.get('sshKeys')) |
636 | ++ public_keys_data = '\n'.join([key for key in valid_keys if key]) |
637 | ++ md['public-keys-data'] = public_keys_data.splitlines() |
638 | ++ |
639 | ++ if md['availability-zone']: |
640 | ++ md['availability-zone'] = md['availability-zone'].split('/')[-1] |
641 | ++ |
642 | ++ if 'user-data' in instance_data: |
643 | ++ # instance_data was json, so values are all utf-8 strings. |
644 | ++ ud = instance_data['user-data'].encode("utf-8") |
645 | ++ encoding = instance_data.get('user-data-encoding') |
646 | ++ if encoding == 'base64': |
647 | ++ ud = b64decode(ud) |
648 | ++ elif encoding: |
649 | ++ LOG.warning('unknown user-data-encoding: %s, ignoring', encoding) |
650 | ++ ret['user-data'] = ud |
651 | ++ |
652 | ++ ret['meta-data'] = md |
653 | ++ ret['success'] = True |
654 | ++ |
655 | ++ return ret |
656 | ++ |
657 | ++ |
658 | ++def platform_reports_gce(): |
659 | ++ pname = util.read_dmi_data('system-product-name') or "N/A" |
660 | ++ if pname == "Google Compute Engine": |
661 | ++ return True |
662 | ++ |
663 | ++ # system-product-name is not always guaranteed (LP: #1674861) |
664 | ++ serial = util.read_dmi_data('system-serial-number') or "N/A" |
665 | ++ if serial.startswith("GoogleCloud-"): |
666 | ++ return True |
667 | ++ |
668 | ++ LOG.debug("Not running on google cloud. product-name=%s serial=%s", |
669 | ++ pname, serial) |
670 | ++ return False |
671 | ++ |
672 | ++ |
673 | ++# Used to match classes to dependencies. |
674 | + datasources = [ |
675 | + (DataSourceGCE, (sources.DEP_FILESYSTEM, sources.DEP_NETWORK)), |
676 | + ] |
677 | + |
678 | + |
679 | +-# Return a list of data sources that match this set of dependencies |
680 | ++# Return a list of data sources that match this set of dependencies. |
681 | + def get_datasource_list(depends): |
682 | + return sources.list_from_depends(depends, datasources) |
683 | ++ |
684 | ++ |
685 | ++if __name__ == "__main__": |
686 | ++ import argparse |
687 | ++ import sys |
688 | ++ |
689 | ++ from base64 import b64encode |
690 | ++ |
691 | ++ parser = argparse.ArgumentParser(description='Query GCE Metadata Service') |
692 | ++ parser.add_argument("--endpoint", metavar="URL", |
693 | ++ help="The url of the metadata service.", |
694 | ++ default=MD_V1_URL) |
695 | ++ parser.add_argument("--no-platform-check", dest="platform_check", |
696 | ++ help="Ignore smbios platform check", |
697 | ++ action='store_false', default=True) |
698 | ++ args = parser.parse_args() |
699 | ++ data = read_md(address=args.endpoint, platform_check=args.platform_check) |
700 | ++ if 'user-data' in data: |
701 | ++ # user-data is bytes not string like other things. Handle it specially. |
702 | ++ # If it can be represented as utf-8 then do so. Otherwise print base64 |
703 | ++ # encoded value in the key user-data-b64. |
704 | ++ try: |
705 | ++ data['user-data'] = data['user-data'].decode() |
706 | ++ except UnicodeDecodeError: |
707 | ++ sys.stderr.write("User-data cannot be decoded. " |
708 | ++ "Writing as base64\n") |
709 | ++ del data['user-data'] |
710 | ++ # b64encode returns a bytes value. Decode to get the string. |
711 | ++ data['user-data-b64'] = b64encode(data['user-data']).decode() |
712 | ++ |
713 | ++ print(json.dumps(data, indent=1, sort_keys=True, separators=(',', ': '))) |
714 | ++ |
715 | ++# vi: ts=4 expandtab |
716 | +--- a/cloudinit/util.py |
717 | ++++ b/cloudinit/util.py |
718 | +@@ -49,6 +49,7 @@ |
719 | + import time |
720 | + import urlparse |
721 | + |
722 | ++import six |
723 | + import yaml |
724 | + |
725 | + from cloudinit import importer |
726 | +--- a/debian/control |
727 | ++++ b/debian/control |
728 | +@@ -17,6 +17,7 @@ |
729 | + python-oauth, |
730 | + python-prettytable, |
731 | + python-setuptools, |
732 | ++ python-six, |
733 | + python-requests, |
734 | + python-yaml |
735 | + XS-Python-Version: all |
736 | diff --git a/debian/patches/series b/debian/patches/series |
737 | index d20f78b..21d6b4f 100644 |
738 | --- a/debian/patches/series |
739 | +++ b/debian/patches/series |
740 | @@ -24,3 +24,4 @@ lp-1551419-azure-handle-flipped-uuid-endianness.patch |
741 | lp-1553158-bigstep.patch |
742 | lp-1581200-gce-metadatafqdn.patch |
743 | lp-1603222-fix-ephemeral-disk-fstab.patch |
744 | +lp-1781039-gce-datasource-update.patch |
You've added use of 'six' (package python-six) but have not added a
dependency. Normally, we can't add a dependency in a SRU. I think we can
probably get away with it though because there is a direct dependency
chain already in place in trusty through cloud-init.
cloud-init -> python-requests -> python-urllib3 -> python-six
Even so, we will have to add this to debian/control.
There are some other comments inline below.
Feel free to ping me in IRC if you have any questions, or follow up here.
Thanks, and sorry for the slow reply.
Scott
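For context, the six usage that pulls in the new dependency is string-type checking of user/group config, as in the new ug_util.py. A minimal sketch of the pattern (normalize_group_spec is a hypothetical illustration, not code from the patch):

```python
import six


def normalize_group_spec(grp_cfg):
    # Accept either a comma-separated string or a list of group names.
    # six.string_types covers both str and unicode on Python 2, which
    # is why ug_util.py uses it instead of a bare isinstance(x, str).
    if isinstance(grp_cfg, six.string_types):
        grp_cfg = grp_cfg.strip().split(",")
    return [g.strip() for g in grp_cfg]
```

On trusty the module is only available via python-six, hence the debian/control change in the diff above.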