Merge ~shaner/cloud-init:1781039 into cloud-init:ubuntu/trusty
Status: Merged
Merged at revision: cbd5929848ee1cde1da5d3b698add5b32802ec85
Proposed branch: ~shaner/cloud-init:1781039
Merge into: cloud-init:ubuntu/trusty
Diff against target: 737 lines (+704/-0), 4 files modified
- debian/changelog (+9/-0)
- debian/control (+1/-0)
- debian/patches/lp-1781039-gce-datasource-update.patch (+693/-0)
- debian/patches/series (+1/-0)
Reviewer: Scott Moser (Approve)
Review via email: mp+354428@code.launchpad.net
Commit message
Fix SSH key functionality for GCE Datasource
Per documentation at https://wiki.ubuntu.com/GoogleComputeEngineSSHKeys,
SSH keys for both the 'cloudinit' and 'ubuntu' users should be added to the
'ubuntu' user's authorized_keys file. This works in Xenial and higher,
but not in Trusty.
This patch updates the GCE Datasource to function like that of Xenial
and newer.
LP: #1781039
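The key-selection behavior being backported can be sketched as follows. This is a simplified illustration, not the patch itself: the function name is mine, and the real `_parse_public_keys` in the patch additionally drops expired 'google-ssh' keys.

```python
# Simplified sketch of the key filtering the patch introduces (the real
# _parse_public_keys also drops expired 'google-ssh' keys; omitted here).
def parse_public_keys(public_keys_data, default_user=None):
    public_keys = []
    for entry in public_keys_data or []:
        # GCE stores each key as '<user>:<ssh public key>'.
        parts = entry.split(':', 1)
        if len(parts) != 2:
            continue
        user, key = parts
        # Only keys addressed to 'cloudinit' or the default user apply.
        if user in ('cloudinit', default_user):
            public_keys.append(key)
    return public_keys
```

With `default_user='ubuntu'`, entries for 'ubuntu' and 'cloudinit' are kept and entries for any other user are ignored, which is the Xenial behavior this patch brings to Trusty.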
Description of the change
Shane Peters (shaner) wrote:
Scott Moser (smoser) wrote:
Scott Moser (smoser) wrote:
Your changes here turn on stricter checking of Google Compute via
platform_check=True in read_md. That *should* be safe, but I just
want to call it out here. We fixed one issue related to it in bug 1674861.
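That stricter check comes down to the two DMI signals visible in the patch below. This sketch factors them into a pure function for illustration; the real code reads the values via util.read_dmi_data rather than taking parameters.

```python
# Illustrative sketch of the platform check (the real patch reads these
# values with util.read_dmi_data; here they are parameters for clarity).
def platform_reports_gce(product_name, serial):
    # Primary signal: DMI system-product-name.
    if product_name == "Google Compute Engine":
        return True
    # system-product-name is not always guaranteed (LP: #1674861), so
    # fall back to the "GoogleCloud-" serial-number prefix.
    return bool(serial) and serial.startswith("GoogleCloud-")
```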
Also, we sometimes see issues on upgrade and reboot of an instance.
These occur when the datasource expects attributes on the object that
has been re-loaded from the pickle which are not actually present in
the reloaded object.
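A minimal sketch of that hazard: unpickling bypasses __init__, so an attribute introduced by an upgraded version is simply absent on objects pickled by the old version and must be accessed defensively. The class and attribute names here are illustrative stand-ins, not the actual datasource code.

```python
import pickle

# Hypothetical stand-in for a datasource class; 'default_user' plays the
# role of an attribute added by the upgraded version.
class DataSource:
    def __init__(self):
        self.metadata = {}
        self.default_user = None

# Simulate an object pickled by the OLD version, which never set the
# new attribute:
old_obj = DataSource()
del old_obj.default_user
blob = pickle.dumps(old_obj)

# Unpickling does NOT re-run __init__, so the attribute stays missing.
reloaded = pickle.loads(blob)

# Upgraded code must therefore not assume the attribute exists:
user = getattr(reloaded, 'default_user', None)
```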
So... to test for that, you need to boot with an old image, install the
updated version, and then reboot. Please make sure that you've done that.
Also, please test this on at least one other platform (lxd should
be easy enough) just to make sure we didn't regress anyone else.
But other than those requests I think this looks good.
Scott
Scott Moser (smoser) wrote:
Just to be clear, I'm happy with this if you acknowledge testing:
a.) on one other platform (lxd)
b.) boot old image, upgrade, reboot in GCE
If you acknowledge testing that, then I will approve and I can sponsor upload for you.
Shane Peters (shaner) wrote:
Thanks for the review, Scott. I did notice the stricter checking and considered leaving that bit out, but as you said, it shouldn't cause any issues. I can still remove it if you'd like.
I ran the upgrade/reboot test as you suggested and things work as expected. Is there anything specific you'd like to see? Here's the test I ran and the resulting cloud-init.log [0].
Also, I tested on lxd with some custom cloud-init config set in a new profile, which checked out okay as well. Logs for that are at [1].
Let me know if there's anything else.
Shane
[0] https:/
[1] https:/
Preview Diff
1 | diff --git a/debian/changelog b/debian/changelog |
2 | index 4f2c89e..ca72d76 100644 |
3 | --- a/debian/changelog |
4 | +++ b/debian/changelog |
5 | @@ -1,3 +1,12 @@ |
6 | +cloud-init (0.7.5-0ubuntu1.23) trusty; urgency=medium |
7 | + |
8 | + * GCE |
9 | + - d/p/lp-1781039-gce-datasource-update.patch (LP: #1781039) |
10 | + Backport GCE datasource functionality from Xenial |
11 | + - debian/control added python-six dependency. |
12 | + |
13 | + -- Shane Peters <shane.peters@canonical.com> Tue, 06 Sep 2018 17:57:23 -0400 |
14 | + |
15 | cloud-init (0.7.5-0ubuntu1.22) trusty; urgency=medium |
16 | |
17 | * debian/update-grub-legacy-ec2: |
18 | diff --git a/debian/control b/debian/control |
19 | index d4a6e18..b66bcb2 100644 |
20 | --- a/debian/control |
21 | +++ b/debian/control |
22 | @@ -15,6 +15,7 @@ Build-Depends: cdbs, |
23 | python-mocker, |
24 | python-nose, |
25 | python-oauth, |
26 | + python-six, |
27 | python-prettytable, |
28 | python-setuptools, |
29 | python-requests, |
30 | diff --git a/debian/patches/lp-1781039-gce-datasource-update.patch b/debian/patches/lp-1781039-gce-datasource-update.patch |
31 | new file mode 100644 |
32 | index 0000000..3451863 |
33 | --- /dev/null |
34 | +++ b/debian/patches/lp-1781039-gce-datasource-update.patch |
35 | @@ -0,0 +1,693 @@ |
36 | +Author: Shane Peters <shane.peters@canonical.com> |
37 | +Bug: https://bugs.launchpad.net/cloud-init/+bug/1781039 |
38 | +Description: Update GCE datasource |
39 | + Per documentation at https://wiki.ubuntu.com/GoogleComputeEngineSSHKeys |
40 | + ssh keys for cloudinit and ubuntu users should both be added to the |
41 | + 'ubuntu' users authorized_keys file. This works in Xenial and higher, |
42 | + but not in Trusty. |
43 | +--- /dev/null |
44 | ++++ b/cloudinit/distros/ug_util.py |
45 | +@@ -0,0 +1,289 @@ |
46 | ++# Copyright (C) 2012 Canonical Ltd. |
47 | ++# Copyright (C) 2012, 2013 Hewlett-Packard Development Company, L.P. |
48 | ++# Copyright (C) 2012 Yahoo! Inc. |
49 | ++# |
50 | ++# Author: Scott Moser <scott.moser@canonical.com> |
51 | ++# Author: Juerg Haefliger <juerg.haefliger@hp.com> |
52 | ++# Author: Joshua Harlow <harlowja@yahoo-inc.com> |
53 | ++# Author: Ben Howard <ben.howard@canonical.com> |
54 | ++# |
55 | ++# This file is part of cloud-init. See LICENSE file for license information. |
56 | ++ |
57 | ++import six |
58 | ++ |
59 | ++from cloudinit import log as logging |
60 | ++from cloudinit import type_utils |
61 | ++from cloudinit import util |
62 | ++ |
63 | ++LOG = logging.getLogger(__name__) |
64 | ++ |
65 | ++ |
66 | ++# Normalizes a input group configuration |
67 | ++# which can be a comma seperated list of |
68 | ++# group names, or a list of group names |
69 | ++# or a python dictionary of group names |
70 | ++# to a list of members of that group. |
71 | ++# |
72 | ++# The output is a dictionary of group |
73 | ++# names => members of that group which |
74 | ++# is the standard form used in the rest |
75 | ++# of cloud-init |
76 | ++def _normalize_groups(grp_cfg): |
77 | ++ if isinstance(grp_cfg, six.string_types): |
78 | ++ grp_cfg = grp_cfg.strip().split(",") |
79 | ++ if isinstance(grp_cfg, list): |
80 | ++ c_grp_cfg = {} |
81 | ++ for i in grp_cfg: |
82 | ++ if isinstance(i, dict): |
83 | ++ for k, v in i.items(): |
84 | ++ if k not in c_grp_cfg: |
85 | ++ if isinstance(v, list): |
86 | ++ c_grp_cfg[k] = list(v) |
87 | ++ elif isinstance(v, six.string_types): |
88 | ++ c_grp_cfg[k] = [v] |
89 | ++ else: |
90 | ++ raise TypeError("Bad group member type %s" % |
91 | ++ type_utils.obj_name(v)) |
92 | ++ else: |
93 | ++ if isinstance(v, list): |
94 | ++ c_grp_cfg[k].extend(v) |
95 | ++ elif isinstance(v, six.string_types): |
96 | ++ c_grp_cfg[k].append(v) |
97 | ++ else: |
98 | ++ raise TypeError("Bad group member type %s" % |
99 | ++ type_utils.obj_name(v)) |
100 | ++ elif isinstance(i, six.string_types): |
101 | ++ if i not in c_grp_cfg: |
102 | ++ c_grp_cfg[i] = [] |
103 | ++ else: |
104 | ++ raise TypeError("Unknown group name type %s" % |
105 | ++ type_utils.obj_name(i)) |
106 | ++ grp_cfg = c_grp_cfg |
107 | ++ groups = {} |
108 | ++ if isinstance(grp_cfg, dict): |
109 | ++ for (grp_name, grp_members) in grp_cfg.items(): |
110 | ++ groups[grp_name] = util.uniq_merge_sorted(grp_members) |
111 | ++ else: |
112 | ++ raise TypeError(("Group config must be list, dict " |
113 | ++ " or string types only and not %s") % |
114 | ++ type_utils.obj_name(grp_cfg)) |
115 | ++ return groups |
116 | ++ |
117 | ++ |
118 | ++# Normalizes a input group configuration |
119 | ++# which can be a comma seperated list of |
120 | ++# user names, or a list of string user names |
121 | ++# or a list of dictionaries with components |
122 | ++# that define the user config + 'name' (if |
123 | ++# a 'name' field does not exist then the |
124 | ++# default user is assumed to 'own' that |
125 | ++# configuration. |
126 | ++# |
127 | ++# The output is a dictionary of user |
128 | ++# names => user config which is the standard |
129 | ++# form used in the rest of cloud-init. Note |
130 | ++# the default user will have a special config |
131 | ++# entry 'default' which will be marked as true |
132 | ++# all other users will be marked as false. |
133 | ++def _normalize_users(u_cfg, def_user_cfg=None): |
134 | ++ if isinstance(u_cfg, dict): |
135 | ++ ad_ucfg = [] |
136 | ++ for (k, v) in u_cfg.items(): |
137 | ++ if isinstance(v, (bool, int, float) + six.string_types): |
138 | ++ if util.is_true(v): |
139 | ++ ad_ucfg.append(str(k)) |
140 | ++ elif isinstance(v, dict): |
141 | ++ v['name'] = k |
142 | ++ ad_ucfg.append(v) |
143 | ++ else: |
144 | ++ raise TypeError(("Unmappable user value type %s" |
145 | ++ " for key %s") % (type_utils.obj_name(v), k)) |
146 | ++ u_cfg = ad_ucfg |
147 | ++ elif isinstance(u_cfg, six.string_types): |
148 | ++ u_cfg = util.uniq_merge_sorted(u_cfg) |
149 | ++ |
150 | ++ users = {} |
151 | ++ for user_config in u_cfg: |
152 | ++ if isinstance(user_config, (list,) + six.string_types): |
153 | ++ for u in util.uniq_merge(user_config): |
154 | ++ if u and u not in users: |
155 | ++ users[u] = {} |
156 | ++ elif isinstance(user_config, dict): |
157 | ++ if 'name' in user_config: |
158 | ++ n = user_config.pop('name') |
159 | ++ prev_config = users.get(n) or {} |
160 | ++ users[n] = util.mergemanydict([prev_config, |
161 | ++ user_config]) |
162 | ++ else: |
163 | ++ # Assume the default user then |
164 | ++ prev_config = users.get('default') or {} |
165 | ++ users['default'] = util.mergemanydict([prev_config, |
166 | ++ user_config]) |
167 | ++ else: |
168 | ++ raise TypeError(("User config must be dictionary/list " |
169 | ++ " or string types only and not %s") % |
170 | ++ type_utils.obj_name(user_config)) |
171 | ++ |
172 | ++ # Ensure user options are in the right python friendly format |
173 | ++ if users: |
174 | ++ c_users = {} |
175 | ++ for (uname, uconfig) in users.items(): |
176 | ++ c_uconfig = {} |
177 | ++ for (k, v) in uconfig.items(): |
178 | ++ k = k.replace('-', '_').strip() |
179 | ++ if k: |
180 | ++ c_uconfig[k] = v |
181 | ++ c_users[uname] = c_uconfig |
182 | ++ users = c_users |
183 | ++ |
184 | ++ # Fixup the default user into the real |
185 | ++ # default user name and replace it... |
186 | ++ def_user = None |
187 | ++ if users and 'default' in users: |
188 | ++ def_config = users.pop('default') |
189 | ++ if def_user_cfg: |
190 | ++ # Pickup what the default 'real name' is |
191 | ++ # and any groups that are provided by the |
192 | ++ # default config |
193 | ++ def_user_cfg = def_user_cfg.copy() |
194 | ++ def_user = def_user_cfg.pop('name') |
195 | ++ def_groups = def_user_cfg.pop('groups', []) |
196 | ++ # Pickup any config + groups for that user name |
197 | ++ # that we may have previously extracted |
198 | ++ parsed_config = users.pop(def_user, {}) |
199 | ++ parsed_groups = parsed_config.get('groups', []) |
200 | ++ # Now merge our extracted groups with |
201 | ++ # anything the default config provided |
202 | ++ users_groups = util.uniq_merge_sorted(parsed_groups, def_groups) |
203 | ++ parsed_config['groups'] = ",".join(users_groups) |
204 | ++ # The real config for the default user is the |
205 | ++ # combination of the default user config provided |
206 | ++ # by the distro, the default user config provided |
207 | ++ # by the above merging for the user 'default' and |
208 | ++ # then the parsed config from the user's 'real name' |
209 | ++ # which does not have to be 'default' (but could be) |
210 | ++ users[def_user] = util.mergemanydict([def_user_cfg, |
211 | ++ def_config, |
212 | ++ parsed_config]) |
213 | ++ |
214 | ++ # Ensure that only the default user that we |
215 | ++ # found (if any) is actually marked as being |
216 | ++ # the default user |
217 | ++ if users: |
218 | ++ for (uname, uconfig) in users.items(): |
219 | ++ if def_user and uname == def_user: |
220 | ++ uconfig['default'] = True |
221 | ++ else: |
222 | ++ uconfig['default'] = False |
223 | ++ |
224 | ++ return users |
225 | ++ |
226 | ++ |
227 | ++# Normalizes a set of user/users and group |
228 | ++# dictionary configuration into a useable |
229 | ++# format that the rest of cloud-init can |
230 | ++# understand using the default user |
231 | ++# provided by the input distrobution (if any) |
232 | ++# to allow for mapping of the 'default' user. |
233 | ++# |
234 | ++# Output is a dictionary of group names -> [member] (list) |
235 | ++# and a dictionary of user names -> user configuration (dict) |
236 | ++# |
237 | ++# If 'user' exists it will override |
238 | ++# the 'users'[0] entry (if a list) otherwise it will |
239 | ++# just become an entry in the returned dictionary (no override) |
240 | ++def normalize_users_groups(cfg, distro): |
241 | ++ if not cfg: |
242 | ++ cfg = {} |
243 | ++ |
244 | ++ users = {} |
245 | ++ groups = {} |
246 | ++ if 'groups' in cfg: |
247 | ++ groups = _normalize_groups(cfg['groups']) |
248 | ++ |
249 | ++ # Handle the previous style of doing this where the first user |
250 | ++ # overrides the concept of the default user if provided in the user: XYZ |
251 | ++ # format. |
252 | ++ old_user = {} |
253 | ++ if 'user' in cfg and cfg['user']: |
254 | ++ old_user = cfg['user'] |
255 | ++ # Translate it into the format that is more useful |
256 | ++ # going forward |
257 | ++ if isinstance(old_user, six.string_types): |
258 | ++ old_user = { |
259 | ++ 'name': old_user, |
260 | ++ } |
261 | ++ if not isinstance(old_user, dict): |
262 | ++ LOG.warning(("Format for 'user' key must be a string or dictionary" |
263 | ++ " and not %s"), type_utils.obj_name(old_user)) |
264 | ++ old_user = {} |
265 | ++ |
266 | ++ # If no old user format, then assume the distro |
267 | ++ # provides what the 'default' user maps to, but notice |
268 | ++ # that if this is provided, we won't automatically inject |
269 | ++ # a 'default' user into the users list, while if a old user |
270 | ++ # format is provided we will. |
271 | ++ distro_user_config = {} |
272 | ++ try: |
273 | ++ distro_user_config = distro.get_default_user() |
274 | ++ except NotImplementedError: |
275 | ++ LOG.warning(("Distro has not implemented default user " |
276 | ++ "access. No distribution provided default user" |
277 | ++ " will be normalized.")) |
278 | ++ |
279 | ++ # Merge the old user (which may just be an empty dict when not |
280 | ++ # present with the distro provided default user configuration so |
281 | ++ # that the old user style picks up all the distribution specific |
282 | ++ # attributes (if any) |
283 | ++ default_user_config = util.mergemanydict([old_user, distro_user_config]) |
284 | ++ |
285 | ++ base_users = cfg.get('users', []) |
286 | ++ if not isinstance(base_users, (list, dict) + six.string_types): |
287 | ++ LOG.warning(("Format for 'users' key must be a comma separated string" |
288 | ++ " or a dictionary or a list and not %s"), |
289 | ++ type_utils.obj_name(base_users)) |
290 | ++ base_users = [] |
291 | ++ |
292 | ++ if old_user: |
293 | ++ # Ensure that when user: is provided that this user |
294 | ++ # always gets added (as the default user) |
295 | ++ if isinstance(base_users, list): |
296 | ++ # Just add it on at the end... |
297 | ++ base_users.append({'name': 'default'}) |
298 | ++ elif isinstance(base_users, dict): |
299 | ++ base_users['default'] = dict(base_users).get('default', True) |
300 | ++ elif isinstance(base_users, six.string_types): |
301 | ++ # Just append it on to be re-parsed later |
302 | ++ base_users += ",default" |
303 | ++ |
304 | ++ users = _normalize_users(base_users, default_user_config) |
305 | ++ return (users, groups) |
306 | ++ |
307 | ++ |
308 | ++# Given a user dictionary config it will |
309 | ++# extract the default user name and user config |
310 | ++# from that list and return that tuple or |
311 | ++# return (None, None) if no default user is |
312 | ++# found in the given input |
313 | ++def extract_default(users, default_name=None, default_config=None): |
314 | ++ if not users: |
315 | ++ users = {} |
316 | ++ |
317 | ++ def safe_find(entry): |
318 | ++ config = entry[1] |
319 | ++ if not config or 'default' not in config: |
320 | ++ return False |
321 | ++ else: |
322 | ++ return config['default'] |
323 | ++ |
324 | ++ tmp_users = users.items() |
325 | ++ tmp_users = dict(filter(safe_find, tmp_users)) |
326 | ++ if not tmp_users: |
327 | ++ return (default_name, default_config) |
328 | ++ else: |
329 | ++ name = list(tmp_users)[0] |
330 | ++ config = tmp_users[name] |
331 | ++ config.pop('default', None) |
332 | ++ return (name, config) |
333 | ++ |
334 | ++# vi: ts=4 expandtab |
335 | +--- a/cloudinit/sources/DataSourceGCE.py |
336 | ++++ b/cloudinit/sources/DataSourceGCE.py |
337 | +@@ -1,142 +1,99 @@ |
338 | +-# vi: ts=4 expandtab |
339 | +-# |
340 | +-# Author: Vaidas Jablonskis <jablonskis@gmail.com> |
341 | +-# |
342 | +-# This program is free software: you can redistribute it and/or modify |
343 | +-# it under the terms of the GNU General Public License version 3, as |
344 | +-# published by the Free Software Foundation. |
345 | ++# Author: Vaidas Jablonskis <jablonskis@gmail.com> |
346 | + # |
347 | +-# This program is distributed in the hope that it will be useful, |
348 | +-# but WITHOUT ANY WARRANTY; without even the implied warranty of |
349 | +-# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the |
350 | +-# GNU General Public License for more details. |
351 | +-# |
352 | +-# You should have received a copy of the GNU General Public License |
353 | +-# along with this program. If not, see <http://www.gnu.org/licenses/>. |
354 | ++# This file is part of cloud-init. See LICENSE file for license information. |
355 | + |
356 | ++import datetime |
357 | ++import json |
358 | + |
359 | + from base64 import b64decode |
360 | + |
361 | ++from cloudinit.distros import ug_util |
362 | + from cloudinit import log as logging |
363 | +-from cloudinit import util |
364 | + from cloudinit import sources |
365 | + from cloudinit import url_helper |
366 | ++from cloudinit import util |
367 | + |
368 | + LOG = logging.getLogger(__name__) |
369 | + |
370 | +-BUILTIN_DS_CONFIG = { |
371 | +- 'metadata_url': 'http://metadata.google.internal/computeMetadata/v1/' |
372 | +-} |
373 | ++MD_V1_URL = 'http://metadata.google.internal/computeMetadata/v1/' |
374 | ++BUILTIN_DS_CONFIG = {'metadata_url': MD_V1_URL} |
375 | + REQUIRED_FIELDS = ('instance-id', 'availability-zone', 'local-hostname') |
376 | + |
377 | + |
378 | ++class GoogleMetadataFetcher(object): |
379 | ++ headers = {'Metadata-Flavor': 'Google'} |
380 | ++ |
381 | ++ def __init__(self, metadata_address): |
382 | ++ self.metadata_address = metadata_address |
383 | ++ |
384 | ++ def get_value(self, path, is_text, is_recursive=False): |
385 | ++ value = None |
386 | ++ try: |
387 | ++ url = self.metadata_address + path |
388 | ++ if is_recursive: |
389 | ++ url += '/?recursive=True' |
390 | ++ resp = url_helper.readurl(url=url, headers=self.headers) |
391 | ++ except url_helper.UrlError as exc: |
392 | ++ msg = "url %s raised exception %s" |
393 | ++ LOG.debug(msg, path, exc) |
394 | ++ else: |
395 | ++ if resp.code == 200: |
396 | ++ if is_text: |
397 | ++ value = resp.contents.decode('utf-8') |
398 | ++ else: |
399 | ++ value = resp.contents |
400 | ++ else: |
401 | ++ LOG.debug("url %s returned code %s", path, resp.code) |
402 | ++ return value |
403 | ++ |
404 | ++ |
405 | + class DataSourceGCE(sources.DataSource): |
406 | ++ |
407 | ++ dsname = 'GCE' |
408 | ++ |
409 | + def __init__(self, sys_cfg, distro, paths): |
410 | + sources.DataSource.__init__(self, sys_cfg, distro, paths) |
411 | ++ self.default_user = None |
412 | ++ if distro: |
413 | ++ (users, _groups) = ug_util.normalize_users_groups(sys_cfg, distro) |
414 | ++ (self.default_user, _user_config) = ug_util.extract_default(users) |
415 | + self.metadata = dict() |
416 | + self.ds_cfg = util.mergemanydict([ |
417 | + util.get_cfg_by_path(sys_cfg, ["datasource", "GCE"], {}), |
418 | + BUILTIN_DS_CONFIG]) |
419 | + self.metadata_address = self.ds_cfg['metadata_url'] |
420 | + |
421 | +- # GCE takes sshKeys attribute in the format of '<user>:<public_key>' |
422 | +- # so we have to trim each key to remove the username part |
423 | +- def _trim_key(self, public_key): |
424 | +- try: |
425 | +- index = public_key.index(':') |
426 | +- if index > 0: |
427 | +- return public_key[(index + 1):] |
428 | +- except: |
429 | +- return public_key |
430 | +- |
431 | + def get_data(self): |
432 | +- # GCE metadata server requires a custom header since v1 |
433 | +- headers = {'X-Google-Metadata-Request': True} |
434 | +- |
435 | +- # url_map: (our-key, path, required) |
436 | +- url_map = [ |
437 | +- ('instance-id', 'instance/id', True), |
438 | +- ('availability-zone', 'instance/zone', True), |
439 | +- ('local-hostname', 'instance/hostname', True), |
440 | +- ('public-keys', 'project/attributes/sshKeys', False), |
441 | +- ('user-data', 'instance/attributes/user-data', False), |
442 | +- ('user-data-encoding', 'instance/attributes/user-data-encoding', |
443 | +- False), |
444 | +- ] |
445 | +- |
446 | +- # if we cannot resolve the metadata server, then no point in trying |
447 | +- if not util.is_resolvable_url(self.metadata_address): |
448 | +- LOG.debug("%s is not resolvable", self.metadata_address) |
449 | +- return False |
450 | +- |
451 | +- # iterate over url_map keys to get metadata items |
452 | +- found = False |
453 | +- for (mkey, path, required) in url_map: |
454 | +- try: |
455 | +- resp = url_helper.readurl(url=self.metadata_address + path, |
456 | +- headers=headers) |
457 | +- if resp.code == 200: |
458 | +- found = True |
459 | +- self.metadata[mkey] = resp.contents |
460 | +- else: |
461 | +- if required: |
462 | +- msg = "required url %s returned code %s. not GCE" |
463 | +- if not found: |
464 | +- LOG.debug(msg, path, resp.code) |
465 | +- else: |
466 | +- LOG.warn(msg, path, resp.code) |
467 | +- return False |
468 | +- else: |
469 | +- self.metadata[mkey] = None |
470 | +- except url_helper.UrlError as e: |
471 | +- if required: |
472 | +- msg = "required url %s raised exception %s. not GCE" |
473 | +- if not found: |
474 | +- LOG.debug(msg, path, e) |
475 | +- else: |
476 | +- LOG.warn(msg, path, e) |
477 | +- return False |
478 | +- msg = "Failed to get %s metadata item: %s." |
479 | +- LOG.debug(msg, path, e) |
480 | +- |
481 | +- self.metadata[mkey] = None |
482 | +- |
483 | +- if self.metadata['public-keys']: |
484 | +- lines = self.metadata['public-keys'].splitlines() |
485 | +- self.metadata['public-keys'] = [self._trim_key(k) for k in lines] |
486 | +- |
487 | +- if self.metadata['availability-zone']: |
488 | +- self.metadata['availability-zone'] = self.metadata[ |
489 | +- 'availability-zone'].split('/')[-1] |
490 | +- |
491 | +- encoding = self.metadata.get('user-data-encoding') |
492 | +- if encoding: |
493 | +- if encoding == 'base64': |
494 | +- self.metadata['user-data'] = b64decode( |
495 | +- self.metadata['user-data']) |
496 | ++ ret = util.log_time( |
497 | ++ LOG.debug, 'Crawl of GCE metadata service', |
498 | ++ read_md, kwargs={'address': self.metadata_address}) |
499 | ++ |
500 | ++ if not ret['success']: |
501 | ++ if ret['platform_reports_gce']: |
502 | ++ LOG.warning(ret['reason']) |
503 | + else: |
504 | +- LOG.warn('unknown user-data-encoding: %s, ignoring', encoding) |
505 | +- |
506 | +- return found |
507 | ++ LOG.debug(ret['reason']) |
508 | ++ return False |
509 | ++ self.metadata = ret['meta-data'] |
510 | ++ self.userdata_raw = ret['user-data'] |
511 | ++ return True |
512 | + |
513 | + @property |
514 | + def launch_index(self): |
515 | +- # GCE does not provide lauch_index property |
516 | ++ # GCE does not provide lauch_index property. |
517 | + return None |
518 | + |
519 | + def get_instance_id(self): |
520 | + return self.metadata['instance-id'] |
521 | + |
522 | + def get_public_ssh_keys(self): |
523 | +- return self.metadata['public-keys'] |
524 | ++ public_keys_data = self.metadata['public-keys-data'] |
525 | ++ return _parse_public_keys(public_keys_data, self.default_user) |
526 | + |
527 | +- def get_hostname(self, fqdn=False, _resolve_ip=False): |
528 | +- # GCE has long FDQN's and has asked for short hostnames |
529 | ++ def get_hostname(self, fqdn=False, resolve_ip=False, metadata_only=False): |
530 | ++ # GCE has long FDQN's and has asked for short hostnames. |
531 | + return self.metadata['local-hostname'].split('.')[0] |
532 | + |
533 | +- def get_userdata_raw(self): |
534 | +- return self.metadata['user-data'] |
535 | +- |
536 | + @property |
537 | + def availability_zone(self): |
538 | + return self.metadata['availability-zone'] |
539 | +@@ -145,12 +102,187 @@ |
540 | + def region(self): |
541 | + return self.availability_zone.rsplit('-', 1)[0] |
542 | + |
543 | +-# Used to match classes to dependencies |
544 | ++ |
545 | ++def _has_expired(public_key): |
546 | ++ # Check whether an SSH key is expired. Public key input is a single SSH |
547 | ++ # public key in the GCE specific key format documented here: |
548 | ++ # https://cloud.google.com/compute/docs/instances/adding-removing-ssh-keys#sshkeyformat |
549 | ++ try: |
550 | ++ # Check for the Google-specific schema identifier. |
551 | ++ schema, json_str = public_key.split(None, 3)[2:] |
552 | ++ except (ValueError, AttributeError): |
553 | ++ return False |
554 | ++ |
555 | ++ # Do not expire keys if they do not have the expected schema identifier. |
556 | ++ if schema != 'google-ssh': |
557 | ++ return False |
558 | ++ |
559 | ++ try: |
560 | ++ json_obj = json.loads(json_str) |
561 | ++ except ValueError: |
562 | ++ return False |
563 | ++ |
564 | ++ # Do not expire keys if there is no expriation timestamp. |
565 | ++ if 'expireOn' not in json_obj: |
566 | ++ return False |
567 | ++ |
568 | ++ expire_str = json_obj['expireOn'] |
569 | ++ format_str = '%Y-%m-%dT%H:%M:%S+0000' |
570 | ++ try: |
571 | ++ expire_time = datetime.datetime.strptime(expire_str, format_str) |
572 | ++ except ValueError: |
573 | ++ return False |
574 | ++ |
575 | ++ # Expire the key if and only if we have exceeded the expiration timestamp. |
576 | ++ return datetime.datetime.utcnow() > expire_time |
577 | ++ |
578 | ++ |
579 | ++def _parse_public_keys(public_keys_data, default_user=None): |
580 | ++ # Parse the SSH key data for the default user account. Public keys input is |
581 | ++ # a list containing SSH public keys in the GCE specific key format |
582 | ++ # documented here: |
583 | ++ # https://cloud.google.com/compute/docs/instances/adding-removing-ssh-keys#sshkeyformat |
584 | ++ public_keys = [] |
585 | ++ if not public_keys_data: |
586 | ++ return public_keys |
587 | ++ for public_key in public_keys_data: |
588 | ++ if not public_key or not all(ord(c) < 128 for c in public_key): |
589 | ++ continue |
590 | ++ split_public_key = public_key.split(':', 1) |
591 | ++ if len(split_public_key) != 2: |
592 | ++ continue |
593 | ++ user, key = split_public_key |
594 | ++ if user in ('cloudinit', default_user) and not _has_expired(key): |
595 | ++ public_keys.append(key) |
596 | ++ return public_keys |
597 | ++ |
598 | ++ |
599 | ++def read_md(address=None, platform_check=True): |
600 | ++ |
601 | ++ if address is None: |
602 | ++ address = MD_V1_URL |
603 | ++ |
604 | ++ ret = {'meta-data': None, 'user-data': None, |
605 | ++ 'success': False, 'reason': None} |
606 | ++ ret['platform_reports_gce'] = platform_reports_gce() |
607 | ++ |
608 | ++ if platform_check and not ret['platform_reports_gce']: |
609 | ++ ret['reason'] = "Not running on GCE." |
610 | ++ return ret |
611 | ++ |
612 | ++ # If we cannot resolve the metadata server, then no point in trying. |
613 | ++ if not util.is_resolvable_url(address): |
614 | ++ LOG.debug("%s is not resolvable", address) |
615 | ++ ret['reason'] = 'address "%s" is not resolvable' % address |
616 | ++ return ret |
617 | ++ |
618 | ++ # url_map: (our-key, path, required, is_text, is_recursive) |
619 | ++ url_map = [ |
620 | ++ ('instance-id', ('instance/id',), True, True, False), |
621 | ++ ('availability-zone', ('instance/zone',), True, True, False), |
622 | ++ ('local-hostname', ('instance/hostname',), True, True, False), |
623 | ++ ('instance-data', ('instance/attributes',), False, False, True), |
624 | ++ ('project-data', ('project/attributes',), False, False, True), |
625 | ++ ] |
626 | ++ |
627 | ++ metadata_fetcher = GoogleMetadataFetcher(address) |
628 | ++ md = {} |
629 | ++ # Iterate over url_map keys to get metadata items. |
630 | ++ for (mkey, paths, required, is_text, is_recursive) in url_map: |
631 | ++ value = None |
632 | ++ for path in paths: |
633 | ++ new_value = metadata_fetcher.get_value(path, is_text, is_recursive) |
634 | ++ if new_value is not None: |
635 | ++ value = new_value |
636 | ++ if required and value is None: |
637 | ++ msg = "required key %s returned nothing. not GCE" |
638 | ++ ret['reason'] = msg % mkey |
639 | ++ return ret |
640 | ++ md[mkey] = value |
641 | ++ |
642 | ++ instance_data = json.loads(md['instance-data'] or '{}') |
643 | ++ project_data = json.loads(md['project-data'] or '{}') |
644 | ++ valid_keys = [instance_data.get('sshKeys'), instance_data.get('ssh-keys')] |
645 | ++ block_project = instance_data.get('block-project-ssh-keys', '').lower() |
646 | ++ if block_project != 'true' and not instance_data.get('sshKeys'): |
647 | ++ valid_keys.append(project_data.get('ssh-keys')) |
648 | ++ valid_keys.append(project_data.get('sshKeys')) |
649 | ++ public_keys_data = '\n'.join([key for key in valid_keys if key]) |
650 | ++ md['public-keys-data'] = public_keys_data.splitlines() |
651 | ++ |
652 | ++ if md['availability-zone']: |
653 | ++ md['availability-zone'] = md['availability-zone'].split('/')[-1] |
654 | ++ |
655 | ++ if 'user-data' in instance_data: |
656 | ++ # instance_data was json, so values are all utf-8 strings. |
657 | ++ ud = instance_data['user-data'].encode("utf-8") |
658 | ++ encoding = instance_data.get('user-data-encoding') |
659 | ++ if encoding == 'base64': |
660 | ++ ud = b64decode(ud) |
661 | ++ elif encoding: |
662 | ++ LOG.warning('unknown user-data-encoding: %s, ignoring', encoding) |
663 | ++ ret['user-data'] = ud |
664 | ++ |
665 | ++ ret['meta-data'] = md |
666 | ++ ret['success'] = True |
667 | ++ |
668 | ++ return ret |
669 | ++ |
670 | ++ |
671 | ++def platform_reports_gce(): |
672 | ++ pname = util.read_dmi_data('system-product-name') or "N/A" |
673 | ++ if pname == "Google Compute Engine": |
674 | ++ return True |
675 | ++ |
676 | ++ # system-product-name is not always guaranteed (LP: #1674861) |
677 | ++ serial = util.read_dmi_data('system-serial-number') or "N/A" |
678 | ++ if serial.startswith("GoogleCloud-"): |
679 | ++ return True |
680 | ++ |
681 | ++ LOG.debug("Not running on google cloud. product-name=%s serial=%s", |
682 | ++ pname, serial) |
683 | ++ return False |
684 | ++ |
685 | ++ |
686 | ++# Used to match classes to dependencies. |
687 | + datasources = [ |
688 | + (DataSourceGCE, (sources.DEP_FILESYSTEM, sources.DEP_NETWORK)), |
689 | + ] |
690 | + |
691 | + |
692 | +-# Return a list of data sources that match this set of dependencies |
693 | ++# Return a list of data sources that match this set of dependencies. |
694 | + def get_datasource_list(depends): |
695 | + return sources.list_from_depends(depends, datasources) |
696 | ++ |
697 | ++ |
698 | ++if __name__ == "__main__": |
699 | ++ import argparse |
700 | ++ import sys |
701 | ++ |
702 | ++ from base64 import b64encode |
703 | ++ |
704 | ++ parser = argparse.ArgumentParser(description='Query GCE Metadata Service') |
705 | ++ parser.add_argument("--endpoint", metavar="URL", |
706 | ++ help="The url of the metadata service.", |
707 | ++ default=MD_V1_URL) |
708 | ++ parser.add_argument("--no-platform-check", dest="platform_check", |
709 | ++ help="Ignore smbios platform check", |
710 | ++ action='store_false', default=True) |
711 | ++ args = parser.parse_args() |
712 | ++ data = read_md(address=args.endpoint, platform_check=args.platform_check) |
713 | ++ if 'user-data' in data: |
714 | ++ # user-data is bytes not string like other things. Handle it specially. |
715 | ++ # If it can be represented as utf-8 then do so. Otherwise print base64 |
716 | ++ # encoded value in the key user-data-b64. |
717 | ++ try: |
718 | ++ data['user-data'] = data['user-data'].decode() |
719 | ++ except UnicodeDecodeError: |
720 | ++ sys.stderr.write("User-data cannot be decoded. " |
721 | ++ "Writing as base64\n") |
722 | ++ del data['user-data'] |
723 | ++ # b64encode returns a bytes value. Decode to get the string. |
724 | ++ data['user-data-b64'] = b64encode(data['user-data']).decode() |
725 | ++ |
726 | ++ print(json.dumps(data, indent=1, sort_keys=True, separators=(',', ': '))) |
727 | ++ |
728 | ++# vi: ts=4 expandtab |
729 | diff --git a/debian/patches/series b/debian/patches/series |
730 | index d20f78b..21d6b4f 100644 |
731 | --- a/debian/patches/series |
732 | +++ b/debian/patches/series |
733 | @@ -24,3 +24,4 @@ lp-1551419-azure-handle-flipped-uuid-endianness.patch |
734 | lp-1553158-bigstep.patch |
735 | lp-1581200-gce-metadatafqdn.patch |
736 | lp-1603222-fix-ephemeral-disk-fstab.patch |
737 | +lp-1781039-gce-datasource-update.patch |
After upgrading cloud-init on an instance and creating a new image from it, I tested using different types of user-data (ssh-keys, cloud-config) with the commands below. Both work fine. Tests without user-data work as well.
gcloud compute instances create trusty-1 --image new-cloud-init --image-project $PROJECT --metadata-from-file=ssh-keys=my-ssh-keys --metadata=block-project-ssh-keys=True
gcloud compute instances create trusty-2 --image new-cloud-init --image-project $PROJECT --metadata-from-file=user-data=my-user-data --metadata=block-project-ssh-keys=True
The cloud config I used can be found at https://pastebin.ubuntu.com/p/WnhPtJRsXn/
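The block-project-ssh-keys interaction these commands exercise can be sketched as follows, mirroring the read_md logic in the patch above. The helper name is mine; the real code goes on to join the selected sources and split them into individual key lines the same way.

```python
# Sketch of how the backported datasource decides which SSH key metadata
# sources to honor (mirrors read_md in the patch; helper name is mine).
def select_key_sources(instance_data, project_data):
    # Instance-level keys are always candidates.
    valid = [instance_data.get('sshKeys'), instance_data.get('ssh-keys')]
    block = instance_data.get('block-project-ssh-keys', '').lower()
    # Project-level keys are only consulted when not blocked and when no
    # instance-level sshKeys override is present.
    if block != 'true' and not instance_data.get('sshKeys'):
        valid.append(project_data.get('ssh-keys'))
        valid.append(project_data.get('sshKeys'))
    return '\n'.join(k for k in valid if k).splitlines()
```

So with --metadata=block-project-ssh-keys=True, project-wide keys are ignored and only the instance's own ssh-keys metadata is applied.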