Merge ~shaner/cloud-init:1781039 into cloud-init:ubuntu/trusty

Proposed by Shane Peters
Status: Merged
Merged at revision: cbd5929848ee1cde1da5d3b698add5b32802ec85
Proposed branch: ~shaner/cloud-init:1781039
Merge into: cloud-init:ubuntu/trusty
Diff against target: 737 lines (+704/-0)
4 files modified
debian/changelog (+9/-0)
debian/control (+1/-0)
debian/patches/lp-1781039-gce-datasource-update.patch (+693/-0)
debian/patches/series (+1/-0)
Reviewer: Scott Moser, status: Approve
Review via email: mp+354428@code.launchpad.net

Commit message

Fix SSH key functionality for GCE Datasource

Per the documentation at https://wiki.ubuntu.com/GoogleComputeEngineSSHKeys,
SSH keys for the 'cloudinit' and 'ubuntu' users should both be added to the
'ubuntu' user's authorized_keys file. This works in Xenial and later, but
not in Trusty.

This patch updates the GCE Datasource to function like that of Xenial
and newer.

LP: #1781039
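For context, GCE serves SSH keys in a `<user>:<public_key>` format, and the patched datasource keeps only keys belonging to 'cloudinit' or the default user. A minimal standalone sketch of that filtering (hypothetical helper name, modeled on `_parse_public_keys` in the patch, with the expiry check omitted):

```python
# Sketch of how GCE-format metadata keys are filtered for the default user.
# Hypothetical standalone version; the real logic (including expired-key
# handling) lives in cloudinit/sources/DataSourceGCE.py in the patch.

def parse_public_keys(public_keys_data, default_user=None):
    """Keep keys whose user field is 'cloudinit' or the default user."""
    public_keys = []
    for public_key in public_keys_data or []:
        # GCE key format: '<user>:<ssh public key>'
        parts = public_key.split(':', 1)
        if len(parts) != 2:
            continue  # skip malformed entries
        user, key = parts
        if user in ('cloudinit', default_user):
            public_keys.append(key)
    return public_keys

keys = [
    'ubuntu:ssh-rsa AAAA... user@host',
    'cloudinit:ssh-rsa BBBB... other@host',
    'someoneelse:ssh-rsa CCCC... third@host',
]
# Keys for 'ubuntu' and 'cloudinit' pass; 'someoneelse' is dropped.
print(parse_public_keys(keys, default_user='ubuntu'))
```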

Revision history for this message
Shane Peters (shaner) wrote :

After upgrading cloud-init on an instance and creating a new image from it, I tested with different types of user-data (ssh-keys, cloud-config) using the commands below. Both work fine. Tests without user-data work as well.

gcloud compute instances create trusty-1 --image new-cloud-init --image-project $PROJECT --metadata-from-file=ssh-keys=my-ssh-keys --metadata=block-project-ssh-keys=True

gcloud compute instances create trusty-2 --image new-cloud-init --image-project $PROJECT --metadata-from-file user-data=my-user-data --metadata=block-project-ssh-keys=True

The cloud config I used can be found at https://pastebin.ubuntu.com/p/WnhPtJRsXn/

Revision history for this message
Scott Moser (smoser) wrote :

Your changes here turn on stricter checking of Google Compute via
platform_check=True in read_md. That *should* be safe, but I just
want to call it out here. We fixed one issue related to it in bug 1674861.
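The stricter check being discussed corresponds to `platform_reports_gce` in the patch: DMI product-name first, with a serial-number fallback because product-name is not always set (the bug 1674861 fix). A sketch of that logic, with DMI access stubbed out as a plain dict for illustration:

```python
# Sketch of the platform check introduced by the patch (see
# platform_reports_gce in the diff). The real code reads DMI via
# util.read_dmi_data; here a dict stands in so the logic is runnable.

def platform_reports_gce(dmi):
    pname = dmi.get('system-product-name') or "N/A"
    if pname == "Google Compute Engine":
        return True
    # system-product-name is not always guaranteed (LP: #1674861),
    # so fall back to the serial number, which starts with "GoogleCloud-".
    serial = dmi.get('system-serial-number') or "N/A"
    return serial.startswith("GoogleCloud-")

print(platform_reports_gce({'system-product-name': 'Google Compute Engine'}))  # True
print(platform_reports_gce({'system-serial-number': 'GoogleCloud-abc123'}))    # True
print(platform_reports_gce({}))                                                # False
```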

Also, we sometimes see issues on upgrade and reboot of an instance.
These occur when the datasource expects attributes on the object
re-loaded from the pickle that are not present in the reloaded state.
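That upgrade hazard can be reproduced in miniature: unpickling does not re-run `__init__`, so an object cached by an old class definition simply lacks any attribute the new code added. A hypothetical sketch (not cloud-init's actual classes):

```python
import pickle

class DataSource:                      # "old" class: no default_user attribute
    def __init__(self):
        self.metadata = {}

blob = pickle.dumps(DataSource())      # state cached by the old version

class DataSource:                      # "new" class adds default_user
    def __init__(self):
        self.metadata = {}
        self.default_user = None

obj = pickle.loads(blob)               # __init__ is NOT re-run on unpickle
print(hasattr(obj, 'default_user'))    # False: the new attribute is missing
```

This is why booting an old image, upgrading, and rebooting is the test that matters: only that path exercises the code against stale pickled state.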

So... to test for that, you need to boot an old image, then install the
updated version, and then reboot. Please make sure that you've
done that.

Also, please test this on at least one other platform (lxd should
be easy enough) just to make sure we didn't regress anyone else.

But other than those requests I think this looks good.

Scott

Revision history for this message
Scott Moser (smoser) wrote :

Just to be clear, I'm happy with this if you acknowledge testing:
 a.) on one other platform (lxd)
 b.) boot old image, upgrade, reboot in GCE

If you acknowledge testing that, then I will approve and I can sponsor upload for you.

Revision history for this message
Shane Peters (shaner) wrote :

Thanks for the review, Scott. I did notice the stricter checking and considered leaving that bit out, but as you said, it shouldn't cause any issues. I can still remove it if you'd like.

I ran the upgrade/reboot test as you suggested and things work as expected. Is there anything specific you'd like to see? Here's the test I ran and the resulting cloud-init.log [0].

Also, I tested on lxd with some custom cloud-init config set in a new profile, which checked out okay as well. Logs for that are at [1].

Let me know if there's anything else.

Shane

[0] https://pastebin.ubuntu.com/p/rJ8pg9TvZ7/
[1] https://pastebin.ubuntu.com/p/FsdczQ2zzX/

Revision history for this message
Scott Moser (smoser) wrote :

looks good.

review: Approve

Preview Diff

diff --git a/debian/changelog b/debian/changelog
index 4f2c89e..ca72d76 100644
--- a/debian/changelog
+++ b/debian/changelog
@@ -1,3 +1,12 @@
1cloud-init (0.7.5-0ubuntu1.23) trusty; urgency=medium
2
3 * GCE
4 - d/p/lp-1781039-gce-datasource-update.patch (LP: #1781039)
5 Backport GCE datasource functionality from Xenial
6 - debian/control added python-six dependency.
7
8 -- Shane Peters <shane.peters@canonical.com> Tue, 06 Sep 2018 17:57:23 -0400
9
10cloud-init (0.7.5-0ubuntu1.22) trusty; urgency=medium
11
12 * debian/update-grub-legacy-ec2:
diff --git a/debian/control b/debian/control
index d4a6e18..b66bcb2 100644
--- a/debian/control
+++ b/debian/control
@@ -15,6 +15,7 @@ Build-Depends: cdbs,
15 python-mocker,
16 python-nose,
17 python-oauth,
18 python-six,
19 python-prettytable,
20 python-setuptools,
21 python-requests,
diff --git a/debian/patches/lp-1781039-gce-datasource-update.patch b/debian/patches/lp-1781039-gce-datasource-update.patch
22new file mode 100644
index 0000000..3451863
--- /dev/null
+++ b/debian/patches/lp-1781039-gce-datasource-update.patch
@@ -0,0 +1,693 @@
1Author: Shane Peters <shane.peters@canonical.com>
2Bug: https://bugs.launchpad.net/cloud-init/+bug/1781039
3Description: Update GCE datasource
4 Per documentation at https://wiki.ubuntu.com/GoogleComputeEngineSSHKeys
5 ssh keys for cloudinit and ubuntu users should both be added to the
6 'ubuntu' users authorized_keys file. This works in Xenial and higher,
7 but not in Trusty.
8--- /dev/null
9+++ b/cloudinit/distros/ug_util.py
10@@ -0,0 +1,289 @@
11+# Copyright (C) 2012 Canonical Ltd.
12+# Copyright (C) 2012, 2013 Hewlett-Packard Development Company, L.P.
13+# Copyright (C) 2012 Yahoo! Inc.
14+#
15+# Author: Scott Moser <scott.moser@canonical.com>
16+# Author: Juerg Haefliger <juerg.haefliger@hp.com>
17+# Author: Joshua Harlow <harlowja@yahoo-inc.com>
18+# Author: Ben Howard <ben.howard@canonical.com>
19+#
20+# This file is part of cloud-init. See LICENSE file for license information.
21+
22+import six
23+
24+from cloudinit import log as logging
25+from cloudinit import type_utils
26+from cloudinit import util
27+
28+LOG = logging.getLogger(__name__)
29+
30+
31+# Normalizes a input group configuration
32+# which can be a comma seperated list of
33+# group names, or a list of group names
34+# or a python dictionary of group names
35+# to a list of members of that group.
36+#
37+# The output is a dictionary of group
38+# names => members of that group which
39+# is the standard form used in the rest
40+# of cloud-init
41+def _normalize_groups(grp_cfg):
42+ if isinstance(grp_cfg, six.string_types):
43+ grp_cfg = grp_cfg.strip().split(",")
44+ if isinstance(grp_cfg, list):
45+ c_grp_cfg = {}
46+ for i in grp_cfg:
47+ if isinstance(i, dict):
48+ for k, v in i.items():
49+ if k not in c_grp_cfg:
50+ if isinstance(v, list):
51+ c_grp_cfg[k] = list(v)
52+ elif isinstance(v, six.string_types):
53+ c_grp_cfg[k] = [v]
54+ else:
55+ raise TypeError("Bad group member type %s" %
56+ type_utils.obj_name(v))
57+ else:
58+ if isinstance(v, list):
59+ c_grp_cfg[k].extend(v)
60+ elif isinstance(v, six.string_types):
61+ c_grp_cfg[k].append(v)
62+ else:
63+ raise TypeError("Bad group member type %s" %
64+ type_utils.obj_name(v))
65+ elif isinstance(i, six.string_types):
66+ if i not in c_grp_cfg:
67+ c_grp_cfg[i] = []
68+ else:
69+ raise TypeError("Unknown group name type %s" %
70+ type_utils.obj_name(i))
71+ grp_cfg = c_grp_cfg
72+ groups = {}
73+ if isinstance(grp_cfg, dict):
74+ for (grp_name, grp_members) in grp_cfg.items():
75+ groups[grp_name] = util.uniq_merge_sorted(grp_members)
76+ else:
77+ raise TypeError(("Group config must be list, dict "
78+ " or string types only and not %s") %
79+ type_utils.obj_name(grp_cfg))
80+ return groups
81+
82+
83+# Normalizes a input group configuration
84+# which can be a comma seperated list of
85+# user names, or a list of string user names
86+# or a list of dictionaries with components
87+# that define the user config + 'name' (if
88+# a 'name' field does not exist then the
89+# default user is assumed to 'own' that
90+# configuration.
91+#
92+# The output is a dictionary of user
93+# names => user config which is the standard
94+# form used in the rest of cloud-init. Note
95+# the default user will have a special config
96+# entry 'default' which will be marked as true
97+# all other users will be marked as false.
98+def _normalize_users(u_cfg, def_user_cfg=None):
99+ if isinstance(u_cfg, dict):
100+ ad_ucfg = []
101+ for (k, v) in u_cfg.items():
102+ if isinstance(v, (bool, int, float) + six.string_types):
103+ if util.is_true(v):
104+ ad_ucfg.append(str(k))
105+ elif isinstance(v, dict):
106+ v['name'] = k
107+ ad_ucfg.append(v)
108+ else:
109+ raise TypeError(("Unmappable user value type %s"
110+ " for key %s") % (type_utils.obj_name(v), k))
111+ u_cfg = ad_ucfg
112+ elif isinstance(u_cfg, six.string_types):
113+ u_cfg = util.uniq_merge_sorted(u_cfg)
114+
115+ users = {}
116+ for user_config in u_cfg:
117+ if isinstance(user_config, (list,) + six.string_types):
118+ for u in util.uniq_merge(user_config):
119+ if u and u not in users:
120+ users[u] = {}
121+ elif isinstance(user_config, dict):
122+ if 'name' in user_config:
123+ n = user_config.pop('name')
124+ prev_config = users.get(n) or {}
125+ users[n] = util.mergemanydict([prev_config,
126+ user_config])
127+ else:
128+ # Assume the default user then
129+ prev_config = users.get('default') or {}
130+ users['default'] = util.mergemanydict([prev_config,
131+ user_config])
132+ else:
133+ raise TypeError(("User config must be dictionary/list "
134+ " or string types only and not %s") %
135+ type_utils.obj_name(user_config))
136+
137+ # Ensure user options are in the right python friendly format
138+ if users:
139+ c_users = {}
140+ for (uname, uconfig) in users.items():
141+ c_uconfig = {}
142+ for (k, v) in uconfig.items():
143+ k = k.replace('-', '_').strip()
144+ if k:
145+ c_uconfig[k] = v
146+ c_users[uname] = c_uconfig
147+ users = c_users
148+
149+ # Fixup the default user into the real
150+ # default user name and replace it...
151+ def_user = None
152+ if users and 'default' in users:
153+ def_config = users.pop('default')
154+ if def_user_cfg:
155+ # Pickup what the default 'real name' is
156+ # and any groups that are provided by the
157+ # default config
158+ def_user_cfg = def_user_cfg.copy()
159+ def_user = def_user_cfg.pop('name')
160+ def_groups = def_user_cfg.pop('groups', [])
161+ # Pickup any config + groups for that user name
162+ # that we may have previously extracted
163+ parsed_config = users.pop(def_user, {})
164+ parsed_groups = parsed_config.get('groups', [])
165+ # Now merge our extracted groups with
166+ # anything the default config provided
167+ users_groups = util.uniq_merge_sorted(parsed_groups, def_groups)
168+ parsed_config['groups'] = ",".join(users_groups)
169+ # The real config for the default user is the
170+ # combination of the default user config provided
171+ # by the distro, the default user config provided
172+ # by the above merging for the user 'default' and
173+ # then the parsed config from the user's 'real name'
174+ # which does not have to be 'default' (but could be)
175+ users[def_user] = util.mergemanydict([def_user_cfg,
176+ def_config,
177+ parsed_config])
178+
179+ # Ensure that only the default user that we
180+ # found (if any) is actually marked as being
181+ # the default user
182+ if users:
183+ for (uname, uconfig) in users.items():
184+ if def_user and uname == def_user:
185+ uconfig['default'] = True
186+ else:
187+ uconfig['default'] = False
188+
189+ return users
190+
191+
192+# Normalizes a set of user/users and group
193+# dictionary configuration into a useable
194+# format that the rest of cloud-init can
195+# understand using the default user
196+# provided by the input distrobution (if any)
197+# to allow for mapping of the 'default' user.
198+#
199+# Output is a dictionary of group names -> [member] (list)
200+# and a dictionary of user names -> user configuration (dict)
201+#
202+# If 'user' exists it will override
203+# the 'users'[0] entry (if a list) otherwise it will
204+# just become an entry in the returned dictionary (no override)
205+def normalize_users_groups(cfg, distro):
206+ if not cfg:
207+ cfg = {}
208+
209+ users = {}
210+ groups = {}
211+ if 'groups' in cfg:
212+ groups = _normalize_groups(cfg['groups'])
213+
214+ # Handle the previous style of doing this where the first user
215+ # overrides the concept of the default user if provided in the user: XYZ
216+ # format.
217+ old_user = {}
218+ if 'user' in cfg and cfg['user']:
219+ old_user = cfg['user']
220+ # Translate it into the format that is more useful
221+ # going forward
222+ if isinstance(old_user, six.string_types):
223+ old_user = {
224+ 'name': old_user,
225+ }
226+ if not isinstance(old_user, dict):
227+ LOG.warning(("Format for 'user' key must be a string or dictionary"
228+ " and not %s"), type_utils.obj_name(old_user))
229+ old_user = {}
230+
231+ # If no old user format, then assume the distro
232+ # provides what the 'default' user maps to, but notice
233+ # that if this is provided, we won't automatically inject
234+ # a 'default' user into the users list, while if a old user
235+ # format is provided we will.
236+ distro_user_config = {}
237+ try:
238+ distro_user_config = distro.get_default_user()
239+ except NotImplementedError:
240+ LOG.warning(("Distro has not implemented default user "
241+ "access. No distribution provided default user"
242+ " will be normalized."))
243+
244+ # Merge the old user (which may just be an empty dict when not
245+ # present with the distro provided default user configuration so
246+ # that the old user style picks up all the distribution specific
247+ # attributes (if any)
248+ default_user_config = util.mergemanydict([old_user, distro_user_config])
249+
250+ base_users = cfg.get('users', [])
251+ if not isinstance(base_users, (list, dict) + six.string_types):
252+ LOG.warning(("Format for 'users' key must be a comma separated string"
253+ " or a dictionary or a list and not %s"),
254+ type_utils.obj_name(base_users))
255+ base_users = []
256+
257+ if old_user:
258+ # Ensure that when user: is provided that this user
259+ # always gets added (as the default user)
260+ if isinstance(base_users, list):
261+ # Just add it on at the end...
262+ base_users.append({'name': 'default'})
263+ elif isinstance(base_users, dict):
264+ base_users['default'] = dict(base_users).get('default', True)
265+ elif isinstance(base_users, six.string_types):
266+ # Just append it on to be re-parsed later
267+ base_users += ",default"
268+
269+ users = _normalize_users(base_users, default_user_config)
270+ return (users, groups)
271+
272+
273+# Given a user dictionary config it will
274+# extract the default user name and user config
275+# from that list and return that tuple or
276+# return (None, None) if no default user is
277+# found in the given input
278+def extract_default(users, default_name=None, default_config=None):
279+ if not users:
280+ users = {}
281+
282+ def safe_find(entry):
283+ config = entry[1]
284+ if not config or 'default' not in config:
285+ return False
286+ else:
287+ return config['default']
288+
289+ tmp_users = users.items()
290+ tmp_users = dict(filter(safe_find, tmp_users))
291+ if not tmp_users:
292+ return (default_name, default_config)
293+ else:
294+ name = list(tmp_users)[0]
295+ config = tmp_users[name]
296+ config.pop('default', None)
297+ return (name, config)
298+
299+# vi: ts=4 expandtab
300--- a/cloudinit/sources/DataSourceGCE.py
301+++ b/cloudinit/sources/DataSourceGCE.py
302@@ -1,142 +1,99 @@
303-# vi: ts=4 expandtab
304-#
305-# Author: Vaidas Jablonskis <jablonskis@gmail.com>
306-#
307-# This program is free software: you can redistribute it and/or modify
308-# it under the terms of the GNU General Public License version 3, as
309-# published by the Free Software Foundation.
310+# Author: Vaidas Jablonskis <jablonskis@gmail.com>
311 #
312-# This program is distributed in the hope that it will be useful,
313-# but WITHOUT ANY WARRANTY; without even the implied warranty of
314-# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
315-# GNU General Public License for more details.
316-#
317-# You should have received a copy of the GNU General Public License
318-# along with this program. If not, see <http://www.gnu.org/licenses/>.
319+# This file is part of cloud-init. See LICENSE file for license information.
320
321+import datetime
322+import json
323
324 from base64 import b64decode
325
326+from cloudinit.distros import ug_util
327 from cloudinit import log as logging
328-from cloudinit import util
329 from cloudinit import sources
330 from cloudinit import url_helper
331+from cloudinit import util
332
333 LOG = logging.getLogger(__name__)
334
335-BUILTIN_DS_CONFIG = {
336- 'metadata_url': 'http://metadata.google.internal/computeMetadata/v1/'
337-}
338+MD_V1_URL = 'http://metadata.google.internal/computeMetadata/v1/'
339+BUILTIN_DS_CONFIG = {'metadata_url': MD_V1_URL}
340 REQUIRED_FIELDS = ('instance-id', 'availability-zone', 'local-hostname')
341
342
343+class GoogleMetadataFetcher(object):
344+ headers = {'Metadata-Flavor': 'Google'}
345+
346+ def __init__(self, metadata_address):
347+ self.metadata_address = metadata_address
348+
349+ def get_value(self, path, is_text, is_recursive=False):
350+ value = None
351+ try:
352+ url = self.metadata_address + path
353+ if is_recursive:
354+ url += '/?recursive=True'
355+ resp = url_helper.readurl(url=url, headers=self.headers)
356+ except url_helper.UrlError as exc:
357+ msg = "url %s raised exception %s"
358+ LOG.debug(msg, path, exc)
359+ else:
360+ if resp.code == 200:
361+ if is_text:
362+ value = resp.contents.decode('utf-8')
363+ else:
364+ value = resp.contents
365+ else:
366+ LOG.debug("url %s returned code %s", path, resp.code)
367+ return value
368+
369+
370 class DataSourceGCE(sources.DataSource):
371+
372+ dsname = 'GCE'
373+
374 def __init__(self, sys_cfg, distro, paths):
375 sources.DataSource.__init__(self, sys_cfg, distro, paths)
376+ self.default_user = None
377+ if distro:
378+ (users, _groups) = ug_util.normalize_users_groups(sys_cfg, distro)
379+ (self.default_user, _user_config) = ug_util.extract_default(users)
380 self.metadata = dict()
381 self.ds_cfg = util.mergemanydict([
382 util.get_cfg_by_path(sys_cfg, ["datasource", "GCE"], {}),
383 BUILTIN_DS_CONFIG])
384 self.metadata_address = self.ds_cfg['metadata_url']
385
386- # GCE takes sshKeys attribute in the format of '<user>:<public_key>'
387- # so we have to trim each key to remove the username part
388- def _trim_key(self, public_key):
389- try:
390- index = public_key.index(':')
391- if index > 0:
392- return public_key[(index + 1):]
393- except:
394- return public_key
395-
396 def get_data(self):
397- # GCE metadata server requires a custom header since v1
398- headers = {'X-Google-Metadata-Request': True}
399-
400- # url_map: (our-key, path, required)
401- url_map = [
402- ('instance-id', 'instance/id', True),
403- ('availability-zone', 'instance/zone', True),
404- ('local-hostname', 'instance/hostname', True),
405- ('public-keys', 'project/attributes/sshKeys', False),
406- ('user-data', 'instance/attributes/user-data', False),
407- ('user-data-encoding', 'instance/attributes/user-data-encoding',
408- False),
409- ]
410-
411- # if we cannot resolve the metadata server, then no point in trying
412- if not util.is_resolvable_url(self.metadata_address):
413- LOG.debug("%s is not resolvable", self.metadata_address)
414- return False
415-
416- # iterate over url_map keys to get metadata items
417- found = False
418- for (mkey, path, required) in url_map:
419- try:
420- resp = url_helper.readurl(url=self.metadata_address + path,
421- headers=headers)
422- if resp.code == 200:
423- found = True
424- self.metadata[mkey] = resp.contents
425- else:
426- if required:
427- msg = "required url %s returned code %s. not GCE"
428- if not found:
429- LOG.debug(msg, path, resp.code)
430- else:
431- LOG.warn(msg, path, resp.code)
432- return False
433- else:
434- self.metadata[mkey] = None
435- except url_helper.UrlError as e:
436- if required:
437- msg = "required url %s raised exception %s. not GCE"
438- if not found:
439- LOG.debug(msg, path, e)
440- else:
441- LOG.warn(msg, path, e)
442- return False
443- msg = "Failed to get %s metadata item: %s."
444- LOG.debug(msg, path, e)
445-
446- self.metadata[mkey] = None
447-
448- if self.metadata['public-keys']:
449- lines = self.metadata['public-keys'].splitlines()
450- self.metadata['public-keys'] = [self._trim_key(k) for k in lines]
451-
452- if self.metadata['availability-zone']:
453- self.metadata['availability-zone'] = self.metadata[
454- 'availability-zone'].split('/')[-1]
455-
456- encoding = self.metadata.get('user-data-encoding')
457- if encoding:
458- if encoding == 'base64':
459- self.metadata['user-data'] = b64decode(
460- self.metadata['user-data'])
461+ ret = util.log_time(
462+ LOG.debug, 'Crawl of GCE metadata service',
463+ read_md, kwargs={'address': self.metadata_address})
464+
465+ if not ret['success']:
466+ if ret['platform_reports_gce']:
467+ LOG.warning(ret['reason'])
468 else:
469- LOG.warn('unknown user-data-encoding: %s, ignoring', encoding)
470-
471- return found
472+ LOG.debug(ret['reason'])
473+ return False
474+ self.metadata = ret['meta-data']
475+ self.userdata_raw = ret['user-data']
476+ return True
477
478 @property
479 def launch_index(self):
480- # GCE does not provide lauch_index property
481+ # GCE does not provide lauch_index property.
482 return None
483
484 def get_instance_id(self):
485 return self.metadata['instance-id']
486
487 def get_public_ssh_keys(self):
488- return self.metadata['public-keys']
489+ public_keys_data = self.metadata['public-keys-data']
490+ return _parse_public_keys(public_keys_data, self.default_user)
491
492- def get_hostname(self, fqdn=False, _resolve_ip=False):
493- # GCE has long FDQN's and has asked for short hostnames
494+ def get_hostname(self, fqdn=False, resolve_ip=False, metadata_only=False):
495+ # GCE has long FDQN's and has asked for short hostnames.
496 return self.metadata['local-hostname'].split('.')[0]
497
498- def get_userdata_raw(self):
499- return self.metadata['user-data']
500-
501 @property
502 def availability_zone(self):
503 return self.metadata['availability-zone']
504@@ -145,12 +102,187 @@
505 def region(self):
506 return self.availability_zone.rsplit('-', 1)[0]
507
508-# Used to match classes to dependencies
509+
510+def _has_expired(public_key):
511+ # Check whether an SSH key is expired. Public key input is a single SSH
512+ # public key in the GCE specific key format documented here:
513+ # https://cloud.google.com/compute/docs/instances/adding-removing-ssh-keys#sshkeyformat
514+ try:
515+ # Check for the Google-specific schema identifier.
516+ schema, json_str = public_key.split(None, 3)[2:]
517+ except (ValueError, AttributeError):
518+ return False
519+
520+ # Do not expire keys if they do not have the expected schema identifier.
521+ if schema != 'google-ssh':
522+ return False
523+
524+ try:
525+ json_obj = json.loads(json_str)
526+ except ValueError:
527+ return False
528+
529+ # Do not expire keys if there is no expriation timestamp.
530+ if 'expireOn' not in json_obj:
531+ return False
532+
533+ expire_str = json_obj['expireOn']
534+ format_str = '%Y-%m-%dT%H:%M:%S+0000'
535+ try:
536+ expire_time = datetime.datetime.strptime(expire_str, format_str)
537+ except ValueError:
538+ return False
539+
540+ # Expire the key if and only if we have exceeded the expiration timestamp.
541+ return datetime.datetime.utcnow() > expire_time
542+
543+
544+def _parse_public_keys(public_keys_data, default_user=None):
545+ # Parse the SSH key data for the default user account. Public keys input is
546+ # a list containing SSH public keys in the GCE specific key format
547+ # documented here:
548+ # https://cloud.google.com/compute/docs/instances/adding-removing-ssh-keys#sshkeyformat
549+ public_keys = []
550+ if not public_keys_data:
551+ return public_keys
552+ for public_key in public_keys_data:
553+ if not public_key or not all(ord(c) < 128 for c in public_key):
554+ continue
555+ split_public_key = public_key.split(':', 1)
556+ if len(split_public_key) != 2:
557+ continue
558+ user, key = split_public_key
559+ if user in ('cloudinit', default_user) and not _has_expired(key):
560+ public_keys.append(key)
561+ return public_keys
562+
563+
564+def read_md(address=None, platform_check=True):
565+
566+ if address is None:
567+ address = MD_V1_URL
568+
569+ ret = {'meta-data': None, 'user-data': None,
570+ 'success': False, 'reason': None}
571+ ret['platform_reports_gce'] = platform_reports_gce()
572+
573+ if platform_check and not ret['platform_reports_gce']:
574+ ret['reason'] = "Not running on GCE."
575+ return ret
576+
577+ # If we cannot resolve the metadata server, then no point in trying.
578+ if not util.is_resolvable_url(address):
579+ LOG.debug("%s is not resolvable", address)
580+ ret['reason'] = 'address "%s" is not resolvable' % address
581+ return ret
582+
583+ # url_map: (our-key, path, required, is_text, is_recursive)
584+ url_map = [
585+ ('instance-id', ('instance/id',), True, True, False),
586+ ('availability-zone', ('instance/zone',), True, True, False),
587+ ('local-hostname', ('instance/hostname',), True, True, False),
588+ ('instance-data', ('instance/attributes',), False, False, True),
589+ ('project-data', ('project/attributes',), False, False, True),
590+ ]
591+
592+ metadata_fetcher = GoogleMetadataFetcher(address)
593+ md = {}
594+ # Iterate over url_map keys to get metadata items.
595+ for (mkey, paths, required, is_text, is_recursive) in url_map:
596+ value = None
597+ for path in paths:
598+ new_value = metadata_fetcher.get_value(path, is_text, is_recursive)
599+ if new_value is not None:
600+ value = new_value
601+ if required and value is None:
602+ msg = "required key %s returned nothing. not GCE"
603+ ret['reason'] = msg % mkey
604+ return ret
605+ md[mkey] = value
606+
607+ instance_data = json.loads(md['instance-data'] or '{}')
608+ project_data = json.loads(md['project-data'] or '{}')
609+ valid_keys = [instance_data.get('sshKeys'), instance_data.get('ssh-keys')]
610+ block_project = instance_data.get('block-project-ssh-keys', '').lower()
611+ if block_project != 'true' and not instance_data.get('sshKeys'):
612+ valid_keys.append(project_data.get('ssh-keys'))
613+ valid_keys.append(project_data.get('sshKeys'))
614+ public_keys_data = '\n'.join([key for key in valid_keys if key])
615+ md['public-keys-data'] = public_keys_data.splitlines()
616+
617+ if md['availability-zone']:
618+ md['availability-zone'] = md['availability-zone'].split('/')[-1]
619+
620+ if 'user-data' in instance_data:
621+ # instance_data was json, so values are all utf-8 strings.
622+ ud = instance_data['user-data'].encode("utf-8")
623+ encoding = instance_data.get('user-data-encoding')
624+ if encoding == 'base64':
625+ ud = b64decode(ud)
626+ elif encoding:
627+ LOG.warning('unknown user-data-encoding: %s, ignoring', encoding)
628+ ret['user-data'] = ud
629+
630+ ret['meta-data'] = md
631+ ret['success'] = True
632+
633+ return ret
634+
635+
636+def platform_reports_gce():
637+ pname = util.read_dmi_data('system-product-name') or "N/A"
638+ if pname == "Google Compute Engine":
639+ return True
640+
641+ # system-product-name is not always guaranteed (LP: #1674861)
642+ serial = util.read_dmi_data('system-serial-number') or "N/A"
643+ if serial.startswith("GoogleCloud-"):
644+ return True
645+
646+ LOG.debug("Not running on google cloud. product-name=%s serial=%s",
647+ pname, serial)
648+ return False
649+
650+
651+# Used to match classes to dependencies.
652 datasources = [
653 (DataSourceGCE, (sources.DEP_FILESYSTEM, sources.DEP_NETWORK)),
654 ]
655
656
657-# Return a list of data sources that match this set of dependencies
658+# Return a list of data sources that match this set of dependencies.
659 def get_datasource_list(depends):
660 return sources.list_from_depends(depends, datasources)
661+
662+
663+if __name__ == "__main__":
664+ import argparse
665+ import sys
666+
667+ from base64 import b64encode
668+
669+ parser = argparse.ArgumentParser(description='Query GCE Metadata Service')
670+ parser.add_argument("--endpoint", metavar="URL",
671+ help="The url of the metadata service.",
672+ default=MD_V1_URL)
673+ parser.add_argument("--no-platform-check", dest="platform_check",
674+ help="Ignore smbios platform check",
675+ action='store_false', default=True)
676+ args = parser.parse_args()
677+ data = read_md(address=args.endpoint, platform_check=args.platform_check)
678+ if 'user-data' in data:
679+ # user-data is bytes not string like other things. Handle it specially.
680+ # If it can be represented as utf-8 then do so. Otherwise print base64
681+ # encoded value in the key user-data-b64.
682+ try:
683+ data['user-data'] = data['user-data'].decode()
684+ except UnicodeDecodeError:
685+ sys.stderr.write("User-data cannot be decoded. "
686+ "Writing as base64\n")
687+ del data['user-data']
688+ # b64encode returns a bytes value. Decode to get the string.
689+ data['user-data-b64'] = b64encode(data['user-data']).decode()
690+
691+ print(json.dumps(data, indent=1, sort_keys=True, separators=(',', ': ')))
692+
693+# vi: ts=4 expandtab
diff --git a/debian/patches/series b/debian/patches/series
index d20f78b..21d6b4f 100644
--- a/debian/patches/series
+++ b/debian/patches/series
@@ -24,3 +24,4 @@ lp-1551419-azure-handle-flipped-uuid-endianness.patch
24lp-1553158-bigstep.patch
25lp-1581200-gce-metadatafqdn.patch
26lp-1603222-fix-ephemeral-disk-fstab.patch
27lp-1781039-gce-datasource-update.patch
