Merge ~shaner/cloud-init:ubuntu/trusty into cloud-init:ubuntu/trusty
Proposed by Shane Peters
Status: Superseded
Proposed branch: ~shaner/cloud-init:ubuntu/trusty
Merge into: cloud-init:ubuntu/trusty
Diff against target: 744 lines (+722/-0), 3 files modified:
- debian/changelog (+8/-0)
- debian/patches/lp-1781039-gce-datasource-update.patch (+713/-0)
- debian/patches/series (+1/-0)
Reviewer: Scott Moser — Needs Fixing
Review via email: mp+353997@code.launchpad.net
This proposal supersedes a proposal from 2018-08-07.
Commit message
Fix SSH key functionality for GCE Datasource
Per the documentation at https://wiki.ubuntu.com/GoogleComputeEngineSSHKeys,
SSH keys for the 'cloudinit' and 'ubuntu' users should both be added to the
'ubuntu' user's authorized_keys file. This works in Xenial and higher,
but not in Trusty.
This patch updates the GCE Datasource to function like that of Xenial
and newer.
LP: #1781039
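The filtering behavior this backport restores can be sketched roughly as follows. This is a simplified stand-in for the patch's `_parse_public_keys` helper, not the patch itself: GCE metadata serves SSH keys as `<user>:<public_key>` lines, and only the entries addressed to 'cloudinit' or the instance's default user (here, 'ubuntu') should land in that user's authorized_keys.

```python
# Simplified sketch of the backported key filtering. GCE metadata
# exposes SSH keys as '<user>:<public_key>' lines; cloud-init keeps
# only entries addressed to 'cloudinit' or the default user.
def parse_public_keys(public_keys_data, default_user=None):
    public_keys = []
    for public_key in public_keys_data:
        # Split off the username prefix; skip malformed entries.
        parts = public_key.split(':', 1)
        if len(parts) != 2:
            continue
        user, key = parts
        if user in ('cloudinit', default_user):
            public_keys.append(key)
    return public_keys
```

For example, `parse_public_keys(['ubuntu:ssh-rsa AAA a@b', 'other:ssh-rsa BBB'], 'ubuntu')` keeps only the first key. The real helper in the patch additionally drops non-ASCII entries and expired `google-ssh` keys.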
Description of the change
Revision history for this message
Scott Moser (smoser) wrote (posted in a previous version of this proposal):
review:
Needs Information
Revision history for this message
Scott Moser (smoser) wrote:
some necessary fixes inline.
Please make sure you test with user-data and without user-data.
Revision history for this message
Scott Moser (smoser):
review:
Needs Fixing
Revision history for this message
Scott Moser (smoser) wrote:
Unmerged commits
- 926c9c0... by Shane Peters
  debian/patches/lp-1781039-gce-datasource-update.patch
- e53a5cd... by Shane Peters
  debian/patches/lp-1781039-gce-datasource-update.patch
- e0e26ee... by Shane Peters
  debian/patches/lp-1781039-gce-datasource-update.patch
Preview Diff
diff --git a/debian/changelog b/debian/changelog
index 4f2c89e..63b6698 100644
--- a/debian/changelog
+++ b/debian/changelog
@@ -1,3 +1,11 @@
+cloud-init (0.7.5-0ubuntu1.23) trusty; urgency=medium
+
+  * GCE
+    - d/p/lp-1781039-gce-datasource-update.patch (LP: #1781039)
+      Backport GCE datasource functionality from Xenial
+
+ -- Shane Peters <shane.peters@canonical.com>  Tue, 07 Aug 2018 11:57:23 -0400
+
 cloud-init (0.7.5-0ubuntu1.22) trusty; urgency=medium
 
   * debian/update-grub-legacy-ec2:
diff --git a/debian/patches/lp-1781039-gce-datasource-update.patch b/debian/patches/lp-1781039-gce-datasource-update.patch
new file mode 100644
index 0000000..34c67d3
--- /dev/null
+++ b/debian/patches/lp-1781039-gce-datasource-update.patch
@@ -0,0 +1,713 @@
Author: Shane Peters <shane.peters@canonical.com>
Bug: https://bugs.launchpad.net/cloud-init/+bug/1781039
Description: Update GCE datasource
 Per documentation at https://wiki.ubuntu.com/GoogleComputeEngineSSHKeys
 ssh keys for cloudinit and ubuntu users should both be added to the
 'ubuntu' users authorized_keys file. This works in Xenial and higher,
 but not in Trusty.
--- /dev/null
+++ b/cloudinit/distros/ug_util.py
@@ -0,0 +1,289 @@
+# Copyright (C) 2012 Canonical Ltd.
+# Copyright (C) 2012, 2013 Hewlett-Packard Development Company, L.P.
+# Copyright (C) 2012 Yahoo! Inc.
+#
+# Author: Scott Moser <scott.moser@canonical.com>
+# Author: Juerg Haefliger <juerg.haefliger@hp.com>
+# Author: Joshua Harlow <harlowja@yahoo-inc.com>
+# Author: Ben Howard <ben.howard@canonical.com>
+#
+# This file is part of cloud-init. See LICENSE file for license information.
+
+import six
+
+from cloudinit import log as logging
+from cloudinit import type_utils
+from cloudinit import util
+
+LOG = logging.getLogger(__name__)
+
+
+# Normalizes a input group configuration
+# which can be a comma seperated list of
+# group names, or a list of group names
+# or a python dictionary of group names
+# to a list of members of that group.
+#
+# The output is a dictionary of group
+# names => members of that group which
+# is the standard form used in the rest
+# of cloud-init
+def _normalize_groups(grp_cfg):
+    if isinstance(grp_cfg, six.string_types):
+        grp_cfg = grp_cfg.strip().split(",")
+    if isinstance(grp_cfg, list):
+        c_grp_cfg = {}
+        for i in grp_cfg:
+            if isinstance(i, dict):
+                for k, v in i.items():
+                    if k not in c_grp_cfg:
+                        if isinstance(v, list):
+                            c_grp_cfg[k] = list(v)
+                        elif isinstance(v, six.string_types):
+                            c_grp_cfg[k] = [v]
+                        else:
+                            raise TypeError("Bad group member type %s" %
+                                            type_utils.obj_name(v))
+                    else:
+                        if isinstance(v, list):
+                            c_grp_cfg[k].extend(v)
+                        elif isinstance(v, six.string_types):
+                            c_grp_cfg[k].append(v)
+                        else:
+                            raise TypeError("Bad group member type %s" %
+                                            type_utils.obj_name(v))
+            elif isinstance(i, six.string_types):
+                if i not in c_grp_cfg:
+                    c_grp_cfg[i] = []
+            else:
+                raise TypeError("Unknown group name type %s" %
+                                type_utils.obj_name(i))
+        grp_cfg = c_grp_cfg
+    groups = {}
+    if isinstance(grp_cfg, dict):
+        for (grp_name, grp_members) in grp_cfg.items():
+            groups[grp_name] = util.uniq_merge_sorted(grp_members)
+    else:
+        raise TypeError(("Group config must be list, dict "
+                         " or string types only and not %s") %
+                        type_utils.obj_name(grp_cfg))
+    return groups
+
+
+# Normalizes a input group configuration
+# which can be a comma seperated list of
+# user names, or a list of string user names
+# or a list of dictionaries with components
+# that define the user config + 'name' (if
+# a 'name' field does not exist then the
+# default user is assumed to 'own' that
+# configuration.
+#
+# The output is a dictionary of user
+# names => user config which is the standard
+# form used in the rest of cloud-init. Note
+# the default user will have a special config
+# entry 'default' which will be marked as true
+# all other users will be marked as false.
+def _normalize_users(u_cfg, def_user_cfg=None):
+    if isinstance(u_cfg, dict):
+        ad_ucfg = []
+        for (k, v) in u_cfg.items():
+            if isinstance(v, (bool, int, float) + six.string_types):
+                if util.is_true(v):
+                    ad_ucfg.append(str(k))
+            elif isinstance(v, dict):
+                v['name'] = k
+                ad_ucfg.append(v)
+            else:
+                raise TypeError(("Unmappable user value type %s"
+                                 " for key %s") % (type_utils.obj_name(v), k))
+        u_cfg = ad_ucfg
+    elif isinstance(u_cfg, six.string_types):
+        u_cfg = util.uniq_merge_sorted(u_cfg)
+
+    users = {}
+    for user_config in u_cfg:
+        if isinstance(user_config, (list,) + six.string_types):
+            for u in util.uniq_merge(user_config):
+                if u and u not in users:
+                    users[u] = {}
+        elif isinstance(user_config, dict):
+            if 'name' in user_config:
+                n = user_config.pop('name')
+                prev_config = users.get(n) or {}
+                users[n] = util.mergemanydict([prev_config,
+                                               user_config])
+            else:
+                # Assume the default user then
+                prev_config = users.get('default') or {}
+                users['default'] = util.mergemanydict([prev_config,
+                                                       user_config])
+        else:
+            raise TypeError(("User config must be dictionary/list "
+                             " or string types only and not %s") %
+                            type_utils.obj_name(user_config))
+
+    # Ensure user options are in the right python friendly format
+    if users:
+        c_users = {}
+        for (uname, uconfig) in users.items():
+            c_uconfig = {}
+            for (k, v) in uconfig.items():
+                k = k.replace('-', '_').strip()
+                if k:
+                    c_uconfig[k] = v
+            c_users[uname] = c_uconfig
+        users = c_users
+
+    # Fixup the default user into the real
+    # default user name and replace it...
+    def_user = None
+    if users and 'default' in users:
+        def_config = users.pop('default')
+        if def_user_cfg:
+            # Pickup what the default 'real name' is
+            # and any groups that are provided by the
+            # default config
+            def_user_cfg = def_user_cfg.copy()
+            def_user = def_user_cfg.pop('name')
+            def_groups = def_user_cfg.pop('groups', [])
+            # Pickup any config + groups for that user name
+            # that we may have previously extracted
+            parsed_config = users.pop(def_user, {})
+            parsed_groups = parsed_config.get('groups', [])
+            # Now merge our extracted groups with
+            # anything the default config provided
+            users_groups = util.uniq_merge_sorted(parsed_groups, def_groups)
+            parsed_config['groups'] = ",".join(users_groups)
+            # The real config for the default user is the
+            # combination of the default user config provided
+            # by the distro, the default user config provided
+            # by the above merging for the user 'default' and
+            # then the parsed config from the user's 'real name'
+            # which does not have to be 'default' (but could be)
+            users[def_user] = util.mergemanydict([def_user_cfg,
+                                                  def_config,
+                                                  parsed_config])
+
+    # Ensure that only the default user that we
+    # found (if any) is actually marked as being
+    # the default user
+    if users:
+        for (uname, uconfig) in users.items():
+            if def_user and uname == def_user:
+                uconfig['default'] = True
+            else:
+                uconfig['default'] = False
+
+    return users
+
+
+# Normalizes a set of user/users and group
+# dictionary configuration into a useable
+# format that the rest of cloud-init can
+# understand using the default user
+# provided by the input distrobution (if any)
+# to allow for mapping of the 'default' user.
+#
+# Output is a dictionary of group names -> [member] (list)
+# and a dictionary of user names -> user configuration (dict)
+#
+# If 'user' exists it will override
+# the 'users'[0] entry (if a list) otherwise it will
+# just become an entry in the returned dictionary (no override)
+def normalize_users_groups(cfg, distro):
+    if not cfg:
+        cfg = {}
+
+    users = {}
+    groups = {}
+    if 'groups' in cfg:
+        groups = _normalize_groups(cfg['groups'])
+
+    # Handle the previous style of doing this where the first user
+    # overrides the concept of the default user if provided in the user: XYZ
+    # format.
+    old_user = {}
+    if 'user' in cfg and cfg['user']:
+        old_user = cfg['user']
+        # Translate it into the format that is more useful
+        # going forward
+        if isinstance(old_user, six.string_types):
+            old_user = {
+                'name': old_user,
+            }
+        if not isinstance(old_user, dict):
+            LOG.warning(("Format for 'user' key must be a string or dictionary"
+                         " and not %s"), type_utils.obj_name(old_user))
+            old_user = {}
+
+    # If no old user format, then assume the distro
+    # provides what the 'default' user maps to, but notice
+    # that if this is provided, we won't automatically inject
+    # a 'default' user into the users list, while if a old user
+    # format is provided we will.
+    distro_user_config = {}
+    try:
+        distro_user_config = distro.get_default_user()
+    except NotImplementedError:
+        LOG.warning(("Distro has not implemented default user "
+                     "access. No distribution provided default user"
+                     " will be normalized."))
+
+    # Merge the old user (which may just be an empty dict when not
+    # present with the distro provided default user configuration so
+    # that the old user style picks up all the distribution specific
+    # attributes (if any)
+    default_user_config = util.mergemanydict([old_user, distro_user_config])
+
+    base_users = cfg.get('users', [])
+    if not isinstance(base_users, (list, dict) + six.string_types):
+        LOG.warning(("Format for 'users' key must be a comma separated string"
+                     " or a dictionary or a list and not %s"),
+                    type_utils.obj_name(base_users))
+        base_users = []
+
+    if old_user:
+        # Ensure that when user: is provided that this user
+        # always gets added (as the default user)
+        if isinstance(base_users, list):
+            # Just add it on at the end...
+            base_users.append({'name': 'default'})
+        elif isinstance(base_users, dict):
+            base_users['default'] = dict(base_users).get('default', True)
+        elif isinstance(base_users, six.string_types):
+            # Just append it on to be re-parsed later
+            base_users += ",default"
+
+    users = _normalize_users(base_users, default_user_config)
+    return (users, groups)
+
+
+# Given a user dictionary config it will
+# extract the default user name and user config
+# from that list and return that tuple or
+# return (None, None) if no default user is
+# found in the given input
+def extract_default(users, default_name=None, default_config=None):
+    if not users:
+        users = {}
+
+    def safe_find(entry):
+        config = entry[1]
+        if not config or 'default' not in config:
+            return False
+        else:
+            return config['default']
+
+    tmp_users = users.items()
+    tmp_users = dict(filter(safe_find, tmp_users))
+    if not tmp_users:
+        return (default_name, default_config)
+    else:
+        name = list(tmp_users)[0]
+        config = tmp_users[name]
+        config.pop('default', None)
+        return (name, config)
+
+# vi: ts=4 expandtab
322 | 300 | --- a/cloudinit/sources/DataSourceGCE.py | ||
323 | 301 | +++ b/cloudinit/sources/DataSourceGCE.py | ||
324 | 302 | @@ -1,142 +1,99 @@ | ||
325 | 303 | -# vi: ts=4 expandtab | ||
326 | 304 | -# | ||
327 | 305 | -# Author: Vaidas Jablonskis <jablonskis@gmail.com> | ||
328 | 306 | -# | ||
329 | 307 | -# This program is free software: you can redistribute it and/or modify | ||
330 | 308 | -# it under the terms of the GNU General Public License version 3, as | ||
331 | 309 | -# published by the Free Software Foundation. | ||
332 | 310 | +# Author: Vaidas Jablonskis <jablonskis@gmail.com> | ||
333 | 311 | # | ||
334 | 312 | -# This program is distributed in the hope that it will be useful, | ||
335 | 313 | -# but WITHOUT ANY WARRANTY; without even the implied warranty of | ||
336 | 314 | -# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the | ||
337 | 315 | -# GNU General Public License for more details. | ||
338 | 316 | -# | ||
339 | 317 | -# You should have received a copy of the GNU General Public License | ||
340 | 318 | -# along with this program. If not, see <http://www.gnu.org/licenses/>. | ||
341 | 319 | +# This file is part of cloud-init. See LICENSE file for license information. | ||
342 | 320 | |||
343 | 321 | +import datetime | ||
344 | 322 | +import json | ||
345 | 323 | |||
346 | 324 | from base64 import b64decode | ||
347 | 325 | |||
348 | 326 | +from cloudinit.distros import ug_util | ||
349 | 327 | from cloudinit import log as logging | ||
350 | 328 | -from cloudinit import util | ||
351 | 329 | from cloudinit import sources | ||
352 | 330 | from cloudinit import url_helper | ||
353 | 331 | +from cloudinit import util | ||
354 | 332 | |||
355 | 333 | LOG = logging.getLogger(__name__) | ||
356 | 334 | |||
357 | 335 | -BUILTIN_DS_CONFIG = { | ||
358 | 336 | - 'metadata_url': 'http://metadata.google.internal/computeMetadata/v1/' | ||
359 | 337 | -} | ||
360 | 338 | +MD_V1_URL = 'http://metadata.google.internal/computeMetadata/v1/' | ||
361 | 339 | +BUILTIN_DS_CONFIG = {'metadata_url': MD_V1_URL} | ||
362 | 340 | REQUIRED_FIELDS = ('instance-id', 'availability-zone', 'local-hostname') | ||
363 | 341 | |||
364 | 342 | |||
365 | 343 | +class GoogleMetadataFetcher(object): | ||
366 | 344 | + headers = {'Metadata-Flavor': 'Google'} | ||
367 | 345 | + | ||
368 | 346 | + def __init__(self, metadata_address): | ||
369 | 347 | + self.metadata_address = metadata_address | ||
370 | 348 | + | ||
371 | 349 | + def get_value(self, path, is_text, is_recursive=False): | ||
372 | 350 | + value = None | ||
373 | 351 | + try: | ||
374 | 352 | + url = self.metadata_address + path | ||
375 | 353 | + if is_recursive: | ||
376 | 354 | + url += '/?recursive=True' | ||
377 | 355 | + resp = url_helper.readurl(url=url, headers=self.headers) | ||
378 | 356 | + except url_helper.UrlError as exc: | ||
379 | 357 | + msg = "url %s raised exception %s" | ||
380 | 358 | + LOG.debug(msg, path, exc) | ||
381 | 359 | + else: | ||
382 | 360 | + if resp.code == 200: | ||
383 | 361 | + if is_text: | ||
384 | 362 | + value = util.decode_binary(resp.contents) | ||
385 | 363 | + else: | ||
386 | 364 | + value = resp.contents.decode('utf-8') | ||
387 | 365 | + else: | ||
388 | 366 | + LOG.debug("url %s returned code %s", path, resp.code) | ||
389 | 367 | + return value | ||
390 | 368 | + | ||
391 | 369 | + | ||
392 | 370 | class DataSourceGCE(sources.DataSource): | ||
393 | 371 | + | ||
394 | 372 | + dsname = 'GCE' | ||
395 | 373 | + | ||
396 | 374 | def __init__(self, sys_cfg, distro, paths): | ||
397 | 375 | sources.DataSource.__init__(self, sys_cfg, distro, paths) | ||
398 | 376 | + self.default_user = None | ||
399 | 377 | + if distro: | ||
400 | 378 | + (users, _groups) = ug_util.normalize_users_groups(sys_cfg, distro) | ||
401 | 379 | + (self.default_user, _user_config) = ug_util.extract_default(users) | ||
402 | 380 | self.metadata = dict() | ||
403 | 381 | self.ds_cfg = util.mergemanydict([ | ||
404 | 382 | util.get_cfg_by_path(sys_cfg, ["datasource", "GCE"], {}), | ||
405 | 383 | BUILTIN_DS_CONFIG]) | ||
406 | 384 | self.metadata_address = self.ds_cfg['metadata_url'] | ||
407 | 385 | |||
408 | 386 | - # GCE takes sshKeys attribute in the format of '<user>:<public_key>' | ||
409 | 387 | - # so we have to trim each key to remove the username part | ||
410 | 388 | - def _trim_key(self, public_key): | ||
411 | 389 | - try: | ||
412 | 390 | - index = public_key.index(':') | ||
413 | 391 | - if index > 0: | ||
414 | 392 | - return public_key[(index + 1):] | ||
415 | 393 | - except: | ||
416 | 394 | - return public_key | ||
417 | 395 | - | ||
418 | 396 | def get_data(self): | ||
419 | 397 | - # GCE metadata server requires a custom header since v1 | ||
420 | 398 | - headers = {'X-Google-Metadata-Request': True} | ||
421 | 399 | - | ||
422 | 400 | - # url_map: (our-key, path, required) | ||
423 | 401 | - url_map = [ | ||
424 | 402 | - ('instance-id', 'instance/id', True), | ||
425 | 403 | - ('availability-zone', 'instance/zone', True), | ||
426 | 404 | - ('local-hostname', 'instance/hostname', True), | ||
427 | 405 | - ('public-keys', 'project/attributes/sshKeys', False), | ||
428 | 406 | - ('user-data', 'instance/attributes/user-data', False), | ||
429 | 407 | - ('user-data-encoding', 'instance/attributes/user-data-encoding', | ||
430 | 408 | - False), | ||
431 | 409 | - ] | ||
432 | 410 | - | ||
433 | 411 | - # if we cannot resolve the metadata server, then no point in trying | ||
434 | 412 | - if not util.is_resolvable_url(self.metadata_address): | ||
435 | 413 | - LOG.debug("%s is not resolvable", self.metadata_address) | ||
436 | 414 | - return False | ||
437 | 415 | - | ||
438 | 416 | - # iterate over url_map keys to get metadata items | ||
439 | 417 | - found = False | ||
440 | 418 | - for (mkey, path, required) in url_map: | ||
441 | 419 | - try: | ||
442 | 420 | - resp = url_helper.readurl(url=self.metadata_address + path, | ||
443 | 421 | - headers=headers) | ||
444 | 422 | - if resp.code == 200: | ||
445 | 423 | - found = True | ||
446 | 424 | - self.metadata[mkey] = resp.contents | ||
447 | 425 | - else: | ||
448 | 426 | - if required: | ||
449 | 427 | - msg = "required url %s returned code %s. not GCE" | ||
450 | 428 | - if not found: | ||
451 | 429 | - LOG.debug(msg, path, resp.code) | ||
452 | 430 | - else: | ||
453 | 431 | - LOG.warn(msg, path, resp.code) | ||
454 | 432 | - return False | ||
455 | 433 | - else: | ||
456 | 434 | - self.metadata[mkey] = None | ||
457 | 435 | - except url_helper.UrlError as e: | ||
458 | 436 | - if required: | ||
459 | 437 | - msg = "required url %s raised exception %s. not GCE" | ||
460 | 438 | - if not found: | ||
461 | 439 | - LOG.debug(msg, path, e) | ||
462 | 440 | - else: | ||
463 | 441 | - LOG.warn(msg, path, e) | ||
464 | 442 | - return False | ||
465 | 443 | - msg = "Failed to get %s metadata item: %s." | ||
466 | 444 | - LOG.debug(msg, path, e) | ||
467 | 445 | - | ||
468 | 446 | - self.metadata[mkey] = None | ||
469 | 447 | - | ||
470 | 448 | - if self.metadata['public-keys']: | ||
471 | 449 | - lines = self.metadata['public-keys'].splitlines() | ||
472 | 450 | - self.metadata['public-keys'] = [self._trim_key(k) for k in lines] | ||
473 | 451 | - | ||
474 | 452 | - if self.metadata['availability-zone']: | ||
475 | 453 | - self.metadata['availability-zone'] = self.metadata[ | ||
476 | 454 | - 'availability-zone'].split('/')[-1] | ||
477 | 455 | - | ||
478 | 456 | - encoding = self.metadata.get('user-data-encoding') | ||
479 | 457 | - if encoding: | ||
480 | 458 | - if encoding == 'base64': | ||
481 | 459 | - self.metadata['user-data'] = b64decode( | ||
482 | 460 | - self.metadata['user-data']) | ||
483 | 461 | + ret = util.log_time( | ||
484 | 462 | + LOG.debug, 'Crawl of GCE metadata service', | ||
485 | 463 | + read_md, kwargs={'address': self.metadata_address}) | ||
486 | 464 | + | ||
487 | 465 | + if not ret['success']: | ||
488 | 466 | + if ret['platform_reports_gce']: | ||
489 | 467 | + LOG.warning(ret['reason']) | ||
490 | 468 | else: | ||
491 | 469 | - LOG.warn('unknown user-data-encoding: %s, ignoring', encoding) | ||
492 | 470 | - | ||
493 | 471 | - return found | ||
494 | 472 | + LOG.debug(ret['reason']) | ||
495 | 473 | + return False | ||
496 | 474 | + self.metadata = ret['meta-data'] | ||
497 | 475 | + self.userdata_raw = ret['user-data'] | ||
498 | 476 | + return True | ||
499 | 477 | |||
500 | 478 | @property | ||
501 | 479 | def launch_index(self): | ||
502 | 480 | - # GCE does not provide lauch_index property | ||
503 | 481 | + # GCE does not provide lauch_index property. | ||
504 | 482 | return None | ||
505 | 483 | |||
506 | 484 | def get_instance_id(self): | ||
507 | 485 | return self.metadata['instance-id'] | ||
508 | 486 | |||
509 | 487 | def get_public_ssh_keys(self): | ||
510 | 488 | - return self.metadata['public-keys'] | ||
511 | 489 | + public_keys_data = self.metadata['public-keys-data'] | ||
512 | 490 | + return _parse_public_keys(public_keys_data, self.default_user) | ||
513 | 491 | |||
514 | 492 | - def get_hostname(self, fqdn=False, _resolve_ip=False): | ||
515 | 493 | - # GCE has long FDQN's and has asked for short hostnames | ||
516 | 494 | + def get_hostname(self, fqdn=False, resolve_ip=False, metadata_only=False): | ||
517 | 495 | + # GCE has long FDQN's and has asked for short hostnames. | ||
518 | 496 | return self.metadata['local-hostname'].split('.')[0] | ||
519 | 497 | |||
520 | 498 | - def get_userdata_raw(self): | ||
521 | 499 | - return self.metadata['user-data'] | ||
522 | 500 | - | ||
523 | 501 | @property | ||
524 | 502 | def availability_zone(self): | ||
525 | 503 | return self.metadata['availability-zone'] | ||
526 | 504 | @@ -145,12 +102,187 @@ | ||
527 | 505 | def region(self): | ||
528 | 506 | return self.availability_zone.rsplit('-', 1)[0] | ||
529 | 507 | |||
530 | 508 | -# Used to match classes to dependencies | ||
531 | 509 | + | ||
532 | 510 | +def _has_expired(public_key): | ||
533 | 511 | + # Check whether an SSH key is expired. Public key input is a single SSH | ||
534 | 512 | + # public key in the GCE specific key format documented here: | ||
535 | 513 | + # https://cloud.google.com/compute/docs/instances/adding-removing-ssh-keys#sshkeyformat | ||
536 | 514 | + try: | ||
537 | 515 | + # Check for the Google-specific schema identifier. | ||
538 | 516 | + schema, json_str = public_key.split(None, 3)[2:] | ||
539 | 517 | + except (ValueError, AttributeError): | ||
540 | 518 | + return False | ||
541 | 519 | + | ||
542 | 520 | + # Do not expire keys if they do not have the expected schema identifier. | ||
543 | 521 | + if schema != 'google-ssh': | ||
544 | 522 | + return False | ||
545 | 523 | + | ||
546 | 524 | + try: | ||
547 | 525 | + json_obj = json.loads(json_str) | ||
548 | 526 | + except ValueError: | ||
549 | 527 | + return False | ||
550 | 528 | + | ||
551 | 529 | + # Do not expire keys if there is no expriation timestamp. | ||
552 | 530 | + if 'expireOn' not in json_obj: | ||
553 | 531 | + return False | ||
554 | 532 | + | ||
555 | 533 | + expire_str = json_obj['expireOn'] | ||
556 | 534 | + format_str = '%Y-%m-%dT%H:%M:%S+0000' | ||
557 | 535 | + try: | ||
558 | 536 | + expire_time = datetime.datetime.strptime(expire_str, format_str) | ||
559 | 537 | + except ValueError: | ||
560 | 538 | + return False | ||
561 | 539 | + | ||
562 | 540 | + # Expire the key if and only if we have exceeded the expiration timestamp. | ||
563 | 541 | + return datetime.datetime.utcnow() > expire_time | ||
564 | 542 | + | ||
565 | 543 | + | ||
566 | 544 | +def _parse_public_keys(public_keys_data, default_user=None): | ||
567 | 545 | + # Parse the SSH key data for the default user account. Public keys input is | ||
568 | 546 | + # a list containing SSH public keys in the GCE specific key format | ||
569 | 547 | + # documented here: | ||
570 | 548 | + # https://cloud.google.com/compute/docs/instances/adding-removing-ssh-keys#sshkeyformat | ||
571 | 549 | + public_keys = [] | ||
572 | 550 | + if not public_keys_data: | ||
573 | 551 | + return public_keys | ||
574 | 552 | + for public_key in public_keys_data: | ||
575 | 553 | + if not public_key or not all(ord(c) < 128 for c in public_key): | ||
576 | 554 | + continue | ||
577 | 555 | + split_public_key = public_key.split(':', 1) | ||
578 | 556 | + if len(split_public_key) != 2: | ||
+            continue
+        user, key = split_public_key
+        if user in ('cloudinit', default_user) and not _has_expired(key):
+            public_keys.append(key)
+    return public_keys
+
+
+def read_md(address=None, platform_check=True):
+
+    if address is None:
+        address = MD_V1_URL
+
+    ret = {'meta-data': None, 'user-data': None,
+           'success': False, 'reason': None}
+    ret['platform_reports_gce'] = platform_reports_gce()
+
+    if platform_check and not ret['platform_reports_gce']:
+        ret['reason'] = "Not running on GCE."
+        return ret
+
+    # If we cannot resolve the metadata server, then no point in trying.
+    if not util.is_resolvable_url(address):
+        LOG.debug("%s is not resolvable", address)
+        ret['reason'] = 'address "%s" is not resolvable' % address
+        return ret
+
+    # url_map: (our-key, path, required, is_text, is_recursive)
+    url_map = [
+        ('instance-id', ('instance/id',), True, True, False),
+        ('availability-zone', ('instance/zone',), True, True, False),
+        ('local-hostname', ('instance/hostname',), True, True, False),
+        ('instance-data', ('instance/attributes',), False, False, True),
+        ('project-data', ('project/attributes',), False, False, True),
+    ]
+
+    metadata_fetcher = GoogleMetadataFetcher(address)
+    md = {}
+    # Iterate over url_map keys to get metadata items.
+    for (mkey, paths, required, is_text, is_recursive) in url_map:
+        value = None
+        for path in paths:
+            new_value = metadata_fetcher.get_value(path, is_text, is_recursive)
+            if new_value is not None:
+                value = new_value
+        if required and value is None:
+            msg = "required key %s returned nothing. not GCE"
+            ret['reason'] = msg % mkey
+            return ret
+        md[mkey] = value
+
+    instance_data = json.loads(md['instance-data'] or '{}')
+    project_data = json.loads(md['project-data'] or '{}')
+    valid_keys = [instance_data.get('sshKeys'), instance_data.get('ssh-keys')]
+    block_project = instance_data.get('block-project-ssh-keys', '').lower()
+    if block_project != 'true' and not instance_data.get('sshKeys'):
+        valid_keys.append(project_data.get('ssh-keys'))
+        valid_keys.append(project_data.get('sshKeys'))
+    public_keys_data = '\n'.join([key for key in valid_keys if key])
+    md['public-keys-data'] = public_keys_data.splitlines()
+
+    if md['availability-zone']:
+        md['availability-zone'] = md['availability-zone'].split('/')[-1]
+
+    if 'user-data' in instance_data:
+        # instance_data was json, so values are all utf-8 strings.
+        ud = instance_data['user-data'].encode("utf-8")
+        encoding = instance_data.get('user-data-encoding')
+        if encoding == 'base64':
+            ud = b64decode(ud)
+        elif encoding:
+            LOG.warning('unknown user-data-encoding: %s, ignoring', encoding)
+        ret['user-data'] = ud
+
+    ret['meta-data'] = md
+    ret['success'] = True
+
+    return ret
+
+
+def platform_reports_gce():
+    pname = util.read_dmi_data('system-product-name') or "N/A"
+    if pname == "Google Compute Engine":
+        return True
+
+    # system-product-name is not always guaranteed (LP: #1674861)
+    serial = util.read_dmi_data('system-serial-number') or "N/A"
+    if serial.startswith("GoogleCloud-"):
+        return True
+
+    LOG.debug("Not running on google cloud. product-name=%s serial=%s",
+              pname, serial)
+    return False
+
+
+# Used to match classes to dependencies.
 datasources = [
     (DataSourceGCE, (sources.DEP_FILESYSTEM, sources.DEP_NETWORK)),
 ]
 
 
-# Return a list of data sources that match this set of dependencies
+# Return a list of data sources that match this set of dependencies.
 def get_datasource_list(depends):
     return sources.list_from_depends(depends, datasources)
+
+
+if __name__ == "__main__":
+    import argparse
+    import sys
+
+    from base64 import b64encode
+
+    parser = argparse.ArgumentParser(description='Query GCE Metadata Service')
+    parser.add_argument("--endpoint", metavar="URL",
+                        help="The url of the metadata service.",
+                        default=MD_V1_URL)
+    parser.add_argument("--no-platform-check", dest="platform_check",
+                        help="Ignore smbios platform check",
+                        action='store_false', default=True)
+    args = parser.parse_args()
+    data = read_md(address=args.endpoint, platform_check=args.platform_check)
+    if 'user-data' in data:
+        # user-data is bytes not string like other things. Handle it specially.
+        # If it can be represented as utf-8 then do so. Otherwise print base64
+        # encoded value in the key user-data-b64.
+        try:
+            data['user-data'] = data['user-data'].decode()
+        except UnicodeDecodeError:
+            sys.stderr.write("User-data cannot be decoded. "
+                             "Writing as base64\n")
+            # b64encode returns a bytes value. Decode to get the string.
+            data['user-data-b64'] = b64encode(data['user-data']).decode()
+            del data['user-data']
+
+    print(json.dumps(data, indent=1, sort_keys=True, separators=(',', ': ')))
+
+# vi: ts=4 expandtab
--- a/cloudinit/util.py
+++ b/cloudinit/util.py
@@ -49,6 +49,7 @@
 import time
 import urlparse
 
+import six
 import yaml
 
 from cloudinit import importer
--- a/debian/control
+++ b/debian/control
@@ -17,6 +17,7 @@
     python-oauth,
     python-prettytable,
     python-setuptools,
+    python-six,
     python-requests,
     python-yaml
 XS-Python-Version: all
diff --git a/debian/patches/series b/debian/patches/series
index d20f78b..21d6b4f 100644
--- a/debian/patches/series
+++ b/debian/patches/series
@@ -24,3 +24,4 @@ lp-1551419-azure-handle-flipped-uuid-endianness.patch
 lp-1553158-bigstep.patch
 lp-1581200-gce-metadatafqdn.patch
 lp-1603222-fix-ephemeral-disk-fstab.patch
+lp-1781039-gce-datasource-update.patch
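Taken together, the hunks above implement the key handling described in the commit message: 'user:key' entries for both 'cloudinit' and the default user are honored, and project-level keys are only consulted when the instance metadata defines no legacy sshKeys and block-project-ssh-keys is not 'true'. A rough, self-contained sketch of that selection logic follows; the helper names are mine (the patch's own functions are _parse_public_keys() and read_md()), and the patch's _has_expired() check is omitted for brevity:

```python
def parse_public_keys(public_keys_data, default_user=None):
    """Keep 'user:key' entries addressed to 'cloudinit' or the default user.

    Simplified from the patch's _parse_public_keys(): the real code also
    drops expired keys via _has_expired(), which is omitted here.
    """
    public_keys = []
    for public_key in public_keys_data or []:
        split_public_key = public_key.split(':', 1)
        if len(split_public_key) != 2:
            continue
        user, key = split_public_key
        if user in ('cloudinit', default_user):
            public_keys.append(key)
    return public_keys


def select_key_data(instance_data, project_data):
    """Mirror the precedence in the read_md() hunk: instance keys always
    count; project keys only if no instance 'sshKeys' and not blocked."""
    valid_keys = [instance_data.get('sshKeys'), instance_data.get('ssh-keys')]
    block_project = instance_data.get('block-project-ssh-keys', '').lower()
    if block_project != 'true' and not instance_data.get('sshKeys'):
        valid_keys.append(project_data.get('ssh-keys'))
        valid_keys.append(project_data.get('sshKeys'))
    return '\n'.join(k for k in valid_keys if k).splitlines()
```

With default_user='ubuntu', a mixed key list yields only the keys destined for the 'ubuntu' user's authorized_keys, which is the behavior the patch brings to Trusty.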
You've added use of 'six' (package python-six) but have not added a
dependency. Normally, we can't add a dependency in a SRU. I think we can
probably get away with it though because there is a direct dependency
chain already in place in trusty through cloud-init.
cloud-init -> python-requests -> python-urllib3 -> python-six
Even so, we will have to add this to debian/control.
There are some other comments inline below.
Feel free to ping in IRC if you have any questions, or follow up here.
Thanks, and sorry for the slow reply.
Scott
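The review also asks that the change be tested both with and without user-data. The decoding branch of the read_md() hunk above can be exercised in isolation; a minimal sketch (the function name is mine, not part of the patch):

```python
from base64 import b64decode


def decode_user_data(instance_data):
    """Decode the 'user-data' attribute the way the patched read_md() does.

    Values arrive as JSON text; 'user-data-encoding' == 'base64' means the
    payload must be base64-decoded to bytes.
    """
    if 'user-data' not in instance_data:
        return None
    ud = instance_data['user-data'].encode('utf-8')
    encoding = instance_data.get('user-data-encoding')
    if encoding == 'base64':
        ud = b64decode(ud)
    elif encoding:
        # The real code logs a warning and leaves the bytes untouched.
        pass
    return ud
```

Running it against attribute dicts with plain, base64, and absent user-data covers the three cases the reviewer wants verified.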