Merge lp:~harlowja/cloud-init/query-tool-is-back into lp:~cloud-init-dev/cloud-init/trunk
Status: Rejected
Rejected by: Chad Smith
Proposed branch: lp:~harlowja/cloud-init/query-tool-is-back
Merge into: lp:~cloud-init-dev/cloud-init/trunk
Diff against target: 395 lines (+216/-33), 6 files modified:
  Requires (+1/-0), bin/cloud-init (+36/-10), cloudinit/pprint.py (+107/-0), cloudinit/sources/__init__.py (+30/-0), cloudinit/stages.py (+37/-21), cloudinit/util.py (+5/-2)
To merge this branch: bzr merge lp:~harlowja/cloud-init/query-tool-is-back
Related bugs: (none)
| Reviewer | Review Type | Date Requested | Status |
|---|---|---|---|
| Server Team CI bot | continuous-integration | | Needs Fixing |
| cloud-init Commiters | | | Pending |

Review via email: mp+123394@code.launchpad.net
Commit message
Description of the change
- 648. By Joshua Harlow

  Updated with no encryption and clearing out of the userdata/raw fields to prevent access when querying.

- 649. By Joshua Harlow

  Fix some pylint issues and update the datasource query to just return a map describing the datasource and use the borrowed pprint code to show this map in a nice CLI format. Also allow for printing of the init configuration as well as the datasource via the query entrypoint.
Scott Moser (smoser) wrote:
Joshua Harlow (harlowja) wrote:
Will try to fix these adjustments over the holidays and/or do some stuff differently.
Server Team CI bot (server-team-bot) wrote:
FAILED: Continuous integration, rev:649
No commit message was specified in the merge proposal. Click on the following link and set the commit message (if you want a jenkins rebuild you need to trigger it yourself):
https:/
https:/
Executed test runs:
None: https:/
Click here to trigger a rebuild:
https:/
Chad Smith (chad.smith) wrote:
Hello,
Thank you for taking the time to contribute to cloud-init. Cloud-init has moved its revision control system to git. As a result, we are marking all bzr merge proposals as 'rejected'. If you would like to re-submit this proposal for review, please do so by following the current HACKING documentation at http://
This branch will no longer apply against master, the cloud-init utility has moved to cloudinit/
Unmerged revisions
- 649. By Joshua Harlow

  Fix some pylint issues and update the datasource query to just return a map describing the datasource and use the borrowed pprint code to show this map in a nice CLI format. Also allow for printing of the init configuration as well as the datasource via the query entrypoint.

- 648. By Joshua Harlow

  Updated with no encryption and clearing out of the userdata/raw fields to prevent access when querying.

- 647. By Joshua Harlow

  Start adding a query entrypoint with encryption using aes of the userdata (raw and not raw) if possible, using the provided user's (currently root) private ssh key sha256 hash as the secret (openssl was tried, didn't work due to key file formats being all different).
Preview Diff
1 | === modified file 'Requires' | |||
2 | --- Requires 2012-07-09 20:41:45 +0000 | |||
3 | +++ Requires 2012-09-10 21:48:19 +0000 | |||
4 | @@ -26,3 +26,4 @@ | |||
5 | 26 | 26 | ||
6 | 27 | # The new main entrypoint uses argparse instead of optparse | 27 | # The new main entrypoint uses argparse instead of optparse |
7 | 28 | argparse | 28 | argparse |
8 | 29 | |||
9 | 29 | 30 | ||
10 | === modified file 'bin/cloud-init' | |||
11 | --- bin/cloud-init 2012-08-10 03:48:01 +0000 | |||
12 | +++ bin/cloud-init 2012-09-10 21:48:19 +0000 | |||
13 | @@ -35,6 +35,7 @@ | |||
14 | 35 | 35 | ||
15 | 36 | from cloudinit import log as logging | 36 | from cloudinit import log as logging |
16 | 37 | from cloudinit import netinfo | 37 | from cloudinit import netinfo |
17 | 38 | from cloudinit import pprint as cp | ||
18 | 38 | from cloudinit import sources | 39 | from cloudinit import sources |
19 | 39 | from cloudinit import stages | 40 | from cloudinit import stages |
20 | 40 | from cloudinit import templater | 41 | from cloudinit import templater |
21 | @@ -52,11 +53,10 @@ | |||
22 | 52 | # Module section template | 53 | # Module section template |
23 | 53 | MOD_SECTION_TPL = "cloud_%s_modules" | 54 | MOD_SECTION_TPL = "cloud_%s_modules" |
24 | 54 | 55 | ||
30 | 55 | # Things u can query on | 56 | # Things u can query on... |
31 | 56 | QUERY_DATA_TYPES = [ | 57 | QUERY_NAMES = [ |
32 | 57 | 'data', | 58 | 'ds', |
33 | 58 | 'data_raw', | 59 | 'cfg', |
29 | 59 | 'instance_id', | ||
34 | 60 | ] | 60 | ] |
35 | 61 | 61 | ||
36 | 62 | # Frequency shortname to full name | 62 | # Frequency shortname to full name |
37 | @@ -342,9 +342,35 @@ | |||
38 | 342 | return run_module_section(mods, name, name) | 342 | return run_module_section(mods, name, name) |
39 | 343 | 343 | ||
40 | 344 | 344 | ||
44 | 345 | def main_query(name, _args): | 345 | def main_query(name, args): |
45 | 346 | raise NotImplementedError(("Action '%s' is not" | 346 | w_msg = welcome_format(name) |
46 | 347 | " currently implemented") % (name)) | 347 | welcome(name, msg=w_msg) |
47 | 348 | items = args.what | ||
48 | 349 | if not items: | ||
49 | 350 | return 1 | ||
50 | 351 | init = stages.Init() | ||
51 | 352 | ds = None | ||
52 | 353 | try: | ||
53 | 354 | # Try the 'privileged' datasource first | ||
54 | 355 | ds = init.fetch(attempt_find=False) | ||
55 | 356 | except Exception: | ||
56 | 357 | pass | ||
57 | 358 | if not ds: | ||
58 | 359 | # Use the safer version (if its there) | ||
59 | 360 | ds = init.fetch(safe=True) | ||
60 | 361 | if not ds: | ||
61 | 362 | print("No datasource available for querying!") | ||
62 | 363 | return 1 | ||
63 | 364 | for i in items: | ||
64 | 365 | if i == 'ds': | ||
65 | 366 | print("Datasource details") | ||
66 | 367 | print("-" * len("Datasource details")) | ||
67 | 368 | cp.pprint(ds.describe()) | ||
68 | 369 | elif i == 'cfg': | ||
69 | 370 | print("Configuration details") | ||
70 | 371 | print("-" * len("Configuration details")) | ||
71 | 372 | cp.pprint(init.cfg) | ||
72 | 373 | return 0 | ||
73 | 348 | 374 | ||
74 | 349 | 375 | ||
75 | 350 | def main_single(name, args): | 376 | def main_single(name, args): |
76 | @@ -464,10 +490,10 @@ | |||
77 | 464 | parser_query = subparsers.add_parser('query', | 490 | parser_query = subparsers.add_parser('query', |
78 | 465 | help=('query information stored ' | 491 | help=('query information stored ' |
79 | 466 | 'in cloud-init')) | 492 | 'in cloud-init')) |
81 | 467 | parser_query.add_argument("--name", '-n', action="store", | 493 | parser_query.add_argument("--what", '-w', action="store", |
82 | 468 | help="item name to query on", | 494 | help="item name to query on", |
83 | 469 | required=True, | 495 | required=True, |
85 | 470 | choices=QUERY_DATA_TYPES) | 496 | choices=sorted(QUERY_NAMES)) |
86 | 471 | parser_query.set_defaults(action=('query', main_query)) | 497 | parser_query.set_defaults(action=('query', main_query)) |
87 | 472 | 498 | ||
88 | 473 | # This subcommand allows you to run a single module | 499 | # This subcommand allows you to run a single module |
89 | 474 | 500 | ||
90 | === added file 'cloudinit/pprint.py' | |||
91 | --- cloudinit/pprint.py 1970-01-01 00:00:00 +0000 | |||
92 | +++ cloudinit/pprint.py 2012-09-10 21:48:19 +0000 | |||
93 | @@ -0,0 +1,107 @@ | |||
94 | 1 | # vim: tabstop=4 shiftwidth=4 softtabstop=4 | ||
95 | 2 | |||
96 | 3 | # Copyright (C) 2012 Yahoo! Inc. All Rights Reserved. | ||
97 | 4 | # | ||
98 | 5 | # Licensed under the Apache License, Version 2.0 (the "License"); you may | ||
99 | 6 | # not use this file except in compliance with the License. You may obtain | ||
100 | 7 | # a copy of the License at | ||
101 | 8 | # | ||
102 | 9 | # http://www.apache.org/licenses/LICENSE-2.0 | ||
103 | 10 | # | ||
104 | 11 | # Unless required by applicable law or agreed to in writing, software | ||
105 | 12 | # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT | ||
106 | 13 | # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the | ||
107 | 14 | # License for the specific language governing permissions and limitations | ||
108 | 15 | # under the License. | ||
109 | 16 | |||
110 | 17 | |||
111 | 18 | def center_text(text, fill, max_len): | ||
112 | 19 | return '{0:{fill}{align}{size}}'.format(text, | ||
113 | 20 | fill=fill, align="^", size=max_len) | ||
114 | 21 | |||
115 | 22 | |||
116 | 23 | def _pformat_list(lst, item_max_len): | ||
117 | 24 | lines = [] | ||
118 | 25 | if not lst: | ||
119 | 26 | lines.append("+------+") | ||
120 | 27 | lines.append("'------'") | ||
121 | 28 | return "\n".join(lines) | ||
122 | 29 | entries = [] | ||
123 | 30 | max_len = 0 | ||
124 | 31 | for i in lst: | ||
125 | 32 | e = pformat(i, item_max_len) | ||
126 | 33 | for v in e.split("\n"): | ||
127 | 34 | max_len = max(max_len, len(v) + 2) | ||
128 | 35 | entries.append(e) | ||
129 | 36 | lines.append("+%s+" % ("-" * (max_len))) | ||
130 | 37 | for e in entries: | ||
131 | 38 | for line in e.split("\n"): | ||
132 | 39 | lines.append("|%s|" % (center_text(line, ' ', max_len))) | ||
133 | 40 | lines.append("'%s'" % ("-" * (max_len))) | ||
134 | 41 | return "\n".join(lines) | ||
135 | 42 | |||
136 | 43 | |||
137 | 44 | |||
138 | 45 | def _pformat_hash(hsh, item_max_len): | ||
139 | 46 | lines = [] | ||
140 | 47 | if not hsh: | ||
141 | 48 | lines.append("+-----+-----+") | ||
142 | 49 | lines.append("'-----+-----'") | ||
143 | 50 | return "\n".join(lines) | ||
144 | 51 | # Figure out the lengths to place items in... | ||
145 | 52 | max_key_len = 0 | ||
146 | 53 | max_value_len = 0 | ||
147 | 54 | entries = [] | ||
148 | 55 | for (k, v) in hsh.items(): | ||
149 | 56 | entry = ("%s" % (_pformat_escape(k, item_max_len)), | ||
150 | 57 | "%s" % (pformat(v, item_max_len))) | ||
151 | 58 | max_key_len = max(max_key_len, len(entry[0]) + 2) | ||
152 | 59 | for v in entry[1].split("\n"): | ||
153 | 60 | max_value_len = max(max_value_len, len(v) + 2) | ||
154 | 61 | entries.append(entry) | ||
155 | 62 | # Now actually do the placement since we have the lengths | ||
156 | 63 | lines.append("+%s+%s+" % ("-" * max_key_len, "-" * max_value_len)) | ||
157 | 64 | for (key, value) in entries: | ||
158 | 65 | value_lines = value.split("\n") | ||
159 | 66 | lines.append("|%s|%s|" % (center_text(key, ' ', max_key_len), | ||
160 | 67 | center_text(value_lines[0], ' ', | ||
161 | 68 | max_value_len))) | ||
162 | 69 | if len(value_lines) > 1: | ||
163 | 70 | for j in range(1, len(value_lines)): | ||
164 | 71 | lines.append("|%s|%s|" % (center_text("-", ' ', max_key_len), | ||
165 | 72 | center_text(value_lines[j], ' ', | ||
166 | 73 | max_value_len))) | ||
167 | 74 | lines.append("'%s+%s'" % ("-" * max_key_len, "-" * max_value_len)) | ||
168 | 75 | return "\n".join(lines) | ||
169 | 76 | |||
170 | 77 | |||
171 | 78 | def _pformat_escape(item, item_max_len): | ||
172 | 79 | item = _pformat_simple(item, item_max_len) | ||
173 | 80 | item = item.replace("\n", "\\n") | ||
174 | 81 | item = item.replace("\t", "\\t") | ||
175 | 82 | return item | ||
176 | 83 | |||
177 | 84 | |||
178 | 85 | def _pformat_simple(item, item_max_len): | ||
179 | 86 | if item_max_len is None or item_max_len < 0: | ||
180 | 87 | return "%s" % (item) | ||
181 | 88 | if item_max_len == 0: | ||
182 | 89 | return '' | ||
183 | 90 | item_str = "%s" % (item) | ||
184 | 91 | if len(item_str) > item_max_len: | ||
185 | 92 | # TODO(harlowja) use utf8 ellipse or '...'?? | ||
186 | 93 | item_str = item_str[0:item_max_len] + '...' | ||
187 | 94 | return item_str | ||
188 | 95 | |||
189 | 96 | |||
190 | 97 | def pformat(item, item_max_len=None): | ||
191 | 98 | if isinstance(item, (list, set, tuple)): | ||
192 | 99 | return _pformat_list(item, item_max_len) | ||
193 | 100 | elif isinstance(item, (dict)): | ||
194 | 101 | return _pformat_hash(item, item_max_len) | ||
195 | 102 | else: | ||
196 | 103 | return _pformat_simple(item, item_max_len) | ||
197 | 104 | |||
198 | 105 | |||
199 | 106 | def pprint(item, item_max_len=None): | ||
200 | 107 | print("%s" % (pformat(item, item_max_len))) | ||
201 | 0 | 108 | ||
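The table layout produced by cloudinit/pprint.py can be seen with a small standalone re-implementation of its hash formatting (a condensed sketch, not the module itself; multi-line values, escaping, and the list formatter are left out, and keys are sorted here for deterministic output):

```python
def center_text(text, fill, max_len):
    # Same centering trick as cloudinit/pprint.py: pad 'text' with
    # 'fill' on both sides out to 'max_len' characters.
    return '{0:{fill}{align}{size}}'.format(text,
                                            fill=fill, align="^",
                                            size=max_len)


def pformat_hash(hsh):
    # Render a dict as an ASCII key/value table, in the style of
    # _pformat_hash from the branch (simplified: single-line values).
    entries = [(str(k), str(v)) for (k, v) in sorted(hsh.items())]
    key_w = max(len(k) for (k, _) in entries) + 2
    val_w = max(len(v) for (_, v) in entries) + 2
    lines = ["+%s+%s+" % ("-" * key_w, "-" * val_w)]
    for (k, v) in entries:
        lines.append("|%s|%s|" % (center_text(k, ' ', key_w),
                                  center_text(v, ' ', val_w)))
    lines.append("'%s+%s'" % ("-" * key_w, "-" * val_w))
    return "\n".join(lines)


print(pformat_hash({'instance-id': 'i-1234', 'name': 'Ec2'}))
```

This prints a two-column box with each key and value centered in its cell, which is the "nice CLI format" the commit message refers to.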
202 | === modified file 'cloudinit/sources/__init__.py' | |||
203 | --- cloudinit/sources/__init__.py 2012-08-28 03:51:00 +0000 | |||
204 | +++ cloudinit/sources/__init__.py 2012-09-10 21:48:19 +0000 | |||
205 | @@ -23,6 +23,7 @@ | |||
206 | 23 | from email.mime.multipart import MIMEMultipart | 23 | from email.mime.multipart import MIMEMultipart |
207 | 24 | 24 | ||
208 | 25 | import abc | 25 | import abc |
209 | 26 | import copy | ||
210 | 26 | 27 | ||
211 | 27 | from cloudinit import importer | 28 | from cloudinit import importer |
212 | 28 | from cloudinit import log as logging | 29 | from cloudinit import log as logging |
213 | @@ -56,6 +57,7 @@ | |||
214 | 56 | name = util.obj_name(self) | 57 | name = util.obj_name(self) |
215 | 57 | if name.startswith(DS_PREFIX): | 58 | if name.startswith(DS_PREFIX): |
216 | 58 | name = name[len(DS_PREFIX):] | 59 | name = name[len(DS_PREFIX):] |
217 | 60 | self.name = name | ||
218 | 59 | self.ds_cfg = util.get_cfg_by_path(self.sys_cfg, | 61 | self.ds_cfg = util.get_cfg_by_path(self.sys_cfg, |
219 | 60 | ("datasource", name), {}) | 62 | ("datasource", name), {}) |
220 | 61 | if not ud_proc: | 63 | if not ud_proc: |
221 | @@ -63,6 +65,16 @@ | |||
222 | 63 | else: | 65 | else: |
223 | 64 | self.ud_proc = ud_proc | 66 | self.ud_proc = ud_proc |
224 | 65 | 67 | ||
225 | 68 | def copy(self, safe=False): | ||
226 | 69 | nds = copy.deepcopy(self) | ||
227 | 70 | if not safe: | ||
228 | 71 | return nds | ||
229 | 72 | # Clear it out, nothing to see here... | ||
230 | 73 | nds.ud_proc = None | ||
231 | 74 | nds.userdata = None | ||
232 | 75 | nds.userdata_raw = None | ||
233 | 76 | return nds | ||
234 | 77 | |||
235 | 66 | def get_userdata(self, apply_filter=False): | 78 | def get_userdata(self, apply_filter=False): |
236 | 67 | if self.userdata is None: | 79 | if self.userdata is None: |
237 | 68 | self.userdata = self.ud_proc.process(self.get_userdata_raw()) | 80 | self.userdata = self.ud_proc.process(self.get_userdata_raw()) |
238 | @@ -78,6 +90,24 @@ | |||
239 | 78 | return self.metadata['launch-index'] | 90 | return self.metadata['launch-index'] |
240 | 79 | return None | 91 | return None |
241 | 80 | 92 | ||
242 | 93 | # describes the datasource as a nice map | ||
243 | 94 | # with basics such as userdata, raw userdata, | ||
244 | 95 | # anything else that a subclass can provide... | ||
245 | 96 | def describe(self): | ||
246 | 97 | return { | ||
247 | 98 | 'hostname': self.get_hostname(), | ||
248 | 99 | 'instance-id': self.get_instance_id(), | ||
249 | 100 | 'launch-index': self.launch_index, | ||
250 | 101 | 'locale': self.get_locale(), | ||
251 | 102 | 'metadata': self.metadata, | ||
252 | 103 | 'name': self.name, | ||
253 | 104 | 'package-mirror-info': self.get_package_mirror_info(), | ||
254 | 105 | 'public-ssh-keys': self.get_public_ssh_keys(), | ||
255 | 106 | 'user-data': self.userdata, | ||
256 | 107 | 'user-data-raw': self.userdata_raw, | ||
257 | 108 | 'configuration': self.ds_cfg, | ||
258 | 109 | } | ||
259 | 110 | |||
260 | 81 | def _filter_userdata(self, processed_ud): | 111 | def _filter_userdata(self, processed_ud): |
261 | 82 | filters = [ | 112 | filters = [ |
262 | 83 | launch_index.Filter(util.safe_int(self.launch_index)), | 113 | launch_index.Filter(util.safe_int(self.launch_index)), |
263 | 84 | 114 | ||
264 | === modified file 'cloudinit/stages.py' | |||
265 | --- cloudinit/stages.py 2012-08-26 22:04:06 +0000 | |||
266 | +++ cloudinit/stages.py 2012-09-10 21:48:19 +0000 | |||
267 | @@ -60,6 +60,8 @@ | |||
268 | 60 | self._distro = None | 60 | self._distro = None |
269 | 61 | # Created only when a fetch occurs | 61 | # Created only when a fetch occurs |
270 | 62 | self.datasource = None | 62 | self.datasource = None |
271 | 63 | # Only created if asked and available | ||
272 | 64 | self.safe_datasource = None | ||
273 | 63 | 65 | ||
274 | 64 | @property | 66 | @property |
275 | 65 | def distro(self): | 67 | def distro(self): |
276 | @@ -170,11 +172,11 @@ | |||
277 | 170 | base_cfg=self._read_base_cfg()) | 172 | base_cfg=self._read_base_cfg()) |
278 | 171 | return merger.cfg | 173 | return merger.cfg |
279 | 172 | 174 | ||
281 | 173 | def _restore_from_cache(self): | 175 | def _restore_from_cache(self, name): |
282 | 174 | # We try to restore from a current link and static path | 176 | # We try to restore from a current link and static path |
283 | 175 | # by using the instance link, if purge_cache was called | 177 | # by using the instance link, if purge_cache was called |
284 | 176 | # the file wont exist. | 178 | # the file wont exist. |
286 | 177 | pickled_fn = self.paths.get_ipath_cur('obj_pkl') | 179 | pickled_fn = self.paths.get_ipath_cur(name) |
287 | 178 | pickle_contents = None | 180 | pickle_contents = None |
288 | 179 | try: | 181 | try: |
289 | 180 | pickle_contents = util.load_file(pickled_fn) | 182 | pickle_contents = util.load_file(pickled_fn) |
290 | @@ -190,21 +192,18 @@ | |||
291 | 190 | util.logexc(LOG, "Failed loading pickled blob from %s", pickled_fn) | 192 | util.logexc(LOG, "Failed loading pickled blob from %s", pickled_fn) |
292 | 191 | return None | 193 | return None |
293 | 192 | 194 | ||
305 | 193 | def _write_to_cache(self): | 195 | def _write_to_cache(self, safe): |
306 | 194 | if not self.datasource: | 196 | if not safe: |
307 | 195 | return False | 197 | fmode = 0400 |
308 | 196 | pickled_fn = self.paths.get_ipath_cur("obj_pkl") | 198 | pickled_fn = self.paths.get_ipath_cur("obj_pkl") |
309 | 197 | try: | 199 | else: |
310 | 198 | pk_contents = pickle.dumps(self.datasource) | 200 | fmode = 0644 |
311 | 199 | except Exception: | 201 | pickled_fn = self.paths.get_ipath_cur("safe_obj_pkl") |
312 | 200 | util.logexc(LOG, "Failed pickling datasource %s", self.datasource) | 202 | try: |
313 | 201 | return False | 203 | pk_contents = pickle.dumps(self.datasource.copy(safe)) |
314 | 202 | try: | 204 | util.write_file(pickled_fn, pk_contents, mode=fmode) |
304 | 203 | util.write_file(pickled_fn, pk_contents, mode=0400) | ||
315 | 204 | except Exception: | 205 | except Exception: |
316 | 205 | util.logexc(LOG, "Failed pickling datasource to %s", pickled_fn) | 206 | util.logexc(LOG, "Failed pickling datasource to %s", pickled_fn) |
317 | 206 | return False | ||
318 | 207 | return True | ||
319 | 208 | 207 | ||
320 | 209 | def _get_datasources(self): | 208 | def _get_datasources(self): |
321 | 210 | # Any config provided??? | 209 | # Any config provided??? |
322 | @@ -216,12 +215,23 @@ | |||
323 | 216 | cfg_list = self.cfg.get('datasource_list') or [] | 215 | cfg_list = self.cfg.get('datasource_list') or [] |
324 | 217 | return (cfg_list, pkg_list) | 216 | return (cfg_list, pkg_list) |
325 | 218 | 217 | ||
327 | 219 | def _get_data_source(self): | 218 | def _get_safe_datasource(self): |
328 | 219 | if self.safe_datasource: | ||
329 | 220 | return self.safe_datasource | ||
330 | 221 | ds = self._restore_from_cache('safe_obj_pkl') | ||
331 | 222 | if ds: | ||
332 | 223 | LOG.debug("Restored from cache, safe datasource: %s", ds) | ||
333 | 224 | self.safe_datasource = ds | ||
334 | 225 | return ds | ||
335 | 226 | |||
336 | 227 | def _get_data_source(self, attempt_find): | ||
337 | 220 | if self.datasource: | 228 | if self.datasource: |
338 | 221 | return self.datasource | 229 | return self.datasource |
340 | 222 | ds = self._restore_from_cache() | 230 | ds = self._restore_from_cache('obj_pkl') |
341 | 223 | if ds: | 231 | if ds: |
342 | 224 | LOG.debug("Restored from cache, datasource: %s", ds) | 232 | LOG.debug("Restored from cache, datasource: %s", ds) |
343 | 233 | if not ds and not attempt_find: | ||
344 | 234 | return None | ||
345 | 225 | if not ds: | 235 | if not ds: |
346 | 226 | (cfg_list, pkg_list) = self._get_datasources() | 236 | (cfg_list, pkg_list) = self._get_datasources() |
347 | 227 | # Deep copy so that user-data handlers can not modify | 237 | # Deep copy so that user-data handlers can not modify |
348 | @@ -298,8 +308,11 @@ | |||
349 | 298 | "%s\n" % (previous_iid)) | 308 | "%s\n" % (previous_iid)) |
350 | 299 | return iid | 309 | return iid |
351 | 300 | 310 | ||
354 | 301 | def fetch(self): | 311 | def fetch(self, safe=False, attempt_find=True): |
355 | 302 | return self._get_data_source() | 312 | if not safe: |
356 | 313 | return self._get_data_source(attempt_find) | ||
357 | 314 | else: | ||
358 | 315 | return self._get_safe_datasource() | ||
359 | 303 | 316 | ||
360 | 304 | def instancify(self): | 317 | def instancify(self): |
361 | 305 | return self._reflect_cur_instance() | 318 | return self._reflect_cur_instance() |
362 | @@ -311,8 +324,11 @@ | |||
363 | 311 | self.distro, helpers.Runners(self.paths)) | 324 | self.distro, helpers.Runners(self.paths)) |
364 | 312 | 325 | ||
365 | 313 | def update(self): | 326 | def update(self): |
368 | 314 | if not self._write_to_cache(): | 327 | if not self.datasource: |
369 | 315 | return | 328 | raise RuntimeError(("Unable to update with the given datasource, " |
370 | 329 | "no datasource fetched!")) | ||
371 | 330 | self._write_to_cache(False) | ||
372 | 331 | self._write_to_cache(True) | ||
373 | 316 | self._store_userdata() | 332 | self._store_userdata() |
374 | 317 | 333 | ||
375 | 318 | def _store_userdata(self): | 334 | def _store_userdata(self): |
376 | 319 | 335 | ||
377 | === modified file 'cloudinit/util.py' | |||
378 | --- cloudinit/util.py 2012-08-28 03:51:00 +0000 | |||
379 | +++ cloudinit/util.py 2012-09-10 21:48:19 +0000 | |||
380 | @@ -1093,10 +1093,13 @@ | |||
381 | 1093 | log.debug(msg, exc_info=1, *args) | 1093 | log.debug(msg, exc_info=1, *args) |
382 | 1094 | 1094 | ||
383 | 1095 | 1095 | ||
385 | 1096 | def hash_blob(blob, routine, mlen=None): | 1096 | def hash_blob(blob, routine, mlen=None, give_hex=True): |
386 | 1097 | hasher = hashlib.new(routine) | 1097 | hasher = hashlib.new(routine) |
387 | 1098 | hasher.update(blob) | 1098 | hasher.update(blob) |
389 | 1099 | digest = hasher.hexdigest() | 1099 | if give_hex: |
390 | 1100 | digest = hasher.hexdigest() | ||
391 | 1101 | else: | ||
392 | 1102 | digest = hasher.digest() | ||
393 | 1100 | # Don't get to long now | 1103 | # Don't get to long now |
394 | 1101 | if mlen is not None: | 1104 | if mlen is not None: |
395 | 1102 | return digest[0:mlen] | 1105 | return digest[0:mlen] |
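The hash_blob change in cloudinit/util.py, taken on its own, behaves like this standalone sketch (the original returns a hex digest; the new give_hex=False path returns the raw bytes, as revno 647 needed for a fixed-size key):

```python
import hashlib


def hash_blob(blob, routine, mlen=None, give_hex=True):
    # Hash 'blob' with the named hashlib routine; return hex text by
    # default, or the raw digest bytes when give_hex is False.
    hasher = hashlib.new(routine)
    hasher.update(blob)
    if give_hex:
        digest = hasher.hexdigest()
    else:
        digest = hasher.digest()
    # Don't get too long now
    if mlen is not None:
        return digest[0:mlen]
    return digest


print(hash_blob(b"hello", "md5", mlen=8))  # 5d41402a
```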
I'd like to have the query tool back. Some comments:
* I'd like some cmdline mechanism to request a single variable: 'cloud-init query --instance-id' or something like that. Saving that, at the very least well-formatted data needs to be output for '--what=ds'. As it exists right now, I don't think it's really machine-parsable output. I kind of liked the way it was in revno 649 in that respect.
Basically, I want people to be able to use this as a replacement for 'ec2metadata --instance-id' or 'ec2metadata --local-hostname'. I don't think doing this consistently across data sources is easy, but I'd like to try.
* 'self._write_to_cache(safe=False)' is more readable than below, which just looks odd.
      + self._write_to_cache(False)
      + self._write_to_cache(True)
* seems like an unused change to 'hash_blob' crept in.
* random newline in 'Requires' was added.
* cloudinit/pprint.py uses 'vim:' in its modeline, which is not consistent with the 'vi:' used elsewhere. I'm not opposed to that change, but I'd prefer it done once all over if we wanted that.
  # vim: tabstop=4 shiftwidth=4 softtabstop=4
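The single-variable, machine-parsable behavior smoser asks for could look roughly like this (a hypothetical sketch, not part of this branch; 'describe_map' stands in for the ds.describe() map the branch builds, and the keys shown are examples):

```python
def query_single(describe_map, key):
    # Print exactly one value, ec2metadata-style, so output stays
    # machine parsable: no table decoration, just the value itself.
    if key not in describe_map:
        raise KeyError("unknown query item: %s" % key)
    return "%s" % (describe_map[key],)


# Stand-in for ds.describe() from the branch.
example = {'instance-id': 'i-abcd1234', 'hostname': 'node1'}
print(query_single(example, 'instance-id'))  # i-abcd1234
```

A flag per key (e.g. '--instance-id') could then map straight onto this lookup, giving shell scripts a drop-in replacement for 'ec2metadata --instance-id'.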