Merge lp:~smoser/cloud-init/changeable-templates into lp:~harlowja/cloud-init/changeable-templates
Proposed by: Scott Moser
Status: Merged
Merge reported by: Scott Moser
Merged at revision: not available
Proposed branch: lp:~smoser/cloud-init/changeable-templates
Merge into: lp:~harlowja/cloud-init/changeable-templates
Diff against target: 1407 lines (+665/-121), 30 files modified

- ChangeLog (+17/-0)
- TODO.rst (+38/-41)
- bin/cloud-init (+124/-14)
- cloudinit/config/cc_final_message.py (+1/-0)
- cloudinit/config/cc_power_state_change.py (+0/-1)
- cloudinit/config/cc_seed_random.py (+41/-9)
- cloudinit/cs_utils.py (+7/-1)
- cloudinit/importer.py (+0/-4)
- cloudinit/mergers/__init__.py (+0/-5)
- cloudinit/sources/DataSourceAzure.py (+102/-4)
- cloudinit/sources/DataSourceCloudSigma.py (+37/-0)
- cloudinit/sources/DataSourceNoCloud.py (+1/-1)
- cloudinit/sources/DataSourceOpenNebula.py (+13/-0)
- cloudinit/sources/DataSourceSmartOS.py (+8/-2)
- cloudinit/stages.py (+5/-3)
- cloudinit/util.py (+3/-1)
- cloudinit/version.py (+1/-1)
- doc/examples/cloud-config-user-groups.txt (+1/-1)
- doc/sources/cloudsigma/README.rst (+4/-0)
- doc/status.txt (+53/-0)
- tests/unittests/helpers.py (+24/-0)
- tests/unittests/test__init__.py (+1/-5)
- tests/unittests/test_datasource/test_cloudsigma.py (+44/-5)
- tests/unittests/test_datasource/test_gce.py (+3/-2)
- tests/unittests/test_datasource/test_maas.py (+0/-1)
- tests/unittests/test_datasource/test_opennebula.py (+26/-4)
- tests/unittests/test_datasource/test_smartos.py (+1/-3)
- tests/unittests/test_handler/test_handler_seed_random.py (+75/-0)
- tests/unittests/test_handler/test_handler_yum_add_repo.py (+0/-1)
- tests/unittests/test_templating.py (+35/-12)
To merge this branch: bzr merge lp:~smoser/cloud-init/changeable-templates
Related bugs: none
Reviewer | Review Type | Date Requested | Status
---|---|---|---
Joshua Harlow | Pending | |

Review via email: mp+227323@code.launchpad.net
Commit message
Description of the change
A couple of things here:

a.) merge with trunk (you can 'bzr merge lp:cloud-init' and get the same).
b.) use textwrap.dedent.
c.) add some tests, based on actually shipped templates, that will need to pass for the basic renderer.
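Item (b) refers to Python's textwrap.dedent, which lets a multi-line template fixture in a test stay indented with the surrounding code. A minimal sketch (the template text here is illustrative, not taken from the branch):

```python
import textwrap

# dedent() strips the whitespace prefix common to all lines, so the
# literal can be indented to match the test method that contains it.
template = textwrap.dedent("""\
    name: ${name}
    release: ${release}
    """)

print(template)
```

After dedenting, the value is simply "name: ${name}\nrelease: ${release}\n", with no leading spaces.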
Preview Diff
1 | === modified file 'ChangeLog' | |||
2 | --- ChangeLog 2014-02-27 15:51:22 +0000 | |||
3 | +++ ChangeLog 2014-07-18 13:33:31 +0000 | |||
4 | @@ -1,3 +1,12 @@ | |||
5 | 1 | 0.7.6: | ||
6 | 2 | - open 0.7.6 | ||
7 | 3 | - Enable vendordata on CloudSigma datasource (LP: #1303986) | ||
8 | 4 | - Poll on /dev/ttyS1 in CloudSigma datasource only if dmidecode says | ||
9 | 5 | we're running on cloudsigma (LP: #1316475) [Kiril Vladimiroff] | ||
10 | 6 | - SmartOS test: do not require existance of /dev/ttyS1. [LP: #1316597] | ||
11 | 7 | - doc: fix user-groups doc to reference plural ssh-authorized-keys | ||
12 | 8 | (LP: #1327065) [Joern Heissler] | ||
13 | 9 | - fix 'make test' in python 2.6 | ||
14 | 1 | 0.7.5: | 10 | 0.7.5: |
15 | 2 | - open 0.7.5 | 11 | - open 0.7.5 |
16 | 3 | - Add a debug log message around import failures | 12 | - Add a debug log message around import failures |
17 | @@ -33,6 +42,14 @@ | |||
18 | 33 | rather than relying on EC2 data in openstack metadata service. | 42 | rather than relying on EC2 data in openstack metadata service. |
19 | 34 | - SmartOS, AltCloud: disable running on arm systems due to bug | 43 | - SmartOS, AltCloud: disable running on arm systems due to bug |
20 | 35 | (LP: #1243287, #1285686) [Oleg Strikov] | 44 | (LP: #1243287, #1285686) [Oleg Strikov] |
21 | 45 | - Allow running a command to seed random, default is 'pollinate -q' | ||
22 | 46 | (LP: #1286316) [Dustin Kirkland] | ||
23 | 47 | - Write status to /run/cloud-init/status.json for consumption by | ||
24 | 48 | other programs (LP: #1284439) | ||
25 | 49 | - Azure: if a reboot causes ephemeral storage to be re-provisioned | ||
26 | 50 | Then we need to re-format it. (LP: #1292648) | ||
27 | 51 | - OpenNebula: support base64 encoded user-data | ||
28 | 52 | [Enol Fernandez, Peter Kotcauer] | ||
29 | 36 | 0.7.4: | 53 | 0.7.4: |
30 | 37 | - fix issue mounting 'ephemeral0' if ephemeral0 was an alias for a | 54 | - fix issue mounting 'ephemeral0' if ephemeral0 was an alias for a |
31 | 38 | partitioned block device with target filesystem on ephemeral0.1. | 55 | partitioned block device with target filesystem on ephemeral0.1. |
32 | 39 | 56 | ||
33 | === renamed file 'TODO' => 'TODO.rst' | |||
34 | --- TODO 2012-07-10 03:32:50 +0000 | |||
35 | +++ TODO.rst 2014-07-18 13:33:31 +0000 | |||
36 | @@ -1,46 +1,43 @@ | |||
51 | 1 | - Consider a 'failsafe' DataSource | 1 | ============================================== |
52 | 2 | If all others fail, setting a default that | 2 | Things that cloud-init may do (better) someday |
53 | 3 | - sets the user password, writing it to console | 3 | ============================================== |
54 | 4 | - logs to console that this happened | 4 | |
55 | 5 | - Consider a 'previous' DataSource | 5 | - Consider making ``failsafe`` ``DataSource`` |
56 | 6 | If no other data source is found, fall back to the 'previous' one | 6 | - sets the user password, writing it to console |
57 | 7 | keep a indication of what instance id that is in /var/lib/cloud | 7 | |
58 | 8 | - Rewrite "cloud-init-query" (currently not implemented) | 8 | - Consider a ``previous`` ``DataSource``, if no other data source is |
59 | 9 | Possibly have DataSource and cloudinit expose explicit fields | 9 | found, fall back to the ``previous`` one that worked. |
60 | 10 | - instance-id | 10 | - Rewrite ``cloud-init-query`` (currently not implemented) |
61 | 11 | - hostname | 11 | - Possibly have a ``DataSource`` expose explicit fields: |
62 | 12 | - mirror | 12 | |
63 | 13 | - release | 13 | - instance-id |
64 | 14 | - ssh public keys | 14 | - hostname |
65 | 15 | - mirror | ||
66 | 16 | - release | ||
67 | 17 | - ssh public keys | ||
68 | 18 | |||
69 | 15 | - Remove the conversion of the ubuntu network interface format conversion | 19 | - Remove the conversion of the ubuntu network interface format conversion |
70 | 16 | to a RH/fedora format and replace it with a top level format that uses | 20 | to a RH/fedora format and replace it with a top level format that uses |
71 | 17 | the netcf libraries format instead (which itself knows how to translate | 21 | the netcf libraries format instead (which itself knows how to translate |
79 | 18 | into the specific formats) | 22 | into the specific formats). See for example `netcf`_ which seems to be |
80 | 19 | - Replace the 'apt*' modules with variants that now use the distro classes | 23 | an active project that has this capability. |
81 | 20 | to perform distro independent packaging commands (where possible) | 24 | - Replace the ``apt*`` modules with variants that now use the distro classes |
82 | 21 | - Canonicalize the semaphore/lock name for modules and user data handlers | 25 | to perform distro independent packaging commands (wherever possible). |
76 | 22 | a. It is most likely a bug that currently exists that if a module in config | ||
77 | 23 | alters its name and it has already ran, then it will get ran again since | ||
78 | 24 | the lock name hasn't be canonicalized | ||
83 | 25 | - Replace some the LOG.debug calls with a LOG.info where appropriate instead | 26 | - Replace some the LOG.debug calls with a LOG.info where appropriate instead |
104 | 26 | of how right now there is really only 2 levels (WARN and DEBUG) | 27 | of how right now there is really only 2 levels (``WARN`` and ``DEBUG``) |
105 | 27 | - Remove the 'cc_' for config modules, either have them fully specified (ie | 28 | - Remove the ``cc_`` prefix for config modules, either have them fully |
106 | 28 | 'cloudinit.config.resizefs') or by default only look in the 'cloudinit.config' | 29 | specified (ie ``cloudinit.config.resizefs``) or by default only look in |
107 | 29 | for these modules (or have a combination of the above), this avoids having | 30 | the ``cloudinit.config`` namespace for these modules (or have a combination |
108 | 30 | to understand where your modules are coming from (which can be altered by | 31 | of the above), this avoids having to understand where your modules are |
109 | 31 | the current python inclusion path) | 32 | coming from (which can be altered by the current python inclusion path) |
110 | 32 | - Depending on if people think the wrapper around 'os.path.join' provided | 33 | - Instead of just warning when a module is being ran on a ``unknown`` |
111 | 33 | by the 'paths' object is useful (allowing us to modify based off a 'read' | 34 | distribution perhaps we should not run that module in that case? Or we might |
112 | 34 | and 'write' configuration based 'root') or is just to confusing, it might be | 35 | want to start reworking those modules so they will run on all |
113 | 35 | something to remove later, and just recommend using 'chroot' instead (or the X | 36 | distributions? Or if that is not the case, then maybe we want to allow |
114 | 36 | different other options which are similar to 'chroot'), which is might be more | 37 | fully specified python paths for modules and start encouraging |
115 | 37 | natural and less confusing... | 38 | packages of ``ubuntu`` modules, packages of ``rhel`` specific modules that |
116 | 38 | - Instead of just warning when a module is being ran on a 'unknown' distribution | 39 | people can add instead of having them all under the cloud-init ``root`` |
117 | 39 | perhaps we should not run that module in that case? Or we might want to start | 40 | tree? This might encourage more development of other modules instead of |
118 | 40 | reworking those modules so they will run on all distributions? Or if that is | 41 | having to go edit the cloud-init code to accomplish this. |
99 | 41 | not the case, then maybe we want to allow fully specified python paths for | ||
100 | 42 | modules and start encouraging packages of 'ubuntu' modules, packages of 'rhel' | ||
101 | 43 | specific modules that people can add instead of having them all under the | ||
102 | 44 | cloud-init 'root' tree? This might encourage more development of other modules | ||
103 | 45 | instead of having to go edit the cloud-init code to accomplish this. | ||
119 | 46 | 42 | ||
120 | 43 | .. _netcf: https://fedorahosted.org/netcf/ | ||
121 | 47 | 44 | ||
122 | === modified file 'bin/cloud-init' | |||
123 | --- bin/cloud-init 2014-01-09 00:16:24 +0000 | |||
124 | +++ bin/cloud-init 2014-07-18 13:33:31 +0000 | |||
125 | @@ -22,8 +22,11 @@ | |||
126 | 22 | # along with this program. If not, see <http://www.gnu.org/licenses/>. | 22 | # along with this program. If not, see <http://www.gnu.org/licenses/>. |
127 | 23 | 23 | ||
128 | 24 | import argparse | 24 | import argparse |
129 | 25 | import json | ||
130 | 25 | import os | 26 | import os |
131 | 26 | import sys | 27 | import sys |
132 | 28 | import time | ||
133 | 29 | import tempfile | ||
134 | 27 | import traceback | 30 | import traceback |
135 | 28 | 31 | ||
136 | 29 | # This is more just for running from the bin folder so that | 32 | # This is more just for running from the bin folder so that |
137 | @@ -126,11 +129,11 @@ | |||
138 | 126 | " under section '%s'") % (action_name, full_section_name) | 129 | " under section '%s'") % (action_name, full_section_name) |
139 | 127 | sys.stderr.write("%s\n" % (msg)) | 130 | sys.stderr.write("%s\n" % (msg)) |
140 | 128 | LOG.debug(msg) | 131 | LOG.debug(msg) |
142 | 129 | return 0 | 132 | return [] |
143 | 130 | else: | 133 | else: |
144 | 131 | LOG.debug("Ran %s modules with %s failures", | 134 | LOG.debug("Ran %s modules with %s failures", |
145 | 132 | len(which_ran), len(failures)) | 135 | len(which_ran), len(failures)) |
147 | 133 | return len(failures) | 136 | return failures |
148 | 134 | 137 | ||
149 | 135 | 138 | ||
150 | 136 | def main_init(name, args): | 139 | def main_init(name, args): |
151 | @@ -220,7 +223,10 @@ | |||
152 | 220 | if existing_files: | 223 | if existing_files: |
153 | 221 | LOG.debug("Exiting early due to the existence of %s files", | 224 | LOG.debug("Exiting early due to the existence of %s files", |
154 | 222 | existing_files) | 225 | existing_files) |
156 | 223 | return 0 | 226 | return (None, []) |
157 | 227 | else: | ||
158 | 228 | LOG.debug("Execution continuing, no previous run detected that" | ||
159 | 229 | " would allow us to stop early.") | ||
160 | 224 | else: | 230 | else: |
161 | 225 | # The cache is not instance specific, so it has to be purged | 231 | # The cache is not instance specific, so it has to be purged |
162 | 226 | # but we want 'start' to benefit from a cache if | 232 | # but we want 'start' to benefit from a cache if |
163 | @@ -249,9 +255,9 @@ | |||
164 | 249 | " Likely bad things to come!")) | 255 | " Likely bad things to come!")) |
165 | 250 | if not args.force: | 256 | if not args.force: |
166 | 251 | if args.local: | 257 | if args.local: |
168 | 252 | return 0 | 258 | return (None, []) |
169 | 253 | else: | 259 | else: |
171 | 254 | return 1 | 260 | return (None, ["No instance datasource found."]) |
172 | 255 | # Stage 6 | 261 | # Stage 6 |
173 | 256 | iid = init.instancify() | 262 | iid = init.instancify() |
174 | 257 | LOG.debug("%s will now be targeting instance id: %s", name, iid) | 263 | LOG.debug("%s will now be targeting instance id: %s", name, iid) |
175 | @@ -274,7 +280,7 @@ | |||
176 | 274 | init.consume_data(PER_ALWAYS) | 280 | init.consume_data(PER_ALWAYS) |
177 | 275 | except Exception: | 281 | except Exception: |
178 | 276 | util.logexc(LOG, "Consuming user data failed!") | 282 | util.logexc(LOG, "Consuming user data failed!") |
180 | 277 | return 1 | 283 | return (init.datasource, ["Consuming user data failed!"]) |
181 | 278 | 284 | ||
182 | 279 | # Stage 8 - re-read and apply relevant cloud-config to include user-data | 285 | # Stage 8 - re-read and apply relevant cloud-config to include user-data |
183 | 280 | mods = stages.Modules(init, extract_fns(args)) | 286 | mods = stages.Modules(init, extract_fns(args)) |
184 | @@ -291,7 +297,7 @@ | |||
185 | 291 | logging.setupLogging(mods.cfg) | 297 | logging.setupLogging(mods.cfg) |
186 | 292 | 298 | ||
187 | 293 | # Stage 10 | 299 | # Stage 10 |
189 | 294 | return run_module_section(mods, name, name) | 300 | return (init.datasource, run_module_section(mods, name, name)) |
190 | 295 | 301 | ||
191 | 296 | 302 | ||
192 | 297 | def main_modules(action_name, args): | 303 | def main_modules(action_name, args): |
193 | @@ -315,14 +321,12 @@ | |||
194 | 315 | init.fetch() | 321 | init.fetch() |
195 | 316 | except sources.DataSourceNotFoundException: | 322 | except sources.DataSourceNotFoundException: |
196 | 317 | # There was no datasource found, theres nothing to do | 323 | # There was no datasource found, theres nothing to do |
203 | 318 | util.logexc(LOG, ('Can not apply stage %s, ' | 324 | msg = ('Can not apply stage %s, no datasource found! Likely bad ' |
204 | 319 | 'no datasource found!' | 325 | 'things to come!' % name) |
205 | 320 | " Likely bad things to come!"), name) | 326 | util.logexc(LOG, msg) |
206 | 321 | print_exc(('Can not apply stage %s, ' | 327 | print_exc(msg) |
201 | 322 | 'no datasource found!' | ||
202 | 323 | " Likely bad things to come!") % (name)) | ||
207 | 324 | if not args.force: | 328 | if not args.force: |
209 | 325 | return 1 | 329 | return [(msg)] |
210 | 326 | # Stage 3 | 330 | # Stage 3 |
211 | 327 | mods = stages.Modules(init, extract_fns(args)) | 331 | mods = stages.Modules(init, extract_fns(args)) |
212 | 328 | # Stage 4 | 332 | # Stage 4 |
213 | @@ -419,6 +423,110 @@ | |||
214 | 419 | return 0 | 423 | return 0 |
215 | 420 | 424 | ||
216 | 421 | 425 | ||
217 | 426 | def atomic_write_json(path, data): | ||
218 | 427 | tf = None | ||
219 | 428 | try: | ||
220 | 429 | tf = tempfile.NamedTemporaryFile(dir=os.path.dirname(path), | ||
221 | 430 | delete=False) | ||
222 | 431 | tf.write(json.dumps(data, indent=1) + "\n") | ||
223 | 432 | tf.close() | ||
224 | 433 | os.rename(tf.name, path) | ||
225 | 434 | except Exception as e: | ||
226 | 435 | if tf is not None: | ||
227 | 436 | util.del_file(tf.name) | ||
228 | 437 | raise e | ||
229 | 438 | |||
230 | 439 | |||
231 | 440 | def status_wrapper(name, args, data_d=None, link_d=None): | ||
232 | 441 | if data_d is None: | ||
233 | 442 | data_d = os.path.normpath("/var/lib/cloud/data") | ||
234 | 443 | if link_d is None: | ||
235 | 444 | link_d = os.path.normpath("/run/cloud-init") | ||
236 | 445 | |||
237 | 446 | status_path = os.path.join(data_d, "status.json") | ||
238 | 447 | status_link = os.path.join(link_d, "status.json") | ||
239 | 448 | result_path = os.path.join(data_d, "result.json") | ||
240 | 449 | result_link = os.path.join(link_d, "result.json") | ||
241 | 450 | |||
242 | 451 | util.ensure_dirs((data_d, link_d,)) | ||
243 | 452 | |||
244 | 453 | (_name, functor) = args.action | ||
245 | 454 | |||
246 | 455 | if name == "init": | ||
247 | 456 | if args.local: | ||
248 | 457 | mode = "init-local" | ||
249 | 458 | else: | ||
250 | 459 | mode = "init" | ||
251 | 460 | elif name == "modules": | ||
252 | 461 | mode = "modules-%s" % args.mode | ||
253 | 462 | else: | ||
254 | 463 | raise ValueError("unknown name: %s" % name) | ||
255 | 464 | |||
256 | 465 | modes = ('init', 'init-local', 'modules-config', 'modules-final') | ||
257 | 466 | |||
258 | 467 | status = None | ||
259 | 468 | if mode == 'init-local': | ||
260 | 469 | for f in (status_link, result_link, status_path, result_path): | ||
261 | 470 | util.del_file(f) | ||
262 | 471 | else: | ||
263 | 472 | try: | ||
264 | 473 | status = json.loads(util.load_file(status_path)) | ||
265 | 474 | except: | ||
266 | 475 | pass | ||
267 | 476 | |||
268 | 477 | if status is None: | ||
269 | 478 | nullstatus = { | ||
270 | 479 | 'errors': [], | ||
271 | 480 | 'start': None, | ||
272 | 481 | 'end': None, | ||
273 | 482 | } | ||
274 | 483 | status = {'v1': {}} | ||
275 | 484 | for m in modes: | ||
276 | 485 | status['v1'][m] = nullstatus.copy() | ||
277 | 486 | status['v1']['datasource'] = None | ||
278 | 487 | |||
279 | 488 | v1 = status['v1'] | ||
280 | 489 | v1['stage'] = mode | ||
281 | 490 | v1[mode]['start'] = time.time() | ||
282 | 491 | |||
283 | 492 | atomic_write_json(status_path, status) | ||
284 | 493 | util.sym_link(os.path.relpath(status_path, link_d), status_link, | ||
285 | 494 | force=True) | ||
286 | 495 | |||
287 | 496 | try: | ||
288 | 497 | ret = functor(name, args) | ||
289 | 498 | if mode in ('init', 'init-local'): | ||
290 | 499 | (datasource, errors) = ret | ||
291 | 500 | if datasource is not None: | ||
292 | 501 | v1['datasource'] = str(datasource) | ||
293 | 502 | else: | ||
294 | 503 | errors = ret | ||
295 | 504 | |||
296 | 505 | v1[mode]['errors'] = [str(e) for e in errors] | ||
297 | 506 | |||
298 | 507 | except Exception as e: | ||
299 | 508 | v1[mode]['errors'] = [str(e)] | ||
300 | 509 | |||
301 | 510 | v1[mode]['finished'] = time.time() | ||
302 | 511 | v1['stage'] = None | ||
303 | 512 | |||
304 | 513 | atomic_write_json(status_path, status) | ||
305 | 514 | |||
306 | 515 | if mode == "modules-final": | ||
307 | 516 | # write the 'finished' file | ||
308 | 517 | errors = [] | ||
309 | 518 | for m in modes: | ||
310 | 519 | if v1[m]['errors']: | ||
311 | 520 | errors.extend(v1[m].get('errors', [])) | ||
312 | 521 | |||
313 | 522 | atomic_write_json(result_path, | ||
314 | 523 | {'v1': {'datasource': v1['datasource'], 'errors': errors}}) | ||
315 | 524 | util.sym_link(os.path.relpath(result_path, link_d), result_link, | ||
316 | 525 | force=True) | ||
317 | 526 | |||
318 | 527 | return len(v1[mode]['errors']) | ||
319 | 528 | |||
320 | 529 | |||
321 | 422 | def main(): | 530 | def main(): |
322 | 423 | parser = argparse.ArgumentParser() | 531 | parser = argparse.ArgumentParser() |
323 | 424 | 532 | ||
324 | @@ -502,6 +610,8 @@ | |||
325 | 502 | signal_handler.attach_handlers() | 610 | signal_handler.attach_handlers() |
326 | 503 | 611 | ||
327 | 504 | (name, functor) = args.action | 612 | (name, functor) = args.action |
328 | 613 | if name in ("modules", "init"): | ||
329 | 614 | functor = status_wrapper | ||
330 | 505 | 615 | ||
331 | 506 | return util.log_time(logfunc=LOG.debug, msg="cloud-init mode '%s'" % name, | 616 | return util.log_time(logfunc=LOG.debug, msg="cloud-init mode '%s'" % name, |
332 | 507 | get_uptime=True, func=functor, args=(name, args)) | 617 | get_uptime=True, func=functor, args=(name, args)) |
333 | 508 | 618 | ||
334 | === modified file 'cloudinit/config/cc_final_message.py' | |||
335 | --- cloudinit/config/cc_final_message.py 2013-09-25 17:51:52 +0000 | |||
336 | +++ cloudinit/config/cc_final_message.py 2014-07-18 13:33:31 +0000 | |||
337 | @@ -53,6 +53,7 @@ | |||
338 | 53 | 'version': cver, | 53 | 'version': cver, |
339 | 54 | 'datasource': str(cloud.datasource), | 54 | 'datasource': str(cloud.datasource), |
340 | 55 | } | 55 | } |
341 | 56 | subs.update(dict([(k.upper(), v) for k, v in subs.items()])) | ||
342 | 56 | util.multi_log("%s\n" % (templater.render_string(msg_in, subs)), | 57 | util.multi_log("%s\n" % (templater.render_string(msg_in, subs)), |
343 | 57 | console=False, stderr=True, log=log) | 58 | console=False, stderr=True, log=log) |
344 | 58 | except Exception: | 59 | except Exception: |
345 | 59 | 60 | ||
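The one-line change to cc_final_message.py duplicates every substitution key in upper case, so a final-message template can reference either $datasource or $DATASOURCE. The idea in isolation (using stdlib string.Template in place of cloud-init's templater, purely for illustration):

```python
from string import Template

subs = {'uptime': '12.34', 'datasource': 'DataSourceNoCloud'}
# Mirror each key in upper case so templates may use either form.
subs.update(dict([(k.upper(), v) for k, v in subs.items()]))

msg = Template("up $UPTIME seconds, datasource $datasource").substitute(subs)
print(msg)  # up 12.34 seconds, datasource DataSourceNoCloud
```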
346 | === modified file 'cloudinit/config/cc_power_state_change.py' | |||
347 | --- cloudinit/config/cc_power_state_change.py 2014-02-03 22:03:14 +0000 | |||
348 | +++ cloudinit/config/cc_power_state_change.py 2014-07-18 13:33:31 +0000 | |||
349 | @@ -22,7 +22,6 @@ | |||
350 | 22 | import errno | 22 | import errno |
351 | 23 | import os | 23 | import os |
352 | 24 | import re | 24 | import re |
353 | 25 | import signal | ||
354 | 26 | import subprocess | 25 | import subprocess |
355 | 27 | import time | 26 | import time |
356 | 28 | 27 | ||
357 | 29 | 28 | ||
358 | === modified file 'cloudinit/config/cc_seed_random.py' | |||
359 | --- cloudinit/config/cc_seed_random.py 2014-02-05 15:36:47 +0000 | |||
360 | +++ cloudinit/config/cc_seed_random.py 2014-07-18 13:33:31 +0000 | |||
361 | @@ -1,8 +1,11 @@ | |||
362 | 1 | # vi: ts=4 expandtab | 1 | # vi: ts=4 expandtab |
363 | 2 | # | 2 | # |
364 | 3 | # Copyright (C) 2013 Yahoo! Inc. | 3 | # Copyright (C) 2013 Yahoo! Inc. |
365 | 4 | # Copyright (C) 2014 Canonical, Ltd | ||
366 | 4 | # | 5 | # |
367 | 5 | # Author: Joshua Harlow <harlowja@yahoo-inc.com> | 6 | # Author: Joshua Harlow <harlowja@yahoo-inc.com> |
368 | 7 | # Author: Dustin Kirkland <kirkland@ubuntu.com> | ||
369 | 8 | # Author: Scott Moser <scott.moser@canonical.com> | ||
370 | 6 | # | 9 | # |
371 | 7 | # This program is free software: you can redistribute it and/or modify | 10 | # This program is free software: you can redistribute it and/or modify |
372 | 8 | # it under the terms of the GNU General Public License version 3, as | 11 | # it under the terms of the GNU General Public License version 3, as |
373 | @@ -17,12 +20,15 @@ | |||
374 | 17 | # along with this program. If not, see <http://www.gnu.org/licenses/>. | 20 | # along with this program. If not, see <http://www.gnu.org/licenses/>. |
375 | 18 | 21 | ||
376 | 19 | import base64 | 22 | import base64 |
377 | 23 | import os | ||
378 | 20 | from StringIO import StringIO | 24 | from StringIO import StringIO |
379 | 21 | 25 | ||
380 | 22 | from cloudinit.settings import PER_INSTANCE | 26 | from cloudinit.settings import PER_INSTANCE |
381 | 27 | from cloudinit import log as logging | ||
382 | 23 | from cloudinit import util | 28 | from cloudinit import util |
383 | 24 | 29 | ||
384 | 25 | frequency = PER_INSTANCE | 30 | frequency = PER_INSTANCE |
385 | 31 | LOG = logging.getLogger(__name__) | ||
386 | 26 | 32 | ||
387 | 27 | 33 | ||
388 | 28 | def _decode(data, encoding=None): | 34 | def _decode(data, encoding=None): |
389 | @@ -38,24 +44,50 @@ | |||
390 | 38 | raise IOError("Unknown random_seed encoding: %s" % (encoding)) | 44 | raise IOError("Unknown random_seed encoding: %s" % (encoding)) |
391 | 39 | 45 | ||
392 | 40 | 46 | ||
393 | 47 | def handle_random_seed_command(command, required, env=None): | ||
394 | 48 | if not command and required: | ||
395 | 49 | raise ValueError("no command found but required=true") | ||
396 | 50 | elif not command: | ||
397 | 51 | LOG.debug("no command provided") | ||
398 | 52 | return | ||
399 | 53 | |||
400 | 54 | cmd = command[0] | ||
401 | 55 | if not util.which(cmd): | ||
402 | 56 | if required: | ||
403 | 57 | raise ValueError("command '%s' not found but required=true", cmd) | ||
404 | 58 | else: | ||
405 | 59 | LOG.debug("command '%s' not found for seed_command", cmd) | ||
406 | 60 | return | ||
407 | 61 | util.subp(command, env=env, capture=False) | ||
408 | 62 | |||
409 | 63 | |||
410 | 41 | def handle(name, cfg, cloud, log, _args): | 64 | def handle(name, cfg, cloud, log, _args): |
415 | 42 | if not cfg or "random_seed" not in cfg: | 65 | mycfg = cfg.get('random_seed', {}) |
416 | 43 | log.debug(("Skipping module named %s, " | 66 | seed_path = mycfg.get('file', '/dev/urandom') |
417 | 44 | "no 'random_seed' configuration found"), name) | 67 | seed_data = mycfg.get('data', '') |
414 | 45 | return | ||
418 | 46 | 68 | ||
419 | 47 | my_cfg = cfg['random_seed'] | ||
420 | 48 | seed_path = my_cfg.get('file', '/dev/urandom') | ||
421 | 49 | seed_buf = StringIO() | 69 | seed_buf = StringIO() |
424 | 50 | seed_buf.write(_decode(my_cfg.get('data', ''), | 70 | if seed_data: |
425 | 51 | encoding=my_cfg.get('encoding'))) | 71 | seed_buf.write(_decode(seed_data, encoding=mycfg.get('encoding'))) |
426 | 52 | 72 | ||
427 | 73 | # 'random_seed' is set up by Azure datasource, and comes already in | ||
428 | 74 | # openstack meta_data.json | ||
429 | 53 | metadata = cloud.datasource.metadata | 75 | metadata = cloud.datasource.metadata |
430 | 54 | if metadata and 'random_seed' in metadata: | 76 | if metadata and 'random_seed' in metadata: |
431 | 55 | seed_buf.write(metadata['random_seed']) | 77 | seed_buf.write(metadata['random_seed']) |
432 | 56 | 78 | ||
433 | 57 | seed_data = seed_buf.getvalue() | 79 | seed_data = seed_buf.getvalue() |
434 | 58 | if len(seed_data): | 80 | if len(seed_data): |
436 | 59 | log.debug("%s: adding %s bytes of random seed entrophy to %s", name, | 81 | log.debug("%s: adding %s bytes of random seed entropy to %s", name, |
437 | 60 | len(seed_data), seed_path) | 82 | len(seed_data), seed_path) |
438 | 61 | util.append_file(seed_path, seed_data) | 83 | util.append_file(seed_path, seed_data) |
439 | 84 | |||
440 | 85 | command = mycfg.get('command', ['pollinate', '-q']) | ||
441 | 86 | req = mycfg.get('command_required', False) | ||
442 | 87 | try: | ||
443 | 88 | env = os.environ.copy() | ||
444 | 89 | env['RANDOM_SEED_FILE'] = seed_path | ||
445 | 90 | handle_random_seed_command(command=command, required=req, env=env) | ||
446 | 91 | except ValueError as e: | ||
447 | 92 | log.warn("handling random command [%s] failed: %s", command, e) | ||
448 | 93 | raise e | ||
449 | 62 | 94 | ||
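The decision table in the new handle_random_seed_command is: no command configured plus required=true is fatal; a configured command whose executable is missing is fatal only when required; otherwise the step is silently skipped. A self-contained sketch of that logic (shutil.which and subprocess stand in for cloud-init's util.which and util.subp):

```python
import os
import shutil
import subprocess

def run_seed_command(command, required, env=None):
    # No command at all: fatal only if the config demands one.
    if not command:
        if required:
            raise ValueError("no command found but required=true")
        return
    # Command configured but executable absent: again fatal only
    # when required, otherwise just skip the seeding step.
    if not shutil.which(command[0]):
        if required:
            raise ValueError("command '%s' not found but required=true"
                             % command[0])
        return
    subprocess.check_call(command, env=env)

env = dict(os.environ, RANDOM_SEED_FILE='/dev/urandom')
run_seed_command(['no-such-seed-cmd-xyz'], required=False, env=env)  # skipped
```

As in the diff, the environment carries RANDOM_SEED_FILE so the seed command (default 'pollinate -q') knows where the seed was written.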
450 | === modified file 'cloudinit/cs_utils.py' | |||
451 | --- cloudinit/cs_utils.py 2014-02-12 10:14:49 +0000 | |||
452 | +++ cloudinit/cs_utils.py 2014-07-18 13:33:31 +0000 | |||
453 | @@ -35,6 +35,10 @@ | |||
454 | 35 | 35 | ||
455 | 36 | import serial | 36 | import serial |
456 | 37 | 37 | ||
457 | 38 | # these high timeouts are necessary as read may read a lot of data. | ||
458 | 39 | READ_TIMEOUT = 60 | ||
459 | 40 | WRITE_TIMEOUT = 10 | ||
460 | 41 | |||
461 | 38 | SERIAL_PORT = '/dev/ttyS1' | 42 | SERIAL_PORT = '/dev/ttyS1' |
462 | 39 | if platform.system() == 'Windows': | 43 | if platform.system() == 'Windows': |
463 | 40 | SERIAL_PORT = 'COM2' | 44 | SERIAL_PORT = 'COM2' |
464 | @@ -76,7 +80,9 @@ | |||
465 | 76 | self.result = self._marshal(self.raw_result) | 80 | self.result = self._marshal(self.raw_result) |
466 | 77 | 81 | ||
467 | 78 | def _execute(self): | 82 | def _execute(self): |
469 | 79 | connection = serial.Serial(SERIAL_PORT) | 83 | connection = serial.Serial(port=SERIAL_PORT, |
470 | 84 | timeout=READ_TIMEOUT, | ||
471 | 85 | writeTimeout=WRITE_TIMEOUT) | ||
472 | 80 | connection.write(self.request) | 86 | connection.write(self.request) |
473 | 81 | return connection.readline().strip('\x04\n') | 87 | return connection.readline().strip('\x04\n') |
474 | 82 | 88 | ||
475 | 83 | 89 | ||
476 | === modified file 'cloudinit/importer.py' | |||
477 | --- cloudinit/importer.py 2013-10-09 19:22:06 +0000 | |||
478 | +++ cloudinit/importer.py 2014-07-18 13:33:31 +0000 | |||
479 | @@ -45,8 +45,6 @@ | |||
480 | 45 | real_path.append(base_name) | 45 | real_path.append(base_name) |
481 | 46 | full_path = '.'.join(real_path) | 46 | full_path = '.'.join(real_path) |
482 | 47 | real_paths.append(full_path) | 47 | real_paths.append(full_path) |
483 | 48 | LOG.debug("Looking for modules %s that have attributes %s", | ||
484 | 49 | real_paths, required_attrs) | ||
485 | 50 | for full_path in real_paths: | 48 | for full_path in real_paths: |
486 | 51 | mod = None | 49 | mod = None |
487 | 52 | try: | 50 | try: |
488 | @@ -62,6 +60,4 @@ | |||
489 | 62 | found_attrs += 1 | 60 | found_attrs += 1 |
490 | 63 | if found_attrs == len(required_attrs): | 61 | if found_attrs == len(required_attrs): |
491 | 64 | found_places.append(full_path) | 62 | found_places.append(full_path) |
492 | 65 | LOG.debug("Found %s with attributes %s in %s", base_name, | ||
493 | 66 | required_attrs, found_places) | ||
494 | 67 | return found_places | 63 | return found_places |
495 | 68 | 64 | ||
496 | === modified file 'cloudinit/mergers/__init__.py' | |||
497 | --- cloudinit/mergers/__init__.py 2013-05-03 21:41:28 +0000 | |||
498 | +++ cloudinit/mergers/__init__.py 2014-07-18 13:33:31 +0000 | |||
499 | @@ -55,9 +55,6 @@ | |||
500 | 55 | if not meth: | 55 | if not meth: |
501 | 56 | meth = self._handle_unknown | 56 | meth = self._handle_unknown |
502 | 57 | args.insert(0, method_name) | 57 | args.insert(0, method_name) |
503 | 58 | LOG.debug("Merging '%s' into '%s' using method '%s' of '%s'", | ||
504 | 59 | type_name, type_utils.obj_name(merge_with), | ||
505 | 60 | meth.__name__, self) | ||
506 | 61 | return meth(*args) | 58 | return meth(*args) |
507 | 62 | 59 | ||
508 | 63 | 60 | ||
509 | @@ -84,8 +81,6 @@ | |||
510 | 84 | # First one that has that method/attr gets to be | 81 | # First one that has that method/attr gets to be |
511 | 85 | # the one that will be called | 82 | # the one that will be called |
512 | 86 | meth = getattr(merger, meth_wanted) | 83 | meth = getattr(merger, meth_wanted) |
513 | 87 | LOG.debug(("Merging using located merger '%s'" | ||
514 | 88 | " since it had method '%s'"), merger, meth_wanted) | ||
515 | 89 | break | 84 | break |
516 | 90 | if not meth: | 85 | if not meth: |
517 | 91 | return UnknownMerger._handle_unknown(self, meth_wanted, | 86 | return UnknownMerger._handle_unknown(self, meth_wanted, |
518 | 92 | 87 | ||
519 | === modified file 'cloudinit/sources/DataSourceAzure.py' | |||
520 | --- cloudinit/sources/DataSourceAzure.py 2014-02-10 20:11:45 +0000 | |||
521 | +++ cloudinit/sources/DataSourceAzure.py 2014-07-18 13:33:31 +0000 | |||
522 | @@ -18,12 +18,14 @@ | |||
523 | 18 | 18 | ||
524 | 19 | import base64 | 19 | import base64 |
525 | 20 | import crypt | 20 | import crypt |
526 | 21 | import fnmatch | ||
527 | 21 | import os | 22 | import os |
528 | 22 | import os.path | 23 | import os.path |
529 | 23 | import time | 24 | import time |
530 | 24 | from xml.dom import minidom | 25 | from xml.dom import minidom |
531 | 25 | 26 | ||
532 | 26 | from cloudinit import log as logging | 27 | from cloudinit import log as logging |
533 | 28 | from cloudinit.settings import PER_ALWAYS | ||
534 | 27 | from cloudinit import sources | 29 | from cloudinit import sources |
535 | 28 | from cloudinit import util | 30 | from cloudinit import util |
536 | 29 | 31 | ||
537 | @@ -53,14 +55,15 @@ | |||
538 | 53 | 'disk_setup': { | 55 | 'disk_setup': { |
539 | 54 | 'ephemeral0': {'table_type': 'mbr', | 56 | 'ephemeral0': {'table_type': 'mbr', |
540 | 55 | 'layout': True, | 57 | 'layout': True, |
543 | 56 | 'overwrite': False} | 58 | 'overwrite': False}, |
544 | 57 | }, | 59 | }, |
545 | 58 | 'fs_setup': [{'filesystem': 'ext4', | 60 | 'fs_setup': [{'filesystem': 'ext4', |
546 | 59 | 'device': 'ephemeral0.1', | 61 | 'device': 'ephemeral0.1', |
548 | 60 | 'replace_fs': 'ntfs'}] | 62 | 'replace_fs': 'ntfs'}], |
549 | 61 | } | 63 | } |
550 | 62 | 64 | ||
551 | 63 | DS_CFG_PATH = ['datasource', DS_NAME] | 65 | DS_CFG_PATH = ['datasource', DS_NAME] |
552 | 66 | DEF_EPHEMERAL_LABEL = 'Temporary Storage' | ||
553 | 64 | 67 | ||
554 | 65 | 68 | ||
555 | 66 | class DataSourceAzureNet(sources.DataSource): | 69 | class DataSourceAzureNet(sources.DataSource): |
556 | @@ -189,8 +192,17 @@ | |||
557 | 189 | LOG.warn("failed to get instance id in %s: %s", shcfgxml, e) | 192 | LOG.warn("failed to get instance id in %s: %s", shcfgxml, e) |
558 | 190 | 193 | ||
559 | 191 | pubkeys = pubkeys_from_crt_files(fp_files) | 194 | pubkeys = pubkeys_from_crt_files(fp_files) |
560 | 192 | |||
561 | 193 | self.metadata['public-keys'] = pubkeys | 195 | self.metadata['public-keys'] = pubkeys |
562 | 196 | |||
563 | 197 | found_ephemeral = find_ephemeral_disk() | ||
564 | 198 | if found_ephemeral: | ||
565 | 199 | self.ds_cfg['disk_aliases']['ephemeral0'] = found_ephemeral | ||
566 | 200 | LOG.debug("using detected ephemeral0 of %s", found_ephemeral) | ||
567 | 201 | |||
568 | 202 | cc_modules_override = support_new_ephemeral(self.sys_cfg) | ||
569 | 203 | if cc_modules_override: | ||
570 | 204 | self.cfg['cloud_config_modules'] = cc_modules_override | ||
571 | 205 | |||
572 | 194 | return True | 206 | return True |
573 | 195 | 207 | ||
574 | 196 | def device_name_to_device(self, name): | 208 | def device_name_to_device(self, name): |
575 | @@ -200,6 +212,92 @@ | |||
576 | 200 | return self.cfg | 212 | return self.cfg |
577 | 201 | 213 | ||
578 | 202 | 214 | ||
579 | 215 | def count_files(mp): | ||
580 | 216 | return len(fnmatch.filter(os.listdir(mp), '*[!cdrom]*')) | ||
581 | 217 | |||
582 | 218 | |||
583 | 219 | def find_ephemeral_part(): | ||
584 | 220 | """ | ||
585 | 221 | Locate the default ephmeral0.1 device. This will be the first device | ||
586 | 222 | that has a LABEL of DEF_EPHEMERAL_LABEL and is a NTFS device. If Azure | ||
587 | 223 | gets more ephemeral devices, this logic will only identify the first | ||
588 | 224 | such device. | ||
589 | 225 | """ | ||
590 | 226 | c_label_devs = util.find_devs_with("LABEL=%s" % DEF_EPHEMERAL_LABEL) | ||
591 | 227 | c_fstype_devs = util.find_devs_with("TYPE=ntfs") | ||
592 | 228 | for dev in c_label_devs: | ||
593 | 229 | if dev in c_fstype_devs: | ||
594 | 230 | return dev | ||
595 | 231 | return None | ||
596 | 232 | |||
597 | 233 | |||
598 | 234 | def find_ephemeral_disk(): | ||
599 | 235 | """ | ||
600 | 236 | Get the ephemeral disk. | ||
601 | 237 | """ | ||
602 | 238 | part_dev = find_ephemeral_part() | ||
603 | 239 | if part_dev and str(part_dev[-1]).isdigit(): | ||
604 | 240 | return part_dev[:-1] | ||
605 | 241 | elif part_dev: | ||
606 | 242 | return part_dev | ||
607 | 243 | return None | ||
608 | 244 | |||
609 | 245 | |||
610 | 246 | def support_new_ephemeral(cfg): | ||
611 | 247 | """ | ||
612 | 248 | Windows Azure makes ephemeral devices ephemeral to boot; an ephemeral device | ||
613 | 249 | may be presented as a fresh device, or not. | ||
614 | 250 | |||
615 | 251 | Since the knowledge of when a disk is supposed to be plowed under is | ||
616 | 252 | specific to Windows Azure, the logic resides here in the datasource. When a | ||
617 | 253 | new ephemeral device is detected, cloud-init overrides the default | ||
618 | 254 | frequency for both disk-setup and mounts for the current boot only. | ||
619 | 255 | """ | ||
620 | 256 | device = find_ephemeral_part() | ||
621 | 257 | if not device: | ||
622 | 258 | LOG.debug("no default fabric formatted ephemeral0.1 found") | ||
623 | 259 | return None | ||
624 | 260 | LOG.debug("fabric formatted ephemeral0.1 device at %s", device) | ||
625 | 261 | |||
626 | 262 | file_count = 0 | ||
627 | 263 | try: | ||
628 | 264 | file_count = util.mount_cb(device, count_files) | ||
629 | 265 | except: | ||
630 | 266 | return None | ||
631 | 267 | LOG.debug("fabric prepared ephemeral0.1 has %s files on it", file_count) | ||
632 | 268 | |||
633 | 269 | if file_count >= 1: | ||
634 | 270 | LOG.debug("fabric prepared ephemeral0.1 will be preserved") | ||
635 | 271 | return None | ||
636 | 272 | else: | ||
637 | 273 | # if device was already mounted, then we need to unmount it | ||
638 | 274 | # race conditions could allow for a check-then-unmount | ||
639 | 275 | # to have a false positive. so just unmount and then check. | ||
640 | 276 | try: | ||
641 | 277 | util.subp(['umount', device]) | ||
642 | 278 | except util.ProcessExecutionError as e: | ||
643 | 279 | if device in util.mounts(): | ||
644 | 280 | LOG.warn("Failed to unmount %s, will not reformat.", device) | ||
645 | 281 | LOG.debug("Failed umount: %s", e) | ||
646 | 282 | return None | ||
647 | 283 | |||
648 | 284 | LOG.debug("cloud-init will format ephemeral0.1 this boot.") | ||
649 | 285 | LOG.debug("setting disk_setup and mounts modules 'always' for this boot") | ||
650 | 286 | |||
651 | 287 | cc_modules = cfg.get('cloud_config_modules') | ||
652 | 288 | if not cc_modules: | ||
653 | 289 | return None | ||
654 | 290 | |||
655 | 291 | mod_list = [] | ||
656 | 292 | for mod in cc_modules: | ||
657 | 293 | if mod in ("disk_setup", "mounts"): | ||
658 | 294 | mod_list.append([mod, PER_ALWAYS]) | ||
659 | 295 | LOG.debug("set module '%s' to 'always' for this boot", mod) | ||
660 | 296 | else: | ||
661 | 297 | mod_list.append(mod) | ||
662 | 298 | return mod_list | ||
663 | 299 | |||
664 | 300 | |||
665 | 203 | def handle_set_hostname(enabled, hostname, cfg): | 301 | def handle_set_hostname(enabled, hostname, cfg): |
666 | 204 | if not util.is_true(enabled): | 302 | if not util.is_true(enabled): |
667 | 205 | return | 303 | return |
668 | 206 | 304 | ||
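The partition-to-disk mapping in the new `find_ephemeral_disk()` rests on a naming convention: a device path ending in a digit names a partition of the disk given by the prefix. A standalone sketch of that rule (the helper name here is illustrative, not part of the patch):

```python
def disk_for_partition(part_dev):
    """Map a partition device path to its whole-disk path.

    Mirrors the logic of the patch's find_ephemeral_disk(): a path
    ending in a digit (e.g. /dev/sdb1) names a partition, so strip
    the trailing digit to get the disk (/dev/sdb). A path with no
    trailing digit is already a disk; an empty value yields None.
    """
    if part_dev and str(part_dev[-1]).isdigit():
        return part_dev[:-1]
    return part_dev or None
```

Note this convention only covers the classic `/dev/sdXN` names Azure exposed at the time; NVMe-style names (`/dev/nvme0n1p1`) would need a different rule.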
669 | === modified file 'cloudinit/sources/DataSourceCloudSigma.py' | |||
670 | --- cloudinit/sources/DataSourceCloudSigma.py 2014-02-18 16:58:12 +0000 | |||
671 | +++ cloudinit/sources/DataSourceCloudSigma.py 2014-07-18 13:33:31 +0000 | |||
672 | @@ -15,10 +15,13 @@ | |||
673 | 15 | # | 15 | # |
674 | 16 | # You should have received a copy of the GNU General Public License | 16 | # You should have received a copy of the GNU General Public License |
675 | 17 | # along with this program. If not, see <http://www.gnu.org/licenses/>. | 17 | # along with this program. If not, see <http://www.gnu.org/licenses/>. |
676 | 18 | from base64 import b64decode | ||
677 | 19 | import os | ||
678 | 18 | import re | 20 | import re |
679 | 19 | 21 | ||
680 | 20 | from cloudinit import log as logging | 22 | from cloudinit import log as logging |
681 | 21 | from cloudinit import sources | 23 | from cloudinit import sources |
682 | 24 | from cloudinit import util | ||
683 | 22 | from cloudinit.cs_utils import Cepko | 25 | from cloudinit.cs_utils import Cepko |
684 | 23 | 26 | ||
685 | 24 | LOG = logging.getLogger(__name__) | 27 | LOG = logging.getLogger(__name__) |
686 | @@ -39,12 +42,40 @@ | |||
687 | 39 | self.ssh_public_key = '' | 42 | self.ssh_public_key = '' |
688 | 40 | sources.DataSource.__init__(self, sys_cfg, distro, paths) | 43 | sources.DataSource.__init__(self, sys_cfg, distro, paths) |
689 | 41 | 44 | ||
690 | 45 | def is_running_in_cloudsigma(self): | ||
691 | 46 | """ | ||
692 | 47 | Uses dmidecode to detect if this instance of cloud-init is running | ||
694 | 48 | in CloudSigma's infrastructure. | ||
694 | 49 | """ | ||
695 | 50 | uname_arch = os.uname()[4] | ||
696 | 51 | if uname_arch.startswith("arm") or uname_arch == "aarch64": | ||
697 | 52 | # Disabling because dmidecode in CMD_DMI_SYSTEM crashes kvm process | ||
698 | 53 | LOG.debug("Disabling CloudSigma datasource on arm (LP: #1243287)") | ||
699 | 54 | return False | ||
700 | 55 | |||
701 | 56 | dmidecode_path = util.which('dmidecode') | ||
702 | 57 | if not dmidecode_path: | ||
703 | 58 | return False | ||
704 | 59 | |||
705 | 60 | LOG.debug("Determining hypervisor product name via dmidecode") | ||
706 | 61 | try: | ||
707 | 62 | cmd = [dmidecode_path, "--string", "system-product-name"] | ||
708 | 63 | system_product_name, _ = util.subp(cmd) | ||
709 | 64 | return 'cloudsigma' in system_product_name.lower() | ||
710 | 65 | except: | ||
711 | 66 | LOG.warn("Failed to get hypervisor product name via dmidecode") | ||
712 | 67 | |||
713 | 68 | return False | ||
714 | 69 | |||
715 | 42 | def get_data(self): | 70 | def get_data(self): |
716 | 43 | """ | 71 | """ |
717 | 44 | Metadata is the whole server context and /meta/cloud-config is used | 72 | Metadata is the whole server context and /meta/cloud-config is used |
718 | 45 | as userdata. | 73 | as userdata. |
719 | 46 | """ | 74 | """ |
720 | 47 | dsmode = None | 75 | dsmode = None |
721 | 76 | if not self.is_running_in_cloudsigma(): | ||
722 | 77 | return False | ||
723 | 78 | |||
724 | 48 | try: | 79 | try: |
725 | 49 | server_context = self.cepko.all().result | 80 | server_context = self.cepko.all().result |
726 | 50 | server_meta = server_context['meta'] | 81 | server_meta = server_context['meta'] |
727 | @@ -61,7 +92,13 @@ | |||
728 | 61 | if dsmode == "disabled" or dsmode != self.dsmode: | 92 | if dsmode == "disabled" or dsmode != self.dsmode: |
729 | 62 | return False | 93 | return False |
730 | 63 | 94 | ||
731 | 95 | base64_fields = server_meta.get('base64_fields', '').split(',') | ||
732 | 64 | self.userdata_raw = server_meta.get('cloudinit-user-data', "") | 96 | self.userdata_raw = server_meta.get('cloudinit-user-data', "") |
733 | 97 | if 'cloudinit-user-data' in base64_fields: | ||
734 | 98 | self.userdata_raw = b64decode(self.userdata_raw) | ||
735 | 99 | if 'cloudinit' in server_context.get('vendor_data', {}): | ||
736 | 100 | self.vendordata_raw = server_context["vendor_data"]["cloudinit"] | ||
737 | 101 | |||
738 | 65 | self.metadata = server_context | 102 | self.metadata = server_context |
739 | 66 | self.ssh_public_key = server_meta['ssh_public_key'] | 103 | self.ssh_public_key = server_meta['ssh_public_key'] |
740 | 67 | 104 | ||
741 | 68 | 105 | ||
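The vendor-data handling added to CloudSigma's `get_data()` only consumes the `cloudinit` key of `vendor_data`, leaving everything else untouched. A minimal standalone sketch of that check (the function name is illustrative):

```python
def vendordata_from_context(server_context):
    """Return the cloud-init vendor data from a CloudSigma server
    context, or None when the vendor_data section or its 'cloudinit'
    key is absent -- the same condition the patch tests."""
    vendor_data = server_context.get('vendor_data', {})
    if 'cloudinit' in vendor_data:
        return vendor_data['cloudinit']
    return None
```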
742 | === modified file 'cloudinit/sources/DataSourceNoCloud.py' | |||
743 | --- cloudinit/sources/DataSourceNoCloud.py 2014-02-18 17:58:21 +0000 | |||
744 | +++ cloudinit/sources/DataSourceNoCloud.py 2014-07-18 13:33:31 +0000 | |||
745 | @@ -57,7 +57,7 @@ | |||
746 | 57 | md = {} | 57 | md = {} |
747 | 58 | if parse_cmdline_data(self.cmdline_id, md): | 58 | if parse_cmdline_data(self.cmdline_id, md): |
748 | 59 | found.append("cmdline") | 59 | found.append("cmdline") |
750 | 60 | mydata.update(md) | 60 | mydata['meta-data'].update(md) |
751 | 61 | except: | 61 | except: |
752 | 62 | util.logexc(LOG, "Unable to parse command line data") | 62 | util.logexc(LOG, "Unable to parse command line data") |
753 | 63 | return False | 63 | return False |
754 | 64 | 64 | ||
755 | === modified file 'cloudinit/sources/DataSourceOpenNebula.py' | |||
756 | --- cloudinit/sources/DataSourceOpenNebula.py 2014-01-17 01:11:27 +0000 | |||
757 | +++ cloudinit/sources/DataSourceOpenNebula.py 2014-07-18 13:33:31 +0000 | |||
758 | @@ -4,11 +4,13 @@ | |||
759 | 4 | # Copyright (C) 2012 Yahoo! Inc. | 4 | # Copyright (C) 2012 Yahoo! Inc. |
760 | 5 | # Copyright (C) 2012-2013 CERIT Scientific Cloud | 5 | # Copyright (C) 2012-2013 CERIT Scientific Cloud |
761 | 6 | # Copyright (C) 2012-2013 OpenNebula.org | 6 | # Copyright (C) 2012-2013 OpenNebula.org |
762 | 7 | # Copyright (C) 2014 Consejo Superior de Investigaciones Cientificas | ||
763 | 7 | # | 8 | # |
764 | 8 | # Author: Scott Moser <scott.moser@canonical.com> | 9 | # Author: Scott Moser <scott.moser@canonical.com> |
765 | 9 | # Author: Joshua Harlow <harlowja@yahoo-inc.com> | 10 | # Author: Joshua Harlow <harlowja@yahoo-inc.com> |
766 | 10 | # Author: Vlastimil Holer <xholer@mail.muni.cz> | 11 | # Author: Vlastimil Holer <xholer@mail.muni.cz> |
767 | 11 | # Author: Javier Fontan <jfontan@opennebula.org> | 12 | # Author: Javier Fontan <jfontan@opennebula.org> |
768 | 13 | # Author: Enol Fernandez <enolfc@ifca.unican.es> | ||
769 | 12 | # | 14 | # |
770 | 13 | # This program is free software: you can redistribute it and/or modify | 15 | # This program is free software: you can redistribute it and/or modify |
771 | 14 | # it under the terms of the GNU General Public License version 3, as | 16 | # it under the terms of the GNU General Public License version 3, as |
772 | @@ -22,6 +24,7 @@ | |||
773 | 22 | # You should have received a copy of the GNU General Public License | 24 | # You should have received a copy of the GNU General Public License |
774 | 23 | # along with this program. If not, see <http://www.gnu.org/licenses/>. | 25 | # along with this program. If not, see <http://www.gnu.org/licenses/>. |
775 | 24 | 26 | ||
776 | 27 | import base64 | ||
777 | 25 | import os | 28 | import os |
778 | 26 | import pwd | 29 | import pwd |
779 | 27 | import re | 30 | import re |
780 | @@ -417,6 +420,16 @@ | |||
781 | 417 | elif "USERDATA" in context: | 420 | elif "USERDATA" in context: |
782 | 418 | results['userdata'] = context["USERDATA"] | 421 | results['userdata'] = context["USERDATA"] |
783 | 419 | 422 | ||
784 | 423 | # b64decode user data if necessary (default) | ||
785 | 424 | if 'userdata' in results: | ||
786 | 425 | encoding = context.get('USERDATA_ENCODING', | ||
787 | 426 | context.get('USER_DATA_ENCODING')) | ||
788 | 427 | if encoding == "base64": | ||
789 | 428 | try: | ||
790 | 429 | results['userdata'] = base64.b64decode(results['userdata']) | ||
791 | 430 | except TypeError: | ||
792 | 431 | LOG.warn("Failed base64 decoding of userdata") | ||
793 | 432 | |||
794 | 420 | # generate static /etc/network/interfaces | 433 | # generate static /etc/network/interfaces |
795 | 421 | # only if there are any required context variables | 434 | # only if there are any required context variables |
796 | 422 | # http://opennebula.org/documentation:rel3.8:cong#network_configuration | 435 | # http://opennebula.org/documentation:rel3.8:cong#network_configuration |
797 | 423 | 436 | ||
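The OpenNebula decoding step above can be sketched standalone, assuming only that `USERDATA_ENCODING` (or the older `USER_DATA_ENCODING`) selects base64:

```python
import base64

def maybe_b64decode(context, userdata):
    """Decode userdata when the context marks it as base64, as the
    patch does; on a bad payload fall back to the raw value (the
    patch logs a warning at that point)."""
    encoding = context.get('USERDATA_ENCODING',
                           context.get('USER_DATA_ENCODING'))
    if encoding == "base64":
        try:
            return base64.b64decode(userdata)
        except (TypeError, ValueError):
            pass  # leave userdata as-is
    return userdata
```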
798 | === modified file 'cloudinit/sources/DataSourceSmartOS.py' | |||
799 | --- cloudinit/sources/DataSourceSmartOS.py 2014-02-26 19:28:46 +0000 | |||
800 | +++ cloudinit/sources/DataSourceSmartOS.py 2014-07-18 13:33:31 +0000 | |||
801 | @@ -170,8 +170,9 @@ | |||
802 | 170 | md = {} | 170 | md = {} |
803 | 171 | ud = "" | 171 | ud = "" |
804 | 172 | 172 | ||
807 | 173 | if not os.path.exists(self.seed): | 173 | if not device_exists(self.seed): |
808 | 174 | LOG.debug("Host does not appear to be on SmartOS") | 174 | LOG.debug("No serial device '%s' found for SmartOS datasource", |
809 | 175 | self.seed) | ||
810 | 175 | return False | 176 | return False |
811 | 176 | 177 | ||
812 | 177 | uname_arch = os.uname()[4] | 178 | uname_arch = os.uname()[4] |
813 | @@ -274,6 +275,11 @@ | |||
814 | 274 | b64=b64) | 275 | b64=b64) |
815 | 275 | 276 | ||
816 | 276 | 277 | ||
817 | 278 | def device_exists(device): | ||
818 | 279 | """Simplistic method to determine if the device exists or not""" | ||
819 | 280 | return os.path.exists(device) | ||
820 | 281 | |||
821 | 282 | |||
822 | 277 | def get_serial(seed_device, seed_timeout): | 283 | def get_serial(seed_device, seed_timeout): |
823 | 278 | """This is replaced in unit testing, allowing us to replace | 284 | """This is replaced in unit testing, allowing us to replace |
824 | 279 | serial.Serial with a mocked class. | 285 | serial.Serial with a mocked class. |
825 | 280 | 286 | ||
826 | === modified file 'cloudinit/stages.py' | |||
827 | --- cloudinit/stages.py 2014-02-13 18:53:08 +0000 | |||
828 | +++ cloudinit/stages.py 2014-07-18 13:33:31 +0000 | |||
829 | @@ -397,8 +397,8 @@ | |||
830 | 397 | mod = handlers.fixup_handler(mod) | 397 | mod = handlers.fixup_handler(mod) |
831 | 398 | types = c_handlers.register(mod) | 398 | types = c_handlers.register(mod) |
832 | 399 | if types: | 399 | if types: |
835 | 400 | LOG.debug("Added custom handler for %s from %s", | 400 | LOG.debug("Added custom handler for %s [%s] from %s", |
836 | 401 | types, fname) | 401 | types, mod, fname) |
837 | 402 | except Exception: | 402 | except Exception: |
838 | 403 | util.logexc(LOG, "Failed to register handler from %s", | 403 | util.logexc(LOG, "Failed to register handler from %s", |
839 | 404 | fname) | 404 | fname) |
840 | @@ -644,6 +644,8 @@ | |||
841 | 644 | freq = mod.frequency | 644 | freq = mod.frequency |
842 | 645 | if not freq in FREQUENCIES: | 645 | if not freq in FREQUENCIES: |
843 | 646 | freq = PER_INSTANCE | 646 | freq = PER_INSTANCE |
844 | 647 | LOG.debug("Running module %s (%s) with frequency %s", | ||
845 | 648 | name, mod, freq) | ||
846 | 647 | 649 | ||
847 | 648 | # Use the configs logger and not our own | 650 | # Use the configs logger and not our own |
848 | 649 | # TODO(harlowja): possibly check the module | 651 | # TODO(harlowja): possibly check the module |
849 | @@ -657,7 +659,7 @@ | |||
850 | 657 | run_name = "config-%s" % (name) | 659 | run_name = "config-%s" % (name) |
851 | 658 | cc.run(run_name, mod.handle, func_args, freq=freq) | 660 | cc.run(run_name, mod.handle, func_args, freq=freq) |
852 | 659 | except Exception as e: | 661 | except Exception as e: |
854 | 660 | util.logexc(LOG, "Running %s (%s) failed", name, mod) | 662 | util.logexc(LOG, "Running module %s (%s) failed", name, mod) |
855 | 661 | failures.append((name, e)) | 663 | failures.append((name, e)) |
856 | 662 | return (which_ran, failures) | 664 | return (which_ran, failures) |
857 | 663 | 665 | ||
858 | 664 | 666 | ||
859 | === modified file 'cloudinit/util.py' | |||
860 | --- cloudinit/util.py 2014-02-13 11:27:22 +0000 | |||
861 | +++ cloudinit/util.py 2014-07-18 13:33:31 +0000 | |||
862 | @@ -1395,8 +1395,10 @@ | |||
863 | 1395 | return obj_copy.deepcopy(CFG_BUILTIN) | 1395 | return obj_copy.deepcopy(CFG_BUILTIN) |
864 | 1396 | 1396 | ||
865 | 1397 | 1397 | ||
867 | 1398 | def sym_link(source, link): | 1398 | def sym_link(source, link, force=False): |
868 | 1399 | LOG.debug("Creating symbolic link from %r => %r", link, source) | 1399 | LOG.debug("Creating symbolic link from %r => %r", link, source) |
869 | 1400 | if force and os.path.exists(link): | ||
870 | 1401 | del_file(link) | ||
871 | 1400 | os.symlink(source, link) | 1402 | os.symlink(source, link) |
872 | 1401 | 1403 | ||
873 | 1402 | 1404 | ||
874 | 1403 | 1405 | ||
875 | === modified file 'cloudinit/version.py' | |||
876 | --- cloudinit/version.py 2013-11-19 21:49:53 +0000 | |||
877 | +++ cloudinit/version.py 2014-07-18 13:33:31 +0000 | |||
878 | @@ -20,7 +20,7 @@ | |||
879 | 20 | 20 | ||
880 | 21 | 21 | ||
881 | 22 | def version(): | 22 | def version(): |
883 | 23 | return vr.StrictVersion("0.7.5") | 23 | return vr.StrictVersion("0.7.6") |
884 | 24 | 24 | ||
885 | 25 | 25 | ||
886 | 26 | def version_string(): | 26 | def version_string(): |
887 | 27 | 27 | ||
888 | === modified file 'doc/examples/cloud-config-user-groups.txt' | |||
889 | --- doc/examples/cloud-config-user-groups.txt 2013-10-02 13:25:36 +0000 | |||
890 | +++ doc/examples/cloud-config-user-groups.txt 2014-07-18 13:33:31 +0000 | |||
891 | @@ -69,7 +69,7 @@ | |||
892 | 69 | # no-user-group: When set to true, do not create a group named after the user. | 69 | # no-user-group: When set to true, do not create a group named after the user. |
893 | 70 | # no-log-init: When set to true, do not initialize lastlog and faillog database. | 70 | # no-log-init: When set to true, do not initialize lastlog and faillog database. |
894 | 71 | # ssh-import-id: Optional. Import SSH ids | 71 | # ssh-import-id: Optional. Import SSH ids |
896 | 72 | # ssh-authorized-key: Optional. Add key to user's ssh authorized keys file | 72 | # ssh-authorized-keys: Optional. [list] Add keys to user's authorized keys file |
897 | 73 | # sudo: Defaults to none. Set to the sudo string you want to use, i.e. | 73 | # sudo: Defaults to none. Set to the sudo string you want to use, i.e. |
898 | 74 | # ALL=(ALL) NOPASSWD:ALL. To add multiple rules, use the following | 74 | # ALL=(ALL) NOPASSWD:ALL. To add multiple rules, use the following |
899 | 75 | # format. | 75 | # format. |
900 | 76 | 76 | ||
901 | === modified file 'doc/sources/cloudsigma/README.rst' | |||
902 | --- doc/sources/cloudsigma/README.rst 2014-02-13 15:39:39 +0000 | |||
903 | +++ doc/sources/cloudsigma/README.rst 2014-07-18 13:33:31 +0000 | |||
904 | @@ -23,6 +23,10 @@ | |||
905 | 23 | header could be omitted. However since this is a raw-text field you could provide any of the valid | 23 | header could be omitted. However since this is a raw-text field you could provide any of the valid |
906 | 24 | `config formats`_. | 24 | `config formats`_. |
907 | 25 | 25 | ||
908 | 26 | You have the option to encode your user-data using Base64. In order to do that you have to add the | ||
909 | 27 | ``cloudinit-user-data`` field to the ``base64_fields``. The latter is a comma-separated field with | ||
910 | 28 | all the meta fields with base64-encoded values. | ||
911 | 29 | |||
912 | 26 | If your user-data does not need an internet connection you can create a | 30 | If your user-data does not need an internet connection you can create a |
913 | 27 | `meta field`_ in the `server context`_ ``cloudinit-dsmode`` and set "local" as value. | 31 | `meta field`_ in the `server context`_ ``cloudinit-dsmode`` and set "local" as value. |
914 | 28 | If this field does not exist the default value is "net". | 32 | If this field does not exist the default value is "net". |
915 | 29 | 33 | ||
916 | === added file 'doc/status.txt' | |||
917 | --- doc/status.txt 1970-01-01 00:00:00 +0000 | |||
918 | +++ doc/status.txt 2014-07-18 13:33:31 +0000 | |||
919 | @@ -0,0 +1,53 @@ | |||
920 | 1 | cloud-init will keep a 'status' file up to date for other applications | ||
921 | 2 | wishing to use it to determine cloud-init status. | ||
922 | 3 | |||
923 | 4 | It will manage 2 files: | ||
924 | 5 | status.json | ||
925 | 6 | result.json | ||
926 | 7 | |||
927 | 8 | The files will be written to /var/lib/cloud/data/ . | ||
928 | 9 | A symlink will be created in /run/cloud-init. The link from /run is to ensure | ||
929 | 10 | that if the file exists, it is not stale for this boot. | ||
930 | 11 | |||
931 | 12 | status.json's format is: | ||
932 | 13 | { | ||
933 | 14 | 'v1': { | ||
934 | 15 | 'init': { | ||
935 | 16 | errors: [] # list of strings for each error that occurred | ||
936 | 17 | start: float # time.time() that this stage started or None | ||
937 | 18 | end: float # time.time() that this stage finished or None | ||
938 | 19 | }, | ||
939 | 20 | 'init-local': { | ||
940 | 21 | 'errors': [], 'start': <float>, 'end': <float> # (same as 'init' above) | ||
941 | 22 | }, | ||
942 | 23 | 'modules-config': { | ||
943 | 24 | 'errors': [], 'start': <float>, 'end': <float> # (same as 'init' above) | ||
944 | 25 | }, | ||
945 | 26 | 'modules-final': { | ||
946 | 27 | 'errors': [], 'start': <float>, 'end': <float> # (same as 'init' above) | ||
947 | 28 | }, | ||
948 | 29 | 'datasource': string describing datasource found or None | ||
949 | 30 | 'stage': string representing stage that is currently running | ||
950 | 31 | ('init', 'init-local', 'modules-final', 'modules-config', None) | ||
951 | 32 | if None, then no stage is running. Reader must read the start/end | ||
952 | 33 | of each of the above stages to determine the state. | ||
953 | 34 | } | ||
954 | 35 | |||
955 | 36 | result.json's format is: | ||
956 | 37 | { | ||
957 | 38 | 'v1': { | ||
958 | 39 | 'datasource': string describing the datasource found | ||
959 | 40 | 'errors': [] # list of errors reported | ||
960 | 41 | } | ||
961 | 42 | } | ||
962 | 43 | |||
963 | 44 | Thus, to determine if cloud-init is finished: | ||
964 | 45 | fin = "/run/cloud-init/result.json" | ||
965 | 46 | if os.path.exists(fin): | ||
966 | 47 | ret = json.load(open(fin, "r")) | ||
967 | 48 | if len(ret['v1']['errors']): | ||
968 | 49 | print "Finished with errors:" + "\n".join(ret['v1']['errors']) | ||
969 | 50 | else: | ||
970 | 51 | print "Finished no errors" | ||
971 | 52 | else: | ||
972 | 53 | print "Not Finished" | ||
973 | 0 | 54 | ||
974 | === modified file 'tests/unittests/helpers.py' | |||
975 | --- tests/unittests/helpers.py 2014-02-08 00:40:51 +0000 | |||
976 | +++ tests/unittests/helpers.py 2014-07-18 13:33:31 +0000 | |||
977 | @@ -52,6 +52,30 @@ | |||
978 | 52 | standardMsg = standardMsg % (value) | 52 | standardMsg = standardMsg % (value) |
979 | 53 | self.fail(self._formatMessage(msg, standardMsg)) | 53 | self.fail(self._formatMessage(msg, standardMsg)) |
980 | 54 | 54 | ||
981 | 55 | def assertDictContainsSubset(self, expected, actual, msg=None): | ||
982 | 56 | missing = [] | ||
983 | 57 | mismatched = [] | ||
984 | 58 | for k, v in expected.iteritems(): | ||
985 | 59 | if k not in actual: | ||
986 | 60 | missing.append(k) | ||
987 | 61 | elif actual[k] != v: | ||
988 | 62 | mismatched.append('%r, expected: %r, actual: %r' | ||
989 | 63 | % (k, v, actual[k])) | ||
990 | 64 | |||
991 | 65 | if len(missing) == 0 and len(mismatched) == 0: | ||
992 | 66 | return | ||
993 | 67 | |||
994 | 68 | standardMsg = '' | ||
995 | 69 | if missing: | ||
996 | 70 | standardMsg = 'Missing: %r' % ','.join(m for m in missing) | ||
997 | 71 | if mismatched: | ||
998 | 72 | if standardMsg: | ||
999 | 73 | standardMsg += '; ' | ||
1000 | 74 | standardMsg += 'Mismatched values: %s' % ','.join(mismatched) | ||
1001 | 75 | |||
1002 | 76 | self.fail(self._formatMessage(msg, standardMsg)) | ||
1003 | 77 | |||
1004 | 78 | |||
1005 | 55 | else: | 79 | else: |
1006 | 56 | class TestCase(unittest.TestCase): | 80 | class TestCase(unittest.TestCase): |
1007 | 57 | pass | 81 | pass |
1008 | 58 | 82 | ||
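The backported `assertDictContainsSubset` reduces to a simple predicate; a standalone sketch of the condition the helper asserts (name illustrative, and using Python 3's `items()` rather than the helper's `iteritems()`):

```python
def dict_contains_subset(expected, actual):
    """True when every key of `expected` appears in `actual` with an
    equal value -- i.e. nothing would land in the helper's `missing`
    or `mismatched` lists."""
    return all(k in actual and actual[k] == v
               for k, v in expected.items())
```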
1009 | === modified file 'tests/unittests/test__init__.py' | |||
1010 | --- tests/unittests/test__init__.py 2014-01-25 03:31:28 +0000 | |||
1011 | +++ tests/unittests/test__init__.py 2014-07-18 13:33:31 +0000 | |||
1012 | @@ -1,14 +1,10 @@ | |||
1013 | 1 | import logging | ||
1014 | 2 | import os | 1 | import os |
1015 | 3 | import StringIO | ||
1016 | 4 | import sys | ||
1017 | 5 | 2 | ||
1019 | 6 | from mocker import MockerTestCase, ANY, ARGS, KWARGS | 3 | from mocker import MockerTestCase, ARGS, KWARGS |
1020 | 7 | 4 | ||
1021 | 8 | from cloudinit import handlers | 5 | from cloudinit import handlers |
1022 | 9 | from cloudinit import helpers | 6 | from cloudinit import helpers |
1023 | 10 | from cloudinit import importer | 7 | from cloudinit import importer |
1024 | 11 | from cloudinit import log | ||
1025 | 12 | from cloudinit import settings | 8 | from cloudinit import settings |
1026 | 13 | from cloudinit import url_helper | 9 | from cloudinit import url_helper |
1027 | 14 | from cloudinit import util | 10 | from cloudinit import util |
1028 | 15 | 11 | ||
1029 | === modified file 'tests/unittests/test_datasource/test_cloudsigma.py' | |||
1030 | --- tests/unittests/test_datasource/test_cloudsigma.py 2014-02-12 10:14:49 +0000 | |||
1031 | +++ tests/unittests/test_datasource/test_cloudsigma.py 2014-07-18 13:33:31 +0000 | |||
1032 | @@ -1,9 +1,11 @@ | |||
1033 | 1 | # coding: utf-8 | 1 | # coding: utf-8 |
1035 | 2 | from unittest import TestCase | 2 | import copy |
1036 | 3 | 3 | ||
1037 | 4 | from cloudinit.cs_utils import Cepko | 4 | from cloudinit.cs_utils import Cepko |
1038 | 5 | from cloudinit.sources import DataSourceCloudSigma | 5 | from cloudinit.sources import DataSourceCloudSigma |
1039 | 6 | 6 | ||
1040 | 7 | from tests.unittests import helpers as test_helpers | ||
1041 | 8 | |||
1042 | 7 | 9 | ||
1043 | 8 | SERVER_CONTEXT = { | 10 | SERVER_CONTEXT = { |
1044 | 9 | "cpu": 1000, | 11 | "cpu": 1000, |
1045 | @@ -19,21 +21,27 @@ | |||
1046 | 19 | "smp": 1, | 21 | "smp": 1, |
1047 | 20 | "tags": ["much server", "very performance"], | 22 | "tags": ["much server", "very performance"], |
1048 | 21 | "uuid": "65b2fb23-8c03-4187-a3ba-8b7c919e8890", | 23 | "uuid": "65b2fb23-8c03-4187-a3ba-8b7c919e8890", |
1050 | 22 | "vnc_password": "9e84d6cb49e46379" | 24 | "vnc_password": "9e84d6cb49e46379", |
1051 | 25 | "vendor_data": { | ||
1052 | 26 | "location": "zrh", | ||
1053 | 27 | "cloudinit": "#cloud-config\n\n...", | ||
1054 | 28 | } | ||
1055 | 23 | } | 29 | } |
1056 | 24 | 30 | ||
1057 | 25 | 31 | ||
1058 | 26 | class CepkoMock(Cepko): | 32 | class CepkoMock(Cepko): |
1060 | 27 | result = SERVER_CONTEXT | 33 | def __init__(self, mocked_context): |
1061 | 34 | self.result = mocked_context | ||
1062 | 28 | 35 | ||
1063 | 29 | def all(self): | 36 | def all(self): |
1064 | 30 | return self | 37 | return self |
1065 | 31 | 38 | ||
1066 | 32 | 39 | ||
1068 | 33 | class DataSourceCloudSigmaTest(TestCase): | 40 | class DataSourceCloudSigmaTest(test_helpers.TestCase): |
1069 | 34 | def setUp(self): | 41 | def setUp(self): |
1070 | 35 | self.datasource = DataSourceCloudSigma.DataSourceCloudSigma("", "", "") | 42 | self.datasource = DataSourceCloudSigma.DataSourceCloudSigma("", "", "") |
1072 | 36 | self.datasource.cepko = CepkoMock() | 43 | self.datasource.is_running_in_cloudsigma = lambda: True |
1073 | 44 | self.datasource.cepko = CepkoMock(SERVER_CONTEXT) | ||
1074 | 37 | self.datasource.get_data() | 45 | self.datasource.get_data() |
1075 | 38 | 46 | ||
1076 | 39 | def test_get_hostname(self): | 47 | def test_get_hostname(self): |
1077 | @@ -57,3 +65,34 @@ | |||
1078 | 57 | def test_user_data(self): | 65 | def test_user_data(self): |
1079 | 58 | self.assertEqual(self.datasource.userdata_raw, | 66 | self.assertEqual(self.datasource.userdata_raw, |
1080 | 59 | SERVER_CONTEXT['meta']['cloudinit-user-data']) | 67 | SERVER_CONTEXT['meta']['cloudinit-user-data']) |
1081 | 68 | |||
1082 | 69 | def test_encoded_user_data(self): | ||
1083 | 70 | encoded_context = copy.deepcopy(SERVER_CONTEXT) | ||
1084 | 71 | encoded_context['meta']['base64_fields'] = 'cloudinit-user-data' | ||
1085 | 72 | encoded_context['meta']['cloudinit-user-data'] = 'aGkgd29ybGQK' | ||
1086 | 73 | self.datasource.cepko = CepkoMock(encoded_context) | ||
1087 | 74 | self.datasource.get_data() | ||
1088 | 75 | |||
1089 | 76 | self.assertEqual(self.datasource.userdata_raw, b'hi world\n') | ||
1090 | 77 | |||
1091 | 78 | def test_vendor_data(self): | ||
1092 | 79 | self.assertEqual(self.datasource.vendordata_raw, | ||
1093 | 80 | SERVER_CONTEXT['vendor_data']['cloudinit']) | ||
1094 | 81 | |||
1095 | 82 | def test_lack_of_vendor_data(self): | ||
1096 | 83 | stripped_context = copy.deepcopy(SERVER_CONTEXT) | ||
1097 | 84 | del stripped_context["vendor_data"] | ||
1098 | 85 | self.datasource = DataSourceCloudSigma.DataSourceCloudSigma("", "", "") | ||
1099 | 86 | self.datasource.cepko = CepkoMock(stripped_context) | ||
1100 | 87 | self.datasource.get_data() | ||
1101 | 88 | |||
1102 | 89 | self.assertIsNone(self.datasource.vendordata_raw) | ||
1103 | 90 | |||
1104 | 91 | def test_lack_of_cloudinit_key_in_vendor_data(self): | ||
1105 | 92 | stripped_context = copy.deepcopy(SERVER_CONTEXT) | ||
1106 | 93 | del stripped_context["vendor_data"]["cloudinit"] | ||
1107 | 94 | self.datasource = DataSourceCloudSigma.DataSourceCloudSigma("", "", "") | ||
1108 | 95 | self.datasource.cepko = CepkoMock(stripped_context) | ||
1109 | 96 | self.datasource.get_data() | ||
1110 | 97 | |||
1111 | 98 | self.assertIsNone(self.datasource.vendordata_raw) | ||
1112 | 60 | 99 | ||
=== modified file 'tests/unittests/test_datasource/test_gce.py'
--- tests/unittests/test_datasource/test_gce.py	2014-02-13 22:03:12 +0000
+++ tests/unittests/test_datasource/test_gce.py	2014-07-18 13:33:31 +0000
@@ -15,7 +15,6 @@
 # You should have received a copy of the GNU General Public License
 # along with this program.  If not, see <http://www.gnu.org/licenses/>.
 
-import unittest
 import httpretty
 import re
 
@@ -25,6 +24,8 @@
 from cloudinit import helpers
 from cloudinit.sources import DataSourceGCE
 
+from tests.unittests import helpers as test_helpers
+
 GCE_META = {
     'instance/id': '123',
     'instance/zone': 'foo/bar',
@@ -54,7 +55,7 @@
         return (404, headers, '')
 
 
-class TestDataSourceGCE(unittest.TestCase):
+class TestDataSourceGCE(test_helpers.TestCase):
 
     def setUp(self):
         self.ds = DataSourceGCE.DataSourceGCE(
 
=== modified file 'tests/unittests/test_datasource/test_maas.py'
--- tests/unittests/test_datasource/test_maas.py	2014-01-25 03:31:28 +0000
+++ tests/unittests/test_datasource/test_maas.py	2014-07-18 13:33:31 +0000
@@ -3,7 +3,6 @@
 
 from cloudinit.sources import DataSourceMAAS
 from cloudinit import url_helper
-from cloudinit import util
 from tests.unittests.helpers import populate_dir
 
 import mocker
 
=== modified file 'tests/unittests/test_datasource/test_opennebula.py'
--- tests/unittests/test_datasource/test_opennebula.py	2014-01-17 16:09:15 +0000
+++ tests/unittests/test_datasource/test_opennebula.py	2014-07-18 13:33:31 +0000
@@ -4,6 +4,7 @@
 from mocker import MockerTestCase
 from tests.unittests.helpers import populate_dir
 
+from base64 import b64encode
 import os
 import pwd
 
@@ -164,10 +165,31 @@
 
         public_keys.append(SSH_KEY % (c + 1,))
 
-    def test_user_data(self):
+    def test_user_data_plain(self):
         for k in ('USER_DATA', 'USERDATA'):
             my_d = os.path.join(self.tmp, k)
-            populate_context_dir(my_d, {k: USER_DATA})
+            populate_context_dir(my_d, {k: USER_DATA,
+                                        'USERDATA_ENCODING': ''})
+            results = ds.read_context_disk_dir(my_d)
+
+            self.assertTrue('userdata' in results)
+            self.assertEqual(USER_DATA, results['userdata'])
+
+    def test_user_data_encoding_required_for_decode(self):
+        b64userdata = b64encode(USER_DATA)
+        for k in ('USER_DATA', 'USERDATA'):
+            my_d = os.path.join(self.tmp, k)
+            populate_context_dir(my_d, {k: b64userdata})
+            results = ds.read_context_disk_dir(my_d)
+
+            self.assertTrue('userdata' in results)
+            self.assertEqual(b64userdata, results['userdata'])
+
+    def test_user_data_base64_encoding(self):
+        for k in ('USER_DATA', 'USERDATA'):
+            my_d = os.path.join(self.tmp, k)
+            populate_context_dir(my_d, {k: b64encode(USER_DATA),
+                                        'USERDATA_ENCODING': 'base64'})
             results = ds.read_context_disk_dir(my_d)
 
             self.assertTrue('userdata' in results)
 
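The behavior the OpenNebula tests above pin down is that base64 decoding must be opt-in: without `USERDATA_ENCODING: base64` in the context, even data that happens to look like base64 passes through untouched. A minimal sketch of that gating logic (hypothetical helper, not cloud-init's actual implementation):

```python
import base64

def decode_userdata(context):
    # Sketch: only decode when the context explicitly declares the
    # encoding; otherwise return the raw payload unchanged.
    userdata = context.get('USERDATA', context.get('USER_DATA'))
    if context.get('USERDATA_ENCODING') == 'base64':
        return base64.b64decode(userdata)
    return userdata

raw = base64.b64encode(b'#cloud-config\n')
# No encoding flag: the base64-looking payload passes through as-is.
passthrough = decode_userdata({'USERDATA': raw})
# Flag present: the payload is decoded.
decoded = decode_userdata({'USERDATA': raw, 'USERDATA_ENCODING': 'base64'})
```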
=== modified file 'tests/unittests/test_datasource/test_smartos.py'
--- tests/unittests/test_datasource/test_smartos.py	2014-02-26 19:21:40 +0000
+++ tests/unittests/test_datasource/test_smartos.py	2014-07-18 13:33:31 +0000
@@ -24,10 +24,7 @@
 
 import base64
 from cloudinit import helpers as c_helpers
-from cloudinit import stages
-from cloudinit import util
 from cloudinit.sources import DataSourceSmartOS
-from cloudinit.settings import (PER_INSTANCE)
 from tests.unittests import helpers
 import os
 import os.path
@@ -174,6 +171,7 @@
         self.apply_patches([(mod, 'get_serial', _get_serial)])
         self.apply_patches([(mod, 'dmi_data', _dmi_data)])
         self.apply_patches([(os, 'uname', _os_uname)])
+        self.apply_patches([(mod, 'device_exists', lambda d: True)])
         dsrc = mod.DataSourceSmartOS(sys_cfg, distro=None,
                                      paths=self.paths)
         return dsrc
 
=== modified file 'tests/unittests/test_handler/test_handler_seed_random.py'
--- tests/unittests/test_handler/test_handler_seed_random.py	2013-10-02 13:28:42 +0000
+++ tests/unittests/test_handler/test_handler_seed_random.py	2014-07-18 13:33:31 +0000
@@ -42,10 +42,32 @@
     def setUp(self):
         super(TestRandomSeed, self).setUp()
         self._seed_file = tempfile.mktemp()
+        self.unapply = []
+
+        # by default 'which' has nothing in its path
+        self.apply_patches([(util, 'which', self._which)])
+        self.apply_patches([(util, 'subp', self._subp)])
+        self.subp_called = []
+        self.whichdata = {}
 
     def tearDown(self):
+        apply_patches([i for i in reversed(self.unapply)])
         util.del_file(self._seed_file)
 
+    def apply_patches(self, patches):
+        ret = apply_patches(patches)
+        self.unapply += ret
+
+    def _which(self, program):
+        return self.whichdata.get(program)
+
+    def _subp(self, *args, **kwargs):
+        # supports subp calling with cmd as args or kwargs
+        if 'args' not in kwargs:
+            kwargs['args'] = args[0]
+        self.subp_called.append(kwargs)
+        return
+
     def _compress(self, text):
         contents = StringIO()
         gz_fh = gzip.GzipFile(mode='wb', fileobj=contents)
@@ -148,3 +170,56 @@
         cc_seed_random.handle('test', cfg, c, LOG, [])
         contents = util.load_file(self._seed_file)
         self.assertEquals('tiny-tim-was-here-so-was-josh', contents)
+
+    def test_seed_command_not_provided_pollinate_available(self):
+        c = self._get_cloud('ubuntu', {})
+        self.whichdata = {'pollinate': '/usr/bin/pollinate'}
+        cc_seed_random.handle('test', {}, c, LOG, [])
+
+        subp_args = [f['args'] for f in self.subp_called]
+        self.assertIn(['pollinate', '-q'], subp_args)
+
+    def test_seed_command_not_provided_pollinate_not_available(self):
+        c = self._get_cloud('ubuntu', {})
+        self.whichdata = {}
+        cc_seed_random.handle('test', {}, c, LOG, [])
+
+        # subp should not have been called as which would say not available
+        self.assertEquals(self.subp_called, list())
+
+    def test_unavailable_seed_command_and_required_raises_error(self):
+        c = self._get_cloud('ubuntu', {})
+        self.whichdata = {}
+        self.assertRaises(ValueError, cc_seed_random.handle,
+                          'test', {'random_seed': {'command_required': True}}, c, LOG, [])
+
+    def test_seed_command_and_required(self):
+        c = self._get_cloud('ubuntu', {})
+        self.whichdata = {'foo': 'foo'}
+        cfg = {'random_seed': {'command_required': True, 'command': ['foo']}}
+        cc_seed_random.handle('test', cfg, c, LOG, [])
+
+        self.assertIn(['foo'], [f['args'] for f in self.subp_called])
+
+    def test_file_in_environment_for_command(self):
+        c = self._get_cloud('ubuntu', {})
+        self.whichdata = {'foo': 'foo'}
+        cfg = {'random_seed': {'command_required': True, 'command': ['foo'],
+                               'file': self._seed_file}}
+        cc_seed_random.handle('test', cfg, c, LOG, [])
+
+        # this just insists that the first time subp was called,
+        # RANDOM_SEED_FILE was in the environment set up correctly
+        subp_env = [f['env'] for f in self.subp_called]
+        self.assertEqual(subp_env[0].get('RANDOM_SEED_FILE'), self._seed_file)
+
+
+def apply_patches(patches):
+    ret = []
+    for (ref, name, replace) in patches:
+        if replace is None:
+            continue
+        orig = getattr(ref, name)
+        setattr(ref, name, replace)
+        ret.append((ref, name, orig))
+    return ret
 
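The `apply_patches` helper introduced in the seed_random hunk above is a small reversible monkey-patching pattern: swap an attribute, record the original, and restore in `tearDown` by feeding the recorded triples back through the same function in reverse. A self-contained demonstration (the `Target` class is an illustrative stand-in, not from the branch):

```python
def apply_patches(patches):
    # Same shape as the helper in the diff: replace (ref, name) with
    # 'replace', returning the triples needed to undo the change.
    ret = []
    for (ref, name, replace) in patches:
        if replace is None:
            continue
        orig = getattr(ref, name)
        setattr(ref, name, replace)
        ret.append((ref, name, orig))
    return ret

class Target(object):
    value = 'original'

undo = apply_patches([(Target, 'value', 'patched')])
patched_value = Target.value          # 'patched'
# tearDown-style restore: apply the recorded originals in reverse.
apply_patches([i for i in reversed(undo)])
restored_value = Target.value         # 'original'
```

Restoring in reverse order matters when the same attribute is patched more than once, since only the first recorded triple holds the true original.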
=== modified file 'tests/unittests/test_handler/test_handler_yum_add_repo.py'
--- tests/unittests/test_handler/test_handler_yum_add_repo.py	2014-02-06 15:59:04 +0000
+++ tests/unittests/test_handler/test_handler_yum_add_repo.py	2014-07-18 13:33:31 +0000
@@ -1,4 +1,3 @@
-from cloudinit import helpers
 from cloudinit import util
 
 from cloudinit.config import cc_yum_add_repo
 
=== modified file 'tests/unittests/test_templating.py'
--- tests/unittests/test_templating.py	2014-07-16 18:31:31 +0000
+++ tests/unittests/test_templating.py	2014-07-18 13:33:31 +0000
@@ -17,26 +17,51 @@
 # along with this program.  If not, see <http://www.gnu.org/licenses/>.
 
 from tests.unittests import helpers as test_helpers
+import textwrap
 
 from cloudinit import templater
 
 
 class TestTemplates(test_helpers.TestCase):
     def test_render_basic(self):
-        in_data = """
+        in_data = textwrap.dedent("""
             ${b}
 
             c = d
-            """
+            """)
         in_data = in_data.strip()
-        expected_data = """
+        expected_data = textwrap.dedent("""
             2
 
             c = d
-            """
+            """)
         out_data = templater.basic_render(in_data, {'b': 2})
         self.assertEqual(expected_data.strip(), out_data)
 
+    def test_render_basic_no_parens(self):
+        hn = "myfoohost"
+        in_data = "h=$hostname\nc=d\n"
+        expected_data = "h=%s\nc=d\n" % hn
+        out_data = templater.basic_render(in_data, {'hostname': hn})
+        self.assertEqual(expected_data, out_data)
+
+    def test_render_basic_parens(self):
+        hn = "myfoohost"
+        in_data = "h = ${hostname}\nc=d\n"
+        expected_data = "h = %s\nc=d\n" % hn
+        out_data = templater.basic_render(in_data, {'hostname': hn})
+        self.assertEqual(expected_data, out_data)
+
+    def test_render_basic2(self):
+        mirror = "mymirror"
+        codename = "zany"
+        in_data = "deb $mirror $codename-updates main contrib non-free"
+        ex_data = "deb %s %s-updates main contrib non-free" % (mirror, codename)
+
+        out_data = templater.basic_render(in_data,
+                                          {'mirror': mirror, 'codename': codename})
+        self.assertEqual(ex_data, out_data)
+
     def test_detection(self):
         blob = "## template:cheetah"
 
@@ -53,14 +78,12 @@
         self.assertRaises(ValueError, templater.detect_template, blob)
 
     def test_render_cheetah(self):
-        blob = '''## template:cheetah
-$a,$b'''
+        blob = '\n'.join(['## template:cheetah', '$a,$b'])
         c = templater.render_string(blob, {"a": 1, "b": 2})
         self.assertEquals("1,2", c)
 
     def test_render_jinja(self):
-        blob = '''## template:jinja
-{{a}},{{b}}'''
+        blob = '\n'.join(['## template:jinja', '{{a}},{{b}}'])
        c = templater.render_string(blob, {"a": 1, "b": 2})
         self.assertEquals("1,2", c)
 
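The new `basic_render` tests exercise both substitution forms, bare `$name` and braced `${name}`. Python's standard-library `string.Template` accepts the same two forms, so it serves as a convenient stand-in to illustrate the behavior the tests expect (this is only an analogy; cloud-init's `templater.basic_render` is its own implementation):

```python
from string import Template

params = {'hostname': 'myfoohost'}

# Bare form, as in test_render_basic_no_parens:
no_parens = Template("h=$hostname\nc=d\n").substitute(params)
# Braced form, as in test_render_basic_parens:
parens = Template("h = ${hostname}\nc=d\n").substitute(params)
```

The braced form matters when the variable name is immediately followed by identifier characters, e.g. `${codename}-updates` in an apt sources line like the one `test_render_basic2` checks.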
Seems pretty ok to me, some small comments that u can adjust if u want.