Merge lp:~smoser/cloud-init/changeable-templates into lp:~harlowja/cloud-init/changeable-templates

Proposed by Scott Moser
Status: Merged
Merge reported by: Scott Moser
Merged at revision: not available
Proposed branch: lp:~smoser/cloud-init/changeable-templates
Merge into: lp:~harlowja/cloud-init/changeable-templates
Diff against target: 1407 lines (+665/-121)
30 files modified
ChangeLog (+17/-0)
TODO.rst (+38/-41)
bin/cloud-init (+124/-14)
cloudinit/config/cc_final_message.py (+1/-0)
cloudinit/config/cc_power_state_change.py (+0/-1)
cloudinit/config/cc_seed_random.py (+41/-9)
cloudinit/cs_utils.py (+7/-1)
cloudinit/importer.py (+0/-4)
cloudinit/mergers/__init__.py (+0/-5)
cloudinit/sources/DataSourceAzure.py (+102/-4)
cloudinit/sources/DataSourceCloudSigma.py (+37/-0)
cloudinit/sources/DataSourceNoCloud.py (+1/-1)
cloudinit/sources/DataSourceOpenNebula.py (+13/-0)
cloudinit/sources/DataSourceSmartOS.py (+8/-2)
cloudinit/stages.py (+5/-3)
cloudinit/util.py (+3/-1)
cloudinit/version.py (+1/-1)
doc/examples/cloud-config-user-groups.txt (+1/-1)
doc/sources/cloudsigma/README.rst (+4/-0)
doc/status.txt (+53/-0)
tests/unittests/helpers.py (+24/-0)
tests/unittests/test__init__.py (+1/-5)
tests/unittests/test_datasource/test_cloudsigma.py (+44/-5)
tests/unittests/test_datasource/test_gce.py (+3/-2)
tests/unittests/test_datasource/test_maas.py (+0/-1)
tests/unittests/test_datasource/test_opennebula.py (+26/-4)
tests/unittests/test_datasource/test_smartos.py (+1/-3)
tests/unittests/test_handler/test_handler_seed_random.py (+75/-0)
tests/unittests/test_handler/test_handler_yum_add_repo.py (+0/-1)
tests/unittests/test_templating.py (+35/-12)
To merge this branch: bzr merge lp:~smoser/cloud-init/changeable-templates
Reviewer: Joshua Harlow (status: Pending)
Review via email: mp+227323@code.launchpad.net

Description of the change

A few things here:

a.) merge with trunk (you can 'bzr merge lp:cloud-init' and get the same).
b.) use textwrap.dedent
c.) add some tests based on actually shipped templates that will need to pass for the basic renderer.
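Point (b) refers to Python's standard-library `textwrap.dedent`, which strips the common leading whitespace from an indented triple-quoted string; a minimal illustration (not code from this branch — the cloud-config keys shown are just an example echoing the `random_seed` module touched in this diff):

```python
import textwrap

# Triple-quoted strings embedded in indented test code carry the
# indentation with them; dedent() removes the common leading whitespace.
content = textwrap.dedent("""\
    #cloud-config
    random_seed:
      file: /dev/urandom
""")
```

This is handy in tests, where the expected template output would otherwise be written flush-left inside otherwise-indented test methods.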

Revision history for this message
Joshua Harlow (harlowja) wrote :

Seems pretty OK to me; some small comments that you can adjust if you want.

Revision history for this message
Joshua Harlow (harlowja) :

Preview Diff

=== modified file 'ChangeLog'
--- ChangeLog 2014-02-27 15:51:22 +0000
+++ ChangeLog 2014-07-18 13:33:31 +0000
@@ -1,3 +1,12 @@
+0.7.6:
+ - open 0.7.6
+ - Enable vendordata on CloudSigma datasource (LP: #1303986)
+ - Poll on /dev/ttyS1 in CloudSigma datasource only if dmidecode says
+   we're running on cloudsigma (LP: #1316475) [Kiril Vladimiroff]
+ - SmartOS test: do not require existance of /dev/ttyS1. [LP: #1316597]
+ - doc: fix user-groups doc to reference plural ssh-authorized-keys
+   (LP: #1327065) [Joern Heissler]
+ - fix 'make test' in python 2.6
 0.7.5:
 - open 0.7.5
 - Add a debug log message around import failures
@@ -33,6 +42,14 @@
   rather than relying on EC2 data in openstack metadata service.
 - SmartOS, AltCloud: disable running on arm systems due to bug
   (LP: #1243287, #1285686) [Oleg Strikov]
+ - Allow running a command to seed random, default is 'pollinate -q'
+   (LP: #1286316) [Dustin Kirkland]
+ - Write status to /run/cloud-init/status.json for consumption by
+   other programs (LP: #1284439)
+ - Azure: if a reboot causes ephemeral storage to be re-provisioned
+   Then we need to re-format it. (LP: #1292648)
+ - OpenNebula: support base64 encoded user-data
+   [Enol Fernandez, Peter Kotcauer]
 0.7.4:
 - fix issue mounting 'ephemeral0' if ephemeral0 was an alias for a
   partitioned block device with target filesystem on ephemeral0.1.
 
=== renamed file 'TODO' => 'TODO.rst'
--- TODO 2012-07-10 03:32:50 +0000
+++ TODO.rst 2014-07-18 13:33:31 +0000
@@ -1,46 +1,43 @@
-- Consider a 'failsafe' DataSource
-  If all others fail, setting a default that
-  - sets the user password, writing it to console
-  - logs to console that this happened
-- Consider a 'previous' DataSource
-  If no other data source is found, fall back to the 'previous' one
-  keep a indication of what instance id that is in /var/lib/cloud
-- Rewrite "cloud-init-query" (currently not implemented)
-  Possibly have DataSource and cloudinit expose explicit fields
-  - instance-id
-  - hostname
-  - mirror
-  - release
-  - ssh public keys
+==============================================
+Things that cloud-init may do (better) someday
+==============================================
+
+- Consider making ``failsafe`` ``DataSource``
+  - sets the user password, writing it to console
+
+- Consider a ``previous`` ``DataSource``, if no other data source is
+  found, fall back to the 'previous' one that worked.
+- Rewrite ``cloud-init-query`` (currently not implemented)
+- Possibly have a ``DataSource`` expose explicit fields:
+
+  - instance-id
+  - hostname
+  - mirror
+  - release
+  - ssh public keys
+
 - Remove the conversion of the ubuntu network interface format conversion
   to a RH/fedora format and replace it with a top level format that uses
   the netcf libraries format instead (which itself knows how to translate
-  into the specific formats)
-- Replace the 'apt*' modules with variants that now use the distro classes
-  to perform distro independent packaging commands (where possible)
-- Canonicalize the semaphore/lock name for modules and user data handlers
-  a. It is most likely a bug that currently exists that if a module in config
-     alters its name and it has already ran, then it will get ran again since
-     the lock name hasn't be canonicalized
+  into the specific formats). See for example `netcf`_ which seems to be
+  an active project that has this capability.
+- Replace the ``apt*`` modules with variants that now use the distro classes
+  to perform distro independent packaging commands (wherever possible).
 - Replace some the LOG.debug calls with a LOG.info where appropriate instead
-  of how right now there is really only 2 levels (WARN and DEBUG)
-- Remove the 'cc_' for config modules, either have them fully specified (ie
-  'cloudinit.config.resizefs') or by default only look in the 'cloudinit.config'
-  for these modules (or have a combination of the above), this avoids having
-  to understand where your modules are coming from (which can be altered by
-  the current python inclusion path)
-- Depending on if people think the wrapper around 'os.path.join' provided
-  by the 'paths' object is useful (allowing us to modify based off a 'read'
-  and 'write' configuration based 'root') or is just to confusing, it might be
-  something to remove later, and just recommend using 'chroot' instead (or the X
-  different other options which are similar to 'chroot'), which is might be more
-  natural and less confusing...
-- Instead of just warning when a module is being ran on a 'unknown' distribution
-  perhaps we should not run that module in that case? Or we might want to start
-  reworking those modules so they will run on all distributions? Or if that is
-  not the case, then maybe we want to allow fully specified python paths for
-  modules and start encouraging packages of 'ubuntu' modules, packages of 'rhel'
-  specific modules that people can add instead of having them all under the
-  cloud-init 'root' tree? This might encourage more development of other modules
-  instead of having to go edit the cloud-init code to accomplish this.
+  of how right now there is really only 2 levels (``WARN`` and ``DEBUG``)
+- Remove the ``cc_`` prefix for config modules, either have them fully
+  specified (ie ``cloudinit.config.resizefs``) or by default only look in
+  the ``cloudinit.config`` namespace for these modules (or have a combination
+  of the above), this avoids having to understand where your modules are
+  coming from (which can be altered by the current python inclusion path)
+- Instead of just warning when a module is being ran on a ``unknown``
+  distribution perhaps we should not run that module in that case? Or we might
+  want to start reworking those modules so they will run on all
+  distributions? Or if that is not the case, then maybe we want to allow
+  fully specified python paths for modules and start encouraging
+  packages of ``ubuntu`` modules, packages of ``rhel`` specific modules that
+  people can add instead of having them all under the cloud-init ``root``
+  tree? This might encourage more development of other modules instead of
+  having to go edit the cloud-init code to accomplish this.
 
+.. _netcf: https://fedorahosted.org/netcf/
 
=== modified file 'bin/cloud-init'
--- bin/cloud-init 2014-01-09 00:16:24 +0000
+++ bin/cloud-init 2014-07-18 13:33:31 +0000
@@ -22,8 +22,11 @@
 # along with this program. If not, see <http://www.gnu.org/licenses/>.
 
 import argparse
+import json
 import os
 import sys
+import time
+import tempfile
 import traceback
 
 # This is more just for running from the bin folder so that
@@ -126,11 +129,11 @@
                " under section '%s'") % (action_name, full_section_name)
         sys.stderr.write("%s\n" % (msg))
         LOG.debug(msg)
-        return 0
+        return []
     else:
         LOG.debug("Ran %s modules with %s failures",
                   len(which_ran), len(failures))
-        return len(failures)
+        return failures
 
 
 def main_init(name, args):
@@ -220,7 +223,10 @@
         if existing_files:
             LOG.debug("Exiting early due to the existence of %s files",
                       existing_files)
-            return 0
+            return (None, [])
+        else:
+            LOG.debug("Execution continuing, no previous run detected that"
+                      " would allow us to stop early.")
     else:
         # The cache is not instance specific, so it has to be purged
         # but we want 'start' to benefit from a cache if
@@ -249,9 +255,9 @@
                           " Likely bad things to come!"))
         if not args.force:
             if args.local:
-                return 0
+                return (None, [])
             else:
-                return 1
+                return (None, ["No instance datasource found."])
     # Stage 6
     iid = init.instancify()
     LOG.debug("%s will now be targeting instance id: %s", name, iid)
@@ -274,7 +280,7 @@
         init.consume_data(PER_ALWAYS)
     except Exception:
         util.logexc(LOG, "Consuming user data failed!")
-        return 1
+        return (init.datasource, ["Consuming user data failed!"])
 
     # Stage 8 - re-read and apply relevant cloud-config to include user-data
     mods = stages.Modules(init, extract_fns(args))
@@ -291,7 +297,7 @@
         logging.setupLogging(mods.cfg)
 
     # Stage 10
-    return run_module_section(mods, name, name)
+    return (init.datasource, run_module_section(mods, name, name))
 
 
 def main_modules(action_name, args):
@@ -315,14 +321,12 @@
         init.fetch()
     except sources.DataSourceNotFoundException:
         # There was no datasource found, theres nothing to do
-        util.logexc(LOG, ('Can not apply stage %s, '
-                          'no datasource found!'
-                          " Likely bad things to come!"), name)
-        print_exc(('Can not apply stage %s, '
-                   'no datasource found!'
-                   " Likely bad things to come!") % (name))
+        msg = ('Can not apply stage %s, no datasource found! Likely bad '
+               'things to come!' % name)
+        util.logexc(LOG, msg)
+        print_exc(msg)
        if not args.force:
-            return 1
+            return [(msg)]
     # Stage 3
     mods = stages.Modules(init, extract_fns(args))
     # Stage 4
@@ -419,6 +423,110 @@
         return 0
 
 
+def atomic_write_json(path, data):
+    tf = None
+    try:
+        tf = tempfile.NamedTemporaryFile(dir=os.path.dirname(path),
+                                         delete=False)
+        tf.write(json.dumps(data, indent=1) + "\n")
+        tf.close()
+        os.rename(tf.name, path)
+    except Exception as e:
+        if tf is not None:
+            util.del_file(tf.name)
+        raise e
+
+
+def status_wrapper(name, args, data_d=None, link_d=None):
+    if data_d is None:
+        data_d = os.path.normpath("/var/lib/cloud/data")
+    if link_d is None:
+        link_d = os.path.normpath("/run/cloud-init")
+
+    status_path = os.path.join(data_d, "status.json")
+    status_link = os.path.join(link_d, "status.json")
+    result_path = os.path.join(data_d, "result.json")
+    result_link = os.path.join(link_d, "result.json")
+
+    util.ensure_dirs((data_d, link_d,))
+
+    (_name, functor) = args.action
+
+    if name == "init":
+        if args.local:
+            mode = "init-local"
+        else:
+            mode = "init"
+    elif name == "modules":
+        mode = "modules-%s" % args.mode
+    else:
+        raise ValueError("unknown name: %s" % name)
+
+    modes = ('init', 'init-local', 'modules-config', 'modules-final')
+
+    status = None
+    if mode == 'init-local':
+        for f in (status_link, result_link, status_path, result_path):
+            util.del_file(f)
+    else:
+        try:
+            status = json.loads(util.load_file(status_path))
+        except:
+            pass
+
+    if status is None:
+        nullstatus = {
+            'errors': [],
+            'start': None,
+            'end': None,
+        }
+        status = {'v1': {}}
+        for m in modes:
+            status['v1'][m] = nullstatus.copy()
+        status['v1']['datasource'] = None
+
+    v1 = status['v1']
+    v1['stage'] = mode
+    v1[mode]['start'] = time.time()
+
+    atomic_write_json(status_path, status)
+    util.sym_link(os.path.relpath(status_path, link_d), status_link,
+                  force=True)
+
+    try:
+        ret = functor(name, args)
+        if mode in ('init', 'init-local'):
+            (datasource, errors) = ret
+            if datasource is not None:
+                v1['datasource'] = str(datasource)
+        else:
+            errors = ret
+
+        v1[mode]['errors'] = [str(e) for e in errors]
+
+    except Exception as e:
+        v1[mode]['errors'] = [str(e)]
+
+    v1[mode]['finished'] = time.time()
+    v1['stage'] = None
+
+    atomic_write_json(status_path, status)
+
+    if mode == "modules-final":
+        # write the 'finished' file
+        errors = []
+        for m in modes:
+            if v1[m]['errors']:
+                errors.extend(v1[m].get('errors', []))
+
+        atomic_write_json(result_path,
+                          {'v1': {'datasource': v1['datasource'], 'errors': errors}})
+        util.sym_link(os.path.relpath(result_path, link_d), result_link,
+                      force=True)
+
+    return len(v1[mode]['errors'])
+
+
 def main():
     parser = argparse.ArgumentParser()
 
@@ -502,6 +610,8 @@
     signal_handler.attach_handlers()
 
     (name, functor) = args.action
+    if name in ("modules", "init"):
+        functor = status_wrapper
 
     return util.log_time(logfunc=LOG.debug, msg="cloud-init mode '%s'" % name,
                          get_uptime=True, func=functor, args=(name, args))
 
=== modified file 'cloudinit/config/cc_final_message.py'
--- cloudinit/config/cc_final_message.py 2013-09-25 17:51:52 +0000
+++ cloudinit/config/cc_final_message.py 2014-07-18 13:33:31 +0000
@@ -53,6 +53,7 @@
             'version': cver,
             'datasource': str(cloud.datasource),
         }
+        subs.update(dict([(k.upper(), v) for k, v in subs.items()]))
         util.multi_log("%s\n" % (templater.render_string(msg_in, subs)),
                        console=False, stderr=True, log=log)
     except Exception:
 
=== modified file 'cloudinit/config/cc_power_state_change.py'
--- cloudinit/config/cc_power_state_change.py 2014-02-03 22:03:14 +0000
+++ cloudinit/config/cc_power_state_change.py 2014-07-18 13:33:31 +0000
@@ -22,7 +22,6 @@
 import errno
 import os
 import re
-import signal
 import subprocess
 import time
 
 
=== modified file 'cloudinit/config/cc_seed_random.py'
--- cloudinit/config/cc_seed_random.py 2014-02-05 15:36:47 +0000
+++ cloudinit/config/cc_seed_random.py 2014-07-18 13:33:31 +0000
@@ -1,8 +1,11 @@
 # vi: ts=4 expandtab
 #
 # Copyright (C) 2013 Yahoo! Inc.
+# Copyright (C) 2014 Canonical, Ltd
 #
 # Author: Joshua Harlow <harlowja@yahoo-inc.com>
+# Author: Dustin Kirkland <kirkland@ubuntu.com>
+# Author: Scott Moser <scott.moser@canonical.com>
 #
 # This program is free software: you can redistribute it and/or modify
 # it under the terms of the GNU General Public License version 3, as
@@ -17,12 +20,15 @@
 # along with this program. If not, see <http://www.gnu.org/licenses/>.
 
 import base64
+import os
 from StringIO import StringIO
 
 from cloudinit.settings import PER_INSTANCE
+from cloudinit import log as logging
 from cloudinit import util
 
 frequency = PER_INSTANCE
+LOG = logging.getLogger(__name__)
 
 
 def _decode(data, encoding=None):
@@ -38,24 +44,50 @@
         raise IOError("Unknown random_seed encoding: %s" % (encoding))
 
 
+def handle_random_seed_command(command, required, env=None):
+    if not command and required:
+        raise ValueError("no command found but required=true")
+    elif not command:
+        LOG.debug("no command provided")
+        return
+
+    cmd = command[0]
+    if not util.which(cmd):
+        if required:
+            raise ValueError("command '%s' not found but required=true", cmd)
+        else:
+            LOG.debug("command '%s' not found for seed_command", cmd)
+            return
+    util.subp(command, env=env, capture=False)
+
+
 def handle(name, cfg, cloud, log, _args):
-    if not cfg or "random_seed" not in cfg:
-        log.debug(("Skipping module named %s, "
-                   "no 'random_seed' configuration found"), name)
-        return
-
-    my_cfg = cfg['random_seed']
-    seed_path = my_cfg.get('file', '/dev/urandom')
+    mycfg = cfg.get('random_seed', {})
+    seed_path = mycfg.get('file', '/dev/urandom')
+    seed_data = mycfg.get('data', '')
+
     seed_buf = StringIO()
-    seed_buf.write(_decode(my_cfg.get('data', ''),
-                           encoding=my_cfg.get('encoding')))
+    if seed_data:
+        seed_buf.write(_decode(seed_data, encoding=mycfg.get('encoding')))
 
+    # 'random_seed' is set up by Azure datasource, and comes already in
+    # openstack meta_data.json
     metadata = cloud.datasource.metadata
     if metadata and 'random_seed' in metadata:
         seed_buf.write(metadata['random_seed'])
 
     seed_data = seed_buf.getvalue()
     if len(seed_data):
-        log.debug("%s: adding %s bytes of random seed entrophy to %s", name,
+        log.debug("%s: adding %s bytes of random seed entropy to %s", name,
                   len(seed_data), seed_path)
         util.append_file(seed_path, seed_data)
+
+    command = mycfg.get('command', ['pollinate', '-q'])
+    req = mycfg.get('command_required', False)
+    try:
+        env = os.environ.copy()
+        env['RANDOM_SEED_FILE'] = seed_path
+        handle_random_seed_command(command=command, required=req, env=env)
+    except ValueError as e:
+        log.warn("handling random command [%s] failed: %s", command, e)
+        raise e
 
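The reworked `handle()` above takes everything from the `random_seed` section of cloud-config. As a quick reference, here is a hedged sketch of the defaults it applies and the environment it passes to the seed command (key names and defaults copied from the diff; this is illustration, not code from the branch):

```python
import os

# Defaults applied by the new handle() when keys are absent:
mycfg = {
    'file': '/dev/urandom',          # where decoded seed bytes are appended
    'data': '',                      # optional inline seed material
    'encoding': None,                # passed through to _decode()
    'command': ['pollinate', '-q'],  # default external seed command
    'command_required': False,       # if True, a missing command raises
}

# The command runs with RANDOM_SEED_FILE exported so it can locate
# the same seed file cloud-init wrote to:
env = os.environ.copy()
env['RANDOM_SEED_FILE'] = mycfg['file']
```

Note the asymmetry: a missing or not-installed command is only a debug message unless `command_required` is set, in which case it raises `ValueError` and the failure propagates.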
=== modified file 'cloudinit/cs_utils.py'
--- cloudinit/cs_utils.py 2014-02-12 10:14:49 +0000
+++ cloudinit/cs_utils.py 2014-07-18 13:33:31 +0000
@@ -35,6 +35,10 @@
 
 import serial
 
+# these high timeouts are necessary as read may read a lot of data.
+READ_TIMEOUT = 60
+WRITE_TIMEOUT = 10
+
 SERIAL_PORT = '/dev/ttyS1'
 if platform.system() == 'Windows':
     SERIAL_PORT = 'COM2'
@@ -76,7 +80,9 @@
         self.result = self._marshal(self.raw_result)
 
     def _execute(self):
-        connection = serial.Serial(SERIAL_PORT)
+        connection = serial.Serial(port=SERIAL_PORT,
+                                   timeout=READ_TIMEOUT,
+                                   writeTimeout=WRITE_TIMEOUT)
         connection.write(self.request)
         return connection.readline().strip('\x04\n')
 
 
=== modified file 'cloudinit/importer.py'
--- cloudinit/importer.py 2013-10-09 19:22:06 +0000
+++ cloudinit/importer.py 2014-07-18 13:33:31 +0000
@@ -45,8 +45,6 @@
         real_path.append(base_name)
         full_path = '.'.join(real_path)
         real_paths.append(full_path)
-    LOG.debug("Looking for modules %s that have attributes %s",
-              real_paths, required_attrs)
     for full_path in real_paths:
         mod = None
         try:
@@ -62,6 +60,4 @@
                 found_attrs += 1
             if found_attrs == len(required_attrs):
                 found_places.append(full_path)
-    LOG.debug("Found %s with attributes %s in %s", base_name,
-              required_attrs, found_places)
     return found_places
 
=== modified file 'cloudinit/mergers/__init__.py'
--- cloudinit/mergers/__init__.py 2013-05-03 21:41:28 +0000
+++ cloudinit/mergers/__init__.py 2014-07-18 13:33:31 +0000
@@ -55,9 +55,6 @@
         if not meth:
             meth = self._handle_unknown
             args.insert(0, method_name)
-        LOG.debug("Merging '%s' into '%s' using method '%s' of '%s'",
-                  type_name, type_utils.obj_name(merge_with),
-                  meth.__name__, self)
         return meth(*args)
 
 
@@ -84,8 +81,6 @@
                 # First one that has that method/attr gets to be
                 # the one that will be called
                 meth = getattr(merger, meth_wanted)
-                LOG.debug(("Merging using located merger '%s'"
-                           " since it had method '%s'"), merger, meth_wanted)
                 break
         if not meth:
             return UnknownMerger._handle_unknown(self, meth_wanted,
 
=== modified file 'cloudinit/sources/DataSourceAzure.py'
--- cloudinit/sources/DataSourceAzure.py 2014-02-10 20:11:45 +0000
+++ cloudinit/sources/DataSourceAzure.py 2014-07-18 13:33:31 +0000
@@ -18,12 +18,14 @@
 
 import base64
 import crypt
+import fnmatch
 import os
 import os.path
 import time
 from xml.dom import minidom
 
 from cloudinit import log as logging
+from cloudinit.settings import PER_ALWAYS
 from cloudinit import sources
 from cloudinit import util
 
@@ -53,14 +55,15 @@
     'disk_setup': {
         'ephemeral0': {'table_type': 'mbr',
                        'layout': True,
-                       'overwrite': False}
+                       'overwrite': False},
     },
     'fs_setup': [{'filesystem': 'ext4',
                   'device': 'ephemeral0.1',
-                  'replace_fs': 'ntfs'}]
+                  'replace_fs': 'ntfs'}],
 }
 
 DS_CFG_PATH = ['datasource', DS_NAME]
+DEF_EPHEMERAL_LABEL = 'Temporary Storage'
 
 
 class DataSourceAzureNet(sources.DataSource):
@@ -189,8 +192,17 @@
             LOG.warn("failed to get instance id in %s: %s", shcfgxml, e)
 
         pubkeys = pubkeys_from_crt_files(fp_files)
-
         self.metadata['public-keys'] = pubkeys
+
+        found_ephemeral = find_ephemeral_disk()
+        if found_ephemeral:
+            self.ds_cfg['disk_aliases']['ephemeral0'] = found_ephemeral
+            LOG.debug("using detected ephemeral0 of %s", found_ephemeral)
+
+        cc_modules_override = support_new_ephemeral(self.sys_cfg)
+        if cc_modules_override:
+            self.cfg['cloud_config_modules'] = cc_modules_override
+
         return True
 
     def device_name_to_device(self, name):
@@ -200,6 +212,92 @@
         return self.cfg
 
 
+def count_files(mp):
+    return len(fnmatch.filter(os.listdir(mp), '*[!cdrom]*'))
+
+
+def find_ephemeral_part():
+    """
+    Locate the default ephmeral0.1 device. This will be the first device
+    that has a LABEL of DEF_EPHEMERAL_LABEL and is a NTFS device. If Azure
+    gets more ephemeral devices, this logic will only identify the first
+    such device.
+    """
+    c_label_devs = util.find_devs_with("LABEL=%s" % DEF_EPHEMERAL_LABEL)
+    c_fstype_devs = util.find_devs_with("TYPE=ntfs")
+    for dev in c_label_devs:
+        if dev in c_fstype_devs:
+            return dev
+    return None
+
+
+def find_ephemeral_disk():
+    """
+    Get the ephemeral disk.
+    """
+    part_dev = find_ephemeral_part()
+    if part_dev and str(part_dev[-1]).isdigit():
+        return part_dev[:-1]
+    elif part_dev:
+        return part_dev
+    return None
+
+
+def support_new_ephemeral(cfg):
+    """
+    Windows Azure makes ephemeral devices ephemeral to boot; a ephemeral device
+    may be presented as a fresh device, or not.
+
+    Since the knowledge of when a disk is supposed to be plowed under is
+    specific to Windows Azure, the logic resides here in the datasource. When a
+    new ephemeral device is detected, cloud-init overrides the default
+    frequency for both disk-setup and mounts for the current boot only.
+    """
+    device = find_ephemeral_part()
+    if not device:
+        LOG.debug("no default fabric formated ephemeral0.1 found")
+        return None
+    LOG.debug("fabric formated ephemeral0.1 device at %s", device)
+
+    file_count = 0
+    try:
+        file_count = util.mount_cb(device, count_files)
+    except:
+        return None
+    LOG.debug("fabric prepared ephmeral0.1 has %s files on it", file_count)
+
+    if file_count >= 1:
+        LOG.debug("fabric prepared ephemeral0.1 will be preserved")
+        return None
+    else:
+        # if device was already mounted, then we need to unmount it
+        # race conditions could allow for a check-then-unmount
+        # to have a false positive. so just unmount and then check.
+        try:
+            util.subp(['umount', device])
+        except util.ProcessExecutionError as e:
+            if device in util.mounts():
+                LOG.warn("Failed to unmount %s, will not reformat.", device)
+                LOG.debug("Failed umount: %s", e)
+                return None
+
+    LOG.debug("cloud-init will format ephemeral0.1 this boot.")
+    LOG.debug("setting disk_setup and mounts modules 'always' for this boot")
+
+    cc_modules = cfg.get('cloud_config_modules')
+    if not cc_modules:
+        return None
+
+    mod_list = []
+    for mod in cc_modules:
+        if mod in ("disk_setup", "mounts"):
+            mod_list.append([mod, PER_ALWAYS])
+            LOG.debug("set module '%s' to 'always' for this boot", mod)
+        else:
+            mod_list.append(mod)
+    return mod_list
+
+
 def handle_set_hostname(enabled, hostname, cfg):
     if not util.is_true(enabled):
         return
 
=== modified file 'cloudinit/sources/DataSourceCloudSigma.py'
--- cloudinit/sources/DataSourceCloudSigma.py 2014-02-18 16:58:12 +0000
+++ cloudinit/sources/DataSourceCloudSigma.py 2014-07-18 13:33:31 +0000
@@ -15,10 +15,13 @@
 #
 # You should have received a copy of the GNU General Public License
 # along with this program. If not, see <http://www.gnu.org/licenses/>.
+from base64 import b64decode
+import os
 import re
 
 from cloudinit import log as logging
 from cloudinit import sources
+from cloudinit import util
 from cloudinit.cs_utils import Cepko
 
 LOG = logging.getLogger(__name__)
@@ -39,12 +42,40 @@
         self.ssh_public_key = ''
         sources.DataSource.__init__(self, sys_cfg, distro, paths)
 
+    def is_running_in_cloudsigma(self):
+        """
+        Uses dmidecode to detect if this instance of cloud-init is running
+        in the CloudSigma's infrastructure.
+        """
+        uname_arch = os.uname()[4]
+        if uname_arch.startswith("arm") or uname_arch == "aarch64":
+            # Disabling because dmidecode in CMD_DMI_SYSTEM crashes kvm process
+            LOG.debug("Disabling CloudSigma datasource on arm (LP: #1243287)")
+            return False
+
+        dmidecode_path = util.which('dmidecode')
+        if not dmidecode_path:
+            return False
+
+        LOG.debug("Determining hypervisor product name via dmidecode")
+        try:
+            cmd = [dmidecode_path, "--string", "system-product-name"]
+            system_product_name, _ = util.subp(cmd)
+            return 'cloudsigma' in system_product_name.lower()
+        except:
+            LOG.warn("Failed to get hypervisor product name via dmidecode")
+
+        return False
+
     def get_data(self):
         """
         Metadata is the whole server context and /meta/cloud-config is used
         as userdata.
         """
         dsmode = None
+        if not self.is_running_in_cloudsigma():
+            return False
+
         try:
             server_context = self.cepko.all().result
             server_meta = server_context['meta']
@@ -61,7 +92,13 @@
61 if dsmode == "disabled" or dsmode != self.dsmode:92 if dsmode == "disabled" or dsmode != self.dsmode:
62 return False93 return False
6394
95 base64_fields = server_meta.get('base64_fields', '').split(',')
64 self.userdata_raw = server_meta.get('cloudinit-user-data', "")96 self.userdata_raw = server_meta.get('cloudinit-user-data', "")
97 if 'cloudinit-user-data' in base64_fields:
98 self.userdata_raw = b64decode(self.userdata_raw)
99 if 'cloudinit' in server_context.get('vendor_data', {}):
100 self.vendordata_raw = server_context["vendor_data"]["cloudinit"]
101
65 self.metadata = server_context102 self.metadata = server_context
66 self.ssh_public_key = server_meta['ssh_public_key']103 self.ssh_public_key = server_meta['ssh_public_key']
67104
68105
=== modified file 'cloudinit/sources/DataSourceNoCloud.py'
--- cloudinit/sources/DataSourceNoCloud.py 2014-02-18 17:58:21 +0000
+++ cloudinit/sources/DataSourceNoCloud.py 2014-07-18 13:33:31 +0000
@@ -57,7 +57,7 @@
             md = {}
             if parse_cmdline_data(self.cmdline_id, md):
                 found.append("cmdline")
-                mydata.update(md)
+                mydata['meta-data'].update(md)
         except:
             util.logexc(LOG, "Unable to parse command line data")
             return False
 
=== modified file 'cloudinit/sources/DataSourceOpenNebula.py'
--- cloudinit/sources/DataSourceOpenNebula.py 2014-01-17 01:11:27 +0000
+++ cloudinit/sources/DataSourceOpenNebula.py 2014-07-18 13:33:31 +0000
@@ -4,11 +4,13 @@
 # Copyright (C) 2012 Yahoo! Inc.
 # Copyright (C) 2012-2013 CERIT Scientific Cloud
 # Copyright (C) 2012-2013 OpenNebula.org
+# Copyright (C) 2014 Consejo Superior de Investigaciones Cientificas
 #
 # Author: Scott Moser <scott.moser@canonical.com>
 # Author: Joshua Harlow <harlowja@yahoo-inc.com>
 # Author: Vlastimil Holer <xholer@mail.muni.cz>
 # Author: Javier Fontan <jfontan@opennebula.org>
+# Author: Enol Fernandez <enolfc@ifca.unican.es>
 #
 # This program is free software: you can redistribute it and/or modify
 # it under the terms of the GNU General Public License version 3, as
@@ -22,6 +24,7 @@
 # You should have received a copy of the GNU General Public License
 # along with this program. If not, see <http://www.gnu.org/licenses/>.
 
+import base64
 import os
 import pwd
 import re
@@ -417,6 +420,16 @@
     elif "USERDATA" in context:
         results['userdata'] = context["USERDATA"]
 
+    # b64decode user data if necessary (default)
+    if 'userdata' in results:
+        encoding = context.get('USERDATA_ENCODING',
+                               context.get('USER_DATA_ENCODING'))
+        if encoding == "base64":
+            try:
+                results['userdata'] = base64.b64decode(results['userdata'])
+            except TypeError:
+                LOG.warn("Failed base64 decoding of userdata")
+
     # generate static /etc/network/interfaces
     # only if there are any required context variables
     # http://opennebula.org/documentation:rel3.8:cong#network_configuration
 
=== modified file 'cloudinit/sources/DataSourceSmartOS.py'
--- cloudinit/sources/DataSourceSmartOS.py 2014-02-26 19:28:46 +0000
+++ cloudinit/sources/DataSourceSmartOS.py 2014-07-18 13:33:31 +0000
@@ -170,8 +170,9 @@
         md = {}
         ud = ""
 
-        if not os.path.exists(self.seed):
-            LOG.debug("Host does not appear to be on SmartOS")
+        if not device_exists(self.seed):
+            LOG.debug("No serial device '%s' found for SmartOS datasource",
+                      self.seed)
             return False
 
         uname_arch = os.uname()[4]
@@ -274,6 +275,11 @@
                               b64=b64)
 
 
+def device_exists(device):
+    """Simplistic method to determine if the device exists or not"""
+    return os.path.exists(device)
+
+
 def get_serial(seed_device, seed_timeout):
     """This is replaced in unit testing, allowing us to replace
     serial.Serial with a mocked class.
 
=== modified file 'cloudinit/stages.py'
--- cloudinit/stages.py 2014-02-13 18:53:08 +0000
+++ cloudinit/stages.py 2014-07-18 13:33:31 +0000
@@ -397,8 +397,8 @@
                 mod = handlers.fixup_handler(mod)
                 types = c_handlers.register(mod)
                 if types:
-                    LOG.debug("Added custom handler for %s from %s",
-                              types, fname)
+                    LOG.debug("Added custom handler for %s [%s] from %s",
+                              types, mod, fname)
             except Exception:
                 util.logexc(LOG, "Failed to register handler from %s",
                             fname)
@@ -644,6 +644,8 @@
             freq = mod.frequency
             if not freq in FREQUENCIES:
                 freq = PER_INSTANCE
+            LOG.debug("Running module %s (%s) with frequency %s",
+                      name, mod, freq)
 
             # Use the configs logger and not our own
             # TODO(harlowja): possibly check the module
@@ -657,7 +659,7 @@
                 run_name = "config-%s" % (name)
                 cc.run(run_name, mod.handle, func_args, freq=freq)
             except Exception as e:
-                util.logexc(LOG, "Running %s (%s) failed", name, mod)
+                util.logexc(LOG, "Running module %s (%s) failed", name, mod)
                 failures.append((name, e))
         return (which_ran, failures)
 
=== modified file 'cloudinit/util.py'
--- cloudinit/util.py 2014-02-13 11:27:22 +0000
+++ cloudinit/util.py 2014-07-18 13:33:31 +0000
@@ -1395,8 +1395,10 @@
     return obj_copy.deepcopy(CFG_BUILTIN)
 
 
-def sym_link(source, link):
+def sym_link(source, link, force=False):
     LOG.debug("Creating symbolic link from %r => %r", link, source)
+    if force and os.path.exists(link):
+        del_file(link)
     os.symlink(source, link)
 
 
=== modified file 'cloudinit/version.py'
--- cloudinit/version.py 2013-11-19 21:49:53 +0000
+++ cloudinit/version.py 2014-07-18 13:33:31 +0000
@@ -20,7 +20,7 @@
 
 
 def version():
-    return vr.StrictVersion("0.7.5")
+    return vr.StrictVersion("0.7.6")
 
 
 def version_string():
 
=== modified file 'doc/examples/cloud-config-user-groups.txt'
--- doc/examples/cloud-config-user-groups.txt 2013-10-02 13:25:36 +0000
+++ doc/examples/cloud-config-user-groups.txt 2014-07-18 13:33:31 +0000
@@ -69,7 +69,7 @@
 # no-user-group: When set to true, do not create a group named after the user.
 # no-log-init: When set to true, do not initialize lastlog and faillog database.
 # ssh-import-id: Optional. Import SSH ids
-# ssh-authorized-key: Optional. Add key to user's ssh authorized keys file
+# ssh-authorized-keys: Optional. [list] Add keys to user's authorized keys file
 # sudo: Defaults to none. Set to the sudo string you want to use, i.e.
 # ALL=(ALL) NOPASSWD:ALL. To add multiple rules, use the following
 # format.
 
=== modified file 'doc/sources/cloudsigma/README.rst'
--- doc/sources/cloudsigma/README.rst 2014-02-13 15:39:39 +0000
+++ doc/sources/cloudsigma/README.rst 2014-07-18 13:33:31 +0000
@@ -23,6 +23,10 @@
 header could be omitted. However since this is a raw-text field you could provide any of the valid
 `config formats`_.
 
+You have the option to encode your user-data using Base64. In order to do that you have to add the
+``cloudinit-user-data`` field to the ``base64_fields``. The latter is a comma-separated field with
+all the meta fields with base64 encoded values.
+
 If your user-data does not need an internet connection you can create a
 `meta field`_ in the `server context`_ ``cloudinit-dsmode`` and set "local" as value.
 If this field does not exist the default value is "net".
 
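The base64_fields convention added to the README above can be sketched in a few lines. This is an illustrative snippet, not CloudSigma or cloud-init API: the meta dict and variable names are assumptions; only the field names ``cloudinit-user-data`` and ``base64_fields`` come from the document.

```python
import base64

# Illustrative server-context meta section: the user-data value is
# base64-encoded and its field name is listed in 'base64_fields'.
user_data = "#cloud-config\nhostname: example\n"
meta = {
    "cloudinit-user-data": base64.b64encode(user_data.encode()).decode(),
    "base64_fields": "cloudinit-user-data",
}

# A consumer decodes only the fields named in the comma-separated list.
encoded = set(meta.get("base64_fields", "").split(","))
decoded = {
    k: (base64.b64decode(v).decode() if k in encoded else v)
    for k, v in meta.items()
}

assert decoded["cloudinit-user-data"] == user_data
```

This mirrors what the DataSourceCloudSigma change in this diff does with ``server_meta.get('base64_fields', '').split(',')``.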
=== added file 'doc/status.txt'
--- doc/status.txt 1970-01-01 00:00:00 +0000
+++ doc/status.txt 2014-07-18 13:33:31 +0000
@@ -0,0 +1,53 @@
+cloud-init will keep a 'status' file up to date for other applications
+wishing to use it to determine cloud-init status.
+
+It will manage 2 files:
+  status.json
+  result.json
+
+The files will be written to /var/lib/cloud/data/ .
+A symlink will be created in /run/cloud-init. The link from /run is to ensure
+that if the file exists, it is not stale for this boot.
+
+status.json's format is:
+ {
+  'v1': {
+   'init': {
+    errors: []  # list of strings for each error that occurred
+    start: float  # time.time() that this stage started or None
+    end: float  # time.time() that this stage finished or None
+   },
+   'init-local': {
+    'errors': [], 'start': <float>, 'end' <float>  # (same as 'init' above)
+   },
+   'modules-config': {
+    'errors': [], 'start': <float>, 'end' <float>  # (same as 'init' above)
+   },
+   'modules-final': {
+    'errors': [], 'start': <float>, 'end' <float>  # (same as 'init' above)
+   },
+   'datasource': string describing datasource found or None
+   'stage': string representing stage that is currently running
+            ('init', 'init-local', 'modules-final', 'modules-config', None)
+            if None, then no stage is running. Reader must read the start/end
+            of each of the above stages to determine the state.
+  }
+
+result.json's format is:
+ {
+  'v1': {
+   'datasource': string describing the datasource found
+   'errors': []  # list of errors reported
+  }
+ }
+
+Thus, to determine if cloud-init is finished:
+  fin = "/run/cloud-init/result.json"
+  if os.path.exists(fin):
+    ret = json.load(open(fin, "r"))
+    if len(ret['v1']['errors']):
+      print "Finished with errors:" + "\n".join(ret['v1']['errors'])
+    else:
+      print "Finished no errors"
+  else:
+    print "Not Finished"
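The result.json check sketched at the end of doc/status.txt above is Python 2. A Python 3 equivalent, as a hedged sketch: the helper name and default path argument are illustrative, only the file layout (``v1`` with ``datasource`` and ``errors``) comes from the document.

```python
import json
import os


def cloud_init_result(result_file="/run/cloud-init/result.json"):
    """Return (finished, errors) per the result.json layout in doc/status.txt.

    (False, []) means result.json does not exist yet, i.e. cloud-init has
    not finished; otherwise finished is True and errors holds the reported
    error strings (empty on success).
    """
    if not os.path.exists(result_file):
        return (False, [])
    with open(result_file) as fp:
        ret = json.load(fp)
    return (True, ret["v1"]["errors"])
```

Usage: ``finished, errors = cloud_init_result()``; a caller treats a non-empty errors list as "finished with errors".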
=== modified file 'tests/unittests/helpers.py'
--- tests/unittests/helpers.py 2014-02-08 00:40:51 +0000
+++ tests/unittests/helpers.py 2014-07-18 13:33:31 +0000
@@ -52,6 +52,30 @@
             standardMsg = standardMsg % (value)
             self.fail(self._formatMessage(msg, standardMsg))
 
+        def assertDictContainsSubset(self, expected, actual, msg=None):
+            missing = []
+            mismatched = []
+            for k, v in expected.iteritems():
+                if k not in actual:
+                    missing.append(k)
+                elif actual[k] != v:
+                    mismatched.append('%r, expected: %r, actual: %r'
+                                      % (k, v, actual[k]))
+
+            if len(missing) == 0 and len(mismatched) == 0:
+                return
+
+            standardMsg = ''
+            if missing:
+                standardMsg = 'Missing: %r' % ','.join(m for m in missing)
+            if mismatched:
+                if standardMsg:
+                    standardMsg += '; '
+                standardMsg += 'Mismatched values: %s' % ','.join(mismatched)
+
+            self.fail(self._formatMessage(msg, standardMsg))
+
+
 else:
     class TestCase(unittest.TestCase):
         pass
 
=== modified file 'tests/unittests/test__init__.py'
--- tests/unittests/test__init__.py 2014-01-25 03:31:28 +0000
+++ tests/unittests/test__init__.py 2014-07-18 13:33:31 +0000
@@ -1,14 +1,10 @@
-import logging
 import os
-import StringIO
-import sys
 
-from mocker import MockerTestCase, ANY, ARGS, KWARGS
+from mocker import MockerTestCase, ARGS, KWARGS
 
 from cloudinit import handlers
 from cloudinit import helpers
 from cloudinit import importer
-from cloudinit import log
 from cloudinit import settings
 from cloudinit import url_helper
 from cloudinit import util
 
=== modified file 'tests/unittests/test_datasource/test_cloudsigma.py'
--- tests/unittests/test_datasource/test_cloudsigma.py 2014-02-12 10:14:49 +0000
+++ tests/unittests/test_datasource/test_cloudsigma.py 2014-07-18 13:33:31 +0000
@@ -1,9 +1,11 @@
 # coding: utf-8
-from unittest import TestCase
+import copy
 
 from cloudinit.cs_utils import Cepko
 from cloudinit.sources import DataSourceCloudSigma
 
+from tests.unittests import helpers as test_helpers
+
 
 SERVER_CONTEXT = {
     "cpu": 1000,
@@ -19,21 +21,27 @@
     "smp": 1,
     "tags": ["much server", "very performance"],
     "uuid": "65b2fb23-8c03-4187-a3ba-8b7c919e8890",
-    "vnc_password": "9e84d6cb49e46379"
+    "vnc_password": "9e84d6cb49e46379",
+    "vendor_data": {
+        "location": "zrh",
+        "cloudinit": "#cloud-config\n\n...",
+    }
 }
 
 
 class CepkoMock(Cepko):
-    result = SERVER_CONTEXT
+    def __init__(self, mocked_context):
+        self.result = mocked_context
 
     def all(self):
         return self
 
 
-class DataSourceCloudSigmaTest(TestCase):
+class DataSourceCloudSigmaTest(test_helpers.TestCase):
     def setUp(self):
         self.datasource = DataSourceCloudSigma.DataSourceCloudSigma("", "", "")
-        self.datasource.cepko = CepkoMock()
+        self.datasource.is_running_in_cloudsigma = lambda: True
+        self.datasource.cepko = CepkoMock(SERVER_CONTEXT)
         self.datasource.get_data()
 
     def test_get_hostname(self):
@@ -57,3 +65,34 @@
     def test_user_data(self):
         self.assertEqual(self.datasource.userdata_raw,
                          SERVER_CONTEXT['meta']['cloudinit-user-data'])
+
+    def test_encoded_user_data(self):
+        encoded_context = copy.deepcopy(SERVER_CONTEXT)
+        encoded_context['meta']['base64_fields'] = 'cloudinit-user-data'
+        encoded_context['meta']['cloudinit-user-data'] = 'aGkgd29ybGQK'
+        self.datasource.cepko = CepkoMock(encoded_context)
+        self.datasource.get_data()
+
+        self.assertEqual(self.datasource.userdata_raw, b'hi world\n')
+
+    def test_vendor_data(self):
+        self.assertEqual(self.datasource.vendordata_raw,
+                         SERVER_CONTEXT['vendor_data']['cloudinit'])
+
+    def test_lack_of_vendor_data(self):
+        stripped_context = copy.deepcopy(SERVER_CONTEXT)
+        del stripped_context["vendor_data"]
+        self.datasource = DataSourceCloudSigma.DataSourceCloudSigma("", "", "")
+        self.datasource.cepko = CepkoMock(stripped_context)
+        self.datasource.get_data()
+
+        self.assertIsNone(self.datasource.vendordata_raw)
+
+    def test_lack_of_cloudinit_key_in_vendor_data(self):
+        stripped_context = copy.deepcopy(SERVER_CONTEXT)
+        del stripped_context["vendor_data"]["cloudinit"]
+        self.datasource = DataSourceCloudSigma.DataSourceCloudSigma("", "", "")
+        self.datasource.cepko = CepkoMock(stripped_context)
+        self.datasource.get_data()
+
+        self.assertIsNone(self.datasource.vendordata_raw)
=== modified file 'tests/unittests/test_datasource/test_gce.py'
--- tests/unittests/test_datasource/test_gce.py 2014-02-13 22:03:12 +0000
+++ tests/unittests/test_datasource/test_gce.py 2014-07-18 13:33:31 +0000
@@ -15,7 +15,6 @@
 # You should have received a copy of the GNU General Public License
 # along with this program. If not, see <http://www.gnu.org/licenses/>.
 
-import unittest
 import httpretty
 import re
 
@@ -25,6 +24,8 @@
 from cloudinit import helpers
 from cloudinit.sources import DataSourceGCE
 
+from tests.unittests import helpers as test_helpers
+
 GCE_META = {
     'instance/id': '123',
     'instance/zone': 'foo/bar',
@@ -54,7 +55,7 @@
         return (404, headers, '')
 
 
-class TestDataSourceGCE(unittest.TestCase):
+class TestDataSourceGCE(test_helpers.TestCase):
 
     def setUp(self):
         self.ds = DataSourceGCE.DataSourceGCE(
 
=== modified file 'tests/unittests/test_datasource/test_maas.py'
--- tests/unittests/test_datasource/test_maas.py 2014-01-25 03:31:28 +0000
+++ tests/unittests/test_datasource/test_maas.py 2014-07-18 13:33:31 +0000
@@ -3,7 +3,6 @@
 
 from cloudinit.sources import DataSourceMAAS
 from cloudinit import url_helper
-from cloudinit import util
 from tests.unittests.helpers import populate_dir
 
 import mocker
 
=== modified file 'tests/unittests/test_datasource/test_opennebula.py'
--- tests/unittests/test_datasource/test_opennebula.py 2014-01-17 16:09:15 +0000
+++ tests/unittests/test_datasource/test_opennebula.py 2014-07-18 13:33:31 +0000
@@ -4,6 +4,7 @@
 from mocker import MockerTestCase
 from tests.unittests.helpers import populate_dir
 
+from base64 import b64encode
 import os
 import pwd
 
@@ -164,10 +165,31 @@
 
             public_keys.append(SSH_KEY % (c + 1,))
 
-    def test_user_data(self):
+    def test_user_data_plain(self):
         for k in ('USER_DATA', 'USERDATA'):
             my_d = os.path.join(self.tmp, k)
-            populate_context_dir(my_d, {k: USER_DATA})
+            populate_context_dir(my_d, {k: USER_DATA,
+                                        'USERDATA_ENCODING': ''})
+            results = ds.read_context_disk_dir(my_d)
+
+            self.assertTrue('userdata' in results)
+            self.assertEqual(USER_DATA, results['userdata'])
+
+    def test_user_data_encoding_required_for_decode(self):
+        b64userdata = b64encode(USER_DATA)
+        for k in ('USER_DATA', 'USERDATA'):
+            my_d = os.path.join(self.tmp, k)
+            populate_context_dir(my_d, {k: b64userdata})
+            results = ds.read_context_disk_dir(my_d)
+
+            self.assertTrue('userdata' in results)
+            self.assertEqual(b64userdata, results['userdata'])
+
+    def test_user_data_base64_encoding(self):
+        for k in ('USER_DATA', 'USERDATA'):
+            my_d = os.path.join(self.tmp, k)
+            populate_context_dir(my_d, {k: b64encode(USER_DATA),
+                                        'USERDATA_ENCODING': 'base64'})
             results = ds.read_context_disk_dir(my_d)
 
             self.assertTrue('userdata' in results)
 
=== modified file 'tests/unittests/test_datasource/test_smartos.py'
--- tests/unittests/test_datasource/test_smartos.py 2014-02-26 19:21:40 +0000
+++ tests/unittests/test_datasource/test_smartos.py 2014-07-18 13:33:31 +0000
@@ -24,10 +24,7 @@
 
 import base64
 from cloudinit import helpers as c_helpers
-from cloudinit import stages
-from cloudinit import util
 from cloudinit.sources import DataSourceSmartOS
-from cloudinit.settings import (PER_INSTANCE)
 from tests.unittests import helpers
 import os
 import os.path
@@ -174,6 +171,7 @@
         self.apply_patches([(mod, 'get_serial', _get_serial)])
         self.apply_patches([(mod, 'dmi_data', _dmi_data)])
         self.apply_patches([(os, 'uname', _os_uname)])
+        self.apply_patches([(mod, 'device_exists', lambda d: True)])
         dsrc = mod.DataSourceSmartOS(sys_cfg, distro=None,
                                      paths=self.paths)
         return dsrc
 
=== modified file 'tests/unittests/test_handler/test_handler_seed_random.py'
--- tests/unittests/test_handler/test_handler_seed_random.py 2013-10-02 13:28:42 +0000
+++ tests/unittests/test_handler/test_handler_seed_random.py 2014-07-18 13:33:31 +0000
@@ -42,10 +42,32 @@
     def setUp(self):
         super(TestRandomSeed, self).setUp()
         self._seed_file = tempfile.mktemp()
+        self.unapply = []
+
+        # by default 'which' has nothing in its path
+        self.apply_patches([(util, 'which', self._which)])
+        self.apply_patches([(util, 'subp', self._subp)])
+        self.subp_called = []
+        self.whichdata = {}
 
     def tearDown(self):
+        apply_patches([i for i in reversed(self.unapply)])
         util.del_file(self._seed_file)
 
+    def apply_patches(self, patches):
+        ret = apply_patches(patches)
+        self.unapply += ret
+
+    def _which(self, program):
+        return self.whichdata.get(program)
+
+    def _subp(self, *args, **kwargs):
+        # supports subp calling with cmd as args or kwargs
+        if 'args' not in kwargs:
+            kwargs['args'] = args[0]
+        self.subp_called.append(kwargs)
+        return
+
     def _compress(self, text):
         contents = StringIO()
         gz_fh = gzip.GzipFile(mode='wb', fileobj=contents)
@@ -148,3 +170,56 @@
         cc_seed_random.handle('test', cfg, c, LOG, [])
         contents = util.load_file(self._seed_file)
         self.assertEquals('tiny-tim-was-here-so-was-josh', contents)
+
+    def test_seed_command_not_provided_pollinate_available(self):
+        c = self._get_cloud('ubuntu', {})
+        self.whichdata = {'pollinate': '/usr/bin/pollinate'}
+        cc_seed_random.handle('test', {}, c, LOG, [])
+
+        subp_args = [f['args'] for f in self.subp_called]
+        self.assertIn(['pollinate', '-q'], subp_args)
+
+    def test_seed_command_not_provided_pollinate_not_available(self):
+        c = self._get_cloud('ubuntu', {})
+        self.whichdata = {}
+        cc_seed_random.handle('test', {}, c, LOG, [])
+
+        # subp should not have been called as which would say not available
+        self.assertEquals(self.subp_called, list())
+
+    def test_unavailable_seed_command_and_required_raises_error(self):
+        c = self._get_cloud('ubuntu', {})
+        self.whichdata = {}
+        self.assertRaises(ValueError, cc_seed_random.handle,
+                          'test', {'random_seed': {'command_required': True}},
+                          c, LOG, [])
+
+    def test_seed_command_and_required(self):
+        c = self._get_cloud('ubuntu', {})
+        self.whichdata = {'foo': 'foo'}
+        cfg = {'random_seed': {'command_required': True, 'command': ['foo']}}
+        cc_seed_random.handle('test', cfg, c, LOG, [])
+
+        self.assertIn(['foo'], [f['args'] for f in self.subp_called])
+
+    def test_file_in_environment_for_command(self):
+        c = self._get_cloud('ubuntu', {})
+        self.whichdata = {'foo': 'foo'}
+        cfg = {'random_seed': {'command_required': True, 'command': ['foo'],
+                               'file': self._seed_file}}
+        cc_seed_random.handle('test', cfg, c, LOG, [])
+
+        # this just insists that the first time subp was called,
+        # RANDOM_SEED_FILE was in the environment set up correctly
+        subp_env = [f['env'] for f in self.subp_called]
+        self.assertEqual(subp_env[0].get('RANDOM_SEED_FILE'), self._seed_file)
+
+
+def apply_patches(patches):
+    ret = []
+    for (ref, name, replace) in patches:
+        if replace is None:
+            continue
+        orig = getattr(ref, name)
+        setattr(ref, name, replace)
+        ret.append((ref, name, orig))
+    return ret
=== modified file 'tests/unittests/test_handler/test_handler_yum_add_repo.py'
--- tests/unittests/test_handler/test_handler_yum_add_repo.py 2014-02-06 15:59:04 +0000
+++ tests/unittests/test_handler/test_handler_yum_add_repo.py 2014-07-18 13:33:31 +0000
@@ -1,4 +1,3 @@
-from cloudinit import helpers
 from cloudinit import util
 
 from cloudinit.config import cc_yum_add_repo
 
=== modified file 'tests/unittests/test_templating.py'
--- tests/unittests/test_templating.py 2014-07-16 18:31:31 +0000
+++ tests/unittests/test_templating.py 2014-07-18 13:33:31 +0000
@@ -17,26 +17,51 @@
 # along with this program. If not, see <http://www.gnu.org/licenses/>.
 
 from tests.unittests import helpers as test_helpers
+import textwrap
 
 from cloudinit import templater
 
 
 class TestTemplates(test_helpers.TestCase):
     def test_render_basic(self):
-        in_data = """
-${b}
+        in_data = textwrap.dedent("""
+            ${b}
 
-c = d
-"""
+            c = d
+            """)
         in_data = in_data.strip()
-        expected_data = """
-2
+        expected_data = textwrap.dedent("""
+            2
 
-c = d
-"""
+            c = d
+            """)
         out_data = templater.basic_render(in_data, {'b': 2})
         self.assertEqual(expected_data.strip(), out_data)
 
+    def test_render_basic_no_parens(self):
+        hn = "myfoohost"
+        in_data = "h=$hostname\nc=d\n"
+        expected_data = "h=%s\nc=d\n" % hn
+        out_data = templater.basic_render(in_data, {'hostname': hn})
+        self.assertEqual(expected_data, out_data)
+
+    def test_render_basic_parens(self):
+        hn = "myfoohost"
+        in_data = "h = ${hostname}\nc=d\n"
+        expected_data = "h = %s\nc=d\n" % hn
+        out_data = templater.basic_render(in_data, {'hostname': hn})
+        self.assertEqual(expected_data, out_data)
+
+    def test_render_basic2(self):
+        mirror = "mymirror"
+        codename = "zany"
+        in_data = "deb $mirror $codename-updates main contrib non-free"
+        ex_data = "deb %s %s-updates main contrib non-free" % (mirror, codename)
+
+        out_data = templater.basic_render(in_data,
+                                          {'mirror': mirror,
+                                           'codename': codename})
+        self.assertEqual(ex_data, out_data)
+
     def test_detection(self):
         blob = "## template:cheetah"
 
@@ -53,14 +78,12 @@
         self.assertRaises(ValueError, templater.detect_template, blob)
 
     def test_render_cheetah(self):
-        blob = '''## template:cheetah
-$a,$b'''
+        blob = '\n'.join(['## template:cheetah', '$a,$b'])
         c = templater.render_string(blob, {"a": 1, "b": 2})
         self.assertEquals("1,2", c)
 
     def test_render_jinja(self):
-        blob = '''## template:jinja
-{{a}},{{b}}'''
+        blob = '\n'.join(['## template:jinja', '{{a}},{{b}}'])
         c = templater.render_string(blob, {"a": 1, "b": 2})
         self.assertEquals("1,2", c)
 
