Merge lp:~smoser/cloud-init/changeable-templates into lp:~harlowja/cloud-init/changeable-templates
Proposed by: Scott Moser
Status: Merged
Merge reported by: Scott Moser
Merged at revision: not available
Proposed branch: lp:~smoser/cloud-init/changeable-templates
Merge into: lp:~harlowja/cloud-init/changeable-templates
Diff against target: 1407 lines (+665/-121), 30 files modified:
  ChangeLog (+17/-0)
  TODO.rst (+38/-41)
  bin/cloud-init (+124/-14)
  cloudinit/config/cc_final_message.py (+1/-0)
  cloudinit/config/cc_power_state_change.py (+0/-1)
  cloudinit/config/cc_seed_random.py (+41/-9)
  cloudinit/cs_utils.py (+7/-1)
  cloudinit/importer.py (+0/-4)
  cloudinit/mergers/__init__.py (+0/-5)
  cloudinit/sources/DataSourceAzure.py (+102/-4)
  cloudinit/sources/DataSourceCloudSigma.py (+37/-0)
  cloudinit/sources/DataSourceNoCloud.py (+1/-1)
  cloudinit/sources/DataSourceOpenNebula.py (+13/-0)
  cloudinit/sources/DataSourceSmartOS.py (+8/-2)
  cloudinit/stages.py (+5/-3)
  cloudinit/util.py (+3/-1)
  cloudinit/version.py (+1/-1)
  doc/examples/cloud-config-user-groups.txt (+1/-1)
  doc/sources/cloudsigma/README.rst (+4/-0)
  doc/status.txt (+53/-0)
  tests/unittests/helpers.py (+24/-0)
  tests/unittests/test__init__.py (+1/-5)
  tests/unittests/test_datasource/test_cloudsigma.py (+44/-5)
  tests/unittests/test_datasource/test_gce.py (+3/-2)
  tests/unittests/test_datasource/test_maas.py (+0/-1)
  tests/unittests/test_datasource/test_opennebula.py (+26/-4)
  tests/unittests/test_datasource/test_smartos.py (+1/-3)
  tests/unittests/test_handler/test_handler_seed_random.py (+75/-0)
  tests/unittests/test_handler/test_handler_yum_add_repo.py (+0/-1)
  tests/unittests/test_templating.py (+35/-12)
To merge this branch: bzr merge lp:~smoser/cloud-init/changeable-templates
Related bugs: (none listed)
Reviewer: Joshua Harlow (status: Pending)
Review via email: mp+227323@code.launchpad.net
Description of the change
A couple of things here:
a.) merge with trunk (you can 'bzr merge lp:cloud-init' and get the same).
b.) use textwrap.dedent.
c.) add some tests, based on actually-shipped templates, that will need to pass for the basic renderer.
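For context on item b, `textwrap.dedent` lets a test embed a template as an indented triple-quoted string and strip the common leading whitespace before rendering. A minimal sketch (the `{{hostname}}` placeholder is just illustrative template syntax, not taken from the branch):

```python
import textwrap

# An indented triple-quoted string, as it would appear inside a test method.
raw = """\
    ## template: basic
    hostname: {{hostname}}
    """

# dedent() removes the whitespace prefix common to all lines, so the
# template text starts at column zero when handed to the renderer.
template = textwrap.dedent(raw)
print(template)
```

This keeps fixtures readable at their natural indentation inside the test class.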
Preview Diff
1 | === modified file 'ChangeLog' |
2 | --- ChangeLog 2014-02-27 15:51:22 +0000 |
3 | +++ ChangeLog 2014-07-18 13:33:31 +0000 |
4 | @@ -1,3 +1,12 @@ |
5 | +0.7.6: |
6 | + - open 0.7.6 |
7 | + - Enable vendordata on CloudSigma datasource (LP: #1303986) |
8 | + - Poll on /dev/ttyS1 in CloudSigma datasource only if dmidecode says |
9 | + we're running on cloudsigma (LP: #1316475) [Kiril Vladimiroff] |
10 | + - SmartOS test: do not require existance of /dev/ttyS1. [LP: #1316597] |
11 | + - doc: fix user-groups doc to reference plural ssh-authorized-keys |
12 | + (LP: #1327065) [Joern Heissler] |
13 | + - fix 'make test' in python 2.6 |
14 | 0.7.5: |
15 | - open 0.7.5 |
16 | - Add a debug log message around import failures |
17 | @@ -33,6 +42,14 @@ |
18 | rather than relying on EC2 data in openstack metadata service. |
19 | - SmartOS, AltCloud: disable running on arm systems due to bug |
20 | (LP: #1243287, #1285686) [Oleg Strikov] |
21 | + - Allow running a command to seed random, default is 'pollinate -q' |
22 | + (LP: #1286316) [Dustin Kirkland] |
23 | + - Write status to /run/cloud-init/status.json for consumption by |
24 | + other programs (LP: #1284439) |
25 | + - Azure: if a reboot causes ephemeral storage to be re-provisioned |
26 | + Then we need to re-format it. (LP: #1292648) |
27 | + - OpenNebula: support base64 encoded user-data |
28 | + [Enol Fernandez, Peter Kotcauer] |
29 | 0.7.4: |
30 | - fix issue mounting 'ephemeral0' if ephemeral0 was an alias for a |
31 | partitioned block device with target filesystem on ephemeral0.1. |
32 | |
33 | === renamed file 'TODO' => 'TODO.rst' |
34 | --- TODO 2012-07-10 03:32:50 +0000 |
35 | +++ TODO.rst 2014-07-18 13:33:31 +0000 |
36 | @@ -1,46 +1,43 @@ |
37 | -- Consider a 'failsafe' DataSource |
38 | - If all others fail, setting a default that |
39 | - - sets the user password, writing it to console |
40 | - - logs to console that this happened |
41 | -- Consider a 'previous' DataSource |
42 | - If no other data source is found, fall back to the 'previous' one |
43 | - keep a indication of what instance id that is in /var/lib/cloud |
44 | -- Rewrite "cloud-init-query" (currently not implemented) |
45 | - Possibly have DataSource and cloudinit expose explicit fields |
46 | - - instance-id |
47 | - - hostname |
48 | - - mirror |
49 | - - release |
50 | - - ssh public keys |
51 | +============================================== |
52 | +Things that cloud-init may do (better) someday |
53 | +============================================== |
54 | + |
55 | +- Consider making ``failsafe`` ``DataSource`` |
56 | + - sets the user password, writing it to console |
57 | + |
58 | +- Consider a ``previous`` ``DataSource``, if no other data source is |
59 | + found, fall back to the ``previous`` one that worked. |
60 | +- Rewrite ``cloud-init-query`` (currently not implemented) |
61 | +- Possibly have a ``DataSource`` expose explicit fields: |
62 | + |
63 | + - instance-id |
64 | + - hostname |
65 | + - mirror |
66 | + - release |
67 | + - ssh public keys |
68 | + |
69 | - Remove the conversion of the ubuntu network interface format conversion |
70 | to a RH/fedora format and replace it with a top level format that uses |
71 | the netcf libraries format instead (which itself knows how to translate |
72 | - into the specific formats) |
73 | -- Replace the 'apt*' modules with variants that now use the distro classes |
74 | - to perform distro independent packaging commands (where possible) |
75 | -- Canonicalize the semaphore/lock name for modules and user data handlers |
76 | - a. It is most likely a bug that currently exists that if a module in config |
77 | - alters its name and it has already ran, then it will get ran again since |
78 | - the lock name hasn't be canonicalized |
79 | + into the specific formats). See for example `netcf`_ which seems to be |
80 | + an active project that has this capability. |
81 | +- Replace the ``apt*`` modules with variants that now use the distro classes |
82 | + to perform distro independent packaging commands (wherever possible). |
83 | - Replace some the LOG.debug calls with a LOG.info where appropriate instead |
84 | - of how right now there is really only 2 levels (WARN and DEBUG) |
85 | -- Remove the 'cc_' for config modules, either have them fully specified (ie |
86 | - 'cloudinit.config.resizefs') or by default only look in the 'cloudinit.config' |
87 | - for these modules (or have a combination of the above), this avoids having |
88 | - to understand where your modules are coming from (which can be altered by |
89 | - the current python inclusion path) |
90 | -- Depending on if people think the wrapper around 'os.path.join' provided |
91 | - by the 'paths' object is useful (allowing us to modify based off a 'read' |
92 | - and 'write' configuration based 'root') or is just to confusing, it might be |
93 | - something to remove later, and just recommend using 'chroot' instead (or the X |
94 | - different other options which are similar to 'chroot'), which is might be more |
95 | - natural and less confusing... |
96 | -- Instead of just warning when a module is being ran on a 'unknown' distribution |
97 | - perhaps we should not run that module in that case? Or we might want to start |
98 | - reworking those modules so they will run on all distributions? Or if that is |
99 | - not the case, then maybe we want to allow fully specified python paths for |
100 | - modules and start encouraging packages of 'ubuntu' modules, packages of 'rhel' |
101 | - specific modules that people can add instead of having them all under the |
102 | - cloud-init 'root' tree? This might encourage more development of other modules |
103 | - instead of having to go edit the cloud-init code to accomplish this. |
104 | + of how right now there is really only 2 levels (``WARN`` and ``DEBUG``) |
105 | +- Remove the ``cc_`` prefix for config modules, either have them fully |
106 | + specified (ie ``cloudinit.config.resizefs``) or by default only look in |
107 | + the ``cloudinit.config`` namespace for these modules (or have a combination |
108 | + of the above), this avoids having to understand where your modules are |
109 | + coming from (which can be altered by the current python inclusion path) |
110 | +- Instead of just warning when a module is being ran on a ``unknown`` |
111 | + distribution perhaps we should not run that module in that case? Or we might |
112 | + want to start reworking those modules so they will run on all |
113 | + distributions? Or if that is not the case, then maybe we want to allow |
114 | + fully specified python paths for modules and start encouraging |
115 | + packages of ``ubuntu`` modules, packages of ``rhel`` specific modules that |
116 | + people can add instead of having them all under the cloud-init ``root`` |
117 | + tree? This might encourage more development of other modules instead of |
118 | + having to go edit the cloud-init code to accomplish this. |
119 | |
120 | +.. _netcf: https://fedorahosted.org/netcf/ |
121 | |
122 | === modified file 'bin/cloud-init' |
123 | --- bin/cloud-init 2014-01-09 00:16:24 +0000 |
124 | +++ bin/cloud-init 2014-07-18 13:33:31 +0000 |
125 | @@ -22,8 +22,11 @@ |
126 | # along with this program. If not, see <http://www.gnu.org/licenses/>. |
127 | |
128 | import argparse |
129 | +import json |
130 | import os |
131 | import sys |
132 | +import time |
133 | +import tempfile |
134 | import traceback |
135 | |
136 | # This is more just for running from the bin folder so that |
137 | @@ -126,11 +129,11 @@ |
138 | " under section '%s'") % (action_name, full_section_name) |
139 | sys.stderr.write("%s\n" % (msg)) |
140 | LOG.debug(msg) |
141 | - return 0 |
142 | + return [] |
143 | else: |
144 | LOG.debug("Ran %s modules with %s failures", |
145 | len(which_ran), len(failures)) |
146 | - return len(failures) |
147 | + return failures |
148 | |
149 | |
150 | def main_init(name, args): |
151 | @@ -220,7 +223,10 @@ |
152 | if existing_files: |
153 | LOG.debug("Exiting early due to the existence of %s files", |
154 | existing_files) |
155 | - return 0 |
156 | + return (None, []) |
157 | + else: |
158 | + LOG.debug("Execution continuing, no previous run detected that" |
159 | + " would allow us to stop early.") |
160 | else: |
161 | # The cache is not instance specific, so it has to be purged |
162 | # but we want 'start' to benefit from a cache if |
163 | @@ -249,9 +255,9 @@ |
164 | " Likely bad things to come!")) |
165 | if not args.force: |
166 | if args.local: |
167 | - return 0 |
168 | + return (None, []) |
169 | else: |
170 | - return 1 |
171 | + return (None, ["No instance datasource found."]) |
172 | # Stage 6 |
173 | iid = init.instancify() |
174 | LOG.debug("%s will now be targeting instance id: %s", name, iid) |
175 | @@ -274,7 +280,7 @@ |
176 | init.consume_data(PER_ALWAYS) |
177 | except Exception: |
178 | util.logexc(LOG, "Consuming user data failed!") |
179 | - return 1 |
180 | + return (init.datasource, ["Consuming user data failed!"]) |
181 | |
182 | # Stage 8 - re-read and apply relevant cloud-config to include user-data |
183 | mods = stages.Modules(init, extract_fns(args)) |
184 | @@ -291,7 +297,7 @@ |
185 | logging.setupLogging(mods.cfg) |
186 | |
187 | # Stage 10 |
188 | - return run_module_section(mods, name, name) |
189 | + return (init.datasource, run_module_section(mods, name, name)) |
190 | |
191 | |
192 | def main_modules(action_name, args): |
193 | @@ -315,14 +321,12 @@ |
194 | init.fetch() |
195 | except sources.DataSourceNotFoundException: |
196 | # There was no datasource found, theres nothing to do |
197 | - util.logexc(LOG, ('Can not apply stage %s, ' |
198 | - 'no datasource found!' |
199 | - " Likely bad things to come!"), name) |
200 | - print_exc(('Can not apply stage %s, ' |
201 | - 'no datasource found!' |
202 | - " Likely bad things to come!") % (name)) |
203 | + msg = ('Can not apply stage %s, no datasource found! Likely bad ' |
204 | + 'things to come!' % name) |
205 | + util.logexc(LOG, msg) |
206 | + print_exc(msg) |
207 | if not args.force: |
208 | - return 1 |
209 | + return [(msg)] |
210 | # Stage 3 |
211 | mods = stages.Modules(init, extract_fns(args)) |
212 | # Stage 4 |
213 | @@ -419,6 +423,110 @@ |
214 | return 0 |
215 | |
216 | |
217 | +def atomic_write_json(path, data): |
218 | + tf = None |
219 | + try: |
220 | + tf = tempfile.NamedTemporaryFile(dir=os.path.dirname(path), |
221 | + delete=False) |
222 | + tf.write(json.dumps(data, indent=1) + "\n") |
223 | + tf.close() |
224 | + os.rename(tf.name, path) |
225 | + except Exception as e: |
226 | + if tf is not None: |
227 | + util.del_file(tf.name) |
228 | + raise e |
229 | + |
230 | + |
231 | +def status_wrapper(name, args, data_d=None, link_d=None): |
232 | + if data_d is None: |
233 | + data_d = os.path.normpath("/var/lib/cloud/data") |
234 | + if link_d is None: |
235 | + link_d = os.path.normpath("/run/cloud-init") |
236 | + |
237 | + status_path = os.path.join(data_d, "status.json") |
238 | + status_link = os.path.join(link_d, "status.json") |
239 | + result_path = os.path.join(data_d, "result.json") |
240 | + result_link = os.path.join(link_d, "result.json") |
241 | + |
242 | + util.ensure_dirs((data_d, link_d,)) |
243 | + |
244 | + (_name, functor) = args.action |
245 | + |
246 | + if name == "init": |
247 | + if args.local: |
248 | + mode = "init-local" |
249 | + else: |
250 | + mode = "init" |
251 | + elif name == "modules": |
252 | + mode = "modules-%s" % args.mode |
253 | + else: |
254 | + raise ValueError("unknown name: %s" % name) |
255 | + |
256 | + modes = ('init', 'init-local', 'modules-config', 'modules-final') |
257 | + |
258 | + status = None |
259 | + if mode == 'init-local': |
260 | + for f in (status_link, result_link, status_path, result_path): |
261 | + util.del_file(f) |
262 | + else: |
263 | + try: |
264 | + status = json.loads(util.load_file(status_path)) |
265 | + except: |
266 | + pass |
267 | + |
268 | + if status is None: |
269 | + nullstatus = { |
270 | + 'errors': [], |
271 | + 'start': None, |
272 | + 'end': None, |
273 | + } |
274 | + status = {'v1': {}} |
275 | + for m in modes: |
276 | + status['v1'][m] = nullstatus.copy() |
277 | + status['v1']['datasource'] = None |
278 | + |
279 | + v1 = status['v1'] |
280 | + v1['stage'] = mode |
281 | + v1[mode]['start'] = time.time() |
282 | + |
283 | + atomic_write_json(status_path, status) |
284 | + util.sym_link(os.path.relpath(status_path, link_d), status_link, |
285 | + force=True) |
286 | + |
287 | + try: |
288 | + ret = functor(name, args) |
289 | + if mode in ('init', 'init-local'): |
290 | + (datasource, errors) = ret |
291 | + if datasource is not None: |
292 | + v1['datasource'] = str(datasource) |
293 | + else: |
294 | + errors = ret |
295 | + |
296 | + v1[mode]['errors'] = [str(e) for e in errors] |
297 | + |
298 | + except Exception as e: |
299 | + v1[mode]['errors'] = [str(e)] |
300 | + |
301 | + v1[mode]['finished'] = time.time() |
302 | + v1['stage'] = None |
303 | + |
304 | + atomic_write_json(status_path, status) |
305 | + |
306 | + if mode == "modules-final": |
307 | + # write the 'finished' file |
308 | + errors = [] |
309 | + for m in modes: |
310 | + if v1[m]['errors']: |
311 | + errors.extend(v1[m].get('errors', [])) |
312 | + |
313 | + atomic_write_json(result_path, |
314 | + {'v1': {'datasource': v1['datasource'], 'errors': errors}}) |
315 | + util.sym_link(os.path.relpath(result_path, link_d), result_link, |
316 | + force=True) |
317 | + |
318 | + return len(v1[mode]['errors']) |
319 | + |
320 | + |
321 | def main(): |
322 | parser = argparse.ArgumentParser() |
323 | |
324 | @@ -502,6 +610,8 @@ |
325 | signal_handler.attach_handlers() |
326 | |
327 | (name, functor) = args.action |
328 | + if name in ("modules", "init"): |
329 | + functor = status_wrapper |
330 | |
331 | return util.log_time(logfunc=LOG.debug, msg="cloud-init mode '%s'" % name, |
332 | get_uptime=True, func=functor, args=(name, args)) |
333 | |
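The `atomic_write_json` helper added to bin/cloud-init above is the standard write-to-temp-then-rename pattern, so consumers of status.json never observe a half-written file. A standalone sketch of the same idea (names mirror the diff, but this is an illustration, not the shipped code; `mode="w"` is added here for Python 3, whereas the branch targets Python 2):

```python
import json
import os
import tempfile

def atomic_write_json(path, data):
    """Write JSON via a temp file plus rename, so a reader sees either
    the old contents or the new contents, never a partial write."""
    tf = None
    try:
        # The temp file lives in the destination directory so the final
        # os.rename() stays on one filesystem and is therefore atomic.
        tf = tempfile.NamedTemporaryFile(dir=os.path.dirname(path),
                                         delete=False, mode="w")
        tf.write(json.dumps(data, indent=1) + "\n")
        tf.close()
        os.rename(tf.name, path)
    except Exception:
        if tf is not None:
            os.unlink(tf.name)
        raise

status_dir = tempfile.mkdtemp()
status_path = os.path.join(status_dir, "status.json")
atomic_write_json(status_path, {"v1": {"stage": "init", "errors": []}})
```

The same pattern backs both status.json and result.json in `status_wrapper`, with a relative symlink from /run/cloud-init pointing at the copy under /var/lib/cloud/data.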
334 | === modified file 'cloudinit/config/cc_final_message.py' |
335 | --- cloudinit/config/cc_final_message.py 2013-09-25 17:51:52 +0000 |
336 | +++ cloudinit/config/cc_final_message.py 2014-07-18 13:33:31 +0000 |
337 | @@ -53,6 +53,7 @@ |
338 | 'version': cver, |
339 | 'datasource': str(cloud.datasource), |
340 | } |
341 | + subs.update(dict([(k.upper(), v) for k, v in subs.items()])) |
342 | util.multi_log("%s\n" % (templater.render_string(msg_in, subs)), |
343 | console=False, stderr=True, log=log) |
344 | except Exception: |
345 | |
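The one-line cc_final_message change above duplicates each substitution key in upper case, so a final_message template can reference either form. In isolation:

```python
# Substitutions available to the final_message template, as in the diff.
subs = {'version': '0.7.6', 'datasource': 'DataSourceNone'}

# The added line: build upper-cased aliases from a fully-constructed list
# first, then merge them in, keeping the original lower-case keys too.
subs.update(dict([(k.upper(), v) for k, v in subs.items()]))
print(sorted(subs))  # -> ['DATASOURCE', 'VERSION', 'datasource', 'version']
```

Because the alias dict is built before `update()` runs, the mutation-while-iterating pitfall is avoided.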
346 | === modified file 'cloudinit/config/cc_power_state_change.py' |
347 | --- cloudinit/config/cc_power_state_change.py 2014-02-03 22:03:14 +0000 |
348 | +++ cloudinit/config/cc_power_state_change.py 2014-07-18 13:33:31 +0000 |
349 | @@ -22,7 +22,6 @@ |
350 | import errno |
351 | import os |
352 | import re |
353 | -import signal |
354 | import subprocess |
355 | import time |
356 | |
357 | |
358 | === modified file 'cloudinit/config/cc_seed_random.py' |
359 | --- cloudinit/config/cc_seed_random.py 2014-02-05 15:36:47 +0000 |
360 | +++ cloudinit/config/cc_seed_random.py 2014-07-18 13:33:31 +0000 |
361 | @@ -1,8 +1,11 @@ |
362 | # vi: ts=4 expandtab |
363 | # |
364 | # Copyright (C) 2013 Yahoo! Inc. |
365 | +# Copyright (C) 2014 Canonical, Ltd |
366 | # |
367 | # Author: Joshua Harlow <harlowja@yahoo-inc.com> |
368 | +# Author: Dustin Kirkland <kirkland@ubuntu.com> |
369 | +# Author: Scott Moser <scott.moser@canonical.com> |
370 | # |
371 | # This program is free software: you can redistribute it and/or modify |
372 | # it under the terms of the GNU General Public License version 3, as |
373 | @@ -17,12 +20,15 @@ |
374 | # along with this program. If not, see <http://www.gnu.org/licenses/>. |
375 | |
376 | import base64 |
377 | +import os |
378 | from StringIO import StringIO |
379 | |
380 | from cloudinit.settings import PER_INSTANCE |
381 | +from cloudinit import log as logging |
382 | from cloudinit import util |
383 | |
384 | frequency = PER_INSTANCE |
385 | +LOG = logging.getLogger(__name__) |
386 | |
387 | |
388 | def _decode(data, encoding=None): |
389 | @@ -38,24 +44,50 @@ |
390 | raise IOError("Unknown random_seed encoding: %s" % (encoding)) |
391 | |
392 | |
393 | +def handle_random_seed_command(command, required, env=None): |
394 | + if not command and required: |
395 | + raise ValueError("no command found but required=true") |
396 | + elif not command: |
397 | + LOG.debug("no command provided") |
398 | + return |
399 | + |
400 | + cmd = command[0] |
401 | + if not util.which(cmd): |
402 | + if required: |
403 | + raise ValueError("command '%s' not found but required=true", cmd) |
404 | + else: |
405 | + LOG.debug("command '%s' not found for seed_command", cmd) |
406 | + return |
407 | + util.subp(command, env=env, capture=False) |
408 | + |
409 | + |
410 | def handle(name, cfg, cloud, log, _args): |
411 | - if not cfg or "random_seed" not in cfg: |
412 | - log.debug(("Skipping module named %s, " |
413 | - "no 'random_seed' configuration found"), name) |
414 | - return |
415 | + mycfg = cfg.get('random_seed', {}) |
416 | + seed_path = mycfg.get('file', '/dev/urandom') |
417 | + seed_data = mycfg.get('data', '') |
418 | |
419 | - my_cfg = cfg['random_seed'] |
420 | - seed_path = my_cfg.get('file', '/dev/urandom') |
421 | seed_buf = StringIO() |
422 | - seed_buf.write(_decode(my_cfg.get('data', ''), |
423 | - encoding=my_cfg.get('encoding'))) |
424 | + if seed_data: |
425 | + seed_buf.write(_decode(seed_data, encoding=mycfg.get('encoding'))) |
426 | |
427 | + # 'random_seed' is set up by Azure datasource, and comes already in |
428 | + # openstack meta_data.json |
429 | metadata = cloud.datasource.metadata |
430 | if metadata and 'random_seed' in metadata: |
431 | seed_buf.write(metadata['random_seed']) |
432 | |
433 | seed_data = seed_buf.getvalue() |
434 | if len(seed_data): |
435 | - log.debug("%s: adding %s bytes of random seed entrophy to %s", name, |
436 | + log.debug("%s: adding %s bytes of random seed entropy to %s", name, |
437 | len(seed_data), seed_path) |
438 | util.append_file(seed_path, seed_data) |
439 | + |
440 | + command = mycfg.get('command', ['pollinate', '-q']) |
441 | + req = mycfg.get('command_required', False) |
442 | + try: |
443 | + env = os.environ.copy() |
444 | + env['RANDOM_SEED_FILE'] = seed_path |
445 | + handle_random_seed_command(command=command, required=req, env=env) |
446 | + except ValueError as e: |
447 | + log.warn("handling random command [%s] failed: %s", command, e) |
448 | + raise e |
449 | |
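The new seed-command path in cc_seed_random above runs an optional external seeder (default `pollinate -q`) with the seed file exported in the environment, and only errors when the config marks the command as required. A simplified sketch of that dispatch logic, using Python 3's `shutil.which` in place of cloud-init's `util.which` and `subprocess` in place of `util.subp`:

```python
import os
import shutil
import subprocess

def handle_random_seed_command(command, required, env=None):
    """Run a seed command if configured and present on PATH; raise only
    when the command is required but unusable."""
    if not command and required:
        raise ValueError("no command found but required=true")
    elif not command:
        return  # nothing configured, nothing required
    if not shutil.which(command[0]):
        if required:
            raise ValueError("command '%s' not found but required=true"
                             % command[0])
        return  # optional command missing: silently skip
    subprocess.check_call(command, env=env)

# The defaults from the diff: 'pollinate -q', not required, with the
# seed path exported for the child process to use.
env = os.environ.copy()
env['RANDOM_SEED_FILE'] = '/dev/urandom'
default_command = ['pollinate', '-q']
```

Note the shipped `handle` still re-raises a `ValueError` after logging it, so a required-but-missing command fails the module run.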
450 | === modified file 'cloudinit/cs_utils.py' |
451 | --- cloudinit/cs_utils.py 2014-02-12 10:14:49 +0000 |
452 | +++ cloudinit/cs_utils.py 2014-07-18 13:33:31 +0000 |
453 | @@ -35,6 +35,10 @@ |
454 | |
455 | import serial |
456 | |
457 | +# these high timeouts are necessary as read may read a lot of data. |
458 | +READ_TIMEOUT = 60 |
459 | +WRITE_TIMEOUT = 10 |
460 | + |
461 | SERIAL_PORT = '/dev/ttyS1' |
462 | if platform.system() == 'Windows': |
463 | SERIAL_PORT = 'COM2' |
464 | @@ -76,7 +80,9 @@ |
465 | self.result = self._marshal(self.raw_result) |
466 | |
467 | def _execute(self): |
468 | - connection = serial.Serial(SERIAL_PORT) |
469 | + connection = serial.Serial(port=SERIAL_PORT, |
470 | + timeout=READ_TIMEOUT, |
471 | + writeTimeout=WRITE_TIMEOUT) |
472 | connection.write(self.request) |
473 | return connection.readline().strip('\x04\n') |
474 | |
475 | |
476 | === modified file 'cloudinit/importer.py' |
477 | --- cloudinit/importer.py 2013-10-09 19:22:06 +0000 |
478 | +++ cloudinit/importer.py 2014-07-18 13:33:31 +0000 |
479 | @@ -45,8 +45,6 @@ |
480 | real_path.append(base_name) |
481 | full_path = '.'.join(real_path) |
482 | real_paths.append(full_path) |
483 | - LOG.debug("Looking for modules %s that have attributes %s", |
484 | - real_paths, required_attrs) |
485 | for full_path in real_paths: |
486 | mod = None |
487 | try: |
488 | @@ -62,6 +60,4 @@ |
489 | found_attrs += 1 |
490 | if found_attrs == len(required_attrs): |
491 | found_places.append(full_path) |
492 | - LOG.debug("Found %s with attributes %s in %s", base_name, |
493 | - required_attrs, found_places) |
494 | return found_places |
495 | |
496 | === modified file 'cloudinit/mergers/__init__.py' |
497 | --- cloudinit/mergers/__init__.py 2013-05-03 21:41:28 +0000 |
498 | +++ cloudinit/mergers/__init__.py 2014-07-18 13:33:31 +0000 |
499 | @@ -55,9 +55,6 @@ |
500 | if not meth: |
501 | meth = self._handle_unknown |
502 | args.insert(0, method_name) |
503 | - LOG.debug("Merging '%s' into '%s' using method '%s' of '%s'", |
504 | - type_name, type_utils.obj_name(merge_with), |
505 | - meth.__name__, self) |
506 | return meth(*args) |
507 | |
508 | |
509 | @@ -84,8 +81,6 @@ |
510 | # First one that has that method/attr gets to be |
511 | # the one that will be called |
512 | meth = getattr(merger, meth_wanted) |
513 | - LOG.debug(("Merging using located merger '%s'" |
514 | - " since it had method '%s'"), merger, meth_wanted) |
515 | break |
516 | if not meth: |
517 | return UnknownMerger._handle_unknown(self, meth_wanted, |
518 | |
519 | === modified file 'cloudinit/sources/DataSourceAzure.py' |
520 | --- cloudinit/sources/DataSourceAzure.py 2014-02-10 20:11:45 +0000 |
521 | +++ cloudinit/sources/DataSourceAzure.py 2014-07-18 13:33:31 +0000 |
522 | @@ -18,12 +18,14 @@ |
523 | |
524 | import base64 |
525 | import crypt |
526 | +import fnmatch |
527 | import os |
528 | import os.path |
529 | import time |
530 | from xml.dom import minidom |
531 | |
532 | from cloudinit import log as logging |
533 | +from cloudinit.settings import PER_ALWAYS |
534 | from cloudinit import sources |
535 | from cloudinit import util |
536 | |
537 | @@ -53,14 +55,15 @@ |
538 | 'disk_setup': { |
539 | 'ephemeral0': {'table_type': 'mbr', |
540 | 'layout': True, |
541 | - 'overwrite': False} |
542 | - }, |
543 | + 'overwrite': False}, |
544 | + }, |
545 | 'fs_setup': [{'filesystem': 'ext4', |
546 | 'device': 'ephemeral0.1', |
547 | - 'replace_fs': 'ntfs'}] |
548 | + 'replace_fs': 'ntfs'}], |
549 | } |
550 | |
551 | DS_CFG_PATH = ['datasource', DS_NAME] |
552 | +DEF_EPHEMERAL_LABEL = 'Temporary Storage' |
553 | |
554 | |
555 | class DataSourceAzureNet(sources.DataSource): |
556 | @@ -189,8 +192,17 @@ |
557 | LOG.warn("failed to get instance id in %s: %s", shcfgxml, e) |
558 | |
559 | pubkeys = pubkeys_from_crt_files(fp_files) |
560 | - |
561 | self.metadata['public-keys'] = pubkeys |
562 | + |
563 | + found_ephemeral = find_ephemeral_disk() |
564 | + if found_ephemeral: |
565 | + self.ds_cfg['disk_aliases']['ephemeral0'] = found_ephemeral |
566 | + LOG.debug("using detected ephemeral0 of %s", found_ephemeral) |
567 | + |
568 | + cc_modules_override = support_new_ephemeral(self.sys_cfg) |
569 | + if cc_modules_override: |
570 | + self.cfg['cloud_config_modules'] = cc_modules_override |
571 | + |
572 | return True |
573 | |
574 | def device_name_to_device(self, name): |
575 | @@ -200,6 +212,92 @@ |
576 | return self.cfg |
577 | |
578 | |
579 | +def count_files(mp): |
580 | + return len(fnmatch.filter(os.listdir(mp), '*[!cdrom]*')) |
581 | + |
582 | + |
583 | +def find_ephemeral_part(): |
584 | + """ |
585 | + Locate the default ephmeral0.1 device. This will be the first device |
586 | + that has a LABEL of DEF_EPHEMERAL_LABEL and is a NTFS device. If Azure |
587 | + gets more ephemeral devices, this logic will only identify the first |
588 | + such device. |
589 | + """ |
590 | + c_label_devs = util.find_devs_with("LABEL=%s" % DEF_EPHEMERAL_LABEL) |
591 | + c_fstype_devs = util.find_devs_with("TYPE=ntfs") |
592 | + for dev in c_label_devs: |
593 | + if dev in c_fstype_devs: |
594 | + return dev |
595 | + return None |
596 | + |
597 | + |
598 | +def find_ephemeral_disk(): |
599 | + """ |
600 | + Get the ephemeral disk. |
601 | + """ |
602 | + part_dev = find_ephemeral_part() |
603 | + if part_dev and str(part_dev[-1]).isdigit(): |
604 | + return part_dev[:-1] |
605 | + elif part_dev: |
606 | + return part_dev |
607 | + return None |
608 | + |
609 | + |
610 | +def support_new_ephemeral(cfg): |
611 | + """ |
612 | + Windows Azure makes ephemeral devices ephemeral to boot; a ephemeral device |
613 | + may be presented as a fresh device, or not. |
614 | + |
615 | + Since the knowledge of when a disk is supposed to be plowed under is |
616 | + specific to Windows Azure, the logic resides here in the datasource. When a |
617 | + new ephemeral device is detected, cloud-init overrides the default |
618 | + frequency for both disk-setup and mounts for the current boot only. |
619 | + """ |
620 | + device = find_ephemeral_part() |
621 | + if not device: |
622 | + LOG.debug("no default fabric formated ephemeral0.1 found") |
623 | + return None |
624 | + LOG.debug("fabric formated ephemeral0.1 device at %s", device) |
625 | + |
626 | + file_count = 0 |
627 | + try: |
628 | + file_count = util.mount_cb(device, count_files) |
629 | + except: |
630 | + return None |
631 | + LOG.debug("fabric prepared ephmeral0.1 has %s files on it", file_count) |
632 | + |
633 | + if file_count >= 1: |
634 | + LOG.debug("fabric prepared ephemeral0.1 will be preserved") |
635 | + return None |
636 | + else: |
637 | + # if device was already mounted, then we need to unmount it |
638 | + # race conditions could allow for a check-then-unmount |
639 | + # to have a false positive. so just unmount and then check. |
640 | + try: |
641 | + util.subp(['umount', device]) |
642 | + except util.ProcessExecutionError as e: |
643 | + if device in util.mounts(): |
644 | + LOG.warn("Failed to unmount %s, will not reformat.", device) |
645 | + LOG.debug("Failed umount: %s", e) |
646 | + return None |
647 | + |
648 | + LOG.debug("cloud-init will format ephemeral0.1 this boot.") |
649 | + LOG.debug("setting disk_setup and mounts modules 'always' for this boot") |
650 | + |
651 | + cc_modules = cfg.get('cloud_config_modules') |
652 | + if not cc_modules: |
653 | + return None |
654 | + |
655 | + mod_list = [] |
656 | + for mod in cc_modules: |
657 | + if mod in ("disk_setup", "mounts"): |
658 | + mod_list.append([mod, PER_ALWAYS]) |
659 | + LOG.debug("set module '%s' to 'always' for this boot", mod) |
660 | + else: |
661 | + mod_list.append(mod) |
662 | + return mod_list |
663 | + |
664 | + |
665 | def handle_set_hostname(enabled, hostname, cfg): |
666 | if not util.is_true(enabled): |
667 | return |
668 | |
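In the Azure changes above, `find_ephemeral_disk` derives the whole-disk device from the located partition by stripping a trailing partition digit. The naming logic in isolation (`disk_from_part` is a hypothetical name for this sketch; note the approach assumes /dev/sdX-style names, where the partition is disk name plus a digit):

```python
def disk_from_part(part_dev):
    """Return the underlying disk for a partition device, or the device
    itself when it does not end in a partition number."""
    if part_dev and str(part_dev[-1]).isdigit():
        return part_dev[:-1]
    return part_dev or None

print(disk_from_part('/dev/sdb1'))  # -> /dev/sdb
```

The datasource then records the result as the `ephemeral0` disk alias so disk_setup operates on the detected device.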
669 | === modified file 'cloudinit/sources/DataSourceCloudSigma.py' |
670 | --- cloudinit/sources/DataSourceCloudSigma.py 2014-02-18 16:58:12 +0000 |
671 | +++ cloudinit/sources/DataSourceCloudSigma.py 2014-07-18 13:33:31 +0000 |
672 | @@ -15,10 +15,13 @@ |
673 | # |
674 | # You should have received a copy of the GNU General Public License |
675 | # along with this program. If not, see <http://www.gnu.org/licenses/>. |
676 | +from base64 import b64decode |
677 | +import os |
678 | import re |
679 | |
680 | from cloudinit import log as logging |
681 | from cloudinit import sources |
682 | +from cloudinit import util |
683 | from cloudinit.cs_utils import Cepko |
684 | |
685 | LOG = logging.getLogger(__name__) |
686 | @@ -39,12 +42,40 @@ |
687 | self.ssh_public_key = '' |
688 | sources.DataSource.__init__(self, sys_cfg, distro, paths) |
689 | |
690 | + def is_running_in_cloudsigma(self): |
691 | + """ |
692 | + Uses dmidecode to detect if this instance of cloud-init is running |
693 | + in the CloudSigma's infrastructure. |
694 | + """ |
695 | + uname_arch = os.uname()[4] |
696 | + if uname_arch.startswith("arm") or uname_arch == "aarch64": |
697 | + # Disabling because dmidecode in CMD_DMI_SYSTEM crashes kvm process |
698 | + LOG.debug("Disabling CloudSigma datasource on arm (LP: #1243287)") |
699 | + return False |
700 | + |
701 | + dmidecode_path = util.which('dmidecode') |
702 | + if not dmidecode_path: |
703 | + return False |
704 | + |
705 | + LOG.debug("Determining hypervisor product name via dmidecode") |
706 | + try: |
707 | + cmd = [dmidecode_path, "--string", "system-product-name"] |
708 | + system_product_name, _ = util.subp(cmd) |
709 | + return 'cloudsigma' in system_product_name.lower() |
710 | + except: |
711 | + LOG.warn("Failed to get hypervisor product name via dmidecode") |
712 | + |
713 | + return False |
714 | + |
715 | def get_data(self): |
716 | """ |
717 | Metadata is the whole server context and /meta/cloud-config is used |
718 | as userdata. |
719 | """ |
720 | dsmode = None |
721 | + if not self.is_running_in_cloudsigma(): |
722 | + return False |
723 | + |
724 | try: |
725 | server_context = self.cepko.all().result |
726 | server_meta = server_context['meta'] |
727 | @@ -61,7 +92,13 @@ |
728 | if dsmode == "disabled" or dsmode != self.dsmode: |
729 | return False |
730 | |
731 | + base64_fields = server_meta.get('base64_fields', '').split(',') |
732 | self.userdata_raw = server_meta.get('cloudinit-user-data', "") |
733 | + if 'cloudinit-user-data' in base64_fields: |
734 | + self.userdata_raw = b64decode(self.userdata_raw) |
735 | + if 'cloudinit' in server_context.get('vendor_data', {}): |
736 | + self.vendordata_raw = server_context["vendor_data"]["cloudinit"] |
737 | + |
738 | self.metadata = server_context |
739 | self.ssh_public_key = server_meta['ssh_public_key'] |
740 | |
741 | |
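The CloudSigma change above also decodes user-data only when the server metadata's comma-separated `base64_fields` list names the `cloudinit-user-data` field. That opt-in decode, extracted into a hypothetical helper for illustration:

```python
from base64 import b64decode, b64encode

def decode_user_data(server_meta):
    """Return cloudinit-user-data, base64-decoding it only when the
    'base64_fields' meta key (a comma-separated list) says to."""
    base64_fields = server_meta.get('base64_fields', '').split(',')
    userdata = server_meta.get('cloudinit-user-data', '')
    if 'cloudinit-user-data' in base64_fields:
        userdata = b64decode(userdata)
    return userdata

meta = {'base64_fields': 'cloudinit-user-data',
        'cloudinit-user-data': b64encode(b'#cloud-config\n')}
print(decode_user_data(meta))
```

Fields not listed in `base64_fields` pass through untouched, which keeps existing plain-text user-data working.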
742 | === modified file 'cloudinit/sources/DataSourceNoCloud.py' |
743 | --- cloudinit/sources/DataSourceNoCloud.py 2014-02-18 17:58:21 +0000 |
744 | +++ cloudinit/sources/DataSourceNoCloud.py 2014-07-18 13:33:31 +0000 |
745 | @@ -57,7 +57,7 @@ |
746 | md = {} |
747 | if parse_cmdline_data(self.cmdline_id, md): |
748 | found.append("cmdline") |
749 | - mydata.update(md) |
750 | + mydata['meta-data'].update(md) |
751 | except: |
752 | util.logexc(LOG, "Unable to parse command line data") |
753 | return False |
754 | |
755 | === modified file 'cloudinit/sources/DataSourceOpenNebula.py' |
756 | --- cloudinit/sources/DataSourceOpenNebula.py 2014-01-17 01:11:27 +0000 |
757 | +++ cloudinit/sources/DataSourceOpenNebula.py 2014-07-18 13:33:31 +0000 |
758 | @@ -4,11 +4,13 @@ |
759 | # Copyright (C) 2012 Yahoo! Inc. |
760 | # Copyright (C) 2012-2013 CERIT Scientific Cloud |
761 | # Copyright (C) 2012-2013 OpenNebula.org |
762 | +# Copyright (C) 2014 Consejo Superior de Investigaciones Cientificas |
763 | # |
764 | # Author: Scott Moser <scott.moser@canonical.com> |
765 | # Author: Joshua Harlow <harlowja@yahoo-inc.com> |
766 | # Author: Vlastimil Holer <xholer@mail.muni.cz> |
767 | # Author: Javier Fontan <jfontan@opennebula.org> |
768 | +# Author: Enol Fernandez <enolfc@ifca.unican.es> |
769 | # |
770 | # This program is free software: you can redistribute it and/or modify |
771 | # it under the terms of the GNU General Public License version 3, as |
772 | @@ -22,6 +24,7 @@ |
773 | # You should have received a copy of the GNU General Public License |
774 | # along with this program. If not, see <http://www.gnu.org/licenses/>. |
775 | |
776 | +import base64 |
777 | import os |
778 | import pwd |
779 | import re |
780 | @@ -417,6 +420,16 @@ |
781 | elif "USERDATA" in context: |
782 | results['userdata'] = context["USERDATA"] |
783 | |
784 | + # b64decode user data if necessary (default) |
785 | + if 'userdata' in results: |
786 | + encoding = context.get('USERDATA_ENCODING', |
787 | + context.get('USER_DATA_ENCODING')) |
788 | + if encoding == "base64": |
789 | + try: |
790 | + results['userdata'] = base64.b64decode(results['userdata']) |
791 | + except TypeError: |
792 | + LOG.warn("Failed base64 decoding of userdata") |
793 | + |
794 | # generate static /etc/network/interfaces |
795 | # only if there are any required context variables |
796 | # http://opennebula.org/documentation:rel3.8:cong#network_configuration |
797 | |
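The OpenNebula hunk only Base64-decodes user data when the context explicitly declares the encoding. A self-contained sketch of that guard (function name is ours; the key names match the patch):

```python
import base64
import binascii


def maybe_decode_userdata(userdata, context):
    """Decode userdata only when USERDATA_ENCODING (or USER_DATA_ENCODING)
    is exactly 'base64'; otherwise pass it through untouched."""
    encoding = context.get('USERDATA_ENCODING',
                           context.get('USER_DATA_ENCODING'))
    if encoding != 'base64':
        return userdata
    try:
        return base64.b64decode(userdata)
    except (binascii.Error, TypeError):
        # Mirror the datasource: log-and-continue rather than fail.
        return userdata
```

This is why the new `test_user_data_encoding_required_for_decode` test expects the raw Base64 string back when no encoding flag is set.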
798 | === modified file 'cloudinit/sources/DataSourceSmartOS.py' |
799 | --- cloudinit/sources/DataSourceSmartOS.py 2014-02-26 19:28:46 +0000 |
800 | +++ cloudinit/sources/DataSourceSmartOS.py 2014-07-18 13:33:31 +0000 |
801 | @@ -170,8 +170,9 @@ |
802 | md = {} |
803 | ud = "" |
804 | |
805 | - if not os.path.exists(self.seed): |
806 | - LOG.debug("Host does not appear to be on SmartOS") |
807 | + if not device_exists(self.seed): |
808 | + LOG.debug("No serial device '%s' found for SmartOS datasource", |
809 | + self.seed) |
810 | return False |
811 | |
812 | uname_arch = os.uname()[4] |
813 | @@ -274,6 +275,11 @@ |
814 | b64=b64) |
815 | |
816 | |
817 | +def device_exists(device): |
818 |     +    """Simplistic method to determine if the device exists or not""" |
819 | + return os.path.exists(device) |
820 | + |
821 | + |
822 | def get_serial(seed_device, seed_timeout): |
823 | """This is replaced in unit testing, allowing us to replace |
824 | serial.Serial with a mocked class. |
825 | |
826 | === modified file 'cloudinit/stages.py' |
827 | --- cloudinit/stages.py 2014-02-13 18:53:08 +0000 |
828 | +++ cloudinit/stages.py 2014-07-18 13:33:31 +0000 |
829 | @@ -397,8 +397,8 @@ |
830 | mod = handlers.fixup_handler(mod) |
831 | types = c_handlers.register(mod) |
832 | if types: |
833 | - LOG.debug("Added custom handler for %s from %s", |
834 | - types, fname) |
835 | + LOG.debug("Added custom handler for %s [%s] from %s", |
836 | + types, mod, fname) |
837 | except Exception: |
838 | util.logexc(LOG, "Failed to register handler from %s", |
839 | fname) |
840 | @@ -644,6 +644,8 @@ |
841 | freq = mod.frequency |
842 | if not freq in FREQUENCIES: |
843 | freq = PER_INSTANCE |
844 | + LOG.debug("Running module %s (%s) with frequency %s", |
845 | + name, mod, freq) |
846 | |
847 | # Use the configs logger and not our own |
848 | # TODO(harlowja): possibly check the module |
849 | @@ -657,7 +659,7 @@ |
850 | run_name = "config-%s" % (name) |
851 | cc.run(run_name, mod.handle, func_args, freq=freq) |
852 | except Exception as e: |
853 | - util.logexc(LOG, "Running %s (%s) failed", name, mod) |
854 | + util.logexc(LOG, "Running module %s (%s) failed", name, mod) |
855 | failures.append((name, e)) |
856 | return (which_ran, failures) |
857 | |
858 | |
859 | === modified file 'cloudinit/util.py' |
860 | --- cloudinit/util.py 2014-02-13 11:27:22 +0000 |
861 | +++ cloudinit/util.py 2014-07-18 13:33:31 +0000 |
862 | @@ -1395,8 +1395,10 @@ |
863 | return obj_copy.deepcopy(CFG_BUILTIN) |
864 | |
865 | |
866 | -def sym_link(source, link): |
867 | +def sym_link(source, link, force=False): |
868 | LOG.debug("Creating symbolic link from %r => %r", link, source) |
869 | + if force and os.path.exists(link): |
870 | + del_file(link) |
871 | os.symlink(source, link) |
872 | |
873 | |
874 | |
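The `sym_link` change above adds a `force` flag that deletes any existing path at `link` before recreating the symlink. A standalone sketch of the same idea:

```python
import os


def sym_link(source, link, force=False):
    """Create a symlink, optionally replacing an existing entry at `link`."""
    if force and os.path.lexists(link):
        os.unlink(link)
    os.symlink(source, link)
```

One design note: the patch tests `os.path.exists(link)`, which is False for a dangling symlink; `os.path.lexists` (used in this sketch) also replaces a broken link, which is usually what `force` callers want.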
875 | === modified file 'cloudinit/version.py' |
876 | --- cloudinit/version.py 2013-11-19 21:49:53 +0000 |
877 | +++ cloudinit/version.py 2014-07-18 13:33:31 +0000 |
878 | @@ -20,7 +20,7 @@ |
879 | |
880 | |
881 | def version(): |
882 | - return vr.StrictVersion("0.7.5") |
883 | + return vr.StrictVersion("0.7.6") |
884 | |
885 | |
886 | def version_string(): |
887 | |
888 | === modified file 'doc/examples/cloud-config-user-groups.txt' |
889 | --- doc/examples/cloud-config-user-groups.txt 2013-10-02 13:25:36 +0000 |
890 | +++ doc/examples/cloud-config-user-groups.txt 2014-07-18 13:33:31 +0000 |
891 | @@ -69,7 +69,7 @@ |
892 | # no-user-group: When set to true, do not create a group named after the user. |
893 | # no-log-init: When set to true, do not initialize lastlog and faillog database. |
894 | # ssh-import-id: Optional. Import SSH ids |
895 | -# ssh-authorized-key: Optional. Add key to user's ssh authorized keys file |
896 | +# ssh-authorized-keys: Optional. [list] Add keys to user's authorized keys file |
897 | # sudo: Defaults to none. Set to the sudo string you want to use, i.e. |
898 | # ALL=(ALL) NOPASSWD:ALL. To add multiple rules, use the following |
899 | # format. |
900 | |
901 | === modified file 'doc/sources/cloudsigma/README.rst' |
902 | --- doc/sources/cloudsigma/README.rst 2014-02-13 15:39:39 +0000 |
903 | +++ doc/sources/cloudsigma/README.rst 2014-07-18 13:33:31 +0000 |
904 | @@ -23,6 +23,10 @@ |
905 | header could be omitted. However since this is a raw-text field you could provide any of the valid |
906 | `config formats`_. |
907 | |
908 |     +You can also encode your user-data with Base64. To do so, add the |
909 |     +``cloudinit-user-data`` field to ``base64_fields``. The latter is a comma-separated |
910 |     +list of all meta fields with Base64-encoded values. |
911 | + |
912 | If your user-data does not need an internet connection you can create a |
913 | `meta field`_ in the `server context`_ ``cloudinit-dsmode`` and set "local" as value. |
914 | If this field does not exist the default value is "net". |
915 | |
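To use the mechanism documented above, the user-data value itself must be Base64-encoded before being placed in the meta field. A hypothetical helper (field names come from the CloudSigma datasource; the helper is illustrative, not part of cloud-init):

```python
import base64


def encode_meta_userdata(user_data):
    """Build server-context meta entries marking cloudinit-user-data
    as Base64-encoded, per the CloudSigma base64_fields convention."""
    return {
        'cloudinit-user-data': base64.b64encode(user_data).decode('ascii'),
        'base64_fields': 'cloudinit-user-data',
    }
```

The datasource then sees `cloudinit-user-data` listed in `base64_fields` and decodes it, as exercised by the new `test_encoded_user_data` unit test.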
916 | === added file 'doc/status.txt' |
917 | --- doc/status.txt 1970-01-01 00:00:00 +0000 |
918 | +++ doc/status.txt 2014-07-18 13:33:31 +0000 |
919 | @@ -0,0 +1,53 @@ |
920 | +cloud-init will keep a 'status' file up to date for other applications |
921 | +wishing to use it to determine cloud-init status. |
922 | + |
923 | +It will manage 2 files: |
924 | + status.json |
925 | + result.json |
926 | + |
927 | +The files will be written to /var/lib/cloud/data/ . |
928 | +A symlink will be created in /run/cloud-init. The link from /run is to ensure |
929 | +that if the file exists, it is not stale for this boot. |
930 | + |
931 | +status.json's format is: |
932 | + { |
933 | + 'v1': { |
934 | + 'init': { |
935 |     +        'errors': [] # list of strings for each error that occurred |
936 |     +        'start': float # time.time() that this stage started or None |
937 |     +        'end': float # time.time() that this stage finished or None |
938 | + }, |
939 | + 'init-local': { |
940 |     +    'errors': [], 'start': <float>, 'end': <float> # (same as 'init' above) |
941 |     +    }, |
942 |     +    'modules-config': { |
943 |     +    'errors': [], 'start': <float>, 'end': <float> # (same as 'init' above) |
944 |     +    }, |
945 |     +    'modules-final': { |
946 |     +    'errors': [], 'start': <float>, 'end': <float> # (same as 'init' above) |
947 | + }, |
948 | + 'datasource': string describing datasource found or None |
949 | + 'stage': string representing stage that is currently running |
950 | + ('init', 'init-local', 'modules-final', 'modules-config', None) |
951 | + if None, then no stage is running. Reader must read the start/end |
952 | + of each of the above stages to determine the state. |
953 | + } |
954 | + |
955 | +result.json's format is: |
956 | + { |
957 | + 'v1': { |
958 | + 'datasource': string describing the datasource found |
959 | + 'errors': [] # list of errors reported |
960 | + } |
961 | + } |
962 | + |
963 | +Thus, to determine if cloud-init is finished: |
964 | + fin = "/run/cloud-init/result.json" |
965 | + if os.path.exists(fin): |
966 | + ret = json.load(open(fin, "r")) |
967 | + if len(ret['v1']['errors']): |
968 | + print "Finished with errors:" + "\n".join(ret['v1']['errors']) |
969 | + else: |
970 |     +    print "Finished, no errors" |
971 | + else: |
972 | + print "Not Finished" |
973 | |
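The snippet in doc/status.txt above is Python 2. A Python 3 equivalent of the same completion check, wrapped in a function (the function name is ours; the path and JSON layout are exactly as documented above):

```python
import json
import os


def cloudinit_result(path="/run/cloud-init/result.json"):
    """Return (finished, errors) based on result.json, per doc/status.txt.

    The file only exists once cloud-init has finished; its absence
    means "not finished", not an error.
    """
    if not os.path.exists(path):
        return (False, [])
    with open(path) as fp:
        ret = json.load(fp)
    return (True, ret['v1']['errors'])
```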
974 | === modified file 'tests/unittests/helpers.py' |
975 | --- tests/unittests/helpers.py 2014-02-08 00:40:51 +0000 |
976 | +++ tests/unittests/helpers.py 2014-07-18 13:33:31 +0000 |
977 | @@ -52,6 +52,30 @@ |
978 | standardMsg = standardMsg % (value) |
979 | self.fail(self._formatMessage(msg, standardMsg)) |
980 | |
981 | + def assertDictContainsSubset(self, expected, actual, msg=None): |
982 | + missing = [] |
983 | + mismatched = [] |
984 | + for k, v in expected.iteritems(): |
985 | + if k not in actual: |
986 | + missing.append(k) |
987 | + elif actual[k] != v: |
988 | + mismatched.append('%r, expected: %r, actual: %r' |
989 | + % (k, v, actual[k])) |
990 | + |
991 | + if len(missing) == 0 and len(mismatched) == 0: |
992 | + return |
993 | + |
994 | + standardMsg = '' |
995 | + if missing: |
996 | + standardMsg = 'Missing: %r' % ','.join(m for m in missing) |
997 | + if mismatched: |
998 | + if standardMsg: |
999 | + standardMsg += '; ' |
1000 | + standardMsg += 'Mismatched values: %s' % ','.join(mismatched) |
1001 | + |
1002 | + self.fail(self._formatMessage(msg, standardMsg)) |
1003 | + |
1004 | + |
1005 | else: |
1006 | class TestCase(unittest.TestCase): |
1007 | pass |
1008 | |
1009 | === modified file 'tests/unittests/test__init__.py' |
1010 | --- tests/unittests/test__init__.py 2014-01-25 03:31:28 +0000 |
1011 | +++ tests/unittests/test__init__.py 2014-07-18 13:33:31 +0000 |
1012 | @@ -1,14 +1,10 @@ |
1013 | -import logging |
1014 | import os |
1015 | -import StringIO |
1016 | -import sys |
1017 | |
1018 | -from mocker import MockerTestCase, ANY, ARGS, KWARGS |
1019 | +from mocker import MockerTestCase, ARGS, KWARGS |
1020 | |
1021 | from cloudinit import handlers |
1022 | from cloudinit import helpers |
1023 | from cloudinit import importer |
1024 | -from cloudinit import log |
1025 | from cloudinit import settings |
1026 | from cloudinit import url_helper |
1027 | from cloudinit import util |
1028 | |
1029 | === modified file 'tests/unittests/test_datasource/test_cloudsigma.py' |
1030 | --- tests/unittests/test_datasource/test_cloudsigma.py 2014-02-12 10:14:49 +0000 |
1031 | +++ tests/unittests/test_datasource/test_cloudsigma.py 2014-07-18 13:33:31 +0000 |
1032 | @@ -1,9 +1,11 @@ |
1033 | # coding: utf-8 |
1034 | -from unittest import TestCase |
1035 | +import copy |
1036 | |
1037 | from cloudinit.cs_utils import Cepko |
1038 | from cloudinit.sources import DataSourceCloudSigma |
1039 | |
1040 | +from tests.unittests import helpers as test_helpers |
1041 | + |
1042 | |
1043 | SERVER_CONTEXT = { |
1044 | "cpu": 1000, |
1045 | @@ -19,21 +21,27 @@ |
1046 | "smp": 1, |
1047 | "tags": ["much server", "very performance"], |
1048 | "uuid": "65b2fb23-8c03-4187-a3ba-8b7c919e8890", |
1049 | - "vnc_password": "9e84d6cb49e46379" |
1050 | + "vnc_password": "9e84d6cb49e46379", |
1051 | + "vendor_data": { |
1052 | + "location": "zrh", |
1053 | + "cloudinit": "#cloud-config\n\n...", |
1054 | + } |
1055 | } |
1056 | |
1057 | |
1058 | class CepkoMock(Cepko): |
1059 | - result = SERVER_CONTEXT |
1060 | + def __init__(self, mocked_context): |
1061 | + self.result = mocked_context |
1062 | |
1063 | def all(self): |
1064 | return self |
1065 | |
1066 | |
1067 | -class DataSourceCloudSigmaTest(TestCase): |
1068 | +class DataSourceCloudSigmaTest(test_helpers.TestCase): |
1069 | def setUp(self): |
1070 | self.datasource = DataSourceCloudSigma.DataSourceCloudSigma("", "", "") |
1071 | - self.datasource.cepko = CepkoMock() |
1072 | + self.datasource.is_running_in_cloudsigma = lambda: True |
1073 | + self.datasource.cepko = CepkoMock(SERVER_CONTEXT) |
1074 | self.datasource.get_data() |
1075 | |
1076 | def test_get_hostname(self): |
1077 | @@ -57,3 +65,34 @@ |
1078 | def test_user_data(self): |
1079 | self.assertEqual(self.datasource.userdata_raw, |
1080 | SERVER_CONTEXT['meta']['cloudinit-user-data']) |
1081 | + |
1082 | + def test_encoded_user_data(self): |
1083 | + encoded_context = copy.deepcopy(SERVER_CONTEXT) |
1084 | + encoded_context['meta']['base64_fields'] = 'cloudinit-user-data' |
1085 | + encoded_context['meta']['cloudinit-user-data'] = 'aGkgd29ybGQK' |
1086 | + self.datasource.cepko = CepkoMock(encoded_context) |
1087 | + self.datasource.get_data() |
1088 | + |
1089 | + self.assertEqual(self.datasource.userdata_raw, b'hi world\n') |
1090 | + |
1091 | + def test_vendor_data(self): |
1092 | + self.assertEqual(self.datasource.vendordata_raw, |
1093 | + SERVER_CONTEXT['vendor_data']['cloudinit']) |
1094 | + |
1095 | + def test_lack_of_vendor_data(self): |
1096 | + stripped_context = copy.deepcopy(SERVER_CONTEXT) |
1097 | + del stripped_context["vendor_data"] |
1098 | + self.datasource = DataSourceCloudSigma.DataSourceCloudSigma("", "", "") |
1099 | + self.datasource.cepko = CepkoMock(stripped_context) |
1100 | + self.datasource.get_data() |
1101 | + |
1102 | + self.assertIsNone(self.datasource.vendordata_raw) |
1103 | + |
1104 | + def test_lack_of_cloudinit_key_in_vendor_data(self): |
1105 | + stripped_context = copy.deepcopy(SERVER_CONTEXT) |
1106 | + del stripped_context["vendor_data"]["cloudinit"] |
1107 | + self.datasource = DataSourceCloudSigma.DataSourceCloudSigma("", "", "") |
1108 | + self.datasource.cepko = CepkoMock(stripped_context) |
1109 | + self.datasource.get_data() |
1110 | + |
1111 | + self.assertIsNone(self.datasource.vendordata_raw) |
1112 | |
1113 | === modified file 'tests/unittests/test_datasource/test_gce.py' |
1114 | --- tests/unittests/test_datasource/test_gce.py 2014-02-13 22:03:12 +0000 |
1115 | +++ tests/unittests/test_datasource/test_gce.py 2014-07-18 13:33:31 +0000 |
1116 | @@ -15,7 +15,6 @@ |
1117 | # You should have received a copy of the GNU General Public License |
1118 | # along with this program. If not, see <http://www.gnu.org/licenses/>. |
1119 | |
1120 | -import unittest |
1121 | import httpretty |
1122 | import re |
1123 | |
1124 | @@ -25,6 +24,8 @@ |
1125 | from cloudinit import helpers |
1126 | from cloudinit.sources import DataSourceGCE |
1127 | |
1128 | +from tests.unittests import helpers as test_helpers |
1129 | + |
1130 | GCE_META = { |
1131 | 'instance/id': '123', |
1132 | 'instance/zone': 'foo/bar', |
1133 | @@ -54,7 +55,7 @@ |
1134 | return (404, headers, '') |
1135 | |
1136 | |
1137 | -class TestDataSourceGCE(unittest.TestCase): |
1138 | +class TestDataSourceGCE(test_helpers.TestCase): |
1139 | |
1140 | def setUp(self): |
1141 | self.ds = DataSourceGCE.DataSourceGCE( |
1142 | |
1143 | === modified file 'tests/unittests/test_datasource/test_maas.py' |
1144 | --- tests/unittests/test_datasource/test_maas.py 2014-01-25 03:31:28 +0000 |
1145 | +++ tests/unittests/test_datasource/test_maas.py 2014-07-18 13:33:31 +0000 |
1146 | @@ -3,7 +3,6 @@ |
1147 | |
1148 | from cloudinit.sources import DataSourceMAAS |
1149 | from cloudinit import url_helper |
1150 | -from cloudinit import util |
1151 | from tests.unittests.helpers import populate_dir |
1152 | |
1153 | import mocker |
1154 | |
1155 | === modified file 'tests/unittests/test_datasource/test_opennebula.py' |
1156 | --- tests/unittests/test_datasource/test_opennebula.py 2014-01-17 16:09:15 +0000 |
1157 | +++ tests/unittests/test_datasource/test_opennebula.py 2014-07-18 13:33:31 +0000 |
1158 | @@ -4,6 +4,7 @@ |
1159 | from mocker import MockerTestCase |
1160 | from tests.unittests.helpers import populate_dir |
1161 | |
1162 | +from base64 import b64encode |
1163 | import os |
1164 | import pwd |
1165 | |
1166 | @@ -164,10 +165,31 @@ |
1167 | |
1168 | public_keys.append(SSH_KEY % (c + 1,)) |
1169 | |
1170 | - def test_user_data(self): |
1171 | - for k in ('USER_DATA', 'USERDATA'): |
1172 | - my_d = os.path.join(self.tmp, k) |
1173 | - populate_context_dir(my_d, {k: USER_DATA}) |
1174 | + def test_user_data_plain(self): |
1175 | + for k in ('USER_DATA', 'USERDATA'): |
1176 | + my_d = os.path.join(self.tmp, k) |
1177 | + populate_context_dir(my_d, {k: USER_DATA, |
1178 | + 'USERDATA_ENCODING': ''}) |
1179 | + results = ds.read_context_disk_dir(my_d) |
1180 | + |
1181 | + self.assertTrue('userdata' in results) |
1182 | + self.assertEqual(USER_DATA, results['userdata']) |
1183 | + |
1184 | + def test_user_data_encoding_required_for_decode(self): |
1185 | + b64userdata = b64encode(USER_DATA) |
1186 | + for k in ('USER_DATA', 'USERDATA'): |
1187 | + my_d = os.path.join(self.tmp, k) |
1188 | + populate_context_dir(my_d, {k: b64userdata}) |
1189 | + results = ds.read_context_disk_dir(my_d) |
1190 | + |
1191 | + self.assertTrue('userdata' in results) |
1192 | + self.assertEqual(b64userdata, results['userdata']) |
1193 | + |
1194 | + def test_user_data_base64_encoding(self): |
1195 | + for k in ('USER_DATA', 'USERDATA'): |
1196 | + my_d = os.path.join(self.tmp, k) |
1197 | + populate_context_dir(my_d, {k: b64encode(USER_DATA), |
1198 | + 'USERDATA_ENCODING': 'base64'}) |
1199 | results = ds.read_context_disk_dir(my_d) |
1200 | |
1201 | self.assertTrue('userdata' in results) |
1202 | |
1203 | === modified file 'tests/unittests/test_datasource/test_smartos.py' |
1204 | --- tests/unittests/test_datasource/test_smartos.py 2014-02-26 19:21:40 +0000 |
1205 | +++ tests/unittests/test_datasource/test_smartos.py 2014-07-18 13:33:31 +0000 |
1206 | @@ -24,10 +24,7 @@ |
1207 | |
1208 | import base64 |
1209 | from cloudinit import helpers as c_helpers |
1210 | -from cloudinit import stages |
1211 | -from cloudinit import util |
1212 | from cloudinit.sources import DataSourceSmartOS |
1213 | -from cloudinit.settings import (PER_INSTANCE) |
1214 | from tests.unittests import helpers |
1215 | import os |
1216 | import os.path |
1217 | @@ -174,6 +171,7 @@ |
1218 | self.apply_patches([(mod, 'get_serial', _get_serial)]) |
1219 | self.apply_patches([(mod, 'dmi_data', _dmi_data)]) |
1220 | self.apply_patches([(os, 'uname', _os_uname)]) |
1221 | + self.apply_patches([(mod, 'device_exists', lambda d: True)]) |
1222 | dsrc = mod.DataSourceSmartOS(sys_cfg, distro=None, |
1223 | paths=self.paths) |
1224 | return dsrc |
1225 | |
1226 | === modified file 'tests/unittests/test_handler/test_handler_seed_random.py' |
1227 | --- tests/unittests/test_handler/test_handler_seed_random.py 2013-10-02 13:28:42 +0000 |
1228 | +++ tests/unittests/test_handler/test_handler_seed_random.py 2014-07-18 13:33:31 +0000 |
1229 | @@ -42,10 +42,32 @@ |
1230 | def setUp(self): |
1231 | super(TestRandomSeed, self).setUp() |
1232 | self._seed_file = tempfile.mktemp() |
1233 | + self.unapply = [] |
1234 | + |
1235 | + # by default 'which' has nothing in its path |
1236 | + self.apply_patches([(util, 'which', self._which)]) |
1237 | + self.apply_patches([(util, 'subp', self._subp)]) |
1238 | + self.subp_called = [] |
1239 | + self.whichdata = {} |
1240 | |
1241 | def tearDown(self): |
1242 | + apply_patches([i for i in reversed(self.unapply)]) |
1243 | util.del_file(self._seed_file) |
1244 | |
1245 | + def apply_patches(self, patches): |
1246 | + ret = apply_patches(patches) |
1247 | + self.unapply += ret |
1248 | + |
1249 | + def _which(self, program): |
1250 | + return self.whichdata.get(program) |
1251 | + |
1252 | + def _subp(self, *args, **kwargs): |
1253 | + # supports subp calling with cmd as args or kwargs |
1254 | + if 'args' not in kwargs: |
1255 | + kwargs['args'] = args[0] |
1256 | + self.subp_called.append(kwargs) |
1257 | + return |
1258 | + |
1259 | def _compress(self, text): |
1260 | contents = StringIO() |
1261 | gz_fh = gzip.GzipFile(mode='wb', fileobj=contents) |
1262 | @@ -148,3 +170,56 @@ |
1263 | cc_seed_random.handle('test', cfg, c, LOG, []) |
1264 | contents = util.load_file(self._seed_file) |
1265 | self.assertEquals('tiny-tim-was-here-so-was-josh', contents) |
1266 | + |
1267 | + def test_seed_command_not_provided_pollinate_available(self): |
1268 | + c = self._get_cloud('ubuntu', {}) |
1269 | + self.whichdata = {'pollinate': '/usr/bin/pollinate'} |
1270 | + cc_seed_random.handle('test', {}, c, LOG, []) |
1271 | + |
1272 | + subp_args = [f['args'] for f in self.subp_called] |
1273 | + self.assertIn(['pollinate', '-q'], subp_args) |
1274 | + |
1275 | + def test_seed_command_not_provided_pollinate_not_available(self): |
1276 | + c = self._get_cloud('ubuntu', {}) |
1277 | + self.whichdata = {} |
1278 | + cc_seed_random.handle('test', {}, c, LOG, []) |
1279 | + |
1280 | + # subp should not have been called as which would say not available |
1281 | + self.assertEquals(self.subp_called, list()) |
1282 | + |
1283 | + def test_unavailable_seed_command_and_required_raises_error(self): |
1284 | + c = self._get_cloud('ubuntu', {}) |
1285 | + self.whichdata = {} |
1286 | + self.assertRaises(ValueError, cc_seed_random.handle, |
1287 | + 'test', {'random_seed': {'command_required': True}}, c, LOG, []) |
1288 | + |
1289 | + def test_seed_command_and_required(self): |
1290 | + c = self._get_cloud('ubuntu', {}) |
1291 | + self.whichdata = {'foo': 'foo'} |
1292 | + cfg = {'random_seed': {'command_required': True, 'command': ['foo']}} |
1293 | + cc_seed_random.handle('test', cfg, c, LOG, []) |
1294 | + |
1295 | + self.assertIn(['foo'], [f['args'] for f in self.subp_called]) |
1296 | + |
1297 | + def test_file_in_environment_for_command(self): |
1298 | + c = self._get_cloud('ubuntu', {}) |
1299 | + self.whichdata = {'foo': 'foo'} |
1300 | + cfg = {'random_seed': {'command_required': True, 'command': ['foo'], |
1301 | + 'file': self._seed_file}} |
1302 | + cc_seed_random.handle('test', cfg, c, LOG, []) |
1303 | + |
1304 |     +        # this just insists that the first time subp was called, |
1305 | + # RANDOM_SEED_FILE was in the environment set up correctly |
1306 | + subp_env = [f['env'] for f in self.subp_called] |
1307 | + self.assertEqual(subp_env[0].get('RANDOM_SEED_FILE'), self._seed_file) |
1308 | + |
1309 | + |
1310 | +def apply_patches(patches): |
1311 | + ret = [] |
1312 | + for (ref, name, replace) in patches: |
1313 | + if replace is None: |
1314 | + continue |
1315 | + orig = getattr(ref, name) |
1316 | + setattr(ref, name, replace) |
1317 | + ret.append((ref, name, orig)) |
1318 | + return ret |
1319 | |
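The `apply_patches` helper added above implements a reversible monkey-patch: it swaps attributes and returns `(ref, name, orig)` tuples so that `tearDown` can restore the originals by feeding the reversed list back in. A minimal demonstration (`apply_patches` is copied verbatim from the hunk; the `Box` class is ours):

```python
def apply_patches(patches):
    ret = []
    for (ref, name, replace) in patches:
        if replace is None:
            continue
        orig = getattr(ref, name)
        setattr(ref, name, replace)
        ret.append((ref, name, orig))
    return ret


class Box(object):
    value = 'original'
```

Passing the returned undo list back into `apply_patches` restores the patched attribute, which is exactly what the test's `tearDown` does with `reversed(self.unapply)`.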
1320 | === modified file 'tests/unittests/test_handler/test_handler_yum_add_repo.py' |
1321 | --- tests/unittests/test_handler/test_handler_yum_add_repo.py 2014-02-06 15:59:04 +0000 |
1322 | +++ tests/unittests/test_handler/test_handler_yum_add_repo.py 2014-07-18 13:33:31 +0000 |
1323 | @@ -1,4 +1,3 @@ |
1324 | -from cloudinit import helpers |
1325 | from cloudinit import util |
1326 | |
1327 | from cloudinit.config import cc_yum_add_repo |
1328 | |
1329 | === modified file 'tests/unittests/test_templating.py' |
1330 | --- tests/unittests/test_templating.py 2014-07-16 18:31:31 +0000 |
1331 | +++ tests/unittests/test_templating.py 2014-07-18 13:33:31 +0000 |
1332 | @@ -17,26 +17,51 @@ |
1333 | # along with this program. If not, see <http://www.gnu.org/licenses/>. |
1334 | |
1335 | from tests.unittests import helpers as test_helpers |
1336 | +import textwrap |
1337 | |
1338 | from cloudinit import templater |
1339 | |
1340 | |
1341 | class TestTemplates(test_helpers.TestCase): |
1342 | def test_render_basic(self): |
1343 | - in_data = """ |
1344 | -${b} |
1345 | + in_data = textwrap.dedent(""" |
1346 | + ${b} |
1347 | |
1348 | -c = d |
1349 | -""" |
1350 | + c = d |
1351 | + """) |
1352 | in_data = in_data.strip() |
1353 | - expected_data = """ |
1354 | -2 |
1355 | + expected_data = textwrap.dedent(""" |
1356 | + 2 |
1357 | |
1358 | -c = d |
1359 | -""" |
1360 | + c = d |
1361 | + """) |
1362 | out_data = templater.basic_render(in_data, {'b': 2}) |
1363 | self.assertEqual(expected_data.strip(), out_data) |
1364 | |
1365 | + def test_render_basic_no_parens(self): |
1366 | + hn = "myfoohost" |
1367 | + in_data = "h=$hostname\nc=d\n" |
1368 | + expected_data = "h=%s\nc=d\n" % hn |
1369 | + out_data = templater.basic_render(in_data, {'hostname': hn}) |
1370 | + self.assertEqual(expected_data, out_data) |
1371 | + |
1372 | + def test_render_basic_parens(self): |
1373 | + hn = "myfoohost" |
1374 | + in_data = "h = ${hostname}\nc=d\n" |
1375 | + expected_data = "h = %s\nc=d\n" % hn |
1376 | + out_data = templater.basic_render(in_data, {'hostname': hn}) |
1377 | + self.assertEqual(expected_data, out_data) |
1378 | + |
1379 | + def test_render_basic2(self): |
1380 | + mirror = "mymirror" |
1381 | + codename = "zany" |
1382 | + in_data = "deb $mirror $codename-updates main contrib non-free" |
1383 | + ex_data = "deb %s %s-updates main contrib non-free" % (mirror, codename) |
1384 | + |
1385 | + out_data = templater.basic_render(in_data, |
1386 | + {'mirror': mirror, 'codename': codename}) |
1387 | + self.assertEqual(ex_data, out_data) |
1388 | + |
1389 | def test_detection(self): |
1390 | blob = "## template:cheetah" |
1391 | |
1392 | @@ -53,14 +78,12 @@ |
1393 | self.assertRaises(ValueError, templater.detect_template, blob) |
1394 | |
1395 | def test_render_cheetah(self): |
1396 | - blob = '''## template:cheetah |
1397 | -$a,$b''' |
1398 | + blob = '\n'.join(['## template:cheetah', '$a,$b']) |
1399 | c = templater.render_string(blob, {"a": 1, "b": 2}) |
1400 | self.assertEquals("1,2", c) |
1401 | |
1402 | def test_render_jinja(self): |
1403 | - blob = '''## template:jinja |
1404 | -{{a}},{{b}}''' |
1405 | + blob = '\n'.join(['## template:jinja', '{{a}},{{b}}']) |
1406 | c = templater.render_string(blob, {"a": 1, "b": 2}) |
1407 | self.assertEquals("1,2", c) |
1408 |
Seems pretty OK to me; a few small comments that you can adjust if you want.