Merge lp:~cloud-init-dev/cloud-init/trunk into lp:~jbauer/cloud-init/salt

Proposed by Jeff Bauer
Status: Merged
Merged at revision: 561
Proposed branch: lp:~cloud-init-dev/cloud-init/trunk
Merge into: lp:~jbauer/cloud-init/salt
Diff against target: 3061 lines (+2239/-175)
39 files modified
ChangeLog (+39/-9)
cloud-init.py (+31/-1)
cloudinit/CloudConfig/__init__.py (+1/-1)
cloudinit/CloudConfig/cc_apt_pipelining.py (+53/-0)
cloudinit/CloudConfig/cc_ca_certs.py (+3/-1)
cloudinit/CloudConfig/cc_chef.py (+5/-5)
cloudinit/CloudConfig/cc_landscape.py (+5/-0)
cloudinit/CloudConfig/cc_resizefs.py (+31/-12)
cloudinit/CloudConfig/cc_salt_minion.py (+2/-1)
cloudinit/CloudConfig/cc_update_etc_hosts.py (+1/-1)
cloudinit/DataSource.py (+4/-1)
cloudinit/DataSourceCloudStack.py (+92/-0)
cloudinit/DataSourceConfigDrive.py (+231/-0)
cloudinit/DataSourceEc2.py (+2/-84)
cloudinit/DataSourceMAAS.py (+345/-0)
cloudinit/DataSourceNoCloud.py (+75/-2)
cloudinit/DataSourceOVF.py (+1/-1)
cloudinit/SshUtil.py (+2/-0)
cloudinit/UserDataHandler.py (+2/-2)
cloudinit/__init__.py (+44/-9)
cloudinit/netinfo.py (+5/-5)
cloudinit/util.py (+253/-30)
config/cloud.cfg (+3/-1)
debian.trunk/control (+1/-0)
doc/configdrive/README (+118/-0)
doc/examples/cloud-config-chef-oneiric.txt (+90/-0)
doc/examples/cloud-config-chef.txt (+48/-6)
doc/examples/cloud-config-datasources.txt (+18/-0)
doc/examples/cloud-config.txt (+11/-0)
doc/kernel-cmdline.txt (+48/-0)
doc/nocloud/README (+55/-0)
setup.py (+1/-0)
tests/unittests/test__init__.py (+242/-0)
tests/unittests/test_datasource/test_maas.py (+153/-0)
tests/unittests/test_handler/test_handler_ca_certs.py (+4/-0)
tests/unittests/test_userdata.py (+107/-0)
tests/unittests/test_util.py (+18/-2)
tools/Z99-cloud-locale-test.sh (+92/-0)
tools/run-pylint (+3/-1)
To merge this branch: bzr merge lp:~cloud-init-dev/cloud-init/trunk
Reviewer: Scott Moser, status: Pending
Review via email: mp+105094@code.launchpad.net

Commit message

fix launchpad bug #996166, installs wrong salt pkg

Description of the change

Fixes: https://bugs.launchpad.net/cloud-init/+bug/996166

installs wrong package in cc_salt_minion.py

I'm not sure if I've got the bzr merge correct, but it's only a one-line modification:

=== modified file 'cloudinit/CloudConfig/cc_salt_minion.py'
--- cloudinit/CloudConfig/cc_salt_minion.py 2012-05-08 17:18:53 +0000
+++ cloudinit/CloudConfig/cc_salt_minion.py 2012-05-08 17:33:32 +0000
@@ -27,7 +27,7 @@
         return
     salt_cfg = cfg['salt_minion']
     # Start by installing the salt package ...
-    cc.install_packages(("salt",))
+    cc.install_packages(("salt-minion",))
     config_dir = '/etc/salt'
     if not os.path.isdir(config_dir):
         os.makedirs(config_dir)

lp:~cloud-init-dev/cloud-init/trunk updated
558. By Scott Moser

support relative path in AuthorizedKeysFile
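
A minimal sketch of the new behaviour (the actual change is the cloudinit/SshUtil.py hunk further down in the preview diff; the helper name here is illustrative only): a relative AuthorizedKeysFile value from sshd_config is now anchored at the user's home directory instead of being used as-is.

import os

def resolve_authorized_keys(akeys, pw_dir, user):
    # illustrative helper, not part of the change itself
    akeys = akeys.replace("%h", pw_dir)  # sshd_config token: home directory
    akeys = akeys.replace("%u", user)    # sshd_config token: user name
    if not akeys.startswith('/'):
        # relative path: anchor it at the user's home directory
        akeys = os.path.join(pw_dir, akeys)
    return akeys

# e.g. resolve_authorized_keys(".ssh/authorized_keys", "/home/ubuntu", "ubuntu")
# returns "/home/ubuntu/.ssh/authorized_keys"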

559. By Scott Moser

remove usage of subprocess.check_output

In order to work on Python 2.6, replace usage of check_output with util.subp.
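
A sketch of what the replacement looks like, based on the cloudinit/netinfo.py hunk later in this diff; util.subp returns an (stdout, stderr) tuple, while subprocess.check_output only exists on Python 2.7 and later.

import cloudinit.util as util

# before (requires Python 2.7):
#   ifcfg_out = str(subprocess.check_output(["ifconfig", "-a"]))

# after (also works on Python 2.6):
(ifcfg_out, _err) = util.subp(["ifconfig", "-a"])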

560. By Scott Moser

Use --quiet when running apt-get

Use the --quiet switch when running apt-get to get output suitable for
logging, rather than with pretty progress updates designed for interactive
use. This makes the log, as returned by GetConsoleOutput for instance, a
little shorter and easier to read. Some action completion notices are also
missed, but it's pretty clear still as no error output appears before
cloud-init goes on to the next thing.

Per apt-get man page:
  Quiet; produces output suitable for logging, omitting progress indicators.
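
The resulting call in cloudinit/CloudConfig/__init__.py (shown in the preview diff below) builds the command roughly as follows; the wrapper function name here is illustrative only.

import os
import subprocess

def apt_get(subcommand, args=None):
    # run 'apt-get <subcommand> ...' non-interactively with --quiet output,
    # matching the cloudinit/CloudConfig/__init__.py hunk in this diff
    env = os.environ.copy()
    env['DEBIAN_FRONTEND'] = 'noninteractive'
    cmd = ['apt-get', '--option', 'Dpkg::Options::=--force-confold',
           '--assume-yes', '--quiet', subcommand]
    cmd.extend(args or [])
    subprocess.check_call(cmd, env=env)

# e.g. apt_get("install", ["salt-minion"])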

561. By Scott Moser

cc_salt_minion: install package salt-minion rather than salt

Preview Diff

1=== modified file 'ChangeLog'
2--- ChangeLog 2012-01-30 14:24:41 +0000
3+++ ChangeLog 2012-06-13 13:16:19 +0000
4@@ -1,3 +1,6 @@
5+0.6.4:
6+ - support relative path in AuthorizedKeysFile (LP: #970071).
7+ - make apt-get update run with --quiet (suitable for logging) (LP: #1012613)
8 0.6.3:
9 - add sample systemd config files [Garrett Holmstrom]
10 - add Fedora support [Garrent Holstrom] (LP: #883286)
11@@ -8,7 +11,8 @@
12 - support setting of Acquire::HTTP::Proxy via 'apt_proxy'
13 - DataSourceEc2: more resilliant to slow metadata service
14 - config change: 'retries' dropped, 'max_wait' added, timeout increased
15- - close stdin in all cloud-init programs that are launched at boot (LP: #903993)
16+ - close stdin in all cloud-init programs that are launched at boot
17+ (LP: #903993)
18 - revert management of /etc/hosts to 0.6.1 style (LP: #890501, LP: #871966)
19 - write full ssh keys to console for easy machine consumption (LP: #893400)
20 - put INSTANCE_ID environment variable in bootcmd scripts
21@@ -19,9 +23,33 @@
22 in the payload parameter. (LP: #874342)
23 - add test case framework [Mike Milner] (LP: #890851)
24 - fix pylint warnings [Juerg Haefliger] (LP: #914739)
25- - add support for adding and deleting CA Certificates [Mike Milner] (LP: #915232)
26+ - add support for adding and deleting CA Certificates [Mike Milner]
27+ (LP: #915232)
28 - in ci-info lines, use '.' to indicate empty field for easier machine reading
29 - support empty lines in "#include" files (LP: #923043)
30+ - support configuration of salt minions (Jeff Bauer) (LP: #927795)
31+ - DataSourceOVF: only search for OVF data on ISO9660 filesystems (LP: #898373)
32+ - DataSourceConfigDrive: support getting data from openstack config drive
33+ (LP: #857378)
34+ - DataSourceNoCloud: support seed from external disk of ISO or vfat
35+ (LP: #857378)
36+ - DataSourceNoCloud: support inserting /etc/network/interfaces
37+ - DataSourceMaaS: add data source for Ubuntu Machines as a Service (MaaS)
38+ (LP: #942061)
39+ - DataSourceCloudStack: add support for CloudStack datasource [Cosmin Luta]
40+ - add option 'apt_pipelining' to address issue with S3 mirrors
41+ (LP: #948461) [Ben Howard]
42+ - warn on non-multipart, non-handled user-data [Martin Packman]
43+ - run resizefs in the background in order to not block boot (LP: #961226)
44+ - Fix bug in Chef support where validation_key was present in config, but
45+ 'validation_cert' was not (LP: #960547)
46+ - Provide user friendly message when an invalid locale is set
47+ [Ben Howard] (LP: #859814)
48+ - Support reading cloud-config from kernel command line parameter and
49+ populating local file with it, which can then provide data for DataSources
50+ - improve chef examples for working configurations on 11.10 and 12.04
51+ [Lorin Hochstein] (LP: #960564)
52+
53 0.6.2:
54 - fix bug where update was not done unless update was explicitly set.
55 It would not be run if 'upgrade' or packages were set to be installed
56@@ -59,18 +87,20 @@
57 - support multiple staticly configured network devices, as long as
58 all of them come up early (LP: #810044)
59 - Changes to handling user data mean that:
60- * boothooks will now run more than once as they were intended (and as bootcmd
61- commands do)
62+ * boothooks will now run more than once as they were intended (and as
63+ bootcmd commands do)
64 * cloud-config and user-scripts will be updated from user data every boot
65 - Fix issue where 'isatty' would return true for apt-add-repository.
66 apt-add-repository would get stdin which was attached to a terminal
67- (/dev/console) and would thus hang when running during boot. (LP: 831505)
68- This was done by changing all users of util.subp to have None input unless specified
69+ (/dev/console) and would thus hang when running during boot. (LP: 831505)
70+ This was done by changing all users of util.subp to have None input unless
71+ specified
72 - Add some debug info to the console when cloud-init runs.
73- This is useful if debugging, IP and route information is printed to the console.
74+ This is useful if debugging, IP and route information is printed to the
75+ console.
76 - change the mechanism for handling .ssh/authorized_keys, to update entries
77- rather than appending. This ensures that the authorized_keys that are being
78- inserted actually do something (LP: #434076, LP: #833499)
79+ rather than appending. This ensures that the authorized_keys that are
80+ being inserted actually do something (LP: #434076, LP: #833499)
81 - log warning on failure to set hostname (LP: #832175)
82 - upstart/cloud-init-nonet.conf: wait for all network interfaces to be up
83 allow for the possibility of /var/run != /run.
84
85=== modified file 'cloud-init.py'
86--- cloud-init.py 2012-01-18 14:07:33 +0000
87+++ cloud-init.py 2012-06-13 13:16:19 +0000
88@@ -28,6 +28,7 @@
89 import cloudinit.DataSource as ds
90 import cloudinit.netinfo as netinfo
91 import time
92+import traceback
93 import logging
94 import errno
95 import os
96@@ -67,6 +68,30 @@
97 warn("unable to open /proc/uptime\n")
98 uptime = "na"
99
100+ cmdline_msg = None
101+ cmdline_exc = None
102+ if cmd == "start":
103+ target = "%s.d/%s" % (cloudinit.system_config,
104+ "91_kernel_cmdline_url.cfg")
105+ if os.path.exists(target):
106+ cmdline_msg = "cmdline: %s existed" % target
107+ else:
108+ cmdline = util.get_cmdline()
109+ try:
110+ (key, url, content) = cloudinit.get_cmdline_url(
111+ cmdline=cmdline)
112+ if key and content:
113+ util.write_file(target, content, mode=0600)
114+ cmdline_msg = ("cmdline: wrote %s from %s, %s" %
115+ (target, key, url))
116+ elif key:
117+ cmdline_msg = ("cmdline: %s, %s had no cloud-config" %
118+ (key, url))
119+ except Exception:
120+ cmdline_exc = ("cmdline: '%s' raised exception\n%s" %
121+ (cmdline, traceback.format_exc()))
122+ warn(cmdline_exc)
123+
124 try:
125 cfg = cloudinit.get_base_cfg(cfg_path)
126 except Exception as e:
127@@ -86,6 +111,11 @@
128 cloudinit.logging_set_from_cfg(cfg)
129 log = logging.getLogger()
130
131+ if cmdline_exc:
132+ log.debug(cmdline_exc)
133+ elif cmdline_msg:
134+ log.debug(cmdline_msg)
135+
136 try:
137 cloudinit.initfs()
138 except Exception as e:
139@@ -136,7 +166,7 @@
140 cloud.get_data_source()
141 except cloudinit.DataSourceNotFoundException as e:
142 sys.stderr.write("no instance data found in %s\n" % cmd)
143- sys.exit(1)
144+ sys.exit(0)
145
146 # set this as the current instance
147 cloud.set_cur_instance()
148
149=== modified file 'cloudinit/CloudConfig/__init__.py'
150--- cloudinit/CloudConfig/__init__.py 2012-01-18 14:07:33 +0000
151+++ cloudinit/CloudConfig/__init__.py 2012-06-13 13:16:19 +0000
152@@ -260,7 +260,7 @@
153 e = os.environ.copy()
154 e['DEBIAN_FRONTEND'] = 'noninteractive'
155 cmd = ['apt-get', '--option', 'Dpkg::Options::=--force-confold',
156- '--assume-yes', tlc]
157+ '--assume-yes', '--quiet', tlc]
158 cmd.extend(args)
159 subprocess.check_call(cmd, env=e)
160
161
162=== added file 'cloudinit/CloudConfig/cc_apt_pipelining.py'
163--- cloudinit/CloudConfig/cc_apt_pipelining.py 1970-01-01 00:00:00 +0000
164+++ cloudinit/CloudConfig/cc_apt_pipelining.py 2012-06-13 13:16:19 +0000
165@@ -0,0 +1,53 @@
166+# vi: ts=4 expandtab
167+#
168+# Copyright (C) 2011 Canonical Ltd.
169+#
170+# Author: Ben Howard <ben.howard@canonical.com>
171+#
172+# This program is free software: you can redistribute it and/or modify
173+# it under the terms of the GNU General Public License version 3, as
174+# published by the Free Software Foundation.
175+#
176+# This program is distributed in the hope that it will be useful,
177+# but WITHOUT ANY WARRANTY; without even the implied warranty of
178+# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
179+# GNU General Public License for more details.
180+#
181+# You should have received a copy of the GNU General Public License
182+# along with this program. If not, see <http://www.gnu.org/licenses/>.
183+
184+import cloudinit.util as util
185+from cloudinit.CloudConfig import per_instance
186+
187+frequency = per_instance
188+default_file = "/etc/apt/apt.conf.d/90cloud-init-pipelining"
189+
190+
191+def handle(_name, cfg, _cloud, log, _args):
192+
193+ apt_pipe_value = util.get_cfg_option_str(cfg, "apt_pipelining", False)
194+ apt_pipe_value = str(apt_pipe_value).lower()
195+
196+ if apt_pipe_value == "false":
197+ write_apt_snippet("0", log)
198+
199+ elif apt_pipe_value in ("none", "unchanged", "os"):
200+ return
201+
202+ elif apt_pipe_value in str(range(0, 6)):
203+ write_apt_snippet(apt_pipe_value, log)
204+
205+ else:
206+ log.warn("Invalid option for apt_pipeling: %s" % apt_pipe_value)
207+
208+
209+def write_apt_snippet(setting, log, f_name=default_file):
210+ """ Writes f_name with apt pipeline depth 'setting' """
211+
212+ acquire_pipeline_depth = 'Acquire::http::Pipeline-Depth "%s";\n'
213+ file_contents = ("//Written by cloud-init per 'apt_pipelining'\n"
214+ + (acquire_pipeline_depth % setting))
215+
216+ util.write_file(f_name, file_contents)
217+
218+ log.debug("Wrote %s with APT pipeline setting" % f_name)
219
220=== modified file 'cloudinit/CloudConfig/cc_ca_certs.py'
221--- cloudinit/CloudConfig/cc_ca_certs.py 2012-01-17 21:38:01 +0000
222+++ cloudinit/CloudConfig/cc_ca_certs.py 2012-06-13 13:16:19 +0000
223@@ -16,7 +16,7 @@
224 import os
225 from subprocess import check_call
226 from cloudinit.util import (write_file, get_cfg_option_list_or_str,
227- delete_dir_contents)
228+ delete_dir_contents, subp)
229
230 CA_CERT_PATH = "/usr/share/ca-certificates/"
231 CA_CERT_FILENAME = "cloud-init-ca-certs.crt"
232@@ -54,6 +54,8 @@
233 delete_dir_contents(CA_CERT_PATH)
234 delete_dir_contents(CA_CERT_SYSTEM_PATH)
235 write_file(CA_CERT_CONFIG, "", mode=0644)
236+ debconf_sel = "ca-certificates ca-certificates/trust_new_crts select no"
237+ subp(('debconf-set-selections', '-'), debconf_sel)
238
239
240 def handle(_name, cfg, _cloud, log, _args):
241
242=== modified file 'cloudinit/CloudConfig/cc_chef.py'
243--- cloudinit/CloudConfig/cc_chef.py 2012-01-18 14:07:33 +0000
244+++ cloudinit/CloudConfig/cc_chef.py 2012-06-13 13:16:19 +0000
245@@ -40,11 +40,11 @@
246 # set the validation key based on the presence of either 'validation_key'
247 # or 'validation_cert'. In the case where both exist, 'validation_key'
248 # takes precedence
249- if ('validation_key' in chef_cfg or 'validation_cert' in chef_cfg):
250- validation_key = util.get_cfg_option_str(chef_cfg, 'validation_key',
251- chef_cfg['validation_cert'])
252- with open('/etc/chef/validation.pem', 'w') as validation_key_fh:
253- validation_key_fh.write(validation_key)
254+ for key in ('validation_key', 'validation_cert'):
255+ if key in chef_cfg and chef_cfg[key]:
256+ with open('/etc/chef/validation.pem', 'w') as validation_key_fh:
257+ validation_key_fh.write(chef_cfg[key])
258+ break
259
260 # create the chef config from template
261 util.render_to_file('chef_client.rb', '/etc/chef/client.rb',
262
263=== modified file 'cloudinit/CloudConfig/cc_landscape.py'
264--- cloudinit/CloudConfig/cc_landscape.py 2012-01-18 14:07:33 +0000
265+++ cloudinit/CloudConfig/cc_landscape.py 2012-06-13 13:16:19 +0000
266@@ -18,6 +18,8 @@
267 # You should have received a copy of the GNU General Public License
268 # along with this program. If not, see <http://www.gnu.org/licenses/>.
269
270+import os
271+import os.path
272 from cloudinit.CloudConfig import per_instance
273 from configobj import ConfigObj
274
275@@ -50,6 +52,9 @@
276
277 merged = mergeTogether([lsc_builtincfg, lsc_client_cfg_file, ls_cloudcfg])
278
279+ if not os.path.isdir(os.path.dirname(lsc_client_cfg_file)):
280+ os.makedirs(os.path.dirname(lsc_client_cfg_file))
281+
282 with open(lsc_client_cfg_file, "w") as fp:
283 merged.write(fp)
284
285
286=== modified file 'cloudinit/CloudConfig/cc_resizefs.py'
287--- cloudinit/CloudConfig/cc_resizefs.py 2012-01-18 14:07:33 +0000
288+++ cloudinit/CloudConfig/cc_resizefs.py 2012-06-13 13:16:19 +0000
289@@ -22,6 +22,8 @@
290 import subprocess
291 import os
292 import stat
293+import sys
294+import time
295 import tempfile
296 from cloudinit.CloudConfig import per_always
297
298@@ -34,23 +36,22 @@
299 if str(args[0]).lower() in ['true', '1', 'on', 'yes']:
300 resize_root = True
301 else:
302- resize_root = util.get_cfg_option_bool(cfg, "resize_rootfs", True)
303+ resize_root = util.get_cfg_option_str(cfg, "resize_rootfs", True)
304
305- if not resize_root:
306+ if str(resize_root).lower() in ['false', '0']:
307 return
308
309- # this really only uses the filename from mktemp, then we mknod into it
310- (fd, devpth) = tempfile.mkstemp()
311- os.unlink(devpth)
312- os.close(fd)
313+ # we use mktemp rather than mkstemp because early in boot nothing
314+ # else should be able to race us for this, and we need to mknod.
315+ devpth = tempfile.mktemp(prefix="cloudinit.resizefs.", dir="/run")
316
317 try:
318 st_dev = os.stat("/").st_dev
319 dev = os.makedev(os.major(st_dev), os.minor(st_dev))
320 os.mknod(devpth, 0400 | stat.S_IFBLK, dev)
321 except:
322- if util.islxc():
323- log.debug("inside lxc, ignoring mknod failure in resizefs")
324+ if util.is_container():
325+ log.debug("inside container, ignoring mknod failure in resizefs")
326 return
327 log.warn("Failed to make device node to resize /")
328 raise
329@@ -65,9 +66,6 @@
330 os.unlink(devpth)
331 raise
332
333- log.debug("resizing root filesystem (type=%s, maj=%i, min=%i)" %
334- (str(fstype).rstrip("\n"), os.major(st_dev), os.minor(st_dev)))
335-
336 if str(fstype).startswith("ext"):
337 resize_cmd = ['resize2fs', devpth]
338 elif fstype == "xfs":
339@@ -77,7 +75,28 @@
340 log.debug("not resizing unknown filesystem %s" % fstype)
341 return
342
343+ if resize_root == "noblock":
344+ fid = os.fork()
345+ if fid == 0:
346+ try:
347+ do_resize(resize_cmd, devpth, log)
348+ os._exit(0) # pylint: disable=W0212
349+ except Exception as exc:
350+ sys.stderr.write("Failed: %s" % exc)
351+ os._exit(1) # pylint: disable=W0212
352+ else:
353+ do_resize(resize_cmd, devpth, log)
354+
355+ log.debug("resizing root filesystem (type=%s, maj=%i, min=%i, val=%s)" %
356+ (str(fstype).rstrip("\n"), os.major(st_dev), os.minor(st_dev),
357+ resize_root))
358+
359+ return
360+
361+
362+def do_resize(resize_cmd, devpth, log):
363 try:
364+ start = time.time()
365 util.subp(resize_cmd)
366 except subprocess.CalledProcessError as e:
367 log.warn("Failed to resize filesystem (%s)" % resize_cmd)
368@@ -86,4 +105,4 @@
369 raise
370
371 os.unlink(devpth)
372- return
373+ log.debug("resize took %s seconds" % (time.time() - start))
374
375=== modified file 'cloudinit/CloudConfig/cc_salt_minion.py'
376--- cloudinit/CloudConfig/cc_salt_minion.py 2012-02-11 15:27:14 +0000
377+++ cloudinit/CloudConfig/cc_salt_minion.py 2012-06-13 13:16:19 +0000
378@@ -20,7 +20,8 @@
379 import cloudinit.CloudConfig as cc
380 import yaml
381
382-def handle(_name, cfg, cloud, log, _args):
383+
384+def handle(_name, cfg, _cloud, _log, _args):
385 # If there isn't a salt key in the configuration don't do anything
386 if 'salt_minion' not in cfg:
387 return
388
389=== modified file 'cloudinit/CloudConfig/cc_update_etc_hosts.py'
390--- cloudinit/CloudConfig/cc_update_etc_hosts.py 2012-01-18 14:07:33 +0000
391+++ cloudinit/CloudConfig/cc_update_etc_hosts.py 2012-06-13 13:16:19 +0000
392@@ -28,7 +28,7 @@
393 def handle(_name, cfg, cloud, log, _args):
394 (hostname, fqdn) = util.get_hostname_fqdn(cfg, cloud)
395
396- manage_hosts = util.get_cfg_option_bool(cfg, "manage_etc_hosts", False)
397+ manage_hosts = util.get_cfg_option_str(cfg, "manage_etc_hosts", False)
398 if manage_hosts in ("True", "true", True, "template"):
399 # render from template file
400 try:
401
402=== modified file 'cloudinit/DataSource.py'
403--- cloudinit/DataSource.py 2012-01-18 14:07:33 +0000
404+++ cloudinit/DataSource.py 2012-06-13 13:16:19 +0000
405@@ -70,7 +70,10 @@
406 return([])
407
408 if isinstance(self.metadata['public-keys'], str):
409- return([self.metadata['public-keys'], ])
410+ return(str(self.metadata['public-keys']).splitlines())
411+
412+ if isinstance(self.metadata['public-keys'], list):
413+ return(self.metadata['public-keys'])
414
415 for _keyname, klist in self.metadata['public-keys'].items():
416 # lp:506332 uec metadata service responds with
417
418=== added file 'cloudinit/DataSourceCloudStack.py'
419--- cloudinit/DataSourceCloudStack.py 1970-01-01 00:00:00 +0000
420+++ cloudinit/DataSourceCloudStack.py 2012-06-13 13:16:19 +0000
421@@ -0,0 +1,92 @@
422+# vi: ts=4 expandtab
423+#
424+# Copyright (C) 2012 Canonical Ltd.
425+# Copyright (C) 2012 Cosmin Luta
426+#
427+# Author: Cosmin Luta <q4break@gmail.com>
428+# Author: Scott Moser <scott.moser@canonical.com>
429+#
430+# This program is free software: you can redistribute it and/or modify
431+# it under the terms of the GNU General Public License version 3, as
432+# published by the Free Software Foundation.
433+#
434+# This program is distributed in the hope that it will be useful,
435+# but WITHOUT ANY WARRANTY; without even the implied warranty of
436+# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
437+# GNU General Public License for more details.
438+#
439+# You should have received a copy of the GNU General Public License
440+# along with this program. If not, see <http://www.gnu.org/licenses/>.
441+
442+import cloudinit.DataSource as DataSource
443+
444+from cloudinit import seeddir as base_seeddir
445+from cloudinit import log
446+import cloudinit.util as util
447+from socket import inet_ntoa
448+import time
449+import boto.utils as boto_utils
450+from struct import pack
451+
452+
453+class DataSourceCloudStack(DataSource.DataSource):
454+ api_ver = 'latest'
455+ seeddir = base_seeddir + '/cs'
456+ metadata_address = None
457+
458+ def __init__(self, sys_cfg=None):
459+ DataSource.DataSource.__init__(self, sys_cfg)
460+ # Cloudstack has its metadata/userdata URLs located at
461+ # http://<default-gateway-ip>/latest/
462+ self.metadata_address = "http://%s/" % self.get_default_gateway()
463+
464+ def get_default_gateway(self):
465+ """ Returns the default gateway ip address in the dotted format
466+ """
467+ with open("/proc/net/route", "r") as f:
468+ for line in f.readlines():
469+ items = line.split("\t")
470+ if items[1] == "00000000":
471+ # found the default route, get the gateway
472+ gw = inet_ntoa(pack("<L", int(items[2], 16)))
473+ log.debug("found default route, gateway is %s" % gw)
474+ return gw
475+
476+ def __str__(self):
477+ return "DataSourceCloudStack"
478+
479+ def get_data(self):
480+ seedret = {}
481+ if util.read_optional_seed(seedret, base=self.seeddir + "/"):
482+ self.userdata_raw = seedret['user-data']
483+ self.metadata = seedret['meta-data']
484+ log.debug("using seeded cs data in %s" % self.seeddir)
485+ return True
486+
487+ try:
488+ start = time.time()
489+ self.userdata_raw = boto_utils.get_instance_userdata(self.api_ver,
490+ None, self.metadata_address)
491+ self.metadata = boto_utils.get_instance_metadata(self.api_ver,
492+ self.metadata_address)
493+ log.debug("crawl of metadata service took %ds" %
494+ (time.time() - start))
495+ return True
496+ except Exception as e:
497+ log.exception(e)
498+ return False
499+
500+ def get_instance_id(self):
501+ return self.metadata['instance-id']
502+
503+ def get_availability_zone(self):
504+ return self.metadata['availability-zone']
505+
506+datasources = [
507+ (DataSourceCloudStack, (DataSource.DEP_FILESYSTEM, DataSource.DEP_NETWORK)),
508+]
509+
510+
511+# return a list of data sources that match this set of dependencies
512+def get_datasource_list(depends):
513+ return DataSource.list_from_depends(depends, datasources)
514
515=== added file 'cloudinit/DataSourceConfigDrive.py'
516--- cloudinit/DataSourceConfigDrive.py 1970-01-01 00:00:00 +0000
517+++ cloudinit/DataSourceConfigDrive.py 2012-06-13 13:16:19 +0000
518@@ -0,0 +1,231 @@
519+# Copyright (C) 2012 Canonical Ltd.
520+#
521+# Author: Scott Moser <scott.moser@canonical.com>
522+#
523+# This program is free software: you can redistribute it and/or modify
524+# it under the terms of the GNU General Public License version 3, as
525+# published by the Free Software Foundation.
526+#
527+# This program is distributed in the hope that it will be useful,
528+# but WITHOUT ANY WARRANTY; without even the implied warranty of
529+# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
530+# GNU General Public License for more details.
531+#
532+# You should have received a copy of the GNU General Public License
533+# along with this program. If not, see <http://www.gnu.org/licenses/>.
534+
535+import cloudinit.DataSource as DataSource
536+
537+from cloudinit import seeddir as base_seeddir
538+from cloudinit import log
539+import cloudinit.util as util
540+import os.path
541+import os
542+import json
543+import subprocess
544+
545+DEFAULT_IID = "iid-dsconfigdrive"
546+
547+
548+class DataSourceConfigDrive(DataSource.DataSource):
549+ seed = None
550+ seeddir = base_seeddir + '/config_drive'
551+ cfg = {}
552+ userdata_raw = None
553+ metadata = None
554+ dsmode = "local"
555+
556+ def __str__(self):
557+ mstr = "DataSourceConfigDrive[%s]" % self.dsmode
558+ mstr = mstr + " [seed=%s]" % self.seed
559+ return(mstr)
560+
561+ def get_data(self):
562+ found = None
563+ md = {}
564+ ud = ""
565+
566+ defaults = {"instance-id": DEFAULT_IID, "dsmode": "pass"}
567+
568+ if os.path.isdir(self.seeddir):
569+ try:
570+ (md, ud) = read_config_drive_dir(self.seeddir)
571+ found = self.seeddir
572+ except nonConfigDriveDir:
573+ pass
574+
575+ if not found:
576+ dev = cfg_drive_device()
577+ if dev:
578+ try:
579+ (md, ud) = util.mount_callback_umount(dev,
580+ read_config_drive_dir)
581+ found = dev
582+ except (nonConfigDriveDir, util.mountFailedError):
583+ pass
584+
585+ if not found:
586+ return False
587+
588+ if 'dsconfig' in md:
589+ self.cfg = md['dscfg']
590+
591+ md = util.mergedict(md, defaults)
592+
593+ # update interfaces and ifup only on the local datasource
594+ # this way the DataSourceConfigDriveNet doesn't do it also.
595+ if 'network-interfaces' in md and self.dsmode == "local":
596+ if md['dsmode'] == "pass":
597+ log.info("updating network interfaces from configdrive")
598+ else:
599+ log.debug("updating network interfaces from configdrive")
600+
601+ util.write_file("/etc/network/interfaces",
602+ md['network-interfaces'])
603+ try:
604+ (out, err) = util.subp(['ifup', '--all'])
605+ if len(out) or len(err):
606+ log.warn("ifup --all had stderr: %s" % err)
607+
608+ except subprocess.CalledProcessError as exc:
609+ log.warn("ifup --all failed: %s" % (exc.output[1]))
610+
611+ self.seed = found
612+ self.metadata = md
613+ self.userdata_raw = ud
614+
615+ if md['dsmode'] == self.dsmode:
616+ return True
617+
618+ log.debug("%s: not claiming datasource, dsmode=%s" %
619+ (self, md['dsmode']))
620+ return False
621+
622+ def get_public_ssh_keys(self):
623+ if not 'public-keys' in self.metadata:
624+ return([])
625+ return(self.metadata['public-keys'])
626+
627+ # the data sources' config_obj is a cloud-config formated
628+ # object that came to it from ways other than cloud-config
629+ # because cloud-config content would be handled elsewhere
630+ def get_config_obj(self):
631+ return(self.cfg)
632+
633+
634+class DataSourceConfigDriveNet(DataSourceConfigDrive):
635+ dsmode = "net"
636+
637+
638+class nonConfigDriveDir(Exception):
639+ pass
640+
641+
642+def cfg_drive_device():
643+ """ get the config drive device. return a string like '/dev/vdb'
644+ or None (if there is no non-root device attached). This does not
645+ check the contents, only reports that if there *were* a config_drive
646+ attached, it would be this device.
647+ per config_drive documentation, this is
648+ "associated as the last available disk on the instance"
649+ """
650+
651+ if 'CLOUD_INIT_CONFIG_DRIVE_DEVICE' in os.environ:
652+ return(os.environ['CLOUD_INIT_CONFIG_DRIVE_DEVICE'])
653+
654+ # we are looking for a raw block device (sda, not sda1) with a vfat
655+ # filesystem on it.
656+
657+ letters = "abcdefghijklmnopqrstuvwxyz"
658+ devs = util.find_devs_with("TYPE=vfat")
659+
660+ # filter out anything not ending in a letter (ignore partitions)
661+ devs = [f for f in devs if f[-1] in letters]
662+
663+ # sort them in reverse so "last" device is first
664+ devs.sort(reverse=True)
665+
666+ if len(devs):
667+ return(devs[0])
668+
669+ return(None)
670+
671+
672+def read_config_drive_dir(source_dir):
673+ """
674+ read_config_drive_dir(source_dir):
675+ read source_dir, and return a tuple with metadata dict and user-data
676+ string populated. If not a valid dir, raise a nonConfigDriveDir
677+ """
678+ md = {}
679+ ud = ""
680+
681+ flist = ("etc/network/interfaces", "root/.ssh/authorized_keys", "meta.js")
682+ found = [f for f in flist if os.path.isfile("%s/%s" % (source_dir, f))]
683+ keydata = ""
684+
685+ if len(found) == 0:
686+ raise nonConfigDriveDir("%s: %s" % (source_dir, "no files found"))
687+
688+ if "etc/network/interfaces" in found:
689+ with open("%s/%s" % (source_dir, "/etc/network/interfaces")) as fp:
690+ md['network-interfaces'] = fp.read()
691+
692+ if "root/.ssh/authorized_keys" in found:
693+ with open("%s/%s" % (source_dir, "root/.ssh/authorized_keys")) as fp:
694+ keydata = fp.read()
695+
696+ meta_js = {}
697+
698+ if "meta.js" in found:
699+ content = ''
700+ with open("%s/%s" % (source_dir, "meta.js")) as fp:
701+ content = fp.read()
702+ md['meta_js'] = content
703+ try:
704+ meta_js = json.loads(content)
705+ except ValueError:
706+ raise nonConfigDriveDir("%s: %s" %
707+ (source_dir, "invalid json in meta.js"))
708+
709+ keydata = meta_js.get('public-keys', keydata)
710+
711+ if keydata:
712+ lines = keydata.splitlines()
713+ md['public-keys'] = [l for l in lines
714+ if len(l) and not l.startswith("#")]
715+
716+ for copy in ('dsmode', 'instance-id', 'dscfg'):
717+ if copy in meta_js:
718+ md[copy] = meta_js[copy]
719+
720+ if 'user-data' in meta_js:
721+ ud = meta_js['user-data']
722+
723+ return(md, ud)
724+
725+datasources = (
726+ (DataSourceConfigDrive, (DataSource.DEP_FILESYSTEM, )),
727+ (DataSourceConfigDriveNet,
728+ (DataSource.DEP_FILESYSTEM, DataSource.DEP_NETWORK)),
729+)
730+
731+
732+# return a list of data sources that match this set of dependencies
733+def get_datasource_list(depends):
734+ return(DataSource.list_from_depends(depends, datasources))
735+
736+if __name__ == "__main__":
737+ def main():
738+ import sys
739+ import pprint
740+ print cfg_drive_device()
741+ (md, ud) = read_config_drive_dir(sys.argv[1])
742+ print "=== md ==="
743+ pprint.pprint(md)
744+ print "=== ud ==="
745+ print(ud)
746+
747+ main()
748+
749+# vi: ts=4 expandtab
750
751=== modified file 'cloudinit/DataSourceEc2.py'
752--- cloudinit/DataSourceEc2.py 2012-01-18 14:07:33 +0000
753+++ cloudinit/DataSourceEc2.py 2012-06-13 13:16:19 +0000
754@@ -24,7 +24,6 @@
755 from cloudinit import log
756 import cloudinit.util as util
757 import socket
758-import urllib2
759 import time
760 import boto.utils as boto_utils
761 import os.path
762@@ -134,8 +133,8 @@
763 url2base[cur] = url
764
765 starttime = time.time()
766- url = wait_for_metadata_service(urls=urls, max_wait=max_wait,
767- timeout=timeout, status_cb=log.warn)
768+ url = util.wait_for_url(urls=urls, max_wait=max_wait,
769+ timeout=timeout, status_cb=log.warn)
770
771 if url:
772 log.debug("Using metadata source: '%s'" % url2base[url])
773@@ -208,87 +207,6 @@
774 return False
775
776
777-def wait_for_metadata_service(urls, max_wait=None, timeout=None,
778- status_cb=None):
779- """
780- urls: a list of urls to try
781- max_wait: roughly the maximum time to wait before giving up
782- The max time is *actually* len(urls)*timeout as each url will
783- be tried once and given the timeout provided.
784- timeout: the timeout provided to urllib2.urlopen
785- status_cb: call method with string message when a url is not available
786-
787- the idea of this routine is to wait for the EC2 metdata service to
788- come up. On both Eucalyptus and EC2 we have seen the case where
789- the instance hit the MD before the MD service was up. EC2 seems
790- to have permenantely fixed this, though.
791-
792- In openstack, the metadata service might be painfully slow, and
793- unable to avoid hitting a timeout of even up to 10 seconds or more
794- (LP: #894279) for a simple GET.
795-
796- Offset those needs with the need to not hang forever (and block boot)
797- on a system where cloud-init is configured to look for EC2 Metadata
798- service but is not going to find one. It is possible that the instance
799- data host (169.254.169.254) may be firewalled off Entirely for a sytem,
800- meaning that the connection will block forever unless a timeout is set.
801- """
802- starttime = time.time()
803-
804- sleeptime = 1
805-
806- def nullstatus_cb(msg):
807- return
808-
809- if status_cb == None:
810- status_cb = nullstatus_cb
811-
812- def timeup(max_wait, starttime):
813- return((max_wait <= 0 or max_wait == None) or
814- (time.time() - starttime > max_wait))
815-
816- loop_n = 0
817- while True:
818- sleeptime = int(loop_n / 5) + 1
819- for url in urls:
820- now = time.time()
821- if loop_n != 0:
822- if timeup(max_wait, starttime):
823- break
824- if timeout and (now + timeout > (starttime + max_wait)):
825- # shorten timeout to not run way over max_time
826- timeout = int((starttime + max_wait) - now)
827-
828- reason = ""
829- try:
830- req = urllib2.Request(url)
831- resp = urllib2.urlopen(req, timeout=timeout)
832- if resp.read() != "":
833- return url
834- reason = "empty data [%s]" % resp.getcode()
835- except urllib2.HTTPError as e:
836- reason = "http error [%s]" % e.code
837- except urllib2.URLError as e:
838- reason = "url error [%s]" % e.reason
839- except socket.timeout as e:
840- reason = "socket timeout [%s]" % e
841- except Exception as e:
842- reason = "unexpected error [%s]" % e
843-
844- if log:
845- status_cb("'%s' failed [%s/%ss]: %s" %
846- (url, int(time.time() - starttime), max_wait,
847- reason))
848-
849- if timeup(max_wait, starttime):
850- break
851-
852- loop_n = loop_n + 1
853- time.sleep(sleeptime)
854-
855- return False
856-
857-
858 datasources = [
859 (DataSourceEc2, (DataSource.DEP_FILESYSTEM, DataSource.DEP_NETWORK)),
860 ]
861
862=== added file 'cloudinit/DataSourceMAAS.py'
863--- cloudinit/DataSourceMAAS.py 1970-01-01 00:00:00 +0000
864+++ cloudinit/DataSourceMAAS.py 2012-06-13 13:16:19 +0000
865@@ -0,0 +1,345 @@
866+# vi: ts=4 expandtab
867+#
868+# Copyright (C) 2012 Canonical Ltd.
869+#
870+# Author: Scott Moser <scott.moser@canonical.com>
871+#
872+# This program is free software: you can redistribute it and/or modify
873+# it under the terms of the GNU General Public License version 3, as
874+# published by the Free Software Foundation.
875+#
876+# This program is distributed in the hope that it will be useful,
877+# but WITHOUT ANY WARRANTY; without even the implied warranty of
878+# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
879+# GNU General Public License for more details.
880+#
881+# You should have received a copy of the GNU General Public License
882+# along with this program. If not, see <http://www.gnu.org/licenses/>.
883+
884+import cloudinit.DataSource as DataSource
885+
886+from cloudinit import seeddir as base_seeddir
887+from cloudinit import log
888+import cloudinit.util as util
889+import errno
890+import oauth.oauth as oauth
891+import os.path
892+import urllib2
893+import time
894+
895+
896+MD_VERSION = "2012-03-01"
897+
898+
899+class DataSourceMAAS(DataSource.DataSource):
900+ """
901+ DataSourceMAAS reads instance information from MAAS.
902+ Given a config metadata_url, and oauth tokens, it expects to find
903+ files under the root named:
904+ instance-id
905+ user-data
906+ hostname
907+ """
908+ seeddir = base_seeddir + '/maas'
909+ baseurl = None
910+
911+ def __str__(self):
912+ return("DataSourceMAAS[%s]" % self.baseurl)
913+
914+ def get_data(self):
915+ mcfg = self.ds_cfg
916+
917+ try:
918+ (userdata, metadata) = read_maas_seed_dir(self.seeddir)
919+ self.userdata_raw = userdata
920+ self.metadata = metadata
921+ self.baseurl = self.seeddir
922+ return True
923+ except MAASSeedDirNone:
924+ pass
925+ except MAASSeedDirMalformed as exc:
926+ log.warn("%s was malformed: %s\n" % (self.seeddir, exc))
927+ raise
928+
929+ try:
930+ # if there is no metadata_url, then we're not configured
931+ url = mcfg.get('metadata_url', None)
932+ if url == None:
933+ return False
934+
935+ if not self.wait_for_metadata_service(url):
936+ return False
937+
938+ self.baseurl = url
939+
940+ (userdata, metadata) = read_maas_seed_url(self.baseurl,
941+ self.md_headers)
942+ self.userdata_raw = userdata
943+ self.metadata = metadata
944+ return True
945+ except Exception:
946+ util.logexc(log)
947+ return False
948+
949+ def md_headers(self, url):
950+ mcfg = self.ds_cfg
951+
952+ # if we are missing token_key, token_secret or consumer_key
953+ # then just do non-authed requests
954+ for required in ('token_key', 'token_secret', 'consumer_key'):
955+ if required not in mcfg:
956+ return({})
957+
958+ consumer_secret = mcfg.get('consumer_secret', "")
959+
960+ return(oauth_headers(url=url, consumer_key=mcfg['consumer_key'],
961+ token_key=mcfg['token_key'], token_secret=mcfg['token_secret'],
962+ consumer_secret=consumer_secret))
963+
964+ def wait_for_metadata_service(self, url):
965+ mcfg = self.ds_cfg
966+
967+ max_wait = 120
968+ try:
969+ max_wait = int(mcfg.get("max_wait", max_wait))
970+ except Exception:
971+ util.logexc(log)
972+ log.warn("Failed to get max wait. using %s" % max_wait)
973+
974+ if max_wait == 0:
975+ return False
976+
977+ timeout = 50
978+ try:
979+ timeout = int(mcfg.get("timeout", timeout))
980+ except Exception:
981+ util.logexc(log)
982+ log.warn("Failed to get timeout, using %s" % timeout)
983+
984+ starttime = time.time()
985+ check_url = "%s/%s/meta-data/instance-id" % (url, MD_VERSION)
986+ url = util.wait_for_url(urls=[check_url], max_wait=max_wait,
987+ timeout=timeout, status_cb=log.warn,
988+ headers_cb=self.md_headers)
989+
990+ if url:
991+ log.debug("Using metadata source: '%s'" % url)
992+ else:
993+ log.critical("giving up on md after %i seconds\n" %
994+ int(time.time() - starttime))
995+
996+ return (bool(url))
997+
998+
999+def read_maas_seed_dir(seed_d):
1000+ """
1001+ Return user-data and metadata for a maas seed dir in seed_d.
1002+ Expected format of seed_d are the following files:
1003+ * instance-id
1004+ * local-hostname
1005+ * user-data
1006+ """
1007+ files = ('local-hostname', 'instance-id', 'user-data', 'public-keys')
1008+ md = {}
1009+
1010+ if not os.path.isdir(seed_d):
1011+ raise MAASSeedDirNone("%s: not a directory")
1012+
1013+ for fname in files:
1014+ try:
1015+ with open(os.path.join(seed_d, fname)) as fp:
1016+ md[fname] = fp.read()
1017+ fp.close()
1018+ except IOError as e:
1019+ if e.errno != errno.ENOENT:
1020+ raise
1021+
1022+ return(check_seed_contents(md, seed_d))
1023+
1024+
1025+def read_maas_seed_url(seed_url, header_cb=None, timeout=None,
1026+ version=MD_VERSION):
1027+ """
1028+ Read the maas datasource at seed_url.
1029+ header_cb is a method that should return a headers dictionary that will
1030+ be given to urllib2.Request()
1031+
1032+ Expected format of seed_url is are the following files:
1033+ * <seed_url>/<version>/meta-data/instance-id
1034+ * <seed_url>/<version>/meta-data/local-hostname
1035+ * <seed_url>/<version>/user-data
1036+ """
1037+ files = ('meta-data/local-hostname',
1038+ 'meta-data/instance-id',
1039+ 'meta-data/public-keys',
1040+ 'user-data')
1041+
1042+ base_url = "%s/%s" % (seed_url, version)
1043+ md = {}
1044+ for fname in files:
1045+ url = "%s/%s" % (base_url, fname)
1046+ if header_cb:
1047+ headers = header_cb(url)
1048+ else:
1049+ headers = {}
1050+
1051+ try:
1052+ req = urllib2.Request(url, data=None, headers=headers)
1053+ resp = urllib2.urlopen(req, timeout=timeout)
1054+ md[os.path.basename(fname)] = resp.read()
1055+ except urllib2.HTTPError as e:
1056+ if e.code != 404:
1057+ raise
1058+
1059+ return(check_seed_contents(md, seed_url))
1060+
1061+
1062+def check_seed_contents(content, seed):
1063+ """Validate if content is Is the content a dict that is valid as a
1064+ return for a datasource.
1065+ Either return a (userdata, metadata) tuple or
1066+ Raise MAASSeedDirMalformed or MAASSeedDirNone
1067+ """
1068+ md_required = ('instance-id', 'local-hostname')
1069+ found = content.keys()
1070+
1071+ if len(content) == 0:
1072+ raise MAASSeedDirNone("%s: no data files found" % seed)
1073+
1074+ missing = [k for k in md_required if k not in found]
1075+ if len(missing):
1076+ raise MAASSeedDirMalformed("%s: missing files %s" % (seed, missing))
1077+
1078+ userdata = content.get('user-data', "")
1079+ md = {}
1080+ for (key, val) in content.iteritems():
1081+ if key == 'user-data':
1082+ continue
1083+ md[key] = val
1084+
1085+ return(userdata, md)
1086+
1087+
1088+def oauth_headers(url, consumer_key, token_key, token_secret, consumer_secret):
1089+ consumer = oauth.OAuthConsumer(consumer_key, consumer_secret)
1090+ token = oauth.OAuthToken(token_key, token_secret)
1091+ params = {
1092+ 'oauth_version': "1.0",
1093+ 'oauth_nonce': oauth.generate_nonce(),
1094+ 'oauth_timestamp': int(time.time()),
1095+ 'oauth_token': token.key,
1096+ 'oauth_consumer_key': consumer.key,
1097+ }
1098+ req = oauth.OAuthRequest(http_url=url, parameters=params)
1099+ req.sign_request(oauth.OAuthSignatureMethod_PLAINTEXT(),
1100+ consumer, token)
1101+ return(req.to_header())
1102+
1103+
1104+class MAASSeedDirNone(Exception):
1105+ pass
1106+
1107+
1108+class MAASSeedDirMalformed(Exception):
1109+ pass
1110+
1111+
1112+datasources = [
1113+ (DataSourceMAAS, (DataSource.DEP_FILESYSTEM, DataSource.DEP_NETWORK)),
1114+]
1115+
1116+
1117+# return a list of data sources that match this set of dependencies
1118+def get_datasource_list(depends):
1119+ return(DataSource.list_from_depends(depends, datasources))
1120+
1121+
1122+if __name__ == "__main__":
1123+ def main():
1124+ """
1125+ Call with single argument of directory or http or https url.
1126+ If url is given additional arguments are allowed, which will be
1127+ interpreted as consumer_key, token_key, token_secret, consumer_secret
1128+ """
1129+ import argparse
1130+ import pprint
1131+
1132+ parser = argparse.ArgumentParser(description='Interact with MAAS DS')
1133+ parser.add_argument("--config", metavar="file",
1134+ help="specify DS config file", default=None)
1135+ parser.add_argument("--ckey", metavar="key",
1136+ help="the consumer key to auth with", default=None)
1137+ parser.add_argument("--tkey", metavar="key",
1138+ help="the token key to auth with", default=None)
1139+ parser.add_argument("--csec", metavar="secret",
1140+ help="the consumer secret (likely '')", default="")
1141+ parser.add_argument("--tsec", metavar="secret",
1142+ help="the token secret to auth with", default=None)
1143+ parser.add_argument("--apiver", metavar="version",
1144+ help="the apiver to use ("" can be used)", default=MD_VERSION)
1145+
1146+ subcmds = parser.add_subparsers(title="subcommands", dest="subcmd")
1147+ subcmds.add_parser('crawl', help="crawl the datasource")
1148+ subcmds.add_parser('get', help="do a single GET of provided url")
1149+ subcmds.add_parser('check-seed', help="read andn verify seed at url")
1150+
1151+ parser.add_argument("url", help="the data source to query")
1152+
1153+ args = parser.parse_args()
1154+
1155+ creds = {'consumer_key': args.ckey, 'token_key': args.tkey,
1156+ 'token_secret': args.tsec, 'consumer_secret': args.csec}
1157+
1158+ if args.config:
1159+ import yaml
1160+ with open(args.config) as fp:
1161+ cfg = yaml.load(fp)
1162+ if 'datasource' in cfg:
1163+ cfg = cfg['datasource']['MAAS']
1164+ for key in creds.keys():
1165+ if key in cfg and creds[key] == None:
1166+ creds[key] = cfg[key]
1167+
1168+ def geturl(url, headers_cb):
1169+ req = urllib2.Request(url, data=None, headers=headers_cb(url))
1170+ return(urllib2.urlopen(req).read())
1171+
1172+ def printurl(url, headers_cb):
1173+ print "== %s ==\n%s\n" % (url, geturl(url, headers_cb))
1174+
1175+ def crawl(url, headers_cb=None):
1176+ if url.endswith("/"):
1177+ for line in geturl(url, headers_cb).splitlines():
1178+ if line.endswith("/"):
1179+ crawl("%s%s" % (url, line), headers_cb)
1180+ else:
1181+ printurl("%s%s" % (url, line), headers_cb)
1182+ else:
1183+ printurl(url, headers_cb)
1184+
1185+ def my_headers(url):
1186+ headers = {}
1187+ if creds.get('consumer_key', None) != None:
1188+ headers = oauth_headers(url, **creds)
1189+ return headers
1190+
1191+ if args.subcmd == "check-seed":
1192+ if args.url.startswith("http"):
1193+ (userdata, metadata) = read_maas_seed_url(args.url,
1194+ header_cb=my_headers, version=args.apiver)
1195+ else:
1196+ (userdata, metadata) = read_maas_seed_url(args.url)
1197+ print "=== userdata ==="
1198+ print userdata
1199+ print "=== metadata ==="
1200+ pprint.pprint(metadata)
1201+
1202+ elif args.subcmd == "get":
1203+ printurl(args.url, my_headers)
1204+
1205+ elif args.subcmd == "crawl":
1206+ if not args.url.endswith("/"):
1207+ args.url = "%s/" % args.url
1208+ crawl(args.url, my_headers)
1209+
1210+ main()
1211
1212=== modified file 'cloudinit/DataSourceNoCloud.py'
1213--- cloudinit/DataSourceNoCloud.py 2012-01-18 14:07:33 +0000
1214+++ cloudinit/DataSourceNoCloud.py 2012-06-13 13:16:19 +0000
1215@@ -23,6 +23,8 @@
1216 from cloudinit import seeddir as base_seeddir
1217 from cloudinit import log
1218 import cloudinit.util as util
1219+import errno
1220+import subprocess
1221
1222
1223 class DataSourceNoCloud(DataSource.DataSource):
1224@@ -30,6 +32,7 @@
1225 userdata = None
1226 userdata_raw = None
1227 supported_seed_starts = ("/", "file://")
1228+ dsmode = "local"
1229 seed = None
1230 cmdline_id = "ds=nocloud"
1231 seeddir = base_seeddir + '/nocloud'
1232@@ -41,7 +44,7 @@
1233
1234 def get_data(self):
1235 defaults = {
1236- "instance-id": "nocloud"
1237+ "instance-id": "nocloud", "dsmode": self.dsmode
1238 }
1239
1240 found = []
1241@@ -64,13 +67,54 @@
1242 found.append(self.seeddir)
1243 log.debug("using seeded cache data in %s" % self.seeddir)
1244
1245+ # if the datasource config had a 'seedfrom' entry, then that takes
1246+ # precedence over a 'seedfrom' that was found in a filesystem
1247+ # but not over external medi
1248+ if 'seedfrom' in self.ds_cfg and self.ds_cfg['seedfrom']:
1249+ found.append("ds_config")
1250+ md["seedfrom"] = self.ds_cfg['seedfrom']
1251+
1252+ fslist = util.find_devs_with("TYPE=vfat")
1253+ fslist.extend(util.find_devs_with("TYPE=iso9660"))
1254+
1255+ label_list = util.find_devs_with("LABEL=cidata")
1256+ devlist = list(set(fslist) & set(label_list))
1257+ devlist.sort(reverse=True)
1258+
1259+ for dev in devlist:
1260+ try:
1261+ (newmd, newud) = util.mount_callback_umount(dev,
1262+ util.read_seeded)
1263+ md = util.mergedict(newmd, md)
1264+ ud = newud
1265+
1266+ # for seed from a device, the default mode is 'net'.
1267+ # that is more likely to be what is desired.
1268+ # If they want dsmode of local, then they must
1269+ # specify that.
1270+ if 'dsmode' not in md:
1271+ md['dsmode'] = "net"
1272+
1273+ log.debug("using data from %s" % dev)
1274+ found.append(dev)
1275+ break
1276+ except OSError, e:
1277+ if e.errno != errno.ENOENT:
1278+ raise
1279+ except util.mountFailedError:
1280+ log.warn("Failed to mount %s when looking for seed" % dev)
1281+
1282 # there was no indication on kernel cmdline or data
1283 # in the seeddir suggesting this handler should be used.
1284 if len(found) == 0:
1285 return False
1286
1287+ seeded_interfaces = None
1288+
1289 # the special argument "seedfrom" indicates we should
1290 # attempt to seed the userdata / metadata from its value
1291+ # its primarily value is in allowing the user to type less
1292+ # on the command line, ie: ds=nocloud;s=http://bit.ly/abcdefg
1293 if "seedfrom" in md:
1294 seedfrom = md["seedfrom"]
1295 seedfound = False
1296@@ -83,6 +127,9 @@
1297 (seedfrom, self.__class__))
1298 return False
1299
1300+ if 'network-interfaces' in md:
1301+ seeded_interfaces = self.dsmode
1302+
1303 # this could throw errors, but the user told us to do it
1304 # so if errors are raised, let them raise
1305 (md_seed, ud) = util.read_seeded(seedfrom, timeout=None)
1306@@ -93,10 +140,35 @@
1307 found.append(seedfrom)
1308
1309 md = util.mergedict(md, defaults)
1310+
1311+ # update the network-interfaces if metadata had 'network-interfaces'
1312+ # entry and this is the local datasource, or 'seedfrom' was used
1313+ # and the source of the seed was self.dsmode
1314+ # ('local' for NoCloud, 'net' for NoCloudNet')
1315+ if ('network-interfaces' in md and
1316+ (self.dsmode in ("local", seeded_interfaces))):
1317+ log.info("updating network interfaces from nocloud")
1318+
1319+ util.write_file("/etc/network/interfaces",
1320+ md['network-interfaces'])
1321+ try:
1322+ (out, err) = util.subp(['ifup', '--all'])
1323+ if len(out) or len(err):
1324+ log.warn("ifup --all had stderr: %s" % err)
1325+
1326+ except subprocess.CalledProcessError as exc:
1327+ log.warn("ifup --all failed: %s" % (exc.output[1]))
1328+
1329 self.seed = ",".join(found)
1330 self.metadata = md
1331 self.userdata_raw = ud
1332- return True
1333+
1334+ if md['dsmode'] == self.dsmode:
1335+ return True
1336+
1337+ log.debug("%s: not claiming datasource, dsmode=%s" %
1338+ (self, md['dsmode']))
1339+ return False
1340
1341
1342 # returns true or false indicating if cmdline indicated
1343@@ -145,6 +217,7 @@
1344 cmdline_id = "ds=nocloud-net"
1345 supported_seed_starts = ("http://", "https://", "ftp://")
1346 seeddir = base_seeddir + '/nocloud-net'
1347+ dsmode = "net"
1348
1349
1350 datasources = (
1351
1352=== modified file 'cloudinit/DataSourceOVF.py'
1353--- cloudinit/DataSourceOVF.py 2012-01-18 14:07:33 +0000
1354+++ cloudinit/DataSourceOVF.py 2012-06-13 13:16:19 +0000
1355@@ -162,7 +162,7 @@
1356
1357 # transport functions take no input and return
1358 # a 3 tuple of content, path, filename
1359-def transport_iso9660(require_iso=False):
1360+def transport_iso9660(require_iso=True):
1361
1362 # default_regex matches values in
1363 # /lib/udev/rules.d/60-cdrom_id.rules
1364
1365=== modified file 'cloudinit/SshUtil.py'
1366--- cloudinit/SshUtil.py 2012-01-18 14:07:33 +0000
1367+++ cloudinit/SshUtil.py 2012-06-13 13:16:19 +0000
1368@@ -155,6 +155,8 @@
1369 akeys = ssh_cfg.get("AuthorizedKeysFile", "%h/.ssh/authorized_keys")
1370 akeys = akeys.replace("%h", pwent.pw_dir)
1371 akeys = akeys.replace("%u", user)
1372+ if not akeys.startswith('/'):
1373+ akeys = os.path.join(pwent.pw_dir, akeys)
1374 authorized_keys = akeys
1375 except Exception:
1376 authorized_keys = '%s/.ssh/authorized_keys' % pwent.pw_dir
1377
1378=== modified file 'cloudinit/UserDataHandler.py'
1379--- cloudinit/UserDataHandler.py 2012-01-30 14:24:41 +0000
1380+++ cloudinit/UserDataHandler.py 2012-06-13 13:16:19 +0000
1381@@ -180,7 +180,7 @@
1382
1383 payload = part.get_payload(decode=True)
1384
1385- if ctype_orig == "text/plain":
1386+ if ctype_orig in ("text/plain", "text/x-not-multipart"):
1387 ctype = type_from_startswith(payload)
1388
1389 if ctype is None:
1390@@ -213,7 +213,7 @@
1391 else:
1392 msg[key] = val
1393 else:
1394- mtype = headers.get("Content-Type", "text/plain")
1395+ mtype = headers.get("Content-Type", "text/x-not-multipart")
1396 maintype, subtype = mtype.split("/", 1)
1397 msg = MIMEBase(maintype, subtype, *headers)
1398 msg.set_payload(data)
1399
1400=== modified file 'cloudinit/__init__.py'
1401--- cloudinit/__init__.py 2012-01-18 14:07:33 +0000
1402+++ cloudinit/__init__.py 2012-06-13 13:16:19 +0000
1403@@ -29,7 +29,7 @@
1404
1405 cfg_builtin = """
1406 log_cfgs: []
1407-datasource_list: ["NoCloud", "OVF", "Ec2"]
1408+datasource_list: ["NoCloud", "ConfigDrive", "OVF", "MAAS", "Ec2", "CloudStack"]
1409 def_log_file: /var/log/cloud-init.log
1410 syslog_fix_perms: syslog:adm
1411 """
1412@@ -60,7 +60,6 @@
1413 import sys
1414 import os.path
1415 import errno
1416-import pwd
1417 import subprocess
1418 import yaml
1419 import logging
1420@@ -138,7 +137,9 @@
1421
1422 if ds_deps != None:
1423 self.ds_deps = ds_deps
1424+
1425 self.sysconfig = sysconfig
1426+
1427 self.cfg = self.read_cfg()
1428
1429 def read_cfg(self):
1430@@ -572,10 +573,14 @@
1431 if not (modfreq == per_always or
1432 (frequency == per_instance and modfreq == per_instance)):
1433 return
1434- if mod.handler_version == 1:
1435- mod.handle_part(data, ctype, filename, payload)
1436- else:
1437- mod.handle_part(data, ctype, filename, payload, frequency)
1438+ try:
1439+ if mod.handler_version == 1:
1440+ mod.handle_part(data, ctype, filename, payload)
1441+ else:
1442+ mod.handle_part(data, ctype, filename, payload, frequency)
1443+ except:
1444+ util.logexc(log)
1445+ traceback.print_exc(file=sys.stderr)
1446
1447
1448 def partwalker_handle_handler(pdata, _ctype, _filename, payload):
1449@@ -586,15 +591,13 @@
1450 modfname = modname + ".py"
1451 util.write_file("%s/%s" % (pdata['handlerdir'], modfname), payload, 0600)
1452
1453- pdata['handlercount'] = curcount + 1
1454-
1455 try:
1456 mod = __import__(modname)
1457 handler_register(mod, pdata['handlers'], pdata['data'], frequency)
1458+ pdata['handlercount'] = curcount + 1
1459 except:
1460 util.logexc(log)
1461 traceback.print_exc(file=sys.stderr)
1462- return
1463
1464
1465 def partwalker_callback(pdata, ctype, filename, payload):
1466@@ -605,6 +608,14 @@
1467 partwalker_handle_handler(pdata, ctype, filename, payload)
1468 return
1469 if ctype not in pdata['handlers']:
1470+ if ctype == "text/x-not-multipart":
1471+ # Extract the first line or 24 bytes for displaying in the log
1472+ start = payload.split("\n", 1)[0][:24]
1473+ if start < payload:
1474+ details = "starting '%s...'" % start.encode("string-escape")
1475+ else:
1476+ details = repr(payload)
1477+ log.warning("Unhandled non-multipart userdata %s", details)
1478 return
1479 handler_handle_part(pdata['handlers'][ctype], pdata['data'],
1480 ctype, filename, payload, pdata['frequency'])
1481@@ -630,3 +641,27 @@
1482
1483 def handle_part(self, data, ctype, filename, payload, frequency):
1484 return(self.handler(data, ctype, filename, payload, frequency))
1485+
1486+
1487+def get_cmdline_url(names=('cloud-config-url', 'url'),
1488+ starts="#cloud-config", cmdline=None):
1489+
1490+ if cmdline == None:
1491+ cmdline = util.get_cmdline()
1492+
1493+ data = util.keyval_str_to_dict(cmdline)
1494+ url = None
1495+ key = None
1496+ for key in names:
1497+ if key in data:
1498+ url = data[key]
1499+ break
1500+ if url == None:
1501+ return (None, None, None)
1502+
1503+ contents = util.readurl(url)
1504+
1505+ if contents.startswith(starts):
1506+ return (key, url, contents)
1507+
1508+ return (key, url, None)
1509
1510=== modified file 'cloudinit/netinfo.py'
1511--- cloudinit/netinfo.py 2012-01-30 14:24:30 +0000
1512+++ cloudinit/netinfo.py 2012-06-13 13:16:19 +0000
1513@@ -19,14 +19,14 @@
1514 # You should have received a copy of the GNU General Public License
1515 # along with this program. If not, see <http://www.gnu.org/licenses/>.
1516
1517-import subprocess
1518+import cloudinit.util as util
1519
1520
1521 def netdev_info(empty=""):
1522 fields = ("hwaddr", "addr", "bcast", "mask")
1523- ifcfg_out = str(subprocess.check_output(["ifconfig", "-a"]))
1524+ (ifcfg_out, _err) = util.subp(["ifconfig", "-a"])
1525 devs = {}
1526- for line in ifcfg_out.splitlines():
1527+ for line in str(ifcfg_out).splitlines():
1528 if len(line) == 0:
1529 continue
1530 if line[0] not in ("\t", " "):
1531@@ -70,9 +70,9 @@
1532
1533
1534 def route_info():
1535- route_out = str(subprocess.check_output(["route", "-n"]))
1536+ (route_out, _err) = util.subp(["route", "-n"])
1537 routes = []
1538- for line in route_out.splitlines()[1:]:
1539+ for line in str(route_out).splitlines()[1:]:
1540 if not line:
1541 continue
1542 toks = line.split()
1543
1544=== modified file 'cloudinit/util.py'
1545--- cloudinit/util.py 2012-01-18 14:07:33 +0000
1546+++ cloudinit/util.py 2012-06-13 13:16:19 +0000
1547@@ -32,6 +32,7 @@
1548 import socket
1549 import sys
1550 import time
1551+import tempfile
1552 import traceback
1553 import urlparse
1554
1555@@ -208,16 +209,18 @@
1556 if skip_no_exist and not os.path.isdir(dirp):
1557 return
1558
1559- # per bug 857926, Fedora's run-parts will exit failure on empty dir
1560- if os.path.isdir(dirp) and os.listdir(dirp) == []:
1561- return
1562-
1563- cmd = ['run-parts', '--regex', '.*', dirp]
1564- sp = subprocess.Popen(cmd)
1565- sp.communicate()
1566- if sp.returncode is not 0:
1567- raise subprocess.CalledProcessError(sp.returncode, cmd)
1568- return
1569+ failed = 0
1570+ for exe_name in sorted(os.listdir(dirp)):
1571+ exe_path = os.path.join(dirp, exe_name)
1572+ if os.path.isfile(exe_path) and os.access(exe_path, os.X_OK):
1573+ popen = subprocess.Popen([exe_path])
1574+ popen.communicate()
1575+ if popen.returncode is not 0:
1576+ failed += 1
1577+ sys.stderr.write("failed: %s [%i]\n" %
1578+ (exe_path, popen.returncode))
1579+ if failed:
1580+ raise RuntimeError('runparts: %i failures' % failed)
1581
1582
1583 def subp(args, input_=None):
1584@@ -515,30 +518,70 @@
1585 return(string.replace('\r\n', '\n'))
1586
1587
1588-def islxc():
1589- # is this host running lxc?
1590- try:
1591- with open("/proc/1/cgroup") as f:
1592- if f.read() == "/":
1593- return True
1594- except IOError as e:
1595- if e.errno != errno.ENOENT:
1596- raise
1597-
1598- try:
1599- # try to run a program named 'lxc-is-container'. if it returns true,
1600- # then we're inside a container. otherwise, no
1601- sp = subprocess.Popen(['lxc-is-container'], stdout=subprocess.PIPE,
1602- stderr=subprocess.PIPE)
1603- sp.communicate(None)
1604- return(sp.returncode == 0)
1605- except OSError as e:
1606- if e.errno != errno.ENOENT:
1607- raise
1608+def is_container():
1609+ # is this code running in a container of some sort
1610+
1611+ for helper in ('running-in-container', 'lxc-is-container'):
1612+ try:
1613+ # try to run a helper program. if it returns true
1614+ # then we're inside a container. otherwise, no
1615+ sp = subprocess.Popen(helper, stdout=subprocess.PIPE,
1616+ stderr=subprocess.PIPE)
1617+ sp.communicate(None)
1618+ return(sp.returncode == 0)
1619+ except OSError as e:
1620+ if e.errno != errno.ENOENT:
1621+ raise
1622+
1623+ # this code is largely from the logic in
1624+ # ubuntu's /etc/init/container-detect.conf
1625+ try:
1626+ # Detect old-style libvirt
1627+ # Detect OpenVZ containers
1628+ pid1env = get_proc_env(1)
1629+ if "container" in pid1env:
1630+ return True
1631+
1632+ if "LIBVIRT_LXC_UUID" in pid1env:
1633+ return True
1634+
1635+ except IOError as e:
1636+ if e.errno != errno.ENOENT:
1637+ pass
1638+
1639+ # Detect OpenVZ containers
1640+ if os.path.isdir("/proc/vz") and not os.path.isdir("/proc/bc"):
1641+ return True
1642+
1643+ try:
1644+ # Detect Vserver containers
1645+ with open("/proc/self/status") as fp:
1646+ lines = fp.read().splitlines()
1647+ for line in lines:
1648+ if line.startswith("VxID:"):
1649+ (_key, val) = line.strip().split(":", 1)
1650+ if val != "0":
1651+ return True
1652+ except IOError as e:
1653+ if e.errno != errno.ENOENT:
1654+ pass
1655
1656 return False
1657
1658
1659+def get_proc_env(pid):
1660+ # return the environment in a dict that a given process id was started with
1661+ env = {}
1662+ with open("/proc/%s/environ" % pid) as fp:
1663+ toks = fp.read().split("\0")
1664+ for tok in toks:
1665+ if tok == "":
1666+ continue
1667+ (name, val) = tok.split("=", 1)
1668+ env[name] = val
1669+ return env
1670+
1671+
1672 def get_hostname_fqdn(cfg, cloud):
1673 # return the hostname and fqdn from 'cfg'. If not found in cfg,
1674 # then fall back to data from cloud
1675@@ -630,3 +673,183 @@
1676 return
1677 with open(os.devnull) as fp:
1678 os.dup2(fp.fileno(), sys.stdin.fileno())
1679+
1680+
1681+def find_devs_with(criteria):
1682+ """
1683+ find devices matching given criteria (via blkid)
1684+ criteria can be *one* of:
1685+ TYPE=<filesystem>
1686+ LABEL=<label>
1687+ UUID=<uuid>
1688+ """
1689+ try:
1690+ (out, _err) = subp(['blkid', '-t%s' % criteria, '-odevice'])
1691+ except subprocess.CalledProcessError:
1692+ return([])
1693+ return(str(out).splitlines())
1694+
1695+
1696+class mountFailedError(Exception):
1697+ pass
1698+
1699+
1700+def mount_callback_umount(device, callback, data=None):
1701+ """
1702+ mount the device, call method 'callback' passing the directory
1703+ in which it was mounted, then unmount. Return whatever 'callback'
1704+ returned. If data != None, also pass data to callback.
1705+ """
1706+
1707+ def _cleanup(umount, tmpd):
1708+ if umount:
1709+ try:
1710+ subp(["umount", '-l', umount])
1711+ except subprocess.CalledProcessError:
1712+ raise
1713+ if tmpd:
1714+ os.rmdir(tmpd)
1715+
1716+ # go through mounts to see if it was already mounted
1717+ fp = open("/proc/mounts")
1718+ mounts = fp.readlines()
1719+ fp.close()
1720+
1721+ tmpd = None
1722+
1723+ mounted = {}
1724+ for mpline in mounts:
1725+ (dev, mp, fstype, _opts, _freq, _passno) = mpline.split()
1726+ mp = mp.replace("\\040", " ")
1727+ mounted[dev] = (dev, fstype, mp, False)
1728+
1729+ umount = False
1730+ if device in mounted:
1731+ mountpoint = "%s/" % mounted[device][2]
1732+ else:
1733+ tmpd = tempfile.mkdtemp()
1734+
1735+ mountcmd = ["mount", "-o", "ro", device, tmpd]
1736+
1737+ try:
1738+ (_out, _err) = subp(mountcmd)
1739+ umount = tmpd
1740+ except subprocess.CalledProcessError as exc:
1741+ _cleanup(umount, tmpd)
1742+ raise mountFailedError(exc.output[1])
1743+
1744+ mountpoint = "%s/" % tmpd
1745+
1746+ try:
1747+ if data == None:
1748+ ret = callback(mountpoint)
1749+ else:
1750+ ret = callback(mountpoint, data)
1751+
1752+ except Exception as exc:
1753+ _cleanup(umount, tmpd)
1754+ raise exc
1755+
1756+ _cleanup(umount, tmpd)
1757+
1758+ return(ret)
1759+
1760+
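[Editor's note: a hypothetical usage sketch of the find_devs_with and
mount_callback_umount helpers added above. This is not part of the merge; the
"cidata" label and the "user-data" file name are only illustrative.]

    from cloudinit.util import (find_devs_with, mount_callback_umount,
                                mountFailedError)

    def _read_user_data(mountpoint):
        # mountpoint already ends with '/', so plain concatenation is enough
        with open(mountpoint + "user-data") as fp:
            return fp.read()

    # Try each device carrying a filesystem labelled 'cidata'; skip any that
    # fail to mount read-only, keep the contents from the first that works.
    user_data = None
    for dev in find_devs_with("LABEL=cidata"):
        try:
            user_data = mount_callback_umount(dev, _read_user_data)
            break
        except mountFailedError:
            continue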
1761+def wait_for_url(urls, max_wait=None, timeout=None,
1762+ status_cb=None, headers_cb=None):
1763+ """
1764+ urls: a list of urls to try
1765+ max_wait: roughly the maximum time to wait before giving up
1766+ The max time is *actually* len(urls)*timeout as each url will
1767+ be tried once and given the timeout provided.
1768+ timeout: the timeout provided to urllib2.urlopen
1769+ status_cb: call method with string message when a url is not available
1770+ headers_cb: call method with single argument of url to get headers
1771+ for request.
1772+
1773+ the idea of this routine is to wait for the EC2 metadata service to
1774+ come up. On both Eucalyptus and EC2 we have seen the case where
1775+ the instance hit the MD before the MD service was up. EC2 seems
1776+ to have permanently fixed this, though.
1777+
1778+ In OpenStack, the metadata service might be painfully slow, and
1779+ unable to avoid hitting a timeout of even up to 10 seconds or more
1780+ (LP: #894279) for a simple GET.
1781+
1782+ Offset those needs with the need to not hang forever (and block boot)
1783+ on a system where cloud-init is configured to look for EC2 Metadata
1784+ service but is not going to find one. It is possible that the instance
1785+ data host (169.254.169.254) may be firewalled off entirely for a system,
1786+ meaning that the connection will block forever unless a timeout is set.
1787+ """
1788+ starttime = time.time()
1789+
1790+ sleeptime = 1
1791+
1792+ def nullstatus_cb(msg):
1793+ return
1794+
1795+ if status_cb == None:
1796+ status_cb = nullstatus_cb
1797+
1798+ def timeup(max_wait, starttime):
1799+ return((max_wait <= 0 or max_wait == None) or
1800+ (time.time() - starttime > max_wait))
1801+
1802+ loop_n = 0
1803+ while True:
1804+ sleeptime = int(loop_n / 5) + 1
1805+ for url in urls:
1806+ now = time.time()
1807+ if loop_n != 0:
1808+ if timeup(max_wait, starttime):
1809+ break
1810+ if timeout and (now + timeout > (starttime + max_wait)):
1811+ # shorten timeout to not run way over max_time
1812+ timeout = int((starttime + max_wait) - now)
1813+
1814+ reason = ""
1815+ try:
1816+ if headers_cb != None:
1817+ headers = headers_cb(url)
1818+ else:
1819+ headers = {}
1820+
1821+ req = urllib2.Request(url, data=None, headers=headers)
1822+ resp = urllib2.urlopen(req, timeout=timeout)
1823+ if resp.read() != "":
1824+ return url
1825+ reason = "empty data [%s]" % resp.getcode()
1826+ except urllib2.HTTPError as e:
1827+ reason = "http error [%s]" % e.code
1828+ except urllib2.URLError as e:
1829+ reason = "url error [%s]" % e.reason
1830+ except socket.timeout as e:
1831+ reason = "socket timeout [%s]" % e
1832+ except Exception as e:
1833+ reason = "unexpected error [%s]" % e
1834+
1835+ status_cb("'%s' failed [%s/%ss]: %s" %
1836+ (url, int(time.time() - starttime), max_wait,
1837+ reason))
1838+
1839+ if timeup(max_wait, starttime):
1840+ break
1841+
1842+ loop_n = loop_n + 1
1843+ time.sleep(sleeptime)
1844+
1845+ return False
1846+
1847+
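[Editor's note: a hypothetical invocation of the wait_for_url helper defined
above, not part of the merge; the URL and the limits are only illustrative.]

    import sys
    from cloudinit.util import wait_for_url

    md_url = "http://169.254.169.254/2009-04-04/meta-data/instance-id"
    # Poll for up to two minutes, ten seconds per request, reporting each
    # failed attempt on stderr; wait_for_url returns the first url that
    # answered with a non-empty body, or False once max_wait is exceeded.
    found = wait_for_url(urls=[md_url], max_wait=120, timeout=10,
                         status_cb=lambda msg: sys.stderr.write(msg + "\n"))
    if not found:
        sys.stderr.write("metadata service never became available\n")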
1848+def keyval_str_to_dict(kvstring):
1849+ ret = {}
1850+ for tok in kvstring.split():
1851+ try:
1852+ (key, val) = tok.split("=", 1)
1853+ except ValueError:
1854+ key = tok
1855+ val = True
1856+ ret[key] = val
1857+
1858+ return(ret)
1859
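[Editor's note: a hypothetical illustration of the keyval_str_to_dict helper
added above, not part of the merge, using a kernel command line of the form
described in doc/kernel-cmdline.txt.]

    from cloudinit.util import keyval_str_to_dict

    cmdline = "root=/dev/sda ro url=http://foo.bar.zee/abcde"
    parsed = keyval_str_to_dict(cmdline)
    # tokens without '=' become True:
    # parsed == {'root': '/dev/sda', 'ro': True,
    #            'url': 'http://foo.bar.zee/abcde'}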
1860=== modified file 'config/cloud.cfg'
1861--- config/cloud.cfg 2012-01-17 17:46:44 +0000
1862+++ config/cloud.cfg 2012-06-13 13:16:19 +0000
1863@@ -1,7 +1,7 @@
1864 user: ubuntu
1865 disable_root: 1
1866 preserve_hostname: False
1867-# datasource_list: [ "NoCloud", "OVF", "Ec2" ]
1868+# datasource_list: ["NoCloud", "ConfigDrive", "OVF", "MAAS", "Ec2", "CloudStack"]
1869
1870 cloud_init_modules:
1871 - bootcmd
1872@@ -19,11 +19,13 @@
1873 - locale
1874 - set-passwords
1875 - grub-dpkg
1876+ - apt-pipelining
1877 - apt-update-upgrade
1878 - landscape
1879 - timezone
1880 - puppet
1881 - chef
1882+ - salt-minion
1883 - mcollective
1884 - disable-ec2-metadata
1885 - runcmd
1886
1887=== modified file 'debian.trunk/control'
1888--- debian.trunk/control 2012-01-12 17:51:48 +0000
1889+++ debian.trunk/control 2012-06-13 13:16:19 +0000
1890@@ -20,6 +20,7 @@
1891 python-boto (>=2.0),
1892 python-cheetah,
1893 python-configobj,
1894+ python-oauth,
1895 python-software-properties,
1896 python-yaml,
1897 ${misc:Depends},
1898
1899=== added directory 'doc/configdrive'
1900=== added file 'doc/configdrive/README'
1901--- doc/configdrive/README 1970-01-01 00:00:00 +0000
1902+++ doc/configdrive/README 2012-06-13 13:16:19 +0000
1903@@ -0,0 +1,118 @@
1904+The 'ConfigDrive' DataSource supports the OpenStack configdrive disk.
1905+See doc/source/api_ext/ext_config_drive.rst in the nova source code for
1906+more information on config drive.
1907+
1908+The following criteria must be met for a block device to be identified by
1909+DataSourceConfigDrive as a config drive:
1910+ * must be formatted with a vfat filesystem
1911+ * must be an un-partitioned block device (/dev/vdb, not /dev/vdb1)
1912+ * must contain one of the following files:
1913+ * etc/network/interfaces
1914+ * root/.ssh/authorized_keys
1915+ * meta.js
1916+
1917+By default, cloud-init does not consider this source to be a full-fledged
1918+datasource. Instead, the default behavior is to assume it is really only
1919+present to provide networking information. Cloud-init will copy off the
1920+network information, apply it to the system, and then continue on. The
1921+"full" datasource would then be found in the EC2 metadata service.
1922+
1923+== Content of config-drive ==
1924+ * etc/network/interfaces
1925+ This file is laid down by nova in order to pass static networking
1926+ information to the guest. Cloud-init will copy it off of the config-drive
1927+ and into /etc/network/interfaces as soon as it can, and then attempt to
1928+ bring up all network interfaces.
1929+
1930+ * root/.ssh/authorized_keys
1931+ This file is laid down by nova, and contains the keys that were
1932+ provided to it on instance creation (nova boot --key ...)
1933+
1934+ Cloud-init will copy those keys and put them into the configured user
1935+ ('ubuntu') .ssh/authorized_keys.
1936+
1937+ * meta.js
1938+ meta.js is populated on the config-drive in response to the user passing
1939+ "meta flags" (nova boot --meta key=value ...). It is expected to be json
1940+ formatted.
1941+
1942+== Configuration ==
1943+Cloud-init's behavior can be modified by keys found in the meta.js file in
1944+the following ways:
1945+ * dsmode:
1946+ values: local, net, pass
1947+ default: pass
1948+
1949+ This is what indicates if configdrive is a final data source or not.
1950+ By default it is 'pass', meaning this datasource should not be read.
1951+ Set it to 'local' or 'net' to stop cloud-init from continuing on to
1952+ search for other data sources after network config.
1953+
1954+ The difference between 'local' and 'net' is that local will not require
1955+ networking to be up before user-data actions (or boothooks) are run.
1956+
1957+ * instance-id:
1958+ default: iid-dsconfigdrive
1959+ This is utilized as the metadata's instance-id. It should generally
1960+ be unique, as it is what is used to determine "is this a new instance".
1961+
1962+ * public-keys:
1963+ default: None
1964+ if present, these keys will be used as the public keys for the
1965+ instance. This value overrides the content in authorized_keys.
1966+ Note: it is likely preferable to provide keys via user-data
1967+
1968+ * user-data:
1969+ default: None
1970+ This provides cloud-init user-data. See other documentation for what
1971+ all can be present here.
1972+
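[Editor's note: for illustration only, a hypothetical meta.js carrying the
keys described above. The exact content is generated by nova from the --meta
flags, so treat this purely as a sketch.]

    {
        "dsmode": "local",
        "instance-id": "iid-abcdefg",
        "user-data": "#!/bin/sh\necho hello\n"
    }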
1973+== Example ==
1974+Here is an example using the nova client (python-novaclient).
1975+
1976+Assuming the following variables are set up:
1977+ * img_id : set to the nova image id (uuid from image-list)
1978+ * flav_id : set to numeric flavor_id (nova flavor-list)
1979+ * keyname : set to name of key for this instance (nova keypair-list)
1980+
1981+$ cat my-user-data
1982+#!/bin/sh
1983+echo ==== USER_DATA FROM EC2 MD ==== | tee /ud.log
1984+
1985+$ ud_value=$(sed 's,EC2 MD,META KEY,' my-user-data)
1986+
1987+## Now, 'ud_value' has the same content as the my-user-data file, but
1988+## with the string "USER_DATA FROM META KEY"
1989+
1990+## launch an instance with dsmode=pass
1991+## This will really not use the configdrive for anything as the mode
1992+## for the datasource is 'pass', meaning it will still expect some
1993+## other data source (DataSourceEc2).
1994+
1995+$ nova boot --image=$img_id --config-drive=1 --flavor=$flav_id \
1996+ --key_name=$keyname \
1997+ --user_data=my-user-data \
1998+ "--meta=instance-id=iid-001 \
1999+ "--meta=user-data=${ud_keyval}" \
2000+ "--meta=dsmode=pass" cfgdrive-dsmode-pass
2001+
2002+$ euca-get-console-output i-0000001 | grep USER_DATA
2003+echo ==== USER_DATA FROM EC2 MD ==== | tee /ud.log
2004+
2005+## Now, launch an instance with dsmode=local
2006+## This time, the only metadata and userdata available to cloud-init
2007+## are on the config-drive
2008+$ nova boot --image=$img_id --config-drive=1 --flavor=$flav_id \
2009+ --key_name=$keyname \
2010+ --user_data=my-user-data \
2011+ "--meta=instance-id=iid-001 \
2012+ "--meta=user-data=${ud_keyval}" \
2013+ "--meta=dsmode=local" cfgdrive-dsmode-local
2014+
2015+$ euca-get-console-output i-0000002 | grep USER_DATA
2016+echo ==== USER_DATA FROM META KEY ==== | tee /ud.log
2017+
2018+--
2019+[1] https://github.com/openstack/nova/blob/master/doc/source/api_ext/ext_config_drive.rst for more information
2020+
2021+
2022
2023=== added file 'doc/examples/cloud-config-chef-oneiric.txt'
2024--- doc/examples/cloud-config-chef-oneiric.txt 1970-01-01 00:00:00 +0000
2025+++ doc/examples/cloud-config-chef-oneiric.txt 2012-06-13 13:16:19 +0000
2026@@ -0,0 +1,90 @@
2027+#cloud-config
2028+#
2029+# This is an example file to automatically install chef-client and run a
2030+# list of recipes when the instance boots for the first time.
2031+# Make sure that this file is valid yaml before starting instances.
2032+# It should be passed as user-data when starting the instance.
2033+#
2034+# This example assumes the instance is 11.10 (oneiric)
2035+
2036+
2037+# The default is to install from packages.
2038+
2039+# Key from http://apt.opscode.com/packages@opscode.com.gpg.key
2040+apt_sources:
2041+ - source: "deb http://apt.opscode.com/ $RELEASE-0.10 main"
2042+ key: |
2043+ -----BEGIN PGP PUBLIC KEY BLOCK-----
2044+ Version: GnuPG v1.4.9 (GNU/Linux)
2045+
2046+ mQGiBEppC7QRBADfsOkZU6KZK+YmKw4wev5mjKJEkVGlus+NxW8wItX5sGa6kdUu
2047+ twAyj7Yr92rF+ICFEP3gGU6+lGo0Nve7KxkN/1W7/m3G4zuk+ccIKmjp8KS3qn99
2048+ dxy64vcji9jIllVa+XXOGIp0G8GEaj7mbkixL/bMeGfdMlv8Gf2XPpp9vwCgn/GC
2049+ JKacfnw7MpLKUHOYSlb//JsEAJqao3ViNfav83jJKEkD8cf59Y8xKia5OpZqTK5W
2050+ ShVnNWS3U5IVQk10ZDH97Qn/YrK387H4CyhLE9mxPXs/ul18ioiaars/q2MEKU2I
2051+ XKfV21eMLO9LYd6Ny/Kqj8o5WQK2J6+NAhSwvthZcIEphcFignIuobP+B5wNFQpe
2052+ DbKfA/0WvN2OwFeWRcmmd3Hz7nHTpcnSF+4QX6yHRF/5BgxkG6IqBIACQbzPn6Hm
2053+ sMtm/SVf11izmDqSsQptCrOZILfLX/mE+YOl+CwWSHhl+YsFts1WOuh1EhQD26aO
2054+ Z84HuHV5HFRWjDLw9LriltBVQcXbpfSrRP5bdr7Wh8vhqJTPjrQnT3BzY29kZSBQ
2055+ YWNrYWdlcyA8cGFja2FnZXNAb3BzY29kZS5jb20+iGAEExECACAFAkppC7QCGwMG
2056+ CwkIBwMCBBUCCAMEFgIDAQIeAQIXgAAKCRApQKupg++Caj8sAKCOXmdG36gWji/K
2057+ +o+XtBfvdMnFYQCfTCEWxRy2BnzLoBBFCjDSK6sJqCu5Ag0ESmkLtBAIAIO2SwlR
2058+ lU5i6gTOp42RHWW7/pmW78CwUqJnYqnXROrt3h9F9xrsGkH0Fh1FRtsnncgzIhvh
2059+ DLQnRHnkXm0ws0jV0PF74ttoUT6BLAUsFi2SPP1zYNJ9H9fhhK/pjijtAcQwdgxu
2060+ wwNJ5xCEscBZCjhSRXm0d30bK1o49Cow8ZIbHtnXVP41c9QWOzX/LaGZsKQZnaMx
2061+ EzDk8dyyctR2f03vRSVyTFGgdpUcpbr9eTFVgikCa6ODEBv+0BnCH6yGTXwBid9g
2062+ w0o1e/2DviKUWCC+AlAUOubLmOIGFBuI4UR+rux9affbHcLIOTiKQXv79lW3P7W8
2063+ AAfniSQKfPWXrrcAAwUH/2XBqD4Uxhbs25HDUUiM/m6Gnlj6EsStg8n0nMggLhuN
2064+ QmPfoNByMPUqvA7sULyfr6xCYzbzRNxABHSpf85FzGQ29RF4xsA4vOOU8RDIYQ9X
2065+ Q8NqqR6pydprRFqWe47hsAN7BoYuhWqTtOLSBmnAnzTR5pURoqcquWYiiEavZixJ
2066+ 3ZRAq/HMGioJEtMFrvsZjGXuzef7f0ytfR1zYeLVWnL9Bd32CueBlI7dhYwkFe+V
2067+ Ep5jWOCj02C1wHcwt+uIRDJV6TdtbIiBYAdOMPk15+VBdweBXwMuYXr76+A7VeDL
2068+ zIhi7tKFo6WiwjKZq0dzctsJJjtIfr4K4vbiD9Ojg1iISQQYEQIACQUCSmkLtAIb
2069+ DAAKCRApQKupg++CauISAJ9CxYPOKhOxalBnVTLeNUkAHGg2gACeIsbobtaD4ZHG
2070+ 0GLl8EkfA8uhluM=
2071+ =zKAm
2072+ -----END PGP PUBLIC KEY BLOCK-----
2073+
2074+chef:
2075+
2076+ # 11.10 will fail if install_type is "gems" (LP: #960576)
2077+ install_type: "packages"
2078+
2079+ # Chef settings
2080+ server_url: "https://chef.yourorg.com:4000"
2081+
2082+ # Node Name
2083+ # Defaults to the instance-id if not present
2084+ node_name: "your-node-name"
2085+
2086+ # Environment
2087+ # Defaults to '_default' if not present
2088+ environment: "production"
2089+
2090+ # Default validation name is chef-validator
2091+ validation_name: "yourorg-validator"
2092+
2093+ # value of validation_cert is not used if validation_key defined,
2094+ # but variable needs to be defined (LP: #960547)
2095+ validation_cert: "unused"
2096+ validation_key: |
2097+ -----BEGIN RSA PRIVATE KEY-----
2098+ YOUR-ORGS-VALIDATION-KEY-HERE
2099+ -----END RSA PRIVATE KEY-----
2100+
2101+ # A run list for a first boot json
2102+ run_list:
2103+ - "recipe[apache2]"
2104+ - "role[db]"
2105+
2106+ # Specify a list of initial attributes used by the cookbooks
2107+ initial_attributes:
2108+ apache:
2109+ prefork:
2110+ maxclients: 100
2111+ keepalive: "off"
2112+
2113+
2114+# Capture all subprocess output into a logfile
2115+# Useful for troubleshooting cloud-init issues
2116+output: {all: '| tee -a /var/log/cloud-init-output.log'}
2117
2118=== modified file 'doc/examples/cloud-config-chef.txt'
2119--- doc/examples/cloud-config-chef.txt 2012-01-20 20:10:28 +0000
2120+++ doc/examples/cloud-config-chef.txt 2012-06-13 13:16:19 +0000
2121@@ -1,17 +1,54 @@
2122 #cloud-config
2123 #
2124-# This is an example file to automatically setup chef and run a list of recipes
2125-# when the instance boots for the first time.
2126+# This is an example file to automatically install chef-client and run a
2127+# list of recipes when the instance boots for the first time.
2128 # Make sure that this file is valid yaml before starting instances.
2129 # It should be passed as user-data when starting the instance.
2130-
2131-# The default is to install from packages. If you want the latest packages from Opscode, be sure to add their repo:
2132-apt_mirror: http://apt.opscode.com/
2133+#
2134+# This example assumes the instance is 12.04 (precise)
2135+
2136+
2137+# The default is to install from packages.
2138+
2139+# Key from http://apt.opscode.com/packages@opscode.com.gpg.key
2140+apt_sources:
2141+ - source: "deb http://apt.opscode.com/ $RELEASE-0.10 main"
2142+ key: |
2143+ -----BEGIN PGP PUBLIC KEY BLOCK-----
2144+ Version: GnuPG v1.4.9 (GNU/Linux)
2145+
2146+ mQGiBEppC7QRBADfsOkZU6KZK+YmKw4wev5mjKJEkVGlus+NxW8wItX5sGa6kdUu
2147+ twAyj7Yr92rF+ICFEP3gGU6+lGo0Nve7KxkN/1W7/m3G4zuk+ccIKmjp8KS3qn99
2148+ dxy64vcji9jIllVa+XXOGIp0G8GEaj7mbkixL/bMeGfdMlv8Gf2XPpp9vwCgn/GC
2149+ JKacfnw7MpLKUHOYSlb//JsEAJqao3ViNfav83jJKEkD8cf59Y8xKia5OpZqTK5W
2150+ ShVnNWS3U5IVQk10ZDH97Qn/YrK387H4CyhLE9mxPXs/ul18ioiaars/q2MEKU2I
2151+ XKfV21eMLO9LYd6Ny/Kqj8o5WQK2J6+NAhSwvthZcIEphcFignIuobP+B5wNFQpe
2152+ DbKfA/0WvN2OwFeWRcmmd3Hz7nHTpcnSF+4QX6yHRF/5BgxkG6IqBIACQbzPn6Hm
2153+ sMtm/SVf11izmDqSsQptCrOZILfLX/mE+YOl+CwWSHhl+YsFts1WOuh1EhQD26aO
2154+ Z84HuHV5HFRWjDLw9LriltBVQcXbpfSrRP5bdr7Wh8vhqJTPjrQnT3BzY29kZSBQ
2155+ YWNrYWdlcyA8cGFja2FnZXNAb3BzY29kZS5jb20+iGAEExECACAFAkppC7QCGwMG
2156+ CwkIBwMCBBUCCAMEFgIDAQIeAQIXgAAKCRApQKupg++Caj8sAKCOXmdG36gWji/K
2157+ +o+XtBfvdMnFYQCfTCEWxRy2BnzLoBBFCjDSK6sJqCu5Ag0ESmkLtBAIAIO2SwlR
2158+ lU5i6gTOp42RHWW7/pmW78CwUqJnYqnXROrt3h9F9xrsGkH0Fh1FRtsnncgzIhvh
2159+ DLQnRHnkXm0ws0jV0PF74ttoUT6BLAUsFi2SPP1zYNJ9H9fhhK/pjijtAcQwdgxu
2160+ wwNJ5xCEscBZCjhSRXm0d30bK1o49Cow8ZIbHtnXVP41c9QWOzX/LaGZsKQZnaMx
2161+ EzDk8dyyctR2f03vRSVyTFGgdpUcpbr9eTFVgikCa6ODEBv+0BnCH6yGTXwBid9g
2162+ w0o1e/2DviKUWCC+AlAUOubLmOIGFBuI4UR+rux9affbHcLIOTiKQXv79lW3P7W8
2163+ AAfniSQKfPWXrrcAAwUH/2XBqD4Uxhbs25HDUUiM/m6Gnlj6EsStg8n0nMggLhuN
2164+ QmPfoNByMPUqvA7sULyfr6xCYzbzRNxABHSpf85FzGQ29RF4xsA4vOOU8RDIYQ9X
2165+ Q8NqqR6pydprRFqWe47hsAN7BoYuhWqTtOLSBmnAnzTR5pURoqcquWYiiEavZixJ
2166+ 3ZRAq/HMGioJEtMFrvsZjGXuzef7f0ytfR1zYeLVWnL9Bd32CueBlI7dhYwkFe+V
2167+ Ep5jWOCj02C1wHcwt+uIRDJV6TdtbIiBYAdOMPk15+VBdweBXwMuYXr76+A7VeDL
2168+ zIhi7tKFo6WiwjKZq0dzctsJJjtIfr4K4vbiD9Ojg1iISQQYEQIACQUCSmkLtAIb
2169+ DAAKCRApQKupg++CauISAJ9CxYPOKhOxalBnVTLeNUkAHGg2gACeIsbobtaD4ZHG
2170+ 0GLl8EkfA8uhluM=
2171+ =zKAm
2172+ -----END PGP PUBLIC KEY BLOCK-----
2173
2174 chef:
2175
2176 # Valid values are 'gems' and 'packages'
2177- install_type: "gems"
2178+ install_type: "packages"
2179
2180 # Chef settings
2181 server_url: "https://chef.yourorg.com:4000"
2182@@ -42,3 +79,8 @@
2183 prefork:
2184 maxclients: 100
2185 keepalive: "off"
2186+
2187+
2188+# Capture all subprocess output into a logfile
2189+# Useful for troubleshooting cloud-init issues
2190+output: {all: '| tee -a /var/log/cloud-init-output.log'}
2191
2192=== modified file 'doc/examples/cloud-config-datasources.txt'
2193--- doc/examples/cloud-config-datasources.txt 2011-12-19 17:00:48 +0000
2194+++ doc/examples/cloud-config-datasources.txt 2012-06-13 13:16:19 +0000
2195@@ -13,3 +13,21 @@
2196 metadata_urls:
2197 - http://169.254.169.254:80
2198 - http://instance-data:8773
2199+
2200+ MAAS:
2201+ timeout : 50
2202+ max_wait : 120
2203+
2204+ # there are no default values for metadata_url or oauth credentials
2205+ # If no credentials are present, non-authed attempts will be made.
2206+ metadata_url: http://mass-host.localdomain/source
2207+ consumer_key: Xh234sdkljf
2208+ token_key: kjfhgb3n
2209+ token_secret: 24uysdfx1w4
2210+
2211+ NoCloud:
2212+ # default seedfrom is None
2213+ # if found, then it should contain a url with:
2214+ # <url>/user-data and <url>/meta-data
2215+ # seedfrom: http://my.example.com/i-abcde
2216+ seedfrom: None
2217
2218=== modified file 'doc/examples/cloud-config.txt'
2219--- doc/examples/cloud-config.txt 2011-12-20 16:40:51 +0000
2220+++ doc/examples/cloud-config.txt 2012-06-13 13:16:19 +0000
2221@@ -45,6 +45,15 @@
2222 # apt_proxy (configure Acquire::HTTP::Proxy)
2223 apt_proxy: http://my.apt.proxy:3128
2224
2225+# apt_pipelining (configure Acquire::http::Pipeline-Depth)
2226+# Default: disables HTTP pipelining. Certain web servers, such
2227+# as S3, do not pipeline properly (LP: #948461).
2228+# Valid options:
2229+# False/default: Disables pipelining for APT
2230+# None/Unchanged: Use OS default
2231+# Number: Set pipelining to some number (not recommended)
2232+apt_pipelining: False
2233+
2234 # Preserve existing /etc/apt/sources.list
2235 # Default: overwrite sources_list with mirror. If this is true
2236 # then apt_mirror above will have no effect
2237@@ -342,6 +351,8 @@
2238 # this allows you to launch an instance with a larger disk / partition
2239 # and have the instance automatically grow / to accomoddate it
2240 # set to 'False' to disable
2241+# by default, the resizefs is done early in boot, and blocks boot until done
2242+# if resize_rootfs is set to 'noblock', then it will be run in parallel
2243 resize_rootfs: True
2244
2245 ## hostname and /etc/hosts management
2246
2247=== added file 'doc/kernel-cmdline.txt'
2248--- doc/kernel-cmdline.txt 1970-01-01 00:00:00 +0000
2249+++ doc/kernel-cmdline.txt 2012-06-13 13:16:19 +0000
2250@@ -0,0 +1,48 @@
2251+In order to allow an ephemeral, or otherwise pristine image to
2252+receive some configuration, cloud-init will read a url directed by
2253+the kernel command line and proceed as if its data had previously existed.
2254+
2255+This allows for configuring a meta-data service, or some other data.
2256+
2257+Note that usage of the kernel command line is somewhat of a last resort,
2258+as it requires knowing in advance the correct command line or modifying
2259+the boot loader to append data.
2260+
2261+For example, when 'cloud-init start' runs, it will check to
2262+see if one of 'cloud-config-url' or 'url' appears in key/value fashion
2263+in the kernel command line as in:
2264+ root=/dev/sda ro url=http://foo.bar.zee/abcde
2265+
2266+Cloud-init will then read the contents of the given url.
2267+If the content starts with '#cloud-config', it will store
2268+that data to the local filesystem in a static filename
2269+'/etc/cloud/cloud.cfg.d/91_kernel_cmdline_url.cfg', and consider it as
2270+part of the config from that point forward.
2271+
2272+If that file exists already, it will not be overwritten, and the url parameters
2273+are completely ignored.
2274+
2275+Then, when the DataSource runs, it will find that config already available.
2276+
2277+So, in order to configure the MAAS DataSource by controlling the kernel
2278+command line from outside the image, you can append:
2279+ url=http://your.url.here/abcdefg
2280+or
2281+ cloud-config-url=http://your.url.here/abcdefg
2282+
2283+Then, have the following content at that url:
2284+ #cloud-config
2285+ datasource:
2286+ MAAS:
2287+ metadata_url: http://mass-host.localdomain/source
2288+ consumer_key: Xh234sdkljf
2289+ token_key: kjfhgb3n
2290+ token_secret: 24uysdfx1w4
2291+
2292+Notes:
2293+ * Because 'url=' is so very generic, in order to avoid false positives,
2294+ cloud-init requires the content to start with '#cloud-config' in order
2295+ for it to be considered.
2296+ * The url= is an unauthenticated http GET, and it contains credentials.
2297+ It could be set up to be randomly generated and also to check the source
2298+ address in order to be more secure.
2299
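[Editor's note: a rough sketch, not part of the merge, of what the
'cloud-init start' check described above does, using the get_cmdline_url
helper this branch adds to cloudinit/__init__.py; the control flow here is
only an approximation of the real code.]

    import os
    from cloudinit import get_cmdline_url
    from cloudinit.util import write_file

    fname = "/etc/cloud/cloud.cfg.d/91_kernel_cmdline_url.cfg"
    if not os.path.exists(fname):
        (key, url, content) = get_cmdline_url(
            names=('cloud-config-url', 'url'), starts="#cloud-config")
        if content:  # only stored when the body starts with '#cloud-config'
            write_file(fname, content, mode=0600)
    # if the file already exists, the url= parameters are ignored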
2300=== added directory 'doc/nocloud'
2301=== added file 'doc/nocloud/README'
2302--- doc/nocloud/README 1970-01-01 00:00:00 +0000
2303+++ doc/nocloud/README 2012-06-13 13:16:19 +0000
2304@@ -0,0 +1,55 @@
2304+The data sources 'NoCloud' and 'NoCloudNet' allow the user to provide user-data
2305+and meta-data to the instance without running a network service (or even without
2306+having a network at all).
2308+
2309+You can provide meta-data and user-data to a local vm boot via files on a vfat
2310+or iso9660 filesystem. These user-data and meta-data files are expected to be
2311+in the format described in doc/examples/seed/README. Basically, user-data is
2312+simply user-data and meta-data is a yaml formatted file representing what you'd
2313+find in the EC2 metadata service.
2314+
2315+Given a 12.04 cloud image in 'disk.img', you can create a sufficient seed disk
2316+by following the example below.
2317+
2318+## create user-data and meta-data files that will be used
2319+## to modify image on first boot
2320+$ { echo instance-id: iid-local01; echo local-hostname: cloudimg; } > meta-data
2321+
2322+$ printf "#cloud-config\npassword: passw0rd\nchpasswd: { expire: False }\nssh_pwauth: True\n" > user-data
2323+
2324+## create a disk to attach with some user-data and meta-data
2325+$ genisoimage -output seed.iso -volid cidata -joliet -rock user-data meta-data
2326+
2327+## alternatively, create a vfat filesystem with same files
2328+## $ truncate --size 2M seed.img
2329+## $ mkfs.vfat -n cidata seed.img
2330+## $ mcopy -oi seed.img user-data meta-data ::
2331+
2332+## create a new qcow image to boot, backed by your original image
2333+$ qemu-img create -f qcow2 -b disk.img boot-disk.img
2334+
2335+## boot the image and login as 'ubuntu' with password 'passw0rd'
2336+## note, passw0rd was set as password through the user-data above,
2337+## there is no password set on these images.
2338+$ kvm -m 256 \
2339+ -net nic -net user,hostfwd=tcp::2222-:22 \
2340+ -drive file=boot-disk.img,if=virtio \
2341+ -drive file=seed.iso,if=virtio
2342+
2343+Note that the instance-id provided ('iid-local01' above) is what is used to
2344+determine if this is "first boot". So if you are making updates to user-data
2345+you will also have to change that, or start the disk fresh.
2346+
2347+
2348+Also, you can inject an /etc/network/interfaces file by providing the content
2349+for that file in the 'network-interfaces' field of metadata. Example metadata:
2350+ instance-id: iid-abcdefg
2351+ network-interfaces: |
2352+ iface eth0 inet static
2353+ address 192.168.1.10
2354+ network 192.168.1.0
2355+ netmask 255.255.255.0
2356+ broadcast 192.168.1.255
2357+ gateway 192.168.1.254
2358+ hostname: myhost
2359+
2360
2361=== modified file 'setup.py'
2362--- setup.py 2011-12-20 16:39:46 +0000
2363+++ setup.py 2012-06-13 13:16:19 +0000
2364@@ -47,5 +47,6 @@
2365 ('/usr/share/doc/cloud-init', filter(is_f,glob('doc/*'))),
2366 ('/usr/share/doc/cloud-init/examples', filter(is_f,glob('doc/examples/*'))),
2367 ('/usr/share/doc/cloud-init/examples/seed', filter(is_f,glob('doc/examples/seed/*'))),
2368+ ('/etc/profile.d', ['tools/Z99-cloud-locale-test.sh']),
2369 ],
2370 )
2371
2372=== added file 'tests/unittests/test__init__.py'
2373--- tests/unittests/test__init__.py 1970-01-01 00:00:00 +0000
2374+++ tests/unittests/test__init__.py 2012-06-13 13:16:19 +0000
2375@@ -0,0 +1,242 @@
2376+from mocker import MockerTestCase, ANY, ARGS, KWARGS
2377+import os
2378+
2379+from cloudinit import (partwalker_handle_handler, handler_handle_part,
2380+ handler_register, get_cmdline_url)
2381+from cloudinit.util import write_file, logexc, readurl
2382+
2383+
2384+class TestPartwalkerHandleHandler(MockerTestCase):
2385+ def setUp(self):
2386+ self.data = {
2387+ "handlercount": 0,
2388+ "frequency": "?",
2389+ "handlerdir": "?",
2390+ "handlers": [],
2391+ "data": None}
2392+
2393+ self.expected_module_name = "part-handler-%03d" % (
2394+ self.data["handlercount"],)
2395+ expected_file_name = "%s.py" % self.expected_module_name
2396+ expected_file_fullname = os.path.join(self.data["handlerdir"],
2397+ expected_file_name)
2398+ self.module_fake = "fake module handle"
2399+ self.ctype = None
2400+ self.filename = None
2401+ self.payload = "dummy payload"
2402+
2403+ # Mock the write_file function
2404+ write_file_mock = self.mocker.replace(write_file, passthrough=False)
2405+ write_file_mock(expected_file_fullname, self.payload, 0600)
2406+
2407+ def test_no_errors(self):
2408+ """Payload gets written to file and added to C{pdata}."""
2409+ # Mock the __import__ builtin
2410+ import_mock = self.mocker.replace("__builtin__.__import__")
2411+ import_mock(self.expected_module_name)
2412+ self.mocker.result(self.module_fake)
2413+ # Mock the handle_register function
2414+ handle_reg_mock = self.mocker.replace(handler_register,
2415+ passthrough=False)
2416+ handle_reg_mock(self.module_fake, self.data["handlers"],
2417+ self.data["data"], self.data["frequency"])
2418+ # Activate mocks
2419+ self.mocker.replay()
2420+
2421+ partwalker_handle_handler(self.data, self.ctype, self.filename,
2422+ self.payload)
2423+
2424+ self.assertEqual(1, self.data["handlercount"])
2425+
2426+ def test_import_error(self):
2427+ """Module import errors are logged. No handler added to C{pdata}"""
2428+ # Mock the __import__ builtin
2429+ import_mock = self.mocker.replace("__builtin__.__import__")
2430+ import_mock(self.expected_module_name)
2431+ self.mocker.throw(ImportError())
2432+ # Mock log function
2433+ logexc_mock = self.mocker.replace(logexc, passthrough=False)
2434+ logexc_mock(ANY)
2435+ # Mock the print_exc function
2436+ print_exc_mock = self.mocker.replace("traceback.print_exc",
2437+ passthrough=False)
2438+ print_exc_mock(ARGS, KWARGS)
2439+ # Activate mocks
2440+ self.mocker.replay()
2441+
2442+ partwalker_handle_handler(self.data, self.ctype, self.filename,
2443+ self.payload)
2444+
2445+ self.assertEqual(0, self.data["handlercount"])
2446+
2447+ def test_attribute_error(self):
2448+ """Attribute errors are logged. No handler added to C{pdata}"""
2449+ # Mock the __import__ builtin
2450+ import_mock = self.mocker.replace("__builtin__.__import__")
2451+ import_mock(self.expected_module_name)
2452+ self.mocker.result(self.module_fake)
2453+ # Mock the handle_register function
2454+ handle_reg_mock = self.mocker.replace(handler_register,
2455+ passthrough=False)
2456+ handle_reg_mock(self.module_fake, self.data["handlers"],
2457+ self.data["data"], self.data["frequency"])
2458+ self.mocker.throw(AttributeError())
2459+ # Mock log function
2460+ logexc_mock = self.mocker.replace(logexc, passthrough=False)
2461+ logexc_mock(ANY)
2462+ # Mock the print_exc function
2463+ print_exc_mock = self.mocker.replace("traceback.print_exc",
2464+ passthrough=False)
2465+ print_exc_mock(ARGS, KWARGS)
2466+ # Activate mocks
2467+ self.mocker.replay()
2468+
2469+ partwalker_handle_handler(self.data, self.ctype, self.filename,
2470+ self.payload)
2471+
2472+ self.assertEqual(0, self.data["handlercount"])
2473+
2474+
2475+class TestHandlerHandlePart(MockerTestCase):
2476+ def setUp(self):
2477+ self.data = "fake data"
2478+ self.ctype = "fake ctype"
2479+ self.filename = "fake filename"
2480+ self.payload = "fake payload"
2481+ self.frequency = "once-per-instance"
2482+
2483+ def test_normal_version_1(self):
2484+ """
2485+ C{handle_part} is called without C{frequency} for
2486+ C{handler_version} == 1.
2487+ """
2488+ # Build a mock part-handler module
2489+ mod_mock = self.mocker.mock()
2490+ getattr(mod_mock, "frequency")
2491+ self.mocker.result("once-per-instance")
2492+ getattr(mod_mock, "handler_version")
2493+ self.mocker.result(1)
2494+ mod_mock.handle_part(self.data, self.ctype, self.filename,
2495+ self.payload)
2496+ self.mocker.replay()
2497+
2498+ handler_handle_part(mod_mock, self.data, self.ctype, self.filename,
2499+ self.payload, self.frequency)
2500+
2501+ def test_normal_version_2(self):
2502+ """
2503+ C{handle_part} is called with C{frequency} for
2504+ C{handler_version} == 2.
2505+ """
2506+ # Build a mock part-handler module
2507+ mod_mock = self.mocker.mock()
2508+ getattr(mod_mock, "frequency")
2509+ self.mocker.result("once-per-instance")
2510+ getattr(mod_mock, "handler_version")
2511+ self.mocker.result(2)
2512+ mod_mock.handle_part(self.data, self.ctype, self.filename,
2513+ self.payload, self.frequency)
2514+ self.mocker.replay()
2515+
2516+ handler_handle_part(mod_mock, self.data, self.ctype, self.filename,
2517+ self.payload, self.frequency)
2518+
2519+ def test_modfreq_per_always(self):
2520+ """
2521+ C{handle_part} is called regardless of frequency if nofreq is always.
2522+ """
2523+ self.frequency = "once"
2524+ # Build a mock part-handler module
2525+ mod_mock = self.mocker.mock()
2526+ getattr(mod_mock, "frequency")
2527+ self.mocker.result("always")
2528+ getattr(mod_mock, "handler_version")
2529+ self.mocker.result(1)
2530+ mod_mock.handle_part(self.data, self.ctype, self.filename,
2531+ self.payload)
2532+ self.mocker.replay()
2533+
2534+ handler_handle_part(mod_mock, self.data, self.ctype, self.filename,
2535+ self.payload, self.frequency)
2536+
2537+ def test_no_handle_when_modfreq_once(self):
2538+ """C{handle_part} is not called if frequency is once"""
2539+ self.frequency = "once"
2540+ # Build a mock part-handler module
2541+ mod_mock = self.mocker.mock()
2542+ getattr(mod_mock, "frequency")
2543+ self.mocker.result("once-per-instance")
2544+ self.mocker.replay()
2545+
2546+ handler_handle_part(mod_mock, self.data, self.ctype, self.filename,
2547+ self.payload, self.frequency)
2548+
2549+ def test_exception_is_caught(self):
2550+ """Exceptions within C{handle_part} are caught and logged."""
2551+ # Build a mock part-handler module
2552+ mod_mock = self.mocker.mock()
2553+ getattr(mod_mock, "frequency")
2554+ self.mocker.result("once-per-instance")
2555+ getattr(mod_mock, "handler_version")
2556+ self.mocker.result(1)
2557+ mod_mock.handle_part(self.data, self.ctype, self.filename,
2558+ self.payload)
2559+ self.mocker.throw(Exception())
2560+ # Mock log function
2561+ logexc_mock = self.mocker.replace(logexc, passthrough=False)
2562+ logexc_mock(ANY)
2563+ # Mock the print_exc function
2564+ print_exc_mock = self.mocker.replace("traceback.print_exc",
2565+ passthrough=False)
2566+ print_exc_mock(ARGS, KWARGS)
2567+ self.mocker.replay()
2568+
2569+ handler_handle_part(mod_mock, self.data, self.ctype, self.filename,
2570+ self.payload, self.frequency)
2571+
2572+
2573+class TestCmdlineUrl(MockerTestCase):
2574+ def test_invalid_content(self):
2575+ url = "http://example.com/foo"
2576+ key = "mykey"
2577+ payload = "0"
2578+ cmdline = "ro %s=%s bar=1" % (key, url)
2579+
2580+ mock_readurl = self.mocker.replace(readurl, passthrough=False)
2581+ mock_readurl(url)
2582+ self.mocker.result(payload)
2583+
2584+ self.mocker.replay()
2585+
2586+ self.assertEqual((key, url, None),
2587+ get_cmdline_url(names=[key], starts="xxxxxx", cmdline=cmdline))
2588+
2589+ def test_valid_content(self):
2590+ url = "http://example.com/foo"
2591+ key = "mykey"
2592+ payload = "xcloud-config\nmydata: foo\nbar: wark\n"
2593+ cmdline = "ro %s=%s bar=1" % (key, url)
2594+
2595+ mock_readurl = self.mocker.replace(readurl, passthrough=False)
2596+ mock_readurl(url)
2597+ self.mocker.result(payload)
2598+
2599+ self.mocker.replay()
2600+
2601+ self.assertEqual((key, url, payload),
2602+ get_cmdline_url(names=[key], starts="xcloud-config",
2603+ cmdline=cmdline))
2604+
2605+ def test_no_key_found(self):
2606+ url = "http://example.com/foo"
2607+ key = "mykey"
2608+ cmdline = "ro %s=%s bar=1" % (key, url)
2609+
2610+ self.mocker.replace(readurl, passthrough=False)
2611+ self.mocker.replay()
2612+
2613+ self.assertEqual((None, None, None),
2614+ get_cmdline_url(names=["does-not-appear"],
2615+ starts="#cloud-config", cmdline=cmdline))
2616+
2617+# vi: ts=4 expandtab
2618
2619=== added directory 'tests/unittests/test_datasource'
2620=== added file 'tests/unittests/test_datasource/test_maas.py'
2621--- tests/unittests/test_datasource/test_maas.py 1970-01-01 00:00:00 +0000
2622+++ tests/unittests/test_datasource/test_maas.py 2012-06-13 13:16:19 +0000
2623@@ -0,0 +1,153 @@
2624+from tempfile import mkdtemp
2625+from shutil import rmtree
2626+import os
2627+from StringIO import StringIO
2628+from copy import copy
2629+from cloudinit.DataSourceMAAS import (
2630+ MAASSeedDirNone,
2631+ MAASSeedDirMalformed,
2632+ read_maas_seed_dir,
2633+ read_maas_seed_url,
2634+)
2635+from mocker import MockerTestCase
2636+
2637+
2638+class TestMAASDataSource(MockerTestCase):
2639+
2640+ def setUp(self):
2641+ super(TestMAASDataSource, self).setUp()
2642+ # Make a temp directory for tests to use.
2643+ self.tmp = mkdtemp(prefix="unittest_")
2644+
2645+ def tearDown(self):
2646+ super(TestMAASDataSource, self).tearDown()
2647+ # Clean up temp directory
2648+ rmtree(self.tmp)
2649+
2650+ def test_seed_dir_valid(self):
2651+ """Verify a valid seeddir is read as such"""
2652+
2653+ data = {'instance-id': 'i-valid01',
2654+ 'local-hostname': 'valid01-hostname',
2655+ 'user-data': 'valid01-userdata',
2656+ 'public-keys': 'ssh-rsa AAAAB3Nz...aC1yc2E= keyname'}
2657+
2658+ my_d = os.path.join(self.tmp, "valid")
2659+ populate_dir(my_d, data)
2660+
2661+ (userdata, metadata) = read_maas_seed_dir(my_d)
2662+
2663+ self.assertEqual(userdata, data['user-data'])
2664+ for key in ('instance-id', 'local-hostname'):
2665+ self.assertEqual(data[key], metadata[key])
2666+
2667+ # verify that 'userdata' is not returned as part of the metadata
2668+ self.assertFalse(('user-data' in metadata))
2669+
2670+ def test_seed_dir_valid_extra(self):
2671+ """Verify extra files do not affect seed_dir validity """
2672+
2673+ data = {'instance-id': 'i-valid-extra',
2674+ 'local-hostname': 'valid-extra-hostname',
2675+ 'user-data': 'valid-extra-userdata', 'foo': 'bar'}
2676+
2677+ my_d = os.path.join(self.tmp, "valid_extra")
2678+ populate_dir(my_d, data)
2679+
2680+ (userdata, metadata) = read_maas_seed_dir(my_d)
2681+
2682+ self.assertEqual(userdata, data['user-data'])
2683+ for key in ('instance-id', 'local-hostname'):
2684+ self.assertEqual(data[key], metadata[key])
2685+
2686+ # additional files should not just appear as keys in metadata atm
2687+ self.assertFalse(('foo' in metadata))
2688+
2689+ def test_seed_dir_invalid(self):
2690+ """Verify that invalid seed_dir raises MAASSeedDirMalformed"""
2691+
2692+ valid = {'instance-id': 'i-instanceid',
2693+ 'local-hostname': 'test-hostname', 'user-data': ''}
2694+
2695+ my_based = os.path.join(self.tmp, "valid_extra")
2696+
2697+ # missing 'local-hostname'
2698+ my_d = "%s-01" % my_based
2699+ invalid_data = copy(valid)
2700+ del invalid_data['local-hostname']
2701+ populate_dir(my_d, invalid_data)
2702+ self.assertRaises(MAASSeedDirMalformed, read_maas_seed_dir, my_d)
2703+
2704+ # missing 'instance-id'
2705+ my_d = "%s-02" % my_based
2706+ invalid_data = copy(valid)
2707+ del invalid_data['instance-id']
2708+ populate_dir(my_d, invalid_data)
2709+ self.assertRaises(MAASSeedDirMalformed, read_maas_seed_dir, my_d)
2710+
2711+ def test_seed_dir_none(self):
2712+ """Verify that empty seed_dir raises MAASSeedDirNone"""
2713+
2714+ my_d = os.path.join(self.tmp, "valid_empty")
2715+ self.assertRaises(MAASSeedDirNone, read_maas_seed_dir, my_d)
2716+
2717+ def test_seed_dir_missing(self):
2718+ """Verify that missing seed_dir raises MAASSeedDirNone"""
2719+ self.assertRaises(MAASSeedDirNone, read_maas_seed_dir,
2720+ os.path.join(self.tmp, "nonexistantdirectory"))
2721+
2722+ def test_seed_url_valid(self):
2723+ """Verify that valid seed_url is read as such"""
2724+ valid = {'meta-data/instance-id': 'i-instanceid',
2725+ 'meta-data/local-hostname': 'test-hostname',
2726+ 'meta-data/public-keys': 'test-hostname',
2727+ 'user-data': 'foodata'}
2728+
2729+ my_seed = "http://example.com/xmeta"
2730+ my_ver = "1999-99-99"
2731+ my_headers = {'header1': 'value1', 'header2': 'value2'}
2732+
2733+ def my_headers_cb(url):
2734+ return(my_headers)
2735+
2736+ mock_request = self.mocker.replace("urllib2.Request",
2737+ passthrough=False)
2738+ mock_urlopen = self.mocker.replace("urllib2.urlopen",
2739+ passthrough=False)
2740+
2741+ for (key, val) in valid.iteritems():
2742+ mock_request("%s/%s/%s" % (my_seed, my_ver, key),
2743+ data=None, headers=my_headers)
2744+ self.mocker.nospec()
2745+ self.mocker.result("fake-request-%s" % key)
2746+ mock_urlopen("fake-request-%s" % key, timeout=None)
2747+ self.mocker.result(StringIO(val))
2748+
2749+ self.mocker.replay()
2750+
2751+ (userdata, metadata) = read_maas_seed_url(my_seed,
2752+ header_cb=my_headers_cb, version=my_ver)
2753+
2754+ self.assertEqual("foodata", userdata)
2755+ self.assertEqual(metadata['instance-id'],
2756+ valid['meta-data/instance-id'])
2757+ self.assertEqual(metadata['local-hostname'],
2758+ valid['meta-data/local-hostname'])
2759+
2760+ def test_seed_url_invalid(self):
2761+ """Verify that invalid seed_url raises MAASSeedDirMalformed"""
2762+ pass
2763+
2764+ def test_seed_url_missing(self):
2765+ """Verify seed_url with no found entries raises MAASSeedDirNone"""
2766+ pass
2767+
2768+
2769+def populate_dir(seed_dir, files):
2770+ os.mkdir(seed_dir)
2771+ for (name, content) in files.iteritems():
2772+ with open(os.path.join(seed_dir, name), "w") as fp:
2773+ fp.write(content)
2774+ fp.close()
2775+
2776+# vi: ts=4 expandtab
2777
2778=== added directory 'tests/unittests/test_handler'
2779=== renamed file 'tests/unittests/test_handler_ca_certs.py' => 'tests/unittests/test_handler/test_handler_ca_certs.py'
2780--- tests/unittests/test_handler_ca_certs.py 2012-01-17 21:38:01 +0000
2781+++ tests/unittests/test_handler/test_handler_ca_certs.py 2012-06-13 13:16:19 +0000
2782@@ -169,10 +169,14 @@
2783 mock_delete_dir_contents = self.mocker.replace(delete_dir_contents,
2784 passthrough=False)
2785 mock_write = self.mocker.replace(write_file, passthrough=False)
2786+ mock_subp = self.mocker.replace("cloudinit.util.subp",
2787+ passthrough=False)
2788
2789 mock_delete_dir_contents("/usr/share/ca-certificates/")
2790 mock_delete_dir_contents("/etc/ssl/certs/")
2791 mock_write("/etc/ca-certificates.conf", "", mode=0644)
2792+ mock_subp(('debconf-set-selections', '-'),
2793+ "ca-certificates ca-certificates/trust_new_crts select no")
2794 self.mocker.replay()
2795
2796 remove_default_ca_certs()
2797
2798=== added file 'tests/unittests/test_userdata.py'
2799--- tests/unittests/test_userdata.py 1970-01-01 00:00:00 +0000
2800+++ tests/unittests/test_userdata.py 2012-06-13 13:16:19 +0000
2801@@ -0,0 +1,107 @@
2802+"""Tests for handling of userdata within cloud init"""
2803+
2804+import logging
2805+import StringIO
2806+
2807+from email.mime.base import MIMEBase
2808+
2809+from mocker import MockerTestCase
2810+
2811+import cloudinit
2812+from cloudinit.DataSource import DataSource
2813+
2814+
2815+instance_id = "i-testing"
2816+
2817+
2818+class FakeDataSource(DataSource):
2819+
2820+ def __init__(self, userdata):
2821+ DataSource.__init__(self)
2822+ self.metadata = {'instance-id': instance_id}
2823+ self.userdata_raw = userdata
2824+
2825+
2826+class TestConsumeUserData(MockerTestCase):
2827+
2828+ _log_handler = None
2829+ _log = None
2830+ log_file = None
2831+
2832+ def setUp(self):
2833+ self.mock_write = self.mocker.replace("cloudinit.util.write_file",
2834+ passthrough=False)
2835+ self.mock_write(self.get_ipath("cloud_config"), "", 0600)
2836+ self.capture_log()
2837+
2838+ def tearDown(self):
2839+ self._log.removeHandler(self._log_handler)
2840+
2841+ @staticmethod
2842+ def get_ipath(name):
2843+ return "%s/instances/%s%s" % (cloudinit.varlibdir, instance_id,
2844+ cloudinit.pathmap[name])
2845+
2846+ def capture_log(self):
2847+ self.log_file = StringIO.StringIO()
2848+ self._log_handler = logging.StreamHandler(self.log_file)
2849+ self._log_handler.setLevel(logging.DEBUG)
2850+ self._log = logging.getLogger(cloudinit.logger_name)
2851+ self._log.addHandler(self._log_handler)
2852+
2853+ def test_unhandled_type_warning(self):
2854+ """Raw text without magic is ignored but shows warning"""
2855+ self.mocker.replay()
2856+ ci = cloudinit.CloudInit()
2857+ ci.datasource = FakeDataSource("arbitrary text\n")
2858+ ci.consume_userdata()
2859+ self.assertEqual(
2860+ "Unhandled non-multipart userdata starting 'arbitrary text...'\n",
2861+ self.log_file.getvalue())
2862+
2863+ def test_mime_text_plain(self):
2864+ """Mime message of type text/plain is ignored without warning"""
2865+ self.mocker.replay()
2866+ ci = cloudinit.CloudInit()
2867+ message = MIMEBase("text", "plain")
2868+ message.set_payload("Just text")
2869+ ci.datasource = FakeDataSource(message.as_string())
2870+ ci.consume_userdata()
2871+ self.assertEqual("", self.log_file.getvalue())
2872+
2873+ def test_shellscript(self):
2874+ """Raw text starting #!/bin/sh is treated as script"""
2875+ script = "#!/bin/sh\necho hello\n"
2876+ outpath = cloudinit.get_ipath_cur("scripts") + "/part-001"
2877+ self.mock_write(outpath, script, 0700)
2878+ self.mocker.replay()
2879+ ci = cloudinit.CloudInit()
2880+ ci.datasource = FakeDataSource(script)
2881+ ci.consume_userdata()
2882+ self.assertEqual("", self.log_file.getvalue())
2883+
2884+ def test_mime_text_x_shellscript(self):
2885+ """Mime message of type text/x-shellscript is treated as script"""
2886+ script = "#!/bin/sh\necho hello\n"
2887+ outpath = cloudinit.get_ipath_cur("scripts") + "/part-001"
2888+ self.mock_write(outpath, script, 0700)
2889+ self.mocker.replay()
2890+ ci = cloudinit.CloudInit()
2891+ message = MIMEBase("text", "x-shellscript")
2892+ message.set_payload(script)
2893+ ci.datasource = FakeDataSource(message.as_string())
2894+ ci.consume_userdata()
2895+ self.assertEqual("", self.log_file.getvalue())
2896+
2897+ def test_mime_text_plain_shell(self):
2898+ """Mime type text/plain starting #!/bin/sh is treated as script"""
2899+ script = "#!/bin/sh\necho hello\n"
2900+ outpath = cloudinit.get_ipath_cur("scripts") + "/part-001"
2901+ self.mock_write(outpath, script, 0700)
2902+ self.mocker.replay()
2903+ ci = cloudinit.CloudInit()
2904+ message = MIMEBase("text", "plain")
2905+ message.set_payload(script)
2906+ ci.datasource = FakeDataSource(message.as_string())
2907+ ci.consume_userdata()
2908+ self.assertEqual("", self.log_file.getvalue())
2909
2910=== modified file 'tests/unittests/test_util.py'
2911--- tests/unittests/test_util.py 2012-01-17 17:35:31 +0000
2912+++ tests/unittests/test_util.py 2012-06-13 13:16:19 +0000
2913@@ -6,7 +6,8 @@
2914 import stat
2915
2916 from cloudinit.util import (mergedict, get_cfg_option_list_or_str, write_file,
2917- delete_dir_contents)
2918+ delete_dir_contents, get_cmdline,
2919+ keyval_str_to_dict)
2920
2921
2922 class TestMergeDict(TestCase):
2923@@ -28,7 +29,7 @@
2924 def test_merge_does_not_override(self):
2925 """Test that candidate doesn't override source."""
2926 source = {"key1": "value1", "key2": "value2"}
2927- candidate = {"key2": "value2", "key2": "NEW VALUE"}
2928+ candidate = {"key1": "value2", "key2": "NEW VALUE"}
2929 result = mergedict(source, candidate)
2930 self.assertEqual(source, result)
2931
2932@@ -248,3 +249,18 @@
2933 delete_dir_contents(self.tmp)
2934
2935 self.assertDirEmpty(self.tmp)
2936+
2937+
2938+class TestKeyValStrings(TestCase):
2939+ def test_keyval_str_to_dict(self):
2940+ expected = {'1': 'one', '2': 'one+one', 'ro': True}
2941+ cmdline = "1=one ro 2=one+one"
2942+ self.assertEqual(expected, keyval_str_to_dict(cmdline))
2943+
2944+
2945+class TestGetCmdline(TestCase):
2946+ def test_cmdline_reads_debug_env(self):
2947+ os.environ['DEBUG_PROC_CMDLINE'] = 'abcd 123'
2948+ self.assertEqual(os.environ['DEBUG_PROC_CMDLINE'], get_cmdline())
2949+
2950+# vi: ts=4 expandtab
2951
2952=== added file 'tools/Z99-cloud-locale-test.sh'
2953--- tools/Z99-cloud-locale-test.sh 1970-01-01 00:00:00 +0000
2954+++ tools/Z99-cloud-locale-test.sh 2012-06-13 13:16:19 +0000
2955@@ -0,0 +1,92 @@
2956+#!/bin/sh
2957+# vi: ts=4 noexpandtab
2958+#
2959+# Author: Ben Howard <ben.howard@canonical.com>
2960+# Author: Scott Moser <scott.moser@ubuntu.com>
2961+# (c) 2012, Canonical Group, Ltd.
2962+#
2963+# Purpose: Detect invalid locale settings and inform the user
2964+# of how to fix them.
2965+#
2966+
2967+locale_warn() {
2968+ local cr="
2969+"
2970+ local bad_names="" bad_lcs="" key="" value="" var=""
2971+ local w1 w2 w3 w4 remain
2972+ # locale is expected to output either:
2973+ # VARIABLE=
2974+ # VARIABLE="value"
2975+ # locale: Cannot set LC_SOMETHING to default locale
2976+ while read -r w1 w2 w3 w4 remain; do
2977+ case "$w1" in
2978+ locale:) bad_names="${bad_names} ${w4}";;
2979+ *)
2980+ key=${w1%%=*}
2981+ val=${w1#*=}
2982+ val=${val#\"}
2983+ val=${val%\"}
2984+ vars="${vars} $key=$val";;
2985+ esac
2986+ done
2987+ for bad in $bad_names; do
2988+ for var in ${vars}; do
2989+ [ "${bad}" = "${var%=*}" ] || continue
2990+ value=${var#*=}
2991+ [ "${bad_lcs#* ${value}}" = "${bad_lcs}" ] &&
2992+ bad_lcs="${bad_lcs} ${value}"
2993+ break
2994+ done
2995+ done
2996+ bad_lcs=${bad_lcs# }
2997+ [ -n "$bad_lcs" ] || return 0
2998+
2999+ printf "_____________________________________________________________________\n"
3000+ printf "WARNING! Your environment specifies an invalid locale.\n"
3001+ printf " This can affect your user experience significantly, including the\n"
3002+ printf " ability to manage packages. You may install the locales by running:\n\n"
3003+
3004+ local bad invalid="" to_gen="" sfile="/usr/share/i18n/SUPPORTED"
3005+ local pkgs=""
3006+ if [ -e "$sfile" ]; then
3007+ for bad in ${bad_lcs}; do
3008+ grep -q -i "${bad}" "$sfile" &&
3009+ to_gen="${to_gen} ${bad}" ||
3010+ invalid="${invalid} ${bad}"
3011+ done
3012+ else
3013+ printf " sudo apt-get install locales\n"
3014+ to_gen=$bad_lcs
3015+ fi
3016+ to_gen=${to_gen# }
3017+
3018+ local pkgs=""
3019+ for bad in ${to_gen}; do
3020+ pkgs="${pkgs} language-pack-${bad%%_*}"
3021+ done
3022+ pkgs=${pkgs# }
3023+
3024+ if [ -n "${pkgs}" ]; then
3025+ printf " sudo apt-get install ${pkgs# }\n"
3026+ printf " or\n"
3027+ printf " sudo locale-gen ${to_gen# }\n"
3028+ printf "\n"
3029+ fi
3030+ for bad in ${invalid}; do
3031+ printf "WARNING: '${bad}' is an invalid locale\n"
3032+ done
3033+
3034+ printf "To see all available language packs, run:\n"
3035+ printf " apt-cache search \"^language-pack-[a-z][a-z]$\"\n"
3036+ printf "To disable this message for all users, run:\n"
3037+ printf " sudo touch /var/lib/cloud/instance/locale-check.skip\n"
3038+ printf "_____________________________________________________________________\n\n"
3039+
3040+ # only show the message once
3041+ : > ~/.cloud-locale-test.skip 2>/dev/null || :
3042+}
3043+
3044+[ -f ~/.cloud-locale-test.skip -o -f /var/lib/cloud/instance/locale-check.skip ] ||
3045+ locale 2>&1 | locale_warn
3046+
3047+unset locale_warn
3048
3049=== modified file 'tools/run-pylint'
3050--- tools/run-pylint 2012-01-17 20:59:21 +0000
3051+++ tools/run-pylint 2012-06-13 13:16:19 +0000
3052@@ -1,6 +1,8 @@
3053 #!/bin/bash
3054
3055-def_files='cloud*.py cloudinit/*.py cloudinit/CloudConfig/*.py'
3056+ci_files='cloud*.py cloudinit/*.py cloudinit/CloudConfig/*.py'
3057+test_files=$(find tests -name "*.py")
3058+def_files="$ci_files $test_files"
3059
3060 if [ $# -eq 0 ]; then
3061 files=( )