Merge lp:~cloud-init-dev/cloud-init/trunk into lp:~jbauer/cloud-init/salt

Proposed by Jeff Bauer
Status: Merged
Merged at revision: 561
Proposed branch: lp:~cloud-init-dev/cloud-init/trunk
Merge into: lp:~jbauer/cloud-init/salt
Diff against target: 3061 lines (+2239/-175)
39 files modified
ChangeLog (+39/-9)
cloud-init.py (+31/-1)
cloudinit/CloudConfig/__init__.py (+1/-1)
cloudinit/CloudConfig/cc_apt_pipelining.py (+53/-0)
cloudinit/CloudConfig/cc_ca_certs.py (+3/-1)
cloudinit/CloudConfig/cc_chef.py (+5/-5)
cloudinit/CloudConfig/cc_landscape.py (+5/-0)
cloudinit/CloudConfig/cc_resizefs.py (+31/-12)
cloudinit/CloudConfig/cc_salt_minion.py (+2/-1)
cloudinit/CloudConfig/cc_update_etc_hosts.py (+1/-1)
cloudinit/DataSource.py (+4/-1)
cloudinit/DataSourceCloudStack.py (+92/-0)
cloudinit/DataSourceConfigDrive.py (+231/-0)
cloudinit/DataSourceEc2.py (+2/-84)
cloudinit/DataSourceMAAS.py (+345/-0)
cloudinit/DataSourceNoCloud.py (+75/-2)
cloudinit/DataSourceOVF.py (+1/-1)
cloudinit/SshUtil.py (+2/-0)
cloudinit/UserDataHandler.py (+2/-2)
cloudinit/__init__.py (+44/-9)
cloudinit/netinfo.py (+5/-5)
cloudinit/util.py (+253/-30)
config/cloud.cfg (+3/-1)
debian.trunk/control (+1/-0)
doc/configdrive/README (+118/-0)
doc/examples/cloud-config-chef-oneiric.txt (+90/-0)
doc/examples/cloud-config-chef.txt (+48/-6)
doc/examples/cloud-config-datasources.txt (+18/-0)
doc/examples/cloud-config.txt (+11/-0)
doc/kernel-cmdline.txt (+48/-0)
doc/nocloud/README (+55/-0)
setup.py (+1/-0)
tests/unittests/test__init__.py (+242/-0)
tests/unittests/test_datasource/test_maas.py (+153/-0)
tests/unittests/test_handler/test_handler_ca_certs.py (+4/-0)
tests/unittests/test_userdata.py (+107/-0)
tests/unittests/test_util.py (+18/-2)
tools/Z99-cloud-locale-test.sh (+92/-0)
tools/run-pylint (+3/-1)
To merge this branch: bzr merge lp:~cloud-init-dev/cloud-init/trunk
Reviewer: Scott Moser (status: Pending)
Review via email: mp+105094@code.launchpad.net

Commit message

fix launchpad bug #996166, installs wrong salt pkg

Description of the change

Fixes: https://bugs.launchpad.net/cloud-init/+bug/996166

installs wrong package in cc_salt_minion.py

I'm not sure if I've got the bzr merge correct, but it's only a one-line modification:

=== modified file 'cloudinit/CloudConfig/cc_salt_minion.py'
--- cloudinit/CloudConfig/cc_salt_minion.py 2012-05-08 17:18:53 +0000
+++ cloudinit/CloudConfig/cc_salt_minion.py 2012-05-08 17:33:32 +0000
@@ -27,7 +27,7 @@
         return
     salt_cfg = cfg['salt_minion']
     # Start by installing the salt package ...
-    cc.install_packages(("salt",))
+    cc.install_packages(("salt-minion",))
     config_dir = '/etc/salt'
     if not os.path.isdir(config_dir):
         os.makedirs(config_dir)

lp:~cloud-init-dev/cloud-init/trunk updated
558. By Scott Moser

support relative path in AuthorizedKeysFile

559. By Scott Moser

remove usage of subprocess.check_output

in order to work on python 2.6, replace usage of check_output with util.subp.
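As a sketch of that substitution (illustrative only; the real util.subp has its own signature and error handling), a Popen-based helper that also works on Python 2.6 looks like:

```python
# Illustrative Popen-based stand-in for subprocess.check_output, which
# only exists on Python 2.7+. This mirrors the idea behind util.subp;
# the real cloud-init helper differs in signature and error handling.
import subprocess


def subp(args, input_=None):
    sp = subprocess.Popen(args, stdout=subprocess.PIPE,
                          stderr=subprocess.PIPE, stdin=subprocess.PIPE)
    (out, err) = sp.communicate(input_)
    if sp.returncode != 0:
        raise subprocess.CalledProcessError(sp.returncode, args)
    return (out, err)


(out, err) = subp(['echo', 'hello'])
```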

560. By Scott Moser

Use --quiet when running apt-get

Use the --quiet switch when running apt-get to get output suitable for
logging, rather than with pretty progress updates designed for interactive
use. This makes the log, as returned by GetConsoleOutput for instance, a
little shorter and easier to read. Some action completion notices are also
missed, but it's pretty clear still as no error output appears before
cloud-init goes on to the next thing.

Per the apt-get man page:
  Quiet; produces output suitable for logging, omitting progress indicators.
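The change amounts to one extra element in the apt-get argument list built in cloudinit/CloudConfig/__init__.py; a minimal sketch (build_apt_cmd is a hypothetical name; 'tlc' is the top-level command such as 'update' or 'install'):

```python
# Sketch of the apt-get invocation after this change; build_apt_cmd is
# an illustrative name, but the argument list mirrors the diff in
# cloudinit/CloudConfig/__init__.py ('tlc' = top-level command).
import os


def build_apt_cmd(tlc, args=()):
    cmd = ['apt-get', '--option', 'Dpkg::Options::=--force-confold',
           '--assume-yes', '--quiet', tlc]
    cmd.extend(args)
    return cmd


env = os.environ.copy()
env['DEBIAN_FRONTEND'] = 'noninteractive'  # keep dpkg from prompting
cmd = build_apt_cmd('install', ['salt-minion'])
```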

561. By Scott Moser

cc_salt_minion: install package salt-minion rather than salt

Preview Diff

=== modified file 'ChangeLog'
--- ChangeLog 2012-01-30 14:24:41 +0000
+++ ChangeLog 2012-06-13 13:16:19 +0000
@@ -1,3 +1,6 @@
+0.6.4:
+ - support relative path in AuthorizedKeysFile (LP: #970071).
+ - make apt-get update run with --quiet (suitable for logging) (LP: #1012613)
 0.6.3:
  - add sample systemd config files [Garrett Holmstrom]
  - add Fedora support [Garrent Holstrom] (LP: #883286)
@@ -8,7 +11,8 @@
  - support setting of Acquire::HTTP::Proxy via 'apt_proxy'
  - DataSourceEc2: more resilliant to slow metadata service
  - config change: 'retries' dropped, 'max_wait' added, timeout increased
- - close stdin in all cloud-init programs that are launched at boot (LP: #903993)
+ - close stdin in all cloud-init programs that are launched at boot
+   (LP: #903993)
  - revert management of /etc/hosts to 0.6.1 style (LP: #890501, LP: #871966)
  - write full ssh keys to console for easy machine consumption (LP: #893400)
  - put INSTANCE_ID environment variable in bootcmd scripts
@@ -19,9 +23,33 @@
    in the payload parameter. (LP: #874342)
  - add test case framework [Mike Milner] (LP: #890851)
  - fix pylint warnings [Juerg Haefliger] (LP: #914739)
- - add support for adding and deleting CA Certificates [Mike Milner] (LP: #915232)
+ - add support for adding and deleting CA Certificates [Mike Milner]
+   (LP: #915232)
  - in ci-info lines, use '.' to indicate empty field for easier machine reading
  - support empty lines in "#include" files (LP: #923043)
+ - support configuration of salt minions (Jeff Bauer) (LP: #927795)
+ - DataSourceOVF: only search for OVF data on ISO9660 filesystems (LP: #898373)
+ - DataSourceConfigDrive: support getting data from openstack config drive
+   (LP: #857378)
+ - DataSourceNoCloud: support seed from external disk of ISO or vfat
+   (LP: #857378)
+ - DataSourceNoCloud: support inserting /etc/network/interfaces
+ - DataSourceMaaS: add data source for Ubuntu Machines as a Service (MaaS)
+   (LP: #942061)
+ - DataSourceCloudStack: add support for CloudStack datasource [Cosmin Luta]
+ - add option 'apt_pipelining' to address issue with S3 mirrors
+   (LP: #948461) [Ben Howard]
+ - warn on non-multipart, non-handled user-data [Martin Packman]
+ - run resizefs in the background in order to not block boot (LP: #961226)
+ - Fix bug in Chef support where validation_key was present in config, but
+   'validation_cert' was not (LP: #960547)
+ - Provide user friendly message when an invalid locale is set
+   [Ben Howard] (LP: #859814)
+ - Support reading cloud-config from kernel command line parameter and
+   populating local file with it, which can then provide data for DataSources
+ - improve chef examples for working configurations on 11.10 and 12.04
+   [Lorin Hochstein] (LP: #960564)
+
 0.6.2:
  - fix bug where update was not done unless update was explicitly set.
    It would not be run if 'upgrade' or packages were set to be installed
@@ -59,18 +87,20 @@
  - support multiple staticly configured network devices, as long as
    all of them come up early (LP: #810044)
  - Changes to handling user data mean that:
-   * boothooks will now run more than once as they were intended (and as bootcmd
-     commands do)
+   * boothooks will now run more than once as they were intended (and as
+     bootcmd commands do)
    * cloud-config and user-scripts will be updated from user data every boot
  - Fix issue where 'isatty' would return true for apt-add-repository.
    apt-add-repository would get stdin which was attached to a terminal
    (/dev/console) and would thus hang when running during boot. (LP: 831505)
-   This was done by changing all users of util.subp to have None input unless specified
+   This was done by changing all users of util.subp to have None input unless
+   specified
  - Add some debug info to the console when cloud-init runs.
-   This is useful if debugging, IP and route information is printed to the console.
+   This is useful if debugging, IP and route information is printed to the
+   console.
  - change the mechanism for handling .ssh/authorized_keys, to update entries
-   rather than appending. This ensures that the authorized_keys that are being
-   inserted actually do something (LP: #434076, LP: #833499)
+   rather than appending. This ensures that the authorized_keys that are
+   being inserted actually do something (LP: #434076, LP: #833499)
  - log warning on failure to set hostname (LP: #832175)
  - upstart/cloud-init-nonet.conf: wait for all network interfaces to be up
    allow for the possibility of /var/run != /run.
 
=== modified file 'cloud-init.py'
--- cloud-init.py 2012-01-18 14:07:33 +0000
+++ cloud-init.py 2012-06-13 13:16:19 +0000
@@ -28,6 +28,7 @@
 import cloudinit.DataSource as ds
 import cloudinit.netinfo as netinfo
 import time
+import traceback
 import logging
 import errno
 import os
@@ -67,6 +68,30 @@
         warn("unable to open /proc/uptime\n")
         uptime = "na"
 
+    cmdline_msg = None
+    cmdline_exc = None
+    if cmd == "start":
+        target = "%s.d/%s" % (cloudinit.system_config,
+                              "91_kernel_cmdline_url.cfg")
+        if os.path.exists(target):
+            cmdline_msg = "cmdline: %s existed" % target
+        else:
+            cmdline = util.get_cmdline()
+            try:
+                (key, url, content) = cloudinit.get_cmdline_url(
+                    cmdline=cmdline)
+                if key and content:
+                    util.write_file(target, content, mode=0600)
+                    cmdline_msg = ("cmdline: wrote %s from %s, %s" %
+                                   (target, key, url))
+                elif key:
+                    cmdline_msg = ("cmdline: %s, %s had no cloud-config" %
+                                   (key, url))
+            except Exception:
+                cmdline_exc = ("cmdline: '%s' raised exception\n%s" %
+                               (cmdline, traceback.format_exc()))
+                warn(cmdline_exc)
+
     try:
         cfg = cloudinit.get_base_cfg(cfg_path)
     except Exception as e:
@@ -86,6 +111,11 @@
     cloudinit.logging_set_from_cfg(cfg)
     log = logging.getLogger()
 
+    if cmdline_exc:
+        log.debug(cmdline_exc)
+    elif cmdline_msg:
+        log.debug(cmdline_msg)
+
     try:
         cloudinit.initfs()
     except Exception as e:
@@ -136,7 +166,7 @@
         cloud.get_data_source()
     except cloudinit.DataSourceNotFoundException as e:
         sys.stderr.write("no instance data found in %s\n" % cmd)
-        sys.exit(1)
+        sys.exit(0)
 
     # set this as the current instance
     cloud.set_cur_instance()
 
=== modified file 'cloudinit/CloudConfig/__init__.py'
--- cloudinit/CloudConfig/__init__.py 2012-01-18 14:07:33 +0000
+++ cloudinit/CloudConfig/__init__.py 2012-06-13 13:16:19 +0000
@@ -260,7 +260,7 @@
     e = os.environ.copy()
     e['DEBIAN_FRONTEND'] = 'noninteractive'
     cmd = ['apt-get', '--option', 'Dpkg::Options::=--force-confold',
-           '--assume-yes', tlc]
+           '--assume-yes', '--quiet', tlc]
     cmd.extend(args)
     subprocess.check_call(cmd, env=e)
 
 
=== added file 'cloudinit/CloudConfig/cc_apt_pipelining.py'
--- cloudinit/CloudConfig/cc_apt_pipelining.py 1970-01-01 00:00:00 +0000
+++ cloudinit/CloudConfig/cc_apt_pipelining.py 2012-06-13 13:16:19 +0000
@@ -0,0 +1,53 @@
+# vi: ts=4 expandtab
+#
+# Copyright (C) 2011 Canonical Ltd.
+#
+# Author: Ben Howard <ben.howard@canonical.com>
+#
+# This program is free software: you can redistribute it and/or modify
+# it under the terms of the GNU General Public License version 3, as
+# published by the Free Software Foundation.
+#
+# This program is distributed in the hope that it will be useful,
+# but WITHOUT ANY WARRANTY; without even the implied warranty of
+# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
+# GNU General Public License for more details.
+#
+# You should have received a copy of the GNU General Public License
+# along with this program. If not, see <http://www.gnu.org/licenses/>.
+
+import cloudinit.util as util
+from cloudinit.CloudConfig import per_instance
+
+frequency = per_instance
+default_file = "/etc/apt/apt.conf.d/90cloud-init-pipelining"
+
+
+def handle(_name, cfg, _cloud, log, _args):
+
+    apt_pipe_value = util.get_cfg_option_str(cfg, "apt_pipelining", False)
+    apt_pipe_value = str(apt_pipe_value).lower()
+
+    if apt_pipe_value == "false":
+        write_apt_snippet("0", log)
+
+    elif apt_pipe_value in ("none", "unchanged", "os"):
+        return
+
+    elif apt_pipe_value in str(range(0, 6)):
+        write_apt_snippet(apt_pipe_value, log)
+
+    else:
+        log.warn("Invalid option for apt_pipeling: %s" % apt_pipe_value)
+
+
+def write_apt_snippet(setting, log, f_name=default_file):
+    """ Writes f_name with apt pipeline depth 'setting' """
+
+    acquire_pipeline_depth = 'Acquire::http::Pipeline-Depth "%s";\n'
+    file_contents = ("//Written by cloud-init per 'apt_pipelining'\n"
+                     + (acquire_pipeline_depth % setting))
+
+    util.write_file(f_name, file_contents)
+
+    log.debug("Wrote %s with APT pipeline setting" % f_name)
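The value handling in this new handler can be summarized as below; this is an illustrative sketch with an explicit membership list (the module itself uses a substring test against str(range(0, 6)), which on Python 2 accepts the same single-digit values), and the function name and return convention are mine:

```python
# Sketch of cc_apt_pipelining's value handling: returns the pipeline
# depth to write into the apt config snippet, or None when the OS
# default should be left alone.
def pipeline_setting(value):
    value = str(value).lower()
    if value == "false":
        return "0"  # pipelining disabled
    if value in ("none", "unchanged", "os"):
        return None  # leave the OS default untouched
    if value in [str(n) for n in range(0, 6)]:
        return value  # explicit depth 0-5
    raise ValueError("Invalid option for apt_pipelining: %s" % value)


# The snippet template the module writes for a non-None setting:
SNIPPET = 'Acquire::http::Pipeline-Depth "%s";\n'
```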
=== modified file 'cloudinit/CloudConfig/cc_ca_certs.py'
--- cloudinit/CloudConfig/cc_ca_certs.py 2012-01-17 21:38:01 +0000
+++ cloudinit/CloudConfig/cc_ca_certs.py 2012-06-13 13:16:19 +0000
@@ -16,7 +16,7 @@
 import os
 from subprocess import check_call
 from cloudinit.util import (write_file, get_cfg_option_list_or_str,
-                            delete_dir_contents)
+                            delete_dir_contents, subp)
 
 CA_CERT_PATH = "/usr/share/ca-certificates/"
 CA_CERT_FILENAME = "cloud-init-ca-certs.crt"
@@ -54,6 +54,8 @@
     delete_dir_contents(CA_CERT_PATH)
     delete_dir_contents(CA_CERT_SYSTEM_PATH)
     write_file(CA_CERT_CONFIG, "", mode=0644)
+    debconf_sel = "ca-certificates ca-certificates/trust_new_crts select no"
+    subp(('debconf-set-selections', '-'), debconf_sel)
 
 
 def handle(_name, cfg, _cloud, log, _args):
 
=== modified file 'cloudinit/CloudConfig/cc_chef.py'
--- cloudinit/CloudConfig/cc_chef.py 2012-01-18 14:07:33 +0000
+++ cloudinit/CloudConfig/cc_chef.py 2012-06-13 13:16:19 +0000
@@ -40,11 +40,11 @@
     # set the validation key based on the presence of either 'validation_key'
     # or 'validation_cert'. In the case where both exist, 'validation_key'
     # takes precedence
-    if ('validation_key' in chef_cfg or 'validation_cert' in chef_cfg):
-        validation_key = util.get_cfg_option_str(chef_cfg, 'validation_key',
-                                                 chef_cfg['validation_cert'])
-        with open('/etc/chef/validation.pem', 'w') as validation_key_fh:
-            validation_key_fh.write(validation_key)
+    for key in ('validation_key', 'validation_cert'):
+        if key in chef_cfg and chef_cfg[key]:
+            with open('/etc/chef/validation.pem', 'w') as validation_key_fh:
+                validation_key_fh.write(chef_cfg[key])
+            break
 
     # create the chef config from template
     util.render_to_file('chef_client.rb', '/etc/chef/client.rb',
 
=== modified file 'cloudinit/CloudConfig/cc_landscape.py'
--- cloudinit/CloudConfig/cc_landscape.py 2012-01-18 14:07:33 +0000
+++ cloudinit/CloudConfig/cc_landscape.py 2012-06-13 13:16:19 +0000
@@ -18,6 +18,8 @@
 # You should have received a copy of the GNU General Public License
 # along with this program. If not, see <http://www.gnu.org/licenses/>.
 
+import os
+import os.path
 from cloudinit.CloudConfig import per_instance
 from configobj import ConfigObj
 
@@ -50,6 +52,9 @@
 
     merged = mergeTogether([lsc_builtincfg, lsc_client_cfg_file, ls_cloudcfg])
 
+    if not os.path.isdir(os.path.dirname(lsc_client_cfg_file)):
+        os.makedirs(os.path.dirname(lsc_client_cfg_file))
+
     with open(lsc_client_cfg_file, "w") as fp:
         merged.write(fp)
 
 
=== modified file 'cloudinit/CloudConfig/cc_resizefs.py'
--- cloudinit/CloudConfig/cc_resizefs.py 2012-01-18 14:07:33 +0000
+++ cloudinit/CloudConfig/cc_resizefs.py 2012-06-13 13:16:19 +0000
@@ -22,6 +22,8 @@
 import subprocess
 import os
 import stat
+import sys
+import time
 import tempfile
 from cloudinit.CloudConfig import per_always
 
@@ -34,23 +36,22 @@
         if str(args[0]).lower() in ['true', '1', 'on', 'yes']:
             resize_root = True
     else:
-        resize_root = util.get_cfg_option_bool(cfg, "resize_rootfs", True)
+        resize_root = util.get_cfg_option_str(cfg, "resize_rootfs", True)
 
-    if not resize_root:
+    if str(resize_root).lower() in ['false', '0']:
         return
 
-    # this really only uses the filename from mktemp, then we mknod into it
-    (fd, devpth) = tempfile.mkstemp()
-    os.unlink(devpth)
-    os.close(fd)
+    # we use mktemp rather than mkstemp because early in boot nothing
+    # else should be able to race us for this, and we need to mknod.
+    devpth = tempfile.mktemp(prefix="cloudinit.resizefs.", dir="/run")
 
     try:
         st_dev = os.stat("/").st_dev
         dev = os.makedev(os.major(st_dev), os.minor(st_dev))
         os.mknod(devpth, 0400 | stat.S_IFBLK, dev)
     except:
-        if util.islxc():
-            log.debug("inside lxc, ignoring mknod failure in resizefs")
+        if util.is_container():
+            log.debug("inside container, ignoring mknod failure in resizefs")
             return
         log.warn("Failed to make device node to resize /")
         raise
@@ -65,9 +66,6 @@
         os.unlink(devpth)
         raise
 
-    log.debug("resizing root filesystem (type=%s, maj=%i, min=%i)" %
-        (str(fstype).rstrip("\n"), os.major(st_dev), os.minor(st_dev)))
-
     if str(fstype).startswith("ext"):
         resize_cmd = ['resize2fs', devpth]
     elif fstype == "xfs":
@@ -77,7 +75,28 @@
         log.debug("not resizing unknown filesystem %s" % fstype)
         return
 
+    if resize_root == "noblock":
+        fid = os.fork()
+        if fid == 0:
+            try:
+                do_resize(resize_cmd, devpth, log)
+                os._exit(0)  # pylint: disable=W0212
+            except Exception as exc:
+                sys.stderr.write("Failed: %s" % exc)
+                os._exit(1)  # pylint: disable=W0212
+    else:
+        do_resize(resize_cmd, devpth, log)
+
+    log.debug("resizing root filesystem (type=%s, maj=%i, min=%i, val=%s)" %
+        (str(fstype).rstrip("\n"), os.major(st_dev), os.minor(st_dev),
+         resize_root))
+
+    return
+
+
+def do_resize(resize_cmd, devpth, log):
     try:
+        start = time.time()
         util.subp(resize_cmd)
     except subprocess.CalledProcessError as e:
         log.warn("Failed to resize filesystem (%s)" % resize_cmd)
@@ -86,4 +105,4 @@
         raise
 
     os.unlink(devpth)
-    return
+    log.debug("resize took %s seconds" % (time.time() - start))
 
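The "noblock" path in this diff uses a plain fork-and-exit pattern so the resize does not hold up boot; a minimal standalone sketch of that pattern (names are mine, not cloud-init's):

```python
# Minimal sketch of the fork-based "noblock" pattern used by
# cc_resizefs: the child runs the slow work and exits without
# returning, while the parent continues immediately.
import os
import sys


def run_in_background(fn):
    pid = os.fork()
    if pid == 0:  # child: do the work, then exit without returning
        try:
            fn()
            os._exit(0)
        except Exception as exc:
            sys.stderr.write("Failed: %s" % exc)
            os._exit(1)
    return pid  # parent: continue immediately


pid = run_in_background(lambda: None)
(_, status) = os.waitpid(pid, 0)  # reap the child (demo only)
```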
=== modified file 'cloudinit/CloudConfig/cc_salt_minion.py'
--- cloudinit/CloudConfig/cc_salt_minion.py 2012-02-11 15:27:14 +0000
+++ cloudinit/CloudConfig/cc_salt_minion.py 2012-06-13 13:16:19 +0000
@@ -20,7 +20,8 @@
 import cloudinit.CloudConfig as cc
 import yaml
 
-def handle(_name, cfg, cloud, log, _args):
+
+def handle(_name, cfg, _cloud, _log, _args):
     # If there isn't a salt key in the configuration don't do anything
     if 'salt_minion' not in cfg:
         return
 
=== modified file 'cloudinit/CloudConfig/cc_update_etc_hosts.py'
--- cloudinit/CloudConfig/cc_update_etc_hosts.py 2012-01-18 14:07:33 +0000
+++ cloudinit/CloudConfig/cc_update_etc_hosts.py 2012-06-13 13:16:19 +0000
@@ -28,7 +28,7 @@
 def handle(_name, cfg, cloud, log, _args):
     (hostname, fqdn) = util.get_hostname_fqdn(cfg, cloud)
 
-    manage_hosts = util.get_cfg_option_bool(cfg, "manage_etc_hosts", False)
+    manage_hosts = util.get_cfg_option_str(cfg, "manage_etc_hosts", False)
     if manage_hosts in ("True", "true", True, "template"):
         # render from template file
         try:
 
=== modified file 'cloudinit/DataSource.py'
--- cloudinit/DataSource.py 2012-01-18 14:07:33 +0000
+++ cloudinit/DataSource.py 2012-06-13 13:16:19 +0000
@@ -70,7 +70,10 @@
             return([])
 
         if isinstance(self.metadata['public-keys'], str):
-            return([self.metadata['public-keys'], ])
+            return(str(self.metadata['public-keys']).splitlines())
+
+        if isinstance(self.metadata['public-keys'], list):
+            return(self.metadata['public-keys'])
 
         for _keyname, klist in self.metadata['public-keys'].items():
             # lp:506332 uec metadata service responds with
 
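In effect, this change makes a string-valued 'public-keys' metadata entry yield one key per line, while a list passes through unchanged; a sketch of the new normalization (the dict form handled by the loop over items(), used by the EC2 metadata service, is omitted here):

```python
# Sketch of the updated get_public_ssh_keys behavior in DataSource.py:
# a newline-separated string of keys becomes one key per line, and a
# list is returned as-is. Function name is illustrative.
def normalize_public_keys(value):
    if isinstance(value, str):
        return str(value).splitlines()
    if isinstance(value, list):
        return value
    raise TypeError("unhandled public-keys type: %s" % type(value))
```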
=== added file 'cloudinit/DataSourceCloudStack.py'
--- cloudinit/DataSourceCloudStack.py 1970-01-01 00:00:00 +0000
+++ cloudinit/DataSourceCloudStack.py 2012-06-13 13:16:19 +0000
@@ -0,0 +1,92 @@
+# vi: ts=4 expandtab
+#
+# Copyright (C) 2012 Canonical Ltd.
+# Copyright (C) 2012 Cosmin Luta
+#
+# Author: Cosmin Luta <q4break@gmail.com>
+# Author: Scott Moser <scott.moser@canonical.com>
+#
+# This program is free software: you can redistribute it and/or modify
+# it under the terms of the GNU General Public License version 3, as
+# published by the Free Software Foundation.
+#
+# This program is distributed in the hope that it will be useful,
+# but WITHOUT ANY WARRANTY; without even the implied warranty of
+# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
+# GNU General Public License for more details.
+#
+# You should have received a copy of the GNU General Public License
+# along with this program. If not, see <http://www.gnu.org/licenses/>.
+
+import cloudinit.DataSource as DataSource
+
+from cloudinit import seeddir as base_seeddir
+from cloudinit import log
+import cloudinit.util as util
+from socket import inet_ntoa
+import time
+import boto.utils as boto_utils
+from struct import pack
+
+
+class DataSourceCloudStack(DataSource.DataSource):
+    api_ver = 'latest'
+    seeddir = base_seeddir + '/cs'
+    metadata_address = None
+
+    def __init__(self, sys_cfg=None):
+        DataSource.DataSource.__init__(self, sys_cfg)
+        # Cloudstack has its metadata/userdata URLs located at
+        # http://<default-gateway-ip>/latest/
+        self.metadata_address = "http://%s/" % self.get_default_gateway()
+
+    def get_default_gateway(self):
+        """ Returns the default gateway ip address in the dotted format
+        """
+        with open("/proc/net/route", "r") as f:
+            for line in f.readlines():
+                items = line.split("\t")
+                if items[1] == "00000000":
+                    # found the default route, get the gateway
+                    gw = inet_ntoa(pack("<L", int(items[2], 16)))
+                    log.debug("found default route, gateway is %s" % gw)
+                    return gw
+
+    def __str__(self):
+        return "DataSourceCloudStack"
+
+    def get_data(self):
+        seedret = {}
+        if util.read_optional_seed(seedret, base=self.seeddir + "/"):
+            self.userdata_raw = seedret['user-data']
+            self.metadata = seedret['meta-data']
+            log.debug("using seeded cs data in %s" % self.seeddir)
+            return True
+
+        try:
+            start = time.time()
+            self.userdata_raw = boto_utils.get_instance_userdata(self.api_ver,
+                None, self.metadata_address)
+            self.metadata = boto_utils.get_instance_metadata(self.api_ver,
+                self.metadata_address)
+            log.debug("crawl of metadata service took %ds" %
+                      (time.time() - start))
+            return True
+        except Exception as e:
+            log.exception(e)
+            return False
+
+    def get_instance_id(self):
+        return self.metadata['instance-id']
+
+    def get_availability_zone(self):
+        return self.metadata['availability-zone']
+
+datasources = [
+    (DataSourceCloudStack, (DataSource.DEP_FILESYSTEM, DataSource.DEP_NETWORK)),
+]
+
+
+# return a list of data sources that match this set of dependencies
+def get_datasource_list(depends):
+    return DataSource.list_from_depends(depends, datasources)
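get_default_gateway above parses /proc/net/route directly; the same parsing can be exercised off-box by feeding it a canned routing-table string (function name is mine):

```python
# Sketch of DataSourceCloudStack.get_default_gateway's parsing, taking
# the routing table as a string instead of reading /proc/net/route.
# The kernel stores the gateway field as little-endian hex.
from socket import inet_ntoa
from struct import pack


def default_gateway(route_text):
    for line in route_text.splitlines():
        items = line.split("\t")
        if len(items) > 2 and items[1] == "00000000":
            # destination 0.0.0.0 marks the default route
            return inet_ntoa(pack("<L", int(items[2], 16)))
    return None


SAMPLE = ("Iface\tDestination\tGateway\tFlags\n"
          "eth0\t00000000\t0101A8C0\t0003")
```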
=== added file 'cloudinit/DataSourceConfigDrive.py'
--- cloudinit/DataSourceConfigDrive.py 1970-01-01 00:00:00 +0000
+++ cloudinit/DataSourceConfigDrive.py 2012-06-13 13:16:19 +0000
@@ -0,0 +1,231 @@
1# Copyright (C) 2012 Canonical Ltd.
2#
3# Author: Scott Moser <scott.moser@canonical.com>
4#
5# This program is free software: you can redistribute it and/or modify
6# it under the terms of the GNU General Public License version 3, as
7# published by the Free Software Foundation.
8#
9# This program is distributed in the hope that it will be useful,
10# but WITHOUT ANY WARRANTY; without even the implied warranty of
11# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
12# GNU General Public License for more details.
13#
14# You should have received a copy of the GNU General Public License
15# along with this program. If not, see <http://www.gnu.org/licenses/>.
16
17import cloudinit.DataSource as DataSource
18
19from cloudinit import seeddir as base_seeddir
20from cloudinit import log
21import cloudinit.util as util
22import os.path
23import os
24import json
25import subprocess
26
27DEFAULT_IID = "iid-dsconfigdrive"
28
29
30class DataSourceConfigDrive(DataSource.DataSource):
31 seed = None
32 seeddir = base_seeddir + '/config_drive'
33 cfg = {}
34 userdata_raw = None
35 metadata = None
36 dsmode = "local"
37
38 def __str__(self):
39 mstr = "DataSourceConfigDrive[%s]" % self.dsmode
40 mstr = mstr + " [seed=%s]" % self.seed
41 return(mstr)
42
43 def get_data(self):
44 found = None
45 md = {}
46 ud = ""
47
48 defaults = {"instance-id": DEFAULT_IID, "dsmode": "pass"}
49
50 if os.path.isdir(self.seeddir):
51 try:
52 (md, ud) = read_config_drive_dir(self.seeddir)
53 found = self.seeddir
54 except nonConfigDriveDir:
55 pass
56
57 if not found:
58 dev = cfg_drive_device()
59 if dev:
60 try:
61 (md, ud) = util.mount_callback_umount(dev,
62 read_config_drive_dir)
63 found = dev
64 except (nonConfigDriveDir, util.mountFailedError):
65 pass
66
67 if not found:
68 return False
69
70 if 'dsconfig' in md:
71 self.cfg = md['dscfg']
72
73 md = util.mergedict(md, defaults)
74
75 # update interfaces and ifup only on the local datasource
76 # this way the DataSourceConfigDriveNet doesn't do it also.
77 if 'network-interfaces' in md and self.dsmode == "local":
78 if md['dsmode'] == "pass":
79 log.info("updating network interfaces from configdrive")
80 else:
81 log.debug("updating network interfaces from configdrive")
82
83 util.write_file("/etc/network/interfaces",
84 md['network-interfaces'])
85 try:
86 (out, err) = util.subp(['ifup', '--all'])
87 if len(out) or len(err):
88 log.warn("ifup --all had stderr: %s" % err)
89
90 except subprocess.CalledProcessError as exc:
91 log.warn("ifup --all failed: %s" % (exc.output[1]))
92
93 self.seed = found
94 self.metadata = md
95 self.userdata_raw = ud
96
97 if md['dsmode'] == self.dsmode:
98 return True
99
100 log.debug("%s: not claiming datasource, dsmode=%s" %
101 (self, md['dsmode']))
102 return False
103
104 def get_public_ssh_keys(self):
105 if not 'public-keys' in self.metadata:
106 return([])
107 return(self.metadata['public-keys'])
108
109 # the data sources' config_obj is a cloud-config formated
110 # object that came to it from ways other than cloud-config
111 # because cloud-config content would be handled elsewhere
112 def get_config_obj(self):
113 return(self.cfg)
114
115
116class DataSourceConfigDriveNet(DataSourceConfigDrive):
117 dsmode = "net"
118
119
120class nonConfigDriveDir(Exception):
121 pass
122
123
124def cfg_drive_device():
125 """ get the config drive device. return a string like '/dev/vdb'
126 or None (if there is no non-root device attached). This does not
127 check the contents, only reports that if there *were* a config_drive
128 attached, it would be this device.
129 per config_drive documentation, this is
130 "associated as the last available disk on the instance"
131 """
132
133 if 'CLOUD_INIT_CONFIG_DRIVE_DEVICE' in os.environ:
134 return(os.environ['CLOUD_INIT_CONFIG_DRIVE_DEVICE'])
135
136 # we are looking for a raw block device (sda, not sda1) with a vfat
137 # filesystem on it.
138
139 letters = "abcdefghijklmnopqrstuvwxyz"
140 devs = util.find_devs_with("TYPE=vfat")
141
142 # filter out anything not ending in a letter (ignore partitions)
143 devs = [f for f in devs if f[-1] in letters]
144
145 # sort them in reverse so "last" device is first
146 devs.sort(reverse=True)
147
148 if len(devs):
149 return(devs[0])
150
151 return(None)
152
153
154def read_config_drive_dir(source_dir):
155 """
156 read_config_drive_dir(source_dir):
157 read source_dir, and return a tuple with metadata dict and user-data
158 string populated. If not a valid dir, raise a nonConfigDriveDir
159 """
160 md = {}
161 ud = ""
162
163 flist = ("etc/network/interfaces", "root/.ssh/authorized_keys", "meta.js")
164 found = [f for f in flist if os.path.isfile("%s/%s" % (source_dir, f))]
165 keydata = ""
166
167 if len(found) == 0:
168 raise nonConfigDriveDir("%s: %s" % (source_dir, "no files found"))
169
170 if "etc/network/interfaces" in found:
171 with open("%s/%s" % (source_dir, "etc/network/interfaces")) as fp:
172 md['network-interfaces'] = fp.read()
173
174 if "root/.ssh/authorized_keys" in found:
175 with open("%s/%s" % (source_dir, "root/.ssh/authorized_keys")) as fp:
176 keydata = fp.read()
177
178 meta_js = {}
179
180 if "meta.js" in found:
181 content = ''
182 with open("%s/%s" % (source_dir, "meta.js")) as fp:
183 content = fp.read()
184 md['meta_js'] = content
185 try:
186 meta_js = json.loads(content)
187 except ValueError:
188 raise nonConfigDriveDir("%s: %s" %
189 (source_dir, "invalid json in meta.js"))
190
191 keydata = meta_js.get('public-keys', keydata)
192
193 if keydata:
194 lines = keydata.splitlines()
195 md['public-keys'] = [l for l in lines
196 if len(l) and not l.startswith("#")]
197
198 for copy in ('dsmode', 'instance-id', 'dscfg'):
199 if copy in meta_js:
200 md[copy] = meta_js[copy]
201
202 if 'user-data' in meta_js:
203 ud = meta_js['user-data']
204
205 return(md, ud)
206
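The public-keys handling above (keydata may come from `authorized_keys` or from `meta.js`) is a small filter that drops blanks and comment lines; a standalone sketch with sample key material:

```python
# Sketch of the keydata -> md['public-keys'] step in
# read_config_drive_dir(); the key strings are placeholders.
keydata = "ssh-rsa AAAA... user@host\n# a comment\n\nssh-dss BBBB... other@host\n"
lines = keydata.splitlines()
public_keys = [l for l in lines if len(l) and not l.startswith("#")]
print(len(public_keys))  # 2
```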
207datasources = (
208 (DataSourceConfigDrive, (DataSource.DEP_FILESYSTEM, )),
209 (DataSourceConfigDriveNet,
210 (DataSource.DEP_FILESYSTEM, DataSource.DEP_NETWORK)),
211)
212
213
214# return a list of data sources that match this set of dependencies
215def get_datasource_list(depends):
216 return(DataSource.list_from_depends(depends, datasources))
217
218if __name__ == "__main__":
219 def main():
220 import sys
221 import pprint
222 print cfg_drive_device()
223 (md, ud) = read_config_drive_dir(sys.argv[1])
224 print "=== md ==="
225 pprint.pprint(md)
226 print "=== ud ==="
227 print(ud)
228
229 main()
230
231# vi: ts=4 expandtab
0232
=== modified file 'cloudinit/DataSourceEc2.py'
--- cloudinit/DataSourceEc2.py 2012-01-18 14:07:33 +0000
+++ cloudinit/DataSourceEc2.py 2012-06-13 13:16:19 +0000
@@ -24,7 +24,6 @@
24from cloudinit import log24from cloudinit import log
25import cloudinit.util as util25import cloudinit.util as util
26import socket26import socket
27import urllib2
28import time27import time
29import boto.utils as boto_utils28import boto.utils as boto_utils
30import os.path29import os.path
@@ -134,8 +133,8 @@
134 url2base[cur] = url133 url2base[cur] = url
135134
136 starttime = time.time()135 starttime = time.time()
137 url = wait_for_metadata_service(urls=urls, max_wait=max_wait,136 url = util.wait_for_url(urls=urls, max_wait=max_wait,
138 timeout=timeout, status_cb=log.warn)137 timeout=timeout, status_cb=log.warn)
139138
140 if url:139 if url:
141 log.debug("Using metadata source: '%s'" % url2base[url])140 log.debug("Using metadata source: '%s'" % url2base[url])
@@ -208,87 +207,6 @@
208 return False207 return False
209208
210209
211def wait_for_metadata_service(urls, max_wait=None, timeout=None,
212 status_cb=None):
213 """
214 urls: a list of urls to try
215 max_wait: roughly the maximum time to wait before giving up
216 The max time is *actually* len(urls)*timeout as each url will
217 be tried once and given the timeout provided.
218 timeout: the timeout provided to urllib2.urlopen
219 status_cb: call method with string message when a url is not available
220
221 the idea of this routine is to wait for the EC2 metadata service to
222 come up. On both Eucalyptus and EC2 we have seen the case where
223 the instance hit the MD before the MD service was up. EC2 seems
224 to have permanently fixed this, though.
225
226 In openstack, the metadata service might be painfully slow, and
227 unable to avoid hitting a timeout of even up to 10 seconds or more
228 (LP: #894279) for a simple GET.
229
230 Offset those needs with the need to not hang forever (and block boot)
231 on a system where cloud-init is configured to look for EC2 Metadata
232 service but is not going to find one. It is possible that the instance
233 data host (169.254.169.254) may be firewalled off entirely for a system,
234 meaning that the connection will block forever unless a timeout is set.
235 """
236 starttime = time.time()
237
238 sleeptime = 1
239
240 def nullstatus_cb(msg):
241 return
242
243 if status_cb == None:
244 status_cb = nullstatus_cb
245
246 def timeup(max_wait, starttime):
247 return((max_wait <= 0 or max_wait == None) or
248 (time.time() - starttime > max_wait))
249
250 loop_n = 0
251 while True:
252 sleeptime = int(loop_n / 5) + 1
253 for url in urls:
254 now = time.time()
255 if loop_n != 0:
256 if timeup(max_wait, starttime):
257 break
258 if timeout and (now + timeout > (starttime + max_wait)):
259 # shorten timeout to not run way over max_time
260 timeout = int((starttime + max_wait) - now)
261
262 reason = ""
263 try:
264 req = urllib2.Request(url)
265 resp = urllib2.urlopen(req, timeout=timeout)
266 if resp.read() != "":
267 return url
268 reason = "empty data [%s]" % resp.getcode()
269 except urllib2.HTTPError as e:
270 reason = "http error [%s]" % e.code
271 except urllib2.URLError as e:
272 reason = "url error [%s]" % e.reason
273 except socket.timeout as e:
274 reason = "socket timeout [%s]" % e
275 except Exception as e:
276 reason = "unexpected error [%s]" % e
277
278 if log:
279 status_cb("'%s' failed [%s/%ss]: %s" %
280 (url, int(time.time() - starttime), max_wait,
281 reason))
282
283 if timeup(max_wait, starttime):
284 break
285
286 loop_n = loop_n + 1
287 time.sleep(sleeptime)
288
289 return False
290
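The polling loop being removed here (its logic moves into `util.wait_for_url`) grows its sleep by one second every five iterations; the schedule can be sketched in isolation:

```python
# Back-off schedule from the removed wait_for_metadata_service() loop:
# sleeptime = int(loop_n / 5) + 1, i.e. 1s for the first five tries,
# then 2s for the next five, and so on.
def sleeptime(loop_n):
    return int(loop_n / 5) + 1

schedule = [sleeptime(n) for n in range(12)]
print(schedule)  # [1, 1, 1, 1, 1, 2, 2, 2, 2, 2, 3, 3]
```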
291
292datasources = [210datasources = [
293 (DataSourceEc2, (DataSource.DEP_FILESYSTEM, DataSource.DEP_NETWORK)),211 (DataSourceEc2, (DataSource.DEP_FILESYSTEM, DataSource.DEP_NETWORK)),
294]212]
295213
=== added file 'cloudinit/DataSourceMAAS.py'
--- cloudinit/DataSourceMAAS.py 1970-01-01 00:00:00 +0000
+++ cloudinit/DataSourceMAAS.py 2012-06-13 13:16:19 +0000
@@ -0,0 +1,345 @@
1# vi: ts=4 expandtab
2#
3# Copyright (C) 2012 Canonical Ltd.
4#
5# Author: Scott Moser <scott.moser@canonical.com>
6#
7# This program is free software: you can redistribute it and/or modify
8# it under the terms of the GNU General Public License version 3, as
9# published by the Free Software Foundation.
10#
11# This program is distributed in the hope that it will be useful,
12# but WITHOUT ANY WARRANTY; without even the implied warranty of
13# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
14# GNU General Public License for more details.
15#
16# You should have received a copy of the GNU General Public License
17# along with this program. If not, see <http://www.gnu.org/licenses/>.
18
19import cloudinit.DataSource as DataSource
20
21from cloudinit import seeddir as base_seeddir
22from cloudinit import log
23import cloudinit.util as util
24import errno
25import oauth.oauth as oauth
26import os.path
27import urllib2
28import time
29
30
31MD_VERSION = "2012-03-01"
32
33
34class DataSourceMAAS(DataSource.DataSource):
35 """
36 DataSourceMAAS reads instance information from MAAS.
37 Given a config metadata_url, and oauth tokens, it expects to find
38 files under the root named:
39 instance-id
40 user-data
41 hostname
42 """
43 seeddir = base_seeddir + '/maas'
44 baseurl = None
45
46 def __str__(self):
47 return("DataSourceMAAS[%s]" % self.baseurl)
48
49 def get_data(self):
50 mcfg = self.ds_cfg
51
52 try:
53 (userdata, metadata) = read_maas_seed_dir(self.seeddir)
54 self.userdata_raw = userdata
55 self.metadata = metadata
56 self.baseurl = self.seeddir
57 return True
58 except MAASSeedDirNone:
59 pass
60 except MAASSeedDirMalformed as exc:
61 log.warn("%s was malformed: %s\n" % (self.seeddir, exc))
62 raise
63
64 try:
65 # if there is no metadata_url, then we're not configured
66 url = mcfg.get('metadata_url', None)
67 if url == None:
68 return False
69
70 if not self.wait_for_metadata_service(url):
71 return False
72
73 self.baseurl = url
74
75 (userdata, metadata) = read_maas_seed_url(self.baseurl,
76 self.md_headers)
77 self.userdata_raw = userdata
78 self.metadata = metadata
79 return True
80 except Exception:
81 util.logexc(log)
82 return False
83
84 def md_headers(self, url):
85 mcfg = self.ds_cfg
86
87 # if we are missing token_key, token_secret or consumer_key
88 # then just do non-authed requests
89 for required in ('token_key', 'token_secret', 'consumer_key'):
90 if required not in mcfg:
91 return({})
92
93 consumer_secret = mcfg.get('consumer_secret', "")
94
95 return(oauth_headers(url=url, consumer_key=mcfg['consumer_key'],
96 token_key=mcfg['token_key'], token_secret=mcfg['token_secret'],
97 consumer_secret=consumer_secret))
98
99 def wait_for_metadata_service(self, url):
100 mcfg = self.ds_cfg
101
102 max_wait = 120
103 try:
104 max_wait = int(mcfg.get("max_wait", max_wait))
105 except Exception:
106 util.logexc(log)
107 log.warn("Failed to get max wait, using %s" % max_wait)
108
109 if max_wait == 0:
110 return False
111
112 timeout = 50
113 try:
114 timeout = int(mcfg.get("timeout", timeout))
115 except Exception:
116 util.logexc(log)
117 log.warn("Failed to get timeout, using %s" % timeout)
118
119 starttime = time.time()
120 check_url = "%s/%s/meta-data/instance-id" % (url, MD_VERSION)
121 url = util.wait_for_url(urls=[check_url], max_wait=max_wait,
122 timeout=timeout, status_cb=log.warn,
123 headers_cb=self.md_headers)
124
125 if url:
126 log.debug("Using metadata source: '%s'" % url)
127 else:
128 log.critical("giving up on md after %i seconds\n" %
129 int(time.time() - starttime))
130
131 return (bool(url))
132
133
134def read_maas_seed_dir(seed_d):
135 """
136 Return user-data and metadata for a maas seed dir in seed_d.
137 Expected format of seed_d are the following files:
138 * instance-id
139 * local-hostname
140 * user-data
141 """
142 files = ('local-hostname', 'instance-id', 'user-data', 'public-keys')
143 md = {}
144
145 if not os.path.isdir(seed_d):
146 raise MAASSeedDirNone("%s: not a directory" % seed_d)
147
148 for fname in files:
149 try:
150 with open(os.path.join(seed_d, fname)) as fp:
151 md[fname] = fp.read()
152 fp.close()
153 except IOError as e:
154 if e.errno != errno.ENOENT:
155 raise
156
157 return(check_seed_contents(md, seed_d))
158
159
160def read_maas_seed_url(seed_url, header_cb=None, timeout=None,
161 version=MD_VERSION):
162 """
163 Read the maas datasource at seed_url.
164 header_cb is a method that should return a headers dictionary that will
165 be given to urllib2.Request()
166
167 Expected format of seed_url is the following files:
168 * <seed_url>/<version>/meta-data/instance-id
169 * <seed_url>/<version>/meta-data/local-hostname
170 * <seed_url>/<version>/user-data
171 """
172 files = ('meta-data/local-hostname',
173 'meta-data/instance-id',
174 'meta-data/public-keys',
175 'user-data')
176
177 base_url = "%s/%s" % (seed_url, version)
178 md = {}
179 for fname in files:
180 url = "%s/%s" % (base_url, fname)
181 if header_cb:
182 headers = header_cb(url)
183 else:
184 headers = {}
185
186 try:
187 req = urllib2.Request(url, data=None, headers=headers)
188 resp = urllib2.urlopen(req, timeout=timeout)
189 md[os.path.basename(fname)] = resp.read()
190 except urllib2.HTTPError as e:
191 if e.code != 404:
192 raise
193
194 return(check_seed_contents(md, seed_url))
195
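The URL layout that `read_maas_seed_url` expects (one URL per file under `<seed_url>/<version>/`) can be sketched by itself; the endpoint address below is a hypothetical example:

```python
# Sketch of the URL construction in read_maas_seed_url().
MD_VERSION = "2012-03-01"
seed_url = "http://169.254.169.254"  # hypothetical MAAS endpoint
files = ('meta-data/local-hostname', 'meta-data/instance-id', 'user-data')

base_url = "%s/%s" % (seed_url, MD_VERSION)
urls = ["%s/%s" % (base_url, fname) for fname in files]
print(urls[0])  # http://169.254.169.254/2012-03-01/meta-data/local-hostname
```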
196
197def check_seed_contents(content, seed):
198 """Validate if content is Is the content a dict that is valid as a
199 return for a datasource.
200 Either return a (userdata, metadata) tuple or
201 Raise MAASSeedDirMalformed or MAASSeedDirNone
202 """
203 md_required = ('instance-id', 'local-hostname')
204 found = content.keys()
205
206 if len(content) == 0:
207 raise MAASSeedDirNone("%s: no data files found" % seed)
208
209 missing = [k for k in md_required if k not in found]
210 if len(missing):
211 raise MAASSeedDirMalformed("%s: missing files %s" % (seed, missing))
212
213 userdata = content.get('user-data', "")
214 md = {}
215 for (key, val) in content.iteritems():
216 if key == 'user-data':
217 continue
218 md[key] = val
219
220 return(userdata, md)
221
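The validation above can be sketched as a standalone function (using a plain `ValueError` in place of the module's `MAASSeedDirMalformed`):

```python
# Standalone sketch of check_seed_contents(): require instance-id and
# local-hostname, then split user-data out of the metadata dict.
def check_seed(content):
    md_required = ('instance-id', 'local-hostname')
    missing = [k for k in md_required if k not in content]
    if missing:
        raise ValueError("missing files %s" % missing)
    userdata = content.get('user-data', "")
    md = dict((k, v) for (k, v) in content.items() if k != 'user-data')
    return (userdata, md)

(ud, md) = check_seed({'instance-id': 'i-1', 'local-hostname': 'h', 'user-data': 'x'})
print(md)
```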
222
223def oauth_headers(url, consumer_key, token_key, token_secret, consumer_secret):
224 consumer = oauth.OAuthConsumer(consumer_key, consumer_secret)
225 token = oauth.OAuthToken(token_key, token_secret)
226 params = {
227 'oauth_version': "1.0",
228 'oauth_nonce': oauth.generate_nonce(),
229 'oauth_timestamp': int(time.time()),
230 'oauth_token': token.key,
231 'oauth_consumer_key': consumer.key,
232 }
233 req = oauth.OAuthRequest(http_url=url, parameters=params)
234 req.sign_request(oauth.OAuthSignatureMethod_PLAINTEXT(),
235 consumer, token)
236 return(req.to_header())
237
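`oauth_headers` signs with the PLAINTEXT method, under which the signature is just the two percent-encoded secrets joined by `&` (no hashing); that piece can be sketched without the oauth library (Python 3 `urllib.parse` used here for illustration):

```python
from urllib.parse import quote

# PLAINTEXT signature per OAuth 1.0: encoded consumer secret, '&',
# encoded token secret. MAAS typically uses an empty consumer secret.
def plaintext_signature(consumer_secret, token_secret):
    return "%s&%s" % (quote(consumer_secret, safe=""),
                      quote(token_secret, safe=""))

print(plaintext_signature("", "tok-secret"))  # &tok-secret
```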
238
239class MAASSeedDirNone(Exception):
240 pass
241
242
243class MAASSeedDirMalformed(Exception):
244 pass
245
246
247datasources = [
248 (DataSourceMAAS, (DataSource.DEP_FILESYSTEM, DataSource.DEP_NETWORK)),
249]
250
251
252# return a list of data sources that match this set of dependencies
253def get_datasource_list(depends):
254 return(DataSource.list_from_depends(depends, datasources))
255
256
257if __name__ == "__main__":
258 def main():
259 """
260 Call with single argument of directory or http or https url.
261 If url is given additional arguments are allowed, which will be
262 interpreted as consumer_key, token_key, token_secret, consumer_secret
263 """
264 import argparse
265 import pprint
266
267 parser = argparse.ArgumentParser(description='Interact with MAAS DS')
268 parser.add_argument("--config", metavar="file",
269 help="specify DS config file", default=None)
270 parser.add_argument("--ckey", metavar="key",
271 help="the consumer key to auth with", default=None)
272 parser.add_argument("--tkey", metavar="key",
273 help="the token key to auth with", default=None)
274 parser.add_argument("--csec", metavar="secret",
275 help="the consumer secret (likely '')", default="")
276 parser.add_argument("--tsec", metavar="secret",
277 help="the token secret to auth with", default=None)
278 parser.add_argument("--apiver", metavar="version",
279 help="the apiver to use ('' can be used)", default=MD_VERSION)
280
281 subcmds = parser.add_subparsers(title="subcommands", dest="subcmd")
282 subcmds.add_parser('crawl', help="crawl the datasource")
283 subcmds.add_parser('get', help="do a single GET of provided url")
284 subcmds.add_parser('check-seed', help="read and verify seed at url")
285
286 parser.add_argument("url", help="the data source to query")
287
288 args = parser.parse_args()
289
290 creds = {'consumer_key': args.ckey, 'token_key': args.tkey,
291 'token_secret': args.tsec, 'consumer_secret': args.csec}
292
293 if args.config:
294 import yaml
295 with open(args.config) as fp:
296 cfg = yaml.load(fp)
297 if 'datasource' in cfg:
298 cfg = cfg['datasource']['MAAS']
299 for key in creds.keys():
300 if key in cfg and creds[key] == None:
301 creds[key] = cfg[key]
302
303 def geturl(url, headers_cb):
304 req = urllib2.Request(url, data=None, headers=headers_cb(url))
305 return(urllib2.urlopen(req).read())
306
307 def printurl(url, headers_cb):
308 print "== %s ==\n%s\n" % (url, geturl(url, headers_cb))
309
310 def crawl(url, headers_cb=None):
311 if url.endswith("/"):
312 for line in geturl(url, headers_cb).splitlines():
313 if line.endswith("/"):
314 crawl("%s%s" % (url, line), headers_cb)
315 else:
316 printurl("%s%s" % (url, line), headers_cb)
317 else:
318 printurl(url, headers_cb)
319
320 def my_headers(url):
321 headers = {}
322 if creds.get('consumer_key', None) != None:
323 headers = oauth_headers(url, **creds)
324 return headers
325
326 if args.subcmd == "check-seed":
327 if args.url.startswith("http"):
328 (userdata, metadata) = read_maas_seed_url(args.url,
329 header_cb=my_headers, version=args.apiver)
330 else:
331 (userdata, metadata) = read_maas_seed_dir(args.url)
332 print "=== userdata ==="
333 print userdata
334 print "=== metadata ==="
335 pprint.pprint(metadata)
336
337 elif args.subcmd == "get":
338 printurl(args.url, my_headers)
339
340 elif args.subcmd == "crawl":
341 if not args.url.endswith("/"):
342 args.url = "%s/" % args.url
343 crawl(args.url, my_headers)
344
345 main()
0346
=== modified file 'cloudinit/DataSourceNoCloud.py'
--- cloudinit/DataSourceNoCloud.py 2012-01-18 14:07:33 +0000
+++ cloudinit/DataSourceNoCloud.py 2012-06-13 13:16:19 +0000
@@ -23,6 +23,8 @@
23from cloudinit import seeddir as base_seeddir23from cloudinit import seeddir as base_seeddir
24from cloudinit import log24from cloudinit import log
25import cloudinit.util as util25import cloudinit.util as util
26import errno
27import subprocess
2628
2729
28class DataSourceNoCloud(DataSource.DataSource):30class DataSourceNoCloud(DataSource.DataSource):
@@ -30,6 +32,7 @@
30 userdata = None32 userdata = None
31 userdata_raw = None33 userdata_raw = None
32 supported_seed_starts = ("/", "file://")34 supported_seed_starts = ("/", "file://")
35 dsmode = "local"
33 seed = None36 seed = None
34 cmdline_id = "ds=nocloud"37 cmdline_id = "ds=nocloud"
35 seeddir = base_seeddir + '/nocloud'38 seeddir = base_seeddir + '/nocloud'
@@ -41,7 +44,7 @@
4144
42 def get_data(self):45 def get_data(self):
43 defaults = {46 defaults = {
44 "instance-id": "nocloud"47 "instance-id": "nocloud", "dsmode": self.dsmode
45 }48 }
4649
47 found = []50 found = []
@@ -64,13 +67,54 @@
64 found.append(self.seeddir)67 found.append(self.seeddir)
65 log.debug("using seeded cache data in %s" % self.seeddir)68 log.debug("using seeded cache data in %s" % self.seeddir)
6669
70 # if the datasource config had a 'seedfrom' entry, then that takes
71 # precedence over a 'seedfrom' that was found in a filesystem
72 # but not over external media
73 if 'seedfrom' in self.ds_cfg and self.ds_cfg['seedfrom']:
74 found.append("ds_config")
75 md["seedfrom"] = self.ds_cfg['seedfrom']
76
77 fslist = util.find_devs_with("TYPE=vfat")
78 fslist.extend(util.find_devs_with("TYPE=iso9660"))
79
80 label_list = util.find_devs_with("LABEL=cidata")
81 devlist = list(set(fslist) & set(label_list))
82 devlist.sort(reverse=True)
83
84 for dev in devlist:
85 try:
86 (newmd, newud) = util.mount_callback_umount(dev,
87 util.read_seeded)
88 md = util.mergedict(newmd, md)
89 ud = newud
90
91 # for seed from a device, the default mode is 'net'.
92 # that is more likely to be what is desired.
93 # If they want dsmode of local, then they must
94 # specify that.
95 if 'dsmode' not in md:
96 md['dsmode'] = "net"
97
98 log.debug("using data from %s" % dev)
99 found.append(dev)
100 break
101 except OSError, e:
102 if e.errno != errno.ENOENT:
103 raise
104 except util.mountFailedError:
105 log.warn("Failed to mount %s when looking for seed" % dev)
106
67 # there was no indication on kernel cmdline or data107 # there was no indication on kernel cmdline or data
68 # in the seeddir suggesting this handler should be used.108 # in the seeddir suggesting this handler should be used.
69 if len(found) == 0:109 if len(found) == 0:
70 return False110 return False
71111
112 seeded_interfaces = None
113
72 # the special argument "seedfrom" indicates we should114 # the special argument "seedfrom" indicates we should
73 # attempt to seed the userdata / metadata from its value115 # attempt to seed the userdata / metadata from its value
116 # its primary value is in allowing the user to type less
117 # on the command line, ie: ds=nocloud;s=http://bit.ly/abcdefg
74 if "seedfrom" in md:118 if "seedfrom" in md:
75 seedfrom = md["seedfrom"]119 seedfrom = md["seedfrom"]
76 seedfound = False120 seedfound = False
@@ -83,6 +127,9 @@
83 (seedfrom, self.__class__))127 (seedfrom, self.__class__))
84 return False128 return False
85129
130 if 'network-interfaces' in md:
131 seeded_interfaces = self.dsmode
132
86 # this could throw errors, but the user told us to do it133 # this could throw errors, but the user told us to do it
87 # so if errors are raised, let them raise134 # so if errors are raised, let them raise
88 (md_seed, ud) = util.read_seeded(seedfrom, timeout=None)135 (md_seed, ud) = util.read_seeded(seedfrom, timeout=None)
@@ -93,10 +140,35 @@
93 found.append(seedfrom)140 found.append(seedfrom)
94141
95 md = util.mergedict(md, defaults)142 md = util.mergedict(md, defaults)
143
144 # update the network-interfaces if metadata had 'network-interfaces'
145 # entry and this is the local datasource, or 'seedfrom' was used
146 # and the source of the seed was self.dsmode
147 # ('local' for NoCloud, 'net' for NoCloudNet')
148 if ('network-interfaces' in md and
149 (self.dsmode in ("local", seeded_interfaces))):
150 log.info("updating network interfaces from nocloud")
151
152 util.write_file("/etc/network/interfaces",
153 md['network-interfaces'])
154 try:
155 (out, err) = util.subp(['ifup', '--all'])
156 if len(out) or len(err):
157 log.warn("ifup --all had stderr: %s" % err)
158
159 except subprocess.CalledProcessError as exc:
160 log.warn("ifup --all failed: %s" % (exc.output[1]))
161
96 self.seed = ",".join(found)162 self.seed = ",".join(found)
97 self.metadata = md163 self.metadata = md
98 self.userdata_raw = ud164 self.userdata_raw = ud
99 return True165
166 if md['dsmode'] == self.dsmode:
167 return True
168
169 log.debug("%s: not claiming datasource, dsmode=%s" %
170 (self, md['dsmode']))
171 return False
100172
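The `md = util.mergedict(md, defaults)` call above relies on `mergedict` giving the first argument precedence, so a `dsmode` found on a seed device is not clobbered by the default. A sketch of those semantics (matching how this code uses the helper):

```python
# Sketch of util.mergedict() as relied on in get_data(): keys already
# present in the first dict win; keys only in the second are filled in.
def mergedict(src, cand):
    if isinstance(src, dict) and isinstance(cand, dict):
        for (k, v) in cand.items():
            if k not in src:
                src[k] = v
            else:
                src[k] = mergedict(src[k], v)
    return src

md = mergedict({'dsmode': 'net'}, {'instance-id': 'nocloud', 'dsmode': 'local'})
print(md)  # {'dsmode': 'net', 'instance-id': 'nocloud'}
```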
101173
102# returns true or false indicating if cmdline indicated174# returns true or false indicating if cmdline indicated
@@ -145,6 +217,7 @@
145 cmdline_id = "ds=nocloud-net"217 cmdline_id = "ds=nocloud-net"
146 supported_seed_starts = ("http://", "https://", "ftp://")218 supported_seed_starts = ("http://", "https://", "ftp://")
147 seeddir = base_seeddir + '/nocloud-net'219 seeddir = base_seeddir + '/nocloud-net'
220 dsmode = "net"
148221
149222
150datasources = (223datasources = (
151224
=== modified file 'cloudinit/DataSourceOVF.py'
--- cloudinit/DataSourceOVF.py 2012-01-18 14:07:33 +0000
+++ cloudinit/DataSourceOVF.py 2012-06-13 13:16:19 +0000
@@ -162,7 +162,7 @@
162162
163# transport functions take no input and return163# transport functions take no input and return
164# a 3 tuple of content, path, filename164# a 3 tuple of content, path, filename
165def transport_iso9660(require_iso=False):165def transport_iso9660(require_iso=True):
166166
167 # default_regex matches values in167 # default_regex matches values in
168 # /lib/udev/rules.d/60-cdrom_id.rules168 # /lib/udev/rules.d/60-cdrom_id.rules
169169
=== modified file 'cloudinit/SshUtil.py'
--- cloudinit/SshUtil.py 2012-01-18 14:07:33 +0000
+++ cloudinit/SshUtil.py 2012-06-13 13:16:19 +0000
@@ -155,6 +155,8 @@
155 akeys = ssh_cfg.get("AuthorizedKeysFile", "%h/.ssh/authorized_keys")155 akeys = ssh_cfg.get("AuthorizedKeysFile", "%h/.ssh/authorized_keys")
156 akeys = akeys.replace("%h", pwent.pw_dir)156 akeys = akeys.replace("%h", pwent.pw_dir)
157 akeys = akeys.replace("%u", user)157 akeys = akeys.replace("%u", user)
158 if not akeys.startswith('/'):
159 akeys = os.path.join(pwent.pw_dir, akeys)
158 authorized_keys = akeys160 authorized_keys = akeys
159 except Exception:161 except Exception:
160 authorized_keys = '%s/.ssh/authorized_keys' % pwent.pw_dir162 authorized_keys = '%s/.ssh/authorized_keys' % pwent.pw_dir
161163
=== modified file 'cloudinit/UserDataHandler.py'
--- cloudinit/UserDataHandler.py 2012-01-30 14:24:41 +0000
+++ cloudinit/UserDataHandler.py 2012-06-13 13:16:19 +0000
@@ -180,7 +180,7 @@
180180
181 payload = part.get_payload(decode=True)181 payload = part.get_payload(decode=True)
182182
183 if ctype_orig == "text/plain":183 if ctype_orig in ("text/plain", "text/x-not-multipart"):
184 ctype = type_from_startswith(payload)184 ctype = type_from_startswith(payload)
185185
186 if ctype is None:186 if ctype is None:
@@ -213,7 +213,7 @@
213 else:213 else:
214 msg[key] = val214 msg[key] = val
215 else:215 else:
216 mtype = headers.get("Content-Type", "text/plain")216 mtype = headers.get("Content-Type", "text/x-not-multipart")
217 maintype, subtype = mtype.split("/", 1)217 maintype, subtype = mtype.split("/", 1)
218 msg = MIMEBase(maintype, subtype, *headers)218 msg = MIMEBase(maintype, subtype, *headers)
219 msg.set_payload(data)219 msg.set_payload(data)
220220
=== modified file 'cloudinit/__init__.py'
--- cloudinit/__init__.py 2012-01-18 14:07:33 +0000
+++ cloudinit/__init__.py 2012-06-13 13:16:19 +0000
@@ -29,7 +29,7 @@
2929
30cfg_builtin = """30cfg_builtin = """
31log_cfgs: []31log_cfgs: []
32datasource_list: ["NoCloud", "OVF", "Ec2"]32datasource_list: ["NoCloud", "ConfigDrive", "OVF", "MAAS", "Ec2", "CloudStack"]
33def_log_file: /var/log/cloud-init.log33def_log_file: /var/log/cloud-init.log
34syslog_fix_perms: syslog:adm34syslog_fix_perms: syslog:adm
35"""35"""
@@ -60,7 +60,6 @@
60import sys60import sys
61import os.path61import os.path
62import errno62import errno
63import pwd
64import subprocess63import subprocess
65import yaml64import yaml
66import logging65import logging
@@ -138,7 +137,9 @@
138137
139 if ds_deps != None:138 if ds_deps != None:
140 self.ds_deps = ds_deps139 self.ds_deps = ds_deps
140
141 self.sysconfig = sysconfig141 self.sysconfig = sysconfig
142
142 self.cfg = self.read_cfg()143 self.cfg = self.read_cfg()
143144
144 def read_cfg(self):145 def read_cfg(self):
@@ -572,10 +573,14 @@
572 if not (modfreq == per_always or573 if not (modfreq == per_always or
573 (frequency == per_instance and modfreq == per_instance)):574 (frequency == per_instance and modfreq == per_instance)):
574 return575 return
575 if mod.handler_version == 1:576 try:
576 mod.handle_part(data, ctype, filename, payload)577 if mod.handler_version == 1:
577 else:578 mod.handle_part(data, ctype, filename, payload)
578 mod.handle_part(data, ctype, filename, payload, frequency)579 else:
580 mod.handle_part(data, ctype, filename, payload, frequency)
581 except:
582 util.logexc(log)
583 traceback.print_exc(file=sys.stderr)
579584
580585
581def partwalker_handle_handler(pdata, _ctype, _filename, payload):586def partwalker_handle_handler(pdata, _ctype, _filename, payload):
@@ -586,15 +591,13 @@
586 modfname = modname + ".py"591 modfname = modname + ".py"
587 util.write_file("%s/%s" % (pdata['handlerdir'], modfname), payload, 0600)592 util.write_file("%s/%s" % (pdata['handlerdir'], modfname), payload, 0600)
588593
589 pdata['handlercount'] = curcount + 1
590
591 try:594 try:
592 mod = __import__(modname)595 mod = __import__(modname)
593 handler_register(mod, pdata['handlers'], pdata['data'], frequency)596 handler_register(mod, pdata['handlers'], pdata['data'], frequency)
597 pdata['handlercount'] = curcount + 1
594 except:598 except:
595 util.logexc(log)599 util.logexc(log)
596 traceback.print_exc(file=sys.stderr)600 traceback.print_exc(file=sys.stderr)
597 return
598601
599602
600def partwalker_callback(pdata, ctype, filename, payload):603def partwalker_callback(pdata, ctype, filename, payload):
@@ -605,6 +608,14 @@
605 partwalker_handle_handler(pdata, ctype, filename, payload)608 partwalker_handle_handler(pdata, ctype, filename, payload)
606 return609 return
607 if ctype not in pdata['handlers']:610 if ctype not in pdata['handlers']:
611 if ctype == "text/x-not-multipart":
612 # Extract at most the first 24 bytes of the first line for the log
613 start = payload.split("\n", 1)[0][:24]
614 if start < payload:
615 details = "starting '%s...'" % start.encode("string-escape")
616 else:
617 details = repr(payload)
618 log.warning("Unhandled non-multipart userdata %s", details)
608 return619 return
609 handler_handle_part(pdata['handlers'][ctype], pdata['data'],620 handler_handle_part(pdata['handlers'][ctype], pdata['data'],
610 ctype, filename, payload, pdata['frequency'])621 ctype, filename, payload, pdata['frequency'])
@@ -630,3 +641,27 @@
630641
631 def handle_part(self, data, ctype, filename, payload, frequency):642 def handle_part(self, data, ctype, filename, payload, frequency):
632 return(self.handler(data, ctype, filename, payload, frequency))643 return(self.handler(data, ctype, filename, payload, frequency))
644
645
646def get_cmdline_url(names=('cloud-config-url', 'url'),
647 starts="#cloud-config", cmdline=None):
648
649 if cmdline == None:
650 cmdline = util.get_cmdline()
651
652 data = util.keyval_str_to_dict(cmdline)
653 url = None
654 key = None
655 for key in names:
656 if key in data:
657 url = data[key]
658 break
659 if url == None:
660 return (None, None, None)
661
662 contents = util.readurl(url)
663
664 if contents.startswith(starts):
665 return (key, url, contents)
666
667 return (key, url, None)
633668
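`get_cmdline_url` depends on `util.keyval_str_to_dict` turning the kernel command line into a dict; a minimal sketch of that parsing (an assumption for illustration — the helper's exact handling of bare tokens may differ):

```python
# Hedged sketch of kernel-cmdline parsing as get_cmdline_url() uses it:
# whitespace-separated tokens, each split on the first '='. Bare tokens
# map to None here, which is an assumption, not necessarily util's rule.
def keyval_str_to_dict(kvstring):
    data = {}
    for tok in kvstring.split():
        if "=" in tok:
            (key, val) = tok.split("=", 1)
        else:
            (key, val) = (tok, None)
        data[key] = val
    return data

cmdline = "ro root=/dev/sda1 cloud-config-url=http://example.com/cfg"
data = keyval_str_to_dict(cmdline)
print(data["cloud-config-url"])  # http://example.com/cfg
```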
=== modified file 'cloudinit/netinfo.py'
--- cloudinit/netinfo.py 2012-01-30 14:24:30 +0000
+++ cloudinit/netinfo.py 2012-06-13 13:16:19 +0000
@@ -19,14 +19,14 @@
19# You should have received a copy of the GNU General Public License19# You should have received a copy of the GNU General Public License
20# along with this program. If not, see <http://www.gnu.org/licenses/>.20# along with this program. If not, see <http://www.gnu.org/licenses/>.
2121
22import subprocess22import cloudinit.util as util
2323
2424
25def netdev_info(empty=""):25def netdev_info(empty=""):
26 fields = ("hwaddr", "addr", "bcast", "mask")26 fields = ("hwaddr", "addr", "bcast", "mask")
27 ifcfg_out = str(subprocess.check_output(["ifconfig", "-a"]))27 (ifcfg_out, _err) = util.subp(["ifconfig", "-a"])
28 devs = {}28 devs = {}
29 for line in ifcfg_out.splitlines():29 for line in str(ifcfg_out).splitlines():
30 if len(line) == 0:30 if len(line) == 0:
31 continue31 continue
32 if line[0] not in ("\t", " "):32 if line[0] not in ("\t", " "):
@@ -70,9 +70,9 @@
7070
7171
72def route_info():72def route_info():
73 route_out = str(subprocess.check_output(["route", "-n"]))73 (route_out, _err) = util.subp(["route", "-n"])
74 routes = []74 routes = []
75 for line in route_out.splitlines()[1:]:75 for line in str(route_out).splitlines()[1:]:
76 if not line:76 if not line:
77 continue77 continue
78 toks = line.split()78 toks = line.split()
7979
=== modified file 'cloudinit/util.py'
--- cloudinit/util.py 2012-01-18 14:07:33 +0000
+++ cloudinit/util.py 2012-06-13 13:16:19 +0000
@@ -32,6 +32,7 @@
32import socket32import socket
33import sys33import sys
34import time34import time
35import tempfile
35import traceback36import traceback
36import urlparse37import urlparse
3738
@@ -208,16 +209,18 @@
208 if skip_no_exist and not os.path.isdir(dirp):209 if skip_no_exist and not os.path.isdir(dirp):
209 return210 return
210211
211 # per bug 857926, Fedora's run-parts will exit failure on empty dir212 failed = 0
212 if os.path.isdir(dirp) and os.listdir(dirp) == []:213 for exe_name in sorted(os.listdir(dirp)):
213 return214 exe_path = os.path.join(dirp, exe_name)
214215 if os.path.isfile(exe_path) and os.access(exe_path, os.X_OK):
215 cmd = ['run-parts', '--regex', '.*', dirp]216 popen = subprocess.Popen([exe_path])
216 sp = subprocess.Popen(cmd)217 popen.communicate()
217 sp.communicate()218 if popen.returncode is not 0:
218 if sp.returncode is not 0:219 failed += 1
219 raise subprocess.CalledProcessError(sp.returncode, cmd)220 sys.stderr.write("failed: %s [%i]\n" %
220 return221 (exe_path, popen.returncode))
222 if failed:
223 raise RuntimeError('runparts: %i failures' % failed)
221224
222225
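The rewritten runparts loop (replacing the external `run-parts` binary, which on Fedora exits non-zero for an empty directory) runs each executable file in sorted order and raises only after trying them all; a self-contained sketch:

```python
import os
import subprocess
import tempfile

# Sketch of the new runparts logic: execute files in sorted order,
# count non-zero exits, and raise once at the end if any failed.
def runparts(dirp):
    failed = 0
    for exe_name in sorted(os.listdir(dirp)):
        exe_path = os.path.join(dirp, exe_name)
        if os.path.isfile(exe_path) and os.access(exe_path, os.X_OK):
            if subprocess.call([exe_path]) != 0:
                failed += 1
    if failed:
        raise RuntimeError('runparts: %i failures' % failed)

d = tempfile.mkdtemp()
script = os.path.join(d, '00-ok')
with open(script, 'w') as fp:
    fp.write('#!/bin/sh\nexit 0\n')
os.chmod(script, 0o755)
runparts(d)  # no exception: the single script exits 0
```

Deferring the raise means one failing script no longer prevents the later scripts from running, unlike the old `CalledProcessError` path.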
223def subp(args, input_=None):226def subp(args, input_=None):
@@ -515,30 +518,70 @@
     return(string.replace('\r\n', '\n'))
 
 
-def islxc():
-    # is this host running lxc?
-    try:
-        with open("/proc/1/cgroup") as f:
-            if f.read() == "/":
-                return True
-    except IOError as e:
-        if e.errno != errno.ENOENT:
-            raise
-
-    try:
-        # try to run a program named 'lxc-is-container'. if it returns true,
-        # then we're inside a container. otherwise, no
-        sp = subprocess.Popen(['lxc-is-container'], stdout=subprocess.PIPE,
-                              stderr=subprocess.PIPE)
-        sp.communicate(None)
-        return(sp.returncode == 0)
-    except OSError as e:
-        if e.errno != errno.ENOENT:
-            raise
+def is_container():
+    # is this code running in a container of some sort
+
+    for helper in ('running-in-container', 'lxc-is-container'):
+        try:
+            # try to run a helper program. if it returns true
+            # then we're inside a container. otherwise, no
+            sp = subprocess.Popen(helper, stdout=subprocess.PIPE,
+                                  stderr=subprocess.PIPE)
+            sp.communicate(None)
+            return(sp.returncode == 0)
+        except OSError as e:
+            if e.errno != errno.ENOENT:
+                raise
+
+    # this code is largely from the logic in
+    # ubuntu's /etc/init/container-detect.conf
+    try:
+        # Detect old-style libvirt
+        # Detect OpenVZ containers
+        pid1env = get_proc_env(1)
+        if "container" in pid1env:
+            return True
+
+        if "LIBVIRT_LXC_UUID" in pid1env:
+            return True
+
+    except IOError as e:
+        if e.errno != errno.ENOENT:
+            pass
+
+    # Detect OpenVZ containers
+    if os.path.isdir("/proc/vz") and not os.path.isdir("/proc/bc"):
+        return True
+
+    try:
+        # Detect Vserver containers
+        with open("/proc/self/status") as fp:
+            lines = fp.read().splitlines()
+            for line in lines:
+                if line.startswith("VxID:"):
+                    (_key, val) = line.strip().split(":", 1)
+                    if val != "0":
+                        return True
+    except IOError as e:
+        if e.errno != errno.ENOENT:
+            pass
 
     return False
 
 
+def get_proc_env(pid):
+    # return the environment in a dict that a given process id was started with
+    env = {}
+    with open("/proc/%s/environ" % pid) as fp:
+        toks = fp.read().split("\0")
+        for tok in toks:
+            if tok == "":
+                continue
+            (name, val) = tok.split("=", 1)
+            env[name] = val
+    return env
+
+
 def get_hostname_fqdn(cfg, cloud):
     # return the hostname and fqdn from 'cfg'.  If not found in cfg,
     # then fall back to data from cloud
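The new get_proc_env helper above splits /proc/&lt;pid&gt;/environ on NUL bytes. The same parsing can be exercised against an in-memory blob (a minimal standalone sketch for illustration, not the merged code itself):

```python
def parse_environ_blob(blob):
    """Parse a NUL-separated 'NAME=value' blob, as read from
    /proc/<pid>/environ, into a dict (mirrors get_proc_env)."""
    env = {}
    for tok in blob.split("\0"):
        if tok == "":
            continue  # a trailing NUL leaves an empty token; skip it
        (name, val) = tok.split("=", 1)  # values may themselves contain '='
        env[name] = val
    return env

# container detection looks for keys like these in pid 1's environment
print(parse_environ_blob("container=lxc\0LIBVIRT_LXC_UUID=1234\0"))
```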
@@ -630,3 +673,183 @@
         return
     with open(os.devnull) as fp:
         os.dup2(fp.fileno(), sys.stdin.fileno())
+
+
+def find_devs_with(criteria):
+    """
+    find devices matching given criteria (via blkid)
+    criteria can be *one* of:
+      TYPE=<filesystem>
+      LABEL=<label>
+      UUID=<uuid>
+    """
+    try:
+        (out, _err) = subp(['blkid', '-t%s' % criteria, '-odevice'])
+    except subprocess.CalledProcessError:
+        return([])
+    return(str(out).splitlines())
+
+
+class mountFailedError(Exception):
+    pass
+
+
+def mount_callback_umount(device, callback, data=None):
+    """
+    mount the device, call method 'callback' passing the directory
+    in which it was mounted, then unmount.  Return whatever 'callback'
+    returned.  If data != None, also pass data to callback.
+    """
+
+    def _cleanup(umount, tmpd):
+        if umount:
+            try:
+                subp(["umount", '-l', umount])
+            except subprocess.CalledProcessError:
+                raise
+        if tmpd:
+            os.rmdir(tmpd)
+
+    # go through mounts to see if it was already mounted
+    fp = open("/proc/mounts")
+    mounts = fp.readlines()
+    fp.close()
+
+    tmpd = None
+
+    mounted = {}
+    for mpline in mounts:
+        (dev, mp, fstype, _opts, _freq, _passno) = mpline.split()
+        mp = mp.replace("\\040", " ")
+        mounted[dev] = (dev, fstype, mp, False)
+
+    umount = False
+    if device in mounted:
+        mountpoint = "%s/" % mounted[device][2]
+    else:
+        tmpd = tempfile.mkdtemp()
+
+        mountcmd = ["mount", "-o", "ro", device, tmpd]
+
+        try:
+            (_out, _err) = subp(mountcmd)
+            umount = tmpd
+        except subprocess.CalledProcessError as exc:
+            _cleanup(umount, tmpd)
+            raise mountFailedError(exc.output[1])
+
+        mountpoint = "%s/" % tmpd
+
+    try:
+        if data == None:
+            ret = callback(mountpoint)
+        else:
+            ret = callback(mountpoint, data)
+
+    except Exception as exc:
+        _cleanup(umount, tmpd)
+        raise exc
+
+    _cleanup(umount, tmpd)
+
+    return(ret)
+
+
+def wait_for_url(urls, max_wait=None, timeout=None,
+                 status_cb=None, headers_cb=None):
+    """
+    urls:       a list of urls to try
+    max_wait:   roughly the maximum time to wait before giving up
+                The max time is *actually* len(urls)*timeout as each url will
+                be tried once and given the timeout provided.
+    timeout:    the timeout provided to urllib2.urlopen
+    status_cb:  call method with string message when a url is not available
+    headers_cb: call method with single argument of url to get headers
+                for request.
+
+    the idea of this routine is to wait for the EC2 metadata service to
+    come up.  On both Eucalyptus and EC2 we have seen the case where
+    the instance hit the MD before the MD service was up.  EC2 seems
+    to have permanently fixed this, though.
+
+    In OpenStack, the metadata service might be painfully slow, and
+    unable to avoid hitting a timeout of even up to 10 seconds or more
+    (LP: #894279) for a simple GET.
+
+    Offset those needs with the need to not hang forever (and block boot)
+    on a system where cloud-init is configured to look for EC2 Metadata
+    service but is not going to find one.  It is possible that the instance
+    data host (169.254.169.254) may be firewalled off entirely for a system,
+    meaning that the connection will block forever unless a timeout is set.
+    """
+    starttime = time.time()
+
+    sleeptime = 1
+
+    def nullstatus_cb(msg):
+        return
+
+    if status_cb == None:
+        status_cb = nullstatus_cb
+
+    def timeup(max_wait, starttime):
+        return((max_wait <= 0 or max_wait == None) or
+               (time.time() - starttime > max_wait))
+
+    loop_n = 0
+    while True:
+        sleeptime = int(loop_n / 5) + 1
+        for url in urls:
+            now = time.time()
+            if loop_n != 0:
+                if timeup(max_wait, starttime):
+                    break
+                if timeout and (now + timeout > (starttime + max_wait)):
+                    # shorten timeout to not run way over max_time
+                    timeout = int((starttime + max_wait) - now)
+
+            reason = ""
+            try:
+                if headers_cb != None:
+                    headers = headers_cb(url)
+                else:
+                    headers = {}
+
+                req = urllib2.Request(url, data=None, headers=headers)
+                resp = urllib2.urlopen(req, timeout=timeout)
+                if resp.read() != "":
+                    return url
+                reason = "empty data [%s]" % resp.getcode()
+            except urllib2.HTTPError as e:
+                reason = "http error [%s]" % e.code
+            except urllib2.URLError as e:
+                reason = "url error [%s]" % e.reason
+            except socket.timeout as e:
+                reason = "socket timeout [%s]" % e
+            except Exception as e:
+                reason = "unexpected error [%s]" % e
+
+            status_cb("'%s' failed [%s/%ss]: %s" %
+                      (url, int(time.time() - starttime), max_wait,
+                       reason))
+
+        if timeup(max_wait, starttime):
+            break
+
+        loop_n = loop_n + 1
+        time.sleep(sleeptime)
+
+    return False
+
+
+def keyval_str_to_dict(kvstring):
+    ret = {}
+    for tok in kvstring.split():
+        try:
+            (key, val) = tok.split("=", 1)
+        except ValueError:
+            key = tok
+            val = True
+        ret[key] = val
+
+    return(ret)
 
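keyval_str_to_dict above is what lets cloud-init treat the kernel command line as key/value data. A standalone sketch of the same parsing, runnable outside the tree:

```python
def keyval_str_to_dict(kvstring):
    """Split a 'key=val key2=val2 flag' string (e.g. a kernel command
    line) into a dict; bare tokens with no '=' map to True."""
    ret = {}
    for tok in kvstring.split():
        try:
            (key, val) = tok.split("=", 1)
        except ValueError:
            key = tok   # token had no '=', treat as a boolean flag
            val = True
        ret[key] = val
    return ret

print(keyval_str_to_dict("root=/dev/sda ro url=http://foo.bar.zee/abcde"))
```

Note that only the first '=' splits, so values containing '=' survive intact.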
=== modified file 'config/cloud.cfg'
--- config/cloud.cfg 2012-01-17 17:46:44 +0000
+++ config/cloud.cfg 2012-06-13 13:16:19 +0000
@@ -1,7 +1,7 @@
 user: ubuntu
 disable_root: 1
 preserve_hostname: False
-# datasource_list: [ "NoCloud", "OVF", "Ec2" ]
+# datasource_list: ["NoCloud", "ConfigDrive", "OVF", "MAAS", "Ec2", "CloudStack"]
 
 cloud_init_modules:
  - bootcmd
@@ -19,11 +19,13 @@
  - locale
  - set-passwords
  - grub-dpkg
+ - apt-pipelining
  - apt-update-upgrade
  - landscape
  - timezone
  - puppet
  - chef
+ - salt-minion
  - mcollective
  - disable-ec2-metadata
  - runcmd
 
=== modified file 'debian.trunk/control'
--- debian.trunk/control 2012-01-12 17:51:48 +0000
+++ debian.trunk/control 2012-06-13 13:16:19 +0000
@@ -20,6 +20,7 @@
  python-boto (>=2.0),
  python-cheetah,
  python-configobj,
+ python-oauth,
  python-software-properties,
  python-yaml,
  ${misc:Depends},
 
=== added directory 'doc/configdrive'
=== added file 'doc/configdrive/README'
--- doc/configdrive/README 1970-01-01 00:00:00 +0000
+++ doc/configdrive/README 2012-06-13 13:16:19 +0000
@@ -0,0 +1,118 @@
1The 'ConfigDrive' DataSource supports the OpenStack configdrive disk.
2See doc/source/api_ext/ext_config_drive.rst in the nova source code for
3more information on config drive.
4
5The following criteria are required to be identified by
6DataSourceConfigDrive as a config drive:
7 * must be formatted with a vfat filesystem
8 * must be an un-partitioned block device (/dev/vdb, not /dev/vdb1)
9 * must contain one of the following files:
10 * etc/network/interfaces
11 * root/.ssh/authorized_keys
12 * meta.js
13
14By default, cloud-init does not consider this source to be a full-fledged
15datasource. Instead, the default behavior is to assume it is really only
16present to provide networking information. Cloud-init will copy off the
17network information, apply it to the system, and then continue on. The
18"full" datasource would then be found in the EC2 metadata service.
19
20== Content of config-drive ==
21 * etc/network/interfaces
22 This file is laid down by nova in order to pass static networking
23 information to the guest. Cloud-init will copy it off of the config-drive
24 and into /etc/network/interfaces as soon as it can, and then attempt to
25 bring up all network interfaces.
26
27 * root/.ssh/authorized_keys
28 This file is laid down by nova, and contains the keys that were
29 provided to it on instance creation (nova-boot --key ....)
30
31 Cloud-init will copy those keys and put them into the configured user
32 ('ubuntu') .ssh/authorized_keys.
33
34 * meta.js
35 meta.js is populated on the config-drive in response to the user passing
36 "meta flags" (nova boot --meta key=value ...). It is expected to be JSON
37 formatted.
38
39== Configuration ==
40Cloud-init's behavior can be modified by keys found in the meta.js file in
41the following ways:
42 * dsmode:
43 values: local, net, pass
44 default: pass
45
46 This is what indicates if configdrive is a final data source or not.
47 By default it is 'pass', meaning this datasource should not be read.
48 Set it to 'local' or 'net' to stop cloud-init from continuing on to
49 search for other data sources after network config.
50
51 The difference between 'local' and 'net' is that local will not require
52 networking to be up before user-data actions (or boothooks) are run.
53
54 * instance-id:
55 default: iid-dsconfigdrive
56 This is utilized as the metadata's instance-id. It should generally
57 be unique, as it is what is used to determine "is this a new instance".
58
59 * public-keys:
60 default: None
61 if present, these keys will be used as the public keys for the
62 instance. This value overrides the content in authorized_keys.
63 Note: it is likely preferable to provide keys via user-data
64
65 * user-data:
66 default: None
67 This provides cloud-init user-data. See other documentation for what
68 all can be present here.
69
70== Example ==
71Here is an example using the nova client (python-novaclient)
72
73Assuming the following variables set up:
74 * img_id : set to the nova image id (uuid from image-list)
75 * flav_id : set to numeric flavor_id (nova flavor-list)
76 * keyname : set to name of key for this instance (nova keypair-list)
77
78$ cat my-user-data
79#!/bin/sh
80echo ==== USER_DATA FROM EC2 MD ==== | tee /ud.log
81
82$ ud_value=$(sed 's,EC2 MD,META KEY,' my-user-data)
83
84## Now, 'ud_value' has the same content as the my-user-data file, but
85## with the string "USER_DATA FROM META KEY"
86
87## launch an instance with dsmode=pass
88## This will really not use the configdrive for anything as the mode
89## for the datasource is 'pass', meaning it will still expect some
90## other data source (DataSourceEc2).
91
92$ nova boot --image=$img_id --config-drive=1 --flavor=$flav_id \
93 --key_name=$keyname \
94 --user_data=my-user-data \
95 "--meta=instance-id=iid-001" \
96 "--meta=user-data=${ud_value}" \
97 "--meta=dsmode=pass" cfgdrive-dsmode-pass
98
99$ euca-get-console-output i-0000001 | grep USER_DATA
100echo ==== USER_DATA FROM EC2 MD ==== | tee /ud.log
101
102## Now, launch an instance with dsmode=local
103## This time, the only metadata and userdata available to cloud-init
104## are on the config-drive
105$ nova boot --image=$img_id --config-drive=1 --flavor=$flav_id \
106 --key_name=$keyname \
107 --user_data=my-user-data \
108 "--meta=instance-id=iid-001" \
109 "--meta=user-data=${ud_value}" \
110 "--meta=dsmode=local" cfgdrive-dsmode-local
111
112$ euca-get-console-output i-0000002 | grep USER_DATA
113echo ==== USER_DATA FROM META KEY ==== | tee /ud.log
114
115--
116[1] https://github.com/openstack/nova/blob/master/doc/source/api_ext/ext_config_drive.rst for more information
117
118
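The dsmode and instance-id defaults described in the Configuration section can be illustrated with a small sketch (the function name and in-memory handling are illustrative, not the actual DataSourceConfigDrive code):

```python
import json

# documented defaults for keys that may appear in meta.js
DEFAULTS = {"dsmode": "pass", "instance-id": "iid-dsconfigdrive"}

def read_meta_js(text):
    """Merge keys from a meta.js JSON document over the documented
    defaults, validating dsmode against its allowed values."""
    md = dict(DEFAULTS)
    md.update(json.loads(text))
    if md["dsmode"] not in ("local", "net", "pass"):
        raise ValueError("bad dsmode: %s" % md["dsmode"])
    return md

print(read_meta_js('{"dsmode": "local", "instance-id": "iid-001"}'))
```

With dsmode left at its default of 'pass', the merged metadata signals cloud-init to keep searching for another datasource.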
=== added file 'doc/examples/cloud-config-chef-oneiric.txt'
--- doc/examples/cloud-config-chef-oneiric.txt 1970-01-01 00:00:00 +0000
+++ doc/examples/cloud-config-chef-oneiric.txt 2012-06-13 13:16:19 +0000
@@ -0,0 +1,90 @@
1#cloud-config
2#
3# This is an example file to automatically install chef-client and run a
4# list of recipes when the instance boots for the first time.
5# Make sure that this file is valid yaml before starting instances.
6# It should be passed as user-data when starting the instance.
7#
8# This example assumes the instance is 11.10 (oneiric)
9
10
11# The default is to install from packages.
12
13# Key from http://apt.opscode.com/packages@opscode.com.gpg.key
14apt_sources:
15 - source: "deb http://apt.opscode.com/ $RELEASE-0.10 main"
16 key: |
17 -----BEGIN PGP PUBLIC KEY BLOCK-----
18 Version: GnuPG v1.4.9 (GNU/Linux)
19
20 mQGiBEppC7QRBADfsOkZU6KZK+YmKw4wev5mjKJEkVGlus+NxW8wItX5sGa6kdUu
21 twAyj7Yr92rF+ICFEP3gGU6+lGo0Nve7KxkN/1W7/m3G4zuk+ccIKmjp8KS3qn99
22 dxy64vcji9jIllVa+XXOGIp0G8GEaj7mbkixL/bMeGfdMlv8Gf2XPpp9vwCgn/GC
23 JKacfnw7MpLKUHOYSlb//JsEAJqao3ViNfav83jJKEkD8cf59Y8xKia5OpZqTK5W
24 ShVnNWS3U5IVQk10ZDH97Qn/YrK387H4CyhLE9mxPXs/ul18ioiaars/q2MEKU2I
25 XKfV21eMLO9LYd6Ny/Kqj8o5WQK2J6+NAhSwvthZcIEphcFignIuobP+B5wNFQpe
26 DbKfA/0WvN2OwFeWRcmmd3Hz7nHTpcnSF+4QX6yHRF/5BgxkG6IqBIACQbzPn6Hm
27 sMtm/SVf11izmDqSsQptCrOZILfLX/mE+YOl+CwWSHhl+YsFts1WOuh1EhQD26aO
28 Z84HuHV5HFRWjDLw9LriltBVQcXbpfSrRP5bdr7Wh8vhqJTPjrQnT3BzY29kZSBQ
29 YWNrYWdlcyA8cGFja2FnZXNAb3BzY29kZS5jb20+iGAEExECACAFAkppC7QCGwMG
30 CwkIBwMCBBUCCAMEFgIDAQIeAQIXgAAKCRApQKupg++Caj8sAKCOXmdG36gWji/K
31 +o+XtBfvdMnFYQCfTCEWxRy2BnzLoBBFCjDSK6sJqCu5Ag0ESmkLtBAIAIO2SwlR
32 lU5i6gTOp42RHWW7/pmW78CwUqJnYqnXROrt3h9F9xrsGkH0Fh1FRtsnncgzIhvh
33 DLQnRHnkXm0ws0jV0PF74ttoUT6BLAUsFi2SPP1zYNJ9H9fhhK/pjijtAcQwdgxu
34 wwNJ5xCEscBZCjhSRXm0d30bK1o49Cow8ZIbHtnXVP41c9QWOzX/LaGZsKQZnaMx
35 EzDk8dyyctR2f03vRSVyTFGgdpUcpbr9eTFVgikCa6ODEBv+0BnCH6yGTXwBid9g
36 w0o1e/2DviKUWCC+AlAUOubLmOIGFBuI4UR+rux9affbHcLIOTiKQXv79lW3P7W8
37 AAfniSQKfPWXrrcAAwUH/2XBqD4Uxhbs25HDUUiM/m6Gnlj6EsStg8n0nMggLhuN
38 QmPfoNByMPUqvA7sULyfr6xCYzbzRNxABHSpf85FzGQ29RF4xsA4vOOU8RDIYQ9X
39 Q8NqqR6pydprRFqWe47hsAN7BoYuhWqTtOLSBmnAnzTR5pURoqcquWYiiEavZixJ
40 3ZRAq/HMGioJEtMFrvsZjGXuzef7f0ytfR1zYeLVWnL9Bd32CueBlI7dhYwkFe+V
41 Ep5jWOCj02C1wHcwt+uIRDJV6TdtbIiBYAdOMPk15+VBdweBXwMuYXr76+A7VeDL
42 zIhi7tKFo6WiwjKZq0dzctsJJjtIfr4K4vbiD9Ojg1iISQQYEQIACQUCSmkLtAIb
43 DAAKCRApQKupg++CauISAJ9CxYPOKhOxalBnVTLeNUkAHGg2gACeIsbobtaD4ZHG
44 0GLl8EkfA8uhluM=
45 =zKAm
46 -----END PGP PUBLIC KEY BLOCK-----
47
48chef:
49
50 # 11.10 will fail if install_type is "gems" (LP: #960576)
51 install_type: "packages"
52
53 # Chef settings
54 server_url: "https://chef.yourorg.com:4000"
55
56 # Node Name
57 # Defaults to the instance-id if not present
58 node_name: "your-node-name"
59
60 # Environment
61 # Defaults to '_default' if not present
62 environment: "production"
63
64 # Default validation name is chef-validator
65 validation_name: "yourorg-validator"
66
67 # value of validation_cert is not used if validation_key defined,
68 # but variable needs to be defined (LP: #960547)
69 validation_cert: "unused"
70 validation_key: |
71 -----BEGIN RSA PRIVATE KEY-----
72 YOUR-ORGS-VALIDATION-KEY-HERE
73 -----END RSA PRIVATE KEY-----
74
75 # A run list for a first boot json
76 run_list:
77 - "recipe[apache2]"
78 - "role[db]"
79
80 # Specify a list of initial attributes used by the cookbooks
81 initial_attributes:
82 apache:
83 prefork:
84 maxclients: 100
85 keepalive: "off"
86
87
88# Capture all subprocess output into a logfile
89# Useful for troubleshooting cloud-init issues
90output: {all: '| tee -a /var/log/cloud-init-output.log'}
=== modified file 'doc/examples/cloud-config-chef.txt'
--- doc/examples/cloud-config-chef.txt 2012-01-20 20:10:28 +0000
+++ doc/examples/cloud-config-chef.txt 2012-06-13 13:16:19 +0000
@@ -1,17 +1,54 @@
 #cloud-config
 #
-# This is an example file to automatically setup chef and run a list of recipes
-# when the instance boots for the first time.
+# This is an example file to automatically install chef-client and run a
+# list of recipes when the instance boots for the first time.
 # Make sure that this file is valid yaml before starting instances.
 # It should be passed as user-data when starting the instance.
-
-# The default is to install from packages. If you want the latest packages from Opscode, be sure to add their repo:
-apt_mirror: http://apt.opscode.com/
+#
+# This example assumes the instance is 12.04 (precise)
+
+
11# The default is to install from packages.
12
13# Key from http://apt.opscode.com/packages@opscode.com.gpg.key
14apt_sources:
15 - source: "deb http://apt.opscode.com/ $RELEASE-0.10 main"
16 key: |
17 -----BEGIN PGP PUBLIC KEY BLOCK-----
18 Version: GnuPG v1.4.9 (GNU/Linux)
19
20 mQGiBEppC7QRBADfsOkZU6KZK+YmKw4wev5mjKJEkVGlus+NxW8wItX5sGa6kdUu
21 twAyj7Yr92rF+ICFEP3gGU6+lGo0Nve7KxkN/1W7/m3G4zuk+ccIKmjp8KS3qn99
22 dxy64vcji9jIllVa+XXOGIp0G8GEaj7mbkixL/bMeGfdMlv8Gf2XPpp9vwCgn/GC
23 JKacfnw7MpLKUHOYSlb//JsEAJqao3ViNfav83jJKEkD8cf59Y8xKia5OpZqTK5W
24 ShVnNWS3U5IVQk10ZDH97Qn/YrK387H4CyhLE9mxPXs/ul18ioiaars/q2MEKU2I
25 XKfV21eMLO9LYd6Ny/Kqj8o5WQK2J6+NAhSwvthZcIEphcFignIuobP+B5wNFQpe
26 DbKfA/0WvN2OwFeWRcmmd3Hz7nHTpcnSF+4QX6yHRF/5BgxkG6IqBIACQbzPn6Hm
27 sMtm/SVf11izmDqSsQptCrOZILfLX/mE+YOl+CwWSHhl+YsFts1WOuh1EhQD26aO
28 Z84HuHV5HFRWjDLw9LriltBVQcXbpfSrRP5bdr7Wh8vhqJTPjrQnT3BzY29kZSBQ
29 YWNrYWdlcyA8cGFja2FnZXNAb3BzY29kZS5jb20+iGAEExECACAFAkppC7QCGwMG
30 CwkIBwMCBBUCCAMEFgIDAQIeAQIXgAAKCRApQKupg++Caj8sAKCOXmdG36gWji/K
31 +o+XtBfvdMnFYQCfTCEWxRy2BnzLoBBFCjDSK6sJqCu5Ag0ESmkLtBAIAIO2SwlR
32 lU5i6gTOp42RHWW7/pmW78CwUqJnYqnXROrt3h9F9xrsGkH0Fh1FRtsnncgzIhvh
33 DLQnRHnkXm0ws0jV0PF74ttoUT6BLAUsFi2SPP1zYNJ9H9fhhK/pjijtAcQwdgxu
34 wwNJ5xCEscBZCjhSRXm0d30bK1o49Cow8ZIbHtnXVP41c9QWOzX/LaGZsKQZnaMx
35 EzDk8dyyctR2f03vRSVyTFGgdpUcpbr9eTFVgikCa6ODEBv+0BnCH6yGTXwBid9g
36 w0o1e/2DviKUWCC+AlAUOubLmOIGFBuI4UR+rux9affbHcLIOTiKQXv79lW3P7W8
37 AAfniSQKfPWXrrcAAwUH/2XBqD4Uxhbs25HDUUiM/m6Gnlj6EsStg8n0nMggLhuN
38 QmPfoNByMPUqvA7sULyfr6xCYzbzRNxABHSpf85FzGQ29RF4xsA4vOOU8RDIYQ9X
39 Q8NqqR6pydprRFqWe47hsAN7BoYuhWqTtOLSBmnAnzTR5pURoqcquWYiiEavZixJ
40 3ZRAq/HMGioJEtMFrvsZjGXuzef7f0ytfR1zYeLVWnL9Bd32CueBlI7dhYwkFe+V
41 Ep5jWOCj02C1wHcwt+uIRDJV6TdtbIiBYAdOMPk15+VBdweBXwMuYXr76+A7VeDL
42 zIhi7tKFo6WiwjKZq0dzctsJJjtIfr4K4vbiD9Ojg1iISQQYEQIACQUCSmkLtAIb
43 DAAKCRApQKupg++CauISAJ9CxYPOKhOxalBnVTLeNUkAHGg2gACeIsbobtaD4ZHG
44 0GLl8EkfA8uhluM=
45 =zKAm
46 -----END PGP PUBLIC KEY BLOCK-----
 
 chef:
 
  # Valid values are 'gems' and 'packages'
- install_type: "gems"
+ install_type: "packages"
 
  # Chef settings
  server_url: "https://chef.yourorg.com:4000"
@@ -42,3 +79,8 @@
      prefork:
        maxclients: 100
      keepalive: "off"
+
+
+# Capture all subprocess output into a logfile
+# Useful for troubleshooting cloud-init issues
+output: {all: '| tee -a /var/log/cloud-init-output.log'}
 
=== modified file 'doc/examples/cloud-config-datasources.txt'
--- doc/examples/cloud-config-datasources.txt 2011-12-19 17:00:48 +0000
+++ doc/examples/cloud-config-datasources.txt 2012-06-13 13:16:19 +0000
@@ -13,3 +13,21 @@
  metadata_urls:
   - http://169.254.169.254:80
   - http://instance-data:8773
+
+ MAAS:
+  timeout : 50
+  max_wait : 120
+
+  # there are no default values for metadata_url or oauth credentials
+  # If no credentials are present, non-authed attempts will be made.
+  metadata_url: http://mass-host.localdomain/source
+  consumer_key: Xh234sdkljf
+  token_key: kjfhgb3n
+  token_secret: 24uysdfx1w4
+
+ NoCloud:
+  # default seedfrom is None
+  # if found, then it should contain a url with:
+  #    <url>/user-data and <url>/meta-data
+  # seedfrom: http://my.example.com/i-abcde
+  seedfrom: None
 
=== modified file 'doc/examples/cloud-config.txt'
--- doc/examples/cloud-config.txt 2011-12-20 16:40:51 +0000
+++ doc/examples/cloud-config.txt 2012-06-13 13:16:19 +0000
@@ -45,6 +45,15 @@
 # apt_proxy (configure Acquire::HTTP::Proxy)
 apt_proxy: http://my.apt.proxy:3128
 
+# apt_pipelining (configure Acquire::http::Pipeline-Depth)
+# Default: disables HTTP pipelining. Certain web servers, such
+# as S3 do not pipeline properly (LP: #948461).
+# Valid options:
+#   False/default: Disables pipelining for APT
+#   None/Unchanged: Use OS default
+#   Number: Set pipelining to some number (not recommended)
+apt_pipelining: False
+
 # Preserve existing /etc/apt/sources.list
 # Default: overwrite sources_list with mirror. If this is true
 # then apt_mirror above will have no effect
@@ -342,6 +351,8 @@
 # this allows you to launch an instance with a larger disk / partition
 # and have the instance automatically grow / to accommodate it
 # set to 'False' to disable
+# by default, the resizefs is done early in boot, and blocks
+# if resize_rootfs is set to 'noblock', then it will be run in parallel
 resize_rootfs: True
 
 ## hostname and /etc/hosts management
 
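The apt_pipelining values documented above map to an Acquire::http::Pipeline-Depth setting roughly as follows (a sketch of the documented semantics only, not the cc_apt_pipelining module itself):

```python
def pipeline_depth_setting(apt_pipelining):
    """Return the apt.conf line implied by an apt_pipelining value,
    or None when the OS default should be left unchanged."""
    if apt_pipelining in (False, "default"):
        depth = 0          # disable pipelining (the cloud-init default)
    elif apt_pipelining in (None, "unchanged"):
        return None        # leave whatever the OS ships
    else:
        depth = int(apt_pipelining)  # explicit depth (not recommended)
    return 'Acquire::http::Pipeline-Depth "%d";' % depth

print(pipeline_depth_setting(False))
```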
=== added file 'doc/kernel-cmdline.txt'
--- doc/kernel-cmdline.txt 1970-01-01 00:00:00 +0000
+++ doc/kernel-cmdline.txt 2012-06-13 13:16:19 +0000
@@ -0,0 +1,48 @@
1In order to allow an ephemeral, or otherwise pristine image to
2receive some configuration, cloud-init will read a url directed by
3the kernel command line and proceed as if its data had previously existed.
4
5This allows for configuring a meta-data service, or some other data.
6
7Note that usage of the kernel command line is somewhat of a last resort,
8as it requires knowing in advance the correct command line or modifying
9the boot loader to append data.
10
11For example, when 'cloud-init start' runs, it will check to
12see if one of 'cloud-config-url' or 'url' appears in key/value fashion
13in the kernel command line as in:
14 root=/dev/sda ro url=http://foo.bar.zee/abcde
15
16Cloud-init will then read the contents of the given url.
17If the content starts with '#cloud-config', it will store
18that data to the local filesystem in a static filename
19'/etc/cloud/cloud.cfg.d/91_kernel_cmdline_url.cfg', and consider it as
20part of the config from that point forward.
21
22If that file exists already, it will not be overwritten, and the url
23parameters are completely ignored.
24
25Then, when the DataSource runs, it will find that config already available.
26
27So, in order to configure the MAAS DataSource by controlling the kernel
28command line from outside the image, you can append:
29 url=http://your.url.here/abcdefg
30or
31 cloud-config-url=http://your.url.here/abcdefg
32
33Then, have the following content at that url:
34 #cloud-config
35 datasource:
36 MAAS:
37 metadata_url: http://mass-host.localdomain/source
38 consumer_key: Xh234sdkljf
39 token_key: kjfhgb3n
40 token_secret: 24uysdfx1w4
41
42Notes:
43 * Because 'url=' is so very generic, in order to avoid false positives,
44 cloud-init requires the content to start with '#cloud-config' in order
45 for it to be considered.
46 * The url= is an un-authed HTTP GET, and contains credentials.
47 It could be set up to be randomly generated and also check source
48 address in order to be more secure
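The lookup described above (find 'cloud-config-url' or 'url' as a key/value pair on the kernel command line, preferring the more specific key) can be sketched as follows; this is a standalone illustration, not the shipped cloud-init code:

```python
def find_cmdline_url(cmdline, names=("cloud-config-url", "url")):
    """Return (key, url) for the first of 'names' present as key=value
    on the kernel command line, or (None, None) if absent."""
    entries = cmdline.split()
    for name in names:       # earlier names take precedence
        for tok in entries:
            if tok.startswith(name + "="):
                return (name, tok.split("=", 1)[1])
    return (None, None)

print(find_cmdline_url("root=/dev/sda ro url=http://foo.bar.zee/abcde"))
```

The fetched content would then only be used if it starts with '#cloud-config', as the Notes section explains.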
=== added directory 'doc/nocloud'
=== added file 'doc/nocloud/README'
--- doc/nocloud/README 1970-01-01 00:00:00 +0000
+++ doc/nocloud/README 2012-06-13 13:16:19 +0000
@@ -0,0 +1,55 @@
1The data sources 'NoCloud' and 'NoCloudNet' allow the user to provide user-data
2and meta-data to the instance without running a network service (or even without
3having a network at all).
4
5You can provide meta-data and user-data to a local vm boot via files on a vfat
6or iso9660 filesystem. These user-data and meta-data files are expected to be
7in the format described in doc/example/seed/README . Basically, user-data is
8simply user-data, and meta-data is a YAML formatted file representing what you'd
9find in the EC2 metadata service.
10
11Given a 12.04 cloud disk image in 'disk.img', you can create a sufficient disk
12by following the example below.
13
14## create user-data and meta-data files that will be used
15## to modify image on first boot
16$ { echo instance-id: iid-local01; echo local-hostname: cloudimg; } > meta-data
17
18$ printf "#cloud-config\npassword: passw0rd\nchpasswd: { expire: False }\nssh_pwauth: True\n" > user-data
19
20## create a disk to attach with some user-data and meta-data
21$ genisoimage -output seed.iso -volid cidata -joliet -rock user-data meta-data
22
23## alternatively, create a vfat filesystem with same files
24## $ truncate --size 2M seed.img
25## $ mkfs.vfat -n cidata seed.img
26## $ mcopy -oi seed.img user-data meta-data ::
27
28## create a new qcow image to boot, backed by your original image
29$ qemu-img create -f qcow2 -b disk.img boot-disk.img
30
31## boot the image and login as 'ubuntu' with password 'passw0rd'
32## note, passw0rd was set as password through the user-data above,
33## there is no password set on these images.
34$ kvm -m 256 \
35 -net nic -net user,hostfwd=tcp::2222-:22 \
36 -drive file=boot-disk.img,if=virtio \
37 -drive file=seed.iso,if=virtio
38
39Note that the instance-id provided ('iid-local01' above) is what is used to
40determine if this is "first boot". So if you are making updates to user-data
41you will also have to change that, or start the disk fresh.
42
43
44Also, you can inject an /etc/network/interfaces file by providing the content
45for that file in the 'network-interfaces' field of metadata. Example metadata:
46 instance-id: iid-abcdefg
47 network-interfaces: |
48 iface eth0 inet static
49 address 192.168.1.10
50 network 192.168.1.0
51 netmask 255.255.255.0
52 broadcast 192.168.1.255
53 gateway 192.168.1.254
54 hostname: myhost
55
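A quick way to sanity-check a hand-written meta-data file before building the seed disk (a minimal sketch; real meta-data is YAML, so this only handles the flat 'key: value' subset shown above):

```python
def parse_simple_metadata(text):
    """Parse top-level 'key: value' lines of a NoCloud meta-data file.
    Block values such as 'network-interfaces: |' need a real YAML
    parser; this sketch only covers the flat subset."""
    md = {}
    for line in text.splitlines():
        if not line or line.startswith(" ") or line.startswith("#"):
            continue  # skip blanks, indented block bodies, comments
        (key, _sep, val) = line.partition(":")
        md[key.strip()] = val.strip()
    return md

text = "instance-id: iid-local01\nlocal-hostname: cloudimg\n"
print(parse_simple_metadata(text))
```

Remember that changing user-data without also changing instance-id will not re-trigger "first boot" handling.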
=== modified file 'setup.py'
--- setup.py 2011-12-20 16:39:46 +0000
+++ setup.py 2012-06-13 13:16:19 +0000
@@ -47,5 +47,6 @@
         ('/usr/share/doc/cloud-init', filter(is_f,glob('doc/*'))),
         ('/usr/share/doc/cloud-init/examples', filter(is_f,glob('doc/examples/*'))),
         ('/usr/share/doc/cloud-init/examples/seed', filter(is_f,glob('doc/examples/seed/*'))),
+        ('/etc/profile.d', ['tools/Z99-cloud-locale-test.sh']),
     ],
     )
 
=== added file 'tests/unittests/test__init__.py'
--- tests/unittests/test__init__.py 1970-01-01 00:00:00 +0000
+++ tests/unittests/test__init__.py 2012-06-13 13:16:19 +0000
@@ -0,0 +1,242 @@
1from mocker import MockerTestCase, ANY, ARGS, KWARGS
2import os
3
4from cloudinit import (partwalker_handle_handler, handler_handle_part,
5 handler_register, get_cmdline_url)
6from cloudinit.util import write_file, logexc, readurl
7
8
9class TestPartwalkerHandleHandler(MockerTestCase):
10 def setUp(self):
11 self.data = {
12 "handlercount": 0,
13 "frequency": "?",
14 "handlerdir": "?",
15 "handlers": [],
16 "data": None}
17
18 self.expected_module_name = "part-handler-%03d" % (
19 self.data["handlercount"],)
20 expected_file_name = "%s.py" % self.expected_module_name
21 expected_file_fullname = os.path.join(self.data["handlerdir"],
22 expected_file_name)
23 self.module_fake = "fake module handle"
24 self.ctype = None
25 self.filename = None
26 self.payload = "dummy payload"
27
28 # Mock the write_file function
29 write_file_mock = self.mocker.replace(write_file, passthrough=False)
30 write_file_mock(expected_file_fullname, self.payload, 0600)
31
32 def test_no_errors(self):
33 """Payload gets written to file and added to C{pdata}."""
34 # Mock the __import__ builtin
35 import_mock = self.mocker.replace("__builtin__.__import__")
36 import_mock(self.expected_module_name)
37 self.mocker.result(self.module_fake)
38 # Mock the handle_register function
39 handle_reg_mock = self.mocker.replace(handler_register,
40 passthrough=False)
41 handle_reg_mock(self.module_fake, self.data["handlers"],
42 self.data["data"], self.data["frequency"])
43 # Activate mocks
44 self.mocker.replay()
45
46 partwalker_handle_handler(self.data, self.ctype, self.filename,
47 self.payload)
48
49 self.assertEqual(1, self.data["handlercount"])
50
51 def test_import_error(self):
52 """Module import errors are logged. No handler added to C{pdata}"""
53 # Mock the __import__ builtin
54 import_mock = self.mocker.replace("__builtin__.__import__")
55 import_mock(self.expected_module_name)
56 self.mocker.throw(ImportError())
57 # Mock log function
58 logexc_mock = self.mocker.replace(logexc, passthrough=False)
59 logexc_mock(ANY)
60 # Mock the print_exc function
61 print_exc_mock = self.mocker.replace("traceback.print_exc",
62 passthrough=False)
63 print_exc_mock(ARGS, KWARGS)
64 # Activate mocks
65 self.mocker.replay()
66
67 partwalker_handle_handler(self.data, self.ctype, self.filename,
68 self.payload)
69
70 self.assertEqual(0, self.data["handlercount"])
71
72 def test_attribute_error(self):
73 """Attribute errors are logged. No handler added to C{pdata}"""
74 # Mock the __import__ builtin
75 import_mock = self.mocker.replace("__builtin__.__import__")
76 import_mock(self.expected_module_name)
77 self.mocker.result(self.module_fake)
78 # Mock the handle_register function
79 handle_reg_mock = self.mocker.replace(handler_register,
80 passthrough=False)
81 handle_reg_mock(self.module_fake, self.data["handlers"],
82 self.data["data"], self.data["frequency"])
83 self.mocker.throw(AttributeError())
84 # Mock log function
85 logexc_mock = self.mocker.replace(logexc, passthrough=False)
86 logexc_mock(ANY)
87 # Mock the print_exc function
88 print_exc_mock = self.mocker.replace("traceback.print_exc",
89 passthrough=False)
90 print_exc_mock(ARGS, KWARGS)
91 # Activate mocks
92 self.mocker.replay()
93
94 partwalker_handle_handler(self.data, self.ctype, self.filename,
95 self.payload)
96
97 self.assertEqual(0, self.data["handlercount"])
98
99
100class TestHandlerHandlePart(MockerTestCase):
101 def setUp(self):
102 self.data = "fake data"
103 self.ctype = "fake ctype"
104 self.filename = "fake filename"
105 self.payload = "fake payload"
106 self.frequency = "once-per-instance"
107
108 def test_normal_version_1(self):
109 """
110 C{handle_part} is called without C{frequency} for
111 C{handler_version} == 1.
112 """
113 # Build a mock part-handler module
114 mod_mock = self.mocker.mock()
115 getattr(mod_mock, "frequency")
116 self.mocker.result("once-per-instance")
117 getattr(mod_mock, "handler_version")
118 self.mocker.result(1)
119 mod_mock.handle_part(self.data, self.ctype, self.filename,
120 self.payload)
121 self.mocker.replay()
122
123 handler_handle_part(mod_mock, self.data, self.ctype, self.filename,
124 self.payload, self.frequency)
125
126 def test_normal_version_2(self):
127 """
128 C{handle_part} is called with C{frequency} for
129 C{handler_version} == 2.
130 """
131 # Build a mock part-handler module
132 mod_mock = self.mocker.mock()
133 getattr(mod_mock, "frequency")
134 self.mocker.result("once-per-instance")
135 getattr(mod_mock, "handler_version")
136 self.mocker.result(2)
137 mod_mock.handle_part(self.data, self.ctype, self.filename,
138 self.payload, self.frequency)
139 self.mocker.replay()
140
141 handler_handle_part(mod_mock, self.data, self.ctype, self.filename,
142 self.payload, self.frequency)
143
144 def test_modfreq_per_always(self):
145 """
146 C{handle_part} is called regardless of frequency if modfreq is always.
147 """
148 self.frequency = "once"
149 # Build a mock part-handler module
150 mod_mock = self.mocker.mock()
151 getattr(mod_mock, "frequency")
152 self.mocker.result("always")
153 getattr(mod_mock, "handler_version")
154 self.mocker.result(1)
155 mod_mock.handle_part(self.data, self.ctype, self.filename,
156 self.payload)
157 self.mocker.replay()
158
159 handler_handle_part(mod_mock, self.data, self.ctype, self.filename,
160 self.payload, self.frequency)
161
162 def test_no_handle_when_modfreq_once(self):
163 """C{handle_part} is not called if frequency is once"""
164 self.frequency = "once"
165 # Build a mock part-handler module
166 mod_mock = self.mocker.mock()
167 getattr(mod_mock, "frequency")
168 self.mocker.result("once-per-instance")
169 self.mocker.replay()
170
171 handler_handle_part(mod_mock, self.data, self.ctype, self.filename,
172 self.payload, self.frequency)
173
174 def test_exception_is_caught(self):
175 """Exceptions within C{handle_part} are caught and logged."""
176 # Build a mock part-handler module
177 mod_mock = self.mocker.mock()
178 getattr(mod_mock, "frequency")
179 self.mocker.result("once-per-instance")
180 getattr(mod_mock, "handler_version")
181 self.mocker.result(1)
182 mod_mock.handle_part(self.data, self.ctype, self.filename,
183 self.payload)
184 self.mocker.throw(Exception())
185 # Mock log function
186 logexc_mock = self.mocker.replace(logexc, passthrough=False)
187 logexc_mock(ANY)
188 # Mock the print_exc function
189 print_exc_mock = self.mocker.replace("traceback.print_exc",
190 passthrough=False)
191 print_exc_mock(ARGS, KWARGS)
192 self.mocker.replay()
193
194 handler_handle_part(mod_mock, self.data, self.ctype, self.filename,
195 self.payload, self.frequency)
196
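
The dispatch behaviour pinned down by C{TestHandlerHandlePart} can be sketched as follows. This is an illustrative reconstruction, not the trunk code: the function name and the frequency strings come from the tests above, everything else is assumed.

```python
import traceback

PER_INSTANCE = "once-per-instance"
PER_ALWAYS = "always"

def handler_handle_part(mod, data, ctype, filename, payload, frequency):
    # Run only when the module always wants to run, or when both the
    # walk and the module are per-instance (so a per-instance module
    # is skipped during a plain "once" walk).
    mod_freq = getattr(mod, "frequency", PER_INSTANCE)
    if not (mod_freq == PER_ALWAYS or
            (frequency == PER_INSTANCE and mod_freq == PER_INSTANCE)):
        return
    try:
        if getattr(mod, "handler_version", 1) == 2:
            # version 2 handlers also receive the walk frequency
            mod.handle_part(data, ctype, filename, payload, frequency)
        else:
            mod.handle_part(data, ctype, filename, payload)
    except Exception:
        # exceptions from handlers are logged, never propagated
        traceback.print_exc()
```

Passing the extra argument only for C{handler_version} == 2 keeps old single-signature handlers working unchanged.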
197
198class TestCmdlineUrl(MockerTestCase):
199 def test_invalid_content(self):
200 url = "http://example.com/foo"
201 key = "mykey"
202 payload = "0"
203 cmdline = "ro %s=%s bar=1" % (key, url)
204
205 mock_readurl = self.mocker.replace(readurl, passthrough=False)
206 mock_readurl(url)
207 self.mocker.result(payload)
208
209 self.mocker.replay()
210
211 self.assertEqual((key, url, None),
212 get_cmdline_url(names=[key], starts="xxxxxx", cmdline=cmdline))
213
214 def test_valid_content(self):
215 url = "http://example.com/foo"
216 key = "mykey"
217 payload = "xcloud-config\nmydata: foo\nbar: wark\n"
218 cmdline = "ro %s=%s bar=1" % (key, url)
219
220 mock_readurl = self.mocker.replace(readurl, passthrough=False)
221 mock_readurl(url)
222 self.mocker.result(payload)
223
224 self.mocker.replay()
225
226 self.assertEqual((key, url, payload),
227 get_cmdline_url(names=[key], starts="xcloud-config",
228 cmdline=cmdline))
229
230 def test_no_key_found(self):
231 url = "http://example.com/foo"
232 key = "mykey"
233 cmdline = "ro %s=%s bar=1" % (key, url)
234
235 self.mocker.replace(readurl, passthrough=False)
236 self.mocker.replay()
237
238 self.assertEqual((None, None, None),
239 get_cmdline_url(names=["does-not-appear"],
240 starts="#cloud-config", cmdline=cmdline))
241
242# vi: ts=4 expandtab
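
The three C{TestCmdlineUrl} cases (valid payload, payload without the expected prefix, key not present) imply roughly the following lookup logic. This is a hedged sketch: the injectable `fetch` parameter is an invention for testability, standing in for cloud-init's own `readurl`.

```python
def get_cmdline_url(names, starts, cmdline, fetch=None):
    # Sketch of the behaviour the tests describe; 'fetch' is a
    # hypothetical hook replacing cloud-init's readurl().
    if fetch is None:
        from urllib.request import urlopen

        def fetch(url):
            return urlopen(url).read().decode()

    # Parse "key=value" tokens off the kernel command line.
    tokens = dict(tok.partition("=")[::2] for tok in cmdline.split()
                  if "=" in tok)
    for name in names:
        if name not in tokens:
            continue
        url = tokens[name]
        contents = fetch(url)
        # Only hand back the payload when it carries the expected marker.
        if contents.startswith(starts):
            return (name, url, contents)
        return (name, url, None)
    return (None, None, None)
```

Returning `(key, url, None)` for unmarked content lets the caller distinguish "found but unusable" from "no key on the command line at all".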
=== added directory 'tests/unittests/test_datasource'
=== added file 'tests/unittests/test_datasource/test_maas.py'
--- tests/unittests/test_datasource/test_maas.py 1970-01-01 00:00:00 +0000
+++ tests/unittests/test_datasource/test_maas.py 2012-06-13 13:16:19 +0000
@@ -0,0 +1,153 @@
1from tempfile import mkdtemp
2from shutil import rmtree
3import os
4from StringIO import StringIO
5from copy import copy
6from cloudinit.DataSourceMAAS import (
7 MAASSeedDirNone,
8 MAASSeedDirMalformed,
9 read_maas_seed_dir,
10 read_maas_seed_url,
11)
12from mocker import MockerTestCase
13
14
15class TestMAASDataSource(MockerTestCase):
16
17 def setUp(self):
18 super(TestMAASDataSource, self).setUp()
19 # Make a temp directory for tests to use.
20 self.tmp = mkdtemp(prefix="unittest_")
21
22 def tearDown(self):
23 super(TestMAASDataSource, self).tearDown()
24 # Clean up temp directory
25 rmtree(self.tmp)
26
27 def test_seed_dir_valid(self):
28 """Verify a valid seeddir is read as such"""
29
30 data = {'instance-id': 'i-valid01',
31 'local-hostname': 'valid01-hostname',
32 'user-data': 'valid01-userdata',
33 'public-keys': 'ssh-rsa AAAAB3Nz...aC1yc2E= keyname'}
34
35 my_d = os.path.join(self.tmp, "valid")
36 populate_dir(my_d, data)
37
38 (userdata, metadata) = read_maas_seed_dir(my_d)
39
40 self.assertEqual(userdata, data['user-data'])
41 for key in ('instance-id', 'local-hostname'):
42 self.assertEqual(data[key], metadata[key])
43
44 # verify that 'userdata' is not returned as part of the metadata
45 self.assertFalse(('user-data' in metadata))
46
47 def test_seed_dir_valid_extra(self):
48 """Verify extra files do not affect seed_dir validity """
49
50 data = {'instance-id': 'i-valid-extra',
51 'local-hostname': 'valid-extra-hostname',
52 'user-data': 'valid-extra-userdata', 'foo': 'bar'}
53
54 my_d = os.path.join(self.tmp, "valid_extra")
55 populate_dir(my_d, data)
56
57 (userdata, metadata) = read_maas_seed_dir(my_d)
58
59 self.assertEqual(userdata, data['user-data'])
60 for key in ('instance-id', 'local-hostname'):
61 self.assertEqual(data[key], metadata[key])
62
63 # additional files should not just appear as keys in metadata atm
64 self.assertFalse(('foo' in metadata))
65
66 def test_seed_dir_invalid(self):
67 """Verify that invalid seed_dir raises MAASSeedDirMalformed"""
68
69 valid = {'instance-id': 'i-instanceid',
70 'local-hostname': 'test-hostname', 'user-data': ''}
71
72 my_based = os.path.join(self.tmp, "valid_extra")
73
74 # missing 'local-hostname' file
75 my_d = "%s-01" % my_based
76 invalid_data = copy(valid)
77 del invalid_data['local-hostname']
78 populate_dir(my_d, invalid_data)
79 self.assertRaises(MAASSeedDirMalformed, read_maas_seed_dir, my_d)
80
81 # missing 'instance-id'
82 my_d = "%s-02" % my_based
83 invalid_data = copy(valid)
84 del invalid_data['instance-id']
85 populate_dir(my_d, invalid_data)
86 self.assertRaises(MAASSeedDirMalformed, read_maas_seed_dir, my_d)
87
88 def test_seed_dir_none(self):
89 """Verify that empty seed_dir raises MAASSeedDirNone"""
90
91 my_d = os.path.join(self.tmp, "valid_empty")
92 self.assertRaises(MAASSeedDirNone, read_maas_seed_dir, my_d)
93
94 def test_seed_dir_missing(self):
95 """Verify that missing seed_dir raises MAASSeedDirNone"""
96 self.assertRaises(MAASSeedDirNone, read_maas_seed_dir,
97 os.path.join(self.tmp, "nonexistantdirectory"))
98
99 def test_seed_url_valid(self):
100 """Verify that valid seed_url is read as such"""
101 valid = {'meta-data/instance-id': 'i-instanceid',
102 'meta-data/local-hostname': 'test-hostname',
103 'meta-data/public-keys': 'test-hostname',
104 'user-data': 'foodata'}
105
106 my_seed = "http://example.com/xmeta"
107 my_ver = "1999-99-99"
108 my_headers = {'header1': 'value1', 'header2': 'value2'}
109
110 def my_headers_cb(url):
111 return(my_headers)
112
113 mock_request = self.mocker.replace("urllib2.Request",
114 passthrough=False)
115 mock_urlopen = self.mocker.replace("urllib2.urlopen",
116 passthrough=False)
117
118 for (key, val) in valid.iteritems():
119 mock_request("%s/%s/%s" % (my_seed, my_ver, key),
120 data=None, headers=my_headers)
121 self.mocker.nospec()
122 self.mocker.result("fake-request-%s" % key)
123 mock_urlopen("fake-request-%s" % key, timeout=None)
124 self.mocker.result(StringIO(val))
125
126 self.mocker.replay()
127
128 (userdata, metadata) = read_maas_seed_url(my_seed,
129 header_cb=my_headers_cb, version=my_ver)
130
131 self.assertEqual("foodata", userdata)
132 self.assertEqual(metadata['instance-id'],
133 valid['meta-data/instance-id'])
134 self.assertEqual(metadata['local-hostname'],
135 valid['meta-data/local-hostname'])
136
137 def test_seed_url_invalid(self):
138 """Verify that invalid seed_url raises MAASSeedDirMalformed"""
139 pass
140
141 def test_seed_url_missing(self):
142 """Verify seed_url with no found entries raises MAASSeedDirNone"""
143 pass
144
145
146def populate_dir(seed_dir, files):
147 os.mkdir(seed_dir)
148 for (name, content) in files.iteritems():
149 with open(os.path.join(seed_dir, name), "w") as fp:
150 fp.write(content)
151 fp.close()
152
153# vi: ts=4 expandtab
=== added directory 'tests/unittests/test_handler'
=== renamed file 'tests/unittests/test_handler_ca_certs.py' => 'tests/unittests/test_handler/test_handler_ca_certs.py'
--- tests/unittests/test_handler_ca_certs.py 2012-01-17 21:38:01 +0000
+++ tests/unittests/test_handler/test_handler_ca_certs.py 2012-06-13 13:16:19 +0000
@@ -169,10 +169,14 @@
         mock_delete_dir_contents = self.mocker.replace(delete_dir_contents,
                                                        passthrough=False)
         mock_write = self.mocker.replace(write_file, passthrough=False)
+        mock_subp = self.mocker.replace("cloudinit.util.subp",
+                                        passthrough=False)
 
         mock_delete_dir_contents("/usr/share/ca-certificates/")
         mock_delete_dir_contents("/etc/ssl/certs/")
         mock_write("/etc/ca-certificates.conf", "", mode=0644)
+        mock_subp(('debconf-set-selections', '-'),
+                  "ca-certificates ca-certificates/trust_new_crts select no")
         self.mocker.replay()
 
         remove_default_ca_certs()
=== added file 'tests/unittests/test_userdata.py'
--- tests/unittests/test_userdata.py 1970-01-01 00:00:00 +0000
+++ tests/unittests/test_userdata.py 2012-06-13 13:16:19 +0000
@@ -0,0 +1,107 @@
1"""Tests for handling of userdata within cloud init"""
2
3import logging
4import StringIO
5
6from email.mime.base import MIMEBase
7
8from mocker import MockerTestCase
9
10import cloudinit
11from cloudinit.DataSource import DataSource
12
13
14instance_id = "i-testing"
15
16
17class FakeDataSource(DataSource):
18
19 def __init__(self, userdata):
20 DataSource.__init__(self)
21 self.metadata = {'instance-id': instance_id}
22 self.userdata_raw = userdata
23
24
25class TestConsumeUserData(MockerTestCase):
26
27 _log_handler = None
28 _log = None
29 log_file = None
30
31 def setUp(self):
32 self.mock_write = self.mocker.replace("cloudinit.util.write_file",
33 passthrough=False)
34 self.mock_write(self.get_ipath("cloud_config"), "", 0600)
35 self.capture_log()
36
37 def tearDown(self):
38 self._log.removeHandler(self._log_handler)
39
40 @staticmethod
41 def get_ipath(name):
42 return "%s/instances/%s%s" % (cloudinit.varlibdir, instance_id,
43 cloudinit.pathmap[name])
44
45 def capture_log(self):
46 self.log_file = StringIO.StringIO()
47 self._log_handler = logging.StreamHandler(self.log_file)
48 self._log_handler.setLevel(logging.DEBUG)
49 self._log = logging.getLogger(cloudinit.logger_name)
50 self._log.addHandler(self._log_handler)
51
52 def test_unhandled_type_warning(self):
53 """Raw text without magic is ignored but shows warning"""
54 self.mocker.replay()
55 ci = cloudinit.CloudInit()
56 ci.datasource = FakeDataSource("arbitrary text\n")
57 ci.consume_userdata()
58 self.assertEqual(
59 "Unhandled non-multipart userdata starting 'arbitrary text...'\n",
60 self.log_file.getvalue())
61
62 def test_mime_text_plain(self):
63 """Mime message of type text/plain is ignored without warning"""
64 self.mocker.replay()
65 ci = cloudinit.CloudInit()
66 message = MIMEBase("text", "plain")
67 message.set_payload("Just text")
68 ci.datasource = FakeDataSource(message.as_string())
69 ci.consume_userdata()
70 self.assertEqual("", self.log_file.getvalue())
71
72 def test_shellscript(self):
73 """Raw text starting #!/bin/sh is treated as script"""
74 script = "#!/bin/sh\necho hello\n"
75 outpath = cloudinit.get_ipath_cur("scripts") + "/part-001"
76 self.mock_write(outpath, script, 0700)
77 self.mocker.replay()
78 ci = cloudinit.CloudInit()
79 ci.datasource = FakeDataSource(script)
80 ci.consume_userdata()
81 self.assertEqual("", self.log_file.getvalue())
82
83 def test_mime_text_x_shellscript(self):
84 """Mime message of type text/x-shellscript is treated as script"""
85 script = "#!/bin/sh\necho hello\n"
86 outpath = cloudinit.get_ipath_cur("scripts") + "/part-001"
87 self.mock_write(outpath, script, 0700)
88 self.mocker.replay()
89 ci = cloudinit.CloudInit()
90 message = MIMEBase("text", "x-shellscript")
91 message.set_payload(script)
92 ci.datasource = FakeDataSource(message.as_string())
93 ci.consume_userdata()
94 self.assertEqual("", self.log_file.getvalue())
95
96 def test_mime_text_plain_shell(self):
97 """Mime type text/plain starting #!/bin/sh is treated as script"""
98 script = "#!/bin/sh\necho hello\n"
99 outpath = cloudinit.get_ipath_cur("scripts") + "/part-001"
100 self.mock_write(outpath, script, 0700)
101 self.mocker.replay()
102 ci = cloudinit.CloudInit()
103 message = MIMEBase("text", "plain")
104 message.set_payload(script)
105 ci.datasource = FakeDataSource(message.as_string())
106 ci.consume_userdata()
107 self.assertEqual("", self.log_file.getvalue())
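
The common thread in these cases is content sniffing: a part's leading bytes matter more than its declared MIME type. A toy classifier showing the rule the tests rely on (the name `classify_part` is hypothetical; trunk applies this logic inside its userdata walker):

```python
def classify_part(ctype, payload):
    # Leading bytes override the declared MIME type, so a text/plain
    # part starting "#!" is still executed as a shell script.
    if payload.startswith("#!"):
        return "text/x-shellscript"
    if payload.startswith("#cloud-config"):
        return "text/cloud-config"
    return ctype
```

This is why `test_mime_text_plain_shell` expects the same outcome as `test_shellscript` and `test_mime_text_x_shellscript`.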
=== modified file 'tests/unittests/test_util.py'
--- tests/unittests/test_util.py 2012-01-17 17:35:31 +0000
+++ tests/unittests/test_util.py 2012-06-13 13:16:19 +0000
@@ -6,7 +6,8 @@
 import stat
 
 from cloudinit.util import (mergedict, get_cfg_option_list_or_str, write_file,
-    delete_dir_contents)
+    delete_dir_contents, get_cmdline,
+    keyval_str_to_dict)
 
 
 class TestMergeDict(TestCase):
@@ -28,7 +29,7 @@
     def test_merge_does_not_override(self):
         """Test that candidate doesn't override source."""
         source = {"key1": "value1", "key2": "value2"}
-        candidate = {"key2": "value2", "key2": "NEW VALUE"}
+        candidate = {"key1": "value2", "key2": "NEW VALUE"}
         result = mergedict(source, candidate)
         self.assertEqual(source, result)
 
@@ -248,3 +249,18 @@
         delete_dir_contents(self.tmp)
 
         self.assertDirEmpty(self.tmp)
+
+
+class TestKeyValStrings(TestCase):
+    def test_keyval_str_to_dict(self):
+        expected = {'1': 'one', '2': 'one+one', 'ro': True}
+        cmdline = "1=one ro 2=one+one"
+        self.assertEqual(expected, keyval_str_to_dict(cmdline))
+
+
+class TestGetCmdline(TestCase):
+    def test_cmdline_reads_debug_env(self):
+        os.environ['DEBUG_PROC_CMDLINE'] = 'abcd 123'
+        self.assertEqual(os.environ['DEBUG_PROC_CMDLINE'], get_cmdline())
+
+# vi: ts=4 expandtab
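
TestKeyValStrings documents the shape `keyval_str_to_dict` must produce for a kernel command line: `key=value` tokens become string pairs and bare flags become `True`. A faithful sketch (illustrative, not the trunk implementation):

```python
def keyval_str_to_dict(kvstring):
    # "1=one ro 2=one+one" -> {'1': 'one', 'ro': True, '2': 'one+one'}
    out = {}
    for tok in kvstring.split():
        if "=" in tok:
            key, _, val = tok.partition("=")
            out[key] = val
        else:
            # bare token such as "ro": present, but carries no value
            out[tok] = True
    return out
```
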
=== added file 'tools/Z99-cloud-locale-test.sh'
--- tools/Z99-cloud-locale-test.sh 1970-01-01 00:00:00 +0000
+++ tools/Z99-cloud-locale-test.sh 2012-06-13 13:16:19 +0000
@@ -0,0 +1,92 @@
1#!/bin/sh
2# vi: ts=4 noexpandtab
3#
4# Author: Ben Howard <ben.howard@canonical.com>
5# Author: Scott Moser <scott.moser@ubuntu.com>
6# (c) 2012, Canonical Group, Ltd.
7#
8# Purpose: Detect invalid locale settings and inform the user
9# of how to fix them.
10#
11
12locale_warn() {
13 local cr="
14"
15 local bad_names="" bad_lcs="" key="" value="" var=""
16 local w1 w2 w3 w4 remain
17 # locale is expected to output either:
18 # VARIABLE=
19 # VARIABLE="value"
20 # locale: Cannot set LC_SOMETHING to default locale
21 while read -r w1 w2 w3 w4 remain; do
22 case "$w1" in
23 locale:) bad_names="${bad_names} ${w4}";;
24 *)
25 key=${w1%%=*}
26 val=${w1#*=}
27 val=${val#\"}
28 val=${val%\"}
29 vars="${vars} $key=$val";;
30 esac
31 done
32 for bad in $bad_names; do
33 for var in ${vars}; do
34 [ "${bad}" = "${var%=*}" ] || continue
35 value=${var#*=}
36 [ "${bad_lcs#* ${value}}" = "${bad_lcs}" ] &&
37 bad_lcs="${bad_lcs} ${value}"
38 break
39 done
40 done
41 bad_lcs=${bad_lcs# }
42 [ -n "$bad_lcs" ] || return 0
43
44 printf "_____________________________________________________________________\n"
45 printf "WARNING! Your environment specifies an invalid locale.\n"
46 printf " This can affect your user experience significantly, including the\n"
47 printf " ability to manage packages. You may install the locales by running:\n\n"
48
49 local bad invalid="" to_gen="" sfile="/usr/share/i18n/SUPPORTED"
50 local pkgs=""
51 if [ -e "$sfile" ]; then
52 for bad in ${bad_lcs}; do
53 grep -q -i "${bad}" "$sfile" &&
54 to_gen="${to_gen} ${bad}" ||
55 invalid="${invalid} ${bad}"
56 done
57 else
58 printf " sudo apt-get install locales\n"
59 to_gen=$bad_lcs
60 fi
61 to_gen=${to_gen# }
62
63 local pkgs=""
64 for bad in ${to_gen}; do
65 pkgs="${pkgs} language-pack-${bad%%_*}"
66 done
67 pkgs=${pkgs# }
68
69 if [ -n "${pkgs}" ]; then
70 printf " sudo apt-get install ${pkgs# }\n"
71 printf " or\n"
72 printf " sudo locale-gen ${to_gen# }\n"
73 printf "\n"
74 fi
75 for bad in ${invalid}; do
76 printf "WARNING: '${bad}' is an invalid locale\n"
77 done
78
79 printf "To see all available language packs, run:\n"
80 printf " apt-cache search \"^language-pack-[a-z][a-z]$\"\n"
81 printf "To disable this message for all users, run:\n"
82 printf " sudo touch /var/lib/cloud/instance/locale-check.skip\n"
83 printf "_____________________________________________________________________\n\n"
84
85 # only show the message once
86 : > ~/.cloud-locale-test.skip 2>/dev/null || :
87}
88
89[ -f ~/.cloud-locale-test.skip -o -f /var/lib/cloud/instance/locale-check.skip ] ||
90 locale 2>&1 | locale_warn
91
92unset locale_warn
=== modified file 'tools/run-pylint'
--- tools/run-pylint 2012-01-17 20:59:21 +0000
+++ tools/run-pylint 2012-06-13 13:16:19 +0000
@@ -1,6 +1,8 @@
 #!/bin/bash
 
-def_files='cloud*.py cloudinit/*.py cloudinit/CloudConfig/*.py'
+ci_files='cloud*.py cloudinit/*.py cloudinit/CloudConfig/*.py'
+test_files=$(find tests -name "*.py")
+def_files="$ci_files $test_files"
 
 if [ $# -eq 0 ]; then
     files=( )