Merge lp:~vlastimil-holer/cloud-init/opennebula into lp:~cloud-init-dev/cloud-init/trunk

Proposed by Vlastimil Holer
Status: Merged
Merged at revision: 867
Proposed branch: lp:~vlastimil-holer/cloud-init/opennebula
Merge into: lp:~cloud-init-dev/cloud-init/trunk
Diff against target: 959 lines (+881/-2)
7 files modified
cloudinit/settings.py (+1/-0)
cloudinit/sources/DataSourceOpenNebula.py (+442/-0)
cloudinit/sources/__init__.py (+16/-2)
cloudinit/util.py (+7/-0)
doc/rtd/topics/datasources.rst (+6/-0)
doc/sources/opennebula/README.rst (+142/-0)
tests/unittests/test_datasource/test_opennebula.py (+267/-0)
To merge this branch: bzr merge lp:~vlastimil-holer/cloud-init/opennebula
Reviewer Review Type Date Requested Status
Joshua Harlow Pending
Scott Moser Pending
Review via email: mp+184278@code.launchpad.net

This proposal supersedes a proposal from 2013-02-21.

Description of the change

I'm resubmitting the review proposal for OpenNebula support in cloud-init.

* pep8/pylint/tests were fixed
* shell output parser has been replaced by simple direct RE parser
* fixed DSMODE fetching from contextualization disk
* minor fixes, cleanups
* use unittests.helpers.populate_dir (wrapped by populate_context_dir which generates content)

Please let us know what else would be good to fix. Thanks to Javier Fontan <email address hidden>.

Revision history for this message
Joshua Harlow (harlowja) wrote : Posted in a previous version of this proposal

Just a few comments, otherwise seems cool.

Is 'NonConfigDriveDir' (exception afaik) supposed to be brought in somewhere?

Should this ds class be derived from the config drive datasource?
 - This is due to the new overload that was just added there
   for mapping device names to actual devices. I noticed this
   code does not have that function either (without it fstab
   will not be adjusted). So perhaps it's better to inherit
   if the functionality is the same? Perhaps override get_data()
   but still use the rest of the parent?

For capturing 'except subprocess.CalledProcessError as exc:' you might want to catch the one in 'util.py' instead (it's not the same exception - although maybe it should be).

Is it possible to add more comments as to what the context.sh parsing is doing?
Ie: ''comm -23 <(set | sort -u) <(echo "$VARS") | egrep -v "^(VARS|PIPESTATUS|_)="'' (???)

context_sh[key.lower()] = r.group(1).\
    replace('\\\\', '\\').\
    replace('\\t', '\t').\
    replace('\\n', '\n').\
    replace("\\'", "'")

Should this just be r.group(1).decode('string_escape')?
>>> print c
\n\tblahblah\n\'
>>> print c.decode('string_escape')

 blahblah
'
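For reference, 'string_escape' is a Python 2 bytes codec that undoes all of those backslash escapes in one call. A minimal sketch of the same idea on Python 3 (an aside of mine, not part of the proposal): the closest stand-in there is the 'unicode_escape' codec, with the caveat that it also interprets unicode escapes, so it is not a drop-in for arbitrary byte data.

```python
import codecs

# Python 2: r.group(1).decode('string_escape') turns "\\n" into "\n", etc.
# Rough Python 3 analogue using the 'unicode_escape' codec:
raw = r"\n\tblahblah\n\'"                     # literal backslash sequences
decoded = codecs.decode(raw, 'unicode_escape')
assert decoded == "\n\tblahblah\n'"           # real newline, tab, and quote
```

This replaces the whole chain of replace() calls with a single, well-tested codec lookup.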

review: Needs Fixing
Revision history for this message
Joshua Harlow (harlowja) wrote : Posted in a previous version of this proposal

Also some tests would be super-great :-)

Revision history for this message
Vlastimil Holer (vlastimil-holer) wrote : Posted in a previous version of this proposal

> Just a few comments, otherwise seems cool.
>
> Is 'NonConfigDriveDir' (exception afaik) supposed to be brought in somewhere?

Fixed, leftover from ConfigDrive class.

> Should this ds class be derived from the config drive datasource?
> - This is due to the new overload that was just added there
> for mapping device names to actual devices. I noticed this
> code does not have that function either (without it fstab
> will not be adjusted). So perhaps it's better to inherit
> if the functionality is the same? Perhaps override get_data()
> but still use the rest of the parent?

If I understand correctly, that's not a problem here. OpenNebula's metadata
aren't strictly specified the way they are in OpenStack or EC2. They are all
user specified, and the metadata I currently depend on are just those
mentioned in the documentation or example scripts; I haven't seen anything
like block device mappings so far. Since there is no default metadata for
device mapping, I think it's better not to invent this functionality now.

You can check here:
http://opennebula.org/documentation:rel3.8:context_overview
All metadata are optional and user specified; it's common to have an SSH
public key or a static network configuration there.

> For capturing 'except subprocess.CalledProcessError as exc:' you might want to
> catch the one in 'util.py' instead (it's not the same exception - although
> maybe it should be).

Fixed.

> Is it possible to add more comments as to what the context.sh parsing is doing?
> Ie: ''comm -23 <(set | sort -u) <(echo "$VARS") | egrep -v
> "^(VARS|PIPESTATUS|_)="'' (???)

Explanation added to the code. I know this construction isn't very nice,
but IMHO it's the easiest way to read the context variables: bash dumps
them in an easily parsable form, with each variable on a single line.

> context_sh[key.lower()] = r.group(1).\
>     replace('\\\\', '\\').\
>     replace('\\t', '\t').\
>     replace('\\n', '\n').\
>     replace("\\'", "'")
>
> Should this just be r.group(1).decode('string_escape')?
> >>> print c
> \n\tblahblah\n\'
> >>> print c.decode('string_escape')
>
> blahblah
> '

Yes, thanks, fixed.

> Also some tests would be super-great :-)

On my TODO list, will do that soon.

Revision history for this message
Joshua Harlow (harlowja) wrote : Posted in a previous version of this proposal

Ok, one last comment.

Is the modification of "cloudinit/distros/__init__.py" still needed or is that just leftover code that got picked up in the review?

Revision history for this message
Vlastimil Holer (vlastimil-holer) wrote : Posted in a previous version of this proposal

> Ok, one last comment.
>
> Is the modification of "cloudinit/distros/__init__.py" still needed or is that
> just leftover code that got picked up in the review?

You are right, this was a leftover from Javi's first version of the static network
configuration. We now generate a full Debian-style network configuration, including the
"dns-search|dns-nameservers" interface options. On RHEL systems your code already
converts these automatically and applies them to /etc/resolv.conf.

Sorry for that. The change has been uncommitted.

Revision history for this message
Scott Moser (smoser) wrote : Posted in a previous version of this proposal

Some thoughts:
 * run ./tools/run-pep8 tests/unittests/test_datasource/test_opennebula.py
   and ./tools/run-pylint tests/unittests/test_datasource/test_opennebula.py
   You'll see lots of (sorry) nit-picky things to fix.

 * how does 'user_dsmode' get set? I guess through 'context.sh'?

 * I guess I might like to see the reading of context.sh in its own method for
   easier testing.

 * populate_dir (copied to your test) is now in unittests/helpers.py

The one thing I really do not like here is that context.sh is explicitly
executed (by bash), as opposed to being parsed. The issue is that I can
essentially do *anything* here. I could attach a disk labeled CDROM, with a
context.sh, and execute arbitrary code as root.

This isn't really a security issue, since executing code as root is generally
cloud-init's goal, but it's very inconsistent with the other datasources.

Also, fwiw, I've been wanting to explicitly add a "execute something as root
really early" hook. The goal would be to allow you to tweak an image as early
as possible, even fiddling with or fixing cloud-init.

So, please fix the pep8 and pylint, and, are people actually expecting code to be executed? Ie, do they think they can build/provide a context.sh that executes code?

review: Needs Fixing
Revision history for this message
Javi Fontan (jfontan) wrote : Posted in a previous version of this proposal

"context.sh" is generated by OpenNebula and is a shell script with shell variables. It is meant to be executed by some init scripts in the VM to get that information. It should not contain any "executable" code, at least OpenNebula does not add code to it.

The problem with parsing that file is that some variables can be multiline, and parsing is not that straightforward. For example:

# Context variables generated by OpenNebula
DISK_ID='1'
ETH0_DNS='10.0.0.1'
ETH0_GATEWAY='10.0.0.1'
ETH0_IP='10.0.0.72'
ETH0_MASK='255.255.255.0'
FILES_DS='/var/lib/one/datastores/2/9c92ad910c0b30a411ccdfc141bc7a25:'\''jfopet.sh'\'' '
FILES_URL='http://10.0.0.11/files/installer.tar.gz'
INIT_SCRIPTS='jfopet.sh'
MANIFEST='ubuntu-kvm1'
SSH_PUBLIC_KEY='ssh-dss AAAAB3NzaC1kc3MAAACBANprdUVFRaADZXZaAm2elpRaUGCMETLLuYYJCDUZPb0Dh/V7KeM2/a3rFIDA0s5sVK2XQNqLHyy0U8xA/0R8dplmg7BDckkAzhrDVpEGnQE1fk/xPd1t7u+yeVqpbrfyAmfmyE2P980mhBoWalbeV/f7SmHUP8RiQ2hlAWUxr7I5AAAAFQDLHhFndbcA7svGd/yfY6nU4ubodwAAAIAoXLKlckmZur1pc9TN4XoHa+Fl/6Qpu0XO7Ai/tu8dqHlN+FpVk8BQNnokwE+EZBARLIL0JCjHpT9b5aEvlpRz3TuDa6az8wvJRlScNufmVrf0ls1WCFDHujSLzd3aOkKct35Aamf1amP/NPE2aGne/tPS7HQ3TCf5E2MAQzYVJKSARwAAAIAaVqvU7LfGMEw0hEXr7fuJCMHh4FmPvejiOoSUz2GOU5bceasRitdulCQJbNiVtY6U0S+qQ0X8rvAnG934p4zS9TtgKIhplr146fkbYnNjCaAM0rNVvTh2SzEEKJiG2G1d3wyNuO8wpPhIiJ1OVZrGwkVyWwiNzC2sWXAXldQ9Hg== user1@host
ssh-dss AAAAB3NzaC1kc3MAAACBANBWTQmm4GtkAiPpA20DK1TQd1n7UoC5BymTHlGzFmj+BsQdn6ZZbihV/roNVo1roJhb2hQq/WuQR50D72vMIjrC6t1DkofFBL0iz+mb7JdmNxbE7cnOHMxEr/cd4ds4EwzpBKiQzt8NNcz/zbSadHQtBd0+u1G5nvm4MXHNVYQnAAAAFQDbEN554Or4vd7D+2wzjs3c1nNOowAAAIAdRlTrRG1YmceKHh3urcltniIoo8FrNudwCShbHTCOQbM+KkMUTtw5qwWFuJP6HdaLjmUehqxqDWeWGp8c2y5yMee0JR5cx+iMwhg1Q2o4S6c+zgWdUYIVkuPgYOOR2GMCPdl9mwcxtVvpHD59UFlPh16oLzakUCSxkro5V/LUXxuywAAAIBtYvxfwI5Cl0xq3/KQI7giNef05EXIgK+KwYu3xC6fs+7NxQYFzMsSEQhEJI62J091Kh0RFpFPdGPECIQolt8j4ltymKM9+pfgiE7oBXSxkW44XadBnCWCGU5B4gTnz84VKECWuu2J9Z4cn44hKYt1uj0SxxzExnB1X51kUN+Z5Q== user2@host'

I don't think creating a parser for these files is worth the effort.
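For what it's worth, the multiline single-quoted values above are exactly what a line-by-line regex cannot handle, but Python's standard shlex module tokenizes shell-quoted text correctly without executing anything. A hypothetical sketch (not part of this proposal; parse_context_sh is my name, and it deliberately ignores anything that isn't a plain KEY='VALUE' assignment):

```python
import shlex

def parse_context_sh(content):
    """Parse simple KEY='VALUE' shell assignments without executing them.
    Handles multiline single-quoted values and the '\\'' quoting trick,
    but not variable references or command substitution (a sketch only)."""
    ctx = {}
    # shlex strips the quoting and skips '#' comments; each remaining
    # token is one KEY=VALUE assignment with inner newlines preserved.
    for tok in shlex.split(content, comments=True):
        if '=' in tok:
            key, _, val = tok.partition('=')
            ctx[key] = val
    return ctx

example = ("# Context variables generated by OpenNebula\n"
           "DISK_ID='1'\n"
           "SSH_PUBLIC_KEY='ssh-dss AAAA... user1@host\n"
           "ssh-dss BBBB... user2@host'\n")
ctx = parse_context_sh(example)
# ctx['SSH_PUBLIC_KEY'] spans two lines; the comment line is dropped.
```

This would avoid running bash at all, at the cost of rejecting any context.sh that is more than plain assignments.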

Revision history for this message
Scott Moser (smoser) wrote :

Vlastimil, thanks.
  One thing that I thought about was the 'parsing'. If the content in 'context.sh' is actually shell, then it makes most sense to have shell execute it rather than parsing it with a regex. Parsing with a regex just seems brittle, and is only going to cover very basic things.
  I put together 'parse_shell_config' at https://gist.github.com/smoser/6466708
  That basically reads bash variables into a dict after executing a given file.

  This would execute the file, which is what I was originally opposed to. I don't miss the irony there :).
  The gist has the ability to use 'sudo' to change to a different user. We could use 'nobody' by default and allow it to be configured. By parsing with shell, and running as nobody, we should be fairly well protected against any complexities that might be present in the 'context.sh'.
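The shape of that approach can be sketched roughly like this (a simplified stand-in of mine, not the gist itself: the function name and the `set -a`/`env -0` trick here are my assumptions; the real version in this branch instead prints the variable set before and after evaluating the content, NUL-separated, and diffs the two dumps):

```python
import os
import subprocess

def parse_shell_config(content):
    # 'set -a' auto-exports every variable the content assigns, and
    # 'env -0' (GNU coreutils) dumps the environment NUL-separated so
    # that multiline values survive intact.
    bcmd = "set -a\n" + content + "\nenv -0\n"
    out = subprocess.run(["bash", "-e"], input=bcmd.encode(),
                         stdout=subprocess.PIPE, check=True).stdout
    ignore = ("SHLVL", "PWD", "OLDPWD", "_")  # bash housekeeping vars
    ret = {}
    for item in out.split(b"\0"):
        if b"=" not in item:
            continue
        key, _, val = item.decode().partition("=")
        # keep only variables the content introduced or changed
        if key not in ignore and os.environ.get(key) != val:
            ret[key] = val
    return ret
```

Dropping privileges would then just mean prefixing the bash invocation with `sudo -u nobody`, as discussed above.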

Revision history for this message
Vlastimil Holer (vlastimil-holer) wrote :

Scott, it looks great, I definitely love your code style!

From my point of view, I rather agree with your former opinion not to use shell. I originally chose that option because it very easily gave me normalized output, with each variable on a single line. Also, in the past it was possible to reference variables from other variables, but this has not been true since OpenNebula 4.0 (May 2013). Context variables are now single quoted, so it's impossible to reference other variables or execute external commands. context.sh is now just a plain key='value' file, which is (for historic reasons, from when people wrote their own contextualization shell scripts) usually sourced from shell. Now I see the only benefit being more reliable parsing than our horrible regexp.

So, in the end it's up to you which you prefer to keep in your project. Both will work fine.

Also, we could write a simple parser using e.g. pyparsing, but that means a new dependency for cloud-init.

Revision history for this message
Scott Moser (smoser) wrote :

OK.
 I took a further look.
 I'm fine to merge this.
 Could you make the following changes:
  * use the bash parsing at https://gist.github.com/smoser/6466708
  * allow configuration to set the user to drop privileges to (None indicates no change).
    That configuration would be in the datasource config (datasource:OpenNebula:parseuser) or some such name.
  * default that user to 'nobody'.
  * there is one pylint error (at least on my machine) complaining about the toks.split(); use toks = str(toks).split('.'). It's not a real issue, but make pylint happy by wrapping 'toks' in str().

Other than that I think it looks good.
Sooner is better to get those changes in. I want to make an upload to Ubuntu sometime this week, and I'm shooting for 0.7.3 probably at the end of next week.

Thanks!

687. By Vlastimil Holer

Replace RE context.sh parser with Scott's rewrite of bash dumper. Upper case context variable names.

688. By Vlastimil Holer

Fix pylint complain on toks.split('.')

689. By Vlastimil Holer

Configurable OpenNebula::parseuser. Seed search dir+dev merge.
Eat shell parser error output. A few tests for get_data.

690. By Vlastimil Holer

Update OpenNebula documentation (parseuser, more fs. labels, K-V single quotes)

691. By Vlastimil Holer

Detect invalid system user for test

Revision history for this message
Vlastimil Holer (vlastimil-holer) wrote :

I have used your bash parsing; the only change I made there was
stderr redirection for invalid (or empty) context scripts:

> sp = subprocess.Popen(cmd, stdin=subprocess.PIPE,
>                       stdout=subprocess.PIPE,
>                       stderr=subprocess.PIPE)

I assume you'll move the function somewhere into your
libraries after the merge.

* unprivileged user is configurable in
  OpenNebula:parseuser, defaults to "nobody"

* pylint complaint fixed

I have also added some more unit tests around get_data,
which were missing, plus other cleanups. So please look
once again to see whether it's OK for you, at least for now,
or tell me what I should fix before the end of the week.

Also, I didn't have time to check the new code on a real
system yet; I'll let you know whether it's OK ASAP.

692. By Vlastimil Holer

Fix detection of ETHx_IP context variable, add test.

Revision history for this message
Vlastimil Holer (vlastimil-holer) wrote :

Now it looks good after the fix in commit 692; tested on Debian 6.

Revision history for this message
Scott Moser (smoser) wrote :

Vlastimil, can you please merge from
 lp:~smoser/cloud-init/opennebula
and test that?

Just a few cleanups there.

Thanks.

693. By Vlastimil Holer

Merged lp:~smoser/cloud-init/opennebula

694. By Vlastimil Holer

All fake util.find_devs_with set before try-finally section

Revision history for this message
Vlastimil Holer (vlastimil-holer) wrote :

I don't know why you moved one save of the original util.find_devs_with before the try-finally section, but I have moved the rest to match.

I have quickly tested it on real systems and it works. Thanks!

Revision history for this message
Scott Moser (smoser) wrote :

Because I hit that one. Probably in error, but the finally block didn't account for it not being defined.

On Sep 10, 2013, at 7:28 PM, Vlastimil Holer <email address hidden> wrote:

> I don't know why you moved one save of the original util.find_devs_with before the try-finally section, but I have moved the rest to match.
>
> I have quickly tested it on real systems and it works. Thanks!
> --
> https://code.launchpad.net/~vlastimil-holer/cloud-init/opennebula/+merge/184278
> You are requested to review the proposed merge of lp:~vlastimil-holer/cloud-init/opennebula into lp:cloud-init.
>

Preview Diff

=== modified file 'cloudinit/settings.py'
--- cloudinit/settings.py 2013-07-18 21:37:18 +0000
+++ cloudinit/settings.py 2013-09-10 23:15:34 +0000
@@ -31,6 +31,7 @@
     'datasource_list': [
         'NoCloud',
         'ConfigDrive',
+        'OpenNebula',
         'Azure',
         'AltCloud',
         'OVF',

=== added file 'cloudinit/sources/DataSourceOpenNebula.py'
--- cloudinit/sources/DataSourceOpenNebula.py 1970-01-01 00:00:00 +0000
+++ cloudinit/sources/DataSourceOpenNebula.py 2013-09-10 23:15:34 +0000
@@ -0,0 +1,442 @@
+# vi: ts=4 expandtab
+#
+# Copyright (C) 2012 Canonical Ltd.
+# Copyright (C) 2012 Yahoo! Inc.
+# Copyright (C) 2012-2013 CERIT Scientific Cloud
+# Copyright (C) 2012-2013 OpenNebula.org
+#
+# Author: Scott Moser <scott.moser@canonical.com>
+# Author: Joshua Harlow <harlowja@yahoo-inc.com>
+# Author: Vlastimil Holer <xholer@mail.muni.cz>
+# Author: Javier Fontan <jfontan@opennebula.org>
+#
+# This program is free software: you can redistribute it and/or modify
+# it under the terms of the GNU General Public License version 3, as
+# published by the Free Software Foundation.
+#
+# This program is distributed in the hope that it will be useful,
+# but WITHOUT ANY WARRANTY; without even the implied warranty of
+# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the
+# GNU General Public License for more details.
+#
+# You should have received a copy of the GNU General Public License
+# along with this program.  If not, see <http://www.gnu.org/licenses/>.
+
+import os
+import pwd
+import re
+import string  # pylint: disable=W0402
+
+from cloudinit import log as logging
+from cloudinit import sources
+from cloudinit import util
+
+LOG = logging.getLogger(__name__)
+
+DEFAULT_IID = "iid-dsopennebula"
+DEFAULT_MODE = 'net'
+DEFAULT_PARSEUSER = 'nobody'
+CONTEXT_DISK_FILES = ["context.sh"]
+VALID_DSMODES = ("local", "net", "disabled")
+
+
+class DataSourceOpenNebula(sources.DataSource):
+    def __init__(self, sys_cfg, distro, paths):
+        sources.DataSource.__init__(self, sys_cfg, distro, paths)
+        self.dsmode = 'local'
+        self.seed = None
+        self.seed_dir = os.path.join(paths.seed_dir, 'opennebula')
+
+    def __str__(self):
+        root = sources.DataSource.__str__(self)
+        return "%s [seed=%s][dsmode=%s]" % (root, self.seed, self.dsmode)
+
+    def get_data(self):
+        defaults = {"instance-id": DEFAULT_IID}
+        results = None
+        seed = None
+
+        # decide parseuser for context.sh shell reader
+        parseuser = DEFAULT_PARSEUSER
+        if 'parseuser' in self.ds_cfg:
+            parseuser = self.ds_cfg.get('parseuser')
+
+        candidates = [self.seed_dir]
+        candidates.extend(find_candidate_devs())
+        for cdev in candidates:
+            try:
+                if os.path.isdir(self.seed_dir):
+                    results = read_context_disk_dir(cdev, asuser=parseuser)
+                elif cdev.startswith("/dev"):
+                    results = util.mount_cb(cdev, read_context_disk_dir,
+                                            data=parseuser)
+            except NonContextDiskDir:
+                continue
+            except BrokenContextDiskDir as exc:
+                raise exc
+            except util.MountFailedError:
+                LOG.warn("%s was not mountable" % cdev)
+
+            if results:
+                seed = cdev
+                LOG.debug("found datasource in %s", cdev)
+                break
+
+        if not seed:
+            return False
+
+        # merge fetched metadata with datasource defaults
+        md = results['metadata']
+        md = util.mergemanydict([md, defaults])
+
+        # check for valid user specified dsmode
+        user_dsmode = results['metadata'].get('DSMODE', None)
+        if user_dsmode not in VALID_DSMODES + (None,):
+            LOG.warn("user specified invalid mode: %s", user_dsmode)
+            user_dsmode = None
+
+        # decide dsmode
+        if user_dsmode:
+            dsmode = user_dsmode
+        elif self.ds_cfg.get('dsmode'):
+            dsmode = self.ds_cfg.get('dsmode')
+        else:
+            dsmode = DEFAULT_MODE
+
+        if dsmode == "disabled":
+            # most likely user specified
+            return False
+
+        # apply static network configuration only in 'local' dsmode
+        if ('network-interfaces' in results and self.dsmode == "local"):
+            LOG.debug("Updating network interfaces from %s", self)
+            self.distro.apply_network(results['network-interfaces'])
+
+        if dsmode != self.dsmode:
+            LOG.debug("%s: not claiming datasource, dsmode=%s", self, dsmode)
+            return False
+
+        self.seed = seed
+        self.metadata = md
+        self.userdata_raw = results.get('userdata')
+        return True
+
+    def get_hostname(self, fqdn=False, resolve_ip=None):
+        if resolve_ip is None:
+            if self.dsmode == 'net':
+                resolve_ip = True
+            else:
+                resolve_ip = False
+        return sources.DataSource.get_hostname(self, fqdn, resolve_ip)
+
+
+class DataSourceOpenNebulaNet(DataSourceOpenNebula):
+    def __init__(self, sys_cfg, distro, paths):
+        DataSourceOpenNebula.__init__(self, sys_cfg, distro, paths)
+        self.dsmode = 'net'
+
+
+class NonContextDiskDir(Exception):
+    pass
+
+
+class BrokenContextDiskDir(Exception):
+    pass
+
+
+class OpenNebulaNetwork(object):
+    REG_DEV_MAC = re.compile(
+        r'^\d+: (eth\d+):.*?link\/ether (..:..:..:..:..:..) ?',
+        re.MULTILINE | re.DOTALL)
+
+    def __init__(self, ip, context):
+        self.ip = ip
+        self.context = context
+        self.ifaces = self.get_ifaces()
+
+    def get_ifaces(self):
+        return self.REG_DEV_MAC.findall(self.ip)
+
+    def mac2ip(self, mac):
+        components = mac.split(':')[2:]
+        return [str(int(c, 16)) for c in components]
+
+    def get_ip(self, dev, components):
+        var_name = dev.upper() + '_IP'
+        if var_name in self.context:
+            return self.context[var_name]
+        else:
+            return '.'.join(components)
+
+    def get_mask(self, dev):
+        var_name = dev.upper() + '_MASK'
+        if var_name in self.context:
+            return self.context[var_name]
+        else:
+            return '255.255.255.0'
+
+    def get_network(self, dev, components):
+        var_name = dev.upper() + '_NETWORK'
+        if var_name in self.context:
+            return self.context[var_name]
+        else:
+            return '.'.join(components[:-1]) + '.0'
+
+    def get_gateway(self, dev):
+        var_name = dev.upper() + '_GATEWAY'
+        if var_name in self.context:
+            return self.context[var_name]
+        else:
+            return None
+
+    def get_dns(self, dev):
+        var_name = dev.upper() + '_DNS'
+        if var_name in self.context:
+            return self.context[var_name]
+        else:
+            return None
+
+    def get_domain(self, dev):
+        var_name = dev.upper() + '_DOMAIN'
+        if var_name in self.context:
+            return self.context[var_name]
+        else:
+            return None
+
+    def gen_conf(self):
+        global_dns = []
+        if 'DNS' in self.context:
+            global_dns.append(self.context['DNS'])
+
+        conf = []
+        conf.append('auto lo')
+        conf.append('iface lo inet loopback')
+        conf.append('')
+
+        for i in self.ifaces:
+            dev = i[0]
+            mac = i[1]
+            ip_components = self.mac2ip(mac)
+
+            conf.append('auto ' + dev)
+            conf.append('iface ' + dev + ' inet static')
+            conf.append('  address ' + self.get_ip(dev, ip_components))
+            conf.append('  network ' + self.get_network(dev, ip_components))
+            conf.append('  netmask ' + self.get_mask(dev))
+
+            gateway = self.get_gateway(dev)
+            if gateway:
+                conf.append('  gateway ' + gateway)
+
+            domain = self.get_domain(dev)
+            if domain:
+                conf.append('  dns-search ' + domain)
+
+            # add global DNS servers to all interfaces
+            dns = self.get_dns(dev)
+            if global_dns or dns:
+                all_dns = global_dns
+                if dns:
+                    all_dns.append(dns)
+                conf.append('  dns-nameservers ' + ' '.join(all_dns))
+
+            conf.append('')
+
+        return "\n".join(conf)
+
+
+def find_candidate_devs():
+    """
+    Return a list of devices that may contain the context disk.
+    """
+    combined = []
+    for f in ('LABEL=CONTEXT', 'LABEL=CDROM', 'TYPE=iso9660'):
+        devs = util.find_devs_with(f)
+        devs.sort()
+        for d in devs:
+            if d not in combined:
+                combined.append(d)
+
+    return combined
+
+
+def switch_user_cmd(user):
+    return ['sudo', '-u', user]
+
+
+def parse_shell_config(content, keylist=None, bash=None, asuser=None,
+                       switch_user_cb=None):
+
+    if isinstance(bash, str):
+        bash = [bash]
+    elif bash is None:
+        bash = ['bash', '-e']
+
+    if switch_user_cb is None:
+        switch_user_cb = switch_user_cmd
+
+    # allvars expands to all existing variables by using '${!x*}' notation
+    # where x is lower or upper case letters or '_'
+    allvars = ["${!%s*}" % x for x in string.letters + "_"]
+
+    keylist_in = keylist
+    if keylist is None:
+        keylist = allvars
+        keylist_in = []
+
+    setup = '\n'.join(('__v="";', '',))
+
+    def varprinter(vlist):
+        # output '\0'.join(['_start_', key=value NULL for vars in vlist]
+        return '\n'.join((
+            'printf "%s\\0" _start_',
+            'for __v in %s; do' % ' '.join(vlist),
+            '   printf "%s=%s\\0" "$__v" "${!__v}";',
+            'done',
+            ''
+        ))
+
+    # the rendered 'bcmd' is bash syntax that does
+    # setup: declare variables we use (so they show up in 'all')
+    # varprinter(allvars): print all variables known at beginning
+    # content: execute the provided content
+    # varprinter(keylist): print all variables known after content
+    #
+    # output is then a null terminated array of:
+    #   literal '_start_'
+    #   key=value (for each preset variable)
+    #   literal '_start_'
+    #   key=value (for each post set variable)
+    bcmd = ('unset IFS\n' +
+            setup +
+            varprinter(allvars) +
+            '{\n%s\n\n:\n} > /dev/null\n' % content +
+            'unset IFS\n' +
+            varprinter(keylist) + "\n")
+
+    cmd = []
+    if asuser is not None:
+        cmd = switch_user_cb(asuser)
+
+    cmd.extend(bash)
+
+    (output, _error) = util.subp(cmd, data=bcmd)
+
+    # exclude vars in bash that change on their own or that we used
+    excluded = ("RANDOM", "LINENO", "_", "__v")
+    preset = {}
+    ret = {}
+    target = None
+    output = output[0:-1]  # remove trailing null
+
+    # go through output.  First _start_ is for 'preset', second for 'target'.
+    # Add to target only things were changed and not in volitile
+    for line in output.split("\x00"):
+        try:
+            (key, val) = line.split("=", 1)
+            if target is preset:
+                target[key] = val
+            elif (key not in excluded and
+                  (key in keylist_in or preset.get(key) != val)):
+                ret[key] = val
+        except ValueError:
+            if line != "_start_":
+                raise
+            if target is None:
+                target = preset
+            elif target is preset:
+                target = ret
+
+    return ret
+
+
+def read_context_disk_dir(source_dir, asuser=None):
+    """
+    read_context_disk_dir(source_dir):
+    read source_dir and return a tuple with metadata dict and user-data
+    string populated.  If not a valid dir, raise a NonContextDiskDir
+    """
+    found = {}
+    for af in CONTEXT_DISK_FILES:
+        fn = os.path.join(source_dir, af)
+        if os.path.isfile(fn):
+            found[af] = fn
+
+    if not found:
+        raise NonContextDiskDir("%s: %s" % (source_dir, "no files found"))
+
+    context = {}
+    results = {'userdata': None, 'metadata': {}}
+
+    if "context.sh" in found:
+        if asuser is not None:
+            try:
+                pwd.getpwnam(asuser)
+            except KeyError as e:
+                raise BrokenContextDiskDir("configured user '%s' "
+                                           "does not exist", asuser)
+        try:
+            with open(os.path.join(source_dir, 'context.sh'), 'r') as f:
+                content = f.read().strip()
+
+            context = parse_shell_config(content, asuser=asuser)
+        except util.ProcessExecutionError as e:
+            raise BrokenContextDiskDir("Error processing context.sh: %s" % (e))
+        except IOError as e:
+            raise NonContextDiskDir("Error reading context.sh: %s" % (e))
+    else:
+        raise NonContextDiskDir("Missing context.sh")
+
+    if not context:
+        return results
+
+    results['metadata'] = context
+
+    # process single or multiple SSH keys
+    ssh_key_var = None
+    if "SSH_KEY" in context:
+        ssh_key_var = "SSH_KEY"
+    elif "SSH_PUBLIC_KEY" in context:
+        ssh_key_var = "SSH_PUBLIC_KEY"
+
+    if ssh_key_var:
+        lines = context.get(ssh_key_var).splitlines()
+        results['metadata']['public-keys'] = [l for l in lines
+            if len(l) and not l.startswith("#")]
+
+    # custom hostname -- try hostname or leave cloud-init
+    # itself create hostname from IP address later
+    for k in ('HOSTNAME', 'PUBLIC_IP', 'IP_PUBLIC', 'ETH0_IP'):
+        if k in context:
+            results['metadata']['local-hostname'] = context[k]
+            break
+
+    # raw user data
+    if "USER_DATA" in context:
+        results['userdata'] = context["USER_DATA"]
+    elif "USERDATA" in context:
+        results['userdata'] = context["USERDATA"]
+
+    # generate static /etc/network/interfaces
+    # only if there are any required context variables
+    # http://opennebula.org/documentation:rel3.8:cong#network_configuration
+    for k in context.keys():
+        if re.match(r'^ETH\d+_IP$', k):
+            (out, _) = util.subp(['/sbin/ip', 'link'])
+            net = OpenNebulaNetwork(out, context)
+            results['network-interfaces'] = net.gen_conf()
+            break
+
+    return results
+
+
+# Used to match classes to dependencies
+datasources = [
+    (DataSourceOpenNebula, (sources.DEP_FILESYSTEM, )),
+    (DataSourceOpenNebulaNet, (sources.DEP_FILESYSTEM, sources.DEP_NETWORK)),
+]
+
+
+# Return a list of data sources that match this set of dependencies
+def get_datasource_list(depends):
+    return sources.list_from_depends(depends, datasources)
=== modified file 'cloudinit/sources/__init__.py'
--- cloudinit/sources/__init__.py 2013-07-23 17:10:33 +0000
+++ cloudinit/sources/__init__.py 2013-09-10 23:15:34 +0000
@@ -53,9 +53,16 @@
         self.userdata = None
         self.metadata = None
         self.userdata_raw = None
+
+        # find the datasource config name.
+        # remove 'DataSource' from classname on front, and remove 'Net' on end.
+        # Both Foo and FooNet sources expect config in cfg['sources']['Foo']
         name = type_utils.obj_name(self)
         if name.startswith(DS_PREFIX):
             name = name[len(DS_PREFIX):]
+        if name.endswith('Net'):
+            name = name[0:-3]
+
         self.ds_cfg = util.get_cfg_by_path(self.sys_cfg,
                                            ("datasource", name), {})
         if not ud_proc:
@@ -144,7 +151,7 @@
             return "iid-datasource"
         return str(self.metadata['instance-id'])

-    def get_hostname(self, fqdn=False):
+    def get_hostname(self, fqdn=False, resolve_ip=False):
         defdomain = "localdomain"
         defhost = "localhost"
         domain = defdomain
@@ -168,7 +175,14 @@
             # make up a hostname (LP: #475354) in format ip-xx.xx.xx.xx
             lhost = self.metadata['local-hostname']
             if util.is_ipv4(lhost):
-                toks = ["ip-%s" % lhost.replace(".", "-")]
+                toks = []
+                if resolve_ip:
+                    toks = util.gethostbyaddr(lhost)
+
+                if toks:
+                    toks = str(toks).split('.')
+                else:
+                    toks = ["ip-%s" % lhost.replace(".", "-")]
             else:
                 toks = lhost.split(".")

=== modified file 'cloudinit/util.py'
--- cloudinit/util.py 2013-09-07 07:07:26 +0000
+++ cloudinit/util.py 2013-09-10 23:15:34 +0000
@@ -955,6 +955,13 @@
     return hostname


+def gethostbyaddr(ip):
+    try:
+        return socket.gethostbyaddr(ip)[0]
+    except socket.herror:
+        return None
+
+
 def is_resolvable_url(url):
     """determine if this url is resolvable (existing or ip)."""
     return (is_resolvable(urlparse.urlparse(url).hostname))

=== modified file 'doc/rtd/topics/datasources.rst'
--- doc/rtd/topics/datasources.rst 2013-02-06 07:58:49 +0000
+++ doc/rtd/topics/datasources.rst 2013-09-10 23:15:34 +0000
@@ -141,6 +141,12 @@
 .. include:: ../../sources/configdrive/README.rst

 ---------------------------
+OpenNebula
+---------------------------
+
+.. include:: ../../sources/opennebula/README.rst
+
+---------------------------
 Alt cloud
 ---------------------------

=== added directory 'doc/sources/opennebula'
=== added file 'doc/sources/opennebula/README.rst'
--- doc/sources/opennebula/README.rst 1970-01-01 00:00:00 +0000
+++ doc/sources/opennebula/README.rst 2013-09-10 23:15:34 +0000
@@ -0,0 +1,142 @@
The `OpenNebula`_ (ON) datasource supports the contextualization disk.

  See `contextualization overview`_, `contextualizing VMs`_ and
  `network configuration`_ in the public documentation for
  more information.

OpenNebula's virtual machines are contextualized (parametrized) by
a CD-ROM image that contains a shell script, *context.sh*, with
custom variables defined at virtual machine start. There is no
fixed set of contextualization variables; the datasource accepts
many of those used and recommended across the documentation.

Datasource configuration
~~~~~~~~~~~~~~~~~~~~~~~~~

The datasource accepts the following configuration options.

::

    dsmode:
      values: local, net, disabled
      default: net

Whether this datasource is processed in the 'local' (pre-networking)
or 'net' (post-networking) stage, or 'disabled' entirely.

::

    parseuser:
      default: nobody

The unprivileged system user used to process the contextualization
script.

Contextualization disk
~~~~~~~~~~~~~~~~~~~~~~

The following criteria are required:

1. Must be formatted with the `iso9660`_ filesystem
   or have a *filesystem* label of **CONTEXT** or **CDROM**
2. Must contain a file *context.sh* with the contextualization
   variables. The file is generated by OpenNebula, has a KEY='VALUE'
   format and can be easily read by bash

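The merge description notes that the shell-output parser was replaced by a direct regular-expression parser. A minimal, hypothetical sketch of such a KEY='VALUE' parser (not the actual code in DataSourceOpenNebula.py, which also copes with escaped quotes and other edge cases):

```python
import re

# Hypothetical KEY='VALUE' matcher for context.sh lines; the real
# parser in DataSourceOpenNebula.py is more thorough.
CONTEXT_VAR = re.compile(r"^(\w+)='(.*?)'$", re.MULTILINE | re.DOTALL)

def parse_context(text):
    # Build a dict of context variables from the script text.
    return {key: value for key, value in CONTEXT_VAR.findall(text)}

print(parse_context("HOSTNAME='foo.example.com'\nETH0_IP='10.0.0.3'\n"))
```

Comment lines (such as the header OpenNebula writes) simply fail to match and are skipped.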
Contextualization variables
~~~~~~~~~~~~~~~~~~~~~~~~~~~

There is no fixed or standard set of contextualization variables in
OpenNebula. The following variables were found in various places and
revisions of the OpenNebula documentation. Where multiple similar
variables are specified, only the first one found is taken.

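The "first found wins" rule can be sketched as follows (`first_found` is an illustrative helper, not a function from the datasource):

```python
def first_found(context, names):
    # Return the value of the first variable in `names` that is
    # present and non-empty in the parsed context, else None.
    for name in names:
        if context.get(name):
            return context[name]
    return None

context = {'IP_PUBLIC': '10.0.0.3', 'ETH0_IP': '192.168.1.2'}
print(first_found(context, ('HOSTNAME', 'PUBLIC_IP', 'IP_PUBLIC', 'ETH0_IP')))
# → 10.0.0.3
```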
::

    DSMODE

Datasource mode configuration override. Values: local, net, disabled.

::

    DNS
    ETH<x>_IP
    ETH<x>_NETWORK
    ETH<x>_MASK
    ETH<x>_GATEWAY
    ETH<x>_DOMAIN
    ETH<x>_DNS

Static `network configuration`_.

::

    HOSTNAME

Instance hostname.

::

    PUBLIC_IP
    IP_PUBLIC
    ETH0_IP

If no hostname has been specified, cloud-init will try to create a
hostname from the instance's IP address in 'local' dsmode. In 'net'
dsmode, cloud-init tries to resolve one of its IP addresses to get
the hostname.

::

    SSH_KEY
    SSH_PUBLIC_KEY

One or multiple SSH keys (separated by newlines) can be specified.

::

    USER_DATA
    USERDATA

cloud-init user data.

Example configuration
~~~~~~~~~~~~~~~~~~~~~

This example cloud-init configuration (*cloud.cfg*) enables the
OpenNebula datasource only in 'net' mode.

::

    disable_ec2_metadata: True
    datasource_list: ['OpenNebula']
    datasource:
      OpenNebula:
        dsmode: net
        parseuser: nobody

Example VM's context section
~~~~~~~~~~~~~~~~~~~~~~~~~~~~

::

    CONTEXT=[
      PUBLIC_IP="$NIC[IP]",
      SSH_KEY="$USER[SSH_KEY]
    $USER[SSH_KEY1]
    $USER[SSH_KEY2] ",
      USER_DATA="#cloud-config
    # see https://help.ubuntu.com/community/CloudInit

    packages: []

    mounts:
    - [vdc,none,swap,sw,0,0]
    runcmd:
    - echo 'Instance has been configured by cloud-init.' | wall
    " ]

.. _OpenNebula: http://opennebula.org/
.. _contextualization overview: http://opennebula.org/documentation:documentation:context_overview
.. _contextualizing VMs: http://opennebula.org/documentation:documentation:cong
.. _network configuration: http://opennebula.org/documentation:documentation:cong#network_configuration
.. _iso9660: https://en.wikipedia.org/wiki/ISO_9660
=== added file 'tests/unittests/test_datasource/test_opennebula.py'
--- tests/unittests/test_datasource/test_opennebula.py 1970-01-01 00:00:00 +0000
+++ tests/unittests/test_datasource/test_opennebula.py 2013-09-10 23:15:34 +0000
@@ -0,0 +1,267 @@
from cloudinit.sources import DataSourceOpenNebula as ds
from cloudinit import helpers
from cloudinit import util
from mocker import MockerTestCase
from tests.unittests.helpers import populate_dir

import os
import pwd

TEST_VARS = {
    'VAR1': 'single',
    'VAR2': 'double word',
    'VAR3': 'multi\nline\n',
    'VAR4': "'single'",
    'VAR5': "'double word'",
    'VAR6': "'multi\nline\n'",
    'VAR7': 'single\\t',
    'VAR8': 'double\\tword',
    'VAR9': 'multi\\t\nline\n',
    'VAR10': '\\',  # expect \
    'VAR11': '\'',  # expect '
    'VAR12': '$',  # expect $
}

INVALID_CONTEXT = ';'
USER_DATA = '#cloud-config\napt_upgrade: true'
SSH_KEY = 'ssh-rsa AAAAB3NzaC1....sIkJhq8wdX+4I3A4cYbYP ubuntu@server-460-%i'
HOSTNAME = 'foo.example.com'
PUBLIC_IP = '10.0.0.3'

CMD_IP_OUT = '''\
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 16436 qdisc noqueue state UNKNOWN
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
2: eth0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc mq state UP qlen 1000
    link/ether 02:00:0a:12:01:01 brd ff:ff:ff:ff:ff:ff
'''

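In the fixture above, eth0's MAC is 02:00:0a:12:01:01. OpenNebula (by default) builds the MAC from a two-octet prefix plus the interface's IPv4 address, so 0a:12:01:01 encodes 10.18.1.1, which is what the network tests below expect. A sketch of that derivation (helper name is illustrative):

```python
def ip_from_mac(mac):
    # OpenNebula convention: MAC is <2-octet prefix>:<4 octets of IPv4>,
    # so the address is recovered from the trailing four octets.
    return ".".join(str(int(octet, 16)) for octet in mac.split(":")[2:])

print(ip_from_mac("02:00:0a:12:01:01"))
# → 10.18.1.1
```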
class TestOpenNebulaDataSource(MockerTestCase):
    parsed_user = None

    def setUp(self):
        super(TestOpenNebulaDataSource, self).setUp()
        self.tmp = self.makeDir()
        self.paths = helpers.Paths({'cloud_dir': self.tmp})

        # defaults for a few tests
        self.ds = ds.DataSourceOpenNebula
        self.seed_dir = os.path.join(self.paths.seed_dir, "opennebula")
        self.sys_cfg = {'datasource': {'OpenNebula': {'dsmode': 'local'}}}

        # we don't want 'sudo' called in tests, so we patch switch_user_cmd
        def my_switch_user_cmd(user):
            self.parsed_user = user
            return []

        self.switch_user_cmd_real = ds.switch_user_cmd
        ds.switch_user_cmd = my_switch_user_cmd

    def tearDown(self):
        ds.switch_user_cmd = self.switch_user_cmd_real
        super(TestOpenNebulaDataSource, self).tearDown()

    def test_get_data_non_contextdisk(self):
        orig_find_devs_with = util.find_devs_with
        try:
            # don't try to look up CDs
            util.find_devs_with = lambda n: []
            dsrc = self.ds(sys_cfg=self.sys_cfg, distro=None, paths=self.paths)
            ret = dsrc.get_data()
            self.assertFalse(ret)
        finally:
            util.find_devs_with = orig_find_devs_with

    def test_get_data_broken_contextdisk(self):
        orig_find_devs_with = util.find_devs_with
        try:
            # don't try to look up CDs
            util.find_devs_with = lambda n: []
            populate_dir(self.seed_dir, {'context.sh': INVALID_CONTEXT})
            dsrc = self.ds(sys_cfg=self.sys_cfg, distro=None, paths=self.paths)
            self.assertRaises(ds.BrokenContextDiskDir, dsrc.get_data)
        finally:
            util.find_devs_with = orig_find_devs_with

    def test_get_data_invalid_identity(self):
        orig_find_devs_with = util.find_devs_with
        try:
            # generate a non-existing system user name
            sys_cfg = self.sys_cfg
            invalid_user = 'invalid'
            while not sys_cfg['datasource']['OpenNebula'].get('parseuser'):
                try:
                    pwd.getpwnam(invalid_user)
                    invalid_user += 'X'
                except KeyError:
                    sys_cfg['datasource']['OpenNebula']['parseuser'] = \
                        invalid_user

            # don't try to look up CDs
            util.find_devs_with = lambda n: []
            populate_context_dir(self.seed_dir, {'KEY1': 'val1'})
            dsrc = self.ds(sys_cfg=sys_cfg, distro=None, paths=self.paths)
            self.assertRaises(ds.BrokenContextDiskDir, dsrc.get_data)
        finally:
            util.find_devs_with = orig_find_devs_with

    def test_get_data(self):
        orig_find_devs_with = util.find_devs_with
        try:
            # don't try to look up CDs
            util.find_devs_with = lambda n: []
            populate_context_dir(self.seed_dir, {'KEY1': 'val1'})
            dsrc = self.ds(sys_cfg=self.sys_cfg, distro=None, paths=self.paths)
            ret = dsrc.get_data()
            self.assertTrue(ret)
        finally:
            util.find_devs_with = orig_find_devs_with

    def test_seed_dir_non_contextdisk(self):
        self.assertRaises(ds.NonContextDiskDir, ds.read_context_disk_dir,
                          self.seed_dir)

    def test_seed_dir_empty1_context(self):
        populate_dir(self.seed_dir, {'context.sh': ''})
        results = ds.read_context_disk_dir(self.seed_dir)

        self.assertEqual(results['userdata'], None)
        self.assertEqual(results['metadata'], {})

    def test_seed_dir_empty2_context(self):
        populate_context_dir(self.seed_dir, {})
        results = ds.read_context_disk_dir(self.seed_dir)

        self.assertEqual(results['userdata'], None)
        self.assertEqual(results['metadata'], {})

    def test_seed_dir_broken_context(self):
        populate_dir(self.seed_dir, {'context.sh': INVALID_CONTEXT})

        self.assertRaises(ds.BrokenContextDiskDir,
                          ds.read_context_disk_dir,
                          self.seed_dir)

    def test_context_parser(self):
        populate_context_dir(self.seed_dir, TEST_VARS)
        results = ds.read_context_disk_dir(self.seed_dir)

        self.assertTrue('metadata' in results)
        self.assertEqual(TEST_VARS, results['metadata'])

    def test_ssh_key(self):
        public_keys = ['first key', 'second key']
        for c in range(4):
            for k in ('SSH_KEY', 'SSH_PUBLIC_KEY'):
                my_d = os.path.join(self.tmp, "%s-%i" % (k, c))
                populate_context_dir(my_d, {k: '\n'.join(public_keys)})
                results = ds.read_context_disk_dir(my_d)

                self.assertTrue('metadata' in results)
                self.assertTrue('public-keys' in results['metadata'])
                self.assertEqual(public_keys,
                                 results['metadata']['public-keys'])

            public_keys.append(SSH_KEY % (c + 1,))

    def test_user_data(self):
        for k in ('USER_DATA', 'USERDATA'):
            my_d = os.path.join(self.tmp, k)
            populate_context_dir(my_d, {k: USER_DATA})
            results = ds.read_context_disk_dir(my_d)

            self.assertTrue('userdata' in results)
            self.assertEqual(USER_DATA, results['userdata'])

    def test_hostname(self):
        for k in ('HOSTNAME', 'PUBLIC_IP', 'IP_PUBLIC', 'ETH0_IP'):
            my_d = os.path.join(self.tmp, k)
            populate_context_dir(my_d, {k: PUBLIC_IP})
            results = ds.read_context_disk_dir(my_d)

            self.assertTrue('metadata' in results)
            self.assertTrue('local-hostname' in results['metadata'])
            self.assertEqual(PUBLIC_IP, results['metadata']['local-hostname'])

    def test_network_interfaces(self):
        populate_context_dir(self.seed_dir, {'ETH0_IP': '1.2.3.4'})
        results = ds.read_context_disk_dir(self.seed_dir)

        self.assertTrue('network-interfaces' in results)

    def test_find_candidates(self):
        def my_devs_with(criteria):
            return {
                "LABEL=CONTEXT": ["/dev/sdb"],
                "LABEL=CDROM": ["/dev/sr0"],
                "TYPE=iso9660": ["/dev/vdb"],
            }.get(criteria, [])

        orig_find_devs_with = util.find_devs_with
        try:
            util.find_devs_with = my_devs_with
            self.assertEqual(["/dev/sdb", "/dev/sr0", "/dev/vdb"],
                             ds.find_candidate_devs())
        finally:
            util.find_devs_with = orig_find_devs_with

class TestOpenNebulaNetwork(MockerTestCase):

    def setUp(self):
        super(TestOpenNebulaNetwork, self).setUp()

    def test_lo(self):
        net = ds.OpenNebulaNetwork('', {})
        self.assertEqual(net.gen_conf(), u'''\
auto lo
iface lo inet loopback
''')

    def test_eth0(self):
        net = ds.OpenNebulaNetwork(CMD_IP_OUT, {})
        self.assertEqual(net.gen_conf(), u'''\
auto lo
iface lo inet loopback

auto eth0
iface eth0 inet static
  address 10.18.1.1
  network 10.18.1.0
  netmask 255.255.255.0
''')

    def test_eth0_override(self):
        context = {
            'DNS': '1.2.3.8',
            'ETH0_IP': '1.2.3.4',
            'ETH0_NETWORK': '1.2.3.0',
            'ETH0_MASK': '255.255.0.0',
            'ETH0_GATEWAY': '1.2.3.5',
            'ETH0_DOMAIN': 'example.com',
            'ETH0_DNS': '1.2.3.6 1.2.3.7'
        }

        net = ds.OpenNebulaNetwork(CMD_IP_OUT, context)
        self.assertEqual(net.gen_conf(), u'''\
auto lo
iface lo inet loopback

auto eth0
iface eth0 inet static
  address 1.2.3.4
  network 1.2.3.0
  netmask 255.255.0.0
  gateway 1.2.3.5
  dns-search example.com
  dns-nameservers 1.2.3.8 1.2.3.6 1.2.3.7
''')

def populate_context_dir(path, variables):
    data = "# Context variables generated by OpenNebula\n"
    for (k, v) in variables.iteritems():
        data += ("%s='%s'\n" % (k.upper(), v.replace(r"'", r"'\''")))
    populate_dir(path, {'context.sh': data})

# vi: ts=4 expandtab
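populate_context_dir above escapes any single quotes inside values with the bash-safe `'\''` sequence before writing context.sh; the escaping can be checked in isolation:

```python
def shell_single_quote(value):
    # Same escaping as populate_context_dir: close the quote, emit an
    # escaped quote, reopen -- the classic bash idiom for ' in '...'.
    return "'%s'" % value.replace(r"'", r"'\''")

print(shell_single_quote("it's"))
# → 'it'\''s'
```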