Merge lp:~vlastimil-holer/cloud-init/opennebula into lp:~cloud-init-dev/cloud-init/trunk

Proposed by Vlastimil Holer
Status: Merged
Merged at revision: 867
Proposed branch: lp:~vlastimil-holer/cloud-init/opennebula
Merge into: lp:~cloud-init-dev/cloud-init/trunk
Diff against target: 959 lines (+881/-2)
7 files modified
cloudinit/settings.py (+1/-0)
cloudinit/sources/DataSourceOpenNebula.py (+442/-0)
cloudinit/sources/__init__.py (+16/-2)
cloudinit/util.py (+7/-0)
doc/rtd/topics/datasources.rst (+6/-0)
doc/sources/opennebula/README.rst (+142/-0)
tests/unittests/test_datasource/test_opennebula.py (+267/-0)
To merge this branch: bzr merge lp:~vlastimil-holer/cloud-init/opennebula
Reviewer Review Type Date Requested Status
Joshua Harlow Pending
Scott Moser Pending
Review via email: mp+184278@code.launchpad.net

This proposal supersedes a proposal from 2013-02-21.

Description of the change

I'm resubmitting another review proposal for OpenNebula support in cloud-init.

* pep8/pylint/tests were fixed
* shell output parser has been replaced by simple direct RE parser
* fixed DSMODE fetching from contextualization disk
* minor fixes, cleanups
* use unittests.helpers.populate_dir (wrapped by populate_context_dir which generates content)

Please let us know what else would be good to fix. Thanks to Javier Fontan <email address hidden>

Revision history for this message
Joshua Harlow (harlowja) wrote : Posted in a previous version of this proposal

Just a few comments, otherwise seems cool.

Is 'NonConfigDriveDir' (exception afaik) supposed to be brought in somewhere?

Should this ds class be derived from the config drive datasource?
 - This is due to the new overload that was just added there
   for mapping device names to actual devices. I noticed this
   code does not have that function either (without it fstab
   will not be adjusted). So perhaps it's better to inherit
   if the functionality is the same?? Perhaps override getdata()
   but still use the rest of the parent?

For capturing 'except subprocess.CalledProcessError as exc:' you might want to catch the one in 'util.py' instead (it's not the same exception - although maybe it should be).

Is it possible to add more comments as to what the context.sh parsing is doing?
Ie: ''comm -23 <(set | sort -u) <(echo "$VARS") | egrep -v "^(VARS|PIPESTATUS|_)="'' (???)

context_sh[key.lower()]=r.group(1).\
    replace('\\\\','\\').\
    replace('\\t','\t').\
    replace('\\n','\n').\
    replace("\\'","'")

Should this just be r.group(1).decode('string_escape')?
>>> print c
\n\tblahblah\n\'
>>> print c.decode('string_escape')

 blahblah
'
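A side note for later readers: 'string_escape' is a Python 2-only codec on byte strings. On Python 3 the closest stdlib equivalent is the 'unicode_escape' codec; a minimal sketch (the unescape helper here is hypothetical, not part of the proposed branch):

```python
import codecs

def unescape(s):
    # Python 2's s.decode('string_escape') has no direct str method on
    # Python 3; round-tripping through latin-1 and the 'unicode_escape'
    # codec interprets the same backslash escapes (\n, \t, \\, \', ...).
    return codecs.decode(s.encode('latin-1'), 'unicode_escape')
```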

review: Needs Fixing
Revision history for this message
Joshua Harlow (harlowja) wrote : Posted in a previous version of this proposal

Also some tests would be super-great :-)

Revision history for this message
Vlastimil Holer (vlastimil-holer) wrote : Posted in a previous version of this proposal

> Just a few comments, otherwise seems cool.
>
> Is 'NonConfigDriveDir' (exception afaik) supposed to be brought in somewhere?

Fixed, leftover from ConfigDrive class.

> Should this ds class be derived from the config drive datasource?
> - This is due to the new overload that was just added there
> for mapping device names to actual devices. I noticed this
> code does not have that function either (without it fstab
> will not be adjusted). So perhaps it's better to inherit
> if the functionality is the same?? Perhaps override getdata()
> but still use the rest of the parent?

If I understand correctly, that's not a problem here. OpenNebula's
metadata isn't strictly specified the way it is in OpenStack or EC2.
It is all user specified, and the metadata I currently depend on is
only mentioned in the documentation or in example scripts; I haven't
seen anything like block device mappings so far. Since there is no
default metadata for device mapping, I think it's better not to invent
this functionality now.

You can check here:
http://opennebula.org/documentation:rel3.8:context_overview
All metadata are optional and user specified; it's common
to find an SSH public key or a static network configuration there.

> For capturing 'except subprocess.CalledProcessError as exc:' you might want to
> catch the one in 'util.py' instead (it's not the same exception - although
> maybe it should be).

Fixed.

> Is it possible to add more comments as to what the context.sh parsing is doing?
> Ie: ''comm -23 <(set | sort -u) <(echo "$VARS") | egrep -v
> "^(VARS|PIPESTATUS|_)="'' (???)

An explanation has been added to the code. I know this construction
isn't very nice, but IMHO it's the easiest way to read the context
variables: bash dumps them in an easily parsable form, one variable
per line.

> context_sh[key.lower()]=r.group(1).\
>     replace('\\\\','\\').\
>     replace('\\t','\t').\
>     replace('\\n','\n').\
>     replace("\\'","'")
>
> Should this just be r.group(1).decode('string_escape')?
> >>> print c
> \n\tblahblah\n\'
> >>> print c.decode('string_escape')
>
> blahblah
> '

Yes, thanks, fixed.

> Also some tests would be super-great :-)

On my TODO list, will do that soon.

Revision history for this message
Joshua Harlow (harlowja) wrote : Posted in a previous version of this proposal

Ok, one last comment.

Is the modification of "cloudinit/distros/__init__.py" still needed or is that just leftover code that got picked up in the review?

Revision history for this message
Vlastimil Holer (vlastimil-holer) wrote : Posted in a previous version of this proposal

> Ok, one last comment.
>
> Is the modification of "cloudinit/distros/__init__.py" still needed or is that
> just leftover code that got picked up in the review?

You are right, this was a leftover from Javi's first version of the static
network configuration. We already generate a full Debian-style network
configuration, including the "dns-search|nameservers" interface options.
On RHEL systems these are automatically converted and applied to
/etc/resolv.conf by your existing code.

Sorry for that. The change has been uncommitted.

Revision history for this message
Scott Moser (smoser) wrote : Posted in a previous version of this proposal

Some thoughts:
 * run ./tools/run-pep8 tests/unittests/test_datasource/test_opennebula.py
   and ./tools/run-pylint tests/unittests/test_datasource/test_opennebula.py
   You'll see lots of (sorry) nit-picky things to fix.

 * how does 'user_dsmode' get set? I guess through 'context.sh' ?

 * I guess I might like to see the reading of context.sh in its own method for
   easier testing.

 * populate_dir (copied to your test) is now in unittests/helpers.py

The one thing I really do not like here is that context.sh is explicitly
executed (by bash), as opposed to being parsed. The issue is that I can
essentially do *anything* here. I could attach a disk labeled CDROM, with
context.sh and execute arbitrary code as root.

This isn't really a security issue, since executing code as root is generally
cloud-init's goal, but it's very inconsistent with the other datasources.

Also, fwiw, I've been wanting to explicitly add a "execute something as root
really early" hook. The goal would be to allow you to tweak an image as early
as possible, even fiddling with or fixing cloud-init.

So, please fix the pep8 and pylint issues. Also, are people actually expecting code to be executed? I.e., do they think they can build/provide a context.sh that executes code?

review: Needs Fixing
Revision history for this message
Javi Fontan (jfontan) wrote : Posted in a previous version of this proposal

"context.sh" is generated by OpenNebula and is a shell script with shell variables. It is meant to be executed by some init scripts in the VM to get that information. It should not contain any "executable" code, at least OpenNebula does not add code to it.

The problem with parsing that file is that some variables can be multiline and parsing is not that straightforward. For example:

# Context variables generated by OpenNebula
DISK_ID='1'
ETH0_DNS='10.0.0.1'
ETH0_GATEWAY='10.0.0.1'
ETH0_IP='10.0.0.72'
ETH0_MASK='255.255.255.0'
FILES_DS='/var/lib/one/datastores/2/9c92ad910c0b30a411ccdfc141bc7a25:'\''jfopet.sh'\'' '
FILES_URL='http://10.0.0.11/files/installer.tar.gz'
INIT_SCRIPTS='jfopet.sh'
MANIFEST='ubuntu-kvm1'
SSH_PUBLIC_KEY='ssh-dss AAAAB3NzaC1kc3MAAACBANprdUVFRaADZXZaAm2elpRaUGCMETLLuYYJCDUZPb0Dh/V7KeM2/a3rFIDA0s5sVK2XQNqLHyy0U8xA/0R8dplmg7BDckkAzhrDVpEGnQE1fk/xPd1t7u+yeVqpbrfyAmfmyE2P980mhBoWalbeV/f7SmHUP8RiQ2hlAWUxr7I5AAAAFQDLHhFndbcA7svGd/yfY6nU4ubodwAAAIAoXLKlckmZur1pc9TN4XoHa+Fl/6Qpu0XO7Ai/tu8dqHlN+FpVk8BQNnokwE+EZBARLIL0JCjHpT9b5aEvlpRz3TuDa6az8wvJRlScNufmVrf0ls1WCFDHujSLzd3aOkKct35Aamf1amP/NPE2aGne/tPS7HQ3TCf5E2MAQzYVJKSARwAAAIAaVqvU7LfGMEw0hEXr7fuJCMHh4FmPvejiOoSUz2GOU5bceasRitdulCQJbNiVtY6U0S+qQ0X8rvAnG934p4zS9TtgKIhplr146fkbYnNjCaAM0rNVvTh2SzEEKJiG2G1d3wyNuO8wpPhIiJ1OVZrGwkVyWwiNzC2sWXAXldQ9Hg== user1@host
ssh-dss AAAAB3NzaC1kc3MAAACBANBWTQmm4GtkAiPpA20DK1TQd1n7UoC5BymTHlGzFmj+BsQdn6ZZbihV/roNVo1roJhb2hQq/WuQR50D72vMIjrC6t1DkofFBL0iz+mb7JdmNxbE7cnOHMxEr/cd4ds4EwzpBKiQzt8NNcz/zbSadHQtBd0+u1G5nvm4MXHNVYQnAAAAFQDbEN554Or4vd7D+2wzjs3c1nNOowAAAIAdRlTrRG1YmceKHh3urcltniIoo8FrNudwCShbHTCOQbM+KkMUTtw5qwWFuJP6HdaLjmUehqxqDWeWGp8c2y5yMee0JR5cx+iMwhg1Q2o4S6c+zgWdUYIVkuPgYOOR2GMCPdl9mwcxtVvpHD59UFlPh16oLzakUCSxkro5V/LUXxuywAAAIBtYvxfwI5Cl0xq3/KQI7giNef05EXIgK+KwYu3xC6fs+7NxQYFzMsSEQhEJI62J091Kh0RFpFPdGPECIQolt8j4ltymKM9+pfgiE7oBXSxkW44XadBnCWCGU5B4gTnz84VKECWuu2J9Z4cn44hKYt1uj0SxxzExnB1X51kUN+Z5Q== user2@host'

I don't think creating a parser for these files is worth the effort.

Revision history for this message
Scott Moser (smoser) wrote :

Vlastimil, thanks.
  One thing that I thought about was the 'parsing'. If the content in 'config.sh' is actually shell, then it makes most sense to have a shell execute it rather than parsing it with a regex. Parsing with a regex just seems brittle, and is only going to cover very basic things.
  I put together 'parse_shell_config' at https://gist.github.com/smoser/6466708
  That basically reads bash variables into a dict after executing a given file.

  This would execute the file, which is what I was originally opposed to. I don't miss the irony there :).
  The gist has the ability to use 'sudo' to change to a different user. We could use 'nobody' by default and allow it to be configured. By parsing with the shell, and running as 'nobody', we should be fairly well protected against any complexities that might be present in the 'config.sh'.
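The gist's approach can be sketched roughly like this (a simplified illustration for this thread, not the actual gist code: the real version first snapshots the preset variables and diffs against them, so environment noise like PATH doesn't leak into the result, and it supports the sudo user switch):

```python
import subprocess

def parse_shell_config(content):
    # Have bash execute the context file, then dump every uppercase
    # variable as NUL-separated KEY=VALUE pairs so that multiline
    # values (e.g. SSH_PUBLIC_KEY) survive intact.
    dump = ('for __v in $(compgen -v | grep "^[A-Z]"); do'
            ' printf "%s=%s\\0" "$__v" "${!__v}"; done\n')
    out = subprocess.run(['bash', '-e'], input=content + '\n' + dump,
                         capture_output=True, text=True, check=True).stdout
    return dict(item.split('=', 1) for item in out.split('\0') if item)
```

Prefixing the bash command with ['sudo', '-u', 'nobody'] is what would confine whatever the context file might execute.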

Revision history for this message
Vlastimil Holer (vlastimil-holer) wrote :

Scott, it looks great, I definitely love your code style!

From my point of view, I rather agree with your former opinion not to use the shell. I originally chose that option because it gave me normalized output, with each variable on a single line, very easily. Also, in the past it was possible to reference variables from other variables, but this hasn't been true since OpenNebula 4.0 (May 2013). Context variables are now single-quoted, so it's impossible to reference other variables or execute external commands. context.sh is now just a plain key='value' file, which (for historical reasons, from when people wrote their own contextualization shell scripts) is usually sourced from a shell. Now I see the only benefit in more reliable parsing than our horrible regexp.

So, it's up to you which one you prefer to keep in your project in the end. Both will work fine.

We could also write a simple parser using e.g. pyparsing, but that would mean a new dependency for cloud-init.
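For what it's worth, a non-executing parser wouldn't necessarily need a new dependency: since context.sh is now a plain KEY='value' file, the stdlib shlex module can already tokenize it, including the '\'' escapes and multiline values from the example above. A rough sketch (hypothetical, not what was merged):

```python
import shlex

def parse_context(content):
    # shlex tokenizes like a POSIX shell without executing anything:
    # quotes are stripped, '\'' escapes and embedded newlines inside
    # quotes are handled, and '#' comments are skipped.
    ctx = {}
    for token in shlex.split(content, comments=True):
        key, sep, value = token.partition('=')
        if sep:
            ctx[key] = value
    return ctx
```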

Revision history for this message
Scott Moser (smoser) wrote :

OK.
 I took a further look.
 I'm fine to merge this.
 Could you make the following changes:
  * use the bash parsing at https://gist.github.com/smoser/6466708
  * allow configuration to set the user to drop privileges to (None indicates no change).
    That configuration would be in the datasource config (datasource:OpenNebula:parseuser) or some name.
  * default that user to 'nobody'.
  * there is one pylint error (at least on my machine) complaining about the toks.split(); use toks = str(toks).split('.'). It's not a real issue, but make pylint happy by wrapping 'toks' in str().

Other than that I think it looks good.
Sooner is better to get those changes in. I want to make an upload to Ubuntu sometime this week, and I'm shooting for 0.7.3 probably by the end of next week.

Thanks!

687. By Vlastimil Holer

Replace RE context.sh parser with Scott's rewrite of bash dumper. Upper case context variable names.

688. By Vlastimil Holer

Fix pylint complain on toks.split('.')

689. By Vlastimil Holer

Configurable OpenNebula::parseuser. Seed search dir+dev merge.
Eat shell parser error output. Few tests for tests for get_data.

690. By Vlastimil Holer

Update OpenNebula documentation (parseuser, more fs. labels, K-V single quotes)

691. By Vlastimil Holer

Detect invalid system user for test

Revision history for this message
Vlastimil Holer (vlastimil-holer) wrote :

I have used your bash parsing; the only change I made there was
redirecting stderr for invalid (or empty) context scripts:

> sp = subprocess.Popen(cmd, stdin=subprocess.PIPE,
> stdout=subprocess.PIPE,
> stderr=subprocess.PIPE)

I thought you would move the function somewhere into your
libraries after the merge.

* unprivileged user is configurable in
  OpenNebula:parseuser, defaults to "nobody"

* pylint complain fixed

I have also added some more unit tests around get_data,
which were missing, plus other cleanups. So, please look
once again and check whether it's OK for you, at least
for now, or tell me what I should fix before the end of the week.

Also, I didn't have time to test the new code on a real
system yet; I'll let you know whether it's OK ASAP.

692. By Vlastimil Holer

Fix detection of ETHx_IP context variable, add test.

Revision history for this message
Vlastimil Holer (vlastimil-holer) wrote :

Now looks good after fix in commit 692, tested on Debian 6.

Revision history for this message
Scott Moser (smoser) wrote :

Vlastimil, can you please merge from
 lp:~smoser/cloud-init/opennebula
and test that?

Just a few cleanups there.

Thanks.

693. By Vlastimil Holer

Merged lp:~smoser/cloud-init/opennebula

694. By Vlastimil Holer

All fake util.find_devs_with set before try-finally section

Revision history for this message
Vlastimil Holer (vlastimil-holer) wrote :

I don't know why you moved one save of the original util.find_devs_with before the try-finally section, but I have moved the rest to match.

I have quickly tested it on real systems and it works. Thanks!

Revision history for this message
Scott Moser (smoser) wrote :

Because I hit that one. Probably in error, but the 'finally' clause didn't account for it not being defined.

On Sep 10, 2013, at 7:28 PM, Vlastimil Holer <email address hidden> wrote:

> I don't know why you moved one save of the original util.find_devs_with before the try-finally section, but I have moved the rest to match.
>
> I have quickly tested it on real systems and it works. Thanks!
> --
> https://code.launchpad.net/~vlastimil-holer/cloud-init/opennebula/+merge/184278
> You are requested to review the proposed merge of lp:~vlastimil-holer/cloud-init/opennebula into lp:cloud-init.
>

Preview Diff

1=== modified file 'cloudinit/settings.py'
2--- cloudinit/settings.py 2013-07-18 21:37:18 +0000
3+++ cloudinit/settings.py 2013-09-10 23:15:34 +0000
4@@ -31,6 +31,7 @@
5 'datasource_list': [
6 'NoCloud',
7 'ConfigDrive',
8+ 'OpenNebula',
9 'Azure',
10 'AltCloud',
11 'OVF',
12
13=== added file 'cloudinit/sources/DataSourceOpenNebula.py'
14--- cloudinit/sources/DataSourceOpenNebula.py 1970-01-01 00:00:00 +0000
15+++ cloudinit/sources/DataSourceOpenNebula.py 2013-09-10 23:15:34 +0000
16@@ -0,0 +1,442 @@
17+# vi: ts=4 expandtab
18+#
19+# Copyright (C) 2012 Canonical Ltd.
20+# Copyright (C) 2012 Yahoo! Inc.
21+# Copyright (C) 2012-2013 CERIT Scientific Cloud
22+# Copyright (C) 2012-2013 OpenNebula.org
23+#
24+# Author: Scott Moser <scott.moser@canonical.com>
25+# Author: Joshua Harlow <harlowja@yahoo-inc.com>
26+# Author: Vlastimil Holer <xholer@mail.muni.cz>
27+# Author: Javier Fontan <jfontan@opennebula.org>
28+#
29+# This program is free software: you can redistribute it and/or modify
30+# it under the terms of the GNU General Public License version 3, as
31+# published by the Free Software Foundation.
32+#
33+# This program is distributed in the hope that it will be useful,
34+# but WITHOUT ANY WARRANTY; without even the implied warranty of
35+# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
36+# GNU General Public License for more details.
37+#
38+# You should have received a copy of the GNU General Public License
39+# along with this program. If not, see <http://www.gnu.org/licenses/>.
40+
41+import os
42+import pwd
43+import re
44+import string # pylint: disable=W0402
45+
46+from cloudinit import log as logging
47+from cloudinit import sources
48+from cloudinit import util
49+
50+LOG = logging.getLogger(__name__)
51+
52+DEFAULT_IID = "iid-dsopennebula"
53+DEFAULT_MODE = 'net'
54+DEFAULT_PARSEUSER = 'nobody'
55+CONTEXT_DISK_FILES = ["context.sh"]
56+VALID_DSMODES = ("local", "net", "disabled")
57+
58+
59+class DataSourceOpenNebula(sources.DataSource):
60+ def __init__(self, sys_cfg, distro, paths):
61+ sources.DataSource.__init__(self, sys_cfg, distro, paths)
62+ self.dsmode = 'local'
63+ self.seed = None
64+ self.seed_dir = os.path.join(paths.seed_dir, 'opennebula')
65+
66+ def __str__(self):
67+ root = sources.DataSource.__str__(self)
68+ return "%s [seed=%s][dsmode=%s]" % (root, self.seed, self.dsmode)
69+
70+ def get_data(self):
71+ defaults = {"instance-id": DEFAULT_IID}
72+ results = None
73+ seed = None
74+
75+ # decide parseuser for context.sh shell reader
76+ parseuser = DEFAULT_PARSEUSER
77+ if 'parseuser' in self.ds_cfg:
78+ parseuser = self.ds_cfg.get('parseuser')
79+
80+ candidates = [self.seed_dir]
81+ candidates.extend(find_candidate_devs())
82+ for cdev in candidates:
83+ try:
84+ if os.path.isdir(self.seed_dir):
85+ results = read_context_disk_dir(cdev, asuser=parseuser)
86+ elif cdev.startswith("/dev"):
87+ results = util.mount_cb(cdev, read_context_disk_dir,
88+ data=parseuser)
89+ except NonContextDiskDir:
90+ continue
91+ except BrokenContextDiskDir as exc:
92+ raise exc
93+ except util.MountFailedError:
94+ LOG.warn("%s was not mountable" % cdev)
95+
96+ if results:
97+ seed = cdev
98+ LOG.debug("found datasource in %s", cdev)
99+ break
100+
101+ if not seed:
102+ return False
103+
104+ # merge fetched metadata with datasource defaults
105+ md = results['metadata']
106+ md = util.mergemanydict([md, defaults])
107+
108+ # check for valid user specified dsmode
109+ user_dsmode = results['metadata'].get('DSMODE', None)
110+ if user_dsmode not in VALID_DSMODES + (None,):
111+ LOG.warn("user specified invalid mode: %s", user_dsmode)
112+ user_dsmode = None
113+
114+ # decide dsmode
115+ if user_dsmode:
116+ dsmode = user_dsmode
117+ elif self.ds_cfg.get('dsmode'):
118+ dsmode = self.ds_cfg.get('dsmode')
119+ else:
120+ dsmode = DEFAULT_MODE
121+
122+ if dsmode == "disabled":
123+ # most likely user specified
124+ return False
125+
126+ # apply static network configuration only in 'local' dsmode
127+ if ('network-interfaces' in results and self.dsmode == "local"):
128+ LOG.debug("Updating network interfaces from %s", self)
129+ self.distro.apply_network(results['network-interfaces'])
130+
131+ if dsmode != self.dsmode:
132+ LOG.debug("%s: not claiming datasource, dsmode=%s", self, dsmode)
133+ return False
134+
135+ self.seed = seed
136+ self.metadata = md
137+ self.userdata_raw = results.get('userdata')
138+ return True
139+
140+ def get_hostname(self, fqdn=False, resolve_ip=None):
141+ if resolve_ip is None:
142+ if self.dsmode == 'net':
143+ resolve_ip = True
144+ else:
145+ resolve_ip = False
146+ return sources.DataSource.get_hostname(self, fqdn, resolve_ip)
147+
148+
149+class DataSourceOpenNebulaNet(DataSourceOpenNebula):
150+ def __init__(self, sys_cfg, distro, paths):
151+ DataSourceOpenNebula.__init__(self, sys_cfg, distro, paths)
152+ self.dsmode = 'net'
153+
154+
155+class NonContextDiskDir(Exception):
156+ pass
157+
158+
159+class BrokenContextDiskDir(Exception):
160+ pass
161+
162+
163+class OpenNebulaNetwork(object):
164+ REG_DEV_MAC = re.compile(
165+ r'^\d+: (eth\d+):.*?link\/ether (..:..:..:..:..:..) ?',
166+ re.MULTILINE | re.DOTALL)
167+
168+ def __init__(self, ip, context):
169+ self.ip = ip
170+ self.context = context
171+ self.ifaces = self.get_ifaces()
172+
173+ def get_ifaces(self):
174+ return self.REG_DEV_MAC.findall(self.ip)
175+
176+ def mac2ip(self, mac):
177+ components = mac.split(':')[2:]
178+ return [str(int(c, 16)) for c in components]
179+
180+ def get_ip(self, dev, components):
181+ var_name = dev.upper() + '_IP'
182+ if var_name in self.context:
183+ return self.context[var_name]
184+ else:
185+ return '.'.join(components)
186+
187+ def get_mask(self, dev):
188+ var_name = dev.upper() + '_MASK'
189+ if var_name in self.context:
190+ return self.context[var_name]
191+ else:
192+ return '255.255.255.0'
193+
194+ def get_network(self, dev, components):
195+ var_name = dev.upper() + '_NETWORK'
196+ if var_name in self.context:
197+ return self.context[var_name]
198+ else:
199+ return '.'.join(components[:-1]) + '.0'
200+
201+ def get_gateway(self, dev):
202+ var_name = dev.upper() + '_GATEWAY'
203+ if var_name in self.context:
204+ return self.context[var_name]
205+ else:
206+ return None
207+
208+ def get_dns(self, dev):
209+ var_name = dev.upper() + '_DNS'
210+ if var_name in self.context:
211+ return self.context[var_name]
212+ else:
213+ return None
214+
215+ def get_domain(self, dev):
216+ var_name = dev.upper() + '_DOMAIN'
217+ if var_name in self.context:
218+ return self.context[var_name]
219+ else:
220+ return None
221+
222+ def gen_conf(self):
223+ global_dns = []
224+ if 'DNS' in self.context:
225+ global_dns.append(self.context['DNS'])
226+
227+ conf = []
228+ conf.append('auto lo')
229+ conf.append('iface lo inet loopback')
230+ conf.append('')
231+
232+ for i in self.ifaces:
233+ dev = i[0]
234+ mac = i[1]
235+ ip_components = self.mac2ip(mac)
236+
237+ conf.append('auto ' + dev)
238+ conf.append('iface ' + dev + ' inet static')
239+ conf.append(' address ' + self.get_ip(dev, ip_components))
240+ conf.append(' network ' + self.get_network(dev, ip_components))
241+ conf.append(' netmask ' + self.get_mask(dev))
242+
243+ gateway = self.get_gateway(dev)
244+ if gateway:
245+ conf.append(' gateway ' + gateway)
246+
247+ domain = self.get_domain(dev)
248+ if domain:
249+ conf.append(' dns-search ' + domain)
250+
251+ # add global DNS servers to all interfaces
252+ dns = self.get_dns(dev)
253+ if global_dns or dns:
254+ all_dns = global_dns
255+ if dns:
256+ all_dns.append(dns)
257+ conf.append(' dns-nameservers ' + ' '.join(all_dns))
258+
259+ conf.append('')
260+
261+ return "\n".join(conf)
262+
263+
264+def find_candidate_devs():
265+ """
266+ Return a list of devices that may contain the context disk.
267+ """
268+ combined = []
269+ for f in ('LABEL=CONTEXT', 'LABEL=CDROM', 'TYPE=iso9660'):
270+ devs = util.find_devs_with(f)
271+ devs.sort()
272+ for d in devs:
273+ if d not in combined:
274+ combined.append(d)
275+
276+ return combined
277+
278+
279+def switch_user_cmd(user):
280+ return ['sudo', '-u', user]
281+
282+
283+def parse_shell_config(content, keylist=None, bash=None, asuser=None,
284+ switch_user_cb=None):
285+
286+ if isinstance(bash, str):
287+ bash = [bash]
288+ elif bash is None:
289+ bash = ['bash', '-e']
290+
291+ if switch_user_cb is None:
292+ switch_user_cb = switch_user_cmd
293+
294+ # allvars expands to all existing variables by using '${!x*}' notation
295+ # where x is lower or upper case letters or '_'
296+ allvars = ["${!%s*}" % x for x in string.letters + "_"]
297+
298+ keylist_in = keylist
299+ if keylist is None:
300+ keylist = allvars
301+ keylist_in = []
302+
303+ setup = '\n'.join(('__v="";', '',))
304+
305+ def varprinter(vlist):
306+ # output '\0'.join(['_start_', key=value NULL for vars in vlist]
307+ return '\n'.join((
308+ 'printf "%s\\0" _start_',
309+ 'for __v in %s; do' % ' '.join(vlist),
310+ ' printf "%s=%s\\0" "$__v" "${!__v}";',
311+ 'done',
312+ ''
313+ ))
314+
315+ # the rendered 'bcmd' is bash syntax that does
316+ # setup: declare variables we use (so they show up in 'all')
317+ # varprinter(allvars): print all variables known at beginning
318+ # content: execute the provided content
319+ # varprinter(keylist): print all variables known after content
320+ #
321+ # output is then a null terminated array of:
322+ # literal '_start_'
323+ # key=value (for each preset variable)
324+ # literal '_start_'
325+ # key=value (for each post set variable)
326+ bcmd = ('unset IFS\n' +
327+ setup +
328+ varprinter(allvars) +
329+ '{\n%s\n\n:\n} > /dev/null\n' % content +
330+ 'unset IFS\n' +
331+ varprinter(keylist) + "\n")
332+
333+ cmd = []
334+ if asuser is not None:
335+ cmd = switch_user_cb(asuser)
336+
337+ cmd.extend(bash)
338+
339+ (output, _error) = util.subp(cmd, data=bcmd)
340+
341+ # exclude vars in bash that change on their own or that we used
342+ excluded = ("RANDOM", "LINENO", "_", "__v")
343+ preset = {}
344+ ret = {}
345+ target = None
346+ output = output[0:-1] # remove trailing null
347+
348+ # go through output. First _start_ is for 'preset', second for 'target'.
349+ # Add to target only things were changed and not in volitile
350+ for line in output.split("\x00"):
351+ try:
352+ (key, val) = line.split("=", 1)
353+ if target is preset:
354+ target[key] = val
355+ elif (key not in excluded and
356+ (key in keylist_in or preset.get(key) != val)):
357+ ret[key] = val
358+ except ValueError:
359+ if line != "_start_":
360+ raise
361+ if target is None:
362+ target = preset
363+ elif target is preset:
364+ target = ret
365+
366+ return ret
367+
368+
369+def read_context_disk_dir(source_dir, asuser=None):
370+ """
371+ read_context_disk_dir(source_dir):
372+ read source_dir and return a tuple with metadata dict and user-data
373+ string populated. If not a valid dir, raise a NonContextDiskDir
374+ """
375+ found = {}
376+ for af in CONTEXT_DISK_FILES:
377+ fn = os.path.join(source_dir, af)
378+ if os.path.isfile(fn):
379+ found[af] = fn
380+
381+ if not found:
382+ raise NonContextDiskDir("%s: %s" % (source_dir, "no files found"))
383+
384+ context = {}
385+ results = {'userdata': None, 'metadata': {}}
386+
387+ if "context.sh" in found:
388+ if asuser is not None:
389+ try:
390+ pwd.getpwnam(asuser)
391+ except KeyError as e:
392+ raise BrokenContextDiskDir("configured user '%s' "
393+ "does not exist", asuser)
394+ try:
395+ with open(os.path.join(source_dir, 'context.sh'), 'r') as f:
396+ content = f.read().strip()
397+
398+ context = parse_shell_config(content, asuser=asuser)
399+ except util.ProcessExecutionError as e:
400+ raise BrokenContextDiskDir("Error processing context.sh: %s" % (e))
401+ except IOError as e:
402+ raise NonContextDiskDir("Error reading context.sh: %s" % (e))
403+ else:
404+ raise NonContextDiskDir("Missing context.sh")
405+
406+ if not context:
407+ return results
408+
409+ results['metadata'] = context
410+
411+ # process single or multiple SSH keys
412+ ssh_key_var = None
413+ if "SSH_KEY" in context:
414+ ssh_key_var = "SSH_KEY"
415+ elif "SSH_PUBLIC_KEY" in context:
416+ ssh_key_var = "SSH_PUBLIC_KEY"
417+
418+ if ssh_key_var:
419+ lines = context.get(ssh_key_var).splitlines()
420+ results['metadata']['public-keys'] = [l for l in lines
421+ if len(l) and not l.startswith("#")]
422+
423+ # custom hostname -- try hostname or leave cloud-init
424+ # itself create hostname from IP address later
425+ for k in ('HOSTNAME', 'PUBLIC_IP', 'IP_PUBLIC', 'ETH0_IP'):
426+ if k in context:
427+ results['metadata']['local-hostname'] = context[k]
428+ break
429+
430+ # raw user data
431+ if "USER_DATA" in context:
432+ results['userdata'] = context["USER_DATA"]
433+ elif "USERDATA" in context:
434+ results['userdata'] = context["USERDATA"]
435+
436+ # generate static /etc/network/interfaces
437+ # only if there are any required context variables
438+ # http://opennebula.org/documentation:rel3.8:cong#network_configuration
439+ for k in context.keys():
440+ if re.match(r'^ETH\d+_IP$', k):
441+ (out, _) = util.subp(['/sbin/ip', 'link'])
442+ net = OpenNebulaNetwork(out, context)
443+ results['network-interfaces'] = net.gen_conf()
444+ break
445+
446+ return results
447+
448+
449+# Used to match classes to dependencies
450+datasources = [
451+ (DataSourceOpenNebula, (sources.DEP_FILESYSTEM, )),
452+ (DataSourceOpenNebulaNet, (sources.DEP_FILESYSTEM, sources.DEP_NETWORK)),
453+]
454+
455+
456+# Return a list of data sources that match this set of dependencies
457+def get_datasource_list(depends):
458+ return sources.list_from_depends(depends, datasources)
459
460=== modified file 'cloudinit/sources/__init__.py'
461--- cloudinit/sources/__init__.py 2013-07-23 17:10:33 +0000
462+++ cloudinit/sources/__init__.py 2013-09-10 23:15:34 +0000
463@@ -53,9 +53,16 @@
464 self.userdata = None
465 self.metadata = None
466 self.userdata_raw = None
467+
468+ # find the datasource config name.
469+ # remove 'DataSource' from classname on front, and remove 'Net' on end.
470+ # Both Foo and FooNet sources expect config in cfg['sources']['Foo']
471 name = type_utils.obj_name(self)
472 if name.startswith(DS_PREFIX):
473 name = name[len(DS_PREFIX):]
474+ if name.endswith('Net'):
475+ name = name[0:-3]
476+
477 self.ds_cfg = util.get_cfg_by_path(self.sys_cfg,
478 ("datasource", name), {})
479 if not ud_proc:
480@@ -144,7 +151,7 @@
481 return "iid-datasource"
482 return str(self.metadata['instance-id'])
483
484- def get_hostname(self, fqdn=False):
485+ def get_hostname(self, fqdn=False, resolve_ip=False):
486 defdomain = "localdomain"
487 defhost = "localhost"
488 domain = defdomain
489@@ -168,7 +175,14 @@
490 # make up a hostname (LP: #475354) in format ip-xx.xx.xx.xx
491 lhost = self.metadata['local-hostname']
492 if util.is_ipv4(lhost):
493- toks = ["ip-%s" % lhost.replace(".", "-")]
494+ toks = []
495+ if resolve_ip:
496+ toks = util.gethostbyaddr(lhost)
497+
498+ if toks:
499+ toks = str(toks).split('.')
500+ else:
501+ toks = ["ip-%s" % lhost.replace(".", "-")]
502 else:
503 toks = lhost.split(".")
504
505
506=== modified file 'cloudinit/util.py'
507--- cloudinit/util.py 2013-09-07 07:07:26 +0000
508+++ cloudinit/util.py 2013-09-10 23:15:34 +0000
509@@ -955,6 +955,13 @@
510 return hostname
511
512
513+def gethostbyaddr(ip):
514+ try:
515+ return socket.gethostbyaddr(ip)[0]
516+ except socket.herror:
517+ return None
518+
519+
520 def is_resolvable_url(url):
521 """determine if this url is resolvable (existing or ip)."""
522 return (is_resolvable(urlparse.urlparse(url).hostname))
523
524=== modified file 'doc/rtd/topics/datasources.rst'
525--- doc/rtd/topics/datasources.rst 2013-02-06 07:58:49 +0000
526+++ doc/rtd/topics/datasources.rst 2013-09-10 23:15:34 +0000
527@@ -141,6 +141,12 @@
528 .. include:: ../../sources/configdrive/README.rst
529
530 ---------------------------
531+OpenNebula
532+---------------------------
533+
534+.. include:: ../../sources/opennebula/README.rst
535+
536+---------------------------
537 Alt cloud
538 ---------------------------
539
540
541=== added directory 'doc/sources/opennebula'
542=== added file 'doc/sources/opennebula/README.rst'
543--- doc/sources/opennebula/README.rst 1970-01-01 00:00:00 +0000
544+++ doc/sources/opennebula/README.rst 2013-09-10 23:15:34 +0000
545@@ -0,0 +1,142 @@
546+The `OpenNebula`_ (ON) datasource supports the contextualization disk.
547+
548+ See `contextualization overview`_, `contextualizing VMs`_ and
549+ `network configuration`_ in the public documentation for
550+ more information.
551+
552+OpenNebula's virtual machines are contextualized (parametrized) by
553+a CD-ROM image, which contains a shell script *context.sh* with
554+custom variables defined at virtual machine startup. There are no
555+fixed contextualization variables, but the datasource accepts many
556+of those used and recommended across the documentation.
557+
558+Datasource configuration
559+~~~~~~~~~~~~~~~~~~~~~~~~~
560+
561+The datasource accepts the following configuration options.
562+
563+::
564+
565+ dsmode:
566+ values: local, net, disabled
567+ default: net
568+
569+Tells whether this datasource is processed in the 'local' (pre-networking)
570+or 'net' (post-networking) stage, or is completely 'disabled'.
571+
572+::
573+
574+ parseuser:
575+ default: nobody
576+
577+The unprivileged system user used to process the contextualization
578+script.
579+
580+Contextualization disk
581+~~~~~~~~~~~~~~~~~~~~~~
582+
583+The following criteria are required:
584+
585+1. Must be formatted with the `iso9660`_ filesystem
586+   or have a *filesystem* label of **CONTEXT** or **CDROM**
587+2. Must contain the file *context.sh* with contextualization variables.
588+   The file is generated by OpenNebula; it has a KEY='VALUE' format and
589+   can easily be sourced by bash
590+
591+Contextualization variables
592+~~~~~~~~~~~~~~~~~~~~~~~~~~~
593+
594+There is no fixed set of contextualization variables in OpenNebula, and
595+no standard. The following variables were found in various places and
596+revisions of the OpenNebula documentation. Where multiple similar
597+variables are specified, only the first one found is used.
598+
599+::
600+
601+ DSMODE
602+
603+Datasource mode configuration override. Values: local, net, disabled.
604+
605+::
606+
607+ DNS
608+ ETH<x>_IP
609+ ETH<x>_NETWORK
610+ ETH<x>_MASK
611+ ETH<x>_GATEWAY
612+ ETH<x>_DOMAIN
613+ ETH<x>_DNS
614+
615+Static `network configuration`_.
616+
617+::
618+
619+ HOSTNAME
620+
621+Instance hostname.
622+
623+::
624+
625+ PUBLIC_IP
626+ IP_PUBLIC
627+ ETH0_IP
628+
629+If no hostname has been specified, cloud-init will try to create a hostname
630+from the instance's IP address in 'local' dsmode. In 'net' dsmode, cloud-init
631+tries to resolve one of its IP addresses to get the hostname.
632+
633+::
634+
635+ SSH_KEY
636+ SSH_PUBLIC_KEY
637+
638+One or more SSH keys (separated by newlines) can be specified.
639+
640+::
641+
642+ USER_DATA
643+ USERDATA
644+
645+cloud-init user data.
646+
647+Example configuration
648+~~~~~~~~~~~~~~~~~~~~~
649+
650+This example cloud-init configuration (*cloud.cfg*) enables the
651+OpenNebula datasource in 'net' mode only.
652+
653+::
654+
655+ disable_ec2_metadata: True
656+ datasource_list: ['OpenNebula']
657+ datasource:
658+ OpenNebula:
659+ dsmode: net
660+ parseuser: nobody
661+
662+Example VM's context section
663+~~~~~~~~~~~~~~~~~~~~~~~~~~~~
664+
665+::
666+
667+ CONTEXT=[
668+ PUBLIC_IP="$NIC[IP]",
669+ SSH_KEY="$USER[SSH_KEY]
670+ $USER[SSH_KEY1]
671+ $USER[SSH_KEY2] ",
672+ USER_DATA="#cloud-config
673+ # see https://help.ubuntu.com/community/CloudInit
674+
675+ packages: []
676+
677+ mounts:
678+ - [vdc,none,swap,sw,0,0]
679+ runcmd:
680+ - echo 'Instance has been configured by cloud-init.' | wall
681+ " ]
682+
683+.. _OpenNebula: http://opennebula.org/
684+.. _contextualization overview: http://opennebula.org/documentation:documentation:context_overview
685+.. _contextualizing VMs: http://opennebula.org/documentation:documentation:cong
686+.. _network configuration: http://opennebula.org/documentation:documentation:cong#network_configuration
687+.. _iso9660: https://en.wikipedia.org/wiki/ISO_9660
688
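For illustration, OpenNebula would write the CONTEXT section above onto the contextualization CD-ROM as a *context.sh* roughly like the following (a hypothetical rendering with the $NIC/$USER template variables already substituted; real OpenNebula output may differ):

::

 # Context variables generated by OpenNebula
 PUBLIC_IP='10.0.0.3'
 SSH_KEY='ssh-rsa AAAA... user1
 ssh-rsa AAAA... user2 '
 USER_DATA='#cloud-config
 ...
 '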
689=== added file 'tests/unittests/test_datasource/test_opennebula.py'
690--- tests/unittests/test_datasource/test_opennebula.py 1970-01-01 00:00:00 +0000
691+++ tests/unittests/test_datasource/test_opennebula.py 2013-09-10 23:15:34 +0000
692@@ -0,0 +1,267 @@
693+from cloudinit.sources import DataSourceOpenNebula as ds
694+from cloudinit import helpers
695+from cloudinit import util
696+from mocker import MockerTestCase
697+from tests.unittests.helpers import populate_dir
698+
699+import os
700+import pwd
701+
702+TEST_VARS = {
703+ 'VAR1': 'single',
704+ 'VAR2': 'double word',
705+ 'VAR3': 'multi\nline\n',
706+ 'VAR4': "'single'",
707+ 'VAR5': "'double word'",
708+ 'VAR6': "'multi\nline\n'",
709+ 'VAR7': 'single\\t',
710+ 'VAR8': 'double\\tword',
711+ 'VAR9': 'multi\\t\nline\n',
712+ 'VAR10': '\\', # expect \
713+ 'VAR11': '\'', # expect '
714+ 'VAR12': '$', # expect $
715+}
716+
717+INVALID_CONTEXT = ';'
718+USER_DATA = '#cloud-config\napt_upgrade: true'
719+SSH_KEY = 'ssh-rsa AAAAB3NzaC1....sIkJhq8wdX+4I3A4cYbYP ubuntu@server-460-%i'
720+HOSTNAME = 'foo.example.com'
721+PUBLIC_IP = '10.0.0.3'
722+
723+CMD_IP_OUT = '''\
724+1: lo: <LOOPBACK,UP,LOWER_UP> mtu 16436 qdisc noqueue state UNKNOWN
725+ link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
726+2: eth0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc mq state UP qlen 1000
727+ link/ether 02:00:0a:12:01:01 brd ff:ff:ff:ff:ff:ff
728+'''
729+
730+
731+class TestOpenNebulaDataSource(MockerTestCase):
732+ parsed_user = None
733+
734+ def setUp(self):
735+ super(TestOpenNebulaDataSource, self).setUp()
736+ self.tmp = self.makeDir()
737+ self.paths = helpers.Paths({'cloud_dir': self.tmp})
738+
739+ # defaults for a few tests
740+ self.ds = ds.DataSourceOpenNebula
741+ self.seed_dir = os.path.join(self.paths.seed_dir, "opennebula")
742+ self.sys_cfg = {'datasource': {'OpenNebula': {'dsmode': 'local'}}}
743+
744+ # we don't want 'sudo' called in tests. so we patch switch_user_cmd
745+ def my_switch_user_cmd(user):
746+ self.parsed_user = user
747+ return []
748+
749+ self.switch_user_cmd_real = ds.switch_user_cmd
750+ ds.switch_user_cmd = my_switch_user_cmd
751+
752+ def tearDown(self):
753+ ds.switch_user_cmd = self.switch_user_cmd_real
754+ super(TestOpenNebulaDataSource, self).tearDown()
755+
756+ def test_get_data_non_contextdisk(self):
757+ orig_find_devs_with = util.find_devs_with
758+ try:
759+ # don't try to look up CDs
760+ util.find_devs_with = lambda n: []
761+ dsrc = self.ds(sys_cfg=self.sys_cfg, distro=None, paths=self.paths)
762+ ret = dsrc.get_data()
763+ self.assertFalse(ret)
764+ finally:
765+ util.find_devs_with = orig_find_devs_with
766+
767+ def test_get_data_broken_contextdisk(self):
768+ orig_find_devs_with = util.find_devs_with
769+ try:
770+ # don't try to look up CDs
771+ util.find_devs_with = lambda n: []
772+ populate_dir(self.seed_dir, {'context.sh': INVALID_CONTEXT})
773+ dsrc = self.ds(sys_cfg=self.sys_cfg, distro=None, paths=self.paths)
774+ self.assertRaises(ds.BrokenContextDiskDir, dsrc.get_data)
775+ finally:
776+ util.find_devs_with = orig_find_devs_with
777+
778+ def test_get_data_invalid_identity(self):
779+ orig_find_devs_with = util.find_devs_with
780+ try:
781+ # generate non-existing system user name
782+ sys_cfg = self.sys_cfg
783+ invalid_user = 'invalid'
784+ while not sys_cfg['datasource']['OpenNebula'].get('parseuser'):
785+ try:
786+ pwd.getpwnam(invalid_user)
787+ invalid_user += 'X'
788+ except KeyError:
789+ sys_cfg['datasource']['OpenNebula']['parseuser'] = \
790+ invalid_user
791+
792+ # don't try to look up CDs
793+ util.find_devs_with = lambda n: []
794+ populate_context_dir(self.seed_dir, {'KEY1': 'val1'})
795+ dsrc = self.ds(sys_cfg=sys_cfg, distro=None, paths=self.paths)
796+ self.assertRaises(ds.BrokenContextDiskDir, dsrc.get_data)
797+ finally:
798+ util.find_devs_with = orig_find_devs_with
799+
800+ def test_get_data(self):
801+ orig_find_devs_with = util.find_devs_with
802+ try:
803+ # don't try to look up CDs
804+ util.find_devs_with = lambda n: []
805+ populate_context_dir(self.seed_dir, {'KEY1': 'val1'})
806+ dsrc = self.ds(sys_cfg=self.sys_cfg, distro=None, paths=self.paths)
807+ ret = dsrc.get_data()
808+ self.assertTrue(ret)
809+ finally:
810+ util.find_devs_with = orig_find_devs_with
811+
812+ def test_seed_dir_non_contextdisk(self):
813+ self.assertRaises(ds.NonContextDiskDir, ds.read_context_disk_dir,
814+ self.seed_dir)
815+
816+ def test_seed_dir_empty1_context(self):
817+ populate_dir(self.seed_dir, {'context.sh': ''})
818+ results = ds.read_context_disk_dir(self.seed_dir)
819+
820+ self.assertEqual(results['userdata'], None)
821+ self.assertEqual(results['metadata'], {})
822+
823+ def test_seed_dir_empty2_context(self):
824+ populate_context_dir(self.seed_dir, {})
825+ results = ds.read_context_disk_dir(self.seed_dir)
826+
827+ self.assertEqual(results['userdata'], None)
828+ self.assertEqual(results['metadata'], {})
829+
830+ def test_seed_dir_broken_context(self):
831+ populate_dir(self.seed_dir, {'context.sh': INVALID_CONTEXT})
832+
833+ self.assertRaises(ds.BrokenContextDiskDir,
834+ ds.read_context_disk_dir,
835+ self.seed_dir)
836+
837+ def test_context_parser(self):
838+ populate_context_dir(self.seed_dir, TEST_VARS)
839+ results = ds.read_context_disk_dir(self.seed_dir)
840+
841+ self.assertTrue('metadata' in results)
842+ self.assertEqual(TEST_VARS, results['metadata'])
843+
844+ def test_ssh_key(self):
845+ public_keys = ['first key', 'second key']
846+ for c in range(4):
847+ for k in ('SSH_KEY', 'SSH_PUBLIC_KEY'):
848+ my_d = os.path.join(self.tmp, "%s-%i" % (k, c))
849+ populate_context_dir(my_d, {k: '\n'.join(public_keys)})
850+ results = ds.read_context_disk_dir(my_d)
851+
852+ self.assertTrue('metadata' in results)
853+ self.assertTrue('public-keys' in results['metadata'])
854+ self.assertEqual(public_keys,
855+ results['metadata']['public-keys'])
856+
857+ public_keys.append(SSH_KEY % (c + 1,))
858+
859+ def test_user_data(self):
860+ for k in ('USER_DATA', 'USERDATA'):
861+ my_d = os.path.join(self.tmp, k)
862+ populate_context_dir(my_d, {k: USER_DATA})
863+ results = ds.read_context_disk_dir(my_d)
864+
865+ self.assertTrue('userdata' in results)
866+ self.assertEqual(USER_DATA, results['userdata'])
867+
868+ def test_hostname(self):
869+ for k in ('HOSTNAME', 'PUBLIC_IP', 'IP_PUBLIC', 'ETH0_IP'):
870+ my_d = os.path.join(self.tmp, k)
871+ populate_context_dir(my_d, {k: PUBLIC_IP})
872+ results = ds.read_context_disk_dir(my_d)
873+
874+ self.assertTrue('metadata' in results)
875+ self.assertTrue('local-hostname' in results['metadata'])
876+ self.assertEqual(PUBLIC_IP, results['metadata']['local-hostname'])
877+
878+ def test_network_interfaces(self):
879+ populate_context_dir(self.seed_dir, {'ETH0_IP': '1.2.3.4'})
880+ results = ds.read_context_disk_dir(self.seed_dir)
881+
882+ self.assertTrue('network-interfaces' in results)
883+
884+ def test_find_candidates(self):
885+ def my_devs_with(criteria):
886+ return {
887+ "LABEL=CONTEXT": ["/dev/sdb"],
888+ "LABEL=CDROM": ["/dev/sr0"],
889+ "TYPE=iso9660": ["/dev/vdb"],
890+ }.get(criteria, [])
891+
892+ orig_find_devs_with = util.find_devs_with
893+ try:
894+ util.find_devs_with = my_devs_with
895+ self.assertEqual(["/dev/sdb", "/dev/sr0", "/dev/vdb"],
896+ ds.find_candidate_devs())
897+ finally:
898+ util.find_devs_with = orig_find_devs_with
899+
900+
901+class TestOpenNebulaNetwork(MockerTestCase):
902+
903+ def setUp(self):
904+ super(TestOpenNebulaNetwork, self).setUp()
905+
906+ def test_lo(self):
907+ net = ds.OpenNebulaNetwork('', {})
908+ self.assertEqual(net.gen_conf(), u'''\
909+auto lo
910+iface lo inet loopback
911+''')
912+
913+ def test_eth0(self):
914+ net = ds.OpenNebulaNetwork(CMD_IP_OUT, {})
915+ self.assertEqual(net.gen_conf(), u'''\
916+auto lo
917+iface lo inet loopback
918+
919+auto eth0
920+iface eth0 inet static
921+ address 10.18.1.1
922+ network 10.18.1.0
923+ netmask 255.255.255.0
924+''')
925+
926+ def test_eth0_override(self):
927+ context = {
928+ 'DNS': '1.2.3.8',
929+ 'ETH0_IP': '1.2.3.4',
930+ 'ETH0_NETWORK': '1.2.3.0',
931+ 'ETH0_MASK': '255.255.0.0',
932+ 'ETH0_GATEWAY': '1.2.3.5',
933+ 'ETH0_DOMAIN': 'example.com',
934+ 'ETH0_DNS': '1.2.3.6 1.2.3.7'
935+ }
936+
937+ net = ds.OpenNebulaNetwork(CMD_IP_OUT, context)
938+ self.assertEqual(net.gen_conf(), u'''\
939+auto lo
940+iface lo inet loopback
941+
942+auto eth0
943+iface eth0 inet static
944+ address 1.2.3.4
945+ network 1.2.3.0
946+ netmask 255.255.0.0
947+ gateway 1.2.3.5
948+ dns-search example.com
949+ dns-nameservers 1.2.3.8 1.2.3.6 1.2.3.7
950+''')
951+
952+
953+def populate_context_dir(path, variables):
954+ data = "# Context variables generated by OpenNebula\n"
955+ for (k, v) in variables.iteritems():
956+ data += ("%s='%s'\n" % (k.upper(), v.replace(r"'", r"'\''")))
957+ populate_dir(path, {'context.sh': data})
958+
959+# vi: ts=4 expandtab
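As a footnote to the tests: populate_context_dir wraps each value in single quotes and escapes embedded quotes with the shell idiom '\''. A minimal regex-based parser for that KEY='VALUE' format, similar in spirit to the direct RE parser this branch introduces (parse_context is a hypothetical sketch, not the function from DataSourceOpenNebula.py):

```python
import re

# Matches NAME='VALUE' assignments; the value may span multiple lines
# and may contain literal single quotes written with the shell
# idiom '\'' (close quote, escaped quote, reopen quote).
CONTEXT_VAR = re.compile(
    r"^(?P<key>[A-Za-z_]\w*)='(?P<val>(?:[^']|'\\'')*)'$",
    re.MULTILINE)


def parse_context(data):
    out = {}
    for m in CONTEXT_VAR.finditer(data):
        # Unescape '\'' back to a plain single quote.
        out[m.group('key')] = m.group('val').replace("'\\''", "'")
    return out


sample = ("# Context variables generated by OpenNebula\n"
          "HOSTNAME='foo.example.com'\n"
          "SSH_KEY='key one\nkey two'\n")
print(parse_context(sample)['HOSTNAME'])  # prints foo.example.com
```

Comment lines and anything not matching the pattern are simply ignored by this sketch; the real parser's error handling (e.g. raising BrokenContextDiskDir on invalid content such as the ';' used in the tests) is out of scope here.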