Merge lp:~harlowja/cloud-init/cloud-init-net-refactor into lp:~cloud-init-dev/cloud-init/trunk

Proposed by Joshua Harlow
Status: Merged
Merged at revision: 1232
Proposed branch: lp:~harlowja/cloud-init/cloud-init-net-refactor
Merge into: lp:~cloud-init-dev/cloud-init/trunk
Diff against target: 3349 lines (+1330/-1196)
32 files modified
cloudinit/cs_utils.py (+2/-1)
cloudinit/distros/debian.py (+6/-4)
cloudinit/net/__init__.py (+29/-654)
cloudinit/net/cmdline.py (+203/-0)
cloudinit/net/eni.py (+457/-0)
cloudinit/net/network_state.py (+121/-160)
cloudinit/serial.py (+50/-0)
cloudinit/sources/DataSourceAzure.py (+1/-1)
cloudinit/sources/DataSourceConfigDrive.py (+7/-145)
cloudinit/sources/DataSourceSmartOS.py (+1/-3)
cloudinit/sources/helpers/openstack.py (+145/-0)
cloudinit/stages.py (+2/-1)
cloudinit/util.py (+13/-3)
packages/bddeb (+1/-0)
requirements.txt (+6/-2)
setup.py (+0/-1)
test-requirements.txt (+1/-0)
tests/unittests/helpers.py (+9/-76)
tests/unittests/test__init__.py (+4/-13)
tests/unittests/test_cli.py (+1/-6)
tests/unittests/test_cs_util.py (+3/-24)
tests/unittests/test_datasource/test_azure.py (+3/-9)
tests/unittests/test_datasource/test_azure_helper.py (+1/-11)
tests/unittests/test_datasource/test_cloudsigma.py (+1/-1)
tests/unittests/test_datasource/test_cloudstack.py (+1/-10)
tests/unittests/test_datasource/test_configdrive.py (+143/-27)
tests/unittests/test_datasource/test_nocloud.py (+3/-12)
tests/unittests/test_datasource/test_smartos.py (+4/-2)
tests/unittests/test_net.py (+84/-11)
tests/unittests/test_reporting.py (+3/-1)
tests/unittests/test_rh_subscription.py (+19/-7)
tox.ini (+6/-11)
To merge this branch: bzr merge lp:~harlowja/cloud-init/cloud-init-net-refactor
Reviewer Review Type Date Requested Status
Scott Moser Approve
Review via email: mp+293957@code.launchpad.net

Commit message

Refactor a large part of the networking code.

Splits off distro-specific code into separate files so that
other kinds of networking configuration can be written by the
various distros that cloud-init supports.

It also isolates some of the cloudinit.net code so that it can
be more easily used on its own (and incorporated into other
projects such as curtin).

During this process it adds tests so that the net code can be
tested (to some level) and so that the format conversion processes
can be tested going forward.
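
A rough sketch of how the refactored pieces fit together, based on the
debian.py changes in the diff below (a hedged illustration rather than a
definitive API reference; the config dict and target path here are made up):

    from cloudinit import net
    from cloudinit.net import eni

    # A minimal version-1 network config, as a datasource might provide it.
    netconfig = {
        'version': 1,
        'config': [{'type': 'physical', 'name': 'eth0',
                    'subnets': [{'type': 'dhcp'}]}],
    }

    # Parse the config dict into a network state, then let the Debian/ENI
    # renderer write etc/network/interfaces (plus udev/systemd link files)
    # under the chosen target root, relying on the renderer's defaults for
    # the other paths.
    ns = net.parse_net_config_data(netconfig)
    renderer = eni.Renderer()
    renderer.render_network_state(target="/tmp/target", network_state=ns)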

1216. By Scott Moser

fix timestamp in reporting events.

If no timestamp was passed into a ReportingEvent, then the default was
used. That default was 'time.time()', which was evaluated only once,
at import time.
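
For context, this is the classic Python pitfall of a default argument being
evaluated when the function is defined; a minimal sketch of the bug and the
fix (hypothetical class, not the actual cloudinit.reporting code):

    import time

    class ReportingEvent(object):
        # Buggy form: 'timestamp=time.time()' in the signature runs once at
        # import, so every later event would share that stale timestamp.
        def __init__(self, name, timestamp=None):
            self.name = name
            # Fixed form: evaluate the default lazily, per instance.
            if timestamp is None:
                timestamp = time.time()
            self.timestamp = timestamp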

1217. By Joshua Harlow

Enable flake8 and fix a large number of reported issues

1218. By Matt Fischer

Document improvements for runcmd/bootcmd

Note that runcmd runs only on first boot.
Note that strings need to be quoted, not escaped.
Switch bootcmd list text to use - not * like everything else.

1219. By Joshua Harlow

Remerge against head/master

1220. By Joshua Harlow

Fix up tests and flake8 warnings

1221. By Joshua Harlow

Revert some of the alterations of the tox.ini file

1222. By Joshua Harlow

Remove 26 from default tox.ini listing

1223. By Joshua Harlow

Fix load -> read

Revision history for this message
Scott Moser (smoser) wrote :

Josh,

Thanks for your work and cleanup on this.
My concerns at the moment are:
a.) large churn on the code, and I need to get something into 16.04 to fix some bugs (bug 1577982, bug 1579130, bug 1577844), so I'd like to hold off on this until those are fixed.

b.) If we want 'net' to be standalone, then I'd prefer it not have 'from cloudinit import ...', as that indicates a reliance on cloud-init, or at least some required conversion before external use. You have experience with this through oslo, so I guess I'm fine if there is a sane path forward.

c.) Non-standard-library usage in 'net'. Even if this is just six, I will need to support curtin running on Ubuntu 12.04 (python-six at 1.1) for the next 12 months at least.

Some nitpicks inline below.

1224. By Joshua Harlow

Fix up some of the net usage, restore imports, and add a mini compat module

1225. By Joshua Harlow

Rebase against master

1226. By Joshua Harlow

Get cmdline working again

1227. By Joshua Harlow

For now just remove compat.py module

Let's reduce the size of this change for now.

1228. By Joshua Harlow

Less tweaking of tox.ini

1229. By Joshua Harlow

Less less tweaking of tox.ini

Revision history for this message
Scott Moser (smoser) wrote :

Hi. Grab http://paste.ubuntu.com/17185178/ and please remove 'skip_first_boot', then go ahead and merge into trunk.

Revision history for this message
Scott Moser (smoser) :
review: Approve
1230. By Joshua Harlow

Add unittest2 to builder list

1231. By Joshua Harlow

Just do all the imports on one line

1232. By Joshua Harlow

Just mock 'on_first_boot' vs special argument
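
A self-contained sketch of that technique (toy class and names, not the
actual ConfigDrive test code): patch the first-boot hook with
mock.patch.object instead of threading a skip_first_boot argument through
the test helpers.

    from unittest import mock  # on Python 2, the external 'mock' package

    class FakeDataSource(object):
        """Stand-in for a datasource whose get_data() does first-boot work."""

        def on_first_boot(self, data):
            raise AssertionError("should not run for real in this test")

        def get_data(self):
            self.on_first_boot({})
            return True

    # Patch the hook on the instance rather than passing a special argument.
    ds = FakeDataSource()
    with mock.patch.object(ds, 'on_first_boot') as m_first_boot:
        assert ds.get_data()
        m_first_boot.assert_called_once_with({})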

Preview Diff

=== modified file 'cloudinit/cs_utils.py'
--- cloudinit/cs_utils.py 2015-07-16 09:12:24 +0000
+++ cloudinit/cs_utils.py 2016-06-10 21:20:56 +0000
@@ -33,7 +33,8 @@
 import json
 import platform
 
-import serial
+from cloudinit import serial
+
 
 # these high timeouts are necessary as read may read a lot of data.
 READ_TIMEOUT = 60
=== modified file 'cloudinit/distros/debian.py'
--- cloudinit/distros/debian.py 2016-05-12 17:56:26 +0000
+++ cloudinit/distros/debian.py 2016-06-10 21:20:56 +0000
@@ -26,6 +26,7 @@
 from cloudinit import helpers
 from cloudinit import log as logging
 from cloudinit import net
+from cloudinit.net import eni
 from cloudinit import util
 
 from cloudinit.distros.parsers.hostname import HostnameConf
@@ -56,6 +57,7 @@
         # should only happen say once per instance...)
         self._runner = helpers.Runners(paths)
         self.osfamily = 'debian'
+        self._net_renderer = eni.Renderer()
 
     def apply_locale(self, locale, out_fn=None):
         if not out_fn:
@@ -80,10 +82,10 @@
 
     def _write_network_config(self, netconfig):
         ns = net.parse_net_config_data(netconfig)
-        net.render_network_state(target="/", network_state=ns,
-                                 eni=self.network_conf_fn,
-                                 links_prefix=self.links_prefix,
-                                 netrules=None)
+        self._net_renderer.render_network_state(
+            target="/", network_state=ns,
+            eni=self.network_conf_fn, links_prefix=self.links_prefix,
+            netrules=None)
         _maybe_remove_legacy_eth0()
 
         return []
=== modified file 'cloudinit/net/__init__.py'
--- cloudinit/net/__init__.py 2016-06-03 18:58:51 +0000
+++ cloudinit/net/__init__.py 2016-06-10 21:20:56 +0000
@@ -16,43 +16,18 @@
16# You should have received a copy of the GNU Affero General Public License16# You should have received a copy of the GNU Affero General Public License
17# along with Curtin. If not, see <http://www.gnu.org/licenses/>.17# along with Curtin. If not, see <http://www.gnu.org/licenses/>.
1818
19import base64
20import errno19import errno
21import glob20import logging
22import gzip
23import io
24import os21import os
25import re22import re
26import shlex
2723
28from cloudinit import log as logging
29from cloudinit.net import network_state
30from cloudinit.net.udev import generate_udev_rule
31from cloudinit import util24from cloudinit import util
3225
33LOG = logging.getLogger(__name__)26LOG = logging.getLogger(__name__)
34
35SYS_CLASS_NET = "/sys/class/net/"27SYS_CLASS_NET = "/sys/class/net/"
28DEFAULT_PRIMARY_INTERFACE = 'eth0'
36LINKS_FNAME_PREFIX = "etc/systemd/network/50-cloud-init-"29LINKS_FNAME_PREFIX = "etc/systemd/network/50-cloud-init-"
3730
38NET_CONFIG_OPTIONS = [
39 "address", "netmask", "broadcast", "network", "metric", "gateway",
40 "pointtopoint", "media", "mtu", "hostname", "leasehours", "leasetime",
41 "vendor", "client", "bootfile", "server", "hwaddr", "provider", "frame",
42 "netnum", "endpoint", "local", "ttl",
43]
44
45NET_CONFIG_COMMANDS = [
46 "pre-up", "up", "post-up", "down", "pre-down", "post-down",
47]
48
49NET_CONFIG_BRIDGE_OPTIONS = [
50 "bridge_ageing", "bridge_bridgeprio", "bridge_fd", "bridge_gcinit",
51 "bridge_hello", "bridge_maxage", "bridge_maxwait", "bridge_stp",
52]
53
54DEFAULT_PRIMARY_INTERFACE = 'eth0'
55
5631
57def sys_dev_path(devname, path=""):32def sys_dev_path(devname, path=""):
58 return SYS_CLASS_NET + devname + "/" + path33 return SYS_CLASS_NET + devname + "/" + path
@@ -60,23 +35,22 @@
6035
61def read_sys_net(devname, path, translate=None, enoent=None, keyerror=None):36def read_sys_net(devname, path, translate=None, enoent=None, keyerror=None):
62 try:37 try:
63 contents = ""38 contents = util.load_file(sys_dev_path(devname, path))
64 with open(sys_dev_path(devname, path), "r") as fp:39 except (OSError, IOError) as e:
65 contents = fp.read().strip()40 if getattr(e, 'errno', None) == errno.ENOENT:
66 if translate is None:41 if enoent is not None:
67 return contents42 return enoent
6843 raise
69 try:44 contents = contents.strip()
70 return translate.get(contents)45 if translate is None:
71 except KeyError:46 return contents
72 LOG.debug("found unexpected value '%s' in '%s/%s'", contents,47 try:
73 devname, path)48 return translate.get(contents)
74 if keyerror is not None:49 except KeyError:
75 return keyerror50 LOG.debug("found unexpected value '%s' in '%s/%s'", contents,
76 raise51 devname, path)
77 except OSError as e:52 if keyerror is not None:
78 if e.errno == errno.ENOENT and enoent is not None:53 return keyerror
79 return enoent
80 raise54 raise
8155
8256
@@ -127,509 +101,7 @@
127101
128102
129class ParserError(Exception):103class ParserError(Exception):
130 """Raised when parser has issue parsing the interfaces file."""104 """Raised when a parser has issue parsing a file/content."""
131
132
133def parse_deb_config_data(ifaces, contents, src_dir, src_path):
134 """Parses the file contents, placing result into ifaces.
135
136 '_source_path' is added to every dictionary entry to define which file
137 the configration information came from.
138
139 :param ifaces: interface dictionary
140 :param contents: contents of interfaces file
141 :param src_dir: directory interfaces file was located
142 :param src_path: file path the `contents` was read
143 """
144 currif = None
145 for line in contents.splitlines():
146 line = line.strip()
147 if line.startswith('#'):
148 continue
149 split = line.split(' ')
150 option = split[0]
151 if option == "source-directory":
152 parsed_src_dir = split[1]
153 if not parsed_src_dir.startswith("/"):
154 parsed_src_dir = os.path.join(src_dir, parsed_src_dir)
155 for expanded_path in glob.glob(parsed_src_dir):
156 dir_contents = os.listdir(expanded_path)
157 dir_contents = [
158 os.path.join(expanded_path, path)
159 for path in dir_contents
160 if (os.path.isfile(os.path.join(expanded_path, path)) and
161 re.match("^[a-zA-Z0-9_-]+$", path) is not None)
162 ]
163 for entry in dir_contents:
164 with open(entry, "r") as fp:
165 src_data = fp.read().strip()
166 abs_entry = os.path.abspath(entry)
167 parse_deb_config_data(
168 ifaces, src_data,
169 os.path.dirname(abs_entry), abs_entry)
170 elif option == "source":
171 new_src_path = split[1]
172 if not new_src_path.startswith("/"):
173 new_src_path = os.path.join(src_dir, new_src_path)
174 for expanded_path in glob.glob(new_src_path):
175 with open(expanded_path, "r") as fp:
176 src_data = fp.read().strip()
177 abs_path = os.path.abspath(expanded_path)
178 parse_deb_config_data(
179 ifaces, src_data,
180 os.path.dirname(abs_path), abs_path)
181 elif option == "auto":
182 for iface in split[1:]:
183 if iface not in ifaces:
184 ifaces[iface] = {
185 # Include the source path this interface was found in.
186 "_source_path": src_path
187 }
188 ifaces[iface]['auto'] = True
189 elif option == "iface":
190 iface, family, method = split[1:4]
191 if iface not in ifaces:
192 ifaces[iface] = {
193 # Include the source path this interface was found in.
194 "_source_path": src_path
195 }
196 elif 'family' in ifaces[iface]:
197 raise ParserError(
198 "Interface %s can only be defined once. "
199 "Re-defined in '%s'." % (iface, src_path))
200 ifaces[iface]['family'] = family
201 ifaces[iface]['method'] = method
202 currif = iface
203 elif option == "hwaddress":
204 if split[1] == "ether":
205 val = split[2]
206 else:
207 val = split[1]
208 ifaces[currif]['hwaddress'] = val
209 elif option in NET_CONFIG_OPTIONS:
210 ifaces[currif][option] = split[1]
211 elif option in NET_CONFIG_COMMANDS:
212 if option not in ifaces[currif]:
213 ifaces[currif][option] = []
214 ifaces[currif][option].append(' '.join(split[1:]))
215 elif option.startswith('dns-'):
216 if 'dns' not in ifaces[currif]:
217 ifaces[currif]['dns'] = {}
218 if option == 'dns-search':
219 ifaces[currif]['dns']['search'] = []
220 for domain in split[1:]:
221 ifaces[currif]['dns']['search'].append(domain)
222 elif option == 'dns-nameservers':
223 ifaces[currif]['dns']['nameservers'] = []
224 for server in split[1:]:
225 ifaces[currif]['dns']['nameservers'].append(server)
226 elif option.startswith('bridge_'):
227 if 'bridge' not in ifaces[currif]:
228 ifaces[currif]['bridge'] = {}
229 if option in NET_CONFIG_BRIDGE_OPTIONS:
230 bridge_option = option.replace('bridge_', '', 1)
231 ifaces[currif]['bridge'][bridge_option] = split[1]
232 elif option == "bridge_ports":
233 ifaces[currif]['bridge']['ports'] = []
234 for iface in split[1:]:
235 ifaces[currif]['bridge']['ports'].append(iface)
236 elif option == "bridge_hw" and split[1].lower() == "mac":
237 ifaces[currif]['bridge']['mac'] = split[2]
238 elif option == "bridge_pathcost":
239 if 'pathcost' not in ifaces[currif]['bridge']:
240 ifaces[currif]['bridge']['pathcost'] = {}
241 ifaces[currif]['bridge']['pathcost'][split[1]] = split[2]
242 elif option == "bridge_portprio":
243 if 'portprio' not in ifaces[currif]['bridge']:
244 ifaces[currif]['bridge']['portprio'] = {}
245 ifaces[currif]['bridge']['portprio'][split[1]] = split[2]
246 elif option.startswith('bond-'):
247 if 'bond' not in ifaces[currif]:
248 ifaces[currif]['bond'] = {}
249 bond_option = option.replace('bond-', '', 1)
250 ifaces[currif]['bond'][bond_option] = split[1]
251 for iface in ifaces.keys():
252 if 'auto' not in ifaces[iface]:
253 ifaces[iface]['auto'] = False
254
255
256def parse_deb_config(path):
257 """Parses a debian network configuration file."""
258 ifaces = {}
259 with open(path, "r") as fp:
260 contents = fp.read().strip()
261 abs_path = os.path.abspath(path)
262 parse_deb_config_data(
263 ifaces, contents,
264 os.path.dirname(abs_path), abs_path)
265 return ifaces
266
267
268def parse_net_config_data(net_config):
269 """Parses the config, returns NetworkState dictionary
270
271 :param net_config: curtin network config dict
272 """
273 state = None
274 if 'version' in net_config and 'config' in net_config:
275 ns = network_state.NetworkState(version=net_config.get('version'),
276 config=net_config.get('config'))
277 ns.parse_config()
278 state = ns.network_state
279
280 return state
281
282
283def parse_net_config(path):
284 """Parses a curtin network configuration file and
285 return network state"""
286 ns = None
287 net_config = util.read_conf(path)
288 if 'network' in net_config:
289 ns = parse_net_config_data(net_config.get('network'))
290
291 return ns
292
293
294def _load_shell_content(content, add_empty=False, empty_val=None):
295 """Given shell like syntax (key=value\nkey2=value2\n) in content
296 return the data in dictionary form. If 'add_empty' is True
297 then add entries in to the returned dictionary for 'VAR='
298 variables. Set their value to empty_val."""
299 data = {}
300 for line in shlex.split(content):
301 key, value = line.split("=", 1)
302 if not value:
303 value = empty_val
304 if add_empty or value:
305 data[key] = value
306
307 return data
308
309
310def _klibc_to_config_entry(content, mac_addrs=None):
311 """Convert a klibc writtent shell content file to a 'config' entry
312 When ip= is seen on the kernel command line in debian initramfs
313 and networking is brought up, ipconfig will populate
314 /run/net-<name>.cfg.
315
316 The files are shell style syntax, and examples are in the tests
317 provided here. There is no good documentation on this unfortunately.
318
319 DEVICE=<name> is expected/required and PROTO should indicate if
320 this is 'static' or 'dhcp'.
321 """
322
323 if mac_addrs is None:
324 mac_addrs = {}
325
326 data = _load_shell_content(content)
327 try:
328 name = data['DEVICE']
329 except KeyError:
330 raise ValueError("no 'DEVICE' entry in data")
331
332 # ipconfig on precise does not write PROTO
333 proto = data.get('PROTO')
334 if not proto:
335 if data.get('filename'):
336 proto = 'dhcp'
337 else:
338 proto = 'static'
339
340 if proto not in ('static', 'dhcp'):
341 raise ValueError("Unexpected value for PROTO: %s" % proto)
342
343 iface = {
344 'type': 'physical',
345 'name': name,
346 'subnets': [],
347 }
348
349 if name in mac_addrs:
350 iface['mac_address'] = mac_addrs[name]
351
352 # originally believed there might be IPV6* values
353 for v, pre in (('ipv4', 'IPV4'),):
354 # if no IPV4ADDR or IPV6ADDR, then go on.
355 if pre + "ADDR" not in data:
356 continue
357 subnet = {'type': proto, 'control': 'manual'}
358
359 # these fields go right on the subnet
360 for key in ('NETMASK', 'BROADCAST', 'GATEWAY'):
361 if pre + key in data:
362 subnet[key.lower()] = data[pre + key]
363
364 dns = []
365 # handle IPV4DNS0 or IPV6DNS0
366 for nskey in ('DNS0', 'DNS1'):
367 ns = data.get(pre + nskey)
368 # verify it has something other than 0.0.0.0 (or ipv6)
369 if ns and len(ns.strip(":.0")):
370 dns.append(data[pre + nskey])
371 if dns:
372 subnet['dns_nameservers'] = dns
373 # add search to both ipv4 and ipv6, as it has no namespace
374 search = data.get('DOMAINSEARCH')
375 if search:
376 if ',' in search:
377 subnet['dns_search'] = search.split(",")
378 else:
379 subnet['dns_search'] = search.split()
380
381 iface['subnets'].append(subnet)
382
383 return name, iface
384
385
386def config_from_klibc_net_cfg(files=None, mac_addrs=None):
387 if files is None:
388 files = glob.glob('/run/net*.conf')
389
390 entries = []
391 names = {}
392 for cfg_file in files:
393 name, entry = _klibc_to_config_entry(util.load_file(cfg_file),
394 mac_addrs=mac_addrs)
395 if name in names:
396 raise ValueError(
397 "device '%s' defined multiple times: %s and %s" % (
398 name, names[name], cfg_file))
399
400 names[name] = cfg_file
401 entries.append(entry)
402 return {'config': entries, 'version': 1}
403
404
405def render_persistent_net(network_state):
406 '''Given state, emit udev rules to map mac to ifname.'''
407 content = ""
408 interfaces = network_state.get('interfaces')
409 for iface in interfaces.values():
410 # for physical interfaces write out a persist net udev rule
411 if iface['type'] == 'physical' and \
412 'name' in iface and iface.get('mac_address'):
413 content += generate_udev_rule(iface['name'],
414 iface['mac_address'])
415
416 return content
417
418
419# TODO: switch valid_map based on mode inet/inet6
420def iface_add_subnet(iface, subnet):
421 content = ""
422 valid_map = [
423 'address',
424 'netmask',
425 'broadcast',
426 'metric',
427 'gateway',
428 'pointopoint',
429 'mtu',
430 'scope',
431 'dns_search',
432 'dns_nameservers',
433 ]
434 for key, value in subnet.items():
435 if value and key in valid_map:
436 if type(value) == list:
437 value = " ".join(value)
438 if '_' in key:
439 key = key.replace('_', '-')
440 content += " {} {}\n".format(key, value)
441
442 return content
443
444
445# TODO: switch to valid_map for attrs
446def iface_add_attrs(iface):
447 content = ""
448 ignore_map = [
449 'control',
450 'index',
451 'inet',
452 'mode',
453 'name',
454 'subnets',
455 'type',
456 ]
457 if iface['type'] not in ['bond', 'bridge', 'vlan']:
458 ignore_map.append('mac_address')
459
460 for key, value in iface.items():
461 if value and key not in ignore_map:
462 if type(value) == list:
463 value = " ".join(value)
464 content += " {} {}\n".format(key, value)
465
466 return content
467
468
469def render_route(route, indent=""):
470 """When rendering routes for an iface, in some cases applying a route
471 may result in the route command returning non-zero which produces
472 some confusing output for users manually using ifup/ifdown[1]. To
473 that end, we will optionally include an '|| true' postfix to each
474 route line allowing users to work with ifup/ifdown without using
475 --force option.
476
477 We may at somepoint not want to emit this additional postfix, and
478 add a 'strict' flag to this function. When called with strict=True,
479 then we will not append the postfix.
480
481 1. http://askubuntu.com/questions/168033/
482 how-to-set-static-routes-in-ubuntu-server
483 """
484 content = ""
485 up = indent + "post-up route add"
486 down = indent + "pre-down route del"
487 eol = " || true\n"
488 mapping = {
489 'network': '-net',
490 'netmask': 'netmask',
491 'gateway': 'gw',
492 'metric': 'metric',
493 }
494 if route['network'] == '0.0.0.0' and route['netmask'] == '0.0.0.0':
495 default_gw = " default gw %s" % route['gateway']
496 content += up + default_gw + eol
497 content += down + default_gw + eol
498 elif route['network'] == '::' and route['netmask'] == 0:
499 # ipv6!
500 default_gw = " -A inet6 default gw %s" % route['gateway']
501 content += up + default_gw + eol
502 content += down + default_gw + eol
503 else:
504 route_line = ""
505 for k in ['network', 'netmask', 'gateway', 'metric']:
506 if k in route:
507 route_line += " %s %s" % (mapping[k], route[k])
508 content += up + route_line + eol
509 content += down + route_line + eol
510
511 return content
512
513
514def iface_start_entry(iface, index):
515 fullname = iface['name']
516 if index != 0:
517 fullname += ":%s" % index
518
519 control = iface['control']
520 if control == "auto":
521 cverb = "auto"
522 elif control in ("hotplug",):
523 cverb = "allow-" + control
524 else:
525 cverb = "# control-" + control
526
527 subst = iface.copy()
528 subst.update({'fullname': fullname, 'cverb': cverb})
529
530 return ("{cverb} {fullname}\n"
531 "iface {fullname} {inet} {mode}\n").format(**subst)
532
533
534def render_interfaces(network_state):
535 '''Given state, emit etc/network/interfaces content.'''
536
537 content = ""
538 interfaces = network_state.get('interfaces')
539 ''' Apply a sort order to ensure that we write out
540 the physical interfaces first; this is critical for
541 bonding
542 '''
543 order = {
544 'physical': 0,
545 'bond': 1,
546 'bridge': 2,
547 'vlan': 3,
548 }
549 content += "auto lo\niface lo inet loopback\n"
550 for dnskey, value in network_state.get('dns', {}).items():
551 if len(value):
552 content += " dns-{} {}\n".format(dnskey, " ".join(value))
553
554 for iface in sorted(interfaces.values(),
555 key=lambda k: (order[k['type']], k['name'])):
556
557 if content[-2:] != "\n\n":
558 content += "\n"
559 subnets = iface.get('subnets', {})
560 if subnets:
561 for index, subnet in zip(range(0, len(subnets)), subnets):
562 if content[-2:] != "\n\n":
563 content += "\n"
564 iface['index'] = index
565 iface['mode'] = subnet['type']
566 iface['control'] = subnet.get('control', 'auto')
567 if iface['mode'].endswith('6'):
568 iface['inet'] += '6'
569 elif iface['mode'] == 'static' and ":" in subnet['address']:
570 iface['inet'] += '6'
571 if iface['mode'].startswith('dhcp'):
572 iface['mode'] = 'dhcp'
573
574 content += iface_start_entry(iface, index)
575 content += iface_add_subnet(iface, subnet)
576 content += iface_add_attrs(iface)
577 for route in subnet.get('routes', []):
578 content += render_route(route, indent=" ")
579 else:
580 # ifenslave docs say to auto the slave devices
581 if 'bond-master' in iface:
582 content += "auto {name}\n".format(**iface)
583 content += "iface {name} {inet} {mode}\n".format(**iface)
584 content += iface_add_attrs(iface)
585
586 for route in network_state.get('routes'):
587 content += render_route(route)
588
589 # global replacements until v2 format
590 content = content.replace('mac_address', 'hwaddress')
591 return content
592
593
594def render_network_state(target, network_state, eni="etc/network/interfaces",
595 links_prefix=LINKS_FNAME_PREFIX,
596 netrules='etc/udev/rules.d/70-persistent-net.rules'):
597
598 fpeni = os.path.sep.join((target, eni,))
599 util.ensure_dir(os.path.dirname(fpeni))
600 with open(fpeni, 'w+') as f:
601 f.write(render_interfaces(network_state))
602
603 if netrules:
604 netrules = os.path.sep.join((target, netrules,))
605 util.ensure_dir(os.path.dirname(netrules))
606 with open(netrules, 'w+') as f:
607 f.write(render_persistent_net(network_state))
608
609 if links_prefix:
610 render_systemd_links(target, network_state, links_prefix)
611
612
613def render_systemd_links(target, network_state,
614 links_prefix=LINKS_FNAME_PREFIX):
615 fp_prefix = os.path.sep.join((target, links_prefix))
616 for f in glob.glob(fp_prefix + "*"):
617 os.unlink(f)
618
619 interfaces = network_state.get('interfaces')
620 for iface in interfaces.values():
621 if (iface['type'] == 'physical' and 'name' in iface and
622 iface.get('mac_address')):
623 fname = fp_prefix + iface['name'] + ".link"
624 with open(fname, "w") as fp:
625 fp.write("\n".join([
626 "[Match]",
627 "MACAddress=" + iface['mac_address'],
628 "",
629 "[Link]",
630 "Name=" + iface['name'],
631 ""
632 ]))
633105
634106
635def is_disabled_cfg(cfg):107def is_disabled_cfg(cfg):
@@ -642,7 +114,6 @@
642 if not os.path.exists(os.path.join(SYS_CLASS_NET, name)):114 if not os.path.exists(os.path.join(SYS_CLASS_NET, name)):
643 raise OSError("%s: interface does not exist in %s" %115 raise OSError("%s: interface does not exist in %s" %
644 (name, SYS_CLASS_NET))116 (name, SYS_CLASS_NET))
645
646 fname = os.path.join(SYS_CLASS_NET, name, field)117 fname = os.path.join(SYS_CLASS_NET, name, field)
647 if not os.path.exists(fname):118 if not os.path.exists(fname):
648 raise OSError("%s: could not find sysfs entry: %s" % (name, fname))119 raise OSError("%s: could not find sysfs entry: %s" % (name, fname))
@@ -722,108 +193,6 @@
722 return nconf193 return nconf
723194
724195
725def _decomp_gzip(blob, strict=True):
726 # decompress blob. raise exception if not compressed unless strict=False.
727 with io.BytesIO(blob) as iobuf:
728 gzfp = None
729 try:
730 gzfp = gzip.GzipFile(mode="rb", fileobj=iobuf)
731 return gzfp.read()
732 except IOError:
733 if strict:
734 raise
735 return blob
736 finally:
737 if gzfp:
738 gzfp.close()
739
740
741def _b64dgz(b64str, gzipped="try"):
742 # decode a base64 string. If gzipped is true, transparently uncompresss
743 # if gzipped is 'try', then try gunzip, returning the original on fail.
744 try:
745 blob = base64.b64decode(b64str)
746 except TypeError:
747 raise ValueError("Invalid base64 text: %s" % b64str)
748
749 if not gzipped:
750 return blob
751
752 return _decomp_gzip(blob, strict=gzipped != "try")
753
754
755def read_kernel_cmdline_config(files=None, mac_addrs=None, cmdline=None):
756 if cmdline is None:
757 cmdline = util.get_cmdline()
758
759 if 'network-config=' in cmdline:
760 data64 = None
761 for tok in cmdline.split():
762 if tok.startswith("network-config="):
763 data64 = tok.split("=", 1)[1]
764 if data64:
765 return util.load_yaml(_b64dgz(data64))
766
767 if 'ip=' not in cmdline:
768 return None
769
770 if mac_addrs is None:
771 mac_addrs = {k: sys_netdev_info(k, 'address')
772 for k in get_devicelist()}
773
774 return config_from_klibc_net_cfg(files=files, mac_addrs=mac_addrs)
775
776
777def convert_eni_data(eni_data):
778 # return a network config representation of what is in eni_data
779 ifaces = {}
780 parse_deb_config_data(ifaces, eni_data, src_dir=None, src_path=None)
781 return _ifaces_to_net_config_data(ifaces)
782
783
784def _ifaces_to_net_config_data(ifaces):
785 """Return network config that represents the ifaces data provided.
786 ifaces = parse_deb_config("/etc/network/interfaces")
787 config = ifaces_to_net_config_data(ifaces)
788 state = parse_net_config_data(config)."""
789 devs = {}
790 for name, data in ifaces.items():
791 # devname is 'eth0' for name='eth0:1'
792 devname = name.partition(":")[0]
793 if devname == "lo":
794 # currently provding 'lo' in network config results in duplicate
795 # entries. in rendered interfaces file. so skip it.
796 continue
797 if devname not in devs:
798 devs[devname] = {'type': 'physical', 'name': devname,
799 'subnets': []}
800 # this isnt strictly correct, but some might specify
801 # hwaddress on a nic for matching / declaring name.
802 if 'hwaddress' in data:
803 devs[devname]['mac_address'] = data['hwaddress']
804 subnet = {'_orig_eni_name': name, 'type': data['method']}
805 if data.get('auto'):
806 subnet['control'] = 'auto'
807 else:
808 subnet['control'] = 'manual'
809
810 if data.get('method') == 'static':
811 subnet['address'] = data['address']
812
813 for copy_key in ('netmask', 'gateway', 'broadcast'):
814 if copy_key in data:
815 subnet[copy_key] = data[copy_key]
816
817 if 'dns' in data:
818 for n in ('nameservers', 'search'):
819 if n in data['dns'] and data['dns'][n]:
820 subnet['dns_' + n] = data['dns'][n]
821 devs[devname]['subnets'].append(subnet)
822
823 return {'version': 1,
824 'config': [devs[d] for d in sorted(devs)]}
825
826
827def apply_network_config_names(netcfg, strict_present=True, strict_busy=True):196def apply_network_config_names(netcfg, strict_present=True, strict_busy=True):
828 """read the network config and rename devices accordingly.197 """read the network config and rename devices accordingly.
829 if strict_present is false, then do not raise exception if no devices198 if strict_present is false, then do not raise exception if no devices
@@ -839,7 +208,7 @@
839 continue208 continue
840 renames.append([mac, name])209 renames.append([mac, name])
841210
842 return rename_interfaces(renames)211 return _rename_interfaces(renames)
843212
844213
845def _get_current_rename_info(check_downable=True):214def _get_current_rename_info(check_downable=True):
@@ -867,8 +236,8 @@
867 return bymac236 return bymac
868237
869238
870def rename_interfaces(renames, strict_present=True, strict_busy=True,239def _rename_interfaces(renames, strict_present=True, strict_busy=True,
871 current_info=None):240 current_info=None):
872 if current_info is None:241 if current_info is None:
873 current_info = _get_current_rename_info()242 current_info = _get_current_rename_info()
874243
@@ -979,7 +348,13 @@
979def get_interfaces_by_mac(devs=None):348def get_interfaces_by_mac(devs=None):
980 """Build a dictionary of tuples {mac: name}"""349 """Build a dictionary of tuples {mac: name}"""
981 if devs is None:350 if devs is None:
982 devs = get_devicelist()351 try:
352 devs = get_devicelist()
353 except OSError as e:
354 if e.errno == errno.ENOENT:
355 devs = []
356 else:
357 raise
983 ret = {}358 ret = {}
984 for name in devs:359 for name in devs:
985 mac = get_interface_mac(name)360 mac = get_interface_mac(name)
986361
=== added file 'cloudinit/net/cmdline.py'
--- cloudinit/net/cmdline.py 1970-01-01 00:00:00 +0000
+++ cloudinit/net/cmdline.py 2016-06-10 21:20:56 +0000
@@ -0,0 +1,203 @@
1# Copyright (C) 2013-2014 Canonical Ltd.
2#
3# Author: Scott Moser <scott.moser@canonical.com>
4# Author: Blake Rouse <blake.rouse@canonical.com>
5#
6# Curtin is free software: you can redistribute it and/or modify it under
7# the terms of the GNU Affero General Public License as published by the
8# Free Software Foundation, either version 3 of the License, or (at your
9# option) any later version.
10#
11# Curtin is distributed in the hope that it will be useful, but WITHOUT ANY
12# WARRANTY; without even the implied warranty of MERCHANTABILITY or FITNESS
13# FOR A PARTICULAR PURPOSE. See the GNU Affero General Public License for
14# more details.
15#
16# You should have received a copy of the GNU Affero General Public License
17# along with Curtin. If not, see <http://www.gnu.org/licenses/>.
18
19import base64
20import glob
21import gzip
22import io
23import shlex
24import sys
25
26import six
27
28from . import get_devicelist
29from . import sys_netdev_info
30
31from cloudinit import util
32
33PY26 = sys.version_info[0:2] == (2, 6)
34
35
36def _shlex_split(blob):
37 if PY26 and isinstance(blob, six.text_type):
38 # Older versions don't support unicode input
39 blob = blob.encode("utf8")
40 return shlex.split(blob)
41
42
43def _load_shell_content(content, add_empty=False, empty_val=None):
44 """Given shell like syntax (key=value\nkey2=value2\n) in content
45 return the data in dictionary form. If 'add_empty' is True
46 then add entries in to the returned dictionary for 'VAR='
47 variables. Set their value to empty_val."""
48 data = {}
49 for line in _shlex_split(content):
50 key, value = line.split("=", 1)
51 if not value:
52 value = empty_val
53 if add_empty or value:
54 data[key] = value
55
56 return data
57
58
59def _klibc_to_config_entry(content, mac_addrs=None):
60 """Convert a klibc writtent shell content file to a 'config' entry
61 When ip= is seen on the kernel command line in debian initramfs
62 and networking is brought up, ipconfig will populate
63 /run/net-<name>.cfg.
64
65 The files are shell style syntax, and examples are in the tests
66 provided here. There is no good documentation on this unfortunately.
67
68 DEVICE=<name> is expected/required and PROTO should indicate if
69 this is 'static' or 'dhcp'.
70 """
71
72 if mac_addrs is None:
73 mac_addrs = {}
74
75 data = _load_shell_content(content)
76 try:
77 name = data['DEVICE']
78 except KeyError:
79 raise ValueError("no 'DEVICE' entry in data")
80
81 # ipconfig on precise does not write PROTO
82 proto = data.get('PROTO')
83 if not proto:
84 if data.get('filename'):
85 proto = 'dhcp'
86 else:
87 proto = 'static'
88
89 if proto not in ('static', 'dhcp'):
90 raise ValueError("Unexpected value for PROTO: %s" % proto)
91
92 iface = {
93 'type': 'physical',
94 'name': name,
95 'subnets': [],
96 }
97
98 if name in mac_addrs:
99 iface['mac_address'] = mac_addrs[name]
100
101 # originally believed there might be IPV6* values
102 for v, pre in (('ipv4', 'IPV4'),):
103 # if no IPV4ADDR or IPV6ADDR, then go on.
104 if pre + "ADDR" not in data:
105 continue
106 subnet = {'type': proto, 'control': 'manual'}
107
108 # these fields go right on the subnet
109 for key in ('NETMASK', 'BROADCAST', 'GATEWAY'):
110 if pre + key in data:
111 subnet[key.lower()] = data[pre + key]
112
113 dns = []
114 # handle IPV4DNS0 or IPV6DNS0
115 for nskey in ('DNS0', 'DNS1'):
116 ns = data.get(pre + nskey)
117 # verify it has something other than 0.0.0.0 (or ipv6)
118 if ns and len(ns.strip(":.0")):
119 dns.append(data[pre + nskey])
120 if dns:
121 subnet['dns_nameservers'] = dns
122 # add search to both ipv4 and ipv6, as it has no namespace
123 search = data.get('DOMAINSEARCH')
124 if search:
125 if ',' in search:
126 subnet['dns_search'] = search.split(",")
127 else:
128 subnet['dns_search'] = search.split()
129
130 iface['subnets'].append(subnet)
131
132 return name, iface
133
134
135def config_from_klibc_net_cfg(files=None, mac_addrs=None):
136 if files is None:
137 files = glob.glob('/run/net*.conf')
138
139 entries = []
140 names = {}
141 for cfg_file in files:
142 name, entry = _klibc_to_config_entry(util.load_file(cfg_file),
143 mac_addrs=mac_addrs)
144 if name in names:
145 raise ValueError(
146 "device '%s' defined multiple times: %s and %s" % (
147 name, names[name], cfg_file))
148
149 names[name] = cfg_file
150 entries.append(entry)
151 return {'config': entries, 'version': 1}
152
153
154def _decomp_gzip(blob, strict=True):
155 # decompress blob. raise exception if not compressed unless strict=False.
156 with io.BytesIO(blob) as iobuf:
157 gzfp = None
158 try:
159 gzfp = gzip.GzipFile(mode="rb", fileobj=iobuf)
160 return gzfp.read()
161 except IOError:
162 if strict:
163 raise
164 return blob
165 finally:
166 if gzfp:
167 gzfp.close()
168
169
170def _b64dgz(b64str, gzipped="try"):
171 # decode a base64 string. If gzipped is true, transparently uncompresss
172 # if gzipped is 'try', then try gunzip, returning the original on fail.
173 try:
174 blob = base64.b64decode(b64str)
175 except TypeError:
176 raise ValueError("Invalid base64 text: %s" % b64str)
177
178 if not gzipped:
179 return blob
180
181 return _decomp_gzip(blob, strict=gzipped != "try")
182
183
184def read_kernel_cmdline_config(files=None, mac_addrs=None, cmdline=None):
185 if cmdline is None:
186 cmdline = util.get_cmdline()
187
188 if 'network-config=' in cmdline:
189 data64 = None
190 for tok in cmdline.split():
191 if tok.startswith("network-config="):
192 data64 = tok.split("=", 1)[1]
193 if data64:
194 return util.load_yaml(_b64dgz(data64))
195
196 if 'ip=' not in cmdline:
197 return None
198
199 if mac_addrs is None:
200 mac_addrs = dict((k, sys_netdev_info(k, 'address'))
201 for k in get_devicelist())
202
203 return config_from_klibc_net_cfg(files=files, mac_addrs=mac_addrs)
0204
=== added file 'cloudinit/net/eni.py'
--- cloudinit/net/eni.py 1970-01-01 00:00:00 +0000
+++ cloudinit/net/eni.py 2016-06-10 21:20:56 +0000
@@ -0,0 +1,457 @@
1# vi: ts=4 expandtab
2#
3# This program is free software: you can redistribute it and/or modify
4# it under the terms of the GNU General Public License version 3, as
5# published by the Free Software Foundation.
6#
7# This program is distributed in the hope that it will be useful,
8# but WITHOUT ANY WARRANTY; without even the implied warranty of
9# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
10# GNU General Public License for more details.
11#
12# You should have received a copy of the GNU General Public License
13# along with this program. If not, see <http://www.gnu.org/licenses/>.
14
15import glob
16import os
17import re
18
19from . import LINKS_FNAME_PREFIX
20from . import ParserError
21
22from .udev import generate_udev_rule
23
24from cloudinit import util
25
26
27NET_CONFIG_COMMANDS = [
28 "pre-up", "up", "post-up", "down", "pre-down", "post-down",
29]
30
31NET_CONFIG_BRIDGE_OPTIONS = [
32 "bridge_ageing", "bridge_bridgeprio", "bridge_fd", "bridge_gcinit",
33 "bridge_hello", "bridge_maxage", "bridge_maxwait", "bridge_stp",
34]
35
36NET_CONFIG_OPTIONS = [
37 "address", "netmask", "broadcast", "network", "metric", "gateway",
38 "pointtopoint", "media", "mtu", "hostname", "leasehours", "leasetime",
39 "vendor", "client", "bootfile", "server", "hwaddr", "provider", "frame",
40 "netnum", "endpoint", "local", "ttl",
41]
42
43
44# TODO: switch valid_map based on mode inet/inet6
45def _iface_add_subnet(iface, subnet):
46 content = ""
47 valid_map = [
48 'address',
49 'netmask',
50 'broadcast',
51 'metric',
52 'gateway',
53 'pointopoint',
54 'mtu',
55 'scope',
56 'dns_search',
57 'dns_nameservers',
58 ]
59 for key, value in subnet.items():
60 if value and key in valid_map:
61 if type(value) == list:
62 value = " ".join(value)
63 if '_' in key:
64 key = key.replace('_', '-')
65 content += " {} {}\n".format(key, value)
66
67 return content
68
69
70# TODO: switch to valid_map for attrs
71
72def _iface_add_attrs(iface):
73 content = ""
74 ignore_map = [
75 'control',
76 'index',
77 'inet',
78 'mode',
79 'name',
80 'subnets',
81 'type',
82 ]
83 if iface['type'] not in ['bond', 'bridge', 'vlan']:
84 ignore_map.append('mac_address')
85
86 for key, value in iface.items():
87 if value and key not in ignore_map:
88 if type(value) == list:
89 value = " ".join(value)
90 content += " {} {}\n".format(key, value)
91
92 return content
93
94
95def _iface_start_entry(iface, index):
96 fullname = iface['name']
97 if index != 0:
98 fullname += ":%s" % index
99
100 control = iface['control']
101 if control == "auto":
102 cverb = "auto"
103 elif control in ("hotplug",):
104 cverb = "allow-" + control
105 else:
106 cverb = "# control-" + control
107
108 subst = iface.copy()
109 subst.update({'fullname': fullname, 'cverb': cverb})
110
111 return ("{cverb} {fullname}\n"
112 "iface {fullname} {inet} {mode}\n").format(**subst)
113
114
115def _parse_deb_config_data(ifaces, contents, src_dir, src_path):
116 """Parses the file contents, placing result into ifaces.
117
118 '_source_path' is added to every dictionary entry to define which file
119 the configration information came from.
120
121 :param ifaces: interface dictionary
122 :param contents: contents of interfaces file
123 :param src_dir: directory interfaces file was located
124 :param src_path: file path the `contents` was read
125 """
126 currif = None
127 for line in contents.splitlines():
128 line = line.strip()
129 if line.startswith('#'):
130 continue
131 split = line.split(' ')
132 option = split[0]
133 if option == "source-directory":
134 parsed_src_dir = split[1]
135 if not parsed_src_dir.startswith("/"):
136 parsed_src_dir = os.path.join(src_dir, parsed_src_dir)
137 for expanded_path in glob.glob(parsed_src_dir):
138 dir_contents = os.listdir(expanded_path)
139 dir_contents = [
140 os.path.join(expanded_path, path)
141 for path in dir_contents
142 if (os.path.isfile(os.path.join(expanded_path, path)) and
143 re.match("^[a-zA-Z0-9_-]+$", path) is not None)
144 ]
145 for entry in dir_contents:
146 with open(entry, "r") as fp:
147 src_data = fp.read().strip()
148 abs_entry = os.path.abspath(entry)
149 _parse_deb_config_data(
150 ifaces, src_data,
151 os.path.dirname(abs_entry), abs_entry)
152 elif option == "source":
153 new_src_path = split[1]
154 if not new_src_path.startswith("/"):
155 new_src_path = os.path.join(src_dir, new_src_path)
156 for expanded_path in glob.glob(new_src_path):
157 with open(expanded_path, "r") as fp:
158 src_data = fp.read().strip()
159 abs_path = os.path.abspath(expanded_path)
160 _parse_deb_config_data(
161 ifaces, src_data,
162 os.path.dirname(abs_path), abs_path)
163 elif option == "auto":
164 for iface in split[1:]:
165 if iface not in ifaces:
166 ifaces[iface] = {
167 # Include the source path this interface was found in.
168 "_source_path": src_path
169 }
170 ifaces[iface]['auto'] = True
171 elif option == "iface":
172 iface, family, method = split[1:4]
173 if iface not in ifaces:
174 ifaces[iface] = {
175 # Include the source path this interface was found in.
176 "_source_path": src_path
177 }
178 elif 'family' in ifaces[iface]:
179 raise ParserError(
180 "Interface %s can only be defined once. "
181 "Re-defined in '%s'." % (iface, src_path))
182 ifaces[iface]['family'] = family
183 ifaces[iface]['method'] = method
184 currif = iface
185 elif option == "hwaddress":
186 if split[1] == "ether":
187 val = split[2]
188 else:
189 val = split[1]
190 ifaces[currif]['hwaddress'] = val
191 elif option in NET_CONFIG_OPTIONS:
192 ifaces[currif][option] = split[1]
193 elif option in NET_CONFIG_COMMANDS:
194 if option not in ifaces[currif]:
195 ifaces[currif][option] = []
196 ifaces[currif][option].append(' '.join(split[1:]))
197 elif option.startswith('dns-'):
198 if 'dns' not in ifaces[currif]:
199 ifaces[currif]['dns'] = {}
200 if option == 'dns-search':
201 ifaces[currif]['dns']['search'] = []
202 for domain in split[1:]:
203 ifaces[currif]['dns']['search'].append(domain)
204 elif option == 'dns-nameservers':
205 ifaces[currif]['dns']['nameservers'] = []
206 for server in split[1:]:
207 ifaces[currif]['dns']['nameservers'].append(server)
208 elif option.startswith('bridge_'):
209 if 'bridge' not in ifaces[currif]:
210 ifaces[currif]['bridge'] = {}
211 if option in NET_CONFIG_BRIDGE_OPTIONS:
212 bridge_option = option.replace('bridge_', '', 1)
213 ifaces[currif]['bridge'][bridge_option] = split[1]
214 elif option == "bridge_ports":
215 ifaces[currif]['bridge']['ports'] = []
216 for iface in split[1:]:
217 ifaces[currif]['bridge']['ports'].append(iface)
218 elif option == "bridge_hw" and split[1].lower() == "mac":
219 ifaces[currif]['bridge']['mac'] = split[2]
220 elif option == "bridge_pathcost":
221 if 'pathcost' not in ifaces[currif]['bridge']:
222 ifaces[currif]['bridge']['pathcost'] = {}
223 ifaces[currif]['bridge']['pathcost'][split[1]] = split[2]
224 elif option == "bridge_portprio":
225 if 'portprio' not in ifaces[currif]['bridge']:
226 ifaces[currif]['bridge']['portprio'] = {}
227 ifaces[currif]['bridge']['portprio'][split[1]] = split[2]
228 elif option.startswith('bond-'):
229 if 'bond' not in ifaces[currif]:
230 ifaces[currif]['bond'] = {}
231 bond_option = option.replace('bond-', '', 1)
232 ifaces[currif]['bond'][bond_option] = split[1]
233 for iface in ifaces.keys():
234 if 'auto' not in ifaces[iface]:
235 ifaces[iface]['auto'] = False
236
237
238def parse_deb_config(path):
239 """Parses a debian network configuration file."""
240 ifaces = {}
241 with open(path, "r") as fp:
242 contents = fp.read().strip()
243 abs_path = os.path.abspath(path)
244 _parse_deb_config_data(
245 ifaces, contents,
246 os.path.dirname(abs_path), abs_path)
247 return ifaces
248
249
250def convert_eni_data(eni_data):
251 # return a network config representation of what is in eni_data
252 ifaces = {}
253 _parse_deb_config_data(ifaces, eni_data, src_dir=None, src_path=None)
254 return _ifaces_to_net_config_data(ifaces)
255
256
257def _ifaces_to_net_config_data(ifaces):
258 """Return network config that represents the ifaces data provided.
259 ifaces = parse_deb_config("/etc/network/interfaces")
260 config = ifaces_to_net_config_data(ifaces)
261 state = parse_net_config_data(config)."""
262 devs = {}
263 for name, data in ifaces.items():
264 # devname is 'eth0' for name='eth0:1'
265 devname = name.partition(":")[0]
266 if devname == "lo":
267 # currently provding 'lo' in network config results in duplicate
268 # entries. in rendered interfaces file. so skip it.
269 continue
270 if devname not in devs:
271 devs[devname] = {'type': 'physical', 'name': devname,
272 'subnets': []}
273 # this isnt strictly correct, but some might specify
274 # hwaddress on a nic for matching / declaring name.
275 if 'hwaddress' in data:
276 devs[devname]['mac_address'] = data['hwaddress']
277 subnet = {'_orig_eni_name': name, 'type': data['method']}
278 if data.get('auto'):
279 subnet['control'] = 'auto'
280 else:
281 subnet['control'] = 'manual'
282
283 if data.get('method') == 'static':
284 subnet['address'] = data['address']
285
286 for copy_key in ('netmask', 'gateway', 'broadcast'):
287 if copy_key in data:
288 subnet[copy_key] = data[copy_key]
289
290 if 'dns' in data:
291 for n in ('nameservers', 'search'):
292 if n in data['dns'] and data['dns'][n]:
293 subnet['dns_' + n] = data['dns'][n]
294 devs[devname]['subnets'].append(subnet)
295
296 return {'version': 1,
297 'config': [devs[d] for d in sorted(devs)]}
298
299
300class Renderer(object):
301 """Renders network information in a /etc/network/interfaces format."""
302
303 def _render_persistent_net(self, network_state):
304 """Given state, emit udev rules to map mac to ifname."""
305 content = ""
306 interfaces = network_state.get('interfaces')
307 for iface in interfaces.values():
308 # for physical interfaces write out a persist net udev rule
309 if iface['type'] == 'physical' and \
310 'name' in iface and iface.get('mac_address'):
311 content += generate_udev_rule(iface['name'],
312 iface['mac_address'])
313
314 return content
315
316 def _render_route(self, route, indent=""):
317 """When rendering routes for an iface, in some cases applying a route
318 may result in the route command returning non-zero which produces
319 some confusing output for users manually using ifup/ifdown[1]. To
320 that end, we will optionally include an '|| true' postfix to each
321 route line allowing users to work with ifup/ifdown without using
322 --force option.
323
324 We may at somepoint not want to emit this additional postfix, and
325 add a 'strict' flag to this function. When called with strict=True,
326 then we will not append the postfix.
327
328 1. http://askubuntu.com/questions/168033/
329 how-to-set-static-routes-in-ubuntu-server
330 """
331 content = ""
332 up = indent + "post-up route add"
333 down = indent + "pre-down route del"
334 eol = " || true\n"
335 mapping = {
336 'network': '-net',
337 'netmask': 'netmask',
338 'gateway': 'gw',
339 'metric': 'metric',
340 }
341 if route['network'] == '0.0.0.0' and route['netmask'] == '0.0.0.0':
342 default_gw = " default gw %s" % route['gateway']
343 content += up + default_gw + eol
344 content += down + default_gw + eol
345 elif route['network'] == '::' and route['netmask'] == 0:
346 # ipv6!
347 default_gw = " -A inet6 default gw %s" % route['gateway']
348 content += up + default_gw + eol
349 content += down + default_gw + eol
350 else:
351 route_line = ""
352 for k in ['network', 'netmask', 'gateway', 'metric']:
353 if k in route:
354 route_line += " %s %s" % (mapping[k], route[k])
355 content += up + route_line + eol
356 content += down + route_line + eol
357 return content
358
359 def _render_interfaces(self, network_state):
360 '''Given state, emit etc/network/interfaces content.'''
361
362 content = ""
363 interfaces = network_state.get('interfaces')
364 ''' Apply a sort order to ensure that we write out
365 the physical interfaces first; this is critical for
366 bonding
367 '''
368 order = {
369 'physical': 0,
370 'bond': 1,
371 'bridge': 2,
372 'vlan': 3,
373 }
374 content += "auto lo\niface lo inet loopback\n"
375 for dnskey, value in network_state.get('dns', {}).items():
376 if len(value):
377 content += " dns-{} {}\n".format(dnskey, " ".join(value))
378
379 for iface in sorted(interfaces.values(),
380 key=lambda k: (order[k['type']], k['name'])):
381
382 if content[-2:] != "\n\n":
383 content += "\n"
384 subnets = iface.get('subnets', {})
385 if subnets:
386 for index, subnet in zip(range(0, len(subnets)), subnets):
387 if content[-2:] != "\n\n":
388 content += "\n"
389 iface['index'] = index
390 iface['mode'] = subnet['type']
391 iface['control'] = subnet.get('control', 'auto')
392 if iface['mode'].endswith('6'):
393 iface['inet'] += '6'
394 elif (iface['mode'] == 'static'
395 and ":" in subnet['address']):
396 iface['inet'] += '6'
397 if iface['mode'].startswith('dhcp'):
398 iface['mode'] = 'dhcp'
399
400 content += _iface_start_entry(iface, index)
401 content += _iface_add_subnet(iface, subnet)
402 content += _iface_add_attrs(iface)
403 for route in subnet.get('routes', []):
404 content += self._render_route(route, indent=" ")
405 else:
406 # ifenslave docs say to auto the slave devices
407 if 'bond-master' in iface:
408 content += "auto {name}\n".format(**iface)
409 content += "iface {name} {inet} {mode}\n".format(**iface)
410 content += _iface_add_attrs(iface)
411
412 for route in network_state.get('routes'):
413 content += self._render_route(route)
414
415 # global replacements until v2 format
416 content = content.replace('mac_address', 'hwaddress')
417 return content
418
419 def render_network_state(
420 self, target, network_state, eni="etc/network/interfaces",
421 links_prefix=LINKS_FNAME_PREFIX,
422 netrules='etc/udev/rules.d/70-persistent-net.rules',
423 writer=None):
424
425 fpeni = os.path.sep.join((target, eni,))
426 util.ensure_dir(os.path.dirname(fpeni))
427 util.write_file(fpeni, self._render_interfaces(network_state))
428
429 if netrules:
430 netrules = os.path.sep.join((target, netrules,))
431 util.ensure_dir(os.path.dirname(netrules))
432 util.write_file(netrules,
433 self._render_persistent_net(network_state))
434
435 if links_prefix:
436 self._render_systemd_links(target, network_state,
437 links_prefix=links_prefix)
438
439 def _render_systemd_links(self, target, network_state,
440 links_prefix=LINKS_FNAME_PREFIX):
441 fp_prefix = os.path.sep.join((target, links_prefix))
442 for f in glob.glob(fp_prefix + "*"):
443 os.unlink(f)
444 interfaces = network_state.get('interfaces')
445 for iface in interfaces.values():
446 if (iface['type'] == 'physical' and 'name' in iface and
447 iface.get('mac_address')):
448 fname = fp_prefix + iface['name'] + ".link"
449 content = "\n".join([
450 "[Match]",
451 "MACAddress=" + iface['mac_address'],
452 "",
453 "[Link]",
454 "Name=" + iface['name'],
455 ""
456 ])
457 util.write_file(fname, content)
0458
=== modified file 'cloudinit/net/network_state.py'
--- cloudinit/net/network_state.py 2016-05-12 17:56:26 +0000
+++ cloudinit/net/network_state.py 2016-06-10 21:20:56 +0000
@@ -15,9 +15,13 @@
15# You should have received a copy of the GNU Affero General Public License15# You should have received a copy of the GNU Affero General Public License
16# along with Curtin. If not, see <http://www.gnu.org/licenses/>.16# along with Curtin. If not, see <http://www.gnu.org/licenses/>.
1717
18from cloudinit import log as logging18import copy
19import functools
20import logging
21
22import six
23
19from cloudinit import util24from cloudinit import util
20from cloudinit.util import yaml_dumps as dump_config
2125
22LOG = logging.getLogger(__name__)26LOG = logging.getLogger(__name__)
2327
@@ -27,39 +31,104 @@
27}31}
2832
2933
34def parse_net_config_data(net_config, skip_broken=True):
35 """Parses the config, returns NetworkState object
36
37 :param net_config: curtin network config dict
38 """
39 state = None
40 if 'version' in net_config and 'config' in net_config:
41 ns = NetworkState(version=net_config.get('version'),
42 config=net_config.get('config'))
43 ns.parse_config(skip_broken=skip_broken)
44 state = ns.network_state
45 return state
46
47
48def parse_net_config(path, skip_broken=True):
49 """Parses a curtin network configuration file and
50 return network state"""
51 ns = None
52 net_config = util.read_conf(path)
53 if 'network' in net_config:
54 ns = parse_net_config_data(net_config.get('network'),
55 skip_broken=skip_broken)
56 return ns
57
58
30def from_state_file(state_file):59def from_state_file(state_file):
31 network_state = None60 network_state = None
32 state = util.read_conf(state_file)61 state = util.read_conf(state_file)
33 network_state = NetworkState()62 network_state = NetworkState()
34 network_state.load(state)63 network_state.load(state)
35
36 return network_state64 return network_state
3765
3866
67def diff_keys(expected, actual):
68 missing = set(expected)
69 for key in actual:
70 missing.discard(key)
71 return missing
72
73
74class InvalidCommand(Exception):
75 pass
76
77
78def ensure_command_keys(required_keys):
79
80 def wrapper(func):
81
82 @functools.wraps(func)
83 def decorator(self, command, *args, **kwargs):
84 if required_keys:
85 missing_keys = diff_keys(required_keys, command)
86 if missing_keys:
87 raise InvalidCommand("Command missing %s of required"
88 " keys %s" % (missing_keys,
89 required_keys))
90 return func(self, command, *args, **kwargs)
91
92 return decorator
93
94 return wrapper
95
96
97class CommandHandlerMeta(type):
98 """Metaclass that dynamically creates a 'command_handlers' attribute.
99
100 This will scan the to-be-created class for methods that start with
101 'handle_' and on finding those will populate a class attribute mapping
102 so that those methods can be quickly located and called.
103 """
104 def __new__(cls, name, parents, dct):
105 command_handlers = {}
106 for attr_name, attr in dct.items():
107 if callable(attr) and attr_name.startswith('handle_'):
108 handles_what = attr_name[len('handle_'):]
109 if handles_what:
110 command_handlers[handles_what] = attr
111 dct['command_handlers'] = command_handlers
112 return super(CommandHandlerMeta, cls).__new__(cls, name,
113 parents, dct)
114
115
116@six.add_metaclass(CommandHandlerMeta)
39class NetworkState(object):117class NetworkState(object):
118
119 initial_network_state = {
120 'interfaces': {},
121 'routes': [],
122 'dns': {
123 'nameservers': [],
124 'search': [],
125 }
126 }
127
40 def __init__(self, version=NETWORK_STATE_VERSION, config=None):128 def __init__(self, version=NETWORK_STATE_VERSION, config=None):
41 self.version = version129 self.version = version
42 self.config = config130 self.config = config
43 self.network_state = {131 self.network_state = copy.deepcopy(self.initial_network_state)
44 'interfaces': {},
45 'routes': [],
46 'dns': {
47 'nameservers': [],
48 'search': [],
49 }
50 }
51 self.command_handlers = self.get_command_handlers()
52
53 def get_command_handlers(self):
54 METHOD_PREFIX = 'handle_'
55 methods = filter(lambda x: callable(getattr(self, x)) and
56 x.startswith(METHOD_PREFIX), dir(self))
57 handlers = {}
58 for m in methods:
59 key = m.replace(METHOD_PREFIX, '')
60 handlers[key] = getattr(self, m)
61
62 return handlers
63132
64 def dump(self):133 def dump(self):
65 state = {134 state = {
@@ -67,7 +136,7 @@
67 'config': self.config,136 'config': self.config,
68 'network_state': self.network_state,137 'network_state': self.network_state,
69 }138 }
70 return dump_config(state)139 return util.yaml_dumps(state)
71140
72 def load(self, state):141 def load(self, state):
73 if 'version' not in state:142 if 'version' not in state:
@@ -75,32 +144,39 @@
75 raise Exception('Invalid state, missing version field')144 raise Exception('Invalid state, missing version field')
76145
77 required_keys = NETWORK_STATE_REQUIRED_KEYS[state['version']]146 required_keys = NETWORK_STATE_REQUIRED_KEYS[state['version']]
78 if not self.valid_command(state, required_keys):147 missing_keys = diff_keys(required_keys, state)
79 msg = 'Invalid state, missing keys: {}'.format(required_keys)148 if missing_keys:
149 msg = 'Invalid state, missing keys: %s' % (missing_keys)
80 LOG.error(msg)150 LOG.error(msg)
81 raise Exception(msg)151 raise ValueError(msg)
82152
83 # v1 - direct attr mapping, except version153 # v1 - direct attr mapping, except version
84 for key in [k for k in required_keys if k not in ['version']]:154 for key in [k for k in required_keys if k not in ['version']]:
85 setattr(self, key, state[key])155 setattr(self, key, state[key])
86 self.command_handlers = self.get_command_handlers()
87156
88 def dump_network_state(self):157 def dump_network_state(self):
89 return dump_config(self.network_state)158 return util.yaml_dumps(self.network_state)
90159
91 def parse_config(self):160 def parse_config(self, skip_broken=True):
92 # rebuild network state161 # rebuild network state
93 for command in self.config:162 for command in self.config:
94 handler = self.command_handlers.get(command['type'])163 command_type = command['type']
95 handler(command)164 try:
96165 handler = self.command_handlers[command_type]
97 def valid_command(self, command, required_keys):166 except KeyError:
98 if not required_keys:167 raise RuntimeError("No handler found for"
99 return False168 " command '%s'" % command_type)
100169 try:
101 found_keys = [key for key in command.keys() if key in required_keys]170 handler(self, command)
102 return len(found_keys) == len(required_keys)171 except InvalidCommand:
103172 if not skip_broken:
173 raise
174 else:
175 LOG.warn("Skipping invalid command: %s", command,
176 exc_info=True)
177 LOG.debug(self.dump_network_state())
178
179 @ensure_command_keys(['name'])
104 def handle_physical(self, command):180 def handle_physical(self, command):
105 '''181 '''
106 command = {182 command = {
@@ -112,13 +188,6 @@
112 ]188 ]
113 }189 }
114 '''190 '''
115 required_keys = [
116 'name',
117 ]
118 if not self.valid_command(command, required_keys):
119 LOG.warn('Skipping Invalid command: {}'.format(command))
120 LOG.debug(self.dump_network_state())
121 return
122191
123 interfaces = self.network_state.get('interfaces')192 interfaces = self.network_state.get('interfaces')
124 iface = interfaces.get(command['name'], {})193 iface = interfaces.get(command['name'], {})
@@ -149,6 +218,7 @@
149 self.network_state['interfaces'].update({command.get('name'): iface})218 self.network_state['interfaces'].update({command.get('name'): iface})
150 self.dump_network_state()219 self.dump_network_state()
151220
221 @ensure_command_keys(['name', 'vlan_id', 'vlan_link'])
152 def handle_vlan(self, command):222 def handle_vlan(self, command):
153 '''223 '''
154 auto eth0.222224 auto eth0.222
@@ -158,16 +228,6 @@
158 hwaddress ether BC:76:4E:06:96:B3228 hwaddress ether BC:76:4E:06:96:B3
159 vlan-raw-device eth0229 vlan-raw-device eth0
160 '''230 '''
161 required_keys = [
162 'name',
163 'vlan_link',
164 'vlan_id',
165 ]
166 if not self.valid_command(command, required_keys):
167 print('Skipping Invalid command: {}'.format(command))
168 print(self.dump_network_state())
169 return
170
171 interfaces = self.network_state.get('interfaces')231 interfaces = self.network_state.get('interfaces')
172 self.handle_physical(command)232 self.handle_physical(command)
173 iface = interfaces.get(command.get('name'), {})233 iface = interfaces.get(command.get('name'), {})
@@ -175,6 +235,7 @@
175 iface['vlan_id'] = command.get('vlan_id')235 iface['vlan_id'] = command.get('vlan_id')
176 interfaces.update({iface['name']: iface})236 interfaces.update({iface['name']: iface})
177237
238 @ensure_command_keys(['name', 'bond_interfaces', 'params'])
178 def handle_bond(self, command):239 def handle_bond(self, command):
179 '''240 '''
180 #/etc/network/interfaces241 #/etc/network/interfaces
@@ -200,15 +261,6 @@
200 bond-updelay 200261 bond-updelay 200
201 bond-lacp-rate 4262 bond-lacp-rate 4
202 '''263 '''
203 required_keys = [
204 'name',
205 'bond_interfaces',
206 'params',
207 ]
208 if not self.valid_command(command, required_keys):
209 print('Skipping Invalid command: {}'.format(command))
210 print(self.dump_network_state())
211 return
212264
213 self.handle_physical(command)265 self.handle_physical(command)
214 interfaces = self.network_state.get('interfaces')266 interfaces = self.network_state.get('interfaces')
@@ -236,6 +288,7 @@
236 bond_if.update({param: val})288 bond_if.update({param: val})
237 self.network_state['interfaces'].update({ifname: bond_if})289 self.network_state['interfaces'].update({ifname: bond_if})
238290
291 @ensure_command_keys(['name', 'bridge_interfaces', 'params'])
239 def handle_bridge(self, command):292 def handle_bridge(self, command):
240 '''293 '''
241 auto br0294 auto br0
@@ -263,15 +316,6 @@
263 "bridge_waitport",316 "bridge_waitport",
264 ]317 ]
265 '''318 '''
266 required_keys = [
267 'name',
268 'bridge_interfaces',
269 'params',
270 ]
271 if not self.valid_command(command, required_keys):
272 print('Skipping Invalid command: {}'.format(command))
273 print(self.dump_network_state())
274 return
275319
276 # find one of the bridge port ifaces to get mac_addr320 # find one of the bridge port ifaces to get mac_addr
277 # handle bridge_slaves321 # handle bridge_slaves
@@ -295,15 +339,8 @@
295339
296 interfaces.update({iface['name']: iface})340 interfaces.update({iface['name']: iface})
297341
342 @ensure_command_keys(['address'])
298 def handle_nameserver(self, command):343 def handle_nameserver(self, command):
299 required_keys = [
300 'address',
301 ]
302 if not self.valid_command(command, required_keys):
303 print('Skipping Invalid command: {}'.format(command))
304 print(self.dump_network_state())
305 return
306
307 dns = self.network_state.get('dns')344 dns = self.network_state.get('dns')
308 if 'address' in command:345 if 'address' in command:
309 addrs = command['address']346 addrs = command['address']
@@ -318,15 +355,8 @@
318 for path in paths:355 for path in paths:
319 dns['search'].append(path)356 dns['search'].append(path)
320357
358 @ensure_command_keys(['destination'])
321 def handle_route(self, command):359 def handle_route(self, command):
322 required_keys = [
323 'destination',
324 ]
325 if not self.valid_command(command, required_keys):
326 print('Skipping Invalid command: {}'.format(command))
327 print(self.dump_network_state())
328 return
329
330 routes = self.network_state.get('routes')360 routes = self.network_state.get('routes')
331 network, cidr = command['destination'].split("/")361 network, cidr = command['destination'].split("/")
332 netmask = cidr2mask(int(cidr))362 netmask = cidr2mask(int(cidr))
@@ -376,72 +406,3 @@
376 return ipv4mask2cidr(mask)406 return ipv4mask2cidr(mask)
377 else:407 else:
378 return mask408 return mask
379
380
381if __name__ == '__main__':
382 import random
383 import sys
384
385 from cloudinit import net
386
387 def load_config(nc):
388 version = nc.get('version')
389 config = nc.get('config')
390 return (version, config)
391
392 def test_parse(network_config):
393 (version, config) = load_config(network_config)
394 ns1 = NetworkState(version=version, config=config)
395 ns1.parse_config()
396 random.shuffle(config)
397 ns2 = NetworkState(version=version, config=config)
398 ns2.parse_config()
399 print("----NS1-----")
400 print(ns1.dump_network_state())
401 print()
402 print("----NS2-----")
403 print(ns2.dump_network_state())
404 print("NS1 == NS2 ?=> {}".format(
405 ns1.network_state == ns2.network_state))
406 eni = net.render_interfaces(ns2.network_state)
407 print(eni)
408 udev_rules = net.render_persistent_net(ns2.network_state)
409 print(udev_rules)
410
411 def test_dump_and_load(network_config):
412 print("Loading network_config into NetworkState")
413 (version, config) = load_config(network_config)
414 ns1 = NetworkState(version=version, config=config)
415 ns1.parse_config()
416 print("Dumping state to file")
417 ns1_dump = ns1.dump()
418 ns1_state = "/tmp/ns1.state"
419 with open(ns1_state, "w+") as f:
420 f.write(ns1_dump)
421
422 print("Loading state from file")
423 ns2 = from_state_file(ns1_state)
424 print("NS1 == NS2 ?=> {}".format(
425 ns1.network_state == ns2.network_state))
426
427 def test_output(network_config):
428 (version, config) = load_config(network_config)
429 ns1 = NetworkState(version=version, config=config)
430 ns1.parse_config()
431 random.shuffle(config)
432 ns2 = NetworkState(version=version, config=config)
433 ns2.parse_config()
434 print("NS1 == NS2 ?=> {}".format(
435 ns1.network_state == ns2.network_state))
436 eni_1 = net.render_interfaces(ns1.network_state)
437 eni_2 = net.render_interfaces(ns2.network_state)
438 print(eni_1)
439 print(eni_2)
440 print("eni_1 == eni_2 ?=> {}".format(
441 eni_1 == eni_2))
442
443 y = util.read_conf(sys.argv[1])
444 network_config = y.get('network')
445 test_parse(network_config)
446 test_dump_and_load(network_config)
447 test_output(network_config)
448409
=== added file 'cloudinit/serial.py'
--- cloudinit/serial.py 1970-01-01 00:00:00 +0000
+++ cloudinit/serial.py 2016-06-10 21:20:56 +0000
@@ -0,0 +1,50 @@
1# vi: ts=4 expandtab
2#
3# This program is free software: you can redistribute it and/or modify
4# it under the terms of the GNU General Public License version 3, as
5# published by the Free Software Foundation.
6#
7# This program is distributed in the hope that it will be useful,
8# but WITHOUT ANY WARRANTY; without even the implied warranty of
9# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
10# GNU General Public License for more details.
11#
12# You should have received a copy of the GNU General Public License
13# along with this program. If not, see <http://www.gnu.org/licenses/>.
14
15
16from __future__ import absolute_import
17
18try:
19 from serial import Serial
20except ImportError:
21 # For older versions of python (ie 2.6) pyserial may not exist and/or
22 # work and/or be installed, so make a dummy/fake serial that blows up
23 # when used...
24 class Serial(object):
25 def __init__(self, *args, **kwargs):
26 pass
27
28 @staticmethod
29 def isOpen():
30 return False
31
32 @staticmethod
33 def write(data):
34 raise IOError("Unable to perform serial `write` operation,"
35 " pyserial not installed.")
36
37 @staticmethod
38 def readline():
39 raise IOError("Unable to perform serial `readline` operation,"
40 " pyserial not installed.")
41
42 @staticmethod
43 def flush():
44 raise IOError("Unable to perform serial `flush` operation,"
45 " pyserial not installed.")
46
47 @staticmethod
48 def read(size=1):
49 raise IOError("Unable to perform serial `read` operation,"
50 " pyserial not installed.")
051
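
(A small usage sketch: consumers import the wrapper instead of pyserial directly, as the SmartOS datasource now does below; when pyserial is missing, construction succeeds but any real I/O fails at use time:)

    from cloudinit import serial

    conn = serial.Serial()       # stub object if pyserial is unavailable
    if not conn.isOpen():
        # With the stub, isOpen() is always False and write()/read()/
        # readline()/flush() raise IOError when actually attempted.
        print("no usable serial port / pyserial not installed")
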
=== modified file 'cloudinit/sources/DataSourceAzure.py'
--- cloudinit/sources/DataSourceAzure.py 2016-05-12 17:56:26 +0000
+++ cloudinit/sources/DataSourceAzure.py 2016-06-10 21:20:56 +0000
@@ -423,7 +423,7 @@
423 elem.text = DEF_PASSWD_REDACTION423 elem.text = DEF_PASSWD_REDACTION
424 return ET.tostring(root)424 return ET.tostring(root)
425 except Exception:425 except Exception:
426 LOG.critical("failed to redact userpassword in {}".format(fname))426 LOG.critical("failed to redact userpassword in %s", fname)
427 return cnt427 return cnt
428428
429 if not datadir:429 if not datadir:
430430
=== modified file 'cloudinit/sources/DataSourceConfigDrive.py'
--- cloudinit/sources/DataSourceConfigDrive.py 2016-06-03 19:06:55 +0000
+++ cloudinit/sources/DataSourceConfigDrive.py 2016-06-10 21:20:56 +0000
@@ -18,14 +18,14 @@
18# You should have received a copy of the GNU General Public License18# You should have received a copy of the GNU General Public License
19# along with this program. If not, see <http://www.gnu.org/licenses/>.19# along with this program. If not, see <http://www.gnu.org/licenses/>.
2020
21import copy
22import os21import os
2322
24from cloudinit import log as logging23from cloudinit import log as logging
25from cloudinit import net
26from cloudinit import sources24from cloudinit import sources
27from cloudinit import util25from cloudinit import util
2826
27from cloudinit.net import eni
28
29from cloudinit.sources.helpers import openstack29from cloudinit.sources.helpers import openstack
3030
31LOG = logging.getLogger(__name__)31LOG = logging.getLogger(__name__)
@@ -53,6 +53,7 @@
53 self._network_config = None53 self._network_config = None
54 self.network_json = None54 self.network_json = None
55 self.network_eni = None55 self.network_eni = None
56 self.known_macs = None
56 self.files = {}57 self.files = {}
5758
58 def __str__(self):59 def __str__(self):
@@ -147,9 +148,10 @@
147 if self._network_config is None:148 if self._network_config is None:
148 if self.network_json is not None:149 if self.network_json is not None:
149 LOG.debug("network config provided via network_json")150 LOG.debug("network config provided via network_json")
150 self._network_config = convert_network_data(self.network_json)151 self._network_config = openstack.convert_net_json(
152 self.network_json, known_macs=self.known_macs)
151 elif self.network_eni is not None:153 elif self.network_eni is not None:
152 self._network_config = net.convert_eni_data(self.network_eni)154 self._network_config = eni.convert_eni_data(self.network_eni)
153 LOG.debug("network config provided via converted eni data")155 LOG.debug("network config provided via converted eni data")
154 else:156 else:
155 LOG.debug("no network configuration available")157 LOG.debug("no network configuration available")
@@ -254,152 +256,12 @@
254 return devices256 return devices
255257
256258
257# Convert OpenStack ConfigDrive NetworkData json to network_config yaml
258def convert_network_data(network_json=None, known_macs=None):
259 """Return a dictionary of network_config by parsing provided
260 OpenStack ConfigDrive NetworkData json format
261
262 OpenStack network_data.json provides a 3 element dictionary
263 - "links" (links are network devices, physical or virtual)
264 - "networks" (networks are ip network configurations for one or more
265 links)
266 - services (non-ip services, like dns)
267
268 networks and links are combined via network items referencing specific
269 links via a 'link_id' which maps to a links 'id' field.
270
271 To convert this format to network_config yaml, we first iterate over the
272 links and then walk the network list to determine if any of the networks
273 utilize the current link; if so we generate a subnet entry for the device
274
275 We also need to map network_data.json fields to network_config fields. For
276 example, the network_data links 'id' field is equivalent to network_config
277 'name' field for devices. We apply more of this mapping to the various
278 link types that we encounter.
279
280 There are additional fields that are populated in the network_data.json
281 from OpenStack that are not relevant to network_config yaml, so we
282 enumerate a dictionary of valid keys for network_yaml and apply filtering
283 to drop these superflous keys from the network_config yaml.
284 """
285 if network_json is None:
286 return None
287
288 # dict of network_config key for filtering network_json
289 valid_keys = {
290 'physical': [
291 'name',
292 'type',
293 'mac_address',
294 'subnets',
295 'params',
296 'mtu',
297 ],
298 'subnet': [
299 'type',
300 'address',
301 'netmask',
302 'broadcast',
303 'metric',
304 'gateway',
305 'pointopoint',
306 'scope',
307 'dns_nameservers',
308 'dns_search',
309 'routes',
310 ],
311 }
312
313 links = network_json.get('links', [])
314 networks = network_json.get('networks', [])
315 services = network_json.get('services', [])
316
317 config = []
318 for link in links:
319 subnets = []
320 cfg = {k: v for k, v in link.items()
321 if k in valid_keys['physical']}
322 # 'name' is not in openstack spec yet, but we will support it if it is
323 # present. The 'id' in the spec is currently implemented as the host
324 # nic's name, meaning something like 'tap-adfasdffd'. We do not want
325 # to name guest devices with such ugly names.
326 if 'name' in link:
327 cfg['name'] = link['name']
328
329 for network in [n for n in networks
330 if n['link'] == link['id']]:
331 subnet = {k: v for k, v in network.items()
332 if k in valid_keys['subnet']}
333 if 'dhcp' in network['type']:
334 t = 'dhcp6' if network['type'].startswith('ipv6') else 'dhcp4'
335 subnet.update({
336 'type': t,
337 })
338 else:
339 subnet.update({
340 'type': 'static',
341 'address': network.get('ip_address'),
342 })
343 subnets.append(subnet)
344 cfg.update({'subnets': subnets})
345 if link['type'] in ['ethernet', 'vif', 'ovs', 'phy', 'bridge']:
346 cfg.update({
347 'type': 'physical',
348 'mac_address': link['ethernet_mac_address']})
349 elif link['type'] in ['bond']:
350 params = {}
351 for k, v in link.items():
352 if k == 'bond_links':
353 continue
354 elif k.startswith('bond'):
355 params.update({k: v})
356 cfg.update({
357 'bond_interfaces': copy.deepcopy(link['bond_links']),
358 'params': params,
359 })
360 elif link['type'] in ['vlan']:
361 cfg.update({
362 'name': "%s.%s" % (link['vlan_link'],
363 link['vlan_id']),
364 'vlan_link': link['vlan_link'],
365 'vlan_id': link['vlan_id'],
366 'mac_address': link['vlan_mac_address'],
367 })
368 else:
369 raise ValueError(
370 'Unknown network_data link type: %s' % link['type'])
371
372 config.append(cfg)
373
374 need_names = [d for d in config
375 if d.get('type') == 'physical' and 'name' not in d]
376
377 if need_names:
378 if known_macs is None:
379 known_macs = net.get_interfaces_by_mac()
380
381 for d in need_names:
382 mac = d.get('mac_address')
383 if not mac:
384 raise ValueError("No mac_address or name entry for %s" % d)
385 if mac not in known_macs:
386 raise ValueError("Unable to find a system nic for %s" % d)
387 d['name'] = known_macs[mac]
388
389 for service in services:
390 cfg = service
391 cfg.update({'type': 'nameserver'})
392 config.append(cfg)
393
394 return {'version': 1, 'config': config}
395
396
397# Legacy: Must be present in case we load an old pkl object259# Legacy: Must be present in case we load an old pkl object
398DataSourceConfigDriveNet = DataSourceConfigDrive260DataSourceConfigDriveNet = DataSourceConfigDrive
399261
400# Used to match classes to dependencies262# Used to match classes to dependencies
401datasources = [263datasources = [
402 (DataSourceConfigDrive, (sources.DEP_FILESYSTEM, )),264 (DataSourceConfigDrive, (sources.DEP_FILESYSTEM,)),
403]265]
404266
405267
406268
=== modified file 'cloudinit/sources/DataSourceSmartOS.py'
--- cloudinit/sources/DataSourceSmartOS.py 2016-06-02 18:36:51 +0000
+++ cloudinit/sources/DataSourceSmartOS.py 2016-06-10 21:20:56 +0000
@@ -40,13 +40,11 @@
40import re40import re
41import socket41import socket
4242
43import serial
44
45from cloudinit import log as logging43from cloudinit import log as logging
44from cloudinit import serial
46from cloudinit import sources45from cloudinit import sources
47from cloudinit import util46from cloudinit import util
4847
49
50LOG = logging.getLogger(__name__)48LOG = logging.getLogger(__name__)
5149
52SMARTOS_ATTRIB_MAP = {50SMARTOS_ATTRIB_MAP = {
5351
=== modified file 'cloudinit/sources/helpers/openstack.py'
--- cloudinit/sources/helpers/openstack.py 2016-06-02 17:18:23 +0000
+++ cloudinit/sources/helpers/openstack.py 2016-06-10 21:20:56 +0000
@@ -28,6 +28,7 @@
2828
29from cloudinit import ec2_utils29from cloudinit import ec2_utils
30from cloudinit import log as logging30from cloudinit import log as logging
31from cloudinit import net
31from cloudinit import sources32from cloudinit import sources
32from cloudinit import url_helper33from cloudinit import url_helper
33from cloudinit import util34from cloudinit import util
@@ -478,6 +479,150 @@
478 retries=self.retries)479 retries=self.retries)
479480
480481
482# Convert OpenStack ConfigDrive NetworkData json to network_config yaml
483def convert_net_json(network_json=None, known_macs=None):
484 """Return a dictionary of network_config by parsing provided
485 OpenStack ConfigDrive NetworkData json format
486
487 OpenStack network_data.json provides a 3 element dictionary
488 - "links" (links are network devices, physical or virtual)
489 - "networks" (networks are ip network configurations for one or more
490 links)
491 - services (non-ip services, like dns)
492
493 networks and links are combined via network items referencing specific
494 links via a 'link_id' which maps to a links 'id' field.
495
496 To convert this format to network_config yaml, we first iterate over the
497 links and then walk the network list to determine if any of the networks
498 utilize the current link; if so we generate a subnet entry for the device
499
500 We also need to map network_data.json fields to network_config fields. For
501 example, the network_data links 'id' field is equivalent to network_config
502 'name' field for devices. We apply more of this mapping to the various
503 link types that we encounter.
504
505 There are additional fields that are populated in the network_data.json
506 from OpenStack that are not relevant to network_config yaml, so we
507 enumerate a dictionary of valid keys for network_yaml and apply filtering
 508 to drop these superfluous keys from the network_config yaml.
509 """
510 if network_json is None:
511 return None
512
513 # dict of network_config key for filtering network_json
514 valid_keys = {
515 'physical': [
516 'name',
517 'type',
518 'mac_address',
519 'subnets',
520 'params',
521 'mtu',
522 ],
523 'subnet': [
524 'type',
525 'address',
526 'netmask',
527 'broadcast',
528 'metric',
529 'gateway',
530 'pointopoint',
531 'scope',
532 'dns_nameservers',
533 'dns_search',
534 'routes',
535 ],
536 }
537
538 links = network_json.get('links', [])
539 networks = network_json.get('networks', [])
540 services = network_json.get('services', [])
541
542 config = []
543 for link in links:
544 subnets = []
545 cfg = {k: v for k, v in link.items()
546 if k in valid_keys['physical']}
547 # 'name' is not in openstack spec yet, but we will support it if it is
548 # present. The 'id' in the spec is currently implemented as the host
549 # nic's name, meaning something like 'tap-adfasdffd'. We do not want
550 # to name guest devices with such ugly names.
551 if 'name' in link:
552 cfg['name'] = link['name']
553
554 for network in [n for n in networks
555 if n['link'] == link['id']]:
556 subnet = {k: v for k, v in network.items()
557 if k in valid_keys['subnet']}
558 if 'dhcp' in network['type']:
559 t = 'dhcp6' if network['type'].startswith('ipv6') else 'dhcp4'
560 subnet.update({
561 'type': t,
562 })
563 else:
564 subnet.update({
565 'type': 'static',
566 'address': network.get('ip_address'),
567 })
568 if network['type'] == 'ipv4':
569 subnet['ipv4'] = True
570 if network['type'] == 'ipv6':
571 subnet['ipv6'] = True
572 subnets.append(subnet)
573 cfg.update({'subnets': subnets})
574 if link['type'] in ['ethernet', 'vif', 'ovs', 'phy', 'bridge']:
575 cfg.update({
576 'type': 'physical',
577 'mac_address': link['ethernet_mac_address']})
578 elif link['type'] in ['bond']:
579 params = {}
580 for k, v in link.items():
581 if k == 'bond_links':
582 continue
583 elif k.startswith('bond'):
584 params.update({k: v})
585 cfg.update({
586 'bond_interfaces': copy.deepcopy(link['bond_links']),
587 'params': params,
588 })
589 elif link['type'] in ['vlan']:
590 cfg.update({
591 'name': "%s.%s" % (link['vlan_link'],
592 link['vlan_id']),
593 'vlan_link': link['vlan_link'],
594 'vlan_id': link['vlan_id'],
595 'mac_address': link['vlan_mac_address'],
596 })
597 else:
598 raise ValueError(
599 'Unknown network_data link type: %s' % link['type'])
600
601 config.append(cfg)
602
603 need_names = [d for d in config
604 if d.get('type') == 'physical' and 'name' not in d]
605
606 if need_names:
607 if known_macs is None:
608 known_macs = net.get_interfaces_by_mac()
609
610 for d in need_names:
611 mac = d.get('mac_address')
612 if not mac:
613 raise ValueError("No mac_address or name entry for %s" % d)
614 if mac not in known_macs:
615 raise ValueError("Unable to find a system nic for %s" % d)
616 d['name'] = known_macs[mac]
617
618 for service in services:
619 cfg = service
620 cfg.update({'type': 'nameserver'})
621 config.append(cfg)
622
623 return {'version': 1, 'config': config}
624
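
(Roughly how the relocated helper is exercised; this condenses the new unit test added to test_configdrive.py further down:)

    from cloudinit.sources.helpers import openstack

    network_json = {
        'links': [{'id': 'tap1a81968a-79', 'type': 'bridge', 'mtu': None,
                   'ethernet_mac_address': 'fa:16:3e:ed:9a:59'}],
        'networks': [{'id': 'network0', 'link': 'tap1a81968a-79',
                      'type': 'ipv4', 'ip_address': '172.19.1.34',
                      'netmask': '255.255.252.0', 'routes': []}],
        'services': [{'type': 'dns', 'address': '172.19.0.12'}],
    }
    known_macs = {'fa:16:3e:ed:9a:59': 'foo3'}
    # Returns {'version': 1, 'config': [...]} containing one 'physical'
    # entry named 'foo3' (resolved through known_macs), a static ipv4
    # subnet for 172.19.1.34, and a 'nameserver' entry for 172.19.0.12.
    print(openstack.convert_net_json(network_json, known_macs=known_macs))
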
625
481def convert_vendordata_json(data, recurse=True):626def convert_vendordata_json(data, recurse=True):
482 """data: a loaded json *object* (strings, arrays, dicts).627 """data: a loaded json *object* (strings, arrays, dicts).
483 return something suitable for cloudinit vendordata_raw.628 return something suitable for cloudinit vendordata_raw.
484629
=== modified file 'cloudinit/stages.py'
--- cloudinit/stages.py 2016-06-07 18:34:57 +0000
+++ cloudinit/stages.py 2016-06-10 21:20:56 +0000
@@ -44,6 +44,7 @@
44from cloudinit import importer44from cloudinit import importer
45from cloudinit import log as logging45from cloudinit import log as logging
46from cloudinit import net46from cloudinit import net
47from cloudinit.net import cmdline
47from cloudinit.reporting import events48from cloudinit.reporting import events
48from cloudinit import sources49from cloudinit import sources
49from cloudinit import type_utils50from cloudinit import type_utils
@@ -612,7 +613,7 @@
612 if os.path.exists(disable_file):613 if os.path.exists(disable_file):
613 return (None, disable_file)614 return (None, disable_file)
614615
615 cmdline_cfg = ('cmdline', net.read_kernel_cmdline_config())616 cmdline_cfg = ('cmdline', cmdline.read_kernel_cmdline_config())
616 dscfg = ('ds', None)617 dscfg = ('ds', None)
617 if self.datasource and hasattr(self.datasource, 'network_config'):618 if self.datasource and hasattr(self.datasource, 'network_config'):
618 dscfg = ('ds', self.datasource.network_config)619 dscfg = ('ds', self.datasource.network_config)
619620
=== modified file 'cloudinit/util.py'
--- cloudinit/util.py 2016-06-09 07:18:35 +0000
+++ cloudinit/util.py 2016-06-10 21:20:56 +0000
@@ -171,7 +171,8 @@
171171
172 def __init__(self, stdout=None, stderr=None,172 def __init__(self, stdout=None, stderr=None,
173 exit_code=None, cmd=None,173 exit_code=None, cmd=None,
174 description=None, reason=None):174 description=None, reason=None,
175 errno=None):
175 if not cmd:176 if not cmd:
176 self.cmd = '-'177 self.cmd = '-'
177 else:178 else:
@@ -202,6 +203,7 @@
202 else:203 else:
203 self.reason = '-'204 self.reason = '-'
204205
206 self.errno = errno
205 message = self.MESSAGE_TMPL % {207 message = self.MESSAGE_TMPL % {
206 'description': self.description,208 'description': self.description,
207 'cmd': self.cmd,209 'cmd': self.cmd,
@@ -1157,7 +1159,14 @@
1157 options.append(path)1159 options.append(path)
1158 cmd = blk_id_cmd + options1160 cmd = blk_id_cmd + options
1159 # See man blkid for why 2 is added1161 # See man blkid for why 2 is added
1160 (out, _err) = subp(cmd, rcs=[0, 2])1162 try:
1163 (out, _err) = subp(cmd, rcs=[0, 2])
1164 except ProcessExecutionError as e:
1165 if e.errno == errno.ENOENT:
1166 # blkid not found...
1167 out = ""
1168 else:
1169 raise
1161 entries = []1170 entries = []
1162 for line in out.splitlines():1171 for line in out.splitlines():
1163 line = line.strip()1172 line = line.strip()
@@ -1706,7 +1715,8 @@
1706 sp = subprocess.Popen(args, **kws)1715 sp = subprocess.Popen(args, **kws)
1707 (out, err) = sp.communicate(data)1716 (out, err) = sp.communicate(data)
1708 except OSError as e:1717 except OSError as e:
1709 raise ProcessExecutionError(cmd=args, reason=e)1718 raise ProcessExecutionError(cmd=args, reason=e,
1719 errno=e.errno)
1710 rc = sp.returncode1720 rc = sp.returncode
1711 if rc not in rcs:1721 if rc not in rcs:
1712 raise ProcessExecutionError(stdout=out, stderr=err,1722 raise ProcessExecutionError(stdout=out, stderr=err,
17131723
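
(Sketch of how the new errno attribute is meant to be consumed, mirroring the blkid fallback above; the command below is only an example:)

    import errno

    from cloudinit import util

    try:
        out, _err = util.subp(['blkid'], rcs=[0, 2])
    except util.ProcessExecutionError as e:
        if e.errno == errno.ENOENT:
            out = ""     # binary not present; treat as no output
        else:
            raise
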
=== modified file 'packages/bddeb'
--- packages/bddeb 2016-06-07 07:56:53 +0000
+++ packages/bddeb 2016-06-10 21:20:56 +0000
@@ -42,6 +42,7 @@
42 'setuptools',42 'setuptools',
43 'flake8',43 'flake8',
44 'hacking',44 'hacking',
45 'unittest2',
45]46]
46NONSTD_NAMED_PACKAGES = {47NONSTD_NAMED_PACKAGES = {
47 'argparse': ('python-argparse', None),48 'argparse': ('python-argparse', None),
4849
=== modified file 'requirements.txt'
--- requirements.txt 2015-01-26 21:37:29 +0000
+++ requirements.txt 2016-06-10 21:20:56 +0000
@@ -11,8 +11,12 @@
11oauthlib11oauthlib
1212
13# This one is currently used only by the CloudSigma and SmartOS datasources.13# This one is currently used only by the CloudSigma and SmartOS datasources.
14# If these datasources are removed, this is no longer needed14# If these datasources are removed, this is no longer needed.
15pyserial15#
16# This will not work in py2.6 so it is only optionally installed on
17# python 2.7 and later.
18#
19# pyserial
1620
17# This is only needed for places where we need to support configs in a manner21# This is only needed for places where we need to support configs in a manner
18# that the built-in config parser is not sufficent (ie22# that the built-in config parser is not sufficent (ie
1923
=== modified file 'setup.py'
--- setup.py 2016-05-27 21:03:49 +0000
+++ setup.py 2016-06-10 21:20:56 +0000
@@ -196,7 +196,6 @@
196if sys.version_info < (3,):196if sys.version_info < (3,):
197 requirements.append('cheetah')197 requirements.append('cheetah')
198198
199
200setuptools.setup(199setuptools.setup(
201 name='cloud-init',200 name='cloud-init',
202 version=get_version(),201 version=get_version(),
203202
=== modified file 'test-requirements.txt'
--- test-requirements.txt 2016-05-24 23:08:14 +0000
+++ test-requirements.txt 2016-06-10 21:20:56 +0000
@@ -2,6 +2,7 @@
2httpretty>=0.7.12httpretty>=0.7.1
3mock3mock
4nose4nose
5unittest2
56
6# Only needed if you want to know the test times7# Only needed if you want to know the test times
7# nose-timer8# nose-timer
89
=== modified file 'tests/unittests/helpers.py'
--- tests/unittests/helpers.py 2016-06-09 08:35:39 +0000
+++ tests/unittests/helpers.py 2016-06-10 21:20:56 +0000
@@ -7,13 +7,11 @@
7import tempfile7import tempfile
8import unittest8import unittest
99
10import mock
10import six11import six
12import unittest2
1113
12try:14try:
13 from unittest import mock
14except ImportError:
15 import mock
16try:
17 from contextlib import ExitStack15 from contextlib import ExitStack
18except ImportError:16except ImportError:
19 from contextlib2 import ExitStack17 from contextlib2 import ExitStack
@@ -21,6 +19,9 @@
21from cloudinit import helpers as ch19from cloudinit import helpers as ch
22from cloudinit import util20from cloudinit import util
2321
22# Used for skipping tests
23SkipTest = unittest2.SkipTest
24
24# Used for detecting different python versions25# Used for detecting different python versions
25PY2 = False26PY2 = False
26PY26 = False27PY26 = False
@@ -44,78 +45,6 @@
44 if _PY_MINOR == 4 and _PY_MICRO < 3:45 if _PY_MINOR == 4 and _PY_MICRO < 3:
45 FIX_HTTPRETTY = True46 FIX_HTTPRETTY = True
4647
47if PY26:
48 # For now add these on, taken from python 2.7 + slightly adjusted. Drop
49 # all this once Python 2.6 is dropped as a minimum requirement.
50 class TestCase(unittest.TestCase):
51 def setUp(self):
52 super(TestCase, self).setUp()
53 self.__all_cleanups = ExitStack()
54
55 def tearDown(self):
56 self.__all_cleanups.close()
57 unittest.TestCase.tearDown(self)
58
59 def addCleanup(self, function, *args, **kws):
60 self.__all_cleanups.callback(function, *args, **kws)
61
62 def assertIs(self, expr1, expr2, msg=None):
63 if expr1 is not expr2:
64 standardMsg = '%r is not %r' % (expr1, expr2)
65 self.fail(self._formatMessage(msg, standardMsg))
66
67 def assertIn(self, member, container, msg=None):
68 if member not in container:
69 standardMsg = '%r not found in %r' % (member, container)
70 self.fail(self._formatMessage(msg, standardMsg))
71
72 def assertNotIn(self, member, container, msg=None):
73 if member in container:
74 standardMsg = '%r unexpectedly found in %r'
75 standardMsg = standardMsg % (member, container)
76 self.fail(self._formatMessage(msg, standardMsg))
77
78 def assertIsNone(self, value, msg=None):
79 if value is not None:
80 standardMsg = '%r is not None'
81 standardMsg = standardMsg % (value)
82 self.fail(self._formatMessage(msg, standardMsg))
83
84 def assertIsInstance(self, obj, cls, msg=None):
85 """Same as self.assertTrue(isinstance(obj, cls)), with a nicer
86 default message."""
87 if not isinstance(obj, cls):
88 standardMsg = '%s is not an instance of %r' % (repr(obj), cls)
89 self.fail(self._formatMessage(msg, standardMsg))
90
91 def assertDictContainsSubset(self, expected, actual, msg=None):
92 missing = []
93 mismatched = []
94 for k, v in expected.items():
95 if k not in actual:
96 missing.append(k)
97 elif actual[k] != v:
98 mismatched.append('%r, expected: %r, actual: %r'
99 % (k, v, actual[k]))
100
101 if len(missing) == 0 and len(mismatched) == 0:
102 return
103
104 standardMsg = ''
105 if missing:
106 standardMsg = 'Missing: %r' % ','.join(m for m in missing)
107 if mismatched:
108 if standardMsg:
109 standardMsg += '; '
110 standardMsg += 'Mismatched values: %s' % ','.join(mismatched)
111
112 self.fail(self._formatMessage(msg, standardMsg))
113
114
115else:
116 class TestCase(unittest.TestCase):
117 pass
118
11948
120# Makes the old path start49# Makes the old path start
121# with new base instead of whatever50# with new base instead of whatever
@@ -151,6 +80,10 @@
151 return wrapper80 return wrapper
15281
15382
83class TestCase(unittest2.TestCase):
84 pass
85
86
154class ResourceUsingTestCase(TestCase):87class ResourceUsingTestCase(TestCase):
155 def setUp(self):88 def setUp(self):
156 super(ResourceUsingTestCase, self).setUp()89 super(ResourceUsingTestCase, self).setUp()
15790
=== modified file 'tests/unittests/test__init__.py'
--- tests/unittests/test__init__.py 2015-07-21 16:02:44 +0000
+++ tests/unittests/test__init__.py 2016-06-10 21:20:56 +0000
@@ -1,16 +1,6 @@
1import os1import os
2import shutil2import shutil
3import tempfile3import tempfile
4import unittest
5
6try:
7 from unittest import mock
8except ImportError:
9 import mock
10try:
11 from contextlib import ExitStack
12except ImportError:
13 from contextlib2 import ExitStack
144
15from cloudinit import handlers5from cloudinit import handlers
16from cloudinit import helpers6from cloudinit import helpers
@@ -18,7 +8,7 @@
18from cloudinit import url_helper8from cloudinit import url_helper
19from cloudinit import util9from cloudinit import util
2010
21from .helpers import TestCase11from .helpers import TestCase, ExitStack, mock
2212
2313
24class FakeModule(handlers.Handler):14class FakeModule(handlers.Handler):
@@ -99,9 +89,10 @@
99 self.assertEqual(self.data['handlercount'], 0)89 self.assertEqual(self.data['handlercount'], 0)
10090
10191
102class TestHandlerHandlePart(unittest.TestCase):92class TestHandlerHandlePart(TestCase):
10393
104 def setUp(self):94 def setUp(self):
95 super(TestHandlerHandlePart, self).setUp()
105 self.data = "fake data"96 self.data = "fake data"
106 self.ctype = "fake ctype"97 self.ctype = "fake ctype"
107 self.filename = "fake filename"98 self.filename = "fake filename"
@@ -177,7 +168,7 @@
177 self.data, self.ctype, self.filename, self.payload)168 self.data, self.ctype, self.filename, self.payload)
178169
179170
180class TestCmdlineUrl(unittest.TestCase):171class TestCmdlineUrl(TestCase):
181 def test_invalid_content(self):172 def test_invalid_content(self):
182 url = "http://example.com/foo"173 url = "http://example.com/foo"
183 key = "mykey"174 key = "mykey"
184175
=== modified file 'tests/unittests/test_cli.py'
--- tests/unittests/test_cli.py 2016-05-12 20:43:11 +0000
+++ tests/unittests/test_cli.py 2016-06-10 21:20:56 +0000
@@ -4,12 +4,7 @@
4import sys4import sys
55
6from . import helpers as test_helpers6from . import helpers as test_helpers
77mock = test_helpers.mock
8try:
9 from unittest import mock
10except ImportError:
11 import mock
12
138
14BIN_CLOUDINIT = "bin/cloud-init"9BIN_CLOUDINIT = "bin/cloud-init"
1510
1611
=== modified file 'tests/unittests/test_cs_util.py'
--- tests/unittests/test_cs_util.py 2015-02-11 01:50:23 +0000
+++ tests/unittests/test_cs_util.py 2016-06-10 21:20:56 +0000
@@ -1,21 +1,9 @@
1from __future__ import print_function1from __future__ import print_function
22
3import sys3from . import helpers as test_helpers
4import unittest
54
6from cloudinit.cs_utils import Cepko5from cloudinit.cs_utils import Cepko
76
8try:
9 skip = unittest.skip
10except AttributeError:
11 # Python 2.6. Doesn't have to be high fidelity.
12 def skip(reason):
13 def decorator(func):
14 def wrapper(*args, **kws):
15 print(reason, file=sys.stderr)
16 return wrapper
17 return decorator
18
197
20SERVER_CONTEXT = {8SERVER_CONTEXT = {
21 "cpu": 1000,9 "cpu": 1000,
@@ -43,18 +31,9 @@
43# 2015-01-22 BAW: This test is completely useless because it only ever tests31# 2015-01-22 BAW: This test is completely useless because it only ever tests
44# the CepkoMock object. Even in its original form, I don't think it ever32# the CepkoMock object. Even in its original form, I don't think it ever
45# touched the underlying Cepko class methods.33# touched the underlying Cepko class methods.
46@skip('This test is completely useless')34class CepkoResultTests(test_helpers.TestCase):
47class CepkoResultTests(unittest.TestCase):
48 def setUp(self):35 def setUp(self):
49 pass36 raise test_helpers.SkipTest('This test is completely useless')
50 # self.mocked = self.mocker.replace("cloudinit.cs_utils.Cepko",
51 # spec=CepkoMock,
52 # count=False,
53 # passthrough=False)
54 # self.mocked()
55 # self.mocker.result(CepkoMock())
56 # self.mocker.replay()
57 # self.c = Cepko()
5837
59 def test_getitem(self):38 def test_getitem(self):
60 result = self.c.all()39 result = self.c.all()
6140
=== modified file 'tests/unittests/test_datasource/test_azure.py'
--- tests/unittests/test_datasource/test_azure.py 2016-05-12 20:43:11 +0000
+++ tests/unittests/test_datasource/test_azure.py 2016-06-10 21:20:56 +0000
@@ -1,16 +1,8 @@
1from cloudinit import helpers1from cloudinit import helpers
2from cloudinit.util import b64e, decode_binary, load_file2from cloudinit.util import b64e, decode_binary, load_file
3from cloudinit.sources import DataSourceAzure3from cloudinit.sources import DataSourceAzure
4from ..helpers import TestCase, populate_dir
54
6try:5from ..helpers import TestCase, populate_dir, mock, ExitStack, PY26, SkipTest
7 from unittest import mock
8except ImportError:
9 import mock
10try:
11 from contextlib import ExitStack
12except ImportError:
13 from contextlib2 import ExitStack
146
15import crypt7import crypt
16import os8import os
@@ -83,6 +75,8 @@
8375
84 def setUp(self):76 def setUp(self):
85 super(TestAzureDataSource, self).setUp()77 super(TestAzureDataSource, self).setUp()
78 if PY26:
79 raise SkipTest("Does not work on python 2.6")
86 self.tmp = tempfile.mkdtemp()80 self.tmp = tempfile.mkdtemp()
87 self.addCleanup(shutil.rmtree, self.tmp)81 self.addCleanup(shutil.rmtree, self.tmp)
8882
8983
=== modified file 'tests/unittests/test_datasource/test_azure_helper.py'
--- tests/unittests/test_datasource/test_azure_helper.py 2016-06-10 19:03:24 +0000
+++ tests/unittests/test_datasource/test_azure_helper.py 2016-06-10 21:20:56 +0000
@@ -2,17 +2,7 @@
22
3from cloudinit.sources.helpers import azure as azure_helper3from cloudinit.sources.helpers import azure as azure_helper
44
5from ..helpers import TestCase5from ..helpers import ExitStack, mock, TestCase
6
7try:
8 from unittest import mock
9except ImportError:
10 import mock
11
12try:
13 from contextlib import ExitStack
14except ImportError:
15 from contextlib2 import ExitStack
166
177
18GOAL_STATE_TEMPLATE = """\8GOAL_STATE_TEMPLATE = """\
199
=== modified file 'tests/unittests/test_datasource/test_cloudsigma.py'
--- tests/unittests/test_datasource/test_cloudsigma.py 2015-01-27 01:02:31 +0000
+++ tests/unittests/test_datasource/test_cloudsigma.py 2016-06-10 21:20:56 +0000
@@ -1,4 +1,5 @@
1# coding: utf-81# coding: utf-8
2
2import copy3import copy
34
4from cloudinit.cs_utils import Cepko5from cloudinit.cs_utils import Cepko
@@ -6,7 +7,6 @@
67
7from .. import helpers as test_helpers8from .. import helpers as test_helpers
89
9
10SERVER_CONTEXT = {10SERVER_CONTEXT = {
11 "cpu": 1000,11 "cpu": 1000,
12 "cpus_instead_of_cores": False,12 "cpus_instead_of_cores": False,
1313
=== modified file 'tests/unittests/test_datasource/test_cloudstack.py'
--- tests/unittests/test_datasource/test_cloudstack.py 2016-05-12 20:43:11 +0000
+++ tests/unittests/test_datasource/test_cloudstack.py 2016-06-10 21:20:56 +0000
@@ -1,16 +1,7 @@
1from cloudinit import helpers1from cloudinit import helpers
2from cloudinit.sources.DataSourceCloudStack import DataSourceCloudStack2from cloudinit.sources.DataSourceCloudStack import DataSourceCloudStack
33
4from ..helpers import TestCase4from ..helpers import TestCase, mock, ExitStack
5
6try:
7 from unittest import mock
8except ImportError:
9 import mock
10try:
11 from contextlib import ExitStack
12except ImportError:
13 from contextlib2 import ExitStack
145
156
16class TestCloudStackPasswordFetching(TestCase):7class TestCloudStackPasswordFetching(TestCase):
178
=== modified file 'tests/unittests/test_datasource/test_configdrive.py'
--- tests/unittests/test_datasource/test_configdrive.py 2016-06-03 18:58:51 +0000
+++ tests/unittests/test_datasource/test_configdrive.py 2016-06-10 21:20:56 +0000
@@ -5,23 +5,15 @@
5import six5import six
6import tempfile6import tempfile
77
8try:
9 from unittest import mock
10except ImportError:
11 import mock
12try:
13 from contextlib import ExitStack
14except ImportError:
15 from contextlib2 import ExitStack
16
17from cloudinit import helpers8from cloudinit import helpers
18from cloudinit import net9from cloudinit.net import eni
10from cloudinit.net import network_state
19from cloudinit import settings11from cloudinit import settings
20from cloudinit.sources import DataSourceConfigDrive as ds12from cloudinit.sources import DataSourceConfigDrive as ds
21from cloudinit.sources.helpers import openstack13from cloudinit.sources.helpers import openstack
22from cloudinit import util14from cloudinit import util
2315
24from ..helpers import TestCase16from ..helpers import TestCase, ExitStack, mock
2517
2618
27PUBKEY = u'ssh-rsa AAAAB3NzaC1....sIkJhq8wdX+4I3A4cYbYP ubuntu@server-460\n'19PUBKEY = u'ssh-rsa AAAAB3NzaC1....sIkJhq8wdX+4I3A4cYbYP ubuntu@server-460\n'
@@ -115,6 +107,7 @@
115 'fa:16:3e:d4:57:ad': 'enp0s2',107 'fa:16:3e:d4:57:ad': 'enp0s2',
116 'fa:16:3e:dd:50:9a': 'foo1',108 'fa:16:3e:dd:50:9a': 'foo1',
117 'fa:16:3e:a8:14:69': 'foo2',109 'fa:16:3e:a8:14:69': 'foo2',
110 'fa:16:3e:ed:9a:59': 'foo3',
118}111}
119112
120CFG_DRIVE_FILES_V2 = {113CFG_DRIVE_FILES_V2 = {
@@ -377,35 +370,150 @@
377 util.find_devs_with = orig_find_devs_with370 util.find_devs_with = orig_find_devs_with
378 util.is_partition = orig_is_partition371 util.is_partition = orig_is_partition
379372
380 def test_pubkeys_v2(self):373 @mock.patch('cloudinit.sources.DataSourceConfigDrive.on_first_boot')
374 def test_pubkeys_v2(self, on_first_boot):
381 """Verify that public-keys work in config-drive-v2."""375 """Verify that public-keys work in config-drive-v2."""
382 populate_dir(self.tmp, CFG_DRIVE_FILES_V2)376 populate_dir(self.tmp, CFG_DRIVE_FILES_V2)
383 myds = cfg_ds_from_dir(self.tmp)377 myds = cfg_ds_from_dir(self.tmp)
384 self.assertEqual(myds.get_public_ssh_keys(),378 self.assertEqual(myds.get_public_ssh_keys(),
385 [OSTACK_META['public_keys']['mykey']])379 [OSTACK_META['public_keys']['mykey']])
386380
387 def test_network_data_is_found(self):381
382class TestNetJson(TestCase):
383 def setUp(self):
384 super(TestNetJson, self).setUp()
385 self.tmp = tempfile.mkdtemp()
386 self.addCleanup(shutil.rmtree, self.tmp)
387 self.maxDiff = None
388
389 @mock.patch('cloudinit.sources.DataSourceConfigDrive.on_first_boot')
390 def test_network_data_is_found(self, on_first_boot):
388 """Verify that network_data is present in ds in config-drive-v2."""391 """Verify that network_data is present in ds in config-drive-v2."""
389 populate_dir(self.tmp, CFG_DRIVE_FILES_V2)392 populate_dir(self.tmp, CFG_DRIVE_FILES_V2)
390 myds = cfg_ds_from_dir(self.tmp)393 myds = cfg_ds_from_dir(self.tmp)
391 self.assertEqual(myds.network_json, NETWORK_DATA)394 self.assertIsNotNone(myds.network_json)
392395
393 def test_network_config_is_converted(self):396 @mock.patch('cloudinit.sources.DataSourceConfigDrive.on_first_boot')
397 def test_network_config_is_converted(self, on_first_boot):
394 """Verify that network_data is converted and present on ds object."""398 """Verify that network_data is converted and present on ds object."""
395 populate_dir(self.tmp, CFG_DRIVE_FILES_V2)399 populate_dir(self.tmp, CFG_DRIVE_FILES_V2)
396 myds = cfg_ds_from_dir(self.tmp)400 myds = cfg_ds_from_dir(self.tmp)
397 network_config = ds.convert_network_data(NETWORK_DATA,401 network_config = openstack.convert_net_json(NETWORK_DATA,
398 known_macs=KNOWN_MACS)402 known_macs=KNOWN_MACS)
399 self.assertEqual(myds.network_config, network_config)403 self.assertEqual(myds.network_config, network_config)
400404
405 def test_network_config_conversions(self):
406 """Tests a bunch of input network json and checks the
407 expected conversions."""
408 in_datas = [
409 NETWORK_DATA,
410 {
411 'services': [{'type': 'dns', 'address': '172.19.0.12'}],
412 'networks': [{
413 'network_id': 'dacd568d-5be6-4786-91fe-750c374b78b4',
414 'type': 'ipv4',
415 'netmask': '255.255.252.0',
416 'link': 'tap1a81968a-79',
417 'routes': [{
418 'netmask': '0.0.0.0',
419 'network': '0.0.0.0',
420 'gateway': '172.19.3.254',
421 }],
422 'ip_address': '172.19.1.34',
423 'id': 'network0',
424 }],
425 'links': [{
426 'type': 'bridge',
427 'vif_id': '1a81968a-797a-400f-8a80-567f997eb93f',
428 'ethernet_mac_address': 'fa:16:3e:ed:9a:59',
429 'id': 'tap1a81968a-79',
430 'mtu': None,
431 }],
432 },
433 ]
434 out_datas = [
435 {
436 'version': 1,
437 'config': [
438 {
439 'subnets': [{'type': 'dhcp4'}],
440 'type': 'physical',
441 'mac_address': 'fa:16:3e:69:b0:58',
442 'name': 'enp0s1',
443 'mtu': None,
444 },
445 {
446 'subnets': [{'type': 'dhcp4'}],
447 'type': 'physical',
448 'mac_address': 'fa:16:3e:d4:57:ad',
449 'name': 'enp0s2',
450 'mtu': None,
451 },
452 {
453 'subnets': [{'type': 'dhcp4'}],
454 'type': 'physical',
455 'mac_address': 'fa:16:3e:05:30:fe',
456 'name': 'nic0',
457 'mtu': None,
458 },
459 {
460 'type': 'nameserver',
461 'address': '199.204.44.24',
462 },
463 {
464 'type': 'nameserver',
465 'address': '199.204.47.54',
466 }
467 ],
468
469 },
470 {
471 'version': 1,
472 'config': [
473 {
474 'name': 'foo3',
475 'mac_address': 'fa:16:3e:ed:9a:59',
476 'mtu': None,
477 'type': 'physical',
478 'subnets': [
479 {
480 'address': '172.19.1.34',
481 'netmask': '255.255.252.0',
482 'type': 'static',
483 'ipv4': True,
484 'routes': [{
485 'gateway': '172.19.3.254',
486 'netmask': '0.0.0.0',
487 'network': '0.0.0.0',
488 }],
489 }
490 ]
491 },
492 {
493 'type': 'nameserver',
494 'address': '172.19.0.12',
495 }
496 ],
497 },
498 ]
499 for in_data, out_data in zip(in_datas, out_datas):
500 conv_data = openstack.convert_net_json(in_data,
501 known_macs=KNOWN_MACS)
502 self.assertEqual(out_data, conv_data)
503
401504
402class TestConvertNetworkData(TestCase):505class TestConvertNetworkData(TestCase):
506 def setUp(self):
507 super(TestConvertNetworkData, self).setUp()
508 self.tmp = tempfile.mkdtemp()
509 self.addCleanup(shutil.rmtree, self.tmp)
510
403 def _getnames_in_config(self, ncfg):511 def _getnames_in_config(self, ncfg):
404 return set([n['name'] for n in ncfg['config']512 return set([n['name'] for n in ncfg['config']
405 if n['type'] == 'physical'])513 if n['type'] == 'physical'])
406514
407 def test_conversion_fills_names(self):515 def test_conversion_fills_names(self):
408 ncfg = ds.convert_network_data(NETWORK_DATA, known_macs=KNOWN_MACS)516 ncfg = openstack.convert_net_json(NETWORK_DATA, known_macs=KNOWN_MACS)
409 expected = set(['nic0', 'enp0s1', 'enp0s2'])517 expected = set(['nic0', 'enp0s1', 'enp0s2'])
410 found = self._getnames_in_config(ncfg)518 found = self._getnames_in_config(ncfg)
411 self.assertEqual(found, expected)519 self.assertEqual(found, expected)
@@ -417,18 +525,19 @@
417 'fa:16:3e:69:b0:58': 'ens1'})525 'fa:16:3e:69:b0:58': 'ens1'})
418 get_interfaces_by_mac.return_value = macs526 get_interfaces_by_mac.return_value = macs
419527
420 ncfg = ds.convert_network_data(NETWORK_DATA)528 ncfg = openstack.convert_net_json(NETWORK_DATA)
421 expected = set(['nic0', 'ens1', 'enp0s2'])529 expected = set(['nic0', 'ens1', 'enp0s2'])
422 found = self._getnames_in_config(ncfg)530 found = self._getnames_in_config(ncfg)
423 self.assertEqual(found, expected)531 self.assertEqual(found, expected)
424532
425 def test_convert_raises_value_error_on_missing_name(self):533 def test_convert_raises_value_error_on_missing_name(self):
426 macs = {'aa:aa:aa:aa:aa:00': 'ens1'}534 macs = {'aa:aa:aa:aa:aa:00': 'ens1'}
427 self.assertRaises(ValueError, ds.convert_network_data,535 self.assertRaises(ValueError, openstack.convert_net_json,
428 NETWORK_DATA, known_macs=macs)536 NETWORK_DATA, known_macs=macs)
429537
430 def test_conversion_with_route(self):538 def test_conversion_with_route(self):
431 ncfg = ds.convert_network_data(NETWORK_DATA_2, known_macs=KNOWN_MACS)539 ncfg = openstack.convert_net_json(NETWORK_DATA_2,
540 known_macs=KNOWN_MACS)
432 # not the best test, but see that we get a route in the541 # not the best test, but see that we get a route in the
433 # network config and that it gets rendered to an ENI file542 # network config and that it gets rendered to an ENI file
434 routes = []543 routes = []
@@ -438,15 +547,23 @@
438 self.assertIn(547 self.assertIn(
439 {'network': '0.0.0.0', 'netmask': '0.0.0.0', 'gateway': '2.2.2.9'},548 {'network': '0.0.0.0', 'netmask': '0.0.0.0', 'gateway': '2.2.2.9'},
440 routes)549 routes)
441 eni = net.render_interfaces(net.parse_net_config_data(ncfg))550 eni_renderer = eni.Renderer()
442 self.assertIn("route add default gw 2.2.2.9", eni)551 eni_renderer.render_network_state(
552 self.tmp, network_state.parse_net_config_data(ncfg))
553 with open(os.path.join(self.tmp, "etc",
554 "network", "interfaces"), 'r') as f:
555 eni_rendering = f.read()
556 self.assertIn("route add default gw 2.2.2.9", eni_rendering)
443557
444558
445def cfg_ds_from_dir(seed_d):559def cfg_ds_from_dir(seed_d):
446 found = ds.read_config_drive(seed_d)
447 cfg_ds = ds.DataSourceConfigDrive(settings.CFG_BUILTIN, None,560 cfg_ds = ds.DataSourceConfigDrive(settings.CFG_BUILTIN, None,
448 helpers.Paths({}))561 helpers.Paths({}))
449 populate_ds_from_read_config(cfg_ds, seed_d, found)562 cfg_ds.seed_dir = seed_d
563 cfg_ds.known_macs = KNOWN_MACS.copy()
564 if not cfg_ds.get_data():
565 raise RuntimeError("Data source did not extract itself from"
566 " seed directory %s" % seed_d)
450 return cfg_ds567 return cfg_ds
451568
452569
@@ -460,7 +577,7 @@
460 cfg_ds.userdata_raw = results.get('userdata')577 cfg_ds.userdata_raw = results.get('userdata')
461 cfg_ds.version = results.get('version')578 cfg_ds.version = results.get('version')
462 cfg_ds.network_json = results.get('networkdata')579 cfg_ds.network_json = results.get('networkdata')
463 cfg_ds._network_config = ds.convert_network_data(580 cfg_ds._network_config = openstack.convert_net_json(
464 cfg_ds.network_json, known_macs=KNOWN_MACS)581 cfg_ds.network_json, known_macs=KNOWN_MACS)
465582
466583
@@ -474,7 +591,6 @@
474 mode = "w"591 mode = "w"
475 else:592 else:
476 mode = "wb"593 mode = "wb"
477
478 with open(path, mode) as fp:594 with open(path, mode) as fp:
479 fp.write(content)595 fp.write(content)
480596
481597
=== modified file 'tests/unittests/test_datasource/test_nocloud.py'
--- tests/unittests/test_datasource/test_nocloud.py 2016-05-12 20:43:11 +0000
+++ tests/unittests/test_datasource/test_nocloud.py 2016-06-10 21:20:56 +0000
@@ -1,23 +1,14 @@
1from cloudinit import helpers1from cloudinit import helpers
2from cloudinit.sources import DataSourceNoCloud2from cloudinit.sources import DataSourceNoCloud
3from cloudinit import util3from cloudinit import util
4from ..helpers import TestCase, populate_dir4from ..helpers import TestCase, populate_dir, mock, ExitStack
55
6import os6import os
7import shutil7import shutil
8import tempfile8import tempfile
9import unittest9
10import yaml10import yaml
1111
12try:
13 from unittest import mock
14except ImportError:
15 import mock
16try:
17 from contextlib import ExitStack
18except ImportError:
19 from contextlib2 import ExitStack
20
2112
22class TestNoCloudDataSource(TestCase):13class TestNoCloudDataSource(TestCase):
2314
@@ -139,7 +130,7 @@
139 self.assertTrue(ret)130 self.assertTrue(ret)
140131
141132
142class TestParseCommandLineData(unittest.TestCase):133class TestParseCommandLineData(TestCase):
143134
144 def test_parse_cmdline_data_valid(self):135 def test_parse_cmdline_data_valid(self):
145 ds_id = "ds=nocloud"136 ds_id = "ds=nocloud"
146137
=== modified file 'tests/unittests/test_datasource/test_smartos.py'
--- tests/unittests/test_datasource/test_smartos.py 2016-06-10 18:49:34 +0000
+++ tests/unittests/test_datasource/test_smartos.py 2016-06-10 21:20:56 +0000
@@ -34,11 +34,12 @@
34import tempfile34import tempfile
35import uuid35import uuid
3636
37import serial37from cloudinit import serial
38from cloudinit.sources import DataSourceSmartOS
39
38import six40import six
3941
40from cloudinit import helpers as c_helpers42from cloudinit import helpers as c_helpers
41from cloudinit.sources import DataSourceSmartOS
42from cloudinit.util import b64e43from cloudinit.util import b64e
4344
44from ..helpers import mock, FilesystemMockingTestCase, TestCase45from ..helpers import mock, FilesystemMockingTestCase, TestCase
@@ -380,6 +381,7 @@
380381
381 def setUp(self):382 def setUp(self):
382 super(TestJoyentMetadataClient, self).setUp()383 super(TestJoyentMetadataClient, self).setUp()
384
383 self.serial = mock.MagicMock(spec=serial.Serial)385 self.serial = mock.MagicMock(spec=serial.Serial)
384 self.request_id = 0xabcdef12386 self.request_id = 0xabcdef12
385 self.metadata_value = 'value'387 self.metadata_value = 'value'
386388
=== modified file 'tests/unittests/test_net.py'
--- tests/unittests/test_net.py 2016-05-12 20:43:11 +0000
+++ tests/unittests/test_net.py 2016-06-10 21:20:56 +0000
@@ -1,6 +1,10 @@
1from cloudinit import net1from cloudinit import net
2from cloudinit.net import cmdline
3from cloudinit.net import eni
4from cloudinit.net import network_state
2from cloudinit import util5from cloudinit import util
36
7from .helpers import mock
4from .helpers import TestCase8from .helpers import TestCase
59
6import base6410import base64
@@ -9,6 +13,8 @@
9import io13import io
10import json14import json
11import os15import os
16import shutil
17import tempfile
1218
13DHCP_CONTENT_1 = """19DHCP_CONTENT_1 = """
14DEVICE='eth0'20DEVICE='eth0'
@@ -69,21 +75,87 @@
 }
 
 
-class TestNetConfigParsing(TestCase):
+class TestEniNetRendering(TestCase):
+
+    @mock.patch("cloudinit.net.sys_dev_path")
+    @mock.patch("cloudinit.net.sys_netdev_info")
+    @mock.patch("cloudinit.net.get_devicelist")
+    def test_default_generation(self, mock_get_devicelist,
+                                mock_sys_netdev_info,
+                                mock_sys_dev_path):
+        mock_get_devicelist.return_value = ['eth1000', 'lo']
+
+        dev_characteristics = {
+            'eth1000': {
+                "bridge": False,
+                "carrier": False,
+                "dormant": False,
+                "operstate": "down",
+                "address": "07-1C-C6-75-A4-BE",
+            }
+        }
+
+        def netdev_info(name, field):
+            return dev_characteristics[name][field]
+
+        mock_sys_netdev_info.side_effect = netdev_info
+
+        tmp_dir = tempfile.mkdtemp()
+        self.addCleanup(shutil.rmtree, tmp_dir)
+
+        def sys_dev_path(devname, path=""):
+            return tmp_dir + devname + "/" + path
+
+        for dev in dev_characteristics:
+            os.makedirs(os.path.join(tmp_dir, dev))
+            with open(os.path.join(tmp_dir, dev, 'operstate'), 'w') as fh:
+                fh.write("down")
+
+        mock_sys_dev_path.side_effect = sys_dev_path
+
+        network_cfg = net.generate_fallback_config()
+        ns = network_state.parse_net_config_data(network_cfg,
+                                                 skip_broken=False)
+
+        render_dir = os.path.join(tmp_dir, "render")
+        os.makedirs(render_dir)
+
+        renderer = eni.Renderer()
+        renderer.render_network_state(render_dir, ns,
+                                      eni="interfaces",
+                                      links_prefix=None,
+                                      netrules=None)
+
+        self.assertTrue(os.path.exists(os.path.join(render_dir,
+                                                    'interfaces')))
+        with open(os.path.join(render_dir, 'interfaces')) as fh:
+            contents = fh.read()
+
+        expected = """
+auto lo
+iface lo inet loopback
+
+auto eth1000
+iface eth1000 inet dhcp
+"""
+        self.assertEqual(expected.lstrip(), contents.lstrip())
+
+
+class TestCmdlineConfigParsing(TestCase):
     simple_cfg = {
         'config': [{"type": "physical", "name": "eth0",
                     "mac_address": "c0:d6:9f:2c:e8:80",
                     "subnets": [{"type": "dhcp"}]}]}
 
-    def test_klibc_convert_dhcp(self):
-        found = net._klibc_to_config_entry(DHCP_CONTENT_1)
+    def test_cmdline_convert_dhcp(self):
+        found = cmdline._klibc_to_config_entry(DHCP_CONTENT_1)
         self.assertEqual(found, ('eth0', DHCP_EXPECTED_1))
 
-    def test_klibc_convert_static(self):
-        found = net._klibc_to_config_entry(STATIC_CONTENT_1)
+    def test_cmdline_convert_static(self):
+        found = cmdline._klibc_to_config_entry(STATIC_CONTENT_1)
         self.assertEqual(found, ('eth1', STATIC_EXPECTED_1))
 
-    def test_config_from_klibc_net_cfg(self):
+    def test_config_from_cmdline_net_cfg(self):
         files = []
         pairs = (('net-eth0.cfg', DHCP_CONTENT_1),
                  ('net-eth1.cfg', STATIC_CONTENT_1))
@@ -104,21 +176,22 @@
             files.append(fp)
             util.write_file(fp, content)
 
-        found = net.config_from_klibc_net_cfg(files=files, mac_addrs=macs)
+        found = cmdline.config_from_klibc_net_cfg(files=files,
+                                                  mac_addrs=macs)
         self.assertEqual(found, expected)
 
     def test_cmdline_with_b64(self):
         data = base64.b64encode(json.dumps(self.simple_cfg).encode())
         encoded_text = data.decode()
-        cmdline = 'ro network-config=' + encoded_text + ' root=foo'
-        found = net.read_kernel_cmdline_config(cmdline=cmdline)
+        raw_cmdline = 'ro network-config=' + encoded_text + ' root=foo'
+        found = cmdline.read_kernel_cmdline_config(cmdline=raw_cmdline)
         self.assertEqual(found, self.simple_cfg)
 
     def test_cmdline_with_b64_gz(self):
         data = _gzip_data(json.dumps(self.simple_cfg).encode())
         encoded_text = base64.b64encode(data).decode()
-        cmdline = 'ro network-config=' + encoded_text + ' root=foo'
-        found = net.read_kernel_cmdline_config(cmdline=cmdline)
+        raw_cmdline = 'ro network-config=' + encoded_text + ' root=foo'
+        found = cmdline.read_kernel_cmdline_config(cmdline=raw_cmdline)
         self.assertEqual(found, self.simple_cfg)
 
 
 
=== modified file 'tests/unittests/test_reporting.py'
--- tests/unittests/test_reporting.py 2016-05-12 20:43:11 +0000
+++ tests/unittests/test_reporting.py 2016-06-10 21:20:56 +0000
@@ -7,7 +7,9 @@
 from cloudinit.reporting import events
 from cloudinit.reporting import handlers
 
-from .helpers import (mock, TestCase)
+import mock
+
+from .helpers import TestCase
 
 
 def _fake_registry():
 
=== modified file 'tests/unittests/test_rh_subscription.py'
--- tests/unittests/test_rh_subscription.py 2016-05-12 20:43:11 +0000
+++ tests/unittests/test_rh_subscription.py 2016-06-10 21:20:56 +0000
@@ -1,12 +1,24 @@
+# This program is free software: you can redistribute it and/or modify
+# it under the terms of the GNU General Public License version 3, as
+# published by the Free Software Foundation.
+#
+# This program is distributed in the hope that it will be useful,
+# but WITHOUT ANY WARRANTY; without even the implied warranty of
+# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
+# GNU General Public License for more details.
+#
+# You should have received a copy of the GNU General Public License
+# along with this program. If not, see <http://www.gnu.org/licenses/>.
+
+import logging
+
 from cloudinit.config import cc_rh_subscription
 from cloudinit import util
 
-import logging
-import mock
-import unittest
-
-
-class GoodTests(unittest.TestCase):
+from .helpers import TestCase, mock
+
+
+class GoodTests(TestCase):
     def setUp(self):
         super(GoodTests, self).setUp()
         self.name = "cc_rh_subscription"
@@ -93,7 +105,7 @@
         self.assertEqual(self.SM._sub_man_cli.call_count, 9)
 
 
-class TestBadInput(unittest.TestCase):
+class TestBadInput(TestCase):
     name = "cc_rh_subscription"
     cloud_init = None
     log = logging.getLogger("bad_tests")
 
=== modified file 'tox.ini'
--- tox.ini 2016-06-02 00:51:03 +0000
+++ tox.ini 2016-06-10 21:20:56 +0000
@@ -5,10 +5,9 @@
 [testenv]
 commands = python -m nose {posargs:tests}
 deps = -r{toxinidir}/test-requirements.txt
     -r{toxinidir}/requirements.txt
-
-[testenv:py3]
-basepython = python3
+setenv =
+    LC_ALL = en_US.utf-8
 
 [testenv:flake8]
 basepython = python3
@@ -18,15 +17,11 @@
 setenv =
     LC_ALL = en_US.utf-8
 
+[testenv:py3]
+basepython = python3
+
 [testenv:py26]
 commands = nosetests {posargs:tests}
-deps =
-    contextlib2
-    httpretty>=0.7.1
-    mock
-    nose
-    pep8==1.5.7
-    pyflakes
 setenv =
     LC_ALL = C
 