Merge lp:~harlowja/cloud-init/cloud-init-net-refactor into lp:~cloud-init-dev/cloud-init/trunk

Proposed by Joshua Harlow
Status: Merged
Merged at revision: 1232
Proposed branch: lp:~harlowja/cloud-init/cloud-init-net-refactor
Merge into: lp:~cloud-init-dev/cloud-init/trunk
Diff against target: 3349 lines (+1330/-1196)
32 files modified
cloudinit/cs_utils.py (+2/-1)
cloudinit/distros/debian.py (+6/-4)
cloudinit/net/__init__.py (+29/-654)
cloudinit/net/cmdline.py (+203/-0)
cloudinit/net/eni.py (+457/-0)
cloudinit/net/network_state.py (+121/-160)
cloudinit/serial.py (+50/-0)
cloudinit/sources/DataSourceAzure.py (+1/-1)
cloudinit/sources/DataSourceConfigDrive.py (+7/-145)
cloudinit/sources/DataSourceSmartOS.py (+1/-3)
cloudinit/sources/helpers/openstack.py (+145/-0)
cloudinit/stages.py (+2/-1)
cloudinit/util.py (+13/-3)
packages/bddeb (+1/-0)
requirements.txt (+6/-2)
setup.py (+0/-1)
test-requirements.txt (+1/-0)
tests/unittests/helpers.py (+9/-76)
tests/unittests/test__init__.py (+4/-13)
tests/unittests/test_cli.py (+1/-6)
tests/unittests/test_cs_util.py (+3/-24)
tests/unittests/test_datasource/test_azure.py (+3/-9)
tests/unittests/test_datasource/test_azure_helper.py (+1/-11)
tests/unittests/test_datasource/test_cloudsigma.py (+1/-1)
tests/unittests/test_datasource/test_cloudstack.py (+1/-10)
tests/unittests/test_datasource/test_configdrive.py (+143/-27)
tests/unittests/test_datasource/test_nocloud.py (+3/-12)
tests/unittests/test_datasource/test_smartos.py (+4/-2)
tests/unittests/test_net.py (+84/-11)
tests/unittests/test_reporting.py (+3/-1)
tests/unittests/test_rh_subscription.py (+19/-7)
tox.ini (+6/-11)
To merge this branch: bzr merge lp:~harlowja/cloud-init/cloud-init-net-refactor
Reviewer: Scott Moser (Approve)
Review via email: mp+293957@code.launchpad.net

Commit message

Refactor a large part of the networking code.

Splits distro-specific code out into its own files so that
other kinds of networking configuration can be written by the
various distros that cloud-init supports.

It also isolates some of the cloudinit.net code so that it can
be more easily used on its own (and incorporated into other
projects such as curtin).

Along the way it adds tests so that the networking code and its
format conversion processes can be tested (to some level) going
forward.
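
For orientation, here is a minimal sketch of the new call pattern, mirroring the cloudinit/distros/debian.py hunk in the preview diff below; 'netconfig' is an illustrative placeholder for a version-1 network config dict, and the file paths shown are simply the renderer's defaults:

    from cloudinit import net
    from cloudinit.net import eni

    # 'netconfig' is assumed to be a version-1 network config dict.
    ns = net.parse_net_config_data(netconfig)

    # Rendering now goes through a distro-owned Renderer object rather than
    # a module-level net.render_network_state() function.
    renderer = eni.Renderer()
    renderer.render_network_state(
        target="/", network_state=ns,
        eni="etc/network/interfaces",
        links_prefix="etc/systemd/network/50-cloud-init-",
        netrules="etc/udev/rules.d/70-persistent-net.rules")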

1216. By Scott Moser

fix timestamp in reporting events.

If no timestamp was passed into a ReportingEvent, then the default was
used. That default was 'time.time()' which was evaluated once only at
import time.
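
The underlying pitfall is Python evaluating default argument values only once; a minimal standalone illustration (not the actual cloud-init class) is:

    import time

    class Event(object):
        # Buggy: time.time() runs once, at import/definition time, so every
        # Event created without an explicit timestamp shares that one value.
        def __init__(self, name, timestamp=time.time()):
            self.name = name
            self.timestamp = timestamp

    class FixedEvent(object):
        # Fixed: defer the call until the event is actually constructed.
        def __init__(self, name, timestamp=None):
            self.name = name
            self.timestamp = timestamp if timestamp is not None else time.time()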

1217. By Joshua Harlow

Enable flake8 and fix a large number of reported issues

1218. By Matt Fischer

Document improvements for runcmd/bootcmd

Note that runcmd runs only on first boot.
Note that strings need to be quoted, not escaped.
Switch bootcmd list text to use - not * like everything else.

1219. By Joshua Harlow

Remerge against head/master

1220. By Joshua Harlow

Fix up tests and flake8 warnings

1221. By Joshua Harlow

Revert some of the alterations of the tox.ini file

1222. By Joshua Harlow

Remove 26 from default tox.ini listing

1223. By Joshua Harlow

Fix load -> read

Revision history for this message
Scott Moser (smoser) wrote :

Josh,

Thanks for your work and cleanup on this.
My concerns at the moment are
a.) There is a lot of churn on the code, and I need to get something into 16.04 to fix some bugs (bug 1577982, bug 1579130, bug 1577844), so I'd like to hold off on this for that.

b.) If we want 'net' to be standalone, then I'd prefer it not have 'from cloudinit import ...', as that indicates reliance on cloudinit, or at least some required conversion before external use. You have experience with this through oslo, so I guess I'm fine if there is a sane path forward.

c.) Non-standard-library usage in 'net': even if this is just six, I will need to support curtin running on Ubuntu 12.04 (python-six at 1.1) for the next 12 months at least.

Some nitpicks inline below.
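
(For context on point (b): in this branch the new cloudinit/net/cmdline.py pulls in sibling helpers via package-relative imports while still importing cloudinit.util directly, which is exactly the coupling discussed above; see the preview diff below.)

    # Package-relative imports keep cloudinit.net usable as a unit on its own...
    from . import get_devicelist
    from . import sys_netdev_info

    # ...but this import is the remaining tie back to the cloudinit tree.
    from cloudinit import util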

1224. By Joshua Harlow

Fix up some of the net usage and restore imports and add a mini compat module

1225. By Joshua Harlow

Rebase against master

1226. By Joshua Harlow

Get cmdline working again

1227. By Joshua Harlow

For now just remove compat.py module

Let's reduce the size of this change for now.

1228. By Joshua Harlow

Less tweaking of tox.ini

1229. By Joshua Harlow

Less less tweaking of tox.ini

Revision history for this message
Scott Moser (smoser) wrote :

Hi. Grab http://paste.ubuntu.com/17185178/, remove 'skip_first_boot', and then go ahead and merge into trunk.

Revision history for this message
Scott Moser (smoser) :
review: Approve
1230. By Joshua Harlow

Add unittest2 to builder list

1231. By Joshua Harlow

Just do all the imports on one line

1232. By Joshua Harlow

Just mock 'on_first_boot' vs special argument
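
For illustration, the pattern looks roughly like the sketch below; this assumes 'on_first_boot' is the module-level helper in DataSourceConfigDrive.py, and the real test changes live in tests/unittests/test_datasource/test_configdrive.py:

    import mock

    from cloudinit.sources import DataSourceConfigDrive as dscd

    # Patch on_first_boot for the duration of a test instead of threading a
    # special 'skip_first_boot' argument through the code under test.
    with mock.patch.object(dscd, 'on_first_boot') as m:
        pass  # exercise the datasource here; 'm' records any calls made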

Preview Diff

1=== modified file 'cloudinit/cs_utils.py'
2--- cloudinit/cs_utils.py 2015-07-16 09:12:24 +0000
3+++ cloudinit/cs_utils.py 2016-06-10 21:20:56 +0000
4@@ -33,7 +33,8 @@
5 import json
6 import platform
7
8-import serial
9+from cloudinit import serial
10+
11
12 # these high timeouts are necessary as read may read a lot of data.
13 READ_TIMEOUT = 60
14
15=== modified file 'cloudinit/distros/debian.py'
16--- cloudinit/distros/debian.py 2016-05-12 17:56:26 +0000
17+++ cloudinit/distros/debian.py 2016-06-10 21:20:56 +0000
18@@ -26,6 +26,7 @@
19 from cloudinit import helpers
20 from cloudinit import log as logging
21 from cloudinit import net
22+from cloudinit.net import eni
23 from cloudinit import util
24
25 from cloudinit.distros.parsers.hostname import HostnameConf
26@@ -56,6 +57,7 @@
27 # should only happen say once per instance...)
28 self._runner = helpers.Runners(paths)
29 self.osfamily = 'debian'
30+ self._net_renderer = eni.Renderer()
31
32 def apply_locale(self, locale, out_fn=None):
33 if not out_fn:
34@@ -80,10 +82,10 @@
35
36 def _write_network_config(self, netconfig):
37 ns = net.parse_net_config_data(netconfig)
38- net.render_network_state(target="/", network_state=ns,
39- eni=self.network_conf_fn,
40- links_prefix=self.links_prefix,
41- netrules=None)
42+ self._net_renderer.render_network_state(
43+ target="/", network_state=ns,
44+ eni=self.network_conf_fn, links_prefix=self.links_prefix,
45+ netrules=None)
46 _maybe_remove_legacy_eth0()
47
48 return []
49
50=== modified file 'cloudinit/net/__init__.py'
51--- cloudinit/net/__init__.py 2016-06-03 18:58:51 +0000
52+++ cloudinit/net/__init__.py 2016-06-10 21:20:56 +0000
53@@ -16,43 +16,18 @@
54 # You should have received a copy of the GNU Affero General Public License
55 # along with Curtin. If not, see <http://www.gnu.org/licenses/>.
56
57-import base64
58 import errno
59-import glob
60-import gzip
61-import io
62+import logging
63 import os
64 import re
65-import shlex
66
67-from cloudinit import log as logging
68-from cloudinit.net import network_state
69-from cloudinit.net.udev import generate_udev_rule
70 from cloudinit import util
71
72 LOG = logging.getLogger(__name__)
73-
74 SYS_CLASS_NET = "/sys/class/net/"
75+DEFAULT_PRIMARY_INTERFACE = 'eth0'
76 LINKS_FNAME_PREFIX = "etc/systemd/network/50-cloud-init-"
77
78-NET_CONFIG_OPTIONS = [
79- "address", "netmask", "broadcast", "network", "metric", "gateway",
80- "pointtopoint", "media", "mtu", "hostname", "leasehours", "leasetime",
81- "vendor", "client", "bootfile", "server", "hwaddr", "provider", "frame",
82- "netnum", "endpoint", "local", "ttl",
83-]
84-
85-NET_CONFIG_COMMANDS = [
86- "pre-up", "up", "post-up", "down", "pre-down", "post-down",
87-]
88-
89-NET_CONFIG_BRIDGE_OPTIONS = [
90- "bridge_ageing", "bridge_bridgeprio", "bridge_fd", "bridge_gcinit",
91- "bridge_hello", "bridge_maxage", "bridge_maxwait", "bridge_stp",
92-]
93-
94-DEFAULT_PRIMARY_INTERFACE = 'eth0'
95-
96
97 def sys_dev_path(devname, path=""):
98 return SYS_CLASS_NET + devname + "/" + path
99@@ -60,23 +35,22 @@
100
101 def read_sys_net(devname, path, translate=None, enoent=None, keyerror=None):
102 try:
103- contents = ""
104- with open(sys_dev_path(devname, path), "r") as fp:
105- contents = fp.read().strip()
106- if translate is None:
107- return contents
108-
109- try:
110- return translate.get(contents)
111- except KeyError:
112- LOG.debug("found unexpected value '%s' in '%s/%s'", contents,
113- devname, path)
114- if keyerror is not None:
115- return keyerror
116- raise
117- except OSError as e:
118- if e.errno == errno.ENOENT and enoent is not None:
119- return enoent
120+ contents = util.load_file(sys_dev_path(devname, path))
121+ except (OSError, IOError) as e:
122+ if getattr(e, 'errno', None) == errno.ENOENT:
123+ if enoent is not None:
124+ return enoent
125+ raise
126+ contents = contents.strip()
127+ if translate is None:
128+ return contents
129+ try:
130+ return translate.get(contents)
131+ except KeyError:
132+ LOG.debug("found unexpected value '%s' in '%s/%s'", contents,
133+ devname, path)
134+ if keyerror is not None:
135+ return keyerror
136 raise
137
138
139@@ -127,509 +101,7 @@
140
141
142 class ParserError(Exception):
143- """Raised when parser has issue parsing the interfaces file."""
144-
145-
146-def parse_deb_config_data(ifaces, contents, src_dir, src_path):
147- """Parses the file contents, placing result into ifaces.
148-
149- '_source_path' is added to every dictionary entry to define which file
150- the configration information came from.
151-
152- :param ifaces: interface dictionary
153- :param contents: contents of interfaces file
154- :param src_dir: directory interfaces file was located
155- :param src_path: file path the `contents` was read
156- """
157- currif = None
158- for line in contents.splitlines():
159- line = line.strip()
160- if line.startswith('#'):
161- continue
162- split = line.split(' ')
163- option = split[0]
164- if option == "source-directory":
165- parsed_src_dir = split[1]
166- if not parsed_src_dir.startswith("/"):
167- parsed_src_dir = os.path.join(src_dir, parsed_src_dir)
168- for expanded_path in glob.glob(parsed_src_dir):
169- dir_contents = os.listdir(expanded_path)
170- dir_contents = [
171- os.path.join(expanded_path, path)
172- for path in dir_contents
173- if (os.path.isfile(os.path.join(expanded_path, path)) and
174- re.match("^[a-zA-Z0-9_-]+$", path) is not None)
175- ]
176- for entry in dir_contents:
177- with open(entry, "r") as fp:
178- src_data = fp.read().strip()
179- abs_entry = os.path.abspath(entry)
180- parse_deb_config_data(
181- ifaces, src_data,
182- os.path.dirname(abs_entry), abs_entry)
183- elif option == "source":
184- new_src_path = split[1]
185- if not new_src_path.startswith("/"):
186- new_src_path = os.path.join(src_dir, new_src_path)
187- for expanded_path in glob.glob(new_src_path):
188- with open(expanded_path, "r") as fp:
189- src_data = fp.read().strip()
190- abs_path = os.path.abspath(expanded_path)
191- parse_deb_config_data(
192- ifaces, src_data,
193- os.path.dirname(abs_path), abs_path)
194- elif option == "auto":
195- for iface in split[1:]:
196- if iface not in ifaces:
197- ifaces[iface] = {
198- # Include the source path this interface was found in.
199- "_source_path": src_path
200- }
201- ifaces[iface]['auto'] = True
202- elif option == "iface":
203- iface, family, method = split[1:4]
204- if iface not in ifaces:
205- ifaces[iface] = {
206- # Include the source path this interface was found in.
207- "_source_path": src_path
208- }
209- elif 'family' in ifaces[iface]:
210- raise ParserError(
211- "Interface %s can only be defined once. "
212- "Re-defined in '%s'." % (iface, src_path))
213- ifaces[iface]['family'] = family
214- ifaces[iface]['method'] = method
215- currif = iface
216- elif option == "hwaddress":
217- if split[1] == "ether":
218- val = split[2]
219- else:
220- val = split[1]
221- ifaces[currif]['hwaddress'] = val
222- elif option in NET_CONFIG_OPTIONS:
223- ifaces[currif][option] = split[1]
224- elif option in NET_CONFIG_COMMANDS:
225- if option not in ifaces[currif]:
226- ifaces[currif][option] = []
227- ifaces[currif][option].append(' '.join(split[1:]))
228- elif option.startswith('dns-'):
229- if 'dns' not in ifaces[currif]:
230- ifaces[currif]['dns'] = {}
231- if option == 'dns-search':
232- ifaces[currif]['dns']['search'] = []
233- for domain in split[1:]:
234- ifaces[currif]['dns']['search'].append(domain)
235- elif option == 'dns-nameservers':
236- ifaces[currif]['dns']['nameservers'] = []
237- for server in split[1:]:
238- ifaces[currif]['dns']['nameservers'].append(server)
239- elif option.startswith('bridge_'):
240- if 'bridge' not in ifaces[currif]:
241- ifaces[currif]['bridge'] = {}
242- if option in NET_CONFIG_BRIDGE_OPTIONS:
243- bridge_option = option.replace('bridge_', '', 1)
244- ifaces[currif]['bridge'][bridge_option] = split[1]
245- elif option == "bridge_ports":
246- ifaces[currif]['bridge']['ports'] = []
247- for iface in split[1:]:
248- ifaces[currif]['bridge']['ports'].append(iface)
249- elif option == "bridge_hw" and split[1].lower() == "mac":
250- ifaces[currif]['bridge']['mac'] = split[2]
251- elif option == "bridge_pathcost":
252- if 'pathcost' not in ifaces[currif]['bridge']:
253- ifaces[currif]['bridge']['pathcost'] = {}
254- ifaces[currif]['bridge']['pathcost'][split[1]] = split[2]
255- elif option == "bridge_portprio":
256- if 'portprio' not in ifaces[currif]['bridge']:
257- ifaces[currif]['bridge']['portprio'] = {}
258- ifaces[currif]['bridge']['portprio'][split[1]] = split[2]
259- elif option.startswith('bond-'):
260- if 'bond' not in ifaces[currif]:
261- ifaces[currif]['bond'] = {}
262- bond_option = option.replace('bond-', '', 1)
263- ifaces[currif]['bond'][bond_option] = split[1]
264- for iface in ifaces.keys():
265- if 'auto' not in ifaces[iface]:
266- ifaces[iface]['auto'] = False
267-
268-
269-def parse_deb_config(path):
270- """Parses a debian network configuration file."""
271- ifaces = {}
272- with open(path, "r") as fp:
273- contents = fp.read().strip()
274- abs_path = os.path.abspath(path)
275- parse_deb_config_data(
276- ifaces, contents,
277- os.path.dirname(abs_path), abs_path)
278- return ifaces
279-
280-
281-def parse_net_config_data(net_config):
282- """Parses the config, returns NetworkState dictionary
283-
284- :param net_config: curtin network config dict
285- """
286- state = None
287- if 'version' in net_config and 'config' in net_config:
288- ns = network_state.NetworkState(version=net_config.get('version'),
289- config=net_config.get('config'))
290- ns.parse_config()
291- state = ns.network_state
292-
293- return state
294-
295-
296-def parse_net_config(path):
297- """Parses a curtin network configuration file and
298- return network state"""
299- ns = None
300- net_config = util.read_conf(path)
301- if 'network' in net_config:
302- ns = parse_net_config_data(net_config.get('network'))
303-
304- return ns
305-
306-
307-def _load_shell_content(content, add_empty=False, empty_val=None):
308- """Given shell like syntax (key=value\nkey2=value2\n) in content
309- return the data in dictionary form. If 'add_empty' is True
310- then add entries in to the returned dictionary for 'VAR='
311- variables. Set their value to empty_val."""
312- data = {}
313- for line in shlex.split(content):
314- key, value = line.split("=", 1)
315- if not value:
316- value = empty_val
317- if add_empty or value:
318- data[key] = value
319-
320- return data
321-
322-
323-def _klibc_to_config_entry(content, mac_addrs=None):
324- """Convert a klibc writtent shell content file to a 'config' entry
325- When ip= is seen on the kernel command line in debian initramfs
326- and networking is brought up, ipconfig will populate
327- /run/net-<name>.cfg.
328-
329- The files are shell style syntax, and examples are in the tests
330- provided here. There is no good documentation on this unfortunately.
331-
332- DEVICE=<name> is expected/required and PROTO should indicate if
333- this is 'static' or 'dhcp'.
334- """
335-
336- if mac_addrs is None:
337- mac_addrs = {}
338-
339- data = _load_shell_content(content)
340- try:
341- name = data['DEVICE']
342- except KeyError:
343- raise ValueError("no 'DEVICE' entry in data")
344-
345- # ipconfig on precise does not write PROTO
346- proto = data.get('PROTO')
347- if not proto:
348- if data.get('filename'):
349- proto = 'dhcp'
350- else:
351- proto = 'static'
352-
353- if proto not in ('static', 'dhcp'):
354- raise ValueError("Unexpected value for PROTO: %s" % proto)
355-
356- iface = {
357- 'type': 'physical',
358- 'name': name,
359- 'subnets': [],
360- }
361-
362- if name in mac_addrs:
363- iface['mac_address'] = mac_addrs[name]
364-
365- # originally believed there might be IPV6* values
366- for v, pre in (('ipv4', 'IPV4'),):
367- # if no IPV4ADDR or IPV6ADDR, then go on.
368- if pre + "ADDR" not in data:
369- continue
370- subnet = {'type': proto, 'control': 'manual'}
371-
372- # these fields go right on the subnet
373- for key in ('NETMASK', 'BROADCAST', 'GATEWAY'):
374- if pre + key in data:
375- subnet[key.lower()] = data[pre + key]
376-
377- dns = []
378- # handle IPV4DNS0 or IPV6DNS0
379- for nskey in ('DNS0', 'DNS1'):
380- ns = data.get(pre + nskey)
381- # verify it has something other than 0.0.0.0 (or ipv6)
382- if ns and len(ns.strip(":.0")):
383- dns.append(data[pre + nskey])
384- if dns:
385- subnet['dns_nameservers'] = dns
386- # add search to both ipv4 and ipv6, as it has no namespace
387- search = data.get('DOMAINSEARCH')
388- if search:
389- if ',' in search:
390- subnet['dns_search'] = search.split(",")
391- else:
392- subnet['dns_search'] = search.split()
393-
394- iface['subnets'].append(subnet)
395-
396- return name, iface
397-
398-
399-def config_from_klibc_net_cfg(files=None, mac_addrs=None):
400- if files is None:
401- files = glob.glob('/run/net*.conf')
402-
403- entries = []
404- names = {}
405- for cfg_file in files:
406- name, entry = _klibc_to_config_entry(util.load_file(cfg_file),
407- mac_addrs=mac_addrs)
408- if name in names:
409- raise ValueError(
410- "device '%s' defined multiple times: %s and %s" % (
411- name, names[name], cfg_file))
412-
413- names[name] = cfg_file
414- entries.append(entry)
415- return {'config': entries, 'version': 1}
416-
417-
418-def render_persistent_net(network_state):
419- '''Given state, emit udev rules to map mac to ifname.'''
420- content = ""
421- interfaces = network_state.get('interfaces')
422- for iface in interfaces.values():
423- # for physical interfaces write out a persist net udev rule
424- if iface['type'] == 'physical' and \
425- 'name' in iface and iface.get('mac_address'):
426- content += generate_udev_rule(iface['name'],
427- iface['mac_address'])
428-
429- return content
430-
431-
432-# TODO: switch valid_map based on mode inet/inet6
433-def iface_add_subnet(iface, subnet):
434- content = ""
435- valid_map = [
436- 'address',
437- 'netmask',
438- 'broadcast',
439- 'metric',
440- 'gateway',
441- 'pointopoint',
442- 'mtu',
443- 'scope',
444- 'dns_search',
445- 'dns_nameservers',
446- ]
447- for key, value in subnet.items():
448- if value and key in valid_map:
449- if type(value) == list:
450- value = " ".join(value)
451- if '_' in key:
452- key = key.replace('_', '-')
453- content += " {} {}\n".format(key, value)
454-
455- return content
456-
457-
458-# TODO: switch to valid_map for attrs
459-def iface_add_attrs(iface):
460- content = ""
461- ignore_map = [
462- 'control',
463- 'index',
464- 'inet',
465- 'mode',
466- 'name',
467- 'subnets',
468- 'type',
469- ]
470- if iface['type'] not in ['bond', 'bridge', 'vlan']:
471- ignore_map.append('mac_address')
472-
473- for key, value in iface.items():
474- if value and key not in ignore_map:
475- if type(value) == list:
476- value = " ".join(value)
477- content += " {} {}\n".format(key, value)
478-
479- return content
480-
481-
482-def render_route(route, indent=""):
483- """When rendering routes for an iface, in some cases applying a route
484- may result in the route command returning non-zero which produces
485- some confusing output for users manually using ifup/ifdown[1]. To
486- that end, we will optionally include an '|| true' postfix to each
487- route line allowing users to work with ifup/ifdown without using
488- --force option.
489-
490- We may at somepoint not want to emit this additional postfix, and
491- add a 'strict' flag to this function. When called with strict=True,
492- then we will not append the postfix.
493-
494- 1. http://askubuntu.com/questions/168033/
495- how-to-set-static-routes-in-ubuntu-server
496- """
497- content = ""
498- up = indent + "post-up route add"
499- down = indent + "pre-down route del"
500- eol = " || true\n"
501- mapping = {
502- 'network': '-net',
503- 'netmask': 'netmask',
504- 'gateway': 'gw',
505- 'metric': 'metric',
506- }
507- if route['network'] == '0.0.0.0' and route['netmask'] == '0.0.0.0':
508- default_gw = " default gw %s" % route['gateway']
509- content += up + default_gw + eol
510- content += down + default_gw + eol
511- elif route['network'] == '::' and route['netmask'] == 0:
512- # ipv6!
513- default_gw = " -A inet6 default gw %s" % route['gateway']
514- content += up + default_gw + eol
515- content += down + default_gw + eol
516- else:
517- route_line = ""
518- for k in ['network', 'netmask', 'gateway', 'metric']:
519- if k in route:
520- route_line += " %s %s" % (mapping[k], route[k])
521- content += up + route_line + eol
522- content += down + route_line + eol
523-
524- return content
525-
526-
527-def iface_start_entry(iface, index):
528- fullname = iface['name']
529- if index != 0:
530- fullname += ":%s" % index
531-
532- control = iface['control']
533- if control == "auto":
534- cverb = "auto"
535- elif control in ("hotplug",):
536- cverb = "allow-" + control
537- else:
538- cverb = "# control-" + control
539-
540- subst = iface.copy()
541- subst.update({'fullname': fullname, 'cverb': cverb})
542-
543- return ("{cverb} {fullname}\n"
544- "iface {fullname} {inet} {mode}\n").format(**subst)
545-
546-
547-def render_interfaces(network_state):
548- '''Given state, emit etc/network/interfaces content.'''
549-
550- content = ""
551- interfaces = network_state.get('interfaces')
552- ''' Apply a sort order to ensure that we write out
553- the physical interfaces first; this is critical for
554- bonding
555- '''
556- order = {
557- 'physical': 0,
558- 'bond': 1,
559- 'bridge': 2,
560- 'vlan': 3,
561- }
562- content += "auto lo\niface lo inet loopback\n"
563- for dnskey, value in network_state.get('dns', {}).items():
564- if len(value):
565- content += " dns-{} {}\n".format(dnskey, " ".join(value))
566-
567- for iface in sorted(interfaces.values(),
568- key=lambda k: (order[k['type']], k['name'])):
569-
570- if content[-2:] != "\n\n":
571- content += "\n"
572- subnets = iface.get('subnets', {})
573- if subnets:
574- for index, subnet in zip(range(0, len(subnets)), subnets):
575- if content[-2:] != "\n\n":
576- content += "\n"
577- iface['index'] = index
578- iface['mode'] = subnet['type']
579- iface['control'] = subnet.get('control', 'auto')
580- if iface['mode'].endswith('6'):
581- iface['inet'] += '6'
582- elif iface['mode'] == 'static' and ":" in subnet['address']:
583- iface['inet'] += '6'
584- if iface['mode'].startswith('dhcp'):
585- iface['mode'] = 'dhcp'
586-
587- content += iface_start_entry(iface, index)
588- content += iface_add_subnet(iface, subnet)
589- content += iface_add_attrs(iface)
590- for route in subnet.get('routes', []):
591- content += render_route(route, indent=" ")
592- else:
593- # ifenslave docs say to auto the slave devices
594- if 'bond-master' in iface:
595- content += "auto {name}\n".format(**iface)
596- content += "iface {name} {inet} {mode}\n".format(**iface)
597- content += iface_add_attrs(iface)
598-
599- for route in network_state.get('routes'):
600- content += render_route(route)
601-
602- # global replacements until v2 format
603- content = content.replace('mac_address', 'hwaddress')
604- return content
605-
606-
607-def render_network_state(target, network_state, eni="etc/network/interfaces",
608- links_prefix=LINKS_FNAME_PREFIX,
609- netrules='etc/udev/rules.d/70-persistent-net.rules'):
610-
611- fpeni = os.path.sep.join((target, eni,))
612- util.ensure_dir(os.path.dirname(fpeni))
613- with open(fpeni, 'w+') as f:
614- f.write(render_interfaces(network_state))
615-
616- if netrules:
617- netrules = os.path.sep.join((target, netrules,))
618- util.ensure_dir(os.path.dirname(netrules))
619- with open(netrules, 'w+') as f:
620- f.write(render_persistent_net(network_state))
621-
622- if links_prefix:
623- render_systemd_links(target, network_state, links_prefix)
624-
625-
626-def render_systemd_links(target, network_state,
627- links_prefix=LINKS_FNAME_PREFIX):
628- fp_prefix = os.path.sep.join((target, links_prefix))
629- for f in glob.glob(fp_prefix + "*"):
630- os.unlink(f)
631-
632- interfaces = network_state.get('interfaces')
633- for iface in interfaces.values():
634- if (iface['type'] == 'physical' and 'name' in iface and
635- iface.get('mac_address')):
636- fname = fp_prefix + iface['name'] + ".link"
637- with open(fname, "w") as fp:
638- fp.write("\n".join([
639- "[Match]",
640- "MACAddress=" + iface['mac_address'],
641- "",
642- "[Link]",
643- "Name=" + iface['name'],
644- ""
645- ]))
646+ """Raised when a parser has issue parsing a file/content."""
647
648
649 def is_disabled_cfg(cfg):
650@@ -642,7 +114,6 @@
651 if not os.path.exists(os.path.join(SYS_CLASS_NET, name)):
652 raise OSError("%s: interface does not exist in %s" %
653 (name, SYS_CLASS_NET))
654-
655 fname = os.path.join(SYS_CLASS_NET, name, field)
656 if not os.path.exists(fname):
657 raise OSError("%s: could not find sysfs entry: %s" % (name, fname))
658@@ -722,108 +193,6 @@
659 return nconf
660
661
662-def _decomp_gzip(blob, strict=True):
663- # decompress blob. raise exception if not compressed unless strict=False.
664- with io.BytesIO(blob) as iobuf:
665- gzfp = None
666- try:
667- gzfp = gzip.GzipFile(mode="rb", fileobj=iobuf)
668- return gzfp.read()
669- except IOError:
670- if strict:
671- raise
672- return blob
673- finally:
674- if gzfp:
675- gzfp.close()
676-
677-
678-def _b64dgz(b64str, gzipped="try"):
679- # decode a base64 string. If gzipped is true, transparently uncompresss
680- # if gzipped is 'try', then try gunzip, returning the original on fail.
681- try:
682- blob = base64.b64decode(b64str)
683- except TypeError:
684- raise ValueError("Invalid base64 text: %s" % b64str)
685-
686- if not gzipped:
687- return blob
688-
689- return _decomp_gzip(blob, strict=gzipped != "try")
690-
691-
692-def read_kernel_cmdline_config(files=None, mac_addrs=None, cmdline=None):
693- if cmdline is None:
694- cmdline = util.get_cmdline()
695-
696- if 'network-config=' in cmdline:
697- data64 = None
698- for tok in cmdline.split():
699- if tok.startswith("network-config="):
700- data64 = tok.split("=", 1)[1]
701- if data64:
702- return util.load_yaml(_b64dgz(data64))
703-
704- if 'ip=' not in cmdline:
705- return None
706-
707- if mac_addrs is None:
708- mac_addrs = {k: sys_netdev_info(k, 'address')
709- for k in get_devicelist()}
710-
711- return config_from_klibc_net_cfg(files=files, mac_addrs=mac_addrs)
712-
713-
714-def convert_eni_data(eni_data):
715- # return a network config representation of what is in eni_data
716- ifaces = {}
717- parse_deb_config_data(ifaces, eni_data, src_dir=None, src_path=None)
718- return _ifaces_to_net_config_data(ifaces)
719-
720-
721-def _ifaces_to_net_config_data(ifaces):
722- """Return network config that represents the ifaces data provided.
723- ifaces = parse_deb_config("/etc/network/interfaces")
724- config = ifaces_to_net_config_data(ifaces)
725- state = parse_net_config_data(config)."""
726- devs = {}
727- for name, data in ifaces.items():
728- # devname is 'eth0' for name='eth0:1'
729- devname = name.partition(":")[0]
730- if devname == "lo":
731- # currently provding 'lo' in network config results in duplicate
732- # entries. in rendered interfaces file. so skip it.
733- continue
734- if devname not in devs:
735- devs[devname] = {'type': 'physical', 'name': devname,
736- 'subnets': []}
737- # this isnt strictly correct, but some might specify
738- # hwaddress on a nic for matching / declaring name.
739- if 'hwaddress' in data:
740- devs[devname]['mac_address'] = data['hwaddress']
741- subnet = {'_orig_eni_name': name, 'type': data['method']}
742- if data.get('auto'):
743- subnet['control'] = 'auto'
744- else:
745- subnet['control'] = 'manual'
746-
747- if data.get('method') == 'static':
748- subnet['address'] = data['address']
749-
750- for copy_key in ('netmask', 'gateway', 'broadcast'):
751- if copy_key in data:
752- subnet[copy_key] = data[copy_key]
753-
754- if 'dns' in data:
755- for n in ('nameservers', 'search'):
756- if n in data['dns'] and data['dns'][n]:
757- subnet['dns_' + n] = data['dns'][n]
758- devs[devname]['subnets'].append(subnet)
759-
760- return {'version': 1,
761- 'config': [devs[d] for d in sorted(devs)]}
762-
763-
764 def apply_network_config_names(netcfg, strict_present=True, strict_busy=True):
765 """read the network config and rename devices accordingly.
766 if strict_present is false, then do not raise exception if no devices
767@@ -839,7 +208,7 @@
768 continue
769 renames.append([mac, name])
770
771- return rename_interfaces(renames)
772+ return _rename_interfaces(renames)
773
774
775 def _get_current_rename_info(check_downable=True):
776@@ -867,8 +236,8 @@
777 return bymac
778
779
780-def rename_interfaces(renames, strict_present=True, strict_busy=True,
781- current_info=None):
782+def _rename_interfaces(renames, strict_present=True, strict_busy=True,
783+ current_info=None):
784 if current_info is None:
785 current_info = _get_current_rename_info()
786
787@@ -979,7 +348,13 @@
788 def get_interfaces_by_mac(devs=None):
789 """Build a dictionary of tuples {mac: name}"""
790 if devs is None:
791- devs = get_devicelist()
792+ try:
793+ devs = get_devicelist()
794+ except OSError as e:
795+ if e.errno == errno.ENOENT:
796+ devs = []
797+ else:
798+ raise
799 ret = {}
800 for name in devs:
801 mac = get_interface_mac(name)
802
803=== added file 'cloudinit/net/cmdline.py'
804--- cloudinit/net/cmdline.py 1970-01-01 00:00:00 +0000
805+++ cloudinit/net/cmdline.py 2016-06-10 21:20:56 +0000
806@@ -0,0 +1,203 @@
807+# Copyright (C) 2013-2014 Canonical Ltd.
808+#
809+# Author: Scott Moser <scott.moser@canonical.com>
810+# Author: Blake Rouse <blake.rouse@canonical.com>
811+#
812+# Curtin is free software: you can redistribute it and/or modify it under
813+# the terms of the GNU Affero General Public License as published by the
814+# Free Software Foundation, either version 3 of the License, or (at your
815+# option) any later version.
816+#
817+# Curtin is distributed in the hope that it will be useful, but WITHOUT ANY
818+# WARRANTY; without even the implied warranty of MERCHANTABILITY or FITNESS
819+# FOR A PARTICULAR PURPOSE. See the GNU Affero General Public License for
820+# more details.
821+#
822+# You should have received a copy of the GNU Affero General Public License
823+# along with Curtin. If not, see <http://www.gnu.org/licenses/>.
824+
825+import base64
826+import glob
827+import gzip
828+import io
829+import shlex
830+import sys
831+
832+import six
833+
834+from . import get_devicelist
835+from . import sys_netdev_info
836+
837+from cloudinit import util
838+
839+PY26 = sys.version_info[0:2] == (2, 6)
840+
841+
842+def _shlex_split(blob):
843+ if PY26 and isinstance(blob, six.text_type):
844+ # Older versions don't support unicode input
845+ blob = blob.encode("utf8")
846+ return shlex.split(blob)
847+
848+
849+def _load_shell_content(content, add_empty=False, empty_val=None):
850+ """Given shell like syntax (key=value\nkey2=value2\n) in content
851+ return the data in dictionary form. If 'add_empty' is True
852+ then add entries in to the returned dictionary for 'VAR='
853+ variables. Set their value to empty_val."""
854+ data = {}
855+ for line in _shlex_split(content):
856+ key, value = line.split("=", 1)
857+ if not value:
858+ value = empty_val
859+ if add_empty or value:
860+ data[key] = value
861+
862+ return data
863+
864+
865+def _klibc_to_config_entry(content, mac_addrs=None):
866+ """Convert a klibc writtent shell content file to a 'config' entry
867+ When ip= is seen on the kernel command line in debian initramfs
868+ and networking is brought up, ipconfig will populate
869+ /run/net-<name>.cfg.
870+
871+ The files are shell style syntax, and examples are in the tests
872+ provided here. There is no good documentation on this unfortunately.
873+
874+ DEVICE=<name> is expected/required and PROTO should indicate if
875+ this is 'static' or 'dhcp'.
876+ """
877+
878+ if mac_addrs is None:
879+ mac_addrs = {}
880+
881+ data = _load_shell_content(content)
882+ try:
883+ name = data['DEVICE']
884+ except KeyError:
885+ raise ValueError("no 'DEVICE' entry in data")
886+
887+ # ipconfig on precise does not write PROTO
888+ proto = data.get('PROTO')
889+ if not proto:
890+ if data.get('filename'):
891+ proto = 'dhcp'
892+ else:
893+ proto = 'static'
894+
895+ if proto not in ('static', 'dhcp'):
896+ raise ValueError("Unexpected value for PROTO: %s" % proto)
897+
898+ iface = {
899+ 'type': 'physical',
900+ 'name': name,
901+ 'subnets': [],
902+ }
903+
904+ if name in mac_addrs:
905+ iface['mac_address'] = mac_addrs[name]
906+
907+ # originally believed there might be IPV6* values
908+ for v, pre in (('ipv4', 'IPV4'),):
909+ # if no IPV4ADDR or IPV6ADDR, then go on.
910+ if pre + "ADDR" not in data:
911+ continue
912+ subnet = {'type': proto, 'control': 'manual'}
913+
914+ # these fields go right on the subnet
915+ for key in ('NETMASK', 'BROADCAST', 'GATEWAY'):
916+ if pre + key in data:
917+ subnet[key.lower()] = data[pre + key]
918+
919+ dns = []
920+ # handle IPV4DNS0 or IPV6DNS0
921+ for nskey in ('DNS0', 'DNS1'):
922+ ns = data.get(pre + nskey)
923+ # verify it has something other than 0.0.0.0 (or ipv6)
924+ if ns and len(ns.strip(":.0")):
925+ dns.append(data[pre + nskey])
926+ if dns:
927+ subnet['dns_nameservers'] = dns
928+ # add search to both ipv4 and ipv6, as it has no namespace
929+ search = data.get('DOMAINSEARCH')
930+ if search:
931+ if ',' in search:
932+ subnet['dns_search'] = search.split(",")
933+ else:
934+ subnet['dns_search'] = search.split()
935+
936+ iface['subnets'].append(subnet)
937+
938+ return name, iface
939+
940+
941+def config_from_klibc_net_cfg(files=None, mac_addrs=None):
942+ if files is None:
943+ files = glob.glob('/run/net*.conf')
944+
945+ entries = []
946+ names = {}
947+ for cfg_file in files:
948+ name, entry = _klibc_to_config_entry(util.load_file(cfg_file),
949+ mac_addrs=mac_addrs)
950+ if name in names:
951+ raise ValueError(
952+ "device '%s' defined multiple times: %s and %s" % (
953+ name, names[name], cfg_file))
954+
955+ names[name] = cfg_file
956+ entries.append(entry)
957+ return {'config': entries, 'version': 1}
958+
959+
960+def _decomp_gzip(blob, strict=True):
961+ # decompress blob. raise exception if not compressed unless strict=False.
962+ with io.BytesIO(blob) as iobuf:
963+ gzfp = None
964+ try:
965+ gzfp = gzip.GzipFile(mode="rb", fileobj=iobuf)
966+ return gzfp.read()
967+ except IOError:
968+ if strict:
969+ raise
970+ return blob
971+ finally:
972+ if gzfp:
973+ gzfp.close()
974+
975+
976+def _b64dgz(b64str, gzipped="try"):
977+ # decode a base64 string. If gzipped is true, transparently uncompresss
978+ # if gzipped is 'try', then try gunzip, returning the original on fail.
979+ try:
980+ blob = base64.b64decode(b64str)
981+ except TypeError:
982+ raise ValueError("Invalid base64 text: %s" % b64str)
983+
984+ if not gzipped:
985+ return blob
986+
987+ return _decomp_gzip(blob, strict=gzipped != "try")
988+
989+
990+def read_kernel_cmdline_config(files=None, mac_addrs=None, cmdline=None):
991+ if cmdline is None:
992+ cmdline = util.get_cmdline()
993+
994+ if 'network-config=' in cmdline:
995+ data64 = None
996+ for tok in cmdline.split():
997+ if tok.startswith("network-config="):
998+ data64 = tok.split("=", 1)[1]
999+ if data64:
1000+ return util.load_yaml(_b64dgz(data64))
1001+
1002+ if 'ip=' not in cmdline:
1003+ return None
1004+
1005+ if mac_addrs is None:
1006+ mac_addrs = dict((k, sys_netdev_info(k, 'address'))
1007+ for k in get_devicelist())
1008+
1009+ return config_from_klibc_net_cfg(files=files, mac_addrs=mac_addrs)
1010
1011=== added file 'cloudinit/net/eni.py'
1012--- cloudinit/net/eni.py 1970-01-01 00:00:00 +0000
1013+++ cloudinit/net/eni.py 2016-06-10 21:20:56 +0000
1014@@ -0,0 +1,457 @@
1015+# vi: ts=4 expandtab
1016+#
1017+# This program is free software: you can redistribute it and/or modify
1018+# it under the terms of the GNU General Public License version 3, as
1019+# published by the Free Software Foundation.
1020+#
1021+# This program is distributed in the hope that it will be useful,
1022+# but WITHOUT ANY WARRANTY; without even the implied warranty of
1023+# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
1024+# GNU General Public License for more details.
1025+#
1026+# You should have received a copy of the GNU General Public License
1027+# along with this program. If not, see <http://www.gnu.org/licenses/>.
1028+
1029+import glob
1030+import os
1031+import re
1032+
1033+from . import LINKS_FNAME_PREFIX
1034+from . import ParserError
1035+
1036+from .udev import generate_udev_rule
1037+
1038+from cloudinit import util
1039+
1040+
1041+NET_CONFIG_COMMANDS = [
1042+ "pre-up", "up", "post-up", "down", "pre-down", "post-down",
1043+]
1044+
1045+NET_CONFIG_BRIDGE_OPTIONS = [
1046+ "bridge_ageing", "bridge_bridgeprio", "bridge_fd", "bridge_gcinit",
1047+ "bridge_hello", "bridge_maxage", "bridge_maxwait", "bridge_stp",
1048+]
1049+
1050+NET_CONFIG_OPTIONS = [
1051+ "address", "netmask", "broadcast", "network", "metric", "gateway",
1052+ "pointtopoint", "media", "mtu", "hostname", "leasehours", "leasetime",
1053+ "vendor", "client", "bootfile", "server", "hwaddr", "provider", "frame",
1054+ "netnum", "endpoint", "local", "ttl",
1055+]
1056+
1057+
1058+# TODO: switch valid_map based on mode inet/inet6
1059+def _iface_add_subnet(iface, subnet):
1060+ content = ""
1061+ valid_map = [
1062+ 'address',
1063+ 'netmask',
1064+ 'broadcast',
1065+ 'metric',
1066+ 'gateway',
1067+ 'pointopoint',
1068+ 'mtu',
1069+ 'scope',
1070+ 'dns_search',
1071+ 'dns_nameservers',
1072+ ]
1073+ for key, value in subnet.items():
1074+ if value and key in valid_map:
1075+ if type(value) == list:
1076+ value = " ".join(value)
1077+ if '_' in key:
1078+ key = key.replace('_', '-')
1079+ content += " {} {}\n".format(key, value)
1080+
1081+ return content
1082+
1083+
1084+# TODO: switch to valid_map for attrs
1085+
1086+def _iface_add_attrs(iface):
1087+ content = ""
1088+ ignore_map = [
1089+ 'control',
1090+ 'index',
1091+ 'inet',
1092+ 'mode',
1093+ 'name',
1094+ 'subnets',
1095+ 'type',
1096+ ]
1097+ if iface['type'] not in ['bond', 'bridge', 'vlan']:
1098+ ignore_map.append('mac_address')
1099+
1100+ for key, value in iface.items():
1101+ if value and key not in ignore_map:
1102+ if type(value) == list:
1103+ value = " ".join(value)
1104+ content += " {} {}\n".format(key, value)
1105+
1106+ return content
1107+
1108+
1109+def _iface_start_entry(iface, index):
1110+ fullname = iface['name']
1111+ if index != 0:
1112+ fullname += ":%s" % index
1113+
1114+ control = iface['control']
1115+ if control == "auto":
1116+ cverb = "auto"
1117+ elif control in ("hotplug",):
1118+ cverb = "allow-" + control
1119+ else:
1120+ cverb = "# control-" + control
1121+
1122+ subst = iface.copy()
1123+ subst.update({'fullname': fullname, 'cverb': cverb})
1124+
1125+ return ("{cverb} {fullname}\n"
1126+ "iface {fullname} {inet} {mode}\n").format(**subst)
1127+
1128+
1129+def _parse_deb_config_data(ifaces, contents, src_dir, src_path):
1130+ """Parses the file contents, placing result into ifaces.
1131+
1132+ '_source_path' is added to every dictionary entry to define which file
1133+ the configration information came from.
1134+
1135+ :param ifaces: interface dictionary
1136+ :param contents: contents of interfaces file
1137+ :param src_dir: directory interfaces file was located
1138+ :param src_path: file path the `contents` was read
1139+ """
1140+ currif = None
1141+ for line in contents.splitlines():
1142+ line = line.strip()
1143+ if line.startswith('#'):
1144+ continue
1145+ split = line.split(' ')
1146+ option = split[0]
1147+ if option == "source-directory":
1148+ parsed_src_dir = split[1]
1149+ if not parsed_src_dir.startswith("/"):
1150+ parsed_src_dir = os.path.join(src_dir, parsed_src_dir)
1151+ for expanded_path in glob.glob(parsed_src_dir):
1152+ dir_contents = os.listdir(expanded_path)
1153+ dir_contents = [
1154+ os.path.join(expanded_path, path)
1155+ for path in dir_contents
1156+ if (os.path.isfile(os.path.join(expanded_path, path)) and
1157+ re.match("^[a-zA-Z0-9_-]+$", path) is not None)
1158+ ]
1159+ for entry in dir_contents:
1160+ with open(entry, "r") as fp:
1161+ src_data = fp.read().strip()
1162+ abs_entry = os.path.abspath(entry)
1163+ _parse_deb_config_data(
1164+ ifaces, src_data,
1165+ os.path.dirname(abs_entry), abs_entry)
1166+ elif option == "source":
1167+ new_src_path = split[1]
1168+ if not new_src_path.startswith("/"):
1169+ new_src_path = os.path.join(src_dir, new_src_path)
1170+ for expanded_path in glob.glob(new_src_path):
1171+ with open(expanded_path, "r") as fp:
1172+ src_data = fp.read().strip()
1173+ abs_path = os.path.abspath(expanded_path)
1174+ _parse_deb_config_data(
1175+ ifaces, src_data,
1176+ os.path.dirname(abs_path), abs_path)
1177+ elif option == "auto":
1178+ for iface in split[1:]:
1179+ if iface not in ifaces:
1180+ ifaces[iface] = {
1181+ # Include the source path this interface was found in.
1182+ "_source_path": src_path
1183+ }
1184+ ifaces[iface]['auto'] = True
1185+ elif option == "iface":
1186+ iface, family, method = split[1:4]
1187+ if iface not in ifaces:
1188+ ifaces[iface] = {
1189+ # Include the source path this interface was found in.
1190+ "_source_path": src_path
1191+ }
1192+ elif 'family' in ifaces[iface]:
1193+ raise ParserError(
1194+ "Interface %s can only be defined once. "
1195+ "Re-defined in '%s'." % (iface, src_path))
1196+ ifaces[iface]['family'] = family
1197+ ifaces[iface]['method'] = method
1198+ currif = iface
1199+ elif option == "hwaddress":
1200+ if split[1] == "ether":
1201+ val = split[2]
1202+ else:
1203+ val = split[1]
1204+ ifaces[currif]['hwaddress'] = val
1205+ elif option in NET_CONFIG_OPTIONS:
1206+ ifaces[currif][option] = split[1]
1207+ elif option in NET_CONFIG_COMMANDS:
1208+ if option not in ifaces[currif]:
1209+ ifaces[currif][option] = []
1210+ ifaces[currif][option].append(' '.join(split[1:]))
1211+ elif option.startswith('dns-'):
1212+ if 'dns' not in ifaces[currif]:
1213+ ifaces[currif]['dns'] = {}
1214+ if option == 'dns-search':
1215+ ifaces[currif]['dns']['search'] = []
1216+ for domain in split[1:]:
1217+ ifaces[currif]['dns']['search'].append(domain)
1218+ elif option == 'dns-nameservers':
1219+ ifaces[currif]['dns']['nameservers'] = []
1220+ for server in split[1:]:
1221+ ifaces[currif]['dns']['nameservers'].append(server)
1222+ elif option.startswith('bridge_'):
1223+ if 'bridge' not in ifaces[currif]:
1224+ ifaces[currif]['bridge'] = {}
1225+ if option in NET_CONFIG_BRIDGE_OPTIONS:
1226+ bridge_option = option.replace('bridge_', '', 1)
1227+ ifaces[currif]['bridge'][bridge_option] = split[1]
1228+ elif option == "bridge_ports":
1229+ ifaces[currif]['bridge']['ports'] = []
1230+ for iface in split[1:]:
1231+ ifaces[currif]['bridge']['ports'].append(iface)
1232+ elif option == "bridge_hw" and split[1].lower() == "mac":
1233+ ifaces[currif]['bridge']['mac'] = split[2]
1234+ elif option == "bridge_pathcost":
1235+ if 'pathcost' not in ifaces[currif]['bridge']:
1236+ ifaces[currif]['bridge']['pathcost'] = {}
1237+ ifaces[currif]['bridge']['pathcost'][split[1]] = split[2]
1238+ elif option == "bridge_portprio":
1239+ if 'portprio' not in ifaces[currif]['bridge']:
1240+ ifaces[currif]['bridge']['portprio'] = {}
1241+ ifaces[currif]['bridge']['portprio'][split[1]] = split[2]
1242+ elif option.startswith('bond-'):
1243+ if 'bond' not in ifaces[currif]:
1244+ ifaces[currif]['bond'] = {}
1245+ bond_option = option.replace('bond-', '', 1)
1246+ ifaces[currif]['bond'][bond_option] = split[1]
1247+ for iface in ifaces.keys():
1248+ if 'auto' not in ifaces[iface]:
1249+ ifaces[iface]['auto'] = False
1250+
1251+
1252+def parse_deb_config(path):
1253+ """Parses a debian network configuration file."""
1254+ ifaces = {}
1255+ with open(path, "r") as fp:
1256+ contents = fp.read().strip()
1257+ abs_path = os.path.abspath(path)
1258+ _parse_deb_config_data(
1259+ ifaces, contents,
1260+ os.path.dirname(abs_path), abs_path)
1261+ return ifaces
1262+
1263+
1264+def convert_eni_data(eni_data):
1265+ # return a network config representation of what is in eni_data
1266+ ifaces = {}
1267+ _parse_deb_config_data(ifaces, eni_data, src_dir=None, src_path=None)
1268+ return _ifaces_to_net_config_data(ifaces)
1269+
1270+
1271+def _ifaces_to_net_config_data(ifaces):
1272+ """Return network config that represents the ifaces data provided.
1273+ ifaces = parse_deb_config("/etc/network/interfaces")
1274+ config = ifaces_to_net_config_data(ifaces)
1275+ state = parse_net_config_data(config)."""
1276+ devs = {}
1277+ for name, data in ifaces.items():
1278+ # devname is 'eth0' for name='eth0:1'
1279+ devname = name.partition(":")[0]
1280+ if devname == "lo":
1281+ # currently provding 'lo' in network config results in duplicate
1282+ # entries. in rendered interfaces file. so skip it.
1283+ continue
1284+ if devname not in devs:
1285+ devs[devname] = {'type': 'physical', 'name': devname,
1286+ 'subnets': []}
1287+ # this isnt strictly correct, but some might specify
1288+ # hwaddress on a nic for matching / declaring name.
1289+ if 'hwaddress' in data:
1290+ devs[devname]['mac_address'] = data['hwaddress']
1291+ subnet = {'_orig_eni_name': name, 'type': data['method']}
1292+ if data.get('auto'):
1293+ subnet['control'] = 'auto'
1294+ else:
1295+ subnet['control'] = 'manual'
1296+
1297+ if data.get('method') == 'static':
1298+ subnet['address'] = data['address']
1299+
1300+ for copy_key in ('netmask', 'gateway', 'broadcast'):
1301+ if copy_key in data:
1302+ subnet[copy_key] = data[copy_key]
1303+
1304+ if 'dns' in data:
1305+ for n in ('nameservers', 'search'):
1306+ if n in data['dns'] and data['dns'][n]:
1307+ subnet['dns_' + n] = data['dns'][n]
1308+ devs[devname]['subnets'].append(subnet)
1309+
1310+ return {'version': 1,
1311+ 'config': [devs[d] for d in sorted(devs)]}
1312+
1313+
1314+class Renderer(object):
1315+ """Renders network information in a /etc/network/interfaces format."""
1316+
1317+ def _render_persistent_net(self, network_state):
1318+ """Given state, emit udev rules to map mac to ifname."""
1319+ content = ""
1320+ interfaces = network_state.get('interfaces')
1321+ for iface in interfaces.values():
1322+ # for physical interfaces write out a persist net udev rule
1323+ if iface['type'] == 'physical' and \
1324+ 'name' in iface and iface.get('mac_address'):
1325+ content += generate_udev_rule(iface['name'],
1326+ iface['mac_address'])
1327+
1328+ return content
1329+
1330+ def _render_route(self, route, indent=""):
1331+ """When rendering routes for an iface, in some cases applying a route
1332+ may result in the route command returning non-zero which produces
1333+ some confusing output for users manually using ifup/ifdown[1]. To
1334+ that end, we will optionally include an '|| true' postfix to each
1335+ route line allowing users to work with ifup/ifdown without using
1336+ --force option.
1337+
1338+ We may at somepoint not want to emit this additional postfix, and
1339+ add a 'strict' flag to this function. When called with strict=True,
1340+ then we will not append the postfix.
1341+
1342+ 1. http://askubuntu.com/questions/168033/
1343+ how-to-set-static-routes-in-ubuntu-server
1344+ """
1345+ content = ""
1346+ up = indent + "post-up route add"
1347+ down = indent + "pre-down route del"
1348+ eol = " || true\n"
1349+ mapping = {
1350+ 'network': '-net',
1351+ 'netmask': 'netmask',
1352+ 'gateway': 'gw',
1353+ 'metric': 'metric',
1354+ }
1355+ if route['network'] == '0.0.0.0' and route['netmask'] == '0.0.0.0':
1356+ default_gw = " default gw %s" % route['gateway']
1357+ content += up + default_gw + eol
1358+ content += down + default_gw + eol
1359+ elif route['network'] == '::' and route['netmask'] == 0:
1360+ # ipv6!
1361+ default_gw = " -A inet6 default gw %s" % route['gateway']
1362+ content += up + default_gw + eol
1363+ content += down + default_gw + eol
1364+ else:
1365+ route_line = ""
1366+ for k in ['network', 'netmask', 'gateway', 'metric']:
1367+ if k in route:
1368+ route_line += " %s %s" % (mapping[k], route[k])
1369+ content += up + route_line + eol
1370+ content += down + route_line + eol
1371+ return content
1372+
1373+ def _render_interfaces(self, network_state):
1374+ '''Given state, emit etc/network/interfaces content.'''
1375+
1376+ content = ""
1377+ interfaces = network_state.get('interfaces')
1378+ ''' Apply a sort order to ensure that we write out
1379+ the physical interfaces first; this is critical for
1380+ bonding
1381+ '''
1382+ order = {
1383+ 'physical': 0,
1384+ 'bond': 1,
1385+ 'bridge': 2,
1386+ 'vlan': 3,
1387+ }
1388+ content += "auto lo\niface lo inet loopback\n"
1389+ for dnskey, value in network_state.get('dns', {}).items():
1390+ if len(value):
1391+ content += " dns-{} {}\n".format(dnskey, " ".join(value))
1392+
1393+ for iface in sorted(interfaces.values(),
1394+ key=lambda k: (order[k['type']], k['name'])):
1395+
1396+ if content[-2:] != "\n\n":
1397+ content += "\n"
1398+ subnets = iface.get('subnets', {})
1399+ if subnets:
1400+ for index, subnet in zip(range(0, len(subnets)), subnets):
1401+ if content[-2:] != "\n\n":
1402+ content += "\n"
1403+ iface['index'] = index
1404+ iface['mode'] = subnet['type']
1405+ iface['control'] = subnet.get('control', 'auto')
1406+ if iface['mode'].endswith('6'):
1407+ iface['inet'] += '6'
1408+ elif (iface['mode'] == 'static'
1409+ and ":" in subnet['address']):
1410+ iface['inet'] += '6'
1411+ if iface['mode'].startswith('dhcp'):
1412+ iface['mode'] = 'dhcp'
1413+
1414+ content += _iface_start_entry(iface, index)
1415+ content += _iface_add_subnet(iface, subnet)
1416+ content += _iface_add_attrs(iface)
1417+ for route in subnet.get('routes', []):
1418+ content += self._render_route(route, indent=" ")
1419+ else:
1420+ # ifenslave docs say to auto the slave devices
1421+ if 'bond-master' in iface:
1422+ content += "auto {name}\n".format(**iface)
1423+ content += "iface {name} {inet} {mode}\n".format(**iface)
1424+ content += _iface_add_attrs(iface)
1425+
1426+ for route in network_state.get('routes'):
1427+ content += self._render_route(route)
1428+
1429+ # global replacements until v2 format
1430+ content = content.replace('mac_address', 'hwaddress')
1431+ return content
1432+
1433+ def render_network_state(
1434+ self, target, network_state, eni="etc/network/interfaces",
1435+ links_prefix=LINKS_FNAME_PREFIX,
1436+ netrules='etc/udev/rules.d/70-persistent-net.rules',
1437+ writer=None):
1438+
1439+ fpeni = os.path.sep.join((target, eni,))
1440+ util.ensure_dir(os.path.dirname(fpeni))
1441+ util.write_file(fpeni, self._render_interfaces(network_state))
1442+
1443+ if netrules:
1444+ netrules = os.path.sep.join((target, netrules,))
1445+ util.ensure_dir(os.path.dirname(netrules))
1446+ util.write_file(netrules,
1447+ self._render_persistent_net(network_state))
1448+
1449+ if links_prefix:
1450+ self._render_systemd_links(target, network_state,
1451+ links_prefix=links_prefix)
1452+
1453+ def _render_systemd_links(self, target, network_state,
1454+ links_prefix=LINKS_FNAME_PREFIX):
1455+ fp_prefix = os.path.sep.join((target, links_prefix))
1456+ for f in glob.glob(fp_prefix + "*"):
1457+ os.unlink(f)
1458+ interfaces = network_state.get('interfaces')
1459+ for iface in interfaces.values():
1460+ if (iface['type'] == 'physical' and 'name' in iface and
1461+ iface.get('mac_address')):
1462+ fname = fp_prefix + iface['name'] + ".link"
1463+ content = "\n".join([
1464+ "[Match]",
1465+ "MACAddress=" + iface['mac_address'],
1466+ "",
1467+ "[Link]",
1468+ "Name=" + iface['name'],
1469+ ""
1470+ ])
1471+ util.write_file(fname, content)
1472
1473=== modified file 'cloudinit/net/network_state.py'
1474--- cloudinit/net/network_state.py 2016-05-12 17:56:26 +0000
1475+++ cloudinit/net/network_state.py 2016-06-10 21:20:56 +0000
1476@@ -15,9 +15,13 @@
1477 # You should have received a copy of the GNU Affero General Public License
1478 # along with Curtin. If not, see <http://www.gnu.org/licenses/>.
1479
1480-from cloudinit import log as logging
1481+import copy
1482+import functools
1483+import logging
1484+
1485+import six
1486+
1487 from cloudinit import util
1488-from cloudinit.util import yaml_dumps as dump_config
1489
1490 LOG = logging.getLogger(__name__)
1491
1492@@ -27,39 +31,104 @@
1493 }
1494
1495
1496+def parse_net_config_data(net_config, skip_broken=True):
1497+ """Parses the config, returns NetworkState object
1498+
1499+ :param net_config: curtin network config dict
1500+ """
1501+ state = None
1502+ if 'version' in net_config and 'config' in net_config:
1503+ ns = NetworkState(version=net_config.get('version'),
1504+ config=net_config.get('config'))
1505+ ns.parse_config(skip_broken=skip_broken)
1506+ state = ns.network_state
1507+ return state
1508+
1509+
1510+def parse_net_config(path, skip_broken=True):
1511+ """Parses a curtin network configuration file and
1512+ return network state"""
1513+ ns = None
1514+ net_config = util.read_conf(path)
1515+ if 'network' in net_config:
1516+ ns = parse_net_config_data(net_config.get('network'),
1517+ skip_broken=skip_broken)
1518+ return ns
1519+
1520+
1521 def from_state_file(state_file):
1522 network_state = None
1523 state = util.read_conf(state_file)
1524 network_state = NetworkState()
1525 network_state.load(state)
1526-
1527 return network_state
1528
1529
1530+def diff_keys(expected, actual):
1531+ missing = set(expected)
1532+ for key in actual:
1533+ missing.discard(key)
1534+ return missing
1535+
1536+
1537+class InvalidCommand(Exception):
1538+ pass
1539+
1540+
1541+def ensure_command_keys(required_keys):
1542+
1543+ def wrapper(func):
1544+
1545+ @functools.wraps(func)
1546+ def decorator(self, command, *args, **kwargs):
1547+ if required_keys:
1548+ missing_keys = diff_keys(required_keys, command)
1549+ if missing_keys:
1550+ raise InvalidCommand("Command missing %s of required"
1551+ " keys %s" % (missing_keys,
1552+ required_keys))
1553+ return func(self, command, *args, **kwargs)
1554+
1555+ return decorator
1556+
1557+ return wrapper
1558+
1559+
1560+class CommandHandlerMeta(type):
1561+ """Metaclass that dynamically creates a 'command_handlers' attribute.
1562+
1563+ This will scan the to-be-created class for methods that start with
1564+ 'handle_' and on finding those will populate a class attribute mapping
1565+ so that those methods can be quickly located and called.
1566+ """
1567+ def __new__(cls, name, parents, dct):
1568+ command_handlers = {}
1569+ for attr_name, attr in dct.items():
1570+ if callable(attr) and attr_name.startswith('handle_'):
1571+ handles_what = attr_name[len('handle_'):]
1572+ if handles_what:
1573+ command_handlers[handles_what] = attr
1574+ dct['command_handlers'] = command_handlers
1575+ return super(CommandHandlerMeta, cls).__new__(cls, name,
1576+ parents, dct)
1577+
1578+
1579+@six.add_metaclass(CommandHandlerMeta)
1580 class NetworkState(object):
1581+
1582+ initial_network_state = {
1583+ 'interfaces': {},
1584+ 'routes': [],
1585+ 'dns': {
1586+ 'nameservers': [],
1587+ 'search': [],
1588+ }
1589+ }
1590+
1591 def __init__(self, version=NETWORK_STATE_VERSION, config=None):
1592 self.version = version
1593 self.config = config
1594- self.network_state = {
1595- 'interfaces': {},
1596- 'routes': [],
1597- 'dns': {
1598- 'nameservers': [],
1599- 'search': [],
1600- }
1601- }
1602- self.command_handlers = self.get_command_handlers()
1603-
1604- def get_command_handlers(self):
1605- METHOD_PREFIX = 'handle_'
1606- methods = filter(lambda x: callable(getattr(self, x)) and
1607- x.startswith(METHOD_PREFIX), dir(self))
1608- handlers = {}
1609- for m in methods:
1610- key = m.replace(METHOD_PREFIX, '')
1611- handlers[key] = getattr(self, m)
1612-
1613- return handlers
1614+ self.network_state = copy.deepcopy(self.initial_network_state)
1615
1616 def dump(self):
1617 state = {
1618@@ -67,7 +136,7 @@
1619 'config': self.config,
1620 'network_state': self.network_state,
1621 }
1622- return dump_config(state)
1623+ return util.yaml_dumps(state)
1624
1625 def load(self, state):
1626 if 'version' not in state:
1627@@ -75,32 +144,39 @@
1628 raise Exception('Invalid state, missing version field')
1629
1630 required_keys = NETWORK_STATE_REQUIRED_KEYS[state['version']]
1631- if not self.valid_command(state, required_keys):
1632- msg = 'Invalid state, missing keys: {}'.format(required_keys)
1633+ missing_keys = diff_keys(required_keys, state)
1634+ if missing_keys:
1635+ msg = 'Invalid state, missing keys: %s' % (missing_keys)
1636 LOG.error(msg)
1637- raise Exception(msg)
1638+ raise ValueError(msg)
1639
1640 # v1 - direct attr mapping, except version
1641 for key in [k for k in required_keys if k not in ['version']]:
1642 setattr(self, key, state[key])
1643- self.command_handlers = self.get_command_handlers()
1644
1645 def dump_network_state(self):
1646- return dump_config(self.network_state)
1647+ return util.yaml_dumps(self.network_state)
1648
1649- def parse_config(self):
1650+ def parse_config(self, skip_broken=True):
1651 # rebuild network state
1652 for command in self.config:
1653- handler = self.command_handlers.get(command['type'])
1654- handler(command)
1655-
1656- def valid_command(self, command, required_keys):
1657- if not required_keys:
1658- return False
1659-
1660- found_keys = [key for key in command.keys() if key in required_keys]
1661- return len(found_keys) == len(required_keys)
1662-
1663+ command_type = command['type']
1664+ try:
1665+ handler = self.command_handlers[command_type]
1666+ except KeyError:
1667+ raise RuntimeError("No handler found for"
1668+ " command '%s'" % command_type)
1669+ try:
1670+ handler(self, command)
1671+ except InvalidCommand:
1672+ if not skip_broken:
1673+ raise
1674+ else:
1675+ LOG.warn("Skipping invalid command: %s", command,
1676+ exc_info=True)
1677+ LOG.debug(self.dump_network_state())
1678+
1679+ @ensure_command_keys(['name'])
1680 def handle_physical(self, command):
1681 '''
1682 command = {
1683@@ -112,13 +188,6 @@
1684 ]
1685 }
1686 '''
1687- required_keys = [
1688- 'name',
1689- ]
1690- if not self.valid_command(command, required_keys):
1691- LOG.warn('Skipping Invalid command: {}'.format(command))
1692- LOG.debug(self.dump_network_state())
1693- return
1694
1695 interfaces = self.network_state.get('interfaces')
1696 iface = interfaces.get(command['name'], {})
1697@@ -149,6 +218,7 @@
1698 self.network_state['interfaces'].update({command.get('name'): iface})
1699 self.dump_network_state()
1700
1701+ @ensure_command_keys(['name', 'vlan_id', 'vlan_link'])
1702 def handle_vlan(self, command):
1703 '''
1704 auto eth0.222
1705@@ -158,16 +228,6 @@
1706 hwaddress ether BC:76:4E:06:96:B3
1707 vlan-raw-device eth0
1708 '''
1709- required_keys = [
1710- 'name',
1711- 'vlan_link',
1712- 'vlan_id',
1713- ]
1714- if not self.valid_command(command, required_keys):
1715- print('Skipping Invalid command: {}'.format(command))
1716- print(self.dump_network_state())
1717- return
1718-
1719 interfaces = self.network_state.get('interfaces')
1720 self.handle_physical(command)
1721 iface = interfaces.get(command.get('name'), {})
1722@@ -175,6 +235,7 @@
1723 iface['vlan_id'] = command.get('vlan_id')
1724 interfaces.update({iface['name']: iface})
1725
1726+ @ensure_command_keys(['name', 'bond_interfaces', 'params'])
1727 def handle_bond(self, command):
1728 '''
1729 #/etc/network/interfaces
1730@@ -200,15 +261,6 @@
1731 bond-updelay 200
1732 bond-lacp-rate 4
1733 '''
1734- required_keys = [
1735- 'name',
1736- 'bond_interfaces',
1737- 'params',
1738- ]
1739- if not self.valid_command(command, required_keys):
1740- print('Skipping Invalid command: {}'.format(command))
1741- print(self.dump_network_state())
1742- return
1743
1744 self.handle_physical(command)
1745 interfaces = self.network_state.get('interfaces')
1746@@ -236,6 +288,7 @@
1747 bond_if.update({param: val})
1748 self.network_state['interfaces'].update({ifname: bond_if})
1749
1750+ @ensure_command_keys(['name', 'bridge_interfaces', 'params'])
1751 def handle_bridge(self, command):
1752 '''
1753 auto br0
1754@@ -263,15 +316,6 @@
1755 "bridge_waitport",
1756 ]
1757 '''
1758- required_keys = [
1759- 'name',
1760- 'bridge_interfaces',
1761- 'params',
1762- ]
1763- if not self.valid_command(command, required_keys):
1764- print('Skipping Invalid command: {}'.format(command))
1765- print(self.dump_network_state())
1766- return
1767
1768 # find one of the bridge port ifaces to get mac_addr
1769 # handle bridge_slaves
1770@@ -295,15 +339,8 @@
1771
1772 interfaces.update({iface['name']: iface})
1773
1774+ @ensure_command_keys(['address'])
1775 def handle_nameserver(self, command):
1776- required_keys = [
1777- 'address',
1778- ]
1779- if not self.valid_command(command, required_keys):
1780- print('Skipping Invalid command: {}'.format(command))
1781- print(self.dump_network_state())
1782- return
1783-
1784 dns = self.network_state.get('dns')
1785 if 'address' in command:
1786 addrs = command['address']
1787@@ -318,15 +355,8 @@
1788 for path in paths:
1789 dns['search'].append(path)
1790
1791+ @ensure_command_keys(['destination'])
1792 def handle_route(self, command):
1793- required_keys = [
1794- 'destination',
1795- ]
1796- if not self.valid_command(command, required_keys):
1797- print('Skipping Invalid command: {}'.format(command))
1798- print(self.dump_network_state())
1799- return
1800-
1801 routes = self.network_state.get('routes')
1802 network, cidr = command['destination'].split("/")
1803 netmask = cidr2mask(int(cidr))
1804@@ -376,72 +406,3 @@
1805 return ipv4mask2cidr(mask)
1806 else:
1807 return mask
1808-
1809-
1810-if __name__ == '__main__':
1811- import random
1812- import sys
1813-
1814- from cloudinit import net
1815-
1816- def load_config(nc):
1817- version = nc.get('version')
1818- config = nc.get('config')
1819- return (version, config)
1820-
1821- def test_parse(network_config):
1822- (version, config) = load_config(network_config)
1823- ns1 = NetworkState(version=version, config=config)
1824- ns1.parse_config()
1825- random.shuffle(config)
1826- ns2 = NetworkState(version=version, config=config)
1827- ns2.parse_config()
1828- print("----NS1-----")
1829- print(ns1.dump_network_state())
1830- print()
1831- print("----NS2-----")
1832- print(ns2.dump_network_state())
1833- print("NS1 == NS2 ?=> {}".format(
1834- ns1.network_state == ns2.network_state))
1835- eni = net.render_interfaces(ns2.network_state)
1836- print(eni)
1837- udev_rules = net.render_persistent_net(ns2.network_state)
1838- print(udev_rules)
1839-
1840- def test_dump_and_load(network_config):
1841- print("Loading network_config into NetworkState")
1842- (version, config) = load_config(network_config)
1843- ns1 = NetworkState(version=version, config=config)
1844- ns1.parse_config()
1845- print("Dumping state to file")
1846- ns1_dump = ns1.dump()
1847- ns1_state = "/tmp/ns1.state"
1848- with open(ns1_state, "w+") as f:
1849- f.write(ns1_dump)
1850-
1851- print("Loading state from file")
1852- ns2 = from_state_file(ns1_state)
1853- print("NS1 == NS2 ?=> {}".format(
1854- ns1.network_state == ns2.network_state))
1855-
1856- def test_output(network_config):
1857- (version, config) = load_config(network_config)
1858- ns1 = NetworkState(version=version, config=config)
1859- ns1.parse_config()
1860- random.shuffle(config)
1861- ns2 = NetworkState(version=version, config=config)
1862- ns2.parse_config()
1863- print("NS1 == NS2 ?=> {}".format(
1864- ns1.network_state == ns2.network_state))
1865- eni_1 = net.render_interfaces(ns1.network_state)
1866- eni_2 = net.render_interfaces(ns2.network_state)
1867- print(eni_1)
1868- print(eni_2)
1869- print("eni_1 == eni_2 ?=> {}".format(
1870- eni_1 == eni_2))
1871-
1872- y = util.read_conf(sys.argv[1])
1873- network_config = y.get('network')
1874- test_parse(network_config)
1875- test_dump_and_load(network_config)
1876- test_output(network_config)
1877
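
The refactored NetworkState above replaces the old per-handler required_keys boilerplate with two pieces: CommandHandlerMeta collects every handle_<type> method into a command_handlers mapping at class-creation time, and ensure_command_keys raises InvalidCommand when a command dict lacks its required keys, which parse_config then skips or re-raises depending on skip_broken. A minimal standalone sketch of the same dispatch pattern, with an illustrative Demo class that is not part of this branch:

    import functools

    import six  # same py2/py3 helper the branch already depends on


    class InvalidCommand(Exception):
        pass


    def ensure_command_keys(required_keys):
        def wrapper(func):
            @functools.wraps(func)
            def decorator(self, command, *args, **kwargs):
                missing = set(required_keys) - set(command)
                if missing:
                    raise InvalidCommand("Command missing %s" % missing)
                return func(self, command, *args, **kwargs)
            return decorator
        return wrapper


    class CommandHandlerMeta(type):
        def __new__(cls, name, parents, dct):
            handlers = {}
            for attr_name, attr in dct.items():
                if callable(attr) and attr_name.startswith('handle_'):
                    handlers[attr_name[len('handle_'):]] = attr
            dct['command_handlers'] = handlers
            return super(CommandHandlerMeta, cls).__new__(
                cls, name, parents, dct)


    @six.add_metaclass(CommandHandlerMeta)
    class Demo(object):
        @ensure_command_keys(['name'])
        def handle_physical(self, command):
            return command['name']


    demo = Demo()
    # dispatch by command type, exactly as NetworkState.parse_config does
    print(demo.command_handlers['physical'](demo, {'name': 'eth0'}))  # eth0
    try:
        demo.command_handlers['physical'](demo, {})
    except InvalidCommand as e:
        print("skipped: %s" % e)  # what skip_broken=True logs and ignores
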
1878=== added file 'cloudinit/serial.py'
1879--- cloudinit/serial.py 1970-01-01 00:00:00 +0000
1880+++ cloudinit/serial.py 2016-06-10 21:20:56 +0000
1881@@ -0,0 +1,50 @@
1882+# vi: ts=4 expandtab
1883+#
1884+# This program is free software: you can redistribute it and/or modify
1885+# it under the terms of the GNU General Public License version 3, as
1886+# published by the Free Software Foundation.
1887+#
1888+# This program is distributed in the hope that it will be useful,
1889+# but WITHOUT ANY WARRANTY; without even the implied warranty of
1890+# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
1891+# GNU General Public License for more details.
1892+#
1893+# You should have received a copy of the GNU General Public License
1894+# along with this program. If not, see <http://www.gnu.org/licenses/>.
1895+
1896+
1897+from __future__ import absolute_import
1898+
1899+try:
1900+ from serial import Serial
1901+except ImportError:
1902+ # For older versions of python (ie 2.6) pyserial may not exist and/or
1903+ # work and/or be installed, so make a dummy/fake serial that blows up
1904+ # when used...
1905+ class Serial(object):
1906+ def __init__(self, *args, **kwargs):
1907+ pass
1908+
1909+ @staticmethod
1910+ def isOpen():
1911+ return False
1912+
1913+ @staticmethod
1914+ def write(data):
1915+ raise IOError("Unable to perform serial `write` operation,"
1916+ " pyserial not installed.")
1917+
1918+ @staticmethod
1919+ def readline():
1920+ raise IOError("Unable to perform serial `readline` operation,"
1921+ " pyserial not installed.")
1922+
1923+ @staticmethod
1924+ def flush():
1925+ raise IOError("Unable to perform serial `flush` operation,"
1926+ " pyserial not installed.")
1927+
1928+ @staticmethod
1929+ def read(size=1):
1930+ raise IOError("Unable to perform serial `read` operation,"
1931+ " pyserial not installed.")
1932
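
The shim above keeps the optional pyserial dependency out of the datasources: importing cloudinit.serial always succeeds, and only actual I/O fails when pyserial is missing. A small sketch of what callers observe when only the fallback class is available (constructing a port-less Serial also works with real pyserial, so the check is safe either way):

    from cloudinit import serial

    ser = serial.Serial()   # never raises at construction time
    if not ser.isOpen():
        # with the fallback, any write()/read()/readline()/flush() call
        # would raise IOError("... pyserial not installed.")
        print("no usable serial transport")
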
1933=== modified file 'cloudinit/sources/DataSourceAzure.py'
1934--- cloudinit/sources/DataSourceAzure.py 2016-05-12 17:56:26 +0000
1935+++ cloudinit/sources/DataSourceAzure.py 2016-06-10 21:20:56 +0000
1936@@ -423,7 +423,7 @@
1937 elem.text = DEF_PASSWD_REDACTION
1938 return ET.tostring(root)
1939 except Exception:
1940- LOG.critical("failed to redact userpassword in {}".format(fname))
1941+ LOG.critical("failed to redact userpassword in %s", fname)
1942 return cnt
1943
1944 if not datadir:
1945
1946=== modified file 'cloudinit/sources/DataSourceConfigDrive.py'
1947--- cloudinit/sources/DataSourceConfigDrive.py 2016-06-03 19:06:55 +0000
1948+++ cloudinit/sources/DataSourceConfigDrive.py 2016-06-10 21:20:56 +0000
1949@@ -18,14 +18,14 @@
1950 # You should have received a copy of the GNU General Public License
1951 # along with this program. If not, see <http://www.gnu.org/licenses/>.
1952
1953-import copy
1954 import os
1955
1956 from cloudinit import log as logging
1957-from cloudinit import net
1958 from cloudinit import sources
1959 from cloudinit import util
1960
1961+from cloudinit.net import eni
1962+
1963 from cloudinit.sources.helpers import openstack
1964
1965 LOG = logging.getLogger(__name__)
1966@@ -53,6 +53,7 @@
1967 self._network_config = None
1968 self.network_json = None
1969 self.network_eni = None
1970+ self.known_macs = None
1971 self.files = {}
1972
1973 def __str__(self):
1974@@ -147,9 +148,10 @@
1975 if self._network_config is None:
1976 if self.network_json is not None:
1977 LOG.debug("network config provided via network_json")
1978- self._network_config = convert_network_data(self.network_json)
1979+ self._network_config = openstack.convert_net_json(
1980+ self.network_json, known_macs=self.known_macs)
1981 elif self.network_eni is not None:
1982- self._network_config = net.convert_eni_data(self.network_eni)
1983+ self._network_config = eni.convert_eni_data(self.network_eni)
1984 LOG.debug("network config provided via converted eni data")
1985 else:
1986 LOG.debug("no network configuration available")
1987@@ -254,152 +256,12 @@
1988 return devices
1989
1990
1991-# Convert OpenStack ConfigDrive NetworkData json to network_config yaml
1992-def convert_network_data(network_json=None, known_macs=None):
1993- """Return a dictionary of network_config by parsing provided
1994- OpenStack ConfigDrive NetworkData json format
1995-
1996- OpenStack network_data.json provides a 3 element dictionary
1997- - "links" (links are network devices, physical or virtual)
1998- - "networks" (networks are ip network configurations for one or more
1999- links)
2000- - services (non-ip services, like dns)
2001-
2002- networks and links are combined via network items referencing specific
2003- links via a 'link_id' which maps to a links 'id' field.
2004-
2005- To convert this format to network_config yaml, we first iterate over the
2006- links and then walk the network list to determine if any of the networks
2007- utilize the current link; if so we generate a subnet entry for the device
2008-
2009- We also need to map network_data.json fields to network_config fields. For
2010- example, the network_data links 'id' field is equivalent to network_config
2011- 'name' field for devices. We apply more of this mapping to the various
2012- link types that we encounter.
2013-
2014- There are additional fields that are populated in the network_data.json
2015- from OpenStack that are not relevant to network_config yaml, so we
2016- enumerate a dictionary of valid keys for network_yaml and apply filtering
2017- to drop these superflous keys from the network_config yaml.
2018- """
2019- if network_json is None:
2020- return None
2021-
2022- # dict of network_config key for filtering network_json
2023- valid_keys = {
2024- 'physical': [
2025- 'name',
2026- 'type',
2027- 'mac_address',
2028- 'subnets',
2029- 'params',
2030- 'mtu',
2031- ],
2032- 'subnet': [
2033- 'type',
2034- 'address',
2035- 'netmask',
2036- 'broadcast',
2037- 'metric',
2038- 'gateway',
2039- 'pointopoint',
2040- 'scope',
2041- 'dns_nameservers',
2042- 'dns_search',
2043- 'routes',
2044- ],
2045- }
2046-
2047- links = network_json.get('links', [])
2048- networks = network_json.get('networks', [])
2049- services = network_json.get('services', [])
2050-
2051- config = []
2052- for link in links:
2053- subnets = []
2054- cfg = {k: v for k, v in link.items()
2055- if k in valid_keys['physical']}
2056- # 'name' is not in openstack spec yet, but we will support it if it is
2057- # present. The 'id' in the spec is currently implemented as the host
2058- # nic's name, meaning something like 'tap-adfasdffd'. We do not want
2059- # to name guest devices with such ugly names.
2060- if 'name' in link:
2061- cfg['name'] = link['name']
2062-
2063- for network in [n for n in networks
2064- if n['link'] == link['id']]:
2065- subnet = {k: v for k, v in network.items()
2066- if k in valid_keys['subnet']}
2067- if 'dhcp' in network['type']:
2068- t = 'dhcp6' if network['type'].startswith('ipv6') else 'dhcp4'
2069- subnet.update({
2070- 'type': t,
2071- })
2072- else:
2073- subnet.update({
2074- 'type': 'static',
2075- 'address': network.get('ip_address'),
2076- })
2077- subnets.append(subnet)
2078- cfg.update({'subnets': subnets})
2079- if link['type'] in ['ethernet', 'vif', 'ovs', 'phy', 'bridge']:
2080- cfg.update({
2081- 'type': 'physical',
2082- 'mac_address': link['ethernet_mac_address']})
2083- elif link['type'] in ['bond']:
2084- params = {}
2085- for k, v in link.items():
2086- if k == 'bond_links':
2087- continue
2088- elif k.startswith('bond'):
2089- params.update({k: v})
2090- cfg.update({
2091- 'bond_interfaces': copy.deepcopy(link['bond_links']),
2092- 'params': params,
2093- })
2094- elif link['type'] in ['vlan']:
2095- cfg.update({
2096- 'name': "%s.%s" % (link['vlan_link'],
2097- link['vlan_id']),
2098- 'vlan_link': link['vlan_link'],
2099- 'vlan_id': link['vlan_id'],
2100- 'mac_address': link['vlan_mac_address'],
2101- })
2102- else:
2103- raise ValueError(
2104- 'Unknown network_data link type: %s' % link['type'])
2105-
2106- config.append(cfg)
2107-
2108- need_names = [d for d in config
2109- if d.get('type') == 'physical' and 'name' not in d]
2110-
2111- if need_names:
2112- if known_macs is None:
2113- known_macs = net.get_interfaces_by_mac()
2114-
2115- for d in need_names:
2116- mac = d.get('mac_address')
2117- if not mac:
2118- raise ValueError("No mac_address or name entry for %s" % d)
2119- if mac not in known_macs:
2120- raise ValueError("Unable to find a system nic for %s" % d)
2121- d['name'] = known_macs[mac]
2122-
2123- for service in services:
2124- cfg = service
2125- cfg.update({'type': 'nameserver'})
2126- config.append(cfg)
2127-
2128- return {'version': 1, 'config': config}
2129-
2130-
2131 # Legacy: Must be present in case we load an old pkl object
2132 DataSourceConfigDriveNet = DataSourceConfigDrive
2133
2134 # Used to match classes to dependencies
2135 datasources = [
2136- (DataSourceConfigDrive, (sources.DEP_FILESYSTEM, )),
2137+ (DataSourceConfigDrive, (sources.DEP_FILESYSTEM,)),
2138 ]
2139
2140
2141
2142=== modified file 'cloudinit/sources/DataSourceSmartOS.py'
2143--- cloudinit/sources/DataSourceSmartOS.py 2016-06-02 18:36:51 +0000
2144+++ cloudinit/sources/DataSourceSmartOS.py 2016-06-10 21:20:56 +0000
2145@@ -40,13 +40,11 @@
2146 import re
2147 import socket
2148
2149-import serial
2150-
2151 from cloudinit import log as logging
2152+from cloudinit import serial
2153 from cloudinit import sources
2154 from cloudinit import util
2155
2156-
2157 LOG = logging.getLogger(__name__)
2158
2159 SMARTOS_ATTRIB_MAP = {
2160
2161=== modified file 'cloudinit/sources/helpers/openstack.py'
2162--- cloudinit/sources/helpers/openstack.py 2016-06-02 17:18:23 +0000
2163+++ cloudinit/sources/helpers/openstack.py 2016-06-10 21:20:56 +0000
2164@@ -28,6 +28,7 @@
2165
2166 from cloudinit import ec2_utils
2167 from cloudinit import log as logging
2168+from cloudinit import net
2169 from cloudinit import sources
2170 from cloudinit import url_helper
2171 from cloudinit import util
2172@@ -478,6 +479,150 @@
2173 retries=self.retries)
2174
2175
2176+# Convert OpenStack ConfigDrive NetworkData json to network_config yaml
2177+def convert_net_json(network_json=None, known_macs=None):
2178+ """Return a dictionary of network_config by parsing provided
2179+ OpenStack ConfigDrive NetworkData json format
2180+
2181+ OpenStack network_data.json provides a 3 element dictionary
2182+ - "links" (links are network devices, physical or virtual)
2183+ - "networks" (networks are ip network configurations for one or more
2184+ links)
2185+ - services (non-ip services, like dns)
2186+
2187+ networks and links are combined via network items referencing specific
2188+ links via a 'link_id' which maps to a links 'id' field.
2189+
2190+ To convert this format to network_config yaml, we first iterate over the
2191+ links and then walk the network list to determine if any of the networks
2192+ utilize the current link; if so we generate a subnet entry for the device
2193+
2194+ We also need to map network_data.json fields to network_config fields. For
2195+ example, the network_data links 'id' field is equivalent to network_config
2196+ 'name' field for devices. We apply more of this mapping to the various
2197+ link types that we encounter.
2198+
2199+ There are additional fields that are populated in the network_data.json
2200+ from OpenStack that are not relevant to network_config yaml, so we
2201+ enumerate a dictionary of valid keys for network_yaml and apply filtering
2202+ to drop these superfluous keys from the network_config yaml.
2203+ """
2204+ if network_json is None:
2205+ return None
2206+
2207+ # dict of network_config key for filtering network_json
2208+ valid_keys = {
2209+ 'physical': [
2210+ 'name',
2211+ 'type',
2212+ 'mac_address',
2213+ 'subnets',
2214+ 'params',
2215+ 'mtu',
2216+ ],
2217+ 'subnet': [
2218+ 'type',
2219+ 'address',
2220+ 'netmask',
2221+ 'broadcast',
2222+ 'metric',
2223+ 'gateway',
2224+ 'pointopoint',
2225+ 'scope',
2226+ 'dns_nameservers',
2227+ 'dns_search',
2228+ 'routes',
2229+ ],
2230+ }
2231+
2232+ links = network_json.get('links', [])
2233+ networks = network_json.get('networks', [])
2234+ services = network_json.get('services', [])
2235+
2236+ config = []
2237+ for link in links:
2238+ subnets = []
2239+ cfg = {k: v for k, v in link.items()
2240+ if k in valid_keys['physical']}
2241+ # 'name' is not in openstack spec yet, but we will support it if it is
2242+ # present. The 'id' in the spec is currently implemented as the host
2243+ # nic's name, meaning something like 'tap-adfasdffd'. We do not want
2244+ # to name guest devices with such ugly names.
2245+ if 'name' in link:
2246+ cfg['name'] = link['name']
2247+
2248+ for network in [n for n in networks
2249+ if n['link'] == link['id']]:
2250+ subnet = {k: v for k, v in network.items()
2251+ if k in valid_keys['subnet']}
2252+ if 'dhcp' in network['type']:
2253+ t = 'dhcp6' if network['type'].startswith('ipv6') else 'dhcp4'
2254+ subnet.update({
2255+ 'type': t,
2256+ })
2257+ else:
2258+ subnet.update({
2259+ 'type': 'static',
2260+ 'address': network.get('ip_address'),
2261+ })
2262+ if network['type'] == 'ipv4':
2263+ subnet['ipv4'] = True
2264+ if network['type'] == 'ipv6':
2265+ subnet['ipv6'] = True
2266+ subnets.append(subnet)
2267+ cfg.update({'subnets': subnets})
2268+ if link['type'] in ['ethernet', 'vif', 'ovs', 'phy', 'bridge']:
2269+ cfg.update({
2270+ 'type': 'physical',
2271+ 'mac_address': link['ethernet_mac_address']})
2272+ elif link['type'] in ['bond']:
2273+ params = {}
2274+ for k, v in link.items():
2275+ if k == 'bond_links':
2276+ continue
2277+ elif k.startswith('bond'):
2278+ params.update({k: v})
2279+ cfg.update({
2280+ 'bond_interfaces': copy.deepcopy(link['bond_links']),
2281+ 'params': params,
2282+ })
2283+ elif link['type'] in ['vlan']:
2284+ cfg.update({
2285+ 'name': "%s.%s" % (link['vlan_link'],
2286+ link['vlan_id']),
2287+ 'vlan_link': link['vlan_link'],
2288+ 'vlan_id': link['vlan_id'],
2289+ 'mac_address': link['vlan_mac_address'],
2290+ })
2291+ else:
2292+ raise ValueError(
2293+ 'Unknown network_data link type: %s' % link['type'])
2294+
2295+ config.append(cfg)
2296+
2297+ need_names = [d for d in config
2298+ if d.get('type') == 'physical' and 'name' not in d]
2299+
2300+ if need_names:
2301+ if known_macs is None:
2302+ known_macs = net.get_interfaces_by_mac()
2303+
2304+ for d in need_names:
2305+ mac = d.get('mac_address')
2306+ if not mac:
2307+ raise ValueError("No mac_address or name entry for %s" % d)
2308+ if mac not in known_macs:
2309+ raise ValueError("Unable to find a system nic for %s" % d)
2310+ d['name'] = known_macs[mac]
2311+
2312+ for service in services:
2313+ cfg = service
2314+ cfg.update({'type': 'nameserver'})
2315+ config.append(cfg)
2316+
2317+ return {'version': 1, 'config': config}
2318+
2319+
2320 def convert_vendordata_json(data, recurse=True):
2321 """data: a loaded json *object* (strings, arrays, dicts).
2322 return something suitable for cloudinit vendordata_raw.
2323
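
The convert_net_json docstring above describes the links/networks/services walk in prose; the shape of the result is easier to see on a tiny input. The sketch below reuses the single-bridge-link case from the new TestNetJson tests further down in this diff, passing known_macs explicitly so no host NICs are consulted:

    from cloudinit.sources.helpers import openstack

    network_json = {
        'links': [{'id': 'tap1a81968a-79', 'type': 'bridge', 'mtu': None,
                   'ethernet_mac_address': 'fa:16:3e:ed:9a:59',
                   'vif_id': '1a81968a-797a-400f-8a80-567f997eb93f'}],
        'networks': [{'id': 'network0', 'type': 'ipv4',
                      'link': 'tap1a81968a-79',
                      'ip_address': '172.19.1.34',
                      'netmask': '255.255.252.0',
                      'network_id': 'dacd568d-5be6-4786-91fe-750c374b78b4',
                      'routes': [{'network': '0.0.0.0', 'netmask': '0.0.0.0',
                                  'gateway': '172.19.3.254'}]}],
        'services': [{'type': 'dns', 'address': '172.19.0.12'}],
    }
    cfg = openstack.convert_net_json(
        network_json, known_macs={'fa:16:3e:ed:9a:59': 'foo3'})
    # cfg == {'version': 1, 'config': [
    #     a 'physical' entry named 'foo3' with one static ipv4 subnet
    #     carrying the route, plus a 'nameserver' entry for 172.19.0.12]}
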
2324=== modified file 'cloudinit/stages.py'
2325--- cloudinit/stages.py 2016-06-07 18:34:57 +0000
2326+++ cloudinit/stages.py 2016-06-10 21:20:56 +0000
2327@@ -44,6 +44,7 @@
2328 from cloudinit import importer
2329 from cloudinit import log as logging
2330 from cloudinit import net
2331+from cloudinit.net import cmdline
2332 from cloudinit.reporting import events
2333 from cloudinit import sources
2334 from cloudinit import type_utils
2335@@ -612,7 +613,7 @@
2336 if os.path.exists(disable_file):
2337 return (None, disable_file)
2338
2339- cmdline_cfg = ('cmdline', net.read_kernel_cmdline_config())
2340+ cmdline_cfg = ('cmdline', cmdline.read_kernel_cmdline_config())
2341 dscfg = ('ds', None)
2342 if self.datasource and hasattr(self.datasource, 'network_config'):
2343 dscfg = ('ds', self.datasource.network_config)
2344
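
Kernel-cmdline network-config parsing moved wholesale into cloudinit.net.cmdline, so the call site above only changes its import. A quick sketch of the relocated entry point, mirroring the test_cmdline_with_b64 case added to test_net.py below (the payload is illustrative):

    import base64
    import json

    from cloudinit.net import cmdline

    cfg = {'config': [{'type': 'physical', 'name': 'eth0',
                       'mac_address': 'c0:d6:9f:2c:e8:80',
                       'subnets': [{'type': 'dhcp'}]}]}
    blob = base64.b64encode(json.dumps(cfg).encode()).decode()
    found = cmdline.read_kernel_cmdline_config(
        cmdline='ro network-config=' + blob + ' root=foo')
    assert found == cfg
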
2345=== modified file 'cloudinit/util.py'
2346--- cloudinit/util.py 2016-06-09 07:18:35 +0000
2347+++ cloudinit/util.py 2016-06-10 21:20:56 +0000
2348@@ -171,7 +171,8 @@
2349
2350 def __init__(self, stdout=None, stderr=None,
2351 exit_code=None, cmd=None,
2352- description=None, reason=None):
2353+ description=None, reason=None,
2354+ errno=None):
2355 if not cmd:
2356 self.cmd = '-'
2357 else:
2358@@ -202,6 +203,7 @@
2359 else:
2360 self.reason = '-'
2361
2362+ self.errno = errno
2363 message = self.MESSAGE_TMPL % {
2364 'description': self.description,
2365 'cmd': self.cmd,
2366@@ -1157,7 +1159,14 @@
2367 options.append(path)
2368 cmd = blk_id_cmd + options
2369 # See man blkid for why 2 is added
2370- (out, _err) = subp(cmd, rcs=[0, 2])
2371+ try:
2372+ (out, _err) = subp(cmd, rcs=[0, 2])
2373+ except ProcessExecutionError as e:
2374+ if e.errno == errno.ENOENT:
2375+ # blkid not found...
2376+ out = ""
2377+ else:
2378+ raise
2379 entries = []
2380 for line in out.splitlines():
2381 line = line.strip()
2382@@ -1706,7 +1715,8 @@
2383 sp = subprocess.Popen(args, **kws)
2384 (out, err) = sp.communicate(data)
2385 except OSError as e:
2386- raise ProcessExecutionError(cmd=args, reason=e)
2387+ raise ProcessExecutionError(cmd=args, reason=e,
2388+ errno=e.errno)
2389 rc = sp.returncode
2390 if rc not in rcs:
2391 raise ProcessExecutionError(stdout=out, stderr=err,
2392
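
The new errno attribute on ProcessExecutionError is what lets the blkid hunk above tell "binary not found" apart from "binary failed". The same pattern works for any other optional external tool; a minimal sketch under the assumption that the caller just wants empty output when the tool is absent:

    import errno

    from cloudinit import util

    try:
        out, _err = util.subp(['blkid', '-o', 'full'], rcs=[0, 2])
    except util.ProcessExecutionError as e:
        if e.errno == errno.ENOENT:
            out = ""   # blkid not installed; treat as "no block devices found"
        else:
            raise
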
2393=== modified file 'packages/bddeb'
2394--- packages/bddeb 2016-06-07 07:56:53 +0000
2395+++ packages/bddeb 2016-06-10 21:20:56 +0000
2396@@ -42,6 +42,7 @@
2397 'setuptools',
2398 'flake8',
2399 'hacking',
2400+ 'unittest2',
2401 ]
2402 NONSTD_NAMED_PACKAGES = {
2403 'argparse': ('python-argparse', None),
2404
2405=== modified file 'requirements.txt'
2406--- requirements.txt 2015-01-26 21:37:29 +0000
2407+++ requirements.txt 2016-06-10 21:20:56 +0000
2408@@ -11,8 +11,12 @@
2409 oauthlib
2410
2411 # This one is currently used only by the CloudSigma and SmartOS datasources.
2412-# If these datasources are removed, this is no longer needed
2413-pyserial
2414+# If these datasources are removed, this is no longer needed.
2415+#
2416+# This will not work in py2.6 so it is only optionally installed on
2417+# python 2.7 and later.
2418+#
2419+# pyserial
2420
2421 # This is only needed for places where we need to support configs in a manner
2422 # that the built-in config parser is not sufficent (ie
2423
2424=== modified file 'setup.py'
2425--- setup.py 2016-05-27 21:03:49 +0000
2426+++ setup.py 2016-06-10 21:20:56 +0000
2427@@ -196,7 +196,6 @@
2428 if sys.version_info < (3,):
2429 requirements.append('cheetah')
2430
2431-
2432 setuptools.setup(
2433 name='cloud-init',
2434 version=get_version(),
2435
2436=== modified file 'test-requirements.txt'
2437--- test-requirements.txt 2016-05-24 23:08:14 +0000
2438+++ test-requirements.txt 2016-06-10 21:20:56 +0000
2439@@ -2,6 +2,7 @@
2440 httpretty>=0.7.1
2441 mock
2442 nose
2443+unittest2
2444
2445 # Only needed if you want to know the test times
2446 # nose-timer
2447
2448=== modified file 'tests/unittests/helpers.py'
2449--- tests/unittests/helpers.py 2016-06-09 08:35:39 +0000
2450+++ tests/unittests/helpers.py 2016-06-10 21:20:56 +0000
2451@@ -7,13 +7,11 @@
2452 import tempfile
2453 import unittest
2454
2455+import mock
2456 import six
2457+import unittest2
2458
2459 try:
2460- from unittest import mock
2461-except ImportError:
2462- import mock
2463-try:
2464 from contextlib import ExitStack
2465 except ImportError:
2466 from contextlib2 import ExitStack
2467@@ -21,6 +19,9 @@
2468 from cloudinit import helpers as ch
2469 from cloudinit import util
2470
2471+# Used for skipping tests
2472+SkipTest = unittest2.SkipTest
2473+
2474 # Used for detecting different python versions
2475 PY2 = False
2476 PY26 = False
2477@@ -44,78 +45,6 @@
2478 if _PY_MINOR == 4 and _PY_MICRO < 3:
2479 FIX_HTTPRETTY = True
2480
2481-if PY26:
2482- # For now add these on, taken from python 2.7 + slightly adjusted. Drop
2483- # all this once Python 2.6 is dropped as a minimum requirement.
2484- class TestCase(unittest.TestCase):
2485- def setUp(self):
2486- super(TestCase, self).setUp()
2487- self.__all_cleanups = ExitStack()
2488-
2489- def tearDown(self):
2490- self.__all_cleanups.close()
2491- unittest.TestCase.tearDown(self)
2492-
2493- def addCleanup(self, function, *args, **kws):
2494- self.__all_cleanups.callback(function, *args, **kws)
2495-
2496- def assertIs(self, expr1, expr2, msg=None):
2497- if expr1 is not expr2:
2498- standardMsg = '%r is not %r' % (expr1, expr2)
2499- self.fail(self._formatMessage(msg, standardMsg))
2500-
2501- def assertIn(self, member, container, msg=None):
2502- if member not in container:
2503- standardMsg = '%r not found in %r' % (member, container)
2504- self.fail(self._formatMessage(msg, standardMsg))
2505-
2506- def assertNotIn(self, member, container, msg=None):
2507- if member in container:
2508- standardMsg = '%r unexpectedly found in %r'
2509- standardMsg = standardMsg % (member, container)
2510- self.fail(self._formatMessage(msg, standardMsg))
2511-
2512- def assertIsNone(self, value, msg=None):
2513- if value is not None:
2514- standardMsg = '%r is not None'
2515- standardMsg = standardMsg % (value)
2516- self.fail(self._formatMessage(msg, standardMsg))
2517-
2518- def assertIsInstance(self, obj, cls, msg=None):
2519- """Same as self.assertTrue(isinstance(obj, cls)), with a nicer
2520- default message."""
2521- if not isinstance(obj, cls):
2522- standardMsg = '%s is not an instance of %r' % (repr(obj), cls)
2523- self.fail(self._formatMessage(msg, standardMsg))
2524-
2525- def assertDictContainsSubset(self, expected, actual, msg=None):
2526- missing = []
2527- mismatched = []
2528- for k, v in expected.items():
2529- if k not in actual:
2530- missing.append(k)
2531- elif actual[k] != v:
2532- mismatched.append('%r, expected: %r, actual: %r'
2533- % (k, v, actual[k]))
2534-
2535- if len(missing) == 0 and len(mismatched) == 0:
2536- return
2537-
2538- standardMsg = ''
2539- if missing:
2540- standardMsg = 'Missing: %r' % ','.join(m for m in missing)
2541- if mismatched:
2542- if standardMsg:
2543- standardMsg += '; '
2544- standardMsg += 'Mismatched values: %s' % ','.join(mismatched)
2545-
2546- self.fail(self._formatMessage(msg, standardMsg))
2547-
2548-
2549-else:
2550- class TestCase(unittest.TestCase):
2551- pass
2552-
2553
2554 # Makes the old path start
2555 # with new base instead of whatever
2556@@ -151,6 +80,10 @@
2557 return wrapper
2558
2559
2560+class TestCase(unittest2.TestCase):
2561+ pass
2562+
2563+
2564 class ResourceUsingTestCase(TestCase):
2565 def setUp(self):
2566 super(ResourceUsingTestCase, self).setUp()
2567
2568=== modified file 'tests/unittests/test__init__.py'
2569--- tests/unittests/test__init__.py 2015-07-21 16:02:44 +0000
2570+++ tests/unittests/test__init__.py 2016-06-10 21:20:56 +0000
2571@@ -1,16 +1,6 @@
2572 import os
2573 import shutil
2574 import tempfile
2575-import unittest
2576-
2577-try:
2578- from unittest import mock
2579-except ImportError:
2580- import mock
2581-try:
2582- from contextlib import ExitStack
2583-except ImportError:
2584- from contextlib2 import ExitStack
2585
2586 from cloudinit import handlers
2587 from cloudinit import helpers
2588@@ -18,7 +8,7 @@
2589 from cloudinit import url_helper
2590 from cloudinit import util
2591
2592-from .helpers import TestCase
2593+from .helpers import TestCase, ExitStack, mock
2594
2595
2596 class FakeModule(handlers.Handler):
2597@@ -99,9 +89,10 @@
2598 self.assertEqual(self.data['handlercount'], 0)
2599
2600
2601-class TestHandlerHandlePart(unittest.TestCase):
2602+class TestHandlerHandlePart(TestCase):
2603
2604 def setUp(self):
2605+ super(TestHandlerHandlePart, self).setUp()
2606 self.data = "fake data"
2607 self.ctype = "fake ctype"
2608 self.filename = "fake filename"
2609@@ -177,7 +168,7 @@
2610 self.data, self.ctype, self.filename, self.payload)
2611
2612
2613-class TestCmdlineUrl(unittest.TestCase):
2614+class TestCmdlineUrl(TestCase):
2615 def test_invalid_content(self):
2616 url = "http://example.com/foo"
2617 key = "mykey"
2618
2619=== modified file 'tests/unittests/test_cli.py'
2620--- tests/unittests/test_cli.py 2016-05-12 20:43:11 +0000
2621+++ tests/unittests/test_cli.py 2016-06-10 21:20:56 +0000
2622@@ -4,12 +4,7 @@
2623 import sys
2624
2625 from . import helpers as test_helpers
2626-
2627-try:
2628- from unittest import mock
2629-except ImportError:
2630- import mock
2631-
2632+mock = test_helpers.mock
2633
2634 BIN_CLOUDINIT = "bin/cloud-init"
2635
2636
2637=== modified file 'tests/unittests/test_cs_util.py'
2638--- tests/unittests/test_cs_util.py 2015-02-11 01:50:23 +0000
2639+++ tests/unittests/test_cs_util.py 2016-06-10 21:20:56 +0000
2640@@ -1,21 +1,9 @@
2641 from __future__ import print_function
2642
2643-import sys
2644-import unittest
2645+from . import helpers as test_helpers
2646
2647 from cloudinit.cs_utils import Cepko
2648
2649-try:
2650- skip = unittest.skip
2651-except AttributeError:
2652- # Python 2.6. Doesn't have to be high fidelity.
2653- def skip(reason):
2654- def decorator(func):
2655- def wrapper(*args, **kws):
2656- print(reason, file=sys.stderr)
2657- return wrapper
2658- return decorator
2659-
2660
2661 SERVER_CONTEXT = {
2662 "cpu": 1000,
2663@@ -43,18 +31,9 @@
2664 # 2015-01-22 BAW: This test is completely useless because it only ever tests
2665 # the CepkoMock object. Even in its original form, I don't think it ever
2666 # touched the underlying Cepko class methods.
2667-@skip('This test is completely useless')
2668-class CepkoResultTests(unittest.TestCase):
2669+class CepkoResultTests(test_helpers.TestCase):
2670 def setUp(self):
2671- pass
2672- # self.mocked = self.mocker.replace("cloudinit.cs_utils.Cepko",
2673- # spec=CepkoMock,
2674- # count=False,
2675- # passthrough=False)
2676- # self.mocked()
2677- # self.mocker.result(CepkoMock())
2678- # self.mocker.replay()
2679- # self.c = Cepko()
2680+ raise test_helpers.SkipTest('This test is completely useless')
2681
2682 def test_getitem(self):
2683 result = self.c.all()
2684
2685=== modified file 'tests/unittests/test_datasource/test_azure.py'
2686--- tests/unittests/test_datasource/test_azure.py 2016-05-12 20:43:11 +0000
2687+++ tests/unittests/test_datasource/test_azure.py 2016-06-10 21:20:56 +0000
2688@@ -1,16 +1,8 @@
2689 from cloudinit import helpers
2690 from cloudinit.util import b64e, decode_binary, load_file
2691 from cloudinit.sources import DataSourceAzure
2692-from ..helpers import TestCase, populate_dir
2693
2694-try:
2695- from unittest import mock
2696-except ImportError:
2697- import mock
2698-try:
2699- from contextlib import ExitStack
2700-except ImportError:
2701- from contextlib2 import ExitStack
2702+from ..helpers import TestCase, populate_dir, mock, ExitStack, PY26, SkipTest
2703
2704 import crypt
2705 import os
2706@@ -83,6 +75,8 @@
2707
2708 def setUp(self):
2709 super(TestAzureDataSource, self).setUp()
2710+ if PY26:
2711+ raise SkipTest("Does not work on python 2.6")
2712 self.tmp = tempfile.mkdtemp()
2713 self.addCleanup(shutil.rmtree, self.tmp)
2714
2715
2716=== modified file 'tests/unittests/test_datasource/test_azure_helper.py'
2717--- tests/unittests/test_datasource/test_azure_helper.py 2016-06-10 19:03:24 +0000
2718+++ tests/unittests/test_datasource/test_azure_helper.py 2016-06-10 21:20:56 +0000
2719@@ -2,17 +2,7 @@
2720
2721 from cloudinit.sources.helpers import azure as azure_helper
2722
2723-from ..helpers import TestCase
2724-
2725-try:
2726- from unittest import mock
2727-except ImportError:
2728- import mock
2729-
2730-try:
2731- from contextlib import ExitStack
2732-except ImportError:
2733- from contextlib2 import ExitStack
2734+from ..helpers import ExitStack, mock, TestCase
2735
2736
2737 GOAL_STATE_TEMPLATE = """\
2738
2739=== modified file 'tests/unittests/test_datasource/test_cloudsigma.py'
2740--- tests/unittests/test_datasource/test_cloudsigma.py 2015-01-27 01:02:31 +0000
2741+++ tests/unittests/test_datasource/test_cloudsigma.py 2016-06-10 21:20:56 +0000
2742@@ -1,4 +1,5 @@
2743 # coding: utf-8
2744+
2745 import copy
2746
2747 from cloudinit.cs_utils import Cepko
2748@@ -6,7 +7,6 @@
2749
2750 from .. import helpers as test_helpers
2751
2752-
2753 SERVER_CONTEXT = {
2754 "cpu": 1000,
2755 "cpus_instead_of_cores": False,
2756
2757=== modified file 'tests/unittests/test_datasource/test_cloudstack.py'
2758--- tests/unittests/test_datasource/test_cloudstack.py 2016-05-12 20:43:11 +0000
2759+++ tests/unittests/test_datasource/test_cloudstack.py 2016-06-10 21:20:56 +0000
2760@@ -1,16 +1,7 @@
2761 from cloudinit import helpers
2762 from cloudinit.sources.DataSourceCloudStack import DataSourceCloudStack
2763
2764-from ..helpers import TestCase
2765-
2766-try:
2767- from unittest import mock
2768-except ImportError:
2769- import mock
2770-try:
2771- from contextlib import ExitStack
2772-except ImportError:
2773- from contextlib2 import ExitStack
2774+from ..helpers import TestCase, mock, ExitStack
2775
2776
2777 class TestCloudStackPasswordFetching(TestCase):
2778
2779=== modified file 'tests/unittests/test_datasource/test_configdrive.py'
2780--- tests/unittests/test_datasource/test_configdrive.py 2016-06-03 18:58:51 +0000
2781+++ tests/unittests/test_datasource/test_configdrive.py 2016-06-10 21:20:56 +0000
2782@@ -5,23 +5,15 @@
2783 import six
2784 import tempfile
2785
2786-try:
2787- from unittest import mock
2788-except ImportError:
2789- import mock
2790-try:
2791- from contextlib import ExitStack
2792-except ImportError:
2793- from contextlib2 import ExitStack
2794-
2795 from cloudinit import helpers
2796-from cloudinit import net
2797+from cloudinit.net import eni
2798+from cloudinit.net import network_state
2799 from cloudinit import settings
2800 from cloudinit.sources import DataSourceConfigDrive as ds
2801 from cloudinit.sources.helpers import openstack
2802 from cloudinit import util
2803
2804-from ..helpers import TestCase
2805+from ..helpers import TestCase, ExitStack, mock
2806
2807
2808 PUBKEY = u'ssh-rsa AAAAB3NzaC1....sIkJhq8wdX+4I3A4cYbYP ubuntu@server-460\n'
2809@@ -115,6 +107,7 @@
2810 'fa:16:3e:d4:57:ad': 'enp0s2',
2811 'fa:16:3e:dd:50:9a': 'foo1',
2812 'fa:16:3e:a8:14:69': 'foo2',
2813+ 'fa:16:3e:ed:9a:59': 'foo3',
2814 }
2815
2816 CFG_DRIVE_FILES_V2 = {
2817@@ -377,35 +370,150 @@
2818 util.find_devs_with = orig_find_devs_with
2819 util.is_partition = orig_is_partition
2820
2821- def test_pubkeys_v2(self):
2822+ @mock.patch('cloudinit.sources.DataSourceConfigDrive.on_first_boot')
2823+ def test_pubkeys_v2(self, on_first_boot):
2824 """Verify that public-keys work in config-drive-v2."""
2825 populate_dir(self.tmp, CFG_DRIVE_FILES_V2)
2826 myds = cfg_ds_from_dir(self.tmp)
2827 self.assertEqual(myds.get_public_ssh_keys(),
2828 [OSTACK_META['public_keys']['mykey']])
2829
2830- def test_network_data_is_found(self):
2831+
2832+class TestNetJson(TestCase):
2833+ def setUp(self):
2834+ super(TestNetJson, self).setUp()
2835+ self.tmp = tempfile.mkdtemp()
2836+ self.addCleanup(shutil.rmtree, self.tmp)
2837+ self.maxDiff = None
2838+
2839+ @mock.patch('cloudinit.sources.DataSourceConfigDrive.on_first_boot')
2840+ def test_network_data_is_found(self, on_first_boot):
2841 """Verify that network_data is present in ds in config-drive-v2."""
2842 populate_dir(self.tmp, CFG_DRIVE_FILES_V2)
2843 myds = cfg_ds_from_dir(self.tmp)
2844- self.assertEqual(myds.network_json, NETWORK_DATA)
2845+ self.assertIsNotNone(myds.network_json)
2846
2847- def test_network_config_is_converted(self):
2848+ @mock.patch('cloudinit.sources.DataSourceConfigDrive.on_first_boot')
2849+ def test_network_config_is_converted(self, on_first_boot):
2850 """Verify that network_data is converted and present on ds object."""
2851 populate_dir(self.tmp, CFG_DRIVE_FILES_V2)
2852 myds = cfg_ds_from_dir(self.tmp)
2853- network_config = ds.convert_network_data(NETWORK_DATA,
2854- known_macs=KNOWN_MACS)
2855+ network_config = openstack.convert_net_json(NETWORK_DATA,
2856+ known_macs=KNOWN_MACS)
2857 self.assertEqual(myds.network_config, network_config)
2858
2859+ def test_network_config_conversions(self):
2860+ """Tests a bunch of input network json and checks the
2861+ expected conversions."""
2862+ in_datas = [
2863+ NETWORK_DATA,
2864+ {
2865+ 'services': [{'type': 'dns', 'address': '172.19.0.12'}],
2866+ 'networks': [{
2867+ 'network_id': 'dacd568d-5be6-4786-91fe-750c374b78b4',
2868+ 'type': 'ipv4',
2869+ 'netmask': '255.255.252.0',
2870+ 'link': 'tap1a81968a-79',
2871+ 'routes': [{
2872+ 'netmask': '0.0.0.0',
2873+ 'network': '0.0.0.0',
2874+ 'gateway': '172.19.3.254',
2875+ }],
2876+ 'ip_address': '172.19.1.34',
2877+ 'id': 'network0',
2878+ }],
2879+ 'links': [{
2880+ 'type': 'bridge',
2881+ 'vif_id': '1a81968a-797a-400f-8a80-567f997eb93f',
2882+ 'ethernet_mac_address': 'fa:16:3e:ed:9a:59',
2883+ 'id': 'tap1a81968a-79',
2884+ 'mtu': None,
2885+ }],
2886+ },
2887+ ]
2888+ out_datas = [
2889+ {
2890+ 'version': 1,
2891+ 'config': [
2892+ {
2893+ 'subnets': [{'type': 'dhcp4'}],
2894+ 'type': 'physical',
2895+ 'mac_address': 'fa:16:3e:69:b0:58',
2896+ 'name': 'enp0s1',
2897+ 'mtu': None,
2898+ },
2899+ {
2900+ 'subnets': [{'type': 'dhcp4'}],
2901+ 'type': 'physical',
2902+ 'mac_address': 'fa:16:3e:d4:57:ad',
2903+ 'name': 'enp0s2',
2904+ 'mtu': None,
2905+ },
2906+ {
2907+ 'subnets': [{'type': 'dhcp4'}],
2908+ 'type': 'physical',
2909+ 'mac_address': 'fa:16:3e:05:30:fe',
2910+ 'name': 'nic0',
2911+ 'mtu': None,
2912+ },
2913+ {
2914+ 'type': 'nameserver',
2915+ 'address': '199.204.44.24',
2916+ },
2917+ {
2918+ 'type': 'nameserver',
2919+ 'address': '199.204.47.54',
2920+ }
2921+ ],
2922+
2923+ },
2924+ {
2925+ 'version': 1,
2926+ 'config': [
2927+ {
2928+ 'name': 'foo3',
2929+ 'mac_address': 'fa:16:3e:ed:9a:59',
2930+ 'mtu': None,
2931+ 'type': 'physical',
2932+ 'subnets': [
2933+ {
2934+ 'address': '172.19.1.34',
2935+ 'netmask': '255.255.252.0',
2936+ 'type': 'static',
2937+ 'ipv4': True,
2938+ 'routes': [{
2939+ 'gateway': '172.19.3.254',
2940+ 'netmask': '0.0.0.0',
2941+ 'network': '0.0.0.0',
2942+ }],
2943+ }
2944+ ]
2945+ },
2946+ {
2947+ 'type': 'nameserver',
2948+ 'address': '172.19.0.12',
2949+ }
2950+ ],
2951+ },
2952+ ]
2953+ for in_data, out_data in zip(in_datas, out_datas):
2954+ conv_data = openstack.convert_net_json(in_data,
2955+ known_macs=KNOWN_MACS)
2956+ self.assertEqual(out_data, conv_data)
2957+
2958
2959 class TestConvertNetworkData(TestCase):
2960+ def setUp(self):
2961+ super(TestConvertNetworkData, self).setUp()
2962+ self.tmp = tempfile.mkdtemp()
2963+ self.addCleanup(shutil.rmtree, self.tmp)
2964+
2965 def _getnames_in_config(self, ncfg):
2966 return set([n['name'] for n in ncfg['config']
2967 if n['type'] == 'physical'])
2968
2969 def test_conversion_fills_names(self):
2970- ncfg = ds.convert_network_data(NETWORK_DATA, known_macs=KNOWN_MACS)
2971+ ncfg = openstack.convert_net_json(NETWORK_DATA, known_macs=KNOWN_MACS)
2972 expected = set(['nic0', 'enp0s1', 'enp0s2'])
2973 found = self._getnames_in_config(ncfg)
2974 self.assertEqual(found, expected)
2975@@ -417,18 +525,19 @@
2976 'fa:16:3e:69:b0:58': 'ens1'})
2977 get_interfaces_by_mac.return_value = macs
2978
2979- ncfg = ds.convert_network_data(NETWORK_DATA)
2980+ ncfg = openstack.convert_net_json(NETWORK_DATA)
2981 expected = set(['nic0', 'ens1', 'enp0s2'])
2982 found = self._getnames_in_config(ncfg)
2983 self.assertEqual(found, expected)
2984
2985 def test_convert_raises_value_error_on_missing_name(self):
2986 macs = {'aa:aa:aa:aa:aa:00': 'ens1'}
2987- self.assertRaises(ValueError, ds.convert_network_data,
2988+ self.assertRaises(ValueError, openstack.convert_net_json,
2989 NETWORK_DATA, known_macs=macs)
2990
2991 def test_conversion_with_route(self):
2992- ncfg = ds.convert_network_data(NETWORK_DATA_2, known_macs=KNOWN_MACS)
2993+ ncfg = openstack.convert_net_json(NETWORK_DATA_2,
2994+ known_macs=KNOWN_MACS)
2995 # not the best test, but see that we get a route in the
2996 # network config and that it gets rendered to an ENI file
2997 routes = []
2998@@ -438,15 +547,23 @@
2999 self.assertIn(
3000 {'network': '0.0.0.0', 'netmask': '0.0.0.0', 'gateway': '2.2.2.9'},
3001 routes)
3002- eni = net.render_interfaces(net.parse_net_config_data(ncfg))
3003- self.assertIn("route add default gw 2.2.2.9", eni)
3004+ eni_renderer = eni.Renderer()
3005+ eni_renderer.render_network_state(
3006+ self.tmp, network_state.parse_net_config_data(ncfg))
3007+ with open(os.path.join(self.tmp, "etc",
3008+ "network", "interfaces"), 'r') as f:
3009+ eni_rendering = f.read()
3010+ self.assertIn("route add default gw 2.2.2.9", eni_rendering)
3011
3012
3013 def cfg_ds_from_dir(seed_d):
3014- found = ds.read_config_drive(seed_d)
3015 cfg_ds = ds.DataSourceConfigDrive(settings.CFG_BUILTIN, None,
3016 helpers.Paths({}))
3017- populate_ds_from_read_config(cfg_ds, seed_d, found)
3018+ cfg_ds.seed_dir = seed_d
3019+ cfg_ds.known_macs = KNOWN_MACS.copy()
3020+ if not cfg_ds.get_data():
3021+ raise RuntimeError("Data source did not extract itself from"
3022+ " seed directory %s" % seed_d)
3023 return cfg_ds
3024
3025
3026@@ -460,7 +577,7 @@
3027 cfg_ds.userdata_raw = results.get('userdata')
3028 cfg_ds.version = results.get('version')
3029 cfg_ds.network_json = results.get('networkdata')
3030- cfg_ds._network_config = ds.convert_network_data(
3031+ cfg_ds._network_config = openstack.convert_net_json(
3032 cfg_ds.network_json, known_macs=KNOWN_MACS)
3033
3034
3035@@ -474,7 +591,6 @@
3036 mode = "w"
3037 else:
3038 mode = "wb"
3039-
3040 with open(path, mode) as fp:
3041 fp.write(content)
3042
3043
3044=== modified file 'tests/unittests/test_datasource/test_nocloud.py'
3045--- tests/unittests/test_datasource/test_nocloud.py 2016-05-12 20:43:11 +0000
3046+++ tests/unittests/test_datasource/test_nocloud.py 2016-06-10 21:20:56 +0000
3047@@ -1,23 +1,14 @@
3048 from cloudinit import helpers
3049 from cloudinit.sources import DataSourceNoCloud
3050 from cloudinit import util
3051-from ..helpers import TestCase, populate_dir
3052+from ..helpers import TestCase, populate_dir, mock, ExitStack
3053
3054 import os
3055 import shutil
3056 import tempfile
3057-import unittest
3058+
3059 import yaml
3060
3061-try:
3062- from unittest import mock
3063-except ImportError:
3064- import mock
3065-try:
3066- from contextlib import ExitStack
3067-except ImportError:
3068- from contextlib2 import ExitStack
3069-
3070
3071 class TestNoCloudDataSource(TestCase):
3072
3073@@ -139,7 +130,7 @@
3074 self.assertTrue(ret)
3075
3076
3077-class TestParseCommandLineData(unittest.TestCase):
3078+class TestParseCommandLineData(TestCase):
3079
3080 def test_parse_cmdline_data_valid(self):
3081 ds_id = "ds=nocloud"
3082
3083=== modified file 'tests/unittests/test_datasource/test_smartos.py'
3084--- tests/unittests/test_datasource/test_smartos.py 2016-06-10 18:49:34 +0000
3085+++ tests/unittests/test_datasource/test_smartos.py 2016-06-10 21:20:56 +0000
3086@@ -34,11 +34,12 @@
3087 import tempfile
3088 import uuid
3089
3090-import serial
3091+from cloudinit import serial
3092+from cloudinit.sources import DataSourceSmartOS
3093+
3094 import six
3095
3096 from cloudinit import helpers as c_helpers
3097-from cloudinit.sources import DataSourceSmartOS
3098 from cloudinit.util import b64e
3099
3100 from ..helpers import mock, FilesystemMockingTestCase, TestCase
3101@@ -380,6 +381,7 @@
3102
3103 def setUp(self):
3104 super(TestJoyentMetadataClient, self).setUp()
3105+
3106 self.serial = mock.MagicMock(spec=serial.Serial)
3107 self.request_id = 0xabcdef12
3108 self.metadata_value = 'value'
3109
3110=== modified file 'tests/unittests/test_net.py'
3111--- tests/unittests/test_net.py 2016-05-12 20:43:11 +0000
3112+++ tests/unittests/test_net.py 2016-06-10 21:20:56 +0000
3113@@ -1,6 +1,10 @@
3114 from cloudinit import net
3115+from cloudinit.net import cmdline
3116+from cloudinit.net import eni
3117+from cloudinit.net import network_state
3118 from cloudinit import util
3119
3120+from .helpers import mock
3121 from .helpers import TestCase
3122
3123 import base64
3124@@ -9,6 +13,8 @@
3125 import io
3126 import json
3127 import os
3128+import shutil
3129+import tempfile
3130
3131 DHCP_CONTENT_1 = """
3132 DEVICE='eth0'
3133@@ -69,21 +75,87 @@
3134 }
3135
3136
3137-class TestNetConfigParsing(TestCase):
3138+class TestEniNetRendering(TestCase):
3139+
3140+ @mock.patch("cloudinit.net.sys_dev_path")
3141+ @mock.patch("cloudinit.net.sys_netdev_info")
3142+ @mock.patch("cloudinit.net.get_devicelist")
3143+ def test_default_generation(self, mock_get_devicelist,
3144+ mock_sys_netdev_info,
3145+ mock_sys_dev_path):
3146+ mock_get_devicelist.return_value = ['eth1000', 'lo']
3147+
3148+ dev_characteristics = {
3149+ 'eth1000': {
3150+ "bridge": False,
3151+ "carrier": False,
3152+ "dormant": False,
3153+ "operstate": "down",
3154+ "address": "07-1C-C6-75-A4-BE",
3155+ }
3156+ }
3157+
3158+ def netdev_info(name, field):
3159+ return dev_characteristics[name][field]
3160+
3161+ mock_sys_netdev_info.side_effect = netdev_info
3162+
3163+ tmp_dir = tempfile.mkdtemp()
3164+ self.addCleanup(shutil.rmtree, tmp_dir)
3165+
3166+ def sys_dev_path(devname, path=""):
3167+ return tmp_dir + devname + "/" + path
3168+
3169+ for dev in dev_characteristics:
3170+ os.makedirs(os.path.join(tmp_dir, dev))
3171+ with open(os.path.join(tmp_dir, dev, 'operstate'), 'w') as fh:
3172+ fh.write("down")
3173+
3174+ mock_sys_dev_path.side_effect = sys_dev_path
3175+
3176+ network_cfg = net.generate_fallback_config()
3177+ ns = network_state.parse_net_config_data(network_cfg,
3178+ skip_broken=False)
3179+
3180+ render_dir = os.path.join(tmp_dir, "render")
3181+ os.makedirs(render_dir)
3182+
3183+ renderer = eni.Renderer()
3184+ renderer.render_network_state(render_dir, ns,
3185+ eni="interfaces",
3186+ links_prefix=None,
3187+ netrules=None)
3188+
3189+ self.assertTrue(os.path.exists(os.path.join(render_dir,
3190+ 'interfaces')))
3191+ with open(os.path.join(render_dir, 'interfaces')) as fh:
3192+ contents = fh.read()
3193+
3194+ expected = """
3195+auto lo
3196+iface lo inet loopback
3197+
3198+auto eth1000
3199+iface eth1000 inet dhcp
3200+"""
3201+ self.assertEqual(expected.lstrip(), contents.lstrip())
3202+
3203+
3204+class TestCmdlineConfigParsing(TestCase):
3205 simple_cfg = {
3206 'config': [{"type": "physical", "name": "eth0",
3207 "mac_address": "c0:d6:9f:2c:e8:80",
3208 "subnets": [{"type": "dhcp"}]}]}
3209
3210- def test_klibc_convert_dhcp(self):
3211- found = net._klibc_to_config_entry(DHCP_CONTENT_1)
3212+ def test_cmdline_convert_dhcp(self):
3213+ found = cmdline._klibc_to_config_entry(DHCP_CONTENT_1)
3214 self.assertEqual(found, ('eth0', DHCP_EXPECTED_1))
3215
3216- def test_klibc_convert_static(self):
3217- found = net._klibc_to_config_entry(STATIC_CONTENT_1)
3218+ def test_cmdline_convert_static(self):
3219+ found = cmdline._klibc_to_config_entry(STATIC_CONTENT_1)
3220 self.assertEqual(found, ('eth1', STATIC_EXPECTED_1))
3221
3222- def test_config_from_klibc_net_cfg(self):
3223+ def test_config_from_cmdline_net_cfg(self):
3224 files = []
3225 pairs = (('net-eth0.cfg', DHCP_CONTENT_1),
3226 ('net-eth1.cfg', STATIC_CONTENT_1))
3227@@ -104,21 +176,22 @@
3228 files.append(fp)
3229 util.write_file(fp, content)
3230
3231- found = net.config_from_klibc_net_cfg(files=files, mac_addrs=macs)
3232+ found = cmdline.config_from_klibc_net_cfg(files=files,
3233+ mac_addrs=macs)
3234 self.assertEqual(found, expected)
3235
3236 def test_cmdline_with_b64(self):
3237 data = base64.b64encode(json.dumps(self.simple_cfg).encode())
3238 encoded_text = data.decode()
3239- cmdline = 'ro network-config=' + encoded_text + ' root=foo'
3240- found = net.read_kernel_cmdline_config(cmdline=cmdline)
3241+ raw_cmdline = 'ro network-config=' + encoded_text + ' root=foo'
3242+ found = cmdline.read_kernel_cmdline_config(cmdline=raw_cmdline)
3243 self.assertEqual(found, self.simple_cfg)
3244
3245 def test_cmdline_with_b64_gz(self):
3246 data = _gzip_data(json.dumps(self.simple_cfg).encode())
3247 encoded_text = base64.b64encode(data).decode()
3248- cmdline = 'ro network-config=' + encoded_text + ' root=foo'
3249- found = net.read_kernel_cmdline_config(cmdline=cmdline)
3250+ raw_cmdline = 'ro network-config=' + encoded_text + ' root=foo'
3251+ found = cmdline.read_kernel_cmdline_config(cmdline=raw_cmdline)
3252 self.assertEqual(found, self.simple_cfg)
3253
3254
3255
3256=== modified file 'tests/unittests/test_reporting.py'
3257--- tests/unittests/test_reporting.py 2016-05-12 20:43:11 +0000
3258+++ tests/unittests/test_reporting.py 2016-06-10 21:20:56 +0000
3259@@ -7,7 +7,9 @@
3260 from cloudinit.reporting import events
3261 from cloudinit.reporting import handlers
3262
3263-from .helpers import (mock, TestCase)
3264+import mock
3265+
3266+from .helpers import TestCase
3267
3268
3269 def _fake_registry():
3270
3271=== modified file 'tests/unittests/test_rh_subscription.py'
3272--- tests/unittests/test_rh_subscription.py 2016-05-12 20:43:11 +0000
3273+++ tests/unittests/test_rh_subscription.py 2016-06-10 21:20:56 +0000
3274@@ -1,12 +1,24 @@
3275+# This program is free software: you can redistribute it and/or modify
3276+# it under the terms of the GNU General Public License version 3, as
3277+# published by the Free Software Foundation.
3278+#
3279+# This program is distributed in the hope that it will be useful,
3280+# but WITHOUT ANY WARRANTY; without even the implied warranty of
3281+# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
3282+# GNU General Public License for more details.
3283+#
3284+# You should have received a copy of the GNU General Public License
3285+# along with this program. If not, see <http://www.gnu.org/licenses/>.
3286+
3287+import logging
3288+
3289 from cloudinit.config import cc_rh_subscription
3290 from cloudinit import util
3291
3292-import logging
3293-import mock
3294-import unittest
3295-
3296-
3297-class GoodTests(unittest.TestCase):
3298+from .helpers import TestCase, mock
3299+
3300+
3301+class GoodTests(TestCase):
3302 def setUp(self):
3303 super(GoodTests, self).setUp()
3304 self.name = "cc_rh_subscription"
3305@@ -93,7 +105,7 @@
3306 self.assertEqual(self.SM._sub_man_cli.call_count, 9)
3307
3308
3309-class TestBadInput(unittest.TestCase):
3310+class TestBadInput(TestCase):
3311 name = "cc_rh_subscription"
3312 cloud_init = None
3313 log = logging.getLogger("bad_tests")
3314
3315=== modified file 'tox.ini'
3316--- tox.ini 2016-06-02 00:51:03 +0000
3317+++ tox.ini 2016-06-10 21:20:56 +0000
3318@@ -5,10 +5,9 @@
3319 [testenv]
3320 commands = python -m nose {posargs:tests}
3321 deps = -r{toxinidir}/test-requirements.txt
3322- -r{toxinidir}/requirements.txt
3323-
3324-[testenv:py3]
3325-basepython = python3
3326+ -r{toxinidir}/requirements.txt
3327+setenv =
3328+ LC_ALL = en_US.utf-8
3329
3330 [testenv:flake8]
3331 basepython = python3
3332@@ -18,15 +17,11 @@
3333 setenv =
3334 LC_ALL = en_US.utf-8
3335
3336+[testenv:py3]
3337+basepython = python3
3338+
3339 [testenv:py26]
3340 commands = nosetests {posargs:tests}
3341-deps =
3342- contextlib2
3343- httpretty>=0.7.1
3344- mock
3345- nose
3346- pep8==1.5.7
3347- pyflakes
3348 setenv =
3349 LC_ALL = C
3350