Merge lp:~zhhuabj/charms/trusty/quantum-gateway/lp74646 into lp:~openstack-charmers/charms/trusty/quantum-gateway/next

Proposed by Hua Zhang on 2014-11-24
Status: Superseded
Proposed branch: lp:~zhhuabj/charms/trusty/quantum-gateway/lp74646
Merge into: lp:~openstack-charmers/charms/trusty/quantum-gateway/next
Diff against target: 3408 lines (+992/-561)
29 files modified
config.yaml (+6/-0)
hooks/charmhelpers/contrib/hahelpers/cluster.py (+16/-7)
hooks/charmhelpers/contrib/network/ip.py (+83/-51)
hooks/charmhelpers/contrib/openstack/amulet/deployment.py (+2/-1)
hooks/charmhelpers/contrib/openstack/amulet/utils.py (+3/-1)
hooks/charmhelpers/contrib/openstack/context.py (+319/-226)
hooks/charmhelpers/contrib/openstack/ip.py (+41/-27)
hooks/charmhelpers/contrib/openstack/neutron.py (+20/-4)
hooks/charmhelpers/contrib/openstack/templating.py (+5/-5)
hooks/charmhelpers/contrib/openstack/utils.py (+146/-13)
hooks/charmhelpers/contrib/storage/linux/ceph.py (+89/-102)
hooks/charmhelpers/contrib/storage/linux/loopback.py (+4/-4)
hooks/charmhelpers/contrib/storage/linux/lvm.py (+1/-0)
hooks/charmhelpers/contrib/storage/linux/utils.py (+3/-2)
hooks/charmhelpers/core/fstab.py (+10/-8)
hooks/charmhelpers/core/hookenv.py (+27/-11)
hooks/charmhelpers/core/host.py (+73/-20)
hooks/charmhelpers/core/services/__init__.py (+2/-2)
hooks/charmhelpers/core/services/helpers.py (+9/-5)
hooks/charmhelpers/core/templating.py (+2/-1)
hooks/charmhelpers/fetch/__init__.py (+18/-12)
hooks/charmhelpers/fetch/archiveurl.py (+53/-16)
hooks/charmhelpers/fetch/bzrurl.py (+5/-1)
hooks/quantum_contexts.py (+3/-1)
hooks/quantum_hooks.py (+5/-0)
hooks/quantum_utils.py (+1/-1)
templates/icehouse/neutron.conf (+1/-0)
unit_tests/test_quantum_contexts.py (+40/-39)
unit_tests/test_quantum_hooks.py (+5/-1)
To merge this branch: bzr merge lp:~zhhuabj/charms/trusty/quantum-gateway/lp74646
Reviewer Review Type Date Requested Status
Edward Hope-Morley 2014-11-24 Needs Fixing on 2014-11-24
Xiang Hui 2014-11-24 Pending
Review via email: mp+242612@code.launchpad.net

This proposal has been superseded by a proposal from 2014-12-08.

Description of the change

This story (SF#74646) adds support for setting a VM MTU of up to 1500 by configuring the MTU of the physical NICs and the network_device_mtu option.
1, set the MTU for the physical NICs in both the nova-compute and neutron-gateway charms:
   juju set nova-compute phy-nic-mtu=1546
   juju set neutron-gateway phy-nic-mtu=1546
2, set the MTU for the veth peer devices between the OVS bridges br-phy and br-int by adding the 'network-device-mtu' parameter to /etc/neutron/neutron.conf:
   juju set neutron-api network-device-mtu=1546
   Limitations:
   a, Linux bridge is not supported, because the three related parameters (ovs_use_veth, use_veth_interconnection, veth_mtu) are not added.
   b, for GRE and VXLAN, this step is optional.
   c, after setting network-device-mtu=1546 on the neutron-api charm, the quantum-gateway and neutron-openvswitch charms pick up the network-device-mtu parameter via their relations, so only the openvswitch plugin is supported at this stage.
3, the MTU inside the VM can still be configured via DHCP by setting the instance-mtu configuration option:
   juju set neutron-gateway instance-mtu=1500
   Limitations:
   a, only a VM MTU of <=1500 is supported; to set a VM MTU greater than 1500, the MTU of the tap devices associated with that VM must also be raised, as described at http://pastebin.ubuntu.com/9272762/
   b, MTU per network is not supported.

NOTE: we may not be able to test this feature on bastion.
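For illustration, the physical-NIC selection used by the new configure_phy_nic_mtu helper (see the hooks/charmhelpers/contrib/network/ip.py hunk in the diff below) can be sketched as a pure function. The nic_addrs and bridge_nics parameters here are hypothetical stand-ins for the netifaces/sysfs lookups that charm-helpers performs; this is a simplified sketch, not the shipped code.

```python
def find_phy_nic(mng_ip, nic_addrs, bridge_nics):
    """Return the NIC that should receive the larger MTU.

    mng_ip: the unit's management (private) IP address
    nic_addrs: dict mapping nic name -> list of IPv4 addresses on it
    bridge_nics: dict mapping bridge name -> list of member nic names
    """
    for nic, addrs in sorted(nic_addrs.items()):
        if mng_ip not in addrs:
            continue
        # If the management address lives on a bridge (e.g. a MAAS br0),
        # prefer the underlying eth/bond member as the physical NIC.
        if nic.startswith('br'):
            for member in bridge_nics.get(nic, []):
                if member.startswith(('eth', 'bond')):
                    return member
        # Otherwise (or if the bridge has no eth/bond member), use the
        # interface carrying the address itself.
        return nic
    return None
```

As in the diff, the caller would then compare the current MTU of the returned NIC against the configured phy-nic-mtu and apply it persistently only when it differs.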


UOSCI bot says:
charm_unit_test #1048 quantum-gateway-next for zhhuabj mp242612
    UNIT OK: passed

UNIT Results (max last 5 lines):
  hooks/quantum_hooks 106 2 98% 199-201
  hooks/quantum_utils 214 11 95% 394, 581-590
  TOTAL 451 18 96%
  Ran 83 tests in 3.175s
  OK

Full unit test output: http://paste.ubuntu.com/9206999/
Build: http://10.98.191.181:8080/job/charm_unit_test/1048/

UOSCI bot says:
charm_lint_check #1214 quantum-gateway-next for zhhuabj mp242612
    LINT OK: passed

LINT Results (max last 5 lines):
  I: Categories are being deprecated in favor of tags. Please rename the "categories" field to "tags".
  I: config.yaml: option ext-port has no default value
  I: config.yaml: option os-data-network has no default value
  I: config.yaml: option instance-mtu has no default value
  I: config.yaml: option external-network-id has no default value

Full lint test output: http://paste.ubuntu.com/9207009/
Build: http://10.98.191.181:8080/job/charm_lint_check/1214/

UOSCI bot says:
charm_amulet_test #518 quantum-gateway-next for zhhuabj mp242612
    AMULET FAIL: amulet-test failed

AMULET Results (max last 5 lines):
  juju-test.conductor DEBUG : Calling "juju destroy-environment -y osci-sv05"
  WARNING cannot delete security group "juju-osci-sv05-0". Used by another environment?
  juju-test INFO : Results: 2 passed, 1 failed, 0 errored
  ERROR subprocess encountered error code 1
  make: *** [test] Error 1

Full amulet test output: http://paste.ubuntu.com/9207407/
Build: http://10.98.191.181:8080/job/charm_amulet_test/518/

Edward Hope-Morley (hopem) wrote :

So from what I understand, when deploying on metal and/or in lxc containers, we need to set the MTU in two places. First, we need to set the MTU on the physical interface and on the MAAS bridge br0 (if MAAS deployed) used by OVS. Second, we need to set the MTU for each veth attached to br0 that is used by the lxc containers into which units are deployed. Also, the MTU needs to be set persistently, so that when a node is rebooted it does not revert to the default value of 1500.

I don't think the charm is the place to do this, since it may not be aware of all these interfaces, and setting an lxc default for all containers seems a bit too intrusive. A MAAS preseed that performs these actions seems a better fit. Thoughts?
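For reference, a persistent out-of-charm setting along the lines suggested above (e.g. applied from a MAAS preseed) could be an ifupdown stanza; the interface name and MTU value here are examples only, not part of this proposal:

```
# /etc/network/interfaces.d/eth0.cfg -- example only; interface name and
# MTU value depend on the deployment
auto eth0
iface eth0 inet dhcp
    mtu 1546
    # veths attached to br0 for lxc containers need the same value,
    # e.g. post-up ip link set dev $IFACE mtu 1546
```

With a stanza like this the MTU survives reboots, which addresses the persistence concern without the charm having to know about every interface.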

review: Needs Fixing
79. By Hua Zhang on 2014-12-03

sync charm-helpers

80. By Hua Zhang on 2014-12-03

fix charm-helpers-hooks

81. By Hua Zhang on 2014-12-03

change to use the method of charm-helpers

UOSCI bot says:
charm_lint_check #1311 quantum-gateway-next for zhhuabj mp242612
    LINT OK: passed

LINT Results (max last 5 lines):
  I: Categories are being deprecated in favor of tags. Please rename the "categories" field to "tags".
  I: config.yaml: option ext-port has no default value
  I: config.yaml: option os-data-network has no default value
  I: config.yaml: option instance-mtu has no default value
  I: config.yaml: option external-network-id has no default value

Full lint test output: http://paste.ubuntu.com/9354712/
Build: http://10.98.191.181:8080/job/charm_lint_check/1311/

UOSCI bot says:
charm_unit_test #1145 quantum-gateway-next for zhhuabj mp242612
    UNIT OK: passed

UNIT Results (max last 5 lines):
  hooks/quantum_hooks 107 2 98% 201-203
  hooks/quantum_utils 202 1 99% 388
  TOTAL 440 8 98%
  Ran 83 tests in 3.396s
  OK

Full unit test output: http://paste.ubuntu.com/9354713/
Build: http://10.98.191.181:8080/job/charm_unit_test/1145/

UOSCI bot says:
charm_amulet_test #577 quantum-gateway-next for zhhuabj mp242612
    AMULET FAIL: amulet-test failed

AMULET Results (max last 5 lines):
  juju-test.conductor DEBUG : Calling "juju destroy-environment -y osci-sv07"
  WARNING cannot delete security group "juju-osci-sv07-0". Used by another environment?
  juju-test INFO : Results: 2 passed, 1 failed, 0 errored
  ERROR subprocess encountered error code 1
  make: *** [test] Error 1

Full amulet test output: http://paste.ubuntu.com/9354797/
Build: http://10.98.191.181:8080/job/charm_amulet_test/577/

UOSCI bot says:
charm_lint_check #1313 quantum-gateway-next for zhhuabj mp242612
    LINT OK: passed

LINT Results (max last 5 lines):
  I: Categories are being deprecated in favor of tags. Please rename the "categories" field to "tags".
  I: config.yaml: option ext-port has no default value
  I: config.yaml: option os-data-network has no default value
  I: config.yaml: option instance-mtu has no default value
  I: config.yaml: option external-network-id has no default value

Full lint test output: http://paste.ubuntu.com/9354898/
Build: http://10.98.191.181:8080/job/charm_lint_check/1313/

UOSCI bot says:
charm_unit_test #1147 quantum-gateway-next for zhhuabj mp242612
    UNIT OK: passed

UNIT Results (max last 5 lines):
  hooks/quantum_hooks 107 2 98% 201-203
  hooks/quantum_utils 202 1 99% 388
  TOTAL 440 8 98%
  Ran 83 tests in 3.532s
  OK

Full unit test output: http://paste.ubuntu.com/9354899/
Build: http://10.98.191.181:8080/job/charm_unit_test/1147/

UOSCI bot says:
charm_amulet_test #579 quantum-gateway-next for zhhuabj mp242612
    AMULET FAIL: amulet-test failed

AMULET Results (max last 5 lines):
  juju-test.conductor DEBUG : Calling "juju destroy-environment -y osci-sv05"
  WARNING cannot delete security group "juju-osci-sv05-0". Used by another environment?
  juju-test INFO : Results: 2 passed, 1 failed, 0 errored
  ERROR subprocess encountered error code 1
  make: *** [test] Error 1

Full amulet test output: http://paste.ubuntu.com/9354971/
Build: http://10.98.191.181:8080/job/charm_amulet_test/579/

82. By Hua Zhang on 2014-12-04

enable network-device-mtu

83. By Hua Zhang on 2014-12-04

fix unit test

84. By Hua Zhang on 2014-12-04

enable persistence

85. By Hua Zhang on 2014-12-04

sync charm-helpers

86. By Hua Zhang on 2014-12-05

sync charm-helpers

87. By Hua Zhang on 2014-12-10

sync charm-helpers to include contrib.python to fix unit test error

88. By Hua Zhang on 2014-12-11

fix KeyError: network-device-mtu

89. By Hua Zhang on 2014-12-11

fix hanging indent

Unmerged revisions

Preview Diff

1=== modified file 'config.yaml'
2--- config.yaml 2014-11-24 09:34:05 +0000
3+++ config.yaml 2014-12-05 07:18:23 +0000
4@@ -115,3 +115,9 @@
5 .
6 This network will be used for tenant network traffic in overlay
7 networks.
8+ phy-nic-mtu:
9+ type: int
10+ default: 1500
11+ description: |
12+ To improve network performance of VM, sometimes we should keep VM MTU as 1500
13+ and use charm to modify MTU of tunnel nic more than 1500 (e.g. 1546 for GRE)
14
15=== modified file 'hooks/charmhelpers/contrib/hahelpers/cluster.py'
16--- hooks/charmhelpers/contrib/hahelpers/cluster.py 2014-10-07 21:03:47 +0000
17+++ hooks/charmhelpers/contrib/hahelpers/cluster.py 2014-12-05 07:18:23 +0000
18@@ -13,9 +13,10 @@
19
20 import subprocess
21 import os
22-
23 from socket import gethostname as get_unit_hostname
24
25+import six
26+
27 from charmhelpers.core.hookenv import (
28 log,
29 relation_ids,
30@@ -77,7 +78,7 @@
31 "show", resource
32 ]
33 try:
34- status = subprocess.check_output(cmd)
35+ status = subprocess.check_output(cmd).decode('UTF-8')
36 except subprocess.CalledProcessError:
37 return False
38 else:
39@@ -150,34 +151,42 @@
40 return False
41
42
43-def determine_api_port(public_port):
44+def determine_api_port(public_port, singlenode_mode=False):
45 '''
46 Determine correct API server listening port based on
47 existence of HTTPS reverse proxy and/or haproxy.
48
49 public_port: int: standard public port for given service
50
51+ singlenode_mode: boolean: Shuffle ports when only a single unit is present
52+
53 returns: int: the correct listening port for the API service
54 '''
55 i = 0
56- if len(peer_units()) > 0 or is_clustered():
57+ if singlenode_mode:
58+ i += 1
59+ elif len(peer_units()) > 0 or is_clustered():
60 i += 1
61 if https():
62 i += 1
63 return public_port - (i * 10)
64
65
66-def determine_apache_port(public_port):
67+def determine_apache_port(public_port, singlenode_mode=False):
68 '''
69 Description: Determine correct apache listening port based on public IP +
70 state of the cluster.
71
72 public_port: int: standard public port for given service
73
74+ singlenode_mode: boolean: Shuffle ports when only a single unit is present
75+
76 returns: int: the correct listening port for the HAProxy service
77 '''
78 i = 0
79- if len(peer_units()) > 0 or is_clustered():
80+ if singlenode_mode:
81+ i += 1
82+ elif len(peer_units()) > 0 or is_clustered():
83 i += 1
84 return public_port - (i * 10)
85
86@@ -197,7 +206,7 @@
87 for setting in settings:
88 conf[setting] = config_get(setting)
89 missing = []
90- [missing.append(s) for s, v in conf.iteritems() if v is None]
91+ [missing.append(s) for s, v in six.iteritems(conf) if v is None]
92 if missing:
93 log('Insufficient config data to configure hacluster.', level=ERROR)
94 raise HAIncompleteConfig
95
96=== modified file 'hooks/charmhelpers/contrib/network/ip.py'
97--- hooks/charmhelpers/contrib/network/ip.py 2014-10-16 17:42:14 +0000
98+++ hooks/charmhelpers/contrib/network/ip.py 2014-12-05 07:18:23 +0000
99@@ -1,16 +1,20 @@
100 import glob
101 import re
102 import subprocess
103-import sys
104
105 from functools import partial
106
107 from charmhelpers.core.hookenv import unit_get
108 from charmhelpers.fetch import apt_install
109 from charmhelpers.core.hookenv import (
110- WARNING,
111- ERROR,
112- log
113+ config,
114+ log,
115+ INFO
116+)
117+from charmhelpers.core.host import (
118+ list_nics,
119+ get_nic_mtu,
120+ set_nic_mtu
121 )
122
123 try:
124@@ -34,31 +38,28 @@
125 network)
126
127
128+def no_ip_found_error_out(network):
129+ errmsg = ("No IP address found in network: %s" % network)
130+ raise ValueError(errmsg)
131+
132+
133 def get_address_in_network(network, fallback=None, fatal=False):
134- """
135- Get an IPv4 or IPv6 address within the network from the host.
136+ """Get an IPv4 or IPv6 address within the network from the host.
137
138 :param network (str): CIDR presentation format. For example,
139 '192.168.1.0/24'.
140 :param fallback (str): If no address is found, return fallback.
141 :param fatal (boolean): If no address is found, fallback is not
142 set and fatal is True then exit(1).
143-
144 """
145-
146- def not_found_error_out():
147- log("No IP address found in network: %s" % network,
148- level=ERROR)
149- sys.exit(1)
150-
151 if network is None:
152 if fallback is not None:
153 return fallback
154+
155+ if fatal:
156+ no_ip_found_error_out(network)
157 else:
158- if fatal:
159- not_found_error_out()
160- else:
161- return None
162+ return None
163
164 _validate_cidr(network)
165 network = netaddr.IPNetwork(network)
166@@ -70,6 +71,7 @@
167 cidr = netaddr.IPNetwork("%s/%s" % (addr, netmask))
168 if cidr in network:
169 return str(cidr.ip)
170+
171 if network.version == 6 and netifaces.AF_INET6 in addresses:
172 for addr in addresses[netifaces.AF_INET6]:
173 if not addr['addr'].startswith('fe80'):
174@@ -82,20 +84,20 @@
175 return fallback
176
177 if fatal:
178- not_found_error_out()
179+ no_ip_found_error_out(network)
180
181 return None
182
183
184 def is_ipv6(address):
185- '''Determine whether provided address is IPv6 or not'''
186+ """Determine whether provided address is IPv6 or not."""
187 try:
188 address = netaddr.IPAddress(address)
189 except netaddr.AddrFormatError:
190 # probably a hostname - so not an address at all!
191 return False
192- else:
193- return address.version == 6
194+
195+ return address.version == 6
196
197
198 def is_address_in_network(network, address):
199@@ -113,11 +115,13 @@
200 except (netaddr.core.AddrFormatError, ValueError):
201 raise ValueError("Network (%s) is not in CIDR presentation format" %
202 network)
203+
204 try:
205 address = netaddr.IPAddress(address)
206 except (netaddr.core.AddrFormatError, ValueError):
207 raise ValueError("Address (%s) is not in correct presentation format" %
208 address)
209+
210 if address in network:
211 return True
212 else:
213@@ -147,6 +151,7 @@
214 return iface
215 else:
216 return addresses[netifaces.AF_INET][0][key]
217+
218 if address.version == 6 and netifaces.AF_INET6 in addresses:
219 for addr in addresses[netifaces.AF_INET6]:
220 if not addr['addr'].startswith('fe80'):
221@@ -160,41 +165,42 @@
222 return str(cidr).split('/')[1]
223 else:
224 return addr[key]
225+
226 return None
227
228
229 get_iface_for_address = partial(_get_for_address, key='iface')
230
231+
232 get_netmask_for_address = partial(_get_for_address, key='netmask')
233
234
235 def format_ipv6_addr(address):
236- """
237- IPv6 needs to be wrapped with [] in url link to parse correctly.
238+ """If address is IPv6, wrap it in '[]' otherwise return None.
239+
240+ This is required by most configuration files when specifying IPv6
241+ addresses.
242 """
243 if is_ipv6(address):
244- address = "[%s]" % address
245- else:
246- log("Not a valid ipv6 address: %s" % address, level=WARNING)
247- address = None
248+ return "[%s]" % address
249
250- return address
251+ return None
252
253
254 def get_iface_addr(iface='eth0', inet_type='AF_INET', inc_aliases=False,
255 fatal=True, exc_list=None):
256- """
257- Return the assigned IP address for a given interface, if any, or [].
258- """
259+ """Return the assigned IP address for a given interface, if any."""
260 # Extract nic if passed /dev/ethX
261 if '/' in iface:
262 iface = iface.split('/')[-1]
263+
264 if not exc_list:
265 exc_list = []
266+
267 try:
268 inet_num = getattr(netifaces, inet_type)
269 except AttributeError:
270- raise Exception('Unknown inet type ' + str(inet_type))
271+ raise Exception("Unknown inet type '%s'" % str(inet_type))
272
273 interfaces = netifaces.interfaces()
274 if inc_aliases:
275@@ -202,15 +208,18 @@
276 for _iface in interfaces:
277 if iface == _iface or _iface.split(':')[0] == iface:
278 ifaces.append(_iface)
279+
280 if fatal and not ifaces:
281 raise Exception("Invalid interface '%s'" % iface)
282+
283 ifaces.sort()
284 else:
285 if iface not in interfaces:
286 if fatal:
287- raise Exception("%s not found " % (iface))
288+ raise Exception("Interface '%s' not found " % (iface))
289 else:
290 return []
291+
292 else:
293 ifaces = [iface]
294
295@@ -221,10 +230,13 @@
296 for entry in net_info[inet_num]:
297 if 'addr' in entry and entry['addr'] not in exc_list:
298 addresses.append(entry['addr'])
299+
300 if fatal and not addresses:
301 raise Exception("Interface '%s' doesn't have any %s addresses." %
302 (iface, inet_type))
303- return addresses
304+
305+ return sorted(addresses)
306+
307
308 get_ipv4_addr = partial(get_iface_addr, inet_type='AF_INET')
309
310@@ -241,6 +253,7 @@
311 raw = re.match(ll_key, _addr)
312 if raw:
313 _addr = raw.group(1)
314+
315 if _addr == addr:
316 log("Address '%s' is configured on iface '%s'" %
317 (addr, iface))
318@@ -251,8 +264,9 @@
319
320
321 def sniff_iface(f):
322- """If no iface provided, inject net iface inferred from unit private
323- address.
324+ """Ensure decorated function is called with a value for iface.
325+
326+ If no iface provided, inject net iface inferred from unit private address.
327 """
328 def iface_sniffer(*args, **kwargs):
329 if not kwargs.get('iface', None):
330@@ -295,7 +309,7 @@
331 if global_addrs:
332 # Make sure any found global addresses are not temporary
333 cmd = ['ip', 'addr', 'show', iface]
334- out = subprocess.check_output(cmd)
335+ out = subprocess.check_output(cmd).decode('UTF-8')
336 if dynamic_only:
337 key = re.compile("inet6 (.+)/[0-9]+ scope global dynamic.*")
338 else:
339@@ -317,33 +331,51 @@
340 return addrs
341
342 if fatal:
343- raise Exception("Interface '%s' doesn't have a scope global "
344+ raise Exception("Interface '%s' does not have a scope global "
345 "non-temporary ipv6 address." % iface)
346
347 return []
348
349
350 def get_bridges(vnic_dir='/sys/devices/virtual/net'):
351- """
352- Return a list of bridges on the system or []
353- """
354- b_rgex = vnic_dir + '/*/bridge'
355- return [x.replace(vnic_dir, '').split('/')[1] for x in glob.glob(b_rgex)]
356+ """Return a list of bridges on the system."""
357+ b_regex = "%s/*/bridge" % vnic_dir
358+ return [x.replace(vnic_dir, '').split('/')[1] for x in glob.glob(b_regex)]
359
360
361 def get_bridge_nics(bridge, vnic_dir='/sys/devices/virtual/net'):
362- """
363- Return a list of nics comprising a given bridge on the system or []
364- """
365- brif_rgex = "%s/%s/brif/*" % (vnic_dir, bridge)
366- return [x.split('/')[-1] for x in glob.glob(brif_rgex)]
367+ """Return a list of nics comprising a given bridge on the system."""
368+ brif_regex = "%s/%s/brif/*" % (vnic_dir, bridge)
369+ return [x.split('/')[-1] for x in glob.glob(brif_regex)]
370
371
372 def is_bridge_member(nic):
373- """
374- Check if a given nic is a member of a bridge
375- """
376+ """Check if a given nic is a member of a bridge."""
377 for bridge in get_bridges():
378 if nic in get_bridge_nics(bridge):
379 return True
380+
381 return False
382+
383+
384+def configure_phy_nic_mtu(mng_ip=None):
385+ """Configure mtu for physical nic."""
386+ phy_nic_mtu = config('phy-nic-mtu')
387+ if phy_nic_mtu >= 1500:
388+ phy_nic = None
389+ if mng_ip is None:
390+ mng_ip = unit_get('private-address')
391+ for nic in list_nics(['eth', 'bond', 'br']):
392+ if mng_ip in get_ipv4_addr(nic, fatal=False):
393+ phy_nic = nic
394+ # need to find the associated phy nic for bridge
395+ if nic.startswith('br'):
396+ for brnic in get_bridge_nics(nic):
397+ if brnic.startswith('eth') or brnic.startswith('bond'):
398+ phy_nic = brnic
399+ break
400+ break
401+ if phy_nic is not None and phy_nic_mtu != get_nic_mtu(phy_nic):
402+ set_nic_mtu(phy_nic, str(phy_nic_mtu), persistence=True)
403+ log('set mtu={} for phy_nic={}'
404+ .format(phy_nic_mtu, phy_nic), level=INFO)
405
406=== modified file 'hooks/charmhelpers/contrib/openstack/amulet/deployment.py'
407--- hooks/charmhelpers/contrib/openstack/amulet/deployment.py 2014-10-07 21:03:47 +0000
408+++ hooks/charmhelpers/contrib/openstack/amulet/deployment.py 2014-12-05 07:18:23 +0000
409@@ -1,3 +1,4 @@
410+import six
411 from charmhelpers.contrib.amulet.deployment import (
412 AmuletDeployment
413 )
414@@ -69,7 +70,7 @@
415
416 def _configure_services(self, configs):
417 """Configure all of the services."""
418- for service, config in configs.iteritems():
419+ for service, config in six.iteritems(configs):
420 self.d.configure(service, config)
421
422 def _get_openstack_release(self):
423
424=== modified file 'hooks/charmhelpers/contrib/openstack/amulet/utils.py'
425--- hooks/charmhelpers/contrib/openstack/amulet/utils.py 2014-09-25 15:37:05 +0000
426+++ hooks/charmhelpers/contrib/openstack/amulet/utils.py 2014-12-05 07:18:23 +0000
427@@ -7,6 +7,8 @@
428 import keystoneclient.v2_0 as keystone_client
429 import novaclient.v1_1.client as nova_client
430
431+import six
432+
433 from charmhelpers.contrib.amulet.utils import (
434 AmuletUtils
435 )
436@@ -60,7 +62,7 @@
437 expected service catalog endpoints.
438 """
439 self.log.debug('actual: {}'.format(repr(actual)))
440- for k, v in expected.iteritems():
441+ for k, v in six.iteritems(expected):
442 if k in actual:
443 ret = self._validate_dict_data(expected[k][0], actual[k][0])
444 if ret:
445
446=== modified file 'hooks/charmhelpers/contrib/openstack/context.py'
447--- hooks/charmhelpers/contrib/openstack/context.py 2014-10-07 21:03:47 +0000
448+++ hooks/charmhelpers/contrib/openstack/context.py 2014-12-05 07:18:23 +0000
449@@ -1,20 +1,18 @@
450 import json
451 import os
452 import time
453-
454 from base64 import b64decode
455+from subprocess import check_call
456
457-from subprocess import (
458- check_call
459-)
460+import six
461
462 from charmhelpers.fetch import (
463 apt_install,
464 filter_installed_packages,
465 )
466-
467 from charmhelpers.core.hookenv import (
468 config,
469+ is_relation_made,
470 local_unit,
471 log,
472 relation_get,
473@@ -23,43 +21,40 @@
474 relation_set,
475 unit_get,
476 unit_private_ip,
477+ DEBUG,
478+ INFO,
479+ WARNING,
480 ERROR,
481- INFO
482 )
483-
484 from charmhelpers.core.host import (
485 mkdir,
486- write_file
487+ write_file,
488 )
489-
490 from charmhelpers.contrib.hahelpers.cluster import (
491 determine_apache_port,
492 determine_api_port,
493 https,
494- is_clustered
495+ is_clustered,
496 )
497-
498 from charmhelpers.contrib.hahelpers.apache import (
499 get_cert,
500 get_ca_cert,
501 install_ca_cert,
502 )
503-
504 from charmhelpers.contrib.openstack.neutron import (
505 neutron_plugin_attribute,
506 )
507-
508 from charmhelpers.contrib.network.ip import (
509 get_address_in_network,
510 get_ipv6_addr,
511 get_netmask_for_address,
512 format_ipv6_addr,
513- is_address_in_network
514+ is_address_in_network,
515 )
516-
517 from charmhelpers.contrib.openstack.utils import get_host_ip
518
519 CA_CERT_PATH = '/usr/local/share/ca-certificates/keystone_juju_ca_cert.crt'
520+ADDRESS_TYPES = ['admin', 'internal', 'public']
521
522
523 class OSContextError(Exception):
524@@ -67,7 +62,7 @@
525
526
527 def ensure_packages(packages):
528- '''Install but do not upgrade required plugin packages'''
529+ """Install but do not upgrade required plugin packages."""
530 required = filter_installed_packages(packages)
531 if required:
532 apt_install(required, fatal=True)
533@@ -75,20 +70,27 @@
534
535 def context_complete(ctxt):
536 _missing = []
537- for k, v in ctxt.iteritems():
538+ for k, v in six.iteritems(ctxt):
539 if v is None or v == '':
540 _missing.append(k)
541+
542 if _missing:
543- log('Missing required data: %s' % ' '.join(_missing), level='INFO')
544+ log('Missing required data: %s' % ' '.join(_missing), level=INFO)
545 return False
546+
547 return True
548
549
550 def config_flags_parser(config_flags):
551+ """Parses config flags string into dict.
552+
553+ The provided config_flags string may be a list of comma-separated values
554+ which themselves may be comma-separated list of values.
555+ """
556 if config_flags.find('==') >= 0:
557- log("config_flags is not in expected format (key=value)",
558- level=ERROR)
559+ log("config_flags is not in expected format (key=value)", level=ERROR)
560 raise OSContextError
561+
562 # strip the following from each value.
563 post_strippers = ' ,'
564 # we strip any leading/trailing '=' or ' ' from the string then
565@@ -96,7 +98,7 @@
566 split = config_flags.strip(' =').split('=')
567 limit = len(split)
568 flags = {}
569- for i in xrange(0, limit - 1):
570+ for i in range(0, limit - 1):
571 current = split[i]
572 next = split[i + 1]
573 vindex = next.rfind(',')
574@@ -111,17 +113,18 @@
575 # if this not the first entry, expect an embedded key.
576 index = current.rfind(',')
577 if index < 0:
578- log("invalid config value(s) at index %s" % (i),
579- level=ERROR)
580+ log("Invalid config value(s) at index %s" % (i), level=ERROR)
581 raise OSContextError
582 key = current[index + 1:]
583
584 # Add to collection.
585 flags[key.strip(post_strippers)] = value.rstrip(post_strippers)
586+
587 return flags
588
589
590 class OSContextGenerator(object):
591+ """Base class for all context generators."""
592 interfaces = []
593
594 def __call__(self):
595@@ -133,11 +136,11 @@
596
597 def __init__(self,
598 database=None, user=None, relation_prefix=None, ssl_dir=None):
599- '''
600- Allows inspecting relation for settings prefixed with relation_prefix.
601- This is useful for parsing access for multiple databases returned via
602- the shared-db interface (eg, nova_password, quantum_password)
603- '''
604+ """Allows inspecting relation for settings prefixed with
605+ relation_prefix. This is useful for parsing access for multiple
606+ databases returned via the shared-db interface (eg, nova_password,
607+ quantum_password)
608+ """
609 self.relation_prefix = relation_prefix
610 self.database = database
611 self.user = user
612@@ -147,9 +150,8 @@
613 self.database = self.database or config('database')
614 self.user = self.user or config('database-user')
615 if None in [self.database, self.user]:
616- log('Could not generate shared_db context. '
617- 'Missing required charm config options. '
618- '(database name and user)')
619+ log("Could not generate shared_db context. Missing required charm "
620+ "config options. (database name and user)", level=ERROR)
621 raise OSContextError
622
623 ctxt = {}
624@@ -202,23 +204,24 @@
625 def __call__(self):
626 self.database = self.database or config('database')
627 if self.database is None:
628- log('Could not generate postgresql_db context. '
629- 'Missing required charm config options. '
630- '(database name)')
631+ log('Could not generate postgresql_db context. Missing required '
632+ 'charm config options. (database name)', level=ERROR)
633 raise OSContextError
634+
635 ctxt = {}
636-
637 for rid in relation_ids(self.interfaces[0]):
638 for unit in related_units(rid):
639- ctxt = {
640- 'database_host': relation_get('host', rid=rid, unit=unit),
641- 'database': self.database,
642- 'database_user': relation_get('user', rid=rid, unit=unit),
643- 'database_password': relation_get('password', rid=rid, unit=unit),
644- 'database_type': 'postgresql',
645- }
646+ rel_host = relation_get('host', rid=rid, unit=unit)
647+ rel_user = relation_get('user', rid=rid, unit=unit)
648+ rel_passwd = relation_get('password', rid=rid, unit=unit)
649+ ctxt = {'database_host': rel_host,
650+ 'database': self.database,
651+ 'database_user': rel_user,
652+ 'database_password': rel_passwd,
653+ 'database_type': 'postgresql'}
654 if context_complete(ctxt):
655 return ctxt
656+
657 return {}
658
659
660@@ -227,23 +230,29 @@
661 ca_path = os.path.join(ssl_dir, 'db-client.ca')
662 with open(ca_path, 'w') as fh:
663 fh.write(b64decode(rdata['ssl_ca']))
664+
665 ctxt['database_ssl_ca'] = ca_path
666 elif 'ssl_ca' in rdata:
667- log("Charm not setup for ssl support but ssl ca found")
668+ log("Charm not setup for ssl support but ssl ca found", level=INFO)
669 return ctxt
670+
671 if 'ssl_cert' in rdata:
672 cert_path = os.path.join(
673 ssl_dir, 'db-client.cert')
674 if not os.path.exists(cert_path):
675- log("Waiting 1m for ssl client cert validity")
676+ log("Waiting 1m for ssl client cert validity", level=INFO)
677 time.sleep(60)
678+
679 with open(cert_path, 'w') as fh:
680 fh.write(b64decode(rdata['ssl_cert']))
681+
682 ctxt['database_ssl_cert'] = cert_path
683 key_path = os.path.join(ssl_dir, 'db-client.key')
684 with open(key_path, 'w') as fh:
685 fh.write(b64decode(rdata['ssl_key']))
686+
687 ctxt['database_ssl_key'] = key_path
688+
689 return ctxt
690
691
692@@ -251,9 +260,8 @@
693 interfaces = ['identity-service']
694
695 def __call__(self):
696- log('Generating template context for identity-service')
697+ log('Generating template context for identity-service', level=DEBUG)
698 ctxt = {}
699-
700 for rid in relation_ids('identity-service'):
701 for unit in related_units(rid):
702 rdata = relation_get(rid=rid, unit=unit)
703@@ -261,26 +269,24 @@
704 serv_host = format_ipv6_addr(serv_host) or serv_host
705 auth_host = rdata.get('auth_host')
706 auth_host = format_ipv6_addr(auth_host) or auth_host
707-
708- ctxt = {
709- 'service_port': rdata.get('service_port'),
710- 'service_host': serv_host,
711- 'auth_host': auth_host,
712- 'auth_port': rdata.get('auth_port'),
713- 'admin_tenant_name': rdata.get('service_tenant'),
714- 'admin_user': rdata.get('service_username'),
715- 'admin_password': rdata.get('service_password'),
716- 'service_protocol':
717- rdata.get('service_protocol') or 'http',
718- 'auth_protocol':
719- rdata.get('auth_protocol') or 'http',
720- }
721+ svc_protocol = rdata.get('service_protocol') or 'http'
722+ auth_protocol = rdata.get('auth_protocol') or 'http'
723+ ctxt = {'service_port': rdata.get('service_port'),
724+ 'service_host': serv_host,
725+ 'auth_host': auth_host,
726+ 'auth_port': rdata.get('auth_port'),
727+ 'admin_tenant_name': rdata.get('service_tenant'),
728+ 'admin_user': rdata.get('service_username'),
729+ 'admin_password': rdata.get('service_password'),
730+ 'service_protocol': svc_protocol,
731+ 'auth_protocol': auth_protocol}
732 if context_complete(ctxt):
733 # NOTE(jamespage) this is required for >= icehouse
734 # so a missing value just indicates keystone needs
735 # upgrading
736 ctxt['admin_tenant_id'] = rdata.get('service_tenant_id')
737 return ctxt
738+
739 return {}
740
741
742@@ -293,21 +299,23 @@
743 self.interfaces = [rel_name]
744
745 def __call__(self):
746- log('Generating template context for amqp')
747+ log('Generating template context for amqp', level=DEBUG)
748 conf = config()
749- user_setting = 'rabbit-user'
750- vhost_setting = 'rabbit-vhost'
751 if self.relation_prefix:
752- user_setting = self.relation_prefix + '-rabbit-user'
753- vhost_setting = self.relation_prefix + '-rabbit-vhost'
754+ user_setting = '%s-rabbit-user' % (self.relation_prefix)
755+ vhost_setting = '%s-rabbit-vhost' % (self.relation_prefix)
756+ else:
757+ user_setting = 'rabbit-user'
758+ vhost_setting = 'rabbit-vhost'
759
760 try:
761 username = conf[user_setting]
762 vhost = conf[vhost_setting]
763 except KeyError as e:
764- log('Could not generate shared_db context. '
765- 'Missing required charm config options: %s.' % e)
766+ log('Could not generate shared_db context. Missing required charm '
767+ 'config options: %s.' % e, level=ERROR)
768 raise OSContextError
769+
770 ctxt = {}
771 for rid in relation_ids(self.rel_name):
772 ha_vip_only = False
773@@ -321,6 +329,7 @@
774 host = relation_get('private-address', rid=rid, unit=unit)
775 host = format_ipv6_addr(host) or host
776 ctxt['rabbitmq_host'] = host
777+
778 ctxt.update({
779 'rabbitmq_user': username,
780 'rabbitmq_password': relation_get('password', rid=rid,
781@@ -331,6 +340,7 @@
782 ssl_port = relation_get('ssl_port', rid=rid, unit=unit)
783 if ssl_port:
784 ctxt['rabbit_ssl_port'] = ssl_port
785+
786 ssl_ca = relation_get('ssl_ca', rid=rid, unit=unit)
787 if ssl_ca:
788 ctxt['rabbit_ssl_ca'] = ssl_ca
789@@ -344,41 +354,45 @@
790 if context_complete(ctxt):
791 if 'rabbit_ssl_ca' in ctxt:
792 if not self.ssl_dir:
793- log(("Charm not setup for ssl support "
794- "but ssl ca found"))
795+ log("Charm not setup for ssl support but ssl ca "
796+ "found", level=INFO)
797 break
798+
799 ca_path = os.path.join(
800 self.ssl_dir, 'rabbit-client-ca.pem')
801 with open(ca_path, 'w') as fh:
802 fh.write(b64decode(ctxt['rabbit_ssl_ca']))
803 ctxt['rabbit_ssl_ca'] = ca_path
804+
805 # Sufficient information found = break out!
806 break
807+
808 # Used for active/active rabbitmq >= grizzly
809- if ('clustered' not in ctxt or ha_vip_only) \
810- and len(related_units(rid)) > 1:
811+ if (('clustered' not in ctxt or ha_vip_only) and
812+ len(related_units(rid)) > 1):
813 rabbitmq_hosts = []
814 for unit in related_units(rid):
815 host = relation_get('private-address', rid=rid, unit=unit)
816 host = format_ipv6_addr(host) or host
817 rabbitmq_hosts.append(host)
818- ctxt['rabbitmq_hosts'] = ','.join(rabbitmq_hosts)
819+
820+ ctxt['rabbitmq_hosts'] = ','.join(sorted(rabbitmq_hosts))
821+
822 if not context_complete(ctxt):
823 return {}
824- else:
825- return ctxt
826+
827+ return ctxt
828
829
830 class CephContext(OSContextGenerator):
831+ """Generates context for /etc/ceph/ceph.conf templates."""
832 interfaces = ['ceph']
833
834 def __call__(self):
835- '''This generates context for /etc/ceph/ceph.conf templates'''
836 if not relation_ids('ceph'):
837 return {}
838
839- log('Generating template context for ceph')
840-
841+ log('Generating template context for ceph', level=DEBUG)
842 mon_hosts = []
843 auth = None
844 key = None
845@@ -387,18 +401,18 @@
846 for unit in related_units(rid):
847 auth = relation_get('auth', rid=rid, unit=unit)
848 key = relation_get('key', rid=rid, unit=unit)
849- ceph_addr = \
850- relation_get('ceph-public-address', rid=rid, unit=unit) or \
851- relation_get('private-address', rid=rid, unit=unit)
852+ ceph_pub_addr = relation_get('ceph-public-address', rid=rid,
853+ unit=unit)
854+ unit_priv_addr = relation_get('private-address', rid=rid,
855+ unit=unit)
856+ ceph_addr = ceph_pub_addr or unit_priv_addr
857 ceph_addr = format_ipv6_addr(ceph_addr) or ceph_addr
858 mon_hosts.append(ceph_addr)
859
860- ctxt = {
861- 'mon_hosts': ' '.join(mon_hosts),
862- 'auth': auth,
863- 'key': key,
864- 'use_syslog': use_syslog
865- }
866+ ctxt = {'mon_hosts': ' '.join(sorted(mon_hosts)),
867+ 'auth': auth,
868+ 'key': key,
869+ 'use_syslog': use_syslog}
870
871 if not os.path.isdir('/etc/ceph'):
872 os.mkdir('/etc/ceph')
873@@ -407,79 +421,68 @@
874 return {}
875
876 ensure_packages(['ceph-common'])
877-
878 return ctxt
879
880
881-ADDRESS_TYPES = ['admin', 'internal', 'public']
882-
883-
884 class HAProxyContext(OSContextGenerator):
885+ """Provides half a context for the haproxy template, which describes
886+ all peers to be included in the cluster. Each charm needs to include
887+ its own context generator that describes the port mapping.
888+ """
889 interfaces = ['cluster']
890
891+ def __init__(self, singlenode_mode=False):
892+ self.singlenode_mode = singlenode_mode
893+
894 def __call__(self):
895- '''
896- Builds half a context for the haproxy template, which describes
897- all peers to be included in the cluster. Each charm needs to include
898- its own context generator that describes the port mapping.
899- '''
900- if not relation_ids('cluster'):
901+ if not relation_ids('cluster') and not self.singlenode_mode:
902 return {}
903
904- l_unit = local_unit().replace('/', '-')
905-
906 if config('prefer-ipv6'):
907 addr = get_ipv6_addr(exc_list=[config('vip')])[0]
908 else:
909 addr = get_host_ip(unit_get('private-address'))
910
911+ l_unit = local_unit().replace('/', '-')
912 cluster_hosts = {}
913
914 # NOTE(jamespage): build out map of configured network endpoints
915 # and associated backends
916 for addr_type in ADDRESS_TYPES:
917- laddr = get_address_in_network(
918- config('os-{}-network'.format(addr_type)))
919+ cfg_opt = 'os-{}-network'.format(addr_type)
920+ laddr = get_address_in_network(config(cfg_opt))
921 if laddr:
922- cluster_hosts[laddr] = {}
923- cluster_hosts[laddr]['network'] = "{}/{}".format(
924- laddr,
925- get_netmask_for_address(laddr)
926- )
927- cluster_hosts[laddr]['backends'] = {}
928- cluster_hosts[laddr]['backends'][l_unit] = laddr
929+ netmask = get_netmask_for_address(laddr)
930+ cluster_hosts[laddr] = {'network': "{}/{}".format(laddr,
931+ netmask),
932+ 'backends': {l_unit: laddr}}
933 for rid in relation_ids('cluster'):
934 for unit in related_units(rid):
935- _unit = unit.replace('/', '-')
936 _laddr = relation_get('{}-address'.format(addr_type),
937 rid=rid, unit=unit)
938 if _laddr:
939+ _unit = unit.replace('/', '-')
940 cluster_hosts[laddr]['backends'][_unit] = _laddr
941
942 # NOTE(jamespage) no split configurations found, just use
943 # private addresses
944 if not cluster_hosts:
945- cluster_hosts[addr] = {}
946- cluster_hosts[addr]['network'] = "{}/{}".format(
947- addr,
948- get_netmask_for_address(addr)
949- )
950- cluster_hosts[addr]['backends'] = {}
951- cluster_hosts[addr]['backends'][l_unit] = addr
952+ netmask = get_netmask_for_address(addr)
953+ cluster_hosts[addr] = {'network': "{}/{}".format(addr, netmask),
954+ 'backends': {l_unit: addr}}
955 for rid in relation_ids('cluster'):
956 for unit in related_units(rid):
957- _unit = unit.replace('/', '-')
958 _laddr = relation_get('private-address',
959 rid=rid, unit=unit)
960 if _laddr:
961+ _unit = unit.replace('/', '-')
962 cluster_hosts[addr]['backends'][_unit] = _laddr
963
964- ctxt = {
965- 'frontends': cluster_hosts,
966- }
967+ ctxt = {'frontends': cluster_hosts}
968
969 if config('haproxy-server-timeout'):
970 ctxt['haproxy_server_timeout'] = config('haproxy-server-timeout')
971+
972 if config('haproxy-client-timeout'):
973 ctxt['haproxy_client_timeout'] = config('haproxy-client-timeout')
974
975@@ -493,13 +496,18 @@
976 ctxt['stat_port'] = ':8888'
977
978 for frontend in cluster_hosts:
979- if len(cluster_hosts[frontend]['backends']) > 1:
980+ if (len(cluster_hosts[frontend]['backends']) > 1 or
981+ self.singlenode_mode):
982 # Enable haproxy when we have enough peers.
983- log('Ensuring haproxy enabled in /etc/default/haproxy.')
984+ log('Ensuring haproxy enabled in /etc/default/haproxy.',
985+ level=DEBUG)
986 with open('/etc/default/haproxy', 'w') as out:
987 out.write('ENABLED=1\n')
988+
989 return ctxt
990- log('HAProxy context is incomplete, this unit has no peers.')
991+
992+ log('HAProxy context is incomplete, this unit has no peers.',
993+ level=INFO)
994 return {}
995
996
997@@ -507,29 +515,28 @@
998 interfaces = ['image-service']
999
1000 def __call__(self):
1001- '''
1002- Obtains the glance API server from the image-service relation. Useful
1003- in nova and cinder (currently).
1004- '''
1005- log('Generating template context for image-service.')
1006+ """Obtains the glance API server from the image-service relation.
1007+ Useful in nova and cinder (currently).
1008+ """
1009+ log('Generating template context for image-service.', level=DEBUG)
1010 rids = relation_ids('image-service')
1011 if not rids:
1012 return {}
1013+
1014 for rid in rids:
1015 for unit in related_units(rid):
1016 api_server = relation_get('glance-api-server',
1017 rid=rid, unit=unit)
1018 if api_server:
1019 return {'glance_api_servers': api_server}
1020- log('ImageService context is incomplete. '
1021- 'Missing required relation data.')
1022+
1023+ log("ImageService context is incomplete. Missing required relation "
1024+ "data.", level=INFO)
1025 return {}
1026
1027
1028 class ApacheSSLContext(OSContextGenerator):
1029-
1030- """
1031- Generates a context for an apache vhost configuration that configures
1032+ """Generates a context for an apache vhost configuration that configures
1033 HTTPS reverse proxying for one or many endpoints. Generated context
1034 looks something like::
1035
1036@@ -563,6 +570,7 @@
1037 else:
1038 cert_filename = 'cert'
1039 key_filename = 'key'
1040+
1041 write_file(path=os.path.join(ssl_dir, cert_filename),
1042 content=b64decode(cert))
1043 write_file(path=os.path.join(ssl_dir, key_filename),
1044@@ -574,7 +582,8 @@
1045 install_ca_cert(b64decode(ca_cert))
1046
1047 def canonical_names(self):
1048- '''Figure out which canonical names clients will access this service'''
1049+ """Figure out which canonical names clients will access this service.
1050+ """
1051 cns = []
1052 for r_id in relation_ids('identity-service'):
1053 for unit in related_units(r_id):
1054@@ -582,55 +591,80 @@
1055 for k in rdata:
1056 if k.startswith('ssl_key_'):
1057 cns.append(k.lstrip('ssl_key_'))
1058- return list(set(cns))
1059+
1060+ return sorted(list(set(cns)))
1061+
1062+ def get_network_addresses(self):
1063+ """For each network configured, return corresponding address and vip
1064+ (if available).
1065+
1066+ Returns a list of tuples of the form:
1067+
1068+ [(address_in_net_a, vip_in_net_a),
1069+ (address_in_net_b, vip_in_net_b),
1070+ ...]
1071+
1072+ or, if no vip(s) available:
1073+
1074+ [(address_in_net_a, address_in_net_a),
1075+ (address_in_net_b, address_in_net_b),
1076+ ...]
1077+ """
1078+ addresses = []
1079+ if config('vip'):
1080+ vips = config('vip').split()
1081+ else:
1082+ vips = []
1083+
1084+ for net_type in ['os-internal-network', 'os-admin-network',
1085+ 'os-public-network']:
1086+ addr = get_address_in_network(config(net_type),
1087+ unit_get('private-address'))
1088+ if len(vips) > 1 and is_clustered():
1089+ if not config(net_type):
1090+ log("Multiple networks configured but net_type "
1091+ "is None (%s)." % net_type, level=WARNING)
1092+ continue
1093+
1094+ for vip in vips:
1095+ if is_address_in_network(config(net_type), vip):
1096+ addresses.append((addr, vip))
1097+ break
1098+
1099+ elif is_clustered() and config('vip'):
1100+ addresses.append((addr, config('vip')))
1101+ else:
1102+ addresses.append((addr, addr))
1103+
1104+ return sorted(addresses)
1105
1106 def __call__(self):
1107- if isinstance(self.external_ports, basestring):
1108+ if isinstance(self.external_ports, six.string_types):
1109 self.external_ports = [self.external_ports]
1110- if (not self.external_ports or not https()):
1111+
1112+ if not self.external_ports or not https():
1113 return {}
1114
1115 self.configure_ca()
1116 self.enable_modules()
1117
1118- ctxt = {
1119- 'namespace': self.service_namespace,
1120- 'endpoints': [],
1121- 'ext_ports': []
1122- }
1123+ ctxt = {'namespace': self.service_namespace,
1124+ 'endpoints': [],
1125+ 'ext_ports': []}
1126
1127 for cn in self.canonical_names():
1128 self.configure_cert(cn)
1129
1130- addresses = []
1131- vips = []
1132- if config('vip'):
1133- vips = config('vip').split()
1134-
1135- for network_type in ['os-internal-network',
1136- 'os-admin-network',
1137- 'os-public-network']:
1138- address = get_address_in_network(config(network_type),
1139- unit_get('private-address'))
1140- if len(vips) > 0 and is_clustered():
1141- for vip in vips:
1142- if is_address_in_network(config(network_type),
1143- vip):
1144- addresses.append((address, vip))
1145- break
1146- elif is_clustered():
1147- addresses.append((address, config('vip')))
1148- else:
1149- addresses.append((address, address))
1150-
1151- for address, endpoint in set(addresses):
1152+ addresses = self.get_network_addresses()
1153+ for address, endpoint in sorted(set(addresses)):
1154 for api_port in self.external_ports:
1155 ext_port = determine_apache_port(api_port)
1156 int_port = determine_api_port(api_port)
1157 portmap = (address, endpoint, int(ext_port), int(int_port))
1158 ctxt['endpoints'].append(portmap)
1159 ctxt['ext_ports'].append(int(ext_port))
1160- ctxt['ext_ports'] = list(set(ctxt['ext_ports']))
1161+
1162+ ctxt['ext_ports'] = sorted(list(set(ctxt['ext_ports'])))
1163 return ctxt
1164
1165
1166@@ -647,21 +681,23 @@
1167
1168 @property
1169 def packages(self):
1170- return neutron_plugin_attribute(
1171- self.plugin, 'packages', self.network_manager)
1172+ return neutron_plugin_attribute(self.plugin, 'packages',
1173+ self.network_manager)
1174
1175 @property
1176 def neutron_security_groups(self):
1177 return None
1178
1179 def _ensure_packages(self):
1180- [ensure_packages(pkgs) for pkgs in self.packages]
1181+ for pkgs in self.packages:
1182+ ensure_packages(pkgs)
1183
1184 def _save_flag_file(self):
1185 if self.network_manager == 'quantum':
1186 _file = '/etc/nova/quantum_plugin.conf'
1187 else:
1188 _file = '/etc/nova/neutron_plugin.conf'
1189+
1190 with open(_file, 'wb') as out:
1191 out.write(self.plugin + '\n')
1192
1193@@ -670,13 +706,11 @@
1194 self.network_manager)
1195 config = neutron_plugin_attribute(self.plugin, 'config',
1196 self.network_manager)
1197- ovs_ctxt = {
1198- 'core_plugin': driver,
1199- 'neutron_plugin': 'ovs',
1200- 'neutron_security_groups': self.neutron_security_groups,
1201- 'local_ip': unit_private_ip(),
1202- 'config': config
1203- }
1204+ ovs_ctxt = {'core_plugin': driver,
1205+ 'neutron_plugin': 'ovs',
1206+ 'neutron_security_groups': self.neutron_security_groups,
1207+ 'local_ip': unit_private_ip(),
1208+ 'config': config}
1209
1210 return ovs_ctxt
1211
1212@@ -685,13 +719,11 @@
1213 self.network_manager)
1214 config = neutron_plugin_attribute(self.plugin, 'config',
1215 self.network_manager)
1216- nvp_ctxt = {
1217- 'core_plugin': driver,
1218- 'neutron_plugin': 'nvp',
1219- 'neutron_security_groups': self.neutron_security_groups,
1220- 'local_ip': unit_private_ip(),
1221- 'config': config
1222- }
1223+ nvp_ctxt = {'core_plugin': driver,
1224+ 'neutron_plugin': 'nvp',
1225+ 'neutron_security_groups': self.neutron_security_groups,
1226+ 'local_ip': unit_private_ip(),
1227+ 'config': config}
1228
1229 return nvp_ctxt
1230
1231@@ -700,35 +732,50 @@
1232 self.network_manager)
1233 n1kv_config = neutron_plugin_attribute(self.plugin, 'config',
1234 self.network_manager)
1235- n1kv_ctxt = {
1236- 'core_plugin': driver,
1237- 'neutron_plugin': 'n1kv',
1238- 'neutron_security_groups': self.neutron_security_groups,
1239- 'local_ip': unit_private_ip(),
1240- 'config': n1kv_config,
1241- 'vsm_ip': config('n1kv-vsm-ip'),
1242- 'vsm_username': config('n1kv-vsm-username'),
1243- 'vsm_password': config('n1kv-vsm-password'),
1244- 'restrict_policy_profiles': config(
1245- 'n1kv_restrict_policy_profiles'),
1246- }
1247+ n1kv_user_config_flags = config('n1kv-config-flags')
1248+ restrict_policy_profiles = config('n1kv-restrict-policy-profiles')
1249+ n1kv_ctxt = {'core_plugin': driver,
1250+ 'neutron_plugin': 'n1kv',
1251+ 'neutron_security_groups': self.neutron_security_groups,
1252+ 'local_ip': unit_private_ip(),
1253+ 'config': n1kv_config,
1254+ 'vsm_ip': config('n1kv-vsm-ip'),
1255+ 'vsm_username': config('n1kv-vsm-username'),
1256+ 'vsm_password': config('n1kv-vsm-password'),
1257+ 'restrict_policy_profiles': restrict_policy_profiles}
1258+
1259+ if n1kv_user_config_flags:
1260+ flags = config_flags_parser(n1kv_user_config_flags)
1261+ n1kv_ctxt['user_config_flags'] = flags
1262
1263 return n1kv_ctxt
1264
1265+ def calico_ctxt(self):
1266+ driver = neutron_plugin_attribute(self.plugin, 'driver',
1267+ self.network_manager)
1268+ config = neutron_plugin_attribute(self.plugin, 'config',
1269+ self.network_manager)
1270+ calico_ctxt = {'core_plugin': driver,
1271+ 'neutron_plugin': 'Calico',
1272+ 'neutron_security_groups': self.neutron_security_groups,
1273+ 'local_ip': unit_private_ip(),
1274+ 'config': config}
1275+
1276+ return calico_ctxt
1277+
1278 def neutron_ctxt(self):
1279 if https():
1280 proto = 'https'
1281 else:
1282 proto = 'http'
1283+
1284 if is_clustered():
1285 host = config('vip')
1286 else:
1287 host = unit_get('private-address')
1288- url = '%s://%s:%s' % (proto, host, '9696')
1289- ctxt = {
1290- 'network_manager': self.network_manager,
1291- 'neutron_url': url,
1292- }
1293+
1294+ ctxt = {'network_manager': self.network_manager,
1295+ 'neutron_url': '%s://%s:%s' % (proto, host, '9696')}
1296 return ctxt
1297
1298 def __call__(self):
1299@@ -748,6 +795,8 @@
1300 ctxt.update(self.nvp_ctxt())
1301 elif self.plugin == 'n1kv':
1302 ctxt.update(self.n1kv_ctxt())
1303+ elif self.plugin == 'Calico':
1304+ ctxt.update(self.calico_ctxt())
1305
1306 alchemy_flags = config('neutron-alchemy-flags')
1307 if alchemy_flags:
1308@@ -759,23 +808,40 @@
1309
1310
1311 class OSConfigFlagContext(OSContextGenerator):
1312-
1313- """
1314- Responsible for adding user-defined config-flags in charm config to a
1315- template context.
1316+ """Provides support for user-defined config flags.
1317+
1318+ Users can define a comma-separated list of key=value pairs
1319+ in the charm configuration and apply them at any point in
1320+ any file by using a template flag.
1321+
1322+ Sometimes users might want config flags inserted within a
1323+ specific section so this class allows users to specify the
1324+ template flag name, allowing for multiple template flags
1325+ (sections) within the same context.
1326
1327 NOTE: the value of config-flags may be a comma-separated list of
1328 key=value pairs and some Openstack config files support
1329 comma-separated lists as values.
1330 """
1331
1332+ def __init__(self, charm_flag='config-flags',
1333+ template_flag='user_config_flags'):
1334+ """
1335+ :param charm_flag: config flags in charm configuration.
1336+ :param template_flag: insert point for user-defined flags in template
1337+ file.
1338+ """
1339+ super(OSConfigFlagContext, self).__init__()
1340+ self._charm_flag = charm_flag
1341+ self._template_flag = template_flag
1342+
1343 def __call__(self):
1344- config_flags = config('config-flags')
1345+ config_flags = config(self._charm_flag)
1346 if not config_flags:
1347 return {}
1348
1349- flags = config_flags_parser(config_flags)
1350- return {'user_config_flags': flags}
1351+ return {self._template_flag:
1352+ config_flags_parser(config_flags)}
1353
1354
1355 class SubordinateConfigContext(OSContextGenerator):
1356@@ -819,7 +885,6 @@
1357 },
1358 }
1359 }
1360-
1361 """
1362
1363 def __init__(self, service, config_file, interface):
1364@@ -849,26 +914,28 @@
1365
1366 if self.service not in sub_config:
1367 log('Found subordinate_config on %s but it contained'
1368- 'nothing for %s service' % (rid, self.service))
1369+ 'nothing for %s service' % (rid, self.service),
1370+ level=INFO)
1371 continue
1372
1373 sub_config = sub_config[self.service]
1374 if self.config_file not in sub_config:
1375 log('Found subordinate_config on %s but it contained'
1376- 'nothing for %s' % (rid, self.config_file))
1377+ 'nothing for %s' % (rid, self.config_file),
1378+ level=INFO)
1379 continue
1380
1381 sub_config = sub_config[self.config_file]
1382- for k, v in sub_config.iteritems():
1383+ for k, v in six.iteritems(sub_config):
1384 if k == 'sections':
1385- for section, config_dict in v.iteritems():
1386- log("adding section '%s'" % (section))
1387+ for section, config_dict in six.iteritems(v):
1388+ log("adding section '%s'" % (section),
1389+ level=DEBUG)
1390 ctxt[k][section] = config_dict
1391 else:
1392 ctxt[k] = v
1393
1394- log("%d section(s) found" % (len(ctxt['sections'])), level=INFO)
1395-
1396+ log("%d section(s) found" % (len(ctxt['sections'])), level=DEBUG)
1397 return ctxt
1398
1399
1400@@ -880,15 +947,14 @@
1401 False if config('debug') is None else config('debug')
1402 ctxt['verbose'] = \
1403 False if config('verbose') is None else config('verbose')
1404+
1405 return ctxt
1406
1407
1408 class SyslogContext(OSContextGenerator):
1409
1410 def __call__(self):
1411- ctxt = {
1412- 'use_syslog': config('use-syslog')
1413- }
1414+ ctxt = {'use_syslog': config('use-syslog')}
1415 return ctxt
1416
1417
1418@@ -896,13 +962,9 @@
1419
1420 def __call__(self):
1421 if config('prefer-ipv6'):
1422- return {
1423- 'bind_host': '::'
1424- }
1425+ return {'bind_host': '::'}
1426 else:
1427- return {
1428- 'bind_host': '0.0.0.0'
1429- }
1430+ return {'bind_host': '0.0.0.0'}
1431
1432
1433 class WorkerConfigContext(OSContextGenerator):
1434@@ -914,11 +976,42 @@
1435 except ImportError:
1436 apt_install('python-psutil', fatal=True)
1437 from psutil import NUM_CPUS
1438+
1439 return NUM_CPUS
1440
1441 def __call__(self):
1442- multiplier = config('worker-multiplier') or 1
1443- ctxt = {
1444- "workers": self.num_cpus * multiplier
1445- }
1446+ multiplier = config('worker-multiplier') or 0
1447+ ctxt = {"workers": self.num_cpus * multiplier}
1448+ return ctxt
1449+
1450+
1451+class ZeroMQContext(OSContextGenerator):
1452+ interfaces = ['zeromq-configuration']
1453+
1454+ def __call__(self):
1455+ ctxt = {}
1456+ if is_relation_made('zeromq-configuration', 'host'):
1457+ for rid in relation_ids('zeromq-configuration'):
1458+ for unit in related_units(rid):
1459+ ctxt['zmq_nonce'] = relation_get('nonce', unit, rid)
1460+ ctxt['zmq_host'] = relation_get('host', unit, rid)
1461+
1462+ return ctxt
1463+
1464+
1465+class NotificationDriverContext(OSContextGenerator):
1466+
1467+ def __init__(self, zmq_relation='zeromq-configuration',
1468+ amqp_relation='amqp'):
1469+ """
1470+ :param zmq_relation: Name of Zeromq relation to check
1471+ """
1472+ self.zmq_relation = zmq_relation
1473+ self.amqp_relation = amqp_relation
1474+
1475+ def __call__(self):
1476+ ctxt = {'notifications': 'False'}
1477+ if is_relation_made(self.amqp_relation):
1478+ ctxt['notifications'] = "True"
1479+
1480 return ctxt
1481
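A recurring pattern in the context.py changes above is wrapping joined host lists in `sorted()` (`rabbitmq_hosts`, `mon_hosts`, `canonical_names`, `ext_ports`). Relation data arrives in arbitrary order, so without sorting the rendered config can change between hook runs even when the set of hosts has not, triggering needless restarts. A minimal sketch of the idea (`join_hosts` is a hypothetical helper, not part of charm-helpers):

```python
def join_hosts(hosts):
    """Join host addresses into a stable, comma-separated string."""
    # Sorting first makes the output deterministic regardless of the
    # order in which units were discovered over the relation.
    return ','.join(sorted(hosts))

# The same addresses in any discovery order produce identical output:
a = join_hosts(['10.0.0.2', '10.0.0.1'])
b = join_hosts(['10.0.0.1', '10.0.0.2'])
assert a == b == '10.0.0.1,10.0.0.2'
```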
1482=== modified file 'hooks/charmhelpers/contrib/openstack/ip.py'
1483--- hooks/charmhelpers/contrib/openstack/ip.py 2014-10-07 21:03:47 +0000
1484+++ hooks/charmhelpers/contrib/openstack/ip.py 2014-12-05 07:18:23 +0000
1485@@ -2,21 +2,19 @@
1486 config,
1487 unit_get,
1488 )
1489-
1490 from charmhelpers.contrib.network.ip import (
1491 get_address_in_network,
1492 is_address_in_network,
1493 is_ipv6,
1494 get_ipv6_addr,
1495 )
1496-
1497 from charmhelpers.contrib.hahelpers.cluster import is_clustered
1498
1499 PUBLIC = 'public'
1500 INTERNAL = 'int'
1501 ADMIN = 'admin'
1502
1503-_address_map = {
1504+ADDRESS_MAP = {
1505 PUBLIC: {
1506 'config': 'os-public-network',
1507 'fallback': 'public-address'
1508@@ -33,16 +31,14 @@
1509
1510
1511 def canonical_url(configs, endpoint_type=PUBLIC):
1512- '''
1513- Returns the correct HTTP URL to this host given the state of HTTPS
1514+ """Returns the correct HTTP URL to this host given the state of HTTPS
1515 configuration, hacluster and charm configuration.
1516
1517- :configs OSTemplateRenderer: A config tempating object to inspect for
1518- a complete https context.
1519- :endpoint_type str: The endpoint type to resolve.
1520-
1521- :returns str: Base URL for services on the current service unit.
1522- '''
1523+ :param configs: OSTemplateRenderer config templating object to inspect
1524+ for a complete https context.
1525+ :param endpoint_type: str endpoint type to resolve.
1526+ :returns: str base URL for services on the current service unit.
1527+ """
1528 scheme = 'http'
1529 if 'https' in configs.complete_contexts():
1530 scheme = 'https'
1531@@ -53,27 +49,45 @@
1532
1533
1534 def resolve_address(endpoint_type=PUBLIC):
1535+ """Return unit address depending on net config.
1536+
1537+ If unit is clustered with vip(s) and has net splits defined, return vip on
1538+ correct network. If clustered with no nets defined, return primary vip.
1539+
1540+ If not clustered, return unit address ensuring address is on configured net
1541+ split if one is configured.
1542+
1543+ :param endpoint_type: Network endpoint type
1544+ """
1545 resolved_address = None
1546- if is_clustered():
1547- if config(_address_map[endpoint_type]['config']) is None:
1548- # Assume vip is simple and pass back directly
1549- resolved_address = config('vip')
1550+ vips = config('vip')
1551+ if vips:
1552+ vips = vips.split()
1553+
1554+ net_type = ADDRESS_MAP[endpoint_type]['config']
1555+ net_addr = config(net_type)
1556+ net_fallback = ADDRESS_MAP[endpoint_type]['fallback']
1557+ clustered = is_clustered()
1558+ if clustered:
1559+ if not net_addr:
1560+ # If no net-splits defined, we expect a single vip
1561+ resolved_address = vips[0]
1562 else:
1563- for vip in config('vip').split():
1564- if is_address_in_network(
1565- config(_address_map[endpoint_type]['config']),
1566- vip):
1567+ for vip in vips:
1568+ if is_address_in_network(net_addr, vip):
1569 resolved_address = vip
1570+ break
1571 else:
1572 if config('prefer-ipv6'):
1573- fallback_addr = get_ipv6_addr(exc_list=[config('vip')])[0]
1574+ fallback_addr = get_ipv6_addr(exc_list=vips)[0]
1575 else:
1576- fallback_addr = unit_get(_address_map[endpoint_type]['fallback'])
1577- resolved_address = get_address_in_network(
1578- config(_address_map[endpoint_type]['config']), fallback_addr)
1579+ fallback_addr = unit_get(net_fallback)
1580+
1581+ resolved_address = get_address_in_network(net_addr, fallback_addr)
1582
1583 if resolved_address is None:
1584- raise ValueError('Unable to resolve a suitable IP address'
1585- ' based on charm state and configuration')
1586- else:
1587- return resolved_address
1588+ raise ValueError("Unable to resolve a suitable IP address based on "
1589+ "charm state and configuration. (net_type=%s, "
1590+ "clustered=%s)" % (net_type, clustered))
1591+
1592+ return resolved_address
1593
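The reworked `resolve_address()` above selects, from a space-separated list of vips, the one that falls inside the configured network split. The charm code does this with charm-helpers' `is_address_in_network`; the same selection can be sketched with the stdlib `ipaddress` module (`pick_vip` is a hypothetical name used only for illustration):

```python
import ipaddress

def pick_vip(vips, network):
    """Return the first vip inside the given CIDR network, else None."""
    net = ipaddress.ip_network(network)
    # Mirrors the for/break in resolve_address(): stop at the first match.
    return next((v for v in vips if ipaddress.ip_address(v) in net), None)

assert pick_vip(['192.168.1.5', '10.0.0.5'], '10.0.0.0/24') == '10.0.0.5'
assert pick_vip(['192.168.1.5'], '10.0.0.0/24') is None
```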
1594=== modified file 'hooks/charmhelpers/contrib/openstack/neutron.py'
1595--- hooks/charmhelpers/contrib/openstack/neutron.py 2014-06-24 13:40:39 +0000
1596+++ hooks/charmhelpers/contrib/openstack/neutron.py 2014-12-05 07:18:23 +0000
1597@@ -14,7 +14,7 @@
1598 def headers_package():
1599 """Ensures correct linux-headers for running kernel are installed,
1600 for building DKMS package"""
1601- kver = check_output(['uname', '-r']).strip()
1602+ kver = check_output(['uname', '-r']).decode('UTF-8').strip()
1603 return 'linux-headers-%s' % kver
1604
1605 QUANTUM_CONF_DIR = '/etc/quantum'
1606@@ -22,7 +22,7 @@
1607
1608 def kernel_version():
1609 """ Retrieve the current major kernel version as a tuple e.g. (3, 13) """
1610- kver = check_output(['uname', '-r']).strip()
1611+ kver = check_output(['uname', '-r']).decode('UTF-8').strip()
1612 kver = kver.split('.')
1613 return (int(kver[0]), int(kver[1]))
1614
1615@@ -138,10 +138,25 @@
1616 relation_prefix='neutron',
1617 ssl_dir=NEUTRON_CONF_DIR)],
1618 'services': [],
1619- 'packages': [['neutron-plugin-cisco']],
1620+ 'packages': [[headers_package()] + determine_dkms_package(),
1621+ ['neutron-plugin-cisco']],
1622 'server_packages': ['neutron-server',
1623 'neutron-plugin-cisco'],
1624 'server_services': ['neutron-server']
1625+ },
1626+ 'Calico': {
1627+ 'config': '/etc/neutron/plugins/ml2/ml2_conf.ini',
1628+ 'driver': 'neutron.plugins.ml2.plugin.Ml2Plugin',
1629+ 'contexts': [
1630+ context.SharedDBContext(user=config('neutron-database-user'),
1631+ database=config('neutron-database'),
1632+ relation_prefix='neutron',
1633+ ssl_dir=NEUTRON_CONF_DIR)],
1634+ 'services': ['calico-compute', 'bird', 'neutron-dhcp-agent'],
1635+ 'packages': [[headers_package()] + determine_dkms_package(),
1636+ ['calico-compute', 'bird', 'neutron-dhcp-agent']],
1637+ 'server_packages': ['neutron-server', 'calico-control'],
1638+ 'server_services': ['neutron-server']
1639 }
1640 }
1641 if release >= 'icehouse':
1642@@ -162,7 +177,8 @@
1643 elif manager == 'neutron':
1644 plugins = neutron_plugins()
1645 else:
1646- log('Error: Network manager does not support plugins.')
1647+ log("Network manager '%s' does not support plugins." % (manager),
1648+ level=ERROR)
1649 raise Exception
1650
1651 try:
1652
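The neutron.py change above registers a new 'Calico' entry in the plugin map that `neutron_plugin_attribute()` consults. A hypothetical miniature of that lookup pattern (the real map also carries contexts, packages and server settings):

```python
# Hypothetical, trimmed-down plugin map; the real one lives in
# charmhelpers.contrib.openstack.neutron.neutron_plugins().
PLUGINS = {
    'ovs': {'config': '/etc/neutron/plugins/ml2/ml2_conf.ini'},
    'Calico': {'config': '/etc/neutron/plugins/ml2/ml2_conf.ini',
               'services': ['calico-compute', 'bird', 'neutron-dhcp-agent']},
}

def plugin_attribute(plugin, attr):
    """Look up one attribute for a plugin, failing loudly if unknown."""
    try:
        return PLUGINS[plugin][attr]
    except KeyError:
        raise Exception("Unknown plugin/attribute: %s/%s" % (plugin, attr))

assert 'bird' in plugin_attribute('Calico', 'services')
```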
1653=== modified file 'hooks/charmhelpers/contrib/openstack/templating.py'
1654--- hooks/charmhelpers/contrib/openstack/templating.py 2014-07-29 07:46:01 +0000
1655+++ hooks/charmhelpers/contrib/openstack/templating.py 2014-12-05 07:18:23 +0000
1656@@ -1,13 +1,13 @@
1657 import os
1658
1659+import six
1660+
1661 from charmhelpers.fetch import apt_install
1662-
1663 from charmhelpers.core.hookenv import (
1664 log,
1665 ERROR,
1666 INFO
1667 )
1668-
1669 from charmhelpers.contrib.openstack.utils import OPENSTACK_CODENAMES
1670
1671 try:
1672@@ -43,7 +43,7 @@
1673 order by OpenStack release.
1674 """
1675 tmpl_dirs = [(rel, os.path.join(templates_dir, rel))
1676- for rel in OPENSTACK_CODENAMES.itervalues()]
1677+ for rel in six.itervalues(OPENSTACK_CODENAMES)]
1678
1679 if not os.path.isdir(templates_dir):
1680 log('Templates directory not found @ %s.' % templates_dir,
1681@@ -258,7 +258,7 @@
1682 """
1683 Write out all registered config files.
1684 """
1685- [self.write(k) for k in self.templates.iterkeys()]
1686+ [self.write(k) for k in six.iterkeys(self.templates)]
1687
1688 def set_release(self, openstack_release):
1689 """
1690@@ -275,5 +275,5 @@
1691 '''
1692 interfaces = []
1693 [interfaces.extend(i.complete_contexts())
1694- for i in self.templates.itervalues()]
1695+ for i in six.itervalues(self.templates)]
1696 return interfaces
1697
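The templating.py hunks above swap Python-2-only `dict.itervalues()`/`dict.iterkeys()` calls for `six` equivalents as part of the Python 3 compatibility sync. These stand-ins show what the `six` helpers do; the real code simply imports `six` rather than defining them:

```python
import sys

def iteritems(d):
    """Stand-in for six.iteritems: lazy items view on py2 and py3."""
    if sys.version_info[0] == 2:
        return d.iteritems()
    return iter(d.items())

def itervalues(d):
    """Stand-in for six.itervalues."""
    if sys.version_info[0] == 2:
        return d.itervalues()
    return iter(d.values())

codenames = {'2014.1': 'icehouse', '2014.2': 'juno'}
assert dict(iteritems(codenames)) == codenames
assert sorted(itervalues(codenames)) == ['icehouse', 'juno']
```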
1698=== modified file 'hooks/charmhelpers/contrib/openstack/utils.py'
1699--- hooks/charmhelpers/contrib/openstack/utils.py 2014-10-07 21:03:47 +0000
1700+++ hooks/charmhelpers/contrib/openstack/utils.py 2014-12-05 07:18:23 +0000
1701@@ -2,6 +2,7 @@
1702
1703 # Common python helper functions used for OpenStack charms.
1704 from collections import OrderedDict
1705+from functools import wraps
1706
1707 import subprocess
1708 import json
1709@@ -9,11 +10,13 @@
1710 import socket
1711 import sys
1712
1713+import six
1714+import yaml
1715+
1716 from charmhelpers.core.hookenv import (
1717 config,
1718 log as juju_log,
1719 charm_dir,
1720- ERROR,
1721 INFO,
1722 relation_ids,
1723 relation_set
1724@@ -30,7 +33,8 @@
1725 )
1726
1727 from charmhelpers.core.host import lsb_release, mounts, umount
1728-from charmhelpers.fetch import apt_install, apt_cache
1729+from charmhelpers.fetch import apt_install, apt_cache, install_remote
1730+from charmhelpers.contrib.python.packages import pip_install
1731 from charmhelpers.contrib.storage.linux.utils import is_block_device, zap_disk
1732 from charmhelpers.contrib.storage.linux.loopback import ensure_loopback_device
1733
1734@@ -112,7 +116,7 @@
1735
1736 # Best guess match based on deb string provided
1737 if src.startswith('deb') or src.startswith('ppa'):
1738- for k, v in OPENSTACK_CODENAMES.iteritems():
1739+ for k, v in six.iteritems(OPENSTACK_CODENAMES):
1740 if v in src:
1741 return v
1742
1743@@ -133,7 +137,7 @@
1744
1745 def get_os_version_codename(codename):
1746 '''Determine OpenStack version number from codename.'''
1747- for k, v in OPENSTACK_CODENAMES.iteritems():
1748+ for k, v in six.iteritems(OPENSTACK_CODENAMES):
1749 if v == codename:
1750 return k
1751 e = 'Could not derive OpenStack version for '\
1752@@ -193,7 +197,7 @@
1753 else:
1754 vers_map = OPENSTACK_CODENAMES
1755
1756- for version, cname in vers_map.iteritems():
1757+ for version, cname in six.iteritems(vers_map):
1758 if cname == codename:
1759 return version
1760 # e = "Could not determine OpenStack version for package: %s" % pkg
1761@@ -317,7 +321,7 @@
1762 rc_script.write(
1763 "#!/bin/bash\n")
1764 [rc_script.write('export %s=%s\n' % (u, p))
1765- for u, p in env_vars.iteritems() if u != "script_path"]
1766+ for u, p in six.iteritems(env_vars) if u != "script_path"]
1767
1768
1769 def openstack_upgrade_available(package):
1770@@ -350,8 +354,8 @@
1771 '''
1772 _none = ['None', 'none', None]
1773 if (block_device in _none):
1774- error_out('prepare_storage(): Missing required input: '
1775- 'block_device=%s.' % block_device, level=ERROR)
1776+ error_out('prepare_storage(): Missing required input: block_device=%s.'
1777+ % block_device)
1778
1779 if block_device.startswith('/dev/'):
1780 bdev = block_device
1781@@ -367,8 +371,7 @@
1782 bdev = '/dev/%s' % block_device
1783
1784 if not is_block_device(bdev):
1785- error_out('Failed to locate valid block device at %s' % bdev,
1786- level=ERROR)
1787+ error_out('Failed to locate valid block device at %s' % bdev)
1788
1789 return bdev
1790
1791@@ -417,7 +420,7 @@
1792
1793 if isinstance(address, dns.name.Name):
1794 rtype = 'PTR'
1795- elif isinstance(address, basestring):
1796+ elif isinstance(address, six.string_types):
1797 rtype = 'A'
1798 else:
1799 return None
1800@@ -468,6 +471,14 @@
1801 return result.split('.')[0]
1802
1803
1804+def get_matchmaker_map(mm_file='/etc/oslo/matchmaker_ring.json'):
1805+ mm_map = {}
1806+ if os.path.isfile(mm_file):
1807+ with open(mm_file, 'r') as f:
1808+ mm_map = json.load(f)
1809+ return mm_map
1810+
1811+
1812 def sync_db_with_multi_ipv6_addresses(database, database_user,
1813 relation_prefix=None):
1814 hosts = get_ipv6_addr(dynamic_only=False)
1815@@ -477,10 +488,132 @@
1816 'hostname': json.dumps(hosts)}
1817
1818 if relation_prefix:
1819- keys = kwargs.keys()
1820- for key in keys:
1821+ for key in list(kwargs.keys()):
1822 kwargs["%s_%s" % (relation_prefix, key)] = kwargs[key]
1823 del kwargs[key]
1824
1825 for rid in relation_ids('shared-db'):
1826 relation_set(relation_id=rid, **kwargs)
1827+
1828+
1829+def os_requires_version(ostack_release, pkg):
1830+ """
1831+ Decorator for hook to specify minimum supported release
1832+ """
1833+ def wrap(f):
1834+ @wraps(f)
1835+ def wrapped_f(*args):
1836+ if os_release(pkg) < ostack_release:
1837+ raise Exception("This hook is not supported on releases"
1838+ " before %s" % ostack_release)
1839+ f(*args)
1840+ return wrapped_f
1841+ return wrap
1842+
1843+
1844+def git_install_requested():
1845+ """Returns true if openstack-origin-git is specified."""
1846+ return config('openstack-origin-git') != "None"
1847+
1848+
1849+requirements_dir = None
1850+
1851+
1852+def git_clone_and_install(file_name, core_project):
1853+ """Clone/install all OpenStack repos specified in yaml config file."""
1854+ global requirements_dir
1855+
1856+ if file_name == "None":
1857+ return
1858+
1859+ yaml_file = os.path.join(charm_dir(), file_name)
1860+
1861+ # clone/install the requirements project first
1862+ installed = _git_clone_and_install_subset(yaml_file,
1863+ whitelist=['requirements'])
1864+ if 'requirements' not in installed:
1865+ error_out('requirements git repository must be specified')
1866+
1867+ # clone/install all other projects except requirements and the core project
1868+ blacklist = ['requirements', core_project]
1869+ _git_clone_and_install_subset(yaml_file, blacklist=blacklist,
1870+ update_requirements=True)
1871+
1872+ # clone/install the core project
1873+ whitelist = [core_project]
1874+ installed = _git_clone_and_install_subset(yaml_file, whitelist=whitelist,
1875+ update_requirements=True)
1876+ if core_project not in installed:
1877+ error_out('{} git repository must be specified'.format(core_project))
1878+
1879+
1880+def _git_clone_and_install_subset(yaml_file, whitelist=[], blacklist=[],
1881+ update_requirements=False):
1882+ """Clone/install subset of OpenStack repos specified in yaml config file."""
1883+ global requirements_dir
1884+ installed = []
1885+
1886+ with open(yaml_file, 'r') as fd:
1887+ projects = yaml.load(fd)
1888+ for proj, val in projects.items():
1889+ # The project subset is chosen based on the following 3 rules:
1890+ # 1) If project is in blacklist, we don't clone/install it, period.
1891+ # 2) If whitelist is empty, we clone/install everything else.
1892+ # 3) If whitelist is not empty, we clone/install everything in the
1893+ # whitelist.
1894+ if proj in blacklist:
1895+ continue
1896+ if whitelist and proj not in whitelist:
1897+ continue
1898+ repo = val['repository']
1899+ branch = val['branch']
1900+ repo_dir = _git_clone_and_install_single(repo, branch,
1901+ update_requirements)
1902+ if proj == 'requirements':
1903+ requirements_dir = repo_dir
1904+ installed.append(proj)
1905+ return installed
1906+
1907+
1908+def _git_clone_and_install_single(repo, branch, update_requirements=False):
1909+ """Clone and install a single git repository."""
1910+ dest_parent_dir = "/mnt/openstack-git/"
1911+ dest_dir = os.path.join(dest_parent_dir, os.path.basename(repo))
1912+
1913+ if not os.path.exists(dest_parent_dir):
1914+ juju_log('Host dir not mounted at {}. '
1915+ 'Creating directory there instead.'.format(dest_parent_dir))
1916+ os.mkdir(dest_parent_dir)
1917+
1918+ if not os.path.exists(dest_dir):
1919+ juju_log('Cloning git repo: {}, branch: {}'.format(repo, branch))
1920+ repo_dir = install_remote(repo, dest=dest_parent_dir, branch=branch)
1921+ else:
1922+ repo_dir = dest_dir
1923+
1924+ if update_requirements:
1925+ if not requirements_dir:
1926+ error_out('requirements repo must be cloned before '
1927+ 'updating from global requirements.')
1928+ _git_update_requirements(repo_dir, requirements_dir)
1929+
1930+ juju_log('Installing git repo from dir: {}'.format(repo_dir))
1931+ pip_install(repo_dir)
1932+
1933+ return repo_dir
1934+
1935+
1936+def _git_update_requirements(package_dir, reqs_dir):
1937+ """Update from global requirements.
1938+
1939+ Update an OpenStack git directory's requirements.txt and
1940+ test-requirements.txt from global-requirements.txt."""
1941+ orig_dir = os.getcwd()
1942+ os.chdir(reqs_dir)
1943+ cmd = "python update.py {}".format(package_dir)
1944+ try:
1945+ subprocess.check_call(cmd.split(' '))
1946+ except subprocess.CalledProcessError:
1947+ package = os.path.basename(package_dir)
1948+ error_out("Error updating {} from global-requirements.txt".format(package))
1949+ os.chdir(orig_dir)
1950
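[Reviewer note] The three selection rules documented in `_git_clone_and_install_subset` can be sketched in isolation. The `select_projects` helper below is a toy stand-in written for this review, not charm-helpers code:

```python
# Toy illustration of the selection rules in _git_clone_and_install_subset:
# blacklist always wins, an empty whitelist means "everything else", and a
# non-empty whitelist is exclusive.
def select_projects(projects, whitelist=(), blacklist=()):
    selected = []
    for proj in projects:
        # 1) A blacklisted project is never cloned/installed, period.
        if proj in blacklist:
            continue
        # 3) A non-empty whitelist restricts installation to its members.
        if whitelist and proj not in whitelist:
            continue
        # 2) With an empty whitelist, everything not blacklisted is taken.
        selected.append(proj)
    return selected


print(select_projects(['requirements', 'neutron'],
                      whitelist=['requirements']))            # → ['requirements']
print(select_projects(['requirements', 'neutron', 'oslo.config'],
                      blacklist=['requirements', 'neutron']))  # → ['oslo.config']
```

This matches the three-pass call order in `git_clone_and_install`: requirements first (whitelist), then everything but requirements and the core project (blacklist), then the core project alone.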
1951=== modified file 'hooks/charmhelpers/contrib/storage/linux/ceph.py'
1952--- hooks/charmhelpers/contrib/storage/linux/ceph.py 2014-07-29 07:46:01 +0000
1953+++ hooks/charmhelpers/contrib/storage/linux/ceph.py 2014-12-05 07:18:23 +0000
1954@@ -16,19 +16,18 @@
1955 from subprocess import (
1956 check_call,
1957 check_output,
1958- CalledProcessError
1959+ CalledProcessError,
1960 )
1961-
1962 from charmhelpers.core.hookenv import (
1963 relation_get,
1964 relation_ids,
1965 related_units,
1966 log,
1967+ DEBUG,
1968 INFO,
1969 WARNING,
1970- ERROR
1971+ ERROR,
1972 )
1973-
1974 from charmhelpers.core.host import (
1975 mount,
1976 mounts,
1977@@ -37,7 +36,6 @@
1978 service_running,
1979 umount,
1980 )
1981-
1982 from charmhelpers.fetch import (
1983 apt_install,
1984 )
1985@@ -56,99 +54,85 @@
1986
1987
1988 def install():
1989- ''' Basic Ceph client installation '''
1990+ """Basic Ceph client installation."""
1991 ceph_dir = "/etc/ceph"
1992 if not os.path.exists(ceph_dir):
1993 os.mkdir(ceph_dir)
1994+
1995 apt_install('ceph-common', fatal=True)
1996
1997
1998 def rbd_exists(service, pool, rbd_img):
1999- ''' Check to see if a RADOS block device exists '''
2000+ """Check to see if a RADOS block device exists."""
2001 try:
2002- out = check_output(['rbd', 'list', '--id', service,
2003- '--pool', pool])
2004+ out = check_output(['rbd', 'list', '--id',
2005+ service, '--pool', pool]).decode('UTF-8')
2006 except CalledProcessError:
2007 return False
2008- else:
2009- return rbd_img in out
2010+
2011+ return rbd_img in out
2012
2013
2014 def create_rbd_image(service, pool, image, sizemb):
2015- ''' Create a new RADOS block device '''
2016- cmd = [
2017- 'rbd',
2018- 'create',
2019- image,
2020- '--size',
2021- str(sizemb),
2022- '--id',
2023- service,
2024- '--pool',
2025- pool
2026- ]
2027+ """Create a new RADOS block device."""
2028+ cmd = ['rbd', 'create', image, '--size', str(sizemb), '--id', service,
2029+ '--pool', pool]
2030 check_call(cmd)
2031
2032
2033 def pool_exists(service, name):
2034- ''' Check to see if a RADOS pool already exists '''
2035+ """Check to see if a RADOS pool already exists."""
2036 try:
2037- out = check_output(['rados', '--id', service, 'lspools'])
2038+ out = check_output(['rados', '--id', service,
2039+ 'lspools']).decode('UTF-8')
2040 except CalledProcessError:
2041 return False
2042- else:
2043- return name in out
2044+
2045+ return name in out
2046
2047
2048 def get_osds(service):
2049- '''
2050- Return a list of all Ceph Object Storage Daemons
2051- currently in the cluster
2052- '''
2053+ """Return a list of all Ceph Object Storage Daemons currently in the
2054+ cluster.
2055+ """
2056 version = ceph_version()
2057 if version and version >= '0.56':
2058 return json.loads(check_output(['ceph', '--id', service,
2059- 'osd', 'ls', '--format=json']))
2060- else:
2061- return None
2062-
2063-
2064-def create_pool(service, name, replicas=2):
2065- ''' Create a new RADOS pool '''
2066+ 'osd', 'ls',
2067+ '--format=json']).decode('UTF-8'))
2068+
2069+ return None
2070+
2071+
2072+def create_pool(service, name, replicas=3):
2073+ """Create a new RADOS pool."""
2074 if pool_exists(service, name):
2075 log("Ceph pool {} already exists, skipping creation".format(name),
2076 level=WARNING)
2077 return
2078+
2079 # Calculate the number of placement groups based
2080 # on upstream recommended best practices.
2081 osds = get_osds(service)
2082 if osds:
2083- pgnum = (len(osds) * 100 / replicas)
2084+ pgnum = (len(osds) * 100 // replicas)
2085 else:
2086 # NOTE(james-page): Default to 200 for older ceph versions
2087 # which don't support OSD query from cli
2088 pgnum = 200
2089- cmd = [
2090- 'ceph', '--id', service,
2091- 'osd', 'pool', 'create',
2092- name, str(pgnum)
2093- ]
2094+
2095+ cmd = ['ceph', '--id', service, 'osd', 'pool', 'create', name, str(pgnum)]
2096 check_call(cmd)
2097- cmd = [
2098- 'ceph', '--id', service,
2099- 'osd', 'pool', 'set', name,
2100- 'size', str(replicas)
2101- ]
2102+
2103+ cmd = ['ceph', '--id', service, 'osd', 'pool', 'set', name, 'size',
2104+ str(replicas)]
2105 check_call(cmd)
2106
2107
2108 def delete_pool(service, name):
2109- ''' Delete a RADOS pool from ceph '''
2110- cmd = [
2111- 'ceph', '--id', service,
2112- 'osd', 'pool', 'delete',
2113- name, '--yes-i-really-really-mean-it'
2114- ]
2115+ """Delete a RADOS pool from ceph."""
2116+ cmd = ['ceph', '--id', service, 'osd', 'pool', 'delete', name,
2117+ '--yes-i-really-really-mean-it']
2118 check_call(cmd)
2119
2120
2121@@ -161,44 +145,43 @@
2122
2123
2124 def create_keyring(service, key):
2125- ''' Create a new Ceph keyring containing key'''
2126+ """Create a new Ceph keyring containing key."""
2127 keyring = _keyring_path(service)
2128 if os.path.exists(keyring):
2129- log('ceph: Keyring exists at %s.' % keyring, level=WARNING)
2130+ log('Ceph keyring exists at %s.' % keyring, level=WARNING)
2131 return
2132- cmd = [
2133- 'ceph-authtool',
2134- keyring,
2135- '--create-keyring',
2136- '--name=client.{}'.format(service),
2137- '--add-key={}'.format(key)
2138- ]
2139+
2140+ cmd = ['ceph-authtool', keyring, '--create-keyring',
2141+ '--name=client.{}'.format(service), '--add-key={}'.format(key)]
2142 check_call(cmd)
2143- log('ceph: Created new ring at %s.' % keyring, level=INFO)
2144+ log('Created new ceph keyring at %s.' % keyring, level=DEBUG)
2145
2146
2147 def create_key_file(service, key):
2148- ''' Create a file containing key '''
2149+ """Create a file containing key."""
2150 keyfile = _keyfile_path(service)
2151 if os.path.exists(keyfile):
2152- log('ceph: Keyfile exists at %s.' % keyfile, level=WARNING)
2153+ log('Keyfile exists at %s.' % keyfile, level=WARNING)
2154 return
2155+
2156 with open(keyfile, 'w') as fd:
2157 fd.write(key)
2158- log('ceph: Created new keyfile at %s.' % keyfile, level=INFO)
2159+
2160+ log('Created new keyfile at %s.' % keyfile, level=INFO)
2161
2162
2163 def get_ceph_nodes():
2164- ''' Query named relation 'ceph' to detemine current nodes '''
2165+ """Query named relation 'ceph' to determine current nodes."""
2166 hosts = []
2167 for r_id in relation_ids('ceph'):
2168 for unit in related_units(r_id):
2169 hosts.append(relation_get('private-address', unit=unit, rid=r_id))
2170+
2171 return hosts
2172
2173
2174 def configure(service, key, auth, use_syslog):
2175- ''' Perform basic configuration of Ceph '''
2176+ """Perform basic configuration of Ceph."""
2177 create_keyring(service, key)
2178 create_key_file(service, key)
2179 hosts = get_ceph_nodes()
2180@@ -211,17 +194,17 @@
2181
2182
2183 def image_mapped(name):
2184- ''' Determine whether a RADOS block device is mapped locally '''
2185+ """Determine whether a RADOS block device is mapped locally."""
2186 try:
2187- out = check_output(['rbd', 'showmapped'])
2188+ out = check_output(['rbd', 'showmapped']).decode('UTF-8')
2189 except CalledProcessError:
2190 return False
2191- else:
2192- return name in out
2193+
2194+ return name in out
2195
2196
2197 def map_block_storage(service, pool, image):
2198- ''' Map a RADOS block device for local use '''
2199+ """Map a RADOS block device for local use."""
2200 cmd = [
2201 'rbd',
2202 'map',
2203@@ -235,31 +218,32 @@
2204
2205
2206 def filesystem_mounted(fs):
2207- ''' Determine whether a filesytems is already mounted '''
2208+ """Determine whether a filesystem is already mounted."""
2209 return fs in [f for f, m in mounts()]
2210
2211
2212 def make_filesystem(blk_device, fstype='ext4', timeout=10):
2213- ''' Make a new filesystem on the specified block device '''
2214+ """Make a new filesystem on the specified block device."""
2215 count = 0
2216 e_noent = os.errno.ENOENT
2217 while not os.path.exists(blk_device):
2218 if count >= timeout:
2219- log('ceph: gave up waiting on block device %s' % blk_device,
2220+ log('Gave up waiting on block device %s' % blk_device,
2221 level=ERROR)
2222 raise IOError(e_noent, os.strerror(e_noent), blk_device)
2223- log('ceph: waiting for block device %s to appear' % blk_device,
2224- level=INFO)
2225+
2226+ log('Waiting for block device %s to appear' % blk_device,
2227+ level=DEBUG)
2228 count += 1
2229 time.sleep(1)
2230 else:
2231- log('ceph: Formatting block device %s as filesystem %s.' %
2232+ log('Formatting block device %s as filesystem %s.' %
2233 (blk_device, fstype), level=INFO)
2234 check_call(['mkfs', '-t', fstype, blk_device])
2235
2236
2237 def place_data_on_block_device(blk_device, data_src_dst):
2238- ''' Migrate data in data_src_dst to blk_device and then remount '''
2239+ """Migrate data in data_src_dst to blk_device and then remount."""
2240 # mount block device into /mnt
2241 mount(blk_device, '/mnt')
2242 # copy data to /mnt
2243@@ -279,8 +263,8 @@
2244
2245 # TODO: re-use
2246 def modprobe(module):
2247- ''' Load a kernel module and configure for auto-load on reboot '''
2248- log('ceph: Loading kernel module', level=INFO)
2249+ """Load a kernel module and configure for auto-load on reboot."""
2250+ log('Loading kernel module', level=INFO)
2251 cmd = ['modprobe', module]
2252 check_call(cmd)
2253 with open('/etc/modules', 'r+') as modules:
2254@@ -289,7 +273,7 @@
2255
2256
2257 def copy_files(src, dst, symlinks=False, ignore=None):
2258- ''' Copy files from src to dst '''
2259+ """Copy files from src to dst."""
2260 for item in os.listdir(src):
2261 s = os.path.join(src, item)
2262 d = os.path.join(dst, item)
2263@@ -300,9 +284,9 @@
2264
2265
2266 def ensure_ceph_storage(service, pool, rbd_img, sizemb, mount_point,
2267- blk_device, fstype, system_services=[]):
2268- """
2269- NOTE: This function must only be called from a single service unit for
2270+ blk_device, fstype, system_services=[],
2271+ replicas=3):
2272+ """NOTE: This function must only be called from a single service unit for
2273 the same rbd_img otherwise data loss will occur.
2274
2275 Ensures given pool and RBD image exists, is mapped to a block device,
2276@@ -316,15 +300,16 @@
2277 """
2278 # Ensure pool, RBD image, RBD mappings are in place.
2279 if not pool_exists(service, pool):
2280- log('ceph: Creating new pool {}.'.format(pool))
2281- create_pool(service, pool)
2282+ log('Creating new pool {}.'.format(pool), level=INFO)
2283+ create_pool(service, pool, replicas=replicas)
2284
2285 if not rbd_exists(service, pool, rbd_img):
2286- log('ceph: Creating RBD image ({}).'.format(rbd_img))
2287+ log('Creating RBD image ({}).'.format(rbd_img), level=INFO)
2288 create_rbd_image(service, pool, rbd_img, sizemb)
2289
2290 if not image_mapped(rbd_img):
2291- log('ceph: Mapping RBD Image {} as a Block Device.'.format(rbd_img))
2292+ log('Mapping RBD Image {} as a Block Device.'.format(rbd_img),
2293+ level=INFO)
2294 map_block_storage(service, pool, rbd_img)
2295
2296 # make file system
2297@@ -339,45 +324,47 @@
2298
2299 for svc in system_services:
2300 if service_running(svc):
2301- log('ceph: Stopping services {} prior to migrating data.'
2302- .format(svc))
2303+ log('Stopping services {} prior to migrating data.'
2304+ .format(svc), level=DEBUG)
2305 service_stop(svc)
2306
2307 place_data_on_block_device(blk_device, mount_point)
2308
2309 for svc in system_services:
2310- log('ceph: Starting service {} after migrating data.'
2311- .format(svc))
2312+ log('Starting service {} after migrating data.'
2313+ .format(svc), level=DEBUG)
2314 service_start(svc)
2315
2316
2317 def ensure_ceph_keyring(service, user=None, group=None):
2318- '''
2319- Ensures a ceph keyring is created for a named service
2320- and optionally ensures user and group ownership.
2321+ """Ensures a ceph keyring is created for a named service and optionally
2322+ ensures user and group ownership.
2323
2324 Returns False if no ceph key is available in relation state.
2325- '''
2326+ """
2327 key = None
2328 for rid in relation_ids('ceph'):
2329 for unit in related_units(rid):
2330 key = relation_get('key', rid=rid, unit=unit)
2331 if key:
2332 break
2333+
2334 if not key:
2335 return False
2336+
2337 create_keyring(service=service, key=key)
2338 keyring = _keyring_path(service)
2339 if user and group:
2340 check_call(['chown', '%s.%s' % (user, group), keyring])
2341+
2342 return True
2343
2344
2345 def ceph_version():
2346- ''' Retrieve the local version of ceph '''
2347+ """Retrieve the local version of ceph."""
2348 if os.path.exists('/usr/bin/ceph'):
2349 cmd = ['ceph', '-v']
2350- output = check_output(cmd)
2351+ output = check_output(cmd).decode('US-ASCII')
2352 output = output.split()
2353 if len(output) > 3:
2354 return output[2]
2355
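[Reviewer note] Most hunks in this file follow one Python 3 porting pattern: `subprocess.check_output()` returns `bytes` on Python 3, so substring membership tests like `rbd_img in out` need an explicit decode first. A minimal self-contained illustration, using `echo` as a stand-in for the ceph CLI:

```python
import subprocess

# On Python 3, check_output() returns bytes; decoding to str first makes
# the str-in-str membership test below work on both Python 2 and 3.
out = subprocess.check_output(['echo', 'vol1 vol2']).decode('UTF-8')
print('vol1' in out)  # → True
```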
2356=== modified file 'hooks/charmhelpers/contrib/storage/linux/loopback.py'
2357--- hooks/charmhelpers/contrib/storage/linux/loopback.py 2013-11-08 05:55:44 +0000
2358+++ hooks/charmhelpers/contrib/storage/linux/loopback.py 2014-12-05 07:18:23 +0000
2359@@ -1,12 +1,12 @@
2360-
2361 import os
2362 import re
2363-
2364 from subprocess import (
2365 check_call,
2366 check_output,
2367 )
2368
2369+import six
2370+
2371
2372 ##################################################
2373 # loopback device helpers.
2374@@ -37,7 +37,7 @@
2375 '''
2376 file_path = os.path.abspath(file_path)
2377 check_call(['losetup', '--find', file_path])
2378- for d, f in loopback_devices().iteritems():
2379+ for d, f in six.iteritems(loopback_devices()):
2380 if f == file_path:
2381 return d
2382
2383@@ -51,7 +51,7 @@
2384
2385 :returns: str: Full path to the ensured loopback device (eg, /dev/loop0)
2386 '''
2387- for d, f in loopback_devices().iteritems():
2388+ for d, f in six.iteritems(loopback_devices()):
2389 if f == path:
2390 return d
2391
2392
2393=== modified file 'hooks/charmhelpers/contrib/storage/linux/lvm.py'
2394--- hooks/charmhelpers/contrib/storage/linux/lvm.py 2014-05-19 11:43:55 +0000
2395+++ hooks/charmhelpers/contrib/storage/linux/lvm.py 2014-12-05 07:18:23 +0000
2396@@ -61,6 +61,7 @@
2397 vg = None
2398 pvd = check_output(['pvdisplay', block_device]).splitlines()
2399 for l in pvd:
2400+ l = l.decode('UTF-8')
2401 if l.strip().startswith('VG Name'):
2402 vg = ' '.join(l.strip().split()[2:])
2403 return vg
2404
2405=== modified file 'hooks/charmhelpers/contrib/storage/linux/utils.py'
2406--- hooks/charmhelpers/contrib/storage/linux/utils.py 2014-08-13 13:12:47 +0000
2407+++ hooks/charmhelpers/contrib/storage/linux/utils.py 2014-12-05 07:18:23 +0000
2408@@ -30,7 +30,8 @@
2409 # sometimes sgdisk exits non-zero; this is OK, dd will clean up
2410 call(['sgdisk', '--zap-all', '--mbrtogpt',
2411 '--clear', block_device])
2412- dev_end = check_output(['blockdev', '--getsz', block_device])
2413+ dev_end = check_output(['blockdev', '--getsz',
2414+ block_device]).decode('UTF-8')
2415 gpt_end = int(dev_end.split()[0]) - 100
2416 check_call(['dd', 'if=/dev/zero', 'of=%s' % (block_device),
2417 'bs=1M', 'count=1'])
2418@@ -47,7 +48,7 @@
2419 it doesn't.
2420 '''
2421 is_partition = bool(re.search(r".*[0-9]+\b", device))
2422- out = check_output(['mount'])
2423+ out = check_output(['mount']).decode('UTF-8')
2424 if is_partition:
2425 return bool(re.search(device + r"\b", out))
2426 return bool(re.search(device + r"[0-9]+\b", out))
2427
2428=== modified file 'hooks/charmhelpers/core/fstab.py'
2429--- hooks/charmhelpers/core/fstab.py 2014-06-24 13:40:39 +0000
2430+++ hooks/charmhelpers/core/fstab.py 2014-12-05 07:18:23 +0000
2431@@ -3,10 +3,11 @@
2432
2433 __author__ = 'Jorge Niedbalski R. <jorge.niedbalski@canonical.com>'
2434
2435+import io
2436 import os
2437
2438
2439-class Fstab(file):
2440+class Fstab(io.FileIO):
2441 """This class extends file in order to implement a file reader/writer
2442 for file `/etc/fstab`
2443 """
2444@@ -24,8 +25,8 @@
2445 options = "defaults"
2446
2447 self.options = options
2448- self.d = d
2449- self.p = p
2450+ self.d = int(d)
2451+ self.p = int(p)
2452
2453 def __eq__(self, o):
2454 return str(self) == str(o)
2455@@ -45,7 +46,7 @@
2456 self._path = path
2457 else:
2458 self._path = self.DEFAULT_PATH
2459- file.__init__(self, self._path, 'r+')
2460+ super(Fstab, self).__init__(self._path, 'rb+')
2461
2462 def _hydrate_entry(self, line):
2463 # NOTE: use split with no arguments to split on any
2464@@ -58,8 +59,9 @@
2465 def entries(self):
2466 self.seek(0)
2467 for line in self.readlines():
2468+ line = line.decode('us-ascii')
2469 try:
2470- if not line.startswith("#"):
2471+ if line.strip() and not line.startswith("#"):
2472 yield self._hydrate_entry(line)
2473 except ValueError:
2474 pass
2475@@ -75,14 +77,14 @@
2476 if self.get_entry_by_attr('device', entry.device):
2477 return False
2478
2479- self.write(str(entry) + '\n')
2480+ self.write((str(entry) + '\n').encode('us-ascii'))
2481 self.truncate()
2482 return entry
2483
2484 def remove_entry(self, entry):
2485 self.seek(0)
2486
2487- lines = self.readlines()
2488+ lines = [l.decode('us-ascii') for l in self.readlines()]
2489
2490 found = False
2491 for index, line in enumerate(lines):
2492@@ -97,7 +99,7 @@
2493 lines.remove(line)
2494
2495 self.seek(0)
2496- self.write(''.join(lines))
2497+ self.write(''.join(lines).encode('us-ascii'))
2498 self.truncate()
2499 return True
2500
2501
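[Reviewer note] The Fstab port swaps the Python-2-only `file` base class for `io.FileIO`, which is a binary stream; that is why the hunks above add `.encode('us-ascii')` on write and `.decode('us-ascii')` on read. A standalone sketch of that round-trip, using a temporary file rather than /etc/fstab:

```python
import io
import os
import tempfile

# Write a sample fstab-style file, then read it back through a binary
# io.FileIO stream the way the ported Fstab class does.
path = os.path.join(tempfile.mkdtemp(), 'fstab')
with open(path, 'w') as f:
    f.write('# comment\n\n/dev/sda1 / ext4 defaults 0 1\n')

fd = io.FileIO(path, 'rb+')
# readlines() yields bytes: skip blanks/comments, decode the rest.
entries = [line.decode('us-ascii') for line in fd.readlines()
           if line.strip() and not line.startswith(b'#')]
# Writes must be encoded before hitting the binary stream.
fd.write('/dev/sdb1 /srv ext4 defaults 0 2\n'.encode('us-ascii'))
fd.close()
print(entries)  # → ['/dev/sda1 / ext4 defaults 0 1\n']
```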
2502=== modified file 'hooks/charmhelpers/core/hookenv.py'
2503--- hooks/charmhelpers/core/hookenv.py 2014-09-25 15:37:05 +0000
2504+++ hooks/charmhelpers/core/hookenv.py 2014-12-05 07:18:23 +0000
2505@@ -9,9 +9,14 @@
2506 import yaml
2507 import subprocess
2508 import sys
2509-import UserDict
2510 from subprocess import CalledProcessError
2511
2512+import six
2513+if not six.PY3:
2514+ from UserDict import UserDict
2515+else:
2516+ from collections import UserDict
2517+
2518 CRITICAL = "CRITICAL"
2519 ERROR = "ERROR"
2520 WARNING = "WARNING"
2521@@ -63,16 +68,18 @@
2522 command = ['juju-log']
2523 if level:
2524 command += ['-l', level]
2525+ if not isinstance(message, six.string_types):
2526+ message = repr(message)
2527 command += [message]
2528 subprocess.call(command)
2529
2530
2531-class Serializable(UserDict.IterableUserDict):
2532+class Serializable(UserDict):
2533 """Wrapper, an object that can be serialized to yaml or json"""
2534
2535 def __init__(self, obj):
2536 # wrap the object
2537- UserDict.IterableUserDict.__init__(self)
2538+ UserDict.__init__(self)
2539 self.data = obj
2540
2541 def __getattr__(self, attr):
2542@@ -214,6 +221,12 @@
2543 except KeyError:
2544 return (self._prev_dict or {})[key]
2545
2546+ def keys(self):
2547+ prev_keys = []
2548+ if self._prev_dict is not None:
2549+ prev_keys = list(self._prev_dict.keys())
2550+ return list(set(prev_keys + list(dict.keys(self))))
2551+
2552 def load_previous(self, path=None):
2553 """Load previous copy of config from disk.
2554
2555@@ -263,7 +276,7 @@
2556
2557 """
2558 if self._prev_dict:
2559- for k, v in self._prev_dict.iteritems():
2560+ for k, v in six.iteritems(self._prev_dict):
2561 if k not in self:
2562 self[k] = v
2563 with open(self.path, 'w') as f:
2564@@ -278,7 +291,8 @@
2565 config_cmd_line.append(scope)
2566 config_cmd_line.append('--format=json')
2567 try:
2568- config_data = json.loads(subprocess.check_output(config_cmd_line))
2569+ config_data = json.loads(
2570+ subprocess.check_output(config_cmd_line).decode('UTF-8'))
2571 if scope is not None:
2572 return config_data
2573 return Config(config_data)
2574@@ -297,10 +311,10 @@
2575 if unit:
2576 _args.append(unit)
2577 try:
2578- return json.loads(subprocess.check_output(_args))
2579+ return json.loads(subprocess.check_output(_args).decode('UTF-8'))
2580 except ValueError:
2581 return None
2582- except CalledProcessError, e:
2583+ except CalledProcessError as e:
2584 if e.returncode == 2:
2585 return None
2586 raise
2587@@ -312,7 +326,7 @@
2588 relation_cmd_line = ['relation-set']
2589 if relation_id is not None:
2590 relation_cmd_line.extend(('-r', relation_id))
2591- for k, v in (relation_settings.items() + kwargs.items()):
2592+ for k, v in (list(relation_settings.items()) + list(kwargs.items())):
2593 if v is None:
2594 relation_cmd_line.append('{}='.format(k))
2595 else:
2596@@ -329,7 +343,8 @@
2597 relid_cmd_line = ['relation-ids', '--format=json']
2598 if reltype is not None:
2599 relid_cmd_line.append(reltype)
2600- return json.loads(subprocess.check_output(relid_cmd_line)) or []
2601+ return json.loads(
2602+ subprocess.check_output(relid_cmd_line).decode('UTF-8')) or []
2603 return []
2604
2605
2606@@ -340,7 +355,8 @@
2607 units_cmd_line = ['relation-list', '--format=json']
2608 if relid is not None:
2609 units_cmd_line.extend(('-r', relid))
2610- return json.loads(subprocess.check_output(units_cmd_line)) or []
2611+ return json.loads(
2612+ subprocess.check_output(units_cmd_line).decode('UTF-8')) or []
2613
2614
2615 @cached
2616@@ -449,7 +465,7 @@
2617 """Get the unit ID for the remote unit"""
2618 _args = ['unit-get', '--format=json', attribute]
2619 try:
2620- return json.loads(subprocess.check_output(_args))
2621+ return json.loads(subprocess.check_output(_args).decode('UTF-8'))
2622 except ValueError:
2623 return None
2624
2625
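[Reviewer note] The new `Config.keys()` override merges the keys of the previously saved config with the currently set ones, deduplicated. A toy reimplementation of just that behaviour (only `keys()` is mirrored here, nothing else from the hookenv class):

```python
# Toy version of the hookenv Config.keys() change: merge keys from the
# previously-saved dict with the keys currently set, without duplicates.
class Config(dict):
    def __init__(self, *args, **kwargs):
        super(Config, self).__init__(*args, **kwargs)
        self._prev_dict = None

    def keys(self):
        prev_keys = []
        if self._prev_dict is not None:
            # list() is needed on Python 3, where dict.keys() is a view.
            prev_keys = list(self._prev_dict.keys())
        return list(set(prev_keys + list(dict.keys(self))))


cfg = Config({'vip': '10.0.0.5'})
cfg._prev_dict = {'vip': '10.0.0.4', 'use-syslog': True}
print(sorted(cfg.keys()))  # → ['use-syslog', 'vip']
```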
2626=== modified file 'hooks/charmhelpers/core/host.py'
2627--- hooks/charmhelpers/core/host.py 2014-10-16 17:42:14 +0000
2628+++ hooks/charmhelpers/core/host.py 2014-12-05 07:18:23 +0000
2629@@ -13,13 +13,13 @@
2630 import string
2631 import subprocess
2632 import hashlib
2633-import shutil
2634 from contextlib import contextmanager
2635-
2636 from collections import OrderedDict
2637
2638-from hookenv import log
2639-from fstab import Fstab
2640+import six
2641+
2642+from .hookenv import log
2643+from .fstab import Fstab
2644
2645
2646 def service_start(service_name):
2647@@ -55,7 +55,9 @@
2648 def service_running(service):
2649 """Determine whether a system service is running"""
2650 try:
2651- output = subprocess.check_output(['service', service, 'status'], stderr=subprocess.STDOUT)
2652+ output = subprocess.check_output(
2653+ ['service', service, 'status'],
2654+ stderr=subprocess.STDOUT).decode('UTF-8')
2655 except subprocess.CalledProcessError:
2656 return False
2657 else:
2658@@ -68,7 +70,9 @@
2659 def service_available(service_name):
2660 """Determine whether a system service is available"""
2661 try:
2662- subprocess.check_output(['service', service_name, 'status'], stderr=subprocess.STDOUT)
2663+ subprocess.check_output(
2664+ ['service', service_name, 'status'],
2665+ stderr=subprocess.STDOUT).decode('UTF-8')
2666 except subprocess.CalledProcessError as e:
2667 return 'unrecognized service' not in e.output
2668 else:
2669@@ -97,6 +101,26 @@
2670 return user_info
2671
2672
2673+def add_group(group_name, system_group=False):
2674+ """Add a group to the system"""
2675+ try:
2676+ group_info = grp.getgrnam(group_name)
2677+ log('group {0} already exists!'.format(group_name))
2678+ except KeyError:
2679+ log('creating group {0}'.format(group_name))
2680+ cmd = ['addgroup']
2681+ if system_group:
2682+ cmd.append('--system')
2683+ else:
2684+ cmd.extend([
2685+ '--group',
2686+ ])
2687+ cmd.append(group_name)
2688+ subprocess.check_call(cmd)
2689+ group_info = grp.getgrnam(group_name)
2690+ return group_info
2691+
2692+
2693 def add_user_to_group(username, group):
2694 """Add a user to a group"""
2695 cmd = [
2696@@ -116,7 +140,7 @@
2697 cmd.append(from_path)
2698 cmd.append(to_path)
2699 log(" ".join(cmd))
2700- return subprocess.check_output(cmd).strip()
2701+ return subprocess.check_output(cmd).decode('UTF-8').strip()
2702
2703
2704 def symlink(source, destination):
2705@@ -131,7 +155,7 @@
2706 subprocess.check_call(cmd)
2707
2708
2709-def mkdir(path, owner='root', group='root', perms=0555, force=False):
2710+def mkdir(path, owner='root', group='root', perms=0o555, force=False):
2711 """Create a directory"""
2712 log("Making dir {} {}:{} {:o}".format(path, owner, group,
2713 perms))
2714@@ -147,7 +171,7 @@
2715 os.chown(realpath, uid, gid)
2716
2717
2718-def write_file(path, content, owner='root', group='root', perms=0444):
2719+def write_file(path, content, owner='root', group='root', perms=0o444):
2720 """Create or overwrite a file with the contents of a string"""
2721 log("Writing file {} {}:{} {:o}".format(path, owner, group, perms))
2722 uid = pwd.getpwnam(owner).pw_uid
2723@@ -178,7 +202,7 @@
2724 cmd_args.extend([device, mountpoint])
2725 try:
2726 subprocess.check_output(cmd_args)
2727- except subprocess.CalledProcessError, e:
2728+ except subprocess.CalledProcessError as e:
2729 log('Error mounting {} at {}\n{}'.format(device, mountpoint, e.output))
2730 return False
2731
2732@@ -192,7 +216,7 @@
2733 cmd_args = ['umount', mountpoint]
2734 try:
2735 subprocess.check_output(cmd_args)
2736- except subprocess.CalledProcessError, e:
2737+ except subprocess.CalledProcessError as e:
2738 log('Error unmounting {}\n{}'.format(mountpoint, e.output))
2739 return False
2740
2741@@ -219,8 +243,8 @@
2742 """
2743 if os.path.exists(path):
2744 h = getattr(hashlib, hash_type)()
2745- with open(path, 'r') as source:
2746- h.update(source.read()) # IGNORE:E1101 - it does have update
2747+ with open(path, 'rb') as source:
2748+ h.update(source.read())
2749 return h.hexdigest()
2750 else:
2751 return None
2752@@ -298,7 +322,7 @@
2753 if length is None:
2754 length = random.choice(range(35, 45))
2755 alphanumeric_chars = [
2756- l for l in (string.letters + string.digits)
2757+ l for l in (string.ascii_letters + string.digits)
2758 if l not in 'l0QD1vAEIOUaeiou']
2759 random_chars = [
2760 random.choice(alphanumeric_chars) for _ in range(length)]
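`string.letters` no longer exists on Python 3; `string.ascii_letters` is the portable name. For reference, the surrounding password generator (charmhelpers' `pwgen`) amounts to the following sketch:

```python
import random
import string


def pwgen(length=None):
    # Mirror the helper's behaviour: a random length in [35, 44] when
    # unset, drawing from letters+digits minus visually ambiguous chars.
    if length is None:
        length = random.choice(range(35, 45))
    alphanumeric_chars = [
        c for c in (string.ascii_letters + string.digits)
        if c not in 'l0QD1vAEIOUaeiou']
    return ''.join(random.choice(alphanumeric_chars) for _ in range(length))
```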
2761@@ -307,14 +331,14 @@
2762
2763 def list_nics(nic_type):
2764 '''Return a list of nics of given type(s)'''
2765- if isinstance(nic_type, basestring):
2766+ if isinstance(nic_type, six.string_types):
2767 int_types = [nic_type]
2768 else:
2769 int_types = nic_type
2770 interfaces = []
2771 for int_type in int_types:
2772 cmd = ['ip', 'addr', 'show', 'label', int_type + '*']
2773- ip_output = subprocess.check_output(cmd).split('\n')
2774+ ip_output = subprocess.check_output(cmd).decode('UTF-8').split('\n')
2775 ip_output = (line for line in ip_output if line)
2776 for line in ip_output:
2777 if line.split()[1].startswith(int_type):
2778@@ -328,15 +352,44 @@
2779 return interfaces
2780
2781
2782-def set_nic_mtu(nic, mtu):
2783+def set_nic_mtu(nic, mtu, persistence=False):
2784 '''Set MTU on a network interface'''
2785 cmd = ['ip', 'link', 'set', nic, 'mtu', mtu]
2786 subprocess.check_call(cmd)
2787+ # persistence mtu configuration
2788+ if not persistence:
2789+ return
2790+ if os.path.exists("/etc/network/interfaces.d/%s.cfg" % nic):
2791+ nic_cfg_file = "/etc/network/interfaces.d/%s.cfg" % nic
2792+ else:
2793+ nic_cfg_file = "/etc/network/interfaces"
2794+ f = open(nic_cfg_file,"r")
2795+ lines = f.readlines()
2796+ found = False
2797+ length = len(lines)
2798+ for i in range(len(lines)):
2799+ lines[i] = lines[i].replace('\n', '')
2800+ if lines[i].startswith("iface %s" % nic):
2801+ found = True
2802+ lines.insert(i+1, " up ip link set $IFACE mtu %s" % mtu)
2803+ lines.insert(i+2, " down ip link set $IFACE mtu 1500")
2804+ if length>i+2 and lines[i+3].startswith(" up ip link set $IFACE mtu"):
2805+ del lines[i+3]
2806+ if length>i+2 and lines[i+3].startswith(" down ip link set $IFACE mtu"):
2807+ del lines[i+3]
2808+ break
2809+ if not found:
2810+ lines.insert(length+1, "")
2811+ lines.insert(length+2, "auto %s" % nic)
2812+ lines.insert(length+3, "iface %s inet dhcp" % nic)
2813+ lines.insert(length+4, " up ip link set $IFACE mtu %s" % mtu)
2814+ lines.insert(length+5, " down ip link set $IFACE mtu 1500")
2815+ write_file(path=nic_cfg_file, content="\n".join(lines), perms=0o644)
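The insert/delete bookkeeping in this hunk is fragile: after two `insert()` calls the stale hooks sit at `lines[i+3]`, guarded by `length > i+2`, and the file handle is never closed. A pure-function sketch of the same idea, operating on the file's lines instead of the file itself (hypothetical `inject_mtu`, not part of the charm):

```python
def inject_mtu(lines, nic, mtu):
    """Return interfaces-file lines with up/down MTU hooks for nic.

    Replaces any existing 'up/down ip link set $IFACE mtu' lines that
    follow the iface stanza; appends a new dhcp stanza if nic is absent.
    """
    out = []
    found = False
    i = 0
    while i < len(lines):
        line = lines[i].rstrip('\n')
        out.append(line)
        if line.startswith("iface %s" % nic):
            found = True
            out.append("    up ip link set $IFACE mtu %s" % mtu)
            out.append("    down ip link set $IFACE mtu 1500")
            # Skip any stale MTU hook lines that followed the stanza.
            i += 1
            while i < len(lines) and lines[i].strip().startswith(
                    ('up ip link set $IFACE mtu',
                     'down ip link set $IFACE mtu')):
                i += 1
            continue
        i += 1
    if not found:
        out += ["", "auto %s" % nic, "iface %s inet dhcp" % nic,
                "    up ip link set $IFACE mtu %s" % mtu,
                "    down ip link set $IFACE mtu 1500"]
    return out
```

Keeping the transformation pure makes the idempotency requirement (running the hook twice must not duplicate stanzas) trivially testable without touching `/etc/network`.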
2816
2817
2818 def get_nic_mtu(nic):
2819 cmd = ['ip', 'addr', 'show', nic]
2820- ip_output = subprocess.check_output(cmd).split('\n')
2821+ ip_output = subprocess.check_output(cmd).decode('UTF-8').split('\n')
2822 mtu = ""
2823 for line in ip_output:
2824 words = line.split()
2825@@ -347,7 +400,7 @@
2826
2827 def get_nic_hwaddr(nic):
2828 cmd = ['ip', '-o', '-0', 'addr', 'show', nic]
2829- ip_output = subprocess.check_output(cmd)
2830+ ip_output = subprocess.check_output(cmd).decode('UTF-8')
2831 hwaddr = ""
2832 words = ip_output.split()
2833 if 'link/ether' in words:
2834@@ -364,8 +417,8 @@
2835
2836 '''
2837 import apt_pkg
2838- from charmhelpers.fetch import apt_cache
2839 if not pkgcache:
2840+ from charmhelpers.fetch import apt_cache
2841 pkgcache = apt_cache()
2842 pkg = pkgcache[package]
2843 return apt_pkg.version_compare(pkg.current_ver.ver_str, revno)
2844
2845=== modified file 'hooks/charmhelpers/core/services/__init__.py'
2846--- hooks/charmhelpers/core/services/__init__.py 2014-08-13 13:12:47 +0000
2847+++ hooks/charmhelpers/core/services/__init__.py 2014-12-05 07:18:23 +0000
2848@@ -1,2 +1,2 @@
2849-from .base import *
2850-from .helpers import *
2851+from .base import * # NOQA
2852+from .helpers import * # NOQA
2853
2854=== modified file 'hooks/charmhelpers/core/services/helpers.py'
2855--- hooks/charmhelpers/core/services/helpers.py 2014-09-25 15:37:05 +0000
2856+++ hooks/charmhelpers/core/services/helpers.py 2014-12-05 07:18:23 +0000
2857@@ -196,7 +196,7 @@
2858 if not os.path.isabs(file_name):
2859 file_name = os.path.join(hookenv.charm_dir(), file_name)
2860 with open(file_name, 'w') as file_stream:
2861- os.fchmod(file_stream.fileno(), 0600)
2862+ os.fchmod(file_stream.fileno(), 0o600)
2863 yaml.dump(config_data, file_stream)
2864
2865 def read_context(self, file_name):
2866@@ -211,15 +211,19 @@
2867
2868 class TemplateCallback(ManagerCallback):
2869 """
2870- Callback class that will render a Jinja2 template, for use as a ready action.
2871-
2872- :param str source: The template source file, relative to `$CHARM_DIR/templates`
2873+ Callback class that will render a Jinja2 template, for use as a ready
2874+ action.
2875+
2876+ :param str source: The template source file, relative to
2877+ `$CHARM_DIR/templates`
2878+
2879 :param str target: The target to write the rendered template to
2880 :param str owner: The owner of the rendered file
2881 :param str group: The group of the rendered file
2882 :param int perms: The permissions of the rendered file
2883 """
2884- def __init__(self, source, target, owner='root', group='root', perms=0444):
2885+ def __init__(self, source, target,
2886+ owner='root', group='root', perms=0o444):
2887 self.source = source
2888 self.target = target
2889 self.owner = owner
2890
2891=== modified file 'hooks/charmhelpers/core/templating.py'
2892--- hooks/charmhelpers/core/templating.py 2014-08-13 13:12:47 +0000
2893+++ hooks/charmhelpers/core/templating.py 2014-12-05 07:18:23 +0000
2894@@ -4,7 +4,8 @@
2895 from charmhelpers.core import hookenv
2896
2897
2898-def render(source, target, context, owner='root', group='root', perms=0444, templates_dir=None):
2899+def render(source, target, context, owner='root', group='root',
2900+ perms=0o444, templates_dir=None):
2901 """
2902 Render a template.
2903
2904
2905=== modified file 'hooks/charmhelpers/fetch/__init__.py'
2906--- hooks/charmhelpers/fetch/__init__.py 2014-09-25 15:37:05 +0000
2907+++ hooks/charmhelpers/fetch/__init__.py 2014-12-05 07:18:23 +0000
2908@@ -5,10 +5,6 @@
2909 from charmhelpers.core.host import (
2910 lsb_release
2911 )
2912-from urlparse import (
2913- urlparse,
2914- urlunparse,
2915-)
2916 import subprocess
2917 from charmhelpers.core.hookenv import (
2918 config,
2919@@ -16,6 +12,12 @@
2920 )
2921 import os
2922
2923+import six
2924+if six.PY3:
2925+ from urllib.parse import urlparse, urlunparse
2926+else:
2927+ from urlparse import urlparse, urlunparse
2928+
2929
2930 CLOUD_ARCHIVE = """# Ubuntu Cloud Archive
2931 deb http://ubuntu-cloud.archive.canonical.com/ubuntu {} main
2932@@ -72,6 +74,7 @@
2933 FETCH_HANDLERS = (
2934 'charmhelpers.fetch.archiveurl.ArchiveUrlFetchHandler',
2935 'charmhelpers.fetch.bzrurl.BzrUrlFetchHandler',
2936+ 'charmhelpers.fetch.giturl.GitUrlFetchHandler',
2937 )
2938
2939 APT_NO_LOCK = 100 # The return code for "couldn't acquire lock" in APT.
2940@@ -148,7 +151,7 @@
2941 cmd = ['apt-get', '--assume-yes']
2942 cmd.extend(options)
2943 cmd.append('install')
2944- if isinstance(packages, basestring):
2945+ if isinstance(packages, six.string_types):
2946 cmd.append(packages)
2947 else:
2948 cmd.extend(packages)
2949@@ -181,7 +184,7 @@
2950 def apt_purge(packages, fatal=False):
2951 """Purge one or more packages"""
2952 cmd = ['apt-get', '--assume-yes', 'purge']
2953- if isinstance(packages, basestring):
2954+ if isinstance(packages, six.string_types):
2955 cmd.append(packages)
2956 else:
2957 cmd.extend(packages)
2958@@ -192,7 +195,7 @@
2959 def apt_hold(packages, fatal=False):
2960 """Hold one or more packages"""
2961 cmd = ['apt-mark', 'hold']
2962- if isinstance(packages, basestring):
2963+ if isinstance(packages, six.string_types):
2964 cmd.append(packages)
2965 else:
2966 cmd.extend(packages)
2967@@ -218,6 +221,7 @@
2968 pocket for the release.
2969 'cloud:' may be used to activate official cloud archive pockets,
2970 such as 'cloud:icehouse'
2971+ 'distro' may be used as a noop
2972
2973 @param key: A key to be added to the system's APT keyring and used
2974 to verify the signatures on packages. Ideally, this should be an
2975@@ -251,12 +255,14 @@
2976 release = lsb_release()['DISTRIB_CODENAME']
2977 with open('/etc/apt/sources.list.d/proposed.list', 'w') as apt:
2978 apt.write(PROPOSED_POCKET.format(release))
2979+ elif source == 'distro':
2980+ pass
2981 else:
2982- raise SourceConfigError("Unknown source: {!r}".format(source))
2983+ log("Unknown source: {!r}".format(source))
2984
2985 if key:
2986 if '-----BEGIN PGP PUBLIC KEY BLOCK-----' in key:
2987- with NamedTemporaryFile() as key_file:
2988+ with NamedTemporaryFile('w+') as key_file:
2989 key_file.write(key)
2990 key_file.flush()
2991 key_file.seek(0)
2992@@ -293,14 +299,14 @@
2993 sources = safe_load((config(sources_var) or '').strip()) or []
2994 keys = safe_load((config(keys_var) or '').strip()) or None
2995
2996- if isinstance(sources, basestring):
2997+ if isinstance(sources, six.string_types):
2998 sources = [sources]
2999
3000 if keys is None:
3001 for source in sources:
3002 add_source(source, None)
3003 else:
3004- if isinstance(keys, basestring):
3005+ if isinstance(keys, six.string_types):
3006 keys = [keys]
3007
3008 if len(sources) != len(keys):
3009@@ -397,7 +403,7 @@
3010 while result is None or result == APT_NO_LOCK:
3011 try:
3012 result = subprocess.check_call(cmd, env=env)
3013- except subprocess.CalledProcessError, e:
3014+ except subprocess.CalledProcessError as e:
3015 retry_count = retry_count + 1
3016 if retry_count > APT_NO_LOCK_RETRY_COUNT:
3017 raise
3018
3019=== modified file 'hooks/charmhelpers/fetch/archiveurl.py'
3020--- hooks/charmhelpers/fetch/archiveurl.py 2014-09-25 15:37:05 +0000
3021+++ hooks/charmhelpers/fetch/archiveurl.py 2014-12-05 07:18:23 +0000
3022@@ -1,8 +1,23 @@
3023 import os
3024-import urllib2
3025-from urllib import urlretrieve
3026-import urlparse
3027 import hashlib
3028+import re
3029+
3030+import six
3031+if six.PY3:
3032+ from urllib.request import (
3033+ build_opener, install_opener, urlopen, urlretrieve,
3034+ HTTPPasswordMgrWithDefaultRealm, HTTPBasicAuthHandler,
3035+ )
3036+ from urllib.parse import urlparse, urlunparse, parse_qs
3037+ from urllib.error import URLError
3038+else:
3039+ from urllib import urlretrieve
3040+ from urllib2 import (
3041+ build_opener, install_opener, urlopen,
3042+ HTTPPasswordMgrWithDefaultRealm, HTTPBasicAuthHandler,
3043+ URLError
3044+ )
3045+ from urlparse import urlparse, urlunparse, parse_qs
3046
3047 from charmhelpers.fetch import (
3048 BaseFetchHandler,
3049@@ -15,6 +30,24 @@
3050 from charmhelpers.core.host import mkdir, check_hash
3051
3052
3053+def splituser(host):
3054+ '''urllib.splituser(), but six's support of this seems broken'''
3055+ _userprog = re.compile('^(.*)@(.*)$')
3056+ match = _userprog.match(host)
3057+ if match:
3058+ return match.group(1, 2)
3059+ return None, host
3060+
3061+
3062+def splitpasswd(user):
3063+ '''urllib.splitpasswd(), but six's support of this is missing'''
3064+ _passwdprog = re.compile('^([^:]*):(.*)$', re.S)
3065+ match = _passwdprog.match(user)
3066+ if match:
3067+ return match.group(1, 2)
3068+ return user, None
3069+
3070+
3071 class ArchiveUrlFetchHandler(BaseFetchHandler):
3072 """
3073 Handler to download archive files from arbitrary URLs.
3074@@ -42,20 +75,20 @@
3075 """
3076 # propogate all exceptions
3077 # URLError, OSError, etc
3078- proto, netloc, path, params, query, fragment = urlparse.urlparse(source)
3079+ proto, netloc, path, params, query, fragment = urlparse(source)
3080 if proto in ('http', 'https'):
3081- auth, barehost = urllib2.splituser(netloc)
3082+ auth, barehost = splituser(netloc)
3083 if auth is not None:
3084- source = urlparse.urlunparse((proto, barehost, path, params, query, fragment))
3085- username, password = urllib2.splitpasswd(auth)
3086- passman = urllib2.HTTPPasswordMgrWithDefaultRealm()
3087+ source = urlunparse((proto, barehost, path, params, query, fragment))
3088+ username, password = splitpasswd(auth)
3089+ passman = HTTPPasswordMgrWithDefaultRealm()
3090 # Realm is set to None in add_password to force the username and password
3091 # to be used whatever the realm
3092 passman.add_password(None, source, username, password)
3093- authhandler = urllib2.HTTPBasicAuthHandler(passman)
3094- opener = urllib2.build_opener(authhandler)
3095- urllib2.install_opener(opener)
3096- response = urllib2.urlopen(source)
3097+ authhandler = HTTPBasicAuthHandler(passman)
3098+ opener = build_opener(authhandler)
3099+ install_opener(opener)
3100+ response = urlopen(source)
3101 try:
3102 with open(dest, 'w') as dest_file:
3103 dest_file.write(response.read())
3104@@ -91,17 +124,21 @@
3105 url_parts = self.parse_url(source)
3106 dest_dir = os.path.join(os.environ.get('CHARM_DIR'), 'fetched')
3107 if not os.path.exists(dest_dir):
3108- mkdir(dest_dir, perms=0755)
3109+ mkdir(dest_dir, perms=0o755)
3110 dld_file = os.path.join(dest_dir, os.path.basename(url_parts.path))
3111 try:
3112 self.download(source, dld_file)
3113- except urllib2.URLError as e:
3114+ except URLError as e:
3115 raise UnhandledSource(e.reason)
3116 except OSError as e:
3117 raise UnhandledSource(e.strerror)
3118- options = urlparse.parse_qs(url_parts.fragment)
3119+ options = parse_qs(url_parts.fragment)
3120 for key, value in options.items():
3121- if key in hashlib.algorithms:
3122+ if not six.PY3:
3123+ algorithms = hashlib.algorithms
3124+ else:
3125+ algorithms = hashlib.algorithms_available
3126+ if key in algorithms:
3127 check_hash(dld_file, value, key)
3128 if checksum:
3129 check_hash(dld_file, checksum, hash_type)
3130
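The install path above reads checksum options out of the URL fragment (`#md5=…`) and verifies only keys that name real hash algorithms; on Python 3 the lookup set is `hashlib.algorithms_available` since `hashlib.algorithms` was removed. A sketch of that verification step (hypothetical `fragment_checksums`/`check_bytes` helpers, Python 3 form):

```python
import hashlib
from urllib.parse import urlparse, parse_qs


def fragment_checksums(url):
    # '#md5=abc&foo=x' -> {'md5': ['abc']}: keep only fragment keys
    # that name an available hashlib algorithm.
    options = parse_qs(urlparse(url).fragment)
    return {k: v for k, v in options.items()
            if k in hashlib.algorithms_available}


def check_bytes(data, expected, hash_type):
    # True when data's digest under hash_type matches expected.
    h = getattr(hashlib, hash_type)()
    h.update(data)
    return h.hexdigest() == expected
```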
3131=== modified file 'hooks/charmhelpers/fetch/bzrurl.py'
3132--- hooks/charmhelpers/fetch/bzrurl.py 2014-06-24 13:40:39 +0000
3133+++ hooks/charmhelpers/fetch/bzrurl.py 2014-12-05 07:18:23 +0000
3134@@ -5,6 +5,10 @@
3135 )
3136 from charmhelpers.core.host import mkdir
3137
3138+import six
3139+if six.PY3:
3140+ raise ImportError('bzrlib does not support Python3')
3141+
3142 try:
3143 from bzrlib.branch import Branch
3144 except ImportError:
3145@@ -42,7 +46,7 @@
3146 dest_dir = os.path.join(os.environ.get('CHARM_DIR'), "fetched",
3147 branch_name)
3148 if not os.path.exists(dest_dir):
3149- mkdir(dest_dir, perms=0755)
3150+ mkdir(dest_dir, perms=0o755)
3151 try:
3152 self.branch(source, dest_dir)
3153 except OSError as e:
3154
3155=== modified file 'hooks/quantum_contexts.py'
3156--- hooks/quantum_contexts.py 2014-11-24 09:34:05 +0000
3157+++ hooks/quantum_contexts.py 2014-12-05 07:18:23 +0000
3158@@ -111,8 +111,8 @@
3159 '''
3160 neutron_settings = {
3161 'l2_population': False,
3162+ 'network_device_mtu': 1500,
3163 'overlay_network_type': 'gre',
3164-
3165 }
3166 for rid in relation_ids('neutron-plugin-api'):
3167 for unit in related_units(rid):
3168@@ -122,6 +122,7 @@
3169 neutron_settings = {
3170 'l2_population': rdata['l2-population'],
3171 'overlay_network_type': rdata['overlay-network-type'],
3172+ 'network_device_mtu': rdata['network-device-mtu'],
3173 }
3174 return neutron_settings
3175 return neutron_settings
3176@@ -243,6 +244,7 @@
3177 'verbose': config('verbose'),
3178 'instance_mtu': config('instance-mtu'),
3179 'l2_population': neutron_api_settings['l2_population'],
3180+ 'network_device_mtu': neutron_api_settings['network_device_mtu'],
3181 'overlay_network_type':
3182 neutron_api_settings['overlay_network_type'],
3183 }
3184
3185=== modified file 'hooks/quantum_hooks.py'
3186--- hooks/quantum_hooks.py 2014-11-19 03:09:34 +0000
3187+++ hooks/quantum_hooks.py 2014-12-05 07:18:23 +0000
3188@@ -22,6 +22,9 @@
3189 restart_on_change,
3190 lsb_release,
3191 )
3192+from charmhelpers.contrib.network.ip import (
3193+ configure_phy_nic_mtu
3194+)
3195 from charmhelpers.contrib.hahelpers.cluster import(
3196 eligible_leader
3197 )
3198@@ -66,6 +69,7 @@
3199 fatal=True)
3200 apt_install(filter_installed_packages(get_packages()),
3201 fatal=True)
3202+ configure_phy_nic_mtu()
3203 else:
3204 log('Please provide a valid plugin config', level=ERROR)
3205 sys.exit(1)
3206@@ -89,6 +93,7 @@
3207 if valid_plugin():
3208 CONFIGS.write_all()
3209 configure_ovs()
3210+ configure_phy_nic_mtu()
3211 else:
3212 log('Please provide a valid plugin config', level=ERROR)
3213 sys.exit(1)
3214
3215=== modified file 'hooks/quantum_utils.py'
3216--- hooks/quantum_utils.py 2014-11-24 09:34:05 +0000
3217+++ hooks/quantum_utils.py 2014-12-05 07:18:23 +0000
3218@@ -2,7 +2,7 @@
3219 service_running,
3220 service_stop,
3221 service_restart,
3222- lsb_release
3223+ lsb_release,
3224 )
3225 from charmhelpers.core.hookenv import (
3226 log,
3227
3228=== modified file 'templates/icehouse/neutron.conf'
3229--- templates/icehouse/neutron.conf 2014-06-11 09:30:31 +0000
3230+++ templates/icehouse/neutron.conf 2014-12-05 07:18:23 +0000
3231@@ -11,5 +11,6 @@
3232 control_exchange = neutron
3233 notification_driver = neutron.openstack.common.notifier.list_notifier
3234 list_notifier_drivers = neutron.openstack.common.notifier.rabbit_notifier
3235+network_device_mtu = {{ network_device_mtu }}
3236 [agent]
3237 root_helper = sudo /usr/bin/neutron-rootwrap /etc/neutron/rootwrap.conf
3238\ No newline at end of file
3239
3240=== modified file 'unit_tests/test_quantum_contexts.py'
3241--- unit_tests/test_quantum_contexts.py 2014-11-24 09:34:05 +0000
3242+++ unit_tests/test_quantum_contexts.py 2014-12-05 07:18:23 +0000
3243@@ -46,48 +46,12 @@
3244 yield mock_open, mock_file
3245
3246
3247-class _TestQuantumContext(CharmTestCase):
3248+class TestNetworkServiceContext(CharmTestCase):
3249
3250 def setUp(self):
3251- super(_TestQuantumContext, self).setUp(quantum_contexts, TO_PATCH)
3252+ super(TestNetworkServiceContext, self).setUp(quantum_contexts,
3253+ TO_PATCH)
3254 self.config.side_effect = self.test_config.get
3255-
3256- def test_not_related(self):
3257- self.relation_ids.return_value = []
3258- self.assertEquals(self.context(), {})
3259-
3260- def test_no_units(self):
3261- self.relation_ids.return_value = []
3262- self.relation_ids.return_value = ['foo']
3263- self.related_units.return_value = []
3264- self.assertEquals(self.context(), {})
3265-
3266- def test_no_data(self):
3267- self.relation_ids.return_value = ['foo']
3268- self.related_units.return_value = ['bar']
3269- self.relation_get.side_effect = self.test_relation.get
3270- self.context_complete.return_value = False
3271- self.assertEquals(self.context(), {})
3272-
3273- def test_data_multi_unit(self):
3274- self.relation_ids.return_value = ['foo']
3275- self.related_units.return_value = ['bar', 'baz']
3276- self.context_complete.return_value = True
3277- self.relation_get.side_effect = self.test_relation.get
3278- self.assertEquals(self.context(), self.data_result)
3279-
3280- def test_data_single_unit(self):
3281- self.relation_ids.return_value = ['foo']
3282- self.related_units.return_value = ['bar']
3283- self.context_complete.return_value = True
3284- self.relation_get.side_effect = self.test_relation.get
3285- self.assertEquals(self.context(), self.data_result)
3286-
3287-
3288-class TestNetworkServiceContext(_TestQuantumContext):
3289-
3290- def setUp(self):
3291- super(TestNetworkServiceContext, self).setUp()
3292 self.context = quantum_contexts.NetworkServiceContext()
3293 self.test_relation.set(
3294 {'keystone_host': '10.5.0.1',
3295@@ -116,6 +80,37 @@
3296 'auth_protocol': 'http',
3297 }
3298
3299+ def test_not_related(self):
3300+ self.relation_ids.return_value = []
3301+ self.assertEquals(self.context(), {})
3302+
3303+ def test_no_units(self):
3304+ self.relation_ids.return_value = []
3305+ self.relation_ids.return_value = ['foo']
3306+ self.related_units.return_value = []
3307+ self.assertEquals(self.context(), {})
3308+
3309+ def test_no_data(self):
3310+ self.relation_ids.return_value = ['foo']
3311+ self.related_units.return_value = ['bar']
3312+ self.relation_get.side_effect = self.test_relation.get
3313+ self.context_complete.return_value = False
3314+ self.assertEquals(self.context(), {})
3315+
3316+ def test_data_multi_unit(self):
3317+ self.relation_ids.return_value = ['foo']
3318+ self.related_units.return_value = ['bar', 'baz']
3319+ self.context_complete.return_value = True
3320+ self.relation_get.side_effect = self.test_relation.get
3321+ self.assertEquals(self.context(), self.data_result)
3322+
3323+ def test_data_single_unit(self):
3324+ self.relation_ids.return_value = ['foo']
3325+ self.related_units.return_value = ['bar']
3326+ self.context_complete.return_value = True
3327+ self.relation_get.side_effect = self.test_relation.get
3328+ self.assertEquals(self.context(), self.data_result)
3329+
3330
3331 class TestNeutronPortContext(CharmTestCase):
3332
3333@@ -241,6 +236,7 @@
3334 'debug': False,
3335 'verbose': True,
3336 'l2_population': False,
3337+ 'network_device_mtu': 1500,
3338 'overlay_network_type': 'gre',
3339 })
3340
3341@@ -367,24 +363,29 @@
3342 self.relation_ids.return_value = ['foo']
3343 self.related_units.return_value = ['bar']
3344 self.test_relation.set({'l2-population': True,
3345+ 'network-device-mtu': 1500,
3346 'overlay-network-type': 'gre', })
3347 self.relation_get.side_effect = self.test_relation.get
3348 self.assertEquals(quantum_contexts._neutron_api_settings(),
3349 {'l2_population': True,
3350+ 'network_device_mtu': 1500,
3351 'overlay_network_type': 'gre'})
3352
3353 def test_neutron_api_settings2(self):
3354 self.relation_ids.return_value = ['foo']
3355 self.related_units.return_value = ['bar']
3356 self.test_relation.set({'l2-population': True,
3357+ 'network-device-mtu': 1500,
3358 'overlay-network-type': 'gre', })
3359 self.relation_get.side_effect = self.test_relation.get
3360 self.assertEquals(quantum_contexts._neutron_api_settings(),
3361 {'l2_population': True,
3362+ 'network_device_mtu': 1500,
3363 'overlay_network_type': 'gre'})
3364
3365 def test_neutron_api_settings_no_apiplugin(self):
3366 self.relation_ids.return_value = []
3367 self.assertEquals(quantum_contexts._neutron_api_settings(),
3368 {'l2_population': False,
3369+ 'network_device_mtu': 1500,
3370 'overlay_network_type': 'gre', })
3371
3372=== modified file 'unit_tests/test_quantum_hooks.py'
3373--- unit_tests/test_quantum_hooks.py 2014-11-19 03:09:34 +0000
3374+++ unit_tests/test_quantum_hooks.py 2014-12-05 07:18:23 +0000
3375@@ -40,7 +40,8 @@
3376 'lsb_release',
3377 'stop_services',
3378 'b64decode',
3379- 'is_relation_made'
3380+ 'is_relation_made',
3381+ 'configure_phy_nic_mtu'
3382 ]
3383
3384
3385@@ -80,6 +81,7 @@
3386 self.assertTrue(self.get_early_packages.called)
3387 self.assertTrue(self.get_packages.called)
3388 self.assertTrue(self.execd_preinstall.called)
3389+ self.assertTrue(self.configure_phy_nic_mtu.called)
3390
3391 def test_install_hook_precise_nocloudarchive(self):
3392 self.test_config.set('openstack-origin', 'distro')
3393@@ -112,6 +114,7 @@
3394 self.assertTrue(_pgsql_db_joined.called)
3395 self.assertTrue(_amqp_joined.called)
3396 self.assertTrue(_amqp_nova_joined.called)
3397+ self.assertTrue(self.configure_phy_nic_mtu.called)
3398
3399 def test_config_changed_upgrade(self):
3400 self.openstack_upgrade_available.return_value = True
3401@@ -119,6 +122,7 @@
3402 self._call_hook('config-changed')
3403 self.assertTrue(self.do_openstack_upgrade.called)
3404 self.assertTrue(self.configure_ovs.called)
3405+ self.assertTrue(self.configure_phy_nic_mtu.called)
3406
3407 def test_config_changed_n1kv(self):
3408 self.openstack_upgrade_available.return_value = False
