Merge lp:~zhhuabj/charms/trusty/nova-compute/lp74646 into lp:~openstack-charmers-archive/charms/trusty/nova-compute/next

Proposed by Hua Zhang on 2014-11-24
Status: Superseded
Proposed branch: lp:~zhhuabj/charms/trusty/nova-compute/lp74646
Merge into: lp:~openstack-charmers-archive/charms/trusty/nova-compute/next
Diff against target: 3285 lines (+985/-528)
27 files modified
config.yaml (+6/-0)
hooks/charmhelpers/contrib/hahelpers/cluster.py (+16/-7)
hooks/charmhelpers/contrib/network/ip.py (+90/-54)
hooks/charmhelpers/contrib/openstack/amulet/deployment.py (+2/-1)
hooks/charmhelpers/contrib/openstack/amulet/utils.py (+3/-1)
hooks/charmhelpers/contrib/openstack/context.py (+339/-225)
hooks/charmhelpers/contrib/openstack/ip.py (+41/-27)
hooks/charmhelpers/contrib/openstack/neutron.py (+20/-4)
hooks/charmhelpers/contrib/openstack/templates/haproxy.cfg (+2/-2)
hooks/charmhelpers/contrib/openstack/templating.py (+5/-5)
hooks/charmhelpers/contrib/openstack/utils.py (+146/-13)
hooks/charmhelpers/contrib/storage/linux/ceph.py (+89/-102)
hooks/charmhelpers/contrib/storage/linux/loopback.py (+4/-4)
hooks/charmhelpers/contrib/storage/linux/lvm.py (+1/-0)
hooks/charmhelpers/contrib/storage/linux/utils.py (+3/-2)
hooks/charmhelpers/core/fstab.py (+10/-8)
hooks/charmhelpers/core/hookenv.py (+27/-11)
hooks/charmhelpers/core/host.py (+81/-21)
hooks/charmhelpers/core/services/__init__.py (+2/-2)
hooks/charmhelpers/core/services/helpers.py (+9/-5)
hooks/charmhelpers/core/templating.py (+2/-1)
hooks/charmhelpers/fetch/__init__.py (+18/-12)
hooks/charmhelpers/fetch/archiveurl.py (+53/-16)
hooks/charmhelpers/fetch/bzrurl.py (+5/-1)
hooks/nova_compute_hooks.py (+6/-2)
hooks/nova_compute_utils.py (+1/-1)
unit_tests/test_nova_compute_hooks.py (+4/-1)
To merge this branch: bzr merge lp:~zhhuabj/charms/trusty/nova-compute/lp74646
Reviewer Review Type Date Requested Status
Edward Hope-Morley 2014-11-24 Needs Fixing on 2014-11-24
Xiang Hui 2014-11-24 Pending
Review via email: mp+242611@code.launchpad.net

This proposal has been superseded by a proposal from 2014-12-08.

Description of the change

This story (SF#74646) adds support for setting a VM MTU <= 1500 by raising the MTU of the physical NICs and setting network_device_mtu.
1. Set the MTU of the physical NICs in both the nova-compute and neutron-gateway charms:
   juju set nova-compute phy-nic-mtu=1546
   juju set neutron-gateway phy-nic-mtu=1546
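The NIC-selection logic behind step 1 can be sketched as follows (a simplified, illustrative version of the configure_phy_nic_mtu helper added in this branch; the function name and inputs here are assumptions for the example). The key point: when the unit's management address sits on a bridge, the MTU change must target the underlying physical member, not the bridge itself.

```python
def resolve_phy_nic(mng_nic, bridge_members):
    """Return the NIC whose MTU should be raised.

    If the management address lives on a bridge (e.g. 'br0'), the MTU
    change must be applied to the underlying eth*/bond* member of that
    bridge rather than to the bridge device.

    bridge_members maps bridge name -> list of member NIC names.
    """
    if mng_nic.startswith('br'):
        for member in bridge_members.get(mng_nic, []):
            if member.startswith(('eth', 'bond')):
                return member
    return mng_nic


# Management IP lives on br0, which enslaves eth0 and a tap device:
print(resolve_phy_nic('br0', {'br0': ['tap3', 'eth0']}))  # eth0
# No bridge involved - the NIC itself gets the new MTU:
print(resolve_phy_nic('eth1', {}))                        # eth1
```

The actual helper additionally checks that phy-nic-mtu >= 1500 and only calls set_nic_mtu when the current MTU differs.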
2. Set the MTU of the veth peer devices between the OVS bridges br-phy and br-int by adding the 'network-device-mtu' parameter to /etc/neutron/neutron.conf:
   juju set neutron-api network-device-mtu=1546
   Limitations:
   a. Linux bridge is not supported, because the three related parameters (ovs_use_veth, use_veth_interconnection, veth_mtu) are not added.
   b. For GRE and VXLAN this step is optional.
   c. After network-device-mtu=1546 is set on the neutron-api charm, quantum-gateway and neutron-openvswitch pick up the network-device-mtu parameter over the relation, so only the openvswitch plugin is supported at this stage.
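Concretely, step 2 is expected to result in a line like the following in the [DEFAULT] section of /etc/neutron/neutron.conf (the value 1546 is the illustrative one from the example above):

```ini
[DEFAULT]
# MTU applied by neutron to the tap/veth devices it creates
network_device_mtu = 1546
```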
3. The MTU inside the VM can then continue to be configured via DHCP by setting the instance-mtu option:
   juju set neutron-gateway instance-mtu=1500
   Limitations:
   a. Only VM MTU <= 1500 is supported; to set a VM MTU > 1500 you must also set the MTU of the tap devices associated with that VM, per this link (http://pastebin.ubuntu.com/9272762/).
   b. Per-network MTU is not supported.
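Step 3 works because dnsmasq can advertise an MTU to guests via DHCP option 26 ("interface MTU"); with instance-mtu set, the gateway's dnsmasq is expected to be started with a flag along these lines (illustrative, not taken from this diff):

```
dhcp-option-force=26,1500
```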

NOTE: we may not be able to test this feature on bastion.


UOSCI bot says:
charm_lint_check #1213 nova-compute-next for zhhuabj mp242611
    LINT OK: passed

LINT Results (max last 5 lines):
  I: config.yaml: option os-data-network has no default value
  I: config.yaml: option config-flags has no default value
  I: config.yaml: option instances-path has no default value
  W: config.yaml: option disable-neutron-security-groups has no default value
  I: config.yaml: option migration-auth-type has no default value

Full lint test output: http://paste.ubuntu.com/9206996/
Build: http://10.98.191.181:8080/job/charm_lint_check/1213/

UOSCI bot says:
charm_unit_test #1047 nova-compute-next for zhhuabj mp242611
    UNIT OK: passed

UNIT Results (max last 5 lines):
  nova_compute_hooks 134 13 90% 81, 99-100, 129, 195, 253, 258-259, 265, 269-272
  nova_compute_utils 240 120 50% 168-224, 232, 237-240, 275-277, 284, 288-291, 299-307, 311, 320-329, 342-361, 387-388, 392-393, 412-433, 450-460, 474-475, 480-481, 486-495
  TOTAL 578 203 65%
  Ran 56 tests in 3.028s
  OK (SKIP=5)

Full unit test output: http://paste.ubuntu.com/9206997/
Build: http://10.98.191.181:8080/job/charm_unit_test/1047/

UOSCI bot says:
charm_amulet_test #517 nova-compute-next for zhhuabj mp242611
    AMULET FAIL: amulet-test failed

AMULET Results (max last 5 lines):
  subprocess.CalledProcessError: Command '['juju-deployer', '-W', '-L', '-c', '/tmp/amulet-juju-deployer-51bPaJ.json', '-e', 'osci-sv07', '-t', '1000', 'osci-sv07']' returned non-zero exit status 1
  WARNING cannot delete security group "juju-osci-sv07-0". Used by another environment?
  juju-test INFO : Results: 1 passed, 2 failed, 0 errored
  ERROR subprocess encountered error code 2
  make: *** [test] Error 2

Full amulet test output: http://paste.ubuntu.com/9207416/
Build: http://10.98.191.181:8080/job/charm_amulet_test/517/

91. By Hua Zhang on 2014-12-03

sync charm-helpers

92. By Hua Zhang on 2014-12-03

change to use the method of charm-helpers

UOSCI bot says:
charm_lint_check #1312 nova-compute-next for zhhuabj mp242611
    LINT OK: passed

LINT Results (max last 5 lines):
  I: config.yaml: option os-data-network has no default value
  I: config.yaml: option config-flags has no default value
  I: config.yaml: option instances-path has no default value
  W: config.yaml: option disable-neutron-security-groups has no default value
  I: config.yaml: option migration-auth-type has no default value

Full lint test output: http://paste.ubuntu.com/9354896/
Build: http://10.98.191.181:8080/job/charm_lint_check/1312/

UOSCI bot says:
charm_unit_test #1146 nova-compute-next for zhhuabj mp242611
    UNIT OK: passed

UNIT Results (max last 5 lines):
  nova_compute_hooks 134 13 90% 81, 99-100, 129, 195, 253, 258-259, 265, 269-272
  nova_compute_utils 228 110 52% 161-217, 225, 230-233, 268-270, 277, 281-284, 292-300, 304, 313-322, 335-354, 380-381, 385-386, 405-426, 443-453, 467-468, 473-474
  TOTAL 566 193 66%
  Ran 56 tests in 3.216s
  OK (SKIP=5)

Full unit test output: http://paste.ubuntu.com/9354897/
Build: http://10.98.191.181:8080/job/charm_unit_test/1146/

UOSCI bot says:
charm_amulet_test #578 nova-compute-next for zhhuabj mp242611
    AMULET FAIL: amulet-test failed

AMULET Results (max last 5 lines):
  juju-test.conductor DEBUG : Calling "juju destroy-environment -y osci-sv07"
  WARNING cannot delete security group "juju-osci-sv07-0". Used by another environment?
  juju-test INFO : Results: 2 passed, 1 failed, 0 errored
  ERROR subprocess encountered error code 1
  make: *** [test] Error 1

Full amulet test output: http://paste.ubuntu.com/9354972/
Build: http://10.98.191.181:8080/job/charm_amulet_test/578/

93. By Hua Zhang on 2014-12-04

enable persistence

94. By Hua Zhang on 2014-12-04

sync charm-helpers

95. By Hua Zhang on 2014-12-05

sync charm-helpers

96. By Hua Zhang on 2014-12-05

restore charm-helper bzr repository

97. By Hua Zhang on 2014-12-10

sync charm-helpers to include contrib.python to fix unit test error

Unmerged revisions

Preview Diff

1=== modified file 'config.yaml'
2--- config.yaml 2014-11-28 12:54:57 +0000
3+++ config.yaml 2014-12-05 08:00:54 +0000
4@@ -154,3 +154,9 @@
5 order for this charm to function correctly, the privacy extension must be
6 disabled and a non-temporary address must be configured/available on
7 your network interface.
8+ phy-nic-mtu:
9+ type: int
10+ default: 1500
11+ description: |
12+ To improve network performance of VM, sometimes we should keep VM MTU as 1500
13+ and use charm to modify MTU of tunnel nic more than 1500 (e.g. 1546 for GRE)
14
15=== modified file 'hooks/charmhelpers/contrib/hahelpers/cluster.py'
16--- hooks/charmhelpers/contrib/hahelpers/cluster.py 2014-10-06 21:57:43 +0000
17+++ hooks/charmhelpers/contrib/hahelpers/cluster.py 2014-12-05 08:00:54 +0000
18@@ -13,9 +13,10 @@
19
20 import subprocess
21 import os
22-
23 from socket import gethostname as get_unit_hostname
24
25+import six
26+
27 from charmhelpers.core.hookenv import (
28 log,
29 relation_ids,
30@@ -77,7 +78,7 @@
31 "show", resource
32 ]
33 try:
34- status = subprocess.check_output(cmd)
35+ status = subprocess.check_output(cmd).decode('UTF-8')
36 except subprocess.CalledProcessError:
37 return False
38 else:
39@@ -150,34 +151,42 @@
40 return False
41
42
43-def determine_api_port(public_port):
44+def determine_api_port(public_port, singlenode_mode=False):
45 '''
46 Determine correct API server listening port based on
47 existence of HTTPS reverse proxy and/or haproxy.
48
49 public_port: int: standard public port for given service
50
51+ singlenode_mode: boolean: Shuffle ports when only a single unit is present
52+
53 returns: int: the correct listening port for the API service
54 '''
55 i = 0
56- if len(peer_units()) > 0 or is_clustered():
57+ if singlenode_mode:
58+ i += 1
59+ elif len(peer_units()) > 0 or is_clustered():
60 i += 1
61 if https():
62 i += 1
63 return public_port - (i * 10)
64
65
66-def determine_apache_port(public_port):
67+def determine_apache_port(public_port, singlenode_mode=False):
68 '''
69 Description: Determine correct apache listening port based on public IP +
70 state of the cluster.
71
72 public_port: int: standard public port for given service
73
74+ singlenode_mode: boolean: Shuffle ports when only a single unit is present
75+
76 returns: int: the correct listening port for the HAProxy service
77 '''
78 i = 0
79- if len(peer_units()) > 0 or is_clustered():
80+ if singlenode_mode:
81+ i += 1
82+ elif len(peer_units()) > 0 or is_clustered():
83 i += 1
84 return public_port - (i * 10)
85
86@@ -197,7 +206,7 @@
87 for setting in settings:
88 conf[setting] = config_get(setting)
89 missing = []
90- [missing.append(s) for s, v in conf.iteritems() if v is None]
91+ [missing.append(s) for s, v in six.iteritems(conf) if v is None]
92 if missing:
93 log('Insufficient config data to configure hacluster.', level=ERROR)
94 raise HAIncompleteConfig
95
96=== modified file 'hooks/charmhelpers/contrib/network/ip.py'
97--- hooks/charmhelpers/contrib/network/ip.py 2014-10-06 21:57:43 +0000
98+++ hooks/charmhelpers/contrib/network/ip.py 2014-12-05 08:00:54 +0000
99@@ -1,16 +1,20 @@
100 import glob
101 import re
102 import subprocess
103-import sys
104
105 from functools import partial
106
107 from charmhelpers.core.hookenv import unit_get
108 from charmhelpers.fetch import apt_install
109 from charmhelpers.core.hookenv import (
110- WARNING,
111- ERROR,
112- log
113+ config,
114+ log,
115+ INFO
116+)
117+from charmhelpers.core.host import (
118+ list_nics,
119+ get_nic_mtu,
120+ set_nic_mtu
121 )
122
123 try:
124@@ -34,31 +38,28 @@
125 network)
126
127
128+def no_ip_found_error_out(network):
129+ errmsg = ("No IP address found in network: %s" % network)
130+ raise ValueError(errmsg)
131+
132+
133 def get_address_in_network(network, fallback=None, fatal=False):
134- """
135- Get an IPv4 or IPv6 address within the network from the host.
136+ """Get an IPv4 or IPv6 address within the network from the host.
137
138 :param network (str): CIDR presentation format. For example,
139 '192.168.1.0/24'.
140 :param fallback (str): If no address is found, return fallback.
141 :param fatal (boolean): If no address is found, fallback is not
142 set and fatal is True then exit(1).
143-
144 """
145-
146- def not_found_error_out():
147- log("No IP address found in network: %s" % network,
148- level=ERROR)
149- sys.exit(1)
150-
151 if network is None:
152 if fallback is not None:
153 return fallback
154+
155+ if fatal:
156+ no_ip_found_error_out(network)
157 else:
158- if fatal:
159- not_found_error_out()
160- else:
161- return None
162+ return None
163
164 _validate_cidr(network)
165 network = netaddr.IPNetwork(network)
166@@ -70,6 +71,7 @@
167 cidr = netaddr.IPNetwork("%s/%s" % (addr, netmask))
168 if cidr in network:
169 return str(cidr.ip)
170+
171 if network.version == 6 and netifaces.AF_INET6 in addresses:
172 for addr in addresses[netifaces.AF_INET6]:
173 if not addr['addr'].startswith('fe80'):
174@@ -82,20 +84,20 @@
175 return fallback
176
177 if fatal:
178- not_found_error_out()
179+ no_ip_found_error_out(network)
180
181 return None
182
183
184 def is_ipv6(address):
185- '''Determine whether provided address is IPv6 or not'''
186+ """Determine whether provided address is IPv6 or not."""
187 try:
188 address = netaddr.IPAddress(address)
189 except netaddr.AddrFormatError:
190 # probably a hostname - so not an address at all!
191 return False
192- else:
193- return address.version == 6
194+
195+ return address.version == 6
196
197
198 def is_address_in_network(network, address):
199@@ -113,11 +115,13 @@
200 except (netaddr.core.AddrFormatError, ValueError):
201 raise ValueError("Network (%s) is not in CIDR presentation format" %
202 network)
203+
204 try:
205 address = netaddr.IPAddress(address)
206 except (netaddr.core.AddrFormatError, ValueError):
207 raise ValueError("Address (%s) is not in correct presentation format" %
208 address)
209+
210 if address in network:
211 return True
212 else:
213@@ -140,57 +144,63 @@
214 if address.version == 4 and netifaces.AF_INET in addresses:
215 addr = addresses[netifaces.AF_INET][0]['addr']
216 netmask = addresses[netifaces.AF_INET][0]['netmask']
217- cidr = netaddr.IPNetwork("%s/%s" % (addr, netmask))
218+ network = netaddr.IPNetwork("%s/%s" % (addr, netmask))
219+ cidr = network.cidr
220 if address in cidr:
221 if key == 'iface':
222 return iface
223 else:
224 return addresses[netifaces.AF_INET][0][key]
225+
226 if address.version == 6 and netifaces.AF_INET6 in addresses:
227 for addr in addresses[netifaces.AF_INET6]:
228 if not addr['addr'].startswith('fe80'):
229- cidr = netaddr.IPNetwork("%s/%s" % (addr['addr'],
230- addr['netmask']))
231+ network = netaddr.IPNetwork("%s/%s" % (addr['addr'],
232+ addr['netmask']))
233+ cidr = network.cidr
234 if address in cidr:
235 if key == 'iface':
236 return iface
237+ elif key == 'netmask' and cidr:
238+ return str(cidr).split('/')[1]
239 else:
240 return addr[key]
241+
242 return None
243
244
245 get_iface_for_address = partial(_get_for_address, key='iface')
246
247+
248 get_netmask_for_address = partial(_get_for_address, key='netmask')
249
250
251 def format_ipv6_addr(address):
252- """
253- IPv6 needs to be wrapped with [] in url link to parse correctly.
254+ """If address is IPv6, wrap it in '[]' otherwise return None.
255+
256+ This is required by most configuration files when specifying IPv6
257+ addresses.
258 """
259 if is_ipv6(address):
260- address = "[%s]" % address
261- else:
262- log("Not a valid ipv6 address: %s" % address, level=WARNING)
263- address = None
264+ return "[%s]" % address
265
266- return address
267+ return None
268
269
270 def get_iface_addr(iface='eth0', inet_type='AF_INET', inc_aliases=False,
271 fatal=True, exc_list=None):
272- """
273- Return the assigned IP address for a given interface, if any, or [].
274- """
275+ """Return the assigned IP address for a given interface, if any."""
276 # Extract nic if passed /dev/ethX
277 if '/' in iface:
278 iface = iface.split('/')[-1]
279+
280 if not exc_list:
281 exc_list = []
282+
283 try:
284 inet_num = getattr(netifaces, inet_type)
285 except AttributeError:
286- raise Exception('Unknown inet type ' + str(inet_type))
287+ raise Exception("Unknown inet type '%s'" % str(inet_type))
288
289 interfaces = netifaces.interfaces()
290 if inc_aliases:
291@@ -198,15 +208,18 @@
292 for _iface in interfaces:
293 if iface == _iface or _iface.split(':')[0] == iface:
294 ifaces.append(_iface)
295+
296 if fatal and not ifaces:
297 raise Exception("Invalid interface '%s'" % iface)
298+
299 ifaces.sort()
300 else:
301 if iface not in interfaces:
302 if fatal:
303- raise Exception("%s not found " % (iface))
304+ raise Exception("Interface '%s' not found " % (iface))
305 else:
306 return []
307+
308 else:
309 ifaces = [iface]
310
311@@ -217,10 +230,13 @@
312 for entry in net_info[inet_num]:
313 if 'addr' in entry and entry['addr'] not in exc_list:
314 addresses.append(entry['addr'])
315+
316 if fatal and not addresses:
317 raise Exception("Interface '%s' doesn't have any %s addresses." %
318 (iface, inet_type))
319- return addresses
320+
321+ return sorted(addresses)
322+
323
324 get_ipv4_addr = partial(get_iface_addr, inet_type='AF_INET')
325
326@@ -237,6 +253,7 @@
327 raw = re.match(ll_key, _addr)
328 if raw:
329 _addr = raw.group(1)
330+
331 if _addr == addr:
332 log("Address '%s' is configured on iface '%s'" %
333 (addr, iface))
334@@ -247,8 +264,9 @@
335
336
337 def sniff_iface(f):
338- """If no iface provided, inject net iface inferred from unit private
339- address.
340+ """Ensure decorated function is called with a value for iface.
341+
342+ If no iface provided, inject net iface inferred from unit private address.
343 """
344 def iface_sniffer(*args, **kwargs):
345 if not kwargs.get('iface', None):
346@@ -291,7 +309,7 @@
347 if global_addrs:
348 # Make sure any found global addresses are not temporary
349 cmd = ['ip', 'addr', 'show', iface]
350- out = subprocess.check_output(cmd)
351+ out = subprocess.check_output(cmd).decode('UTF-8')
352 if dynamic_only:
353 key = re.compile("inet6 (.+)/[0-9]+ scope global dynamic.*")
354 else:
355@@ -313,33 +331,51 @@
356 return addrs
357
358 if fatal:
359- raise Exception("Interface '%s' doesn't have a scope global "
360+ raise Exception("Interface '%s' does not have a scope global "
361 "non-temporary ipv6 address." % iface)
362
363 return []
364
365
366 def get_bridges(vnic_dir='/sys/devices/virtual/net'):
367- """
368- Return a list of bridges on the system or []
369- """
370- b_rgex = vnic_dir + '/*/bridge'
371- return [x.replace(vnic_dir, '').split('/')[1] for x in glob.glob(b_rgex)]
372+ """Return a list of bridges on the system."""
373+ b_regex = "%s/*/bridge" % vnic_dir
374+ return [x.replace(vnic_dir, '').split('/')[1] for x in glob.glob(b_regex)]
375
376
377 def get_bridge_nics(bridge, vnic_dir='/sys/devices/virtual/net'):
378- """
379- Return a list of nics comprising a given bridge on the system or []
380- """
381- brif_rgex = "%s/%s/brif/*" % (vnic_dir, bridge)
382- return [x.split('/')[-1] for x in glob.glob(brif_rgex)]
383+ """Return a list of nics comprising a given bridge on the system."""
384+ brif_regex = "%s/%s/brif/*" % (vnic_dir, bridge)
385+ return [x.split('/')[-1] for x in glob.glob(brif_regex)]
386
387
388 def is_bridge_member(nic):
389- """
390- Check if a given nic is a member of a bridge
391- """
392+ """Check if a given nic is a member of a bridge."""
393 for bridge in get_bridges():
394 if nic in get_bridge_nics(bridge):
395 return True
396+
397 return False
398+
399+
400+def configure_phy_nic_mtu(mng_ip=None):
401+ """Configure mtu for physical nic."""
402+ phy_nic_mtu = config('phy-nic-mtu')
403+ if phy_nic_mtu >= 1500:
404+ phy_nic = None
405+ if mng_ip is None:
406+ mng_ip = unit_get('private-address')
407+ for nic in list_nics(['eth', 'bond', 'br']):
408+ if mng_ip in get_ipv4_addr(nic, fatal=False):
409+ phy_nic = nic
410+ # need to find the associated phy nic for bridge
411+ if nic.startswith('br'):
412+ for brnic in get_bridge_nics(nic):
413+ if brnic.startswith('eth') or brnic.startswith('bond'):
414+ phy_nic = brnic
415+ break
416+ break
417+ if phy_nic is not None and phy_nic_mtu != get_nic_mtu(phy_nic):
418+ set_nic_mtu(phy_nic, str(phy_nic_mtu), persistence=True)
419+ log('set mtu={} for phy_nic={}'
420+ .format(phy_nic_mtu, phy_nic), level=INFO)
421
422=== modified file 'hooks/charmhelpers/contrib/openstack/amulet/deployment.py'
423--- hooks/charmhelpers/contrib/openstack/amulet/deployment.py 2014-10-06 21:57:43 +0000
424+++ hooks/charmhelpers/contrib/openstack/amulet/deployment.py 2014-12-05 08:00:54 +0000
425@@ -1,3 +1,4 @@
426+import six
427 from charmhelpers.contrib.amulet.deployment import (
428 AmuletDeployment
429 )
430@@ -69,7 +70,7 @@
431
432 def _configure_services(self, configs):
433 """Configure all of the services."""
434- for service, config in configs.iteritems():
435+ for service, config in six.iteritems(configs):
436 self.d.configure(service, config)
437
438 def _get_openstack_release(self):
439
440=== modified file 'hooks/charmhelpers/contrib/openstack/amulet/utils.py'
441--- hooks/charmhelpers/contrib/openstack/amulet/utils.py 2014-10-06 21:57:43 +0000
442+++ hooks/charmhelpers/contrib/openstack/amulet/utils.py 2014-12-05 08:00:54 +0000
443@@ -7,6 +7,8 @@
444 import keystoneclient.v2_0 as keystone_client
445 import novaclient.v1_1.client as nova_client
446
447+import six
448+
449 from charmhelpers.contrib.amulet.utils import (
450 AmuletUtils
451 )
452@@ -60,7 +62,7 @@
453 expected service catalog endpoints.
454 """
455 self.log.debug('actual: {}'.format(repr(actual)))
456- for k, v in expected.iteritems():
457+ for k, v in six.iteritems(expected):
458 if k in actual:
459 ret = self._validate_dict_data(expected[k][0], actual[k][0])
460 if ret:
461
462=== modified file 'hooks/charmhelpers/contrib/openstack/context.py'
463--- hooks/charmhelpers/contrib/openstack/context.py 2014-10-06 21:57:43 +0000
464+++ hooks/charmhelpers/contrib/openstack/context.py 2014-12-05 08:00:54 +0000
465@@ -1,20 +1,18 @@
466 import json
467 import os
468 import time
469-
470 from base64 import b64decode
471+from subprocess import check_call
472
473-from subprocess import (
474- check_call
475-)
476+import six
477
478 from charmhelpers.fetch import (
479 apt_install,
480 filter_installed_packages,
481 )
482-
483 from charmhelpers.core.hookenv import (
484 config,
485+ is_relation_made,
486 local_unit,
487 log,
488 relation_get,
489@@ -23,41 +21,40 @@
490 relation_set,
491 unit_get,
492 unit_private_ip,
493+ DEBUG,
494+ INFO,
495+ WARNING,
496 ERROR,
497- INFO
498 )
499-
500 from charmhelpers.core.host import (
501 mkdir,
502- write_file
503+ write_file,
504 )
505-
506 from charmhelpers.contrib.hahelpers.cluster import (
507 determine_apache_port,
508 determine_api_port,
509 https,
510- is_clustered
511+ is_clustered,
512 )
513-
514 from charmhelpers.contrib.hahelpers.apache import (
515 get_cert,
516 get_ca_cert,
517 install_ca_cert,
518 )
519-
520 from charmhelpers.contrib.openstack.neutron import (
521 neutron_plugin_attribute,
522 )
523-
524 from charmhelpers.contrib.network.ip import (
525 get_address_in_network,
526 get_ipv6_addr,
527 get_netmask_for_address,
528 format_ipv6_addr,
529- is_address_in_network
530+ is_address_in_network,
531 )
532+from charmhelpers.contrib.openstack.utils import get_host_ip
533
534 CA_CERT_PATH = '/usr/local/share/ca-certificates/keystone_juju_ca_cert.crt'
535+ADDRESS_TYPES = ['admin', 'internal', 'public']
536
537
538 class OSContextError(Exception):
539@@ -65,7 +62,7 @@
540
541
542 def ensure_packages(packages):
543- '''Install but do not upgrade required plugin packages'''
544+ """Install but do not upgrade required plugin packages."""
545 required = filter_installed_packages(packages)
546 if required:
547 apt_install(required, fatal=True)
548@@ -73,20 +70,27 @@
549
550 def context_complete(ctxt):
551 _missing = []
552- for k, v in ctxt.iteritems():
553+ for k, v in six.iteritems(ctxt):
554 if v is None or v == '':
555 _missing.append(k)
556+
557 if _missing:
558- log('Missing required data: %s' % ' '.join(_missing), level='INFO')
559+ log('Missing required data: %s' % ' '.join(_missing), level=INFO)
560 return False
561+
562 return True
563
564
565 def config_flags_parser(config_flags):
566+ """Parses config flags string into dict.
567+
568+ The provided config_flags string may be a list of comma-separated values
569+ which themselves may be comma-separated list of values.
570+ """
571 if config_flags.find('==') >= 0:
572- log("config_flags is not in expected format (key=value)",
573- level=ERROR)
574+ log("config_flags is not in expected format (key=value)", level=ERROR)
575 raise OSContextError
576+
577 # strip the following from each value.
578 post_strippers = ' ,'
579 # we strip any leading/trailing '=' or ' ' from the string then
580@@ -94,7 +98,7 @@
581 split = config_flags.strip(' =').split('=')
582 limit = len(split)
583 flags = {}
584- for i in xrange(0, limit - 1):
585+ for i in range(0, limit - 1):
586 current = split[i]
587 next = split[i + 1]
588 vindex = next.rfind(',')
589@@ -109,17 +113,18 @@
590 # if this not the first entry, expect an embedded key.
591 index = current.rfind(',')
592 if index < 0:
593- log("invalid config value(s) at index %s" % (i),
594- level=ERROR)
595+ log("Invalid config value(s) at index %s" % (i), level=ERROR)
596 raise OSContextError
597 key = current[index + 1:]
598
599 # Add to collection.
600 flags[key.strip(post_strippers)] = value.rstrip(post_strippers)
601+
602 return flags
603
604
605 class OSContextGenerator(object):
606+ """Base class for all context generators."""
607 interfaces = []
608
609 def __call__(self):
610@@ -131,11 +136,11 @@
611
612 def __init__(self,
613 database=None, user=None, relation_prefix=None, ssl_dir=None):
614- '''
615- Allows inspecting relation for settings prefixed with relation_prefix.
616- This is useful for parsing access for multiple databases returned via
617- the shared-db interface (eg, nova_password, quantum_password)
618- '''
619+ """Allows inspecting relation for settings prefixed with
620+ relation_prefix. This is useful for parsing access for multiple
621+ databases returned via the shared-db interface (eg, nova_password,
622+ quantum_password)
623+ """
624 self.relation_prefix = relation_prefix
625 self.database = database
626 self.user = user
627@@ -145,9 +150,8 @@
628 self.database = self.database or config('database')
629 self.user = self.user or config('database-user')
630 if None in [self.database, self.user]:
631- log('Could not generate shared_db context. '
632- 'Missing required charm config options. '
633- '(database name and user)')
634+ log("Could not generate shared_db context. Missing required charm "
635+ "config options. (database name and user)", level=ERROR)
636 raise OSContextError
637
638 ctxt = {}
639@@ -200,23 +204,24 @@
640 def __call__(self):
641 self.database = self.database or config('database')
642 if self.database is None:
643- log('Could not generate postgresql_db context. '
644- 'Missing required charm config options. '
645- '(database name)')
646+ log('Could not generate postgresql_db context. Missing required '
647+ 'charm config options. (database name)', level=ERROR)
648 raise OSContextError
649+
650 ctxt = {}
651-
652 for rid in relation_ids(self.interfaces[0]):
653 for unit in related_units(rid):
654- ctxt = {
655- 'database_host': relation_get('host', rid=rid, unit=unit),
656- 'database': self.database,
657- 'database_user': relation_get('user', rid=rid, unit=unit),
658- 'database_password': relation_get('password', rid=rid, unit=unit),
659- 'database_type': 'postgresql',
660- }
661+ rel_host = relation_get('host', rid=rid, unit=unit)
662+ rel_user = relation_get('user', rid=rid, unit=unit)
663+ rel_passwd = relation_get('password', rid=rid, unit=unit)
664+ ctxt = {'database_host': rel_host,
665+ 'database': self.database,
666+ 'database_user': rel_user,
667+ 'database_password': rel_passwd,
668+ 'database_type': 'postgresql'}
669 if context_complete(ctxt):
670 return ctxt
671+
672 return {}
673
674
675@@ -225,23 +230,29 @@
676 ca_path = os.path.join(ssl_dir, 'db-client.ca')
677 with open(ca_path, 'w') as fh:
678 fh.write(b64decode(rdata['ssl_ca']))
679+
680 ctxt['database_ssl_ca'] = ca_path
681 elif 'ssl_ca' in rdata:
682- log("Charm not setup for ssl support but ssl ca found")
683+ log("Charm not setup for ssl support but ssl ca found", level=INFO)
684 return ctxt
685+
686 if 'ssl_cert' in rdata:
687 cert_path = os.path.join(
688 ssl_dir, 'db-client.cert')
689 if not os.path.exists(cert_path):
690- log("Waiting 1m for ssl client cert validity")
691+ log("Waiting 1m for ssl client cert validity", level=INFO)
692 time.sleep(60)
693+
694 with open(cert_path, 'w') as fh:
695 fh.write(b64decode(rdata['ssl_cert']))
696+
697 ctxt['database_ssl_cert'] = cert_path
698 key_path = os.path.join(ssl_dir, 'db-client.key')
699 with open(key_path, 'w') as fh:
700 fh.write(b64decode(rdata['ssl_key']))
701+
702 ctxt['database_ssl_key'] = key_path
703+
704 return ctxt
705
706
707@@ -249,9 +260,8 @@
708 interfaces = ['identity-service']
709
710 def __call__(self):
711- log('Generating template context for identity-service')
712+ log('Generating template context for identity-service', level=DEBUG)
713 ctxt = {}
714-
715 for rid in relation_ids('identity-service'):
716 for unit in related_units(rid):
717 rdata = relation_get(rid=rid, unit=unit)
718@@ -259,26 +269,24 @@
719 serv_host = format_ipv6_addr(serv_host) or serv_host
720 auth_host = rdata.get('auth_host')
721 auth_host = format_ipv6_addr(auth_host) or auth_host
722-
723- ctxt = {
724- 'service_port': rdata.get('service_port'),
725- 'service_host': serv_host,
726- 'auth_host': auth_host,
727- 'auth_port': rdata.get('auth_port'),
728- 'admin_tenant_name': rdata.get('service_tenant'),
729- 'admin_user': rdata.get('service_username'),
730- 'admin_password': rdata.get('service_password'),
731- 'service_protocol':
732- rdata.get('service_protocol') or 'http',
733- 'auth_protocol':
734- rdata.get('auth_protocol') or 'http',
735- }
736+ svc_protocol = rdata.get('service_protocol') or 'http'
737+ auth_protocol = rdata.get('auth_protocol') or 'http'
738+ ctxt = {'service_port': rdata.get('service_port'),
739+ 'service_host': serv_host,
740+ 'auth_host': auth_host,
741+ 'auth_port': rdata.get('auth_port'),
742+ 'admin_tenant_name': rdata.get('service_tenant'),
743+ 'admin_user': rdata.get('service_username'),
744+ 'admin_password': rdata.get('service_password'),
745+ 'service_protocol': svc_protocol,
746+ 'auth_protocol': auth_protocol}
747 if context_complete(ctxt):
748 # NOTE(jamespage) this is required for >= icehouse
749 # so a missing value just indicates keystone needs
750 # upgrading
751 ctxt['admin_tenant_id'] = rdata.get('service_tenant_id')
752 return ctxt
753+
754 return {}
755
756
757@@ -291,21 +299,23 @@
758 self.interfaces = [rel_name]
759
760 def __call__(self):
761- log('Generating template context for amqp')
762+ log('Generating template context for amqp', level=DEBUG)
763 conf = config()
764- user_setting = 'rabbit-user'
765- vhost_setting = 'rabbit-vhost'
766 if self.relation_prefix:
767- user_setting = self.relation_prefix + '-rabbit-user'
768- vhost_setting = self.relation_prefix + '-rabbit-vhost'
769+ user_setting = '%s-rabbit-user' % (self.relation_prefix)
770+ vhost_setting = '%s-rabbit-vhost' % (self.relation_prefix)
771+ else:
772+ user_setting = 'rabbit-user'
773+ vhost_setting = 'rabbit-vhost'
774
775 try:
776 username = conf[user_setting]
777 vhost = conf[vhost_setting]
778 except KeyError as e:
779- log('Could not generate shared_db context. '
780- 'Missing required charm config options: %s.' % e)
781+ log('Could not generate shared_db context. Missing required charm '
782+ 'config options: %s.' % e, level=ERROR)
783 raise OSContextError
784+
785 ctxt = {}
786 for rid in relation_ids(self.rel_name):
787 ha_vip_only = False
788@@ -319,6 +329,7 @@
789 host = relation_get('private-address', rid=rid, unit=unit)
790 host = format_ipv6_addr(host) or host
791 ctxt['rabbitmq_host'] = host
792+
793 ctxt.update({
794 'rabbitmq_user': username,
795 'rabbitmq_password': relation_get('password', rid=rid,
796@@ -329,6 +340,7 @@
797 ssl_port = relation_get('ssl_port', rid=rid, unit=unit)
798 if ssl_port:
799 ctxt['rabbit_ssl_port'] = ssl_port
800+
801 ssl_ca = relation_get('ssl_ca', rid=rid, unit=unit)
802 if ssl_ca:
803 ctxt['rabbit_ssl_ca'] = ssl_ca
804@@ -342,41 +354,45 @@
805 if context_complete(ctxt):
806 if 'rabbit_ssl_ca' in ctxt:
807 if not self.ssl_dir:
808- log(("Charm not setup for ssl support "
809- "but ssl ca found"))
810+ log("Charm not setup for ssl support but ssl ca "
811+ "found", level=INFO)
812 break
813+
814 ca_path = os.path.join(
815 self.ssl_dir, 'rabbit-client-ca.pem')
816 with open(ca_path, 'w') as fh:
817 fh.write(b64decode(ctxt['rabbit_ssl_ca']))
818 ctxt['rabbit_ssl_ca'] = ca_path
819+
820 # Sufficient information found = break out!
821 break
822+
823 # Used for active/active rabbitmq >= grizzly
824- if ('clustered' not in ctxt or ha_vip_only) \
825- and len(related_units(rid)) > 1:
826+ if (('clustered' not in ctxt or ha_vip_only) and
827+ len(related_units(rid)) > 1):
828 rabbitmq_hosts = []
829 for unit in related_units(rid):
830 host = relation_get('private-address', rid=rid, unit=unit)
831 host = format_ipv6_addr(host) or host
832 rabbitmq_hosts.append(host)
833- ctxt['rabbitmq_hosts'] = ','.join(rabbitmq_hosts)
834+
835+ ctxt['rabbitmq_hosts'] = ','.join(sorted(rabbitmq_hosts))
836+
837 if not context_complete(ctxt):
838 return {}
839- else:
840- return ctxt
841+
842+ return ctxt
843
844
845 class CephContext(OSContextGenerator):
846+ """Generates context for /etc/ceph/ceph.conf templates."""
847 interfaces = ['ceph']
848
849 def __call__(self):
850- '''This generates context for /etc/ceph/ceph.conf templates'''
851 if not relation_ids('ceph'):
852 return {}
853
854- log('Generating template context for ceph')
855-
856+ log('Generating template context for ceph', level=DEBUG)
857 mon_hosts = []
858 auth = None
859 key = None
860@@ -385,18 +401,18 @@
861 for unit in related_units(rid):
862 auth = relation_get('auth', rid=rid, unit=unit)
863 key = relation_get('key', rid=rid, unit=unit)
864- ceph_addr = \
865- relation_get('ceph-public-address', rid=rid, unit=unit) or \
866- relation_get('private-address', rid=rid, unit=unit)
867+ ceph_pub_addr = relation_get('ceph-public-address', rid=rid,
868+ unit=unit)
869+ unit_priv_addr = relation_get('private-address', rid=rid,
870+ unit=unit)
871+ ceph_addr = ceph_pub_addr or unit_priv_addr
872 ceph_addr = format_ipv6_addr(ceph_addr) or ceph_addr
873 mon_hosts.append(ceph_addr)
874
875- ctxt = {
876- 'mon_hosts': ' '.join(mon_hosts),
877- 'auth': auth,
878- 'key': key,
879- 'use_syslog': use_syslog
880- }
881+ ctxt = {'mon_hosts': ' '.join(sorted(mon_hosts)),
882+ 'auth': auth,
883+ 'key': key,
884+ 'use_syslog': use_syslog}
885
886 if not os.path.isdir('/etc/ceph'):
887 os.mkdir('/etc/ceph')
888@@ -405,79 +421,68 @@
889 return {}
890
891 ensure_packages(['ceph-common'])
892-
893 return ctxt
894
895
896-ADDRESS_TYPES = ['admin', 'internal', 'public']
897-
898-
899 class HAProxyContext(OSContextGenerator):
900+ """Provides half a context for the haproxy template, which describes
901+ all peers to be included in the cluster. Each charm needs to include
902+ its own context generator that describes the port mapping.
903+ """
904 interfaces = ['cluster']
905
906+ def __init__(self, singlenode_mode=False):
907+ self.singlenode_mode = singlenode_mode
908+
909 def __call__(self):
910- '''
911- Builds half a context for the haproxy template, which describes
912- all peers to be included in the cluster. Each charm needs to include
913- its own context generator that describes the port mapping.
914- '''
915- if not relation_ids('cluster'):
916+ if not relation_ids('cluster') and not self.singlenode_mode:
917 return {}
918
919+ if config('prefer-ipv6'):
920+ addr = get_ipv6_addr(exc_list=[config('vip')])[0]
921+ else:
922+ addr = get_host_ip(unit_get('private-address'))
923+
924 l_unit = local_unit().replace('/', '-')
925-
926- if config('prefer-ipv6'):
927- addr = get_ipv6_addr(exc_list=[config('vip')])[0]
928- else:
929- addr = unit_get('private-address')
930-
931 cluster_hosts = {}
932
933 # NOTE(jamespage): build out map of configured network endpoints
934 # and associated backends
935 for addr_type in ADDRESS_TYPES:
936- laddr = get_address_in_network(
937- config('os-{}-network'.format(addr_type)))
938+ cfg_opt = 'os-{}-network'.format(addr_type)
939+ laddr = get_address_in_network(config(cfg_opt))
940 if laddr:
941- cluster_hosts[laddr] = {}
942- cluster_hosts[laddr]['network'] = "{}/{}".format(
943- laddr,
944- get_netmask_for_address(laddr)
945- )
946- cluster_hosts[laddr]['backends'] = {}
947- cluster_hosts[laddr]['backends'][l_unit] = laddr
948+ netmask = get_netmask_for_address(laddr)
949+ cluster_hosts[laddr] = {'network': "{}/{}".format(laddr,
950+ netmask),
951+ 'backends': {l_unit: laddr}}
952 for rid in relation_ids('cluster'):
953 for unit in related_units(rid):
954- _unit = unit.replace('/', '-')
955 _laddr = relation_get('{}-address'.format(addr_type),
956 rid=rid, unit=unit)
957 if _laddr:
958+ _unit = unit.replace('/', '-')
959 cluster_hosts[laddr]['backends'][_unit] = _laddr
960
961 # NOTE(jamespage) no split configurations found, just use
962 # private addresses
963 if not cluster_hosts:
964- cluster_hosts[addr] = {}
965- cluster_hosts[addr]['network'] = "{}/{}".format(
966- addr,
967- get_netmask_for_address(addr)
968- )
969- cluster_hosts[addr]['backends'] = {}
970- cluster_hosts[addr]['backends'][l_unit] = addr
971+ netmask = get_netmask_for_address(addr)
972+ cluster_hosts[addr] = {'network': "{}/{}".format(addr, netmask),
973+ 'backends': {l_unit: addr}}
974 for rid in relation_ids('cluster'):
975 for unit in related_units(rid):
976- _unit = unit.replace('/', '-')
977 _laddr = relation_get('private-address',
978 rid=rid, unit=unit)
979 if _laddr:
980+ _unit = unit.replace('/', '-')
981 cluster_hosts[addr]['backends'][_unit] = _laddr
982
983- ctxt = {
984- 'frontends': cluster_hosts,
985- }
986+ ctxt = {'frontends': cluster_hosts}
987
988 if config('haproxy-server-timeout'):
989 ctxt['haproxy_server_timeout'] = config('haproxy-server-timeout')
990+
991 if config('haproxy-client-timeout'):
992 ctxt['haproxy_client_timeout'] = config('haproxy-client-timeout')
993
994@@ -491,13 +496,18 @@
995 ctxt['stat_port'] = ':8888'
996
997 for frontend in cluster_hosts:
998- if len(cluster_hosts[frontend]['backends']) > 1:
999+ if (len(cluster_hosts[frontend]['backends']) > 1 or
1000+ self.singlenode_mode):
1001 # Enable haproxy when we have enough peers.
1002- log('Ensuring haproxy enabled in /etc/default/haproxy.')
1003+ log('Ensuring haproxy enabled in /etc/default/haproxy.',
1004+ level=DEBUG)
1005 with open('/etc/default/haproxy', 'w') as out:
1006 out.write('ENABLED=1\n')
1007+
1008 return ctxt
1009- log('HAProxy context is incomplete, this unit has no peers.')
1010+
1011+ log('HAProxy context is incomplete, this unit has no peers.',
1012+ level=INFO)
1013 return {}
1014
1015
1016@@ -505,29 +515,28 @@
1017 interfaces = ['image-service']
1018
1019 def __call__(self):
1020- '''
1021- Obtains the glance API server from the image-service relation. Useful
1022- in nova and cinder (currently).
1023- '''
1024- log('Generating template context for image-service.')
1025+ """Obtains the glance API server from the image-service relation.
1026+ Useful in nova and cinder (currently).
1027+ """
1028+ log('Generating template context for image-service.', level=DEBUG)
1029 rids = relation_ids('image-service')
1030 if not rids:
1031 return {}
1032+
1033 for rid in rids:
1034 for unit in related_units(rid):
1035 api_server = relation_get('glance-api-server',
1036 rid=rid, unit=unit)
1037 if api_server:
1038 return {'glance_api_servers': api_server}
1039- log('ImageService context is incomplete. '
1040- 'Missing required relation data.')
1041+
1042+ log("ImageService context is incomplete. Missing required relation "
1043+ "data.", level=INFO)
1044 return {}
1045
1046
1047 class ApacheSSLContext(OSContextGenerator):
1048-
1049- """
1050- Generates a context for an apache vhost configuration that configures
1051+ """Generates a context for an apache vhost configuration that configures
1052 HTTPS reverse proxying for one or many endpoints. Generated context
1053 looks something like::
1054
1055@@ -561,6 +570,7 @@
1056 else:
1057 cert_filename = 'cert'
1058 key_filename = 'key'
1059+
1060 write_file(path=os.path.join(ssl_dir, cert_filename),
1061 content=b64decode(cert))
1062 write_file(path=os.path.join(ssl_dir, key_filename),
1063@@ -572,7 +582,8 @@
1064 install_ca_cert(b64decode(ca_cert))
1065
1066 def canonical_names(self):
1067- '''Figure out which canonical names clients will access this service'''
1068+ """Figure out which canonical names clients will access this service.
1069+ """
1070 cns = []
1071 for r_id in relation_ids('identity-service'):
1072 for unit in related_units(r_id):
1073@@ -580,55 +591,80 @@
1074 for k in rdata:
1075 if k.startswith('ssl_key_'):
1076 cns.append(k.lstrip('ssl_key_'))
1077- return list(set(cns))
1078+
1079+ return sorted(list(set(cns)))
1080+
1081+ def get_network_addresses(self):
1082+ """For each network configured, return corresponding address and vip
1083+ (if available).
1084+
1085+ Returns a list of tuples of the form:
1086+
1087+ [(address_in_net_a, vip_in_net_a),
1088+ (address_in_net_b, vip_in_net_b),
1089+ ...]
1090+
1091+ or, if no vip(s) available:
1092+
1093+ [(address_in_net_a, address_in_net_a),
1094+ (address_in_net_b, address_in_net_b),
1095+ ...]
1096+ """
1097+ addresses = []
1098+ if config('vip'):
1099+ vips = config('vip').split()
1100+ else:
1101+ vips = []
1102+
1103+ for net_type in ['os-internal-network', 'os-admin-network',
1104+ 'os-public-network']:
1105+ addr = get_address_in_network(config(net_type),
1106+ unit_get('private-address'))
1107+ if len(vips) > 1 and is_clustered():
1108+ if not config(net_type):
1109+ log("Multiple networks configured but net_type "
1110+ "is None (%s)." % net_type, level=WARNING)
1111+ continue
1112+
1113+ for vip in vips:
1114+ if is_address_in_network(config(net_type), vip):
1115+ addresses.append((addr, vip))
1116+ break
1117+
1118+ elif is_clustered() and config('vip'):
1119+ addresses.append((addr, config('vip')))
1120+ else:
1121+ addresses.append((addr, addr))
1122+
1123+ return sorted(addresses)
1124
1125 def __call__(self):
1126- if isinstance(self.external_ports, basestring):
1127+ if isinstance(self.external_ports, six.string_types):
1128 self.external_ports = [self.external_ports]
1129- if (not self.external_ports or not https()):
1130+
1131+ if not self.external_ports or not https():
1132 return {}
1133
1134 self.configure_ca()
1135 self.enable_modules()
1136
1137- ctxt = {
1138- 'namespace': self.service_namespace,
1139- 'endpoints': [],
1140- 'ext_ports': []
1141- }
1142+ ctxt = {'namespace': self.service_namespace,
1143+ 'endpoints': [],
1144+ 'ext_ports': []}
1145
1146 for cn in self.canonical_names():
1147 self.configure_cert(cn)
1148
1149- addresses = []
1150- vips = []
1151- if config('vip'):
1152- vips = config('vip').split()
1153-
1154- for network_type in ['os-internal-network',
1155- 'os-admin-network',
1156- 'os-public-network']:
1157- address = get_address_in_network(config(network_type),
1158- unit_get('private-address'))
1159- if len(vips) > 0 and is_clustered():
1160- for vip in vips:
1161- if is_address_in_network(config(network_type),
1162- vip):
1163- addresses.append((address, vip))
1164- break
1165- elif is_clustered():
1166- addresses.append((address, config('vip')))
1167- else:
1168- addresses.append((address, address))
1169-
1170- for address, endpoint in set(addresses):
1171+ addresses = self.get_network_addresses()
1172+ for address, endpoint in sorted(set(addresses)):
1173 for api_port in self.external_ports:
1174 ext_port = determine_apache_port(api_port)
1175 int_port = determine_api_port(api_port)
1176 portmap = (address, endpoint, int(ext_port), int(int_port))
1177 ctxt['endpoints'].append(portmap)
1178 ctxt['ext_ports'].append(int(ext_port))
1179- ctxt['ext_ports'] = list(set(ctxt['ext_ports']))
1180+
1181+ ctxt['ext_ports'] = sorted(list(set(ctxt['ext_ports'])))
1182 return ctxt
1183
1184
1185@@ -645,21 +681,23 @@
1186
1187 @property
1188 def packages(self):
1189- return neutron_plugin_attribute(
1190- self.plugin, 'packages', self.network_manager)
1191+ return neutron_plugin_attribute(self.plugin, 'packages',
1192+ self.network_manager)
1193
1194 @property
1195 def neutron_security_groups(self):
1196 return None
1197
1198 def _ensure_packages(self):
1199- [ensure_packages(pkgs) for pkgs in self.packages]
1200+ for pkgs in self.packages:
1201+ ensure_packages(pkgs)
1202
1203 def _save_flag_file(self):
1204 if self.network_manager == 'quantum':
1205 _file = '/etc/nova/quantum_plugin.conf'
1206 else:
1207 _file = '/etc/nova/neutron_plugin.conf'
1208+
1209 with open(_file, 'wb') as out:
1210 out.write(self.plugin + '\n')
1211
1212@@ -668,13 +706,11 @@
1213 self.network_manager)
1214 config = neutron_plugin_attribute(self.plugin, 'config',
1215 self.network_manager)
1216- ovs_ctxt = {
1217- 'core_plugin': driver,
1218- 'neutron_plugin': 'ovs',
1219- 'neutron_security_groups': self.neutron_security_groups,
1220- 'local_ip': unit_private_ip(),
1221- 'config': config
1222- }
1223+ ovs_ctxt = {'core_plugin': driver,
1224+ 'neutron_plugin': 'ovs',
1225+ 'neutron_security_groups': self.neutron_security_groups,
1226+ 'local_ip': unit_private_ip(),
1227+ 'config': config}
1228
1229 return ovs_ctxt
1230
1231@@ -683,13 +719,11 @@
1232 self.network_manager)
1233 config = neutron_plugin_attribute(self.plugin, 'config',
1234 self.network_manager)
1235- nvp_ctxt = {
1236- 'core_plugin': driver,
1237- 'neutron_plugin': 'nvp',
1238- 'neutron_security_groups': self.neutron_security_groups,
1239- 'local_ip': unit_private_ip(),
1240- 'config': config
1241- }
1242+ nvp_ctxt = {'core_plugin': driver,
1243+ 'neutron_plugin': 'nvp',
1244+ 'neutron_security_groups': self.neutron_security_groups,
1245+ 'local_ip': unit_private_ip(),
1246+ 'config': config}
1247
1248 return nvp_ctxt
1249
1250@@ -698,35 +732,50 @@
1251 self.network_manager)
1252 n1kv_config = neutron_plugin_attribute(self.plugin, 'config',
1253 self.network_manager)
1254- n1kv_ctxt = {
1255- 'core_plugin': driver,
1256- 'neutron_plugin': 'n1kv',
1257- 'neutron_security_groups': self.neutron_security_groups,
1258- 'local_ip': unit_private_ip(),
1259- 'config': n1kv_config,
1260- 'vsm_ip': config('n1kv-vsm-ip'),
1261- 'vsm_username': config('n1kv-vsm-username'),
1262- 'vsm_password': config('n1kv-vsm-password'),
1263- 'restrict_policy_profiles': config(
1264- 'n1kv_restrict_policy_profiles'),
1265- }
1266+ n1kv_user_config_flags = config('n1kv-config-flags')
1267+ restrict_policy_profiles = config('n1kv-restrict-policy-profiles')
1268+ n1kv_ctxt = {'core_plugin': driver,
1269+ 'neutron_plugin': 'n1kv',
1270+ 'neutron_security_groups': self.neutron_security_groups,
1271+ 'local_ip': unit_private_ip(),
1272+ 'config': n1kv_config,
1273+ 'vsm_ip': config('n1kv-vsm-ip'),
1274+ 'vsm_username': config('n1kv-vsm-username'),
1275+ 'vsm_password': config('n1kv-vsm-password'),
1276+ 'restrict_policy_profiles': restrict_policy_profiles}
1277+
1278+ if n1kv_user_config_flags:
1279+ flags = config_flags_parser(n1kv_user_config_flags)
1280+ n1kv_ctxt['user_config_flags'] = flags
1281
1282 return n1kv_ctxt
1283
1284+ def calico_ctxt(self):
1285+ driver = neutron_plugin_attribute(self.plugin, 'driver',
1286+ self.network_manager)
1287+ config = neutron_plugin_attribute(self.plugin, 'config',
1288+ self.network_manager)
1289+ calico_ctxt = {'core_plugin': driver,
1290+ 'neutron_plugin': 'Calico',
1291+ 'neutron_security_groups': self.neutron_security_groups,
1292+ 'local_ip': unit_private_ip(),
1293+ 'config': config}
1294+
1295+ return calico_ctxt
1296+
1297 def neutron_ctxt(self):
1298 if https():
1299 proto = 'https'
1300 else:
1301 proto = 'http'
1302+
1303 if is_clustered():
1304 host = config('vip')
1305 else:
1306 host = unit_get('private-address')
1307- url = '%s://%s:%s' % (proto, host, '9696')
1308- ctxt = {
1309- 'network_manager': self.network_manager,
1310- 'neutron_url': url,
1311- }
1312+
1313+ ctxt = {'network_manager': self.network_manager,
1314+ 'neutron_url': '%s://%s:%s' % (proto, host, '9696')}
1315 return ctxt
1316
1317 def __call__(self):
1318@@ -746,6 +795,8 @@
1319 ctxt.update(self.nvp_ctxt())
1320 elif self.plugin == 'n1kv':
1321 ctxt.update(self.n1kv_ctxt())
1322+ elif self.plugin == 'Calico':
1323+ ctxt.update(self.calico_ctxt())
1324
1325 alchemy_flags = config('neutron-alchemy-flags')
1326 if alchemy_flags:
1327@@ -757,23 +808,40 @@
1328
1329
1330 class OSConfigFlagContext(OSContextGenerator):
1331-
1332- """
1333- Responsible for adding user-defined config-flags in charm config to a
1334- template context.
1335+ """Provides support for user-defined config flags.
1336+
1337+ Users can define a comma-separated list of key=value pairs
1338+ in the charm configuration and apply them at any point in
1339+ any file by using a template flag.
1340+
1341+ Sometimes users might want config flags inserted within a
1342+ specific section so this class allows users to specify the
1343+ template flag name, allowing for multiple template flags
1344+ (sections) within the same context.
1345
1346 NOTE: the value of config-flags may be a comma-separated list of
1347 key=value pairs and some Openstack config files support
1348 comma-separated lists as values.
1349 """
1350
1351+ def __init__(self, charm_flag='config-flags',
1352+ template_flag='user_config_flags'):
1353+ """
1354+ :param charm_flag: config flags in charm configuration.
1355+ :param template_flag: insert point for user-defined flags in template
1356+ file.
1357+ """
1358+ super(OSConfigFlagContext, self).__init__()
1359+ self._charm_flag = charm_flag
1360+ self._template_flag = template_flag
1361+
1362 def __call__(self):
1363- config_flags = config('config-flags')
1364+ config_flags = config(self._charm_flag)
1365 if not config_flags:
1366 return {}
1367
1368- flags = config_flags_parser(config_flags)
1369- return {'user_config_flags': flags}
1370+ return {self._template_flag:
1371+ config_flags_parser(config_flags)}
1372
1373
1374 class SubordinateConfigContext(OSContextGenerator):
1375@@ -817,7 +885,6 @@
1376 },
1377 }
1378 }
1379-
1380 """
1381
1382 def __init__(self, service, config_file, interface):
1383@@ -847,26 +914,28 @@
1384
1385 if self.service not in sub_config:
1386 log('Found subordinate_config on %s but it contained'
1387- 'nothing for %s service' % (rid, self.service))
1388+ 'nothing for %s service' % (rid, self.service),
1389+ level=INFO)
1390 continue
1391
1392 sub_config = sub_config[self.service]
1393 if self.config_file not in sub_config:
1394 log('Found subordinate_config on %s but it contained'
1395- 'nothing for %s' % (rid, self.config_file))
1396+ 'nothing for %s' % (rid, self.config_file),
1397+ level=INFO)
1398 continue
1399
1400 sub_config = sub_config[self.config_file]
1401- for k, v in sub_config.iteritems():
1402+ for k, v in six.iteritems(sub_config):
1403 if k == 'sections':
1404- for section, config_dict in v.iteritems():
1405- log("adding section '%s'" % (section))
1406+ for section, config_dict in six.iteritems(v):
1407+ log("adding section '%s'" % (section),
1408+ level=DEBUG)
1409 ctxt[k][section] = config_dict
1410 else:
1411 ctxt[k] = v
1412
1413- log("%d section(s) found" % (len(ctxt['sections'])), level=INFO)
1414-
1415+ log("%d section(s) found" % (len(ctxt['sections'])), level=DEBUG)
1416 return ctxt
1417
1418
1419@@ -878,15 +947,14 @@
1420 False if config('debug') is None else config('debug')
1421 ctxt['verbose'] = \
1422 False if config('verbose') is None else config('verbose')
1423+
1424 return ctxt
1425
1426
1427 class SyslogContext(OSContextGenerator):
1428
1429 def __call__(self):
1430- ctxt = {
1431- 'use_syslog': config('use-syslog')
1432- }
1433+ ctxt = {'use_syslog': config('use-syslog')}
1434 return ctxt
1435
1436
1437@@ -894,10 +962,56 @@
1438
1439 def __call__(self):
1440 if config('prefer-ipv6'):
1441- return {
1442- 'bind_host': '::'
1443- }
1444+ return {'bind_host': '::'}
1445 else:
1446- return {
1447- 'bind_host': '0.0.0.0'
1448- }
1449+ return {'bind_host': '0.0.0.0'}
1450+
1451+
1452+class WorkerConfigContext(OSContextGenerator):
1453+
1454+ @property
1455+ def num_cpus(self):
1456+ try:
1457+ from psutil import NUM_CPUS
1458+ except ImportError:
1459+ apt_install('python-psutil', fatal=True)
1460+ from psutil import NUM_CPUS
1461+
1462+ return NUM_CPUS
1463+
1464+ def __call__(self):
1465+ multiplier = config('worker-multiplier') or 0
1466+ ctxt = {"workers": self.num_cpus * multiplier}
1467+ return ctxt
1468+
1469+
1470+class ZeroMQContext(OSContextGenerator):
1471+ interfaces = ['zeromq-configuration']
1472+
1473+ def __call__(self):
1474+ ctxt = {}
1475+ if is_relation_made('zeromq-configuration', 'host'):
1476+ for rid in relation_ids('zeromq-configuration'):
1477+ for unit in related_units(rid):
1478+ ctxt['zmq_nonce'] = relation_get('nonce', unit, rid)
1479+ ctxt['zmq_host'] = relation_get('host', unit, rid)
1480+
1481+ return ctxt
1482+
1483+
1484+class NotificationDriverContext(OSContextGenerator):
1485+
1486+ def __init__(self, zmq_relation='zeromq-configuration',
1487+ amqp_relation='amqp'):
1488+ """
1489+ :param zmq_relation: Name of Zeromq relation to check
1490+ """
1491+ self.zmq_relation = zmq_relation
1492+ self.amqp_relation = amqp_relation
1493+
1494+ def __call__(self):
1495+ ctxt = {'notifications': 'False'}
1496+ if is_relation_made(self.amqp_relation):
1497+ ctxt['notifications'] = "True"
1498+
1499+ return ctxt
1500
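Editor's note: the WorkerConfigContext added above computes its worker count as num_cpus * multiplier, with an unset multiplier falling back to 0. A minimal sketch of the same calculation, assuming a hypothetical helper name and using the stdlib multiprocessing module in place of psutil's NUM_CPUS:

```python
import multiprocessing

def worker_count(multiplier=None):
    """Mirror of WorkerConfigContext: workers = num_cpus * multiplier.

    A multiplier of None (config option unset) falls back to 0, which
    OpenStack services typically treat as 'use the service default'.
    """
    num_cpus = multiprocessing.cpu_count()
    return num_cpus * (multiplier or 0)
```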
1501=== modified file 'hooks/charmhelpers/contrib/openstack/ip.py'
1502--- hooks/charmhelpers/contrib/openstack/ip.py 2014-09-22 20:22:04 +0000
1503+++ hooks/charmhelpers/contrib/openstack/ip.py 2014-12-05 08:00:54 +0000
1504@@ -2,21 +2,19 @@
1505 config,
1506 unit_get,
1507 )
1508-
1509 from charmhelpers.contrib.network.ip import (
1510 get_address_in_network,
1511 is_address_in_network,
1512 is_ipv6,
1513 get_ipv6_addr,
1514 )
1515-
1516 from charmhelpers.contrib.hahelpers.cluster import is_clustered
1517
1518 PUBLIC = 'public'
1519 INTERNAL = 'int'
1520 ADMIN = 'admin'
1521
1522-_address_map = {
1523+ADDRESS_MAP = {
1524 PUBLIC: {
1525 'config': 'os-public-network',
1526 'fallback': 'public-address'
1527@@ -33,16 +31,14 @@
1528
1529
1530 def canonical_url(configs, endpoint_type=PUBLIC):
1531- '''
1532- Returns the correct HTTP URL to this host given the state of HTTPS
1533+ """Returns the correct HTTP URL to this host given the state of HTTPS
1534 configuration, hacluster and charm configuration.
1535
1536- :configs OSTemplateRenderer: A config tempating object to inspect for
1537- a complete https context.
1538- :endpoint_type str: The endpoint type to resolve.
1539-
1540- :returns str: Base URL for services on the current service unit.
1541- '''
1542+ :param configs: OSTemplateRenderer config templating object to inspect
1543+ for a complete https context.
1544+ :param endpoint_type: str endpoint type to resolve.
1545+ :returns: str base URL for services on the current service unit.
1546+ """
1547 scheme = 'http'
1548 if 'https' in configs.complete_contexts():
1549 scheme = 'https'
1550@@ -53,27 +49,45 @@
1551
1552
1553 def resolve_address(endpoint_type=PUBLIC):
1554+ """Return unit address depending on net config.
1555+
1556+ If unit is clustered with vip(s) and has net splits defined, return vip on
1557+ correct network. If clustered with no nets defined, return primary vip.
1558+
1559+ If not clustered, return unit address ensuring address is on configured net
1560+ split if one is configured.
1561+
1562+ :param endpoint_type: Network endpoint type
1563+ """
1564 resolved_address = None
1565- if is_clustered():
1566- if config(_address_map[endpoint_type]['config']) is None:
1567- # Assume vip is simple and pass back directly
1568- resolved_address = config('vip')
1569+ vips = config('vip')
1570+ if vips:
1571+ vips = vips.split()
1572+
1573+ net_type = ADDRESS_MAP[endpoint_type]['config']
1574+ net_addr = config(net_type)
1575+ net_fallback = ADDRESS_MAP[endpoint_type]['fallback']
1576+ clustered = is_clustered()
1577+ if clustered:
1578+ if not net_addr:
1579+ # If no net-splits defined, we expect a single vip
1580+ resolved_address = vips[0]
1581 else:
1582- for vip in config('vip').split():
1583- if is_address_in_network(
1584- config(_address_map[endpoint_type]['config']),
1585- vip):
1586+ for vip in vips:
1587+ if is_address_in_network(net_addr, vip):
1588 resolved_address = vip
1589+ break
1590 else:
1591 if config('prefer-ipv6'):
1592- fallback_addr = get_ipv6_addr(exc_list=[config('vip')])[0]
1593+ fallback_addr = get_ipv6_addr(exc_list=vips)[0]
1594 else:
1595- fallback_addr = unit_get(_address_map[endpoint_type]['fallback'])
1596- resolved_address = get_address_in_network(
1597- config(_address_map[endpoint_type]['config']), fallback_addr)
1598+ fallback_addr = unit_get(net_fallback)
1599+
1600+ resolved_address = get_address_in_network(net_addr, fallback_addr)
1601
1602 if resolved_address is None:
1603- raise ValueError('Unable to resolve a suitable IP address'
1604- ' based on charm state and configuration')
1605- else:
1606- return resolved_address
1607+ raise ValueError("Unable to resolve a suitable IP address based on "
1608+ "charm state and configuration. (net_type=%s, "
1609+ "clustered=%s)" % (net_type, clustered))
1610+
1611+ return resolved_address
1612
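Editor's note: the reworked resolve_address() above picks, for a clustered unit with network splits, the first vip that lies inside the configured network. A sketch of that branch under Python 3, using the stdlib ipaddress module in place of the charm-helpers is_address_in_network helper (function name is hypothetical):

```python
import ipaddress

def pick_vip(vips, network):
    """Return the first vip inside the configured network, else None.

    Sketch of the net-split branch of resolve_address(); the real code
    breaks out of the loop as soon as a matching vip is found.
    """
    net = ipaddress.ip_network(network)
    for vip in vips:
        if ipaddress.ip_address(vip) in net:
            return vip
    return None
```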
1613=== modified file 'hooks/charmhelpers/contrib/openstack/neutron.py'
1614--- hooks/charmhelpers/contrib/openstack/neutron.py 2014-07-28 14:38:51 +0000
1615+++ hooks/charmhelpers/contrib/openstack/neutron.py 2014-12-05 08:00:54 +0000
1616@@ -14,7 +14,7 @@
1617 def headers_package():
1618 """Ensures correct linux-headers for running kernel are installed,
1619 for building DKMS package"""
1620- kver = check_output(['uname', '-r']).strip()
1621+ kver = check_output(['uname', '-r']).decode('UTF-8').strip()
1622 return 'linux-headers-%s' % kver
1623
1624 QUANTUM_CONF_DIR = '/etc/quantum'
1625@@ -22,7 +22,7 @@
1626
1627 def kernel_version():
1628 """ Retrieve the current major kernel version as a tuple e.g. (3, 13) """
1629- kver = check_output(['uname', '-r']).strip()
1630+ kver = check_output(['uname', '-r']).decode('UTF-8').strip()
1631 kver = kver.split('.')
1632 return (int(kver[0]), int(kver[1]))
1633
1634@@ -138,10 +138,25 @@
1635 relation_prefix='neutron',
1636 ssl_dir=NEUTRON_CONF_DIR)],
1637 'services': [],
1638- 'packages': [['neutron-plugin-cisco']],
1639+ 'packages': [[headers_package()] + determine_dkms_package(),
1640+ ['neutron-plugin-cisco']],
1641 'server_packages': ['neutron-server',
1642 'neutron-plugin-cisco'],
1643 'server_services': ['neutron-server']
1644+ },
1645+ 'Calico': {
1646+ 'config': '/etc/neutron/plugins/ml2/ml2_conf.ini',
1647+ 'driver': 'neutron.plugins.ml2.plugin.Ml2Plugin',
1648+ 'contexts': [
1649+ context.SharedDBContext(user=config('neutron-database-user'),
1650+ database=config('neutron-database'),
1651+ relation_prefix='neutron',
1652+ ssl_dir=NEUTRON_CONF_DIR)],
1653+ 'services': ['calico-compute', 'bird', 'neutron-dhcp-agent'],
1654+ 'packages': [[headers_package()] + determine_dkms_package(),
1655+ ['calico-compute', 'bird', 'neutron-dhcp-agent']],
1656+ 'server_packages': ['neutron-server', 'calico-control'],
1657+ 'server_services': ['neutron-server']
1658 }
1659 }
1660 if release >= 'icehouse':
1661@@ -162,7 +177,8 @@
1662 elif manager == 'neutron':
1663 plugins = neutron_plugins()
1664 else:
1665- log('Error: Network manager does not support plugins.')
1666+ log("Network manager '%s' does not support plugins." % (manager),
1667+ level=ERROR)
1668 raise Exception
1669
1670 try:
1671
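Editor's note: the `.decode('UTF-8')` additions in headers_package() and kernel_version() are Python 3 fixes, since check_output() returns bytes there. A sketch of the decode-then-parse step as a pure function (hypothetical name, taking the raw uname output as an argument):

```python
def parse_kernel_version(uname_output):
    """Decode raw `uname -r` output (bytes under Python 3) and return
    the major kernel version as a tuple, e.g. (3, 13)."""
    kver = uname_output.decode('UTF-8').strip()
    parts = kver.split('.')
    return (int(parts[0]), int(parts[1]))
```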
1672=== modified file 'hooks/charmhelpers/contrib/openstack/templates/haproxy.cfg'
1673--- hooks/charmhelpers/contrib/openstack/templates/haproxy.cfg 2014-10-06 21:57:43 +0000
1674+++ hooks/charmhelpers/contrib/openstack/templates/haproxy.cfg 2014-12-05 08:00:54 +0000
1675@@ -35,7 +35,7 @@
1676 stats auth admin:password
1677
1678 {% if frontends -%}
1679-{% for service, ports in service_ports.iteritems() -%}
1680+{% for service, ports in service_ports.items() -%}
1681 frontend tcp-in_{{ service }}
1682 bind *:{{ ports[0] }}
1683 bind :::{{ ports[0] }}
1684@@ -46,7 +46,7 @@
1685 {% for frontend in frontends -%}
1686 backend {{ service }}_{{ frontend }}
1687 balance leastconn
1688- {% for unit, address in frontends[frontend]['backends'].iteritems() -%}
1689+ {% for unit, address in frontends[frontend]['backends'].items() -%}
1690 server {{ unit }} {{ address }}:{{ ports[1] }} check
1691 {% endfor %}
1692 {% endfor -%}
1693
1694=== modified file 'hooks/charmhelpers/contrib/openstack/templating.py'
1695--- hooks/charmhelpers/contrib/openstack/templating.py 2014-07-28 14:38:51 +0000
1696+++ hooks/charmhelpers/contrib/openstack/templating.py 2014-12-05 08:00:54 +0000
1697@@ -1,13 +1,13 @@
1698 import os
1699
1700+import six
1701+
1702 from charmhelpers.fetch import apt_install
1703-
1704 from charmhelpers.core.hookenv import (
1705 log,
1706 ERROR,
1707 INFO
1708 )
1709-
1710 from charmhelpers.contrib.openstack.utils import OPENSTACK_CODENAMES
1711
1712 try:
1713@@ -43,7 +43,7 @@
1714 order by OpenStack release.
1715 """
1716 tmpl_dirs = [(rel, os.path.join(templates_dir, rel))
1717- for rel in OPENSTACK_CODENAMES.itervalues()]
1718+ for rel in six.itervalues(OPENSTACK_CODENAMES)]
1719
1720 if not os.path.isdir(templates_dir):
1721 log('Templates directory not found @ %s.' % templates_dir,
1722@@ -258,7 +258,7 @@
1723 """
1724 Write out all registered config files.
1725 """
1726- [self.write(k) for k in self.templates.iterkeys()]
1727+ [self.write(k) for k in six.iterkeys(self.templates)]
1728
1729 def set_release(self, openstack_release):
1730 """
1731@@ -275,5 +275,5 @@
1732 '''
1733 interfaces = []
1734 [interfaces.extend(i.complete_contexts())
1735- for i in self.templates.itervalues()]
1736+ for i in six.itervalues(self.templates)]
1737 return interfaces
1738
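Editor's note: the templating.py changes swap dict.iteritems()/itervalues()/iterkeys() for the six wrappers, since those methods do not exist under Python 3. A sketch of what six.iteritems amounts to, as a version-aware shim (names hypothetical; the charm code uses the real six library):

```python
import sys

def iteritems(d):
    """Minimal stand-in for six.iteritems: lazy iteration on Python 2,
    an items() view iterator on Python 3."""
    if sys.version_info[0] >= 3:
        return iter(d.items())
    return d.iteritems()
```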
1739=== modified file 'hooks/charmhelpers/contrib/openstack/utils.py'
1740--- hooks/charmhelpers/contrib/openstack/utils.py 2014-10-06 21:57:43 +0000
1741+++ hooks/charmhelpers/contrib/openstack/utils.py 2014-12-05 08:00:54 +0000
1742@@ -2,6 +2,7 @@
1743
1744 # Common python helper functions used for OpenStack charms.
1745 from collections import OrderedDict
1746+from functools import wraps
1747
1748 import subprocess
1749 import json
1750@@ -9,11 +10,13 @@
1751 import socket
1752 import sys
1753
1754+import six
1755+import yaml
1756+
1757 from charmhelpers.core.hookenv import (
1758 config,
1759 log as juju_log,
1760 charm_dir,
1761- ERROR,
1762 INFO,
1763 relation_ids,
1764 relation_set
1765@@ -30,7 +33,8 @@
1766 )
1767
1768 from charmhelpers.core.host import lsb_release, mounts, umount
1769-from charmhelpers.fetch import apt_install, apt_cache
1770+from charmhelpers.fetch import apt_install, apt_cache, install_remote
1771+from charmhelpers.contrib.python.packages import pip_install
1772 from charmhelpers.contrib.storage.linux.utils import is_block_device, zap_disk
1773 from charmhelpers.contrib.storage.linux.loopback import ensure_loopback_device
1774
1775@@ -112,7 +116,7 @@
1776
1777 # Best guess match based on deb string provided
1778 if src.startswith('deb') or src.startswith('ppa'):
1779- for k, v in OPENSTACK_CODENAMES.iteritems():
1780+ for k, v in six.iteritems(OPENSTACK_CODENAMES):
1781 if v in src:
1782 return v
1783
1784@@ -133,7 +137,7 @@
1785
1786 def get_os_version_codename(codename):
1787 '''Determine OpenStack version number from codename.'''
1788- for k, v in OPENSTACK_CODENAMES.iteritems():
1789+ for k, v in six.iteritems(OPENSTACK_CODENAMES):
1790 if v == codename:
1791 return k
1792 e = 'Could not derive OpenStack version for '\
1793@@ -193,7 +197,7 @@
1794 else:
1795 vers_map = OPENSTACK_CODENAMES
1796
1797- for version, cname in vers_map.iteritems():
1798+ for version, cname in six.iteritems(vers_map):
1799 if cname == codename:
1800 return version
1801 # e = "Could not determine OpenStack version for package: %s" % pkg
1802@@ -317,7 +321,7 @@
1803 rc_script.write(
1804 "#!/bin/bash\n")
1805 [rc_script.write('export %s=%s\n' % (u, p))
1806- for u, p in env_vars.iteritems() if u != "script_path"]
1807+ for u, p in six.iteritems(env_vars) if u != "script_path"]
1808
1809
1810 def openstack_upgrade_available(package):
1811@@ -350,8 +354,8 @@
1812 '''
1813 _none = ['None', 'none', None]
1814 if (block_device in _none):
1815- error_out('prepare_storage(): Missing required input: '
1816- 'block_device=%s.' % block_device, level=ERROR)
1817+ error_out('prepare_storage(): Missing required input: block_device=%s.'
1818+ % block_device)
1819
1820 if block_device.startswith('/dev/'):
1821 bdev = block_device
1822@@ -367,8 +371,7 @@
1823 bdev = '/dev/%s' % block_device
1824
1825 if not is_block_device(bdev):
1826- error_out('Failed to locate valid block device at %s' % bdev,
1827- level=ERROR)
1828+ error_out('Failed to locate valid block device at %s' % bdev)
1829
1830 return bdev
1831
1832@@ -417,7 +420,7 @@
1833
1834 if isinstance(address, dns.name.Name):
1835 rtype = 'PTR'
1836- elif isinstance(address, basestring):
1837+ elif isinstance(address, six.string_types):
1838 rtype = 'A'
1839 else:
1840 return None
1841@@ -468,6 +471,14 @@
1842 return result.split('.')[0]
1843
1844
1845+def get_matchmaker_map(mm_file='/etc/oslo/matchmaker_ring.json'):
1846+ mm_map = {}
1847+ if os.path.isfile(mm_file):
1848+ with open(mm_file, 'r') as f:
1849+ mm_map = json.load(f)
1850+ return mm_map
1851+
1852+
1853 def sync_db_with_multi_ipv6_addresses(database, database_user,
1854 relation_prefix=None):
1855 hosts = get_ipv6_addr(dynamic_only=False)
1856@@ -477,10 +488,132 @@
1857 'hostname': json.dumps(hosts)}
1858
1859 if relation_prefix:
1860- keys = kwargs.keys()
1861- for key in keys:
1862+ for key in list(kwargs.keys()):
1863 kwargs["%s_%s" % (relation_prefix, key)] = kwargs[key]
1864 del kwargs[key]
1865
1866 for rid in relation_ids('shared-db'):
1867 relation_set(relation_id=rid, **kwargs)
1868+
1869+
1870+def os_requires_version(ostack_release, pkg):
1871+ """
1872+ Decorator for hook to specify minimum supported release
1873+ """
1874+ def wrap(f):
1875+ @wraps(f)
1876+ def wrapped_f(*args):
1877+ if os_release(pkg) < ostack_release:
1878+ raise Exception("This hook is not supported on releases"
1879+ " before %s" % ostack_release)
1880+ f(*args)
1881+ return wrapped_f
1882+ return wrap
1883+
1884+
1885+def git_install_requested():
1886+ """Returns true if openstack-origin-git is specified."""
1887+ return config('openstack-origin-git') != "None"
1888+
1889+
1890+requirements_dir = None
1891+
1892+
1893+def git_clone_and_install(file_name, core_project):
1894+ """Clone/install all OpenStack repos specified in yaml config file."""
1895+ global requirements_dir
1896+
1897+ if file_name == "None":
1898+ return
1899+
1900+ yaml_file = os.path.join(charm_dir(), file_name)
1901+
1902+ # clone/install the requirements project first
1903+ installed = _git_clone_and_install_subset(yaml_file,
1904+ whitelist=['requirements'])
1905+ if 'requirements' not in installed:
1906+ error_out('requirements git repository must be specified')
1907+
1908+ # clone/install all other projects except requirements and the core project
1909+ blacklist = ['requirements', core_project]
1910+ _git_clone_and_install_subset(yaml_file, blacklist=blacklist,
1911+ update_requirements=True)
1912+
1913+ # clone/install the core project
1914+ whitelist = [core_project]
1915+ installed = _git_clone_and_install_subset(yaml_file, whitelist=whitelist,
1916+ update_requirements=True)
1917+ if core_project not in installed:
1918+ error_out('{} git repository must be specified'.format(core_project))
1919+
1920+
1921+def _git_clone_and_install_subset(yaml_file, whitelist=[], blacklist=[],
1922+ update_requirements=False):
1923+ """Clone/install subset of OpenStack repos specified in yaml config file."""
1924+ global requirements_dir
1925+ installed = []
1926+
1927+ with open(yaml_file, 'r') as fd:
1928+ projects = yaml.load(fd)
1929+ for proj, val in projects.items():
1930+ # The project subset is chosen based on the following 3 rules:
1931+ # 1) If project is in blacklist, we don't clone/install it, period.
1932+ # 2) If whitelist is empty, we clone/install everything else.
1933+ # 3) If whitelist is not empty, we clone/install everything in the
1934+ # whitelist.
1935+ if proj in blacklist:
1936+ continue
1937+ if whitelist and proj not in whitelist:
1938+ continue
1939+ repo = val['repository']
1940+ branch = val['branch']
1941+ repo_dir = _git_clone_and_install_single(repo, branch,
1942+ update_requirements)
1943+ if proj == 'requirements':
1944+ requirements_dir = repo_dir
1945+ installed.append(proj)
1946+ return installed
1947+
1948+
1949+def _git_clone_and_install_single(repo, branch, update_requirements=False):
1950+ """Clone and install a single git repository."""
1951+ dest_parent_dir = "/mnt/openstack-git/"
1952+ dest_dir = os.path.join(dest_parent_dir, os.path.basename(repo))
1953+
1954+ if not os.path.exists(dest_parent_dir):
1955+ juju_log('Host dir not mounted at {}. '
1956+ 'Creating directory there instead.'.format(dest_parent_dir))
1957+ os.mkdir(dest_parent_dir)
1958+
1959+ if not os.path.exists(dest_dir):
1960+ juju_log('Cloning git repo: {}, branch: {}'.format(repo, branch))
1961+ repo_dir = install_remote(repo, dest=dest_parent_dir, branch=branch)
1962+ else:
1963+ repo_dir = dest_dir
1964+
1965+ if update_requirements:
1966+ if not requirements_dir:
1967+ error_out('requirements repo must be cloned before '
1968+ 'updating from global requirements.')
1969+ _git_update_requirements(repo_dir, requirements_dir)
1970+
1971+ juju_log('Installing git repo from dir: {}'.format(repo_dir))
1972+ pip_install(repo_dir)
1973+
1974+ return repo_dir
1975+
1976+
1977+def _git_update_requirements(package_dir, reqs_dir):
1978+ """Update from global requirements.
1979+
1980+ Update an OpenStack git directory's requirements.txt and
1981+ test-requirements.txt from global-requirements.txt."""
1982+ orig_dir = os.getcwd()
1983+ os.chdir(reqs_dir)
1984+ cmd = "python update.py {}".format(package_dir)
1985+ try:
1986+ subprocess.check_call(cmd.split(' '))
1987+ except subprocess.CalledProcessError:
1988+ package = os.path.basename(package_dir)
1989+ error_out("Error updating {} from global-requirements.txt".format(package))
1990+ os.chdir(orig_dir)
1991
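[Reviewer note] The `os_requires_version` decorator added to `utils.py` above gates a hook on a minimum OpenStack release. A minimal standalone sketch of the same pattern, with a stubbed `os_release` (the real helper derives the codename from an installed package, which is not reproduced here):

```python
from functools import wraps

def os_release(pkg):
    # Stub for charmhelpers' os_release(pkg); hard-coded for illustration.
    return 'icehouse'

def os_requires_version(ostack_release, pkg):
    """Decorator raising if the current release predates ostack_release."""
    def wrap(f):
        @wraps(f)
        def wrapped_f(*args):
            # OpenStack codenames are deliberately alphabetical
            # (essex < folsom < ... < icehouse < juno), so plain string
            # comparison orders releases correctly for this era.
            if os_release(pkg) < ostack_release:
                raise Exception("This hook is not supported on releases"
                                " before %s" % ostack_release)
            return f(*args)
        return wrapped_f
    return wrap

@os_requires_version('icehouse', 'nova-common')
def config_changed():
    return 'ran'

@os_requires_version('juno', 'nova-common')
def juno_only_hook():
    return 'ran'
```

With the stub reporting icehouse, `config_changed()` runs while `juno_only_hook()` raises, which is the guard behaviour the decorator is meant to provide.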
1992=== modified file 'hooks/charmhelpers/contrib/storage/linux/ceph.py'
1993--- hooks/charmhelpers/contrib/storage/linux/ceph.py 2014-07-28 14:38:51 +0000
1994+++ hooks/charmhelpers/contrib/storage/linux/ceph.py 2014-12-05 08:00:54 +0000
1995@@ -16,19 +16,18 @@
1996 from subprocess import (
1997 check_call,
1998 check_output,
1999- CalledProcessError
2000+ CalledProcessError,
2001 )
2002-
2003 from charmhelpers.core.hookenv import (
2004 relation_get,
2005 relation_ids,
2006 related_units,
2007 log,
2008+ DEBUG,
2009 INFO,
2010 WARNING,
2011- ERROR
2012+ ERROR,
2013 )
2014-
2015 from charmhelpers.core.host import (
2016 mount,
2017 mounts,
2018@@ -37,7 +36,6 @@
2019 service_running,
2020 umount,
2021 )
2022-
2023 from charmhelpers.fetch import (
2024 apt_install,
2025 )
2026@@ -56,99 +54,85 @@
2027
2028
2029 def install():
2030- ''' Basic Ceph client installation '''
2031+ """Basic Ceph client installation."""
2032 ceph_dir = "/etc/ceph"
2033 if not os.path.exists(ceph_dir):
2034 os.mkdir(ceph_dir)
2035+
2036 apt_install('ceph-common', fatal=True)
2037
2038
2039 def rbd_exists(service, pool, rbd_img):
2040- ''' Check to see if a RADOS block device exists '''
2041+ """Check to see if a RADOS block device exists."""
2042 try:
2043- out = check_output(['rbd', 'list', '--id', service,
2044- '--pool', pool])
2045+ out = check_output(['rbd', 'list', '--id',
2046+ service, '--pool', pool]).decode('UTF-8')
2047 except CalledProcessError:
2048 return False
2049- else:
2050- return rbd_img in out
2051+
2052+ return rbd_img in out
2053
2054
2055 def create_rbd_image(service, pool, image, sizemb):
2056- ''' Create a new RADOS block device '''
2057- cmd = [
2058- 'rbd',
2059- 'create',
2060- image,
2061- '--size',
2062- str(sizemb),
2063- '--id',
2064- service,
2065- '--pool',
2066- pool
2067- ]
2068+ """Create a new RADOS block device."""
2069+ cmd = ['rbd', 'create', image, '--size', str(sizemb), '--id', service,
2070+ '--pool', pool]
2071 check_call(cmd)
2072
2073
2074 def pool_exists(service, name):
2075- ''' Check to see if a RADOS pool already exists '''
2076+ """Check to see if a RADOS pool already exists."""
2077 try:
2078- out = check_output(['rados', '--id', service, 'lspools'])
2079+ out = check_output(['rados', '--id', service,
2080+ 'lspools']).decode('UTF-8')
2081 except CalledProcessError:
2082 return False
2083- else:
2084- return name in out
2085+
2086+ return name in out
2087
2088
2089 def get_osds(service):
2090- '''
2091- Return a list of all Ceph Object Storage Daemons
2092- currently in the cluster
2093- '''
2094+ """Return a list of all Ceph Object Storage Daemons currently in the
2095+ cluster.
2096+ """
2097 version = ceph_version()
2098 if version and version >= '0.56':
2099 return json.loads(check_output(['ceph', '--id', service,
2100- 'osd', 'ls', '--format=json']))
2101- else:
2102- return None
2103-
2104-
2105-def create_pool(service, name, replicas=2):
2106- ''' Create a new RADOS pool '''
2107+ 'osd', 'ls',
2108+ '--format=json']).decode('UTF-8'))
2109+
2110+ return None
2111+
2112+
2113+def create_pool(service, name, replicas=3):
2114+ """Create a new RADOS pool."""
2115 if pool_exists(service, name):
2116 log("Ceph pool {} already exists, skipping creation".format(name),
2117 level=WARNING)
2118 return
2119+
2120 # Calculate the number of placement groups based
2121 # on upstream recommended best practices.
2122 osds = get_osds(service)
2123 if osds:
2124- pgnum = (len(osds) * 100 / replicas)
2125+ pgnum = (len(osds) * 100 // replicas)
2126 else:
2127 # NOTE(james-page): Default to 200 for older ceph versions
2128 # which don't support OSD query from cli
2129 pgnum = 200
2130- cmd = [
2131- 'ceph', '--id', service,
2132- 'osd', 'pool', 'create',
2133- name, str(pgnum)
2134- ]
2135+
2136+ cmd = ['ceph', '--id', service, 'osd', 'pool', 'create', name, str(pgnum)]
2137 check_call(cmd)
2138- cmd = [
2139- 'ceph', '--id', service,
2140- 'osd', 'pool', 'set', name,
2141- 'size', str(replicas)
2142- ]
2143+
2144+ cmd = ['ceph', '--id', service, 'osd', 'pool', 'set', name, 'size',
2145+ str(replicas)]
2146 check_call(cmd)
2147
2148
2149 def delete_pool(service, name):
2150- ''' Delete a RADOS pool from ceph '''
2151- cmd = [
2152- 'ceph', '--id', service,
2153- 'osd', 'pool', 'delete',
2154- name, '--yes-i-really-really-mean-it'
2155- ]
2156+ """Delete a RADOS pool from ceph."""
2157+ cmd = ['ceph', '--id', service, 'osd', 'pool', 'delete', name,
2158+ '--yes-i-really-really-mean-it']
2159 check_call(cmd)
2160
2161
2162@@ -161,44 +145,43 @@
2163
2164
2165 def create_keyring(service, key):
2166- ''' Create a new Ceph keyring containing key'''
2167+ """Create a new Ceph keyring containing key."""
2168 keyring = _keyring_path(service)
2169 if os.path.exists(keyring):
2170- log('ceph: Keyring exists at %s.' % keyring, level=WARNING)
2171+ log('Ceph keyring exists at %s.' % keyring, level=WARNING)
2172 return
2173- cmd = [
2174- 'ceph-authtool',
2175- keyring,
2176- '--create-keyring',
2177- '--name=client.{}'.format(service),
2178- '--add-key={}'.format(key)
2179- ]
2180+
2181+ cmd = ['ceph-authtool', keyring, '--create-keyring',
2182+ '--name=client.{}'.format(service), '--add-key={}'.format(key)]
2183 check_call(cmd)
2184- log('ceph: Created new ring at %s.' % keyring, level=INFO)
2185+ log('Created new ceph keyring at %s.' % keyring, level=DEBUG)
2186
2187
2188 def create_key_file(service, key):
2189- ''' Create a file containing key '''
2190+ """Create a file containing key."""
2191 keyfile = _keyfile_path(service)
2192 if os.path.exists(keyfile):
2193- log('ceph: Keyfile exists at %s.' % keyfile, level=WARNING)
2194+ log('Keyfile exists at %s.' % keyfile, level=WARNING)
2195 return
2196+
2197 with open(keyfile, 'w') as fd:
2198 fd.write(key)
2199- log('ceph: Created new keyfile at %s.' % keyfile, level=INFO)
2200+
2201+ log('Created new keyfile at %s.' % keyfile, level=INFO)
2202
2203
2204 def get_ceph_nodes():
2205- ''' Query named relation 'ceph' to detemine current nodes '''
2206+ """Query named relation 'ceph' to determine current nodes."""
2207 hosts = []
2208 for r_id in relation_ids('ceph'):
2209 for unit in related_units(r_id):
2210 hosts.append(relation_get('private-address', unit=unit, rid=r_id))
2211+
2212 return hosts
2213
2214
2215 def configure(service, key, auth, use_syslog):
2216- ''' Perform basic configuration of Ceph '''
2217+ """Perform basic configuration of Ceph."""
2218 create_keyring(service, key)
2219 create_key_file(service, key)
2220 hosts = get_ceph_nodes()
2221@@ -211,17 +194,17 @@
2222
2223
2224 def image_mapped(name):
2225- ''' Determine whether a RADOS block device is mapped locally '''
2226+ """Determine whether a RADOS block device is mapped locally."""
2227 try:
2228- out = check_output(['rbd', 'showmapped'])
2229+ out = check_output(['rbd', 'showmapped']).decode('UTF-8')
2230 except CalledProcessError:
2231 return False
2232- else:
2233- return name in out
2234+
2235+ return name in out
2236
2237
2238 def map_block_storage(service, pool, image):
2239- ''' Map a RADOS block device for local use '''
2240+ """Map a RADOS block device for local use."""
2241 cmd = [
2242 'rbd',
2243 'map',
2244@@ -235,31 +218,32 @@
2245
2246
2247 def filesystem_mounted(fs):
2248- ''' Determine whether a filesytems is already mounted '''
2249+ """Determine whether a filesystem is already mounted."""
2250 return fs in [f for f, m in mounts()]
2251
2252
2253 def make_filesystem(blk_device, fstype='ext4', timeout=10):
2254- ''' Make a new filesystem on the specified block device '''
2255+ """Make a new filesystem on the specified block device."""
2256 count = 0
2257 e_noent = os.errno.ENOENT
2258 while not os.path.exists(blk_device):
2259 if count >= timeout:
2260- log('ceph: gave up waiting on block device %s' % blk_device,
2261+ log('Gave up waiting on block device %s' % blk_device,
2262 level=ERROR)
2263 raise IOError(e_noent, os.strerror(e_noent), blk_device)
2264- log('ceph: waiting for block device %s to appear' % blk_device,
2265- level=INFO)
2266+
2267+ log('Waiting for block device %s to appear' % blk_device,
2268+ level=DEBUG)
2269 count += 1
2270 time.sleep(1)
2271 else:
2272- log('ceph: Formatting block device %s as filesystem %s.' %
2273+ log('Formatting block device %s as filesystem %s.' %
2274 (blk_device, fstype), level=INFO)
2275 check_call(['mkfs', '-t', fstype, blk_device])
2276
2277
2278 def place_data_on_block_device(blk_device, data_src_dst):
2279- ''' Migrate data in data_src_dst to blk_device and then remount '''
2280+ """Migrate data in data_src_dst to blk_device and then remount."""
2281 # mount block device into /mnt
2282 mount(blk_device, '/mnt')
2283 # copy data to /mnt
2284@@ -279,8 +263,8 @@
2285
2286 # TODO: re-use
2287 def modprobe(module):
2288- ''' Load a kernel module and configure for auto-load on reboot '''
2289- log('ceph: Loading kernel module', level=INFO)
2290+ """Load a kernel module and configure for auto-load on reboot."""
2291+ log('Loading kernel module', level=INFO)
2292 cmd = ['modprobe', module]
2293 check_call(cmd)
2294 with open('/etc/modules', 'r+') as modules:
2295@@ -289,7 +273,7 @@
2296
2297
2298 def copy_files(src, dst, symlinks=False, ignore=None):
2299- ''' Copy files from src to dst '''
2300+ """Copy files from src to dst."""
2301 for item in os.listdir(src):
2302 s = os.path.join(src, item)
2303 d = os.path.join(dst, item)
2304@@ -300,9 +284,9 @@
2305
2306
2307 def ensure_ceph_storage(service, pool, rbd_img, sizemb, mount_point,
2308- blk_device, fstype, system_services=[]):
2309- """
2310- NOTE: This function must only be called from a single service unit for
2311+ blk_device, fstype, system_services=[],
2312+ replicas=3):
2313+ """NOTE: This function must only be called from a single service unit for
2314 the same rbd_img otherwise data loss will occur.
2315
2316 Ensures given pool and RBD image exists, is mapped to a block device,
2317@@ -316,15 +300,16 @@
2318 """
2319 # Ensure pool, RBD image, RBD mappings are in place.
2320 if not pool_exists(service, pool):
2321- log('ceph: Creating new pool {}.'.format(pool))
2322- create_pool(service, pool)
2323+ log('Creating new pool {}.'.format(pool), level=INFO)
2324+ create_pool(service, pool, replicas=replicas)
2325
2326 if not rbd_exists(service, pool, rbd_img):
2327- log('ceph: Creating RBD image ({}).'.format(rbd_img))
2328+ log('Creating RBD image ({}).'.format(rbd_img), level=INFO)
2329 create_rbd_image(service, pool, rbd_img, sizemb)
2330
2331 if not image_mapped(rbd_img):
2332- log('ceph: Mapping RBD Image {} as a Block Device.'.format(rbd_img))
2333+ log('Mapping RBD Image {} as a Block Device.'.format(rbd_img),
2334+ level=INFO)
2335 map_block_storage(service, pool, rbd_img)
2336
2337 # make file system
2338@@ -339,45 +324,47 @@
2339
2340 for svc in system_services:
2341 if service_running(svc):
2342- log('ceph: Stopping services {} prior to migrating data.'
2343- .format(svc))
2344+ log('Stopping services {} prior to migrating data.'
2345+ .format(svc), level=DEBUG)
2346 service_stop(svc)
2347
2348 place_data_on_block_device(blk_device, mount_point)
2349
2350 for svc in system_services:
2351- log('ceph: Starting service {} after migrating data.'
2352- .format(svc))
2353+ log('Starting service {} after migrating data.'
2354+ .format(svc), level=DEBUG)
2355 service_start(svc)
2356
2357
2358 def ensure_ceph_keyring(service, user=None, group=None):
2359- '''
2360- Ensures a ceph keyring is created for a named service
2361- and optionally ensures user and group ownership.
2362+ """Ensures a ceph keyring is created for a named service and optionally
2363+ ensures user and group ownership.
2364
2365 Returns False if no ceph key is available in relation state.
2366- '''
2367+ """
2368 key = None
2369 for rid in relation_ids('ceph'):
2370 for unit in related_units(rid):
2371 key = relation_get('key', rid=rid, unit=unit)
2372 if key:
2373 break
2374+
2375 if not key:
2376 return False
2377+
2378 create_keyring(service=service, key=key)
2379 keyring = _keyring_path(service)
2380 if user and group:
2381 check_call(['chown', '%s.%s' % (user, group), keyring])
2382+
2383 return True
2384
2385
2386 def ceph_version():
2387- ''' Retrieve the local version of ceph '''
2388+ """Retrieve the local version of ceph."""
2389 if os.path.exists('/usr/bin/ceph'):
2390 cmd = ['ceph', '-v']
2391- output = check_output(cmd)
2392+ output = check_output(cmd).decode('US-ASCII')
2393 output = output.split()
2394 if len(output) > 3:
2395 return output[2]
2396
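[Reviewer note] Most of the churn in `ceph.py` above is one Python 3 porting pattern: `subprocess.check_output()` returns `bytes` on Python 3, so results must be decoded before substring tests like `rbd_img in out`. A small illustration using `echo` in place of the ceph CLI (assumes a POSIX host; the pool names are made up):

```python
from subprocess import check_output

# `echo` stands in for the ceph CLI; only the return type matters here.
out = check_output(['echo', 'pool-a pool-b'])

# On Python 3 `out` is bytes, so a str membership test such as
# 'pool-a' in out raises TypeError; decode first, as the diff does.
decoded = out.decode('UTF-8')
```

The same reasoning explains the `//` change in `create_pool`: integer floor division keeps the placement-group count an `int` under Python 3, where `/` would yield a float.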
2397=== modified file 'hooks/charmhelpers/contrib/storage/linux/loopback.py'
2398--- hooks/charmhelpers/contrib/storage/linux/loopback.py 2013-08-12 21:48:24 +0000
2399+++ hooks/charmhelpers/contrib/storage/linux/loopback.py 2014-12-05 08:00:54 +0000
2400@@ -1,12 +1,12 @@
2401-
2402 import os
2403 import re
2404-
2405 from subprocess import (
2406 check_call,
2407 check_output,
2408 )
2409
2410+import six
2411+
2412
2413 ##################################################
2414 # loopback device helpers.
2415@@ -37,7 +37,7 @@
2416 '''
2417 file_path = os.path.abspath(file_path)
2418 check_call(['losetup', '--find', file_path])
2419- for d, f in loopback_devices().iteritems():
2420+ for d, f in six.iteritems(loopback_devices()):
2421 if f == file_path:
2422 return d
2423
2424@@ -51,7 +51,7 @@
2425
2426 :returns: str: Full path to the ensured loopback device (eg, /dev/loop0)
2427 '''
2428- for d, f in loopback_devices().iteritems():
2429+ for d, f in six.iteritems(loopback_devices()):
2430 if f == path:
2431 return d
2432
2433
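[Reviewer note] The `six.iteritems()` substitutions in `loopback.py` (and throughout this branch) are the stock 2/3-compatibility recipe: `dict.iteritems()` no longer exists on Python 3. A dependency-free sketch of what `six.iteritems` does, applied to the same lookup shape as `ensure_loopback_device()`:

```python
import sys

def iteritems(d):
    """Minimal stand-in for six.iteritems(): iterate (key, value) pairs
    on both Python 2 and Python 3."""
    if sys.version_info[0] >= 3:
        return iter(d.items())
    return d.iteritems()

# Illustrative mapping of loop devices to backing files.
devices = {'/dev/loop0': '/srv/a.img', '/dev/loop1': '/srv/b.img'}

def device_for(path):
    # Same search-by-value loop as in the patched helper.
    for d, f in iteritems(devices):
        if f == path:
            return d
    return None
```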
2434=== modified file 'hooks/charmhelpers/contrib/storage/linux/lvm.py'
2435--- hooks/charmhelpers/contrib/storage/linux/lvm.py 2014-05-19 11:41:02 +0000
2436+++ hooks/charmhelpers/contrib/storage/linux/lvm.py 2014-12-05 08:00:54 +0000
2437@@ -61,6 +61,7 @@
2438 vg = None
2439 pvd = check_output(['pvdisplay', block_device]).splitlines()
2440 for l in pvd:
2441+ l = l.decode('UTF-8')
2442 if l.strip().startswith('VG Name'):
2443 vg = ' '.join(l.strip().split()[2:])
2444 return vg
2445
2446=== modified file 'hooks/charmhelpers/contrib/storage/linux/utils.py'
2447--- hooks/charmhelpers/contrib/storage/linux/utils.py 2014-08-13 13:12:14 +0000
2448+++ hooks/charmhelpers/contrib/storage/linux/utils.py 2014-12-05 08:00:54 +0000
2449@@ -30,7 +30,8 @@
2450 # sometimes sgdisk exits non-zero; this is OK, dd will clean up
2451 call(['sgdisk', '--zap-all', '--mbrtogpt',
2452 '--clear', block_device])
2453- dev_end = check_output(['blockdev', '--getsz', block_device])
2454+ dev_end = check_output(['blockdev', '--getsz',
2455+ block_device]).decode('UTF-8')
2456 gpt_end = int(dev_end.split()[0]) - 100
2457 check_call(['dd', 'if=/dev/zero', 'of=%s' % (block_device),
2458 'bs=1M', 'count=1'])
2459@@ -47,7 +48,7 @@
2460 it doesn't.
2461 '''
2462 is_partition = bool(re.search(r".*[0-9]+\b", device))
2463- out = check_output(['mount'])
2464+ out = check_output(['mount']).decode('UTF-8')
2465 if is_partition:
2466 return bool(re.search(device + r"\b", out))
2467 return bool(re.search(device + r"[0-9]+\b", out))
2468
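[Reviewer note] The decoded `mount` output in `storage/linux/utils.py` feeds a regex check that treats whole disks and partitions differently. A self-contained sketch of that logic against canned output (the device names are illustrative, not from the charm):

```python
import re

def is_device_mounted(device, mount_output):
    """Return True if `device` (or, for a whole disk, any numbered
    partition of it) appears in the given `mount` output."""
    # A trailing digit means the caller named a partition directly.
    is_partition = bool(re.search(r".*[0-9]+\b", device))
    if is_partition:
        return bool(re.search(device + r"\b", mount_output))
    # Whole disk: consider it mounted if any of its partitions is.
    return bool(re.search(device + r"[0-9]+\b", mount_output))

sample = "/dev/sdb1 on /srv type ext4 (rw)\n"
```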
2469=== modified file 'hooks/charmhelpers/core/fstab.py'
2470--- hooks/charmhelpers/core/fstab.py 2014-07-11 02:24:52 +0000
2471+++ hooks/charmhelpers/core/fstab.py 2014-12-05 08:00:54 +0000
2472@@ -3,10 +3,11 @@
2473
2474 __author__ = 'Jorge Niedbalski R. <jorge.niedbalski@canonical.com>'
2475
2476+import io
2477 import os
2478
2479
2480-class Fstab(file):
2481+class Fstab(io.FileIO):
2482 """This class extends file in order to implement a file reader/writer
2483 for file `/etc/fstab`
2484 """
2485@@ -24,8 +25,8 @@
2486 options = "defaults"
2487
2488 self.options = options
2489- self.d = d
2490- self.p = p
2491+ self.d = int(d)
2492+ self.p = int(p)
2493
2494 def __eq__(self, o):
2495 return str(self) == str(o)
2496@@ -45,7 +46,7 @@
2497 self._path = path
2498 else:
2499 self._path = self.DEFAULT_PATH
2500- file.__init__(self, self._path, 'r+')
2501+ super(Fstab, self).__init__(self._path, 'rb+')
2502
2503 def _hydrate_entry(self, line):
2504 # NOTE: use split with no arguments to split on any
2505@@ -58,8 +59,9 @@
2506 def entries(self):
2507 self.seek(0)
2508 for line in self.readlines():
2509+ line = line.decode('us-ascii')
2510 try:
2511- if not line.startswith("#"):
2512+ if line.strip() and not line.startswith("#"):
2513 yield self._hydrate_entry(line)
2514 except ValueError:
2515 pass
2516@@ -75,14 +77,14 @@
2517 if self.get_entry_by_attr('device', entry.device):
2518 return False
2519
2520- self.write(str(entry) + '\n')
2521+ self.write((str(entry) + '\n').encode('us-ascii'))
2522 self.truncate()
2523 return entry
2524
2525 def remove_entry(self, entry):
2526 self.seek(0)
2527
2528- lines = self.readlines()
2529+ lines = [l.decode('us-ascii') for l in self.readlines()]
2530
2531 found = False
2532 for index, line in enumerate(lines):
2533@@ -97,7 +99,7 @@
2534 lines.remove(line)
2535
2536 self.seek(0)
2537- self.write(''.join(lines))
2538+ self.write(''.join(lines).encode('us-ascii'))
2539 self.truncate()
2540 return True
2541
2542
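[Reviewer note] Porting `Fstab` from the Python 2 `file` builtin to `io.FileIO` means every read yields `bytes` and every write must receive `bytes`, hence the paired `encode('us-ascii')`/`decode('us-ascii')` calls threaded through the hunks above. The round trip in isolation, against a throwaway file rather than `/etc/fstab`:

```python
import io
import os
import tempfile

# Write and re-read an fstab-style line through a raw binary file,
# mirroring the encode/decode pairs the patch adds.
path = os.path.join(tempfile.mkdtemp(), 'fstab')
entry = '/dev/sdb1 /srv ext4 defaults 0 0'

with io.FileIO(path, 'wb') as f:
    f.write((entry + '\n').encode('us-ascii'))

with io.FileIO(path, 'rb') as f:
    lines = [l.decode('us-ascii') for l in f.readlines()]

first = lines[0].strip()
```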
2543=== modified file 'hooks/charmhelpers/core/hookenv.py'
2544--- hooks/charmhelpers/core/hookenv.py 2014-10-06 21:57:43 +0000
2545+++ hooks/charmhelpers/core/hookenv.py 2014-12-05 08:00:54 +0000
2546@@ -9,9 +9,14 @@
2547 import yaml
2548 import subprocess
2549 import sys
2550-import UserDict
2551 from subprocess import CalledProcessError
2552
2553+import six
2554+if not six.PY3:
2555+ from UserDict import UserDict
2556+else:
2557+ from collections import UserDict
2558+
2559 CRITICAL = "CRITICAL"
2560 ERROR = "ERROR"
2561 WARNING = "WARNING"
2562@@ -63,16 +68,18 @@
2563 command = ['juju-log']
2564 if level:
2565 command += ['-l', level]
2566+ if not isinstance(message, six.string_types):
2567+ message = repr(message)
2568 command += [message]
2569 subprocess.call(command)
2570
2571
2572-class Serializable(UserDict.IterableUserDict):
2573+class Serializable(UserDict):
2574 """Wrapper, an object that can be serialized to yaml or json"""
2575
2576 def __init__(self, obj):
2577 # wrap the object
2578- UserDict.IterableUserDict.__init__(self)
2579+ UserDict.__init__(self)
2580 self.data = obj
2581
2582 def __getattr__(self, attr):
2583@@ -214,6 +221,12 @@
2584 except KeyError:
2585 return (self._prev_dict or {})[key]
2586
2587+ def keys(self):
2588+ prev_keys = []
2589+ if self._prev_dict is not None:
2590+ prev_keys = self._prev_dict.keys()
2591+ return list(set(prev_keys + list(dict.keys(self))))
2592+
2593 def load_previous(self, path=None):
2594 """Load previous copy of config from disk.
2595
2596@@ -263,7 +276,7 @@
2597
2598 """
2599 if self._prev_dict:
2600- for k, v in self._prev_dict.iteritems():
2601+ for k, v in six.iteritems(self._prev_dict):
2602 if k not in self:
2603 self[k] = v
2604 with open(self.path, 'w') as f:
2605@@ -278,7 +291,8 @@
2606 config_cmd_line.append(scope)
2607 config_cmd_line.append('--format=json')
2608 try:
2609- config_data = json.loads(subprocess.check_output(config_cmd_line))
2610+ config_data = json.loads(
2611+ subprocess.check_output(config_cmd_line).decode('UTF-8'))
2612 if scope is not None:
2613 return config_data
2614 return Config(config_data)
2615@@ -297,10 +311,10 @@
2616 if unit:
2617 _args.append(unit)
2618 try:
2619- return json.loads(subprocess.check_output(_args))
2620+ return json.loads(subprocess.check_output(_args).decode('UTF-8'))
2621 except ValueError:
2622 return None
2623- except CalledProcessError, e:
2624+ except CalledProcessError as e:
2625 if e.returncode == 2:
2626 return None
2627 raise
2628@@ -312,7 +326,7 @@
2629 relation_cmd_line = ['relation-set']
2630 if relation_id is not None:
2631 relation_cmd_line.extend(('-r', relation_id))
2632- for k, v in (relation_settings.items() + kwargs.items()):
2633+ for k, v in (list(relation_settings.items()) + list(kwargs.items())):
2634 if v is None:
2635 relation_cmd_line.append('{}='.format(k))
2636 else:
2637@@ -329,7 +343,8 @@
2638 relid_cmd_line = ['relation-ids', '--format=json']
2639 if reltype is not None:
2640 relid_cmd_line.append(reltype)
2641- return json.loads(subprocess.check_output(relid_cmd_line)) or []
2642+ return json.loads(
2643+ subprocess.check_output(relid_cmd_line).decode('UTF-8')) or []
2644 return []
2645
2646
2647@@ -340,7 +355,8 @@
2648 units_cmd_line = ['relation-list', '--format=json']
2649 if relid is not None:
2650 units_cmd_line.extend(('-r', relid))
2651- return json.loads(subprocess.check_output(units_cmd_line)) or []
2652+ return json.loads(
2653+ subprocess.check_output(units_cmd_line).decode('UTF-8')) or []
2654
2655
2656 @cached
2657@@ -449,7 +465,7 @@
2658 """Get the unit ID for the remote unit"""
2659 _args = ['unit-get', '--format=json', attribute]
2660 try:
2661- return json.loads(subprocess.check_output(_args))
2662+ return json.loads(subprocess.check_output(_args).decode('UTF-8'))
2663 except ValueError:
2664 return None
2665
2666
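[Reviewer note] The new `Config.keys()` override in `hookenv.py` returns the union of the current keys and those saved from the previous hook invocation, so iteration still sees options that only existed in the prior revision. A stripped-down model of just that behaviour (the real class carries much more machinery):

```python
class Config(dict):
    """Minimal model of hookenv.Config: current values as the dict body,
    plus an optional snapshot of the previous run's values."""

    def __init__(self, *args, **kwargs):
        super(Config, self).__init__(*args, **kwargs)
        self._prev_dict = None

    def keys(self):
        prev_keys = []
        if self._prev_dict is not None:
            # list() here matters on Python 3, where a dict view
            # cannot be concatenated with a list.
            prev_keys = list(self._prev_dict.keys())
        return list(set(prev_keys + list(dict.keys(self))))

cfg = Config({'vip': '10.0.0.5'})
cfg._prev_dict = {'vip': '10.0.0.4', 'use-syslog': 'true'}
```

Note that the hunk above assigns `self._prev_dict.keys()` without the `list()` wrap; that concatenates fine on Python 2 but would need the same wrap once these helpers actually run under Python 3.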
2667=== modified file 'hooks/charmhelpers/core/host.py'
2668--- hooks/charmhelpers/core/host.py 2014-10-06 21:57:43 +0000
2669+++ hooks/charmhelpers/core/host.py 2014-12-05 08:00:54 +0000
2670@@ -6,19 +6,20 @@
2671 # Matthew Wedgwood <matthew.wedgwood@canonical.com>
2672
2673 import os
2674+import re
2675 import pwd
2676 import grp
2677 import random
2678 import string
2679 import subprocess
2680 import hashlib
2681-import shutil
2682 from contextlib import contextmanager
2683-
2684 from collections import OrderedDict
2685
2686-from hookenv import log
2687-from fstab import Fstab
2688+import six
2689+
2690+from .hookenv import log
2691+from .fstab import Fstab
2692
2693
2694 def service_start(service_name):
2695@@ -54,7 +55,9 @@
2696 def service_running(service):
2697 """Determine whether a system service is running"""
2698 try:
2699- output = subprocess.check_output(['service', service, 'status'], stderr=subprocess.STDOUT)
2700+ output = subprocess.check_output(
2701+ ['service', service, 'status'],
2702+ stderr=subprocess.STDOUT).decode('UTF-8')
2703 except subprocess.CalledProcessError:
2704 return False
2705 else:
2706@@ -67,7 +70,9 @@
2707 def service_available(service_name):
2708 """Determine whether a system service is available"""
2709 try:
2710- subprocess.check_output(['service', service_name, 'status'], stderr=subprocess.STDOUT)
2711+ subprocess.check_output(
2712+ ['service', service_name, 'status'],
2713+ stderr=subprocess.STDOUT).decode('UTF-8')
2714 except subprocess.CalledProcessError as e:
2715 return 'unrecognized service' not in e.output
2716 else:
2717@@ -96,6 +101,26 @@
2718 return user_info
2719
2720
2721+def add_group(group_name, system_group=False):
2722+ """Add a group to the system"""
2723+ try:
2724+ group_info = grp.getgrnam(group_name)
2725+ log('group {0} already exists!'.format(group_name))
2726+ except KeyError:
2727+ log('creating group {0}'.format(group_name))
2728+ cmd = ['addgroup']
2729+ if system_group:
2730+ cmd.append('--system')
2731+ else:
2732+ cmd.extend([
2733+ '--group',
2734+ ])
2735+ cmd.append(group_name)
2736+ subprocess.check_call(cmd)
2737+ group_info = grp.getgrnam(group_name)
2738+ return group_info
2739+
2740+
2741 def add_user_to_group(username, group):
2742 """Add a user to a group"""
2743 cmd = [
2744@@ -115,7 +140,7 @@
2745 cmd.append(from_path)
2746 cmd.append(to_path)
2747 log(" ".join(cmd))
2748- return subprocess.check_output(cmd).strip()
2749+ return subprocess.check_output(cmd).decode('UTF-8').strip()
2750
2751
2752 def symlink(source, destination):
2753@@ -130,7 +155,7 @@
2754 subprocess.check_call(cmd)
2755
2756
2757-def mkdir(path, owner='root', group='root', perms=0555, force=False):
2758+def mkdir(path, owner='root', group='root', perms=0o555, force=False):
2759 """Create a directory"""
2760 log("Making dir {} {}:{} {:o}".format(path, owner, group,
2761 perms))
2762@@ -146,7 +171,7 @@
2763 os.chown(realpath, uid, gid)
2764
2765
2766-def write_file(path, content, owner='root', group='root', perms=0444):
2767+def write_file(path, content, owner='root', group='root', perms=0o444):
2768 """Create or overwrite a file with the contents of a string"""
2769 log("Writing file {} {}:{} {:o}".format(path, owner, group, perms))
2770 uid = pwd.getpwnam(owner).pw_uid
2771@@ -177,7 +202,7 @@
2772 cmd_args.extend([device, mountpoint])
2773 try:
2774 subprocess.check_output(cmd_args)
2775- except subprocess.CalledProcessError, e:
2776+ except subprocess.CalledProcessError as e:
2777 log('Error mounting {} at {}\n{}'.format(device, mountpoint, e.output))
2778 return False
2779
2780@@ -191,7 +216,7 @@
2781 cmd_args = ['umount', mountpoint]
2782 try:
2783 subprocess.check_output(cmd_args)
2784- except subprocess.CalledProcessError, e:
2785+ except subprocess.CalledProcessError as e:
2786 log('Error unmounting {}\n{}'.format(mountpoint, e.output))
2787 return False
2788
2789@@ -218,8 +243,8 @@
2790 """
2791 if os.path.exists(path):
2792 h = getattr(hashlib, hash_type)()
2793- with open(path, 'r') as source:
2794- h.update(source.read()) # IGNORE:E1101 - it does have update
2795+ with open(path, 'rb') as source:
2796+ h.update(source.read())
2797 return h.hexdigest()
2798 else:
2799 return None
2800@@ -297,7 +322,7 @@
2801 if length is None:
2802 length = random.choice(range(35, 45))
2803 alphanumeric_chars = [
2804- l for l in (string.letters + string.digits)
2805+ l for l in (string.ascii_letters + string.digits)
2806 if l not in 'l0QD1vAEIOUaeiou']
2807 random_chars = [
2808 random.choice(alphanumeric_chars) for _ in range(length)]
2809@@ -306,30 +331,65 @@
2810
2811 def list_nics(nic_type):
2812 '''Return a list of nics of given type(s)'''
2813- if isinstance(nic_type, basestring):
2814+ if isinstance(nic_type, six.string_types):
2815 int_types = [nic_type]
2816 else:
2817 int_types = nic_type
2818 interfaces = []
2819 for int_type in int_types:
2820 cmd = ['ip', 'addr', 'show', 'label', int_type + '*']
2821- ip_output = subprocess.check_output(cmd).split('\n')
2822+ ip_output = subprocess.check_output(cmd).decode('UTF-8').split('\n')
2823 ip_output = (line for line in ip_output if line)
2824 for line in ip_output:
2825 if line.split()[1].startswith(int_type):
2826- interfaces.append(line.split()[1].replace(":", ""))
2827+ matched = re.search('.*: (bond[0-9]+\.[0-9]+)@.*', line)
2828+ if matched:
2829+ interface = matched.groups()[0]
2830+ else:
2831+ interface = line.split()[1].replace(":", "")
2832+ interfaces.append(interface)
2833+
2834 return interfaces
2835
2836
2837-def set_nic_mtu(nic, mtu):
2838+def set_nic_mtu(nic, mtu, persistence=False):
2839 '''Set MTU on a network interface'''
2840 cmd = ['ip', 'link', 'set', nic, 'mtu', mtu]
2841 subprocess.check_call(cmd)
2842+ # persistence mtu configuration
2843+ if not persistence:
2844+ return
2845+ if os.path.exists("/etc/network/interfaces.d/%s.cfg" % nic):
2846+ nic_cfg_file = "/etc/network/interfaces.d/%s.cfg" % nic
2847+ else:
2848+ nic_cfg_file = "/etc/network/interfaces"
2849+ f = open(nic_cfg_file,"r")
2850+ lines = f.readlines()
2851+ found = False
2852+ length = len(lines)
2853+ for i in range(len(lines)):
2854+ lines[i] = lines[i].replace('\n', '')
2855+ if lines[i].startswith("iface %s" % nic):
2856+ found = True
2857+ lines.insert(i+1, " up ip link set $IFACE mtu %s" % mtu)
2858+ lines.insert(i+2, " down ip link set $IFACE mtu 1500")
2859+ if length>i+2 and lines[i+3].startswith(" up ip link set $IFACE mtu"):
2860+ del lines[i+3]
2861+ if length>i+2 and lines[i+3].startswith(" down ip link set $IFACE mtu"):
2862+ del lines[i+3]
2863+ break
2864+ if not found:
2865+ lines.insert(length+1, "")
2866+ lines.insert(length+2, "auto %s" % nic)
2867+ lines.insert(length+3, "iface %s inet dhcp" % nic)
2868+ lines.insert(length+4, " up ip link set $IFACE mtu %s" % mtu)
2869+ lines.insert(length+5, " down ip link set $IFACE mtu 1500")
2870+ write_file(path=nic_cfg_file, content="\n".join(lines), perms=0o644)
2871
2872
2873 def get_nic_mtu(nic):
2874 cmd = ['ip', 'addr', 'show', nic]
2875- ip_output = subprocess.check_output(cmd).split('\n')
2876+ ip_output = subprocess.check_output(cmd).decode('UTF-8').split('\n')
2877 mtu = ""
2878 for line in ip_output:
2879 words = line.split()
2880@@ -340,7 +400,7 @@
2881
2882 def get_nic_hwaddr(nic):
2883 cmd = ['ip', '-o', '-0', 'addr', 'show', nic]
2884- ip_output = subprocess.check_output(cmd)
2885+ ip_output = subprocess.check_output(cmd).decode('UTF-8')
2886 hwaddr = ""
2887 words = ip_output.split()
2888 if 'link/ether' in words:
2889@@ -357,8 +417,8 @@
2890
2891 '''
2892 import apt_pkg
2893- from charmhelpers.fetch import apt_cache
2894 if not pkgcache:
2895+ from charmhelpers.fetch import apt_cache
2896 pkgcache = apt_cache()
2897 pkg = pkgcache[package]
2898 return apt_pkg.version_compare(pkg.current_ver.ver_str, revno)
2899
2900=== modified file 'hooks/charmhelpers/core/services/__init__.py'
2901--- hooks/charmhelpers/core/services/__init__.py 2014-08-13 13:12:14 +0000
2902+++ hooks/charmhelpers/core/services/__init__.py 2014-12-05 08:00:54 +0000
2903@@ -1,2 +1,2 @@
2904-from .base import *
2905-from .helpers import *
2906+from .base import * # NOQA
2907+from .helpers import * # NOQA
2908
2909=== modified file 'hooks/charmhelpers/core/services/helpers.py'
2910--- hooks/charmhelpers/core/services/helpers.py 2014-10-06 21:57:43 +0000
2911+++ hooks/charmhelpers/core/services/helpers.py 2014-12-05 08:00:54 +0000
2912@@ -196,7 +196,7 @@
2913 if not os.path.isabs(file_name):
2914 file_name = os.path.join(hookenv.charm_dir(), file_name)
2915 with open(file_name, 'w') as file_stream:
2916- os.fchmod(file_stream.fileno(), 0600)
2917+ os.fchmod(file_stream.fileno(), 0o600)
2918 yaml.dump(config_data, file_stream)
2919
2920 def read_context(self, file_name):
2921@@ -211,15 +211,19 @@
2922
2923 class TemplateCallback(ManagerCallback):
2924 """
2925- Callback class that will render a Jinja2 template, for use as a ready action.
2926-
2927- :param str source: The template source file, relative to `$CHARM_DIR/templates`
2928+ Callback class that will render a Jinja2 template, for use as a ready
2929+ action.
2930+
2931+ :param str source: The template source file, relative to
2932+ `$CHARM_DIR/templates`
2933+
2934 :param str target: The target to write the rendered template to
2935 :param str owner: The owner of the rendered file
2936 :param str group: The group of the rendered file
2937 :param int perms: The permissions of the rendered file
2938 """
2939- def __init__(self, source, target, owner='root', group='root', perms=0444):
2940+ def __init__(self, source, target,
2941+ owner='root', group='root', perms=0o444):
2942 self.source = source
2943 self.target = target
2944 self.owner = owner
2945
2946=== modified file 'hooks/charmhelpers/core/templating.py'
2947--- hooks/charmhelpers/core/templating.py 2014-08-13 13:12:14 +0000
2948+++ hooks/charmhelpers/core/templating.py 2014-12-05 08:00:54 +0000
2949@@ -4,7 +4,8 @@
2950 from charmhelpers.core import hookenv
2951
2952
2953-def render(source, target, context, owner='root', group='root', perms=0444, templates_dir=None):
2954+def render(source, target, context, owner='root', group='root',
2955+ perms=0o444, templates_dir=None):
2956 """
2957 Render a template.
2958
2959
2960=== modified file 'hooks/charmhelpers/fetch/__init__.py'
2961--- hooks/charmhelpers/fetch/__init__.py 2014-10-06 21:57:43 +0000
2962+++ hooks/charmhelpers/fetch/__init__.py 2014-12-05 08:00:54 +0000
2963@@ -5,10 +5,6 @@
2964 from charmhelpers.core.host import (
2965 lsb_release
2966 )
2967-from urlparse import (
2968- urlparse,
2969- urlunparse,
2970-)
2971 import subprocess
2972 from charmhelpers.core.hookenv import (
2973 config,
2974@@ -16,6 +12,12 @@
2975 )
2976 import os
2977
2978+import six
2979+if six.PY3:
2980+ from urllib.parse import urlparse, urlunparse
2981+else:
2982+ from urlparse import urlparse, urlunparse
2983+
2984
2985 CLOUD_ARCHIVE = """# Ubuntu Cloud Archive
2986 deb http://ubuntu-cloud.archive.canonical.com/ubuntu {} main
2987@@ -72,6 +74,7 @@
2988 FETCH_HANDLERS = (
2989 'charmhelpers.fetch.archiveurl.ArchiveUrlFetchHandler',
2990 'charmhelpers.fetch.bzrurl.BzrUrlFetchHandler',
2991+ 'charmhelpers.fetch.giturl.GitUrlFetchHandler',
2992 )
2993
2994 APT_NO_LOCK = 100 # The return code for "couldn't acquire lock" in APT.
2995@@ -148,7 +151,7 @@
2996 cmd = ['apt-get', '--assume-yes']
2997 cmd.extend(options)
2998 cmd.append('install')
2999- if isinstance(packages, basestring):
3000+ if isinstance(packages, six.string_types):
3001 cmd.append(packages)
3002 else:
3003 cmd.extend(packages)
3004@@ -181,7 +184,7 @@
3005 def apt_purge(packages, fatal=False):
3006 """Purge one or more packages"""
3007 cmd = ['apt-get', '--assume-yes', 'purge']
3008- if isinstance(packages, basestring):
3009+ if isinstance(packages, six.string_types):
3010 cmd.append(packages)
3011 else:
3012 cmd.extend(packages)
3013@@ -192,7 +195,7 @@
3014 def apt_hold(packages, fatal=False):
3015 """Hold one or more packages"""
3016 cmd = ['apt-mark', 'hold']
3017- if isinstance(packages, basestring):
3018+ if isinstance(packages, six.string_types):
3019 cmd.append(packages)
3020 else:
3021 cmd.extend(packages)
3022@@ -218,6 +221,7 @@
3023 pocket for the release.
3024 'cloud:' may be used to activate official cloud archive pockets,
3025 such as 'cloud:icehouse'
3026+ 'distro' may be used as a noop
3027
3028 @param key: A key to be added to the system's APT keyring and used
3029 to verify the signatures on packages. Ideally, this should be an
3030@@ -251,12 +255,14 @@
3031 release = lsb_release()['DISTRIB_CODENAME']
3032 with open('/etc/apt/sources.list.d/proposed.list', 'w') as apt:
3033 apt.write(PROPOSED_POCKET.format(release))
3034+ elif source == 'distro':
3035+ pass
3036 else:
3037- raise SourceConfigError("Unknown source: {!r}".format(source))
3038+ log("Unknown source: {!r}".format(source))
3039
3040 if key:
3041 if '-----BEGIN PGP PUBLIC KEY BLOCK-----' in key:
3042- with NamedTemporaryFile() as key_file:
3043+ with NamedTemporaryFile('w+') as key_file:
3044 key_file.write(key)
3045 key_file.flush()
3046 key_file.seek(0)
3047@@ -293,14 +299,14 @@
3048 sources = safe_load((config(sources_var) or '').strip()) or []
3049 keys = safe_load((config(keys_var) or '').strip()) or None
3050
3051- if isinstance(sources, basestring):
3052+ if isinstance(sources, six.string_types):
3053 sources = [sources]
3054
3055 if keys is None:
3056 for source in sources:
3057 add_source(source, None)
3058 else:
3059- if isinstance(keys, basestring):
3060+ if isinstance(keys, six.string_types):
3061 keys = [keys]
3062
3063 if len(sources) != len(keys):
3064@@ -397,7 +403,7 @@
3065 while result is None or result == APT_NO_LOCK:
3066 try:
3067 result = subprocess.check_call(cmd, env=env)
3068- except subprocess.CalledProcessError, e:
3069+ except subprocess.CalledProcessError as e:
3070 retry_count = retry_count + 1
3071 if retry_count > APT_NO_LOCK_RETRY_COUNT:
3072 raise
3073
3074=== modified file 'hooks/charmhelpers/fetch/archiveurl.py'
3075--- hooks/charmhelpers/fetch/archiveurl.py 2014-10-06 21:57:43 +0000
3076+++ hooks/charmhelpers/fetch/archiveurl.py 2014-12-05 08:00:54 +0000
3077@@ -1,8 +1,23 @@
3078 import os
3079-import urllib2
3080-from urllib import urlretrieve
3081-import urlparse
3082 import hashlib
3083+import re
3084+
3085+import six
3086+if six.PY3:
3087+ from urllib.request import (
3088+ build_opener, install_opener, urlopen, urlretrieve,
3089+ HTTPPasswordMgrWithDefaultRealm, HTTPBasicAuthHandler,
3090+ )
3091+ from urllib.parse import urlparse, urlunparse, parse_qs
3092+ from urllib.error import URLError
3093+else:
3094+ from urllib import urlretrieve
3095+ from urllib2 import (
3096+ build_opener, install_opener, urlopen,
3097+ HTTPPasswordMgrWithDefaultRealm, HTTPBasicAuthHandler,
3098+ URLError
3099+ )
3100+ from urlparse import urlparse, urlunparse, parse_qs
3101
3102 from charmhelpers.fetch import (
3103 BaseFetchHandler,
3104@@ -15,6 +30,24 @@
3105 from charmhelpers.core.host import mkdir, check_hash
3106
3107
3108+def splituser(host):
3109+ '''urllib.splituser(), but six's support of this seems broken'''
3110+ _userprog = re.compile('^(.*)@(.*)$')
3111+ match = _userprog.match(host)
3112+ if match:
3113+ return match.group(1, 2)
3114+ return None, host
3115+
3116+
3117+def splitpasswd(user):
3118+ '''urllib.splitpasswd(), but six's support of this is missing'''
3119+ _passwdprog = re.compile('^([^:]*):(.*)$', re.S)
3120+ match = _passwdprog.match(user)
3121+ if match:
3122+ return match.group(1, 2)
3123+ return user, None
3124+
3125+
3126 class ArchiveUrlFetchHandler(BaseFetchHandler):
3127 """
3128 Handler to download archive files from arbitrary URLs.
3129@@ -42,20 +75,20 @@
3130 """
3131 # propogate all exceptions
3132 # URLError, OSError, etc
3133- proto, netloc, path, params, query, fragment = urlparse.urlparse(source)
3134+ proto, netloc, path, params, query, fragment = urlparse(source)
3135 if proto in ('http', 'https'):
3136- auth, barehost = urllib2.splituser(netloc)
3137+ auth, barehost = splituser(netloc)
3138 if auth is not None:
3139- source = urlparse.urlunparse((proto, barehost, path, params, query, fragment))
3140- username, password = urllib2.splitpasswd(auth)
3141- passman = urllib2.HTTPPasswordMgrWithDefaultRealm()
3142+ source = urlunparse((proto, barehost, path, params, query, fragment))
3143+ username, password = splitpasswd(auth)
3144+ passman = HTTPPasswordMgrWithDefaultRealm()
3145 # Realm is set to None in add_password to force the username and password
3146 # to be used whatever the realm
3147 passman.add_password(None, source, username, password)
3148- authhandler = urllib2.HTTPBasicAuthHandler(passman)
3149- opener = urllib2.build_opener(authhandler)
3150- urllib2.install_opener(opener)
3151- response = urllib2.urlopen(source)
3152+ authhandler = HTTPBasicAuthHandler(passman)
3153+ opener = build_opener(authhandler)
3154+ install_opener(opener)
3155+ response = urlopen(source)
3156 try:
3157 with open(dest, 'w') as dest_file:
3158 dest_file.write(response.read())
3159@@ -91,17 +124,21 @@
3160 url_parts = self.parse_url(source)
3161 dest_dir = os.path.join(os.environ.get('CHARM_DIR'), 'fetched')
3162 if not os.path.exists(dest_dir):
3163- mkdir(dest_dir, perms=0755)
3164+ mkdir(dest_dir, perms=0o755)
3165 dld_file = os.path.join(dest_dir, os.path.basename(url_parts.path))
3166 try:
3167 self.download(source, dld_file)
3168- except urllib2.URLError as e:
3169+ except URLError as e:
3170 raise UnhandledSource(e.reason)
3171 except OSError as e:
3172 raise UnhandledSource(e.strerror)
3173- options = urlparse.parse_qs(url_parts.fragment)
3174+ options = parse_qs(url_parts.fragment)
3175 for key, value in options.items():
3176- if key in hashlib.algorithms:
3177+ if not six.PY3:
3178+ algorithms = hashlib.algorithms
3179+ else:
3180+ algorithms = hashlib.algorithms_available
3181+ if key in algorithms:
3182 check_hash(dld_file, value, key)
3183 if checksum:
3184 check_hash(dld_file, checksum, hash_type)
3185
3186=== modified file 'hooks/charmhelpers/fetch/bzrurl.py'
3187--- hooks/charmhelpers/fetch/bzrurl.py 2014-07-28 14:38:51 +0000
3188+++ hooks/charmhelpers/fetch/bzrurl.py 2014-12-05 08:00:54 +0000
3189@@ -5,6 +5,10 @@
3190 )
3191 from charmhelpers.core.host import mkdir
3192
3193+import six
3194+if six.PY3:
3195+ raise ImportError('bzrlib does not support Python3')
3196+
3197 try:
3198 from bzrlib.branch import Branch
3199 except ImportError:
3200@@ -42,7 +46,7 @@
3201 dest_dir = os.path.join(os.environ.get('CHARM_DIR'), "fetched",
3202 branch_name)
3203 if not os.path.exists(dest_dir):
3204- mkdir(dest_dir, perms=0755)
3205+ mkdir(dest_dir, perms=0o755)
3206 try:
3207 self.branch(source, dest_dir)
3208 except OSError as e:
3209
3210=== modified file 'hooks/nova_compute_hooks.py'
3211--- hooks/nova_compute_hooks.py 2014-11-28 12:54:57 +0000
3212+++ hooks/nova_compute_hooks.py 2014-12-05 08:00:54 +0000
3213@@ -50,11 +50,12 @@
3214 ceph_config_file, CEPH_SECRET,
3215 enable_shell, disable_shell,
3216 fix_path_ownership,
3217- assert_charm_supports_ipv6
3218+ assert_charm_supports_ipv6,
3219 )
3220
3221 from charmhelpers.contrib.network.ip import (
3222- get_ipv6_addr
3223+ get_ipv6_addr,
3224+ configure_phy_nic_mtu
3225 )
3226
3227 from nova_compute_context import CEPH_SECRET_UUID
3228@@ -70,6 +71,7 @@
3229 configure_installation_source(config('openstack-origin'))
3230 apt_update()
3231 apt_install(determine_packages(), fatal=True)
3232+ configure_phy_nic_mtu()
3233
3234
3235 @hooks.hook('config-changed')
3236@@ -103,6 +105,8 @@
3237
3238 CONFIGS.write_all()
3239
3240+ configure_phy_nic_mtu()
3241+
3242
3243 @hooks.hook('amqp-relation-joined')
3244 def amqp_joined(relation_id=None):
3245
3246=== modified file 'hooks/nova_compute_utils.py'
3247--- hooks/nova_compute_utils.py 2014-12-03 23:54:27 +0000
3248+++ hooks/nova_compute_utils.py 2014-12-05 08:00:54 +0000
3249@@ -14,7 +14,7 @@
3250 from charmhelpers.core.host import (
3251 mkdir,
3252 service_restart,
3253- lsb_release
3254+ lsb_release,
3255 )
3256
3257 from charmhelpers.core.hookenv import (
3258
3259=== modified file 'unit_tests/test_nova_compute_hooks.py'
3260--- unit_tests/test_nova_compute_hooks.py 2014-11-28 14:17:10 +0000
3261+++ unit_tests/test_nova_compute_hooks.py 2014-12-05 08:00:54 +0000
3262@@ -54,7 +54,8 @@
3263 'ensure_ceph_keyring',
3264 'execd_preinstall',
3265 # socket
3266- 'gethostname'
3267+ 'gethostname',
3268+ 'configure_phy_nic_mtu'
3269 ]
3270
3271
3272@@ -80,11 +81,13 @@
3273 self.assertTrue(self.apt_update.called)
3274 self.apt_install.assert_called_with(['foo', 'bar'], fatal=True)
3275 self.execd_preinstall.assert_called()
3276+ self.assertTrue(self.configure_phy_nic_mtu.called)
3277
3278 def test_config_changed_with_upgrade(self):
3279 self.openstack_upgrade_available.return_value = True
3280 hooks.config_changed()
3281 self.assertTrue(self.do_openstack_upgrade.called)
3282+ self.assertTrue(self.configure_phy_nic_mtu.called)
3283
3284 @patch.object(hooks, 'compute_joined')
3285 def test_config_changed_with_migration(self, compute_joined):
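For reference, when `set_nic_mtu` above is called with `persistence=True` on a NIC that has no existing stanza, the interfaces file it writes ends with a block of roughly this shape (interface name, MTU value, and exact indentation are illustrative, not taken from a real host):

```
auto eth0
iface eth0 inet dhcp
    up ip link set $IFACE mtu 9000
    down ip link set $IFACE mtu 1500
```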
