Merge lp:~zhhuabj/charms/trusty/nova-compute/lp74646 into lp:~openstack-charmers-archive/charms/trusty/nova-compute/next

Proposed by Hua Zhang
Status: Superseded
Proposed branch: lp:~zhhuabj/charms/trusty/nova-compute/lp74646
Merge into: lp:~openstack-charmers-archive/charms/trusty/nova-compute/next
Diff against target: 3776 lines (+1430/-528)
35 files modified
charm-helpers-hooks.yaml (+1/-0)
config.yaml (+6/-0)
hooks/charmhelpers/contrib/hahelpers/cluster.py (+16/-7)
hooks/charmhelpers/contrib/network/ip.py (+90/-54)
hooks/charmhelpers/contrib/network/ufw.py (+182/-0)
hooks/charmhelpers/contrib/openstack/amulet/deployment.py (+2/-1)
hooks/charmhelpers/contrib/openstack/amulet/utils.py (+3/-1)
hooks/charmhelpers/contrib/openstack/context.py (+339/-225)
hooks/charmhelpers/contrib/openstack/ip.py (+41/-27)
hooks/charmhelpers/contrib/openstack/neutron.py (+20/-4)
hooks/charmhelpers/contrib/openstack/templates/haproxy.cfg (+2/-2)
hooks/charmhelpers/contrib/openstack/templating.py (+5/-5)
hooks/charmhelpers/contrib/openstack/utils.py (+146/-13)
hooks/charmhelpers/contrib/python/debug.py (+40/-0)
hooks/charmhelpers/contrib/python/packages.py (+77/-0)
hooks/charmhelpers/contrib/python/rpdb.py (+42/-0)
hooks/charmhelpers/contrib/python/version.py (+18/-0)
hooks/charmhelpers/contrib/storage/linux/ceph.py (+89/-102)
hooks/charmhelpers/contrib/storage/linux/loopback.py (+4/-4)
hooks/charmhelpers/contrib/storage/linux/lvm.py (+1/-0)
hooks/charmhelpers/contrib/storage/linux/utils.py (+3/-2)
hooks/charmhelpers/core/fstab.py (+10/-8)
hooks/charmhelpers/core/hookenv.py (+27/-11)
hooks/charmhelpers/core/host.py (+81/-21)
hooks/charmhelpers/core/services/__init__.py (+2/-2)
hooks/charmhelpers/core/services/helpers.py (+9/-5)
hooks/charmhelpers/core/sysctl.py (+34/-0)
hooks/charmhelpers/core/templating.py (+2/-1)
hooks/charmhelpers/fetch/__init__.py (+18/-12)
hooks/charmhelpers/fetch/archiveurl.py (+53/-16)
hooks/charmhelpers/fetch/bzrurl.py (+5/-1)
hooks/charmhelpers/fetch/giturl.py (+51/-0)
hooks/nova_compute_hooks.py (+6/-2)
hooks/nova_compute_utils.py (+1/-1)
unit_tests/test_nova_compute_hooks.py (+4/-1)
To merge this branch: bzr merge lp:~zhhuabj/charms/trusty/nova-compute/lp74646
Reviewer Review Type Date Requested Status
Edward Hope-Morley Pending
Xiang Hui Pending
Review via email: mp+243979@code.launchpad.net

This proposal supersedes a proposal from 2014-11-24.

Description of the change

This story (SF#74646) adds support for setting a VM MTU <= 1500 by configuring the MTU of the physical NICs and the network_device_mtu option.
1. Set the MTU of the physical NICs in both the nova-compute and neutron-gateway charms:
   juju set nova-compute phy-nic-mtu=1546
   juju set neutron-gateway phy-nic-mtu=1546
2. Set the MTU of the peer devices between the OVS bridges br-phy and br-int by adding the 'network-device-mtu' parameter to /etc/neutron/neutron.conf:
   juju set neutron-api network-device-mtu=1546
   Limitations:
   a, Linux bridge is not supported, because the three related parameters (ovs_use_veth, use_veth_interconnection, veth_mtu) are not set.
   b, for GRE and VXLAN tunnels, this step is optional.
   c, after setting network-device-mtu=1546 on the neutron-api charm, quantum-gateway and neutron-openvswitch pick up the network-device-mtu parameter via their relations, so only the openvswitch plugin is supported at this stage.
3. The MTU inside the VM can then continue to be configured via DHCP by setting the instance-mtu option:
   juju set neutron-gateway instance-mtu=1500
   Limitations:
   a, only VM MTU <= 1500 is supported; to set a VM MTU > 1500, the MTU of the tap devices associated with that VM must also be raised, per this link (http://pastebin.ubuntu.com/9272762/ )
   b, per-network MTU is not supported.

NOTE: we may not be able to test this feature in bastion.
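The example values above (phy-nic-mtu=1546 alongside instance-mtu=1500) follow from tunnel encapsulation overhead. A minimal sketch of that arithmetic, assuming the commonly quoted per-packet overheads for GRE and VXLAN over IPv4 (these figures are illustrative, not taken from the charm code):

```python
# Hypothetical helper illustrating why the physical NIC MTU must exceed
# the VM MTU when tenant traffic is tunnelled. Overheads are assumed
# typical values for IPv4 underlays, not constants from the charm.
ENCAP_OVERHEAD = {
    'gre': 46,    # inner Ethernet + outer IPv4 + GRE headers
    'vxlan': 50,  # inner Ethernet + outer IPv4 + UDP + VXLAN headers
}


def required_phy_mtu(instance_mtu, encap):
    """Return the physical NIC MTU needed so that tunnelled guest
    frames of size instance_mtu are not fragmented on the underlay."""
    return instance_mtu + ENCAP_OVERHEAD[encap]


if __name__ == '__main__':
    # Matches the juju set example: 1500 inside the VM needs 1546 on
    # the physical NIC when using GRE.
    print(required_phy_mtu(1500, 'gre'))
    print(required_phy_mtu(1500, 'vxlan'))
```

This also shows why step 2 is optional for GRE and VXLAN: the tunnel fits inside the enlarged physical MTU regardless of the veth device MTU, whereas VLAN/flat traffic crosses the br-phy/br-int peer devices directly.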

Revision history for this message
uosci-testing-bot (uosci-testing-bot) wrote : Posted in a previous version of this proposal

UOSCI bot says:
charm_lint_check #1213 nova-compute-next for zhhuabj mp242611
    LINT OK: passed

LINT Results (max last 5 lines):
  I: config.yaml: option os-data-network has no default value
  I: config.yaml: option config-flags has no default value
  I: config.yaml: option instances-path has no default value
  W: config.yaml: option disable-neutron-security-groups has no default value
  I: config.yaml: option migration-auth-type has no default value

Full lint test output: http://paste.ubuntu.com/9206996/
Build: http://10.98.191.181:8080/job/charm_lint_check/1213/

Revision history for this message
uosci-testing-bot (uosci-testing-bot) wrote : Posted in a previous version of this proposal

UOSCI bot says:
charm_unit_test #1047 nova-compute-next for zhhuabj mp242611
    UNIT OK: passed

UNIT Results (max last 5 lines):
  nova_compute_hooks 134 13 90% 81, 99-100, 129, 195, 253, 258-259, 265, 269-272
  nova_compute_utils 240 120 50% 168-224, 232, 237-240, 275-277, 284, 288-291, 299-307, 311, 320-329, 342-361, 387-388, 392-393, 412-433, 450-460, 474-475, 480-481, 486-495
  TOTAL 578 203 65%
  Ran 56 tests in 3.028s
  OK (SKIP=5)

Full unit test output: http://paste.ubuntu.com/9206997/
Build: http://10.98.191.181:8080/job/charm_unit_test/1047/

Revision history for this message
uosci-testing-bot (uosci-testing-bot) wrote : Posted in a previous version of this proposal

UOSCI bot says:
charm_amulet_test #517 nova-compute-next for zhhuabj mp242611
    AMULET FAIL: amulet-test failed

AMULET Results (max last 5 lines):
  subprocess.CalledProcessError: Command '['juju-deployer', '-W', '-L', '-c', '/tmp/amulet-juju-deployer-51bPaJ.json', '-e', 'osci-sv07', '-t', '1000', 'osci-sv07']' returned non-zero exit status 1
  WARNING cannot delete security group "juju-osci-sv07-0". Used by another environment?
  juju-test INFO : Results: 1 passed, 2 failed, 0 errored
  ERROR subprocess encountered error code 2
  make: *** [test] Error 2

Full amulet test output: http://paste.ubuntu.com/9207416/
Build: http://10.98.191.181:8080/job/charm_amulet_test/517/

Revision history for this message
Edward Hope-Morley (hopem) wrote : Posted in a previous version of this proposal
review: Needs Fixing
Revision history for this message
uosci-testing-bot (uosci-testing-bot) wrote : Posted in a previous version of this proposal

UOSCI bot says:
charm_lint_check #1312 nova-compute-next for zhhuabj mp242611
    LINT OK: passed

LINT Results (max last 5 lines):
  I: config.yaml: option os-data-network has no default value
  I: config.yaml: option config-flags has no default value
  I: config.yaml: option instances-path has no default value
  W: config.yaml: option disable-neutron-security-groups has no default value
  I: config.yaml: option migration-auth-type has no default value

Full lint test output: http://paste.ubuntu.com/9354896/
Build: http://10.98.191.181:8080/job/charm_lint_check/1312/

Revision history for this message
uosci-testing-bot (uosci-testing-bot) wrote : Posted in a previous version of this proposal

UOSCI bot says:
charm_unit_test #1146 nova-compute-next for zhhuabj mp242611
    UNIT OK: passed

UNIT Results (max last 5 lines):
  nova_compute_hooks 134 13 90% 81, 99-100, 129, 195, 253, 258-259, 265, 269-272
  nova_compute_utils 228 110 52% 161-217, 225, 230-233, 268-270, 277, 281-284, 292-300, 304, 313-322, 335-354, 380-381, 385-386, 405-426, 443-453, 467-468, 473-474
  TOTAL 566 193 66%
  Ran 56 tests in 3.216s
  OK (SKIP=5)

Full unit test output: http://paste.ubuntu.com/9354897/
Build: http://10.98.191.181:8080/job/charm_unit_test/1146/

Revision history for this message
uosci-testing-bot (uosci-testing-bot) wrote : Posted in a previous version of this proposal

UOSCI bot says:
charm_amulet_test #578 nova-compute-next for zhhuabj mp242611
    AMULET FAIL: amulet-test failed

AMULET Results (max last 5 lines):
  juju-test.conductor DEBUG : Calling "juju destroy-environment -y osci-sv07"
  WARNING cannot delete security group "juju-osci-sv07-0". Used by another environment?
  juju-test INFO : Results: 2 passed, 1 failed, 0 errored
  ERROR subprocess encountered error code 1
  make: *** [test] Error 1

Full amulet test output: http://paste.ubuntu.com/9354972/
Build: http://10.98.191.181:8080/job/charm_amulet_test/578/

Revision history for this message
uosci-testing-bot (uosci-testing-bot) wrote :

UOSCI bot says:
charm_lint_check #94 nova-compute-next for zhhuabj mp243979
    LINT OK: passed

LINT Results (max last 5 lines):
  I: config.yaml: option os-data-network has no default value
  I: config.yaml: option config-flags has no default value
  I: config.yaml: option instances-path has no default value
  W: config.yaml: option disable-neutron-security-groups has no default value
  I: config.yaml: option migration-auth-type has no default value

Full lint test output: pastebin not avail., cmd error
Build: http://10.98.191.181:8080/job/charm_lint_check/94/

Revision history for this message
uosci-testing-bot (uosci-testing-bot) wrote :

UOSCI bot says:
charm_unit_test #94 nova-compute-next for zhhuabj mp243979
    UNIT FAIL: unit-test failed

UNIT Results (max last 5 lines):
  ImportError: No module named python.packages
  Name Stmts Miss Cover Missing
  Ran 3 tests in 0.002s
  FAILED (errors=3)
  make: *** [unit_test] Error 1

Full unit test output: pastebin not avail., cmd error
Build: http://10.98.191.181:8080/job/charm_unit_test/94/

Revision history for this message
uosci-testing-bot (uosci-testing-bot) wrote :

UOSCI bot says:
charm_amulet_test #54 nova-compute-next for zhhuabj mp243979
    AMULET FAIL: amulet-test failed

AMULET Results (max last 5 lines):
  juju-test.conductor DEBUG : Tearing down osci-sv03 juju environment
  juju-test.conductor DEBUG : Calling "juju destroy-environment -y osci-sv03"
  juju-test INFO : Results: 1 passed, 2 failed, 0 errored
  ERROR subprocess encountered error code 2
  make: *** [test] Error 2

Full amulet test output: pastebin not avail., cmd error
Build: http://10.98.191.181:8080/job/charm_amulet_test/54/

97. By Hua Zhang

sync charm-helpers to include contrib.python to fix unit test error

Unmerged revisions

Preview Diff

1=== modified file 'charm-helpers-hooks.yaml'
2--- charm-helpers-hooks.yaml 2014-09-23 10:21:54 +0000
3+++ charm-helpers-hooks.yaml 2014-12-10 07:57:54 +0000
4@@ -9,4 +9,5 @@
5 - apache
6 - cluster
7 - contrib.network
8+ - contrib.python
9 - payload.execd
10
11=== modified file 'config.yaml'
12--- config.yaml 2014-11-28 12:54:57 +0000
13+++ config.yaml 2014-12-10 07:57:54 +0000
14@@ -154,3 +154,9 @@
15 order for this charm to function correctly, the privacy extension must be
16 disabled and a non-temporary address must be configured/available on
17 your network interface.
18+ phy-nic-mtu:
19+ type: int
20+ default: 1500
21+ description: |
22+ To improve network performance of VM, sometimes we should keep VM MTU as 1500
23+ and use charm to modify MTU of tunnel nic more than 1500 (e.g. 1546 for GRE)
24
25=== modified file 'hooks/charmhelpers/contrib/hahelpers/cluster.py'
26--- hooks/charmhelpers/contrib/hahelpers/cluster.py 2014-10-06 21:57:43 +0000
27+++ hooks/charmhelpers/contrib/hahelpers/cluster.py 2014-12-10 07:57:54 +0000
28@@ -13,9 +13,10 @@
29
30 import subprocess
31 import os
32-
33 from socket import gethostname as get_unit_hostname
34
35+import six
36+
37 from charmhelpers.core.hookenv import (
38 log,
39 relation_ids,
40@@ -77,7 +78,7 @@
41 "show", resource
42 ]
43 try:
44- status = subprocess.check_output(cmd)
45+ status = subprocess.check_output(cmd).decode('UTF-8')
46 except subprocess.CalledProcessError:
47 return False
48 else:
49@@ -150,34 +151,42 @@
50 return False
51
52
53-def determine_api_port(public_port):
54+def determine_api_port(public_port, singlenode_mode=False):
55 '''
56 Determine correct API server listening port based on
57 existence of HTTPS reverse proxy and/or haproxy.
58
59 public_port: int: standard public port for given service
60
61+ singlenode_mode: boolean: Shuffle ports when only a single unit is present
62+
63 returns: int: the correct listening port for the API service
64 '''
65 i = 0
66- if len(peer_units()) > 0 or is_clustered():
67+ if singlenode_mode:
68+ i += 1
69+ elif len(peer_units()) > 0 or is_clustered():
70 i += 1
71 if https():
72 i += 1
73 return public_port - (i * 10)
74
75
76-def determine_apache_port(public_port):
77+def determine_apache_port(public_port, singlenode_mode=False):
78 '''
79 Description: Determine correct apache listening port based on public IP +
80 state of the cluster.
81
82 public_port: int: standard public port for given service
83
84+ singlenode_mode: boolean: Shuffle ports when only a single unit is present
85+
86 returns: int: the correct listening port for the HAProxy service
87 '''
88 i = 0
89- if len(peer_units()) > 0 or is_clustered():
90+ if singlenode_mode:
91+ i += 1
92+ elif len(peer_units()) > 0 or is_clustered():
93 i += 1
94 return public_port - (i * 10)
95
96@@ -197,7 +206,7 @@
97 for setting in settings:
98 conf[setting] = config_get(setting)
99 missing = []
100- [missing.append(s) for s, v in conf.iteritems() if v is None]
101+ [missing.append(s) for s, v in six.iteritems(conf) if v is None]
102 if missing:
103 log('Insufficient config data to configure hacluster.', level=ERROR)
104 raise HAIncompleteConfig
105
106=== modified file 'hooks/charmhelpers/contrib/network/ip.py'
107--- hooks/charmhelpers/contrib/network/ip.py 2014-10-06 21:57:43 +0000
108+++ hooks/charmhelpers/contrib/network/ip.py 2014-12-10 07:57:54 +0000
109@@ -1,16 +1,20 @@
110 import glob
111 import re
112 import subprocess
113-import sys
114
115 from functools import partial
116
117 from charmhelpers.core.hookenv import unit_get
118 from charmhelpers.fetch import apt_install
119 from charmhelpers.core.hookenv import (
120- WARNING,
121- ERROR,
122- log
123+ config,
124+ log,
125+ INFO
126+)
127+from charmhelpers.core.host import (
128+ list_nics,
129+ get_nic_mtu,
130+ set_nic_mtu
131 )
132
133 try:
134@@ -34,31 +38,28 @@
135 network)
136
137
138+def no_ip_found_error_out(network):
139+ errmsg = ("No IP address found in network: %s" % network)
140+ raise ValueError(errmsg)
141+
142+
143 def get_address_in_network(network, fallback=None, fatal=False):
144- """
145- Get an IPv4 or IPv6 address within the network from the host.
146+ """Get an IPv4 or IPv6 address within the network from the host.
147
148 :param network (str): CIDR presentation format. For example,
149 '192.168.1.0/24'.
150 :param fallback (str): If no address is found, return fallback.
151 :param fatal (boolean): If no address is found, fallback is not
152 set and fatal is True then exit(1).
153-
154 """
155-
156- def not_found_error_out():
157- log("No IP address found in network: %s" % network,
158- level=ERROR)
159- sys.exit(1)
160-
161 if network is None:
162 if fallback is not None:
163 return fallback
164+
165+ if fatal:
166+ no_ip_found_error_out(network)
167 else:
168- if fatal:
169- not_found_error_out()
170- else:
171- return None
172+ return None
173
174 _validate_cidr(network)
175 network = netaddr.IPNetwork(network)
176@@ -70,6 +71,7 @@
177 cidr = netaddr.IPNetwork("%s/%s" % (addr, netmask))
178 if cidr in network:
179 return str(cidr.ip)
180+
181 if network.version == 6 and netifaces.AF_INET6 in addresses:
182 for addr in addresses[netifaces.AF_INET6]:
183 if not addr['addr'].startswith('fe80'):
184@@ -82,20 +84,20 @@
185 return fallback
186
187 if fatal:
188- not_found_error_out()
189+ no_ip_found_error_out(network)
190
191 return None
192
193
194 def is_ipv6(address):
195- '''Determine whether provided address is IPv6 or not'''
196+ """Determine whether provided address is IPv6 or not."""
197 try:
198 address = netaddr.IPAddress(address)
199 except netaddr.AddrFormatError:
200 # probably a hostname - so not an address at all!
201 return False
202- else:
203- return address.version == 6
204+
205+ return address.version == 6
206
207
208 def is_address_in_network(network, address):
209@@ -113,11 +115,13 @@
210 except (netaddr.core.AddrFormatError, ValueError):
211 raise ValueError("Network (%s) is not in CIDR presentation format" %
212 network)
213+
214 try:
215 address = netaddr.IPAddress(address)
216 except (netaddr.core.AddrFormatError, ValueError):
217 raise ValueError("Address (%s) is not in correct presentation format" %
218 address)
219+
220 if address in network:
221 return True
222 else:
223@@ -140,57 +144,63 @@
224 if address.version == 4 and netifaces.AF_INET in addresses:
225 addr = addresses[netifaces.AF_INET][0]['addr']
226 netmask = addresses[netifaces.AF_INET][0]['netmask']
227- cidr = netaddr.IPNetwork("%s/%s" % (addr, netmask))
228+ network = netaddr.IPNetwork("%s/%s" % (addr, netmask))
229+ cidr = network.cidr
230 if address in cidr:
231 if key == 'iface':
232 return iface
233 else:
234 return addresses[netifaces.AF_INET][0][key]
235+
236 if address.version == 6 and netifaces.AF_INET6 in addresses:
237 for addr in addresses[netifaces.AF_INET6]:
238 if not addr['addr'].startswith('fe80'):
239- cidr = netaddr.IPNetwork("%s/%s" % (addr['addr'],
240- addr['netmask']))
241+ network = netaddr.IPNetwork("%s/%s" % (addr['addr'],
242+ addr['netmask']))
243+ cidr = network.cidr
244 if address in cidr:
245 if key == 'iface':
246 return iface
247+ elif key == 'netmask' and cidr:
248+ return str(cidr).split('/')[1]
249 else:
250 return addr[key]
251+
252 return None
253
254
255 get_iface_for_address = partial(_get_for_address, key='iface')
256
257+
258 get_netmask_for_address = partial(_get_for_address, key='netmask')
259
260
261 def format_ipv6_addr(address):
262- """
263- IPv6 needs to be wrapped with [] in url link to parse correctly.
264+ """If address is IPv6, wrap it in '[]' otherwise return None.
265+
266+ This is required by most configuration files when specifying IPv6
267+ addresses.
268 """
269 if is_ipv6(address):
270- address = "[%s]" % address
271- else:
272- log("Not a valid ipv6 address: %s" % address, level=WARNING)
273- address = None
274+ return "[%s]" % address
275
276- return address
277+ return None
278
279
280 def get_iface_addr(iface='eth0', inet_type='AF_INET', inc_aliases=False,
281 fatal=True, exc_list=None):
282- """
283- Return the assigned IP address for a given interface, if any, or [].
284- """
285+ """Return the assigned IP address for a given interface, if any."""
286 # Extract nic if passed /dev/ethX
287 if '/' in iface:
288 iface = iface.split('/')[-1]
289+
290 if not exc_list:
291 exc_list = []
292+
293 try:
294 inet_num = getattr(netifaces, inet_type)
295 except AttributeError:
296- raise Exception('Unknown inet type ' + str(inet_type))
297+ raise Exception("Unknown inet type '%s'" % str(inet_type))
298
299 interfaces = netifaces.interfaces()
300 if inc_aliases:
301@@ -198,15 +208,18 @@
302 for _iface in interfaces:
303 if iface == _iface or _iface.split(':')[0] == iface:
304 ifaces.append(_iface)
305+
306 if fatal and not ifaces:
307 raise Exception("Invalid interface '%s'" % iface)
308+
309 ifaces.sort()
310 else:
311 if iface not in interfaces:
312 if fatal:
313- raise Exception("%s not found " % (iface))
314+ raise Exception("Interface '%s' not found " % (iface))
315 else:
316 return []
317+
318 else:
319 ifaces = [iface]
320
321@@ -217,10 +230,13 @@
322 for entry in net_info[inet_num]:
323 if 'addr' in entry and entry['addr'] not in exc_list:
324 addresses.append(entry['addr'])
325+
326 if fatal and not addresses:
327 raise Exception("Interface '%s' doesn't have any %s addresses." %
328 (iface, inet_type))
329- return addresses
330+
331+ return sorted(addresses)
332+
333
334 get_ipv4_addr = partial(get_iface_addr, inet_type='AF_INET')
335
336@@ -237,6 +253,7 @@
337 raw = re.match(ll_key, _addr)
338 if raw:
339 _addr = raw.group(1)
340+
341 if _addr == addr:
342 log("Address '%s' is configured on iface '%s'" %
343 (addr, iface))
344@@ -247,8 +264,9 @@
345
346
347 def sniff_iface(f):
348- """If no iface provided, inject net iface inferred from unit private
349- address.
350+ """Ensure decorated function is called with a value for iface.
351+
352+ If no iface provided, inject net iface inferred from unit private address.
353 """
354 def iface_sniffer(*args, **kwargs):
355 if not kwargs.get('iface', None):
356@@ -291,7 +309,7 @@
357 if global_addrs:
358 # Make sure any found global addresses are not temporary
359 cmd = ['ip', 'addr', 'show', iface]
360- out = subprocess.check_output(cmd)
361+ out = subprocess.check_output(cmd).decode('UTF-8')
362 if dynamic_only:
363 key = re.compile("inet6 (.+)/[0-9]+ scope global dynamic.*")
364 else:
365@@ -313,33 +331,51 @@
366 return addrs
367
368 if fatal:
369- raise Exception("Interface '%s' doesn't have a scope global "
370+ raise Exception("Interface '%s' does not have a scope global "
371 "non-temporary ipv6 address." % iface)
372
373 return []
374
375
376 def get_bridges(vnic_dir='/sys/devices/virtual/net'):
377- """
378- Return a list of bridges on the system or []
379- """
380- b_rgex = vnic_dir + '/*/bridge'
381- return [x.replace(vnic_dir, '').split('/')[1] for x in glob.glob(b_rgex)]
382+ """Return a list of bridges on the system."""
383+ b_regex = "%s/*/bridge" % vnic_dir
384+ return [x.replace(vnic_dir, '').split('/')[1] for x in glob.glob(b_regex)]
385
386
387 def get_bridge_nics(bridge, vnic_dir='/sys/devices/virtual/net'):
388- """
389- Return a list of nics comprising a given bridge on the system or []
390- """
391- brif_rgex = "%s/%s/brif/*" % (vnic_dir, bridge)
392- return [x.split('/')[-1] for x in glob.glob(brif_rgex)]
393+ """Return a list of nics comprising a given bridge on the system."""
394+ brif_regex = "%s/%s/brif/*" % (vnic_dir, bridge)
395+ return [x.split('/')[-1] for x in glob.glob(brif_regex)]
396
397
398 def is_bridge_member(nic):
399- """
400- Check if a given nic is a member of a bridge
401- """
402+ """Check if a given nic is a member of a bridge."""
403 for bridge in get_bridges():
404 if nic in get_bridge_nics(bridge):
405 return True
406+
407 return False
408+
409+
410+def configure_phy_nic_mtu(mng_ip=None):
411+ """Configure mtu for physical nic."""
412+ phy_nic_mtu = config('phy-nic-mtu')
413+ if phy_nic_mtu >= 1500:
414+ phy_nic = None
415+ if mng_ip is None:
416+ mng_ip = unit_get('private-address')
417+ for nic in list_nics(['eth', 'bond', 'br']):
418+ if mng_ip in get_ipv4_addr(nic, fatal=False):
419+ phy_nic = nic
420+ # need to find the associated phy nic for bridge
421+ if nic.startswith('br'):
422+ for brnic in get_bridge_nics(nic):
423+ if brnic.startswith('eth') or brnic.startswith('bond'):
424+ phy_nic = brnic
425+ break
426+ break
427+ if phy_nic is not None and phy_nic_mtu != get_nic_mtu(phy_nic):
428+ set_nic_mtu(phy_nic, str(phy_nic_mtu), persistence=True)
429+ log('set mtu={} for phy_nic={}'
430+ .format(phy_nic_mtu, phy_nic), level=INFO)
431
432=== added file 'hooks/charmhelpers/contrib/network/ufw.py'
433--- hooks/charmhelpers/contrib/network/ufw.py 1970-01-01 00:00:00 +0000
434+++ hooks/charmhelpers/contrib/network/ufw.py 2014-12-10 07:57:54 +0000
435@@ -0,0 +1,182 @@
436+"""
437+This module contains helpers to add and remove ufw rules.
438+
439+Examples:
440+
441+- open SSH port for subnet 10.0.3.0/24:
442+
443+ >>> from charmhelpers.contrib.network import ufw
444+ >>> ufw.enable()
445+ >>> ufw.grant_access(src='10.0.3.0/24', dst='any', port='22', proto='tcp')
446+
447+- open service by name as defined in /etc/services:
448+
449+ >>> from charmhelpers.contrib.network import ufw
450+ >>> ufw.enable()
451+ >>> ufw.service('ssh', 'open')
452+
453+- close service by port number:
454+
455+ >>> from charmhelpers.contrib.network import ufw
456+ >>> ufw.enable()
457+ >>> ufw.service('4949', 'close') # munin
458+"""
459+
460+__author__ = "Felipe Reyes <felipe.reyes@canonical.com>"
461+
462+import re
463+import subprocess
464+from charmhelpers.core import hookenv
465+
466+
467+def is_enabled():
468+ """
469+ Check if `ufw` is enabled
470+
471+ :returns: True if ufw is enabled
472+ """
473+ output = subprocess.check_output(['ufw', 'status'], env={'LANG': 'en_US'})
474+
475+ m = re.findall(r'^Status: active\n', output, re.M)
476+
477+ return len(m) >= 1
478+
479+
480+def enable():
481+ """
482+ Enable ufw
483+
484+ :returns: True if ufw is successfully enabled
485+ """
486+ if is_enabled():
487+ return True
488+
489+ output = subprocess.check_output(['ufw', 'enable'], env={'LANG': 'en_US'})
490+
491+ m = re.findall('^Firewall is active and enabled on system startup\n',
492+ output, re.M)
493+ hookenv.log(output, level='DEBUG')
494+
495+ if len(m) == 0:
496+ hookenv.log("ufw couldn't be enabled", level='WARN')
497+ return False
498+ else:
499+ hookenv.log("ufw enabled", level='INFO')
500+ return True
501+
502+
503+def disable():
504+ """
505+ Disable ufw
506+
507+ :returns: True if ufw is successfully disabled
508+ """
509+ if not is_enabled():
510+ return True
511+
512+ output = subprocess.check_output(['ufw', 'disable'], env={'LANG': 'en_US'})
513+
514+ m = re.findall(r'^Firewall stopped and disabled on system startup\n',
515+ output, re.M)
516+ hookenv.log(output, level='DEBUG')
517+
518+ if len(m) == 0:
519+ hookenv.log("ufw couldn't be disabled", level='WARN')
520+ return False
521+ else:
522+ hookenv.log("ufw disabled", level='INFO')
523+ return True
524+
525+
526+def modify_access(src, dst='any', port=None, proto=None, action='allow'):
527+ """
528+ Grant access to an address or subnet
529+
530+ :param src: address (e.g. 192.168.1.234) or subnet
531+ (e.g. 192.168.1.0/24).
532+ :param dst: destiny of the connection, if the machine has multiple IPs and
533+ connections to only one of those have to accepted this is the
534+ field has to be set.
535+ :param port: destiny port
536+ :param proto: protocol (tcp or udp)
537+ :param action: `allow` or `delete`
538+ """
539+ if not is_enabled():
540+ hookenv.log('ufw is disabled, skipping modify_access()', level='WARN')
541+ return
542+
543+ if action == 'delete':
544+ cmd = ['ufw', 'delete', 'allow']
545+ else:
546+ cmd = ['ufw', action]
547+
548+ if src is not None:
549+ cmd += ['from', src]
550+
551+ if dst is not None:
552+ cmd += ['to', dst]
553+
554+ if port is not None:
555+ cmd += ['port', port]
556+
557+ if proto is not None:
558+ cmd += ['proto', proto]
559+
560+ hookenv.log('ufw {}: {}'.format(action, ' '.join(cmd)), level='DEBUG')
561+ p = subprocess.Popen(cmd, stdout=subprocess.PIPE)
562+ (stdout, stderr) = p.communicate()
563+
564+ hookenv.log(stdout, level='INFO')
565+
566+ if p.returncode != 0:
567+ hookenv.log(stderr, level='ERROR')
568+ hookenv.log('Error running: {}, exit code: {}'.format(' '.join(cmd),
569+ p.returncode),
570+ level='ERROR')
571+
572+
573+def grant_access(src, dst='any', port=None, proto=None):
574+ """
575+ Grant access to an address or subnet
576+
577+ :param src: address (e.g. 192.168.1.234) or subnet
578+ (e.g. 192.168.1.0/24).
579+ :param dst: destiny of the connection, if the machine has multiple IPs and
580+ connections to only one of those have to accepted this is the
581+ field has to be set.
582+ :param port: destiny port
583+ :param proto: protocol (tcp or udp)
584+ """
585+ return modify_access(src, dst=dst, port=port, proto=proto, action='allow')
586+
587+
588+def revoke_access(src, dst='any', port=None, proto=None):
589+ """
590+ Revoke access to an address or subnet
591+
592+ :param src: address (e.g. 192.168.1.234) or subnet
593+ (e.g. 192.168.1.0/24).
594+ :param dst: destiny of the connection, if the machine has multiple IPs and
595+ connections to only one of those have to accepted this is the
596+ field has to be set.
597+ :param port: destiny port
598+ :param proto: protocol (tcp or udp)
599+ """
600+ return modify_access(src, dst=dst, port=port, proto=proto, action='delete')
601+
602+
603+def service(name, action):
604+ """
605+ Open/close access to a service
606+
607+ :param name: could be a service name defined in `/etc/services` or a port
608+ number.
609+ :param action: `open` or `close`
610+ """
611+ if action == 'open':
612+ subprocess.check_output(['ufw', 'allow', name])
613+ elif action == 'close':
614+ subprocess.check_output(['ufw', 'delete', 'allow', name])
615+ else:
616+ raise Exception(("'{}' not supported, use 'allow' "
617+ "or 'delete'").format(action))
618
619=== modified file 'hooks/charmhelpers/contrib/openstack/amulet/deployment.py'
620--- hooks/charmhelpers/contrib/openstack/amulet/deployment.py 2014-10-06 21:57:43 +0000
621+++ hooks/charmhelpers/contrib/openstack/amulet/deployment.py 2014-12-10 07:57:54 +0000
622@@ -1,3 +1,4 @@
623+import six
624 from charmhelpers.contrib.amulet.deployment import (
625 AmuletDeployment
626 )
627@@ -69,7 +70,7 @@
628
629 def _configure_services(self, configs):
630 """Configure all of the services."""
631- for service, config in configs.iteritems():
632+ for service, config in six.iteritems(configs):
633 self.d.configure(service, config)
634
635 def _get_openstack_release(self):
636
637=== modified file 'hooks/charmhelpers/contrib/openstack/amulet/utils.py'
638--- hooks/charmhelpers/contrib/openstack/amulet/utils.py 2014-10-06 21:57:43 +0000
639+++ hooks/charmhelpers/contrib/openstack/amulet/utils.py 2014-12-10 07:57:54 +0000
640@@ -7,6 +7,8 @@
641 import keystoneclient.v2_0 as keystone_client
642 import novaclient.v1_1.client as nova_client
643
644+import six
645+
646 from charmhelpers.contrib.amulet.utils import (
647 AmuletUtils
648 )
649@@ -60,7 +62,7 @@
650 expected service catalog endpoints.
651 """
652 self.log.debug('actual: {}'.format(repr(actual)))
653- for k, v in expected.iteritems():
654+ for k, v in six.iteritems(expected):
655 if k in actual:
656 ret = self._validate_dict_data(expected[k][0], actual[k][0])
657 if ret:
658
659=== modified file 'hooks/charmhelpers/contrib/openstack/context.py'
660--- hooks/charmhelpers/contrib/openstack/context.py 2014-10-06 21:57:43 +0000
661+++ hooks/charmhelpers/contrib/openstack/context.py 2014-12-10 07:57:54 +0000
662@@ -1,20 +1,18 @@
663 import json
664 import os
665 import time
666-
667 from base64 import b64decode
668+from subprocess import check_call
669
670-from subprocess import (
671- check_call
672-)
673+import six
674
675 from charmhelpers.fetch import (
676 apt_install,
677 filter_installed_packages,
678 )
679-
680 from charmhelpers.core.hookenv import (
681 config,
682+ is_relation_made,
683 local_unit,
684 log,
685 relation_get,
686@@ -23,41 +21,40 @@
687 relation_set,
688 unit_get,
689 unit_private_ip,
690+ DEBUG,
691+ INFO,
692+ WARNING,
693 ERROR,
694- INFO
695 )
696-
697 from charmhelpers.core.host import (
698 mkdir,
699- write_file
700+ write_file,
701 )
702-
703 from charmhelpers.contrib.hahelpers.cluster import (
704 determine_apache_port,
705 determine_api_port,
706 https,
707- is_clustered
708+ is_clustered,
709 )
710-
711 from charmhelpers.contrib.hahelpers.apache import (
712 get_cert,
713 get_ca_cert,
714 install_ca_cert,
715 )
716-
717 from charmhelpers.contrib.openstack.neutron import (
718 neutron_plugin_attribute,
719 )
720-
721 from charmhelpers.contrib.network.ip import (
722 get_address_in_network,
723 get_ipv6_addr,
724 get_netmask_for_address,
725 format_ipv6_addr,
726- is_address_in_network
727+ is_address_in_network,
728 )
729+from charmhelpers.contrib.openstack.utils import get_host_ip
730
731 CA_CERT_PATH = '/usr/local/share/ca-certificates/keystone_juju_ca_cert.crt'
732+ADDRESS_TYPES = ['admin', 'internal', 'public']
733
734
735 class OSContextError(Exception):
736@@ -65,7 +62,7 @@
737
738
739 def ensure_packages(packages):
740- '''Install but do not upgrade required plugin packages'''
741+ """Install but do not upgrade required plugin packages."""
742 required = filter_installed_packages(packages)
743 if required:
744 apt_install(required, fatal=True)
745@@ -73,20 +70,27 @@
746
747 def context_complete(ctxt):
748 _missing = []
749- for k, v in ctxt.iteritems():
750+ for k, v in six.iteritems(ctxt):
751 if v is None or v == '':
752 _missing.append(k)
753+
754 if _missing:
755- log('Missing required data: %s' % ' '.join(_missing), level='INFO')
756+ log('Missing required data: %s' % ' '.join(_missing), level=INFO)
757 return False
758+
759 return True
760
761
762 def config_flags_parser(config_flags):
763+ """Parses config flags string into dict.
764+
765+ The provided config_flags string may be a list of comma-separated values
766+    which themselves may be comma-separated lists of values.
767+ """
768 if config_flags.find('==') >= 0:
769- log("config_flags is not in expected format (key=value)",
770- level=ERROR)
771+ log("config_flags is not in expected format (key=value)", level=ERROR)
772 raise OSContextError
773+
774 # strip the following from each value.
775 post_strippers = ' ,'
776 # we strip any leading/trailing '=' or ' ' from the string then
777@@ -94,7 +98,7 @@
778 split = config_flags.strip(' =').split('=')
779 limit = len(split)
780 flags = {}
781- for i in xrange(0, limit - 1):
782+ for i in range(0, limit - 1):
783 current = split[i]
784 next = split[i + 1]
785 vindex = next.rfind(',')
786@@ -109,17 +113,18 @@
787 # if this not the first entry, expect an embedded key.
788 index = current.rfind(',')
789 if index < 0:
790- log("invalid config value(s) at index %s" % (i),
791- level=ERROR)
792+ log("Invalid config value(s) at index %s" % (i), level=ERROR)
793 raise OSContextError
794 key = current[index + 1:]
795
796 # Add to collection.
797 flags[key.strip(post_strippers)] = value.rstrip(post_strippers)
798+
799 return flags
800
801
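For reference, the parser added above behaves as follows. This is a standalone copy for illustration only: the charm's `log()`/`OSContextError` plumbing is swapped for a plain `ValueError`, and the builtin-shadowing `next` variable is renamed so it can run outside a hook environment:

```python
def config_flags_parser(config_flags):
    """Parse a config flags string into a dict (standalone sketch)."""
    if config_flags.find('==') >= 0:
        raise ValueError("config_flags is not in expected format (key=value)")

    # strip the following from each value.
    post_strippers = ' ,'
    # strip any leading/trailing '=' or ' ' from the string, then split on '='
    split = config_flags.strip(' =').split('=')
    limit = len(split)
    flags = {}
    for i in range(0, limit - 1):
        current = split[i]
        next_ = split[i + 1]
        vindex = next_.rfind(',')
        if (i == limit - 2) or (vindex < 0):
            value = next_
        else:
            # not the last entry, so expect an embedded key after the comma
            value = next_[:vindex]

        if i == 0:
            key = current
        else:
            # not the first entry, so expect an embedded key
            index = current.rfind(',')
            if index < 0:
                raise ValueError("invalid config value(s) at index %s" % i)
            key = current[index + 1:]

        flags[key.strip(post_strippers)] = value.rstrip(post_strippers)

    return flags


# Values may themselves contain commas; only the last comma before the
# next '=' is treated as a key separator.
print(config_flags_parser('key1=val1,key2=a,b,c,key3=val3'))
```

This is why the docstring notes that values may themselves be comma-separated lists: `key2` above parses to `'a,b,c'`.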
802 class OSContextGenerator(object):
803+ """Base class for all context generators."""
804 interfaces = []
805
806 def __call__(self):
807@@ -131,11 +136,11 @@
808
809 def __init__(self,
810 database=None, user=None, relation_prefix=None, ssl_dir=None):
811- '''
812- Allows inspecting relation for settings prefixed with relation_prefix.
813- This is useful for parsing access for multiple databases returned via
814- the shared-db interface (eg, nova_password, quantum_password)
815- '''
816+ """Allows inspecting relation for settings prefixed with
817+ relation_prefix. This is useful for parsing access for multiple
818+ databases returned via the shared-db interface (eg, nova_password,
819+ quantum_password)
820+ """
821 self.relation_prefix = relation_prefix
822 self.database = database
823 self.user = user
824@@ -145,9 +150,8 @@
825 self.database = self.database or config('database')
826 self.user = self.user or config('database-user')
827 if None in [self.database, self.user]:
828- log('Could not generate shared_db context. '
829- 'Missing required charm config options. '
830- '(database name and user)')
831+ log("Could not generate shared_db context. Missing required charm "
832+ "config options. (database name and user)", level=ERROR)
833 raise OSContextError
834
835 ctxt = {}
836@@ -200,23 +204,24 @@
837 def __call__(self):
838 self.database = self.database or config('database')
839 if self.database is None:
840- log('Could not generate postgresql_db context. '
841- 'Missing required charm config options. '
842- '(database name)')
843+ log('Could not generate postgresql_db context. Missing required '
844+ 'charm config options. (database name)', level=ERROR)
845 raise OSContextError
846+
847 ctxt = {}
848-
849 for rid in relation_ids(self.interfaces[0]):
850 for unit in related_units(rid):
851- ctxt = {
852- 'database_host': relation_get('host', rid=rid, unit=unit),
853- 'database': self.database,
854- 'database_user': relation_get('user', rid=rid, unit=unit),
855- 'database_password': relation_get('password', rid=rid, unit=unit),
856- 'database_type': 'postgresql',
857- }
858+ rel_host = relation_get('host', rid=rid, unit=unit)
859+ rel_user = relation_get('user', rid=rid, unit=unit)
860+ rel_passwd = relation_get('password', rid=rid, unit=unit)
861+ ctxt = {'database_host': rel_host,
862+ 'database': self.database,
863+ 'database_user': rel_user,
864+ 'database_password': rel_passwd,
865+ 'database_type': 'postgresql'}
866 if context_complete(ctxt):
867 return ctxt
868+
869 return {}
870
871
872@@ -225,23 +230,29 @@
873 ca_path = os.path.join(ssl_dir, 'db-client.ca')
874 with open(ca_path, 'w') as fh:
875 fh.write(b64decode(rdata['ssl_ca']))
876+
877 ctxt['database_ssl_ca'] = ca_path
878 elif 'ssl_ca' in rdata:
879- log("Charm not setup for ssl support but ssl ca found")
880+ log("Charm not setup for ssl support but ssl ca found", level=INFO)
881 return ctxt
882+
883 if 'ssl_cert' in rdata:
884 cert_path = os.path.join(
885 ssl_dir, 'db-client.cert')
886 if not os.path.exists(cert_path):
887- log("Waiting 1m for ssl client cert validity")
888+ log("Waiting 1m for ssl client cert validity", level=INFO)
889 time.sleep(60)
890+
891 with open(cert_path, 'w') as fh:
892 fh.write(b64decode(rdata['ssl_cert']))
893+
894 ctxt['database_ssl_cert'] = cert_path
895 key_path = os.path.join(ssl_dir, 'db-client.key')
896 with open(key_path, 'w') as fh:
897 fh.write(b64decode(rdata['ssl_key']))
898+
899 ctxt['database_ssl_key'] = key_path
900+
901 return ctxt
902
903
904@@ -249,9 +260,8 @@
905 interfaces = ['identity-service']
906
907 def __call__(self):
908- log('Generating template context for identity-service')
909+ log('Generating template context for identity-service', level=DEBUG)
910 ctxt = {}
911-
912 for rid in relation_ids('identity-service'):
913 for unit in related_units(rid):
914 rdata = relation_get(rid=rid, unit=unit)
915@@ -259,26 +269,24 @@
916 serv_host = format_ipv6_addr(serv_host) or serv_host
917 auth_host = rdata.get('auth_host')
918 auth_host = format_ipv6_addr(auth_host) or auth_host
919-
920- ctxt = {
921- 'service_port': rdata.get('service_port'),
922- 'service_host': serv_host,
923- 'auth_host': auth_host,
924- 'auth_port': rdata.get('auth_port'),
925- 'admin_tenant_name': rdata.get('service_tenant'),
926- 'admin_user': rdata.get('service_username'),
927- 'admin_password': rdata.get('service_password'),
928- 'service_protocol':
929- rdata.get('service_protocol') or 'http',
930- 'auth_protocol':
931- rdata.get('auth_protocol') or 'http',
932- }
933+ svc_protocol = rdata.get('service_protocol') or 'http'
934+ auth_protocol = rdata.get('auth_protocol') or 'http'
935+ ctxt = {'service_port': rdata.get('service_port'),
936+ 'service_host': serv_host,
937+ 'auth_host': auth_host,
938+ 'auth_port': rdata.get('auth_port'),
939+ 'admin_tenant_name': rdata.get('service_tenant'),
940+ 'admin_user': rdata.get('service_username'),
941+ 'admin_password': rdata.get('service_password'),
942+ 'service_protocol': svc_protocol,
943+ 'auth_protocol': auth_protocol}
944 if context_complete(ctxt):
945 # NOTE(jamespage) this is required for >= icehouse
946 # so a missing value just indicates keystone needs
947 # upgrading
948 ctxt['admin_tenant_id'] = rdata.get('service_tenant_id')
949 return ctxt
950+
951 return {}
952
953
954@@ -291,21 +299,23 @@
955 self.interfaces = [rel_name]
956
957 def __call__(self):
958- log('Generating template context for amqp')
959+ log('Generating template context for amqp', level=DEBUG)
960 conf = config()
961- user_setting = 'rabbit-user'
962- vhost_setting = 'rabbit-vhost'
963 if self.relation_prefix:
964- user_setting = self.relation_prefix + '-rabbit-user'
965- vhost_setting = self.relation_prefix + '-rabbit-vhost'
966+ user_setting = '%s-rabbit-user' % (self.relation_prefix)
967+ vhost_setting = '%s-rabbit-vhost' % (self.relation_prefix)
968+ else:
969+ user_setting = 'rabbit-user'
970+ vhost_setting = 'rabbit-vhost'
971
972 try:
973 username = conf[user_setting]
974 vhost = conf[vhost_setting]
975 except KeyError as e:
976- log('Could not generate shared_db context. '
977- 'Missing required charm config options: %s.' % e)
978+ log('Could not generate shared_db context. Missing required charm '
979+ 'config options: %s.' % e, level=ERROR)
980 raise OSContextError
981+
982 ctxt = {}
983 for rid in relation_ids(self.rel_name):
984 ha_vip_only = False
985@@ -319,6 +329,7 @@
986 host = relation_get('private-address', rid=rid, unit=unit)
987 host = format_ipv6_addr(host) or host
988 ctxt['rabbitmq_host'] = host
989+
990 ctxt.update({
991 'rabbitmq_user': username,
992 'rabbitmq_password': relation_get('password', rid=rid,
993@@ -329,6 +340,7 @@
994 ssl_port = relation_get('ssl_port', rid=rid, unit=unit)
995 if ssl_port:
996 ctxt['rabbit_ssl_port'] = ssl_port
997+
998 ssl_ca = relation_get('ssl_ca', rid=rid, unit=unit)
999 if ssl_ca:
1000 ctxt['rabbit_ssl_ca'] = ssl_ca
1001@@ -342,41 +354,45 @@
1002 if context_complete(ctxt):
1003 if 'rabbit_ssl_ca' in ctxt:
1004 if not self.ssl_dir:
1005- log(("Charm not setup for ssl support "
1006- "but ssl ca found"))
1007+ log("Charm not setup for ssl support but ssl ca "
1008+ "found", level=INFO)
1009 break
1010+
1011 ca_path = os.path.join(
1012 self.ssl_dir, 'rabbit-client-ca.pem')
1013 with open(ca_path, 'w') as fh:
1014 fh.write(b64decode(ctxt['rabbit_ssl_ca']))
1015 ctxt['rabbit_ssl_ca'] = ca_path
1016+
1017 # Sufficient information found = break out!
1018 break
1019+
1020 # Used for active/active rabbitmq >= grizzly
1021- if ('clustered' not in ctxt or ha_vip_only) \
1022- and len(related_units(rid)) > 1:
1023+ if (('clustered' not in ctxt or ha_vip_only) and
1024+ len(related_units(rid)) > 1):
1025 rabbitmq_hosts = []
1026 for unit in related_units(rid):
1027 host = relation_get('private-address', rid=rid, unit=unit)
1028 host = format_ipv6_addr(host) or host
1029 rabbitmq_hosts.append(host)
1030- ctxt['rabbitmq_hosts'] = ','.join(rabbitmq_hosts)
1031+
1032+ ctxt['rabbitmq_hosts'] = ','.join(sorted(rabbitmq_hosts))
1033+
1034 if not context_complete(ctxt):
1035 return {}
1036- else:
1037- return ctxt
1038+
1039+ return ctxt
1040
1041
1042 class CephContext(OSContextGenerator):
1043+ """Generates context for /etc/ceph/ceph.conf templates."""
1044 interfaces = ['ceph']
1045
1046 def __call__(self):
1047- '''This generates context for /etc/ceph/ceph.conf templates'''
1048 if not relation_ids('ceph'):
1049 return {}
1050
1051- log('Generating template context for ceph')
1052-
1053+ log('Generating template context for ceph', level=DEBUG)
1054 mon_hosts = []
1055 auth = None
1056 key = None
1057@@ -385,18 +401,18 @@
1058 for unit in related_units(rid):
1059 auth = relation_get('auth', rid=rid, unit=unit)
1060 key = relation_get('key', rid=rid, unit=unit)
1061- ceph_addr = \
1062- relation_get('ceph-public-address', rid=rid, unit=unit) or \
1063- relation_get('private-address', rid=rid, unit=unit)
1064+ ceph_pub_addr = relation_get('ceph-public-address', rid=rid,
1065+ unit=unit)
1066+ unit_priv_addr = relation_get('private-address', rid=rid,
1067+ unit=unit)
1068+ ceph_addr = ceph_pub_addr or unit_priv_addr
1069 ceph_addr = format_ipv6_addr(ceph_addr) or ceph_addr
1070 mon_hosts.append(ceph_addr)
1071
1072- ctxt = {
1073- 'mon_hosts': ' '.join(mon_hosts),
1074- 'auth': auth,
1075- 'key': key,
1076- 'use_syslog': use_syslog
1077- }
1078+ ctxt = {'mon_hosts': ' '.join(sorted(mon_hosts)),
1079+ 'auth': auth,
1080+ 'key': key,
1081+ 'use_syslog': use_syslog}
1082
1083 if not os.path.isdir('/etc/ceph'):
1084 os.mkdir('/etc/ceph')
1085@@ -405,79 +421,68 @@
1086 return {}
1087
1088 ensure_packages(['ceph-common'])
1089-
1090 return ctxt
1091
1092
1093-ADDRESS_TYPES = ['admin', 'internal', 'public']
1094-
1095-
1096 class HAProxyContext(OSContextGenerator):
1097+ """Provides half a context for the haproxy template, which describes
1098+ all peers to be included in the cluster. Each charm needs to include
1099+ its own context generator that describes the port mapping.
1100+ """
1101 interfaces = ['cluster']
1102
1103+ def __init__(self, singlenode_mode=False):
1104+ self.singlenode_mode = singlenode_mode
1105+
1106 def __call__(self):
1107- '''
1108- Builds half a context for the haproxy template, which describes
1109- all peers to be included in the cluster. Each charm needs to include
1110- its own context generator that describes the port mapping.
1111- '''
1112- if not relation_ids('cluster'):
1113+ if not relation_ids('cluster') and not self.singlenode_mode:
1114 return {}
1115
1116+ if config('prefer-ipv6'):
1117+ addr = get_ipv6_addr(exc_list=[config('vip')])[0]
1118+ else:
1119+ addr = get_host_ip(unit_get('private-address'))
1120+
1121 l_unit = local_unit().replace('/', '-')
1122-
1123- if config('prefer-ipv6'):
1124- addr = get_ipv6_addr(exc_list=[config('vip')])[0]
1125- else:
1126- addr = unit_get('private-address')
1127-
1128 cluster_hosts = {}
1129
1130 # NOTE(jamespage): build out map of configured network endpoints
1131 # and associated backends
1132 for addr_type in ADDRESS_TYPES:
1133- laddr = get_address_in_network(
1134- config('os-{}-network'.format(addr_type)))
1135+ cfg_opt = 'os-{}-network'.format(addr_type)
1136+ laddr = get_address_in_network(config(cfg_opt))
1137 if laddr:
1138- cluster_hosts[laddr] = {}
1139- cluster_hosts[laddr]['network'] = "{}/{}".format(
1140- laddr,
1141- get_netmask_for_address(laddr)
1142- )
1143- cluster_hosts[laddr]['backends'] = {}
1144- cluster_hosts[laddr]['backends'][l_unit] = laddr
1145+ netmask = get_netmask_for_address(laddr)
1146+ cluster_hosts[laddr] = {'network': "{}/{}".format(laddr,
1147+ netmask),
1148+ 'backends': {l_unit: laddr}}
1149 for rid in relation_ids('cluster'):
1150 for unit in related_units(rid):
1151- _unit = unit.replace('/', '-')
1152 _laddr = relation_get('{}-address'.format(addr_type),
1153 rid=rid, unit=unit)
1154 if _laddr:
1155+ _unit = unit.replace('/', '-')
1156 cluster_hosts[laddr]['backends'][_unit] = _laddr
1157
1158 # NOTE(jamespage) no split configurations found, just use
1159 # private addresses
1160 if not cluster_hosts:
1161- cluster_hosts[addr] = {}
1162- cluster_hosts[addr]['network'] = "{}/{}".format(
1163- addr,
1164- get_netmask_for_address(addr)
1165- )
1166- cluster_hosts[addr]['backends'] = {}
1167- cluster_hosts[addr]['backends'][l_unit] = addr
1168+ netmask = get_netmask_for_address(addr)
1169+ cluster_hosts[addr] = {'network': "{}/{}".format(addr, netmask),
1170+ 'backends': {l_unit: addr}}
1171 for rid in relation_ids('cluster'):
1172 for unit in related_units(rid):
1173- _unit = unit.replace('/', '-')
1174 _laddr = relation_get('private-address',
1175 rid=rid, unit=unit)
1176 if _laddr:
1177+ _unit = unit.replace('/', '-')
1178 cluster_hosts[addr]['backends'][_unit] = _laddr
1179
1180- ctxt = {
1181- 'frontends': cluster_hosts,
1182- }
1183+ ctxt = {'frontends': cluster_hosts}
1184
1185 if config('haproxy-server-timeout'):
1186 ctxt['haproxy_server_timeout'] = config('haproxy-server-timeout')
1187+
1188 if config('haproxy-client-timeout'):
1189 ctxt['haproxy_client_timeout'] = config('haproxy-client-timeout')
1190
1191@@ -491,13 +496,18 @@
1192 ctxt['stat_port'] = ':8888'
1193
1194 for frontend in cluster_hosts:
1195- if len(cluster_hosts[frontend]['backends']) > 1:
1196+ if (len(cluster_hosts[frontend]['backends']) > 1 or
1197+ self.singlenode_mode):
1198 # Enable haproxy when we have enough peers.
1199- log('Ensuring haproxy enabled in /etc/default/haproxy.')
1200+ log('Ensuring haproxy enabled in /etc/default/haproxy.',
1201+ level=DEBUG)
1202 with open('/etc/default/haproxy', 'w') as out:
1203 out.write('ENABLED=1\n')
1204+
1205 return ctxt
1206- log('HAProxy context is incomplete, this unit has no peers.')
1207+
1208+ log('HAProxy context is incomplete, this unit has no peers.',
1209+ level=INFO)
1210 return {}
1211
1212
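The `cluster_hosts` map that `HAProxyContext` assembles can be illustrated with a small, hypothetical helper (`build_cluster_hosts` and its explicit inputs are inventions for this sketch; the charm derives them from config options and relation data): one frontend per network address, each carrying a `network` string and a unit-name to address `backends` map, with `/` in unit names rewritten to `-`:

```python
# Hypothetical sketch (not part of the charm) of the frontends/backends
# structure handed to the haproxy template.
def build_cluster_hosts(local_unit, addr, netmask, peer_addrs):
    l_unit = local_unit.replace('/', '-')
    hosts = {addr: {'network': '{}/{}'.format(addr, netmask),
                    'backends': {l_unit: addr}}}
    for unit, peer_addr in peer_addrs.items():
        # Peer units join the same frontend's backend map.
        hosts[addr]['backends'][unit.replace('/', '-')] = peer_addr
    return hosts


hosts = build_cluster_hosts('nova-compute/0', '10.0.0.10', '255.255.255.0',
                            {'nova-compute/1': '10.0.0.11'})
```

With the `singlenode_mode` flag introduced above, haproxy is enabled even when a frontend has only the local backend.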
1213@@ -505,29 +515,28 @@
1214 interfaces = ['image-service']
1215
1216 def __call__(self):
1217- '''
1218- Obtains the glance API server from the image-service relation. Useful
1219- in nova and cinder (currently).
1220- '''
1221- log('Generating template context for image-service.')
1222+ """Obtains the glance API server from the image-service relation.
1223+ Useful in nova and cinder (currently).
1224+ """
1225+ log('Generating template context for image-service.', level=DEBUG)
1226 rids = relation_ids('image-service')
1227 if not rids:
1228 return {}
1229+
1230 for rid in rids:
1231 for unit in related_units(rid):
1232 api_server = relation_get('glance-api-server',
1233 rid=rid, unit=unit)
1234 if api_server:
1235 return {'glance_api_servers': api_server}
1236- log('ImageService context is incomplete. '
1237- 'Missing required relation data.')
1238+
1239+ log("ImageService context is incomplete. Missing required relation "
1240+ "data.", level=INFO)
1241 return {}
1242
1243
1244 class ApacheSSLContext(OSContextGenerator):
1245-
1246- """
1247- Generates a context for an apache vhost configuration that configures
1248+ """Generates a context for an apache vhost configuration that configures
1249 HTTPS reverse proxying for one or many endpoints. Generated context
1250 looks something like::
1251
1252@@ -561,6 +570,7 @@
1253 else:
1254 cert_filename = 'cert'
1255 key_filename = 'key'
1256+
1257 write_file(path=os.path.join(ssl_dir, cert_filename),
1258 content=b64decode(cert))
1259 write_file(path=os.path.join(ssl_dir, key_filename),
1260@@ -572,7 +582,8 @@
1261 install_ca_cert(b64decode(ca_cert))
1262
1263 def canonical_names(self):
1264- '''Figure out which canonical names clients will access this service'''
1265+ """Figure out which canonical names clients will access this service.
1266+ """
1267 cns = []
1268 for r_id in relation_ids('identity-service'):
1269 for unit in related_units(r_id):
1270@@ -580,55 +591,80 @@
1271 for k in rdata:
1272 if k.startswith('ssl_key_'):
1273 cns.append(k.lstrip('ssl_key_'))
1274- return list(set(cns))
1275+
1276+ return sorted(list(set(cns)))
1277+
1278+ def get_network_addresses(self):
1279+ """For each network configured, return corresponding address and vip
1280+ (if available).
1281+
1282+ Returns a list of tuples of the form:
1283+
1284+ [(address_in_net_a, vip_in_net_a),
1285+ (address_in_net_b, vip_in_net_b),
1286+ ...]
1287+
1288+ or, if no vip(s) available:
1289+
1290+ [(address_in_net_a, address_in_net_a),
1291+ (address_in_net_b, address_in_net_b),
1292+ ...]
1293+ """
1294+ addresses = []
1295+ if config('vip'):
1296+ vips = config('vip').split()
1297+ else:
1298+ vips = []
1299+
1300+ for net_type in ['os-internal-network', 'os-admin-network',
1301+ 'os-public-network']:
1302+ addr = get_address_in_network(config(net_type),
1303+ unit_get('private-address'))
1304+ if len(vips) > 1 and is_clustered():
1305+ if not config(net_type):
1306+ log("Multiple networks configured but net_type "
1307+ "is None (%s)." % net_type, level=WARNING)
1308+ continue
1309+
1310+ for vip in vips:
1311+ if is_address_in_network(config(net_type), vip):
1312+ addresses.append((addr, vip))
1313+ break
1314+
1315+ elif is_clustered() and config('vip'):
1316+ addresses.append((addr, config('vip')))
1317+ else:
1318+ addresses.append((addr, addr))
1319+
1320+ return sorted(addresses)
1321
1322 def __call__(self):
1323- if isinstance(self.external_ports, basestring):
1324+ if isinstance(self.external_ports, six.string_types):
1325 self.external_ports = [self.external_ports]
1326- if (not self.external_ports or not https()):
1327+
1328+ if not self.external_ports or not https():
1329 return {}
1330
1331 self.configure_ca()
1332 self.enable_modules()
1333
1334- ctxt = {
1335- 'namespace': self.service_namespace,
1336- 'endpoints': [],
1337- 'ext_ports': []
1338- }
1339+ ctxt = {'namespace': self.service_namespace,
1340+ 'endpoints': [],
1341+ 'ext_ports': []}
1342
1343 for cn in self.canonical_names():
1344 self.configure_cert(cn)
1345
1346- addresses = []
1347- vips = []
1348- if config('vip'):
1349- vips = config('vip').split()
1350-
1351- for network_type in ['os-internal-network',
1352- 'os-admin-network',
1353- 'os-public-network']:
1354- address = get_address_in_network(config(network_type),
1355- unit_get('private-address'))
1356- if len(vips) > 0 and is_clustered():
1357- for vip in vips:
1358- if is_address_in_network(config(network_type),
1359- vip):
1360- addresses.append((address, vip))
1361- break
1362- elif is_clustered():
1363- addresses.append((address, config('vip')))
1364- else:
1365- addresses.append((address, address))
1366-
1367- for address, endpoint in set(addresses):
1368+ addresses = self.get_network_addresses()
1369+ for address, endpoint in sorted(set(addresses)):
1370 for api_port in self.external_ports:
1371 ext_port = determine_apache_port(api_port)
1372 int_port = determine_api_port(api_port)
1373 portmap = (address, endpoint, int(ext_port), int(int_port))
1374 ctxt['endpoints'].append(portmap)
1375 ctxt['ext_ports'].append(int(ext_port))
1376- ctxt['ext_ports'] = list(set(ctxt['ext_ports']))
1377+
1378+ ctxt['ext_ports'] = sorted(list(set(ctxt['ext_ports'])))
1379 return ctxt
1380
1381
1382@@ -645,21 +681,23 @@
1383
1384 @property
1385 def packages(self):
1386- return neutron_plugin_attribute(
1387- self.plugin, 'packages', self.network_manager)
1388+ return neutron_plugin_attribute(self.plugin, 'packages',
1389+ self.network_manager)
1390
1391 @property
1392 def neutron_security_groups(self):
1393 return None
1394
1395 def _ensure_packages(self):
1396- [ensure_packages(pkgs) for pkgs in self.packages]
1397+ for pkgs in self.packages:
1398+ ensure_packages(pkgs)
1399
1400 def _save_flag_file(self):
1401 if self.network_manager == 'quantum':
1402 _file = '/etc/nova/quantum_plugin.conf'
1403 else:
1404 _file = '/etc/nova/neutron_plugin.conf'
1405+
1406 with open(_file, 'wb') as out:
1407 out.write(self.plugin + '\n')
1408
1409@@ -668,13 +706,11 @@
1410 self.network_manager)
1411 config = neutron_plugin_attribute(self.plugin, 'config',
1412 self.network_manager)
1413- ovs_ctxt = {
1414- 'core_plugin': driver,
1415- 'neutron_plugin': 'ovs',
1416- 'neutron_security_groups': self.neutron_security_groups,
1417- 'local_ip': unit_private_ip(),
1418- 'config': config
1419- }
1420+ ovs_ctxt = {'core_plugin': driver,
1421+ 'neutron_plugin': 'ovs',
1422+ 'neutron_security_groups': self.neutron_security_groups,
1423+ 'local_ip': unit_private_ip(),
1424+ 'config': config}
1425
1426 return ovs_ctxt
1427
1428@@ -683,13 +719,11 @@
1429 self.network_manager)
1430 config = neutron_plugin_attribute(self.plugin, 'config',
1431 self.network_manager)
1432- nvp_ctxt = {
1433- 'core_plugin': driver,
1434- 'neutron_plugin': 'nvp',
1435- 'neutron_security_groups': self.neutron_security_groups,
1436- 'local_ip': unit_private_ip(),
1437- 'config': config
1438- }
1439+ nvp_ctxt = {'core_plugin': driver,
1440+ 'neutron_plugin': 'nvp',
1441+ 'neutron_security_groups': self.neutron_security_groups,
1442+ 'local_ip': unit_private_ip(),
1443+ 'config': config}
1444
1445 return nvp_ctxt
1446
1447@@ -698,35 +732,50 @@
1448 self.network_manager)
1449 n1kv_config = neutron_plugin_attribute(self.plugin, 'config',
1450 self.network_manager)
1451- n1kv_ctxt = {
1452- 'core_plugin': driver,
1453- 'neutron_plugin': 'n1kv',
1454- 'neutron_security_groups': self.neutron_security_groups,
1455- 'local_ip': unit_private_ip(),
1456- 'config': n1kv_config,
1457- 'vsm_ip': config('n1kv-vsm-ip'),
1458- 'vsm_username': config('n1kv-vsm-username'),
1459- 'vsm_password': config('n1kv-vsm-password'),
1460- 'restrict_policy_profiles': config(
1461- 'n1kv_restrict_policy_profiles'),
1462- }
1463+ n1kv_user_config_flags = config('n1kv-config-flags')
1464+ restrict_policy_profiles = config('n1kv-restrict-policy-profiles')
1465+ n1kv_ctxt = {'core_plugin': driver,
1466+ 'neutron_plugin': 'n1kv',
1467+ 'neutron_security_groups': self.neutron_security_groups,
1468+ 'local_ip': unit_private_ip(),
1469+ 'config': n1kv_config,
1470+ 'vsm_ip': config('n1kv-vsm-ip'),
1471+ 'vsm_username': config('n1kv-vsm-username'),
1472+ 'vsm_password': config('n1kv-vsm-password'),
1473+ 'restrict_policy_profiles': restrict_policy_profiles}
1474+
1475+ if n1kv_user_config_flags:
1476+ flags = config_flags_parser(n1kv_user_config_flags)
1477+ n1kv_ctxt['user_config_flags'] = flags
1478
1479 return n1kv_ctxt
1480
1481+ def calico_ctxt(self):
1482+ driver = neutron_plugin_attribute(self.plugin, 'driver',
1483+ self.network_manager)
1484+ config = neutron_plugin_attribute(self.plugin, 'config',
1485+ self.network_manager)
1486+ calico_ctxt = {'core_plugin': driver,
1487+ 'neutron_plugin': 'Calico',
1488+ 'neutron_security_groups': self.neutron_security_groups,
1489+ 'local_ip': unit_private_ip(),
1490+ 'config': config}
1491+
1492+ return calico_ctxt
1493+
1494 def neutron_ctxt(self):
1495 if https():
1496 proto = 'https'
1497 else:
1498 proto = 'http'
1499+
1500 if is_clustered():
1501 host = config('vip')
1502 else:
1503 host = unit_get('private-address')
1504- url = '%s://%s:%s' % (proto, host, '9696')
1505- ctxt = {
1506- 'network_manager': self.network_manager,
1507- 'neutron_url': url,
1508- }
1509+
1510+ ctxt = {'network_manager': self.network_manager,
1511+ 'neutron_url': '%s://%s:%s' % (proto, host, '9696')}
1512 return ctxt
1513
1514 def __call__(self):
1515@@ -746,6 +795,8 @@
1516 ctxt.update(self.nvp_ctxt())
1517 elif self.plugin == 'n1kv':
1518 ctxt.update(self.n1kv_ctxt())
1519+ elif self.plugin == 'Calico':
1520+ ctxt.update(self.calico_ctxt())
1521
1522 alchemy_flags = config('neutron-alchemy-flags')
1523 if alchemy_flags:
1524@@ -757,23 +808,40 @@
1525
1526
1527 class OSConfigFlagContext(OSContextGenerator):
1528-
1529- """
1530- Responsible for adding user-defined config-flags in charm config to a
1531- template context.
1532+ """Provides support for user-defined config flags.
1533+
1534+    Users can define a comma-separated list of key=value pairs
1535+ in the charm configuration and apply them at any point in
1536+ any file by using a template flag.
1537+
1538+ Sometimes users might want config flags inserted within a
1539+ specific section so this class allows users to specify the
1540+ template flag name, allowing for multiple template flags
1541+ (sections) within the same context.
1542
1543 NOTE: the value of config-flags may be a comma-separated list of
1544 key=value pairs and some Openstack config files support
1545 comma-separated lists as values.
1546 """
1547
1548+ def __init__(self, charm_flag='config-flags',
1549+ template_flag='user_config_flags'):
1550+ """
1551+ :param charm_flag: config flags in charm configuration.
1552+ :param template_flag: insert point for user-defined flags in template
1553+ file.
1554+ """
1555+ super(OSConfigFlagContext, self).__init__()
1556+ self._charm_flag = charm_flag
1557+ self._template_flag = template_flag
1558+
1559 def __call__(self):
1560- config_flags = config('config-flags')
1561+ config_flags = config(self._charm_flag)
1562 if not config_flags:
1563 return {}
1564
1565- flags = config_flags_parser(config_flags)
1566- return {'user_config_flags': flags}
1567+ return {self._template_flag:
1568+ config_flags_parser(config_flags)}
1569
1570
1571 class SubordinateConfigContext(OSContextGenerator):
1572@@ -817,7 +885,6 @@
1573 },
1574 }
1575 }
1576-
1577 """
1578
1579 def __init__(self, service, config_file, interface):
1580@@ -847,26 +914,28 @@
1581
1582 if self.service not in sub_config:
1583 log('Found subordinate_config on %s but it contained'
1584- 'nothing for %s service' % (rid, self.service))
1585+                    ' nothing for %s service' % (rid, self.service),
1586+ level=INFO)
1587 continue
1588
1589 sub_config = sub_config[self.service]
1590 if self.config_file not in sub_config:
1591 log('Found subordinate_config on %s but it contained'
1592- 'nothing for %s' % (rid, self.config_file))
1593+                    ' nothing for %s' % (rid, self.config_file),
1594+ level=INFO)
1595 continue
1596
1597 sub_config = sub_config[self.config_file]
1598- for k, v in sub_config.iteritems():
1599+ for k, v in six.iteritems(sub_config):
1600 if k == 'sections':
1601- for section, config_dict in v.iteritems():
1602- log("adding section '%s'" % (section))
1603+ for section, config_dict in six.iteritems(v):
1604+ log("adding section '%s'" % (section),
1605+ level=DEBUG)
1606 ctxt[k][section] = config_dict
1607 else:
1608 ctxt[k] = v
1609
1610- log("%d section(s) found" % (len(ctxt['sections'])), level=INFO)
1611-
1612+ log("%d section(s) found" % (len(ctxt['sections'])), level=DEBUG)
1613 return ctxt
1614
1615
1616@@ -878,15 +947,14 @@
1617 False if config('debug') is None else config('debug')
1618 ctxt['verbose'] = \
1619 False if config('verbose') is None else config('verbose')
1620+
1621 return ctxt
1622
1623
1624 class SyslogContext(OSContextGenerator):
1625
1626 def __call__(self):
1627- ctxt = {
1628- 'use_syslog': config('use-syslog')
1629- }
1630+ ctxt = {'use_syslog': config('use-syslog')}
1631 return ctxt
1632
1633
1634@@ -894,10 +962,56 @@
1635
1636 def __call__(self):
1637 if config('prefer-ipv6'):
1638- return {
1639- 'bind_host': '::'
1640- }
1641+ return {'bind_host': '::'}
1642 else:
1643- return {
1644- 'bind_host': '0.0.0.0'
1645- }
1646+ return {'bind_host': '0.0.0.0'}
1647+
1648+
1649+class WorkerConfigContext(OSContextGenerator):
1650+
1651+ @property
1652+ def num_cpus(self):
1653+ try:
1654+ from psutil import NUM_CPUS
1655+ except ImportError:
1656+ apt_install('python-psutil', fatal=True)
1657+ from psutil import NUM_CPUS
1658+
1659+ return NUM_CPUS
1660+
1661+ def __call__(self):
1662+ multiplier = config('worker-multiplier') or 0
1663+ ctxt = {"workers": self.num_cpus * multiplier}
1664+ return ctxt
1665+
1666+
1667+class ZeroMQContext(OSContextGenerator):
1668+ interfaces = ['zeromq-configuration']
1669+
1670+ def __call__(self):
1671+ ctxt = {}
1672+ if is_relation_made('zeromq-configuration', 'host'):
1673+ for rid in relation_ids('zeromq-configuration'):
1674+ for unit in related_units(rid):
1675+ ctxt['zmq_nonce'] = relation_get('nonce', unit, rid)
1676+ ctxt['zmq_host'] = relation_get('host', unit, rid)
1677+
1678+ return ctxt
1679+
1680+
1681+class NotificationDriverContext(OSContextGenerator):
1682+
1683+ def __init__(self, zmq_relation='zeromq-configuration',
1684+ amqp_relation='amqp'):
1685+ """
1686+ :param zmq_relation: Name of Zeromq relation to check
1687+ """
1688+ self.zmq_relation = zmq_relation
1689+ self.amqp_relation = amqp_relation
1690+
1691+ def __call__(self):
1692+ ctxt = {'notifications': 'False'}
1693+ if is_relation_made(self.amqp_relation):
1694+ ctxt['notifications'] = "True"
1695+
1696+ return ctxt
1697
1698=== modified file 'hooks/charmhelpers/contrib/openstack/ip.py'
1699--- hooks/charmhelpers/contrib/openstack/ip.py 2014-09-22 20:22:04 +0000
1700+++ hooks/charmhelpers/contrib/openstack/ip.py 2014-12-10 07:57:54 +0000
1701@@ -2,21 +2,19 @@
1702 config,
1703 unit_get,
1704 )
1705-
1706 from charmhelpers.contrib.network.ip import (
1707 get_address_in_network,
1708 is_address_in_network,
1709 is_ipv6,
1710 get_ipv6_addr,
1711 )
1712-
1713 from charmhelpers.contrib.hahelpers.cluster import is_clustered
1714
1715 PUBLIC = 'public'
1716 INTERNAL = 'int'
1717 ADMIN = 'admin'
1718
1719-_address_map = {
1720+ADDRESS_MAP = {
1721 PUBLIC: {
1722 'config': 'os-public-network',
1723 'fallback': 'public-address'
1724@@ -33,16 +31,14 @@
1725
1726
1727 def canonical_url(configs, endpoint_type=PUBLIC):
1728- '''
1729- Returns the correct HTTP URL to this host given the state of HTTPS
1730+ """Returns the correct HTTP URL to this host given the state of HTTPS
1731 configuration, hacluster and charm configuration.
1732
1733- :configs OSTemplateRenderer: A config tempating object to inspect for
1734- a complete https context.
1735- :endpoint_type str: The endpoint type to resolve.
1736-
1737- :returns str: Base URL for services on the current service unit.
1738- '''
1739+ :param configs: OSTemplateRenderer config templating object to inspect
1740+ for a complete https context.
1741+ :param endpoint_type: str endpoint type to resolve.
1742+    :returns: str base URL for services on the current service unit.
1743+ """
1744 scheme = 'http'
1745 if 'https' in configs.complete_contexts():
1746 scheme = 'https'
1747@@ -53,27 +49,45 @@
1748
1749
1750 def resolve_address(endpoint_type=PUBLIC):
1751+ """Return unit address depending on net config.
1752+
1753+ If unit is clustered with vip(s) and has net splits defined, return vip on
1754+ correct network. If clustered with no nets defined, return primary vip.
1755+
1756+ If not clustered, return unit address ensuring address is on configured net
1757+ split if one is configured.
1758+
1759+ :param endpoint_type: Network endpoint type
1760+ """
1761 resolved_address = None
1762- if is_clustered():
1763- if config(_address_map[endpoint_type]['config']) is None:
1764- # Assume vip is simple and pass back directly
1765- resolved_address = config('vip')
1766+ vips = config('vip')
1767+ if vips:
1768+ vips = vips.split()
1769+
1770+ net_type = ADDRESS_MAP[endpoint_type]['config']
1771+ net_addr = config(net_type)
1772+ net_fallback = ADDRESS_MAP[endpoint_type]['fallback']
1773+ clustered = is_clustered()
1774+ if clustered:
1775+ if not net_addr:
1776+ # If no net-splits defined, we expect a single vip
1777+ resolved_address = vips[0]
1778 else:
1779- for vip in config('vip').split():
1780- if is_address_in_network(
1781- config(_address_map[endpoint_type]['config']),
1782- vip):
1783+ for vip in vips:
1784+ if is_address_in_network(net_addr, vip):
1785 resolved_address = vip
1786+ break
1787 else:
1788 if config('prefer-ipv6'):
1789- fallback_addr = get_ipv6_addr(exc_list=[config('vip')])[0]
1790+ fallback_addr = get_ipv6_addr(exc_list=vips)[0]
1791 else:
1792- fallback_addr = unit_get(_address_map[endpoint_type]['fallback'])
1793- resolved_address = get_address_in_network(
1794- config(_address_map[endpoint_type]['config']), fallback_addr)
1795+ fallback_addr = unit_get(net_fallback)
1796+
1797+ resolved_address = get_address_in_network(net_addr, fallback_addr)
1798
1799 if resolved_address is None:
1800- raise ValueError('Unable to resolve a suitable IP address'
1801- ' based on charm state and configuration')
1802- else:
1803- return resolved_address
1804+ raise ValueError("Unable to resolve a suitable IP address based on "
1805+ "charm state and configuration. (net_type=%s, "
1806+ "clustered=%s)" % (net_type, clustered))
1807+
1808+ return resolved_address
1809
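The reworked `resolve_address` hunk above replaces the per-iteration `config()` lookups with a single vip list and adds a `break` once a vip matches the configured network. The selection step can be sketched in isolation; `pick_vip` is a hypothetical stand-in for that loop, using the Python 3 stdlib `ipaddress` module instead of charm-helpers' `is_address_in_network`:

```python
import ipaddress

def pick_vip(vips, net_addr):
    # Mirror of the loop/break added to resolve_address: return the
    # first vip that falls inside the configured network, else None.
    for vip in vips:
        if ipaddress.ip_address(vip) in ipaddress.ip_network(net_addr):
            return vip
    return None

vips = "10.0.0.10 192.168.1.10".split()  # the 'vip' option is space-separated
assert pick_vip(vips, "192.168.1.0/24") == "192.168.1.10"
assert pick_vip(vips, "172.16.0.0/16") is None
```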
1810=== modified file 'hooks/charmhelpers/contrib/openstack/neutron.py'
1811--- hooks/charmhelpers/contrib/openstack/neutron.py 2014-07-28 14:38:51 +0000
1812+++ hooks/charmhelpers/contrib/openstack/neutron.py 2014-12-10 07:57:54 +0000
1813@@ -14,7 +14,7 @@
1814 def headers_package():
1815 """Ensures correct linux-headers for running kernel are installed,
1816 for building DKMS package"""
1817- kver = check_output(['uname', '-r']).strip()
1818+ kver = check_output(['uname', '-r']).decode('UTF-8').strip()
1819 return 'linux-headers-%s' % kver
1820
1821 QUANTUM_CONF_DIR = '/etc/quantum'
1822@@ -22,7 +22,7 @@
1823
1824 def kernel_version():
1825 """ Retrieve the current major kernel version as a tuple e.g. (3, 13) """
1826- kver = check_output(['uname', '-r']).strip()
1827+ kver = check_output(['uname', '-r']).decode('UTF-8').strip()
1828 kver = kver.split('.')
1829 return (int(kver[0]), int(kver[1]))
1830
1831@@ -138,10 +138,25 @@
1832 relation_prefix='neutron',
1833 ssl_dir=NEUTRON_CONF_DIR)],
1834 'services': [],
1835- 'packages': [['neutron-plugin-cisco']],
1836+ 'packages': [[headers_package()] + determine_dkms_package(),
1837+ ['neutron-plugin-cisco']],
1838 'server_packages': ['neutron-server',
1839 'neutron-plugin-cisco'],
1840 'server_services': ['neutron-server']
1841+ },
1842+ 'Calico': {
1843+ 'config': '/etc/neutron/plugins/ml2/ml2_conf.ini',
1844+ 'driver': 'neutron.plugins.ml2.plugin.Ml2Plugin',
1845+ 'contexts': [
1846+ context.SharedDBContext(user=config('neutron-database-user'),
1847+ database=config('neutron-database'),
1848+ relation_prefix='neutron',
1849+ ssl_dir=NEUTRON_CONF_DIR)],
1850+ 'services': ['calico-compute', 'bird', 'neutron-dhcp-agent'],
1851+ 'packages': [[headers_package()] + determine_dkms_package(),
1852+ ['calico-compute', 'bird', 'neutron-dhcp-agent']],
1853+ 'server_packages': ['neutron-server', 'calico-control'],
1854+ 'server_services': ['neutron-server']
1855 }
1856 }
1857 if release >= 'icehouse':
1858@@ -162,7 +177,8 @@
1859 elif manager == 'neutron':
1860 plugins = neutron_plugins()
1861 else:
1862- log('Error: Network manager does not support plugins.')
1863+ log("Network manager '%s' does not support plugins." % (manager),
1864+ level=ERROR)
1865 raise Exception
1866
1867 try:
1868
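The `.decode('UTF-8')` calls added to `headers_package` and `kernel_version` are Python 3 fixes: there `check_output` returns `bytes`, and splitting bytes with a `str` separator raises `TypeError`. A minimal illustration with a canned `uname -r` result:

```python
# What check_output(['uname', '-r']) yields on Python 3: bytes, not str.
raw = b"3.13.0-170-generic\n"
kver = raw.decode('UTF-8').strip().split('.')
assert (int(kver[0]), int(kver[1])) == (3, 13)
```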
1869=== modified file 'hooks/charmhelpers/contrib/openstack/templates/haproxy.cfg'
1870--- hooks/charmhelpers/contrib/openstack/templates/haproxy.cfg 2014-10-06 21:57:43 +0000
1871+++ hooks/charmhelpers/contrib/openstack/templates/haproxy.cfg 2014-12-10 07:57:54 +0000
1872@@ -35,7 +35,7 @@
1873 stats auth admin:password
1874
1875 {% if frontends -%}
1876-{% for service, ports in service_ports.iteritems() -%}
1877+{% for service, ports in service_ports.items() -%}
1878 frontend tcp-in_{{ service }}
1879 bind *:{{ ports[0] }}
1880 bind :::{{ ports[0] }}
1881@@ -46,7 +46,7 @@
1882 {% for frontend in frontends -%}
1883 backend {{ service }}_{{ frontend }}
1884 balance leastconn
1885- {% for unit, address in frontends[frontend]['backends'].iteritems() -%}
1886+ {% for unit, address in frontends[frontend]['backends'].items() -%}
1887 server {{ unit }} {{ address }}:{{ ports[1] }} check
1888 {% endfor %}
1889 {% endfor -%}
1890
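The template switch from `iteritems()` to `items()` is also Python 3 driven: Jinja2 calls the dict method of whatever interpreter renders the template, `iteritems` no longer exists on Python 3, and `items()` works on both. A sketch with a `service_ports`-style mapping (assumes Jinja2 is installed):

```python
from jinja2 import Template

# The '-%}' trims the newline after the for tag, as in haproxy.cfg.
tmpl = Template(
    "{% for service, ports in service_ports.items() -%}\n"
    "{{ service }}:{{ ports[0] }}\n"
    "{% endfor %}"
)
out = tmpl.render(service_ports={'keystone': [5000, 4990]})
assert "keystone:5000" in out
```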
1891=== modified file 'hooks/charmhelpers/contrib/openstack/templating.py'
1892--- hooks/charmhelpers/contrib/openstack/templating.py 2014-07-28 14:38:51 +0000
1893+++ hooks/charmhelpers/contrib/openstack/templating.py 2014-12-10 07:57:54 +0000
1894@@ -1,13 +1,13 @@
1895 import os
1896
1897+import six
1898+
1899 from charmhelpers.fetch import apt_install
1900-
1901 from charmhelpers.core.hookenv import (
1902 log,
1903 ERROR,
1904 INFO
1905 )
1906-
1907 from charmhelpers.contrib.openstack.utils import OPENSTACK_CODENAMES
1908
1909 try:
1910@@ -43,7 +43,7 @@
1911 order by OpenStack release.
1912 """
1913 tmpl_dirs = [(rel, os.path.join(templates_dir, rel))
1914- for rel in OPENSTACK_CODENAMES.itervalues()]
1915+ for rel in six.itervalues(OPENSTACK_CODENAMES)]
1916
1917 if not os.path.isdir(templates_dir):
1918 log('Templates directory not found @ %s.' % templates_dir,
1919@@ -258,7 +258,7 @@
1920 """
1921 Write out all registered config files.
1922 """
1923- [self.write(k) for k in self.templates.iterkeys()]
1924+ [self.write(k) for k in six.iterkeys(self.templates)]
1925
1926 def set_release(self, openstack_release):
1927 """
1928@@ -275,5 +275,5 @@
1929 '''
1930 interfaces = []
1931 [interfaces.extend(i.complete_contexts())
1932- for i in self.templates.itervalues()]
1933+ for i in six.itervalues(self.templates)]
1934 return interfaces
1935
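The `six.itervalues`/`six.iteritems` substitutions seen here and in utils.py keep lazy iteration on Python 2 while remaining valid on Python 3, where the `iter*` dict methods were removed. Sketch (assumes `six` is available, as the branch now requires):

```python
import six

OPENSTACK_CODENAMES = {'2014.1': 'icehouse', '2014.2': 'juno'}
# six.itervalues maps to dict.itervalues on py2 and dict.values on py3.
codenames = sorted(six.itervalues(OPENSTACK_CODENAMES))
assert codenames == ['icehouse', 'juno']
```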
1936=== modified file 'hooks/charmhelpers/contrib/openstack/utils.py'
1937--- hooks/charmhelpers/contrib/openstack/utils.py 2014-10-06 21:57:43 +0000
1938+++ hooks/charmhelpers/contrib/openstack/utils.py 2014-12-10 07:57:54 +0000
1939@@ -2,6 +2,7 @@
1940
1941 # Common python helper functions used for OpenStack charms.
1942 from collections import OrderedDict
1943+from functools import wraps
1944
1945 import subprocess
1946 import json
1947@@ -9,11 +10,13 @@
1948 import socket
1949 import sys
1950
1951+import six
1952+import yaml
1953+
1954 from charmhelpers.core.hookenv import (
1955 config,
1956 log as juju_log,
1957 charm_dir,
1958- ERROR,
1959 INFO,
1960 relation_ids,
1961 relation_set
1962@@ -30,7 +33,8 @@
1963 )
1964
1965 from charmhelpers.core.host import lsb_release, mounts, umount
1966-from charmhelpers.fetch import apt_install, apt_cache
1967+from charmhelpers.fetch import apt_install, apt_cache, install_remote
1968+from charmhelpers.contrib.python.packages import pip_install
1969 from charmhelpers.contrib.storage.linux.utils import is_block_device, zap_disk
1970 from charmhelpers.contrib.storage.linux.loopback import ensure_loopback_device
1971
1972@@ -112,7 +116,7 @@
1973
1974 # Best guess match based on deb string provided
1975 if src.startswith('deb') or src.startswith('ppa'):
1976- for k, v in OPENSTACK_CODENAMES.iteritems():
1977+ for k, v in six.iteritems(OPENSTACK_CODENAMES):
1978 if v in src:
1979 return v
1980
1981@@ -133,7 +137,7 @@
1982
1983 def get_os_version_codename(codename):
1984 '''Determine OpenStack version number from codename.'''
1985- for k, v in OPENSTACK_CODENAMES.iteritems():
1986+ for k, v in six.iteritems(OPENSTACK_CODENAMES):
1987 if v == codename:
1988 return k
1989 e = 'Could not derive OpenStack version for '\
1990@@ -193,7 +197,7 @@
1991 else:
1992 vers_map = OPENSTACK_CODENAMES
1993
1994- for version, cname in vers_map.iteritems():
1995+ for version, cname in six.iteritems(vers_map):
1996 if cname == codename:
1997 return version
1998 # e = "Could not determine OpenStack version for package: %s" % pkg
1999@@ -317,7 +321,7 @@
2000 rc_script.write(
2001 "#!/bin/bash\n")
2002 [rc_script.write('export %s=%s\n' % (u, p))
2003- for u, p in env_vars.iteritems() if u != "script_path"]
2004+ for u, p in six.iteritems(env_vars) if u != "script_path"]
2005
2006
2007 def openstack_upgrade_available(package):
2008@@ -350,8 +354,8 @@
2009 '''
2010 _none = ['None', 'none', None]
2011 if (block_device in _none):
2012- error_out('prepare_storage(): Missing required input: '
2013- 'block_device=%s.' % block_device, level=ERROR)
2014+ error_out('prepare_storage(): Missing required input: block_device=%s.'
2015+ % block_device)
2016
2017 if block_device.startswith('/dev/'):
2018 bdev = block_device
2019@@ -367,8 +371,7 @@
2020 bdev = '/dev/%s' % block_device
2021
2022 if not is_block_device(bdev):
2023- error_out('Failed to locate valid block device at %s' % bdev,
2024- level=ERROR)
2025+ error_out('Failed to locate valid block device at %s' % bdev)
2026
2027 return bdev
2028
2029@@ -417,7 +420,7 @@
2030
2031 if isinstance(address, dns.name.Name):
2032 rtype = 'PTR'
2033- elif isinstance(address, basestring):
2034+ elif isinstance(address, six.string_types):
2035 rtype = 'A'
2036 else:
2037 return None
2038@@ -468,6 +471,14 @@
2039 return result.split('.')[0]
2040
2041
2042+def get_matchmaker_map(mm_file='/etc/oslo/matchmaker_ring.json'):
2043+ mm_map = {}
2044+ if os.path.isfile(mm_file):
2045+ with open(mm_file, 'r') as f:
2046+ mm_map = json.load(f)
2047+ return mm_map
2048+
2049+
2050 def sync_db_with_multi_ipv6_addresses(database, database_user,
2051 relation_prefix=None):
2052 hosts = get_ipv6_addr(dynamic_only=False)
2053@@ -477,10 +488,132 @@
2054 'hostname': json.dumps(hosts)}
2055
2056 if relation_prefix:
2057- keys = kwargs.keys()
2058- for key in keys:
2059+ for key in list(kwargs.keys()):
2060 kwargs["%s_%s" % (relation_prefix, key)] = kwargs[key]
2061 del kwargs[key]
2062
2063 for rid in relation_ids('shared-db'):
2064 relation_set(relation_id=rid, **kwargs)
2065+
2066+
2067+def os_requires_version(ostack_release, pkg):
2068+ """
2069+ Decorator for hook to specify minimum supported release
2070+ """
2071+ def wrap(f):
2072+ @wraps(f)
2073+ def wrapped_f(*args):
2074+ if os_release(pkg) < ostack_release:
2075+ raise Exception("This hook is not supported on releases"
2076+ " before %s" % ostack_release)
2077+ f(*args)
2078+ return wrapped_f
2079+ return wrap
2080+
2081+
2082+def git_install_requested():
2083+ """Returns true if openstack-origin-git is specified."""
2084+ return config('openstack-origin-git') != "None"
2085+
2086+
2087+requirements_dir = None
2088+
2089+
2090+def git_clone_and_install(file_name, core_project):
2091+ """Clone/install all OpenStack repos specified in yaml config file."""
2092+ global requirements_dir
2093+
2094+ if file_name == "None":
2095+ return
2096+
2097+ yaml_file = os.path.join(charm_dir(), file_name)
2098+
2099+ # clone/install the requirements project first
2100+ installed = _git_clone_and_install_subset(yaml_file,
2101+ whitelist=['requirements'])
2102+ if 'requirements' not in installed:
2103+ error_out('requirements git repository must be specified')
2104+
2105+ # clone/install all other projects except requirements and the core project
2106+ blacklist = ['requirements', core_project]
2107+ _git_clone_and_install_subset(yaml_file, blacklist=blacklist,
2108+ update_requirements=True)
2109+
2110+ # clone/install the core project
2111+ whitelist = [core_project]
2112+ installed = _git_clone_and_install_subset(yaml_file, whitelist=whitelist,
2113+ update_requirements=True)
2114+ if core_project not in installed:
2115+ error_out('{} git repository must be specified'.format(core_project))
2116+
2117+
2118+def _git_clone_and_install_subset(yaml_file, whitelist=[], blacklist=[],
2119+ update_requirements=False):
2120+ """Clone/install subset of OpenStack repos specified in yaml config file."""
2121+ global requirements_dir
2122+ installed = []
2123+
2124+ with open(yaml_file, 'r') as fd:
2125+ projects = yaml.load(fd)
2126+ for proj, val in projects.items():
2127+ # The project subset is chosen based on the following 3 rules:
2128+ # 1) If project is in blacklist, we don't clone/install it, period.
2129+ # 2) If whitelist is empty, we clone/install everything else.
2130+ # 3) If whitelist is not empty, we clone/install everything in the
2131+ # whitelist.
2132+ if proj in blacklist:
2133+ continue
2134+ if whitelist and proj not in whitelist:
2135+ continue
2136+ repo = val['repository']
2137+ branch = val['branch']
2138+ repo_dir = _git_clone_and_install_single(repo, branch,
2139+ update_requirements)
2140+ if proj == 'requirements':
2141+ requirements_dir = repo_dir
2142+ installed.append(proj)
2143+ return installed
2144+
2145+
2146+def _git_clone_and_install_single(repo, branch, update_requirements=False):
2147+ """Clone and install a single git repository."""
2148+ dest_parent_dir = "/mnt/openstack-git/"
2149+ dest_dir = os.path.join(dest_parent_dir, os.path.basename(repo))
2150+
2151+ if not os.path.exists(dest_parent_dir):
2152+ juju_log('Host dir not mounted at {}. '
2153+ 'Creating directory there instead.'.format(dest_parent_dir))
2154+ os.mkdir(dest_parent_dir)
2155+
2156+ if not os.path.exists(dest_dir):
2157+ juju_log('Cloning git repo: {}, branch: {}'.format(repo, branch))
2158+ repo_dir = install_remote(repo, dest=dest_parent_dir, branch=branch)
2159+ else:
2160+ repo_dir = dest_dir
2161+
2162+ if update_requirements:
2163+ if not requirements_dir:
2164+ error_out('requirements repo must be cloned before '
2165+ 'updating from global requirements.')
2166+ _git_update_requirements(repo_dir, requirements_dir)
2167+
2168+ juju_log('Installing git repo from dir: {}'.format(repo_dir))
2169+ pip_install(repo_dir)
2170+
2171+ return repo_dir
2172+
2173+
2174+def _git_update_requirements(package_dir, reqs_dir):
2175+ """Update from global requirements.
2176+
2177+ Update an OpenStack git directory's requirements.txt and
2178+ test-requirements.txt from global-requirements.txt."""
2179+ orig_dir = os.getcwd()
2180+ os.chdir(reqs_dir)
2181+ cmd = "python update.py {}".format(package_dir)
2182+ try:
2183+ subprocess.check_call(cmd.split(' '))
2184+ except subprocess.CalledProcessError:
2185+ package = os.path.basename(package_dir)
2186+ error_out("Error updating {} from global-requirements.txt".format(package))
2187+ os.chdir(orig_dir)
2188
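Among the utils.py additions, `os_requires_version` guards a hook behind a minimum release by comparing codename strings; this works because the codenames in play (essex through juno) happen to sort alphabetically in release order. A self-contained sketch of the same pattern, with `current` passed in rather than derived from `os_release(pkg)`:

```python
from functools import wraps

def requires_release(minimum, current):
    # Same guard shape as os_requires_version; codename strings compare
    # correctly because they sort alphabetically in release order.
    def wrap(f):
        @wraps(f)
        def wrapped_f(*args):
            if current < minimum:
                raise Exception("This hook is not supported on releases"
                                " before %s" % minimum)
            return f(*args)
        return wrapped_f
    return wrap

@requires_release('icehouse', current='juno')
def hook():
    return 'ran'

assert hook() == 'ran'
```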
2189=== added directory 'hooks/charmhelpers/contrib/python'
2190=== added file 'hooks/charmhelpers/contrib/python/__init__.py'
2191=== added file 'hooks/charmhelpers/contrib/python/debug.py'
2192--- hooks/charmhelpers/contrib/python/debug.py 1970-01-01 00:00:00 +0000
2193+++ hooks/charmhelpers/contrib/python/debug.py 2014-12-10 07:57:54 +0000
2194@@ -0,0 +1,40 @@
2195+#!/usr/bin/env python
2196+# coding: utf-8
2197+
2198+from __future__ import print_function
2199+
2200+__author__ = "Jorge Niedbalski <jorge.niedbalski@canonical.com>"
2201+
2202+import atexit
2203+import sys
2204+
2205+from charmhelpers.contrib.python.rpdb import Rpdb
2206+from charmhelpers.core.hookenv import (
2207+ open_port,
2208+ close_port,
2209+ ERROR,
2210+ log
2211+)
2212+
2213+DEFAULT_ADDR = "0.0.0.0"
2214+DEFAULT_PORT = 4444
2215+
2216+
2217+def _error(message):
2218+ log(message, level=ERROR)
2219+
2220+
2221+def set_trace(addr=DEFAULT_ADDR, port=DEFAULT_PORT):
2222+ """
2223+ Set a trace point using the remote debugger
2224+ """
2225+ atexit.register(close_port, port)
2226+ try:
2227+ log("Starting a remote python debugger session on %s:%s" % (addr,
2228+ port))
2229+ open_port(port)
2230+ debugger = Rpdb(addr=addr, port=port)
2231+ debugger.set_trace(sys._getframe().f_back)
2232+ except:
2233+ _error("Cannot start a remote debug session on %s:%s" % (addr,
2234+ port))
2235
2236=== added file 'hooks/charmhelpers/contrib/python/packages.py'
2237--- hooks/charmhelpers/contrib/python/packages.py 1970-01-01 00:00:00 +0000
2238+++ hooks/charmhelpers/contrib/python/packages.py 2014-12-10 07:57:54 +0000
2239@@ -0,0 +1,77 @@
2240+#!/usr/bin/env python
2241+# coding: utf-8
2242+
2243+__author__ = "Jorge Niedbalski <jorge.niedbalski@canonical.com>"
2244+
2245+from charmhelpers.fetch import apt_install, apt_update
2246+from charmhelpers.core.hookenv import log
2247+
2248+try:
2249+ from pip import main as pip_execute
2250+except ImportError:
2251+ apt_update()
2252+ apt_install('python-pip')
2253+ from pip import main as pip_execute
2254+
2255+
2256+def parse_options(given, available):
2257+ """Given a set of options, check if available"""
2258+ for key, value in sorted(given.items()):
2259+ if key in available:
2260+ yield "--{0}={1}".format(key, value)
2261+
2262+
2263+def pip_install_requirements(requirements, **options):
2264+ """Install a requirements file """
2265+ command = ["install"]
2266+
2267+ available_options = ('proxy', 'src', 'log', )
2268+ for option in parse_options(options, available_options):
2269+ command.append(option)
2270+
2271+ command.append("-r {0}".format(requirements))
2272+ log("Installing from file: {} with options: {}".format(requirements,
2273+ command))
2274+ pip_execute(command)
2275+
2276+
2277+def pip_install(package, fatal=False, **options):
2278+ """Install a python package"""
2279+ command = ["install"]
2280+
2281+ available_options = ('proxy', 'src', 'log', "index-url", )
2282+ for option in parse_options(options, available_options):
2283+ command.append(option)
2284+
2285+ if isinstance(package, list):
2286+ command.extend(package)
2287+ else:
2288+ command.append(package)
2289+
2290+ log("Installing {} package with options: {}".format(package,
2291+ command))
2292+ pip_execute(command)
2293+
2294+
2295+def pip_uninstall(package, **options):
2296+ """Uninstall a python package"""
2297+ command = ["uninstall", "-q", "-y"]
2298+
2299+ available_options = ('proxy', 'log', )
2300+ for option in parse_options(options, available_options):
2301+ command.append(option)
2302+
2303+ if isinstance(package, list):
2304+ command.extend(package)
2305+ else:
2306+ command.append(package)
2307+
2308+ log("Uninstalling {} package with options: {}".format(package,
2309+ command))
2310+ pip_execute(command)
2311+
2312+
2313+def pip_list():
2314+ """Returns the list of current python installed packages
2315+ """
2316+ return pip_execute(["list"])
2317
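`parse_options` in the new packages.py whitelists pip options and renders them as `--key=value` flags, silently dropping unknown keys. Its behaviour, checked in isolation (the helper is copied here so the snippet is self-contained):

```python
def parse_options(given, available):
    """Given a set of options, check if available (copy of the helper
    added in packages.py)."""
    for key, value in sorted(given.items()):
        if key in available:
            yield "--{0}={1}".format(key, value)

opts = list(parse_options({'proxy': 'http://squid:3128', 'bogus': 1},
                          ('proxy', 'src', 'log')))
assert opts == ['--proxy=http://squid:3128']
```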
2318=== added file 'hooks/charmhelpers/contrib/python/rpdb.py'
2319--- hooks/charmhelpers/contrib/python/rpdb.py 1970-01-01 00:00:00 +0000
2320+++ hooks/charmhelpers/contrib/python/rpdb.py 2014-12-10 07:57:54 +0000
2321@@ -0,0 +1,42 @@
2322+"""Remote Python Debugger (pdb wrapper)."""
2323+
2324+__author__ = "Bertrand Janin <b@janin.com>"
2325+__version__ = "0.1.3"
2326+
2327+import pdb
2328+import socket
2329+import sys
2330+
2331+
2332+class Rpdb(pdb.Pdb):
2333+
2334+ def __init__(self, addr="127.0.0.1", port=4444):
2335+ """Initialize the socket and initialize pdb."""
2336+
2337+ # Backup stdin and stdout before replacing them by the socket handle
2338+ self.old_stdout = sys.stdout
2339+ self.old_stdin = sys.stdin
2340+
2341+ # Open a 'reusable' socket to let the webapp reload on the same port
2342+ self.skt = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
2343+ self.skt.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, True)
2344+ self.skt.bind((addr, port))
2345+ self.skt.listen(1)
2346+ (clientsocket, address) = self.skt.accept()
2347+ handle = clientsocket.makefile('rw')
2348+ pdb.Pdb.__init__(self, completekey='tab', stdin=handle, stdout=handle)
2349+ sys.stdout = sys.stdin = handle
2350+
2351+ def shutdown(self):
2352+ """Revert stdin and stdout, close the socket."""
2353+ sys.stdout = self.old_stdout
2354+ sys.stdin = self.old_stdin
2355+ self.skt.close()
2356+ self.set_continue()
2357+
2358+ def do_continue(self, arg):
2359+ """Stop all operation on ``continue``."""
2360+ self.shutdown()
2361+ return 1
2362+
2363+ do_EOF = do_quit = do_exit = do_c = do_cont = do_continue
2364
2365=== added file 'hooks/charmhelpers/contrib/python/version.py'
2366--- hooks/charmhelpers/contrib/python/version.py 1970-01-01 00:00:00 +0000
2367+++ hooks/charmhelpers/contrib/python/version.py 2014-12-10 07:57:54 +0000
2368@@ -0,0 +1,18 @@
2369+#!/usr/bin/env python
2370+# coding: utf-8
2371+
2372+__author__ = "Jorge Niedbalski <jorge.niedbalski@canonical.com>"
2373+
2374+import sys
2375+
2376+
2377+def current_version():
2378+ """Current system python version"""
2379+ return sys.version_info
2380+
2381+
2382+def current_version_string():
2383+ """Current system python version as string major.minor.micro"""
2384+ return "{0}.{1}.{2}".format(sys.version_info.major,
2385+ sys.version_info.minor,
2386+ sys.version_info.micro)
2387
2388=== modified file 'hooks/charmhelpers/contrib/storage/linux/ceph.py'
2389--- hooks/charmhelpers/contrib/storage/linux/ceph.py 2014-07-28 14:38:51 +0000
2390+++ hooks/charmhelpers/contrib/storage/linux/ceph.py 2014-12-10 07:57:54 +0000
2391@@ -16,19 +16,18 @@
2392 from subprocess import (
2393 check_call,
2394 check_output,
2395- CalledProcessError
2396+ CalledProcessError,
2397 )
2398-
2399 from charmhelpers.core.hookenv import (
2400 relation_get,
2401 relation_ids,
2402 related_units,
2403 log,
2404+ DEBUG,
2405 INFO,
2406 WARNING,
2407- ERROR
2408+ ERROR,
2409 )
2410-
2411 from charmhelpers.core.host import (
2412 mount,
2413 mounts,
2414@@ -37,7 +36,6 @@
2415 service_running,
2416 umount,
2417 )
2418-
2419 from charmhelpers.fetch import (
2420 apt_install,
2421 )
2422@@ -56,99 +54,85 @@
2423
2424
2425 def install():
2426- ''' Basic Ceph client installation '''
2427+ """Basic Ceph client installation."""
2428 ceph_dir = "/etc/ceph"
2429 if not os.path.exists(ceph_dir):
2430 os.mkdir(ceph_dir)
2431+
2432 apt_install('ceph-common', fatal=True)
2433
2434
2435 def rbd_exists(service, pool, rbd_img):
2436- ''' Check to see if a RADOS block device exists '''
2437+ """Check to see if a RADOS block device exists."""
2438 try:
2439- out = check_output(['rbd', 'list', '--id', service,
2440- '--pool', pool])
2441+ out = check_output(['rbd', 'list', '--id',
2442+ service, '--pool', pool]).decode('UTF-8')
2443 except CalledProcessError:
2444 return False
2445- else:
2446- return rbd_img in out
2447+
2448+ return rbd_img in out
2449
2450
2451 def create_rbd_image(service, pool, image, sizemb):
2452- ''' Create a new RADOS block device '''
2453- cmd = [
2454- 'rbd',
2455- 'create',
2456- image,
2457- '--size',
2458- str(sizemb),
2459- '--id',
2460- service,
2461- '--pool',
2462- pool
2463- ]
2464+ """Create a new RADOS block device."""
2465+ cmd = ['rbd', 'create', image, '--size', str(sizemb), '--id', service,
2466+ '--pool', pool]
2467 check_call(cmd)
2468
2469
2470 def pool_exists(service, name):
2471- ''' Check to see if a RADOS pool already exists '''
2472+ """Check to see if a RADOS pool already exists."""
2473 try:
2474- out = check_output(['rados', '--id', service, 'lspools'])
2475+ out = check_output(['rados', '--id', service,
2476+ 'lspools']).decode('UTF-8')
2477 except CalledProcessError:
2478 return False
2479- else:
2480- return name in out
2481+
2482+ return name in out
2483
2484
2485 def get_osds(service):
2486- '''
2487- Return a list of all Ceph Object Storage Daemons
2488- currently in the cluster
2489- '''
2490+ """Return a list of all Ceph Object Storage Daemons currently in the
2491+ cluster.
2492+ """
2493 version = ceph_version()
2494 if version and version >= '0.56':
2495 return json.loads(check_output(['ceph', '--id', service,
2496- 'osd', 'ls', '--format=json']))
2497- else:
2498- return None
2499-
2500-
2501-def create_pool(service, name, replicas=2):
2502- ''' Create a new RADOS pool '''
2503+ 'osd', 'ls',
2504+ '--format=json']).decode('UTF-8'))
2505+
2506+ return None
2507+
2508+
2509+def create_pool(service, name, replicas=3):
2510+ """Create a new RADOS pool."""
2511 if pool_exists(service, name):
2512 log("Ceph pool {} already exists, skipping creation".format(name),
2513 level=WARNING)
2514 return
2515+
2516 # Calculate the number of placement groups based
2517 # on upstream recommended best practices.
2518 osds = get_osds(service)
2519 if osds:
2520- pgnum = (len(osds) * 100 / replicas)
2521+ pgnum = (len(osds) * 100 // replicas)
2522 else:
2523 # NOTE(james-page): Default to 200 for older ceph versions
2524 # which don't support OSD query from cli
2525 pgnum = 200
2526- cmd = [
2527- 'ceph', '--id', service,
2528- 'osd', 'pool', 'create',
2529- name, str(pgnum)
2530- ]
2531+
2532+ cmd = ['ceph', '--id', service, 'osd', 'pool', 'create', name, str(pgnum)]
2533 check_call(cmd)
2534- cmd = [
2535- 'ceph', '--id', service,
2536- 'osd', 'pool', 'set', name,
2537- 'size', str(replicas)
2538- ]
2539+
2540+ cmd = ['ceph', '--id', service, 'osd', 'pool', 'set', name, 'size',
2541+ str(replicas)]
2542 check_call(cmd)
2543
2544
2545 def delete_pool(service, name):
2546- ''' Delete a RADOS pool from ceph '''
2547- cmd = [
2548- 'ceph', '--id', service,
2549- 'osd', 'pool', 'delete',
2550- name, '--yes-i-really-really-mean-it'
2551- ]
2552+ """Delete a RADOS pool from ceph."""
2553+ cmd = ['ceph', '--id', service, 'osd', 'pool', 'delete', name,
2554+ '--yes-i-really-really-mean-it']
2555 check_call(cmd)
2556
2557
2558@@ -161,44 +145,43 @@
2559
2560
2561 def create_keyring(service, key):
2562- ''' Create a new Ceph keyring containing key'''
2563+ """Create a new Ceph keyring containing key."""
2564 keyring = _keyring_path(service)
2565 if os.path.exists(keyring):
2566- log('ceph: Keyring exists at %s.' % keyring, level=WARNING)
2567+ log('Ceph keyring exists at %s.' % keyring, level=WARNING)
2568 return
2569- cmd = [
2570- 'ceph-authtool',
2571- keyring,
2572- '--create-keyring',
2573- '--name=client.{}'.format(service),
2574- '--add-key={}'.format(key)
2575- ]
2576+
2577+ cmd = ['ceph-authtool', keyring, '--create-keyring',
2578+ '--name=client.{}'.format(service), '--add-key={}'.format(key)]
2579 check_call(cmd)
2580- log('ceph: Created new ring at %s.' % keyring, level=INFO)
2581+ log('Created new ceph keyring at %s.' % keyring, level=DEBUG)
2582
2583
2584 def create_key_file(service, key):
2585- ''' Create a file containing key '''
2586+ """Create a file containing key."""
2587 keyfile = _keyfile_path(service)
2588 if os.path.exists(keyfile):
2589- log('ceph: Keyfile exists at %s.' % keyfile, level=WARNING)
2590+ log('Keyfile exists at %s.' % keyfile, level=WARNING)
2591 return
2592+
2593 with open(keyfile, 'w') as fd:
2594 fd.write(key)
2595- log('ceph: Created new keyfile at %s.' % keyfile, level=INFO)
2596+
2597+ log('Created new keyfile at %s.' % keyfile, level=INFO)
2598
2599
2600 def get_ceph_nodes():
2601- ''' Query named relation 'ceph' to detemine current nodes '''
2602+ """Query named relation 'ceph' to determine current nodes."""
2603 hosts = []
2604 for r_id in relation_ids('ceph'):
2605 for unit in related_units(r_id):
2606 hosts.append(relation_get('private-address', unit=unit, rid=r_id))
2607+
2608 return hosts
2609
2610
2611 def configure(service, key, auth, use_syslog):
2612- ''' Perform basic configuration of Ceph '''
2613+ """Perform basic configuration of Ceph."""
2614 create_keyring(service, key)
2615 create_key_file(service, key)
2616 hosts = get_ceph_nodes()
2617@@ -211,17 +194,17 @@
2618
2619
2620 def image_mapped(name):
2621- ''' Determine whether a RADOS block device is mapped locally '''
2622+ """Determine whether a RADOS block device is mapped locally."""
2623 try:
2624- out = check_output(['rbd', 'showmapped'])
2625+ out = check_output(['rbd', 'showmapped']).decode('UTF-8')
2626 except CalledProcessError:
2627 return False
2628- else:
2629- return name in out
2630+
2631+ return name in out
2632
2633
2634 def map_block_storage(service, pool, image):
2635- ''' Map a RADOS block device for local use '''
2636+ """Map a RADOS block device for local use."""
2637 cmd = [
2638 'rbd',
2639 'map',
2640@@ -235,31 +218,32 @@
2641
2642
2643 def filesystem_mounted(fs):
2644- ''' Determine whether a filesytems is already mounted '''
2645+ """Determine whether a filesytems is already mounted."""
2646 return fs in [f for f, m in mounts()]
2647
2648
2649 def make_filesystem(blk_device, fstype='ext4', timeout=10):
2650- ''' Make a new filesystem on the specified block device '''
2651+ """Make a new filesystem on the specified block device."""
2652 count = 0
2653 e_noent = os.errno.ENOENT
2654 while not os.path.exists(blk_device):
2655 if count >= timeout:
2656- log('ceph: gave up waiting on block device %s' % blk_device,
2657+ log('Gave up waiting on block device %s' % blk_device,
2658 level=ERROR)
2659 raise IOError(e_noent, os.strerror(e_noent), blk_device)
2660- log('ceph: waiting for block device %s to appear' % blk_device,
2661- level=INFO)
2662+
2663+ log('Waiting for block device %s to appear' % blk_device,
2664+ level=DEBUG)
2665 count += 1
2666 time.sleep(1)
2667 else:
2668- log('ceph: Formatting block device %s as filesystem %s.' %
2669+ log('Formatting block device %s as filesystem %s.' %
2670 (blk_device, fstype), level=INFO)
2671 check_call(['mkfs', '-t', fstype, blk_device])
2672
2673
2674 def place_data_on_block_device(blk_device, data_src_dst):
2675- ''' Migrate data in data_src_dst to blk_device and then remount '''
2676+ """Migrate data in data_src_dst to blk_device and then remount."""
2677 # mount block device into /mnt
2678 mount(blk_device, '/mnt')
2679 # copy data to /mnt
2680@@ -279,8 +263,8 @@
2681
2682 # TODO: re-use
2683 def modprobe(module):
2684- ''' Load a kernel module and configure for auto-load on reboot '''
2685- log('ceph: Loading kernel module', level=INFO)
2686+ """Load a kernel module and configure for auto-load on reboot."""
2687+ log('Loading kernel module', level=INFO)
2688 cmd = ['modprobe', module]
2689 check_call(cmd)
2690 with open('/etc/modules', 'r+') as modules:
2691@@ -289,7 +273,7 @@
2692
2693
2694 def copy_files(src, dst, symlinks=False, ignore=None):
2695- ''' Copy files from src to dst '''
2696+ """Copy files from src to dst."""
2697 for item in os.listdir(src):
2698 s = os.path.join(src, item)
2699 d = os.path.join(dst, item)
2700@@ -300,9 +284,9 @@
2701
2702
2703 def ensure_ceph_storage(service, pool, rbd_img, sizemb, mount_point,
2704- blk_device, fstype, system_services=[]):
2705- """
2706- NOTE: This function must only be called from a single service unit for
2707+ blk_device, fstype, system_services=[],
2708+ replicas=3):
2709+ """NOTE: This function must only be called from a single service unit for
2710 the same rbd_img otherwise data loss will occur.
2711
2712 Ensures given pool and RBD image exists, is mapped to a block device,
2713@@ -316,15 +300,16 @@
2714 """
2715 # Ensure pool, RBD image, RBD mappings are in place.
2716 if not pool_exists(service, pool):
2717- log('ceph: Creating new pool {}.'.format(pool))
2718- create_pool(service, pool)
2719+ log('Creating new pool {}.'.format(pool), level=INFO)
2720+ create_pool(service, pool, replicas=replicas)
2721
2722 if not rbd_exists(service, pool, rbd_img):
2723- log('ceph: Creating RBD image ({}).'.format(rbd_img))
2724+ log('Creating RBD image ({}).'.format(rbd_img), level=INFO)
2725 create_rbd_image(service, pool, rbd_img, sizemb)
2726
2727 if not image_mapped(rbd_img):
2728- log('ceph: Mapping RBD Image {} as a Block Device.'.format(rbd_img))
2729+ log('Mapping RBD Image {} as a Block Device.'.format(rbd_img),
2730+ level=INFO)
2731 map_block_storage(service, pool, rbd_img)
2732
2733 # make file system
2734@@ -339,45 +324,47 @@
2735
2736 for svc in system_services:
2737 if service_running(svc):
2738- log('ceph: Stopping services {} prior to migrating data.'
2739- .format(svc))
2740+ log('Stopping services {} prior to migrating data.'
2741+ .format(svc), level=DEBUG)
2742 service_stop(svc)
2743
2744 place_data_on_block_device(blk_device, mount_point)
2745
2746 for svc in system_services:
2747- log('ceph: Starting service {} after migrating data.'
2748- .format(svc))
2749+ log('Starting service {} after migrating data.'
2750+ .format(svc), level=DEBUG)
2751 service_start(svc)
2752
2753
2754 def ensure_ceph_keyring(service, user=None, group=None):
2755- '''
2756- Ensures a ceph keyring is created for a named service
2757- and optionally ensures user and group ownership.
2758+ """Ensures a ceph keyring is created for a named service and optionally
2759+ ensures user and group ownership.
2760
2761 Returns False if no ceph key is available in relation state.
2762- '''
2763+ """
2764 key = None
2765 for rid in relation_ids('ceph'):
2766 for unit in related_units(rid):
2767 key = relation_get('key', rid=rid, unit=unit)
2768 if key:
2769 break
2770+
2771 if not key:
2772 return False
2773+
2774 create_keyring(service=service, key=key)
2775 keyring = _keyring_path(service)
2776 if user and group:
2777 check_call(['chown', '%s.%s' % (user, group), keyring])
2778+
2779 return True
2780
2781
2782 def ceph_version():
2783- ''' Retrieve the local version of ceph '''
2784+ """Retrieve the local version of ceph."""
2785 if os.path.exists('/usr/bin/ceph'):
2786 cmd = ['ceph', '-v']
2787- output = check_output(cmd)
2788+ output = check_output(cmd).decode('US-ASCII')
2789 output = output.split()
2790 if len(output) > 3:
2791 return output[2]
2792
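The `ceph_version()` hunk above adds a `.decode('US-ASCII')` because `check_output()` returns bytes on Python 3. A minimal runnable sketch of the same parse against a canned sample (the version string below is illustrative, not taken from this charm):

```python
def parse_ceph_version(output_bytes):
    # On Python 3 check_output() yields bytes, hence the decode added
    # in the patch before any string operations.
    output = output_bytes.decode('US-ASCII').split()
    # Expected token layout: ['ceph', 'version', '<x.y.z>', '(<sha1>)', ...]
    if len(output) > 3:
        return output[2]
    return None


sample = b'ceph version 0.80.7 (6c0127fcb58008793d3c8b62d925bc91963672a3)'
print(parse_ceph_version(sample))
```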
2793=== modified file 'hooks/charmhelpers/contrib/storage/linux/loopback.py'
2794--- hooks/charmhelpers/contrib/storage/linux/loopback.py 2013-08-12 21:48:24 +0000
2795+++ hooks/charmhelpers/contrib/storage/linux/loopback.py 2014-12-10 07:57:54 +0000
2796@@ -1,12 +1,12 @@
2797-
2798 import os
2799 import re
2800-
2801 from subprocess import (
2802 check_call,
2803 check_output,
2804 )
2805
2806+import six
2807+
2808
2809 ##################################################
2810 # loopback device helpers.
2811@@ -37,7 +37,7 @@
2812 '''
2813 file_path = os.path.abspath(file_path)
2814 check_call(['losetup', '--find', file_path])
2815- for d, f in loopback_devices().iteritems():
2816+ for d, f in six.iteritems(loopback_devices()):
2817 if f == file_path:
2818 return d
2819
2820@@ -51,7 +51,7 @@
2821
2822 :returns: str: Full path to the ensured loopback device (eg, /dev/loop0)
2823 '''
2824- for d, f in loopback_devices().iteritems():
2825+ for d, f in six.iteritems(loopback_devices()):
2826 if f == path:
2827 return d
2828
2829
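The `loopback.py` changes replace `dict.iteritems()` (removed in Python 3) with `six.iteritems()`. A self-contained shim showing what that call resolves to on each major version (the `devices` mapping is made-up sample data):

```python
import sys


def iteritems(d):
    """Minimal stand-in for six.iteritems(): lazy iteritems() on
    Python 2, an iterator over items() on Python 3, so the same loop
    body runs unchanged on both."""
    if sys.version_info[0] < 3:
        return d.iteritems()
    return iter(d.items())


devices = {'/dev/loop0': '/srv/img0', '/dev/loop1': '/srv/img1'}
found = [d for d, f in iteritems(devices) if f == '/srv/img1']
print(found)
```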
2830=== modified file 'hooks/charmhelpers/contrib/storage/linux/lvm.py'
2831--- hooks/charmhelpers/contrib/storage/linux/lvm.py 2014-05-19 11:41:02 +0000
2832+++ hooks/charmhelpers/contrib/storage/linux/lvm.py 2014-12-10 07:57:54 +0000
2833@@ -61,6 +61,7 @@
2834 vg = None
2835 pvd = check_output(['pvdisplay', block_device]).splitlines()
2836 for l in pvd:
2837+ l = l.decode('UTF-8')
2838 if l.strip().startswith('VG Name'):
2839 vg = ' '.join(l.strip().split()[2:])
2840 return vg
2841
2842=== modified file 'hooks/charmhelpers/contrib/storage/linux/utils.py'
2843--- hooks/charmhelpers/contrib/storage/linux/utils.py 2014-08-13 13:12:14 +0000
2844+++ hooks/charmhelpers/contrib/storage/linux/utils.py 2014-12-10 07:57:54 +0000
2845@@ -30,7 +30,8 @@
2846 # sometimes sgdisk exits non-zero; this is OK, dd will clean up
2847 call(['sgdisk', '--zap-all', '--mbrtogpt',
2848 '--clear', block_device])
2849- dev_end = check_output(['blockdev', '--getsz', block_device])
2850+ dev_end = check_output(['blockdev', '--getsz',
2851+ block_device]).decode('UTF-8')
2852 gpt_end = int(dev_end.split()[0]) - 100
2853 check_call(['dd', 'if=/dev/zero', 'of=%s' % (block_device),
2854 'bs=1M', 'count=1'])
2855@@ -47,7 +48,7 @@
2856 it doesn't.
2857 '''
2858 is_partition = bool(re.search(r".*[0-9]+\b", device))
2859- out = check_output(['mount'])
2860+ out = check_output(['mount']).decode('UTF-8')
2861 if is_partition:
2862 return bool(re.search(device + r"\b", out))
2863 return bool(re.search(device + r"[0-9]+\b", out))
2864
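The `is_device_mounted` hunk decodes the `mount` output before regex matching. A sketch of the matching logic itself, taking the (hypothetical) mount output as a parameter instead of shelling out:

```python
import re


def is_device_mounted(device, mount_output):
    """Sketch of the charm-helpers check: a partition name (trailing
    digits) is matched literally; a whole-disk name matches any of its
    numbered partitions in the mount table."""
    is_partition = bool(re.search(r".*[0-9]+\b", device))
    if is_partition:
        return bool(re.search(device + r"\b", mount_output))
    return bool(re.search(device + r"[0-9]+\b", mount_output))


sample = "/dev/sda1 on / type ext4 (rw)\n"
print(is_device_mounted('/dev/sda', sample))
print(is_device_mounted('/dev/sdb', sample))
```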
2865=== modified file 'hooks/charmhelpers/core/fstab.py'
2866--- hooks/charmhelpers/core/fstab.py 2014-07-11 02:24:52 +0000
2867+++ hooks/charmhelpers/core/fstab.py 2014-12-10 07:57:54 +0000
2868@@ -3,10 +3,11 @@
2869
2870 __author__ = 'Jorge Niedbalski R. <jorge.niedbalski@canonical.com>'
2871
2872+import io
2873 import os
2874
2875
2876-class Fstab(file):
2877+class Fstab(io.FileIO):
2878 """This class extends file in order to implement a file reader/writer
2879 for file `/etc/fstab`
2880 """
2881@@ -24,8 +25,8 @@
2882 options = "defaults"
2883
2884 self.options = options
2885- self.d = d
2886- self.p = p
2887+ self.d = int(d)
2888+ self.p = int(p)
2889
2890 def __eq__(self, o):
2891 return str(self) == str(o)
2892@@ -45,7 +46,7 @@
2893 self._path = path
2894 else:
2895 self._path = self.DEFAULT_PATH
2896- file.__init__(self, self._path, 'r+')
2897+ super(Fstab, self).__init__(self._path, 'rb+')
2898
2899 def _hydrate_entry(self, line):
2900 # NOTE: use split with no arguments to split on any
2901@@ -58,8 +59,9 @@
2902 def entries(self):
2903 self.seek(0)
2904 for line in self.readlines():
2905+ line = line.decode('us-ascii')
2906 try:
2907- if not line.startswith("#"):
2908+ if line.strip() and not line.startswith("#"):
2909 yield self._hydrate_entry(line)
2910 except ValueError:
2911 pass
2912@@ -75,14 +77,14 @@
2913 if self.get_entry_by_attr('device', entry.device):
2914 return False
2915
2916- self.write(str(entry) + '\n')
2917+ self.write((str(entry) + '\n').encode('us-ascii'))
2918 self.truncate()
2919 return entry
2920
2921 def remove_entry(self, entry):
2922 self.seek(0)
2923
2924- lines = self.readlines()
2925+ lines = [l.decode('us-ascii') for l in self.readlines()]
2926
2927 found = False
2928 for index, line in enumerate(lines):
2929@@ -97,7 +99,7 @@
2930 lines.remove(line)
2931
2932 self.seek(0)
2933- self.write(''.join(lines))
2934+ self.write(''.join(lines).encode('us-ascii'))
2935 self.truncate()
2936 return True
2937
2938
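The `Fstab` rework swaps the Python 2 `file` builtin for `io.FileIO`, which deals only in bytes: every read must be decoded and every write encoded, as the hunks above show. A round-trip sketch using `io.BytesIO` as a stand-in for the real `/etc/fstab` handle (the entry below is sample data):

```python
import io

buf = io.BytesIO()  # stands in for the io.FileIO handle on /etc/fstab
entry = "/dev/sdb1 /mnt ext4 defaults 0 0"

# io.FileIO (like BytesIO) accepts only bytes, so encode on write...
buf.write((entry + "\n").encode("us-ascii"))

# ...and decode on read, skipping blanks and comments as the patched
# entries() generator now does.
buf.seek(0)
lines = [l.decode("us-ascii") for l in buf.readlines()]
parsed = [l.split() for l in lines if l.strip() and not l.startswith("#")]
print(parsed[0])
```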
2939=== modified file 'hooks/charmhelpers/core/hookenv.py'
2940--- hooks/charmhelpers/core/hookenv.py 2014-10-06 21:57:43 +0000
2941+++ hooks/charmhelpers/core/hookenv.py 2014-12-10 07:57:54 +0000
2942@@ -9,9 +9,14 @@
2943 import yaml
2944 import subprocess
2945 import sys
2946-import UserDict
2947 from subprocess import CalledProcessError
2948
2949+import six
2950+if not six.PY3:
2951+ from UserDict import UserDict
2952+else:
2953+ from collections import UserDict
2954+
2955 CRITICAL = "CRITICAL"
2956 ERROR = "ERROR"
2957 WARNING = "WARNING"
2958@@ -63,16 +68,18 @@
2959 command = ['juju-log']
2960 if level:
2961 command += ['-l', level]
2962+ if not isinstance(message, six.string_types):
2963+ message = repr(message)
2964 command += [message]
2965 subprocess.call(command)
2966
2967
2968-class Serializable(UserDict.IterableUserDict):
2969+class Serializable(UserDict):
2970 """Wrapper, an object that can be serialized to yaml or json"""
2971
2972 def __init__(self, obj):
2973 # wrap the object
2974- UserDict.IterableUserDict.__init__(self)
2975+ UserDict.__init__(self)
2976 self.data = obj
2977
2978 def __getattr__(self, attr):
2979@@ -214,6 +221,12 @@
2980 except KeyError:
2981 return (self._prev_dict or {})[key]
2982
2983+ def keys(self):
2984+ prev_keys = []
2985+ if self._prev_dict is not None:
2986+ prev_keys = self._prev_dict.keys()
2987+ return list(set(prev_keys + list(dict.keys(self))))
2988+
2989 def load_previous(self, path=None):
2990 """Load previous copy of config from disk.
2991
2992@@ -263,7 +276,7 @@
2993
2994 """
2995 if self._prev_dict:
2996- for k, v in self._prev_dict.iteritems():
2997+ for k, v in six.iteritems(self._prev_dict):
2998 if k not in self:
2999 self[k] = v
3000 with open(self.path, 'w') as f:
3001@@ -278,7 +291,8 @@
3002 config_cmd_line.append(scope)
3003 config_cmd_line.append('--format=json')
3004 try:
3005- config_data = json.loads(subprocess.check_output(config_cmd_line))
3006+ config_data = json.loads(
3007+ subprocess.check_output(config_cmd_line).decode('UTF-8'))
3008 if scope is not None:
3009 return config_data
3010 return Config(config_data)
3011@@ -297,10 +311,10 @@
3012 if unit:
3013 _args.append(unit)
3014 try:
3015- return json.loads(subprocess.check_output(_args))
3016+ return json.loads(subprocess.check_output(_args).decode('UTF-8'))
3017 except ValueError:
3018 return None
3019- except CalledProcessError, e:
3020+ except CalledProcessError as e:
3021 if e.returncode == 2:
3022 return None
3023 raise
3024@@ -312,7 +326,7 @@
3025 relation_cmd_line = ['relation-set']
3026 if relation_id is not None:
3027 relation_cmd_line.extend(('-r', relation_id))
3028- for k, v in (relation_settings.items() + kwargs.items()):
3029+ for k, v in (list(relation_settings.items()) + list(kwargs.items())):
3030 if v is None:
3031 relation_cmd_line.append('{}='.format(k))
3032 else:
3033@@ -329,7 +343,8 @@
3034 relid_cmd_line = ['relation-ids', '--format=json']
3035 if reltype is not None:
3036 relid_cmd_line.append(reltype)
3037- return json.loads(subprocess.check_output(relid_cmd_line)) or []
3038+ return json.loads(
3039+ subprocess.check_output(relid_cmd_line).decode('UTF-8')) or []
3040 return []
3041
3042
3043@@ -340,7 +355,8 @@
3044 units_cmd_line = ['relation-list', '--format=json']
3045 if relid is not None:
3046 units_cmd_line.extend(('-r', relid))
3047- return json.loads(subprocess.check_output(units_cmd_line)) or []
3048+ return json.loads(
3049+ subprocess.check_output(units_cmd_line).decode('UTF-8')) or []
3050
3051
3052 @cached
3053@@ -449,7 +465,7 @@
3054 """Get the unit ID for the remote unit"""
3055 _args = ['unit-get', '--format=json', attribute]
3056 try:
3057- return json.loads(subprocess.check_output(_args))
3058+ return json.loads(subprocess.check_output(_args).decode('UTF-8'))
3059 except ValueError:
3060 return None
3061
3062
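The new `Config.keys()` override merges keys from the previously-saved config with the live dict. A stripped-down sketch of that behaviour; note the explicit `list()` wrap, which matters on Python 3 where `dict.keys()` returns a view that cannot be concatenated with `+`:

```python
class Config(dict):
    """Stripped-down sketch of hookenv.Config: a dict that also
    remembers the previously-saved config in _prev_dict."""

    def __init__(self, *args, **kw):
        super(Config, self).__init__(*args, **kw)
        self._prev_dict = None

    def keys(self):
        # Union of previously-saved and current keys.
        prev_keys = []
        if self._prev_dict is not None:
            prev_keys = list(self._prev_dict.keys())
        return list(set(prev_keys + list(dict.keys(self))))


cfg = Config({'b': 2})
cfg._prev_dict = {'a': 1}
print(sorted(cfg.keys()))
```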
3063=== modified file 'hooks/charmhelpers/core/host.py'
3064--- hooks/charmhelpers/core/host.py 2014-10-06 21:57:43 +0000
3065+++ hooks/charmhelpers/core/host.py 2014-12-10 07:57:54 +0000
3066@@ -6,19 +6,20 @@
3067 # Matthew Wedgwood <matthew.wedgwood@canonical.com>
3068
3069 import os
3070+import re
3071 import pwd
3072 import grp
3073 import random
3074 import string
3075 import subprocess
3076 import hashlib
3077-import shutil
3078 from contextlib import contextmanager
3079-
3080 from collections import OrderedDict
3081
3082-from hookenv import log
3083-from fstab import Fstab
3084+import six
3085+
3086+from .hookenv import log
3087+from .fstab import Fstab
3088
3089
3090 def service_start(service_name):
3091@@ -54,7 +55,9 @@
3092 def service_running(service):
3093 """Determine whether a system service is running"""
3094 try:
3095- output = subprocess.check_output(['service', service, 'status'], stderr=subprocess.STDOUT)
3096+ output = subprocess.check_output(
3097+ ['service', service, 'status'],
3098+ stderr=subprocess.STDOUT).decode('UTF-8')
3099 except subprocess.CalledProcessError:
3100 return False
3101 else:
3102@@ -67,7 +70,9 @@
3103 def service_available(service_name):
3104 """Determine whether a system service is available"""
3105 try:
3106- subprocess.check_output(['service', service_name, 'status'], stderr=subprocess.STDOUT)
3107+ subprocess.check_output(
3108+ ['service', service_name, 'status'],
3109+ stderr=subprocess.STDOUT).decode('UTF-8')
3110 except subprocess.CalledProcessError as e:
3111 return 'unrecognized service' not in e.output
3112 else:
3113@@ -96,6 +101,26 @@
3114 return user_info
3115
3116
3117+def add_group(group_name, system_group=False):
3118+ """Add a group to the system"""
3119+ try:
3120+ group_info = grp.getgrnam(group_name)
3121+ log('group {0} already exists!'.format(group_name))
3122+ except KeyError:
3123+ log('creating group {0}'.format(group_name))
3124+ cmd = ['addgroup']
3125+ if system_group:
3126+ cmd.append('--system')
3127+ else:
3128+ cmd.extend([
3129+ '--group',
3130+ ])
3131+ cmd.append(group_name)
3132+ subprocess.check_call(cmd)
3133+ group_info = grp.getgrnam(group_name)
3134+ return group_info
3135+
3136+
3137 def add_user_to_group(username, group):
3138 """Add a user to a group"""
3139 cmd = [
3140@@ -115,7 +140,7 @@
3141 cmd.append(from_path)
3142 cmd.append(to_path)
3143 log(" ".join(cmd))
3144- return subprocess.check_output(cmd).strip()
3145+ return subprocess.check_output(cmd).decode('UTF-8').strip()
3146
3147
3148 def symlink(source, destination):
3149@@ -130,7 +155,7 @@
3150 subprocess.check_call(cmd)
3151
3152
3153-def mkdir(path, owner='root', group='root', perms=0555, force=False):
3154+def mkdir(path, owner='root', group='root', perms=0o555, force=False):
3155 """Create a directory"""
3156 log("Making dir {} {}:{} {:o}".format(path, owner, group,
3157 perms))
3158@@ -146,7 +171,7 @@
3159 os.chown(realpath, uid, gid)
3160
3161
3162-def write_file(path, content, owner='root', group='root', perms=0444):
3163+def write_file(path, content, owner='root', group='root', perms=0o444):
3164 """Create or overwrite a file with the contents of a string"""
3165 log("Writing file {} {}:{} {:o}".format(path, owner, group, perms))
3166 uid = pwd.getpwnam(owner).pw_uid
3167@@ -177,7 +202,7 @@
3168 cmd_args.extend([device, mountpoint])
3169 try:
3170 subprocess.check_output(cmd_args)
3171- except subprocess.CalledProcessError, e:
3172+ except subprocess.CalledProcessError as e:
3173 log('Error mounting {} at {}\n{}'.format(device, mountpoint, e.output))
3174 return False
3175
3176@@ -191,7 +216,7 @@
3177 cmd_args = ['umount', mountpoint]
3178 try:
3179 subprocess.check_output(cmd_args)
3180- except subprocess.CalledProcessError, e:
3181+ except subprocess.CalledProcessError as e:
3182 log('Error unmounting {}\n{}'.format(mountpoint, e.output))
3183 return False
3184
3185@@ -218,8 +243,8 @@
3186 """
3187 if os.path.exists(path):
3188 h = getattr(hashlib, hash_type)()
3189- with open(path, 'r') as source:
3190- h.update(source.read()) # IGNORE:E1101 - it does have update
3191+ with open(path, 'rb') as source:
3192+ h.update(source.read())
3193 return h.hexdigest()
3194 else:
3195 return None
3196@@ -297,7 +322,7 @@
3197 if length is None:
3198 length = random.choice(range(35, 45))
3199 alphanumeric_chars = [
3200- l for l in (string.letters + string.digits)
3201+ l for l in (string.ascii_letters + string.digits)
3202 if l not in 'l0QD1vAEIOUaeiou']
3203 random_chars = [
3204 random.choice(alphanumeric_chars) for _ in range(length)]
3205@@ -306,30 +331,65 @@
3206
3207 def list_nics(nic_type):
3208 '''Return a list of nics of given type(s)'''
3209- if isinstance(nic_type, basestring):
3210+ if isinstance(nic_type, six.string_types):
3211 int_types = [nic_type]
3212 else:
3213 int_types = nic_type
3214 interfaces = []
3215 for int_type in int_types:
3216 cmd = ['ip', 'addr', 'show', 'label', int_type + '*']
3217- ip_output = subprocess.check_output(cmd).split('\n')
3218+ ip_output = subprocess.check_output(cmd).decode('UTF-8').split('\n')
3219 ip_output = (line for line in ip_output if line)
3220 for line in ip_output:
3221 if line.split()[1].startswith(int_type):
3222- interfaces.append(line.split()[1].replace(":", ""))
3223+ matched = re.search('.*: (bond[0-9]+\.[0-9]+)@.*', line)
3224+ if matched:
3225+ interface = matched.groups()[0]
3226+ else:
3227+ interface = line.split()[1].replace(":", "")
3228+ interfaces.append(interface)
3229+
3230 return interfaces
3231
3232
3233-def set_nic_mtu(nic, mtu):
3234+def set_nic_mtu(nic, mtu, persistence=False):
3235 '''Set MTU on a network interface'''
3236 cmd = ['ip', 'link', 'set', nic, 'mtu', mtu]
3237 subprocess.check_call(cmd)
3238+ # persistence mtu configuration
3239+ if not persistence:
3240+ return
3241+ if os.path.exists("/etc/network/interfaces.d/%s.cfg" % nic):
3242+ nic_cfg_file = "/etc/network/interfaces.d/%s.cfg" % nic
3243+ else:
3244+ nic_cfg_file = "/etc/network/interfaces"
3245+ f = open(nic_cfg_file, "r")
3246+ lines = f.readlines()
3247+ found = False
3248+ length = len(lines)
3249+ for i in range(len(lines)):
3250+ lines[i] = lines[i].replace('\n', '')
3251+ if lines[i].startswith("iface %s" % nic):
3252+ found = True
3253+ lines.insert(i+1, " up ip link set $IFACE mtu %s" % mtu)
3254+ lines.insert(i+2, " down ip link set $IFACE mtu 1500")
3255+ if length > i+2 and lines[i+3].startswith(" up ip link set $IFACE mtu"):
3256+ del lines[i+3]
3257+ if length > i+2 and lines[i+3].startswith(" down ip link set $IFACE mtu"):
3258+ del lines[i+3]
3259+ break
3260+ if not found:
3261+ lines.insert(length+1, "")
3262+ lines.insert(length+2, "auto %s" % nic)
3263+ lines.insert(length+3, "iface %s inet dhcp" % nic)
3264+ lines.insert(length+4, " up ip link set $IFACE mtu %s" % mtu)
3265+ lines.insert(length+5, " down ip link set $IFACE mtu 1500")
3266+ write_file(path=nic_cfg_file, content="\n".join(lines), perms=0o644)
3267
3268
3269 def get_nic_mtu(nic):
3270 cmd = ['ip', 'addr', 'show', nic]
3271- ip_output = subprocess.check_output(cmd).split('\n')
3272+ ip_output = subprocess.check_output(cmd).decode('UTF-8').split('\n')
3273 mtu = ""
3274 for line in ip_output:
3275 words = line.split()
3276@@ -340,7 +400,7 @@
3277
3278 def get_nic_hwaddr(nic):
3279 cmd = ['ip', '-o', '-0', 'addr', 'show', nic]
3280- ip_output = subprocess.check_output(cmd)
3281+ ip_output = subprocess.check_output(cmd).decode('UTF-8')
3282 hwaddr = ""
3283 words = ip_output.split()
3284 if 'link/ether' in words:
3285@@ -357,8 +417,8 @@
3286
3287 '''
3288 import apt_pkg
3289- from charmhelpers.fetch import apt_cache
3290 if not pkgcache:
3291+ from charmhelpers.fetch import apt_cache
3292 pkgcache = apt_cache()
3293 pkg = pkgcache[package]
3294 return apt_pkg.version_compare(pkg.current_ver.ver_str, revno)
3295
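The persistence branch added to `set_nic_mtu()` splices `up`/`down` MTU commands into the interface's stanza in `/etc/network/interfaces` (or its per-NIC `.cfg` file), appending a fresh stanza if none exists. A simplified sketch of that splice, operating on a list of lines rather than the real file (`add_mtu_lines` is a hypothetical helper name):

```python
def add_mtu_lines(lines, nic, mtu):
    """Insert up/down MTU commands after the nic's 'iface' header, or
    append a new auto/iface block when the stanza is missing."""
    out = list(lines)
    for i, line in enumerate(out):
        if line.startswith("iface %s" % nic):
            out.insert(i + 1, "    up ip link set $IFACE mtu %s" % mtu)
            out.insert(i + 2, "    down ip link set $IFACE mtu 1500")
            return out
    # No stanza found: append one, as the patched code does.
    out += ["", "auto %s" % nic, "iface %s inet dhcp" % nic,
            "    up ip link set $IFACE mtu %s" % mtu,
            "    down ip link set $IFACE mtu 1500"]
    return out


cfg = add_mtu_lines(["auto eth0", "iface eth0 inet dhcp"], "eth0", "9000")
print(cfg[2])
```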
3296=== modified file 'hooks/charmhelpers/core/services/__init__.py'
3297--- hooks/charmhelpers/core/services/__init__.py 2014-08-13 13:12:14 +0000
3298+++ hooks/charmhelpers/core/services/__init__.py 2014-12-10 07:57:54 +0000
3299@@ -1,2 +1,2 @@
3300-from .base import *
3301-from .helpers import *
3302+from .base import * # NOQA
3303+from .helpers import * # NOQA
3304
3305=== modified file 'hooks/charmhelpers/core/services/helpers.py'
3306--- hooks/charmhelpers/core/services/helpers.py 2014-10-06 21:57:43 +0000
3307+++ hooks/charmhelpers/core/services/helpers.py 2014-12-10 07:57:54 +0000
3308@@ -196,7 +196,7 @@
3309 if not os.path.isabs(file_name):
3310 file_name = os.path.join(hookenv.charm_dir(), file_name)
3311 with open(file_name, 'w') as file_stream:
3312- os.fchmod(file_stream.fileno(), 0600)
3313+ os.fchmod(file_stream.fileno(), 0o600)
3314 yaml.dump(config_data, file_stream)
3315
3316 def read_context(self, file_name):
3317@@ -211,15 +211,19 @@
3318
3319 class TemplateCallback(ManagerCallback):
3320 """
3321- Callback class that will render a Jinja2 template, for use as a ready action.
3322-
3323- :param str source: The template source file, relative to `$CHARM_DIR/templates`
3324+ Callback class that will render a Jinja2 template, for use as a ready
3325+ action.
3326+
3327+ :param str source: The template source file, relative to
3328+ `$CHARM_DIR/templates`
3329+
3330 :param str target: The target to write the rendered template to
3331 :param str owner: The owner of the rendered file
3332 :param str group: The group of the rendered file
3333 :param int perms: The permissions of the rendered file
3334 """
3335- def __init__(self, source, target, owner='root', group='root', perms=0444):
3336+ def __init__(self, source, target,
3337+ owner='root', group='root', perms=0o444):
3338 self.source = source
3339 self.target = target
3340 self.owner = owner
3341
3342=== added file 'hooks/charmhelpers/core/sysctl.py'
3343--- hooks/charmhelpers/core/sysctl.py 1970-01-01 00:00:00 +0000
3344+++ hooks/charmhelpers/core/sysctl.py 2014-12-10 07:57:54 +0000
3345@@ -0,0 +1,34 @@
3346+#!/usr/bin/env python
3347+# -*- coding: utf-8 -*-
3348+
3349+__author__ = 'Jorge Niedbalski R. <jorge.niedbalski@canonical.com>'
3350+
3351+import yaml
3352+
3353+from subprocess import check_call
3354+
3355+from charmhelpers.core.hookenv import (
3356+ log,
3357+ DEBUG,
3358+)
3359+
3360+
3361+def create(sysctl_dict, sysctl_file):
3362+ """Creates a sysctl.conf file from a YAML associative array
3363+
3364+ :param sysctl_dict: a YAML-formatted string of sysctl options, eg "{ 'kernel.max_pid': 1337 }"
3365+ :type sysctl_dict: str
3366+ :param sysctl_file: path to the sysctl file to be saved
3367+ :type sysctl_file: str or unicode
3368+ :returns: None
3369+ """
3370+ sysctl_dict = yaml.load(sysctl_dict)
3371+
3372+ with open(sysctl_file, "w") as fd:
3373+ for key, value in sysctl_dict.items():
3374+ fd.write("{}={}\n".format(key, value))
3375+
3376+ log("Updating sysctl_file: %s values: %s" % (sysctl_file, sysctl_dict),
3377+ level=DEBUG)
3378+
3379+ check_call(["sysctl", "-p", sysctl_file])
3380
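`sysctl.create()` parses a YAML string of options, writes one `key=value` line per option, and applies the file with `sysctl -p`. A sketch of the serialisation step only, skipping the YAML parse and the `sysctl -p` call so it runs anywhere (`render_sysctl` is an illustrative helper name, and the options are sample values):

```python
def render_sysctl(options):
    # One "key=value" line per option, matching sysctl.conf(5) syntax;
    # sorted for deterministic output (the real helper preserves the
    # parsed dict's order).
    return "".join("{}={}\n".format(k, v) for k, v in sorted(options.items()))


conf = render_sysctl({'kernel.pid_max': 1337, 'vm.swappiness': 10})
print(conf)
```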
3381=== modified file 'hooks/charmhelpers/core/templating.py'
3382--- hooks/charmhelpers/core/templating.py 2014-08-13 13:12:14 +0000
3383+++ hooks/charmhelpers/core/templating.py 2014-12-10 07:57:54 +0000
3384@@ -4,7 +4,8 @@
3385 from charmhelpers.core import hookenv
3386
3387
3388-def render(source, target, context, owner='root', group='root', perms=0444, templates_dir=None):
3389+def render(source, target, context, owner='root', group='root',
3390+ perms=0o444, templates_dir=None):
3391 """
3392 Render a template.
3393
3394
3395=== modified file 'hooks/charmhelpers/fetch/__init__.py'
3396--- hooks/charmhelpers/fetch/__init__.py 2014-10-06 21:57:43 +0000
3397+++ hooks/charmhelpers/fetch/__init__.py 2014-12-10 07:57:54 +0000
3398@@ -5,10 +5,6 @@
3399 from charmhelpers.core.host import (
3400 lsb_release
3401 )
3402-from urlparse import (
3403- urlparse,
3404- urlunparse,
3405-)
3406 import subprocess
3407 from charmhelpers.core.hookenv import (
3408 config,
3409@@ -16,6 +12,12 @@
3410 )
3411 import os
3412
3413+import six
3414+if six.PY3:
3415+ from urllib.parse import urlparse, urlunparse
3416+else:
3417+ from urlparse import urlparse, urlunparse
3418+
3419
3420 CLOUD_ARCHIVE = """# Ubuntu Cloud Archive
3421 deb http://ubuntu-cloud.archive.canonical.com/ubuntu {} main
3422@@ -72,6 +74,7 @@
3423 FETCH_HANDLERS = (
3424 'charmhelpers.fetch.archiveurl.ArchiveUrlFetchHandler',
3425 'charmhelpers.fetch.bzrurl.BzrUrlFetchHandler',
3426+ 'charmhelpers.fetch.giturl.GitUrlFetchHandler',
3427 )
3428
3429 APT_NO_LOCK = 100 # The return code for "couldn't acquire lock" in APT.
3430@@ -148,7 +151,7 @@
3431 cmd = ['apt-get', '--assume-yes']
3432 cmd.extend(options)
3433 cmd.append('install')
3434- if isinstance(packages, basestring):
3435+ if isinstance(packages, six.string_types):
3436 cmd.append(packages)
3437 else:
3438 cmd.extend(packages)
3439@@ -181,7 +184,7 @@
3440 def apt_purge(packages, fatal=False):
3441 """Purge one or more packages"""
3442 cmd = ['apt-get', '--assume-yes', 'purge']
3443- if isinstance(packages, basestring):
3444+ if isinstance(packages, six.string_types):
3445 cmd.append(packages)
3446 else:
3447 cmd.extend(packages)
3448@@ -192,7 +195,7 @@
3449 def apt_hold(packages, fatal=False):
3450 """Hold one or more packages"""
3451 cmd = ['apt-mark', 'hold']
3452- if isinstance(packages, basestring):
3453+ if isinstance(packages, six.string_types):
3454 cmd.append(packages)
3455 else:
3456 cmd.extend(packages)
3457@@ -218,6 +221,7 @@
3458 pocket for the release.
3459 'cloud:' may be used to activate official cloud archive pockets,
3460 such as 'cloud:icehouse'
3461+ 'distro' may be used as a noop
3462
3463 @param key: A key to be added to the system's APT keyring and used
3464 to verify the signatures on packages. Ideally, this should be an
3465@@ -251,12 +255,14 @@
3466 release = lsb_release()['DISTRIB_CODENAME']
3467 with open('/etc/apt/sources.list.d/proposed.list', 'w') as apt:
3468 apt.write(PROPOSED_POCKET.format(release))
3469+ elif source == 'distro':
3470+ pass
3471 else:
3472- raise SourceConfigError("Unknown source: {!r}".format(source))
3473+ log("Unknown source: {!r}".format(source))
3474
3475 if key:
3476 if '-----BEGIN PGP PUBLIC KEY BLOCK-----' in key:
3477- with NamedTemporaryFile() as key_file:
3478+ with NamedTemporaryFile('w+') as key_file:
3479 key_file.write(key)
3480 key_file.flush()
3481 key_file.seek(0)
3482@@ -293,14 +299,14 @@
3483 sources = safe_load((config(sources_var) or '').strip()) or []
3484 keys = safe_load((config(keys_var) or '').strip()) or None
3485
3486- if isinstance(sources, basestring):
3487+ if isinstance(sources, six.string_types):
3488 sources = [sources]
3489
3490 if keys is None:
3491 for source in sources:
3492 add_source(source, None)
3493 else:
3494- if isinstance(keys, basestring):
3495+ if isinstance(keys, six.string_types):
3496 keys = [keys]
3497
3498 if len(sources) != len(keys):
3499@@ -397,7 +403,7 @@
3500 while result is None or result == APT_NO_LOCK:
3501 try:
3502 result = subprocess.check_call(cmd, env=env)
3503- except subprocess.CalledProcessError, e:
3504+ except subprocess.CalledProcessError as e:
3505 retry_count = retry_count + 1
3506 if retry_count > APT_NO_LOCK_RETRY_COUNT:
3507 raise
3508
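The repeated `basestring` → `six.string_types` swaps in `fetch/__init__.py` let `apt_install`, `apt_purge`, and `apt_hold` accept either a single package name or a sequence on both Python versions. A self-contained sketch of the argument handling (without `six`, using an inline equivalent of `string_types`):

```python
import sys

# Equivalent of six.string_types: str and unicode both count as "one
# package name" on Python 2, plain str on Python 3.
string_types = (str,) if sys.version_info[0] >= 3 else (basestring,)  # noqa: F821


def build_install_cmd(packages):
    """Sketch of apt_install()'s argument normalisation."""
    cmd = ['apt-get', '--assume-yes', 'install']
    if isinstance(packages, string_types):
        cmd.append(packages)
    else:
        cmd.extend(packages)
    return cmd


print(build_install_cmd('foo'))
print(build_install_cmd(['foo', 'bar']))
```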
3509=== modified file 'hooks/charmhelpers/fetch/archiveurl.py'
3510--- hooks/charmhelpers/fetch/archiveurl.py 2014-10-06 21:57:43 +0000
3511+++ hooks/charmhelpers/fetch/archiveurl.py 2014-12-10 07:57:54 +0000
3512@@ -1,8 +1,23 @@
3513 import os
3514-import urllib2
3515-from urllib import urlretrieve
3516-import urlparse
3517 import hashlib
3518+import re
3519+
3520+import six
3521+if six.PY3:
3522+ from urllib.request import (
3523+ build_opener, install_opener, urlopen, urlretrieve,
3524+ HTTPPasswordMgrWithDefaultRealm, HTTPBasicAuthHandler,
3525+ )
3526+ from urllib.parse import urlparse, urlunparse, parse_qs
3527+ from urllib.error import URLError
3528+else:
3529+ from urllib import urlretrieve
3530+ from urllib2 import (
3531+ build_opener, install_opener, urlopen,
3532+ HTTPPasswordMgrWithDefaultRealm, HTTPBasicAuthHandler,
3533+ URLError
3534+ )
3535+ from urlparse import urlparse, urlunparse, parse_qs
3536
3537 from charmhelpers.fetch import (
3538 BaseFetchHandler,
3539@@ -15,6 +30,24 @@
3540 from charmhelpers.core.host import mkdir, check_hash
3541
3542
3543+def splituser(host):
3544+ '''urllib.splituser(), but six's support of this seems broken'''
3545+ _userprog = re.compile('^(.*)@(.*)$')
3546+ match = _userprog.match(host)
3547+ if match:
3548+ return match.group(1, 2)
3549+ return None, host
3550+
3551+
3552+def splitpasswd(user):
3553+ '''urllib.splitpasswd(), but six's support of this is missing'''
3554+ _passwdprog = re.compile('^([^:]*):(.*)$', re.S)
3555+ match = _passwdprog.match(user)
3556+ if match:
3557+ return match.group(1, 2)
3558+ return user, None
3559+
3560+
3561 class ArchiveUrlFetchHandler(BaseFetchHandler):
3562 """
3563 Handler to download archive files from arbitrary URLs.
3564@@ -42,20 +75,20 @@
3565 """
3566 # propogate all exceptions
3567 # URLError, OSError, etc
3568- proto, netloc, path, params, query, fragment = urlparse.urlparse(source)
3569+ proto, netloc, path, params, query, fragment = urlparse(source)
3570 if proto in ('http', 'https'):
3571- auth, barehost = urllib2.splituser(netloc)
3572+ auth, barehost = splituser(netloc)
3573 if auth is not None:
3574- source = urlparse.urlunparse((proto, barehost, path, params, query, fragment))
3575- username, password = urllib2.splitpasswd(auth)
3576- passman = urllib2.HTTPPasswordMgrWithDefaultRealm()
3577+ source = urlunparse((proto, barehost, path, params, query, fragment))
3578+ username, password = splitpasswd(auth)
3579+ passman = HTTPPasswordMgrWithDefaultRealm()
3580 # Realm is set to None in add_password to force the username and password
3581 # to be used whatever the realm
3582 passman.add_password(None, source, username, password)
3583- authhandler = urllib2.HTTPBasicAuthHandler(passman)
3584- opener = urllib2.build_opener(authhandler)
3585- urllib2.install_opener(opener)
3586- response = urllib2.urlopen(source)
3587+ authhandler = HTTPBasicAuthHandler(passman)
3588+ opener = build_opener(authhandler)
3589+ install_opener(opener)
3590+ response = urlopen(source)
3591 try:
3592 with open(dest, 'w') as dest_file:
3593 dest_file.write(response.read())
3594@@ -91,17 +124,21 @@
3595 url_parts = self.parse_url(source)
3596 dest_dir = os.path.join(os.environ.get('CHARM_DIR'), 'fetched')
3597 if not os.path.exists(dest_dir):
3598- mkdir(dest_dir, perms=0755)
3599+ mkdir(dest_dir, perms=0o755)
3600 dld_file = os.path.join(dest_dir, os.path.basename(url_parts.path))
3601 try:
3602 self.download(source, dld_file)
3603- except urllib2.URLError as e:
3604+ except URLError as e:
3605 raise UnhandledSource(e.reason)
3606 except OSError as e:
3607 raise UnhandledSource(e.strerror)
3608- options = urlparse.parse_qs(url_parts.fragment)
3609+ options = parse_qs(url_parts.fragment)
3610 for key, value in options.items():
3611- if key in hashlib.algorithms:
3612+ if not six.PY3:
3613+ algorithms = hashlib.algorithms
3614+ else:
3615+ algorithms = hashlib.algorithms_available
3616+ if key in algorithms:
3617 check_hash(dld_file, value, key)
3618 if checksum:
3619 check_hash(dld_file, checksum, hash_type)
3620
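The new `splituser()`/`splitpasswd()` helpers in `archiveurl.py` re-implement the `urllib2` functions that six does not bridge. They can be exercised standalone (the credentials below are sample data):

```python
import re


def splituser(host):
    """'user[:pass]@host' -> ('user[:pass]', 'host'); (None, host) if
    there is no '@'.  Greedy match keeps everything before the last @."""
    match = re.compile('^(.*)@(.*)$').match(host)
    if match:
        return match.group(1, 2)
    return None, host


def splitpasswd(user):
    """'user:pass' -> ('user', 'pass'); (user, None) if no ':'."""
    match = re.compile('^([^:]*):(.*)$', re.S).match(user)
    if match:
        return match.group(1, 2)
    return user, None


auth, barehost = splituser('alice:s3cret@example.com')
print(splitpasswd(auth), barehost)
```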
3621=== modified file 'hooks/charmhelpers/fetch/bzrurl.py'
3622--- hooks/charmhelpers/fetch/bzrurl.py 2014-07-28 14:38:51 +0000
3623+++ hooks/charmhelpers/fetch/bzrurl.py 2014-12-10 07:57:54 +0000
3624@@ -5,6 +5,10 @@
3625 )
3626 from charmhelpers.core.host import mkdir
3627
3628+import six
3629+if six.PY3:
3630+ raise ImportError('bzrlib does not support Python3')
3631+
3632 try:
3633 from bzrlib.branch import Branch
3634 except ImportError:
3635@@ -42,7 +46,7 @@
3636 dest_dir = os.path.join(os.environ.get('CHARM_DIR'), "fetched",
3637 branch_name)
3638 if not os.path.exists(dest_dir):
3639- mkdir(dest_dir, perms=0755)
3640+ mkdir(dest_dir, perms=0o755)
3641 try:
3642 self.branch(source, dest_dir)
3643 except OSError as e:
3644
3645=== added file 'hooks/charmhelpers/fetch/giturl.py'
3646--- hooks/charmhelpers/fetch/giturl.py 1970-01-01 00:00:00 +0000
3647+++ hooks/charmhelpers/fetch/giturl.py 2014-12-10 07:57:54 +0000
3648@@ -0,0 +1,51 @@
3649+import os
3650+from charmhelpers.fetch import (
3651+ BaseFetchHandler,
3652+ UnhandledSource
3653+)
3654+from charmhelpers.core.host import mkdir
3655+
3656+import six
3657+if six.PY3:
3658+ raise ImportError('GitPython does not support Python 3')
3659+
3660+try:
3661+ from git import Repo
3662+except ImportError:
3663+ from charmhelpers.fetch import apt_install
3664+ apt_install("python-git")
3665+ from git import Repo
3666+
3667+
3668+class GitUrlFetchHandler(BaseFetchHandler):
3669+ """Handler for git branches via generic and github URLs"""
3670+ def can_handle(self, source):
3671+ url_parts = self.parse_url(source)
3672+ # TODO (mattyw) no support for ssh git@ yet
3673+ if url_parts.scheme not in ('http', 'https', 'git'):
3674+ return False
3675+ else:
3676+ return True
3677+
3678+ def clone(self, source, dest, branch):
3679+ if not self.can_handle(source):
3680+ raise UnhandledSource("Cannot handle {}".format(source))
3681+
3682+ repo = Repo.clone_from(source, dest)
3683+ repo.git.checkout(branch)
3684+
3685+ def install(self, source, branch="master", dest=None):
3686+ url_parts = self.parse_url(source)
3687+ branch_name = url_parts.path.strip("/").split("/")[-1]
3688+ if dest:
3689+ dest_dir = os.path.join(dest, branch_name)
3690+ else:
3691+ dest_dir = os.path.join(os.environ.get('CHARM_DIR'), "fetched",
3692+ branch_name)
3693+ if not os.path.exists(dest_dir):
3694+ mkdir(dest_dir, perms=0o755)
3695+ try:
3696+ self.clone(source, dest_dir, branch)
3697+ except OSError as e:
3698+ raise UnhandledSource(e.strerror)
3699+ return dest_dir
3700
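The new `GitUrlFetchHandler` accepts http(s)/git URLs (no ssh `git@` support yet, per the TODO) and names the checkout directory after the last path segment. Those two decisions can be sketched without GitPython; the example URL is illustrative:

```python
try:
    from urllib.parse import urlparse  # Python 3
except ImportError:
    from urlparse import urlparse      # Python 2


def can_handle(source):
    """Mirror of GitUrlFetchHandler.can_handle(): http(s)/git only."""
    return urlparse(source).scheme in ('http', 'https', 'git')


def branch_dir_name(source):
    """The handler clones into $CHARM_DIR/fetched/<last path segment>."""
    return urlparse(source).path.strip('/').split('/')[-1]


print(can_handle('git://github.com/juju/charm-helpers'))
print(branch_dir_name('https://github.com/juju/charm-helpers'))
```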
=== modified file 'hooks/nova_compute_hooks.py'
--- hooks/nova_compute_hooks.py	2014-11-28 12:54:57 +0000
+++ hooks/nova_compute_hooks.py	2014-12-10 07:57:54 +0000
@@ -50,11 +50,12 @@
     ceph_config_file, CEPH_SECRET,
     enable_shell, disable_shell,
     fix_path_ownership,
-    assert_charm_supports_ipv6
+    assert_charm_supports_ipv6,
 )
 
 from charmhelpers.contrib.network.ip import (
-    get_ipv6_addr
+    get_ipv6_addr,
+    configure_phy_nic_mtu
 )
 
 from nova_compute_context import CEPH_SECRET_UUID
@@ -70,6 +71,7 @@
     configure_installation_source(config('openstack-origin'))
     apt_update()
     apt_install(determine_packages(), fatal=True)
+    configure_phy_nic_mtu()
 
 
 @hooks.hook('config-changed')
@@ -103,6 +105,8 @@
 
     CONFIGS.write_all()
 
+    configure_phy_nic_mtu()
+
 
 @hooks.hook('amqp-relation-joined')
 def amqp_joined(relation_id=None):
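Both the `install` and `config-changed` hooks now call `configure_phy_nic_mtu()`, so an MTU setting in charm config is applied at deploy time and re-applied whenever config changes. The helper's body is not shown in this excerpt; as an illustrative sketch only, such a helper typically drives an iproute2 command of this shape (`set_nic_mtu` is a hypothetical name, and the sketch returns the argv list rather than executing it, so the derivation is easy to test):

```python
def set_nic_mtu(nic, mtu):
    """Build the iproute2 command that applies an MTU to a physical NIC.

    Illustrative only: a real helper would pass the returned list to
    subprocess.check_call() and would likely also persist the setting
    in /etc/network/interfaces so it survives a reboot.
    """
    return ["ip", "link", "set", "dev", nic, "mtu", str(mtu)]
```

Calling hooks idempotently like this is the usual charm pattern: re-running `ip link set ... mtu N` with an unchanged value is harmless.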
=== modified file 'hooks/nova_compute_utils.py'
--- hooks/nova_compute_utils.py	2014-12-03 23:54:27 +0000
+++ hooks/nova_compute_utils.py	2014-12-10 07:57:54 +0000
@@ -14,7 +14,7 @@
 from charmhelpers.core.host import (
     mkdir,
     service_restart,
-    lsb_release
+    lsb_release,
 )
 
 from charmhelpers.core.hookenv import (
=== modified file 'unit_tests/test_nova_compute_hooks.py'
--- unit_tests/test_nova_compute_hooks.py	2014-11-28 14:17:10 +0000
+++ unit_tests/test_nova_compute_hooks.py	2014-12-10 07:57:54 +0000
@@ -54,7 +54,8 @@
         'ensure_ceph_keyring',
         'execd_preinstall',
         # socket
-        'gethostname'
+        'gethostname',
+        'configure_phy_nic_mtu'
     ]
 
 
@@ -80,11 +81,13 @@
         self.assertTrue(self.apt_update.called)
         self.apt_install.assert_called_with(['foo', 'bar'], fatal=True)
         self.execd_preinstall.assert_called()
+        self.assertTrue(self.configure_phy_nic_mtu.called)
 
     def test_config_changed_with_upgrade(self):
         self.openstack_upgrade_available.return_value = True
         hooks.config_changed()
         self.assertTrue(self.do_openstack_upgrade.called)
+        self.assertTrue(self.configure_phy_nic_mtu.called)
 
     @patch.object(hooks, 'compute_joined')
     def test_config_changed_with_migration(self, compute_joined):
