Merge lp:~niedbalski/charms/trusty/nova-compute/lp-1396613 into lp:~openstack-charmers-archive/charms/trusty/nova-compute/next

Proposed by Jorge Niedbalski
Status: Merged
Merged at revision: 92
Proposed branch: lp:~niedbalski/charms/trusty/nova-compute/lp-1396613
Merge into: lp:~openstack-charmers-archive/charms/trusty/nova-compute/next
Diff against target: 3227 lines (+889/-530)
32 files modified
config.yaml (+6/-0)
hooks/charmhelpers/contrib/hahelpers/cluster.py (+16/-7)
hooks/charmhelpers/contrib/network/ip.py (+59/-53)
hooks/charmhelpers/contrib/openstack/amulet/deployment.py (+2/-1)
hooks/charmhelpers/contrib/openstack/amulet/utils.py (+3/-1)
hooks/charmhelpers/contrib/openstack/context.py (+339/-225)
hooks/charmhelpers/contrib/openstack/ip.py (+41/-27)
hooks/charmhelpers/contrib/openstack/neutron.py (+20/-4)
hooks/charmhelpers/contrib/openstack/templates/haproxy.cfg (+2/-2)
hooks/charmhelpers/contrib/openstack/templating.py (+5/-5)
hooks/charmhelpers/contrib/openstack/utils.py (+35/-12)
hooks/charmhelpers/contrib/storage/linux/ceph.py (+89/-102)
hooks/charmhelpers/contrib/storage/linux/loopback.py (+4/-4)
hooks/charmhelpers/contrib/storage/linux/lvm.py (+1/-0)
hooks/charmhelpers/contrib/storage/linux/utils.py (+3/-2)
hooks/charmhelpers/core/fstab.py (+10/-8)
hooks/charmhelpers/core/hookenv.py (+25/-11)
hooks/charmhelpers/core/host.py (+30/-19)
hooks/charmhelpers/core/services/__init__.py (+2/-2)
hooks/charmhelpers/core/services/helpers.py (+9/-5)
hooks/charmhelpers/core/sysctl.py (+34/-0)
hooks/charmhelpers/core/templating.py (+2/-1)
hooks/charmhelpers/fetch/__init__.py (+18/-12)
hooks/charmhelpers/fetch/archiveurl.py (+53/-16)
hooks/charmhelpers/fetch/bzrurl.py (+5/-1)
hooks/charmhelpers/fetch/giturl.py (+48/-0)
hooks/nova_compute_hooks.py (+6/-0)
tests/charmhelpers/contrib/amulet/deployment.py (+3/-3)
tests/charmhelpers/contrib/amulet/utils.py (+6/-4)
tests/charmhelpers/contrib/openstack/amulet/deployment.py (+2/-1)
tests/charmhelpers/contrib/openstack/amulet/utils.py (+3/-1)
unit_tests/test_nova_compute_hooks.py (+8/-1)
To merge this branch: bzr merge lp:~niedbalski/charms/trusty/nova-compute/lp-1396613
Reviewers:
  Corey Bryant (community): Approve
  Billy Olsen: Approve
Review via email: mp+244344@code.launchpad.net

This proposal supersedes a proposal from 2014-11-26.

Description of the change

Fix for LP: #1396613
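
The fix exposes a new 'sysctl' charm config option (see the config.yaml hunk and the new hooks/charmhelpers/core/sysctl.py module in the diff below), letting operators supply kernel tunables as a YAML-formatted associative array. As a minimal, non-authoritative sketch of how such an option is typically consumed from a config-changed hook (the create() signature and the target file name here are assumptions for illustration, not copied from this diff):

    from charmhelpers.core.hookenv import config
    from charmhelpers.core.sysctl import create as sysctl_create

    def apply_sysctl_config():
        # The charm option holds a YAML associative array, e.g.
        # '{ kernel.pid_max : 4194303 }' as documented in config.yaml below.
        settings = config('sysctl')
        if settings:
            # Assumed helper: writes the values to a sysctl.d file and applies them.
            sysctl_create(settings, '/etc/sysctl.d/50-nova-compute.conf')

At deploy time an operator would then set the option through charm config, for example with juju set nova-compute sysctl="{ kernel.pid_max : 4194303 }" (juju 1.x syntax).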

Revision history for this message
uosci-testing-bot (uosci-testing-bot) wrote : Posted in a previous version of this proposal

UOSCI bot says:
charm_lint_check #1225 nova-compute-next for niedbalski mp242924
    LINT OK: passed

LINT Results (max last 5 lines):
  I: config.yaml: option config-flags has no default value
  I: config.yaml: option instances-path has no default value
  W: config.yaml: option disable-neutron-security-groups has no default value
  I: config.yaml: option migration-auth-type has no default value
  I: config.yaml: option disk-cachemodes has no default value

Full lint test output: http://paste.ubuntu.com/9252794/
Build: http://10.98.191.181:8080/job/charm_lint_check/1225/

Revision history for this message
uosci-testing-bot (uosci-testing-bot) wrote : Posted in a previous version of this proposal

UOSCI bot says:
charm_unit_test #1059 nova-compute-next for niedbalski mp242924
    UNIT OK: passed

UNIT Results (max last 5 lines):
  nova_compute_hooks 136 13 90% 81, 103-104, 131, 197, 255, 260-261, 267, 271-274
  nova_compute_utils 228 110 52% 161-217, 225, 230-233, 268-270, 277, 281-284, 292-300, 304, 313-322, 335-354, 380-381, 385-386, 405-426, 443-453, 467-468, 473-474
  TOTAL 570 193 66%
  Ran 58 tests in 3.018s
  OK (SKIP=5)

Full unit test output: http://paste.ubuntu.com/9252795/
Build: http://10.98.191.181:8080/job/charm_unit_test/1059/

Revision history for this message
uosci-testing-bot (uosci-testing-bot) wrote : Posted in a previous version of this proposal

UOSCI bot says:
charm_amulet_test #528 nova-compute-next for niedbalski mp242924
    AMULET FAIL: amulet-test failed

AMULET Results (max last 5 lines):
  juju-test.conductor DEBUG : Calling "juju destroy-environment -y osci-sv11"
  WARNING cannot delete security group "juju-osci-sv11-0". Used by another environment?
  juju-test INFO : Results: 2 passed, 1 failed, 0 errored
  ERROR subprocess encountered error code 1
  make: *** [test] Error 1

Full amulet test output: http://paste.ubuntu.com/9253036/
Build: http://10.98.191.181:8080/job/charm_amulet_test/528/

Revision history for this message
Billy Olsen (billy-olsen) : Posted in a previous version of this proposal
Revision history for this message
Liang Chen (cbjchen) : Posted in a previous version of this proposal
Revision history for this message
Billy Olsen (billy-olsen) wrote : Posted in a previous version of this proposal

Jorge, just some minor comments which should be addressed.

review: Needs Fixing
Revision history for this message
Jorge Niedbalski (niedbalski) wrote :

@billy-olsen,

I re-submitted this proposal addressing @cbjchen's and @billy-olsen's comments.

Thanks.

Revision history for this message
uosci-testing-bot (uosci-testing-bot) wrote :

charm_lint_check #171 nova-compute-next for niedbalski mp244344
    LINT FAIL: lint-test failed

LINT Results (max last 2 lines):
  unit_tests/test_nova_compute_hooks.py:149:1: W191 indentation contains tabs
  make: *** [lint] Error 1

Full lint test output: pastebin not avail., cmd error
Build: http://10.230.18.80:8080/job/charm_lint_check/171/
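
For reference, W191 is the pep8/flake8 check that flags lines indented with tab characters. A hypothetical snippet (not the actual test code at test_nova_compute_hooks.py:149) illustrating the failure and the whitespace-only fix that followed in revision 92:

    # Tab-indented body: flake8 reports W191 "indentation contains tabs".
    def test_tab_indented():
    	return True

    # Re-indented with four spaces: the lint check passes.
    def test_space_indented():
        return True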

Revision history for this message
uosci-testing-bot (uosci-testing-bot) wrote :

charm_unit_test #134 nova-compute-next for niedbalski mp244344
    UNIT OK: passed

Build: http://10.230.18.80:8080/job/charm_unit_test/134/

92. By Jorge Niedbalski

lint errors fixed

Revision history for this message
uosci-testing-bot (uosci-testing-bot) wrote :

charm_lint_check #173 nova-compute-next for niedbalski mp244344
    LINT OK: passed

Build: http://10.230.18.80:8080/job/charm_lint_check/173/

Revision history for this message
uosci-testing-bot (uosci-testing-bot) wrote :

charm_unit_test #136 nova-compute-next for niedbalski mp244344
    UNIT OK: passed

Build: http://10.230.18.80:8080/job/charm_unit_test/136/

Revision history for this message
uosci-testing-bot (uosci-testing-bot) wrote :

charm_amulet_test #83 nova-compute-next for niedbalski mp244344
    AMULET FAIL: amulet-test failed

AMULET Results (max last 2 lines):
  ERROR subprocess encountered error code 1
  make: *** [test] Error 1

Full amulet test output: pastebin not avail., cmd error
Build: http://10.230.18.80:8080/job/charm_amulet_test/83/

Revision history for this message
uosci-testing-bot (uosci-testing-bot) wrote :

charm_amulet_test #85 nova-compute-next for niedbalski mp244344
    AMULET FAIL: amulet-test failed

AMULET Results (max last 2 lines):
  ERROR subprocess encountered error code 1
  make: *** [test] Error 1

Full amulet test output: pastebin not avail., cmd error
Build: http://10.230.18.80:8080/job/charm_amulet_test/85/

Revision history for this message
Billy Olsen (billy-olsen) wrote :

Thanks Jorge - changes look good, deploys successfully.

review: Approve
Revision history for this message
Corey Bryant (corey.bryant) wrote :

LGTM and follows suit with the ceph charms. I'm going to go ahead and merge this as I don't see any changes that would affect amulet tests.

review: Approve

Preview Diff

1=== modified file 'config.yaml'
2--- config.yaml 2014-11-28 12:54:57 +0000
3+++ config.yaml 2014-12-10 19:38:11 +0000
4@@ -154,3 +154,9 @@
5 order for this charm to function correctly, the privacy extension must be
6 disabled and a non-temporary address must be configured/available on
7 your network interface.
8+ sysctl:
9+ type: string
10+ default:
11+ description: |
12+ YAML formatted associative array of sysctl values, e.g.:
13+ '{ kernel.pid_max : 4194303 }'
14
15=== modified file 'hooks/charmhelpers/contrib/hahelpers/cluster.py'
16--- hooks/charmhelpers/contrib/hahelpers/cluster.py 2014-10-06 21:57:43 +0000
17+++ hooks/charmhelpers/contrib/hahelpers/cluster.py 2014-12-10 19:38:11 +0000
18@@ -13,9 +13,10 @@
19
20 import subprocess
21 import os
22-
23 from socket import gethostname as get_unit_hostname
24
25+import six
26+
27 from charmhelpers.core.hookenv import (
28 log,
29 relation_ids,
30@@ -77,7 +78,7 @@
31 "show", resource
32 ]
33 try:
34- status = subprocess.check_output(cmd)
35+ status = subprocess.check_output(cmd).decode('UTF-8')
36 except subprocess.CalledProcessError:
37 return False
38 else:
39@@ -150,34 +151,42 @@
40 return False
41
42
43-def determine_api_port(public_port):
44+def determine_api_port(public_port, singlenode_mode=False):
45 '''
46 Determine correct API server listening port based on
47 existence of HTTPS reverse proxy and/or haproxy.
48
49 public_port: int: standard public port for given service
50
51+ singlenode_mode: boolean: Shuffle ports when only a single unit is present
52+
53 returns: int: the correct listening port for the API service
54 '''
55 i = 0
56- if len(peer_units()) > 0 or is_clustered():
57+ if singlenode_mode:
58+ i += 1
59+ elif len(peer_units()) > 0 or is_clustered():
60 i += 1
61 if https():
62 i += 1
63 return public_port - (i * 10)
64
65
66-def determine_apache_port(public_port):
67+def determine_apache_port(public_port, singlenode_mode=False):
68 '''
69 Description: Determine correct apache listening port based on public IP +
70 state of the cluster.
71
72 public_port: int: standard public port for given service
73
74+ singlenode_mode: boolean: Shuffle ports when only a single unit is present
75+
76 returns: int: the correct listening port for the HAProxy service
77 '''
78 i = 0
79- if len(peer_units()) > 0 or is_clustered():
80+ if singlenode_mode:
81+ i += 1
82+ elif len(peer_units()) > 0 or is_clustered():
83 i += 1
84 return public_port - (i * 10)
85
86@@ -197,7 +206,7 @@
87 for setting in settings:
88 conf[setting] = config_get(setting)
89 missing = []
90- [missing.append(s) for s, v in conf.iteritems() if v is None]
91+ [missing.append(s) for s, v in six.iteritems(conf) if v is None]
92 if missing:
93 log('Insufficient config data to configure hacluster.', level=ERROR)
94 raise HAIncompleteConfig
95
96=== modified file 'hooks/charmhelpers/contrib/network/ip.py'
97--- hooks/charmhelpers/contrib/network/ip.py 2014-10-06 21:57:43 +0000
98+++ hooks/charmhelpers/contrib/network/ip.py 2014-12-10 19:38:11 +0000
99@@ -1,15 +1,12 @@
100 import glob
101 import re
102 import subprocess
103-import sys
104
105 from functools import partial
106
107 from charmhelpers.core.hookenv import unit_get
108 from charmhelpers.fetch import apt_install
109 from charmhelpers.core.hookenv import (
110- WARNING,
111- ERROR,
112 log
113 )
114
115@@ -34,31 +31,28 @@
116 network)
117
118
119+def no_ip_found_error_out(network):
120+ errmsg = ("No IP address found in network: %s" % network)
121+ raise ValueError(errmsg)
122+
123+
124 def get_address_in_network(network, fallback=None, fatal=False):
125- """
126- Get an IPv4 or IPv6 address within the network from the host.
127+ """Get an IPv4 or IPv6 address within the network from the host.
128
129 :param network (str): CIDR presentation format. For example,
130 '192.168.1.0/24'.
131 :param fallback (str): If no address is found, return fallback.
132 :param fatal (boolean): If no address is found, fallback is not
133 set and fatal is True then exit(1).
134-
135 """
136-
137- def not_found_error_out():
138- log("No IP address found in network: %s" % network,
139- level=ERROR)
140- sys.exit(1)
141-
142 if network is None:
143 if fallback is not None:
144 return fallback
145+
146+ if fatal:
147+ no_ip_found_error_out(network)
148 else:
149- if fatal:
150- not_found_error_out()
151- else:
152- return None
153+ return None
154
155 _validate_cidr(network)
156 network = netaddr.IPNetwork(network)
157@@ -70,6 +64,7 @@
158 cidr = netaddr.IPNetwork("%s/%s" % (addr, netmask))
159 if cidr in network:
160 return str(cidr.ip)
161+
162 if network.version == 6 and netifaces.AF_INET6 in addresses:
163 for addr in addresses[netifaces.AF_INET6]:
164 if not addr['addr'].startswith('fe80'):
165@@ -82,20 +77,20 @@
166 return fallback
167
168 if fatal:
169- not_found_error_out()
170+ no_ip_found_error_out(network)
171
172 return None
173
174
175 def is_ipv6(address):
176- '''Determine whether provided address is IPv6 or not'''
177+ """Determine whether provided address is IPv6 or not."""
178 try:
179 address = netaddr.IPAddress(address)
180 except netaddr.AddrFormatError:
181 # probably a hostname - so not an address at all!
182 return False
183- else:
184- return address.version == 6
185+
186+ return address.version == 6
187
188
189 def is_address_in_network(network, address):
190@@ -113,11 +108,13 @@
191 except (netaddr.core.AddrFormatError, ValueError):
192 raise ValueError("Network (%s) is not in CIDR presentation format" %
193 network)
194+
195 try:
196 address = netaddr.IPAddress(address)
197 except (netaddr.core.AddrFormatError, ValueError):
198 raise ValueError("Address (%s) is not in correct presentation format" %
199 address)
200+
201 if address in network:
202 return True
203 else:
204@@ -140,57 +137,63 @@
205 if address.version == 4 and netifaces.AF_INET in addresses:
206 addr = addresses[netifaces.AF_INET][0]['addr']
207 netmask = addresses[netifaces.AF_INET][0]['netmask']
208- cidr = netaddr.IPNetwork("%s/%s" % (addr, netmask))
209+ network = netaddr.IPNetwork("%s/%s" % (addr, netmask))
210+ cidr = network.cidr
211 if address in cidr:
212 if key == 'iface':
213 return iface
214 else:
215 return addresses[netifaces.AF_INET][0][key]
216+
217 if address.version == 6 and netifaces.AF_INET6 in addresses:
218 for addr in addresses[netifaces.AF_INET6]:
219 if not addr['addr'].startswith('fe80'):
220- cidr = netaddr.IPNetwork("%s/%s" % (addr['addr'],
221- addr['netmask']))
222+ network = netaddr.IPNetwork("%s/%s" % (addr['addr'],
223+ addr['netmask']))
224+ cidr = network.cidr
225 if address in cidr:
226 if key == 'iface':
227 return iface
228+ elif key == 'netmask' and cidr:
229+ return str(cidr).split('/')[1]
230 else:
231 return addr[key]
232+
233 return None
234
235
236 get_iface_for_address = partial(_get_for_address, key='iface')
237
238+
239 get_netmask_for_address = partial(_get_for_address, key='netmask')
240
241
242 def format_ipv6_addr(address):
243- """
244- IPv6 needs to be wrapped with [] in url link to parse correctly.
245+ """If address is IPv6, wrap it in '[]' otherwise return None.
246+
247+ This is required by most configuration files when specifying IPv6
248+ addresses.
249 """
250 if is_ipv6(address):
251- address = "[%s]" % address
252- else:
253- log("Not a valid ipv6 address: %s" % address, level=WARNING)
254- address = None
255+ return "[%s]" % address
256
257- return address
258+ return None
259
260
261 def get_iface_addr(iface='eth0', inet_type='AF_INET', inc_aliases=False,
262 fatal=True, exc_list=None):
263- """
264- Return the assigned IP address for a given interface, if any, or [].
265- """
266+ """Return the assigned IP address for a given interface, if any."""
267 # Extract nic if passed /dev/ethX
268 if '/' in iface:
269 iface = iface.split('/')[-1]
270+
271 if not exc_list:
272 exc_list = []
273+
274 try:
275 inet_num = getattr(netifaces, inet_type)
276 except AttributeError:
277- raise Exception('Unknown inet type ' + str(inet_type))
278+ raise Exception("Unknown inet type '%s'" % str(inet_type))
279
280 interfaces = netifaces.interfaces()
281 if inc_aliases:
282@@ -198,15 +201,18 @@
283 for _iface in interfaces:
284 if iface == _iface or _iface.split(':')[0] == iface:
285 ifaces.append(_iface)
286+
287 if fatal and not ifaces:
288 raise Exception("Invalid interface '%s'" % iface)
289+
290 ifaces.sort()
291 else:
292 if iface not in interfaces:
293 if fatal:
294- raise Exception("%s not found " % (iface))
295+ raise Exception("Interface '%s' not found " % (iface))
296 else:
297 return []
298+
299 else:
300 ifaces = [iface]
301
302@@ -217,10 +223,13 @@
303 for entry in net_info[inet_num]:
304 if 'addr' in entry and entry['addr'] not in exc_list:
305 addresses.append(entry['addr'])
306+
307 if fatal and not addresses:
308 raise Exception("Interface '%s' doesn't have any %s addresses." %
309 (iface, inet_type))
310- return addresses
311+
312+ return sorted(addresses)
313+
314
315 get_ipv4_addr = partial(get_iface_addr, inet_type='AF_INET')
316
317@@ -237,6 +246,7 @@
318 raw = re.match(ll_key, _addr)
319 if raw:
320 _addr = raw.group(1)
321+
322 if _addr == addr:
323 log("Address '%s' is configured on iface '%s'" %
324 (addr, iface))
325@@ -247,8 +257,9 @@
326
327
328 def sniff_iface(f):
329- """If no iface provided, inject net iface inferred from unit private
330- address.
331+ """Ensure decorated function is called with a value for iface.
332+
333+ If no iface provided, inject net iface inferred from unit private address.
334 """
335 def iface_sniffer(*args, **kwargs):
336 if not kwargs.get('iface', None):
337@@ -291,7 +302,7 @@
338 if global_addrs:
339 # Make sure any found global addresses are not temporary
340 cmd = ['ip', 'addr', 'show', iface]
341- out = subprocess.check_output(cmd)
342+ out = subprocess.check_output(cmd).decode('UTF-8')
343 if dynamic_only:
344 key = re.compile("inet6 (.+)/[0-9]+ scope global dynamic.*")
345 else:
346@@ -313,33 +324,28 @@
347 return addrs
348
349 if fatal:
350- raise Exception("Interface '%s' doesn't have a scope global "
351+ raise Exception("Interface '%s' does not have a scope global "
352 "non-temporary ipv6 address." % iface)
353
354 return []
355
356
357 def get_bridges(vnic_dir='/sys/devices/virtual/net'):
358- """
359- Return a list of bridges on the system or []
360- """
361- b_rgex = vnic_dir + '/*/bridge'
362- return [x.replace(vnic_dir, '').split('/')[1] for x in glob.glob(b_rgex)]
363+ """Return a list of bridges on the system."""
364+ b_regex = "%s/*/bridge" % vnic_dir
365+ return [x.replace(vnic_dir, '').split('/')[1] for x in glob.glob(b_regex)]
366
367
368 def get_bridge_nics(bridge, vnic_dir='/sys/devices/virtual/net'):
369- """
370- Return a list of nics comprising a given bridge on the system or []
371- """
372- brif_rgex = "%s/%s/brif/*" % (vnic_dir, bridge)
373- return [x.split('/')[-1] for x in glob.glob(brif_rgex)]
374+ """Return a list of nics comprising a given bridge on the system."""
375+ brif_regex = "%s/%s/brif/*" % (vnic_dir, bridge)
376+ return [x.split('/')[-1] for x in glob.glob(brif_regex)]
377
378
379 def is_bridge_member(nic):
380- """
381- Check if a given nic is a member of a bridge
382- """
383+ """Check if a given nic is a member of a bridge."""
384 for bridge in get_bridges():
385 if nic in get_bridge_nics(bridge):
386 return True
387+
388 return False
389
390=== modified file 'hooks/charmhelpers/contrib/openstack/amulet/deployment.py'
391--- hooks/charmhelpers/contrib/openstack/amulet/deployment.py 2014-10-06 21:57:43 +0000
392+++ hooks/charmhelpers/contrib/openstack/amulet/deployment.py 2014-12-10 19:38:11 +0000
393@@ -1,3 +1,4 @@
394+import six
395 from charmhelpers.contrib.amulet.deployment import (
396 AmuletDeployment
397 )
398@@ -69,7 +70,7 @@
399
400 def _configure_services(self, configs):
401 """Configure all of the services."""
402- for service, config in configs.iteritems():
403+ for service, config in six.iteritems(configs):
404 self.d.configure(service, config)
405
406 def _get_openstack_release(self):
407
408=== modified file 'hooks/charmhelpers/contrib/openstack/amulet/utils.py'
409--- hooks/charmhelpers/contrib/openstack/amulet/utils.py 2014-10-06 21:57:43 +0000
410+++ hooks/charmhelpers/contrib/openstack/amulet/utils.py 2014-12-10 19:38:11 +0000
411@@ -7,6 +7,8 @@
412 import keystoneclient.v2_0 as keystone_client
413 import novaclient.v1_1.client as nova_client
414
415+import six
416+
417 from charmhelpers.contrib.amulet.utils import (
418 AmuletUtils
419 )
420@@ -60,7 +62,7 @@
421 expected service catalog endpoints.
422 """
423 self.log.debug('actual: {}'.format(repr(actual)))
424- for k, v in expected.iteritems():
425+ for k, v in six.iteritems(expected):
426 if k in actual:
427 ret = self._validate_dict_data(expected[k][0], actual[k][0])
428 if ret:
429
430=== modified file 'hooks/charmhelpers/contrib/openstack/context.py'
431--- hooks/charmhelpers/contrib/openstack/context.py 2014-10-06 21:57:43 +0000
432+++ hooks/charmhelpers/contrib/openstack/context.py 2014-12-10 19:38:11 +0000
433@@ -1,20 +1,18 @@
434 import json
435 import os
436 import time
437-
438 from base64 import b64decode
439+from subprocess import check_call
440
441-from subprocess import (
442- check_call
443-)
444+import six
445
446 from charmhelpers.fetch import (
447 apt_install,
448 filter_installed_packages,
449 )
450-
451 from charmhelpers.core.hookenv import (
452 config,
453+ is_relation_made,
454 local_unit,
455 log,
456 relation_get,
457@@ -23,41 +21,40 @@
458 relation_set,
459 unit_get,
460 unit_private_ip,
461+ DEBUG,
462+ INFO,
463+ WARNING,
464 ERROR,
465- INFO
466 )
467-
468 from charmhelpers.core.host import (
469 mkdir,
470- write_file
471+ write_file,
472 )
473-
474 from charmhelpers.contrib.hahelpers.cluster import (
475 determine_apache_port,
476 determine_api_port,
477 https,
478- is_clustered
479+ is_clustered,
480 )
481-
482 from charmhelpers.contrib.hahelpers.apache import (
483 get_cert,
484 get_ca_cert,
485 install_ca_cert,
486 )
487-
488 from charmhelpers.contrib.openstack.neutron import (
489 neutron_plugin_attribute,
490 )
491-
492 from charmhelpers.contrib.network.ip import (
493 get_address_in_network,
494 get_ipv6_addr,
495 get_netmask_for_address,
496 format_ipv6_addr,
497- is_address_in_network
498+ is_address_in_network,
499 )
500+from charmhelpers.contrib.openstack.utils import get_host_ip
501
502 CA_CERT_PATH = '/usr/local/share/ca-certificates/keystone_juju_ca_cert.crt'
503+ADDRESS_TYPES = ['admin', 'internal', 'public']
504
505
506 class OSContextError(Exception):
507@@ -65,7 +62,7 @@
508
509
510 def ensure_packages(packages):
511- '''Install but do not upgrade required plugin packages'''
512+ """Install but do not upgrade required plugin packages."""
513 required = filter_installed_packages(packages)
514 if required:
515 apt_install(required, fatal=True)
516@@ -73,20 +70,27 @@
517
518 def context_complete(ctxt):
519 _missing = []
520- for k, v in ctxt.iteritems():
521+ for k, v in six.iteritems(ctxt):
522 if v is None or v == '':
523 _missing.append(k)
524+
525 if _missing:
526- log('Missing required data: %s' % ' '.join(_missing), level='INFO')
527+ log('Missing required data: %s' % ' '.join(_missing), level=INFO)
528 return False
529+
530 return True
531
532
533 def config_flags_parser(config_flags):
534+ """Parses config flags string into dict.
535+
536+ The provided config_flags string may be a list of comma-separated values
537+ which themselves may be comma-separated list of values.
538+ """
539 if config_flags.find('==') >= 0:
540- log("config_flags is not in expected format (key=value)",
541- level=ERROR)
542+ log("config_flags is not in expected format (key=value)", level=ERROR)
543 raise OSContextError
544+
545 # strip the following from each value.
546 post_strippers = ' ,'
547 # we strip any leading/trailing '=' or ' ' from the string then
548@@ -94,7 +98,7 @@
549 split = config_flags.strip(' =').split('=')
550 limit = len(split)
551 flags = {}
552- for i in xrange(0, limit - 1):
553+ for i in range(0, limit - 1):
554 current = split[i]
555 next = split[i + 1]
556 vindex = next.rfind(',')
557@@ -109,17 +113,18 @@
558 # if this not the first entry, expect an embedded key.
559 index = current.rfind(',')
560 if index < 0:
561- log("invalid config value(s) at index %s" % (i),
562- level=ERROR)
563+ log("Invalid config value(s) at index %s" % (i), level=ERROR)
564 raise OSContextError
565 key = current[index + 1:]
566
567 # Add to collection.
568 flags[key.strip(post_strippers)] = value.rstrip(post_strippers)
569+
570 return flags
571
572
573 class OSContextGenerator(object):
574+ """Base class for all context generators."""
575 interfaces = []
576
577 def __call__(self):
578@@ -131,11 +136,11 @@
579
580 def __init__(self,
581 database=None, user=None, relation_prefix=None, ssl_dir=None):
582- '''
583- Allows inspecting relation for settings prefixed with relation_prefix.
584- This is useful for parsing access for multiple databases returned via
585- the shared-db interface (eg, nova_password, quantum_password)
586- '''
587+ """Allows inspecting relation for settings prefixed with
588+ relation_prefix. This is useful for parsing access for multiple
589+ databases returned via the shared-db interface (eg, nova_password,
590+ quantum_password)
591+ """
592 self.relation_prefix = relation_prefix
593 self.database = database
594 self.user = user
595@@ -145,9 +150,8 @@
596 self.database = self.database or config('database')
597 self.user = self.user or config('database-user')
598 if None in [self.database, self.user]:
599- log('Could not generate shared_db context. '
600- 'Missing required charm config options. '
601- '(database name and user)')
602+ log("Could not generate shared_db context. Missing required charm "
603+ "config options. (database name and user)", level=ERROR)
604 raise OSContextError
605
606 ctxt = {}
607@@ -200,23 +204,24 @@
608 def __call__(self):
609 self.database = self.database or config('database')
610 if self.database is None:
611- log('Could not generate postgresql_db context. '
612- 'Missing required charm config options. '
613- '(database name)')
614+ log('Could not generate postgresql_db context. Missing required '
615+ 'charm config options. (database name)', level=ERROR)
616 raise OSContextError
617+
618 ctxt = {}
619-
620 for rid in relation_ids(self.interfaces[0]):
621 for unit in related_units(rid):
622- ctxt = {
623- 'database_host': relation_get('host', rid=rid, unit=unit),
624- 'database': self.database,
625- 'database_user': relation_get('user', rid=rid, unit=unit),
626- 'database_password': relation_get('password', rid=rid, unit=unit),
627- 'database_type': 'postgresql',
628- }
629+ rel_host = relation_get('host', rid=rid, unit=unit)
630+ rel_user = relation_get('user', rid=rid, unit=unit)
631+ rel_passwd = relation_get('password', rid=rid, unit=unit)
632+ ctxt = {'database_host': rel_host,
633+ 'database': self.database,
634+ 'database_user': rel_user,
635+ 'database_password': rel_passwd,
636+ 'database_type': 'postgresql'}
637 if context_complete(ctxt):
638 return ctxt
639+
640 return {}
641
642
643@@ -225,23 +230,29 @@
644 ca_path = os.path.join(ssl_dir, 'db-client.ca')
645 with open(ca_path, 'w') as fh:
646 fh.write(b64decode(rdata['ssl_ca']))
647+
648 ctxt['database_ssl_ca'] = ca_path
649 elif 'ssl_ca' in rdata:
650- log("Charm not setup for ssl support but ssl ca found")
651+ log("Charm not setup for ssl support but ssl ca found", level=INFO)
652 return ctxt
653+
654 if 'ssl_cert' in rdata:
655 cert_path = os.path.join(
656 ssl_dir, 'db-client.cert')
657 if not os.path.exists(cert_path):
658- log("Waiting 1m for ssl client cert validity")
659+ log("Waiting 1m for ssl client cert validity", level=INFO)
660 time.sleep(60)
661+
662 with open(cert_path, 'w') as fh:
663 fh.write(b64decode(rdata['ssl_cert']))
664+
665 ctxt['database_ssl_cert'] = cert_path
666 key_path = os.path.join(ssl_dir, 'db-client.key')
667 with open(key_path, 'w') as fh:
668 fh.write(b64decode(rdata['ssl_key']))
669+
670 ctxt['database_ssl_key'] = key_path
671+
672 return ctxt
673
674
675@@ -249,9 +260,8 @@
676 interfaces = ['identity-service']
677
678 def __call__(self):
679- log('Generating template context for identity-service')
680+ log('Generating template context for identity-service', level=DEBUG)
681 ctxt = {}
682-
683 for rid in relation_ids('identity-service'):
684 for unit in related_units(rid):
685 rdata = relation_get(rid=rid, unit=unit)
686@@ -259,26 +269,24 @@
687 serv_host = format_ipv6_addr(serv_host) or serv_host
688 auth_host = rdata.get('auth_host')
689 auth_host = format_ipv6_addr(auth_host) or auth_host
690-
691- ctxt = {
692- 'service_port': rdata.get('service_port'),
693- 'service_host': serv_host,
694- 'auth_host': auth_host,
695- 'auth_port': rdata.get('auth_port'),
696- 'admin_tenant_name': rdata.get('service_tenant'),
697- 'admin_user': rdata.get('service_username'),
698- 'admin_password': rdata.get('service_password'),
699- 'service_protocol':
700- rdata.get('service_protocol') or 'http',
701- 'auth_protocol':
702- rdata.get('auth_protocol') or 'http',
703- }
704+ svc_protocol = rdata.get('service_protocol') or 'http'
705+ auth_protocol = rdata.get('auth_protocol') or 'http'
706+ ctxt = {'service_port': rdata.get('service_port'),
707+ 'service_host': serv_host,
708+ 'auth_host': auth_host,
709+ 'auth_port': rdata.get('auth_port'),
710+ 'admin_tenant_name': rdata.get('service_tenant'),
711+ 'admin_user': rdata.get('service_username'),
712+ 'admin_password': rdata.get('service_password'),
713+ 'service_protocol': svc_protocol,
714+ 'auth_protocol': auth_protocol}
715 if context_complete(ctxt):
716 # NOTE(jamespage) this is required for >= icehouse
717 # so a missing value just indicates keystone needs
718 # upgrading
719 ctxt['admin_tenant_id'] = rdata.get('service_tenant_id')
720 return ctxt
721+
722 return {}
723
724
725@@ -291,21 +299,23 @@
726 self.interfaces = [rel_name]
727
728 def __call__(self):
729- log('Generating template context for amqp')
730+ log('Generating template context for amqp', level=DEBUG)
731 conf = config()
732- user_setting = 'rabbit-user'
733- vhost_setting = 'rabbit-vhost'
734 if self.relation_prefix:
735- user_setting = self.relation_prefix + '-rabbit-user'
736- vhost_setting = self.relation_prefix + '-rabbit-vhost'
737+ user_setting = '%s-rabbit-user' % (self.relation_prefix)
738+ vhost_setting = '%s-rabbit-vhost' % (self.relation_prefix)
739+ else:
740+ user_setting = 'rabbit-user'
741+ vhost_setting = 'rabbit-vhost'
742
743 try:
744 username = conf[user_setting]
745 vhost = conf[vhost_setting]
746 except KeyError as e:
747- log('Could not generate shared_db context. '
748- 'Missing required charm config options: %s.' % e)
749+ log('Could not generate shared_db context. Missing required charm '
750+ 'config options: %s.' % e, level=ERROR)
751 raise OSContextError
752+
753 ctxt = {}
754 for rid in relation_ids(self.rel_name):
755 ha_vip_only = False
756@@ -319,6 +329,7 @@
757 host = relation_get('private-address', rid=rid, unit=unit)
758 host = format_ipv6_addr(host) or host
759 ctxt['rabbitmq_host'] = host
760+
761 ctxt.update({
762 'rabbitmq_user': username,
763 'rabbitmq_password': relation_get('password', rid=rid,
764@@ -329,6 +340,7 @@
765 ssl_port = relation_get('ssl_port', rid=rid, unit=unit)
766 if ssl_port:
767 ctxt['rabbit_ssl_port'] = ssl_port
768+
769 ssl_ca = relation_get('ssl_ca', rid=rid, unit=unit)
770 if ssl_ca:
771 ctxt['rabbit_ssl_ca'] = ssl_ca
772@@ -342,41 +354,45 @@
773 if context_complete(ctxt):
774 if 'rabbit_ssl_ca' in ctxt:
775 if not self.ssl_dir:
776- log(("Charm not setup for ssl support "
777- "but ssl ca found"))
778+ log("Charm not setup for ssl support but ssl ca "
779+ "found", level=INFO)
780 break
781+
782 ca_path = os.path.join(
783 self.ssl_dir, 'rabbit-client-ca.pem')
784 with open(ca_path, 'w') as fh:
785 fh.write(b64decode(ctxt['rabbit_ssl_ca']))
786 ctxt['rabbit_ssl_ca'] = ca_path
787+
788 # Sufficient information found = break out!
789 break
790+
791 # Used for active/active rabbitmq >= grizzly
792- if ('clustered' not in ctxt or ha_vip_only) \
793- and len(related_units(rid)) > 1:
794+ if (('clustered' not in ctxt or ha_vip_only) and
795+ len(related_units(rid)) > 1):
796 rabbitmq_hosts = []
797 for unit in related_units(rid):
798 host = relation_get('private-address', rid=rid, unit=unit)
799 host = format_ipv6_addr(host) or host
800 rabbitmq_hosts.append(host)
801- ctxt['rabbitmq_hosts'] = ','.join(rabbitmq_hosts)
802+
803+ ctxt['rabbitmq_hosts'] = ','.join(sorted(rabbitmq_hosts))
804+
805 if not context_complete(ctxt):
806 return {}
807- else:
808- return ctxt
809+
810+ return ctxt
811
812
813 class CephContext(OSContextGenerator):
814+ """Generates context for /etc/ceph/ceph.conf templates."""
815 interfaces = ['ceph']
816
817 def __call__(self):
818- '''This generates context for /etc/ceph/ceph.conf templates'''
819 if not relation_ids('ceph'):
820 return {}
821
822- log('Generating template context for ceph')
823-
824+ log('Generating template context for ceph', level=DEBUG)
825 mon_hosts = []
826 auth = None
827 key = None
828@@ -385,18 +401,18 @@
829 for unit in related_units(rid):
830 auth = relation_get('auth', rid=rid, unit=unit)
831 key = relation_get('key', rid=rid, unit=unit)
832- ceph_addr = \
833- relation_get('ceph-public-address', rid=rid, unit=unit) or \
834- relation_get('private-address', rid=rid, unit=unit)
835+ ceph_pub_addr = relation_get('ceph-public-address', rid=rid,
836+ unit=unit)
837+ unit_priv_addr = relation_get('private-address', rid=rid,
838+ unit=unit)
839+ ceph_addr = ceph_pub_addr or unit_priv_addr
840 ceph_addr = format_ipv6_addr(ceph_addr) or ceph_addr
841 mon_hosts.append(ceph_addr)
842
843- ctxt = {
844- 'mon_hosts': ' '.join(mon_hosts),
845- 'auth': auth,
846- 'key': key,
847- 'use_syslog': use_syslog
848- }
849+ ctxt = {'mon_hosts': ' '.join(sorted(mon_hosts)),
850+ 'auth': auth,
851+ 'key': key,
852+ 'use_syslog': use_syslog}
853
854 if not os.path.isdir('/etc/ceph'):
855 os.mkdir('/etc/ceph')
856@@ -405,79 +421,68 @@
857 return {}
858
859 ensure_packages(['ceph-common'])
860-
861 return ctxt
862
863
864-ADDRESS_TYPES = ['admin', 'internal', 'public']
865-
866-
867 class HAProxyContext(OSContextGenerator):
868+ """Provides half a context for the haproxy template, which describes
869+ all peers to be included in the cluster. Each charm needs to include
870+ its own context generator that describes the port mapping.
871+ """
872 interfaces = ['cluster']
873
874+ def __init__(self, singlenode_mode=False):
875+ self.singlenode_mode = singlenode_mode
876+
877 def __call__(self):
878- '''
879- Builds half a context for the haproxy template, which describes
880- all peers to be included in the cluster. Each charm needs to include
881- its own context generator that describes the port mapping.
882- '''
883- if not relation_ids('cluster'):
884+ if not relation_ids('cluster') and not self.singlenode_mode:
885 return {}
886
887+ if config('prefer-ipv6'):
888+ addr = get_ipv6_addr(exc_list=[config('vip')])[0]
889+ else:
890+ addr = get_host_ip(unit_get('private-address'))
891+
892 l_unit = local_unit().replace('/', '-')
893-
894- if config('prefer-ipv6'):
895- addr = get_ipv6_addr(exc_list=[config('vip')])[0]
896- else:
897- addr = unit_get('private-address')
898-
899 cluster_hosts = {}
900
901 # NOTE(jamespage): build out map of configured network endpoints
902 # and associated backends
903 for addr_type in ADDRESS_TYPES:
904- laddr = get_address_in_network(
905- config('os-{}-network'.format(addr_type)))
906+ cfg_opt = 'os-{}-network'.format(addr_type)
907+ laddr = get_address_in_network(config(cfg_opt))
908 if laddr:
909- cluster_hosts[laddr] = {}
910- cluster_hosts[laddr]['network'] = "{}/{}".format(
911- laddr,
912- get_netmask_for_address(laddr)
913- )
914- cluster_hosts[laddr]['backends'] = {}
915- cluster_hosts[laddr]['backends'][l_unit] = laddr
916+ netmask = get_netmask_for_address(laddr)
917+ cluster_hosts[laddr] = {'network': "{}/{}".format(laddr,
918+ netmask),
919+ 'backends': {l_unit: laddr}}
920 for rid in relation_ids('cluster'):
921 for unit in related_units(rid):
922- _unit = unit.replace('/', '-')
923 _laddr = relation_get('{}-address'.format(addr_type),
924 rid=rid, unit=unit)
925 if _laddr:
926+ _unit = unit.replace('/', '-')
927 cluster_hosts[laddr]['backends'][_unit] = _laddr
928
929 # NOTE(jamespage) no split configurations found, just use
930 # private addresses
931 if not cluster_hosts:
932- cluster_hosts[addr] = {}
933- cluster_hosts[addr]['network'] = "{}/{}".format(
934- addr,
935- get_netmask_for_address(addr)
936- )
937- cluster_hosts[addr]['backends'] = {}
938- cluster_hosts[addr]['backends'][l_unit] = addr
939+ netmask = get_netmask_for_address(addr)
940+ cluster_hosts[addr] = {'network': "{}/{}".format(addr, netmask),
941+ 'backends': {l_unit: addr}}
942 for rid in relation_ids('cluster'):
943 for unit in related_units(rid):
944- _unit = unit.replace('/', '-')
945 _laddr = relation_get('private-address',
946 rid=rid, unit=unit)
947 if _laddr:
948+ _unit = unit.replace('/', '-')
949 cluster_hosts[addr]['backends'][_unit] = _laddr
950
951- ctxt = {
952- 'frontends': cluster_hosts,
953- }
954+ ctxt = {'frontends': cluster_hosts}
955
956 if config('haproxy-server-timeout'):
957 ctxt['haproxy_server_timeout'] = config('haproxy-server-timeout')
958+
959 if config('haproxy-client-timeout'):
960 ctxt['haproxy_client_timeout'] = config('haproxy-client-timeout')
961
962@@ -491,13 +496,18 @@
963 ctxt['stat_port'] = ':8888'
964
965 for frontend in cluster_hosts:
966- if len(cluster_hosts[frontend]['backends']) > 1:
967+ if (len(cluster_hosts[frontend]['backends']) > 1 or
968+ self.singlenode_mode):
969 # Enable haproxy when we have enough peers.
970- log('Ensuring haproxy enabled in /etc/default/haproxy.')
971+ log('Ensuring haproxy enabled in /etc/default/haproxy.',
972+ level=DEBUG)
973 with open('/etc/default/haproxy', 'w') as out:
974 out.write('ENABLED=1\n')
975+
976 return ctxt
977- log('HAProxy context is incomplete, this unit has no peers.')
978+
979+ log('HAProxy context is incomplete, this unit has no peers.',
980+ level=INFO)
981 return {}
982
983
984@@ -505,29 +515,28 @@
985 interfaces = ['image-service']
986
987 def __call__(self):
988- '''
989- Obtains the glance API server from the image-service relation. Useful
990- in nova and cinder (currently).
991- '''
992- log('Generating template context for image-service.')
993+ """Obtains the glance API server from the image-service relation.
994+ Useful in nova and cinder (currently).
995+ """
996+ log('Generating template context for image-service.', level=DEBUG)
997 rids = relation_ids('image-service')
998 if not rids:
999 return {}
1000+
1001 for rid in rids:
1002 for unit in related_units(rid):
1003 api_server = relation_get('glance-api-server',
1004 rid=rid, unit=unit)
1005 if api_server:
1006 return {'glance_api_servers': api_server}
1007- log('ImageService context is incomplete. '
1008- 'Missing required relation data.')
1009+
1010+ log("ImageService context is incomplete. Missing required relation "
1011+ "data.", level=INFO)
1012 return {}
1013
1014
1015 class ApacheSSLContext(OSContextGenerator):
1016-
1017- """
1018- Generates a context for an apache vhost configuration that configures
1019+ """Generates a context for an apache vhost configuration that configures
1020 HTTPS reverse proxying for one or many endpoints. Generated context
1021 looks something like::
1022
1023@@ -561,6 +570,7 @@
1024 else:
1025 cert_filename = 'cert'
1026 key_filename = 'key'
1027+
1028 write_file(path=os.path.join(ssl_dir, cert_filename),
1029 content=b64decode(cert))
1030 write_file(path=os.path.join(ssl_dir, key_filename),
1031@@ -572,7 +582,8 @@
1032 install_ca_cert(b64decode(ca_cert))
1033
1034 def canonical_names(self):
1035- '''Figure out which canonical names clients will access this service'''
1036+ """Figure out which canonical names clients will access this service.
1037+ """
1038 cns = []
1039 for r_id in relation_ids('identity-service'):
1040 for unit in related_units(r_id):
1041@@ -580,55 +591,80 @@
1042 for k in rdata:
1043 if k.startswith('ssl_key_'):
1044 cns.append(k.lstrip('ssl_key_'))
1045- return list(set(cns))
1046+
1047+ return sorted(list(set(cns)))
1048+
1049+ def get_network_addresses(self):
1050+ """For each network configured, return corresponding address and vip
1051+ (if available).
1052+
1053+ Returns a list of tuples of the form:
1054+
1055+ [(address_in_net_a, vip_in_net_a),
1056+ (address_in_net_b, vip_in_net_b),
1057+ ...]
1058+
1059+ or, if no vip(s) available:
1060+
1061+ [(address_in_net_a, address_in_net_a),
1062+ (address_in_net_b, address_in_net_b),
1063+ ...]
1064+ """
1065+ addresses = []
1066+ if config('vip'):
1067+ vips = config('vip').split()
1068+ else:
1069+ vips = []
1070+
1071+ for net_type in ['os-internal-network', 'os-admin-network',
1072+ 'os-public-network']:
1073+ addr = get_address_in_network(config(net_type),
1074+ unit_get('private-address'))
1075+ if len(vips) > 1 and is_clustered():
1076+ if not config(net_type):
1077+ log("Multiple networks configured but net_type "
1078+ "is None (%s)." % net_type, level=WARNING)
1079+ continue
1080+
1081+ for vip in vips:
1082+ if is_address_in_network(config(net_type), vip):
1083+ addresses.append((addr, vip))
1084+ break
1085+
1086+ elif is_clustered() and config('vip'):
1087+ addresses.append((addr, config('vip')))
1088+ else:
1089+ addresses.append((addr, addr))
1090+
1091+ return sorted(addresses)
1092
1093 def __call__(self):
1094- if isinstance(self.external_ports, basestring):
1095+ if isinstance(self.external_ports, six.string_types):
1096 self.external_ports = [self.external_ports]
1097- if (not self.external_ports or not https()):
1098+
1099+ if not self.external_ports or not https():
1100 return {}
1101
1102 self.configure_ca()
1103 self.enable_modules()
1104
1105- ctxt = {
1106- 'namespace': self.service_namespace,
1107- 'endpoints': [],
1108- 'ext_ports': []
1109- }
1110+ ctxt = {'namespace': self.service_namespace,
1111+ 'endpoints': [],
1112+ 'ext_ports': []}
1113
1114 for cn in self.canonical_names():
1115 self.configure_cert(cn)
1116
1117- addresses = []
1118- vips = []
1119- if config('vip'):
1120- vips = config('vip').split()
1121-
1122- for network_type in ['os-internal-network',
1123- 'os-admin-network',
1124- 'os-public-network']:
1125- address = get_address_in_network(config(network_type),
1126- unit_get('private-address'))
1127- if len(vips) > 0 and is_clustered():
1128- for vip in vips:
1129- if is_address_in_network(config(network_type),
1130- vip):
1131- addresses.append((address, vip))
1132- break
1133- elif is_clustered():
1134- addresses.append((address, config('vip')))
1135- else:
1136- addresses.append((address, address))
1137-
1138- for address, endpoint in set(addresses):
1139+ addresses = self.get_network_addresses()
1140+ for address, endpoint in sorted(set(addresses)):
1141 for api_port in self.external_ports:
1142 ext_port = determine_apache_port(api_port)
1143 int_port = determine_api_port(api_port)
1144 portmap = (address, endpoint, int(ext_port), int(int_port))
1145 ctxt['endpoints'].append(portmap)
1146 ctxt['ext_ports'].append(int(ext_port))
1147- ctxt['ext_ports'] = list(set(ctxt['ext_ports']))
1148+
1149+ ctxt['ext_ports'] = sorted(list(set(ctxt['ext_ports'])))
1150 return ctxt
1151
1152
1153@@ -645,21 +681,23 @@
1154
1155 @property
1156 def packages(self):
1157- return neutron_plugin_attribute(
1158- self.plugin, 'packages', self.network_manager)
1159+ return neutron_plugin_attribute(self.plugin, 'packages',
1160+ self.network_manager)
1161
1162 @property
1163 def neutron_security_groups(self):
1164 return None
1165
1166 def _ensure_packages(self):
1167- [ensure_packages(pkgs) for pkgs in self.packages]
1168+ for pkgs in self.packages:
1169+ ensure_packages(pkgs)
1170
1171 def _save_flag_file(self):
1172 if self.network_manager == 'quantum':
1173 _file = '/etc/nova/quantum_plugin.conf'
1174 else:
1175 _file = '/etc/nova/neutron_plugin.conf'
1176+
1177 with open(_file, 'wb') as out:
1178 out.write(self.plugin + '\n')
1179
1180@@ -668,13 +706,11 @@
1181 self.network_manager)
1182 config = neutron_plugin_attribute(self.plugin, 'config',
1183 self.network_manager)
1184- ovs_ctxt = {
1185- 'core_plugin': driver,
1186- 'neutron_plugin': 'ovs',
1187- 'neutron_security_groups': self.neutron_security_groups,
1188- 'local_ip': unit_private_ip(),
1189- 'config': config
1190- }
1191+ ovs_ctxt = {'core_plugin': driver,
1192+ 'neutron_plugin': 'ovs',
1193+ 'neutron_security_groups': self.neutron_security_groups,
1194+ 'local_ip': unit_private_ip(),
1195+ 'config': config}
1196
1197 return ovs_ctxt
1198
1199@@ -683,13 +719,11 @@
1200 self.network_manager)
1201 config = neutron_plugin_attribute(self.plugin, 'config',
1202 self.network_manager)
1203- nvp_ctxt = {
1204- 'core_plugin': driver,
1205- 'neutron_plugin': 'nvp',
1206- 'neutron_security_groups': self.neutron_security_groups,
1207- 'local_ip': unit_private_ip(),
1208- 'config': config
1209- }
1210+ nvp_ctxt = {'core_plugin': driver,
1211+ 'neutron_plugin': 'nvp',
1212+ 'neutron_security_groups': self.neutron_security_groups,
1213+ 'local_ip': unit_private_ip(),
1214+ 'config': config}
1215
1216 return nvp_ctxt
1217
1218@@ -698,35 +732,50 @@
1219 self.network_manager)
1220 n1kv_config = neutron_plugin_attribute(self.plugin, 'config',
1221 self.network_manager)
1222- n1kv_ctxt = {
1223- 'core_plugin': driver,
1224- 'neutron_plugin': 'n1kv',
1225- 'neutron_security_groups': self.neutron_security_groups,
1226- 'local_ip': unit_private_ip(),
1227- 'config': n1kv_config,
1228- 'vsm_ip': config('n1kv-vsm-ip'),
1229- 'vsm_username': config('n1kv-vsm-username'),
1230- 'vsm_password': config('n1kv-vsm-password'),
1231- 'restrict_policy_profiles': config(
1232- 'n1kv_restrict_policy_profiles'),
1233- }
1234+ n1kv_user_config_flags = config('n1kv-config-flags')
1235+ restrict_policy_profiles = config('n1kv-restrict-policy-profiles')
1236+ n1kv_ctxt = {'core_plugin': driver,
1237+ 'neutron_plugin': 'n1kv',
1238+ 'neutron_security_groups': self.neutron_security_groups,
1239+ 'local_ip': unit_private_ip(),
1240+ 'config': n1kv_config,
1241+ 'vsm_ip': config('n1kv-vsm-ip'),
1242+ 'vsm_username': config('n1kv-vsm-username'),
1243+ 'vsm_password': config('n1kv-vsm-password'),
1244+ 'restrict_policy_profiles': restrict_policy_profiles}
1245+
1246+ if n1kv_user_config_flags:
1247+ flags = config_flags_parser(n1kv_user_config_flags)
1248+ n1kv_ctxt['user_config_flags'] = flags
1249
1250 return n1kv_ctxt
1251
1252+ def calico_ctxt(self):
1253+ driver = neutron_plugin_attribute(self.plugin, 'driver',
1254+ self.network_manager)
1255+ config = neutron_plugin_attribute(self.plugin, 'config',
1256+ self.network_manager)
1257+ calico_ctxt = {'core_plugin': driver,
1258+ 'neutron_plugin': 'Calico',
1259+ 'neutron_security_groups': self.neutron_security_groups,
1260+ 'local_ip': unit_private_ip(),
1261+ 'config': config}
1262+
1263+ return calico_ctxt
1264+
1265 def neutron_ctxt(self):
1266 if https():
1267 proto = 'https'
1268 else:
1269 proto = 'http'
1270+
1271 if is_clustered():
1272 host = config('vip')
1273 else:
1274 host = unit_get('private-address')
1275- url = '%s://%s:%s' % (proto, host, '9696')
1276- ctxt = {
1277- 'network_manager': self.network_manager,
1278- 'neutron_url': url,
1279- }
1280+
1281+ ctxt = {'network_manager': self.network_manager,
1282+ 'neutron_url': '%s://%s:%s' % (proto, host, '9696')}
1283 return ctxt
1284
1285 def __call__(self):
1286@@ -746,6 +795,8 @@
1287 ctxt.update(self.nvp_ctxt())
1288 elif self.plugin == 'n1kv':
1289 ctxt.update(self.n1kv_ctxt())
1290+ elif self.plugin == 'Calico':
1291+ ctxt.update(self.calico_ctxt())
1292
1293 alchemy_flags = config('neutron-alchemy-flags')
1294 if alchemy_flags:
1295@@ -757,23 +808,40 @@
1296
1297
1298 class OSConfigFlagContext(OSContextGenerator):
1299-
1300- """
1301- Responsible for adding user-defined config-flags in charm config to a
1302- template context.
1303+ """Provides support for user-defined config flags.
1304+
1305+ Users can define a comma-seperated list of key=value pairs
1306+ in the charm configuration and apply them at any point in
1307+ any file by using a template flag.
1308+
1309+ Sometimes users might want config flags inserted within a
1310+ specific section so this class allows users to specify the
1311+ template flag name, allowing for multiple template flags
1312+ (sections) within the same context.
1313
1314 NOTE: the value of config-flags may be a comma-separated list of
1315 key=value pairs and some Openstack config files support
1316 comma-separated lists as values.
1317 """
1318
1319+ def __init__(self, charm_flag='config-flags',
1320+ template_flag='user_config_flags'):
1321+ """
1322+ :param charm_flag: config flags in charm configuration.
1323+ :param template_flag: insert point for user-defined flags in template
1324+ file.
1325+ """
1326+ super(OSConfigFlagContext, self).__init__()
1327+ self._charm_flag = charm_flag
1328+ self._template_flag = template_flag
1329+
1330 def __call__(self):
1331- config_flags = config('config-flags')
1332+ config_flags = config(self._charm_flag)
1333 if not config_flags:
1334 return {}
1335
1336- flags = config_flags_parser(config_flags)
1337- return {'user_config_flags': flags}
1338+ return {self._template_flag:
1339+ config_flags_parser(config_flags)}
1340
1341
1342 class SubordinateConfigContext(OSContextGenerator):
1343@@ -817,7 +885,6 @@
1344 },
1345 }
1346 }
1347-
1348 """
1349
1350 def __init__(self, service, config_file, interface):
1351@@ -847,26 +914,28 @@
1352
1353 if self.service not in sub_config:
1354 log('Found subordinate_config on %s but it contained'
1355- 'nothing for %s service' % (rid, self.service))
1356+ 'nothing for %s service' % (rid, self.service),
1357+ level=INFO)
1358 continue
1359
1360 sub_config = sub_config[self.service]
1361 if self.config_file not in sub_config:
1362 log('Found subordinate_config on %s but it contained'
1363- 'nothing for %s' % (rid, self.config_file))
1364+ 'nothing for %s' % (rid, self.config_file),
1365+ level=INFO)
1366 continue
1367
1368 sub_config = sub_config[self.config_file]
1369- for k, v in sub_config.iteritems():
1370+ for k, v in six.iteritems(sub_config):
1371 if k == 'sections':
1372- for section, config_dict in v.iteritems():
1373- log("adding section '%s'" % (section))
1374+ for section, config_dict in six.iteritems(v):
1375+ log("adding section '%s'" % (section),
1376+ level=DEBUG)
1377 ctxt[k][section] = config_dict
1378 else:
1379 ctxt[k] = v
1380
1381- log("%d section(s) found" % (len(ctxt['sections'])), level=INFO)
1382-
1383+ log("%d section(s) found" % (len(ctxt['sections'])), level=DEBUG)
1384 return ctxt
1385
1386
1387@@ -878,15 +947,14 @@
1388 False if config('debug') is None else config('debug')
1389 ctxt['verbose'] = \
1390 False if config('verbose') is None else config('verbose')
1391+
1392 return ctxt
1393
1394
1395 class SyslogContext(OSContextGenerator):
1396
1397 def __call__(self):
1398- ctxt = {
1399- 'use_syslog': config('use-syslog')
1400- }
1401+ ctxt = {'use_syslog': config('use-syslog')}
1402 return ctxt
1403
1404
1405@@ -894,10 +962,56 @@
1406
1407 def __call__(self):
1408 if config('prefer-ipv6'):
1409- return {
1410- 'bind_host': '::'
1411- }
1412+ return {'bind_host': '::'}
1413 else:
1414- return {
1415- 'bind_host': '0.0.0.0'
1416- }
1417+ return {'bind_host': '0.0.0.0'}
1418+
1419+
1420+class WorkerConfigContext(OSContextGenerator):
1421+
1422+ @property
1423+ def num_cpus(self):
1424+ try:
1425+ from psutil import NUM_CPUS
1426+ except ImportError:
1427+ apt_install('python-psutil', fatal=True)
1428+ from psutil import NUM_CPUS
1429+
1430+ return NUM_CPUS
1431+
1432+ def __call__(self):
1433+ multiplier = config('worker-multiplier') or 0
1434+ ctxt = {"workers": self.num_cpus * multiplier}
1435+ return ctxt
1436+
1437+
1438+class ZeroMQContext(OSContextGenerator):
1439+ interfaces = ['zeromq-configuration']
1440+
1441+ def __call__(self):
1442+ ctxt = {}
1443+ if is_relation_made('zeromq-configuration', 'host'):
1444+ for rid in relation_ids('zeromq-configuration'):
1445+ for unit in related_units(rid):
1446+ ctxt['zmq_nonce'] = relation_get('nonce', unit, rid)
1447+ ctxt['zmq_host'] = relation_get('host', unit, rid)
1448+
1449+ return ctxt
1450+
1451+
1452+class NotificationDriverContext(OSContextGenerator):
1453+
1454+ def __init__(self, zmq_relation='zeromq-configuration',
1455+ amqp_relation='amqp'):
1456+ """
1457+ :param zmq_relation: Name of Zeromq relation to check
1458+ """
1459+ self.zmq_relation = zmq_relation
1460+ self.amqp_relation = amqp_relation
1461+
1462+ def __call__(self):
1463+ ctxt = {'notifications': 'False'}
1464+ if is_relation_made(self.amqp_relation):
1465+ ctxt['notifications'] = "True"
1466+
1467+ return ctxt
1468
1469=== modified file 'hooks/charmhelpers/contrib/openstack/ip.py'
1470--- hooks/charmhelpers/contrib/openstack/ip.py 2014-09-22 20:22:04 +0000
1471+++ hooks/charmhelpers/contrib/openstack/ip.py 2014-12-10 19:38:11 +0000
1472@@ -2,21 +2,19 @@
1473 config,
1474 unit_get,
1475 )
1476-
1477 from charmhelpers.contrib.network.ip import (
1478 get_address_in_network,
1479 is_address_in_network,
1480 is_ipv6,
1481 get_ipv6_addr,
1482 )
1483-
1484 from charmhelpers.contrib.hahelpers.cluster import is_clustered
1485
1486 PUBLIC = 'public'
1487 INTERNAL = 'int'
1488 ADMIN = 'admin'
1489
1490-_address_map = {
1491+ADDRESS_MAP = {
1492 PUBLIC: {
1493 'config': 'os-public-network',
1494 'fallback': 'public-address'
1495@@ -33,16 +31,14 @@
1496
1497
1498 def canonical_url(configs, endpoint_type=PUBLIC):
1499- '''
1500- Returns the correct HTTP URL to this host given the state of HTTPS
1501+ """Returns the correct HTTP URL to this host given the state of HTTPS
1502 configuration, hacluster and charm configuration.
1503
1504- :configs OSTemplateRenderer: A config tempating object to inspect for
1505- a complete https context.
1506- :endpoint_type str: The endpoint type to resolve.
1507-
1508- :returns str: Base URL for services on the current service unit.
1509- '''
1510+ :param configs: OSTemplateRenderer config templating object to inspect
1511+ for a complete https context.
1512+ :param endpoint_type: str endpoint type to resolve.
1513+ :param returns: str base URL for services on the current service unit.
1514+ """
1515 scheme = 'http'
1516 if 'https' in configs.complete_contexts():
1517 scheme = 'https'
1518@@ -53,27 +49,45 @@
1519
1520
1521 def resolve_address(endpoint_type=PUBLIC):
1522+ """Return unit address depending on net config.
1523+
1524+ If unit is clustered with vip(s) and has net splits defined, return vip on
1525+ correct network. If clustered with no nets defined, return primary vip.
1526+
1527+ If not clustered, return unit address ensuring address is on configured net
1528+ split if one is configured.
1529+
1530+ :param endpoint_type: Network endpoing type
1531+ """
1532 resolved_address = None
1533- if is_clustered():
1534- if config(_address_map[endpoint_type]['config']) is None:
1535- # Assume vip is simple and pass back directly
1536- resolved_address = config('vip')
1537+ vips = config('vip')
1538+ if vips:
1539+ vips = vips.split()
1540+
1541+ net_type = ADDRESS_MAP[endpoint_type]['config']
1542+ net_addr = config(net_type)
1543+ net_fallback = ADDRESS_MAP[endpoint_type]['fallback']
1544+ clustered = is_clustered()
1545+ if clustered:
1546+ if not net_addr:
1547+ # If no net-splits defined, we expect a single vip
1548+ resolved_address = vips[0]
1549 else:
1550- for vip in config('vip').split():
1551- if is_address_in_network(
1552- config(_address_map[endpoint_type]['config']),
1553- vip):
1554+ for vip in vips:
1555+ if is_address_in_network(net_addr, vip):
1556 resolved_address = vip
1557+ break
1558 else:
1559 if config('prefer-ipv6'):
1560- fallback_addr = get_ipv6_addr(exc_list=[config('vip')])[0]
1561+ fallback_addr = get_ipv6_addr(exc_list=vips)[0]
1562 else:
1563- fallback_addr = unit_get(_address_map[endpoint_type]['fallback'])
1564- resolved_address = get_address_in_network(
1565- config(_address_map[endpoint_type]['config']), fallback_addr)
1566+ fallback_addr = unit_get(net_fallback)
1567+
1568+ resolved_address = get_address_in_network(net_addr, fallback_addr)
1569
1570 if resolved_address is None:
1571- raise ValueError('Unable to resolve a suitable IP address'
1572- ' based on charm state and configuration')
1573- else:
1574- return resolved_address
1575+ raise ValueError("Unable to resolve a suitable IP address based on "
1576+ "charm state and configuration. (net_type=%s, "
1577+ "clustered=%s)" % (net_type, clustered))
1578+
1579+ return resolved_address
1580
1581=== modified file 'hooks/charmhelpers/contrib/openstack/neutron.py'
1582--- hooks/charmhelpers/contrib/openstack/neutron.py 2014-07-28 14:38:51 +0000
1583+++ hooks/charmhelpers/contrib/openstack/neutron.py 2014-12-10 19:38:11 +0000
1584@@ -14,7 +14,7 @@
1585 def headers_package():
1586 """Ensures correct linux-headers for running kernel are installed,
1587 for building DKMS package"""
1588- kver = check_output(['uname', '-r']).strip()
1589+ kver = check_output(['uname', '-r']).decode('UTF-8').strip()
1590 return 'linux-headers-%s' % kver
1591
1592 QUANTUM_CONF_DIR = '/etc/quantum'
1593@@ -22,7 +22,7 @@
1594
1595 def kernel_version():
1596 """ Retrieve the current major kernel version as a tuple e.g. (3, 13) """
1597- kver = check_output(['uname', '-r']).strip()
1598+ kver = check_output(['uname', '-r']).decode('UTF-8').strip()
1599 kver = kver.split('.')
1600 return (int(kver[0]), int(kver[1]))
1601
1602@@ -138,10 +138,25 @@
1603 relation_prefix='neutron',
1604 ssl_dir=NEUTRON_CONF_DIR)],
1605 'services': [],
1606- 'packages': [['neutron-plugin-cisco']],
1607+ 'packages': [[headers_package()] + determine_dkms_package(),
1608+ ['neutron-plugin-cisco']],
1609 'server_packages': ['neutron-server',
1610 'neutron-plugin-cisco'],
1611 'server_services': ['neutron-server']
1612+ },
1613+ 'Calico': {
1614+ 'config': '/etc/neutron/plugins/ml2/ml2_conf.ini',
1615+ 'driver': 'neutron.plugins.ml2.plugin.Ml2Plugin',
1616+ 'contexts': [
1617+ context.SharedDBContext(user=config('neutron-database-user'),
1618+ database=config('neutron-database'),
1619+ relation_prefix='neutron',
1620+ ssl_dir=NEUTRON_CONF_DIR)],
1621+ 'services': ['calico-compute', 'bird', 'neutron-dhcp-agent'],
1622+ 'packages': [[headers_package()] + determine_dkms_package(),
1623+ ['calico-compute', 'bird', 'neutron-dhcp-agent']],
1624+ 'server_packages': ['neutron-server', 'calico-control'],
1625+ 'server_services': ['neutron-server']
1626 }
1627 }
1628 if release >= 'icehouse':
1629@@ -162,7 +177,8 @@
1630 elif manager == 'neutron':
1631 plugins = neutron_plugins()
1632 else:
1633- log('Error: Network manager does not support plugins.')
1634+ log("Network manager '%s' does not support plugins." % (manager),
1635+ level=ERROR)
1636 raise Exception
1637
1638 try:
1639
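The new Calico entry follows the same nested-list 'packages' convention as the other plugin entries. A small illustration of flattening such an entry into a single install list; the literal below is a trimmed stand-in, not the real charmhelpers plugin map:

# Flatten a nested 'packages' entry (headers/dkms list + plugin list) into
# one de-duplicated package list.
import itertools

plugin_entry = {
    'packages': [['linux-headers-3.13.0-40-generic'],
                 ['calico-compute', 'bird', 'neutron-dhcp-agent']],
}

packages = sorted(set(itertools.chain.from_iterable(plugin_entry['packages'])))
print(packages)
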
1640=== modified file 'hooks/charmhelpers/contrib/openstack/templates/haproxy.cfg'
1641--- hooks/charmhelpers/contrib/openstack/templates/haproxy.cfg 2014-10-06 21:57:43 +0000
1642+++ hooks/charmhelpers/contrib/openstack/templates/haproxy.cfg 2014-12-10 19:38:11 +0000
1643@@ -35,7 +35,7 @@
1644 stats auth admin:password
1645
1646 {% if frontends -%}
1647-{% for service, ports in service_ports.iteritems() -%}
1648+{% for service, ports in service_ports.items() -%}
1649 frontend tcp-in_{{ service }}
1650 bind *:{{ ports[0] }}
1651 bind :::{{ ports[0] }}
1652@@ -46,7 +46,7 @@
1653 {% for frontend in frontends -%}
1654 backend {{ service }}_{{ frontend }}
1655 balance leastconn
1656- {% for unit, address in frontends[frontend]['backends'].iteritems() -%}
1657+ {% for unit, address in frontends[frontend]['backends'].items() -%}
1658 server {{ unit }} {{ address }}:{{ ports[1] }} check
1659 {% endfor %}
1660 {% endfor -%}
1661
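Switching the template loops from iteritems() to items() keeps haproxy.cfg renderable under both Python 2 and 3. A minimal Jinja2 render exercising the same loop shape; the template string is a cut-down stand-in and the service/ports values are examples:

# Render a dict-driven frontend stanza with .items(), as the template above
# now does. Requires python-jinja2.
from jinja2 import Template

TMPL = Template(
    "{% for service, ports in service_ports.items() -%}\n"
    "frontend tcp-in_{{ service }}\n"
    "    bind *:{{ ports[0] }}\n"
    "{% endfor %}")

print(TMPL.render(service_ports={'nova-api-os-compute': [8774, 8764]}))
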
1662=== modified file 'hooks/charmhelpers/contrib/openstack/templating.py'
1663--- hooks/charmhelpers/contrib/openstack/templating.py 2014-07-28 14:38:51 +0000
1664+++ hooks/charmhelpers/contrib/openstack/templating.py 2014-12-10 19:38:11 +0000
1665@@ -1,13 +1,13 @@
1666 import os
1667
1668+import six
1669+
1670 from charmhelpers.fetch import apt_install
1671-
1672 from charmhelpers.core.hookenv import (
1673 log,
1674 ERROR,
1675 INFO
1676 )
1677-
1678 from charmhelpers.contrib.openstack.utils import OPENSTACK_CODENAMES
1679
1680 try:
1681@@ -43,7 +43,7 @@
1682 order by OpenStack release.
1683 """
1684 tmpl_dirs = [(rel, os.path.join(templates_dir, rel))
1685- for rel in OPENSTACK_CODENAMES.itervalues()]
1686+ for rel in six.itervalues(OPENSTACK_CODENAMES)]
1687
1688 if not os.path.isdir(templates_dir):
1689 log('Templates directory not found @ %s.' % templates_dir,
1690@@ -258,7 +258,7 @@
1691 """
1692 Write out all registered config files.
1693 """
1694- [self.write(k) for k in self.templates.iterkeys()]
1695+ [self.write(k) for k in six.iterkeys(self.templates)]
1696
1697 def set_release(self, openstack_release):
1698 """
1699@@ -275,5 +275,5 @@
1700 '''
1701 interfaces = []
1702 [interfaces.extend(i.complete_contexts())
1703- for i in self.templates.itervalues()]
1704+ for i in six.itervalues(self.templates)]
1705 return interfaces
1706
1707=== modified file 'hooks/charmhelpers/contrib/openstack/utils.py'
1708--- hooks/charmhelpers/contrib/openstack/utils.py 2014-10-06 21:57:43 +0000
1709+++ hooks/charmhelpers/contrib/openstack/utils.py 2014-12-10 19:38:11 +0000
1710@@ -2,6 +2,7 @@
1711
1712 # Common python helper functions used for OpenStack charms.
1713 from collections import OrderedDict
1714+from functools import wraps
1715
1716 import subprocess
1717 import json
1718@@ -9,11 +10,12 @@
1719 import socket
1720 import sys
1721
1722+import six
1723+
1724 from charmhelpers.core.hookenv import (
1725 config,
1726 log as juju_log,
1727 charm_dir,
1728- ERROR,
1729 INFO,
1730 relation_ids,
1731 relation_set
1732@@ -112,7 +114,7 @@
1733
1734 # Best guess match based on deb string provided
1735 if src.startswith('deb') or src.startswith('ppa'):
1736- for k, v in OPENSTACK_CODENAMES.iteritems():
1737+ for k, v in six.iteritems(OPENSTACK_CODENAMES):
1738 if v in src:
1739 return v
1740
1741@@ -133,7 +135,7 @@
1742
1743 def get_os_version_codename(codename):
1744 '''Determine OpenStack version number from codename.'''
1745- for k, v in OPENSTACK_CODENAMES.iteritems():
1746+ for k, v in six.iteritems(OPENSTACK_CODENAMES):
1747 if v == codename:
1748 return k
1749 e = 'Could not derive OpenStack version for '\
1750@@ -193,7 +195,7 @@
1751 else:
1752 vers_map = OPENSTACK_CODENAMES
1753
1754- for version, cname in vers_map.iteritems():
1755+ for version, cname in six.iteritems(vers_map):
1756 if cname == codename:
1757 return version
1758 # e = "Could not determine OpenStack version for package: %s" % pkg
1759@@ -317,7 +319,7 @@
1760 rc_script.write(
1761 "#!/bin/bash\n")
1762 [rc_script.write('export %s=%s\n' % (u, p))
1763- for u, p in env_vars.iteritems() if u != "script_path"]
1764+ for u, p in six.iteritems(env_vars) if u != "script_path"]
1765
1766
1767 def openstack_upgrade_available(package):
1768@@ -350,8 +352,8 @@
1769 '''
1770 _none = ['None', 'none', None]
1771 if (block_device in _none):
1772- error_out('prepare_storage(): Missing required input: '
1773- 'block_device=%s.' % block_device, level=ERROR)
1774+ error_out('prepare_storage(): Missing required input: block_device=%s.'
1775+ % block_device)
1776
1777 if block_device.startswith('/dev/'):
1778 bdev = block_device
1779@@ -367,8 +369,7 @@
1780 bdev = '/dev/%s' % block_device
1781
1782 if not is_block_device(bdev):
1783- error_out('Failed to locate valid block device at %s' % bdev,
1784- level=ERROR)
1785+ error_out('Failed to locate valid block device at %s' % bdev)
1786
1787 return bdev
1788
1789@@ -417,7 +418,7 @@
1790
1791 if isinstance(address, dns.name.Name):
1792 rtype = 'PTR'
1793- elif isinstance(address, basestring):
1794+ elif isinstance(address, six.string_types):
1795 rtype = 'A'
1796 else:
1797 return None
1798@@ -468,6 +469,14 @@
1799 return result.split('.')[0]
1800
1801
1802+def get_matchmaker_map(mm_file='/etc/oslo/matchmaker_ring.json'):
1803+ mm_map = {}
1804+ if os.path.isfile(mm_file):
1805+ with open(mm_file, 'r') as f:
1806+ mm_map = json.load(f)
1807+ return mm_map
1808+
1809+
1810 def sync_db_with_multi_ipv6_addresses(database, database_user,
1811 relation_prefix=None):
1812 hosts = get_ipv6_addr(dynamic_only=False)
1813@@ -477,10 +486,24 @@
1814 'hostname': json.dumps(hosts)}
1815
1816 if relation_prefix:
1817- keys = kwargs.keys()
1818- for key in keys:
1819+ for key in list(kwargs.keys()):
1820 kwargs["%s_%s" % (relation_prefix, key)] = kwargs[key]
1821 del kwargs[key]
1822
1823 for rid in relation_ids('shared-db'):
1824 relation_set(relation_id=rid, **kwargs)
1825+
1826+
1827+def os_requires_version(ostack_release, pkg):
1828+ """
1829+ Decorator for hook to specify minimum supported release
1830+ """
1831+ def wrap(f):
1832+ @wraps(f)
1833+ def wrapped_f(*args):
1834+ if os_release(pkg) < ostack_release:
1835+ raise Exception("This hook is not supported on releases"
1836+ " before %s" % ostack_release)
1837+ f(*args)
1838+ return wrapped_f
1839+ return wrap
1840
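The new os_requires_version decorator relies on OpenStack codenames sorting alphabetically by release, so a plain string comparison is enough. A self-contained sketch of using it on a hook; os_release() is stubbed with a fixed codename and the hook name is invented:

# Guard a hook so it only runs on icehouse or later.
from functools import wraps


def os_release(pkg):
    return 'icehouse'  # stub; the real helper inspects the installed package


def os_requires_version(ostack_release, pkg):
    def wrap(f):
        @wraps(f)
        def wrapped_f(*args):
            if os_release(pkg) < ostack_release:
                raise Exception("This hook is not supported on releases"
                                " before %s" % ostack_release)
            f(*args)
        return wrapped_f
    return wrap


@os_requires_version('icehouse', 'nova-common')
def example_hook():
    print("running hook")


example_hook()  # passes, since 'icehouse' >= 'icehouse'
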
1841=== modified file 'hooks/charmhelpers/contrib/storage/linux/ceph.py'
1842--- hooks/charmhelpers/contrib/storage/linux/ceph.py 2014-07-28 14:38:51 +0000
1843+++ hooks/charmhelpers/contrib/storage/linux/ceph.py 2014-12-10 19:38:11 +0000
1844@@ -16,19 +16,18 @@
1845 from subprocess import (
1846 check_call,
1847 check_output,
1848- CalledProcessError
1849+ CalledProcessError,
1850 )
1851-
1852 from charmhelpers.core.hookenv import (
1853 relation_get,
1854 relation_ids,
1855 related_units,
1856 log,
1857+ DEBUG,
1858 INFO,
1859 WARNING,
1860- ERROR
1861+ ERROR,
1862 )
1863-
1864 from charmhelpers.core.host import (
1865 mount,
1866 mounts,
1867@@ -37,7 +36,6 @@
1868 service_running,
1869 umount,
1870 )
1871-
1872 from charmhelpers.fetch import (
1873 apt_install,
1874 )
1875@@ -56,99 +54,85 @@
1876
1877
1878 def install():
1879- ''' Basic Ceph client installation '''
1880+ """Basic Ceph client installation."""
1881 ceph_dir = "/etc/ceph"
1882 if not os.path.exists(ceph_dir):
1883 os.mkdir(ceph_dir)
1884+
1885 apt_install('ceph-common', fatal=True)
1886
1887
1888 def rbd_exists(service, pool, rbd_img):
1889- ''' Check to see if a RADOS block device exists '''
1890+ """Check to see if a RADOS block device exists."""
1891 try:
1892- out = check_output(['rbd', 'list', '--id', service,
1893- '--pool', pool])
1894+ out = check_output(['rbd', 'list', '--id',
1895+ service, '--pool', pool]).decode('UTF-8')
1896 except CalledProcessError:
1897 return False
1898- else:
1899- return rbd_img in out
1900+
1901+ return rbd_img in out
1902
1903
1904 def create_rbd_image(service, pool, image, sizemb):
1905- ''' Create a new RADOS block device '''
1906- cmd = [
1907- 'rbd',
1908- 'create',
1909- image,
1910- '--size',
1911- str(sizemb),
1912- '--id',
1913- service,
1914- '--pool',
1915- pool
1916- ]
1917+ """Create a new RADOS block device."""
1918+ cmd = ['rbd', 'create', image, '--size', str(sizemb), '--id', service,
1919+ '--pool', pool]
1920 check_call(cmd)
1921
1922
1923 def pool_exists(service, name):
1924- ''' Check to see if a RADOS pool already exists '''
1925+ """Check to see if a RADOS pool already exists."""
1926 try:
1927- out = check_output(['rados', '--id', service, 'lspools'])
1928+ out = check_output(['rados', '--id', service,
1929+ 'lspools']).decode('UTF-8')
1930 except CalledProcessError:
1931 return False
1932- else:
1933- return name in out
1934+
1935+ return name in out
1936
1937
1938 def get_osds(service):
1939- '''
1940- Return a list of all Ceph Object Storage Daemons
1941- currently in the cluster
1942- '''
1943+ """Return a list of all Ceph Object Storage Daemons currently in the
1944+ cluster.
1945+ """
1946 version = ceph_version()
1947 if version and version >= '0.56':
1948 return json.loads(check_output(['ceph', '--id', service,
1949- 'osd', 'ls', '--format=json']))
1950- else:
1951- return None
1952-
1953-
1954-def create_pool(service, name, replicas=2):
1955- ''' Create a new RADOS pool '''
1956+ 'osd', 'ls',
1957+ '--format=json']).decode('UTF-8'))
1958+
1959+ return None
1960+
1961+
1962+def create_pool(service, name, replicas=3):
1963+ """Create a new RADOS pool."""
1964 if pool_exists(service, name):
1965 log("Ceph pool {} already exists, skipping creation".format(name),
1966 level=WARNING)
1967 return
1968+
1969 # Calculate the number of placement groups based
1970 # on upstream recommended best practices.
1971 osds = get_osds(service)
1972 if osds:
1973- pgnum = (len(osds) * 100 / replicas)
1974+ pgnum = (len(osds) * 100 // replicas)
1975 else:
1976 # NOTE(james-page): Default to 200 for older ceph versions
1977 # which don't support OSD query from cli
1978 pgnum = 200
1979- cmd = [
1980- 'ceph', '--id', service,
1981- 'osd', 'pool', 'create',
1982- name, str(pgnum)
1983- ]
1984+
1985+ cmd = ['ceph', '--id', service, 'osd', 'pool', 'create', name, str(pgnum)]
1986 check_call(cmd)
1987- cmd = [
1988- 'ceph', '--id', service,
1989- 'osd', 'pool', 'set', name,
1990- 'size', str(replicas)
1991- ]
1992+
1993+ cmd = ['ceph', '--id', service, 'osd', 'pool', 'set', name, 'size',
1994+ str(replicas)]
1995 check_call(cmd)
1996
1997
1998 def delete_pool(service, name):
1999- ''' Delete a RADOS pool from ceph '''
2000- cmd = [
2001- 'ceph', '--id', service,
2002- 'osd', 'pool', 'delete',
2003- name, '--yes-i-really-really-mean-it'
2004- ]
2005+ """Delete a RADOS pool from ceph."""
2006+ cmd = ['ceph', '--id', service, 'osd', 'pool', 'delete', name,
2007+ '--yes-i-really-really-mean-it']
2008 check_call(cmd)
2009
2010
2011@@ -161,44 +145,43 @@
2012
2013
2014 def create_keyring(service, key):
2015- ''' Create a new Ceph keyring containing key'''
2016+ """Create a new Ceph keyring containing key."""
2017 keyring = _keyring_path(service)
2018 if os.path.exists(keyring):
2019- log('ceph: Keyring exists at %s.' % keyring, level=WARNING)
2020+ log('Ceph keyring exists at %s.' % keyring, level=WARNING)
2021 return
2022- cmd = [
2023- 'ceph-authtool',
2024- keyring,
2025- '--create-keyring',
2026- '--name=client.{}'.format(service),
2027- '--add-key={}'.format(key)
2028- ]
2029+
2030+ cmd = ['ceph-authtool', keyring, '--create-keyring',
2031+ '--name=client.{}'.format(service), '--add-key={}'.format(key)]
2032 check_call(cmd)
2033- log('ceph: Created new ring at %s.' % keyring, level=INFO)
2034+ log('Created new ceph keyring at %s.' % keyring, level=DEBUG)
2035
2036
2037 def create_key_file(service, key):
2038- ''' Create a file containing key '''
2039+ """Create a file containing key."""
2040 keyfile = _keyfile_path(service)
2041 if os.path.exists(keyfile):
2042- log('ceph: Keyfile exists at %s.' % keyfile, level=WARNING)
2043+ log('Keyfile exists at %s.' % keyfile, level=WARNING)
2044 return
2045+
2046 with open(keyfile, 'w') as fd:
2047 fd.write(key)
2048- log('ceph: Created new keyfile at %s.' % keyfile, level=INFO)
2049+
2050+ log('Created new keyfile at %s.' % keyfile, level=INFO)
2051
2052
2053 def get_ceph_nodes():
2054- ''' Query named relation 'ceph' to detemine current nodes '''
2055+ """Query named relation 'ceph' to determine current nodes."""
2056 hosts = []
2057 for r_id in relation_ids('ceph'):
2058 for unit in related_units(r_id):
2059 hosts.append(relation_get('private-address', unit=unit, rid=r_id))
2060+
2061 return hosts
2062
2063
2064 def configure(service, key, auth, use_syslog):
2065- ''' Perform basic configuration of Ceph '''
2066+ """Perform basic configuration of Ceph."""
2067 create_keyring(service, key)
2068 create_key_file(service, key)
2069 hosts = get_ceph_nodes()
2070@@ -211,17 +194,17 @@
2071
2072
2073 def image_mapped(name):
2074- ''' Determine whether a RADOS block device is mapped locally '''
2075+ """Determine whether a RADOS block device is mapped locally."""
2076 try:
2077- out = check_output(['rbd', 'showmapped'])
2078+ out = check_output(['rbd', 'showmapped']).decode('UTF-8')
2079 except CalledProcessError:
2080 return False
2081- else:
2082- return name in out
2083+
2084+ return name in out
2085
2086
2087 def map_block_storage(service, pool, image):
2088- ''' Map a RADOS block device for local use '''
2089+ """Map a RADOS block device for local use."""
2090 cmd = [
2091 'rbd',
2092 'map',
2093@@ -235,31 +218,32 @@
2094
2095
2096 def filesystem_mounted(fs):
2097- ''' Determine whether a filesytems is already mounted '''
2098+    """Determine whether a filesystem is already mounted."""
2099 return fs in [f for f, m in mounts()]
2100
2101
2102 def make_filesystem(blk_device, fstype='ext4', timeout=10):
2103- ''' Make a new filesystem on the specified block device '''
2104+ """Make a new filesystem on the specified block device."""
2105 count = 0
2106 e_noent = os.errno.ENOENT
2107 while not os.path.exists(blk_device):
2108 if count >= timeout:
2109- log('ceph: gave up waiting on block device %s' % blk_device,
2110+ log('Gave up waiting on block device %s' % blk_device,
2111 level=ERROR)
2112 raise IOError(e_noent, os.strerror(e_noent), blk_device)
2113- log('ceph: waiting for block device %s to appear' % blk_device,
2114- level=INFO)
2115+
2116+ log('Waiting for block device %s to appear' % blk_device,
2117+ level=DEBUG)
2118 count += 1
2119 time.sleep(1)
2120 else:
2121- log('ceph: Formatting block device %s as filesystem %s.' %
2122+ log('Formatting block device %s as filesystem %s.' %
2123 (blk_device, fstype), level=INFO)
2124 check_call(['mkfs', '-t', fstype, blk_device])
2125
2126
2127 def place_data_on_block_device(blk_device, data_src_dst):
2128- ''' Migrate data in data_src_dst to blk_device and then remount '''
2129+ """Migrate data in data_src_dst to blk_device and then remount."""
2130 # mount block device into /mnt
2131 mount(blk_device, '/mnt')
2132 # copy data to /mnt
2133@@ -279,8 +263,8 @@
2134
2135 # TODO: re-use
2136 def modprobe(module):
2137- ''' Load a kernel module and configure for auto-load on reboot '''
2138- log('ceph: Loading kernel module', level=INFO)
2139+ """Load a kernel module and configure for auto-load on reboot."""
2140+ log('Loading kernel module', level=INFO)
2141 cmd = ['modprobe', module]
2142 check_call(cmd)
2143 with open('/etc/modules', 'r+') as modules:
2144@@ -289,7 +273,7 @@
2145
2146
2147 def copy_files(src, dst, symlinks=False, ignore=None):
2148- ''' Copy files from src to dst '''
2149+ """Copy files from src to dst."""
2150 for item in os.listdir(src):
2151 s = os.path.join(src, item)
2152 d = os.path.join(dst, item)
2153@@ -300,9 +284,9 @@
2154
2155
2156 def ensure_ceph_storage(service, pool, rbd_img, sizemb, mount_point,
2157- blk_device, fstype, system_services=[]):
2158- """
2159- NOTE: This function must only be called from a single service unit for
2160+ blk_device, fstype, system_services=[],
2161+ replicas=3):
2162+ """NOTE: This function must only be called from a single service unit for
2163 the same rbd_img otherwise data loss will occur.
2164
2165 Ensures given pool and RBD image exists, is mapped to a block device,
2166@@ -316,15 +300,16 @@
2167 """
2168 # Ensure pool, RBD image, RBD mappings are in place.
2169 if not pool_exists(service, pool):
2170- log('ceph: Creating new pool {}.'.format(pool))
2171- create_pool(service, pool)
2172+ log('Creating new pool {}.'.format(pool), level=INFO)
2173+ create_pool(service, pool, replicas=replicas)
2174
2175 if not rbd_exists(service, pool, rbd_img):
2176- log('ceph: Creating RBD image ({}).'.format(rbd_img))
2177+ log('Creating RBD image ({}).'.format(rbd_img), level=INFO)
2178 create_rbd_image(service, pool, rbd_img, sizemb)
2179
2180 if not image_mapped(rbd_img):
2181- log('ceph: Mapping RBD Image {} as a Block Device.'.format(rbd_img))
2182+ log('Mapping RBD Image {} as a Block Device.'.format(rbd_img),
2183+ level=INFO)
2184 map_block_storage(service, pool, rbd_img)
2185
2186 # make file system
2187@@ -339,45 +324,47 @@
2188
2189 for svc in system_services:
2190 if service_running(svc):
2191- log('ceph: Stopping services {} prior to migrating data.'
2192- .format(svc))
2193+ log('Stopping services {} prior to migrating data.'
2194+ .format(svc), level=DEBUG)
2195 service_stop(svc)
2196
2197 place_data_on_block_device(blk_device, mount_point)
2198
2199 for svc in system_services:
2200- log('ceph: Starting service {} after migrating data.'
2201- .format(svc))
2202+ log('Starting service {} after migrating data.'
2203+ .format(svc), level=DEBUG)
2204 service_start(svc)
2205
2206
2207 def ensure_ceph_keyring(service, user=None, group=None):
2208- '''
2209- Ensures a ceph keyring is created for a named service
2210- and optionally ensures user and group ownership.
2211+ """Ensures a ceph keyring is created for a named service and optionally
2212+ ensures user and group ownership.
2213
2214 Returns False if no ceph key is available in relation state.
2215- '''
2216+ """
2217 key = None
2218 for rid in relation_ids('ceph'):
2219 for unit in related_units(rid):
2220 key = relation_get('key', rid=rid, unit=unit)
2221 if key:
2222 break
2223+
2224 if not key:
2225 return False
2226+
2227 create_keyring(service=service, key=key)
2228 keyring = _keyring_path(service)
2229 if user and group:
2230 check_call(['chown', '%s.%s' % (user, group), keyring])
2231+
2232 return True
2233
2234
2235 def ceph_version():
2236- ''' Retrieve the local version of ceph '''
2237+ """Retrieve the local version of ceph."""
2238 if os.path.exists('/usr/bin/ceph'):
2239 cmd = ['ceph', '-v']
2240- output = check_output(cmd)
2241+ output = check_output(cmd).decode('US-ASCII')
2242 output = output.split()
2243 if len(output) > 3:
2244 return output[2]
2245
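create_pool() now defaults to three replicas and uses floor division, so the placement-group count stays an integer under Python 3. A worked example of that calculation; the OSD counts are examples:

# 100 placement groups per OSD, divided by the replica count; fall back to
# 200 when the OSD count cannot be queried (older ceph).
def pg_num(osd_count, replicas=3):
    if osd_count:
        return osd_count * 100 // replicas
    return 200


print(pg_num(6))      # -> 200
print(pg_num(6, 2))   # -> 300
print(pg_num(0))      # -> 200 (fallback)
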
2246=== modified file 'hooks/charmhelpers/contrib/storage/linux/loopback.py'
2247--- hooks/charmhelpers/contrib/storage/linux/loopback.py 2013-08-12 21:48:24 +0000
2248+++ hooks/charmhelpers/contrib/storage/linux/loopback.py 2014-12-10 19:38:11 +0000
2249@@ -1,12 +1,12 @@
2250-
2251 import os
2252 import re
2253-
2254 from subprocess import (
2255 check_call,
2256 check_output,
2257 )
2258
2259+import six
2260+
2261
2262 ##################################################
2263 # loopback device helpers.
2264@@ -37,7 +37,7 @@
2265 '''
2266 file_path = os.path.abspath(file_path)
2267 check_call(['losetup', '--find', file_path])
2268- for d, f in loopback_devices().iteritems():
2269+ for d, f in six.iteritems(loopback_devices()):
2270 if f == file_path:
2271 return d
2272
2273@@ -51,7 +51,7 @@
2274
2275 :returns: str: Full path to the ensured loopback device (eg, /dev/loop0)
2276 '''
2277- for d, f in loopback_devices().iteritems():
2278+ for d, f in six.iteritems(loopback_devices()):
2279 if f == path:
2280 return d
2281
2282
2283=== modified file 'hooks/charmhelpers/contrib/storage/linux/lvm.py'
2284--- hooks/charmhelpers/contrib/storage/linux/lvm.py 2014-05-19 11:41:02 +0000
2285+++ hooks/charmhelpers/contrib/storage/linux/lvm.py 2014-12-10 19:38:11 +0000
2286@@ -61,6 +61,7 @@
2287 vg = None
2288 pvd = check_output(['pvdisplay', block_device]).splitlines()
2289 for l in pvd:
2290+ l = l.decode('UTF-8')
2291 if l.strip().startswith('VG Name'):
2292 vg = ' '.join(l.strip().split()[2:])
2293 return vg
2294
2295=== modified file 'hooks/charmhelpers/contrib/storage/linux/utils.py'
2296--- hooks/charmhelpers/contrib/storage/linux/utils.py 2014-08-13 13:12:14 +0000
2297+++ hooks/charmhelpers/contrib/storage/linux/utils.py 2014-12-10 19:38:11 +0000
2298@@ -30,7 +30,8 @@
2299 # sometimes sgdisk exits non-zero; this is OK, dd will clean up
2300 call(['sgdisk', '--zap-all', '--mbrtogpt',
2301 '--clear', block_device])
2302- dev_end = check_output(['blockdev', '--getsz', block_device])
2303+ dev_end = check_output(['blockdev', '--getsz',
2304+ block_device]).decode('UTF-8')
2305 gpt_end = int(dev_end.split()[0]) - 100
2306 check_call(['dd', 'if=/dev/zero', 'of=%s' % (block_device),
2307 'bs=1M', 'count=1'])
2308@@ -47,7 +48,7 @@
2309 it doesn't.
2310 '''
2311 is_partition = bool(re.search(r".*[0-9]+\b", device))
2312- out = check_output(['mount'])
2313+ out = check_output(['mount']).decode('UTF-8')
2314 if is_partition:
2315 return bool(re.search(device + r"\b", out))
2316 return bool(re.search(device + r"[0-9]+\b", out))
2317
2318=== modified file 'hooks/charmhelpers/core/fstab.py'
2319--- hooks/charmhelpers/core/fstab.py 2014-07-11 02:24:52 +0000
2320+++ hooks/charmhelpers/core/fstab.py 2014-12-10 19:38:11 +0000
2321@@ -3,10 +3,11 @@
2322
2323 __author__ = 'Jorge Niedbalski R. <jorge.niedbalski@canonical.com>'
2324
2325+import io
2326 import os
2327
2328
2329-class Fstab(file):
2330+class Fstab(io.FileIO):
2331 """This class extends file in order to implement a file reader/writer
2332 for file `/etc/fstab`
2333 """
2334@@ -24,8 +25,8 @@
2335 options = "defaults"
2336
2337 self.options = options
2338- self.d = d
2339- self.p = p
2340+ self.d = int(d)
2341+ self.p = int(p)
2342
2343 def __eq__(self, o):
2344 return str(self) == str(o)
2345@@ -45,7 +46,7 @@
2346 self._path = path
2347 else:
2348 self._path = self.DEFAULT_PATH
2349- file.__init__(self, self._path, 'r+')
2350+ super(Fstab, self).__init__(self._path, 'rb+')
2351
2352 def _hydrate_entry(self, line):
2353 # NOTE: use split with no arguments to split on any
2354@@ -58,8 +59,9 @@
2355 def entries(self):
2356 self.seek(0)
2357 for line in self.readlines():
2358+ line = line.decode('us-ascii')
2359 try:
2360- if not line.startswith("#"):
2361+ if line.strip() and not line.startswith("#"):
2362 yield self._hydrate_entry(line)
2363 except ValueError:
2364 pass
2365@@ -75,14 +77,14 @@
2366 if self.get_entry_by_attr('device', entry.device):
2367 return False
2368
2369- self.write(str(entry) + '\n')
2370+ self.write((str(entry) + '\n').encode('us-ascii'))
2371 self.truncate()
2372 return entry
2373
2374 def remove_entry(self, entry):
2375 self.seek(0)
2376
2377- lines = self.readlines()
2378+ lines = [l.decode('us-ascii') for l in self.readlines()]
2379
2380 found = False
2381 for index, line in enumerate(lines):
2382@@ -97,7 +99,7 @@
2383 lines.remove(line)
2384
2385 self.seek(0)
2386- self.write(''.join(lines))
2387+ self.write(''.join(lines).encode('us-ascii'))
2388 self.truncate()
2389 return True
2390
2391
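With the move to io.FileIO, Fstab reads and writes bytes and decodes each line as us-ascii before splitting it on whitespace. A tiny sketch of that round trip for a single made-up fstab line:

# Decode, split on any whitespace, coerce the dump and pass fields to int,
# mirroring Fstab._hydrate_entry() above.
line = b"/dev/vdb /srv ext4 defaults 0 2\n"

device, mountpoint, filesystem, options, d, p = line.decode('us-ascii').split()
print(device, mountpoint, filesystem, options, int(d), int(p))
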
2392=== modified file 'hooks/charmhelpers/core/hookenv.py'
2393--- hooks/charmhelpers/core/hookenv.py 2014-10-06 21:57:43 +0000
2394+++ hooks/charmhelpers/core/hookenv.py 2014-12-10 19:38:11 +0000
2395@@ -9,9 +9,14 @@
2396 import yaml
2397 import subprocess
2398 import sys
2399-import UserDict
2400 from subprocess import CalledProcessError
2401
2402+import six
2403+if not six.PY3:
2404+ from UserDict import UserDict
2405+else:
2406+ from collections import UserDict
2407+
2408 CRITICAL = "CRITICAL"
2409 ERROR = "ERROR"
2410 WARNING = "WARNING"
2411@@ -67,12 +72,12 @@
2412 subprocess.call(command)
2413
2414
2415-class Serializable(UserDict.IterableUserDict):
2416+class Serializable(UserDict):
2417 """Wrapper, an object that can be serialized to yaml or json"""
2418
2419 def __init__(self, obj):
2420 # wrap the object
2421- UserDict.IterableUserDict.__init__(self)
2422+ UserDict.__init__(self)
2423 self.data = obj
2424
2425 def __getattr__(self, attr):
2426@@ -214,6 +219,12 @@
2427 except KeyError:
2428 return (self._prev_dict or {})[key]
2429
2430+ def keys(self):
2431+ prev_keys = []
2432+ if self._prev_dict is not None:
2433+ prev_keys = self._prev_dict.keys()
2434+ return list(set(prev_keys + list(dict.keys(self))))
2435+
2436 def load_previous(self, path=None):
2437 """Load previous copy of config from disk.
2438
2439@@ -263,7 +274,7 @@
2440
2441 """
2442 if self._prev_dict:
2443- for k, v in self._prev_dict.iteritems():
2444+ for k, v in six.iteritems(self._prev_dict):
2445 if k not in self:
2446 self[k] = v
2447 with open(self.path, 'w') as f:
2448@@ -278,7 +289,8 @@
2449 config_cmd_line.append(scope)
2450 config_cmd_line.append('--format=json')
2451 try:
2452- config_data = json.loads(subprocess.check_output(config_cmd_line))
2453+ config_data = json.loads(
2454+ subprocess.check_output(config_cmd_line).decode('UTF-8'))
2455 if scope is not None:
2456 return config_data
2457 return Config(config_data)
2458@@ -297,10 +309,10 @@
2459 if unit:
2460 _args.append(unit)
2461 try:
2462- return json.loads(subprocess.check_output(_args))
2463+ return json.loads(subprocess.check_output(_args).decode('UTF-8'))
2464 except ValueError:
2465 return None
2466- except CalledProcessError, e:
2467+ except CalledProcessError as e:
2468 if e.returncode == 2:
2469 return None
2470 raise
2471@@ -312,7 +324,7 @@
2472 relation_cmd_line = ['relation-set']
2473 if relation_id is not None:
2474 relation_cmd_line.extend(('-r', relation_id))
2475- for k, v in (relation_settings.items() + kwargs.items()):
2476+ for k, v in (list(relation_settings.items()) + list(kwargs.items())):
2477 if v is None:
2478 relation_cmd_line.append('{}='.format(k))
2479 else:
2480@@ -329,7 +341,8 @@
2481 relid_cmd_line = ['relation-ids', '--format=json']
2482 if reltype is not None:
2483 relid_cmd_line.append(reltype)
2484- return json.loads(subprocess.check_output(relid_cmd_line)) or []
2485+ return json.loads(
2486+ subprocess.check_output(relid_cmd_line).decode('UTF-8')) or []
2487 return []
2488
2489
2490@@ -340,7 +353,8 @@
2491 units_cmd_line = ['relation-list', '--format=json']
2492 if relid is not None:
2493 units_cmd_line.extend(('-r', relid))
2494- return json.loads(subprocess.check_output(units_cmd_line)) or []
2495+ return json.loads(
2496+ subprocess.check_output(units_cmd_line).decode('UTF-8')) or []
2497
2498
2499 @cached
2500@@ -449,7 +463,7 @@
2501 """Get the unit ID for the remote unit"""
2502 _args = ['unit-get', '--format=json', attribute]
2503 try:
2504- return json.loads(subprocess.check_output(_args))
2505+ return json.loads(subprocess.check_output(_args).decode('UTF-8'))
2506 except ValueError:
2507 return None
2508
2509
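Config.keys() now reports the union of keys from the previous config snapshot and the current one, so callers still see options that were only present in the saved copy. A plain-dict illustration of that merge; the option names and values are examples:

# Merge previous and current key sets the way Config.keys() does above.
prev_dict = {'vip': '10.5.100.1', 'sysctl': None}
current = {'sysctl': '{ kernel.max_pid: "1337" }'}

merged = list(set(list(prev_dict.keys()) + list(current.keys())))
print(sorted(merged))  # -> ['sysctl', 'vip']
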
2510=== modified file 'hooks/charmhelpers/core/host.py'
2511--- hooks/charmhelpers/core/host.py 2014-10-06 21:57:43 +0000
2512+++ hooks/charmhelpers/core/host.py 2014-12-10 19:38:11 +0000
2513@@ -6,19 +6,20 @@
2514 # Matthew Wedgwood <matthew.wedgwood@canonical.com>
2515
2516 import os
2517+import re
2518 import pwd
2519 import grp
2520 import random
2521 import string
2522 import subprocess
2523 import hashlib
2524-import shutil
2525 from contextlib import contextmanager
2526-
2527 from collections import OrderedDict
2528
2529-from hookenv import log
2530-from fstab import Fstab
2531+import six
2532+
2533+from .hookenv import log
2534+from .fstab import Fstab
2535
2536
2537 def service_start(service_name):
2538@@ -54,7 +55,9 @@
2539 def service_running(service):
2540 """Determine whether a system service is running"""
2541 try:
2542- output = subprocess.check_output(['service', service, 'status'], stderr=subprocess.STDOUT)
2543+ output = subprocess.check_output(
2544+ ['service', service, 'status'],
2545+ stderr=subprocess.STDOUT).decode('UTF-8')
2546 except subprocess.CalledProcessError:
2547 return False
2548 else:
2549@@ -67,7 +70,9 @@
2550 def service_available(service_name):
2551 """Determine whether a system service is available"""
2552 try:
2553- subprocess.check_output(['service', service_name, 'status'], stderr=subprocess.STDOUT)
2554+ subprocess.check_output(
2555+ ['service', service_name, 'status'],
2556+ stderr=subprocess.STDOUT).decode('UTF-8')
2557 except subprocess.CalledProcessError as e:
2558 return 'unrecognized service' not in e.output
2559 else:
2560@@ -115,7 +120,7 @@
2561 cmd.append(from_path)
2562 cmd.append(to_path)
2563 log(" ".join(cmd))
2564- return subprocess.check_output(cmd).strip()
2565+ return subprocess.check_output(cmd).decode('UTF-8').strip()
2566
2567
2568 def symlink(source, destination):
2569@@ -130,7 +135,7 @@
2570 subprocess.check_call(cmd)
2571
2572
2573-def mkdir(path, owner='root', group='root', perms=0555, force=False):
2574+def mkdir(path, owner='root', group='root', perms=0o555, force=False):
2575 """Create a directory"""
2576 log("Making dir {} {}:{} {:o}".format(path, owner, group,
2577 perms))
2578@@ -146,7 +151,7 @@
2579 os.chown(realpath, uid, gid)
2580
2581
2582-def write_file(path, content, owner='root', group='root', perms=0444):
2583+def write_file(path, content, owner='root', group='root', perms=0o444):
2584 """Create or overwrite a file with the contents of a string"""
2585 log("Writing file {} {}:{} {:o}".format(path, owner, group, perms))
2586 uid = pwd.getpwnam(owner).pw_uid
2587@@ -177,7 +182,7 @@
2588 cmd_args.extend([device, mountpoint])
2589 try:
2590 subprocess.check_output(cmd_args)
2591- except subprocess.CalledProcessError, e:
2592+ except subprocess.CalledProcessError as e:
2593 log('Error mounting {} at {}\n{}'.format(device, mountpoint, e.output))
2594 return False
2595
2596@@ -191,7 +196,7 @@
2597 cmd_args = ['umount', mountpoint]
2598 try:
2599 subprocess.check_output(cmd_args)
2600- except subprocess.CalledProcessError, e:
2601+ except subprocess.CalledProcessError as e:
2602 log('Error unmounting {}\n{}'.format(mountpoint, e.output))
2603 return False
2604
2605@@ -218,8 +223,8 @@
2606 """
2607 if os.path.exists(path):
2608 h = getattr(hashlib, hash_type)()
2609- with open(path, 'r') as source:
2610- h.update(source.read()) # IGNORE:E1101 - it does have update
2611+ with open(path, 'rb') as source:
2612+ h.update(source.read())
2613 return h.hexdigest()
2614 else:
2615 return None
2616@@ -297,7 +302,7 @@
2617 if length is None:
2618 length = random.choice(range(35, 45))
2619 alphanumeric_chars = [
2620- l for l in (string.letters + string.digits)
2621+ l for l in (string.ascii_letters + string.digits)
2622 if l not in 'l0QD1vAEIOUaeiou']
2623 random_chars = [
2624 random.choice(alphanumeric_chars) for _ in range(length)]
2625@@ -306,18 +311,24 @@
2626
2627 def list_nics(nic_type):
2628 '''Return a list of nics of given type(s)'''
2629- if isinstance(nic_type, basestring):
2630+ if isinstance(nic_type, six.string_types):
2631 int_types = [nic_type]
2632 else:
2633 int_types = nic_type
2634 interfaces = []
2635 for int_type in int_types:
2636 cmd = ['ip', 'addr', 'show', 'label', int_type + '*']
2637- ip_output = subprocess.check_output(cmd).split('\n')
2638+ ip_output = subprocess.check_output(cmd).decode('UTF-8').split('\n')
2639 ip_output = (line for line in ip_output if line)
2640 for line in ip_output:
2641 if line.split()[1].startswith(int_type):
2642- interfaces.append(line.split()[1].replace(":", ""))
2643+ matched = re.search('.*: (bond[0-9]+\.[0-9]+)@.*', line)
2644+ if matched:
2645+ interface = matched.groups()[0]
2646+ else:
2647+ interface = line.split()[1].replace(":", "")
2648+ interfaces.append(interface)
2649+
2650 return interfaces
2651
2652
2653@@ -329,7 +340,7 @@
2654
2655 def get_nic_mtu(nic):
2656 cmd = ['ip', 'addr', 'show', nic]
2657- ip_output = subprocess.check_output(cmd).split('\n')
2658+ ip_output = subprocess.check_output(cmd).decode('UTF-8').split('\n')
2659 mtu = ""
2660 for line in ip_output:
2661 words = line.split()
2662@@ -340,7 +351,7 @@
2663
2664 def get_nic_hwaddr(nic):
2665 cmd = ['ip', '-o', '-0', 'addr', 'show', nic]
2666- ip_output = subprocess.check_output(cmd)
2667+ ip_output = subprocess.check_output(cmd).decode('UTF-8')
2668 hwaddr = ""
2669 words = ip_output.split()
2670 if 'link/ether' in words:
2671
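list_nics() now recognises VLAN-tagged bond interfaces through a regex before falling back to the old second-field parsing. A quick check of that pattern against a fabricated 'ip addr show' line:

# Extract 'bond0.100' from a bond VLAN line; plain interfaces still fall
# through to the old split-and-strip behaviour.
import re

line = "5: bond0.100@bond0: <BROADCAST,MULTICAST,UP> mtu 1500"
matched = re.search(r'.*: (bond[0-9]+\.[0-9]+)@.*', line)
if matched:
    print(matched.groups()[0])               # -> bond0.100
else:
    print(line.split()[1].replace(":", ""))
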
2672=== modified file 'hooks/charmhelpers/core/services/__init__.py'
2673--- hooks/charmhelpers/core/services/__init__.py 2014-08-13 13:12:14 +0000
2674+++ hooks/charmhelpers/core/services/__init__.py 2014-12-10 19:38:11 +0000
2675@@ -1,2 +1,2 @@
2676-from .base import *
2677-from .helpers import *
2678+from .base import * # NOQA
2679+from .helpers import * # NOQA
2680
2681=== modified file 'hooks/charmhelpers/core/services/helpers.py'
2682--- hooks/charmhelpers/core/services/helpers.py 2014-10-06 21:57:43 +0000
2683+++ hooks/charmhelpers/core/services/helpers.py 2014-12-10 19:38:11 +0000
2684@@ -196,7 +196,7 @@
2685 if not os.path.isabs(file_name):
2686 file_name = os.path.join(hookenv.charm_dir(), file_name)
2687 with open(file_name, 'w') as file_stream:
2688- os.fchmod(file_stream.fileno(), 0600)
2689+ os.fchmod(file_stream.fileno(), 0o600)
2690 yaml.dump(config_data, file_stream)
2691
2692 def read_context(self, file_name):
2693@@ -211,15 +211,19 @@
2694
2695 class TemplateCallback(ManagerCallback):
2696 """
2697- Callback class that will render a Jinja2 template, for use as a ready action.
2698-
2699- :param str source: The template source file, relative to `$CHARM_DIR/templates`
2700+ Callback class that will render a Jinja2 template, for use as a ready
2701+ action.
2702+
2703+ :param str source: The template source file, relative to
2704+ `$CHARM_DIR/templates`
2705+
2706 :param str target: The target to write the rendered template to
2707 :param str owner: The owner of the rendered file
2708 :param str group: The group of the rendered file
2709 :param int perms: The permissions of the rendered file
2710 """
2711- def __init__(self, source, target, owner='root', group='root', perms=0444):
2712+ def __init__(self, source, target,
2713+ owner='root', group='root', perms=0o444):
2714 self.source = source
2715 self.target = target
2716 self.owner = owner
2717
2718=== added file 'hooks/charmhelpers/core/sysctl.py'
2719--- hooks/charmhelpers/core/sysctl.py 1970-01-01 00:00:00 +0000
2720+++ hooks/charmhelpers/core/sysctl.py 2014-12-10 19:38:11 +0000
2721@@ -0,0 +1,34 @@
2722+#!/usr/bin/env python
2723+# -*- coding: utf-8 -*-
2724+
2725+__author__ = 'Jorge Niedbalski R. <jorge.niedbalski@canonical.com>'
2726+
2727+import yaml
2728+
2729+from subprocess import check_call
2730+
2731+from charmhelpers.core.hookenv import (
2732+ log,
2733+ DEBUG,
2734+)
2735+
2736+
2737+def create(sysctl_dict, sysctl_file):
2738+ """Creates a sysctl.conf file from a YAML associative array
2739+
2740+ :param sysctl_dict: a dict of sysctl options eg { 'kernel.max_pid': 1337 }
2741+ :type sysctl_dict: dict
2742+ :param sysctl_file: path to the sysctl file to be saved
2743+ :type sysctl_file: str or unicode
2744+ :returns: None
2745+ """
2746+ sysctl_dict = yaml.load(sysctl_dict)
2747+
2748+ with open(sysctl_file, "w") as fd:
2749+ for key, value in sysctl_dict.items():
2750+ fd.write("{}={}\n".format(key, value))
2751+
2752+ log("Updating sysctl_file: %s values: %s" % (sysctl_file, sysctl_dict),
2753+ level=DEBUG)
2754+
2755+ check_call(["sysctl", "-p", sysctl_file])
2756
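The new sysctl helper parses a YAML mapping (the value of the charm's sysctl option), writes key=value pairs to the target file and applies them with 'sysctl -p'. A sketch of the parse-and-render half that can run unprivileged; the option names and values are examples, and safe_load is used here only for the illustration:

# Render a sysctl.d style file from a YAML mapping, skipping the final
# 'sysctl -p' call so no root privileges are needed.
import tempfile
import yaml

sysctl_yaml = '{ kernel.max_pid: 1337, vm.swappiness: 10 }'
sysctl_dict = yaml.safe_load(sysctl_yaml)

with tempfile.NamedTemporaryFile('w', suffix='.conf', delete=False) as fd:
    for key, value in sysctl_dict.items():
        fd.write("{}={}\n".format(key, value))

print("rendered {}".format(fd.name))
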
2757=== modified file 'hooks/charmhelpers/core/templating.py'
2758--- hooks/charmhelpers/core/templating.py 2014-08-13 13:12:14 +0000
2759+++ hooks/charmhelpers/core/templating.py 2014-12-10 19:38:11 +0000
2760@@ -4,7 +4,8 @@
2761 from charmhelpers.core import hookenv
2762
2763
2764-def render(source, target, context, owner='root', group='root', perms=0444, templates_dir=None):
2765+def render(source, target, context, owner='root', group='root',
2766+ perms=0o444, templates_dir=None):
2767 """
2768 Render a template.
2769
2770
2771=== modified file 'hooks/charmhelpers/fetch/__init__.py'
2772--- hooks/charmhelpers/fetch/__init__.py 2014-10-06 21:57:43 +0000
2773+++ hooks/charmhelpers/fetch/__init__.py 2014-12-10 19:38:11 +0000
2774@@ -5,10 +5,6 @@
2775 from charmhelpers.core.host import (
2776 lsb_release
2777 )
2778-from urlparse import (
2779- urlparse,
2780- urlunparse,
2781-)
2782 import subprocess
2783 from charmhelpers.core.hookenv import (
2784 config,
2785@@ -16,6 +12,12 @@
2786 )
2787 import os
2788
2789+import six
2790+if six.PY3:
2791+ from urllib.parse import urlparse, urlunparse
2792+else:
2793+ from urlparse import urlparse, urlunparse
2794+
2795
2796 CLOUD_ARCHIVE = """# Ubuntu Cloud Archive
2797 deb http://ubuntu-cloud.archive.canonical.com/ubuntu {} main
2798@@ -72,6 +74,7 @@
2799 FETCH_HANDLERS = (
2800 'charmhelpers.fetch.archiveurl.ArchiveUrlFetchHandler',
2801 'charmhelpers.fetch.bzrurl.BzrUrlFetchHandler',
2802+ 'charmhelpers.fetch.giturl.GitUrlFetchHandler',
2803 )
2804
2805 APT_NO_LOCK = 100 # The return code for "couldn't acquire lock" in APT.
2806@@ -148,7 +151,7 @@
2807 cmd = ['apt-get', '--assume-yes']
2808 cmd.extend(options)
2809 cmd.append('install')
2810- if isinstance(packages, basestring):
2811+ if isinstance(packages, six.string_types):
2812 cmd.append(packages)
2813 else:
2814 cmd.extend(packages)
2815@@ -181,7 +184,7 @@
2816 def apt_purge(packages, fatal=False):
2817 """Purge one or more packages"""
2818 cmd = ['apt-get', '--assume-yes', 'purge']
2819- if isinstance(packages, basestring):
2820+ if isinstance(packages, six.string_types):
2821 cmd.append(packages)
2822 else:
2823 cmd.extend(packages)
2824@@ -192,7 +195,7 @@
2825 def apt_hold(packages, fatal=False):
2826 """Hold one or more packages"""
2827 cmd = ['apt-mark', 'hold']
2828- if isinstance(packages, basestring):
2829+ if isinstance(packages, six.string_types):
2830 cmd.append(packages)
2831 else:
2832 cmd.extend(packages)
2833@@ -218,6 +221,7 @@
2834 pocket for the release.
2835 'cloud:' may be used to activate official cloud archive pockets,
2836 such as 'cloud:icehouse'
2837+ 'distro' may be used as a noop
2838
2839 @param key: A key to be added to the system's APT keyring and used
2840 to verify the signatures on packages. Ideally, this should be an
2841@@ -251,12 +255,14 @@
2842 release = lsb_release()['DISTRIB_CODENAME']
2843 with open('/etc/apt/sources.list.d/proposed.list', 'w') as apt:
2844 apt.write(PROPOSED_POCKET.format(release))
2845+ elif source == 'distro':
2846+ pass
2847 else:
2848- raise SourceConfigError("Unknown source: {!r}".format(source))
2849+ log("Unknown source: {!r}".format(source))
2850
2851 if key:
2852 if '-----BEGIN PGP PUBLIC KEY BLOCK-----' in key:
2853- with NamedTemporaryFile() as key_file:
2854+ with NamedTemporaryFile('w+') as key_file:
2855 key_file.write(key)
2856 key_file.flush()
2857 key_file.seek(0)
2858@@ -293,14 +299,14 @@
2859 sources = safe_load((config(sources_var) or '').strip()) or []
2860 keys = safe_load((config(keys_var) or '').strip()) or None
2861
2862- if isinstance(sources, basestring):
2863+ if isinstance(sources, six.string_types):
2864 sources = [sources]
2865
2866 if keys is None:
2867 for source in sources:
2868 add_source(source, None)
2869 else:
2870- if isinstance(keys, basestring):
2871+ if isinstance(keys, six.string_types):
2872 keys = [keys]
2873
2874 if len(sources) != len(keys):
2875@@ -397,7 +403,7 @@
2876 while result is None or result == APT_NO_LOCK:
2877 try:
2878 result = subprocess.check_call(cmd, env=env)
2879- except subprocess.CalledProcessError, e:
2880+ except subprocess.CalledProcessError as e:
2881 retry_count = retry_count + 1
2882 if retry_count > APT_NO_LOCK_RETRY_COUNT:
2883 raise
2884
2885=== modified file 'hooks/charmhelpers/fetch/archiveurl.py'
2886--- hooks/charmhelpers/fetch/archiveurl.py 2014-10-06 21:57:43 +0000
2887+++ hooks/charmhelpers/fetch/archiveurl.py 2014-12-10 19:38:11 +0000
2888@@ -1,8 +1,23 @@
2889 import os
2890-import urllib2
2891-from urllib import urlretrieve
2892-import urlparse
2893 import hashlib
2894+import re
2895+
2896+import six
2897+if six.PY3:
2898+ from urllib.request import (
2899+ build_opener, install_opener, urlopen, urlretrieve,
2900+ HTTPPasswordMgrWithDefaultRealm, HTTPBasicAuthHandler,
2901+ )
2902+ from urllib.parse import urlparse, urlunparse, parse_qs
2903+ from urllib.error import URLError
2904+else:
2905+ from urllib import urlretrieve
2906+ from urllib2 import (
2907+ build_opener, install_opener, urlopen,
2908+ HTTPPasswordMgrWithDefaultRealm, HTTPBasicAuthHandler,
2909+ URLError
2910+ )
2911+ from urlparse import urlparse, urlunparse, parse_qs
2912
2913 from charmhelpers.fetch import (
2914 BaseFetchHandler,
2915@@ -15,6 +30,24 @@
2916 from charmhelpers.core.host import mkdir, check_hash
2917
2918
2919+def splituser(host):
2920+ '''urllib.splituser(), but six's support of this seems broken'''
2921+ _userprog = re.compile('^(.*)@(.*)$')
2922+ match = _userprog.match(host)
2923+ if match:
2924+ return match.group(1, 2)
2925+ return None, host
2926+
2927+
2928+def splitpasswd(user):
2929+ '''urllib.splitpasswd(), but six's support of this is missing'''
2930+ _passwdprog = re.compile('^([^:]*):(.*)$', re.S)
2931+ match = _passwdprog.match(user)
2932+ if match:
2933+ return match.group(1, 2)
2934+ return user, None
2935+
2936+
2937 class ArchiveUrlFetchHandler(BaseFetchHandler):
2938 """
2939 Handler to download archive files from arbitrary URLs.
2940@@ -42,20 +75,20 @@
2941 """
2942 # propogate all exceptions
2943 # URLError, OSError, etc
2944- proto, netloc, path, params, query, fragment = urlparse.urlparse(source)
2945+ proto, netloc, path, params, query, fragment = urlparse(source)
2946 if proto in ('http', 'https'):
2947- auth, barehost = urllib2.splituser(netloc)
2948+ auth, barehost = splituser(netloc)
2949 if auth is not None:
2950- source = urlparse.urlunparse((proto, barehost, path, params, query, fragment))
2951- username, password = urllib2.splitpasswd(auth)
2952- passman = urllib2.HTTPPasswordMgrWithDefaultRealm()
2953+ source = urlunparse((proto, barehost, path, params, query, fragment))
2954+ username, password = splitpasswd(auth)
2955+ passman = HTTPPasswordMgrWithDefaultRealm()
2956 # Realm is set to None in add_password to force the username and password
2957 # to be used whatever the realm
2958 passman.add_password(None, source, username, password)
2959- authhandler = urllib2.HTTPBasicAuthHandler(passman)
2960- opener = urllib2.build_opener(authhandler)
2961- urllib2.install_opener(opener)
2962- response = urllib2.urlopen(source)
2963+ authhandler = HTTPBasicAuthHandler(passman)
2964+ opener = build_opener(authhandler)
2965+ install_opener(opener)
2966+ response = urlopen(source)
2967 try:
2968 with open(dest, 'w') as dest_file:
2969 dest_file.write(response.read())
2970@@ -91,17 +124,21 @@
2971 url_parts = self.parse_url(source)
2972 dest_dir = os.path.join(os.environ.get('CHARM_DIR'), 'fetched')
2973 if not os.path.exists(dest_dir):
2974- mkdir(dest_dir, perms=0755)
2975+ mkdir(dest_dir, perms=0o755)
2976 dld_file = os.path.join(dest_dir, os.path.basename(url_parts.path))
2977 try:
2978 self.download(source, dld_file)
2979- except urllib2.URLError as e:
2980+ except URLError as e:
2981 raise UnhandledSource(e.reason)
2982 except OSError as e:
2983 raise UnhandledSource(e.strerror)
2984- options = urlparse.parse_qs(url_parts.fragment)
2985+ options = parse_qs(url_parts.fragment)
2986 for key, value in options.items():
2987- if key in hashlib.algorithms:
2988+ if not six.PY3:
2989+ algorithms = hashlib.algorithms
2990+ else:
2991+ algorithms = hashlib.algorithms_available
2992+ if key in algorithms:
2993 check_hash(dld_file, value, key)
2994 if checksum:
2995 check_hash(dld_file, checksum, hash_type)
2996
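The splituser()/splitpasswd() helpers replace the urllib2 functions that are not importable on Python 3. A quick demonstration against a netloc with fake embedded credentials:

# Split 'user:pass@host' into its parts the way download() consumes them.
import re


def splituser(host):
    match = re.compile('^(.*)@(.*)$').match(host)
    return match.group(1, 2) if match else (None, host)


def splitpasswd(user):
    match = re.compile('^([^:]*):(.*)$', re.S).match(user)
    return match.group(1, 2) if match else (user, None)


auth, barehost = splituser('admin:s3cret@archive.example.com')
print(barehost)           # -> archive.example.com
print(splitpasswd(auth))  # -> ('admin', 's3cret')
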
2997=== modified file 'hooks/charmhelpers/fetch/bzrurl.py'
2998--- hooks/charmhelpers/fetch/bzrurl.py 2014-07-28 14:38:51 +0000
2999+++ hooks/charmhelpers/fetch/bzrurl.py 2014-12-10 19:38:11 +0000
3000@@ -5,6 +5,10 @@
3001 )
3002 from charmhelpers.core.host import mkdir
3003
3004+import six
3005+if six.PY3:
3006+ raise ImportError('bzrlib does not support Python3')
3007+
3008 try:
3009 from bzrlib.branch import Branch
3010 except ImportError:
3011@@ -42,7 +46,7 @@
3012 dest_dir = os.path.join(os.environ.get('CHARM_DIR'), "fetched",
3013 branch_name)
3014 if not os.path.exists(dest_dir):
3015- mkdir(dest_dir, perms=0755)
3016+ mkdir(dest_dir, perms=0o755)
3017 try:
3018 self.branch(source, dest_dir)
3019 except OSError as e:
3020
3021=== added file 'hooks/charmhelpers/fetch/giturl.py'
3022--- hooks/charmhelpers/fetch/giturl.py 1970-01-01 00:00:00 +0000
3023+++ hooks/charmhelpers/fetch/giturl.py 2014-12-10 19:38:11 +0000
3024@@ -0,0 +1,48 @@
3025+import os
3026+from charmhelpers.fetch import (
3027+ BaseFetchHandler,
3028+ UnhandledSource
3029+)
3030+from charmhelpers.core.host import mkdir
3031+
3032+import six
3033+if six.PY3:
3034+ raise ImportError('GitPython does not support Python 3')
3035+
3036+try:
3037+ from git import Repo
3038+except ImportError:
3039+ from charmhelpers.fetch import apt_install
3040+ apt_install("python-git")
3041+ from git import Repo
3042+
3043+
3044+class GitUrlFetchHandler(BaseFetchHandler):
3045+ """Handler for git branches via generic and github URLs"""
3046+ def can_handle(self, source):
3047+ url_parts = self.parse_url(source)
3048+ # TODO (mattyw) no support for ssh git@ yet
3049+ if url_parts.scheme not in ('http', 'https', 'git'):
3050+ return False
3051+ else:
3052+ return True
3053+
3054+ def clone(self, source, dest, branch):
3055+ if not self.can_handle(source):
3056+ raise UnhandledSource("Cannot handle {}".format(source))
3057+
3058+ repo = Repo.clone_from(source, dest)
3059+ repo.git.checkout(branch)
3060+
3061+ def install(self, source, branch="master"):
3062+ url_parts = self.parse_url(source)
3063+ branch_name = url_parts.path.strip("/").split("/")[-1]
3064+ dest_dir = os.path.join(os.environ.get('CHARM_DIR'), "fetched",
3065+ branch_name)
3066+ if not os.path.exists(dest_dir):
3067+ mkdir(dest_dir, perms=0o755)
3068+ try:
3069+ self.clone(source, dest_dir, branch)
3070+ except OSError as e:
3071+ raise UnhandledSource(e.strerror)
3072+ return dest_dir
3073
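GitUrlFetchHandler.install() derives its destination directory from the last path segment of the source URL before cloning. A sketch of just that derivation, with no clone performed; the URL and the CHARM_DIR fallback are illustrative:

# Work out where a git source would be checked out under $CHARM_DIR/fetched.
import os
try:
    from urllib.parse import urlparse   # Python 3
except ImportError:
    from urlparse import urlparse       # Python 2

source = 'https://github.com/example/some-charm-branch'
branch_name = urlparse(source).path.strip('/').split('/')[-1]
dest_dir = os.path.join(os.environ.get('CHARM_DIR', '/tmp'), 'fetched',
                        branch_name)
print(dest_dir)  # e.g. /tmp/fetched/some-charm-branch
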
3074=== modified file 'hooks/nova_compute_hooks.py'
3075--- hooks/nova_compute_hooks.py 2014-11-28 12:54:57 +0000
3076+++ hooks/nova_compute_hooks.py 2014-12-10 19:38:11 +0000
3077@@ -57,6 +57,8 @@
3078 get_ipv6_addr
3079 )
3080
3081+from charmhelpers.core.sysctl import create as create_sysctl
3082+
3083 from nova_compute_context import CEPH_SECRET_UUID
3084 from socket import gethostname
3085
3086@@ -82,6 +84,10 @@
3087 if openstack_upgrade_available('nova-common'):
3088 CONFIGS = do_openstack_upgrade()
3089
3090+ sysctl_dict = config('sysctl')
3091+ if sysctl_dict:
3092+ create_sysctl(sysctl_dict, '/etc/sysctl.d/50-nova-compute.conf')
3093+
3094 if migration_enabled() and config('migration-auth-type') == 'ssh':
3095 # Check-in with nova-c-c and register new ssh key, if it has just been
3096 # generated.
3097
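config_changed() now only writes /etc/sysctl.d/50-nova-compute.conf when the charm's new sysctl option is set. A stubbed sketch of that branch; config() and create_sysctl() are stand-ins here and the option value is an example:

# Only render the sysctl file when the 'sysctl' charm option is non-empty.
def config(key):
    return {'sysctl': '{ kernel.max_pid: "1337" }'}.get(key)


def create_sysctl(sysctl_dict, sysctl_file):
    print("would render %s from %s" % (sysctl_file, sysctl_dict))


sysctl_dict = config('sysctl')
if sysctl_dict:
    create_sysctl(sysctl_dict, '/etc/sysctl.d/50-nova-compute.conf')
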
3098=== modified file 'tests/charmhelpers/contrib/amulet/deployment.py'
3099--- tests/charmhelpers/contrib/amulet/deployment.py 2014-10-06 21:57:43 +0000
3100+++ tests/charmhelpers/contrib/amulet/deployment.py 2014-12-10 19:38:11 +0000
3101@@ -1,6 +1,6 @@
3102 import amulet
3103-
3104 import os
3105+import six
3106
3107
3108 class AmuletDeployment(object):
3109@@ -52,12 +52,12 @@
3110
3111 def _add_relations(self, relations):
3112 """Add all of the relations for the services."""
3113- for k, v in relations.iteritems():
3114+ for k, v in six.iteritems(relations):
3115 self.d.relate(k, v)
3116
3117 def _configure_services(self, configs):
3118 """Configure all of the services."""
3119- for service, config in configs.iteritems():
3120+ for service, config in six.iteritems(configs):
3121 self.d.configure(service, config)
3122
3123 def _deploy(self):
3124
3125=== modified file 'tests/charmhelpers/contrib/amulet/utils.py'
3126--- tests/charmhelpers/contrib/amulet/utils.py 2014-07-30 15:16:25 +0000
3127+++ tests/charmhelpers/contrib/amulet/utils.py 2014-12-10 19:38:11 +0000
3128@@ -5,6 +5,8 @@
3129 import sys
3130 import time
3131
3132+import six
3133+
3134
3135 class AmuletUtils(object):
3136 """Amulet utilities.
3137@@ -58,7 +60,7 @@
3138 Verify the specified services are running on the corresponding
3139 service units.
3140 """
3141- for k, v in commands.iteritems():
3142+ for k, v in six.iteritems(commands):
3143 for cmd in v:
3144 output, code = k.run(cmd)
3145 if code != 0:
3146@@ -100,11 +102,11 @@
3147 longs, or can be a function that evaluate a variable and returns a
3148 bool.
3149 """
3150- for k, v in expected.iteritems():
3151+ for k, v in six.iteritems(expected):
3152 if k in actual:
3153- if (isinstance(v, basestring) or
3154+ if (isinstance(v, six.string_types) or
3155 isinstance(v, bool) or
3156- isinstance(v, (int, long))):
3157+ isinstance(v, six.integer_types)):
3158 if v != actual[k]:
3159 return "{}:{}".format(k, actual[k])
3160 elif not v(actual[k]):
3161
3162=== modified file 'tests/charmhelpers/contrib/openstack/amulet/deployment.py'
3163--- tests/charmhelpers/contrib/openstack/amulet/deployment.py 2014-10-06 21:57:43 +0000
3164+++ tests/charmhelpers/contrib/openstack/amulet/deployment.py 2014-12-10 19:38:11 +0000
3165@@ -1,3 +1,4 @@
3166+import six
3167 from charmhelpers.contrib.amulet.deployment import (
3168 AmuletDeployment
3169 )
3170@@ -69,7 +70,7 @@
3171
3172 def _configure_services(self, configs):
3173 """Configure all of the services."""
3174- for service, config in configs.iteritems():
3175+ for service, config in six.iteritems(configs):
3176 self.d.configure(service, config)
3177
3178 def _get_openstack_release(self):
3179
3180=== modified file 'tests/charmhelpers/contrib/openstack/amulet/utils.py'
3181--- tests/charmhelpers/contrib/openstack/amulet/utils.py 2014-10-06 21:57:43 +0000
3182+++ tests/charmhelpers/contrib/openstack/amulet/utils.py 2014-12-10 19:38:11 +0000
3183@@ -7,6 +7,8 @@
3184 import keystoneclient.v2_0 as keystone_client
3185 import novaclient.v1_1.client as nova_client
3186
3187+import six
3188+
3189 from charmhelpers.contrib.amulet.utils import (
3190 AmuletUtils
3191 )
3192@@ -60,7 +62,7 @@
3193 expected service catalog endpoints.
3194 """
3195 self.log.debug('actual: {}'.format(repr(actual)))
3196- for k, v in expected.iteritems():
3197+ for k, v in six.iteritems(expected):
3198 if k in actual:
3199 ret = self._validate_dict_data(expected[k][0], actual[k][0])
3200 if ret:
3201
3202=== modified file 'unit_tests/test_nova_compute_hooks.py'
3203--- unit_tests/test_nova_compute_hooks.py 2014-11-28 14:17:10 +0000
3204+++ unit_tests/test_nova_compute_hooks.py 2014-12-10 19:38:11 +0000
3205@@ -54,7 +54,8 @@
3206 'ensure_ceph_keyring',
3207 'execd_preinstall',
3208 # socket
3209- 'gethostname'
3210+ 'gethostname',
3211+ 'create_sysctl',
3212 ]
3213
3214
3215@@ -141,6 +142,12 @@
3216 self.assertFalse(self.do_openstack_upgrade.called)
3217 self.assertFalse(compute_joined.called)
3218
3219+ @patch.object(hooks, 'compute_joined')
3220+ def test_config_changed_with_sysctl(self, compute_joined):
3221+ self.test_config.set('sysctl', '{ kernel.max_pid : "1337" }')
3222+ hooks.config_changed()
3223+ self.create_sysctl.assert_called()
3224+
3225 def test_amqp_joined(self):
3226 hooks.amqp_joined()
3227 self.relation_set.assert_called_with(
