Merge lp:~niedbalski/charms/trusty/quantum-gateway/lp-1396607 into lp:~openstack-charmers/charms/trusty/quantum-gateway/next

Proposed by Jorge Niedbalski
Status: Superseded
Proposed branch: lp:~niedbalski/charms/trusty/quantum-gateway/lp-1396607
Merge into: lp:~openstack-charmers/charms/trusty/quantum-gateway/next
Diff against target: 3150 lines (+817/-526)
30 files modified
config.yaml (+7/-1)
hooks/charmhelpers/contrib/hahelpers/cluster.py (+16/-7)
hooks/charmhelpers/contrib/network/ip.py (+52/-50)
hooks/charmhelpers/contrib/openstack/amulet/deployment.py (+2/-1)
hooks/charmhelpers/contrib/openstack/amulet/utils.py (+3/-1)
hooks/charmhelpers/contrib/openstack/context.py (+319/-226)
hooks/charmhelpers/contrib/openstack/ip.py (+41/-27)
hooks/charmhelpers/contrib/openstack/neutron.py (+20/-4)
hooks/charmhelpers/contrib/openstack/templating.py (+5/-5)
hooks/charmhelpers/contrib/openstack/utils.py (+35/-12)
hooks/charmhelpers/contrib/storage/linux/ceph.py (+89/-102)
hooks/charmhelpers/contrib/storage/linux/loopback.py (+4/-4)
hooks/charmhelpers/contrib/storage/linux/lvm.py (+1/-0)
hooks/charmhelpers/contrib/storage/linux/utils.py (+3/-2)
hooks/charmhelpers/core/fstab.py (+10/-8)
hooks/charmhelpers/core/hookenv.py (+25/-11)
hooks/charmhelpers/core/host.py (+22/-18)
hooks/charmhelpers/core/services/__init__.py (+2/-2)
hooks/charmhelpers/core/services/helpers.py (+9/-5)
hooks/charmhelpers/core/templating.py (+2/-1)
hooks/charmhelpers/fetch/__init__.py (+18/-12)
hooks/charmhelpers/fetch/archiveurl.py (+53/-16)
hooks/charmhelpers/fetch/bzrurl.py (+5/-1)
hooks/charmhelpers/fetch/giturl.py (+48/-0)
hooks/quantum_hooks.py (+8/-0)
tests/charmhelpers/contrib/amulet/deployment.py (+3/-3)
tests/charmhelpers/contrib/amulet/utils.py (+6/-4)
tests/charmhelpers/contrib/openstack/amulet/deployment.py (+2/-1)
tests/charmhelpers/contrib/openstack/amulet/utils.py (+3/-1)
unit_tests/test_quantum_hooks.py (+4/-1)
To merge this branch: bzr merge lp:~niedbalski/charms/trusty/quantum-gateway/lp-1396607
Reviewer              Status
Billy Olsen           Approve
OpenStack Charmers    Pending
Review via email: mp+242926@code.launchpad.net

This proposal supersedes a proposal from 2014-11-26.

This proposal has been superseded by a proposal from 2014-12-19.

Description of the change

Fix for LP: #1396607: add a 'sysctl' charm config option (a YAML-formatted map of sysctl settings, e.g. '{ kernel.pid_max : 4194303 }') and apply it from the config-changed hook.
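
For orientation, a minimal sketch of how a config-changed hook could consume the new 'sysctl' option described in config.yaml is shown below. It is illustrative only, assumes PyYAML is available on the unit, and uses an invented helper name (apply_sysctl_config); it is not the branch's actual implementation.

    # Illustrative sketch, not the code in this branch: parse the charm's
    # 'sysctl' option (a YAML map such as '{ kernel.pid_max : 4194303 }')
    # and apply each setting with sysctl -w.
    import subprocess

    import yaml

    from charmhelpers.core.hookenv import config, log


    def apply_sysctl_config():
        raw = config('sysctl')
        if not raw:
            return

        settings = yaml.safe_load(raw) or {}
        for key, value in settings.items():
            log("Setting %s=%s" % (key, value))
            subprocess.check_call(['sysctl', '-w', '%s=%s' % (key, value)])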

Revision history for this message
uosci-testing-bot (uosci-testing-bot) wrote :

UOSCI bot says:
charm_lint_check #1226 quantum-gateway-next for niedbalski mp242926
    LINT OK: passed

LINT Results (max last 5 lines):
  I: config.yaml: option sysctl has no default value
  I: config.yaml: option os-data-network has no default value
  I: config.yaml: option data-port has no default value
  I: config.yaml: option instance-mtu has no default value
  I: config.yaml: option external-network-id has no default value

Full lint test output: http://paste.ubuntu.com/9252804/
Build: http://10.98.191.181:8080/job/charm_lint_check/1226/

Revision history for this message
uosci-testing-bot (uosci-testing-bot) wrote :

UOSCI bot says:
charm_unit_test #1060 quantum-gateway-next for niedbalski mp242926
    UNIT OK: passed

UNIT Results (max last 5 lines):
  hooks/quantum_hooks 108 2 98% 204-206
  hooks/quantum_utils 207 1 99% 389
  TOTAL 458 9 98%
  Ran 85 tests in 3.389s
  OK

Full unit test output: http://paste.ubuntu.com/9252807/
Build: http://10.98.191.181:8080/job/charm_unit_test/1060/

Revision history for this message
uosci-testing-bot (uosci-testing-bot) wrote :

UOSCI bot says:
charm_amulet_test #529 quantum-gateway-next for niedbalski mp242926
    AMULET FAIL: amulet-test failed

AMULET Results (max last 5 lines):
  juju-test.conductor DEBUG : Calling "juju destroy-environment -y osci-sv07"
  WARNING cannot delete security group "juju-osci-sv07-0". Used by another environment?
  juju-test INFO : Results: 2 passed, 1 failed, 0 errored
  ERROR subprocess encountered error code 1
  make: *** [test] Error 1

Full amulet test output: http://paste.ubuntu.com/9253062/
Build: http://10.98.191.181:8080/job/charm_amulet_test/529/

Revision history for this message
Billy Olsen (billy-olsen) wrote :

LGTM. Per our conversation, niedbalski, it may be nice to have this integrated into one of the contexts so we can apply it throughout the OpenStack charms.

review: Approve
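
As a hypothetical illustration of that suggestion (nothing like this exists in the branch; the class name SysctlContext is invented here), a context following the OSContextGenerator pattern visible in hooks/charmhelpers/contrib/openstack/context.py might look roughly like:

    # Hypothetical sketch following the OSContextGenerator pattern from the
    # diff below; SysctlContext does not exist in this branch.
    import subprocess

    import yaml

    from charmhelpers.core.hookenv import config, log
    from charmhelpers.contrib.openstack.context import OSContextGenerator


    class SysctlContext(OSContextGenerator):
        """Apply charm-configured sysctl values and expose them to templates."""

        def __call__(self):
            raw = config('sysctl')
            if not raw:
                return {}

            settings = yaml.safe_load(raw) or {}
            for key, value in settings.items():
                log("sysctl -w %s=%s" % (key, value))
                subprocess.check_call(['sysctl', '-w', '%s=%s' % (key, value)])

            return {'sysctl_settings': settings}
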
Revision history for this message
uosci-testing-bot (uosci-testing-bot) wrote :

charm_lint_check #91 quantum-gateway-next for niedbalski mp242926
    LINT OK: passed

Build: http://10.245.162.77:8080/job/charm_lint_check/91/

Revision history for this message
uosci-testing-bot (uosci-testing-bot) wrote :

charm_unit_test #124 quantum-gateway-next for niedbalski mp242926
    UNIT OK: passed

Build: http://10.245.162.77:8080/job/charm_unit_test/124/

Revision history for this message
uosci-testing-bot (uosci-testing-bot) wrote :

charm_amulet_test #70 quantum-gateway-next for niedbalski mp242926
    AMULET FAIL: amulet-test failed

AMULET Results (max last 2 lines):
  ERROR subprocess encountered error code 1
  make: *** [test] Error 1

Full amulet test output: http://paste.ubuntu.com/9491242/
Build: http://10.245.162.77:8080/job/charm_amulet_test/70/

Unmerged revisions

79. By Jorge Niedbalski

[hooks] config_changed checks for "sysctl". fixes LP: #1366598

78. By Jorge Niedbalski

[all] make "sync"

77. By Jorge Niedbalski

[hooks] config_changed checks for "sysctl". fixes LP: #1366598

Preview Diff

1=== modified file 'config.yaml'
2--- config.yaml 2014-11-24 09:34:05 +0000
3+++ config.yaml 2014-11-26 13:43:45 +0000
4@@ -92,7 +92,7 @@
5 accomodate the packet overhead of using GRE tunnels.
6 enable-l3-agent:
7 type: boolean
8- default: True
9+ default: True
10 description: |
11 Optional configuration to support use of linux router
12 Note that this is used only for Cisco n1kv plugin.
13@@ -115,3 +115,9 @@
14 .
15 This network will be used for tenant network traffic in overlay
16 networks.
17+ sysctl:
18+ type: string
19+ default:
20+ description: |
21+ YAML formatted associative array of sysctl values, e.g.:
22+ '{ kernel.pid_max : 4194303 }'
23
24=== modified file 'hooks/charmhelpers/contrib/hahelpers/cluster.py'
25--- hooks/charmhelpers/contrib/hahelpers/cluster.py 2014-10-07 21:03:47 +0000
26+++ hooks/charmhelpers/contrib/hahelpers/cluster.py 2014-11-26 13:43:45 +0000
27@@ -13,9 +13,10 @@
28
29 import subprocess
30 import os
31-
32 from socket import gethostname as get_unit_hostname
33
34+import six
35+
36 from charmhelpers.core.hookenv import (
37 log,
38 relation_ids,
39@@ -77,7 +78,7 @@
40 "show", resource
41 ]
42 try:
43- status = subprocess.check_output(cmd)
44+ status = subprocess.check_output(cmd).decode('UTF-8')
45 except subprocess.CalledProcessError:
46 return False
47 else:
48@@ -150,34 +151,42 @@
49 return False
50
51
52-def determine_api_port(public_port):
53+def determine_api_port(public_port, singlenode_mode=False):
54 '''
55 Determine correct API server listening port based on
56 existence of HTTPS reverse proxy and/or haproxy.
57
58 public_port: int: standard public port for given service
59
60+ singlenode_mode: boolean: Shuffle ports when only a single unit is present
61+
62 returns: int: the correct listening port for the API service
63 '''
64 i = 0
65- if len(peer_units()) > 0 or is_clustered():
66+ if singlenode_mode:
67+ i += 1
68+ elif len(peer_units()) > 0 or is_clustered():
69 i += 1
70 if https():
71 i += 1
72 return public_port - (i * 10)
73
74
75-def determine_apache_port(public_port):
76+def determine_apache_port(public_port, singlenode_mode=False):
77 '''
78 Description: Determine correct apache listening port based on public IP +
79 state of the cluster.
80
81 public_port: int: standard public port for given service
82
83+ singlenode_mode: boolean: Shuffle ports when only a single unit is present
84+
85 returns: int: the correct listening port for the HAProxy service
86 '''
87 i = 0
88- if len(peer_units()) > 0 or is_clustered():
89+ if singlenode_mode:
90+ i += 1
91+ elif len(peer_units()) > 0 or is_clustered():
92 i += 1
93 return public_port - (i * 10)
94
95@@ -197,7 +206,7 @@
96 for setting in settings:
97 conf[setting] = config_get(setting)
98 missing = []
99- [missing.append(s) for s, v in conf.iteritems() if v is None]
100+ [missing.append(s) for s, v in six.iteritems(conf) if v is None]
101 if missing:
102 log('Insufficient config data to configure hacluster.', level=ERROR)
103 raise HAIncompleteConfig
104
105=== modified file 'hooks/charmhelpers/contrib/network/ip.py'
106--- hooks/charmhelpers/contrib/network/ip.py 2014-10-16 17:42:14 +0000
107+++ hooks/charmhelpers/contrib/network/ip.py 2014-11-26 13:43:45 +0000
108@@ -1,15 +1,12 @@
109 import glob
110 import re
111 import subprocess
112-import sys
113
114 from functools import partial
115
116 from charmhelpers.core.hookenv import unit_get
117 from charmhelpers.fetch import apt_install
118 from charmhelpers.core.hookenv import (
119- WARNING,
120- ERROR,
121 log
122 )
123
124@@ -34,31 +31,28 @@
125 network)
126
127
128+def no_ip_found_error_out(network):
129+ errmsg = ("No IP address found in network: %s" % network)
130+ raise ValueError(errmsg)
131+
132+
133 def get_address_in_network(network, fallback=None, fatal=False):
134- """
135- Get an IPv4 or IPv6 address within the network from the host.
136+ """Get an IPv4 or IPv6 address within the network from the host.
137
138 :param network (str): CIDR presentation format. For example,
139 '192.168.1.0/24'.
140 :param fallback (str): If no address is found, return fallback.
141 :param fatal (boolean): If no address is found, fallback is not
142 set and fatal is True then exit(1).
143-
144 """
145-
146- def not_found_error_out():
147- log("No IP address found in network: %s" % network,
148- level=ERROR)
149- sys.exit(1)
150-
151 if network is None:
152 if fallback is not None:
153 return fallback
154+
155+ if fatal:
156+ no_ip_found_error_out(network)
157 else:
158- if fatal:
159- not_found_error_out()
160- else:
161- return None
162+ return None
163
164 _validate_cidr(network)
165 network = netaddr.IPNetwork(network)
166@@ -70,6 +64,7 @@
167 cidr = netaddr.IPNetwork("%s/%s" % (addr, netmask))
168 if cidr in network:
169 return str(cidr.ip)
170+
171 if network.version == 6 and netifaces.AF_INET6 in addresses:
172 for addr in addresses[netifaces.AF_INET6]:
173 if not addr['addr'].startswith('fe80'):
174@@ -82,20 +77,20 @@
175 return fallback
176
177 if fatal:
178- not_found_error_out()
179+ no_ip_found_error_out(network)
180
181 return None
182
183
184 def is_ipv6(address):
185- '''Determine whether provided address is IPv6 or not'''
186+ """Determine whether provided address is IPv6 or not."""
187 try:
188 address = netaddr.IPAddress(address)
189 except netaddr.AddrFormatError:
190 # probably a hostname - so not an address at all!
191 return False
192- else:
193- return address.version == 6
194+
195+ return address.version == 6
196
197
198 def is_address_in_network(network, address):
199@@ -113,11 +108,13 @@
200 except (netaddr.core.AddrFormatError, ValueError):
201 raise ValueError("Network (%s) is not in CIDR presentation format" %
202 network)
203+
204 try:
205 address = netaddr.IPAddress(address)
206 except (netaddr.core.AddrFormatError, ValueError):
207 raise ValueError("Address (%s) is not in correct presentation format" %
208 address)
209+
210 if address in network:
211 return True
212 else:
213@@ -147,6 +144,7 @@
214 return iface
215 else:
216 return addresses[netifaces.AF_INET][0][key]
217+
218 if address.version == 6 and netifaces.AF_INET6 in addresses:
219 for addr in addresses[netifaces.AF_INET6]:
220 if not addr['addr'].startswith('fe80'):
221@@ -160,41 +158,42 @@
222 return str(cidr).split('/')[1]
223 else:
224 return addr[key]
225+
226 return None
227
228
229 get_iface_for_address = partial(_get_for_address, key='iface')
230
231+
232 get_netmask_for_address = partial(_get_for_address, key='netmask')
233
234
235 def format_ipv6_addr(address):
236- """
237- IPv6 needs to be wrapped with [] in url link to parse correctly.
238+ """If address is IPv6, wrap it in '[]' otherwise return None.
239+
240+ This is required by most configuration files when specifying IPv6
241+ addresses.
242 """
243 if is_ipv6(address):
244- address = "[%s]" % address
245- else:
246- log("Not a valid ipv6 address: %s" % address, level=WARNING)
247- address = None
248+ return "[%s]" % address
249
250- return address
251+ return None
252
253
254 def get_iface_addr(iface='eth0', inet_type='AF_INET', inc_aliases=False,
255 fatal=True, exc_list=None):
256- """
257- Return the assigned IP address for a given interface, if any, or [].
258- """
259+ """Return the assigned IP address for a given interface, if any."""
260 # Extract nic if passed /dev/ethX
261 if '/' in iface:
262 iface = iface.split('/')[-1]
263+
264 if not exc_list:
265 exc_list = []
266+
267 try:
268 inet_num = getattr(netifaces, inet_type)
269 except AttributeError:
270- raise Exception('Unknown inet type ' + str(inet_type))
271+ raise Exception("Unknown inet type '%s'" % str(inet_type))
272
273 interfaces = netifaces.interfaces()
274 if inc_aliases:
275@@ -202,15 +201,18 @@
276 for _iface in interfaces:
277 if iface == _iface or _iface.split(':')[0] == iface:
278 ifaces.append(_iface)
279+
280 if fatal and not ifaces:
281 raise Exception("Invalid interface '%s'" % iface)
282+
283 ifaces.sort()
284 else:
285 if iface not in interfaces:
286 if fatal:
287- raise Exception("%s not found " % (iface))
288+ raise Exception("Interface '%s' not found " % (iface))
289 else:
290 return []
291+
292 else:
293 ifaces = [iface]
294
295@@ -221,10 +223,13 @@
296 for entry in net_info[inet_num]:
297 if 'addr' in entry and entry['addr'] not in exc_list:
298 addresses.append(entry['addr'])
299+
300 if fatal and not addresses:
301 raise Exception("Interface '%s' doesn't have any %s addresses." %
302 (iface, inet_type))
303- return addresses
304+
305+ return sorted(addresses)
306+
307
308 get_ipv4_addr = partial(get_iface_addr, inet_type='AF_INET')
309
310@@ -241,6 +246,7 @@
311 raw = re.match(ll_key, _addr)
312 if raw:
313 _addr = raw.group(1)
314+
315 if _addr == addr:
316 log("Address '%s' is configured on iface '%s'" %
317 (addr, iface))
318@@ -251,8 +257,9 @@
319
320
321 def sniff_iface(f):
322- """If no iface provided, inject net iface inferred from unit private
323- address.
324+ """Ensure decorated function is called with a value for iface.
325+
326+ If no iface provided, inject net iface inferred from unit private address.
327 """
328 def iface_sniffer(*args, **kwargs):
329 if not kwargs.get('iface', None):
330@@ -295,7 +302,7 @@
331 if global_addrs:
332 # Make sure any found global addresses are not temporary
333 cmd = ['ip', 'addr', 'show', iface]
334- out = subprocess.check_output(cmd)
335+ out = subprocess.check_output(cmd).decode('UTF-8')
336 if dynamic_only:
337 key = re.compile("inet6 (.+)/[0-9]+ scope global dynamic.*")
338 else:
339@@ -317,33 +324,28 @@
340 return addrs
341
342 if fatal:
343- raise Exception("Interface '%s' doesn't have a scope global "
344+ raise Exception("Interface '%s' does not have a scope global "
345 "non-temporary ipv6 address." % iface)
346
347 return []
348
349
350 def get_bridges(vnic_dir='/sys/devices/virtual/net'):
351- """
352- Return a list of bridges on the system or []
353- """
354- b_rgex = vnic_dir + '/*/bridge'
355- return [x.replace(vnic_dir, '').split('/')[1] for x in glob.glob(b_rgex)]
356+ """Return a list of bridges on the system."""
357+ b_regex = "%s/*/bridge" % vnic_dir
358+ return [x.replace(vnic_dir, '').split('/')[1] for x in glob.glob(b_regex)]
359
360
361 def get_bridge_nics(bridge, vnic_dir='/sys/devices/virtual/net'):
362- """
363- Return a list of nics comprising a given bridge on the system or []
364- """
365- brif_rgex = "%s/%s/brif/*" % (vnic_dir, bridge)
366- return [x.split('/')[-1] for x in glob.glob(brif_rgex)]
367+ """Return a list of nics comprising a given bridge on the system."""
368+ brif_regex = "%s/%s/brif/*" % (vnic_dir, bridge)
369+ return [x.split('/')[-1] for x in glob.glob(brif_regex)]
370
371
372 def is_bridge_member(nic):
373- """
374- Check if a given nic is a member of a bridge
375- """
376+ """Check if a given nic is a member of a bridge."""
377 for bridge in get_bridges():
378 if nic in get_bridge_nics(bridge):
379 return True
380+
381 return False
382
383=== modified file 'hooks/charmhelpers/contrib/openstack/amulet/deployment.py'
384--- hooks/charmhelpers/contrib/openstack/amulet/deployment.py 2014-10-07 21:03:47 +0000
385+++ hooks/charmhelpers/contrib/openstack/amulet/deployment.py 2014-11-26 13:43:45 +0000
386@@ -1,3 +1,4 @@
387+import six
388 from charmhelpers.contrib.amulet.deployment import (
389 AmuletDeployment
390 )
391@@ -69,7 +70,7 @@
392
393 def _configure_services(self, configs):
394 """Configure all of the services."""
395- for service, config in configs.iteritems():
396+ for service, config in six.iteritems(configs):
397 self.d.configure(service, config)
398
399 def _get_openstack_release(self):
400
401=== modified file 'hooks/charmhelpers/contrib/openstack/amulet/utils.py'
402--- hooks/charmhelpers/contrib/openstack/amulet/utils.py 2014-09-25 15:37:05 +0000
403+++ hooks/charmhelpers/contrib/openstack/amulet/utils.py 2014-11-26 13:43:45 +0000
404@@ -7,6 +7,8 @@
405 import keystoneclient.v2_0 as keystone_client
406 import novaclient.v1_1.client as nova_client
407
408+import six
409+
410 from charmhelpers.contrib.amulet.utils import (
411 AmuletUtils
412 )
413@@ -60,7 +62,7 @@
414 expected service catalog endpoints.
415 """
416 self.log.debug('actual: {}'.format(repr(actual)))
417- for k, v in expected.iteritems():
418+ for k, v in six.iteritems(expected):
419 if k in actual:
420 ret = self._validate_dict_data(expected[k][0], actual[k][0])
421 if ret:
422
423=== modified file 'hooks/charmhelpers/contrib/openstack/context.py'
424--- hooks/charmhelpers/contrib/openstack/context.py 2014-10-07 21:03:47 +0000
425+++ hooks/charmhelpers/contrib/openstack/context.py 2014-11-26 13:43:45 +0000
426@@ -1,20 +1,18 @@
427 import json
428 import os
429 import time
430-
431 from base64 import b64decode
432+from subprocess import check_call
433
434-from subprocess import (
435- check_call
436-)
437+import six
438
439 from charmhelpers.fetch import (
440 apt_install,
441 filter_installed_packages,
442 )
443-
444 from charmhelpers.core.hookenv import (
445 config,
446+ is_relation_made,
447 local_unit,
448 log,
449 relation_get,
450@@ -23,43 +21,40 @@
451 relation_set,
452 unit_get,
453 unit_private_ip,
454+ DEBUG,
455+ INFO,
456+ WARNING,
457 ERROR,
458- INFO
459 )
460-
461 from charmhelpers.core.host import (
462 mkdir,
463- write_file
464+ write_file,
465 )
466-
467 from charmhelpers.contrib.hahelpers.cluster import (
468 determine_apache_port,
469 determine_api_port,
470 https,
471- is_clustered
472+ is_clustered,
473 )
474-
475 from charmhelpers.contrib.hahelpers.apache import (
476 get_cert,
477 get_ca_cert,
478 install_ca_cert,
479 )
480-
481 from charmhelpers.contrib.openstack.neutron import (
482 neutron_plugin_attribute,
483 )
484-
485 from charmhelpers.contrib.network.ip import (
486 get_address_in_network,
487 get_ipv6_addr,
488 get_netmask_for_address,
489 format_ipv6_addr,
490- is_address_in_network
491+ is_address_in_network,
492 )
493-
494 from charmhelpers.contrib.openstack.utils import get_host_ip
495
496 CA_CERT_PATH = '/usr/local/share/ca-certificates/keystone_juju_ca_cert.crt'
497+ADDRESS_TYPES = ['admin', 'internal', 'public']
498
499
500 class OSContextError(Exception):
501@@ -67,7 +62,7 @@
502
503
504 def ensure_packages(packages):
505- '''Install but do not upgrade required plugin packages'''
506+ """Install but do not upgrade required plugin packages."""
507 required = filter_installed_packages(packages)
508 if required:
509 apt_install(required, fatal=True)
510@@ -75,20 +70,27 @@
511
512 def context_complete(ctxt):
513 _missing = []
514- for k, v in ctxt.iteritems():
515+ for k, v in six.iteritems(ctxt):
516 if v is None or v == '':
517 _missing.append(k)
518+
519 if _missing:
520- log('Missing required data: %s' % ' '.join(_missing), level='INFO')
521+ log('Missing required data: %s' % ' '.join(_missing), level=INFO)
522 return False
523+
524 return True
525
526
527 def config_flags_parser(config_flags):
528+ """Parses config flags string into dict.
529+
530+ The provided config_flags string may be a list of comma-separated values
531+ which themselves may be comma-separated list of values.
532+ """
533 if config_flags.find('==') >= 0:
534- log("config_flags is not in expected format (key=value)",
535- level=ERROR)
536+ log("config_flags is not in expected format (key=value)", level=ERROR)
537 raise OSContextError
538+
539 # strip the following from each value.
540 post_strippers = ' ,'
541 # we strip any leading/trailing '=' or ' ' from the string then
542@@ -96,7 +98,7 @@
543 split = config_flags.strip(' =').split('=')
544 limit = len(split)
545 flags = {}
546- for i in xrange(0, limit - 1):
547+ for i in range(0, limit - 1):
548 current = split[i]
549 next = split[i + 1]
550 vindex = next.rfind(',')
551@@ -111,17 +113,18 @@
552 # if this not the first entry, expect an embedded key.
553 index = current.rfind(',')
554 if index < 0:
555- log("invalid config value(s) at index %s" % (i),
556- level=ERROR)
557+ log("Invalid config value(s) at index %s" % (i), level=ERROR)
558 raise OSContextError
559 key = current[index + 1:]
560
561 # Add to collection.
562 flags[key.strip(post_strippers)] = value.rstrip(post_strippers)
563+
564 return flags
565
566
567 class OSContextGenerator(object):
568+ """Base class for all context generators."""
569 interfaces = []
570
571 def __call__(self):
572@@ -133,11 +136,11 @@
573
574 def __init__(self,
575 database=None, user=None, relation_prefix=None, ssl_dir=None):
576- '''
577- Allows inspecting relation for settings prefixed with relation_prefix.
578- This is useful for parsing access for multiple databases returned via
579- the shared-db interface (eg, nova_password, quantum_password)
580- '''
581+ """Allows inspecting relation for settings prefixed with
582+ relation_prefix. This is useful for parsing access for multiple
583+ databases returned via the shared-db interface (eg, nova_password,
584+ quantum_password)
585+ """
586 self.relation_prefix = relation_prefix
587 self.database = database
588 self.user = user
589@@ -147,9 +150,8 @@
590 self.database = self.database or config('database')
591 self.user = self.user or config('database-user')
592 if None in [self.database, self.user]:
593- log('Could not generate shared_db context. '
594- 'Missing required charm config options. '
595- '(database name and user)')
596+ log("Could not generate shared_db context. Missing required charm "
597+ "config options. (database name and user)", level=ERROR)
598 raise OSContextError
599
600 ctxt = {}
601@@ -202,23 +204,24 @@
602 def __call__(self):
603 self.database = self.database or config('database')
604 if self.database is None:
605- log('Could not generate postgresql_db context. '
606- 'Missing required charm config options. '
607- '(database name)')
608+ log('Could not generate postgresql_db context. Missing required '
609+ 'charm config options. (database name)', level=ERROR)
610 raise OSContextError
611+
612 ctxt = {}
613-
614 for rid in relation_ids(self.interfaces[0]):
615 for unit in related_units(rid):
616- ctxt = {
617- 'database_host': relation_get('host', rid=rid, unit=unit),
618- 'database': self.database,
619- 'database_user': relation_get('user', rid=rid, unit=unit),
620- 'database_password': relation_get('password', rid=rid, unit=unit),
621- 'database_type': 'postgresql',
622- }
623+ rel_host = relation_get('host', rid=rid, unit=unit)
624+ rel_user = relation_get('user', rid=rid, unit=unit)
625+ rel_passwd = relation_get('password', rid=rid, unit=unit)
626+ ctxt = {'database_host': rel_host,
627+ 'database': self.database,
628+ 'database_user': rel_user,
629+ 'database_password': rel_passwd,
630+ 'database_type': 'postgresql'}
631 if context_complete(ctxt):
632 return ctxt
633+
634 return {}
635
636
637@@ -227,23 +230,29 @@
638 ca_path = os.path.join(ssl_dir, 'db-client.ca')
639 with open(ca_path, 'w') as fh:
640 fh.write(b64decode(rdata['ssl_ca']))
641+
642 ctxt['database_ssl_ca'] = ca_path
643 elif 'ssl_ca' in rdata:
644- log("Charm not setup for ssl support but ssl ca found")
645+ log("Charm not setup for ssl support but ssl ca found", level=INFO)
646 return ctxt
647+
648 if 'ssl_cert' in rdata:
649 cert_path = os.path.join(
650 ssl_dir, 'db-client.cert')
651 if not os.path.exists(cert_path):
652- log("Waiting 1m for ssl client cert validity")
653+ log("Waiting 1m for ssl client cert validity", level=INFO)
654 time.sleep(60)
655+
656 with open(cert_path, 'w') as fh:
657 fh.write(b64decode(rdata['ssl_cert']))
658+
659 ctxt['database_ssl_cert'] = cert_path
660 key_path = os.path.join(ssl_dir, 'db-client.key')
661 with open(key_path, 'w') as fh:
662 fh.write(b64decode(rdata['ssl_key']))
663+
664 ctxt['database_ssl_key'] = key_path
665+
666 return ctxt
667
668
669@@ -251,9 +260,8 @@
670 interfaces = ['identity-service']
671
672 def __call__(self):
673- log('Generating template context for identity-service')
674+ log('Generating template context for identity-service', level=DEBUG)
675 ctxt = {}
676-
677 for rid in relation_ids('identity-service'):
678 for unit in related_units(rid):
679 rdata = relation_get(rid=rid, unit=unit)
680@@ -261,26 +269,24 @@
681 serv_host = format_ipv6_addr(serv_host) or serv_host
682 auth_host = rdata.get('auth_host')
683 auth_host = format_ipv6_addr(auth_host) or auth_host
684-
685- ctxt = {
686- 'service_port': rdata.get('service_port'),
687- 'service_host': serv_host,
688- 'auth_host': auth_host,
689- 'auth_port': rdata.get('auth_port'),
690- 'admin_tenant_name': rdata.get('service_tenant'),
691- 'admin_user': rdata.get('service_username'),
692- 'admin_password': rdata.get('service_password'),
693- 'service_protocol':
694- rdata.get('service_protocol') or 'http',
695- 'auth_protocol':
696- rdata.get('auth_protocol') or 'http',
697- }
698+ svc_protocol = rdata.get('service_protocol') or 'http'
699+ auth_protocol = rdata.get('auth_protocol') or 'http'
700+ ctxt = {'service_port': rdata.get('service_port'),
701+ 'service_host': serv_host,
702+ 'auth_host': auth_host,
703+ 'auth_port': rdata.get('auth_port'),
704+ 'admin_tenant_name': rdata.get('service_tenant'),
705+ 'admin_user': rdata.get('service_username'),
706+ 'admin_password': rdata.get('service_password'),
707+ 'service_protocol': svc_protocol,
708+ 'auth_protocol': auth_protocol}
709 if context_complete(ctxt):
710 # NOTE(jamespage) this is required for >= icehouse
711 # so a missing value just indicates keystone needs
712 # upgrading
713 ctxt['admin_tenant_id'] = rdata.get('service_tenant_id')
714 return ctxt
715+
716 return {}
717
718
719@@ -293,21 +299,23 @@
720 self.interfaces = [rel_name]
721
722 def __call__(self):
723- log('Generating template context for amqp')
724+ log('Generating template context for amqp', level=DEBUG)
725 conf = config()
726- user_setting = 'rabbit-user'
727- vhost_setting = 'rabbit-vhost'
728 if self.relation_prefix:
729- user_setting = self.relation_prefix + '-rabbit-user'
730- vhost_setting = self.relation_prefix + '-rabbit-vhost'
731+ user_setting = '%s-rabbit-user' % (self.relation_prefix)
732+ vhost_setting = '%s-rabbit-vhost' % (self.relation_prefix)
733+ else:
734+ user_setting = 'rabbit-user'
735+ vhost_setting = 'rabbit-vhost'
736
737 try:
738 username = conf[user_setting]
739 vhost = conf[vhost_setting]
740 except KeyError as e:
741- log('Could not generate shared_db context. '
742- 'Missing required charm config options: %s.' % e)
743+ log('Could not generate shared_db context. Missing required charm '
744+ 'config options: %s.' % e, level=ERROR)
745 raise OSContextError
746+
747 ctxt = {}
748 for rid in relation_ids(self.rel_name):
749 ha_vip_only = False
750@@ -321,6 +329,7 @@
751 host = relation_get('private-address', rid=rid, unit=unit)
752 host = format_ipv6_addr(host) or host
753 ctxt['rabbitmq_host'] = host
754+
755 ctxt.update({
756 'rabbitmq_user': username,
757 'rabbitmq_password': relation_get('password', rid=rid,
758@@ -331,6 +340,7 @@
759 ssl_port = relation_get('ssl_port', rid=rid, unit=unit)
760 if ssl_port:
761 ctxt['rabbit_ssl_port'] = ssl_port
762+
763 ssl_ca = relation_get('ssl_ca', rid=rid, unit=unit)
764 if ssl_ca:
765 ctxt['rabbit_ssl_ca'] = ssl_ca
766@@ -344,41 +354,45 @@
767 if context_complete(ctxt):
768 if 'rabbit_ssl_ca' in ctxt:
769 if not self.ssl_dir:
770- log(("Charm not setup for ssl support "
771- "but ssl ca found"))
772+ log("Charm not setup for ssl support but ssl ca "
773+ "found", level=INFO)
774 break
775+
776 ca_path = os.path.join(
777 self.ssl_dir, 'rabbit-client-ca.pem')
778 with open(ca_path, 'w') as fh:
779 fh.write(b64decode(ctxt['rabbit_ssl_ca']))
780 ctxt['rabbit_ssl_ca'] = ca_path
781+
782 # Sufficient information found = break out!
783 break
784+
785 # Used for active/active rabbitmq >= grizzly
786- if ('clustered' not in ctxt or ha_vip_only) \
787- and len(related_units(rid)) > 1:
788+ if (('clustered' not in ctxt or ha_vip_only) and
789+ len(related_units(rid)) > 1):
790 rabbitmq_hosts = []
791 for unit in related_units(rid):
792 host = relation_get('private-address', rid=rid, unit=unit)
793 host = format_ipv6_addr(host) or host
794 rabbitmq_hosts.append(host)
795- ctxt['rabbitmq_hosts'] = ','.join(rabbitmq_hosts)
796+
797+ ctxt['rabbitmq_hosts'] = ','.join(sorted(rabbitmq_hosts))
798+
799 if not context_complete(ctxt):
800 return {}
801- else:
802- return ctxt
803+
804+ return ctxt
805
806
807 class CephContext(OSContextGenerator):
808+ """Generates context for /etc/ceph/ceph.conf templates."""
809 interfaces = ['ceph']
810
811 def __call__(self):
812- '''This generates context for /etc/ceph/ceph.conf templates'''
813 if not relation_ids('ceph'):
814 return {}
815
816- log('Generating template context for ceph')
817-
818+ log('Generating template context for ceph', level=DEBUG)
819 mon_hosts = []
820 auth = None
821 key = None
822@@ -387,18 +401,18 @@
823 for unit in related_units(rid):
824 auth = relation_get('auth', rid=rid, unit=unit)
825 key = relation_get('key', rid=rid, unit=unit)
826- ceph_addr = \
827- relation_get('ceph-public-address', rid=rid, unit=unit) or \
828- relation_get('private-address', rid=rid, unit=unit)
829+ ceph_pub_addr = relation_get('ceph-public-address', rid=rid,
830+ unit=unit)
831+ unit_priv_addr = relation_get('private-address', rid=rid,
832+ unit=unit)
833+ ceph_addr = ceph_pub_addr or unit_priv_addr
834 ceph_addr = format_ipv6_addr(ceph_addr) or ceph_addr
835 mon_hosts.append(ceph_addr)
836
837- ctxt = {
838- 'mon_hosts': ' '.join(mon_hosts),
839- 'auth': auth,
840- 'key': key,
841- 'use_syslog': use_syslog
842- }
843+ ctxt = {'mon_hosts': ' '.join(sorted(mon_hosts)),
844+ 'auth': auth,
845+ 'key': key,
846+ 'use_syslog': use_syslog}
847
848 if not os.path.isdir('/etc/ceph'):
849 os.mkdir('/etc/ceph')
850@@ -407,79 +421,68 @@
851 return {}
852
853 ensure_packages(['ceph-common'])
854-
855 return ctxt
856
857
858-ADDRESS_TYPES = ['admin', 'internal', 'public']
859-
860-
861 class HAProxyContext(OSContextGenerator):
862+ """Provides half a context for the haproxy template, which describes
863+ all peers to be included in the cluster. Each charm needs to include
864+ its own context generator that describes the port mapping.
865+ """
866 interfaces = ['cluster']
867
868+ def __init__(self, singlenode_mode=False):
869+ self.singlenode_mode = singlenode_mode
870+
871 def __call__(self):
872- '''
873- Builds half a context for the haproxy template, which describes
874- all peers to be included in the cluster. Each charm needs to include
875- its own context generator that describes the port mapping.
876- '''
877- if not relation_ids('cluster'):
878+ if not relation_ids('cluster') and not self.singlenode_mode:
879 return {}
880
881- l_unit = local_unit().replace('/', '-')
882-
883 if config('prefer-ipv6'):
884 addr = get_ipv6_addr(exc_list=[config('vip')])[0]
885 else:
886 addr = get_host_ip(unit_get('private-address'))
887
888+ l_unit = local_unit().replace('/', '-')
889 cluster_hosts = {}
890
891 # NOTE(jamespage): build out map of configured network endpoints
892 # and associated backends
893 for addr_type in ADDRESS_TYPES:
894- laddr = get_address_in_network(
895- config('os-{}-network'.format(addr_type)))
896+ cfg_opt = 'os-{}-network'.format(addr_type)
897+ laddr = get_address_in_network(config(cfg_opt))
898 if laddr:
899- cluster_hosts[laddr] = {}
900- cluster_hosts[laddr]['network'] = "{}/{}".format(
901- laddr,
902- get_netmask_for_address(laddr)
903- )
904- cluster_hosts[laddr]['backends'] = {}
905- cluster_hosts[laddr]['backends'][l_unit] = laddr
906+ netmask = get_netmask_for_address(laddr)
907+ cluster_hosts[laddr] = {'network': "{}/{}".format(laddr,
908+ netmask),
909+ 'backends': {l_unit: laddr}}
910 for rid in relation_ids('cluster'):
911 for unit in related_units(rid):
912- _unit = unit.replace('/', '-')
913 _laddr = relation_get('{}-address'.format(addr_type),
914 rid=rid, unit=unit)
915 if _laddr:
916+ _unit = unit.replace('/', '-')
917 cluster_hosts[laddr]['backends'][_unit] = _laddr
918
919 # NOTE(jamespage) no split configurations found, just use
920 # private addresses
921 if not cluster_hosts:
922- cluster_hosts[addr] = {}
923- cluster_hosts[addr]['network'] = "{}/{}".format(
924- addr,
925- get_netmask_for_address(addr)
926- )
927- cluster_hosts[addr]['backends'] = {}
928- cluster_hosts[addr]['backends'][l_unit] = addr
929+ netmask = get_netmask_for_address(addr)
930+ cluster_hosts[addr] = {'network': "{}/{}".format(addr, netmask),
931+ 'backends': {l_unit: addr}}
932 for rid in relation_ids('cluster'):
933 for unit in related_units(rid):
934- _unit = unit.replace('/', '-')
935 _laddr = relation_get('private-address',
936 rid=rid, unit=unit)
937 if _laddr:
938+ _unit = unit.replace('/', '-')
939 cluster_hosts[addr]['backends'][_unit] = _laddr
940
941- ctxt = {
942- 'frontends': cluster_hosts,
943- }
944+ ctxt = {'frontends': cluster_hosts}
945
946 if config('haproxy-server-timeout'):
947 ctxt['haproxy_server_timeout'] = config('haproxy-server-timeout')
948+
949 if config('haproxy-client-timeout'):
950 ctxt['haproxy_client_timeout'] = config('haproxy-client-timeout')
951
952@@ -493,13 +496,18 @@
953 ctxt['stat_port'] = ':8888'
954
955 for frontend in cluster_hosts:
956- if len(cluster_hosts[frontend]['backends']) > 1:
957+ if (len(cluster_hosts[frontend]['backends']) > 1 or
958+ self.singlenode_mode):
959 # Enable haproxy when we have enough peers.
960- log('Ensuring haproxy enabled in /etc/default/haproxy.')
961+ log('Ensuring haproxy enabled in /etc/default/haproxy.',
962+ level=DEBUG)
963 with open('/etc/default/haproxy', 'w') as out:
964 out.write('ENABLED=1\n')
965+
966 return ctxt
967- log('HAProxy context is incomplete, this unit has no peers.')
968+
969+ log('HAProxy context is incomplete, this unit has no peers.',
970+ level=INFO)
971 return {}
972
973
974@@ -507,29 +515,28 @@
975 interfaces = ['image-service']
976
977 def __call__(self):
978- '''
979- Obtains the glance API server from the image-service relation. Useful
980- in nova and cinder (currently).
981- '''
982- log('Generating template context for image-service.')
983+ """Obtains the glance API server from the image-service relation.
984+ Useful in nova and cinder (currently).
985+ """
986+ log('Generating template context for image-service.', level=DEBUG)
987 rids = relation_ids('image-service')
988 if not rids:
989 return {}
990+
991 for rid in rids:
992 for unit in related_units(rid):
993 api_server = relation_get('glance-api-server',
994 rid=rid, unit=unit)
995 if api_server:
996 return {'glance_api_servers': api_server}
997- log('ImageService context is incomplete. '
998- 'Missing required relation data.')
999+
1000+ log("ImageService context is incomplete. Missing required relation "
1001+ "data.", level=INFO)
1002 return {}
1003
1004
1005 class ApacheSSLContext(OSContextGenerator):
1006-
1007- """
1008- Generates a context for an apache vhost configuration that configures
1009+ """Generates a context for an apache vhost configuration that configures
1010 HTTPS reverse proxying for one or many endpoints. Generated context
1011 looks something like::
1012
1013@@ -563,6 +570,7 @@
1014 else:
1015 cert_filename = 'cert'
1016 key_filename = 'key'
1017+
1018 write_file(path=os.path.join(ssl_dir, cert_filename),
1019 content=b64decode(cert))
1020 write_file(path=os.path.join(ssl_dir, key_filename),
1021@@ -574,7 +582,8 @@
1022 install_ca_cert(b64decode(ca_cert))
1023
1024 def canonical_names(self):
1025- '''Figure out which canonical names clients will access this service'''
1026+ """Figure out which canonical names clients will access this service.
1027+ """
1028 cns = []
1029 for r_id in relation_ids('identity-service'):
1030 for unit in related_units(r_id):
1031@@ -582,55 +591,80 @@
1032 for k in rdata:
1033 if k.startswith('ssl_key_'):
1034 cns.append(k.lstrip('ssl_key_'))
1035- return list(set(cns))
1036+
1037+ return sorted(list(set(cns)))
1038+
1039+ def get_network_addresses(self):
1040+ """For each network configured, return corresponding address and vip
1041+ (if available).
1042+
1043+ Returns a list of tuples of the form:
1044+
1045+ [(address_in_net_a, vip_in_net_a),
1046+ (address_in_net_b, vip_in_net_b),
1047+ ...]
1048+
1049+ or, if no vip(s) available:
1050+
1051+ [(address_in_net_a, address_in_net_a),
1052+ (address_in_net_b, address_in_net_b),
1053+ ...]
1054+ """
1055+ addresses = []
1056+ if config('vip'):
1057+ vips = config('vip').split()
1058+ else:
1059+ vips = []
1060+
1061+ for net_type in ['os-internal-network', 'os-admin-network',
1062+ 'os-public-network']:
1063+ addr = get_address_in_network(config(net_type),
1064+ unit_get('private-address'))
1065+ if len(vips) > 1 and is_clustered():
1066+ if not config(net_type):
1067+ log("Multiple networks configured but net_type "
1068+ "is None (%s)." % net_type, level=WARNING)
1069+ continue
1070+
1071+ for vip in vips:
1072+ if is_address_in_network(config(net_type), vip):
1073+ addresses.append((addr, vip))
1074+ break
1075+
1076+ elif is_clustered() and config('vip'):
1077+ addresses.append((addr, config('vip')))
1078+ else:
1079+ addresses.append((addr, addr))
1080+
1081+ return sorted(addresses)
1082
1083 def __call__(self):
1084- if isinstance(self.external_ports, basestring):
1085+ if isinstance(self.external_ports, six.string_types):
1086 self.external_ports = [self.external_ports]
1087- if (not self.external_ports or not https()):
1088+
1089+ if not self.external_ports or not https():
1090 return {}
1091
1092 self.configure_ca()
1093 self.enable_modules()
1094
1095- ctxt = {
1096- 'namespace': self.service_namespace,
1097- 'endpoints': [],
1098- 'ext_ports': []
1099- }
1100+ ctxt = {'namespace': self.service_namespace,
1101+ 'endpoints': [],
1102+ 'ext_ports': []}
1103
1104 for cn in self.canonical_names():
1105 self.configure_cert(cn)
1106
1107- addresses = []
1108- vips = []
1109- if config('vip'):
1110- vips = config('vip').split()
1111-
1112- for network_type in ['os-internal-network',
1113- 'os-admin-network',
1114- 'os-public-network']:
1115- address = get_address_in_network(config(network_type),
1116- unit_get('private-address'))
1117- if len(vips) > 0 and is_clustered():
1118- for vip in vips:
1119- if is_address_in_network(config(network_type),
1120- vip):
1121- addresses.append((address, vip))
1122- break
1123- elif is_clustered():
1124- addresses.append((address, config('vip')))
1125- else:
1126- addresses.append((address, address))
1127-
1128- for address, endpoint in set(addresses):
1129+ addresses = self.get_network_addresses()
1130+ for address, endpoint in sorted(set(addresses)):
1131 for api_port in self.external_ports:
1132 ext_port = determine_apache_port(api_port)
1133 int_port = determine_api_port(api_port)
1134 portmap = (address, endpoint, int(ext_port), int(int_port))
1135 ctxt['endpoints'].append(portmap)
1136 ctxt['ext_ports'].append(int(ext_port))
1137- ctxt['ext_ports'] = list(set(ctxt['ext_ports']))
1138+
1139+ ctxt['ext_ports'] = sorted(list(set(ctxt['ext_ports'])))
1140 return ctxt
1141
1142
1143@@ -647,21 +681,23 @@
1144
1145 @property
1146 def packages(self):
1147- return neutron_plugin_attribute(
1148- self.plugin, 'packages', self.network_manager)
1149+ return neutron_plugin_attribute(self.plugin, 'packages',
1150+ self.network_manager)
1151
1152 @property
1153 def neutron_security_groups(self):
1154 return None
1155
1156 def _ensure_packages(self):
1157- [ensure_packages(pkgs) for pkgs in self.packages]
1158+ for pkgs in self.packages:
1159+ ensure_packages(pkgs)
1160
1161 def _save_flag_file(self):
1162 if self.network_manager == 'quantum':
1163 _file = '/etc/nova/quantum_plugin.conf'
1164 else:
1165 _file = '/etc/nova/neutron_plugin.conf'
1166+
1167 with open(_file, 'wb') as out:
1168 out.write(self.plugin + '\n')
1169
1170@@ -670,13 +706,11 @@
1171 self.network_manager)
1172 config = neutron_plugin_attribute(self.plugin, 'config',
1173 self.network_manager)
1174- ovs_ctxt = {
1175- 'core_plugin': driver,
1176- 'neutron_plugin': 'ovs',
1177- 'neutron_security_groups': self.neutron_security_groups,
1178- 'local_ip': unit_private_ip(),
1179- 'config': config
1180- }
1181+ ovs_ctxt = {'core_plugin': driver,
1182+ 'neutron_plugin': 'ovs',
1183+ 'neutron_security_groups': self.neutron_security_groups,
1184+ 'local_ip': unit_private_ip(),
1185+ 'config': config}
1186
1187 return ovs_ctxt
1188
1189@@ -685,13 +719,11 @@
1190 self.network_manager)
1191 config = neutron_plugin_attribute(self.plugin, 'config',
1192 self.network_manager)
1193- nvp_ctxt = {
1194- 'core_plugin': driver,
1195- 'neutron_plugin': 'nvp',
1196- 'neutron_security_groups': self.neutron_security_groups,
1197- 'local_ip': unit_private_ip(),
1198- 'config': config
1199- }
1200+ nvp_ctxt = {'core_plugin': driver,
1201+ 'neutron_plugin': 'nvp',
1202+ 'neutron_security_groups': self.neutron_security_groups,
1203+ 'local_ip': unit_private_ip(),
1204+ 'config': config}
1205
1206 return nvp_ctxt
1207
1208@@ -700,35 +732,50 @@
1209 self.network_manager)
1210 n1kv_config = neutron_plugin_attribute(self.plugin, 'config',
1211 self.network_manager)
1212- n1kv_ctxt = {
1213- 'core_plugin': driver,
1214- 'neutron_plugin': 'n1kv',
1215- 'neutron_security_groups': self.neutron_security_groups,
1216- 'local_ip': unit_private_ip(),
1217- 'config': n1kv_config,
1218- 'vsm_ip': config('n1kv-vsm-ip'),
1219- 'vsm_username': config('n1kv-vsm-username'),
1220- 'vsm_password': config('n1kv-vsm-password'),
1221- 'restrict_policy_profiles': config(
1222- 'n1kv_restrict_policy_profiles'),
1223- }
1224+ n1kv_user_config_flags = config('n1kv-config-flags')
1225+ restrict_policy_profiles = config('n1kv-restrict-policy-profiles')
1226+ n1kv_ctxt = {'core_plugin': driver,
1227+ 'neutron_plugin': 'n1kv',
1228+ 'neutron_security_groups': self.neutron_security_groups,
1229+ 'local_ip': unit_private_ip(),
1230+ 'config': n1kv_config,
1231+ 'vsm_ip': config('n1kv-vsm-ip'),
1232+ 'vsm_username': config('n1kv-vsm-username'),
1233+ 'vsm_password': config('n1kv-vsm-password'),
1234+ 'restrict_policy_profiles': restrict_policy_profiles}
1235+
1236+ if n1kv_user_config_flags:
1237+ flags = config_flags_parser(n1kv_user_config_flags)
1238+ n1kv_ctxt['user_config_flags'] = flags
1239
1240 return n1kv_ctxt
1241
1242+ def calico_ctxt(self):
1243+ driver = neutron_plugin_attribute(self.plugin, 'driver',
1244+ self.network_manager)
1245+ config = neutron_plugin_attribute(self.plugin, 'config',
1246+ self.network_manager)
1247+ calico_ctxt = {'core_plugin': driver,
1248+ 'neutron_plugin': 'Calico',
1249+ 'neutron_security_groups': self.neutron_security_groups,
1250+ 'local_ip': unit_private_ip(),
1251+ 'config': config}
1252+
1253+ return calico_ctxt
1254+
1255 def neutron_ctxt(self):
1256 if https():
1257 proto = 'https'
1258 else:
1259 proto = 'http'
1260+
1261 if is_clustered():
1262 host = config('vip')
1263 else:
1264 host = unit_get('private-address')
1265- url = '%s://%s:%s' % (proto, host, '9696')
1266- ctxt = {
1267- 'network_manager': self.network_manager,
1268- 'neutron_url': url,
1269- }
1270+
1271+ ctxt = {'network_manager': self.network_manager,
1272+ 'neutron_url': '%s://%s:%s' % (proto, host, '9696')}
1273 return ctxt
1274
1275 def __call__(self):
1276@@ -748,6 +795,8 @@
1277 ctxt.update(self.nvp_ctxt())
1278 elif self.plugin == 'n1kv':
1279 ctxt.update(self.n1kv_ctxt())
1280+ elif self.plugin == 'Calico':
1281+ ctxt.update(self.calico_ctxt())
1282
1283 alchemy_flags = config('neutron-alchemy-flags')
1284 if alchemy_flags:
1285@@ -759,23 +808,40 @@
1286
1287
1288 class OSConfigFlagContext(OSContextGenerator):
1289-
1290- """
1291- Responsible for adding user-defined config-flags in charm config to a
1292- template context.
1293+ """Provides support for user-defined config flags.
1294+
1295+ Users can define a comma-seperated list of key=value pairs
1296+ in the charm configuration and apply them at any point in
1297+ any file by using a template flag.
1298+
1299+ Sometimes users might want config flags inserted within a
1300+ specific section so this class allows users to specify the
1301+ template flag name, allowing for multiple template flags
1302+ (sections) within the same context.
1303
1304 NOTE: the value of config-flags may be a comma-separated list of
1305 key=value pairs and some Openstack config files support
1306 comma-separated lists as values.
1307 """
1308
1309+ def __init__(self, charm_flag='config-flags',
1310+ template_flag='user_config_flags'):
1311+ """
1312+ :param charm_flag: config flags in charm configuration.
1313+ :param template_flag: insert point for user-defined flags in template
1314+ file.
1315+ """
1316+ super(OSConfigFlagContext, self).__init__()
1317+ self._charm_flag = charm_flag
1318+ self._template_flag = template_flag
1319+
1320 def __call__(self):
1321- config_flags = config('config-flags')
1322+ config_flags = config(self._charm_flag)
1323 if not config_flags:
1324 return {}
1325
1326- flags = config_flags_parser(config_flags)
1327- return {'user_config_flags': flags}
1328+ return {self._template_flag:
1329+ config_flags_parser(config_flags)}
1330
1331
1332 class SubordinateConfigContext(OSContextGenerator):
1333@@ -819,7 +885,6 @@
1334 },
1335 }
1336 }
1337-
1338 """
1339
1340 def __init__(self, service, config_file, interface):
1341@@ -849,26 +914,28 @@
1342
1343 if self.service not in sub_config:
1344 log('Found subordinate_config on %s but it contained'
1345- 'nothing for %s service' % (rid, self.service))
1346+ 'nothing for %s service' % (rid, self.service),
1347+ level=INFO)
1348 continue
1349
1350 sub_config = sub_config[self.service]
1351 if self.config_file not in sub_config:
1352 log('Found subordinate_config on %s but it contained'
1353- 'nothing for %s' % (rid, self.config_file))
1354+ 'nothing for %s' % (rid, self.config_file),
1355+ level=INFO)
1356 continue
1357
1358 sub_config = sub_config[self.config_file]
1359- for k, v in sub_config.iteritems():
1360+ for k, v in six.iteritems(sub_config):
1361 if k == 'sections':
1362- for section, config_dict in v.iteritems():
1363- log("adding section '%s'" % (section))
1364+ for section, config_dict in six.iteritems(v):
1365+ log("adding section '%s'" % (section),
1366+ level=DEBUG)
1367 ctxt[k][section] = config_dict
1368 else:
1369 ctxt[k] = v
1370
1371- log("%d section(s) found" % (len(ctxt['sections'])), level=INFO)
1372-
1373+ log("%d section(s) found" % (len(ctxt['sections'])), level=DEBUG)
1374 return ctxt
1375
1376
1377@@ -880,15 +947,14 @@
1378 False if config('debug') is None else config('debug')
1379 ctxt['verbose'] = \
1380 False if config('verbose') is None else config('verbose')
1381+
1382 return ctxt
1383
1384
1385 class SyslogContext(OSContextGenerator):
1386
1387 def __call__(self):
1388- ctxt = {
1389- 'use_syslog': config('use-syslog')
1390- }
1391+ ctxt = {'use_syslog': config('use-syslog')}
1392 return ctxt
1393
1394
1395@@ -896,13 +962,9 @@
1396
1397 def __call__(self):
1398 if config('prefer-ipv6'):
1399- return {
1400- 'bind_host': '::'
1401- }
1402+ return {'bind_host': '::'}
1403 else:
1404- return {
1405- 'bind_host': '0.0.0.0'
1406- }
1407+ return {'bind_host': '0.0.0.0'}
1408
1409
1410 class WorkerConfigContext(OSContextGenerator):
1411@@ -914,11 +976,42 @@
1412 except ImportError:
1413 apt_install('python-psutil', fatal=True)
1414 from psutil import NUM_CPUS
1415+
1416 return NUM_CPUS
1417
1418 def __call__(self):
1419- multiplier = config('worker-multiplier') or 1
1420- ctxt = {
1421- "workers": self.num_cpus * multiplier
1422- }
1423+ multiplier = config('worker-multiplier') or 0
1424+ ctxt = {"workers": self.num_cpus * multiplier}
1425+ return ctxt
1426+
1427+
1428+class ZeroMQContext(OSContextGenerator):
1429+ interfaces = ['zeromq-configuration']
1430+
1431+ def __call__(self):
1432+ ctxt = {}
1433+ if is_relation_made('zeromq-configuration', 'host'):
1434+ for rid in relation_ids('zeromq-configuration'):
1435+ for unit in related_units(rid):
1436+ ctxt['zmq_nonce'] = relation_get('nonce', unit, rid)
1437+ ctxt['zmq_host'] = relation_get('host', unit, rid)
1438+
1439+ return ctxt
1440+
1441+
1442+class NotificationDriverContext(OSContextGenerator):
1443+
1444+ def __init__(self, zmq_relation='zeromq-configuration',
1445+ amqp_relation='amqp'):
1446+ """
1447+ :param zmq_relation: Name of Zeromq relation to check
1448+ """
1449+ self.zmq_relation = zmq_relation
1450+ self.amqp_relation = amqp_relation
1451+
1452+ def __call__(self):
1453+ ctxt = {'notifications': 'False'}
1454+ if is_relation_made(self.amqp_relation):
1455+ ctxt['notifications'] = "True"
1456+
1457 return ctxt
1458
1459=== modified file 'hooks/charmhelpers/contrib/openstack/ip.py'
1460--- hooks/charmhelpers/contrib/openstack/ip.py 2014-10-07 21:03:47 +0000
1461+++ hooks/charmhelpers/contrib/openstack/ip.py 2014-11-26 13:43:45 +0000
1462@@ -2,21 +2,19 @@
1463 config,
1464 unit_get,
1465 )
1466-
1467 from charmhelpers.contrib.network.ip import (
1468 get_address_in_network,
1469 is_address_in_network,
1470 is_ipv6,
1471 get_ipv6_addr,
1472 )
1473-
1474 from charmhelpers.contrib.hahelpers.cluster import is_clustered
1475
1476 PUBLIC = 'public'
1477 INTERNAL = 'int'
1478 ADMIN = 'admin'
1479
1480-_address_map = {
1481+ADDRESS_MAP = {
1482 PUBLIC: {
1483 'config': 'os-public-network',
1484 'fallback': 'public-address'
1485@@ -33,16 +31,14 @@
1486
1487
1488 def canonical_url(configs, endpoint_type=PUBLIC):
1489- '''
1490- Returns the correct HTTP URL to this host given the state of HTTPS
1491+ """Returns the correct HTTP URL to this host given the state of HTTPS
1492 configuration, hacluster and charm configuration.
1493
1494- :configs OSTemplateRenderer: A config tempating object to inspect for
1495- a complete https context.
1496- :endpoint_type str: The endpoint type to resolve.
1497-
1498- :returns str: Base URL for services on the current service unit.
1499- '''
1500+ :param configs: OSTemplateRenderer config templating object to inspect
1501+ for a complete https context.
1502+ :param endpoint_type: str endpoint type to resolve.
1503+ :param returns: str base URL for services on the current service unit.
1504+ """
1505 scheme = 'http'
1506 if 'https' in configs.complete_contexts():
1507 scheme = 'https'
1508@@ -53,27 +49,45 @@
1509
1510
1511 def resolve_address(endpoint_type=PUBLIC):
1512+ """Return unit address depending on net config.
1513+
1514+ If unit is clustered with vip(s) and has net splits defined, return vip on
1515+ correct network. If clustered with no nets defined, return primary vip.
1516+
1517+ If not clustered, return unit address ensuring address is on configured net
1518+ split if one is configured.
1519+
1520+ :param endpoint_type: Network endpoing type
1521+ """
1522 resolved_address = None
1523- if is_clustered():
1524- if config(_address_map[endpoint_type]['config']) is None:
1525- # Assume vip is simple and pass back directly
1526- resolved_address = config('vip')
1527+ vips = config('vip')
1528+ if vips:
1529+ vips = vips.split()
1530+
1531+ net_type = ADDRESS_MAP[endpoint_type]['config']
1532+ net_addr = config(net_type)
1533+ net_fallback = ADDRESS_MAP[endpoint_type]['fallback']
1534+ clustered = is_clustered()
1535+ if clustered:
1536+ if not net_addr:
1537+ # If no net-splits defined, we expect a single vip
1538+ resolved_address = vips[0]
1539 else:
1540- for vip in config('vip').split():
1541- if is_address_in_network(
1542- config(_address_map[endpoint_type]['config']),
1543- vip):
1544+ for vip in vips:
1545+ if is_address_in_network(net_addr, vip):
1546 resolved_address = vip
1547+ break
1548 else:
1549 if config('prefer-ipv6'):
1550- fallback_addr = get_ipv6_addr(exc_list=[config('vip')])[0]
1551+ fallback_addr = get_ipv6_addr(exc_list=vips)[0]
1552 else:
1553- fallback_addr = unit_get(_address_map[endpoint_type]['fallback'])
1554- resolved_address = get_address_in_network(
1555- config(_address_map[endpoint_type]['config']), fallback_addr)
1556+ fallback_addr = unit_get(net_fallback)
1557+
1558+ resolved_address = get_address_in_network(net_addr, fallback_addr)
1559
1560 if resolved_address is None:
1561- raise ValueError('Unable to resolve a suitable IP address'
1562- ' based on charm state and configuration')
1563- else:
1564- return resolved_address
1565+ raise ValueError("Unable to resolve a suitable IP address based on "
1566+ "charm state and configuration. (net_type=%s, "
1567+ "clustered=%s)" % (net_type, clustered))
1568+
1569+ return resolved_address
1570
1571=== modified file 'hooks/charmhelpers/contrib/openstack/neutron.py'
1572--- hooks/charmhelpers/contrib/openstack/neutron.py 2014-06-24 13:40:39 +0000
1573+++ hooks/charmhelpers/contrib/openstack/neutron.py 2014-11-26 13:43:45 +0000
1574@@ -14,7 +14,7 @@
1575 def headers_package():
1576 """Ensures correct linux-headers for running kernel are installed,
1577 for building DKMS package"""
1578- kver = check_output(['uname', '-r']).strip()
1579+ kver = check_output(['uname', '-r']).decode('UTF-8').strip()
1580 return 'linux-headers-%s' % kver
1581
1582 QUANTUM_CONF_DIR = '/etc/quantum'
1583@@ -22,7 +22,7 @@
1584
1585 def kernel_version():
1586 """ Retrieve the current major kernel version as a tuple e.g. (3, 13) """
1587- kver = check_output(['uname', '-r']).strip()
1588+ kver = check_output(['uname', '-r']).decode('UTF-8').strip()
1589 kver = kver.split('.')
1590 return (int(kver[0]), int(kver[1]))
1591
1592@@ -138,10 +138,25 @@
1593 relation_prefix='neutron',
1594 ssl_dir=NEUTRON_CONF_DIR)],
1595 'services': [],
1596- 'packages': [['neutron-plugin-cisco']],
1597+ 'packages': [[headers_package()] + determine_dkms_package(),
1598+ ['neutron-plugin-cisco']],
1599 'server_packages': ['neutron-server',
1600 'neutron-plugin-cisco'],
1601 'server_services': ['neutron-server']
1602+ },
1603+ 'Calico': {
1604+ 'config': '/etc/neutron/plugins/ml2/ml2_conf.ini',
1605+ 'driver': 'neutron.plugins.ml2.plugin.Ml2Plugin',
1606+ 'contexts': [
1607+ context.SharedDBContext(user=config('neutron-database-user'),
1608+ database=config('neutron-database'),
1609+ relation_prefix='neutron',
1610+ ssl_dir=NEUTRON_CONF_DIR)],
1611+ 'services': ['calico-compute', 'bird', 'neutron-dhcp-agent'],
1612+ 'packages': [[headers_package()] + determine_dkms_package(),
1613+ ['calico-compute', 'bird', 'neutron-dhcp-agent']],
1614+ 'server_packages': ['neutron-server', 'calico-control'],
1615+ 'server_services': ['neutron-server']
1616 }
1617 }
1618 if release >= 'icehouse':
1619@@ -162,7 +177,8 @@
1620 elif manager == 'neutron':
1621 plugins = neutron_plugins()
1622 else:
1623- log('Error: Network manager does not support plugins.')
1624+ log("Network manager '%s' does not support plugins." % (manager),
1625+ level=ERROR)
1626 raise Exception
1627
1628 try:
1629
1630=== modified file 'hooks/charmhelpers/contrib/openstack/templating.py'
1631--- hooks/charmhelpers/contrib/openstack/templating.py 2014-07-29 07:46:01 +0000
1632+++ hooks/charmhelpers/contrib/openstack/templating.py 2014-11-26 13:43:45 +0000
1633@@ -1,13 +1,13 @@
1634 import os
1635
1636+import six
1637+
1638 from charmhelpers.fetch import apt_install
1639-
1640 from charmhelpers.core.hookenv import (
1641 log,
1642 ERROR,
1643 INFO
1644 )
1645-
1646 from charmhelpers.contrib.openstack.utils import OPENSTACK_CODENAMES
1647
1648 try:
1649@@ -43,7 +43,7 @@
1650 order by OpenStack release.
1651 """
1652 tmpl_dirs = [(rel, os.path.join(templates_dir, rel))
1653- for rel in OPENSTACK_CODENAMES.itervalues()]
1654+ for rel in six.itervalues(OPENSTACK_CODENAMES)]
1655
1656 if not os.path.isdir(templates_dir):
1657 log('Templates directory not found @ %s.' % templates_dir,
1658@@ -258,7 +258,7 @@
1659 """
1660 Write out all registered config files.
1661 """
1662- [self.write(k) for k in self.templates.iterkeys()]
1663+ [self.write(k) for k in six.iterkeys(self.templates)]
1664
1665 def set_release(self, openstack_release):
1666 """
1667@@ -275,5 +275,5 @@
1668 '''
1669 interfaces = []
1670 [interfaces.extend(i.complete_contexts())
1671- for i in self.templates.itervalues()]
1672+ for i in six.itervalues(self.templates)]
1673 return interfaces
1674
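
The itervalues()/iterkeys() calls removed above do not exist on Python 3 dicts; six provides portable wrappers. A tiny illustration of the replacement idiom (standalone, assumes only the six package):

    import six

    codenames = {'2014.1': 'icehouse', '2014.2': 'juno'}

    # six.iteritems/itervalues/iterkeys pick the lazy iterator on Python 2
    # and the ordinary view methods on Python 3.
    for version, name in six.iteritems(codenames):
        print(version, name)

    print(sorted(six.itervalues(codenames)))  # ['icehouse', 'juno']
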
1675=== modified file 'hooks/charmhelpers/contrib/openstack/utils.py'
1676--- hooks/charmhelpers/contrib/openstack/utils.py 2014-10-07 21:03:47 +0000
1677+++ hooks/charmhelpers/contrib/openstack/utils.py 2014-11-26 13:43:45 +0000
1678@@ -2,6 +2,7 @@
1679
1680 # Common python helper functions used for OpenStack charms.
1681 from collections import OrderedDict
1682+from functools import wraps
1683
1684 import subprocess
1685 import json
1686@@ -9,11 +10,12 @@
1687 import socket
1688 import sys
1689
1690+import six
1691+
1692 from charmhelpers.core.hookenv import (
1693 config,
1694 log as juju_log,
1695 charm_dir,
1696- ERROR,
1697 INFO,
1698 relation_ids,
1699 relation_set
1700@@ -112,7 +114,7 @@
1701
1702 # Best guess match based on deb string provided
1703 if src.startswith('deb') or src.startswith('ppa'):
1704- for k, v in OPENSTACK_CODENAMES.iteritems():
1705+ for k, v in six.iteritems(OPENSTACK_CODENAMES):
1706 if v in src:
1707 return v
1708
1709@@ -133,7 +135,7 @@
1710
1711 def get_os_version_codename(codename):
1712 '''Determine OpenStack version number from codename.'''
1713- for k, v in OPENSTACK_CODENAMES.iteritems():
1714+ for k, v in six.iteritems(OPENSTACK_CODENAMES):
1715 if v == codename:
1716 return k
1717 e = 'Could not derive OpenStack version for '\
1718@@ -193,7 +195,7 @@
1719 else:
1720 vers_map = OPENSTACK_CODENAMES
1721
1722- for version, cname in vers_map.iteritems():
1723+ for version, cname in six.iteritems(vers_map):
1724 if cname == codename:
1725 return version
1726 # e = "Could not determine OpenStack version for package: %s" % pkg
1727@@ -317,7 +319,7 @@
1728 rc_script.write(
1729 "#!/bin/bash\n")
1730 [rc_script.write('export %s=%s\n' % (u, p))
1731- for u, p in env_vars.iteritems() if u != "script_path"]
1732+ for u, p in six.iteritems(env_vars) if u != "script_path"]
1733
1734
1735 def openstack_upgrade_available(package):
1736@@ -350,8 +352,8 @@
1737 '''
1738 _none = ['None', 'none', None]
1739 if (block_device in _none):
1740- error_out('prepare_storage(): Missing required input: '
1741- 'block_device=%s.' % block_device, level=ERROR)
1742+ error_out('prepare_storage(): Missing required input: block_device=%s.'
1743+ % block_device)
1744
1745 if block_device.startswith('/dev/'):
1746 bdev = block_device
1747@@ -367,8 +369,7 @@
1748 bdev = '/dev/%s' % block_device
1749
1750 if not is_block_device(bdev):
1751- error_out('Failed to locate valid block device at %s' % bdev,
1752- level=ERROR)
1753+ error_out('Failed to locate valid block device at %s' % bdev)
1754
1755 return bdev
1756
1757@@ -417,7 +418,7 @@
1758
1759 if isinstance(address, dns.name.Name):
1760 rtype = 'PTR'
1761- elif isinstance(address, basestring):
1762+ elif isinstance(address, six.string_types):
1763 rtype = 'A'
1764 else:
1765 return None
1766@@ -468,6 +469,14 @@
1767 return result.split('.')[0]
1768
1769
1770+def get_matchmaker_map(mm_file='/etc/oslo/matchmaker_ring.json'):
1771+ mm_map = {}
1772+ if os.path.isfile(mm_file):
1773+ with open(mm_file, 'r') as f:
1774+ mm_map = json.load(f)
1775+ return mm_map
1776+
1777+
1778 def sync_db_with_multi_ipv6_addresses(database, database_user,
1779 relation_prefix=None):
1780 hosts = get_ipv6_addr(dynamic_only=False)
1781@@ -477,10 +486,24 @@
1782 'hostname': json.dumps(hosts)}
1783
1784 if relation_prefix:
1785- keys = kwargs.keys()
1786- for key in keys:
1787+ for key in list(kwargs.keys()):
1788 kwargs["%s_%s" % (relation_prefix, key)] = kwargs[key]
1789 del kwargs[key]
1790
1791 for rid in relation_ids('shared-db'):
1792 relation_set(relation_id=rid, **kwargs)
1793+
1794+
1795+def os_requires_version(ostack_release, pkg):
1796+ """
1797+ Decorator for hook to specify minimum supported release
1798+ """
1799+ def wrap(f):
1800+ @wraps(f)
1801+ def wrapped_f(*args):
1802+ if os_release(pkg) < ostack_release:
1803+ raise Exception("This hook is not supported on releases"
1804+ " before %s" % ostack_release)
1805+ f(*args)
1806+ return wrapped_f
1807+ return wrap
1808
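
The new os_requires_version() decorator guards hooks against running on releases older than a given codename. A self-contained sketch of the same pattern; the release value is hard-coded here purely for illustration, whereas the real helper derives it with os_release(pkg).

    from functools import wraps

    def requires_release(minimum, current):
        # OpenStack codenames sort alphabetically, so a plain string
        # comparison answers "is this release new enough?".
        def wrap(f):
            @wraps(f)
            def wrapped_f(*args):
                if current < minimum:
                    raise Exception("This hook is not supported on releases"
                                    " before %s" % minimum)
                return f(*args)
            return wrapped_f
        return wrap

    @requires_release('icehouse', current='juno')
    def config_changed():
        print("running config-changed")

    config_changed()  # 'juno' >= 'icehouse', so the hook body runs
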
1809=== modified file 'hooks/charmhelpers/contrib/storage/linux/ceph.py'
1810--- hooks/charmhelpers/contrib/storage/linux/ceph.py 2014-07-29 07:46:01 +0000
1811+++ hooks/charmhelpers/contrib/storage/linux/ceph.py 2014-11-26 13:43:45 +0000
1812@@ -16,19 +16,18 @@
1813 from subprocess import (
1814 check_call,
1815 check_output,
1816- CalledProcessError
1817+ CalledProcessError,
1818 )
1819-
1820 from charmhelpers.core.hookenv import (
1821 relation_get,
1822 relation_ids,
1823 related_units,
1824 log,
1825+ DEBUG,
1826 INFO,
1827 WARNING,
1828- ERROR
1829+ ERROR,
1830 )
1831-
1832 from charmhelpers.core.host import (
1833 mount,
1834 mounts,
1835@@ -37,7 +36,6 @@
1836 service_running,
1837 umount,
1838 )
1839-
1840 from charmhelpers.fetch import (
1841 apt_install,
1842 )
1843@@ -56,99 +54,85 @@
1844
1845
1846 def install():
1847- ''' Basic Ceph client installation '''
1848+ """Basic Ceph client installation."""
1849 ceph_dir = "/etc/ceph"
1850 if not os.path.exists(ceph_dir):
1851 os.mkdir(ceph_dir)
1852+
1853 apt_install('ceph-common', fatal=True)
1854
1855
1856 def rbd_exists(service, pool, rbd_img):
1857- ''' Check to see if a RADOS block device exists '''
1858+ """Check to see if a RADOS block device exists."""
1859 try:
1860- out = check_output(['rbd', 'list', '--id', service,
1861- '--pool', pool])
1862+ out = check_output(['rbd', 'list', '--id',
1863+ service, '--pool', pool]).decode('UTF-8')
1864 except CalledProcessError:
1865 return False
1866- else:
1867- return rbd_img in out
1868+
1869+ return rbd_img in out
1870
1871
1872 def create_rbd_image(service, pool, image, sizemb):
1873- ''' Create a new RADOS block device '''
1874- cmd = [
1875- 'rbd',
1876- 'create',
1877- image,
1878- '--size',
1879- str(sizemb),
1880- '--id',
1881- service,
1882- '--pool',
1883- pool
1884- ]
1885+ """Create a new RADOS block device."""
1886+ cmd = ['rbd', 'create', image, '--size', str(sizemb), '--id', service,
1887+ '--pool', pool]
1888 check_call(cmd)
1889
1890
1891 def pool_exists(service, name):
1892- ''' Check to see if a RADOS pool already exists '''
1893+ """Check to see if a RADOS pool already exists."""
1894 try:
1895- out = check_output(['rados', '--id', service, 'lspools'])
1896+ out = check_output(['rados', '--id', service,
1897+ 'lspools']).decode('UTF-8')
1898 except CalledProcessError:
1899 return False
1900- else:
1901- return name in out
1902+
1903+ return name in out
1904
1905
1906 def get_osds(service):
1907- '''
1908- Return a list of all Ceph Object Storage Daemons
1909- currently in the cluster
1910- '''
1911+ """Return a list of all Ceph Object Storage Daemons currently in the
1912+ cluster.
1913+ """
1914 version = ceph_version()
1915 if version and version >= '0.56':
1916 return json.loads(check_output(['ceph', '--id', service,
1917- 'osd', 'ls', '--format=json']))
1918- else:
1919- return None
1920-
1921-
1922-def create_pool(service, name, replicas=2):
1923- ''' Create a new RADOS pool '''
1924+ 'osd', 'ls',
1925+ '--format=json']).decode('UTF-8'))
1926+
1927+ return None
1928+
1929+
1930+def create_pool(service, name, replicas=3):
1931+ """Create a new RADOS pool."""
1932 if pool_exists(service, name):
1933 log("Ceph pool {} already exists, skipping creation".format(name),
1934 level=WARNING)
1935 return
1936+
1937 # Calculate the number of placement groups based
1938 # on upstream recommended best practices.
1939 osds = get_osds(service)
1940 if osds:
1941- pgnum = (len(osds) * 100 / replicas)
1942+ pgnum = (len(osds) * 100 // replicas)
1943 else:
1944 # NOTE(james-page): Default to 200 for older ceph versions
1945 # which don't support OSD query from cli
1946 pgnum = 200
1947- cmd = [
1948- 'ceph', '--id', service,
1949- 'osd', 'pool', 'create',
1950- name, str(pgnum)
1951- ]
1952+
1953+ cmd = ['ceph', '--id', service, 'osd', 'pool', 'create', name, str(pgnum)]
1954 check_call(cmd)
1955- cmd = [
1956- 'ceph', '--id', service,
1957- 'osd', 'pool', 'set', name,
1958- 'size', str(replicas)
1959- ]
1960+
1961+ cmd = ['ceph', '--id', service, 'osd', 'pool', 'set', name, 'size',
1962+ str(replicas)]
1963 check_call(cmd)
1964
1965
1966 def delete_pool(service, name):
1967- ''' Delete a RADOS pool from ceph '''
1968- cmd = [
1969- 'ceph', '--id', service,
1970- 'osd', 'pool', 'delete',
1971- name, '--yes-i-really-really-mean-it'
1972- ]
1973+ """Delete a RADOS pool from ceph."""
1974+ cmd = ['ceph', '--id', service, 'osd', 'pool', 'delete', name,
1975+ '--yes-i-really-really-mean-it']
1976 check_call(cmd)
1977
1978
1979@@ -161,44 +145,43 @@
1980
1981
1982 def create_keyring(service, key):
1983- ''' Create a new Ceph keyring containing key'''
1984+ """Create a new Ceph keyring containing key."""
1985 keyring = _keyring_path(service)
1986 if os.path.exists(keyring):
1987- log('ceph: Keyring exists at %s.' % keyring, level=WARNING)
1988+ log('Ceph keyring exists at %s.' % keyring, level=WARNING)
1989 return
1990- cmd = [
1991- 'ceph-authtool',
1992- keyring,
1993- '--create-keyring',
1994- '--name=client.{}'.format(service),
1995- '--add-key={}'.format(key)
1996- ]
1997+
1998+ cmd = ['ceph-authtool', keyring, '--create-keyring',
1999+ '--name=client.{}'.format(service), '--add-key={}'.format(key)]
2000 check_call(cmd)
2001- log('ceph: Created new ring at %s.' % keyring, level=INFO)
2002+ log('Created new ceph keyring at %s.' % keyring, level=DEBUG)
2003
2004
2005 def create_key_file(service, key):
2006- ''' Create a file containing key '''
2007+ """Create a file containing key."""
2008 keyfile = _keyfile_path(service)
2009 if os.path.exists(keyfile):
2010- log('ceph: Keyfile exists at %s.' % keyfile, level=WARNING)
2011+ log('Keyfile exists at %s.' % keyfile, level=WARNING)
2012 return
2013+
2014 with open(keyfile, 'w') as fd:
2015 fd.write(key)
2016- log('ceph: Created new keyfile at %s.' % keyfile, level=INFO)
2017+
2018+ log('Created new keyfile at %s.' % keyfile, level=INFO)
2019
2020
2021 def get_ceph_nodes():
2022- ''' Query named relation 'ceph' to detemine current nodes '''
2023+ """Query named relation 'ceph' to determine current nodes."""
2024 hosts = []
2025 for r_id in relation_ids('ceph'):
2026 for unit in related_units(r_id):
2027 hosts.append(relation_get('private-address', unit=unit, rid=r_id))
2028+
2029 return hosts
2030
2031
2032 def configure(service, key, auth, use_syslog):
2033- ''' Perform basic configuration of Ceph '''
2034+ """Perform basic configuration of Ceph."""
2035 create_keyring(service, key)
2036 create_key_file(service, key)
2037 hosts = get_ceph_nodes()
2038@@ -211,17 +194,17 @@
2039
2040
2041 def image_mapped(name):
2042- ''' Determine whether a RADOS block device is mapped locally '''
2043+ """Determine whether a RADOS block device is mapped locally."""
2044 try:
2045- out = check_output(['rbd', 'showmapped'])
2046+ out = check_output(['rbd', 'showmapped']).decode('UTF-8')
2047 except CalledProcessError:
2048 return False
2049- else:
2050- return name in out
2051+
2052+ return name in out
2053
2054
2055 def map_block_storage(service, pool, image):
2056- ''' Map a RADOS block device for local use '''
2057+ """Map a RADOS block device for local use."""
2058 cmd = [
2059 'rbd',
2060 'map',
2061@@ -235,31 +218,32 @@
2062
2063
2064 def filesystem_mounted(fs):
2065- ''' Determine whether a filesytems is already mounted '''
2066+ """Determine whether a filesytems is already mounted."""
2067 return fs in [f for f, m in mounts()]
2068
2069
2070 def make_filesystem(blk_device, fstype='ext4', timeout=10):
2071- ''' Make a new filesystem on the specified block device '''
2072+ """Make a new filesystem on the specified block device."""
2073 count = 0
2074 e_noent = os.errno.ENOENT
2075 while not os.path.exists(blk_device):
2076 if count >= timeout:
2077- log('ceph: gave up waiting on block device %s' % blk_device,
2078+ log('Gave up waiting on block device %s' % blk_device,
2079 level=ERROR)
2080 raise IOError(e_noent, os.strerror(e_noent), blk_device)
2081- log('ceph: waiting for block device %s to appear' % blk_device,
2082- level=INFO)
2083+
2084+ log('Waiting for block device %s to appear' % blk_device,
2085+ level=DEBUG)
2086 count += 1
2087 time.sleep(1)
2088 else:
2089- log('ceph: Formatting block device %s as filesystem %s.' %
2090+ log('Formatting block device %s as filesystem %s.' %
2091 (blk_device, fstype), level=INFO)
2092 check_call(['mkfs', '-t', fstype, blk_device])
2093
2094
2095 def place_data_on_block_device(blk_device, data_src_dst):
2096- ''' Migrate data in data_src_dst to blk_device and then remount '''
2097+ """Migrate data in data_src_dst to blk_device and then remount."""
2098 # mount block device into /mnt
2099 mount(blk_device, '/mnt')
2100 # copy data to /mnt
2101@@ -279,8 +263,8 @@
2102
2103 # TODO: re-use
2104 def modprobe(module):
2105- ''' Load a kernel module and configure for auto-load on reboot '''
2106- log('ceph: Loading kernel module', level=INFO)
2107+ """Load a kernel module and configure for auto-load on reboot."""
2108+ log('Loading kernel module', level=INFO)
2109 cmd = ['modprobe', module]
2110 check_call(cmd)
2111 with open('/etc/modules', 'r+') as modules:
2112@@ -289,7 +273,7 @@
2113
2114
2115 def copy_files(src, dst, symlinks=False, ignore=None):
2116- ''' Copy files from src to dst '''
2117+ """Copy files from src to dst."""
2118 for item in os.listdir(src):
2119 s = os.path.join(src, item)
2120 d = os.path.join(dst, item)
2121@@ -300,9 +284,9 @@
2122
2123
2124 def ensure_ceph_storage(service, pool, rbd_img, sizemb, mount_point,
2125- blk_device, fstype, system_services=[]):
2126- """
2127- NOTE: This function must only be called from a single service unit for
2128+ blk_device, fstype, system_services=[],
2129+ replicas=3):
2130+ """NOTE: This function must only be called from a single service unit for
2131 the same rbd_img otherwise data loss will occur.
2132
2133 Ensures given pool and RBD image exists, is mapped to a block device,
2134@@ -316,15 +300,16 @@
2135 """
2136 # Ensure pool, RBD image, RBD mappings are in place.
2137 if not pool_exists(service, pool):
2138- log('ceph: Creating new pool {}.'.format(pool))
2139- create_pool(service, pool)
2140+ log('Creating new pool {}.'.format(pool), level=INFO)
2141+ create_pool(service, pool, replicas=replicas)
2142
2143 if not rbd_exists(service, pool, rbd_img):
2144- log('ceph: Creating RBD image ({}).'.format(rbd_img))
2145+ log('Creating RBD image ({}).'.format(rbd_img), level=INFO)
2146 create_rbd_image(service, pool, rbd_img, sizemb)
2147
2148 if not image_mapped(rbd_img):
2149- log('ceph: Mapping RBD Image {} as a Block Device.'.format(rbd_img))
2150+ log('Mapping RBD Image {} as a Block Device.'.format(rbd_img),
2151+ level=INFO)
2152 map_block_storage(service, pool, rbd_img)
2153
2154 # make file system
2155@@ -339,45 +324,47 @@
2156
2157 for svc in system_services:
2158 if service_running(svc):
2159- log('ceph: Stopping services {} prior to migrating data.'
2160- .format(svc))
2161+ log('Stopping services {} prior to migrating data.'
2162+ .format(svc), level=DEBUG)
2163 service_stop(svc)
2164
2165 place_data_on_block_device(blk_device, mount_point)
2166
2167 for svc in system_services:
2168- log('ceph: Starting service {} after migrating data.'
2169- .format(svc))
2170+ log('Starting service {} after migrating data.'
2171+ .format(svc), level=DEBUG)
2172 service_start(svc)
2173
2174
2175 def ensure_ceph_keyring(service, user=None, group=None):
2176- '''
2177- Ensures a ceph keyring is created for a named service
2178- and optionally ensures user and group ownership.
2179+ """Ensures a ceph keyring is created for a named service and optionally
2180+ ensures user and group ownership.
2181
2182 Returns False if no ceph key is available in relation state.
2183- '''
2184+ """
2185 key = None
2186 for rid in relation_ids('ceph'):
2187 for unit in related_units(rid):
2188 key = relation_get('key', rid=rid, unit=unit)
2189 if key:
2190 break
2191+
2192 if not key:
2193 return False
2194+
2195 create_keyring(service=service, key=key)
2196 keyring = _keyring_path(service)
2197 if user and group:
2198 check_call(['chown', '%s.%s' % (user, group), keyring])
2199+
2200 return True
2201
2202
2203 def ceph_version():
2204- ''' Retrieve the local version of ceph '''
2205+ """Retrieve the local version of ceph."""
2206 if os.path.exists('/usr/bin/ceph'):
2207 cmd = ['ceph', '-v']
2208- output = check_output(cmd)
2209+ output = check_output(cmd).decode('US-ASCII')
2210 output = output.split()
2211 if len(output) > 3:
2212 return output[2]
2213
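
Two behavioural notes in this file: create_pool() now defaults to three replicas, and the placement-group maths uses floor division so the result stays an int on Python 3. A quick sketch of that calculation (the numbers are illustrative only):

    def placement_groups(num_osds, replicas=3):
        # Roughly 100 placement groups per OSD, divided by the replica
        # count; '//' gives an integer on Python 2 and 3 alike.
        if num_osds:
            return (num_osds * 100) // replicas
        # Older ceph releases cannot list OSDs over the CLI, so the helper
        # above falls back to a fixed 200.
        return 200

    print(placement_groups(6))               # 200
    print(placement_groups(9, replicas=2))   # 450
    print(placement_groups(0))               # 200 (fallback)
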
2214=== modified file 'hooks/charmhelpers/contrib/storage/linux/loopback.py'
2215--- hooks/charmhelpers/contrib/storage/linux/loopback.py 2013-11-08 05:55:44 +0000
2216+++ hooks/charmhelpers/contrib/storage/linux/loopback.py 2014-11-26 13:43:45 +0000
2217@@ -1,12 +1,12 @@
2218-
2219 import os
2220 import re
2221-
2222 from subprocess import (
2223 check_call,
2224 check_output,
2225 )
2226
2227+import six
2228+
2229
2230 ##################################################
2231 # loopback device helpers.
2232@@ -37,7 +37,7 @@
2233 '''
2234 file_path = os.path.abspath(file_path)
2235 check_call(['losetup', '--find', file_path])
2236- for d, f in loopback_devices().iteritems():
2237+ for d, f in six.iteritems(loopback_devices()):
2238 if f == file_path:
2239 return d
2240
2241@@ -51,7 +51,7 @@
2242
2243 :returns: str: Full path to the ensured loopback device (eg, /dev/loop0)
2244 '''
2245- for d, f in loopback_devices().iteritems():
2246+ for d, f in six.iteritems(loopback_devices()):
2247 if f == path:
2248 return d
2249
2250
2251=== modified file 'hooks/charmhelpers/contrib/storage/linux/lvm.py'
2252--- hooks/charmhelpers/contrib/storage/linux/lvm.py 2014-05-19 11:43:55 +0000
2253+++ hooks/charmhelpers/contrib/storage/linux/lvm.py 2014-11-26 13:43:45 +0000
2254@@ -61,6 +61,7 @@
2255 vg = None
2256 pvd = check_output(['pvdisplay', block_device]).splitlines()
2257 for l in pvd:
2258+ l = l.decode('UTF-8')
2259 if l.strip().startswith('VG Name'):
2260 vg = ' '.join(l.strip().split()[2:])
2261 return vg
2262
2263=== modified file 'hooks/charmhelpers/contrib/storage/linux/utils.py'
2264--- hooks/charmhelpers/contrib/storage/linux/utils.py 2014-08-13 13:12:47 +0000
2265+++ hooks/charmhelpers/contrib/storage/linux/utils.py 2014-11-26 13:43:45 +0000
2266@@ -30,7 +30,8 @@
2267 # sometimes sgdisk exits non-zero; this is OK, dd will clean up
2268 call(['sgdisk', '--zap-all', '--mbrtogpt',
2269 '--clear', block_device])
2270- dev_end = check_output(['blockdev', '--getsz', block_device])
2271+ dev_end = check_output(['blockdev', '--getsz',
2272+ block_device]).decode('UTF-8')
2273 gpt_end = int(dev_end.split()[0]) - 100
2274 check_call(['dd', 'if=/dev/zero', 'of=%s' % (block_device),
2275 'bs=1M', 'count=1'])
2276@@ -47,7 +48,7 @@
2277 it doesn't.
2278 '''
2279 is_partition = bool(re.search(r".*[0-9]+\b", device))
2280- out = check_output(['mount'])
2281+ out = check_output(['mount']).decode('UTF-8')
2282 if is_partition:
2283 return bool(re.search(device + r"\b", out))
2284 return bool(re.search(device + r"[0-9]+\b", out))
2285
2286=== modified file 'hooks/charmhelpers/core/fstab.py'
2287--- hooks/charmhelpers/core/fstab.py 2014-06-24 13:40:39 +0000
2288+++ hooks/charmhelpers/core/fstab.py 2014-11-26 13:43:45 +0000
2289@@ -3,10 +3,11 @@
2290
2291 __author__ = 'Jorge Niedbalski R. <jorge.niedbalski@canonical.com>'
2292
2293+import io
2294 import os
2295
2296
2297-class Fstab(file):
2298+class Fstab(io.FileIO):
2299 """This class extends file in order to implement a file reader/writer
2300 for file `/etc/fstab`
2301 """
2302@@ -24,8 +25,8 @@
2303 options = "defaults"
2304
2305 self.options = options
2306- self.d = d
2307- self.p = p
2308+ self.d = int(d)
2309+ self.p = int(p)
2310
2311 def __eq__(self, o):
2312 return str(self) == str(o)
2313@@ -45,7 +46,7 @@
2314 self._path = path
2315 else:
2316 self._path = self.DEFAULT_PATH
2317- file.__init__(self, self._path, 'r+')
2318+ super(Fstab, self).__init__(self._path, 'rb+')
2319
2320 def _hydrate_entry(self, line):
2321 # NOTE: use split with no arguments to split on any
2322@@ -58,8 +59,9 @@
2323 def entries(self):
2324 self.seek(0)
2325 for line in self.readlines():
2326+ line = line.decode('us-ascii')
2327 try:
2328- if not line.startswith("#"):
2329+ if line.strip() and not line.startswith("#"):
2330 yield self._hydrate_entry(line)
2331 except ValueError:
2332 pass
2333@@ -75,14 +77,14 @@
2334 if self.get_entry_by_attr('device', entry.device):
2335 return False
2336
2337- self.write(str(entry) + '\n')
2338+ self.write((str(entry) + '\n').encode('us-ascii'))
2339 self.truncate()
2340 return entry
2341
2342 def remove_entry(self, entry):
2343 self.seek(0)
2344
2345- lines = self.readlines()
2346+ lines = [l.decode('us-ascii') for l in self.readlines()]
2347
2348 found = False
2349 for index, line in enumerate(lines):
2350@@ -97,7 +99,7 @@
2351 lines.remove(line)
2352
2353 self.seek(0)
2354- self.write(''.join(lines))
2355+ self.write(''.join(lines).encode('us-ascii'))
2356 self.truncate()
2357 return True
2358
2359
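
Porting Fstab from the removed Python 2 file built-in to io.FileIO means every read and write now deals in bytes, hence the encode/decode calls above. A small standalone illustration of that handling against a throwaway temp file:

    import io
    import os
    import tempfile

    fd, path = tempfile.mkstemp()
    os.close(fd)

    # io.FileIO exposes raw bytes on Python 2 and 3, so entries are encoded
    # on write and decoded on read, exactly as the Fstab class now does.
    with io.FileIO(path, 'rb+') as f:
        f.write(b'/dev/vdb /mnt ext4 defaults 0 0\n')
        f.seek(0)
        lines = [l.decode('us-ascii') for l in f.readlines()]

    os.remove(path)
    print(lines[0].split())
    # ['/dev/vdb', '/mnt', 'ext4', 'defaults', '0', '0']
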
2360=== modified file 'hooks/charmhelpers/core/hookenv.py'
2361--- hooks/charmhelpers/core/hookenv.py 2014-09-25 15:37:05 +0000
2362+++ hooks/charmhelpers/core/hookenv.py 2014-11-26 13:43:45 +0000
2363@@ -9,9 +9,14 @@
2364 import yaml
2365 import subprocess
2366 import sys
2367-import UserDict
2368 from subprocess import CalledProcessError
2369
2370+import six
2371+if not six.PY3:
2372+ from UserDict import UserDict
2373+else:
2374+ from collections import UserDict
2375+
2376 CRITICAL = "CRITICAL"
2377 ERROR = "ERROR"
2378 WARNING = "WARNING"
2379@@ -67,12 +72,12 @@
2380 subprocess.call(command)
2381
2382
2383-class Serializable(UserDict.IterableUserDict):
2384+class Serializable(UserDict):
2385 """Wrapper, an object that can be serialized to yaml or json"""
2386
2387 def __init__(self, obj):
2388 # wrap the object
2389- UserDict.IterableUserDict.__init__(self)
2390+ UserDict.__init__(self)
2391 self.data = obj
2392
2393 def __getattr__(self, attr):
2394@@ -214,6 +219,12 @@
2395 except KeyError:
2396 return (self._prev_dict or {})[key]
2397
2398+ def keys(self):
2399+ prev_keys = []
2400+ if self._prev_dict is not None:
2401+ prev_keys = self._prev_dict.keys()
2402+ return list(set(prev_keys + list(dict.keys(self))))
2403+
2404 def load_previous(self, path=None):
2405 """Load previous copy of config from disk.
2406
2407@@ -263,7 +274,7 @@
2408
2409 """
2410 if self._prev_dict:
2411- for k, v in self._prev_dict.iteritems():
2412+ for k, v in six.iteritems(self._prev_dict):
2413 if k not in self:
2414 self[k] = v
2415 with open(self.path, 'w') as f:
2416@@ -278,7 +289,8 @@
2417 config_cmd_line.append(scope)
2418 config_cmd_line.append('--format=json')
2419 try:
2420- config_data = json.loads(subprocess.check_output(config_cmd_line))
2421+ config_data = json.loads(
2422+ subprocess.check_output(config_cmd_line).decode('UTF-8'))
2423 if scope is not None:
2424 return config_data
2425 return Config(config_data)
2426@@ -297,10 +309,10 @@
2427 if unit:
2428 _args.append(unit)
2429 try:
2430- return json.loads(subprocess.check_output(_args))
2431+ return json.loads(subprocess.check_output(_args).decode('UTF-8'))
2432 except ValueError:
2433 return None
2434- except CalledProcessError, e:
2435+ except CalledProcessError as e:
2436 if e.returncode == 2:
2437 return None
2438 raise
2439@@ -312,7 +324,7 @@
2440 relation_cmd_line = ['relation-set']
2441 if relation_id is not None:
2442 relation_cmd_line.extend(('-r', relation_id))
2443- for k, v in (relation_settings.items() + kwargs.items()):
2444+ for k, v in (list(relation_settings.items()) + list(kwargs.items())):
2445 if v is None:
2446 relation_cmd_line.append('{}='.format(k))
2447 else:
2448@@ -329,7 +341,8 @@
2449 relid_cmd_line = ['relation-ids', '--format=json']
2450 if reltype is not None:
2451 relid_cmd_line.append(reltype)
2452- return json.loads(subprocess.check_output(relid_cmd_line)) or []
2453+ return json.loads(
2454+ subprocess.check_output(relid_cmd_line).decode('UTF-8')) or []
2455 return []
2456
2457
2458@@ -340,7 +353,8 @@
2459 units_cmd_line = ['relation-list', '--format=json']
2460 if relid is not None:
2461 units_cmd_line.extend(('-r', relid))
2462- return json.loads(subprocess.check_output(units_cmd_line)) or []
2463+ return json.loads(
2464+ subprocess.check_output(units_cmd_line).decode('UTF-8')) or []
2465
2466
2467 @cached
2468@@ -449,7 +463,7 @@
2469 """Get the unit ID for the remote unit"""
2470 _args = ['unit-get', '--format=json', attribute]
2471 try:
2472- return json.loads(subprocess.check_output(_args))
2473+ return json.loads(subprocess.check_output(_args).decode('UTF-8'))
2474 except ValueError:
2475 return None
2476
2477
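
Besides the UserDict import shuffle and the usual decode() calls, Config gains a keys() override that merges the keys of the previously saved config with the current one. The merge itself boils down to the following (plain dicts used here for illustration):

    def merged_keys(current, previous=None):
        # Union of the previous hook run's config keys and the current ones,
        # de-duplicated -- mirroring the Config.keys() addition above.
        prev_keys = list(previous.keys()) if previous is not None else []
        return list(set(prev_keys + list(current.keys())))

    print(sorted(merged_keys({'plugin': 'ovs'},
                             {'plugin': 'ovs', 'instance-mtu': 1400})))
    # ['instance-mtu', 'plugin']
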
2478=== modified file 'hooks/charmhelpers/core/host.py'
2479--- hooks/charmhelpers/core/host.py 2014-10-16 17:42:14 +0000
2480+++ hooks/charmhelpers/core/host.py 2014-11-26 13:43:45 +0000
2481@@ -13,13 +13,13 @@
2482 import string
2483 import subprocess
2484 import hashlib
2485-import shutil
2486 from contextlib import contextmanager
2487-
2488 from collections import OrderedDict
2489
2490-from hookenv import log
2491-from fstab import Fstab
2492+import six
2493+
2494+from .hookenv import log
2495+from .fstab import Fstab
2496
2497
2498 def service_start(service_name):
2499@@ -55,7 +55,9 @@
2500 def service_running(service):
2501 """Determine whether a system service is running"""
2502 try:
2503- output = subprocess.check_output(['service', service, 'status'], stderr=subprocess.STDOUT)
2504+ output = subprocess.check_output(
2505+ ['service', service, 'status'],
2506+ stderr=subprocess.STDOUT).decode('UTF-8')
2507 except subprocess.CalledProcessError:
2508 return False
2509 else:
2510@@ -68,7 +70,9 @@
2511 def service_available(service_name):
2512 """Determine whether a system service is available"""
2513 try:
2514- subprocess.check_output(['service', service_name, 'status'], stderr=subprocess.STDOUT)
2515+ subprocess.check_output(
2516+ ['service', service_name, 'status'],
2517+ stderr=subprocess.STDOUT).decode('UTF-8')
2518 except subprocess.CalledProcessError as e:
2519 return 'unrecognized service' not in e.output
2520 else:
2521@@ -116,7 +120,7 @@
2522 cmd.append(from_path)
2523 cmd.append(to_path)
2524 log(" ".join(cmd))
2525- return subprocess.check_output(cmd).strip()
2526+ return subprocess.check_output(cmd).decode('UTF-8').strip()
2527
2528
2529 def symlink(source, destination):
2530@@ -131,7 +135,7 @@
2531 subprocess.check_call(cmd)
2532
2533
2534-def mkdir(path, owner='root', group='root', perms=0555, force=False):
2535+def mkdir(path, owner='root', group='root', perms=0o555, force=False):
2536 """Create a directory"""
2537 log("Making dir {} {}:{} {:o}".format(path, owner, group,
2538 perms))
2539@@ -147,7 +151,7 @@
2540 os.chown(realpath, uid, gid)
2541
2542
2543-def write_file(path, content, owner='root', group='root', perms=0444):
2544+def write_file(path, content, owner='root', group='root', perms=0o444):
2545 """Create or overwrite a file with the contents of a string"""
2546 log("Writing file {} {}:{} {:o}".format(path, owner, group, perms))
2547 uid = pwd.getpwnam(owner).pw_uid
2548@@ -178,7 +182,7 @@
2549 cmd_args.extend([device, mountpoint])
2550 try:
2551 subprocess.check_output(cmd_args)
2552- except subprocess.CalledProcessError, e:
2553+ except subprocess.CalledProcessError as e:
2554 log('Error mounting {} at {}\n{}'.format(device, mountpoint, e.output))
2555 return False
2556
2557@@ -192,7 +196,7 @@
2558 cmd_args = ['umount', mountpoint]
2559 try:
2560 subprocess.check_output(cmd_args)
2561- except subprocess.CalledProcessError, e:
2562+ except subprocess.CalledProcessError as e:
2563 log('Error unmounting {}\n{}'.format(mountpoint, e.output))
2564 return False
2565
2566@@ -219,8 +223,8 @@
2567 """
2568 if os.path.exists(path):
2569 h = getattr(hashlib, hash_type)()
2570- with open(path, 'r') as source:
2571- h.update(source.read()) # IGNORE:E1101 - it does have update
2572+ with open(path, 'rb') as source:
2573+ h.update(source.read())
2574 return h.hexdigest()
2575 else:
2576 return None
2577@@ -298,7 +302,7 @@
2578 if length is None:
2579 length = random.choice(range(35, 45))
2580 alphanumeric_chars = [
2581- l for l in (string.letters + string.digits)
2582+ l for l in (string.ascii_letters + string.digits)
2583 if l not in 'l0QD1vAEIOUaeiou']
2584 random_chars = [
2585 random.choice(alphanumeric_chars) for _ in range(length)]
2586@@ -307,14 +311,14 @@
2587
2588 def list_nics(nic_type):
2589 '''Return a list of nics of given type(s)'''
2590- if isinstance(nic_type, basestring):
2591+ if isinstance(nic_type, six.string_types):
2592 int_types = [nic_type]
2593 else:
2594 int_types = nic_type
2595 interfaces = []
2596 for int_type in int_types:
2597 cmd = ['ip', 'addr', 'show', 'label', int_type + '*']
2598- ip_output = subprocess.check_output(cmd).split('\n')
2599+ ip_output = subprocess.check_output(cmd).decode('UTF-8').split('\n')
2600 ip_output = (line for line in ip_output if line)
2601 for line in ip_output:
2602 if line.split()[1].startswith(int_type):
2603@@ -336,7 +340,7 @@
2604
2605 def get_nic_mtu(nic):
2606 cmd = ['ip', 'addr', 'show', nic]
2607- ip_output = subprocess.check_output(cmd).split('\n')
2608+ ip_output = subprocess.check_output(cmd).decode('UTF-8').split('\n')
2609 mtu = ""
2610 for line in ip_output:
2611 words = line.split()
2612@@ -347,7 +351,7 @@
2613
2614 def get_nic_hwaddr(nic):
2615 cmd = ['ip', '-o', '-0', 'addr', 'show', nic]
2616- ip_output = subprocess.check_output(cmd)
2617+ ip_output = subprocess.check_output(cmd).decode('UTF-8')
2618 hwaddr = ""
2619 words = ip_output.split()
2620 if 'link/ether' in words:
2621
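
string.letters is gone in Python 3, which is why the password generator above switches to string.ascii_letters; octal literals likewise gain the 0o prefix. A standalone version of that generator for reference:

    import random
    import string

    def pwgen(length=None):
        if length is None:
            length = random.choice(range(35, 45))
        # ascii_letters exists on both Python 2 and 3; visually ambiguous
        # characters are filtered out, as in the helper above.
        chars = [c for c in (string.ascii_letters + string.digits)
                 if c not in 'l0QD1vAEIOUaeiou']
        return ''.join(random.choice(chars) for _ in range(length))

    print(pwgen(12))
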
2622=== modified file 'hooks/charmhelpers/core/services/__init__.py'
2623--- hooks/charmhelpers/core/services/__init__.py 2014-08-13 13:12:47 +0000
2624+++ hooks/charmhelpers/core/services/__init__.py 2014-11-26 13:43:45 +0000
2625@@ -1,2 +1,2 @@
2626-from .base import *
2627-from .helpers import *
2628+from .base import * # NOQA
2629+from .helpers import * # NOQA
2630
2631=== modified file 'hooks/charmhelpers/core/services/helpers.py'
2632--- hooks/charmhelpers/core/services/helpers.py 2014-09-25 15:37:05 +0000
2633+++ hooks/charmhelpers/core/services/helpers.py 2014-11-26 13:43:45 +0000
2634@@ -196,7 +196,7 @@
2635 if not os.path.isabs(file_name):
2636 file_name = os.path.join(hookenv.charm_dir(), file_name)
2637 with open(file_name, 'w') as file_stream:
2638- os.fchmod(file_stream.fileno(), 0600)
2639+ os.fchmod(file_stream.fileno(), 0o600)
2640 yaml.dump(config_data, file_stream)
2641
2642 def read_context(self, file_name):
2643@@ -211,15 +211,19 @@
2644
2645 class TemplateCallback(ManagerCallback):
2646 """
2647- Callback class that will render a Jinja2 template, for use as a ready action.
2648-
2649- :param str source: The template source file, relative to `$CHARM_DIR/templates`
2650+ Callback class that will render a Jinja2 template, for use as a ready
2651+ action.
2652+
2653+ :param str source: The template source file, relative to
2654+ `$CHARM_DIR/templates`
2655+
2656 :param str target: The target to write the rendered template to
2657 :param str owner: The owner of the rendered file
2658 :param str group: The group of the rendered file
2659 :param int perms: The permissions of the rendered file
2660 """
2661- def __init__(self, source, target, owner='root', group='root', perms=0444):
2662+ def __init__(self, source, target,
2663+ owner='root', group='root', perms=0o444):
2664 self.source = source
2665 self.target = target
2666 self.owner = owner
2667
2668=== modified file 'hooks/charmhelpers/core/templating.py'
2669--- hooks/charmhelpers/core/templating.py 2014-08-13 13:12:47 +0000
2670+++ hooks/charmhelpers/core/templating.py 2014-11-26 13:43:45 +0000
2671@@ -4,7 +4,8 @@
2672 from charmhelpers.core import hookenv
2673
2674
2675-def render(source, target, context, owner='root', group='root', perms=0444, templates_dir=None):
2676+def render(source, target, context, owner='root', group='root',
2677+ perms=0o444, templates_dir=None):
2678 """
2679 Render a template.
2680
2681
2682=== modified file 'hooks/charmhelpers/fetch/__init__.py'
2683--- hooks/charmhelpers/fetch/__init__.py 2014-09-25 15:37:05 +0000
2684+++ hooks/charmhelpers/fetch/__init__.py 2014-11-26 13:43:45 +0000
2685@@ -5,10 +5,6 @@
2686 from charmhelpers.core.host import (
2687 lsb_release
2688 )
2689-from urlparse import (
2690- urlparse,
2691- urlunparse,
2692-)
2693 import subprocess
2694 from charmhelpers.core.hookenv import (
2695 config,
2696@@ -16,6 +12,12 @@
2697 )
2698 import os
2699
2700+import six
2701+if six.PY3:
2702+ from urllib.parse import urlparse, urlunparse
2703+else:
2704+ from urlparse import urlparse, urlunparse
2705+
2706
2707 CLOUD_ARCHIVE = """# Ubuntu Cloud Archive
2708 deb http://ubuntu-cloud.archive.canonical.com/ubuntu {} main
2709@@ -72,6 +74,7 @@
2710 FETCH_HANDLERS = (
2711 'charmhelpers.fetch.archiveurl.ArchiveUrlFetchHandler',
2712 'charmhelpers.fetch.bzrurl.BzrUrlFetchHandler',
2713+ 'charmhelpers.fetch.giturl.GitUrlFetchHandler',
2714 )
2715
2716 APT_NO_LOCK = 100 # The return code for "couldn't acquire lock" in APT.
2717@@ -148,7 +151,7 @@
2718 cmd = ['apt-get', '--assume-yes']
2719 cmd.extend(options)
2720 cmd.append('install')
2721- if isinstance(packages, basestring):
2722+ if isinstance(packages, six.string_types):
2723 cmd.append(packages)
2724 else:
2725 cmd.extend(packages)
2726@@ -181,7 +184,7 @@
2727 def apt_purge(packages, fatal=False):
2728 """Purge one or more packages"""
2729 cmd = ['apt-get', '--assume-yes', 'purge']
2730- if isinstance(packages, basestring):
2731+ if isinstance(packages, six.string_types):
2732 cmd.append(packages)
2733 else:
2734 cmd.extend(packages)
2735@@ -192,7 +195,7 @@
2736 def apt_hold(packages, fatal=False):
2737 """Hold one or more packages"""
2738 cmd = ['apt-mark', 'hold']
2739- if isinstance(packages, basestring):
2740+ if isinstance(packages, six.string_types):
2741 cmd.append(packages)
2742 else:
2743 cmd.extend(packages)
2744@@ -218,6 +221,7 @@
2745 pocket for the release.
2746 'cloud:' may be used to activate official cloud archive pockets,
2747 such as 'cloud:icehouse'
2748+ 'distro' may be used as a noop
2749
2750 @param key: A key to be added to the system's APT keyring and used
2751 to verify the signatures on packages. Ideally, this should be an
2752@@ -251,12 +255,14 @@
2753 release = lsb_release()['DISTRIB_CODENAME']
2754 with open('/etc/apt/sources.list.d/proposed.list', 'w') as apt:
2755 apt.write(PROPOSED_POCKET.format(release))
2756+ elif source == 'distro':
2757+ pass
2758 else:
2759- raise SourceConfigError("Unknown source: {!r}".format(source))
2760+ log("Unknown source: {!r}".format(source))
2761
2762 if key:
2763 if '-----BEGIN PGP PUBLIC KEY BLOCK-----' in key:
2764- with NamedTemporaryFile() as key_file:
2765+ with NamedTemporaryFile('w+') as key_file:
2766 key_file.write(key)
2767 key_file.flush()
2768 key_file.seek(0)
2769@@ -293,14 +299,14 @@
2770 sources = safe_load((config(sources_var) or '').strip()) or []
2771 keys = safe_load((config(keys_var) or '').strip()) or None
2772
2773- if isinstance(sources, basestring):
2774+ if isinstance(sources, six.string_types):
2775 sources = [sources]
2776
2777 if keys is None:
2778 for source in sources:
2779 add_source(source, None)
2780 else:
2781- if isinstance(keys, basestring):
2782+ if isinstance(keys, six.string_types):
2783 keys = [keys]
2784
2785 if len(sources) != len(keys):
2786@@ -397,7 +403,7 @@
2787 while result is None or result == APT_NO_LOCK:
2788 try:
2789 result = subprocess.check_call(cmd, env=env)
2790- except subprocess.CalledProcessError, e:
2791+ except subprocess.CalledProcessError as e:
2792 retry_count = retry_count + 1
2793 if retry_count > APT_NO_LOCK_RETRY_COUNT:
2794 raise
2795
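
A behavioural change worth noting here: add_source() now treats 'distro' as a no-op and merely logs unknown sources instead of raising SourceConfigError. A rough sketch of the resulting dispatch (simplified for illustration; the real handler writes apt source files and imports keys):

    def classify_source(source):
        # Simplified view of the branches in add_source() after this change.
        if source.startswith(('ppa:', 'deb ', 'http:', 'https:')):
            return 'apt source line'
        if source.startswith('cloud:'):
            return 'cloud archive pocket'
        if source == 'proposed':
            return 'proposed pocket'
        if source == 'distro':
            return 'no-op'
        return 'unknown (logged, not fatal)'

    for s in ('distro', 'cloud:trusty-juno', 'ppa:openstack-charmers/next'):
        print(s, '->', classify_source(s))
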
2796=== modified file 'hooks/charmhelpers/fetch/archiveurl.py'
2797--- hooks/charmhelpers/fetch/archiveurl.py 2014-09-25 15:37:05 +0000
2798+++ hooks/charmhelpers/fetch/archiveurl.py 2014-11-26 13:43:45 +0000
2799@@ -1,8 +1,23 @@
2800 import os
2801-import urllib2
2802-from urllib import urlretrieve
2803-import urlparse
2804 import hashlib
2805+import re
2806+
2807+import six
2808+if six.PY3:
2809+ from urllib.request import (
2810+ build_opener, install_opener, urlopen, urlretrieve,
2811+ HTTPPasswordMgrWithDefaultRealm, HTTPBasicAuthHandler,
2812+ )
2813+ from urllib.parse import urlparse, urlunparse, parse_qs
2814+ from urllib.error import URLError
2815+else:
2816+ from urllib import urlretrieve
2817+ from urllib2 import (
2818+ build_opener, install_opener, urlopen,
2819+ HTTPPasswordMgrWithDefaultRealm, HTTPBasicAuthHandler,
2820+ URLError
2821+ )
2822+ from urlparse import urlparse, urlunparse, parse_qs
2823
2824 from charmhelpers.fetch import (
2825 BaseFetchHandler,
2826@@ -15,6 +30,24 @@
2827 from charmhelpers.core.host import mkdir, check_hash
2828
2829
2830+def splituser(host):
2831+ '''urllib.splituser(), but six's support of this seems broken'''
2832+ _userprog = re.compile('^(.*)@(.*)$')
2833+ match = _userprog.match(host)
2834+ if match:
2835+ return match.group(1, 2)
2836+ return None, host
2837+
2838+
2839+def splitpasswd(user):
2840+ '''urllib.splitpasswd(), but six's support of this is missing'''
2841+ _passwdprog = re.compile('^([^:]*):(.*)$', re.S)
2842+ match = _passwdprog.match(user)
2843+ if match:
2844+ return match.group(1, 2)
2845+ return user, None
2846+
2847+
2848 class ArchiveUrlFetchHandler(BaseFetchHandler):
2849 """
2850 Handler to download archive files from arbitrary URLs.
2851@@ -42,20 +75,20 @@
2852 """
2853 # propogate all exceptions
2854 # URLError, OSError, etc
2855- proto, netloc, path, params, query, fragment = urlparse.urlparse(source)
2856+ proto, netloc, path, params, query, fragment = urlparse(source)
2857 if proto in ('http', 'https'):
2858- auth, barehost = urllib2.splituser(netloc)
2859+ auth, barehost = splituser(netloc)
2860 if auth is not None:
2861- source = urlparse.urlunparse((proto, barehost, path, params, query, fragment))
2862- username, password = urllib2.splitpasswd(auth)
2863- passman = urllib2.HTTPPasswordMgrWithDefaultRealm()
2864+ source = urlunparse((proto, barehost, path, params, query, fragment))
2865+ username, password = splitpasswd(auth)
2866+ passman = HTTPPasswordMgrWithDefaultRealm()
2867 # Realm is set to None in add_password to force the username and password
2868 # to be used whatever the realm
2869 passman.add_password(None, source, username, password)
2870- authhandler = urllib2.HTTPBasicAuthHandler(passman)
2871- opener = urllib2.build_opener(authhandler)
2872- urllib2.install_opener(opener)
2873- response = urllib2.urlopen(source)
2874+ authhandler = HTTPBasicAuthHandler(passman)
2875+ opener = build_opener(authhandler)
2876+ install_opener(opener)
2877+ response = urlopen(source)
2878 try:
2879 with open(dest, 'w') as dest_file:
2880 dest_file.write(response.read())
2881@@ -91,17 +124,21 @@
2882 url_parts = self.parse_url(source)
2883 dest_dir = os.path.join(os.environ.get('CHARM_DIR'), 'fetched')
2884 if not os.path.exists(dest_dir):
2885- mkdir(dest_dir, perms=0755)
2886+ mkdir(dest_dir, perms=0o755)
2887 dld_file = os.path.join(dest_dir, os.path.basename(url_parts.path))
2888 try:
2889 self.download(source, dld_file)
2890- except urllib2.URLError as e:
2891+ except URLError as e:
2892 raise UnhandledSource(e.reason)
2893 except OSError as e:
2894 raise UnhandledSource(e.strerror)
2895- options = urlparse.parse_qs(url_parts.fragment)
2896+ options = parse_qs(url_parts.fragment)
2897 for key, value in options.items():
2898- if key in hashlib.algorithms:
2899+ if not six.PY3:
2900+ algorithms = hashlib.algorithms
2901+ else:
2902+ algorithms = hashlib.algorithms_available
2903+ if key in algorithms:
2904 check_hash(dld_file, value, key)
2905 if checksum:
2906 check_hash(dld_file, checksum, hash_type)
2907
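
The new splituser()/splitpasswd() helpers replace the urllib2 functions that are not portable across Python versions. They are plain regex splits, so a standalone check is straightforward (the credentials below are made up for the example):

    import re

    def splituser(host):
        match = re.match('^(.*)@(.*)$', host)
        return match.group(1, 2) if match else (None, host)

    def splitpasswd(user):
        match = re.match('^([^:]*):(.*)$', user, re.S)
        return match.group(1, 2) if match else (user, None)

    auth, barehost = splituser('deploy:s3cret@archive.example.com')
    print(auth, barehost)        # deploy:s3cret archive.example.com
    print(splitpasswd(auth))     # ('deploy', 's3cret')
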
2908=== modified file 'hooks/charmhelpers/fetch/bzrurl.py'
2909--- hooks/charmhelpers/fetch/bzrurl.py 2014-06-24 13:40:39 +0000
2910+++ hooks/charmhelpers/fetch/bzrurl.py 2014-11-26 13:43:45 +0000
2911@@ -5,6 +5,10 @@
2912 )
2913 from charmhelpers.core.host import mkdir
2914
2915+import six
2916+if six.PY3:
2917+ raise ImportError('bzrlib does not support Python3')
2918+
2919 try:
2920 from bzrlib.branch import Branch
2921 except ImportError:
2922@@ -42,7 +46,7 @@
2923 dest_dir = os.path.join(os.environ.get('CHARM_DIR'), "fetched",
2924 branch_name)
2925 if not os.path.exists(dest_dir):
2926- mkdir(dest_dir, perms=0755)
2927+ mkdir(dest_dir, perms=0o755)
2928 try:
2929 self.branch(source, dest_dir)
2930 except OSError as e:
2931
2932=== added file 'hooks/charmhelpers/fetch/giturl.py'
2933--- hooks/charmhelpers/fetch/giturl.py 1970-01-01 00:00:00 +0000
2934+++ hooks/charmhelpers/fetch/giturl.py 2014-11-26 13:43:45 +0000
2935@@ -0,0 +1,48 @@
2936+import os
2937+from charmhelpers.fetch import (
2938+ BaseFetchHandler,
2939+ UnhandledSource
2940+)
2941+from charmhelpers.core.host import mkdir
2942+
2943+import six
2944+if six.PY3:
2945+ raise ImportError('GitPython does not support Python 3')
2946+
2947+try:
2948+ from git import Repo
2949+except ImportError:
2950+ from charmhelpers.fetch import apt_install
2951+ apt_install("python-git")
2952+ from git import Repo
2953+
2954+
2955+class GitUrlFetchHandler(BaseFetchHandler):
2956+ """Handler for git branches via generic and github URLs"""
2957+ def can_handle(self, source):
2958+ url_parts = self.parse_url(source)
2959+ # TODO (mattyw) no support for ssh git@ yet
2960+ if url_parts.scheme not in ('http', 'https', 'git'):
2961+ return False
2962+ else:
2963+ return True
2964+
2965+ def clone(self, source, dest, branch):
2966+ if not self.can_handle(source):
2967+ raise UnhandledSource("Cannot handle {}".format(source))
2968+
2969+ repo = Repo.clone_from(source, dest)
2970+ repo.git.checkout(branch)
2971+
2972+ def install(self, source, branch="master"):
2973+ url_parts = self.parse_url(source)
2974+ branch_name = url_parts.path.strip("/").split("/")[-1]
2975+ dest_dir = os.path.join(os.environ.get('CHARM_DIR'), "fetched",
2976+ branch_name)
2977+ if not os.path.exists(dest_dir):
2978+ mkdir(dest_dir, perms=0o755)
2979+ try:
2980+ self.clone(source, dest_dir, branch)
2981+ except OSError as e:
2982+ raise UnhandledSource(e.strerror)
2983+ return dest_dir
2984
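
The added GitUrlFetchHandler only accepts http(s) and git:// URLs for now; ssh-style git@ addresses are explicitly out of scope. The acceptance rule reduces to a scheme check, sketched below (assumes six is available, as elsewhere in this branch):

    import six
    if six.PY3:
        from urllib.parse import urlparse
    else:
        from urlparse import urlparse

    def can_handle(source):
        # Same rule as GitUrlFetchHandler.can_handle(): only plain http(s)
        # and git:// URLs are accepted.
        return urlparse(source).scheme in ('http', 'https', 'git')

    print(can_handle('https://github.com/juju/charm-helpers'))   # True
    print(can_handle('git@github.com:juju/charm-helpers.git'))   # False
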
2985=== modified file 'hooks/quantum_hooks.py'
2986--- hooks/quantum_hooks.py 2014-11-19 03:09:34 +0000
2987+++ hooks/quantum_hooks.py 2014-11-26 13:43:45 +0000
2988@@ -33,6 +33,8 @@
2989 openstack_upgrade_available,
2990 )
2991 from charmhelpers.payload.execd import execd_preinstall
2992+from charmhelpers.core.sysctl import create as create_sysctl
2993+
2994
2995 import sys
2996 from quantum_utils import (
2997@@ -77,6 +79,11 @@
2998 global CONFIGS
2999 if openstack_upgrade_available(get_common_package()):
3000 CONFIGS = do_openstack_upgrade()
3001+
3002+ sysctl_dict = config('sysctl')
3003+ if sysctl_dict:
3004+ create_sysctl(sysctl_dict, '/etc/sysctl.d/50-quantum-gateway.conf')
3005+
3006 # Re-run joined hooks as config might have changed
3007 for r_id in relation_ids('shared-db'):
3008 db_joined(relation_id=r_id)
3009@@ -92,6 +99,7 @@
3010 else:
3011 log('Please provide a valid plugin config', level=ERROR)
3012 sys.exit(1)
3013+
3014 if config('plugin') == 'n1kv':
3015 if config('enable-l3-agent'):
3016 apt_install(filter_installed_packages('neutron-l3-agent'))
3017
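
This hunk is the actual fix for LP: #1396607: config-changed now feeds the charm's 'sysctl' option (a YAML mapping) to charmhelpers.core.sysctl.create(), which renders /etc/sysctl.d/50-quantum-gateway.conf and applies it. A rough, standalone sketch of the rendering step only (the real helper also applies the settings with sysctl); the sample value is taken from the unit test further down:

    import yaml

    def render_sysctl(sysctl_yaml, path):
        # Parse the YAML mapping from the charm option and emit one
        # "key = value" line per entry.
        settings = yaml.safe_load(sysctl_yaml)
        with open(path, 'w') as f:
            for key, value in settings.items():
                f.write("%s = %s\n" % (key, value))

    render_sysctl('{ kernel.max_pid: "1337"}', '/tmp/50-quantum-gateway.conf')
    print(open('/tmp/50-quantum-gateway.conf').read())
    # kernel.max_pid = 1337
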
3018=== modified file 'tests/charmhelpers/contrib/amulet/deployment.py'
3019--- tests/charmhelpers/contrib/amulet/deployment.py 2014-10-07 21:03:47 +0000
3020+++ tests/charmhelpers/contrib/amulet/deployment.py 2014-11-26 13:43:45 +0000
3021@@ -1,6 +1,6 @@
3022 import amulet
3023-
3024 import os
3025+import six
3026
3027
3028 class AmuletDeployment(object):
3029@@ -52,12 +52,12 @@
3030
3031 def _add_relations(self, relations):
3032 """Add all of the relations for the services."""
3033- for k, v in relations.iteritems():
3034+ for k, v in six.iteritems(relations):
3035 self.d.relate(k, v)
3036
3037 def _configure_services(self, configs):
3038 """Configure all of the services."""
3039- for service, config in configs.iteritems():
3040+ for service, config in six.iteritems(configs):
3041 self.d.configure(service, config)
3042
3043 def _deploy(self):
3044
3045=== modified file 'tests/charmhelpers/contrib/amulet/utils.py'
3046--- tests/charmhelpers/contrib/amulet/utils.py 2014-07-30 15:21:30 +0000
3047+++ tests/charmhelpers/contrib/amulet/utils.py 2014-11-26 13:43:45 +0000
3048@@ -5,6 +5,8 @@
3049 import sys
3050 import time
3051
3052+import six
3053+
3054
3055 class AmuletUtils(object):
3056 """Amulet utilities.
3057@@ -58,7 +60,7 @@
3058 Verify the specified services are running on the corresponding
3059 service units.
3060 """
3061- for k, v in commands.iteritems():
3062+ for k, v in six.iteritems(commands):
3063 for cmd in v:
3064 output, code = k.run(cmd)
3065 if code != 0:
3066@@ -100,11 +102,11 @@
3067 longs, or can be a function that evaluate a variable and returns a
3068 bool.
3069 """
3070- for k, v in expected.iteritems():
3071+ for k, v in six.iteritems(expected):
3072 if k in actual:
3073- if (isinstance(v, basestring) or
3074+ if (isinstance(v, six.string_types) or
3075 isinstance(v, bool) or
3076- isinstance(v, (int, long))):
3077+ isinstance(v, six.integer_types)):
3078 if v != actual[k]:
3079 return "{}:{}".format(k, actual[k])
3080 elif not v(actual[k]):
3081
3082=== modified file 'tests/charmhelpers/contrib/openstack/amulet/deployment.py'
3083--- tests/charmhelpers/contrib/openstack/amulet/deployment.py 2014-10-07 21:03:47 +0000
3084+++ tests/charmhelpers/contrib/openstack/amulet/deployment.py 2014-11-26 13:43:45 +0000
3085@@ -1,3 +1,4 @@
3086+import six
3087 from charmhelpers.contrib.amulet.deployment import (
3088 AmuletDeployment
3089 )
3090@@ -69,7 +70,7 @@
3091
3092 def _configure_services(self, configs):
3093 """Configure all of the services."""
3094- for service, config in configs.iteritems():
3095+ for service, config in six.iteritems(configs):
3096 self.d.configure(service, config)
3097
3098 def _get_openstack_release(self):
3099
3100=== modified file 'tests/charmhelpers/contrib/openstack/amulet/utils.py'
3101--- tests/charmhelpers/contrib/openstack/amulet/utils.py 2014-09-25 15:37:05 +0000
3102+++ tests/charmhelpers/contrib/openstack/amulet/utils.py 2014-11-26 13:43:45 +0000
3103@@ -7,6 +7,8 @@
3104 import keystoneclient.v2_0 as keystone_client
3105 import novaclient.v1_1.client as nova_client
3106
3107+import six
3108+
3109 from charmhelpers.contrib.amulet.utils import (
3110 AmuletUtils
3111 )
3112@@ -60,7 +62,7 @@
3113 expected service catalog endpoints.
3114 """
3115 self.log.debug('actual: {}'.format(repr(actual)))
3116- for k, v in expected.iteritems():
3117+ for k, v in six.iteritems(expected):
3118 if k in actual:
3119 ret = self._validate_dict_data(expected[k][0], actual[k][0])
3120 if ret:
3121
3122=== modified file 'unit_tests/test_quantum_hooks.py'
3123--- unit_tests/test_quantum_hooks.py 2014-11-19 03:09:34 +0000
3124+++ unit_tests/test_quantum_hooks.py 2014-11-26 13:43:45 +0000
3125@@ -40,7 +40,8 @@
3126 'lsb_release',
3127 'stop_services',
3128 'b64decode',
3129- 'is_relation_made'
3130+ 'is_relation_made',
3131+ 'create_sysctl',
3132 ]
3133
3134
3135@@ -98,6 +99,7 @@
3136 def test_config_changed(self):
3137 def mock_relids(rel):
3138 return ['relid']
3139+ self.test_config.set('sysctl', '{ kernel.max_pid: "1337"}')
3140 self.openstack_upgrade_available.return_value = True
3141 self.valid_plugin.return_value = True
3142 self.relation_ids.side_effect = mock_relids
3143@@ -106,6 +108,7 @@
3144 _amqp_joined = self.patch('amqp_joined')
3145 _amqp_nova_joined = self.patch('amqp_nova_joined')
3146 self._call_hook('config-changed')
3147+ self.assertTrue(self.create_sysctl.called)
3148 self.assertTrue(self.do_openstack_upgrade.called)
3149 self.assertTrue(self.configure_ovs.called)
3150 self.assertTrue(_db_joined.called)
