Merge lp:~corey.bryant/charms/trusty/keystone/contrib.python.packages into lp:~openstack-charmers-archive/charms/trusty/keystone/next

Proposed by Corey Bryant
Status: Merged
Merged at revision: 90
Proposed branch: lp:~corey.bryant/charms/trusty/keystone/contrib.python.packages
Merge into: lp:~openstack-charmers-archive/charms/trusty/keystone/next
Diff against target: 3346 lines (+899/-536)
33 files modified
charm-helpers-hooks.yaml (+1/-0)
hooks/charmhelpers/__init__.py (+22/-0)
hooks/charmhelpers/contrib/hahelpers/cluster.py (+16/-7)
hooks/charmhelpers/contrib/network/ip.py (+52/-48)
hooks/charmhelpers/contrib/openstack/amulet/deployment.py (+2/-1)
hooks/charmhelpers/contrib/openstack/amulet/utils.py (+3/-1)
hooks/charmhelpers/contrib/openstack/context.py (+217/-219)
hooks/charmhelpers/contrib/openstack/ip.py (+41/-27)
hooks/charmhelpers/contrib/openstack/neutron.py (+4/-3)
hooks/charmhelpers/contrib/openstack/templates/haproxy.cfg (+2/-2)
hooks/charmhelpers/contrib/openstack/templating.py (+5/-5)
hooks/charmhelpers/contrib/openstack/utils.py (+122/-13)
hooks/charmhelpers/contrib/peerstorage/__init__.py (+4/-3)
hooks/charmhelpers/contrib/python/packages.py (+77/-0)
hooks/charmhelpers/contrib/storage/linux/ceph.py (+83/-97)
hooks/charmhelpers/contrib/storage/linux/loopback.py (+4/-4)
hooks/charmhelpers/contrib/storage/linux/lvm.py (+1/-0)
hooks/charmhelpers/contrib/storage/linux/utils.py (+3/-2)
hooks/charmhelpers/contrib/unison/__init__.py (+21/-14)
hooks/charmhelpers/core/fstab.py (+10/-8)
hooks/charmhelpers/core/hookenv.py (+36/-16)
hooks/charmhelpers/core/host.py (+43/-18)
hooks/charmhelpers/core/services/helpers.py (+9/-5)
hooks/charmhelpers/core/templating.py (+2/-1)
hooks/charmhelpers/fetch/__init__.py (+13/-11)
hooks/charmhelpers/fetch/archiveurl.py (+53/-16)
hooks/charmhelpers/fetch/bzrurl.py (+5/-1)
hooks/charmhelpers/fetch/giturl.py (+12/-5)
tests/charmhelpers/__init__.py (+22/-0)
tests/charmhelpers/contrib/amulet/deployment.py (+3/-3)
tests/charmhelpers/contrib/amulet/utils.py (+6/-4)
tests/charmhelpers/contrib/openstack/amulet/deployment.py (+2/-1)
tests/charmhelpers/contrib/openstack/amulet/utils.py (+3/-1)
To merge this branch: bzr merge lp:~corey.bryant/charms/trusty/keystone/contrib.python.packages
Reviewer: OpenStack Charmers
Status: Pending
Review via email: mp+244323@code.launchpad.net
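The sync also brings in the new contrib.python.packages module: it is added to charm-helpers-hooks.yaml and lands as the 77-line packages.py in the file list above, and the contrib/openstack/utils.py hunks below start importing pip_install from it for the new git-based install path. As a rough sketch of the kind of call that module enables from a charm hook (the helper name is taken from the import in the diff; the package name below is only a placeholder, not something this proposal installs):

    # Illustrative only: install a Python dependency with the synced
    # contrib.python.packages helper. 'pbr' is a placeholder package name,
    # not part of this merge proposal.
    from charmhelpers.contrib.python.packages import pip_install

    def install_python_deps():
        pip_install('pbr')
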
uosci-testing-bot (uosci-testing-bot) wrote :

charm_lint_check #163 keystone-next for corey.bryant mp244323
    LINT OK: passed

Build: http://10.230.18.80:8080/job/charm_lint_check/163/

uosci-testing-bot (uosci-testing-bot) wrote :

charm_unit_test #126 keystone-next for corey.bryant mp244323
    UNIT FAIL: unit-test failed

UNIT Results (max last 2 lines):
  FAILED (errors=3)
  make: *** [unit_test] Error 1

Full unit test output: pastebin not avail., cmd error
Build: http://10.230.18.80:8080/job/charm_unit_test/126/

uosci-testing-bot (uosci-testing-bot) wrote :

charm_amulet_test #76 keystone-next for corey.bryant mp244323
    AMULET FAIL: amulet-test failed

AMULET Results (max last 2 lines):
  ERROR subprocess encountered error code 2
  make: *** [test] Error 2

Full amulet test output: pastebin not avail., cmd error
Build: http://10.230.18.80:8080/job/charm_amulet_test/76/

uosci-testing-bot (uosci-testing-bot) wrote :

charm_lint_check #185 keystone-next for corey.bryant mp244323
    LINT OK: passed

Build: http://10.230.18.80:8080/job/charm_lint_check/185/

uosci-testing-bot (uosci-testing-bot) wrote :

charm_unit_test #148 keystone-next for corey.bryant mp244323
    UNIT OK: passed

Build: http://10.230.18.80:8080/job/charm_unit_test/148/

uosci-testing-bot (uosci-testing-bot) wrote :

charm_amulet_test #103 keystone-next for corey.bryant mp244323
    AMULET FAIL: amulet-test failed

AMULET Results (max last 2 lines):
  ERROR subprocess encountered error code 1
  make: *** [test] Error 1

Full amulet test output: pastebin not avail., cmd error
Build: http://10.230.18.80:8080/job/charm_amulet_test/103/

uosci-testing-bot (uosci-testing-bot) wrote :

charm_lint_check #209 keystone-next for corey.bryant mp244323
    LINT OK: passed

Build: http://10.230.18.80:8080/job/charm_lint_check/209/

uosci-testing-bot (uosci-testing-bot) wrote :

charm_unit_test #172 keystone-next for corey.bryant mp244323
    UNIT OK: passed

Build: http://10.230.18.80:8080/job/charm_unit_test/172/

uosci-testing-bot (uosci-testing-bot) wrote :

charm_amulet_test #127 keystone-next for corey.bryant mp244323
    AMULET FAIL: amulet-test failed

AMULET Results (max last 2 lines):
  ERROR subprocess encountered error code 1
  make: *** [test] Error 1

Full amulet test output: pastebin not avail., cmd error
Build: http://10.230.18.80:8080/job/charm_amulet_test/127/

uosci-testing-bot (uosci-testing-bot) wrote :

charm_lint_check #227 keystone-next for corey.bryant mp244323
    LINT OK: passed

Build: http://10.230.18.80:8080/job/charm_lint_check/227/

uosci-testing-bot (uosci-testing-bot) wrote :

charm_unit_test #190 keystone-next for corey.bryant mp244323
    UNIT OK: passed

Build: http://10.230.18.80:8080/job/charm_unit_test/190/

uosci-testing-bot (uosci-testing-bot) wrote :

charm_amulet_test #144 keystone-next for corey.bryant mp244323
    AMULET FAIL: amulet-test failed

AMULET Results (max last 2 lines):
  ERROR subprocess encountered error code 1
  make: *** [test] Error 1

Full amulet test output: pastebin not avail., cmd error
Build: http://10.230.18.80:8080/job/charm_amulet_test/144/

Preview Diff
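
Most hunks below apply one recurring Python 2/3 compatibility pattern across charm-helpers: dict iteration goes through six (six.iteritems, six.itervalues), basestring becomes six.string_types, xrange becomes range, and subprocess.check_output() results are decoded from bytes before any string handling. A minimal sketch of that pattern, illustrative rather than code lifted from the charm:

    # Sketch of the py2/py3 compatibility idioms used throughout this diff.
    import subprocess

    import six

    def missing_settings(conf):
        # six.iteritems works on both Python 2 and 3 (replaces dict.iteritems).
        return [k for k, v in six.iteritems(conf) if v is None]

    def current_kernel():
        # check_output returns bytes on Python 3, so decode before strip().
        return subprocess.check_output(['uname', '-r']).decode('UTF-8').strip()

    def is_single_value(value):
        # six.string_types replaces the Python 2-only basestring check.
        return isinstance(value, six.string_types)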

1=== modified file 'charm-helpers-hooks.yaml'
2--- charm-helpers-hooks.yaml 2014-10-27 02:20:30 +0000
3+++ charm-helpers-hooks.yaml 2014-12-11 17:56:34 +0000
4@@ -12,3 +12,4 @@
5 - payload.execd
6 - contrib.peerstorage
7 - contrib.network.ip
8+ - contrib.python.packages
9
10=== added file 'hooks/charmhelpers/__init__.py'
11--- hooks/charmhelpers/__init__.py 1970-01-01 00:00:00 +0000
12+++ hooks/charmhelpers/__init__.py 2014-12-11 17:56:34 +0000
13@@ -0,0 +1,22 @@
14+# Bootstrap charm-helpers, installing its dependencies if necessary using
15+# only standard libraries.
16+import subprocess
17+import sys
18+
19+try:
20+ import six # flake8: noqa
21+except ImportError:
22+ if sys.version_info.major == 2:
23+ subprocess.check_call(['apt-get', 'install', '-y', 'python-six'])
24+ else:
25+ subprocess.check_call(['apt-get', 'install', '-y', 'python3-six'])
26+ import six # flake8: noqa
27+
28+try:
29+ import yaml # flake8: noqa
30+except ImportError:
31+ if sys.version_info.major == 2:
32+ subprocess.check_call(['apt-get', 'install', '-y', 'python-yaml'])
33+ else:
34+ subprocess.check_call(['apt-get', 'install', '-y', 'python3-yaml'])
35+ import yaml # flake8: noqa
36
37=== removed file 'hooks/charmhelpers/__init__.py'
38=== modified file 'hooks/charmhelpers/contrib/hahelpers/cluster.py'
39--- hooks/charmhelpers/contrib/hahelpers/cluster.py 2014-10-06 20:51:06 +0000
40+++ hooks/charmhelpers/contrib/hahelpers/cluster.py 2014-12-11 17:56:34 +0000
41@@ -13,9 +13,10 @@
42
43 import subprocess
44 import os
45-
46 from socket import gethostname as get_unit_hostname
47
48+import six
49+
50 from charmhelpers.core.hookenv import (
51 log,
52 relation_ids,
53@@ -77,7 +78,7 @@
54 "show", resource
55 ]
56 try:
57- status = subprocess.check_output(cmd)
58+ status = subprocess.check_output(cmd).decode('UTF-8')
59 except subprocess.CalledProcessError:
60 return False
61 else:
62@@ -150,34 +151,42 @@
63 return False
64
65
66-def determine_api_port(public_port):
67+def determine_api_port(public_port, singlenode_mode=False):
68 '''
69 Determine correct API server listening port based on
70 existence of HTTPS reverse proxy and/or haproxy.
71
72 public_port: int: standard public port for given service
73
74+ singlenode_mode: boolean: Shuffle ports when only a single unit is present
75+
76 returns: int: the correct listening port for the API service
77 '''
78 i = 0
79- if len(peer_units()) > 0 or is_clustered():
80+ if singlenode_mode:
81+ i += 1
82+ elif len(peer_units()) > 0 or is_clustered():
83 i += 1
84 if https():
85 i += 1
86 return public_port - (i * 10)
87
88
89-def determine_apache_port(public_port):
90+def determine_apache_port(public_port, singlenode_mode=False):
91 '''
92 Description: Determine correct apache listening port based on public IP +
93 state of the cluster.
94
95 public_port: int: standard public port for given service
96
97+ singlenode_mode: boolean: Shuffle ports when only a single unit is present
98+
99 returns: int: the correct listening port for the HAProxy service
100 '''
101 i = 0
102- if len(peer_units()) > 0 or is_clustered():
103+ if singlenode_mode:
104+ i += 1
105+ elif len(peer_units()) > 0 or is_clustered():
106 i += 1
107 return public_port - (i * 10)
108
109@@ -197,7 +206,7 @@
110 for setting in settings:
111 conf[setting] = config_get(setting)
112 missing = []
113- [missing.append(s) for s, v in conf.iteritems() if v is None]
114+ [missing.append(s) for s, v in six.iteritems(conf) if v is None]
115 if missing:
116 log('Insufficient config data to configure hacluster.', level=ERROR)
117 raise HAIncompleteConfig
118
119=== modified file 'hooks/charmhelpers/contrib/network/ip.py'
120--- hooks/charmhelpers/contrib/network/ip.py 2014-10-27 02:14:51 +0000
121+++ hooks/charmhelpers/contrib/network/ip.py 2014-12-11 17:56:34 +0000
122@@ -1,14 +1,12 @@
123 import glob
124 import re
125 import subprocess
126-import sys
127
128 from functools import partial
129
130 from charmhelpers.core.hookenv import unit_get
131 from charmhelpers.fetch import apt_install
132 from charmhelpers.core.hookenv import (
133- ERROR,
134 log
135 )
136
137@@ -33,31 +31,28 @@
138 network)
139
140
141+def no_ip_found_error_out(network):
142+ errmsg = ("No IP address found in network: %s" % network)
143+ raise ValueError(errmsg)
144+
145+
146 def get_address_in_network(network, fallback=None, fatal=False):
147- """
148- Get an IPv4 or IPv6 address within the network from the host.
149+ """Get an IPv4 or IPv6 address within the network from the host.
150
151 :param network (str): CIDR presentation format. For example,
152 '192.168.1.0/24'.
153 :param fallback (str): If no address is found, return fallback.
154 :param fatal (boolean): If no address is found, fallback is not
155 set and fatal is True then exit(1).
156-
157 """
158-
159- def not_found_error_out():
160- log("No IP address found in network: %s" % network,
161- level=ERROR)
162- sys.exit(1)
163-
164 if network is None:
165 if fallback is not None:
166 return fallback
167+
168+ if fatal:
169+ no_ip_found_error_out(network)
170 else:
171- if fatal:
172- not_found_error_out()
173- else:
174- return None
175+ return None
176
177 _validate_cidr(network)
178 network = netaddr.IPNetwork(network)
179@@ -69,6 +64,7 @@
180 cidr = netaddr.IPNetwork("%s/%s" % (addr, netmask))
181 if cidr in network:
182 return str(cidr.ip)
183+
184 if network.version == 6 and netifaces.AF_INET6 in addresses:
185 for addr in addresses[netifaces.AF_INET6]:
186 if not addr['addr'].startswith('fe80'):
187@@ -81,20 +77,20 @@
188 return fallback
189
190 if fatal:
191- not_found_error_out()
192+ no_ip_found_error_out(network)
193
194 return None
195
196
197 def is_ipv6(address):
198- '''Determine whether provided address is IPv6 or not'''
199+ """Determine whether provided address is IPv6 or not."""
200 try:
201 address = netaddr.IPAddress(address)
202 except netaddr.AddrFormatError:
203 # probably a hostname - so not an address at all!
204 return False
205- else:
206- return address.version == 6
207+
208+ return address.version == 6
209
210
211 def is_address_in_network(network, address):
212@@ -112,11 +108,13 @@
213 except (netaddr.core.AddrFormatError, ValueError):
214 raise ValueError("Network (%s) is not in CIDR presentation format" %
215 network)
216+
217 try:
218 address = netaddr.IPAddress(address)
219 except (netaddr.core.AddrFormatError, ValueError):
220 raise ValueError("Address (%s) is not in correct presentation format" %
221 address)
222+
223 if address in network:
224 return True
225 else:
226@@ -146,6 +144,7 @@
227 return iface
228 else:
229 return addresses[netifaces.AF_INET][0][key]
230+
231 if address.version == 6 and netifaces.AF_INET6 in addresses:
232 for addr in addresses[netifaces.AF_INET6]:
233 if not addr['addr'].startswith('fe80'):
234@@ -159,40 +158,42 @@
235 return str(cidr).split('/')[1]
236 else:
237 return addr[key]
238+
239 return None
240
241
242 get_iface_for_address = partial(_get_for_address, key='iface')
243
244+
245 get_netmask_for_address = partial(_get_for_address, key='netmask')
246
247
248 def format_ipv6_addr(address):
249- """
250- IPv6 needs to be wrapped with [] in url link to parse correctly.
251+ """If address is IPv6, wrap it in '[]' otherwise return None.
252+
253+ This is required by most configuration files when specifying IPv6
254+ addresses.
255 """
256 if is_ipv6(address):
257- address = "[%s]" % address
258- else:
259- address = None
260+ return "[%s]" % address
261
262- return address
263+ return None
264
265
266 def get_iface_addr(iface='eth0', inet_type='AF_INET', inc_aliases=False,
267 fatal=True, exc_list=None):
268- """
269- Return the assigned IP address for a given interface, if any, or [].
270- """
271+ """Return the assigned IP address for a given interface, if any."""
272 # Extract nic if passed /dev/ethX
273 if '/' in iface:
274 iface = iface.split('/')[-1]
275+
276 if not exc_list:
277 exc_list = []
278+
279 try:
280 inet_num = getattr(netifaces, inet_type)
281 except AttributeError:
282- raise Exception('Unknown inet type ' + str(inet_type))
283+ raise Exception("Unknown inet type '%s'" % str(inet_type))
284
285 interfaces = netifaces.interfaces()
286 if inc_aliases:
287@@ -200,15 +201,18 @@
288 for _iface in interfaces:
289 if iface == _iface or _iface.split(':')[0] == iface:
290 ifaces.append(_iface)
291+
292 if fatal and not ifaces:
293 raise Exception("Invalid interface '%s'" % iface)
294+
295 ifaces.sort()
296 else:
297 if iface not in interfaces:
298 if fatal:
299- raise Exception("%s not found " % (iface))
300+ raise Exception("Interface '%s' not found " % (iface))
301 else:
302 return []
303+
304 else:
305 ifaces = [iface]
306
307@@ -219,10 +223,13 @@
308 for entry in net_info[inet_num]:
309 if 'addr' in entry and entry['addr'] not in exc_list:
310 addresses.append(entry['addr'])
311+
312 if fatal and not addresses:
313 raise Exception("Interface '%s' doesn't have any %s addresses." %
314 (iface, inet_type))
315- return addresses
316+
317+ return sorted(addresses)
318+
319
320 get_ipv4_addr = partial(get_iface_addr, inet_type='AF_INET')
321
322@@ -239,6 +246,7 @@
323 raw = re.match(ll_key, _addr)
324 if raw:
325 _addr = raw.group(1)
326+
327 if _addr == addr:
328 log("Address '%s' is configured on iface '%s'" %
329 (addr, iface))
330@@ -249,8 +257,9 @@
331
332
333 def sniff_iface(f):
334- """If no iface provided, inject net iface inferred from unit private
335- address.
336+ """Ensure decorated function is called with a value for iface.
337+
338+ If no iface provided, inject net iface inferred from unit private address.
339 """
340 def iface_sniffer(*args, **kwargs):
341 if not kwargs.get('iface', None):
342@@ -293,7 +302,7 @@
343 if global_addrs:
344 # Make sure any found global addresses are not temporary
345 cmd = ['ip', 'addr', 'show', iface]
346- out = subprocess.check_output(cmd)
347+ out = subprocess.check_output(cmd).decode('UTF-8')
348 if dynamic_only:
349 key = re.compile("inet6 (.+)/[0-9]+ scope global dynamic.*")
350 else:
351@@ -315,33 +324,28 @@
352 return addrs
353
354 if fatal:
355- raise Exception("Interface '%s' doesn't have a scope global "
356+ raise Exception("Interface '%s' does not have a scope global "
357 "non-temporary ipv6 address." % iface)
358
359 return []
360
361
362 def get_bridges(vnic_dir='/sys/devices/virtual/net'):
363- """
364- Return a list of bridges on the system or []
365- """
366- b_rgex = vnic_dir + '/*/bridge'
367- return [x.replace(vnic_dir, '').split('/')[1] for x in glob.glob(b_rgex)]
368+ """Return a list of bridges on the system."""
369+ b_regex = "%s/*/bridge" % vnic_dir
370+ return [x.replace(vnic_dir, '').split('/')[1] for x in glob.glob(b_regex)]
371
372
373 def get_bridge_nics(bridge, vnic_dir='/sys/devices/virtual/net'):
374- """
375- Return a list of nics comprising a given bridge on the system or []
376- """
377- brif_rgex = "%s/%s/brif/*" % (vnic_dir, bridge)
378- return [x.split('/')[-1] for x in glob.glob(brif_rgex)]
379+ """Return a list of nics comprising a given bridge on the system."""
380+ brif_regex = "%s/%s/brif/*" % (vnic_dir, bridge)
381+ return [x.split('/')[-1] for x in glob.glob(brif_regex)]
382
383
384 def is_bridge_member(nic):
385- """
386- Check if a given nic is a member of a bridge
387- """
388+ """Check if a given nic is a member of a bridge."""
389 for bridge in get_bridges():
390 if nic in get_bridge_nics(bridge):
391 return True
392+
393 return False
394
395=== modified file 'hooks/charmhelpers/contrib/openstack/amulet/deployment.py'
396--- hooks/charmhelpers/contrib/openstack/amulet/deployment.py 2014-10-06 20:51:06 +0000
397+++ hooks/charmhelpers/contrib/openstack/amulet/deployment.py 2014-12-11 17:56:34 +0000
398@@ -1,3 +1,4 @@
399+import six
400 from charmhelpers.contrib.amulet.deployment import (
401 AmuletDeployment
402 )
403@@ -69,7 +70,7 @@
404
405 def _configure_services(self, configs):
406 """Configure all of the services."""
407- for service, config in configs.iteritems():
408+ for service, config in six.iteritems(configs):
409 self.d.configure(service, config)
410
411 def _get_openstack_release(self):
412
413=== modified file 'hooks/charmhelpers/contrib/openstack/amulet/utils.py'
414--- hooks/charmhelpers/contrib/openstack/amulet/utils.py 2014-10-06 20:51:06 +0000
415+++ hooks/charmhelpers/contrib/openstack/amulet/utils.py 2014-12-11 17:56:34 +0000
416@@ -7,6 +7,8 @@
417 import keystoneclient.v2_0 as keystone_client
418 import novaclient.v1_1.client as nova_client
419
420+import six
421+
422 from charmhelpers.contrib.amulet.utils import (
423 AmuletUtils
424 )
425@@ -60,7 +62,7 @@
426 expected service catalog endpoints.
427 """
428 self.log.debug('actual: {}'.format(repr(actual)))
429- for k, v in expected.iteritems():
430+ for k, v in six.iteritems(expected):
431 if k in actual:
432 ret = self._validate_dict_data(expected[k][0], actual[k][0])
433 if ret:
434
435=== modified file 'hooks/charmhelpers/contrib/openstack/context.py'
436--- hooks/charmhelpers/contrib/openstack/context.py 2014-11-14 02:16:38 +0000
437+++ hooks/charmhelpers/contrib/openstack/context.py 2014-12-11 17:56:34 +0000
438@@ -1,18 +1,15 @@
439 import json
440 import os
441 import time
442-
443 from base64 import b64decode
444+from subprocess import check_call
445
446-from subprocess import (
447- check_call
448-)
449+import six
450
451 from charmhelpers.fetch import (
452 apt_install,
453 filter_installed_packages,
454 )
455-
456 from charmhelpers.core.hookenv import (
457 config,
458 is_relation_made,
459@@ -24,44 +21,40 @@
460 relation_set,
461 unit_get,
462 unit_private_ip,
463+ DEBUG,
464+ INFO,
465+ WARNING,
466 ERROR,
467- DEBUG
468 )
469-
470 from charmhelpers.core.host import (
471 mkdir,
472- write_file
473+ write_file,
474 )
475-
476 from charmhelpers.contrib.hahelpers.cluster import (
477 determine_apache_port,
478 determine_api_port,
479 https,
480- is_clustered
481+ is_clustered,
482 )
483-
484 from charmhelpers.contrib.hahelpers.apache import (
485 get_cert,
486 get_ca_cert,
487 install_ca_cert,
488 )
489-
490 from charmhelpers.contrib.openstack.neutron import (
491 neutron_plugin_attribute,
492 )
493-
494 from charmhelpers.contrib.network.ip import (
495 get_address_in_network,
496 get_ipv6_addr,
497 get_netmask_for_address,
498 format_ipv6_addr,
499- is_address_in_network
500+ is_address_in_network,
501 )
502+from charmhelpers.contrib.openstack.utils import get_host_ip
503
504-from charmhelpers.contrib.openstack.utils import (
505- get_host_ip,
506-)
507 CA_CERT_PATH = '/usr/local/share/ca-certificates/keystone_juju_ca_cert.crt'
508+ADDRESS_TYPES = ['admin', 'internal', 'public']
509
510
511 class OSContextError(Exception):
512@@ -69,7 +62,7 @@
513
514
515 def ensure_packages(packages):
516- '''Install but do not upgrade required plugin packages'''
517+ """Install but do not upgrade required plugin packages."""
518 required = filter_installed_packages(packages)
519 if required:
520 apt_install(required, fatal=True)
521@@ -77,20 +70,27 @@
522
523 def context_complete(ctxt):
524 _missing = []
525- for k, v in ctxt.iteritems():
526+ for k, v in six.iteritems(ctxt):
527 if v is None or v == '':
528 _missing.append(k)
529+
530 if _missing:
531- log('Missing required data: %s' % ' '.join(_missing), level='INFO')
532+ log('Missing required data: %s' % ' '.join(_missing), level=INFO)
533 return False
534+
535 return True
536
537
538 def config_flags_parser(config_flags):
539+ """Parses config flags string into dict.
540+
541+ The provided config_flags string may be a list of comma-separated values
542+ which themselves may be comma-separated list of values.
543+ """
544 if config_flags.find('==') >= 0:
545- log("config_flags is not in expected format (key=value)",
546- level=ERROR)
547+ log("config_flags is not in expected format (key=value)", level=ERROR)
548 raise OSContextError
549+
550 # strip the following from each value.
551 post_strippers = ' ,'
552 # we strip any leading/trailing '=' or ' ' from the string then
553@@ -98,7 +98,7 @@
554 split = config_flags.strip(' =').split('=')
555 limit = len(split)
556 flags = {}
557- for i in xrange(0, limit - 1):
558+ for i in range(0, limit - 1):
559 current = split[i]
560 next = split[i + 1]
561 vindex = next.rfind(',')
562@@ -113,17 +113,18 @@
563 # if this not the first entry, expect an embedded key.
564 index = current.rfind(',')
565 if index < 0:
566- log("invalid config value(s) at index %s" % (i),
567- level=ERROR)
568+ log("Invalid config value(s) at index %s" % (i), level=ERROR)
569 raise OSContextError
570 key = current[index + 1:]
571
572 # Add to collection.
573 flags[key.strip(post_strippers)] = value.rstrip(post_strippers)
574+
575 return flags
576
577
578 class OSContextGenerator(object):
579+ """Base class for all context generators."""
580 interfaces = []
581
582 def __call__(self):
583@@ -135,11 +136,11 @@
584
585 def __init__(self,
586 database=None, user=None, relation_prefix=None, ssl_dir=None):
587- '''
588- Allows inspecting relation for settings prefixed with relation_prefix.
589- This is useful for parsing access for multiple databases returned via
590- the shared-db interface (eg, nova_password, quantum_password)
591- '''
592+ """Allows inspecting relation for settings prefixed with
593+ relation_prefix. This is useful for parsing access for multiple
594+ databases returned via the shared-db interface (eg, nova_password,
595+ quantum_password)
596+ """
597 self.relation_prefix = relation_prefix
598 self.database = database
599 self.user = user
600@@ -149,9 +150,8 @@
601 self.database = self.database or config('database')
602 self.user = self.user or config('database-user')
603 if None in [self.database, self.user]:
604- log('Could not generate shared_db context. '
605- 'Missing required charm config options. '
606- '(database name and user)')
607+ log("Could not generate shared_db context. Missing required charm "
608+ "config options. (database name and user)", level=ERROR)
609 raise OSContextError
610
611 ctxt = {}
612@@ -204,23 +204,24 @@
613 def __call__(self):
614 self.database = self.database or config('database')
615 if self.database is None:
616- log('Could not generate postgresql_db context. '
617- 'Missing required charm config options. '
618- '(database name)')
619+ log('Could not generate postgresql_db context. Missing required '
620+ 'charm config options. (database name)', level=ERROR)
621 raise OSContextError
622+
623 ctxt = {}
624-
625 for rid in relation_ids(self.interfaces[0]):
626 for unit in related_units(rid):
627- ctxt = {
628- 'database_host': relation_get('host', rid=rid, unit=unit),
629- 'database': self.database,
630- 'database_user': relation_get('user', rid=rid, unit=unit),
631- 'database_password': relation_get('password', rid=rid, unit=unit),
632- 'database_type': 'postgresql',
633- }
634+ rel_host = relation_get('host', rid=rid, unit=unit)
635+ rel_user = relation_get('user', rid=rid, unit=unit)
636+ rel_passwd = relation_get('password', rid=rid, unit=unit)
637+ ctxt = {'database_host': rel_host,
638+ 'database': self.database,
639+ 'database_user': rel_user,
640+ 'database_password': rel_passwd,
641+ 'database_type': 'postgresql'}
642 if context_complete(ctxt):
643 return ctxt
644+
645 return {}
646
647
648@@ -229,23 +230,29 @@
649 ca_path = os.path.join(ssl_dir, 'db-client.ca')
650 with open(ca_path, 'w') as fh:
651 fh.write(b64decode(rdata['ssl_ca']))
652+
653 ctxt['database_ssl_ca'] = ca_path
654 elif 'ssl_ca' in rdata:
655- log("Charm not setup for ssl support but ssl ca found")
656+ log("Charm not setup for ssl support but ssl ca found", level=INFO)
657 return ctxt
658+
659 if 'ssl_cert' in rdata:
660 cert_path = os.path.join(
661 ssl_dir, 'db-client.cert')
662 if not os.path.exists(cert_path):
663- log("Waiting 1m for ssl client cert validity")
664+ log("Waiting 1m for ssl client cert validity", level=INFO)
665 time.sleep(60)
666+
667 with open(cert_path, 'w') as fh:
668 fh.write(b64decode(rdata['ssl_cert']))
669+
670 ctxt['database_ssl_cert'] = cert_path
671 key_path = os.path.join(ssl_dir, 'db-client.key')
672 with open(key_path, 'w') as fh:
673 fh.write(b64decode(rdata['ssl_key']))
674+
675 ctxt['database_ssl_key'] = key_path
676+
677 return ctxt
678
679
680@@ -253,9 +260,8 @@
681 interfaces = ['identity-service']
682
683 def __call__(self):
684- log('Generating template context for identity-service')
685+ log('Generating template context for identity-service', level=DEBUG)
686 ctxt = {}
687-
688 for rid in relation_ids('identity-service'):
689 for unit in related_units(rid):
690 rdata = relation_get(rid=rid, unit=unit)
691@@ -263,26 +269,24 @@
692 serv_host = format_ipv6_addr(serv_host) or serv_host
693 auth_host = rdata.get('auth_host')
694 auth_host = format_ipv6_addr(auth_host) or auth_host
695-
696- ctxt = {
697- 'service_port': rdata.get('service_port'),
698- 'service_host': serv_host,
699- 'auth_host': auth_host,
700- 'auth_port': rdata.get('auth_port'),
701- 'admin_tenant_name': rdata.get('service_tenant'),
702- 'admin_user': rdata.get('service_username'),
703- 'admin_password': rdata.get('service_password'),
704- 'service_protocol':
705- rdata.get('service_protocol') or 'http',
706- 'auth_protocol':
707- rdata.get('auth_protocol') or 'http',
708- }
709+ svc_protocol = rdata.get('service_protocol') or 'http'
710+ auth_protocol = rdata.get('auth_protocol') or 'http'
711+ ctxt = {'service_port': rdata.get('service_port'),
712+ 'service_host': serv_host,
713+ 'auth_host': auth_host,
714+ 'auth_port': rdata.get('auth_port'),
715+ 'admin_tenant_name': rdata.get('service_tenant'),
716+ 'admin_user': rdata.get('service_username'),
717+ 'admin_password': rdata.get('service_password'),
718+ 'service_protocol': svc_protocol,
719+ 'auth_protocol': auth_protocol}
720 if context_complete(ctxt):
721 # NOTE(jamespage) this is required for >= icehouse
722 # so a missing value just indicates keystone needs
723 # upgrading
724 ctxt['admin_tenant_id'] = rdata.get('service_tenant_id')
725 return ctxt
726+
727 return {}
728
729
730@@ -295,21 +299,23 @@
731 self.interfaces = [rel_name]
732
733 def __call__(self):
734- log('Generating template context for amqp')
735+ log('Generating template context for amqp', level=DEBUG)
736 conf = config()
737- user_setting = 'rabbit-user'
738- vhost_setting = 'rabbit-vhost'
739 if self.relation_prefix:
740- user_setting = self.relation_prefix + '-rabbit-user'
741- vhost_setting = self.relation_prefix + '-rabbit-vhost'
742+ user_setting = '%s-rabbit-user' % (self.relation_prefix)
743+ vhost_setting = '%s-rabbit-vhost' % (self.relation_prefix)
744+ else:
745+ user_setting = 'rabbit-user'
746+ vhost_setting = 'rabbit-vhost'
747
748 try:
749 username = conf[user_setting]
750 vhost = conf[vhost_setting]
751 except KeyError as e:
752- log('Could not generate shared_db context. '
753- 'Missing required charm config options: %s.' % e)
754+ log('Could not generate shared_db context. Missing required charm '
755+ 'config options: %s.' % e, level=ERROR)
756 raise OSContextError
757+
758 ctxt = {}
759 for rid in relation_ids(self.rel_name):
760 ha_vip_only = False
761@@ -323,6 +329,7 @@
762 host = relation_get('private-address', rid=rid, unit=unit)
763 host = format_ipv6_addr(host) or host
764 ctxt['rabbitmq_host'] = host
765+
766 ctxt.update({
767 'rabbitmq_user': username,
768 'rabbitmq_password': relation_get('password', rid=rid,
769@@ -333,6 +340,7 @@
770 ssl_port = relation_get('ssl_port', rid=rid, unit=unit)
771 if ssl_port:
772 ctxt['rabbit_ssl_port'] = ssl_port
773+
774 ssl_ca = relation_get('ssl_ca', rid=rid, unit=unit)
775 if ssl_ca:
776 ctxt['rabbit_ssl_ca'] = ssl_ca
777@@ -346,41 +354,45 @@
778 if context_complete(ctxt):
779 if 'rabbit_ssl_ca' in ctxt:
780 if not self.ssl_dir:
781- log(("Charm not setup for ssl support "
782- "but ssl ca found"))
783+ log("Charm not setup for ssl support but ssl ca "
784+ "found", level=INFO)
785 break
786+
787 ca_path = os.path.join(
788 self.ssl_dir, 'rabbit-client-ca.pem')
789 with open(ca_path, 'w') as fh:
790 fh.write(b64decode(ctxt['rabbit_ssl_ca']))
791 ctxt['rabbit_ssl_ca'] = ca_path
792+
793 # Sufficient information found = break out!
794 break
795+
796 # Used for active/active rabbitmq >= grizzly
797- if ('clustered' not in ctxt or ha_vip_only) \
798- and len(related_units(rid)) > 1:
799+ if (('clustered' not in ctxt or ha_vip_only) and
800+ len(related_units(rid)) > 1):
801 rabbitmq_hosts = []
802 for unit in related_units(rid):
803 host = relation_get('private-address', rid=rid, unit=unit)
804 host = format_ipv6_addr(host) or host
805 rabbitmq_hosts.append(host)
806- ctxt['rabbitmq_hosts'] = ','.join(rabbitmq_hosts)
807+
808+ ctxt['rabbitmq_hosts'] = ','.join(sorted(rabbitmq_hosts))
809+
810 if not context_complete(ctxt):
811 return {}
812- else:
813- return ctxt
814+
815+ return ctxt
816
817
818 class CephContext(OSContextGenerator):
819+ """Generates context for /etc/ceph/ceph.conf templates."""
820 interfaces = ['ceph']
821
822 def __call__(self):
823- '''This generates context for /etc/ceph/ceph.conf templates'''
824 if not relation_ids('ceph'):
825 return {}
826
827- log('Generating template context for ceph')
828-
829+ log('Generating template context for ceph', level=DEBUG)
830 mon_hosts = []
831 auth = None
832 key = None
833@@ -389,18 +401,18 @@
834 for unit in related_units(rid):
835 auth = relation_get('auth', rid=rid, unit=unit)
836 key = relation_get('key', rid=rid, unit=unit)
837- ceph_addr = \
838- relation_get('ceph-public-address', rid=rid, unit=unit) or \
839- relation_get('private-address', rid=rid, unit=unit)
840+ ceph_pub_addr = relation_get('ceph-public-address', rid=rid,
841+ unit=unit)
842+ unit_priv_addr = relation_get('private-address', rid=rid,
843+ unit=unit)
844+ ceph_addr = ceph_pub_addr or unit_priv_addr
845 ceph_addr = format_ipv6_addr(ceph_addr) or ceph_addr
846 mon_hosts.append(ceph_addr)
847
848- ctxt = {
849- 'mon_hosts': ' '.join(mon_hosts),
850- 'auth': auth,
851- 'key': key,
852- 'use_syslog': use_syslog
853- }
854+ ctxt = {'mon_hosts': ' '.join(sorted(mon_hosts)),
855+ 'auth': auth,
856+ 'key': key,
857+ 'use_syslog': use_syslog}
858
859 if not os.path.isdir('/etc/ceph'):
860 os.mkdir('/etc/ceph')
861@@ -409,79 +421,68 @@
862 return {}
863
864 ensure_packages(['ceph-common'])
865-
866 return ctxt
867
868
869-ADDRESS_TYPES = ['admin', 'internal', 'public']
870-
871-
872 class HAProxyContext(OSContextGenerator):
873+ """Provides half a context for the haproxy template, which describes
874+ all peers to be included in the cluster. Each charm needs to include
875+ its own context generator that describes the port mapping.
876+ """
877 interfaces = ['cluster']
878
879+ def __init__(self, singlenode_mode=False):
880+ self.singlenode_mode = singlenode_mode
881+
882 def __call__(self):
883- '''
884- Builds half a context for the haproxy template, which describes
885- all peers to be included in the cluster. Each charm needs to include
886- its own context generator that describes the port mapping.
887- '''
888- if not relation_ids('cluster'):
889+ if not relation_ids('cluster') and not self.singlenode_mode:
890 return {}
891
892- l_unit = local_unit().replace('/', '-')
893-
894 if config('prefer-ipv6'):
895 addr = get_ipv6_addr(exc_list=[config('vip')])[0]
896 else:
897 addr = get_host_ip(unit_get('private-address'))
898
899+ l_unit = local_unit().replace('/', '-')
900 cluster_hosts = {}
901
902 # NOTE(jamespage): build out map of configured network endpoints
903 # and associated backends
904 for addr_type in ADDRESS_TYPES:
905- laddr = get_address_in_network(
906- config('os-{}-network'.format(addr_type)))
907+ cfg_opt = 'os-{}-network'.format(addr_type)
908+ laddr = get_address_in_network(config(cfg_opt))
909 if laddr:
910- cluster_hosts[laddr] = {}
911- cluster_hosts[laddr]['network'] = "{}/{}".format(
912- laddr,
913- get_netmask_for_address(laddr)
914- )
915- cluster_hosts[laddr]['backends'] = {}
916- cluster_hosts[laddr]['backends'][l_unit] = laddr
917+ netmask = get_netmask_for_address(laddr)
918+ cluster_hosts[laddr] = {'network': "{}/{}".format(laddr,
919+ netmask),
920+ 'backends': {l_unit: laddr}}
921 for rid in relation_ids('cluster'):
922 for unit in related_units(rid):
923- _unit = unit.replace('/', '-')
924 _laddr = relation_get('{}-address'.format(addr_type),
925 rid=rid, unit=unit)
926 if _laddr:
927+ _unit = unit.replace('/', '-')
928 cluster_hosts[laddr]['backends'][_unit] = _laddr
929
930 # NOTE(jamespage) no split configurations found, just use
931 # private addresses
932 if not cluster_hosts:
933- cluster_hosts[addr] = {}
934- cluster_hosts[addr]['network'] = "{}/{}".format(
935- addr,
936- get_netmask_for_address(addr)
937- )
938- cluster_hosts[addr]['backends'] = {}
939- cluster_hosts[addr]['backends'][l_unit] = addr
940+ netmask = get_netmask_for_address(addr)
941+ cluster_hosts[addr] = {'network': "{}/{}".format(addr, netmask),
942+ 'backends': {l_unit: addr}}
943 for rid in relation_ids('cluster'):
944 for unit in related_units(rid):
945- _unit = unit.replace('/', '-')
946 _laddr = relation_get('private-address',
947 rid=rid, unit=unit)
948 if _laddr:
949+ _unit = unit.replace('/', '-')
950 cluster_hosts[addr]['backends'][_unit] = _laddr
951
952- ctxt = {
953- 'frontends': cluster_hosts,
954- }
955+ ctxt = {'frontends': cluster_hosts}
956
957 if config('haproxy-server-timeout'):
958 ctxt['haproxy_server_timeout'] = config('haproxy-server-timeout')
959+
960 if config('haproxy-client-timeout'):
961 ctxt['haproxy_client_timeout'] = config('haproxy-client-timeout')
962
963@@ -495,13 +496,18 @@
964 ctxt['stat_port'] = ':8888'
965
966 for frontend in cluster_hosts:
967- if len(cluster_hosts[frontend]['backends']) > 1:
968+ if (len(cluster_hosts[frontend]['backends']) > 1 or
969+ self.singlenode_mode):
970 # Enable haproxy when we have enough peers.
971- log('Ensuring haproxy enabled in /etc/default/haproxy.')
972+ log('Ensuring haproxy enabled in /etc/default/haproxy.',
973+ level=DEBUG)
974 with open('/etc/default/haproxy', 'w') as out:
975 out.write('ENABLED=1\n')
976+
977 return ctxt
978- log('HAProxy context is incomplete, this unit has no peers.')
979+
980+ log('HAProxy context is incomplete, this unit has no peers.',
981+ level=INFO)
982 return {}
983
984
985@@ -509,29 +515,28 @@
986 interfaces = ['image-service']
987
988 def __call__(self):
989- '''
990- Obtains the glance API server from the image-service relation. Useful
991- in nova and cinder (currently).
992- '''
993- log('Generating template context for image-service.')
994+ """Obtains the glance API server from the image-service relation.
995+ Useful in nova and cinder (currently).
996+ """
997+ log('Generating template context for image-service.', level=DEBUG)
998 rids = relation_ids('image-service')
999 if not rids:
1000 return {}
1001+
1002 for rid in rids:
1003 for unit in related_units(rid):
1004 api_server = relation_get('glance-api-server',
1005 rid=rid, unit=unit)
1006 if api_server:
1007 return {'glance_api_servers': api_server}
1008- log('ImageService context is incomplete. '
1009- 'Missing required relation data.')
1010+
1011+ log("ImageService context is incomplete. Missing required relation "
1012+ "data.", level=INFO)
1013 return {}
1014
1015
1016 class ApacheSSLContext(OSContextGenerator):
1017-
1018- """
1019- Generates a context for an apache vhost configuration that configures
1020+ """Generates a context for an apache vhost configuration that configures
1021 HTTPS reverse proxying for one or many endpoints. Generated context
1022 looks something like::
1023
1024@@ -565,6 +570,7 @@
1025 else:
1026 cert_filename = 'cert'
1027 key_filename = 'key'
1028+
1029 write_file(path=os.path.join(ssl_dir, cert_filename),
1030 content=b64decode(cert))
1031 write_file(path=os.path.join(ssl_dir, key_filename),
1032@@ -576,7 +582,8 @@
1033 install_ca_cert(b64decode(ca_cert))
1034
1035 def canonical_names(self):
1036- '''Figure out which canonical names clients will access this service'''
1037+ """Figure out which canonical names clients will access this service.
1038+ """
1039 cns = []
1040 for r_id in relation_ids('identity-service'):
1041 for unit in related_units(r_id):
1042@@ -584,7 +591,8 @@
1043 for k in rdata:
1044 if k.startswith('ssl_key_'):
1045 cns.append(k.lstrip('ssl_key_'))
1046- return list(set(cns))
1047+
1048+ return sorted(list(set(cns)))
1049
1050 def get_network_addresses(self):
1051 """For each network configured, return corresponding address and vip
1052@@ -603,9 +611,10 @@
1053 ...]
1054 """
1055 addresses = []
1056- vips = []
1057 if config('vip'):
1058 vips = config('vip').split()
1059+ else:
1060+ vips = []
1061
1062 for net_type in ['os-internal-network', 'os-admin-network',
1063 'os-public-network']:
1064@@ -614,7 +623,7 @@
1065 if len(vips) > 1 and is_clustered():
1066 if not config(net_type):
1067 log("Multiple networks configured but net_type "
1068- "is None (%s)." % net_type, level='WARNING')
1069+ "is None (%s)." % net_type, level=WARNING)
1070 continue
1071
1072 for vip in vips:
1073@@ -627,35 +636,35 @@
1074 else:
1075 addresses.append((addr, addr))
1076
1077- return addresses
1078+ return sorted(addresses)
1079
1080 def __call__(self):
1081- if isinstance(self.external_ports, basestring):
1082+ if isinstance(self.external_ports, six.string_types):
1083 self.external_ports = [self.external_ports]
1084- if (not self.external_ports or not https()):
1085+
1086+ if not self.external_ports or not https():
1087 return {}
1088
1089 self.configure_ca()
1090 self.enable_modules()
1091
1092- ctxt = {
1093- 'namespace': self.service_namespace,
1094- 'endpoints': [],
1095- 'ext_ports': []
1096- }
1097+ ctxt = {'namespace': self.service_namespace,
1098+ 'endpoints': [],
1099+ 'ext_ports': []}
1100
1101 for cn in self.canonical_names():
1102 self.configure_cert(cn)
1103
1104 addresses = self.get_network_addresses()
1105- for address, endpoint in set(addresses):
1106+ for address, endpoint in sorted(set(addresses)):
1107 for api_port in self.external_ports:
1108 ext_port = determine_apache_port(api_port)
1109 int_port = determine_api_port(api_port)
1110 portmap = (address, endpoint, int(ext_port), int(int_port))
1111 ctxt['endpoints'].append(portmap)
1112 ctxt['ext_ports'].append(int(ext_port))
1113- ctxt['ext_ports'] = list(set(ctxt['ext_ports']))
1114+
1115+ ctxt['ext_ports'] = sorted(list(set(ctxt['ext_ports'])))
1116 return ctxt
1117
1118
1119@@ -672,21 +681,23 @@
1120
1121 @property
1122 def packages(self):
1123- return neutron_plugin_attribute(
1124- self.plugin, 'packages', self.network_manager)
1125+ return neutron_plugin_attribute(self.plugin, 'packages',
1126+ self.network_manager)
1127
1128 @property
1129 def neutron_security_groups(self):
1130 return None
1131
1132 def _ensure_packages(self):
1133- [ensure_packages(pkgs) for pkgs in self.packages]
1134+ for pkgs in self.packages:
1135+ ensure_packages(pkgs)
1136
1137 def _save_flag_file(self):
1138 if self.network_manager == 'quantum':
1139 _file = '/etc/nova/quantum_plugin.conf'
1140 else:
1141 _file = '/etc/nova/neutron_plugin.conf'
1142+
1143 with open(_file, 'wb') as out:
1144 out.write(self.plugin + '\n')
1145
1146@@ -695,13 +706,11 @@
1147 self.network_manager)
1148 config = neutron_plugin_attribute(self.plugin, 'config',
1149 self.network_manager)
1150- ovs_ctxt = {
1151- 'core_plugin': driver,
1152- 'neutron_plugin': 'ovs',
1153- 'neutron_security_groups': self.neutron_security_groups,
1154- 'local_ip': unit_private_ip(),
1155- 'config': config
1156- }
1157+ ovs_ctxt = {'core_plugin': driver,
1158+ 'neutron_plugin': 'ovs',
1159+ 'neutron_security_groups': self.neutron_security_groups,
1160+ 'local_ip': unit_private_ip(),
1161+ 'config': config}
1162
1163 return ovs_ctxt
1164
1165@@ -710,13 +719,11 @@
1166 self.network_manager)
1167 config = neutron_plugin_attribute(self.plugin, 'config',
1168 self.network_manager)
1169- nvp_ctxt = {
1170- 'core_plugin': driver,
1171- 'neutron_plugin': 'nvp',
1172- 'neutron_security_groups': self.neutron_security_groups,
1173- 'local_ip': unit_private_ip(),
1174- 'config': config
1175- }
1176+ nvp_ctxt = {'core_plugin': driver,
1177+ 'neutron_plugin': 'nvp',
1178+ 'neutron_security_groups': self.neutron_security_groups,
1179+ 'local_ip': unit_private_ip(),
1180+ 'config': config}
1181
1182 return nvp_ctxt
1183
1184@@ -726,18 +733,17 @@
1185 n1kv_config = neutron_plugin_attribute(self.plugin, 'config',
1186 self.network_manager)
1187 n1kv_user_config_flags = config('n1kv-config-flags')
1188- n1kv_ctxt = {
1189- 'core_plugin': driver,
1190- 'neutron_plugin': 'n1kv',
1191- 'neutron_security_groups': self.neutron_security_groups,
1192- 'local_ip': unit_private_ip(),
1193- 'config': n1kv_config,
1194- 'vsm_ip': config('n1kv-vsm-ip'),
1195- 'vsm_username': config('n1kv-vsm-username'),
1196- 'vsm_password': config('n1kv-vsm-password'),
1197- 'restrict_policy_profiles': config(
1198- 'n1kv-restrict-policy-profiles'),
1199- }
1200+ restrict_policy_profiles = config('n1kv-restrict-policy-profiles')
1201+ n1kv_ctxt = {'core_plugin': driver,
1202+ 'neutron_plugin': 'n1kv',
1203+ 'neutron_security_groups': self.neutron_security_groups,
1204+ 'local_ip': unit_private_ip(),
1205+ 'config': n1kv_config,
1206+ 'vsm_ip': config('n1kv-vsm-ip'),
1207+ 'vsm_username': config('n1kv-vsm-username'),
1208+ 'vsm_password': config('n1kv-vsm-password'),
1209+ 'restrict_policy_profiles': restrict_policy_profiles}
1210+
1211 if n1kv_user_config_flags:
1212 flags = config_flags_parser(n1kv_user_config_flags)
1213 n1kv_ctxt['user_config_flags'] = flags
1214@@ -749,13 +755,11 @@
1215 self.network_manager)
1216 config = neutron_plugin_attribute(self.plugin, 'config',
1217 self.network_manager)
1218- calico_ctxt = {
1219- 'core_plugin': driver,
1220- 'neutron_plugin': 'Calico',
1221- 'neutron_security_groups': self.neutron_security_groups,
1222- 'local_ip': unit_private_ip(),
1223- 'config': config
1224- }
1225+ calico_ctxt = {'core_plugin': driver,
1226+ 'neutron_plugin': 'Calico',
1227+ 'neutron_security_groups': self.neutron_security_groups,
1228+ 'local_ip': unit_private_ip(),
1229+ 'config': config}
1230
1231 return calico_ctxt
1232
1233@@ -764,15 +768,14 @@
1234 proto = 'https'
1235 else:
1236 proto = 'http'
1237+
1238 if is_clustered():
1239 host = config('vip')
1240 else:
1241 host = unit_get('private-address')
1242- url = '%s://%s:%s' % (proto, host, '9696')
1243- ctxt = {
1244- 'network_manager': self.network_manager,
1245- 'neutron_url': url,
1246- }
1247+
1248+ ctxt = {'network_manager': self.network_manager,
1249+ 'neutron_url': '%s://%s:%s' % (proto, host, '9696')}
1250 return ctxt
1251
1252 def __call__(self):
1253@@ -805,9 +808,7 @@
1254
1255
1256 class OSConfigFlagContext(OSContextGenerator):
1257-
1258- """
1259- Provides support for user-defined config flags.
1260+ """Provides support for user-defined config flags.
1261
1262 Users can define a comma-seperated list of key=value pairs
1263 in the charm configuration and apply them at any point in
1264@@ -826,8 +827,9 @@
1265 def __init__(self, charm_flag='config-flags',
1266 template_flag='user_config_flags'):
1267 """
1268- charm_flag: config flags in charm configuration.
1269- template_flag: insert point for user-defined flags template file.
1270+ :param charm_flag: config flags in charm configuration.
1271+ :param template_flag: insert point for user-defined flags in template
1272+ file.
1273 """
1274 super(OSConfigFlagContext, self).__init__()
1275 self._charm_flag = charm_flag
1276@@ -883,7 +885,6 @@
1277 },
1278 }
1279 }
1280-
1281 """
1282
1283 def __init__(self, service, config_file, interface):
1284@@ -913,26 +914,28 @@
1285
1286 if self.service not in sub_config:
1287 log('Found subordinate_config on %s but it contained'
1288- 'nothing for %s service' % (rid, self.service))
1289+ 'nothing for %s service' % (rid, self.service),
1290+ level=INFO)
1291 continue
1292
1293 sub_config = sub_config[self.service]
1294 if self.config_file not in sub_config:
1295 log('Found subordinate_config on %s but it contained'
1296- 'nothing for %s' % (rid, self.config_file))
1297+ 'nothing for %s' % (rid, self.config_file),
1298+ level=INFO)
1299 continue
1300
1301 sub_config = sub_config[self.config_file]
1302- for k, v in sub_config.iteritems():
1303+ for k, v in six.iteritems(sub_config):
1304 if k == 'sections':
1305- for section, config_dict in v.iteritems():
1306- log("adding section '%s'" % (section))
1307+ for section, config_dict in six.iteritems(v):
1308+ log("adding section '%s'" % (section),
1309+ level=DEBUG)
1310 ctxt[k][section] = config_dict
1311 else:
1312 ctxt[k] = v
1313
1314 log("%d section(s) found" % (len(ctxt['sections'])), level=DEBUG)
1315-
1316 return ctxt
1317
1318
1319@@ -944,15 +947,14 @@
1320 False if config('debug') is None else config('debug')
1321 ctxt['verbose'] = \
1322 False if config('verbose') is None else config('verbose')
1323+
1324 return ctxt
1325
1326
1327 class SyslogContext(OSContextGenerator):
1328
1329 def __call__(self):
1330- ctxt = {
1331- 'use_syslog': config('use-syslog')
1332- }
1333+ ctxt = {'use_syslog': config('use-syslog')}
1334 return ctxt
1335
1336
1337@@ -960,13 +962,9 @@
1338
1339 def __call__(self):
1340 if config('prefer-ipv6'):
1341- return {
1342- 'bind_host': '::'
1343- }
1344+ return {'bind_host': '::'}
1345 else:
1346- return {
1347- 'bind_host': '0.0.0.0'
1348- }
1349+ return {'bind_host': '0.0.0.0'}
1350
1351
1352 class WorkerConfigContext(OSContextGenerator):
1353@@ -978,13 +976,12 @@
1354 except ImportError:
1355 apt_install('python-psutil', fatal=True)
1356 from psutil import NUM_CPUS
1357+
1358 return NUM_CPUS
1359
1360 def __call__(self):
1361- multiplier = config('worker-multiplier') or 1
1362- ctxt = {
1363- "workers": self.num_cpus * multiplier
1364- }
1365+ multiplier = config('worker-multiplier') or 0
1366+ ctxt = {"workers": self.num_cpus * multiplier}
1367 return ctxt
1368
1369
1370@@ -998,22 +995,23 @@
1371 for unit in related_units(rid):
1372 ctxt['zmq_nonce'] = relation_get('nonce', unit, rid)
1373 ctxt['zmq_host'] = relation_get('host', unit, rid)
1374+
1375 return ctxt
1376
1377
1378 class NotificationDriverContext(OSContextGenerator):
1379
1380- def __init__(self, zmq_relation='zeromq-configuration', amqp_relation='amqp'):
1381+ def __init__(self, zmq_relation='zeromq-configuration',
1382+ amqp_relation='amqp'):
1383 """
1384- :param zmq_relation : Name of Zeromq relation to check
1385+ :param zmq_relation: Name of Zeromq relation to check
1386 """
1387 self.zmq_relation = zmq_relation
1388 self.amqp_relation = amqp_relation
1389
1390 def __call__(self):
1391- ctxt = {
1392- 'notifications': 'False',
1393- }
1394+ ctxt = {'notifications': 'False'}
1395 if is_relation_made(self.amqp_relation):
1396 ctxt['notifications'] = "True"
1397+
1398 return ctxt
1399
1400=== modified file 'hooks/charmhelpers/contrib/openstack/ip.py'
1401--- hooks/charmhelpers/contrib/openstack/ip.py 2014-09-22 20:21:19 +0000
1402+++ hooks/charmhelpers/contrib/openstack/ip.py 2014-12-11 17:56:34 +0000
1403@@ -2,21 +2,19 @@
1404 config,
1405 unit_get,
1406 )
1407-
1408 from charmhelpers.contrib.network.ip import (
1409 get_address_in_network,
1410 is_address_in_network,
1411 is_ipv6,
1412 get_ipv6_addr,
1413 )
1414-
1415 from charmhelpers.contrib.hahelpers.cluster import is_clustered
1416
1417 PUBLIC = 'public'
1418 INTERNAL = 'int'
1419 ADMIN = 'admin'
1420
1421-_address_map = {
1422+ADDRESS_MAP = {
1423 PUBLIC: {
1424 'config': 'os-public-network',
1425 'fallback': 'public-address'
1426@@ -33,16 +31,14 @@
1427
1428
1429 def canonical_url(configs, endpoint_type=PUBLIC):
1430- '''
1431- Returns the correct HTTP URL to this host given the state of HTTPS
1432+ """Returns the correct HTTP URL to this host given the state of HTTPS
1433 configuration, hacluster and charm configuration.
1434
1435- :configs OSTemplateRenderer: A config tempating object to inspect for
1436- a complete https context.
1437- :endpoint_type str: The endpoint type to resolve.
1438-
1439- :returns str: Base URL for services on the current service unit.
1440- '''
1441+ :param configs: OSTemplateRenderer config templating object to inspect
1442+ for a complete https context.
1443+ :param endpoint_type: str endpoint type to resolve.
1444+ :param returns: str base URL for services on the current service unit.
1445+ """
1446 scheme = 'http'
1447 if 'https' in configs.complete_contexts():
1448 scheme = 'https'
1449@@ -53,27 +49,45 @@
1450
1451
1452 def resolve_address(endpoint_type=PUBLIC):
1453+ """Return unit address depending on net config.
1454+
1455+ If unit is clustered with vip(s) and has net splits defined, return vip on
1456+ correct network. If clustered with no nets defined, return primary vip.
1457+
1458+ If not clustered, return unit address ensuring address is on configured net
1459+ split if one is configured.
1460+
1461+ :param endpoint_type: Network endpoing type
1462+ """
1463 resolved_address = None
1464- if is_clustered():
1465- if config(_address_map[endpoint_type]['config']) is None:
1466- # Assume vip is simple and pass back directly
1467- resolved_address = config('vip')
1468+ vips = config('vip')
1469+ if vips:
1470+ vips = vips.split()
1471+
1472+ net_type = ADDRESS_MAP[endpoint_type]['config']
1473+ net_addr = config(net_type)
1474+ net_fallback = ADDRESS_MAP[endpoint_type]['fallback']
1475+ clustered = is_clustered()
1476+ if clustered:
1477+ if not net_addr:
1478+ # If no net-splits defined, we expect a single vip
1479+ resolved_address = vips[0]
1480 else:
1481- for vip in config('vip').split():
1482- if is_address_in_network(
1483- config(_address_map[endpoint_type]['config']),
1484- vip):
1485+ for vip in vips:
1486+ if is_address_in_network(net_addr, vip):
1487 resolved_address = vip
1488+ break
1489 else:
1490 if config('prefer-ipv6'):
1491- fallback_addr = get_ipv6_addr(exc_list=[config('vip')])[0]
1492+ fallback_addr = get_ipv6_addr(exc_list=vips)[0]
1493 else:
1494- fallback_addr = unit_get(_address_map[endpoint_type]['fallback'])
1495- resolved_address = get_address_in_network(
1496- config(_address_map[endpoint_type]['config']), fallback_addr)
1497+ fallback_addr = unit_get(net_fallback)
1498+
1499+ resolved_address = get_address_in_network(net_addr, fallback_addr)
1500
1501 if resolved_address is None:
1502- raise ValueError('Unable to resolve a suitable IP address'
1503- ' based on charm state and configuration')
1504- else:
1505- return resolved_address
1506+ raise ValueError("Unable to resolve a suitable IP address based on "
1507+ "charm state and configuration. (net_type=%s, "
1508+ "clustered=%s)" % (net_type, clustered))
1509+
1510+ return resolved_address
1511
1512=== modified file 'hooks/charmhelpers/contrib/openstack/neutron.py'
1513--- hooks/charmhelpers/contrib/openstack/neutron.py 2014-11-13 04:44:29 +0000
1514+++ hooks/charmhelpers/contrib/openstack/neutron.py 2014-12-11 17:56:34 +0000
1515@@ -14,7 +14,7 @@
1516 def headers_package():
1517 """Ensures correct linux-headers for running kernel are installed,
1518 for building DKMS package"""
1519- kver = check_output(['uname', '-r']).strip()
1520+ kver = check_output(['uname', '-r']).decode('UTF-8').strip()
1521 return 'linux-headers-%s' % kver
1522
1523 QUANTUM_CONF_DIR = '/etc/quantum'
1524@@ -22,7 +22,7 @@
1525
1526 def kernel_version():
1527 """ Retrieve the current major kernel version as a tuple e.g. (3, 13) """
1528- kver = check_output(['uname', '-r']).strip()
1529+ kver = check_output(['uname', '-r']).decode('UTF-8').strip()
1530 kver = kver.split('.')
1531 return (int(kver[0]), int(kver[1]))
1532
1533@@ -177,7 +177,8 @@
1534 elif manager == 'neutron':
1535 plugins = neutron_plugins()
1536 else:
1537- log('Error: Network manager does not support plugins.')
1538+ log("Network manager '%s' does not support plugins." % (manager),
1539+ level=ERROR)
1540 raise Exception
1541
1542 try:
1543
1544=== modified file 'hooks/charmhelpers/contrib/openstack/templates/haproxy.cfg'
1545--- hooks/charmhelpers/contrib/openstack/templates/haproxy.cfg 2014-10-06 20:51:06 +0000
1546+++ hooks/charmhelpers/contrib/openstack/templates/haproxy.cfg 2014-12-11 17:56:34 +0000
1547@@ -35,7 +35,7 @@
1548 stats auth admin:password
1549
1550 {% if frontends -%}
1551-{% for service, ports in service_ports.iteritems() -%}
1552+{% for service, ports in service_ports.items() -%}
1553 frontend tcp-in_{{ service }}
1554 bind *:{{ ports[0] }}
1555 bind :::{{ ports[0] }}
1556@@ -46,7 +46,7 @@
1557 {% for frontend in frontends -%}
1558 backend {{ service }}_{{ frontend }}
1559 balance leastconn
1560- {% for unit, address in frontends[frontend]['backends'].iteritems() -%}
1561+ {% for unit, address in frontends[frontend]['backends'].items() -%}
1562 server {{ unit }} {{ address }}:{{ ports[1] }} check
1563 {% endfor %}
1564 {% endfor -%}
1565
1566=== modified file 'hooks/charmhelpers/contrib/openstack/templating.py'
1567--- hooks/charmhelpers/contrib/openstack/templating.py 2014-07-25 08:13:49 +0000
1568+++ hooks/charmhelpers/contrib/openstack/templating.py 2014-12-11 17:56:34 +0000
1569@@ -1,13 +1,13 @@
1570 import os
1571
1572+import six
1573+
1574 from charmhelpers.fetch import apt_install
1575-
1576 from charmhelpers.core.hookenv import (
1577 log,
1578 ERROR,
1579 INFO
1580 )
1581-
1582 from charmhelpers.contrib.openstack.utils import OPENSTACK_CODENAMES
1583
1584 try:
1585@@ -43,7 +43,7 @@
1586 order by OpenStack release.
1587 """
1588 tmpl_dirs = [(rel, os.path.join(templates_dir, rel))
1589- for rel in OPENSTACK_CODENAMES.itervalues()]
1590+ for rel in six.itervalues(OPENSTACK_CODENAMES)]
1591
1592 if not os.path.isdir(templates_dir):
1593 log('Templates directory not found @ %s.' % templates_dir,
1594@@ -258,7 +258,7 @@
1595 """
1596 Write out all registered config files.
1597 """
1598- [self.write(k) for k in self.templates.iterkeys()]
1599+ [self.write(k) for k in six.iterkeys(self.templates)]
1600
1601 def set_release(self, openstack_release):
1602 """
1603@@ -275,5 +275,5 @@
1604 '''
1605 interfaces = []
1606 [interfaces.extend(i.complete_contexts())
1607- for i in self.templates.itervalues()]
1608+ for i in six.itervalues(self.templates)]
1609 return interfaces
1610
1611=== modified file 'hooks/charmhelpers/contrib/openstack/utils.py'
1612--- hooks/charmhelpers/contrib/openstack/utils.py 2014-10-24 11:36:12 +0000
1613+++ hooks/charmhelpers/contrib/openstack/utils.py 2014-12-11 17:56:34 +0000
1614@@ -10,11 +10,13 @@
1615 import socket
1616 import sys
1617
1618+import six
1619+import yaml
1620+
1621 from charmhelpers.core.hookenv import (
1622 config,
1623 log as juju_log,
1624 charm_dir,
1625- ERROR,
1626 INFO,
1627 relation_ids,
1628 relation_set
1629@@ -31,7 +33,8 @@
1630 )
1631
1632 from charmhelpers.core.host import lsb_release, mounts, umount
1633-from charmhelpers.fetch import apt_install, apt_cache
1634+from charmhelpers.fetch import apt_install, apt_cache, install_remote
1635+from charmhelpers.contrib.python.packages import pip_install
1636 from charmhelpers.contrib.storage.linux.utils import is_block_device, zap_disk
1637 from charmhelpers.contrib.storage.linux.loopback import ensure_loopback_device
1638
1639@@ -113,7 +116,7 @@
1640
1641 # Best guess match based on deb string provided
1642 if src.startswith('deb') or src.startswith('ppa'):
1643- for k, v in OPENSTACK_CODENAMES.iteritems():
1644+ for k, v in six.iteritems(OPENSTACK_CODENAMES):
1645 if v in src:
1646 return v
1647
1648@@ -134,7 +137,7 @@
1649
1650 def get_os_version_codename(codename):
1651 '''Determine OpenStack version number from codename.'''
1652- for k, v in OPENSTACK_CODENAMES.iteritems():
1653+ for k, v in six.iteritems(OPENSTACK_CODENAMES):
1654 if v == codename:
1655 return k
1656 e = 'Could not derive OpenStack version for '\
1657@@ -194,7 +197,7 @@
1658 else:
1659 vers_map = OPENSTACK_CODENAMES
1660
1661- for version, cname in vers_map.iteritems():
1662+ for version, cname in six.iteritems(vers_map):
1663 if cname == codename:
1664 return version
1665 # e = "Could not determine OpenStack version for package: %s" % pkg
1666@@ -318,7 +321,7 @@
1667 rc_script.write(
1668 "#!/bin/bash\n")
1669 [rc_script.write('export %s=%s\n' % (u, p))
1670- for u, p in env_vars.iteritems() if u != "script_path"]
1671+ for u, p in six.iteritems(env_vars) if u != "script_path"]
1672
1673
1674 def openstack_upgrade_available(package):
1675@@ -351,8 +354,8 @@
1676 '''
1677 _none = ['None', 'none', None]
1678 if (block_device in _none):
1679- error_out('prepare_storage(): Missing required input: '
1680- 'block_device=%s.' % block_device, level=ERROR)
1681+ error_out('prepare_storage(): Missing required input: block_device=%s.'
1682+ % block_device)
1683
1684 if block_device.startswith('/dev/'):
1685 bdev = block_device
1686@@ -368,8 +371,7 @@
1687 bdev = '/dev/%s' % block_device
1688
1689 if not is_block_device(bdev):
1690- error_out('Failed to locate valid block device at %s' % bdev,
1691- level=ERROR)
1692+ error_out('Failed to locate valid block device at %s' % bdev)
1693
1694 return bdev
1695
1696@@ -418,7 +420,7 @@
1697
1698 if isinstance(address, dns.name.Name):
1699 rtype = 'PTR'
1700- elif isinstance(address, basestring):
1701+ elif isinstance(address, six.string_types):
1702 rtype = 'A'
1703 else:
1704 return None
1705@@ -486,8 +488,7 @@
1706 'hostname': json.dumps(hosts)}
1707
1708 if relation_prefix:
1709- keys = kwargs.keys()
1710- for key in keys:
1711+ for key in list(kwargs.keys()):
1712 kwargs["%s_%s" % (relation_prefix, key)] = kwargs[key]
1713 del kwargs[key]
1714
1715@@ -508,3 +509,111 @@
1716 f(*args)
1717 return wrapped_f
1718 return wrap
1719+
1720+
1721+def git_install_requested():
1722+ """Returns true if openstack-origin-git is specified."""
1723+ return config('openstack-origin-git') != "None"
1724+
1725+
1726+requirements_dir = None
1727+
1728+
1729+def git_clone_and_install(file_name, core_project):
1730+ """Clone/install all OpenStack repos specified in yaml config file."""
1731+ global requirements_dir
1732+
1733+ if file_name == "None":
1734+ return
1735+
1736+ yaml_file = os.path.join(charm_dir(), file_name)
1737+
1738+ # clone/install the requirements project first
1739+ installed = _git_clone_and_install_subset(yaml_file,
1740+ whitelist=['requirements'])
1741+ if 'requirements' not in installed:
1742+ error_out('requirements git repository must be specified')
1743+
1744+ # clone/install all other projects except requirements and the core project
1745+ blacklist = ['requirements', core_project]
1746+ _git_clone_and_install_subset(yaml_file, blacklist=blacklist,
1747+ update_requirements=True)
1748+
1749+ # clone/install the core project
1750+ whitelist = [core_project]
1751+ installed = _git_clone_and_install_subset(yaml_file, whitelist=whitelist,
1752+ update_requirements=True)
1753+ if core_project not in installed:
1754+ error_out('{} git repository must be specified'.format(core_project))
1755+
1756+
1757+def _git_clone_and_install_subset(yaml_file, whitelist=[], blacklist=[],
1758+ update_requirements=False):
1759+ """Clone/install subset of OpenStack repos specified in yaml config file."""
1760+ global requirements_dir
1761+ installed = []
1762+
1763+ with open(yaml_file, 'r') as fd:
1764+ projects = yaml.load(fd)
1765+ for proj, val in projects.items():
1766+ # The project subset is chosen based on the following 3 rules:
1767+ # 1) If project is in blacklist, we don't clone/install it, period.
1768+ # 2) If whitelist is empty, we clone/install everything else.
1769+ # 3) If whitelist is not empty, we clone/install everything in the
1770+ # whitelist.
1771+ if proj in blacklist:
1772+ continue
1773+ if whitelist and proj not in whitelist:
1774+ continue
1775+ repo = val['repository']
1776+ branch = val['branch']
1777+ repo_dir = _git_clone_and_install_single(repo, branch,
1778+ update_requirements)
1779+ if proj == 'requirements':
1780+ requirements_dir = repo_dir
1781+ installed.append(proj)
1782+ return installed
1783+
1784+
1785+def _git_clone_and_install_single(repo, branch, update_requirements=False):
1786+ """Clone and install a single git repository."""
1787+ dest_parent_dir = "/mnt/openstack-git/"
1788+ dest_dir = os.path.join(dest_parent_dir, os.path.basename(repo))
1789+
1790+ if not os.path.exists(dest_parent_dir):
1791+ juju_log('Host dir not mounted at {}. '
1792+ 'Creating directory there instead.'.format(dest_parent_dir))
1793+ os.mkdir(dest_parent_dir)
1794+
1795+ if not os.path.exists(dest_dir):
1796+ juju_log('Cloning git repo: {}, branch: {}'.format(repo, branch))
1797+ repo_dir = install_remote(repo, dest=dest_parent_dir, branch=branch)
1798+ else:
1799+ repo_dir = dest_dir
1800+
1801+ if update_requirements:
1802+ if not requirements_dir:
1803+ error_out('requirements repo must be cloned before '
1804+ 'updating from global requirements.')
1805+ _git_update_requirements(repo_dir, requirements_dir)
1806+
1807+ juju_log('Installing git repo from dir: {}'.format(repo_dir))
1808+ pip_install(repo_dir)
1809+
1810+ return repo_dir
1811+
1812+
1813+def _git_update_requirements(package_dir, reqs_dir):
1814+ """Update from global requirements.
1815+
1816+ Update an OpenStack git directory's requirements.txt and
1817+ test-requirements.txt from global-requirements.txt."""
1818+ orig_dir = os.getcwd()
1819+ os.chdir(reqs_dir)
1820+ cmd = "python update.py {}".format(package_dir)
1821+ try:
1822+ subprocess.check_call(cmd.split(' '))
1823+ except subprocess.CalledProcessError:
1824+ package = os.path.basename(package_dir)
1825+ error_out("Error updating {} from global-requirements.txt".format(package))
1826+ os.chdir(orig_dir)
1827
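For context, a minimal sketch of how a charm could drive the new git-install path added above (the 'openstack-origin-git' config key, the project names and the yaml layout shown here are assumptions for illustration, not something this branch defines):

    # hypothetical yaml handed to git_clone_and_install(); each top-level key
    # is a project mapping to a 'repository' and 'branch', matching the loop
    # in _git_clone_and_install_subset()
    #
    # requirements:
    #     repository: https://github.com/openstack/requirements
    #     branch: stable/icehouse
    # keystone:
    #     repository: https://github.com/openstack/keystone
    #     branch: stable/icehouse

    from charmhelpers.core.hookenv import config
    from charmhelpers.contrib.openstack.utils import (
        git_install_requested,
        git_clone_and_install,
    )

    def install_from_git():
        # the config value is assumed to name the yaml file above, or the
        # string "None" to disable git-based installs
        if git_install_requested():
            git_clone_and_install(config('openstack-origin-git'),
                                  core_project='keystone')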
1828=== modified file 'hooks/charmhelpers/contrib/peerstorage/__init__.py'
1829--- hooks/charmhelpers/contrib/peerstorage/__init__.py 2014-10-06 20:53:09 +0000
1830+++ hooks/charmhelpers/contrib/peerstorage/__init__.py 2014-12-11 17:56:34 +0000
1831@@ -1,3 +1,4 @@
1832+import six
1833 from charmhelpers.core.hookenv import relation_id as current_relation_id
1834 from charmhelpers.core.hookenv import (
1835 is_relation_made,
1836@@ -93,7 +94,7 @@
1837 if ex in echo_data:
1838 echo_data.pop(ex)
1839 else:
1840- for attribute, value in rdata.iteritems():
1841+ for attribute, value in six.iteritems(rdata):
1842 for include in includes:
1843 if include in attribute:
1844 echo_data[attribute] = value
1845@@ -119,8 +120,8 @@
1846 relation_settings=relation_settings,
1847 **kwargs)
1848 if is_relation_made(peer_relation_name):
1849- for key, value in dict(kwargs.items() +
1850- relation_settings.items()).iteritems():
1851+ for key, value in six.iteritems(dict(list(kwargs.items()) +
1852+ list(relation_settings.items()))):
1853 key_prefix = relation_id or current_relation_id()
1854 peer_store(key_prefix + delimiter + key,
1855 value,
1856
1857=== added directory 'hooks/charmhelpers/contrib/python'
1858=== added file 'hooks/charmhelpers/contrib/python/__init__.py'
1859=== added file 'hooks/charmhelpers/contrib/python/packages.py'
1860--- hooks/charmhelpers/contrib/python/packages.py 1970-01-01 00:00:00 +0000
1861+++ hooks/charmhelpers/contrib/python/packages.py 2014-12-11 17:56:34 +0000
1862@@ -0,0 +1,77 @@
1863+#!/usr/bin/env python
1864+# coding: utf-8
1865+
1866+__author__ = "Jorge Niedbalski <jorge.niedbalski@canonical.com>"
1867+
1868+from charmhelpers.fetch import apt_install, apt_update
1869+from charmhelpers.core.hookenv import log
1870+
1871+try:
1872+ from pip import main as pip_execute
1873+except ImportError:
1874+ apt_update()
1875+ apt_install('python-pip')
1876+ from pip import main as pip_execute
1877+
1878+
1879+def parse_options(given, available):
1880+ """Given a set of options, check if available"""
1881+ for key, value in sorted(given.items()):
1882+ if key in available:
1883+ yield "--{0}={1}".format(key, value)
1884+
1885+
1886+def pip_install_requirements(requirements, **options):
1887+ """Install a requirements file """
1888+ command = ["install"]
1889+
1890+ available_options = ('proxy', 'src', 'log', )
1891+ for option in parse_options(options, available_options):
1892+ command.append(option)
1893+
1894+ command.append("-r {0}".format(requirements))
1895+ log("Installing from file: {} with options: {}".format(requirements,
1896+ command))
1897+ pip_execute(command)
1898+
1899+
1900+def pip_install(package, fatal=False, **options):
1901+ """Install a python package"""
1902+ command = ["install"]
1903+
1904+ available_options = ('proxy', 'src', 'log', "index-url", )
1905+ for option in parse_options(options, available_options):
1906+ command.append(option)
1907+
1908+ if isinstance(package, list):
1909+ command.extend(package)
1910+ else:
1911+ command.append(package)
1912+
1913+ log("Installing {} package with options: {}".format(package,
1914+ command))
1915+ pip_execute(command)
1916+
1917+
1918+def pip_uninstall(package, **options):
1919+ """Uninstall a python package"""
1920+ command = ["uninstall", "-q", "-y"]
1921+
1922+ available_options = ('proxy', 'log', )
1923+ for option in parse_options(options, available_options):
1924+ command.append(option)
1925+
1926+ if isinstance(package, list):
1927+ command.extend(package)
1928+ else:
1929+ command.append(package)
1930+
1931+ log("Uninstalling {} package with options: {}".format(package,
1932+ command))
1933+ pip_execute(command)
1934+
1935+
1936+def pip_list():
1937+ """Returns the list of current python installed packages
1938+ """
1939+ return pip_execute(["list"])
1940
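For context, a quick sketch of how the new pip helpers are meant to be called (the package names, proxy URL and index URL are placeholders):

    from charmhelpers.contrib.python.packages import (
        pip_install, pip_install_requirements, pip_uninstall, pip_list,
    )

    # keyword options are filtered against each helper's whitelist and
    # rendered as --key=value flags by parse_options()
    pip_install('six', proxy='http://squid.internal:3128')
    pip_install(['pbr', 'testtools'], **{'index-url': 'http://pypi.internal/simple'})
    pip_install_requirements('requirements.txt', proxy='http://squid.internal:3128')
    pip_uninstall('six')
    pip_list()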
1941=== modified file 'hooks/charmhelpers/contrib/storage/linux/ceph.py'
1942--- hooks/charmhelpers/contrib/storage/linux/ceph.py 2014-10-24 11:36:12 +0000
1943+++ hooks/charmhelpers/contrib/storage/linux/ceph.py 2014-12-11 17:56:34 +0000
1944@@ -16,19 +16,18 @@
1945 from subprocess import (
1946 check_call,
1947 check_output,
1948- CalledProcessError
1949+ CalledProcessError,
1950 )
1951-
1952 from charmhelpers.core.hookenv import (
1953 relation_get,
1954 relation_ids,
1955 related_units,
1956 log,
1957+ DEBUG,
1958 INFO,
1959 WARNING,
1960- ERROR
1961+ ERROR,
1962 )
1963-
1964 from charmhelpers.core.host import (
1965 mount,
1966 mounts,
1967@@ -37,7 +36,6 @@
1968 service_running,
1969 umount,
1970 )
1971-
1972 from charmhelpers.fetch import (
1973 apt_install,
1974 )
1975@@ -56,99 +54,85 @@
1976
1977
1978 def install():
1979- ''' Basic Ceph client installation '''
1980+ """Basic Ceph client installation."""
1981 ceph_dir = "/etc/ceph"
1982 if not os.path.exists(ceph_dir):
1983 os.mkdir(ceph_dir)
1984+
1985 apt_install('ceph-common', fatal=True)
1986
1987
1988 def rbd_exists(service, pool, rbd_img):
1989- ''' Check to see if a RADOS block device exists '''
1990+ """Check to see if a RADOS block device exists."""
1991 try:
1992- out = check_output(['rbd', 'list', '--id', service,
1993- '--pool', pool])
1994+ out = check_output(['rbd', 'list', '--id',
1995+ service, '--pool', pool]).decode('UTF-8')
1996 except CalledProcessError:
1997 return False
1998- else:
1999- return rbd_img in out
2000+
2001+ return rbd_img in out
2002
2003
2004 def create_rbd_image(service, pool, image, sizemb):
2005- ''' Create a new RADOS block device '''
2006- cmd = [
2007- 'rbd',
2008- 'create',
2009- image,
2010- '--size',
2011- str(sizemb),
2012- '--id',
2013- service,
2014- '--pool',
2015- pool
2016- ]
2017+ """Create a new RADOS block device."""
2018+ cmd = ['rbd', 'create', image, '--size', str(sizemb), '--id', service,
2019+ '--pool', pool]
2020 check_call(cmd)
2021
2022
2023 def pool_exists(service, name):
2024- ''' Check to see if a RADOS pool already exists '''
2025+ """Check to see if a RADOS pool already exists."""
2026 try:
2027- out = check_output(['rados', '--id', service, 'lspools'])
2028+ out = check_output(['rados', '--id', service,
2029+ 'lspools']).decode('UTF-8')
2030 except CalledProcessError:
2031 return False
2032- else:
2033- return name in out
2034+
2035+ return name in out
2036
2037
2038 def get_osds(service):
2039- '''
2040- Return a list of all Ceph Object Storage Daemons
2041- currently in the cluster
2042- '''
2043+ """Return a list of all Ceph Object Storage Daemons currently in the
2044+ cluster.
2045+ """
2046 version = ceph_version()
2047 if version and version >= '0.56':
2048 return json.loads(check_output(['ceph', '--id', service,
2049- 'osd', 'ls', '--format=json']))
2050- else:
2051- return None
2052+ 'osd', 'ls',
2053+ '--format=json']).decode('UTF-8'))
2054+
2055+ return None
2056
2057
2058 def create_pool(service, name, replicas=3):
2059- ''' Create a new RADOS pool '''
2060+ """Create a new RADOS pool."""
2061 if pool_exists(service, name):
2062 log("Ceph pool {} already exists, skipping creation".format(name),
2063 level=WARNING)
2064 return
2065+
2066 # Calculate the number of placement groups based
2067 # on upstream recommended best practices.
2068 osds = get_osds(service)
2069 if osds:
2070- pgnum = (len(osds) * 100 / replicas)
2071+ pgnum = (len(osds) * 100 // replicas)
2072 else:
2073 # NOTE(james-page): Default to 200 for older ceph versions
2074 # which don't support OSD query from cli
2075 pgnum = 200
2076- cmd = [
2077- 'ceph', '--id', service,
2078- 'osd', 'pool', 'create',
2079- name, str(pgnum)
2080- ]
2081+
2082+ cmd = ['ceph', '--id', service, 'osd', 'pool', 'create', name, str(pgnum)]
2083 check_call(cmd)
2084- cmd = [
2085- 'ceph', '--id', service,
2086- 'osd', 'pool', 'set', name,
2087- 'size', str(replicas)
2088- ]
2089+
2090+ cmd = ['ceph', '--id', service, 'osd', 'pool', 'set', name, 'size',
2091+ str(replicas)]
2092 check_call(cmd)
2093
2094
2095 def delete_pool(service, name):
2096- ''' Delete a RADOS pool from ceph '''
2097- cmd = [
2098- 'ceph', '--id', service,
2099- 'osd', 'pool', 'delete',
2100- name, '--yes-i-really-really-mean-it'
2101- ]
2102+ """Delete a RADOS pool from ceph."""
2103+ cmd = ['ceph', '--id', service, 'osd', 'pool', 'delete', name,
2104+ '--yes-i-really-really-mean-it']
2105 check_call(cmd)
2106
2107
2108@@ -161,44 +145,43 @@
2109
2110
2111 def create_keyring(service, key):
2112- ''' Create a new Ceph keyring containing key'''
2113+ """Create a new Ceph keyring containing key."""
2114 keyring = _keyring_path(service)
2115 if os.path.exists(keyring):
2116- log('ceph: Keyring exists at %s.' % keyring, level=WARNING)
2117+ log('Ceph keyring exists at %s.' % keyring, level=WARNING)
2118 return
2119- cmd = [
2120- 'ceph-authtool',
2121- keyring,
2122- '--create-keyring',
2123- '--name=client.{}'.format(service),
2124- '--add-key={}'.format(key)
2125- ]
2126+
2127+ cmd = ['ceph-authtool', keyring, '--create-keyring',
2128+ '--name=client.{}'.format(service), '--add-key={}'.format(key)]
2129 check_call(cmd)
2130- log('ceph: Created new ring at %s.' % keyring, level=INFO)
2131+ log('Created new ceph keyring at %s.' % keyring, level=DEBUG)
2132
2133
2134 def create_key_file(service, key):
2135- ''' Create a file containing key '''
2136+ """Create a file containing key."""
2137 keyfile = _keyfile_path(service)
2138 if os.path.exists(keyfile):
2139- log('ceph: Keyfile exists at %s.' % keyfile, level=WARNING)
2140+ log('Keyfile exists at %s.' % keyfile, level=WARNING)
2141 return
2142+
2143 with open(keyfile, 'w') as fd:
2144 fd.write(key)
2145- log('ceph: Created new keyfile at %s.' % keyfile, level=INFO)
2146+
2147+ log('Created new keyfile at %s.' % keyfile, level=INFO)
2148
2149
2150 def get_ceph_nodes():
2151- ''' Query named relation 'ceph' to detemine current nodes '''
2152+ """Query named relation 'ceph' to determine current nodes."""
2153 hosts = []
2154 for r_id in relation_ids('ceph'):
2155 for unit in related_units(r_id):
2156 hosts.append(relation_get('private-address', unit=unit, rid=r_id))
2157+
2158 return hosts
2159
2160
2161 def configure(service, key, auth, use_syslog):
2162- ''' Perform basic configuration of Ceph '''
2163+ """Perform basic configuration of Ceph."""
2164 create_keyring(service, key)
2165 create_key_file(service, key)
2166 hosts = get_ceph_nodes()
2167@@ -211,17 +194,17 @@
2168
2169
2170 def image_mapped(name):
2171- ''' Determine whether a RADOS block device is mapped locally '''
2172+ """Determine whether a RADOS block device is mapped locally."""
2173 try:
2174- out = check_output(['rbd', 'showmapped'])
2175+ out = check_output(['rbd', 'showmapped']).decode('UTF-8')
2176 except CalledProcessError:
2177 return False
2178- else:
2179- return name in out
2180+
2181+ return name in out
2182
2183
2184 def map_block_storage(service, pool, image):
2185- ''' Map a RADOS block device for local use '''
2186+ """Map a RADOS block device for local use."""
2187 cmd = [
2188 'rbd',
2189 'map',
2190@@ -235,31 +218,32 @@
2191
2192
2193 def filesystem_mounted(fs):
2194- ''' Determine whether a filesytems is already mounted '''
2195+ """Determine whether a filesystem is already mounted."""
2196 return fs in [f for f, m in mounts()]
2197
2198
2199 def make_filesystem(blk_device, fstype='ext4', timeout=10):
2200- ''' Make a new filesystem on the specified block device '''
2201+ """Make a new filesystem on the specified block device."""
2202 count = 0
2203 e_noent = os.errno.ENOENT
2204 while not os.path.exists(blk_device):
2205 if count >= timeout:
2206- log('ceph: gave up waiting on block device %s' % blk_device,
2207+ log('Gave up waiting on block device %s' % blk_device,
2208 level=ERROR)
2209 raise IOError(e_noent, os.strerror(e_noent), blk_device)
2210- log('ceph: waiting for block device %s to appear' % blk_device,
2211- level=INFO)
2212+
2213+ log('Waiting for block device %s to appear' % blk_device,
2214+ level=DEBUG)
2215 count += 1
2216 time.sleep(1)
2217 else:
2218- log('ceph: Formatting block device %s as filesystem %s.' %
2219+ log('Formatting block device %s as filesystem %s.' %
2220 (blk_device, fstype), level=INFO)
2221 check_call(['mkfs', '-t', fstype, blk_device])
2222
2223
2224 def place_data_on_block_device(blk_device, data_src_dst):
2225- ''' Migrate data in data_src_dst to blk_device and then remount '''
2226+ """Migrate data in data_src_dst to blk_device and then remount."""
2227 # mount block device into /mnt
2228 mount(blk_device, '/mnt')
2229 # copy data to /mnt
2230@@ -279,8 +263,8 @@
2231
2232 # TODO: re-use
2233 def modprobe(module):
2234- ''' Load a kernel module and configure for auto-load on reboot '''
2235- log('ceph: Loading kernel module', level=INFO)
2236+ """Load a kernel module and configure for auto-load on reboot."""
2237+ log('Loading kernel module', level=INFO)
2238 cmd = ['modprobe', module]
2239 check_call(cmd)
2240 with open('/etc/modules', 'r+') as modules:
2241@@ -289,7 +273,7 @@
2242
2243
2244 def copy_files(src, dst, symlinks=False, ignore=None):
2245- ''' Copy files from src to dst '''
2246+ """Copy files from src to dst."""
2247 for item in os.listdir(src):
2248 s = os.path.join(src, item)
2249 d = os.path.join(dst, item)
2250@@ -302,8 +286,7 @@
2251 def ensure_ceph_storage(service, pool, rbd_img, sizemb, mount_point,
2252 blk_device, fstype, system_services=[],
2253 replicas=3):
2254- """
2255- NOTE: This function must only be called from a single service unit for
2256+ """NOTE: This function must only be called from a single service unit for
2257 the same rbd_img otherwise data loss will occur.
2258
2259 Ensures given pool and RBD image exists, is mapped to a block device,
2260@@ -317,15 +300,16 @@
2261 """
2262 # Ensure pool, RBD image, RBD mappings are in place.
2263 if not pool_exists(service, pool):
2264- log('ceph: Creating new pool {}.'.format(pool))
2265+ log('Creating new pool {}.'.format(pool), level=INFO)
2266 create_pool(service, pool, replicas=replicas)
2267
2268 if not rbd_exists(service, pool, rbd_img):
2269- log('ceph: Creating RBD image ({}).'.format(rbd_img))
2270+ log('Creating RBD image ({}).'.format(rbd_img), level=INFO)
2271 create_rbd_image(service, pool, rbd_img, sizemb)
2272
2273 if not image_mapped(rbd_img):
2274- log('ceph: Mapping RBD Image {} as a Block Device.'.format(rbd_img))
2275+ log('Mapping RBD Image {} as a Block Device.'.format(rbd_img),
2276+ level=INFO)
2277 map_block_storage(service, pool, rbd_img)
2278
2279 # make file system
2280@@ -340,45 +324,47 @@
2281
2282 for svc in system_services:
2283 if service_running(svc):
2284- log('ceph: Stopping services {} prior to migrating data.'
2285- .format(svc))
2286+ log('Stopping services {} prior to migrating data.'
2287+ .format(svc), level=DEBUG)
2288 service_stop(svc)
2289
2290 place_data_on_block_device(blk_device, mount_point)
2291
2292 for svc in system_services:
2293- log('ceph: Starting service {} after migrating data.'
2294- .format(svc))
2295+ log('Starting service {} after migrating data.'
2296+ .format(svc), level=DEBUG)
2297 service_start(svc)
2298
2299
2300 def ensure_ceph_keyring(service, user=None, group=None):
2301- '''
2302- Ensures a ceph keyring is created for a named service
2303- and optionally ensures user and group ownership.
2304+ """Ensures a ceph keyring is created for a named service and optionally
2305+ ensures user and group ownership.
2306
2307 Returns False if no ceph key is available in relation state.
2308- '''
2309+ """
2310 key = None
2311 for rid in relation_ids('ceph'):
2312 for unit in related_units(rid):
2313 key = relation_get('key', rid=rid, unit=unit)
2314 if key:
2315 break
2316+
2317 if not key:
2318 return False
2319+
2320 create_keyring(service=service, key=key)
2321 keyring = _keyring_path(service)
2322 if user and group:
2323 check_call(['chown', '%s.%s' % (user, group), keyring])
2324+
2325 return True
2326
2327
2328 def ceph_version():
2329- ''' Retrieve the local version of ceph '''
2330+ """Retrieve the local version of ceph."""
2331 if os.path.exists('/usr/bin/ceph'):
2332 cmd = ['ceph', '-v']
2333- output = check_output(cmd)
2334+ output = check_output(cmd).decode('US-ASCII')
2335 output = output.split()
2336 if len(output) > 3:
2337 return output[2]
2338
2339=== modified file 'hooks/charmhelpers/contrib/storage/linux/loopback.py'
2340--- hooks/charmhelpers/contrib/storage/linux/loopback.py 2014-03-27 10:54:38 +0000
2341+++ hooks/charmhelpers/contrib/storage/linux/loopback.py 2014-12-11 17:56:34 +0000
2342@@ -1,12 +1,12 @@
2343-
2344 import os
2345 import re
2346-
2347 from subprocess import (
2348 check_call,
2349 check_output,
2350 )
2351
2352+import six
2353+
2354
2355 ##################################################
2356 # loopback device helpers.
2357@@ -37,7 +37,7 @@
2358 '''
2359 file_path = os.path.abspath(file_path)
2360 check_call(['losetup', '--find', file_path])
2361- for d, f in loopback_devices().iteritems():
2362+ for d, f in six.iteritems(loopback_devices()):
2363 if f == file_path:
2364 return d
2365
2366@@ -51,7 +51,7 @@
2367
2368 :returns: str: Full path to the ensured loopback device (eg, /dev/loop0)
2369 '''
2370- for d, f in loopback_devices().iteritems():
2371+ for d, f in six.iteritems(loopback_devices()):
2372 if f == path:
2373 return d
2374
2375
2376=== modified file 'hooks/charmhelpers/contrib/storage/linux/lvm.py'
2377--- hooks/charmhelpers/contrib/storage/linux/lvm.py 2014-05-19 11:42:30 +0000
2378+++ hooks/charmhelpers/contrib/storage/linux/lvm.py 2014-12-11 17:56:34 +0000
2379@@ -61,6 +61,7 @@
2380 vg = None
2381 pvd = check_output(['pvdisplay', block_device]).splitlines()
2382 for l in pvd:
2383+ l = l.decode('UTF-8')
2384 if l.strip().startswith('VG Name'):
2385 vg = ' '.join(l.strip().split()[2:])
2386 return vg
2387
2388=== modified file 'hooks/charmhelpers/contrib/storage/linux/utils.py'
2389--- hooks/charmhelpers/contrib/storage/linux/utils.py 2014-09-15 07:44:00 +0000
2390+++ hooks/charmhelpers/contrib/storage/linux/utils.py 2014-12-11 17:56:34 +0000
2391@@ -30,7 +30,8 @@
2392 # sometimes sgdisk exits non-zero; this is OK, dd will clean up
2393 call(['sgdisk', '--zap-all', '--mbrtogpt',
2394 '--clear', block_device])
2395- dev_end = check_output(['blockdev', '--getsz', block_device])
2396+ dev_end = check_output(['blockdev', '--getsz',
2397+ block_device]).decode('UTF-8')
2398 gpt_end = int(dev_end.split()[0]) - 100
2399 check_call(['dd', 'if=/dev/zero', 'of=%s' % (block_device),
2400 'bs=1M', 'count=1'])
2401@@ -47,7 +48,7 @@
2402 it doesn't.
2403 '''
2404 is_partition = bool(re.search(r".*[0-9]+\b", device))
2405- out = check_output(['mount'])
2406+ out = check_output(['mount']).decode('UTF-8')
2407 if is_partition:
2408 return bool(re.search(device + r"\b", out))
2409 return bool(re.search(device + r"[0-9]+\b", out))
2410
2411=== modified file 'hooks/charmhelpers/contrib/unison/__init__.py'
2412--- hooks/charmhelpers/contrib/unison/__init__.py 2014-03-27 10:54:38 +0000
2413+++ hooks/charmhelpers/contrib/unison/__init__.py 2014-12-11 17:56:34 +0000
2414@@ -185,13 +185,14 @@
2415 relation_set(ssh_authorized_hosts=authed_hosts)
2416
2417
2418-def _run_as_user(user):
2419+def _run_as_user(user, gid=None):
2420 try:
2421 user = pwd.getpwnam(user)
2422 except KeyError:
2423 log('Invalid user: %s' % user)
2424 raise Exception
2425- uid, gid = user.pw_uid, user.pw_gid
2426+ uid = user.pw_uid
2427+ gid = gid or user.pw_gid
2428 os.environ['HOME'] = user.pw_dir
2429
2430 def _inner():
2431@@ -200,8 +201,8 @@
2432 return _inner
2433
2434
2435-def run_as_user(user, cmd):
2436- return check_output(cmd, preexec_fn=_run_as_user(user), cwd='/')
2437+def run_as_user(user, cmd, gid=None):
2438+ return check_output(cmd, preexec_fn=_run_as_user(user, gid), cwd='/')
2439
2440
2441 def collect_authed_hosts(peer_interface):
2442@@ -227,8 +228,8 @@
2443 return hosts
2444
2445
2446-def sync_path_to_host(path, host, user, verbose=False):
2447- cmd = copy(BASE_CMD)
2448+def sync_path_to_host(path, host, user, verbose=False, cmd=None, gid=None):
2449+ cmd = cmd or copy(BASE_CMD)
2450 if not verbose:
2451 cmd.append('-silent')
2452
2453@@ -241,17 +242,23 @@
2454
2455 try:
2456 log('Syncing local path %s to %s@%s:%s' % (path, user, host, path))
2457- run_as_user(user, cmd)
2458+ run_as_user(user, cmd, gid)
2459 except:
2460 log('Error syncing remote files')
2461
2462
2463-def sync_to_peer(host, user, paths=[], verbose=False):
2464+def sync_to_peer(host, user, paths=None, verbose=False, cmd=None, gid=None):
2465 '''Sync paths to an specific host'''
2466- [sync_path_to_host(p, host, user, verbose) for p in paths]
2467-
2468-
2469-def sync_to_peers(peer_interface, user, paths=[], verbose=False):
2470+ if paths:
2471+ for p in paths:
2472+ sync_path_to_host(p, host, user, verbose, cmd, gid)
2473+
2474+
2475+def sync_to_peers(peer_interface, user, paths=None,
2476+ verbose=False, cmd=None, gid=None):
2477 '''Sync all hosts to an specific path'''
2478- for host in collect_authed_hosts(peer_interface):
2479- sync_to_peer(host, user, paths, verbose)
2480+ '''gid is an integer; it lets the user operate on a directory whose '''
2481+ '''group id differs from the user's primary group id.'''
2482+ if paths:
2483+ for host in collect_authed_hosts(peer_interface):
2484+ sync_to_peer(host, user, paths, verbose, cmd, gid)
2485
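For context, a hedged example of the new cmd/gid parameters on the unison sync helpers (the relation name, user, group and path below are placeholders):

    import grp
    from charmhelpers.contrib.unison import sync_to_peers

    # run the sync as the 'keystone' user but under another group's gid, so
    # files created on the peers carry that group ownership
    ssl_gid = grp.getgrnam('ssl-cert').gr_gid
    sync_to_peers(peer_interface='cluster', user='keystone',
                  paths=['/etc/keystone/ssl/'], verbose=False,
                  gid=ssl_gid)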
2486=== modified file 'hooks/charmhelpers/core/fstab.py'
2487--- hooks/charmhelpers/core/fstab.py 2014-06-24 17:11:12 +0000
2488+++ hooks/charmhelpers/core/fstab.py 2014-12-11 17:56:34 +0000
2489@@ -3,10 +3,11 @@
2490
2491 __author__ = 'Jorge Niedbalski R. <jorge.niedbalski@canonical.com>'
2492
2493+import io
2494 import os
2495
2496
2497-class Fstab(file):
2498+class Fstab(io.FileIO):
2499 """This class extends file in order to implement a file reader/writer
2500 for file `/etc/fstab`
2501 """
2502@@ -24,8 +25,8 @@
2503 options = "defaults"
2504
2505 self.options = options
2506- self.d = d
2507- self.p = p
2508+ self.d = int(d)
2509+ self.p = int(p)
2510
2511 def __eq__(self, o):
2512 return str(self) == str(o)
2513@@ -45,7 +46,7 @@
2514 self._path = path
2515 else:
2516 self._path = self.DEFAULT_PATH
2517- file.__init__(self, self._path, 'r+')
2518+ super(Fstab, self).__init__(self._path, 'rb+')
2519
2520 def _hydrate_entry(self, line):
2521 # NOTE: use split with no arguments to split on any
2522@@ -58,8 +59,9 @@
2523 def entries(self):
2524 self.seek(0)
2525 for line in self.readlines():
2526+ line = line.decode('us-ascii')
2527 try:
2528- if not line.startswith("#"):
2529+ if line.strip() and not line.startswith("#"):
2530 yield self._hydrate_entry(line)
2531 except ValueError:
2532 pass
2533@@ -75,14 +77,14 @@
2534 if self.get_entry_by_attr('device', entry.device):
2535 return False
2536
2537- self.write(str(entry) + '\n')
2538+ self.write((str(entry) + '\n').encode('us-ascii'))
2539 self.truncate()
2540 return entry
2541
2542 def remove_entry(self, entry):
2543 self.seek(0)
2544
2545- lines = self.readlines()
2546+ lines = [l.decode('us-ascii') for l in self.readlines()]
2547
2548 found = False
2549 for index, line in enumerate(lines):
2550@@ -97,7 +99,7 @@
2551 lines.remove(line)
2552
2553 self.seek(0)
2554- self.write(''.join(lines))
2555+ self.write(''.join(lines).encode('us-ascii'))
2556 self.truncate()
2557 return True
2558
2559
2560=== modified file 'hooks/charmhelpers/core/hookenv.py'
2561--- hooks/charmhelpers/core/hookenv.py 2014-10-24 11:36:12 +0000
2562+++ hooks/charmhelpers/core/hookenv.py 2014-12-11 17:56:34 +0000
2563@@ -9,9 +9,14 @@
2564 import yaml
2565 import subprocess
2566 import sys
2567-import UserDict
2568 from subprocess import CalledProcessError
2569
2570+import six
2571+if not six.PY3:
2572+ from UserDict import UserDict
2573+else:
2574+ from collections import UserDict
2575+
2576 CRITICAL = "CRITICAL"
2577 ERROR = "ERROR"
2578 WARNING = "WARNING"
2579@@ -63,16 +68,18 @@
2580 command = ['juju-log']
2581 if level:
2582 command += ['-l', level]
2583+ if not isinstance(message, six.string_types):
2584+ message = repr(message)
2585 command += [message]
2586 subprocess.call(command)
2587
2588
2589-class Serializable(UserDict.IterableUserDict):
2590+class Serializable(UserDict):
2591 """Wrapper, an object that can be serialized to yaml or json"""
2592
2593 def __init__(self, obj):
2594 # wrap the object
2595- UserDict.IterableUserDict.__init__(self)
2596+ UserDict.__init__(self)
2597 self.data = obj
2598
2599 def __getattr__(self, attr):
2600@@ -218,7 +225,7 @@
2601 prev_keys = []
2602 if self._prev_dict is not None:
2603 prev_keys = self._prev_dict.keys()
2604- return list(set(prev_keys + dict.keys(self)))
2605+ return list(set(prev_keys + list(dict.keys(self))))
2606
2607 def load_previous(self, path=None):
2608 """Load previous copy of config from disk.
2609@@ -269,7 +276,7 @@
2610
2611 """
2612 if self._prev_dict:
2613- for k, v in self._prev_dict.iteritems():
2614+ for k, v in six.iteritems(self._prev_dict):
2615 if k not in self:
2616 self[k] = v
2617 with open(self.path, 'w') as f:
2618@@ -284,7 +291,8 @@
2619 config_cmd_line.append(scope)
2620 config_cmd_line.append('--format=json')
2621 try:
2622- config_data = json.loads(subprocess.check_output(config_cmd_line))
2623+ config_data = json.loads(
2624+ subprocess.check_output(config_cmd_line).decode('UTF-8'))
2625 if scope is not None:
2626 return config_data
2627 return Config(config_data)
2628@@ -303,10 +311,10 @@
2629 if unit:
2630 _args.append(unit)
2631 try:
2632- return json.loads(subprocess.check_output(_args))
2633+ return json.loads(subprocess.check_output(_args).decode('UTF-8'))
2634 except ValueError:
2635 return None
2636- except CalledProcessError, e:
2637+ except CalledProcessError as e:
2638 if e.returncode == 2:
2639 return None
2640 raise
2641@@ -318,7 +326,7 @@
2642 relation_cmd_line = ['relation-set']
2643 if relation_id is not None:
2644 relation_cmd_line.extend(('-r', relation_id))
2645- for k, v in (relation_settings.items() + kwargs.items()):
2646+ for k, v in (list(relation_settings.items()) + list(kwargs.items())):
2647 if v is None:
2648 relation_cmd_line.append('{}='.format(k))
2649 else:
2650@@ -335,7 +343,8 @@
2651 relid_cmd_line = ['relation-ids', '--format=json']
2652 if reltype is not None:
2653 relid_cmd_line.append(reltype)
2654- return json.loads(subprocess.check_output(relid_cmd_line)) or []
2655+ return json.loads(
2656+ subprocess.check_output(relid_cmd_line).decode('UTF-8')) or []
2657 return []
2658
2659
2660@@ -346,7 +355,8 @@
2661 units_cmd_line = ['relation-list', '--format=json']
2662 if relid is not None:
2663 units_cmd_line.extend(('-r', relid))
2664- return json.loads(subprocess.check_output(units_cmd_line)) or []
2665+ return json.loads(
2666+ subprocess.check_output(units_cmd_line).decode('UTF-8')) or []
2667
2668
2669 @cached
2670@@ -386,21 +396,31 @@
2671
2672
2673 @cached
2674+def metadata():
2675+ """Get the current charm metadata.yaml contents as a python object"""
2676+ with open(os.path.join(charm_dir(), 'metadata.yaml')) as md:
2677+ return yaml.safe_load(md)
2678+
2679+
2680+@cached
2681 def relation_types():
2682 """Get a list of relation types supported by this charm"""
2683- charmdir = os.environ.get('CHARM_DIR', '')
2684- mdf = open(os.path.join(charmdir, 'metadata.yaml'))
2685- md = yaml.safe_load(mdf)
2686 rel_types = []
2687+ md = metadata()
2688 for key in ('provides', 'requires', 'peers'):
2689 section = md.get(key)
2690 if section:
2691 rel_types.extend(section.keys())
2692- mdf.close()
2693 return rel_types
2694
2695
2696 @cached
2697+def charm_name():
2698+ """Get the name of the current charm as is specified on metadata.yaml"""
2699+ return metadata().get('name')
2700+
2701+
2702+@cached
2703 def relations():
2704 """Get a nested dictionary of relation data for all related units"""
2705 rels = {}
2706@@ -455,7 +475,7 @@
2707 """Get the unit ID for the remote unit"""
2708 _args = ['unit-get', '--format=json', attribute]
2709 try:
2710- return json.loads(subprocess.check_output(_args))
2711+ return json.loads(subprocess.check_output(_args).decode('UTF-8'))
2712 except ValueError:
2713 return None
2714
2715
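For context, the new metadata() and charm_name() helpers in a nutshell (the value in the comment is illustrative):

    from charmhelpers.core.hookenv import metadata, charm_name

    md = metadata()       # parsed metadata.yaml as a python dict
    name = charm_name()   # shorthand for metadata().get('name'), e.g. 'keystone'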
2716=== modified file 'hooks/charmhelpers/core/host.py'
2717--- hooks/charmhelpers/core/host.py 2014-10-24 11:36:12 +0000
2718+++ hooks/charmhelpers/core/host.py 2014-12-11 17:56:34 +0000
2719@@ -14,11 +14,12 @@
2720 import subprocess
2721 import hashlib
2722 from contextlib import contextmanager
2723-
2724 from collections import OrderedDict
2725
2726-from hookenv import log
2727-from fstab import Fstab
2728+import six
2729+
2730+from .hookenv import log
2731+from .fstab import Fstab
2732
2733
2734 def service_start(service_name):
2735@@ -54,7 +55,9 @@
2736 def service_running(service):
2737 """Determine whether a system service is running"""
2738 try:
2739- output = subprocess.check_output(['service', service, 'status'], stderr=subprocess.STDOUT)
2740+ output = subprocess.check_output(
2741+ ['service', service, 'status'],
2742+ stderr=subprocess.STDOUT).decode('UTF-8')
2743 except subprocess.CalledProcessError:
2744 return False
2745 else:
2746@@ -67,7 +70,9 @@
2747 def service_available(service_name):
2748 """Determine whether a system service is available"""
2749 try:
2750- subprocess.check_output(['service', service_name, 'status'], stderr=subprocess.STDOUT)
2751+ subprocess.check_output(
2752+ ['service', service_name, 'status'],
2753+ stderr=subprocess.STDOUT).decode('UTF-8')
2754 except subprocess.CalledProcessError as e:
2755 return 'unrecognized service' not in e.output
2756 else:
2757@@ -96,6 +101,26 @@
2758 return user_info
2759
2760
2761+def add_group(group_name, system_group=False):
2762+ """Add a group to the system"""
2763+ try:
2764+ group_info = grp.getgrnam(group_name)
2765+ log('group {0} already exists!'.format(group_name))
2766+ except KeyError:
2767+ log('creating group {0}'.format(group_name))
2768+ cmd = ['addgroup']
2769+ if system_group:
2770+ cmd.append('--system')
2771+ else:
2772+ cmd.extend([
2773+ '--group',
2774+ ])
2775+ cmd.append(group_name)
2776+ subprocess.check_call(cmd)
2777+ group_info = grp.getgrnam(group_name)
2778+ return group_info
2779+
2780+
2781 def add_user_to_group(username, group):
2782 """Add a user to a group"""
2783 cmd = [
2784@@ -115,7 +140,7 @@
2785 cmd.append(from_path)
2786 cmd.append(to_path)
2787 log(" ".join(cmd))
2788- return subprocess.check_output(cmd).strip()
2789+ return subprocess.check_output(cmd).decode('UTF-8').strip()
2790
2791
2792 def symlink(source, destination):
2793@@ -130,7 +155,7 @@
2794 subprocess.check_call(cmd)
2795
2796
2797-def mkdir(path, owner='root', group='root', perms=0555, force=False):
2798+def mkdir(path, owner='root', group='root', perms=0o555, force=False):
2799 """Create a directory"""
2800 log("Making dir {} {}:{} {:o}".format(path, owner, group,
2801 perms))
2802@@ -146,7 +171,7 @@
2803 os.chown(realpath, uid, gid)
2804
2805
2806-def write_file(path, content, owner='root', group='root', perms=0444):
2807+def write_file(path, content, owner='root', group='root', perms=0o444):
2808 """Create or overwrite a file with the contents of a string"""
2809 log("Writing file {} {}:{} {:o}".format(path, owner, group, perms))
2810 uid = pwd.getpwnam(owner).pw_uid
2811@@ -177,7 +202,7 @@
2812 cmd_args.extend([device, mountpoint])
2813 try:
2814 subprocess.check_output(cmd_args)
2815- except subprocess.CalledProcessError, e:
2816+ except subprocess.CalledProcessError as e:
2817 log('Error mounting {} at {}\n{}'.format(device, mountpoint, e.output))
2818 return False
2819
2820@@ -191,7 +216,7 @@
2821 cmd_args = ['umount', mountpoint]
2822 try:
2823 subprocess.check_output(cmd_args)
2824- except subprocess.CalledProcessError, e:
2825+ except subprocess.CalledProcessError as e:
2826 log('Error unmounting {}\n{}'.format(mountpoint, e.output))
2827 return False
2828
2829@@ -218,8 +243,8 @@
2830 """
2831 if os.path.exists(path):
2832 h = getattr(hashlib, hash_type)()
2833- with open(path, 'r') as source:
2834- h.update(source.read()) # IGNORE:E1101 - it does have update
2835+ with open(path, 'rb') as source:
2836+ h.update(source.read())
2837 return h.hexdigest()
2838 else:
2839 return None
2840@@ -297,7 +322,7 @@
2841 if length is None:
2842 length = random.choice(range(35, 45))
2843 alphanumeric_chars = [
2844- l for l in (string.letters + string.digits)
2845+ l for l in (string.ascii_letters + string.digits)
2846 if l not in 'l0QD1vAEIOUaeiou']
2847 random_chars = [
2848 random.choice(alphanumeric_chars) for _ in range(length)]
2849@@ -306,14 +331,14 @@
2850
2851 def list_nics(nic_type):
2852 '''Return a list of nics of given type(s)'''
2853- if isinstance(nic_type, basestring):
2854+ if isinstance(nic_type, six.string_types):
2855 int_types = [nic_type]
2856 else:
2857 int_types = nic_type
2858 interfaces = []
2859 for int_type in int_types:
2860 cmd = ['ip', 'addr', 'show', 'label', int_type + '*']
2861- ip_output = subprocess.check_output(cmd).split('\n')
2862+ ip_output = subprocess.check_output(cmd).decode('UTF-8').split('\n')
2863 ip_output = (line for line in ip_output if line)
2864 for line in ip_output:
2865 if line.split()[1].startswith(int_type):
2866@@ -335,7 +360,7 @@
2867
2868 def get_nic_mtu(nic):
2869 cmd = ['ip', 'addr', 'show', nic]
2870- ip_output = subprocess.check_output(cmd).split('\n')
2871+ ip_output = subprocess.check_output(cmd).decode('UTF-8').split('\n')
2872 mtu = ""
2873 for line in ip_output:
2874 words = line.split()
2875@@ -346,7 +371,7 @@
2876
2877 def get_nic_hwaddr(nic):
2878 cmd = ['ip', '-o', '-0', 'addr', 'show', nic]
2879- ip_output = subprocess.check_output(cmd)
2880+ ip_output = subprocess.check_output(cmd).decode('UTF-8')
2881 hwaddr = ""
2882 words = ip_output.split()
2883 if 'link/ether' in words:
2884@@ -363,8 +388,8 @@
2885
2886 '''
2887 import apt_pkg
2888- from charmhelpers.fetch import apt_cache
2889 if not pkgcache:
2890+ from charmhelpers.fetch import apt_cache
2891 pkgcache = apt_cache()
2892 pkg = pkgcache[package]
2893 return apt_pkg.version_compare(pkg.current_ver.ver_str, revno)
2894
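For context, a short sketch of the new add_group() helper next to the existing add_user_to_group() (group and user names are placeholders):

    from charmhelpers.core.host import add_group, add_user_to_group

    # idempotent: if the group already exists it is logged and its grp entry
    # returned instead of being recreated
    group_info = add_group('keystone', system_group=True)
    add_user_to_group('keystone', 'keystone')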
2895=== modified file 'hooks/charmhelpers/core/services/helpers.py'
2896--- hooks/charmhelpers/core/services/helpers.py 2014-10-06 20:51:06 +0000
2897+++ hooks/charmhelpers/core/services/helpers.py 2014-12-11 17:56:34 +0000
2898@@ -196,7 +196,7 @@
2899 if not os.path.isabs(file_name):
2900 file_name = os.path.join(hookenv.charm_dir(), file_name)
2901 with open(file_name, 'w') as file_stream:
2902- os.fchmod(file_stream.fileno(), 0600)
2903+ os.fchmod(file_stream.fileno(), 0o600)
2904 yaml.dump(config_data, file_stream)
2905
2906 def read_context(self, file_name):
2907@@ -211,15 +211,19 @@
2908
2909 class TemplateCallback(ManagerCallback):
2910 """
2911- Callback class that will render a Jinja2 template, for use as a ready action.
2912-
2913- :param str source: The template source file, relative to `$CHARM_DIR/templates`
2914+ Callback class that will render a Jinja2 template, for use as a ready
2915+ action.
2916+
2917+ :param str source: The template source file, relative to
2918+ `$CHARM_DIR/templates`
2919+
2920 :param str target: The target to write the rendered template to
2921 :param str owner: The owner of the rendered file
2922 :param str group: The group of the rendered file
2923 :param int perms: The permissions of the rendered file
2924 """
2925- def __init__(self, source, target, owner='root', group='root', perms=0444):
2926+ def __init__(self, source, target,
2927+ owner='root', group='root', perms=0o444):
2928 self.source = source
2929 self.target = target
2930 self.owner = owner
2931
2932=== modified file 'hooks/charmhelpers/core/templating.py'
2933--- hooks/charmhelpers/core/templating.py 2014-08-13 13:11:49 +0000
2934+++ hooks/charmhelpers/core/templating.py 2014-12-11 17:56:34 +0000
2935@@ -4,7 +4,8 @@
2936 from charmhelpers.core import hookenv
2937
2938
2939-def render(source, target, context, owner='root', group='root', perms=0444, templates_dir=None):
2940+def render(source, target, context, owner='root', group='root',
2941+ perms=0o444, templates_dir=None):
2942 """
2943 Render a template.
2944
2945
2946=== modified file 'hooks/charmhelpers/fetch/__init__.py'
2947--- hooks/charmhelpers/fetch/__init__.py 2014-10-24 11:36:12 +0000
2948+++ hooks/charmhelpers/fetch/__init__.py 2014-12-11 17:56:34 +0000
2949@@ -5,10 +5,6 @@
2950 from charmhelpers.core.host import (
2951 lsb_release
2952 )
2953-from urlparse import (
2954- urlparse,
2955- urlunparse,
2956-)
2957 import subprocess
2958 from charmhelpers.core.hookenv import (
2959 config,
2960@@ -16,6 +12,12 @@
2961 )
2962 import os
2963
2964+import six
2965+if six.PY3:
2966+ from urllib.parse import urlparse, urlunparse
2967+else:
2968+ from urlparse import urlparse, urlunparse
2969+
2970
2971 CLOUD_ARCHIVE = """# Ubuntu Cloud Archive
2972 deb http://ubuntu-cloud.archive.canonical.com/ubuntu {} main
2973@@ -149,7 +151,7 @@
2974 cmd = ['apt-get', '--assume-yes']
2975 cmd.extend(options)
2976 cmd.append('install')
2977- if isinstance(packages, basestring):
2978+ if isinstance(packages, six.string_types):
2979 cmd.append(packages)
2980 else:
2981 cmd.extend(packages)
2982@@ -182,7 +184,7 @@
2983 def apt_purge(packages, fatal=False):
2984 """Purge one or more packages"""
2985 cmd = ['apt-get', '--assume-yes', 'purge']
2986- if isinstance(packages, basestring):
2987+ if isinstance(packages, six.string_types):
2988 cmd.append(packages)
2989 else:
2990 cmd.extend(packages)
2991@@ -193,7 +195,7 @@
2992 def apt_hold(packages, fatal=False):
2993 """Hold one or more packages"""
2994 cmd = ['apt-mark', 'hold']
2995- if isinstance(packages, basestring):
2996+ if isinstance(packages, six.string_types):
2997 cmd.append(packages)
2998 else:
2999 cmd.extend(packages)
3000@@ -260,7 +262,7 @@
3001
3002 if key:
3003 if '-----BEGIN PGP PUBLIC KEY BLOCK-----' in key:
3004- with NamedTemporaryFile() as key_file:
3005+ with NamedTemporaryFile('w+') as key_file:
3006 key_file.write(key)
3007 key_file.flush()
3008 key_file.seek(0)
3009@@ -297,14 +299,14 @@
3010 sources = safe_load((config(sources_var) or '').strip()) or []
3011 keys = safe_load((config(keys_var) or '').strip()) or None
3012
3013- if isinstance(sources, basestring):
3014+ if isinstance(sources, six.string_types):
3015 sources = [sources]
3016
3017 if keys is None:
3018 for source in sources:
3019 add_source(source, None)
3020 else:
3021- if isinstance(keys, basestring):
3022+ if isinstance(keys, six.string_types):
3023 keys = [keys]
3024
3025 if len(sources) != len(keys):
3026@@ -401,7 +403,7 @@
3027 while result is None or result == APT_NO_LOCK:
3028 try:
3029 result = subprocess.check_call(cmd, env=env)
3030- except subprocess.CalledProcessError, e:
3031+ except subprocess.CalledProcessError as e:
3032 retry_count = retry_count + 1
3033 if retry_count > APT_NO_LOCK_RETRY_COUNT:
3034 raise
3035
3036=== modified file 'hooks/charmhelpers/fetch/archiveurl.py'
3037--- hooks/charmhelpers/fetch/archiveurl.py 2014-10-06 20:51:06 +0000
3038+++ hooks/charmhelpers/fetch/archiveurl.py 2014-12-11 17:56:34 +0000
3039@@ -1,8 +1,23 @@
3040 import os
3041-import urllib2
3042-from urllib import urlretrieve
3043-import urlparse
3044 import hashlib
3045+import re
3046+
3047+import six
3048+if six.PY3:
3049+ from urllib.request import (
3050+ build_opener, install_opener, urlopen, urlretrieve,
3051+ HTTPPasswordMgrWithDefaultRealm, HTTPBasicAuthHandler,
3052+ )
3053+ from urllib.parse import urlparse, urlunparse, parse_qs
3054+ from urllib.error import URLError
3055+else:
3056+ from urllib import urlretrieve
3057+ from urllib2 import (
3058+ build_opener, install_opener, urlopen,
3059+ HTTPPasswordMgrWithDefaultRealm, HTTPBasicAuthHandler,
3060+ URLError
3061+ )
3062+ from urlparse import urlparse, urlunparse, parse_qs
3063
3064 from charmhelpers.fetch import (
3065 BaseFetchHandler,
3066@@ -15,6 +30,24 @@
3067 from charmhelpers.core.host import mkdir, check_hash
3068
3069
3070+def splituser(host):
3071+ '''urllib.splituser(), but six's support of this seems broken'''
3072+ _userprog = re.compile('^(.*)@(.*)$')
3073+ match = _userprog.match(host)
3074+ if match:
3075+ return match.group(1, 2)
3076+ return None, host
3077+
3078+
3079+def splitpasswd(user):
3080+ '''urllib.splitpasswd(), but six's support of this is missing'''
3081+ _passwdprog = re.compile('^([^:]*):(.*)$', re.S)
3082+ match = _passwdprog.match(user)
3083+ if match:
3084+ return match.group(1, 2)
3085+ return user, None
3086+
3087+
3088 class ArchiveUrlFetchHandler(BaseFetchHandler):
3089 """
3090 Handler to download archive files from arbitrary URLs.
3091@@ -42,20 +75,20 @@
3092 """
3093 # propogate all exceptions
3094 # URLError, OSError, etc
3095- proto, netloc, path, params, query, fragment = urlparse.urlparse(source)
3096+ proto, netloc, path, params, query, fragment = urlparse(source)
3097 if proto in ('http', 'https'):
3098- auth, barehost = urllib2.splituser(netloc)
3099+ auth, barehost = splituser(netloc)
3100 if auth is not None:
3101- source = urlparse.urlunparse((proto, barehost, path, params, query, fragment))
3102- username, password = urllib2.splitpasswd(auth)
3103- passman = urllib2.HTTPPasswordMgrWithDefaultRealm()
3104+ source = urlunparse((proto, barehost, path, params, query, fragment))
3105+ username, password = splitpasswd(auth)
3106+ passman = HTTPPasswordMgrWithDefaultRealm()
3107 # Realm is set to None in add_password to force the username and password
3108 # to be used whatever the realm
3109 passman.add_password(None, source, username, password)
3110- authhandler = urllib2.HTTPBasicAuthHandler(passman)
3111- opener = urllib2.build_opener(authhandler)
3112- urllib2.install_opener(opener)
3113- response = urllib2.urlopen(source)
3114+ authhandler = HTTPBasicAuthHandler(passman)
3115+ opener = build_opener(authhandler)
3116+ install_opener(opener)
3117+ response = urlopen(source)
3118 try:
3119 with open(dest, 'w') as dest_file:
3120 dest_file.write(response.read())
3121@@ -91,17 +124,21 @@
3122 url_parts = self.parse_url(source)
3123 dest_dir = os.path.join(os.environ.get('CHARM_DIR'), 'fetched')
3124 if not os.path.exists(dest_dir):
3125- mkdir(dest_dir, perms=0755)
3126+ mkdir(dest_dir, perms=0o755)
3127 dld_file = os.path.join(dest_dir, os.path.basename(url_parts.path))
3128 try:
3129 self.download(source, dld_file)
3130- except urllib2.URLError as e:
3131+ except URLError as e:
3132 raise UnhandledSource(e.reason)
3133 except OSError as e:
3134 raise UnhandledSource(e.strerror)
3135- options = urlparse.parse_qs(url_parts.fragment)
3136+ options = parse_qs(url_parts.fragment)
3137 for key, value in options.items():
3138- if key in hashlib.algorithms:
3139+ if not six.PY3:
3140+ algorithms = hashlib.algorithms
3141+ else:
3142+ algorithms = hashlib.algorithms_available
3143+ if key in algorithms:
3144 check_hash(dld_file, value, key)
3145 if checksum:
3146 check_hash(dld_file, checksum, hash_type)
3147
3148=== modified file 'hooks/charmhelpers/fetch/bzrurl.py'
3149--- hooks/charmhelpers/fetch/bzrurl.py 2014-06-24 17:11:12 +0000
3150+++ hooks/charmhelpers/fetch/bzrurl.py 2014-12-11 17:56:34 +0000
3151@@ -5,6 +5,10 @@
3152 )
3153 from charmhelpers.core.host import mkdir
3154
3155+import six
3156+if six.PY3:
3157+ raise ImportError('bzrlib does not support Python3')
3158+
3159 try:
3160 from bzrlib.branch import Branch
3161 except ImportError:
3162@@ -42,7 +46,7 @@
3163 dest_dir = os.path.join(os.environ.get('CHARM_DIR'), "fetched",
3164 branch_name)
3165 if not os.path.exists(dest_dir):
3166- mkdir(dest_dir, perms=0755)
3167+ mkdir(dest_dir, perms=0o755)
3168 try:
3169 self.branch(source, dest_dir)
3170 except OSError as e:
3171
3172=== modified file 'hooks/charmhelpers/fetch/giturl.py'
3173--- hooks/charmhelpers/fetch/giturl.py 2014-10-24 11:36:12 +0000
3174+++ hooks/charmhelpers/fetch/giturl.py 2014-12-11 17:56:34 +0000
3175@@ -5,6 +5,10 @@
3176 )
3177 from charmhelpers.core.host import mkdir
3178
3179+import six
3180+if six.PY3:
3181+ raise ImportError('GitPython does not support Python 3')
3182+
3183 try:
3184 from git import Repo
3185 except ImportError:
3186@@ -17,7 +21,7 @@
3187 """Handler for git branches via generic and github URLs"""
3188 def can_handle(self, source):
3189 url_parts = self.parse_url(source)
3190- #TODO (mattyw) no support for ssh git@ yet
3191+ # TODO (mattyw) no support for ssh git@ yet
3192 if url_parts.scheme not in ('http', 'https', 'git'):
3193 return False
3194 else:
3195@@ -30,13 +34,16 @@
3196 repo = Repo.clone_from(source, dest)
3197 repo.git.checkout(branch)
3198
3199- def install(self, source, branch="master"):
3200+ def install(self, source, branch="master", dest=None):
3201 url_parts = self.parse_url(source)
3202 branch_name = url_parts.path.strip("/").split("/")[-1]
3203- dest_dir = os.path.join(os.environ.get('CHARM_DIR'), "fetched",
3204- branch_name)
3205+ if dest:
3206+ dest_dir = os.path.join(dest, branch_name)
3207+ else:
3208+ dest_dir = os.path.join(os.environ.get('CHARM_DIR'), "fetched",
3209+ branch_name)
3210 if not os.path.exists(dest_dir):
3211- mkdir(dest_dir, perms=0755)
3212+ mkdir(dest_dir, perms=0o755)
3213 try:
3214 self.clone(source, dest_dir, branch)
3215 except OSError as e:
3216
3217=== modified file 'tests/charmhelpers/__init__.py'
3218--- tests/charmhelpers/__init__.py 2014-06-24 17:11:12 +0000
3219+++ tests/charmhelpers/__init__.py 2014-12-11 17:56:34 +0000
3220@@ -0,0 +1,22 @@
3221+# Bootstrap charm-helpers, installing its dependencies if necessary using
3222+# only standard libraries.
3223+import subprocess
3224+import sys
3225+
3226+try:
3227+ import six # flake8: noqa
3228+except ImportError:
3229+ if sys.version_info.major == 2:
3230+ subprocess.check_call(['apt-get', 'install', '-y', 'python-six'])
3231+ else:
3232+ subprocess.check_call(['apt-get', 'install', '-y', 'python3-six'])
3233+ import six # flake8: noqa
3234+
3235+try:
3236+ import yaml # flake8: noqa
3237+except ImportError:
3238+ if sys.version_info.major == 2:
3239+ subprocess.check_call(['apt-get', 'install', '-y', 'python-yaml'])
3240+ else:
3241+ subprocess.check_call(['apt-get', 'install', '-y', 'python3-yaml'])
3242+ import yaml # flake8: noqa
3243
3244=== modified file 'tests/charmhelpers/contrib/amulet/deployment.py'
3245--- tests/charmhelpers/contrib/amulet/deployment.py 2014-10-06 20:51:06 +0000
3246+++ tests/charmhelpers/contrib/amulet/deployment.py 2014-12-11 17:56:34 +0000
3247@@ -1,6 +1,6 @@
3248 import amulet
3249-
3250 import os
3251+import six
3252
3253
3254 class AmuletDeployment(object):
3255@@ -52,12 +52,12 @@
3256
3257 def _add_relations(self, relations):
3258 """Add all of the relations for the services."""
3259- for k, v in relations.iteritems():
3260+ for k, v in six.iteritems(relations):
3261 self.d.relate(k, v)
3262
3263 def _configure_services(self, configs):
3264 """Configure all of the services."""
3265- for service, config in configs.iteritems():
3266+ for service, config in six.iteritems(configs):
3267 self.d.configure(service, config)
3268
3269 def _deploy(self):
3270
3271=== modified file 'tests/charmhelpers/contrib/amulet/utils.py'
3272--- tests/charmhelpers/contrib/amulet/utils.py 2014-07-30 15:11:18 +0000
3273+++ tests/charmhelpers/contrib/amulet/utils.py 2014-12-11 17:56:34 +0000
3274@@ -5,6 +5,8 @@
3275 import sys
3276 import time
3277
3278+import six
3279+
3280
3281 class AmuletUtils(object):
3282 """Amulet utilities.
3283@@ -58,7 +60,7 @@
3284 Verify the specified services are running on the corresponding
3285 service units.
3286 """
3287- for k, v in commands.iteritems():
3288+ for k, v in six.iteritems(commands):
3289 for cmd in v:
3290 output, code = k.run(cmd)
3291 if code != 0:
3292@@ -100,11 +102,11 @@
3293 longs, or can be a function that evaluate a variable and returns a
3294 bool.
3295 """
3296- for k, v in expected.iteritems():
3297+ for k, v in six.iteritems(expected):
3298 if k in actual:
3299- if (isinstance(v, basestring) or
3300+ if (isinstance(v, six.string_types) or
3301 isinstance(v, bool) or
3302- isinstance(v, (int, long))):
3303+ isinstance(v, six.integer_types)):
3304 if v != actual[k]:
3305 return "{}:{}".format(k, actual[k])
3306 elif not v(actual[k]):
3307
3308=== modified file 'tests/charmhelpers/contrib/openstack/amulet/deployment.py'
3309--- tests/charmhelpers/contrib/openstack/amulet/deployment.py 2014-10-06 20:51:06 +0000
3310+++ tests/charmhelpers/contrib/openstack/amulet/deployment.py 2014-12-11 17:56:34 +0000
3311@@ -1,3 +1,4 @@
3312+import six
3313 from charmhelpers.contrib.amulet.deployment import (
3314 AmuletDeployment
3315 )
3316@@ -69,7 +70,7 @@
3317
3318 def _configure_services(self, configs):
3319 """Configure all of the services."""
3320- for service, config in configs.iteritems():
3321+ for service, config in six.iteritems(configs):
3322 self.d.configure(service, config)
3323
3324 def _get_openstack_release(self):
3325
3326=== modified file 'tests/charmhelpers/contrib/openstack/amulet/utils.py'
3327--- tests/charmhelpers/contrib/openstack/amulet/utils.py 2014-10-06 20:51:06 +0000
3328+++ tests/charmhelpers/contrib/openstack/amulet/utils.py 2014-12-11 17:56:34 +0000
3329@@ -7,6 +7,8 @@
3330 import keystoneclient.v2_0 as keystone_client
3331 import novaclient.v1_1.client as nova_client
3332
3333+import six
3334+
3335 from charmhelpers.contrib.amulet.utils import (
3336 AmuletUtils
3337 )
3338@@ -60,7 +62,7 @@
3339 expected service catalog endpoints.
3340 """
3341 self.log.debug('actual: {}'.format(repr(actual)))
3342- for k, v in expected.iteritems():
3343+ for k, v in six.iteritems(expected):
3344 if k in actual:
3345 ret = self._validate_dict_data(expected[k][0], actual[k][0])
3346 if ret:
