Merge lp:~corey.bryant/charms/trusty/nova-cloud-controller/contrib.python.packages into lp:~openstack-charmers-archive/charms/trusty/nova-cloud-controller/next

Proposed by Corey Bryant
Status: Merged
Merged at revision: 126
Proposed branch: lp:~corey.bryant/charms/trusty/nova-cloud-controller/contrib.python.packages
Merge into: lp:~openstack-charmers-archive/charms/trusty/nova-cloud-controller/next
Diff against target: 3482 lines (+1085/-536)
33 files modified
charm-helpers-hooks.yaml (+1/-0)
hooks/charmhelpers/__init__.py (+22/-0)
hooks/charmhelpers/contrib/hahelpers/cluster.py (+16/-7)
hooks/charmhelpers/contrib/network/ip.py (+52/-50)
hooks/charmhelpers/contrib/openstack/amulet/deployment.py (+2/-1)
hooks/charmhelpers/contrib/openstack/amulet/utils.py (+3/-1)
hooks/charmhelpers/contrib/openstack/context.py (+319/-226)
hooks/charmhelpers/contrib/openstack/ip.py (+41/-27)
hooks/charmhelpers/contrib/openstack/neutron.py (+20/-4)
hooks/charmhelpers/contrib/openstack/templates/haproxy.cfg (+2/-2)
hooks/charmhelpers/contrib/openstack/templating.py (+5/-5)
hooks/charmhelpers/contrib/openstack/utils.py (+146/-13)
hooks/charmhelpers/contrib/peerstorage/__init__.py (+4/-3)
hooks/charmhelpers/contrib/python/packages.py (+77/-0)
hooks/charmhelpers/contrib/storage/linux/ceph.py (+89/-102)
hooks/charmhelpers/contrib/storage/linux/loopback.py (+4/-4)
hooks/charmhelpers/contrib/storage/linux/lvm.py (+1/-0)
hooks/charmhelpers/contrib/storage/linux/utils.py (+3/-2)
hooks/charmhelpers/core/fstab.py (+10/-8)
hooks/charmhelpers/core/hookenv.py (+41/-15)
hooks/charmhelpers/core/host.py (+51/-20)
hooks/charmhelpers/core/services/__init__.py (+2/-2)
hooks/charmhelpers/core/services/helpers.py (+9/-5)
hooks/charmhelpers/core/templating.py (+2/-1)
hooks/charmhelpers/fetch/__init__.py (+18/-12)
hooks/charmhelpers/fetch/archiveurl.py (+53/-16)
hooks/charmhelpers/fetch/bzrurl.py (+5/-1)
hooks/charmhelpers/fetch/giturl.py (+51/-0)
tests/charmhelpers/__init__.py (+22/-0)
tests/charmhelpers/contrib/amulet/deployment.py (+3/-3)
tests/charmhelpers/contrib/amulet/utils.py (+6/-4)
tests/charmhelpers/contrib/openstack/amulet/deployment.py (+2/-1)
tests/charmhelpers/contrib/openstack/amulet/utils.py (+3/-1)
To merge this branch: bzr merge lp:~corey.bryant/charms/trusty/nova-cloud-controller/contrib.python.packages
Reviewer: OpenStack Charmers
Status: Pending
Review via email: mp+244328@code.launchpad.net
uosci-testing-bot (uosci-testing-bot) wrote :

charm_lint_check #167 nova-cloud-controller-next for corey.bryant mp244328
    LINT OK: passed

Build: http://10.230.18.80:8080/job/charm_lint_check/167/

uosci-testing-bot (uosci-testing-bot) wrote :

charm_unit_test #130 nova-cloud-controller-next for corey.bryant mp244328
    UNIT FAIL: unit-test failed

UNIT Results (max last 2 lines):
  FAILED (errors=2)
  make: *** [unit_test] Error 1

Full unit test output: pastebin not avail., cmd error
Build: http://10.230.18.80:8080/job/charm_unit_test/130/

uosci-testing-bot (uosci-testing-bot) wrote :

charm_amulet_test #80 nova-cloud-controller-next for corey.bryant mp244328
    AMULET FAIL: amulet-test failed

AMULET Results (max last 2 lines):
  ERROR subprocess encountered error code 2
  make: *** [test] Error 2

Full amulet test output: pastebin not avail., cmd error
Build: http://10.230.18.80:8080/job/charm_amulet_test/80/

uosci-testing-bot (uosci-testing-bot) wrote :

charm_lint_check #188 nova-cloud-controller-next for corey.bryant mp244328
    LINT OK: passed

Build: http://10.230.18.80:8080/job/charm_lint_check/188/

uosci-testing-bot (uosci-testing-bot) wrote :

charm_unit_test #151 nova-cloud-controller-next for corey.bryant mp244328
    UNIT OK: passed

Build: http://10.230.18.80:8080/job/charm_unit_test/151/

uosci-testing-bot (uosci-testing-bot) wrote :

charm_amulet_test #96 nova-cloud-controller-next for corey.bryant mp244328
    AMULET FAIL: amulet-test failed

AMULET Results (max last 2 lines):
  ERROR subprocess encountered error code 1
  make: *** [test] Error 1

Full amulet test output: pastebin not avail., cmd error
Build: http://10.230.18.80:8080/job/charm_amulet_test/96/

uosci-testing-bot (uosci-testing-bot) wrote :

charm_lint_check #197 nova-cloud-controller-next for corey.bryant mp244328
    LINT OK: passed

Build: http://10.230.18.80:8080/job/charm_lint_check/197/

uosci-testing-bot (uosci-testing-bot) wrote :

charm_unit_test #160 nova-cloud-controller-next for corey.bryant mp244328
    UNIT OK: passed

Build: http://10.230.18.80:8080/job/charm_unit_test/160/

uosci-testing-bot (uosci-testing-bot) wrote :

charm_amulet_test #115 nova-cloud-controller-next for corey.bryant mp244328
    AMULET FAIL: amulet-test failed

AMULET Results (max last 2 lines):
  ERROR subprocess encountered error code 1
  make: *** [test] Error 1

Full amulet test output: pastebin not avail., cmd error
Build: http://10.230.18.80:8080/job/charm_amulet_test/115/

uosci-testing-bot (uosci-testing-bot) wrote :

charm_lint_check #216 nova-cloud-controller-next for corey.bryant mp244328
    LINT OK: passed

Build: http://10.230.18.80:8080/job/charm_lint_check/216/

uosci-testing-bot (uosci-testing-bot) wrote :

charm_unit_test #179 nova-cloud-controller-next for corey.bryant mp244328
    UNIT OK: passed

Build: http://10.230.18.80:8080/job/charm_unit_test/179/

uosci-testing-bot (uosci-testing-bot) wrote :

charm_amulet_test #133 nova-cloud-controller-next for corey.bryant mp244328
    AMULET FAIL: amulet-test failed

AMULET Results (max last 2 lines):
  ERROR subprocess encountered error code 1
  make: *** [test] Error 1

Full amulet test output: pastebin not avail., cmd error
Build: http://10.230.18.80:8080/job/charm_amulet_test/133/

Preview Diff

1=== modified file 'charm-helpers-hooks.yaml'
2--- charm-helpers-hooks.yaml 2014-10-02 09:22:36 +0000
3+++ charm-helpers-hooks.yaml 2014-12-11 17:56:50 +0000
4@@ -10,3 +10,4 @@
5 - payload.execd
6 - contrib.network.ip
7 - contrib.peerstorage
8+ - contrib.python.packages
9
10=== added file 'hooks/charmhelpers/__init__.py'
11--- hooks/charmhelpers/__init__.py 1970-01-01 00:00:00 +0000
12+++ hooks/charmhelpers/__init__.py 2014-12-11 17:56:50 +0000
13@@ -0,0 +1,22 @@
14+# Bootstrap charm-helpers, installing its dependencies if necessary using
15+# only standard libraries.
16+import subprocess
17+import sys
18+
19+try:
20+ import six # flake8: noqa
21+except ImportError:
22+ if sys.version_info.major == 2:
23+ subprocess.check_call(['apt-get', 'install', '-y', 'python-six'])
24+ else:
25+ subprocess.check_call(['apt-get', 'install', '-y', 'python3-six'])
26+ import six # flake8: noqa
27+
28+try:
29+ import yaml # flake8: noqa
30+except ImportError:
31+ if sys.version_info.major == 2:
32+ subprocess.check_call(['apt-get', 'install', '-y', 'python-yaml'])
33+ else:
34+ subprocess.check_call(['apt-get', 'install', '-y', 'python3-yaml'])
35+ import yaml # flake8: noqa
36
37=== removed file 'hooks/charmhelpers/__init__.py'
38=== modified file 'hooks/charmhelpers/contrib/hahelpers/cluster.py'
39--- hooks/charmhelpers/contrib/hahelpers/cluster.py 2014-09-22 16:12:12 +0000
40+++ hooks/charmhelpers/contrib/hahelpers/cluster.py 2014-12-11 17:56:50 +0000
41@@ -13,9 +13,10 @@
42
43 import subprocess
44 import os
45-
46 from socket import gethostname as get_unit_hostname
47
48+import six
49+
50 from charmhelpers.core.hookenv import (
51 log,
52 relation_ids,
53@@ -77,7 +78,7 @@
54 "show", resource
55 ]
56 try:
57- status = subprocess.check_output(cmd)
58+ status = subprocess.check_output(cmd).decode('UTF-8')
59 except subprocess.CalledProcessError:
60 return False
61 else:
62@@ -150,34 +151,42 @@
63 return False
64
65
66-def determine_api_port(public_port):
67+def determine_api_port(public_port, singlenode_mode=False):
68 '''
69 Determine correct API server listening port based on
70 existence of HTTPS reverse proxy and/or haproxy.
71
72 public_port: int: standard public port for given service
73
74+ singlenode_mode: boolean: Shuffle ports when only a single unit is present
75+
76 returns: int: the correct listening port for the API service
77 '''
78 i = 0
79- if len(peer_units()) > 0 or is_clustered():
80+ if singlenode_mode:
81+ i += 1
82+ elif len(peer_units()) > 0 or is_clustered():
83 i += 1
84 if https():
85 i += 1
86 return public_port - (i * 10)
87
88
89-def determine_apache_port(public_port):
90+def determine_apache_port(public_port, singlenode_mode=False):
91 '''
92 Description: Determine correct apache listening port based on public IP +
93 state of the cluster.
94
95 public_port: int: standard public port for given service
96
97+ singlenode_mode: boolean: Shuffle ports when only a single unit is present
98+
99 returns: int: the correct listening port for the HAProxy service
100 '''
101 i = 0
102- if len(peer_units()) > 0 or is_clustered():
103+ if singlenode_mode:
104+ i += 1
105+ elif len(peer_units()) > 0 or is_clustered():
106 i += 1
107 return public_port - (i * 10)
108
109@@ -197,7 +206,7 @@
110 for setting in settings:
111 conf[setting] = config_get(setting)
112 missing = []
113- [missing.append(s) for s, v in conf.iteritems() if v is None]
114+ [missing.append(s) for s, v in six.iteritems(conf) if v is None]
115 if missing:
116 log('Insufficient config data to configure hacluster.', level=ERROR)
117 raise HAIncompleteConfig
118
119=== modified file 'hooks/charmhelpers/contrib/network/ip.py'
120--- hooks/charmhelpers/contrib/network/ip.py 2014-10-09 10:31:45 +0000
121+++ hooks/charmhelpers/contrib/network/ip.py 2014-12-11 17:56:50 +0000
122@@ -1,15 +1,12 @@
123 import glob
124 import re
125 import subprocess
126-import sys
127
128 from functools import partial
129
130 from charmhelpers.core.hookenv import unit_get
131 from charmhelpers.fetch import apt_install
132 from charmhelpers.core.hookenv import (
133- WARNING,
134- ERROR,
135 log
136 )
137
138@@ -34,31 +31,28 @@
139 network)
140
141
142+def no_ip_found_error_out(network):
143+ errmsg = ("No IP address found in network: %s" % network)
144+ raise ValueError(errmsg)
145+
146+
147 def get_address_in_network(network, fallback=None, fatal=False):
148- """
149- Get an IPv4 or IPv6 address within the network from the host.
150+ """Get an IPv4 or IPv6 address within the network from the host.
151
152 :param network (str): CIDR presentation format. For example,
153 '192.168.1.0/24'.
154 :param fallback (str): If no address is found, return fallback.
155 :param fatal (boolean): If no address is found, fallback is not
156 set and fatal is True then exit(1).
157-
158 """
159-
160- def not_found_error_out():
161- log("No IP address found in network: %s" % network,
162- level=ERROR)
163- sys.exit(1)
164-
165 if network is None:
166 if fallback is not None:
167 return fallback
168+
169+ if fatal:
170+ no_ip_found_error_out(network)
171 else:
172- if fatal:
173- not_found_error_out()
174- else:
175- return None
176+ return None
177
178 _validate_cidr(network)
179 network = netaddr.IPNetwork(network)
180@@ -70,6 +64,7 @@
181 cidr = netaddr.IPNetwork("%s/%s" % (addr, netmask))
182 if cidr in network:
183 return str(cidr.ip)
184+
185 if network.version == 6 and netifaces.AF_INET6 in addresses:
186 for addr in addresses[netifaces.AF_INET6]:
187 if not addr['addr'].startswith('fe80'):
188@@ -82,20 +77,20 @@
189 return fallback
190
191 if fatal:
192- not_found_error_out()
193+ no_ip_found_error_out(network)
194
195 return None
196
197
198 def is_ipv6(address):
199- '''Determine whether provided address is IPv6 or not'''
200+ """Determine whether provided address is IPv6 or not."""
201 try:
202 address = netaddr.IPAddress(address)
203 except netaddr.AddrFormatError:
204 # probably a hostname - so not an address at all!
205 return False
206- else:
207- return address.version == 6
208+
209+ return address.version == 6
210
211
212 def is_address_in_network(network, address):
213@@ -113,11 +108,13 @@
214 except (netaddr.core.AddrFormatError, ValueError):
215 raise ValueError("Network (%s) is not in CIDR presentation format" %
216 network)
217+
218 try:
219 address = netaddr.IPAddress(address)
220 except (netaddr.core.AddrFormatError, ValueError):
221 raise ValueError("Address (%s) is not in correct presentation format" %
222 address)
223+
224 if address in network:
225 return True
226 else:
227@@ -147,6 +144,7 @@
228 return iface
229 else:
230 return addresses[netifaces.AF_INET][0][key]
231+
232 if address.version == 6 and netifaces.AF_INET6 in addresses:
233 for addr in addresses[netifaces.AF_INET6]:
234 if not addr['addr'].startswith('fe80'):
235@@ -160,41 +158,42 @@
236 return str(cidr).split('/')[1]
237 else:
238 return addr[key]
239+
240 return None
241
242
243 get_iface_for_address = partial(_get_for_address, key='iface')
244
245+
246 get_netmask_for_address = partial(_get_for_address, key='netmask')
247
248
249 def format_ipv6_addr(address):
250- """
251- IPv6 needs to be wrapped with [] in url link to parse correctly.
252+ """If address is IPv6, wrap it in '[]' otherwise return None.
253+
254+ This is required by most configuration files when specifying IPv6
255+ addresses.
256 """
257 if is_ipv6(address):
258- address = "[%s]" % address
259- else:
260- log("Not a valid ipv6 address: %s" % address, level=WARNING)
261- address = None
262+ return "[%s]" % address
263
264- return address
265+ return None
266
267
268 def get_iface_addr(iface='eth0', inet_type='AF_INET', inc_aliases=False,
269 fatal=True, exc_list=None):
270- """
271- Return the assigned IP address for a given interface, if any, or [].
272- """
273+ """Return the assigned IP address for a given interface, if any."""
274 # Extract nic if passed /dev/ethX
275 if '/' in iface:
276 iface = iface.split('/')[-1]
277+
278 if not exc_list:
279 exc_list = []
280+
281 try:
282 inet_num = getattr(netifaces, inet_type)
283 except AttributeError:
284- raise Exception('Unknown inet type ' + str(inet_type))
285+ raise Exception("Unknown inet type '%s'" % str(inet_type))
286
287 interfaces = netifaces.interfaces()
288 if inc_aliases:
289@@ -202,15 +201,18 @@
290 for _iface in interfaces:
291 if iface == _iface or _iface.split(':')[0] == iface:
292 ifaces.append(_iface)
293+
294 if fatal and not ifaces:
295 raise Exception("Invalid interface '%s'" % iface)
296+
297 ifaces.sort()
298 else:
299 if iface not in interfaces:
300 if fatal:
301- raise Exception("%s not found " % (iface))
302+ raise Exception("Interface '%s' not found " % (iface))
303 else:
304 return []
305+
306 else:
307 ifaces = [iface]
308
309@@ -221,10 +223,13 @@
310 for entry in net_info[inet_num]:
311 if 'addr' in entry and entry['addr'] not in exc_list:
312 addresses.append(entry['addr'])
313+
314 if fatal and not addresses:
315 raise Exception("Interface '%s' doesn't have any %s addresses." %
316 (iface, inet_type))
317- return addresses
318+
319+ return sorted(addresses)
320+
321
322 get_ipv4_addr = partial(get_iface_addr, inet_type='AF_INET')
323
324@@ -241,6 +246,7 @@
325 raw = re.match(ll_key, _addr)
326 if raw:
327 _addr = raw.group(1)
328+
329 if _addr == addr:
330 log("Address '%s' is configured on iface '%s'" %
331 (addr, iface))
332@@ -251,8 +257,9 @@
333
334
335 def sniff_iface(f):
336- """If no iface provided, inject net iface inferred from unit private
337- address.
338+ """Ensure decorated function is called with a value for iface.
339+
340+ If no iface provided, inject net iface inferred from unit private address.
341 """
342 def iface_sniffer(*args, **kwargs):
343 if not kwargs.get('iface', None):
344@@ -295,7 +302,7 @@
345 if global_addrs:
346 # Make sure any found global addresses are not temporary
347 cmd = ['ip', 'addr', 'show', iface]
348- out = subprocess.check_output(cmd)
349+ out = subprocess.check_output(cmd).decode('UTF-8')
350 if dynamic_only:
351 key = re.compile("inet6 (.+)/[0-9]+ scope global dynamic.*")
352 else:
353@@ -317,33 +324,28 @@
354 return addrs
355
356 if fatal:
357- raise Exception("Interface '%s' doesn't have a scope global "
358+ raise Exception("Interface '%s' does not have a scope global "
359 "non-temporary ipv6 address." % iface)
360
361 return []
362
363
364 def get_bridges(vnic_dir='/sys/devices/virtual/net'):
365- """
366- Return a list of bridges on the system or []
367- """
368- b_rgex = vnic_dir + '/*/bridge'
369- return [x.replace(vnic_dir, '').split('/')[1] for x in glob.glob(b_rgex)]
370+ """Return a list of bridges on the system."""
371+ b_regex = "%s/*/bridge" % vnic_dir
372+ return [x.replace(vnic_dir, '').split('/')[1] for x in glob.glob(b_regex)]
373
374
375 def get_bridge_nics(bridge, vnic_dir='/sys/devices/virtual/net'):
376- """
377- Return a list of nics comprising a given bridge on the system or []
378- """
379- brif_rgex = "%s/%s/brif/*" % (vnic_dir, bridge)
380- return [x.split('/')[-1] for x in glob.glob(brif_rgex)]
381+ """Return a list of nics comprising a given bridge on the system."""
382+ brif_regex = "%s/%s/brif/*" % (vnic_dir, bridge)
383+ return [x.split('/')[-1] for x in glob.glob(brif_regex)]
384
385
386 def is_bridge_member(nic):
387- """
388- Check if a given nic is a member of a bridge
389- """
390+ """Check if a given nic is a member of a bridge."""
391 for bridge in get_bridges():
392 if nic in get_bridge_nics(bridge):
393 return True
394+
395 return False
396
397=== modified file 'hooks/charmhelpers/contrib/openstack/amulet/deployment.py'
398--- hooks/charmhelpers/contrib/openstack/amulet/deployment.py 2014-10-06 21:03:50 +0000
399+++ hooks/charmhelpers/contrib/openstack/amulet/deployment.py 2014-12-11 17:56:50 +0000
400@@ -1,3 +1,4 @@
401+import six
402 from charmhelpers.contrib.amulet.deployment import (
403 AmuletDeployment
404 )
405@@ -69,7 +70,7 @@
406
407 def _configure_services(self, configs):
408 """Configure all of the services."""
409- for service, config in configs.iteritems():
410+ for service, config in six.iteritems(configs):
411 self.d.configure(service, config)
412
413 def _get_openstack_release(self):
414
415=== modified file 'hooks/charmhelpers/contrib/openstack/amulet/utils.py'
416--- hooks/charmhelpers/contrib/openstack/amulet/utils.py 2014-10-06 21:03:50 +0000
417+++ hooks/charmhelpers/contrib/openstack/amulet/utils.py 2014-12-11 17:56:50 +0000
418@@ -7,6 +7,8 @@
419 import keystoneclient.v2_0 as keystone_client
420 import novaclient.v1_1.client as nova_client
421
422+import six
423+
424 from charmhelpers.contrib.amulet.utils import (
425 AmuletUtils
426 )
427@@ -60,7 +62,7 @@
428 expected service catalog endpoints.
429 """
430 self.log.debug('actual: {}'.format(repr(actual)))
431- for k, v in expected.iteritems():
432+ for k, v in six.iteritems(expected):
433 if k in actual:
434 ret = self._validate_dict_data(expected[k][0], actual[k][0])
435 if ret:
436
437=== modified file 'hooks/charmhelpers/contrib/openstack/context.py'
438--- hooks/charmhelpers/contrib/openstack/context.py 2014-10-13 16:18:58 +0000
439+++ hooks/charmhelpers/contrib/openstack/context.py 2014-12-11 17:56:50 +0000
440@@ -1,20 +1,18 @@
441 import json
442 import os
443 import time
444-
445 from base64 import b64decode
446+from subprocess import check_call
447
448-from subprocess import (
449- check_call
450-)
451+import six
452
453 from charmhelpers.fetch import (
454 apt_install,
455 filter_installed_packages,
456 )
457-
458 from charmhelpers.core.hookenv import (
459 config,
460+ is_relation_made,
461 local_unit,
462 log,
463 relation_get,
464@@ -23,43 +21,40 @@
465 relation_set,
466 unit_get,
467 unit_private_ip,
468+ DEBUG,
469+ INFO,
470+ WARNING,
471 ERROR,
472- INFO
473 )
474-
475 from charmhelpers.core.host import (
476 mkdir,
477- write_file
478+ write_file,
479 )
480-
481 from charmhelpers.contrib.hahelpers.cluster import (
482 determine_apache_port,
483 determine_api_port,
484 https,
485- is_clustered
486+ is_clustered,
487 )
488-
489 from charmhelpers.contrib.hahelpers.apache import (
490 get_cert,
491 get_ca_cert,
492 install_ca_cert,
493 )
494-
495 from charmhelpers.contrib.openstack.neutron import (
496 neutron_plugin_attribute,
497 )
498-
499 from charmhelpers.contrib.network.ip import (
500 get_address_in_network,
501 get_ipv6_addr,
502 get_netmask_for_address,
503 format_ipv6_addr,
504- is_address_in_network
505+ is_address_in_network,
506 )
507-
508 from charmhelpers.contrib.openstack.utils import get_host_ip
509
510 CA_CERT_PATH = '/usr/local/share/ca-certificates/keystone_juju_ca_cert.crt'
511+ADDRESS_TYPES = ['admin', 'internal', 'public']
512
513
514 class OSContextError(Exception):
515@@ -67,7 +62,7 @@
516
517
518 def ensure_packages(packages):
519- '''Install but do not upgrade required plugin packages'''
520+ """Install but do not upgrade required plugin packages."""
521 required = filter_installed_packages(packages)
522 if required:
523 apt_install(required, fatal=True)
524@@ -75,20 +70,27 @@
525
526 def context_complete(ctxt):
527 _missing = []
528- for k, v in ctxt.iteritems():
529+ for k, v in six.iteritems(ctxt):
530 if v is None or v == '':
531 _missing.append(k)
532+
533 if _missing:
534- log('Missing required data: %s' % ' '.join(_missing), level='INFO')
535+ log('Missing required data: %s' % ' '.join(_missing), level=INFO)
536 return False
537+
538 return True
539
540
541 def config_flags_parser(config_flags):
542+ """Parses config flags string into dict.
543+
544+ The provided config_flags string may be a list of comma-separated values
545+ which themselves may be comma-separated list of values.
546+ """
547 if config_flags.find('==') >= 0:
548- log("config_flags is not in expected format (key=value)",
549- level=ERROR)
550+ log("config_flags is not in expected format (key=value)", level=ERROR)
551 raise OSContextError
552+
553 # strip the following from each value.
554 post_strippers = ' ,'
555 # we strip any leading/trailing '=' or ' ' from the string then
556@@ -96,7 +98,7 @@
557 split = config_flags.strip(' =').split('=')
558 limit = len(split)
559 flags = {}
560- for i in xrange(0, limit - 1):
561+ for i in range(0, limit - 1):
562 current = split[i]
563 next = split[i + 1]
564 vindex = next.rfind(',')
565@@ -111,17 +113,18 @@
566 # if this not the first entry, expect an embedded key.
567 index = current.rfind(',')
568 if index < 0:
569- log("invalid config value(s) at index %s" % (i),
570- level=ERROR)
571+ log("Invalid config value(s) at index %s" % (i), level=ERROR)
572 raise OSContextError
573 key = current[index + 1:]
574
575 # Add to collection.
576 flags[key.strip(post_strippers)] = value.rstrip(post_strippers)
577+
578 return flags
579
580
581 class OSContextGenerator(object):
582+ """Base class for all context generators."""
583 interfaces = []
584
585 def __call__(self):
586@@ -133,11 +136,11 @@
587
588 def __init__(self,
589 database=None, user=None, relation_prefix=None, ssl_dir=None):
590- '''
591- Allows inspecting relation for settings prefixed with relation_prefix.
592- This is useful for parsing access for multiple databases returned via
593- the shared-db interface (eg, nova_password, quantum_password)
594- '''
595+ """Allows inspecting relation for settings prefixed with
596+ relation_prefix. This is useful for parsing access for multiple
597+ databases returned via the shared-db interface (eg, nova_password,
598+ quantum_password)
599+ """
600 self.relation_prefix = relation_prefix
601 self.database = database
602 self.user = user
603@@ -147,9 +150,8 @@
604 self.database = self.database or config('database')
605 self.user = self.user or config('database-user')
606 if None in [self.database, self.user]:
607- log('Could not generate shared_db context. '
608- 'Missing required charm config options. '
609- '(database name and user)')
610+ log("Could not generate shared_db context. Missing required charm "
611+ "config options. (database name and user)", level=ERROR)
612 raise OSContextError
613
614 ctxt = {}
615@@ -202,23 +204,24 @@
616 def __call__(self):
617 self.database = self.database or config('database')
618 if self.database is None:
619- log('Could not generate postgresql_db context. '
620- 'Missing required charm config options. '
621- '(database name)')
622+ log('Could not generate postgresql_db context. Missing required '
623+ 'charm config options. (database name)', level=ERROR)
624 raise OSContextError
625+
626 ctxt = {}
627-
628 for rid in relation_ids(self.interfaces[0]):
629 for unit in related_units(rid):
630- ctxt = {
631- 'database_host': relation_get('host', rid=rid, unit=unit),
632- 'database': self.database,
633- 'database_user': relation_get('user', rid=rid, unit=unit),
634- 'database_password': relation_get('password', rid=rid, unit=unit),
635- 'database_type': 'postgresql',
636- }
637+ rel_host = relation_get('host', rid=rid, unit=unit)
638+ rel_user = relation_get('user', rid=rid, unit=unit)
639+ rel_passwd = relation_get('password', rid=rid, unit=unit)
640+ ctxt = {'database_host': rel_host,
641+ 'database': self.database,
642+ 'database_user': rel_user,
643+ 'database_password': rel_passwd,
644+ 'database_type': 'postgresql'}
645 if context_complete(ctxt):
646 return ctxt
647+
648 return {}
649
650
651@@ -227,23 +230,29 @@
652 ca_path = os.path.join(ssl_dir, 'db-client.ca')
653 with open(ca_path, 'w') as fh:
654 fh.write(b64decode(rdata['ssl_ca']))
655+
656 ctxt['database_ssl_ca'] = ca_path
657 elif 'ssl_ca' in rdata:
658- log("Charm not setup for ssl support but ssl ca found")
659+ log("Charm not setup for ssl support but ssl ca found", level=INFO)
660 return ctxt
661+
662 if 'ssl_cert' in rdata:
663 cert_path = os.path.join(
664 ssl_dir, 'db-client.cert')
665 if not os.path.exists(cert_path):
666- log("Waiting 1m for ssl client cert validity")
667+ log("Waiting 1m for ssl client cert validity", level=INFO)
668 time.sleep(60)
669+
670 with open(cert_path, 'w') as fh:
671 fh.write(b64decode(rdata['ssl_cert']))
672+
673 ctxt['database_ssl_cert'] = cert_path
674 key_path = os.path.join(ssl_dir, 'db-client.key')
675 with open(key_path, 'w') as fh:
676 fh.write(b64decode(rdata['ssl_key']))
677+
678 ctxt['database_ssl_key'] = key_path
679+
680 return ctxt
681
682
683@@ -251,9 +260,8 @@
684 interfaces = ['identity-service']
685
686 def __call__(self):
687- log('Generating template context for identity-service')
688+ log('Generating template context for identity-service', level=DEBUG)
689 ctxt = {}
690-
691 for rid in relation_ids('identity-service'):
692 for unit in related_units(rid):
693 rdata = relation_get(rid=rid, unit=unit)
694@@ -261,26 +269,24 @@
695 serv_host = format_ipv6_addr(serv_host) or serv_host
696 auth_host = rdata.get('auth_host')
697 auth_host = format_ipv6_addr(auth_host) or auth_host
698-
699- ctxt = {
700- 'service_port': rdata.get('service_port'),
701- 'service_host': serv_host,
702- 'auth_host': auth_host,
703- 'auth_port': rdata.get('auth_port'),
704- 'admin_tenant_name': rdata.get('service_tenant'),
705- 'admin_user': rdata.get('service_username'),
706- 'admin_password': rdata.get('service_password'),
707- 'service_protocol':
708- rdata.get('service_protocol') or 'http',
709- 'auth_protocol':
710- rdata.get('auth_protocol') or 'http',
711- }
712+ svc_protocol = rdata.get('service_protocol') or 'http'
713+ auth_protocol = rdata.get('auth_protocol') or 'http'
714+ ctxt = {'service_port': rdata.get('service_port'),
715+ 'service_host': serv_host,
716+ 'auth_host': auth_host,
717+ 'auth_port': rdata.get('auth_port'),
718+ 'admin_tenant_name': rdata.get('service_tenant'),
719+ 'admin_user': rdata.get('service_username'),
720+ 'admin_password': rdata.get('service_password'),
721+ 'service_protocol': svc_protocol,
722+ 'auth_protocol': auth_protocol}
723 if context_complete(ctxt):
724 # NOTE(jamespage) this is required for >= icehouse
725 # so a missing value just indicates keystone needs
726 # upgrading
727 ctxt['admin_tenant_id'] = rdata.get('service_tenant_id')
728 return ctxt
729+
730 return {}
731
732
733@@ -293,21 +299,23 @@
734 self.interfaces = [rel_name]
735
736 def __call__(self):
737- log('Generating template context for amqp')
738+ log('Generating template context for amqp', level=DEBUG)
739 conf = config()
740- user_setting = 'rabbit-user'
741- vhost_setting = 'rabbit-vhost'
742 if self.relation_prefix:
743- user_setting = self.relation_prefix + '-rabbit-user'
744- vhost_setting = self.relation_prefix + '-rabbit-vhost'
745+ user_setting = '%s-rabbit-user' % (self.relation_prefix)
746+ vhost_setting = '%s-rabbit-vhost' % (self.relation_prefix)
747+ else:
748+ user_setting = 'rabbit-user'
749+ vhost_setting = 'rabbit-vhost'
750
751 try:
752 username = conf[user_setting]
753 vhost = conf[vhost_setting]
754 except KeyError as e:
755- log('Could not generate shared_db context. '
756- 'Missing required charm config options: %s.' % e)
757+ log('Could not generate shared_db context. Missing required charm '
758+ 'config options: %s.' % e, level=ERROR)
759 raise OSContextError
760+
761 ctxt = {}
762 for rid in relation_ids(self.rel_name):
763 ha_vip_only = False
764@@ -321,6 +329,7 @@
765 host = relation_get('private-address', rid=rid, unit=unit)
766 host = format_ipv6_addr(host) or host
767 ctxt['rabbitmq_host'] = host
768+
769 ctxt.update({
770 'rabbitmq_user': username,
771 'rabbitmq_password': relation_get('password', rid=rid,
772@@ -331,6 +340,7 @@
773 ssl_port = relation_get('ssl_port', rid=rid, unit=unit)
774 if ssl_port:
775 ctxt['rabbit_ssl_port'] = ssl_port
776+
777 ssl_ca = relation_get('ssl_ca', rid=rid, unit=unit)
778 if ssl_ca:
779 ctxt['rabbit_ssl_ca'] = ssl_ca
780@@ -344,41 +354,45 @@
781 if context_complete(ctxt):
782 if 'rabbit_ssl_ca' in ctxt:
783 if not self.ssl_dir:
784- log(("Charm not setup for ssl support "
785- "but ssl ca found"))
786+ log("Charm not setup for ssl support but ssl ca "
787+ "found", level=INFO)
788 break
789+
790 ca_path = os.path.join(
791 self.ssl_dir, 'rabbit-client-ca.pem')
792 with open(ca_path, 'w') as fh:
793 fh.write(b64decode(ctxt['rabbit_ssl_ca']))
794 ctxt['rabbit_ssl_ca'] = ca_path
795+
796 # Sufficient information found = break out!
797 break
798+
799 # Used for active/active rabbitmq >= grizzly
800- if ('clustered' not in ctxt or ha_vip_only) \
801- and len(related_units(rid)) > 1:
802+ if (('clustered' not in ctxt or ha_vip_only) and
803+ len(related_units(rid)) > 1):
804 rabbitmq_hosts = []
805 for unit in related_units(rid):
806 host = relation_get('private-address', rid=rid, unit=unit)
807 host = format_ipv6_addr(host) or host
808 rabbitmq_hosts.append(host)
809- ctxt['rabbitmq_hosts'] = ','.join(rabbitmq_hosts)
810+
811+ ctxt['rabbitmq_hosts'] = ','.join(sorted(rabbitmq_hosts))
812+
813 if not context_complete(ctxt):
814 return {}
815- else:
816- return ctxt
817+
818+ return ctxt
819
820
821 class CephContext(OSContextGenerator):
822+ """Generates context for /etc/ceph/ceph.conf templates."""
823 interfaces = ['ceph']
824
825 def __call__(self):
826- '''This generates context for /etc/ceph/ceph.conf templates'''
827 if not relation_ids('ceph'):
828 return {}
829
830- log('Generating template context for ceph')
831-
832+ log('Generating template context for ceph', level=DEBUG)
833 mon_hosts = []
834 auth = None
835 key = None
836@@ -387,18 +401,18 @@
837 for unit in related_units(rid):
838 auth = relation_get('auth', rid=rid, unit=unit)
839 key = relation_get('key', rid=rid, unit=unit)
840- ceph_addr = \
841- relation_get('ceph-public-address', rid=rid, unit=unit) or \
842- relation_get('private-address', rid=rid, unit=unit)
843+ ceph_pub_addr = relation_get('ceph-public-address', rid=rid,
844+ unit=unit)
845+ unit_priv_addr = relation_get('private-address', rid=rid,
846+ unit=unit)
847+ ceph_addr = ceph_pub_addr or unit_priv_addr
848 ceph_addr = format_ipv6_addr(ceph_addr) or ceph_addr
849 mon_hosts.append(ceph_addr)
850
851- ctxt = {
852- 'mon_hosts': ' '.join(mon_hosts),
853- 'auth': auth,
854- 'key': key,
855- 'use_syslog': use_syslog
856- }
857+ ctxt = {'mon_hosts': ' '.join(sorted(mon_hosts)),
858+ 'auth': auth,
859+ 'key': key,
860+ 'use_syslog': use_syslog}
861
862 if not os.path.isdir('/etc/ceph'):
863 os.mkdir('/etc/ceph')
864@@ -407,79 +421,68 @@
865 return {}
866
867 ensure_packages(['ceph-common'])
868-
869 return ctxt
870
871
872-ADDRESS_TYPES = ['admin', 'internal', 'public']
873-
874-
875 class HAProxyContext(OSContextGenerator):
876+ """Provides half a context for the haproxy template, which describes
877+ all peers to be included in the cluster. Each charm needs to include
878+ its own context generator that describes the port mapping.
879+ """
880 interfaces = ['cluster']
881
882+ def __init__(self, singlenode_mode=False):
883+ self.singlenode_mode = singlenode_mode
884+
885 def __call__(self):
886- '''
887- Builds half a context for the haproxy template, which describes
888- all peers to be included in the cluster. Each charm needs to include
889- its own context generator that describes the port mapping.
890- '''
891- if not relation_ids('cluster'):
892+ if not relation_ids('cluster') and not self.singlenode_mode:
893 return {}
894
895- l_unit = local_unit().replace('/', '-')
896-
897 if config('prefer-ipv6'):
898 addr = get_ipv6_addr(exc_list=[config('vip')])[0]
899 else:
900 addr = get_host_ip(unit_get('private-address'))
901
902+ l_unit = local_unit().replace('/', '-')
903 cluster_hosts = {}
904
905 # NOTE(jamespage): build out map of configured network endpoints
906 # and associated backends
907 for addr_type in ADDRESS_TYPES:
908- laddr = get_address_in_network(
909- config('os-{}-network'.format(addr_type)))
910+ cfg_opt = 'os-{}-network'.format(addr_type)
911+ laddr = get_address_in_network(config(cfg_opt))
912 if laddr:
913- cluster_hosts[laddr] = {}
914- cluster_hosts[laddr]['network'] = "{}/{}".format(
915- laddr,
916- get_netmask_for_address(laddr)
917- )
918- cluster_hosts[laddr]['backends'] = {}
919- cluster_hosts[laddr]['backends'][l_unit] = laddr
920+ netmask = get_netmask_for_address(laddr)
921+ cluster_hosts[laddr] = {'network': "{}/{}".format(laddr,
922+ netmask),
923+ 'backends': {l_unit: laddr}}
924 for rid in relation_ids('cluster'):
925 for unit in related_units(rid):
926- _unit = unit.replace('/', '-')
927 _laddr = relation_get('{}-address'.format(addr_type),
928 rid=rid, unit=unit)
929 if _laddr:
930+ _unit = unit.replace('/', '-')
931 cluster_hosts[laddr]['backends'][_unit] = _laddr
932
933 # NOTE(jamespage) no split configurations found, just use
934 # private addresses
935 if not cluster_hosts:
936- cluster_hosts[addr] = {}
937- cluster_hosts[addr]['network'] = "{}/{}".format(
938- addr,
939- get_netmask_for_address(addr)
940- )
941- cluster_hosts[addr]['backends'] = {}
942- cluster_hosts[addr]['backends'][l_unit] = addr
943+ netmask = get_netmask_for_address(addr)
944+ cluster_hosts[addr] = {'network': "{}/{}".format(addr, netmask),
945+ 'backends': {l_unit: addr}}
946 for rid in relation_ids('cluster'):
947 for unit in related_units(rid):
948- _unit = unit.replace('/', '-')
949 _laddr = relation_get('private-address',
950 rid=rid, unit=unit)
951 if _laddr:
952+ _unit = unit.replace('/', '-')
953 cluster_hosts[addr]['backends'][_unit] = _laddr
954
955- ctxt = {
956- 'frontends': cluster_hosts,
957- }
958+ ctxt = {'frontends': cluster_hosts}
959
960 if config('haproxy-server-timeout'):
961 ctxt['haproxy_server_timeout'] = config('haproxy-server-timeout')
962+
963 if config('haproxy-client-timeout'):
964 ctxt['haproxy_client_timeout'] = config('haproxy-client-timeout')
965
966@@ -493,13 +496,18 @@
967 ctxt['stat_port'] = ':8888'
968
969 for frontend in cluster_hosts:
970- if len(cluster_hosts[frontend]['backends']) > 1:
971+ if (len(cluster_hosts[frontend]['backends']) > 1 or
972+ self.singlenode_mode):
973 # Enable haproxy when we have enough peers.
974- log('Ensuring haproxy enabled in /etc/default/haproxy.')
975+ log('Ensuring haproxy enabled in /etc/default/haproxy.',
976+ level=DEBUG)
977 with open('/etc/default/haproxy', 'w') as out:
978 out.write('ENABLED=1\n')
979+
980 return ctxt
981- log('HAProxy context is incomplete, this unit has no peers.')
982+
983+ log('HAProxy context is incomplete, this unit has no peers.',
984+ level=INFO)
985 return {}
986
987
988@@ -507,29 +515,28 @@
989 interfaces = ['image-service']
990
991 def __call__(self):
992- '''
993- Obtains the glance API server from the image-service relation. Useful
994- in nova and cinder (currently).
995- '''
996- log('Generating template context for image-service.')
997+ """Obtains the glance API server from the image-service relation.
998+ Useful in nova and cinder (currently).
999+ """
1000+ log('Generating template context for image-service.', level=DEBUG)
1001 rids = relation_ids('image-service')
1002 if not rids:
1003 return {}
1004+
1005 for rid in rids:
1006 for unit in related_units(rid):
1007 api_server = relation_get('glance-api-server',
1008 rid=rid, unit=unit)
1009 if api_server:
1010 return {'glance_api_servers': api_server}
1011- log('ImageService context is incomplete. '
1012- 'Missing required relation data.')
1013+
1014+ log("ImageService context is incomplete. Missing required relation "
1015+ "data.", level=INFO)
1016 return {}
1017
1018
1019 class ApacheSSLContext(OSContextGenerator):
1020-
1021- """
1022- Generates a context for an apache vhost configuration that configures
1023+ """Generates a context for an apache vhost configuration that configures
1024 HTTPS reverse proxying for one or many endpoints. Generated context
1025 looks something like::
1026
1027@@ -563,6 +570,7 @@
1028 else:
1029 cert_filename = 'cert'
1030 key_filename = 'key'
1031+
1032 write_file(path=os.path.join(ssl_dir, cert_filename),
1033 content=b64decode(cert))
1034 write_file(path=os.path.join(ssl_dir, key_filename),
1035@@ -574,7 +582,8 @@
1036 install_ca_cert(b64decode(ca_cert))
1037
1038 def canonical_names(self):
1039- '''Figure out which canonical names clients will access this service'''
1040+ """Figure out which canonical names clients will access this service.
1041+ """
1042 cns = []
1043 for r_id in relation_ids('identity-service'):
1044 for unit in related_units(r_id):
1045@@ -582,55 +591,80 @@
1046 for k in rdata:
1047 if k.startswith('ssl_key_'):
1048 cns.append(k.lstrip('ssl_key_'))
1049- return list(set(cns))
1050+
1051+ return sorted(list(set(cns)))
1052+
1053+ def get_network_addresses(self):
1054+ """For each network configured, return corresponding address and vip
1055+ (if available).
1056+
1057+ Returns a list of tuples of the form:
1058+
1059+ [(address_in_net_a, vip_in_net_a),
1060+ (address_in_net_b, vip_in_net_b),
1061+ ...]
1062+
1063+ or, if no vip(s) available:
1064+
1065+ [(address_in_net_a, address_in_net_a),
1066+ (address_in_net_b, address_in_net_b),
1067+ ...]
1068+ """
1069+ addresses = []
1070+ if config('vip'):
1071+ vips = config('vip').split()
1072+ else:
1073+ vips = []
1074+
1075+ for net_type in ['os-internal-network', 'os-admin-network',
1076+ 'os-public-network']:
1077+ addr = get_address_in_network(config(net_type),
1078+ unit_get('private-address'))
1079+ if len(vips) > 1 and is_clustered():
1080+ if not config(net_type):
1081+ log("Multiple networks configured but net_type "
1082+ "is None (%s)." % net_type, level=WARNING)
1083+ continue
1084+
1085+ for vip in vips:
1086+ if is_address_in_network(config(net_type), vip):
1087+ addresses.append((addr, vip))
1088+ break
1089+
1090+ elif is_clustered() and config('vip'):
1091+ addresses.append((addr, config('vip')))
1092+ else:
1093+ addresses.append((addr, addr))
1094+
1095+ return sorted(addresses)
1096
1097 def __call__(self):
1098- if isinstance(self.external_ports, basestring):
1099+ if isinstance(self.external_ports, six.string_types):
1100 self.external_ports = [self.external_ports]
1101- if (not self.external_ports or not https()):
1102+
1103+ if not self.external_ports or not https():
1104 return {}
1105
1106 self.configure_ca()
1107 self.enable_modules()
1108
1109- ctxt = {
1110- 'namespace': self.service_namespace,
1111- 'endpoints': [],
1112- 'ext_ports': []
1113- }
1114+ ctxt = {'namespace': self.service_namespace,
1115+ 'endpoints': [],
1116+ 'ext_ports': []}
1117
1118 for cn in self.canonical_names():
1119 self.configure_cert(cn)
1120
1121- addresses = []
1122- vips = []
1123- if config('vip'):
1124- vips = config('vip').split()
1125-
1126- for network_type in ['os-internal-network',
1127- 'os-admin-network',
1128- 'os-public-network']:
1129- address = get_address_in_network(config(network_type),
1130- unit_get('private-address'))
1131- if len(vips) > 0 and is_clustered():
1132- for vip in vips:
1133- if is_address_in_network(config(network_type),
1134- vip):
1135- addresses.append((address, vip))
1136- break
1137- elif is_clustered():
1138- addresses.append((address, config('vip')))
1139- else:
1140- addresses.append((address, address))
1141-
1142- for address, endpoint in set(addresses):
1143+ addresses = self.get_network_addresses()
1144+ for address, endpoint in sorted(set(addresses)):
1145 for api_port in self.external_ports:
1146 ext_port = determine_apache_port(api_port)
1147 int_port = determine_api_port(api_port)
1148 portmap = (address, endpoint, int(ext_port), int(int_port))
1149 ctxt['endpoints'].append(portmap)
1150 ctxt['ext_ports'].append(int(ext_port))
1151- ctxt['ext_ports'] = list(set(ctxt['ext_ports']))
1152+
1153+ ctxt['ext_ports'] = sorted(list(set(ctxt['ext_ports'])))
1154 return ctxt
1155
1156
1157@@ -647,21 +681,23 @@
1158
1159 @property
1160 def packages(self):
1161- return neutron_plugin_attribute(
1162- self.plugin, 'packages', self.network_manager)
1163+ return neutron_plugin_attribute(self.plugin, 'packages',
1164+ self.network_manager)
1165
1166 @property
1167 def neutron_security_groups(self):
1168 return None
1169
1170 def _ensure_packages(self):
1171- [ensure_packages(pkgs) for pkgs in self.packages]
1172+ for pkgs in self.packages:
1173+ ensure_packages(pkgs)
1174
1175 def _save_flag_file(self):
1176 if self.network_manager == 'quantum':
1177 _file = '/etc/nova/quantum_plugin.conf'
1178 else:
1179 _file = '/etc/nova/neutron_plugin.conf'
1180+
1181 with open(_file, 'wb') as out:
1182 out.write(self.plugin + '\n')
1183
1184@@ -670,13 +706,11 @@
1185 self.network_manager)
1186 config = neutron_plugin_attribute(self.plugin, 'config',
1187 self.network_manager)
1188- ovs_ctxt = {
1189- 'core_plugin': driver,
1190- 'neutron_plugin': 'ovs',
1191- 'neutron_security_groups': self.neutron_security_groups,
1192- 'local_ip': unit_private_ip(),
1193- 'config': config
1194- }
1195+ ovs_ctxt = {'core_plugin': driver,
1196+ 'neutron_plugin': 'ovs',
1197+ 'neutron_security_groups': self.neutron_security_groups,
1198+ 'local_ip': unit_private_ip(),
1199+ 'config': config}
1200
1201 return ovs_ctxt
1202
1203@@ -685,13 +719,11 @@
1204 self.network_manager)
1205 config = neutron_plugin_attribute(self.plugin, 'config',
1206 self.network_manager)
1207- nvp_ctxt = {
1208- 'core_plugin': driver,
1209- 'neutron_plugin': 'nvp',
1210- 'neutron_security_groups': self.neutron_security_groups,
1211- 'local_ip': unit_private_ip(),
1212- 'config': config
1213- }
1214+ nvp_ctxt = {'core_plugin': driver,
1215+ 'neutron_plugin': 'nvp',
1216+ 'neutron_security_groups': self.neutron_security_groups,
1217+ 'local_ip': unit_private_ip(),
1218+ 'config': config}
1219
1220 return nvp_ctxt
1221
1222@@ -700,35 +732,50 @@
1223 self.network_manager)
1224 n1kv_config = neutron_plugin_attribute(self.plugin, 'config',
1225 self.network_manager)
1226- n1kv_ctxt = {
1227- 'core_plugin': driver,
1228- 'neutron_plugin': 'n1kv',
1229- 'neutron_security_groups': self.neutron_security_groups,
1230- 'local_ip': unit_private_ip(),
1231- 'config': n1kv_config,
1232- 'vsm_ip': config('n1kv-vsm-ip'),
1233- 'vsm_username': config('n1kv-vsm-username'),
1234- 'vsm_password': config('n1kv-vsm-password'),
1235- 'restrict_policy_profiles': config(
1236- 'n1kv_restrict_policy_profiles'),
1237- }
1238+ n1kv_user_config_flags = config('n1kv-config-flags')
1239+ restrict_policy_profiles = config('n1kv-restrict-policy-profiles')
1240+ n1kv_ctxt = {'core_plugin': driver,
1241+ 'neutron_plugin': 'n1kv',
1242+ 'neutron_security_groups': self.neutron_security_groups,
1243+ 'local_ip': unit_private_ip(),
1244+ 'config': n1kv_config,
1245+ 'vsm_ip': config('n1kv-vsm-ip'),
1246+ 'vsm_username': config('n1kv-vsm-username'),
1247+ 'vsm_password': config('n1kv-vsm-password'),
1248+ 'restrict_policy_profiles': restrict_policy_profiles}
1249+
1250+ if n1kv_user_config_flags:
1251+ flags = config_flags_parser(n1kv_user_config_flags)
1252+ n1kv_ctxt['user_config_flags'] = flags
1253
1254 return n1kv_ctxt
1255
1256+ def calico_ctxt(self):
1257+ driver = neutron_plugin_attribute(self.plugin, 'driver',
1258+ self.network_manager)
1259+ config = neutron_plugin_attribute(self.plugin, 'config',
1260+ self.network_manager)
1261+ calico_ctxt = {'core_plugin': driver,
1262+ 'neutron_plugin': 'Calico',
1263+ 'neutron_security_groups': self.neutron_security_groups,
1264+ 'local_ip': unit_private_ip(),
1265+ 'config': config}
1266+
1267+ return calico_ctxt
1268+
1269 def neutron_ctxt(self):
1270 if https():
1271 proto = 'https'
1272 else:
1273 proto = 'http'
1274+
1275 if is_clustered():
1276 host = config('vip')
1277 else:
1278 host = unit_get('private-address')
1279- url = '%s://%s:%s' % (proto, host, '9696')
1280- ctxt = {
1281- 'network_manager': self.network_manager,
1282- 'neutron_url': url,
1283- }
1284+
1285+ ctxt = {'network_manager': self.network_manager,
1286+ 'neutron_url': '%s://%s:%s' % (proto, host, '9696')}
1287 return ctxt
1288
1289 def __call__(self):
1290@@ -748,6 +795,8 @@
1291 ctxt.update(self.nvp_ctxt())
1292 elif self.plugin == 'n1kv':
1293 ctxt.update(self.n1kv_ctxt())
1294+ elif self.plugin == 'Calico':
1295+ ctxt.update(self.calico_ctxt())
1296
1297 alchemy_flags = config('neutron-alchemy-flags')
1298 if alchemy_flags:
1299@@ -759,23 +808,40 @@
1300
1301
1302 class OSConfigFlagContext(OSContextGenerator):
1303-
1304- """
1305- Responsible for adding user-defined config-flags in charm config to a
1306- template context.
1307+ """Provides support for user-defined config flags.
1308+
1309+ Users can define a comma-seperated list of key=value pairs
1310+ in the charm configuration and apply them at any point in
1311+ any file by using a template flag.
1312+
1313+ Sometimes users might want config flags inserted within a
1314+ specific section so this class allows users to specify the
1315+ template flag name, allowing for multiple template flags
1316+ (sections) within the same context.
1317
1318 NOTE: the value of config-flags may be a comma-separated list of
1319 key=value pairs and some Openstack config files support
1320 comma-separated lists as values.
1321 """
1322
1323+ def __init__(self, charm_flag='config-flags',
1324+ template_flag='user_config_flags'):
1325+ """
1326+ :param charm_flag: config flags in charm configuration.
1327+ :param template_flag: insert point for user-defined flags in template
1328+ file.
1329+ """
1330+ super(OSConfigFlagContext, self).__init__()
1331+ self._charm_flag = charm_flag
1332+ self._template_flag = template_flag
1333+
1334 def __call__(self):
1335- config_flags = config('config-flags')
1336+ config_flags = config(self._charm_flag)
1337 if not config_flags:
1338 return {}
1339
1340- flags = config_flags_parser(config_flags)
1341- return {'user_config_flags': flags}
1342+ return {self._template_flag:
1343+ config_flags_parser(config_flags)}
1344
1345
1346 class SubordinateConfigContext(OSContextGenerator):
1347@@ -819,7 +885,6 @@
1348 },
1349 }
1350 }
1351-
1352 """
1353
1354 def __init__(self, service, config_file, interface):
1355@@ -849,26 +914,28 @@
1356
1357 if self.service not in sub_config:
1358 log('Found subordinate_config on %s but it contained'
1359- 'nothing for %s service' % (rid, self.service))
1360+ 'nothing for %s service' % (rid, self.service),
1361+ level=INFO)
1362 continue
1363
1364 sub_config = sub_config[self.service]
1365 if self.config_file not in sub_config:
1366 log('Found subordinate_config on %s but it contained'
1367- 'nothing for %s' % (rid, self.config_file))
1368+ 'nothing for %s' % (rid, self.config_file),
1369+ level=INFO)
1370 continue
1371
1372 sub_config = sub_config[self.config_file]
1373- for k, v in sub_config.iteritems():
1374+ for k, v in six.iteritems(sub_config):
1375 if k == 'sections':
1376- for section, config_dict in v.iteritems():
1377- log("adding section '%s'" % (section))
1378+ for section, config_dict in six.iteritems(v):
1379+ log("adding section '%s'" % (section),
1380+ level=DEBUG)
1381 ctxt[k][section] = config_dict
1382 else:
1383 ctxt[k] = v
1384
1385- log("%d section(s) found" % (len(ctxt['sections'])), level=INFO)
1386-
1387+ log("%d section(s) found" % (len(ctxt['sections'])), level=DEBUG)
1388 return ctxt
1389
1390
1391@@ -880,15 +947,14 @@
1392 False if config('debug') is None else config('debug')
1393 ctxt['verbose'] = \
1394 False if config('verbose') is None else config('verbose')
1395+
1396 return ctxt
1397
1398
1399 class SyslogContext(OSContextGenerator):
1400
1401 def __call__(self):
1402- ctxt = {
1403- 'use_syslog': config('use-syslog')
1404- }
1405+ ctxt = {'use_syslog': config('use-syslog')}
1406 return ctxt
1407
1408
1409@@ -896,13 +962,9 @@
1410
1411 def __call__(self):
1412 if config('prefer-ipv6'):
1413- return {
1414- 'bind_host': '::'
1415- }
1416+ return {'bind_host': '::'}
1417 else:
1418- return {
1419- 'bind_host': '0.0.0.0'
1420- }
1421+ return {'bind_host': '0.0.0.0'}
1422
1423
1424 class WorkerConfigContext(OSContextGenerator):
1425@@ -914,11 +976,42 @@
1426 except ImportError:
1427 apt_install('python-psutil', fatal=True)
1428 from psutil import NUM_CPUS
1429+
1430 return NUM_CPUS
1431
1432 def __call__(self):
1433- multiplier = config('worker-multiplier') or 1
1434- ctxt = {
1435- "workers": self.num_cpus * multiplier
1436- }
1437+ multiplier = config('worker-multiplier') or 0
1438+ ctxt = {"workers": self.num_cpus * multiplier}
1439+ return ctxt
1440+
1441+
1442+class ZeroMQContext(OSContextGenerator):
1443+ interfaces = ['zeromq-configuration']
1444+
1445+ def __call__(self):
1446+ ctxt = {}
1447+ if is_relation_made('zeromq-configuration', 'host'):
1448+ for rid in relation_ids('zeromq-configuration'):
1449+ for unit in related_units(rid):
1450+ ctxt['zmq_nonce'] = relation_get('nonce', unit, rid)
1451+ ctxt['zmq_host'] = relation_get('host', unit, rid)
1452+
1453+ return ctxt
1454+
1455+
1456+class NotificationDriverContext(OSContextGenerator):
1457+
1458+ def __init__(self, zmq_relation='zeromq-configuration',
1459+ amqp_relation='amqp'):
1460+ """
1461+ :param zmq_relation: Name of Zeromq relation to check
1462+ """
1463+ self.zmq_relation = zmq_relation
1464+ self.amqp_relation = amqp_relation
1465+
1466+ def __call__(self):
1467+ ctxt = {'notifications': 'False'}
1468+ if is_relation_made(self.amqp_relation):
1469+ ctxt['notifications'] = "True"
1470+
1471 return ctxt
1472
1473=== modified file 'hooks/charmhelpers/contrib/openstack/ip.py'
1474--- hooks/charmhelpers/contrib/openstack/ip.py 2014-09-22 20:21:48 +0000
1475+++ hooks/charmhelpers/contrib/openstack/ip.py 2014-12-11 17:56:50 +0000
1476@@ -2,21 +2,19 @@
1477 config,
1478 unit_get,
1479 )
1480-
1481 from charmhelpers.contrib.network.ip import (
1482 get_address_in_network,
1483 is_address_in_network,
1484 is_ipv6,
1485 get_ipv6_addr,
1486 )
1487-
1488 from charmhelpers.contrib.hahelpers.cluster import is_clustered
1489
1490 PUBLIC = 'public'
1491 INTERNAL = 'int'
1492 ADMIN = 'admin'
1493
1494-_address_map = {
1495+ADDRESS_MAP = {
1496 PUBLIC: {
1497 'config': 'os-public-network',
1498 'fallback': 'public-address'
1499@@ -33,16 +31,14 @@
1500
1501
1502 def canonical_url(configs, endpoint_type=PUBLIC):
1503- '''
1504- Returns the correct HTTP URL to this host given the state of HTTPS
1505+ """Returns the correct HTTP URL to this host given the state of HTTPS
1506 configuration, hacluster and charm configuration.
1507
1508- :configs OSTemplateRenderer: A config tempating object to inspect for
1509- a complete https context.
1510- :endpoint_type str: The endpoint type to resolve.
1511-
1512- :returns str: Base URL for services on the current service unit.
1513- '''
1514+ :param configs: OSTemplateRenderer config templating object to inspect
1515+ for a complete https context.
1516+ :param endpoint_type: str endpoint type to resolve.
1517+ :param returns: str base URL for services on the current service unit.
1518+ """
1519 scheme = 'http'
1520 if 'https' in configs.complete_contexts():
1521 scheme = 'https'
1522@@ -53,27 +49,45 @@
1523
1524
1525 def resolve_address(endpoint_type=PUBLIC):
1526+ """Return unit address depending on net config.
1527+
1528+ If unit is clustered with vip(s) and has net splits defined, return vip on
1529+ correct network. If clustered with no nets defined, return primary vip.
1530+
1531+ If not clustered, return unit address ensuring address is on configured net
1532+ split if one is configured.
1533+
1534+ :param endpoint_type: Network endpoing type
1535+ """
1536 resolved_address = None
1537- if is_clustered():
1538- if config(_address_map[endpoint_type]['config']) is None:
1539- # Assume vip is simple and pass back directly
1540- resolved_address = config('vip')
1541+ vips = config('vip')
1542+ if vips:
1543+ vips = vips.split()
1544+
1545+ net_type = ADDRESS_MAP[endpoint_type]['config']
1546+ net_addr = config(net_type)
1547+ net_fallback = ADDRESS_MAP[endpoint_type]['fallback']
1548+ clustered = is_clustered()
1549+ if clustered:
1550+ if not net_addr:
1551+ # If no net-splits defined, we expect a single vip
1552+ resolved_address = vips[0]
1553 else:
1554- for vip in config('vip').split():
1555- if is_address_in_network(
1556- config(_address_map[endpoint_type]['config']),
1557- vip):
1558+ for vip in vips:
1559+ if is_address_in_network(net_addr, vip):
1560 resolved_address = vip
1561+ break
1562 else:
1563 if config('prefer-ipv6'):
1564- fallback_addr = get_ipv6_addr(exc_list=[config('vip')])[0]
1565+ fallback_addr = get_ipv6_addr(exc_list=vips)[0]
1566 else:
1567- fallback_addr = unit_get(_address_map[endpoint_type]['fallback'])
1568- resolved_address = get_address_in_network(
1569- config(_address_map[endpoint_type]['config']), fallback_addr)
1570+ fallback_addr = unit_get(net_fallback)
1571+
1572+ resolved_address = get_address_in_network(net_addr, fallback_addr)
1573
1574 if resolved_address is None:
1575- raise ValueError('Unable to resolve a suitable IP address'
1576- ' based on charm state and configuration')
1577- else:
1578- return resolved_address
1579+ raise ValueError("Unable to resolve a suitable IP address based on "
1580+ "charm state and configuration. (net_type=%s, "
1581+ "clustered=%s)" % (net_type, clustered))
1582+
1583+ return resolved_address
1584
1585=== modified file 'hooks/charmhelpers/contrib/openstack/neutron.py'
1586--- hooks/charmhelpers/contrib/openstack/neutron.py 2014-07-28 14:41:41 +0000
1587+++ hooks/charmhelpers/contrib/openstack/neutron.py 2014-12-11 17:56:50 +0000
1588@@ -14,7 +14,7 @@
1589 def headers_package():
1590 """Ensures correct linux-headers for running kernel are installed,
1591 for building DKMS package"""
1592- kver = check_output(['uname', '-r']).strip()
1593+ kver = check_output(['uname', '-r']).decode('UTF-8').strip()
1594 return 'linux-headers-%s' % kver
1595
1596 QUANTUM_CONF_DIR = '/etc/quantum'
1597@@ -22,7 +22,7 @@
1598
1599 def kernel_version():
1600 """ Retrieve the current major kernel version as a tuple e.g. (3, 13) """
1601- kver = check_output(['uname', '-r']).strip()
1602+ kver = check_output(['uname', '-r']).decode('UTF-8').strip()
1603 kver = kver.split('.')
1604 return (int(kver[0]), int(kver[1]))
1605
1606@@ -138,10 +138,25 @@
1607 relation_prefix='neutron',
1608 ssl_dir=NEUTRON_CONF_DIR)],
1609 'services': [],
1610- 'packages': [['neutron-plugin-cisco']],
1611+ 'packages': [[headers_package()] + determine_dkms_package(),
1612+ ['neutron-plugin-cisco']],
1613 'server_packages': ['neutron-server',
1614 'neutron-plugin-cisco'],
1615 'server_services': ['neutron-server']
1616+ },
1617+ 'Calico': {
1618+ 'config': '/etc/neutron/plugins/ml2/ml2_conf.ini',
1619+ 'driver': 'neutron.plugins.ml2.plugin.Ml2Plugin',
1620+ 'contexts': [
1621+ context.SharedDBContext(user=config('neutron-database-user'),
1622+ database=config('neutron-database'),
1623+ relation_prefix='neutron',
1624+ ssl_dir=NEUTRON_CONF_DIR)],
1625+ 'services': ['calico-compute', 'bird', 'neutron-dhcp-agent'],
1626+ 'packages': [[headers_package()] + determine_dkms_package(),
1627+ ['calico-compute', 'bird', 'neutron-dhcp-agent']],
1628+ 'server_packages': ['neutron-server', 'calico-control'],
1629+ 'server_services': ['neutron-server']
1630 }
1631 }
1632 if release >= 'icehouse':
1633@@ -162,7 +177,8 @@
1634 elif manager == 'neutron':
1635 plugins = neutron_plugins()
1636 else:
1637- log('Error: Network manager does not support plugins.')
1638+ log("Network manager '%s' does not support plugins." % (manager),
1639+ level=ERROR)
1640 raise Exception
1641
1642 try:
1643
1644=== modified file 'hooks/charmhelpers/contrib/openstack/templates/haproxy.cfg'
1645--- hooks/charmhelpers/contrib/openstack/templates/haproxy.cfg 2014-10-06 21:03:50 +0000
1646+++ hooks/charmhelpers/contrib/openstack/templates/haproxy.cfg 2014-12-11 17:56:50 +0000
1647@@ -35,7 +35,7 @@
1648 stats auth admin:password
1649
1650 {% if frontends -%}
1651-{% for service, ports in service_ports.iteritems() -%}
1652+{% for service, ports in service_ports.items() -%}
1653 frontend tcp-in_{{ service }}
1654 bind *:{{ ports[0] }}
1655 bind :::{{ ports[0] }}
1656@@ -46,7 +46,7 @@
1657 {% for frontend in frontends -%}
1658 backend {{ service }}_{{ frontend }}
1659 balance leastconn
1660- {% for unit, address in frontends[frontend]['backends'].iteritems() -%}
1661+ {% for unit, address in frontends[frontend]['backends'].items() -%}
1662 server {{ unit }} {{ address }}:{{ ports[1] }} check
1663 {% endfor %}
1664 {% endfor -%}
1665
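The template switches from iteritems() to items() because Jinja2 renders these loops by calling the method on the dict it is handed, and Python 3 dicts only provide items(). A quick check of the pattern with made-up service data (requires the jinja2 package):

    from jinja2 import Template

    tmpl = Template(
        "{% for service, ports in service_ports.items() -%}\n"
        "frontend tcp-in_{{ service }} binds {{ ports[0] }}\n"
        "{% endfor %}")
    # items() exists on dicts in both Python 2 and 3; iteritems() is gone in 3.
    print(tmpl.render(service_ports={'glance-api': [9292, 9282]}))
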
1666=== modified file 'hooks/charmhelpers/contrib/openstack/templating.py'
1667--- hooks/charmhelpers/contrib/openstack/templating.py 2014-07-28 14:41:41 +0000
1668+++ hooks/charmhelpers/contrib/openstack/templating.py 2014-12-11 17:56:50 +0000
1669@@ -1,13 +1,13 @@
1670 import os
1671
1672+import six
1673+
1674 from charmhelpers.fetch import apt_install
1675-
1676 from charmhelpers.core.hookenv import (
1677 log,
1678 ERROR,
1679 INFO
1680 )
1681-
1682 from charmhelpers.contrib.openstack.utils import OPENSTACK_CODENAMES
1683
1684 try:
1685@@ -43,7 +43,7 @@
1686 order by OpenStack release.
1687 """
1688 tmpl_dirs = [(rel, os.path.join(templates_dir, rel))
1689- for rel in OPENSTACK_CODENAMES.itervalues()]
1690+ for rel in six.itervalues(OPENSTACK_CODENAMES)]
1691
1692 if not os.path.isdir(templates_dir):
1693 log('Templates directory not found @ %s.' % templates_dir,
1694@@ -258,7 +258,7 @@
1695 """
1696 Write out all registered config files.
1697 """
1698- [self.write(k) for k in self.templates.iterkeys()]
1699+ [self.write(k) for k in six.iterkeys(self.templates)]
1700
1701 def set_release(self, openstack_release):
1702 """
1703@@ -275,5 +275,5 @@
1704 '''
1705 interfaces = []
1706 [interfaces.extend(i.complete_contexts())
1707- for i in self.templates.itervalues()]
1708+ for i in six.itervalues(self.templates)]
1709 return interfaces
1710
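Most of the Python 3 work in this branch follows the same pattern as the templating changes above: dict iteration goes through six so that one spelling works on both interpreters. A small illustration (the codename mapping here is truncated for brevity):

    import six

    OPENSTACK_CODENAMES = {'2014.1': 'icehouse', '2014.2': 'juno'}

    # six.iteritems()/itervalues() call iteritems()/itervalues() on Python 2 and
    # items()/values() on Python 3.
    for version, name in six.iteritems(OPENSTACK_CODENAMES):
        print(version, name)
    print(sorted(six.itervalues(OPENSTACK_CODENAMES)))
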
1711=== modified file 'hooks/charmhelpers/contrib/openstack/utils.py'
1712--- hooks/charmhelpers/contrib/openstack/utils.py 2014-10-06 21:03:50 +0000
1713+++ hooks/charmhelpers/contrib/openstack/utils.py 2014-12-11 17:56:50 +0000
1714@@ -2,6 +2,7 @@
1715
1716 # Common python helper functions used for OpenStack charms.
1717 from collections import OrderedDict
1718+from functools import wraps
1719
1720 import subprocess
1721 import json
1722@@ -9,11 +10,13 @@
1723 import socket
1724 import sys
1725
1726+import six
1727+import yaml
1728+
1729 from charmhelpers.core.hookenv import (
1730 config,
1731 log as juju_log,
1732 charm_dir,
1733- ERROR,
1734 INFO,
1735 relation_ids,
1736 relation_set
1737@@ -30,7 +33,8 @@
1738 )
1739
1740 from charmhelpers.core.host import lsb_release, mounts, umount
1741-from charmhelpers.fetch import apt_install, apt_cache
1742+from charmhelpers.fetch import apt_install, apt_cache, install_remote
1743+from charmhelpers.contrib.python.packages import pip_install
1744 from charmhelpers.contrib.storage.linux.utils import is_block_device, zap_disk
1745 from charmhelpers.contrib.storage.linux.loopback import ensure_loopback_device
1746
1747@@ -112,7 +116,7 @@
1748
1749 # Best guess match based on deb string provided
1750 if src.startswith('deb') or src.startswith('ppa'):
1751- for k, v in OPENSTACK_CODENAMES.iteritems():
1752+ for k, v in six.iteritems(OPENSTACK_CODENAMES):
1753 if v in src:
1754 return v
1755
1756@@ -133,7 +137,7 @@
1757
1758 def get_os_version_codename(codename):
1759 '''Determine OpenStack version number from codename.'''
1760- for k, v in OPENSTACK_CODENAMES.iteritems():
1761+ for k, v in six.iteritems(OPENSTACK_CODENAMES):
1762 if v == codename:
1763 return k
1764 e = 'Could not derive OpenStack version for '\
1765@@ -193,7 +197,7 @@
1766 else:
1767 vers_map = OPENSTACK_CODENAMES
1768
1769- for version, cname in vers_map.iteritems():
1770+ for version, cname in six.iteritems(vers_map):
1771 if cname == codename:
1772 return version
1773 # e = "Could not determine OpenStack version for package: %s" % pkg
1774@@ -317,7 +321,7 @@
1775 rc_script.write(
1776 "#!/bin/bash\n")
1777 [rc_script.write('export %s=%s\n' % (u, p))
1778- for u, p in env_vars.iteritems() if u != "script_path"]
1779+ for u, p in six.iteritems(env_vars) if u != "script_path"]
1780
1781
1782 def openstack_upgrade_available(package):
1783@@ -350,8 +354,8 @@
1784 '''
1785 _none = ['None', 'none', None]
1786 if (block_device in _none):
1787- error_out('prepare_storage(): Missing required input: '
1788- 'block_device=%s.' % block_device, level=ERROR)
1789+ error_out('prepare_storage(): Missing required input: block_device=%s.'
1790+ % block_device)
1791
1792 if block_device.startswith('/dev/'):
1793 bdev = block_device
1794@@ -367,8 +371,7 @@
1795 bdev = '/dev/%s' % block_device
1796
1797 if not is_block_device(bdev):
1798- error_out('Failed to locate valid block device at %s' % bdev,
1799- level=ERROR)
1800+ error_out('Failed to locate valid block device at %s' % bdev)
1801
1802 return bdev
1803
1804@@ -417,7 +420,7 @@
1805
1806 if isinstance(address, dns.name.Name):
1807 rtype = 'PTR'
1808- elif isinstance(address, basestring):
1809+ elif isinstance(address, six.string_types):
1810 rtype = 'A'
1811 else:
1812 return None
1813@@ -468,6 +471,14 @@
1814 return result.split('.')[0]
1815
1816
1817+def get_matchmaker_map(mm_file='/etc/oslo/matchmaker_ring.json'):
1818+ mm_map = {}
1819+ if os.path.isfile(mm_file):
1820+ with open(mm_file, 'r') as f:
1821+ mm_map = json.load(f)
1822+ return mm_map
1823+
1824+
1825 def sync_db_with_multi_ipv6_addresses(database, database_user,
1826 relation_prefix=None):
1827 hosts = get_ipv6_addr(dynamic_only=False)
1828@@ -477,10 +488,132 @@
1829 'hostname': json.dumps(hosts)}
1830
1831 if relation_prefix:
1832- keys = kwargs.keys()
1833- for key in keys:
1834+ for key in list(kwargs.keys()):
1835 kwargs["%s_%s" % (relation_prefix, key)] = kwargs[key]
1836 del kwargs[key]
1837
1838 for rid in relation_ids('shared-db'):
1839 relation_set(relation_id=rid, **kwargs)
1840+
1841+
1842+def os_requires_version(ostack_release, pkg):
1843+ """
1844+ Decorator for hook to specify minimum supported release
1845+ """
1846+ def wrap(f):
1847+ @wraps(f)
1848+ def wrapped_f(*args):
1849+ if os_release(pkg) < ostack_release:
1850+ raise Exception("This hook is not supported on releases"
1851+ " before %s" % ostack_release)
1852+ f(*args)
1853+ return wrapped_f
1854+ return wrap
1855+
1856+
1857+def git_install_requested():
1858+ """Returns true if openstack-origin-git is specified."""
1859+ return config('openstack-origin-git') != "None"
1860+
1861+
1862+requirements_dir = None
1863+
1864+
1865+def git_clone_and_install(file_name, core_project):
1866+ """Clone/install all OpenStack repos specified in yaml config file."""
1867+ global requirements_dir
1868+
1869+ if file_name == "None":
1870+ return
1871+
1872+ yaml_file = os.path.join(charm_dir(), file_name)
1873+
1874+ # clone/install the requirements project first
1875+ installed = _git_clone_and_install_subset(yaml_file,
1876+ whitelist=['requirements'])
1877+ if 'requirements' not in installed:
1878+ error_out('requirements git repository must be specified')
1879+
1880+ # clone/install all other projects except requirements and the core project
1881+ blacklist = ['requirements', core_project]
1882+ _git_clone_and_install_subset(yaml_file, blacklist=blacklist,
1883+ update_requirements=True)
1884+
1885+ # clone/install the core project
1886+ whitelist = [core_project]
1887+ installed = _git_clone_and_install_subset(yaml_file, whitelist=whitelist,
1888+ update_requirements=True)
1889+ if core_project not in installed:
1890+ error_out('{} git repository must be specified'.format(core_project))
1891+
1892+
1893+def _git_clone_and_install_subset(yaml_file, whitelist=[], blacklist=[],
1894+ update_requirements=False):
1895+ """Clone/install subset of OpenStack repos specified in yaml config file."""
1896+ global requirements_dir
1897+ installed = []
1898+
1899+ with open(yaml_file, 'r') as fd:
1900+ projects = yaml.load(fd)
1901+ for proj, val in projects.items():
1902+ # The project subset is chosen based on the following 3 rules:
1903+ # 1) If project is in blacklist, we don't clone/install it, period.
1904+ # 2) If whitelist is empty, we clone/install everything else.
1905+ # 3) If whitelist is not empty, we clone/install everything in the
1906+ # whitelist.
1907+ if proj in blacklist:
1908+ continue
1909+ if whitelist and proj not in whitelist:
1910+ continue
1911+ repo = val['repository']
1912+ branch = val['branch']
1913+ repo_dir = _git_clone_and_install_single(repo, branch,
1914+ update_requirements)
1915+ if proj == 'requirements':
1916+ requirements_dir = repo_dir
1917+ installed.append(proj)
1918+ return installed
1919+
1920+
1921+def _git_clone_and_install_single(repo, branch, update_requirements=False):
1922+ """Clone and install a single git repository."""
1923+ dest_parent_dir = "/mnt/openstack-git/"
1924+ dest_dir = os.path.join(dest_parent_dir, os.path.basename(repo))
1925+
1926+ if not os.path.exists(dest_parent_dir):
1927+ juju_log('Host dir not mounted at {}. '
1928+ 'Creating directory there instead.'.format(dest_parent_dir))
1929+ os.mkdir(dest_parent_dir)
1930+
1931+ if not os.path.exists(dest_dir):
1932+ juju_log('Cloning git repo: {}, branch: {}'.format(repo, branch))
1933+ repo_dir = install_remote(repo, dest=dest_parent_dir, branch=branch)
1934+ else:
1935+ repo_dir = dest_dir
1936+
1937+ if update_requirements:
1938+ if not requirements_dir:
1939+ error_out('requirements repo must be cloned before '
1940+ 'updating from global requirements.')
1941+ _git_update_requirements(repo_dir, requirements_dir)
1942+
1943+ juju_log('Installing git repo from dir: {}'.format(repo_dir))
1944+ pip_install(repo_dir)
1945+
1946+ return repo_dir
1947+
1948+
1949+def _git_update_requirements(package_dir, reqs_dir):
1950+ """Update from global requirements.
1951+
1952+ Update an OpenStack git directory's requirements.txt and
1953+ test-requirements.txt from global-requirements.txt."""
1954+ orig_dir = os.getcwd()
1955+ os.chdir(reqs_dir)
1956+ cmd = "python update.py {}".format(package_dir)
1957+ try:
1958+ subprocess.check_call(cmd.split(' '))
1959+ except subprocess.CalledProcessError:
1960+ package = os.path.basename(package_dir)
1961+ error_out("Error updating {} from global-requirements.txt".format(package))
1962+ os.chdir(orig_dir)
1963
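git_clone_and_install() above reads a YAML file from the charm directory that maps project names to 'repository' and 'branch' keys, installing 'requirements' first, everything else next, and the core project last. The file itself is not part of this diff, so the following is only an assumed illustration of the parsed shape the helper expects; repository URLs, branches and project names are examples:

    # The parsed form yaml.load() would hand back to _git_clone_and_install_subset().
    projects = {
        'requirements': {'repository': 'https://github.com/openstack/requirements',
                         'branch': 'stable/juno'},
        'nova': {'repository': 'https://github.com/openstack/nova',
                 'branch': 'stable/juno'},
    }
    # Same access pattern as the helper above.
    for proj, val in projects.items():
        print(proj, val['repository'], val['branch'])
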
1964=== modified file 'hooks/charmhelpers/contrib/peerstorage/__init__.py'
1965--- hooks/charmhelpers/contrib/peerstorage/__init__.py 2014-10-06 21:03:50 +0000
1966+++ hooks/charmhelpers/contrib/peerstorage/__init__.py 2014-12-11 17:56:50 +0000
1967@@ -1,3 +1,4 @@
1968+import six
1969 from charmhelpers.core.hookenv import relation_id as current_relation_id
1970 from charmhelpers.core.hookenv import (
1971 is_relation_made,
1972@@ -93,7 +94,7 @@
1973 if ex in echo_data:
1974 echo_data.pop(ex)
1975 else:
1976- for attribute, value in rdata.iteritems():
1977+ for attribute, value in six.iteritems(rdata):
1978 for include in includes:
1979 if include in attribute:
1980 echo_data[attribute] = value
1981@@ -119,8 +120,8 @@
1982 relation_settings=relation_settings,
1983 **kwargs)
1984 if is_relation_made(peer_relation_name):
1985- for key, value in dict(kwargs.items() +
1986- relation_settings.items()).iteritems():
1987+ for key, value in six.iteritems(dict(list(kwargs.items()) +
1988+ list(relation_settings.items()))):
1989 key_prefix = relation_id or current_relation_id()
1990 peer_store(key_prefix + delimiter + key,
1991 value,
1992
1993=== added directory 'hooks/charmhelpers/contrib/python'
1994=== added file 'hooks/charmhelpers/contrib/python/__init__.py'
1995=== added file 'hooks/charmhelpers/contrib/python/packages.py'
1996--- hooks/charmhelpers/contrib/python/packages.py 1970-01-01 00:00:00 +0000
1997+++ hooks/charmhelpers/contrib/python/packages.py 2014-12-11 17:56:50 +0000
1998@@ -0,0 +1,77 @@
1999+#!/usr/bin/env python
2000+# coding: utf-8
2001+
2002+__author__ = "Jorge Niedbalski <jorge.niedbalski@canonical.com>"
2003+
2004+from charmhelpers.fetch import apt_install, apt_update
2005+from charmhelpers.core.hookenv import log
2006+
2007+try:
2008+ from pip import main as pip_execute
2009+except ImportError:
2010+ apt_update()
2011+ apt_install('python-pip')
2012+ from pip import main as pip_execute
2013+
2014+
2015+def parse_options(given, available):
2016+ """Given a set of options, check if available"""
2017+ for key, value in sorted(given.items()):
2018+ if key in available:
2019+ yield "--{0}={1}".format(key, value)
2020+
2021+
2022+def pip_install_requirements(requirements, **options):
2023+ """Install a requirements file """
2024+ command = ["install"]
2025+
2026+ available_options = ('proxy', 'src', 'log', )
2027+ for option in parse_options(options, available_options):
2028+ command.append(option)
2029+
2030+ command.append("-r {0}".format(requirements))
2031+ log("Installing from file: {} with options: {}".format(requirements,
2032+ command))
2033+ pip_execute(command)
2034+
2035+
2036+def pip_install(package, fatal=False, **options):
2037+ """Install a python package"""
2038+ command = ["install"]
2039+
2040+ available_options = ('proxy', 'src', 'log', "index-url", )
2041+ for option in parse_options(options, available_options):
2042+ command.append(option)
2043+
2044+ if isinstance(package, list):
2045+ command.extend(package)
2046+ else:
2047+ command.append(package)
2048+
2049+ log("Installing {} package with options: {}".format(package,
2050+ command))
2051+ pip_execute(command)
2052+
2053+
2054+def pip_uninstall(package, **options):
2055+ """Uninstall a python package"""
2056+ command = ["uninstall", "-q", "-y"]
2057+
2058+ available_options = ('proxy', 'log', )
2059+ for option in parse_options(options, available_options):
2060+ command.append(option)
2061+
2062+ if isinstance(package, list):
2063+ command.extend(package)
2064+ else:
2065+ command.append(package)
2066+
2067+ log("Uninstalling {} package with options: {}".format(package,
2068+ command))
2069+ pip_execute(command)
2070+
2071+
2072+def pip_list():
2073+ """Returns the list of current python installed packages
2074+ """
2075+ return pip_execute(["list"])
2076
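The new contrib/python/packages.py module wraps pip's in-process main() so charms can install Python packages without shelling out; git_clone_and_install() above calls pip_install(repo_dir) for each cloned tree. A short usage sketch (the package name, proxy URL and requirements path are example values):

    from charmhelpers.contrib.python.packages import (
        pip_install,
        pip_install_requirements,
        pip_list,
    )

    # Keyword options from the whitelisted set become --key=value flags on the
    # underlying "pip install" call.
    pip_install('python-keystoneclient', proxy='http://squid.internal:3128')

    # Installs from a requirements file, e.g. a cloned repo's requirements.txt.
    pip_install_requirements('/mnt/openstack-git/nova/requirements.txt')

    pip_list()
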
2077=== modified file 'hooks/charmhelpers/contrib/storage/linux/ceph.py'
2078--- hooks/charmhelpers/contrib/storage/linux/ceph.py 2014-07-28 14:41:41 +0000
2079+++ hooks/charmhelpers/contrib/storage/linux/ceph.py 2014-12-11 17:56:50 +0000
2080@@ -16,19 +16,18 @@
2081 from subprocess import (
2082 check_call,
2083 check_output,
2084- CalledProcessError
2085+ CalledProcessError,
2086 )
2087-
2088 from charmhelpers.core.hookenv import (
2089 relation_get,
2090 relation_ids,
2091 related_units,
2092 log,
2093+ DEBUG,
2094 INFO,
2095 WARNING,
2096- ERROR
2097+ ERROR,
2098 )
2099-
2100 from charmhelpers.core.host import (
2101 mount,
2102 mounts,
2103@@ -37,7 +36,6 @@
2104 service_running,
2105 umount,
2106 )
2107-
2108 from charmhelpers.fetch import (
2109 apt_install,
2110 )
2111@@ -56,99 +54,85 @@
2112
2113
2114 def install():
2115- ''' Basic Ceph client installation '''
2116+ """Basic Ceph client installation."""
2117 ceph_dir = "/etc/ceph"
2118 if not os.path.exists(ceph_dir):
2119 os.mkdir(ceph_dir)
2120+
2121 apt_install('ceph-common', fatal=True)
2122
2123
2124 def rbd_exists(service, pool, rbd_img):
2125- ''' Check to see if a RADOS block device exists '''
2126+ """Check to see if a RADOS block device exists."""
2127 try:
2128- out = check_output(['rbd', 'list', '--id', service,
2129- '--pool', pool])
2130+ out = check_output(['rbd', 'list', '--id',
2131+ service, '--pool', pool]).decode('UTF-8')
2132 except CalledProcessError:
2133 return False
2134- else:
2135- return rbd_img in out
2136+
2137+ return rbd_img in out
2138
2139
2140 def create_rbd_image(service, pool, image, sizemb):
2141- ''' Create a new RADOS block device '''
2142- cmd = [
2143- 'rbd',
2144- 'create',
2145- image,
2146- '--size',
2147- str(sizemb),
2148- '--id',
2149- service,
2150- '--pool',
2151- pool
2152- ]
2153+ """Create a new RADOS block device."""
2154+ cmd = ['rbd', 'create', image, '--size', str(sizemb), '--id', service,
2155+ '--pool', pool]
2156 check_call(cmd)
2157
2158
2159 def pool_exists(service, name):
2160- ''' Check to see if a RADOS pool already exists '''
2161+ """Check to see if a RADOS pool already exists."""
2162 try:
2163- out = check_output(['rados', '--id', service, 'lspools'])
2164+ out = check_output(['rados', '--id', service,
2165+ 'lspools']).decode('UTF-8')
2166 except CalledProcessError:
2167 return False
2168- else:
2169- return name in out
2170+
2171+ return name in out
2172
2173
2174 def get_osds(service):
2175- '''
2176- Return a list of all Ceph Object Storage Daemons
2177- currently in the cluster
2178- '''
2179+ """Return a list of all Ceph Object Storage Daemons currently in the
2180+ cluster.
2181+ """
2182 version = ceph_version()
2183 if version and version >= '0.56':
2184 return json.loads(check_output(['ceph', '--id', service,
2185- 'osd', 'ls', '--format=json']))
2186- else:
2187- return None
2188-
2189-
2190-def create_pool(service, name, replicas=2):
2191- ''' Create a new RADOS pool '''
2192+ 'osd', 'ls',
2193+ '--format=json']).decode('UTF-8'))
2194+
2195+ return None
2196+
2197+
2198+def create_pool(service, name, replicas=3):
2199+ """Create a new RADOS pool."""
2200 if pool_exists(service, name):
2201 log("Ceph pool {} already exists, skipping creation".format(name),
2202 level=WARNING)
2203 return
2204+
2205 # Calculate the number of placement groups based
2206 # on upstream recommended best practices.
2207 osds = get_osds(service)
2208 if osds:
2209- pgnum = (len(osds) * 100 / replicas)
2210+ pgnum = (len(osds) * 100 // replicas)
2211 else:
2212 # NOTE(james-page): Default to 200 for older ceph versions
2213 # which don't support OSD query from cli
2214 pgnum = 200
2215- cmd = [
2216- 'ceph', '--id', service,
2217- 'osd', 'pool', 'create',
2218- name, str(pgnum)
2219- ]
2220+
2221+ cmd = ['ceph', '--id', service, 'osd', 'pool', 'create', name, str(pgnum)]
2222 check_call(cmd)
2223- cmd = [
2224- 'ceph', '--id', service,
2225- 'osd', 'pool', 'set', name,
2226- 'size', str(replicas)
2227- ]
2228+
2229+ cmd = ['ceph', '--id', service, 'osd', 'pool', 'set', name, 'size',
2230+ str(replicas)]
2231 check_call(cmd)
2232
2233
2234 def delete_pool(service, name):
2235- ''' Delete a RADOS pool from ceph '''
2236- cmd = [
2237- 'ceph', '--id', service,
2238- 'osd', 'pool', 'delete',
2239- name, '--yes-i-really-really-mean-it'
2240- ]
2241+ """Delete a RADOS pool from ceph."""
2242+ cmd = ['ceph', '--id', service, 'osd', 'pool', 'delete', name,
2243+ '--yes-i-really-really-mean-it']
2244 check_call(cmd)
2245
2246
2247@@ -161,44 +145,43 @@
2248
2249
2250 def create_keyring(service, key):
2251- ''' Create a new Ceph keyring containing key'''
2252+ """Create a new Ceph keyring containing key."""
2253 keyring = _keyring_path(service)
2254 if os.path.exists(keyring):
2255- log('ceph: Keyring exists at %s.' % keyring, level=WARNING)
2256+ log('Ceph keyring exists at %s.' % keyring, level=WARNING)
2257 return
2258- cmd = [
2259- 'ceph-authtool',
2260- keyring,
2261- '--create-keyring',
2262- '--name=client.{}'.format(service),
2263- '--add-key={}'.format(key)
2264- ]
2265+
2266+ cmd = ['ceph-authtool', keyring, '--create-keyring',
2267+ '--name=client.{}'.format(service), '--add-key={}'.format(key)]
2268 check_call(cmd)
2269- log('ceph: Created new ring at %s.' % keyring, level=INFO)
2270+ log('Created new ceph keyring at %s.' % keyring, level=DEBUG)
2271
2272
2273 def create_key_file(service, key):
2274- ''' Create a file containing key '''
2275+ """Create a file containing key."""
2276 keyfile = _keyfile_path(service)
2277 if os.path.exists(keyfile):
2278- log('ceph: Keyfile exists at %s.' % keyfile, level=WARNING)
2279+ log('Keyfile exists at %s.' % keyfile, level=WARNING)
2280 return
2281+
2282 with open(keyfile, 'w') as fd:
2283 fd.write(key)
2284- log('ceph: Created new keyfile at %s.' % keyfile, level=INFO)
2285+
2286+ log('Created new keyfile at %s.' % keyfile, level=INFO)
2287
2288
2289 def get_ceph_nodes():
2290- ''' Query named relation 'ceph' to detemine current nodes '''
2291+ """Query named relation 'ceph' to determine current nodes."""
2292 hosts = []
2293 for r_id in relation_ids('ceph'):
2294 for unit in related_units(r_id):
2295 hosts.append(relation_get('private-address', unit=unit, rid=r_id))
2296+
2297 return hosts
2298
2299
2300 def configure(service, key, auth, use_syslog):
2301- ''' Perform basic configuration of Ceph '''
2302+ """Perform basic configuration of Ceph."""
2303 create_keyring(service, key)
2304 create_key_file(service, key)
2305 hosts = get_ceph_nodes()
2306@@ -211,17 +194,17 @@
2307
2308
2309 def image_mapped(name):
2310- ''' Determine whether a RADOS block device is mapped locally '''
2311+ """Determine whether a RADOS block device is mapped locally."""
2312 try:
2313- out = check_output(['rbd', 'showmapped'])
2314+ out = check_output(['rbd', 'showmapped']).decode('UTF-8')
2315 except CalledProcessError:
2316 return False
2317- else:
2318- return name in out
2319+
2320+ return name in out
2321
2322
2323 def map_block_storage(service, pool, image):
2324- ''' Map a RADOS block device for local use '''
2325+ """Map a RADOS block device for local use."""
2326 cmd = [
2327 'rbd',
2328 'map',
2329@@ -235,31 +218,32 @@
2330
2331
2332 def filesystem_mounted(fs):
2333- ''' Determine whether a filesytems is already mounted '''
2334+ """Determine whether a filesytems is already mounted."""
2335 return fs in [f for f, m in mounts()]
2336
2337
2338 def make_filesystem(blk_device, fstype='ext4', timeout=10):
2339- ''' Make a new filesystem on the specified block device '''
2340+ """Make a new filesystem on the specified block device."""
2341 count = 0
2342 e_noent = os.errno.ENOENT
2343 while not os.path.exists(blk_device):
2344 if count >= timeout:
2345- log('ceph: gave up waiting on block device %s' % blk_device,
2346+ log('Gave up waiting on block device %s' % blk_device,
2347 level=ERROR)
2348 raise IOError(e_noent, os.strerror(e_noent), blk_device)
2349- log('ceph: waiting for block device %s to appear' % blk_device,
2350- level=INFO)
2351+
2352+ log('Waiting for block device %s to appear' % blk_device,
2353+ level=DEBUG)
2354 count += 1
2355 time.sleep(1)
2356 else:
2357- log('ceph: Formatting block device %s as filesystem %s.' %
2358+ log('Formatting block device %s as filesystem %s.' %
2359 (blk_device, fstype), level=INFO)
2360 check_call(['mkfs', '-t', fstype, blk_device])
2361
2362
2363 def place_data_on_block_device(blk_device, data_src_dst):
2364- ''' Migrate data in data_src_dst to blk_device and then remount '''
2365+ """Migrate data in data_src_dst to blk_device and then remount."""
2366 # mount block device into /mnt
2367 mount(blk_device, '/mnt')
2368 # copy data to /mnt
2369@@ -279,8 +263,8 @@
2370
2371 # TODO: re-use
2372 def modprobe(module):
2373- ''' Load a kernel module and configure for auto-load on reboot '''
2374- log('ceph: Loading kernel module', level=INFO)
2375+ """Load a kernel module and configure for auto-load on reboot."""
2376+ log('Loading kernel module', level=INFO)
2377 cmd = ['modprobe', module]
2378 check_call(cmd)
2379 with open('/etc/modules', 'r+') as modules:
2380@@ -289,7 +273,7 @@
2381
2382
2383 def copy_files(src, dst, symlinks=False, ignore=None):
2384- ''' Copy files from src to dst '''
2385+ """Copy files from src to dst."""
2386 for item in os.listdir(src):
2387 s = os.path.join(src, item)
2388 d = os.path.join(dst, item)
2389@@ -300,9 +284,9 @@
2390
2391
2392 def ensure_ceph_storage(service, pool, rbd_img, sizemb, mount_point,
2393- blk_device, fstype, system_services=[]):
2394- """
2395- NOTE: This function must only be called from a single service unit for
2396+ blk_device, fstype, system_services=[],
2397+ replicas=3):
2398+ """NOTE: This function must only be called from a single service unit for
2399 the same rbd_img otherwise data loss will occur.
2400
2401 Ensures given pool and RBD image exists, is mapped to a block device,
2402@@ -316,15 +300,16 @@
2403 """
2404 # Ensure pool, RBD image, RBD mappings are in place.
2405 if not pool_exists(service, pool):
2406- log('ceph: Creating new pool {}.'.format(pool))
2407- create_pool(service, pool)
2408+ log('Creating new pool {}.'.format(pool), level=INFO)
2409+ create_pool(service, pool, replicas=replicas)
2410
2411 if not rbd_exists(service, pool, rbd_img):
2412- log('ceph: Creating RBD image ({}).'.format(rbd_img))
2413+ log('Creating RBD image ({}).'.format(rbd_img), level=INFO)
2414 create_rbd_image(service, pool, rbd_img, sizemb)
2415
2416 if not image_mapped(rbd_img):
2417- log('ceph: Mapping RBD Image {} as a Block Device.'.format(rbd_img))
2418+ log('Mapping RBD Image {} as a Block Device.'.format(rbd_img),
2419+ level=INFO)
2420 map_block_storage(service, pool, rbd_img)
2421
2422 # make file system
2423@@ -339,45 +324,47 @@
2424
2425 for svc in system_services:
2426 if service_running(svc):
2427- log('ceph: Stopping services {} prior to migrating data.'
2428- .format(svc))
2429+ log('Stopping services {} prior to migrating data.'
2430+ .format(svc), level=DEBUG)
2431 service_stop(svc)
2432
2433 place_data_on_block_device(blk_device, mount_point)
2434
2435 for svc in system_services:
2436- log('ceph: Starting service {} after migrating data.'
2437- .format(svc))
2438+ log('Starting service {} after migrating data.'
2439+ .format(svc), level=DEBUG)
2440 service_start(svc)
2441
2442
2443 def ensure_ceph_keyring(service, user=None, group=None):
2444- '''
2445- Ensures a ceph keyring is created for a named service
2446- and optionally ensures user and group ownership.
2447+ """Ensures a ceph keyring is created for a named service and optionally
2448+ ensures user and group ownership.
2449
2450 Returns False if no ceph key is available in relation state.
2451- '''
2452+ """
2453 key = None
2454 for rid in relation_ids('ceph'):
2455 for unit in related_units(rid):
2456 key = relation_get('key', rid=rid, unit=unit)
2457 if key:
2458 break
2459+
2460 if not key:
2461 return False
2462+
2463 create_keyring(service=service, key=key)
2464 keyring = _keyring_path(service)
2465 if user and group:
2466 check_call(['chown', '%s.%s' % (user, group), keyring])
2467+
2468 return True
2469
2470
2471 def ceph_version():
2472- ''' Retrieve the local version of ceph '''
2473+ """Retrieve the local version of ceph."""
2474 if os.path.exists('/usr/bin/ceph'):
2475 cmd = ['ceph', '-v']
2476- output = check_output(cmd)
2477+ output = check_output(cmd).decode('US-ASCII')
2478 output = output.split()
2479 if len(output) > 3:
2480 return output[2]
2481
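Two behavioural tweaks in the ceph helper above are easy to miss: create_pool() now defaults to 3 replicas, and the placement-group count uses floor division so it stays an integer under Python 3. The arithmetic, with an assumed OSD count:

    replicas = 3
    osds = list(range(8))               # pretend get_osds() reported 8 OSDs
    pgnum = len(osds) * 100 // replicas
    print(pgnum)                        # 266; plain "/" gives 266.66... on Python 3
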
2482=== modified file 'hooks/charmhelpers/contrib/storage/linux/loopback.py'
2483--- hooks/charmhelpers/contrib/storage/linux/loopback.py 2013-08-02 03:42:16 +0000
2484+++ hooks/charmhelpers/contrib/storage/linux/loopback.py 2014-12-11 17:56:50 +0000
2485@@ -1,12 +1,12 @@
2486-
2487 import os
2488 import re
2489-
2490 from subprocess import (
2491 check_call,
2492 check_output,
2493 )
2494
2495+import six
2496+
2497
2498 ##################################################
2499 # loopback device helpers.
2500@@ -37,7 +37,7 @@
2501 '''
2502 file_path = os.path.abspath(file_path)
2503 check_call(['losetup', '--find', file_path])
2504- for d, f in loopback_devices().iteritems():
2505+ for d, f in six.iteritems(loopback_devices()):
2506 if f == file_path:
2507 return d
2508
2509@@ -51,7 +51,7 @@
2510
2511 :returns: str: Full path to the ensured loopback device (eg, /dev/loop0)
2512 '''
2513- for d, f in loopback_devices().iteritems():
2514+ for d, f in six.iteritems(loopback_devices()):
2515 if f == path:
2516 return d
2517
2518
2519=== modified file 'hooks/charmhelpers/contrib/storage/linux/lvm.py'
2520--- hooks/charmhelpers/contrib/storage/linux/lvm.py 2014-05-19 11:38:09 +0000
2521+++ hooks/charmhelpers/contrib/storage/linux/lvm.py 2014-12-11 17:56:50 +0000
2522@@ -61,6 +61,7 @@
2523 vg = None
2524 pvd = check_output(['pvdisplay', block_device]).splitlines()
2525 for l in pvd:
2526+ l = l.decode('UTF-8')
2527 if l.strip().startswith('VG Name'):
2528 vg = ' '.join(l.strip().split()[2:])
2529 return vg
2530
2531=== modified file 'hooks/charmhelpers/contrib/storage/linux/utils.py'
2532--- hooks/charmhelpers/contrib/storage/linux/utils.py 2014-09-17 08:37:24 +0000
2533+++ hooks/charmhelpers/contrib/storage/linux/utils.py 2014-12-11 17:56:50 +0000
2534@@ -30,7 +30,8 @@
2535 # sometimes sgdisk exits non-zero; this is OK, dd will clean up
2536 call(['sgdisk', '--zap-all', '--mbrtogpt',
2537 '--clear', block_device])
2538- dev_end = check_output(['blockdev', '--getsz', block_device])
2539+ dev_end = check_output(['blockdev', '--getsz',
2540+ block_device]).decode('UTF-8')
2541 gpt_end = int(dev_end.split()[0]) - 100
2542 check_call(['dd', 'if=/dev/zero', 'of=%s' % (block_device),
2543 'bs=1M', 'count=1'])
2544@@ -47,7 +48,7 @@
2545 it doesn't.
2546 '''
2547 is_partition = bool(re.search(r".*[0-9]+\b", device))
2548- out = check_output(['mount'])
2549+ out = check_output(['mount']).decode('UTF-8')
2550 if is_partition:
2551 return bool(re.search(device + r"\b", out))
2552 return bool(re.search(device + r"[0-9]+\b", out))
2553
2554=== modified file 'hooks/charmhelpers/core/fstab.py'
2555--- hooks/charmhelpers/core/fstab.py 2014-07-11 02:43:50 +0000
2556+++ hooks/charmhelpers/core/fstab.py 2014-12-11 17:56:50 +0000
2557@@ -3,10 +3,11 @@
2558
2559 __author__ = 'Jorge Niedbalski R. <jorge.niedbalski@canonical.com>'
2560
2561+import io
2562 import os
2563
2564
2565-class Fstab(file):
2566+class Fstab(io.FileIO):
2567 """This class extends file in order to implement a file reader/writer
2568 for file `/etc/fstab`
2569 """
2570@@ -24,8 +25,8 @@
2571 options = "defaults"
2572
2573 self.options = options
2574- self.d = d
2575- self.p = p
2576+ self.d = int(d)
2577+ self.p = int(p)
2578
2579 def __eq__(self, o):
2580 return str(self) == str(o)
2581@@ -45,7 +46,7 @@
2582 self._path = path
2583 else:
2584 self._path = self.DEFAULT_PATH
2585- file.__init__(self, self._path, 'r+')
2586+ super(Fstab, self).__init__(self._path, 'rb+')
2587
2588 def _hydrate_entry(self, line):
2589 # NOTE: use split with no arguments to split on any
2590@@ -58,8 +59,9 @@
2591 def entries(self):
2592 self.seek(0)
2593 for line in self.readlines():
2594+ line = line.decode('us-ascii')
2595 try:
2596- if not line.startswith("#"):
2597+ if line.strip() and not line.startswith("#"):
2598 yield self._hydrate_entry(line)
2599 except ValueError:
2600 pass
2601@@ -75,14 +77,14 @@
2602 if self.get_entry_by_attr('device', entry.device):
2603 return False
2604
2605- self.write(str(entry) + '\n')
2606+ self.write((str(entry) + '\n').encode('us-ascii'))
2607 self.truncate()
2608 return entry
2609
2610 def remove_entry(self, entry):
2611 self.seek(0)
2612
2613- lines = self.readlines()
2614+ lines = [l.decode('us-ascii') for l in self.readlines()]
2615
2616 found = False
2617 for index, line in enumerate(lines):
2618@@ -97,7 +99,7 @@
2619 lines.remove(line)
2620
2621 self.seek(0)
2622- self.write(''.join(lines))
2623+ self.write(''.join(lines).encode('us-ascii'))
2624 self.truncate()
2625 return True
2626
2627
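Fstab now subclasses io.FileIO because the old built-in file type no longer exists on Python 3; that makes every read and write a bytes operation, hence the us-ascii encode/decode added around entries. A minimal illustration of the byte round trip against a temporary file rather than /etc/fstab:

    import io
    import tempfile

    tmp = tempfile.NamedTemporaryFile(delete=False)
    tmp.close()
    with io.FileIO(tmp.name, 'rb+') as f:
        # FileIO deals in bytes, so entries are encoded on write...
        f.write('/dev/vdb /srv ext4 defaults 0 0\n'.encode('us-ascii'))
        f.seek(0)
        # ...and decoded on read, mirroring add_entry()/entries() above.
        print([line.decode('us-ascii') for line in f.readlines()])
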
2628=== modified file 'hooks/charmhelpers/core/hookenv.py'
2629--- hooks/charmhelpers/core/hookenv.py 2014-10-06 21:03:50 +0000
2630+++ hooks/charmhelpers/core/hookenv.py 2014-12-11 17:56:50 +0000
2631@@ -9,9 +9,14 @@
2632 import yaml
2633 import subprocess
2634 import sys
2635-import UserDict
2636 from subprocess import CalledProcessError
2637
2638+import six
2639+if not six.PY3:
2640+ from UserDict import UserDict
2641+else:
2642+ from collections import UserDict
2643+
2644 CRITICAL = "CRITICAL"
2645 ERROR = "ERROR"
2646 WARNING = "WARNING"
2647@@ -63,16 +68,18 @@
2648 command = ['juju-log']
2649 if level:
2650 command += ['-l', level]
2651+ if not isinstance(message, six.string_types):
2652+ message = repr(message)
2653 command += [message]
2654 subprocess.call(command)
2655
2656
2657-class Serializable(UserDict.IterableUserDict):
2658+class Serializable(UserDict):
2659 """Wrapper, an object that can be serialized to yaml or json"""
2660
2661 def __init__(self, obj):
2662 # wrap the object
2663- UserDict.IterableUserDict.__init__(self)
2664+ UserDict.__init__(self)
2665 self.data = obj
2666
2667 def __getattr__(self, attr):
2668@@ -214,6 +221,12 @@
2669 except KeyError:
2670 return (self._prev_dict or {})[key]
2671
2672+ def keys(self):
2673+ prev_keys = []
2674+ if self._prev_dict is not None:
2675+ prev_keys = self._prev_dict.keys()
2676+ return list(set(prev_keys + list(dict.keys(self))))
2677+
2678 def load_previous(self, path=None):
2679 """Load previous copy of config from disk.
2680
2681@@ -263,7 +276,7 @@
2682
2683 """
2684 if self._prev_dict:
2685- for k, v in self._prev_dict.iteritems():
2686+ for k, v in six.iteritems(self._prev_dict):
2687 if k not in self:
2688 self[k] = v
2689 with open(self.path, 'w') as f:
2690@@ -278,7 +291,8 @@
2691 config_cmd_line.append(scope)
2692 config_cmd_line.append('--format=json')
2693 try:
2694- config_data = json.loads(subprocess.check_output(config_cmd_line))
2695+ config_data = json.loads(
2696+ subprocess.check_output(config_cmd_line).decode('UTF-8'))
2697 if scope is not None:
2698 return config_data
2699 return Config(config_data)
2700@@ -297,10 +311,10 @@
2701 if unit:
2702 _args.append(unit)
2703 try:
2704- return json.loads(subprocess.check_output(_args))
2705+ return json.loads(subprocess.check_output(_args).decode('UTF-8'))
2706 except ValueError:
2707 return None
2708- except CalledProcessError, e:
2709+ except CalledProcessError as e:
2710 if e.returncode == 2:
2711 return None
2712 raise
2713@@ -312,7 +326,7 @@
2714 relation_cmd_line = ['relation-set']
2715 if relation_id is not None:
2716 relation_cmd_line.extend(('-r', relation_id))
2717- for k, v in (relation_settings.items() + kwargs.items()):
2718+ for k, v in (list(relation_settings.items()) + list(kwargs.items())):
2719 if v is None:
2720 relation_cmd_line.append('{}='.format(k))
2721 else:
2722@@ -329,7 +343,8 @@
2723 relid_cmd_line = ['relation-ids', '--format=json']
2724 if reltype is not None:
2725 relid_cmd_line.append(reltype)
2726- return json.loads(subprocess.check_output(relid_cmd_line)) or []
2727+ return json.loads(
2728+ subprocess.check_output(relid_cmd_line).decode('UTF-8')) or []
2729 return []
2730
2731
2732@@ -340,7 +355,8 @@
2733 units_cmd_line = ['relation-list', '--format=json']
2734 if relid is not None:
2735 units_cmd_line.extend(('-r', relid))
2736- return json.loads(subprocess.check_output(units_cmd_line)) or []
2737+ return json.loads(
2738+ subprocess.check_output(units_cmd_line).decode('UTF-8')) or []
2739
2740
2741 @cached
2742@@ -380,21 +396,31 @@
2743
2744
2745 @cached
2746+def metadata():
2747+ """Get the current charm metadata.yaml contents as a python object"""
2748+ with open(os.path.join(charm_dir(), 'metadata.yaml')) as md:
2749+ return yaml.safe_load(md)
2750+
2751+
2752+@cached
2753 def relation_types():
2754 """Get a list of relation types supported by this charm"""
2755- charmdir = os.environ.get('CHARM_DIR', '')
2756- mdf = open(os.path.join(charmdir, 'metadata.yaml'))
2757- md = yaml.safe_load(mdf)
2758 rel_types = []
2759+ md = metadata()
2760 for key in ('provides', 'requires', 'peers'):
2761 section = md.get(key)
2762 if section:
2763 rel_types.extend(section.keys())
2764- mdf.close()
2765 return rel_types
2766
2767
2768 @cached
2769+def charm_name():
2770+ """Get the name of the current charm as is specified on metadata.yaml"""
2771+ return metadata().get('name')
2772+
2773+
2774+@cached
2775 def relations():
2776 """Get a nested dictionary of relation data for all related units"""
2777 rels = {}
2778@@ -449,7 +475,7 @@
2779 """Get the unit ID for the remote unit"""
2780 _args = ['unit-get', '--format=json', attribute]
2781 try:
2782- return json.loads(subprocess.check_output(_args))
2783+ return json.loads(subprocess.check_output(_args).decode('UTF-8'))
2784 except ValueError:
2785 return None
2786
2787
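The hookenv changes above repeatedly add .decode('UTF-8') before json.loads() because subprocess.check_output() returns bytes on Python 3, while the json module of that era expects text. The same pattern outside a hook environment, with echo standing in for tools such as relation-ids or unit-get:

    import json
    import subprocess

    # echo is only a stand-in; inside a hook the command would be e.g.
    # ['unit-get', '--format=json', 'private-address'].
    raw = subprocess.check_output(['echo', '{"private-address": "10.0.0.5"}'])
    print(json.loads(raw.decode('UTF-8'))['private-address'])
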
2788=== modified file 'hooks/charmhelpers/core/host.py'
2789--- hooks/charmhelpers/core/host.py 2014-10-06 21:03:50 +0000
2790+++ hooks/charmhelpers/core/host.py 2014-12-11 17:56:50 +0000
2791@@ -6,19 +6,20 @@
2792 # Matthew Wedgwood <matthew.wedgwood@canonical.com>
2793
2794 import os
2795+import re
2796 import pwd
2797 import grp
2798 import random
2799 import string
2800 import subprocess
2801 import hashlib
2802-import shutil
2803 from contextlib import contextmanager
2804-
2805 from collections import OrderedDict
2806
2807-from hookenv import log
2808-from fstab import Fstab
2809+import six
2810+
2811+from .hookenv import log
2812+from .fstab import Fstab
2813
2814
2815 def service_start(service_name):
2816@@ -54,7 +55,9 @@
2817 def service_running(service):
2818 """Determine whether a system service is running"""
2819 try:
2820- output = subprocess.check_output(['service', service, 'status'], stderr=subprocess.STDOUT)
2821+ output = subprocess.check_output(
2822+ ['service', service, 'status'],
2823+ stderr=subprocess.STDOUT).decode('UTF-8')
2824 except subprocess.CalledProcessError:
2825 return False
2826 else:
2827@@ -67,7 +70,9 @@
2828 def service_available(service_name):
2829 """Determine whether a system service is available"""
2830 try:
2831- subprocess.check_output(['service', service_name, 'status'], stderr=subprocess.STDOUT)
2832+ subprocess.check_output(
2833+ ['service', service_name, 'status'],
2834+ stderr=subprocess.STDOUT).decode('UTF-8')
2835 except subprocess.CalledProcessError as e:
2836 return 'unrecognized service' not in e.output
2837 else:
2838@@ -96,6 +101,26 @@
2839 return user_info
2840
2841
2842+def add_group(group_name, system_group=False):
2843+ """Add a group to the system"""
2844+ try:
2845+ group_info = grp.getgrnam(group_name)
2846+ log('group {0} already exists!'.format(group_name))
2847+ except KeyError:
2848+ log('creating group {0}'.format(group_name))
2849+ cmd = ['addgroup']
2850+ if system_group:
2851+ cmd.append('--system')
2852+ else:
2853+ cmd.extend([
2854+ '--group',
2855+ ])
2856+ cmd.append(group_name)
2857+ subprocess.check_call(cmd)
2858+ group_info = grp.getgrnam(group_name)
2859+ return group_info
2860+
2861+
2862 def add_user_to_group(username, group):
2863 """Add a user to a group"""
2864 cmd = [
2865@@ -115,7 +140,7 @@
2866 cmd.append(from_path)
2867 cmd.append(to_path)
2868 log(" ".join(cmd))
2869- return subprocess.check_output(cmd).strip()
2870+ return subprocess.check_output(cmd).decode('UTF-8').strip()
2871
2872
2873 def symlink(source, destination):
2874@@ -130,7 +155,7 @@
2875 subprocess.check_call(cmd)
2876
2877
2878-def mkdir(path, owner='root', group='root', perms=0555, force=False):
2879+def mkdir(path, owner='root', group='root', perms=0o555, force=False):
2880 """Create a directory"""
2881 log("Making dir {} {}:{} {:o}".format(path, owner, group,
2882 perms))
2883@@ -146,7 +171,7 @@
2884 os.chown(realpath, uid, gid)
2885
2886
2887-def write_file(path, content, owner='root', group='root', perms=0444):
2888+def write_file(path, content, owner='root', group='root', perms=0o444):
2889 """Create or overwrite a file with the contents of a string"""
2890 log("Writing file {} {}:{} {:o}".format(path, owner, group, perms))
2891 uid = pwd.getpwnam(owner).pw_uid
2892@@ -177,7 +202,7 @@
2893 cmd_args.extend([device, mountpoint])
2894 try:
2895 subprocess.check_output(cmd_args)
2896- except subprocess.CalledProcessError, e:
2897+ except subprocess.CalledProcessError as e:
2898 log('Error mounting {} at {}\n{}'.format(device, mountpoint, e.output))
2899 return False
2900
2901@@ -191,7 +216,7 @@
2902 cmd_args = ['umount', mountpoint]
2903 try:
2904 subprocess.check_output(cmd_args)
2905- except subprocess.CalledProcessError, e:
2906+ except subprocess.CalledProcessError as e:
2907 log('Error unmounting {}\n{}'.format(mountpoint, e.output))
2908 return False
2909
2910@@ -218,8 +243,8 @@
2911 """
2912 if os.path.exists(path):
2913 h = getattr(hashlib, hash_type)()
2914- with open(path, 'r') as source:
2915- h.update(source.read()) # IGNORE:E1101 - it does have update
2916+ with open(path, 'rb') as source:
2917+ h.update(source.read())
2918 return h.hexdigest()
2919 else:
2920 return None
2921@@ -297,7 +322,7 @@
2922 if length is None:
2923 length = random.choice(range(35, 45))
2924 alphanumeric_chars = [
2925- l for l in (string.letters + string.digits)
2926+ l for l in (string.ascii_letters + string.digits)
2927 if l not in 'l0QD1vAEIOUaeiou']
2928 random_chars = [
2929 random.choice(alphanumeric_chars) for _ in range(length)]
2930@@ -306,18 +331,24 @@
2931
2932 def list_nics(nic_type):
2933 '''Return a list of nics of given type(s)'''
2934- if isinstance(nic_type, basestring):
2935+ if isinstance(nic_type, six.string_types):
2936 int_types = [nic_type]
2937 else:
2938 int_types = nic_type
2939 interfaces = []
2940 for int_type in int_types:
2941 cmd = ['ip', 'addr', 'show', 'label', int_type + '*']
2942- ip_output = subprocess.check_output(cmd).split('\n')
2943+ ip_output = subprocess.check_output(cmd).decode('UTF-8').split('\n')
2944 ip_output = (line for line in ip_output if line)
2945 for line in ip_output:
2946 if line.split()[1].startswith(int_type):
2947- interfaces.append(line.split()[1].replace(":", ""))
2948+ matched = re.search('.*: (bond[0-9]+\.[0-9]+)@.*', line)
2949+ if matched:
2950+ interface = matched.groups()[0]
2951+ else:
2952+ interface = line.split()[1].replace(":", "")
2953+ interfaces.append(interface)
2954+
2955 return interfaces
2956
2957
2958@@ -329,7 +360,7 @@
2959
2960 def get_nic_mtu(nic):
2961 cmd = ['ip', 'addr', 'show', nic]
2962- ip_output = subprocess.check_output(cmd).split('\n')
2963+ ip_output = subprocess.check_output(cmd).decode('UTF-8').split('\n')
2964 mtu = ""
2965 for line in ip_output:
2966 words = line.split()
2967@@ -340,7 +371,7 @@
2968
2969 def get_nic_hwaddr(nic):
2970 cmd = ['ip', '-o', '-0', 'addr', 'show', nic]
2971- ip_output = subprocess.check_output(cmd)
2972+ ip_output = subprocess.check_output(cmd).decode('UTF-8')
2973 hwaddr = ""
2974 words = ip_output.split()
2975 if 'link/ether' in words:
2976@@ -357,8 +388,8 @@
2977
2978 '''
2979 import apt_pkg
2980- from charmhelpers.fetch import apt_cache
2981 if not pkgcache:
2982+ from charmhelpers.fetch import apt_cache
2983 pkgcache = apt_cache()
2984 pkg = pkgcache[package]
2985 return apt_pkg.version_compare(pkg.current_ver.ver_str, revno)
2986
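Besides the Python 3 fixes, host.py gains add_group() and teaches list_nics() to report VLAN interfaces on bonds (for example bond0.100@bond0) as bond0.100. The new regex applied to a made-up `ip addr show` line:

    import re

    line = '6: bond0.100@bond0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500'
    matched = re.search(r'.*: (bond[0-9]+\.[0-9]+)@.*', line)
    if matched:
        print(matched.groups()[0])                  # bond0.100
    else:
        print(line.split()[1].replace(':', ''))     # plain-NIC fallback
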
2987=== modified file 'hooks/charmhelpers/core/services/__init__.py'
2988--- hooks/charmhelpers/core/services/__init__.py 2014-08-13 13:12:02 +0000
2989+++ hooks/charmhelpers/core/services/__init__.py 2014-12-11 17:56:50 +0000
2990@@ -1,2 +1,2 @@
2991-from .base import *
2992-from .helpers import *
2993+from .base import * # NOQA
2994+from .helpers import * # NOQA
2995
2996=== modified file 'hooks/charmhelpers/core/services/helpers.py'
2997--- hooks/charmhelpers/core/services/helpers.py 2014-10-06 21:03:50 +0000
2998+++ hooks/charmhelpers/core/services/helpers.py 2014-12-11 17:56:50 +0000
2999@@ -196,7 +196,7 @@
3000 if not os.path.isabs(file_name):
3001 file_name = os.path.join(hookenv.charm_dir(), file_name)
3002 with open(file_name, 'w') as file_stream:
3003- os.fchmod(file_stream.fileno(), 0600)
3004+ os.fchmod(file_stream.fileno(), 0o600)
3005 yaml.dump(config_data, file_stream)
3006
3007 def read_context(self, file_name):
3008@@ -211,15 +211,19 @@
3009
3010 class TemplateCallback(ManagerCallback):
3011 """
3012- Callback class that will render a Jinja2 template, for use as a ready action.
3013-
3014- :param str source: The template source file, relative to `$CHARM_DIR/templates`
3015+ Callback class that will render a Jinja2 template, for use as a ready
3016+ action.
3017+
3018+ :param str source: The template source file, relative to
3019+ `$CHARM_DIR/templates`
3020+
3021 :param str target: The target to write the rendered template to
3022 :param str owner: The owner of the rendered file
3023 :param str group: The group of the rendered file
3024 :param int perms: The permissions of the rendered file
3025 """
3026- def __init__(self, source, target, owner='root', group='root', perms=0444):
3027+ def __init__(self, source, target,
3028+ owner='root', group='root', perms=0o444):
3029 self.source = source
3030 self.target = target
3031 self.owner = owner
3032
3033=== modified file 'hooks/charmhelpers/core/templating.py'
3034--- hooks/charmhelpers/core/templating.py 2014-08-13 13:12:02 +0000
3035+++ hooks/charmhelpers/core/templating.py 2014-12-11 17:56:50 +0000
3036@@ -4,7 +4,8 @@
3037 from charmhelpers.core import hookenv
3038
3039
3040-def render(source, target, context, owner='root', group='root', perms=0444, templates_dir=None):
3041+def render(source, target, context, owner='root', group='root',
3042+ perms=0o444, templates_dir=None):
3043 """
3044 Render a template.
3045
3046
3047=== modified file 'hooks/charmhelpers/fetch/__init__.py'
3048--- hooks/charmhelpers/fetch/__init__.py 2014-10-06 21:03:50 +0000
3049+++ hooks/charmhelpers/fetch/__init__.py 2014-12-11 17:56:50 +0000
3050@@ -5,10 +5,6 @@
3051 from charmhelpers.core.host import (
3052 lsb_release
3053 )
3054-from urlparse import (
3055- urlparse,
3056- urlunparse,
3057-)
3058 import subprocess
3059 from charmhelpers.core.hookenv import (
3060 config,
3061@@ -16,6 +12,12 @@
3062 )
3063 import os
3064
3065+import six
3066+if six.PY3:
3067+ from urllib.parse import urlparse, urlunparse
3068+else:
3069+ from urlparse import urlparse, urlunparse
3070+
3071
3072 CLOUD_ARCHIVE = """# Ubuntu Cloud Archive
3073 deb http://ubuntu-cloud.archive.canonical.com/ubuntu {} main
3074@@ -72,6 +74,7 @@
3075 FETCH_HANDLERS = (
3076 'charmhelpers.fetch.archiveurl.ArchiveUrlFetchHandler',
3077 'charmhelpers.fetch.bzrurl.BzrUrlFetchHandler',
3078+ 'charmhelpers.fetch.giturl.GitUrlFetchHandler',
3079 )
3080
3081 APT_NO_LOCK = 100 # The return code for "couldn't acquire lock" in APT.
3082@@ -148,7 +151,7 @@
3083 cmd = ['apt-get', '--assume-yes']
3084 cmd.extend(options)
3085 cmd.append('install')
3086- if isinstance(packages, basestring):
3087+ if isinstance(packages, six.string_types):
3088 cmd.append(packages)
3089 else:
3090 cmd.extend(packages)
3091@@ -181,7 +184,7 @@
3092 def apt_purge(packages, fatal=False):
3093 """Purge one or more packages"""
3094 cmd = ['apt-get', '--assume-yes', 'purge']
3095- if isinstance(packages, basestring):
3096+ if isinstance(packages, six.string_types):
3097 cmd.append(packages)
3098 else:
3099 cmd.extend(packages)
3100@@ -192,7 +195,7 @@
3101 def apt_hold(packages, fatal=False):
3102 """Hold one or more packages"""
3103 cmd = ['apt-mark', 'hold']
3104- if isinstance(packages, basestring):
3105+ if isinstance(packages, six.string_types):
3106 cmd.append(packages)
3107 else:
3108 cmd.extend(packages)
3109@@ -218,6 +221,7 @@
3110 pocket for the release.
3111 'cloud:' may be used to activate official cloud archive pockets,
3112 such as 'cloud:icehouse'
3113+ 'distro' may be used as a noop
3114
3115 @param key: A key to be added to the system's APT keyring and used
3116 to verify the signatures on packages. Ideally, this should be an
3117@@ -251,12 +255,14 @@
3118 release = lsb_release()['DISTRIB_CODENAME']
3119 with open('/etc/apt/sources.list.d/proposed.list', 'w') as apt:
3120 apt.write(PROPOSED_POCKET.format(release))
3121+ elif source == 'distro':
3122+ pass
3123 else:
3124- raise SourceConfigError("Unknown source: {!r}".format(source))
3125+ log("Unknown source: {!r}".format(source))
3126
3127 if key:
3128 if '-----BEGIN PGP PUBLIC KEY BLOCK-----' in key:
3129- with NamedTemporaryFile() as key_file:
3130+ with NamedTemporaryFile('w+') as key_file:
3131 key_file.write(key)
3132 key_file.flush()
3133 key_file.seek(0)
3134@@ -293,14 +299,14 @@
3135 sources = safe_load((config(sources_var) or '').strip()) or []
3136 keys = safe_load((config(keys_var) or '').strip()) or None
3137
3138- if isinstance(sources, basestring):
3139+ if isinstance(sources, six.string_types):
3140 sources = [sources]
3141
3142 if keys is None:
3143 for source in sources:
3144 add_source(source, None)
3145 else:
3146- if isinstance(keys, basestring):
3147+ if isinstance(keys, six.string_types):
3148 keys = [keys]
3149
3150 if len(sources) != len(keys):
3151@@ -397,7 +403,7 @@
3152 while result is None or result == APT_NO_LOCK:
3153 try:
3154 result = subprocess.check_call(cmd, env=env)
3155- except subprocess.CalledProcessError, e:
3156+ except subprocess.CalledProcessError as e:
3157 retry_count = retry_count + 1
3158 if retry_count > APT_NO_LOCK_RETRY_COUNT:
3159 raise
3160
3161=== modified file 'hooks/charmhelpers/fetch/archiveurl.py'
3162--- hooks/charmhelpers/fetch/archiveurl.py 2014-10-06 21:03:50 +0000
3163+++ hooks/charmhelpers/fetch/archiveurl.py 2014-12-11 17:56:50 +0000
3164@@ -1,8 +1,23 @@
3165 import os
3166-import urllib2
3167-from urllib import urlretrieve
3168-import urlparse
3169 import hashlib
3170+import re
3171+
3172+import six
3173+if six.PY3:
3174+ from urllib.request import (
3175+ build_opener, install_opener, urlopen, urlretrieve,
3176+ HTTPPasswordMgrWithDefaultRealm, HTTPBasicAuthHandler,
3177+ )
3178+ from urllib.parse import urlparse, urlunparse, parse_qs
3179+ from urllib.error import URLError
3180+else:
3181+ from urllib import urlretrieve
3182+ from urllib2 import (
3183+ build_opener, install_opener, urlopen,
3184+ HTTPPasswordMgrWithDefaultRealm, HTTPBasicAuthHandler,
3185+ URLError
3186+ )
3187+ from urlparse import urlparse, urlunparse, parse_qs
3188
3189 from charmhelpers.fetch import (
3190 BaseFetchHandler,
3191@@ -15,6 +30,24 @@
3192 from charmhelpers.core.host import mkdir, check_hash
3193
3194
3195+def splituser(host):
3196+ '''urllib.splituser(), but six's support of this seems broken'''
3197+ _userprog = re.compile('^(.*)@(.*)$')
3198+ match = _userprog.match(host)
3199+ if match:
3200+ return match.group(1, 2)
3201+ return None, host
3202+
3203+
3204+def splitpasswd(user):
3205+ '''urllib.splitpasswd(), but six's support of this is missing'''
3206+ _passwdprog = re.compile('^([^:]*):(.*)$', re.S)
3207+ match = _passwdprog.match(user)
3208+ if match:
3209+ return match.group(1, 2)
3210+ return user, None
3211+
3212+
3213 class ArchiveUrlFetchHandler(BaseFetchHandler):
3214 """
3215 Handler to download archive files from arbitrary URLs.
3216@@ -42,20 +75,20 @@
3217 """
3218 # propogate all exceptions
3219 # URLError, OSError, etc
3220- proto, netloc, path, params, query, fragment = urlparse.urlparse(source)
3221+ proto, netloc, path, params, query, fragment = urlparse(source)
3222 if proto in ('http', 'https'):
3223- auth, barehost = urllib2.splituser(netloc)
3224+ auth, barehost = splituser(netloc)
3225 if auth is not None:
3226- source = urlparse.urlunparse((proto, barehost, path, params, query, fragment))
3227- username, password = urllib2.splitpasswd(auth)
3228- passman = urllib2.HTTPPasswordMgrWithDefaultRealm()
3229+ source = urlunparse((proto, barehost, path, params, query, fragment))
3230+ username, password = splitpasswd(auth)
3231+ passman = HTTPPasswordMgrWithDefaultRealm()
3232 # Realm is set to None in add_password to force the username and password
3233 # to be used whatever the realm
3234 passman.add_password(None, source, username, password)
3235- authhandler = urllib2.HTTPBasicAuthHandler(passman)
3236- opener = urllib2.build_opener(authhandler)
3237- urllib2.install_opener(opener)
3238- response = urllib2.urlopen(source)
3239+ authhandler = HTTPBasicAuthHandler(passman)
3240+ opener = build_opener(authhandler)
3241+ install_opener(opener)
3242+ response = urlopen(source)
3243 try:
3244 with open(dest, 'w') as dest_file:
3245 dest_file.write(response.read())
3246@@ -91,17 +124,21 @@
3247 url_parts = self.parse_url(source)
3248 dest_dir = os.path.join(os.environ.get('CHARM_DIR'), 'fetched')
3249 if not os.path.exists(dest_dir):
3250- mkdir(dest_dir, perms=0755)
3251+ mkdir(dest_dir, perms=0o755)
3252 dld_file = os.path.join(dest_dir, os.path.basename(url_parts.path))
3253 try:
3254 self.download(source, dld_file)
3255- except urllib2.URLError as e:
3256+ except URLError as e:
3257 raise UnhandledSource(e.reason)
3258 except OSError as e:
3259 raise UnhandledSource(e.strerror)
3260- options = urlparse.parse_qs(url_parts.fragment)
3261+ options = parse_qs(url_parts.fragment)
3262 for key, value in options.items():
3263- if key in hashlib.algorithms:
3264+ if not six.PY3:
3265+ algorithms = hashlib.algorithms
3266+ else:
3267+ algorithms = hashlib.algorithms_available
3268+ if key in algorithms:
3269 check_hash(dld_file, value, key)
3270 if checksum:
3271 check_hash(dld_file, checksum, hash_type)
3272
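archiveurl.py replaces urllib2.splituser()/splitpasswd() with local helpers so URLs carrying basic-auth credentials keep working once the Python 3 urllib split is in use. The regexes in those helpers behave like this (credentials and host are made up):

    import re

    netloc = 'jenkins:s3cret@archive.example.com'
    # Same patterns as splituser() and splitpasswd() above.
    auth, barehost = re.match('^(.*)@(.*)$', netloc).group(1, 2)
    user, password = re.match('^([^:]*):(.*)$', auth, re.S).group(1, 2)
    print(barehost, user, password)     # archive.example.com jenkins s3cret
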
3273=== modified file 'hooks/charmhelpers/fetch/bzrurl.py'
3274--- hooks/charmhelpers/fetch/bzrurl.py 2014-07-28 14:41:41 +0000
3275+++ hooks/charmhelpers/fetch/bzrurl.py 2014-12-11 17:56:50 +0000
3276@@ -5,6 +5,10 @@
3277 )
3278 from charmhelpers.core.host import mkdir
3279
3280+import six
3281+if six.PY3:
3282+ raise ImportError('bzrlib does not support Python3')
3283+
3284 try:
3285 from bzrlib.branch import Branch
3286 except ImportError:
3287@@ -42,7 +46,7 @@
3288 dest_dir = os.path.join(os.environ.get('CHARM_DIR'), "fetched",
3289 branch_name)
3290 if not os.path.exists(dest_dir):
3291- mkdir(dest_dir, perms=0755)
3292+ mkdir(dest_dir, perms=0o755)
3293 try:
3294 self.branch(source, dest_dir)
3295 except OSError as e:
3296
3297=== added file 'hooks/charmhelpers/fetch/giturl.py'
3298--- hooks/charmhelpers/fetch/giturl.py 1970-01-01 00:00:00 +0000
3299+++ hooks/charmhelpers/fetch/giturl.py 2014-12-11 17:56:50 +0000
3300@@ -0,0 +1,51 @@
3301+import os
3302+from charmhelpers.fetch import (
3303+ BaseFetchHandler,
3304+ UnhandledSource
3305+)
3306+from charmhelpers.core.host import mkdir
3307+
3308+import six
3309+if six.PY3:
3310+ raise ImportError('GitPython does not support Python 3')
3311+
3312+try:
3313+ from git import Repo
3314+except ImportError:
3315+ from charmhelpers.fetch import apt_install
3316+ apt_install("python-git")
3317+ from git import Repo
3318+
3319+
3320+class GitUrlFetchHandler(BaseFetchHandler):
3321+ """Handler for git branches via generic and github URLs"""
3322+ def can_handle(self, source):
3323+ url_parts = self.parse_url(source)
3324+ # TODO (mattyw) no support for ssh git@ yet
3325+ if url_parts.scheme not in ('http', 'https', 'git'):
3326+ return False
3327+ else:
3328+ return True
3329+
3330+ def clone(self, source, dest, branch):
3331+ if not self.can_handle(source):
3332+ raise UnhandledSource("Cannot handle {}".format(source))
3333+
3334+ repo = Repo.clone_from(source, dest)
3335+ repo.git.checkout(branch)
3336+
3337+ def install(self, source, branch="master", dest=None):
3338+ url_parts = self.parse_url(source)
3339+ branch_name = url_parts.path.strip("/").split("/")[-1]
3340+ if dest:
3341+ dest_dir = os.path.join(dest, branch_name)
3342+ else:
3343+ dest_dir = os.path.join(os.environ.get('CHARM_DIR'), "fetched",
3344+ branch_name)
3345+ if not os.path.exists(dest_dir):
3346+ mkdir(dest_dir, perms=0o755)
3347+ try:
3348+ self.clone(source, dest_dir, branch)
3349+ except OSError as e:
3350+ raise UnhandledSource(e.strerror)
3351+ return dest_dir
3352
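The new GitUrlFetchHandler clones http/https/git URLs with GitPython (ssh URLs are explicitly unsupported for now) into $CHARM_DIR/fetched/<name> or an explicit dest, then checks out the requested branch; install_remote() reaches it through the FETCH_HANDLERS registration above. A hedged usage sketch on Python 2 (the module raises ImportError on Python 3); the repository URL, branch and destination are example values:

    from charmhelpers.fetch.giturl import GitUrlFetchHandler

    handler = GitUrlFetchHandler()
    if handler.can_handle('https://github.com/openstack/nova'):
        path = handler.install('https://github.com/openstack/nova',
                               branch='stable/juno', dest='/tmp/git-demo')
        print(path)    # /tmp/git-demo/nova
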
3353=== modified file 'tests/charmhelpers/__init__.py'
3354--- tests/charmhelpers/__init__.py 2014-07-11 02:43:50 +0000
3355+++ tests/charmhelpers/__init__.py 2014-12-11 17:56:50 +0000
3356@@ -0,0 +1,22 @@
3357+# Bootstrap charm-helpers, installing its dependencies if necessary using
3358+# only standard libraries.
3359+import subprocess
3360+import sys
3361+
3362+try:
3363+ import six # flake8: noqa
3364+except ImportError:
3365+ if sys.version_info.major == 2:
3366+ subprocess.check_call(['apt-get', 'install', '-y', 'python-six'])
3367+ else:
3368+ subprocess.check_call(['apt-get', 'install', '-y', 'python3-six'])
3369+ import six # flake8: noqa
3370+
3371+try:
3372+ import yaml # flake8: noqa
3373+except ImportError:
3374+ if sys.version_info.major == 2:
3375+ subprocess.check_call(['apt-get', 'install', '-y', 'python-yaml'])
3376+ else:
3377+ subprocess.check_call(['apt-get', 'install', '-y', 'python3-yaml'])
3378+ import yaml # flake8: noqa
3379
3380=== modified file 'tests/charmhelpers/contrib/amulet/deployment.py'
3381--- tests/charmhelpers/contrib/amulet/deployment.py 2014-10-06 21:03:50 +0000
3382+++ tests/charmhelpers/contrib/amulet/deployment.py 2014-12-11 17:56:50 +0000
3383@@ -1,6 +1,6 @@
3384 import amulet
3385-
3386 import os
3387+import six
3388
3389
3390 class AmuletDeployment(object):
3391@@ -52,12 +52,12 @@
3392
3393 def _add_relations(self, relations):
3394 """Add all of the relations for the services."""
3395- for k, v in relations.iteritems():
3396+ for k, v in six.iteritems(relations):
3397 self.d.relate(k, v)
3398
3399 def _configure_services(self, configs):
3400 """Configure all of the services."""
3401- for service, config in configs.iteritems():
3402+ for service, config in six.iteritems(configs):
3403 self.d.configure(service, config)
3404
3405 def _deploy(self):
3406
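dict.iteritems() no longer exists on Python 3, so the amulet test helpers switch to six.iteritems, which returns the lazy iterator on Python 2 and falls back to .items() on Python 3. A tiny, self-contained illustration (the relation mapping below is made up):

    import six

    relations = {'nova-cloud-controller:shared-db': 'mysql:shared-db'}
    for key, value in six.iteritems(relations):
        print(key, value)
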
3407=== modified file 'tests/charmhelpers/contrib/amulet/utils.py'
3408--- tests/charmhelpers/contrib/amulet/utils.py 2014-07-30 15:20:13 +0000
3409+++ tests/charmhelpers/contrib/amulet/utils.py 2014-12-11 17:56:50 +0000
3410@@ -5,6 +5,8 @@
3411 import sys
3412 import time
3413
3414+import six
3415+
3416
3417 class AmuletUtils(object):
3418 """Amulet utilities.
3419@@ -58,7 +60,7 @@
3420 Verify the specified services are running on the corresponding
3421 service units.
3422 """
3423- for k, v in commands.iteritems():
3424+ for k, v in six.iteritems(commands):
3425 for cmd in v:
3426 output, code = k.run(cmd)
3427 if code != 0:
3428@@ -100,11 +102,11 @@
3429 longs, or can be a function that evaluate a variable and returns a
3430 bool.
3431 """
3432- for k, v in expected.iteritems():
3433+ for k, v in six.iteritems(expected):
3434 if k in actual:
3435- if (isinstance(v, basestring) or
3436+ if (isinstance(v, six.string_types) or
3437 isinstance(v, bool) or
3438- isinstance(v, (int, long))):
3439+ isinstance(v, six.integer_types)):
3440 if v != actual[k]:
3441 return "{}:{}".format(k, actual[k])
3442 elif not v(actual[k]):
3443
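The validation helper also drops the Python 2-only basestring and long names in favour of six.string_types and six.integer_types. A short sketch of the same version-agnostic type check (illustrative helper, not from the branch):

    import six

    def is_scalar(value):
        # Nested tuples are valid classinfo for isinstance(); six supplies the
        # per-version type tuples (str/unicode vs str, int/long vs int).
        return isinstance(value, (six.string_types, six.integer_types, bool))

    assert is_scalar('cinder') and is_scalar(42) and not is_scalar([1, 2])
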
3444=== modified file 'tests/charmhelpers/contrib/openstack/amulet/deployment.py'
3445--- tests/charmhelpers/contrib/openstack/amulet/deployment.py 2014-10-06 21:03:50 +0000
3446+++ tests/charmhelpers/contrib/openstack/amulet/deployment.py 2014-12-11 17:56:50 +0000
3447@@ -1,3 +1,4 @@
3448+import six
3449 from charmhelpers.contrib.amulet.deployment import (
3450 AmuletDeployment
3451 )
3452@@ -69,7 +70,7 @@
3453
3454 def _configure_services(self, configs):
3455 """Configure all of the services."""
3456- for service, config in configs.iteritems():
3457+ for service, config in six.iteritems(configs):
3458 self.d.configure(service, config)
3459
3460 def _get_openstack_release(self):
3461
3462=== modified file 'tests/charmhelpers/contrib/openstack/amulet/utils.py'
3463--- tests/charmhelpers/contrib/openstack/amulet/utils.py 2014-10-06 21:03:50 +0000
3464+++ tests/charmhelpers/contrib/openstack/amulet/utils.py 2014-12-11 17:56:50 +0000
3465@@ -7,6 +7,8 @@
3466 import keystoneclient.v2_0 as keystone_client
3467 import novaclient.v1_1.client as nova_client
3468
3469+import six
3470+
3471 from charmhelpers.contrib.amulet.utils import (
3472 AmuletUtils
3473 )
3474@@ -60,7 +62,7 @@
3475 expected service catalog endpoints.
3476 """
3477 self.log.debug('actual: {}'.format(repr(actual)))
3478- for k, v in expected.iteritems():
3479+ for k, v in six.iteritems(expected):
3480 if k in actual:
3481 ret = self._validate_dict_data(expected[k][0], actual[k][0])
3482 if ret:
