Merge lp:~corey.bryant/charms/trusty/openstack-dashboard/contrib.python.packages into lp:~openstack-charmers-archive/charms/trusty/openstack-dashboard/next

Proposed by Corey Bryant
Status: Merged
Merged at revision: 43
Proposed branch: lp:~corey.bryant/charms/trusty/openstack-dashboard/contrib.python.packages
Merge into: lp:~openstack-charmers-archive/charms/trusty/openstack-dashboard/next
Diff against target: 3370 lines (+1145/-525)
27 files modified
charm-helpers.yaml (+1/-0)
hooks/charmhelpers/__init__.py (+22/-0)
hooks/charmhelpers/contrib/hahelpers/cluster.py (+16/-7)
hooks/charmhelpers/contrib/network/ip.py (+59/-51)
hooks/charmhelpers/contrib/openstack/amulet/deployment.py (+2/-1)
hooks/charmhelpers/contrib/openstack/amulet/utils.py (+3/-1)
hooks/charmhelpers/contrib/openstack/context.py (+378/-228)
hooks/charmhelpers/contrib/openstack/ip.py (+41/-27)
hooks/charmhelpers/contrib/openstack/neutron.py (+20/-4)
hooks/charmhelpers/contrib/openstack/templating.py (+5/-5)
hooks/charmhelpers/contrib/openstack/utils.py (+148/-13)
hooks/charmhelpers/contrib/python/packages.py (+77/-0)
hooks/charmhelpers/contrib/storage/linux/ceph.py (+89/-102)
hooks/charmhelpers/contrib/storage/linux/loopback.py (+4/-4)
hooks/charmhelpers/contrib/storage/linux/lvm.py (+1/-0)
hooks/charmhelpers/contrib/storage/linux/utils.py (+3/-2)
hooks/charmhelpers/core/fstab.py (+10/-8)
hooks/charmhelpers/core/hookenv.py (+41/-15)
hooks/charmhelpers/core/host.py (+51/-20)
hooks/charmhelpers/core/services/__init__.py (+2/-2)
hooks/charmhelpers/core/services/helpers.py (+9/-5)
hooks/charmhelpers/core/sysctl.py (+34/-0)
hooks/charmhelpers/core/templating.py (+2/-1)
hooks/charmhelpers/fetch/__init__.py (+18/-12)
hooks/charmhelpers/fetch/archiveurl.py (+53/-16)
hooks/charmhelpers/fetch/bzrurl.py (+5/-1)
hooks/charmhelpers/fetch/giturl.py (+51/-0)
To merge this branch: bzr merge lp:~corey.bryant/charms/trusty/openstack-dashboard/contrib.python.packages
Reviewer: OpenStack Charmers (status: Pending)
Review via email: mp+244332@code.launchpad.net
Revision history for this message
uosci-testing-bot (uosci-testing-bot) wrote :

charm_lint_check #160 openstack-dashboard-next for corey.bryant mp244332
    LINT OK: passed

Build: http://10.230.18.80:8080/job/charm_lint_check/160/

Revision history for this message
uosci-testing-bot (uosci-testing-bot) wrote :

charm_unit_test #123 openstack-dashboard-next for corey.bryant mp244332
    UNIT FAIL: unit-test failed

UNIT Results (max last 2 lines):
  FAILED (errors=3)
  make: *** [unit_test] Error 1

Full unit test output: pastebin not avail., cmd error
Build: http://10.230.18.80:8080/job/charm_unit_test/123/

Revision history for this message
uosci-testing-bot (uosci-testing-bot) wrote :

charm_lint_check #177 openstack-dashboard-next for corey.bryant mp244332
    LINT OK: passed

Build: http://10.230.18.80:8080/job/charm_lint_check/177/

Revision history for this message
uosci-testing-bot (uosci-testing-bot) wrote :

charm_unit_test #140 openstack-dashboard-next for corey.bryant mp244332
    UNIT OK: passed

Build: http://10.230.18.80:8080/job/charm_unit_test/140/

Revision history for this message
uosci-testing-bot (uosci-testing-bot) wrote :

charm_amulet_test #89 openstack-dashboard-next for corey.bryant mp244332
    AMULET FAIL: amulet-test missing

AMULET Results (max last 2 lines):
INFO:root:Search string not found in makefile target commands.
ERROR:root:No make target was executed.

Full amulet test output: pastebin not avail., cmd error
Build: http://10.230.18.80:8080/job/charm_amulet_test/89/

Revision history for this message
uosci-testing-bot (uosci-testing-bot) wrote :

charm_lint_check #200 openstack-dashboard-next for corey.bryant mp244332
    LINT OK: passed

Build: http://10.230.18.80:8080/job/charm_lint_check/200/

Revision history for this message
uosci-testing-bot (uosci-testing-bot) wrote :

charm_unit_test #163 openstack-dashboard-next for corey.bryant mp244332
    UNIT OK: passed

Build: http://10.230.18.80:8080/job/charm_unit_test/163/

Revision history for this message
uosci-testing-bot (uosci-testing-bot) wrote :

charm_amulet_test #118 openstack-dashboard-next for corey.bryant mp244332
    AMULET FAIL: amulet-test missing

AMULET Results (max last 2 lines):
INFO:root:Search string not found in makefile target commands.
ERROR:root:No make target was executed.

Full amulet test output: pastebin not avail., cmd error
Build: http://10.230.18.80:8080/job/charm_amulet_test/118/

Revision history for this message
uosci-testing-bot (uosci-testing-bot) wrote :

charm_lint_check #219 openstack-dashboard-next for corey.bryant mp244332
    LINT OK: passed

Build: http://10.230.18.80:8080/job/charm_lint_check/219/

Revision history for this message
uosci-testing-bot (uosci-testing-bot) wrote :

charm_unit_test #182 openstack-dashboard-next for corey.bryant mp244332
    UNIT OK: passed

Build: http://10.230.18.80:8080/job/charm_unit_test/182/

Revision history for this message
uosci-testing-bot (uosci-testing-bot) wrote :

charm_amulet_test #136 openstack-dashboard-next for corey.bryant mp244332
    AMULET FAIL: amulet-test missing

AMULET Results (max last 2 lines):
INFO:root:Search string not found in makefile target commands.
ERROR:root:No make target was executed.

Full amulet test output: pastebin not avail., cmd error
Build: http://10.230.18.80:8080/job/charm_amulet_test/136/

Preview Diff

1=== modified file 'charm-helpers.yaml'
2--- charm-helpers.yaml 2014-09-12 10:55:03 +0000
3+++ charm-helpers.yaml 2014-12-11 17:57:05 +0000
4@@ -8,4 +8,5 @@
5 - contrib.hahelpers
6 - contrib.storage
7 - contrib.network.ip
8+ - contrib.python.packages
9 - payload.execd
10
11=== added file 'hooks/charmhelpers/__init__.py'
12--- hooks/charmhelpers/__init__.py 1970-01-01 00:00:00 +0000
13+++ hooks/charmhelpers/__init__.py 2014-12-11 17:57:05 +0000
14@@ -0,0 +1,22 @@
15+# Bootstrap charm-helpers, installing its dependencies if necessary using
16+# only standard libraries.
17+import subprocess
18+import sys
19+
20+try:
21+ import six # flake8: noqa
22+except ImportError:
23+ if sys.version_info.major == 2:
24+ subprocess.check_call(['apt-get', 'install', '-y', 'python-six'])
25+ else:
26+ subprocess.check_call(['apt-get', 'install', '-y', 'python3-six'])
27+ import six # flake8: noqa
28+
29+try:
30+ import yaml # flake8: noqa
31+except ImportError:
32+ if sys.version_info.major == 2:
33+ subprocess.check_call(['apt-get', 'install', '-y', 'python-yaml'])
34+ else:
35+ subprocess.check_call(['apt-get', 'install', '-y', 'python3-yaml'])
36+ import yaml # flake8: noqa
37
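The new hooks/charmhelpers/__init__.py above bootstraps six and yaml by catching the ImportError and apt-installing the matching distro package before retrying. The same pattern can be sketched as a generic helper; `ensure_module` and its package-name arguments are illustrative, not part of charm-helpers:

```python
import subprocess
import sys


def ensure_module(module_name, py2_pkg, py3_pkg):
    """Import module_name, apt-installing the distro package on failure.

    Mirrors the try/except ImportError bootstrap in the diff: pick the
    python- or python3- package based on the running interpreter, install
    it, then retry the import. Assumes apt-get is available, as the
    original does.
    """
    try:
        return __import__(module_name)
    except ImportError:
        pkg = py2_pkg if sys.version_info.major == 2 else py3_pkg
        subprocess.check_call(['apt-get', 'install', '-y', pkg])
        return __import__(module_name)
```

Using only the standard library here matters: this file runs before any dependencies are guaranteed to exist, so it cannot rely on charmhelpers.fetch.apt_install.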
38=== removed file 'hooks/charmhelpers/__init__.py'
39=== modified file 'hooks/charmhelpers/contrib/hahelpers/cluster.py'
40--- hooks/charmhelpers/contrib/hahelpers/cluster.py 2014-10-01 13:53:14 +0000
41+++ hooks/charmhelpers/contrib/hahelpers/cluster.py 2014-12-11 17:57:05 +0000
42@@ -13,9 +13,10 @@
43
44 import subprocess
45 import os
46-
47 from socket import gethostname as get_unit_hostname
48
49+import six
50+
51 from charmhelpers.core.hookenv import (
52 log,
53 relation_ids,
54@@ -77,7 +78,7 @@
55 "show", resource
56 ]
57 try:
58- status = subprocess.check_output(cmd)
59+ status = subprocess.check_output(cmd).decode('UTF-8')
60 except subprocess.CalledProcessError:
61 return False
62 else:
63@@ -150,34 +151,42 @@
64 return False
65
66
67-def determine_api_port(public_port):
68+def determine_api_port(public_port, singlenode_mode=False):
69 '''
70 Determine correct API server listening port based on
71 existence of HTTPS reverse proxy and/or haproxy.
72
73 public_port: int: standard public port for given service
74
75+ singlenode_mode: boolean: Shuffle ports when only a single unit is present
76+
77 returns: int: the correct listening port for the API service
78 '''
79 i = 0
80- if len(peer_units()) > 0 or is_clustered():
81+ if singlenode_mode:
82+ i += 1
83+ elif len(peer_units()) > 0 or is_clustered():
84 i += 1
85 if https():
86 i += 1
87 return public_port - (i * 10)
88
89
90-def determine_apache_port(public_port):
91+def determine_apache_port(public_port, singlenode_mode=False):
92 '''
93 Description: Determine correct apache listening port based on public IP +
94 state of the cluster.
95
96 public_port: int: standard public port for given service
97
98+ singlenode_mode: boolean: Shuffle ports when only a single unit is present
99+
100 returns: int: the correct listening port for the HAProxy service
101 '''
102 i = 0
103- if len(peer_units()) > 0 or is_clustered():
104+ if singlenode_mode:
105+ i += 1
106+ elif len(peer_units()) > 0 or is_clustered():
107 i += 1
108 return public_port - (i * 10)
109
110@@ -197,7 +206,7 @@
111 for setting in settings:
112 conf[setting] = config_get(setting)
113 missing = []
114- [missing.append(s) for s, v in conf.iteritems() if v is None]
115+ [missing.append(s) for s, v in six.iteritems(conf) if v is None]
116 if missing:
117 log('Insufficient config data to configure hacluster.', level=ERROR)
118 raise HAIncompleteConfig
119
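The determine_api_port / determine_apache_port changes above add a singlenode_mode flag to the port-shuffling arithmetic: each proxy layer in front of the API service pushes its listen port down one step of 10 from the public port. A pure-function sketch, with booleans standing in for the charm-state lookups (peer_units(), is_clustered(), https()) that the real helper performs:

```python
def determine_api_port(public_port, singlenode_mode=False,
                       clustered=False, https=False):
    """Return the port the API service itself should listen on.

    clustered/https are illustrative stand-ins for the charm helpers'
    runtime checks; the offset arithmetic matches the diff.
    """
    i = 0
    if singlenode_mode or clustered:
        i += 1  # haproxy fronts the service even with a single unit
    if https:
        i += 1  # apache SSL termination adds another layer
    return public_port - (i * 10)
```

So with singlenode_mode and https both set, a public port of 9696 yields a backend listening on 9676, with haproxy on 9686 and apache on 9696.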
120=== modified file 'hooks/charmhelpers/contrib/network/ip.py'
121--- hooks/charmhelpers/contrib/network/ip.py 2014-10-01 13:53:14 +0000
122+++ hooks/charmhelpers/contrib/network/ip.py 2014-12-11 17:57:05 +0000
123@@ -1,15 +1,12 @@
124 import glob
125 import re
126 import subprocess
127-import sys
128
129 from functools import partial
130
131 from charmhelpers.core.hookenv import unit_get
132 from charmhelpers.fetch import apt_install
133 from charmhelpers.core.hookenv import (
134- WARNING,
135- ERROR,
136 log
137 )
138
139@@ -34,29 +31,28 @@
140 network)
141
142
143+def no_ip_found_error_out(network):
144+ errmsg = ("No IP address found in network: %s" % network)
145+ raise ValueError(errmsg)
146+
147+
148 def get_address_in_network(network, fallback=None, fatal=False):
149- """
150- Get an IPv4 or IPv6 address within the network from the host.
151+ """Get an IPv4 or IPv6 address within the network from the host.
152
153 :param network (str): CIDR presentation format. For example,
154 '192.168.1.0/24'.
155 :param fallback (str): If no address is found, return fallback.
156 :param fatal (boolean): If no address is found, fallback is not
157 set and fatal is True then exit(1).
158-
159 """
160-
161- def not_found_error_out():
162- log("No IP address found in network: %s" % network,
163- level=ERROR)
164- sys.exit(1)
165-
166 if network is None:
167 if fallback is not None:
168 return fallback
169+
170+ if fatal:
171+ no_ip_found_error_out(network)
172 else:
173- if fatal:
174- not_found_error_out()
175+ return None
176
177 _validate_cidr(network)
178 network = netaddr.IPNetwork(network)
179@@ -68,6 +64,7 @@
180 cidr = netaddr.IPNetwork("%s/%s" % (addr, netmask))
181 if cidr in network:
182 return str(cidr.ip)
183+
184 if network.version == 6 and netifaces.AF_INET6 in addresses:
185 for addr in addresses[netifaces.AF_INET6]:
186 if not addr['addr'].startswith('fe80'):
187@@ -80,20 +77,20 @@
188 return fallback
189
190 if fatal:
191- not_found_error_out()
192+ no_ip_found_error_out(network)
193
194 return None
195
196
197 def is_ipv6(address):
198- '''Determine whether provided address is IPv6 or not'''
199+ """Determine whether provided address is IPv6 or not."""
200 try:
201 address = netaddr.IPAddress(address)
202 except netaddr.AddrFormatError:
203 # probably a hostname - so not an address at all!
204 return False
205- else:
206- return address.version == 6
207+
208+ return address.version == 6
209
210
211 def is_address_in_network(network, address):
212@@ -111,11 +108,13 @@
213 except (netaddr.core.AddrFormatError, ValueError):
214 raise ValueError("Network (%s) is not in CIDR presentation format" %
215 network)
216+
217 try:
218 address = netaddr.IPAddress(address)
219 except (netaddr.core.AddrFormatError, ValueError):
220 raise ValueError("Address (%s) is not in correct presentation format" %
221 address)
222+
223 if address in network:
224 return True
225 else:
226@@ -138,57 +137,63 @@
227 if address.version == 4 and netifaces.AF_INET in addresses:
228 addr = addresses[netifaces.AF_INET][0]['addr']
229 netmask = addresses[netifaces.AF_INET][0]['netmask']
230- cidr = netaddr.IPNetwork("%s/%s" % (addr, netmask))
231+ network = netaddr.IPNetwork("%s/%s" % (addr, netmask))
232+ cidr = network.cidr
233 if address in cidr:
234 if key == 'iface':
235 return iface
236 else:
237 return addresses[netifaces.AF_INET][0][key]
238+
239 if address.version == 6 and netifaces.AF_INET6 in addresses:
240 for addr in addresses[netifaces.AF_INET6]:
241 if not addr['addr'].startswith('fe80'):
242- cidr = netaddr.IPNetwork("%s/%s" % (addr['addr'],
243- addr['netmask']))
244+ network = netaddr.IPNetwork("%s/%s" % (addr['addr'],
245+ addr['netmask']))
246+ cidr = network.cidr
247 if address in cidr:
248 if key == 'iface':
249 return iface
250+ elif key == 'netmask' and cidr:
251+ return str(cidr).split('/')[1]
252 else:
253 return addr[key]
254+
255 return None
256
257
258 get_iface_for_address = partial(_get_for_address, key='iface')
259
260+
261 get_netmask_for_address = partial(_get_for_address, key='netmask')
262
263
264 def format_ipv6_addr(address):
265- """
266- IPv6 needs to be wrapped with [] in url link to parse correctly.
267+ """If address is IPv6, wrap it in '[]' otherwise return None.
268+
269+ This is required by most configuration files when specifying IPv6
270+ addresses.
271 """
272 if is_ipv6(address):
273- address = "[%s]" % address
274- else:
275- log("Not a valid ipv6 address: %s" % address, level=WARNING)
276- address = None
277+ return "[%s]" % address
278
279- return address
280+ return None
281
282
283 def get_iface_addr(iface='eth0', inet_type='AF_INET', inc_aliases=False,
284 fatal=True, exc_list=None):
285- """
286- Return the assigned IP address for a given interface, if any, or [].
287- """
288+ """Return the assigned IP address for a given interface, if any."""
289 # Extract nic if passed /dev/ethX
290 if '/' in iface:
291 iface = iface.split('/')[-1]
292+
293 if not exc_list:
294 exc_list = []
295+
296 try:
297 inet_num = getattr(netifaces, inet_type)
298 except AttributeError:
299- raise Exception('Unknown inet type ' + str(inet_type))
300+ raise Exception("Unknown inet type '%s'" % str(inet_type))
301
302 interfaces = netifaces.interfaces()
303 if inc_aliases:
304@@ -196,15 +201,18 @@
305 for _iface in interfaces:
306 if iface == _iface or _iface.split(':')[0] == iface:
307 ifaces.append(_iface)
308+
309 if fatal and not ifaces:
310 raise Exception("Invalid interface '%s'" % iface)
311+
312 ifaces.sort()
313 else:
314 if iface not in interfaces:
315 if fatal:
316- raise Exception("%s not found " % (iface))
317+ raise Exception("Interface '%s' not found " % (iface))
318 else:
319 return []
320+
321 else:
322 ifaces = [iface]
323
324@@ -215,10 +223,13 @@
325 for entry in net_info[inet_num]:
326 if 'addr' in entry and entry['addr'] not in exc_list:
327 addresses.append(entry['addr'])
328+
329 if fatal and not addresses:
330 raise Exception("Interface '%s' doesn't have any %s addresses." %
331 (iface, inet_type))
332- return addresses
333+
334+ return sorted(addresses)
335+
336
337 get_ipv4_addr = partial(get_iface_addr, inet_type='AF_INET')
338
339@@ -235,6 +246,7 @@
340 raw = re.match(ll_key, _addr)
341 if raw:
342 _addr = raw.group(1)
343+
344 if _addr == addr:
345 log("Address '%s' is configured on iface '%s'" %
346 (addr, iface))
347@@ -245,8 +257,9 @@
348
349
350 def sniff_iface(f):
351- """If no iface provided, inject net iface inferred from unit private
352- address.
353+ """Ensure decorated function is called with a value for iface.
354+
355+ If no iface provided, inject net iface inferred from unit private address.
356 """
357 def iface_sniffer(*args, **kwargs):
358 if not kwargs.get('iface', None):
359@@ -289,7 +302,7 @@
360 if global_addrs:
361 # Make sure any found global addresses are not temporary
362 cmd = ['ip', 'addr', 'show', iface]
363- out = subprocess.check_output(cmd)
364+ out = subprocess.check_output(cmd).decode('UTF-8')
365 if dynamic_only:
366 key = re.compile("inet6 (.+)/[0-9]+ scope global dynamic.*")
367 else:
368@@ -311,33 +324,28 @@
369 return addrs
370
371 if fatal:
372- raise Exception("Interface '%s' doesn't have a scope global "
373+ raise Exception("Interface '%s' does not have a scope global "
374 "non-temporary ipv6 address." % iface)
375
376 return []
377
378
379 def get_bridges(vnic_dir='/sys/devices/virtual/net'):
380- """
381- Return a list of bridges on the system or []
382- """
383- b_rgex = vnic_dir + '/*/bridge'
384- return [x.replace(vnic_dir, '').split('/')[1] for x in glob.glob(b_rgex)]
385+ """Return a list of bridges on the system."""
386+ b_regex = "%s/*/bridge" % vnic_dir
387+ return [x.replace(vnic_dir, '').split('/')[1] for x in glob.glob(b_regex)]
388
389
390 def get_bridge_nics(bridge, vnic_dir='/sys/devices/virtual/net'):
391- """
392- Return a list of nics comprising a given bridge on the system or []
393- """
394- brif_rgex = "%s/%s/brif/*" % (vnic_dir, bridge)
395- return [x.split('/')[-1] for x in glob.glob(brif_rgex)]
396+ """Return a list of nics comprising a given bridge on the system."""
397+ brif_regex = "%s/%s/brif/*" % (vnic_dir, bridge)
398+ return [x.split('/')[-1] for x in glob.glob(brif_regex)]
399
400
401 def is_bridge_member(nic):
402- """
403- Check if a given nic is a member of a bridge
404- """
405+ """Check if a given nic is a member of a bridge."""
406 for bridge in get_bridges():
407 if nic in get_bridge_nics(bridge):
408 return True
409+
410 return False
411
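The revised format_ipv6_addr above now silently returns None for non-IPv6 input instead of logging a warning, which lets callers write `format_ipv6_addr(host) or host`. Its contract can be sketched with the standard library ipaddress module (the original uses netaddr; this stand-in is an assumption for illustration):

```python
import ipaddress


def format_ipv6_addr(address):
    """Wrap an IPv6 address in '[]' as config files require; else None."""
    try:
        if ipaddress.ip_address(address).version == 6:
            return "[%s]" % address
    except ValueError:
        pass  # probably a hostname, so not an address at all
    return None
```

The `or host` fallback idiom used throughout the diff means IPv4 addresses and hostnames pass through unchanged while IPv6 addresses get bracketed.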
412=== modified file 'hooks/charmhelpers/contrib/openstack/amulet/deployment.py'
413--- hooks/charmhelpers/contrib/openstack/amulet/deployment.py 2014-10-01 13:53:14 +0000
414+++ hooks/charmhelpers/contrib/openstack/amulet/deployment.py 2014-12-11 17:57:05 +0000
415@@ -1,3 +1,4 @@
416+import six
417 from charmhelpers.contrib.amulet.deployment import (
418 AmuletDeployment
419 )
420@@ -69,7 +70,7 @@
421
422 def _configure_services(self, configs):
423 """Configure all of the services."""
424- for service, config in configs.iteritems():
425+ for service, config in six.iteritems(configs):
426 self.d.configure(service, config)
427
428 def _get_openstack_release(self):
429
430=== modified file 'hooks/charmhelpers/contrib/openstack/amulet/utils.py'
431--- hooks/charmhelpers/contrib/openstack/amulet/utils.py 2014-10-01 13:53:14 +0000
432+++ hooks/charmhelpers/contrib/openstack/amulet/utils.py 2014-12-11 17:57:05 +0000
433@@ -7,6 +7,8 @@
434 import keystoneclient.v2_0 as keystone_client
435 import novaclient.v1_1.client as nova_client
436
437+import six
438+
439 from charmhelpers.contrib.amulet.utils import (
440 AmuletUtils
441 )
442@@ -60,7 +62,7 @@
443 expected service catalog endpoints.
444 """
445 self.log.debug('actual: {}'.format(repr(actual)))
446- for k, v in expected.iteritems():
447+ for k, v in six.iteritems(expected):
448 if k in actual:
449 ret = self._validate_dict_data(expected[k][0], actual[k][0])
450 if ret:
451
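A pattern repeated across cluster.py, deployment.py, and utils.py above is replacing `dict.iteritems()` with `six.iteritems(dict)`: on Python 2 this keeps iteration lazy, and on Python 3, where `iteritems()` no longer exists, it falls back to `items()`. A minimal sketch of the migrated idiom, with a stand-in shim in case six is absent:

```python
try:
    from six import iteritems  # preferred, as used throughout the diff
except ImportError:
    def iteritems(d):
        """Minimal stand-in with the same call shape as six.iteritems."""
        return iter(d.items())


def missing_keys(conf):
    """Mirror of the unset-config check in get_hacluster_config above."""
    return sorted(k for k, v in iteritems(conf) if v is None)
```

Plain `.items()` would also work everywhere, but materialises a full list on Python 2; `six.iteritems()` preserves the original lazy behaviour on both interpreters.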
452=== modified file 'hooks/charmhelpers/contrib/openstack/context.py'
453--- hooks/charmhelpers/contrib/openstack/context.py 2014-10-01 13:53:14 +0000
454+++ hooks/charmhelpers/contrib/openstack/context.py 2014-12-11 17:57:05 +0000
455@@ -1,20 +1,18 @@
456 import json
457 import os
458 import time
459-
460 from base64 import b64decode
461+from subprocess import check_call
462
463-from subprocess import (
464- check_call
465-)
466+import six
467
468 from charmhelpers.fetch import (
469 apt_install,
470 filter_installed_packages,
471 )
472-
473 from charmhelpers.core.hookenv import (
474 config,
475+ is_relation_made,
476 local_unit,
477 log,
478 relation_get,
479@@ -23,40 +21,40 @@
480 relation_set,
481 unit_get,
482 unit_private_ip,
483+ DEBUG,
484+ INFO,
485+ WARNING,
486 ERROR,
487- INFO
488 )
489-
490 from charmhelpers.core.host import (
491 mkdir,
492- write_file
493+ write_file,
494 )
495-
496 from charmhelpers.contrib.hahelpers.cluster import (
497 determine_apache_port,
498 determine_api_port,
499 https,
500- is_clustered
501+ is_clustered,
502 )
503-
504 from charmhelpers.contrib.hahelpers.apache import (
505 get_cert,
506 get_ca_cert,
507 install_ca_cert,
508 )
509-
510 from charmhelpers.contrib.openstack.neutron import (
511 neutron_plugin_attribute,
512 )
513-
514 from charmhelpers.contrib.network.ip import (
515 get_address_in_network,
516 get_ipv6_addr,
517+ get_netmask_for_address,
518 format_ipv6_addr,
519- is_address_in_network
520+ is_address_in_network,
521 )
522+from charmhelpers.contrib.openstack.utils import get_host_ip
523
524 CA_CERT_PATH = '/usr/local/share/ca-certificates/keystone_juju_ca_cert.crt'
525+ADDRESS_TYPES = ['admin', 'internal', 'public']
526
527
528 class OSContextError(Exception):
529@@ -64,7 +62,7 @@
530
531
532 def ensure_packages(packages):
533- '''Install but do not upgrade required plugin packages'''
534+ """Install but do not upgrade required plugin packages."""
535 required = filter_installed_packages(packages)
536 if required:
537 apt_install(required, fatal=True)
538@@ -72,20 +70,27 @@
539
540 def context_complete(ctxt):
541 _missing = []
542- for k, v in ctxt.iteritems():
543+ for k, v in six.iteritems(ctxt):
544 if v is None or v == '':
545 _missing.append(k)
546+
547 if _missing:
548- log('Missing required data: %s' % ' '.join(_missing), level='INFO')
549+ log('Missing required data: %s' % ' '.join(_missing), level=INFO)
550 return False
551+
552 return True
553
554
555 def config_flags_parser(config_flags):
556+ """Parses config flags string into dict.
557+
558+ The provided config_flags string may be a list of comma-separated values
559+ which themselves may be comma-separated list of values.
560+ """
561 if config_flags.find('==') >= 0:
562- log("config_flags is not in expected format (key=value)",
563- level=ERROR)
564+ log("config_flags is not in expected format (key=value)", level=ERROR)
565 raise OSContextError
566+
567 # strip the following from each value.
568 post_strippers = ' ,'
569 # we strip any leading/trailing '=' or ' ' from the string then
570@@ -93,7 +98,7 @@
571 split = config_flags.strip(' =').split('=')
572 limit = len(split)
573 flags = {}
574- for i in xrange(0, limit - 1):
575+ for i in range(0, limit - 1):
576 current = split[i]
577 next = split[i + 1]
578 vindex = next.rfind(',')
579@@ -108,17 +113,18 @@
580 # if this not the first entry, expect an embedded key.
581 index = current.rfind(',')
582 if index < 0:
583- log("invalid config value(s) at index %s" % (i),
584- level=ERROR)
585+ log("Invalid config value(s) at index %s" % (i), level=ERROR)
586 raise OSContextError
587 key = current[index + 1:]
588
589 # Add to collection.
590 flags[key.strip(post_strippers)] = value.rstrip(post_strippers)
591+
592 return flags
593
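config_flags_parser above turns a "key1=val1,key2=val2" string into a dict, rejecting inputs containing '=='. A simplified sketch of that contract (the real helper additionally tolerates commas embedded inside values by scanning for the next '=' rather than splitting naively, which this version does not attempt):

```python
def config_flags_parser(config_flags):
    """Parse a comma-separated key=value string into a dict (simplified)."""
    if '==' in config_flags:
        raise ValueError("config_flags is not in expected format (key=value)")
    flags = {}
    for pair in config_flags.split(','):
        key, _, value = pair.partition('=')
        # Strip the same leading/trailing '=', ' ' and ',' as the original.
        flags[key.strip(' =')] = value.strip(' ,')
    return flags
```

For inputs without embedded commas the two parsers agree; the extra rfind-based scanning in the real helper only matters for values that themselves contain commas.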
594
595 class OSContextGenerator(object):
596+ """Base class for all context generators."""
597 interfaces = []
598
599 def __call__(self):
600@@ -130,11 +136,11 @@
601
602 def __init__(self,
603 database=None, user=None, relation_prefix=None, ssl_dir=None):
604- '''
605- Allows inspecting relation for settings prefixed with relation_prefix.
606- This is useful for parsing access for multiple databases returned via
607- the shared-db interface (eg, nova_password, quantum_password)
608- '''
609+ """Allows inspecting relation for settings prefixed with
610+ relation_prefix. This is useful for parsing access for multiple
611+ databases returned via the shared-db interface (eg, nova_password,
612+ quantum_password)
613+ """
614 self.relation_prefix = relation_prefix
615 self.database = database
616 self.user = user
617@@ -144,9 +150,8 @@
618 self.database = self.database or config('database')
619 self.user = self.user or config('database-user')
620 if None in [self.database, self.user]:
621- log('Could not generate shared_db context. '
622- 'Missing required charm config options. '
623- '(database name and user)')
624+ log("Could not generate shared_db context. Missing required charm "
625+ "config options. (database name and user)", level=ERROR)
626 raise OSContextError
627
628 ctxt = {}
629@@ -199,23 +204,24 @@
630 def __call__(self):
631 self.database = self.database or config('database')
632 if self.database is None:
633- log('Could not generate postgresql_db context. '
634- 'Missing required charm config options. '
635- '(database name)')
636+ log('Could not generate postgresql_db context. Missing required '
637+ 'charm config options. (database name)', level=ERROR)
638 raise OSContextError
639+
640 ctxt = {}
641-
642 for rid in relation_ids(self.interfaces[0]):
643 for unit in related_units(rid):
644- ctxt = {
645- 'database_host': relation_get('host', rid=rid, unit=unit),
646- 'database': self.database,
647- 'database_user': relation_get('user', rid=rid, unit=unit),
648- 'database_password': relation_get('password', rid=rid, unit=unit),
649- 'database_type': 'postgresql',
650- }
651+ rel_host = relation_get('host', rid=rid, unit=unit)
652+ rel_user = relation_get('user', rid=rid, unit=unit)
653+ rel_passwd = relation_get('password', rid=rid, unit=unit)
654+ ctxt = {'database_host': rel_host,
655+ 'database': self.database,
656+ 'database_user': rel_user,
657+ 'database_password': rel_passwd,
658+ 'database_type': 'postgresql'}
659 if context_complete(ctxt):
660 return ctxt
661+
662 return {}
663
664
665@@ -224,23 +230,29 @@
666 ca_path = os.path.join(ssl_dir, 'db-client.ca')
667 with open(ca_path, 'w') as fh:
668 fh.write(b64decode(rdata['ssl_ca']))
669+
670 ctxt['database_ssl_ca'] = ca_path
671 elif 'ssl_ca' in rdata:
672- log("Charm not setup for ssl support but ssl ca found")
673+ log("Charm not setup for ssl support but ssl ca found", level=INFO)
674 return ctxt
675+
676 if 'ssl_cert' in rdata:
677 cert_path = os.path.join(
678 ssl_dir, 'db-client.cert')
679 if not os.path.exists(cert_path):
680- log("Waiting 1m for ssl client cert validity")
681+ log("Waiting 1m for ssl client cert validity", level=INFO)
682 time.sleep(60)
683+
684 with open(cert_path, 'w') as fh:
685 fh.write(b64decode(rdata['ssl_cert']))
686+
687 ctxt['database_ssl_cert'] = cert_path
688 key_path = os.path.join(ssl_dir, 'db-client.key')
689 with open(key_path, 'w') as fh:
690 fh.write(b64decode(rdata['ssl_key']))
691+
692 ctxt['database_ssl_key'] = key_path
693+
694 return ctxt
695
696
697@@ -248,9 +260,8 @@
698 interfaces = ['identity-service']
699
700 def __call__(self):
701- log('Generating template context for identity-service')
702+ log('Generating template context for identity-service', level=DEBUG)
703 ctxt = {}
704-
705 for rid in relation_ids('identity-service'):
706 for unit in related_units(rid):
707 rdata = relation_get(rid=rid, unit=unit)
708@@ -258,26 +269,24 @@
709 serv_host = format_ipv6_addr(serv_host) or serv_host
710 auth_host = rdata.get('auth_host')
711 auth_host = format_ipv6_addr(auth_host) or auth_host
712-
713- ctxt = {
714- 'service_port': rdata.get('service_port'),
715- 'service_host': serv_host,
716- 'auth_host': auth_host,
717- 'auth_port': rdata.get('auth_port'),
718- 'admin_tenant_name': rdata.get('service_tenant'),
719- 'admin_user': rdata.get('service_username'),
720- 'admin_password': rdata.get('service_password'),
721- 'service_protocol':
722- rdata.get('service_protocol') or 'http',
723- 'auth_protocol':
724- rdata.get('auth_protocol') or 'http',
725- }
726+ svc_protocol = rdata.get('service_protocol') or 'http'
727+ auth_protocol = rdata.get('auth_protocol') or 'http'
728+ ctxt = {'service_port': rdata.get('service_port'),
729+ 'service_host': serv_host,
730+ 'auth_host': auth_host,
731+ 'auth_port': rdata.get('auth_port'),
732+ 'admin_tenant_name': rdata.get('service_tenant'),
733+ 'admin_user': rdata.get('service_username'),
734+ 'admin_password': rdata.get('service_password'),
735+ 'service_protocol': svc_protocol,
736+ 'auth_protocol': auth_protocol}
737 if context_complete(ctxt):
738 # NOTE(jamespage) this is required for >= icehouse
739 # so a missing value just indicates keystone needs
740 # upgrading
741 ctxt['admin_tenant_id'] = rdata.get('service_tenant_id')
742 return ctxt
743+
744 return {}
745
746
747@@ -290,21 +299,23 @@
748 self.interfaces = [rel_name]
749
750 def __call__(self):
751- log('Generating template context for amqp')
752+ log('Generating template context for amqp', level=DEBUG)
753 conf = config()
754- user_setting = 'rabbit-user'
755- vhost_setting = 'rabbit-vhost'
756 if self.relation_prefix:
757- user_setting = self.relation_prefix + '-rabbit-user'
758- vhost_setting = self.relation_prefix + '-rabbit-vhost'
759+ user_setting = '%s-rabbit-user' % (self.relation_prefix)
760+ vhost_setting = '%s-rabbit-vhost' % (self.relation_prefix)
761+ else:
762+ user_setting = 'rabbit-user'
763+ vhost_setting = 'rabbit-vhost'
764
765 try:
766 username = conf[user_setting]
767 vhost = conf[vhost_setting]
768 except KeyError as e:
769- log('Could not generate shared_db context. '
770- 'Missing required charm config options: %s.' % e)
771+ log('Could not generate shared_db context. Missing required charm '
772+ 'config options: %s.' % e, level=ERROR)
773 raise OSContextError
774+
775 ctxt = {}
776 for rid in relation_ids(self.rel_name):
777 ha_vip_only = False
778@@ -318,6 +329,7 @@
779 host = relation_get('private-address', rid=rid, unit=unit)
780 host = format_ipv6_addr(host) or host
781 ctxt['rabbitmq_host'] = host
782+
783 ctxt.update({
784 'rabbitmq_user': username,
785 'rabbitmq_password': relation_get('password', rid=rid,
786@@ -328,6 +340,7 @@
787 ssl_port = relation_get('ssl_port', rid=rid, unit=unit)
788 if ssl_port:
789 ctxt['rabbit_ssl_port'] = ssl_port
790+
791 ssl_ca = relation_get('ssl_ca', rid=rid, unit=unit)
792 if ssl_ca:
793 ctxt['rabbit_ssl_ca'] = ssl_ca
794@@ -341,41 +354,45 @@
795 if context_complete(ctxt):
796 if 'rabbit_ssl_ca' in ctxt:
797 if not self.ssl_dir:
798- log(("Charm not setup for ssl support "
799- "but ssl ca found"))
800+ log("Charm not setup for ssl support but ssl ca "
801+ "found", level=INFO)
802 break
803+
804 ca_path = os.path.join(
805 self.ssl_dir, 'rabbit-client-ca.pem')
806 with open(ca_path, 'w') as fh:
807 fh.write(b64decode(ctxt['rabbit_ssl_ca']))
808 ctxt['rabbit_ssl_ca'] = ca_path
809+
810 # Sufficient information found = break out!
811 break
812+
813 # Used for active/active rabbitmq >= grizzly
814- if ('clustered' not in ctxt or ha_vip_only) \
815- and len(related_units(rid)) > 1:
816+ if (('clustered' not in ctxt or ha_vip_only) and
817+ len(related_units(rid)) > 1):
818 rabbitmq_hosts = []
819 for unit in related_units(rid):
820 host = relation_get('private-address', rid=rid, unit=unit)
821 host = format_ipv6_addr(host) or host
822 rabbitmq_hosts.append(host)
823- ctxt['rabbitmq_hosts'] = ','.join(rabbitmq_hosts)
824+
825+ ctxt['rabbitmq_hosts'] = ','.join(sorted(rabbitmq_hosts))
826+
827 if not context_complete(ctxt):
828 return {}
829- else:
830- return ctxt
831+
832+ return ctxt
833
834
835 class CephContext(OSContextGenerator):
836+ """Generates context for /etc/ceph/ceph.conf templates."""
837 interfaces = ['ceph']
838
839 def __call__(self):
840- '''This generates context for /etc/ceph/ceph.conf templates'''
841 if not relation_ids('ceph'):
842 return {}
843
844- log('Generating template context for ceph')
845-
846+ log('Generating template context for ceph', level=DEBUG)
847 mon_hosts = []
848 auth = None
849 key = None
850@@ -384,18 +401,18 @@
851 for unit in related_units(rid):
852 auth = relation_get('auth', rid=rid, unit=unit)
853 key = relation_get('key', rid=rid, unit=unit)
854- ceph_addr = \
855- relation_get('ceph-public-address', rid=rid, unit=unit) or \
856- relation_get('private-address', rid=rid, unit=unit)
857+ ceph_pub_addr = relation_get('ceph-public-address', rid=rid,
858+ unit=unit)
859+ unit_priv_addr = relation_get('private-address', rid=rid,
860+ unit=unit)
861+ ceph_addr = ceph_pub_addr or unit_priv_addr
862 ceph_addr = format_ipv6_addr(ceph_addr) or ceph_addr
863 mon_hosts.append(ceph_addr)
864
865- ctxt = {
866- 'mon_hosts': ' '.join(mon_hosts),
867- 'auth': auth,
868- 'key': key,
869- 'use_syslog': use_syslog
870- }
871+ ctxt = {'mon_hosts': ' '.join(sorted(mon_hosts)),
872+ 'auth': auth,
873+ 'key': key,
874+ 'use_syslog': use_syslog}
875
876 if not os.path.isdir('/etc/ceph'):
877 os.mkdir('/etc/ceph')
878@@ -404,45 +421,68 @@
879 return {}
880
881 ensure_packages(['ceph-common'])
882-
883 return ctxt
884
885
886 class HAProxyContext(OSContextGenerator):
887+ """Provides half a context for the haproxy template, which describes
888+ all peers to be included in the cluster. Each charm needs to include
889+ its own context generator that describes the port mapping.
890+ """
891 interfaces = ['cluster']
892
893+ def __init__(self, singlenode_mode=False):
894+ self.singlenode_mode = singlenode_mode
895+
896 def __call__(self):
897- '''
898- Builds half a context for the haproxy template, which describes
899- all peers to be included in the cluster. Each charm needs to include
900- its own context generator that describes the port mapping.
901- '''
902- if not relation_ids('cluster'):
903+ if not relation_ids('cluster') and not self.singlenode_mode:
904 return {}
905
906+ if config('prefer-ipv6'):
907+ addr = get_ipv6_addr(exc_list=[config('vip')])[0]
908+ else:
909+ addr = get_host_ip(unit_get('private-address'))
910+
911+ l_unit = local_unit().replace('/', '-')
912 cluster_hosts = {}
913- l_unit = local_unit().replace('/', '-')
914-
915- if config('prefer-ipv6'):
916- addr = get_ipv6_addr(exc_list=[config('vip')])[0]
917- else:
918- addr = unit_get('private-address')
919-
920- cluster_hosts[l_unit] = get_address_in_network(config('os-internal-network'),
921- addr)
922-
923- for rid in relation_ids('cluster'):
924- for unit in related_units(rid):
925- _unit = unit.replace('/', '-')
926- addr = relation_get('private-address', rid=rid, unit=unit)
927- cluster_hosts[_unit] = addr
928-
929- ctxt = {
930- 'units': cluster_hosts,
931- }
932+
933+ # NOTE(jamespage): build out map of configured network endpoints
934+ # and associated backends
935+ for addr_type in ADDRESS_TYPES:
936+ cfg_opt = 'os-{}-network'.format(addr_type)
937+ laddr = get_address_in_network(config(cfg_opt))
938+ if laddr:
939+ netmask = get_netmask_for_address(laddr)
940+ cluster_hosts[laddr] = {'network': "{}/{}".format(laddr,
941+ netmask),
942+ 'backends': {l_unit: laddr}}
943+ for rid in relation_ids('cluster'):
944+ for unit in related_units(rid):
945+ _laddr = relation_get('{}-address'.format(addr_type),
946+ rid=rid, unit=unit)
947+ if _laddr:
948+ _unit = unit.replace('/', '-')
949+ cluster_hosts[laddr]['backends'][_unit] = _laddr
950+
951+ # NOTE(jamespage) no split configurations found, just use
952+ # private addresses
953+ if not cluster_hosts:
954+ netmask = get_netmask_for_address(addr)
955+ cluster_hosts[addr] = {'network': "{}/{}".format(addr, netmask),
956+ 'backends': {l_unit: addr}}
957+ for rid in relation_ids('cluster'):
958+ for unit in related_units(rid):
959+ _laddr = relation_get('private-address',
960+ rid=rid, unit=unit)
961+ if _laddr:
962+ _unit = unit.replace('/', '-')
963+ cluster_hosts[addr]['backends'][_unit] = _laddr
964+
965+ ctxt = {'frontends': cluster_hosts}
966
967 if config('haproxy-server-timeout'):
968 ctxt['haproxy_server_timeout'] = config('haproxy-server-timeout')
969+
970 if config('haproxy-client-timeout'):
971 ctxt['haproxy_client_timeout'] = config('haproxy-client-timeout')
972
973@@ -455,13 +495,19 @@
974 ctxt['haproxy_host'] = '0.0.0.0'
975 ctxt['stat_port'] = ':8888'
976
977- if len(cluster_hosts.keys()) > 1:
978- # Enable haproxy when we have enough peers.
979- log('Ensuring haproxy enabled in /etc/default/haproxy.')
980- with open('/etc/default/haproxy', 'w') as out:
981- out.write('ENABLED=1\n')
982- return ctxt
983- log('HAProxy context is incomplete, this unit has no peers.')
984+ for frontend in cluster_hosts:
985+ if (len(cluster_hosts[frontend]['backends']) > 1 or
986+ self.singlenode_mode):
987+ # Enable haproxy when we have enough peers.
988+ log('Ensuring haproxy enabled in /etc/default/haproxy.',
989+ level=DEBUG)
990+ with open('/etc/default/haproxy', 'w') as out:
991+ out.write('ENABLED=1\n')
992+
993+ return ctxt
994+
995+ log('HAProxy context is incomplete, this unit has no peers.',
996+ level=INFO)
997 return {}
998
999
1000@@ -469,29 +515,28 @@
1001 interfaces = ['image-service']
1002
1003 def __call__(self):
1004- '''
1005- Obtains the glance API server from the image-service relation. Useful
1006- in nova and cinder (currently).
1007- '''
1008- log('Generating template context for image-service.')
1009+ """Obtains the glance API server from the image-service relation.
1010+ Useful in nova and cinder (currently).
1011+ """
1012+ log('Generating template context for image-service.', level=DEBUG)
1013 rids = relation_ids('image-service')
1014 if not rids:
1015 return {}
1016+
1017 for rid in rids:
1018 for unit in related_units(rid):
1019 api_server = relation_get('glance-api-server',
1020 rid=rid, unit=unit)
1021 if api_server:
1022 return {'glance_api_servers': api_server}
1023- log('ImageService context is incomplete. '
1024- 'Missing required relation data.')
1025+
1026+ log("ImageService context is incomplete. Missing required relation "
1027+ "data.", level=INFO)
1028 return {}
1029
1030
1031 class ApacheSSLContext(OSContextGenerator):
1032-
1033- """
1034- Generates a context for an apache vhost configuration that configures
1035+ """Generates a context for an apache vhost configuration that configures
1036 HTTPS reverse proxying for one or many endpoints. Generated context
1037 looks something like::
1038
1039@@ -525,6 +570,7 @@
1040 else:
1041 cert_filename = 'cert'
1042 key_filename = 'key'
1043+
1044 write_file(path=os.path.join(ssl_dir, cert_filename),
1045 content=b64decode(cert))
1046 write_file(path=os.path.join(ssl_dir, key_filename),
1047@@ -536,7 +582,8 @@
1048 install_ca_cert(b64decode(ca_cert))
1049
1050 def canonical_names(self):
1051- '''Figure out which canonical names clients will access this service'''
1052+ """Figure out which canonical names clients will access this service.
1053+ """
1054 cns = []
1055 for r_id in relation_ids('identity-service'):
1056 for unit in related_units(r_id):
1057@@ -544,55 +591,80 @@
1058 for k in rdata:
1059 if k.startswith('ssl_key_'):
1060 cns.append(k.lstrip('ssl_key_'))
1061- return list(set(cns))
1062+
1063+ return sorted(list(set(cns)))
1064+
1065+ def get_network_addresses(self):
1066+ """For each network configured, return corresponding address and vip
1067+ (if available).
1068+
1069+ Returns a list of tuples of the form:
1070+
1071+ [(address_in_net_a, vip_in_net_a),
1072+ (address_in_net_b, vip_in_net_b),
1073+ ...]
1074+
1075+ or, if no vip(s) available:
1076+
1077+ [(address_in_net_a, address_in_net_a),
1078+ (address_in_net_b, address_in_net_b),
1079+ ...]
1080+ """
1081+ addresses = []
1082+ if config('vip'):
1083+ vips = config('vip').split()
1084+ else:
1085+ vips = []
1086+
1087+ for net_type in ['os-internal-network', 'os-admin-network',
1088+ 'os-public-network']:
1089+ addr = get_address_in_network(config(net_type),
1090+ unit_get('private-address'))
1091+ if len(vips) > 1 and is_clustered():
1092+ if not config(net_type):
1093+ log("Multiple networks configured but net_type "
1094+ "is None (%s)." % net_type, level=WARNING)
1095+ continue
1096+
1097+ for vip in vips:
1098+ if is_address_in_network(config(net_type), vip):
1099+ addresses.append((addr, vip))
1100+ break
1101+
1102+ elif is_clustered() and config('vip'):
1103+ addresses.append((addr, config('vip')))
1104+ else:
1105+ addresses.append((addr, addr))
1106+
1107+ return sorted(addresses)
1108
1109 def __call__(self):
1110- if isinstance(self.external_ports, basestring):
1111+ if isinstance(self.external_ports, six.string_types):
1112 self.external_ports = [self.external_ports]
1113- if (not self.external_ports or not https()):
1114+
1115+ if not self.external_ports or not https():
1116 return {}
1117
1118 self.configure_ca()
1119 self.enable_modules()
1120
1121- ctxt = {
1122- 'namespace': self.service_namespace,
1123- 'endpoints': [],
1124- 'ext_ports': []
1125- }
1126+ ctxt = {'namespace': self.service_namespace,
1127+ 'endpoints': [],
1128+ 'ext_ports': []}
1129
1130 for cn in self.canonical_names():
1131 self.configure_cert(cn)
1132
1133- addresses = []
1134- vips = []
1135- if config('vip'):
1136- vips = config('vip').split()
1137-
1138- for network_type in ['os-internal-network',
1139- 'os-admin-network',
1140- 'os-public-network']:
1141- address = get_address_in_network(config(network_type),
1142- unit_get('private-address'))
1143- if len(vips) > 0 and is_clustered():
1144- for vip in vips:
1145- if is_address_in_network(config(network_type),
1146- vip):
1147- addresses.append((address, vip))
1148- break
1149- elif is_clustered():
1150- addresses.append((address, config('vip')))
1151- else:
1152- addresses.append((address, address))
1153-
1154- for address, endpoint in set(addresses):
1155+ addresses = self.get_network_addresses()
1156+ for address, endpoint in sorted(set(addresses)):
1157 for api_port in self.external_ports:
1158 ext_port = determine_apache_port(api_port)
1159 int_port = determine_api_port(api_port)
1160 portmap = (address, endpoint, int(ext_port), int(int_port))
1161 ctxt['endpoints'].append(portmap)
1162 ctxt['ext_ports'].append(int(ext_port))
1163- ctxt['ext_ports'] = list(set(ctxt['ext_ports']))
1164+
1165+ ctxt['ext_ports'] = sorted(list(set(ctxt['ext_ports'])))
1166 return ctxt
1167
1168
1169@@ -609,21 +681,23 @@
1170
1171 @property
1172 def packages(self):
1173- return neutron_plugin_attribute(
1174- self.plugin, 'packages', self.network_manager)
1175+ return neutron_plugin_attribute(self.plugin, 'packages',
1176+ self.network_manager)
1177
1178 @property
1179 def neutron_security_groups(self):
1180 return None
1181
1182 def _ensure_packages(self):
1183- [ensure_packages(pkgs) for pkgs in self.packages]
1184+ for pkgs in self.packages:
1185+ ensure_packages(pkgs)
1186
1187 def _save_flag_file(self):
1188 if self.network_manager == 'quantum':
1189 _file = '/etc/nova/quantum_plugin.conf'
1190 else:
1191 _file = '/etc/nova/neutron_plugin.conf'
1192+
1193 with open(_file, 'wb') as out:
1194 out.write(self.plugin + '\n')
1195
1196@@ -632,13 +706,11 @@
1197 self.network_manager)
1198 config = neutron_plugin_attribute(self.plugin, 'config',
1199 self.network_manager)
1200- ovs_ctxt = {
1201- 'core_plugin': driver,
1202- 'neutron_plugin': 'ovs',
1203- 'neutron_security_groups': self.neutron_security_groups,
1204- 'local_ip': unit_private_ip(),
1205- 'config': config
1206- }
1207+ ovs_ctxt = {'core_plugin': driver,
1208+ 'neutron_plugin': 'ovs',
1209+ 'neutron_security_groups': self.neutron_security_groups,
1210+ 'local_ip': unit_private_ip(),
1211+ 'config': config}
1212
1213 return ovs_ctxt
1214
1215@@ -647,13 +719,11 @@
1216 self.network_manager)
1217 config = neutron_plugin_attribute(self.plugin, 'config',
1218 self.network_manager)
1219- nvp_ctxt = {
1220- 'core_plugin': driver,
1221- 'neutron_plugin': 'nvp',
1222- 'neutron_security_groups': self.neutron_security_groups,
1223- 'local_ip': unit_private_ip(),
1224- 'config': config
1225- }
1226+ nvp_ctxt = {'core_plugin': driver,
1227+ 'neutron_plugin': 'nvp',
1228+ 'neutron_security_groups': self.neutron_security_groups,
1229+ 'local_ip': unit_private_ip(),
1230+ 'config': config}
1231
1232 return nvp_ctxt
1233
1234@@ -662,35 +732,50 @@
1235 self.network_manager)
1236 n1kv_config = neutron_plugin_attribute(self.plugin, 'config',
1237 self.network_manager)
1238- n1kv_ctxt = {
1239- 'core_plugin': driver,
1240- 'neutron_plugin': 'n1kv',
1241- 'neutron_security_groups': self.neutron_security_groups,
1242- 'local_ip': unit_private_ip(),
1243- 'config': n1kv_config,
1244- 'vsm_ip': config('n1kv-vsm-ip'),
1245- 'vsm_username': config('n1kv-vsm-username'),
1246- 'vsm_password': config('n1kv-vsm-password'),
1247- 'restrict_policy_profiles': config(
1248- 'n1kv_restrict_policy_profiles'),
1249- }
1250+ n1kv_user_config_flags = config('n1kv-config-flags')
1251+ restrict_policy_profiles = config('n1kv-restrict-policy-profiles')
1252+ n1kv_ctxt = {'core_plugin': driver,
1253+ 'neutron_plugin': 'n1kv',
1254+ 'neutron_security_groups': self.neutron_security_groups,
1255+ 'local_ip': unit_private_ip(),
1256+ 'config': n1kv_config,
1257+ 'vsm_ip': config('n1kv-vsm-ip'),
1258+ 'vsm_username': config('n1kv-vsm-username'),
1259+ 'vsm_password': config('n1kv-vsm-password'),
1260+ 'restrict_policy_profiles': restrict_policy_profiles}
1261+
1262+ if n1kv_user_config_flags:
1263+ flags = config_flags_parser(n1kv_user_config_flags)
1264+ n1kv_ctxt['user_config_flags'] = flags
1265
1266 return n1kv_ctxt
1267
1268+ def calico_ctxt(self):
1269+ driver = neutron_plugin_attribute(self.plugin, 'driver',
1270+ self.network_manager)
1271+ config = neutron_plugin_attribute(self.plugin, 'config',
1272+ self.network_manager)
1273+ calico_ctxt = {'core_plugin': driver,
1274+ 'neutron_plugin': 'Calico',
1275+ 'neutron_security_groups': self.neutron_security_groups,
1276+ 'local_ip': unit_private_ip(),
1277+ 'config': config}
1278+
1279+ return calico_ctxt
1280+
1281 def neutron_ctxt(self):
1282 if https():
1283 proto = 'https'
1284 else:
1285 proto = 'http'
1286+
1287 if is_clustered():
1288 host = config('vip')
1289 else:
1290 host = unit_get('private-address')
1291- url = '%s://%s:%s' % (proto, host, '9696')
1292- ctxt = {
1293- 'network_manager': self.network_manager,
1294- 'neutron_url': url,
1295- }
1296+
1297+ ctxt = {'network_manager': self.network_manager,
1298+ 'neutron_url': '%s://%s:%s' % (proto, host, '9696')}
1299 return ctxt
1300
1301 def __call__(self):
1302@@ -710,6 +795,8 @@
1303 ctxt.update(self.nvp_ctxt())
1304 elif self.plugin == 'n1kv':
1305 ctxt.update(self.n1kv_ctxt())
1306+ elif self.plugin == 'Calico':
1307+ ctxt.update(self.calico_ctxt())
1308
1309 alchemy_flags = config('neutron-alchemy-flags')
1310 if alchemy_flags:
1311@@ -721,23 +808,40 @@
1312
1313
1314 class OSConfigFlagContext(OSContextGenerator):
1315-
1316- """
1317- Responsible for adding user-defined config-flags in charm config to a
1318- template context.
1319-
1320- NOTE: the value of config-flags may be a comma-separated list of
1321- key=value pairs and some Openstack config files support
1322- comma-separated lists as values.
1323- """
1324-
1325- def __call__(self):
1326- config_flags = config('config-flags')
1327- if not config_flags:
1328- return {}
1329-
1330- flags = config_flags_parser(config_flags)
1331- return {'user_config_flags': flags}
1332+ """Provides support for user-defined config flags.
1333+
1334+ Users can define a comma-separated list of key=value pairs
1335+ in the charm configuration and apply them at any point in
1336+ any file by using a template flag.
1337+
1338+ Sometimes users might want config flags inserted within a
1339+ specific section so this class allows users to specify the
1340+ template flag name, allowing for multiple template flags
1341+ (sections) within the same context.
1342+
1343+ NOTE: the value of config-flags may be a comma-separated list of
1344+ key=value pairs and some OpenStack config files support
1345+ comma-separated lists as values.
1346+ """
1347+
1348+ def __init__(self, charm_flag='config-flags',
1349+ template_flag='user_config_flags'):
1350+ """
1351+ :param charm_flag: config flags in charm configuration.
1352+ :param template_flag: insert point for user-defined flags in template
1353+ file.
1354+ """
1355+ super(OSConfigFlagContext, self).__init__()
1356+ self._charm_flag = charm_flag
1357+ self._template_flag = template_flag
1358+
1359+ def __call__(self):
1360+ config_flags = config(self._charm_flag)
1361+ if not config_flags:
1362+ return {}
1363+
1364+ return {self._template_flag:
1365+ config_flags_parser(config_flags)}
1366
1367
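The reworked OSConfigFlagContext above takes configurable charm/template flag names, so a single charm can expose several independent flag sections. A minimal sketch of that wiring (the second charm option name, `parse_flags`, and the stubbed config dict are illustrative, not part of the change):

```python
# Two independent flag sections, each rendered at its own template
# insertion point, mirroring OSConfigFlagContext(charm_flag, template_flag).
contexts = [
    # default: charm option 'config-flags' -> template var 'user_config_flags'
    ('config-flags', 'user_config_flags'),
    # a hypothetical extra section, e.g. API-server-only overrides
    ('api-config-flags', 'api_user_config_flags'),
]

def parse_flags(raw):
    # stand-in for charmhelpers' config_flags_parser: 'k1=v1,k2=v2' -> dict
    return dict(kv.split('=', 1) for kv in raw.split(',') if kv)

# stubbed charm config; only the default option is set
charm_config = {'config-flags': 'debug=True,workers=4'}

ctxt = {}
for charm_flag, template_flag in contexts:
    raw = charm_config.get(charm_flag)
    if raw:
        ctxt[template_flag] = parse_flags(raw)
```

Each configured section lands under its own template variable, so templates can place, say, API-only flags in a different ini section from the general ones.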
1368 class SubordinateConfigContext(OSContextGenerator):
1369@@ -781,7 +885,6 @@
1370 },
1371 }
1372 }
1373-
1374 """
1375
1376 def __init__(self, service, config_file, interface):
1377@@ -811,26 +914,28 @@
1378
1379 if self.service not in sub_config:
1380 log('Found subordinate_config on %s but it contained'
1381- 'nothing for %s service' % (rid, self.service))
1382+ 'nothing for %s service' % (rid, self.service),
1383+ level=INFO)
1384 continue
1385
1386 sub_config = sub_config[self.service]
1387 if self.config_file not in sub_config:
1388 log('Found subordinate_config on %s but it contained'
1389- 'nothing for %s' % (rid, self.config_file))
1390+ 'nothing for %s' % (rid, self.config_file),
1391+ level=INFO)
1392 continue
1393
1394 sub_config = sub_config[self.config_file]
1395- for k, v in sub_config.iteritems():
1396+ for k, v in six.iteritems(sub_config):
1397 if k == 'sections':
1398- for section, config_dict in v.iteritems():
1399- log("adding section '%s'" % (section))
1400+ for section, config_dict in six.iteritems(v):
1401+ log("adding section '%s'" % (section),
1402+ level=DEBUG)
1403 ctxt[k][section] = config_dict
1404 else:
1405 ctxt[k] = v
1406
1407- log("%d section(s) found" % (len(ctxt['sections'])), level=INFO)
1408-
1409+ log("%d section(s) found" % (len(ctxt['sections'])), level=DEBUG)
1410 return ctxt
1411
1412
1413@@ -842,15 +947,14 @@
1414 False if config('debug') is None else config('debug')
1415 ctxt['verbose'] = \
1416 False if config('verbose') is None else config('verbose')
1417+
1418 return ctxt
1419
1420
1421 class SyslogContext(OSContextGenerator):
1422
1423 def __call__(self):
1424- ctxt = {
1425- 'use_syslog': config('use-syslog')
1426- }
1427+ ctxt = {'use_syslog': config('use-syslog')}
1428 return ctxt
1429
1430
1431@@ -858,10 +962,56 @@
1432
1433 def __call__(self):
1434 if config('prefer-ipv6'):
1435- return {
1436- 'bind_host': '::'
1437- }
1438+ return {'bind_host': '::'}
1439 else:
1440- return {
1441- 'bind_host': '0.0.0.0'
1442- }
1443+ return {'bind_host': '0.0.0.0'}
1444+
1445+
1446+class WorkerConfigContext(OSContextGenerator):
1447+
1448+ @property
1449+ def num_cpus(self):
1450+ try:
1451+ from psutil import NUM_CPUS
1452+ except ImportError:
1453+ apt_install('python-psutil', fatal=True)
1454+ from psutil import NUM_CPUS
1455+
1456+ return NUM_CPUS
1457+
1458+ def __call__(self):
1459+ multiplier = config('worker-multiplier') or 0
1460+ ctxt = {"workers": self.num_cpus * multiplier}
1461+ return ctxt
1462+
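WorkerConfigContext sizes worker pools as CPU count times the `worker-multiplier` option, with an unset option collapsing to zero. A self-contained sketch of that arithmetic (the CPU count is passed in rather than read via psutil, whose `NUM_CPUS` constant was later replaced by `psutil.cpu_count()`):

```python
def worker_count(num_cpus, multiplier):
    # Mirrors WorkerConfigContext.__call__: an unset 'worker-multiplier'
    # (None) falls back to 0, so no extra workers are configured.
    return num_cpus * (multiplier or 0)

print(worker_count(4, 2))     # 8
print(worker_count(4, None))  # 0
```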
1463+
1464+class ZeroMQContext(OSContextGenerator):
1465+ interfaces = ['zeromq-configuration']
1466+
1467+ def __call__(self):
1468+ ctxt = {}
1469+ if is_relation_made('zeromq-configuration', 'host'):
1470+ for rid in relation_ids('zeromq-configuration'):
1471+ for unit in related_units(rid):
1472+ ctxt['zmq_nonce'] = relation_get('nonce', unit, rid)
1473+ ctxt['zmq_host'] = relation_get('host', unit, rid)
1474+
1475+ return ctxt
1476+
1477+
1478+class NotificationDriverContext(OSContextGenerator):
1479+
1480+ def __init__(self, zmq_relation='zeromq-configuration',
1481+ amqp_relation='amqp'):
1482+ """
1483+ :param zmq_relation: Name of Zeromq relation to check
1484+ """
1485+ self.zmq_relation = zmq_relation
1486+ self.amqp_relation = amqp_relation
1487+
1488+ def __call__(self):
1489+ ctxt = {'notifications': 'False'}
1490+ if is_relation_made(self.amqp_relation):
1491+ ctxt['notifications'] = "True"
1492+
1493+ return ctxt
1494
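The new HAProxyContext replaces the flat `units` map with a per-network `frontends` map keyed by local address, each entry carrying the network and a unit-to-backend map. A minimal sketch of the resulting shape (the helper name, unit names, addresses, and netmask format are illustrative; the real code derives them from `get_address_in_network`/`get_netmask_for_address` and relation data):

```python
def build_frontends(local_unit, local_addr, netmask, peers):
    """Sketch of the single-network branch of the new HAProxyContext.

    peers: dict of unit-name -> address, all assumed on local_addr's net.
    """
    frontends = {
        local_addr: {
            'network': '{}/{}'.format(local_addr, netmask),
            'backends': {local_unit.replace('/', '-'): local_addr},
        }
    }
    for unit, addr in peers.items():
        frontends[local_addr]['backends'][unit.replace('/', '-')] = addr
    return frontends

ctxt = {'frontends': build_frontends(
    'openstack-dashboard/0', '10.0.0.10', '255.255.255.0',
    {'openstack-dashboard/1': '10.0.0.11'})}
```

With multiple `os-*-network` options configured, the real context adds one such frontend entry per network, which is what lets the haproxy template render a listener per endpoint type.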
1495=== modified file 'hooks/charmhelpers/contrib/openstack/ip.py'
1496--- hooks/charmhelpers/contrib/openstack/ip.py 2014-10-01 13:53:14 +0000
1497+++ hooks/charmhelpers/contrib/openstack/ip.py 2014-12-11 17:57:05 +0000
1498@@ -2,21 +2,19 @@
1499 config,
1500 unit_get,
1501 )
1502-
1503 from charmhelpers.contrib.network.ip import (
1504 get_address_in_network,
1505 is_address_in_network,
1506 is_ipv6,
1507 get_ipv6_addr,
1508 )
1509-
1510 from charmhelpers.contrib.hahelpers.cluster import is_clustered
1511
1512 PUBLIC = 'public'
1513 INTERNAL = 'int'
1514 ADMIN = 'admin'
1515
1516-_address_map = {
1517+ADDRESS_MAP = {
1518 PUBLIC: {
1519 'config': 'os-public-network',
1520 'fallback': 'public-address'
1521@@ -33,16 +31,14 @@
1522
1523
1524 def canonical_url(configs, endpoint_type=PUBLIC):
1525- '''
1526- Returns the correct HTTP URL to this host given the state of HTTPS
1527+ """Returns the correct HTTP URL to this host given the state of HTTPS
1528 configuration, hacluster and charm configuration.
1529
1530- :configs OSTemplateRenderer: A config tempating object to inspect for
1531- a complete https context.
1532- :endpoint_type str: The endpoint type to resolve.
1533-
1534- :returns str: Base URL for services on the current service unit.
1535- '''
1536+ :param configs: OSTemplateRenderer config templating object to inspect
1537+ for a complete https context.
1538+ :param endpoint_type: str endpoint type to resolve.
1539+ :returns: str base URL for services on the current service unit.
1540+ """
1541 scheme = 'http'
1542 if 'https' in configs.complete_contexts():
1543 scheme = 'https'
1544@@ -53,27 +49,45 @@
1545
1546
1547 def resolve_address(endpoint_type=PUBLIC):
1548+ """Return unit address depending on net config.
1549+
1550+ If unit is clustered with vip(s) and has net splits defined, return vip on
1551+ correct network. If clustered with no nets defined, return primary vip.
1552+
1553+ If not clustered, return unit address ensuring address is on configured net
1554+ split if one is configured.
1555+
1556+ :param endpoint_type: Network endpoint type
1557+ """
1558 resolved_address = None
1559- if is_clustered():
1560- if config(_address_map[endpoint_type]['config']) is None:
1561- # Assume vip is simple and pass back directly
1562- resolved_address = config('vip')
1563+ vips = config('vip')
1564+ if vips:
1565+ vips = vips.split()
1566+
1567+ net_type = ADDRESS_MAP[endpoint_type]['config']
1568+ net_addr = config(net_type)
1569+ net_fallback = ADDRESS_MAP[endpoint_type]['fallback']
1570+ clustered = is_clustered()
1571+ if clustered:
1572+ if not net_addr:
1573+ # If no net-splits defined, we expect a single vip
1574+ resolved_address = vips[0]
1575 else:
1576- for vip in config('vip').split():
1577- if is_address_in_network(
1578- config(_address_map[endpoint_type]['config']),
1579- vip):
1580+ for vip in vips:
1581+ if is_address_in_network(net_addr, vip):
1582 resolved_address = vip
1583+ break
1584 else:
1585 if config('prefer-ipv6'):
1586- fallback_addr = get_ipv6_addr(exc_list=[config('vip')])[0]
1587+ fallback_addr = get_ipv6_addr(exc_list=vips)[0]
1588 else:
1589- fallback_addr = unit_get(_address_map[endpoint_type]['fallback'])
1590- resolved_address = get_address_in_network(
1591- config(_address_map[endpoint_type]['config']), fallback_addr)
1592+ fallback_addr = unit_get(net_fallback)
1593+
1594+ resolved_address = get_address_in_network(net_addr, fallback_addr)
1595
1596 if resolved_address is None:
1597- raise ValueError('Unable to resolve a suitable IP address'
1598- ' based on charm state and configuration')
1599- else:
1600- return resolved_address
1601+ raise ValueError("Unable to resolve a suitable IP address based on "
1602+ "charm state and configuration. (net_type=%s, "
1603+ "clustered=%s)" % (net_type, clustered))
1604+
1605+ return resolved_address
1606
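The multi-vip branch of the reworked `resolve_address` picks the vip that falls inside the configured network split. The core selection can be sketched with the stdlib `ipaddress` module (`pick_vip` is an illustrative name; charm-helpers uses its own `is_address_in_network`):

```python
import ipaddress

def pick_vip(vips, network):
    # Return the first vip inside `network` (a CIDR string), mirroring
    # the clustered/net-split branch of resolve_address; None if no
    # vip matches, which the caller treats as a configuration error.
    net = ipaddress.ip_network(network)
    for vip in vips:
        if ipaddress.ip_address(vip) in net:
            return vip
    return None

# e.g. separate internal and public vips, resolving the public endpoint
print(pick_vip(['10.0.1.100', '10.0.2.100'], '10.0.2.0/24'))  # 10.0.2.100
```

This is also why the error message now includes `net_type` and `clustered`: when no vip lands on the requested network, those two values identify which branch failed.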
1607=== modified file 'hooks/charmhelpers/contrib/openstack/neutron.py'
1608--- hooks/charmhelpers/contrib/openstack/neutron.py 2014-08-13 13:12:30 +0000
1609+++ hooks/charmhelpers/contrib/openstack/neutron.py 2014-12-11 17:57:05 +0000
1610@@ -14,7 +14,7 @@
1611 def headers_package():
1612 """Ensures correct linux-headers for running kernel are installed,
1613 for building DKMS package"""
1614- kver = check_output(['uname', '-r']).strip()
1615+ kver = check_output(['uname', '-r']).decode('UTF-8').strip()
1616 return 'linux-headers-%s' % kver
1617
1618 QUANTUM_CONF_DIR = '/etc/quantum'
1619@@ -22,7 +22,7 @@
1620
1621 def kernel_version():
1622 """ Retrieve the current major kernel version as a tuple e.g. (3, 13) """
1623- kver = check_output(['uname', '-r']).strip()
1624+ kver = check_output(['uname', '-r']).decode('UTF-8').strip()
1625 kver = kver.split('.')
1626 return (int(kver[0]), int(kver[1]))
1627
1628@@ -138,10 +138,25 @@
1629 relation_prefix='neutron',
1630 ssl_dir=NEUTRON_CONF_DIR)],
1631 'services': [],
1632- 'packages': [['neutron-plugin-cisco']],
1633+ 'packages': [[headers_package()] + determine_dkms_package(),
1634+ ['neutron-plugin-cisco']],
1635 'server_packages': ['neutron-server',
1636 'neutron-plugin-cisco'],
1637 'server_services': ['neutron-server']
1638+ },
1639+ 'Calico': {
1640+ 'config': '/etc/neutron/plugins/ml2/ml2_conf.ini',
1641+ 'driver': 'neutron.plugins.ml2.plugin.Ml2Plugin',
1642+ 'contexts': [
1643+ context.SharedDBContext(user=config('neutron-database-user'),
1644+ database=config('neutron-database'),
1645+ relation_prefix='neutron',
1646+ ssl_dir=NEUTRON_CONF_DIR)],
1647+ 'services': ['calico-compute', 'bird', 'neutron-dhcp-agent'],
1648+ 'packages': [[headers_package()] + determine_dkms_package(),
1649+ ['calico-compute', 'bird', 'neutron-dhcp-agent']],
1650+ 'server_packages': ['neutron-server', 'calico-control'],
1651+ 'server_services': ['neutron-server']
1652 }
1653 }
1654 if release >= 'icehouse':
1655@@ -162,7 +177,8 @@
1656 elif manager == 'neutron':
1657 plugins = neutron_plugins()
1658 else:
1659- log('Error: Network manager does not support plugins.')
1660+ log("Network manager '%s' does not support plugins." % (manager),
1661+ level=ERROR)
1662 raise Exception
1663
1664 try:
1665
1666=== modified file 'hooks/charmhelpers/contrib/openstack/templating.py'
1667--- hooks/charmhelpers/contrib/openstack/templating.py 2014-08-13 13:12:30 +0000
1668+++ hooks/charmhelpers/contrib/openstack/templating.py 2014-12-11 17:57:05 +0000
1669@@ -1,13 +1,13 @@
1670 import os
1671
1672+import six
1673+
1674 from charmhelpers.fetch import apt_install
1675-
1676 from charmhelpers.core.hookenv import (
1677 log,
1678 ERROR,
1679 INFO
1680 )
1681-
1682 from charmhelpers.contrib.openstack.utils import OPENSTACK_CODENAMES
1683
1684 try:
1685@@ -43,7 +43,7 @@
1686 order by OpenStack release.
1687 """
1688 tmpl_dirs = [(rel, os.path.join(templates_dir, rel))
1689- for rel in OPENSTACK_CODENAMES.itervalues()]
1690+ for rel in six.itervalues(OPENSTACK_CODENAMES)]
1691
1692 if not os.path.isdir(templates_dir):
1693 log('Templates directory not found @ %s.' % templates_dir,
1694@@ -258,7 +258,7 @@
1695 """
1696 Write out all registered config files.
1697 """
1698- [self.write(k) for k in self.templates.iterkeys()]
1699+ [self.write(k) for k in six.iterkeys(self.templates)]
1700
1701 def set_release(self, openstack_release):
1702 """
1703@@ -275,5 +275,5 @@
1704 '''
1705 interfaces = []
1706 [interfaces.extend(i.complete_contexts())
1707- for i in self.templates.itervalues()]
1708+ for i in six.itervalues(self.templates)]
1709 return interfaces
1710
1711=== modified file 'hooks/charmhelpers/contrib/openstack/utils.py'
1712--- hooks/charmhelpers/contrib/openstack/utils.py 2014-10-06 14:47:17 +0000
1713+++ hooks/charmhelpers/contrib/openstack/utils.py 2014-12-11 17:57:05 +0000
1714@@ -2,6 +2,7 @@
1715
1716 # Common python helper functions used for OpenStack charms.
1717 from collections import OrderedDict
1718+from functools import wraps
1719
1720 import subprocess
1721 import json
1722@@ -9,11 +10,13 @@
1723 import socket
1724 import sys
1725
1726+import six
1727+import yaml
1728+
1729 from charmhelpers.core.hookenv import (
1730 config,
1731 log as juju_log,
1732 charm_dir,
1733- ERROR,
1734 INFO,
1735 relation_ids,
1736 relation_set
1737@@ -30,7 +33,8 @@
1738 )
1739
1740 from charmhelpers.core.host import lsb_release, mounts, umount
1741-from charmhelpers.fetch import apt_install, apt_cache
1742+from charmhelpers.fetch import apt_install, apt_cache, install_remote
1743+from charmhelpers.contrib.python.packages import pip_install
1744 from charmhelpers.contrib.storage.linux.utils import is_block_device, zap_disk
1745 from charmhelpers.contrib.storage.linux.loopback import ensure_loopback_device
1746
1747@@ -78,6 +82,8 @@
1748 ('1.12.0', 'icehouse'),
1749 ('1.11.0', 'icehouse'),
1750 ('2.0.0', 'juno'),
1751+ ('2.1.0', 'juno'),
1752+ ('2.2.0', 'juno'),
1753 ])
1754
1755 DEFAULT_LOOPBACK_SIZE = '5G'
1756@@ -110,7 +116,7 @@
1757
1758 # Best guess match based on deb string provided
1759 if src.startswith('deb') or src.startswith('ppa'):
1760- for k, v in OPENSTACK_CODENAMES.iteritems():
1761+ for k, v in six.iteritems(OPENSTACK_CODENAMES):
1762 if v in src:
1763 return v
1764
1765@@ -131,7 +137,7 @@
1766
1767 def get_os_version_codename(codename):
1768 '''Determine OpenStack version number from codename.'''
1769- for k, v in OPENSTACK_CODENAMES.iteritems():
1770+ for k, v in six.iteritems(OPENSTACK_CODENAMES):
1771 if v == codename:
1772 return k
1773 e = 'Could not derive OpenStack version for '\
1774@@ -191,7 +197,7 @@
1775 else:
1776 vers_map = OPENSTACK_CODENAMES
1777
1778- for version, cname in vers_map.iteritems():
1779+ for version, cname in six.iteritems(vers_map):
1780 if cname == codename:
1781 return version
1782 # e = "Could not determine OpenStack version for package: %s" % pkg
1783@@ -315,7 +321,7 @@
1784 rc_script.write(
1785 "#!/bin/bash\n")
1786 [rc_script.write('export %s=%s\n' % (u, p))
1787- for u, p in env_vars.iteritems() if u != "script_path"]
1788+ for u, p in six.iteritems(env_vars) if u != "script_path"]
1789
1790
1791 def openstack_upgrade_available(package):
1792@@ -348,8 +354,8 @@
1793 '''
1794 _none = ['None', 'none', None]
1795 if (block_device in _none):
1796- error_out('prepare_storage(): Missing required input: '
1797- 'block_device=%s.' % block_device, level=ERROR)
1798+ error_out('prepare_storage(): Missing required input: block_device=%s.'
1799+ % block_device)
1800
1801 if block_device.startswith('/dev/'):
1802 bdev = block_device
1803@@ -365,8 +371,7 @@
1804 bdev = '/dev/%s' % block_device
1805
1806 if not is_block_device(bdev):
1807- error_out('Failed to locate valid block device at %s' % bdev,
1808- level=ERROR)
1809+ error_out('Failed to locate valid block device at %s' % bdev)
1810
1811 return bdev
1812
1813@@ -415,7 +420,7 @@
1814
1815 if isinstance(address, dns.name.Name):
1816 rtype = 'PTR'
1817- elif isinstance(address, basestring):
1818+ elif isinstance(address, six.string_types):
1819 rtype = 'A'
1820 else:
1821 return None
1822@@ -466,6 +471,14 @@
1823 return result.split('.')[0]
1824
1825
1826+def get_matchmaker_map(mm_file='/etc/oslo/matchmaker_ring.json'):
1827+ mm_map = {}
1828+ if os.path.isfile(mm_file):
1829+ with open(mm_file, 'r') as f:
1830+ mm_map = json.load(f)
1831+ return mm_map
1832+
1833+
1834 def sync_db_with_multi_ipv6_addresses(database, database_user,
1835 relation_prefix=None):
1836 hosts = get_ipv6_addr(dynamic_only=False)
1837@@ -475,10 +488,132 @@
1838 'hostname': json.dumps(hosts)}
1839
1840 if relation_prefix:
1841- keys = kwargs.keys()
1842- for key in keys:
1843+ for key in list(kwargs.keys()):
1844 kwargs["%s_%s" % (relation_prefix, key)] = kwargs[key]
1845 del kwargs[key]
1846
1847 for rid in relation_ids('shared-db'):
1848 relation_set(relation_id=rid, **kwargs)
1849+
1850+
1851+def os_requires_version(ostack_release, pkg):
1852+ """
1853+ Decorator for hook to specify minimum supported release
1854+ """
1855+ def wrap(f):
1856+ @wraps(f)
1857+ def wrapped_f(*args):
1858+ if os_release(pkg) < ostack_release:
1859+ raise Exception("This hook is not supported on releases"
1860+ " before %s" % ostack_release)
1861+ f(*args)
1862+ return wrapped_f
1863+ return wrap
1864+
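The `os_requires_version` decorator guards a hook against running on releases older than a stated minimum; the string comparison works because OpenStack codenames are alphabetical. A self-contained sketch of the same pattern (the release lookup is stubbed in place of `os_release(pkg)`, and the hook names are illustrative):

```python
from functools import wraps

def os_requires_version(ostack_release, release_lookup):
    """Refuse to run the wrapped hook below `ostack_release`.

    `release_lookup` stands in for os_release(pkg); the real helper
    derives the codename from the installed package version.
    """
    def wrap(f):
        @wraps(f)
        def wrapped_f(*args):
            if release_lookup() < ostack_release:
                raise Exception("This hook is not supported on releases"
                                " before %s" % ostack_release)
            return f(*args)
        return wrapped_f
    return wrap

@os_requires_version('icehouse', lambda: 'juno')
def config_changed():
    return 'ran'

print(config_changed())  # 'ran': 'juno' sorts after 'icehouse'
```

A hook decorated the same way on a 'havana' unit would raise instead of running.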
1865+
1866+def git_install_requested():
1867+ """Returns true if openstack-origin-git is specified."""
1868+ return config('openstack-origin-git') != "None"
1869+
1870+
1871+requirements_dir = None
1872+
1873+
1874+def git_clone_and_install(file_name, core_project):
1875+ """Clone/install all OpenStack repos specified in yaml config file."""
1876+ global requirements_dir
1877+
1878+ if file_name == "None":
1879+ return
1880+
1881+ yaml_file = os.path.join(charm_dir(), file_name)
1882+
1883+ # clone/install the requirements project first
1884+ installed = _git_clone_and_install_subset(yaml_file,
1885+ whitelist=['requirements'])
1886+ if 'requirements' not in installed:
1887+ error_out('requirements git repository must be specified')
1888+
1889+ # clone/install all other projects except requirements and the core project
1890+ blacklist = ['requirements', core_project]
1891+ _git_clone_and_install_subset(yaml_file, blacklist=blacklist,
1892+ update_requirements=True)
1893+
1894+ # clone/install the core project
1895+ whitelist = [core_project]
1896+ installed = _git_clone_and_install_subset(yaml_file, whitelist=whitelist,
1897+ update_requirements=True)
1898+ if core_project not in installed:
1899+ error_out('{} git repository must be specified'.format(core_project))
1900+
1901+
1902+def _git_clone_and_install_subset(yaml_file, whitelist=[], blacklist=[],
1903+ update_requirements=False):
1904+ """Clone/install subset of OpenStack repos specified in yaml config file."""
1905+ global requirements_dir
1906+ installed = []
1907+
1908+ with open(yaml_file, 'r') as fd:
1909+ projects = yaml.load(fd)
1910+ for proj, val in projects.items():
1911+ # The project subset is chosen based on the following 3 rules:
1912+ # 1) If project is in blacklist, we don't clone/install it, period.
1913+ # 2) If whitelist is empty, we clone/install everything else.
1914+ # 3) If whitelist is not empty, we clone/install everything in the
1915+ # whitelist.
1916+ if proj in blacklist:
1917+ continue
1918+ if whitelist and proj not in whitelist:
1919+ continue
1920+ repo = val['repository']
1921+ branch = val['branch']
1922+ repo_dir = _git_clone_and_install_single(repo, branch,
1923+ update_requirements)
1924+ if proj == 'requirements':
1925+ requirements_dir = repo_dir
1926+ installed.append(proj)
1927+ return installed
1928+
1929+
1930+def _git_clone_and_install_single(repo, branch, update_requirements=False):
1931+ """Clone and install a single git repository."""
1932+ dest_parent_dir = "/mnt/openstack-git/"
1933+ dest_dir = os.path.join(dest_parent_dir, os.path.basename(repo))
1934+
1935+ if not os.path.exists(dest_parent_dir):
1936+ juju_log('Host dir not mounted at {}. '
1937+ 'Creating directory there instead.'.format(dest_parent_dir))
1938+ os.mkdir(dest_parent_dir)
1939+
1940+ if not os.path.exists(dest_dir):
1941+ juju_log('Cloning git repo: {}, branch: {}'.format(repo, branch))
1942+ repo_dir = install_remote(repo, dest=dest_parent_dir, branch=branch)
1943+ else:
1944+ repo_dir = dest_dir
1945+
1946+ if update_requirements:
1947+ if not requirements_dir:
1948+ error_out('requirements repo must be cloned before '
1949+ 'updating from global requirements.')
1950+ _git_update_requirements(repo_dir, requirements_dir)
1951+
1952+ juju_log('Installing git repo from dir: {}'.format(repo_dir))
1953+ pip_install(repo_dir)
1954+
1955+ return repo_dir
1956+
1957+
1958+def _git_update_requirements(package_dir, reqs_dir):
1959+ """Update from global requirements.
1960+
1961+ Update an OpenStack git directory's requirements.txt and
1962+ test-requirements.txt from global-requirements.txt."""
1963+ orig_dir = os.getcwd()
1964+ os.chdir(reqs_dir)
1965+ cmd = "python update.py {}".format(package_dir)
1966+ try:
1967+ subprocess.check_call(cmd.split(' '))
1968+ except subprocess.CalledProcessError:
1969+ package = os.path.basename(package_dir)
1970+ error_out("Error updating {} from global-requirements.txt".format(package))
1971+ os.chdir(orig_dir)
1972
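The whitelist/blacklist rules that `_git_clone_and_install_subset` applies (blacklist always wins; an empty whitelist admits everything else; a non-empty whitelist admits only its members) can be sketched as a standalone helper. This is a hypothetical illustration, not part of the charm-helpers API:

```python
def select_projects(projects, whitelist=None, blacklist=None):
    """Apply the subset rules from _git_clone_and_install_subset:
    1) a blacklisted project is never selected;
    2) if the whitelist is empty, everything else is selected;
    3) if the whitelist is non-empty, only its members are selected."""
    whitelist = whitelist or []
    blacklist = blacklist or []
    selected = []
    for proj in projects:
        if proj in blacklist:
            continue
        if whitelist and proj not in whitelist:
            continue
        selected.append(proj)
    return selected
```

This mirrors how `git_clone_and_install` makes three passes: requirements first (whitelisted), then everything except requirements and the core project (blacklisted), then the core project alone.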
1973=== added directory 'hooks/charmhelpers/contrib/python'
1974=== added file 'hooks/charmhelpers/contrib/python/__init__.py'
1975=== added file 'hooks/charmhelpers/contrib/python/packages.py'
1976--- hooks/charmhelpers/contrib/python/packages.py 1970-01-01 00:00:00 +0000
1977+++ hooks/charmhelpers/contrib/python/packages.py 2014-12-11 17:57:05 +0000
1978@@ -0,0 +1,77 @@
1979+#!/usr/bin/env python
1980+# coding: utf-8
1981+
1982+__author__ = "Jorge Niedbalski <jorge.niedbalski@canonical.com>"
1983+
1984+from charmhelpers.fetch import apt_install, apt_update
1985+from charmhelpers.core.hookenv import log
1986+
1987+try:
1988+ from pip import main as pip_execute
1989+except ImportError:
1990+ apt_update()
1991+ apt_install('python-pip')
1992+ from pip import main as pip_execute
1993+
1994+
1995+def parse_options(given, available):
1996+    """Yield --key=value flags for each given option present in available"""
1997+ for key, value in sorted(given.items()):
1998+ if key in available:
1999+ yield "--{0}={1}".format(key, value)
2000+
2001+
2002+def pip_install_requirements(requirements, **options):
2003+    """Install packages from a pip requirements file."""
2004+ command = ["install"]
2005+
2006+ available_options = ('proxy', 'src', 'log', )
2007+ for option in parse_options(options, available_options):
2008+ command.append(option)
2009+
2010+ command.append("-r {0}".format(requirements))
2011+ log("Installing from file: {} with options: {}".format(requirements,
2012+ command))
2013+ pip_execute(command)
2014+
2015+
2016+def pip_install(package, fatal=False, **options):
2017+ """Install a python package"""
2018+ command = ["install"]
2019+
2020+ available_options = ('proxy', 'src', 'log', "index-url", )
2021+ for option in parse_options(options, available_options):
2022+ command.append(option)
2023+
2024+ if isinstance(package, list):
2025+ command.extend(package)
2026+ else:
2027+ command.append(package)
2028+
2029+ log("Installing {} package with options: {}".format(package,
2030+ command))
2031+ pip_execute(command)
2032+
2033+
2034+def pip_uninstall(package, **options):
2035+ """Uninstall a python package"""
2036+ command = ["uninstall", "-q", "-y"]
2037+
2038+ available_options = ('proxy', 'log', )
2039+ for option in parse_options(options, available_options):
2040+ command.append(option)
2041+
2042+ if isinstance(package, list):
2043+ command.extend(package)
2044+ else:
2045+ command.append(package)
2046+
2047+ log("Uninstalling {} package with options: {}".format(package,
2048+ command))
2049+ pip_execute(command)
2050+
2051+
2052+def pip_list():
2053+    """Return the list of currently installed python packages
2054+ """
2055+ return pip_execute(["list"])
2056
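The `parse_options` generator in the new module filters keyword arguments against a per-command whitelist and emits pip-style flags in sorted key order; unknown options are silently dropped. A standalone copy of the same logic behaves like this:

```python
def parse_options(given, available):
    """Yield '--key=value' strings, in sorted key order, for each
    given option whose key appears in the available whitelist
    (same logic as contrib/python/packages.py)."""
    for key, value in sorted(given.items()):
        if key in available:
            yield "--{0}={1}".format(key, value)
```

So `pip_install('foo', proxy='http://squid:3128', bogus=1)` would pass only `--proxy=http://squid:3128` through to pip, since `bogus` is not in that command's whitelist.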
2057=== modified file 'hooks/charmhelpers/contrib/storage/linux/ceph.py'
2058--- hooks/charmhelpers/contrib/storage/linux/ceph.py 2014-08-13 13:12:30 +0000
2059+++ hooks/charmhelpers/contrib/storage/linux/ceph.py 2014-12-11 17:57:05 +0000
2060@@ -16,19 +16,18 @@
2061 from subprocess import (
2062 check_call,
2063 check_output,
2064- CalledProcessError
2065+ CalledProcessError,
2066 )
2067-
2068 from charmhelpers.core.hookenv import (
2069 relation_get,
2070 relation_ids,
2071 related_units,
2072 log,
2073+ DEBUG,
2074 INFO,
2075 WARNING,
2076- ERROR
2077+ ERROR,
2078 )
2079-
2080 from charmhelpers.core.host import (
2081 mount,
2082 mounts,
2083@@ -37,7 +36,6 @@
2084 service_running,
2085 umount,
2086 )
2087-
2088 from charmhelpers.fetch import (
2089 apt_install,
2090 )
2091@@ -56,99 +54,85 @@
2092
2093
2094 def install():
2095- ''' Basic Ceph client installation '''
2096+ """Basic Ceph client installation."""
2097 ceph_dir = "/etc/ceph"
2098 if not os.path.exists(ceph_dir):
2099 os.mkdir(ceph_dir)
2100+
2101 apt_install('ceph-common', fatal=True)
2102
2103
2104 def rbd_exists(service, pool, rbd_img):
2105- ''' Check to see if a RADOS block device exists '''
2106+ """Check to see if a RADOS block device exists."""
2107 try:
2108- out = check_output(['rbd', 'list', '--id', service,
2109- '--pool', pool])
2110+ out = check_output(['rbd', 'list', '--id',
2111+ service, '--pool', pool]).decode('UTF-8')
2112 except CalledProcessError:
2113 return False
2114- else:
2115- return rbd_img in out
2116+
2117+ return rbd_img in out
2118
2119
2120 def create_rbd_image(service, pool, image, sizemb):
2121- ''' Create a new RADOS block device '''
2122- cmd = [
2123- 'rbd',
2124- 'create',
2125- image,
2126- '--size',
2127- str(sizemb),
2128- '--id',
2129- service,
2130- '--pool',
2131- pool
2132- ]
2133+ """Create a new RADOS block device."""
2134+ cmd = ['rbd', 'create', image, '--size', str(sizemb), '--id', service,
2135+ '--pool', pool]
2136 check_call(cmd)
2137
2138
2139 def pool_exists(service, name):
2140- ''' Check to see if a RADOS pool already exists '''
2141+ """Check to see if a RADOS pool already exists."""
2142 try:
2143- out = check_output(['rados', '--id', service, 'lspools'])
2144+ out = check_output(['rados', '--id', service,
2145+ 'lspools']).decode('UTF-8')
2146 except CalledProcessError:
2147 return False
2148- else:
2149- return name in out
2150+
2151+ return name in out
2152
2153
2154 def get_osds(service):
2155- '''
2156- Return a list of all Ceph Object Storage Daemons
2157- currently in the cluster
2158- '''
2159+ """Return a list of all Ceph Object Storage Daemons currently in the
2160+ cluster.
2161+ """
2162 version = ceph_version()
2163 if version and version >= '0.56':
2164 return json.loads(check_output(['ceph', '--id', service,
2165- 'osd', 'ls', '--format=json']))
2166- else:
2167- return None
2168-
2169-
2170-def create_pool(service, name, replicas=2):
2171- ''' Create a new RADOS pool '''
2172+ 'osd', 'ls',
2173+ '--format=json']).decode('UTF-8'))
2174+
2175+ return None
2176+
2177+
2178+def create_pool(service, name, replicas=3):
2179+ """Create a new RADOS pool."""
2180 if pool_exists(service, name):
2181 log("Ceph pool {} already exists, skipping creation".format(name),
2182 level=WARNING)
2183 return
2184+
2185 # Calculate the number of placement groups based
2186 # on upstream recommended best practices.
2187 osds = get_osds(service)
2188 if osds:
2189- pgnum = (len(osds) * 100 / replicas)
2190+ pgnum = (len(osds) * 100 // replicas)
2191 else:
2192 # NOTE(james-page): Default to 200 for older ceph versions
2193 # which don't support OSD query from cli
2194 pgnum = 200
2195- cmd = [
2196- 'ceph', '--id', service,
2197- 'osd', 'pool', 'create',
2198- name, str(pgnum)
2199- ]
2200+
2201+ cmd = ['ceph', '--id', service, 'osd', 'pool', 'create', name, str(pgnum)]
2202 check_call(cmd)
2203- cmd = [
2204- 'ceph', '--id', service,
2205- 'osd', 'pool', 'set', name,
2206- 'size', str(replicas)
2207- ]
2208+
2209+ cmd = ['ceph', '--id', service, 'osd', 'pool', 'set', name, 'size',
2210+ str(replicas)]
2211 check_call(cmd)
2212
2213
2214 def delete_pool(service, name):
2215- ''' Delete a RADOS pool from ceph '''
2216- cmd = [
2217- 'ceph', '--id', service,
2218- 'osd', 'pool', 'delete',
2219- name, '--yes-i-really-really-mean-it'
2220- ]
2221+ """Delete a RADOS pool from ceph."""
2222+ cmd = ['ceph', '--id', service, 'osd', 'pool', 'delete', name,
2223+ '--yes-i-really-really-mean-it']
2224 check_call(cmd)
2225
2226
2227@@ -161,44 +145,43 @@
2228
2229
2230 def create_keyring(service, key):
2231- ''' Create a new Ceph keyring containing key'''
2232+ """Create a new Ceph keyring containing key."""
2233 keyring = _keyring_path(service)
2234 if os.path.exists(keyring):
2235- log('ceph: Keyring exists at %s.' % keyring, level=WARNING)
2236+ log('Ceph keyring exists at %s.' % keyring, level=WARNING)
2237 return
2238- cmd = [
2239- 'ceph-authtool',
2240- keyring,
2241- '--create-keyring',
2242- '--name=client.{}'.format(service),
2243- '--add-key={}'.format(key)
2244- ]
2245+
2246+ cmd = ['ceph-authtool', keyring, '--create-keyring',
2247+ '--name=client.{}'.format(service), '--add-key={}'.format(key)]
2248 check_call(cmd)
2249- log('ceph: Created new ring at %s.' % keyring, level=INFO)
2250+ log('Created new ceph keyring at %s.' % keyring, level=DEBUG)
2251
2252
2253 def create_key_file(service, key):
2254- ''' Create a file containing key '''
2255+ """Create a file containing key."""
2256 keyfile = _keyfile_path(service)
2257 if os.path.exists(keyfile):
2258- log('ceph: Keyfile exists at %s.' % keyfile, level=WARNING)
2259+ log('Keyfile exists at %s.' % keyfile, level=WARNING)
2260 return
2261+
2262 with open(keyfile, 'w') as fd:
2263 fd.write(key)
2264- log('ceph: Created new keyfile at %s.' % keyfile, level=INFO)
2265+
2266+ log('Created new keyfile at %s.' % keyfile, level=INFO)
2267
2268
2269 def get_ceph_nodes():
2270- ''' Query named relation 'ceph' to detemine current nodes '''
2271+ """Query named relation 'ceph' to determine current nodes."""
2272 hosts = []
2273 for r_id in relation_ids('ceph'):
2274 for unit in related_units(r_id):
2275 hosts.append(relation_get('private-address', unit=unit, rid=r_id))
2276+
2277 return hosts
2278
2279
2280 def configure(service, key, auth, use_syslog):
2281- ''' Perform basic configuration of Ceph '''
2282+ """Perform basic configuration of Ceph."""
2283 create_keyring(service, key)
2284 create_key_file(service, key)
2285 hosts = get_ceph_nodes()
2286@@ -211,17 +194,17 @@
2287
2288
2289 def image_mapped(name):
2290- ''' Determine whether a RADOS block device is mapped locally '''
2291+ """Determine whether a RADOS block device is mapped locally."""
2292 try:
2293- out = check_output(['rbd', 'showmapped'])
2294+ out = check_output(['rbd', 'showmapped']).decode('UTF-8')
2295 except CalledProcessError:
2296 return False
2297- else:
2298- return name in out
2299+
2300+ return name in out
2301
2302
2303 def map_block_storage(service, pool, image):
2304- ''' Map a RADOS block device for local use '''
2305+ """Map a RADOS block device for local use."""
2306 cmd = [
2307 'rbd',
2308 'map',
2309@@ -235,31 +218,32 @@
2310
2311
2312 def filesystem_mounted(fs):
2313- ''' Determine whether a filesytems is already mounted '''
2314+ """Determine whether a filesytems is already mounted."""
2315 return fs in [f for f, m in mounts()]
2316
2317
2318 def make_filesystem(blk_device, fstype='ext4', timeout=10):
2319- ''' Make a new filesystem on the specified block device '''
2320+ """Make a new filesystem on the specified block device."""
2321 count = 0
2322 e_noent = os.errno.ENOENT
2323 while not os.path.exists(blk_device):
2324 if count >= timeout:
2325- log('ceph: gave up waiting on block device %s' % blk_device,
2326+ log('Gave up waiting on block device %s' % blk_device,
2327 level=ERROR)
2328 raise IOError(e_noent, os.strerror(e_noent), blk_device)
2329- log('ceph: waiting for block device %s to appear' % blk_device,
2330- level=INFO)
2331+
2332+ log('Waiting for block device %s to appear' % blk_device,
2333+ level=DEBUG)
2334 count += 1
2335 time.sleep(1)
2336 else:
2337- log('ceph: Formatting block device %s as filesystem %s.' %
2338+ log('Formatting block device %s as filesystem %s.' %
2339 (blk_device, fstype), level=INFO)
2340 check_call(['mkfs', '-t', fstype, blk_device])
2341
2342
2343 def place_data_on_block_device(blk_device, data_src_dst):
2344- ''' Migrate data in data_src_dst to blk_device and then remount '''
2345+ """Migrate data in data_src_dst to blk_device and then remount."""
2346 # mount block device into /mnt
2347 mount(blk_device, '/mnt')
2348 # copy data to /mnt
2349@@ -279,8 +263,8 @@
2350
2351 # TODO: re-use
2352 def modprobe(module):
2353- ''' Load a kernel module and configure for auto-load on reboot '''
2354- log('ceph: Loading kernel module', level=INFO)
2355+ """Load a kernel module and configure for auto-load on reboot."""
2356+ log('Loading kernel module', level=INFO)
2357 cmd = ['modprobe', module]
2358 check_call(cmd)
2359 with open('/etc/modules', 'r+') as modules:
2360@@ -289,7 +273,7 @@
2361
2362
2363 def copy_files(src, dst, symlinks=False, ignore=None):
2364- ''' Copy files from src to dst '''
2365+ """Copy files from src to dst."""
2366 for item in os.listdir(src):
2367 s = os.path.join(src, item)
2368 d = os.path.join(dst, item)
2369@@ -300,9 +284,9 @@
2370
2371
2372 def ensure_ceph_storage(service, pool, rbd_img, sizemb, mount_point,
2373- blk_device, fstype, system_services=[]):
2374- """
2375- NOTE: This function must only be called from a single service unit for
2376+ blk_device, fstype, system_services=[],
2377+ replicas=3):
2378+ """NOTE: This function must only be called from a single service unit for
2379 the same rbd_img otherwise data loss will occur.
2380
2381 Ensures given pool and RBD image exists, is mapped to a block device,
2382@@ -316,15 +300,16 @@
2383 """
2384 # Ensure pool, RBD image, RBD mappings are in place.
2385 if not pool_exists(service, pool):
2386- log('ceph: Creating new pool {}.'.format(pool))
2387- create_pool(service, pool)
2388+ log('Creating new pool {}.'.format(pool), level=INFO)
2389+ create_pool(service, pool, replicas=replicas)
2390
2391 if not rbd_exists(service, pool, rbd_img):
2392- log('ceph: Creating RBD image ({}).'.format(rbd_img))
2393+ log('Creating RBD image ({}).'.format(rbd_img), level=INFO)
2394 create_rbd_image(service, pool, rbd_img, sizemb)
2395
2396 if not image_mapped(rbd_img):
2397- log('ceph: Mapping RBD Image {} as a Block Device.'.format(rbd_img))
2398+ log('Mapping RBD Image {} as a Block Device.'.format(rbd_img),
2399+ level=INFO)
2400 map_block_storage(service, pool, rbd_img)
2401
2402 # make file system
2403@@ -339,45 +324,47 @@
2404
2405 for svc in system_services:
2406 if service_running(svc):
2407- log('ceph: Stopping services {} prior to migrating data.'
2408- .format(svc))
2409+ log('Stopping services {} prior to migrating data.'
2410+ .format(svc), level=DEBUG)
2411 service_stop(svc)
2412
2413 place_data_on_block_device(blk_device, mount_point)
2414
2415 for svc in system_services:
2416- log('ceph: Starting service {} after migrating data.'
2417- .format(svc))
2418+ log('Starting service {} after migrating data.'
2419+ .format(svc), level=DEBUG)
2420 service_start(svc)
2421
2422
2423 def ensure_ceph_keyring(service, user=None, group=None):
2424- '''
2425- Ensures a ceph keyring is created for a named service
2426- and optionally ensures user and group ownership.
2427+ """Ensures a ceph keyring is created for a named service and optionally
2428+ ensures user and group ownership.
2429
2430 Returns False if no ceph key is available in relation state.
2431- '''
2432+ """
2433 key = None
2434 for rid in relation_ids('ceph'):
2435 for unit in related_units(rid):
2436 key = relation_get('key', rid=rid, unit=unit)
2437 if key:
2438 break
2439+
2440 if not key:
2441 return False
2442+
2443 create_keyring(service=service, key=key)
2444 keyring = _keyring_path(service)
2445 if user and group:
2446 check_call(['chown', '%s.%s' % (user, group), keyring])
2447+
2448 return True
2449
2450
2451 def ceph_version():
2452- ''' Retrieve the local version of ceph '''
2453+ """Retrieve the local version of ceph."""
2454 if os.path.exists('/usr/bin/ceph'):
2455 cmd = ['ceph', '-v']
2456- output = check_output(cmd)
2457+ output = check_output(cmd).decode('US-ASCII')
2458 output = output.split()
2459 if len(output) > 3:
2460 return output[2]
2461
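The `create_pool` change switches to floor division (`//`) so the placement-group calculation keeps returning an integer under Python 3, and bumps the default replica count from 2 to 3. The arithmetic, pulled out as a hypothetical helper for illustration:

```python
def placement_groups(osd_count, replicas=3):
    """Placement-group count as computed by create_pool: roughly 100 PGs
    per OSD divided by the replica count (floor division, so the result
    is an int on both Python 2 and 3); falls back to 200 when the OSD
    list cannot be queried on older ceph releases."""
    if osd_count:
        return osd_count * 100 // replicas
    return 200
```

With true division (`/`) the Python 3 result would be a float, which would then be stringified as e.g. `'200.0'` in the `ceph osd pool create` command line.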
2462=== modified file 'hooks/charmhelpers/contrib/storage/linux/loopback.py'
2463--- hooks/charmhelpers/contrib/storage/linux/loopback.py 2014-02-18 11:37:37 +0000
2464+++ hooks/charmhelpers/contrib/storage/linux/loopback.py 2014-12-11 17:57:05 +0000
2465@@ -1,12 +1,12 @@
2466-
2467 import os
2468 import re
2469-
2470 from subprocess import (
2471 check_call,
2472 check_output,
2473 )
2474
2475+import six
2476+
2477
2478 ##################################################
2479 # loopback device helpers.
2480@@ -37,7 +37,7 @@
2481 '''
2482 file_path = os.path.abspath(file_path)
2483 check_call(['losetup', '--find', file_path])
2484- for d, f in loopback_devices().iteritems():
2485+ for d, f in six.iteritems(loopback_devices()):
2486 if f == file_path:
2487 return d
2488
2489@@ -51,7 +51,7 @@
2490
2491 :returns: str: Full path to the ensured loopback device (eg, /dev/loop0)
2492 '''
2493- for d, f in loopback_devices().iteritems():
2494+ for d, f in six.iteritems(loopback_devices()):
2495 if f == path:
2496 return d
2497
2498
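The loopback changes replace `dict.iteritems()`, which no longer exists on Python 3, with `six.iteritems()`. The shim six provides amounts to the following sketch (a stand-in to show the behaviour, not the six source itself):

```python
import sys

def iteritems(d):
    """Minimal stand-in for six.iteritems: an iterator over (key, value)
    pairs that works on both Python 2 (d.iteritems()) and
    Python 3 (iter(d.items()))."""
    if sys.version_info[0] < 3:
        return d.iteritems()
    return iter(d.items())
```

Loop bodies such as `for d, f in six.iteritems(loopback_devices()):` are then identical under both interpreters.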
2499=== modified file 'hooks/charmhelpers/contrib/storage/linux/lvm.py'
2500--- hooks/charmhelpers/contrib/storage/linux/lvm.py 2014-05-19 11:42:57 +0000
2501+++ hooks/charmhelpers/contrib/storage/linux/lvm.py 2014-12-11 17:57:05 +0000
2502@@ -61,6 +61,7 @@
2503 vg = None
2504 pvd = check_output(['pvdisplay', block_device]).splitlines()
2505 for l in pvd:
2506+ l = l.decode('UTF-8')
2507 if l.strip().startswith('VG Name'):
2508 vg = ' '.join(l.strip().split()[2:])
2509 return vg
2510
2511=== modified file 'hooks/charmhelpers/contrib/storage/linux/utils.py'
2512--- hooks/charmhelpers/contrib/storage/linux/utils.py 2014-09-12 10:55:03 +0000
2513+++ hooks/charmhelpers/contrib/storage/linux/utils.py 2014-12-11 17:57:05 +0000
2514@@ -30,7 +30,8 @@
2515 # sometimes sgdisk exits non-zero; this is OK, dd will clean up
2516 call(['sgdisk', '--zap-all', '--mbrtogpt',
2517 '--clear', block_device])
2518- dev_end = check_output(['blockdev', '--getsz', block_device])
2519+ dev_end = check_output(['blockdev', '--getsz',
2520+ block_device]).decode('UTF-8')
2521 gpt_end = int(dev_end.split()[0]) - 100
2522 check_call(['dd', 'if=/dev/zero', 'of=%s' % (block_device),
2523 'bs=1M', 'count=1'])
2524@@ -47,7 +48,7 @@
2525 it doesn't.
2526 '''
2527 is_partition = bool(re.search(r".*[0-9]+\b", device))
2528- out = check_output(['mount'])
2529+ out = check_output(['mount']).decode('UTF-8')
2530 if is_partition:
2531 return bool(re.search(device + r"\b", out))
2532 return bool(re.search(device + r"[0-9]+\b", out))
2533
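The recurring pattern across these storage hunks is appending `.decode('UTF-8')` to `subprocess.check_output(...)`: under Python 3 `check_output` returns `bytes`, so the output must be decoded before regex matching or string splitting. A minimal sketch of the pattern (using a trivial child process rather than `mount` or `blockdev`):

```python
import subprocess
import sys

# check_output() returns bytes on Python 3; decode to str before any
# text handling, as the charm-helpers hunks now do consistently.
out = subprocess.check_output([sys.executable, '-c', 'print("ok")'])
text = out.decode('UTF-8').strip()
```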
2534=== modified file 'hooks/charmhelpers/core/fstab.py'
2535--- hooks/charmhelpers/core/fstab.py 2014-08-13 13:12:30 +0000
2536+++ hooks/charmhelpers/core/fstab.py 2014-12-11 17:57:05 +0000
2537@@ -3,10 +3,11 @@
2538
2539 __author__ = 'Jorge Niedbalski R. <jorge.niedbalski@canonical.com>'
2540
2541+import io
2542 import os
2543
2544
2545-class Fstab(file):
2546+class Fstab(io.FileIO):
2547 """This class extends file in order to implement a file reader/writer
2548 for file `/etc/fstab`
2549 """
2550@@ -24,8 +25,8 @@
2551 options = "defaults"
2552
2553 self.options = options
2554- self.d = d
2555- self.p = p
2556+ self.d = int(d)
2557+ self.p = int(p)
2558
2559 def __eq__(self, o):
2560 return str(self) == str(o)
2561@@ -45,7 +46,7 @@
2562 self._path = path
2563 else:
2564 self._path = self.DEFAULT_PATH
2565- file.__init__(self, self._path, 'r+')
2566+ super(Fstab, self).__init__(self._path, 'rb+')
2567
2568 def _hydrate_entry(self, line):
2569 # NOTE: use split with no arguments to split on any
2570@@ -58,8 +59,9 @@
2571 def entries(self):
2572 self.seek(0)
2573 for line in self.readlines():
2574+ line = line.decode('us-ascii')
2575 try:
2576- if not line.startswith("#"):
2577+ if line.strip() and not line.startswith("#"):
2578 yield self._hydrate_entry(line)
2579 except ValueError:
2580 pass
2581@@ -75,14 +77,14 @@
2582 if self.get_entry_by_attr('device', entry.device):
2583 return False
2584
2585- self.write(str(entry) + '\n')
2586+ self.write((str(entry) + '\n').encode('us-ascii'))
2587 self.truncate()
2588 return entry
2589
2590 def remove_entry(self, entry):
2591 self.seek(0)
2592
2593- lines = self.readlines()
2594+ lines = [l.decode('us-ascii') for l in self.readlines()]
2595
2596 found = False
2597 for index, line in enumerate(lines):
2598@@ -97,7 +99,7 @@
2599 lines.remove(line)
2600
2601 self.seek(0)
2602- self.write(''.join(lines))
2603+ self.write(''.join(lines).encode('us-ascii'))
2604 self.truncate()
2605 return True
2606
2607
2608=== modified file 'hooks/charmhelpers/core/hookenv.py'
2609--- hooks/charmhelpers/core/hookenv.py 2014-10-01 13:53:14 +0000
2610+++ hooks/charmhelpers/core/hookenv.py 2014-12-11 17:57:05 +0000
2611@@ -9,9 +9,14 @@
2612 import yaml
2613 import subprocess
2614 import sys
2615-import UserDict
2616 from subprocess import CalledProcessError
2617
2618+import six
2619+if not six.PY3:
2620+ from UserDict import UserDict
2621+else:
2622+ from collections import UserDict
2623+
2624 CRITICAL = "CRITICAL"
2625 ERROR = "ERROR"
2626 WARNING = "WARNING"
2627@@ -63,16 +68,18 @@
2628 command = ['juju-log']
2629 if level:
2630 command += ['-l', level]
2631+ if not isinstance(message, six.string_types):
2632+ message = repr(message)
2633 command += [message]
2634 subprocess.call(command)
2635
2636
2637-class Serializable(UserDict.IterableUserDict):
2638+class Serializable(UserDict):
2639 """Wrapper, an object that can be serialized to yaml or json"""
2640
2641 def __init__(self, obj):
2642 # wrap the object
2643- UserDict.IterableUserDict.__init__(self)
2644+ UserDict.__init__(self)
2645 self.data = obj
2646
2647 def __getattr__(self, attr):
2648@@ -214,6 +221,12 @@
2649 except KeyError:
2650 return (self._prev_dict or {})[key]
2651
2652+ def keys(self):
2653+ prev_keys = []
2654+ if self._prev_dict is not None:
2655+            prev_keys = list(self._prev_dict.keys())
2656+ return list(set(prev_keys + list(dict.keys(self))))
2657+
2658 def load_previous(self, path=None):
2659 """Load previous copy of config from disk.
2660
2661@@ -263,7 +276,7 @@
2662
2663 """
2664 if self._prev_dict:
2665- for k, v in self._prev_dict.iteritems():
2666+ for k, v in six.iteritems(self._prev_dict):
2667 if k not in self:
2668 self[k] = v
2669 with open(self.path, 'w') as f:
2670@@ -278,7 +291,8 @@
2671 config_cmd_line.append(scope)
2672 config_cmd_line.append('--format=json')
2673 try:
2674- config_data = json.loads(subprocess.check_output(config_cmd_line))
2675+ config_data = json.loads(
2676+ subprocess.check_output(config_cmd_line).decode('UTF-8'))
2677 if scope is not None:
2678 return config_data
2679 return Config(config_data)
2680@@ -297,10 +311,10 @@
2681 if unit:
2682 _args.append(unit)
2683 try:
2684- return json.loads(subprocess.check_output(_args))
2685+ return json.loads(subprocess.check_output(_args).decode('UTF-8'))
2686 except ValueError:
2687 return None
2688- except CalledProcessError, e:
2689+ except CalledProcessError as e:
2690 if e.returncode == 2:
2691 return None
2692 raise
2693@@ -312,7 +326,7 @@
2694 relation_cmd_line = ['relation-set']
2695 if relation_id is not None:
2696 relation_cmd_line.extend(('-r', relation_id))
2697- for k, v in (relation_settings.items() + kwargs.items()):
2698+ for k, v in (list(relation_settings.items()) + list(kwargs.items())):
2699 if v is None:
2700 relation_cmd_line.append('{}='.format(k))
2701 else:
2702@@ -329,7 +343,8 @@
2703 relid_cmd_line = ['relation-ids', '--format=json']
2704 if reltype is not None:
2705 relid_cmd_line.append(reltype)
2706- return json.loads(subprocess.check_output(relid_cmd_line)) or []
2707+ return json.loads(
2708+ subprocess.check_output(relid_cmd_line).decode('UTF-8')) or []
2709 return []
2710
2711
2712@@ -340,7 +355,8 @@
2713 units_cmd_line = ['relation-list', '--format=json']
2714 if relid is not None:
2715 units_cmd_line.extend(('-r', relid))
2716- return json.loads(subprocess.check_output(units_cmd_line)) or []
2717+ return json.loads(
2718+ subprocess.check_output(units_cmd_line).decode('UTF-8')) or []
2719
2720
2721 @cached
2722@@ -380,21 +396,31 @@
2723
2724
2725 @cached
2726+def metadata():
2727+ """Get the current charm metadata.yaml contents as a python object"""
2728+ with open(os.path.join(charm_dir(), 'metadata.yaml')) as md:
2729+ return yaml.safe_load(md)
2730+
2731+
2732+@cached
2733 def relation_types():
2734 """Get a list of relation types supported by this charm"""
2735- charmdir = os.environ.get('CHARM_DIR', '')
2736- mdf = open(os.path.join(charmdir, 'metadata.yaml'))
2737- md = yaml.safe_load(mdf)
2738 rel_types = []
2739+ md = metadata()
2740 for key in ('provides', 'requires', 'peers'):
2741 section = md.get(key)
2742 if section:
2743 rel_types.extend(section.keys())
2744- mdf.close()
2745 return rel_types
2746
2747
2748 @cached
2749+def charm_name():
2750+ """Get the name of the current charm as is specified on metadata.yaml"""
2751+ return metadata().get('name')
2752+
2753+
2754+@cached
2755 def relations():
2756 """Get a nested dictionary of relation data for all related units"""
2757 rels = {}
2758@@ -449,7 +475,7 @@
2759 """Get the unit ID for the remote unit"""
2760 _args = ['unit-get', '--format=json', attribute]
2761 try:
2762- return json.loads(subprocess.check_output(_args))
2763+ return json.loads(subprocess.check_output(_args).decode('UTF-8'))
2764 except ValueError:
2765 return None
2766
2767
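The new `Config.keys()` merges the keys persisted from the previous hook run with the live dict's keys, so iteration sees options that were set previously but not yet touched in this hook. A cut-down sketch of just that method (hypothetical class, not the real hookenv one; note `_prev_dict.keys()` must be materialised as a list so the `+` concatenation also works on Python 3):

```python
class Config(dict):
    """Cut-down illustration of hookenv.Config's keys() override."""

    def __init__(self, *args, **kw):
        super(Config, self).__init__(*args, **kw)
        self._prev_dict = None  # populated by load_previous() in the real class

    def keys(self):
        prev_keys = []
        if self._prev_dict is not None:
            prev_keys = list(self._prev_dict.keys())
        # Union of previous-run keys and current keys, deduplicated.
        return list(set(prev_keys + list(dict.keys(self))))
```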
2768=== modified file 'hooks/charmhelpers/core/host.py'
2769--- hooks/charmhelpers/core/host.py 2014-10-01 13:53:14 +0000
2770+++ hooks/charmhelpers/core/host.py 2014-12-11 17:57:05 +0000
2771@@ -6,19 +6,20 @@
2772 # Matthew Wedgwood <matthew.wedgwood@canonical.com>
2773
2774 import os
2775+import re
2776 import pwd
2777 import grp
2778 import random
2779 import string
2780 import subprocess
2781 import hashlib
2782-import shutil
2783 from contextlib import contextmanager
2784-
2785 from collections import OrderedDict
2786
2787-from hookenv import log
2788-from fstab import Fstab
2789+import six
2790+
2791+from .hookenv import log
2792+from .fstab import Fstab
2793
2794
2795 def service_start(service_name):
2796@@ -54,7 +55,9 @@
2797 def service_running(service):
2798 """Determine whether a system service is running"""
2799 try:
2800- output = subprocess.check_output(['service', service, 'status'], stderr=subprocess.STDOUT)
2801+ output = subprocess.check_output(
2802+ ['service', service, 'status'],
2803+ stderr=subprocess.STDOUT).decode('UTF-8')
2804 except subprocess.CalledProcessError:
2805 return False
2806 else:
2807@@ -67,7 +70,9 @@
2808 def service_available(service_name):
2809 """Determine whether a system service is available"""
2810 try:
2811- subprocess.check_output(['service', service_name, 'status'], stderr=subprocess.STDOUT)
2812+ subprocess.check_output(
2813+ ['service', service_name, 'status'],
2814+ stderr=subprocess.STDOUT).decode('UTF-8')
2815 except subprocess.CalledProcessError as e:
2816 return 'unrecognized service' not in e.output
2817 else:
2818@@ -96,6 +101,26 @@
2819 return user_info
2820
2821
2822+def add_group(group_name, system_group=False):
2823+ """Add a group to the system"""
2824+ try:
2825+ group_info = grp.getgrnam(group_name)
2826+ log('group {0} already exists!'.format(group_name))
2827+ except KeyError:
2828+ log('creating group {0}'.format(group_name))
2829+ cmd = ['addgroup']
2830+ if system_group:
2831+ cmd.append('--system')
2832+ else:
2833+ cmd.extend([
2834+ '--group',
2835+ ])
2836+ cmd.append(group_name)
2837+ subprocess.check_call(cmd)
2838+ group_info = grp.getgrnam(group_name)
2839+ return group_info
2840+
2841+
2842 def add_user_to_group(username, group):
2843 """Add a user to a group"""
2844 cmd = [
2845@@ -115,7 +140,7 @@
2846 cmd.append(from_path)
2847 cmd.append(to_path)
2848 log(" ".join(cmd))
2849- return subprocess.check_output(cmd).strip()
2850+ return subprocess.check_output(cmd).decode('UTF-8').strip()
2851
2852
2853 def symlink(source, destination):
2854@@ -130,7 +155,7 @@
2855 subprocess.check_call(cmd)
2856
2857
2858-def mkdir(path, owner='root', group='root', perms=0555, force=False):
2859+def mkdir(path, owner='root', group='root', perms=0o555, force=False):
2860 """Create a directory"""
2861 log("Making dir {} {}:{} {:o}".format(path, owner, group,
2862 perms))
2863@@ -146,7 +171,7 @@
2864 os.chown(realpath, uid, gid)
2865
2866
2867-def write_file(path, content, owner='root', group='root', perms=0444):
2868+def write_file(path, content, owner='root', group='root', perms=0o444):
2869 """Create or overwrite a file with the contents of a string"""
2870 log("Writing file {} {}:{} {:o}".format(path, owner, group, perms))
2871 uid = pwd.getpwnam(owner).pw_uid
2872@@ -177,7 +202,7 @@
2873 cmd_args.extend([device, mountpoint])
2874 try:
2875 subprocess.check_output(cmd_args)
2876- except subprocess.CalledProcessError, e:
2877+ except subprocess.CalledProcessError as e:
2878 log('Error mounting {} at {}\n{}'.format(device, mountpoint, e.output))
2879 return False
2880
2881@@ -191,7 +216,7 @@
2882 cmd_args = ['umount', mountpoint]
2883 try:
2884 subprocess.check_output(cmd_args)
2885- except subprocess.CalledProcessError, e:
2886+ except subprocess.CalledProcessError as e:
2887 log('Error unmounting {}\n{}'.format(mountpoint, e.output))
2888 return False
2889
2890@@ -218,8 +243,8 @@
2891 """
2892 if os.path.exists(path):
2893 h = getattr(hashlib, hash_type)()
2894- with open(path, 'r') as source:
2895- h.update(source.read()) # IGNORE:E1101 - it does have update
2896+ with open(path, 'rb') as source:
2897+ h.update(source.read())
2898 return h.hexdigest()
2899 else:
2900 return None
2901@@ -297,7 +322,7 @@
2902 if length is None:
2903 length = random.choice(range(35, 45))
2904 alphanumeric_chars = [
2905- l for l in (string.letters + string.digits)
2906+ l for l in (string.ascii_letters + string.digits)
2907 if l not in 'l0QD1vAEIOUaeiou']
2908 random_chars = [
2909 random.choice(alphanumeric_chars) for _ in range(length)]
2910@@ -306,18 +331,24 @@
2911
2912 def list_nics(nic_type):
2913 '''Return a list of nics of given type(s)'''
2914- if isinstance(nic_type, basestring):
2915+ if isinstance(nic_type, six.string_types):
2916 int_types = [nic_type]
2917 else:
2918 int_types = nic_type
2919 interfaces = []
2920 for int_type in int_types:
2921 cmd = ['ip', 'addr', 'show', 'label', int_type + '*']
2922- ip_output = subprocess.check_output(cmd).split('\n')
2923+ ip_output = subprocess.check_output(cmd).decode('UTF-8').split('\n')
2924 ip_output = (line for line in ip_output if line)
2925 for line in ip_output:
2926 if line.split()[1].startswith(int_type):
2927- interfaces.append(line.split()[1].replace(":", ""))
2928+ matched = re.search('.*: (bond[0-9]+\.[0-9]+)@.*', line)
2929+ if matched:
2930+ interface = matched.groups()[0]
2931+ else:
2932+ interface = line.split()[1].replace(":", "")
2933+ interfaces.append(interface)
2934+
2935 return interfaces
2936
2937
2938@@ -329,7 +360,7 @@
2939
2940 def get_nic_mtu(nic):
2941 cmd = ['ip', 'addr', 'show', nic]
2942- ip_output = subprocess.check_output(cmd).split('\n')
2943+ ip_output = subprocess.check_output(cmd).decode('UTF-8').split('\n')
2944 mtu = ""
2945 for line in ip_output:
2946 words = line.split()
2947@@ -340,7 +371,7 @@
2948
2949 def get_nic_hwaddr(nic):
2950 cmd = ['ip', '-o', '-0', 'addr', 'show', nic]
2951- ip_output = subprocess.check_output(cmd)
2952+ ip_output = subprocess.check_output(cmd).decode('UTF-8')
2953 hwaddr = ""
2954 words = ip_output.split()
2955 if 'link/ether' in words:
2956@@ -357,8 +388,8 @@
2957
2958 '''
2959 import apt_pkg
2960- from charmhelpers.fetch import apt_cache
2961 if not pkgcache:
2962+ from charmhelpers.fetch import apt_cache
2963 pkgcache = apt_cache()
2964 pkg = pkgcache[package]
2965 return apt_pkg.version_compare(pkg.current_ver.ver_str, revno)
2966
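The host.py hunks above repeatedly apply the same Python 3 pattern: read files and subprocess output as bytes, then decode explicitly, so hashes and string handling behave identically on both interpreters. A minimal standalone sketch of the file-hashing pattern (the `file_hash` name mirrors the helper being patched; the temp-file usage is purely illustrative):

```python
import hashlib
import os
import tempfile


def file_hash(path, hash_type='md5'):
    # Open in binary mode ('rb') so the digest is byte-exact on both
    # Python 2 and Python 3; text mode would decode on Python 3.
    if os.path.exists(path):
        h = getattr(hashlib, hash_type)()
        with open(path, 'rb') as source:
            h.update(source.read())
        return h.hexdigest()
    return None


# Usage: hash a throwaway file and cross-check against a direct digest.
with tempfile.NamedTemporaryFile(delete=False) as f:
    f.write(b"hello charm")
    tmp = f.name
assert file_hash(tmp, 'sha256') == hashlib.sha256(b"hello charm").hexdigest()
os.unlink(tmp)
```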
2967=== modified file 'hooks/charmhelpers/core/services/__init__.py'
2968--- hooks/charmhelpers/core/services/__init__.py 2014-08-13 13:12:30 +0000
2969+++ hooks/charmhelpers/core/services/__init__.py 2014-12-11 17:57:05 +0000
2970@@ -1,2 +1,2 @@
2971-from .base import *
2972-from .helpers import *
2973+from .base import * # NOQA
2974+from .helpers import * # NOQA
2975
2976=== modified file 'hooks/charmhelpers/core/services/helpers.py'
2977--- hooks/charmhelpers/core/services/helpers.py 2014-10-01 13:53:14 +0000
2978+++ hooks/charmhelpers/core/services/helpers.py 2014-12-11 17:57:05 +0000
2979@@ -196,7 +196,7 @@
2980 if not os.path.isabs(file_name):
2981 file_name = os.path.join(hookenv.charm_dir(), file_name)
2982 with open(file_name, 'w') as file_stream:
2983- os.fchmod(file_stream.fileno(), 0600)
2984+ os.fchmod(file_stream.fileno(), 0o600)
2985 yaml.dump(config_data, file_stream)
2986
2987 def read_context(self, file_name):
2988@@ -211,15 +211,19 @@
2989
2990 class TemplateCallback(ManagerCallback):
2991 """
2992- Callback class that will render a Jinja2 template, for use as a ready action.
2993-
2994- :param str source: The template source file, relative to `$CHARM_DIR/templates`
2995+ Callback class that will render a Jinja2 template, for use as a ready
2996+ action.
2997+
2998+ :param str source: The template source file, relative to
2999+ `$CHARM_DIR/templates`
3000+
3001 :param str target: The target to write the rendered template to
3002 :param str owner: The owner of the rendered file
3003 :param str group: The group of the rendered file
3004 :param int perms: The permissions of the rendered file
3005 """
3006- def __init__(self, source, target, owner='root', group='root', perms=0444):
3007+ def __init__(self, source, target,
3008+ owner='root', group='root', perms=0o444):
3009 self.source = source
3010 self.target = target
3011 self.owner = owner
3012
3013=== added file 'hooks/charmhelpers/core/sysctl.py'
3014--- hooks/charmhelpers/core/sysctl.py 1970-01-01 00:00:00 +0000
3015+++ hooks/charmhelpers/core/sysctl.py 2014-12-11 17:57:05 +0000
3016@@ -0,0 +1,34 @@
3017+#!/usr/bin/env python
3018+# -*- coding: utf-8 -*-
3019+
3020+__author__ = 'Jorge Niedbalski R. <jorge.niedbalski@canonical.com>'
3021+
3022+import yaml
3023+
3024+from subprocess import check_call
3025+
3026+from charmhelpers.core.hookenv import (
3027+ log,
3028+ DEBUG,
3029+)
3030+
3031+
3032+def create(sysctl_dict, sysctl_file):
3033+ """Creates a sysctl.conf file from a YAML associative array
3034+
3035+ :param sysctl_dict: a YAML-formatted string of sysctl options, eg "{ 'kernel.max_pid': 1337 }"
3036+ :type sysctl_dict: str
3037+ :param sysctl_file: path to the sysctl file to be saved
3038+ :type sysctl_file: str or unicode
3039+ :returns: None
3040+ """
3041+ sysctl_dict = yaml.load(sysctl_dict)
3042+
3043+ with open(sysctl_file, "w") as fd:
3044+ for key, value in sysctl_dict.items():
3045+ fd.write("{}={}\n".format(key, value))
3046+
3047+ log("Updating sysctl_file: %s values: %s" % (sysctl_file, sysctl_dict),
3048+ level=DEBUG)
3049+
3050+ check_call(["sysctl", "-p", sysctl_file])
3051
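The new `create()` helper parses a YAML string into a mapping and writes one `key=value` line per setting before reloading with `sysctl -p`. A sketch of the file-rendering step, minus the `sysctl -p` call so it runs outside a charm (assumes PyYAML is installed; the `render_sysctl` name and temp file are illustrative):

```python
import tempfile

import yaml  # assumes PyYAML is available, as in the charm


def render_sysctl(sysctl_yaml, sysctl_file):
    # Parse the YAML string into a dict, then emit one key=value line
    # per setting in sysctl.conf format.
    settings = yaml.safe_load(sysctl_yaml)
    with open(sysctl_file, "w") as fd:
        for key, value in settings.items():
            fd.write("{}={}\n".format(key, value))


conf = tempfile.NamedTemporaryFile(suffix=".conf", delete=False)
conf.close()
render_sysctl("{ 'kernel.pid_max': 4194303 }", conf.name)
with open(conf.name) as fd:
    print(fd.read())  # kernel.pid_max=4194303
```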
3052=== modified file 'hooks/charmhelpers/core/templating.py'
3053--- hooks/charmhelpers/core/templating.py 2014-08-13 13:12:30 +0000
3054+++ hooks/charmhelpers/core/templating.py 2014-12-11 17:57:05 +0000
3055@@ -4,7 +4,8 @@
3056 from charmhelpers.core import hookenv
3057
3058
3059-def render(source, target, context, owner='root', group='root', perms=0444, templates_dir=None):
3060+def render(source, target, context, owner='root', group='root',
3061+ perms=0o444, templates_dir=None):
3062 """
3063 Render a template.
3064
3065
3066=== modified file 'hooks/charmhelpers/fetch/__init__.py'
3067--- hooks/charmhelpers/fetch/__init__.py 2014-10-01 13:53:14 +0000
3068+++ hooks/charmhelpers/fetch/__init__.py 2014-12-11 17:57:05 +0000
3069@@ -5,10 +5,6 @@
3070 from charmhelpers.core.host import (
3071 lsb_release
3072 )
3073-from urlparse import (
3074- urlparse,
3075- urlunparse,
3076-)
3077 import subprocess
3078 from charmhelpers.core.hookenv import (
3079 config,
3080@@ -16,6 +12,12 @@
3081 )
3082 import os
3083
3084+import six
3085+if six.PY3:
3086+ from urllib.parse import urlparse, urlunparse
3087+else:
3088+ from urlparse import urlparse, urlunparse
3089+
3090
3091 CLOUD_ARCHIVE = """# Ubuntu Cloud Archive
3092 deb http://ubuntu-cloud.archive.canonical.com/ubuntu {} main
3093@@ -72,6 +74,7 @@
3094 FETCH_HANDLERS = (
3095 'charmhelpers.fetch.archiveurl.ArchiveUrlFetchHandler',
3096 'charmhelpers.fetch.bzrurl.BzrUrlFetchHandler',
3097+ 'charmhelpers.fetch.giturl.GitUrlFetchHandler',
3098 )
3099
3100 APT_NO_LOCK = 100 # The return code for "couldn't acquire lock" in APT.
3101@@ -148,7 +151,7 @@
3102 cmd = ['apt-get', '--assume-yes']
3103 cmd.extend(options)
3104 cmd.append('install')
3105- if isinstance(packages, basestring):
3106+ if isinstance(packages, six.string_types):
3107 cmd.append(packages)
3108 else:
3109 cmd.extend(packages)
3110@@ -181,7 +184,7 @@
3111 def apt_purge(packages, fatal=False):
3112 """Purge one or more packages"""
3113 cmd = ['apt-get', '--assume-yes', 'purge']
3114- if isinstance(packages, basestring):
3115+ if isinstance(packages, six.string_types):
3116 cmd.append(packages)
3117 else:
3118 cmd.extend(packages)
3119@@ -192,7 +195,7 @@
3120 def apt_hold(packages, fatal=False):
3121 """Hold one or more packages"""
3122 cmd = ['apt-mark', 'hold']
3123- if isinstance(packages, basestring):
3124+ if isinstance(packages, six.string_types):
3125 cmd.append(packages)
3126 else:
3127 cmd.extend(packages)
3128@@ -218,6 +221,7 @@
3129 pocket for the release.
3130 'cloud:' may be used to activate official cloud archive pockets,
3131 such as 'cloud:icehouse'
3132+ 'distro' may be used as a noop
3133
3134 @param key: A key to be added to the system's APT keyring and used
3135 to verify the signatures on packages. Ideally, this should be an
3136@@ -251,12 +255,14 @@
3137 release = lsb_release()['DISTRIB_CODENAME']
3138 with open('/etc/apt/sources.list.d/proposed.list', 'w') as apt:
3139 apt.write(PROPOSED_POCKET.format(release))
3140+ elif source == 'distro':
3141+ pass
3142 else:
3143- raise SourceConfigError("Unknown source: {!r}".format(source))
3144+ log("Unknown source: {!r}".format(source))
3145
3146 if key:
3147 if '-----BEGIN PGP PUBLIC KEY BLOCK-----' in key:
3148- with NamedTemporaryFile() as key_file:
3149+ with NamedTemporaryFile('w+') as key_file:
3150 key_file.write(key)
3151 key_file.flush()
3152 key_file.seek(0)
3153@@ -293,14 +299,14 @@
3154 sources = safe_load((config(sources_var) or '').strip()) or []
3155 keys = safe_load((config(keys_var) or '').strip()) or None
3156
3157- if isinstance(sources, basestring):
3158+ if isinstance(sources, six.string_types):
3159 sources = [sources]
3160
3161 if keys is None:
3162 for source in sources:
3163 add_source(source, None)
3164 else:
3165- if isinstance(keys, basestring):
3166+ if isinstance(keys, six.string_types):
3167 keys = [keys]
3168
3169 if len(sources) != len(keys):
3170@@ -397,7 +403,7 @@
3171 while result is None or result == APT_NO_LOCK:
3172 try:
3173 result = subprocess.check_call(cmd, env=env)
3174- except subprocess.CalledProcessError, e:
3175+ except subprocess.CalledProcessError as e:
3176 retry_count = retry_count + 1
3177 if retry_count > APT_NO_LOCK_RETRY_COUNT:
3178 raise
3179
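The recurring `basestring` → `six.string_types` swap in this file guards the accept-a-string-or-a-list convention of `apt_install`, `apt_purge`, and `apt_hold`. A standalone sketch of that normalization (plain `str` stands in here for `six.string_types`, which covers `str` and `unicode` on Python 2; the function name is illustrative):

```python
def build_apt_install_cmd(packages, options=None):
    # Accept either a single package name or an iterable of names and
    # build the apt-get command line, mirroring apt_install() above.
    cmd = ['apt-get', '--assume-yes']
    cmd.extend(options or [])
    cmd.append('install')
    if isinstance(packages, str):
        cmd.append(packages)  # single package: append as one argument
    else:
        cmd.extend(packages)  # iterable: splice each name in
    return cmd


print(build_apt_install_cmd('vim'))
# ['apt-get', '--assume-yes', 'install', 'vim']
print(build_apt_install_cmd(['vim', 'git'], ['--no-install-recommends']))
```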
3180=== modified file 'hooks/charmhelpers/fetch/archiveurl.py'
3181--- hooks/charmhelpers/fetch/archiveurl.py 2014-10-01 13:53:14 +0000
3182+++ hooks/charmhelpers/fetch/archiveurl.py 2014-12-11 17:57:05 +0000
3183@@ -1,8 +1,23 @@
3184 import os
3185-import urllib2
3186-from urllib import urlretrieve
3187-import urlparse
3188 import hashlib
3189+import re
3190+
3191+import six
3192+if six.PY3:
3193+ from urllib.request import (
3194+ build_opener, install_opener, urlopen, urlretrieve,
3195+ HTTPPasswordMgrWithDefaultRealm, HTTPBasicAuthHandler,
3196+ )
3197+ from urllib.parse import urlparse, urlunparse, parse_qs
3198+ from urllib.error import URLError
3199+else:
3200+ from urllib import urlretrieve
3201+ from urllib2 import (
3202+ build_opener, install_opener, urlopen,
3203+ HTTPPasswordMgrWithDefaultRealm, HTTPBasicAuthHandler,
3204+ URLError
3205+ )
3206+ from urlparse import urlparse, urlunparse, parse_qs
3207
3208 from charmhelpers.fetch import (
3209 BaseFetchHandler,
3210@@ -15,6 +30,24 @@
3211 from charmhelpers.core.host import mkdir, check_hash
3212
3213
3214+def splituser(host):
3215+ '''urllib.splituser(), but six's support of this seems broken'''
3216+ _userprog = re.compile('^(.*)@(.*)$')
3217+ match = _userprog.match(host)
3218+ if match:
3219+ return match.group(1, 2)
3220+ return None, host
3221+
3222+
3223+def splitpasswd(user):
3224+ '''urllib.splitpasswd(), but six's support of this is missing'''
3225+ _passwdprog = re.compile('^([^:]*):(.*)$', re.S)
3226+ match = _passwdprog.match(user)
3227+ if match:
3228+ return match.group(1, 2)
3229+ return user, None
3230+
3231+
3232 class ArchiveUrlFetchHandler(BaseFetchHandler):
3233 """
3234 Handler to download archive files from arbitrary URLs.
3235@@ -42,20 +75,20 @@
3236 """
3237 # propagate all exceptions
3238 # URLError, OSError, etc
3239- proto, netloc, path, params, query, fragment = urlparse.urlparse(source)
3240+ proto, netloc, path, params, query, fragment = urlparse(source)
3241 if proto in ('http', 'https'):
3242- auth, barehost = urllib2.splituser(netloc)
3243+ auth, barehost = splituser(netloc)
3244 if auth is not None:
3245- source = urlparse.urlunparse((proto, barehost, path, params, query, fragment))
3246- username, password = urllib2.splitpasswd(auth)
3247- passman = urllib2.HTTPPasswordMgrWithDefaultRealm()
3248+ source = urlunparse((proto, barehost, path, params, query, fragment))
3249+ username, password = splitpasswd(auth)
3250+ passman = HTTPPasswordMgrWithDefaultRealm()
3251 # Realm is set to None in add_password to force the username and password
3252 # to be used whatever the realm
3253 passman.add_password(None, source, username, password)
3254- authhandler = urllib2.HTTPBasicAuthHandler(passman)
3255- opener = urllib2.build_opener(authhandler)
3256- urllib2.install_opener(opener)
3257- response = urllib2.urlopen(source)
3258+ authhandler = HTTPBasicAuthHandler(passman)
3259+ opener = build_opener(authhandler)
3260+ install_opener(opener)
3261+ response = urlopen(source)
3262 try:
3263 with open(dest, 'w') as dest_file:
3264 dest_file.write(response.read())
3265@@ -91,17 +124,21 @@
3266 url_parts = self.parse_url(source)
3267 dest_dir = os.path.join(os.environ.get('CHARM_DIR'), 'fetched')
3268 if not os.path.exists(dest_dir):
3269- mkdir(dest_dir, perms=0755)
3270+ mkdir(dest_dir, perms=0o755)
3271 dld_file = os.path.join(dest_dir, os.path.basename(url_parts.path))
3272 try:
3273 self.download(source, dld_file)
3274- except urllib2.URLError as e:
3275+ except URLError as e:
3276 raise UnhandledSource(e.reason)
3277 except OSError as e:
3278 raise UnhandledSource(e.strerror)
3279- options = urlparse.parse_qs(url_parts.fragment)
3280+ options = parse_qs(url_parts.fragment)
3281 for key, value in options.items():
3282- if key in hashlib.algorithms:
3283+ if not six.PY3:
3284+ algorithms = hashlib.algorithms
3285+ else:
3286+ algorithms = hashlib.algorithms_available
3287+ if key in algorithms:
3288 check_hash(dld_file, value, key)
3289 if checksum:
3290 check_hash(dld_file, checksum, hash_type)
3291
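The `splituser`/`splitpasswd` helpers added above replace `urllib2.splituser`/`splitpasswd`, which have no clean six equivalent: the first peels basic-auth credentials off a netloc, the second splits them into username and password. A self-contained demonstration of the same two regexes (the sample credentials are made up):

```python
import re


def splituser(host):
    # Split 'user:pass@host' into (auth, bare host); greedy .* means the
    # split happens at the last '@', so passwords may contain '@'.
    match = re.match('^(.*)@(.*)$', host)
    if match:
        return match.group(1, 2)
    return None, host


def splitpasswd(user):
    # Split 'user:pass' at the first ':'; re.S lets the password span
    # newlines, matching the original urllib behaviour.
    match = re.match('^([^:]*):(.*)$', user, re.S)
    if match:
        return match.group(1, 2)
    return user, None


auth, barehost = splituser('alice:s3cret@example.com')
print(auth, barehost)     # alice:s3cret example.com
print(splitpasswd(auth))  # ('alice', 's3cret')
```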
3292=== modified file 'hooks/charmhelpers/fetch/bzrurl.py'
3293--- hooks/charmhelpers/fetch/bzrurl.py 2014-09-12 10:55:03 +0000
3294+++ hooks/charmhelpers/fetch/bzrurl.py 2014-12-11 17:57:05 +0000
3295@@ -5,6 +5,10 @@
3296 )
3297 from charmhelpers.core.host import mkdir
3298
3299+import six
3300+if six.PY3:
3301+ raise ImportError('bzrlib does not support Python3')
3302+
3303 try:
3304 from bzrlib.branch import Branch
3305 except ImportError:
3306@@ -42,7 +46,7 @@
3307 dest_dir = os.path.join(os.environ.get('CHARM_DIR'), "fetched",
3308 branch_name)
3309 if not os.path.exists(dest_dir):
3310- mkdir(dest_dir, perms=0755)
3311+ mkdir(dest_dir, perms=0o755)
3312 try:
3313 self.branch(source, dest_dir)
3314 except OSError as e:
3315
3316=== added file 'hooks/charmhelpers/fetch/giturl.py'
3317--- hooks/charmhelpers/fetch/giturl.py 1970-01-01 00:00:00 +0000
3318+++ hooks/charmhelpers/fetch/giturl.py 2014-12-11 17:57:05 +0000
3319@@ -0,0 +1,51 @@
3320+import os
3321+from charmhelpers.fetch import (
3322+ BaseFetchHandler,
3323+ UnhandledSource
3324+)
3325+from charmhelpers.core.host import mkdir
3326+
3327+import six
3328+if six.PY3:
3329+ raise ImportError('GitPython does not support Python 3')
3330+
3331+try:
3332+ from git import Repo
3333+except ImportError:
3334+ from charmhelpers.fetch import apt_install
3335+ apt_install("python-git")
3336+ from git import Repo
3337+
3338+
3339+class GitUrlFetchHandler(BaseFetchHandler):
3340+ """Handler for git branches via generic and github URLs"""
3341+ def can_handle(self, source):
3342+ url_parts = self.parse_url(source)
3343+ # TODO (mattyw) no support for ssh git@ yet
3344+ if url_parts.scheme not in ('http', 'https', 'git'):
3345+ return False
3346+ else:
3347+ return True
3348+
3349+ def clone(self, source, dest, branch):
3350+ if not self.can_handle(source):
3351+ raise UnhandledSource("Cannot handle {}".format(source))
3352+
3353+ repo = Repo.clone_from(source, dest)
3354+ repo.git.checkout(branch)
3355+
3356+ def install(self, source, branch="master", dest=None):
3357+ url_parts = self.parse_url(source)
3358+ branch_name = url_parts.path.strip("/").split("/")[-1]
3359+ if dest:
3360+ dest_dir = os.path.join(dest, branch_name)
3361+ else:
3362+ dest_dir = os.path.join(os.environ.get('CHARM_DIR'), "fetched",
3363+ branch_name)
3364+ if not os.path.exists(dest_dir):
3365+ mkdir(dest_dir, perms=0o755)
3366+ try:
3367+ self.clone(source, dest_dir, branch)
3368+ except OSError as e:
3369+ raise UnhandledSource(e.strerror)
3370+ return dest_dir
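`GitUrlFetchHandler.install()` derives its checkout directory from the last path component of the clone URL, placed under `$CHARM_DIR/fetched`. A sketch of just that path logic, without GitPython (the `git_dest_dir` name and the default `charm_dir` value are illustrative stand-ins for `os.environ['CHARM_DIR']`):

```python
import os

try:
    from urllib.parse import urlparse  # Python 3
except ImportError:
    from urlparse import urlparse      # Python 2


def git_dest_dir(source, charm_dir='/var/lib/juju/mycharm'):
    # Mirror install() above: the final URL path segment names the
    # directory the branch is cloned into, under <charm_dir>/fetched.
    url_parts = urlparse(source)
    branch_name = url_parts.path.strip("/").split("/")[-1]
    return os.path.join(charm_dir, "fetched", branch_name)


print(git_dest_dir('https://github.com/juju/charm-helpers'))
```

Note that, like the bzr handler, this fetcher raises `ImportError` outright on Python 3 because python-git of this era had no Python 3 support.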
