Merge lp:~corey.bryant/charms/trusty/heat/contrib.python.packages into lp:~openstack-charmers-archive/charms/trusty/heat/next

Proposed by Corey Bryant
Status: Merged
Merged at revision: 28
Proposed branch: lp:~corey.bryant/charms/trusty/heat/contrib.python.packages
Merge into: lp:~openstack-charmers-archive/charms/trusty/heat/next
Diff against target: 4119 lines (+1747/-575)
32 files modified
charm-helpers.yaml (+1/-0)
hooks/charmhelpers/__init__.py (+22/-0)
hooks/charmhelpers/contrib/hahelpers/apache.py (+10/-3)
hooks/charmhelpers/contrib/hahelpers/cluster.py (+17/-9)
hooks/charmhelpers/contrib/network/ip.py (+212/-35)
hooks/charmhelpers/contrib/openstack/amulet/deployment.py (+40/-9)
hooks/charmhelpers/contrib/openstack/amulet/utils.py (+8/-5)
hooks/charmhelpers/contrib/openstack/context.py (+450/-222)
hooks/charmhelpers/contrib/openstack/ip.py (+41/-27)
hooks/charmhelpers/contrib/openstack/neutron.py (+20/-4)
hooks/charmhelpers/contrib/openstack/templates/haproxy.cfg (+25/-12)
hooks/charmhelpers/contrib/openstack/templates/openstack_https_frontend (+9/-8)
hooks/charmhelpers/contrib/openstack/templates/openstack_https_frontend.conf (+9/-8)
hooks/charmhelpers/contrib/openstack/templating.py (+5/-5)
hooks/charmhelpers/contrib/openstack/utils.py (+173/-12)
hooks/charmhelpers/contrib/python/packages.py (+77/-0)
hooks/charmhelpers/contrib/storage/linux/ceph.py (+89/-102)
hooks/charmhelpers/contrib/storage/linux/loopback.py (+4/-4)
hooks/charmhelpers/contrib/storage/linux/lvm.py (+1/-0)
hooks/charmhelpers/contrib/storage/linux/utils.py (+3/-2)
hooks/charmhelpers/core/fstab.py (+10/-8)
hooks/charmhelpers/core/hookenv.py (+83/-31)
hooks/charmhelpers/core/host.py (+81/-25)
hooks/charmhelpers/core/services/__init__.py (+2/-2)
hooks/charmhelpers/core/services/base.py (+3/-0)
hooks/charmhelpers/core/services/helpers.py (+124/-6)
hooks/charmhelpers/core/sysctl.py (+34/-0)
hooks/charmhelpers/core/templating.py (+2/-1)
hooks/charmhelpers/fetch/__init__.py (+37/-17)
hooks/charmhelpers/fetch/archiveurl.py (+99/-17)
hooks/charmhelpers/fetch/bzrurl.py (+5/-1)
hooks/charmhelpers/fetch/giturl.py (+51/-0)
To merge this branch: bzr merge lp:~corey.bryant/charms/trusty/heat/contrib.python.packages
Reviewer             Review Type    Date Requested    Status
OpenStack Charmers                                    Pending
Review via email: mp+244322@code.launchpad.net
Revision history for this message
uosci-testing-bot (uosci-testing-bot) wrote :

charm_lint_check #162 heat-next for corey.bryant mp244322
    LINT OK: passed

Build: http://10.230.18.80:8080/job/charm_lint_check/162/

uosci-testing-bot (uosci-testing-bot) wrote :

charm_unit_test #125 heat-next for corey.bryant mp244322
    UNIT FAIL: unit-test failed

UNIT Results (max last 2 lines):
  FAILED (errors=3)
  make: *** [test] Error 1

Full unit test output: pastebin not avail., cmd error
Build: http://10.230.18.80:8080/job/charm_unit_test/125/

uosci-testing-bot (uosci-testing-bot) wrote :

charm_amulet_test #75 heat-next for corey.bryant mp244322
    AMULET FAIL: amulet-test missing

AMULET Results (max last 2 lines):
INFO:root:Search string not found in makefile target commands.
ERROR:root:No make target was executed.

Full amulet test output: pastebin not avail., cmd error
Build: http://10.230.18.80:8080/job/charm_amulet_test/75/

uosci-testing-bot (uosci-testing-bot) wrote :

charm_lint_check #184 heat-next for corey.bryant mp244322
    LINT OK: passed

Build: http://10.230.18.80:8080/job/charm_lint_check/184/

uosci-testing-bot (uosci-testing-bot) wrote :

charm_unit_test #147 heat-next for corey.bryant mp244322
    UNIT OK: passed

Build: http://10.230.18.80:8080/job/charm_unit_test/147/

uosci-testing-bot (uosci-testing-bot) wrote :

charm_amulet_test #102 heat-next for corey.bryant mp244322
    AMULET FAIL: amulet-test missing

AMULET Results (max last 2 lines):
INFO:root:Search string not found in makefile target commands.
ERROR:root:No make target was executed.

Full amulet test output: pastebin not avail., cmd error
Build: http://10.230.18.80:8080/job/charm_amulet_test/102/

uosci-testing-bot (uosci-testing-bot) wrote :

charm_lint_check #208 heat-next for corey.bryant mp244322
    LINT OK: passed

Build: http://10.230.18.80:8080/job/charm_lint_check/208/

uosci-testing-bot (uosci-testing-bot) wrote :

charm_unit_test #171 heat-next for corey.bryant mp244322
    UNIT OK: passed

Build: http://10.230.18.80:8080/job/charm_unit_test/171/

uosci-testing-bot (uosci-testing-bot) wrote :

charm_amulet_test #126 heat-next for corey.bryant mp244322
    AMULET FAIL: amulet-test missing

AMULET Results (max last 2 lines):
INFO:root:Search string not found in makefile target commands.
ERROR:root:No make target was executed.

Full amulet test output: pastebin not avail., cmd error
Build: http://10.230.18.80:8080/job/charm_amulet_test/126/

uosci-testing-bot (uosci-testing-bot) wrote :

charm_lint_check #226 heat-next for corey.bryant mp244322
    LINT OK: passed

Build: http://10.230.18.80:8080/job/charm_lint_check/226/

uosci-testing-bot (uosci-testing-bot) wrote :

charm_unit_test #189 heat-next for corey.bryant mp244322
    UNIT OK: passed

Build: http://10.230.18.80:8080/job/charm_unit_test/189/

uosci-testing-bot (uosci-testing-bot) wrote :

charm_amulet_test #143 heat-next for corey.bryant mp244322
    AMULET FAIL: amulet-test missing

AMULET Results (max last 2 lines):
INFO:root:Search string not found in makefile target commands.
ERROR:root:No make target was executed.

Full amulet test output: pastebin not avail., cmd error
Build: http://10.230.18.80:8080/job/charm_amulet_test/143/

Preview Diff

1=== modified file 'charm-helpers.yaml'
2--- charm-helpers.yaml 2014-08-13 13:45:18 +0000
3+++ charm-helpers.yaml 2014-12-11 17:56:30 +0000
4@@ -4,6 +4,7 @@
5 - core
6 - fetch
7 - contrib.openstack|inc=*
8+ - contrib.python.packages
9 - contrib.storage
10 - contrib.network.ip
11 - contrib.hahelpers:
12
13=== added file 'hooks/charmhelpers/__init__.py'
14--- hooks/charmhelpers/__init__.py 1970-01-01 00:00:00 +0000
15+++ hooks/charmhelpers/__init__.py 2014-12-11 17:56:30 +0000
16@@ -0,0 +1,22 @@
17+# Bootstrap charm-helpers, installing its dependencies if necessary using
18+# only standard libraries.
19+import subprocess
20+import sys
21+
22+try:
23+ import six # flake8: noqa
24+except ImportError:
25+ if sys.version_info.major == 2:
26+ subprocess.check_call(['apt-get', 'install', '-y', 'python-six'])
27+ else:
28+ subprocess.check_call(['apt-get', 'install', '-y', 'python3-six'])
29+ import six # flake8: noqa
30+
31+try:
32+ import yaml # flake8: noqa
33+except ImportError:
34+ if sys.version_info.major == 2:
35+ subprocess.check_call(['apt-get', 'install', '-y', 'python-yaml'])
36+ else:
37+ subprocess.check_call(['apt-get', 'install', '-y', 'python3-yaml'])
38+ import yaml # flake8: noqa
39
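The new charmhelpers/__init__.py above bootstraps its own dependencies: probe the import, and on failure apt-get the distro package matching the interpreter's major version. The package-name selection can be sketched on its own (distro_pkg_for is an illustrative name, not part of charm-helpers):

```python
import sys


def distro_pkg_for(module, major=sys.version_info.major):
    """Map a Python module name to the Debian/Ubuntu package that
    provides it for the given interpreter major version, following the
    python- / python3- naming convention used by the bootstrap above."""
    prefix = 'python-' if major == 2 else 'python3-'
    return prefix + module
```

On a Python 3 unit this resolves 'six' to 'python3-six', which is exactly the package the bootstrap installs before retrying the import.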
40=== removed file 'hooks/charmhelpers/__init__.py'
41=== modified file 'hooks/charmhelpers/contrib/hahelpers/apache.py'
42--- hooks/charmhelpers/contrib/hahelpers/apache.py 2014-04-08 16:29:36 +0000
43+++ hooks/charmhelpers/contrib/hahelpers/apache.py 2014-12-11 17:56:30 +0000
44@@ -20,20 +20,27 @@
45 )
46
47
48-def get_cert():
49+def get_cert(cn=None):
50+ # TODO: deal with multiple https endpoints via charm config
51 cert = config_get('ssl_cert')
52 key = config_get('ssl_key')
53 if not (cert and key):
54 log("Inspecting identity-service relations for SSL certificate.",
55 level=INFO)
56 cert = key = None
57+ if cn:
58+ ssl_cert_attr = 'ssl_cert_{}'.format(cn)
59+ ssl_key_attr = 'ssl_key_{}'.format(cn)
60+ else:
61+ ssl_cert_attr = 'ssl_cert'
62+ ssl_key_attr = 'ssl_key'
63 for r_id in relation_ids('identity-service'):
64 for unit in relation_list(r_id):
65 if not cert:
66- cert = relation_get('ssl_cert',
67+ cert = relation_get(ssl_cert_attr,
68 rid=r_id, unit=unit)
69 if not key:
70- key = relation_get('ssl_key',
71+ key = relation_get(ssl_key_attr,
72 rid=r_id, unit=unit)
73 return (cert, key)
74
75
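The get_cert() change above looks up Common-Name-suffixed relation attributes when a cn is supplied, so a charm can carry one certificate per network endpoint. The attribute-name selection reduces to (a standalone sketch; ssl_attr_names is a hypothetical helper, not part of the diff):

```python
def ssl_attr_names(cn=None):
    """Return the identity-service relation attribute names for the
    certificate/key pair, suffixed with a Common Name when per-endpoint
    certificates are in use."""
    if cn:
        return 'ssl_cert_{}'.format(cn), 'ssl_key_{}'.format(cn)
    return 'ssl_cert', 'ssl_key'
```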
76=== modified file 'hooks/charmhelpers/contrib/hahelpers/cluster.py'
77--- hooks/charmhelpers/contrib/hahelpers/cluster.py 2014-08-13 13:11:34 +0000
78+++ hooks/charmhelpers/contrib/hahelpers/cluster.py 2014-12-11 17:56:30 +0000
79@@ -13,9 +13,10 @@
80
81 import subprocess
82 import os
83-
84 from socket import gethostname as get_unit_hostname
85
86+import six
87+
88 from charmhelpers.core.hookenv import (
89 log,
90 relation_ids,
91@@ -77,7 +78,7 @@
92 "show", resource
93 ]
94 try:
95- status = subprocess.check_output(cmd)
96+ status = subprocess.check_output(cmd).decode('UTF-8')
97 except subprocess.CalledProcessError:
98 return False
99 else:
100@@ -139,10 +140,9 @@
101 return True
102 for r_id in relation_ids('identity-service'):
103 for unit in relation_list(r_id):
104+ # TODO - needs fixing for new helper as ssl_cert/key suffixes with CN
105 rel_state = [
106 relation_get('https_keystone', rid=r_id, unit=unit),
107- relation_get('ssl_cert', rid=r_id, unit=unit),
108- relation_get('ssl_key', rid=r_id, unit=unit),
109 relation_get('ca_cert', rid=r_id, unit=unit),
110 ]
111 # NOTE: works around (LP: #1203241)
112@@ -151,34 +151,42 @@
113 return False
114
115
116-def determine_api_port(public_port):
117+def determine_api_port(public_port, singlenode_mode=False):
118 '''
119 Determine correct API server listening port based on
120 existence of HTTPS reverse proxy and/or haproxy.
121
122 public_port: int: standard public port for given service
123
124+ singlenode_mode: boolean: Shuffle ports when only a single unit is present
125+
126 returns: int: the correct listening port for the API service
127 '''
128 i = 0
129- if len(peer_units()) > 0 or is_clustered():
130+ if singlenode_mode:
131+ i += 1
132+ elif len(peer_units()) > 0 or is_clustered():
133 i += 1
134 if https():
135 i += 1
136 return public_port - (i * 10)
137
138
139-def determine_apache_port(public_port):
140+def determine_apache_port(public_port, singlenode_mode=False):
141 '''
142 Description: Determine correct apache listening port based on public IP +
143 state of the cluster.
144
145 public_port: int: standard public port for given service
146
147+ singlenode_mode: boolean: Shuffle ports when only a single unit is present
148+
149 returns: int: the correct listening port for the HAProxy service
150 '''
151 i = 0
152- if len(peer_units()) > 0 or is_clustered():
153+ if singlenode_mode:
154+ i += 1
155+ elif len(peer_units()) > 0 or is_clustered():
156 i += 1
157 return public_port - (i * 10)
158
159@@ -198,7 +206,7 @@
160 for setting in settings:
161 conf[setting] = config_get(setting)
162 missing = []
163- [missing.append(s) for s, v in conf.iteritems() if v is None]
164+ [missing.append(s) for s, v in six.iteritems(conf) if v is None]
165 if missing:
166 log('Insufficient config data to configure hacluster.', level=ERROR)
167 raise HAIncompleteConfig
168
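The port arithmetic introduced above shifts the backend listening port down by 10 for each proxy tier in front of the API: haproxy (now also in the new single-node mode), then the HTTPS frontend. A standalone sketch of the same arithmetic, with plain boolean parameters standing in for the charm's peer_units()/is_clustered()/https() queries:

```python
def determine_api_port(public_port, singlenode_mode=False,
                       clustered=False, https=False):
    """Shift the API listening port down 10 when haproxy sits in front
    (single-node mode or clustering), and a further 10 when an HTTPS
    reverse proxy sits in front of that."""
    i = 0
    if singlenode_mode or clustered:
        i += 1
    if https:
        i += 1
    return public_port - (i * 10)
```

With heat-api's public port of 8004, single-node mode alone yields a backend port of 7994, and adding an HTTPS frontend yields 7984.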
169=== modified file 'hooks/charmhelpers/contrib/network/ip.py'
170--- hooks/charmhelpers/contrib/network/ip.py 2014-08-13 13:45:18 +0000
171+++ hooks/charmhelpers/contrib/network/ip.py 2014-12-11 17:56:30 +0000
172@@ -1,10 +1,13 @@
173-import sys
174+import glob
175+import re
176+import subprocess
177
178 from functools import partial
179
180+from charmhelpers.core.hookenv import unit_get
181 from charmhelpers.fetch import apt_install
182 from charmhelpers.core.hookenv import (
183- ERROR, log, config,
184+ log
185 )
186
187 try:
188@@ -28,29 +31,28 @@
189 network)
190
191
192+def no_ip_found_error_out(network):
193+ errmsg = ("No IP address found in network: %s" % network)
194+ raise ValueError(errmsg)
195+
196+
197 def get_address_in_network(network, fallback=None, fatal=False):
198- """
199- Get an IPv4 or IPv6 address within the network from the host.
200+ """Get an IPv4 or IPv6 address within the network from the host.
201
202 :param network (str): CIDR presentation format. For example,
203 '192.168.1.0/24'.
204 :param fallback (str): If no address is found, return fallback.
205 :param fatal (boolean): If no address is found, fallback is not
206 set and fatal is True then exit(1).
207-
208 """
209-
210- def not_found_error_out():
211- log("No IP address found in network: %s" % network,
212- level=ERROR)
213- sys.exit(1)
214-
215 if network is None:
216 if fallback is not None:
217 return fallback
218+
219+ if fatal:
220+ no_ip_found_error_out(network)
221 else:
222- if fatal:
223- not_found_error_out()
224+ return None
225
226 _validate_cidr(network)
227 network = netaddr.IPNetwork(network)
228@@ -62,6 +64,7 @@
229 cidr = netaddr.IPNetwork("%s/%s" % (addr, netmask))
230 if cidr in network:
231 return str(cidr.ip)
232+
233 if network.version == 6 and netifaces.AF_INET6 in addresses:
234 for addr in addresses[netifaces.AF_INET6]:
235 if not addr['addr'].startswith('fe80'):
236@@ -74,20 +77,20 @@
237 return fallback
238
239 if fatal:
240- not_found_error_out()
241+ no_ip_found_error_out(network)
242
243 return None
244
245
246 def is_ipv6(address):
247- '''Determine whether provided address is IPv6 or not'''
248+ """Determine whether provided address is IPv6 or not."""
249 try:
250 address = netaddr.IPAddress(address)
251 except netaddr.AddrFormatError:
252 # probably a hostname - so not an address at all!
253 return False
254- else:
255- return address.version == 6
256+
257+ return address.version == 6
258
259
260 def is_address_in_network(network, address):
261@@ -105,11 +108,13 @@
262 except (netaddr.core.AddrFormatError, ValueError):
263 raise ValueError("Network (%s) is not in CIDR presentation format" %
264 network)
265+
266 try:
267 address = netaddr.IPAddress(address)
268 except (netaddr.core.AddrFormatError, ValueError):
269 raise ValueError("Address (%s) is not in correct presentation format" %
270 address)
271+
272 if address in network:
273 return True
274 else:
275@@ -132,43 +137,215 @@
276 if address.version == 4 and netifaces.AF_INET in addresses:
277 addr = addresses[netifaces.AF_INET][0]['addr']
278 netmask = addresses[netifaces.AF_INET][0]['netmask']
279- cidr = netaddr.IPNetwork("%s/%s" % (addr, netmask))
280+ network = netaddr.IPNetwork("%s/%s" % (addr, netmask))
281+ cidr = network.cidr
282 if address in cidr:
283 if key == 'iface':
284 return iface
285 else:
286 return addresses[netifaces.AF_INET][0][key]
287+
288 if address.version == 6 and netifaces.AF_INET6 in addresses:
289 for addr in addresses[netifaces.AF_INET6]:
290 if not addr['addr'].startswith('fe80'):
291- cidr = netaddr.IPNetwork("%s/%s" % (addr['addr'],
292- addr['netmask']))
293+ network = netaddr.IPNetwork("%s/%s" % (addr['addr'],
294+ addr['netmask']))
295+ cidr = network.cidr
296 if address in cidr:
297 if key == 'iface':
298 return iface
299+ elif key == 'netmask' and cidr:
300+ return str(cidr).split('/')[1]
301 else:
302 return addr[key]
303+
304 return None
305
306
307 get_iface_for_address = partial(_get_for_address, key='iface')
308
309+
310 get_netmask_for_address = partial(_get_for_address, key='netmask')
311
312
313-def get_ipv6_addr(iface="eth0"):
314+def format_ipv6_addr(address):
315+ """If address is IPv6, wrap it in '[]' otherwise return None.
316+
317+ This is required by most configuration files when specifying IPv6
318+ addresses.
319+ """
320+ if is_ipv6(address):
321+ return "[%s]" % address
322+
323+ return None
324+
325+
326+def get_iface_addr(iface='eth0', inet_type='AF_INET', inc_aliases=False,
327+ fatal=True, exc_list=None):
328+ """Return the assigned IP address for a given interface, if any."""
329+ # Extract nic if passed /dev/ethX
330+ if '/' in iface:
331+ iface = iface.split('/')[-1]
332+
333+ if not exc_list:
334+ exc_list = []
335+
336 try:
337- iface_addrs = netifaces.ifaddresses(iface)
338- if netifaces.AF_INET6 not in iface_addrs:
339- raise Exception("Interface '%s' doesn't have an ipv6 address." % iface)
340-
341- addresses = netifaces.ifaddresses(iface)[netifaces.AF_INET6]
342- ipv6_addr = [a['addr'] for a in addresses if not a['addr'].startswith('fe80')
343- and config('vip') != a['addr']]
344- if not ipv6_addr:
345- raise Exception("Interface '%s' doesn't have global ipv6 address." % iface)
346-
347- return ipv6_addr[0]
348-
349- except ValueError:
350- raise ValueError("Invalid interface '%s'" % iface)
351+ inet_num = getattr(netifaces, inet_type)
352+ except AttributeError:
353+ raise Exception("Unknown inet type '%s'" % str(inet_type))
354+
355+ interfaces = netifaces.interfaces()
356+ if inc_aliases:
357+ ifaces = []
358+ for _iface in interfaces:
359+ if iface == _iface or _iface.split(':')[0] == iface:
360+ ifaces.append(_iface)
361+
362+ if fatal and not ifaces:
363+ raise Exception("Invalid interface '%s'" % iface)
364+
365+ ifaces.sort()
366+ else:
367+ if iface not in interfaces:
368+ if fatal:
369+ raise Exception("Interface '%s' not found " % (iface))
370+ else:
371+ return []
372+
373+ else:
374+ ifaces = [iface]
375+
376+ addresses = []
377+ for netiface in ifaces:
378+ net_info = netifaces.ifaddresses(netiface)
379+ if inet_num in net_info:
380+ for entry in net_info[inet_num]:
381+ if 'addr' in entry and entry['addr'] not in exc_list:
382+ addresses.append(entry['addr'])
383+
384+ if fatal and not addresses:
385+ raise Exception("Interface '%s' doesn't have any %s addresses." %
386+ (iface, inet_type))
387+
388+ return sorted(addresses)
389+
390+
391+get_ipv4_addr = partial(get_iface_addr, inet_type='AF_INET')
392+
393+
394+def get_iface_from_addr(addr):
395+ """Work out on which interface the provided address is configured."""
396+ for iface in netifaces.interfaces():
397+ addresses = netifaces.ifaddresses(iface)
398+ for inet_type in addresses:
399+ for _addr in addresses[inet_type]:
400+ _addr = _addr['addr']
401+ # link local
402+ ll_key = re.compile("(.+)%.*")
403+ raw = re.match(ll_key, _addr)
404+ if raw:
405+ _addr = raw.group(1)
406+
407+ if _addr == addr:
408+ log("Address '%s' is configured on iface '%s'" %
409+ (addr, iface))
410+ return iface
411+
412+ msg = "Unable to infer net iface on which '%s' is configured" % (addr)
413+ raise Exception(msg)
414+
415+
416+def sniff_iface(f):
417+ """Ensure decorated function is called with a value for iface.
418+
419+ If no iface provided, inject net iface inferred from unit private address.
420+ """
421+ def iface_sniffer(*args, **kwargs):
422+ if not kwargs.get('iface', None):
423+ kwargs['iface'] = get_iface_from_addr(unit_get('private-address'))
424+
425+ return f(*args, **kwargs)
426+
427+ return iface_sniffer
428+
429+
430+@sniff_iface
431+def get_ipv6_addr(iface=None, inc_aliases=False, fatal=True, exc_list=None,
432+ dynamic_only=True):
433+ """Get assigned IPv6 address for a given interface.
434+
435+ Returns list of addresses found. If no address found, returns empty list.
436+
437+ If iface is None, we infer the current primary interface by doing a reverse
438+ lookup on the unit private-address.
439+
440+ We currently only support scope global IPv6 addresses i.e. non-temporary
441+ addresses. If no global IPv6 address is found, return the first one found
442+ in the ipv6 address list.
443+ """
444+ addresses = get_iface_addr(iface=iface, inet_type='AF_INET6',
445+ inc_aliases=inc_aliases, fatal=fatal,
446+ exc_list=exc_list)
447+
448+ if addresses:
449+ global_addrs = []
450+ for addr in addresses:
451+ key_scope_link_local = re.compile("^fe80::..(.+)%(.+)")
452+ m = re.match(key_scope_link_local, addr)
453+ if m:
454+ eui_64_mac = m.group(1)
455+ iface = m.group(2)
456+ else:
457+ global_addrs.append(addr)
458+
459+ if global_addrs:
460+ # Make sure any found global addresses are not temporary
461+ cmd = ['ip', 'addr', 'show', iface]
462+ out = subprocess.check_output(cmd).decode('UTF-8')
463+ if dynamic_only:
464+ key = re.compile("inet6 (.+)/[0-9]+ scope global dynamic.*")
465+ else:
466+ key = re.compile("inet6 (.+)/[0-9]+ scope global.*")
467+
468+ addrs = []
469+ for line in out.split('\n'):
470+ line = line.strip()
471+ m = re.match(key, line)
472+ if m and 'temporary' not in line:
473+ # Return the first valid address we find
474+ for addr in global_addrs:
475+ if m.group(1) == addr:
476+ if not dynamic_only or \
477+ m.group(1).endswith(eui_64_mac):
478+ addrs.append(addr)
479+
480+ if addrs:
481+ return addrs
482+
483+ if fatal:
484+ raise Exception("Interface '%s' does not have a scope global "
485+ "non-temporary ipv6 address." % iface)
486+
487+ return []
488+
489+
490+def get_bridges(vnic_dir='/sys/devices/virtual/net'):
491+ """Return a list of bridges on the system."""
492+ b_regex = "%s/*/bridge" % vnic_dir
493+ return [x.replace(vnic_dir, '').split('/')[1] for x in glob.glob(b_regex)]
494+
495+
496+def get_bridge_nics(bridge, vnic_dir='/sys/devices/virtual/net'):
497+ """Return a list of nics comprising a given bridge on the system."""
498+ brif_regex = "%s/%s/brif/*" % (vnic_dir, bridge)
499+ return [x.split('/')[-1] for x in glob.glob(brif_regex)]
500+
501+
502+def is_bridge_member(nic):
503+ """Check if a given nic is a member of a bridge."""
504+ for bridge in get_bridges():
505+ if nic in get_bridge_nics(bridge):
506+ return True
507+
508+ return False
509
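Among the additions above, format_ipv6_addr wraps IPv6 literals in brackets as most configuration files require. The same behaviour can be sketched with only the standard library, using ipaddress in place of the charm's netaddr dependency:

```python
import ipaddress


def format_ipv6_addr(address):
    """Return '[address]' for IPv6 literals, else None; hostnames and
    IPv4 addresses fall through so callers can use them unwrapped."""
    try:
        if ipaddress.ip_address(address).version == 6:
            return "[%s]" % address
    except ValueError:
        pass  # probably a hostname, not an address at all
    return None
```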
510=== modified file 'hooks/charmhelpers/contrib/openstack/amulet/deployment.py'
511--- hooks/charmhelpers/contrib/openstack/amulet/deployment.py 2014-08-13 13:11:34 +0000
512+++ hooks/charmhelpers/contrib/openstack/amulet/deployment.py 2014-12-11 17:56:30 +0000
513@@ -1,3 +1,4 @@
514+import six
515 from charmhelpers.contrib.amulet.deployment import (
516 AmuletDeployment
517 )
518@@ -10,36 +11,66 @@
519 that is specifically for use by OpenStack charms.
520 """
521
522- def __init__(self, series=None, openstack=None, source=None):
523+ def __init__(self, series=None, openstack=None, source=None, stable=True):
524 """Initialize the deployment environment."""
525 super(OpenStackAmuletDeployment, self).__init__(series)
526 self.openstack = openstack
527 self.source = source
528+ self.stable = stable
529+ # Note(coreycb): this needs to be changed when new next branches come
530+ # out.
531+ self.current_next = "trusty"
532+
533+ def _determine_branch_locations(self, other_services):
534+ """Determine the branch locations for the other services.
535+
536+ Determine if the local branch being tested is derived from its
537+ stable or next (dev) branch, and based on this, use the corresonding
538+ stable or next branches for the other_services."""
539+ base_charms = ['mysql', 'mongodb', 'rabbitmq-server']
540+
541+ if self.stable:
542+ for svc in other_services:
543+ temp = 'lp:charms/{}'
544+ svc['location'] = temp.format(svc['name'])
545+ else:
546+ for svc in other_services:
547+ if svc['name'] in base_charms:
548+ temp = 'lp:charms/{}'
549+ svc['location'] = temp.format(svc['name'])
550+ else:
551+ temp = 'lp:~openstack-charmers/charms/{}/{}/next'
552+ svc['location'] = temp.format(self.current_next,
553+ svc['name'])
554+ return other_services
555
556 def _add_services(self, this_service, other_services):
557- """Add services to the deployment and set openstack-origin."""
558+ """Add services to the deployment and set openstack-origin/source."""
559+ other_services = self._determine_branch_locations(other_services)
560+
561 super(OpenStackAmuletDeployment, self)._add_services(this_service,
562 other_services)
563- name = 0
564+
565 services = other_services
566 services.append(this_service)
567- use_source = ['mysql', 'mongodb', 'rabbitmq-server', 'ceph']
568+ use_source = ['mysql', 'mongodb', 'rabbitmq-server', 'ceph',
569+ 'ceph-osd', 'ceph-radosgw']
570
571 if self.openstack:
572 for svc in services:
573- if svc[name] not in use_source:
574+ if svc['name'] not in use_source:
575 config = {'openstack-origin': self.openstack}
576- self.d.configure(svc[name], config)
577+ self.d.configure(svc['name'], config)
578
579 if self.source:
580 for svc in services:
581- if svc[name] in use_source:
582+ if svc['name'] in use_source:
583 config = {'source': self.source}
584- self.d.configure(svc[name], config)
585+ self.d.configure(svc['name'], config)
586
587 def _configure_services(self, configs):
588 """Configure all of the services."""
589- for service, config in configs.iteritems():
590+ for service, config in six.iteritems(configs):
591 self.d.configure(service, config)
592
593 def _get_openstack_release(self):
594
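The _determine_branch_locations logic above picks where amulet tests pull companion charms from: store branches when testing a stable branch, otherwise the openstack-charmers /next branches (except for the base charms, which only have stable branches). Condensed into a single hypothetical function:

```python
def branch_location(name, stable=True, current_next='trusty',
                    base_charms=('mysql', 'mongodb', 'rabbitmq-server')):
    """Return the Launchpad branch a test deployment should deploy a
    companion service from, mirroring the stable/next selection above."""
    if stable or name in base_charms:
        return 'lp:charms/{}'.format(name)
    return 'lp:~openstack-charmers/charms/{}/{}/next'.format(current_next,
                                                             name)
```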
595=== modified file 'hooks/charmhelpers/contrib/openstack/amulet/utils.py'
596--- hooks/charmhelpers/contrib/openstack/amulet/utils.py 2014-08-13 13:11:34 +0000
597+++ hooks/charmhelpers/contrib/openstack/amulet/utils.py 2014-12-11 17:56:30 +0000
598@@ -7,6 +7,8 @@
599 import keystoneclient.v2_0 as keystone_client
600 import novaclient.v1_1.client as nova_client
601
602+import six
603+
604 from charmhelpers.contrib.amulet.utils import (
605 AmuletUtils
606 )
607@@ -60,7 +62,7 @@
608 expected service catalog endpoints.
609 """
610 self.log.debug('actual: {}'.format(repr(actual)))
611- for k, v in expected.iteritems():
612+ for k, v in six.iteritems(expected):
613 if k in actual:
614 ret = self._validate_dict_data(expected[k][0], actual[k][0])
615 if ret:
616@@ -187,15 +189,16 @@
617
618 f = opener.open("http://download.cirros-cloud.net/version/released")
619 version = f.read().strip()
620- cirros_img = "tests/cirros-{}-x86_64-disk.img".format(version)
621+ cirros_img = "cirros-{}-x86_64-disk.img".format(version)
622+ local_path = os.path.join('tests', cirros_img)
623
624- if not os.path.exists(cirros_img):
625+ if not os.path.exists(local_path):
626 cirros_url = "http://{}/{}/{}".format("download.cirros-cloud.net",
627 version, cirros_img)
628- opener.retrieve(cirros_url, cirros_img)
629+ opener.retrieve(cirros_url, local_path)
630 f.close()
631
632- with open(cirros_img) as f:
633+ with open(local_path) as f:
634 image = glance.images.create(name=image_name, is_public=True,
635 disk_format='qcow2',
636 container_format='bare', data=f)
637
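Several hunks in this proposal replace dict.iteritems() with six.iteritems() so the helpers run under both Python 2 and 3. The equivalent portable idiom without the six dependency would be (items is an illustrative helper, not part of the diff):

```python
def items(d):
    """Portable dict iteration: iteritems() exists only on Python 2, so
    fall back to items() (a list on 2, a view object on 3)."""
    try:
        return d.iteritems()
    except AttributeError:
        return d.items()
```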
638=== modified file 'hooks/charmhelpers/contrib/openstack/context.py'
639--- hooks/charmhelpers/contrib/openstack/context.py 2014-08-13 13:11:34 +0000
640+++ hooks/charmhelpers/contrib/openstack/context.py 2014-12-11 17:56:30 +0000
641@@ -1,21 +1,18 @@
642 import json
643 import os
644 import time
645-
646 from base64 import b64decode
647-
648-from subprocess import (
649- check_call
650-)
651-
652+from subprocess import check_call
653+
654+import six
655
656 from charmhelpers.fetch import (
657 apt_install,
658 filter_installed_packages,
659 )
660-
661 from charmhelpers.core.hookenv import (
662 config,
663+ is_relation_made,
664 local_unit,
665 log,
666 relation_get,
667@@ -24,32 +21,40 @@
668 relation_set,
669 unit_get,
670 unit_private_ip,
671+ DEBUG,
672+ INFO,
673+ WARNING,
674 ERROR,
675- INFO
676-)
677-
678+)
679+from charmhelpers.core.host import (
680+ mkdir,
681+ write_file,
682+)
683 from charmhelpers.contrib.hahelpers.cluster import (
684 determine_apache_port,
685 determine_api_port,
686 https,
687- is_clustered
688+ is_clustered,
689 )
690-
691 from charmhelpers.contrib.hahelpers.apache import (
692 get_cert,
693 get_ca_cert,
694+ install_ca_cert,
695 )
696-
697 from charmhelpers.contrib.openstack.neutron import (
698 neutron_plugin_attribute,
699 )
700-
701 from charmhelpers.contrib.network.ip import (
702 get_address_in_network,
703 get_ipv6_addr,
704+ get_netmask_for_address,
705+ format_ipv6_addr,
706+ is_address_in_network,
707 )
708+from charmhelpers.contrib.openstack.utils import get_host_ip
709
710 CA_CERT_PATH = '/usr/local/share/ca-certificates/keystone_juju_ca_cert.crt'
711+ADDRESS_TYPES = ['admin', 'internal', 'public']
712
713
714 class OSContextError(Exception):
715@@ -57,7 +62,7 @@
716
717
718 def ensure_packages(packages):
719- '''Install but do not upgrade required plugin packages'''
720+ """Install but do not upgrade required plugin packages."""
721 required = filter_installed_packages(packages)
722 if required:
723 apt_install(required, fatal=True)
724@@ -65,20 +70,27 @@
725
726 def context_complete(ctxt):
727 _missing = []
728- for k, v in ctxt.iteritems():
729+ for k, v in six.iteritems(ctxt):
730 if v is None or v == '':
731 _missing.append(k)
732+
733 if _missing:
734- log('Missing required data: %s' % ' '.join(_missing), level='INFO')
735+ log('Missing required data: %s' % ' '.join(_missing), level=INFO)
736 return False
737+
738 return True
739
740
741 def config_flags_parser(config_flags):
742+ """Parses config flags string into dict.
743+
744+ The provided config_flags string may be a list of comma-separated values
745+ which themselves may be comma-separated list of values.
746+ """
747 if config_flags.find('==') >= 0:
748- log("config_flags is not in expected format (key=value)",
749- level=ERROR)
750+ log("config_flags is not in expected format (key=value)", level=ERROR)
751 raise OSContextError
752+
753 # strip the following from each value.
754 post_strippers = ' ,'
755 # we strip any leading/trailing '=' or ' ' from the string then
756@@ -86,7 +98,7 @@
757 split = config_flags.strip(' =').split('=')
758 limit = len(split)
759 flags = {}
760- for i in xrange(0, limit - 1):
761+ for i in range(0, limit - 1):
762 current = split[i]
763 next = split[i + 1]
764 vindex = next.rfind(',')
765@@ -101,17 +113,18 @@
766 # if this not the first entry, expect an embedded key.
767 index = current.rfind(',')
768 if index < 0:
769- log("invalid config value(s) at index %s" % (i),
770- level=ERROR)
771+ log("Invalid config value(s) at index %s" % (i), level=ERROR)
772 raise OSContextError
773 key = current[index + 1:]
774
775 # Add to collection.
776 flags[key.strip(post_strippers)] = value.rstrip(post_strippers)
777+
778 return flags
779
780
781 class OSContextGenerator(object):
782+ """Base class for all context generators."""
783 interfaces = []
784
785 def __call__(self):
786@@ -123,11 +136,11 @@
787
788 def __init__(self,
789 database=None, user=None, relation_prefix=None, ssl_dir=None):
790- '''
791- Allows inspecting relation for settings prefixed with relation_prefix.
792- This is useful for parsing access for multiple databases returned via
793- the shared-db interface (eg, nova_password, quantum_password)
794- '''
795+ """Allows inspecting relation for settings prefixed with
796+ relation_prefix. This is useful for parsing access for multiple
797+ databases returned via the shared-db interface (eg, nova_password,
798+ quantum_password)
799+ """
800 self.relation_prefix = relation_prefix
801 self.database = database
802 self.user = user
803@@ -137,9 +150,8 @@
804 self.database = self.database or config('database')
805 self.user = self.user or config('database-user')
806 if None in [self.database, self.user]:
807- log('Could not generate shared_db context. '
808- 'Missing required charm config options. '
809- '(database name and user)')
810+ log("Could not generate shared_db context. Missing required charm "
811+ "config options. (database name and user)", level=ERROR)
812 raise OSContextError
813
814 ctxt = {}
815@@ -168,8 +180,10 @@
816 for rid in relation_ids('shared-db'):
817 for unit in related_units(rid):
818 rdata = relation_get(rid=rid, unit=unit)
819+ host = rdata.get('db_host')
820+ host = format_ipv6_addr(host) or host
821 ctxt = {
822- 'database_host': rdata.get('db_host'),
823+ 'database_host': host,
824 'database': self.database,
825 'database_user': self.user,
826 'database_password': rdata.get(password_setting),
827@@ -190,23 +204,24 @@
828 def __call__(self):
829 self.database = self.database or config('database')
830 if self.database is None:
831- log('Could not generate postgresql_db context. '
832- 'Missing required charm config options. '
833- '(database name)')
834+ log('Could not generate postgresql_db context. Missing required '
835+ 'charm config options. (database name)', level=ERROR)
836 raise OSContextError
837+
838 ctxt = {}
839-
840 for rid in relation_ids(self.interfaces[0]):
841 for unit in related_units(rid):
842- ctxt = {
843- 'database_host': relation_get('host', rid=rid, unit=unit),
844- 'database': self.database,
845- 'database_user': relation_get('user', rid=rid, unit=unit),
846- 'database_password': relation_get('password', rid=rid, unit=unit),
847- 'database_type': 'postgresql',
848- }
849+ rel_host = relation_get('host', rid=rid, unit=unit)
850+ rel_user = relation_get('user', rid=rid, unit=unit)
851+ rel_passwd = relation_get('password', rid=rid, unit=unit)
852+ ctxt = {'database_host': rel_host,
853+ 'database': self.database,
854+ 'database_user': rel_user,
855+ 'database_password': rel_passwd,
856+ 'database_type': 'postgresql'}
857 if context_complete(ctxt):
858 return ctxt
859+
860 return {}
861
862
863@@ -215,23 +230,29 @@
864 ca_path = os.path.join(ssl_dir, 'db-client.ca')
865 with open(ca_path, 'w') as fh:
866 fh.write(b64decode(rdata['ssl_ca']))
867+
868 ctxt['database_ssl_ca'] = ca_path
869 elif 'ssl_ca' in rdata:
870- log("Charm not setup for ssl support but ssl ca found")
871+ log("Charm not set up for ssl support but ssl ca found", level=INFO)
872 return ctxt
873+
874 if 'ssl_cert' in rdata:
875 cert_path = os.path.join(
876 ssl_dir, 'db-client.cert')
877 if not os.path.exists(cert_path):
878- log("Waiting 1m for ssl client cert validity")
879+ log("Waiting 1m for ssl client cert validity", level=INFO)
880 time.sleep(60)
881+
882 with open(cert_path, 'w') as fh:
883 fh.write(b64decode(rdata['ssl_cert']))
884+
885 ctxt['database_ssl_cert'] = cert_path
886 key_path = os.path.join(ssl_dir, 'db-client.key')
887 with open(key_path, 'w') as fh:
888 fh.write(b64decode(rdata['ssl_key']))
889+
890 ctxt['database_ssl_key'] = key_path
891+
892 return ctxt
893
894
895@@ -239,31 +260,33 @@
896 interfaces = ['identity-service']
897
898 def __call__(self):
899- log('Generating template context for identity-service')
900+ log('Generating template context for identity-service', level=DEBUG)
901 ctxt = {}
902-
903 for rid in relation_ids('identity-service'):
904 for unit in related_units(rid):
905 rdata = relation_get(rid=rid, unit=unit)
906- ctxt = {
907- 'service_port': rdata.get('service_port'),
908- 'service_host': rdata.get('service_host'),
909- 'auth_host': rdata.get('auth_host'),
910- 'auth_port': rdata.get('auth_port'),
911- 'admin_tenant_name': rdata.get('service_tenant'),
912- 'admin_user': rdata.get('service_username'),
913- 'admin_password': rdata.get('service_password'),
914- 'service_protocol':
915- rdata.get('service_protocol') or 'http',
916- 'auth_protocol':
917- rdata.get('auth_protocol') or 'http',
918- }
919+ serv_host = rdata.get('service_host')
920+ serv_host = format_ipv6_addr(serv_host) or serv_host
921+ auth_host = rdata.get('auth_host')
922+ auth_host = format_ipv6_addr(auth_host) or auth_host
923+ svc_protocol = rdata.get('service_protocol') or 'http'
924+ auth_protocol = rdata.get('auth_protocol') or 'http'
925+ ctxt = {'service_port': rdata.get('service_port'),
926+ 'service_host': serv_host,
927+ 'auth_host': auth_host,
928+ 'auth_port': rdata.get('auth_port'),
929+ 'admin_tenant_name': rdata.get('service_tenant'),
930+ 'admin_user': rdata.get('service_username'),
931+ 'admin_password': rdata.get('service_password'),
932+ 'service_protocol': svc_protocol,
933+ 'auth_protocol': auth_protocol}
934 if context_complete(ctxt):
935 # NOTE(jamespage) this is required for >= icehouse
936 # so a missing value just indicates keystone needs
937 # upgrading
938 ctxt['admin_tenant_id'] = rdata.get('service_tenant_id')
939 return ctxt
940+
941 return {}
942
943
944@@ -276,32 +299,37 @@
945 self.interfaces = [rel_name]
946
947 def __call__(self):
948- log('Generating template context for amqp')
949+ log('Generating template context for amqp', level=DEBUG)
950 conf = config()
951- user_setting = 'rabbit-user'
952- vhost_setting = 'rabbit-vhost'
953 if self.relation_prefix:
954- user_setting = self.relation_prefix + '-rabbit-user'
955- vhost_setting = self.relation_prefix + '-rabbit-vhost'
956+ user_setting = '%s-rabbit-user' % (self.relation_prefix)
957+ vhost_setting = '%s-rabbit-vhost' % (self.relation_prefix)
958+ else:
959+ user_setting = 'rabbit-user'
960+ vhost_setting = 'rabbit-vhost'
961
962 try:
963 username = conf[user_setting]
964 vhost = conf[vhost_setting]
965 except KeyError as e:
966- log('Could not generate shared_db context. '
967- 'Missing required charm config options: %s.' % e)
968+ log('Could not generate shared_db context. Missing required charm '
969+ 'config options: %s.' % e, level=ERROR)
970 raise OSContextError
971+
972 ctxt = {}
973 for rid in relation_ids(self.rel_name):
974 ha_vip_only = False
975 for unit in related_units(rid):
976 if relation_get('clustered', rid=rid, unit=unit):
977 ctxt['clustered'] = True
978- ctxt['rabbitmq_host'] = relation_get('vip', rid=rid,
979- unit=unit)
980+ vip = relation_get('vip', rid=rid, unit=unit)
981+ vip = format_ipv6_addr(vip) or vip
982+ ctxt['rabbitmq_host'] = vip
983 else:
984- ctxt['rabbitmq_host'] = relation_get('private-address',
985- rid=rid, unit=unit)
986+ host = relation_get('private-address', rid=rid, unit=unit)
987+ host = format_ipv6_addr(host) or host
988+ ctxt['rabbitmq_host'] = host
989+
990 ctxt.update({
991 'rabbitmq_user': username,
992 'rabbitmq_password': relation_get('password', rid=rid,
993@@ -312,6 +340,7 @@
994 ssl_port = relation_get('ssl_port', rid=rid, unit=unit)
995 if ssl_port:
996 ctxt['rabbit_ssl_port'] = ssl_port
997+
998 ssl_ca = relation_get('ssl_ca', rid=rid, unit=unit)
999 if ssl_ca:
1000 ctxt['rabbit_ssl_ca'] = ssl_ca
1001@@ -325,40 +354,45 @@
1002 if context_complete(ctxt):
1003 if 'rabbit_ssl_ca' in ctxt:
1004 if not self.ssl_dir:
1005- log(("Charm not setup for ssl support "
1006- "but ssl ca found"))
1007+ log("Charm not set up for ssl support but ssl ca "
1008+ "found", level=INFO)
1009 break
1010+
1011 ca_path = os.path.join(
1012 self.ssl_dir, 'rabbit-client-ca.pem')
1013 with open(ca_path, 'w') as fh:
1014 fh.write(b64decode(ctxt['rabbit_ssl_ca']))
1015 ctxt['rabbit_ssl_ca'] = ca_path
1016+
1017 # Sufficient information found = break out!
1018 break
1019+
1020 # Used for active/active rabbitmq >= grizzly
1021- if ('clustered' not in ctxt or ha_vip_only) \
1022- and len(related_units(rid)) > 1:
1023+ if (('clustered' not in ctxt or ha_vip_only) and
1024+ len(related_units(rid)) > 1):
1025 rabbitmq_hosts = []
1026 for unit in related_units(rid):
1027- rabbitmq_hosts.append(relation_get('private-address',
1028- rid=rid, unit=unit))
1029- ctxt['rabbitmq_hosts'] = ','.join(rabbitmq_hosts)
1030+ host = relation_get('private-address', rid=rid, unit=unit)
1031+ host = format_ipv6_addr(host) or host
1032+ rabbitmq_hosts.append(host)
1033+
1034+ ctxt['rabbitmq_hosts'] = ','.join(sorted(rabbitmq_hosts))
1035+
1036 if not context_complete(ctxt):
1037 return {}
1038- else:
1039- return ctxt
1040+
1041+ return ctxt
1042
1043
1044 class CephContext(OSContextGenerator):
1045+ """Generates context for /etc/ceph/ceph.conf templates."""
1046 interfaces = ['ceph']
1047
1048 def __call__(self):
1049- '''This generates context for /etc/ceph/ceph.conf templates'''
1050 if not relation_ids('ceph'):
1051 return {}
1052
1053- log('Generating template context for ceph')
1054-
1055+ log('Generating template context for ceph', level=DEBUG)
1056 mon_hosts = []
1057 auth = None
1058 key = None
1059@@ -367,17 +401,18 @@
1060 for unit in related_units(rid):
1061 auth = relation_get('auth', rid=rid, unit=unit)
1062 key = relation_get('key', rid=rid, unit=unit)
1063- ceph_addr = \
1064- relation_get('ceph-public-address', rid=rid, unit=unit) or \
1065- relation_get('private-address', rid=rid, unit=unit)
1066+ ceph_pub_addr = relation_get('ceph-public-address', rid=rid,
1067+ unit=unit)
1068+ unit_priv_addr = relation_get('private-address', rid=rid,
1069+ unit=unit)
1070+ ceph_addr = ceph_pub_addr or unit_priv_addr
1071+ ceph_addr = format_ipv6_addr(ceph_addr) or ceph_addr
1072 mon_hosts.append(ceph_addr)
1073
1074- ctxt = {
1075- 'mon_hosts': ' '.join(mon_hosts),
1076- 'auth': auth,
1077- 'key': key,
1078- 'use_syslog': use_syslog
1079- }
1080+ ctxt = {'mon_hosts': ' '.join(sorted(mon_hosts)),
1081+ 'auth': auth,
1082+ 'key': key,
1083+ 'use_syslog': use_syslog}
1084
1085 if not os.path.isdir('/etc/ceph'):
1086 os.mkdir('/etc/ceph')
1087@@ -386,40 +421,70 @@
1088 return {}
1089
1090 ensure_packages(['ceph-common'])
1091-
1092 return ctxt
1093
1094
1095 class HAProxyContext(OSContextGenerator):
1096+ """Provides half a context for the haproxy template, which describes
1097+ all peers to be included in the cluster. Each charm needs to include
1098+ its own context generator that describes the port mapping.
1099+ """
1100 interfaces = ['cluster']
1101
1102+ def __init__(self, singlenode_mode=False):
1103+ self.singlenode_mode = singlenode_mode
1104+
1105 def __call__(self):
1106- '''
1107- Builds half a context for the haproxy template, which describes
1108- all peers to be included in the cluster. Each charm needs to include
1109- its own context generator that describes the port mapping.
1110- '''
1111- if not relation_ids('cluster'):
1112+ if not relation_ids('cluster') and not self.singlenode_mode:
1113 return {}
1114
1115+ if config('prefer-ipv6'):
1116+ addr = get_ipv6_addr(exc_list=[config('vip')])[0]
1117+ else:
1118+ addr = get_host_ip(unit_get('private-address'))
1119+
1120+ l_unit = local_unit().replace('/', '-')
1121 cluster_hosts = {}
1122- l_unit = local_unit().replace('/', '-')
1123- if config('prefer-ipv6'):
1124- addr = get_ipv6_addr()
1125- else:
1126- addr = unit_get('private-address')
1127- cluster_hosts[l_unit] = get_address_in_network(config('os-internal-network'),
1128- addr)
1129-
1130- for rid in relation_ids('cluster'):
1131- for unit in related_units(rid):
1132- _unit = unit.replace('/', '-')
1133- addr = relation_get('private-address', rid=rid, unit=unit)
1134- cluster_hosts[_unit] = addr
1135-
1136- ctxt = {
1137- 'units': cluster_hosts,
1138- }
1139+
1140+ # NOTE(jamespage): build out map of configured network endpoints
1141+ # and associated backends
1142+ for addr_type in ADDRESS_TYPES:
1143+ cfg_opt = 'os-{}-network'.format(addr_type)
1144+ laddr = get_address_in_network(config(cfg_opt))
1145+ if laddr:
1146+ netmask = get_netmask_for_address(laddr)
1147+ cluster_hosts[laddr] = {'network': "{}/{}".format(laddr,
1148+ netmask),
1149+ 'backends': {l_unit: laddr}}
1150+ for rid in relation_ids('cluster'):
1151+ for unit in related_units(rid):
1152+ _laddr = relation_get('{}-address'.format(addr_type),
1153+ rid=rid, unit=unit)
1154+ if _laddr:
1155+ _unit = unit.replace('/', '-')
1156+ cluster_hosts[laddr]['backends'][_unit] = _laddr
1157+
1158+ # NOTE(jamespage) no split configurations found, just use
1159+ # private addresses
1160+ if not cluster_hosts:
1161+ netmask = get_netmask_for_address(addr)
1162+ cluster_hosts[addr] = {'network': "{}/{}".format(addr, netmask),
1163+ 'backends': {l_unit: addr}}
1164+ for rid in relation_ids('cluster'):
1165+ for unit in related_units(rid):
1166+ _laddr = relation_get('private-address',
1167+ rid=rid, unit=unit)
1168+ if _laddr:
1169+ _unit = unit.replace('/', '-')
1170+ cluster_hosts[addr]['backends'][_unit] = _laddr
1171+
1172+ ctxt = {'frontends': cluster_hosts}
1173+
1174+ if config('haproxy-server-timeout'):
1175+ ctxt['haproxy_server_timeout'] = config('haproxy-server-timeout')
1176+
1177+ if config('haproxy-client-timeout'):
1178+ ctxt['haproxy_client_timeout'] = config('haproxy-client-timeout')
1179
1180 if config('prefer-ipv6'):
1181 ctxt['local_host'] = 'ip6-localhost'
1182@@ -430,13 +495,19 @@
1183 ctxt['haproxy_host'] = '0.0.0.0'
1184 ctxt['stat_port'] = ':8888'
1185
1186- if len(cluster_hosts.keys()) > 1:
1187- # Enable haproxy when we have enough peers.
1188- log('Ensuring haproxy enabled in /etc/default/haproxy.')
1189- with open('/etc/default/haproxy', 'w') as out:
1190- out.write('ENABLED=1\n')
1191- return ctxt
1192- log('HAProxy context is incomplete, this unit has no peers.')
1193+ for frontend in cluster_hosts:
1194+ if (len(cluster_hosts[frontend]['backends']) > 1 or
1195+ self.singlenode_mode):
1196+ # Enable haproxy when we have enough peers.
1197+ log('Ensuring haproxy enabled in /etc/default/haproxy.',
1198+ level=DEBUG)
1199+ with open('/etc/default/haproxy', 'w') as out:
1200+ out.write('ENABLED=1\n')
1201+
1202+ return ctxt
1203+
1204+ log('HAProxy context is incomplete, this unit has no peers.',
1205+ level=INFO)
1206 return {}
1207
1208
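The rewritten HAProxyContext now emits a `frontends` map keyed by local address, each entry carrying its network and a `backends` dict of peer units; haproxy is enabled when any frontend has more than one backend (or `singlenode_mode` is set). A hypothetical illustration of the shape the template consumes (unit names and addresses are invented):

```python
# Hypothetical context produced for two units on a split
# os-internal-network; names and addresses are illustrative only.
ctxt = {
    'frontends': {
        '10.10.0.10': {
            'network': '10.10.0.10/255.255.255.0',
            'backends': {
                'heat-0': '10.10.0.10',   # local unit
                'heat-1': '10.10.0.11',   # cluster peer
            },
        },
    },
}

# Mirrors the enablement rule in the hunk above:
enabled = any(len(f['backends']) > 1 for f in ctxt['frontends'].values())
print(enabled)  # True
```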
1209@@ -444,29 +515,28 @@
1210 interfaces = ['image-service']
1211
1212 def __call__(self):
1213- '''
1214- Obtains the glance API server from the image-service relation. Useful
1215- in nova and cinder (currently).
1216- '''
1217- log('Generating template context for image-service.')
1218+ """Obtains the glance API server from the image-service relation.
1219+ Useful in nova and cinder (currently).
1220+ """
1221+ log('Generating template context for image-service.', level=DEBUG)
1222 rids = relation_ids('image-service')
1223 if not rids:
1224 return {}
1225+
1226 for rid in rids:
1227 for unit in related_units(rid):
1228 api_server = relation_get('glance-api-server',
1229 rid=rid, unit=unit)
1230 if api_server:
1231 return {'glance_api_servers': api_server}
1232- log('ImageService context is incomplete. '
1233- 'Missing required relation data.')
1234+
1235+ log("ImageService context is incomplete. Missing required relation "
1236+ "data.", level=INFO)
1237 return {}
1238
1239
1240 class ApacheSSLContext(OSContextGenerator):
1241-
1242- """
1243- Generates a context for an apache vhost configuration that configures
1244+ """Generates a context for an apache vhost configuration that configures
1245 HTTPS reverse proxying for one or many endpoints. Generated context
1246 looks something like::
1247
1248@@ -490,44 +560,111 @@
1249 cmd = ['a2enmod', 'ssl', 'proxy', 'proxy_http']
1250 check_call(cmd)
1251
1252- def configure_cert(self):
1253- if not os.path.isdir('/etc/apache2/ssl'):
1254- os.mkdir('/etc/apache2/ssl')
1255+ def configure_cert(self, cn=None):
1256 ssl_dir = os.path.join('/etc/apache2/ssl/', self.service_namespace)
1257- if not os.path.isdir(ssl_dir):
1258- os.mkdir(ssl_dir)
1259- cert, key = get_cert()
1260- with open(os.path.join(ssl_dir, 'cert'), 'w') as cert_out:
1261- cert_out.write(b64decode(cert))
1262- with open(os.path.join(ssl_dir, 'key'), 'w') as key_out:
1263- key_out.write(b64decode(key))
1264+ mkdir(path=ssl_dir)
1265+ cert, key = get_cert(cn)
1266+ if cn:
1267+ cert_filename = 'cert_{}'.format(cn)
1268+ key_filename = 'key_{}'.format(cn)
1269+ else:
1270+ cert_filename = 'cert'
1271+ key_filename = 'key'
1272+
1273+ write_file(path=os.path.join(ssl_dir, cert_filename),
1274+ content=b64decode(cert))
1275+ write_file(path=os.path.join(ssl_dir, key_filename),
1276+ content=b64decode(key))
1277+
1278+ def configure_ca(self):
1279 ca_cert = get_ca_cert()
1280 if ca_cert:
1281- with open(CA_CERT_PATH, 'w') as ca_out:
1282- ca_out.write(b64decode(ca_cert))
1283- check_call(['update-ca-certificates'])
1284+ install_ca_cert(b64decode(ca_cert))
1285+
1286+ def canonical_names(self):
1287+ """Figure out which canonical names clients will use to access this service.
1288+ """
1289+ cns = []
1290+ for r_id in relation_ids('identity-service'):
1291+ for unit in related_units(r_id):
1292+ rdata = relation_get(rid=r_id, unit=unit)
1293+ for k in rdata:
1294+ if k.startswith('ssl_key_'):
1295+ cns.append(k[len('ssl_key_'):])
1296+
1297+ return sorted(list(set(cns)))
1298+
1299+ def get_network_addresses(self):
1300+ """For each network configured, return corresponding address and vip
1301+ (if available).
1302+
1303+ Returns a list of tuples of the form:
1304+
1305+ [(address_in_net_a, vip_in_net_a),
1306+ (address_in_net_b, vip_in_net_b),
1307+ ...]
1308+
1309+ or, if no vip(s) available:
1310+
1311+ [(address_in_net_a, address_in_net_a),
1312+ (address_in_net_b, address_in_net_b),
1313+ ...]
1314+ """
1315+ addresses = []
1316+ if config('vip'):
1317+ vips = config('vip').split()
1318+ else:
1319+ vips = []
1320+
1321+ for net_type in ['os-internal-network', 'os-admin-network',
1322+ 'os-public-network']:
1323+ addr = get_address_in_network(config(net_type),
1324+ unit_get('private-address'))
1325+ if len(vips) > 1 and is_clustered():
1326+ if not config(net_type):
1327+ log("Multiple networks configured but net_type "
1328+ "is None (%s)." % net_type, level=WARNING)
1329+ continue
1330+
1331+ for vip in vips:
1332+ if is_address_in_network(config(net_type), vip):
1333+ addresses.append((addr, vip))
1334+ break
1335+
1336+ elif is_clustered() and config('vip'):
1337+ addresses.append((addr, config('vip')))
1338+ else:
1339+ addresses.append((addr, addr))
1340+
1341+ return sorted(addresses)
1342
1343 def __call__(self):
1344- if isinstance(self.external_ports, basestring):
1345+ if isinstance(self.external_ports, six.string_types):
1346 self.external_ports = [self.external_ports]
1347- if (not self.external_ports or not https()):
1348+
1349+ if not self.external_ports or not https():
1350 return {}
1351
1352- self.configure_cert()
1353+ self.configure_ca()
1354 self.enable_modules()
1355
1356- ctxt = {
1357- 'namespace': self.service_namespace,
1358- 'private_address': unit_get('private-address'),
1359- 'endpoints': []
1360- }
1361- if is_clustered():
1362- ctxt['private_address'] = config('vip')
1363- for api_port in self.external_ports:
1364- ext_port = determine_apache_port(api_port)
1365- int_port = determine_api_port(api_port)
1366- portmap = (int(ext_port), int(int_port))
1367- ctxt['endpoints'].append(portmap)
1368+ ctxt = {'namespace': self.service_namespace,
1369+ 'endpoints': [],
1370+ 'ext_ports': []}
1371+
1372+ for cn in self.canonical_names():
1373+ self.configure_cert(cn)
1374+
1375+ addresses = self.get_network_addresses()
1376+ for address, endpoint in sorted(set(addresses)):
1377+ for api_port in self.external_ports:
1378+ ext_port = determine_apache_port(api_port)
1379+ int_port = determine_api_port(api_port)
1380+ portmap = (address, endpoint, int(ext_port), int(int_port))
1381+ ctxt['endpoints'].append(portmap)
1382+ ctxt['ext_ports'].append(int(ext_port))
1383+
1384+ ctxt['ext_ports'] = sorted(list(set(ctxt['ext_ports'])))
1385 return ctxt
1386
1387
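The `get_network_addresses` docstring above describes pairing each configured network's local address with a vip on the same network (or with itself when unclustered). A sketch of that pairing rule, under the assumption that `local_addrs_by_net` stands in for the `config('os-*-network')` + `get_address_in_network()` lookups:

```python
import ipaddress

def pair_addresses(local_addrs_by_net, vips, clustered):
    """Sketch of the (address, endpoint) pairing used by ApacheSSLContext.

    local_addrs_by_net: {network_cidr: local_address} - hypothetical
    stand-in for the charm-config driven lookups in the diff.
    """
    pairs = []
    for net, addr in sorted(local_addrs_by_net.items()):
        if clustered and len(vips) > 1:
            # net-splits: pick the vip that falls inside this network
            for vip in vips:
                if ipaddress.ip_address(vip) in ipaddress.ip_network(net):
                    pairs.append((addr, vip))
                    break
        elif clustered and vips:
            pairs.append((addr, vips[0]))
        else:
            pairs.append((addr, addr))  # no vip: endpoint is the unit addr
    return pairs

print(pair_addresses({'10.0.0.0/24': '10.0.0.5'}, [], False))
# [('10.0.0.5', '10.0.0.5')]
print(pair_addresses({'10.0.0.0/24': '10.0.0.5',
                      '192.168.1.0/24': '192.168.1.5'},
                     ['10.0.0.100', '192.168.1.100'], True))
# [('10.0.0.5', '10.0.0.100'), ('192.168.1.5', '192.168.1.100')]
```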
1388@@ -544,21 +681,23 @@
1389
1390 @property
1391 def packages(self):
1392- return neutron_plugin_attribute(
1393- self.plugin, 'packages', self.network_manager)
1394+ return neutron_plugin_attribute(self.plugin, 'packages',
1395+ self.network_manager)
1396
1397 @property
1398 def neutron_security_groups(self):
1399 return None
1400
1401 def _ensure_packages(self):
1402- [ensure_packages(pkgs) for pkgs in self.packages]
1403+ for pkgs in self.packages:
1404+ ensure_packages(pkgs)
1405
1406 def _save_flag_file(self):
1407 if self.network_manager == 'quantum':
1408 _file = '/etc/nova/quantum_plugin.conf'
1409 else:
1410 _file = '/etc/nova/neutron_plugin.conf'
1411+
1412 with open(_file, 'wb') as out:
1413 out.write(self.plugin + '\n')
1414
1415@@ -567,13 +706,11 @@
1416 self.network_manager)
1417 config = neutron_plugin_attribute(self.plugin, 'config',
1418 self.network_manager)
1419- ovs_ctxt = {
1420- 'core_plugin': driver,
1421- 'neutron_plugin': 'ovs',
1422- 'neutron_security_groups': self.neutron_security_groups,
1423- 'local_ip': unit_private_ip(),
1424- 'config': config
1425- }
1426+ ovs_ctxt = {'core_plugin': driver,
1427+ 'neutron_plugin': 'ovs',
1428+ 'neutron_security_groups': self.neutron_security_groups,
1429+ 'local_ip': unit_private_ip(),
1430+ 'config': config}
1431
1432 return ovs_ctxt
1433
1434@@ -582,13 +719,11 @@
1435 self.network_manager)
1436 config = neutron_plugin_attribute(self.plugin, 'config',
1437 self.network_manager)
1438- nvp_ctxt = {
1439- 'core_plugin': driver,
1440- 'neutron_plugin': 'nvp',
1441- 'neutron_security_groups': self.neutron_security_groups,
1442- 'local_ip': unit_private_ip(),
1443- 'config': config
1444- }
1445+ nvp_ctxt = {'core_plugin': driver,
1446+ 'neutron_plugin': 'nvp',
1447+ 'neutron_security_groups': self.neutron_security_groups,
1448+ 'local_ip': unit_private_ip(),
1449+ 'config': config}
1450
1451 return nvp_ctxt
1452
1453@@ -597,35 +732,50 @@
1454 self.network_manager)
1455 n1kv_config = neutron_plugin_attribute(self.plugin, 'config',
1456 self.network_manager)
1457- n1kv_ctxt = {
1458- 'core_plugin': driver,
1459- 'neutron_plugin': 'n1kv',
1460- 'neutron_security_groups': self.neutron_security_groups,
1461- 'local_ip': unit_private_ip(),
1462- 'config': n1kv_config,
1463- 'vsm_ip': config('n1kv-vsm-ip'),
1464- 'vsm_username': config('n1kv-vsm-username'),
1465- 'vsm_password': config('n1kv-vsm-password'),
1466- 'restrict_policy_profiles': config(
1467- 'n1kv_restrict_policy_profiles'),
1468- }
1469+ n1kv_user_config_flags = config('n1kv-config-flags')
1470+ restrict_policy_profiles = config('n1kv-restrict-policy-profiles')
1471+ n1kv_ctxt = {'core_plugin': driver,
1472+ 'neutron_plugin': 'n1kv',
1473+ 'neutron_security_groups': self.neutron_security_groups,
1474+ 'local_ip': unit_private_ip(),
1475+ 'config': n1kv_config,
1476+ 'vsm_ip': config('n1kv-vsm-ip'),
1477+ 'vsm_username': config('n1kv-vsm-username'),
1478+ 'vsm_password': config('n1kv-vsm-password'),
1479+ 'restrict_policy_profiles': restrict_policy_profiles}
1480+
1481+ if n1kv_user_config_flags:
1482+ flags = config_flags_parser(n1kv_user_config_flags)
1483+ n1kv_ctxt['user_config_flags'] = flags
1484
1485 return n1kv_ctxt
1486
1487+ def calico_ctxt(self):
1488+ driver = neutron_plugin_attribute(self.plugin, 'driver',
1489+ self.network_manager)
1490+ config = neutron_plugin_attribute(self.plugin, 'config',
1491+ self.network_manager)
1492+ calico_ctxt = {'core_plugin': driver,
1493+ 'neutron_plugin': 'Calico',
1494+ 'neutron_security_groups': self.neutron_security_groups,
1495+ 'local_ip': unit_private_ip(),
1496+ 'config': config}
1497+
1498+ return calico_ctxt
1499+
1500 def neutron_ctxt(self):
1501 if https():
1502 proto = 'https'
1503 else:
1504 proto = 'http'
1505+
1506 if is_clustered():
1507 host = config('vip')
1508 else:
1509 host = unit_get('private-address')
1510- url = '%s://%s:%s' % (proto, host, '9696')
1511- ctxt = {
1512- 'network_manager': self.network_manager,
1513- 'neutron_url': url,
1514- }
1515+
1516+ ctxt = {'network_manager': self.network_manager,
1517+ 'neutron_url': '%s://%s:%s' % (proto, host, '9696')}
1518 return ctxt
1519
1520 def __call__(self):
1521@@ -645,6 +795,8 @@
1522 ctxt.update(self.nvp_ctxt())
1523 elif self.plugin == 'n1kv':
1524 ctxt.update(self.n1kv_ctxt())
1525+ elif self.plugin == 'Calico':
1526+ ctxt.update(self.calico_ctxt())
1527
1528 alchemy_flags = config('neutron-alchemy-flags')
1529 if alchemy_flags:
1530@@ -656,23 +808,40 @@
1531
1532
1533 class OSConfigFlagContext(OSContextGenerator):
1534-
1535- """
1536- Responsible for adding user-defined config-flags in charm config to a
1537- template context.
1538-
1539- NOTE: the value of config-flags may be a comma-separated list of
1540- key=value pairs and some Openstack config files support
1541- comma-separated lists as values.
1542- """
1543-
1544- def __call__(self):
1545- config_flags = config('config-flags')
1546- if not config_flags:
1547- return {}
1548-
1549- flags = config_flags_parser(config_flags)
1550- return {'user_config_flags': flags}
1551+ """Provides support for user-defined config flags.
1552+
1553+ Users can define a comma-separated list of key=value pairs
1554+ in the charm configuration and apply them at any point in
1555+ any file by using a template flag.
1556+
1557+ Sometimes users might want config flags inserted within a
1558+ specific section so this class allows users to specify the
1559+ template flag name, allowing for multiple template flags
1560+ (sections) within the same context.
1561+
1562+ NOTE: the value of config-flags may be a comma-separated list of
1563+ key=value pairs and some OpenStack config files support
1564+ comma-separated lists as values.
1565+ """
1566+
1567+ def __init__(self, charm_flag='config-flags',
1568+ template_flag='user_config_flags'):
1569+ """
1570+ :param charm_flag: config flags in charm configuration.
1571+ :param template_flag: insert point for user-defined flags in template
1572+ file.
1573+ """
1574+ super(OSConfigFlagContext, self).__init__()
1575+ self._charm_flag = charm_flag
1576+ self._template_flag = template_flag
1577+
1578+ def __call__(self):
1579+ config_flags = config(self._charm_flag)
1580+ if not config_flags:
1581+ return {}
1582+
1583+ return {self._template_flag:
1584+ config_flags_parser(config_flags)}
1585
1586
1587 class SubordinateConfigContext(OSContextGenerator):
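OSConfigFlagContext hands the raw `config-flags` string to `config_flags_parser` and exposes the result under the chosen template flag. A naive sketch of the assumed parsing behaviour (the real charm-helpers parser is more forgiving and also copes with values that themselves contain commas):

```python
def parse_config_flags(config_flags):
    """Naive sketch: split 'k1=v1,k2=v2' into a dict.

    Only covers the simple case; the real config_flags_parser in
    charm-helpers also handles quoted and comma-separated list values.
    """
    flags = {}
    for pair in config_flags.split(','):
        key, _, value = pair.partition('=')
        flags[key.strip()] = value.strip()
    return flags

ctxt = {'user_config_flags': parse_config_flags('max_pool_size=20,use_tpool=True')}
print(ctxt)
# {'user_config_flags': {'max_pool_size': '20', 'use_tpool': 'True'}}
```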
1588@@ -716,7 +885,6 @@
1589 },
1590 }
1591 }
1592-
1593 """
1594
1595 def __init__(self, service, config_file, interface):
1596@@ -746,26 +914,28 @@
1597
1598 if self.service not in sub_config:
1599 log('Found subordinate_config on %s but it contained'
1600- 'nothing for %s service' % (rid, self.service))
1601+ ' nothing for %s service' % (rid, self.service),
1602+ level=INFO)
1603 continue
1604
1605 sub_config = sub_config[self.service]
1606 if self.config_file not in sub_config:
1607 log('Found subordinate_config on %s but it contained'
1608- 'nothing for %s' % (rid, self.config_file))
1609+ ' nothing for %s' % (rid, self.config_file),
1610+ level=INFO)
1611 continue
1612
1613 sub_config = sub_config[self.config_file]
1614- for k, v in sub_config.iteritems():
1615+ for k, v in six.iteritems(sub_config):
1616 if k == 'sections':
1617- for section, config_dict in v.iteritems():
1618- log("adding section '%s'" % (section))
1619+ for section, config_dict in six.iteritems(v):
1620+ log("adding section '%s'" % (section),
1621+ level=DEBUG)
1622 ctxt[k][section] = config_dict
1623 else:
1624 ctxt[k] = v
1625
1626- log("%d section(s) found" % (len(ctxt['sections'])), level=INFO)
1627-
1628+ log("%d section(s) found" % (len(ctxt['sections'])), level=DEBUG)
1629 return ctxt
1630
1631
1632@@ -777,13 +947,71 @@
1633 False if config('debug') is None else config('debug')
1634 ctxt['verbose'] = \
1635 False if config('verbose') is None else config('verbose')
1636+
1637 return ctxt
1638
1639
1640 class SyslogContext(OSContextGenerator):
1641
1642 def __call__(self):
1643- ctxt = {
1644- 'use_syslog': config('use-syslog')
1645- }
1646+ ctxt = {'use_syslog': config('use-syslog')}
1647+ return ctxt
1648+
1649+
1650+class BindHostContext(OSContextGenerator):
1651+
1652+ def __call__(self):
1653+ if config('prefer-ipv6'):
1654+ return {'bind_host': '::'}
1655+ else:
1656+ return {'bind_host': '0.0.0.0'}
1657+
1658+
1659+class WorkerConfigContext(OSContextGenerator):
1660+
1661+ @property
1662+ def num_cpus(self):
1663+ try:
1664+ from psutil import NUM_CPUS
1665+ except ImportError:
1666+ apt_install('python-psutil', fatal=True)
1667+ from psutil import NUM_CPUS
1668+
1669+ return NUM_CPUS
1670+
1671+ def __call__(self):
1672+ multiplier = config('worker-multiplier') or 0
1673+ ctxt = {"workers": self.num_cpus * multiplier}
1674+ return ctxt
1675+
1676+
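WorkerConfigContext above computes workers as CPU count times the `worker-multiplier` config option (defaulting to 0, which disables workers). A stdlib-only sketch of that calculation; note that `psutil.NUM_CPUS`, which the diff imports, was removed in later psutil releases in favour of `psutil.cpu_count()`:

```python
import multiprocessing

def worker_count(multiplier):
    """Sketch of WorkerConfigContext: workers = num_cpus * multiplier.

    Uses the stdlib instead of psutil.NUM_CPUS; `multiplier or 0`
    mirrors the fallback when 'worker-multiplier' is unset.
    """
    return multiprocessing.cpu_count() * (multiplier or 0)

print(worker_count(0))  # 0 regardless of core count
```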
1677+class ZeroMQContext(OSContextGenerator):
1678+ interfaces = ['zeromq-configuration']
1679+
1680+ def __call__(self):
1681+ ctxt = {}
1682+ if is_relation_made('zeromq-configuration', 'host'):
1683+ for rid in relation_ids('zeromq-configuration'):
1684+ for unit in related_units(rid):
1685+ ctxt['zmq_nonce'] = relation_get('nonce', unit, rid)
1686+ ctxt['zmq_host'] = relation_get('host', unit, rid)
1687+
1688+ return ctxt
1689+
1690+
1691+class NotificationDriverContext(OSContextGenerator):
1692+
1693+ def __init__(self, zmq_relation='zeromq-configuration',
1694+ amqp_relation='amqp'):
1695+ """
1696+ :param zmq_relation: Name of Zeromq relation to check
1697+ """
1698+ self.zmq_relation = zmq_relation
1699+ self.amqp_relation = amqp_relation
1700+
1701+ def __call__(self):
1702+ ctxt = {'notifications': 'False'}
1703+ if is_relation_made(self.amqp_relation):
1704+ ctxt['notifications'] = "True"
1705+
1706 return ctxt
1707
1708=== modified file 'hooks/charmhelpers/contrib/openstack/ip.py'
1709--- hooks/charmhelpers/contrib/openstack/ip.py 2014-08-13 13:11:34 +0000
1710+++ hooks/charmhelpers/contrib/openstack/ip.py 2014-12-11 17:56:30 +0000
1711@@ -2,21 +2,19 @@
1712 config,
1713 unit_get,
1714 )
1715-
1716 from charmhelpers.contrib.network.ip import (
1717 get_address_in_network,
1718 is_address_in_network,
1719 is_ipv6,
1720 get_ipv6_addr,
1721 )
1722-
1723 from charmhelpers.contrib.hahelpers.cluster import is_clustered
1724
1725 PUBLIC = 'public'
1726 INTERNAL = 'int'
1727 ADMIN = 'admin'
1728
1729-_address_map = {
1730+ADDRESS_MAP = {
1731 PUBLIC: {
1732 'config': 'os-public-network',
1733 'fallback': 'public-address'
1734@@ -33,16 +31,14 @@
1735
1736
1737 def canonical_url(configs, endpoint_type=PUBLIC):
1738- '''
1739- Returns the correct HTTP URL to this host given the state of HTTPS
1740+ """Returns the correct HTTP URL to this host given the state of HTTPS
1741 configuration, hacluster and charm configuration.
1742
1743- :configs OSTemplateRenderer: A config tempating object to inspect for
1744- a complete https context.
1745- :endpoint_type str: The endpoint type to resolve.
1746-
1747- :returns str: Base URL for services on the current service unit.
1748- '''
1749+ :param configs: OSTemplateRenderer config templating object to inspect
1750+ for a complete https context.
1751+ :param endpoint_type: str endpoint type to resolve.
1752+ :returns: str base URL for services on the current service unit.
1753+ """
1754 scheme = 'http'
1755 if 'https' in configs.complete_contexts():
1756 scheme = 'https'
1757@@ -53,27 +49,45 @@
1758
1759
1760 def resolve_address(endpoint_type=PUBLIC):
1761+ """Return unit address depending on net config.
1762+
1763+ If unit is clustered with vip(s) and has net splits defined, return vip on
1764+ correct network. If clustered with no nets defined, return primary vip.
1765+
1766+ If not clustered, return unit address ensuring address is on configured net
1767+ split if one is configured.
1768+
1769+ :param endpoint_type: Network endpoint type
1770+ """
1771 resolved_address = None
1772- if is_clustered():
1773- if config(_address_map[endpoint_type]['config']) is None:
1774- # Assume vip is simple and pass back directly
1775- resolved_address = config('vip')
1776+ vips = config('vip')
1777+ if vips:
1778+ vips = vips.split()
1779+
1780+ net_type = ADDRESS_MAP[endpoint_type]['config']
1781+ net_addr = config(net_type)
1782+ net_fallback = ADDRESS_MAP[endpoint_type]['fallback']
1783+ clustered = is_clustered()
1784+ if clustered:
1785+ if not net_addr:
1786+ # If no net-splits defined, we expect a single vip
1787+ resolved_address = vips[0]
1788 else:
1789- for vip in config('vip').split():
1790- if is_address_in_network(
1791- config(_address_map[endpoint_type]['config']),
1792- vip):
1793+ for vip in vips:
1794+ if is_address_in_network(net_addr, vip):
1795 resolved_address = vip
1796+ break
1797 else:
1798 if config('prefer-ipv6'):
1799- fallback_addr = get_ipv6_addr()
1800+ fallback_addr = get_ipv6_addr(exc_list=vips)[0]
1801 else:
1802- fallback_addr = unit_get(_address_map[endpoint_type]['fallback'])
1803- resolved_address = get_address_in_network(
1804- config(_address_map[endpoint_type]['config']), fallback_addr)
1805+ fallback_addr = unit_get(net_fallback)
1806+
1807+ resolved_address = get_address_in_network(net_addr, fallback_addr)
1808
1809 if resolved_address is None:
1810- raise ValueError('Unable to resolve a suitable IP address'
1811- ' based on charm state and configuration')
1812- else:
1813- return resolved_address
1814+ raise ValueError("Unable to resolve a suitable IP address based on "
1815+ "charm state and configuration. (net_type=%s, "
1816+ "clustered=%s)" % (net_type, clustered))
1817+
1818+ return resolved_address
1819
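The clustered branch of the rewritten `resolve_address` picks the first vip when no net-split is configured, otherwise the vip that lies inside the endpoint's network; a `None` result makes the caller raise ValueError. A minimal sketch of that branch:

```python
import ipaddress

def resolve_vip(vips, net_addr):
    """Sketch of the clustered branch of resolve_address above.

    vips: list from config('vip').split(); net_addr: the os-*-network
    CIDR for the endpoint type (None when no net-split is configured).
    """
    if not net_addr:
        return vips[0]                     # single-vip deployment
    for vip in vips:
        if ipaddress.ip_address(vip) in ipaddress.ip_network(net_addr):
            return vip                     # vip on the requested network
    return None                            # caller raises ValueError

print(resolve_vip(['10.0.0.100'], None))             # 10.0.0.100
print(resolve_vip(['10.0.0.100', '172.16.0.100'],
                  '172.16.0.0/16'))                  # 172.16.0.100
```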
1820=== modified file 'hooks/charmhelpers/contrib/openstack/neutron.py'
1821--- hooks/charmhelpers/contrib/openstack/neutron.py 2014-08-13 13:11:34 +0000
1822+++ hooks/charmhelpers/contrib/openstack/neutron.py 2014-12-11 17:56:30 +0000
1823@@ -14,7 +14,7 @@
1824 def headers_package():
1825 """Ensures correct linux-headers for running kernel are installed,
1826 for building DKMS package"""
1827- kver = check_output(['uname', '-r']).strip()
1828+ kver = check_output(['uname', '-r']).decode('UTF-8').strip()
1829 return 'linux-headers-%s' % kver
1830
1831 QUANTUM_CONF_DIR = '/etc/quantum'
1832@@ -22,7 +22,7 @@
1833
1834 def kernel_version():
1835 """ Retrieve the current major kernel version as a tuple e.g. (3, 13) """
1836- kver = check_output(['uname', '-r']).strip()
1837+ kver = check_output(['uname', '-r']).decode('UTF-8').strip()
1838 kver = kver.split('.')
1839 return (int(kver[0]), int(kver[1]))
1840
1841@@ -138,10 +138,25 @@
1842 relation_prefix='neutron',
1843 ssl_dir=NEUTRON_CONF_DIR)],
1844 'services': [],
1845- 'packages': [['neutron-plugin-cisco']],
1846+ 'packages': [[headers_package()] + determine_dkms_package(),
1847+ ['neutron-plugin-cisco']],
1848 'server_packages': ['neutron-server',
1849 'neutron-plugin-cisco'],
1850 'server_services': ['neutron-server']
1851+ },
1852+ 'Calico': {
1853+ 'config': '/etc/neutron/plugins/ml2/ml2_conf.ini',
1854+ 'driver': 'neutron.plugins.ml2.plugin.Ml2Plugin',
1855+ 'contexts': [
1856+ context.SharedDBContext(user=config('neutron-database-user'),
1857+ database=config('neutron-database'),
1858+ relation_prefix='neutron',
1859+ ssl_dir=NEUTRON_CONF_DIR)],
1860+ 'services': ['calico-compute', 'bird', 'neutron-dhcp-agent'],
1861+ 'packages': [[headers_package()] + determine_dkms_package(),
1862+ ['calico-compute', 'bird', 'neutron-dhcp-agent']],
1863+ 'server_packages': ['neutron-server', 'calico-control'],
1864+ 'server_services': ['neutron-server']
1865 }
1866 }
1867 if release >= 'icehouse':
1868@@ -162,7 +177,8 @@
1869 elif manager == 'neutron':
1870 plugins = neutron_plugins()
1871 else:
1872- log('Error: Network manager does not support plugins.')
1873+ log("Network manager '%s' does not support plugins." % (manager),
1874+ level=ERROR)
1875 raise Exception
1876
1877 try:
1878
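The repeated `.decode('UTF-8')` additions in this hunk are Python 3 compatibility fixes: `check_output()` returns `bytes` there, so the result must be decoded before string operations such as `strip()` or %-formatting. A minimal sketch, with `echo` standing in for `uname -r`:

```python
import subprocess

# Under Python 3, check_output() returns bytes, not str.
raw = subprocess.check_output(['echo', '3.13.0-24-generic'])

# Decode before str operations, as the diff does for kver.
kver = raw.decode('UTF-8').strip()
headers = 'linux-headers-%s' % kver
```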
1879=== modified file 'hooks/charmhelpers/contrib/openstack/templates/haproxy.cfg'
1880--- hooks/charmhelpers/contrib/openstack/templates/haproxy.cfg 2014-08-13 13:11:34 +0000
1881+++ hooks/charmhelpers/contrib/openstack/templates/haproxy.cfg 2014-12-11 17:56:30 +0000
1882@@ -14,8 +14,17 @@
1883 retries 3
1884 timeout queue 1000
1885 timeout connect 1000
1886+{% if haproxy_client_timeout -%}
1887+ timeout client {{ haproxy_client_timeout }}
1888+{% else -%}
1889 timeout client 30000
1890+{% endif -%}
1891+
1892+{% if haproxy_server_timeout -%}
1893+ timeout server {{ haproxy_server_timeout }}
1894+{% else -%}
1895 timeout server 30000
1896+{% endif -%}
1897
1898 listen stats {{ stat_port }}
1899 mode http
1900@@ -25,17 +34,21 @@
1901 stats uri /
1902 stats auth admin:password
1903
1904-{% if units -%}
1905-{% for service, ports in service_ports.iteritems() -%}
1906-listen {{ service }}_ipv4 0.0.0.0:{{ ports[0] }}
1907- balance roundrobin
1908- {% for unit, address in units.iteritems() -%}
1909- server {{ unit }} {{ address }}:{{ ports[1] }} check
1910- {% endfor %}
1911-listen {{ service }}_ipv6 :::{{ ports[0] }}
1912- balance roundrobin
1913- {% for unit, address in units.iteritems() -%}
1914- server {{ unit }} {{ address }}:{{ ports[1] }} check
1915- {% endfor %}
1916+{% if frontends -%}
1917+{% for service, ports in service_ports.items() -%}
1918+frontend tcp-in_{{ service }}
1919+ bind *:{{ ports[0] }}
1920+ bind :::{{ ports[0] }}
1921+ {% for frontend in frontends -%}
1922+ acl net_{{ frontend }} dst {{ frontends[frontend]['network'] }}
1923+ use_backend {{ service }}_{{ frontend }} if net_{{ frontend }}
1924+ {% endfor %}
1925+{% for frontend in frontends -%}
1926+backend {{ service }}_{{ frontend }}
1927+ balance leastconn
1928+ {% for unit, address in frontends[frontend]['backends'].items() -%}
1929+ server {{ unit }} {{ address }}:{{ ports[1] }} check
1930+ {% endfor %}
1931+{% endfor -%}
1932 {% endfor -%}
1933 {% endif -%}
1934
1935=== modified file 'hooks/charmhelpers/contrib/openstack/templates/openstack_https_frontend'
1936--- hooks/charmhelpers/contrib/openstack/templates/openstack_https_frontend 2013-11-19 12:14:57 +0000
1937+++ hooks/charmhelpers/contrib/openstack/templates/openstack_https_frontend 2014-12-11 17:56:30 +0000
1938@@ -1,16 +1,18 @@
1939 {% if endpoints -%}
1940-{% for ext, int in endpoints -%}
1941-Listen {{ ext }}
1942-NameVirtualHost *:{{ ext }}
1943-<VirtualHost *:{{ ext }}>
1944- ServerName {{ private_address }}
1945+{% for ext_port in ext_ports -%}
1946+Listen {{ ext_port }}
1947+{% endfor -%}
1948+{% for address, endpoint, ext, int in endpoints -%}
1949+<VirtualHost {{ address }}:{{ ext }}>
1950+ ServerName {{ endpoint }}
1951 SSLEngine on
1952- SSLCertificateFile /etc/apache2/ssl/{{ namespace }}/cert
1953- SSLCertificateKeyFile /etc/apache2/ssl/{{ namespace }}/key
1954+ SSLCertificateFile /etc/apache2/ssl/{{ namespace }}/cert_{{ endpoint }}
1955+ SSLCertificateKeyFile /etc/apache2/ssl/{{ namespace }}/key_{{ endpoint }}
1956 ProxyPass / http://localhost:{{ int }}/
1957 ProxyPassReverse / http://localhost:{{ int }}/
1958 ProxyPreserveHost on
1959 </VirtualHost>
1960+{% endfor -%}
1961 <Proxy *>
1962 Order deny,allow
1963 Allow from all
1964@@ -19,5 +21,4 @@
1965 Order allow,deny
1966 Allow from all
1967 </Location>
1968-{% endfor -%}
1969 {% endif -%}
1970
1971=== modified file 'hooks/charmhelpers/contrib/openstack/templates/openstack_https_frontend.conf'
1972--- hooks/charmhelpers/contrib/openstack/templates/openstack_https_frontend.conf 2013-11-19 12:14:57 +0000
1973+++ hooks/charmhelpers/contrib/openstack/templates/openstack_https_frontend.conf 2014-12-11 17:56:30 +0000
1974@@ -1,16 +1,18 @@
1975 {% if endpoints -%}
1976-{% for ext, int in endpoints -%}
1977-Listen {{ ext }}
1978-NameVirtualHost *:{{ ext }}
1979-<VirtualHost *:{{ ext }}>
1980- ServerName {{ private_address }}
1981+{% for ext_port in ext_ports -%}
1982+Listen {{ ext_port }}
1983+{% endfor -%}
1984+{% for address, endpoint, ext, int in endpoints -%}
1985+<VirtualHost {{ address }}:{{ ext }}>
1986+ ServerName {{ endpoint }}
1987 SSLEngine on
1988- SSLCertificateFile /etc/apache2/ssl/{{ namespace }}/cert
1989- SSLCertificateKeyFile /etc/apache2/ssl/{{ namespace }}/key
1990+ SSLCertificateFile /etc/apache2/ssl/{{ namespace }}/cert_{{ endpoint }}
1991+ SSLCertificateKeyFile /etc/apache2/ssl/{{ namespace }}/key_{{ endpoint }}
1992 ProxyPass / http://localhost:{{ int }}/
1993 ProxyPassReverse / http://localhost:{{ int }}/
1994 ProxyPreserveHost on
1995 </VirtualHost>
1996+{% endfor -%}
1997 <Proxy *>
1998 Order deny,allow
1999 Allow from all
2000@@ -19,5 +21,4 @@
2001 Order allow,deny
2002 Allow from all
2003 </Location>
2004-{% endfor -%}
2005 {% endif -%}
2006
2007=== modified file 'hooks/charmhelpers/contrib/openstack/templating.py'
2008--- hooks/charmhelpers/contrib/openstack/templating.py 2014-08-13 13:11:34 +0000
2009+++ hooks/charmhelpers/contrib/openstack/templating.py 2014-12-11 17:56:30 +0000
2010@@ -1,13 +1,13 @@
2011 import os
2012
2013+import six
2014+
2015 from charmhelpers.fetch import apt_install
2016-
2017 from charmhelpers.core.hookenv import (
2018 log,
2019 ERROR,
2020 INFO
2021 )
2022-
2023 from charmhelpers.contrib.openstack.utils import OPENSTACK_CODENAMES
2024
2025 try:
2026@@ -43,7 +43,7 @@
2027 order by OpenStack release.
2028 """
2029 tmpl_dirs = [(rel, os.path.join(templates_dir, rel))
2030- for rel in OPENSTACK_CODENAMES.itervalues()]
2031+ for rel in six.itervalues(OPENSTACK_CODENAMES)]
2032
2033 if not os.path.isdir(templates_dir):
2034 log('Templates directory not found @ %s.' % templates_dir,
2035@@ -258,7 +258,7 @@
2036 """
2037 Write out all registered config files.
2038 """
2039- [self.write(k) for k in self.templates.iterkeys()]
2040+ [self.write(k) for k in six.iterkeys(self.templates)]
2041
2042 def set_release(self, openstack_release):
2043 """
2044@@ -275,5 +275,5 @@
2045 '''
2046 interfaces = []
2047 [interfaces.extend(i.complete_contexts())
2048- for i in self.templates.itervalues()]
2049+ for i in six.itervalues(self.templates)]
2050 return interfaces
2051
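The `six.iteritems`/`six.itervalues`/`six.iterkeys` swaps throughout this file are the standard Python 2/3 dict-iteration fix. A minimal shim showing what `six.iteritems` provides (a sketch of the behavior, not the actual six implementation):

```python
def iteritems(d):
    """Return an iterator over (key, value) pairs on both Python 2
    (dict.iteritems) and Python 3 (dict.items returns a view)."""
    try:
        return d.iteritems()    # Python 2
    except AttributeError:
        return iter(d.items())  # Python 3
```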
2052=== modified file 'hooks/charmhelpers/contrib/openstack/utils.py'
2053--- hooks/charmhelpers/contrib/openstack/utils.py 2014-08-26 13:27:17 +0000
2054+++ hooks/charmhelpers/contrib/openstack/utils.py 2014-12-11 17:56:30 +0000
2055@@ -2,18 +2,24 @@
2056
2057 # Common python helper functions used for OpenStack charms.
2058 from collections import OrderedDict
2059+from functools import wraps
2060
2061 import subprocess
2062+import json
2063 import os
2064 import socket
2065 import sys
2066
2067+import six
2068+import yaml
2069+
2070 from charmhelpers.core.hookenv import (
2071 config,
2072 log as juju_log,
2073 charm_dir,
2074- ERROR,
2075- INFO
2076+ INFO,
2077+ relation_ids,
2078+ relation_set
2079 )
2080
2081 from charmhelpers.contrib.storage.linux.lvm import (
2082@@ -22,8 +28,13 @@
2083 remove_lvm_physical_volume,
2084 )
2085
2086+from charmhelpers.contrib.network.ip import (
2087+ get_ipv6_addr
2088+)
2089+
2090 from charmhelpers.core.host import lsb_release, mounts, umount
2091-from charmhelpers.fetch import apt_install, apt_cache
2092+from charmhelpers.fetch import apt_install, apt_cache, install_remote
2093+from charmhelpers.contrib.python.packages import pip_install
2094 from charmhelpers.contrib.storage.linux.utils import is_block_device, zap_disk
2095 from charmhelpers.contrib.storage.linux.loopback import ensure_loopback_device
2096
2097@@ -70,6 +81,9 @@
2098 ('1.13.0', 'icehouse'),
2099 ('1.12.0', 'icehouse'),
2100 ('1.11.0', 'icehouse'),
2101+ ('2.0.0', 'juno'),
2102+ ('2.1.0', 'juno'),
2103+ ('2.2.0', 'juno'),
2104 ])
2105
2106 DEFAULT_LOOPBACK_SIZE = '5G'
2107@@ -102,7 +116,7 @@
2108
2109 # Best guess match based on deb string provided
2110 if src.startswith('deb') or src.startswith('ppa'):
2111- for k, v in OPENSTACK_CODENAMES.iteritems():
2112+ for k, v in six.iteritems(OPENSTACK_CODENAMES):
2113 if v in src:
2114 return v
2115
2116@@ -123,7 +137,7 @@
2117
2118 def get_os_version_codename(codename):
2119 '''Determine OpenStack version number from codename.'''
2120- for k, v in OPENSTACK_CODENAMES.iteritems():
2121+ for k, v in six.iteritems(OPENSTACK_CODENAMES):
2122 if v == codename:
2123 return k
2124 e = 'Could not derive OpenStack version for '\
2125@@ -183,7 +197,7 @@
2126 else:
2127 vers_map = OPENSTACK_CODENAMES
2128
2129- for version, cname in vers_map.iteritems():
2130+ for version, cname in six.iteritems(vers_map):
2131 if cname == codename:
2132 return version
2133 # e = "Could not determine OpenStack version for package: %s" % pkg
2134@@ -307,7 +321,7 @@
2135 rc_script.write(
2136 "#!/bin/bash\n")
2137 [rc_script.write('export %s=%s\n' % (u, p))
2138- for u, p in env_vars.iteritems() if u != "script_path"]
2139+ for u, p in six.iteritems(env_vars) if u != "script_path"]
2140
2141
2142 def openstack_upgrade_available(package):
2143@@ -340,8 +354,8 @@
2144 '''
2145 _none = ['None', 'none', None]
2146 if (block_device in _none):
2147- error_out('prepare_storage(): Missing required input: '
2148- 'block_device=%s.' % block_device, level=ERROR)
2149+ error_out('prepare_storage(): Missing required input: block_device=%s.'
2150+ % block_device)
2151
2152 if block_device.startswith('/dev/'):
2153 bdev = block_device
2154@@ -357,8 +371,7 @@
2155 bdev = '/dev/%s' % block_device
2156
2157 if not is_block_device(bdev):
2158- error_out('Failed to locate valid block device at %s' % bdev,
2159- level=ERROR)
2160+ error_out('Failed to locate valid block device at %s' % bdev)
2161
2162 return bdev
2163
2164@@ -407,7 +420,7 @@
2165
2166 if isinstance(address, dns.name.Name):
2167 rtype = 'PTR'
2168- elif isinstance(address, basestring):
2169+ elif isinstance(address, six.string_types):
2170 rtype = 'A'
2171 else:
2172 return None
2173@@ -456,3 +469,151 @@
2174 return result
2175 else:
2176 return result.split('.')[0]
2177+
2178+
2179+def get_matchmaker_map(mm_file='/etc/oslo/matchmaker_ring.json'):
2180+ mm_map = {}
2181+ if os.path.isfile(mm_file):
2182+ with open(mm_file, 'r') as f:
2183+ mm_map = json.load(f)
2184+ return mm_map
2185+
2186+
2187+def sync_db_with_multi_ipv6_addresses(database, database_user,
2188+ relation_prefix=None):
2189+ hosts = get_ipv6_addr(dynamic_only=False)
2190+
2191+ kwargs = {'database': database,
2192+ 'username': database_user,
2193+ 'hostname': json.dumps(hosts)}
2194+
2195+ if relation_prefix:
2196+ for key in list(kwargs.keys()):
2197+ kwargs["%s_%s" % (relation_prefix, key)] = kwargs[key]
2198+ del kwargs[key]
2199+
2200+ for rid in relation_ids('shared-db'):
2201+ relation_set(relation_id=rid, **kwargs)
2202+
2203+
2204+def os_requires_version(ostack_release, pkg):
2205+ """
2206+ Decorator for hook to specify minimum supported release
2207+ """
2208+ def wrap(f):
2209+ @wraps(f)
2210+ def wrapped_f(*args):
2211+ if os_release(pkg) < ostack_release:
2212+ raise Exception("This hook is not supported on releases"
2213+ " before %s" % ostack_release)
2214+ f(*args)
2215+ return wrapped_f
2216+ return wrap
2217+
2218+
2219+def git_install_requested():
2220+ """Returns true if openstack-origin-git is specified."""
2221+ return config('openstack-origin-git') != "None"
2222+
2223+
2224+requirements_dir = None
2225+
2226+
2227+def git_clone_and_install(file_name, core_project):
2228+ """Clone/install all OpenStack repos specified in yaml config file."""
2229+ global requirements_dir
2230+
2231+ if file_name == "None":
2232+ return
2233+
2234+ yaml_file = os.path.join(charm_dir(), file_name)
2235+
2236+ # clone/install the requirements project first
2237+ installed = _git_clone_and_install_subset(yaml_file,
2238+ whitelist=['requirements'])
2239+ if 'requirements' not in installed:
2240+ error_out('requirements git repository must be specified')
2241+
2242+ # clone/install all other projects except requirements and the core project
2243+ blacklist = ['requirements', core_project]
2244+ _git_clone_and_install_subset(yaml_file, blacklist=blacklist,
2245+ update_requirements=True)
2246+
2247+ # clone/install the core project
2248+ whitelist = [core_project]
2249+ installed = _git_clone_and_install_subset(yaml_file, whitelist=whitelist,
2250+ update_requirements=True)
2251+ if core_project not in installed:
2252+ error_out('{} git repository must be specified'.format(core_project))
2253+
2254+
2255+def _git_clone_and_install_subset(yaml_file, whitelist=[], blacklist=[],
2256+ update_requirements=False):
2257+ """Clone/install subset of OpenStack repos specified in yaml config file."""
2258+ global requirements_dir
2259+ installed = []
2260+
2261+ with open(yaml_file, 'r') as fd:
2262+ projects = yaml.load(fd)
2263+ for proj, val in projects.items():
2264+ # The project subset is chosen based on the following 3 rules:
2265+ # 1) If project is in blacklist, we don't clone/install it, period.
2266+ # 2) If whitelist is empty, we clone/install everything else.
2267+ # 3) If whitelist is not empty, we clone/install everything in the
2268+ # whitelist.
2269+ if proj in blacklist:
2270+ continue
2271+ if whitelist and proj not in whitelist:
2272+ continue
2273+ repo = val['repository']
2274+ branch = val['branch']
2275+ repo_dir = _git_clone_and_install_single(repo, branch,
2276+ update_requirements)
2277+ if proj == 'requirements':
2278+ requirements_dir = repo_dir
2279+ installed.append(proj)
2280+ return installed
2281+
2282+
2283+def _git_clone_and_install_single(repo, branch, update_requirements=False):
2284+ """Clone and install a single git repository."""
2285+ dest_parent_dir = "/mnt/openstack-git/"
2286+ dest_dir = os.path.join(dest_parent_dir, os.path.basename(repo))
2287+
2288+ if not os.path.exists(dest_parent_dir):
2289+ juju_log('Host dir not mounted at {}. '
2290+ 'Creating directory there instead.'.format(dest_parent_dir))
2291+ os.mkdir(dest_parent_dir)
2292+
2293+ if not os.path.exists(dest_dir):
2294+ juju_log('Cloning git repo: {}, branch: {}'.format(repo, branch))
2295+ repo_dir = install_remote(repo, dest=dest_parent_dir, branch=branch)
2296+ else:
2297+ repo_dir = dest_dir
2298+
2299+ if update_requirements:
2300+ if not requirements_dir:
2301+ error_out('requirements repo must be cloned before '
2302+ 'updating from global requirements.')
2303+ _git_update_requirements(repo_dir, requirements_dir)
2304+
2305+ juju_log('Installing git repo from dir: {}'.format(repo_dir))
2306+ pip_install(repo_dir)
2307+
2308+ return repo_dir
2309+
2310+
2311+def _git_update_requirements(package_dir, reqs_dir):
2312+ """Update from global requirements.
2313+
2314+ Update an OpenStack git directory's requirements.txt and
2315+ test-requirements.txt from global-requirements.txt."""
2316+ orig_dir = os.getcwd()
2317+ os.chdir(reqs_dir)
2318+ cmd = "python update.py {}".format(package_dir)
2319+ try:
2320+ subprocess.check_call(cmd.split(' '))
2321+ except subprocess.CalledProcessError:
2322+ package = os.path.basename(package_dir)
2323+ error_out("Error updating {} from global-requirements.txt".format(package))
2324+ os.chdir(orig_dir)
2325
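The new `os_requires_version` decorator gates a hook on a minimum supported release; OpenStack codenames happen to compare correctly as plain strings because they are alphabetical. A hedged sketch with the release lookup injected as a parameter (the real helper calls `os_release(pkg)` instead):

```python
from functools import wraps

def requires_version(minimum, get_release):
    """Refuse to run the wrapped hook on releases before `minimum`.
    `get_release` is a stand-in for the os_release(pkg) lookup."""
    def wrap(f):
        @wraps(f)
        def wrapped_f(*args):
            if get_release() < minimum:
                raise Exception("This hook is not supported on releases"
                                " before %s" % minimum)
            return f(*args)
        return wrapped_f
    return wrap

@requires_version('icehouse', lambda: 'juno')
def config_changed():
    return 'ran'
```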
2326=== added directory 'hooks/charmhelpers/contrib/python'
2327=== added file 'hooks/charmhelpers/contrib/python/__init__.py'
2328=== added file 'hooks/charmhelpers/contrib/python/packages.py'
2329--- hooks/charmhelpers/contrib/python/packages.py 1970-01-01 00:00:00 +0000
2330+++ hooks/charmhelpers/contrib/python/packages.py 2014-12-11 17:56:30 +0000
2331@@ -0,0 +1,77 @@
2332+#!/usr/bin/env python
2333+# coding: utf-8
2334+
2335+__author__ = "Jorge Niedbalski <jorge.niedbalski@canonical.com>"
2336+
2337+from charmhelpers.fetch import apt_install, apt_update
2338+from charmhelpers.core.hookenv import log
2339+
2340+try:
2341+ from pip import main as pip_execute
2342+except ImportError:
2343+ apt_update()
2344+ apt_install('python-pip')
2345+ from pip import main as pip_execute
2346+
2347+
2348+def parse_options(given, available):
2349+ """Given a set of options, check if available"""
2350+ for key, value in sorted(given.items()):
2351+ if key in available:
2352+ yield "--{0}={1}".format(key, value)
2353+
2354+
2355+def pip_install_requirements(requirements, **options):
2356+ """Install a requirements file """
2357+ command = ["install"]
2358+
2359+ available_options = ('proxy', 'src', 'log', )
2360+ for option in parse_options(options, available_options):
2361+ command.append(option)
2362+
2363+ command.append("-r {0}".format(requirements))
2364+ log("Installing from file: {} with options: {}".format(requirements,
2365+ command))
2366+ pip_execute(command)
2367+
2368+
2369+def pip_install(package, fatal=False, **options):
2370+ """Install a python package"""
2371+ command = ["install"]
2372+
2373+ available_options = ('proxy', 'src', 'log', "index-url", )
2374+ for option in parse_options(options, available_options):
2375+ command.append(option)
2376+
2377+ if isinstance(package, list):
2378+ command.extend(package)
2379+ else:
2380+ command.append(package)
2381+
2382+ log("Installing {} package with options: {}".format(package,
2383+ command))
2384+ pip_execute(command)
2385+
2386+
2387+def pip_uninstall(package, **options):
2388+ """Uninstall a python package"""
2389+ command = ["uninstall", "-q", "-y"]
2390+
2391+ available_options = ('proxy', 'log', )
2392+ for option in parse_options(options, available_options):
2393+ command.append(option)
2394+
2395+ if isinstance(package, list):
2396+ command.extend(package)
2397+ else:
2398+ command.append(package)
2399+
2400+ log("Uninstalling {} package with options: {}".format(package,
2401+ command))
2402+ pip_execute(command)
2403+
2404+
2405+def pip_list():
2406+ """Returns the list of current python installed packages
2407+ """
2408+ return pip_execute(["list"])
2409
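The new pip helpers build a pip argument list from a whitelist of keyword options; `parse_options` silently drops anything not in the whitelist. Reproduced standalone (without actually invoking pip):

```python
def parse_options(given, available):
    """Yield --key=value flags for options present in the whitelist,
    in sorted key order (as in contrib/python/packages.py)."""
    for key, value in sorted(given.items()):
        if key in available:
            yield "--{0}={1}".format(key, value)

# Unrecognized options ('bogus') are ignored rather than rejected.
command = ["install"]
command.extend(parse_options({'proxy': 'http://squid:3128', 'bogus': 'x'},
                             ('proxy', 'src', 'log')))
command.append('six')
```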
2410=== modified file 'hooks/charmhelpers/contrib/storage/linux/ceph.py'
2411--- hooks/charmhelpers/contrib/storage/linux/ceph.py 2014-08-13 13:11:34 +0000
2412+++ hooks/charmhelpers/contrib/storage/linux/ceph.py 2014-12-11 17:56:30 +0000
2413@@ -16,19 +16,18 @@
2414 from subprocess import (
2415 check_call,
2416 check_output,
2417- CalledProcessError
2418+ CalledProcessError,
2419 )
2420-
2421 from charmhelpers.core.hookenv import (
2422 relation_get,
2423 relation_ids,
2424 related_units,
2425 log,
2426+ DEBUG,
2427 INFO,
2428 WARNING,
2429- ERROR
2430+ ERROR,
2431 )
2432-
2433 from charmhelpers.core.host import (
2434 mount,
2435 mounts,
2436@@ -37,7 +36,6 @@
2437 service_running,
2438 umount,
2439 )
2440-
2441 from charmhelpers.fetch import (
2442 apt_install,
2443 )
2444@@ -56,99 +54,85 @@
2445
2446
2447 def install():
2448- ''' Basic Ceph client installation '''
2449+ """Basic Ceph client installation."""
2450 ceph_dir = "/etc/ceph"
2451 if not os.path.exists(ceph_dir):
2452 os.mkdir(ceph_dir)
2453+
2454 apt_install('ceph-common', fatal=True)
2455
2456
2457 def rbd_exists(service, pool, rbd_img):
2458- ''' Check to see if a RADOS block device exists '''
2459+ """Check to see if a RADOS block device exists."""
2460 try:
2461- out = check_output(['rbd', 'list', '--id', service,
2462- '--pool', pool])
2463+ out = check_output(['rbd', 'list', '--id',
2464+ service, '--pool', pool]).decode('UTF-8')
2465 except CalledProcessError:
2466 return False
2467- else:
2468- return rbd_img in out
2469+
2470+ return rbd_img in out
2471
2472
2473 def create_rbd_image(service, pool, image, sizemb):
2474- ''' Create a new RADOS block device '''
2475- cmd = [
2476- 'rbd',
2477- 'create',
2478- image,
2479- '--size',
2480- str(sizemb),
2481- '--id',
2482- service,
2483- '--pool',
2484- pool
2485- ]
2486+ """Create a new RADOS block device."""
2487+ cmd = ['rbd', 'create', image, '--size', str(sizemb), '--id', service,
2488+ '--pool', pool]
2489 check_call(cmd)
2490
2491
2492 def pool_exists(service, name):
2493- ''' Check to see if a RADOS pool already exists '''
2494+ """Check to see if a RADOS pool already exists."""
2495 try:
2496- out = check_output(['rados', '--id', service, 'lspools'])
2497+ out = check_output(['rados', '--id', service,
2498+ 'lspools']).decode('UTF-8')
2499 except CalledProcessError:
2500 return False
2501- else:
2502- return name in out
2503+
2504+ return name in out
2505
2506
2507 def get_osds(service):
2508- '''
2509- Return a list of all Ceph Object Storage Daemons
2510- currently in the cluster
2511- '''
2512+ """Return a list of all Ceph Object Storage Daemons currently in the
2513+ cluster.
2514+ """
2515 version = ceph_version()
2516 if version and version >= '0.56':
2517 return json.loads(check_output(['ceph', '--id', service,
2518- 'osd', 'ls', '--format=json']))
2519- else:
2520- return None
2521-
2522-
2523-def create_pool(service, name, replicas=2):
2524- ''' Create a new RADOS pool '''
2525+ 'osd', 'ls',
2526+ '--format=json']).decode('UTF-8'))
2527+
2528+ return None
2529+
2530+
2531+def create_pool(service, name, replicas=3):
2532+ """Create a new RADOS pool."""
2533 if pool_exists(service, name):
2534 log("Ceph pool {} already exists, skipping creation".format(name),
2535 level=WARNING)
2536 return
2537+
2538 # Calculate the number of placement groups based
2539 # on upstream recommended best practices.
2540 osds = get_osds(service)
2541 if osds:
2542- pgnum = (len(osds) * 100 / replicas)
2543+ pgnum = (len(osds) * 100 // replicas)
2544 else:
2545 # NOTE(james-page): Default to 200 for older ceph versions
2546 # which don't support OSD query from cli
2547 pgnum = 200
2548- cmd = [
2549- 'ceph', '--id', service,
2550- 'osd', 'pool', 'create',
2551- name, str(pgnum)
2552- ]
2553+
2554+ cmd = ['ceph', '--id', service, 'osd', 'pool', 'create', name, str(pgnum)]
2555 check_call(cmd)
2556- cmd = [
2557- 'ceph', '--id', service,
2558- 'osd', 'pool', 'set', name,
2559- 'size', str(replicas)
2560- ]
2561+
2562+ cmd = ['ceph', '--id', service, 'osd', 'pool', 'set', name, 'size',
2563+ str(replicas)]
2564 check_call(cmd)
2565
2566
2567 def delete_pool(service, name):
2568- ''' Delete a RADOS pool from ceph '''
2569- cmd = [
2570- 'ceph', '--id', service,
2571- 'osd', 'pool', 'delete',
2572- name, '--yes-i-really-really-mean-it'
2573- ]
2574+ """Delete a RADOS pool from ceph."""
2575+ cmd = ['ceph', '--id', service, 'osd', 'pool', 'delete', name,
2576+ '--yes-i-really-really-mean-it']
2577 check_call(cmd)
2578
2579
2580@@ -161,44 +145,43 @@
2581
2582
2583 def create_keyring(service, key):
2584- ''' Create a new Ceph keyring containing key'''
2585+ """Create a new Ceph keyring containing key."""
2586 keyring = _keyring_path(service)
2587 if os.path.exists(keyring):
2588- log('ceph: Keyring exists at %s.' % keyring, level=WARNING)
2589+ log('Ceph keyring exists at %s.' % keyring, level=WARNING)
2590 return
2591- cmd = [
2592- 'ceph-authtool',
2593- keyring,
2594- '--create-keyring',
2595- '--name=client.{}'.format(service),
2596- '--add-key={}'.format(key)
2597- ]
2598+
2599+ cmd = ['ceph-authtool', keyring, '--create-keyring',
2600+ '--name=client.{}'.format(service), '--add-key={}'.format(key)]
2601 check_call(cmd)
2602- log('ceph: Created new ring at %s.' % keyring, level=INFO)
2603+ log('Created new ceph keyring at %s.' % keyring, level=DEBUG)
2604
2605
2606 def create_key_file(service, key):
2607- ''' Create a file containing key '''
2608+ """Create a file containing key."""
2609 keyfile = _keyfile_path(service)
2610 if os.path.exists(keyfile):
2611- log('ceph: Keyfile exists at %s.' % keyfile, level=WARNING)
2612+ log('Keyfile exists at %s.' % keyfile, level=WARNING)
2613 return
2614+
2615 with open(keyfile, 'w') as fd:
2616 fd.write(key)
2617- log('ceph: Created new keyfile at %s.' % keyfile, level=INFO)
2618+
2619+ log('Created new keyfile at %s.' % keyfile, level=INFO)
2620
2621
2622 def get_ceph_nodes():
2623- ''' Query named relation 'ceph' to detemine current nodes '''
2624+ """Query named relation 'ceph' to determine current nodes."""
2625 hosts = []
2626 for r_id in relation_ids('ceph'):
2627 for unit in related_units(r_id):
2628 hosts.append(relation_get('private-address', unit=unit, rid=r_id))
2629+
2630 return hosts
2631
2632
2633 def configure(service, key, auth, use_syslog):
2634- ''' Perform basic configuration of Ceph '''
2635+ """Perform basic configuration of Ceph."""
2636 create_keyring(service, key)
2637 create_key_file(service, key)
2638 hosts = get_ceph_nodes()
2639@@ -211,17 +194,17 @@
2640
2641
2642 def image_mapped(name):
2643- ''' Determine whether a RADOS block device is mapped locally '''
2644+ """Determine whether a RADOS block device is mapped locally."""
2645 try:
2646- out = check_output(['rbd', 'showmapped'])
2647+ out = check_output(['rbd', 'showmapped']).decode('UTF-8')
2648 except CalledProcessError:
2649 return False
2650- else:
2651- return name in out
2652+
2653+ return name in out
2654
2655
2656 def map_block_storage(service, pool, image):
2657- ''' Map a RADOS block device for local use '''
2658+ """Map a RADOS block device for local use."""
2659 cmd = [
2660 'rbd',
2661 'map',
2662@@ -235,31 +218,32 @@
2663
2664
2665 def filesystem_mounted(fs):
2666- ''' Determine whether a filesytems is already mounted '''
2667+ """Determine whether a filesystem is already mounted."""
2668 return fs in [f for f, m in mounts()]
2669
2670
2671 def make_filesystem(blk_device, fstype='ext4', timeout=10):
2672- ''' Make a new filesystem on the specified block device '''
2673+ """Make a new filesystem on the specified block device."""
2674 count = 0
2675 e_noent = os.errno.ENOENT
2676 while not os.path.exists(blk_device):
2677 if count >= timeout:
2678- log('ceph: gave up waiting on block device %s' % blk_device,
2679+ log('Gave up waiting on block device %s' % blk_device,
2680 level=ERROR)
2681 raise IOError(e_noent, os.strerror(e_noent), blk_device)
2682- log('ceph: waiting for block device %s to appear' % blk_device,
2683- level=INFO)
2684+
2685+ log('Waiting for block device %s to appear' % blk_device,
2686+ level=DEBUG)
2687 count += 1
2688 time.sleep(1)
2689 else:
2690- log('ceph: Formatting block device %s as filesystem %s.' %
2691+ log('Formatting block device %s as filesystem %s.' %
2692 (blk_device, fstype), level=INFO)
2693 check_call(['mkfs', '-t', fstype, blk_device])
2694
2695
2696 def place_data_on_block_device(blk_device, data_src_dst):
2697- ''' Migrate data in data_src_dst to blk_device and then remount '''
2698+ """Migrate data in data_src_dst to blk_device and then remount."""
2699 # mount block device into /mnt
2700 mount(blk_device, '/mnt')
2701 # copy data to /mnt
2702@@ -279,8 +263,8 @@
2703
2704 # TODO: re-use
2705 def modprobe(module):
2706- ''' Load a kernel module and configure for auto-load on reboot '''
2707- log('ceph: Loading kernel module', level=INFO)
2708+ """Load a kernel module and configure for auto-load on reboot."""
2709+ log('Loading kernel module', level=INFO)
2710 cmd = ['modprobe', module]
2711 check_call(cmd)
2712 with open('/etc/modules', 'r+') as modules:
2713@@ -289,7 +273,7 @@
2714
2715
2716 def copy_files(src, dst, symlinks=False, ignore=None):
2717- ''' Copy files from src to dst '''
2718+ """Copy files from src to dst."""
2719 for item in os.listdir(src):
2720 s = os.path.join(src, item)
2721 d = os.path.join(dst, item)
2722@@ -300,9 +284,9 @@
2723
2724
2725 def ensure_ceph_storage(service, pool, rbd_img, sizemb, mount_point,
2726- blk_device, fstype, system_services=[]):
2727- """
2728- NOTE: This function must only be called from a single service unit for
2729+ blk_device, fstype, system_services=[],
2730+ replicas=3):
2731+ """NOTE: This function must only be called from a single service unit for
2732 the same rbd_img otherwise data loss will occur.
2733
2734 Ensures given pool and RBD image exists, is mapped to a block device,
2735@@ -316,15 +300,16 @@
2736 """
2737 # Ensure pool, RBD image, RBD mappings are in place.
2738 if not pool_exists(service, pool):
2739- log('ceph: Creating new pool {}.'.format(pool))
2740- create_pool(service, pool)
2741+ log('Creating new pool {}.'.format(pool), level=INFO)
2742+ create_pool(service, pool, replicas=replicas)
2743
2744 if not rbd_exists(service, pool, rbd_img):
2745- log('ceph: Creating RBD image ({}).'.format(rbd_img))
2746+ log('Creating RBD image ({}).'.format(rbd_img), level=INFO)
2747 create_rbd_image(service, pool, rbd_img, sizemb)
2748
2749 if not image_mapped(rbd_img):
2750- log('ceph: Mapping RBD Image {} as a Block Device.'.format(rbd_img))
2751+ log('Mapping RBD Image {} as a Block Device.'.format(rbd_img),
2752+ level=INFO)
2753 map_block_storage(service, pool, rbd_img)
2754
2755 # make file system
2756@@ -339,45 +324,47 @@
2757
2758 for svc in system_services:
2759 if service_running(svc):
2760- log('ceph: Stopping services {} prior to migrating data.'
2761- .format(svc))
2762+ log('Stopping services {} prior to migrating data.'
2763+ .format(svc), level=DEBUG)
2764 service_stop(svc)
2765
2766 place_data_on_block_device(blk_device, mount_point)
2767
2768 for svc in system_services:
2769- log('ceph: Starting service {} after migrating data.'
2770- .format(svc))
2771+ log('Starting service {} after migrating data.'
2772+ .format(svc), level=DEBUG)
2773 service_start(svc)
2774
2775
2776 def ensure_ceph_keyring(service, user=None, group=None):
2777- '''
2778- Ensures a ceph keyring is created for a named service
2779- and optionally ensures user and group ownership.
2780+ """Ensures a ceph keyring is created for a named service and optionally
2781+ ensures user and group ownership.
2782
2783 Returns False if no ceph key is available in relation state.
2784- '''
2785+ """
2786 key = None
2787 for rid in relation_ids('ceph'):
2788 for unit in related_units(rid):
2789 key = relation_get('key', rid=rid, unit=unit)
2790 if key:
2791 break
2792+
2793 if not key:
2794 return False
2795+
2796 create_keyring(service=service, key=key)
2797 keyring = _keyring_path(service)
2798 if user and group:
2799 check_call(['chown', '%s.%s' % (user, group), keyring])
2800+
2801 return True
2802
2803
2804 def ceph_version():
2805- ''' Retrieve the local version of ceph '''
2806+ """Retrieve the local version of ceph."""
2807 if os.path.exists('/usr/bin/ceph'):
2808 cmd = ['ceph', '-v']
2809- output = check_output(cmd)
2810+ output = check_output(cmd).decode('US-ASCII')
2811 output = output.split()
2812 if len(output) > 3:
2813 return output[2]
2814
2815=== modified file 'hooks/charmhelpers/contrib/storage/linux/loopback.py'
2816--- hooks/charmhelpers/contrib/storage/linux/loopback.py 2013-11-19 12:14:57 +0000
2817+++ hooks/charmhelpers/contrib/storage/linux/loopback.py 2014-12-11 17:56:30 +0000
2818@@ -1,12 +1,12 @@
2819-
2820 import os
2821 import re
2822-
2823 from subprocess import (
2824 check_call,
2825 check_output,
2826 )
2827
2828+import six
2829+
2830
2831 ##################################################
2832 # loopback device helpers.
2833@@ -37,7 +37,7 @@
2834 '''
2835 file_path = os.path.abspath(file_path)
2836 check_call(['losetup', '--find', file_path])
2837- for d, f in loopback_devices().iteritems():
2838+ for d, f in six.iteritems(loopback_devices()):
2839 if f == file_path:
2840 return d
2841
2842@@ -51,7 +51,7 @@
2843
2844 :returns: str: Full path to the ensured loopback device (eg, /dev/loop0)
2845 '''
2846- for d, f in loopback_devices().iteritems():
2847+ for d, f in six.iteritems(loopback_devices()):
2848 if f == path:
2849 return d
2850
2851
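The `loopback_devices().iteritems()` calls above are replaced with `six.iteritems()`, which dispatches to `iteritems()` on Python 2 and `items()` on Python 3. A stdlib-only sketch of what that helper boils down to; the sample device mapping is made up:

```python
import sys


def iteritems(d):
    # What six.iteritems() amounts to: the lazy iterator API on
    # Python 2, items() wrapped in iter() on Python 3.
    if sys.version_info[0] == 2:
        return d.iteritems()
    return iter(d.items())


loopbacks = {'/dev/loop0': '/srv/disk.img', '/dev/loop1': '/srv/other.img'}
found = None
for dev, backing_file in iteritems(loopbacks):
    if backing_file == '/srv/disk.img':
        found = dev
```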
2852=== modified file 'hooks/charmhelpers/contrib/storage/linux/lvm.py'
2853--- hooks/charmhelpers/contrib/storage/linux/lvm.py 2014-06-04 13:07:49 +0000
2854+++ hooks/charmhelpers/contrib/storage/linux/lvm.py 2014-12-11 17:56:30 +0000
2855@@ -61,6 +61,7 @@
2856 vg = None
2857 pvd = check_output(['pvdisplay', block_device]).splitlines()
2858 for l in pvd:
2859+ l = l.decode('UTF-8')
2860 if l.strip().startswith('VG Name'):
2861 vg = ' '.join(l.strip().split()[2:])
2862 return vg
2863
2864=== modified file 'hooks/charmhelpers/contrib/storage/linux/utils.py'
2865--- hooks/charmhelpers/contrib/storage/linux/utils.py 2014-08-13 13:11:34 +0000
2866+++ hooks/charmhelpers/contrib/storage/linux/utils.py 2014-12-11 17:56:30 +0000
2867@@ -30,7 +30,8 @@
2868 # sometimes sgdisk exits non-zero; this is OK, dd will clean up
2869 call(['sgdisk', '--zap-all', '--mbrtogpt',
2870 '--clear', block_device])
2871- dev_end = check_output(['blockdev', '--getsz', block_device])
2872+ dev_end = check_output(['blockdev', '--getsz',
2873+ block_device]).decode('UTF-8')
2874 gpt_end = int(dev_end.split()[0]) - 100
2875 check_call(['dd', 'if=/dev/zero', 'of=%s' % (block_device),
2876 'bs=1M', 'count=1'])
2877@@ -47,7 +48,7 @@
2878 it doesn't.
2879 '''
2880 is_partition = bool(re.search(r".*[0-9]+\b", device))
2881- out = check_output(['mount'])
2882+ out = check_output(['mount']).decode('UTF-8')
2883 if is_partition:
2884 return bool(re.search(device + r"\b", out))
2885 return bool(re.search(device + r"[0-9]+\b", out))
2886
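The `is_device_mounted()` hunk above keeps its partition heuristic while decoding the `mount` output; the regex side can be sketched in isolation, with illustrative device names:

```python
import re


def looks_like_partition(device):
    # A trailing run of digits (e.g. /dev/sda1) means the path names a
    # numbered partition rather than a bare disk, per the same regex the
    # helper uses before matching against the mount table.
    return bool(re.search(r".*[0-9]+\b", device))


sda = looks_like_partition('/dev/sda')
sda1 = looks_like_partition('/dev/sda1')
```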
2887=== modified file 'hooks/charmhelpers/core/fstab.py'
2888--- hooks/charmhelpers/core/fstab.py 2014-08-13 13:11:34 +0000
2889+++ hooks/charmhelpers/core/fstab.py 2014-12-11 17:56:30 +0000
2890@@ -3,10 +3,11 @@
2891
2892 __author__ = 'Jorge Niedbalski R. <jorge.niedbalski@canonical.com>'
2893
2894+import io
2895 import os
2896
2897
2898-class Fstab(file):
2899+class Fstab(io.FileIO):
2900 """This class extends file in order to implement a file reader/writer
2901 for file `/etc/fstab`
2902 """
2903@@ -24,8 +25,8 @@
2904 options = "defaults"
2905
2906 self.options = options
2907- self.d = d
2908- self.p = p
2909+ self.d = int(d)
2910+ self.p = int(p)
2911
2912 def __eq__(self, o):
2913 return str(self) == str(o)
2914@@ -45,7 +46,7 @@
2915 self._path = path
2916 else:
2917 self._path = self.DEFAULT_PATH
2918- file.__init__(self, self._path, 'r+')
2919+ super(Fstab, self).__init__(self._path, 'rb+')
2920
2921 def _hydrate_entry(self, line):
2922 # NOTE: use split with no arguments to split on any
2923@@ -58,8 +59,9 @@
2924 def entries(self):
2925 self.seek(0)
2926 for line in self.readlines():
2927+ line = line.decode('us-ascii')
2928 try:
2929- if not line.startswith("#"):
2930+ if line.strip() and not line.startswith("#"):
2931 yield self._hydrate_entry(line)
2932 except ValueError:
2933 pass
2934@@ -75,14 +77,14 @@
2935 if self.get_entry_by_attr('device', entry.device):
2936 return False
2937
2938- self.write(str(entry) + '\n')
2939+ self.write((str(entry) + '\n').encode('us-ascii'))
2940 self.truncate()
2941 return entry
2942
2943 def remove_entry(self, entry):
2944 self.seek(0)
2945
2946- lines = self.readlines()
2947+ lines = [l.decode('us-ascii') for l in self.readlines()]
2948
2949 found = False
2950 for index, line in enumerate(lines):
2951@@ -97,7 +99,7 @@
2952 lines.remove(line)
2953
2954 self.seek(0)
2955- self.write(''.join(lines))
2956+ self.write(''.join(lines).encode('us-ascii'))
2957 self.truncate()
2958 return True
2959
2960
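Porting `Fstab` from the Python 2 `file` type to `io.FileIO` means every read and write now moves bytes, hence the `encode('us-ascii')`/`decode('us-ascii')` calls sprinkled through the hunk above. A self-contained round-trip over a temp file; the fstab entry is a sample:

```python
import io
import os
import tempfile

fd, path = tempfile.mkstemp()
os.close(fd)

entry = '/dev/sda1 / ext4 defaults 0 1'

f = io.FileIO(path, 'rb+')                    # same mode the patched Fstab uses
f.write((entry + '\n').encode('us-ascii'))    # str -> bytes on write
f.seek(0)
lines = [l.decode('us-ascii') for l in f.readlines()]  # bytes -> str on read
f.close()

# Mirror the patched entries(): skip blank lines and comments.
parsed = [l for l in lines if l.strip() and not l.startswith('#')]

os.remove(path)
```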
2961=== modified file 'hooks/charmhelpers/core/hookenv.py'
2962--- hooks/charmhelpers/core/hookenv.py 2014-08-26 13:27:17 +0000
2963+++ hooks/charmhelpers/core/hookenv.py 2014-12-11 17:56:30 +0000
2964@@ -9,9 +9,14 @@
2965 import yaml
2966 import subprocess
2967 import sys
2968-import UserDict
2969 from subprocess import CalledProcessError
2970
2971+import six
2972+if not six.PY3:
2973+ from UserDict import UserDict
2974+else:
2975+ from collections import UserDict
2976+
2977 CRITICAL = "CRITICAL"
2978 ERROR = "ERROR"
2979 WARNING = "WARNING"
2980@@ -63,16 +68,18 @@
2981 command = ['juju-log']
2982 if level:
2983 command += ['-l', level]
2984+ if not isinstance(message, six.string_types):
2985+ message = repr(message)
2986 command += [message]
2987 subprocess.call(command)
2988
2989
2990-class Serializable(UserDict.IterableUserDict):
2991+class Serializable(UserDict):
2992 """Wrapper, an object that can be serialized to yaml or json"""
2993
2994 def __init__(self, obj):
2995 # wrap the object
2996- UserDict.IterableUserDict.__init__(self)
2997+ UserDict.__init__(self)
2998 self.data = obj
2999
3000 def __getattr__(self, attr):
3001@@ -156,12 +163,15 @@
3002
3003
3004 class Config(dict):
3005- """A Juju charm config dictionary that can write itself to
3006- disk (as json) and track which values have changed since
3007- the previous hook invocation.
3008-
3009- Do not instantiate this object directly - instead call
3010- ``hookenv.config()``
3011+ """A dictionary representation of the charm's config.yaml, with some
3012+ extra features:
3013+
3014+ - See which values in the dictionary have changed since the previous hook.
3015+ - For values that have changed, see what the previous value was.
3016+ - Store arbitrary data for use in a later hook.
3017+
3018+ NOTE: Do not instantiate this object directly - instead call
3019+ ``hookenv.config()``, which will return an instance of :class:`Config`.
3020
3021 Example usage::
3022
3023@@ -170,8 +180,8 @@
3024 >>> config = hookenv.config()
3025 >>> config['foo']
3026 'bar'
3027+ >>> # store a new key/value for later use
3028 >>> config['mykey'] = 'myval'
3029- >>> config.save()
3030
3031
3032 >>> # user runs `juju set mycharm foo=baz`
3033@@ -188,22 +198,40 @@
3034 >>> # keys/values that we add are preserved across hooks
3035 >>> config['mykey']
3036 'myval'
3037- >>> # don't forget to save at the end of hook!
3038- >>> config.save()
3039
3040 """
3041 CONFIG_FILE_NAME = '.juju-persistent-config'
3042
3043 def __init__(self, *args, **kw):
3044 super(Config, self).__init__(*args, **kw)
3045+ self.implicit_save = True
3046 self._prev_dict = None
3047 self.path = os.path.join(charm_dir(), Config.CONFIG_FILE_NAME)
3048 if os.path.exists(self.path):
3049 self.load_previous()
3050
3051+ def __getitem__(self, key):
3052+ """For regular dict lookups, check the current juju config first,
3053+ then the previous (saved) copy. This ensures that user-saved values
3054+ will be returned by a dict lookup.
3055+
3056+ """
3057+ try:
3058+ return dict.__getitem__(self, key)
3059+ except KeyError:
3060+ return (self._prev_dict or {})[key]
3061+
3062+ def keys(self):
3063+ prev_keys = []
3064+ if self._prev_dict is not None:
3065+ prev_keys = self._prev_dict.keys()
3066+ return list(set(prev_keys + list(dict.keys(self))))
3067+
3068 def load_previous(self, path=None):
3069- """Load previous copy of config from disk so that current values
3070- can be compared to previous values.
3071+ """Load previous copy of config from disk.
3072+
3073+ In normal usage you don't need to call this method directly - it
3074+ is called automatically at object initialization.
3075
3076 :param path:
3077
3078@@ -218,8 +246,8 @@
3079 self._prev_dict = json.load(f)
3080
3081 def changed(self, key):
3082- """Return true if the value for this key has changed since
3083- the last save.
3084+ """Return True if the current value for this key is different from
3085+ the previous value.
3086
3087 """
3088 if self._prev_dict is None:
3089@@ -228,7 +256,7 @@
3090
3091 def previous(self, key):
3092 """Return previous value for this key, or None if there
3093- is no "previous" value.
3094+ is no previous value.
3095
3096 """
3097 if self._prev_dict:
3098@@ -238,11 +266,17 @@
3099 def save(self):
3100 """Save this config to disk.
3101
3102- Preserves items in _prev_dict that do not exist in self.
3103+ If the charm is using the :mod:`Services Framework <services.base>`
3104+ or :meth:'@hook <Hooks.hook>' decorator, this
3105+ is called automatically at the end of successful hook execution.
3106+ Otherwise, it should be called directly by user code.
3107+
3108+ To disable automatic saves, set ``implicit_save=False`` on this
3109+ instance.
3110
3111 """
3112 if self._prev_dict:
3113- for k, v in self._prev_dict.iteritems():
3114+ for k, v in six.iteritems(self._prev_dict):
3115 if k not in self:
3116 self[k] = v
3117 with open(self.path, 'w') as f:
3118@@ -257,7 +291,8 @@
3119 config_cmd_line.append(scope)
3120 config_cmd_line.append('--format=json')
3121 try:
3122- config_data = json.loads(subprocess.check_output(config_cmd_line))
3123+ config_data = json.loads(
3124+ subprocess.check_output(config_cmd_line).decode('UTF-8'))
3125 if scope is not None:
3126 return config_data
3127 return Config(config_data)
3128@@ -276,10 +311,10 @@
3129 if unit:
3130 _args.append(unit)
3131 try:
3132- return json.loads(subprocess.check_output(_args))
3133+ return json.loads(subprocess.check_output(_args).decode('UTF-8'))
3134 except ValueError:
3135 return None
3136- except CalledProcessError, e:
3137+ except CalledProcessError as e:
3138 if e.returncode == 2:
3139 return None
3140 raise
3141@@ -291,7 +326,7 @@
3142 relation_cmd_line = ['relation-set']
3143 if relation_id is not None:
3144 relation_cmd_line.extend(('-r', relation_id))
3145- for k, v in (relation_settings.items() + kwargs.items()):
3146+ for k, v in (list(relation_settings.items()) + list(kwargs.items())):
3147 if v is None:
3148 relation_cmd_line.append('{}='.format(k))
3149 else:
3150@@ -308,7 +343,8 @@
3151 relid_cmd_line = ['relation-ids', '--format=json']
3152 if reltype is not None:
3153 relid_cmd_line.append(reltype)
3154- return json.loads(subprocess.check_output(relid_cmd_line)) or []
3155+ return json.loads(
3156+ subprocess.check_output(relid_cmd_line).decode('UTF-8')) or []
3157 return []
3158
3159
3160@@ -319,7 +355,8 @@
3161 units_cmd_line = ['relation-list', '--format=json']
3162 if relid is not None:
3163 units_cmd_line.extend(('-r', relid))
3164- return json.loads(subprocess.check_output(units_cmd_line)) or []
3165+ return json.loads(
3166+ subprocess.check_output(units_cmd_line).decode('UTF-8')) or []
3167
3168
3169 @cached
3170@@ -359,21 +396,31 @@
3171
3172
3173 @cached
3174+def metadata():
3175+ """Get the current charm metadata.yaml contents as a python object"""
3176+ with open(os.path.join(charm_dir(), 'metadata.yaml')) as md:
3177+ return yaml.safe_load(md)
3178+
3179+
3180+@cached
3181 def relation_types():
3182 """Get a list of relation types supported by this charm"""
3183- charmdir = os.environ.get('CHARM_DIR', '')
3184- mdf = open(os.path.join(charmdir, 'metadata.yaml'))
3185- md = yaml.safe_load(mdf)
3186 rel_types = []
3187+ md = metadata()
3188 for key in ('provides', 'requires', 'peers'):
3189 section = md.get(key)
3190 if section:
3191 rel_types.extend(section.keys())
3192- mdf.close()
3193 return rel_types
3194
3195
3196 @cached
3197+def charm_name():
3198+ """Get the name of the current charm as is specified on metadata.yaml"""
3199+ return metadata().get('name')
3200+
3201+
3202+@cached
3203 def relations():
3204 """Get a nested dictionary of relation data for all related units"""
3205 rels = {}
3206@@ -428,7 +475,7 @@
3207 """Get the unit ID for the remote unit"""
3208 _args = ['unit-get', '--format=json', attribute]
3209 try:
3210- return json.loads(subprocess.check_output(_args))
3211+ return json.loads(subprocess.check_output(_args).decode('UTF-8'))
3212 except ValueError:
3213 return None
3214
3215@@ -465,9 +512,10 @@
3216 hooks.execute(sys.argv)
3217 """
3218
3219- def __init__(self):
3220+ def __init__(self, config_save=True):
3221 super(Hooks, self).__init__()
3222 self._hooks = {}
3223+ self._config_save = config_save
3224
3225 def register(self, name, function):
3226 """Register a hook"""
3227@@ -478,6 +526,10 @@
3228 hook_name = os.path.basename(args[0])
3229 if hook_name in self._hooks:
3230 self._hooks[hook_name]()
3231+ if self._config_save:
3232+ cfg = config()
3233+ if cfg.implicit_save:
3234+ cfg.save()
3235 else:
3236 raise UnregisteredHookError(hook_name)
3237
3238
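The reworked `Config` docstring above promises change tracking across hook invocations. A hypothetical, pared-down model of the `changed()`/`previous()` semantics (the real class loads the previous copy from `.juju-persistent-config` as JSON and also handles `implicit_save`):

```python
class MiniConfig(dict):
    """Hypothetical pared-down model of hookenv.Config: keep a previous
    copy of the config and report which keys differ from it."""

    def __init__(self, *args, **kwargs):
        super(MiniConfig, self).__init__(*args, **kwargs)
        self._prev_dict = None

    def load_previous(self, prev_dict):
        # Stand-in for loading the saved JSON copy from disk.
        self._prev_dict = prev_dict

    def previous(self, key):
        return self._prev_dict.get(key) if self._prev_dict else None

    def changed(self, key):
        # With no previous copy, every key counts as changed.
        if self._prev_dict is None:
            return True
        return self.previous(key) != self.get(key)


cfg = MiniConfig({'foo': 'baz', 'mykey': 'myval'})
cfg.load_previous({'foo': 'bar'})
```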
3239=== modified file 'hooks/charmhelpers/core/host.py'
3240--- hooks/charmhelpers/core/host.py 2014-08-26 13:27:17 +0000
3241+++ hooks/charmhelpers/core/host.py 2014-12-11 17:56:30 +0000
3242@@ -6,19 +6,20 @@
3243 # Matthew Wedgwood <matthew.wedgwood@canonical.com>
3244
3245 import os
3246+import re
3247 import pwd
3248 import grp
3249 import random
3250 import string
3251 import subprocess
3252 import hashlib
3253-import shutil
3254 from contextlib import contextmanager
3255-
3256 from collections import OrderedDict
3257
3258-from hookenv import log
3259-from fstab import Fstab
3260+import six
3261+
3262+from .hookenv import log
3263+from .fstab import Fstab
3264
3265
3266 def service_start(service_name):
3267@@ -54,7 +55,9 @@
3268 def service_running(service):
3269 """Determine whether a system service is running"""
3270 try:
3271- output = subprocess.check_output(['service', service, 'status'], stderr=subprocess.STDOUT)
3272+ output = subprocess.check_output(
3273+ ['service', service, 'status'],
3274+ stderr=subprocess.STDOUT).decode('UTF-8')
3275 except subprocess.CalledProcessError:
3276 return False
3277 else:
3278@@ -67,9 +70,11 @@
3279 def service_available(service_name):
3280 """Determine whether a system service is available"""
3281 try:
3282- subprocess.check_output(['service', service_name, 'status'], stderr=subprocess.STDOUT)
3283- except subprocess.CalledProcessError:
3284- return False
3285+ subprocess.check_output(
3286+ ['service', service_name, 'status'],
3287+ stderr=subprocess.STDOUT).decode('UTF-8')
3288+ except subprocess.CalledProcessError as e:
3289+ return 'unrecognized service' not in e.output
3290 else:
3291 return True
3292
3293@@ -96,6 +101,26 @@
3294 return user_info
3295
3296
3297+def add_group(group_name, system_group=False):
3298+ """Add a group to the system"""
3299+ try:
3300+ group_info = grp.getgrnam(group_name)
3301+ log('group {0} already exists!'.format(group_name))
3302+ except KeyError:
3303+ log('creating group {0}'.format(group_name))
3304+ cmd = ['addgroup']
3305+ if system_group:
3306+ cmd.append('--system')
3307+ else:
3308+ cmd.extend([
3309+ '--group',
3310+ ])
3311+ cmd.append(group_name)
3312+ subprocess.check_call(cmd)
3313+ group_info = grp.getgrnam(group_name)
3314+ return group_info
3315+
3316+
3317 def add_user_to_group(username, group):
3318 """Add a user to a group"""
3319 cmd = [
3320@@ -115,7 +140,7 @@
3321 cmd.append(from_path)
3322 cmd.append(to_path)
3323 log(" ".join(cmd))
3324- return subprocess.check_output(cmd).strip()
3325+ return subprocess.check_output(cmd).decode('UTF-8').strip()
3326
3327
3328 def symlink(source, destination):
3329@@ -130,7 +155,7 @@
3330 subprocess.check_call(cmd)
3331
3332
3333-def mkdir(path, owner='root', group='root', perms=0555, force=False):
3334+def mkdir(path, owner='root', group='root', perms=0o555, force=False):
3335 """Create a directory"""
3336 log("Making dir {} {}:{} {:o}".format(path, owner, group,
3337 perms))
3338@@ -146,7 +171,7 @@
3339 os.chown(realpath, uid, gid)
3340
3341
3342-def write_file(path, content, owner='root', group='root', perms=0444):
3343+def write_file(path, content, owner='root', group='root', perms=0o444):
3344 """Create or overwrite a file with the contents of a string"""
3345 log("Writing file {} {}:{} {:o}".format(path, owner, group, perms))
3346 uid = pwd.getpwnam(owner).pw_uid
3347@@ -177,7 +202,7 @@
3348 cmd_args.extend([device, mountpoint])
3349 try:
3350 subprocess.check_output(cmd_args)
3351- except subprocess.CalledProcessError, e:
3352+ except subprocess.CalledProcessError as e:
3353 log('Error mounting {} at {}\n{}'.format(device, mountpoint, e.output))
3354 return False
3355
3356@@ -191,7 +216,7 @@
3357 cmd_args = ['umount', mountpoint]
3358 try:
3359 subprocess.check_output(cmd_args)
3360- except subprocess.CalledProcessError, e:
3361+ except subprocess.CalledProcessError as e:
3362 log('Error unmounting {}\n{}'.format(mountpoint, e.output))
3363 return False
3364
3365@@ -209,17 +234,42 @@
3366 return system_mounts
3367
3368
3369-def file_hash(path):
3370- """Generate a md5 hash of the contents of 'path' or None if not found """
3371+def file_hash(path, hash_type='md5'):
3372+ """
3373+ Generate a hash checksum of the contents of 'path' or None if not found.
3374+
 3375+ :param str hash_type: Any hash algorithm supported by :mod:`hashlib`,
 3376+ such as md5, sha1, sha256, sha512, etc.
3377+ """
3378 if os.path.exists(path):
3379- h = hashlib.md5()
3380- with open(path, 'r') as source:
3381- h.update(source.read()) # IGNORE:E1101 - it does have update
3382+ h = getattr(hashlib, hash_type)()
3383+ with open(path, 'rb') as source:
3384+ h.update(source.read())
3385 return h.hexdigest()
3386 else:
3387 return None
3388
3389
3390+def check_hash(path, checksum, hash_type='md5'):
3391+ """
3392+ Validate a file using a cryptographic checksum.
3393+
3394+ :param str checksum: Value of the checksum used to validate the file.
3395+ :param str hash_type: Hash algorithm used to generate `checksum`.
 3396+ Can be any hash algorithm supported by :mod:`hashlib`,
 3397+ such as md5, sha1, sha256, sha512, etc.
3398+ :raises ChecksumError: If the file fails the checksum
3399+
3400+ """
3401+ actual_checksum = file_hash(path, hash_type)
3402+ if checksum != actual_checksum:
3403+ raise ChecksumError("'%s' != '%s'" % (checksum, actual_checksum))
3404+
3405+
3406+class ChecksumError(ValueError):
3407+ pass
3408+
3409+
3410 def restart_on_change(restart_map, stopstart=False):
3411 """Restart services based on configuration files changing
3412
3413@@ -272,7 +322,7 @@
3414 if length is None:
3415 length = random.choice(range(35, 45))
3416 alphanumeric_chars = [
3417- l for l in (string.letters + string.digits)
3418+ l for l in (string.ascii_letters + string.digits)
3419 if l not in 'l0QD1vAEIOUaeiou']
3420 random_chars = [
3421 random.choice(alphanumeric_chars) for _ in range(length)]
3422@@ -281,18 +331,24 @@
3423
3424 def list_nics(nic_type):
3425 '''Return a list of nics of given type(s)'''
3426- if isinstance(nic_type, basestring):
3427+ if isinstance(nic_type, six.string_types):
3428 int_types = [nic_type]
3429 else:
3430 int_types = nic_type
3431 interfaces = []
3432 for int_type in int_types:
3433 cmd = ['ip', 'addr', 'show', 'label', int_type + '*']
3434- ip_output = subprocess.check_output(cmd).split('\n')
3435+ ip_output = subprocess.check_output(cmd).decode('UTF-8').split('\n')
3436 ip_output = (line for line in ip_output if line)
3437 for line in ip_output:
3438 if line.split()[1].startswith(int_type):
3439- interfaces.append(line.split()[1].replace(":", ""))
3440+ matched = re.search('.*: (bond[0-9]+\.[0-9]+)@.*', line)
3441+ if matched:
3442+ interface = matched.groups()[0]
3443+ else:
3444+ interface = line.split()[1].replace(":", "")
3445+ interfaces.append(interface)
3446+
3447 return interfaces
3448
3449
3450@@ -304,7 +360,7 @@
3451
3452 def get_nic_mtu(nic):
3453 cmd = ['ip', 'addr', 'show', nic]
3454- ip_output = subprocess.check_output(cmd).split('\n')
3455+ ip_output = subprocess.check_output(cmd).decode('UTF-8').split('\n')
3456 mtu = ""
3457 for line in ip_output:
3458 words = line.split()
3459@@ -315,7 +371,7 @@
3460
3461 def get_nic_hwaddr(nic):
3462 cmd = ['ip', '-o', '-0', 'addr', 'show', nic]
3463- ip_output = subprocess.check_output(cmd)
3464+ ip_output = subprocess.check_output(cmd).decode('UTF-8')
3465 hwaddr = ""
3466 words = ip_output.split()
3467 if 'link/ether' in words:
3468@@ -332,8 +388,8 @@
3469
3470 '''
3471 import apt_pkg
3472- from charmhelpers.fetch import apt_cache
3473 if not pkgcache:
3474+ from charmhelpers.fetch import apt_cache
3475 pkgcache = apt_cache()
3476 pkg = pkgcache[package]
3477 return apt_pkg.version_compare(pkg.current_ver.ver_str, revno)
3478
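The new `check_hash()`/`file_hash()` pair above generalizes the old md5-only helper to any `hashlib` algorithm and switches to a binary read, which Python 3 requires before hashing. A self-contained sketch of the same logic over a temp file:

```python
import hashlib
import os
import tempfile


def file_hash(path, hash_type='md5'):
    # hash_type can be any hashlib constructor name (md5, sha1, sha256, ...);
    # the binary read matters on Python 3, where hashes consume bytes.
    if not os.path.exists(path):
        return None
    h = getattr(hashlib, hash_type)()
    with open(path, 'rb') as source:
        h.update(source.read())
    return h.hexdigest()


class ChecksumError(ValueError):
    pass


def check_hash(path, checksum, hash_type='md5'):
    actual = file_hash(path, hash_type)
    if checksum != actual:
        raise ChecksumError("'%s' != '%s'" % (checksum, actual))


fd, path = tempfile.mkstemp()
os.write(fd, b'hello')
os.close(fd)
digest = file_hash(path, 'sha256')
check_hash(path, digest, 'sha256')   # matching checksum passes silently
os.remove(path)
```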
3479=== modified file 'hooks/charmhelpers/core/services/__init__.py'
3480--- hooks/charmhelpers/core/services/__init__.py 2014-08-13 13:11:34 +0000
3481+++ hooks/charmhelpers/core/services/__init__.py 2014-12-11 17:56:30 +0000
3482@@ -1,2 +1,2 @@
3483-from .base import *
3484-from .helpers import *
3485+from .base import * # NOQA
3486+from .helpers import * # NOQA
3487
3488=== modified file 'hooks/charmhelpers/core/services/base.py'
3489--- hooks/charmhelpers/core/services/base.py 2014-08-26 13:27:17 +0000
3490+++ hooks/charmhelpers/core/services/base.py 2014-12-11 17:56:30 +0000
3491@@ -118,6 +118,9 @@
3492 else:
3493 self.provide_data()
3494 self.reconfigure_services()
3495+ cfg = hookenv.config()
3496+ if cfg.implicit_save:
3497+ cfg.save()
3498
3499 def provide_data(self):
3500 """
3501
3502=== modified file 'hooks/charmhelpers/core/services/helpers.py'
3503--- hooks/charmhelpers/core/services/helpers.py 2014-08-13 13:11:34 +0000
3504+++ hooks/charmhelpers/core/services/helpers.py 2014-12-11 17:56:30 +0000
3505@@ -1,3 +1,5 @@
3506+import os
3507+import yaml
3508 from charmhelpers.core import hookenv
3509 from charmhelpers.core import templating
3510
3511@@ -19,15 +21,21 @@
3512 the `name` attribute that are complete will used to populate the dictionary
3513 values (see `get_data`, below).
3514
3515- The generated context will be namespaced under the interface type, to prevent
3516- potential naming conflicts.
3517+ The generated context will be namespaced under the relation :attr:`name`,
3518+ to prevent potential naming conflicts.
3519+
3520+ :param str name: Override the relation :attr:`name`, since it can vary from charm to charm
3521+ :param list additional_required_keys: Extend the list of :attr:`required_keys`
3522 """
3523 name = None
3524 interface = None
3525 required_keys = []
3526
3527- def __init__(self, *args, **kwargs):
3528- super(RelationContext, self).__init__(*args, **kwargs)
3529+ def __init__(self, name=None, additional_required_keys=None):
3530+ if name is not None:
3531+ self.name = name
3532+ if additional_required_keys is not None:
3533+ self.required_keys.extend(additional_required_keys)
3534 self.get_data()
3535
3536 def __bool__(self):
3537@@ -101,11 +109,121 @@
3538 return {}
3539
3540
3541+class MysqlRelation(RelationContext):
3542+ """
3543+ Relation context for the `mysql` interface.
3544+
3545+ :param str name: Override the relation :attr:`name`, since it can vary from charm to charm
3546+ :param list additional_required_keys: Extend the list of :attr:`required_keys`
3547+ """
3548+ name = 'db'
3549+ interface = 'mysql'
3550+ required_keys = ['host', 'user', 'password', 'database']
3551+
3552+
3553+class HttpRelation(RelationContext):
3554+ """
3555+ Relation context for the `http` interface.
3556+
3557+ :param str name: Override the relation :attr:`name`, since it can vary from charm to charm
3558+ :param list additional_required_keys: Extend the list of :attr:`required_keys`
3559+ """
3560+ name = 'website'
3561+ interface = 'http'
3562+ required_keys = ['host', 'port']
3563+
3564+ def provide_data(self):
3565+ return {
3566+ 'host': hookenv.unit_get('private-address'),
3567+ 'port': 80,
3568+ }
3569+
3570+
3571+class RequiredConfig(dict):
3572+ """
3573+ Data context that loads config options with one or more mandatory options.
3574+
3575+ Once the required options have been changed from their default values, all
3576+ config options will be available, namespaced under `config` to prevent
3577+ potential naming conflicts (for example, between a config option and a
3578+ relation property).
3579+
3580+ :param list *args: List of options that must be changed from their default values.
3581+ """
3582+
3583+ def __init__(self, *args):
3584+ self.required_options = args
3585+ self['config'] = hookenv.config()
3586+ with open(os.path.join(hookenv.charm_dir(), 'config.yaml')) as fp:
3587+ self.config = yaml.load(fp).get('options', {})
3588+
3589+ def __bool__(self):
3590+ for option in self.required_options:
3591+ if option not in self['config']:
3592+ return False
3593+ current_value = self['config'][option]
3594+ default_value = self.config[option].get('default')
3595+ if current_value == default_value:
3596+ return False
3597+ if current_value in (None, '') and default_value in (None, ''):
3598+ return False
3599+ return True
3600+
3601+ def __nonzero__(self):
3602+ return self.__bool__()
3603+
3604+
3605+class StoredContext(dict):
3606+ """
3607+ A data context that always returns the data that it was first created with.
3608+
3609+ This is useful to do a one-time generation of things like passwords, that
3610+ will thereafter use the same value that was originally generated, instead
3611+ of generating a new value each time it is run.
3612+ """
3613+ def __init__(self, file_name, config_data):
3614+ """
3615+ If the file exists, populate `self` with the data from the file.
3616+ Otherwise, populate with the given data and persist it to the file.
3617+ """
3618+ if os.path.exists(file_name):
3619+ self.update(self.read_context(file_name))
3620+ else:
3621+ self.store_context(file_name, config_data)
3622+ self.update(config_data)
3623+
3624+ def store_context(self, file_name, config_data):
3625+ if not os.path.isabs(file_name):
3626+ file_name = os.path.join(hookenv.charm_dir(), file_name)
3627+ with open(file_name, 'w') as file_stream:
3628+ os.fchmod(file_stream.fileno(), 0o600)
3629+ yaml.dump(config_data, file_stream)
3630+
3631+ def read_context(self, file_name):
3632+ if not os.path.isabs(file_name):
3633+ file_name = os.path.join(hookenv.charm_dir(), file_name)
3634+ with open(file_name, 'r') as file_stream:
3635+ data = yaml.load(file_stream)
3636+ if not data:
3637+ raise OSError("%s is empty" % file_name)
3638+ return data
3639+
3640+
3641 class TemplateCallback(ManagerCallback):
3642 """
3643- Callback class that will render a template, for use as a ready action.
3644+ Callback class that will render a Jinja2 template, for use as a ready
3645+ action.
3646+
3647+ :param str source: The template source file, relative to
3648+ `$CHARM_DIR/templates`
3649+
3650+ :param str target: The target to write the rendered template to
3651+ :param str owner: The owner of the rendered file
3652+ :param str group: The group of the rendered file
3653+ :param int perms: The permissions of the rendered file
3654 """
3655- def __init__(self, source, target, owner='root', group='root', perms=0444):
3656+ def __init__(self, source, target,
3657+ owner='root', group='root', perms=0o444):
3658 self.source = source
3659 self.target = target
3660 self.owner = owner
3661
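The new `StoredContext` above persists the data it was first created with, so a value such as a generated password stays stable across hooks. A JSON-based sketch of the same idea (the real class serializes with YAML; the file name and keys here are examples):

```python
import json
import os
import tempfile


class StoredContext(dict):
    """One-time data: the first run stores config_data to file_name;
    later runs ignore new data and reload what was stored."""

    def __init__(self, file_name, config_data):
        if os.path.exists(file_name):
            with open(file_name) as fh:
                self.update(json.load(fh))
        else:
            with open(file_name, 'w') as fh:
                os.fchmod(fh.fileno(), 0o600)  # keep secrets owner-only
                json.dump(config_data, fh)
            self.update(config_data)


path = os.path.join(tempfile.mkdtemp(), 'ctx.json')
first = StoredContext(path, {'password': 'first-secret'})
second = StoredContext(path, {'password': 'would-be-regenerated'})
```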
3662=== added file 'hooks/charmhelpers/core/sysctl.py'
3663--- hooks/charmhelpers/core/sysctl.py 1970-01-01 00:00:00 +0000
3664+++ hooks/charmhelpers/core/sysctl.py 2014-12-11 17:56:30 +0000
3665@@ -0,0 +1,34 @@
3666+#!/usr/bin/env python
3667+# -*- coding: utf-8 -*-
3668+
3669+__author__ = 'Jorge Niedbalski R. <jorge.niedbalski@canonical.com>'
3670+
3671+import yaml
3672+
3673+from subprocess import check_call
3674+
3675+from charmhelpers.core.hookenv import (
3676+ log,
3677+ DEBUG,
3678+)
3679+
3680+
3681+def create(sysctl_dict, sysctl_file):
3682+ """Creates a sysctl.conf file from a YAML associative array
3683+
3684+ :param sysctl_dict: a dict of sysctl options eg { 'kernel.max_pid': 1337 }
3685+ :type sysctl_dict: dict
3686+ :param sysctl_file: path to the sysctl file to be saved
3687+ :type sysctl_file: str or unicode
3688+ :returns: None
3689+ """
3690+ sysctl_dict = yaml.load(sysctl_dict)
3691+
3692+ with open(sysctl_file, "w") as fd:
3693+ for key, value in sysctl_dict.items():
3694+ fd.write("{}={}\n".format(key, value))
3695+
3696+ log("Updating sysctl_file: %s values: %s" % (sysctl_file, sysctl_dict),
3697+ level=DEBUG)
3698+
3699+ check_call(["sysctl", "-p", sysctl_file])
3700
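The new `sysctl.create()` above renders its options as one `key=value` line per entry before applying them with `sysctl -p`. A stdlib-only sketch of the rendering step that stops short of touching the kernel; the option names are examples:

```python
import os
import tempfile

sysctl_dict = {'kernel.pid_max': 1337, 'vm.swappiness': 10}

fd, path = tempfile.mkstemp()
os.close(fd)

# Render the dict the way create() does, one key=value per line; the
# real helper then runs `sysctl -p <file>` to load the settings.
with open(path, 'w') as fh:
    for key, value in sysctl_dict.items():
        fh.write("{}={}\n".format(key, value))

with open(path) as fh:
    content = fh.read()

os.remove(path)
```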
3701=== modified file 'hooks/charmhelpers/core/templating.py'
3702--- hooks/charmhelpers/core/templating.py 2014-08-13 13:11:34 +0000
3703+++ hooks/charmhelpers/core/templating.py 2014-12-11 17:56:30 +0000
3704@@ -4,7 +4,8 @@
3705 from charmhelpers.core import hookenv
3706
3707
3708-def render(source, target, context, owner='root', group='root', perms=0444, templates_dir=None):
3709+def render(source, target, context, owner='root', group='root',
3710+ perms=0o444, templates_dir=None):
3711 """
3712 Render a template.
3713
3714
3715=== modified file 'hooks/charmhelpers/fetch/__init__.py'
3716--- hooks/charmhelpers/fetch/__init__.py 2014-08-26 13:27:17 +0000
3717+++ hooks/charmhelpers/fetch/__init__.py 2014-12-11 17:56:30 +0000
3718@@ -5,10 +5,6 @@
3719 from charmhelpers.core.host import (
3720 lsb_release
3721 )
3722-from urlparse import (
3723- urlparse,
3724- urlunparse,
3725-)
3726 import subprocess
3727 from charmhelpers.core.hookenv import (
3728 config,
3729@@ -16,6 +12,12 @@
3730 )
3731 import os
3732
3733+import six
3734+if six.PY3:
3735+ from urllib.parse import urlparse, urlunparse
3736+else:
3737+ from urlparse import urlparse, urlunparse
3738+
3739
3740 CLOUD_ARCHIVE = """# Ubuntu Cloud Archive
3741 deb http://ubuntu-cloud.archive.canonical.com/ubuntu {} main
3742@@ -72,6 +74,7 @@
3743 FETCH_HANDLERS = (
3744 'charmhelpers.fetch.archiveurl.ArchiveUrlFetchHandler',
3745 'charmhelpers.fetch.bzrurl.BzrUrlFetchHandler',
3746+ 'charmhelpers.fetch.giturl.GitUrlFetchHandler',
3747 )
3748
3749 APT_NO_LOCK = 100 # The return code for "couldn't acquire lock" in APT.
3750@@ -148,7 +151,7 @@
3751 cmd = ['apt-get', '--assume-yes']
3752 cmd.extend(options)
3753 cmd.append('install')
3754- if isinstance(packages, basestring):
3755+ if isinstance(packages, six.string_types):
3756 cmd.append(packages)
3757 else:
3758 cmd.extend(packages)
3759@@ -181,7 +184,7 @@
3760 def apt_purge(packages, fatal=False):
3761 """Purge one or more packages"""
3762 cmd = ['apt-get', '--assume-yes', 'purge']
3763- if isinstance(packages, basestring):
3764+ if isinstance(packages, six.string_types):
3765 cmd.append(packages)
3766 else:
3767 cmd.extend(packages)
3768@@ -192,7 +195,7 @@
3769 def apt_hold(packages, fatal=False):
3770 """Hold one or more packages"""
3771 cmd = ['apt-mark', 'hold']
3772- if isinstance(packages, basestring):
3773+ if isinstance(packages, six.string_types):
3774 cmd.append(packages)
3775 else:
3776 cmd.extend(packages)
3777@@ -208,7 +211,8 @@
3778 """Add a package source to this system.
3779
3780 @param source: a URL or sources.list entry, as supported by
3781- add-apt-repository(1). Examples:
3782+ add-apt-repository(1). Examples::
3783+
3784 ppa:charmers/example
3785 deb https://stub:key@private.example.com/ubuntu trusty main
3786
3787@@ -217,6 +221,7 @@
3788 pocket for the release.
3789 'cloud:' may be used to activate official cloud archive pockets,
3790 such as 'cloud:icehouse'
3791+ 'distro' may be used as a noop
3792
3793 @param key: A key to be added to the system's APT keyring and used
3794 to verify the signatures on packages. Ideally, this should be an
3795@@ -250,12 +255,14 @@
3796 release = lsb_release()['DISTRIB_CODENAME']
3797 with open('/etc/apt/sources.list.d/proposed.list', 'w') as apt:
3798 apt.write(PROPOSED_POCKET.format(release))
3799+ elif source == 'distro':
3800+ pass
3801 else:
3802- raise SourceConfigError("Unknown source: {!r}".format(source))
3803+ log("Unknown source: {!r}".format(source))
3804
3805 if key:
3806 if '-----BEGIN PGP PUBLIC KEY BLOCK-----' in key:
3807- with NamedTemporaryFile() as key_file:
3808+ with NamedTemporaryFile('w+') as key_file:
3809 key_file.write(key)
3810 key_file.flush()
3811 key_file.seek(0)
3812@@ -292,14 +299,14 @@
3813 sources = safe_load((config(sources_var) or '').strip()) or []
3814 keys = safe_load((config(keys_var) or '').strip()) or None
3815
3816- if isinstance(sources, basestring):
3817+ if isinstance(sources, six.string_types):
3818 sources = [sources]
3819
3820 if keys is None:
3821 for source in sources:
3822 add_source(source, None)
3823 else:
3824- if isinstance(keys, basestring):
3825+ if isinstance(keys, six.string_types):
3826 keys = [keys]
3827
3828 if len(sources) != len(keys):
3829@@ -311,22 +318,35 @@
3830 apt_update(fatal=True)
3831
3832
3833-def install_remote(source):
3834+def install_remote(source, *args, **kwargs):
3835 """
3836 Install a file tree from a remote source
3837
3838 The specified source should be a url of the form:
3839 scheme://[host]/path[#[option=value][&...]]
3840
3841- Schemes supported are based on this modules submodules
3842- Options supported are submodule-specific"""
3843+ Schemes supported are based on this module's submodules.
3844+ Options supported are submodule-specific.
3845+ Additional arguments are passed through to the submodule.
3846+
3847+ For example::
3848+
3849+ dest = install_remote('http://example.com/archive.tgz',
3850+ checksum='deadbeef',
3851+ hash_type='sha1')
3852+
3853+ This will download `archive.tgz`, validate it using SHA1 and, if
3854+ the file is ok, extract it and return the directory in which it
3855+ was extracted. If the checksum fails, it will raise
3856+ :class:`charmhelpers.core.host.ChecksumError`.
3857+ """
3858 # We ONLY check for True here because can_handle may return a string
3859 # explaining why it can't handle a given source.
3860 handlers = [h for h in plugins() if h.can_handle(source) is True]
3861 installed_to = None
3862 for handler in handlers:
3863 try:
3864- installed_to = handler.install(source)
3865+ installed_to = handler.install(source, *args, **kwargs)
3866 except UnhandledSource:
3867 pass
3868 if not installed_to:
3869@@ -383,7 +403,7 @@
3870 while result is None or result == APT_NO_LOCK:
3871 try:
3872 result = subprocess.check_call(cmd, env=env)
3873- except subprocess.CalledProcessError, e:
3874+ except subprocess.CalledProcessError as e:
3875 retry_count = retry_count + 1
3876 if retry_count > APT_NO_LOCK_RETRY_COUNT:
3877 raise
3878
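The hunks above replace every `isinstance(packages, basestring)` check with `six.string_types` so the same code parses on Python 2 and 3. A minimal stdlib-only sketch of that pattern (the `build_install_cmd` helper name and the stand-in `string_types` tuple are illustrative, not part of charm-helpers):

```python
import sys

# Stand-in for six.string_types, which the diff swaps in for the
# Python-2-only `basestring`: (str,) on Python 3, (basestring,) on
# Python 2. The else branch is never evaluated on Python 3.
string_types = (str,) if sys.version_info[0] >= 3 else (basestring,)  # noqa: F821


def build_install_cmd(packages):
    """Mirror the normalization apt_install/apt_purge/apt_hold use:
    a bare string becomes one argument, any other iterable is
    extended onto the command line."""
    cmd = ['apt-get', '--assume-yes', 'install']
    if isinstance(packages, string_types):
        cmd.append(packages)
    else:
        cmd.extend(packages)
    return cmd
```

Without the `string_types` check, a bare string would be iterated character by character and produce a nonsense command line.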
3879=== modified file 'hooks/charmhelpers/fetch/archiveurl.py'
3880--- hooks/charmhelpers/fetch/archiveurl.py 2014-04-08 16:29:36 +0000
3881+++ hooks/charmhelpers/fetch/archiveurl.py 2014-12-11 17:56:30 +0000
3882@@ -1,6 +1,23 @@
3883 import os
3884-import urllib2
3885-import urlparse
3886+import hashlib
3887+import re
3888+
3889+import six
3890+if six.PY3:
3891+ from urllib.request import (
3892+ build_opener, install_opener, urlopen, urlretrieve,
3893+ HTTPPasswordMgrWithDefaultRealm, HTTPBasicAuthHandler,
3894+ )
3895+ from urllib.parse import urlparse, urlunparse, parse_qs
3896+ from urllib.error import URLError
3897+else:
3898+ from urllib import urlretrieve
3899+ from urllib2 import (
3900+ build_opener, install_opener, urlopen,
3901+ HTTPPasswordMgrWithDefaultRealm, HTTPBasicAuthHandler,
3902+ URLError
3903+ )
3904+ from urlparse import urlparse, urlunparse, parse_qs
3905
3906 from charmhelpers.fetch import (
3907 BaseFetchHandler,
3908@@ -10,11 +27,37 @@
3909 get_archive_handler,
3910 extract,
3911 )
3912-from charmhelpers.core.host import mkdir
3913+from charmhelpers.core.host import mkdir, check_hash
3914+
3915+
3916+def splituser(host):
3917+ '''urllib.splituser(), but six's support of this seems broken'''
3918+ _userprog = re.compile('^(.*)@(.*)$')
3919+ match = _userprog.match(host)
3920+ if match:
3921+ return match.group(1, 2)
3922+ return None, host
3923+
3924+
3925+def splitpasswd(user):
3926+ '''urllib.splitpasswd(), but six's support of this is missing'''
3927+ _passwdprog = re.compile('^([^:]*):(.*)$', re.S)
3928+ match = _passwdprog.match(user)
3929+ if match:
3930+ return match.group(1, 2)
3931+ return user, None
3932
3933
3934 class ArchiveUrlFetchHandler(BaseFetchHandler):
3935- """Handler for archives via generic URLs"""
3936+ """
3937+ Handler to download archive files from arbitrary URLs.
3938+
3939+ Can fetch from http, https, ftp, and file URLs.
3940+
3941+ Can install either tarballs (.tar, .tgz, .tbz2, etc) or zip files.
3942+
3943+ Installs the contents of the archive in $CHARM_DIR/fetched/.
3944+ """
3945 def can_handle(self, source):
3946 url_parts = self.parse_url(source)
3947 if url_parts.scheme not in ('http', 'https', 'ftp', 'file'):
3948@@ -24,22 +67,28 @@
3949 return False
3950
3951 def download(self, source, dest):
3952+ """
3953+ Download an archive file.
3954+
3955+ :param str source: URL pointing to an archive file.
3956+ :param str dest: Local path location to download archive file to.
3957+ """
3958 # propagate all exceptions
3959 # URLError, OSError, etc
3960- proto, netloc, path, params, query, fragment = urlparse.urlparse(source)
3961+ proto, netloc, path, params, query, fragment = urlparse(source)
3962 if proto in ('http', 'https'):
3963- auth, barehost = urllib2.splituser(netloc)
3964+ auth, barehost = splituser(netloc)
3965 if auth is not None:
3966- source = urlparse.urlunparse((proto, barehost, path, params, query, fragment))
3967- username, password = urllib2.splitpasswd(auth)
3968- passman = urllib2.HTTPPasswordMgrWithDefaultRealm()
3969+ source = urlunparse((proto, barehost, path, params, query, fragment))
3970+ username, password = splitpasswd(auth)
3971+ passman = HTTPPasswordMgrWithDefaultRealm()
3972 # Realm is set to None in add_password to force the username and password
3973 # to be used whatever the realm
3974 passman.add_password(None, source, username, password)
3975- authhandler = urllib2.HTTPBasicAuthHandler(passman)
3976- opener = urllib2.build_opener(authhandler)
3977- urllib2.install_opener(opener)
3978- response = urllib2.urlopen(source)
3979+ authhandler = HTTPBasicAuthHandler(passman)
3980+ opener = build_opener(authhandler)
3981+ install_opener(opener)
3982+ response = urlopen(source)
3983 try:
3984 with open(dest, 'w') as dest_file:
3985 dest_file.write(response.read())
3986@@ -48,16 +97,49 @@
3987 os.unlink(dest)
3988 raise e
3989
3990- def install(self, source):
3991+ # Mandatory file validation via SHA1 or MD5 hashing.
3992+ def download_and_validate(self, url, hashsum, validate="sha1"):
3993+ tempfile, headers = urlretrieve(url)
3994+ check_hash(tempfile, hashsum, validate)
3995+ return tempfile
3996+
3997+ def install(self, source, dest=None, checksum=None, hash_type='sha1'):
3998+ """
3999+ Download and install an archive file, with optional checksum validation.
4000+
4001+ The checksum can also be given on the `source` URL's fragment.
4002+ For example::
4003+
4004+ handler.install('http://example.com/file.tgz#sha1=deadbeef')
4005+
4006+ :param str source: URL pointing to an archive file.
4007+ :param str dest: Local destination path to install to. If not given,
4008+ installs to `$CHARM_DIR/archives/archive_file_name`.
4009+ :param str checksum: If given, validate the archive file after download.
4010+ :param str hash_type: Algorithm used to generate `checksum`.
4011+ Can be any hash algorithm supported by :mod:`hashlib`,
4012+ such as md5, sha1, sha256, sha512, etc.
4013+
4014+ """
4015 url_parts = self.parse_url(source)
4016 dest_dir = os.path.join(os.environ.get('CHARM_DIR'), 'fetched')
4017 if not os.path.exists(dest_dir):
4018- mkdir(dest_dir, perms=0755)
4019+ mkdir(dest_dir, perms=0o755)
4020 dld_file = os.path.join(dest_dir, os.path.basename(url_parts.path))
4021 try:
4022 self.download(source, dld_file)
4023- except urllib2.URLError as e:
4024+ except URLError as e:
4025 raise UnhandledSource(e.reason)
4026 except OSError as e:
4027 raise UnhandledSource(e.strerror)
4028- return extract(dld_file)
4029+ options = parse_qs(url_parts.fragment)
4030+ for key, value in options.items():
4031+ if not six.PY3:
4032+ algorithms = hashlib.algorithms
4033+ else:
4034+ algorithms = hashlib.algorithms_available
4035+ if key in algorithms:
4036+ check_hash(dld_file, value, key)
4037+ if checksum:
4038+ check_hash(dld_file, checksum, hash_type)
4039+ return extract(dld_file, dest)
4040
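The new `install()` above lets a checksum ride along on the source URL's fragment (e.g. `#sha1=deadbeef`), keyed by any algorithm `hashlib` knows. A sketch of that fragment parsing, assuming Python 2.7.9+/3.2+ for `hashlib.algorithms_available` (the `fragment_checksums` helper is hypothetical, for illustration only):

```python
import hashlib
try:
    from urllib.parse import urlparse, parse_qs   # Python 3
except ImportError:
    from urlparse import urlparse, parse_qs       # Python 2

def fragment_checksums(source):
    """Collect hash options from a source URL fragment, the way
    ArchiveUrlFetchHandler.install() does before calling check_hash()."""
    url_parts = urlparse(source)
    options = parse_qs(url_parts.fragment)
    checks = {}
    for key, values in options.items():
        # parse_qs yields a list per key; only keys that name a
        # hashlib algorithm are treated as checksums.
        if key in hashlib.algorithms_available:
            checks[key] = values[0]
    return checks
```

So `install('http://example.com/file.tgz#sha1=deadbeef')` validates the download against the SHA1 digest without a separate `checksum` argument.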
4041=== modified file 'hooks/charmhelpers/fetch/bzrurl.py'
4042--- hooks/charmhelpers/fetch/bzrurl.py 2014-08-13 13:11:34 +0000
4043+++ hooks/charmhelpers/fetch/bzrurl.py 2014-12-11 17:56:30 +0000
4044@@ -5,6 +5,10 @@
4045 )
4046 from charmhelpers.core.host import mkdir
4047
4048+import six
4049+if six.PY3:
4050+ raise ImportError('bzrlib does not support Python3')
4051+
4052 try:
4053 from bzrlib.branch import Branch
4054 except ImportError:
4055@@ -42,7 +46,7 @@
4056 dest_dir = os.path.join(os.environ.get('CHARM_DIR'), "fetched",
4057 branch_name)
4058 if not os.path.exists(dest_dir):
4059- mkdir(dest_dir, perms=0755)
4060+ mkdir(dest_dir, perms=0o755)
4061 try:
4062 self.branch(source, dest_dir)
4063 except OSError as e:
4064
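The `0755` → `0o755` change in bzrurl.py (and the other fetch handlers) matters because the bare-zero octal literal is a `SyntaxError` on Python 3, while the `0o` prefix parses on Python 2.6+ and 3 alike. A small runnable sketch of the directory setup these handlers perform (`fetched_dir` is an illustrative helper, not charm-helpers API):

```python
import os
import tempfile

def fetched_dir(base):
    # 0755 would be a SyntaxError on Python 3; 0o755 (decimal 493)
    # is the portable spelling the diff adopts.
    dest_dir = os.path.join(base, 'fetched')
    if not os.path.exists(dest_dir):
        os.makedirs(dest_dir, 0o755)
    return dest_dir
```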
4065=== added file 'hooks/charmhelpers/fetch/giturl.py'
4066--- hooks/charmhelpers/fetch/giturl.py 1970-01-01 00:00:00 +0000
4067+++ hooks/charmhelpers/fetch/giturl.py 2014-12-11 17:56:30 +0000
4068@@ -0,0 +1,51 @@
4069+import os
4070+from charmhelpers.fetch import (
4071+ BaseFetchHandler,
4072+ UnhandledSource
4073+)
4074+from charmhelpers.core.host import mkdir
4075+
4076+import six
4077+if six.PY3:
4078+ raise ImportError('GitPython does not support Python 3')
4079+
4080+try:
4081+ from git import Repo
4082+except ImportError:
4083+ from charmhelpers.fetch import apt_install
4084+ apt_install("python-git")
4085+ from git import Repo
4086+
4087+
4088+class GitUrlFetchHandler(BaseFetchHandler):
4089+ """Handler for git branches via generic and github URLs"""
4090+ def can_handle(self, source):
4091+ url_parts = self.parse_url(source)
4092+ # TODO (mattyw) no support for ssh git@ yet
4093+ if url_parts.scheme not in ('http', 'https', 'git'):
4094+ return False
4095+ else:
4096+ return True
4097+
4098+ def clone(self, source, dest, branch):
4099+ if not self.can_handle(source):
4100+ raise UnhandledSource("Cannot handle {}".format(source))
4101+
4102+ repo = Repo.clone_from(source, dest)
4103+ repo.git.checkout(branch)
4104+
4105+ def install(self, source, branch="master", dest=None):
4106+ url_parts = self.parse_url(source)
4107+ branch_name = url_parts.path.strip("/").split("/")[-1]
4108+ if dest:
4109+ dest_dir = os.path.join(dest, branch_name)
4110+ else:
4111+ dest_dir = os.path.join(os.environ.get('CHARM_DIR'), "fetched",
4112+ branch_name)
4113+ if not os.path.exists(dest_dir):
4114+ mkdir(dest_dir, perms=0o755)
4115+ try:
4116+ self.clone(source, dest_dir, branch)
4117+ except OSError as e:
4118+ raise UnhandledSource(e.strerror)
4119+ return dest_dir
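The new `GitUrlFetchHandler.install()` derives its checkout directory from the last path segment of the URL, placed under `$CHARM_DIR/fetched/`. A minimal sketch of that derivation without the GitPython dependency (`git_checkout_dir` and the `charm_dir` parameter are illustrative assumptions):

```python
try:
    from urllib.parse import urlparse  # Python 3
except ImportError:
    from urlparse import urlparse      # Python 2

def git_checkout_dir(source, charm_dir):
    """Mirror how GitUrlFetchHandler.install() picks a destination:
    the last path segment of the repo URL names a directory under
    <charm_dir>/fetched/."""
    url_parts = urlparse(source)
    branch_name = url_parts.path.strip('/').split('/')[-1]
    return '/'.join([charm_dir, 'fetched', branch_name])
```

For example, a source of `https://github.com/juju/charm-helpers` clones into `$CHARM_DIR/fetched/charm-helpers`, after which the requested branch is checked out.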
