Merge lp:~corey.bryant/charms/trusty/neutron-api/contrib.python.packages into lp:~openstack-charmers-archive/charms/trusty/neutron-api/next

Proposed by Corey Bryant
Status: Merged
Merged at revision: 64
Proposed branch: lp:~corey.bryant/charms/trusty/neutron-api/contrib.python.packages
Merge into: lp:~openstack-charmers-archive/charms/trusty/neutron-api/next
Diff against target: 3322 lines (+1045/-524)
27 files modified
charm-helpers-sync.yaml (+1/-0)
hooks/charmhelpers/__init__.py (+22/-0)
hooks/charmhelpers/contrib/hahelpers/cluster.py (+16/-7)
hooks/charmhelpers/contrib/network/ip.py (+52/-50)
hooks/charmhelpers/contrib/openstack/amulet/deployment.py (+2/-1)
hooks/charmhelpers/contrib/openstack/amulet/utils.py (+3/-1)
hooks/charmhelpers/contrib/openstack/context.py (+319/-226)
hooks/charmhelpers/contrib/openstack/ip.py (+41/-27)
hooks/charmhelpers/contrib/openstack/neutron.py (+20/-4)
hooks/charmhelpers/contrib/openstack/templates/haproxy.cfg (+2/-2)
hooks/charmhelpers/contrib/openstack/templating.py (+5/-5)
hooks/charmhelpers/contrib/openstack/utils.py (+146/-13)
hooks/charmhelpers/contrib/python/packages.py (+77/-0)
hooks/charmhelpers/contrib/storage/linux/ceph.py (+89/-102)
hooks/charmhelpers/contrib/storage/linux/loopback.py (+4/-4)
hooks/charmhelpers/contrib/storage/linux/lvm.py (+1/-0)
hooks/charmhelpers/contrib/storage/linux/utils.py (+3/-2)
hooks/charmhelpers/core/fstab.py (+10/-8)
hooks/charmhelpers/core/hookenv.py (+41/-15)
hooks/charmhelpers/core/host.py (+51/-20)
hooks/charmhelpers/core/services/__init__.py (+2/-2)
hooks/charmhelpers/core/services/helpers.py (+9/-5)
hooks/charmhelpers/core/templating.py (+2/-1)
hooks/charmhelpers/fetch/__init__.py (+18/-12)
hooks/charmhelpers/fetch/archiveurl.py (+53/-16)
hooks/charmhelpers/fetch/bzrurl.py (+5/-1)
hooks/charmhelpers/fetch/giturl.py (+51/-0)
To merge this branch: bzr merge lp:~corey.bryant/charms/trusty/neutron-api/contrib.python.packages
Reviewer: OpenStack Charmers (status: Pending)
Review via email: mp+244325@code.launchpad.net
Revision history for this message
uosci-testing-bot (uosci-testing-bot) wrote :

charm_lint_check #165 neutron-api-next for corey.bryant mp244325
    LINT OK: passed

Build: http://10.230.18.80:8080/job/charm_lint_check/165/

Revision history for this message
uosci-testing-bot (uosci-testing-bot) wrote :

charm_unit_test #128 neutron-api-next for corey.bryant mp244325
    UNIT FAIL: unit-test failed

UNIT Results (max last 2 lines):
  FAILED (errors=3)
  make: *** [unit_test] Error 1

Full unit test output: pastebin not avail., cmd error
Build: http://10.230.18.80:8080/job/charm_unit_test/128/

Revision history for this message
uosci-testing-bot (uosci-testing-bot) wrote :

charm_amulet_test #78 neutron-api-next for corey.bryant mp244325
    AMULET FAIL: amulet-test missing

AMULET Results (max last 2 lines):
INFO:root:Search string not found in makefile target commands.
ERROR:root:No make target was executed.

Full amulet test output: pastebin not avail., cmd error
Build: http://10.230.18.80:8080/job/charm_amulet_test/78/

Revision history for this message
uosci-testing-bot (uosci-testing-bot) wrote :

charm_lint_check #186 neutron-api-next for corey.bryant mp244325
    LINT OK: passed

Build: http://10.230.18.80:8080/job/charm_lint_check/186/

Revision history for this message
uosci-testing-bot (uosci-testing-bot) wrote :

charm_unit_test #149 neutron-api-next for corey.bryant mp244325
    UNIT OK: passed

Build: http://10.230.18.80:8080/job/charm_unit_test/149/

Revision history for this message
uosci-testing-bot (uosci-testing-bot) wrote :

charm_amulet_test #104 neutron-api-next for corey.bryant mp244325
    AMULET FAIL: amulet-test missing

AMULET Results (max last 2 lines):
INFO:root:Search string not found in makefile target commands.
ERROR:root:No make target was executed.

Full amulet test output: pastebin not avail., cmd error
Build: http://10.230.18.80:8080/job/charm_amulet_test/104/

Revision history for this message
uosci-testing-bot (uosci-testing-bot) wrote :

charm_lint_check #195 neutron-api-next for corey.bryant mp244325
    LINT OK: passed

Build: http://10.230.18.80:8080/job/charm_lint_check/195/

Revision history for this message
uosci-testing-bot (uosci-testing-bot) wrote :

charm_unit_test #158 neutron-api-next for corey.bryant mp244325
    UNIT OK: passed

Build: http://10.230.18.80:8080/job/charm_unit_test/158/

Revision history for this message
uosci-testing-bot (uosci-testing-bot) wrote :

charm_amulet_test #113 neutron-api-next for corey.bryant mp244325
    AMULET FAIL: amulet-test missing

AMULET Results (max last 2 lines):
INFO:root:Search string not found in makefile target commands.
ERROR:root:No make target was executed.

Full amulet test output: pastebin not avail., cmd error
Build: http://10.230.18.80:8080/job/charm_amulet_test/113/

Revision history for this message
uosci-testing-bot (uosci-testing-bot) wrote :

charm_lint_check #229 neutron-api-next for corey.bryant mp244325
    LINT OK: passed

Build: http://10.230.18.80:8080/job/charm_lint_check/229/

Revision history for this message
uosci-testing-bot (uosci-testing-bot) wrote :

charm_unit_test #192 neutron-api-next for corey.bryant mp244325
    UNIT OK: passed

Build: http://10.230.18.80:8080/job/charm_unit_test/192/

Revision history for this message
uosci-testing-bot (uosci-testing-bot) wrote :

charm_amulet_test #146 neutron-api-next for corey.bryant mp244325
    AMULET FAIL: amulet-test missing

AMULET Results (max last 2 lines):
INFO:root:Search string not found in makefile target commands.
ERROR:root:No make target was executed.

Full amulet test output: pastebin not avail., cmd error
Build: http://10.230.18.80:8080/job/charm_amulet_test/146/

Revision history for this message
uosci-testing-bot (uosci-testing-bot) wrote :

charm_amulet_test #68 neutron-api-next for corey.bryant mp244325
    AMULET FAIL: no-tear-down-replace failed
    AMULET FAIL: amulet-test missing

AMULET Results (max last 2 lines):
INFO:root:Search string not found in makefile target commands.
ERROR:root:No make target was executed.

Full amulet test output: http://paste.ubuntu.com/9489748/
Build: http://10.245.162.77:8080/job/charm_amulet_test/68/

Preview Diff

1=== modified file 'charm-helpers-sync.yaml'
2--- charm-helpers-sync.yaml 2014-10-02 09:18:00 +0000
3+++ charm-helpers-sync.yaml 2014-12-11 17:56:40 +0000
4@@ -9,3 +9,4 @@
5 - contrib.storage.linux
6 - payload.execd
7 - contrib.network.ip
8+ - contrib.python.packages
9
10=== added file 'hooks/charmhelpers/__init__.py'
11--- hooks/charmhelpers/__init__.py 1970-01-01 00:00:00 +0000
12+++ hooks/charmhelpers/__init__.py 2014-12-11 17:56:40 +0000
13@@ -0,0 +1,22 @@
14+# Bootstrap charm-helpers, installing its dependencies if necessary using
15+# only standard libraries.
16+import subprocess
17+import sys
18+
19+try:
20+ import six # flake8: noqa
21+except ImportError:
22+ if sys.version_info.major == 2:
23+ subprocess.check_call(['apt-get', 'install', '-y', 'python-six'])
24+ else:
25+ subprocess.check_call(['apt-get', 'install', '-y', 'python3-six'])
26+ import six # flake8: noqa
27+
28+try:
29+ import yaml # flake8: noqa
30+except ImportError:
31+ if sys.version_info.major == 2:
32+ subprocess.check_call(['apt-get', 'install', '-y', 'python-yaml'])
33+ else:
34+ subprocess.check_call(['apt-get', 'install', '-y', 'python3-yaml'])
35+ import yaml # flake8: noqa
36
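The new `hooks/charmhelpers/__init__.py` above bootstraps its own dependencies: try the import, apt-get install the package matching the running Python major version, then import again. A minimal standalone sketch of that pattern (the `ensure_importable` helper name is illustrative, not part of charm-helpers):

```python
import subprocess
import sys


def ensure_importable(module, py2_pkg, py3_pkg):
    """Import module, apt-installing its Debian package first if needed."""
    try:
        return __import__(module)
    except ImportError:
        # Pick the python-* or python3-* package for the running interpreter.
        pkg = py2_pkg if sys.version_info.major == 2 else py3_pkg
        subprocess.check_call(['apt-get', 'install', '-y', pkg])
        return __import__(module)


# Example (mirrors the charm-helpers bootstrap for six):
# six = ensure_importable('six', 'python-six', 'python3-six')
```

Because only the standard library is used before the install, this runs safely on a fresh unit where neither `six` nor `yaml` is present yet.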
37=== removed file 'hooks/charmhelpers/__init__.py'
38=== modified file 'hooks/charmhelpers/contrib/hahelpers/cluster.py'
39--- hooks/charmhelpers/contrib/hahelpers/cluster.py 2014-10-02 09:18:00 +0000
40+++ hooks/charmhelpers/contrib/hahelpers/cluster.py 2014-12-11 17:56:40 +0000
41@@ -13,9 +13,10 @@
42
43 import subprocess
44 import os
45-
46 from socket import gethostname as get_unit_hostname
47
48+import six
49+
50 from charmhelpers.core.hookenv import (
51 log,
52 relation_ids,
53@@ -77,7 +78,7 @@
54 "show", resource
55 ]
56 try:
57- status = subprocess.check_output(cmd)
58+ status = subprocess.check_output(cmd).decode('UTF-8')
59 except subprocess.CalledProcessError:
60 return False
61 else:
62@@ -150,34 +151,42 @@
63 return False
64
65
66-def determine_api_port(public_port):
67+def determine_api_port(public_port, singlenode_mode=False):
68 '''
69 Determine correct API server listening port based on
70 existence of HTTPS reverse proxy and/or haproxy.
71
72 public_port: int: standard public port for given service
73
74+ singlenode_mode: boolean: Shuffle ports when only a single unit is present
75+
76 returns: int: the correct listening port for the API service
77 '''
78 i = 0
79- if len(peer_units()) > 0 or is_clustered():
80+ if singlenode_mode:
81+ i += 1
82+ elif len(peer_units()) > 0 or is_clustered():
83 i += 1
84 if https():
85 i += 1
86 return public_port - (i * 10)
87
88
89-def determine_apache_port(public_port):
90+def determine_apache_port(public_port, singlenode_mode=False):
91 '''
92 Description: Determine correct apache listening port based on public IP +
93 state of the cluster.
94
95 public_port: int: standard public port for given service
96
97+ singlenode_mode: boolean: Shuffle ports when only a single unit is present
98+
99 returns: int: the correct listening port for the HAProxy service
100 '''
101 i = 0
102- if len(peer_units()) > 0 or is_clustered():
103+ if singlenode_mode:
104+ i += 1
105+ elif len(peer_units()) > 0 or is_clustered():
106 i += 1
107 return public_port - (i * 10)
108
109@@ -197,7 +206,7 @@
110 for setting in settings:
111 conf[setting] = config_get(setting)
112 missing = []
113- [missing.append(s) for s, v in conf.iteritems() if v is None]
114+ [missing.append(s) for s, v in six.iteritems(conf) if v is None]
115 if missing:
116 log('Insufficient config data to configure hacluster.', level=ERROR)
117 raise HAIncompleteConfig
118
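The `singlenode_mode` change above shuffles the API service back one port slot even when no peers exist, so haproxy can always sit on the public port. A simplified sketch of the arithmetic, folding the `peer_units()`/`is_clustered()` checks into a single `clustered` boolean for illustration (neutron-api's public port is 9696):

```python
def determine_api_port(public_port, https=False, clustered=False,
                       singlenode_mode=False):
    """Each active front-end layer (haproxy for clustering/single-node,
    apache for TLS termination) pushes the backend down by 10 ports."""
    i = 0
    if singlenode_mode or clustered:
        i += 1
    if https:
        i += 1
    return public_port - (i * 10)
```

So a clustered, TLS-terminated neutron-api listens on 9676 while haproxy and apache occupy 9686 and 9696 respectively.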
119=== modified file 'hooks/charmhelpers/contrib/network/ip.py'
120--- hooks/charmhelpers/contrib/network/ip.py 2014-10-09 10:34:27 +0000
121+++ hooks/charmhelpers/contrib/network/ip.py 2014-12-11 17:56:40 +0000
122@@ -1,15 +1,12 @@
123 import glob
124 import re
125 import subprocess
126-import sys
127
128 from functools import partial
129
130 from charmhelpers.core.hookenv import unit_get
131 from charmhelpers.fetch import apt_install
132 from charmhelpers.core.hookenv import (
133- WARNING,
134- ERROR,
135 log
136 )
137
138@@ -34,31 +31,28 @@
139 network)
140
141
142+def no_ip_found_error_out(network):
143+ errmsg = ("No IP address found in network: %s" % network)
144+ raise ValueError(errmsg)
145+
146+
147 def get_address_in_network(network, fallback=None, fatal=False):
148- """
149- Get an IPv4 or IPv6 address within the network from the host.
150+ """Get an IPv4 or IPv6 address within the network from the host.
151
152 :param network (str): CIDR presentation format. For example,
153 '192.168.1.0/24'.
154 :param fallback (str): If no address is found, return fallback.
155 :param fatal (boolean): If no address is found, fallback is not
156 set and fatal is True then exit(1).
157-
158 """
159-
160- def not_found_error_out():
161- log("No IP address found in network: %s" % network,
162- level=ERROR)
163- sys.exit(1)
164-
165 if network is None:
166 if fallback is not None:
167 return fallback
168+
169+ if fatal:
170+ no_ip_found_error_out(network)
171 else:
172- if fatal:
173- not_found_error_out()
174- else:
175- return None
176+ return None
177
178 _validate_cidr(network)
179 network = netaddr.IPNetwork(network)
180@@ -70,6 +64,7 @@
181 cidr = netaddr.IPNetwork("%s/%s" % (addr, netmask))
182 if cidr in network:
183 return str(cidr.ip)
184+
185 if network.version == 6 and netifaces.AF_INET6 in addresses:
186 for addr in addresses[netifaces.AF_INET6]:
187 if not addr['addr'].startswith('fe80'):
188@@ -82,20 +77,20 @@
189 return fallback
190
191 if fatal:
192- not_found_error_out()
193+ no_ip_found_error_out(network)
194
195 return None
196
197
198 def is_ipv6(address):
199- '''Determine whether provided address is IPv6 or not'''
200+ """Determine whether provided address is IPv6 or not."""
201 try:
202 address = netaddr.IPAddress(address)
203 except netaddr.AddrFormatError:
204 # probably a hostname - so not an address at all!
205 return False
206- else:
207- return address.version == 6
208+
209+ return address.version == 6
210
211
212 def is_address_in_network(network, address):
213@@ -113,11 +108,13 @@
214 except (netaddr.core.AddrFormatError, ValueError):
215 raise ValueError("Network (%s) is not in CIDR presentation format" %
216 network)
217+
218 try:
219 address = netaddr.IPAddress(address)
220 except (netaddr.core.AddrFormatError, ValueError):
221 raise ValueError("Address (%s) is not in correct presentation format" %
222 address)
223+
224 if address in network:
225 return True
226 else:
227@@ -147,6 +144,7 @@
228 return iface
229 else:
230 return addresses[netifaces.AF_INET][0][key]
231+
232 if address.version == 6 and netifaces.AF_INET6 in addresses:
233 for addr in addresses[netifaces.AF_INET6]:
234 if not addr['addr'].startswith('fe80'):
235@@ -160,41 +158,42 @@
236 return str(cidr).split('/')[1]
237 else:
238 return addr[key]
239+
240 return None
241
242
243 get_iface_for_address = partial(_get_for_address, key='iface')
244
245+
246 get_netmask_for_address = partial(_get_for_address, key='netmask')
247
248
249 def format_ipv6_addr(address):
250- """
251- IPv6 needs to be wrapped with [] in url link to parse correctly.
252+ """If address is IPv6, wrap it in '[]' otherwise return None.
253+
254+ This is required by most configuration files when specifying IPv6
255+ addresses.
256 """
257 if is_ipv6(address):
258- address = "[%s]" % address
259- else:
260- log("Not a valid ipv6 address: %s" % address, level=WARNING)
261- address = None
262+ return "[%s]" % address
263
264- return address
265+ return None
266
267
268 def get_iface_addr(iface='eth0', inet_type='AF_INET', inc_aliases=False,
269 fatal=True, exc_list=None):
270- """
271- Return the assigned IP address for a given interface, if any, or [].
272- """
273+ """Return the assigned IP address for a given interface, if any."""
274 # Extract nic if passed /dev/ethX
275 if '/' in iface:
276 iface = iface.split('/')[-1]
277+
278 if not exc_list:
279 exc_list = []
280+
281 try:
282 inet_num = getattr(netifaces, inet_type)
283 except AttributeError:
284- raise Exception('Unknown inet type ' + str(inet_type))
285+ raise Exception("Unknown inet type '%s'" % str(inet_type))
286
287 interfaces = netifaces.interfaces()
288 if inc_aliases:
289@@ -202,15 +201,18 @@
290 for _iface in interfaces:
291 if iface == _iface or _iface.split(':')[0] == iface:
292 ifaces.append(_iface)
293+
294 if fatal and not ifaces:
295 raise Exception("Invalid interface '%s'" % iface)
296+
297 ifaces.sort()
298 else:
299 if iface not in interfaces:
300 if fatal:
301- raise Exception("%s not found " % (iface))
302+ raise Exception("Interface '%s' not found " % (iface))
303 else:
304 return []
305+
306 else:
307 ifaces = [iface]
308
309@@ -221,10 +223,13 @@
310 for entry in net_info[inet_num]:
311 if 'addr' in entry and entry['addr'] not in exc_list:
312 addresses.append(entry['addr'])
313+
314 if fatal and not addresses:
315 raise Exception("Interface '%s' doesn't have any %s addresses." %
316 (iface, inet_type))
317- return addresses
318+
319+ return sorted(addresses)
320+
321
322 get_ipv4_addr = partial(get_iface_addr, inet_type='AF_INET')
323
324@@ -241,6 +246,7 @@
325 raw = re.match(ll_key, _addr)
326 if raw:
327 _addr = raw.group(1)
328+
329 if _addr == addr:
330 log("Address '%s' is configured on iface '%s'" %
331 (addr, iface))
332@@ -251,8 +257,9 @@
333
334
335 def sniff_iface(f):
336- """If no iface provided, inject net iface inferred from unit private
337- address.
338+ """Ensure decorated function is called with a value for iface.
339+
340+ If no iface provided, inject net iface inferred from unit private address.
341 """
342 def iface_sniffer(*args, **kwargs):
343 if not kwargs.get('iface', None):
344@@ -295,7 +302,7 @@
345 if global_addrs:
346 # Make sure any found global addresses are not temporary
347 cmd = ['ip', 'addr', 'show', iface]
348- out = subprocess.check_output(cmd)
349+ out = subprocess.check_output(cmd).decode('UTF-8')
350 if dynamic_only:
351 key = re.compile("inet6 (.+)/[0-9]+ scope global dynamic.*")
352 else:
353@@ -317,33 +324,28 @@
354 return addrs
355
356 if fatal:
357- raise Exception("Interface '%s' doesn't have a scope global "
358+ raise Exception("Interface '%s' does not have a scope global "
359 "non-temporary ipv6 address." % iface)
360
361 return []
362
363
364 def get_bridges(vnic_dir='/sys/devices/virtual/net'):
365- """
366- Return a list of bridges on the system or []
367- """
368- b_rgex = vnic_dir + '/*/bridge'
369- return [x.replace(vnic_dir, '').split('/')[1] for x in glob.glob(b_rgex)]
370+ """Return a list of bridges on the system."""
371+ b_regex = "%s/*/bridge" % vnic_dir
372+ return [x.replace(vnic_dir, '').split('/')[1] for x in glob.glob(b_regex)]
373
374
375 def get_bridge_nics(bridge, vnic_dir='/sys/devices/virtual/net'):
376- """
377- Return a list of nics comprising a given bridge on the system or []
378- """
379- brif_rgex = "%s/%s/brif/*" % (vnic_dir, bridge)
380- return [x.split('/')[-1] for x in glob.glob(brif_rgex)]
381+ """Return a list of nics comprising a given bridge on the system."""
382+ brif_regex = "%s/%s/brif/*" % (vnic_dir, bridge)
383+ return [x.split('/')[-1] for x in glob.glob(brif_regex)]
384
385
386 def is_bridge_member(nic):
387- """
388- Check if a given nic is a member of a bridge
389- """
390+ """Check if a given nic is a member of a bridge."""
391 for bridge in get_bridges():
392 if nic in get_bridge_nics(bridge):
393 return True
394+
395 return False
396
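The reworked `format_ipv6_addr` above now returns `None` for non-IPv6 input instead of logging a warning, and wraps IPv6 addresses in brackets as haproxy and most config files require. A behavior-equivalent sketch using the stdlib `ipaddress` module instead of the `netaddr` dependency the charm actually uses:

```python
import ipaddress


def format_ipv6_addr(address):
    """Return '[addr]' if address is IPv6, else None.

    Bracketing is required when embedding IPv6 addresses in URLs and in
    most daemon configuration files.
    """
    try:
        if ipaddress.ip_address(address).version == 6:
            return "[%s]" % address
    except ValueError:
        pass  # hostname or malformed input: not an address at all
    return None
```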
397=== modified file 'hooks/charmhelpers/contrib/openstack/amulet/deployment.py'
398--- hooks/charmhelpers/contrib/openstack/amulet/deployment.py 2014-10-02 09:18:00 +0000
399+++ hooks/charmhelpers/contrib/openstack/amulet/deployment.py 2014-12-11 17:56:40 +0000
400@@ -1,3 +1,4 @@
401+import six
402 from charmhelpers.contrib.amulet.deployment import (
403 AmuletDeployment
404 )
405@@ -69,7 +70,7 @@
406
407 def _configure_services(self, configs):
408 """Configure all of the services."""
409- for service, config in configs.iteritems():
410+ for service, config in six.iteritems(configs):
411 self.d.configure(service, config)
412
413 def _get_openstack_release(self):
414
415=== modified file 'hooks/charmhelpers/contrib/openstack/amulet/utils.py'
416--- hooks/charmhelpers/contrib/openstack/amulet/utils.py 2014-10-02 09:18:00 +0000
417+++ hooks/charmhelpers/contrib/openstack/amulet/utils.py 2014-12-11 17:56:40 +0000
418@@ -7,6 +7,8 @@
419 import keystoneclient.v2_0 as keystone_client
420 import novaclient.v1_1.client as nova_client
421
422+import six
423+
424 from charmhelpers.contrib.amulet.utils import (
425 AmuletUtils
426 )
427@@ -60,7 +62,7 @@
428 expected service catalog endpoints.
429 """
430 self.log.debug('actual: {}'.format(repr(actual)))
431- for k, v in expected.iteritems():
432+ for k, v in six.iteritems(expected):
433 if k in actual:
434 ret = self._validate_dict_data(expected[k][0], actual[k][0])
435 if ret:
436
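The recurring `dict.iteritems()` to `six.iteritems()` substitutions in this diff are the core of the Python 3 port: `iteritems()` exists only on Python 2 dicts. `six.iteritems(d)` dispatches to `d.iteritems()` on Python 2 and `d.items()` on Python 3; a stdlib-only sketch of the same dispatch:

```python
def iteritems(d):
    """Iterate over (key, value) pairs on both Python 2 and Python 3."""
    try:
        return d.iteritems()  # Python 2: lazy iterator
    except AttributeError:
        return d.items()      # Python 3: view object
```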
437=== modified file 'hooks/charmhelpers/contrib/openstack/context.py'
438--- hooks/charmhelpers/contrib/openstack/context.py 2014-10-07 12:29:50 +0000
439+++ hooks/charmhelpers/contrib/openstack/context.py 2014-12-11 17:56:40 +0000
440@@ -1,20 +1,18 @@
441 import json
442 import os
443 import time
444-
445 from base64 import b64decode
446+from subprocess import check_call
447
448-from subprocess import (
449- check_call
450-)
451+import six
452
453 from charmhelpers.fetch import (
454 apt_install,
455 filter_installed_packages,
456 )
457-
458 from charmhelpers.core.hookenv import (
459 config,
460+ is_relation_made,
461 local_unit,
462 log,
463 relation_get,
464@@ -23,43 +21,40 @@
465 relation_set,
466 unit_get,
467 unit_private_ip,
468+ DEBUG,
469+ INFO,
470+ WARNING,
471 ERROR,
472- INFO
473 )
474-
475 from charmhelpers.core.host import (
476 mkdir,
477- write_file
478+ write_file,
479 )
480-
481 from charmhelpers.contrib.hahelpers.cluster import (
482 determine_apache_port,
483 determine_api_port,
484 https,
485- is_clustered
486+ is_clustered,
487 )
488-
489 from charmhelpers.contrib.hahelpers.apache import (
490 get_cert,
491 get_ca_cert,
492 install_ca_cert,
493 )
494-
495 from charmhelpers.contrib.openstack.neutron import (
496 neutron_plugin_attribute,
497 )
498-
499 from charmhelpers.contrib.network.ip import (
500 get_address_in_network,
501 get_ipv6_addr,
502 get_netmask_for_address,
503 format_ipv6_addr,
504- is_address_in_network
505+ is_address_in_network,
506 )
507-
508 from charmhelpers.contrib.openstack.utils import get_host_ip
509
510 CA_CERT_PATH = '/usr/local/share/ca-certificates/keystone_juju_ca_cert.crt'
511+ADDRESS_TYPES = ['admin', 'internal', 'public']
512
513
514 class OSContextError(Exception):
515@@ -67,7 +62,7 @@
516
517
518 def ensure_packages(packages):
519- '''Install but do not upgrade required plugin packages'''
520+ """Install but do not upgrade required plugin packages."""
521 required = filter_installed_packages(packages)
522 if required:
523 apt_install(required, fatal=True)
524@@ -75,20 +70,27 @@
525
526 def context_complete(ctxt):
527 _missing = []
528- for k, v in ctxt.iteritems():
529+ for k, v in six.iteritems(ctxt):
530 if v is None or v == '':
531 _missing.append(k)
532+
533 if _missing:
534- log('Missing required data: %s' % ' '.join(_missing), level='INFO')
535+ log('Missing required data: %s' % ' '.join(_missing), level=INFO)
536 return False
537+
538 return True
539
540
541 def config_flags_parser(config_flags):
542+ """Parses config flags string into dict.
543+
544+ The provided config_flags string may be a list of comma-separated values
545+ which themselves may be comma-separated list of values.
546+ """
547 if config_flags.find('==') >= 0:
548- log("config_flags is not in expected format (key=value)",
549- level=ERROR)
550+ log("config_flags is not in expected format (key=value)", level=ERROR)
551 raise OSContextError
552+
553 # strip the following from each value.
554 post_strippers = ' ,'
555 # we strip any leading/trailing '=' or ' ' from the string then
556@@ -96,7 +98,7 @@
557 split = config_flags.strip(' =').split('=')
558 limit = len(split)
559 flags = {}
560- for i in xrange(0, limit - 1):
561+ for i in range(0, limit - 1):
562 current = split[i]
563 next = split[i + 1]
564 vindex = next.rfind(',')
565@@ -111,17 +113,18 @@
566 # if this not the first entry, expect an embedded key.
567 index = current.rfind(',')
568 if index < 0:
569- log("invalid config value(s) at index %s" % (i),
570- level=ERROR)
571+ log("Invalid config value(s) at index %s" % (i), level=ERROR)
572 raise OSContextError
573 key = current[index + 1:]
574
575 # Add to collection.
576 flags[key.strip(post_strippers)] = value.rstrip(post_strippers)
577+
578 return flags
579
580
581 class OSContextGenerator(object):
582+ """Base class for all context generators."""
583 interfaces = []
584
585 def __call__(self):
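The newly documented `config_flags_parser` above turns a `key=value` flag string into a dict, with extra logic (the `rfind(',')` scanning) to tolerate commas inside values. A simplified sketch of the basic case only, which does not handle embedded commas the way the charm-helpers version does:

```python
def parse_config_flags(config_flags):
    """Parse 'k1=v1,k2=v2' into {'k1': 'v1', 'k2': 'v2'} (simplified:
    values containing commas are not supported here)."""
    flags = {}
    for pair in config_flags.split(','):
        key, sep, value = pair.partition('=')
        if not sep:
            raise ValueError("config_flags is not in expected format "
                             "(key=value): %r" % pair)
        flags[key.strip()] = value.strip()
    return flags
```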
586@@ -133,11 +136,11 @@
587
588 def __init__(self,
589 database=None, user=None, relation_prefix=None, ssl_dir=None):
590- '''
591- Allows inspecting relation for settings prefixed with relation_prefix.
592- This is useful for parsing access for multiple databases returned via
593- the shared-db interface (eg, nova_password, quantum_password)
594- '''
595+ """Allows inspecting relation for settings prefixed with
596+ relation_prefix. This is useful for parsing access for multiple
597+ databases returned via the shared-db interface (eg, nova_password,
598+ quantum_password)
599+ """
600 self.relation_prefix = relation_prefix
601 self.database = database
602 self.user = user
603@@ -147,9 +150,8 @@
604 self.database = self.database or config('database')
605 self.user = self.user or config('database-user')
606 if None in [self.database, self.user]:
607- log('Could not generate shared_db context. '
608- 'Missing required charm config options. '
609- '(database name and user)')
610+ log("Could not generate shared_db context. Missing required charm "
611+ "config options. (database name and user)", level=ERROR)
612 raise OSContextError
613
614 ctxt = {}
615@@ -202,23 +204,24 @@
616 def __call__(self):
617 self.database = self.database or config('database')
618 if self.database is None:
619- log('Could not generate postgresql_db context. '
620- 'Missing required charm config options. '
621- '(database name)')
622+ log('Could not generate postgresql_db context. Missing required '
623+ 'charm config options. (database name)', level=ERROR)
624 raise OSContextError
625+
626 ctxt = {}
627-
628 for rid in relation_ids(self.interfaces[0]):
629 for unit in related_units(rid):
630- ctxt = {
631- 'database_host': relation_get('host', rid=rid, unit=unit),
632- 'database': self.database,
633- 'database_user': relation_get('user', rid=rid, unit=unit),
634- 'database_password': relation_get('password', rid=rid, unit=unit),
635- 'database_type': 'postgresql',
636- }
637+ rel_host = relation_get('host', rid=rid, unit=unit)
638+ rel_user = relation_get('user', rid=rid, unit=unit)
639+ rel_passwd = relation_get('password', rid=rid, unit=unit)
640+ ctxt = {'database_host': rel_host,
641+ 'database': self.database,
642+ 'database_user': rel_user,
643+ 'database_password': rel_passwd,
644+ 'database_type': 'postgresql'}
645 if context_complete(ctxt):
646 return ctxt
647+
648 return {}
649
650
651@@ -227,23 +230,29 @@
652 ca_path = os.path.join(ssl_dir, 'db-client.ca')
653 with open(ca_path, 'w') as fh:
654 fh.write(b64decode(rdata['ssl_ca']))
655+
656 ctxt['database_ssl_ca'] = ca_path
657 elif 'ssl_ca' in rdata:
658- log("Charm not setup for ssl support but ssl ca found")
659+ log("Charm not setup for ssl support but ssl ca found", level=INFO)
660 return ctxt
661+
662 if 'ssl_cert' in rdata:
663 cert_path = os.path.join(
664 ssl_dir, 'db-client.cert')
665 if not os.path.exists(cert_path):
666- log("Waiting 1m for ssl client cert validity")
667+ log("Waiting 1m for ssl client cert validity", level=INFO)
668 time.sleep(60)
669+
670 with open(cert_path, 'w') as fh:
671 fh.write(b64decode(rdata['ssl_cert']))
672+
673 ctxt['database_ssl_cert'] = cert_path
674 key_path = os.path.join(ssl_dir, 'db-client.key')
675 with open(key_path, 'w') as fh:
676 fh.write(b64decode(rdata['ssl_key']))
677+
678 ctxt['database_ssl_key'] = key_path
679+
680 return ctxt
681
682
683@@ -251,9 +260,8 @@
684 interfaces = ['identity-service']
685
686 def __call__(self):
687- log('Generating template context for identity-service')
688+ log('Generating template context for identity-service', level=DEBUG)
689 ctxt = {}
690-
691 for rid in relation_ids('identity-service'):
692 for unit in related_units(rid):
693 rdata = relation_get(rid=rid, unit=unit)
694@@ -261,26 +269,24 @@
695 serv_host = format_ipv6_addr(serv_host) or serv_host
696 auth_host = rdata.get('auth_host')
697 auth_host = format_ipv6_addr(auth_host) or auth_host
698-
699- ctxt = {
700- 'service_port': rdata.get('service_port'),
701- 'service_host': serv_host,
702- 'auth_host': auth_host,
703- 'auth_port': rdata.get('auth_port'),
704- 'admin_tenant_name': rdata.get('service_tenant'),
705- 'admin_user': rdata.get('service_username'),
706- 'admin_password': rdata.get('service_password'),
707- 'service_protocol':
708- rdata.get('service_protocol') or 'http',
709- 'auth_protocol':
710- rdata.get('auth_protocol') or 'http',
711- }
712+ svc_protocol = rdata.get('service_protocol') or 'http'
713+ auth_protocol = rdata.get('auth_protocol') or 'http'
714+ ctxt = {'service_port': rdata.get('service_port'),
715+ 'service_host': serv_host,
716+ 'auth_host': auth_host,
717+ 'auth_port': rdata.get('auth_port'),
718+ 'admin_tenant_name': rdata.get('service_tenant'),
719+ 'admin_user': rdata.get('service_username'),
720+ 'admin_password': rdata.get('service_password'),
721+ 'service_protocol': svc_protocol,
722+ 'auth_protocol': auth_protocol}
723 if context_complete(ctxt):
724 # NOTE(jamespage) this is required for >= icehouse
725 # so a missing value just indicates keystone needs
726 # upgrading
727 ctxt['admin_tenant_id'] = rdata.get('service_tenant_id')
728 return ctxt
729+
730 return {}
731
732
733@@ -293,21 +299,23 @@
734 self.interfaces = [rel_name]
735
736 def __call__(self):
737- log('Generating template context for amqp')
738+ log('Generating template context for amqp', level=DEBUG)
739 conf = config()
740- user_setting = 'rabbit-user'
741- vhost_setting = 'rabbit-vhost'
742 if self.relation_prefix:
743- user_setting = self.relation_prefix + '-rabbit-user'
744- vhost_setting = self.relation_prefix + '-rabbit-vhost'
745+ user_setting = '%s-rabbit-user' % (self.relation_prefix)
746+ vhost_setting = '%s-rabbit-vhost' % (self.relation_prefix)
747+ else:
748+ user_setting = 'rabbit-user'
749+ vhost_setting = 'rabbit-vhost'
750
751 try:
752 username = conf[user_setting]
753 vhost = conf[vhost_setting]
754 except KeyError as e:
755- log('Could not generate shared_db context. '
756- 'Missing required charm config options: %s.' % e)
757+ log('Could not generate shared_db context. Missing required charm '
758+ 'config options: %s.' % e, level=ERROR)
759 raise OSContextError
760+
761 ctxt = {}
762 for rid in relation_ids(self.rel_name):
763 ha_vip_only = False
764@@ -321,6 +329,7 @@
765 host = relation_get('private-address', rid=rid, unit=unit)
766 host = format_ipv6_addr(host) or host
767 ctxt['rabbitmq_host'] = host
768+
769 ctxt.update({
770 'rabbitmq_user': username,
771 'rabbitmq_password': relation_get('password', rid=rid,
772@@ -331,6 +340,7 @@
773 ssl_port = relation_get('ssl_port', rid=rid, unit=unit)
774 if ssl_port:
775 ctxt['rabbit_ssl_port'] = ssl_port
776+
777 ssl_ca = relation_get('ssl_ca', rid=rid, unit=unit)
778 if ssl_ca:
779 ctxt['rabbit_ssl_ca'] = ssl_ca
780@@ -344,41 +354,45 @@
781 if context_complete(ctxt):
782 if 'rabbit_ssl_ca' in ctxt:
783 if not self.ssl_dir:
784- log(("Charm not setup for ssl support "
785- "but ssl ca found"))
786+ log("Charm not setup for ssl support but ssl ca "
787+ "found", level=INFO)
788 break
789+
790 ca_path = os.path.join(
791 self.ssl_dir, 'rabbit-client-ca.pem')
792 with open(ca_path, 'w') as fh:
793 fh.write(b64decode(ctxt['rabbit_ssl_ca']))
794 ctxt['rabbit_ssl_ca'] = ca_path
795+
796 # Sufficient information found = break out!
797 break
798+
799 # Used for active/active rabbitmq >= grizzly
800- if ('clustered' not in ctxt or ha_vip_only) \
801- and len(related_units(rid)) > 1:
802+ if (('clustered' not in ctxt or ha_vip_only) and
803+ len(related_units(rid)) > 1):
804 rabbitmq_hosts = []
805 for unit in related_units(rid):
806 host = relation_get('private-address', rid=rid, unit=unit)
807 host = format_ipv6_addr(host) or host
808 rabbitmq_hosts.append(host)
809- ctxt['rabbitmq_hosts'] = ','.join(rabbitmq_hosts)
810+
811+ ctxt['rabbitmq_hosts'] = ','.join(sorted(rabbitmq_hosts))
812+
813 if not context_complete(ctxt):
814 return {}
815- else:
816- return ctxt
817+
818+ return ctxt
819
820
821 class CephContext(OSContextGenerator):
822+ """Generates context for /etc/ceph/ceph.conf templates."""
823 interfaces = ['ceph']
824
825 def __call__(self):
826- '''This generates context for /etc/ceph/ceph.conf templates'''
827 if not relation_ids('ceph'):
828 return {}
829
830- log('Generating template context for ceph')
831-
832+ log('Generating template context for ceph', level=DEBUG)
833 mon_hosts = []
834 auth = None
835 key = None
836@@ -387,18 +401,18 @@
837 for unit in related_units(rid):
838 auth = relation_get('auth', rid=rid, unit=unit)
839 key = relation_get('key', rid=rid, unit=unit)
840- ceph_addr = \
841- relation_get('ceph-public-address', rid=rid, unit=unit) or \
842- relation_get('private-address', rid=rid, unit=unit)
843+ ceph_pub_addr = relation_get('ceph-public-address', rid=rid,
844+ unit=unit)
845+ unit_priv_addr = relation_get('private-address', rid=rid,
846+ unit=unit)
847+ ceph_addr = ceph_pub_addr or unit_priv_addr
848 ceph_addr = format_ipv6_addr(ceph_addr) or ceph_addr
849 mon_hosts.append(ceph_addr)
850
851- ctxt = {
852- 'mon_hosts': ' '.join(mon_hosts),
853- 'auth': auth,
854- 'key': key,
855- 'use_syslog': use_syslog
856- }
857+ ctxt = {'mon_hosts': ' '.join(sorted(mon_hosts)),
858+ 'auth': auth,
859+ 'key': key,
860+ 'use_syslog': use_syslog}
861
862 if not os.path.isdir('/etc/ceph'):
863 os.mkdir('/etc/ceph')
864@@ -407,79 +421,68 @@
865 return {}
866
867 ensure_packages(['ceph-common'])
868-
869 return ctxt
870
871
872-ADDRESS_TYPES = ['admin', 'internal', 'public']
873-
874-
875 class HAProxyContext(OSContextGenerator):
876+ """Provides half a context for the haproxy template, which describes
877+ all peers to be included in the cluster. Each charm needs to include
878+ its own context generator that describes the port mapping.
879+ """
880 interfaces = ['cluster']
881
882+ def __init__(self, singlenode_mode=False):
883+ self.singlenode_mode = singlenode_mode
884+
885 def __call__(self):
886- '''
887- Builds half a context for the haproxy template, which describes
888- all peers to be included in the cluster. Each charm needs to include
889- its own context generator that describes the port mapping.
890- '''
891- if not relation_ids('cluster'):
892+ if not relation_ids('cluster') and not self.singlenode_mode:
893 return {}
894
895- l_unit = local_unit().replace('/', '-')
896-
897 if config('prefer-ipv6'):
898 addr = get_ipv6_addr(exc_list=[config('vip')])[0]
899 else:
900 addr = get_host_ip(unit_get('private-address'))
901
902+ l_unit = local_unit().replace('/', '-')
903 cluster_hosts = {}
904
905 # NOTE(jamespage): build out map of configured network endpoints
906 # and associated backends
907 for addr_type in ADDRESS_TYPES:
908- laddr = get_address_in_network(
909- config('os-{}-network'.format(addr_type)))
910+ cfg_opt = 'os-{}-network'.format(addr_type)
911+ laddr = get_address_in_network(config(cfg_opt))
912 if laddr:
913- cluster_hosts[laddr] = {}
914- cluster_hosts[laddr]['network'] = "{}/{}".format(
915- laddr,
916- get_netmask_for_address(laddr)
917- )
918- cluster_hosts[laddr]['backends'] = {}
919- cluster_hosts[laddr]['backends'][l_unit] = laddr
920+ netmask = get_netmask_for_address(laddr)
921+ cluster_hosts[laddr] = {'network': "{}/{}".format(laddr,
922+ netmask),
923+ 'backends': {l_unit: laddr}}
924 for rid in relation_ids('cluster'):
925 for unit in related_units(rid):
926- _unit = unit.replace('/', '-')
927 _laddr = relation_get('{}-address'.format(addr_type),
928 rid=rid, unit=unit)
929 if _laddr:
930+ _unit = unit.replace('/', '-')
931 cluster_hosts[laddr]['backends'][_unit] = _laddr
932
933 # NOTE(jamespage) no split configurations found, just use
934 # private addresses
935 if not cluster_hosts:
936- cluster_hosts[addr] = {}
937- cluster_hosts[addr]['network'] = "{}/{}".format(
938- addr,
939- get_netmask_for_address(addr)
940- )
941- cluster_hosts[addr]['backends'] = {}
942- cluster_hosts[addr]['backends'][l_unit] = addr
943+ netmask = get_netmask_for_address(addr)
944+ cluster_hosts[addr] = {'network': "{}/{}".format(addr, netmask),
945+ 'backends': {l_unit: addr}}
946 for rid in relation_ids('cluster'):
947 for unit in related_units(rid):
948- _unit = unit.replace('/', '-')
949 _laddr = relation_get('private-address',
950 rid=rid, unit=unit)
951 if _laddr:
952+ _unit = unit.replace('/', '-')
953 cluster_hosts[addr]['backends'][_unit] = _laddr
954
955- ctxt = {
956- 'frontends': cluster_hosts,
957- }
958+ ctxt = {'frontends': cluster_hosts}
959
960 if config('haproxy-server-timeout'):
961 ctxt['haproxy_server_timeout'] = config('haproxy-server-timeout')
962+
963 if config('haproxy-client-timeout'):
964 ctxt['haproxy_client_timeout'] = config('haproxy-client-timeout')
965
966@@ -493,13 +496,18 @@
967 ctxt['stat_port'] = ':8888'
968
969 for frontend in cluster_hosts:
970- if len(cluster_hosts[frontend]['backends']) > 1:
971+ if (len(cluster_hosts[frontend]['backends']) > 1 or
972+ self.singlenode_mode):
973 # Enable haproxy when we have enough peers.
974- log('Ensuring haproxy enabled in /etc/default/haproxy.')
975+ log('Ensuring haproxy enabled in /etc/default/haproxy.',
976+ level=DEBUG)
977 with open('/etc/default/haproxy', 'w') as out:
978 out.write('ENABLED=1\n')
979+
980 return ctxt
981- log('HAProxy context is incomplete, this unit has no peers.')
982+
983+ log('HAProxy context is incomplete, this unit has no peers.',
984+ level=INFO)
985 return {}
986
987
988@@ -507,29 +515,28 @@
989 interfaces = ['image-service']
990
991 def __call__(self):
992- '''
993- Obtains the glance API server from the image-service relation. Useful
994- in nova and cinder (currently).
995- '''
996- log('Generating template context for image-service.')
997+ """Obtains the glance API server from the image-service relation.
998+ Useful in nova and cinder (currently).
999+ """
1000+ log('Generating template context for image-service.', level=DEBUG)
1001 rids = relation_ids('image-service')
1002 if not rids:
1003 return {}
1004+
1005 for rid in rids:
1006 for unit in related_units(rid):
1007 api_server = relation_get('glance-api-server',
1008 rid=rid, unit=unit)
1009 if api_server:
1010 return {'glance_api_servers': api_server}
1011- log('ImageService context is incomplete. '
1012- 'Missing required relation data.')
1013+
1014+ log("ImageService context is incomplete. Missing required relation "
1015+ "data.", level=INFO)
1016 return {}
1017
1018
1019 class ApacheSSLContext(OSContextGenerator):
1020-
1021- """
1022- Generates a context for an apache vhost configuration that configures
1023+ """Generates a context for an apache vhost configuration that configures
1024 HTTPS reverse proxying for one or many endpoints. Generated context
1025 looks something like::
1026
1027@@ -563,6 +570,7 @@
1028 else:
1029 cert_filename = 'cert'
1030 key_filename = 'key'
1031+
1032 write_file(path=os.path.join(ssl_dir, cert_filename),
1033 content=b64decode(cert))
1034 write_file(path=os.path.join(ssl_dir, key_filename),
1035@@ -574,7 +582,8 @@
1036 install_ca_cert(b64decode(ca_cert))
1037
1038 def canonical_names(self):
1039- '''Figure out which canonical names clients will access this service'''
1040+ """Figure out which canonical names clients will access this service.
1041+ """
1042 cns = []
1043 for r_id in relation_ids('identity-service'):
1044 for unit in related_units(r_id):
1045@@ -582,55 +591,80 @@
1046 for k in rdata:
1047 if k.startswith('ssl_key_'):
1048 cns.append(k.lstrip('ssl_key_'))
1049- return list(set(cns))
1050+
1051+ return sorted(list(set(cns)))
1052+
1053+ def get_network_addresses(self):
1054+ """For each network configured, return corresponding address and vip
1055+ (if available).
1056+
1057+ Returns a list of tuples of the form:
1058+
1059+ [(address_in_net_a, vip_in_net_a),
1060+ (address_in_net_b, vip_in_net_b),
1061+ ...]
1062+
1063+ or, if no vip(s) available:
1064+
1065+ [(address_in_net_a, address_in_net_a),
1066+ (address_in_net_b, address_in_net_b),
1067+ ...]
1068+ """
1069+ addresses = []
1070+ if config('vip'):
1071+ vips = config('vip').split()
1072+ else:
1073+ vips = []
1074+
1075+ for net_type in ['os-internal-network', 'os-admin-network',
1076+ 'os-public-network']:
1077+ addr = get_address_in_network(config(net_type),
1078+ unit_get('private-address'))
1079+ if len(vips) > 1 and is_clustered():
1080+ if not config(net_type):
1081+ log("Multiple networks configured but net_type "
1082+ "is None (%s)." % net_type, level=WARNING)
1083+ continue
1084+
1085+ for vip in vips:
1086+ if is_address_in_network(config(net_type), vip):
1087+ addresses.append((addr, vip))
1088+ break
1089+
1090+ elif is_clustered() and config('vip'):
1091+ addresses.append((addr, config('vip')))
1092+ else:
1093+ addresses.append((addr, addr))
1094+
1095+ return sorted(addresses)
1096
1097 def __call__(self):
1098- if isinstance(self.external_ports, basestring):
1099+ if isinstance(self.external_ports, six.string_types):
1100 self.external_ports = [self.external_ports]
1101- if (not self.external_ports or not https()):
1102+
1103+ if not self.external_ports or not https():
1104 return {}
1105
1106 self.configure_ca()
1107 self.enable_modules()
1108
1109- ctxt = {
1110- 'namespace': self.service_namespace,
1111- 'endpoints': [],
1112- 'ext_ports': []
1113- }
1114+ ctxt = {'namespace': self.service_namespace,
1115+ 'endpoints': [],
1116+ 'ext_ports': []}
1117
1118 for cn in self.canonical_names():
1119 self.configure_cert(cn)
1120
1121- addresses = []
1122- vips = []
1123- if config('vip'):
1124- vips = config('vip').split()
1125-
1126- for network_type in ['os-internal-network',
1127- 'os-admin-network',
1128- 'os-public-network']:
1129- address = get_address_in_network(config(network_type),
1130- unit_get('private-address'))
1131- if len(vips) > 0 and is_clustered():
1132- for vip in vips:
1133- if is_address_in_network(config(network_type),
1134- vip):
1135- addresses.append((address, vip))
1136- break
1137- elif is_clustered():
1138- addresses.append((address, config('vip')))
1139- else:
1140- addresses.append((address, address))
1141-
1142- for address, endpoint in set(addresses):
1143+ addresses = self.get_network_addresses()
1144+ for address, endpoint in sorted(set(addresses)):
1145 for api_port in self.external_ports:
1146 ext_port = determine_apache_port(api_port)
1147 int_port = determine_api_port(api_port)
1148 portmap = (address, endpoint, int(ext_port), int(int_port))
1149 ctxt['endpoints'].append(portmap)
1150 ctxt['ext_ports'].append(int(ext_port))
1151- ctxt['ext_ports'] = list(set(ctxt['ext_ports']))
1152+
1153+ ctxt['ext_ports'] = sorted(list(set(ctxt['ext_ports'])))
1154 return ctxt
1155
1156
1157@@ -647,21 +681,23 @@
1158
1159 @property
1160 def packages(self):
1161- return neutron_plugin_attribute(
1162- self.plugin, 'packages', self.network_manager)
1163+ return neutron_plugin_attribute(self.plugin, 'packages',
1164+ self.network_manager)
1165
1166 @property
1167 def neutron_security_groups(self):
1168 return None
1169
1170 def _ensure_packages(self):
1171- [ensure_packages(pkgs) for pkgs in self.packages]
1172+ for pkgs in self.packages:
1173+ ensure_packages(pkgs)
1174
1175 def _save_flag_file(self):
1176 if self.network_manager == 'quantum':
1177 _file = '/etc/nova/quantum_plugin.conf'
1178 else:
1179 _file = '/etc/nova/neutron_plugin.conf'
1180+
1181 with open(_file, 'wb') as out:
1182 out.write(self.plugin + '\n')
1183
1184@@ -670,13 +706,11 @@
1185 self.network_manager)
1186 config = neutron_plugin_attribute(self.plugin, 'config',
1187 self.network_manager)
1188- ovs_ctxt = {
1189- 'core_plugin': driver,
1190- 'neutron_plugin': 'ovs',
1191- 'neutron_security_groups': self.neutron_security_groups,
1192- 'local_ip': unit_private_ip(),
1193- 'config': config
1194- }
1195+ ovs_ctxt = {'core_plugin': driver,
1196+ 'neutron_plugin': 'ovs',
1197+ 'neutron_security_groups': self.neutron_security_groups,
1198+ 'local_ip': unit_private_ip(),
1199+ 'config': config}
1200
1201 return ovs_ctxt
1202
1203@@ -685,13 +719,11 @@
1204 self.network_manager)
1205 config = neutron_plugin_attribute(self.plugin, 'config',
1206 self.network_manager)
1207- nvp_ctxt = {
1208- 'core_plugin': driver,
1209- 'neutron_plugin': 'nvp',
1210- 'neutron_security_groups': self.neutron_security_groups,
1211- 'local_ip': unit_private_ip(),
1212- 'config': config
1213- }
1214+ nvp_ctxt = {'core_plugin': driver,
1215+ 'neutron_plugin': 'nvp',
1216+ 'neutron_security_groups': self.neutron_security_groups,
1217+ 'local_ip': unit_private_ip(),
1218+ 'config': config}
1219
1220 return nvp_ctxt
1221
1222@@ -700,35 +732,50 @@
1223 self.network_manager)
1224 n1kv_config = neutron_plugin_attribute(self.plugin, 'config',
1225 self.network_manager)
1226- n1kv_ctxt = {
1227- 'core_plugin': driver,
1228- 'neutron_plugin': 'n1kv',
1229- 'neutron_security_groups': self.neutron_security_groups,
1230- 'local_ip': unit_private_ip(),
1231- 'config': n1kv_config,
1232- 'vsm_ip': config('n1kv-vsm-ip'),
1233- 'vsm_username': config('n1kv-vsm-username'),
1234- 'vsm_password': config('n1kv-vsm-password'),
1235- 'restrict_policy_profiles': config(
1236- 'n1kv_restrict_policy_profiles'),
1237- }
1238+ n1kv_user_config_flags = config('n1kv-config-flags')
1239+ restrict_policy_profiles = config('n1kv-restrict-policy-profiles')
1240+ n1kv_ctxt = {'core_plugin': driver,
1241+ 'neutron_plugin': 'n1kv',
1242+ 'neutron_security_groups': self.neutron_security_groups,
1243+ 'local_ip': unit_private_ip(),
1244+ 'config': n1kv_config,
1245+ 'vsm_ip': config('n1kv-vsm-ip'),
1246+ 'vsm_username': config('n1kv-vsm-username'),
1247+ 'vsm_password': config('n1kv-vsm-password'),
1248+ 'restrict_policy_profiles': restrict_policy_profiles}
1249+
1250+ if n1kv_user_config_flags:
1251+ flags = config_flags_parser(n1kv_user_config_flags)
1252+ n1kv_ctxt['user_config_flags'] = flags
1253
1254 return n1kv_ctxt
1255
1256+ def calico_ctxt(self):
1257+ driver = neutron_plugin_attribute(self.plugin, 'driver',
1258+ self.network_manager)
1259+ config = neutron_plugin_attribute(self.plugin, 'config',
1260+ self.network_manager)
1261+ calico_ctxt = {'core_plugin': driver,
1262+ 'neutron_plugin': 'Calico',
1263+ 'neutron_security_groups': self.neutron_security_groups,
1264+ 'local_ip': unit_private_ip(),
1265+ 'config': config}
1266+
1267+ return calico_ctxt
1268+
1269 def neutron_ctxt(self):
1270 if https():
1271 proto = 'https'
1272 else:
1273 proto = 'http'
1274+
1275 if is_clustered():
1276 host = config('vip')
1277 else:
1278 host = unit_get('private-address')
1279- url = '%s://%s:%s' % (proto, host, '9696')
1280- ctxt = {
1281- 'network_manager': self.network_manager,
1282- 'neutron_url': url,
1283- }
1284+
1285+ ctxt = {'network_manager': self.network_manager,
1286+ 'neutron_url': '%s://%s:%s' % (proto, host, '9696')}
1287 return ctxt
1288
1289 def __call__(self):
1290@@ -748,6 +795,8 @@
1291 ctxt.update(self.nvp_ctxt())
1292 elif self.plugin == 'n1kv':
1293 ctxt.update(self.n1kv_ctxt())
1294+ elif self.plugin == 'Calico':
1295+ ctxt.update(self.calico_ctxt())
1296
1297 alchemy_flags = config('neutron-alchemy-flags')
1298 if alchemy_flags:
1299@@ -759,23 +808,40 @@
1300
1301
1302 class OSConfigFlagContext(OSContextGenerator):
1303-
1304- """
1305- Responsible for adding user-defined config-flags in charm config to a
1306- template context.
1307+ """Provides support for user-defined config flags.
1308+
1309+ Users can define a comma-separated list of key=value pairs
1310+ in the charm configuration and apply them at any point in
1311+ any file by using a template flag.
1312+
1313+ Sometimes users might want config flags inserted within a
1314+ specific section so this class allows users to specify the
1315+ template flag name, allowing for multiple template flags
1316+ (sections) within the same context.
1317
1318 NOTE: the value of config-flags may be a comma-separated list of
1319 key=value pairs and some Openstack config files support
1320 comma-separated lists as values.
1321 """
1322
1323+ def __init__(self, charm_flag='config-flags',
1324+ template_flag='user_config_flags'):
1325+ """
1326+ :param charm_flag: config flags in charm configuration.
1327+ :param template_flag: insert point for user-defined flags in template
1328+ file.
1329+ """
1330+ super(OSConfigFlagContext, self).__init__()
1331+ self._charm_flag = charm_flag
1332+ self._template_flag = template_flag
1333+
1334 def __call__(self):
1335- config_flags = config('config-flags')
1336+ config_flags = config(self._charm_flag)
1337 if not config_flags:
1338 return {}
1339
1340- flags = config_flags_parser(config_flags)
1341- return {'user_config_flags': flags}
1342+ return {self._template_flag:
1343+ config_flags_parser(config_flags)}
1344
1345
1346 class SubordinateConfigContext(OSContextGenerator):
1347@@ -819,7 +885,6 @@
1348 },
1349 }
1350 }
1351-
1352 """
1353
1354 def __init__(self, service, config_file, interface):
1355@@ -849,26 +914,28 @@
1356
1357 if self.service not in sub_config:
1358 log('Found subordinate_config on %s but it contained'
1359- 'nothing for %s service' % (rid, self.service))
1360+ 'nothing for %s service' % (rid, self.service),
1361+ level=INFO)
1362 continue
1363
1364 sub_config = sub_config[self.service]
1365 if self.config_file not in sub_config:
1366 log('Found subordinate_config on %s but it contained'
1367- 'nothing for %s' % (rid, self.config_file))
1368+ ' nothing for %s' % (rid, self.config_file),
1369+ level=INFO)
1370 continue
1371
1372 sub_config = sub_config[self.config_file]
1373- for k, v in sub_config.iteritems():
1374+ for k, v in six.iteritems(sub_config):
1375 if k == 'sections':
1376- for section, config_dict in v.iteritems():
1377- log("adding section '%s'" % (section))
1378+ for section, config_dict in six.iteritems(v):
1379+ log("adding section '%s'" % (section),
1380+ level=DEBUG)
1381 ctxt[k][section] = config_dict
1382 else:
1383 ctxt[k] = v
1384
1385- log("%d section(s) found" % (len(ctxt['sections'])), level=INFO)
1386-
1387+ log("%d section(s) found" % (len(ctxt['sections'])), level=DEBUG)
1388 return ctxt
1389
1390
1391@@ -880,15 +947,14 @@
1392 False if config('debug') is None else config('debug')
1393 ctxt['verbose'] = \
1394 False if config('verbose') is None else config('verbose')
1395+
1396 return ctxt
1397
1398
1399 class SyslogContext(OSContextGenerator):
1400
1401 def __call__(self):
1402- ctxt = {
1403- 'use_syslog': config('use-syslog')
1404- }
1405+ ctxt = {'use_syslog': config('use-syslog')}
1406 return ctxt
1407
1408
1409@@ -896,13 +962,9 @@
1410
1411 def __call__(self):
1412 if config('prefer-ipv6'):
1413- return {
1414- 'bind_host': '::'
1415- }
1416+ return {'bind_host': '::'}
1417 else:
1418- return {
1419- 'bind_host': '0.0.0.0'
1420- }
1421+ return {'bind_host': '0.0.0.0'}
1422
1423
1424 class WorkerConfigContext(OSContextGenerator):
1425@@ -914,11 +976,42 @@
1426 except ImportError:
1427 apt_install('python-psutil', fatal=True)
1428 from psutil import NUM_CPUS
1429+
1430 return NUM_CPUS
1431
1432 def __call__(self):
1433- multiplier = config('worker-multiplier') or 1
1434- ctxt = {
1435- "workers": self.num_cpus * multiplier
1436- }
1437+ multiplier = config('worker-multiplier') or 0
1438+ ctxt = {"workers": self.num_cpus * multiplier}
1439+ return ctxt
1440+
1441+
1442+class ZeroMQContext(OSContextGenerator):
1443+ interfaces = ['zeromq-configuration']
1444+
1445+ def __call__(self):
1446+ ctxt = {}
1447+ if is_relation_made('zeromq-configuration', 'host'):
1448+ for rid in relation_ids('zeromq-configuration'):
1449+ for unit in related_units(rid):
1450+ ctxt['zmq_nonce'] = relation_get('nonce', unit, rid)
1451+ ctxt['zmq_host'] = relation_get('host', unit, rid)
1452+
1453+ return ctxt
1454+
1455+
1456+class NotificationDriverContext(OSContextGenerator):
1457+
1458+ def __init__(self, zmq_relation='zeromq-configuration',
1459+ amqp_relation='amqp'):
1460+ """
1461+ :param zmq_relation: Name of Zeromq relation to check
1462+ """
1463+ self.zmq_relation = zmq_relation
1464+ self.amqp_relation = amqp_relation
1465+
1466+ def __call__(self):
1467+ ctxt = {'notifications': 'False'}
1468+ if is_relation_made(self.amqp_relation):
1469+ ctxt['notifications'] = "True"
1470+
1471 return ctxt
1472
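Several hunks above wrap joined relation data in sorted() (rabbitmq_hosts, mon_hosts, canonical_names, ext_ports). A minimal standalone sketch (not charm-helpers code) of why: relation iteration order is not guaranteed, so without sorting the rendered value can differ between hook runs even when membership is unchanged, causing spurious config rewrites and service restarts.

```python
# Same set of hosts observed in two different relation-iteration orders.
# Joining unsorted churns the rendered string; joining sorted keeps it stable.
hosts_run_a = ['10.0.0.3', '10.0.0.1', '10.0.0.2']
hosts_run_b = ['10.0.0.2', '10.0.0.3', '10.0.0.1']  # same units, new order

assert ','.join(hosts_run_a) != ','.join(hosts_run_b)            # would churn
assert ','.join(sorted(hosts_run_a)) == ','.join(sorted(hosts_run_b))
print(','.join(sorted(hosts_run_a)))  # 10.0.0.1,10.0.0.2,10.0.0.3
```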
1473=== modified file 'hooks/charmhelpers/contrib/openstack/ip.py'
1474--- hooks/charmhelpers/contrib/openstack/ip.py 2014-10-02 09:18:00 +0000
1475+++ hooks/charmhelpers/contrib/openstack/ip.py 2014-12-11 17:56:40 +0000
1476@@ -2,21 +2,19 @@
1477 config,
1478 unit_get,
1479 )
1480-
1481 from charmhelpers.contrib.network.ip import (
1482 get_address_in_network,
1483 is_address_in_network,
1484 is_ipv6,
1485 get_ipv6_addr,
1486 )
1487-
1488 from charmhelpers.contrib.hahelpers.cluster import is_clustered
1489
1490 PUBLIC = 'public'
1491 INTERNAL = 'int'
1492 ADMIN = 'admin'
1493
1494-_address_map = {
1495+ADDRESS_MAP = {
1496 PUBLIC: {
1497 'config': 'os-public-network',
1498 'fallback': 'public-address'
1499@@ -33,16 +31,14 @@
1500
1501
1502 def canonical_url(configs, endpoint_type=PUBLIC):
1503- '''
1504- Returns the correct HTTP URL to this host given the state of HTTPS
1505+ """Returns the correct HTTP URL to this host given the state of HTTPS
1506 configuration, hacluster and charm configuration.
1507
1508- :configs OSTemplateRenderer: A config tempating object to inspect for
1509- a complete https context.
1510- :endpoint_type str: The endpoint type to resolve.
1511-
1512- :returns str: Base URL for services on the current service unit.
1513- '''
1514+ :param configs: OSTemplateRenderer config templating object to inspect
1515+ for a complete https context.
1516+ :param endpoint_type: str endpoint type to resolve.
1517+ :returns: str base URL for services on the current service unit.
1518+ """
1519 scheme = 'http'
1520 if 'https' in configs.complete_contexts():
1521 scheme = 'https'
1522@@ -53,27 +49,45 @@
1523
1524
1525 def resolve_address(endpoint_type=PUBLIC):
1526+ """Return unit address depending on net config.
1527+
1528+ If unit is clustered with vip(s) and has net splits defined, return vip on
1529+ correct network. If clustered with no nets defined, return primary vip.
1530+
1531+ If not clustered, return unit address ensuring address is on configured net
1532+ split if one is configured.
1533+
1534+ :param endpoint_type: Network endpoint type
1535+ """
1536 resolved_address = None
1537- if is_clustered():
1538- if config(_address_map[endpoint_type]['config']) is None:
1539- # Assume vip is simple and pass back directly
1540- resolved_address = config('vip')
1541+ vips = config('vip')
1542+ if vips:
1543+ vips = vips.split()
1544+
1545+ net_type = ADDRESS_MAP[endpoint_type]['config']
1546+ net_addr = config(net_type)
1547+ net_fallback = ADDRESS_MAP[endpoint_type]['fallback']
1548+ clustered = is_clustered()
1549+ if clustered:
1550+ if not net_addr:
1551+ # If no net-splits defined, we expect a single vip
1552+ resolved_address = vips[0]
1553 else:
1554- for vip in config('vip').split():
1555- if is_address_in_network(
1556- config(_address_map[endpoint_type]['config']),
1557- vip):
1558+ for vip in vips:
1559+ if is_address_in_network(net_addr, vip):
1560 resolved_address = vip
1561+ break
1562 else:
1563 if config('prefer-ipv6'):
1564- fallback_addr = get_ipv6_addr(exc_list=[config('vip')])[0]
1565+ fallback_addr = get_ipv6_addr(exc_list=vips)[0]
1566 else:
1567- fallback_addr = unit_get(_address_map[endpoint_type]['fallback'])
1568- resolved_address = get_address_in_network(
1569- config(_address_map[endpoint_type]['config']), fallback_addr)
1570+ fallback_addr = unit_get(net_fallback)
1571+
1572+ resolved_address = get_address_in_network(net_addr, fallback_addr)
1573
1574 if resolved_address is None:
1575- raise ValueError('Unable to resolve a suitable IP address'
1576- ' based on charm state and configuration')
1577- else:
1578- return resolved_address
1579+ raise ValueError("Unable to resolve a suitable IP address based on "
1580+ "charm state and configuration. (net_type=%s, "
1581+ "clustered=%s)" % (net_type, clustered))
1582+
1583+ return resolved_address
1584
1585=== modified file 'hooks/charmhelpers/contrib/openstack/neutron.py'
1586--- hooks/charmhelpers/contrib/openstack/neutron.py 2014-06-24 11:05:17 +0000
1587+++ hooks/charmhelpers/contrib/openstack/neutron.py 2014-12-11 17:56:40 +0000
1588@@ -14,7 +14,7 @@
1589 def headers_package():
1590 """Ensures correct linux-headers for running kernel are installed,
1591 for building DKMS package"""
1592- kver = check_output(['uname', '-r']).strip()
1593+ kver = check_output(['uname', '-r']).decode('UTF-8').strip()
1594 return 'linux-headers-%s' % kver
1595
1596 QUANTUM_CONF_DIR = '/etc/quantum'
1597@@ -22,7 +22,7 @@
1598
1599 def kernel_version():
1600 """ Retrieve the current major kernel version as a tuple e.g. (3, 13) """
1601- kver = check_output(['uname', '-r']).strip()
1602+ kver = check_output(['uname', '-r']).decode('UTF-8').strip()
1603 kver = kver.split('.')
1604 return (int(kver[0]), int(kver[1]))
1605
1606@@ -138,10 +138,25 @@
1607 relation_prefix='neutron',
1608 ssl_dir=NEUTRON_CONF_DIR)],
1609 'services': [],
1610- 'packages': [['neutron-plugin-cisco']],
1611+ 'packages': [[headers_package()] + determine_dkms_package(),
1612+ ['neutron-plugin-cisco']],
1613 'server_packages': ['neutron-server',
1614 'neutron-plugin-cisco'],
1615 'server_services': ['neutron-server']
1616+ },
1617+ 'Calico': {
1618+ 'config': '/etc/neutron/plugins/ml2/ml2_conf.ini',
1619+ 'driver': 'neutron.plugins.ml2.plugin.Ml2Plugin',
1620+ 'contexts': [
1621+ context.SharedDBContext(user=config('neutron-database-user'),
1622+ database=config('neutron-database'),
1623+ relation_prefix='neutron',
1624+ ssl_dir=NEUTRON_CONF_DIR)],
1625+ 'services': ['calico-compute', 'bird', 'neutron-dhcp-agent'],
1626+ 'packages': [[headers_package()] + determine_dkms_package(),
1627+ ['calico-compute', 'bird', 'neutron-dhcp-agent']],
1628+ 'server_packages': ['neutron-server', 'calico-control'],
1629+ 'server_services': ['neutron-server']
1630 }
1631 }
1632 if release >= 'icehouse':
1633@@ -162,7 +177,8 @@
1634 elif manager == 'neutron':
1635 plugins = neutron_plugins()
1636 else:
1637- log('Error: Network manager does not support plugins.')
1638+ log("Network manager '%s' does not support plugins." % (manager),
1639+ level=ERROR)
1640 raise Exception
1641
1642 try:
1643
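The check_output() changes in neutron.py above add .decode('UTF-8') because subprocess output is bytes under Python 3; formatting raw bytes into a package name would produce strings like "linux-headers-b'3.13.0-32-generic'". A quick illustration using the same command the diff touches:

```python
from subprocess import check_output

raw = check_output(['uname', '-r'])     # bytes on Python 3
kver = raw.decode('UTF-8').strip()      # text, trailing newline removed

assert isinstance(raw, bytes)
assert isinstance(kver, str)
print('linux-headers-%s' % kver)
```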
1644=== modified file 'hooks/charmhelpers/contrib/openstack/templates/haproxy.cfg'
1645--- hooks/charmhelpers/contrib/openstack/templates/haproxy.cfg 2014-10-02 09:18:00 +0000
1646+++ hooks/charmhelpers/contrib/openstack/templates/haproxy.cfg 2014-12-11 17:56:40 +0000
1647@@ -35,7 +35,7 @@
1648 stats auth admin:password
1649
1650 {% if frontends -%}
1651-{% for service, ports in service_ports.iteritems() -%}
1652+{% for service, ports in service_ports.items() -%}
1653 frontend tcp-in_{{ service }}
1654 bind *:{{ ports[0] }}
1655 bind :::{{ ports[0] }}
1656@@ -46,7 +46,7 @@
1657 {% for frontend in frontends -%}
1658 backend {{ service }}_{{ frontend }}
1659 balance leastconn
1660- {% for unit, address in frontends[frontend]['backends'].iteritems() -%}
1661+ {% for unit, address in frontends[frontend]['backends'].items() -%}
1662 server {{ unit }} {{ address }}:{{ ports[1] }} check
1663 {% endfor %}
1664 {% endfor -%}
1665
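The template edits above swap iteritems() for items() because Jinja2 invokes these methods on the actual dicts passed into the render context, and dict.iteritems() was removed in Python 3. dict.items() exists on both versions (a list on Python 2, a view on Python 3; either is iterable), as this minimal sketch of the frontend expansion shows (service_ports contents are illustrative):

```python
service_ports = {'neutron-server': [9696, 9686]}

lines = []
for service, ports in service_ports.items():  # portable py2/py3 iteration
    lines.append('frontend tcp-in_%s' % service)
    lines.append('    bind *:%s' % ports[0])

rendered = '\n'.join(lines)
print(rendered)
```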
1666=== modified file 'hooks/charmhelpers/contrib/openstack/templating.py'
1667--- hooks/charmhelpers/contrib/openstack/templating.py 2014-06-27 11:55:45 +0000
1668+++ hooks/charmhelpers/contrib/openstack/templating.py 2014-12-11 17:56:40 +0000
1669@@ -1,13 +1,13 @@
1670 import os
1671
1672+import six
1673+
1674 from charmhelpers.fetch import apt_install
1675-
1676 from charmhelpers.core.hookenv import (
1677 log,
1678 ERROR,
1679 INFO
1680 )
1681-
1682 from charmhelpers.contrib.openstack.utils import OPENSTACK_CODENAMES
1683
1684 try:
1685@@ -43,7 +43,7 @@
1686 order by OpenStack release.
1687 """
1688 tmpl_dirs = [(rel, os.path.join(templates_dir, rel))
1689- for rel in OPENSTACK_CODENAMES.itervalues()]
1690+ for rel in six.itervalues(OPENSTACK_CODENAMES)]
1691
1692 if not os.path.isdir(templates_dir):
1693 log('Templates directory not found @ %s.' % templates_dir,
1694@@ -258,7 +258,7 @@
1695 """
1696 Write out all registered config files.
1697 """
1698- [self.write(k) for k in self.templates.iterkeys()]
1699+ [self.write(k) for k in six.iterkeys(self.templates)]
1700
1701 def set_release(self, openstack_release):
1702 """
1703@@ -275,5 +275,5 @@
1704 '''
1705 interfaces = []
1706 [interfaces.extend(i.complete_contexts())
1707- for i in self.templates.itervalues()]
1708+ for i in six.itervalues(self.templates)]
1709 return interfaces
1710
1711=== modified file 'hooks/charmhelpers/contrib/openstack/utils.py'
1712--- hooks/charmhelpers/contrib/openstack/utils.py 2014-10-06 21:21:47 +0000
1713+++ hooks/charmhelpers/contrib/openstack/utils.py 2014-12-11 17:56:40 +0000
1714@@ -2,6 +2,7 @@
1715
1716 # Common python helper functions used for OpenStack charms.
1717 from collections import OrderedDict
1718+from functools import wraps
1719
1720 import subprocess
1721 import json
1722@@ -9,11 +10,13 @@
1723 import socket
1724 import sys
1725
1726+import six
1727+import yaml
1728+
1729 from charmhelpers.core.hookenv import (
1730 config,
1731 log as juju_log,
1732 charm_dir,
1733- ERROR,
1734 INFO,
1735 relation_ids,
1736 relation_set
1737@@ -30,7 +33,8 @@
1738 )
1739
1740 from charmhelpers.core.host import lsb_release, mounts, umount
1741-from charmhelpers.fetch import apt_install, apt_cache
1742+from charmhelpers.fetch import apt_install, apt_cache, install_remote
1743+from charmhelpers.contrib.python.packages import pip_install
1744 from charmhelpers.contrib.storage.linux.utils import is_block_device, zap_disk
1745 from charmhelpers.contrib.storage.linux.loopback import ensure_loopback_device
1746
1747@@ -112,7 +116,7 @@
1748
1749 # Best guess match based on deb string provided
1750 if src.startswith('deb') or src.startswith('ppa'):
1751- for k, v in OPENSTACK_CODENAMES.iteritems():
1752+ for k, v in six.iteritems(OPENSTACK_CODENAMES):
1753 if v in src:
1754 return v
1755
1756@@ -133,7 +137,7 @@
1757
1758 def get_os_version_codename(codename):
1759 '''Determine OpenStack version number from codename.'''
1760- for k, v in OPENSTACK_CODENAMES.iteritems():
1761+ for k, v in six.iteritems(OPENSTACK_CODENAMES):
1762 if v == codename:
1763 return k
1764 e = 'Could not derive OpenStack version for '\
1765@@ -193,7 +197,7 @@
1766 else:
1767 vers_map = OPENSTACK_CODENAMES
1768
1769- for version, cname in vers_map.iteritems():
1770+ for version, cname in six.iteritems(vers_map):
1771 if cname == codename:
1772 return version
1773 # e = "Could not determine OpenStack version for package: %s" % pkg
1774@@ -317,7 +321,7 @@
1775 rc_script.write(
1776 "#!/bin/bash\n")
1777 [rc_script.write('export %s=%s\n' % (u, p))
1778- for u, p in env_vars.iteritems() if u != "script_path"]
1779+ for u, p in six.iteritems(env_vars) if u != "script_path"]
1780
1781
1782 def openstack_upgrade_available(package):
1783@@ -350,8 +354,8 @@
1784 '''
1785 _none = ['None', 'none', None]
1786 if (block_device in _none):
1787- error_out('prepare_storage(): Missing required input: '
1788- 'block_device=%s.' % block_device, level=ERROR)
1789+ error_out('prepare_storage(): Missing required input: block_device=%s.'
1790+ % block_device)
1791
1792 if block_device.startswith('/dev/'):
1793 bdev = block_device
1794@@ -367,8 +371,7 @@
1795 bdev = '/dev/%s' % block_device
1796
1797 if not is_block_device(bdev):
1798- error_out('Failed to locate valid block device at %s' % bdev,
1799- level=ERROR)
1800+ error_out('Failed to locate valid block device at %s' % bdev)
1801
1802 return bdev
1803
1804@@ -417,7 +420,7 @@
1805
1806 if isinstance(address, dns.name.Name):
1807 rtype = 'PTR'
1808- elif isinstance(address, basestring):
1809+ elif isinstance(address, six.string_types):
1810 rtype = 'A'
1811 else:
1812 return None
1813@@ -468,6 +471,14 @@
1814 return result.split('.')[0]
1815
1816
1817+def get_matchmaker_map(mm_file='/etc/oslo/matchmaker_ring.json'):
1818+ mm_map = {}
1819+ if os.path.isfile(mm_file):
1820+ with open(mm_file, 'r') as f:
1821+ mm_map = json.load(f)
1822+ return mm_map
1823+
1824+
1825 def sync_db_with_multi_ipv6_addresses(database, database_user,
1826 relation_prefix=None):
1827 hosts = get_ipv6_addr(dynamic_only=False)
1828@@ -477,10 +488,132 @@
1829 'hostname': json.dumps(hosts)}
1830
1831 if relation_prefix:
1832- keys = kwargs.keys()
1833- for key in keys:
1834+ for key in list(kwargs.keys()):
1835 kwargs["%s_%s" % (relation_prefix, key)] = kwargs[key]
1836 del kwargs[key]
1837
1838 for rid in relation_ids('shared-db'):
1839 relation_set(relation_id=rid, **kwargs)
1840+
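The sync_db_with_multi_ipv6_addresses() hunk above iterates over list(kwargs.keys()) rather than kwargs.keys() because the loop deletes keys while renaming them: on Python 3, keys() returns a live view and mutating the dict during iteration raises RuntimeError, whereas snapshotting into a list first is safe on both versions. A sketch of the rename (key names here are illustrative):

```python
kwargs = {'database': 'neutron', 'username': 'neutron'}
prefix = 'neutron'

for key in list(kwargs.keys()):      # snapshot: safe to mutate the dict below
    kwargs['%s_%s' % (prefix, key)] = kwargs[key]
    del kwargs[key]

print(sorted(kwargs))  # ['neutron_database', 'neutron_username']
```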
1841+
1842+def os_requires_version(ostack_release, pkg):
1843+ """
1844+ Decorator for hook to specify minimum supported release
1845+ """
1846+ def wrap(f):
1847+ @wraps(f)
1848+ def wrapped_f(*args):
1849+ if os_release(pkg) < ostack_release:
1850+ raise Exception("This hook is not supported on releases"
1851+ " before %s" % ostack_release)
1852+ f(*args)
1853+ return wrapped_f
1854+ return wrap
1855+
1856+
1857+def git_install_requested():
1858+ """Returns true if openstack-origin-git is specified."""
1859+ return config('openstack-origin-git') != "None"
1860+
1861+
1862+requirements_dir = None
1863+
1864+
1865+def git_clone_and_install(file_name, core_project):
1866+ """Clone/install all OpenStack repos specified in yaml config file."""
1867+ global requirements_dir
1868+
1869+ if file_name == "None":
1870+ return
1871+
1872+ yaml_file = os.path.join(charm_dir(), file_name)
1873+
1874+ # clone/install the requirements project first
1875+ installed = _git_clone_and_install_subset(yaml_file,
1876+ whitelist=['requirements'])
1877+ if 'requirements' not in installed:
1878+ error_out('requirements git repository must be specified')
1879+
1880+ # clone/install all other projects except requirements and the core project
1881+ blacklist = ['requirements', core_project]
1882+ _git_clone_and_install_subset(yaml_file, blacklist=blacklist,
1883+ update_requirements=True)
1884+
1885+ # clone/install the core project
1886+ whitelist = [core_project]
1887+ installed = _git_clone_and_install_subset(yaml_file, whitelist=whitelist,
1888+ update_requirements=True)
1889+ if core_project not in installed:
1890+ error_out('{} git repository must be specified'.format(core_project))
1891+
1892+
1893+def _git_clone_and_install_subset(yaml_file, whitelist=[], blacklist=[],
1894+ update_requirements=False):
1895+ """Clone/install subset of OpenStack repos specified in yaml config file."""
1896+ global requirements_dir
1897+ installed = []
1898+
1899+ with open(yaml_file, 'r') as fd:
1900+ projects = yaml.load(fd)
1901+ for proj, val in projects.items():
1902+ # The project subset is chosen based on the following 3 rules:
1903+ # 1) If project is in blacklist, we don't clone/install it, period.
1904+ # 2) If whitelist is empty, we clone/install everything else.
1905+ # 3) If whitelist is not empty, we clone/install everything in the
1906+ # whitelist.
1907+ if proj in blacklist:
1908+ continue
1909+ if whitelist and proj not in whitelist:
1910+ continue
1911+ repo = val['repository']
1912+ branch = val['branch']
1913+ repo_dir = _git_clone_and_install_single(repo, branch,
1914+ update_requirements)
1915+ if proj == 'requirements':
1916+ requirements_dir = repo_dir
1917+ installed.append(proj)
1918+ return installed
1919+
1920+
1921+def _git_clone_and_install_single(repo, branch, update_requirements=False):
1922+ """Clone and install a single git repository."""
1923+ dest_parent_dir = "/mnt/openstack-git/"
1924+ dest_dir = os.path.join(dest_parent_dir, os.path.basename(repo))
1925+
1926+ if not os.path.exists(dest_parent_dir):
1927+ juju_log('Host dir not mounted at {}. '
1928+ 'Creating directory there instead.'.format(dest_parent_dir))
1929+ os.mkdir(dest_parent_dir)
1930+
1931+ if not os.path.exists(dest_dir):
1932+ juju_log('Cloning git repo: {}, branch: {}'.format(repo, branch))
1933+ repo_dir = install_remote(repo, dest=dest_parent_dir, branch=branch)
1934+ else:
1935+ repo_dir = dest_dir
1936+
1937+ if update_requirements:
1938+ if not requirements_dir:
1939+ error_out('requirements repo must be cloned before '
1940+ 'updating from global requirements.')
1941+ _git_update_requirements(repo_dir, requirements_dir)
1942+
1943+ juju_log('Installing git repo from dir: {}'.format(repo_dir))
1944+ pip_install(repo_dir)
1945+
1946+ return repo_dir
1947+
1948+
1949+def _git_update_requirements(package_dir, reqs_dir):
1950+ """Update from global requirements.
1951+
1952+ Update an OpenStack git directory's requirements.txt and
1953+ test-requirements.txt from global-requirements.txt."""
1954+ orig_dir = os.getcwd()
1955+ os.chdir(reqs_dir)
1956+ cmd = "python update.py {}".format(package_dir)
1957+ try:
1958+ subprocess.check_call(cmd.split(' '))
1959+ except subprocess.CalledProcessError:
1960+ package = os.path.basename(package_dir)
1961+ error_out("Error updating {} from global-requirements.txt".format(package))
1962+ os.chdir(orig_dir)
1963
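For reviewers following the new git-install path: the clone order above (requirements first, core project last) relies on the subset rules inside `_git_clone_and_install_subset`. A minimal standalone sketch of those three rules — `select_projects` is an illustrative name, not part of charm-helpers:

```python
def select_projects(projects, whitelist=None, blacklist=None):
    """Sketch of the subset rules in _git_clone_and_install_subset:
    1) blacklisted projects are always skipped;
    2) an empty whitelist selects everything else;
    3) a non-empty whitelist selects only its members."""
    whitelist = whitelist or []
    blacklist = blacklist or []
    chosen = []
    for proj in projects:
        if proj in blacklist:
            continue
        if whitelist and proj not in whitelist:
            continue
        chosen.append(proj)
    return chosen
```

The charm effectively runs this three times: once with `whitelist=['requirements']`, once with `blacklist=['requirements', core_project]`, and once with `whitelist=[core_project]`.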
1964=== added directory 'hooks/charmhelpers/contrib/python'
1965=== added file 'hooks/charmhelpers/contrib/python/__init__.py'
1966=== added file 'hooks/charmhelpers/contrib/python/packages.py'
1967--- hooks/charmhelpers/contrib/python/packages.py 1970-01-01 00:00:00 +0000
1968+++ hooks/charmhelpers/contrib/python/packages.py 2014-12-11 17:56:40 +0000
1969@@ -0,0 +1,77 @@
1970+#!/usr/bin/env python
1971+# coding: utf-8
1972+
1973+__author__ = "Jorge Niedbalski <jorge.niedbalski@canonical.com>"
1974+
1975+from charmhelpers.fetch import apt_install, apt_update
1976+from charmhelpers.core.hookenv import log
1977+
1978+try:
1979+ from pip import main as pip_execute
1980+except ImportError:
1981+ apt_update()
1982+ apt_install('python-pip')
1983+ from pip import main as pip_execute
1984+
1985+
1986+def parse_options(given, available):
1987+ """Given a set of options, check if available"""
1988+ for key, value in sorted(given.items()):
1989+ if key in available:
1990+ yield "--{0}={1}".format(key, value)
1991+
1992+
1993+def pip_install_requirements(requirements, **options):
1994+ """Install a requirements file """
1995+ command = ["install"]
1996+
1997+ available_options = ('proxy', 'src', 'log', )
1998+ for option in parse_options(options, available_options):
1999+ command.append(option)
2000+
2001+ command.append("-r {0}".format(requirements))
2002+ log("Installing from file: {} with options: {}".format(requirements,
2003+ command))
2004+ pip_execute(command)
2005+
2006+
2007+def pip_install(package, fatal=False, **options):
2008+ """Install a python package"""
2009+ command = ["install"]
2010+
2011+ available_options = ('proxy', 'src', 'log', "index-url", )
2012+ for option in parse_options(options, available_options):
2013+ command.append(option)
2014+
2015+ if isinstance(package, list):
2016+ command.extend(package)
2017+ else:
2018+ command.append(package)
2019+
2020+ log("Installing {} package with options: {}".format(package,
2021+ command))
2022+ pip_execute(command)
2023+
2024+
2025+def pip_uninstall(package, **options):
2026+ """Uninstall a python package"""
2027+ command = ["uninstall", "-q", "-y"]
2028+
2029+ available_options = ('proxy', 'log', )
2030+ for option in parse_options(options, available_options):
2031+ command.append(option)
2032+
2033+ if isinstance(package, list):
2034+ command.extend(package)
2035+ else:
2036+ command.append(package)
2037+
2038+ log("Uninstalling {} package with options: {}".format(package,
2039+ command))
2040+ pip_execute(command)
2041+
2042+
2043+def pip_list():
2044+ """Returns the list of current python installed packages
2045+ """
2046+ return pip_execute(["list"])
2047
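Note that `parse_options` in the new packages.py silently drops any keyword that is not in the available-options tuple; only recognised keys become `--key=value` flags. A standalone copy of the helper showing that behaviour:

```python
def parse_options(given, available):
    """Yield --key=value flags for recognised options only
    (same logic as contrib/python/packages.py)."""
    for key, value in sorted(given.items()):
        if key in available:
            yield "--{0}={1}".format(key, value)

# 'bogus' is not an available option, so it is dropped without error
flags = list(parse_options({'proxy': 'http://squid:3128', 'bogus': '1'},
                           ('proxy', 'src', 'log')))
```

A misspelled option is therefore discarded rather than raising, and pip runs without it — worth keeping in mind when debugging proxy settings.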
2048=== modified file 'hooks/charmhelpers/contrib/storage/linux/ceph.py'
2049--- hooks/charmhelpers/contrib/storage/linux/ceph.py 2014-06-27 11:55:45 +0000
2050+++ hooks/charmhelpers/contrib/storage/linux/ceph.py 2014-12-11 17:56:40 +0000
2051@@ -16,19 +16,18 @@
2052 from subprocess import (
2053 check_call,
2054 check_output,
2055- CalledProcessError
2056+ CalledProcessError,
2057 )
2058-
2059 from charmhelpers.core.hookenv import (
2060 relation_get,
2061 relation_ids,
2062 related_units,
2063 log,
2064+ DEBUG,
2065 INFO,
2066 WARNING,
2067- ERROR
2068+ ERROR,
2069 )
2070-
2071 from charmhelpers.core.host import (
2072 mount,
2073 mounts,
2074@@ -37,7 +36,6 @@
2075 service_running,
2076 umount,
2077 )
2078-
2079 from charmhelpers.fetch import (
2080 apt_install,
2081 )
2082@@ -56,99 +54,85 @@
2083
2084
2085 def install():
2086- ''' Basic Ceph client installation '''
2087+ """Basic Ceph client installation."""
2088 ceph_dir = "/etc/ceph"
2089 if not os.path.exists(ceph_dir):
2090 os.mkdir(ceph_dir)
2091+
2092 apt_install('ceph-common', fatal=True)
2093
2094
2095 def rbd_exists(service, pool, rbd_img):
2096- ''' Check to see if a RADOS block device exists '''
2097+ """Check to see if a RADOS block device exists."""
2098 try:
2099- out = check_output(['rbd', 'list', '--id', service,
2100- '--pool', pool])
2101+ out = check_output(['rbd', 'list', '--id',
2102+ service, '--pool', pool]).decode('UTF-8')
2103 except CalledProcessError:
2104 return False
2105- else:
2106- return rbd_img in out
2107+
2108+ return rbd_img in out
2109
2110
2111 def create_rbd_image(service, pool, image, sizemb):
2112- ''' Create a new RADOS block device '''
2113- cmd = [
2114- 'rbd',
2115- 'create',
2116- image,
2117- '--size',
2118- str(sizemb),
2119- '--id',
2120- service,
2121- '--pool',
2122- pool
2123- ]
2124+ """Create a new RADOS block device."""
2125+ cmd = ['rbd', 'create', image, '--size', str(sizemb), '--id', service,
2126+ '--pool', pool]
2127 check_call(cmd)
2128
2129
2130 def pool_exists(service, name):
2131- ''' Check to see if a RADOS pool already exists '''
2132+ """Check to see if a RADOS pool already exists."""
2133 try:
2134- out = check_output(['rados', '--id', service, 'lspools'])
2135+ out = check_output(['rados', '--id', service,
2136+ 'lspools']).decode('UTF-8')
2137 except CalledProcessError:
2138 return False
2139- else:
2140- return name in out
2141+
2142+ return name in out
2143
2144
2145 def get_osds(service):
2146- '''
2147- Return a list of all Ceph Object Storage Daemons
2148- currently in the cluster
2149- '''
2150+ """Return a list of all Ceph Object Storage Daemons currently in the
2151+ cluster.
2152+ """
2153 version = ceph_version()
2154 if version and version >= '0.56':
2155 return json.loads(check_output(['ceph', '--id', service,
2156- 'osd', 'ls', '--format=json']))
2157- else:
2158- return None
2159-
2160-
2161-def create_pool(service, name, replicas=2):
2162- ''' Create a new RADOS pool '''
2163+ 'osd', 'ls',
2164+ '--format=json']).decode('UTF-8'))
2165+
2166+ return None
2167+
2168+
2169+def create_pool(service, name, replicas=3):
2170+ """Create a new RADOS pool."""
2171 if pool_exists(service, name):
2172 log("Ceph pool {} already exists, skipping creation".format(name),
2173 level=WARNING)
2174 return
2175+
2176 # Calculate the number of placement groups based
2177 # on upstream recommended best practices.
2178 osds = get_osds(service)
2179 if osds:
2180- pgnum = (len(osds) * 100 / replicas)
2181+ pgnum = (len(osds) * 100 // replicas)
2182 else:
2183 # NOTE(james-page): Default to 200 for older ceph versions
2184 # which don't support OSD query from cli
2185 pgnum = 200
2186- cmd = [
2187- 'ceph', '--id', service,
2188- 'osd', 'pool', 'create',
2189- name, str(pgnum)
2190- ]
2191+
2192+ cmd = ['ceph', '--id', service, 'osd', 'pool', 'create', name, str(pgnum)]
2193 check_call(cmd)
2194- cmd = [
2195- 'ceph', '--id', service,
2196- 'osd', 'pool', 'set', name,
2197- 'size', str(replicas)
2198- ]
2199+
2200+ cmd = ['ceph', '--id', service, 'osd', 'pool', 'set', name, 'size',
2201+ str(replicas)]
2202 check_call(cmd)
2203
2204
2205 def delete_pool(service, name):
2206- ''' Delete a RADOS pool from ceph '''
2207- cmd = [
2208- 'ceph', '--id', service,
2209- 'osd', 'pool', 'delete',
2210- name, '--yes-i-really-really-mean-it'
2211- ]
2212+ """Delete a RADOS pool from ceph."""
2213+ cmd = ['ceph', '--id', service, 'osd', 'pool', 'delete', name,
2214+ '--yes-i-really-really-mean-it']
2215 check_call(cmd)
2216
2217
2218@@ -161,44 +145,43 @@
2219
2220
2221 def create_keyring(service, key):
2222- ''' Create a new Ceph keyring containing key'''
2223+ """Create a new Ceph keyring containing key."""
2224 keyring = _keyring_path(service)
2225 if os.path.exists(keyring):
2226- log('ceph: Keyring exists at %s.' % keyring, level=WARNING)
2227+ log('Ceph keyring exists at %s.' % keyring, level=WARNING)
2228 return
2229- cmd = [
2230- 'ceph-authtool',
2231- keyring,
2232- '--create-keyring',
2233- '--name=client.{}'.format(service),
2234- '--add-key={}'.format(key)
2235- ]
2236+
2237+ cmd = ['ceph-authtool', keyring, '--create-keyring',
2238+ '--name=client.{}'.format(service), '--add-key={}'.format(key)]
2239 check_call(cmd)
2240- log('ceph: Created new ring at %s.' % keyring, level=INFO)
2241+ log('Created new ceph keyring at %s.' % keyring, level=DEBUG)
2242
2243
2244 def create_key_file(service, key):
2245- ''' Create a file containing key '''
2246+ """Create a file containing key."""
2247 keyfile = _keyfile_path(service)
2248 if os.path.exists(keyfile):
2249- log('ceph: Keyfile exists at %s.' % keyfile, level=WARNING)
2250+ log('Keyfile exists at %s.' % keyfile, level=WARNING)
2251 return
2252+
2253 with open(keyfile, 'w') as fd:
2254 fd.write(key)
2255- log('ceph: Created new keyfile at %s.' % keyfile, level=INFO)
2256+
2257+ log('Created new keyfile at %s.' % keyfile, level=INFO)
2258
2259
2260 def get_ceph_nodes():
2261- ''' Query named relation 'ceph' to detemine current nodes '''
2262+ """Query named relation 'ceph' to determine current nodes."""
2263 hosts = []
2264 for r_id in relation_ids('ceph'):
2265 for unit in related_units(r_id):
2266 hosts.append(relation_get('private-address', unit=unit, rid=r_id))
2267+
2268 return hosts
2269
2270
2271 def configure(service, key, auth, use_syslog):
2272- ''' Perform basic configuration of Ceph '''
2273+ """Perform basic configuration of Ceph."""
2274 create_keyring(service, key)
2275 create_key_file(service, key)
2276 hosts = get_ceph_nodes()
2277@@ -211,17 +194,17 @@
2278
2279
2280 def image_mapped(name):
2281- ''' Determine whether a RADOS block device is mapped locally '''
2282+ """Determine whether a RADOS block device is mapped locally."""
2283 try:
2284- out = check_output(['rbd', 'showmapped'])
2285+ out = check_output(['rbd', 'showmapped']).decode('UTF-8')
2286 except CalledProcessError:
2287 return False
2288- else:
2289- return name in out
2290+
2291+ return name in out
2292
2293
2294 def map_block_storage(service, pool, image):
2295- ''' Map a RADOS block device for local use '''
2296+ """Map a RADOS block device for local use."""
2297 cmd = [
2298 'rbd',
2299 'map',
2300@@ -235,31 +218,32 @@
2301
2302
2303 def filesystem_mounted(fs):
2304- ''' Determine whether a filesytems is already mounted '''
2305+    """Determine whether a filesystem is already mounted."""
2306 return fs in [f for f, m in mounts()]
2307
2308
2309 def make_filesystem(blk_device, fstype='ext4', timeout=10):
2310- ''' Make a new filesystem on the specified block device '''
2311+ """Make a new filesystem on the specified block device."""
2312 count = 0
2313 e_noent = os.errno.ENOENT
2314 while not os.path.exists(blk_device):
2315 if count >= timeout:
2316- log('ceph: gave up waiting on block device %s' % blk_device,
2317+ log('Gave up waiting on block device %s' % blk_device,
2318 level=ERROR)
2319 raise IOError(e_noent, os.strerror(e_noent), blk_device)
2320- log('ceph: waiting for block device %s to appear' % blk_device,
2321- level=INFO)
2322+
2323+ log('Waiting for block device %s to appear' % blk_device,
2324+ level=DEBUG)
2325 count += 1
2326 time.sleep(1)
2327 else:
2328- log('ceph: Formatting block device %s as filesystem %s.' %
2329+ log('Formatting block device %s as filesystem %s.' %
2330 (blk_device, fstype), level=INFO)
2331 check_call(['mkfs', '-t', fstype, blk_device])
2332
2333
2334 def place_data_on_block_device(blk_device, data_src_dst):
2335- ''' Migrate data in data_src_dst to blk_device and then remount '''
2336+ """Migrate data in data_src_dst to blk_device and then remount."""
2337 # mount block device into /mnt
2338 mount(blk_device, '/mnt')
2339 # copy data to /mnt
2340@@ -279,8 +263,8 @@
2341
2342 # TODO: re-use
2343 def modprobe(module):
2344- ''' Load a kernel module and configure for auto-load on reboot '''
2345- log('ceph: Loading kernel module', level=INFO)
2346+ """Load a kernel module and configure for auto-load on reboot."""
2347+ log('Loading kernel module', level=INFO)
2348 cmd = ['modprobe', module]
2349 check_call(cmd)
2350 with open('/etc/modules', 'r+') as modules:
2351@@ -289,7 +273,7 @@
2352
2353
2354 def copy_files(src, dst, symlinks=False, ignore=None):
2355- ''' Copy files from src to dst '''
2356+ """Copy files from src to dst."""
2357 for item in os.listdir(src):
2358 s = os.path.join(src, item)
2359 d = os.path.join(dst, item)
2360@@ -300,9 +284,9 @@
2361
2362
2363 def ensure_ceph_storage(service, pool, rbd_img, sizemb, mount_point,
2364- blk_device, fstype, system_services=[]):
2365- """
2366- NOTE: This function must only be called from a single service unit for
2367+ blk_device, fstype, system_services=[],
2368+ replicas=3):
2369+ """NOTE: This function must only be called from a single service unit for
2370 the same rbd_img otherwise data loss will occur.
2371
2372 Ensures given pool and RBD image exists, is mapped to a block device,
2373@@ -316,15 +300,16 @@
2374 """
2375 # Ensure pool, RBD image, RBD mappings are in place.
2376 if not pool_exists(service, pool):
2377- log('ceph: Creating new pool {}.'.format(pool))
2378- create_pool(service, pool)
2379+ log('Creating new pool {}.'.format(pool), level=INFO)
2380+ create_pool(service, pool, replicas=replicas)
2381
2382 if not rbd_exists(service, pool, rbd_img):
2383- log('ceph: Creating RBD image ({}).'.format(rbd_img))
2384+ log('Creating RBD image ({}).'.format(rbd_img), level=INFO)
2385 create_rbd_image(service, pool, rbd_img, sizemb)
2386
2387 if not image_mapped(rbd_img):
2388- log('ceph: Mapping RBD Image {} as a Block Device.'.format(rbd_img))
2389+ log('Mapping RBD Image {} as a Block Device.'.format(rbd_img),
2390+ level=INFO)
2391 map_block_storage(service, pool, rbd_img)
2392
2393 # make file system
2394@@ -339,45 +324,47 @@
2395
2396 for svc in system_services:
2397 if service_running(svc):
2398- log('ceph: Stopping services {} prior to migrating data.'
2399- .format(svc))
2400+ log('Stopping services {} prior to migrating data.'
2401+ .format(svc), level=DEBUG)
2402 service_stop(svc)
2403
2404 place_data_on_block_device(blk_device, mount_point)
2405
2406 for svc in system_services:
2407- log('ceph: Starting service {} after migrating data.'
2408- .format(svc))
2409+ log('Starting service {} after migrating data.'
2410+ .format(svc), level=DEBUG)
2411 service_start(svc)
2412
2413
2414 def ensure_ceph_keyring(service, user=None, group=None):
2415- '''
2416- Ensures a ceph keyring is created for a named service
2417- and optionally ensures user and group ownership.
2418+ """Ensures a ceph keyring is created for a named service and optionally
2419+ ensures user and group ownership.
2420
2421 Returns False if no ceph key is available in relation state.
2422- '''
2423+ """
2424 key = None
2425 for rid in relation_ids('ceph'):
2426 for unit in related_units(rid):
2427 key = relation_get('key', rid=rid, unit=unit)
2428 if key:
2429 break
2430+
2431 if not key:
2432 return False
2433+
2434 create_keyring(service=service, key=key)
2435 keyring = _keyring_path(service)
2436 if user and group:
2437 check_call(['chown', '%s.%s' % (user, group), keyring])
2438+
2439 return True
2440
2441
2442 def ceph_version():
2443- ''' Retrieve the local version of ceph '''
2444+ """Retrieve the local version of ceph."""
2445 if os.path.exists('/usr/bin/ceph'):
2446 cmd = ['ceph', '-v']
2447- output = check_output(cmd)
2448+ output = check_output(cmd).decode('US-ASCII')
2449 output = output.split()
2450 if len(output) > 3:
2451 return output[2]
2452
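The placement-group maths in `create_pool` above (note the switch to floor division `//`, which keeps the result an integer on Python 3, and the new default of 3 replicas) can be sketched standalone; `placement_groups` is an illustrative name, not a charm-helpers function:

```python
def placement_groups(osd_count, replicas=3):
    """Upstream guidance: roughly 100 PGs per OSD, divided by the
    replica count; fall back to 200 when the OSD count can't be
    queried (older ceph versions)."""
    if osd_count:
        return osd_count * 100 // replicas  # floor division: int on py2 and py3
    return 200
```
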
2453=== modified file 'hooks/charmhelpers/contrib/storage/linux/loopback.py'
2454--- hooks/charmhelpers/contrib/storage/linux/loopback.py 2014-06-05 10:59:00 +0000
2455+++ hooks/charmhelpers/contrib/storage/linux/loopback.py 2014-12-11 17:56:40 +0000
2456@@ -1,12 +1,12 @@
2457-
2458 import os
2459 import re
2460-
2461 from subprocess import (
2462 check_call,
2463 check_output,
2464 )
2465
2466+import six
2467+
2468
2469 ##################################################
2470 # loopback device helpers.
2471@@ -37,7 +37,7 @@
2472 '''
2473 file_path = os.path.abspath(file_path)
2474 check_call(['losetup', '--find', file_path])
2475- for d, f in loopback_devices().iteritems():
2476+ for d, f in six.iteritems(loopback_devices()):
2477 if f == file_path:
2478 return d
2479
2480@@ -51,7 +51,7 @@
2481
2482 :returns: str: Full path to the ensured loopback device (eg, /dev/loop0)
2483 '''
2484- for d, f in loopback_devices().iteritems():
2485+ for d, f in six.iteritems(loopback_devices()):
2486 if f == path:
2487 return d
2488
2489
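The `six.iteritems` substitutions above are the standard way to iterate a dict on both Python 2 and 3 without materialising an intermediate list on py2. A small sketch of the loopback-device lookup pattern (with a tiny shim so it runs even where `six` is not installed):

```python
try:
    import six
except ImportError:
    # fallback shim for environments without six; iterates like py3 items()
    class six(object):
        @staticmethod
        def iteritems(d):
            return iter(d.items())

# hypothetical mapping, standing in for loopback_devices()'s output
devices = {'/dev/loop0': '/srv/image.img'}
found = None
for d, f in six.iteritems(devices):
    if f == '/srv/image.img':
        found = d
```
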
2490=== modified file 'hooks/charmhelpers/contrib/storage/linux/lvm.py'
2491--- hooks/charmhelpers/contrib/storage/linux/lvm.py 2014-06-24 11:05:17 +0000
2492+++ hooks/charmhelpers/contrib/storage/linux/lvm.py 2014-12-11 17:56:40 +0000
2493@@ -61,6 +61,7 @@
2494 vg = None
2495 pvd = check_output(['pvdisplay', block_device]).splitlines()
2496 for l in pvd:
2497+ l = l.decode('UTF-8')
2498 if l.strip().startswith('VG Name'):
2499 vg = ' '.join(l.strip().split()[2:])
2500 return vg
2501
2502=== modified file 'hooks/charmhelpers/contrib/storage/linux/utils.py'
2503--- hooks/charmhelpers/contrib/storage/linux/utils.py 2014-10-02 09:18:00 +0000
2504+++ hooks/charmhelpers/contrib/storage/linux/utils.py 2014-12-11 17:56:40 +0000
2505@@ -30,7 +30,8 @@
2506 # sometimes sgdisk exits non-zero; this is OK, dd will clean up
2507 call(['sgdisk', '--zap-all', '--mbrtogpt',
2508 '--clear', block_device])
2509- dev_end = check_output(['blockdev', '--getsz', block_device])
2510+ dev_end = check_output(['blockdev', '--getsz',
2511+ block_device]).decode('UTF-8')
2512 gpt_end = int(dev_end.split()[0]) - 100
2513 check_call(['dd', 'if=/dev/zero', 'of=%s' % (block_device),
2514 'bs=1M', 'count=1'])
2515@@ -47,7 +48,7 @@
2516 it doesn't.
2517 '''
2518 is_partition = bool(re.search(r".*[0-9]+\b", device))
2519- out = check_output(['mount'])
2520+ out = check_output(['mount']).decode('UTF-8')
2521 if is_partition:
2522 return bool(re.search(device + r"\b", out))
2523 return bool(re.search(device + r"[0-9]+\b", out))
2524
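A recurring pattern throughout this sync is appending `.decode('UTF-8')` to `check_output` calls: on Python 3 `check_output` returns `bytes`, and the regex matching in `is_device_mounted` needs `str`. A minimal illustration, using `echo` in place of the real `mount`/`blockdev` invocations:

```python
import subprocess

# check_output returns bytes on py3, so decode before any string/regex work
out = subprocess.check_output(['echo', 'vda1']).decode('UTF-8')
mounted = 'vda1' in out
```
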
2525=== modified file 'hooks/charmhelpers/core/fstab.py'
2526--- hooks/charmhelpers/core/fstab.py 2014-06-24 11:05:17 +0000
2527+++ hooks/charmhelpers/core/fstab.py 2014-12-11 17:56:40 +0000
2528@@ -3,10 +3,11 @@
2529
2530 __author__ = 'Jorge Niedbalski R. <jorge.niedbalski@canonical.com>'
2531
2532+import io
2533 import os
2534
2535
2536-class Fstab(file):
2537+class Fstab(io.FileIO):
2538 """This class extends file in order to implement a file reader/writer
2539 for file `/etc/fstab`
2540 """
2541@@ -24,8 +25,8 @@
2542 options = "defaults"
2543
2544 self.options = options
2545- self.d = d
2546- self.p = p
2547+ self.d = int(d)
2548+ self.p = int(p)
2549
2550 def __eq__(self, o):
2551 return str(self) == str(o)
2552@@ -45,7 +46,7 @@
2553 self._path = path
2554 else:
2555 self._path = self.DEFAULT_PATH
2556- file.__init__(self, self._path, 'r+')
2557+ super(Fstab, self).__init__(self._path, 'rb+')
2558
2559 def _hydrate_entry(self, line):
2560 # NOTE: use split with no arguments to split on any
2561@@ -58,8 +59,9 @@
2562 def entries(self):
2563 self.seek(0)
2564 for line in self.readlines():
2565+ line = line.decode('us-ascii')
2566 try:
2567- if not line.startswith("#"):
2568+ if line.strip() and not line.startswith("#"):
2569 yield self._hydrate_entry(line)
2570 except ValueError:
2571 pass
2572@@ -75,14 +77,14 @@
2573 if self.get_entry_by_attr('device', entry.device):
2574 return False
2575
2576- self.write(str(entry) + '\n')
2577+ self.write((str(entry) + '\n').encode('us-ascii'))
2578 self.truncate()
2579 return entry
2580
2581 def remove_entry(self, entry):
2582 self.seek(0)
2583
2584- lines = self.readlines()
2585+ lines = [l.decode('us-ascii') for l in self.readlines()]
2586
2587 found = False
2588 for index, line in enumerate(lines):
2589@@ -97,7 +99,7 @@
2590 lines.remove(line)
2591
2592 self.seek(0)
2593- self.write(''.join(lines))
2594+ self.write(''.join(lines).encode('us-ascii'))
2595 self.truncate()
2596 return True
2597
2598
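Because `Fstab` now subclasses `io.FileIO`, its underlying handle is binary: every write is encoded to `us-ascii` and every read is decoded back, and blank lines are now skipped alongside comments. The round-trip can be shown with an in-memory buffer standing in for `/etc/fstab`:

```python
import io

buf = io.BytesIO()  # stands in for the binary /etc/fstab handle
entry = "/dev/sda1 / ext4 defaults 0 1"
buf.write((entry + "\n").encode("us-ascii"))

buf.seek(0)
# mirror Fstab.entries(): decode each line, skip blanks and comments
lines = [l.decode("us-ascii") for l in buf.readlines()
         if l.strip() and not l.startswith(b"#")]
```
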
2599=== modified file 'hooks/charmhelpers/core/hookenv.py'
2600--- hooks/charmhelpers/core/hookenv.py 2014-10-02 09:18:00 +0000
2601+++ hooks/charmhelpers/core/hookenv.py 2014-12-11 17:56:40 +0000
2602@@ -9,9 +9,14 @@
2603 import yaml
2604 import subprocess
2605 import sys
2606-import UserDict
2607 from subprocess import CalledProcessError
2608
2609+import six
2610+if not six.PY3:
2611+ from UserDict import UserDict
2612+else:
2613+ from collections import UserDict
2614+
2615 CRITICAL = "CRITICAL"
2616 ERROR = "ERROR"
2617 WARNING = "WARNING"
2618@@ -63,16 +68,18 @@
2619 command = ['juju-log']
2620 if level:
2621 command += ['-l', level]
2622+ if not isinstance(message, six.string_types):
2623+ message = repr(message)
2624 command += [message]
2625 subprocess.call(command)
2626
2627
2628-class Serializable(UserDict.IterableUserDict):
2629+class Serializable(UserDict):
2630 """Wrapper, an object that can be serialized to yaml or json"""
2631
2632 def __init__(self, obj):
2633 # wrap the object
2634- UserDict.IterableUserDict.__init__(self)
2635+ UserDict.__init__(self)
2636 self.data = obj
2637
2638 def __getattr__(self, attr):
2639@@ -214,6 +221,12 @@
2640 except KeyError:
2641 return (self._prev_dict or {})[key]
2642
2643+ def keys(self):
2644+ prev_keys = []
2645+ if self._prev_dict is not None:
2646+ prev_keys = self._prev_dict.keys()
2647+ return list(set(prev_keys + list(dict.keys(self))))
2648+
2649 def load_previous(self, path=None):
2650 """Load previous copy of config from disk.
2651
2652@@ -263,7 +276,7 @@
2653
2654 """
2655 if self._prev_dict:
2656- for k, v in self._prev_dict.iteritems():
2657+ for k, v in six.iteritems(self._prev_dict):
2658 if k not in self:
2659 self[k] = v
2660 with open(self.path, 'w') as f:
2661@@ -278,7 +291,8 @@
2662 config_cmd_line.append(scope)
2663 config_cmd_line.append('--format=json')
2664 try:
2665- config_data = json.loads(subprocess.check_output(config_cmd_line))
2666+ config_data = json.loads(
2667+ subprocess.check_output(config_cmd_line).decode('UTF-8'))
2668 if scope is not None:
2669 return config_data
2670 return Config(config_data)
2671@@ -297,10 +311,10 @@
2672 if unit:
2673 _args.append(unit)
2674 try:
2675- return json.loads(subprocess.check_output(_args))
2676+ return json.loads(subprocess.check_output(_args).decode('UTF-8'))
2677 except ValueError:
2678 return None
2679- except CalledProcessError, e:
2680+ except CalledProcessError as e:
2681 if e.returncode == 2:
2682 return None
2683 raise
2684@@ -312,7 +326,7 @@
2685 relation_cmd_line = ['relation-set']
2686 if relation_id is not None:
2687 relation_cmd_line.extend(('-r', relation_id))
2688- for k, v in (relation_settings.items() + kwargs.items()):
2689+ for k, v in (list(relation_settings.items()) + list(kwargs.items())):
2690 if v is None:
2691 relation_cmd_line.append('{}='.format(k))
2692 else:
2693@@ -329,7 +343,8 @@
2694 relid_cmd_line = ['relation-ids', '--format=json']
2695 if reltype is not None:
2696 relid_cmd_line.append(reltype)
2697- return json.loads(subprocess.check_output(relid_cmd_line)) or []
2698+ return json.loads(
2699+ subprocess.check_output(relid_cmd_line).decode('UTF-8')) or []
2700 return []
2701
2702
2703@@ -340,7 +355,8 @@
2704 units_cmd_line = ['relation-list', '--format=json']
2705 if relid is not None:
2706 units_cmd_line.extend(('-r', relid))
2707- return json.loads(subprocess.check_output(units_cmd_line)) or []
2708+ return json.loads(
2709+ subprocess.check_output(units_cmd_line).decode('UTF-8')) or []
2710
2711
2712 @cached
2713@@ -380,21 +396,31 @@
2714
2715
2716 @cached
2717+def metadata():
2718+ """Get the current charm metadata.yaml contents as a python object"""
2719+ with open(os.path.join(charm_dir(), 'metadata.yaml')) as md:
2720+ return yaml.safe_load(md)
2721+
2722+
2723+@cached
2724 def relation_types():
2725 """Get a list of relation types supported by this charm"""
2726- charmdir = os.environ.get('CHARM_DIR', '')
2727- mdf = open(os.path.join(charmdir, 'metadata.yaml'))
2728- md = yaml.safe_load(mdf)
2729 rel_types = []
2730+ md = metadata()
2731 for key in ('provides', 'requires', 'peers'):
2732 section = md.get(key)
2733 if section:
2734 rel_types.extend(section.keys())
2735- mdf.close()
2736 return rel_types
2737
2738
2739 @cached
2740+def charm_name():
2741+ """Get the name of the current charm as is specified on metadata.yaml"""
2742+ return metadata().get('name')
2743+
2744+
2745+@cached
2746 def relations():
2747 """Get a nested dictionary of relation data for all related units"""
2748 rels = {}
2749@@ -449,7 +475,7 @@
2750 """Get the unit ID for the remote unit"""
2751 _args = ['unit-get', '--format=json', attribute]
2752 try:
2753- return json.loads(subprocess.check_output(_args))
2754+ return json.loads(subprocess.check_output(_args).decode('UTF-8'))
2755 except ValueError:
2756 return None
2757
2758
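The new `Config.keys()` above returns the union of the previous hook run's saved keys and the keys set during the current run. A standalone sketch of that merge — `merged_keys` is a hypothetical name, not part of hookenv:

```python
def merged_keys(prev_dict, current):
    """Union of previous config keys and current keys, returned as a
    list with unspecified order, mirroring Config.keys()."""
    prev_keys = []
    if prev_dict is not None:
        prev_keys = list(prev_dict.keys())
    return list(set(prev_keys + list(current.keys())))
```
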
2759=== modified file 'hooks/charmhelpers/core/host.py'
2760--- hooks/charmhelpers/core/host.py 2014-10-02 09:18:00 +0000
2761+++ hooks/charmhelpers/core/host.py 2014-12-11 17:56:40 +0000
2762@@ -6,19 +6,20 @@
2763 # Matthew Wedgwood <matthew.wedgwood@canonical.com>
2764
2765 import os
2766+import re
2767 import pwd
2768 import grp
2769 import random
2770 import string
2771 import subprocess
2772 import hashlib
2773-import shutil
2774 from contextlib import contextmanager
2775-
2776 from collections import OrderedDict
2777
2778-from hookenv import log
2779-from fstab import Fstab
2780+import six
2781+
2782+from .hookenv import log
2783+from .fstab import Fstab
2784
2785
2786 def service_start(service_name):
2787@@ -54,7 +55,9 @@
2788 def service_running(service):
2789 """Determine whether a system service is running"""
2790 try:
2791- output = subprocess.check_output(['service', service, 'status'], stderr=subprocess.STDOUT)
2792+ output = subprocess.check_output(
2793+ ['service', service, 'status'],
2794+ stderr=subprocess.STDOUT).decode('UTF-8')
2795 except subprocess.CalledProcessError:
2796 return False
2797 else:
2798@@ -67,7 +70,9 @@
2799 def service_available(service_name):
2800 """Determine whether a system service is available"""
2801 try:
2802- subprocess.check_output(['service', service_name, 'status'], stderr=subprocess.STDOUT)
2803+ subprocess.check_output(
2804+ ['service', service_name, 'status'],
2805+ stderr=subprocess.STDOUT).decode('UTF-8')
2806 except subprocess.CalledProcessError as e:
2807 return 'unrecognized service' not in e.output
2808 else:
2809@@ -96,6 +101,26 @@
2810 return user_info
2811
2812
2813+def add_group(group_name, system_group=False):
2814+ """Add a group to the system"""
2815+ try:
2816+ group_info = grp.getgrnam(group_name)
2817+ log('group {0} already exists!'.format(group_name))
2818+ except KeyError:
2819+ log('creating group {0}'.format(group_name))
2820+ cmd = ['addgroup']
2821+ if system_group:
2822+ cmd.append('--system')
2823+ else:
2824+ cmd.extend([
2825+ '--group',
2826+ ])
2827+ cmd.append(group_name)
2828+ subprocess.check_call(cmd)
2829+ group_info = grp.getgrnam(group_name)
2830+ return group_info
2831+
2832+
2833 def add_user_to_group(username, group):
2834 """Add a user to a group"""
2835 cmd = [
2836@@ -115,7 +140,7 @@
2837 cmd.append(from_path)
2838 cmd.append(to_path)
2839 log(" ".join(cmd))
2840- return subprocess.check_output(cmd).strip()
2841+ return subprocess.check_output(cmd).decode('UTF-8').strip()
2842
2843
2844 def symlink(source, destination):
2845@@ -130,7 +155,7 @@
2846 subprocess.check_call(cmd)
2847
2848
2849-def mkdir(path, owner='root', group='root', perms=0555, force=False):
2850+def mkdir(path, owner='root', group='root', perms=0o555, force=False):
2851 """Create a directory"""
2852 log("Making dir {} {}:{} {:o}".format(path, owner, group,
2853 perms))
2854@@ -146,7 +171,7 @@
2855 os.chown(realpath, uid, gid)
2856
2857
2858-def write_file(path, content, owner='root', group='root', perms=0444):
2859+def write_file(path, content, owner='root', group='root', perms=0o444):
2860 """Create or overwrite a file with the contents of a string"""
2861 log("Writing file {} {}:{} {:o}".format(path, owner, group, perms))
2862 uid = pwd.getpwnam(owner).pw_uid
2863@@ -177,7 +202,7 @@
2864 cmd_args.extend([device, mountpoint])
2865 try:
2866 subprocess.check_output(cmd_args)
2867- except subprocess.CalledProcessError, e:
2868+ except subprocess.CalledProcessError as e:
2869 log('Error mounting {} at {}\n{}'.format(device, mountpoint, e.output))
2870 return False
2871
2872@@ -191,7 +216,7 @@
2873 cmd_args = ['umount', mountpoint]
2874 try:
2875 subprocess.check_output(cmd_args)
2876- except subprocess.CalledProcessError, e:
2877+ except subprocess.CalledProcessError as e:
2878 log('Error unmounting {}\n{}'.format(mountpoint, e.output))
2879 return False
2880
2881@@ -218,8 +243,8 @@
2882 """
2883 if os.path.exists(path):
2884 h = getattr(hashlib, hash_type)()
2885- with open(path, 'r') as source:
2886- h.update(source.read()) # IGNORE:E1101 - it does have update
2887+ with open(path, 'rb') as source:
2888+ h.update(source.read())
2889 return h.hexdigest()
2890 else:
2891 return None
2892@@ -297,7 +322,7 @@
2893 if length is None:
2894 length = random.choice(range(35, 45))
2895 alphanumeric_chars = [
2896- l for l in (string.letters + string.digits)
2897+ l for l in (string.ascii_letters + string.digits)
2898 if l not in 'l0QD1vAEIOUaeiou']
2899 random_chars = [
2900 random.choice(alphanumeric_chars) for _ in range(length)]
2901@@ -306,18 +331,24 @@
2902
2903 def list_nics(nic_type):
2904 '''Return a list of nics of given type(s)'''
2905- if isinstance(nic_type, basestring):
2906+ if isinstance(nic_type, six.string_types):
2907 int_types = [nic_type]
2908 else:
2909 int_types = nic_type
2910 interfaces = []
2911 for int_type in int_types:
2912 cmd = ['ip', 'addr', 'show', 'label', int_type + '*']
2913- ip_output = subprocess.check_output(cmd).split('\n')
2914+ ip_output = subprocess.check_output(cmd).decode('UTF-8').split('\n')
2915 ip_output = (line for line in ip_output if line)
2916 for line in ip_output:
2917 if line.split()[1].startswith(int_type):
2918- interfaces.append(line.split()[1].replace(":", ""))
2919+ matched = re.search('.*: (bond[0-9]+\.[0-9]+)@.*', line)
2920+ if matched:
2921+ interface = matched.groups()[0]
2922+ else:
2923+ interface = line.split()[1].replace(":", "")
2924+ interfaces.append(interface)
2925+
2926 return interfaces
2927
2928
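The hunk above teaches `list_nics()` to recognise VLAN-tagged bond interfaces (e.g. `bond0.100@bond0`), whose `ip addr show` labels would otherwise be truncated at the `@`. A minimal, dependency-free sketch of the same matching logic, with hypothetical sample lines:

```python
import re

def parse_interface(line):
    # VLAN-tagged bonds appear as "bond0.100@bond0" in `ip addr` output;
    # capture the "bond0.100" part. Otherwise fall back to the plain
    # second field with its trailing colon stripped, as before the patch.
    matched = re.search(r'.*: (bond[0-9]+\.[0-9]+)@.*', line)
    if matched:
        return matched.groups()[0]
    return line.split()[1].replace(":", "")

print(parse_interface("5: bond0.100@bond0: <BROADCAST,MULTICAST> mtu 1500"))  # bond0.100
print(parse_interface("2: eth0: <BROADCAST,MULTICAST,UP> mtu 1500"))          # eth0
```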
2929@@ -329,7 +360,7 @@
2930
2931 def get_nic_mtu(nic):
2932 cmd = ['ip', 'addr', 'show', nic]
2933- ip_output = subprocess.check_output(cmd).split('\n')
2934+ ip_output = subprocess.check_output(cmd).decode('UTF-8').split('\n')
2935 mtu = ""
2936 for line in ip_output:
2937 words = line.split()
2938@@ -340,7 +371,7 @@
2939
2940 def get_nic_hwaddr(nic):
2941 cmd = ['ip', '-o', '-0', 'addr', 'show', nic]
2942- ip_output = subprocess.check_output(cmd)
2943+ ip_output = subprocess.check_output(cmd).decode('UTF-8')
2944 hwaddr = ""
2945 words = ip_output.split()
2946 if 'link/ether' in words:
2947@@ -357,8 +388,8 @@
2948
2949 '''
2950 import apt_pkg
2951- from charmhelpers.fetch import apt_cache
2952 if not pkgcache:
2953+ from charmhelpers.fetch import apt_cache
2954 pkgcache = apt_cache()
2955 pkg = pkgcache[package]
2956 return apt_pkg.version_compare(pkg.current_ver.ver_str, revno)
2957
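A recurring change throughout `host.py` is appending `.decode('UTF-8')` to `subprocess.check_output()`: under Python 3 it returns `bytes`, so the explicit decode keeps callers working with `str` on both Python 2 and 3. The pattern in isolation:

```python
import subprocess

# Python 3: check_output() returns bytes, so decode before string handling.
# Python 2: decode is a harmless no-op conversion to unicode.
out = subprocess.check_output(['echo', 'hello']).decode('UTF-8').strip()
print(out)  # hello
```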
2958=== modified file 'hooks/charmhelpers/core/services/__init__.py'
2959--- hooks/charmhelpers/core/services/__init__.py 2014-09-19 16:52:38 +0000
2960+++ hooks/charmhelpers/core/services/__init__.py 2014-12-11 17:56:40 +0000
2961@@ -1,2 +1,2 @@
2962-from .base import *
2963-from .helpers import *
2964+from .base import * # NOQA
2965+from .helpers import * # NOQA
2966
2967=== modified file 'hooks/charmhelpers/core/services/helpers.py'
2968--- hooks/charmhelpers/core/services/helpers.py 2014-09-22 20:21:38 +0000
2969+++ hooks/charmhelpers/core/services/helpers.py 2014-12-11 17:56:40 +0000
2970@@ -196,7 +196,7 @@
2971 if not os.path.isabs(file_name):
2972 file_name = os.path.join(hookenv.charm_dir(), file_name)
2973 with open(file_name, 'w') as file_stream:
2974- os.fchmod(file_stream.fileno(), 0600)
2975+ os.fchmod(file_stream.fileno(), 0o600)
2976 yaml.dump(config_data, file_stream)
2977
2978 def read_context(self, file_name):
2979@@ -211,15 +211,19 @@
2980
2981 class TemplateCallback(ManagerCallback):
2982 """
2983- Callback class that will render a Jinja2 template, for use as a ready action.
2984-
2985- :param str source: The template source file, relative to `$CHARM_DIR/templates`
2986+ Callback class that will render a Jinja2 template, for use as a ready
2987+ action.
2988+
2989+ :param str source: The template source file, relative to
2990+ `$CHARM_DIR/templates`
2991+
2992 :param str target: The target to write the rendered template to
2993 :param str owner: The owner of the rendered file
2994 :param str group: The group of the rendered file
2995 :param int perms: The permissions of the rendered file
2996 """
2997- def __init__(self, source, target, owner='root', group='root', perms=0444):
2998+ def __init__(self, source, target,
2999+ owner='root', group='root', perms=0o444):
3000 self.source = source
3001 self.target = target
3002 self.owner = owner
3003
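The `0600` → `0o600` (and `0444` → `0o444`) changes are required because Python 3 rejects the old octal-literal syntax, while `0o`-prefixed literals work on Python 2.6+ as well. A small self-contained sketch of the `fchmod`-before-write pattern used by `write_context()` above, using a temporary file rather than the charm's real paths:

```python
import os
import stat
import tempfile

# Restrict permissions on the open descriptor before writing content,
# so the file is never readable by others, even briefly.
fd, path = tempfile.mkstemp()
with os.fdopen(fd, 'w') as file_stream:
    os.fchmod(file_stream.fileno(), 0o600)
    file_stream.write('secret: value\n')

mode = stat.S_IMODE(os.stat(path).st_mode)
print(oct(mode))  # 0o600
os.unlink(path)
```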
3004=== modified file 'hooks/charmhelpers/core/templating.py'
3005--- hooks/charmhelpers/core/templating.py 2014-09-19 16:52:38 +0000
3006+++ hooks/charmhelpers/core/templating.py 2014-12-11 17:56:40 +0000
3007@@ -4,7 +4,8 @@
3008 from charmhelpers.core import hookenv
3009
3010
3011-def render(source, target, context, owner='root', group='root', perms=0444, templates_dir=None):
3012+def render(source, target, context, owner='root', group='root',
3013+ perms=0o444, templates_dir=None):
3014 """
3015 Render a template.
3016
3017
3018=== modified file 'hooks/charmhelpers/fetch/__init__.py'
3019--- hooks/charmhelpers/fetch/__init__.py 2014-10-02 09:18:00 +0000
3020+++ hooks/charmhelpers/fetch/__init__.py 2014-12-11 17:56:40 +0000
3021@@ -5,10 +5,6 @@
3022 from charmhelpers.core.host import (
3023 lsb_release
3024 )
3025-from urlparse import (
3026- urlparse,
3027- urlunparse,
3028-)
3029 import subprocess
3030 from charmhelpers.core.hookenv import (
3031 config,
3032@@ -16,6 +12,12 @@
3033 )
3034 import os
3035
3036+import six
3037+if six.PY3:
3038+ from urllib.parse import urlparse, urlunparse
3039+else:
3040+ from urlparse import urlparse, urlunparse
3041+
3042
3043 CLOUD_ARCHIVE = """# Ubuntu Cloud Archive
3044 deb http://ubuntu-cloud.archive.canonical.com/ubuntu {} main
3045@@ -72,6 +74,7 @@
3046 FETCH_HANDLERS = (
3047 'charmhelpers.fetch.archiveurl.ArchiveUrlFetchHandler',
3048 'charmhelpers.fetch.bzrurl.BzrUrlFetchHandler',
3049+ 'charmhelpers.fetch.giturl.GitUrlFetchHandler',
3050 )
3051
3052 APT_NO_LOCK = 100 # The return code for "couldn't acquire lock" in APT.
3053@@ -148,7 +151,7 @@
3054 cmd = ['apt-get', '--assume-yes']
3055 cmd.extend(options)
3056 cmd.append('install')
3057- if isinstance(packages, basestring):
3058+ if isinstance(packages, six.string_types):
3059 cmd.append(packages)
3060 else:
3061 cmd.extend(packages)
3062@@ -181,7 +184,7 @@
3063 def apt_purge(packages, fatal=False):
3064 """Purge one or more packages"""
3065 cmd = ['apt-get', '--assume-yes', 'purge']
3066- if isinstance(packages, basestring):
3067+ if isinstance(packages, six.string_types):
3068 cmd.append(packages)
3069 else:
3070 cmd.extend(packages)
3071@@ -192,7 +195,7 @@
3072 def apt_hold(packages, fatal=False):
3073 """Hold one or more packages"""
3074 cmd = ['apt-mark', 'hold']
3075- if isinstance(packages, basestring):
3076+ if isinstance(packages, six.string_types):
3077 cmd.append(packages)
3078 else:
3079 cmd.extend(packages)
3080@@ -218,6 +221,7 @@
3081 pocket for the release.
3082 'cloud:' may be used to activate official cloud archive pockets,
3083 such as 'cloud:icehouse'
3084+ 'distro' may be used as a noop
3085
3086 @param key: A key to be added to the system's APT keyring and used
3087 to verify the signatures on packages. Ideally, this should be an
3088@@ -251,12 +255,14 @@
3089 release = lsb_release()['DISTRIB_CODENAME']
3090 with open('/etc/apt/sources.list.d/proposed.list', 'w') as apt:
3091 apt.write(PROPOSED_POCKET.format(release))
3092+ elif source == 'distro':
3093+ pass
3094 else:
3095- raise SourceConfigError("Unknown source: {!r}".format(source))
3096+ log("Unknown source: {!r}".format(source))
3097
3098 if key:
3099 if '-----BEGIN PGP PUBLIC KEY BLOCK-----' in key:
3100- with NamedTemporaryFile() as key_file:
3101+ with NamedTemporaryFile('w+') as key_file:
3102 key_file.write(key)
3103 key_file.flush()
3104 key_file.seek(0)
3105@@ -293,14 +299,14 @@
3106 sources = safe_load((config(sources_var) or '').strip()) or []
3107 keys = safe_load((config(keys_var) or '').strip()) or None
3108
3109- if isinstance(sources, basestring):
3110+ if isinstance(sources, six.string_types):
3111 sources = [sources]
3112
3113 if keys is None:
3114 for source in sources:
3115 add_source(source, None)
3116 else:
3117- if isinstance(keys, basestring):
3118+ if isinstance(keys, six.string_types):
3119 keys = [keys]
3120
3121 if len(sources) != len(keys):
3122@@ -397,7 +403,7 @@
3123 while result is None or result == APT_NO_LOCK:
3124 try:
3125 result = subprocess.check_call(cmd, env=env)
3126- except subprocess.CalledProcessError, e:
3127+ except subprocess.CalledProcessError as e:
3128 retry_count = retry_count + 1
3129 if retry_count > APT_NO_LOCK_RETRY_COUNT:
3130 raise
3131
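The `basestring` → `six.string_types` substitutions in `apt_install`, `apt_purge`, and `apt_hold` preserve the helpers' dual calling convention (a single package name or a list of names) on Python 3, where `basestring` no longer exists. A sketch of the pattern, with `six.string_types` approximated inline so the example has no third-party dependency:

```python
import sys

# Equivalent of six.string_types for this sketch.
string_types = (str,) if sys.version_info[0] >= 3 else (basestring,)  # noqa

def build_apt_cmd(packages):
    # Accept either a single package name or an iterable of names,
    # as apt_install/apt_purge/apt_hold do above.
    cmd = ['apt-get', '--assume-yes', 'install']
    if isinstance(packages, string_types):
        cmd.append(packages)
    else:
        cmd.extend(packages)
    return cmd

print(build_apt_cmd('vim'))           # ['apt-get', '--assume-yes', 'install', 'vim']
print(build_apt_cmd(['vim', 'git']))  # ['apt-get', '--assume-yes', 'install', 'vim', 'git']
```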
3132=== modified file 'hooks/charmhelpers/fetch/archiveurl.py'
3133--- hooks/charmhelpers/fetch/archiveurl.py 2014-10-02 09:18:00 +0000
3134+++ hooks/charmhelpers/fetch/archiveurl.py 2014-12-11 17:56:40 +0000
3135@@ -1,8 +1,23 @@
3136 import os
3137-import urllib2
3138-from urllib import urlretrieve
3139-import urlparse
3140 import hashlib
3141+import re
3142+
3143+import six
3144+if six.PY3:
3145+ from urllib.request import (
3146+ build_opener, install_opener, urlopen, urlretrieve,
3147+ HTTPPasswordMgrWithDefaultRealm, HTTPBasicAuthHandler,
3148+ )
3149+ from urllib.parse import urlparse, urlunparse, parse_qs
3150+ from urllib.error import URLError
3151+else:
3152+ from urllib import urlretrieve
3153+ from urllib2 import (
3154+ build_opener, install_opener, urlopen,
3155+ HTTPPasswordMgrWithDefaultRealm, HTTPBasicAuthHandler,
3156+ URLError
3157+ )
3158+ from urlparse import urlparse, urlunparse, parse_qs
3159
3160 from charmhelpers.fetch import (
3161 BaseFetchHandler,
3162@@ -15,6 +30,24 @@
3163 from charmhelpers.core.host import mkdir, check_hash
3164
3165
3166+def splituser(host):
3167+ '''urllib.splituser(), but six's support of this seems broken'''
3168+ _userprog = re.compile('^(.*)@(.*)$')
3169+ match = _userprog.match(host)
3170+ if match:
3171+ return match.group(1, 2)
3172+ return None, host
3173+
3174+
3175+def splitpasswd(user):
3176+ '''urllib.splitpasswd(), but six's support of this is missing'''
3177+ _passwdprog = re.compile('^([^:]*):(.*)$', re.S)
3178+ match = _passwdprog.match(user)
3179+ if match:
3180+ return match.group(1, 2)
3181+ return user, None
3182+
3183+
3184 class ArchiveUrlFetchHandler(BaseFetchHandler):
3185 """
3186 Handler to download archive files from arbitrary URLs.
3187@@ -42,20 +75,20 @@
3188 """
3189 # propogate all exceptions
3190 # URLError, OSError, etc
3191- proto, netloc, path, params, query, fragment = urlparse.urlparse(source)
3192+ proto, netloc, path, params, query, fragment = urlparse(source)
3193 if proto in ('http', 'https'):
3194- auth, barehost = urllib2.splituser(netloc)
3195+ auth, barehost = splituser(netloc)
3196 if auth is not None:
3197- source = urlparse.urlunparse((proto, barehost, path, params, query, fragment))
3198- username, password = urllib2.splitpasswd(auth)
3199- passman = urllib2.HTTPPasswordMgrWithDefaultRealm()
3200+ source = urlunparse((proto, barehost, path, params, query, fragment))
3201+ username, password = splitpasswd(auth)
3202+ passman = HTTPPasswordMgrWithDefaultRealm()
3203 # Realm is set to None in add_password to force the username and password
3204 # to be used whatever the realm
3205 passman.add_password(None, source, username, password)
3206- authhandler = urllib2.HTTPBasicAuthHandler(passman)
3207- opener = urllib2.build_opener(authhandler)
3208- urllib2.install_opener(opener)
3209- response = urllib2.urlopen(source)
3210+ authhandler = HTTPBasicAuthHandler(passman)
3211+ opener = build_opener(authhandler)
3212+ install_opener(opener)
3213+ response = urlopen(source)
3214 try:
3215 with open(dest, 'w') as dest_file:
3216 dest_file.write(response.read())
3217@@ -91,17 +124,21 @@
3218 url_parts = self.parse_url(source)
3219 dest_dir = os.path.join(os.environ.get('CHARM_DIR'), 'fetched')
3220 if not os.path.exists(dest_dir):
3221- mkdir(dest_dir, perms=0755)
3222+ mkdir(dest_dir, perms=0o755)
3223 dld_file = os.path.join(dest_dir, os.path.basename(url_parts.path))
3224 try:
3225 self.download(source, dld_file)
3226- except urllib2.URLError as e:
3227+ except URLError as e:
3228 raise UnhandledSource(e.reason)
3229 except OSError as e:
3230 raise UnhandledSource(e.strerror)
3231- options = urlparse.parse_qs(url_parts.fragment)
3232+ options = parse_qs(url_parts.fragment)
3233 for key, value in options.items():
3234- if key in hashlib.algorithms:
3235+ if not six.PY3:
3236+ algorithms = hashlib.algorithms
3237+ else:
3238+ algorithms = hashlib.algorithms_available
3239+ if key in algorithms:
3240 check_hash(dld_file, value, key)
3241 if checksum:
3242 check_hash(dld_file, checksum, hash_type)
3243
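The new `splituser()`/`splitpasswd()` helpers reimplement the `urllib2` functions of the same names, which `six` does not expose cleanly across Python versions. They split credentials out of a URL's netloc so basic-auth downloads keep working. The same logic, demonstrated on a hypothetical netloc:

```python
import re

def splituser(host):
    # "user:pass@host" -> ("user:pass", "host"); no "@" -> (None, host).
    match = re.match('^(.*)@(.*)$', host)
    if match:
        return match.group(1, 2)
    return None, host

def splitpasswd(user):
    # "user:pass" -> ("user", "pass"); no ":" -> (user, None).
    match = re.match('^([^:]*):(.*)$', user, re.S)
    if match:
        return match.group(1, 2)
    return user, None

auth, barehost = splituser('alice:s3cret@example.com')
username, password = splitpasswd(auth)
print(username, password, barehost)  # alice s3cret example.com
```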
3244=== modified file 'hooks/charmhelpers/fetch/bzrurl.py'
3245--- hooks/charmhelpers/fetch/bzrurl.py 2014-06-24 11:05:17 +0000
3246+++ hooks/charmhelpers/fetch/bzrurl.py 2014-12-11 17:56:40 +0000
3247@@ -5,6 +5,10 @@
3248 )
3249 from charmhelpers.core.host import mkdir
3250
3251+import six
3252+if six.PY3:
3253+ raise ImportError('bzrlib does not support Python3')
3254+
3255 try:
3256 from bzrlib.branch import Branch
3257 except ImportError:
3258@@ -42,7 +46,7 @@
3259 dest_dir = os.path.join(os.environ.get('CHARM_DIR'), "fetched",
3260 branch_name)
3261 if not os.path.exists(dest_dir):
3262- mkdir(dest_dir, perms=0755)
3263+ mkdir(dest_dir, perms=0o755)
3264 try:
3265 self.branch(source, dest_dir)
3266 except OSError as e:
3267
3268=== added file 'hooks/charmhelpers/fetch/giturl.py'
3269--- hooks/charmhelpers/fetch/giturl.py 1970-01-01 00:00:00 +0000
3270+++ hooks/charmhelpers/fetch/giturl.py 2014-12-11 17:56:40 +0000
3271@@ -0,0 +1,51 @@
3272+import os
3273+from charmhelpers.fetch import (
3274+ BaseFetchHandler,
3275+ UnhandledSource
3276+)
3277+from charmhelpers.core.host import mkdir
3278+
3279+import six
3280+if six.PY3:
3281+ raise ImportError('GitPython does not support Python 3')
3282+
3283+try:
3284+ from git import Repo
3285+except ImportError:
3286+ from charmhelpers.fetch import apt_install
3287+ apt_install("python-git")
3288+ from git import Repo
3289+
3290+
3291+class GitUrlFetchHandler(BaseFetchHandler):
3292+ """Handler for git branches via generic and github URLs"""
3293+ def can_handle(self, source):
3294+ url_parts = self.parse_url(source)
3295+ # TODO (mattyw) no support for ssh git@ yet
3296+ if url_parts.scheme not in ('http', 'https', 'git'):
3297+ return False
3298+ else:
3299+ return True
3300+
3301+ def clone(self, source, dest, branch):
3302+ if not self.can_handle(source):
3303+ raise UnhandledSource("Cannot handle {}".format(source))
3304+
3305+ repo = Repo.clone_from(source, dest)
3306+ repo.git.checkout(branch)
3307+
3308+ def install(self, source, branch="master", dest=None):
3309+ url_parts = self.parse_url(source)
3310+ branch_name = url_parts.path.strip("/").split("/")[-1]
3311+ if dest:
3312+ dest_dir = os.path.join(dest, branch_name)
3313+ else:
3314+ dest_dir = os.path.join(os.environ.get('CHARM_DIR'), "fetched",
3315+ branch_name)
3316+ if not os.path.exists(dest_dir):
3317+ mkdir(dest_dir, perms=0o755)
3318+ try:
3319+ self.clone(source, dest_dir, branch)
3320+ except OSError as e:
3321+ raise UnhandledSource(e.strerror)
3322+ return dest_dir
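The new `GitUrlFetchHandler` only accepts `http`, `https`, and `git` URLs; as the TODO notes, `git@host:` SSH-style sources are not yet supported. A minimal sketch of just the `can_handle()` scheme check (without the GitPython clone), showing why an SSH-style URL is rejected:

```python
try:
    from urllib.parse import urlparse   # Python 3
except ImportError:
    from urlparse import urlparse       # Python 2

def can_handle(source):
    # "git@github.com:..." has no valid URL scheme, so urlparse()
    # yields scheme == '' and the handler declines it.
    return urlparse(source).scheme in ('http', 'https', 'git')

print(can_handle('https://github.com/juju/charm-helpers'))   # True
print(can_handle('git@github.com:juju/charm-helpers.git'))   # False
```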
