Merge lp:~hopem/charms/trusty/jenkins/python-redux into lp:charms/trusty/jenkins

Proposed by Edward Hope-Morley
Status: Superseded
Proposed branch: lp:~hopem/charms/trusty/jenkins/python-redux
Merge into: lp:charms/trusty/jenkins
Diff against target: 6363 lines (+5865/-235)
46 files modified
Makefile (+30/-0)
bin/charm_helpers_sync.py (+225/-0)
charm-helpers-hooks.yaml (+8/-0)
config.yaml (+1/-1)
hooks/charmhelpers/__init__.py (+22/-0)
hooks/charmhelpers/contrib/openstack/alternatives.py (+17/-0)
hooks/charmhelpers/contrib/openstack/amulet/deployment.py (+92/-0)
hooks/charmhelpers/contrib/openstack/amulet/utils.py (+278/-0)
hooks/charmhelpers/contrib/openstack/context.py (+1017/-0)
hooks/charmhelpers/contrib/openstack/ip.py (+93/-0)
hooks/charmhelpers/contrib/openstack/neutron.py (+217/-0)
hooks/charmhelpers/contrib/openstack/templates/__init__.py (+2/-0)
hooks/charmhelpers/contrib/openstack/templates/ceph.conf (+15/-0)
hooks/charmhelpers/contrib/openstack/templates/haproxy.cfg (+54/-0)
hooks/charmhelpers/contrib/openstack/templates/openstack_https_frontend (+24/-0)
hooks/charmhelpers/contrib/openstack/templates/openstack_https_frontend.conf (+24/-0)
hooks/charmhelpers/contrib/openstack/templating.py (+279/-0)
hooks/charmhelpers/contrib/openstack/utils.py (+619/-0)
hooks/charmhelpers/core/fstab.py (+118/-0)
hooks/charmhelpers/core/hookenv.py (+552/-0)
hooks/charmhelpers/core/host.py (+416/-0)
hooks/charmhelpers/core/services/__init__.py (+2/-0)
hooks/charmhelpers/core/services/base.py (+313/-0)
hooks/charmhelpers/core/services/helpers.py (+243/-0)
hooks/charmhelpers/core/sysctl.py (+34/-0)
hooks/charmhelpers/core/templating.py (+52/-0)
hooks/charmhelpers/fetch/__init__.py (+416/-0)
hooks/charmhelpers/fetch/archiveurl.py (+145/-0)
hooks/charmhelpers/fetch/bzrurl.py (+54/-0)
hooks/charmhelpers/fetch/giturl.py (+51/-0)
hooks/charmhelpers/payload/__init__.py (+1/-0)
hooks/charmhelpers/payload/execd.py (+50/-0)
hooks/config-changed (+0/-7)
hooks/install (+0/-151)
hooks/jenkins_hooks.py (+220/-0)
hooks/jenkins_utils.py (+169/-0)
hooks/master-relation-broken (+0/-17)
hooks/master-relation-changed (+0/-24)
hooks/master-relation-departed (+0/-12)
hooks/master-relation-joined (+0/-5)
hooks/start (+0/-3)
hooks/stop (+0/-3)
hooks/upgrade-charm (+0/-7)
hooks/website-relation-joined (+0/-5)
unit_tests/test_jenkins_hooks.py (+6/-0)
unit_tests/test_jenkins_utils.py (+6/-0)
To merge this branch: bzr merge lp:~hopem/charms/trusty/jenkins/python-redux
Reviewer                   Review Type        Date Requested  Status
Review Queue (community)   automated testing                  Needs Fixing
Paul Larson                                                   Pending
Jorge Niedbalski                                              Pending
Ryan Beisner                                                  Pending
Felipe Reyes                                                  Pending
James Page                                                    Pending
Review via email: mp+243958@code.launchpad.net

This proposal supersedes a proposal from 2014-11-18.

This proposal has been superseded by a proposal from 2014-12-11.

Revision history for this message
Review Queue (review-queue) wrote : Posted in a previous version of this proposal

This item has failed automated testing. Results are available here: http://reports.vapour.ws/charm-tests/charm-bundle-test-10336-results

review: Needs Fixing (automated testing)
Revision history for this message
Felipe Reyes (freyes) wrote : Posted in a previous version of this proposal

Setting the password doesn't work: deploying as below doesn't allow you to log in with admin/admin. Also, first deploying from the charm store and then upgrading to this branch breaks the password.

---
jenkins:
    password: "admin"
---

$ juju deploy --config config.yaml local:trusty/jenkins

review: Needs Fixing
Revision history for this message
Edward Hope-Morley (hopem) wrote : Posted in a previous version of this proposal

Thanks Felipe, taking a look.

Revision history for this message
Review Queue (review-queue) wrote :

This item has failed automated testing. Results are available here: http://reports.vapour.ws/charm-tests/charm-bundle-test-10636-results

review: Needs Fixing (automated testing)
39. By Edward Hope-Morley

fixed deb source and added amulet test rule to makefile

40. By Edward Hope-Morley

fixed deb source and added amulet test rule to makefile

41. By Edward Hope-Morley

fixed deb source and added amulet test rule to makefile

42. By Edward Hope-Morley

synced lp:charm-helpers

43. By Edward Hope-Morley

fixed python-six import issue

44. By Edward Hope-Morley

fixed python-six import issue

45. By Edward Hope-Morley

fixed python-six import issue

46. By Edward Hope-Morley

removed unnecessary charm-helpers

47. By Edward Hope-Morley

tell amulet to deploy on trusty (default is precise)

48. By Edward Hope-Morley

fix amulet test

49. By Edward Hope-Morley

synced charmhelpers

50. By Edward Hope-Morley

allow retries when adding node

51. By Edward Hope-Morley

 * Fixed Makefile amulet test filename
 * Synced charm-helpers python-six deps
 * Synced charm-helpers test deps

52. By Edward Hope-Morley

added precise and trusty amulet

53. By Edward Hope-Morley

switch makefile rules to names that juju ci (hopefully) understands

54. By Edward Hope-Morley

ensure apt update prior to install

55. By Edward Hope-Morley

added venv for tests and lint


Preview Diff

1=== added file 'Makefile'
2--- Makefile 1970-01-01 00:00:00 +0000
3+++ Makefile 2014-12-11 11:30:34 +0000
4@@ -0,0 +1,30 @@
5+#!/usr/bin/make
6+PYTHON := /usr/bin/env python
7+
8+lint:
9+ @flake8 --exclude hooks/charmhelpers hooks unit_tests tests
10+ @charm proof
11+
12+test:
13+ @echo Starting Amulet tests...
14+ # coreycb note: The -v should only be temporary until Amulet sends
15+ # raise_status() messages to stderr:
16+ # https://bugs.launchpad.net/amulet/+bug/1320357
17+ @juju test -v -p AMULET_HTTP_PROXY --timeout 900 \
18+ 00-setup 100-deploy
19+
20+unit_test:
21+ @echo Starting unit tests...
22+ @$(PYTHON) /usr/bin/nosetests --nologcapture --with-coverage unit_tests
23+
24+bin/charm_helpers_sync.py:
25+ @mkdir -p bin
26+ @bzr cat lp:charm-helpers/tools/charm_helpers_sync/charm_helpers_sync.py \
27+ > bin/charm_helpers_sync.py
28+
29+sync: bin/charm_helpers_sync.py
30+ @$(PYTHON) bin/charm_helpers_sync.py -c charm-helpers-hooks.yaml
31+
32+publish: lint unit_test
33+ bzr push lp:charms/jenkins
34+ bzr push lp:charms/trusty/jenkins
35
36=== added directory 'bin'
37=== added file 'bin/charm_helpers_sync.py'
38--- bin/charm_helpers_sync.py 1970-01-01 00:00:00 +0000
39+++ bin/charm_helpers_sync.py 2014-12-11 11:30:34 +0000
40@@ -0,0 +1,225 @@
41+#!/usr/bin/python
42+#
43+# Copyright 2013 Canonical Ltd.
44+
45+# Authors:
46+# Adam Gandelman <adamg@ubuntu.com>
47+#
48+
49+import logging
50+import optparse
51+import os
52+import subprocess
53+import shutil
54+import sys
55+import tempfile
56+import yaml
57+
58+from fnmatch import fnmatch
59+
60+CHARM_HELPERS_BRANCH = 'lp:charm-helpers'
61+
62+
63+def parse_config(conf_file):
64+ if not os.path.isfile(conf_file):
65+ logging.error('Invalid config file: %s.' % conf_file)
66+ return False
67+ return yaml.load(open(conf_file).read())
68+
69+
70+def clone_helpers(work_dir, branch):
71+ dest = os.path.join(work_dir, 'charm-helpers')
72+ logging.info('Checking out %s to %s.' % (branch, dest))
73+ cmd = ['bzr', 'checkout', '--lightweight', branch, dest]
74+ subprocess.check_call(cmd)
75+ return dest
76+
77+
78+def _module_path(module):
79+ return os.path.join(*module.split('.'))
80+
81+
82+def _src_path(src, module):
83+ return os.path.join(src, 'charmhelpers', _module_path(module))
84+
85+
86+def _dest_path(dest, module):
87+ return os.path.join(dest, _module_path(module))
88+
89+
90+def _is_pyfile(path):
91+ return os.path.isfile(path + '.py')
92+
93+
94+def ensure_init(path):
95+ '''
96+ ensure directories leading up to path are importable, omitting
97+ parent directory, eg path='/hooks/helpers/foo'/:
98+ hooks/
99+ hooks/helpers/__init__.py
100+ hooks/helpers/foo/__init__.py
101+ '''
102+ for d, dirs, files in os.walk(os.path.join(*path.split('/')[:2])):
103+ _i = os.path.join(d, '__init__.py')
104+ if not os.path.exists(_i):
105+ logging.info('Adding missing __init__.py: %s' % _i)
106+ open(_i, 'wb').close()
107+
108+
109+def sync_pyfile(src, dest):
110+ src = src + '.py'
111+ src_dir = os.path.dirname(src)
112+ logging.info('Syncing pyfile: %s -> %s.' % (src, dest))
113+ if not os.path.exists(dest):
114+ os.makedirs(dest)
115+ shutil.copy(src, dest)
116+ if os.path.isfile(os.path.join(src_dir, '__init__.py')):
117+ shutil.copy(os.path.join(src_dir, '__init__.py'),
118+ dest)
119+ ensure_init(dest)
120+
121+
122+def get_filter(opts=None):
123+ opts = opts or []
124+ if 'inc=*' in opts:
125+ # do not filter any files, include everything
126+ return None
127+
128+ def _filter(dir, ls):
129+ incs = [opt.split('=').pop() for opt in opts if 'inc=' in opt]
130+ _filter = []
131+ for f in ls:
132+ _f = os.path.join(dir, f)
133+
134+ if not os.path.isdir(_f) and not _f.endswith('.py') and incs:
135+ if True not in [fnmatch(_f, inc) for inc in incs]:
136+ logging.debug('Not syncing %s, does not match include '
137+ 'filters (%s)' % (_f, incs))
138+ _filter.append(f)
139+ else:
140+ logging.debug('Including file, which matches include '
141+ 'filters (%s): %s' % (incs, _f))
142+ elif (os.path.isfile(_f) and not _f.endswith('.py')):
143+ logging.debug('Not syncing file: %s' % f)
144+ _filter.append(f)
145+ elif (os.path.isdir(_f) and not
146+ os.path.isfile(os.path.join(_f, '__init__.py'))):
147+ logging.debug('Not syncing directory: %s' % f)
148+ _filter.append(f)
149+ return _filter
150+ return _filter
151+
152+
153+def sync_directory(src, dest, opts=None):
154+ if os.path.exists(dest):
155+ logging.debug('Removing existing directory: %s' % dest)
156+ shutil.rmtree(dest)
157+ logging.info('Syncing directory: %s -> %s.' % (src, dest))
158+
159+ shutil.copytree(src, dest, ignore=get_filter(opts))
160+ ensure_init(dest)
161+
162+
163+def sync(src, dest, module, opts=None):
164+ if os.path.isdir(_src_path(src, module)):
165+ sync_directory(_src_path(src, module), _dest_path(dest, module), opts)
166+ elif _is_pyfile(_src_path(src, module)):
167+ sync_pyfile(_src_path(src, module),
168+ os.path.dirname(_dest_path(dest, module)))
169+ else:
170+ logging.warn('Could not sync: %s. Neither a pyfile nor directory, '
171+ 'does it even exist?' % module)
172+
173+
174+def parse_sync_options(options):
175+ if not options:
176+ return []
177+ return options.split(',')
178+
179+
180+def extract_options(inc, global_options=None):
181+ global_options = global_options or []
182+ if global_options and isinstance(global_options, basestring):
183+ global_options = [global_options]
184+ if '|' not in inc:
185+ return (inc, global_options)
186+ inc, opts = inc.split('|')
187+ return (inc, parse_sync_options(opts) + global_options)
188+
189+
190+def sync_helpers(include, src, dest, options=None):
191+ if not os.path.isdir(dest):
192+ os.makedirs(dest)
193+
194+ global_options = parse_sync_options(options)
195+
196+ for inc in include:
197+ if isinstance(inc, str):
198+ inc, opts = extract_options(inc, global_options)
199+ sync(src, dest, inc, opts)
200+ elif isinstance(inc, dict):
201+ # could also do nested dicts here.
202+ for k, v in inc.iteritems():
203+ if isinstance(v, list):
204+ for m in v:
205+ inc, opts = extract_options(m, global_options)
206+ sync(src, dest, '%s.%s' % (k, inc), opts)
207+
208+if __name__ == '__main__':
209+ parser = optparse.OptionParser()
210+ parser.add_option('-c', '--config', action='store', dest='config',
211+ default=None, help='helper config file')
212+ parser.add_option('-D', '--debug', action='store_true', dest='debug',
213+ default=False, help='debug')
214+ parser.add_option('-b', '--branch', action='store', dest='branch',
215+ help='charm-helpers bzr branch (overrides config)')
216+ parser.add_option('-d', '--destination', action='store', dest='dest_dir',
217+ help='sync destination dir (overrides config)')
218+ (opts, args) = parser.parse_args()
219+
220+ if opts.debug:
221+ logging.basicConfig(level=logging.DEBUG)
222+ else:
223+ logging.basicConfig(level=logging.INFO)
224+
225+ if opts.config:
226+ logging.info('Loading charm helper config from %s.' % opts.config)
227+ config = parse_config(opts.config)
228+ if not config:
229+ logging.error('Could not parse config from %s.' % opts.config)
230+ sys.exit(1)
231+ else:
232+ config = {}
233+
234+ if 'branch' not in config:
235+ config['branch'] = CHARM_HELPERS_BRANCH
236+ if opts.branch:
237+ config['branch'] = opts.branch
238+ if opts.dest_dir:
239+ config['destination'] = opts.dest_dir
240+
241+ if 'destination' not in config:
242+ logging.error('No destination dir. specified as option or config.')
243+ sys.exit(1)
244+
245+ if 'include' not in config:
246+ if not args:
247+ logging.error('No modules to sync specified as option or config.')
248+ sys.exit(1)
249+ config['include'] = []
250+ [config['include'].append(a) for a in args]
251+
252+ sync_options = None
253+ if 'options' in config:
254+ sync_options = config['options']
255+ tmpd = tempfile.mkdtemp()
256+ try:
257+ checkout = clone_helpers(tmpd, config['branch'])
258+ sync_helpers(config['include'], checkout, config['destination'],
259+ options=sync_options)
260+ except Exception, e:
261+ logging.error("Could not sync: %s" % e)
262+ raise e
263+ finally:
264+ logging.debug('Cleaning up %s' % tmpd)
265+ shutil.rmtree(tmpd)
266
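As a side note on the sync script above: its `_module_path`/`_src_path` helpers turn the dotted module names from charm-helpers-hooks.yaml (e.g. `contrib.openstack`) into filesystem paths under a top-level `charmhelpers` package. A minimal standalone sketch of that mapping (illustrative only, not part of the charm):

```python
import os


def module_path(module):
    # 'contrib.openstack' -> 'contrib/openstack' (OS-specific separator),
    # mirroring _module_path in charm_helpers_sync.py above.
    return os.path.join(*module.split('.'))


def src_path(src, module):
    # The source checkout keeps helpers under a 'charmhelpers' package,
    # so 'core' in /tmp/ch resolves to /tmp/ch/charmhelpers/core.
    return os.path.join(src, 'charmhelpers', module_path(module))
```

This is why an include entry like `contrib.openstack|inc=*` syncs the whole `charmhelpers/contrib/openstack` directory from the branch checkout.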
267=== added file 'charm-helpers-hooks.yaml'
268--- charm-helpers-hooks.yaml 1970-01-01 00:00:00 +0000
269+++ charm-helpers-hooks.yaml 2014-12-11 11:30:34 +0000
270@@ -0,0 +1,8 @@
271+branch: lp:charm-helpers
272+destination: hooks/charmhelpers
273+include:
274+ - __init__
275+ - core
276+ - contrib.openstack|inc=*
277+ - fetch
278+ - payload.execd
279
280=== modified file 'config.yaml'
281--- config.yaml 2014-08-14 19:53:02 +0000
282+++ config.yaml 2014-12-11 11:30:34 +0000
283@@ -17,9 +17,9 @@
284 slave nodes so please don't change in Jenkins.
285 password:
286 type: string
287+ default: ""
288 description: Admin user password - used to manage
289 slave nodes so please don't change in Jenkins.
290- default:
291 plugins:
292 type: string
293 default: ""
294
295=== added directory 'hooks/charmhelpers'
296=== added file 'hooks/charmhelpers/__init__.py'
297--- hooks/charmhelpers/__init__.py 1970-01-01 00:00:00 +0000
298+++ hooks/charmhelpers/__init__.py 2014-12-11 11:30:34 +0000
299@@ -0,0 +1,22 @@
300+# Bootstrap charm-helpers, installing its dependencies if necessary using
301+# only standard libraries.
302+import subprocess
303+import sys
304+
305+try:
306+ import six # flake8: noqa
307+except ImportError:
308+ if sys.version_info.major == 2:
309+ subprocess.check_call(['apt-get', 'install', '-y', 'python-six'])
310+ else:
311+ subprocess.check_call(['apt-get', 'install', '-y', 'python3-six'])
312+ import six # flake8: noqa
313+
314+try:
315+ import yaml # flake8: noqa
316+except ImportError:
317+ if sys.version_info.major == 2:
318+ subprocess.check_call(['apt-get', 'install', '-y', 'python-yaml'])
319+ else:
320+ subprocess.check_call(['apt-get', 'install', '-y', 'python3-yaml'])
321+ import yaml # flake8: noqa
322
323=== added directory 'hooks/charmhelpers/contrib'
324=== added file 'hooks/charmhelpers/contrib/__init__.py'
325=== added directory 'hooks/charmhelpers/contrib/openstack'
326=== added file 'hooks/charmhelpers/contrib/openstack/__init__.py'
327=== added file 'hooks/charmhelpers/contrib/openstack/alternatives.py'
328--- hooks/charmhelpers/contrib/openstack/alternatives.py 1970-01-01 00:00:00 +0000
329+++ hooks/charmhelpers/contrib/openstack/alternatives.py 2014-12-11 11:30:34 +0000
330@@ -0,0 +1,17 @@
331+''' Helper for managing alternatives for file conflict resolution '''
332+
333+import subprocess
334+import shutil
335+import os
336+
337+
338+def install_alternative(name, target, source, priority=50):
339+ ''' Install alternative configuration '''
340+ if (os.path.exists(target) and not os.path.islink(target)):
341+ # Move existing file/directory away before installing
342+ shutil.move(target, '{}.bak'.format(target))
343+ cmd = [
344+ 'update-alternatives', '--force', '--install',
345+ target, name, source, str(priority)
346+ ]
347+ subprocess.check_call(cmd)
348
349=== added directory 'hooks/charmhelpers/contrib/openstack/amulet'
350=== added file 'hooks/charmhelpers/contrib/openstack/amulet/__init__.py'
351=== added file 'hooks/charmhelpers/contrib/openstack/amulet/deployment.py'
352--- hooks/charmhelpers/contrib/openstack/amulet/deployment.py 1970-01-01 00:00:00 +0000
353+++ hooks/charmhelpers/contrib/openstack/amulet/deployment.py 2014-12-11 11:30:34 +0000
354@@ -0,0 +1,92 @@
355+import six
356+from charmhelpers.contrib.amulet.deployment import (
357+ AmuletDeployment
358+)
359+
360+
361+class OpenStackAmuletDeployment(AmuletDeployment):
362+ """OpenStack amulet deployment.
363+
364+ This class inherits from AmuletDeployment and has additional support
365+ that is specifically for use by OpenStack charms.
366+ """
367+
368+ def __init__(self, series=None, openstack=None, source=None, stable=True):
369+ """Initialize the deployment environment."""
370+ super(OpenStackAmuletDeployment, self).__init__(series)
371+ self.openstack = openstack
372+ self.source = source
373+ self.stable = stable
374+ # Note(coreycb): this needs to be changed when new next branches come
375+ # out.
376+ self.current_next = "trusty"
377+
378+ def _determine_branch_locations(self, other_services):
379+ """Determine the branch locations for the other services.
380+
381+ Determine if the local branch being tested is derived from its
382+ stable or next (dev) branch, and based on this, use the corresponding
383+ stable or next branches for the other_services."""
384+ base_charms = ['mysql', 'mongodb', 'rabbitmq-server']
385+
386+ if self.stable:
387+ for svc in other_services:
388+ temp = 'lp:charms/{}'
389+ svc['location'] = temp.format(svc['name'])
390+ else:
391+ for svc in other_services:
392+ if svc['name'] in base_charms:
393+ temp = 'lp:charms/{}'
394+ svc['location'] = temp.format(svc['name'])
395+ else:
396+ temp = 'lp:~openstack-charmers/charms/{}/{}/next'
397+ svc['location'] = temp.format(self.current_next,
398+ svc['name'])
399+ return other_services
400+
401+ def _add_services(self, this_service, other_services):
402+ """Add services to the deployment and set openstack-origin/source."""
403+ other_services = self._determine_branch_locations(other_services)
404+
405+ super(OpenStackAmuletDeployment, self)._add_services(this_service,
406+ other_services)
407+
408+ services = other_services
409+ services.append(this_service)
410+ use_source = ['mysql', 'mongodb', 'rabbitmq-server', 'ceph',
411+ 'ceph-osd', 'ceph-radosgw']
412+
413+ if self.openstack:
414+ for svc in services:
415+ if svc['name'] not in use_source:
416+ config = {'openstack-origin': self.openstack}
417+ self.d.configure(svc['name'], config)
418+
419+ if self.source:
420+ for svc in services:
421+ if svc['name'] in use_source:
422+ config = {'source': self.source}
423+ self.d.configure(svc['name'], config)
424+
425+ def _configure_services(self, configs):
426+ """Configure all of the services."""
427+ for service, config in six.iteritems(configs):
428+ self.d.configure(service, config)
429+
430+ def _get_openstack_release(self):
431+ """Get openstack release.
432+
433+ Return an integer representing the enum value of the openstack
434+ release.
435+ """
436+ (self.precise_essex, self.precise_folsom, self.precise_grizzly,
437+ self.precise_havana, self.precise_icehouse,
438+ self.trusty_icehouse) = range(6)
439+ releases = {
440+ ('precise', None): self.precise_essex,
441+ ('precise', 'cloud:precise-folsom'): self.precise_folsom,
442+ ('precise', 'cloud:precise-grizzly'): self.precise_grizzly,
443+ ('precise', 'cloud:precise-havana'): self.precise_havana,
444+ ('precise', 'cloud:precise-icehouse'): self.precise_icehouse,
445+ ('trusty', None): self.trusty_icehouse}
446+ return releases[(self.series, self.openstack)]
447
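The `_get_openstack_release` method above encodes each (series, openstack-origin) pair as a small integer enum built with `range(6)`. A standalone sketch of the same lookup, under the assumption that only the pairs listed in the diff are valid:

```python
# Enum values mirror the tuple-unpacked range(6) in _get_openstack_release.
(PRECISE_ESSEX, PRECISE_FOLSOM, PRECISE_GRIZZLY,
 PRECISE_HAVANA, PRECISE_ICEHOUSE, TRUSTY_ICEHOUSE) = range(6)

RELEASES = {
    ('precise', None): PRECISE_ESSEX,
    ('precise', 'cloud:precise-folsom'): PRECISE_FOLSOM,
    ('precise', 'cloud:precise-grizzly'): PRECISE_GRIZZLY,
    ('precise', 'cloud:precise-havana'): PRECISE_HAVANA,
    ('precise', 'cloud:precise-icehouse'): PRECISE_ICEHOUSE,
    ('trusty', None): TRUSTY_ICEHOUSE,
}


def get_openstack_release(series, openstack=None):
    # No openstack-origin means the series' default release:
    # essex on precise, icehouse on trusty.
    return RELEASES[(series, openstack)]
```

An unlisted pair raises KeyError, which is how a test run on an unsupported combination fails fast.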
448=== added file 'hooks/charmhelpers/contrib/openstack/amulet/utils.py'
449--- hooks/charmhelpers/contrib/openstack/amulet/utils.py 1970-01-01 00:00:00 +0000
450+++ hooks/charmhelpers/contrib/openstack/amulet/utils.py 2014-12-11 11:30:34 +0000
451@@ -0,0 +1,278 @@
452+import logging
453+import os
454+import time
455+import urllib
456+
457+import glanceclient.v1.client as glance_client
458+import keystoneclient.v2_0 as keystone_client
459+import novaclient.v1_1.client as nova_client
460+
461+import six
462+
463+from charmhelpers.contrib.amulet.utils import (
464+ AmuletUtils
465+)
466+
467+DEBUG = logging.DEBUG
468+ERROR = logging.ERROR
469+
470+
471+class OpenStackAmuletUtils(AmuletUtils):
472+ """OpenStack amulet utilities.
473+
474+ This class inherits from AmuletUtils and has additional support
475+ that is specifically for use by OpenStack charms.
476+ """
477+
478+ def __init__(self, log_level=ERROR):
479+ """Initialize the deployment environment."""
480+ super(OpenStackAmuletUtils, self).__init__(log_level)
481+
482+ def validate_endpoint_data(self, endpoints, admin_port, internal_port,
483+ public_port, expected):
484+ """Validate endpoint data.
485+
486+ Validate actual endpoint data vs expected endpoint data. The ports
487+ are used to find the matching endpoint.
488+ """
489+ found = False
490+ for ep in endpoints:
491+ self.log.debug('endpoint: {}'.format(repr(ep)))
492+ if (admin_port in ep.adminurl and
493+ internal_port in ep.internalurl and
494+ public_port in ep.publicurl):
495+ found = True
496+ actual = {'id': ep.id,
497+ 'region': ep.region,
498+ 'adminurl': ep.adminurl,
499+ 'internalurl': ep.internalurl,
500+ 'publicurl': ep.publicurl,
501+ 'service_id': ep.service_id}
502+ ret = self._validate_dict_data(expected, actual)
503+ if ret:
504+ return 'unexpected endpoint data - {}'.format(ret)
505+
506+ if not found:
507+ return 'endpoint not found'
508+
509+ def validate_svc_catalog_endpoint_data(self, expected, actual):
510+ """Validate service catalog endpoint data.
511+
512+ Validate a list of actual service catalog endpoints vs a list of
513+ expected service catalog endpoints.
514+ """
515+ self.log.debug('actual: {}'.format(repr(actual)))
516+ for k, v in six.iteritems(expected):
517+ if k in actual:
518+ ret = self._validate_dict_data(expected[k][0], actual[k][0])
519+ if ret:
520+ return self.endpoint_error(k, ret)
521+ else:
522+ return "endpoint {} does not exist".format(k)
523+ return ret
524+
525+ def validate_tenant_data(self, expected, actual):
526+ """Validate tenant data.
527+
528+ Validate a list of actual tenant data vs list of expected tenant
529+ data.
530+ """
531+ self.log.debug('actual: {}'.format(repr(actual)))
532+ for e in expected:
533+ found = False
534+ for act in actual:
535+ a = {'enabled': act.enabled, 'description': act.description,
536+ 'name': act.name, 'id': act.id}
537+ if e['name'] == a['name']:
538+ found = True
539+ ret = self._validate_dict_data(e, a)
540+ if ret:
541+ return "unexpected tenant data - {}".format(ret)
542+ if not found:
543+ return "tenant {} does not exist".format(e['name'])
544+ return ret
545+
546+ def validate_role_data(self, expected, actual):
547+ """Validate role data.
548+
549+ Validate a list of actual role data vs a list of expected role
550+ data.
551+ """
552+ self.log.debug('actual: {}'.format(repr(actual)))
553+ for e in expected:
554+ found = False
555+ for act in actual:
556+ a = {'name': act.name, 'id': act.id}
557+ if e['name'] == a['name']:
558+ found = True
559+ ret = self._validate_dict_data(e, a)
560+ if ret:
561+ return "unexpected role data - {}".format(ret)
562+ if not found:
563+ return "role {} does not exist".format(e['name'])
564+ return ret
565+
566+ def validate_user_data(self, expected, actual):
567+ """Validate user data.
568+
569+ Validate a list of actual user data vs a list of expected user
570+ data.
571+ """
572+ self.log.debug('actual: {}'.format(repr(actual)))
573+ for e in expected:
574+ found = False
575+ for act in actual:
576+ a = {'enabled': act.enabled, 'name': act.name,
577+ 'email': act.email, 'tenantId': act.tenantId,
578+ 'id': act.id}
579+ if e['name'] == a['name']:
580+ found = True
581+ ret = self._validate_dict_data(e, a)
582+ if ret:
583+ return "unexpected user data - {}".format(ret)
584+ if not found:
585+ return "user {} does not exist".format(e['name'])
586+ return ret
587+
588+ def validate_flavor_data(self, expected, actual):
589+ """Validate flavor data.
590+
591+ Validate a list of actual flavors vs a list of expected flavors.
592+ """
593+ self.log.debug('actual: {}'.format(repr(actual)))
594+ act = [a.name for a in actual]
595+ return self._validate_list_data(expected, act)
596+
597+ def tenant_exists(self, keystone, tenant):
598+ """Return True if tenant exists."""
599+ return tenant in [t.name for t in keystone.tenants.list()]
600+
601+ def authenticate_keystone_admin(self, keystone_sentry, user, password,
602+ tenant):
603+ """Authenticates admin user with the keystone admin endpoint."""
604+ unit = keystone_sentry
605+ service_ip = unit.relation('shared-db',
606+ 'mysql:shared-db')['private-address']
607+ ep = "http://{}:35357/v2.0".format(service_ip.strip().decode('utf-8'))
608+ return keystone_client.Client(username=user, password=password,
609+ tenant_name=tenant, auth_url=ep)
610+
611+ def authenticate_keystone_user(self, keystone, user, password, tenant):
612+ """Authenticates a regular user with the keystone public endpoint."""
613+ ep = keystone.service_catalog.url_for(service_type='identity',
614+ endpoint_type='publicURL')
615+ return keystone_client.Client(username=user, password=password,
616+ tenant_name=tenant, auth_url=ep)
617+
618+ def authenticate_glance_admin(self, keystone):
619+ """Authenticates admin user with glance."""
620+ ep = keystone.service_catalog.url_for(service_type='image',
621+ endpoint_type='adminURL')
622+ return glance_client.Client(ep, token=keystone.auth_token)
623+
624+ def authenticate_nova_user(self, keystone, user, password, tenant):
625+ """Authenticates a regular user with nova-api."""
626+ ep = keystone.service_catalog.url_for(service_type='identity',
627+ endpoint_type='publicURL')
628+ return nova_client.Client(username=user, api_key=password,
629+ project_id=tenant, auth_url=ep)
630+
631+ def create_cirros_image(self, glance, image_name):
632+ """Download the latest cirros image and upload it to glance."""
633+ http_proxy = os.getenv('AMULET_HTTP_PROXY')
634+ self.log.debug('AMULET_HTTP_PROXY: {}'.format(http_proxy))
635+ if http_proxy:
636+ proxies = {'http': http_proxy}
637+ opener = urllib.FancyURLopener(proxies)
638+ else:
639+ opener = urllib.FancyURLopener()
640+
641+ f = opener.open("http://download.cirros-cloud.net/version/released")
642+ version = f.read().strip()
643+ cirros_img = "cirros-{}-x86_64-disk.img".format(version)
644+ local_path = os.path.join('tests', cirros_img)
645+
646+ if not os.path.exists(local_path):
647+ cirros_url = "http://{}/{}/{}".format("download.cirros-cloud.net",
648+ version, cirros_img)
649+ opener.retrieve(cirros_url, local_path)
650+ f.close()
651+
652+ with open(local_path) as f:
653+ image = glance.images.create(name=image_name, is_public=True,
654+ disk_format='qcow2',
655+ container_format='bare', data=f)
656+ count = 1
657+ status = image.status
658+ while status != 'active' and count < 10:
659+ time.sleep(3)
660+ image = glance.images.get(image.id)
661+ status = image.status
662+ self.log.debug('image status: {}'.format(status))
663+ count += 1
664+
665+ if status != 'active':
666+ self.log.error('image creation timed out')
667+ return None
668+
669+ return image
670+
671+ def delete_image(self, glance, image):
672+ """Delete the specified image."""
673+ num_before = len(list(glance.images.list()))
674+ glance.images.delete(image)
675+
676+ count = 1
677+ num_after = len(list(glance.images.list()))
678+ while num_after != (num_before - 1) and count < 10:
679+ time.sleep(3)
680+ num_after = len(list(glance.images.list()))
681+ self.log.debug('number of images: {}'.format(num_after))
682+ count += 1
683+
684+ if num_after != (num_before - 1):
685+ self.log.error('image deletion timed out')
686+ return False
687+
688+ return True
689+
690+ def create_instance(self, nova, image_name, instance_name, flavor):
691+ """Create the specified instance."""
692+ image = nova.images.find(name=image_name)
693+ flavor = nova.flavors.find(name=flavor)
694+ instance = nova.servers.create(name=instance_name, image=image,
695+ flavor=flavor)
696+
697+ count = 1
698+ status = instance.status
699+ while status != 'ACTIVE' and count < 60:
700+ time.sleep(3)
701+ instance = nova.servers.get(instance.id)
702+ status = instance.status
703+ self.log.debug('instance status: {}'.format(status))
704+ count += 1
705+
706+ if status != 'ACTIVE':
707+ self.log.error('instance creation timed out')
708+ return None
709+
710+ return instance
711+
712+ def delete_instance(self, nova, instance):
713+ """Delete the specified instance."""
714+ num_before = len(list(nova.servers.list()))
715+ nova.servers.delete(instance)
716+
717+ count = 1
718+ num_after = len(list(nova.servers.list()))
719+ while num_after != (num_before - 1) and count < 10:
720+ time.sleep(3)
721+ num_after = len(list(nova.servers.list()))
722+ self.log.debug('number of instances: {}'.format(num_after))
723+ count += 1
724+
725+ if num_after != (num_before - 1):
726+ self.log.error('instance deletion timed out')
727+ return False
728+
729+ return True
730
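The create/delete helpers above (create_cirros_image, delete_image, create_instance, delete_instance) all share the same poll-until-done loop: check a condition, sleep a fixed interval, give up after a retry cap. Extracted into a standalone sketch (names are illustrative, not part of charm-helpers):

```python
import time


def wait_until(predicate, retries=10, interval=3):
    # Poll predicate() up to `retries` times, sleeping `interval`
    # seconds between attempts; returns True as soon as it succeeds,
    # False if it never does -- mirroring the count/time.sleep loops
    # in the amulet utils above.
    for _ in range(retries):
        if predicate():
            return True
        time.sleep(interval)
    return False
```

Usage in the style of the helpers above would be something like `wait_until(lambda: glance.images.get(image.id).status == 'active')`, logging a timeout error when it returns False.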
731=== added file 'hooks/charmhelpers/contrib/openstack/context.py'
732--- hooks/charmhelpers/contrib/openstack/context.py 1970-01-01 00:00:00 +0000
733+++ hooks/charmhelpers/contrib/openstack/context.py 2014-12-11 11:30:34 +0000
734@@ -0,0 +1,1017 @@
735+import json
736+import os
737+import time
738+from base64 import b64decode
739+from subprocess import check_call
740+
741+import six
742+
743+from charmhelpers.fetch import (
744+ apt_install,
745+ filter_installed_packages,
746+)
747+from charmhelpers.core.hookenv import (
748+ config,
749+ is_relation_made,
750+ local_unit,
751+ log,
752+ relation_get,
753+ relation_ids,
754+ related_units,
755+ relation_set,
756+ unit_get,
757+ unit_private_ip,
758+ DEBUG,
759+ INFO,
760+ WARNING,
761+ ERROR,
762+)
763+from charmhelpers.core.host import (
764+ mkdir,
765+ write_file,
766+)
767+from charmhelpers.contrib.hahelpers.cluster import (
768+ determine_apache_port,
769+ determine_api_port,
770+ https,
771+ is_clustered,
772+)
773+from charmhelpers.contrib.hahelpers.apache import (
774+ get_cert,
775+ get_ca_cert,
776+ install_ca_cert,
777+)
778+from charmhelpers.contrib.openstack.neutron import (
779+ neutron_plugin_attribute,
780+)
781+from charmhelpers.contrib.network.ip import (
782+ get_address_in_network,
783+ get_ipv6_addr,
784+ get_netmask_for_address,
785+ format_ipv6_addr,
786+ is_address_in_network,
787+)
788+from charmhelpers.contrib.openstack.utils import get_host_ip
789+
790+CA_CERT_PATH = '/usr/local/share/ca-certificates/keystone_juju_ca_cert.crt'
791+ADDRESS_TYPES = ['admin', 'internal', 'public']
792+
793+
794+class OSContextError(Exception):
795+ pass
796+
797+
798+def ensure_packages(packages):
799+ """Install but do not upgrade required plugin packages."""
800+ required = filter_installed_packages(packages)
801+ if required:
802+ apt_install(required, fatal=True)
803+
804+
805+def context_complete(ctxt):
806+ _missing = []
807+ for k, v in six.iteritems(ctxt):
808+ if v is None or v == '':
809+ _missing.append(k)
810+
811+ if _missing:
812+ log('Missing required data: %s' % ' '.join(_missing), level=INFO)
813+ return False
814+
815+ return True
816+
817+
818+def config_flags_parser(config_flags):
819+ """Parses config flags string into dict.
820+
821+ The provided config_flags string may be a list of comma-separated values
822+ which themselves may be comma-separated list of values.
823+ """
824+ if config_flags.find('==') >= 0:
825+ log("config_flags is not in expected format (key=value)", level=ERROR)
826+ raise OSContextError
827+
828+ # strip the following from each value.
829+ post_strippers = ' ,'
830+ # we strip any leading/trailing '=' or ' ' from the string then
831+ # split on '='.
832+ split = config_flags.strip(' =').split('=')
833+ limit = len(split)
834+ flags = {}
835+ for i in range(0, limit - 1):
836+ current = split[i]
837+ next = split[i + 1]
838+ vindex = next.rfind(',')
839+ if (i == limit - 2) or (vindex < 0):
840+ value = next
841+ else:
842+ value = next[:vindex]
843+
844+ if i == 0:
845+ key = current
846+ else:
847+ # if this is not the first entry, expect an embedded key.
848+ index = current.rfind(',')
849+ if index < 0:
850+ log("Invalid config value(s) at index %s" % (i), level=ERROR)
851+ raise OSContextError
852+ key = current[index + 1:]
853+
854+ # Add to collection.
855+ flags[key.strip(post_strippers)] = value.rstrip(post_strippers)
856+
857+ return flags
858+
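The parsing logic above can be exercised outside the charm with a trimmed copy (logging and the charm-specific OSContextError replaced by ValueError; behaviour is assumed to match the diff). Note how a value that is itself a comma-separated list is kept intact:

```python
# Stand-alone sketch of config_flags_parser() from the diff above.
def config_flags_parser(config_flags):
    if '==' in config_flags:
        raise ValueError("config_flags is not in expected format (key=value)")
    post_strippers = ' ,'
    # strip leading/trailing '=' or ' ', then split on '='.
    split = config_flags.strip(' =').split('=')
    limit = len(split)
    flags = {}
    for i in range(0, limit - 1):
        current = split[i]
        nxt = split[i + 1]
        vindex = nxt.rfind(',')
        if (i == limit - 2) or (vindex < 0):
            value = nxt
        else:
            value = nxt[:vindex]
        if i == 0:
            key = current
        else:
            # if this is not the first entry, expect an embedded key.
            index = current.rfind(',')
            if index < 0:
                raise ValueError("Invalid config value(s) at index %s" % i)
            key = current[index + 1:]
        flags[key.strip(post_strippers)] = value.rstrip(post_strippers)
    return flags

print(config_flags_parser('key1=value1,key2=subval1,subval2'))
# {'key1': 'value1', 'key2': 'subval1,subval2'}
```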
859+
860+class OSContextGenerator(object):
861+ """Base class for all context generators."""
862+ interfaces = []
863+
864+ def __call__(self):
865+ raise NotImplementedError
866+
867+
868+class SharedDBContext(OSContextGenerator):
869+ interfaces = ['shared-db']
870+
871+ def __init__(self,
872+ database=None, user=None, relation_prefix=None, ssl_dir=None):
873+ """Allows inspecting relation for settings prefixed with
874+ relation_prefix. This is useful for parsing access for multiple
875+ databases returned via the shared-db interface (e.g. nova_password,
876+ quantum_password)
877+ """
878+ self.relation_prefix = relation_prefix
879+ self.database = database
880+ self.user = user
881+ self.ssl_dir = ssl_dir
882+
883+ def __call__(self):
884+ self.database = self.database or config('database')
885+ self.user = self.user or config('database-user')
886+ if None in [self.database, self.user]:
887+ log("Could not generate shared_db context. Missing required charm "
888+ "config options. (database name and user)", level=ERROR)
889+ raise OSContextError
890+
891+ ctxt = {}
892+
893+ # NOTE(jamespage) if mysql charm provides a network upon which
894+ # access to the database should be made, reconfigure relation
895+ # with the service units local address and defer execution
896+ access_network = relation_get('access-network')
897+ if access_network is not None:
898+ if self.relation_prefix is not None:
899+ hostname_key = "{}_hostname".format(self.relation_prefix)
900+ else:
901+ hostname_key = "hostname"
902+ access_hostname = get_address_in_network(access_network,
903+ unit_get('private-address'))
904+ set_hostname = relation_get(attribute=hostname_key,
905+ unit=local_unit())
906+ if set_hostname != access_hostname:
907+ relation_set(relation_settings={hostname_key: access_hostname})
908+ return ctxt # Defer any further hook execution for now....
909+
910+ password_setting = 'password'
911+ if self.relation_prefix:
912+ password_setting = self.relation_prefix + '_password'
913+
914+ for rid in relation_ids('shared-db'):
915+ for unit in related_units(rid):
916+ rdata = relation_get(rid=rid, unit=unit)
917+ host = rdata.get('db_host')
918+ host = format_ipv6_addr(host) or host
919+ ctxt = {
920+ 'database_host': host,
921+ 'database': self.database,
922+ 'database_user': self.user,
923+ 'database_password': rdata.get(password_setting),
924+ 'database_type': 'mysql'
925+ }
926+ if context_complete(ctxt):
927+ db_ssl(rdata, ctxt, self.ssl_dir)
928+ return ctxt
929+ return {}
930+
931+
932+class PostgresqlDBContext(OSContextGenerator):
933+ interfaces = ['pgsql-db']
934+
935+ def __init__(self, database=None):
936+ self.database = database
937+
938+ def __call__(self):
939+ self.database = self.database or config('database')
940+ if self.database is None:
941+ log('Could not generate postgresql_db context. Missing required '
942+ 'charm config options. (database name)', level=ERROR)
943+ raise OSContextError
944+
945+ ctxt = {}
946+ for rid in relation_ids(self.interfaces[0]):
947+ for unit in related_units(rid):
948+ rel_host = relation_get('host', rid=rid, unit=unit)
949+ rel_user = relation_get('user', rid=rid, unit=unit)
950+ rel_passwd = relation_get('password', rid=rid, unit=unit)
951+ ctxt = {'database_host': rel_host,
952+ 'database': self.database,
953+ 'database_user': rel_user,
954+ 'database_password': rel_passwd,
955+ 'database_type': 'postgresql'}
956+ if context_complete(ctxt):
957+ return ctxt
958+
959+ return {}
960+
961+
962+def db_ssl(rdata, ctxt, ssl_dir):
963+ if 'ssl_ca' in rdata and ssl_dir:
964+ ca_path = os.path.join(ssl_dir, 'db-client.ca')
965+ with open(ca_path, 'w') as fh:
966+ fh.write(b64decode(rdata['ssl_ca']))
967+
968+ ctxt['database_ssl_ca'] = ca_path
969+ elif 'ssl_ca' in rdata:
970+ log("Charm not set up for ssl support but ssl ca found", level=INFO)
971+ return ctxt
972+
973+ if 'ssl_cert' in rdata:
974+ cert_path = os.path.join(
975+ ssl_dir, 'db-client.cert')
976+ if not os.path.exists(cert_path):
977+ log("Waiting 1m for ssl client cert validity", level=INFO)
978+ time.sleep(60)
979+
980+ with open(cert_path, 'w') as fh:
981+ fh.write(b64decode(rdata['ssl_cert']))
982+
983+ ctxt['database_ssl_cert'] = cert_path
984+ key_path = os.path.join(ssl_dir, 'db-client.key')
985+ with open(key_path, 'w') as fh:
986+ fh.write(b64decode(rdata['ssl_key']))
987+
988+ ctxt['database_ssl_key'] = key_path
989+
990+ return ctxt
991+
992+
993+class IdentityServiceContext(OSContextGenerator):
994+ interfaces = ['identity-service']
995+
996+ def __call__(self):
997+ log('Generating template context for identity-service', level=DEBUG)
998+ ctxt = {}
999+ for rid in relation_ids('identity-service'):
1000+ for unit in related_units(rid):
1001+ rdata = relation_get(rid=rid, unit=unit)
1002+ serv_host = rdata.get('service_host')
1003+ serv_host = format_ipv6_addr(serv_host) or serv_host
1004+ auth_host = rdata.get('auth_host')
1005+ auth_host = format_ipv6_addr(auth_host) or auth_host
1006+ svc_protocol = rdata.get('service_protocol') or 'http'
1007+ auth_protocol = rdata.get('auth_protocol') or 'http'
1008+ ctxt = {'service_port': rdata.get('service_port'),
1009+ 'service_host': serv_host,
1010+ 'auth_host': auth_host,
1011+ 'auth_port': rdata.get('auth_port'),
1012+ 'admin_tenant_name': rdata.get('service_tenant'),
1013+ 'admin_user': rdata.get('service_username'),
1014+ 'admin_password': rdata.get('service_password'),
1015+ 'service_protocol': svc_protocol,
1016+ 'auth_protocol': auth_protocol}
1017+ if context_complete(ctxt):
1018+ # NOTE(jamespage) this is required for >= icehouse
1019+ # so a missing value just indicates keystone needs
1020+ # upgrading
1021+ ctxt['admin_tenant_id'] = rdata.get('service_tenant_id')
1022+ return ctxt
1023+
1024+ return {}
1025+
1026+
1027+class AMQPContext(OSContextGenerator):
1028+
1029+ def __init__(self, ssl_dir=None, rel_name='amqp', relation_prefix=None):
1030+ self.ssl_dir = ssl_dir
1031+ self.rel_name = rel_name
1032+ self.relation_prefix = relation_prefix
1033+ self.interfaces = [rel_name]
1034+
1035+ def __call__(self):
1036+ log('Generating template context for amqp', level=DEBUG)
1037+ conf = config()
1038+ if self.relation_prefix:
1039+ user_setting = '%s-rabbit-user' % (self.relation_prefix)
1040+ vhost_setting = '%s-rabbit-vhost' % (self.relation_prefix)
1041+ else:
1042+ user_setting = 'rabbit-user'
1043+ vhost_setting = 'rabbit-vhost'
1044+
1045+ try:
1046+ username = conf[user_setting]
1047+ vhost = conf[vhost_setting]
1048+ except KeyError as e:
1049+ log('Could not generate amqp context. Missing required charm '
1050+ 'config options: %s.' % e, level=ERROR)
1051+ raise OSContextError
1052+
1053+ ctxt = {}
1054+ for rid in relation_ids(self.rel_name):
1055+ ha_vip_only = False
1056+ for unit in related_units(rid):
1057+ if relation_get('clustered', rid=rid, unit=unit):
1058+ ctxt['clustered'] = True
1059+ vip = relation_get('vip', rid=rid, unit=unit)
1060+ vip = format_ipv6_addr(vip) or vip
1061+ ctxt['rabbitmq_host'] = vip
1062+ else:
1063+ host = relation_get('private-address', rid=rid, unit=unit)
1064+ host = format_ipv6_addr(host) or host
1065+ ctxt['rabbitmq_host'] = host
1066+
1067+ ctxt.update({
1068+ 'rabbitmq_user': username,
1069+ 'rabbitmq_password': relation_get('password', rid=rid,
1070+ unit=unit),
1071+ 'rabbitmq_virtual_host': vhost,
1072+ })
1073+
1074+ ssl_port = relation_get('ssl_port', rid=rid, unit=unit)
1075+ if ssl_port:
1076+ ctxt['rabbit_ssl_port'] = ssl_port
1077+
1078+ ssl_ca = relation_get('ssl_ca', rid=rid, unit=unit)
1079+ if ssl_ca:
1080+ ctxt['rabbit_ssl_ca'] = ssl_ca
1081+
1082+ if relation_get('ha_queues', rid=rid, unit=unit) is not None:
1083+ ctxt['rabbitmq_ha_queues'] = True
1084+
1085+ ha_vip_only = relation_get('ha-vip-only',
1086+ rid=rid, unit=unit) is not None
1087+
1088+ if context_complete(ctxt):
1089+ if 'rabbit_ssl_ca' in ctxt:
1090+ if not self.ssl_dir:
1091+ log("Charm not set up for ssl support but ssl ca "
1092+ "found", level=INFO)
1093+ break
1094+
1095+ ca_path = os.path.join(
1096+ self.ssl_dir, 'rabbit-client-ca.pem')
1097+ with open(ca_path, 'w') as fh:
1098+ fh.write(b64decode(ctxt['rabbit_ssl_ca']))
1099+ ctxt['rabbit_ssl_ca'] = ca_path
1100+
1101+ # Sufficient information found = break out!
1102+ break
1103+
1104+ # Used for active/active rabbitmq >= grizzly
1105+ if (('clustered' not in ctxt or ha_vip_only) and
1106+ len(related_units(rid)) > 1):
1107+ rabbitmq_hosts = []
1108+ for unit in related_units(rid):
1109+ host = relation_get('private-address', rid=rid, unit=unit)
1110+ host = format_ipv6_addr(host) or host
1111+ rabbitmq_hosts.append(host)
1112+
1113+ ctxt['rabbitmq_hosts'] = ','.join(sorted(rabbitmq_hosts))
1114+
1115+ if not context_complete(ctxt):
1116+ return {}
1117+
1118+ return ctxt
1119+
1120+
1121+class CephContext(OSContextGenerator):
1122+ """Generates context for /etc/ceph/ceph.conf templates."""
1123+ interfaces = ['ceph']
1124+
1125+ def __call__(self):
1126+ if not relation_ids('ceph'):
1127+ return {}
1128+
1129+ log('Generating template context for ceph', level=DEBUG)
1130+ mon_hosts = []
1131+ auth = None
1132+ key = None
1133+ use_syslog = str(config('use-syslog')).lower()
1134+ for rid in relation_ids('ceph'):
1135+ for unit in related_units(rid):
1136+ auth = relation_get('auth', rid=rid, unit=unit)
1137+ key = relation_get('key', rid=rid, unit=unit)
1138+ ceph_pub_addr = relation_get('ceph-public-address', rid=rid,
1139+ unit=unit)
1140+ unit_priv_addr = relation_get('private-address', rid=rid,
1141+ unit=unit)
1142+ ceph_addr = ceph_pub_addr or unit_priv_addr
1143+ ceph_addr = format_ipv6_addr(ceph_addr) or ceph_addr
1144+ mon_hosts.append(ceph_addr)
1145+
1146+ ctxt = {'mon_hosts': ' '.join(sorted(mon_hosts)),
1147+ 'auth': auth,
1148+ 'key': key,
1149+ 'use_syslog': use_syslog}
1150+
1151+ if not os.path.isdir('/etc/ceph'):
1152+ os.mkdir('/etc/ceph')
1153+
1154+ if not context_complete(ctxt):
1155+ return {}
1156+
1157+ ensure_packages(['ceph-common'])
1158+ return ctxt
1159+
1160+
1161+class HAProxyContext(OSContextGenerator):
1162+ """Provides half a context for the haproxy template, which describes
1163+ all peers to be included in the cluster. Each charm needs to include
1164+ its own context generator that describes the port mapping.
1165+ """
1166+ interfaces = ['cluster']
1167+
1168+ def __init__(self, singlenode_mode=False):
1169+ self.singlenode_mode = singlenode_mode
1170+
1171+ def __call__(self):
1172+ if not relation_ids('cluster') and not self.singlenode_mode:
1173+ return {}
1174+
1175+ if config('prefer-ipv6'):
1176+ addr = get_ipv6_addr(exc_list=[config('vip')])[0]
1177+ else:
1178+ addr = get_host_ip(unit_get('private-address'))
1179+
1180+ l_unit = local_unit().replace('/', '-')
1181+ cluster_hosts = {}
1182+
1183+ # NOTE(jamespage): build out map of configured network endpoints
1184+ # and associated backends
1185+ for addr_type in ADDRESS_TYPES:
1186+ cfg_opt = 'os-{}-network'.format(addr_type)
1187+ laddr = get_address_in_network(config(cfg_opt))
1188+ if laddr:
1189+ netmask = get_netmask_for_address(laddr)
1190+ cluster_hosts[laddr] = {'network': "{}/{}".format(laddr,
1191+ netmask),
1192+ 'backends': {l_unit: laddr}}
1193+ for rid in relation_ids('cluster'):
1194+ for unit in related_units(rid):
1195+ _laddr = relation_get('{}-address'.format(addr_type),
1196+ rid=rid, unit=unit)
1197+ if _laddr:
1198+ _unit = unit.replace('/', '-')
1199+ cluster_hosts[laddr]['backends'][_unit] = _laddr
1200+
1201+ # NOTE(jamespage) no split configurations found, just use
1202+ # private addresses
1203+ if not cluster_hosts:
1204+ netmask = get_netmask_for_address(addr)
1205+ cluster_hosts[addr] = {'network': "{}/{}".format(addr, netmask),
1206+ 'backends': {l_unit: addr}}
1207+ for rid in relation_ids('cluster'):
1208+ for unit in related_units(rid):
1209+ _laddr = relation_get('private-address',
1210+ rid=rid, unit=unit)
1211+ if _laddr:
1212+ _unit = unit.replace('/', '-')
1213+ cluster_hosts[addr]['backends'][_unit] = _laddr
1214+
1215+ ctxt = {'frontends': cluster_hosts}
1216+
1217+ if config('haproxy-server-timeout'):
1218+ ctxt['haproxy_server_timeout'] = config('haproxy-server-timeout')
1219+
1220+ if config('haproxy-client-timeout'):
1221+ ctxt['haproxy_client_timeout'] = config('haproxy-client-timeout')
1222+
1223+ if config('prefer-ipv6'):
1224+ ctxt['local_host'] = 'ip6-localhost'
1225+ ctxt['haproxy_host'] = '::'
1226+ ctxt['stat_port'] = ':::8888'
1227+ else:
1228+ ctxt['local_host'] = '127.0.0.1'
1229+ ctxt['haproxy_host'] = '0.0.0.0'
1230+ ctxt['stat_port'] = ':8888'
1231+
1232+ for frontend in cluster_hosts:
1233+ if (len(cluster_hosts[frontend]['backends']) > 1 or
1234+ self.singlenode_mode):
1235+ # Enable haproxy when we have enough peers.
1236+ log('Ensuring haproxy enabled in /etc/default/haproxy.',
1237+ level=DEBUG)
1238+ with open('/etc/default/haproxy', 'w') as out:
1239+ out.write('ENABLED=1\n')
1240+
1241+ return ctxt
1242+
1243+ log('HAProxy context is incomplete, this unit has no peers.',
1244+ level=INFO)
1245+ return {}
1246+
1247+
1248+class ImageServiceContext(OSContextGenerator):
1249+ interfaces = ['image-service']
1250+
1251+ def __call__(self):
1252+ """Obtains the glance API server from the image-service relation.
1253+ Useful in nova and cinder (currently).
1254+ """
1255+ log('Generating template context for image-service.', level=DEBUG)
1256+ rids = relation_ids('image-service')
1257+ if not rids:
1258+ return {}
1259+
1260+ for rid in rids:
1261+ for unit in related_units(rid):
1262+ api_server = relation_get('glance-api-server',
1263+ rid=rid, unit=unit)
1264+ if api_server:
1265+ return {'glance_api_servers': api_server}
1266+
1267+ log("ImageService context is incomplete. Missing required relation "
1268+ "data.", level=INFO)
1269+ return {}
1270+
1271+
1272+class ApacheSSLContext(OSContextGenerator):
1273+ """Generates a context for an apache vhost configuration that configures
1274+ HTTPS reverse proxying for one or many endpoints. Generated context
1275+ looks something like::
1276+
1277+ {
1278+ 'namespace': 'cinder',
1279+ 'private_address': 'iscsi.mycinderhost.com',
1280+ 'endpoints': [(8776, 8766), (8777, 8767)]
1281+ }
1282+
1283+ The endpoints list consists of tuples mapping external ports
1284+ to internal ports.
1285+ """
1286+ interfaces = ['https']
1287+
1288+ # charms should inherit this context and set external ports
1289+ # and service namespace accordingly.
1290+ external_ports = []
1291+ service_namespace = None
1292+
1293+ def enable_modules(self):
1294+ cmd = ['a2enmod', 'ssl', 'proxy', 'proxy_http']
1295+ check_call(cmd)
1296+
1297+ def configure_cert(self, cn=None):
1298+ ssl_dir = os.path.join('/etc/apache2/ssl/', self.service_namespace)
1299+ mkdir(path=ssl_dir)
1300+ cert, key = get_cert(cn)
1301+ if cn:
1302+ cert_filename = 'cert_{}'.format(cn)
1303+ key_filename = 'key_{}'.format(cn)
1304+ else:
1305+ cert_filename = 'cert'
1306+ key_filename = 'key'
1307+
1308+ write_file(path=os.path.join(ssl_dir, cert_filename),
1309+ content=b64decode(cert))
1310+ write_file(path=os.path.join(ssl_dir, key_filename),
1311+ content=b64decode(key))
1312+
1313+ def configure_ca(self):
1314+ ca_cert = get_ca_cert()
1315+ if ca_cert:
1316+ install_ca_cert(b64decode(ca_cert))
1317+
1318+ def canonical_names(self):
1319+ """Figure out which canonical names clients will use to access this
1320+ service.
1320+ """
1321+ cns = []
1322+ for r_id in relation_ids('identity-service'):
1323+ for unit in related_units(r_id):
1324+ rdata = relation_get(rid=r_id, unit=unit)
1325+ for k in rdata:
1326+ if k.startswith('ssl_key_'):
1327+ cns.append(k[len('ssl_key_'):])
1328+
1329+ return sorted(list(set(cns)))
1330+
1331+ def get_network_addresses(self):
1332+ """For each network configured, return corresponding address and vip
1333+ (if available).
1334+
1335+ Returns a list of tuples of the form:
1336+
1337+ [(address_in_net_a, vip_in_net_a),
1338+ (address_in_net_b, vip_in_net_b),
1339+ ...]
1340+
1341+ or, if no vip(s) available:
1342+
1343+ [(address_in_net_a, address_in_net_a),
1344+ (address_in_net_b, address_in_net_b),
1345+ ...]
1346+ """
1347+ addresses = []
1348+ if config('vip'):
1349+ vips = config('vip').split()
1350+ else:
1351+ vips = []
1352+
1353+ for net_type in ['os-internal-network', 'os-admin-network',
1354+ 'os-public-network']:
1355+ addr = get_address_in_network(config(net_type),
1356+ unit_get('private-address'))
1357+ if len(vips) > 1 and is_clustered():
1358+ if not config(net_type):
1359+ log("Multiple networks configured but net_type "
1360+ "is None (%s)." % net_type, level=WARNING)
1361+ continue
1362+
1363+ for vip in vips:
1364+ if is_address_in_network(config(net_type), vip):
1365+ addresses.append((addr, vip))
1366+ break
1367+
1368+ elif is_clustered() and config('vip'):
1369+ addresses.append((addr, config('vip')))
1370+ else:
1371+ addresses.append((addr, addr))
1372+
1373+ return sorted(addresses)
1374+
1375+ def __call__(self):
1376+ if isinstance(self.external_ports, six.string_types):
1377+ self.external_ports = [self.external_ports]
1378+
1379+ if not self.external_ports or not https():
1380+ return {}
1381+
1382+ self.configure_ca()
1383+ self.enable_modules()
1384+
1385+ ctxt = {'namespace': self.service_namespace,
1386+ 'endpoints': [],
1387+ 'ext_ports': []}
1388+
1389+ for cn in self.canonical_names():
1390+ self.configure_cert(cn)
1391+
1392+ addresses = self.get_network_addresses()
1393+ for address, endpoint in sorted(set(addresses)):
1394+ for api_port in self.external_ports:
1395+ ext_port = determine_apache_port(api_port)
1396+ int_port = determine_api_port(api_port)
1397+ portmap = (address, endpoint, int(ext_port), int(int_port))
1398+ ctxt['endpoints'].append(portmap)
1399+ ctxt['ext_ports'].append(int(ext_port))
1400+
1401+ ctxt['ext_ports'] = sorted(list(set(ctxt['ext_ports'])))
1402+ return ctxt
1403+
1404+
1405+class NeutronContext(OSContextGenerator):
1406+ interfaces = []
1407+
1408+ @property
1409+ def plugin(self):
1410+ return None
1411+
1412+ @property
1413+ def network_manager(self):
1414+ return None
1415+
1416+ @property
1417+ def packages(self):
1418+ return neutron_plugin_attribute(self.plugin, 'packages',
1419+ self.network_manager)
1420+
1421+ @property
1422+ def neutron_security_groups(self):
1423+ return None
1424+
1425+ def _ensure_packages(self):
1426+ for pkgs in self.packages:
1427+ ensure_packages(pkgs)
1428+
1429+ def _save_flag_file(self):
1430+ if self.network_manager == 'quantum':
1431+ _file = '/etc/nova/quantum_plugin.conf'
1432+ else:
1433+ _file = '/etc/nova/neutron_plugin.conf'
1434+
1435+ with open(_file, 'w') as out:
1436+ out.write(self.plugin + '\n')
1437+
1438+ def ovs_ctxt(self):
1439+ driver = neutron_plugin_attribute(self.plugin, 'driver',
1440+ self.network_manager)
1441+ config = neutron_plugin_attribute(self.plugin, 'config',
1442+ self.network_manager)
1443+ ovs_ctxt = {'core_plugin': driver,
1444+ 'neutron_plugin': 'ovs',
1445+ 'neutron_security_groups': self.neutron_security_groups,
1446+ 'local_ip': unit_private_ip(),
1447+ 'config': config}
1448+
1449+ return ovs_ctxt
1450+
1451+ def nvp_ctxt(self):
1452+ driver = neutron_plugin_attribute(self.plugin, 'driver',
1453+ self.network_manager)
1454+ config = neutron_plugin_attribute(self.plugin, 'config',
1455+ self.network_manager)
1456+ nvp_ctxt = {'core_plugin': driver,
1457+ 'neutron_plugin': 'nvp',
1458+ 'neutron_security_groups': self.neutron_security_groups,
1459+ 'local_ip': unit_private_ip(),
1460+ 'config': config}
1461+
1462+ return nvp_ctxt
1463+
1464+ def n1kv_ctxt(self):
1465+ driver = neutron_plugin_attribute(self.plugin, 'driver',
1466+ self.network_manager)
1467+ n1kv_config = neutron_plugin_attribute(self.plugin, 'config',
1468+ self.network_manager)
1469+ n1kv_user_config_flags = config('n1kv-config-flags')
1470+ restrict_policy_profiles = config('n1kv-restrict-policy-profiles')
1471+ n1kv_ctxt = {'core_plugin': driver,
1472+ 'neutron_plugin': 'n1kv',
1473+ 'neutron_security_groups': self.neutron_security_groups,
1474+ 'local_ip': unit_private_ip(),
1475+ 'config': n1kv_config,
1476+ 'vsm_ip': config('n1kv-vsm-ip'),
1477+ 'vsm_username': config('n1kv-vsm-username'),
1478+ 'vsm_password': config('n1kv-vsm-password'),
1479+ 'restrict_policy_profiles': restrict_policy_profiles}
1480+
1481+ if n1kv_user_config_flags:
1482+ flags = config_flags_parser(n1kv_user_config_flags)
1483+ n1kv_ctxt['user_config_flags'] = flags
1484+
1485+ return n1kv_ctxt
1486+
1487+ def calico_ctxt(self):
1488+ driver = neutron_plugin_attribute(self.plugin, 'driver',
1489+ self.network_manager)
1490+ config = neutron_plugin_attribute(self.plugin, 'config',
1491+ self.network_manager)
1492+ calico_ctxt = {'core_plugin': driver,
1493+ 'neutron_plugin': 'Calico',
1494+ 'neutron_security_groups': self.neutron_security_groups,
1495+ 'local_ip': unit_private_ip(),
1496+ 'config': config}
1497+
1498+ return calico_ctxt
1499+
1500+ def neutron_ctxt(self):
1501+ if https():
1502+ proto = 'https'
1503+ else:
1504+ proto = 'http'
1505+
1506+ if is_clustered():
1507+ host = config('vip')
1508+ else:
1509+ host = unit_get('private-address')
1510+
1511+ ctxt = {'network_manager': self.network_manager,
1512+ 'neutron_url': '%s://%s:%s' % (proto, host, '9696')}
1513+ return ctxt
1514+
1515+ def __call__(self):
1516+ self._ensure_packages()
1517+
1518+ if self.network_manager not in ['quantum', 'neutron']:
1519+ return {}
1520+
1521+ if not self.plugin:
1522+ return {}
1523+
1524+ ctxt = self.neutron_ctxt()
1525+
1526+ if self.plugin == 'ovs':
1527+ ctxt.update(self.ovs_ctxt())
1528+ elif self.plugin in ['nvp', 'nsx']:
1529+ ctxt.update(self.nvp_ctxt())
1530+ elif self.plugin == 'n1kv':
1531+ ctxt.update(self.n1kv_ctxt())
1532+ elif self.plugin == 'Calico':
1533+ ctxt.update(self.calico_ctxt())
1534+
1535+ alchemy_flags = config('neutron-alchemy-flags')
1536+ if alchemy_flags:
1537+ flags = config_flags_parser(alchemy_flags)
1538+ ctxt['neutron_alchemy_flags'] = flags
1539+
1540+ self._save_flag_file()
1541+ return ctxt
1542+
1543+
1544+class OSConfigFlagContext(OSContextGenerator):
1545+ """Provides support for user-defined config flags.
1546+
1547+ Users can define a comma-separated list of key=value pairs
1548+ in the charm configuration and apply them at any point in
1549+ any file by using a template flag.
1550+
1551+ Sometimes users might want config flags inserted within a
1552+ specific section so this class allows users to specify the
1553+ template flag name, allowing for multiple template flags
1554+ (sections) within the same context.
1555+
1556+ NOTE: the value of config-flags may be a comma-separated list of
1557+ key=value pairs and some OpenStack config files support
1558+ comma-separated lists as values.
1559+ """
1560+
1561+ def __init__(self, charm_flag='config-flags',
1562+ template_flag='user_config_flags'):
1563+ """
1564+ :param charm_flag: config flags in charm configuration.
1565+ :param template_flag: insert point for user-defined flags in template
1566+ file.
1567+ """
1568+ super(OSConfigFlagContext, self).__init__()
1569+ self._charm_flag = charm_flag
1570+ self._template_flag = template_flag
1571+
1572+ def __call__(self):
1573+ config_flags = config(self._charm_flag)
1574+ if not config_flags:
1575+ return {}
1576+
1577+ return {self._template_flag:
1578+ config_flags_parser(config_flags)}
1579+
1580+
1581+class SubordinateConfigContext(OSContextGenerator):
1582+
1583+ """
1584+ Responsible for inspecting relations to subordinates that
1585+ may be exporting required config via a json blob.
1586+
1587+ The subordinate interface allows subordinates to export their
1588+ configuration requirements to the principal for multiple config
1589+ files and multiple services. E.g., a subordinate that has interfaces
1590+ to both glance and nova may export the following YAML blob as JSON::
1591+
1592+ glance:
1593+ /etc/glance/glance-api.conf:
1594+ sections:
1595+ DEFAULT:
1596+ - [key1, value1]
1597+ /etc/glance/glance-registry.conf:
1598+ MYSECTION:
1599+ - [key2, value2]
1600+ nova:
1601+ /etc/nova/nova.conf:
1602+ sections:
1603+ DEFAULT:
1604+ - [key3, value3]
1605+
1606+
1607+ It is then up to the principal charms to subscribe this context to
1608+ the service+config file they are interested in. Configuration data will
1609+ be available in the template context, in glance's case, as::
1610+
1611+ ctxt = {
1612+ ... other context ...
1613+ 'subordinate_config': {
1614+ 'DEFAULT': {
1615+ 'key1': 'value1',
1616+ },
1617+ 'MYSECTION': {
1618+ 'key2': 'value2',
1619+ },
1620+ }
1621+ }
1622+ """
1623+
1624+ def __init__(self, service, config_file, interface):
1625+ """
1626+ :param service : Service name key to query in any subordinate
1627+ data found
1628+ :param config_file : Service's config file to query sections
1629+ :param interface : Subordinate interface to inspect
1630+ """
1631+ self.service = service
1632+ self.config_file = config_file
1633+ self.interface = interface
1634+
1635+ def __call__(self):
1636+ ctxt = {'sections': {}}
1637+ for rid in relation_ids(self.interface):
1638+ for unit in related_units(rid):
1639+ sub_config = relation_get('subordinate_configuration',
1640+ rid=rid, unit=unit)
1641+ if sub_config and sub_config != '':
1642+ try:
1643+ sub_config = json.loads(sub_config)
1644+ except ValueError:
1645+ log('Could not parse JSON from subordinate_config '
1646+ 'setting from %s' % rid, level=ERROR)
1647+ continue
1648+
1649+ if self.service not in sub_config:
1650+ log('Found subordinate_config on %s but it contained '
1651+ 'nothing for %s service' % (rid, self.service),
1652+ level=INFO)
1653+ continue
1654+
1655+ sub_config = sub_config[self.service]
1656+ if self.config_file not in sub_config:
1657+ log('Found subordinate_config on %s but it contained '
1658+ 'nothing for %s' % (rid, self.config_file),
1659+ level=INFO)
1660+ continue
1661+
1662+ sub_config = sub_config[self.config_file]
1663+ for k, v in six.iteritems(sub_config):
1664+ if k == 'sections':
1665+ for section, config_dict in six.iteritems(v):
1666+ log("adding section '%s'" % (section),
1667+ level=DEBUG)
1668+ ctxt[k][section] = config_dict
1669+ else:
1670+ ctxt[k] = v
1671+
1672+ log("%d section(s) found" % (len(ctxt['sections'])), level=DEBUG)
1673+ return ctxt
1674+
1675+
1676+class LogLevelContext(OSContextGenerator):
1677+
1678+ def __call__(self):
1679+ ctxt = {}
1680+ ctxt['debug'] = \
1681+ False if config('debug') is None else config('debug')
1682+ ctxt['verbose'] = \
1683+ False if config('verbose') is None else config('verbose')
1684+
1685+ return ctxt
1686+
1687+
1688+class SyslogContext(OSContextGenerator):
1689+
1690+ def __call__(self):
1691+ ctxt = {'use_syslog': config('use-syslog')}
1692+ return ctxt
1693+
1694+
1695+class BindHostContext(OSContextGenerator):
1696+
1697+ def __call__(self):
1698+ if config('prefer-ipv6'):
1699+ return {'bind_host': '::'}
1700+ else:
1701+ return {'bind_host': '0.0.0.0'}
1702+
1703+
1704+class WorkerConfigContext(OSContextGenerator):
1705+
1706+ @property
1707+ def num_cpus(self):
1708+ try:
1709+ from psutil import NUM_CPUS
1710+ except ImportError:
1711+ apt_install('python-psutil', fatal=True)
1712+ from psutil import NUM_CPUS
1713+
1714+ return NUM_CPUS
1715+
1716+ def __call__(self):
1717+ multiplier = config('worker-multiplier') or 0
1718+ ctxt = {"workers": self.num_cpus * multiplier}
1719+ return ctxt
1720+
1721+
1722+class ZeroMQContext(OSContextGenerator):
1723+ interfaces = ['zeromq-configuration']
1724+
1725+ def __call__(self):
1726+ ctxt = {}
1727+ if is_relation_made('zeromq-configuration', 'host'):
1728+ for rid in relation_ids('zeromq-configuration'):
1729+ for unit in related_units(rid):
1730+ ctxt['zmq_nonce'] = relation_get('nonce', unit, rid)
1731+ ctxt['zmq_host'] = relation_get('host', unit, rid)
1732+
1733+ return ctxt
1734+
1735+
1736+class NotificationDriverContext(OSContextGenerator):
1737+
1738+ def __init__(self, zmq_relation='zeromq-configuration',
1739+ amqp_relation='amqp'):
1740+ """
1741+ :param zmq_relation: Name of ZeroMQ relation to check
1742+ """
1743+ self.zmq_relation = zmq_relation
1744+ self.amqp_relation = amqp_relation
1745+
1746+ def __call__(self):
1747+ ctxt = {'notifications': 'False'}
1748+ if is_relation_made(self.amqp_relation):
1749+ ctxt['notifications'] = "True"
1750+
1751+ return ctxt
1752
1753=== added file 'hooks/charmhelpers/contrib/openstack/ip.py'
1754--- hooks/charmhelpers/contrib/openstack/ip.py 1970-01-01 00:00:00 +0000
1755+++ hooks/charmhelpers/contrib/openstack/ip.py 2014-12-11 11:30:34 +0000
1756@@ -0,0 +1,93 @@
1757+from charmhelpers.core.hookenv import (
1758+ config,
1759+ unit_get,
1760+)
1761+from charmhelpers.contrib.network.ip import (
1762+ get_address_in_network,
1763+ is_address_in_network,
1764+ is_ipv6,
1765+ get_ipv6_addr,
1766+)
1767+from charmhelpers.contrib.hahelpers.cluster import is_clustered
1768+
1769+PUBLIC = 'public'
1770+INTERNAL = 'int'
1771+ADMIN = 'admin'
1772+
1773+ADDRESS_MAP = {
1774+ PUBLIC: {
1775+ 'config': 'os-public-network',
1776+ 'fallback': 'public-address'
1777+ },
1778+ INTERNAL: {
1779+ 'config': 'os-internal-network',
1780+ 'fallback': 'private-address'
1781+ },
1782+ ADMIN: {
1783+ 'config': 'os-admin-network',
1784+ 'fallback': 'private-address'
1785+ }
1786+}
1787+
1788+
1789+def canonical_url(configs, endpoint_type=PUBLIC):
1790+ """Returns the correct HTTP URL to this host given the state of HTTPS
1791+ configuration, hacluster and charm configuration.
1792+
1793+ :param configs: OSTemplateRenderer config templating object to inspect
1794+ for a complete https context.
1795+ :param endpoint_type: str endpoint type to resolve.
1796+ :returns: str base URL for services on the current service unit.
1797+ """
1798+ scheme = 'http'
1799+ if 'https' in configs.complete_contexts():
1800+ scheme = 'https'
1801+ address = resolve_address(endpoint_type)
1802+ if is_ipv6(address):
1803+ address = "[{}]".format(address)
1804+ return '%s://%s' % (scheme, address)
1805+
1806+
1807+def resolve_address(endpoint_type=PUBLIC):
1808+ """Return unit address depending on net config.
1809+
1810+ If unit is clustered with vip(s) and has net splits defined, return vip on
1811+ correct network. If clustered with no nets defined, return primary vip.
1812+
1813+ If not clustered, return unit address ensuring address is on configured net
1814+ split if one is configured.
1815+
1816+ :param endpoint_type: Network endpoint type
1817+ """
1818+ resolved_address = None
1819+ vips = config('vip')
1820+ if vips:
1821+ vips = vips.split()
1822+
1823+ net_type = ADDRESS_MAP[endpoint_type]['config']
1824+ net_addr = config(net_type)
1825+ net_fallback = ADDRESS_MAP[endpoint_type]['fallback']
1826+ clustered = is_clustered()
1827+ if clustered:
1828+ if not net_addr:
1829+ # If no net-splits defined, we expect a single vip
1830+ resolved_address = vips[0]
1831+ else:
1832+ for vip in vips:
1833+ if is_address_in_network(net_addr, vip):
1834+ resolved_address = vip
1835+ break
1836+ else:
1837+ if config('prefer-ipv6'):
1838+ fallback_addr = get_ipv6_addr(exc_list=vips)[0]
1839+ else:
1840+ fallback_addr = unit_get(net_fallback)
1841+
1842+ resolved_address = get_address_in_network(net_addr, fallback_addr)
1843+
1844+ if resolved_address is None:
1845+ raise ValueError("Unable to resolve a suitable IP address based on "
1846+ "charm state and configuration. (net_type=%s, "
1847+ "clustered=%s)" % (net_type, clustered))
1848+
1849+ return resolved_address
1850
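The VIP-selection loop in the clustered branch of resolve_address() can be sketched standalone. This is a hypothetical helper using only the stdlib ipaddress module; the real code delegates to charmhelpers.contrib.network.ip.is_address_in_network and is_clustered:

```python
import ipaddress

def pick_vip(vips, network):
    # Return the first VIP that falls inside the given CIDR network,
    # mirroring the clustered/net-split branch of resolve_address()
    # (illustrative sketch, not the charmhelpers implementation).
    net = ipaddress.ip_network(network)
    for vip in vips:
        if ipaddress.ip_address(vip) in net:
            return vip
    return None  # resolve_address() raises ValueError if nothing matches

print(pick_vip(["10.0.0.5", "192.168.1.5"], "192.168.1.0/24"))
```

With no net-split configured, resolve_address() skips this loop entirely and returns the first VIP.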
1851=== added file 'hooks/charmhelpers/contrib/openstack/neutron.py'
1852--- hooks/charmhelpers/contrib/openstack/neutron.py 1970-01-01 00:00:00 +0000
1853+++ hooks/charmhelpers/contrib/openstack/neutron.py 2014-12-11 11:30:34 +0000
1854@@ -0,0 +1,217 @@
1855+# Various utilities for dealing with Neutron and the renaming from Quantum.
1856+
1857+from subprocess import check_output
1858+
1859+from charmhelpers.core.hookenv import (
1860+ config,
1861+ log,
1862+ ERROR,
1863+)
1864+
1865+from charmhelpers.contrib.openstack.utils import os_release
1866+
1867+
1868+def headers_package():
1869+ """Ensures correct linux-headers for running kernel are installed,
1870+ for building DKMS package"""
1871+ kver = check_output(['uname', '-r']).decode('UTF-8').strip()
1872+ return 'linux-headers-%s' % kver
1873+
1874+QUANTUM_CONF_DIR = '/etc/quantum'
1875+
1876+
1877+def kernel_version():
1878+ """ Retrieve the current major kernel version as a tuple e.g. (3, 13) """
1879+ kver = check_output(['uname', '-r']).decode('UTF-8').strip()
1880+ kver = kver.split('.')
1881+ return (int(kver[0]), int(kver[1]))
1882+
1883+
1884+def determine_dkms_package():
1885+ """ Determine which DKMS package should be used based on kernel version """
1886+ # NOTE: 3.13 kernels have support for GRE and VXLAN native
1887+ if kernel_version() >= (3, 13):
1888+ return []
1889+ else:
1890+ return ['openvswitch-datapath-dkms']
1891+
1892+
1893+# legacy
1894+
1895+
1896+def quantum_plugins():
1897+ from charmhelpers.contrib.openstack import context
1898+ return {
1899+ 'ovs': {
1900+ 'config': '/etc/quantum/plugins/openvswitch/'
1901+ 'ovs_quantum_plugin.ini',
1902+ 'driver': 'quantum.plugins.openvswitch.ovs_quantum_plugin.'
1903+ 'OVSQuantumPluginV2',
1904+ 'contexts': [
1905+ context.SharedDBContext(user=config('neutron-database-user'),
1906+ database=config('neutron-database'),
1907+ relation_prefix='neutron',
1908+ ssl_dir=QUANTUM_CONF_DIR)],
1909+ 'services': ['quantum-plugin-openvswitch-agent'],
1910+ 'packages': [[headers_package()] + determine_dkms_package(),
1911+ ['quantum-plugin-openvswitch-agent']],
1912+ 'server_packages': ['quantum-server',
1913+ 'quantum-plugin-openvswitch'],
1914+ 'server_services': ['quantum-server']
1915+ },
1916+ 'nvp': {
1917+ 'config': '/etc/quantum/plugins/nicira/nvp.ini',
1918+ 'driver': 'quantum.plugins.nicira.nicira_nvp_plugin.'
1919+ 'QuantumPlugin.NvpPluginV2',
1920+ 'contexts': [
1921+ context.SharedDBContext(user=config('neutron-database-user'),
1922+ database=config('neutron-database'),
1923+ relation_prefix='neutron',
1924+ ssl_dir=QUANTUM_CONF_DIR)],
1925+ 'services': [],
1926+ 'packages': [],
1927+ 'server_packages': ['quantum-server',
1928+ 'quantum-plugin-nicira'],
1929+ 'server_services': ['quantum-server']
1930+ }
1931+ }
1932+
1933+NEUTRON_CONF_DIR = '/etc/neutron'
1934+
1935+
1936+def neutron_plugins():
1937+ from charmhelpers.contrib.openstack import context
1938+ release = os_release('nova-common')
1939+ plugins = {
1940+ 'ovs': {
1941+ 'config': '/etc/neutron/plugins/openvswitch/'
1942+ 'ovs_neutron_plugin.ini',
1943+ 'driver': 'neutron.plugins.openvswitch.ovs_neutron_plugin.'
1944+ 'OVSNeutronPluginV2',
1945+ 'contexts': [
1946+ context.SharedDBContext(user=config('neutron-database-user'),
1947+ database=config('neutron-database'),
1948+ relation_prefix='neutron',
1949+ ssl_dir=NEUTRON_CONF_DIR)],
1950+ 'services': ['neutron-plugin-openvswitch-agent'],
1951+ 'packages': [[headers_package()] + determine_dkms_package(),
1952+ ['neutron-plugin-openvswitch-agent']],
1953+ 'server_packages': ['neutron-server',
1954+ 'neutron-plugin-openvswitch'],
1955+ 'server_services': ['neutron-server']
1956+ },
1957+ 'nvp': {
1958+ 'config': '/etc/neutron/plugins/nicira/nvp.ini',
1959+ 'driver': 'neutron.plugins.nicira.nicira_nvp_plugin.'
1960+ 'NeutronPlugin.NvpPluginV2',
1961+ 'contexts': [
1962+ context.SharedDBContext(user=config('neutron-database-user'),
1963+ database=config('neutron-database'),
1964+ relation_prefix='neutron',
1965+ ssl_dir=NEUTRON_CONF_DIR)],
1966+ 'services': [],
1967+ 'packages': [],
1968+ 'server_packages': ['neutron-server',
1969+ 'neutron-plugin-nicira'],
1970+ 'server_services': ['neutron-server']
1971+ },
1972+ 'nsx': {
1973+ 'config': '/etc/neutron/plugins/vmware/nsx.ini',
1974+ 'driver': 'vmware',
1975+ 'contexts': [
1976+ context.SharedDBContext(user=config('neutron-database-user'),
1977+ database=config('neutron-database'),
1978+ relation_prefix='neutron',
1979+ ssl_dir=NEUTRON_CONF_DIR)],
1980+ 'services': [],
1981+ 'packages': [],
1982+ 'server_packages': ['neutron-server',
1983+ 'neutron-plugin-vmware'],
1984+ 'server_services': ['neutron-server']
1985+ },
1986+ 'n1kv': {
1987+ 'config': '/etc/neutron/plugins/cisco/cisco_plugins.ini',
1988+ 'driver': 'neutron.plugins.cisco.network_plugin.PluginV2',
1989+ 'contexts': [
1990+ context.SharedDBContext(user=config('neutron-database-user'),
1991+ database=config('neutron-database'),
1992+ relation_prefix='neutron',
1993+ ssl_dir=NEUTRON_CONF_DIR)],
1994+ 'services': [],
1995+ 'packages': [[headers_package()] + determine_dkms_package(),
1996+ ['neutron-plugin-cisco']],
1997+ 'server_packages': ['neutron-server',
1998+ 'neutron-plugin-cisco'],
1999+ 'server_services': ['neutron-server']
2000+ },
2001+ 'Calico': {
2002+ 'config': '/etc/neutron/plugins/ml2/ml2_conf.ini',
2003+ 'driver': 'neutron.plugins.ml2.plugin.Ml2Plugin',
2004+ 'contexts': [
2005+ context.SharedDBContext(user=config('neutron-database-user'),
2006+ database=config('neutron-database'),
2007+ relation_prefix='neutron',
2008+ ssl_dir=NEUTRON_CONF_DIR)],
2009+ 'services': ['calico-compute', 'bird', 'neutron-dhcp-agent'],
2010+ 'packages': [[headers_package()] + determine_dkms_package(),
2011+ ['calico-compute', 'bird', 'neutron-dhcp-agent']],
2012+ 'server_packages': ['neutron-server', 'calico-control'],
2013+ 'server_services': ['neutron-server']
2014+ }
2015+ }
2016+ if release >= 'icehouse':
2017+ # NOTE: patch in ml2 plugin for icehouse onwards
2018+ plugins['ovs']['config'] = '/etc/neutron/plugins/ml2/ml2_conf.ini'
2019+ plugins['ovs']['driver'] = 'neutron.plugins.ml2.plugin.Ml2Plugin'
2020+ plugins['ovs']['server_packages'] = ['neutron-server',
2021+ 'neutron-plugin-ml2']
2022+ # NOTE: patch in vmware renames nvp->nsx for icehouse onwards
2023+ plugins['nvp'] = plugins['nsx']
2024+ return plugins
2025+
2026+
2027+def neutron_plugin_attribute(plugin, attr, net_manager=None):
2028+ manager = net_manager or network_manager()
2029+ if manager == 'quantum':
2030+ plugins = quantum_plugins()
2031+ elif manager == 'neutron':
2032+ plugins = neutron_plugins()
2033+ else:
2034+ log("Network manager '%s' does not support plugins." % (manager),
2035+ level=ERROR)
2036+ raise Exception
2037+
2038+ try:
2039+ _plugin = plugins[plugin]
2040+ except KeyError:
2041+ log('Unrecognised plugin for %s: %s' % (manager, plugin), level=ERROR)
2042+ raise Exception
2043+
2044+ try:
2045+ return _plugin[attr]
2046+ except KeyError:
2047+ return None
2048+
2049+
2050+def network_manager():
2051+ '''
2052+ Deals with the renaming of Quantum to Neutron in H and any situations
2053+    that require compatibility (e.g. deploying H with network-manager=quantum,
2054+ upgrading from G).
2055+ '''
2056+ release = os_release('nova-common')
2057+ manager = config('network-manager').lower()
2058+
2059+ if manager not in ['quantum', 'neutron']:
2060+ return manager
2061+
2062+ if release in ['essex']:
2063+ # E does not support neutron
2064+ log('Neutron networking not supported in Essex.', level=ERROR)
2065+ raise Exception
2066+ elif release in ['folsom', 'grizzly']:
2067+ # neutron is named quantum in F and G
2068+ return 'quantum'
2069+ else:
2070+ # ensure accurate naming for all releases post-H
2071+ return 'neutron'
2072
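The version check behind determine_dkms_package() can be illustrated on its own. A hypothetical sketch of the `uname -r` parsing, assuming the same major.minor tuple comparison as kernel_version():

```python
def kernel_version_tuple(uname_r):
    # Parse (major, minor) from `uname -r` output, as kernel_version()
    # does; determine_dkms_package() then skips the
    # openvswitch-datapath-dkms package on 3.13+ kernels, which have
    # native GRE/VXLAN support.
    parts = uname_r.strip().split('.')
    return (int(parts[0]), int(parts[1]))

print(kernel_version_tuple("3.13.0-24-generic") >= (3, 13))
print(kernel_version_tuple("3.2.0-60-generic") >= (3, 13))
```

Tuple comparison handles the minor version correctly here, where a naive string comparison ("3.2" > "3.13") would not.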
2073=== added directory 'hooks/charmhelpers/contrib/openstack/templates'
2074=== added file 'hooks/charmhelpers/contrib/openstack/templates/__init__.py'
2075--- hooks/charmhelpers/contrib/openstack/templates/__init__.py 1970-01-01 00:00:00 +0000
2076+++ hooks/charmhelpers/contrib/openstack/templates/__init__.py 2014-12-11 11:30:34 +0000
2077@@ -0,0 +1,2 @@
2078+# dummy __init__.py to fool syncer into thinking this is a syncable python
2079+# module
2080
2081=== added file 'hooks/charmhelpers/contrib/openstack/templates/ceph.conf'
2082--- hooks/charmhelpers/contrib/openstack/templates/ceph.conf 1970-01-01 00:00:00 +0000
2083+++ hooks/charmhelpers/contrib/openstack/templates/ceph.conf 2014-12-11 11:30:34 +0000
2084@@ -0,0 +1,15 @@
2085+###############################################################################
2086+# [ WARNING ]
2087+# cinder configuration file maintained by Juju
2088+# local changes may be overwritten.
2089+###############################################################################
2090+[global]
2091+{% if auth -%}
2092+ auth_supported = {{ auth }}
2093+ keyring = /etc/ceph/$cluster.$name.keyring
2094+ mon host = {{ mon_hosts }}
2095+{% endif -%}
2096+ log to syslog = {{ use_syslog }}
2097+ err to syslog = {{ use_syslog }}
2098+ clog to syslog = {{ use_syslog }}
2099+
2100
2101=== added file 'hooks/charmhelpers/contrib/openstack/templates/haproxy.cfg'
2102--- hooks/charmhelpers/contrib/openstack/templates/haproxy.cfg 1970-01-01 00:00:00 +0000
2103+++ hooks/charmhelpers/contrib/openstack/templates/haproxy.cfg 2014-12-11 11:30:34 +0000
2104@@ -0,0 +1,54 @@
2105+global
2106+ log {{ local_host }} local0
2107+ log {{ local_host }} local1 notice
2108+ maxconn 20000
2109+ user haproxy
2110+ group haproxy
2111+ spread-checks 0
2112+
2113+defaults
2114+ log global
2115+ mode tcp
2116+ option tcplog
2117+ option dontlognull
2118+ retries 3
2119+ timeout queue 1000
2120+ timeout connect 1000
2121+{% if haproxy_client_timeout -%}
2122+ timeout client {{ haproxy_client_timeout }}
2123+{% else -%}
2124+ timeout client 30000
2125+{% endif -%}
2126+
2127+{% if haproxy_server_timeout -%}
2128+ timeout server {{ haproxy_server_timeout }}
2129+{% else -%}
2130+ timeout server 30000
2131+{% endif -%}
2132+
2133+listen stats {{ stat_port }}
2134+ mode http
2135+ stats enable
2136+ stats hide-version
2137+ stats realm Haproxy\ Statistics
2138+ stats uri /
2139+ stats auth admin:password
2140+
2141+{% if frontends -%}
2142+{% for service, ports in service_ports.items() -%}
2143+frontend tcp-in_{{ service }}
2144+ bind *:{{ ports[0] }}
2145+ bind :::{{ ports[0] }}
2146+ {% for frontend in frontends -%}
2147+ acl net_{{ frontend }} dst {{ frontends[frontend]['network'] }}
2148+ use_backend {{ service }}_{{ frontend }} if net_{{ frontend }}
2149+ {% endfor %}
2150+{% for frontend in frontends -%}
2151+backend {{ service }}_{{ frontend }}
2152+ balance leastconn
2153+ {% for unit, address in frontends[frontend]['backends'].items() -%}
2154+ server {{ unit }} {{ address }}:{{ ports[1] }} check
2155+ {% endfor %}
2156+{% endfor -%}
2157+{% endfor -%}
2158+{% endif -%}
2159
2160=== added file 'hooks/charmhelpers/contrib/openstack/templates/openstack_https_frontend'
2161--- hooks/charmhelpers/contrib/openstack/templates/openstack_https_frontend 1970-01-01 00:00:00 +0000
2162+++ hooks/charmhelpers/contrib/openstack/templates/openstack_https_frontend 2014-12-11 11:30:34 +0000
2163@@ -0,0 +1,24 @@
2164+{% if endpoints -%}
2165+{% for ext_port in ext_ports -%}
2166+Listen {{ ext_port }}
2167+{% endfor -%}
2168+{% for address, endpoint, ext, int in endpoints -%}
2169+<VirtualHost {{ address }}:{{ ext }}>
2170+ ServerName {{ endpoint }}
2171+ SSLEngine on
2172+ SSLCertificateFile /etc/apache2/ssl/{{ namespace }}/cert_{{ endpoint }}
2173+ SSLCertificateKeyFile /etc/apache2/ssl/{{ namespace }}/key_{{ endpoint }}
2174+ ProxyPass / http://localhost:{{ int }}/
2175+ ProxyPassReverse / http://localhost:{{ int }}/
2176+ ProxyPreserveHost on
2177+</VirtualHost>
2178+{% endfor -%}
2179+<Proxy *>
2180+ Order deny,allow
2181+ Allow from all
2182+</Proxy>
2183+<Location />
2184+ Order allow,deny
2185+ Allow from all
2186+</Location>
2187+{% endif -%}
2188
2189=== added file 'hooks/charmhelpers/contrib/openstack/templates/openstack_https_frontend.conf'
2190--- hooks/charmhelpers/contrib/openstack/templates/openstack_https_frontend.conf 1970-01-01 00:00:00 +0000
2191+++ hooks/charmhelpers/contrib/openstack/templates/openstack_https_frontend.conf 2014-12-11 11:30:34 +0000
2192@@ -0,0 +1,24 @@
2193+{% if endpoints -%}
2194+{% for ext_port in ext_ports -%}
2195+Listen {{ ext_port }}
2196+{% endfor -%}
2197+{% for address, endpoint, ext, int in endpoints -%}
2198+<VirtualHost {{ address }}:{{ ext }}>
2199+ ServerName {{ endpoint }}
2200+ SSLEngine on
2201+ SSLCertificateFile /etc/apache2/ssl/{{ namespace }}/cert_{{ endpoint }}
2202+ SSLCertificateKeyFile /etc/apache2/ssl/{{ namespace }}/key_{{ endpoint }}
2203+ ProxyPass / http://localhost:{{ int }}/
2204+ ProxyPassReverse / http://localhost:{{ int }}/
2205+ ProxyPreserveHost on
2206+</VirtualHost>
2207+{% endfor -%}
2208+<Proxy *>
2209+ Order deny,allow
2210+ Allow from all
2211+</Proxy>
2212+<Location />
2213+ Order allow,deny
2214+ Allow from all
2215+</Location>
2216+{% endif -%}
2217
2218=== added file 'hooks/charmhelpers/contrib/openstack/templating.py'
2219--- hooks/charmhelpers/contrib/openstack/templating.py 1970-01-01 00:00:00 +0000
2220+++ hooks/charmhelpers/contrib/openstack/templating.py 2014-12-11 11:30:34 +0000
2221@@ -0,0 +1,279 @@
2222+import os
2223+
2224+import six
2225+
2226+from charmhelpers.fetch import apt_install
2227+from charmhelpers.core.hookenv import (
2228+ log,
2229+ ERROR,
2230+ INFO
2231+)
2232+from charmhelpers.contrib.openstack.utils import OPENSTACK_CODENAMES
2233+
2234+try:
2235+ from jinja2 import FileSystemLoader, ChoiceLoader, Environment, exceptions
2236+except ImportError:
2237+ # python-jinja2 may not be installed yet, or we're running unittests.
2238+ FileSystemLoader = ChoiceLoader = Environment = exceptions = None
2239+
2240+
2241+class OSConfigException(Exception):
2242+ pass
2243+
2244+
2245+def get_loader(templates_dir, os_release):
2246+ """
2247+ Create a jinja2.ChoiceLoader containing template dirs up to
2248+    and including os_release. If a release-specific template directory
2249+    is missing under templates_dir, it will be omitted from the loader.
2250+ templates_dir is added to the bottom of the search list as a base
2251+ loading dir.
2252+
2253+ A charm may also ship a templates dir with this module
2254+ and it will be appended to the bottom of the search list, eg::
2255+
2256+ hooks/charmhelpers/contrib/openstack/templates
2257+
2258+ :param templates_dir (str): Base template directory containing release
2259+ sub-directories.
2260+ :param os_release (str): OpenStack release codename to construct template
2261+ loader.
2262+ :returns: jinja2.ChoiceLoader constructed with a list of
2263+ jinja2.FilesystemLoaders, ordered in descending
2264+ order by OpenStack release.
2265+ """
2266+ tmpl_dirs = [(rel, os.path.join(templates_dir, rel))
2267+ for rel in six.itervalues(OPENSTACK_CODENAMES)]
2268+
2269+ if not os.path.isdir(templates_dir):
2270+ log('Templates directory not found @ %s.' % templates_dir,
2271+ level=ERROR)
2272+ raise OSConfigException
2273+
2274+    # the bottom contains templates_dir and possibly a common templates dir
2275+ # shipped with the helper.
2276+ loaders = [FileSystemLoader(templates_dir)]
2277+ helper_templates = os.path.join(os.path.dirname(__file__), 'templates')
2278+ if os.path.isdir(helper_templates):
2279+ loaders.append(FileSystemLoader(helper_templates))
2280+
2281+ for rel, tmpl_dir in tmpl_dirs:
2282+ if os.path.isdir(tmpl_dir):
2283+ loaders.insert(0, FileSystemLoader(tmpl_dir))
2284+ if rel == os_release:
2285+ break
2286+ log('Creating choice loader with dirs: %s' %
2287+ [l.searchpath for l in loaders], level=INFO)
2288+ return ChoiceLoader(loaders)
2289+
2290+
2291+class OSConfigTemplate(object):
2292+ """
2293+ Associates a config file template with a list of context generators.
2294+ Responsible for constructing a template context based on those generators.
2295+ """
2296+ def __init__(self, config_file, contexts):
2297+ self.config_file = config_file
2298+
2299+ if hasattr(contexts, '__call__'):
2300+ self.contexts = [contexts]
2301+ else:
2302+ self.contexts = contexts
2303+
2304+ self._complete_contexts = []
2305+
2306+ def context(self):
2307+ ctxt = {}
2308+ for context in self.contexts:
2309+ _ctxt = context()
2310+ if _ctxt:
2311+ ctxt.update(_ctxt)
2312+ # track interfaces for every complete context.
2313+ [self._complete_contexts.append(interface)
2314+ for interface in context.interfaces
2315+ if interface not in self._complete_contexts]
2316+ return ctxt
2317+
2318+ def complete_contexts(self):
2319+ '''
2320+        Return a list of interfaces that have satisfied contexts.
2321+ '''
2322+ if self._complete_contexts:
2323+ return self._complete_contexts
2324+ self.context()
2325+ return self._complete_contexts
2326+
2327+
2328+class OSConfigRenderer(object):
2329+ """
2330+ This class provides a common templating system to be used by OpenStack
2331+ charms. It is intended to help charms share common code and templates,
2332+ and ease the burden of managing config templates across multiple OpenStack
2333+ releases.
2334+
2335+ Basic usage::
2336+
2337+        # import some common context generators from charmhelpers
2338+ from charmhelpers.contrib.openstack import context
2339+
2340+ # Create a renderer object for a specific OS release.
2341+ configs = OSConfigRenderer(templates_dir='/tmp/templates',
2342+ openstack_release='folsom')
2343+ # register some config files with context generators.
2344+ configs.register(config_file='/etc/nova/nova.conf',
2345+ contexts=[context.SharedDBContext(),
2346+ context.AMQPContext()])
2347+ configs.register(config_file='/etc/nova/api-paste.ini',
2348+ contexts=[context.IdentityServiceContext()])
2349+ configs.register(config_file='/etc/haproxy/haproxy.conf',
2350+ contexts=[context.HAProxyContext()])
2351+ # write out a single config
2352+ configs.write('/etc/nova/nova.conf')
2353+ # write out all registered configs
2354+ configs.write_all()
2355+
2356+ **OpenStack Releases and template loading**
2357+
2358+ When the object is instantiated, it is associated with a specific OS
2359+ release. This dictates how the template loader will be constructed.
2360+
2361+ The constructed loader attempts to load the template from several places
2362+ in the following order:
2363+ - from the most recent OS release-specific template dir (if one exists)
2364+ - the base templates_dir
2365+ - a template directory shipped in the charm with this helper file.
2366+
2367+ For the example above, '/tmp/templates' contains the following structure::
2368+
2369+ /tmp/templates/nova.conf
2370+ /tmp/templates/api-paste.ini
2371+ /tmp/templates/grizzly/api-paste.ini
2372+ /tmp/templates/havana/api-paste.ini
2373+
2374+    Since it was registered with the grizzly release, it first searches
2375+ the grizzly directory for nova.conf, then the templates dir.
2376+
2377+ When writing api-paste.ini, it will find the template in the grizzly
2378+ directory.
2379+
2380+ If the object were created with folsom, it would fall back to the
2381+ base templates dir for its api-paste.ini template.
2382+
2383+ This system should help manage changes in config files through
2384+ openstack releases, allowing charms to fall back to the most recently
2385+    updated config template for a given release.
2386+
2387+ The haproxy.conf, since it is not shipped in the templates dir, will
2388+    be loaded from the module directory's template directory, e.g.
2389+ $CHARM/hooks/charmhelpers/contrib/openstack/templates. This allows
2390+ us to ship common templates (haproxy, apache) with the helpers.
2391+
2392+ **Context generators**
2393+
2394+ Context generators are used to generate template contexts during hook
2395+ execution. Doing so may require inspecting service relations, charm
2396+ config, etc. When registered, a config file is associated with a list
2397+ of generators. When a template is rendered and written, all context
2398+    generators are called in a chain to generate the context dictionary
2399+ passed to the jinja2 template. See context.py for more info.
2400+ """
2401+ def __init__(self, templates_dir, openstack_release):
2402+ if not os.path.isdir(templates_dir):
2403+ log('Could not locate templates dir %s' % templates_dir,
2404+ level=ERROR)
2405+ raise OSConfigException
2406+
2407+ self.templates_dir = templates_dir
2408+ self.openstack_release = openstack_release
2409+ self.templates = {}
2410+ self._tmpl_env = None
2411+
2412+ if None in [Environment, ChoiceLoader, FileSystemLoader]:
2413+ # if this code is running, the object is created pre-install hook.
2414+ # jinja2 shouldn't get touched until the module is reloaded on next
2415+ # hook execution, with proper jinja2 bits successfully imported.
2416+ apt_install('python-jinja2')
2417+
2418+ def register(self, config_file, contexts):
2419+ """
2420+ Register a config file with a list of context generators to be called
2421+ during rendering.
2422+ """
2423+ self.templates[config_file] = OSConfigTemplate(config_file=config_file,
2424+ contexts=contexts)
2425+ log('Registered config file: %s' % config_file, level=INFO)
2426+
2427+ def _get_tmpl_env(self):
2428+ if not self._tmpl_env:
2429+ loader = get_loader(self.templates_dir, self.openstack_release)
2430+ self._tmpl_env = Environment(loader=loader)
2431+
2432+ def _get_template(self, template):
2433+ self._get_tmpl_env()
2434+ template = self._tmpl_env.get_template(template)
2435+ log('Loaded template from %s' % template.filename, level=INFO)
2436+ return template
2437+
2438+ def render(self, config_file):
2439+ if config_file not in self.templates:
2440+ log('Config not registered: %s' % config_file, level=ERROR)
2441+ raise OSConfigException
2442+ ctxt = self.templates[config_file].context()
2443+
2444+ _tmpl = os.path.basename(config_file)
2445+ try:
2446+ template = self._get_template(_tmpl)
2447+ except exceptions.TemplateNotFound:
2448+ # if no template is found with basename, try looking for it
2449+ # using a munged full path, eg:
2450+ # /etc/apache2/apache2.conf -> etc_apache2_apache2.conf
2451+ _tmpl = '_'.join(config_file.split('/')[1:])
2452+ try:
2453+ template = self._get_template(_tmpl)
2454+ except exceptions.TemplateNotFound as e:
2455+ log('Could not load template from %s by %s or %s.' %
2456+ (self.templates_dir, os.path.basename(config_file), _tmpl),
2457+ level=ERROR)
2458+ raise e
2459+
2460+ log('Rendering from template: %s' % _tmpl, level=INFO)
2461+ return template.render(ctxt)
2462+
2463+ def write(self, config_file):
2464+ """
2465+ Write a single config file, raises if config file is not registered.
2466+ """
2467+ if config_file not in self.templates:
2468+ log('Config not registered: %s' % config_file, level=ERROR)
2469+ raise OSConfigException
2470+
2471+ _out = self.render(config_file)
2472+
2473+ with open(config_file, 'wb') as out:
2474+ out.write(_out)
2475+
2476+ log('Wrote template %s.' % config_file, level=INFO)
2477+
2478+ def write_all(self):
2479+ """
2480+ Write out all registered config files.
2481+ """
2482+ [self.write(k) for k in six.iterkeys(self.templates)]
2483+
2484+ def set_release(self, openstack_release):
2485+ """
2486+ Resets the template environment and generates a new template loader
2487+        based on the new openstack release.
2488+ """
2489+ self._tmpl_env = None
2490+ self.openstack_release = openstack_release
2491+ self._get_tmpl_env()
2492+
2493+ def complete_contexts(self):
2494+ '''
2495+ Returns a list of context interfaces that yield a complete context.
2496+ '''
2497+ interfaces = []
2498+ [interfaces.extend(i.complete_contexts())
2499+ for i in six.itervalues(self.templates)]
2500+ return interfaces
2501
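The release-ordered search path that get_loader() builds can be shown with a hedged standalone sketch. It reproduces only the ordering logic over a subset of OPENSTACK_CODENAMES; the real code also skips directories that do not exist and wraps each path in a jinja2 FileSystemLoader:

```python
import os
from collections import OrderedDict

# Subset of OPENSTACK_CODENAMES, oldest to newest (values as in utils.py).
CODENAMES = OrderedDict([('2012.1', 'essex'), ('2012.2', 'folsom'),
                         ('2013.1', 'grizzly'), ('2013.2', 'havana')])

def loader_dirs(templates_dir, os_release):
    # Reproduce get_loader()'s search order: release-specific dirs up to
    # and including os_release, newest first, then the base templates_dir
    # as the final fallback.
    dirs = []
    for rel in CODENAMES.values():
        dirs.insert(0, os.path.join(templates_dir, rel))
        if rel == os_release:
            break
    dirs.append(templates_dir)
    return dirs

print(loader_dirs('/tmp/templates', 'grizzly'))
```

So a renderer created for grizzly tries `/tmp/templates/grizzly` first and only falls back to older release directories and then the base dir, which is what lets newer releases inherit unchanged templates.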
2502=== added file 'hooks/charmhelpers/contrib/openstack/utils.py'
2503--- hooks/charmhelpers/contrib/openstack/utils.py 1970-01-01 00:00:00 +0000
2504+++ hooks/charmhelpers/contrib/openstack/utils.py 2014-12-11 11:30:34 +0000
2505@@ -0,0 +1,619 @@
2506+#!/usr/bin/python
2507+
2508+# Common python helper functions used for OpenStack charms.
2509+from collections import OrderedDict
2510+from functools import wraps
2511+
2512+import subprocess
2513+import json
2514+import os
2515+import socket
2516+import sys
2517+
2518+import six
2519+import yaml
2520+
2521+from charmhelpers.core.hookenv import (
2522+ config,
2523+ log as juju_log,
2524+ charm_dir,
2525+ INFO,
2526+ relation_ids,
2527+ relation_set
2528+)
2529+
2530+from charmhelpers.contrib.storage.linux.lvm import (
2531+ deactivate_lvm_volume_group,
2532+ is_lvm_physical_volume,
2533+ remove_lvm_physical_volume,
2534+)
2535+
2536+from charmhelpers.contrib.network.ip import (
2537+ get_ipv6_addr
2538+)
2539+
2540+from charmhelpers.core.host import lsb_release, mounts, umount
2541+from charmhelpers.fetch import apt_install, apt_cache, install_remote
2542+from charmhelpers.contrib.python.packages import pip_install
2543+from charmhelpers.contrib.storage.linux.utils import is_block_device, zap_disk
2544+from charmhelpers.contrib.storage.linux.loopback import ensure_loopback_device
2545+
2546+CLOUD_ARCHIVE_URL = "http://ubuntu-cloud.archive.canonical.com/ubuntu"
2547+CLOUD_ARCHIVE_KEY_ID = '5EDB1B62EC4926EA'
2548+
2549+DISTRO_PROPOSED = ('deb http://archive.ubuntu.com/ubuntu/ %s-proposed '
2550+ 'restricted main multiverse universe')
2551+
2552+
2553+UBUNTU_OPENSTACK_RELEASE = OrderedDict([
2554+ ('oneiric', 'diablo'),
2555+ ('precise', 'essex'),
2556+ ('quantal', 'folsom'),
2557+ ('raring', 'grizzly'),
2558+ ('saucy', 'havana'),
2559+ ('trusty', 'icehouse'),
2560+ ('utopic', 'juno'),
2561+])
2562+
2563+
2564+OPENSTACK_CODENAMES = OrderedDict([
2565+ ('2011.2', 'diablo'),
2566+ ('2012.1', 'essex'),
2567+ ('2012.2', 'folsom'),
2568+ ('2013.1', 'grizzly'),
2569+ ('2013.2', 'havana'),
2570+ ('2014.1', 'icehouse'),
2571+ ('2014.2', 'juno'),
2572+])
2573+
2574+# The ugly duckling
2575+SWIFT_CODENAMES = OrderedDict([
2576+ ('1.4.3', 'diablo'),
2577+ ('1.4.8', 'essex'),
2578+ ('1.7.4', 'folsom'),
2579+ ('1.8.0', 'grizzly'),
2580+ ('1.7.7', 'grizzly'),
2581+ ('1.7.6', 'grizzly'),
2582+ ('1.10.0', 'havana'),
2583+ ('1.9.1', 'havana'),
2584+ ('1.9.0', 'havana'),
2585+ ('1.13.1', 'icehouse'),
2586+ ('1.13.0', 'icehouse'),
2587+ ('1.12.0', 'icehouse'),
2588+ ('1.11.0', 'icehouse'),
2589+ ('2.0.0', 'juno'),
2590+ ('2.1.0', 'juno'),
2591+ ('2.2.0', 'juno'),
2592+])
2593+
2594+DEFAULT_LOOPBACK_SIZE = '5G'
2595+
2596+
2597+def error_out(msg):
2598+ juju_log("FATAL ERROR: %s" % msg, level='ERROR')
2599+ sys.exit(1)
2600+
2601+
2602+def get_os_codename_install_source(src):
2603+ '''Derive OpenStack release codename from a given installation source.'''
2604+ ubuntu_rel = lsb_release()['DISTRIB_CODENAME']
2605+ rel = ''
2606+ if src is None:
2607+ return rel
2608+ if src in ['distro', 'distro-proposed']:
2609+ try:
2610+ rel = UBUNTU_OPENSTACK_RELEASE[ubuntu_rel]
2611+ except KeyError:
2612+ e = 'Could not derive openstack release for '\
2613+ 'this Ubuntu release: %s' % ubuntu_rel
2614+ error_out(e)
2615+ return rel
2616+
2617+ if src.startswith('cloud:'):
2618+ ca_rel = src.split(':')[1]
2619+ ca_rel = ca_rel.split('%s-' % ubuntu_rel)[1].split('/')[0]
2620+ return ca_rel
2621+
2622+ # Best guess match based on deb string provided
2623+ if src.startswith('deb') or src.startswith('ppa'):
2624+ for k, v in six.iteritems(OPENSTACK_CODENAMES):
2625+ if v in src:
2626+ return v
2627+
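The 'cloud:' branch above extracts the codename by string splitting. A hypothetical standalone sketch of that parsing (same split chain as get_os_codename_install_source, with the Ubuntu series passed in rather than read from lsb_release):

```python
def codename_from_cloud_source(src, ubuntu_rel):
    # 'cloud:precise-folsom/updates' -> 'folsom': drop the 'cloud:'
    # prefix, strip the '<series>-' lead, then discard any
    # '/updates' or '/proposed' pocket suffix.
    ca_rel = src.split(':')[1]
    return ca_rel.split('%s-' % ubuntu_rel)[1].split('/')[0]

print(codename_from_cloud_source('cloud:precise-folsom/updates', 'precise'))
```

Note the split assumes a well-formed `cloud:<series>-<release>` string; a malformed source raises IndexError, which the surrounding charm code surfaces as a configuration error.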
2628+
2629+def get_os_version_install_source(src):
2630+ codename = get_os_codename_install_source(src)
2631+ return get_os_version_codename(codename)
2632+
2633+
2634+def get_os_codename_version(vers):
2635+ '''Determine OpenStack codename from version number.'''
2636+ try:
2637+ return OPENSTACK_CODENAMES[vers]
2638+ except KeyError:
2639+ e = 'Could not determine OpenStack codename for version %s' % vers
2640+ error_out(e)
2641+
2642+
2643+def get_os_version_codename(codename):
2644+ '''Determine OpenStack version number from codename.'''
2645+ for k, v in six.iteritems(OPENSTACK_CODENAMES):
2646+ if v == codename:
2647+ return k
2648+ e = 'Could not derive OpenStack version for '\
2649+ 'codename: %s' % codename
2650+ error_out(e)
2651+
2652+
2653+def get_os_codename_package(package, fatal=True):
2654+ '''Derive OpenStack release codename from an installed package.'''
2655+ import apt_pkg as apt
2656+
2657+ cache = apt_cache()
2658+
2659+ try:
2660+ pkg = cache[package]
2661+ except:
2662+ if not fatal:
2663+ return None
2664+ # the package is unknown to the current apt cache.
2665+ e = 'Could not determine version of package with no installation '\
2666+ 'candidate: %s' % package
2667+ error_out(e)
2668+
2669+ if not pkg.current_ver:
2670+ if not fatal:
2671+ return None
2672+ # package is known, but no version is currently installed.
2673+ e = 'Could not determine version of uninstalled package: %s' % package
2674+ error_out(e)
2675+
2676+ vers = apt.upstream_version(pkg.current_ver.ver_str)
2677+
2678+ try:
2679+ if 'swift' in pkg.name:
2680+ swift_vers = vers[:5]
2681+ if swift_vers not in SWIFT_CODENAMES:
2682+ # Deal with 1.10.0 upward
2683+ swift_vers = vers[:6]
2684+ return SWIFT_CODENAMES[swift_vers]
2685+ else:
2686+ vers = vers[:6]
2687+ return OPENSTACK_CODENAMES[vers]
2688+ except KeyError:
2689+ e = 'Could not determine OpenStack codename for version %s' % vers
2690+ error_out(e)
2691+
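The swift branch above tries a 5-character version prefix before falling back to 6 characters, because swift versions like 1.10.0 no longer fit in five. A small sketch of just that lookup, using a hypothetical subset of SWIFT_CODENAMES:

```python
# Subset of SWIFT_CODENAMES from utils.py, enough to show the prefix logic.
SWIFT_CODENAMES = {'1.9.1': 'havana', '1.10.0': 'havana',
                   '1.13.1': 'icehouse'}

def swift_codename(vers):
    # Mirror get_os_codename_package()'s swift handling: try the
    # 5-char prefix first, widen to 6 chars for 1.10.0 and later.
    v = vers[:5]
    if v not in SWIFT_CODENAMES:
        v = vers[:6]
    return SWIFT_CODENAMES.get(v)

print(swift_codename('1.9.1'), swift_codename('1.10.0'))
```

Non-swift packages skip this entirely and use a fixed 6-character prefix against OPENSTACK_CODENAMES.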
2692+
2693+def get_os_version_package(pkg, fatal=True):
2694+ '''Derive OpenStack version number from an installed package.'''
2695+ codename = get_os_codename_package(pkg, fatal=fatal)
2696+
2697+ if not codename:
2698+ return None
2699+
2700+ if 'swift' in pkg:
2701+ vers_map = SWIFT_CODENAMES
2702+ else:
2703+ vers_map = OPENSTACK_CODENAMES
2704+
2705+ for version, cname in six.iteritems(vers_map):
2706+ if cname == codename:
2707+ return version
2708+ # e = "Could not determine OpenStack version for package: %s" % pkg
2709+ # error_out(e)
2710+
2711+
2712+os_rel = None
2713+
2714+
2715+def os_release(package, base='essex'):
2716+ '''
2717+ Returns OpenStack release codename from a cached global.
2718+ If the codename can not be determined from either an installed package or
2719+ the installation source, the earliest release supported by the charm should
2720+ be returned.
2721+ '''
2722+ global os_rel
2723+ if os_rel:
2724+ return os_rel
2725+ os_rel = (get_os_codename_package(package, fatal=False) or
2726+ get_os_codename_install_source(config('openstack-origin')) or
2727+ base)
2728+ return os_rel
2729+
2730+
2731+def import_key(keyid):
2732+ cmd = "apt-key adv --keyserver hkp://keyserver.ubuntu.com:80 " \
2733+ "--recv-keys %s" % keyid
2734+ try:
2735+ subprocess.check_call(cmd.split(' '))
2736+ except subprocess.CalledProcessError:
2737+ error_out("Error importing repo key %s" % keyid)
2738+
2739+
2740+def configure_installation_source(rel):
2741+ '''Configure apt installation source.'''
2742+ if rel == 'distro':
2743+ return
2744+ elif rel == 'distro-proposed':
2745+ ubuntu_rel = lsb_release()['DISTRIB_CODENAME']
2746+ with open('/etc/apt/sources.list.d/juju_deb.list', 'w') as f:
2747+ f.write(DISTRO_PROPOSED % ubuntu_rel)
2748+ elif rel[:4] == "ppa:":
2749+ src = rel
2750+ subprocess.check_call(["add-apt-repository", "-y", src])
2751+ elif rel[:3] == "deb":
2752+ l = len(rel.split('|'))
2753+ if l == 2:
2754+ src, key = rel.split('|')
2755+ juju_log("Importing PPA key from keyserver for %s" % src)
2756+ import_key(key)
2757+ elif l == 1:
2758+ src = rel
2759+ with open('/etc/apt/sources.list.d/juju_deb.list', 'w') as f:
2760+ f.write(src)
2761+ elif rel[:6] == 'cloud:':
2762+ ubuntu_rel = lsb_release()['DISTRIB_CODENAME']
2763+ rel = rel.split(':')[1]
2764+ u_rel = rel.split('-')[0]
2765+ ca_rel = rel.split('-')[1]
2766+
2767+ if u_rel != ubuntu_rel:
2768+ e = 'Cannot install from Cloud Archive pocket %s on this Ubuntu '\
2769+ 'version (%s)' % (ca_rel, ubuntu_rel)
2770+ error_out(e)
2771+
2772+ if 'staging' in ca_rel:
2773+ # staging is just a regular PPA.
2774+ os_rel = ca_rel.split('/')[0]
2775+ ppa = 'ppa:ubuntu-cloud-archive/%s-staging' % os_rel
2776+ cmd = 'add-apt-repository -y %s' % ppa
2777+ subprocess.check_call(cmd.split(' '))
2778+ return
2779+
2780+ # map charm config options to actual archive pockets.
2781+ pockets = {
2782+ 'folsom': 'precise-updates/folsom',
2783+ 'folsom/updates': 'precise-updates/folsom',
2784+ 'folsom/proposed': 'precise-proposed/folsom',
2785+ 'grizzly': 'precise-updates/grizzly',
2786+ 'grizzly/updates': 'precise-updates/grizzly',
2787+ 'grizzly/proposed': 'precise-proposed/grizzly',
2788+ 'havana': 'precise-updates/havana',
2789+ 'havana/updates': 'precise-updates/havana',
2790+ 'havana/proposed': 'precise-proposed/havana',
2791+ 'icehouse': 'precise-updates/icehouse',
2792+ 'icehouse/updates': 'precise-updates/icehouse',
2793+ 'icehouse/proposed': 'precise-proposed/icehouse',
2794+ 'juno': 'trusty-updates/juno',
2795+ 'juno/updates': 'trusty-updates/juno',
2796+ 'juno/proposed': 'trusty-proposed/juno',
2797+ }
2798+
2799+ try:
2800+ pocket = pockets[ca_rel]
2801+ except KeyError:
2802+ e = 'Invalid Cloud Archive release specified: %s' % rel
2803+ error_out(e)
2804+
2805+ src = "deb %s %s main" % (CLOUD_ARCHIVE_URL, pocket)
2806+ apt_install('ubuntu-cloud-keyring', fatal=True)
2807+
2808+ with open('/etc/apt/sources.list.d/cloud-archive.list', 'w') as f:
2809+ f.write(src)
2810+ else:
2811+ error_out("Invalid openstack-release specified: %s" % rel)
2812+
2813+
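The `cloud:` branch above can be traced with a small helper; the pocket map below is a subset of the one in the code, and `parse_cloud_origin` is a hypothetical name for illustration only:

```python
# Subset of the charm's archive-pocket map, for illustration.
POCKETS = {
    'icehouse': 'precise-updates/icehouse',
    'juno': 'trusty-updates/juno',
    'juno/proposed': 'trusty-proposed/juno',
}


def parse_cloud_origin(rel):
    """Split 'cloud:<ubuntu>-<openstack>[/pocket]' the way
    configure_installation_source() does and look up the archive pocket."""
    rel = rel.split(':')[1]        # e.g. 'trusty-juno/proposed'
    u_rel = rel.split('-')[0]      # 'trusty'
    ca_rel = rel.split('-')[1]     # 'juno/proposed'
    return u_rel, POCKETS[ca_rel]
```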
2814+def save_script_rc(script_path="scripts/scriptrc", **env_vars):
2815+ """
2816+ Write an rc file in the charm-delivered directory containing
2817+ exported environment variables provided by env_vars. Any charm scripts run
2818+ outside the juju hook environment can source this scriptrc to obtain
2819+ updated config information necessary to perform health checks or
2820+ service changes.
2821+ """
2822+ juju_rc_path = "%s/%s" % (charm_dir(), script_path)
2823+ if not os.path.exists(os.path.dirname(juju_rc_path)):
2824+ os.mkdir(os.path.dirname(juju_rc_path))
2825+ with open(juju_rc_path, 'w') as rc_script:
2826+ rc_script.write(
2827+ "#!/bin/bash\n")
2828+ [rc_script.write('export %s=%s\n' % (u, p))
2829+ for u, p in six.iteritems(env_vars) if u != "script_path"]
2830+
2831+
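A minimal, self-contained version of the scriptrc writer, writing into a caller-supplied directory instead of the charm dir, and using a plain loop rather than the side-effect list comprehension:

```python
import os
import tempfile


def save_script_rc_sketch(charm_dir, script_path="scripts/scriptrc", **env_vars):
    """Write '#!/bin/bash' plus one 'export K=V' line per variable."""
    rc_path = os.path.join(charm_dir, script_path)
    if not os.path.exists(os.path.dirname(rc_path)):
        os.makedirs(os.path.dirname(rc_path))
    with open(rc_path, 'w') as rc:
        rc.write("#!/bin/bash\n")
        for key, val in env_vars.items():
            if key != "script_path":
                rc.write('export %s=%s\n' % (key, val))
    return rc_path
```

For example, `save_script_rc_sketch(tempfile.mkdtemp(), OPENSTACK_PORT_API='9292')` produces a file any out-of-hook script can `source`.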
2832+def openstack_upgrade_available(package):
2833+ """
2834+ Determines if an OpenStack upgrade is available from installation
2835+ source, based on version of installed package.
2836+
2837+ :param package: str: Name of installed package.
2838+
2839+ :returns: bool: True if the configured installation source offers
2840+ a newer version of the package.
2841+
2842+ """
2843+
2844+ import apt_pkg as apt
2845+ src = config('openstack-origin')
2846+ cur_vers = get_os_version_package(package)
2847+ available_vers = get_os_version_install_source(src)
2848+ apt.init()
2849+ return apt.version_compare(available_vers, cur_vers) == 1
2850+
2851+
2852+def ensure_block_device(block_device):
2853+ '''
2854+ Confirm block_device, create as loopback if necessary.
2855+
2856+ :param block_device: str: Full path of block device to ensure.
2857+
2858+ :returns: str: Full path of ensured block device.
2859+ '''
2860+ _none = ['None', 'none', None]
2861+ if (block_device in _none):
2862+ error_out('ensure_block_device(): Missing required input: block_device=%s.'
2863+ % block_device)
2864+
2865+ if block_device.startswith('/dev/'):
2866+ bdev = block_device
2867+ elif block_device.startswith('/'):
2868+ _bd = block_device.split('|')
2869+ if len(_bd) == 2:
2870+ bdev, size = _bd
2871+ else:
2872+ bdev = block_device
2873+ size = DEFAULT_LOOPBACK_SIZE
2874+ bdev = ensure_loopback_device(bdev, size)
2875+ else:
2876+ bdev = '/dev/%s' % block_device
2877+
2878+ if not is_block_device(bdev):
2879+ error_out('Failed to locate valid block device at %s' % bdev)
2880+
2881+ return bdev
2882+
2883+
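The `'path|size'` convention accepted by `ensure_block_device()` can be isolated into a pure parsing step; `DEFAULT_LOOPBACK_SIZE` below is an assumed value (the real constant is defined elsewhere in this module):

```python
DEFAULT_LOOPBACK_SIZE = '5G'  # assumed default; the real constant lives elsewhere


def parse_block_device(block_device):
    """Split an optional '|size' suffix off a loopback file path,
    as ensure_block_device() does for non-/dev/ absolute paths."""
    parts = block_device.split('|')
    if len(parts) == 2:
        return parts[0], parts[1]
    return block_device, DEFAULT_LOOPBACK_SIZE
```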
2884+def clean_storage(block_device):
2885+ '''
2886+ Ensures a block device is clean. That is:
2887+ - unmounted
2888+ - any lvm volume groups are deactivated
2889+ - any lvm physical device signatures removed
2890+ - partition table wiped
2891+
2892+ :param block_device: str: Full path to block device to clean.
2893+ '''
2894+ for mp, d in mounts():
2895+ if d == block_device:
2896+ juju_log('clean_storage(): %s is mounted @ %s, unmounting.' %
2897+ (d, mp), level=INFO)
2898+ umount(mp, persist=True)
2899+
2900+ if is_lvm_physical_volume(block_device):
2901+ deactivate_lvm_volume_group(block_device)
2902+ remove_lvm_physical_volume(block_device)
2903+ else:
2904+ zap_disk(block_device)
2905+
2906+
2907+def is_ip(address):
2908+ """
2909+ Returns True if address is a valid IP address.
2910+ """
2911+ try:
2912+ # Test to see if already an IPv4 address
2913+ socket.inet_aton(address)
2914+ return True
2915+ except socket.error:
2916+ return False
2917+
2918+
2919+def ns_query(address):
2920+ try:
2921+ import dns.resolver
2922+ except ImportError:
2923+ apt_install('python-dnspython')
2924+ import dns.resolver
2925+
2926+ if isinstance(address, dns.name.Name):
2927+ rtype = 'PTR'
2928+ elif isinstance(address, six.string_types):
2929+ rtype = 'A'
2930+ else:
2931+ return None
2932+
2933+ answers = dns.resolver.query(address, rtype)
2934+ if answers:
2935+ return str(answers[0])
2936+ return None
2937+
2938+
2939+def get_host_ip(hostname):
2940+ """
2941+ Resolves the IP for a given hostname, or returns
2942+ the input if it is already an IP.
2943+ """
2944+ if is_ip(hostname):
2945+ return hostname
2946+
2947+ return ns_query(hostname)
2948+
2949+
2950+def get_hostname(address, fqdn=True):
2951+ """
2952+ Resolves hostname for given IP, or returns the input
2953+ if it is already a hostname.
2954+ """
2955+ if is_ip(address):
2956+ try:
2957+ import dns.reversename
2958+ except ImportError:
2959+ apt_install('python-dnspython')
2960+ import dns.reversename
2961+
2962+ rev = dns.reversename.from_address(address)
2963+ result = ns_query(rev)
2964+ if not result:
2965+ return None
2966+ else:
2967+ result = address
2968+
2969+ if fqdn:
2970+ # strip trailing .
2971+ if result.endswith('.'):
2972+ return result[:-1]
2973+ else:
2974+ return result
2975+ else:
2976+ return result.split('.')[0]
2977+
2978+
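`is_ip()` relies on `socket.inet_aton`, which can be tried standalone. Note it is a permissive IPv4 check (it also accepts short forms such as `'10.1'`) and rejects IPv6 addresses:

```python
import socket


def is_ipv4(address):
    """True if the address parses as an IPv4 address (same test as is_ip())."""
    try:
        socket.inet_aton(address)
        return True
    except socket.error:
        return False
```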
2979+def get_matchmaker_map(mm_file='/etc/oslo/matchmaker_ring.json'):
2980+ mm_map = {}
2981+ if os.path.isfile(mm_file):
2982+ with open(mm_file, 'r') as f:
2983+ mm_map = json.load(f)
2984+ return mm_map
2985+
2986+
2987+def sync_db_with_multi_ipv6_addresses(database, database_user,
2988+ relation_prefix=None):
2989+ hosts = get_ipv6_addr(dynamic_only=False)
2990+
2991+ kwargs = {'database': database,
2992+ 'username': database_user,
2993+ 'hostname': json.dumps(hosts)}
2994+
2995+ if relation_prefix:
2996+ for key in list(kwargs.keys()):
2997+ kwargs["%s_%s" % (relation_prefix, key)] = kwargs[key]
2998+ del kwargs[key]
2999+
3000+ for rid in relation_ids('shared-db'):
3001+ relation_set(relation_id=rid, **kwargs)
3002+
3003+
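The settings dict built by `sync_db_with_multi_ipv6_addresses()` — a JSON-encoded host list, with keys optionally prefixed — can be reproduced without a relation context (`build_db_settings` is a hypothetical helper name):

```python
import json


def build_db_settings(database, user, hosts, relation_prefix=None):
    """Build the relation settings dict the way
    sync_db_with_multi_ipv6_addresses() does before relation_set()."""
    kwargs = {'database': database,
              'username': user,
              'hostname': json.dumps(hosts)}
    if relation_prefix:
        # Rewrite every key as '<prefix>_<key>'.
        kwargs = {'%s_%s' % (relation_prefix, k): v
                  for k, v in kwargs.items()}
    return kwargs
```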
3004+def os_requires_version(ostack_release, pkg):
3005+ """
3006+ Decorator for hook to specify minimum supported release
3007+ """
3008+ def wrap(f):
3009+ @wraps(f)
3010+ def wrapped_f(*args):
3011+ if os_release(pkg) < ostack_release:
3012+ raise Exception("This hook is not supported on releases"
3013+ " before %s" % ostack_release)
3014+ f(*args)
3015+ return wrapped_f
3016+ return wrap
3017+
3018+
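The decorator above works because OpenStack codenames happen to sort alphabetically (essex < folsom < ... < juno), so a plain string comparison orders releases. A standalone sketch with the current release passed in explicitly; like the original, enforcement happens at call time, not at decoration time (unlike the original `wrapped_f`, the sketch also returns `f`'s result):

```python
from functools import wraps


def requires_version(minimum, current_release):
    """Raise if current_release sorts before minimum (alphabetical order
    matches OpenStack release order for essex through juno)."""
    def wrap(f):
        @wraps(f)
        def wrapped(*args):
            if current_release < minimum:
                raise Exception("This hook is not supported on releases"
                                " before %s" % minimum)
            return f(*args)
        return wrapped
    return wrap


@requires_version('icehouse', 'juno')
def supported_hook():
    return 'ran'


@requires_version('icehouse', 'havana')
def unsupported_hook():
    return 'ran'
```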
3019+def git_install_requested():
3020+ """Returns true if openstack-origin-git is specified."""
3021+ return config('openstack-origin-git') != "None"
3022+
3023+
3024+requirements_dir = None
3025+
3026+
3027+def git_clone_and_install(file_name, core_project):
3028+ """Clone/install all OpenStack repos specified in yaml config file."""
3029+ global requirements_dir
3030+
3031+ if file_name == "None":
3032+ return
3033+
3034+ yaml_file = os.path.join(charm_dir(), file_name)
3035+
3036+ # clone/install the requirements project first
3037+ installed = _git_clone_and_install_subset(yaml_file,
3038+ whitelist=['requirements'])
3039+ if 'requirements' not in installed:
3040+ error_out('requirements git repository must be specified')
3041+
3042+ # clone/install all other projects except requirements and the core project
3043+ blacklist = ['requirements', core_project]
3044+ _git_clone_and_install_subset(yaml_file, blacklist=blacklist,
3045+ update_requirements=True)
3046+
3047+ # clone/install the core project
3048+ whitelist = [core_project]
3049+ installed = _git_clone_and_install_subset(yaml_file, whitelist=whitelist,
3050+ update_requirements=True)
3051+ if core_project not in installed:
3052+ error_out('{} git repository must be specified'.format(core_project))
3053+
3054+
3055+def _git_clone_and_install_subset(yaml_file, whitelist=[], blacklist=[],
3056+ update_requirements=False):
3057+ """Clone/install subset of OpenStack repos specified in yaml config file."""
3058+ global requirements_dir
3059+ installed = []
3060+
3061+ with open(yaml_file, 'r') as fd:
3062+ projects = yaml.load(fd)
3063+ for proj, val in projects.items():
3064+ # The project subset is chosen based on the following 3 rules:
3065+ # 1) If project is in blacklist, we don't clone/install it, period.
3066+ # 2) If whitelist is empty, we clone/install everything else.
3067+ # 3) If whitelist is not empty, we clone/install everything in the
3068+ # whitelist.
3069+ if proj in blacklist:
3070+ continue
3071+ if whitelist and proj not in whitelist:
3072+ continue
3073+ repo = val['repository']
3074+ branch = val['branch']
3075+ repo_dir = _git_clone_and_install_single(repo, branch,
3076+ update_requirements)
3077+ if proj == 'requirements':
3078+ requirements_dir = repo_dir
3079+ installed.append(proj)
3080+ return installed
3081+
3082+
3083+def _git_clone_and_install_single(repo, branch, update_requirements=False):
3084+ """Clone and install a single git repository."""
3085+ dest_parent_dir = "/mnt/openstack-git/"
3086+ dest_dir = os.path.join(dest_parent_dir, os.path.basename(repo))
3087+
3088+ if not os.path.exists(dest_parent_dir):
3089+ juju_log('Host dir not mounted at {}. '
3090+ 'Creating directory there instead.'.format(dest_parent_dir))
3091+ os.mkdir(dest_parent_dir)
3092+
3093+ if not os.path.exists(dest_dir):
3094+ juju_log('Cloning git repo: {}, branch: {}'.format(repo, branch))
3095+ repo_dir = install_remote(repo, dest=dest_parent_dir, branch=branch)
3096+ else:
3097+ repo_dir = dest_dir
3098+
3099+ if update_requirements:
3100+ if not requirements_dir:
3101+ error_out('requirements repo must be cloned before '
3102+ 'updating from global requirements.')
3103+ _git_update_requirements(repo_dir, requirements_dir)
3104+
3105+ juju_log('Installing git repo from dir: {}'.format(repo_dir))
3106+ pip_install(repo_dir)
3107+
3108+ return repo_dir
3109+
3110+
3111+def _git_update_requirements(package_dir, reqs_dir):
3112+ """Update from global requirements.
3113+
3114+ Update an OpenStack git directory's requirements.txt and
3115+ test-requirements.txt from global-requirements.txt."""
3116+ orig_dir = os.getcwd()
3117+ os.chdir(reqs_dir)
3118+ cmd = "python update.py {}".format(package_dir)
3119+ try:
3120+ subprocess.check_call(cmd.split(' '))
3121+ except subprocess.CalledProcessError:
3122+ package = os.path.basename(package_dir)
3123+ error_out("Error updating {} from global-requirements.txt".format(package))
3124+ os.chdir(orig_dir)
3125
3126=== added directory 'hooks/charmhelpers/core'
3127=== added file 'hooks/charmhelpers/core/__init__.py'
3128=== added file 'hooks/charmhelpers/core/fstab.py'
3129--- hooks/charmhelpers/core/fstab.py 1970-01-01 00:00:00 +0000
3130+++ hooks/charmhelpers/core/fstab.py 2014-12-11 11:30:34 +0000
3131@@ -0,0 +1,118 @@
3132+#!/usr/bin/env python
3133+# -*- coding: utf-8 -*-
3134+
3135+__author__ = 'Jorge Niedbalski R. <jorge.niedbalski@canonical.com>'
3136+
3137+import io
3138+import os
3139+
3140+
3141+class Fstab(io.FileIO):
3142+ """This class extends file in order to implement a file reader/writer
3143+ for file `/etc/fstab`
3144+ """
3145+
3146+ class Entry(object):
3147+ """Entry class represents a non-comment line on the `/etc/fstab` file
3148+ """
3149+ def __init__(self, device, mountpoint, filesystem,
3150+ options, d=0, p=0):
3151+ self.device = device
3152+ self.mountpoint = mountpoint
3153+ self.filesystem = filesystem
3154+
3155+ if not options:
3156+ options = "defaults"
3157+
3158+ self.options = options
3159+ self.d = int(d)
3160+ self.p = int(p)
3161+
3162+ def __eq__(self, o):
3163+ return str(self) == str(o)
3164+
3165+ def __str__(self):
3166+ return "{} {} {} {} {} {}".format(self.device,
3167+ self.mountpoint,
3168+ self.filesystem,
3169+ self.options,
3170+ self.d,
3171+ self.p)
3172+
3173+ DEFAULT_PATH = os.path.join(os.path.sep, 'etc', 'fstab')
3174+
3175+ def __init__(self, path=None):
3176+ if path:
3177+ self._path = path
3178+ else:
3179+ self._path = self.DEFAULT_PATH
3180+ super(Fstab, self).__init__(self._path, 'rb+')
3181+
3182+ def _hydrate_entry(self, line):
3183+ # NOTE: use split with no arguments to split on any
3184+ # whitespace including tabs
3185+ return Fstab.Entry(*filter(
3186+ lambda x: x not in ('', None),
3187+ line.strip("\n").split()))
3188+
3189+ @property
3190+ def entries(self):
3191+ self.seek(0)
3192+ for line in self.readlines():
3193+ line = line.decode('us-ascii')
3194+ try:
3195+ if line.strip() and not line.startswith("#"):
3196+ yield self._hydrate_entry(line)
3197+ except ValueError:
3198+ pass
3199+
3200+ def get_entry_by_attr(self, attr, value):
3201+ for entry in self.entries:
3202+ e_attr = getattr(entry, attr)
3203+ if e_attr == value:
3204+ return entry
3205+ return None
3206+
3207+ def add_entry(self, entry):
3208+ if self.get_entry_by_attr('device', entry.device):
3209+ return False
3210+
3211+ self.write((str(entry) + '\n').encode('us-ascii'))
3212+ self.truncate()
3213+ return entry
3214+
3215+ def remove_entry(self, entry):
3216+ self.seek(0)
3217+
3218+ lines = [l.decode('us-ascii') for l in self.readlines()]
3219+
3220+ found = False
3221+ for index, line in enumerate(lines):
3222+ if not line.startswith("#"):
3223+ if self._hydrate_entry(line) == entry:
3224+ found = True
3225+ break
3226+
3227+ if not found:
3228+ return False
3229+
3230+ lines.remove(line)
3231+
3232+ self.seek(0)
3233+ self.write(''.join(lines).encode('us-ascii'))
3234+ self.truncate()
3235+ return True
3236+
3237+ @classmethod
3238+ def remove_by_mountpoint(cls, mountpoint, path=None):
3239+ fstab = cls(path=path)
3240+ entry = fstab.get_entry_by_attr('mountpoint', mountpoint)
3241+ if entry:
3242+ return fstab.remove_entry(entry)
3243+ return False
3244+
3245+ @classmethod
3246+ def add(cls, device, mountpoint, filesystem, options=None, path=None):
3247+ return cls(path=path).add_entry(Fstab.Entry(device,
3248+ mountpoint, filesystem,
3249+ options=options))
3250
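The whitespace-tolerant parsing in `Fstab._hydrate_entry` can be exercised with a standalone copy of the nested `Entry` class, with no file I/O involved:

```python
class Entry(object):
    """Standalone copy of Fstab.Entry, for illustration only."""
    def __init__(self, device, mountpoint, filesystem, options=None, d=0, p=0):
        self.device = device
        self.mountpoint = mountpoint
        self.filesystem = filesystem
        self.options = options or "defaults"
        self.d = int(d)
        self.p = int(p)

    def __eq__(self, o):
        return str(self) == str(o)

    def __str__(self):
        return "{} {} {} {} {} {}".format(self.device, self.mountpoint,
                                          self.filesystem, self.options,
                                          self.d, self.p)


def hydrate(line):
    # split() with no arguments splits on any run of whitespace,
    # including tabs -- the same trick _hydrate_entry uses.
    return Entry(*line.strip("\n").split())
```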
3251=== added file 'hooks/charmhelpers/core/hookenv.py'
3252--- hooks/charmhelpers/core/hookenv.py 1970-01-01 00:00:00 +0000
3253+++ hooks/charmhelpers/core/hookenv.py 2014-12-11 11:30:34 +0000
3254@@ -0,0 +1,552 @@
3255+"Interactions with the Juju environment"
3256+# Copyright 2013 Canonical Ltd.
3257+#
3258+# Authors:
3259+# Charm Helpers Developers <juju@lists.ubuntu.com>
3260+
3261+import os
3262+import json
3263+import yaml
3264+import subprocess
3265+import sys
3266+from subprocess import CalledProcessError
3267+
3268+import six
3269+if not six.PY3:
3270+ from UserDict import UserDict
3271+else:
3272+ from collections import UserDict
3273+
3274+CRITICAL = "CRITICAL"
3275+ERROR = "ERROR"
3276+WARNING = "WARNING"
3277+INFO = "INFO"
3278+DEBUG = "DEBUG"
3279+MARKER = object()
3280+
3281+cache = {}
3282+
3283+
3284+def cached(func):
3285+ """Cache return values for multiple executions of func + args
3286+
3287+ For example::
3288+
3289+ @cached
3290+ def unit_get(attribute):
3291+ pass
3292+
3293+ unit_get('test')
3294+
3295+ will cache the result of unit_get + 'test' for future calls.
3296+ """
3297+ def wrapper(*args, **kwargs):
3298+ global cache
3299+ key = str((func, args, kwargs))
3300+ try:
3301+ return cache[key]
3302+ except KeyError:
3303+ res = func(*args, **kwargs)
3304+ cache[key] = res
3305+ return res
3306+ return wrapper
3307+
3308+
3309+def flush(key):
3310+ """Flushes any entries from function cache where the
3311+ key is found in the function+args """
3312+ flush_list = []
3313+ for item in cache:
3314+ if key in item:
3315+ flush_list.append(item)
3316+ for item in flush_list:
3317+ del cache[item]
3318+
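`@cached` keys the module-level dict on the stringified `(function, args, kwargs)` tuple, which is what lets `flush()` invalidate entries by substring match. A trimmed, runnable copy:

```python
# Trimmed copy of the memoizer and flush() above.
cache = {}


def cached(func):
    def wrapper(*args, **kwargs):
        key = str((func, args, kwargs))
        if key not in cache:
            cache[key] = func(*args, **kwargs)
        return cache[key]
    return wrapper


def flush(key):
    """Drop every cache entry whose stringified key mentions `key`."""
    for item in [k for k in cache if key in k]:
        del cache[item]


calls = []


@cached
def lookup(name):
    calls.append(name)  # records real invocations, so caching is observable
    return name.upper()
```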
3319+
3320+def log(message, level=None):
3321+ """Write a message to the juju log"""
3322+ command = ['juju-log']
3323+ if level:
3324+ command += ['-l', level]
3325+ if not isinstance(message, six.string_types):
3326+ message = repr(message)
3327+ command += [message]
3328+ subprocess.call(command)
3329+
3330+
3331+class Serializable(UserDict):
3332+ """Wrapper, an object that can be serialized to yaml or json"""
3333+
3334+ def __init__(self, obj):
3335+ # wrap the object
3336+ UserDict.__init__(self)
3337+ self.data = obj
3338+
3339+ def __getattr__(self, attr):
3340+ # See if this object has attribute.
3341+ if attr in ("json", "yaml", "data"):
3342+ return self.__dict__[attr]
3343+ # Check for attribute in wrapped object.
3344+ got = getattr(self.data, attr, MARKER)
3345+ if got is not MARKER:
3346+ return got
3347+ # Proxy to the wrapped object via dict interface.
3348+ try:
3349+ return self.data[attr]
3350+ except KeyError:
3351+ raise AttributeError(attr)
3352+
3353+ def __getstate__(self):
3354+ # Pickle as a standard dictionary.
3355+ return self.data
3356+
3357+ def __setstate__(self, state):
3358+ # Unpickle into our wrapper.
3359+ self.data = state
3360+
3361+ def json(self):
3362+ """Serialize the object to json"""
3363+ return json.dumps(self.data)
3364+
3365+ def yaml(self):
3366+ """Serialize the object to yaml"""
3367+ return yaml.dump(self.data)
3368+
3369+
3370+def execution_environment():
3371+ """A convenient bundling of the current execution context"""
3372+ context = {}
3373+ context['conf'] = config()
3374+ if relation_id():
3375+ context['reltype'] = relation_type()
3376+ context['relid'] = relation_id()
3377+ context['rel'] = relation_get()
3378+ context['unit'] = local_unit()
3379+ context['rels'] = relations()
3380+ context['env'] = os.environ
3381+ return context
3382+
3383+
3384+def in_relation_hook():
3385+ """Determine whether we're running in a relation hook"""
3386+ return 'JUJU_RELATION' in os.environ
3387+
3388+
3389+def relation_type():
3390+ """The scope for the current relation hook"""
3391+ return os.environ.get('JUJU_RELATION', None)
3392+
3393+
3394+def relation_id():
3395+ """The relation ID for the current relation hook"""
3396+ return os.environ.get('JUJU_RELATION_ID', None)
3397+
3398+
3399+def local_unit():
3400+ """Local unit ID"""
3401+ return os.environ['JUJU_UNIT_NAME']
3402+
3403+
3404+def remote_unit():
3405+ """The remote unit for the current relation hook"""
3406+ return os.environ['JUJU_REMOTE_UNIT']
3407+
3408+
3409+def service_name():
3410+ """The name service group this unit belongs to"""
3411+ return local_unit().split('/')[0]
3412+
3413+
3414+def hook_name():
3415+ """The name of the currently executing hook"""
3416+ return os.path.basename(sys.argv[0])
3417+
3418+
3419+class Config(dict):
3420+ """A dictionary representation of the charm's config.yaml, with some
3421+ extra features:
3422+
3423+ - See which values in the dictionary have changed since the previous hook.
3424+ - For values that have changed, see what the previous value was.
3425+ - Store arbitrary data for use in a later hook.
3426+
3427+ NOTE: Do not instantiate this object directly - instead call
3428+ ``hookenv.config()``, which will return an instance of :class:`Config`.
3429+
3430+ Example usage::
3431+
3432+ >>> # inside a hook
3433+ >>> from charmhelpers.core import hookenv
3434+ >>> config = hookenv.config()
3435+ >>> config['foo']
3436+ 'bar'
3437+ >>> # store a new key/value for later use
3438+ >>> config['mykey'] = 'myval'
3439+
3440+
3441+ >>> # user runs `juju set mycharm foo=baz`
3442+ >>> # now we're inside subsequent config-changed hook
3443+ >>> config = hookenv.config()
3444+ >>> config['foo']
3445+ 'baz'
3446+ >>> # test to see if this val has changed since last hook
3447+ >>> config.changed('foo')
3448+ True
3449+ >>> # what was the previous value?
3450+ >>> config.previous('foo')
3451+ 'bar'
3452+ >>> # keys/values that we add are preserved across hooks
3453+ >>> config['mykey']
3454+ 'myval'
3455+
3456+ """
3457+ CONFIG_FILE_NAME = '.juju-persistent-config'
3458+
3459+ def __init__(self, *args, **kw):
3460+ super(Config, self).__init__(*args, **kw)
3461+ self.implicit_save = True
3462+ self._prev_dict = None
3463+ self.path = os.path.join(charm_dir(), Config.CONFIG_FILE_NAME)
3464+ if os.path.exists(self.path):
3465+ self.load_previous()
3466+
3467+ def __getitem__(self, key):
3468+ """For regular dict lookups, check the current juju config first,
3469+ then the previous (saved) copy. This ensures that user-saved values
3470+ will be returned by a dict lookup.
3471+
3472+ """
3473+ try:
3474+ return dict.__getitem__(self, key)
3475+ except KeyError:
3476+ return (self._prev_dict or {})[key]
3477+
3478+ def keys(self):
3479+ prev_keys = []
3480+ if self._prev_dict is not None:
3481+ prev_keys = self._prev_dict.keys()
3482+ return list(set(prev_keys + list(dict.keys(self))))
3483+
3484+ def load_previous(self, path=None):
3485+ """Load previous copy of config from disk.
3486+
3487+ In normal usage you don't need to call this method directly - it
3488+ is called automatically at object initialization.
3489+
3490+ :param path:
3491+
3492+ File path from which to load the previous config. If `None`,
3493+ config is loaded from the default location. If `path` is
3494+ specified, subsequent `save()` calls will write to the same
3495+ path.
3496+
3497+ """
3498+ self.path = path or self.path
3499+ with open(self.path) as f:
3500+ self._prev_dict = json.load(f)
3501+
3502+ def changed(self, key):
3503+ """Return True if the current value for this key is different from
3504+ the previous value.
3505+
3506+ """
3507+ if self._prev_dict is None:
3508+ return True
3509+ return self.previous(key) != self.get(key)
3510+
3511+ def previous(self, key):
3512+ """Return previous value for this key, or None if there
3513+ is no previous value.
3514+
3515+ """
3516+ if self._prev_dict:
3517+ return self._prev_dict.get(key)
3518+ return None
3519+
3520+ def save(self):
3521+ """Save this config to disk.
3522+
3523+ If the charm is using the :mod:`Services Framework <services.base>`
3524+ or :meth:'@hook <Hooks.hook>' decorator, this
3525+ is called automatically at the end of successful hook execution.
3526+ Otherwise, it should be called directly by user code.
3527+
3528+ To disable automatic saves, set ``implicit_save=False`` on this
3529+ instance.
3530+
3531+ """
3532+ if self._prev_dict:
3533+ for k, v in six.iteritems(self._prev_dict):
3534+ if k not in self:
3535+ self[k] = v
3536+ with open(self.path, 'w') as f:
3537+ json.dump(self, f)
3538+
3539+
3540+@cached
3541+def config(scope=None):
3542+ """Juju charm configuration"""
3543+ config_cmd_line = ['config-get']
3544+ if scope is not None:
3545+ config_cmd_line.append(scope)
3546+ config_cmd_line.append('--format=json')
3547+ try:
3548+ config_data = json.loads(
3549+ subprocess.check_output(config_cmd_line).decode('UTF-8'))
3550+ if scope is not None:
3551+ return config_data
3552+ return Config(config_data)
3553+ except ValueError:
3554+ return None
3555+
3556+
3557+@cached
3558+def relation_get(attribute=None, unit=None, rid=None):
3559+ """Get relation information"""
3560+ _args = ['relation-get', '--format=json']
3561+ if rid:
3562+ _args.append('-r')
3563+ _args.append(rid)
3564+ _args.append(attribute or '-')
3565+ if unit:
3566+ _args.append(unit)
3567+ try:
3568+ return json.loads(subprocess.check_output(_args).decode('UTF-8'))
3569+ except ValueError:
3570+ return None
3571+ except CalledProcessError as e:
3572+ if e.returncode == 2:
3573+ return None
3574+ raise
3575+
3576+
3577+def relation_set(relation_id=None, relation_settings=None, **kwargs):
3578+ """Set relation information for the current unit"""
3579+ relation_settings = relation_settings if relation_settings else {}
3580+ relation_cmd_line = ['relation-set']
3581+ if relation_id is not None:
3582+ relation_cmd_line.extend(('-r', relation_id))
3583+ for k, v in (list(relation_settings.items()) + list(kwargs.items())):
3584+ if v is None:
3585+ relation_cmd_line.append('{}='.format(k))
3586+ else:
3587+ relation_cmd_line.append('{}={}'.format(k, v))
3588+ subprocess.check_call(relation_cmd_line)
3589+ # Flush cache of any relation-gets for local unit
3590+ flush(local_unit())
3591+
3592+
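The argv that `relation_set()` hands to the `relation-set` hook tool can be built without a Juju environment; `None` values become bare `key=` assignments, which unset the key on the relation (`build_relation_set_cmd` is a hypothetical name):

```python
def build_relation_set_cmd(relation_id=None, relation_settings=None, **kwargs):
    """Build the relation-set argv, mirroring relation_set() above."""
    relation_settings = relation_settings or {}
    cmd = ['relation-set']
    if relation_id is not None:
        cmd.extend(('-r', relation_id))
    for k, v in list(relation_settings.items()) + list(kwargs.items()):
        # None -> 'key=' (unsets the key); anything else -> 'key=value'.
        cmd.append('{}='.format(k) if v is None else '{}={}'.format(k, v))
    return cmd
```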
3593+@cached
3594+def relation_ids(reltype=None):
3595+ """A list of relation_ids"""
3596+ reltype = reltype or relation_type()
3597+ relid_cmd_line = ['relation-ids', '--format=json']
3598+ if reltype is not None:
3599+ relid_cmd_line.append(reltype)
3600+ return json.loads(
3601+ subprocess.check_output(relid_cmd_line).decode('UTF-8')) or []
3602+ return []
3603+
3604+
3605+@cached
3606+def related_units(relid=None):
3607+ """A list of related units"""
3608+ relid = relid or relation_id()
3609+ units_cmd_line = ['relation-list', '--format=json']
3610+ if relid is not None:
3611+ units_cmd_line.extend(('-r', relid))
3612+ return json.loads(
3613+ subprocess.check_output(units_cmd_line).decode('UTF-8')) or []
3614+
3615+
3616+@cached
3617+def relation_for_unit(unit=None, rid=None):
3618+ """Get the json represenation of a unit's relation"""
3619+ unit = unit or remote_unit()
3620+ relation = relation_get(unit=unit, rid=rid)
3621+ for key in relation:
3622+ if key.endswith('-list'):
3623+ relation[key] = relation[key].split()
3624+ relation['__unit__'] = unit
3625+ return relation
3626+
3627+
3628+@cached
3629+def relations_for_id(relid=None):
3630+ """Get relations of a specific relation ID"""
3631+ relation_data = []
3632+ relid = relid or relation_ids()
3633+ for unit in related_units(relid):
3634+ unit_data = relation_for_unit(unit, relid)
3635+ unit_data['__relid__'] = relid
3636+ relation_data.append(unit_data)
3637+ return relation_data
3638+
3639+
3640+@cached
3641+def relations_of_type(reltype=None):
3642+ """Get relations of a specific type"""
3643+ relation_data = []
3644+ reltype = reltype or relation_type()
3645+ for relid in relation_ids(reltype):
3646+ for relation in relations_for_id(relid):
3647+ relation['__relid__'] = relid
3648+ relation_data.append(relation)
3649+ return relation_data
3650+
3651+
3652+@cached
3653+def metadata():
3654+ """Get the current charm metadata.yaml contents as a python object"""
3655+ with open(os.path.join(charm_dir(), 'metadata.yaml')) as md:
3656+ return yaml.safe_load(md)
3657+
3658+
3659+@cached
3660+def relation_types():
3661+ """Get a list of relation types supported by this charm"""
3662+ rel_types = []
3663+ md = metadata()
3664+ for key in ('provides', 'requires', 'peers'):
3665+ section = md.get(key)
3666+ if section:
3667+ rel_types.extend(section.keys())
3668+ return rel_types
3669+
3670+
3671+@cached
3672+def charm_name():
3673+ """Get the name of the current charm as is specified on metadata.yaml"""
3674+ return metadata().get('name')
3675+
3676+
3677+@cached
3678+def relations():
3679+ """Get a nested dictionary of relation data for all related units"""
3680+ rels = {}
3681+ for reltype in relation_types():
3682+ relids = {}
3683+ for relid in relation_ids(reltype):
3684+ units = {local_unit(): relation_get(unit=local_unit(), rid=relid)}
3685+ for unit in related_units(relid):
3686+ reldata = relation_get(unit=unit, rid=relid)
3687+ units[unit] = reldata
3688+ relids[relid] = units
3689+ rels[reltype] = relids
3690+ return rels
3691+
3692+
3693+@cached
3694+def is_relation_made(relation, keys='private-address'):
3695+ '''
3696+ Determine whether a relation is established by checking for
3697+ presence of key(s). If a list of keys is provided, they
3698+ must all be present for the relation to be identified as made
3699+ '''
3700+ if isinstance(keys, str):
3701+ keys = [keys]
3702+ for r_id in relation_ids(relation):
3703+ for unit in related_units(r_id):
3704+ context = {}
3705+ for k in keys:
3706+ context[k] = relation_get(k, rid=r_id,
3707+ unit=unit)
3708+ if None not in context.values():
3709+ return True
3710+ return False
3711+
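`is_relation_made()`'s "every key present on some unit" rule, restated over a plain dict of unit settings (`is_relation_made_sketch` is a hypothetical stand-in with the relation-get calls replaced by dict lookups):

```python
def is_relation_made_sketch(relation_data, keys='private-address'):
    """relation_data maps unit -> settings dict; the relation counts as
    made when at least one unit supplies a non-None value for every key."""
    if isinstance(keys, str):
        keys = [keys]
    for unit, settings in relation_data.items():
        if all(settings.get(k) is not None for k in keys):
            return True
    return False
```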
3712+
3713+def open_port(port, protocol="TCP"):
3714+ """Open a service network port"""
3715+ _args = ['open-port']
3716+ _args.append('{}/{}'.format(port, protocol))
3717+ subprocess.check_call(_args)
3718+
3719+
3720+def close_port(port, protocol="TCP"):
3721+ """Close a service network port"""
3722+ _args = ['close-port']
3723+ _args.append('{}/{}'.format(port, protocol))
3724+ subprocess.check_call(_args)
3725+
3726+
3727+@cached
3728+def unit_get(attribute):
3729+ """Get the unit ID for the remote unit"""
3730+ _args = ['unit-get', '--format=json', attribute]
3731+ try:
3732+ return json.loads(subprocess.check_output(_args).decode('UTF-8'))
3733+ except ValueError:
3734+ return None
3735+
3736+
3737+def unit_private_ip():
3738+ """Get this unit's private IP address"""
3739+ return unit_get('private-address')
3740+
3741+
3742+class UnregisteredHookError(Exception):
3743+ """Raised when an undefined hook is called"""
3744+ pass
3745+
3746+
3747+class Hooks(object):
3748+ """A convenient handler for hook functions.
3749+
3750+ Example::
3751+
3752+ hooks = Hooks()
3753+
3754+ # register a hook, taking its name from the function name
3755+ @hooks.hook()
3756+ def install():
3757+ pass # your code here
3758+
3759+ # register a hook, providing a custom hook name
3760+ @hooks.hook("config-changed")
3761+ def config_changed():
3762+ pass # your code here
3763+
3764+ if __name__ == "__main__":
3765+ # execute a hook based on the name the program is called by
3766+ hooks.execute(sys.argv)
3767+ """
3768+
3769+ def __init__(self, config_save=True):
3770+ super(Hooks, self).__init__()
3771+ self._hooks = {}
3772+ self._config_save = config_save
3773+
3774+ def register(self, name, function):
3775+ """Register a hook"""
3776+ self._hooks[name] = function
3777+
3778+ def execute(self, args):
3779+ """Execute a registered hook based on args[0]"""
3780+ hook_name = os.path.basename(args[0])
3781+ if hook_name in self._hooks:
3782+ self._hooks[hook_name]()
3783+ if self._config_save:
3784+ cfg = config()
3785+ if cfg.implicit_save:
3786+ cfg.save()
3787+ else:
3788+ raise UnregisteredHookError(hook_name)
3789+
3790+ def hook(self, *hook_names):
3791+ """Decorator, registering them as hooks"""
3792+ def wrapper(decorated):
3793+ for hook_name in hook_names:
3794+ self.register(hook_name, decorated)
3795+ else:
3796+ self.register(decorated.__name__, decorated)
3797+ if '_' in decorated.__name__:
3798+ self.register(
3799+ decorated.__name__.replace('_', '-'), decorated)
3800+ return decorated
3801+ return wrapper
3802+
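The dispatch-by-`argv[0]` pattern in `Hooks` can be exercised end to end with a trimmed copy (config persistence dropped; the `for`/`else` in the original always registers the function's own name too, which this sketch folds into a default):

```python
import os


class HooksSketch(object):
    """Trimmed-down copy of the Hooks dispatcher above."""
    def __init__(self):
        self._hooks = {}

    def register(self, name, function):
        self._hooks[name] = function

    def hook(self, *hook_names):
        def wrapper(decorated):
            # Default to the function's own name when no names are given.
            for name in hook_names or [decorated.__name__]:
                self.register(name, decorated)
                if '_' in name:
                    self.register(name.replace('_', '-'), decorated)
            return decorated
        return wrapper

    def execute(self, args):
        # Juju invokes hooks via symlinks, so the hook name is argv[0].
        name = os.path.basename(args[0])
        if name not in self._hooks:
            raise KeyError(name)
        self._hooks[name]()


hooks = HooksSketch()
ran = []


@hooks.hook('config-changed')
def config_changed():
    ran.append('config-changed')
```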
3803+
3804+def charm_dir():
3805+ """Return the root directory of the current charm"""
3806+ return os.environ.get('CHARM_DIR')
3807
3808=== added file 'hooks/charmhelpers/core/host.py'
3809--- hooks/charmhelpers/core/host.py 1970-01-01 00:00:00 +0000
3810+++ hooks/charmhelpers/core/host.py 2014-12-11 11:30:34 +0000
3811@@ -0,0 +1,416 @@
3812+"""Tools for working with the host system"""
3813+# Copyright 2012 Canonical Ltd.
3814+#
3815+# Authors:
3816+# Nick Moffitt <nick.moffitt@canonical.com>
3817+# Matthew Wedgwood <matthew.wedgwood@canonical.com>
3818+
3819+import os
3820+import re
3821+import pwd
3822+import grp
3823+import random
3824+import string
3825+import subprocess
3826+import hashlib
3827+from contextlib import contextmanager
3828+from collections import OrderedDict
3829+
3830+import six
3831+
3832+from .hookenv import log
3833+from .fstab import Fstab
3834+
3835+
3836+def service_start(service_name):
3837+ """Start a system service"""
3838+ return service('start', service_name)
3839+
3840+
3841+def service_stop(service_name):
3842+ """Stop a system service"""
3843+ return service('stop', service_name)
3844+
3845+
3846+def service_restart(service_name):
3847+ """Restart a system service"""
3848+ return service('restart', service_name)
3849+
3850+
3851+def service_reload(service_name, restart_on_failure=False):
3852+ """Reload a system service, optionally falling back to restart if
3853+ reload fails"""
3854+ service_result = service('reload', service_name)
3855+ if not service_result and restart_on_failure:
3856+ service_result = service('restart', service_name)
3857+ return service_result
3858+
3859+
3860+def service(action, service_name):
3861+ """Control a system service"""
3862+ cmd = ['service', service_name, action]
3863+ return subprocess.call(cmd) == 0
3864+
3865+
3866+def service_running(service):
3867+ """Determine whether a system service is running"""
3868+ try:
3869+ output = subprocess.check_output(
3870+ ['service', service, 'status'],
3871+ stderr=subprocess.STDOUT).decode('UTF-8')
3872+ except subprocess.CalledProcessError:
3873+ return False
3874+ else:
3875+ if ("start/running" in output or "is running" in output):
3876+ return True
3877+ else:
3878+ return False
3879+
3880+
3881+def service_available(service_name):
3882+ """Determine whether a system service is available"""
3883+ try:
3884+ subprocess.check_output(
3885+ ['service', service_name, 'status'],
3886+ stderr=subprocess.STDOUT).decode('UTF-8')
3887+ except subprocess.CalledProcessError as e:
3888+ return 'unrecognized service' not in e.output
3889+ else:
3890+ return True
3891+
3892+
3893+def adduser(username, password=None, shell='/bin/bash', system_user=False):
3894+ """Add a user to the system"""
3895+ try:
3896+ user_info = pwd.getpwnam(username)
3897+ log('user {0} already exists!'.format(username))
3898+ except KeyError:
3899+ log('creating user {0}'.format(username))
3900+ cmd = ['useradd']
3901+ if system_user or password is None:
3902+ cmd.append('--system')
3903+ else:
3904+ cmd.extend([
3905+ '--create-home',
3906+ '--shell', shell,
3907+ '--password', password,
3908+ ])
3909+ cmd.append(username)
3910+ subprocess.check_call(cmd)
3911+ user_info = pwd.getpwnam(username)
3912+ return user_info
3913+
3914+
3915+def add_group(group_name, system_group=False):
3916+ """Add a group to the system"""
3917+ try:
3918+ group_info = grp.getgrnam(group_name)
3919+ log('group {0} already exists!'.format(group_name))
3920+ except KeyError:
3921+ log('creating group {0}'.format(group_name))
3922+ cmd = ['addgroup']
3923+ if system_group:
3924+ cmd.append('--system')
3925+ else:
3926+ cmd.extend([
3927+ '--group',
3928+ ])
3929+ cmd.append(group_name)
3930+ subprocess.check_call(cmd)
3931+ group_info = grp.getgrnam(group_name)
3932+ return group_info
3933+
3934+
3935+def add_user_to_group(username, group):
3936+ """Add a user to a group"""
3937+ cmd = [
3938+ 'gpasswd', '-a',
3939+ username,
3940+ group
3941+ ]
3942+ log("Adding user {} to group {}".format(username, group))
3943+ subprocess.check_call(cmd)
3944+
3945+
3946+def rsync(from_path, to_path, flags='-r', options=None):
3947+ """Replicate the contents of a path"""
3948+ options = options or ['--delete', '--executability']
3949+ cmd = ['/usr/bin/rsync', flags]
3950+ cmd.extend(options)
3951+ cmd.append(from_path)
3952+ cmd.append(to_path)
3953+ log(" ".join(cmd))
3954+ return subprocess.check_output(cmd).decode('UTF-8').strip()
3955+
3956+
3957+def symlink(source, destination):
3958+ """Create a symbolic link"""
3959+ log("Symlinking {} as {}".format(source, destination))
3960+ cmd = [
3961+ 'ln',
3962+ '-sf',
3963+ source,
3964+ destination,
3965+ ]
3966+ subprocess.check_call(cmd)
3967+
3968+
3969+def mkdir(path, owner='root', group='root', perms=0o555, force=False):
3970+ """Create a directory"""
3971+ log("Making dir {} {}:{} {:o}".format(path, owner, group,
3972+ perms))
3973+ uid = pwd.getpwnam(owner).pw_uid
3974+ gid = grp.getgrnam(group).gr_gid
3975+ realpath = os.path.abspath(path)
3976+ if os.path.exists(realpath):
3977+ if force and not os.path.isdir(realpath):
3978+ log("Removing non-directory file {} prior to mkdir()".format(path))
3979+ os.unlink(realpath)
3980+ else:
3981+ os.makedirs(realpath, perms)
3982+ os.chown(realpath, uid, gid)
3983+
3984+
3985+def write_file(path, content, owner='root', group='root', perms=0o444):
3986+ """Create or overwrite a file with the contents of a string"""
3987+ log("Writing file {} {}:{} {:o}".format(path, owner, group, perms))
3988+ uid = pwd.getpwnam(owner).pw_uid
3989+ gid = grp.getgrnam(group).gr_gid
3990+ with open(path, 'w') as target:
3991+ os.fchown(target.fileno(), uid, gid)
3992+ os.fchmod(target.fileno(), perms)
3993+ target.write(content)
3994+
3995+
3996+def fstab_remove(mp):
3997+ """Remove the given mountpoint entry from /etc/fstab
3998+ """
3999+ return Fstab.remove_by_mountpoint(mp)
4000+
4001+
4002+def fstab_add(dev, mp, fs, options=None):
4003+ """Adds the given device entry to the /etc/fstab file
4004+ """
4005+ return Fstab.add(dev, mp, fs, options=options)
4006+
4007+
4008+def mount(device, mountpoint, options=None, persist=False, filesystem="ext3"):
4009+ """Mount a filesystem at a particular mountpoint"""
4010+ cmd_args = ['mount']
4011+ if options is not None:
4012+ cmd_args.extend(['-o', options])
4013+ cmd_args.extend([device, mountpoint])
4014+ try:
4015+ subprocess.check_output(cmd_args)
4016+ except subprocess.CalledProcessError as e:
4017+ log('Error mounting {} at {}\n{}'.format(device, mountpoint, e.output))
4018+ return False
4019+
4020+ if persist:
4021+ return fstab_add(device, mountpoint, filesystem, options=options)
4022+ return True
4023+
4024+
4025+def umount(mountpoint, persist=False):
4026+ """Unmount a filesystem"""
4027+ cmd_args = ['umount', mountpoint]
4028+ try:
4029+ subprocess.check_output(cmd_args)
4030+ except subprocess.CalledProcessError as e:
4031+ log('Error unmounting {}\n{}'.format(mountpoint, e.output))
4032+ return False
4033+
4034+ if persist:
4035+ return fstab_remove(mountpoint)
4036+ return True
4037+
4038+
4039+def mounts():
4040+ """Get a list of all mounted volumes as [[mountpoint,device],[...]]"""
4041+ with open('/proc/mounts') as f:
4042+ # [['/mount/point','/dev/path'],[...]]
4043+ system_mounts = [m[1::-1] for m in [l.strip().split()
4044+ for l in f.readlines()]]
4045+ return system_mounts
4046+
4047+
4048+def file_hash(path, hash_type='md5'):
4049+ """
4050+ Generate a hash checksum of the contents of 'path' or None if not found.
4051+
4052+ :param str hash_type: Any hash algorithm supported by :mod:`hashlib`,
4053+ such as md5, sha1, sha256, sha512, etc.
4054+ """
4055+ if os.path.exists(path):
4056+ h = getattr(hashlib, hash_type)()
4057+ with open(path, 'rb') as source:
4058+ h.update(source.read())
4059+ return h.hexdigest()
4060+ else:
4061+ return None
4062+
4063+
4064+def check_hash(path, checksum, hash_type='md5'):
4065+ """
4066+ Validate a file using a cryptographic checksum.
4067+
4068+ :param str checksum: Value of the checksum used to validate the file.
4069+ :param str hash_type: Hash algorithm used to generate `checksum`.
4070+ Can be any hash algorithm supported by :mod:`hashlib`,
4071+ such as md5, sha1, sha256, sha512, etc.
4072+ :raises ChecksumError: If the file fails the checksum
4073+
4074+ """
4075+ actual_checksum = file_hash(path, hash_type)
4076+ if checksum != actual_checksum:
4077+ raise ChecksumError("'%s' != '%s'" % (checksum, actual_checksum))
4078+
4079+
4080+class ChecksumError(ValueError):
4081+ pass
4082+
4083+
4084+def restart_on_change(restart_map, stopstart=False):
4085+ """Restart services based on configuration files changing
4086+
4087+ This function is used as a decorator, for example::
4088+
4089+ @restart_on_change({
4090+ '/etc/ceph/ceph.conf': [ 'cinder-api', 'cinder-volume' ]
4091+ })
4092+ def ceph_client_changed():
4093+ pass # your code here
4094+
4095+ In this example, the cinder-api and cinder-volume services
4096+ would be restarted if /etc/ceph/ceph.conf is changed by the
4097+ ceph_client_changed function.
4098+ """
4099+ def wrap(f):
4100+ def wrapped_f(*args):
4101+ checksums = {}
4102+ for path in restart_map:
4103+ checksums[path] = file_hash(path)
4104+ f(*args)
4105+ restarts = []
4106+ for path in restart_map:
4107+ if checksums[path] != file_hash(path):
4108+ restarts += restart_map[path]
4109+ services_list = list(OrderedDict.fromkeys(restarts))
4110+ if not stopstart:
4111+ for service_name in services_list:
4112+ service('restart', service_name)
4113+ else:
4114+ for action in ['stop', 'start']:
4115+ for service_name in services_list:
4116+ service(action, service_name)
4117+ return wrapped_f
4118+ return wrap
4119+
4120+
4121+def lsb_release():
4122+ """Return /etc/lsb-release in a dict"""
4123+ d = {}
4124+ with open('/etc/lsb-release', 'r') as lsb:
4125+ for l in lsb:
4126+ k, v = l.split('=')
4127+ d[k.strip()] = v.strip()
4128+ return d
4129+
4130+
4131+def pwgen(length=None):
4132+ """Generate a random password."""
4133+ if length is None:
4134+ length = random.choice(range(35, 45))
4135+ alphanumeric_chars = [
4136+ l for l in (string.ascii_letters + string.digits)
4137+ if l not in 'l0QD1vAEIOUaeiou']
4138+ random_chars = [
4139+ random.choice(alphanumeric_chars) for _ in range(length)]
4140+ return(''.join(random_chars))
4141+
4142+
4143+def list_nics(nic_type):
4144+ '''Return a list of nics of given type(s)'''
4145+ if isinstance(nic_type, six.string_types):
4146+ int_types = [nic_type]
4147+ else:
4148+ int_types = nic_type
4149+ interfaces = []
4150+ for int_type in int_types:
4151+ cmd = ['ip', 'addr', 'show', 'label', int_type + '*']
4152+ ip_output = subprocess.check_output(cmd).decode('UTF-8').split('\n')
4153+ ip_output = (line for line in ip_output if line)
4154+ for line in ip_output:
4155+ if line.split()[1].startswith(int_type):
4156+ matched = re.search('.*: (bond[0-9]+\.[0-9]+)@.*', line)
4157+ if matched:
4158+ interface = matched.groups()[0]
4159+ else:
4160+ interface = line.split()[1].replace(":", "")
4161+ interfaces.append(interface)
4162+
4163+ return interfaces
4164+
4165+
4166+def set_nic_mtu(nic, mtu):
4167+ '''Set MTU on a network interface'''
4168+ cmd = ['ip', 'link', 'set', nic, 'mtu', mtu]
4169+ subprocess.check_call(cmd)
4170+
4171+
4172+def get_nic_mtu(nic):
4173+ cmd = ['ip', 'addr', 'show', nic]
4174+ ip_output = subprocess.check_output(cmd).decode('UTF-8').split('\n')
4175+ mtu = ""
4176+ for line in ip_output:
4177+ words = line.split()
4178+ if 'mtu' in words:
4179+ mtu = words[words.index("mtu") + 1]
4180+ return mtu
4181+
4182+
4183+def get_nic_hwaddr(nic):
4184+ cmd = ['ip', '-o', '-0', 'addr', 'show', nic]
4185+ ip_output = subprocess.check_output(cmd).decode('UTF-8')
4186+ hwaddr = ""
4187+ words = ip_output.split()
4188+ if 'link/ether' in words:
4189+ hwaddr = words[words.index('link/ether') + 1]
4190+ return hwaddr
4191+
4192+
4193+def cmp_pkgrevno(package, revno, pkgcache=None):
4194+ '''Compare supplied revno with the revno of the installed package
4195+
4196+ * 1 => Installed revno is greater than supplied arg
4197+ * 0 => Installed revno is the same as supplied arg
4198+ * -1 => Installed revno is less than supplied arg
4199+
4200+ '''
4201+ import apt_pkg
4202+ if not pkgcache:
4203+ from charmhelpers.fetch import apt_cache
4204+ pkgcache = apt_cache()
4205+ pkg = pkgcache[package]
4206+ return apt_pkg.version_compare(pkg.current_ver.ver_str, revno)
4207+
4208+
4209+@contextmanager
4210+def chdir(d):
4211+ cur = os.getcwd()
4212+ try:
4213+ yield os.chdir(d)
4214+ finally:
4215+ os.chdir(cur)
4216+
4217+
4218+def chownr(path, owner, group):
4219+ uid = pwd.getpwnam(owner).pw_uid
4220+ gid = grp.getgrnam(group).gr_gid
4221+
4222+ for root, dirs, files in os.walk(path):
4223+ for name in dirs + files:
4224+ full = os.path.join(root, name)
4225+ broken_symlink = os.path.lexists(full) and not os.path.exists(full)
4226+ if not broken_symlink:
4227+ os.chown(full, uid, gid)
4228
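Before the next added file: `host.py`'s `file_hash`/`check_hash` pair underpins `restart_on_change`, which snapshots config-file hashes before a hook runs and restarts services whose files changed. The following standalone sketch reproduces just the hashing pair (the temp-file exercise at the end is illustrative, not part of the charm code):

```python
import hashlib
import os
import tempfile


class ChecksumError(ValueError):
    pass


def file_hash(path, hash_type='md5'):
    """Return the hex digest of the file at `path`, or None if it is missing."""
    if not os.path.exists(path):
        return None
    h = getattr(hashlib, hash_type)()
    with open(path, 'rb') as source:
        h.update(source.read())
    return h.hexdigest()


def check_hash(path, checksum, hash_type='md5'):
    """Raise ChecksumError unless the file's digest matches `checksum`."""
    actual = file_hash(path, hash_type)
    if checksum != actual:
        raise ChecksumError("'%s' != '%s'" % (checksum, actual))


# Exercise the pair against a throwaway file:
with tempfile.NamedTemporaryFile(delete=False) as f:
    f.write(b'hello')
    path = f.name

expected = hashlib.md5(b'hello').hexdigest()
check_hash(path, expected)          # passes silently
print(file_hash(path) == expected)  # True
os.unlink(path)
```

`restart_on_change` then reduces to: record `file_hash` for each watched path, call the wrapped hook, and restart the services mapped to any path whose hash differs afterwards.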
4229=== added directory 'hooks/charmhelpers/core/services'
4230=== added file 'hooks/charmhelpers/core/services/__init__.py'
4231--- hooks/charmhelpers/core/services/__init__.py 1970-01-01 00:00:00 +0000
4232+++ hooks/charmhelpers/core/services/__init__.py 2014-12-11 11:30:34 +0000
4233@@ -0,0 +1,2 @@
4234+from .base import * # NOQA
4235+from .helpers import * # NOQA
4236
4237=== added file 'hooks/charmhelpers/core/services/base.py'
4238--- hooks/charmhelpers/core/services/base.py 1970-01-01 00:00:00 +0000
4239+++ hooks/charmhelpers/core/services/base.py 2014-12-11 11:30:34 +0000
4240@@ -0,0 +1,313 @@
4241+import os
4242+import re
4243+import json
4244+from collections import Iterable
4245+
4246+from charmhelpers.core import host
4247+from charmhelpers.core import hookenv
4248+
4249+
4250+__all__ = ['ServiceManager', 'ManagerCallback',
4251+ 'PortManagerCallback', 'open_ports', 'close_ports', 'manage_ports',
4252+ 'service_restart', 'service_stop']
4253+
4254+
4255+class ServiceManager(object):
4256+ def __init__(self, services=None):
4257+ """
4258+ Register a list of services, given their definitions.
4259+
4260+ Service definitions are dicts in the following formats (all keys except
4261+ 'service' are optional)::
4262+
4263+ {
4264+ "service": <service name>,
4265+ "required_data": <list of required data contexts>,
4266+ "provided_data": <list of provided data contexts>,
4267+ "data_ready": <one or more callbacks>,
4268+ "data_lost": <one or more callbacks>,
4269+ "start": <one or more callbacks>,
4270+ "stop": <one or more callbacks>,
4271+ "ports": <list of ports to manage>,
4272+ }
4273+
4274+ The 'required_data' list should contain dicts of required data (or
4275+ dependency managers that act like dicts and know how to collect the data).
4276+ Only when all items in the 'required_data' list are populated are the
4277+ 'data_ready' and 'start' callbacks executed. See `is_ready()` for more
4278+ information.
4279+
4280+ The 'provided_data' list should contain relation data providers, most likely
4281+ a subclass of :class:`charmhelpers.core.services.helpers.RelationContext`,
4282+ that will indicate a set of data to set on a given relation.
4283+
4284+ The 'data_ready' value should be either a single callback, or a list of
4285+ callbacks, to be called when all items in 'required_data' pass `is_ready()`.
4286+ Each callback will be called with the service name as the only parameter.
4287+ After all of the 'data_ready' callbacks are called, the 'start' callbacks
4288+ are fired.
4289+
4290+ The 'data_lost' value should be either a single callback, or a list of
4291+ callbacks, to be called when a 'required_data' item no longer passes
4292+ `is_ready()`. Each callback will be called with the service name as the
4293+ only parameter. After all of the 'data_lost' callbacks are called,
4294+ the 'stop' callbacks are fired.
4295+
4296+ The 'start' value should be either a single callback, or a list of
4297+ callbacks, to be called when starting the service, after the 'data_ready'
4298+ callbacks are complete. Each callback will be called with the service
4299+ name as the only parameter. This defaults to
4300+ `[host.service_start, services.open_ports]`.
4301+
4302+ The 'stop' value should be either a single callback, or a list of
4303+ callbacks, to be called when stopping the service. If the service is
4304+ being stopped because it no longer has all of its 'required_data', this
4305+ will be called after all of the 'data_lost' callbacks are complete.
4306+ Each callback will be called with the service name as the only parameter.
4307+ This defaults to `[services.close_ports, host.service_stop]`.
4308+
4309+ The 'ports' value should be a list of ports to manage. The default
4310+ 'start' handler will open the ports after the service is started,
4311+ and the default 'stop' handler will close the ports prior to stopping
4312+ the service.
4313+
4314+
4315+ Examples:
4316+
4317+ The following registers an Upstart service called bingod that depends on
4318+ a mongodb relation and which runs a custom `db_migrate` function prior to
4319+ restarting the service, and a Runit service called spadesd::
4320+
4321+ manager = services.ServiceManager([
4322+ {
4323+ 'service': 'bingod',
4324+ 'ports': [80, 443],
4325+ 'required_data': [MongoRelation(), config(), {'my': 'data'}],
4326+ 'data_ready': [
4327+ services.template(source='bingod.conf'),
4328+ services.template(source='bingod.ini',
4329+ target='/etc/bingod.ini',
4330+ owner='bingo', perms=0400),
4331+ ],
4332+ },
4333+ {
4334+ 'service': 'spadesd',
4335+ 'data_ready': services.template(source='spadesd_run.j2',
4336+ target='/etc/sv/spadesd/run',
4337+ perms=0555),
4338+ 'start': runit_start,
4339+ 'stop': runit_stop,
4340+ },
4341+ ])
4342+ manager.manage()
4343+ """
4344+ self._ready_file = os.path.join(hookenv.charm_dir(), 'READY-SERVICES.json')
4345+ self._ready = None
4346+ self.services = {}
4347+ for service in services or []:
4348+ service_name = service['service']
4349+ self.services[service_name] = service
4350+
4351+ def manage(self):
4352+ """
4353+ Handle the current hook by doing The Right Thing with the registered services.
4354+ """
4355+ hook_name = hookenv.hook_name()
4356+ if hook_name == 'stop':
4357+ self.stop_services()
4358+ else:
4359+ self.provide_data()
4360+ self.reconfigure_services()
4361+ cfg = hookenv.config()
4362+ if cfg.implicit_save:
4363+ cfg.save()
4364+
4365+ def provide_data(self):
4366+ """
4367+ Set the relation data for each provider in the ``provided_data`` list.
4368+
4369+ A provider must have a `name` attribute, which indicates which relation
4370+ to set data on, and a `provide_data()` method, which returns a dict of
4371+ data to set.
4372+ """
4373+ hook_name = hookenv.hook_name()
4374+ for service in self.services.values():
4375+ for provider in service.get('provided_data', []):
4376+ if re.match(r'{}-relation-(joined|changed)'.format(provider.name), hook_name):
4377+ data = provider.provide_data()
4378+ _ready = provider._is_ready(data) if hasattr(provider, '_is_ready') else data
4379+ if _ready:
4380+ hookenv.relation_set(None, data)
4381+
4382+ def reconfigure_services(self, *service_names):
4383+ """
4384+ Update all files for one or more registered services, and,
4385+ if ready, optionally restart them.
4386+
4387+ If no service names are given, reconfigures all registered services.
4388+ """
4389+ for service_name in service_names or self.services.keys():
4390+ if self.is_ready(service_name):
4391+ self.fire_event('data_ready', service_name)
4392+ self.fire_event('start', service_name, default=[
4393+ service_restart,
4394+ manage_ports])
4395+ self.save_ready(service_name)
4396+ else:
4397+ if self.was_ready(service_name):
4398+ self.fire_event('data_lost', service_name)
4399+ self.fire_event('stop', service_name, default=[
4400+ manage_ports,
4401+ service_stop])
4402+ self.save_lost(service_name)
4403+
4404+ def stop_services(self, *service_names):
4405+ """
4406+ Stop one or more registered services, by name.
4407+
4408+ If no service names are given, stops all registered services.
4409+ """
4410+ for service_name in service_names or self.services.keys():
4411+ self.fire_event('stop', service_name, default=[
4412+ manage_ports,
4413+ service_stop])
4414+
4415+ def get_service(self, service_name):
4416+ """
4417+ Given the name of a registered service, return its service definition.
4418+ """
4419+ service = self.services.get(service_name)
4420+ if not service:
4421+ raise KeyError('Service not registered: %s' % service_name)
4422+ return service
4423+
4424+ def fire_event(self, event_name, service_name, default=None):
4425+ """
4426+ Fire a data_ready, data_lost, start, or stop event on a given service.
4427+ """
4428+ service = self.get_service(service_name)
4429+ callbacks = service.get(event_name, default)
4430+ if not callbacks:
4431+ return
4432+ if not isinstance(callbacks, Iterable):
4433+ callbacks = [callbacks]
4434+ for callback in callbacks:
4435+ if isinstance(callback, ManagerCallback):
4436+ callback(self, service_name, event_name)
4437+ else:
4438+ callback(service_name)
4439+
4440+ def is_ready(self, service_name):
4441+ """
4442+ Determine if a registered service is ready, by checking its 'required_data'.
4443+
4444+ A 'required_data' item can be any mapping type, and is considered ready
4445+ if `bool(item)` evaluates as True.
4446+ """
4447+ service = self.get_service(service_name)
4448+ reqs = service.get('required_data', [])
4449+ return all(bool(req) for req in reqs)
4450+
4451+ def _load_ready_file(self):
4452+ if self._ready is not None:
4453+ return
4454+ if os.path.exists(self._ready_file):
4455+ with open(self._ready_file) as fp:
4456+ self._ready = set(json.load(fp))
4457+ else:
4458+ self._ready = set()
4459+
4460+ def _save_ready_file(self):
4461+ if self._ready is None:
4462+ return
4463+ with open(self._ready_file, 'w') as fp:
4464+ json.dump(list(self._ready), fp)
4465+
4466+ def save_ready(self, service_name):
4467+ """
4468+ Save an indicator that the given service is now data_ready.
4469+ """
4470+ self._load_ready_file()
4471+ self._ready.add(service_name)
4472+ self._save_ready_file()
4473+
4474+ def save_lost(self, service_name):
4475+ """
4476+ Save an indicator that the given service is no longer data_ready.
4477+ """
4478+ self._load_ready_file()
4479+ self._ready.discard(service_name)
4480+ self._save_ready_file()
4481+
4482+ def was_ready(self, service_name):
4483+ """
4484+ Determine if the given service was previously data_ready.
4485+ """
4486+ self._load_ready_file()
4487+ return service_name in self._ready
4488+
4489+
4490+class ManagerCallback(object):
4491+ """
4492+ Special case of a callback that takes the `ServiceManager` instance
4493+ in addition to the service name.
4494+
4495+ Subclasses should implement `__call__` which should accept three parameters:
4496+
4497+ * `manager` The `ServiceManager` instance
4498+ * `service_name` The name of the service it's being triggered for
4499+ * `event_name` The name of the event that this callback is handling
4500+ """
4501+ def __call__(self, manager, service_name, event_name):
4502+ raise NotImplementedError()
4503+
4504+
4505+class PortManagerCallback(ManagerCallback):
4506+ """
4507+ Callback class that will open or close ports, for use as either
4508+ a start or stop action.
4509+ """
4510+ def __call__(self, manager, service_name, event_name):
4511+ service = manager.get_service(service_name)
4512+ new_ports = service.get('ports', [])
4513+ port_file = os.path.join(hookenv.charm_dir(), '.{}.ports'.format(service_name))
4514+ if os.path.exists(port_file):
4515+ with open(port_file) as fp:
4516+ old_ports = fp.read().split(',')
4517+ for old_port in old_ports:
4518+ if bool(old_port):
4519+ old_port = int(old_port)
4520+ if old_port not in new_ports:
4521+ hookenv.close_port(old_port)
4522+ with open(port_file, 'w') as fp:
4523+ fp.write(','.join(str(port) for port in new_ports))
4524+ for port in new_ports:
4525+ if event_name == 'start':
4526+ hookenv.open_port(port)
4527+ elif event_name == 'stop':
4528+ hookenv.close_port(port)
4529+
4530+
4531+def service_stop(service_name):
4532+ """
4533+ Wrapper around host.service_stop to prevent spurious "unknown service"
4534+ messages in the logs.
4535+ """
4536+ if host.service_running(service_name):
4537+ host.service_stop(service_name)
4538+
4539+
4540+def service_restart(service_name):
4541+ """
4542+ Wrapper around host.service_restart to prevent spurious "unknown service"
4543+ messages in the logs.
4544+ """
4545+ if host.service_available(service_name):
4546+ if host.service_running(service_name):
4547+ host.service_restart(service_name)
4548+ else:
4549+ host.service_start(service_name)
4550+
4551+
4552+# Convenience aliases
4553+open_ports = close_ports = manage_ports = PortManagerCallback()
4554
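Before the next added file: the core of `ServiceManager.reconfigure_services` is the readiness test in `is_ready`, which simply requires every entry in a service's `required_data` to be truthy. A standalone sketch of just that check (the service dicts here are illustrative):

```python
def is_ready(service):
    """A service is ready only when every required_data item is truthy."""
    return all(bool(req) for req in service.get('required_data', []))


svc = {
    'service': 'bingod',
    'required_data': [{'db': {'host': '10.0.0.1'}}, {'my': 'data'}],
}
print(is_ready(svc))   # True: every context is non-empty

svc['required_data'].append({})  # e.g. a relation context with no complete units
print(is_ready(svc))   # False: the empty context blocks the start callbacks
```

This is why `RelationContext` (defined in the next file) implements `__bool__`: an incomplete relation evaluates falsy, so the service's `data_lost`/`stop` callbacks fire instead of `data_ready`/`start`.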
4555=== added file 'hooks/charmhelpers/core/services/helpers.py'
4556--- hooks/charmhelpers/core/services/helpers.py 1970-01-01 00:00:00 +0000
4557+++ hooks/charmhelpers/core/services/helpers.py 2014-12-11 11:30:34 +0000
4558@@ -0,0 +1,243 @@
4559+import os
4560+import yaml
4561+from charmhelpers.core import hookenv
4562+from charmhelpers.core import templating
4563+
4564+from charmhelpers.core.services.base import ManagerCallback
4565+
4566+
4567+__all__ = ['RelationContext', 'TemplateCallback',
4568+ 'render_template', 'template']
4569+
4570+
4571+class RelationContext(dict):
4572+ """
4573+ Base class for a context generator that gets relation data from juju.
4574+
4575+ Subclasses must provide the attributes `name`, which is the name of the
4576+ interface of interest, `interface`, which is the type of the interface of
4577+ interest, and `required_keys`, which is the set of keys required for the
4578+ relation to be considered complete. The data for all interfaces matching
4579+ the `name` attribute that are complete will be used to populate the dictionary
4580+ values (see `get_data`, below).
4581+
4582+ The generated context will be namespaced under the relation :attr:`name`,
4583+ to prevent potential naming conflicts.
4584+
4585+ :param str name: Override the relation :attr:`name`, since it can vary from charm to charm
4586+ :param list additional_required_keys: Extend the list of :attr:`required_keys`
4587+ """
4588+ name = None
4589+ interface = None
4590+ required_keys = []
4591+
4592+ def __init__(self, name=None, additional_required_keys=None):
4593+ if name is not None:
4594+ self.name = name
4595+ if additional_required_keys is not None:
4596+ self.required_keys.extend(additional_required_keys)
4597+ self.get_data()
4598+
4599+ def __bool__(self):
4600+ """
4601+ Returns True if all of the required_keys are available.
4602+ """
4603+ return self.is_ready()
4604+
4605+ __nonzero__ = __bool__
4606+
4607+ def __repr__(self):
4608+ return super(RelationContext, self).__repr__()
4609+
4610+ def is_ready(self):
4611+ """
4612+ Returns True if all of the `required_keys` are available from any units.
4613+ """
4614+ ready = len(self.get(self.name, [])) > 0
4615+ if not ready:
4616+ hookenv.log('Incomplete relation: {}'.format(self.__class__.__name__), hookenv.DEBUG)
4617+ return ready
4618+
4619+ def _is_ready(self, unit_data):
4620+ """
4621+ Helper method that tests a set of relation data and returns True if
4622+ all of the `required_keys` are present.
4623+ """
4624+ return set(unit_data.keys()).issuperset(set(self.required_keys))
4625+
4626+ def get_data(self):
4627+ """
4628+ Retrieve the relation data for each unit involved in a relation and,
4629+ if complete, store it in a list under `self[self.name]`. This
4630+ is automatically called when the RelationContext is instantiated.
4631+
4632+ The units are sorted lexicographically first by the service ID, then by
4633+ the unit ID. Thus, if an interface has two other services, 'db:1'
4634+ and 'db:2', with 'db:1' having two units, 'wordpress/0' and 'wordpress/1',
4635+ and 'db:2' having one unit, 'mediawiki/0', all of which have a complete
4636+ set of data, the relation data for the units will be stored in the
4637+ order: 'wordpress/0', 'wordpress/1', 'mediawiki/0'.
4638+
4639+ If you only care about a single unit on the relation, you can just
4640+ access it as `{{ interface[0]['key'] }}`. However, if you can at all
4641+ support multiple units on a relation, you should iterate over the list,
4642+ like::
4643+
4644+ {% for unit in interface -%}
4645+ {{ unit['key'] }}{% if not loop.last %},{% endif %}
4646+ {%- endfor %}
4647+
4648+ Note that since all sets of relation data from all related services and
4649+ units are in a single list, if you need to know which service or unit a
4650+ set of data came from, you'll need to extend this class to preserve
4651+ that information.
4652+ """
4653+ if not hookenv.relation_ids(self.name):
4654+ return
4655+
4656+ ns = self.setdefault(self.name, [])
4657+ for rid in sorted(hookenv.relation_ids(self.name)):
4658+ for unit in sorted(hookenv.related_units(rid)):
4659+ reldata = hookenv.relation_get(rid=rid, unit=unit)
4660+ if self._is_ready(reldata):
4661+ ns.append(reldata)
4662+
4663+ def provide_data(self):
4664+ """
4665+ Return data to be relation_set for this interface.
4666+ """
4667+ return {}
4668+
4669+
4670+class MysqlRelation(RelationContext):
4671+ """
4672+ Relation context for the `mysql` interface.
4673+
4674+ :param str name: Override the relation :attr:`name`, since it can vary from charm to charm
4675+ :param list additional_required_keys: Extend the list of :attr:`required_keys`
4676+ """
4677+ name = 'db'
4678+ interface = 'mysql'
4679+ required_keys = ['host', 'user', 'password', 'database']
4680+
4681+
4682+class HttpRelation(RelationContext):
4683+ """
4684+ Relation context for the `http` interface.
4685+
4686+ :param str name: Override the relation :attr:`name`, since it can vary from charm to charm
4687+ :param list additional_required_keys: Extend the list of :attr:`required_keys`
4688+ """
4689+ name = 'website'
4690+ interface = 'http'
4691+ required_keys = ['host', 'port']
4692+
4693+ def provide_data(self):
4694+ return {
4695+ 'host': hookenv.unit_get('private-address'),
4696+ 'port': 80,
4697+ }
4698+
4699+
4700+class RequiredConfig(dict):
4701+ """
4702+ Data context that loads config options with one or more mandatory options.
4703+
4704+ Once the required options have been changed from their default values, all
4705+ config options will be available, namespaced under `config` to prevent
4706+ potential naming conflicts (for example, between a config option and a
4707+ relation property).
4708+
4709+ :param list *args: List of options that must be changed from their default values.
4710+ """
4711+
4712+ def __init__(self, *args):
4713+ self.required_options = args
4714+ self['config'] = hookenv.config()
4715+ with open(os.path.join(hookenv.charm_dir(), 'config.yaml')) as fp:
4716+ self.config = yaml.load(fp).get('options', {})
4717+
4718+ def __bool__(self):
4719+ for option in self.required_options:
4720+ if option not in self['config']:
4721+ return False
4722+ current_value = self['config'][option]
4723+ default_value = self.config[option].get('default')
4724+ if current_value == default_value:
4725+ return False
4726+ if current_value in (None, '') and default_value in (None, ''):
4727+ return False
4728+ return True
4729+
4730+ def __nonzero__(self):
4731+ return self.__bool__()
4732+
4733+
4734+class StoredContext(dict):
4735+ """
4736+ A data context that always returns the data that it was first created with.
4737+
4738+ This is useful to do a one-time generation of things like passwords, that
4739+ will thereafter use the same value that was originally generated, instead
4740+ of generating a new value each time it is run.
4741+ """
4742+ def __init__(self, file_name, config_data):
4743+ """
4744+ If the file exists, populate `self` with the data from the file.
4745+ Otherwise, populate with the given data and persist it to the file.
4746+ """
4747+ if os.path.exists(file_name):
4748+ self.update(self.read_context(file_name))
4749+ else:
4750+ self.store_context(file_name, config_data)
4751+ self.update(config_data)
4752+
4753+ def store_context(self, file_name, config_data):
4754+ if not os.path.isabs(file_name):
4755+ file_name = os.path.join(hookenv.charm_dir(), file_name)
4756+ with open(file_name, 'w') as file_stream:
4757+ os.fchmod(file_stream.fileno(), 0o600)
4758+ yaml.dump(config_data, file_stream)
4759+
4760+ def read_context(self, file_name):
4761+ if not os.path.isabs(file_name):
4762+ file_name = os.path.join(hookenv.charm_dir(), file_name)
4763+ with open(file_name, 'r') as file_stream:
4764+ data = yaml.load(file_stream)
4765+ if not data:
4766+ raise OSError("%s is empty" % file_name)
4767+ return data
4768+
4769+
4770+class TemplateCallback(ManagerCallback):
4771+ """
4772+ Callback class that will render a Jinja2 template, for use as a ready
4773+ action.
4774+
4775+ :param str source: The template source file, relative to
4776+ `$CHARM_DIR/templates`
4777+
4778+ :param str target: The target to write the rendered template to
4779+ :param str owner: The owner of the rendered file
4780+ :param str group: The group of the rendered file
4781+ :param int perms: The permissions of the rendered file
4782+ """
4783+ def __init__(self, source, target,
4784+ owner='root', group='root', perms=0o444):
4785+ self.source = source
4786+ self.target = target
4787+ self.owner = owner
4788+ self.group = group
4789+ self.perms = perms
4790+
4791+ def __call__(self, manager, service_name, event_name):
4792+ service = manager.get_service(service_name)
4793+ context = {}
4794+ for ctx in service.get('required_data', []):
4795+ context.update(ctx)
4796+ templating.render(self.source, self.target, context,
4797+ self.owner, self.group, self.perms)
4798+
4799+
4800+# Convenience aliases for templates
4801+render_template = template = TemplateCallback
4802
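The `StoredContext` pattern above (generate a value once, persist it, and reuse it on every later run) can be sketched outside the charm with nothing but the standard library. This is an illustrative re-implementation, not the charm-helpers API: `StoredDict` is a hypothetical name, and it uses JSON rather than YAML to stay dependency-free.

```python
import json
import os
import tempfile


class StoredDict(dict):
    """Minimal sketch of the StoredContext pattern: persist the data
    given on first construction, and return that same data on every
    subsequent construction with the same file."""

    def __init__(self, file_name, data):
        if os.path.exists(file_name):
            # A previous run already stored a value: reuse it.
            with open(file_name) as fh:
                self.update(json.load(fh))
        else:
            # First run: persist the freshly generated value.
            with open(file_name, "w") as fh:
                os.fchmod(fh.fileno(), 0o600)
                json.dump(data, fh)
            self.update(data)


path = os.path.join(tempfile.mkdtemp(), "ctx.json")
first = StoredDict(path, {"password": "generated-once"})
second = StoredDict(path, {"password": "would-be-regenerated"})
print(second["password"])  # -> generated-once
```

As with `StoredContext`, the second construction ignores the new data entirely, which is what makes this safe for one-time secrets such as generated passwords.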
4803=== added file 'hooks/charmhelpers/core/sysctl.py'
4804--- hooks/charmhelpers/core/sysctl.py 1970-01-01 00:00:00 +0000
4805+++ hooks/charmhelpers/core/sysctl.py 2014-12-11 11:30:34 +0000
4806@@ -0,0 +1,34 @@
4807+#!/usr/bin/env python
4808+# -*- coding: utf-8 -*-
4809+
4810+__author__ = 'Jorge Niedbalski R. <jorge.niedbalski@canonical.com>'
4811+
4812+import yaml
4813+
4814+from subprocess import check_call
4815+
4816+from charmhelpers.core.hookenv import (
4817+ log,
4818+ DEBUG,
4819+)
4820+
4821+
4822+def create(sysctl_dict, sysctl_file):
4823+ """Creates a sysctl.conf file from a YAML associative array
4824+
4825+ :param sysctl_dict: a YAML string of sysctl options, e.g. "{ kernel.max_pid: 1337 }"
4826+ :type sysctl_dict: str or unicode
4827+ :param sysctl_file: path to the sysctl file to be saved
4828+ :type sysctl_file: str or unicode
4829+ :returns: None
4830+ """
4831+ sysctl_dict = yaml.load(sysctl_dict)
4832+
4833+ with open(sysctl_file, "w") as fd:
4834+ for key, value in sysctl_dict.items():
4835+ fd.write("{}={}\n".format(key, value))
4836+
4837+ log("Updating sysctl_file: %s values: %s" % (sysctl_file, sysctl_dict),
4838+ level=DEBUG)
4839+
4840+ check_call(["sysctl", "-p", sysctl_file])
4841
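Note that `create()` passes its first argument through `yaml.load()`, so it takes YAML text rather than an already-parsed `dict`. The file-writing step it performs reduces to rendering `key=value` lines; the sketch below shows only that step (`render_sysctl` is a hypothetical name for illustration — the real helper also runs `sysctl -p` on the written file).

```python
def render_sysctl(sysctl_dict):
    """Render a mapping into the 'key=value' lines that create()
    writes to sysctl_file before invoking `sysctl -p`."""
    return "".join("{}={}\n".format(k, v)
                   for k, v in sorted(sysctl_dict.items()))


print(render_sysctl({"kernel.max_pid": 1337, "vm.swappiness": 10}))
```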
4842=== added file 'hooks/charmhelpers/core/templating.py'
4843--- hooks/charmhelpers/core/templating.py 1970-01-01 00:00:00 +0000
4844+++ hooks/charmhelpers/core/templating.py 2014-12-11 11:30:34 +0000
4845@@ -0,0 +1,52 @@
4846+import os
4847+
4848+from charmhelpers.core import host
4849+from charmhelpers.core import hookenv
4850+
4851+
4852+def render(source, target, context, owner='root', group='root',
4853+ perms=0o444, templates_dir=None):
4854+ """
4855+ Render a template.
4856+
4857+ The `source` path, if not absolute, is relative to the `templates_dir`.
4858+
4859+ The `target` path should be absolute.
4860+
4861+ The context should be a dict containing the values to be replaced in the
4862+ template.
4863+
4864+ The `owner`, `group`, and `perms` options will be passed to `write_file`.
4865+
4866+ If omitted, `templates_dir` defaults to the `templates` folder in the charm.
4867+
4868+ Note: Using this requires python-jinja2; if it is not installed, calling
4869+ this will attempt to use charmhelpers.fetch.apt_install to install it.
4870+ """
4871+ try:
4872+ from jinja2 import FileSystemLoader, Environment, exceptions
4873+ except ImportError:
4874+ try:
4875+ from charmhelpers.fetch import apt_install
4876+ except ImportError:
4877+ hookenv.log('Could not import jinja2, and could not import '
4878+ 'charmhelpers.fetch to install it',
4879+ level=hookenv.ERROR)
4880+ raise
4881+ apt_install('python-jinja2', fatal=True)
4882+ from jinja2 import FileSystemLoader, Environment, exceptions
4883+
4884+ if templates_dir is None:
4885+ templates_dir = os.path.join(hookenv.charm_dir(), 'templates')
4886+ loader = Environment(loader=FileSystemLoader(templates_dir))
4887+ try:
4888+ # The loader resolves `source` relative to templates_dir.
4889+ template = loader.get_template(source)
4890+ except exceptions.TemplateNotFound as e:
4891+ hookenv.log('Could not load template %s from %s.' %
4892+ (source, templates_dir),
4893+ level=hookenv.ERROR)
4894+ raise e
4895+ content = template.render(context)
4896+ host.mkdir(os.path.dirname(target))
4897+ host.write_file(target, content, owner, group, perms)
4898
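The import-or-install fallback at the top of `render()` is a recurring charm-helpers idiom: try the import, install the package on failure, then import again. A generic, dependency-free sketch of that pattern follows; the `import_or_install` name is hypothetical, not part of the charm-helpers API.

```python
import importlib


def import_or_install(module_name, package_name, installer):
    """Try to import module_name; on ImportError, call installer with
    the package name and retry the import once."""
    try:
        return importlib.import_module(module_name)
    except ImportError:
        installer(package_name)
        return importlib.import_module(module_name)


installed = []
mod = import_or_install("json", "python-json", installed.append)
print(mod.__name__, installed)  # json is stdlib, so no install happens
```

In the charm the `installer` role is played by `charmhelpers.fetch.apt_install(..., fatal=True)`, so a failed installation aborts the hook rather than falling through to a second ImportError.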
4899=== added directory 'hooks/charmhelpers/fetch'
4900=== added file 'hooks/charmhelpers/fetch/__init__.py'
4901--- hooks/charmhelpers/fetch/__init__.py 1970-01-01 00:00:00 +0000
4902+++ hooks/charmhelpers/fetch/__init__.py 2014-12-11 11:30:34 +0000
4903@@ -0,0 +1,416 @@
4904+import importlib
4905+from tempfile import NamedTemporaryFile
4906+import time
4907+from yaml import safe_load
4908+from charmhelpers.core.host import (
4909+ lsb_release
4910+)
4911+import subprocess
4912+from charmhelpers.core.hookenv import (
4913+ config,
4914+ log,
4915+)
4916+import os
4917+
4918+import six
4919+if six.PY3:
4920+ from urllib.parse import urlparse, urlunparse
4921+else:
4922+ from urlparse import urlparse, urlunparse
4923+
4924+
4925+CLOUD_ARCHIVE = """# Ubuntu Cloud Archive
4926+deb http://ubuntu-cloud.archive.canonical.com/ubuntu {} main
4927+"""
4928+PROPOSED_POCKET = """# Proposed
4929+deb http://archive.ubuntu.com/ubuntu {}-proposed main universe multiverse restricted
4930+"""
4931+CLOUD_ARCHIVE_POCKETS = {
4932+ # Folsom
4933+ 'folsom': 'precise-updates/folsom',
4934+ 'precise-folsom': 'precise-updates/folsom',
4935+ 'precise-folsom/updates': 'precise-updates/folsom',
4936+ 'precise-updates/folsom': 'precise-updates/folsom',
4937+ 'folsom/proposed': 'precise-proposed/folsom',
4938+ 'precise-folsom/proposed': 'precise-proposed/folsom',
4939+ 'precise-proposed/folsom': 'precise-proposed/folsom',
4940+ # Grizzly
4941+ 'grizzly': 'precise-updates/grizzly',
4942+ 'precise-grizzly': 'precise-updates/grizzly',
4943+ 'precise-grizzly/updates': 'precise-updates/grizzly',
4944+ 'precise-updates/grizzly': 'precise-updates/grizzly',
4945+ 'grizzly/proposed': 'precise-proposed/grizzly',
4946+ 'precise-grizzly/proposed': 'precise-proposed/grizzly',
4947+ 'precise-proposed/grizzly': 'precise-proposed/grizzly',
4948+ # Havana
4949+ 'havana': 'precise-updates/havana',
4950+ 'precise-havana': 'precise-updates/havana',
4951+ 'precise-havana/updates': 'precise-updates/havana',
4952+ 'precise-updates/havana': 'precise-updates/havana',
4953+ 'havana/proposed': 'precise-proposed/havana',
4954+ 'precise-havana/proposed': 'precise-proposed/havana',
4955+ 'precise-proposed/havana': 'precise-proposed/havana',
4956+ # Icehouse
4957+ 'icehouse': 'precise-updates/icehouse',
4958+ 'precise-icehouse': 'precise-updates/icehouse',
4959+ 'precise-icehouse/updates': 'precise-updates/icehouse',
4960+ 'precise-updates/icehouse': 'precise-updates/icehouse',
4961+ 'icehouse/proposed': 'precise-proposed/icehouse',
4962+ 'precise-icehouse/proposed': 'precise-proposed/icehouse',
4963+ 'precise-proposed/icehouse': 'precise-proposed/icehouse',
4964+ # Juno
4965+ 'juno': 'trusty-updates/juno',
4966+ 'trusty-juno': 'trusty-updates/juno',
4967+ 'trusty-juno/updates': 'trusty-updates/juno',
4968+ 'trusty-updates/juno': 'trusty-updates/juno',
4969+ 'juno/proposed': 'trusty-proposed/juno',
4971+ 'trusty-juno/proposed': 'trusty-proposed/juno',
4972+ 'trusty-proposed/juno': 'trusty-proposed/juno',
4973+}
4974+
4975+# The order of this list is very important. Handlers should be listed
4976+# in order from least- to most-specific URL matching.
4977+FETCH_HANDLERS = (
4978+ 'charmhelpers.fetch.archiveurl.ArchiveUrlFetchHandler',
4979+ 'charmhelpers.fetch.bzrurl.BzrUrlFetchHandler',
4980+ 'charmhelpers.fetch.giturl.GitUrlFetchHandler',
4981+)
4982+
4983+APT_NO_LOCK = 100 # The return code for "couldn't acquire lock" in APT.
4984+APT_NO_LOCK_RETRY_DELAY = 10 # Wait 10 seconds between apt lock checks.
4985+APT_NO_LOCK_RETRY_COUNT = 30 # Retry to acquire the lock X times.
4986+
4987+
4988+class SourceConfigError(Exception):
4989+ pass
4990+
4991+
4992+class UnhandledSource(Exception):
4993+ pass
4994+
4995+
4996+class AptLockError(Exception):
4997+ pass
4998+
4999+
5000+class BaseFetchHandler(object):
The diff has been truncated for viewing.

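The truncated `fetch/__init__.py` hunk above also defines the `APT_NO_LOCK` retry constants. A sketch of the retry loop those constants suggest is shown below; `run_with_apt_retry` is a hypothetical name for illustration, not the actual charm-helpers function.

```python
import time

APT_NO_LOCK = 100               # apt's "couldn't acquire lock" exit code
APT_NO_LOCK_RETRY_DELAY = 10    # seconds to wait between attempts
APT_NO_LOCK_RETRY_COUNT = 30    # maximum number of attempts


def run_with_apt_retry(cmd_fn, retries=APT_NO_LOCK_RETRY_COUNT,
                       delay=APT_NO_LOCK_RETRY_DELAY):
    """Call cmd_fn until it returns something other than APT_NO_LOCK,
    sleeping `delay` seconds between attempts."""
    for _ in range(retries):
        code = cmd_fn()
        if code != APT_NO_LOCK:
            return code
        time.sleep(delay)
    raise RuntimeError("apt lock still held after %d attempts" % retries)


# Simulated apt command: locked twice, then succeeds.
results = iter([APT_NO_LOCK, APT_NO_LOCK, 0])
print(run_with_apt_retry(lambda: next(results), delay=0))  # -> 0
```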