Merge lp:~plumgrid-team/charms/trusty/plumgrid-edge/trunk into lp:charms/trusty/plumgrid-edge

Proposed by Bilal Baqar on 2016-05-18
Status: Merged
Merged at revision: 16
Proposed branch: lp:~plumgrid-team/charms/trusty/plumgrid-edge/trunk
Merge into: lp:charms/trusty/plumgrid-edge
Diff against target: 9738 lines (+4393/-3429)
49 files modified
Makefile (+1/-1)
bin/charm_helpers_sync.py (+253/-0)
charm-helpers-sync.yaml (+6/-1)
hooks/charmhelpers/contrib/amulet/deployment.py (+4/-2)
hooks/charmhelpers/contrib/amulet/utils.py (+382/-86)
hooks/charmhelpers/contrib/ansible/__init__.py (+0/-254)
hooks/charmhelpers/contrib/benchmark/__init__.py (+0/-126)
hooks/charmhelpers/contrib/charmhelpers/__init__.py (+0/-208)
hooks/charmhelpers/contrib/charmsupport/__init__.py (+0/-15)
hooks/charmhelpers/contrib/charmsupport/nrpe.py (+0/-360)
hooks/charmhelpers/contrib/charmsupport/volumes.py (+0/-175)
hooks/charmhelpers/contrib/database/mysql.py (+0/-412)
hooks/charmhelpers/contrib/network/ip.py (+55/-23)
hooks/charmhelpers/contrib/network/ovs/__init__.py (+6/-2)
hooks/charmhelpers/contrib/network/ufw.py (+5/-6)
hooks/charmhelpers/contrib/openstack/amulet/deployment.py (+135/-14)
hooks/charmhelpers/contrib/openstack/amulet/utils.py (+421/-13)
hooks/charmhelpers/contrib/openstack/context.py (+318/-79)
hooks/charmhelpers/contrib/openstack/ip.py (+35/-7)
hooks/charmhelpers/contrib/openstack/neutron.py (+62/-21)
hooks/charmhelpers/contrib/openstack/templating.py (+30/-2)
hooks/charmhelpers/contrib/openstack/utils.py (+939/-70)
hooks/charmhelpers/contrib/peerstorage/__init__.py (+0/-268)
hooks/charmhelpers/contrib/python/packages.py (+35/-11)
hooks/charmhelpers/contrib/saltstack/__init__.py (+0/-118)
hooks/charmhelpers/contrib/ssl/__init__.py (+0/-94)
hooks/charmhelpers/contrib/ssl/service.py (+0/-279)
hooks/charmhelpers/contrib/storage/linux/ceph.py (+823/-61)
hooks/charmhelpers/contrib/storage/linux/loopback.py (+10/-0)
hooks/charmhelpers/contrib/storage/linux/utils.py (+8/-7)
hooks/charmhelpers/contrib/templating/__init__.py (+0/-15)
hooks/charmhelpers/contrib/templating/contexts.py (+0/-139)
hooks/charmhelpers/contrib/templating/jinja.py (+0/-39)
hooks/charmhelpers/contrib/templating/pyformat.py (+0/-29)
hooks/charmhelpers/contrib/unison/__init__.py (+0/-313)
hooks/charmhelpers/core/hookenv.py (+220/-13)
hooks/charmhelpers/core/host.py (+298/-75)
hooks/charmhelpers/core/hugepage.py (+71/-0)
hooks/charmhelpers/core/kernel.py (+68/-0)
hooks/charmhelpers/core/services/helpers.py (+30/-5)
hooks/charmhelpers/core/strutils.py (+30/-0)
hooks/charmhelpers/core/templating.py (+21/-8)
hooks/charmhelpers/core/unitdata.py (+61/-17)
hooks/charmhelpers/fetch/__init__.py (+18/-2)
hooks/charmhelpers/fetch/archiveurl.py (+1/-1)
hooks/charmhelpers/fetch/bzrurl.py (+22/-32)
hooks/charmhelpers/fetch/giturl.py (+20/-23)
hooks/pg_edge_utils.py (+3/-2)
unit_tests/test_pg_edge_hooks.py (+2/-1)
To merge this branch: bzr merge lp:~plumgrid-team/charms/trusty/plumgrid-edge/trunk
Reviewer | Review Type | Date Requested | Status
Review Queue (community) | automated testing | 2016-05-18 | Needs Fixing on 2016-05-24
charmers | | 2016-05-18 | Pending
Review via email: mp+295029@code.launchpad.net

This proposal supersedes a proposal from 2016-03-03.

Commit message

Trusty - Liberty/Mitaka support added

Description of the change

- Liberty/Mitaka support
- Charmhelpers sync and improved pg-restart
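
The charm-helpers sync in this proposal adds bin/charm_helpers_sync.py, whose include entries may carry per-module sync options after a `|` separator (e.g. in charm-helpers-sync.yaml). A minimal sketch of that option parsing, simplified from the script in this diff:

```python
# Simplified from bin/charm_helpers_sync.py in this diff: each include
# entry may carry per-module sync options after a '|' separator, which
# are combined with any global options before syncing that module.

def parse_sync_options(options):
    # "opt1,opt2" -> ["opt1", "opt2"]; None or "" -> []
    if not options:
        return []
    return options.split(',')

def extract_options(inc, global_options=None):
    # "contrib.openstack|inc=templates/*" -> ("contrib.openstack", [options])
    global_options = global_options or []
    if isinstance(global_options, str):
        global_options = [global_options]
    if '|' not in inc:
        return (inc, global_options)
    inc, opts = inc.split('|')
    return (inc, parse_sync_options(opts) + global_options)

# A plain module name keeps only the global options; a piped entry
# gains its own options in front of them.
print(extract_options('core'))
print(extract_options('contrib.openstack|inc=templates/*'))
```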

Review Queue (review-queue) wrote : Posted in a previous version of this proposal

This item has failed automated testing! Results available here http://juju-ci.vapour.ws:8080/job/charm-bundle-test-lxc/2221/

review: Needs Fixing (automated testing)
Review Queue (review-queue) wrote : Posted in a previous version of this proposal

This item has failed automated testing! Results available here http://juju-ci.vapour.ws:8080/job/charm-bundle-test-aws/2200/

review: Needs Fixing (automated testing)
Bilal Baqar (bbaqar) wrote : Posted in a previous version of this proposal

tests/files/plumgrid-edge-dense.yaml and tests/tests.yaml are identical in both branches; I don't know the reason for the conflict. Please resolve it before merging.

Review Queue (review-queue) wrote :

This item has failed automated testing! Results available here http://juju-ci.vapour.ws:8080/job/charm-bundle-test-aws/4276/

review: Needs Fixing (automated testing)
Review Queue (review-queue) wrote :

This item has failed automated testing! Results available here http://juju-ci.vapour.ws:8080/job/charm-bundle-test-lxc/4263/

review: Needs Fixing (automated testing)
Bilal Baqar (bbaqar) wrote :

Looking at the results. Will provide a fix shortly.

Preview Diff

1=== modified file 'Makefile'
2--- Makefile 2016-03-03 21:37:13 +0000
3+++ Makefile 2016-05-18 10:05:52 +0000
4@@ -4,7 +4,7 @@
5 virtualenv:
6 virtualenv .venv
7 .venv/bin/pip install flake8 nose coverage mock pyyaml netifaces \
8- netaddr jinja2
9+ netaddr jinja2 pyflakes pep8 six pbr funcsigs psutil
10
11 lint: virtualenv
12 .venv/bin/flake8 --exclude hooks/charmhelpers hooks unit_tests tests --ignore E402
13
14=== added directory 'bin'
15=== added file 'bin/charm_helpers_sync.py'
16--- bin/charm_helpers_sync.py 1970-01-01 00:00:00 +0000
17+++ bin/charm_helpers_sync.py 2016-05-18 10:05:52 +0000
18@@ -0,0 +1,253 @@
19+#!/usr/bin/python
20+
21+# Copyright 2014-2015 Canonical Limited.
22+#
23+# This file is part of charm-helpers.
24+#
25+# charm-helpers is free software: you can redistribute it and/or modify
26+# it under the terms of the GNU Lesser General Public License version 3 as
27+# published by the Free Software Foundation.
28+#
29+# charm-helpers is distributed in the hope that it will be useful,
30+# but WITHOUT ANY WARRANTY; without even the implied warranty of
31+# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
32+# GNU Lesser General Public License for more details.
33+#
34+# You should have received a copy of the GNU Lesser General Public License
35+# along with charm-helpers. If not, see <http://www.gnu.org/licenses/>.
36+
37+# Authors:
38+# Adam Gandelman <adamg@ubuntu.com>
39+
40+import logging
41+import optparse
42+import os
43+import subprocess
44+import shutil
45+import sys
46+import tempfile
47+import yaml
48+from fnmatch import fnmatch
49+
50+import six
51+
52+CHARM_HELPERS_BRANCH = 'lp:charm-helpers'
53+
54+
55+def parse_config(conf_file):
56+ if not os.path.isfile(conf_file):
57+ logging.error('Invalid config file: %s.' % conf_file)
58+ return False
59+ return yaml.load(open(conf_file).read())
60+
61+
62+def clone_helpers(work_dir, branch):
63+ dest = os.path.join(work_dir, 'charm-helpers')
64+ logging.info('Checking out %s to %s.' % (branch, dest))
65+ cmd = ['bzr', 'checkout', '--lightweight', branch, dest]
66+ subprocess.check_call(cmd)
67+ return dest
68+
69+
70+def _module_path(module):
71+ return os.path.join(*module.split('.'))
72+
73+
74+def _src_path(src, module):
75+ return os.path.join(src, 'charmhelpers', _module_path(module))
76+
77+
78+def _dest_path(dest, module):
79+ return os.path.join(dest, _module_path(module))
80+
81+
82+def _is_pyfile(path):
83+ return os.path.isfile(path + '.py')
84+
85+
86+def ensure_init(path):
87+ '''
88+ ensure directories leading up to path are importable, omitting
89+ parent directory, eg path='/hooks/helpers/foo'/:
90+ hooks/
91+ hooks/helpers/__init__.py
92+ hooks/helpers/foo/__init__.py
93+ '''
94+ for d, dirs, files in os.walk(os.path.join(*path.split('/')[:2])):
95+ _i = os.path.join(d, '__init__.py')
96+ if not os.path.exists(_i):
97+ logging.info('Adding missing __init__.py: %s' % _i)
98+ open(_i, 'wb').close()
99+
100+
101+def sync_pyfile(src, dest):
102+ src = src + '.py'
103+ src_dir = os.path.dirname(src)
104+ logging.info('Syncing pyfile: %s -> %s.' % (src, dest))
105+ if not os.path.exists(dest):
106+ os.makedirs(dest)
107+ shutil.copy(src, dest)
108+ if os.path.isfile(os.path.join(src_dir, '__init__.py')):
109+ shutil.copy(os.path.join(src_dir, '__init__.py'),
110+ dest)
111+ ensure_init(dest)
112+
113+
114+def get_filter(opts=None):
115+ opts = opts or []
116+ if 'inc=*' in opts:
117+ # do not filter any files, include everything
118+ return None
119+
120+ def _filter(dir, ls):
121+ incs = [opt.split('=').pop() for opt in opts if 'inc=' in opt]
122+ _filter = []
123+ for f in ls:
124+ _f = os.path.join(dir, f)
125+
126+ if not os.path.isdir(_f) and not _f.endswith('.py') and incs:
127+ if True not in [fnmatch(_f, inc) for inc in incs]:
128+ logging.debug('Not syncing %s, does not match include '
129+ 'filters (%s)' % (_f, incs))
130+ _filter.append(f)
131+ else:
132+ logging.debug('Including file, which matches include '
133+ 'filters (%s): %s' % (incs, _f))
134+ elif (os.path.isfile(_f) and not _f.endswith('.py')):
135+ logging.debug('Not syncing file: %s' % f)
136+ _filter.append(f)
137+ elif (os.path.isdir(_f) and not
138+ os.path.isfile(os.path.join(_f, '__init__.py'))):
139+ logging.debug('Not syncing directory: %s' % f)
140+ _filter.append(f)
141+ return _filter
142+ return _filter
143+
144+
145+def sync_directory(src, dest, opts=None):
146+ if os.path.exists(dest):
147+ logging.debug('Removing existing directory: %s' % dest)
148+ shutil.rmtree(dest)
149+ logging.info('Syncing directory: %s -> %s.' % (src, dest))
150+
151+ shutil.copytree(src, dest, ignore=get_filter(opts))
152+ ensure_init(dest)
153+
154+
155+def sync(src, dest, module, opts=None):
156+
157+ # Sync charmhelpers/__init__.py for bootstrap code.
158+ sync_pyfile(_src_path(src, '__init__'), dest)
159+
160+ # Sync other __init__.py files in the path leading to module.
161+ m = []
162+ steps = module.split('.')[:-1]
163+ while steps:
164+ m.append(steps.pop(0))
165+ init = '.'.join(m + ['__init__'])
166+ sync_pyfile(_src_path(src, init),
167+ os.path.dirname(_dest_path(dest, init)))
168+
169+ # Sync the module, or maybe a .py file.
170+ if os.path.isdir(_src_path(src, module)):
171+ sync_directory(_src_path(src, module), _dest_path(dest, module), opts)
172+ elif _is_pyfile(_src_path(src, module)):
173+ sync_pyfile(_src_path(src, module),
174+ os.path.dirname(_dest_path(dest, module)))
175+ else:
176+ logging.warn('Could not sync: %s. Neither a pyfile or directory, '
177+ 'does it even exist?' % module)
178+
179+
180+def parse_sync_options(options):
181+ if not options:
182+ return []
183+ return options.split(',')
184+
185+
186+def extract_options(inc, global_options=None):
187+ global_options = global_options or []
188+ if global_options and isinstance(global_options, six.string_types):
189+ global_options = [global_options]
190+ if '|' not in inc:
191+ return (inc, global_options)
192+ inc, opts = inc.split('|')
193+ return (inc, parse_sync_options(opts) + global_options)
194+
195+
196+def sync_helpers(include, src, dest, options=None):
197+ if not os.path.isdir(dest):
198+ os.makedirs(dest)
199+
200+ global_options = parse_sync_options(options)
201+
202+ for inc in include:
203+ if isinstance(inc, str):
204+ inc, opts = extract_options(inc, global_options)
205+ sync(src, dest, inc, opts)
206+ elif isinstance(inc, dict):
207+ # could also do nested dicts here.
208+ for k, v in six.iteritems(inc):
209+ if isinstance(v, list):
210+ for m in v:
211+ inc, opts = extract_options(m, global_options)
212+ sync(src, dest, '%s.%s' % (k, inc), opts)
213+
214+if __name__ == '__main__':
215+ parser = optparse.OptionParser()
216+ parser.add_option('-c', '--config', action='store', dest='config',
217+ default=None, help='helper config file')
218+ parser.add_option('-D', '--debug', action='store_true', dest='debug',
219+ default=False, help='debug')
220+ parser.add_option('-b', '--branch', action='store', dest='branch',
221+ help='charm-helpers bzr branch (overrides config)')
222+ parser.add_option('-d', '--destination', action='store', dest='dest_dir',
223+ help='sync destination dir (overrides config)')
224+ (opts, args) = parser.parse_args()
225+
226+ if opts.debug:
227+ logging.basicConfig(level=logging.DEBUG)
228+ else:
229+ logging.basicConfig(level=logging.INFO)
230+
231+ if opts.config:
232+ logging.info('Loading charm helper config from %s.' % opts.config)
233+ config = parse_config(opts.config)
234+ if not config:
235+ logging.error('Could not parse config from %s.' % opts.config)
236+ sys.exit(1)
237+ else:
238+ config = {}
239+
240+ if 'branch' not in config:
241+ config['branch'] = CHARM_HELPERS_BRANCH
242+ if opts.branch:
243+ config['branch'] = opts.branch
244+ if opts.dest_dir:
245+ config['destination'] = opts.dest_dir
246+
247+ if 'destination' not in config:
248+ logging.error('No destination dir. specified as option or config.')
249+ sys.exit(1)
250+
251+ if 'include' not in config:
252+ if not args:
253+ logging.error('No modules to sync specified as option or config.')
254+ sys.exit(1)
255+ config['include'] = []
256+ [config['include'].append(a) for a in args]
257+
258+ sync_options = None
259+ if 'options' in config:
260+ sync_options = config['options']
261+ tmpd = tempfile.mkdtemp()
262+ try:
263+ checkout = clone_helpers(tmpd, config['branch'])
264+ sync_helpers(config['include'], checkout, config['destination'],
265+ options=sync_options)
266+ except Exception as e:
267+ logging.error("Could not sync: %s" % e)
268+ raise e
269+ finally:
270+ logging.debug('Cleaning up %s' % tmpd)
271+ shutil.rmtree(tmpd)
272
273=== modified file 'charm-helpers-sync.yaml'
274--- charm-helpers-sync.yaml 2015-07-29 18:19:18 +0000
275+++ charm-helpers-sync.yaml 2016-05-18 10:05:52 +0000
276@@ -3,5 +3,10 @@
277 include:
278 - core
279 - fetch
280- - contrib
281+ - contrib.amulet
282+ - contrib.hahelpers
283+ - contrib.network
284+ - contrib.openstack
285+ - contrib.python
286+ - contrib.storage
287 - payload
288
289=== modified file 'hooks/charmhelpers/contrib/amulet/deployment.py'
290--- hooks/charmhelpers/contrib/amulet/deployment.py 2015-07-29 18:19:18 +0000
291+++ hooks/charmhelpers/contrib/amulet/deployment.py 2016-05-18 10:05:52 +0000
292@@ -51,7 +51,8 @@
293 if 'units' not in this_service:
294 this_service['units'] = 1
295
296- self.d.add(this_service['name'], units=this_service['units'])
297+ self.d.add(this_service['name'], units=this_service['units'],
298+ constraints=this_service.get('constraints'))
299
300 for svc in other_services:
301 if 'location' in svc:
302@@ -64,7 +65,8 @@
303 if 'units' not in svc:
304 svc['units'] = 1
305
306- self.d.add(svc['name'], charm=branch_location, units=svc['units'])
307+ self.d.add(svc['name'], charm=branch_location, units=svc['units'],
308+ constraints=svc.get('constraints'))
309
310 def _add_relations(self, relations):
311 """Add all of the relations for the services."""
312
313=== modified file 'hooks/charmhelpers/contrib/amulet/utils.py'
314--- hooks/charmhelpers/contrib/amulet/utils.py 2015-07-29 18:19:18 +0000
315+++ hooks/charmhelpers/contrib/amulet/utils.py 2016-05-18 10:05:52 +0000
316@@ -14,17 +14,25 @@
317 # You should have received a copy of the GNU Lesser General Public License
318 # along with charm-helpers. If not, see <http://www.gnu.org/licenses/>.
319
320-import amulet
321-import ConfigParser
322-import distro_info
323 import io
324+import json
325 import logging
326 import os
327 import re
328-import six
329+import socket
330+import subprocess
331 import sys
332 import time
333-import urlparse
334+import uuid
335+
336+import amulet
337+import distro_info
338+import six
339+from six.moves import configparser
340+if six.PY3:
341+ from urllib import parse as urlparse
342+else:
343+ import urlparse
344
345
346 class AmuletUtils(object):
347@@ -108,7 +116,7 @@
348 # /!\ DEPRECATION WARNING (beisner):
349 # New and existing tests should be rewritten to use
350 # validate_services_by_name() as it is aware of init systems.
351- self.log.warn('/!\\ DEPRECATION WARNING: use '
352+ self.log.warn('DEPRECATION WARNING: use '
353 'validate_services_by_name instead of validate_services '
354 'due to init system differences.')
355
356@@ -142,19 +150,23 @@
357
358 for service_name in services_list:
359 if (self.ubuntu_releases.index(release) >= systemd_switch or
360- service_name == "rabbitmq-server"):
361- # init is systemd
362+ service_name in ['rabbitmq-server', 'apache2']):
363+ # init is systemd (or regular sysv)
364 cmd = 'sudo service {} status'.format(service_name)
365+ output, code = sentry_unit.run(cmd)
366+ service_running = code == 0
367 elif self.ubuntu_releases.index(release) < systemd_switch:
368 # init is upstart
369 cmd = 'sudo status {}'.format(service_name)
370+ output, code = sentry_unit.run(cmd)
371+ service_running = code == 0 and "start/running" in output
372
373- output, code = sentry_unit.run(cmd)
374 self.log.debug('{} `{}` returned '
375 '{}'.format(sentry_unit.info['unit_name'],
376 cmd, code))
377- if code != 0:
378- return "command `{}` returned {}".format(cmd, str(code))
379+ if not service_running:
380+ return u"command `{}` returned {} {}".format(
381+ cmd, output, str(code))
382 return None
383
384 def _get_config(self, unit, filename):
385@@ -164,7 +176,7 @@
386 # NOTE(beisner): by default, ConfigParser does not handle options
387 # with no value, such as the flags used in the mysql my.cnf file.
388 # https://bugs.python.org/issue7005
389- config = ConfigParser.ConfigParser(allow_no_value=True)
390+ config = configparser.ConfigParser(allow_no_value=True)
391 config.readfp(io.StringIO(file_contents))
392 return config
393
394@@ -259,33 +271,52 @@
395 """Get last modification time of directory."""
396 return sentry_unit.directory_stat(directory)['mtime']
397
398- def _get_proc_start_time(self, sentry_unit, service, pgrep_full=False):
399- """Get process' start time.
400-
401- Determine start time of the process based on the last modification
402- time of the /proc/pid directory. If pgrep_full is True, the process
403- name is matched against the full command line.
404- """
405- if pgrep_full:
406- cmd = 'pgrep -o -f {}'.format(service)
407- else:
408- cmd = 'pgrep -o {}'.format(service)
409- cmd = cmd + ' | grep -v pgrep || exit 0'
410- cmd_out = sentry_unit.run(cmd)
411- self.log.debug('CMDout: ' + str(cmd_out))
412- if cmd_out[0]:
413- self.log.debug('Pid for %s %s' % (service, str(cmd_out[0])))
414- proc_dir = '/proc/{}'.format(cmd_out[0].strip())
415- return self._get_dir_mtime(sentry_unit, proc_dir)
416+ def _get_proc_start_time(self, sentry_unit, service, pgrep_full=None):
417+ """Get start time of a process based on the last modification time
418+ of the /proc/pid directory.
419+
420+ :sentry_unit: The sentry unit to check for the service on
421+ :service: service name to look for in process table
422+ :pgrep_full: [Deprecated] Use full command line search mode with pgrep
423+ :returns: epoch time of service process start
424+ :param commands: list of bash commands
425+ :param sentry_units: list of sentry unit pointers
426+ :returns: None if successful; Failure message otherwise
427+ """
428+ if pgrep_full is not None:
429+ # /!\ DEPRECATION WARNING (beisner):
430+ # No longer implemented, as pidof is now used instead of pgrep.
431+ # https://bugs.launchpad.net/charm-helpers/+bug/1474030
432+ self.log.warn('DEPRECATION WARNING: pgrep_full bool is no '
433+ 'longer implemented re: lp 1474030.')
434+
435+ pid_list = self.get_process_id_list(sentry_unit, service)
436+ pid = pid_list[0]
437+ proc_dir = '/proc/{}'.format(pid)
438+ self.log.debug('Pid for {} on {}: {}'.format(
439+ service, sentry_unit.info['unit_name'], pid))
440+
441+ return self._get_dir_mtime(sentry_unit, proc_dir)
442
443 def service_restarted(self, sentry_unit, service, filename,
444- pgrep_full=False, sleep_time=20):
445+ pgrep_full=None, sleep_time=20):
446 """Check if service was restarted.
447
448 Compare a service's start time vs a file's last modification time
449 (such as a config file for that service) to determine if the service
450 has been restarted.
451 """
452+ # /!\ DEPRECATION WARNING (beisner):
453+ # This method is prone to races in that no before-time is known.
454+ # Use validate_service_config_changed instead.
455+
456+ # NOTE(beisner) pgrep_full is no longer implemented, as pidof is now
457+ # used instead of pgrep. pgrep_full is still passed through to ensure
458+ # deprecation WARNS. lp1474030
459+ self.log.warn('DEPRECATION WARNING: use '
460+ 'validate_service_config_changed instead of '
461+ 'service_restarted due to known races.')
462+
463 time.sleep(sleep_time)
464 if (self._get_proc_start_time(sentry_unit, service, pgrep_full) >=
465 self._get_file_mtime(sentry_unit, filename)):
466@@ -294,78 +325,122 @@
467 return False
468
469 def service_restarted_since(self, sentry_unit, mtime, service,
470- pgrep_full=False, sleep_time=20,
471- retry_count=2):
472+ pgrep_full=None, sleep_time=20,
473+ retry_count=30, retry_sleep_time=10):
474 """Check if service was been started after a given time.
475
476 Args:
477 sentry_unit (sentry): The sentry unit to check for the service on
478 mtime (float): The epoch time to check against
479 service (string): service name to look for in process table
480- pgrep_full (boolean): Use full command line search mode with pgrep
481- sleep_time (int): Seconds to sleep before looking for process
482- retry_count (int): If service is not found, how many times to retry
483+ pgrep_full: [Deprecated] Use full command line search mode with pgrep
484+ sleep_time (int): Initial sleep time (s) before looking for file
485+ retry_sleep_time (int): Time (s) to sleep between retries
486+ retry_count (int): If file is not found, how many times to retry
487
488 Returns:
489 bool: True if service found and its start time it newer than mtime,
490 False if service is older than mtime or if service was
491 not found.
492 """
493- self.log.debug('Checking %s restarted since %s' % (service, mtime))
494+ # NOTE(beisner) pgrep_full is no longer implemented, as pidof is now
495+ # used instead of pgrep. pgrep_full is still passed through to ensure
496+ # deprecation WARNS. lp1474030
497+
498+ unit_name = sentry_unit.info['unit_name']
499+ self.log.debug('Checking that %s service restarted since %s on '
500+ '%s' % (service, mtime, unit_name))
501 time.sleep(sleep_time)
502- proc_start_time = self._get_proc_start_time(sentry_unit, service,
503- pgrep_full)
504- while retry_count > 0 and not proc_start_time:
505- self.log.debug('No pid file found for service %s, will retry %i '
506- 'more times' % (service, retry_count))
507- time.sleep(30)
508- proc_start_time = self._get_proc_start_time(sentry_unit, service,
509- pgrep_full)
510- retry_count = retry_count - 1
511+ proc_start_time = None
512+ tries = 0
513+ while tries <= retry_count and not proc_start_time:
514+ try:
515+ proc_start_time = self._get_proc_start_time(sentry_unit,
516+ service,
517+ pgrep_full)
518+ self.log.debug('Attempt {} to get {} proc start time on {} '
519+ 'OK'.format(tries, service, unit_name))
520+ except IOError as e:
521+ # NOTE(beisner) - race avoidance, proc may not exist yet.
522+ # https://bugs.launchpad.net/charm-helpers/+bug/1474030
523+ self.log.debug('Attempt {} to get {} proc start time on {} '
524+ 'failed\n{}'.format(tries, service,
525+ unit_name, e))
526+ time.sleep(retry_sleep_time)
527+ tries += 1
528
529 if not proc_start_time:
530 self.log.warn('No proc start time found, assuming service did '
531 'not start')
532 return False
533 if proc_start_time >= mtime:
534- self.log.debug('proc start time is newer than provided mtime'
535- '(%s >= %s)' % (proc_start_time, mtime))
536+ self.log.debug('Proc start time is newer than provided mtime'
537+ '(%s >= %s) on %s (OK)' % (proc_start_time,
538+ mtime, unit_name))
539 return True
540 else:
541- self.log.warn('proc start time (%s) is older than provided mtime '
542- '(%s), service did not restart' % (proc_start_time,
543- mtime))
544+ self.log.warn('Proc start time (%s) is older than provided mtime '
545+ '(%s) on %s, service did not '
546+ 'restart' % (proc_start_time, mtime, unit_name))
547 return False
548
549 def config_updated_since(self, sentry_unit, filename, mtime,
550- sleep_time=20):
551+ sleep_time=20, retry_count=30,
552+ retry_sleep_time=10):
553 """Check if file was modified after a given time.
554
555 Args:
556 sentry_unit (sentry): The sentry unit to check the file mtime on
557 filename (string): The file to check mtime of
558 mtime (float): The epoch time to check against
559- sleep_time (int): Seconds to sleep before looking for process
560+ sleep_time (int): Initial sleep time (s) before looking for file
561+ retry_sleep_time (int): Time (s) to sleep between retries
562+ retry_count (int): If file is not found, how many times to retry
563
564 Returns:
565 bool: True if file was modified more recently than mtime, False if
566- file was modified before mtime,
567+ file was modified before mtime, or if file not found.
568 """
569- self.log.debug('Checking %s updated since %s' % (filename, mtime))
570+ unit_name = sentry_unit.info['unit_name']
571+ self.log.debug('Checking that %s updated since %s on '
572+ '%s' % (filename, mtime, unit_name))
573 time.sleep(sleep_time)
574- file_mtime = self._get_file_mtime(sentry_unit, filename)
575+ file_mtime = None
576+ tries = 0
577+ while tries <= retry_count and not file_mtime:
578+ try:
579+ file_mtime = self._get_file_mtime(sentry_unit, filename)
580+ self.log.debug('Attempt {} to get {} file mtime on {} '
581+ 'OK'.format(tries, filename, unit_name))
582+ except IOError as e:
583+ # NOTE(beisner) - race avoidance, file may not exist yet.
584+ # https://bugs.launchpad.net/charm-helpers/+bug/1474030
585+ self.log.debug('Attempt {} to get {} file mtime on {} '
586+ 'failed\n{}'.format(tries, filename,
587+ unit_name, e))
588+ time.sleep(retry_sleep_time)
589+ tries += 1
590+
591+ if not file_mtime:
592+ self.log.warn('Could not determine file mtime, assuming '
593+ 'file does not exist')
594+ return False
595+
596 if file_mtime >= mtime:
597 self.log.debug('File mtime is newer than provided mtime '
598- '(%s >= %s)' % (file_mtime, mtime))
599+ '(%s >= %s) on %s (OK)' % (file_mtime,
600+ mtime, unit_name))
601 return True
602 else:
603- self.log.warn('File mtime %s is older than provided mtime %s'
604- % (file_mtime, mtime))
605+ self.log.warn('File mtime is older than provided mtime'
606+ '(%s < on %s) on %s' % (file_mtime,
607+ mtime, unit_name))
608 return False
609
610 def validate_service_config_changed(self, sentry_unit, mtime, service,
611- filename, pgrep_full=False,
612- sleep_time=20, retry_count=2):
613+ filename, pgrep_full=None,
614+ sleep_time=20, retry_count=30,
615+ retry_sleep_time=10):
616 """Check service and file were updated after mtime
617
618 Args:
619@@ -373,9 +448,10 @@
620 mtime (float): The epoch time to check against
621 service (string): service name to look for in process table
622 filename (string): The file to check mtime of
623- pgrep_full (boolean): Use full command line search mode with pgrep
624- sleep_time (int): Seconds to sleep before looking for process
625+ pgrep_full: [Deprecated] Use full command line search mode with pgrep
626+ sleep_time (int): Initial sleep in seconds to pass to test helpers
627 retry_count (int): If service is not found, how many times to retry
628+ retry_sleep_time (int): Time in seconds to wait between retries
629
630 Typical Usage:
631 u = OpenStackAmuletUtils(ERROR)
632@@ -392,15 +468,27 @@
633 mtime, False if service is older than mtime or if service was
634 not found or if filename was modified before mtime.
635 """
636- self.log.debug('Checking %s restarted since %s' % (service, mtime))
637- time.sleep(sleep_time)
638- service_restart = self.service_restarted_since(sentry_unit, mtime,
639- service,
640- pgrep_full=pgrep_full,
641- sleep_time=0,
642- retry_count=retry_count)
643- config_update = self.config_updated_since(sentry_unit, filename, mtime,
644- sleep_time=0)
645+
646+ # NOTE(beisner) pgrep_full is no longer implemented, as pidof is now
647+ # used instead of pgrep. pgrep_full is still passed through to ensure
648+ # deprecation WARNS. lp1474030
649+
650+ service_restart = self.service_restarted_since(
651+ sentry_unit, mtime,
652+ service,
653+ pgrep_full=pgrep_full,
654+ sleep_time=sleep_time,
655+ retry_count=retry_count,
656+ retry_sleep_time=retry_sleep_time)
657+
658+ config_update = self.config_updated_since(
659+ sentry_unit,
660+ filename,
661+ mtime,
662+ sleep_time=sleep_time,
663+ retry_count=retry_count,
664+ retry_sleep_time=retry_sleep_time)
665+
666 return service_restart and config_update
667
668 def get_sentry_time(self, sentry_unit):
669@@ -418,7 +506,6 @@
670 """Return a list of all Ubuntu releases in order of release."""
671 _d = distro_info.UbuntuDistroInfo()
672 _release_list = _d.all
673- self.log.debug('Ubuntu release list: {}'.format(_release_list))
674 return _release_list
675
676 def file_to_url(self, file_rel_path):
677@@ -450,15 +537,20 @@
678 cmd, code, output))
679 return None
680
681- def get_process_id_list(self, sentry_unit, process_name):
682+ def get_process_id_list(self, sentry_unit, process_name,
683+ expect_success=True):
684 """Get a list of process ID(s) from a single sentry juju unit
685 for a single process name.
686
687- :param sentry_unit: Pointer to amulet sentry instance (juju unit)
688+ :param sentry_unit: Amulet sentry instance (juju unit)
689 :param process_name: Process name
690+ :param expect_success: If False, expect the PID to be missing,
691+ raise if it is present.
692 :returns: List of process IDs
693 """
694- cmd = 'pidof {}'.format(process_name)
695+ cmd = 'pidof -x {}'.format(process_name)
696+ if not expect_success:
697+ cmd += " || exit 0 && exit 1"
698 output, code = sentry_unit.run(cmd)
699 if code != 0:
700 msg = ('{} `{}` returned {} '
701@@ -467,14 +559,23 @@
702 amulet.raise_status(amulet.FAIL, msg=msg)
703 return str(output).split()
704
705- def get_unit_process_ids(self, unit_processes):
706+ def get_unit_process_ids(self, unit_processes, expect_success=True):
707 """Construct a dict containing unit sentries, process names, and
708- process IDs."""
709+ process IDs.
710+
711+ :param unit_processes: A dictionary of Amulet sentry instance
712+ to list of process names.
713+ :param expect_success: if False expect the processes to not be
714+ running, raise if they are.
715+ :returns: Dictionary of Amulet sentry instance to dictionary
716+ of process names to PIDs.
717+ """
718 pid_dict = {}
719- for sentry_unit, process_list in unit_processes.iteritems():
720+ for sentry_unit, process_list in six.iteritems(unit_processes):
721 pid_dict[sentry_unit] = {}
722 for process in process_list:
723- pids = self.get_process_id_list(sentry_unit, process)
724+ pids = self.get_process_id_list(
725+ sentry_unit, process, expect_success=expect_success)
726 pid_dict[sentry_unit].update({process: pids})
727 return pid_dict
728
729@@ -488,7 +589,7 @@
730 return ('Unit count mismatch. expected, actual: {}, '
731 '{} '.format(len(expected), len(actual)))
732
733- for (e_sentry, e_proc_names) in expected.iteritems():
734+ for (e_sentry, e_proc_names) in six.iteritems(expected):
735 e_sentry_name = e_sentry.info['unit_name']
736 if e_sentry in actual.keys():
737 a_proc_names = actual[e_sentry]
738@@ -500,22 +601,40 @@
739 return ('Process name count mismatch. expected, actual: {}, '
740 '{}'.format(len(expected), len(actual)))
741
742- for (e_proc_name, e_pids_length), (a_proc_name, a_pids) in \
743+ for (e_proc_name, e_pids), (a_proc_name, a_pids) in \
744 zip(e_proc_names.items(), a_proc_names.items()):
745 if e_proc_name != a_proc_name:
746 return ('Process name mismatch. expected, actual: {}, '
747 '{}'.format(e_proc_name, a_proc_name))
748
749 a_pids_length = len(a_pids)
750- if e_pids_length != a_pids_length:
751- return ('PID count mismatch. {} ({}) expected, actual: '
752+ fail_msg = ('PID count mismatch. {} ({}) expected, actual: '
753 '{}, {} ({})'.format(e_sentry_name, e_proc_name,
754- e_pids_length, a_pids_length,
755+ e_pids, a_pids_length,
756 a_pids))
757+
758+ # If expected is a list, ensure at least one PID quantity match
759+ if isinstance(e_pids, list) and \
760+ a_pids_length not in e_pids:
761+ return fail_msg
762+ # If expected is not bool and not list,
763+ # ensure PID quantities match
764+ elif not isinstance(e_pids, bool) and \
765+ not isinstance(e_pids, list) and \
766+ a_pids_length != e_pids:
767+ return fail_msg
768+ # If expected is bool True, ensure 1 or more PIDs exist
769+ elif isinstance(e_pids, bool) and \
770+ e_pids is True and a_pids_length < 1:
771+ return fail_msg
772+ # If expected is bool False, ensure 0 PIDs exist
773+ elif isinstance(e_pids, bool) and \
774+ e_pids is False and a_pids_length != 0:
775+ return fail_msg
776 else:
777 self.log.debug('PID check OK: {} {} {}: '
778 '{}'.format(e_sentry_name, e_proc_name,
779- e_pids_length, a_pids))
780+ e_pids, a_pids))
781 return None
782
783 def validate_list_of_identical_dicts(self, list_of_dicts):
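The branches added above let the expected PID spec be an int (exact count), a list of acceptable counts, or a bool (`True`: one or more PIDs, `False`: none). A standalone sketch of those matching rules:

```python
def pid_count_ok(expected, actual_count):
    """Apply the PID-count rules from validate_unit_process_ids:
    bool -> presence/absence, list -> any acceptable count,
    int -> exact count."""
    # bool must be tested before the int case: bool is a subclass
    # of int in Python, so isinstance(True, int) is also True.
    if isinstance(expected, bool):
        return actual_count > 0 if expected else actual_count == 0
    if isinstance(expected, list):
        return actual_count in expected
    return actual_count == expected
```

This is why the diff's `elif` chain explicitly excludes `bool` before comparing against an integer count.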
784@@ -531,3 +650,180 @@
785 return 'Dicts within list are not identical'
786
787 return None
788+
789+ def validate_sectionless_conf(self, file_contents, expected):
790+ """A crude conf parser. Useful to inspect configuration files which
791+ do not have section headers (as would be necessary in order to use
792+ the configparser), such as openstack-dashboard or rabbitmq confs."""
793+ for line in file_contents.split('\n'):
794+ if '=' in line:
795+ args = line.split('=')
796+ if len(args) <= 1:
797+ continue
798+ key = args[0].strip()
799+ value = args[1].strip()
800+ if key in expected.keys():
801+ if expected[key] != value:
802+ msg = ('Config mismatch. Expected, actual: {}, '
803+ '{}'.format(expected[key], value))
804+ amulet.raise_status(amulet.FAIL, msg=msg)
805+
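The crude parser above can be sketched in isolation. This version uses `str.partition`, which keeps any second `=` inside the value (the helper's `split('=')` form takes only the first fragment after the key); the sample keys are illustrative rabbitmq-style settings, not from the charm:

```python
def parse_sectionless(file_contents):
    """Collect key=value pairs from a conf file with no [section] headers."""
    conf = {}
    for line in file_contents.split('\n'):
        if '=' not in line:
            continue  # comments and blank lines carry no '='
        key, _, value = line.partition('=')
        conf[key.strip()] = value.strip()
    return conf


sample = ('vm_memory_high_watermark = 0.4\n'
          '# a comment without an equals sign\n'
          'hipe_compile = false\n')
parsed = parse_sectionless(sample)
```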
806+ def get_unit_hostnames(self, units):
807+ """Return a dict of juju unit names to hostnames."""
808+ host_names = {}
809+ for unit in units:
810+ host_names[unit.info['unit_name']] = \
811+ str(unit.file_contents('/etc/hostname').strip())
812+ self.log.debug('Unit host names: {}'.format(host_names))
813+ return host_names
814+
815+ def run_cmd_unit(self, sentry_unit, cmd):
816+ """Run a command on a unit, return the output and exit code."""
817+ output, code = sentry_unit.run(cmd)
818+ if code == 0:
819+ self.log.debug('{} `{}` command returned {} '
820+ '(OK)'.format(sentry_unit.info['unit_name'],
821+ cmd, code))
822+ else:
823+ msg = ('{} `{}` command returned {} '
824+ '{}'.format(sentry_unit.info['unit_name'],
825+ cmd, code, output))
826+ amulet.raise_status(amulet.FAIL, msg=msg)
827+ return str(output), code
828+
829+ def file_exists_on_unit(self, sentry_unit, file_name):
830+ """Check if a file exists on a unit."""
831+ try:
832+ sentry_unit.file_stat(file_name)
833+ return True
834+ except IOError:
835+ return False
836+ except Exception as e:
837+ msg = 'Error checking file {}: {}'.format(file_name, e)
838+ amulet.raise_status(amulet.FAIL, msg=msg)
839+
840+ def file_contents_safe(self, sentry_unit, file_name,
841+ max_wait=60, fatal=False):
842+ """Get file contents from a sentry unit. Wrap amulet file_contents
843+ with retry logic to address races where a file checks as existing,
844+ but no longer exists by the time file_contents is called.
845+ Return None if file not found. Optionally raise if fatal is True."""
846+ unit_name = sentry_unit.info['unit_name']
847+ file_contents = False
848+ tries = 0
849+ while not file_contents and tries < (max_wait / 4):
850+ try:
851+ file_contents = sentry_unit.file_contents(file_name)
852+ except IOError:
853+ self.log.debug('Attempt {} to open file {} from {} '
854+ 'failed'.format(tries, file_name,
855+ unit_name))
856+ time.sleep(4)
857+ tries += 1
858+
859+ if file_contents:
860+ return file_contents
861+ elif not fatal:
862+ return None
863+ elif fatal:
864+ msg = 'Failed to get file contents from unit.'
865+ amulet.raise_status(amulet.FAIL, msg)
866+
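The retry loop above addresses the race where a file passes `file_stat` but disappears before `file_contents` runs. The same shape, sketched with an injectable read function so it can be exercised without a sentry unit (the flaky reader below is a stand-in, not part of the charm):

```python
import time


def read_with_retry(read_fn, max_wait=60, interval=4):
    """Retry a racy read: IOError or empty content triggers another
    attempt until roughly max_wait seconds of sleeps are exhausted."""
    contents = False
    tries = 0
    while not contents and tries < (max_wait / interval):
        try:
            contents = read_fn()
        except IOError:
            time.sleep(interval)
        tries += 1
    return contents or None


calls = {'n': 0}

def flaky_read():
    # Fails twice, then succeeds, simulating the stat/read race.
    calls['n'] += 1
    if calls['n'] < 3:
        raise IOError('file vanished between stat and read')
    return 'contents'


result = read_with_retry(flaky_read, max_wait=1, interval=0.01)
```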
867+ def port_knock_tcp(self, host="localhost", port=22, timeout=15):
868+ """Open a TCP socket to check for a listening service on a host.
869+
870+ :param host: host name or IP address, default to localhost
871+ :param port: TCP port number, default to 22
872+ :param timeout: Connect timeout, default to 15 seconds
873+ :returns: True if successful, False if connect failed
874+ """
875+
876+ # Resolve host name if possible
877+ try:
878+ connect_host = socket.gethostbyname(host)
879+ host_human = "{} ({})".format(connect_host, host)
880+ except socket.error as e:
881+ self.log.warn('Unable to resolve address: '
882+ '{} ({}) Trying anyway!'.format(host, e))
883+ connect_host = host
884+ host_human = connect_host
885+
886+ # Attempt socket connection
887+ try:
888+ knock = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
889+ knock.settimeout(timeout)
890+ knock.connect((connect_host, port))
891+ knock.close()
892+ self.log.debug('Socket connect OK for host '
893+ '{} on port {}.'.format(host_human, port))
894+ return True
895+ except socket.error as e:
896+ self.log.debug('Socket connect FAIL for'
897+ ' {} port {} ({})'.format(host_human, port, e))
898+ return False
899+
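The port-knock helper above reduces to a plain connect-and-close probe. A self-contained sketch, exercised against a throwaway loopback listener rather than a deployed unit:

```python
import socket


def port_knock_tcp(host='localhost', port=22, timeout=15):
    """Return True if a TCP connect to host:port succeeds, else False."""
    try:
        connect_host = socket.gethostbyname(host)
    except socket.error:
        connect_host = host  # unresolvable; try the raw name anyway
    try:
        knock = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
        knock.settimeout(timeout)
        knock.connect((connect_host, port))
        knock.close()
        return True
    except socket.error:
        return False


# Bind an ephemeral loopback port so the check has something to hit.
server = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
server.bind(('127.0.0.1', 0))
server.listen(1)
port = server.getsockname()[1]
reachable = port_knock_tcp('127.0.0.1', port, timeout=2)
server.close()
unreachable = port_knock_tcp('127.0.0.1', port, timeout=2)
```

`port_knock_units` then just runs this probe over each sentry's `public-address` and inverts the pass/fail logic when `expect_success` is False.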
900+ def port_knock_units(self, sentry_units, port=22,
901+ timeout=15, expect_success=True):
902+ """Open a TCP socket to check for a listening service on each
903+ listed juju unit.
904+
905+ :param sentry_units: list of sentry unit pointers
906+ :param port: TCP port number, default to 22
907+ :param timeout: Connect timeout, default to 15 seconds
908+ :expect_success: True by default, set False to invert logic
909+ :returns: None if successful, Failure message otherwise
910+ """
911+ for unit in sentry_units:
912+ host = unit.info['public-address']
913+ connected = self.port_knock_tcp(host, port, timeout)
914+ if not connected and expect_success:
915+ return 'Socket connect failed.'
916+ elif connected and not expect_success:
917+ return 'Socket connected unexpectedly.'
918+
919+ def get_uuid_epoch_stamp(self):
920+ """Returns a stamp string based on uuid4 and epoch time. Useful in
921+ generating test messages which need to be unique-ish."""
922+ return '[{}-{}]'.format(uuid.uuid4(), time.time())
923+
924+# amulet juju action helpers:
925+ def run_action(self, unit_sentry, action,
926+ _check_output=subprocess.check_output,
927+ params=None):
928+ """Run the named action on a given unit sentry.
929+
930+ params a dict of parameters to use
931+ _check_output parameter is used for dependency injection.
932+
933+ @return action_id.
934+ """
935+ unit_id = unit_sentry.info["unit_name"]
936+ command = ["juju", "action", "do", "--format=json", unit_id, action]
937+ if params is not None:
938+ for key, value in params.iteritems():
939+ command.append("{}={}".format(key, value))
940+ self.log.info("Running command: %s\n" % " ".join(command))
941+ output = _check_output(command, universal_newlines=True)
942+ data = json.loads(output)
943+ action_id = data[u'Action queued with id']
944+ return action_id
945+
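`run_action` above takes `_check_output` as a parameter purely so tests can fake the Juju 1.x CLI instead of shelling out. A sketch of that dependency-injection pattern; note the diff's `params.iteritems()` call is Python-2-only, so this sketch uses `.items()`, and the fake reply below is canned, not real `juju` output:

```python
import json


def run_action(unit_name, action, _check_output, params=None):
    """Queue a juju 1.x action; _check_output is injected for testing."""
    command = ['juju', 'action', 'do', '--format=json', unit_name, action]
    for key, value in (params or {}).items():
        command.append('{}={}'.format(key, value))
    data = json.loads(_check_output(command, universal_newlines=True))
    return data['Action queued with id']


def fake_juju(command, universal_newlines=True):
    # Stand-in for subprocess.check_output, shaped like the JSON the
    # real `juju action do --format=json` command emits.
    assert command[:4] == ['juju', 'action', 'do', '--format=json']
    return json.dumps({'Action queued with id': 'action-0001'})


action_id = run_action('plumgrid-edge/0', 'some-action', fake_juju,
                       params={'force': 'true'})
```

`wait_on_action` uses the same trick with `juju action fetch --wait=0` and checks the returned `status` field.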
946+ def wait_on_action(self, action_id, _check_output=subprocess.check_output):
947+ """Wait for a given action, returning if it completed or not.
948+
949+ _check_output parameter is used for dependency injection.
950+ """
951+ command = ["juju", "action", "fetch", "--format=json", "--wait=0",
952+ action_id]
953+ output = _check_output(command, universal_newlines=True)
954+ data = json.loads(output)
955+ return data.get(u"status") == "completed"
956+
957+ def status_get(self, unit):
958+ """Return the current service status of this unit."""
959+ raw_status, return_code = unit.run(
960+ "status-get --format=json --include-data")
961+ if return_code != 0:
962+ return ("unknown", "")
963+ status = json.loads(raw_status)
964+ return (status["status"], status["message"])
965
966=== removed directory 'hooks/charmhelpers/contrib/ansible'
967=== removed file 'hooks/charmhelpers/contrib/ansible/__init__.py'
968--- hooks/charmhelpers/contrib/ansible/__init__.py 2015-07-29 18:19:18 +0000
969+++ hooks/charmhelpers/contrib/ansible/__init__.py 1970-01-01 00:00:00 +0000
970@@ -1,254 +0,0 @@
971-# Copyright 2014-2015 Canonical Limited.
972-#
973-# This file is part of charm-helpers.
974-#
975-# charm-helpers is free software: you can redistribute it and/or modify
976-# it under the terms of the GNU Lesser General Public License version 3 as
977-# published by the Free Software Foundation.
978-#
979-# charm-helpers is distributed in the hope that it will be useful,
980-# but WITHOUT ANY WARRANTY; without even the implied warranty of
981-# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
982-# GNU Lesser General Public License for more details.
983-#
984-# You should have received a copy of the GNU Lesser General Public License
985-# along with charm-helpers. If not, see <http://www.gnu.org/licenses/>.
986-
987-# Copyright 2013 Canonical Ltd.
988-#
989-# Authors:
990-# Charm Helpers Developers <juju@lists.ubuntu.com>
991-"""Charm Helpers ansible - declare the state of your machines.
992-
993-This helper enables you to declare your machine state, rather than
994-program it procedurally (and have to test each change to your procedures).
995-Your install hook can be as simple as::
996-
997- {{{
998- import charmhelpers.contrib.ansible
999-
1000-
1001- def install():
1002- charmhelpers.contrib.ansible.install_ansible_support()
1003- charmhelpers.contrib.ansible.apply_playbook('playbooks/install.yaml')
1004- }}}
1005-
1006-and won't need to change (nor will its tests) when you change the machine
1007-state.
1008-
1009-All of your juju config and relation-data are available as template
1010-variables within your playbooks and templates. An install playbook looks
1011-something like::
1012-
1013- {{{
1014- ---
1015- - hosts: localhost
1016- user: root
1017-
1018- tasks:
1019- - name: Add private repositories.
1020- template:
1021- src: ../templates/private-repositories.list.jinja2
1022- dest: /etc/apt/sources.list.d/private.list
1023-
1024- - name: Update the cache.
1025- apt: update_cache=yes
1026-
1027- - name: Install dependencies.
1028- apt: pkg={{ item }}
1029- with_items:
1030- - python-mimeparse
1031- - python-webob
1032- - sunburnt
1033-
1034- - name: Setup groups.
1035- group: name={{ item.name }} gid={{ item.gid }}
1036- with_items:
1037- - { name: 'deploy_user', gid: 1800 }
1038- - { name: 'service_user', gid: 1500 }
1039-
1040- ...
1041- }}}
1042-
1043-Read more online about `playbooks`_ and standard ansible `modules`_.
1044-
1045-.. _playbooks: http://www.ansibleworks.com/docs/playbooks.html
1046-.. _modules: http://www.ansibleworks.com/docs/modules.html
1047-
1048-A further feature os the ansible hooks is to provide a light weight "action"
1049-scripting tool. This is a decorator that you apply to a function, and that
1050-function can now receive cli args, and can pass extra args to the playbook.
1051-
1052-e.g.
1053-
1054-
1055-@hooks.action()
1056-def some_action(amount, force="False"):
1057- "Usage: some-action AMOUNT [force=True]" # <-- shown on error
1058- # process the arguments
1059- # do some calls
1060- # return extra-vars to be passed to ansible-playbook
1061- return {
1062- 'amount': int(amount),
1063- 'type': force,
1064- }
1065-
1066-You can now create a symlink to hooks.py that can be invoked like a hook, but
1067-with cli params:
1068-
1069-# link actions/some-action to hooks/hooks.py
1070-
1071-actions/some-action amount=10 force=true
1072-
1073-"""
1074-import os
1075-import stat
1076-import subprocess
1077-import functools
1078-
1079-import charmhelpers.contrib.templating.contexts
1080-import charmhelpers.core.host
1081-import charmhelpers.core.hookenv
1082-import charmhelpers.fetch
1083-
1084-
1085-charm_dir = os.environ.get('CHARM_DIR', '')
1086-ansible_hosts_path = '/etc/ansible/hosts'
1087-# Ansible will automatically include any vars in the following
1088-# file in its inventory when run locally.
1089-ansible_vars_path = '/etc/ansible/host_vars/localhost'
1090-
1091-
1092-def install_ansible_support(from_ppa=True, ppa_location='ppa:rquillo/ansible'):
1093- """Installs the ansible package.
1094-
1095- By default it is installed from the `PPA`_ linked from
1096- the ansible `website`_ or from a ppa specified by a charm config..
1097-
1098- .. _PPA: https://launchpad.net/~rquillo/+archive/ansible
1099- .. _website: http://docs.ansible.com/intro_installation.html#latest-releases-via-apt-ubuntu
1100-
1101- If from_ppa is empty, you must ensure that the package is available
1102- from a configured repository.
1103- """
1104- if from_ppa:
1105- charmhelpers.fetch.add_source(ppa_location)
1106- charmhelpers.fetch.apt_update(fatal=True)
1107- charmhelpers.fetch.apt_install('ansible')
1108- with open(ansible_hosts_path, 'w+') as hosts_file:
1109- hosts_file.write('localhost ansible_connection=local')
1110-
1111-
1112-def apply_playbook(playbook, tags=None, extra_vars=None):
1113- tags = tags or []
1114- tags = ",".join(tags)
1115- charmhelpers.contrib.templating.contexts.juju_state_to_yaml(
1116- ansible_vars_path, namespace_separator='__',
1117- allow_hyphens_in_keys=False, mode=(stat.S_IRUSR | stat.S_IWUSR))
1118-
1119- # we want ansible's log output to be unbuffered
1120- env = os.environ.copy()
1121- env['PYTHONUNBUFFERED'] = "1"
1122- call = [
1123- 'ansible-playbook',
1124- '-c',
1125- 'local',
1126- playbook,
1127- ]
1128- if tags:
1129- call.extend(['--tags', '{}'.format(tags)])
1130- if extra_vars:
1131- extra = ["%s=%s" % (k, v) for k, v in extra_vars.items()]
1132- call.extend(['--extra-vars', " ".join(extra)])
1133- subprocess.check_call(call, env=env)
1134-
1135-
1136-class AnsibleHooks(charmhelpers.core.hookenv.Hooks):
1137- """Run a playbook with the hook-name as the tag.
1138-
1139- This helper builds on the standard hookenv.Hooks helper,
1140- but additionally runs the playbook with the hook-name specified
1141- using --tags (ie. running all the tasks tagged with the hook-name).
1142-
1143- Example::
1144-
1145- hooks = AnsibleHooks(playbook_path='playbooks/my_machine_state.yaml')
1146-
1147- # All the tasks within my_machine_state.yaml tagged with 'install'
1148- # will be run automatically after do_custom_work()
1149- @hooks.hook()
1150- def install():
1151- do_custom_work()
1152-
1153- # For most of your hooks, you won't need to do anything other
1154- # than run the tagged tasks for the hook:
1155- @hooks.hook('config-changed', 'start', 'stop')
1156- def just_use_playbook():
1157- pass
1158-
1159- # As a convenience, you can avoid the above noop function by specifying
1160- # the hooks which are handled by ansible-only and they'll be registered
1161- # for you:
1162- # hooks = AnsibleHooks(
1163- # 'playbooks/my_machine_state.yaml',
1164- # default_hooks=['config-changed', 'start', 'stop'])
1165-
1166- if __name__ == "__main__":
1167- # execute a hook based on the name the program is called by
1168- hooks.execute(sys.argv)
1169-
1170- """
1171-
1172- def __init__(self, playbook_path, default_hooks=None):
1173- """Register any hooks handled by ansible."""
1174- super(AnsibleHooks, self).__init__()
1175-
1176- self._actions = {}
1177- self.playbook_path = playbook_path
1178-
1179- default_hooks = default_hooks or []
1180-
1181- def noop(*args, **kwargs):
1182- pass
1183-
1184- for hook in default_hooks:
1185- self.register(hook, noop)
1186-
1187- def register_action(self, name, function):
1188- """Register a hook"""
1189- self._actions[name] = function
1190-
1191- def execute(self, args):
1192- """Execute the hook followed by the playbook using the hook as tag."""
1193- hook_name = os.path.basename(args[0])
1194- extra_vars = None
1195- if hook_name in self._actions:
1196- extra_vars = self._actions[hook_name](args[1:])
1197- else:
1198- super(AnsibleHooks, self).execute(args)
1199-
1200- charmhelpers.contrib.ansible.apply_playbook(
1201- self.playbook_path, tags=[hook_name], extra_vars=extra_vars)
1202-
1203- def action(self, *action_names):
1204- """Decorator, registering them as actions"""
1205- def action_wrapper(decorated):
1206-
1207- @functools.wraps(decorated)
1208- def wrapper(argv):
1209- kwargs = dict(arg.split('=') for arg in argv)
1210- try:
1211- return decorated(**kwargs)
1212- except TypeError as e:
1213- if decorated.__doc__:
1214- e.args += (decorated.__doc__,)
1215- raise
1216-
1217- self.register_action(decorated.__name__, wrapper)
1218- if '_' in decorated.__name__:
1219- self.register_action(
1220- decorated.__name__.replace('_', '-'), wrapper)
1221-
1222- return wrapper
1223-
1224- return action_wrapper
1225
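Although the ansible helper is removed here, its `@hooks.action()` wrapper illustrates a reusable idiom: turning `key=value` CLI arguments into kwargs. A sketch using `split('=', 1)` so values may themselves contain `=` (the removed code's plain `split('=')` would raise on such input):

```python
def parse_action_args(argv):
    """Turn 'key=value' CLI arguments into a kwargs dict, as the removed
    AnsibleHooks action wrapper did before calling the decorated function."""
    return dict(arg.split('=', 1) for arg in argv)


kwargs = parse_action_args(['amount=10', 'force=true'])
```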
1226=== removed directory 'hooks/charmhelpers/contrib/benchmark'
1227=== removed file 'hooks/charmhelpers/contrib/benchmark/__init__.py'
1228--- hooks/charmhelpers/contrib/benchmark/__init__.py 2015-07-29 18:19:18 +0000
1229+++ hooks/charmhelpers/contrib/benchmark/__init__.py 1970-01-01 00:00:00 +0000
1230@@ -1,126 +0,0 @@
1231-# Copyright 2014-2015 Canonical Limited.
1232-#
1233-# This file is part of charm-helpers.
1234-#
1235-# charm-helpers is free software: you can redistribute it and/or modify
1236-# it under the terms of the GNU Lesser General Public License version 3 as
1237-# published by the Free Software Foundation.
1238-#
1239-# charm-helpers is distributed in the hope that it will be useful,
1240-# but WITHOUT ANY WARRANTY; without even the implied warranty of
1241-# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
1242-# GNU Lesser General Public License for more details.
1243-#
1244-# You should have received a copy of the GNU Lesser General Public License
1245-# along with charm-helpers. If not, see <http://www.gnu.org/licenses/>.
1246-
1247-import subprocess
1248-import time
1249-import os
1250-from distutils.spawn import find_executable
1251-
1252-from charmhelpers.core.hookenv import (
1253- in_relation_hook,
1254- relation_ids,
1255- relation_set,
1256- relation_get,
1257-)
1258-
1259-
1260-def action_set(key, val):
1261- if find_executable('action-set'):
1262- action_cmd = ['action-set']
1263-
1264- if isinstance(val, dict):
1265- for k, v in iter(val.items()):
1266- action_set('%s.%s' % (key, k), v)
1267- return True
1268-
1269- action_cmd.append('%s=%s' % (key, val))
1270- subprocess.check_call(action_cmd)
1271- return True
1272- return False
1273-
1274-
1275-class Benchmark():
1276- """
1277- Helper class for the `benchmark` interface.
1278-
1279- :param list actions: Define the actions that are also benchmarks
1280-
1281- From inside the benchmark-relation-changed hook, you would
1282- Benchmark(['memory', 'cpu', 'disk', 'smoke', 'custom'])
1283-
1284- Examples:
1285-
1286- siege = Benchmark(['siege'])
1287- siege.start()
1288- [... run siege ...]
1289- # The higher the score, the better the benchmark
1290- siege.set_composite_score(16.70, 'trans/sec', 'desc')
1291- siege.finish()
1292-
1293-
1294- """
1295-
1296- BENCHMARK_CONF = '/etc/benchmark.conf' # Replaced in testing
1297-
1298- required_keys = [
1299- 'hostname',
1300- 'port',
1301- 'graphite_port',
1302- 'graphite_endpoint',
1303- 'api_port'
1304- ]
1305-
1306- def __init__(self, benchmarks=None):
1307- if in_relation_hook():
1308- if benchmarks is not None:
1309- for rid in sorted(relation_ids('benchmark')):
1310- relation_set(relation_id=rid, relation_settings={
1311- 'benchmarks': ",".join(benchmarks)
1312- })
1313-
1314- # Check the relation data
1315- config = {}
1316- for key in self.required_keys:
1317- val = relation_get(key)
1318- if val is not None:
1319- config[key] = val
1320- else:
1321- # We don't have all of the required keys
1322- config = {}
1323- break
1324-
1325- if len(config):
1326- with open(self.BENCHMARK_CONF, 'w') as f:
1327- for key, val in iter(config.items()):
1328- f.write("%s=%s\n" % (key, val))
1329-
1330- @staticmethod
1331- def start():
1332- action_set('meta.start', time.strftime('%Y-%m-%dT%H:%M:%SZ'))
1333-
1334- """
1335- If the collectd charm is also installed, tell it to send a snapshot
1336- of the current profile data.
1337- """
1338- COLLECT_PROFILE_DATA = '/usr/local/bin/collect-profile-data'
1339- if os.path.exists(COLLECT_PROFILE_DATA):
1340- subprocess.check_output([COLLECT_PROFILE_DATA])
1341-
1342- @staticmethod
1343- def finish():
1344- action_set('meta.stop', time.strftime('%Y-%m-%dT%H:%M:%SZ'))
1345-
1346- @staticmethod
1347- def set_composite_score(value, units, direction='asc'):
1348- """
1349- Set the composite score for a benchmark run. This is a single number
1350- representative of the benchmark results. This could be the most
1351- important metric, or an amalgamation of metric scores.
1352- """
1353- return action_set(
1354- "meta.composite",
1355- {'value': value, 'units': units, 'direction': direction}
1356- )
1357
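The removed `action_set` recursed through nested dicts, emitting dotted keys for the `action-set` CLI tool. A sketch of just that flattening step, without shelling out (the sample values mirror the docstring's siege example):

```python
def flatten_action_data(key, val, out=None):
    """Expand nested dicts into dotted action-set keys, mirroring the
    recursion in the removed Benchmark helper's action_set()."""
    if out is None:
        out = {}
    if isinstance(val, dict):
        for k, v in val.items():
            flatten_action_data('{}.{}'.format(key, k), v, out)
    else:
        out[key] = val
    return out


flat = flatten_action_data(
    'meta.composite',
    {'value': 16.70, 'units': 'trans/sec', 'direction': 'asc'})
```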
1358=== removed directory 'hooks/charmhelpers/contrib/charmhelpers'
1359=== removed file 'hooks/charmhelpers/contrib/charmhelpers/__init__.py'
1360--- hooks/charmhelpers/contrib/charmhelpers/__init__.py 2015-07-29 18:19:18 +0000
1361+++ hooks/charmhelpers/contrib/charmhelpers/__init__.py 1970-01-01 00:00:00 +0000
1362@@ -1,208 +0,0 @@
1363-# Copyright 2014-2015 Canonical Limited.
1364-#
1365-# This file is part of charm-helpers.
1366-#
1367-# charm-helpers is free software: you can redistribute it and/or modify
1368-# it under the terms of the GNU Lesser General Public License version 3 as
1369-# published by the Free Software Foundation.
1370-#
1371-# charm-helpers is distributed in the hope that it will be useful,
1372-# but WITHOUT ANY WARRANTY; without even the implied warranty of
1373-# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
1374-# GNU Lesser General Public License for more details.
1375-#
1376-# You should have received a copy of the GNU Lesser General Public License
1377-# along with charm-helpers. If not, see <http://www.gnu.org/licenses/>.
1378-
1379-# Copyright 2012 Canonical Ltd. This software is licensed under the
1380-# GNU Affero General Public License version 3 (see the file LICENSE).
1381-
1382-import warnings
1383-warnings.warn("contrib.charmhelpers is deprecated", DeprecationWarning) # noqa
1384-
1385-import operator
1386-import tempfile
1387-import time
1388-import yaml
1389-import subprocess
1390-
1391-import six
1392-if six.PY3:
1393- from urllib.request import urlopen
1394- from urllib.error import (HTTPError, URLError)
1395-else:
1396- from urllib2 import (urlopen, HTTPError, URLError)
1397-
1398-"""Helper functions for writing Juju charms in Python."""
1399-
1400-__metaclass__ = type
1401-__all__ = [
1402- # 'get_config', # core.hookenv.config()
1403- # 'log', # core.hookenv.log()
1404- # 'log_entry', # core.hookenv.log()
1405- # 'log_exit', # core.hookenv.log()
1406- # 'relation_get', # core.hookenv.relation_get()
1407- # 'relation_set', # core.hookenv.relation_set()
1408- # 'relation_ids', # core.hookenv.relation_ids()
1409- # 'relation_list', # core.hookenv.relation_units()
1410- # 'config_get', # core.hookenv.config()
1411- # 'unit_get', # core.hookenv.unit_get()
1412- # 'open_port', # core.hookenv.open_port()
1413- # 'close_port', # core.hookenv.close_port()
1414- # 'service_control', # core.host.service()
1415- 'unit_info', # client-side, NOT IMPLEMENTED
1416- 'wait_for_machine', # client-side, NOT IMPLEMENTED
1417- 'wait_for_page_contents', # client-side, NOT IMPLEMENTED
1418- 'wait_for_relation', # client-side, NOT IMPLEMENTED
1419- 'wait_for_unit', # client-side, NOT IMPLEMENTED
1420-]
1421-
1422-
1423-SLEEP_AMOUNT = 0.1
1424-
1425-
1426-# We create a juju_status Command here because it makes testing much,
1427-# much easier.
1428-def juju_status():
1429- subprocess.check_call(['juju', 'status'])
1430-
1431-# re-implemented as charmhelpers.fetch.configure_sources()
1432-# def configure_source(update=False):
1433-# source = config_get('source')
1434-# if ((source.startswith('ppa:') or
1435-# source.startswith('cloud:') or
1436-# source.startswith('http:'))):
1437-# run('add-apt-repository', source)
1438-# if source.startswith("http:"):
1439-# run('apt-key', 'import', config_get('key'))
1440-# if update:
1441-# run('apt-get', 'update')
1442-
1443-
1444-# DEPRECATED: client-side only
1445-def make_charm_config_file(charm_config):
1446- charm_config_file = tempfile.NamedTemporaryFile(mode='w+')
1447- charm_config_file.write(yaml.dump(charm_config))
1448- charm_config_file.flush()
1449- # The NamedTemporaryFile instance is returned instead of just the name
1450- # because we want to take advantage of garbage collection-triggered
1451- # deletion of the temp file when it goes out of scope in the caller.
1452- return charm_config_file
1453-
1454-
1455-# DEPRECATED: client-side only
1456-def unit_info(service_name, item_name, data=None, unit=None):
1457- if data is None:
1458- data = yaml.safe_load(juju_status())
1459- service = data['services'].get(service_name)
1460- if service is None:
1461- # XXX 2012-02-08 gmb:
1462- # This allows us to cope with the race condition that we
1463- # have between deploying a service and having it come up in
1464- # `juju status`. We could probably do with cleaning it up so
1465- # that it fails a bit more noisily after a while.
1466- return ''
1467- units = service['units']
1468- if unit is not None:
1469- item = units[unit][item_name]
1470- else:
1471- # It might seem odd to sort the units here, but we do it to
1472- # ensure that when no unit is specified, the first unit for the
1473- # service (or at least the one with the lowest number) is the
1474- # one whose data gets returned.
1475- sorted_unit_names = sorted(units.keys())
1476- item = units[sorted_unit_names[0]][item_name]
1477- return item
1478-
1479-
1480-# DEPRECATED: client-side only
1481-def get_machine_data():
1482- return yaml.safe_load(juju_status())['machines']
1483-
1484-
1485-# DEPRECATED: client-side only
1486-def wait_for_machine(num_machines=1, timeout=300):
1487- """Wait `timeout` seconds for `num_machines` machines to come up.
1488-
1489- This wait_for... function can be called by other wait_for functions
1490- whose timeouts might be too short in situations where only a bare
1491- Juju setup has been bootstrapped.
1492-
1493- :return: A tuple of (num_machines, time_taken). This is used for
1494- testing.
1495- """
1496- # You may think this is a hack, and you'd be right. The easiest way
1497- # to tell what environment we're working in (LXC vs EC2) is to check
1498- # the dns-name of the first machine. If it's localhost we're in LXC
1499- # and we can just return here.
1500- if get_machine_data()[0]['dns-name'] == 'localhost':
1501- return 1, 0
1502- start_time = time.time()
1503- while True:
1504- # Drop the first machine, since it's the Zookeeper and that's
1505- # not a machine that we need to wait for. This will only work
1506- # for EC2 environments, which is why we return early above if
1507- # we're in LXC.
1508- machine_data = get_machine_data()
1509- non_zookeeper_machines = [
1510- machine_data[key] for key in list(machine_data.keys())[1:]]
1511- if len(non_zookeeper_machines) >= num_machines:
1512- all_machines_running = True
1513- for machine in non_zookeeper_machines:
1514- if machine.get('instance-state') != 'running':
1515- all_machines_running = False
1516- break
1517- if all_machines_running:
1518- break
1519- if time.time() - start_time >= timeout:
1520- raise RuntimeError('timeout waiting for service to start')
1521- time.sleep(SLEEP_AMOUNT)
1522- return num_machines, time.time() - start_time
1523-
1524-
1525-# DEPRECATED: client-side only
1526-def wait_for_unit(service_name, timeout=480):
1527- """Wait `timeout` seconds for a given service name to come up."""
1528- wait_for_machine(num_machines=1)
1529- start_time = time.time()
1530- while True:
1531- state = unit_info(service_name, 'agent-state')
1532- if 'error' in state or state == 'started':
1533- break
1534- if time.time() - start_time >= timeout:
1535- raise RuntimeError('timeout waiting for service to start')
1536- time.sleep(SLEEP_AMOUNT)
1537- if state != 'started':
1538- raise RuntimeError('unit did not start, agent-state: ' + state)
1539-
1540-
1541-# DEPRECATED: client-side only
1542-def wait_for_relation(service_name, relation_name, timeout=120):
1543- """Wait `timeout` seconds for a given relation to come up."""
1544- start_time = time.time()
1545- while True:
1546- relation = unit_info(service_name, 'relations').get(relation_name)
1547- if relation is not None and relation['state'] == 'up':
1548- break
1549- if time.time() - start_time >= timeout:
1550- raise RuntimeError('timeout waiting for relation to be up')
1551- time.sleep(SLEEP_AMOUNT)
1552-
1553-
1554-# DEPRECATED: client-side only
1555-def wait_for_page_contents(url, contents, timeout=120, validate=None):
1556- if validate is None:
1557- validate = operator.contains
1558- start_time = time.time()
1559- while True:
1560- try:
1561- stream = urlopen(url)
1562- except (HTTPError, URLError):
1563- pass
1564- else:
1565- page = stream.read()
1566- if validate(page, contents):
1567- return page
1568- if time.time() - start_time >= timeout:
1569- raise RuntimeError('timeout waiting for contents of ' + url)
1570- time.sleep(SLEEP_AMOUNT)
1571
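All of the removed `wait_for_*` helpers share one loop shape: poll a predicate, bail out after a timeout, otherwise sleep `SLEEP_AMOUNT` and retry. A sketch of that common skeleton, driven by a fake predicate instead of `juju status`:

```python
import time

SLEEP_AMOUNT = 0.01  # the removed module used 0.1


def wait_for(predicate, timeout=2.0):
    """Poll predicate until it returns a truthy value or timeout elapses,
    the pattern shared by wait_for_machine/_unit/_relation/_page_contents."""
    start_time = time.time()
    while True:
        result = predicate()
        if result:
            return result
        if time.time() - start_time >= timeout:
            raise RuntimeError('timeout waiting for condition')
        time.sleep(SLEEP_AMOUNT)


state = {'polls': 0}

def becomes_ready():
    # Succeeds on the third poll, standing in for a unit reaching 'started'.
    state['polls'] += 1
    return state['polls'] >= 3


outcome = wait_for(becomes_ready)
```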
1572=== removed directory 'hooks/charmhelpers/contrib/charmsupport'
1573=== removed file 'hooks/charmhelpers/contrib/charmsupport/__init__.py'
1574--- hooks/charmhelpers/contrib/charmsupport/__init__.py 2015-07-29 18:19:18 +0000
1575+++ hooks/charmhelpers/contrib/charmsupport/__init__.py 1970-01-01 00:00:00 +0000
1576@@ -1,15 +0,0 @@
1577-# Copyright 2014-2015 Canonical Limited.
1578-#
1579-# This file is part of charm-helpers.
1580-#
1581-# charm-helpers is free software: you can redistribute it and/or modify
1582-# it under the terms of the GNU Lesser General Public License version 3 as
1583-# published by the Free Software Foundation.
1584-#
1585-# charm-helpers is distributed in the hope that it will be useful,
1586-# but WITHOUT ANY WARRANTY; without even the implied warranty of
1587-# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
1588-# GNU Lesser General Public License for more details.
1589-#
1590-# You should have received a copy of the GNU Lesser General Public License
1591-# along with charm-helpers. If not, see <http://www.gnu.org/licenses/>.
1592
1593=== removed file 'hooks/charmhelpers/contrib/charmsupport/nrpe.py'
1594--- hooks/charmhelpers/contrib/charmsupport/nrpe.py 2015-07-29 18:19:18 +0000
1595+++ hooks/charmhelpers/contrib/charmsupport/nrpe.py 1970-01-01 00:00:00 +0000
1596@@ -1,360 +0,0 @@
1597-# Copyright 2014-2015 Canonical Limited.
1598-#
1599-# This file is part of charm-helpers.
1600-#
1601-# charm-helpers is free software: you can redistribute it and/or modify
1602-# it under the terms of the GNU Lesser General Public License version 3 as
1603-# published by the Free Software Foundation.
1604-#
1605-# charm-helpers is distributed in the hope that it will be useful,
1606-# but WITHOUT ANY WARRANTY; without even the implied warranty of
1607-# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
1608-# GNU Lesser General Public License for more details.
1609-#
1610-# You should have received a copy of the GNU Lesser General Public License
1611-# along with charm-helpers. If not, see <http://www.gnu.org/licenses/>.
1612-
1613-"""Compatibility with the nrpe-external-master charm"""
1614-# Copyright 2012 Canonical Ltd.
1615-#
1616-# Authors:
1617-# Matthew Wedgwood <matthew.wedgwood@canonical.com>
1618-
1619-import subprocess
1620-import pwd
1621-import grp
1622-import os
1623-import glob
1624-import shutil
1625-import re
1626-import shlex
1627-import yaml
1628-
1629-from charmhelpers.core.hookenv import (
1630- config,
1631- local_unit,
1632- log,
1633- relation_ids,
1634- relation_set,
1635- relations_of_type,
1636-)
1637-
1638-from charmhelpers.core.host import service
1639-
1640-# This module adds compatibility with the nrpe-external-master and plain nrpe
1641-# subordinate charms. To use it in your charm:
1642-#
1643-# 1. Update metadata.yaml
1644-#
1645-# provides:
1646-# (...)
1647-# nrpe-external-master:
1648-# interface: nrpe-external-master
1649-# scope: container
1650-#
1651-# and/or
1652-#
1653-# provides:
1654-# (...)
1655-# local-monitors:
1656-# interface: local-monitors
1657-# scope: container
1658-
1659-#
1660-# 2. Add the following to config.yaml
1661-#
1662-# nagios_context:
1663-# default: "juju"
1664-# type: string
1665-# description: |
1666-# Used by the nrpe subordinate charms.
1667-# A string that will be prepended to instance name to set the host name
1668-# in nagios. So for instance the hostname would be something like:
1669-# juju-myservice-0
1670-# If you're running multiple environments with the same services in them
1671-# this allows you to differentiate between them.
1672-# nagios_servicegroups:
1673-# default: ""
1674-# type: string
1675-# description: |
1676-# A comma-separated list of nagios servicegroups.
1677-# If left empty, the nagios_context will be used as the servicegroup
1678-#
1679-# 3. Add custom checks (Nagios plugins) to files/nrpe-external-master
1680-#
1681-# 4. Update your hooks.py with something like this:
1682-#
1683-# from charmsupport.nrpe import NRPE
1684-# (...)
1685-# def update_nrpe_config():
1686-# nrpe_compat = NRPE()
1687-# nrpe_compat.add_check(
1688-# shortname = "myservice",
1689-# description = "Check MyService",
1690-# check_cmd = "check_http -w 2 -c 10 http://localhost"
1691-# )
1692-# nrpe_compat.add_check(
1693-# "myservice_other",
1694-# "Check for widget failures",
1695-# check_cmd = "/srv/myapp/scripts/widget_check"
1696-# )
1697-# nrpe_compat.write()
1698-#
1699-# def config_changed():
1700-# (...)
1701-# update_nrpe_config()
1702-#
1703-# def nrpe_external_master_relation_changed():
1704-# update_nrpe_config()
1705-#
1706-# def local_monitors_relation_changed():
1707-# update_nrpe_config()
1708-#
1709-# 5. ln -s hooks.py nrpe-external-master-relation-changed
1710-# ln -s hooks.py local-monitors-relation-changed
1711-
1712-
1713-class CheckException(Exception):
1714- pass
1715-
1716-
1717-class Check(object):
1718- shortname_re = '[A-Za-z0-9-_]+$'
1719- service_template = ("""
1720-#---------------------------------------------------
1721-# This file is Juju managed
1722-#---------------------------------------------------
1723-define service {{
1724- use active-service
1725- host_name {nagios_hostname}
1726- service_description {nagios_hostname}[{shortname}] """
1727- """{description}
1728- check_command check_nrpe!{command}
1729- servicegroups {nagios_servicegroup}
1730-}}
1731-""")
1732-
1733- def __init__(self, shortname, description, check_cmd):
1734- super(Check, self).__init__()
1735- # XXX: could be better to calculate this from the service name
1736- if not re.match(self.shortname_re, shortname):
1737- raise CheckException("shortname must match {}".format(
1738- Check.shortname_re))
1739- self.shortname = shortname
1740- self.command = "check_{}".format(shortname)
1741- # Note: a set of invalid characters is defined by the
1742- # Nagios server config
1743- # The default is: illegal_object_name_chars=`~!$%^&*"|'<>?,()=
1744- self.description = description
1745- self.check_cmd = self._locate_cmd(check_cmd)
1746-
1747- def _locate_cmd(self, check_cmd):
1748- search_path = (
1749- '/usr/lib/nagios/plugins',
1750- '/usr/local/lib/nagios/plugins',
1751- )
1752- parts = shlex.split(check_cmd)
1753- for path in search_path:
1754- if os.path.exists(os.path.join(path, parts[0])):
1755- command = os.path.join(path, parts[0])
1756- if len(parts) > 1:
1757- command += " " + " ".join(parts[1:])
1758- return command
1759- log('Check command not found: {}'.format(parts[0]))
1760- return ''
1761-
1762- def write(self, nagios_context, hostname, nagios_servicegroups):
1763- nrpe_check_file = '/etc/nagios/nrpe.d/{}.cfg'.format(
1764- self.command)
1765- with open(nrpe_check_file, 'w') as nrpe_check_config:
1766- nrpe_check_config.write("# check {}\n".format(self.shortname))
1767- nrpe_check_config.write("command[{}]={}\n".format(
1768- self.command, self.check_cmd))
1769-
1770- if not os.path.exists(NRPE.nagios_exportdir):
1771- log('Not writing service config as {} is not accessible'.format(
1772- NRPE.nagios_exportdir))
1773- else:
1774- self.write_service_config(nagios_context, hostname,
1775- nagios_servicegroups)
1776-
1777- def write_service_config(self, nagios_context, hostname,
1778- nagios_servicegroups):
1779- for f in os.listdir(NRPE.nagios_exportdir):
1780- if re.search('.*{}.cfg'.format(self.command), f):
1781- os.remove(os.path.join(NRPE.nagios_exportdir, f))
1782-
1783- templ_vars = {
1784- 'nagios_hostname': hostname,
1785- 'nagios_servicegroup': nagios_servicegroups,
1786- 'description': self.description,
1787- 'shortname': self.shortname,
1788- 'command': self.command,
1789- }
1790- nrpe_service_text = Check.service_template.format(**templ_vars)
1791- nrpe_service_file = '{}/service__{}_{}.cfg'.format(
1792- NRPE.nagios_exportdir, hostname, self.command)
1793- with open(nrpe_service_file, 'w') as nrpe_service_config:
1794- nrpe_service_config.write(str(nrpe_service_text))
1795-
1796- def run(self):
1797- subprocess.call(self.check_cmd)
1798-
1799-
1800-class NRPE(object):
1801- nagios_logdir = '/var/log/nagios'
1802- nagios_exportdir = '/var/lib/nagios/export'
1803- nrpe_confdir = '/etc/nagios/nrpe.d'
1804-
1805- def __init__(self, hostname=None):
1806- super(NRPE, self).__init__()
1807- self.config = config()
1808- self.nagios_context = self.config['nagios_context']
1809- if 'nagios_servicegroups' in self.config and self.config['nagios_servicegroups']:
1810- self.nagios_servicegroups = self.config['nagios_servicegroups']
1811- else:
1812- self.nagios_servicegroups = self.nagios_context
1813- self.unit_name = local_unit().replace('/', '-')
1814- if hostname:
1815- self.hostname = hostname
1816- else:
1817- self.hostname = "{}-{}".format(self.nagios_context, self.unit_name)
1818- self.checks = []
1819-
1820- def add_check(self, *args, **kwargs):
1821- self.checks.append(Check(*args, **kwargs))
1822-
1823- def write(self):
1824- try:
1825- nagios_uid = pwd.getpwnam('nagios').pw_uid
1826- nagios_gid = grp.getgrnam('nagios').gr_gid
1827- except:
1828- log("Nagios user not set up, nrpe checks not updated")
1829- return
1830-
1831- if not os.path.exists(NRPE.nagios_logdir):
1832- os.mkdir(NRPE.nagios_logdir)
1833- os.chown(NRPE.nagios_logdir, nagios_uid, nagios_gid)
1834-
1835- nrpe_monitors = {}
1836- monitors = {"monitors": {"remote": {"nrpe": nrpe_monitors}}}
1837- for nrpecheck in self.checks:
1838- nrpecheck.write(self.nagios_context, self.hostname,
1839- self.nagios_servicegroups)
1840- nrpe_monitors[nrpecheck.shortname] = {
1841- "command": nrpecheck.command,
1842- }
1843-
1844- service('restart', 'nagios-nrpe-server')
1845-
1846- monitor_ids = relation_ids("local-monitors") + \
1847- relation_ids("nrpe-external-master")
1848- for rid in monitor_ids:
1849- relation_set(relation_id=rid, monitors=yaml.dump(monitors))
1850-
1851-
1852-def get_nagios_hostcontext(relation_name='nrpe-external-master'):
1853- """
1854- Query relation with nrpe subordinate, return the nagios_host_context
1855-
1856- :param str relation_name: Name of relation nrpe sub joined to
1857- """
1858- for rel in relations_of_type(relation_name):
1859- if 'nagios_hostname' in rel:
1860- return rel['nagios_host_context']
1861-
1862-
1863-def get_nagios_hostname(relation_name='nrpe-external-master'):
1864- """
1865- Query relation with nrpe subordinate, return the nagios_hostname
1866-
1867- :param str relation_name: Name of relation nrpe sub joined to
1868- """
1869- for rel in relations_of_type(relation_name):
1870- if 'nagios_hostname' in rel:
1871- return rel['nagios_hostname']
1872-
1873-
1874-def get_nagios_unit_name(relation_name='nrpe-external-master'):
1875- """
1876- Return the nagios unit name prepended with host_context if needed
1877-
1878- :param str relation_name: Name of relation nrpe sub joined to
1879- """
1880- host_context = get_nagios_hostcontext(relation_name)
1881- if host_context:
1882- unit = "%s:%s" % (host_context, local_unit())
1883- else:
1884- unit = local_unit()
1885- return unit
1886-
1887-
1888-def add_init_service_checks(nrpe, services, unit_name):
1889- """
1890- Add checks for each service in list
1891-
1892- :param NRPE nrpe: NRPE object to add check to
1893- :param list services: List of services to check
1894- :param str unit_name: Unit name to use in check description
1895- """
1896- for svc in services:
1897- upstart_init = '/etc/init/%s.conf' % svc
1898- sysv_init = '/etc/init.d/%s' % svc
1899- if os.path.exists(upstart_init):
1900- nrpe.add_check(
1901- shortname=svc,
1902- description='process check {%s}' % unit_name,
1903- check_cmd='check_upstart_job %s' % svc
1904- )
1905- elif os.path.exists(sysv_init):
1906- cronpath = '/etc/cron.d/nagios-service-check-%s' % svc
1907- cron_file = ('*/5 * * * * root '
1908- '/usr/local/lib/nagios/plugins/check_exit_status.pl '
1909- '-s /etc/init.d/%s status > '
1910- '/var/lib/nagios/service-check-%s.txt\n' % (svc,
1911- svc)
1912- )
1913- f = open(cronpath, 'w')
1914- f.write(cron_file)
1915- f.close()
1916- nrpe.add_check(
1917- shortname=svc,
1918- description='process check {%s}' % unit_name,
1919- check_cmd='check_status_file.py -f '
1920- '/var/lib/nagios/service-check-%s.txt' % svc,
1921- )
1922-
1923-
1924-def copy_nrpe_checks():
1925- """
1926- Copy the nrpe checks into place
1927-
1928- """
1929- NAGIOS_PLUGINS = '/usr/local/lib/nagios/plugins'
1930- nrpe_files_dir = os.path.join(os.getenv('CHARM_DIR'), 'hooks',
1931- 'charmhelpers', 'contrib', 'openstack',
1932- 'files')
1933-
1934- if not os.path.exists(NAGIOS_PLUGINS):
1935- os.makedirs(NAGIOS_PLUGINS)
1936- for fname in glob.glob(os.path.join(nrpe_files_dir, "check_*")):
1937- if os.path.isfile(fname):
1938- shutil.copy2(fname,
1939- os.path.join(NAGIOS_PLUGINS, os.path.basename(fname)))
1940-
1941-
1942-def add_haproxy_checks(nrpe, unit_name):
1943- """
1944- Add checks for each service in list
1945-
1946- :param NRPE nrpe: NRPE object to add check to
1947- :param str unit_name: Unit name to use in check description
1948- """
1949- nrpe.add_check(
1950- shortname='haproxy_servers',
1951- description='Check HAProxy {%s}' % unit_name,
1952- check_cmd='check_haproxy.sh')
1953- nrpe.add_check(
1954- shortname='haproxy_queue',
1955- description='Check HAProxy queue depth {%s}' % unit_name,
1956- check_cmd='check_haproxy_queue_depth.sh')
1957
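The removed nrpe.py centred on two pieces of logic: validating a check's shortname against a Nagios-safe whitelist, and resolving a bare plugin name to an absolute path under the standard Nagios plugin directories. A minimal, self-contained sketch of those two helpers (function names here are mine, not the charm-helpers API):

```python
import os
import re
import shlex

# Pattern used by the removed Check class; Nagios rejects object names
# containing shell metacharacters, so only this whitelist is accepted.
SHORTNAME_RE = '[A-Za-z0-9-_]+$'


def validate_shortname(shortname):
    """Return True if shortname is a legal Nagios check name."""
    return bool(re.match(SHORTNAME_RE, shortname))


def locate_cmd(check_cmd, search_path=('/usr/lib/nagios/plugins',
                                       '/usr/local/lib/nagios/plugins')):
    """Resolve a plugin name to its absolute path, preserving arguments.

    Mirrors Check._locate_cmd from the deleted module: returns '' when
    the plugin is not found in any of the standard directories.
    """
    parts = shlex.split(check_cmd)
    for path in search_path:
        candidate = os.path.join(path, parts[0])
        if os.path.exists(candidate):
            return " ".join([candidate] + parts[1:])
    return ''
```

In the removed module an invalid shortname raised `CheckException` rather than returning False; the boolean form above just keeps the sketch short.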
1958=== removed file 'hooks/charmhelpers/contrib/charmsupport/volumes.py'
1959--- hooks/charmhelpers/contrib/charmsupport/volumes.py 2015-07-29 18:19:18 +0000
1960+++ hooks/charmhelpers/contrib/charmsupport/volumes.py 1970-01-01 00:00:00 +0000
1961@@ -1,175 +0,0 @@
1962-# Copyright 2014-2015 Canonical Limited.
1963-#
1964-# This file is part of charm-helpers.
1965-#
1966-# charm-helpers is free software: you can redistribute it and/or modify
1967-# it under the terms of the GNU Lesser General Public License version 3 as
1968-# published by the Free Software Foundation.
1969-#
1970-# charm-helpers is distributed in the hope that it will be useful,
1971-# but WITHOUT ANY WARRANTY; without even the implied warranty of
1972-# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
1973-# GNU Lesser General Public License for more details.
1974-#
1975-# You should have received a copy of the GNU Lesser General Public License
1976-# along with charm-helpers. If not, see <http://www.gnu.org/licenses/>.
1977-
1978-'''
1979-Functions for managing volumes in juju units. One volume is supported per unit.
1980-Subordinates may have their own storage, provided it is on its own partition.
1981-
1982-Configuration stanzas::
1983-
1984- volume-ephemeral:
1985- type: boolean
1986- default: true
1987- description: >
1988- If false, a volume is mounted as specified in "volume-map"
1989- If true, ephemeral storage will be used, meaning that log data
1990- will only exist as long as the machine. YOU HAVE BEEN WARNED.
1991- volume-map:
1992- type: string
1993- default: {}
1994- description: >
1995- YAML map of units to device names, e.g:
1996- "{ rsyslog/0: /dev/vdb, rsyslog/1: /dev/vdb }"
1997- Service units will raise a configure-error if volume-ephemeral
1998- is 'true' and no volume-map value is set. Use 'juju set' to set a
1999- value and 'juju resolved' to complete configuration.
2000-
2001-Usage::
2002-
2003- from charmsupport.volumes import configure_volume, VolumeConfigurationError
2004- from charmsupport.hookenv import log, ERROR
2005- def post_mount_hook():
2006- stop_service('myservice')
2007- def post_mount_hook():
2008- start_service('myservice')
2009-
2010- if __name__ == '__main__':
2011- try:
2012- configure_volume(before_change=pre_mount_hook,
2013- after_change=post_mount_hook)
2014- except VolumeConfigurationError:
2015- log('Storage could not be configured', ERROR)
2016-
2017-'''
2018-
2019-# XXX: Known limitations
2020-# - fstab is neither consulted nor updated
2021-
2022-import os
2023-from charmhelpers.core import hookenv
2024-from charmhelpers.core import host
2025-import yaml
2026-
2027-
2028-MOUNT_BASE = '/srv/juju/volumes'
2029-
2030-
2031-class VolumeConfigurationError(Exception):
2032- '''Volume configuration data is missing or invalid'''
2033- pass
2034-
2035-
2036-def get_config():
2037- '''Gather and sanity-check volume configuration data'''
2038- volume_config = {}
2039- config = hookenv.config()
2040-
2041- errors = False
2042-
2043- if config.get('volume-ephemeral') in (True, 'True', 'true', 'Yes', 'yes'):
2044- volume_config['ephemeral'] = True
2045- else:
2046- volume_config['ephemeral'] = False
2047-
2048- try:
2049- volume_map = yaml.safe_load(config.get('volume-map', '{}'))
2050- except yaml.YAMLError as e:
2051- hookenv.log("Error parsing YAML volume-map: {}".format(e),
2052- hookenv.ERROR)
2053- errors = True
2054- if volume_map is None:
2055- # probably an empty string
2056- volume_map = {}
2057- elif not isinstance(volume_map, dict):
2058- hookenv.log("Volume-map should be a dictionary, not {}".format(
2059- type(volume_map)))
2060- errors = True
2061-
2062- volume_config['device'] = volume_map.get(os.environ['JUJU_UNIT_NAME'])
2063- if volume_config['device'] and volume_config['ephemeral']:
2064- # asked for ephemeral storage but also defined a volume ID
2065- hookenv.log('A volume is defined for this unit, but ephemeral '
2066- 'storage was requested', hookenv.ERROR)
2067- errors = True
2068- elif not volume_config['device'] and not volume_config['ephemeral']:
2069- # asked for permanent storage but did not define volume ID
2070- hookenv.log('Ephemeral storage was requested, but there is no volume '
2071- 'defined for this unit.', hookenv.ERROR)
2072- errors = True
2073-
2074- unit_mount_name = hookenv.local_unit().replace('/', '-')
2075- volume_config['mountpoint'] = os.path.join(MOUNT_BASE, unit_mount_name)
2076-
2077- if errors:
2078- return None
2079- return volume_config
2080-
2081-
2082-def mount_volume(config):
2083- if os.path.exists(config['mountpoint']):
2084- if not os.path.isdir(config['mountpoint']):
2085- hookenv.log('Not a directory: {}'.format(config['mountpoint']))
2086- raise VolumeConfigurationError()
2087- else:
2088- host.mkdir(config['mountpoint'])
2089- if os.path.ismount(config['mountpoint']):
2090- unmount_volume(config)
2091- if not host.mount(config['device'], config['mountpoint'], persist=True):
2092- raise VolumeConfigurationError()
2093-
2094-
2095-def unmount_volume(config):
2096- if os.path.ismount(config['mountpoint']):
2097- if not host.umount(config['mountpoint'], persist=True):
2098- raise VolumeConfigurationError()
2099-
2100-
2101-def managed_mounts():
2102- '''List of all mounted managed volumes'''
2103- return filter(lambda mount: mount[0].startswith(MOUNT_BASE), host.mounts())
2104-
2105-
2106-def configure_volume(before_change=lambda: None, after_change=lambda: None):
2107- '''Set up storage (or don't) according to the charm's volume configuration.
2108- Returns the mount point or "ephemeral". before_change and after_change
2109- are optional functions to be called if the volume configuration changes.
2110- '''
2111-
2112- config = get_config()
2113- if not config:
2114- hookenv.log('Failed to read volume configuration', hookenv.CRITICAL)
2115- raise VolumeConfigurationError()
2116-
2117- if config['ephemeral']:
2118- if os.path.ismount(config['mountpoint']):
2119- before_change()
2120- unmount_volume(config)
2121- after_change()
2122- return 'ephemeral'
2123- else:
2124- # persistent storage
2125- if os.path.ismount(config['mountpoint']):
2126- mounts = dict(managed_mounts())
2127- if mounts.get(config['mountpoint']) != config['device']:
2128- before_change()
2129- unmount_volume(config)
2130- mount_volume(config)
2131- after_change()
2132- else:
2133- before_change()
2134- mount_volume(config)
2135- after_change()
2136- return config['mountpoint']
2137
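The sanity rules in the removed volumes.py reduce to: a unit must have either ephemeral storage or a device mapped in `volume-map`, never both and never neither. A simplified, self-contained sketch of that decision (the `check_volume_config` helper is mine; the real `get_config()` pulled these values from `hookenv.config()` and YAML-parsed `volume-map`):

```python
import os

# Same mount base as the removed module.
MOUNT_BASE = '/srv/juju/volumes'


def check_volume_config(unit_name, ephemeral, volume_map):
    """Return a volume config dict, or None if the settings conflict.

    ephemeral: boolean from the 'volume-ephemeral' charm option.
    volume_map: dict of unit name -> device, from 'volume-map'.
    """
    device = volume_map.get(unit_name)
    if device and ephemeral:
        # A device is mapped but ephemeral storage was requested.
        return None
    if not device and not ephemeral:
        # Persistent storage requested with no device mapped.
        return None
    return {
        'ephemeral': ephemeral,
        'device': device,
        'mountpoint': os.path.join(MOUNT_BASE,
                                   unit_name.replace('/', '-')),
    }
```

As in the original, the mount point is derived from the unit name with `/` flattened to `-`, e.g. `rsyslog/0` mounts at `/srv/juju/volumes/rsyslog-0`.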
2138=== removed directory 'hooks/charmhelpers/contrib/database'
2139=== removed file 'hooks/charmhelpers/contrib/database/__init__.py'
2140=== removed file 'hooks/charmhelpers/contrib/database/mysql.py'
2141--- hooks/charmhelpers/contrib/database/mysql.py 2015-07-29 18:19:18 +0000
2142+++ hooks/charmhelpers/contrib/database/mysql.py 1970-01-01 00:00:00 +0000
2143@@ -1,412 +0,0 @@
2144-"""Helper for working with a MySQL database"""
2145-import json
2146-import re
2147-import sys
2148-import platform
2149-import os
2150-import glob
2151-
2152-# from string import upper
2153-
2154-from charmhelpers.core.host import (
2155- mkdir,
2156- pwgen,
2157- write_file
2158-)
2159-from charmhelpers.core.hookenv import (
2160- config as config_get,
2161- relation_get,
2162- related_units,
2163- unit_get,
2164- log,
2165- DEBUG,
2166- INFO,
2167- WARNING,
2168-)
2169-from charmhelpers.fetch import (
2170- apt_install,
2171- apt_update,
2172- filter_installed_packages,
2173-)
2174-from charmhelpers.contrib.peerstorage import (
2175- peer_store,
2176- peer_retrieve,
2177-)
2178-from charmhelpers.contrib.network.ip import get_host_ip
2179-
2180-try:
2181- import MySQLdb
2182-except ImportError:
2183- apt_update(fatal=True)
2184- apt_install(filter_installed_packages(['python-mysqldb']), fatal=True)
2185- import MySQLdb
2186-
2187-
2188-class MySQLHelper(object):
2189-
2190- def __init__(self, rpasswdf_template, upasswdf_template, host='localhost',
2191- migrate_passwd_to_peer_relation=True,
2192- delete_ondisk_passwd_file=True):
2193- self.host = host
2194- # Password file path templates
2195- self.root_passwd_file_template = rpasswdf_template
2196- self.user_passwd_file_template = upasswdf_template
2197-
2198- self.migrate_passwd_to_peer_relation = migrate_passwd_to_peer_relation
2199- # If we migrate we have the option to delete local copy of root passwd
2200- self.delete_ondisk_passwd_file = delete_ondisk_passwd_file
2201-
2202- def connect(self, user='root', password=None):
2203- log("Opening db connection for %s@%s" % (user, self.host), level=DEBUG)
2204- self.connection = MySQLdb.connect(user=user, host=self.host,
2205- passwd=password)
2206-
2207- def database_exists(self, db_name):
2208- cursor = self.connection.cursor()
2209- try:
2210- cursor.execute("SHOW DATABASES")
2211- databases = [i[0] for i in cursor.fetchall()]
2212- finally:
2213- cursor.close()
2214-
2215- return db_name in databases
2216-
2217- def create_database(self, db_name):
2218- cursor = self.connection.cursor()
2219- try:
2220- cursor.execute("CREATE DATABASE {} CHARACTER SET UTF8"
2221- .format(db_name))
2222- finally:
2223- cursor.close()
2224-
2225- def grant_exists(self, db_name, db_user, remote_ip):
2226- cursor = self.connection.cursor()
2227- priv_string = "GRANT ALL PRIVILEGES ON `{}`.* " \
2228- "TO '{}'@'{}'".format(db_name, db_user, remote_ip)
2229- try:
2230- cursor.execute("SHOW GRANTS for '{}'@'{}'".format(db_user,
2231- remote_ip))
2232- grants = [i[0] for i in cursor.fetchall()]
2233- except MySQLdb.OperationalError:
2234- return False
2235- finally:
2236- cursor.close()
2237-
2238- # TODO: review for different grants
2239- return priv_string in grants
2240-
2241- def create_grant(self, db_name, db_user, remote_ip, password):
2242- cursor = self.connection.cursor()
2243- try:
2244- # TODO: review for different grants
2245- cursor.execute("GRANT ALL PRIVILEGES ON {}.* TO '{}'@'{}' "
2246- "IDENTIFIED BY '{}'".format(db_name,
2247- db_user,
2248- remote_ip,
2249- password))
2250- finally:
2251- cursor.close()
2252-
2253- def create_admin_grant(self, db_user, remote_ip, password):
2254- cursor = self.connection.cursor()
2255- try:
2256- cursor.execute("GRANT ALL PRIVILEGES ON *.* TO '{}'@'{}' "
2257- "IDENTIFIED BY '{}'".format(db_user,
2258- remote_ip,
2259- password))
2260- finally:
2261- cursor.close()
2262-
2263- def cleanup_grant(self, db_user, remote_ip):
2264- cursor = self.connection.cursor()
2265- try:
2266- cursor.execute("DELETE FROM mysql.user WHERE user='{}' "
2267- "AND HOST='{}'".format(db_user,
2268- remote_ip))
2269- finally:
2270- cursor.close()
2271-
2272- def execute(self, sql):
2273- """Execute arbitrary SQL against the database."""
2274- cursor = self.connection.cursor()
2275- try:
2276- cursor.execute(sql)
2277- finally:
2278- cursor.close()
2279-
2280- def migrate_passwords_to_peer_relation(self, excludes=None):
2281- """Migrate any passwords storage on disk to cluster peer relation."""
2282- dirname = os.path.dirname(self.root_passwd_file_template)
2283- path = os.path.join(dirname, '*.passwd')
2284- for f in glob.glob(path):
2285- if excludes and f in excludes:
2286- log("Excluding %s from peer migration" % (f), level=DEBUG)
2287- continue
2288-
2289- key = os.path.basename(f)
2290- with open(f, 'r') as passwd:
2291- _value = passwd.read().strip()
2292-
2293- try:
2294- peer_store(key, _value)
2295-
2296- if self.delete_ondisk_passwd_file:
2297- os.unlink(f)
2298- except ValueError:
2299- # NOTE cluster relation not yet ready - skip for now
2300- pass
2301-
2302- def get_mysql_password_on_disk(self, username=None, password=None):
2303- """Retrieve, generate or store a mysql password for the provided
2304- username on disk."""
2305- if username:
2306- template = self.user_passwd_file_template
2307- passwd_file = template.format(username)
2308- else:
2309- passwd_file = self.root_passwd_file_template
2310-
2311- _password = None
2312- if os.path.exists(passwd_file):
2313- log("Using existing password file '%s'" % passwd_file, level=DEBUG)
2314- with open(passwd_file, 'r') as passwd:
2315- _password = passwd.read().strip()
2316- else:
2317- log("Generating new password file '%s'" % passwd_file, level=DEBUG)
2318- if not os.path.isdir(os.path.dirname(passwd_file)):
2319- # NOTE: need to ensure this is not mysql root dir (which needs
2320- # to be mysql readable)
2321- mkdir(os.path.dirname(passwd_file), owner='root', group='root',
2322- perms=0o770)
2323- # Force permissions - for some reason the chmod in makedirs
2324- # fails
2325- os.chmod(os.path.dirname(passwd_file), 0o770)
2326-
2327- _password = password or pwgen(length=32)
2328- write_file(passwd_file, _password, owner='root', group='root',
2329- perms=0o660)
2330-
2331- return _password
2332-
2333- def passwd_keys(self, username):
2334- """Generator to return keys used to store passwords in peer store.
2335-
2336- NOTE: we support both legacy and new format to support mysql
2337- charm prior to refactor. This is necessary to avoid LP 1451890.
2338- """
2339- keys = []
2340- if username == 'mysql':
2341- log("Bad username '%s'" % (username), level=WARNING)
2342-
2343- if username:
2344- # IMPORTANT: *newer* format must be returned first
2345- keys.append('mysql-%s.passwd' % (username))
2346- keys.append('%s.passwd' % (username))
2347- else:
2348- keys.append('mysql.passwd')
2349-
2350- for key in keys:
2351- yield key
2352-
2353- def get_mysql_password(self, username=None, password=None):
2354- """Retrieve, generate or store a mysql password for the provided
2355- username using peer relation cluster."""
2356- excludes = []
2357-
2358- # First check peer relation.
2359- try:
2360- for key in self.passwd_keys(username):
2361- _password = peer_retrieve(key)
2362- if _password:
2363- break
2364-
2365- # If root password available don't update peer relation from local
2366- if _password and not username:
2367- excludes.append(self.root_passwd_file_template)
2368-
2369- except ValueError:
2370- # cluster relation is not yet started; use on-disk
2371- _password = None
2372-
2373- # If none available, generate new one
2374- if not _password:
2375- _password = self.get_mysql_password_on_disk(username, password)
2376-
2377- # Put on wire if required
2378- if self.migrate_passwd_to_peer_relation:
2379- self.migrate_passwords_to_peer_relation(excludes=excludes)
2380-
2381- return _password
2382-
2383- def get_mysql_root_password(self, password=None):
2384- """Retrieve or generate mysql root password for service units."""
2385- return self.get_mysql_password(username=None, password=password)
2386-
2387- def normalize_address(self, hostname):
2388- """Ensure that address returned is an IP address (i.e. not fqdn)"""
2389- if config_get('prefer-ipv6'):
2390- # TODO: add support for ipv6 dns
2391- return hostname
2392-
2393- if hostname != unit_get('private-address'):
2394- return get_host_ip(hostname, fallback=hostname)
2395-
2396- # Otherwise assume localhost
2397- return '127.0.0.1'
2398-
2399- def get_allowed_units(self, database, username, relation_id=None):
2400- """Get list of units with access grants for database with username.
2401-
2402- This is typically used to provide shared-db relations with a list of
2403- which units have been granted access to the given database.
2404- """
2405- self.connect(password=self.get_mysql_root_password())
2406- allowed_units = set()
2407- for unit in related_units(relation_id):
2408- settings = relation_get(rid=relation_id, unit=unit)
2409- # First check for setting with prefix, then without
2410- for attr in ["%s_hostname" % (database), 'hostname']:
2411- hosts = settings.get(attr, None)
2412- if hosts:
2413- break
2414-
2415- if hosts:
2416- # hostname can be json-encoded list of hostnames
2417- try:
2418- hosts = json.loads(hosts)
2419- except ValueError:
2420- hosts = [hosts]
2421- else:
2422- hosts = [settings['private-address']]
2423-
2424- if hosts:
2425- for host in hosts:
2426- host = self.normalize_address(host)
2427- if self.grant_exists(database, username, host):
2428- log("Grant exists for host '%s' on db '%s'" %
2429- (host, database), level=DEBUG)
2430- if unit not in allowed_units:
2431- allowed_units.add(unit)
2432- else:
2433- log("Grant does NOT exist for host '%s' on db '%s'" %
2434- (host, database), level=DEBUG)
2435- else:
2436- log("No hosts found for grant check", level=INFO)
2437-
2438- return allowed_units
2439-
2440- def configure_db(self, hostname, database, username, admin=False):
2441- """Configure access to database for username from hostname."""
2442- self.connect(password=self.get_mysql_root_password())
2443- if not self.database_exists(database):
2444- self.create_database(database)
2445-
2446- remote_ip = self.normalize_address(hostname)
2447- password = self.get_mysql_password(username)
2448- if not self.grant_exists(database, username, remote_ip):
2449- if not admin:
2450- self.create_grant(database, username, remote_ip, password)
2451- else:
2452- self.create_admin_grant(username, remote_ip, password)
2453-
2454- return password
2455-
2456-
2457-class PerconaClusterHelper(object):
2458-
2459- # Going for the biggest page size to avoid wasted bytes.
2460- # InnoDB page size is 16MB
2461-
2462- DEFAULT_PAGE_SIZE = 16 * 1024 * 1024
2463- DEFAULT_INNODB_BUFFER_FACTOR = 0.50
2464-
2465- def human_to_bytes(self, human):
2466- """Convert human readable configuration options to bytes."""
2467- num_re = re.compile('^[0-9]+$')
2468- if num_re.match(human):
2469- return human
2470-
2471- factors = {
2472- 'K': 1024,
2473- 'M': 1048576,
2474- 'G': 1073741824,
2475- 'T': 1099511627776
2476- }
2477- modifier = human[-1]
2478- if modifier in factors:
2479- return int(human[:-1]) * factors[modifier]
2480-
2481- if modifier == '%':
2482- total_ram = self.human_to_bytes(self.get_mem_total())
2483- if self.is_32bit_system() and total_ram > self.sys_mem_limit():
2484- total_ram = self.sys_mem_limit()
2485- factor = int(human[:-1]) * 0.01
2486- pctram = total_ram * factor
2487- return int(pctram - (pctram % self.DEFAULT_PAGE_SIZE))
2488-
2489- raise ValueError("Can only convert K,M,G, or T")
2490-
2491- def is_32bit_system(self):
2492- """Determine whether system is 32 or 64 bit."""
2493- try:
2494- return sys.maxsize < 2 ** 32
2495- except OverflowError:
2496- return False
2497-
2498- def sys_mem_limit(self):
2499- """Determine the default memory limit for the current service unit."""
2500- if platform.machine() in ['armv7l']:
2501- _mem_limit = self.human_to_bytes('2700M') # experimentally determined
2502- else:
2503- # Limit for x86 based 32bit systems
2504- _mem_limit = self.human_to_bytes('4G')
2505-
2506- return _mem_limit
2507-
2508- def get_mem_total(self):
2509- """Calculate the total memory in the current service unit."""
2510- with open('/proc/meminfo') as meminfo_file:
2511- for line in meminfo_file:
2512- key, mem = line.split(':', 2)
2513- if key == 'MemTotal':
2514- mtot, modifier = mem.strip().split(' ')
2515- return '%s%s' % (mtot, modifier[0].upper())
2516-
2517- def parse_config(self):
2518- """Parse charm configuration and calculate values for config files."""
2519- config = config_get()
2520- mysql_config = {}
2521- if 'max-connections' in config:
2522- mysql_config['max_connections'] = config['max-connections']
2523-
2524- if 'wait-timeout' in config:
2525- mysql_config['wait_timeout'] = config['wait-timeout']
2526-
2527- if 'innodb-flush-log-at-trx-commit' in config:
2528- mysql_config['innodb_flush_log_at_trx_commit'] = config['innodb-flush-log-at-trx-commit']
2529-
2530- # Set a sane default key_buffer size
2531- mysql_config['key_buffer'] = self.human_to_bytes('32M')
2532- total_memory = self.human_to_bytes(self.get_mem_total())
2533-
2534- dataset_bytes = config.get('dataset-size', None)
2535- innodb_buffer_pool_size = config.get('innodb-buffer-pool-size', None)
2536-
2537- if innodb_buffer_pool_size:
2538- innodb_buffer_pool_size = self.human_to_bytes(
2539- innodb_buffer_pool_size)
2540- elif dataset_bytes:
2541- log("Option 'dataset-size' has been deprecated, please use "
2542- "innodb_buffer_pool_size option instead", level="WARN")
2543- innodb_buffer_pool_size = self.human_to_bytes(
2544- dataset_bytes)
2545- else:
2546- innodb_buffer_pool_size = int(
2547- total_memory * self.DEFAULT_INNODB_BUFFER_FACTOR)
2548-
2549- if innodb_buffer_pool_size > total_memory:
2550- log("innodb_buffer_pool_size; {} is greater than system available memory:{}".format(
2551- innodb_buffer_pool_size,
2552- total_memory), level='WARN')
2553-
2554- mysql_config['innodb_buffer_pool_size'] = innodb_buffer_pool_size
2555- return mysql_config
2556
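The size arithmetic in the removed PerconaClusterHelper is worth keeping in mind when porting: `human_to_bytes` turned suffixed values like `32M` into byte counts. A self-contained sketch of that conversion (the `%`-of-RAM branch is omitted here because it depends on reading `/proc/meminfo`, and plain numbers are returned as int rather than passed through as strings):

```python
import re


def human_to_bytes(human):
    """Convert a human-readable size ('32M', '4G', '512') to bytes."""
    if re.match('^[0-9]+$', human):
        # Bare number: already a byte count.
        return int(human)
    factors = {
        'K': 1024,
        'M': 1048576,
        'G': 1073741824,
        'T': 1099511627776,
    }
    modifier = human[-1]
    if modifier in factors:
        return int(human[:-1]) * factors[modifier]
    raise ValueError("Can only convert K, M, G, or T")
```

The default InnoDB buffer pool in the removed code was then `total_memory * 0.50`, rounded down to a multiple of the 16MB page size.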
2557=== modified file 'hooks/charmhelpers/contrib/network/ip.py'
2558--- hooks/charmhelpers/contrib/network/ip.py 2015-05-19 21:31:34 +0000
2559+++ hooks/charmhelpers/contrib/network/ip.py 2016-05-18 10:05:52 +0000
2560@@ -23,7 +23,7 @@
2561 from functools import partial
2562
2563 from charmhelpers.core.hookenv import unit_get
2564-from charmhelpers.fetch import apt_install
2565+from charmhelpers.fetch import apt_install, apt_update
2566 from charmhelpers.core.hookenv import (
2567 log,
2568 WARNING,
2569@@ -32,13 +32,15 @@
2570 try:
2571 import netifaces
2572 except ImportError:
2573- apt_install('python-netifaces')
2574+ apt_update(fatal=True)
2575+ apt_install('python-netifaces', fatal=True)
2576 import netifaces
2577
2578 try:
2579 import netaddr
2580 except ImportError:
2581- apt_install('python-netaddr')
2582+ apt_update(fatal=True)
2583+ apt_install('python-netaddr', fatal=True)
2584 import netaddr
2585
2586
2587@@ -51,7 +53,7 @@
2588
2589
2590 def no_ip_found_error_out(network):
2591- errmsg = ("No IP address found in network: %s" % network)
2592+ errmsg = ("No IP address found in network(s): %s" % network)
2593 raise ValueError(errmsg)
2594
2595
2596@@ -59,7 +61,7 @@
2597 """Get an IPv4 or IPv6 address within the network from the host.
2598
2599 :param network (str): CIDR presentation format. For example,
2600- '192.168.1.0/24'.
2601+ '192.168.1.0/24'. Supports multiple networks as a space-delimited list.
2602 :param fallback (str): If no address is found, return fallback.
2603 :param fatal (boolean): If no address is found, fallback is not
2604 set and fatal is True then exit(1).
2605@@ -73,24 +75,26 @@
2606 else:
2607 return None
2608
2609- _validate_cidr(network)
2610- network = netaddr.IPNetwork(network)
2611- for iface in netifaces.interfaces():
2612- addresses = netifaces.ifaddresses(iface)
2613- if network.version == 4 and netifaces.AF_INET in addresses:
2614- addr = addresses[netifaces.AF_INET][0]['addr']
2615- netmask = addresses[netifaces.AF_INET][0]['netmask']
2616- cidr = netaddr.IPNetwork("%s/%s" % (addr, netmask))
2617- if cidr in network:
2618- return str(cidr.ip)
2619+ networks = network.split() or [network]
2620+ for network in networks:
2621+ _validate_cidr(network)
2622+ network = netaddr.IPNetwork(network)
2623+ for iface in netifaces.interfaces():
2624+ addresses = netifaces.ifaddresses(iface)
2625+ if network.version == 4 and netifaces.AF_INET in addresses:
2626+ addr = addresses[netifaces.AF_INET][0]['addr']
2627+ netmask = addresses[netifaces.AF_INET][0]['netmask']
2628+ cidr = netaddr.IPNetwork("%s/%s" % (addr, netmask))
2629+ if cidr in network:
2630+ return str(cidr.ip)
2631
2632- if network.version == 6 and netifaces.AF_INET6 in addresses:
2633- for addr in addresses[netifaces.AF_INET6]:
2634- if not addr['addr'].startswith('fe80'):
2635- cidr = netaddr.IPNetwork("%s/%s" % (addr['addr'],
2636- addr['netmask']))
2637- if cidr in network:
2638- return str(cidr.ip)
2639+ if network.version == 6 and netifaces.AF_INET6 in addresses:
2640+ for addr in addresses[netifaces.AF_INET6]:
2641+ if not addr['addr'].startswith('fe80'):
2642+ cidr = netaddr.IPNetwork("%s/%s" % (addr['addr'],
2643+ addr['netmask']))
2644+ if cidr in network:
2645+ return str(cidr.ip)
2646
2647 if fallback is not None:
2648 return fallback
2649@@ -187,6 +191,15 @@
2650 get_netmask_for_address = partial(_get_for_address, key='netmask')
2651
2652
2653+def resolve_network_cidr(ip_address):
2654+ '''
2655+ Resolves the full address cidr of an ip_address based on
2656+ configured network interfaces
2657+ '''
2658+ netmask = get_netmask_for_address(ip_address)
2659+ return str(netaddr.IPNetwork("%s/%s" % (ip_address, netmask)).cidr)
2660+
2661+
2662 def format_ipv6_addr(address):
2663 """If address is IPv6, wrap it in '[]' otherwise return None.
2664
2665@@ -435,8 +448,12 @@
2666
2667 rev = dns.reversename.from_address(address)
2668 result = ns_query(rev)
2669+
2670 if not result:
2671- return None
2672+ try:
2673+ result = socket.gethostbyaddr(address)[0]
2674+ except:
2675+ return None
2676 else:
2677 result = address
2678
2679@@ -448,3 +465,18 @@
2680 return result
2681 else:
2682 return result.split('.')[0]
2683+
2684+
2685+def port_has_listener(address, port):
2686+ """
2687+ Returns True if the address:port is open and being listened to,
2688+ else False.
2689+
2690+ @param address: an IP address or hostname
2691+ @param port: integer port
2692+
2693+ Note: calls 'nc' via a subprocess shell
2694+ """
2695+ cmd = ['nc', '-z', address, str(port)]
2696+ result = subprocess.call(cmd)
2697+ return not(bool(result))
2698
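The new `port_has_listener` helper shells out to `nc -z` and inverts the exit code. A pure-socket equivalent (a sketch for illustration, not part of this patch; it avoids the external `nc` dependency) looks like:

```python
import socket

def port_has_listener(address, port):
    # Pure-socket equivalent of the 'nc -z <address> <port>' check:
    # attempt a TCP connect and report whether anything accepted it.
    try:
        with socket.create_connection((address, port), timeout=2):
            return True
    except OSError:
        return False
```

Like the `nc` version, this only confirms that something accepted the TCP handshake; it says nothing about what is listening.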
2699=== modified file 'hooks/charmhelpers/contrib/network/ovs/__init__.py'
2700--- hooks/charmhelpers/contrib/network/ovs/__init__.py 2015-05-19 21:31:34 +0000
2701+++ hooks/charmhelpers/contrib/network/ovs/__init__.py 2016-05-18 10:05:52 +0000
2702@@ -25,10 +25,14 @@
2703 )
2704
2705
2706-def add_bridge(name):
2707+def add_bridge(name, datapath_type=None):
2708 ''' Add the named bridge to openvswitch '''
2709 log('Creating bridge {}'.format(name))
2710- subprocess.check_call(["ovs-vsctl", "--", "--may-exist", "add-br", name])
2711+ cmd = ["ovs-vsctl", "--", "--may-exist", "add-br", name]
2712+ if datapath_type is not None:
2713+ cmd += ['--', 'set', 'bridge', name,
2714+ 'datapath_type={}'.format(datapath_type)]
2715+ subprocess.check_call(cmd)
2716
2717
2718 def del_bridge(name):
2719
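The `add_bridge` change appends the extra `set bridge` clause only when `datapath_type` is given, so existing callers are unaffected. The command construction can be sketched standalone (hypothetical helper name, mirroring the hunk above):

```python
def build_add_bridge_cmd(name, datapath_type=None):
    # Mirrors the command list built by add_bridge(): the base
    # 'add-br' invocation, plus an optional datapath_type setting
    # chained with '--' in the same ovs-vsctl call.
    cmd = ["ovs-vsctl", "--", "--may-exist", "add-br", name]
    if datapath_type is not None:
        cmd += ['--', 'set', 'bridge', name,
                'datapath_type={}'.format(datapath_type)]
    return cmd
```

Chaining both operations in one `ovs-vsctl` invocation keeps the create-and-configure step atomic from ovsdb's perspective.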
2720=== modified file 'hooks/charmhelpers/contrib/network/ufw.py'
2721--- hooks/charmhelpers/contrib/network/ufw.py 2015-07-29 18:19:18 +0000
2722+++ hooks/charmhelpers/contrib/network/ufw.py 2016-05-18 10:05:52 +0000
2723@@ -40,7 +40,9 @@
2724 import re
2725 import os
2726 import subprocess
2727+
2728 from charmhelpers.core import hookenv
2729+from charmhelpers.core.kernel import modprobe, is_module_loaded
2730
2731 __author__ = "Felipe Reyes <felipe.reyes@canonical.com>"
2732
2733@@ -82,14 +84,11 @@
2734 # do we have IPv6 in the machine?
2735 if os.path.isdir('/proc/sys/net/ipv6'):
2736 # is ip6tables kernel module loaded?
2737- lsmod = subprocess.check_output(['lsmod'], universal_newlines=True)
2738- matches = re.findall('^ip6_tables[ ]+', lsmod, re.M)
2739- if len(matches) == 0:
2740+ if not is_module_loaded('ip6_tables'):
2741 # ip6tables support isn't complete, let's try to load it
2742 try:
2743- subprocess.check_output(['modprobe', 'ip6_tables'],
2744- universal_newlines=True)
2745- # great, we could load the module
2746+ modprobe('ip6_tables')
2747+ # great, we can load the module
2748 return True
2749 except subprocess.CalledProcessError as ex:
2750 hookenv.log("Couldn't load ip6_tables module: %s" % ex.output,
2751
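For reference, the inline lsmod-parsing this hunk replaces with `is_module_loaded()` amounts to the following regex check (standalone sketch of the removed logic):

```python
import re

def ip6_tables_loaded(lsmod_output):
    # The pre-patch check: look for a line starting with 'ip6_tables'
    # followed by whitespace in the output of 'lsmod'.
    return bool(re.findall('^ip6_tables[ ]+', lsmod_output, re.M))
```

Centralising this in `charmhelpers.core.kernel` removes the duplicated subprocess-and-regex code from each caller.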
2752=== modified file 'hooks/charmhelpers/contrib/openstack/amulet/deployment.py'
2753--- hooks/charmhelpers/contrib/openstack/amulet/deployment.py 2015-07-29 18:19:18 +0000
2754+++ hooks/charmhelpers/contrib/openstack/amulet/deployment.py 2016-05-18 10:05:52 +0000
2755@@ -14,12 +14,18 @@
2756 # You should have received a copy of the GNU Lesser General Public License
2757 # along with charm-helpers. If not, see <http://www.gnu.org/licenses/>.
2758
2759+import logging
2760+import re
2761+import sys
2762 import six
2763 from collections import OrderedDict
2764 from charmhelpers.contrib.amulet.deployment import (
2765 AmuletDeployment
2766 )
2767
2768+DEBUG = logging.DEBUG
2769+ERROR = logging.ERROR
2770+
2771
2772 class OpenStackAmuletDeployment(AmuletDeployment):
2773 """OpenStack amulet deployment.
2774@@ -28,9 +34,12 @@
2775 that is specifically for use by OpenStack charms.
2776 """
2777
2778- def __init__(self, series=None, openstack=None, source=None, stable=True):
2779+ def __init__(self, series=None, openstack=None, source=None,
2780+ stable=True, log_level=DEBUG):
2781 """Initialize the deployment environment."""
2782 super(OpenStackAmuletDeployment, self).__init__(series)
2783+ self.log = self.get_logger(level=log_level)
2784+ self.log.info('OpenStackAmuletDeployment: init')
2785 self.openstack = openstack
2786 self.source = source
2787 self.stable = stable
2788@@ -38,26 +47,55 @@
2789 # out.
2790 self.current_next = "trusty"
2791
2792+ def get_logger(self, name="deployment-logger", level=logging.DEBUG):
2793+ """Get a logger object that will log to stdout."""
2794+ log = logging
2795+ logger = log.getLogger(name)
2796+ fmt = log.Formatter("%(asctime)s %(funcName)s "
2797+ "%(levelname)s: %(message)s")
2798+
2799+ handler = log.StreamHandler(stream=sys.stdout)
2800+ handler.setLevel(level)
2801+ handler.setFormatter(fmt)
2802+
2803+ logger.addHandler(handler)
2804+ logger.setLevel(level)
2805+
2806+ return logger
2807+
2808 def _determine_branch_locations(self, other_services):
2809 """Determine the branch locations for the other services.
2810
2811 Determine if the local branch being tested is derived from its
2812 stable or next (dev) branch, and based on this, use the corresponding
2813 stable or next branches for the other_services."""
2814- base_charms = ['mysql', 'mongodb']
2815+
2816+ self.log.info('OpenStackAmuletDeployment: determine branch locations')
2817+
2818+ # Charms outside the lp:~openstack-charmers namespace
2819+ base_charms = ['mysql', 'mongodb', 'nrpe']
2820+
2821+ # Force these charms to current series even when using an older series.
2822+ # ie. Use trusty/nrpe even when series is precise, as the P charm
2823+ # does not possess the necessary external master config and hooks.
2824+ force_series_current = ['nrpe']
2825
2826 if self.series in ['precise', 'trusty']:
2827 base_series = self.series
2828 else:
2829 base_series = self.current_next
2830
2831- if self.stable:
2832- for svc in other_services:
2833+ for svc in other_services:
2834+ if svc['name'] in force_series_current:
2835+ base_series = self.current_next
2836+ # If a location has been explicitly set, use it
2837+ if svc.get('location'):
2838+ continue
2839+ if self.stable:
2840 temp = 'lp:charms/{}/{}'
2841 svc['location'] = temp.format(base_series,
2842 svc['name'])
2843- else:
2844- for svc in other_services:
2845+ else:
2846 if svc['name'] in base_charms:
2847 temp = 'lp:charms/{}/{}'
2848 svc['location'] = temp.format(base_series,
2849@@ -66,10 +104,13 @@
2850 temp = 'lp:~openstack-charmers/charms/{}/{}/next'
2851 svc['location'] = temp.format(self.current_next,
2852 svc['name'])
2853+
2854 return other_services
2855
2856 def _add_services(self, this_service, other_services):
2857 """Add services to the deployment and set openstack-origin/source."""
2858+ self.log.info('OpenStackAmuletDeployment: adding services')
2859+
2860 other_services = self._determine_branch_locations(other_services)
2861
2862 super(OpenStackAmuletDeployment, self)._add_services(this_service,
2863@@ -77,29 +118,105 @@
2864
2865 services = other_services
2866 services.append(this_service)
2867+
2868+ # Charms which should use the source config option
2869 use_source = ['mysql', 'mongodb', 'rabbitmq-server', 'ceph',
2870- 'ceph-osd', 'ceph-radosgw']
2871- # Most OpenStack subordinate charms do not expose an origin option
2872- # as that is controlled by the principle.
2873- ignore = ['cinder-ceph', 'hacluster', 'neutron-openvswitch']
2874+ 'ceph-osd', 'ceph-radosgw', 'ceph-mon']
2875+
2876+ # Charms which can not use openstack-origin, ie. many subordinates
2877+ no_origin = ['cinder-ceph', 'hacluster', 'neutron-openvswitch', 'nrpe',
2878+ 'openvswitch-odl', 'neutron-api-odl', 'odl-controller',
2879+ 'cinder-backup', 'nexentaedge-data',
2880+ 'nexentaedge-iscsi-gw', 'nexentaedge-swift-gw',
2881+ 'cinder-nexentaedge', 'nexentaedge-mgmt']
2882
2883 if self.openstack:
2884 for svc in services:
2885- if svc['name'] not in use_source + ignore:
2886+ if svc['name'] not in use_source + no_origin:
2887 config = {'openstack-origin': self.openstack}
2888 self.d.configure(svc['name'], config)
2889
2890 if self.source:
2891 for svc in services:
2892- if svc['name'] in use_source and svc['name'] not in ignore:
2893+ if svc['name'] in use_source and svc['name'] not in no_origin:
2894 config = {'source': self.source}
2895 self.d.configure(svc['name'], config)
2896
2897 def _configure_services(self, configs):
2898 """Configure all of the services."""
2899+ self.log.info('OpenStackAmuletDeployment: configure services')
2900 for service, config in six.iteritems(configs):
2901 self.d.configure(service, config)
2902
2903+ def _auto_wait_for_status(self, message=None, exclude_services=None,
2904+ include_only=None, timeout=1800):
2905+ """Wait for all units to have a specific extended status, except
2906+ for any defined as excluded. Unless specified via message, any
2907+ status containing any case of 'ready' will be considered a match.
2908+
2909+ Examples of message usage:
2910+
2911+ Wait for all unit status to CONTAIN any case of 'ready' or 'ok':
2912+ message = re.compile('.*ready.*|.*ok.*', re.IGNORECASE)
2913+
2914+ Wait for all units to reach this status (exact match):
2915+ message = re.compile('^Unit is ready and clustered$')
2916+
2917+ Wait for all units to reach any one of these (exact match):
2918+ message = re.compile('Unit is ready|OK|Ready')
2919+
2920+ Wait for at least one unit to reach this status (exact match):
2921+ message = {'ready'}
2922+
2923+ See Amulet's sentry.wait_for_messages() for message usage detail.
2924+ https://github.com/juju/amulet/blob/master/amulet/sentry.py
2925+
2926+ :param message: Expected status match
2927+ :param exclude_services: List of juju service names to ignore,
2928+ not to be used in conjunction with include_only.
2929+ :param include_only: List of juju service names to exclusively check,
2930+ not to be used in conjunction with exclude_services.
2931+ :param timeout: Maximum time in seconds to wait for status match
2932+ :returns: None. Raises if timeout is hit.
2933+ """
2934+ self.log.info('Waiting for extended status on units...')
2935+
2936+ all_services = self.d.services.keys()
2937+
2938+ if exclude_services and include_only:
2939+ raise ValueError('exclude_services can not be used '
2940+ 'with include_only')
2941+
2942+ if message:
2943+ if isinstance(message, re._pattern_type):
2944+ match = message.pattern
2945+ else:
2946+ match = message
2947+
2948+ self.log.debug('Custom extended status wait match: '
2949+ '{}'.format(match))
2950+ else:
2951+ self.log.debug('Default extended status wait match: contains '
2952+ 'READY (case-insensitive)')
2953+ message = re.compile('.*ready.*', re.IGNORECASE)
2954+
2955+ if exclude_services:
2956+ self.log.debug('Excluding services from extended status match: '
2957+ '{}'.format(exclude_services))
2958+ else:
2959+ exclude_services = []
2960+
2961+ if include_only:
2962+ services = include_only
2963+ else:
2964+ services = list(set(all_services) - set(exclude_services))
2965+
2966+ self.log.debug('Waiting up to {}s for extended status on services: '
2967+ '{}'.format(timeout, services))
2968+ service_messages = {service: message for service in services}
2969+ self.d.sentry.wait_for_messages(service_messages, timeout=timeout)
2970+ self.log.info('OK')
2971+
2972 def _get_openstack_release(self):
2973 """Get openstack release.
2974
2975@@ -111,7 +228,8 @@
2976 self.precise_havana, self.precise_icehouse,
2977 self.trusty_icehouse, self.trusty_juno, self.utopic_juno,
2978 self.trusty_kilo, self.vivid_kilo, self.trusty_liberty,
2979- self.wily_liberty) = range(12)
2980+ self.wily_liberty, self.trusty_mitaka,
2981+ self.xenial_mitaka) = range(14)
2982
2983 releases = {
2984 ('precise', None): self.precise_essex,
2985@@ -123,9 +241,11 @@
2986 ('trusty', 'cloud:trusty-juno'): self.trusty_juno,
2987 ('trusty', 'cloud:trusty-kilo'): self.trusty_kilo,
2988 ('trusty', 'cloud:trusty-liberty'): self.trusty_liberty,
2989+ ('trusty', 'cloud:trusty-mitaka'): self.trusty_mitaka,
2990 ('utopic', None): self.utopic_juno,
2991 ('vivid', None): self.vivid_kilo,
2992- ('wily', None): self.wily_liberty}
2993+ ('wily', None): self.wily_liberty,
2994+ ('xenial', None): self.xenial_mitaka}
2995 return releases[(self.series, self.openstack)]
2996
2997 def _get_openstack_release_string(self):
2998@@ -142,6 +262,7 @@
2999 ('utopic', 'juno'),
3000 ('vivid', 'kilo'),
3001 ('wily', 'liberty'),
3002+ ('xenial', 'mitaka'),
3003 ])
3004 if self.openstack:
3005 os_origin = self.openstack.split(':')[1]
3006
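The new trusty-mitaka and xenial entries extend both release maps. The lookup can be exercised with a standalone sketch (hypothetical `release_string` helper mirroring `_get_openstack_release_string`; the precise/trusty defaults are taken from the releases dict earlier in the diff):

```python
from collections import OrderedDict

# Series -> default OpenStack release, per the map extended above.
UBUNTU_RELEASES = OrderedDict([
    ('precise', 'essex'),
    ('trusty', 'icehouse'),
    ('utopic', 'juno'),
    ('vivid', 'kilo'),
    ('wily', 'liberty'),
    ('xenial', 'mitaka'),
])

def release_string(series, openstack=None):
    """Return e.g. 'mitaka' for ('trusty', 'cloud:trusty-mitaka'),
    or the series default when no cloud archive pocket is set."""
    if openstack:
        # 'cloud:trusty-mitaka' -> 'trusty-mitaka' -> 'mitaka'
        return openstack.split(':')[1].split('-')[1]
    return UBUNTU_RELEASES[series]
```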
3007=== modified file 'hooks/charmhelpers/contrib/openstack/amulet/utils.py'
3008--- hooks/charmhelpers/contrib/openstack/amulet/utils.py 2015-07-29 18:19:18 +0000
3009+++ hooks/charmhelpers/contrib/openstack/amulet/utils.py 2016-05-18 10:05:52 +0000
3010@@ -18,6 +18,7 @@
3011 import json
3012 import logging
3013 import os
3014+import re
3015 import six
3016 import time
3017 import urllib
3018@@ -26,7 +27,12 @@
3019 import glanceclient.v1.client as glance_client
3020 import heatclient.v1.client as heat_client
3021 import keystoneclient.v2_0 as keystone_client
3022-import novaclient.v1_1.client as nova_client
3023+from keystoneclient.auth.identity import v3 as keystone_id_v3
3024+from keystoneclient import session as keystone_session
3025+from keystoneclient.v3 import client as keystone_client_v3
3026+
3027+import novaclient.client as nova_client
3028+import pika
3029 import swiftclient
3030
3031 from charmhelpers.contrib.amulet.utils import (
3032@@ -36,6 +42,8 @@
3033 DEBUG = logging.DEBUG
3034 ERROR = logging.ERROR
3035
3036+NOVA_CLIENT_VERSION = "2"
3037+
3038
3039 class OpenStackAmuletUtils(AmuletUtils):
3040 """OpenStack amulet utilities.
3041@@ -137,7 +145,7 @@
3042 return "role {} does not exist".format(e['name'])
3043 return ret
3044
3045- def validate_user_data(self, expected, actual):
3046+ def validate_user_data(self, expected, actual, api_version=None):
3047 """Validate user data.
3048
3049 Validate a list of actual user data vs a list of expected user
3050@@ -148,10 +156,15 @@
3051 for e in expected:
3052 found = False
3053 for act in actual:
3054- a = {'enabled': act.enabled, 'name': act.name,
3055- 'email': act.email, 'tenantId': act.tenantId,
3056- 'id': act.id}
3057- if e['name'] == a['name']:
3058+ if e['name'] == act.name:
3059+ a = {'enabled': act.enabled, 'name': act.name,
3060+ 'email': act.email, 'id': act.id}
3061+ if api_version == 3:
3062+ a['default_project_id'] = getattr(act,
3063+ 'default_project_id',
3064+ 'none')
3065+ else:
3066+ a['tenantId'] = act.tenantId
3067 found = True
3068 ret = self._validate_dict_data(e, a)
3069 if ret:
3070@@ -186,15 +199,30 @@
3071 return cinder_client.Client(username, password, tenant, ept)
3072
3073 def authenticate_keystone_admin(self, keystone_sentry, user, password,
3074- tenant):
3075+ tenant=None, api_version=None,
3076+ keystone_ip=None):
3077 """Authenticates admin user with the keystone admin endpoint."""
3078 self.log.debug('Authenticating keystone admin...')
3079 unit = keystone_sentry
3080- service_ip = unit.relation('shared-db',
3081- 'mysql:shared-db')['private-address']
3082- ep = "http://{}:35357/v2.0".format(service_ip.strip().decode('utf-8'))
3083- return keystone_client.Client(username=user, password=password,
3084- tenant_name=tenant, auth_url=ep)
3085+ if not keystone_ip:
3086+ keystone_ip = unit.relation('shared-db',
3087+ 'mysql:shared-db')['private-address']
3088+ base_ep = "http://{}:35357".format(keystone_ip.strip().decode('utf-8'))
3089+ if not api_version or api_version == 2:
3090+ ep = base_ep + "/v2.0"
3091+ return keystone_client.Client(username=user, password=password,
3092+ tenant_name=tenant, auth_url=ep)
3093+ else:
3094+ ep = base_ep + "/v3"
3095+ auth = keystone_id_v3.Password(
3096+ user_domain_name='admin_domain',
3097+ username=user,
3098+ password=password,
3099+ domain_name='admin_domain',
3100+ auth_url=ep,
3101+ )
3102+ sess = keystone_session.Session(auth=auth)
3103+ return keystone_client_v3.Client(session=sess)
3104
3105 def authenticate_keystone_user(self, keystone, user, password, tenant):
3106 """Authenticates a regular user with the keystone public endpoint."""
3107@@ -223,7 +251,8 @@
3108 self.log.debug('Authenticating nova user ({})...'.format(user))
3109 ep = keystone.service_catalog.url_for(service_type='identity',
3110 endpoint_type='publicURL')
3111- return nova_client.Client(username=user, api_key=password,
3112+ return nova_client.Client(NOVA_CLIENT_VERSION,
3113+ username=user, api_key=password,
3114 project_id=tenant, auth_url=ep)
3115
3116 def authenticate_swift_user(self, keystone, user, password, tenant):
3117@@ -602,3 +631,382 @@
3118 self.log.debug('Ceph {} samples (OK): '
3119 '{}'.format(sample_type, samples))
3120 return None
3121+
3122+ # rabbitmq/amqp specific helpers:
3123+
3124+ def rmq_wait_for_cluster(self, deployment, init_sleep=15, timeout=1200):
3125+ """Wait for rmq units extended status to show cluster readiness,
3126+ after an optional initial sleep period. Initial sleep is likely
3127+ necessary to be effective following a config change, as status
3128+ message may not instantly update to non-ready."""
3129+
3130+ if init_sleep:
3131+ time.sleep(init_sleep)
3132+
3133+ message = re.compile('^Unit is ready and clustered$')
3134+ deployment._auto_wait_for_status(message=message,
3135+ timeout=timeout,
3136+ include_only=['rabbitmq-server'])
3137+
3138+ def add_rmq_test_user(self, sentry_units,
3139+ username="testuser1", password="changeme"):
3140+ """Add a test user via the first rmq juju unit, check connection as
3141+ the new user against all sentry units.
3142+
3143+ :param sentry_units: list of sentry unit pointers
3144+ :param username: amqp user name, default to testuser1
3145+ :param password: amqp user password
3146+ :returns: None if successful. Raise on error.
3147+ """
3148+ self.log.debug('Adding rmq user ({})...'.format(username))
3149+
3150+ # Check that user does not already exist
3151+ cmd_user_list = 'rabbitmqctl list_users'
3152+ output, _ = self.run_cmd_unit(sentry_units[0], cmd_user_list)
3153+ if username in output:
3154+ self.log.warning('User ({}) already exists, returning '
3155+ 'gracefully.'.format(username))
3156+ return
3157+
3158+ perms = '".*" ".*" ".*"'
3159+ cmds = ['rabbitmqctl add_user {} {}'.format(username, password),
3160+ 'rabbitmqctl set_permissions {} {}'.format(username, perms)]
3161+
3162+ # Add user via first unit
3163+ for cmd in cmds:
3164+ output, _ = self.run_cmd_unit(sentry_units[0], cmd)
3165+
3166+ # Check connection against the other sentry_units
3167+ self.log.debug('Checking user connect against units...')
3168+ for sentry_unit in sentry_units:
3169+ connection = self.connect_amqp_by_unit(sentry_unit, ssl=False,
3170+ username=username,
3171+ password=password)
3172+ connection.close()
3173+
3174+ def delete_rmq_test_user(self, sentry_units, username="testuser1"):
3175+ """Delete a rabbitmq user via the first rmq juju unit.
3176+
3177+ :param sentry_units: list of sentry unit pointers
3178+ :param username: amqp user name, default to testuser1
3180+ :returns: None if successful or no such user.
3181+ """
3182+ self.log.debug('Deleting rmq user ({})...'.format(username))
3183+
3184+ # Check that the user exists
3185+ cmd_user_list = 'rabbitmqctl list_users'
3186+ output, _ = self.run_cmd_unit(sentry_units[0], cmd_user_list)
3187+
3188+ if username not in output:
3189+ self.log.warning('User ({}) does not exist, returning '
3190+ 'gracefully.'.format(username))
3191+ return
3192+
3193+ # Delete the user
3194+ cmd_user_del = 'rabbitmqctl delete_user {}'.format(username)
3195+ output, _ = self.run_cmd_unit(sentry_units[0], cmd_user_del)
3196+
3197+ def get_rmq_cluster_status(self, sentry_unit):
3198+ """Execute rabbitmq cluster status command on a unit and return
3199+ the full output.
3200+
3201+ :param sentry_unit: sentry unit
3202+ :returns: String containing console output of cluster status command
3203+ """
3204+ cmd = 'rabbitmqctl cluster_status'
3205+ output, _ = self.run_cmd_unit(sentry_unit, cmd)
3206+ self.log.debug('{} cluster_status:\n{}'.format(
3207+ sentry_unit.info['unit_name'], output))
3208+ return str(output)
3209+
3210+ def get_rmq_cluster_running_nodes(self, sentry_unit):
3211+ """Parse rabbitmqctl cluster_status output string, return list of
3212+ running rabbitmq cluster nodes.
3213+
3214+ :param sentry_unit: sentry unit
3215+ :returns: List containing node names of running nodes
3216+ """
3217+ # NOTE(beisner): rabbitmqctl cluster_status output is not
3218+ # json-parsable, do string chop foo, then json.loads that.
3219+ str_stat = self.get_rmq_cluster_status(sentry_unit)
3220+ if 'running_nodes' in str_stat:
3221+ pos_start = str_stat.find("{running_nodes,") + 15
3222+ pos_end = str_stat.find("]},", pos_start) + 1
3223+ str_run_nodes = str_stat[pos_start:pos_end].replace("'", '"')
3224+ run_nodes = json.loads(str_run_nodes)
3225+ return run_nodes
3226+ else:
3227+ return []
3228+
3229+ def validate_rmq_cluster_running_nodes(self, sentry_units):
3230+ """Check that all rmq unit hostnames are represented in the
3231+ cluster_status output of all units.
3232+
3233+ :param sentry_units: list of sentry unit pointers (all rmq units)
3235+ :returns: None if successful, otherwise return error message
3236+ """
3237+ host_names = self.get_unit_hostnames(sentry_units)
3238+ errors = []
3239+
3240+ # Query every unit for cluster_status running nodes
3241+ for query_unit in sentry_units:
3242+ query_unit_name = query_unit.info['unit_name']
3243+ running_nodes = self.get_rmq_cluster_running_nodes(query_unit)
3244+
3245+ # Confirm that every unit is represented in the queried unit's
3246+ # cluster_status running nodes output.
3247+ for validate_unit in sentry_units:
3248+ val_host_name = host_names[validate_unit.info['unit_name']]
3249+ val_node_name = 'rabbit@{}'.format(val_host_name)
3250+
3251+ if val_node_name not in running_nodes:
3252+ errors.append('Cluster member check failed on {}: {} not '
3253+ 'in {}\n'.format(query_unit_name,
3254+ val_node_name,
3255+ running_nodes))
3256+ if errors:
3257+ return ''.join(errors)
3258+
3259+ def rmq_ssl_is_enabled_on_unit(self, sentry_unit, port=None):
3260+ """Check a single juju rmq unit for ssl and port in the config file."""
3261+ host = sentry_unit.info['public-address']
3262+ unit_name = sentry_unit.info['unit_name']
3263+
3264+ conf_file = '/etc/rabbitmq/rabbitmq.config'
3265+ conf_contents = str(self.file_contents_safe(sentry_unit,
3266+ conf_file, max_wait=16))
3267+ # Checks
3268+ conf_ssl = 'ssl' in conf_contents
3269+ conf_port = str(port) in conf_contents
3270+
3271+ # Port explicitly checked in config
3272+ if port and conf_port and conf_ssl:
3273+ self.log.debug('SSL is enabled @{}:{} '
3274+ '({})'.format(host, port, unit_name))
3275+ return True
3276+ elif port and not conf_port and conf_ssl:
3277+ self.log.debug('SSL is enabled @{} but not on port {} '
3278+ '({})'.format(host, port, unit_name))
3279+ return False
3280+ # Port not checked (useful when checking that ssl is disabled)
3281+ elif not port and conf_ssl:
3282+ self.log.debug('SSL is enabled @{}:{} '
3283+ '({})'.format(host, port, unit_name))
3284+ return True
3285+ elif not conf_ssl:
3286+ self.log.debug('SSL not enabled @{}:{} '
3287+ '({})'.format(host, port, unit_name))
3288+ return False
3289+ else:
3290+ msg = ('Unknown condition when checking SSL status @{}:{} '
3291+ '({})'.format(host, port, unit_name))
3292+ amulet.raise_status(amulet.FAIL, msg)
3293+
3294+ def validate_rmq_ssl_enabled_units(self, sentry_units, port=None):
3295+ """Check that ssl is enabled on rmq juju sentry units.
3296+
3297+ :param sentry_units: list of all rmq sentry units
3298+ :param port: optional ssl port override to validate
3299+ :returns: None if successful, otherwise return error message
3300+ """
3301+ for sentry_unit in sentry_units:
3302+ if not self.rmq_ssl_is_enabled_on_unit(sentry_unit, port=port):
3303+ return ('Unexpected condition: ssl is disabled on unit '
3304+ '({})'.format(sentry_unit.info['unit_name']))
3305+ return None
3306+
3307+ def validate_rmq_ssl_disabled_units(self, sentry_units):
3308+ """Check that ssl is disabled on listed rmq juju sentry units.
3309+
3310+ :param sentry_units: list of all rmq sentry units
3311+ :returns: None if successful, otherwise return error message
3312+ """
3313+ for sentry_unit in sentry_units:
3314+ if self.rmq_ssl_is_enabled_on_unit(sentry_unit):
3315+ return ('Unexpected condition: ssl is enabled on unit '
3316+ '({})'.format(sentry_unit.info['unit_name']))
3317+ return None
3318+
3319+ def configure_rmq_ssl_on(self, sentry_units, deployment,
3320+ port=None, max_wait=60):
3321+ """Turn ssl charm config option on, with optional non-default
3322+ ssl port specification. Confirm that it is enabled on every
3323+ unit.
3324+
3325+ :param sentry_units: list of sentry units
3326+ :param deployment: amulet deployment object pointer
3327+ :param port: amqp port, use defaults if None
3328+ :param max_wait: maximum time to wait in seconds to confirm
3329+ :returns: None if successful. Raise on error.
3330+ """
3331+ self.log.debug('Setting ssl charm config option: on')
3332+
3333+ # Enable RMQ SSL
3334+ config = {'ssl': 'on'}
3335+ if port:
3336+ config['ssl_port'] = port
3337+
3338+ deployment.d.configure('rabbitmq-server', config)
3339+
3340+ # Wait for unit status
3341+ self.rmq_wait_for_cluster(deployment)
3342+
3343+ # Confirm
3344+ tries = 0
3345+ ret = self.validate_rmq_ssl_enabled_units(sentry_units, port=port)
3346+ while ret and tries < (max_wait / 4):
3347+ time.sleep(4)
3348+ self.log.debug('Attempt {}: {}'.format(tries, ret))
3349+ ret = self.validate_rmq_ssl_enabled_units(sentry_units, port=port)
3350+ tries += 1
3351+
3352+ if ret:
3353+ amulet.raise_status(amulet.FAIL, ret)
3354+
3355+ def configure_rmq_ssl_off(self, sentry_units, deployment, max_wait=60):
3356+ """Turn ssl charm config option off, confirm that it is disabled
3357+ on every unit.
3358+
3359+ :param sentry_units: list of sentry units
3360+ :param deployment: amulet deployment object pointer
3361+ :param max_wait: maximum time to wait in seconds to confirm
3362+ :returns: None if successful. Raise on error.
3363+ """
3364+ self.log.debug('Setting ssl charm config option: off')
3365+
3366+ # Disable RMQ SSL
3367+ config = {'ssl': 'off'}
3368+ deployment.d.configure('rabbitmq-server', config)
3369+
3370+ # Wait for unit status
3371+ self.rmq_wait_for_cluster(deployment)
3372+
3373+ # Confirm
3374+ tries = 0
3375+ ret = self.validate_rmq_ssl_disabled_units(sentry_units)
3376+ while ret and tries < (max_wait / 4):
3377+ time.sleep(4)
3378+ self.log.debug('Attempt {}: {}'.format(tries, ret))
3379+ ret = self.validate_rmq_ssl_disabled_units(sentry_units)
3380+ tries += 1
3381+
3382+ if ret:
3383+ amulet.raise_status(amulet.FAIL, ret)
3384+
3385+ def connect_amqp_by_unit(self, sentry_unit, ssl=False,
3386+ port=None, fatal=True,
3387+ username="testuser1", password="changeme"):
3388+ """Establish and return a pika amqp connection to the rabbitmq service
3389+ running on a rmq juju unit.
3390+
3391+ :param sentry_unit: sentry unit pointer
3392+ :param ssl: boolean, default to False
3393+ :param port: amqp port, use defaults if None
3394+ :param fatal: boolean, default to True (raises on connect error)
3395+ :param username: amqp user name, default to testuser1
3396+ :param password: amqp user password
3397+ :returns: pika amqp connection pointer or None if failed and non-fatal
3398+ """
3399+ host = sentry_unit.info['public-address']
3400+ unit_name = sentry_unit.info['unit_name']
3401+
3402+ # Default port logic if port is not specified
3403+ if ssl and not port:
3404+ port = 5671
3405+ elif not ssl and not port:
3406+ port = 5672
3407+
3408+ self.log.debug('Connecting to amqp on {}:{} ({}) as '
3409+ '{}...'.format(host, port, unit_name, username))
3410+
3411+ try:
3412+ credentials = pika.PlainCredentials(username, password)
3413+ parameters = pika.ConnectionParameters(host=host, port=port,
3414+ credentials=credentials,
3415+ ssl=ssl,
3416+ connection_attempts=3,
3417+ retry_delay=5,
3418+ socket_timeout=1)
3419+ connection = pika.BlockingConnection(parameters)
3420+ assert connection.server_properties['product'] == 'RabbitMQ'
3421+ self.log.debug('Connect OK')
3422+ return connection
3423+ except Exception as e:
3424+ msg = ('amqp connection failed to {}:{} as '
3425+ '{} ({})'.format(host, port, username, str(e)))
3426+ if fatal:
3427+ amulet.raise_status(amulet.FAIL, msg)
3428+ else:
3429+ self.log.warn(msg)
3430+ return None
3431+
3432+ def publish_amqp_message_by_unit(self, sentry_unit, message,
3433+ queue="test", ssl=False,
3434+ username="testuser1",
3435+ password="changeme",
3436+ port=None):
3437+ """Publish an amqp message to a rmq juju unit.
3438+
3439+ :param sentry_unit: sentry unit pointer
3440+ :param message: amqp message string
3441+ :param queue: message queue, default to test
3442+ :param username: amqp user name, default to testuser1
3443+ :param password: amqp user password
3444+ :param ssl: boolean, default to False
3445+ :param port: amqp port, use defaults if None
3446+ :returns: None. Raises exception if publish failed.
3447+ """
3448+ self.log.debug('Publishing message to {} queue:\n{}'.format(queue,
3449+ message))
3450+ connection = self.connect_amqp_by_unit(sentry_unit, ssl=ssl,
3451+ port=port,
3452+ username=username,
3453+ password=password)
3454+
3455+ # NOTE(beisner): extra debug here re: pika hang potential:
3456+ # https://github.com/pika/pika/issues/297
3457+ # https://groups.google.com/forum/#!topic/rabbitmq-users/Ja0iyfF0Szw
3458+ self.log.debug('Defining channel...')
3459+ channel = connection.channel()
3460+ self.log.debug('Declaring queue...')
3461+ channel.queue_declare(queue=queue, auto_delete=False, durable=True)
3462+ self.log.debug('Publishing message...')
3463+ channel.basic_publish(exchange='', routing_key=queue, body=message)
3464+ self.log.debug('Closing channel...')
3465+ channel.close()
3466+ self.log.debug('Closing connection...')
3467+ connection.close()
3468+
3469+ def get_amqp_message_by_unit(self, sentry_unit, queue="test",
3470+ username="testuser1",
3471+ password="changeme",
3472+ ssl=False, port=None):
3473+ """Get an amqp message from a rmq juju unit.
3474+
3475+ :param sentry_unit: sentry unit pointer
3476+ :param queue: message queue, default to test
3477+ :param username: amqp user name, default to testuser1
3478+ :param password: amqp user password
3479+ :param ssl: boolean, default to False
3480+ :param port: amqp port, use defaults if None
3481+ :returns: amqp message body as string. Raise if get fails.
3482+ """
3483+ connection = self.connect_amqp_by_unit(sentry_unit, ssl=ssl,
3484+ port=port,
3485+ username=username,
3486+ password=password)
3487+ channel = connection.channel()
3488+ method_frame, _, body = channel.basic_get(queue)
3489+
3490+ if method_frame:
3491+ self.log.debug('Retrieved message from {} queue:\n{}'.format(queue,
3492+ body))
3493+ channel.basic_ack(method_frame.delivery_tag)
3494+ channel.close()
3495+ connection.close()
3496+ return body
3497+ else:
3498+ msg = 'No message retrieved.'
3499+ amulet.raise_status(amulet.FAIL, msg)
3500
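The default-port selection in connect_amqp_by_unit above (5671 for SSL, 5672 otherwise, unless a port is given) can be isolated as a tiny pure helper. This is an illustrative sketch, not part of charm-helpers:

```python
def default_amqp_port(ssl=False, port=None):
    """Return the explicit port if one was given, otherwise the
    conventional RabbitMQ port: 5671 for SSL/TLS, 5672 for plain TCP.
    Mirrors the default-port logic in connect_amqp_by_unit."""
    if port:
        return port
    return 5671 if ssl else 5672
```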
3501=== modified file 'hooks/charmhelpers/contrib/openstack/context.py'
3502--- hooks/charmhelpers/contrib/openstack/context.py 2015-07-29 18:19:18 +0000
3503+++ hooks/charmhelpers/contrib/openstack/context.py 2016-05-18 10:05:52 +0000
3504@@ -14,12 +14,13 @@
3505 # You should have received a copy of the GNU Lesser General Public License
3506 # along with charm-helpers. If not, see <http://www.gnu.org/licenses/>.
3507
3508+import glob
3509 import json
3510 import os
3511 import re
3512 import time
3513 from base64 import b64decode
3514-from subprocess import check_call
3515+from subprocess import check_call, CalledProcessError
3516
3517 import six
3518 import yaml
3519@@ -44,16 +45,20 @@
3520 INFO,
3521 WARNING,
3522 ERROR,
3523+ status_set,
3524 )
3525
3526 from charmhelpers.core.sysctl import create as sysctl_create
3527 from charmhelpers.core.strutils import bool_from_string
3528
3529 from charmhelpers.core.host import (
3530+ get_bond_master,
3531+ is_phy_iface,
3532 list_nics,
3533 get_nic_hwaddr,
3534 mkdir,
3535 write_file,
3536+ pwgen,
3537 )
3538 from charmhelpers.contrib.hahelpers.cluster import (
3539 determine_apache_port,
3540@@ -84,6 +89,14 @@
3541 is_bridge_member,
3542 )
3543 from charmhelpers.contrib.openstack.utils import get_host_ip
3544+from charmhelpers.core.unitdata import kv
3545+
3546+try:
3547+ import psutil
3548+except ImportError:
3549+ apt_install('python-psutil', fatal=True)
3550+ import psutil
3551+
3552 CA_CERT_PATH = '/usr/local/share/ca-certificates/keystone_juju_ca_cert.crt'
3553 ADDRESS_TYPES = ['admin', 'internal', 'public']
3554
3555@@ -192,10 +205,50 @@
3556 class OSContextGenerator(object):
3557 """Base class for all context generators."""
3558 interfaces = []
3559+ related = False
3560+ complete = False
3561+ missing_data = []
3562
3563 def __call__(self):
3564 raise NotImplementedError
3565
3566+ def context_complete(self, ctxt):
3567+ """Check the required context data for missing values.
3568+ Record any missing keys in self.missing_data and return False;
3569+ otherwise set self.complete and return True.
3570+ """
3571+ # Fresh start
3572+ self.complete = False
3573+ self.missing_data = []
3574+ for k, v in six.iteritems(ctxt):
3575+ if v is None or v == '':
3576+ if k not in self.missing_data:
3577+ self.missing_data.append(k)
3578+
3579+ if self.missing_data:
3580+ self.complete = False
3581+ log('Missing required data: %s' % ' '.join(self.missing_data), level=INFO)
3582+ else:
3583+ self.complete = True
3584+ return self.complete
3585+
3586+ def get_related(self):
3587+ """Check if any of the context interfaces have relation ids.
3588+ Set self.related and return True if one of the interfaces
3589+ has relation ids.
3590+ """
3591+ # Fresh start
3592+ self.related = False
3593+ try:
3594+ for interface in self.interfaces:
3595+ if relation_ids(interface):
3596+ self.related = True
3597+ return self.related
3598+ except AttributeError as e:
3599+ log("{} {}"
3600+ "".format(self, e), 'INFO')
3601+ return self.related
3602+
3603
3604 class SharedDBContext(OSContextGenerator):
3605 interfaces = ['shared-db']
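The new context_complete method above reduces to a scan of the context dict for None or empty-string values. A standalone sketch of that completeness test (the function name is illustrative, not part of charm-helpers):

```python
def find_missing(ctxt):
    """Return sorted keys of a context dict whose values are None or ''
    -- the same test OSContextGenerator.context_complete applies before
    deciding whether a context is usable."""
    return sorted(k for k, v in ctxt.items() if v is None or v == '')

# A context missing its password is incomplete:
ctxt = {'database_host': '10.0.0.2', 'database_user': 'nova',
        'database_password': None, 'database_type': 'mysql'}
```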
3606@@ -211,6 +264,7 @@
3607 self.database = database
3608 self.user = user
3609 self.ssl_dir = ssl_dir
3610+ self.rel_name = self.interfaces[0]
3611
3612 def __call__(self):
3613 self.database = self.database or config('database')
3614@@ -244,6 +298,7 @@
3615 password_setting = self.relation_prefix + '_password'
3616
3617 for rid in relation_ids(self.interfaces[0]):
3618+ self.related = True
3619 for unit in related_units(rid):
3620 rdata = relation_get(rid=rid, unit=unit)
3621 host = rdata.get('db_host')
3622@@ -255,7 +310,7 @@
3623 'database_password': rdata.get(password_setting),
3624 'database_type': 'mysql'
3625 }
3626- if context_complete(ctxt):
3627+ if self.context_complete(ctxt):
3628 db_ssl(rdata, ctxt, self.ssl_dir)
3629 return ctxt
3630 return {}
3631@@ -276,6 +331,7 @@
3632
3633 ctxt = {}
3634 for rid in relation_ids(self.interfaces[0]):
3635+ self.related = True
3636 for unit in related_units(rid):
3637 rel_host = relation_get('host', rid=rid, unit=unit)
3638 rel_user = relation_get('user', rid=rid, unit=unit)
3639@@ -285,7 +341,7 @@
3640 'database_user': rel_user,
3641 'database_password': rel_passwd,
3642 'database_type': 'postgresql'}
3643- if context_complete(ctxt):
3644+ if self.context_complete(ctxt):
3645 return ctxt
3646
3647 return {}
3648@@ -346,6 +402,7 @@
3649 ctxt['signing_dir'] = cachedir
3650
3651 for rid in relation_ids(self.rel_name):
3652+ self.related = True
3653 for unit in related_units(rid):
3654 rdata = relation_get(rid=rid, unit=unit)
3655 serv_host = rdata.get('service_host')
3656@@ -354,6 +411,7 @@
3657 auth_host = format_ipv6_addr(auth_host) or auth_host
3658 svc_protocol = rdata.get('service_protocol') or 'http'
3659 auth_protocol = rdata.get('auth_protocol') or 'http'
3660+ api_version = rdata.get('api_version') or '2.0'
3661 ctxt.update({'service_port': rdata.get('service_port'),
3662 'service_host': serv_host,
3663 'auth_host': auth_host,
3664@@ -362,9 +420,10 @@
3665 'admin_user': rdata.get('service_username'),
3666 'admin_password': rdata.get('service_password'),
3667 'service_protocol': svc_protocol,
3668- 'auth_protocol': auth_protocol})
3669+ 'auth_protocol': auth_protocol,
3670+ 'api_version': api_version})
3671
3672- if context_complete(ctxt):
3673+ if self.context_complete(ctxt):
3674 # NOTE(jamespage) this is required for >= icehouse
3675 # so a missing value just indicates keystone needs
3676 # upgrading
3677@@ -403,6 +462,7 @@
3678 ctxt = {}
3679 for rid in relation_ids(self.rel_name):
3680 ha_vip_only = False
3681+ self.related = True
3682 for unit in related_units(rid):
3683 if relation_get('clustered', rid=rid, unit=unit):
3684 ctxt['clustered'] = True
3685@@ -435,7 +495,7 @@
3686 ha_vip_only = relation_get('ha-vip-only',
3687 rid=rid, unit=unit) is not None
3688
3689- if context_complete(ctxt):
3690+ if self.context_complete(ctxt):
3691 if 'rabbit_ssl_ca' in ctxt:
3692 if not self.ssl_dir:
3693 log("Charm not setup for ssl support but ssl ca "
3694@@ -467,7 +527,7 @@
3695 ctxt['oslo_messaging_flags'] = config_flags_parser(
3696 oslo_messaging_flags)
3697
3698- if not context_complete(ctxt):
3699+ if not self.complete:
3700 return {}
3701
3702 return ctxt
3703@@ -483,13 +543,15 @@
3704
3705 log('Generating template context for ceph', level=DEBUG)
3706 mon_hosts = []
3707- auth = None
3708- key = None
3709- use_syslog = str(config('use-syslog')).lower()
3710+ ctxt = {
3711+ 'use_syslog': str(config('use-syslog')).lower()
3712+ }
3713 for rid in relation_ids('ceph'):
3714 for unit in related_units(rid):
3715- auth = relation_get('auth', rid=rid, unit=unit)
3716- key = relation_get('key', rid=rid, unit=unit)
3717+ if not ctxt.get('auth'):
3718+ ctxt['auth'] = relation_get('auth', rid=rid, unit=unit)
3719+ if not ctxt.get('key'):
3720+ ctxt['key'] = relation_get('key', rid=rid, unit=unit)
3721 ceph_pub_addr = relation_get('ceph-public-address', rid=rid,
3722 unit=unit)
3723 unit_priv_addr = relation_get('private-address', rid=rid,
3724@@ -498,15 +560,12 @@
3725 ceph_addr = format_ipv6_addr(ceph_addr) or ceph_addr
3726 mon_hosts.append(ceph_addr)
3727
3728- ctxt = {'mon_hosts': ' '.join(sorted(mon_hosts)),
3729- 'auth': auth,
3730- 'key': key,
3731- 'use_syslog': use_syslog}
3732+ ctxt['mon_hosts'] = ' '.join(sorted(mon_hosts))
3733
3734 if not os.path.isdir('/etc/ceph'):
3735 os.mkdir('/etc/ceph')
3736
3737- if not context_complete(ctxt):
3738+ if not self.context_complete(ctxt):
3739 return {}
3740
3741 ensure_packages(['ceph-common'])
3742@@ -579,15 +638,28 @@
3743 if config('haproxy-client-timeout'):
3744 ctxt['haproxy_client_timeout'] = config('haproxy-client-timeout')
3745
3746+ if config('haproxy-queue-timeout'):
3747+ ctxt['haproxy_queue_timeout'] = config('haproxy-queue-timeout')
3748+
3749+ if config('haproxy-connect-timeout'):
3750+ ctxt['haproxy_connect_timeout'] = config('haproxy-connect-timeout')
3751+
3752 if config('prefer-ipv6'):
3753 ctxt['ipv6'] = True
3754 ctxt['local_host'] = 'ip6-localhost'
3755 ctxt['haproxy_host'] = '::'
3756- ctxt['stat_port'] = ':::8888'
3757 else:
3758 ctxt['local_host'] = '127.0.0.1'
3759 ctxt['haproxy_host'] = '0.0.0.0'
3760- ctxt['stat_port'] = ':8888'
3761+
3762+ ctxt['stat_port'] = '8888'
3763+
3764+ db = kv()
3765+ ctxt['stat_password'] = db.get('stat-password')
3766+ if not ctxt['stat_password']:
3767+ ctxt['stat_password'] = db.set('stat-password',
3768+ pwgen(32))
3769+ db.flush()
3770
3771 for frontend in cluster_hosts:
3772 if (len(cluster_hosts[frontend]['backends']) > 1 or
3773@@ -878,19 +950,6 @@
3774
3775 return calico_ctxt
3776
3777- def pg_ctxt(self):
3778- driver = neutron_plugin_attribute(self.plugin, 'driver',
3779- self.network_manager)
3780- config = neutron_plugin_attribute(self.plugin, 'config',
3781- self.network_manager)
3782- ovs_ctxt = {'core_plugin': driver,
3783- 'neutron_plugin': 'plumgrid',
3784- 'neutron_security_groups': self.neutron_security_groups,
3785- 'local_ip': unit_private_ip(),
3786- 'config': config}
3787-
3788- return ovs_ctxt
3789-
3790 def neutron_ctxt(self):
3791 if https():
3792 proto = 'https'
3793@@ -906,6 +965,31 @@
3794 'neutron_url': '%s://%s:%s' % (proto, host, '9696')}
3795 return ctxt
3796
3797+ def pg_ctxt(self):
3798+ driver = neutron_plugin_attribute(self.plugin, 'driver',
3799+ self.network_manager)
3800+ config = neutron_plugin_attribute(self.plugin, 'config',
3801+ self.network_manager)
3802+ ovs_ctxt = {'core_plugin': driver,
3803+ 'neutron_plugin': 'plumgrid',
3804+ 'neutron_security_groups': self.neutron_security_groups,
3805+ 'local_ip': unit_private_ip(),
3806+ 'config': config}
3807+ return ovs_ctxt
3808+
3809+ def midonet_ctxt(self):
3810+ driver = neutron_plugin_attribute(self.plugin, 'driver',
3811+ self.network_manager)
3812+ midonet_config = neutron_plugin_attribute(self.plugin, 'config',
3813+ self.network_manager)
3814+ mido_ctxt = {'core_plugin': driver,
3815+ 'neutron_plugin': 'midonet',
3816+ 'neutron_security_groups': self.neutron_security_groups,
3817+ 'local_ip': unit_private_ip(),
3818+ 'config': midonet_config}
3819+
3820+ return mido_ctxt
3821+
3822 def __call__(self):
3823 if self.network_manager not in ['quantum', 'neutron']:
3824 return {}
3825@@ -927,6 +1011,8 @@
3826 ctxt.update(self.nuage_ctxt())
3827 elif self.plugin == 'plumgrid':
3828 ctxt.update(self.pg_ctxt())
3829+ elif self.plugin == 'midonet':
3830+ ctxt.update(self.midonet_ctxt())
3831
3832 alchemy_flags = config('neutron-alchemy-flags')
3833 if alchemy_flags:
3834@@ -938,7 +1024,6 @@
3835
3836
3837 class NeutronPortContext(OSContextGenerator):
3838- NIC_PREFIXES = ['eth', 'bond']
3839
3840 def resolve_ports(self, ports):
3841 """Resolve NICs not yet bound to bridge(s)
3842@@ -950,7 +1035,18 @@
3843
3844 hwaddr_to_nic = {}
3845 hwaddr_to_ip = {}
3846- for nic in list_nics(self.NIC_PREFIXES):
3847+ for nic in list_nics():
3848+ # Ignore virtual interfaces (bond masters will be identified from
3849+ # their slaves)
3850+ if not is_phy_iface(nic):
3851+ continue
3852+
3853+ _nic = get_bond_master(nic)
3854+ if _nic:
3855+ log("Replacing iface '%s' with bond master '%s'" % (nic, _nic),
3856+ level=DEBUG)
3857+ nic = _nic
3858+
3859 hwaddr = get_nic_hwaddr(nic)
3860 hwaddr_to_nic[hwaddr] = nic
3861 addresses = get_ipv4_addr(nic, fatal=False)
3862@@ -976,7 +1072,8 @@
3863 # trust it to be the real external network).
3864 resolved.append(entry)
3865
3866- return resolved
3867+ # Ensure no duplicates
3868+ return list(set(resolved))
3869
3870
3871 class OSConfigFlagContext(OSContextGenerator):
3872@@ -1016,6 +1113,20 @@
3873 config_flags_parser(config_flags)}
3874
3875
3876+class LibvirtConfigFlagsContext(OSContextGenerator):
3877+ """
3878+ This context provides support for extending
3879+ the libvirt section through user-defined flags.
3880+ """
3881+ def __call__(self):
3882+ ctxt = {}
3883+ libvirt_flags = config('libvirt-flags')
3884+ if libvirt_flags:
3885+ ctxt['libvirt_flags'] = config_flags_parser(
3886+ libvirt_flags)
3887+ return ctxt
3888+
3889+
3890 class SubordinateConfigContext(OSContextGenerator):
3891
3892 """
3893@@ -1048,7 +1159,7 @@
3894
3895 ctxt = {
3896 ... other context ...
3897- 'subordinate_config': {
3898+ 'subordinate_configuration': {
3899 'DEFAULT': {
3900 'key1': 'value1',
3901 },
3902@@ -1066,13 +1177,22 @@
3903 :param config_file : Service's config file to query sections
3904 :param interface : Subordinate interface to inspect
3905 """
3906- self.service = service
3907 self.config_file = config_file
3908- self.interface = interface
3909+ if isinstance(service, list):
3910+ self.services = service
3911+ else:
3912+ self.services = [service]
3913+ if isinstance(interface, list):
3914+ self.interfaces = interface
3915+ else:
3916+ self.interfaces = [interface]
3917
3918 def __call__(self):
3919 ctxt = {'sections': {}}
3920- for rid in relation_ids(self.interface):
3921+ rids = []
3922+ for interface in self.interfaces:
3923+ rids.extend(relation_ids(interface))
3924+ for rid in rids:
3925 for unit in related_units(rid):
3926 sub_config = relation_get('subordinate_configuration',
3927 rid=rid, unit=unit)
3928@@ -1080,33 +1200,37 @@
3929 try:
3930 sub_config = json.loads(sub_config)
3931 except:
3932- log('Could not parse JSON from subordinate_config '
3933- 'setting from %s' % rid, level=ERROR)
3934- continue
3935-
3936- if self.service not in sub_config:
3937- log('Found subordinate_config on %s but it contained'
3938- 'nothing for %s service' % (rid, self.service),
3939- level=INFO)
3940- continue
3941-
3942- sub_config = sub_config[self.service]
3943- if self.config_file not in sub_config:
3944- log('Found subordinate_config on %s but it contained'
3945- 'nothing for %s' % (rid, self.config_file),
3946- level=INFO)
3947- continue
3948-
3949- sub_config = sub_config[self.config_file]
3950- for k, v in six.iteritems(sub_config):
3951- if k == 'sections':
3952- for section, config_dict in six.iteritems(v):
3953- log("adding section '%s'" % (section),
3954- level=DEBUG)
3955- ctxt[k][section] = config_dict
3956- else:
3957- ctxt[k] = v
3958-
3959+ log('Could not parse JSON from '
3960+ 'subordinate_configuration setting from %s'
3961+ % rid, level=ERROR)
3962+ continue
3963+
3964+ for service in self.services:
3965+ if service not in sub_config:
3966+ log('Found subordinate_configuration on %s but it '
3967+ 'contained nothing for %s service'
3968+ % (rid, service), level=INFO)
3969+ continue
3970+
3971+ sub_config = sub_config[service]
3972+ if self.config_file not in sub_config:
3973+ log('Found subordinate_configuration on %s but it '
3974+ 'contained nothing for %s'
3975+ % (rid, self.config_file), level=INFO)
3976+ continue
3977+
3978+ sub_config = sub_config[self.config_file]
3979+ for k, v in six.iteritems(sub_config):
3980+ if k == 'sections':
3981+ for section, config_list in six.iteritems(v):
3982+ log("adding section '%s'" % (section),
3983+ level=DEBUG)
3984+ if ctxt[k].get(section):
3985+ ctxt[k][section].extend(config_list)
3986+ else:
3987+ ctxt[k][section] = config_list
3988+ else:
3989+ ctxt[k] = v
3990 log("%d section(s) found" % (len(ctxt['sections'])), level=DEBUG)
3991 return ctxt
3992
3993@@ -1143,13 +1267,11 @@
3994
3995 @property
3996 def num_cpus(self):
3997- try:
3998- from psutil import NUM_CPUS
3999- except ImportError:
4000- apt_install('python-psutil', fatal=True)
4001- from psutil import NUM_CPUS
4002-
4003- return NUM_CPUS
4004+ # NOTE: use cpu_count if present (16.04 support)
4005+ if hasattr(psutil, 'cpu_count'):
4006+ return psutil.cpu_count()
4007+ else:
4008+ return psutil.NUM_CPUS
4009
4010 def __call__(self):
4011 multiplier = config('worker-multiplier') or 0
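The num_cpus change above probes psutil for the newer cpu_count() function (psutil >= 2.0, as shipped on 16.04) before falling back to the legacy NUM_CPUS constant. The same version-tolerance pattern, sketched against stub modules so it needs no psutil install (stub names are illustrative):

```python
def count_cpus(mod):
    """Version-tolerant CPU count: prefer the cpu_count() function
    (psutil >= 2.0), fall back to the legacy NUM_CPUS constant."""
    if hasattr(mod, 'cpu_count'):
        return mod.cpu_count()
    return mod.NUM_CPUS

# Stubs standing in for old and new psutil APIs (illustrative only):
class OldPsutil:
    NUM_CPUS = 4

class NewPsutil:
    @staticmethod
    def cpu_count():
        return 8
```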
4012@@ -1283,15 +1405,19 @@
4013 def __call__(self):
4014 ports = config('data-port')
4015 if ports:
4016+ # Map of {port/mac:bridge}
4017 portmap = parse_data_port_mappings(ports)
4018- ports = portmap.values()
4019+ ports = portmap.keys()
4020+ # Resolve provided ports or mac addresses and filter out those
4021+ # already attached to a bridge.
4022 resolved = self.resolve_ports(ports)
4023+ # FIXME: is this necessary?
4024 normalized = {get_nic_hwaddr(port): port for port in resolved
4025 if port not in ports}
4026 normalized.update({port: port for port in resolved
4027 if port in ports})
4028 if resolved:
4029- return {bridge: normalized[port] for bridge, port in
4030+ return {normalized[port]: bridge for port, bridge in
4031 six.iteritems(portmap) if port in normalized.keys()}
4032
4033 return None
4034@@ -1302,12 +1428,22 @@
4035 def __call__(self):
4036 ctxt = {}
4037 mappings = super(PhyNICMTUContext, self).__call__()
4038- if mappings and mappings.values():
4039- ports = mappings.values()
4040+ if mappings and mappings.keys():
4041+ ports = sorted(mappings.keys())
4042 napi_settings = NeutronAPIContext()()
4043 mtu = napi_settings.get('network_device_mtu')
4044+ all_ports = set()
4045+ # If any of the ports is a vlan device, its underlying device must have
4046+ # mtu applied first.
4047+ for port in ports:
4048+ for lport in glob.glob("/sys/class/net/%s/lower_*" % port):
4049+ lport = os.path.basename(lport)
4050+ all_ports.add(lport.split('_')[1])
4051+
4052+ all_ports = list(all_ports)
4053+ all_ports.extend(ports)
4054 if mtu:
4055- ctxt["devs"] = '\\n'.join(ports)
4056+ ctxt["devs"] = '\\n'.join(all_ports)
4057 ctxt['mtu'] = mtu
4058
4059 return ctxt
4060@@ -1338,7 +1474,110 @@
4061 rdata.get('service_protocol') or 'http',
4062 'auth_protocol':
4063 rdata.get('auth_protocol') or 'http',
4064+ 'api_version':
4065+ rdata.get('api_version') or '2.0',
4066 }
4067- if context_complete(ctxt):
4068+ if self.context_complete(ctxt):
4069 return ctxt
4070 return {}
4071+
4072+
4073+class InternalEndpointContext(OSContextGenerator):
4074+ """Internal endpoint context.
4075+
4076+ This context provides the endpoint type used for communication between
4077+ services e.g. between Nova and Cinder internally. Openstack uses Public
4078+ endpoints by default so this allows admins to optionally use internal
4079+ endpoints.
4080+ """
4081+ def __call__(self):
4082+ return {'use_internal_endpoints': config('use-internal-endpoints')}
4083+
4084+
4085+class AppArmorContext(OSContextGenerator):
4086+ """Base class for apparmor contexts."""
4087+
4088+ def __init__(self):
4089+ self._ctxt = None
4090+ self.aa_profile = None
4091+ self.aa_utils_packages = ['apparmor-utils']
4092+
4093+ @property
4094+ def ctxt(self):
4095+ if self._ctxt is not None:
4096+ return self._ctxt
4097+ self._ctxt = self._determine_ctxt()
4098+ return self._ctxt
4099+
4100+ def _determine_ctxt(self):
4101+ """
4102+ Validate that the aa-profile-mode setting is disable, enforce, or complain.
4103+
4104+ :return ctxt: Dictionary of the apparmor profile or None
4105+ """
4106+ if config('aa-profile-mode') in ['disable', 'enforce', 'complain']:
4107+ ctxt = {'aa-profile-mode': config('aa-profile-mode')}
4108+ else:
4109+ ctxt = None
4110+ return ctxt
4111+
4112+ def __call__(self):
4113+ return self.ctxt
4114+
4115+ def install_aa_utils(self):
4116+ """
4117+ Install packages required for apparmor configuration.
4118+ """
4119+ log("Installing apparmor utils.")
4120+ ensure_packages(self.aa_utils_packages)
4121+
4122+ def manually_disable_aa_profile(self):
4123+ """
4124+ Manually disable an apparmor profile.
4125+
4126+ If aa-profile-mode is set to disable (the default) this is required
4127+ because the template has been written but apparmor is not yet aware of
4128+ the profile, so aa-disable on the profile fails. Without this the
4129+ profile would kick into enforce mode on the next service restart.
4130+
4131+ """
4132+ profile_path = '/etc/apparmor.d'
4133+ disable_path = '/etc/apparmor.d/disable'
4134+ if not os.path.lexists(os.path.join(disable_path, self.aa_profile)):
4135+ os.symlink(os.path.join(profile_path, self.aa_profile),
4136+ os.path.join(disable_path, self.aa_profile))
4137+
4138+ def setup_aa_profile(self):
4139+ """
4140+ Setup an apparmor profile.
4141+ The ctxt dictionary will contain the apparmor profile mode and
4142+ the apparmor profile name.
4143+ Makes calls out to aa-disable, aa-complain, or aa-enforce to setup
4144+ the apparmor profile.
4145+ """
4146+ self()
4147+ if not self.ctxt:
4148+ log("Not enabling apparmor Profile")
4149+ return
4150+ self.install_aa_utils()
4151+ cmd = ['aa-{}'.format(self.ctxt['aa-profile-mode'])]
4152+ cmd.append(self.ctxt['aa-profile'])
4153+ log("Setting up the apparmor profile for {} in {} mode."
4154+ "".format(self.ctxt['aa-profile'], self.ctxt['aa-profile-mode']))
4155+ try:
4156+ check_call(cmd)
4157+ except CalledProcessError as e:
4158+ # If aa-profile-mode is set to disabled (default) manual
4159+ # disabling is required as the template has been written but
4160+ # apparmor is yet unaware of the profile and aa-disable aa-profile
4161+ # fails. If aa-disable learns to read profile files first this can
4162+ # be removed.
4163+ if self.ctxt['aa-profile-mode'] == 'disable':
4164+ log("Manually disabling the apparmor profile for {}."
4165+ "".format(self.ctxt['aa-profile']))
4166+ self.manually_disable_aa_profile()
4167+ return
4168+ status_set('blocked', "Apparmor profile {} failed to be set to {}."
4169+ "".format(self.ctxt['aa-profile'],
4170+ self.ctxt['aa-profile-mode']))
4171+ raise e
4172
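AppArmorContext above accepts only the three valid aa-profile-mode values and then drives the matching aa-* tool from apparmor-utils. A minimal sketch of that validation and the command it would produce (the helper name is illustrative; aa-disable, aa-complain, and aa-enforce are the apparmor-utils tools the charm calls):

```python
VALID_MODES = ('disable', 'enforce', 'complain')

def aa_command(profile, mode):
    """Return the aa-* invocation setup_aa_profile would run, or None
    when the mode is not one of the recognised values (in which case
    the context is empty and no profile is enabled)."""
    if mode not in VALID_MODES:
        return None
    return ['aa-{}'.format(mode), profile]
```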
4173=== modified file 'hooks/charmhelpers/contrib/openstack/ip.py'
4174--- hooks/charmhelpers/contrib/openstack/ip.py 2015-07-29 18:19:18 +0000
4175+++ hooks/charmhelpers/contrib/openstack/ip.py 2016-05-18 10:05:52 +0000
4176@@ -14,16 +14,19 @@
4177 # You should have received a copy of the GNU Lesser General Public License
4178 # along with charm-helpers. If not, see <http://www.gnu.org/licenses/>.
4179
4180+
4181 from charmhelpers.core.hookenv import (
4182 config,
4183 unit_get,
4184 service_name,
4185+ network_get_primary_address,
4186 )
4187 from charmhelpers.contrib.network.ip import (
4188 get_address_in_network,
4189 is_address_in_network,
4190 is_ipv6,
4191 get_ipv6_addr,
4192+ resolve_network_cidr,
4193 )
4194 from charmhelpers.contrib.hahelpers.cluster import is_clustered
4195
4196@@ -33,16 +36,19 @@
4197
4198 ADDRESS_MAP = {
4199 PUBLIC: {
4200+ 'binding': 'public',
4201 'config': 'os-public-network',
4202 'fallback': 'public-address',
4203 'override': 'os-public-hostname',
4204 },
4205 INTERNAL: {
4206+ 'binding': 'internal',
4207 'config': 'os-internal-network',
4208 'fallback': 'private-address',
4209 'override': 'os-internal-hostname',
4210 },
4211 ADMIN: {
4212+ 'binding': 'admin',
4213 'config': 'os-admin-network',
4214 'fallback': 'private-address',
4215 'override': 'os-admin-hostname',
4216@@ -110,7 +116,7 @@
4217 correct network. If clustered with no nets defined, return primary vip.
4218
4219 If not clustered, return unit address ensuring address is on configured net
4220- split if one is configured.
4221+ split if one is configured, or a Juju 2.0 extra-binding has been used.
4222
4223 :param endpoint_type: Network endpoint type
4224 """
4225@@ -125,23 +131,45 @@
4226 net_type = ADDRESS_MAP[endpoint_type]['config']
4227 net_addr = config(net_type)
4228 net_fallback = ADDRESS_MAP[endpoint_type]['fallback']
4229+ binding = ADDRESS_MAP[endpoint_type]['binding']
4230 clustered = is_clustered()
4231- if clustered:
4232- if not net_addr:
4233- # If no net-splits defined, we expect a single vip
4234- resolved_address = vips[0]
4235- else:
4236+
4237+ if clustered and vips:
4238+ if net_addr:
4239 for vip in vips:
4240 if is_address_in_network(net_addr, vip):
4241 resolved_address = vip
4242 break
4243+ else:
4244+ # NOTE: endeavour to check vips against network space
4245+ # bindings
4246+ try:
4247+ bound_cidr = resolve_network_cidr(
4248+ network_get_primary_address(binding)
4249+ )
4250+ for vip in vips:
4251+ if is_address_in_network(bound_cidr, vip):
4252+ resolved_address = vip
4253+ break
4254+ except NotImplementedError:
4255+ # If no net-splits are configured and there is no support
4256+ # for extra bindings/network spaces, expect a single vip
4257+ resolved_address = vips[0]
4258 else:
4259 if config('prefer-ipv6'):
4260 fallback_addr = get_ipv6_addr(exc_list=vips)[0]
4261 else:
4262 fallback_addr = unit_get(net_fallback)
4263
4264- resolved_address = get_address_in_network(net_addr, fallback_addr)
4265+ if net_addr:
4266+ resolved_address = get_address_in_network(net_addr, fallback_addr)
4267+ else:
4268+ # NOTE: only try to use extra bindings if legacy network
4269+ # configuration is not in use
4270+ try:
4271+ resolved_address = network_get_primary_address(binding)
4272+ except NotImplementedError:
4273+ resolved_address = fallback_addr
4274
4275 if resolved_address is None:
4276 raise ValueError("Unable to resolve a suitable IP address based on "
4277
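For non-clustered units, resolve_address above now tries a legacy network split first, then a Juju 2.0 network binding, and finally the unit's fallback address when network_get_primary_address raises NotImplementedError. A simplified sketch of that ordering (helper and parameter names are illustrative, and the first branch stands in for get_address_in_network):

```python
def resolve_unit_address(net_addr, get_binding_addr, fallback_addr):
    """Non-clustered branch of resolve_address, simplified: a configured
    os-*-network split wins; otherwise try the Juju 2.0 binding address,
    falling back to the plain unit address when the provider does not
    support network bindings."""
    if net_addr:
        return net_addr  # stands in for get_address_in_network(...)
    try:
        return get_binding_addr()  # network_get_primary_address(binding)
    except NotImplementedError:
        return fallback_addr
```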
4278=== modified file 'hooks/charmhelpers/contrib/openstack/neutron.py'
4279--- hooks/charmhelpers/contrib/openstack/neutron.py 2015-10-02 15:07:24 +0000
4280+++ hooks/charmhelpers/contrib/openstack/neutron.py 2016-05-18 10:05:52 +0000
4281@@ -50,7 +50,7 @@
4282 if kernel_version() >= (3, 13):
4283 return []
4284 else:
4285- return ['openvswitch-datapath-dkms']
4286+ return [headers_package(), 'openvswitch-datapath-dkms']
4287
4288
4289 # legacy
4290@@ -70,7 +70,7 @@
4291 relation_prefix='neutron',
4292 ssl_dir=QUANTUM_CONF_DIR)],
4293 'services': ['quantum-plugin-openvswitch-agent'],
4294- 'packages': [[headers_package()] + determine_dkms_package(),
4295+ 'packages': [determine_dkms_package(),
4296 ['quantum-plugin-openvswitch-agent']],
4297 'server_packages': ['quantum-server',
4298 'quantum-plugin-openvswitch'],
4299@@ -111,7 +111,7 @@
4300 relation_prefix='neutron',
4301 ssl_dir=NEUTRON_CONF_DIR)],
4302 'services': ['neutron-plugin-openvswitch-agent'],
4303- 'packages': [[headers_package()] + determine_dkms_package(),
4304+ 'packages': [determine_dkms_package(),
4305 ['neutron-plugin-openvswitch-agent']],
4306 'server_packages': ['neutron-server',
4307 'neutron-plugin-openvswitch'],
4308@@ -155,7 +155,7 @@
4309 relation_prefix='neutron',
4310 ssl_dir=NEUTRON_CONF_DIR)],
4311 'services': [],
4312- 'packages': [[headers_package()] + determine_dkms_package(),
4313+ 'packages': [determine_dkms_package(),
4314 ['neutron-plugin-cisco']],
4315 'server_packages': ['neutron-server',
4316 'neutron-plugin-cisco'],
4317@@ -174,7 +174,7 @@
4318 'neutron-dhcp-agent',
4319 'nova-api-metadata',
4320 'etcd'],
4321- 'packages': [[headers_package()] + determine_dkms_package(),
4322+ 'packages': [determine_dkms_package(),
4323 ['calico-compute',
4324 'bird',
4325 'neutron-dhcp-agent',
4326@@ -209,6 +209,20 @@
4327 'server_packages': ['neutron-server',
4328 'neutron-plugin-plumgrid'],
4329 'server_services': ['neutron-server']
4330+ },
4331+ 'midonet': {
4332+ 'config': '/etc/neutron/plugins/midonet/midonet.ini',
4333+ 'driver': 'midonet.neutron.plugin.MidonetPluginV2',
4334+ 'contexts': [
4335+ context.SharedDBContext(user=config('neutron-database-user'),
4336+ database=config('neutron-database'),
4337+ relation_prefix='neutron',
4338+ ssl_dir=NEUTRON_CONF_DIR)],
4339+ 'services': [],
4340+ 'packages': [determine_dkms_package()],
4341+ 'server_packages': ['neutron-server',
4342+ 'python-neutron-plugin-midonet'],
4343+ 'server_services': ['neutron-server']
4344 }
4345 }
4346 if release >= 'icehouse':
4347@@ -219,6 +233,20 @@
4348 'neutron-plugin-ml2']
4349 # NOTE: patch in vmware renames nvp->nsx for icehouse onwards
4350 plugins['nvp'] = plugins['nsx']
4351+ if release >= 'kilo':
4352+ plugins['midonet']['driver'] = (
4353+ 'neutron.plugins.midonet.plugin.MidonetPluginV2')
4354+ if release >= 'liberty':
4355+ plugins['midonet']['driver'] = (
4356+ 'midonet.neutron.plugin_v1.MidonetPluginV2')
4357+ plugins['midonet']['server_packages'].remove(
4358+ 'python-neutron-plugin-midonet')
4359+ plugins['midonet']['server_packages'].append(
4360+ 'python-networking-midonet')
4361+ plugins['plumgrid']['driver'] = (
4362+ 'networking_plumgrid.neutron.plugins.plugin.NeutronPluginPLUMgridV2')
4363+ plugins['plumgrid']['server_packages'].remove(
4364+ 'neutron-plugin-plumgrid')
4365 return plugins
4366
4367
4368@@ -269,17 +297,30 @@
4369 return 'neutron'
4370
4371
4372-def parse_mappings(mappings):
4373+def parse_mappings(mappings, key_rvalue=False):
4374+ """By default mappings are lvalue keyed.
4375+
4376+ If key_rvalue is True, the mapping will be reversed to allow multiple
4377+ configs for the same lvalue.
4378+ """
4379 parsed = {}
4380 if mappings:
4381 mappings = mappings.split()
4382 for m in mappings:
4383 p = m.partition(':')
4384- key = p[0].strip()
4385- if p[1]:
4386- parsed[key] = p[2].strip()
4387+
4388+ if key_rvalue:
4389+ key_index = 2
4390+ val_index = 0
4391+ # if there is no rvalue skip to next
4392+ if not p[1]:
4393+ continue
4394 else:
4395- parsed[key] = ''
4396+ key_index = 0
4397+ val_index = 2
4398+
4399+ key = p[key_index].strip()
4400+ parsed[key] = p[val_index].strip()
4401
4402 return parsed
4403
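The key_rvalue branch above flips which side of the colon keys the resulting dict, skipping entries that have no rvalue. A self-contained rendering of the same parsing logic, handy for experimenting with mapping strings:

```python
def parse_mappings(mappings, key_rvalue=False):
    """Parse a space-delimited 'a:b c:d' string into a dict. By default
    keys are the lvalues; with key_rvalue=True the rvalue keys the map
    and entries with no rvalue are skipped, matching this hunk."""
    parsed = {}
    for m in (mappings or '').split():
        p = m.partition(':')
        if key_rvalue:
            if not p[1]:
                continue  # no rvalue -> nothing to key on
            parsed[p[2].strip()] = p[0].strip()
        else:
            parsed[p[0].strip()] = p[2].strip()
    return parsed
```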
4404@@ -297,25 +338,25 @@
4405 def parse_data_port_mappings(mappings, default_bridge='br-data'):
4406 """Parse data port mappings.
4407
4408- Mappings must be a space-delimited list of bridge:port mappings.
4409+ Mappings must be a space-delimited list of bridge:port.
4410
4411- Returns dict of the form {bridge:port}.
4412+ Returns dict of the form {port:bridge} where ports may be mac addresses or
4413+ interface names.
4414 """
4415- _mappings = parse_mappings(mappings)
4416+
4417+ # NOTE(dosaboy): we use rvalue for key to allow multiple values to be
4418+ # proposed for <port> since it may be a mac address which will differ
4419+ # across units, thus allowing first-known-good to be chosen.
4420+ _mappings = parse_mappings(mappings, key_rvalue=True)
4421 if not _mappings or list(_mappings.values()) == ['']:
4422 if not mappings:
4423 return {}
4424
4425 # For backwards-compatibility we need to support port-only provided in
4426 # config.
4427- _mappings = {default_bridge: mappings.split()[0]}
4428-
4429- bridges = _mappings.keys()
4430- ports = _mappings.values()
4431- if len(set(bridges)) != len(bridges):
4432- raise Exception("It is not allowed to have more than one port "
4433- "configured on the same bridge")
4434-
4435+ _mappings = {mappings.split()[0]: default_bridge}
4436+
4437+ ports = _mappings.keys()
4438 if len(set(ports)) != len(ports):
4439 raise Exception("It is not allowed to have the same port configured "
4440 "on more than one bridge")
4441
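
The `key_rvalue` behaviour introduced above can be sketched standalone: with `key_rvalue=True` the rvalue (the port) becomes the dict key, so several `bridge:port` entries can propose different ports (e.g. per-unit mac addresses) for the same bridge. This is a minimal reimplementation for illustration, not the charm-helpers module itself:

```python
def parse_mappings(mappings, key_rvalue=False):
    """By default mappings are lvalue keyed; key_rvalue reverses that."""
    parsed = {}
    if mappings:
        for m in mappings.split():
            p = m.partition(':')
            if key_rvalue:
                if not p[1]:  # no rvalue at all - skip this entry
                    continue
                key_index, val_index = 2, 0
            else:
                key_index, val_index = 0, 2
            parsed[p[key_index].strip()] = p[val_index].strip()
    return parsed

print(parse_mappings('br-data:eth0'))                   # {'br-data': 'eth0'}
print(parse_mappings('br-data:eth0', key_rvalue=True))  # {'eth0': 'br-data'}
```

Reversing the keying is what lets `parse_data_port_mappings()` below return `{port: bridge}` and tolerate per-unit mac addresses as ports.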
4442=== modified file 'hooks/charmhelpers/contrib/openstack/templating.py'
4443--- hooks/charmhelpers/contrib/openstack/templating.py 2015-07-29 18:19:18 +0000
4444+++ hooks/charmhelpers/contrib/openstack/templating.py 2016-05-18 10:05:52 +0000
4445@@ -18,7 +18,7 @@
4446
4447 import six
4448
4449-from charmhelpers.fetch import apt_install
4450+from charmhelpers.fetch import apt_install, apt_update
4451 from charmhelpers.core.hookenv import (
4452 log,
4453 ERROR,
4454@@ -29,6 +29,7 @@
4455 try:
4456 from jinja2 import FileSystemLoader, ChoiceLoader, Environment, exceptions
4457 except ImportError:
4458+ apt_update(fatal=True)
4459 apt_install('python-jinja2', fatal=True)
4460 from jinja2 import FileSystemLoader, ChoiceLoader, Environment, exceptions
4461
4462@@ -112,7 +113,7 @@
4463
4464 def complete_contexts(self):
4465 '''
4466- Return a list of interfaces that have atisfied contexts.
4467+ Return a list of interfaces that have satisfied contexts.
4468 '''
4469 if self._complete_contexts:
4470 return self._complete_contexts
4471@@ -293,3 +294,30 @@
4472 [interfaces.extend(i.complete_contexts())
4473 for i in six.itervalues(self.templates)]
4474 return interfaces
4475+
4476+ def get_incomplete_context_data(self, interfaces):
4477+ '''
4478+ Return dictionary of relation status of interfaces and any missing
4479+ required context data. Example:
4480+ {'amqp': {'missing_data': ['rabbitmq_password'], 'related': True},
4481+ 'zeromq-configuration': {'related': False}}
4482+ '''
4483+ incomplete_context_data = {}
4484+
4485+ for i in six.itervalues(self.templates):
4486+ for context in i.contexts:
4487+ for interface in interfaces:
4488+ related = False
4489+ if interface in context.interfaces:
4490+ related = context.get_related()
4491+ missing_data = context.missing_data
4492+ if missing_data:
4493+ incomplete_context_data[interface] = {'missing_data': missing_data}
4494+ if related:
4495+ if incomplete_context_data.get(interface):
4496+ incomplete_context_data[interface].update({'related': True})
4497+ else:
4498+ incomplete_context_data[interface] = {'related': True}
4499+ else:
4500+ incomplete_context_data[interface] = {'related': False}
4501+ return incomplete_context_data
4502
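
To illustrate the structure `get_incomplete_context_data()` returns, here is a minimal sketch with a stub context class standing in for an OSContextGenerator (the class and flat-list signature are illustrative, not charm-helpers API):

```python
class StubContext:
    """Stand-in for an OSContextGenerator (illustrative only)."""
    def __init__(self, interfaces, related, missing_data):
        self.interfaces = interfaces
        self._related = related
        self.missing_data = missing_data

    def get_related(self):
        return self._related


def get_incomplete_context_data(contexts, interfaces):
    # Same shape as the method above, over a flat list of contexts.
    incomplete = {}
    for context in contexts:
        for interface in interfaces:
            if interface not in context.interfaces:
                continue
            if context.missing_data:
                incomplete[interface] = {'missing_data': context.missing_data}
            if context.get_related():
                incomplete.setdefault(interface, {})['related'] = True
            else:
                incomplete[interface] = {'related': False}
    return incomplete


contexts = [StubContext(['amqp'], True, ['rabbitmq_password']),
            StubContext(['zeromq-configuration'], False, [])]
data = get_incomplete_context_data(contexts, ['amqp', 'zeromq-configuration'])
print(data['amqp'])                  # {'missing_data': ['rabbitmq_password'], 'related': True}
print(data['zeromq-configuration'])  # {'related': False}
```

This reproduces the example in the docstring: a related interface with missing data reports both keys; an unrelated one reports only `'related': False`.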
4503=== modified file 'hooks/charmhelpers/contrib/openstack/utils.py'
4504--- hooks/charmhelpers/contrib/openstack/utils.py 2015-07-29 18:19:18 +0000
4505+++ hooks/charmhelpers/contrib/openstack/utils.py 2016-05-18 10:05:52 +0000
4506@@ -1,5 +1,3 @@
4507-#!/usr/bin/python
4508-
4509 # Copyright 2014-2015 Canonical Limited.
4510 #
4511 # This file is part of charm-helpers.
4512@@ -24,8 +22,14 @@
4513 import json
4514 import os
4515 import sys
4516+import re
4517+import itertools
4518+import functools
4519
4520 import six
4521+import tempfile
4522+import traceback
4523+import uuid
4524 import yaml
4525
4526 from charmhelpers.contrib.network import ip
4527@@ -35,12 +39,18 @@
4528 )
4529
4530 from charmhelpers.core.hookenv import (
4531+ action_fail,
4532+ action_set,
4533 config,
4534 log as juju_log,
4535 charm_dir,
4536+ DEBUG,
4537 INFO,
4538+ related_units,
4539 relation_ids,
4540- relation_set
4541+ relation_set,
4542+ status_set,
4543+ hook_name
4544 )
4545
4546 from charmhelpers.contrib.storage.linux.lvm import (
4547@@ -50,7 +60,9 @@
4548 )
4549
4550 from charmhelpers.contrib.network.ip import (
4551- get_ipv6_addr
4552+ get_ipv6_addr,
4553+ is_ipv6,
4554+ port_has_listener,
4555 )
4556
4557 from charmhelpers.contrib.python.packages import (
4558@@ -58,7 +70,15 @@
4559 pip_install,
4560 )
4561
4562-from charmhelpers.core.host import lsb_release, mounts, umount
4563+from charmhelpers.core.host import (
4564+ lsb_release,
4565+ mounts,
4566+ umount,
4567+ service_running,
4568+ service_pause,
4569+ service_resume,
4570+ restart_on_change_helper,
4571+)
4572 from charmhelpers.fetch import apt_install, apt_cache, install_remote
4573 from charmhelpers.contrib.storage.linux.utils import is_block_device, zap_disk
4574 from charmhelpers.contrib.storage.linux.loopback import ensure_loopback_device
4575@@ -69,7 +89,6 @@
4576 DISTRO_PROPOSED = ('deb http://archive.ubuntu.com/ubuntu/ %s-proposed '
4577 'restricted main multiverse universe')
4578
4579-
4580 UBUNTU_OPENSTACK_RELEASE = OrderedDict([
4581 ('oneiric', 'diablo'),
4582 ('precise', 'essex'),
4583@@ -80,6 +99,7 @@
4584 ('utopic', 'juno'),
4585 ('vivid', 'kilo'),
4586 ('wily', 'liberty'),
4587+ ('xenial', 'mitaka'),
4588 ])
4589
4590
4591@@ -93,31 +113,74 @@
4592 ('2014.2', 'juno'),
4593 ('2015.1', 'kilo'),
4594 ('2015.2', 'liberty'),
4595+ ('2016.1', 'mitaka'),
4596 ])
4597
4598-# The ugly duckling
4599+# The ugly duckling - must list releases oldest to newest
4600 SWIFT_CODENAMES = OrderedDict([
4601- ('1.4.3', 'diablo'),
4602- ('1.4.8', 'essex'),
4603- ('1.7.4', 'folsom'),
4604- ('1.8.0', 'grizzly'),
4605- ('1.7.7', 'grizzly'),
4606- ('1.7.6', 'grizzly'),
4607- ('1.10.0', 'havana'),
4608- ('1.9.1', 'havana'),
4609- ('1.9.0', 'havana'),
4610- ('1.13.1', 'icehouse'),
4611- ('1.13.0', 'icehouse'),
4612- ('1.12.0', 'icehouse'),
4613- ('1.11.0', 'icehouse'),
4614- ('2.0.0', 'juno'),
4615- ('2.1.0', 'juno'),
4616- ('2.2.0', 'juno'),
4617- ('2.2.1', 'kilo'),
4618- ('2.2.2', 'kilo'),
4619- ('2.3.0', 'liberty'),
4620+ ('diablo',
4621+ ['1.4.3']),
4622+ ('essex',
4623+ ['1.4.8']),
4624+ ('folsom',
4625+ ['1.7.4']),
4626+ ('grizzly',
4627+ ['1.7.6', '1.7.7', '1.8.0']),
4628+ ('havana',
4629+ ['1.9.0', '1.9.1', '1.10.0']),
4630+ ('icehouse',
4631+ ['1.11.0', '1.12.0', '1.13.0', '1.13.1']),
4632+ ('juno',
4633+ ['2.0.0', '2.1.0', '2.2.0']),
4634+ ('kilo',
4635+ ['2.2.1', '2.2.2']),
4636+ ('liberty',
4637+ ['2.3.0', '2.4.0', '2.5.0']),
4638+ ('mitaka',
4639+ ['2.5.0', '2.6.0', '2.7.0']),
4640 ])
4641
4642+# >= Liberty version->codename mapping
4643+PACKAGE_CODENAMES = {
4644+ 'nova-common': OrderedDict([
4645+ ('12.0', 'liberty'),
4646+ ('13.0', 'mitaka'),
4647+ ]),
4648+ 'neutron-common': OrderedDict([
4649+ ('7.0', 'liberty'),
4650+ ('8.0', 'mitaka'),
4651+ ]),
4652+ 'cinder-common': OrderedDict([
4653+ ('7.0', 'liberty'),
4654+ ('8.0', 'mitaka'),
4655+ ]),
4656+ 'keystone': OrderedDict([
4657+ ('8.0', 'liberty'),
4658+ ('8.1', 'liberty'),
4659+ ('9.0', 'mitaka'),
4660+ ]),
4661+ 'horizon-common': OrderedDict([
4662+ ('8.0', 'liberty'),
4663+ ('9.0', 'mitaka'),
4664+ ]),
4665+ 'ceilometer-common': OrderedDict([
4666+ ('5.0', 'liberty'),
4667+ ('6.0', 'mitaka'),
4668+ ]),
4669+ 'heat-common': OrderedDict([
4670+ ('5.0', 'liberty'),
4671+ ('6.0', 'mitaka'),
4672+ ]),
4673+ 'glance-common': OrderedDict([
4674+ ('11.0', 'liberty'),
4675+ ('12.0', 'mitaka'),
4676+ ]),
4677+ 'openstack-dashboard': OrderedDict([
4678+ ('8.0', 'liberty'),
4679+ ('9.0', 'mitaka'),
4680+ ]),
4681+}
4682+
4683 DEFAULT_LOOPBACK_SIZE = '5G'
4684
4685
4686@@ -167,9 +230,9 @@
4687 error_out(e)
4688
4689
4690-def get_os_version_codename(codename):
4691+def get_os_version_codename(codename, version_map=OPENSTACK_CODENAMES):
4692 '''Determine OpenStack version number from codename.'''
4693- for k, v in six.iteritems(OPENSTACK_CODENAMES):
4694+ for k, v in six.iteritems(version_map):
4695 if v == codename:
4696 return k
4697 e = 'Could not derive OpenStack version for '\
4698@@ -177,6 +240,33 @@
4699 error_out(e)
4700
4701
4702+def get_os_version_codename_swift(codename):
4703+ '''Determine OpenStack version number of swift from codename.'''
4704+ for k, v in six.iteritems(SWIFT_CODENAMES):
4705+ if k == codename:
4706+ return v[-1]
4707+ e = 'Could not derive swift version for '\
4708+ 'codename: %s' % codename
4709+ error_out(e)
4710+
4711+
4712+def get_swift_codename(version):
4713+ '''Determine OpenStack codename that corresponds to swift version.'''
4714+ codenames = [k for k, v in six.iteritems(SWIFT_CODENAMES) if version in v]
4715+ if len(codenames) > 1:
4716+ # If more than one release codename contains this version we determine
4717+ # the actual codename based on the highest available install source.
4718+ for codename in reversed(codenames):
4719+ releases = UBUNTU_OPENSTACK_RELEASE
4720+ release = [k for k, v in six.iteritems(releases) if codename in v]
4721+ ret = subprocess.check_output(['apt-cache', 'policy', 'swift'])
4722+ if codename in ret or release[0] in ret:
4723+ return codename
4724+ elif len(codenames) == 1:
4725+ return codenames[0]
4726+ return None
4727+
4728+
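
The list-valued `SWIFT_CODENAMES` restructuring above means a single swift version can now belong to two releases ('2.5.0' appears under both liberty and mitaka). A pure-dict sketch of the lookup, with the `apt-cache policy` tie-breaker omitted and only two releases shown:

```python
from collections import OrderedDict

# Subset of the table above, for illustration.
SWIFT_CODENAMES = OrderedDict([
    ('liberty', ['2.3.0', '2.4.0', '2.5.0']),
    ('mitaka', ['2.5.0', '2.6.0', '2.7.0']),
])


def get_swift_codename(version):
    # Unique match: return it. Ambiguous (e.g. '2.5.0'): the real helper
    # shells out to `apt-cache policy swift` to pick the configured source;
    # this sketch just returns None in that case.
    codenames = [k for k, v in SWIFT_CODENAMES.items() if version in v]
    if len(codenames) == 1:
        return codenames[0]
    return None

print(get_swift_codename('2.6.0'))  # mitaka
print(get_swift_codename('2.5.0'))  # None (ambiguous without apt-cache)
```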
4729 def get_os_codename_package(package, fatal=True):
4730 '''Derive OpenStack release codename from an installed package.'''
4731 import apt_pkg as apt
4732@@ -201,20 +291,33 @@
4733 error_out(e)
4734
4735 vers = apt.upstream_version(pkg.current_ver.ver_str)
4736-
4737- try:
4738- if 'swift' in pkg.name:
4739- swift_vers = vers[:5]
4740- if swift_vers not in SWIFT_CODENAMES:
4741- # Deal with 1.10.0 upward
4742- swift_vers = vers[:6]
4743- return SWIFT_CODENAMES[swift_vers]
4744- else:
4745- vers = vers[:6]
4746- return OPENSTACK_CODENAMES[vers]
4747- except KeyError:
4748- e = 'Could not determine OpenStack codename for version %s' % vers
4749- error_out(e)
4750+ if 'swift' in pkg.name:
4751+ # Fully x.y.z match for swift versions
4752+ match = re.match('^(\d+)\.(\d+)\.(\d+)', vers)
4753+ else:
4754+ # x.y match only for 20XX.X
4755+ # and ignore patch level for other packages
4756+ match = re.match('^(\d+)\.(\d+)', vers)
4757+
4758+ if match:
4759+ vers = match.group(0)
4760+
4761+ # >= Liberty independent project versions
4762+ if (package in PACKAGE_CODENAMES and
4763+ vers in PACKAGE_CODENAMES[package]):
4764+ return PACKAGE_CODENAMES[package][vers]
4765+ else:
4766+ # < Liberty co-ordinated project versions
4767+ try:
4768+ if 'swift' in pkg.name:
4769+ return get_swift_codename(vers)
4770+ else:
4771+ return OPENSTACK_CODENAMES[vers]
4772+ except KeyError:
4773+ if not fatal:
4774+ return None
4775+ e = 'Could not determine OpenStack codename for version %s' % vers
4776+ error_out(e)
4777
4778
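
The two regexes in the hunk above trim an upstream package version to the granularity the lookup tables use: full `x.y.z` for swift, `x.y` for everything else. A quick standalone check of that behaviour (helper name is illustrative):

```python
import re


def trim_version(package, vers):
    # swift keys on full x.y.z; other packages key on x.y only.
    if 'swift' in package:
        match = re.match(r'^(\d+)\.(\d+)\.(\d+)', vers)
    else:
        match = re.match(r'^(\d+)\.(\d+)', vers)
    return match.group(0) if match else vers

print(trim_version('nova-common', '13.0.0'))       # 13.0
print(trim_version('swift', '2.6.0-0ubuntu1'))     # 2.6.0
```

The trimmed string is then tried against `PACKAGE_CODENAMES` first (per-project versions from Liberty onward) before falling back to the coordinated `OPENSTACK_CODENAMES` table.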
4779 def get_os_version_package(pkg, fatal=True):
4780@@ -226,12 +329,14 @@
4781
4782 if 'swift' in pkg:
4783 vers_map = SWIFT_CODENAMES
4784+ for cname, version in six.iteritems(vers_map):
4785+ if cname == codename:
4786+ return version[-1]
4787 else:
4788 vers_map = OPENSTACK_CODENAMES
4789-
4790- for version, cname in six.iteritems(vers_map):
4791- if cname == codename:
4792- return version
4793+ for version, cname in six.iteritems(vers_map):
4794+ if cname == codename:
4795+ return version
4796 # e = "Could not determine OpenStack version for package: %s" % pkg
4797 # error_out(e)
4798
4799@@ -256,12 +361,42 @@
4800
4801
4802 def import_key(keyid):
4803- cmd = "apt-key adv --keyserver hkp://keyserver.ubuntu.com:80 " \
4804- "--recv-keys %s" % keyid
4805- try:
4806- subprocess.check_call(cmd.split(' '))
4807- except subprocess.CalledProcessError:
4808- error_out("Error importing repo key %s" % keyid)
4809+ key = keyid.strip()
4810+ if (key.startswith('-----BEGIN PGP PUBLIC KEY BLOCK-----') and
4811+ key.endswith('-----END PGP PUBLIC KEY BLOCK-----')):
4812+ juju_log("PGP key found (looks like ASCII Armor format)", level=DEBUG)
4813+ juju_log("Importing ASCII Armor PGP key", level=DEBUG)
4814+ with tempfile.NamedTemporaryFile() as keyfile:
4815+ with open(keyfile.name, 'w') as fd:
4816+ fd.write(key)
4817+ fd.write("\n")
4818+
4819+ cmd = ['apt-key', 'add', keyfile.name]
4820+ try:
4821+ subprocess.check_call(cmd)
4822+ except subprocess.CalledProcessError:
4823+ error_out("Error importing PGP key '%s'" % key)
4824+ else:
4825+ juju_log("PGP key found (looks like Radix64 format)", level=DEBUG)
4826+ juju_log("Importing PGP key from keyserver", level=DEBUG)
4827+ cmd = ['apt-key', 'adv', '--keyserver',
4828+ 'hkp://keyserver.ubuntu.com:80', '--recv-keys', key]
4829+ try:
4830+ subprocess.check_call(cmd)
4831+ except subprocess.CalledProcessError:
4832+ error_out("Error importing PGP key '%s'" % key)
4833+
4834+
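
`import_key()` now accepts either a key ID or a full ASCII-armored key; the distinction is just the armor header/footer check, shown here as a standalone predicate (the helper name is illustrative):

```python
def looks_like_ascii_armor(key):
    # Same detection as import_key() above: an armored key carries the
    # BEGIN/END PGP PUBLIC KEY BLOCK markers; anything else is treated as
    # a key ID to fetch from the Ubuntu keyserver.
    key = key.strip()
    return (key.startswith('-----BEGIN PGP PUBLIC KEY BLOCK-----') and
            key.endswith('-----END PGP PUBLIC KEY BLOCK-----'))

print(looks_like_ascii_armor('ABCD1234'))  # False
```

Armored keys go through a temporary file and `apt-key add`; bare IDs still use `apt-key adv --recv-keys` against the keyserver.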
4835+def get_source_and_pgp_key(input):
4836+ """Look for a pgp key ID or ascii-armor key in the given input."""
4837+ input = input.strip()
4838+ index = input.rfind('|')
4839+ if index < 0:
4840+ return input, None
4841+
4842+ key = input[index + 1:].strip('|')
4843+ source = input[:index]
4844+ return source, key
4845
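
The source/key splitting can be exercised standalone; it splits on the last `|`, the separator the charm config already used (the URL and key ID below are placeholders):

```python
def get_source_and_pgp_key(source):
    """Split 'deb <url> <pocket>|<key>' into (source, key or None)."""
    source = source.strip()
    index = source.rfind('|')
    if index < 0:
        return source, None
    return source[:index], source[index + 1:].strip('|')

print(get_source_and_pgp_key('ppa:foo/bar'))
# ('ppa:foo/bar', None)
print(get_source_and_pgp_key('deb http://example.com trusty main|ABCD1234'))
# ('deb http://example.com trusty main', 'ABCD1234')
```

This lets `configure_installation_source()` below handle the optional key uniformly for both `ppa:` and `deb` sources instead of the old split-count special-casing.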
4846
4847 def configure_installation_source(rel):
4848@@ -273,16 +408,16 @@
4849 with open('/etc/apt/sources.list.d/juju_deb.list', 'w') as f:
4850 f.write(DISTRO_PROPOSED % ubuntu_rel)
4851 elif rel[:4] == "ppa:":
4852- src = rel
4853+ src, key = get_source_and_pgp_key(rel)
4854+ if key:
4855+ import_key(key)
4856+
4857 subprocess.check_call(["add-apt-repository", "-y", src])
4858 elif rel[:3] == "deb":
4859- l = len(rel.split('|'))
4860- if l == 2:
4861- src, key = rel.split('|')
4862- juju_log("Importing PPA key from keyserver for %s" % src)
4863+ src, key = get_source_and_pgp_key(rel)
4864+ if key:
4865 import_key(key)
4866- elif l == 1:
4867- src = rel
4868+
4869 with open('/etc/apt/sources.list.d/juju_deb.list', 'w') as f:
4870 f.write(src)
4871 elif rel[:6] == 'cloud:':
4872@@ -327,6 +462,9 @@
4873 'liberty': 'trusty-updates/liberty',
4874 'liberty/updates': 'trusty-updates/liberty',
4875 'liberty/proposed': 'trusty-proposed/liberty',
4876+ 'mitaka': 'trusty-updates/mitaka',
4877+ 'mitaka/updates': 'trusty-updates/mitaka',
4878+ 'mitaka/proposed': 'trusty-proposed/mitaka',
4879 }
4880
4881 try:
4882@@ -392,9 +530,18 @@
4883 import apt_pkg as apt
4884 src = config('openstack-origin')
4885 cur_vers = get_os_version_package(package)
4886- available_vers = get_os_version_install_source(src)
4887+ if "swift" in package:
4888+ codename = get_os_codename_install_source(src)
4889+ avail_vers = get_os_version_codename_swift(codename)
4890+ else:
4891+ avail_vers = get_os_version_install_source(src)
4892 apt.init()
4893- return apt.version_compare(available_vers, cur_vers) == 1
4894+ if "swift" in package:
4895+ major_cur_vers = cur_vers.split('.', 1)[0]
4896+ major_avail_vers = avail_vers.split('.', 1)[0]
4897+ major_diff = apt.version_compare(major_avail_vers, major_cur_vers)
4898+ return avail_vers > cur_vers and (major_diff == 1 or major_diff == 0)
4899+ return apt.version_compare(avail_vers, cur_vers) == 1
4900
4901
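
The swift branch above allows point upgrades within a release (major diff 0) as well as a single-major jump. A sketch of that gate, with a naive numeric-tuple comparison standing in for apt's version comparator (which the real code uses):

```python
def swift_upgrade_available(cur_vers, avail_vers):
    # Naive stand-in for apt.version_compare: compare numeric version tuples.
    def vtuple(v):
        return tuple(int(x) for x in v.split('.'))

    major_diff = vtuple(avail_vers)[0] - vtuple(cur_vers)[0]
    # Allowed: a newer version within the same major, or one major up.
    return vtuple(avail_vers) > vtuple(cur_vers) and major_diff in (0, 1)

print(swift_upgrade_available('2.2.2', '2.5.0'))   # True  (same major)
print(swift_upgrade_available('1.13.1', '2.0.0'))  # True  (one major up)
print(swift_upgrade_available('2.5.0', '2.5.0'))   # False (nothing newer)
```

The `major_diff in (0, 1)` check mirrors the `major_diff == 1 or major_diff == 0` condition in the hunk; non-swift packages keep the plain newer-than comparison.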
4902 def ensure_block_device(block_device):
4903@@ -469,6 +616,12 @@
4904 relation_prefix=None):
4905 hosts = get_ipv6_addr(dynamic_only=False)
4906
4907+ if config('vip'):
4908+ vips = config('vip').split()
4909+ for vip in vips:
4910+ if vip and is_ipv6(vip):
4911+ hosts.append(vip)
4912+
4913 kwargs = {'database': database,
4914 'username': database_user,
4915 'hostname': json.dumps(hosts)}
4916@@ -517,7 +670,7 @@
4917 return yaml.load(projects_yaml)
4918
4919
4920-def git_clone_and_install(projects_yaml, core_project, depth=1):
4921+def git_clone_and_install(projects_yaml, core_project):
4922 """
4923 Clone/install all specified OpenStack repositories.
4924
4925@@ -567,6 +720,9 @@
4926 for p in projects['repositories']:
4927 repo = p['repository']
4928 branch = p['branch']
4929+ depth = '1'
4930+ if 'depth' in p.keys():
4931+ depth = p['depth']
4932 if p['name'] == 'requirements':
4933 repo_dir = _git_clone_and_install_single(repo, branch, depth,
4934 parent_dir, http_proxy,
4935@@ -611,19 +767,14 @@
4936 """
4937 Clone and install a single git repository.
4938 """
4939- dest_dir = os.path.join(parent_dir, os.path.basename(repo))
4940-
4941 if not os.path.exists(parent_dir):
4942 juju_log('Directory does not exist at {}. '
4943 'Creating directory.'.format(parent_dir))
4944 os.mkdir(parent_dir)
4945
4946- if not os.path.exists(dest_dir):
4947- juju_log('Cloning git repo: {}, branch: {}'.format(repo, branch))
4948- repo_dir = install_remote(repo, dest=parent_dir, branch=branch,
4949- depth=depth)
4950- else:
4951- repo_dir = dest_dir
4952+ juju_log('Cloning git repo: {}, branch: {}'.format(repo, branch))
4953+ repo_dir = install_remote(
4954+ repo, dest=parent_dir, branch=branch, depth=depth)
4955
4956 venv = os.path.join(parent_dir, 'venv')
4957
4958@@ -704,3 +855,721 @@
4959 return projects[key]
4960
4961 return None
4962+
4963+
4964+def os_workload_status(configs, required_interfaces, charm_func=None):
4965+ """
4966+ Decorator to set workload status based on complete contexts
4967+ """
4968+ def wrap(f):
4969+ @wraps(f)
4970+ def wrapped_f(*args, **kwargs):
4971+ # Run the original function first
4972+ f(*args, **kwargs)
4973+ # Set workload status now that contexts have been
4974+ # acted on
4975+ set_os_workload_status(configs, required_interfaces, charm_func)
4976+ return wrapped_f
4977+ return wrap
4978+
4979+
4980+def set_os_workload_status(configs, required_interfaces, charm_func=None,
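
The decorator pattern above (run the hook body first, then derive and publish workload status) can be shown with stubs; `status_set` here is a stand-in for `hookenv.status_set`, and the 'active'/'Unit is ready' result is hard-coded where the real code calls `set_os_workload_status()`:

```python
from functools import wraps

STATUS = {}


def status_set(state, message):
    # Stand-in for charmhelpers.core.hookenv.status_set.
    STATUS['state'], STATUS['message'] = state, message


def os_workload_status(configs, required_interfaces, charm_func=None):
    def wrap(f):
        @wraps(f)
        def wrapped_f(*args, **kwargs):
            f(*args, **kwargs)                     # run the hook body first
            status_set('active', 'Unit is ready')  # then publish status
        return wrapped_f
    return wrap


@os_workload_status(configs=None, required_interfaces={})
def config_changed():
    pass

config_changed()
print(STATUS)  # {'state': 'active', 'message': 'Unit is ready'}
```

Wrapping the hook rather than calling `status_set` inside it keeps status reporting in one place after all contexts have been acted on.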
4981+ services=None, ports=None):
4982+ """Set the state of the workload status for the charm.
4983+
4984+ This calls _determine_os_workload_status() to get the new state, message
4985+ and sets the status using status_set()
4986+
4987+ @param configs: a templating.OSConfigRenderer() object
4988+ @param required_interfaces: {generic: [specific, specific2, ...]}
4989+ @param charm_func: a callable function that returns state, message. The
4990+ signature is charm_func(configs) -> (state, message)
4991+ @param services: list of strings OR dictionary specifying services/ports
4992+ @param ports: OPTIONAL list of port numbers.
4993+ @returns state, message: the new workload status, user message
4994+ """
4995+ state, message = _determine_os_workload_status(
4996+ configs, required_interfaces, charm_func, services, ports)
4997+ status_set(state, message)
4998+
4999+
5000+def _determine_os_workload_status(
The diff has been truncated for viewing.
