Merge lp:~plumgrid-team/charms/trusty/plumgrid-director/trunk into lp:charms/trusty/plumgrid-director

Proposed by Bilal Baqar
Status: Merged
Merged at revision: 19
Proposed branch: lp:~plumgrid-team/charms/trusty/plumgrid-director/trunk
Merge into: lp:charms/trusty/plumgrid-director
Diff against target: 9850 lines (+4432/-3432)
53 files modified
Makefile (+1/-1)
bin/charm_helpers_sync.py (+253/-0)
charm-helpers-sync.yaml (+6/-1)
config.yaml (+8/-0)
hooks/charmhelpers/contrib/amulet/deployment.py (+4/-2)
hooks/charmhelpers/contrib/amulet/utils.py (+382/-86)
hooks/charmhelpers/contrib/ansible/__init__.py (+0/-254)
hooks/charmhelpers/contrib/benchmark/__init__.py (+0/-126)
hooks/charmhelpers/contrib/charmhelpers/__init__.py (+0/-208)
hooks/charmhelpers/contrib/charmsupport/__init__.py (+0/-15)
hooks/charmhelpers/contrib/charmsupport/nrpe.py (+0/-360)
hooks/charmhelpers/contrib/charmsupport/volumes.py (+0/-175)
hooks/charmhelpers/contrib/database/mysql.py (+0/-412)
hooks/charmhelpers/contrib/network/ip.py (+55/-23)
hooks/charmhelpers/contrib/network/ovs/__init__.py (+6/-2)
hooks/charmhelpers/contrib/network/ufw.py (+5/-6)
hooks/charmhelpers/contrib/openstack/amulet/deployment.py (+135/-14)
hooks/charmhelpers/contrib/openstack/amulet/utils.py (+421/-13)
hooks/charmhelpers/contrib/openstack/context.py (+318/-79)
hooks/charmhelpers/contrib/openstack/ip.py (+35/-7)
hooks/charmhelpers/contrib/openstack/neutron.py (+62/-21)
hooks/charmhelpers/contrib/openstack/templating.py (+30/-2)
hooks/charmhelpers/contrib/openstack/utils.py (+939/-70)
hooks/charmhelpers/contrib/peerstorage/__init__.py (+0/-268)
hooks/charmhelpers/contrib/python/packages.py (+35/-11)
hooks/charmhelpers/contrib/saltstack/__init__.py (+0/-118)
hooks/charmhelpers/contrib/ssl/__init__.py (+0/-94)
hooks/charmhelpers/contrib/ssl/service.py (+0/-279)
hooks/charmhelpers/contrib/storage/linux/ceph.py (+823/-61)
hooks/charmhelpers/contrib/storage/linux/loopback.py (+10/-0)
hooks/charmhelpers/contrib/storage/linux/utils.py (+8/-7)
hooks/charmhelpers/contrib/templating/__init__.py (+0/-15)
hooks/charmhelpers/contrib/templating/contexts.py (+0/-139)
hooks/charmhelpers/contrib/templating/jinja.py (+0/-39)
hooks/charmhelpers/contrib/templating/pyformat.py (+0/-29)
hooks/charmhelpers/contrib/unison/__init__.py (+0/-313)
hooks/charmhelpers/core/hookenv.py (+220/-13)
hooks/charmhelpers/core/host.py (+298/-75)
hooks/charmhelpers/core/hugepage.py (+71/-0)
hooks/charmhelpers/core/kernel.py (+68/-0)
hooks/charmhelpers/core/services/helpers.py (+30/-5)
hooks/charmhelpers/core/strutils.py (+30/-0)
hooks/charmhelpers/core/templating.py (+21/-8)
hooks/charmhelpers/core/unitdata.py (+61/-17)
hooks/charmhelpers/fetch/__init__.py (+18/-2)
hooks/charmhelpers/fetch/archiveurl.py (+1/-1)
hooks/charmhelpers/fetch/bzrurl.py (+22/-32)
hooks/charmhelpers/fetch/giturl.py (+20/-23)
hooks/pg_dir_hooks.py (+24/-2)
hooks/pg_dir_utils.py (+3/-2)
metadata.yaml (+2/-0)
templates/kilo/nginx.conf (+5/-1)
unit_tests/test_pg_dir_hooks.py (+2/-1)
To merge this branch: bzr merge lp:~plumgrid-team/charms/trusty/plumgrid-director/trunk
Reviewer Review Type Date Requested Status
Review Queue (community) automated testing Needs Fixing
Charles Butler Pending
Review via email: mp+295027@code.launchpad.net

This proposal supersedes a proposal from 2016-01-12.

Commit message

Trusty - Liberty/Mitaka support added

Description of the change

Mitaka/Liberty changes:
- Created a new relation with neutron-api-plumgrid
- Added PLUMgrid credentials (plumgrid-username/plumgrid-password) to the charm config
- nginx config changes for the middleware
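A minimal sketch of how a hook might consume the two new config options added to config.yaml in this proposal. The helper names here are illustrative, not from the charm; in the real charm the values would come from `charmhelpers.core.hookenv.config()`.

```python
# Sketch only: stand-in for charmhelpers.core.hookenv.config(), using
# the defaults this proposal adds to config.yaml.
CONFIG_DEFAULTS = {
    'plumgrid-username': 'plumgrid',
    'plumgrid-password': 'plumgrid',
}


def charm_config(key, overrides=None):
    """Return a config value, falling back to the config.yaml default."""
    overrides = overrides or {}
    return overrides.get(key, CONFIG_DEFAULTS.get(key))


def pg_credentials(overrides=None):
    """Assemble the PLUMgrid Director credentials a hook would pass on.

    Hypothetical helper for illustration; the charm itself reads these
    values in its hooks via hookenv.config().
    """
    return (charm_config('plumgrid-username', overrides),
            charm_config('plumgrid-password', overrides))
```

With no deployment override, both values fall back to the `plumgrid` defaults declared in config.yaml.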

Revision history for this message
Review Queue (review-queue) wrote : Posted in a previous version of this proposal

This item has failed automated testing! Results available here http://juju-ci.vapour.ws:8080/job/charm-bundle-test-lxc/2161/

review: Needs Fixing (automated testing)
Revision history for this message
Review Queue (review-queue) wrote : Posted in a previous version of this proposal

This item has failed automated testing! Results available here http://juju-ci.vapour.ws:8080/job/charm-bundle-test-aws/2141/

review: Needs Fixing (automated testing)
Revision history for this message
Charles Butler (lazypower) wrote : Posted in a previous version of this proposal

Greetings Bilal,

This branch doesn't appear to apply cleanly. Can you take a look and resolve the merge conflicts?

review: Needs Fixing
Revision history for this message
Bilal Baqar (bbaqar) wrote : Posted in a previous version of this proposal

Hey Charles

Thanks for taking the time to review the merge proposal. I'll deal with the conflicts.

Revision history for this message
Review Queue (review-queue) wrote :

This item has failed automated testing! Results available here http://juju-ci.vapour.ws:8080/job/charm-bundle-test-aws/4275/

review: Needs Fixing (automated testing)
Revision history for this message
Review Queue (review-queue) wrote :

This item has failed automated testing! Results available here http://juju-ci.vapour.ws:8080/job/charm-bundle-test-lxc/4262/

review: Needs Fixing (automated testing)
Revision history for this message
Bilal Baqar (bbaqar) wrote :

Looking at the results. Will provide a fix shortly.

Preview Diff

1=== modified file 'Makefile'
2--- Makefile 2016-03-03 20:56:40 +0000
3+++ Makefile 2016-05-18 10:01:02 +0000
4@@ -4,7 +4,7 @@
5 virtualenv:
6 virtualenv .venv
7 .venv/bin/pip install flake8 nose coverage mock pyyaml netifaces \
8- netaddr jinja2
9+ netaddr jinja2 pyflakes pep8 six pbr funcsigs psutil
10
11 lint: virtualenv
12 .venv/bin/flake8 --exclude hooks/charmhelpers hooks unit_tests tests --ignore E402
13
14=== added directory 'bin'
15=== added file 'bin/charm_helpers_sync.py'
16--- bin/charm_helpers_sync.py 1970-01-01 00:00:00 +0000
17+++ bin/charm_helpers_sync.py 2016-05-18 10:01:02 +0000
18@@ -0,0 +1,253 @@
19+#!/usr/bin/python
20+
21+# Copyright 2014-2015 Canonical Limited.
22+#
23+# This file is part of charm-helpers.
24+#
25+# charm-helpers is free software: you can redistribute it and/or modify
26+# it under the terms of the GNU Lesser General Public License version 3 as
27+# published by the Free Software Foundation.
28+#
29+# charm-helpers is distributed in the hope that it will be useful,
30+# but WITHOUT ANY WARRANTY; without even the implied warranty of
31+# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
32+# GNU Lesser General Public License for more details.
33+#
34+# You should have received a copy of the GNU Lesser General Public License
35+# along with charm-helpers. If not, see <http://www.gnu.org/licenses/>.
36+
37+# Authors:
38+# Adam Gandelman <adamg@ubuntu.com>
39+
40+import logging
41+import optparse
42+import os
43+import subprocess
44+import shutil
45+import sys
46+import tempfile
47+import yaml
48+from fnmatch import fnmatch
49+
50+import six
51+
52+CHARM_HELPERS_BRANCH = 'lp:charm-helpers'
53+
54+
55+def parse_config(conf_file):
56+ if not os.path.isfile(conf_file):
57+ logging.error('Invalid config file: %s.' % conf_file)
58+ return False
59+ return yaml.load(open(conf_file).read())
60+
61+
62+def clone_helpers(work_dir, branch):
63+ dest = os.path.join(work_dir, 'charm-helpers')
64+ logging.info('Checking out %s to %s.' % (branch, dest))
65+ cmd = ['bzr', 'checkout', '--lightweight', branch, dest]
66+ subprocess.check_call(cmd)
67+ return dest
68+
69+
70+def _module_path(module):
71+ return os.path.join(*module.split('.'))
72+
73+
74+def _src_path(src, module):
75+ return os.path.join(src, 'charmhelpers', _module_path(module))
76+
77+
78+def _dest_path(dest, module):
79+ return os.path.join(dest, _module_path(module))
80+
81+
82+def _is_pyfile(path):
83+ return os.path.isfile(path + '.py')
84+
85+
86+def ensure_init(path):
87+ '''
88+ ensure directories leading up to path are importable, omitting
89+ parent directory, eg path='/hooks/helpers/foo'/:
90+ hooks/
91+ hooks/helpers/__init__.py
92+ hooks/helpers/foo/__init__.py
93+ '''
94+ for d, dirs, files in os.walk(os.path.join(*path.split('/')[:2])):
95+ _i = os.path.join(d, '__init__.py')
96+ if not os.path.exists(_i):
97+ logging.info('Adding missing __init__.py: %s' % _i)
98+ open(_i, 'wb').close()
99+
100+
101+def sync_pyfile(src, dest):
102+ src = src + '.py'
103+ src_dir = os.path.dirname(src)
104+ logging.info('Syncing pyfile: %s -> %s.' % (src, dest))
105+ if not os.path.exists(dest):
106+ os.makedirs(dest)
107+ shutil.copy(src, dest)
108+ if os.path.isfile(os.path.join(src_dir, '__init__.py')):
109+ shutil.copy(os.path.join(src_dir, '__init__.py'),
110+ dest)
111+ ensure_init(dest)
112+
113+
114+def get_filter(opts=None):
115+ opts = opts or []
116+ if 'inc=*' in opts:
117+ # do not filter any files, include everything
118+ return None
119+
120+ def _filter(dir, ls):
121+ incs = [opt.split('=').pop() for opt in opts if 'inc=' in opt]
122+ _filter = []
123+ for f in ls:
124+ _f = os.path.join(dir, f)
125+
126+ if not os.path.isdir(_f) and not _f.endswith('.py') and incs:
127+ if True not in [fnmatch(_f, inc) for inc in incs]:
128+ logging.debug('Not syncing %s, does not match include '
129+ 'filters (%s)' % (_f, incs))
130+ _filter.append(f)
131+ else:
132+ logging.debug('Including file, which matches include '
133+ 'filters (%s): %s' % (incs, _f))
134+ elif (os.path.isfile(_f) and not _f.endswith('.py')):
135+ logging.debug('Not syncing file: %s' % f)
136+ _filter.append(f)
137+ elif (os.path.isdir(_f) and not
138+ os.path.isfile(os.path.join(_f, '__init__.py'))):
139+ logging.debug('Not syncing directory: %s' % f)
140+ _filter.append(f)
141+ return _filter
142+ return _filter
143+
144+
145+def sync_directory(src, dest, opts=None):
146+ if os.path.exists(dest):
147+ logging.debug('Removing existing directory: %s' % dest)
148+ shutil.rmtree(dest)
149+ logging.info('Syncing directory: %s -> %s.' % (src, dest))
150+
151+ shutil.copytree(src, dest, ignore=get_filter(opts))
152+ ensure_init(dest)
153+
154+
155+def sync(src, dest, module, opts=None):
156+
157+ # Sync charmhelpers/__init__.py for bootstrap code.
158+ sync_pyfile(_src_path(src, '__init__'), dest)
159+
160+ # Sync other __init__.py files in the path leading to module.
161+ m = []
162+ steps = module.split('.')[:-1]
163+ while steps:
164+ m.append(steps.pop(0))
165+ init = '.'.join(m + ['__init__'])
166+ sync_pyfile(_src_path(src, init),
167+ os.path.dirname(_dest_path(dest, init)))
168+
169+ # Sync the module, or maybe a .py file.
170+ if os.path.isdir(_src_path(src, module)):
171+ sync_directory(_src_path(src, module), _dest_path(dest, module), opts)
172+ elif _is_pyfile(_src_path(src, module)):
173+ sync_pyfile(_src_path(src, module),
174+ os.path.dirname(_dest_path(dest, module)))
175+ else:
176+ logging.warn('Could not sync: %s. Neither a pyfile or directory, '
177+ 'does it even exist?' % module)
178+
179+
180+def parse_sync_options(options):
181+ if not options:
182+ return []
183+ return options.split(',')
184+
185+
186+def extract_options(inc, global_options=None):
187+ global_options = global_options or []
188+ if global_options and isinstance(global_options, six.string_types):
189+ global_options = [global_options]
190+ if '|' not in inc:
191+ return (inc, global_options)
192+ inc, opts = inc.split('|')
193+ return (inc, parse_sync_options(opts) + global_options)
194+
195+
196+def sync_helpers(include, src, dest, options=None):
197+ if not os.path.isdir(dest):
198+ os.makedirs(dest)
199+
200+ global_options = parse_sync_options(options)
201+
202+ for inc in include:
203+ if isinstance(inc, str):
204+ inc, opts = extract_options(inc, global_options)
205+ sync(src, dest, inc, opts)
206+ elif isinstance(inc, dict):
207+ # could also do nested dicts here.
208+ for k, v in six.iteritems(inc):
209+ if isinstance(v, list):
210+ for m in v:
211+ inc, opts = extract_options(m, global_options)
212+ sync(src, dest, '%s.%s' % (k, inc), opts)
213+
214+if __name__ == '__main__':
215+ parser = optparse.OptionParser()
216+ parser.add_option('-c', '--config', action='store', dest='config',
217+ default=None, help='helper config file')
218+ parser.add_option('-D', '--debug', action='store_true', dest='debug',
219+ default=False, help='debug')
220+ parser.add_option('-b', '--branch', action='store', dest='branch',
221+ help='charm-helpers bzr branch (overrides config)')
222+ parser.add_option('-d', '--destination', action='store', dest='dest_dir',
223+ help='sync destination dir (overrides config)')
224+ (opts, args) = parser.parse_args()
225+
226+ if opts.debug:
227+ logging.basicConfig(level=logging.DEBUG)
228+ else:
229+ logging.basicConfig(level=logging.INFO)
230+
231+ if opts.config:
232+ logging.info('Loading charm helper config from %s.' % opts.config)
233+ config = parse_config(opts.config)
234+ if not config:
235+ logging.error('Could not parse config from %s.' % opts.config)
236+ sys.exit(1)
237+ else:
238+ config = {}
239+
240+ if 'branch' not in config:
241+ config['branch'] = CHARM_HELPERS_BRANCH
242+ if opts.branch:
243+ config['branch'] = opts.branch
244+ if opts.dest_dir:
245+ config['destination'] = opts.dest_dir
246+
247+ if 'destination' not in config:
248+ logging.error('No destination dir. specified as option or config.')
249+ sys.exit(1)
250+
251+ if 'include' not in config:
252+ if not args:
253+ logging.error('No modules to sync specified as option or config.')
254+ sys.exit(1)
255+ config['include'] = []
256+ [config['include'].append(a) for a in args]
257+
258+ sync_options = None
259+ if 'options' in config:
260+ sync_options = config['options']
261+ tmpd = tempfile.mkdtemp()
262+ try:
263+ checkout = clone_helpers(tmpd, config['branch'])
264+ sync_helpers(config['include'], checkout, config['destination'],
265+ options=sync_options)
266+ except Exception as e:
267+ logging.error("Could not sync: %s" % e)
268+ raise e
269+ finally:
270+ logging.debug('Cleaning up %s' % tmpd)
271+ shutil.rmtree(tmpd)
272
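The sync script above lets each include entry carry per-entry options after a `|` separator (for example `contrib.openstack|inc=*`), merged with any global options. A standalone copy of that parsing logic, taken directly from `extract_options`/`parse_sync_options` in bin/charm_helpers_sync.py:

```python
# Standalone copy of the include-option parsing used by
# bin/charm_helpers_sync.py: 'module|opt1,opt2' splits into the module
# name and a list of options, appended to any global options.

def parse_sync_options(options):
    if not options:
        return []
    return options.split(',')


def extract_options(inc, global_options=None):
    global_options = global_options or []
    if isinstance(global_options, str):
        global_options = [global_options]
    if '|' not in inc:
        return (inc, global_options)
    # Note: like the original, this assumes at most one '|' per entry.
    inc, opts = inc.split('|')
    return (inc, parse_sync_options(opts) + global_options)
```

For example, `extract_options('fetch|a,b', ['g'])` yields `('fetch', ['a', 'b', 'g'])`: per-entry options come first, then the globals.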
273=== modified file 'charm-helpers-sync.yaml'
274--- charm-helpers-sync.yaml 2015-07-29 18:07:31 +0000
275+++ charm-helpers-sync.yaml 2016-05-18 10:01:02 +0000
276@@ -3,5 +3,10 @@
277 include:
278 - core
279 - fetch
280- - contrib
281+ - contrib.amulet
282+ - contrib.hahelpers
283+ - contrib.network
284+ - contrib.openstack
285+ - contrib.python
286+ - contrib.storage
287 - payload
288
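The narrowed include list above names dotted submodules (`contrib.amulet`, `contrib.openstack`, ...) rather than all of `contrib`. The sync script resolves each dotted name to a directory under the checkout; a standalone sketch mirroring its `_module_path`/`_src_path` helpers:

```python
import os

# Mirrors _module_path()/_src_path() from bin/charm_helpers_sync.py:
# a dotted module name becomes a filesystem path under the
# charm-helpers checkout.

def module_path(module):
    return os.path.join(*module.split('.'))


def src_path(src, module):
    return os.path.join(src, 'charmhelpers', module_path(module))
```

So `contrib.openstack` syncs from `<checkout>/charmhelpers/contrib/openstack`, leaving the dropped contrib trees (ansible, saltstack, unison, ...) out of the charm, as the deleted files in this diff show.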
289=== modified file 'config.yaml'
290--- config.yaml 2016-03-24 12:33:25 +0000
291+++ config.yaml 2016-05-18 10:01:02 +0000
292@@ -3,6 +3,14 @@
293 default: 192.168.100.250
294 type: string
295 description: IP address of the Director's Management interface. Same IP can be used to access PG Console.
296+ plumgrid-username:
297+ default: plumgrid
298+ type: string
299+ description: Username to access PLUMgrid Director
300+ plumgrid-password:
301+ default: plumgrid
302+ type: string
303+ description: Password to access PLUMgrid Director
304 lcm-ssh-key:
305 default: 'null'
306 type: string
307
308=== modified file 'hooks/charmhelpers/contrib/amulet/deployment.py'
309--- hooks/charmhelpers/contrib/amulet/deployment.py 2015-07-29 18:07:31 +0000
310+++ hooks/charmhelpers/contrib/amulet/deployment.py 2016-05-18 10:01:02 +0000
311@@ -51,7 +51,8 @@
312 if 'units' not in this_service:
313 this_service['units'] = 1
314
315- self.d.add(this_service['name'], units=this_service['units'])
316+ self.d.add(this_service['name'], units=this_service['units'],
317+ constraints=this_service.get('constraints'))
318
319 for svc in other_services:
320 if 'location' in svc:
321@@ -64,7 +65,8 @@
322 if 'units' not in svc:
323 svc['units'] = 1
324
325- self.d.add(svc['name'], charm=branch_location, units=svc['units'])
326+ self.d.add(svc['name'], charm=branch_location, units=svc['units'],
327+ constraints=svc.get('constraints'))
328
329 def _add_relations(self, relations):
330 """Add all of the relations for the services."""
331
332=== modified file 'hooks/charmhelpers/contrib/amulet/utils.py'
333--- hooks/charmhelpers/contrib/amulet/utils.py 2015-07-29 18:07:31 +0000
334+++ hooks/charmhelpers/contrib/amulet/utils.py 2016-05-18 10:01:02 +0000
335@@ -14,17 +14,25 @@
336 # You should have received a copy of the GNU Lesser General Public License
337 # along with charm-helpers. If not, see <http://www.gnu.org/licenses/>.
338
339-import amulet
340-import ConfigParser
341-import distro_info
342 import io
343+import json
344 import logging
345 import os
346 import re
347-import six
348+import socket
349+import subprocess
350 import sys
351 import time
352-import urlparse
353+import uuid
354+
355+import amulet
356+import distro_info
357+import six
358+from six.moves import configparser
359+if six.PY3:
360+ from urllib import parse as urlparse
361+else:
362+ import urlparse
363
364
365 class AmuletUtils(object):
366@@ -108,7 +116,7 @@
367 # /!\ DEPRECATION WARNING (beisner):
368 # New and existing tests should be rewritten to use
369 # validate_services_by_name() as it is aware of init systems.
370- self.log.warn('/!\\ DEPRECATION WARNING: use '
371+ self.log.warn('DEPRECATION WARNING: use '
372 'validate_services_by_name instead of validate_services '
373 'due to init system differences.')
374
375@@ -142,19 +150,23 @@
376
377 for service_name in services_list:
378 if (self.ubuntu_releases.index(release) >= systemd_switch or
379- service_name == "rabbitmq-server"):
380- # init is systemd
381+ service_name in ['rabbitmq-server', 'apache2']):
382+ # init is systemd (or regular sysv)
383 cmd = 'sudo service {} status'.format(service_name)
384+ output, code = sentry_unit.run(cmd)
385+ service_running = code == 0
386 elif self.ubuntu_releases.index(release) < systemd_switch:
387 # init is upstart
388 cmd = 'sudo status {}'.format(service_name)
389+ output, code = sentry_unit.run(cmd)
390+ service_running = code == 0 and "start/running" in output
391
392- output, code = sentry_unit.run(cmd)
393 self.log.debug('{} `{}` returned '
394 '{}'.format(sentry_unit.info['unit_name'],
395 cmd, code))
396- if code != 0:
397- return "command `{}` returned {}".format(cmd, str(code))
398+ if not service_running:
399+ return u"command `{}` returned {} {}".format(
400+ cmd, output, str(code))
401 return None
402
403 def _get_config(self, unit, filename):
404@@ -164,7 +176,7 @@
405 # NOTE(beisner): by default, ConfigParser does not handle options
406 # with no value, such as the flags used in the mysql my.cnf file.
407 # https://bugs.python.org/issue7005
408- config = ConfigParser.ConfigParser(allow_no_value=True)
409+ config = configparser.ConfigParser(allow_no_value=True)
410 config.readfp(io.StringIO(file_contents))
411 return config
412
413@@ -259,33 +271,52 @@
414 """Get last modification time of directory."""
415 return sentry_unit.directory_stat(directory)['mtime']
416
417- def _get_proc_start_time(self, sentry_unit, service, pgrep_full=False):
418- """Get process' start time.
419-
420- Determine start time of the process based on the last modification
421- time of the /proc/pid directory. If pgrep_full is True, the process
422- name is matched against the full command line.
423- """
424- if pgrep_full:
425- cmd = 'pgrep -o -f {}'.format(service)
426- else:
427- cmd = 'pgrep -o {}'.format(service)
428- cmd = cmd + ' | grep -v pgrep || exit 0'
429- cmd_out = sentry_unit.run(cmd)
430- self.log.debug('CMDout: ' + str(cmd_out))
431- if cmd_out[0]:
432- self.log.debug('Pid for %s %s' % (service, str(cmd_out[0])))
433- proc_dir = '/proc/{}'.format(cmd_out[0].strip())
434- return self._get_dir_mtime(sentry_unit, proc_dir)
435+ def _get_proc_start_time(self, sentry_unit, service, pgrep_full=None):
436+ """Get start time of a process based on the last modification time
437+ of the /proc/pid directory.
438+
439+ :sentry_unit: The sentry unit to check for the service on
440+ :service: service name to look for in process table
441+ :pgrep_full: [Deprecated] Use full command line search mode with pgrep
442+ :returns: epoch time of service process start
443+ :param commands: list of bash commands
444+ :param sentry_units: list of sentry unit pointers
445+ :returns: None if successful; Failure message otherwise
446+ """
447+ if pgrep_full is not None:
448+ # /!\ DEPRECATION WARNING (beisner):
449+ # No longer implemented, as pidof is now used instead of pgrep.
450+ # https://bugs.launchpad.net/charm-helpers/+bug/1474030
451+ self.log.warn('DEPRECATION WARNING: pgrep_full bool is no '
452+ 'longer implemented re: lp 1474030.')
453+
454+ pid_list = self.get_process_id_list(sentry_unit, service)
455+ pid = pid_list[0]
456+ proc_dir = '/proc/{}'.format(pid)
457+ self.log.debug('Pid for {} on {}: {}'.format(
458+ service, sentry_unit.info['unit_name'], pid))
459+
460+ return self._get_dir_mtime(sentry_unit, proc_dir)
461
462 def service_restarted(self, sentry_unit, service, filename,
463- pgrep_full=False, sleep_time=20):
464+ pgrep_full=None, sleep_time=20):
465 """Check if service was restarted.
466
467 Compare a service's start time vs a file's last modification time
468 (such as a config file for that service) to determine if the service
469 has been restarted.
470 """
471+ # /!\ DEPRECATION WARNING (beisner):
472+ # This method is prone to races in that no before-time is known.
473+ # Use validate_service_config_changed instead.
474+
475+ # NOTE(beisner) pgrep_full is no longer implemented, as pidof is now
476+ # used instead of pgrep. pgrep_full is still passed through to ensure
477+ # deprecation WARNS. lp1474030
478+ self.log.warn('DEPRECATION WARNING: use '
479+ 'validate_service_config_changed instead of '
480+ 'service_restarted due to known races.')
481+
482 time.sleep(sleep_time)
483 if (self._get_proc_start_time(sentry_unit, service, pgrep_full) >=
484 self._get_file_mtime(sentry_unit, filename)):
485@@ -294,78 +325,122 @@
486 return False
487
488 def service_restarted_since(self, sentry_unit, mtime, service,
489- pgrep_full=False, sleep_time=20,
490- retry_count=2):
491+ pgrep_full=None, sleep_time=20,
492+ retry_count=30, retry_sleep_time=10):
493 """Check if service was been started after a given time.
494
495 Args:
496 sentry_unit (sentry): The sentry unit to check for the service on
497 mtime (float): The epoch time to check against
498 service (string): service name to look for in process table
499- pgrep_full (boolean): Use full command line search mode with pgrep
500- sleep_time (int): Seconds to sleep before looking for process
501- retry_count (int): If service is not found, how many times to retry
502+ pgrep_full: [Deprecated] Use full command line search mode with pgrep
503+ sleep_time (int): Initial sleep time (s) before looking for file
504+ retry_sleep_time (int): Time (s) to sleep between retries
505+ retry_count (int): If file is not found, how many times to retry
506
507 Returns:
508 bool: True if service found and its start time it newer than mtime,
509 False if service is older than mtime or if service was
510 not found.
511 """
512- self.log.debug('Checking %s restarted since %s' % (service, mtime))
513+ # NOTE(beisner) pgrep_full is no longer implemented, as pidof is now
514+ # used instead of pgrep. pgrep_full is still passed through to ensure
515+ # deprecation WARNS. lp1474030
516+
517+ unit_name = sentry_unit.info['unit_name']
518+ self.log.debug('Checking that %s service restarted since %s on '
519+ '%s' % (service, mtime, unit_name))
520 time.sleep(sleep_time)
521- proc_start_time = self._get_proc_start_time(sentry_unit, service,
522- pgrep_full)
523- while retry_count > 0 and not proc_start_time:
524- self.log.debug('No pid file found for service %s, will retry %i '
525- 'more times' % (service, retry_count))
526- time.sleep(30)
527- proc_start_time = self._get_proc_start_time(sentry_unit, service,
528- pgrep_full)
529- retry_count = retry_count - 1
530+ proc_start_time = None
531+ tries = 0
532+ while tries <= retry_count and not proc_start_time:
533+ try:
534+ proc_start_time = self._get_proc_start_time(sentry_unit,
535+ service,
536+ pgrep_full)
537+ self.log.debug('Attempt {} to get {} proc start time on {} '
538+ 'OK'.format(tries, service, unit_name))
539+ except IOError as e:
540+ # NOTE(beisner) - race avoidance, proc may not exist yet.
541+ # https://bugs.launchpad.net/charm-helpers/+bug/1474030
542+ self.log.debug('Attempt {} to get {} proc start time on {} '
543+ 'failed\n{}'.format(tries, service,
544+ unit_name, e))
545+ time.sleep(retry_sleep_time)
546+ tries += 1
547
548 if not proc_start_time:
549 self.log.warn('No proc start time found, assuming service did '
550 'not start')
551 return False
552 if proc_start_time >= mtime:
553- self.log.debug('proc start time is newer than provided mtime'
554- '(%s >= %s)' % (proc_start_time, mtime))
555+ self.log.debug('Proc start time is newer than provided mtime'
556+ '(%s >= %s) on %s (OK)' % (proc_start_time,
557+ mtime, unit_name))
558 return True
559 else:
560- self.log.warn('proc start time (%s) is older than provided mtime '
561- '(%s), service did not restart' % (proc_start_time,
562- mtime))
563+ self.log.warn('Proc start time (%s) is older than provided mtime '
564+ '(%s) on %s, service did not '
565+ 'restart' % (proc_start_time, mtime, unit_name))
566 return False
567
568 def config_updated_since(self, sentry_unit, filename, mtime,
569- sleep_time=20):
570+ sleep_time=20, retry_count=30,
571+ retry_sleep_time=10):
572 """Check if file was modified after a given time.
573
574 Args:
575 sentry_unit (sentry): The sentry unit to check the file mtime on
576 filename (string): The file to check mtime of
577 mtime (float): The epoch time to check against
578- sleep_time (int): Seconds to sleep before looking for process
579+ sleep_time (int): Initial sleep time (s) before looking for file
580+ retry_sleep_time (int): Time (s) to sleep between retries
581+ retry_count (int): If file is not found, how many times to retry
582
583 Returns:
584 bool: True if file was modified more recently than mtime, False if
585- file was modified before mtime,
586+ file was modified before mtime, or if file not found.
587 """
588- self.log.debug('Checking %s updated since %s' % (filename, mtime))
589+ unit_name = sentry_unit.info['unit_name']
590+ self.log.debug('Checking that %s updated since %s on '
591+ '%s' % (filename, mtime, unit_name))
592 time.sleep(sleep_time)
593- file_mtime = self._get_file_mtime(sentry_unit, filename)
594+ file_mtime = None
595+ tries = 0
596+ while tries <= retry_count and not file_mtime:
597+ try:
598+ file_mtime = self._get_file_mtime(sentry_unit, filename)
599+ self.log.debug('Attempt {} to get {} file mtime on {} '
600+ 'OK'.format(tries, filename, unit_name))
601+ except IOError as e:
602+ # NOTE(beisner) - race avoidance, file may not exist yet.
603+ # https://bugs.launchpad.net/charm-helpers/+bug/1474030
604+ self.log.debug('Attempt {} to get {} file mtime on {} '
605+ 'failed\n{}'.format(tries, filename,
606+ unit_name, e))
607+ time.sleep(retry_sleep_time)
608+ tries += 1
609+
610+ if not file_mtime:
611+ self.log.warn('Could not determine file mtime, assuming '
612+ 'file does not exist')
613+ return False
614+
615 if file_mtime >= mtime:
616 self.log.debug('File mtime is newer than provided mtime '
617- '(%s >= %s)' % (file_mtime, mtime))
618+ '(%s >= %s) on %s (OK)' % (file_mtime,
619+ mtime, unit_name))
620 return True
621 else:
622- self.log.warn('File mtime %s is older than provided mtime %s'
623- % (file_mtime, mtime))
624+ self.log.warn('File mtime is older than provided mtime'
625+ '(%s < on %s) on %s' % (file_mtime,
626+ mtime, unit_name))
627 return False
628
629 def validate_service_config_changed(self, sentry_unit, mtime, service,
630- filename, pgrep_full=False,
631- sleep_time=20, retry_count=2):
632+ filename, pgrep_full=None,
633+ sleep_time=20, retry_count=30,
634+ retry_sleep_time=10):
635 """Check service and file were updated after mtime
636
637 Args:
638@@ -373,9 +448,10 @@
639 mtime (float): The epoch time to check against
640 service (string): service name to look for in process table
641 filename (string): The file to check mtime of
642- pgrep_full (boolean): Use full command line search mode with pgrep
643- sleep_time (int): Seconds to sleep before looking for process
644+ pgrep_full: [Deprecated] Use full command line search mode with pgrep
645+ sleep_time (int): Initial sleep in seconds to pass to test helpers
646 retry_count (int): If service is not found, how many times to retry
647+ retry_sleep_time (int): Time in seconds to wait between retries
648
649 Typical Usage:
650 u = OpenStackAmuletUtils(ERROR)
651@@ -392,15 +468,27 @@
652 mtime, False if service is older than mtime or if service was
653 not found or if filename was modified before mtime.
654 """
655- self.log.debug('Checking %s restarted since %s' % (service, mtime))
656- time.sleep(sleep_time)
657- service_restart = self.service_restarted_since(sentry_unit, mtime,
658- service,
659- pgrep_full=pgrep_full,
660- sleep_time=0,
661- retry_count=retry_count)
662- config_update = self.config_updated_since(sentry_unit, filename, mtime,
663- sleep_time=0)
664+
665+ # NOTE(beisner) pgrep_full is no longer implemented, as pidof is now
666+ # used instead of pgrep. pgrep_full is still passed through to ensure
667+ # deprecation WARNS. lp1474030
668+
669+ service_restart = self.service_restarted_since(
670+ sentry_unit, mtime,
671+ service,
672+ pgrep_full=pgrep_full,
673+ sleep_time=sleep_time,
674+ retry_count=retry_count,
675+ retry_sleep_time=retry_sleep_time)
676+
677+ config_update = self.config_updated_since(
678+ sentry_unit,
679+ filename,
680+ mtime,
681+ sleep_time=sleep_time,
682+ retry_count=retry_count,
683+ retry_sleep_time=retry_sleep_time)
684+
685 return service_restart and config_update
686
687 def get_sentry_time(self, sentry_unit):
688@@ -418,7 +506,6 @@
689 """Return a list of all Ubuntu releases in order of release."""
690 _d = distro_info.UbuntuDistroInfo()
691 _release_list = _d.all
692- self.log.debug('Ubuntu release list: {}'.format(_release_list))
693 return _release_list
694
695 def file_to_url(self, file_rel_path):
696@@ -450,15 +537,20 @@
697 cmd, code, output))
698 return None
699
700- def get_process_id_list(self, sentry_unit, process_name):
701+ def get_process_id_list(self, sentry_unit, process_name,
702+ expect_success=True):
703 """Get a list of process ID(s) from a single sentry juju unit
704 for a single process name.
705
706- :param sentry_unit: Pointer to amulet sentry instance (juju unit)
707+ :param sentry_unit: Amulet sentry instance (juju unit)
708 :param process_name: Process name
709+ :param expect_success: If False, expect the PID to be missing,
710+ raise if it is present.
711 :returns: List of process IDs
712 """
713- cmd = 'pidof {}'.format(process_name)
714+ cmd = 'pidof -x {}'.format(process_name)
715+ if not expect_success:
716+ cmd += " || exit 0 && exit 1"
717 output, code = sentry_unit.run(cmd)
718 if code != 0:
719 msg = ('{} `{}` returned {} '
720@@ -467,14 +559,23 @@
721 amulet.raise_status(amulet.FAIL, msg=msg)
722 return str(output).split()
723
724- def get_unit_process_ids(self, unit_processes):
725+ def get_unit_process_ids(self, unit_processes, expect_success=True):
726 """Construct a dict containing unit sentries, process names, and
727- process IDs."""
728+ process IDs.
729+
730+ :param unit_processes: A dictionary of Amulet sentry instance
731+ to list of process names.
732+ :param expect_success: if False expect the processes to not be
733+ running, raise if they are.
734+ :returns: Dictionary of Amulet sentry instance to dictionary
735+ of process names to PIDs.
736+ """
737 pid_dict = {}
738- for sentry_unit, process_list in unit_processes.iteritems():
739+ for sentry_unit, process_list in six.iteritems(unit_processes):
740 pid_dict[sentry_unit] = {}
741 for process in process_list:
742- pids = self.get_process_id_list(sentry_unit, process)
743+ pids = self.get_process_id_list(
744+ sentry_unit, process, expect_success=expect_success)
745 pid_dict[sentry_unit].update({process: pids})
746 return pid_dict
747
748@@ -488,7 +589,7 @@
749 return ('Unit count mismatch. expected, actual: {}, '
750 '{} '.format(len(expected), len(actual)))
751
752- for (e_sentry, e_proc_names) in expected.iteritems():
753+ for (e_sentry, e_proc_names) in six.iteritems(expected):
754 e_sentry_name = e_sentry.info['unit_name']
755 if e_sentry in actual.keys():
756 a_proc_names = actual[e_sentry]
757@@ -500,22 +601,40 @@
758 return ('Process name count mismatch. expected, actual: {}, '
759 '{}'.format(len(expected), len(actual)))
760
761- for (e_proc_name, e_pids_length), (a_proc_name, a_pids) in \
762+ for (e_proc_name, e_pids), (a_proc_name, a_pids) in \
763 zip(e_proc_names.items(), a_proc_names.items()):
764 if e_proc_name != a_proc_name:
765 return ('Process name mismatch. expected, actual: {}, '
766 '{}'.format(e_proc_name, a_proc_name))
767
768 a_pids_length = len(a_pids)
769- if e_pids_length != a_pids_length:
770- return ('PID count mismatch. {} ({}) expected, actual: '
771+ fail_msg = ('PID count mismatch. {} ({}) expected, actual: '
772 '{}, {} ({})'.format(e_sentry_name, e_proc_name,
773- e_pids_length, a_pids_length,
774+ e_pids, a_pids_length,
775 a_pids))
776+
777+ # If expected is a list, ensure at least one PID quantity matches
778+ if isinstance(e_pids, list) and \
779+ a_pids_length not in e_pids:
780+ return fail_msg
781+ # If expected is not bool and not list,
782+ # ensure PID quantities match
783+ elif not isinstance(e_pids, bool) and \
784+ not isinstance(e_pids, list) and \
785+ a_pids_length != e_pids:
786+ return fail_msg
787+ # If expected is bool True, ensure 1 or more PIDs exist
788+ elif isinstance(e_pids, bool) and \
789+ e_pids is True and a_pids_length < 1:
790+ return fail_msg
791+ # If expected is bool False, ensure 0 PIDs exist
792+ elif isinstance(e_pids, bool) and \
793+ e_pids is False and a_pids_length != 0:
794+ return fail_msg
795 else:
796 self.log.debug('PID check OK: {} {} {}: '
797 '{}'.format(e_sentry_name, e_proc_name,
798- e_pids_length, a_pids))
799+ e_pids, a_pids))
800 return None
801
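The `e_pids` branching above accepts a bool, a list of acceptable counts, or an exact integer. The decision table reduces to a small predicate; a sketch for clarity (hypothetical helper, not part of the charm-helpers API):

```python
def pids_match(expected, actual_count):
    # Same rules as validate_unit_process_ids above: bool True means "one
    # or more PIDs", bool False means "zero PIDs", a list enumerates the
    # acceptable counts, and a plain int demands an exact match.  bool is
    # tested first because bool is a subclass of int in Python, just as
    # the hunk above excludes bool before the exact-count comparison.
    if isinstance(expected, bool):
        return actual_count >= 1 if expected else actual_count == 0
    if isinstance(expected, list):
        return actual_count in expected
    return actual_count == expected
```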
802 def validate_list_of_identical_dicts(self, list_of_dicts):
803@@ -531,3 +650,180 @@
804 return 'Dicts within list are not identical'
805
806 return None
807+
808+ def validate_sectionless_conf(self, file_contents, expected):
809+ """A crude conf parser. Useful to inspect configuration files which
810+ do not have section headers (as would be necessary in order to use
811+ the configparser), such as openstack-dashboard or rabbitmq confs."""
812+ for line in file_contents.split('\n'):
813+ if '=' in line:
814+ args = line.split('=')
815+ if len(args) <= 1:
816+ continue
817+ key = args[0].strip()
818+ value = args[1].strip()
819+ if key in expected.keys():
820+ if expected[key] != value:
821+ msg = ('Config mismatch. Expected, actual: {}, '
822+ '{}'.format(expected[key], value))
823+ amulet.raise_status(amulet.FAIL, msg=msg)
824+
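One quirk of the parser above: it splits on every `=` but reads only `args[1]`, so a value that itself contains `=` is silently truncated before comparison. A sketch of the same line-by-line parsing using `maxsplit=1` (hypothetical helper, shown only to illustrate the pitfall) keeps such values intact:

```python
def parse_sectionless_conf(file_contents):
    # Split on the first '=' only, so values containing '=' survive.
    conf = {}
    for line in file_contents.split('\n'):
        if '=' in line:
            key, value = line.split('=', 1)
            conf[key.strip()] = value.strip()
    return conf

# Example input in the style of a sectionless openstack-dashboard conf.
sample = "log_level = info\nconnection = mysql://db?charset=utf8\n"
parsed = parse_sectionless_conf(sample)
```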
825+ def get_unit_hostnames(self, units):
826+ """Return a dict of juju unit names to hostnames."""
827+ host_names = {}
828+ for unit in units:
829+ host_names[unit.info['unit_name']] = \
830+ str(unit.file_contents('/etc/hostname').strip())
831+ self.log.debug('Unit host names: {}'.format(host_names))
832+ return host_names
833+
834+ def run_cmd_unit(self, sentry_unit, cmd):
835+ """Run a command on a unit, return the output and exit code."""
836+ output, code = sentry_unit.run(cmd)
837+ if code == 0:
838+ self.log.debug('{} `{}` command returned {} '
839+ '(OK)'.format(sentry_unit.info['unit_name'],
840+ cmd, code))
841+ else:
842+ msg = ('{} `{}` command returned {} '
843+ '{}'.format(sentry_unit.info['unit_name'],
844+ cmd, code, output))
845+ amulet.raise_status(amulet.FAIL, msg=msg)
846+ return str(output), code
847+
848+ def file_exists_on_unit(self, sentry_unit, file_name):
849+ """Check if a file exists on a unit."""
850+ try:
851+ sentry_unit.file_stat(file_name)
852+ return True
853+ except IOError:
854+ return False
855+ except Exception as e:
856+ msg = 'Error checking file {}: {}'.format(file_name, e)
857+ amulet.raise_status(amulet.FAIL, msg=msg)
858+
859+ def file_contents_safe(self, sentry_unit, file_name,
860+ max_wait=60, fatal=False):
861+ """Get file contents from a sentry unit. Wrap amulet file_contents
862+ with retry logic to address races where a file checks as existing,
863+ but no longer exists by the time file_contents is called.
864+ Return None if file not found. Optionally raise if fatal is True."""
865+ unit_name = sentry_unit.info['unit_name']
866+ file_contents = False
867+ tries = 0
868+ while not file_contents and tries < (max_wait / 4):
869+ try:
870+ file_contents = sentry_unit.file_contents(file_name)
871+ except IOError:
872+ self.log.debug('Attempt {} to open file {} from {} '
873+ 'failed'.format(tries, file_name,
874+ unit_name))
875+ time.sleep(4)
876+ tries += 1
877+
878+ if file_contents:
879+ return file_contents
880+ elif not fatal:
881+ return None
882+ elif fatal:
883+ msg = 'Failed to get file contents from unit.'
884+ amulet.raise_status(amulet.FAIL, msg)
885+
886+ def port_knock_tcp(self, host="localhost", port=22, timeout=15):
887+ """Open a TCP socket to check for a listening sevice on a host.
888+
889+ :param host: host name or IP address, default to localhost
890+ :param port: TCP port number, default to 22
891+ :param timeout: Connect timeout, default to 15 seconds
892+ :returns: True if successful, False if connect failed
893+ """
894+
895+ # Resolve host name if possible
896+ try:
897+ connect_host = socket.gethostbyname(host)
898+ host_human = "{} ({})".format(connect_host, host)
899+ except socket.error as e:
900+ self.log.warn('Unable to resolve address: '
901+ '{} ({}) Trying anyway!'.format(host, e))
902+ connect_host = host
903+ host_human = connect_host
904+
905+ # Attempt socket connection
906+ try:
907+ knock = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
908+ knock.settimeout(timeout)
909+ knock.connect((connect_host, port))
910+ knock.close()
911+ self.log.debug('Socket connect OK for host '
912+ '{} on port {}.'.format(host_human, port))
913+ return True
914+ except socket.error as e:
915+ self.log.debug('Socket connect FAIL for'
916+ ' {} port {} ({})'.format(host_human, port, e))
917+ return False
918+
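A plain TCP connect, as `port_knock_tcp` does, is enough to confirm a listener without speaking the service's protocol. A self-contained sketch of the same check against a local ephemeral listener (the listener setup is test scaffolding, not part of the helper API):

```python
import socket

def port_open(host, port, timeout=15):
    # Same approach as port_knock_tcp above: connect, then close
    # immediately; nothing is ever sent or read.
    try:
        knock = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
        knock.settimeout(timeout)
        knock.connect((host, port))
        knock.close()
        return True
    except socket.error:
        return False

# Bind an ephemeral local listener to demonstrate both outcomes.
listener = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
listener.bind(('127.0.0.1', 0))
listener.listen(1)
port = listener.getsockname()[1]
open_result = port_open('127.0.0.1', port, timeout=2)
listener.close()
closed_result = port_open('127.0.0.1', port, timeout=2)
```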
919+ def port_knock_units(self, sentry_units, port=22,
920+ timeout=15, expect_success=True):
921+ """Open a TCP socket to check for a listening sevice on each
922+ listed juju unit.
923+
924+ :param sentry_units: list of sentry unit pointers
925+ :param port: TCP port number, default to 22
926+ :param timeout: Connect timeout, default to 15 seconds
927+ :param expect_success: True by default, set False to invert logic
928+ :returns: None if successful, Failure message otherwise
929+ """
930+ for unit in sentry_units:
931+ host = unit.info['public-address']
932+ connected = self.port_knock_tcp(host, port, timeout)
933+ if not connected and expect_success:
934+ return 'Socket connect failed.'
935+ elif connected and not expect_success:
936+ return 'Socket connected unexpectedly.'
937+
938+ def get_uuid_epoch_stamp(self):
939+ """Returns a stamp string based on uuid4 and epoch time. Useful in
940+ generating test messages which need to be unique-ish."""
941+ return '[{}-{}]'.format(uuid.uuid4(), time.time())
942+
943+# amulet juju action helpers:
944+ def run_action(self, unit_sentry, action,
945+ _check_output=subprocess.check_output,
946+ params=None):
947+ """Run the named action on a given unit sentry.
948+
949+ :param params: dict of parameters to use
950+ :param _check_output: used for dependency injection.
951+
952+ @return action_id.
953+ """
954+ unit_id = unit_sentry.info["unit_name"]
955+ command = ["juju", "action", "do", "--format=json", unit_id, action]
956+ if params is not None:
957+ for key, value in six.iteritems(params):
958+ command.append("{}={}".format(key, value))
959+ self.log.info("Running command: %s\n" % " ".join(command))
960+ output = _check_output(command, universal_newlines=True)
961+ data = json.loads(output)
962+ action_id = data[u'Action queued with id']
963+ return action_id
964+
965+ def wait_on_action(self, action_id, _check_output=subprocess.check_output):
966+ """Wait for a given action, returning if it completed or not.
967+
968+ _check_output parameter is used for dependency injection.
969+ """
970+ command = ["juju", "action", "fetch", "--format=json", "--wait=0",
971+ action_id]
972+ output = _check_output(command, universal_newlines=True)
973+ data = json.loads(output)
974+ return data.get(u"status") == "completed"
975+
976+ def status_get(self, unit):
977+ """Return the current service status of this unit."""
978+ raw_status, return_code = unit.run(
979+ "status-get --format=json --include-data")
980+ if return_code != 0:
981+ return ("unknown", "")
982+ status = json.loads(raw_status)
983+ return (status["status"], status["message"])
984
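`run_action` and `wait_on_action` take `_check_output` precisely so tests can stub the `juju` CLI instead of shelling out. A sketch of that injection with a fake JSON payload (the action id here is made up; the key name matches the juju 1.x output parsed above):

```python
import json

def run_action_cmd(unit_id, action, params=None):
    # Builds the same `juju action do` invocation as run_action above;
    # sorted() just makes the parameter order deterministic for the demo.
    command = ["juju", "action", "do", "--format=json", unit_id, action]
    for key, value in sorted((params or {}).items()):
        command.append("{}={}".format(key, value))
    return command

def fake_check_output(command, universal_newlines=True):
    # Stand-in for subprocess.check_output: pretend juju queued the action.
    return json.dumps({"Action queued with id": "d3adb33f"})

output = fake_check_output(run_action_cmd("ubuntu/0", "pause",
                                          {"force": "true"}),
                           universal_newlines=True)
action_id = json.loads(output)["Action queued with id"]
```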
985=== removed directory 'hooks/charmhelpers/contrib/ansible'
986=== removed file 'hooks/charmhelpers/contrib/ansible/__init__.py'
987--- hooks/charmhelpers/contrib/ansible/__init__.py 2015-07-29 18:07:31 +0000
988+++ hooks/charmhelpers/contrib/ansible/__init__.py 1970-01-01 00:00:00 +0000
989@@ -1,254 +0,0 @@
990-# Copyright 2014-2015 Canonical Limited.
991-#
992-# This file is part of charm-helpers.
993-#
994-# charm-helpers is free software: you can redistribute it and/or modify
995-# it under the terms of the GNU Lesser General Public License version 3 as
996-# published by the Free Software Foundation.
997-#
998-# charm-helpers is distributed in the hope that it will be useful,
999-# but WITHOUT ANY WARRANTY; without even the implied warranty of
1000-# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
1001-# GNU Lesser General Public License for more details.
1002-#
1003-# You should have received a copy of the GNU Lesser General Public License
1004-# along with charm-helpers. If not, see <http://www.gnu.org/licenses/>.
1005-
1006-# Copyright 2013 Canonical Ltd.
1007-#
1008-# Authors:
1009-# Charm Helpers Developers <juju@lists.ubuntu.com>
1010-"""Charm Helpers ansible - declare the state of your machines.
1011-
1012-This helper enables you to declare your machine state, rather than
1013-program it procedurally (and have to test each change to your procedures).
1014-Your install hook can be as simple as::
1015-
1016- {{{
1017- import charmhelpers.contrib.ansible
1018-
1019-
1020- def install():
1021- charmhelpers.contrib.ansible.install_ansible_support()
1022- charmhelpers.contrib.ansible.apply_playbook('playbooks/install.yaml')
1023- }}}
1024-
1025-and won't need to change (nor will its tests) when you change the machine
1026-state.
1027-
1028-All of your juju config and relation-data are available as template
1029-variables within your playbooks and templates. An install playbook looks
1030-something like::
1031-
1032- {{{
1033- ---
1034- - hosts: localhost
1035- user: root
1036-
1037- tasks:
1038- - name: Add private repositories.
1039- template:
1040- src: ../templates/private-repositories.list.jinja2
1041- dest: /etc/apt/sources.list.d/private.list
1042-
1043- - name: Update the cache.
1044- apt: update_cache=yes
1045-
1046- - name: Install dependencies.
1047- apt: pkg={{ item }}
1048- with_items:
1049- - python-mimeparse
1050- - python-webob
1051- - sunburnt
1052-
1053- - name: Setup groups.
1054- group: name={{ item.name }} gid={{ item.gid }}
1055- with_items:
1056- - { name: 'deploy_user', gid: 1800 }
1057- - { name: 'service_user', gid: 1500 }
1058-
1059- ...
1060- }}}
1061-
1062-Read more online about `playbooks`_ and standard ansible `modules`_.
1063-
1064-.. _playbooks: http://www.ansibleworks.com/docs/playbooks.html
1065-.. _modules: http://www.ansibleworks.com/docs/modules.html
1066-
1067-A further feature os the ansible hooks is to provide a light weight "action"
1068-scripting tool. This is a decorator that you apply to a function, and that
1069-function can now receive cli args, and can pass extra args to the playbook.
1070-
1071-e.g.
1072-
1073-
1074-@hooks.action()
1075-def some_action(amount, force="False"):
1076- "Usage: some-action AMOUNT [force=True]" # <-- shown on error
1077- # process the arguments
1078- # do some calls
1079- # return extra-vars to be passed to ansible-playbook
1080- return {
1081- 'amount': int(amount),
1082- 'type': force,
1083- }
1084-
1085-You can now create a symlink to hooks.py that can be invoked like a hook, but
1086-with cli params:
1087-
1088-# link actions/some-action to hooks/hooks.py
1089-
1090-actions/some-action amount=10 force=true
1091-
1092-"""
1093-import os
1094-import stat
1095-import subprocess
1096-import functools
1097-
1098-import charmhelpers.contrib.templating.contexts
1099-import charmhelpers.core.host
1100-import charmhelpers.core.hookenv
1101-import charmhelpers.fetch
1102-
1103-
1104-charm_dir = os.environ.get('CHARM_DIR', '')
1105-ansible_hosts_path = '/etc/ansible/hosts'
1106-# Ansible will automatically include any vars in the following
1107-# file in its inventory when run locally.
1108-ansible_vars_path = '/etc/ansible/host_vars/localhost'
1109-
1110-
1111-def install_ansible_support(from_ppa=True, ppa_location='ppa:rquillo/ansible'):
1112- """Installs the ansible package.
1113-
1114- By default it is installed from the `PPA`_ linked from
1115- the ansible `website`_ or from a ppa specified by a charm config..
1116-
1117- .. _PPA: https://launchpad.net/~rquillo/+archive/ansible
1118- .. _website: http://docs.ansible.com/intro_installation.html#latest-releases-via-apt-ubuntu
1119-
1120- If from_ppa is empty, you must ensure that the package is available
1121- from a configured repository.
1122- """
1123- if from_ppa:
1124- charmhelpers.fetch.add_source(ppa_location)
1125- charmhelpers.fetch.apt_update(fatal=True)
1126- charmhelpers.fetch.apt_install('ansible')
1127- with open(ansible_hosts_path, 'w+') as hosts_file:
1128- hosts_file.write('localhost ansible_connection=local')
1129-
1130-
1131-def apply_playbook(playbook, tags=None, extra_vars=None):
1132- tags = tags or []
1133- tags = ",".join(tags)
1134- charmhelpers.contrib.templating.contexts.juju_state_to_yaml(
1135- ansible_vars_path, namespace_separator='__',
1136- allow_hyphens_in_keys=False, mode=(stat.S_IRUSR | stat.S_IWUSR))
1137-
1138- # we want ansible's log output to be unbuffered
1139- env = os.environ.copy()
1140- env['PYTHONUNBUFFERED'] = "1"
1141- call = [
1142- 'ansible-playbook',
1143- '-c',
1144- 'local',
1145- playbook,
1146- ]
1147- if tags:
1148- call.extend(['--tags', '{}'.format(tags)])
1149- if extra_vars:
1150- extra = ["%s=%s" % (k, v) for k, v in extra_vars.items()]
1151- call.extend(['--extra-vars', " ".join(extra)])
1152- subprocess.check_call(call, env=env)
1153-
1154-
1155-class AnsibleHooks(charmhelpers.core.hookenv.Hooks):
1156- """Run a playbook with the hook-name as the tag.
1157-
1158- This helper builds on the standard hookenv.Hooks helper,
1159- but additionally runs the playbook with the hook-name specified
1160- using --tags (ie. running all the tasks tagged with the hook-name).
1161-
1162- Example::
1163-
1164- hooks = AnsibleHooks(playbook_path='playbooks/my_machine_state.yaml')
1165-
1166- # All the tasks within my_machine_state.yaml tagged with 'install'
1167- # will be run automatically after do_custom_work()
1168- @hooks.hook()
1169- def install():
1170- do_custom_work()
1171-
1172- # For most of your hooks, you won't need to do anything other
1173- # than run the tagged tasks for the hook:
1174- @hooks.hook('config-changed', 'start', 'stop')
1175- def just_use_playbook():
1176- pass
1177-
1178- # As a convenience, you can avoid the above noop function by specifying
1179- # the hooks which are handled by ansible-only and they'll be registered
1180- # for you:
1181- # hooks = AnsibleHooks(
1182- # 'playbooks/my_machine_state.yaml',
1183- # default_hooks=['config-changed', 'start', 'stop'])
1184-
1185- if __name__ == "__main__":
1186- # execute a hook based on the name the program is called by
1187- hooks.execute(sys.argv)
1188-
1189- """
1190-
1191- def __init__(self, playbook_path, default_hooks=None):
1192- """Register any hooks handled by ansible."""
1193- super(AnsibleHooks, self).__init__()
1194-
1195- self._actions = {}
1196- self.playbook_path = playbook_path
1197-
1198- default_hooks = default_hooks or []
1199-
1200- def noop(*args, **kwargs):
1201- pass
1202-
1203- for hook in default_hooks:
1204- self.register(hook, noop)
1205-
1206- def register_action(self, name, function):
1207- """Register a hook"""
1208- self._actions[name] = function
1209-
1210- def execute(self, args):
1211- """Execute the hook followed by the playbook using the hook as tag."""
1212- hook_name = os.path.basename(args[0])
1213- extra_vars = None
1214- if hook_name in self._actions:
1215- extra_vars = self._actions[hook_name](args[1:])
1216- else:
1217- super(AnsibleHooks, self).execute(args)
1218-
1219- charmhelpers.contrib.ansible.apply_playbook(
1220- self.playbook_path, tags=[hook_name], extra_vars=extra_vars)
1221-
1222- def action(self, *action_names):
1223- """Decorator, registering them as actions"""
1224- def action_wrapper(decorated):
1225-
1226- @functools.wraps(decorated)
1227- def wrapper(argv):
1228- kwargs = dict(arg.split('=') for arg in argv)
1229- try:
1230- return decorated(**kwargs)
1231- except TypeError as e:
1232- if decorated.__doc__:
1233- e.args += (decorated.__doc__,)
1234- raise
1235-
1236- self.register_action(decorated.__name__, wrapper)
1237- if '_' in decorated.__name__:
1238- self.register_action(
1239- decorated.__name__.replace('_', '-'), wrapper)
1240-
1241- return wrapper
1242-
1243- return action_wrapper
1244
1245=== removed directory 'hooks/charmhelpers/contrib/benchmark'
1246=== removed file 'hooks/charmhelpers/contrib/benchmark/__init__.py'
1247--- hooks/charmhelpers/contrib/benchmark/__init__.py 2015-07-29 18:07:31 +0000
1248+++ hooks/charmhelpers/contrib/benchmark/__init__.py 1970-01-01 00:00:00 +0000
1249@@ -1,126 +0,0 @@
1250-# Copyright 2014-2015 Canonical Limited.
1251-#
1252-# This file is part of charm-helpers.
1253-#
1254-# charm-helpers is free software: you can redistribute it and/or modify
1255-# it under the terms of the GNU Lesser General Public License version 3 as
1256-# published by the Free Software Foundation.
1257-#
1258-# charm-helpers is distributed in the hope that it will be useful,
1259-# but WITHOUT ANY WARRANTY; without even the implied warranty of
1260-# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
1261-# GNU Lesser General Public License for more details.
1262-#
1263-# You should have received a copy of the GNU Lesser General Public License
1264-# along with charm-helpers. If not, see <http://www.gnu.org/licenses/>.
1265-
1266-import subprocess
1267-import time
1268-import os
1269-from distutils.spawn import find_executable
1270-
1271-from charmhelpers.core.hookenv import (
1272- in_relation_hook,
1273- relation_ids,
1274- relation_set,
1275- relation_get,
1276-)
1277-
1278-
1279-def action_set(key, val):
1280- if find_executable('action-set'):
1281- action_cmd = ['action-set']
1282-
1283- if isinstance(val, dict):
1284- for k, v in iter(val.items()):
1285- action_set('%s.%s' % (key, k), v)
1286- return True
1287-
1288- action_cmd.append('%s=%s' % (key, val))
1289- subprocess.check_call(action_cmd)
1290- return True
1291- return False
1292-
1293-
1294-class Benchmark():
1295- """
1296- Helper class for the `benchmark` interface.
1297-
1298- :param list actions: Define the actions that are also benchmarks
1299-
1300- From inside the benchmark-relation-changed hook, you would
1301- Benchmark(['memory', 'cpu', 'disk', 'smoke', 'custom'])
1302-
1303- Examples:
1304-
1305- siege = Benchmark(['siege'])
1306- siege.start()
1307- [... run siege ...]
1308- # The higher the score, the better the benchmark
1309- siege.set_composite_score(16.70, 'trans/sec', 'desc')
1310- siege.finish()
1311-
1312-
1313- """
1314-
1315- BENCHMARK_CONF = '/etc/benchmark.conf' # Replaced in testing
1316-
1317- required_keys = [
1318- 'hostname',
1319- 'port',
1320- 'graphite_port',
1321- 'graphite_endpoint',
1322- 'api_port'
1323- ]
1324-
1325- def __init__(self, benchmarks=None):
1326- if in_relation_hook():
1327- if benchmarks is not None:
1328- for rid in sorted(relation_ids('benchmark')):
1329- relation_set(relation_id=rid, relation_settings={
1330- 'benchmarks': ",".join(benchmarks)
1331- })
1332-
1333- # Check the relation data
1334- config = {}
1335- for key in self.required_keys:
1336- val = relation_get(key)
1337- if val is not None:
1338- config[key] = val
1339- else:
1340- # We don't have all of the required keys
1341- config = {}
1342- break
1343-
1344- if len(config):
1345- with open(self.BENCHMARK_CONF, 'w') as f:
1346- for key, val in iter(config.items()):
1347- f.write("%s=%s\n" % (key, val))
1348-
1349- @staticmethod
1350- def start():
1351- action_set('meta.start', time.strftime('%Y-%m-%dT%H:%M:%SZ'))
1352-
1353- """
1354- If the collectd charm is also installed, tell it to send a snapshot
1355- of the current profile data.
1356- """
1357- COLLECT_PROFILE_DATA = '/usr/local/bin/collect-profile-data'
1358- if os.path.exists(COLLECT_PROFILE_DATA):
1359- subprocess.check_output([COLLECT_PROFILE_DATA])
1360-
1361- @staticmethod
1362- def finish():
1363- action_set('meta.stop', time.strftime('%Y-%m-%dT%H:%M:%SZ'))
1364-
1365- @staticmethod
1366- def set_composite_score(value, units, direction='asc'):
1367- """
1368- Set the composite score for a benchmark run. This is a single number
1369- representative of the benchmark results. This could be the most
1370- important metric, or an amalgamation of metric scores.
1371- """
1372- return action_set(
1373- "meta.composite",
1374- {'value': value, 'units': units, 'direction': direction}
1375- )
1376
1377=== removed directory 'hooks/charmhelpers/contrib/charmhelpers'
1378=== removed file 'hooks/charmhelpers/contrib/charmhelpers/__init__.py'
1379--- hooks/charmhelpers/contrib/charmhelpers/__init__.py 2015-07-29 18:07:31 +0000
1380+++ hooks/charmhelpers/contrib/charmhelpers/__init__.py 1970-01-01 00:00:00 +0000
1381@@ -1,208 +0,0 @@
1382-# Copyright 2014-2015 Canonical Limited.
1383-#
1384-# This file is part of charm-helpers.
1385-#
1386-# charm-helpers is free software: you can redistribute it and/or modify
1387-# it under the terms of the GNU Lesser General Public License version 3 as
1388-# published by the Free Software Foundation.
1389-#
1390-# charm-helpers is distributed in the hope that it will be useful,
1391-# but WITHOUT ANY WARRANTY; without even the implied warranty of
1392-# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
1393-# GNU Lesser General Public License for more details.
1394-#
1395-# You should have received a copy of the GNU Lesser General Public License
1396-# along with charm-helpers. If not, see <http://www.gnu.org/licenses/>.
1397-
1398-# Copyright 2012 Canonical Ltd. This software is licensed under the
1399-# GNU Affero General Public License version 3 (see the file LICENSE).
1400-
1401-import warnings
1402-warnings.warn("contrib.charmhelpers is deprecated", DeprecationWarning) # noqa
1403-
1404-import operator
1405-import tempfile
1406-import time
1407-import yaml
1408-import subprocess
1409-
1410-import six
1411-if six.PY3:
1412- from urllib.request import urlopen
1413- from urllib.error import (HTTPError, URLError)
1414-else:
1415- from urllib2 import (urlopen, HTTPError, URLError)
1416-
1417-"""Helper functions for writing Juju charms in Python."""
1418-
1419-__metaclass__ = type
1420-__all__ = [
1421- # 'get_config', # core.hookenv.config()
1422- # 'log', # core.hookenv.log()
1423- # 'log_entry', # core.hookenv.log()
1424- # 'log_exit', # core.hookenv.log()
1425- # 'relation_get', # core.hookenv.relation_get()
1426- # 'relation_set', # core.hookenv.relation_set()
1427- # 'relation_ids', # core.hookenv.relation_ids()
1428- # 'relation_list', # core.hookenv.relation_units()
1429- # 'config_get', # core.hookenv.config()
1430- # 'unit_get', # core.hookenv.unit_get()
1431- # 'open_port', # core.hookenv.open_port()
1432- # 'close_port', # core.hookenv.close_port()
1433- # 'service_control', # core.host.service()
1434- 'unit_info', # client-side, NOT IMPLEMENTED
1435- 'wait_for_machine', # client-side, NOT IMPLEMENTED
1436- 'wait_for_page_contents', # client-side, NOT IMPLEMENTED
1437- 'wait_for_relation', # client-side, NOT IMPLEMENTED
1438- 'wait_for_unit', # client-side, NOT IMPLEMENTED
1439-]
1440-
1441-
1442-SLEEP_AMOUNT = 0.1
1443-
1444-
1445-# We create a juju_status Command here because it makes testing much,
1446-# much easier.
1447-def juju_status():
1448- subprocess.check_call(['juju', 'status'])
1449-
1450-# re-implemented as charmhelpers.fetch.configure_sources()
1451-# def configure_source(update=False):
1452-# source = config_get('source')
1453-# if ((source.startswith('ppa:') or
1454-# source.startswith('cloud:') or
1455-# source.startswith('http:'))):
1456-# run('add-apt-repository', source)
1457-# if source.startswith("http:"):
1458-# run('apt-key', 'import', config_get('key'))
1459-# if update:
1460-# run('apt-get', 'update')
1461-
1462-
1463-# DEPRECATED: client-side only
1464-def make_charm_config_file(charm_config):
1465- charm_config_file = tempfile.NamedTemporaryFile(mode='w+')
1466- charm_config_file.write(yaml.dump(charm_config))
1467- charm_config_file.flush()
1468- # The NamedTemporaryFile instance is returned instead of just the name
1469- # because we want to take advantage of garbage collection-triggered
1470- # deletion of the temp file when it goes out of scope in the caller.
1471- return charm_config_file
1472-
1473-
1474-# DEPRECATED: client-side only
1475-def unit_info(service_name, item_name, data=None, unit=None):
1476- if data is None:
1477- data = yaml.safe_load(juju_status())
1478- service = data['services'].get(service_name)
1479- if service is None:
1480- # XXX 2012-02-08 gmb:
1481- # This allows us to cope with the race condition that we
1482- # have between deploying a service and having it come up in
1483- # `juju status`. We could probably do with cleaning it up so
1484- # that it fails a bit more noisily after a while.
1485- return ''
1486- units = service['units']
1487- if unit is not None:
1488- item = units[unit][item_name]
1489- else:
1490- # It might seem odd to sort the units here, but we do it to
1491- # ensure that when no unit is specified, the first unit for the
1492- # service (or at least the one with the lowest number) is the
1493- # one whose data gets returned.
1494- sorted_unit_names = sorted(units.keys())
1495- item = units[sorted_unit_names[0]][item_name]
1496- return item
1497-
1498-
1499-# DEPRECATED: client-side only
1500-def get_machine_data():
1501- return yaml.safe_load(juju_status())['machines']
1502-
1503-
1504-# DEPRECATED: client-side only
1505-def wait_for_machine(num_machines=1, timeout=300):
1506- """Wait `timeout` seconds for `num_machines` machines to come up.
1507-
1508- This wait_for... function can be called by other wait_for functions
1509- whose timeouts might be too short in situations where only a bare
1510- Juju setup has been bootstrapped.
1511-
1512- :return: A tuple of (num_machines, time_taken). This is used for
1513- testing.
1514- """
1515- # You may think this is a hack, and you'd be right. The easiest way
1516- # to tell what environment we're working in (LXC vs EC2) is to check
1517- # the dns-name of the first machine. If it's localhost we're in LXC
1518- # and we can just return here.
1519- if get_machine_data()[0]['dns-name'] == 'localhost':
1520- return 1, 0
1521- start_time = time.time()
1522- while True:
1523- # Drop the first machine, since it's the Zookeeper and that's
1524- # not a machine that we need to wait for. This will only work
1525- # for EC2 environments, which is why we return early above if
1526- # we're in LXC.
1527- machine_data = get_machine_data()
1528- non_zookeeper_machines = [
1529- machine_data[key] for key in list(machine_data.keys())[1:]]
1530- if len(non_zookeeper_machines) >= num_machines:
1531- all_machines_running = True
1532- for machine in non_zookeeper_machines:
1533- if machine.get('instance-state') != 'running':
1534- all_machines_running = False
1535- break
1536- if all_machines_running:
1537- break
1538- if time.time() - start_time >= timeout:
1539- raise RuntimeError('timeout waiting for service to start')
1540- time.sleep(SLEEP_AMOUNT)
1541- return num_machines, time.time() - start_time
1542-
1543-
1544-# DEPRECATED: client-side only
1545-def wait_for_unit(service_name, timeout=480):
1546- """Wait `timeout` seconds for a given service name to come up."""
1547- wait_for_machine(num_machines=1)
1548- start_time = time.time()
1549- while True:
1550- state = unit_info(service_name, 'agent-state')
1551- if 'error' in state or state == 'started':
1552- break
1553- if time.time() - start_time >= timeout:
1554- raise RuntimeError('timeout waiting for service to start')
1555- time.sleep(SLEEP_AMOUNT)
1556- if state != 'started':
1557- raise RuntimeError('unit did not start, agent-state: ' + state)
1558-
1559-
1560-# DEPRECATED: client-side only
1561-def wait_for_relation(service_name, relation_name, timeout=120):
1562- """Wait `timeout` seconds for a given relation to come up."""
1563- start_time = time.time()
1564- while True:
1565- relation = unit_info(service_name, 'relations').get(relation_name)
1566- if relation is not None and relation['state'] == 'up':
1567- break
1568- if time.time() - start_time >= timeout:
1569- raise RuntimeError('timeout waiting for relation to be up')
1570- time.sleep(SLEEP_AMOUNT)
1571-
1572-
1573-# DEPRECATED: client-side only
1574-def wait_for_page_contents(url, contents, timeout=120, validate=None):
1575- if validate is None:
1576- validate = operator.contains
1577- start_time = time.time()
1578- while True:
1579- try:
1580- stream = urlopen(url)
1581- except (HTTPError, URLError):
1582- pass
1583- else:
1584- page = stream.read()
1585- if validate(page, contents):
1586- return page
1587- if time.time() - start_time >= timeout:
1588- raise RuntimeError('timeout waiting for contents of ' + url)
1589- time.sleep(SLEEP_AMOUNT)
1590
1591=== removed directory 'hooks/charmhelpers/contrib/charmsupport'
1592=== removed file 'hooks/charmhelpers/contrib/charmsupport/__init__.py'
1593--- hooks/charmhelpers/contrib/charmsupport/__init__.py 2015-07-29 18:07:31 +0000
1594+++ hooks/charmhelpers/contrib/charmsupport/__init__.py 1970-01-01 00:00:00 +0000
1595@@ -1,15 +0,0 @@
1596-# Copyright 2014-2015 Canonical Limited.
1597-#
1598-# This file is part of charm-helpers.
1599-#
1600-# charm-helpers is free software: you can redistribute it and/or modify
1601-# it under the terms of the GNU Lesser General Public License version 3 as
1602-# published by the Free Software Foundation.
1603-#
1604-# charm-helpers is distributed in the hope that it will be useful,
1605-# but WITHOUT ANY WARRANTY; without even the implied warranty of
1606-# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
1607-# GNU Lesser General Public License for more details.
1608-#
1609-# You should have received a copy of the GNU Lesser General Public License
1610-# along with charm-helpers. If not, see <http://www.gnu.org/licenses/>.
1611
1612=== removed file 'hooks/charmhelpers/contrib/charmsupport/nrpe.py'
1613--- hooks/charmhelpers/contrib/charmsupport/nrpe.py 2015-07-29 18:07:31 +0000
1614+++ hooks/charmhelpers/contrib/charmsupport/nrpe.py 1970-01-01 00:00:00 +0000
1615@@ -1,360 +0,0 @@
1616-# Copyright 2014-2015 Canonical Limited.
1617-#
1618-# This file is part of charm-helpers.
1619-#
1620-# charm-helpers is free software: you can redistribute it and/or modify
1621-# it under the terms of the GNU Lesser General Public License version 3 as
1622-# published by the Free Software Foundation.
1623-#
1624-# charm-helpers is distributed in the hope that it will be useful,
1625-# but WITHOUT ANY WARRANTY; without even the implied warranty of
1626-# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
1627-# GNU Lesser General Public License for more details.
1628-#
1629-# You should have received a copy of the GNU Lesser General Public License
1630-# along with charm-helpers. If not, see <http://www.gnu.org/licenses/>.
1631-
1632-"""Compatibility with the nrpe-external-master charm"""
1633-# Copyright 2012 Canonical Ltd.
1634-#
1635-# Authors:
1636-# Matthew Wedgwood <matthew.wedgwood@canonical.com>
1637-
1638-import subprocess
1639-import pwd
1640-import grp
1641-import os
1642-import glob
1643-import shutil
1644-import re
1645-import shlex
1646-import yaml
1647-
1648-from charmhelpers.core.hookenv import (
1649- config,
1650- local_unit,
1651- log,
1652- relation_ids,
1653- relation_set,
1654- relations_of_type,
1655-)
1656-
1657-from charmhelpers.core.host import service
1658-
1659-# This module adds compatibility with the nrpe-external-master and plain nrpe
1660-# subordinate charms. To use it in your charm:
1661-#
1662-# 1. Update metadata.yaml
1663-#
1664-# provides:
1665-# (...)
1666-# nrpe-external-master:
1667-# interface: nrpe-external-master
1668-# scope: container
1669-#
1670-# and/or
1671-#
1672-# provides:
1673-# (...)
1674-# local-monitors:
1675-# interface: local-monitors
1676-# scope: container
1677-
1678-#
1679-# 2. Add the following to config.yaml
1680-#
1681-# nagios_context:
1682-# default: "juju"
1683-# type: string
1684-# description: |
1685-# Used by the nrpe subordinate charms.
1686-# A string that will be prepended to instance name to set the host name
1687-# in nagios. So for instance the hostname would be something like:
1688-# juju-myservice-0
1689-# If you're running multiple environments with the same services in them
1690-# this allows you to differentiate between them.
1691-# nagios_servicegroups:
1692-# default: ""
1693-# type: string
1694-# description: |
1695-# A comma-separated list of nagios servicegroups.
1696-# If left empty, the nagios_context will be used as the servicegroup
1697-#
1698-# 3. Add custom checks (Nagios plugins) to files/nrpe-external-master
1699-#
1700-# 4. Update your hooks.py with something like this:
1701-#
1702-# from charmsupport.nrpe import NRPE
1703-# (...)
1704-# def update_nrpe_config():
1705-# nrpe_compat = NRPE()
1706-# nrpe_compat.add_check(
1707-# shortname = "myservice",
1708-# description = "Check MyService",
1709-# check_cmd = "check_http -w 2 -c 10 http://localhost"
1710-# )
1711-# nrpe_compat.add_check(
1712-# "myservice_other",
1713-# "Check for widget failures",
1714-# check_cmd = "/srv/myapp/scripts/widget_check"
1715-# )
1716-# nrpe_compat.write()
1717-#
1718-# def config_changed():
1719-# (...)
1720-# update_nrpe_config()
1721-#
1722-# def nrpe_external_master_relation_changed():
1723-# update_nrpe_config()
1724-#
1725-# def local_monitors_relation_changed():
1726-# update_nrpe_config()
1727-#
1728-# 5. ln -s hooks.py nrpe-external-master-relation-changed
1729-# ln -s hooks.py local-monitors-relation-changed
1730-
1731-
1732-class CheckException(Exception):
1733- pass
1734-
1735-
1736-class Check(object):
1737- shortname_re = '[A-Za-z0-9-_]+$'
1738- service_template = ("""
1739-#---------------------------------------------------
1740-# This file is Juju managed
1741-#---------------------------------------------------
1742-define service {{
1743- use active-service
1744- host_name {nagios_hostname}
1745- service_description {nagios_hostname}[{shortname}] """
1746- """{description}
1747- check_command check_nrpe!{command}
1748- servicegroups {nagios_servicegroup}
1749-}}
1750-""")
1751-
1752- def __init__(self, shortname, description, check_cmd):
1753- super(Check, self).__init__()
1754- # XXX: could be better to calculate this from the service name
1755- if not re.match(self.shortname_re, shortname):
1756- raise CheckException("shortname must match {}".format(
1757- Check.shortname_re))
1758- self.shortname = shortname
1759- self.command = "check_{}".format(shortname)
1760- # Note: a set of invalid characters is defined by the
1761- # Nagios server config
1762- # The default is: illegal_object_name_chars=`~!$%^&*"|'<>?,()=
1763- self.description = description
1764- self.check_cmd = self._locate_cmd(check_cmd)
1765-
1766- def _locate_cmd(self, check_cmd):
1767- search_path = (
1768- '/usr/lib/nagios/plugins',
1769- '/usr/local/lib/nagios/plugins',
1770- )
1771- parts = shlex.split(check_cmd)
1772- for path in search_path:
1773- if os.path.exists(os.path.join(path, parts[0])):
1774- command = os.path.join(path, parts[0])
1775- if len(parts) > 1:
1776- command += " " + " ".join(parts[1:])
1777- return command
1778- log('Check command not found: {}'.format(parts[0]))
1779- return ''
1780-
1781- def write(self, nagios_context, hostname, nagios_servicegroups):
1782- nrpe_check_file = '/etc/nagios/nrpe.d/{}.cfg'.format(
1783- self.command)
1784- with open(nrpe_check_file, 'w') as nrpe_check_config:
1785- nrpe_check_config.write("# check {}\n".format(self.shortname))
1786- nrpe_check_config.write("command[{}]={}\n".format(
1787- self.command, self.check_cmd))
1788-
1789- if not os.path.exists(NRPE.nagios_exportdir):
1790- log('Not writing service config as {} is not accessible'.format(
1791- NRPE.nagios_exportdir))
1792- else:
1793- self.write_service_config(nagios_context, hostname,
1794- nagios_servicegroups)
1795-
1796- def write_service_config(self, nagios_context, hostname,
1797- nagios_servicegroups):
1798- for f in os.listdir(NRPE.nagios_exportdir):
1799- if re.search('.*{}.cfg'.format(self.command), f):
1800- os.remove(os.path.join(NRPE.nagios_exportdir, f))
1801-
1802- templ_vars = {
1803- 'nagios_hostname': hostname,
1804- 'nagios_servicegroup': nagios_servicegroups,
1805- 'description': self.description,
1806- 'shortname': self.shortname,
1807- 'command': self.command,
1808- }
1809- nrpe_service_text = Check.service_template.format(**templ_vars)
1810- nrpe_service_file = '{}/service__{}_{}.cfg'.format(
1811- NRPE.nagios_exportdir, hostname, self.command)
1812- with open(nrpe_service_file, 'w') as nrpe_service_config:
1813- nrpe_service_config.write(str(nrpe_service_text))
1814-
1815- def run(self):
1816- subprocess.call(self.check_cmd)
1817-
1818-
1819-class NRPE(object):
1820- nagios_logdir = '/var/log/nagios'
1821- nagios_exportdir = '/var/lib/nagios/export'
1822- nrpe_confdir = '/etc/nagios/nrpe.d'
1823-
1824- def __init__(self, hostname=None):
1825- super(NRPE, self).__init__()
1826- self.config = config()
1827- self.nagios_context = self.config['nagios_context']
1828- if 'nagios_servicegroups' in self.config and self.config['nagios_servicegroups']:
1829- self.nagios_servicegroups = self.config['nagios_servicegroups']
1830- else:
1831- self.nagios_servicegroups = self.nagios_context
1832- self.unit_name = local_unit().replace('/', '-')
1833- if hostname:
1834- self.hostname = hostname
1835- else:
1836- self.hostname = "{}-{}".format(self.nagios_context, self.unit_name)
1837- self.checks = []
1838-
1839- def add_check(self, *args, **kwargs):
1840- self.checks.append(Check(*args, **kwargs))
1841-
1842- def write(self):
1843- try:
1844- nagios_uid = pwd.getpwnam('nagios').pw_uid
1845- nagios_gid = grp.getgrnam('nagios').gr_gid
1846- except:
1847- log("Nagios user not set up, nrpe checks not updated")
1848- return
1849-
1850- if not os.path.exists(NRPE.nagios_logdir):
1851- os.mkdir(NRPE.nagios_logdir)
1852- os.chown(NRPE.nagios_logdir, nagios_uid, nagios_gid)
1853-
1854- nrpe_monitors = {}
1855- monitors = {"monitors": {"remote": {"nrpe": nrpe_monitors}}}
1856- for nrpecheck in self.checks:
1857- nrpecheck.write(self.nagios_context, self.hostname,
1858- self.nagios_servicegroups)
1859- nrpe_monitors[nrpecheck.shortname] = {
1860- "command": nrpecheck.command,
1861- }
1862-
1863- service('restart', 'nagios-nrpe-server')
1864-
1865- monitor_ids = relation_ids("local-monitors") + \
1866- relation_ids("nrpe-external-master")
1867- for rid in monitor_ids:
1868- relation_set(relation_id=rid, monitors=yaml.dump(monitors))
1869-
1870-
1871-def get_nagios_hostcontext(relation_name='nrpe-external-master'):
1872- """
1873- Query relation with nrpe subordinate, return the nagios_host_context
1874-
1875- :param str relation_name: Name of relation nrpe sub joined to
1876- """
1877- for rel in relations_of_type(relation_name):
1878- if 'nagios_host_context' in rel:
1879- return rel['nagios_host_context']
1880-
1881-
1882-def get_nagios_hostname(relation_name='nrpe-external-master'):
1883- """
1884- Query relation with nrpe subordinate, return the nagios_hostname
1885-
1886- :param str relation_name: Name of relation nrpe sub joined to
1887- """
1888- for rel in relations_of_type(relation_name):
1889- if 'nagios_hostname' in rel:
1890- return rel['nagios_hostname']
1891-
1892-
1893-def get_nagios_unit_name(relation_name='nrpe-external-master'):
1894- """
1895- Return the nagios unit name prepended with host_context if needed
1896-
1897- :param str relation_name: Name of relation nrpe sub joined to
1898- """
1899- host_context = get_nagios_hostcontext(relation_name)
1900- if host_context:
1901- unit = "%s:%s" % (host_context, local_unit())
1902- else:
1903- unit = local_unit()
1904- return unit
1905-
1906-
1907-def add_init_service_checks(nrpe, services, unit_name):
1908- """
1909- Add checks for each service in list
1910-
1911- :param NRPE nrpe: NRPE object to add check to
1912- :param list services: List of services to check
1913- :param str unit_name: Unit name to use in check description
1914- """
1915- for svc in services:
1916- upstart_init = '/etc/init/%s.conf' % svc
1917- sysv_init = '/etc/init.d/%s' % svc
1918- if os.path.exists(upstart_init):
1919- nrpe.add_check(
1920- shortname=svc,
1921- description='process check {%s}' % unit_name,
1922- check_cmd='check_upstart_job %s' % svc
1923- )
1924- elif os.path.exists(sysv_init):
1925- cronpath = '/etc/cron.d/nagios-service-check-%s' % svc
1926- cron_file = ('*/5 * * * * root '
1927- '/usr/local/lib/nagios/plugins/check_exit_status.pl '
1928- '-s /etc/init.d/%s status > '
1929- '/var/lib/nagios/service-check-%s.txt\n' % (svc,
1930- svc)
1931- )
1932- f = open(cronpath, 'w')
1933- f.write(cron_file)
1934- f.close()
1935- nrpe.add_check(
1936- shortname=svc,
1937- description='process check {%s}' % unit_name,
1938- check_cmd='check_status_file.py -f '
1939- '/var/lib/nagios/service-check-%s.txt' % svc,
1940- )
1941-
1942-
1943-def copy_nrpe_checks():
1944- """
1945- Copy the nrpe checks into place
1946-
1947- """
1948- NAGIOS_PLUGINS = '/usr/local/lib/nagios/plugins'
1949- nrpe_files_dir = os.path.join(os.getenv('CHARM_DIR'), 'hooks',
1950- 'charmhelpers', 'contrib', 'openstack',
1951- 'files')
1952-
1953- if not os.path.exists(NAGIOS_PLUGINS):
1954- os.makedirs(NAGIOS_PLUGINS)
1955- for fname in glob.glob(os.path.join(nrpe_files_dir, "check_*")):
1956- if os.path.isfile(fname):
1957- shutil.copy2(fname,
1958- os.path.join(NAGIOS_PLUGINS, os.path.basename(fname)))
1959-
1960-
1961-def add_haproxy_checks(nrpe, unit_name):
1962- """
1963- Add checks for each service in list
1964-
1965- :param NRPE nrpe: NRPE object to add check to
1966- :param str unit_name: Unit name to use in check description
1967- """
1968- nrpe.add_check(
1969- shortname='haproxy_servers',
1970- description='Check HAProxy {%s}' % unit_name,
1971- check_cmd='check_haproxy.sh')
1972- nrpe.add_check(
1973- shortname='haproxy_queue',
1974- description='Check HAProxy queue depth {%s}' % unit_name,
1975- check_cmd='check_haproxy_queue_depth.sh')
1976
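The removed nrpe.py validated check shortnames with a bare regex before writing any NRPE config. A self-contained sketch of that validation, outside the charm context (the `valid_shortname` wrapper is illustrative; only the regex comes from the removed module):

```python
import re

# Regex used by the removed Check class; re.match() anchors it at the start.
SHORTNAME_RE = '[A-Za-z0-9-_]+$'


def valid_shortname(name):
    """Return True if `name` is an acceptable NRPE check shortname."""
    return re.match(SHORTNAME_RE, name) is not None


print(valid_shortname('myservice'))   # letters, digits, '-' and '_' allowed
print(valid_shortname('bad name!'))   # spaces and punctuation rejected
```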
1977=== removed file 'hooks/charmhelpers/contrib/charmsupport/volumes.py'
1978--- hooks/charmhelpers/contrib/charmsupport/volumes.py 2015-07-29 18:07:31 +0000
1979+++ hooks/charmhelpers/contrib/charmsupport/volumes.py 1970-01-01 00:00:00 +0000
1980@@ -1,175 +0,0 @@
1981-# Copyright 2014-2015 Canonical Limited.
1982-#
1983-# This file is part of charm-helpers.
1984-#
1985-# charm-helpers is free software: you can redistribute it and/or modify
1986-# it under the terms of the GNU Lesser General Public License version 3 as
1987-# published by the Free Software Foundation.
1988-#
1989-# charm-helpers is distributed in the hope that it will be useful,
1990-# but WITHOUT ANY WARRANTY; without even the implied warranty of
1991-# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
1992-# GNU Lesser General Public License for more details.
1993-#
1994-# You should have received a copy of the GNU Lesser General Public License
1995-# along with charm-helpers. If not, see <http://www.gnu.org/licenses/>.
1996-
1997-'''
1998-Functions for managing volumes in juju units. One volume is supported per unit.
1999-Subordinates may have their own storage, provided it is on its own partition.
2000-
2001-Configuration stanzas::
2002-
2003- volume-ephemeral:
2004- type: boolean
2005- default: true
2006- description: >
2007- If false, a volume is mounted as specified in "volume-map"
2008- If true, ephemeral storage will be used, meaning that log data
2009- will only exist as long as the machine. YOU HAVE BEEN WARNED.
2010- volume-map:
2011- type: string
2012- default: {}
2013- description: >
2014- YAML map of units to device names, e.g:
2015- "{ rsyslog/0: /dev/vdb, rsyslog/1: /dev/vdb }"
2016- Service units will raise a configure-error if volume-ephemeral
2017- is 'true' and no volume-map value is set. Use 'juju set' to set a
2018- value and 'juju resolved' to complete configuration.
2019-
2020-Usage::
2021-
2022- from charmsupport.volumes import configure_volume, VolumeConfigurationError
2023- from charmsupport.hookenv import log, ERROR
2024- def pre_mount_hook():
2025- stop_service('myservice')
2026- def post_mount_hook():
2027- start_service('myservice')
2028-
2029- if __name__ == '__main__':
2030- try:
2031- configure_volume(before_change=pre_mount_hook,
2032- after_change=post_mount_hook)
2033- except VolumeConfigurationError:
2034- log('Storage could not be configured', ERROR)
2035-
2036-'''
2037-
2038-# XXX: Known limitations
2039-# - fstab is neither consulted nor updated
2040-
2041-import os
2042-from charmhelpers.core import hookenv
2043-from charmhelpers.core import host
2044-import yaml
2045-
2046-
2047-MOUNT_BASE = '/srv/juju/volumes'
2048-
2049-
2050-class VolumeConfigurationError(Exception):
2051- '''Volume configuration data is missing or invalid'''
2052- pass
2053-
2054-
2055-def get_config():
2056- '''Gather and sanity-check volume configuration data'''
2057- volume_config = {}
2058- config = hookenv.config()
2059-
2060- errors = False
2061-
2062- if config.get('volume-ephemeral') in (True, 'True', 'true', 'Yes', 'yes'):
2063- volume_config['ephemeral'] = True
2064- else:
2065- volume_config['ephemeral'] = False
2066-
2067- try:
2068- volume_map = yaml.safe_load(config.get('volume-map', '{}'))
2069- except yaml.YAMLError as e:
2070- hookenv.log("Error parsing YAML volume-map: {}".format(e),
2071- hookenv.ERROR)
2072- errors = True
2073- if volume_map is None:
2074- # probably an empty string
2075- volume_map = {}
2076- elif not isinstance(volume_map, dict):
2077- hookenv.log("Volume-map should be a dictionary, not {}".format(
2078- type(volume_map)))
2079- errors = True
2080-
2081- volume_config['device'] = volume_map.get(os.environ['JUJU_UNIT_NAME'])
2082- if volume_config['device'] and volume_config['ephemeral']:
2083- # asked for ephemeral storage but also defined a volume ID
2084- hookenv.log('A volume is defined for this unit, but ephemeral '
2085- 'storage was requested', hookenv.ERROR)
2086- errors = True
2087- elif not volume_config['device'] and not volume_config['ephemeral']:
2088- # asked for permanent storage but did not define volume ID
2089- hookenv.log('Ephemeral storage was requested, but there is no volume '
2090- hookenv.log('Persistent storage was requested, but no volume is '
2091- errors = True
2092-
2093- unit_mount_name = hookenv.local_unit().replace('/', '-')
2094- volume_config['mountpoint'] = os.path.join(MOUNT_BASE, unit_mount_name)
2095-
2096- if errors:
2097- return None
2098- return volume_config
2099-
2100-
2101-def mount_volume(config):
2102- if os.path.exists(config['mountpoint']):
2103- if not os.path.isdir(config['mountpoint']):
2104- hookenv.log('Not a directory: {}'.format(config['mountpoint']))
2105- raise VolumeConfigurationError()
2106- else:
2107- host.mkdir(config['mountpoint'])
2108- if os.path.ismount(config['mountpoint']):
2109- unmount_volume(config)
2110- if not host.mount(config['device'], config['mountpoint'], persist=True):
2111- raise VolumeConfigurationError()
2112-
2113-
2114-def unmount_volume(config):
2115- if os.path.ismount(config['mountpoint']):
2116- if not host.umount(config['mountpoint'], persist=True):
2117- raise VolumeConfigurationError()
2118-
2119-
2120-def managed_mounts():
2121- '''List of all mounted managed volumes'''
2122- return filter(lambda mount: mount[0].startswith(MOUNT_BASE), host.mounts())
2123-
2124-
2125-def configure_volume(before_change=lambda: None, after_change=lambda: None):
2126- '''Set up storage (or don't) according to the charm's volume configuration.
2127- Returns the mount point or "ephemeral". before_change and after_change
2128- are optional functions to be called if the volume configuration changes.
2129- '''
2130-
2131- config = get_config()
2132- if not config:
2133- hookenv.log('Failed to read volume configuration', hookenv.CRITICAL)
2134- raise VolumeConfigurationError()
2135-
2136- if config['ephemeral']:
2137- if os.path.ismount(config['mountpoint']):
2138- before_change()
2139- unmount_volume(config)
2140- after_change()
2141- return 'ephemeral'
2142- else:
2143- # persistent storage
2144- if os.path.ismount(config['mountpoint']):
2145- mounts = dict(managed_mounts())
2146- if mounts.get(config['mountpoint']) != config['device']:
2147- before_change()
2148- unmount_volume(config)
2149- mount_volume(config)
2150- after_change()
2151- else:
2152- before_change()
2153- mount_volume(config)
2154- after_change()
2155- return config['mountpoint']
2156
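The removed volumes.py enforced that a unit has either a mapped device or ephemeral storage, but not both and not neither. A simplified, hook-free sketch of those consistency rules (`check_volume_config` and its signature are mine; the real `get_config()` read the values from charm config and logged errors instead of returning `None` silently):

```python
import os

MOUNT_BASE = '/srv/juju/volumes'


def check_volume_config(device, ephemeral, unit_name='rsyslog/0'):
    """Return a volume config dict, or None if the combination is invalid."""
    if device and ephemeral:
        return None  # a volume is mapped but ephemeral storage was requested
    if not device and not ephemeral:
        return None  # persistent storage requested but no volume mapped
    mountpoint = os.path.join(MOUNT_BASE, unit_name.replace('/', '-'))
    return {'device': device, 'ephemeral': ephemeral,
            'mountpoint': mountpoint}


print(check_volume_config('/dev/vdb', False))
```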
2157=== removed directory 'hooks/charmhelpers/contrib/database'
2158=== removed file 'hooks/charmhelpers/contrib/database/__init__.py'
2159=== removed file 'hooks/charmhelpers/contrib/database/mysql.py'
2160--- hooks/charmhelpers/contrib/database/mysql.py 2015-07-29 18:07:31 +0000
2161+++ hooks/charmhelpers/contrib/database/mysql.py 1970-01-01 00:00:00 +0000
2162@@ -1,412 +0,0 @@
2163-"""Helper for working with a MySQL database"""
2164-import json
2165-import re
2166-import sys
2167-import platform
2168-import os
2169-import glob
2170-
2171-# from string import upper
2172-
2173-from charmhelpers.core.host import (
2174- mkdir,
2175- pwgen,
2176- write_file
2177-)
2178-from charmhelpers.core.hookenv import (
2179- config as config_get,
2180- relation_get,
2181- related_units,
2182- unit_get,
2183- log,
2184- DEBUG,
2185- INFO,
2186- WARNING,
2187-)
2188-from charmhelpers.fetch import (
2189- apt_install,
2190- apt_update,
2191- filter_installed_packages,
2192-)
2193-from charmhelpers.contrib.peerstorage import (
2194- peer_store,
2195- peer_retrieve,
2196-)
2197-from charmhelpers.contrib.network.ip import get_host_ip
2198-
2199-try:
2200- import MySQLdb
2201-except ImportError:
2202- apt_update(fatal=True)
2203- apt_install(filter_installed_packages(['python-mysqldb']), fatal=True)
2204- import MySQLdb
2205-
2206-
2207-class MySQLHelper(object):
2208-
2209- def __init__(self, rpasswdf_template, upasswdf_template, host='localhost',
2210- migrate_passwd_to_peer_relation=True,
2211- delete_ondisk_passwd_file=True):
2212- self.host = host
2213- # Password file path templates
2214- self.root_passwd_file_template = rpasswdf_template
2215- self.user_passwd_file_template = upasswdf_template
2216-
2217- self.migrate_passwd_to_peer_relation = migrate_passwd_to_peer_relation
2218- # If we migrate we have the option to delete local copy of root passwd
2219- self.delete_ondisk_passwd_file = delete_ondisk_passwd_file
2220-
2221- def connect(self, user='root', password=None):
2222- log("Opening db connection for %s@%s" % (user, self.host), level=DEBUG)
2223- self.connection = MySQLdb.connect(user=user, host=self.host,
2224- passwd=password)
2225-
2226- def database_exists(self, db_name):
2227- cursor = self.connection.cursor()
2228- try:
2229- cursor.execute("SHOW DATABASES")
2230- databases = [i[0] for i in cursor.fetchall()]
2231- finally:
2232- cursor.close()
2233-
2234- return db_name in databases
2235-
2236- def create_database(self, db_name):
2237- cursor = self.connection.cursor()
2238- try:
2239- cursor.execute("CREATE DATABASE {} CHARACTER SET UTF8"
2240- .format(db_name))
2241- finally:
2242- cursor.close()
2243-
2244- def grant_exists(self, db_name, db_user, remote_ip):
2245- cursor = self.connection.cursor()
2246- priv_string = "GRANT ALL PRIVILEGES ON `{}`.* " \
2247- "TO '{}'@'{}'".format(db_name, db_user, remote_ip)
2248- try:
2249- cursor.execute("SHOW GRANTS for '{}'@'{}'".format(db_user,
2250- remote_ip))
2251- grants = [i[0] for i in cursor.fetchall()]
2252- except MySQLdb.OperationalError:
2253- return False
2254- finally:
2255- cursor.close()
2256-
2257- # TODO: review for different grants
2258- return priv_string in grants
2259-
2260- def create_grant(self, db_name, db_user, remote_ip, password):
2261- cursor = self.connection.cursor()
2262- try:
2263- # TODO: review for different grants
2264- cursor.execute("GRANT ALL PRIVILEGES ON {}.* TO '{}'@'{}' "
2265- "IDENTIFIED BY '{}'".format(db_name,
2266- db_user,
2267- remote_ip,
2268- password))
2269- finally:
2270- cursor.close()
2271-
2272- def create_admin_grant(self, db_user, remote_ip, password):
2273- cursor = self.connection.cursor()
2274- try:
2275- cursor.execute("GRANT ALL PRIVILEGES ON *.* TO '{}'@'{}' "
2276- "IDENTIFIED BY '{}'".format(db_user,
2277- remote_ip,
2278- password))
2279- finally:
2280- cursor.close()
2281-
2282- def cleanup_grant(self, db_user, remote_ip):
2283- cursor = self.connection.cursor()
2284- try:
2285- cursor.execute("DELETE FROM mysql.user WHERE user='{}' "
2286- "AND HOST='{}'".format(db_user,
2287- remote_ip))
2288- finally:
2289- cursor.close()
2290-
2291- def execute(self, sql):
2292- """Execute arbitrary SQL against the database."""
2293- cursor = self.connection.cursor()
2294- try:
2295- cursor.execute(sql)
2296- finally:
2297- cursor.close()
2298-
2299- def migrate_passwords_to_peer_relation(self, excludes=None):
2300- """Migrate any passwords storage on disk to cluster peer relation."""
2301- dirname = os.path.dirname(self.root_passwd_file_template)
2302- path = os.path.join(dirname, '*.passwd')
2303- for f in glob.glob(path):
2304- if excludes and f in excludes:
2305- log("Excluding %s from peer migration" % (f), level=DEBUG)
2306- continue
2307-
2308- key = os.path.basename(f)
2309- with open(f, 'r') as passwd:
2310- _value = passwd.read().strip()
2311-
2312- try:
2313- peer_store(key, _value)
2314-
2315- if self.delete_ondisk_passwd_file:
2316- os.unlink(f)
2317- except ValueError:
2318- # NOTE cluster relation not yet ready - skip for now
2319- pass
2320-
2321- def get_mysql_password_on_disk(self, username=None, password=None):
2322- """Retrieve, generate or store a mysql password for the provided
2323- username on disk."""
2324- if username:
2325- template = self.user_passwd_file_template
2326- passwd_file = template.format(username)
2327- else:
2328- passwd_file = self.root_passwd_file_template
2329-
2330- _password = None
2331- if os.path.exists(passwd_file):
2332- log("Using existing password file '%s'" % passwd_file, level=DEBUG)
2333- with open(passwd_file, 'r') as passwd:
2334- _password = passwd.read().strip()
2335- else:
2336- log("Generating new password file '%s'" % passwd_file, level=DEBUG)
2337- if not os.path.isdir(os.path.dirname(passwd_file)):
2338- # NOTE: need to ensure this is not mysql root dir (which needs
2339- # to be mysql readable)
2340- mkdir(os.path.dirname(passwd_file), owner='root', group='root',
2341- perms=0o770)
2342- # Force permissions - for some reason the chmod in makedirs
2343- # fails
2344- os.chmod(os.path.dirname(passwd_file), 0o770)
2345-
2346- _password = password or pwgen(length=32)
2347- write_file(passwd_file, _password, owner='root', group='root',
2348- perms=0o660)
2349-
2350- return _password
2351-
2352- def passwd_keys(self, username):
2353- """Generator to return keys used to store passwords in peer store.
2354-
2355- NOTE: we support both legacy and new format to support mysql
2356- charm prior to refactor. This is necessary to avoid LP 1451890.
2357- """
2358- keys = []
2359- if username == 'mysql':
2360- log("Bad username '%s'" % (username), level=WARNING)
2361-
2362- if username:
2363- # IMPORTANT: *newer* format must be returned first
2364- keys.append('mysql-%s.passwd' % (username))
2365- keys.append('%s.passwd' % (username))
2366- else:
2367- keys.append('mysql.passwd')
2368-
2369- for key in keys:
2370- yield key
2371-
2372- def get_mysql_password(self, username=None, password=None):
2373- """Retrieve, generate or store a mysql password for the provided
2374- username using peer relation cluster."""
2375- excludes = []
2376-
2377- # First check peer relation.
2378- try:
2379- for key in self.passwd_keys(username):
2380- _password = peer_retrieve(key)
2381- if _password:
2382- break
2383-
2384- # If root password available don't update peer relation from local
2385- if _password and not username:
2386- excludes.append(self.root_passwd_file_template)
2387-
2388- except ValueError:
2389- # cluster relation is not yet started; use on-disk
2390- _password = None
2391-
2392- # If none available, generate new one
2393- if not _password:
2394- _password = self.get_mysql_password_on_disk(username, password)
2395-
2396- # Put on wire if required
2397- if self.migrate_passwd_to_peer_relation:
2398- self.migrate_passwords_to_peer_relation(excludes=excludes)
2399-
2400- return _password
2401-
2402- def get_mysql_root_password(self, password=None):
2403- """Retrieve or generate mysql root password for service units."""
2404- return self.get_mysql_password(username=None, password=password)
2405-
2406- def normalize_address(self, hostname):
2407- """Ensure that address returned is an IP address (i.e. not fqdn)"""
2408- if config_get('prefer-ipv6'):
2409- # TODO: add support for ipv6 dns
2410- return hostname
2411-
2412- if hostname != unit_get('private-address'):
2413- return get_host_ip(hostname, fallback=hostname)
2414-
2415- # Otherwise assume localhost
2416- return '127.0.0.1'
2417-
2418- def get_allowed_units(self, database, username, relation_id=None):
2419- """Get list of units with access grants for database with username.
2420-
2421- This is typically used to provide shared-db relations with a list of
2422- which units have been granted access to the given database.
2423- """
2424- self.connect(password=self.get_mysql_root_password())
2425- allowed_units = set()
2426- for unit in related_units(relation_id):
2427- settings = relation_get(rid=relation_id, unit=unit)
2428- # First check for setting with prefix, then without
2429- for attr in ["%s_hostname" % (database), 'hostname']:
2430- hosts = settings.get(attr, None)
2431- if hosts:
2432- break
2433-
2434- if hosts:
2435- # hostname can be json-encoded list of hostnames
2436- try:
2437- hosts = json.loads(hosts)
2438- except ValueError:
2439- hosts = [hosts]
2440- else:
2441- hosts = [settings['private-address']]
2442-
2443- if hosts:
2444- for host in hosts:
2445- host = self.normalize_address(host)
2446- if self.grant_exists(database, username, host):
2447- log("Grant exists for host '%s' on db '%s'" %
2448- (host, database), level=DEBUG)
2449- if unit not in allowed_units:
2450- allowed_units.add(unit)
2451- else:
2452- log("Grant does NOT exist for host '%s' on db '%s'" %
2453- (host, database), level=DEBUG)
2454- else:
2455- log("No hosts found for grant check", level=INFO)
2456-
2457- return allowed_units
2458-
2459- def configure_db(self, hostname, database, username, admin=False):
2460- """Configure access to database for username from hostname."""
2461- self.connect(password=self.get_mysql_root_password())
2462- if not self.database_exists(database):
2463- self.create_database(database)
2464-
2465- remote_ip = self.normalize_address(hostname)
2466- password = self.get_mysql_password(username)
2467- if not self.grant_exists(database, username, remote_ip):
2468- if not admin:
2469- self.create_grant(database, username, remote_ip, password)
2470- else:
2471- self.create_admin_grant(username, remote_ip, password)
2472-
2473- return password
2474-
2475-
2476-class PerconaClusterHelper(object):
2477-
2478- # Going for the biggest page size to avoid wasted bytes.
2479- # InnoDB page size is 16MB
2480-
2481- DEFAULT_PAGE_SIZE = 16 * 1024 * 1024
2482- DEFAULT_INNODB_BUFFER_FACTOR = 0.50
2483-
2484- def human_to_bytes(self, human):
2485- """Convert human readable configuration options to bytes."""
2486- num_re = re.compile('^[0-9]+$')
2487- if num_re.match(human):
2488- return human
2489-
2490- factors = {
2491- 'K': 1024,
2492- 'M': 1048576,
2493- 'G': 1073741824,
2494- 'T': 1099511627776
2495- }
2496- modifier = human[-1]
2497- if modifier in factors:
2498- return int(human[:-1]) * factors[modifier]
2499-
2500- if modifier == '%':
2501- total_ram = self.human_to_bytes(self.get_mem_total())
2502- if self.is_32bit_system() and total_ram > self.sys_mem_limit():
2503- total_ram = self.sys_mem_limit()
2504- factor = int(human[:-1]) * 0.01
2505- pctram = total_ram * factor
2506- return int(pctram - (pctram % self.DEFAULT_PAGE_SIZE))
2507-
2508- raise ValueError("Can only convert K, M, G, T or %")
2509-
2510- def is_32bit_system(self):
2511- """Determine whether system is 32 or 64 bit."""
2512- try:
2513- return sys.maxsize < 2 ** 32
2514- except OverflowError:
2515- return False
2516-
2517- def sys_mem_limit(self):
2518- """Determine the default memory limit for the current service unit."""
2519- if platform.machine() in ['armv7l']:
2520- _mem_limit = self.human_to_bytes('2700M') # experimentally determined
2521- else:
2522- # Limit for x86 based 32bit systems
2523- _mem_limit = self.human_to_bytes('4G')
2524-
2525- return _mem_limit
2526-
2527- def get_mem_total(self):
2528- """Calculate the total memory in the current service unit."""
2529- with open('/proc/meminfo') as meminfo_file:
2530- for line in meminfo_file:
2531- key, mem = line.split(':', 2)
2532- if key == 'MemTotal':
2533- mtot, modifier = mem.strip().split(' ')
2534- return '%s%s' % (mtot, modifier[0].upper())
2535-
2536- def parse_config(self):
2537- """Parse charm configuration and calculate values for config files."""
2538- config = config_get()
2539- mysql_config = {}
2540- if 'max-connections' in config:
2541- mysql_config['max_connections'] = config['max-connections']
2542-
2543- if 'wait-timeout' in config:
2544- mysql_config['wait_timeout'] = config['wait-timeout']
2545-
2546- if 'innodb-flush-log-at-trx-commit' in config:
2547- mysql_config['innodb_flush_log_at_trx_commit'] = config['innodb-flush-log-at-trx-commit']
2548-
2549- # Set a sane default key_buffer size
2550- mysql_config['key_buffer'] = self.human_to_bytes('32M')
2551- total_memory = self.human_to_bytes(self.get_mem_total())
2552-
2553- dataset_bytes = config.get('dataset-size', None)
2554- innodb_buffer_pool_size = config.get('innodb-buffer-pool-size', None)
2555-
2556- if innodb_buffer_pool_size:
2557- innodb_buffer_pool_size = self.human_to_bytes(
2558- innodb_buffer_pool_size)
2559- elif dataset_bytes:
2560- log("Option 'dataset-size' has been deprecated, please use"
2561- " innodb_buffer_pool_size option instead", level="WARN")
2562- innodb_buffer_pool_size = self.human_to_bytes(
2563- dataset_bytes)
2564- else:
2565- innodb_buffer_pool_size = int(
2566- total_memory * self.DEFAULT_INNODB_BUFFER_FACTOR)
2567-
2568- if innodb_buffer_pool_size > total_memory:
2569- log("innodb_buffer_pool_size; {} is greater than system available memory:{}".format(
2570- innodb_buffer_pool_size,
2571- total_memory), level='WARN')
2572-
2573- mysql_config['innodb_buffer_pool_size'] = innodb_buffer_pool_size
2574- return mysql_config
2575
2576=== modified file 'hooks/charmhelpers/contrib/network/ip.py'
2577--- hooks/charmhelpers/contrib/network/ip.py 2015-05-19 21:31:00 +0000
2578+++ hooks/charmhelpers/contrib/network/ip.py 2016-05-18 10:01:02 +0000
2579@@ -23,7 +23,7 @@
2580 from functools import partial
2581
2582 from charmhelpers.core.hookenv import unit_get
2583-from charmhelpers.fetch import apt_install
2584+from charmhelpers.fetch import apt_install, apt_update
2585 from charmhelpers.core.hookenv import (
2586 log,
2587 WARNING,
2588@@ -32,13 +32,15 @@
2589 try:
2590 import netifaces
2591 except ImportError:
2592- apt_install('python-netifaces')
2593+ apt_update(fatal=True)
2594+ apt_install('python-netifaces', fatal=True)
2595 import netifaces
2596
2597 try:
2598 import netaddr
2599 except ImportError:
2600- apt_install('python-netaddr')
2601+ apt_update(fatal=True)
2602+ apt_install('python-netaddr', fatal=True)
2603 import netaddr
2604
2605
2606@@ -51,7 +53,7 @@
2607
2608
2609 def no_ip_found_error_out(network):
2610- errmsg = ("No IP address found in network: %s" % network)
2611+ errmsg = ("No IP address found in network(s): %s" % network)
2612 raise ValueError(errmsg)
2613
2614
2615@@ -59,7 +61,7 @@
2616 """Get an IPv4 or IPv6 address within the network from the host.
2617
2618 :param network (str): CIDR presentation format. For example,
2619- '192.168.1.0/24'.
2620+ '192.168.1.0/24'. Supports multiple networks as a space-delimited list.
2621 :param fallback (str): If no address is found, return fallback.
2622 :param fatal (boolean): If no address is found, fallback is not
2623 set and fatal is True then exit(1).
2624@@ -73,24 +75,26 @@
2625 else:
2626 return None
2627
2628- _validate_cidr(network)
2629- network = netaddr.IPNetwork(network)
2630- for iface in netifaces.interfaces():
2631- addresses = netifaces.ifaddresses(iface)
2632- if network.version == 4 and netifaces.AF_INET in addresses:
2633- addr = addresses[netifaces.AF_INET][0]['addr']
2634- netmask = addresses[netifaces.AF_INET][0]['netmask']
2635- cidr = netaddr.IPNetwork("%s/%s" % (addr, netmask))
2636- if cidr in network:
2637- return str(cidr.ip)
2638+ networks = network.split() or [network]
2639+ for network in networks:
2640+ _validate_cidr(network)
2641+ network = netaddr.IPNetwork(network)
2642+ for iface in netifaces.interfaces():
2643+ addresses = netifaces.ifaddresses(iface)
2644+ if network.version == 4 and netifaces.AF_INET in addresses:
2645+ addr = addresses[netifaces.AF_INET][0]['addr']
2646+ netmask = addresses[netifaces.AF_INET][0]['netmask']
2647+ cidr = netaddr.IPNetwork("%s/%s" % (addr, netmask))
2648+ if cidr in network:
2649+ return str(cidr.ip)
2650
2651- if network.version == 6 and netifaces.AF_INET6 in addresses:
2652- for addr in addresses[netifaces.AF_INET6]:
2653- if not addr['addr'].startswith('fe80'):
2654- cidr = netaddr.IPNetwork("%s/%s" % (addr['addr'],
2655- addr['netmask']))
2656- if cidr in network:
2657- return str(cidr.ip)
2658+ if network.version == 6 and netifaces.AF_INET6 in addresses:
2659+ for addr in addresses[netifaces.AF_INET6]:
2660+ if not addr['addr'].startswith('fe80'):
2661+ cidr = netaddr.IPNetwork("%s/%s" % (addr['addr'],
2662+ addr['netmask']))
2663+ if cidr in network:
2664+ return str(cidr.ip)
2665
2666 if fallback is not None:
2667 return fallback
2668@@ -187,6 +191,15 @@
2669 get_netmask_for_address = partial(_get_for_address, key='netmask')
2670
2671
2672+def resolve_network_cidr(ip_address):
2673+ '''
2674+ Resolves the full address cidr of an ip_address based on
2675+ configured network interfaces
2676+ '''
2677+ netmask = get_netmask_for_address(ip_address)
2678+ return str(netaddr.IPNetwork("%s/%s" % (ip_address, netmask)).cidr)
2679+
2680+
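The new `resolve_network_cidr` helper leans on netaddr; the same derivation can be sketched with only the standard library's `ipaddress` module. This is an illustrative alternative, with the netmask passed in explicitly rather than resolved from the host's interfaces as the helper does:

```python
import ipaddress

def resolve_network_cidr(ip_address, netmask):
    # Combine address and mask, then normalise to the network CIDR,
    # mirroring what netaddr.IPNetwork("addr/mask").cidr produces above.
    iface = ipaddress.ip_interface("{}/{}".format(ip_address, netmask))
    return str(iface.network)
```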
2681 def format_ipv6_addr(address):
2682 """If address is IPv6, wrap it in '[]' otherwise return None.
2683
2684@@ -435,8 +448,12 @@
2685
2686 rev = dns.reversename.from_address(address)
2687 result = ns_query(rev)
2688+
2689 if not result:
2690- return None
2691+ try:
2692+ result = socket.gethostbyaddr(address)[0]
2693+ except Exception:
2694+ return None
2695 else:
2696 result = address
2697
2698@@ -448,3 +465,18 @@
2699 return result
2700 else:
2701 return result.split('.')[0]
2702+
2703+
2704+def port_has_listener(address, port):
2705+ """
2706+ Returns True if the address:port is open and being listened to,
2707+ else False.
2708+
2709+ @param address: an IP address or hostname
2710+ @param port: integer port
2711+
2712+ Note: calls 'nc' (netcat) via a subprocess; no shell is involved.
2713+ """
2714+ cmd = ['nc', '-z', address, str(port)]
2715+ result = subprocess.call(cmd)
2716+ return not bool(result)
2717
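`port_has_listener` shells out to `nc -z`; the same check can be sketched with a plain TCP connect using only the standard library. This is an illustrative alternative, not the charm-helpers implementation:

```python
import socket

def port_has_listener(address, port, timeout=1.0):
    # A successful TCP connect means something is listening on
    # address:port; a refusal or timeout means nothing is.
    try:
        with socket.create_connection((address, port), timeout=timeout):
            return True
    except OSError:
        return False
```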
2718=== modified file 'hooks/charmhelpers/contrib/network/ovs/__init__.py'
2719--- hooks/charmhelpers/contrib/network/ovs/__init__.py 2015-05-19 21:31:00 +0000
2720+++ hooks/charmhelpers/contrib/network/ovs/__init__.py 2016-05-18 10:01:02 +0000
2721@@ -25,10 +25,14 @@
2722 )
2723
2724
2725-def add_bridge(name):
2726+def add_bridge(name, datapath_type=None):
2727 ''' Add the named bridge to openvswitch '''
2728 log('Creating bridge {}'.format(name))
2729- subprocess.check_call(["ovs-vsctl", "--", "--may-exist", "add-br", name])
2730+ cmd = ["ovs-vsctl", "--", "--may-exist", "add-br", name]
2731+ if datapath_type is not None:
2732+ cmd += ['--', 'set', 'bridge', name,
2733+ 'datapath_type={}'.format(datapath_type)]
2734+ subprocess.check_call(cmd)
2735
2736
2737 def del_bridge(name):
2738
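The `datapath_type` extension to `add_bridge` only appends extra `ovs-vsctl` arguments. The command construction can be isolated as a pure function for clarity; the function name here is hypothetical, for illustration:

```python
def build_add_bridge_cmd(name, datapath_type=None):
    # Base command: create the bridge idempotently.
    cmd = ["ovs-vsctl", "--", "--may-exist", "add-br", name]
    if datapath_type is not None:
        # Pin the bridge's datapath type (e.g. 'netdev' for a
        # userspace/DPDK datapath) in the same ovs-vsctl invocation.
        cmd += ["--", "set", "bridge", name,
                "datapath_type={}".format(datapath_type)]
    return cmd
```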
2739=== modified file 'hooks/charmhelpers/contrib/network/ufw.py'
2740--- hooks/charmhelpers/contrib/network/ufw.py 2015-07-29 18:07:31 +0000
2741+++ hooks/charmhelpers/contrib/network/ufw.py 2016-05-18 10:01:02 +0000
2742@@ -40,7 +40,9 @@
2743 import re
2744 import os
2745 import subprocess
2746+
2747 from charmhelpers.core import hookenv
2748+from charmhelpers.core.kernel import modprobe, is_module_loaded
2749
2750 __author__ = "Felipe Reyes <felipe.reyes@canonical.com>"
2751
2752@@ -82,14 +84,11 @@
2753 # do we have IPv6 in the machine?
2754 if os.path.isdir('/proc/sys/net/ipv6'):
2755 # is ip6tables kernel module loaded?
2756- lsmod = subprocess.check_output(['lsmod'], universal_newlines=True)
2757- matches = re.findall('^ip6_tables[ ]+', lsmod, re.M)
2758- if len(matches) == 0:
2759+ if not is_module_loaded('ip6_tables'):
2760 # ip6tables support isn't complete, let's try to load it
2761 try:
2762- subprocess.check_output(['modprobe', 'ip6_tables'],
2763- universal_newlines=True)
2764- # great, we could load the module
2765+ modprobe('ip6_tables')
2766+ # great, we can load the module
2767 return True
2768 except subprocess.CalledProcessError as ex:
2769 hookenv.log("Couldn't load ip6_tables module: %s" % ex.output,
2770
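The lsmod/regex check is replaced by `charmhelpers.core.kernel.is_module_loaded`; conceptually that check amounts to scanning `/proc/modules` for the module name. A minimal sketch over the file's text follows (assumed behaviour, not the helper's exact code):

```python
def is_module_loaded_text(proc_modules_text, module):
    # /proc/modules lists one module per line, with the module
    # name as the first whitespace-delimited field.
    return any(line.split(" ", 1)[0] == module
               for line in proc_modules_text.splitlines())
```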
2771=== modified file 'hooks/charmhelpers/contrib/openstack/amulet/deployment.py'
2772--- hooks/charmhelpers/contrib/openstack/amulet/deployment.py 2015-07-29 18:07:31 +0000
2773+++ hooks/charmhelpers/contrib/openstack/amulet/deployment.py 2016-05-18 10:01:02 +0000
2774@@ -14,12 +14,18 @@
2775 # You should have received a copy of the GNU Lesser General Public License
2776 # along with charm-helpers. If not, see <http://www.gnu.org/licenses/>.
2777
2778+import logging
2779+import re
2780+import sys
2781 import six
2782 from collections import OrderedDict
2783 from charmhelpers.contrib.amulet.deployment import (
2784 AmuletDeployment
2785 )
2786
2787+DEBUG = logging.DEBUG
2788+ERROR = logging.ERROR
2789+
2790
2791 class OpenStackAmuletDeployment(AmuletDeployment):
2792 """OpenStack amulet deployment.
2793@@ -28,9 +34,12 @@
2794 that is specifically for use by OpenStack charms.
2795 """
2796
2797- def __init__(self, series=None, openstack=None, source=None, stable=True):
2798+ def __init__(self, series=None, openstack=None, source=None,
2799+ stable=True, log_level=DEBUG):
2800 """Initialize the deployment environment."""
2801 super(OpenStackAmuletDeployment, self).__init__(series)
2802+ self.log = self.get_logger(level=log_level)
2803+ self.log.info('OpenStackAmuletDeployment: init')
2804 self.openstack = openstack
2805 self.source = source
2806 self.stable = stable
2807@@ -38,26 +47,55 @@
2808 # out.
2809 self.current_next = "trusty"
2810
2811+ def get_logger(self, name="deployment-logger", level=logging.DEBUG):
2812+ """Get a logger object that will log to stdout."""
2813+ log = logging
2814+ logger = log.getLogger(name)
2815+ fmt = log.Formatter("%(asctime)s %(funcName)s "
2816+ "%(levelname)s: %(message)s")
2817+
2818+ handler = log.StreamHandler(stream=sys.stdout)
2819+ handler.setLevel(level)
2820+ handler.setFormatter(fmt)
2821+
2822+ logger.addHandler(handler)
2823+ logger.setLevel(level)
2824+
2825+ return logger
2826+
2827 def _determine_branch_locations(self, other_services):
2828 """Determine the branch locations for the other services.
2829
2830 Determine if the local branch being tested is derived from its
2831 stable or next (dev) branch, and based on this, use the corresponding
2832 stable or next branches for the other_services."""
2833- base_charms = ['mysql', 'mongodb']
2834+
2835+ self.log.info('OpenStackAmuletDeployment: determine branch locations')
2836+
2837+ # Charms outside the lp:~openstack-charmers namespace
2838+ base_charms = ['mysql', 'mongodb', 'nrpe']
2839+
2840+ # Force these charms to current series even when using an older series.
2841+ # ie. Use trusty/nrpe even when series is precise, as the P charm
2842+ # does not possess the necessary external master config and hooks.
2843+ force_series_current = ['nrpe']
2844
2845 if self.series in ['precise', 'trusty']:
2846 base_series = self.series
2847 else:
2848 base_series = self.current_next
2849
2850- if self.stable:
2851- for svc in other_services:
2852+ for svc in other_services:
2853+ if svc['name'] in force_series_current:
2854+ base_series = self.current_next
2855+ # If a location has been explicitly set, use it
2856+ if svc.get('location'):
2857+ continue
2858+ if self.stable:
2859 temp = 'lp:charms/{}/{}'
2860 svc['location'] = temp.format(base_series,
2861 svc['name'])
2862- else:
2863- for svc in other_services:
2864+ else:
2865 if svc['name'] in base_charms:
2866 temp = 'lp:charms/{}/{}'
2867 svc['location'] = temp.format(base_series,
2868@@ -66,10 +104,13 @@
2869 temp = 'lp:~openstack-charmers/charms/{}/{}/next'
2870 svc['location'] = temp.format(self.current_next,
2871 svc['name'])
2872+
2873 return other_services
2874
2875 def _add_services(self, this_service, other_services):
2876 """Add services to the deployment and set openstack-origin/source."""
2877+ self.log.info('OpenStackAmuletDeployment: adding services')
2878+
2879 other_services = self._determine_branch_locations(other_services)
2880
2881 super(OpenStackAmuletDeployment, self)._add_services(this_service,
2882@@ -77,29 +118,105 @@
2883
2884 services = other_services
2885 services.append(this_service)
2886+
2887+ # Charms which should use the source config option
2888 use_source = ['mysql', 'mongodb', 'rabbitmq-server', 'ceph',
2889- 'ceph-osd', 'ceph-radosgw']
2890- # Most OpenStack subordinate charms do not expose an origin option
2891- # as that is controlled by the principle.
2892- ignore = ['cinder-ceph', 'hacluster', 'neutron-openvswitch']
2893+ 'ceph-osd', 'ceph-radosgw', 'ceph-mon']
2894+
2895+ # Charms which can not use openstack-origin, ie. many subordinates
2896+ no_origin = ['cinder-ceph', 'hacluster', 'neutron-openvswitch', 'nrpe',
2897+ 'openvswitch-odl', 'neutron-api-odl', 'odl-controller',
2898+ 'cinder-backup', 'nexentaedge-data',
2899+ 'nexentaedge-iscsi-gw', 'nexentaedge-swift-gw',
2900+ 'cinder-nexentaedge', 'nexentaedge-mgmt']
2901
2902 if self.openstack:
2903 for svc in services:
2904- if svc['name'] not in use_source + ignore:
2905+ if svc['name'] not in use_source + no_origin:
2906 config = {'openstack-origin': self.openstack}
2907 self.d.configure(svc['name'], config)
2908
2909 if self.source:
2910 for svc in services:
2911- if svc['name'] in use_source and svc['name'] not in ignore:
2912+ if svc['name'] in use_source and svc['name'] not in no_origin:
2913 config = {'source': self.source}
2914 self.d.configure(svc['name'], config)
2915
2916 def _configure_services(self, configs):
2917 """Configure all of the services."""
2918+ self.log.info('OpenStackAmuletDeployment: configure services')
2919 for service, config in six.iteritems(configs):
2920 self.d.configure(service, config)
2921
2922+ def _auto_wait_for_status(self, message=None, exclude_services=None,
2923+ include_only=None, timeout=1800):
2924+ """Wait for all units to have a specific extended status, except
2925+ for any defined as excluded. Unless specified via message, any
2926+ status containing any case of 'ready' will be considered a match.
2927+
2928+ Examples of message usage:
2929+
2930+ Wait for all unit status to CONTAIN any case of 'ready' or 'ok':
2931+ message = re.compile('.*ready.*|.*ok.*', re.IGNORECASE)
2932+
2933+ Wait for all units to reach this status (exact match):
2934+ message = re.compile('^Unit is ready and clustered$')
2935+
2936+ Wait for all units to reach any one of these (exact match):
2937+ message = re.compile('Unit is ready|OK|Ready')
2938+
2939+ Wait for at least one unit to reach this status (exact match):
2940+ message = {'ready'}
2941+
2942+ See Amulet's sentry.wait_for_messages() for message usage detail.
2943+ https://github.com/juju/amulet/blob/master/amulet/sentry.py
2944+
2945+ :param message: Expected status match
2946+ :param exclude_services: List of juju service names to ignore,
2947+ not to be used in conjunction with include_only.
2948+ :param include_only: List of juju service names to exclusively check,
2949+ not to be used in conjunction with exclude_services.
2950+ :param timeout: Maximum time in seconds to wait for status match
2951+ :returns: None. Raises if timeout is hit.
2952+ """
2953+ self.log.info('Waiting for extended status on units...')
2954+
2955+ all_services = self.d.services.keys()
2956+
2957+ if exclude_services and include_only:
2958+ raise ValueError('exclude_services can not be used '
2959+ 'with include_only')
2960+
2961+ if message:
2962+ if isinstance(message, re._pattern_type):
2963+ match = message.pattern
2964+ else:
2965+ match = message
2966+
2967+ self.log.debug('Custom extended status wait match: '
2968+ '{}'.format(match))
2969+ else:
2970+ self.log.debug('Default extended status wait match: contains '
2971+ 'READY (case-insensitive)')
2972+ message = re.compile('.*ready.*', re.IGNORECASE)
2973+
2974+ if exclude_services:
2975+ self.log.debug('Excluding services from extended status match: '
2976+ '{}'.format(exclude_services))
2977+ else:
2978+ exclude_services = []
2979+
2980+ if include_only:
2981+ services = include_only
2982+ else:
2983+ services = list(set(all_services) - set(exclude_services))
2984+
2985+ self.log.debug('Waiting up to {}s for extended status on services: '
2986+ '{}'.format(timeout, services))
2987+ service_messages = {service: message for service in services}
2988+ self.d.sentry.wait_for_messages(service_messages, timeout=timeout)
2989+ self.log.info('OK')
2990+
2991 def _get_openstack_release(self):
2992 """Get openstack release.
2993
2994@@ -111,7 +228,8 @@
2995 self.precise_havana, self.precise_icehouse,
2996 self.trusty_icehouse, self.trusty_juno, self.utopic_juno,
2997 self.trusty_kilo, self.vivid_kilo, self.trusty_liberty,
2998- self.wily_liberty) = range(12)
2999+ self.wily_liberty, self.trusty_mitaka,
3000+ self.xenial_mitaka) = range(14)
3001
3002 releases = {
3003 ('precise', None): self.precise_essex,
3004@@ -123,9 +241,11 @@
3005 ('trusty', 'cloud:trusty-juno'): self.trusty_juno,
3006 ('trusty', 'cloud:trusty-kilo'): self.trusty_kilo,
3007 ('trusty', 'cloud:trusty-liberty'): self.trusty_liberty,
3008+ ('trusty', 'cloud:trusty-mitaka'): self.trusty_mitaka,
3009 ('utopic', None): self.utopic_juno,
3010 ('vivid', None): self.vivid_kilo,
3011- ('wily', None): self.wily_liberty}
3012+ ('wily', None): self.wily_liberty,
3013+ ('xenial', None): self.xenial_mitaka}
3014 return releases[(self.series, self.openstack)]
3015
3016 def _get_openstack_release_string(self):
3017@@ -142,6 +262,7 @@
3018 ('utopic', 'juno'),
3019 ('vivid', 'kilo'),
3020 ('wily', 'liberty'),
3021+ ('xenial', 'mitaka'),
3022 ])
3023 if self.openstack:
3024 os_origin = self.openstack.split(':')[1]
3025
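The release bookkeeping added above is a plain dict keyed on `(series, openstack-origin)` tuples, with `None` meaning the distro default. A reduced sketch of the pattern, using a hypothetical subset of the real table:

```python
# (series, openstack-origin) -> release identifier; None origin
# means the distro's default OpenStack release for that series.
RELEASES = {
    ('trusty', None): 'trusty_icehouse',
    ('trusty', 'cloud:trusty-mitaka'): 'trusty_mitaka',
    ('xenial', None): 'xenial_mitaka',
}

def get_release(series, openstack=None):
    return RELEASES[(series, openstack)]
```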
3026=== modified file 'hooks/charmhelpers/contrib/openstack/amulet/utils.py'
3027--- hooks/charmhelpers/contrib/openstack/amulet/utils.py 2015-07-29 18:07:31 +0000
3028+++ hooks/charmhelpers/contrib/openstack/amulet/utils.py 2016-05-18 10:01:02 +0000
3029@@ -18,6 +18,7 @@
3030 import json
3031 import logging
3032 import os
3033+import re
3034 import six
3035 import time
3036 import urllib
3037@@ -26,7 +27,12 @@
3038 import glanceclient.v1.client as glance_client
3039 import heatclient.v1.client as heat_client
3040 import keystoneclient.v2_0 as keystone_client
3041-import novaclient.v1_1.client as nova_client
3042+from keystoneclient.auth.identity import v3 as keystone_id_v3
3043+from keystoneclient import session as keystone_session
3044+from keystoneclient.v3 import client as keystone_client_v3
3045+
3046+import novaclient.client as nova_client
3047+import pika
3048 import swiftclient
3049
3050 from charmhelpers.contrib.amulet.utils import (
3051@@ -36,6 +42,8 @@
3052 DEBUG = logging.DEBUG
3053 ERROR = logging.ERROR
3054
3055+NOVA_CLIENT_VERSION = "2"
3056+
3057
3058 class OpenStackAmuletUtils(AmuletUtils):
3059 """OpenStack amulet utilities.
3060@@ -137,7 +145,7 @@
3061 return "role {} does not exist".format(e['name'])
3062 return ret
3063
3064- def validate_user_data(self, expected, actual):
3065+ def validate_user_data(self, expected, actual, api_version=None):
3066 """Validate user data.
3067
3068 Validate a list of actual user data vs a list of expected user
3069@@ -148,10 +156,15 @@
3070 for e in expected:
3071 found = False
3072 for act in actual:
3073- a = {'enabled': act.enabled, 'name': act.name,
3074- 'email': act.email, 'tenantId': act.tenantId,
3075- 'id': act.id}
3076- if e['name'] == a['name']:
3077+ if e['name'] == act.name:
3078+ a = {'enabled': act.enabled, 'name': act.name,
3079+ 'email': act.email, 'id': act.id}
3080+ if api_version == 3:
3081+ a['default_project_id'] = getattr(act,
3082+ 'default_project_id',
3083+ 'none')
3084+ else:
3085+ a['tenantId'] = act.tenantId
3086 found = True
3087 ret = self._validate_dict_data(e, a)
3088 if ret:
3089@@ -186,15 +199,30 @@
3090 return cinder_client.Client(username, password, tenant, ept)
3091
3092 def authenticate_keystone_admin(self, keystone_sentry, user, password,
3093- tenant):
3094+ tenant=None, api_version=None,
3095+ keystone_ip=None):
3096 """Authenticates admin user with the keystone admin endpoint."""
3097 self.log.debug('Authenticating keystone admin...')
3098 unit = keystone_sentry
3099- service_ip = unit.relation('shared-db',
3100- 'mysql:shared-db')['private-address']
3101- ep = "http://{}:35357/v2.0".format(service_ip.strip().decode('utf-8'))
3102- return keystone_client.Client(username=user, password=password,
3103- tenant_name=tenant, auth_url=ep)
3104+ if not keystone_ip:
3105+ keystone_ip = unit.relation('shared-db',
3106+ 'mysql:shared-db')['private-address']
3107+ base_ep = "http://{}:35357".format(keystone_ip.strip().decode('utf-8'))
3108+ if not api_version or api_version == 2:
3109+ ep = base_ep + "/v2.0"
3110+ return keystone_client.Client(username=user, password=password,
3111+ tenant_name=tenant, auth_url=ep)
3112+ else:
3113+ ep = base_ep + "/v3"
3114+ auth = keystone_id_v3.Password(
3115+ user_domain_name='admin_domain',
3116+ username=user,
3117+ password=password,
3118+ domain_name='admin_domain',
3119+ auth_url=ep,
3120+ )
3121+ sess = keystone_session.Session(auth=auth)
3122+ return keystone_client_v3.Client(session=sess)
3123
3124 def authenticate_keystone_user(self, keystone, user, password, tenant):
3125 """Authenticates a regular user with the keystone public endpoint."""
3126@@ -223,7 +251,8 @@
3127 self.log.debug('Authenticating nova user ({})...'.format(user))
3128 ep = keystone.service_catalog.url_for(service_type='identity',
3129 endpoint_type='publicURL')
3130- return nova_client.Client(username=user, api_key=password,
3131+ return nova_client.Client(NOVA_CLIENT_VERSION,
3132+ username=user, api_key=password,
3133 project_id=tenant, auth_url=ep)
3134
3135 def authenticate_swift_user(self, keystone, user, password, tenant):
3136@@ -602,3 +631,382 @@
3137 self.log.debug('Ceph {} samples (OK): '
3138 '{}'.format(sample_type, samples))
3139 return None
3140+
3141+ # rabbitmq/amqp specific helpers:
3142+
3143+ def rmq_wait_for_cluster(self, deployment, init_sleep=15, timeout=1200):
3144+ """Wait for rmq units extended status to show cluster readiness,
3145+ after an optional initial sleep period. Initial sleep is likely
3146+ necessary to be effective following a config change, as status
3147+ message may not instantly update to non-ready."""
3148+
3149+ if init_sleep:
3150+ time.sleep(init_sleep)
3151+
3152+ message = re.compile('^Unit is ready and clustered$')
3153+ deployment._auto_wait_for_status(message=message,
3154+ timeout=timeout,
3155+ include_only=['rabbitmq-server'])
3156+
3157+ def add_rmq_test_user(self, sentry_units,
3158+ username="testuser1", password="changeme"):
3159+ """Add a test user via the first rmq juju unit, check connection as
3160+ the new user against all sentry units.
3161+
3162+ :param sentry_units: list of sentry unit pointers
3163+ :param username: amqp user name, default to testuser1
3164+ :param password: amqp user password
3165+ :returns: None if successful. Raise on error.
3166+ """
3167+ self.log.debug('Adding rmq user ({})...'.format(username))
3168+
3169+ # Check that user does not already exist
3170+ cmd_user_list = 'rabbitmqctl list_users'
3171+ output, _ = self.run_cmd_unit(sentry_units[0], cmd_user_list)
3172+ if username in output:
3173+ self.log.warning('User ({}) already exists, returning '
3174+ 'gracefully.'.format(username))
3175+ return
3176+
3177+ perms = '".*" ".*" ".*"'
3178+ cmds = ['rabbitmqctl add_user {} {}'.format(username, password),
3179+ 'rabbitmqctl set_permissions {} {}'.format(username, perms)]
3180+
3181+ # Add user via first unit
3182+ for cmd in cmds:
3183+ output, _ = self.run_cmd_unit(sentry_units[0], cmd)
3184+
3185+ # Check connection against the other sentry_units
3186+ self.log.debug('Checking user connect against units...')
3187+ for sentry_unit in sentry_units:
3188+ connection = self.connect_amqp_by_unit(sentry_unit, ssl=False,
3189+ username=username,
3190+ password=password)
3191+ connection.close()
3192+
3193+ def delete_rmq_test_user(self, sentry_units, username="testuser1"):
3194+ """Delete a rabbitmq user via the first rmq juju unit.
3195+
3196+ :param sentry_units: list of sentry unit pointers
3197+ :param username: amqp user name, default to testuser1
3199+ :returns: None if successful or no such user.
3200+ """
3201+ self.log.debug('Deleting rmq user ({})...'.format(username))
3202+
3203+ # Check that the user exists
3204+ cmd_user_list = 'rabbitmqctl list_users'
3205+ output, _ = self.run_cmd_unit(sentry_units[0], cmd_user_list)
3206+
3207+ if username not in output:
3208+ self.log.warning('User ({}) does not exist, returning '
3209+ 'gracefully.'.format(username))
3210+ return
3211+
3212+ # Delete the user
3213+ cmd_user_del = 'rabbitmqctl delete_user {}'.format(username)
3214+ output, _ = self.run_cmd_unit(sentry_units[0], cmd_user_del)
3215+
3216+ def get_rmq_cluster_status(self, sentry_unit):
3217+ """Execute rabbitmq cluster status command on a unit and return
3218+ the full output.
3219+
3220+ :param unit: sentry unit
3221+ :returns: String containing console output of cluster status command
3222+ """
3223+ cmd = 'rabbitmqctl cluster_status'
3224+ output, _ = self.run_cmd_unit(sentry_unit, cmd)
3225+ self.log.debug('{} cluster_status:\n{}'.format(
3226+ sentry_unit.info['unit_name'], output))
3227+ return str(output)
3228+
3229+ def get_rmq_cluster_running_nodes(self, sentry_unit):
3230+ """Parse rabbitmqctl cluster_status output string, return list of
3231+ running rabbitmq cluster nodes.
3232+
3233+ :param unit: sentry unit
3234+ :returns: List containing node names of running nodes
3235+ """
3236+ # NOTE(beisner): rabbitmqctl cluster_status output is not
3237+ # json-parsable, do string chop foo, then json.loads that.
3238+ str_stat = self.get_rmq_cluster_status(sentry_unit)
3239+ if 'running_nodes' in str_stat:
3240+ pos_start = str_stat.find("{running_nodes,") + 15
3241+ pos_end = str_stat.find("]},", pos_start) + 1
3242+ str_run_nodes = str_stat[pos_start:pos_end].replace("'", '"')
3243+ run_nodes = json.loads(str_run_nodes)
3244+ return run_nodes
3245+ else:
3246+ return []
3247+
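The string-chop in `get_rmq_cluster_running_nodes` works because the Erlang-term output of `rabbitmqctl cluster_status` contains a `{running_nodes,[...]}` tuple whose list becomes valid JSON once single quotes are swapped for double quotes. The same parsing, isolated as a standalone sketch:

```python
import json

def parse_running_nodes(cluster_status):
    # cluster_status output is not JSON; slice out the running_nodes
    # list and quote-swap it so json.loads can handle it.
    if 'running_nodes' not in cluster_status:
        return []
    pos_start = cluster_status.find("{running_nodes,") + 15
    pos_end = cluster_status.find("]},", pos_start) + 1
    str_run_nodes = cluster_status[pos_start:pos_end].replace("'", '"')
    return json.loads(str_run_nodes)
```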
3248+ def validate_rmq_cluster_running_nodes(self, sentry_units):
3249+ """Check that all rmq unit hostnames are represented in the
3250+ cluster_status output of all units.
3251+
3252+ :param host_names: dict of juju unit names to host names
3253+ :param units: list of sentry unit pointers (all rmq units)
3254+ :returns: None if successful, otherwise return error message
3255+ """
3256+ host_names = self.get_unit_hostnames(sentry_units)
3257+ errors = []
3258+
3259+ # Query every unit for cluster_status running nodes
3260+ for query_unit in sentry_units:
3261+ query_unit_name = query_unit.info['unit_name']
3262+ running_nodes = self.get_rmq_cluster_running_nodes(query_unit)
3263+
3264+ # Confirm that every unit is represented in the queried unit's
3265+ # cluster_status running nodes output.
3266+ for validate_unit in sentry_units:
3267+ val_host_name = host_names[validate_unit.info['unit_name']]
3268+ val_node_name = 'rabbit@{}'.format(val_host_name)
3269+
3270+ if val_node_name not in running_nodes:
3271+ errors.append('Cluster member check failed on {}: {} not '
3272+ 'in {}\n'.format(query_unit_name,
3273+ val_node_name,
3274+ running_nodes))
3275+ if errors:
3276+ return ''.join(errors)
3277+
3278+ def rmq_ssl_is_enabled_on_unit(self, sentry_unit, port=None):
3279+ """Check a single juju rmq unit for ssl and port in the config file."""
3280+ host = sentry_unit.info['public-address']
3281+ unit_name = sentry_unit.info['unit_name']
3282+
3283+ conf_file = '/etc/rabbitmq/rabbitmq.config'
3284+ conf_contents = str(self.file_contents_safe(sentry_unit,
3285+ conf_file, max_wait=16))
3286+ # Checks
3287+ conf_ssl = 'ssl' in conf_contents
3288+ conf_port = str(port) in conf_contents
3289+
3290+ # Port explicitly checked in config
3291+ if port and conf_port and conf_ssl:
3292+ self.log.debug('SSL is enabled @{}:{} '
3293+ '({})'.format(host, port, unit_name))
3294+ return True
3295+ elif port and not conf_port and conf_ssl:
3296+ self.log.debug('SSL is enabled @{} but not on port {} '
3297+ '({})'.format(host, port, unit_name))
3298+ return False
3299+ # Port not checked (useful when checking that ssl is disabled)
3300+ elif not port and conf_ssl:
3301+ self.log.debug('SSL is enabled @{}:{} '
3302+ '({})'.format(host, port, unit_name))
3303+ return True
3304+ elif not conf_ssl:
3305+ self.log.debug('SSL not enabled @{}:{} '
3306+ '({})'.format(host, port, unit_name))
3307+ return False
3308+ else:
3309+ msg = ('Unknown condition when checking SSL status @{}:{} '
3310+ '({})'.format(host, port, unit_name))
3311+ amulet.raise_status(amulet.FAIL, msg)
3312+
3313+ def validate_rmq_ssl_enabled_units(self, sentry_units, port=None):
3314+ """Check that ssl is enabled on rmq juju sentry units.
3315+
3316+ :param sentry_units: list of all rmq sentry units
3317+ :param port: optional ssl port override to validate
3318+ :returns: None if successful, otherwise return error message
3319+ """
3320+ for sentry_unit in sentry_units:
3321+ if not self.rmq_ssl_is_enabled_on_unit(sentry_unit, port=port):
3322+ return ('Unexpected condition: ssl is disabled on unit '
3323+ '({})'.format(sentry_unit.info['unit_name']))
3324+ return None
3325+
3326+ def validate_rmq_ssl_disabled_units(self, sentry_units):
3327+ """Check that ssl is enabled on listed rmq juju sentry units.
3328+
3329+ :param sentry_units: list of all rmq sentry units
3330+ :returns: None if successful, otherwise return error message
3331+ """
3332+ for sentry_unit in sentry_units:
3333+ if self.rmq_ssl_is_enabled_on_unit(sentry_unit):
3334+ return ('Unexpected condition: ssl is enabled on unit '
3335+ '({})'.format(sentry_unit.info['unit_name']))
3336+ return None
3337+
3338+ def configure_rmq_ssl_on(self, sentry_units, deployment,
3339+ port=None, max_wait=60):
3340+ """Turn ssl charm config option on, with optional non-default
3341+ ssl port specification. Confirm that it is enabled on every
3342+ unit.
3343+
3344+ :param sentry_units: list of sentry units
3345+ :param deployment: amulet deployment object pointer
3346+ :param port: amqp port, use defaults if None
3347+ :param max_wait: maximum time to wait in seconds to confirm
3348+ :returns: None if successful. Raise on error.
3349+ """
3350+ self.log.debug('Setting ssl charm config option: on')
3351+
3352+ # Enable RMQ SSL
3353+ config = {'ssl': 'on'}
3354+ if port:
3355+ config['ssl_port'] = port
3356+
3357+ deployment.d.configure('rabbitmq-server', config)
3358+
3359+ # Wait for unit status
3360+ self.rmq_wait_for_cluster(deployment)
3361+
3362+ # Confirm
3363+ tries = 0
3364+ ret = self.validate_rmq_ssl_enabled_units(sentry_units, port=port)
3365+ while ret and tries < (max_wait / 4):
3366+ time.sleep(4)
3367+ self.log.debug('Attempt {}: {}'.format(tries, ret))
3368+ ret = self.validate_rmq_ssl_enabled_units(sentry_units, port=port)
3369+ tries += 1
3370+
3371+ if ret:
3372+ amulet.raise_status(amulet.FAIL, ret)
3373+
3374+ def configure_rmq_ssl_off(self, sentry_units, deployment, max_wait=60):
3375+ """Turn ssl charm config option off, confirm that it is disabled
3376+ on every unit.
3377+
3378+ :param sentry_units: list of sentry units
3379+ :param deployment: amulet deployment object pointer
3380+ :param max_wait: maximum time to wait in seconds to confirm
3381+ :returns: None if successful. Raise on error.
3382+ """
3383+ self.log.debug('Setting ssl charm config option: off')
3384+
3385+ # Disable RMQ SSL
3386+ config = {'ssl': 'off'}
3387+ deployment.d.configure('rabbitmq-server', config)
3388+
3389+ # Wait for unit status
3390+ self.rmq_wait_for_cluster(deployment)
3391+
3392+ # Confirm
3393+ tries = 0
3394+ ret = self.validate_rmq_ssl_disabled_units(sentry_units)
3395+ while ret and tries < (max_wait / 4):
3396+ time.sleep(4)
3397+ self.log.debug('Attempt {}: {}'.format(tries, ret))
3398+ ret = self.validate_rmq_ssl_disabled_units(sentry_units)
3399+ tries += 1
3400+
3401+ if ret:
3402+ amulet.raise_status(amulet.FAIL, ret)
3403+
3404+ def connect_amqp_by_unit(self, sentry_unit, ssl=False,
3405+ port=None, fatal=True,
3406+ username="testuser1", password="changeme"):
3407+ """Establish and return a pika amqp connection to the rabbitmq service
3408+ running on a rmq juju unit.
3409+
3410+ :param sentry_unit: sentry unit pointer
3411+ :param ssl: boolean, default to False
3412+ :param port: amqp port, use defaults if None
3413+ :param fatal: boolean, default to True (raises on connect error)
3414+ :param username: amqp user name, default to testuser1
3415+ :param password: amqp user password
3416+ :returns: pika amqp connection pointer or None if failed and non-fatal
3417+ """
3418+ host = sentry_unit.info['public-address']
3419+ unit_name = sentry_unit.info['unit_name']
3420+
3421+ # Default port logic if port is not specified
3422+ if ssl and not port:
3423+ port = 5671
3424+ elif not ssl and not port:
3425+ port = 5672
3426+
3427+ self.log.debug('Connecting to amqp on {}:{} ({}) as '
3428+ '{}...'.format(host, port, unit_name, username))
3429+
3430+ try:
3431+ credentials = pika.PlainCredentials(username, password)
3432+ parameters = pika.ConnectionParameters(host=host, port=port,
3433+ credentials=credentials,
3434+ ssl=ssl,
3435+ connection_attempts=3,
3436+ retry_delay=5,
3437+ socket_timeout=1)
3438+ connection = pika.BlockingConnection(parameters)
3439+ assert connection.server_properties['product'] == 'RabbitMQ'
3440+ self.log.debug('Connect OK')
3441+ return connection
3442+ except Exception as e:
3443+ msg = ('amqp connection failed to {}:{} as '
3444+ '{} ({})'.format(host, port, username, str(e)))
3445+ if fatal:
3446+ amulet.raise_status(amulet.FAIL, msg)
3447+ else:
3448+ self.log.warn(msg)
3449+ return None
3450+
3451+ def publish_amqp_message_by_unit(self, sentry_unit, message,
3452+ queue="test", ssl=False,
3453+ username="testuser1",
3454+ password="changeme",
3455+ port=None):
3456+ """Publish an amqp message to a rmq juju unit.
3457+
3458+ :param sentry_unit: sentry unit pointer
3459+ :param message: amqp message string
3460+ :param queue: message queue, default to test
3461+ :param username: amqp user name, default to testuser1
3462+ :param password: amqp user password
3463+ :param ssl: boolean, default to False
3464+ :param port: amqp port, use defaults if None
3465+ :returns: None. Raises exception if publish failed.
3466+ """
3467+ self.log.debug('Publishing message to {} queue:\n{}'.format(queue,
3468+ message))
3469+ connection = self.connect_amqp_by_unit(sentry_unit, ssl=ssl,
3470+ port=port,
3471+ username=username,
3472+ password=password)
3473+
3474+ # NOTE(beisner): extra debug here re: pika hang potential:
3475+ # https://github.com/pika/pika/issues/297
3476+ # https://groups.google.com/forum/#!topic/rabbitmq-users/Ja0iyfF0Szw
3477+ self.log.debug('Defining channel...')
3478+ channel = connection.channel()
3479+ self.log.debug('Declaring queue...')
3480+ channel.queue_declare(queue=queue, auto_delete=False, durable=True)
3481+ self.log.debug('Publishing message...')
3482+ channel.basic_publish(exchange='', routing_key=queue, body=message)
3483+ self.log.debug('Closing channel...')
3484+ channel.close()
3485+ self.log.debug('Closing connection...')
3486+ connection.close()
3487+
3488+ def get_amqp_message_by_unit(self, sentry_unit, queue="test",
3489+ username="testuser1",
3490+ password="changeme",
3491+ ssl=False, port=None):
3492+ """Get an amqp message from a rmq juju unit.
3493+
3494+ :param sentry_unit: sentry unit pointer
3495+ :param queue: message queue, default to test
3496+ :param username: amqp user name, default to testuser1
3497+ :param password: amqp user password
3498+ :param ssl: boolean, default to False
3499+ :param port: amqp port, use defaults if None
3500+ :returns: amqp message body as string. Raise if get fails.
3501+ """
3502+ connection = self.connect_amqp_by_unit(sentry_unit, ssl=ssl,
3503+ port=port,
3504+ username=username,
3505+ password=password)
3506+ channel = connection.channel()
3507+ method_frame, _, body = channel.basic_get(queue)
3508+
3509+ if method_frame:
3510+ self.log.debug('Retrieved message from {} queue:\n{}'.format(queue,
3511+ body))
3512+ channel.basic_ack(method_frame.delivery_tag)
3513+ channel.close()
3514+ connection.close()
3515+ return body
3516+ else:
3517+ msg = 'No message retrieved.'
3518+ amulet.raise_status(amulet.FAIL, msg)
3519
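Note on the SSL-toggle helper above: it polls `validate_rmq_ssl_disabled_units()` on a fixed interval until the check stops returning an error or the retry budget (`max_wait / 4` attempts) runs out. A minimal, standalone sketch of that bounded-retry pattern (names here are illustrative, not part of charm-helpers):

```python
import time

def wait_until_ok(check, max_wait=16, interval=4):
    # Poll `check` until it returns a falsy value (success) or the
    # retry budget (max_wait / interval attempts) is exhausted.
    # Returns the last error message, or None on success.
    tries = 0
    err = check()
    while err and tries < (max_wait / interval):
        time.sleep(interval)
        err = check()
        tries += 1
    return err

# Example: a check that succeeds on its third call.
calls = {'n': 0}
def flaky_check():
    calls['n'] += 1
    return None if calls['n'] >= 3 else 'ssl still enabled'

result = wait_until_ok(flaky_check, max_wait=1, interval=0.01)
print(result)  # None once the check passes
```

The real helper additionally raises `amulet.FAIL` when the budget is exhausted; this sketch just returns the last error so callers decide.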
3520=== modified file 'hooks/charmhelpers/contrib/openstack/context.py'
3521--- hooks/charmhelpers/contrib/openstack/context.py 2015-07-29 18:07:31 +0000
3522+++ hooks/charmhelpers/contrib/openstack/context.py 2016-05-18 10:01:02 +0000
3523@@ -14,12 +14,13 @@
3524 # You should have received a copy of the GNU Lesser General Public License
3525 # along with charm-helpers. If not, see <http://www.gnu.org/licenses/>.
3526
3527+import glob
3528 import json
3529 import os
3530 import re
3531 import time
3532 from base64 import b64decode
3533-from subprocess import check_call
3534+from subprocess import check_call, CalledProcessError
3535
3536 import six
3537 import yaml
3538@@ -44,16 +45,20 @@
3539 INFO,
3540 WARNING,
3541 ERROR,
3542+ status_set,
3543 )
3544
3545 from charmhelpers.core.sysctl import create as sysctl_create
3546 from charmhelpers.core.strutils import bool_from_string
3547
3548 from charmhelpers.core.host import (
3549+ get_bond_master,
3550+ is_phy_iface,
3551 list_nics,
3552 get_nic_hwaddr,
3553 mkdir,
3554 write_file,
3555+ pwgen,
3556 )
3557 from charmhelpers.contrib.hahelpers.cluster import (
3558 determine_apache_port,
3559@@ -84,6 +89,14 @@
3560 is_bridge_member,
3561 )
3562 from charmhelpers.contrib.openstack.utils import get_host_ip
3563+from charmhelpers.core.unitdata import kv
3564+
3565+try:
3566+ import psutil
3567+except ImportError:
3568+ apt_install('python-psutil', fatal=True)
3569+ import psutil
3570+
3571 CA_CERT_PATH = '/usr/local/share/ca-certificates/keystone_juju_ca_cert.crt'
3572 ADDRESS_TYPES = ['admin', 'internal', 'public']
3573
3574@@ -192,10 +205,50 @@
3575 class OSContextGenerator(object):
3576 """Base class for all context generators."""
3577 interfaces = []
3578+ related = False
3579+ complete = False
3580+ missing_data = []
3581
3582 def __call__(self):
3583 raise NotImplementedError
3584
3585+ def context_complete(self, ctxt):
3586+ """Check the context for missing required data.
3587+ Record any None/empty keys in self.missing_data and return False,
3588+ or set self.complete and return True when nothing is missing.
3589+ """
3590+ # Fresh start
3591+ self.complete = False
3592+ self.missing_data = []
3593+ for k, v in six.iteritems(ctxt):
3594+ if v is None or v == '':
3595+ if k not in self.missing_data:
3596+ self.missing_data.append(k)
3597+
3598+ if self.missing_data:
3599+ self.complete = False
3600+ log('Missing required data: %s' % ' '.join(self.missing_data), level=INFO)
3601+ else:
3602+ self.complete = True
3603+ return self.complete
3604+
3605+ def get_related(self):
3606+ """Check if any of the context interfaces have relation ids.
3607+ Set self.related and return True if one of the interfaces
3608+ has relation ids.
3609+ """
3610+ # Fresh start
3611+ self.related = False
3612+ try:
3613+ for interface in self.interfaces:
3614+ if relation_ids(interface):
3615+ self.related = True
3616+ return self.related
3617+ except AttributeError as e:
3618+ log("{} {}"
3619+ "".format(self, e), 'INFO')
3620+ return self.related
3621+
3622
3623 class SharedDBContext(OSContextGenerator):
3624 interfaces = ['shared-db']
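The hunk above moves `context_complete()` onto the generator so each context instance can record which keys were empty (used later for status reporting). A stripped-down sketch of the same bookkeeping, without the `six.iteritems` compatibility shim (class name is illustrative):

```python
class ContextSketch:
    # Mirrors the new per-instance state on OSContextGenerator.
    complete = False
    missing_data = []

    def context_complete(self, ctxt):
        # Any None/empty value marks the context as incomplete and is
        # recorded so callers can name the missing keys.
        self.missing_data = sorted(k for k, v in ctxt.items()
                                   if v is None or v == '')
        self.complete = not self.missing_data
        return self.complete

gen = ContextSketch()
print(gen.context_complete({'database_host': '10.0.0.1',
                            'database_password': None}))  # False
print(gen.missing_data)  # ['database_password']
print(gen.context_complete({'database_host': '10.0.0.1',
                            'database_password': 's3cret'}))  # True
```

This is why subclasses switch from the module-level `context_complete(ctxt)` to `self.context_complete(ctxt)` throughout the rest of this file.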
3625@@ -211,6 +264,7 @@
3626 self.database = database
3627 self.user = user
3628 self.ssl_dir = ssl_dir
3629+ self.rel_name = self.interfaces[0]
3630
3631 def __call__(self):
3632 self.database = self.database or config('database')
3633@@ -244,6 +298,7 @@
3634 password_setting = self.relation_prefix + '_password'
3635
3636 for rid in relation_ids(self.interfaces[0]):
3637+ self.related = True
3638 for unit in related_units(rid):
3639 rdata = relation_get(rid=rid, unit=unit)
3640 host = rdata.get('db_host')
3641@@ -255,7 +310,7 @@
3642 'database_password': rdata.get(password_setting),
3643 'database_type': 'mysql'
3644 }
3645- if context_complete(ctxt):
3646+ if self.context_complete(ctxt):
3647 db_ssl(rdata, ctxt, self.ssl_dir)
3648 return ctxt
3649 return {}
3650@@ -276,6 +331,7 @@
3651
3652 ctxt = {}
3653 for rid in relation_ids(self.interfaces[0]):
3654+ self.related = True
3655 for unit in related_units(rid):
3656 rel_host = relation_get('host', rid=rid, unit=unit)
3657 rel_user = relation_get('user', rid=rid, unit=unit)
3658@@ -285,7 +341,7 @@
3659 'database_user': rel_user,
3660 'database_password': rel_passwd,
3661 'database_type': 'postgresql'}
3662- if context_complete(ctxt):
3663+ if self.context_complete(ctxt):
3664 return ctxt
3665
3666 return {}
3667@@ -346,6 +402,7 @@
3668 ctxt['signing_dir'] = cachedir
3669
3670 for rid in relation_ids(self.rel_name):
3671+ self.related = True
3672 for unit in related_units(rid):
3673 rdata = relation_get(rid=rid, unit=unit)
3674 serv_host = rdata.get('service_host')
3675@@ -354,6 +411,7 @@
3676 auth_host = format_ipv6_addr(auth_host) or auth_host
3677 svc_protocol = rdata.get('service_protocol') or 'http'
3678 auth_protocol = rdata.get('auth_protocol') or 'http'
3679+ api_version = rdata.get('api_version') or '2.0'
3680 ctxt.update({'service_port': rdata.get('service_port'),
3681 'service_host': serv_host,
3682 'auth_host': auth_host,
3683@@ -362,9 +420,10 @@
3684 'admin_user': rdata.get('service_username'),
3685 'admin_password': rdata.get('service_password'),
3686 'service_protocol': svc_protocol,
3687- 'auth_protocol': auth_protocol})
3688+ 'auth_protocol': auth_protocol,
3689+ 'api_version': api_version})
3690
3691- if context_complete(ctxt):
3692+ if self.context_complete(ctxt):
3693 # NOTE(jamespage) this is required for >= icehouse
3694 # so a missing value just indicates keystone needs
3695 # upgrading
3696@@ -403,6 +462,7 @@
3697 ctxt = {}
3698 for rid in relation_ids(self.rel_name):
3699 ha_vip_only = False
3700+ self.related = True
3701 for unit in related_units(rid):
3702 if relation_get('clustered', rid=rid, unit=unit):
3703 ctxt['clustered'] = True
3704@@ -435,7 +495,7 @@
3705 ha_vip_only = relation_get('ha-vip-only',
3706 rid=rid, unit=unit) is not None
3707
3708- if context_complete(ctxt):
3709+ if self.context_complete(ctxt):
3710 if 'rabbit_ssl_ca' in ctxt:
3711 if not self.ssl_dir:
3712 log("Charm not setup for ssl support but ssl ca "
3713@@ -467,7 +527,7 @@
3714 ctxt['oslo_messaging_flags'] = config_flags_parser(
3715 oslo_messaging_flags)
3716
3717- if not context_complete(ctxt):
3718+ if not self.complete:
3719 return {}
3720
3721 return ctxt
3722@@ -483,13 +543,15 @@
3723
3724 log('Generating template context for ceph', level=DEBUG)
3725 mon_hosts = []
3726- auth = None
3727- key = None
3728- use_syslog = str(config('use-syslog')).lower()
3729+ ctxt = {
3730+ 'use_syslog': str(config('use-syslog')).lower()
3731+ }
3732 for rid in relation_ids('ceph'):
3733 for unit in related_units(rid):
3734- auth = relation_get('auth', rid=rid, unit=unit)
3735- key = relation_get('key', rid=rid, unit=unit)
3736+ if not ctxt.get('auth'):
3737+ ctxt['auth'] = relation_get('auth', rid=rid, unit=unit)
3738+ if not ctxt.get('key'):
3739+ ctxt['key'] = relation_get('key', rid=rid, unit=unit)
3740 ceph_pub_addr = relation_get('ceph-public-address', rid=rid,
3741 unit=unit)
3742 unit_priv_addr = relation_get('private-address', rid=rid,
3743@@ -498,15 +560,12 @@
3744 ceph_addr = format_ipv6_addr(ceph_addr) or ceph_addr
3745 mon_hosts.append(ceph_addr)
3746
3747- ctxt = {'mon_hosts': ' '.join(sorted(mon_hosts)),
3748- 'auth': auth,
3749- 'key': key,
3750- 'use_syslog': use_syslog}
3751+ ctxt['mon_hosts'] = ' '.join(sorted(mon_hosts))
3752
3753 if not os.path.isdir('/etc/ceph'):
3754 os.mkdir('/etc/ceph')
3755
3756- if not context_complete(ctxt):
3757+ if not self.context_complete(ctxt):
3758 return {}
3759
3760 ensure_packages(['ceph-common'])
3761@@ -579,15 +638,28 @@
3762 if config('haproxy-client-timeout'):
3763 ctxt['haproxy_client_timeout'] = config('haproxy-client-timeout')
3764
3765+ if config('haproxy-queue-timeout'):
3766+ ctxt['haproxy_queue_timeout'] = config('haproxy-queue-timeout')
3767+
3768+ if config('haproxy-connect-timeout'):
3769+ ctxt['haproxy_connect_timeout'] = config('haproxy-connect-timeout')
3770+
3771 if config('prefer-ipv6'):
3772 ctxt['ipv6'] = True
3773 ctxt['local_host'] = 'ip6-localhost'
3774 ctxt['haproxy_host'] = '::'
3775- ctxt['stat_port'] = ':::8888'
3776 else:
3777 ctxt['local_host'] = '127.0.0.1'
3778 ctxt['haproxy_host'] = '0.0.0.0'
3779- ctxt['stat_port'] = ':8888'
3780+
3781+ ctxt['stat_port'] = '8888'
3782+
3783+ db = kv()
3784+ ctxt['stat_password'] = db.get('stat-password')
3785+ if not ctxt['stat_password']:
3786+ ctxt['stat_password'] = db.set('stat-password',
3787+ pwgen(32))
3788+ db.flush()
3789
3790 for frontend in cluster_hosts:
3791 if (len(cluster_hosts[frontend]['backends']) > 1 or
3792@@ -878,19 +950,6 @@
3793
3794 return calico_ctxt
3795
3796- def pg_ctxt(self):
3797- driver = neutron_plugin_attribute(self.plugin, 'driver',
3798- self.network_manager)
3799- config = neutron_plugin_attribute(self.plugin, 'config',
3800- self.network_manager)
3801- pg_ctxt = {'core_plugin': driver,
3802- 'neutron_plugin': 'plumgrid',
3803- 'neutron_security_groups': self.neutron_security_groups,
3804- 'local_ip': unit_private_ip(),
3805- 'config': config}
3806-
3807- return pg_ctxt
3808-
3809 def neutron_ctxt(self):
3810 if https():
3811 proto = 'https'
3812@@ -906,6 +965,31 @@
3813 'neutron_url': '%s://%s:%s' % (proto, host, '9696')}
3814 return ctxt
3815
3816+ def pg_ctxt(self):
3817+ driver = neutron_plugin_attribute(self.plugin, 'driver',
3818+ self.network_manager)
3819+ config = neutron_plugin_attribute(self.plugin, 'config',
3820+ self.network_manager)
3821+ ovs_ctxt = {'core_plugin': driver,
3822+ 'neutron_plugin': 'plumgrid',
3823+ 'neutron_security_groups': self.neutron_security_groups,
3824+ 'local_ip': unit_private_ip(),
3825+ 'config': config}
3826+ return ovs_ctxt
3827+
3828+ def midonet_ctxt(self):
3829+ driver = neutron_plugin_attribute(self.plugin, 'driver',
3830+ self.network_manager)
3831+ midonet_config = neutron_plugin_attribute(self.plugin, 'config',
3832+ self.network_manager)
3833+ mido_ctxt = {'core_plugin': driver,
3834+ 'neutron_plugin': 'midonet',
3835+ 'neutron_security_groups': self.neutron_security_groups,
3836+ 'local_ip': unit_private_ip(),
3837+ 'config': midonet_config}
3838+
3839+ return mido_ctxt
3840+
3841 def __call__(self):
3842 if self.network_manager not in ['quantum', 'neutron']:
3843 return {}
3844@@ -927,6 +1011,8 @@
3845 ctxt.update(self.nuage_ctxt())
3846 elif self.plugin == 'plumgrid':
3847 ctxt.update(self.pg_ctxt())
3848+ elif self.plugin == 'midonet':
3849+ ctxt.update(self.midonet_ctxt())
3850
3851 alchemy_flags = config('neutron-alchemy-flags')
3852 if alchemy_flags:
3853@@ -938,7 +1024,6 @@
3854
3855
3856 class NeutronPortContext(OSContextGenerator):
3857- NIC_PREFIXES = ['eth', 'bond']
3858
3859 def resolve_ports(self, ports):
3860 """Resolve NICs not yet bound to bridge(s)
3861@@ -950,7 +1035,18 @@
3862
3863 hwaddr_to_nic = {}
3864 hwaddr_to_ip = {}
3865- for nic in list_nics(self.NIC_PREFIXES):
3866+ for nic in list_nics():
3867+ # Ignore virtual interfaces (bond masters will be identified from
3868+ # their slaves)
3869+ if not is_phy_iface(nic):
3870+ continue
3871+
3872+ _nic = get_bond_master(nic)
3873+ if _nic:
3874+ log("Replacing iface '%s' with bond master '%s'" % (nic, _nic),
3875+ level=DEBUG)
3876+ nic = _nic
3877+
3878 hwaddr = get_nic_hwaddr(nic)
3879 hwaddr_to_nic[hwaddr] = nic
3880 addresses = get_ipv4_addr(nic, fatal=False)
3881@@ -976,7 +1072,8 @@
3882 # trust it to be the real external network).
3883 resolved.append(entry)
3884
3885- return resolved
3886+ # Ensure no duplicates
3887+ return list(set(resolved))
3888
3889
3890 class OSConfigFlagContext(OSContextGenerator):
3891@@ -1016,6 +1113,20 @@
3892 config_flags_parser(config_flags)}
3893
3894
3895+class LibvirtConfigFlagsContext(OSContextGenerator):
3896+ """
3897+ This context provides support for extending
3898+ the libvirt section through user-defined flags.
3899+ """
3900+ def __call__(self):
3901+ ctxt = {}
3902+ libvirt_flags = config('libvirt-flags')
3903+ if libvirt_flags:
3904+ ctxt['libvirt_flags'] = config_flags_parser(
3905+ libvirt_flags)
3906+ return ctxt
3907+
3908+
3909 class SubordinateConfigContext(OSContextGenerator):
3910
3911 """
3912@@ -1048,7 +1159,7 @@
3913
3914 ctxt = {
3915 ... other context ...
3916- 'subordinate_config': {
3917+ 'subordinate_configuration': {
3918 'DEFAULT': {
3919 'key1': 'value1',
3920 },
3921@@ -1066,13 +1177,22 @@
3922 :param config_file : Service's config file to query sections
3923 :param interface : Subordinate interface to inspect
3924 """
3925- self.service = service
3926 self.config_file = config_file
3927- self.interface = interface
3928+ if isinstance(service, list):
3929+ self.services = service
3930+ else:
3931+ self.services = [service]
3932+ if isinstance(interface, list):
3933+ self.interfaces = interface
3934+ else:
3935+ self.interfaces = [interface]
3936
3937 def __call__(self):
3938 ctxt = {'sections': {}}
3939- for rid in relation_ids(self.interface):
3940+ rids = []
3941+ for interface in self.interfaces:
3942+ rids.extend(relation_ids(interface))
3943+ for rid in rids:
3944 for unit in related_units(rid):
3945 sub_config = relation_get('subordinate_configuration',
3946 rid=rid, unit=unit)
3947@@ -1080,33 +1200,37 @@
3948 try:
3949 sub_config = json.loads(sub_config)
3950 except:
3951- log('Could not parse JSON from subordinate_config '
3952- 'setting from %s' % rid, level=ERROR)
3953- continue
3954-
3955- if self.service not in sub_config:
3956- log('Found subordinate_config on %s but it contained'
3957- 'nothing for %s service' % (rid, self.service),
3958- level=INFO)
3959- continue
3960-
3961- sub_config = sub_config[self.service]
3962- if self.config_file not in sub_config:
3963- log('Found subordinate_config on %s but it contained'
3964- 'nothing for %s' % (rid, self.config_file),
3965- level=INFO)
3966- continue
3967-
3968- sub_config = sub_config[self.config_file]
3969- for k, v in six.iteritems(sub_config):
3970- if k == 'sections':
3971- for section, config_dict in six.iteritems(v):
3972- log("adding section '%s'" % (section),
3973- level=DEBUG)
3974- ctxt[k][section] = config_dict
3975- else:
3976- ctxt[k] = v
3977-
3978+ log('Could not parse JSON from '
3979+ 'subordinate_configuration setting from %s'
3980+ % rid, level=ERROR)
3981+ continue
3982+
3983+ for service in self.services:
3984+ if service not in sub_config:
3985+ log('Found subordinate_configuration on %s but it '
3986+ 'contained nothing for %s service'
3987+ % (rid, service), level=INFO)
3988+ continue
3989+
3990+ sub_config = sub_config[service]
3991+ if self.config_file not in sub_config:
3992+ log('Found subordinate_configuration on %s but it '
3993+ 'contained nothing for %s'
3994+ % (rid, self.config_file), level=INFO)
3995+ continue
3996+
3997+ sub_config = sub_config[self.config_file]
3998+ for k, v in six.iteritems(sub_config):
3999+ if k == 'sections':
4000+ for section, config_list in six.iteritems(v):
4001+ log("adding section '%s'" % (section),
4002+ level=DEBUG)
4003+ if ctxt[k].get(section):
4004+ ctxt[k][section].extend(config_list)
4005+ else:
4006+ ctxt[k][section] = config_list
4007+ else:
4008+ ctxt[k] = v
4009 log("%d section(s) found" % (len(ctxt['sections'])), level=DEBUG)
4010 return ctxt
4011
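The rewritten SubordinateConfigContext loop now merges section contents instead of overwriting them, so multiple subordinate services can contribute config to the same section. A condensed sketch of just that merge step (simplified from the code above; the helper name is illustrative):

```python
def merge_sections(existing, incoming):
    # Extend a section's config list if it is already present,
    # otherwise add the section outright -- mirroring how the context
    # above accumulates subordinate_configuration data across units.
    for section, config_list in incoming.items():
        if existing.get(section):
            existing[section].extend(config_list)
        else:
            existing[section] = config_list
    return existing

sections = {'DEFAULT': [['key1', 'value1']]}
merge_sections(sections, {'DEFAULT': [['key2', 'value2']],
                          'glance': [['key3', 'value3']]})
print(sections['DEFAULT'])  # [['key1', 'value1'], ['key2', 'value2']]
```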
4012@@ -1143,13 +1267,11 @@
4013
4014 @property
4015 def num_cpus(self):
4016- try:
4017- from psutil import NUM_CPUS
4018- except ImportError:
4019- apt_install('python-psutil', fatal=True)
4020- from psutil import NUM_CPUS
4021-
4022- return NUM_CPUS
4023+ # NOTE: use cpu_count if present (16.04 support)
4024+ if hasattr(psutil, 'cpu_count'):
4025+ return psutil.cpu_count()
4026+ else:
4027+ return psutil.NUM_CPUS
4028
4029 def __call__(self):
4030 multiplier = config('worker-multiplier') or 0
4031@@ -1283,15 +1405,19 @@
4032 def __call__(self):
4033 ports = config('data-port')
4034 if ports:
4035+ # Map of {port/mac:bridge}
4036 portmap = parse_data_port_mappings(ports)
4037- ports = portmap.values()
4038+ ports = portmap.keys()
4039+ # Resolve provided ports or mac addresses and filter out those
4040+ # already attached to a bridge.
4041 resolved = self.resolve_ports(ports)
4042+ # FIXME: is this necessary?
4043 normalized = {get_nic_hwaddr(port): port for port in resolved
4044 if port not in ports}
4045 normalized.update({port: port for port in resolved
4046 if port in ports})
4047 if resolved:
4048- return {bridge: normalized[port] for bridge, port in
4049+ return {normalized[port]: bridge for port, bridge in
4050 six.iteritems(portmap) if port in normalized.keys()}
4051
4052 return None
4053@@ -1302,12 +1428,22 @@
4054 def __call__(self):
4055 ctxt = {}
4056 mappings = super(PhyNICMTUContext, self).__call__()
4057- if mappings and mappings.values():
4058- ports = mappings.values()
4059+ if mappings and mappings.keys():
4060+ ports = sorted(mappings.keys())
4061 napi_settings = NeutronAPIContext()()
4062 mtu = napi_settings.get('network_device_mtu')
4063+ all_ports = set()
4064+ # If any of ports is a vlan device, its underlying device must have
4065+ # mtu applied first.
4066+ for port in ports:
4067+ for lport in glob.glob("/sys/class/net/%s/lower_*" % port):
4068+ lport = os.path.basename(lport)
4069+ all_ports.add(lport.split('_')[1])
4070+
4071+ all_ports = list(all_ports)
4072+ all_ports.extend(ports)
4073 if mtu:
4074- ctxt["devs"] = '\\n'.join(ports)
4075+ ctxt["devs"] = '\\n'.join(all_ports)
4076 ctxt['mtu'] = mtu
4077
4078 return ctxt
4079@@ -1338,7 +1474,110 @@
4080 rdata.get('service_protocol') or 'http',
4081 'auth_protocol':
4082 rdata.get('auth_protocol') or 'http',
4083+ 'api_version':
4084+ rdata.get('api_version') or '2.0',
4085 }
4086- if context_complete(ctxt):
4087+ if self.context_complete(ctxt):
4088 return ctxt
4089 return {}
4090+
4091+
4092+class InternalEndpointContext(OSContextGenerator):
4093+ """Internal endpoint context.
4094+
4095+ This context provides the endpoint type used for communication between
4096+ services e.g. between Nova and Cinder internally. Openstack uses Public
4097+ endpoints by default so this allows admins to optionally use internal
4098+ endpoints.
4099+ """
4100+ def __call__(self):
4101+ return {'use_internal_endpoints': config('use-internal-endpoints')}
4102+
4103+
4104+class AppArmorContext(OSContextGenerator):
4105+ """Base class for apparmor contexts."""
4106+
4107+ def __init__(self):
4108+ self._ctxt = None
4109+ self.aa_profile = None
4110+ self.aa_utils_packages = ['apparmor-utils']
4111+
4112+ @property
4113+ def ctxt(self):
4114+ if self._ctxt is not None:
4115+ return self._ctxt
4116+ self._ctxt = self._determine_ctxt()
4117+ return self._ctxt
4118+
4119+ def _determine_ctxt(self):
4120+ """
4121+ Validate that the aa-profile-mode setting is disable, enforce, or complain.
4122+
4123+ :return ctxt: Dictionary of the apparmor profile or None
4124+ """
4125+ if config('aa-profile-mode') in ['disable', 'enforce', 'complain']:
4126+ ctxt = {'aa-profile-mode': config('aa-profile-mode')}
4127+ else:
4128+ ctxt = None
4129+ return ctxt
4130+
4131+ def __call__(self):
4132+ return self.ctxt
4133+
4134+ def install_aa_utils(self):
4135+ """
4136+ Install packages required for apparmor configuration.
4137+ """
4138+ log("Installing apparmor utils.")
4139+ ensure_packages(self.aa_utils_packages)
4140+
4141+ def manually_disable_aa_profile(self):
4142+ """
4143+ Manually disable an apparmor profile.
4144+
4145+ If aa-profile-mode is set to disable (the default), this is required:
4146+ the template has been written but apparmor is not yet aware of the
4147+ profile, so aa-disable aa-profile fails. Without this the profile
4148+ would kick into enforce mode on the next service restart.
4149+
4150+ """
4151+ profile_path = '/etc/apparmor.d'
4152+ disable_path = '/etc/apparmor.d/disable'
4153+ if not os.path.lexists(os.path.join(disable_path, self.aa_profile)):
4154+ os.symlink(os.path.join(profile_path, self.aa_profile),
4155+ os.path.join(disable_path, self.aa_profile))
4156+
4157+ def setup_aa_profile(self):
4158+ """
4159+ Setup an apparmor profile.
4160+ The ctxt dictionary will contain the apparmor profile mode and
4161+ the apparmor profile name.
4162+ Makes calls out to aa-disable, aa-complain, or aa-enforce to setup
4163+ the apparmor profile.
4164+ """
4165+ self()
4166+ if not self.ctxt:
4167+ log("Not enabling apparmor profile.")
4168+ return
4169+ self.install_aa_utils()
4170+ cmd = ['aa-{}'.format(self.ctxt['aa-profile-mode'])]
4171+ cmd.append(self.ctxt['aa-profile'])
4172+ log("Setting up the apparmor profile for {} in {} mode."
4173+ "".format(self.ctxt['aa-profile'], self.ctxt['aa-profile-mode']))
4174+ try:
4175+ check_call(cmd)
4176+ except CalledProcessError as e:
4177+ # If aa-profile-mode is set to disabled (default) manual
4178+ # disabling is required as the template has been written but
4179+ # apparmor is yet unaware of the profile and aa-disable aa-profile
4180+ # fails. If aa-disable learns to read profile files first this can
4181+ # be removed.
4182+ if self.ctxt['aa-profile-mode'] == 'disable':
4183+ log("Manually disabling the apparmor profile for {}."
4184+ "".format(self.ctxt['aa-profile']))
4185+ self.manually_disable_aa_profile()
4186+ return
4187+ status_set('blocked', "Apparmor profile {} failed to be set to {}."
4188+ "".format(self.ctxt['aa-profile'],
4189+ self.ctxt['aa-profile-mode']))
4190+ raise e
4191
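The `manually_disable_aa_profile()` helper above relies on apparmor's convention that a profile is disabled by symlinking it into `apparmor.d/disable`, guarded by `lexists()` so a pre-existing (possibly dangling) link is left alone. A runnable sketch of that trick against temporary directories instead of the real `/etc/apparmor.d` (the demo profile name is made up):

```python
import os
import tempfile

def manually_disable(profile, profile_path, disable_path):
    # Symlink the profile into the disable directory; lexists() keeps
    # this idempotent even if a previous link has become dangling.
    target = os.path.join(disable_path, profile)
    if not os.path.lexists(target):
        os.symlink(os.path.join(profile_path, profile), target)

prof_dir = tempfile.mkdtemp()
dis_dir = tempfile.mkdtemp()
manually_disable('usr.bin.demo', prof_dir, dis_dir)
print(os.path.islink(os.path.join(dis_dir, 'usr.bin.demo')))  # True
manually_disable('usr.bin.demo', prof_dir, dis_dir)  # second call is a no-op
```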
4192=== modified file 'hooks/charmhelpers/contrib/openstack/ip.py'
4193--- hooks/charmhelpers/contrib/openstack/ip.py 2015-07-29 18:07:31 +0000
4194+++ hooks/charmhelpers/contrib/openstack/ip.py 2016-05-18 10:01:02 +0000
4195@@ -14,16 +14,19 @@
4196 # You should have received a copy of the GNU Lesser General Public License
4197 # along with charm-helpers. If not, see <http://www.gnu.org/licenses/>.
4198
4199+
4200 from charmhelpers.core.hookenv import (
4201 config,
4202 unit_get,
4203 service_name,
4204+ network_get_primary_address,
4205 )
4206 from charmhelpers.contrib.network.ip import (
4207 get_address_in_network,
4208 is_address_in_network,
4209 is_ipv6,
4210 get_ipv6_addr,
4211+ resolve_network_cidr,
4212 )
4213 from charmhelpers.contrib.hahelpers.cluster import is_clustered
4214
4215@@ -33,16 +36,19 @@
4216
4217 ADDRESS_MAP = {
4218 PUBLIC: {
4219+ 'binding': 'public',
4220 'config': 'os-public-network',
4221 'fallback': 'public-address',
4222 'override': 'os-public-hostname',
4223 },
4224 INTERNAL: {
4225+ 'binding': 'internal',
4226 'config': 'os-internal-network',
4227 'fallback': 'private-address',
4228 'override': 'os-internal-hostname',
4229 },
4230 ADMIN: {
4231+ 'binding': 'admin',
4232 'config': 'os-admin-network',
4233 'fallback': 'private-address',
4234 'override': 'os-admin-hostname',
4235@@ -110,7 +116,7 @@
4236 correct network. If clustered with no nets defined, return primary vip.
4237
4238 If not clustered, return unit address ensuring address is on configured net
4239- split if one is configured.
4240+ split if one is configured, or a Juju 2.0 extra-binding has been used.
4241
4242 :param endpoint_type: Network endpoint type
4243 """
4244@@ -125,23 +131,45 @@
4245 net_type = ADDRESS_MAP[endpoint_type]['config']
4246 net_addr = config(net_type)
4247 net_fallback = ADDRESS_MAP[endpoint_type]['fallback']
4248+ binding = ADDRESS_MAP[endpoint_type]['binding']
4249 clustered = is_clustered()
4250- if clustered:
4251- if not net_addr:
4252- # If no net-splits defined, we expect a single vip
4253- resolved_address = vips[0]
4254- else:
4255+
4256+ if clustered and vips:
4257+ if net_addr:
4258 for vip in vips:
4259 if is_address_in_network(net_addr, vip):
4260 resolved_address = vip
4261 break
4262+ else:
4263+ # NOTE: endeavour to check vips against network space
4264+ # bindings
4265+ try:
4266+ bound_cidr = resolve_network_cidr(
4267+ network_get_primary_address(binding)
4268+ )
4269+ for vip in vips:
4270+ if is_address_in_network(bound_cidr, vip):
4271+ resolved_address = vip
4272+ break
4273+ except NotImplementedError:
4275+ # If no net-splits are configured and there is no support
4276+ # for extra bindings/network spaces, we expect a single vip
4276+ resolved_address = vips[0]
4277 else:
4278 if config('prefer-ipv6'):
4279 fallback_addr = get_ipv6_addr(exc_list=vips)[0]
4280 else:
4281 fallback_addr = unit_get(net_fallback)
4282
4283- resolved_address = get_address_in_network(net_addr, fallback_addr)
4284+ if net_addr:
4285+ resolved_address = get_address_in_network(net_addr, fallback_addr)
4286+ else:
4287+ # NOTE: only try to use extra bindings if legacy network
4288+ # configuration is not in use
4289+ try:
4290+ resolved_address = network_get_primary_address(binding)
4291+ except NotImplementedError:
4292+ resolved_address = fallback_addr
4293
4294 if resolved_address is None:
4295 raise ValueError("Unable to resolve a suitable IP address based on "
4296
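The non-clustered branch of `resolve_address()` above now consults Juju 2.0 extra-bindings only when no legacy `os-*-network` option is set, trapping `NotImplementedError` on older Juju. A simplified sketch of that precedence (`network_get` here is a stand-in for `network_get_primary_address`, and the legacy branch omits the `get_address_in_network()` narrowing):

```python
def pick_address(net_addr, binding, fallback_addr, network_get):
    # Legacy net-split config takes precedence; otherwise try the
    # network space binding, and fall back to the unit address when
    # the Juju version has no network-get support.
    if net_addr:
        # Real code narrows fallback_addr with get_address_in_network().
        return fallback_addr
    try:
        return network_get(binding)
    except NotImplementedError:
        return fallback_addr

def old_juju(binding):
    # Models Juju < 2.0, where network-get is unavailable.
    raise NotImplementedError

print(pick_address(None, 'public', '10.0.0.5', old_juju))  # 10.0.0.5
print(pick_address(None, 'public', '10.0.0.5',
                   lambda b: '172.16.0.2'))  # 172.16.0.2
```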
4297=== modified file 'hooks/charmhelpers/contrib/openstack/neutron.py'
4298--- hooks/charmhelpers/contrib/openstack/neutron.py 2015-10-02 15:06:23 +0000
4299+++ hooks/charmhelpers/contrib/openstack/neutron.py 2016-05-18 10:01:02 +0000
4300@@ -50,7 +50,7 @@
4301 if kernel_version() >= (3, 13):
4302 return []
4303 else:
4304- return ['openvswitch-datapath-dkms']
4305+ return [headers_package(), 'openvswitch-datapath-dkms']
4306
4307
4308 # legacy
4309@@ -70,7 +70,7 @@
4310 relation_prefix='neutron',
4311 ssl_dir=QUANTUM_CONF_DIR)],
4312 'services': ['quantum-plugin-openvswitch-agent'],
4313- 'packages': [[headers_package()] + determine_dkms_package(),
4314+ 'packages': [determine_dkms_package(),
4315 ['quantum-plugin-openvswitch-agent']],
4316 'server_packages': ['quantum-server',
4317 'quantum-plugin-openvswitch'],
4318@@ -111,7 +111,7 @@
4319 relation_prefix='neutron',
4320 ssl_dir=NEUTRON_CONF_DIR)],
4321 'services': ['neutron-plugin-openvswitch-agent'],
4322- 'packages': [[headers_package()] + determine_dkms_package(),
4323+ 'packages': [determine_dkms_package(),
4324 ['neutron-plugin-openvswitch-agent']],
4325 'server_packages': ['neutron-server',
4326 'neutron-plugin-openvswitch'],
4327@@ -155,7 +155,7 @@
4328 relation_prefix='neutron',
4329 ssl_dir=NEUTRON_CONF_DIR)],
4330 'services': [],
4331- 'packages': [[headers_package()] + determine_dkms_package(),
4332+ 'packages': [determine_dkms_package(),
4333 ['neutron-plugin-cisco']],
4334 'server_packages': ['neutron-server',
4335 'neutron-plugin-cisco'],
4336@@ -174,7 +174,7 @@
4337 'neutron-dhcp-agent',
4338 'nova-api-metadata',
4339 'etcd'],
4340- 'packages': [[headers_package()] + determine_dkms_package(),
4341+ 'packages': [determine_dkms_package(),
4342 ['calico-compute',
4343 'bird',
4344 'neutron-dhcp-agent',
4345@@ -209,6 +209,20 @@
4346 'server_packages': ['neutron-server',
4347 'neutron-plugin-plumgrid'],
4348 'server_services': ['neutron-server']
4349+ },
4350+ 'midonet': {
4351+ 'config': '/etc/neutron/plugins/midonet/midonet.ini',
4352+ 'driver': 'midonet.neutron.plugin.MidonetPluginV2',
4353+ 'contexts': [
4354+ context.SharedDBContext(user=config('neutron-database-user'),
4355+ database=config('neutron-database'),
4356+ relation_prefix='neutron',
4357+ ssl_dir=NEUTRON_CONF_DIR)],
4358+ 'services': [],
4359+ 'packages': [determine_dkms_package()],
4360+ 'server_packages': ['neutron-server',
4361+ 'python-neutron-plugin-midonet'],
4362+ 'server_services': ['neutron-server']
4363 }
4364 }
4365 if release >= 'icehouse':
4366@@ -219,6 +233,20 @@
4367 'neutron-plugin-ml2']
4368 # NOTE: patch in vmware renames nvp->nsx for icehouse onwards
4369 plugins['nvp'] = plugins['nsx']
4370+ if release >= 'kilo':
4371+ plugins['midonet']['driver'] = (
4372+ 'neutron.plugins.midonet.plugin.MidonetPluginV2')
4373+ if release >= 'liberty':
4374+ plugins['midonet']['driver'] = (
4375+ 'midonet.neutron.plugin_v1.MidonetPluginV2')
4376+ plugins['midonet']['server_packages'].remove(
4377+ 'python-neutron-plugin-midonet')
4378+ plugins['midonet']['server_packages'].append(
4379+ 'python-networking-midonet')
4380+ plugins['plumgrid']['driver'] = (
4381+ 'networking_plumgrid.neutron.plugins.plugin.NeutronPluginPLUMgridV2')
4382+ plugins['plumgrid']['server_packages'].remove(
4383+ 'neutron-plugin-plumgrid')
4384 return plugins
4385
4386
4387@@ -269,17 +297,30 @@
4388 return 'neutron'
4389
4390
4391-def parse_mappings(mappings):
4392+def parse_mappings(mappings, key_rvalue=False):
4393+ """By default mappings are lvalue keyed.
4394+
4395+ If key_rvalue is True, the mapping will be reversed to allow multiple
4396+ configs for the same lvalue.
4397+ """
4398 parsed = {}
4399 if mappings:
4400 mappings = mappings.split()
4401 for m in mappings:
4402 p = m.partition(':')
4403- key = p[0].strip()
4404- if p[1]:
4405- parsed[key] = p[2].strip()
4406+
4407+ if key_rvalue:
4408+ key_index = 2
4409+ val_index = 0
4410+ # if there is no rvalue skip to next
4411+ if not p[1]:
4412+ continue
4413 else:
4414- parsed[key] = ''
4415+ key_index = 0
4416+ val_index = 2
4417+
4418+ key = p[key_index].strip()
4419+ parsed[key] = p[val_index].strip()
4420
4421 return parsed
4422
4423@@ -297,25 +338,25 @@
4424 def parse_data_port_mappings(mappings, default_bridge='br-data'):
4425 """Parse data port mappings.
4426
4427- Mappings must be a space-delimited list of bridge:port mappings.
4428+ Mappings must be a space-delimited list of bridge:port.
4429
4430- Returns dict of the form {bridge:port}.
4431+ Returns dict of the form {port:bridge} where ports may be mac addresses or
4432+ interface names.
4433 """
4434- _mappings = parse_mappings(mappings)
4435+
4436+ # NOTE(dosaboy): we use rvalue for key to allow multiple values to be
4437+ # proposed for <port> since it may be a mac address which will differ
4438+ # across units, thus allowing first-known-good to be chosen.
4439+ _mappings = parse_mappings(mappings, key_rvalue=True)
4440 if not _mappings or list(_mappings.values()) == ['']:
4441 if not mappings:
4442 return {}
4443
4444 # For backwards-compatibility we need to support port-only provided in
4445 # config.
4446- _mappings = {default_bridge: mappings.split()[0]}
4447-
4448- bridges = _mappings.keys()
4449- ports = _mappings.values()
4450- if len(set(bridges)) != len(bridges):
4451- raise Exception("It is not allowed to have more than one port "
4452- "configured on the same bridge")
4453-
4454+ _mappings = {mappings.split()[0]: default_bridge}
4455+
4456+ ports = _mappings.keys()
4457 if len(set(ports)) != len(ports):
4458 raise Exception("It is not allowed to have the same port configured "
4459 "on more than one bridge")
4460
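A condensed, self-contained sketch of the reshaped helper (the simplifications are mine): the returned dict is now keyed by port, `str.partition` keeps MAC addresses intact because only the first ':' splits, and a bare port falls back to the default bridge:

```python
def parse_data_port_mappings(mappings, default_bridge='br-data'):
    """Condensed sketch of the rewritten helper: returns {port: bridge}."""
    parsed = {}
    for m in (mappings or '').split():
        bridge, sep, port = m.partition(':')  # splits at the FIRST ':'
        if sep:
            parsed[port.strip()] = bridge.strip()
    if not parsed:
        if not mappings:
            return {}
        # backwards-compat: a bare port maps onto the default bridge
        parsed = {mappings.split()[0]: default_bridge}
    return parsed

assert parse_data_port_mappings('') == {}
assert parse_data_port_mappings('eth0') == {'eth0': 'br-data'}
# MAC addresses survive: only the first ':' separates bridge from port
assert parse_data_port_mappings('br-ex:eth0 br-data:aa:bb:cc:dd:ee:ff') == \
    {'eth0': 'br-ex', 'aa:bb:cc:dd:ee:ff': 'br-data'}
```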
4461=== modified file 'hooks/charmhelpers/contrib/openstack/templating.py'
4462--- hooks/charmhelpers/contrib/openstack/templating.py 2015-07-29 18:07:31 +0000
4463+++ hooks/charmhelpers/contrib/openstack/templating.py 2016-05-18 10:01:02 +0000
4464@@ -18,7 +18,7 @@
4465
4466 import six
4467
4468-from charmhelpers.fetch import apt_install
4469+from charmhelpers.fetch import apt_install, apt_update
4470 from charmhelpers.core.hookenv import (
4471 log,
4472 ERROR,
4473@@ -29,6 +29,7 @@
4474 try:
4475 from jinja2 import FileSystemLoader, ChoiceLoader, Environment, exceptions
4476 except ImportError:
4477+ apt_update(fatal=True)
4478 apt_install('python-jinja2', fatal=True)
4479 from jinja2 import FileSystemLoader, ChoiceLoader, Environment, exceptions
4480
4481@@ -112,7 +113,7 @@
4482
4483 def complete_contexts(self):
4484 '''
4485- Return a list of interfaces that have atisfied contexts.
4486+ Return a list of interfaces that have satisfied contexts.
4487 '''
4488 if self._complete_contexts:
4489 return self._complete_contexts
4490@@ -293,3 +294,30 @@
4491 [interfaces.extend(i.complete_contexts())
4492 for i in six.itervalues(self.templates)]
4493 return interfaces
4494+
4495+ def get_incomplete_context_data(self, interfaces):
4496+ '''
4497+ Return dictionary of relation status of interfaces and any missing
4498+ required context data. Example:
4499+ {'amqp': {'missing_data': ['rabbitmq_password'], 'related': True},
4500+ 'zeromq-configuration': {'related': False}}
4501+ '''
4502+ incomplete_context_data = {}
4503+
4504+ for i in six.itervalues(self.templates):
4505+ for context in i.contexts:
4506+ for interface in interfaces:
4507+ related = False
4508+ if interface in context.interfaces:
4509+ related = context.get_related()
4510+ missing_data = context.missing_data
4511+ if missing_data:
4512+ incomplete_context_data[interface] = {'missing_data': missing_data}
4513+ if related:
4514+ if incomplete_context_data.get(interface):
4515+ incomplete_context_data[interface].update({'related': True})
4516+ else:
4517+ incomplete_context_data[interface] = {'related': True}
4518+ else:
4519+ incomplete_context_data[interface] = {'related': False}
4520+ return incomplete_context_data
4521
4522=== modified file 'hooks/charmhelpers/contrib/openstack/utils.py'
4523--- hooks/charmhelpers/contrib/openstack/utils.py 2015-07-29 18:07:31 +0000
4524+++ hooks/charmhelpers/contrib/openstack/utils.py 2016-05-18 10:01:02 +0000
4525@@ -1,5 +1,3 @@
4526-#!/usr/bin/python
4527-
4528 # Copyright 2014-2015 Canonical Limited.
4529 #
4530 # This file is part of charm-helpers.
4531@@ -24,8 +22,14 @@
4532 import json
4533 import os
4534 import sys
4535+import re
4536+import itertools
4537+import functools
4538
4539 import six
4540+import tempfile
4541+import traceback
4542+import uuid
4543 import yaml
4544
4545 from charmhelpers.contrib.network import ip
4546@@ -35,12 +39,18 @@
4547 )
4548
4549 from charmhelpers.core.hookenv import (
4550+ action_fail,
4551+ action_set,
4552 config,
4553 log as juju_log,
4554 charm_dir,
4555+ DEBUG,
4556 INFO,
4557+ related_units,
4558 relation_ids,
4559- relation_set
4560+ relation_set,
4561+ status_set,
4562+ hook_name
4563 )
4564
4565 from charmhelpers.contrib.storage.linux.lvm import (
4566@@ -50,7 +60,9 @@
4567 )
4568
4569 from charmhelpers.contrib.network.ip import (
4570- get_ipv6_addr
4571+ get_ipv6_addr,
4572+ is_ipv6,
4573+ port_has_listener,
4574 )
4575
4576 from charmhelpers.contrib.python.packages import (
4577@@ -58,7 +70,15 @@
4578 pip_install,
4579 )
4580
4581-from charmhelpers.core.host import lsb_release, mounts, umount
4582+from charmhelpers.core.host import (
4583+ lsb_release,
4584+ mounts,
4585+ umount,
4586+ service_running,
4587+ service_pause,
4588+ service_resume,
4589+ restart_on_change_helper,
4590+)
4591 from charmhelpers.fetch import apt_install, apt_cache, install_remote
4592 from charmhelpers.contrib.storage.linux.utils import is_block_device, zap_disk
4593 from charmhelpers.contrib.storage.linux.loopback import ensure_loopback_device
4594@@ -69,7 +89,6 @@
4595 DISTRO_PROPOSED = ('deb http://archive.ubuntu.com/ubuntu/ %s-proposed '
4596 'restricted main multiverse universe')
4597
4598-
4599 UBUNTU_OPENSTACK_RELEASE = OrderedDict([
4600 ('oneiric', 'diablo'),
4601 ('precise', 'essex'),
4602@@ -80,6 +99,7 @@
4603 ('utopic', 'juno'),
4604 ('vivid', 'kilo'),
4605 ('wily', 'liberty'),
4606+ ('xenial', 'mitaka'),
4607 ])
4608
4609
4610@@ -93,31 +113,74 @@
4611 ('2014.2', 'juno'),
4612 ('2015.1', 'kilo'),
4613 ('2015.2', 'liberty'),
4614+ ('2016.1', 'mitaka'),
4615 ])
4616
4617-# The ugly duckling
4618+# The ugly duckling - must list releases oldest to newest
4619 SWIFT_CODENAMES = OrderedDict([
4620- ('1.4.3', 'diablo'),
4621- ('1.4.8', 'essex'),
4622- ('1.7.4', 'folsom'),
4623- ('1.8.0', 'grizzly'),
4624- ('1.7.7', 'grizzly'),
4625- ('1.7.6', 'grizzly'),
4626- ('1.10.0', 'havana'),
4627- ('1.9.1', 'havana'),
4628- ('1.9.0', 'havana'),
4629- ('1.13.1', 'icehouse'),
4630- ('1.13.0', 'icehouse'),
4631- ('1.12.0', 'icehouse'),
4632- ('1.11.0', 'icehouse'),
4633- ('2.0.0', 'juno'),
4634- ('2.1.0', 'juno'),
4635- ('2.2.0', 'juno'),
4636- ('2.2.1', 'kilo'),
4637- ('2.2.2', 'kilo'),
4638- ('2.3.0', 'liberty'),
4639+ ('diablo',
4640+ ['1.4.3']),
4641+ ('essex',
4642+ ['1.4.8']),
4643+ ('folsom',
4644+ ['1.7.4']),
4645+ ('grizzly',
4646+ ['1.7.6', '1.7.7', '1.8.0']),
4647+ ('havana',
4648+ ['1.9.0', '1.9.1', '1.10.0']),
4649+ ('icehouse',
4650+ ['1.11.0', '1.12.0', '1.13.0', '1.13.1']),
4651+ ('juno',
4652+ ['2.0.0', '2.1.0', '2.2.0']),
4653+ ('kilo',
4654+ ['2.2.1', '2.2.2']),
4655+ ('liberty',
4656+ ['2.3.0', '2.4.0', '2.5.0']),
4657+ ('mitaka',
4658+ ['2.5.0', '2.6.0', '2.7.0']),
4659 ])
4660
4661+# >= Liberty version->codename mapping
4662+PACKAGE_CODENAMES = {
4663+ 'nova-common': OrderedDict([
4664+ ('12.0', 'liberty'),
4665+ ('13.0', 'mitaka'),
4666+ ]),
4667+ 'neutron-common': OrderedDict([
4668+ ('7.0', 'liberty'),
4669+ ('8.0', 'mitaka'),
4670+ ]),
4671+ 'cinder-common': OrderedDict([
4672+ ('7.0', 'liberty'),
4673+ ('8.0', 'mitaka'),
4674+ ]),
4675+ 'keystone': OrderedDict([
4676+ ('8.0', 'liberty'),
4677+ ('8.1', 'liberty'),
4678+ ('9.0', 'mitaka'),
4679+ ]),
4680+ 'horizon-common': OrderedDict([
4681+ ('8.0', 'liberty'),
4682+ ('9.0', 'mitaka'),
4683+ ]),
4684+ 'ceilometer-common': OrderedDict([
4685+ ('5.0', 'liberty'),
4686+ ('6.0', 'mitaka'),
4687+ ]),
4688+ 'heat-common': OrderedDict([
4689+ ('5.0', 'liberty'),
4690+ ('6.0', 'mitaka'),
4691+ ]),
4692+ 'glance-common': OrderedDict([
4693+ ('11.0', 'liberty'),
4694+ ('12.0', 'mitaka'),
4695+ ]),
4696+ 'openstack-dashboard': OrderedDict([
4697+ ('8.0', 'liberty'),
4698+ ('9.0', 'mitaka'),
4699+ ]),
4700+}
4701+
4702 DEFAULT_LOOPBACK_SIZE = '5G'
4703
4704
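The SWIFT_CODENAMES table is now keyed by codename with a list of versions, which lets one version ('2.5.0') appear under two codenames; this overlap is why `get_swift_codename()` below has to disambiguate via the configured install source. A small illustration using an excerpt of the restructured table:

```python
from collections import OrderedDict

# excerpt of the restructured SWIFT_CODENAMES table
SWIFT_CODENAMES = OrderedDict([
    ('kilo', ['2.2.1', '2.2.2']),
    ('liberty', ['2.3.0', '2.4.0', '2.5.0']),
    ('mitaka', ['2.5.0', '2.6.0', '2.7.0']),
])

def codenames_for(version):
    """All codenames whose version list contains `version`."""
    return [k for k, v in SWIFT_CODENAMES.items() if version in v]

assert codenames_for('2.2.2') == ['kilo']                # unambiguous
assert codenames_for('2.5.0') == ['liberty', 'mitaka']   # needs apt policy
```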
4705@@ -167,9 +230,9 @@
4706 error_out(e)
4707
4708
4709-def get_os_version_codename(codename):
4710+def get_os_version_codename(codename, version_map=OPENSTACK_CODENAMES):
4711 '''Determine OpenStack version number from codename.'''
4712- for k, v in six.iteritems(OPENSTACK_CODENAMES):
4713+ for k, v in six.iteritems(version_map):
4714 if v == codename:
4715 return k
4716 e = 'Could not derive OpenStack version for '\
4717@@ -177,6 +240,33 @@
4718 error_out(e)
4719
4720
4721+def get_os_version_codename_swift(codename):
4722+ '''Determine OpenStack version number of swift from codename.'''
4723+ for k, v in six.iteritems(SWIFT_CODENAMES):
4724+ if k == codename:
4725+ return v[-1]
4726+ e = 'Could not derive swift version for '\
4727+ 'codename: %s' % codename
4728+ error_out(e)
4729+
4730+
4731+def get_swift_codename(version):
4732+ '''Determine OpenStack codename that corresponds to swift version.'''
4733+ codenames = [k for k, v in six.iteritems(SWIFT_CODENAMES) if version in v]
4734+ if len(codenames) > 1:
4735+ # If more than one release codename contains this version we determine
4736+ # the actual codename based on the highest available install source.
4737+ for codename in reversed(codenames):
4738+ releases = UBUNTU_OPENSTACK_RELEASE
4739+ release = [k for k, v in six.iteritems(releases) if codename in v]
4740+ ret = subprocess.check_output(['apt-cache', 'policy', 'swift'])
4741+ if codename in ret or release[0] in ret:
4742+ return codename
4743+ elif len(codenames) == 1:
4744+ return codenames[0]
4745+ return None
4746+
4747+
4748 def get_os_codename_package(package, fatal=True):
4749 '''Derive OpenStack release codename from an installed package.'''
4750 import apt_pkg as apt
4751@@ -201,20 +291,33 @@
4752 error_out(e)
4753
4754 vers = apt.upstream_version(pkg.current_ver.ver_str)
4755-
4756- try:
4757- if 'swift' in pkg.name:
4758- swift_vers = vers[:5]
4759- if swift_vers not in SWIFT_CODENAMES:
4760- # Deal with 1.10.0 upward
4761- swift_vers = vers[:6]
4762- return SWIFT_CODENAMES[swift_vers]
4763- else:
4764- vers = vers[:6]
4765- return OPENSTACK_CODENAMES[vers]
4766- except KeyError:
4767- e = 'Could not determine OpenStack codename for version %s' % vers
4768- error_out(e)
4769+ if 'swift' in pkg.name:
4770+ # Fully x.y.z match for swift versions
4771+ # Full x.y.z match for swift versions
4772+ else:
4773+ # x.y match only for 20XX.X
4774+ # and ignore patch level for other packages
4775+ match = re.match('^(\d+)\.(\d+)', vers)
4776+
4777+ if match:
4778+ vers = match.group(0)
4779+
4780+ # >= Liberty independent project versions
4781+ if (package in PACKAGE_CODENAMES and
4782+ vers in PACKAGE_CODENAMES[package]):
4783+ return PACKAGE_CODENAMES[package][vers]
4784+ else:
4785+ # < Liberty co-ordinated project versions
4786+ try:
4787+ if 'swift' in pkg.name:
4788+ return get_swift_codename(vers)
4789+ else:
4790+ return OPENSTACK_CODENAMES[vers]
4791+ except KeyError:
4792+ if not fatal:
4793+ return None
4794+ e = 'Could not determine OpenStack codename for version %s' % vers
4795+ error_out(e)
4796
4797
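The two regexes introduced above normalize upstream version strings before any table lookup: full `x.y.z` for swift, `x.y` for everything else (which also keeps the `20XX.X` co-ordinated versions intact). A sketch with illustrative inputs (the helper name is mine):

```python
import re

def normalize_version(vers, swift=False):
    """Truncate a package version the way the lookups above expect."""
    if swift:
        match = re.match(r'^(\d+)\.(\d+)\.(\d+)', vers)  # full x.y.z
    else:
        match = re.match(r'^(\d+)\.(\d+)', vers)         # x.y only
    return match.group(0) if match else vers

assert normalize_version('2.5.0-0ubuntu1', swift=True) == '2.5.0'
assert normalize_version('12.0.0~b1') == '12.0'    # liberty-style nova
assert normalize_version('2015.1.0') == '2015.1'   # kilo co-ordinated
```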
4798 def get_os_version_package(pkg, fatal=True):
4799@@ -226,12 +329,14 @@
4800
4801 if 'swift' in pkg:
4802 vers_map = SWIFT_CODENAMES
4803+ for cname, version in six.iteritems(vers_map):
4804+ if cname == codename:
4805+ return version[-1]
4806 else:
4807 vers_map = OPENSTACK_CODENAMES
4808-
4809- for version, cname in six.iteritems(vers_map):
4810- if cname == codename:
4811- return version
4812+ for version, cname in six.iteritems(vers_map):
4813+ if cname == codename:
4814+ return version
4815 # e = "Could not determine OpenStack version for package: %s" % pkg
4816 # error_out(e)
4817
4818@@ -256,12 +361,42 @@
4819
4820
4821 def import_key(keyid):
4822- cmd = "apt-key adv --keyserver hkp://keyserver.ubuntu.com:80 " \
4823- "--recv-keys %s" % keyid
4824- try:
4825- subprocess.check_call(cmd.split(' '))
4826- except subprocess.CalledProcessError:
4827- error_out("Error importing repo key %s" % keyid)
4828+ key = keyid.strip()
4829+ if (key.startswith('-----BEGIN PGP PUBLIC KEY BLOCK-----') and
4830+ key.endswith('-----END PGP PUBLIC KEY BLOCK-----')):
4831+ juju_log("PGP key found (looks like ASCII Armor format)", level=DEBUG)
4832+ juju_log("Importing ASCII Armor PGP key", level=DEBUG)
4833+ with tempfile.NamedTemporaryFile() as keyfile:
4834+ with open(keyfile.name, 'w') as fd:
4835+ fd.write(key)
4836+ fd.write("\n")
4837+
4838+ cmd = ['apt-key', 'add', keyfile.name]
4839+ try:
4840+ subprocess.check_call(cmd)
4841+ except subprocess.CalledProcessError:
4842+ error_out("Error importing PGP key '%s'" % key)
4843+ else:
4844+ juju_log("PGP key found (looks like Radix64 format)", level=DEBUG)
4845+ juju_log("Importing PGP key from keyserver", level=DEBUG)
4846+ cmd = ['apt-key', 'adv', '--keyserver',
4847+ 'hkp://keyserver.ubuntu.com:80', '--recv-keys', key]
4848+ try:
4849+ subprocess.check_call(cmd)
4850+ except subprocess.CalledProcessError:
4851+ error_out("Error importing PGP key '%s'" % key)
4852+
4853+
4854+def get_source_and_pgp_key(input):
4855+ """Look for a pgp key ID or ascii-armor key in the given input."""
4856+ index = input.strip()
4857+ index = input.rfind('|')
4858+ if index < 0:
4859+ return input, None
4860+
4861+ key = input[index + 1:].strip('|')
4862+ source = input[:index]
4863+ return source, key
4864
4865
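Setting aside the stray `index = input.strip()` assignment (its result is immediately overwritten), the splitter's behaviour boils down to: everything after the last '|' is treated as a key, otherwise the whole string is the source. A simplified sketch (the example URL is hypothetical):

```python
def get_source_and_pgp_key(value):
    """Simplified sketch of the source/key splitter added above."""
    index = value.rfind('|')
    if index < 0:
        return value, None          # no key embedded in the source
    return value[:index], value[index + 1:].strip('|')

assert get_source_and_pgp_key('ppa:foo/bar') == ('ppa:foo/bar', None)
src, key = get_source_and_pgp_key(
    'deb http://archive.example.com/ubuntu trusty main|DEADBEEF')
assert src == 'deb http://archive.example.com/ubuntu trusty main'
assert key == 'DEADBEEF'
```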
4866 def configure_installation_source(rel):
4867@@ -273,16 +408,16 @@
4868 with open('/etc/apt/sources.list.d/juju_deb.list', 'w') as f:
4869 f.write(DISTRO_PROPOSED % ubuntu_rel)
4870 elif rel[:4] == "ppa:":
4871- src = rel
4872+ src, key = get_source_and_pgp_key(rel)
4873+ if key:
4874+ import_key(key)
4875+
4876 subprocess.check_call(["add-apt-repository", "-y", src])
4877 elif rel[:3] == "deb":
4878- l = len(rel.split('|'))
4879- if l == 2:
4880- src, key = rel.split('|')
4881- juju_log("Importing PPA key from keyserver for %s" % src)
4882+ src, key = get_source_and_pgp_key(rel)
4883+ if key:
4884 import_key(key)
4885- elif l == 1:
4886- src = rel
4887+
4888 with open('/etc/apt/sources.list.d/juju_deb.list', 'w') as f:
4889 f.write(src)
4890 elif rel[:6] == 'cloud:':
4891@@ -327,6 +462,9 @@
4892 'liberty': 'trusty-updates/liberty',
4893 'liberty/updates': 'trusty-updates/liberty',
4894 'liberty/proposed': 'trusty-proposed/liberty',
4895+ 'mitaka': 'trusty-updates/mitaka',
4896+ 'mitaka/updates': 'trusty-updates/mitaka',
4897+ 'mitaka/proposed': 'trusty-proposed/mitaka',
4898 }
4899
4900 try:
4901@@ -392,9 +530,18 @@
4902 import apt_pkg as apt
4903 src = config('openstack-origin')
4904 cur_vers = get_os_version_package(package)
4905- available_vers = get_os_version_install_source(src)
4906+ if "swift" in package:
4907+ codename = get_os_codename_install_source(src)
4908+ avail_vers = get_os_version_codename_swift(codename)
4909+ else:
4910+ avail_vers = get_os_version_install_source(src)
4911 apt.init()
4912- return apt.version_compare(available_vers, cur_vers) == 1
4913+ if "swift" in package:
4914+ major_cur_vers = cur_vers.split('.', 1)[0]
4915+ major_avail_vers = avail_vers.split('.', 1)[0]
4916+ major_diff = apt.version_compare(major_avail_vers, major_cur_vers)
4917+ return avail_vers > cur_vers and (major_diff == 1 or major_diff == 0)
4918+ return apt.version_compare(avail_vers, cur_vers) == 1
4919
4920
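For swift the upgrade check is now laxer: any newer version within the same major release, or one major release ahead, counts as upgradable. A sketch of that branch, substituting a plain comparison for `apt_pkg.version_compare` (an assumption; apt_pkg is not importable outside a unit):

```python
def swift_upgrade_available(cur_vers, avail_vers):
    """Mirror of the swift branch above, without apt_pkg."""
    major_cur = int(cur_vers.split('.', 1)[0])
    major_avail = int(avail_vers.split('.', 1)[0])
    major_diff = (major_avail > major_cur) - (major_avail < major_cur)
    # newer version AND at most one major release ahead, never behind
    return avail_vers > cur_vers and major_diff in (0, 1)

assert swift_upgrade_available('2.2.2', '2.3.0')       # kilo -> liberty
assert swift_upgrade_available('2.5.0', '2.7.0')       # within mitaka
assert not swift_upgrade_available('2.7.0', '2.5.0')   # downgrade
```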
4921 def ensure_block_device(block_device):
4922@@ -469,6 +616,12 @@
4923 relation_prefix=None):
4924 hosts = get_ipv6_addr(dynamic_only=False)
4925
4926+ if config('vip'):
4927+ vips = config('vip').split()
4928+ for vip in vips:
4929+ if vip and is_ipv6(vip):
4930+ hosts.append(vip)
4931+
4932 kwargs = {'database': database,
4933 'username': database_user,
4934 'hostname': json.dumps(hosts)}
4935@@ -517,7 +670,7 @@
4936 return yaml.load(projects_yaml)
4937
4938
4939-def git_clone_and_install(projects_yaml, core_project, depth=1):
4940+def git_clone_and_install(projects_yaml, core_project):
4941 """
4942 Clone/install all specified OpenStack repositories.
4943
4944@@ -567,6 +720,9 @@
4945 for p in projects['repositories']:
4946 repo = p['repository']
4947 branch = p['branch']
4948+ depth = '1'
4949+ if 'depth' in p.keys():
4950+ depth = p['depth']
4951 if p['name'] == 'requirements':
4952 repo_dir = _git_clone_and_install_single(repo, branch, depth,
4953 parent_dir, http_proxy,
4954@@ -611,19 +767,14 @@
4955 """
4956 Clone and install a single git repository.
4957 """
4958- dest_dir = os.path.join(parent_dir, os.path.basename(repo))
4959-
4960 if not os.path.exists(parent_dir):
4961 juju_log('Directory already exists at {}. '
4962 'No need to create directory.'.format(parent_dir))
4963 os.mkdir(parent_dir)
4964
4965- if not os.path.exists(dest_dir):
4966- juju_log('Cloning git repo: {}, branch: {}'.format(repo, branch))
4967- repo_dir = install_remote(repo, dest=parent_dir, branch=branch,
4968- depth=depth)
4969- else:
4970- repo_dir = dest_dir
4971+ juju_log('Cloning git repo: {}, branch: {}'.format(repo, branch))
4972+ repo_dir = install_remote(
4973+ repo, dest=parent_dir, branch=branch, depth=depth)
4974
4975 venv = os.path.join(parent_dir, 'venv')
4976
4977@@ -704,3 +855,721 @@
4978 return projects[key]
4979
4980 return None
4981+
4982+
4983+def os_workload_status(configs, required_interfaces, charm_func=None):
4984+ """
4985+ Decorator to set workload status based on complete contexts
4986+ """
4987+ def wrap(f):
4988+ @wraps(f)
4989+ def wrapped_f(*args, **kwargs):
4990+ # Run the original function first
4991+ f(*args, **kwargs)
4992+ # Set workload status now that contexts have been
4993+ # acted on
4994+ set_os_workload_status(configs, required_interfaces, charm_func)
4995+ return wrapped_f
4996+ return wrap
4997+
4998+
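The decorator above follows a simple run-then-report pattern: execute the wrapped hook, then derive workload status from the now-updated contexts. A generic, self-contained sketch of the same shape (names are illustrative):

```python
from functools import wraps

def run_then_report(report):
    """Run the decorated function, then invoke report() --
    the same shape as os_workload_status above."""
    def wrap(f):
        @wraps(f)
        def wrapped_f(*args, **kwargs):
            f(*args, **kwargs)
            report()  # e.g. set_os_workload_status(configs, interfaces)
        return wrapped_f
    return wrap

calls = []

@run_then_report(lambda: calls.append('set-status'))
def config_changed():
    calls.append('hook-body')

config_changed()
assert calls == ['hook-body', 'set-status']  # status set after the hook runs
```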
4999+def set_os_workload_status(configs, required_interfaces, charm_func=None,
5000+ services=None, ports=None):
The diff has been truncated for viewing.
